Docker images used to be, essentially, a linked list of layers. Each
layer would have a tarball and a json document pointing to its parent,
and the image pointed to the top layer:
    imageA ----> layerA
                   |
                   v
                 layerB
                   |
                   v
                 layerC
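In that format, each layer's json carried its own ID plus a pointer to
its parent, roughly like this (IDs shortened; real ones are long hex
digests):

    {
      "id": "layerB",
      "parent": "layerC",
      ...
    }

The bottom layer simply omits the "parent" field.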
The current image spec changed this format so that the image defines
the order and the set of layers:
    imageA ---> layerA
       |------> layerB
       `------> layerC
For backwards compatibility, docker produces images which follow both
specs: layers point to parents, and images also point to the entire
list:
    imageA ---> layerA
       |          |
       |          v
       |------> layerB
       |          |
       |          v
       `------> layerC
This is nice for tooling that supported the older version and was never
updated to support the newer format.
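Concretely, the manifest in an image produced by `docker save` lists
the config and the full, base-first layer order, roughly like this
(names shortened for illustration):

    [
      {
        "Config": "imageA.json",
        "RepoTags": [ "imageA:latest" ],
        "Layers": [
          "layerC/layer.tar",
          "layerB/layer.tar",
          "layerA/layer.tar"
        ]
      }
    ]

while each layer directory still contains a json with the old "parent"
pointer.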
Our `buildImage` code only supported the old format, so for
`buildImage` to properly generate an image based on another image via
`fromImage`, the parent image's layers had to fully support the old
mechanism.
This is not a problem in general, but is a problem with
`buildLayeredImage`.
`buildLayeredImage` creates images with the newer image spec, because
individual store paths don't have a guaranteed parent layer. Including
a specific parent ID in a layer's json makes the output less likely
to get a cache hit when published or pulled.
This means that, until now, the output of `buildLayeredImage` could not
be used as the input to `buildImage`.
This PR changes `buildImage` to use only the image's manifest when
locating parent IDs. This does break `buildImage` on extremely old
Docker images, though I do wonder how many of those exist.
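Concretely, this kind of composition now works; a minimal sketch (names
and contents are placeholders):

    with import <nixpkgs> {};

    dockerTools.buildImage {
      name = "app";
      tag = "latest";
      # Using a layered image as the base used to fail, because its
      # layers carry no parent IDs; buildImage now reads the manifest.
      fromImage = dockerTools.buildLayeredImage {
        name = "base";
        tag = "latest";
        contents = [ bashInteractive coreutils ];
      };
      config.Cmd = [ "${bashInteractive}/bin/bash" ];
    }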
This work has been sponsored by Target.
This fixes an impurity in nix-index: previously it would take the nix-env
binary from the user's PATH. I discovered this while trying to run nix-index
in a systemd service, which by default doesn't have nix-env in its PATH. The
errors it threw were not informative at all, and it took me hours to
finally figure out the cause.
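The usual fix for this class of impurity is to make the dependency
explicit at build time instead of resolving it from PATH at run time.
A sketch of one such approach in nixpkgs (not necessarily the exact
patch applied here):

    with import <nixpkgs> {};

    nix-index.overrideAttrs (old: {
      nativeBuildInputs = (old.nativeBuildInputs or [ ]) ++ [ makeWrapper ];
      # Put nix-env on the wrapped program's PATH so it no longer
      # depends on the caller's environment.
      postInstall = (old.postInstall or "") + ''
        wrapProgram $out/bin/nix-index \
          --prefix PATH : ${lib.makeBinPath [ nix ]}
      '';
    })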
The install.sh script looks for every perl in $PATH, executes each to
test whether that perl is "good", and, if one is, puts it into the
shebang.
This obviously can't work when cross-compiling. As the installation is
trivial, do it in a custom install phase instead.
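A sketch of what that looks like (the package and script names are
hypothetical):

    with import <nixpkgs> {};

    stdenv.mkDerivation {
      pname = "foo";    # hypothetical
      version = "1.0";
      src = ./.;        # wherever install.sh and foo.pl live
      dontBuild = true;
      installPhase = ''
        runHook preInstall
        # Bypass ./install.sh, which probes $PATH for a runnable perl
        # and therefore breaks when cross-compiling.
        install -Dm755 foo.pl $out/bin/foo
        substituteInPlace $out/bin/foo \
          --replace "/usr/bin/perl" "${perl}/bin/perl"
        runHook postInstall
      '';
    }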
HTTPS is never worse and often better than FTP, since it's faster, more secure,
and more likely to be accessible through firewalls.
This does not change the tarball sha, as confirmed by `nix-prefetch-url`.
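For example, the change in a package expression is just the URL scheme
(the URL shown is illustrative):

    with import <nixpkgs> {};

    fetchurl {
      # was: url = "ftp://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz";
      url = "https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz";
      sha256 = "..."; # unchanged: the server delivers the same bytes
    }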