There was a bunch of material in the cross-compilation section that hadn't had
any attention in a while. It might need to be slimmed down later, but this is
good for now.
$(shell ...) looks a little sketchy, as if it will be run no matter what.
There are also problems building the manual on Darwin, so hopefully this
fixes them.
The function buildGoModule builds Go programs managed with Go modules. It builds
a Go module through a two-phase build:
- An intermediate fetcher derivation. This derivation will be used to
fetch all of the dependencies of the Go module.
- A final derivation that uses the output of the intermediate derivation
  to build the binaries and produce the final output (see the sketch below).
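A minimal sketch of a package using it; the package details are placeholders,
and the exact attribute name for the dependency hash (modSha256 here) is an
assumption that has varied across nixpkgs versions:

  example = pkgs.buildGoModule rec {
    pname = "example";                # hypothetical package
    version = "1.0.0";

    src = pkgs.fetchFromGitHub {
      owner = "example-org";          # placeholder source
      repo = "example";
      rev = "v${version}";
      sha256 = pkgs.lib.fakeSha256;   # replace with the real source hash
    };

    # Hash of the dependencies fetched by the intermediate derivation;
    # the attribute name is an assumption for this sketch.
    modSha256 = pkgs.lib.fakeSha256;
  };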
Older hardware in particular doesn't support AVX instructions. DLib is
still functional there, but significantly slower [1].
By setting `avxInstructions` to false, DLib will be compiled without
this feature.
[1] http://dlib.net/compile.html
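A minimal sketch of how a user might disable it, assuming the flag is exposed
as an ordinary override argument:

  dlibWithoutAvx = pkgs.dlib.override { avxInstructions = false; };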
Whenever we create scripts that are installed to $out, we must use runtimeShell
in order to get the shell that can be executed on the machine we create the
package for. This is relevant for cross-compiling. The only use case for
stdenv.shell is scripts that are executed as part of the build system.
Usages in checkPhase are borderline; however, to decrease the likelihood
of people copying the wrong examples, I decided to use runtimeShell there as well.
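A minimal sketch of the pattern for an installed script (the script name is
hypothetical, and runtimeShell is assumed to be available from the package-set
arguments):

  postFixup = ''
    # The installed helper must run on the platform the package is built
    # *for*, so its shebang should use runtimeShell, not stdenv.shell.
    substituteInPlace $out/bin/example-helper.sh \
      --replace "/bin/sh" "${runtimeShell}"
  '';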
The appimageTools attrset contains utilities meant to avoid using
appimage-run to package AppImages, as was done/attempted
in #49370 and #53156.
This has the advantage of allowing per-package environment changes,
and of extracting into the store instead of the user's home directory.
The package list was extracted into appimageTools to prevent
duplication.
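A minimal sketch of packaging an AppImage this way, assuming a type-2 AppImage
and the wrapType2 helper; the name, URL, and extra packages are placeholders:

  example = pkgs.appimageTools.wrapType2 {
    name = "example";
    src = pkgs.fetchurl {
      url = "https://example.org/Example.AppImage";  # placeholder
      sha256 = pkgs.lib.fakeSha256;                  # replace with the real hash
    };
    # Per-package environment changes go here instead of into appimage-run:
    extraPkgs = pkgs: [ pkgs.at-spi2-core ];
  };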
Since #53055 was merged, the Makefile for the manual could not be run
correctly, as the generated function documentation was included but
not actually generated.
This adds the necessary generation step by first building the XML file
containing function locations and preserving its store path in a
variable, which is then used both for linking of the locations file
and as a build input for the function docs generator.
This fixes #55014
Comments on conflicts:
- llvm: d6f401e1 vs. 469ecc70 - docs for 6 and 7 say the default is
to build all targets, so we should be fine
- some PyPI hashes: they were equivalent, just base16 vs. base32
This is useful when running tools like NixOps or nix-review
on workstations where the upload to the builder is significantly
slower than downloading the source on the builder itself.
We can't run the checkPhase when build != host, so we may as well make
the checkInputs native.
This significantly improves the situation for Python packages when enabling
strictDeps.
Currently the manual scales to the viewport of the browser.
This leads to an unreadable layout, and I found myself
reading the XML source instead.
The optimal width would be around 50 characters per line.
Since the manual also contains code listings, I relaxed
this limit a bit, towards 70 characters per line.
Modifies the build process of the manual to invoke nixdoc
automatically to generate XML files with function documentation.
Currently documentation is present for five of the files in `lib/`.
To add another file to the generated docs, both
`doc/functions/library.xml` and `doc/lib-function-docs.nix` must be
updated.
Since Intel's default openmp implementation is available in the same src
tarball, we can just include it in the package. This means that `mkl` now "just
works" without any environment variables, fragile setup-hooks, or forced
propagation.
Since the openmp implementation is only needed at runtime (and for test cases),
users can substitute a different one if they prefer by exporting it with
`LD_PRELOAD`, which is how Intel recommends handling this. If they do not do so,
`libiomp.so` lives next to `libmkl_rt.so` and thus will be in the RPATH as a
sane default.
Since this still comes from the same src tarball, we can ship it without losing
the fixed-output derivation; likewise, since Hydra is not building or caching
these, shipping these proprietary packages costs no bandwidth for the nix
community.
To make updating large attribute sets faster, the update scripts
are now run in parallel.
Please note the following changes in semantics:
- The string passed to updateScript needs to be a path to an executable file.
- The updateScript can also be a list: the tail elements will then be passed
  to the head as command-line arguments (see the sketch below).
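A minimal sketch of the list form in a package; the script path and arguments
are hypothetical:

  passthru.updateScript = [ ./update.sh "--channel" "stable" ];

Here ./update.sh is the executable (the head) and the remaining elements are
passed to it as command-line arguments (the tail).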
Encourage putting container elements on their own lines to minimize
diffs and merge conflicts, and to make re-ordering easier.
Nix doesn't suffer from the restrictions of other languages where commas are
used to separate list items.
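For example (illustrative attribute values), instead of

  buildInputs = [ foo bar baz ];

prefer

  buildInputs = [
    foo
    bar
    baz
  ];

Adding or removing an element then touches only a single line.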
First of all, this makes the existing documentation a bit clearer about
what autoPatchelfHook is all about, because after discussing with
@svanderburg - who wrote a similar implementation - it turned out the
rationale behind autoPatchelfHook wasn't very clear in the documentation.
I also added the recent changes around being able to use autoPatchelf
manually and the new --no-recurse flag.
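A minimal sketch of the manual usage (the target directory is illustrative):

  postFixup = ''
    # Patch only the ELF files directly in $out/lib, without recursing
    # into subdirectories.
    autoPatchelf --no-recurse $out/lib
  '';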
Signed-off-by: aszlig <aszlig@nix.build>
It's incorrect (preferLocalBuild does not prevent uploading to binary
caches) and is not a stdenv attribute (it's already documented in the
Nix manual).
Python 3.4 will receive its final patch release in March 2019 and there won't
be any releases anymore after that, so also none during NixOS 2019.03.
Python 3.4 is no longer used in Nixpkgs. In any case, migrating code from
3.4 to newer versions is trivial.
This commit renames the pythondaemon module to match its module name, GitHub
name, and PyPI name, which makes it easier to find and reference. In order to
avoid breaking any external users, I've left an alias with a deprecation warning.
Rationale
---------
Currently, tests are hard to discover. For instance, someone updating
`dovecot` might not notice that the interaction of `dovecot` with
`opensmtpd` is handled in the `opensmtpd.nix` test.
And even for someone updating `opensmtpd`, it requires manual work to go
check in `nixos/tests` whether there is actually a test, especially
given that not many packages in `nixpkgs` have tests, so this check is most
of the time fruitless.
Finally, for the reviewer, it is much easier to check that the “Tested
via one or more NixOS test(s)” box has been checked if the modified file
already includes the list of relevant tests.
Implementation
--------------
Currently, this commit only adds the metadata to the package. Each
element of the `meta.tests` attribute is a derivation that, when it
builds successfully, means the test has passed (i.e. following the same
convention as NixOS tests).
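A minimal sketch of what this could look like in a package, assuming the
relevant NixOS test is exposed as nixosTests.opensmtpd:

  meta.tests = {
    # Building this derivation successfully means the test passed.
    opensmtpd-interaction = nixosTests.opensmtpd;
  };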
Future Work
-----------
In the future, the tools could be made aware of this `meta.tests`
attribute: for instance, a `--with-tests` flag could be added to
`nix-build` so that it also builds all the tests, or a `--without-tests`
flag to build without them. @Profpatsch described such systems in his
NixCon talk.
Another thing that would help in the future would be the ability to
have cross-derivation Nix tests reasonably easily, without the whole
NixOS VM stack. @7c6f434c has already proposed such a system.
This RFC currently handles none of these concerns; it only adds
`meta.tests` as metadata to be used by maintainers to remember to run
relevant tests.
The `name` arg of `vim_configurable.customize` not only determines
the package name, but also the name of the command/executable to be
called.
In my opinion this is not documented properly, and finding that out took
me several hours.
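A minimal sketch (the chosen name and vimrc are illustrative):

  myVim = pkgs.vim_configurable.customize {
    # This is both the package name and the executable name:
    # the result provides a `vim-with-plugins` command, not `vim`.
    name = "vim-with-plugins";
    vimrcConfig.customRC = ''
      set number
    '';
  };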
Allows for adding Perl libraries in the same way as for Python. It doesn't
really need to be a function, since there's only one perlPackages in
nixpkgs, but I went for consistency with the Python plugin.
This touches up a handful of places in the Python documentation to try to make
the current best practices more obvious. In particular, I often find the
function signatures (what to pass, what not to pass) confusing and have added
them to the docs.
Also updated the meta attributes to be more consistent with the most frequently
used modern style.
Hydra passes the full revision into the input, which we pass through.
If we don't get this, we try to get it from other sources, or default to
master, which should have the definition in a close-ish location.
All published docs should have the URL resolve properly; only local
hackers will have the link break.
Create a many-layered Docker Image.
Implements much less than buildImage:
- Doesn't support specific uids/gids
- Doesn't support running commands after building
- Doesn't require qemu
- Doesn't create mutable copies of the files in the path
- Doesn't support parent images
If you want those features, I recommend using buildLayeredImage as an
input to buildImage.
Notably, it does support:
- Caching low-level, common paths based on a graph traversal
algorithm; see referencesByPopularity in
0a80233487993256e811f566b1c80a40394c03d6
- Configurable number of layers. If you're not using AUFS or not
extending the image, you can specify a larger number of layers at
build time:
      pkgs.dockerTools.buildLayeredImage {
        name = "hello";
        maxLayers = 128;
        config.Cmd = [ "${pkgs.gitFull}/bin/git" ];
      };
- Parallelized creation of the layers, improving build speed.
- The contents of the image include the closure of the configuration,
so you don't have to specify paths in contents and config.
With buildImage, paths referred to by the config were not included
automatically in the image. Thus, if you wanted to call Git, you
had to specify it twice:
      pkgs.dockerTools.buildImage {
        name = "hello";
        contents = [ pkgs.gitFull ];
        config.Cmd = [ "${pkgs.gitFull}/bin/git" ];
      };
buildLayeredImage, on the other hand, includes the runtime closure of
the config when calculating the contents of the image:
      pkgs.dockerTools.buildLayeredImage {
        name = "hello";
        config.Cmd = [ "${pkgs.gitFull}/bin/git" ];
      };
Minor Problems
- If any of the store paths change, every layer will be rebuilt in
the nix-build. However, beacuse the layers are bit-for-bit
reproducable, when these images are loaded in to Docker they will
match existing layers and not be imported or uploaded twice.
Common Questions
- Aren't Docker layers ordered?
No. People who have used a Dockerfile before assume Docker's
layers are inherently ordered. However, this is not true -- Docker
layers are content-addressable and are not explicitly layered until
they are composed into an image.
- What happens if I have more than maxLayers of store paths?
The first (maxLayers-2) most "popular" paths will have their own
individual layers, then layer #(maxLayers-1) will contain all the
remaining "unpopular" paths, and finally layer #(maxLayers) will
contain the Image configuration.