This change introduces the cargoLock argument to buildRustPackage,
which can be used in place of cargo{Sha256,Hash} or cargoVendorDir. It
uses the importCargoLock function to build the vendor
directory. Differences compared to cargo{Sha256,Hash}:
- Requires a Cargo.lock file.
- Does not require a Cargo hash.
- Retrieves all dependencies as fixed-output derivations.
This makes buildRustPackage much easier to use as part of a Rust
project, since it does not require updating cargo{Sha256,Hash} for
every change to the lock file.
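
A minimal sketch of the new argument (package details are illustrative):

with import <nixpkgs> {};
rustPlatform.buildRustPackage {
  pname = "myproject";  # hypothetical package
  version = "0.1.0";
  src = ./.;
  # Instead of cargo{Sha256,Hash}, point at the lock file; every
  # dependency is then fetched as its own fixed-output derivation.
  cargoLock.lockFile = ./Cargo.lock;
}
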
This function can be used to create an output path that is a cargo
vendor directory. In contrast to e.g. fetchCargoTarball, all the
dependent crates are fetched using fixed-output derivations. The
hashes for the fixed-output derivations are gathered from the
Cargo.lock file.
Usage is very simple, e.g.:
importCargoLock {
  lockFile = ./Cargo.lock;
}
would use the lockfile from the current directory.
The implementation of this function is based on Eelco Dolstra's
import-cargo:
https://github.com/edolstra/import-cargo/blob/master/flake.nix
Compared to upstream:
- We use fetchgit in place of builtins.fetchGit.
- Sync to current cargo vendoring.
Adds includeStorePaths, allowing the omission of the store paths.
You generally want to leave it on, but tooling may disable this
to insert the store paths more efficiently via other means, such
as bind mounting the host store.
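
For example, a sketch assuming the flag lives on dockerTools.streamLayeredImage
(the image name and contents are illustrative):

with import <nixpkgs> {};
dockerTools.streamLayeredImage {
  name = "app";
  contents = [ hello ];
  # Omit the store paths from the generated tarball; the consumer is
  # expected to provide them by other means, e.g. a bind-mounted host store.
  includeStorePaths = false;
}
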
Add a small utility script which securely replaces secrets in
files. Doing this with `sed`, `replace-literal` or similar utilities
leaks the secrets through the spawned process' `/proc/<pid>/cmdline` file.
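
A sketch of the intended use (the binary name, the argument order
<string> <secret-file> <file>, and all paths are assumptions):

with import <nixpkgs> {};
writeShellScript "install-app-config" ''
  install -m 0600 ${./app.conf.in} /etc/app/app.conf
  # The secret is read from a file, so it never appears in
  # /proc/<pid>/cmdline.
  ${replace-secret}/bin/replace-secret '@dbpass@' /run/keys/dbpass /etc/app/app.conf
''
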
> There is an issue in the test added by #123111.
> [it] introduces a dependency on the contents of nixpkgs,
> making every change evaluate with a different hash.
Previously, mangleVarList was used, which concatenates the
variables using a space as a separator. Paths, however, are separated
by `:` in PKG_CONFIG_PATH, so entries ended up broken.
This is fixed by introducing mangleVarListGeneric which allows us to
specify the desired separator.
Reproducer for the issue prior to this change:
$ nix-shell -A pkgsLLVM.wayland
[nix-shell] $ pkg-config --libs expat
Package expat was not found in the pkg-config search path.
Perhaps you should add the directory containing `expat.pc'
to the PKG_CONFIG_PATH environment variable
No package 'expat' found
$ printf 'Host: %s\nBuild: %s' $PKG_CONFIG_PATH $PKG_CONFIG_PATH_FOR_BUILD
Host: /nix/store/5h308a4ab8w7prcp8iflh5pnl78mayi2-expat-2.2.10-x86_64-unknown-linux-gnu-dev/lib/pkgconfig:/nix/store/z3y9ska2h4l1map25m195iq577g7g3gz-libxml2-x86_64-unknown-linux-gnu-2.9.12-dev/lib/pkgconfig:/nix/store/lbz5m1s0r7zn0cxvl21czfspli6ribzb-zlib-1.2.11-x86_64-unknown-linux-gnu-dev/lib/pkgconfig:/nix/store/rfhvp8r8n3ygpzh8j0l34lk8hwwi3z0h-libffi-3.3-x86_64-unknown-linux-gnu-dev/lib/pkgconfig
Build: /nix/store/dw11ywy7qwfz53qisz0dggbgix88jah2-wayland-1.19.0-bin/lib/pkgconfig
strace reveals the issue:
stat("/nix/store/dw11ywy7qwfz53qisz0dggbgix88jah2-wayland-1.19.0-bin/lib/pkgconfig /nix/store/5h308a4ab8w7prcp8iflh5pnl78mayi2-expat-2.2.10-x86_64-unknown-linux-gnu-dev/lib/pkgconfig/expat-uninstalled.pc", 0x7fff49829fa0) = -1 ENOENT (No such file or directory)
In the pkg-config wrapper, $PKG_CONFIG_PATH_FOR_BUILD and
$PKG_CONFIG_PATH are concatenated with a space, which mangles the two
path entries at the join point (as the strace output shows). This issue
likely only affects native cross compilation.
This will begin the process of breaking up the `useLLVM` monolith. That
is good in general, but I hope will be good for NetBSD and Darwin in
particular.
Co-authored-by: sterni <sternenseemann@systemli.org>
The distinction between the inputs doesn't really make sense in the
mkShell context. Technically speaking, we should be using
nativeBuildInputs most of the time.
So in order to make this function more beginner-friendly, add "packages"
as an attribute that maps to nativeBuildInputs.
This commit also updates all the uses in nixpkgs.
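
A sketch of the resulting interface:

with import <nixpkgs> {};
mkShell {
  # "packages" is the beginner-friendly spelling; under the hood it
  # maps to nativeBuildInputs.
  packages = [ hello python3 ];
}
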
This PR adds a new aarch64 android toolchain, which leverages the
existing crossSystem infrastructure and LLVM builders to generate a
working toolchain with minimal prebuilt components.
The only thing that is prebuilt is the bionic libc. This is because it
is practically impossible to compile bionic outside of an AOSP tree. I
tried and failed; braver souls may prevail. For now I just grab the
relevant binaries from https://android.googlesource.com/.
I also grab the msm kernel sources from there to generate headers. I've
included a minor patch to the existing kernel-headers derivation in
order to expose an internal function.
Everything else, from binutils up, is using stock code. Many thanks to
@Ericson2314 for his help on this, and for building such a powerful
system in the first place!
One motivation for this is to be able to build a toolchain which will
work on an aarch64 linux machine. To my knowledge, there is no existing
toolchain for an aarch64-linux builder and an aarch64-android target.
Also begin work on cross compilation, though that will have to be
finished later.
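
A sketch of how the result would be used, assuming the new target ends
up exposed under pkgsCross as aarch64-android (the attribute name is an
assumption):

with import <nixpkgs> {};
# Cross-build GNU hello for aarch64 Android; everything except the
# prebuilt bionic libc is compiled from source via the LLVM builders.
pkgsCross.aarch64-android.hello
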
The patches are based on the first version of
https://reviews.llvm.org/D99484. It's very annoying to do the
back-porting but the review has uncovered nothing super major so I'm
fine sticking with what I've got.
Beyond making the outputs work, I also strove to re-sync the packages,
as they have been drifting pointlessly apart for some time.
----
Other misc notes, highly incomplete
- llvm-config-native and llvm-config are put in `dev` because they are
tools just for build time.
- Clang no longer has an lld dep. That was introduced in
  db29857eb3, but if clang needs help
  finding lld when it is used, we should just pass it flags or put it
  in the resource dir. Providing it at build time increases critical
  path length for no good reason.
----
A note on `nativeCC`:
`stdenv` takes tools from the previous stage, so:
1. `pkgsBuildBuild`: `(?1, x, x)`
2. `pkgsBuildBuild.stdenv.cc`: `(?0, ?1, x)`
while:
1. `pkgsBuildBuild`: `(?1, x, x)`
2. `pkgsBuildBuild.targetPackages`: `(x, x, ?2)`
3. `pkgsBuildBuild.targetPackages.stdenv.cc`: `(?1, x, x)`
In a typical build environment the toolchain will use the value of the
MACOSX_DEPLOYMENT_TARGET environment variable to determine the version
of macOS to support. When cross compiling there are two distinct
toolchains, but they will look at this single environment variable. To
avoid contamination, we always set the equivalent command line flag
which effectively disables the toolchain's internal handling.
Prior to this change, the MACOSX_DEPLOYMENT_TARGET variable was
ignored, and the toolchains always used the Nix platform
definition (`darwinMinVersion`) unless overridden with command line
arguments.
This change restores support for MACOSX_DEPLOYMENT_TARGET, and adds
nix-specific MACOSX_DEPLOYMENT_TARGET_FOR_BUILD and
MACOSX_DEPLOYMENT_TARGET_FOR_TARGET for cross compilation.
Instead of always supplying flags, apply the flags as defaults. Use
clang's native flags instead of lifting the linker flags from binutils
with `-Wl,`.
If a project is using clang to drive linking, make clang do the right
thing with MACOSX_DEPLOYMENT_TARGET. This can be overridden by command
line arguments. This will cause modern clang to pass
`-platform_version macos 10.12 0.0.0`, since it doesn't know about the
SDK settings. Older versions of clang will pass down `-macos_version_min`
flags with no SDK version.
At the linker layer, apply a default value for anything left
ambiguous. If nothing is specified, pass a full
`-platform_version`. If only `-macos_version_min` is specified, then
lock down the SDK version explicitly with `-sdk_version`. If both a min
version and an SDK version are passed, do nothing.
The `docker load` command supports loading tarballs that contain
multiple docker images with their respective image names and tags. This
enables distributing these images as a single file which simplifies the
release of software when an application requires multiple services to
run.
However, pkgs.dockerTools only creates tarballs with a single docker
image, and there is no mechanism in nixpkgs to combine the created
tarballs. This commit implements merging of tarballs in a way that is
compatible with `docker load`.
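
A sketch of the intended use, assuming the merge entry point is exposed
as dockerTools.mergeImages and takes a list of image tarballs:

with import <nixpkgs> {};
# Combine two single-image tarballs into one archive that `docker load`
# accepts, preserving each image's name and tag.
dockerTools.mergeImages [
  (dockerTools.buildImage { name = "app"; tag = "latest"; contents = [ hello ]; })
  (dockerTools.buildImage { name = "worker"; tag = "latest"; contents = [ coreutils ]; })
]
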
Since 03eaa48 added perl.withPackages, there is a canonical way to
create a perl interpreter from a list of libraries, for use in script
shebangs or generic build inputs. This method is declarative (what we
are doing is clear), produces short shebangs[1] and does not need to
wrap existing scripts.
Unfortunately there are a few exceptions that I've found:
1. Scripts that call perl with the -T switch. This makes perl
ignore PERL5LIB, which is what perl.withPackages uses to inform
the interpreter of the library paths.
2. Perl packages that depend on libraries in their own path.
perl.withPackages cannot handle this because it works at build time. The
workaround is to add `-I $out/${perl.libPrefix}` to the shebang.
In all other cases I propose to switch to perl.withPackages.
[1]: https://lwn.net/Articles/779997/
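
For reference, the canonical pattern looks like this (the module choice
is illustrative):

with import <nixpkgs> {};
# An interpreter that can `use JSON;` via PERL5LIB, suitable for
# shebangs or as a generic build input.
perl.withPackages (ps: [ ps.JSON ])
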
Without this fix, I can no longer build anything with releaseTools.nixBuild {}. A job typically fails with:
$ nix-build release.nix -A build.basic.x86_64-linux --show-trace
error: while evaluating the attribute 'lib' of the derivation 'libnixxml-0.1pre1234' at /home/sander/teststuff/nixpkgs/pkgs/build-support/release/nix-build.nix:89:5:
cannot coerce a set to a string, at /home/sander/teststuff/nixpkgs/pkgs/build-support/release/nix-build.nix:89:5
This is caused by the fact that `lib' is propagated as a parameter, which is a function. Functions cannot be converted to strings.
For images running on Kubernetes, there is no guarantee on how duplicate
environment variables in the image config will be handled. This seems
to be different from Docker, where the last environment variable value
is consistently selected.
The current code for `streamLayeredImage` was exploiting that assumption
to easily propagate environment variables from the base image, leaving
duplicates unchecked. It should rather resolve these duplicates to
ensure consistent behavior on Docker and Kubernetes.
It is now possible to pass a `fromImage` to `buildLayeredImage` and
`streamLayeredImage`, similar to what `buildImage` currently supports.
This will prepend the layers of the given base image to the resulting
image, while ensuring that at most `maxLayers` are used. It will also
ensure that environment variables from the base image are propagated
to the final image.
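
A sketch combining both (the base image here is built inline, but any
image tarball, e.g. one from dockerTools.pullImage, should work):

with import <nixpkgs> {};
let
  base = dockerTools.buildImage {
    name = "base";
    tag = "latest";
    contents = [ bashInteractive ];
  };
in dockerTools.buildLayeredImage {
  name = "app";
  tag = "latest";
  # Layers and environment variables of the base image are carried
  # over, while keeping the total layer count within maxLayers.
  fromImage = base;
  contents = [ hello ];
}
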
The check for including the C++ standard library headers was nested inside the
check for linking with the C++ standard library. As a result, the `-nostdlib`
flag incorrectly implied `-nostdinc++`, which made it virtually impossible to
partially link C++ objects.
runCommandWith receives an attribute set of options (which previously
were positional arguments of runCommand') and a buildCommand. This
allows overriding the stdenv used freely (so stuff like
llvmPackages.stdenv can be used). Additionally, the possibility to
change arguments passed to stdenv.mkDerivation is made more explicit
via the derivationArgs argument.
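
A sketch of the new interface (the derivationArgs content is
illustrative):

with import <nixpkgs> {};
runCommandWith {
  name = "example";
  # Freely choose the stdenv instead of being tied to the default one.
  stdenv = llvmPackages.stdenv;
  # Extra arguments for stdenv.mkDerivation are passed explicitly.
  derivationArgs = { allowSubstitutes = false; };
} ''
  echo hello > $out
''
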
Previously it was awkward to use the runCommand variants with
passAsFile, as a double definition of passAsFile would potentially
break runCommand: one definition would overwrite the other,
defeating the purpose of setting it in runCommand in the first place.
This is now fixed by concatenating the [ "buildCommand" ] list with
the one from env, if present.
Adjust buildEnv, where passAsFile = null; was passed in some cases,
breaking evaluation since it'd evaluate to [ "buildCommand" ] ++ null.
Commit df4761 added a call to readlink, which fails if readlink is not
in the user's PATH when run. Updated the readlink call to pull from the
coreutils store path directly.
When using `buildLayeredImage`, it is not possible to specify an image
name of the form `<registry>/my/image`, although it is a valid name.
This is due to derivations under `buildLayeredImage` using that image
name as their derivation name, but slashes are not permitted in that
context.
A while ago, #13099 fixed that exact same problem in `buildImage` by
using `baseNameOf name` in derivation names instead of `name`. This
change does the same thing for `buildLayeredImage`.
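
With the fix, names like this work as expected (a sketch; the name and
contents are illustrative):

with import <nixpkgs> {};
dockerTools.buildLayeredImage {
  # Slashes are fine in the image name; only its baseNameOf is used
  # in derivation names.
  name = "registry.example.com/my/image";
  tag = "latest";
  contents = [ hello ];
}
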
`stream_layered_image.py` currently assumes that the store root will be
at `/nix/store`, although the user might have configured this
differently. This makes `buildLayeredImage` unusable with stores having
a different root, as they will fail an assertion in the python script.
This change updates that assertion to use `builtins.storeDir` as the
source of truth about where the store lives, instead of assuming
`/nix/store`.
- This is the first package which uses Dune in order to build and install,
so I had to refactor build-support/coq/default.nix in order to support it.
- I added a new feature: one can now leave release.v.sha256 empty to try
to download with a fake sha256 (sketched below), so failures are reported
and one can copy-paste the sha256 given by the error message.
- I updated the documentation of languages-frameworks/coq.section.md accordingly.
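
A sketch of the fake-sha256 trick, assuming the mkCoqDerivation entry
point and its fields (pname, owner and version are hypothetical):

with import <nixpkgs> {};
coqPackages.mkCoqDerivation {
  pname = "somepkg";        # hypothetical
  owner = "someorg";        # hypothetical GitHub owner
  defaultVersion = "1.0.0";
  # sha256 left out: the fetch runs with a fake hash, fails, and the
  # error message reports the real sha256 to copy-paste back here.
  release."1.0.0" = { };
}
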
Fixes build failures with clang:
clang-7: error: unknown argument: '-fPIC -target'
clang-7: error: no such file or directory: '@<(printf %q\n -O2'
clang-7: error: no such file or directory: 'x86_64-apple-darwin'
Introduced by 60c5cf9cea in #112449
Since #112276, we should always put `makeWrapper` in
`nativeBuildInputs`. But `buildEnv` was saying put it in `buildInputs`.
That's wrong!
Fix the instructions, and make the right thing possible.
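
The corrected pattern, sketched (assuming buildEnv now forwards
nativeBuildInputs; the wrapped package and flags are illustrative):

with import <nixpkgs> {};
buildEnv {
  name = "wrapped-hello";
  paths = [ hello ];
  # makeWrapper is a build-time tool, so it belongs in nativeBuildInputs.
  nativeBuildInputs = [ makeWrapper ];
  postBuild = ''
    wrapProgram $out/bin/hello --set GREETING hi
  '';
}
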
The `checkType` argument of buildRustPackage was not used anymore
since the refactoring of `buildRustPackage` into hooks. This was
an oversight that is fixed by this change.
The check type can also be passed directly to cargoCheckHook using the
`cargoCheckType` environment variable.
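
A sketch of the Nix-side spelling (package details are illustrative):

with import <nixpkgs> {};
rustPlatform.buildRustPackage {
  pname = "example";   # hypothetical
  version = "0.1.0";
  src = ./.;
  cargoLock.lockFile = ./Cargo.lock;
  # Test the debug profile instead of release; this ends up in the
  # cargoCheckType variable consumed by cargoCheckHook.
  checkType = "debug";
}
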
This change makes the wrapper script avoid displaying echo area messages
during startup. This helps prevent split-second UI glitches early in the
startup process. The messages themselves will still be logged and
therefore will not hamper inspection for debugging purposes.
Preserve top-level symlinks such as /lib -> /usr/lib.
This allows nested containers such as Steam's new runtime to remount
/usr if they need to and then run unmodified binaries that reference
e.g. /lib/ld-linux-x86-64.so.2
Before, we would mount the fully resolved host directory at /lib and
thus the dynamic loader would always be the one from the host filesystem.