This reverts commit 0696b0ef78.
Okay, now finally, let's get this straight: we actually *want*
preferLocalBuild, *because* we have improved the source split-up in
c92dbffeac.
The idea is to use local builds in order to prevent the source from
being pushed to a remote machine, split up there (and thus copied
again) and then being copied *again* FROM the remote machine.
"DOH!" - as @edolstra or @rbvermaa would call it... and good d^Hnight.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This reverts commit 26f024626c.
I actually didn't read the "remove" in the commit message, so sorry
for the brainfart/noise.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This reverts commit fdb5cf8107.
The reason I'm reverting this is that the impact it had on the I/O
load of Hydra is fixed by c92dbffeac.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
So far we've done the source code split-up by using the generic
unpackPhase and copying everything over into the different outputs.
However, this had the problem of generating I/O load of about three
times the size of the source tree: first at fetchurl of the tarball
(although not quite as much, because it's compressed), second at
unpackPhase and third at installPhase.
Now we don't use installPhase anymore and directly unpack into the
output paths, which unfortunately is quite a bit more complex, because
we need to transform the paths inside the tar file on the fly.
I've also tried using GNU tar's --to-command option to untar *and*
patch at the same time, but forking for every single file in the
tarball gets REALLY slow and also even more complex than this
two-stage approach, because you need to make sure that the patch file
is applied correctly, for example for files that don't exist yet but
are to be created by the patch file.
We're using --anchored and --no-wildcards-match-slash here to prevent
accidentally excluding files we don't want to exclude. One example is
something like v8/tools/gyp/v8.gyp.
So the current approach is some compromise between complexity and speed
and should hopefully get rid of the Hydra build timeouts by lowering I/O
load.
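To illustrate, a sketch of the approach (the URL, hash and output
layout are made up, not the real expression):

  { stdenv, fetchurl }:

  stdenv.mkDerivation {
    name = "chromium-source-split";

    src = fetchurl {
      url = "https://example.org/chromium-42.0.2305.3.tar.xz";
      # Placeholder hash; Nix prints the real one on the first build.
      sha256 = "0000000000000000000000000000000000000000000000000000";
    };

    outputs = [ "out" "sandbox" ];

    buildCommand = ''
      mkdir -p "$out" "$sandbox"

      # Unpack straight into $out, rewriting the leading path component
      # on the fly instead of copying again in installPhase. --anchored
      # and --no-wildcards-match-slash pin the exclude pattern to the
      # top-level sandbox directory, so files like v8/tools/gyp/v8.gyp
      # can't be excluded by accident.
      tar -xf "$src" -C "$out" \
        --anchored --no-wildcards-match-slash \
        --transform 's,^[^/]*,.,' \
        --exclude '*/sandbox'

      # The sandbox sources go into the second output by an analogous
      # invocation (elided here).
    '';
  }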
See here for examples of builds having this issue:

http://hydra.nixos.org/build/19045023
http://hydra.nixos.org/build/19044973
http://hydra.nixos.org/build/19044968
http://hydra.nixos.org/build/19045019
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Overview of the updated versions:
stable: 40.0.2214.91 -> 40.0.2214.115
beta: 41.0.2272.16 -> 41.0.2272.64
dev: 41.0.2272.16 -> 42.0.2305.3
Introduces 42.0.2305.3 as the new dev version, which no longer requires
our user namespaces sandbox patch. Thanks to everyone participating in
https://crbug.com/312380 for finally getting this upstream.
In the course of supporting the official namespace sandbox (that's what
the user namespace sandbox is called), a few things needed to be fixed
for version 42:
* Add an updated nix_plugin_paths.patch, because the old one tries to
  patch the path for libpdf, which is now natively included in
  Chromium.
* Don't copy libpdf.so to the libexec path for version 42; it's no
  longer needed, as PDF support is now completely built-in (see the
  sketch after this list).
* Disable the SUID sandbox directly in the source instead of going the
  easy route of passing --disable-setuid-sandbox, because with the
  command line flag a nasty nagbar appears.
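The libpdf step can be gated on the version roughly like this (a
sketch using the versionOlder helper from nixpkgs; it assumes a
version variable in scope, and the destination path is illustrative):

  installPhase = ''
    mkdir -p $out/libexec/chromium
  '' + stdenv.lib.optionalString (stdenv.lib.versionOlder version "42") ''
    cp libpdf.so $out/libexec/chromium/
  '';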
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This patch adds the luakit browser. It has to be built using lua5.1; I
tried 5.2, but I couldn't run luakit due to a runtime error with it.
It also uses gtk3 here; override it to use gtk2, which should work as
well (see the sketch below).
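The override should look roughly like this (a sketch, assuming the
expression takes gtk as an argument):

  luakit.override { gtk = gtk2; }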
Suggested-by: Benno Fünfstück <benno.fuenfstueck@gmail.com>
We're propagating the plugin flags by importing them from another Nix
expression file, which in turn exports the Nix path to the wrapper.
The consequence is that the plugin's store path is never referenced
inside the wrapper itself and thus isn't picked up when the wrapper
script is scanned for references (only paths already referenced at
build time are).
So let's add the activated plugins to the buildInputs of the wrapper.
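Schematically, the fix amounts to something like this (names such as
enabledPlugins are illustrative):

  stdenv.mkDerivation {
    name = "chromium-wrapper";

    # The plugin paths only reach the wrapper as interpolated strings
    # from another file, so the reference scanner never sees them as
    # build-time dependencies. Listing the plugin derivations here
    # turns them into real inputs, so their store paths are retained.
    buildInputs = enabledPlugins;

    buildCommand = ''
      mkdir -p $out/bin
      # ... emit the wrapper script with the plugin flags here ...
    '';
  }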
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This brings a new stable version 40.0.2214.91 along with a beta update
to version 41.0.2272.16; the dev channel is still stuck at version
41.0.2272.12 and will jump to version 42 within the next few days.
For this reason, I've done some cheating here and brought the dev
channel on par with the beta channel, because dev is currently older
than beta on OmahaProxy.
Here's an overview of the channel upgrades:
stable: 39.0.2171.65 -> 40.0.2214.91 [1]
beta: 40.0.2214.10 -> 41.0.2272.16 [1] [2] [3]
dev: 41.0.2224.3 -> 41.0.2272.16 [1] [2] [3]
[1]: We needed to patch in the locations of lib{pci,udev}.so, because
     Chromium tries to load them at runtime. For version 41, startup
     will fail if it is unable to load libudev, but the patch also has
     the advantage of fixing GPU detection using libpci in the stable
     version, which in turn could fix a few bugs on NixOS (a sketch
     follows below).
[2]: The upstream Debian package for the binary plugins now uses XZ
     compression for the enclosed data tarball.
[3]: Chromium 41 needs {snapshot,natives}_blob.bin in order to start
     up, so let's copy them along with the .pak files to avoid adding
     a conditional for version 40.
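The patching mentioned in [1] boils down to replacing the bare library
name with an absolute store path, along these lines (a sketch; the
source file and the replaced string are illustrative, not the real
locations):

  postPatch = ''
    substituteInPlace device/udev_linux/udev_loader.cc \
      --replace '"libudev.so.1"' '"${udev}/lib/libudev.so.1"'
  '';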
The release announcement of the stable channel update can be found here:
http://googlechromereleases.blogspot.de/2015/01/stable-update.html
Note that this release contains 62 security fixes(!), and I hereby
apologize for the delay of this update.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Writing the gid_map is already non-fatal, but the actual sandbox
process still tries to setresgid() to nogroup (usually 65534). This,
however, fails, because with user namespace sandboxing in place the
namespace doesn't have CAP_SETGID at this point.
Fortunately, the effective GID is already 65534, so we just need to
check whether the target gid matches and only(!) setresgid() if it
doesn't.
So if someone were to run a SUID version of the sandbox, it would
still work, without a negative impact on security.
Fixes #5730, thanks to @wizeman for reporting and initial debugging.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is more of an attempt than a real fix (or maybe it is one? let's
see) for the corrupted .pyc files during the build. I believe we get
these because several instances of the Python interpreter run in
parallel, and one of those processes might still be writing a .pyc
file while another reads it.
So, rather than deleting all .pyc files, we now precompile them in
order to avoid any build process trying to generate a .pyc file on the
fly.
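A minimal sketch of the idea (the choice of hook is illustrative):

  postPatch = ''
    # Compile every module once, up front; afterwards no interpreter
    # started during the (parallel) build has to write a .pyc file.
    # Some files may target a different Python version, so don't fail
    # the build on compile errors.
    python -m compileall -q . || true
  '';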
Signed-off-by: aszlig <aszlig@redmoonstudios.org>