- services.iodined moved to services.iodine
- configuration file remains backwards compatible
- old iodine server configuration moved to services.iodine.server
- attribute set services.iodine.clients added to specify any number
of iodine clients
- example:
iodine.clients.home = { server = "iodinesubdomain.yourserver.com"; ... };
- client services are named iodine-name, where name would be home in the
  example above
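For illustration, a fuller client definition along these lines might look
as follows (the passwordFile option and its value are assumptions, not
taken from the module):

    services.iodine.clients.home = {
      server = "iodinesubdomain.yourserver.com";
      passwordFile = "/run/keys/iodine-password";
    };

This would yield a systemd service named iodine-home.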
Systemd 229 sets kernel.core_pattern to "|/bin/false" by default,
unless systemd-coredump is enabled. Revert to the default of
writing "core" in the current directory.
This reverts commit e8e8164f348a0e8655e1d50a7a404bdc62055f4e. I
misread the original commit as adding the "which" package, but it only
adds it to base.nix. So then the original motivation (making it work
in subshells) doesn't hold. Note that we already have some convenience
aliases that don't work in subshells either (such as "ll").
Previously, the cisco resolver was used on the theory that it would
provide the best user experience regardless of location. The downsides
of cisco are 1) logging; 2) missing support for DNS security extensions.
The new upstream resolver is located in Holland, supports DNS security,
and *claims* to not log activity. For users outside of Europe, this will
mean reduced performance, but I believe it's a worthy tradeoff.
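Users who disagree with this tradeoff can switch back; a minimal sketch,
assuming the resolver choice is exposed as a `resolverName` option (the
option name is an assumption):

    # select a different upstream resolver from the resolver list
    services.dnscrypt-proxy.resolverName = "cisco";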
The hydra user is already pinned; this is needed due to
https://github.com/NixOS/nixpkgs/issues/14148
(cherry picked from commit 0858ece1ad0bd281d2332c40f9fd08005e04a3c5)
Signed-off-by: Domen Kožar <domen@dev.si>
When iodined tries to start before any interface other than loopback has
an IP address, iodined fails.
Wait for ip-up.target.
This happens for the following reason:
In iodined's code (src/common.c, line 157), the AI_ADDRCONFIG flag is
passed to getaddrinfo.
Iodine uses the function

    get_addr(char *host, int port, int addr_family, int flags,
             struct sockaddr_storage *out);

to get address information via getaddrinfo().
Within get_addr, the AI_ADDRCONFIG flag is forced.
This flag causes getaddrinfo to explicitly fail with
"Name or service not known" when no IP address has been
assigned to the machine; see getaddrinfo(3).
Wait for an IP address before starting iodined.
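A minimal sketch of what this means for the unit, using the usual NixOS
systemd options (the exact dependency set is an assumption):

    systemd.services.iodined = {
      # do not start until some interface has been assigned an address
      wants = [ "ip-up.target" ];
      after = [ "ip-up.target" ];
    };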
Fixes #12794 by reverting the source tree split-up (c92dbff), so that the
source tarball is used directly in the main Chromium derivation and the
whole source/ subdirectory becomes obsolete. The reasons for this are
explained in 4f981b4f84.
This also now renames the "sources.nix" file to "upstream-info.nix",
which is a more proper name for the file, because it not only contains
"source code" but also the Chrome binaries needed for the proprietary
plugins (of course "source" could also mean "where to get it", but I
wanted to avoid this ambiguity entirely).
I have successfully built and tested this using the VM tests.
All results can be found here:
https://headcounter.org/hydra/eval/313435
Assigning the channelMap from the function attrset argument at the
top level of the test expression file may reference a different
architecture than the one we need for the tests.
So if we get the pkgs attribute by auto-calling, the tests can fail
because the test runs on a different architecture than the browser.
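A minimal sketch of the idea (the nixpkgs path and the channel attribute
names are assumptions):

    { system ? builtins.currentSystem, ... }:

    let
      # import nixpkgs for the test's own system, so that the browser
      # and the test are built for the same architecture
      pkgs = import ../../.. { inherit system; };
      channelMap = {
        stable = pkgs.chromium;
        beta   = pkgs.chromiumBeta;
        dev    = pkgs.chromiumDev;
      };
    in channelMap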
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This was the case before e45c211, but it turns out that it's very
useful to override the channel packages so we can run tests with
different Chromium build options.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
* the major change is to set TARGETDIR=${vardir}, and symlink from
  ${vardir} back to ${out} instead of the other way around. This
  gives CrashPlan more liberty to write to more directories -- in
  particular, it seems to want to write some configuration files
  outside of conf?
* run.conf does not need 'export'
* minor tweaks to CrashPlanDesktop.patch
The docker service is socket-activated by default; thus,
`waitForUnit("docker.service")` before any docker command causes the
unit test to time out.
Instead, do `waitForUnit("sockets.target")` to ensure that the sockets
are set up before running docker commands.
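In the test script this amounts to something like the following sketch
(embedded in the test expression; the machine name is an assumption):

    testScript = ''
      # wait until the socket units are in place, then exercise the
      # client, which triggers activation of docker.service
      $machine->waitForUnit("sockets.target");
      $machine->succeed("docker ps");
    '';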
GnuPG 2.1.x changed the way the gpg-agent works, and that new approach no
longer requires (or even supports) the "start everything as a child of the
agent" scheme we've implemented in NixOS for older versions.
To configure the gpg-agent for your X session, add the following code to
~/.xsession or some other appropriate place that's sourced at start-up:
gpg-connect-agent /bye
GPG_TTY=$(tty)
export GPG_TTY
If you want to use gpg-agent for SSH, too, also add the settings
unset SSH_AGENT_PID
export SSH_AUTH_SOCK="${HOME}/.gnupg/S.gpg-agent.ssh"
and make sure that
enable-ssh-support
is included in your ~/.gnupg/gpg-agent.conf.
The gpg-agent(1) man page has more details about this subject, e.g. in the
"EXAMPLES" section.
This patch fixes https://github.com/NixOS/nixpkgs/issues/12927.
It would be great to configure good rate-limiting defaults for this via
/proc/sys/net/ipv4/icmp_ratelimit and /proc/sys/net/ipv6/icmp/ratelimit,
too, but I didn't since I don't know what a "good default" would be.
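For anyone who wants to experiment, such defaults could be expressed as
sysctls like the following (the values are placeholders, not
recommendations):

    boot.kernel.sysctl = {
      "net.ipv4.icmp_ratelimit" = 1000;  # placeholder value
      "net.ipv6.icmp.ratelimit" = 1000;  # placeholder value
    };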
Some users may wish to improve their privacy by using per-query
key pairs, which makes it more difficult for upstream resolvers to
track users across IP addresses.
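A minimal sketch of opting in, assuming the option is exposed as
`ephemeralKeys` (the option name is an assumption):

    # generate a fresh key pair for every DNS query
    services.dnscrypt-proxy.ephemeralKeys = true;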
- fix `enable` option description
using `mkEnableOption longDescription` is incorrect; override
`description` instead
- additional details for proper usage of the service, including
  an example of the recommended configuration (see the sketch
  after this list)
- clarify `localAddress` option description
- clarify `localPort` option description
- clarify `customResolver` option description
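As a sketch of the kind of recommended configuration meant above, pairing
dnscrypt-proxy with a local caching resolver (the dnsmasq pairing and the
port are assumptions):

    # run dnscrypt-proxy on a non-standard local port and put a
    # caching resolver such as dnsmasq in front of it
    services.dnscrypt-proxy = {
      enable = true;
      localPort = 43;
    };
    services.dnsmasq = {
      enable = true;
      servers = [ "127.0.0.1#43" ];
    };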
Probably not many people care about i686-linux any more, but building
all these images is fairly expensive (e.g. in the worst case, every
Nixpkgs commit would trigger a few gigabytes of uploads to S3).
Previously this was done in three derivations (one to build the raw
disk image, one to convert to OVA, one to add a hydra-build-products
file). Now it's done in one step to reduce the amount of copying
to/from S3. In particular, not uploading the raw disk image prevents
us from hitting hydra-queue-runner's size limit of 2 GiB.