The motivation for this change is that gnu-netcat, for example, does not
support IPv6, so it is not suitable as the default netcat.
This commit also fixes all obvious build issues caused by this change.
The test complains[1][2] that
Failed to start message bus: Failed to bind socket "/run/dbus/system_bus_socket": No such file or directory
In 639e5401ff, the dbus socket dir is set
to `/run/dbus`; in the test vm `/var/run/dbus` is used, but the standard
`/run -> /var/run` link is typically not created until stage 2 init, which the
minimal init used here never reaches. Thus, dbus fails to run within the test
environment. Fix this by changing `/var/run/dbus` to simply `/run/dbus`.
[1]: https://hydra.nixos.org/build/42534725
[2]: https://hydra.nixos.org/build/42523834
Since 97bfc2fac9, runCommand doesn't
include a compiler anymore. So let's switch to the new runCommandCC,
which resembles the old state.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The initial commit accidentally left in some commented-out code, so if you were
using alerts, they simply did not work.
Smokeping also includes some JavaScript code for the web UI that allows zooming
into graphs, but it was not being copied into the home directory. Additionally,
generate static HTML pages so that other webservers can serve the cache directory.
Add options to specify the sendmail path or the mailhost, and verify that both
are not set at the same time.
Add one extra config hook that allows you to bypass all of the individual config
stanzas and just hand it a string.
The tracking now works with aggregated devices stacked on aggregated devices.
For example, a container with a physical device, where that device is put into
a bond that in turn forms the basis of a bridge, is now handled correctly.
Test that adding physical devices to containers works; it turns out that network
setup then fails because there is no udev in the container to tell systemd
that the device is present.
Fixed by not depending on the device inside the container.
Activate the new container test for release
Bonds, bridges and other network devices do not need the underlying device as a
dependency when used inside the container, because the device is already there.
The address configuration, however, does need the aggregated device itself.
This ensures that most "trivial" derivations used to build NixOS
configurations no longer depend on GCC. For commands that do invoke
gcc, there is runCommandCC.
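A minimal sketch of the difference, assuming a trivial C program (the file name
and contents are purely illustrative):

````
# runCommand no longer provides a compiler; use runCommandCC when gcc is needed.
runCommandCC "trivial-c-program" {} ''
  echo 'int main(void) { return 0; }' > trivial.c
  gcc trivial.c -o $out
''
````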
* influxdb module: add postStart
* cadvisor module: increase TimeoutStartSec
Under high load, the cadvisor module can take longer than the default 90
seconds to start. This change should hopefully fix the test on Hydra.
This introduces VirtualBox version 5.1.6 along with some refactoring, notably:
* Kernel modules and user space applications are now separate
derivations.
* If config.pulseaudio doesn't exist in nixpkgs config, the default is
now to build with PulseAudio modules.
* A new updater to keep VirtualBox up to date.
All subtests in nixos/tests/virtualbox.nix succeed on my machine and
VirtualBox was reported to be working by @DamienCassou (although with
unrelated audio problems for another fix/branch) and @calbrecht.
One reason it took me so long to debug the test failure with
systemd-detect-virt was that simple-cli succeeded while the former did not.
This now makes sure we have consistency across all the subtests, and if
problems like the one in the previous commit ever show up again, only the
headless test will succeed, making it more obvious where the actual problem
resides.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We don't have (simulated) sound hardware within the qemu VM, nor do we have it
available within the VirtualBox instance running inside the qemu VMs.
With sound hardware enabled, the VirtualBox UI displays an error dialog, which
in turn causes the VM process to hang on unregister. This has caused the tests
to fail with the following error:
Cannot unregister the machine '...' while it is locked
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Using waitUntilSucceeds for testing whether the shutdown signalling
files have vanished is quite noisy because it prints two lines for every
try. This is now fixed with a while loop on the guest VM that performs the same
check but produces only one line of output for the command being executed and
another when the condition is met.
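A sketch of the idea as it would appear in the test script; the signalling file
path used here is hypothetical:

````
testScript = ''
  # Poll on the guest until the (hypothetical) signalling file has vanished,
  # producing far less log output than waitUntilSucceeds.
  $machine->succeed("while test -e /tmp/shutdown-signal; do sleep 1; done");
'';
````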
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
- logDriver option, use journald for logging by default
- keep storage driver intact by default, as docker has sane defaults
- do not choose storage driver in tests, docker will choose by itself
- use the dockerd binary, as the "docker daemon" command is deprecated and will
  be removed
- add overlay2 to list of storage drivers
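A minimal sketch of a host configuration using the options named above:

````
virtualisation.docker = {
  enable = true;
  logDriver = "journald";
  # storageDriver is deliberately left unset so dockerd picks its own sane default
};
````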
modify tests to not fail if the event handlers are
registered too slowly or if the wrong window is in focus
(cherry picked from commit e087b0d12f97604ee1fdd09ef3d78b772c12468e)
Signed-off-by: Domen Kožar <domen@dev.si>
The loopback-based tests use a storage size of 102400 blocks (one block
is 1024 bytes), which no longer seems to be enough for btrfs volumes in recent
btrfs versions. I'm setting this to 409600 (400 MB) now so that it
should be enough for later versions in case they need even more space
for subvolumes.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Follow-up to the following commits:
abdc5961c3: Fix starting the firewall
e090701e2d: Order before sysinit
Solely use sysinit.target here instead of multi-user.target because we
want to make sure that the iptables rules are applied *before* any
socket units are started.
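A sketch of the unit ordering this establishes (simplified):

````
systemd.services.firewall = {
  before   = [ "sysinit.target" ];
  wantedBy = [ "sysinit.target" ];  # multi-user.target already pulls in sysinit.target
};
````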
The reason I've dropped the wantedBy on multi-user.target is that
sysinit.target is already a part of the dependency chain of
multi-user.target.
To make sure that this holds true, I've added a small test case to
ensure that during switch of the configuration the firewall.service is
considered as well.
Tested using the firewall NixOS test.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @edolstra
Fixes #15512 and #16032
With multiple outputs, postgresql cannot determine its basedir at runtime when
looking for libdir and pkglibdir. This commit fixes that.
- Agent now takes a full URL to the Go.CD server
- Instruct the agent to attempt restart every 30s upon failure
- Test's Accept header did not match the server's expectation
- Replace the tests' complex Awk matches with calls to `jq`
Update the gocd-agent package version to 16.6.0-3590, including the new sha.
Modify the heapSize and maxMemory mkOptions to accurately reflect their intended
purpose of configuring the initial Java heap sizes.
The module will configure a Cassandra server with common options being
tweakable. Included is also a test which will spin up 3 nodes and
verify that the cluster can be formed, broken, and repaired.
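A hedged sketch of enabling the module on one node; the option names here are
assumptions rather than verified against the module:

````
services.cassandra = {
  enable = true;
  clusterName = "test-cluster";  # assumed option name
};
````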
With these changes, a container can have more than one veth pair. This allows, for example, having LAN and DMZ as bridges on the host and adding dedicated containers for proxies, the IPv4 firewall and the IPv6 firewall. Or having a bridge for normal WAN, one bridge for administration and one bridge for customer-internal communication, so that web-server containers can be reached from the outside via HTTP, from the management network via SSH, and can talk to their database via the customer network.
The scripts to set up the containers are now rendered per container instead of once from a single template. The scripts now contain per-container code to configure the extra veth interfaces. The default template without support for extra veths is still rendered for the imperative containers.
A test is also included to check whether extra veths can be placed into host bridges or reached via routing.
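A sketch of a container using an extra veth pair next to its primary one (bridge
names and addresses are illustrative):

````
containers.webproxy = {
  privateNetwork = true;
  hostBridge = "br-lan";
  localAddress = "192.168.1.10/24";
  # Second veth pair, attached to a separate DMZ bridge on the host:
  extraVeths.dmz = {
    hostBridge = "br-dmz";
    localAddress = "10.0.0.10/24";
  };
};
````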
GoCD is an open source continuous delivery server specializing in advanced workflow
modeling and visualization. Update maintainers list to include swarren83. Update
module list to include gocd agent and server module. Update packages list to include
gocd agent and server package. Update version, revision and checksum for GoCD
release 16.5.0.
The LUKS passphrase prompt has changed from "Enter passphrase" to "Enter
LUKS Passphrase" in c69c76ca7e, so the OCR
detection of the test fails indefinitely.
Unfortunately, this doesn't fix the test because we have a real problem
here:
Enter LUKS Passphrase:
killall: cryptsetup: no process killed
Enter LUKS Passphrase:
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @abbradar
I've failed to figure out why `paxtest blackhat` hangs the VM, and have
resigned myself to running individual `paxtest` programs. This provides
limited coverage, but at least verifies that some important features are
in fact working.
Ideas for future work include a subtest for basic desktop functionality.
IceWM is not part of KDE 5 and is no longer part of the test. The KDE 5
applications Dolphin, System Monitor, and System Settings are started in this
test.
VBoxService needs dbus in order to work properly, but dbus has so far failed to
start up because it was searching for its configuration files in
/run/current-system/sw.
We now no longer run with the --system flag but specify the
configuration file directly instead.
This fixes at least the "simple-gui" test and probably the others as
well, which I haven't tested yet.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We can't use waitForWindow here because it runs xwininfo as user root,
who in turn is not authorized to connect to the X server running as
alice.
So instead, we use xprop from user alice which should fix waiting for
the VirtualBox manager window.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The VirtualBox tests so far ran the X server as root instead of user
"alice" and it did work, because we had access control turned off by
default.
Fortunately, this was changed in 1541fa351b.
As a side effect, all the VirtualBox tests now fail because they can no longer
connect to the X server, which is a good thing because it exposes a bug in the
VirtualBox tests.
So to fix it, let's just start the X server as user alice.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This allows setting options for the same LUKS device in different
modules. For example, the auto-generated hardware-configuration.nix
can contain
boot.initrd.luks.devices.crypted.device = "/dev/disk/...";
while configuration.nix can add
boot.initrd.luks.devices.crypted.allowDiscards = true;
Also updated the examples/docs to use /dev/disk/by-uuid instead of
/dev/sda, since we shouldn't promote the use of the latter.
@edolstra pointed out that the kernel module might be painful to maintain. I
strongly disagree, because it's only a small module and it's good to have such
a canary in the tests no matter what the bootup process looks like, so I'm
going the masochistic route and will try to maintain it.
If it *really* becomes too much maintenance burden, we can still drop or
disable kcanary.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We already have a small regression test for #15226 within the swraid
installer test. Unfortunately, we only check there whether the md
kthread got signalled but not whether other rampaging processes are
still alive that *should* have been killed.
So in order to do this we provide multiple canary processes which are
checked after the system has booted up:
* canary1: It's a simple forking daemon which just sleeps until it's
going to be killed. Of course we expect this process to not
be alive anymore after boot up.
* canary2: Similar to canary1, but tries to mimic a kthread to make
sure that it's going to be properly killed at the end of
stage 1.
* canary3: Like canary2, but this time using a @ in front of its
command name to actually prevent it from being killed.
* kcanary: This one is a real kthread and it runs until killed, which
shouldn't happen.
Tested with and without 67223ee and everything works as expected, at
least on my machine.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is a regression test for #15226, so that the test will fail once we
accidentally kill one or more of the md kthreads (aka: if safe mode is
enabled).
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Just removing the system argument because it doesn't exist (it's
actually config.nixpkgs.system, which we're already using). We won't get
an error anyway if we're not actually using it, so this is just an
aesthetics fix.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Make sure that we always have everything available within the store of
the VM, so let's evaluate/build the test container fully on the host
system and propagate all dependencies to the VM.
This way, even if there are additional default dependencies that come
with containers in the future we should be on the safe side as these
dependencies should now be included for the test as well.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @kampfschlaefer, @edolstra
This partially reverts f2d24b9840.
Instead of disabling the channels via removing the channel mapping from
the tests themselves, let's just explicitly reference the stable test in
release.nix. That way it's still possible to run the beta and dev tests
via something like "nix-build nixos/tests/chromium.nix -A beta" and
achieve the same effect of not building beta and dev versions on Hydra.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
It's not the job of Nixpkgs to distribute beta versions of upstream
packages. More importantly, building these delays channel updates by
several hours, which is bad for our security fix turnaround time.
Regression introduced by dfe608c8a2.
The commit turns the two arguments into one attrset argument so we need
to adapt that to use the new calling convention.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The Nix store squashfs is stored inside the initrd instead of separately
(cherry picked from commit 976fd407796877b538c470d3a5253ad3e1f7bc68)
Signed-off-by: Domen Kožar <domen@dev.si>
Two fixes:
Not really sure why removing `--fail` from the curl calls is necessary,
but with that option, curl erroneously reports 404 (which it shouldn't,
per my interactive vm testing).
Fix paths to example files used for the printing test
Together, these changes allow the test to run to completion on my machine.
This adds a Taskserver module along with documentation and a small
helper tool which eases managing a custom CA along with Taskserver
organisations, users and groups.
Taskserver is the server component of Taskwarrior, a TODO list
application for the command line.
The work has been started by @matthiasbeyer back in mid 2015 and I have
continued to work on it recently, so this merge contains commits from
both of us.
Thanks particularly to @nbp and @matthiasbeyer for reviewing and
suggesting improvements.
I've tested this with the new test (nixos/tests/taskserver.nix) this
branch adds and it fails because of the changes introduced by the
closure-size branch, so we need to do additional work on base of this.
Coreutils is multi-output and the `info` output doesn't seem to be
included on the install disk, failing like this (because now nix-env
wants to build coreutils):
````
machine# these derivations will be built:
machine# /nix/store/0jk4wzg11sa6cqyw8g7w5lb35axji969-bison-3.0.4.tar.gz.drv
...
machine# /nix/store/ybjgqwxx63l8cj1s7b8axx09wz06kxbv-coreutils-8.25.drv
machine# building path(s) ‘/nix/store/4xvdi5740vq8vlsi48lik3saz0v5jsx0-coreutils-8.25.tar.xz’
machine# downloading ‘http://ftpmirror.gnu.org/coreutils/coreutils-8.25.tar.xz’...
machine# error: unable to download ‘http://ftpmirror.gnu.org/coreutils/coreutils-8.25.tar.xz’: Couldn't resolve host name (6)
machine# builder for ‘/nix/store/5j3bc5sjr6271fnjh9gk9hrid8kgbpx3-coreutils-8.25.tar.xz.drv’ failed with exit code 1
machine# cannot build derivation ‘/nix/store/ybjgqwxx63l8cj1s7b8axx09wz06kxbv-coreutils-8.25.drv’: 1 dependencies couldn't be built
machine# error: build of ‘/nix/store/ybjgqwxx63l8cj1s7b8axx09wz06kxbv-coreutils-8.25.drv’ failed
````
Try to match the subcommands to act more like the subcommands from the
taskd binary and also add a subcommand to list groups.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
As suggested by @matthiasbeyer:
"We might add a short note that this port has to be opened in the
firewall, or is this done by the service automatically?"
This commit now adds the listenPort to
networking.firewall.allowedTCPPorts as soon as the listenHost is not
"localhost".
In addition to that, this is now also documented in the listenHost
option declaration and I have removed disabling of the firewall from the
VM test.
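A sketch of the documented behaviour, where a non-"localhost" listenHost causes
listenPort to be opened in the firewall:

````
services.taskserver = {
  enable = true;
  fqdn = "taskserver.example.org";
  listenHost = "::";     # not "localhost", so listenPort ends up in
  listenPort = 53589;    # networking.firewall.allowedTCPPorts automatically
};
````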
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Whenever the nixos-taskserver tool was invoked manually for creating an
organisation/group/user we now add an empty file called .imperative to
the data directory.
During the preStart of the Taskserver service, we use process-json, which
in turn now checks whether those .imperative files exist and, if so, doesn't
do anything with them.
This should now ensure that whenever there is a manually created user,
it doesn't get killed off by the declarative configuration in case it
shouldn't exist within that configuration.
In addition, we also add a small subtest to check whether this is
happening or not and fail if the imperatively created user got deleted
by process-json.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We were putting the whole output of "nixos-taskserver export-user" from
the server to the respective client and on every such operation the
whole output was shown again in the test log.
Now we're *only* showing these details whenever a user import fails on
the client.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Now we finally can delete organisations, groups and users along with
certificate revocation. The new subtests now make sure that the client
certificate is also revoked (both when removing the whole organisation
and just a single user).
If we use the imperative way to add and delete users, we have to restart
the Taskserver in order for the CRL to be effective.
However, by using the declarative configuration we now get this for
free, because removing a user will also restart the service and thus its
client certificate will end up in the CRL.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
It's not necessarily related to the PKI options, because this is also
used for setting the server address on the Taskwarrior client.
So if someone doesn't have his/her own certificates from another CA, all
options that need to be adjusted are in .pki. And if someone doesn't
want to bother with getting certificates from another CA, (s)he just
doesn't set anything in .pki.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
After moving out the PKI-unrelated options, let's name this a bit more
appropriately, so we can finally get rid of the taskserver.server thing.
This also moves taskserver.caCert to taskserver.pki.caCert, because that
clearly belongs to the PKI options.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Having an option called services.taskserver.server.host is quite
confusing because we already have "server" in the service name, so let's
first get rid of the listening options before we rename the rest of the
options in that .server attribute.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
As the nixos-taskserver command can also be used to imperatively manage
users, we need to test this as well.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This module adds an option `security.hideProcessInformation` that, when
enabled, restricts access to process information such as command-line
arguments to the process owner. The module adds a static group "proc"
whose members are exempt from process information hiding.
Ideally, this feature would be implemented by simply adding the
appropriate mount options to `fileSystems."/proc".fsOptions`, but this
was found to not work in vmtests. To ensure that process information
hiding is enforced, we use a systemd service unit that remounts `/proc`
after `systemd-remount-fs.service` has completed.
To verify the correctness of the feature, simple tests were added to
nixos/tests/misc: the test ensures that unprivileged users cannot see
process information owned by another user, while members of "proc" CAN.
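A minimal sketch of enabling the feature; the monitoring user is illustrative:

````
security.hideProcessInformation = true;
# Members of the static "proc" group are exempt from the hiding:
users.extraUsers.monitoring.extraGroups = [ "proc" ];
````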
Thanks to @abbradar for feedback and suggestions.
Using nixos-taskserver is more verbose but less cryptic, and I think it fits
the purpose better because it can't be mistaken for a wrapper around the
taskdctl command from the upstream project, as nixos-taskserver shares no
commonalities with it.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
A small test which checks whether tasks can be synced using the
Taskserver.
It doesn't test group functionality because I suspect that it's not yet
implemented upstream. I haven't done an in-depth check on that, but I couldn't
find a method of linking groups to users yet, so I guess this will get in with
one of the next releases of Taskwarrior/Taskserver.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
A testcase each for
- declarative ipv6-only container
Seems odd to define the container IPs with their prefix length attached.
There should be a better way…
- declarative bridged container
Also fix the ping test by waiting for the container to start
When the ping was executed, the container might not have finished starting, or
the host side of the container wasn't finished with its configuration. Waiting
for 2 seconds in between fixes this.
I had a basic version of this lying around for a while but didn't continue
working on it. Originally it was for testing support for the Neo layout
introduced back then (8cd6d53).
We only test the first three Neo layers, because the last three layers
are largely comprised of special characters and in addition to that the
support for the VT keymap seems to be limited compared to the Xorg
keymap.
Yesterday @NicolasPetton on IRC had trouble with the Colemak layout
(IRC logs: http://nixos.org/irc/logs/log.20160330, starting at 16:08)
and I found that test again, so I went about improving it and adding it to
<nixpkgs>.
While the original problem seemed to be related to GDM, we can still add
another subtest that checks whether GDM correctly applies the keyboard
layout. However I don't have a clue how to properly configure the
keyboard layout on GDM, at least not within the NixOS configuration.
The main goal of this test is not to test a complete set of all key
mappings but to check whether the keymap is loaded and working at all.
It also serves as an example for NixOS keyboard configurations.
The list of keyboard layouts is by no means complete, so everybody is
free to add their own to the test or improve the existing ones.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We now generate a qcow2 image to prevent hitting Hydra's output size
limit. Also updated /root/user-data -> /etc/ec2-metadata/user-data.
http://hydra.nixos.org/build/33843133
These two steps seem to fail intermittently with exit code 1. It isn't clear to me why, or what the issue is. Adding the `--verbose` option, hoping to capture some debugging information which might aid stabilization. Also: I was unable to replicate the failure locally.
Assigning the channelMap by the function attrset argument at the
top-level of the test expression file may reference a different
architecture than we need for the tests.
So if we get the pkgs attribute by auto-calling, this will lead to test
failure because we have a different architecture for the test than for
the browser.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This has been the case before e45c211, but it turns out that it's very
useful to override the channel packages so we can run tests with
different Chromium build options.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The docker service is socket activated by default; thus,
`waitForUnit("docker.service")` before any docker command causes the
unit test to time out.
Instead, do `waitForUnit("sockets.target")` to ensure that the sockets are
set up before running docker commands.
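A sketch of the resulting test-script fragment (the node name $docker is an
assumption):

````
testScript = ''
  # Wait for socket activation to be set up rather than for docker.service itself:
  $docker->waitForUnit("sockets.target");
  $docker->succeed("docker ps");
'';
````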