The README of nfs-utils explains that we must not notify clients
before nfsd is running; otherwise they may fail to reclaim their
locks. On the other hand, it is allowed, but not required, to start
"rpc.statd --no-notify" before nfsd. So for simplicity we do both
after starting nfsd.
It turns out that remote-fs-pre.target is not actually "wanted"
anywhere, so statd is not started before remote filesystems are
mounted. But remote filesystems do "want" network-online.target, so we
can use that to pull in statd and idmapd.
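In NixOS terms the idea is roughly the following (a sketch only; the
unit names are illustrative and not necessarily the ones the module
actually uses):

systemd.targets.network-online.wants = [ "statd.service" "idmapd.service" ];
systemd.services.statd.after = [ "nfsd.service" ];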
Not sure if this is really the right thing to do, but it works for
now. Background:
https://bugzilla.redhat.com/show_bug.cgi?id=787314
http://hydra.nixos.org/build/5542230
When nixos-rebuild grabs a new kernel, it will build new spl/zfs
modules, which will change the service. On completion NixOS will try
to restart the services, which will try to import the pools again and
generally fail.
The pools are already imported; we don't need to do it again.
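A sketch of the resulting guard (the pool and unit names are made up
for illustration):

systemd.services.zfs-import = {
  script = ''
    # A restart after nixos-rebuild must not re-import: skip the
    # import if the pool is already online.
    if ! zpool list rpool > /dev/null 2>&1; then
      zpool import rpool
    fi
  '';
};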
Just like in the MySQL service module, it really makes sense to provide
a way to inject SQL on the first start of the database cluster.
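"Database cluster" suggests PostgreSQL here, so hypothetical usage
could look like this (the option name initialScript is an assumption,
mirroring the MySQL module):

services.postgresql.initialScript = pkgs.writeText "init.sql" ''
  CREATE ROLE admin LOGIN PASSWORD 'changeme';
'';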
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This should integrate the logging more tightly into systemd, so that
for example "systemctl status mysql" actually gives an overview of
what's going on.
This removes the logError option attribute. In case you still want to
write to a log file, I've introduced an option called extraOptions, so
you can use something like:
services.mysql.extraOptions = ''
  log-error = /var/log/mysql_err.log
'';
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
GRUB uses mdadm to find out the device it is on, especially when GRUB
itself resides in a separate boot partition. When bootstrapping from a
NixOS installation CD, this is not a big issue, because the paths from
the installation CD's Nix store usually match those in the chrooted
environment.
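For illustration only (the actual change may differ), the requirement
boils down to having mdadm available in the environment grub-install
runs in, e.g.:

environment.systemPackages = [ pkgs.mdadm ];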
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This checks whether nixpart is able to mount the filesystems from
scratch again, using just the information provided by the kickstart
file.
I found an odd issue with findmnt here: it seems not to show /mnt/boot,
even though it _is_ mounted and even shows up in /proc/self/mountinfo.
I'm not quite sure whether this is a bug or I'm doing something wrong,
but it might need some investigation.
Mount points are checked by adding empty canary files, remounting, and
checking whether the same canaries still exist. If they don't, the
partitioner has either formatted the filesystem or simply not mounted
the device. Neither should happen, but that's why we're testing it,
no? :-)
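A minimal shell sketch of the canary check, wrapped in Nix (the device
and mount point are made up):

pkgs.writeScript "check-canary" ''
  #!${pkgs.stdenv.shell}
  touch /mnt/boot/.canary
  umount /mnt/boot
  mount /dev/vda1 /mnt/boot
  # A vanished canary means the partitioner reformatted the
  # filesystem or mounted the wrong device.
  test -e /mnt/boot/.canary
''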
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
As the whole partitioning run is quite an invasive procedure, we
especially want to make sure that it doesn't unmount any filesystems
that were mounted before the partitioner was run.
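One way to express that check (illustrative, not the test's actual
code) is to snapshot the mount table before the run and diff it
afterwards; the nixpart invocation below is likewise only a sketch:

pkgs.writeScript "check-mounts" ''
  #!${pkgs.stdenv.shell}
  # Field 5 of /proc/self/mountinfo is the mount point.
  cut -d' ' -f5 /proc/self/mountinfo | sort > /tmp/mounts.before
  nixpart /tmp/kickstart.cfg
  cut -d' ' -f5 /proc/self/mountinfo | sort > /tmp/mounts.after
  # Anything only in "before" was unmounted by the partitioner.
  comm -23 /tmp/mounts.before /tmp/mounts.after
''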
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This will ensure that we don't get errors because the kernel doesn't
recognize the new partitioning scheme under some conditions or on some
architectures, such as i686.
See here for the Hydra build log on i686:
http://hydra.nixos.org/build/5432090/download/1/log.html
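The usual ingredients for this (shown for illustration; not
necessarily the exact commands the fix uses) are asking the kernel to
re-read the partition table and waiting for udev to settle:

pkgs.writeScript "settle-partitions" ''
  #!${pkgs.stdenv.shell}
  # The device name is made up for the example.
  blockdev --rereadpt /dev/vda
  udevadm settle
''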
Signed-off-by: aszlig <aszlig@redmoonstudios.org>