The container config example code mentions the `postgresql` service,
but correct use of that service requires setting the
`system.stateVersion` option (as discovered in
https://github.com/NixOS/nixpkgs/issues/30056).
The state version is arbitrarily set to 17.03, as I have no particular
preference here.
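A minimal sketch of the corrected example (the container name
`database` is made up for illustration):

  containers.database = {
    config = { config, pkgs, ... }: {
      # Arbitrary but explicit; the postgresql service's defaults
      # depend on it (see NixOS/nixpkgs#30056).
      system.stateVersion = "17.03";
      services.postgresql.enable = true;
    };
  };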
Allows one or more directories to be mounted as read-only file systems.
This makes it convenient to run volatile containers that do not retain
application state.
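A sketch of such a mount (the container name and paths are invented):

  containers.webserver = {
    bindMounts."/var/www" = {
      hostPath = "/data/www";
      isReadOnly = true;  # expose the host directory read-only
    };
  };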
This adds the containers.<name>.enableTun option, allowing containers to
access /dev/net/tun. This is required by openvpn, tinc, etc. in order to
work properly inside containers.
The new option builds on top of two generic options,
containers.<name>.additionalCapabilities and
containers.<name>.allowedDevices, which can also be used, for example,
when adding support for FUSE further down the road.
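Roughly, the new option is shorthand for the two generic ones (a
sketch; the exact capability and device values are an assumption):

  containers.vpn = {
    enableTun = true;
    # Approximately equivalent to:
    # additionalCapabilities = [ "CAP_NET_ADMIN" ];
    # allowedDevices = [ { node = "/dev/net/tun"; modifier = "rw"; } ];
  };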
Get rid of the "or null" stuff. Also change 'cfg . "foo"' to 'cfg.foo'.
Also fix what appears to be an actual bug: in postStartScript,
cfg.attribute (where attribute is a function argument) should be
cfg.${attribute}.
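For illustration, the difference between the two lookups in Nix
(values invented):

  let
    cfg = { tunnel = true; attribute = false; };
    attribute = "tunnel";
  in {
    wrong = cfg.attribute;    # selects the literal key "attribute" -> false
    right = cfg.${attribute}; # dynamic key, i.e. cfg.tunnel -> true
  }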
With these changes, a container can have more than one veth-pair. This makes it possible, for example, to have LAN and DMZ as bridges on the host and to add dedicated containers for proxies, ipv4-firewall and ipv6-firewall. Or to have a bridge for normal WAN, one bridge for administration and one bridge for customer-internal communication, so that web-server containers can be reached from outside via HTTP, from the management network via SSH, and can talk to their database via the customer network.
The scripts that set up the containers are now rendered once per container instead of coming from a single template, and they now contain per-container code to configure the extra veth interfaces. The default template, without support for extra veths, is still rendered for the imperative containers.
A test is also included to check that extra veths can be placed into host bridges or reached via routing.
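A sketch of what such a multi-veth container might look like (bridge
names and addresses are invented):

  containers.webserver = {
    privateNetwork = true;
    hostBridge = "br-dmz";              # primary veth in the DMZ bridge
    localAddress = "192.168.10.2/24";
    extraVeths.mgmt = {
      hostBridge = "br-mgmt";           # extra veth-pair for administration
      localAddress = "192.168.20.2/24";
    };
  };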
This makes the container a bit more secure by preventing root from
creating device nodes to access the host file system, for instance.
(Reference: systemd-nspawn@.service in systemd.)
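In the spirit of that unit, the restriction might look like this (a
hypothetical excerpt; the directive names are systemd's, but their
placement here is illustrative):

  systemd.services."container@".serviceConfig = {
    DevicePolicy = "closed";               # deny device access by default
    DeviceAllow = [ "/dev/net/tun rwm" ];  # whitelist only what is needed
  };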
This moves nixos-containers into its own package so that it can be
relied upon by other packages/systems. This should make development
using dynamic containers much easier.
Since systemd version 230, a machine-id file is required prior to the
startup of the container. If the file is empty, a transient machine ID
is generated by systemd-nspawn.
See systemd/systemd#3014 for more details on the matter.
This unbreaks all of the containers-* NixOS tests.
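A hypothetical preStart fragment showing the shape of the fix ($root
stands for the container's root directory):

  preStart = ''
    # systemd >= 230 refuses to boot without /etc/machine-id; an empty
    # file makes systemd-nspawn generate a transient machine ID.
    if [ ! -e "$root/etc/machine-id" ]; then
      touch "$root/etc/machine-id"
    fi
  '';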
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @edolstra
Closes: #15808
The existence of $root/var/lib/private/host-notify as a socket
prevented a bind mount:
container foo[8083]: Failed to create mount point /var/lib/containers/foo/var/lib/private/host-notify: No such device or address
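A plausible shape of the fix, removing the stale socket before the
mounts are set up (hypothetical fragment):

  preStart = ''
    # A leftover notify socket from a previous run breaks the bind
    # mount, so remove it before systemd-nspawn starts.
    rm -f "$root/var/lib/private/host-notify"
  '';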
Without the templating (which is still present for imperative
containers), it will be possible to set individual dependencies, such
as depending on the network only if the host bridge or hardware
interfaces are used.
Ported from #3021
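For example, per-container generation allows a conditional dependency
like this (a hypothetical fragment; the option names mirror the
container options):

  # Pull in the network only when the container actually touches the
  # host's bridge or a physical interface.
  after = lib.optional
    (cfg.hostBridge != null || cfg.interfaces != [ ])
    "network.target";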
This allows the containers to have their interface in a bridge on the host.
This also adds IPv6 addresses to the containers, with both bridged and
unbridged networking.
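A sketch of a bridged container with IPv6 (names and addresses
invented):

  containers.webserver = {
    privateNetwork = true;
    hostBridge = "br0";            # put the container's veth into br0
    localAddress = "10.0.0.2/24";
    localAddress6 = "fc00::2/64";  # IPv6 works bridged and unbridged
  };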
If the host is shutting down, machinectl may fail because it's
bus-activated and D-Bus will be shutting down. So just send a signal
to the leader process directly.
Fixes #6212.
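Roughly the idea, as a hypothetical stop fragment (SIGRTMIN+4 asks a
systemd init for an orderly poweroff; the pid file path is made up):

  preStop = ''
    # machinectl is bus-activated and unusable while D-Bus is going
    # down, so signal the container's init (the leader) directly.
    leader="$(cat "/run/containers/$INSTANCE.pid")"
    kill -RTMIN+4 "$leader"
  '';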