Note: nixos-shell must be able to extend the specified system configuration with [certain modules](share/modules)!
If your version of nixpkgs provides the extendModules function on system configurations, nixos-shell will use it to inject the required modules; no additional work on your part is needed.
If your version of nixpkgs does not provide extendModules, you must make your system configurations overridable with lib.makeOverridable to use them with nixos-shell:
```nix
{
  nixosConfigurations = let
    lib = nixpkgs.lib;
  in {
    vm = lib.makeOverridable lib.nixosSystem {
      # ...
    };
  };
}
```
Specifying a non-overridable system configuration will cause nixos-shell to abort with a non-zero exit status.
When using the --flake flag, if no attribute is given, nixos-shell tries the following flake output attributes:
packages.<system>.nixosConfigurations.<vm>
nixosConfigurations.<vm>
nixosModules.<vm>
If an attribute name is given, nixos-shell tries the following flake output attributes:
packages.<system>.nixosConfigurations.<name>
nixosConfigurations.<name>
nixosModules.<name>
To forward ports from the virtual machine to the host, use the
virtualisation.forwardPorts NixOS option.
See examples/vm-forward.nix where the ssh server running on port 22 in the
virtual machine is made accessible through port 2222 on the host.
The same can also be achieved by using the QEMU_NET_OPTS environment variable.
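As a sketch, the option-based form of the forwarding described above (host port 2222 to guest port 22, mirroring examples/vm-forward.nix) looks like this:

```nix
{
  virtualisation.forwardPorts = [
    # forward host port 2222 to port 22 (sshd) inside the VM
    { from = "host"; host.port = 2222; guest.port = 22; }
  ];
}
```

The environment-variable equivalent uses QEMU's user-mode hostfwd syntax, e.g. `QEMU_NET_OPTS="hostfwd=tcp::2222-:22" nixos-shell`.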
Your keys are used to enable passwordless login for the root user.
At the moment only ~/.ssh/id_rsa.pub, ~/.ssh/id_ecdsa.pub and ~/.ssh/id_ed25519.pub are
added automatically. Use users.users.root.openssh.authorizedKeys.keyFiles to add more.
Note: sshd is not started by default. It can be enabled by setting
services.openssh.enable = true.
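A minimal configuration that enables sshd and adds an extra authorized key file might look like the following (the key file path is purely illustrative):

```nix
{
  # sshd is not started by default; enable it explicitly
  services.openssh.enable = true;
  users.users.root.openssh.authorizedKeys.keyFiles = [
    ./my-other-key.pub  # hypothetical path; replace with your own public key file
  ];
}
```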
QEMU is started with user-mode networking by default. To use bridge networking instead,
set virtualisation.qemu.networkingOptions to something like
[ "-nic bridge,br=br0,model=virtio-net-pci,mac=11:11:11:11:11:11,helper=/run/wrappers/bin/qemu-bridge-helper" ]. /run/wrappers/bin/qemu-bridge-helper is a NixOS-specific
path for qemu-bridge-helper; on other Linux distributions it will be different.
QEMU needs to be installed on the host to get qemu-bridge-helper with the setuid bit
set; otherwise you will need to start the VM as root. On NixOS this can be achieved using
virtualisation.libvirtd.enable = true;
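Assembling the inline options above into a guest module, a bridge-networking configuration could be sketched as follows (the bridge name and MAC address are placeholders; adjust them for your network):

```nix
{
  virtualisation.qemu.networkingOptions = [
    # br0 and the MAC address are examples; qemu-bridge-helper lives at a
    # NixOS-specific path and differs on other distributions
    "-nic bridge,br=br0,model=virtio-net-pci,mac=11:11:11:11:11:11,helper=/run/wrappers/bin/qemu-bridge-helper"
  ];
}
```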
To increase the size of the virtual hard drive, e.g. to 20 GB (see [virtualisation] options at the bottom; defaults to 512M):

```nix
{ virtualisation.diskSize = 20 * 1024; }
```
Notice that for this option to take effect you may also need to delete the block device file previously created by QEMU (nixos.qcow2).
Notice that changes to the Nix store are written to an overlayfs backed by tmpfs rather than to the block device
configured by virtualisation.diskSize. This tmpfs can be disabled with:

```nix
{ virtualisation.writableStoreUseTmpfs = false; }
```

This option is recommended if you plan to use nixos-shell as a remote builder.
There are no explicit options for this right now, but
one can use either the $QEMU_OPTS environment variable
or set virtualisation.qemu.options to pass the right QEMU
command-line flags:

```nix
{
  # /dev/sdc also needs to be read-writable by the user executing nixos-shell
  virtualisation.qemu.options = [ "-hdc /dev/sdc" ];
}
```
In many cloud environments KVM is not available, and nixos-shell will therefore fail with: CPU model 'host' requires KVM.
In newer versions of nixpkgs this has been fixed by falling back to emulation.
In older versions one can set virtualisation.qemu.options or the QEMU_OPTS environment variable:
```shell
export QEMU_OPTS="-cpu max"
nixos-shell
```
A full list of supported QEMU CPU models can be obtained by running qemu-kvm -cpu help.
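The same fallback can also be set declaratively via the option mentioned above; a minimal sketch:

```nix
{
  # Use QEMU's best-effort emulated CPU model instead of requiring KVM
  virtualisation.qemu.options = [ "-cpu max" ];
}
```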
By default VMs have a NIX_PATH configured for nix channels, but no channels are downloaded yet.
To avoid having to download a nix-channel every time the VM is reset, you can use the following nixos configuration:
```nix
{ pkgs, ... }: {
  nix.nixPath = [
    "nixpkgs=${pkgs.path}"
  ];
}
```
This adds the nixpkgs used for the VM to the NIX_PATH of the login shell.
Embedding nixos-shell in your own nixos-configuration
It’s possible to specify a different architecture using --guest-system.
This requires your host system to have either a remote builder
(e.g. darwin-builder on macOS)
or to be able to run builds in emulation
for the guest system (boot.binfmt.emulatedSystems on NixOS).
Here is an example for macOS (ARM) that will run an aarch64-linux VM: