title: Proxmox VE
breadcrumbs:
- title: Configuration
- title: Linux Servers
---
{% include header.md %}

## Using
{:.no_toc}

### Host
TODO Ignore this whole section for now.
- Initial setup
- Notes from Google Docs
- `localhost` must resolve to both 127.0.0.1 and ::1, and the domain name must resolve to the management interface IP addresses (IPv4 and IPv6).
- See Debian Server: Initial Setup.
- Setup the PVE repos (assuming no subscription, see the sketch after this list):
- In `/etc/apt/sources.list.d/pve-enterprise.list`, comment out the Enterprise repo.
- In `/etc/apt/sources.list`, add the PVE No-Subscription repo. See Package Repositories.
- Update the package index.
- Disable the console MOTD:
- Disable `pvebanner.service`.
- Clear or update `/etc/issue` (e.g. use the logo).
- Setup firewall:
- Open an SSH session, as this will prevent full lock-out.
- Enable the cluster/datacenter firewall.
- Disable NDP. This is due to a vulnerability in Proxmox where the host autoconfigures IPv6 on all of its bridges.
- Add incoming rules on the management network (!) for NDP (ICMPv6), ping (macro), SSH (macro) and the web GUI (TCP port 8006).
- Enable the host/node firewall.
- Make sure ping, SSH and the web GUI are working for both IPv4 and IPv6.
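
A minimal sketch of the repo and MOTD steps above, assuming Debian 10 "Buster" (adjust the codename to your release) and the standard Proxmox no-subscription repo URL:

```sh
# Comment out the enterprise repo (requires a subscription).
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the no-subscription repo and update the package index.
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" >> /etc/apt/sources.list
apt update

# Disable the console MOTD banner (then clear or update /etc/issue manually).
systemctl disable --now pvebanner.service
```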

### Cluster
- `/etc/pve` will get synchronized across all nodes.
- High availability:
- Clusters must be explicitly configured for HA.
- Provides live migration.
- Requires shared storage (e.g. Ceph).

#### Simple Setup
- Setup a management network for the cluster.
- It should generally be isolated.
- Setup each node.
- Add every other host to each host's hosts file (see the sketch after this list).
- This makes it easier to change IP addresses later.
- Use short hostnames, not FQDNs.
- Create the cluster on one of the nodes: `pvecm create <name>`
- Join the cluster from each of the other nodes, pointing at a node that is already in the cluster: `pvecm add <existing-node-address>`
- Check the status: `pvecm status`
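
A sketch of the cluster bring-up, using hypothetical hostnames and addresses:

```sh
# On every node: add all cluster members to /etc/hosts using short hostnames.
cat >> /etc/hosts <<'EOF'
10.0.0.1 pve1
10.0.0.2 pve2
10.0.0.3 pve3
EOF

# On the first node: create the cluster.
pvecm create mycluster

# On each remaining node: join by pointing at a node already in the cluster.
pvecm add pve1

# On any node: verify quorum and membership.
pvecm status
```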

#### High Availability Info
See: Proxmox: High Availability
- Requires a cluster of at least 3 nodes.
- Configured using HA groups.
- The local resource manager (LRM/"pve-ha-lrm") controls services running on the local node.
- The cluster resource manager (CRM/"pve-ha-crm") communicates with the nodes' LRMs and handles things like migrations and node fencing.
There's only one master CRM.
- Fencing:
- Fencing is required to prevent services from running on multiple nodes due to communication problems, which would cause corruption and other problems.
- Can be provided using watchdog timers (software or hardware), external power switches, network traffic isolation and more.
- Watchdogs: When a node loses quorum, it doesn't reset the watchdog. When it expires (typically after 60 seconds), the node is killed and restarted.
- Hardware watchdogs must be explicitly configured.
- The software watchdog (using the Linux kernel driver "softdog") is used by default and doesn't require any configuration,
but it's not as reliable as other solutions since it runs inside the host.
- Services are not migrated from failed nodes until fencing is finished.
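
For reference, a hedged sketch of enabling HA for a VM with the `ha-manager` CLI (the group name, node names and VMID are hypothetical; the same can be done from the web GUI):

```sh
# Create an HA group with per-node priorities (higher is preferred).
ha-manager groupadd prefer-pve1 --nodes "pve1:2,pve2:1,pve3:1"

# Add a VM as an HA resource, assign it to the group and request it started.
ha-manager add vm:100 --group prefer-pve1 --state started

# Show the CRM/LRM view of all HA resources.
ha-manager status
```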

### VMs

#### Initial Setup
- Generally (a `qm create` example reflecting these choices is sketched after this list):
- Use VirtIO if the guest OS supports it, since it provides a paravirtualized interface instead of an emulated physical one.
- General tab:
- Use start/shutdown order if some VMs depend on other VMs (like virtualized routers).
0 is first, unspecified is last. Shutdown follows the reverse order.
For equal order values, the VMID is used in ascending order.
- OS tab: No notes.
- System tab:
- Graphics card: TODO SPICE graphics card?
- Qemu Agent: It provides more information about the guest and allows PVE to perform some actions more intelligently,
but requires the guest to run the agent.
- SCSI controller: Use VirtIO SCSI for Linux and LSI for Windows.
- BIOS: Generally use SeaBIOS. Use OVMF (UEFI) if you need PCIe pass-through.
- Machine: Generally use Intel 440FX. Use Q35 if you need PCIe pass-through.
- Hard disk tab:
- Bus/device: Use SCSI with the VirtIO SCSI controller selected in the system tab.
It supersedes the VirtIO Block controller.
- Cache: Optional, typically using write back.
- Discard: When using thin-provisioning storage for the disk and a TRIM-enabled guest OS,
this option will relay guest TRIM commands to the storage so it may shrink the disk image.
The guest OS may require SSD emulation to be enabled.
- IO thread: If the VirtIO SCSI single controller is used (which uses one controller per disk),
this will create one I/O thread for each controller for maximum performance.
- CPU tab:
- CPU type: Generally, use "kvm64".
For HA, use "kvm64" or similar (since the new host must support the same CPU flags).
For maximum performance on one node or HA with same-CPU nodes, use "host".
- NUMA: Enable for NUMA systems. Set the socket count equal to the number of NUMA nodes.
- CPU limit: Aka CPU quota. Floating-point number where 1.0 is equivalent to 100% of one CPU core.
- CPU units: Aka CPU shares/weight. Processing priority, higher is higher priority.
- See the documentation for the various CPU flags (especially the ones related to Meltdown/Spectre).
- Memory tab:
- Ballooning allows the guest OS to release memory back to the host when the host is running low on it.
For Linux, it uses the "balloon" kernel driver in the guest, which will swap out processes or start the OOM killer if needed.
For Windows, it must be added manually and may incur a slowdown of the guest.
- Network tab:
- Model: Use VirtIO.
- Firewall: Enable if the guest does not provide one itself.
- Multiqueue: When using VirtIO, it can be set to the total number of CPU cores of the VM for increased performance.
It will increase the CPU load, so only use it for VMs that need to handle a high amount of connections.
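
The tabs above map to `qm` options. A sketch of creating a VM from the CLI with roughly the choices described (VMID, name, storage and bridge are placeholders; adjust sizes and counts to taste):

```sh
qm create 100 \
    --name example-vm --ostype l26 --bios seabios \
    --scsihw virtio-scsi-single \
    --scsi0 local-lvm:32,discard=on,ssd=1,iothread=1 \
    --cpu kvm64 --sockets 1 --cores 2 \
    --memory 4096 --balloon 1024 \
    --net0 virtio,bridge=vmbr0,firewall=1 \
    --agent enabled=1
```

Here `virtio-scsi-single` together with `iothread=1` gives each disk its own controller and I/O thread, matching the IO thread note above.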

#### Linux Setup
- Setup QEMU Guest Agent:
- Install: `apt install qemu-guest-agent`
- Toggle the "QEMU Guest Agent" option for the VM in Proxmox.
- If enabled in Proxmox but not installed in the guest, Proxmox will fail to shut down or restart the VM.
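
A sketch, assuming a Debian-based guest and VMID 100:

```sh
# Inside the guest: install and enable the agent.
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent

# On the PVE host: enable the agent option for the VM (same as toggling it in the GUI).
qm set 100 --agent enabled=1
```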

#### Setup SPICE Console
- In the VM hardware configuration, set the display to SPICE.
- Install the guest agent:
- Linux: `spice-vdagent`
- Windows: `spice-guest-tools`
- Install a SPICE-compatible viewer on your client (e.g. `virt-viewer`).
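
A sketch of switching a VM to SPICE from the CLI and connecting from a Linux client (the VMID and the downloaded file name are placeholders):

```sh
# On the PVE host: set the VM's display to SPICE (qxl).
qm set 100 --vga qxl

# On the client: open the connection file downloaded via the web GUI's SPICE console button.
remote-viewer pve-spice.vv
```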

### Firewall
- PVE uses three different/overlapping firewalls:
- Cluster: Applies to all hosts/nodes in the cluster/datacenter.
- Host: Applies to all nodes/hosts and overrides the cluster rules.
- VM: Applies to VM (and CT) firewalls.
- To enable the firewall for nodes, both the cluster and host firewall options must be enabled.
- To enable the firewall for VMs, both the VM option and the option for individual interfaces must be enabled.
- The firewall comes largely pre-configured for the basics, like connection tracking and management network access.
- Host NDP problem:
- For hosts, there is a vulnerability where the host autoconfigures IPv6 on all bridges (see Bug 1251 - Security issue: IPv6 autoconfiguration on Bridge-Interfaces).
- Even though you firewall off management traffic to the host, the host may still use the "other" networks as default gateways.
- To partially fix this, disable NDP on all nodes and add a rule allowing protocol "ipv6-icmp" on trusted interfaces.
- To verify that it's working, reboot and check its IPv6 routes and neighbors.
- Check the firewall status: `pve-firewall status`
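
A hedged sketch of what the cluster and host firewall config could look like for the setup described above; the interface name is a placeholder and the use of the `management` IP set as a rule source is an assumption, so verify macro and option names against the pve-firewall documentation:

```
# /etc/pve/firewall/cluster.fw
[OPTIONS]
enable: 1

[RULES]
# Allow NDP/ICMPv6 on a trusted interface (placeholder interface name).
IN ACCEPT -p ipv6-icmp -iface vmbr0
# Management access, restricted to the special "management" IP set.
IN Ping(ACCEPT) -source +management
IN SSH(ACCEPT) -source +management
IN ACCEPT -p tcp -dport 8006 -source +management
```

```
# /etc/pve/nodes/<node>/host.fw
[OPTIONS]
enable: 1
ndp: 0
```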

#### Special Aliases and IP Sets
- Alias `localnet` (cluster):
- For allowing cluster and management access (Corosync, API, SSH).
- Automatically detected and defined for the management network (one of them), but can be overridden at cluster level.
- Check: `pve-firewall localnet`
- IP set `cluster_network` (cluster):
- Consists of all cluster hosts.
- IP set `management` (cluster):
- For management access to hosts.
- Includes `cluster_network`.
- If you want to handle management firewalling elsewhere/differently, just ignore this and add appropriate rules directly.
- IP set `blacklist` (cluster):
- For blocking traffic to hosts and VMs.
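
A hedged sketch of how these could be defined in `/etc/pve/firewall/cluster.fw` (all addresses are placeholders; note that the auto-detected alias is named `local_network` in the config, while the CLI subcommand is `localnet`):

```
[ALIASES]
# Override the auto-detected management network.
local_network 10.0.0.0/24

[IPSET management]
10.0.0.0/24
2001:db8::/64

[IPSET blacklist]
192.0.2.7
203.0.113.0/24
```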

#### PVE Ports
- TCP 22: SSH.
- TCP 3128: SPICE proxy.
- TCP 5900-5999: VNC web console.
- TCP 8006: Web interface.
- TCP 60000-60050: Live migration (internal).
- UDP 111: rpcbind (optional).
- UDP 5404-5405: Corosync (internal).

### Ceph
See Storage: Ceph for general notes.
The notes below are PVE-specific.

#### Notes
- It's recommended to use a high-bandwidth SAN/management network within the cluster for Ceph traffic.
It may be the same as used for out-of-band PVE cluster management traffic.
- When used with PVE, the configuration is stored in the cluster-synchronized PVE config dir.

#### Setup
- Setup a shared network.
- It should be high-bandwidth and isolated.
- It can be the same as used for PVE cluster management traffic.
- Install (all nodes): `pveceph install`
- Initialize (one node): `pveceph init --network <subnet>`
- Setup a monitor (all nodes): `pveceph createmon`
- Check the status: `ceph status`
- Requires at least one monitor.
- Add a disk (all nodes, all disks): `pveceph createosd <dev>`
- If the disk contains any partitions, run `ceph-disk zap <dev>` to clean it first.
- Can also be done from the dashboard.
- Check the disks: `ceph osd tree`
- Create a pool (PVE dashboard).
- "Size" is the number of replicas.
- "Minimum size" is the number of replicas that must be written before the write should be considered done.
- Use at least size 3 and min. size 2 in production.
- "Add storage" adds the pool to PVE for disk image and container content.
{% include footer.md %}