Håvard O. Nordstrand committed 5 years ago (parent → current commit aa2fd03947)

4 changed files, with 37 insertions and 26 deletions:

1. config/game-servers/tf2.md (+10 -10)
2. config/linux-general/examples.md (+3 -4)
3. config/linux-server/proxmox-ve.md (+21 -9)
4. config/linux-server/storage.md (+3 -3)

+ 10 - 10
config/game-servers/tf2.md

@@ -33,9 +33,8 @@ In addition to the default Pterodactyl command arguments.
 
 Config dir: `tf/cfg/`
 
-**`autoexec.cfg`:**
+**`autoexec.cfg` (example):**
 
-Example:
 ```
 hostname ""
 // Optional, set to an email address
@@ -45,9 +44,8 @@ rcon_password ""
 sv_password ""
 ```
 
-**`server.cfg`:**
+**`server.cfg` (example):**
 
-Example:
 ```
 // Time in minutes per map, use 0 to disable time limit
 mp_timelimit 30
@@ -56,14 +54,16 @@ mp_maxrounds 10
 ```
 
 **`motd.txt` and `motd_text.txt`:**
-Contains the full MOTD shown to players when joining the server.
-`motd.txt` may contain HTML and is used by default.
-`motd_text.txt` is used if the player has disabled HTML MOTDs.
-If `motd.txt` contains *any* HTML/CSS/JS, it will be rendered using some ugly default font and opaque background.
+
+- Contains the full MOTD shown to players when joining the server.
+- `motd.txt` may contain HTML and is used by default.
+- `motd_text.txt` is used if the player has disabled HTML MOTDs.
+- If `motd.txt` contains *any* HTML/CSS/JS, it will be rendered using some ugly default font and opaque background.
 
 **`mapcycle.txt`:**
-Lists the all maps in the map pool.
-Use `tf/cfg/mapcycle_default.txt` as a reference.
+
+- Lists all the maps in the map pool.
+- Use `tf/cfg/mapcycle_default.txt` as a reference.
 
 ## MvM Configuration
 

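The `motd.txt` notes in the diff above could be illustrated with a minimal HTML example (a sketch only; the styling, text, and contact address are placeholders, not part of the original notes):

```
<!-- motd.txt: rendered in the in-game MOTD panel when HTML MOTDs are enabled -->
<html>
<head>
<style>
body { background: #1b1b1b; color: #eeeeee; font-family: sans-serif; }
</style>
</head>
<body>
<h1>Welcome!</h1>
<p>No cheating. Contact: admin@example.com</p>
</body>
</html>
```

`motd_text.txt` would then contain only the plain-text equivalent, and `mapcycle.txt` is simply one map name per line (e.g. `cp_dustbowl`).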
+ 3 - 4
config/linux-general/examples.md

@@ -44,9 +44,8 @@ breadcrumbs:
 
 - Monitor usage:
     - `nload <if>`
-    - `speedometer -t <if> -r <if>`
-      - Prettier than nload.
-      - Multiple interfaces can be specified.
+    - `iftop -i <if>`
+    - `speedometer -t <if> -r <if> [...]`
 - Monitor per-process usage:
     - `nethog`
 - Test throughput:
@@ -91,7 +90,7 @@ breadcrumbs:
 
 - Test read speed: `hdparm -t <dev>` (safe)
 - Show IO load for devices/partitions: `iostat [-xpm] [refresh-interval]`
-- Show IO usage for processes: `iotop`
+- Show IO usage for processes: `iotop -o [-a]`
 
 ### System
 

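The monitoring commands added above might be invoked like this (a sketch; `eth0` and the refresh interval are arbitrary example values):

```
iftop -i eth0               # per-connection bandwidth on eth0
speedometer -t eth0 -r eth0 # TX and RX graphs for eth0
iostat -xm 2                # extended device stats in MB/s, refreshed every 2 s
iotop -o -a                 # only processes doing IO, with accumulated totals
```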
+ 21 - 9
config/linux-server/proxmox-ve.md

@@ -60,7 +60,7 @@ Follow the instructions for [Debian server basic setup](../debian-server/#initia
     1. Create a ZFS pool or something.
     1. Add it to `/etc/pve/storage.cfg`: See [Proxmox VE: Storage](https://pve.proxmox.com/wiki/Storage)
 
-### Setup PCI(e) Passthrough
+### Configure PCI(e) Passthrough
 
 **Possibly outdated**
 
@@ -88,18 +88,18 @@ Follow the instructions for [Debian server basic setup](../debian-server/#initia
 
 ### Troubleshooting
 
-#### Failed Login
+**Failed login:**
 
 Make sure `/etc/hosts` contains both the IPv4 and IPv6 addresses for the management networks.
 
 ## Cluster
 
-- `/etc/pve` will get synchronized across all nodes.
-    - This includes `storage.cfg`, so storage configuration must be the same for all nodes.
-- High availability:
-    - Clusters must be explicitly configured for HA.
-    - Provides live migration.
-    - Requires shared storage (e.g. Ceph).
+### Usage
+
+- The cluster file system (`/etc/pve`) is synchronized across all nodes, meaning quorum rules apply to it.
+- The storage configuration (`storage.cfg`) is part of `/etc/pve` and thus shared by all cluster nodes, so every node must have the same storage configuration.
+- Show cluster status: `pvecm status`
+- Show HA status: `ha-manager status`
 
 ### Creating a Cluster
 
@@ -132,6 +132,8 @@ This is the recommended method to remove a node from a cluster. The removed node
 See: [Proxmox: High Availability](https://pve.proxmox.com/wiki/High_Availability)
 
 - Requires a cluster of at least 3 nodes.
+- Requires shared storage.
+- Provides live migration.
 - Configured using HA groups.
 - The local resource manager (LRM/"pve-ha-lrm") controls services running on the local node.
 - The cluster resource manager (CRM/"pve-ha-crm") communicates with the nodes' LRMs and handles things like migrations and node fencing.
@@ -147,12 +149,16 @@ See: [Proxmox: High Availability](https://pve.proxmox.com/wiki/High_Availability
 
 ### Troubleshooting
 
-#### Modify Without Quorum
+**Unable to modify because of lost quorum:**
 
 If you lost quorum because of connection problems and need to modify something (e.g. to fix those connection problems), run `pvecm expected 1` to lower the expected quorum to 1.
 
 ## VMs
 
+### Usage
+
+- List: `qm list`
+
 ### Initial Setup
 
 - Generally:
@@ -264,6 +270,12 @@ SPICE allows interacting with graphical VM desktop environments, including suppo
     - Windows: See [Windows Setup](#windows-setup).
 1. In the VM hardware configuration, set the display to SPICE.
 
+### Troubleshooting
+
+**VM failed to start, possibly after migration:**
+
+Check the host system logs. The failure may, for instance, be caused by hardware changes or by storage that is no longer available after migration.
+
 ## Firewall
 
 - PVE uses three different/overlapping firewalls:

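The failed-login fix in the troubleshooting notes above can be sketched as an `/etc/hosts` fragment (the hostname and addresses are placeholders from documentation ranges):

```
127.0.0.1    localhost.localdomain localhost
192.0.2.10   pve1.example.net pve1
2001:db8::10 pve1.example.net pve1
```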
+ 3 - 3
config/linux-server/storage.md

@@ -276,10 +276,10 @@ Typically an early indicator of faulty hardware, so take note of which disk it i
     - Unmount it if not: `umount <dev>`
 1. Replace the physical disk.
 1. Zap the new disk: `ceph-disk zap <dev>`
-1. Create new OSD: `pveceph osd create <dev> [options]` (PVE)
-    - Specify any WAL or DB devices.
+1. Create new OSD: `pveceph osd create <dev> [options]` (Proxmox VE)
+    - Optionally specify any WAL or DB devices.
     - See [PVE: pveceph(1)](https://pve.proxmox.com/pve-docs/pveceph.1.html).
-    - Without `pveceph osd create`, a series of steps are required.
+    - Without PVE's `pveceph(1)`, a series of steps are required.
     - Check that the new OSD is up: `ceph osd tree`
 1. Start the OSD daemon: `systemctl start ceph-osd@<id>`
 1. Wait for rebalancing: `ceph -s [-w]`
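The OSD replacement steps above can be condensed into a command sequence like this (a sketch: `/dev/sdx` and OSD ID `12` are placeholders, WAL/DB device options are elided, and note that `ceph-disk` is deprecated in favor of `ceph-volume` in newer Ceph releases):

```
umount /dev/sdx1            # if the old disk's partition was still mounted
ceph-disk zap /dev/sdx      # wipe the replacement disk
pveceph osd create /dev/sdx # see pveceph(1) for WAL/DB device options
ceph osd tree               # check that the new OSD is up
systemctl start ceph-osd@12 # start the OSD daemon
ceph -s                     # check rebalancing (or `ceph -w` to watch)
```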