HON95 3 years ago
parent
commit
9b466c07ef

+ 3 - 1
404.md

@@ -5,6 +5,8 @@ permalink: /404.html
 ---
 {% include header.md %}
 
-**Error 404:** Page not found.
+## Error 404
+
+**Page not found.**
 
 {% include footer.md %}

+ 15 - 6
config/automation/ansible.md

@@ -32,23 +32,32 @@ breadcrumbs:
 - Specify inventory file: `ansible-playbook -i <hosts> <playbook>`
 - Limit which groups/hosts to use (comma-separated): `ansible-playbook -l <group|host> <playbook>`
 - Limit which tasks to run using tags (comma-separated): `ansible-playbook -t <tag> <playbook>`
-- Use Vault password file: `ansible-playbook --vault-password-file <file> <...>`
 
 ### Vault
 
-- Use file for password: Just add the password as the only line in a file.
-- Encrypt, prompt for secret, using password file: `ansible-vault encrypt_string --vault-password-file ~/.ansible_vault/stuff`
+- Used to encrypt files and values. For values, just paste the `!vault ...` output directly into the config that should use the encrypted value (see the example after this list).
+- Use a file to keep the password: Just add the password as the only line in a file, e.g. `~/.ansible_vault/<name>` (with appropriate parent dir perms). A generated `[a-zA-Z0-9]{32}` string is more than strong enough.
+- Encrypt, prompt for secret, using password file: `ansible-vault encrypt_string --vault-password-file=~/.ansible_vault/stuff`
+- Use password file with playbook: `ansible-playbook --vault-password-file=<file> <...>`
 - To avoid leaking secrets in logs and stuff, use `no_log` in tasks handling secrets.
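+
+A sketch of encrypting a value and pasting it into a playbook variable (the secret, variable name and password file are examples; the ciphertext is truncated):
+
+```
+$ ansible-vault encrypt_string --vault-password-file=~/.ansible_vault/stuff 'hunter2' --name 'db_password'
+db_password: !vault |
+          $ANSIBLE_VAULT;1.1;AES256
+          62313365396662343036393736323637...
+```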
 
 ## Configuration
 
-Example `/etc/ansible/ansible.cfg` or `~/.ansible.cfg`:
+Config locations:
+
+- Global: `/etc/ansible/ansible.cfg`
+- User: `~/.ansible.cfg`
+- Project: `ansible.cfg`
+
+Example config:
 
 ```
 [defaults]
-# Change to "auto" if this path causes problems
-interpreter_python = /usr/bin/python3
 host_key_checking = false
+#interpreter_python = auto
+interpreter_python = /usr/bin/python3
+#inventory = hosts.ini
+#roles_path = ansible-roles:~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
 ```
 
 ## Templating

+ 45 - 0
config/cloud/aws.md

@@ -0,0 +1,45 @@
+---
+title: AWS
+breadcrumbs:
+- title: Configuration
+- title: Cloud
+---
+{% include header.md %}
+
+## General
+
+- Note that almost everything is tied to some availability zone, so make sure your active zone is the correct one before making any changes.
+
+## Networking (VPC etc.)
+
+### Security Groups
+
+- Remember to setup IPv6 rules too (typically mirroring the IPv4 ones).
+- Typical DMZ setup: Allow everything from everywhere.
+- Typical non-DMZ setup: Allow ICMPv4, ICMPv6 and SSH from everywhere.
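+
+As a hedged AWS CLI sketch, the ICMPv6 rule could look like this (the security group ID is a placeholder):
+
+```
+aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
+    --ip-permissions 'IpProtocol=icmpv6,FromPort=-1,ToPort=-1,Ipv6Ranges=[{CidrIpv6=::/0}]'
+```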
+
+### Add IPv6 Support
+
+1. Add an IPv6 prefix to the VPC:
+    1. Find the VPC.
+    1. Enter the "edit CIDRs" config page.
+    1. Add an Amazon-managed IPv6 prefix.
+1. Add a default gateway for the new prefix:
+    1. Enter the "routing tables" page and find the table associated with the VPC.
+    1. Click "edit routes".
+    1. Add a new route with destination `::/0`, using the same internet gateway as the IPv4 default route as the target.
+1. Create a subnet from the IPv6 prefix:
+    1. Enter the "subnets" page.
+    1. (Optional) Delete the existing IPv4-only subnets (not possible if any resources are using them).
+    1. Create a new dual-stack subnet for the VPC, optionally with no name, in the same availability zone as the VM/resource that will use it. Select some IPv4 subnet (e.g. the first `/24`) and some IPv6 subnet (e.g. add `00` to the templated subnet) from the VPC prefixes. (See the CLI sketch after this list.)
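+
+Roughly equivalent AWS CLI commands, as a sketch (all IDs and CIDRs are placeholders):
+
+```
+aws ec2 associate-vpc-cidr-block --vpc-id vpc-0123 --amazon-provided-ipv6-cidr-block
+aws ec2 create-route --route-table-id rtb-0123 --destination-ipv6-cidr-block ::/0 --gateway-id igw-0123
+aws ec2 create-subnet --vpc-id vpc-0123 --cidr-block 10.0.0.0/24 --ipv6-cidr-block 2001:db8::/64
+```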
+
+## EC2
+
+### General
+
+### Networking
+
+- **Warning:** The primary network interface of a VM can't be changed after creation. Likewise, the "subnet" of an existing network interface can't be changed. Make sure you assign the VM to the correct subnet (or network interface) during creation. (Required e.g. if you want IPv6 support.)
+- For IPv6 support, see the warning above.
+
+{% include footer.md %}

+ 3 - 3
config/cloud/azure.md

@@ -6,18 +6,18 @@ breadcrumbs:
 ---
 {% include header.md %}
 
-## Virtual Machines Setup
+## Virtual Machines
 
 ### Networking
 
-- For IPv6 support, you apparently need to create a new VM.
+- For IPv6 support, you apparently need to create a new VM (_what?_).
 - You're forced to use NAT (with an internal network connected to the VM) both for IPv4 and IPv6 (which is just disgusting).
 - Some guides may tell you that you need to create a load balancer in order to add IPv6 to VMs, but that's avoidable.
 - ICMPv6 is completely broken. You can't ping over IPv6, path MTU discovery (PMTUD) is broken, etc. Broken PMTUD can be avoided by simply setting the link MTU from 1500 to 1280 (the minimum for IPv6).
 - The default ifupdown network config (which uses DHCP for v4 and v6) broke IPv6 connectivity for me after a while for some reason. Switching to systemd-networkd with DHCP and disabling Ifupdown (comment out everything in `/etc/network/interfaces` and mask `ifup@eth0.service`) solved this for me.
 - If you configure non-Azure DNS servers in the VM config, it will seemingly only add one of the configured servers to `/etc/resolv.conf`. **TODO** It stops overriding `/etc/resolv.conf` if using Azure DNS servers?
 - Adding IPv6 to VM:
-    1. Note: This was written afterwards, I may be forgetting some steps.
+    1. (Note) This was written afterwards, I may be forgetting some steps.
     1. Create an IPv4 address and an IPv6 address.
     1. In the virtual network for the VM, add a ULA IPv6 address space (e.g. an `fdXX:XXXX:XXXX::/48`). Then modify the existing subnet (e.g. `default`), tick the "Add IPv6 address space" box and add a /64 subnet from the address space you just added.
     1. In the network interface for the VM, configure the primary config to use the private IPv4 subnet and the public IPv4 address. Add a new secondary config for the IPv6 (private) ULA subnet and the (public) GUA.

+ 9 - 4
config/general/linux-general.md

@@ -70,6 +70,11 @@ breadcrumbs:
     - Test with various record sizes and file sizes: `iozone -a`
     - Benchmark: `iozone -t1` (1 thread)
     - Plot results: **TODO** It should be doable with gnuplot somehow.
+- Shred disk with `shred`:
+    - Example: `shred -n2 -v <file>`
+    - `-n<n>` specifies the number of passes. Even a single pass takes ages for large disks, so keep it as low as appropriate. 1 pass is generally enough, 2 to be sure.
+    - `--zero` adds an extra, final pass to write all zeroes.
+    - `-v` shows progress.
 
 ### Files
 
@@ -83,7 +88,7 @@ breadcrumbs:
     - `du -sh <dirs>`
     - K4DirStat (GUI) (package `k4dirstat`)
 - Shred files:
-    - `shred --remove --zero <file>`
+    - `shred --remove --zero -v <file>`
 
 ### Fun
 
@@ -188,7 +193,7 @@ breadcrumbs:
     - Internal: `iperf3`
 - Show sockets (with `ss`):
     - Example: `ss -tulpn`
-    - Note: `ss` replaces `netstat` and is mostly option compatible.
+    - (Note) `ss` replaces `netstat` and is mostly option compatible.
     - Option `tu`: Include TCP and UDP sockets (no UNIX sockets).
     - Option `l`: Include listening sockets (no client sockets).
     - Option `p`: Show protocol (requires root).
@@ -197,7 +202,7 @@ breadcrumbs:
     - Show kernel SNMP counters: `nstat`
     - Show per-protocol stats: `netstat -s`
 - Bring interface up or down:
-    - Note: Your network manager probably has a more appropriate way to do this.
+    - (Note) Your network manager probably has a more appropriate way to do this.
     - Directly up or down interface: `ip link set dev <if> {up|down}`
 - Traffic shaping and link simulation:
     - See `tc` to simulate e.g. random packet drop, random latencies, limited bandwidth etc.
@@ -344,7 +349,7 @@ Using GPG (from package `gnupg2` on Debian).
     - Install (Debian): `apt install stress-ng`
     - Stress CPU: `stress-ng -c $(nproc) -t $((10*60))` (use all CPU threads for 10 minutes)
 - Chroot into other Linux installation:
-    1. Note: Used to e.g. fix a broken install or reset a user password from a USB live ISO.
+    1. (Note) Used to e.g. fix a broken install or reset a user password from a USB live ISO.
     1. Mount the root partition: `mount /dev/sda2 /mnt` (example)
     1. Mount e.g. the EFI partition: `mount /dev/sda1 /mnt/boot/efi` (example)
     1. Mount system stuff:

+ 12 - 173
config/linux-server/applications.md

@@ -6,7 +6,9 @@ breadcrumbs:
 ---
 {% include header.md %}
 
-Note: If not stated then it's for Debian. Some may be for CentOS (5?) and extremely outdated.
+If not stated otherwise, the instructions are for Debian.
+Some may be for CentOS (5?) and extremely outdated.
+Some applications may be located in different sections/pages on this wiki.
 
 ## Apache
 
@@ -211,24 +213,6 @@ This setup requires pubkey plus MFA (if configured) plus password.
 
 See [Storage: isdct](/config/linux-server/storage/#intel-ssd-data-center-tool-isdct).
 
-## Grafana
-
-Typically used with a data source like [Prometheus](#prometheus).
-
-### Setup (Docker)
-
-1. See [(Grafana) Run Grafana Docker image](https://grafana.com/docs/grafana/latest/installation/docker/).
-1. Mount:
-    - Config: `./grafana.ini:/etc/grafana/grafana.ini:ro`
-    - Data: `./data:/var/lib/grafana/:rw` (requires UID 472)
-    - Logs: `./logs:/var/log/grafana/:rw` (requires UID 472)
-1. Configure `grafana.ini`.
-1. Open the webpage to configure it.
-
-### Notes
-
-- Be careful with public dashboards. "Viewers" can modify any query and thus query the entire data source for the dashboard, unless you have configured some type of access control for the data source (which you probably haven't).
-
 ## Home Assistant
 
 See [Home Assistant](/config/iot-ha/home-assistant/).
@@ -302,6 +286,12 @@ A MySQL fork that is generally MySQL compatible.
 
 The instructions below use NFSv4 *without* Kerberos. This should only be used on trusted networks and requires manual user and group ID management.
 
+### TODO
+
+#### Subtree Checking
+
+> As a general guide, a home directory filesystem, which is normally exported at the root and may see lots of file renames, should be exported with subtree checking disabled. A filesystem which is mostly readonly, and at least doesn't see many file renames (e.g. /usr or /var) and for which subdirectories may be exported, should probably be exported with subtree checks enabled.
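+
+A minimal `/etc/exports` sketch reflecting the quote above (paths and network are examples):
+
+```
+/home      192.168.1.0/24(rw,no_subtree_check)
+/usr/local 192.168.1.0/24(ro,subtree_check)
+```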
+
 ### Server (without Kerberos)
 
 #### Setup
@@ -661,157 +651,6 @@ Use this mess to change the ugly `From: root@node.example.net` and `To: root@nod
 - Print the config: `postconf -n`
 - If `mailq` tells you mails are stuck in the mail queue because of previous errors, run `postqueue -f` to flush them.
 
-## Prometheus
-
-Typically used with [Grafana](#grafana) and sometimes with Cortex/Thanos in-between.
-
-### Setup (Docker)
-
-1. See [(Prometheus) Installation](https://prometheus.io/docs/prometheus/latest/installation/).
-1. Set the retention period and size:
-    - (Docker) Find and re-specify all default arguments. Check with `docker inspect` or the source code.
-    - Add the command-line argument `--storage.tsdb.retention.time=15d` and/or `--storage.tsdb.retention.size=100GB` (with example values).
-    - Note that the old `storage.local.*` and `storage.remote.*` flags no longer work.
-1. Mount:
-    - Config: `./prometheus.yml:/etc/prometheus/prometheus.yml:ro`
-    - Data: `./data/:/prometheus/:rw`
-1. Configure `prometheus.yml`.
-    - I.e. set global variables (like `scrape_interval`, `scrape_timeout` and `evaluation_interval`) and scrape configs.
-1. (Optional) Setup remote storage to replicate all scraped data to a remote backend.
-1. (Optional) Setup Cortex or Thanos for global view, HA and/or long-term storage.
-
-### Notes
-
-- The open port (9090 by default) contains both the dashboard and the query API.
-- You can check the status of scrape jobs in the dashboard.
-- Prometheus does not store data forever, it's meant for short- to mid-term storage.
-- Prometheus should be "physically" close to the apps it's monitoring. For large infrastructures, you should use multiple instances, not one huge global instance.
-- If you need a "global view" (when using multiple instances), long-term storage and (in some way) HA, consider using Cortex or Thanos.
-- Since Prometheus receives an almost continuous stream of telemetry, any restart or crash will cause a gap in the stored data. Therefore you should generally always use some type of HA in production setups.
-- Cardinality is the number of time series. Each unique combination of metrics and key-value label pairs (yes, including the label value) amounts to a new time series. Very high cardinality (i.e. over 100 000 series, number taken from a Splunk presentation from 2019) amounts to significantly reduced performance and increased memory and resource usage, which is also shared by HA peers (fate sharing). Therefore, avoid using valueless labels, add labels only to metrics they belong with, try to limit the numer of unique values of a label and consider splitting metrics to use less labels. Some useful queries to monitor cardinality: `sum(scrape_series_added) by (job)`, `sum(scrape_samples_scraped) by (job)`, `prometheus_tsdb_symbol_table_size_bytes`, `rate(prometheus_tsdb_head_series_created_total[5m])`, `sum(sum_over_time(scrape_series_added[5m])) by (job)`. You can also find some useful stats in the dashboard.
-
-### About Cortex and Thanos
-
-- Two similar projects, which both provide global view, HA and long-term storage.
-- Cortex is push-based using Prometheus remote writing, while Thanos is pull-based using Thanos sidecars for all Prometheus instances.
-- Global view: Cortex stores all data internally, while Thanos queries the Prometheus instances.
-- Prometheus HA: Cortex stores one instance of the received data (at write time), while Thanos queries Prometheus instances which have data (at query time). Both approaches removes gaps in the data.
-- Long-term storage: Cortex periodically flushes the NoSQL index and chunks to an external object store, while Thanos uploads TSDB blocks to an object store.
-
-## Prometheus Exporters
-
-### General
-
-- Exporters often expose the metrics endpoint over plain HTTP without any scraper or exporter authentication. Prometheus supports exporters using HTTPS for scraping (for integrity, confidentiality and authenticating the Prometheus), as well as using client authentication (from Prometheus, for authenticating Prometheus), providing mutual authentication if both are used. This may require setting up a reverse web proxy in front of the exporter. Therefore, the simplest alternative (where appropriate) is often to just secure the network itself using segmentation and segregation.
-
-### List of Exporters and Software
-
-This list contains exporters and software with built-in exposed metrics I typically use. Some are described in more detail in separate subsections.
-
-#### Software with exposed metrics
-
-- Prometheus (exports metrics about itself)
-- [Grafana](https://grafana.com/docs/grafana/latest/administration/metrics/)
-- [Docker Daemon](https://docs.docker.com/config/daemon/prometheus/)
-- [Traefik](https://github.com/containous/traefik)
-- [AWX](https://docs.ansible.com/ansible-tower/latest/html/administration/metrics.html)
-
-#### Exporters
-
-- [Node exporter (Prometheus)](https://github.com/prometheus/node_exporter)
-- [Windows exporter (Prometheus Community)](https://github.com/prometheus-community/windows_exporter)
-- [SNMP exporter (Prometheus)](https://github.com/prometheus/snmp_exporter)
-- [IPMI exporter (Soundcloud)](https://github.com/soundcloud/ipmi_exporter)
-- [NVIDIA DCGM exporter (NVIDIA)](https://github.com/NVIDIA/gpu-monitoring-tools/)
-- [NVIDIA GPU exporter (mindprince)](https://github.com/mindprince/nvidia_gpu_prometheus_exporter)
-- [cAdvisor (Google)](https://github.com/google/cadvisor)
-- [UniFi exporter (jessestuart)](https://github.com/jessestuart/unifi_exporter)
-- [BIND exporter (Prometheus Community)](https://github.com/prometheus-community/bind_exporter)
-- [Blackbox exporter (Prometheus)](https://github.com/prometheus/blackbox_exporter)
-- [Prometheus Proxmox VE exporter (prometheus-pve)](https://github.com/prometheus-pve/prometheus-pve-exporter)
-- [NUT Exporter (HON95)](https://github.com/HON95/prometheus-nut-exporter)
-- [ESP8266 DHT Exporter (HON95)](https://github.com/HON95/prometheus-esp8266-dht-exporter)
-
-#### Special
-
-- [Pushgateway (Prometheus)](https://github.com/prometheus/pushgateway)
-
-### Prometheus Node Exporter
-
-Can be set up either using Docker ([prom/node-exporter](https://hub.docker.com/r/prom/node-exporter/)), using the package manager (`prometheus-node-exporter` on Debian), or by building it from source. The Docker method provides a small level of protection as it's given only read-only system access. The package version is almost always out of date and is typically not optimal to use. If Docker isn't available and you want the latest version, build it from source.
-
-#### Setup (Downloaded Binary)
-
-See [Building and running](https://github.com/prometheus/node_exporter#building-and-running (node_exporter)).
-
-Details:
-
-- User: `prometheus`
-- Binary file: `/usr/bin/prometheus-node-exporter`
-- Service file: `/etc/systemd/system/prometheus-node-exporter.service`
-- Configuration file: `/etc/default/prometheus-node-exporter`
-- Textfile directory: `/var/lib/prometheus/node-exporter/`
-
-Instructions:
-
-1. Install requirements: `apt install moreutils`
-1. Find the link to the latest tarball from [the download page](https://prometheus.io/download/#node_exporter).
-1. Download and unzip it: `wget <url>` and `tar xvf <file>`
-1. Move the binary to the system: `cp node_exporter*/node_exporter /usr/bin/prometheus-node-exporter`
-1. Make sure it's runnable: `node_exporter -h`
-1. Add the user: `useradd -r prometheus`
-    - If you have hidepid setup to hide system process details from normal users, remember to add the user to a group with access to that information. This is only required for some metrics, most of them work fine without this extra access.
-1. Create the required files and directories:
-    - `touch /etc/default/prometheus-node-exporter`
-    - `mkdir -p /var/lib/prometheus/node-exporter/`
-1. Create the systemd service `/etc/systemd/system/prometheus-node-exporter.service`, see [prometheus-node-exporter.service](../files/prometheus-node-exporter.service.txt).
-1. (Optional) Configure it:
-    - The defaults work fine.
-    - File: `/etc/default/prometheus-node-exporter`
-    - Example: `ARGS="--collector.processes --collector.interrupts --collector.systemd"` (enables more detailed process and interrupt collectors)
-1. Enable and start the service: `systemctl enable --now prometheus-node-exporter`
-1. (Optional) Setup textfile exporters.
-
-#### Textfile Collector
-
-##### Setup and Usage
-
-1. Set the collector script output directory using the CLI argument `--collector.textfile.directory=<dir>`.
-    - Example dir: `/var/lib/prometheus/node-exporter/`
-    - If the node exporter was installed as a package, it can be set in the `ARGS` variable in `/etc/default/prometheus-node-exporter`.
-    - If using Docker, the CLI argument specified as part of the command.
-1. Download the collector scripts and make them executable.
-    - Example dir: `/opt/prometheus/node-exporter/textfile-collectors/`
-1. Add cron jobs for the scripts using sponge to wrote to the output dir.
-    - Make sure `sponge` is installed. For Debian, it's found in the `moreutils` package.
-    - Example cron file: `/etc/cron.d/prometheus-node-exporter-textfile-collectors`
-    - Example cron entry: `0 * * * * root /opt/prometheus/node-exporter/textfile-collectors/apt.sh | sponge /var/lib/prometheus/node-exporter/apt.prom`
-
-##### Collector Scripts
-
-Some I typically use.
-
-- [apt.sh (Prometheus Community)](https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/blob/master/apt.sh)
-- [yum.sh (Prometheus Community)](https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/blob/master/yum.sh)
-- [deleted_libraries.sh (Prometheus Community)](https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/blob/master/deleted_libraries.py)
-- [ipmitool (Prometheus Community)](https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/blob/master/ipmitool) (requires ipmitool) (**Warning:** This is slow, don't run it frequently. If you do, it may spawn more and more processes waiting to read the IPMI sensors. Run it manually to get a feeling.)
-- [smartmon.sh (Prometheus Community)](https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/blob/master/smartmon.sh) (requires smartctl)
-- [My own textfile exporters](https://github.com/HON95/prometheus-textfile-exporters)
-
-### Prometheus Blackbox Exporter
-
-#### Monitor Service Availability
-
-Add a HTTP probe job for the services and query for probe success over time.
-
-Example query: `avg_over_time(probe_success{job="node"}[1d]) * 100`
-
-#### Monitor for Expiring Certificates
-
-Add a HTTP probe job for the services and query for `probe_ssl_earliest_cert_expiry - time()`.
-
-Example alert rule: `probe_ssl_earliest_cert_expiry{job="blackbox"} - time() < 86400 * 30` (30 days)
-
 ## Pterodactyl
 
 ### General
@@ -856,7 +695,7 @@ See [Team Fortress 2 (TF2)](/config/game-servers/tf2/).
 1. Install and enable `radvd`.
 1. Setup config file: `/etc/radvd.conf` (minimal sketch below).
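+
+A minimal `/etc/radvd.conf` sketch (interface and prefix are examples):
+
+```
+interface eth0 {
+    AdvSendAdvert on;
+    prefix 2001:db8:1::/64 {
+    };
+};
+```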
 
-## Samba
+## Samba (CIFS)
 
 ### Server
 
@@ -869,7 +708,7 @@ See [Team Fortress 2 (TF2)](/config/game-servers/tf2/).
 
 #### Configuration
 
-- Note: Unless otherwise states, all options should go in the `global` section.
+- (Note) Unless otherwise stated, all options should go in the `global` section.
 - General:
     - Set description (shown some places): `server string`
     - Set authentication method to standalone: `security = user`
@@ -935,7 +774,7 @@ See [Team Fortress 2 (TF2)](/config/game-servers/tf2/).
     1. In `/etc/fstab`, add: `//<share> <mountpoint> cifs vers=3.1.1,uid=<uid>,gid=<gid>,credentials=<file>,iocharset=utf8 0 0`
     1. Test it: `mount -a`
 - Add automounted share:
-    1. Set up the permanent share (see steps above).
+    1. Set up the permanent share (see steps above, skip `mount -a`).
     1. In the `/etc/fstab` entry, add `,noauto,x-systemd.automount,x-systemd.idle-timeout=30`.
     1. Reload systemd automounting: `systemctl daemon-reload && systemctl restart remote-fs.target`
 

+ 9 - 8
config/linux-server/debian.md

@@ -107,8 +107,9 @@ The first steps may be skipped if already configured during installation (i.e. n
     - Disable mouse globally: In `/etc/vim/vimrc.local`, add `set mouse=` and `set ttymouse=`.
     - Fix YAML formatting globally: In `/etc/vim/vimrc.local`, add `autocmd FileType yaml setlocal ts=2 sts=2 sw=2 expandtab`.
 1. Add mount options:
-    - Setup hidepid:
-        - Note: The `adm` group will be granted access.
+    - (Not recommended) Setup hidepid:
+        - (Note) Hidepid breaks certain systemd things. It's not recommended to use it until that gets fixed.
+        - (Note) The `adm` group will be granted access.
         - Add your personal user to the PID monitor group: `usermod -aG adm <user>`
         - Enable hidepid in `/etc/fstab`: `proc /proc proc defaults,hidepid=2,gid=<adm-gid> 0 0` (using the numerical GID of `adm`)
     - (Optional) Disable the tiny swap partition added by the guided installer by commenting it in the fstab.
@@ -140,10 +141,10 @@ The first steps may be skipped if already configured during installation (i.e. n
     - Clear `/etc/motd`, `/etc/issue` and `/etc/issue.net`.
     - (Optional) Add a MOTD script (see below).
 1. (Optional) (Buster) Enable persistent logging:
-    - Note: Persistent logging is the default for Debian 11/Bullseye, but not Debian 10/Buster.
+    - (Note) Persistent logging is the default for Debian 11/Bullseye, but not Debian 10/Buster.
     - In `/etc/systemd/journald.conf`, under `[Journal]`, set `Storage=persistent`.
-    - Note: `auto` (the default) is like `persistent`, but does not automatically create the log directory.
-    - Note: The default journal directory is `/var/log/journal`.
+    - (Note) `auto` (the default) is like `persistent`, but does not automatically create the log directory.
+    - (Note) The default journal directory is `/var/log/journal`.
 
 ### Machine-Specific Configuration
 
@@ -152,7 +153,7 @@ The first steps may be skipped if already configured during installation (i.e. n
 1. Install extra firmware:
     - Enable the `non-free` repo areas.
     - Update microcode: Install `intel-microcode` (for Intel) or `amd64-microcode` (for AMD) and reboot (now or later).
-    - Note: APT package examples: `firmware-atheros -bnx2 -bnx2x -ralink -realtek`
+    - (Note) APT package examples: `firmware-atheros -bnx2 -bnx2x -ralink -realtek`
     - If it asked to install non-free firmware in the initial installation, try to install it now.
     - Install firmware from other sources (e.g. for some Intel NICs).
     - (Optional) To install all common firmware and microcode, install `firmware-linux` (or `firmware-linux-free`) (includes e.g. microcode packages).
@@ -215,7 +216,7 @@ Prevent enabled (and potentially untrusted) interfaces from accepting router adv
     - Don't save the current rules when it asks.
 1. Manually add IPTables rules or make [a simple iptables script](https://github.com/HON95/scripts/blob/master/iptables/iptables.sh) or something.
 1. Open a new SSH session and make sure you can still log in without closing the current one.
-1. Note: If you flush the firewall and reconfigure it, remember to restart services modifying it (like libvirt, Docker, Fail2Ban).
+1. (Note) If you flush the firewall and reconfigure it, remember to restart services modifying it (like libvirt, Docker, Fail2Ban).
 
 #### DNS
 
@@ -275,7 +276,7 @@ Everything here is optional.
     - Check status: `fail2ban-client status [sshd]`
     - See [Linux Server Applications: Fail2Ban](applications.md#fail-2-ban) for more info.
 - Set up a swap file:
-    1. Note: You should have enough memory installed to never need swapping, but it's a nice backup to prevent the system from potentially crashing if anything bugs out and eats up too much memory.
+    1. (Note) You should have enough memory installed to never need swapping, but it's a nice backup to prevent the system from potentially crashing if anything bugs out and eats up too much memory.
     1. Show if swap is already enabled: `swapon --show`
     1. Allocate the swap file: `fallocate -l <size> /swapfile`
         - Alternatively, use dd.

+ 33 - 29
config/linux-server/storage-zfs.md

@@ -44,7 +44,7 @@ The backports repo is used to get the newest version of ZoL.
 
 ### Configuration (Debian)
 
-1. Check that the cron scrub script exists:
+1. (Typically not needed) Check that the cron scrub script exists:
     - Typical location: `/etc/cron.d/zfsutils-linux`
     - If it doesn't exist, add one which runs `/usr/lib/zfs-linux/scrub` e.g. monthly. It'll scrub all disks.
 1. (Typically not needed) Check that ZED is working:
@@ -52,11 +52,12 @@ The backports repo is used to get the newest version of ZoL.
     - Email sending: In `/etc/zfs/zed.d/zed.rc`, make sure `ZED_EMAIL_ADDR="root"` is uncommented.
     - Service: `zfs-zed.service` should be enabled.
 1. (Optional) Set the max ARC size:
-    - Command: `echo "options zfs zfs_arc_max=<bytes>" >> /etc/modprobe.d/zfs.conf`
+    - Command: `echo "options zfs zfs_arc_max=$((<gigabytes>*1024*1024*1024))" >> /etc/modprobe.d/zfs.conf`
     - It should typically be around 15-25% of the physical RAM size on general nodes. It defaults to 50%.
     - This is generally not required, ZFS should happily yield RAM to other processes that need it.
+1. (Optional) Automatically load key (if encrypted) and mount pool/dataset on boot: See encryption section.
 1. (Optional) Fix pool cache causing pool loading problems at boot:
-    1. Note: Do this if `systemctl status zfs-import-cache.service` shows that no pools were found. I had significant problems with this multiple times with Proxmox VE on an older server.
+    1. (Note) Do this if `systemctl status zfs-import-cache.service` shows that no pools were found. I had significant problems with this multiple times with Proxmox VE on an older server.
     1. Make sure the pools are not set to use a cache file: `zpool get cachefile` and `zpool set cachefile=none <pool>`
     1. Copy `/lib/systemd/system/zfs-import-scan.service` to `/etc/systemd/system/`.
     1. In `zfs-mount.service`, comment the `ConditionFileNotEmpty=!/etc/zfs/zpool.cache` line (the file tends to find a way back to existence).
@@ -83,13 +84,18 @@ The backports repo is used to get the newest version of ZoL.
 
 ### Pools
 
-- Recommended pool options:
-    - Typical example: `-o ashift=<9|12> -o autotrim=on -o autoreplace=off -O compression=zstd -O xattr=sa -O atime=off -O relatime=on` (`autotrim` only for SSDs)
+- Create pool:
+    - Format: `zpool create [options] <name> <levels-and-drives>`
+    - Basic example: `zpool create [-f] [options] <name> {[mirror|raidz|raidz2|spare|...] <drives>}+`
+        - Use `-f` (force) if the disks aren't clean.
+        - See the recommended example below for options.
+    - Recommended example: `zpool create -o ashift=<9|12> -o autotrim=on -O compression=zstd -O xattr=sa -O atime=off -O relatime=on <disks>` (`autotrim` only for SSDs)
     - Specifying options during creation: For `zpool`/pools, use `-o` for pool options and `-O` for dataset options. For `zfs`/datasets, use `-o` for dataset options.
     - Set physical block/sector size (pool option): `ashift=<9|12>`
         - Use 9 for 512 (2^9) and 12 for 4096 (2^12). Use 12 if unsure (bigger is safer).
     - Enable TRIM (for SSDs): `autotrim=on`
-        - It's also recommended to create a cron job to run `zpool trim` periodically for the SSD pool.
+        - Auto-trim is the continuous type (not periodic), but to avoid excessive load it skips deleted ranges that are too small.
+        - It's also somewhat recommended to create a cron job to run `zpool trim <pool>` periodically for the SSD pool.
     - Enable autoreplacement for new disks in the same physical slot as old ones (using ZED): `autoreplace=on`
     - Enable compression (dataset option): `compression=zstd`
         - Use `lz4` for boot drives (`zstd` booting isn't currently supported) or if `zstd` isn't yet available in the version you're using.
@@ -97,25 +103,19 @@ The backports repo is used to get the newest version of ZoL.
         - The default is `on`, which stores them in a hidden file.
     - Relax access times (dataset option): `atime=off` and `relatime=on`
     - Don't enable dedup.
-- Create pool:
-    - Format: `zpool create [options] <name> <levels-and-drives>`
-    - Basic example: `zpool create [-f] [options] <name> {[mirror|raidz|raidz2|spare|...] <drives>}+`
-        - Use `-f` (force) if the disks aren't clean.
-        - See example above for recommended options.
-    - The pool definition is two-level hierarchical, where top-level elements are striped.
+    - Use absolute drive paths (`/dev/disk/by-id/` or similar), not `/dev/sdX`.
+    - The pool definition is two-level hierarchical, where top-level elements are striped. Examples:
         - RAID 0 (striped): `<drives>`
         - RAID 1 (mirrored): `mirror <drives>`
         - RAID 10 (stripe of mirrors): `mirror <drives> mirror <drives>`
-        - Etc.
+        - RAID 50 (stripe of RAID 5 groups): `raidz <drives> raidz <drives>`
     - Create encrypted pool: See encryption section.
     - Add special device: See special device section.
     - Add hot spare (if after creation): `zpool add <pool> spare <disks>`
         - Note that hot spares are currently a bit broken and don't automatically replace pool disks. Make sure to test the setup before relying on it.
-    - Use absolute drive paths (`/dev/disk/by-id/` or similar), not `/dev/sdX`.
 - View pool activity: `zpool iostat [-v] [interval]`
     - Includes metadata operations.
     - If no interval is specified, the operations and bandwidths are averaged from the system boot. If an interval is specified, the very first interval will still show this.
-- Automatically load key (if encrypted) and mount on boot: See dataset section.
 
 #### L2ARC
 
@@ -157,7 +157,8 @@ The backports repo is used to get the newest version of ZoL.
 - Recommended dataset options:
     - Set quota: `quota=<size>`
     - Set reservation: `reservation=<size>`
-    - Disable data caching (in the ARC) if the upper layer already uses caching (databases, VMs, etc.): `primarycache=metadata`
+    - Disable data caching (in the ARC), if the upper layer already uses caching (databases, VMs, etc.): `primarycache=metadata`
+    - Unset the mountpoint, e.g. if it will only be a parent of volumes: `mountpoint=none`
     - (See the recommended pool options since most are inherited.)
 - Create dataset:
     - Format: `zfs create [options] <pool>/<name>`
@@ -167,17 +168,11 @@ The backports repo is used to get the newest version of ZoL.
     - Properties may have the following sources, as seen in the "source" column: Local, default, inherited, temporary, received and none.
     - Get: `zfs get {all|<property>} [-r] [dataset]` (`-r` for recursive)
     - Set: `zfs set <property>=<value> <dataset>`
-    - Inherit: `zfs inherit [-r] [dataset]` (`-r` for recursive)
+    - Unset: `zfs set <property>=none <dataset>` (keeps source "local")
+    - Inherit: `zfs inherit [-r] [-S] <property> <dataset>` (`-r` for recursive, `-S` to use the received value if one exists)
         - See the encryption section for inheritance of certain encryption properties.
-    - Reset to default/inherit: `zfs inherit -S [-r] <property> <dataset>` (`-r` for recursive, `-S` to use the received value if one exists)
 - Other useful dataset properties:
     - `canmount={on|off|noauto}`: If the dataset will be mounted by `zfs mount` or `zfs mount -a`. Set to no if it shouldn't be mounted automatically e.g. during boot.
-- Automatically load key (if encrypted) and mount on boot:
-    - Note: This will load all keys and mount everything (unless `canmount=off`) within the pool by generating mounting and key-load services at boot. Key-load services for encrypted roots will ge generated regardless of `canmount`, use `org.openzfs.systemd:ignore=on` to avoid creating any services for the dataset.
-    - Make sure ZED is set up correctly (see config section).
-    - Enable tracking for the pool: `touch /etc/zfs/zfs-list.cache/POOLNAME`
-    - Trigger an update of the stale cache file: `zfs set canmount=on <pool>`
-    - (Optional) Don't automatically decrypt and mount a dataset: Set `org.openzfs.systemd:ignore=on` on it.
 - Don't store anything in the root dataset itself, since it can't be replicated.
 
 ### Snapshots
@@ -227,16 +222,16 @@ The backports repo is used to get the newest version of ZoL.
 - Create a password encrypted pool:
     - Create: `zpool create -O encryption=aes-128-gcm -O keyformat=passphrase ...`
 - Create a raw key encrypted pool:
-    - Generate the key: `dd if=/dev/urandom of=/root/keys/zfs/<tank> bs=32 count=1`
-    - Create: `zpool create <normal-options> -O encryption=aes-128-gcm -O keyformat=raw -O keylocation=file:///root/keys/zfs/<tank> <name> ...`
+    - Generate the key: `dd if=/dev/urandom of=/var/keys/zfs/<tank> bs=32 count=1` (and fix permissions)
+    - Create: `zpool create <normal-options> -O encryption=aes-128-gcm -O keyformat=raw -O keylocation=file:///var/keys/zfs/<tank> <name> ...`
 - Encrypt an existing dataset by sending and receiving:
     1. Rename the old dataset: `zfs rename <dataset> <old-dataset>`
     1. Snapshot the old dataset: `zfs snapshot -r <dataset>@<snapshot-name>`
-    1. Command: `zfs send [-R] <snapshot> | zfs recv -o encryption=aes-128-gcm -o keyformat=raw -o keylocation=file:///root/keys/zfs/<tank> <new-dataset>`
+    1. Command: `zfs send [-R] <snapshot> | zfs recv -o encryption=aes-128-gcm -o keyformat=raw -o keylocation=file:///var/keys/zfs/<tank> <new-dataset>`
     1. Test the new dataset.
     1. Delete the snapshots and the old dataset.
-    1. Note: All child datasets will be encrypted too (if `-r` and `-R` were used).
-    1. Note: The new dataset will become its own encryption root instead of inheriting from any parent dataset/pool.
+    1. (Note) All child datasets will be encrypted too (if `-r` and `-R` were used).
+    1. (Note) The new dataset will become its own encryption root instead of inheriting from any parent dataset/pool.
 - Change encryption property:
     - The key must generally already be loaded.
     - The encryption properties `keyformat`, `keylocation` and `pbkdf2iters` are inherited from the encryptionroot instead, unlike normal properties.
@@ -250,6 +245,12 @@ The backports repo is used to get the newest version of ZoL.
     - Sending encrypted datasets requires using raw (`-w`).
     - Encrypted snapshots sent as raw may be sent incrementally.
     - Make sure to check the encryption root, key format, key location etc. to make sure they're what they should be.
+- Automatically load key (if encrypted) and mount on boot:
+    - (Note) This will load all keys and mount everything (unless `canmount=off`) within the pool by generating mounting and key-load services at boot. Key-load services for encrypted roots will be generated regardless of `canmount`, use `org.openzfs.systemd:ignore=on` to avoid creating any services for the dataset.
+    - Make sure ZED is set up correctly (see config section).
+    - Enable tracking for the pool: `mkdir /etc/zfs/zfs-list.cache && touch /etc/zfs/zfs-list.cache/<pool>`
+    - Trigger an update of the stale cache file: `zfs set canmount=on <pool>`
+    - (Optional) Don't automatically decrypt and mount a dataset: Set `org.openzfs.systemd:ignore=on` on it.
 
 ### Error Handling and Replacement
 
@@ -288,6 +289,9 @@ The backports repo is used to get the newest version of ZoL.
 
 - As far as possible, use raw disks and HBA disk controllers (or RAID controllers in IT mode).
 - Always use `/etc/disk/by-id/X`, not `/dev/sdX`.
+    - Using `/dev/sdX` may degrade/fail the pool if the active disks are swapped or the numbering is shuffled for some reason.
+    - Pool info is stored on the disks themselves, so running a `zpool export <pool> && zpool import <pool>` may fix disks that got degraded due to number shuffling.
+    - If you want auto-replacement wrt. physical slots, use whatever device naming maps to the physical slots.
 - Always manually set the correct ashift for pools.
     - Should be the log-2 of the physical block/sector size of the drive.
     - E.g. 12 for 4kB (Advanced Format (AF), common on HDDs) and 9 for 512B (common on SSDs).

+ 4 - 0
config/linux-server/storage.md

@@ -78,6 +78,10 @@ Using **Debian**, unless otherwise stated.
 
 This is just a suggestion for how to partition your main system drive. Since LVM volumes can be expanded later, it's fine to make them initially small. Create the volumes during system installation and set the mount options later in `/etc/fstab`.
 
+For a much simpler setup, just use a big root partition with a separate EFI partition. This complex setup is mainly targeted for old-fashioned, "monolithic" servers.
+
+Note: Hidepid is no longer recommended, but still kept here for reference.
+
 | Volume/Mount | Type | Minimal Size (GB) | Mount Options |
 | :--- | :--- | :--- | :--- |
 | `/proc` | Runtime | N/A | hidepid=2,gid=1500 |

+ 1 - 1
config/media/vlc.md

@@ -92,7 +92,7 @@ breadcrumbs:
 ### Media Processing
 
 - Convert/transcode or fix badly encoded files (GUI method):
-    1. Note: E.g. when othe programs complain that a GoPro video file is corrupted but VLC still manages to open it somehow.
+    1. (Note) E.g. when other programs complain that a GoPro video file is corrupted but VLC still manages to open it somehow.
     1. Open VLC.
     1. Go to "File", "Convert".
 1. Add the source files.

+ 27 - 0
config/monitoring/grafana.md

@@ -0,0 +1,27 @@
+---
+title: Grafana
+breadcrumbs:
+- title: Configuration
+- title: Monitoring
+---
+{% include header.md %}
+
+For visualizing stuff.
+Supports metrics backends like Prometheus, InfluxDB, MySQL etc.
+
+## Setup (Docker)
+
+1. (Note) See [(Grafana) Run Grafana Docker image](https://grafana.com/docs/grafana/latest/installation/docker/).
+1. Mount (see the run sketch after this list):
+    - Config: `./grafana.ini:/etc/grafana/grafana.ini:ro`
+    - Data: `./data:/var/lib/grafana/:rw` (requires UID 472)
+    - Logs: `./logs:/var/log/grafana/:rw` (requires UID 472)
+1. Configure `grafana.ini`.
+1. Open the webpage to configure it.
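+
+A hedged `docker run` sketch matching the mounts above (image tag and paths are examples):
+
+```
+docker run -d --name grafana -p 3000:3000 --user 472 \
+    -v "$PWD/grafana.ini:/etc/grafana/grafana.ini:ro" \
+    -v "$PWD/data:/var/lib/grafana:rw" \
+    -v "$PWD/logs:/var/log/grafana:rw" \
+    grafana/grafana:latest
+```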
+
+## Miscellanea
+
+- Be careful with public dashboards. "Viewers" can modify any query and thus query the entire data source for the dashboard, unless you have configured some type of access control for the data source (which you probably haven't).
+- If the Grafana metrics endpoint is enabled, make sure your reverse proxy blocks the metrics path `/metrics` to avoid leaking them.
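+
+If the reverse proxy happens to be Nginx (an assumption), a location block like this could do it:
+
+```
+location = /metrics {
+    return 403;
+}
+```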
+
+{% include footer.md %}

+ 22 - 0
config/monitoring/loki.md

@@ -0,0 +1,22 @@
+---
+title: Grafana Loki
+breadcrumbs:
+- title: Configuration
+- title: Monitoring
+---
+{% include header.md %}
+
+For log collection.
+
+## Info
+
+- No ingestion log format requirements.
+- Index-free (somewhat).
+    - Meaning it indexes less data from log lines.
+    - This gives a smaller index file and faster ingestion, but slower querying.
+    - Specifically, the timestamp and a set of labels (key-value pairs) are indexed, but the content is unindexed.
+- Prometheus-inspired query language (LogQL; see the example after this list).
+- Typically uses Promtail for log collection from servers, sometimes with syslog-ng for log format conversion.
+- Good integration with e.g. Kubernetes, Grafana and Prometheus.
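+
+A LogQL sketch (label values and filter are examples), showing the Prometheus-style label selector plus a line filter:
+
+```
+{job="syslog", host="node1"} |= "error"
+```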
+
+{% include footer.md %}

+ 175 - 0
config/monitoring/prometheus.md

@@ -0,0 +1,175 @@
+---
+title: Prometheus
+breadcrumbs:
+- title: Configuration
+- title: Monitoring
+---
+{% include header.md %}
+
+For metrics collection.
+
+## Info
+
+**TODO:** Info about the pull model, OpenMetrics format, etc.
+
+## Setup (Docker)
+
+Includes instructions for both the normal mode (aka server mode) and agent mode (no local storage).
+
+1. (Note) See [(Prometheus) Installation](https://prometheus.io/docs/prometheus/latest/installation/).
+1. (Server mode) Set CLI args:
+    - Set retention time: `--storage.tsdb.retention.time=15d` (for 15 days)
+    - Alternatively, set retention size: `--storage.tsdb.retention.size=100GB` (for 100GB)
+    - (Note) The old `storage.local.*` and `storage.remote.*` flags no longer work.
+1. (Agent mode) Set CLI args:
+    - Enable: `--enable-feature=agent`
+    - (Note) You can mount the data path, but it's of limited use given how short-lived the data is.
+1. Configure mounts:
+    - Config: `./prometheus.yml:/etc/prometheus/prometheus.yml:ro`
+    - Data (server mode): `./data/:/prometheus/:rw`
+1. Configure `prometheus.yml`.
+    - I.e. set global variables (like `scrape_interval`, `scrape_timeout` and `evaluation_interval`) and scrape configs. See the sketch after this list.
+1. (Optional) Setup Cortex or Thanos for global view, HA and/or long-term storage. **TODO:** See Grafana Mimir too.
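+
+A minimal server-mode `prometheus.yml` sketch (the job name and target address are examples):
+
+```
+global:
+  scrape_interval: 15s
+  scrape_timeout: 10s
+  evaluation_interval: 15s
+
+scrape_configs:
+  - job_name: node
+    static_configs:
+      - targets: ['10.0.0.10:9100']
+```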
+
+## Notes
+
+- Prometheus hierarchies and stuff:
+    - High-availability: Simply run multiple instances in parallel, scraping the same targets.
+    - Federation: Allows one instance to scrape specific metrics from another instance. May be used to forward metrics from a local to a global instance, but you may want to use remote write for that instead now. Also useful for setting up instances with a more limited view, e.g. for metrics accessible from some public Grafana dashboard.
+    - Remote write: Used to forward metrics to an upstream instance. Prometheus Agent uses this to forward all metrics instead of storing them locally. Typically the remote instance is a Cortex or Mimir instance. (See the config sketch after this list.)
+    - Remote read: Used to query another Prometheus instance. Generally as a reversed alternative to the remote write approach.
+- The agent mode disables local metrics storage, for cases where you just want to forward the metrics upstream to e.g. a centralized Grafana Mimir instance. It uses the remote write feature of Prometheus. The normal TSDB is replaced by a simpler Agent TSDB WAL, which stores data temporarily until it's successfully written upstream. In practice, this turns Prometheus into a pull-to-push-based proxy. This also preserves separation of concerns since the upstream/central instance doesn't need to know what and where to scrape (as well as the mess of firewall and ACLs rules the alternative would entail). The agent mode (aka Prometheus Agent) is based on the older Grafana Agent.
+- Prometheus currently ingests metrics using the Prometheus exposition format v0.0.4, which later gave rise to the OpenMetrics format.
+- The open port (9090 by default) contains both the dashboard and the query API. It has no authentication mechanism, so you don't want this exposed publicly.
+- You can check the status of scrape jobs in the dashboard.
+- Prometheus does not store data forever, it's meant for short- to mid-term storage.
+- Prometheus should be "physically" close to the apps it's monitoring. For large infrastructures, you should use multiple instances, not one huge global instance.
+- If you need a "global view" (when using multiple instances), long-term storage and (in some way) HA, consider using Cortex or Thanos.
+- Since Prometheus receives an almost continuous stream of telemetry, any restart or crash will cause a gap in the stored data. Therefore you should generally always use some type of HA in production setups.
+- Cardinality is the number of time series. Each unique combination of metrics and key-value label pairs (yes, including the label value) amounts to a new time series. Very high cardinality (i.e. over 100 000 series, number taken from a Splunk presentation from 2019) amounts to significantly reduced performance and increased memory and resource usage, which is also shared by HA peers (fate sharing). Therefore, avoid using valueless labels, add labels only to metrics they belong with, try to limit the number of unique values of a label and consider splitting metrics to use fewer labels. Some useful queries to monitor cardinality: `sum(scrape_series_added) by (job)`, `sum(scrape_samples_scraped) by (job)`, `prometheus_tsdb_symbol_table_size_bytes`, `rate(prometheus_tsdb_head_series_created_total[5m])`, `sum(sum_over_time(scrape_series_added[5m])) by (job)`. You can also find some useful stats in the dashboard.
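+
+For agent mode (and remote write in general), the upstream target goes in `prometheus.yml`, e.g. (the URL is a placeholder):
+
+```
+remote_write:
+  - url: https://mimir.example.net/api/v1/push
+```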
+
+## Cortex and Thanos
+
+**TODO:** This is outdated, see Grafana Mimir instead (based on Cortex).
+
+- Two similar projects, which both provide global view, HA and long-term storage.
+- Cortex is push-based using Prometheus remote writing, while Thanos is pull-based using Thanos sidecars for all Prometheus instances.
+- Global view: Cortex stores all data internally, while Thanos queries the Prometheus instances.
+- Prometheus HA: Cortex stores one instance of the received data (at write time), while Thanos queries Prometheus instances which have data (at query time). Both approaches remove gaps in the data.
+- Long-term storage: Cortex periodically flushes the NoSQL index and chunks to an external object store, while Thanos uploads TSDB blocks to an object store.
+
+## Prometheus Exporters
+
+### General
+
+- Exporters often expose the metrics endpoint over plain HTTP without any scraper or exporter authentication. Prometheus supports exporters using HTTPS for scraping (for integrity, confidentiality and authenticating the Prometheus), as well as using client authentication (from Prometheus, for authenticating Prometheus), providing mutual authentication if both are used. This may require setting up a reverse web proxy in front of the exporter. Therefore, the simplest alternative (where appropriate) is often to just secure the network itself using segmentation and segregation.
+
+### List of Exporters and Software
+
+This list contains exporters and software with built-in exposed metrics I typically use. Some are described in more detail in separate subsections.
+
+#### Software with exposed metrics
+
+- Prometheus (exports metrics about itself)
+- [Grafana](https://grafana.com/docs/grafana/latest/administration/metrics/)
+- [Docker Daemon](https://docs.docker.com/config/daemon/prometheus/)
+- [Traefik](https://github.com/containous/traefik)
+- [AWX](https://docs.ansible.com/ansible-tower/latest/html/administration/metrics.html)
+
+#### Exporters
+
+- [Node exporter (Prometheus)](https://github.com/prometheus/node_exporter)
+- [Windows exporter (Prometheus Community)](https://github.com/prometheus-community/windows_exporter)
+- [SNMP exporter (Prometheus)](https://github.com/prometheus/snmp_exporter)
+- [IPMI exporter (Soundcloud)](https://github.com/soundcloud/ipmi_exporter)
+- [NVIDIA DCGM exporter (NVIDIA)](https://github.com/NVIDIA/gpu-monitoring-tools/)
+- [NVIDIA GPU exporter (mindprince)](https://github.com/mindprince/nvidia_gpu_prometheus_exporter)
+- [cAdvisor (Google)](https://github.com/google/cadvisor)
+- [UniFi exporter (jessestuart)](https://github.com/jessestuart/unifi_exporter)
+- [BIND exporter (Prometheus Community)](https://github.com/prometheus-community/bind_exporter)
+- [Blackbox exporter (Prometheus)](https://github.com/prometheus/blackbox_exporter)
+- [Prometheus Proxmox VE exporter (prometheus-pve)](https://github.com/prometheus-pve/prometheus-pve-exporter)
+- [NUT Exporter (HON95)](https://github.com/HON95/prometheus-nut-exporter)
+- [ESP8266 DHT Exporter (HON95)](https://github.com/HON95/prometheus-esp8266-dht-exporter)
+
+#### Special
+
+- [Pushgateway (Prometheus)](https://github.com/prometheus/pushgateway)
+
+## Prometheus Node Exporter
+
+Can be set up either using Docker ([prom/node-exporter](https://hub.docker.com/r/prom/node-exporter/)), using the package manager (`prometheus-node-exporter` on Debian), or by building it from source. The Docker method provides a small level of protection as it's given only read-only system access. The package version is almost always out of date and is typically not optimal to use. If Docker isn't available and you want the latest version, build it from source.
+
+### Setup (Downloaded Binary)
+
+See [Building and running (node_exporter)](https://github.com/prometheus/node_exporter#building-and-running).
+
+Details:
+
+- User: `prometheus`
+- Binary file: `/usr/bin/prometheus-node-exporter`
+- Service file: `/etc/systemd/system/prometheus-node-exporter.service`
+- Configuration file: `/etc/default/prometheus-node-exporter`
+- Textfile directory: `/var/lib/prometheus/node-exporter/`
+
+Instructions:
+
+1. Install requirements: `apt install moreutils`
+1. Find the link to the latest tarball from [the download page](https://prometheus.io/download/#node_exporter).
+1. Download and unzip it: `wget <url>` and `tar xvf <file>`
+1. Move the binary to the system: `cp node_exporter*/node_exporter /usr/bin/prometheus-node-exporter`
+1. Make sure it's runnable: `node_exporter -h`
+1. Add the user: `useradd -r prometheus`
+    - If you have hidepid setup to hide system process details from normal users, remember to add the user to a group with access to that information. This is only required for some metrics, most of them work fine without this extra access.
+1. Create the required files and directories:
+    - `touch /etc/default/prometheus-node-exporter`
+    - `mkdir -p /var/lib/prometheus/node-exporter/`
+1. Create the systemd service `/etc/systemd/system/prometheus-node-exporter.service`, see [prometheus-node-exporter.service](../files/prometheus-node-exporter.service.txt).
+1. (Optional) Configure it:
+    - The defaults work fine.
+    - File: `/etc/default/prometheus-node-exporter`
+    - Example: `ARGS="--collector.processes --collector.interrupts --collector.systemd"` (enables more detailed process and interrupt collectors)
+1. Enable and start the service: `systemctl enable --now prometheus-node-exporter`
+1. (Optional) Setup textfile exporters.
+
+### Textfile Collector
+
+#### Setup and Usage
+
+1. Set the collector script output directory using the CLI argument `--collector.textfile.directory=<dir>`.
+    - Example dir: `/var/lib/prometheus/node-exporter/`
+    - If the node exporter was installed as a package, it can be set in the `ARGS` variable in `/etc/default/prometheus-node-exporter`.
+    - If using Docker, the CLI argument is specified as part of the command.
+1. Download the collector scripts and make them executable.
+    - Example dir: `/opt/prometheus/node-exporter/textfile-collectors/`
+1. Add cron jobs for the scripts, using sponge to write to the output dir (sample output sketched after this list).
+    - Make sure `sponge` is installed. For Debian, it's found in the `moreutils` package.
+    - Example cron file: `/etc/cron.d/prometheus-node-exporter-textfile-collectors`
+    - Example cron entry: `0 * * * * root /opt/prometheus/node-exporter/textfile-collectors/apt.sh | sponge /var/lib/prometheus/node-exporter/apt.prom`
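+
+The scripts write plain files in the Prometheus exposition format, roughly like this (hypothetical metric):
+
+```
+# HELP node_example_value An example gauge written by a textfile collector.
+# TYPE node_example_value gauge
+node_example_value 42
+```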
+
+#### Collector Scripts
+
+Some I typically use.
+
+- [apt.sh (Prometheus Community)](https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/blob/master/apt.sh)
+- [yum.sh (Prometheus Community)](https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/blob/master/yum.sh)
+- [deleted_libraries.sh (Prometheus Community)](https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/blob/master/deleted_libraries.py)
+- [ipmitool (Prometheus Community)](https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/blob/master/ipmitool) (requires ipmitool) (**Warning:** This is slow, don't run it frequently. If you do, it may spawn more and more processes waiting to read the IPMI sensors. Run it manually to get a feeling.)
+- [smartmon.sh (Prometheus Community)](https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/blob/master/smartmon.sh) (requires smartctl)
+- [My own textfile exporters](https://github.com/HON95/prometheus-textfile-exporters)
+
+## Prometheus Blackbox Exporter
+
+### Monitor Service Availability
+
+Add an HTTP probe job for the services and query for probe success over time (see the scrape config sketch below).
+
+Example query: `avg_over_time(probe_success{job="node"}[1d]) * 100`
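+
+A hedged scrape config sketch for such a probe job (module, target and exporter address are examples):
+
+```
+scrape_configs:
+  - job_name: blackbox
+    metrics_path: /probe
+    params:
+      module: [http_2xx]
+    static_configs:
+      - targets: ['https://example.com']
+    relabel_configs:
+      - source_labels: [__address__]
+        target_label: __param_target
+      - source_labels: [__param_target]
+        target_label: instance
+      - target_label: __address__
+        replacement: 'blackbox-exporter:9115'
+```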
+
+### Monitor for Expiring Certificates
+
+Add an HTTP probe job for the services and query for `probe_ssl_earliest_cert_expiry - time()`.
+
+Example alert rule: `probe_ssl_earliest_cert_expiry{job="blackbox"} - time() < 86400 * 30` (30 days)
+
+{% include footer.md %}

+ 3 - 3
config/network/cisco-ios-routers.md

@@ -74,16 +74,16 @@ An example of a full configuration.
 1. Configure DNS: `ip name-server <addr1> <addr2> [...]`
 1. Enable IPv6 forwarding: `ipv6 unicast-routing`
 1. Enable Cisco Express Forwarding (CEF):
-    1. Note: This may be enabled by default and the commands below to enable it may not work.
+    1. (Note) This may be enabled by default and the commands below to enable it may not work.
     1. Enable for IPv4: `ip cef`
     1. Enable for IPv6: `ipv6 cef`
     1. Show status: `sh cef state` (should show "enabled/running" for both IPv4 and IPv6)
 1. (Optional) Add black hole route for the site prefixes:
-    1. Note: To avoid leakage of local traffic without a route.
+    1. (Note) To avoid leakage of local traffic without a route.
     1. IPv4 prefix: `ip route <address> <mask> Null 0`
     1. IPv6 prefix: `ipv6 route <prefix> Null 0`
 1. (Optional) Configure management interface:
-    1. Note: The management interface is out-of-band by being contained in the special management interface VRF "Mgmt-intf".
+    1. (Note) The management interface is out-of-band by being contained in the special management interface VRF "Mgmt-intf".
     1. Enter the mgmt interface config: `interface GigabitEthernet 0` (example)
     1. Set an IPv4 and IPv6 address: See "configure interface".
     1. Set a default IPv4 route: `ip route vrf Mgmt-intf 0.0.0.0 0.0.0.0 <gateway>`

+ 5 - 5
config/network/fs-fsos-switches.md

@@ -58,7 +58,7 @@ Using an FS S3700-24T4F (access) and an FS S5860-20SQ (core).
     1. Enter VTY lines: `line vty 0 35`
     1. Use default authentication (e.g. local): `login authentication default`
 1. (Optional) Disable inactivity timeout:
-    1. Note: For prod systems you should keep this disabled, but it's really annoying when labbing.
+    1. (Note) For prod systems you should keep this disabled, but it's really annoying when labbing.
     1. Enter console line.
     1. Disable timer: `exec-timeout 0`
 1. (Optional) Disable management interface:
@@ -69,7 +69,7 @@ Using an FS S3700-24T4F (access) and an FS S5860-20SQ (core).
     1. Enter physical interface range (e.g. `int range te0/1-20`).
     1. Disable them: `shutdown`
 1. (Meta) Setup basic interface:
-    1. Note: Applies to most interfaces.
+    1. (Note) Applies to most interfaces.
     1. Set description: `description <description>`
     1. Enable or disable: `[no] shutdown`
 1. Setup physical L2 interface:
@@ -86,7 +86,7 @@ Using an FS S3700-24T4F (access) and an FS S5860-20SQ (core).
 1. Setup VLANs:
     1. Define L2 VLAN and enter section: `vlan <VID>`
     1. Set name: `name <name>`
-    1. Note: To setup L3 interfaces for VLANs, enter `interface VLAN <VID>`.
+    1. (Note) To setup L3 interfaces for VLANs, enter `interface VLAN <VID>`.
 1. Add interfaces to VLANs:
     1. Enter the interface(s).
     1. Set the mode: `switchport mode {access|trunk}`
@@ -105,9 +105,9 @@ Using an FS S3700-24T4F (access) and an FS S5860-20SQ (core).
 1. Set default gateway (and other static routes):
     1. Set default gateway (IPv4): `ip route 0.0.0.0 0.0.0.0 <next-hop>`
     1. Set default gateway (IPv6): `ipv6 route ::/0 <next-hop>`
-    1. Note: To avoid leakage, you may want to setup a blackhole route for the site prefixes on the topmost routers.
+    1. (Note) To avoid leakage, you may want to setup a blackhole route for the site prefixes on the topmost routers.
 1. Enable router advertisements (RAs) for IPv6 L3 interfaces:
-    1. Note: This is required for IPv6 autoconfiguration. Set the two flags for DHCPv6 or unset them for SLAAC.
+    1. (Note) This is required for IPv6 autoconfiguration. Set the two flags for DHCPv6 or unset them for SLAAC.
     1. Enter the interface.
     1. (DHCPv6) Set the ND managed flag: `ipv6 nd managed-config-flag`
     1. (DHCPv6) Set the ND other flag: `ipv6 nd other-config-flag`

+ 1 - 1
config/network/juniper-junos-general.md

@@ -76,7 +76,7 @@ breadcrumbs:
     - **TODO** Certain restrictions of committing for exclusive mode.
 - Exit any mode: `exit`
 - Show configuration:
-    - Note: You can only see config elements and changes you have permissions to see. Chekc the `system login` section to check.
+    - (Note) You can only see config elements and changes you have permissions to see. Check the `system login` section to verify.
     - From (op mode): `show configuration [statement]`
     - From (conf mode): `show [statement]`
     - Show changes (conf mode): `show | compare`

+ 19 - 18
config/network/juniper-junos-switches.md

@@ -30,9 +30,7 @@ breadcrumbs:
 - Serial config: RS-232 w/ RJ45, baud 115200, 8 data bits, no parity bits, 1 stop bit, no flow control.
 - Native VLAN: 0, aka `default`
 
-## Initial Setup
-
-**TODO** (some general info, some switch config info, move this to some appropriate place):
+## Random Notes (**TODO:** Move Somewhere Appropriate)
 
 - `request system storage cleanup` for cleanup of old files.
 - `system auto-snapshot` (already added here)
@@ -109,7 +107,7 @@ breadcrumbs:
         }
         ```
 
-**TODO** Remaining stuff:
+## Initial Setup
 
 1. Connect to the switch using serial:
     - RS-232 w/ RJ45, baud 9600, 8 data bits, no parity, 1 stop bit, no flow control.
@@ -148,8 +146,8 @@ breadcrumbs:
     1. Set server to use while booting (forces initial time): `set system ntp boot-server <address>`
     1. Set server to use periodically (for tiny, incremental changes): `set system ntp server <address>`
     1. Set time zone: `set system time-zone Europe/Oslo` (example)
-    1. Note: After committing, use `show ntp associations` to verify NTP.
-    1. Note: After committing, use `set date ntp` to force it to update. This may be required if the delta is too large and the NTP client refuses to update.
+    1. (Note) After committing, use `show ntp associations` to verify NTP.
+    1. (Note) After committing, use `set date ntp` to force it to update. This may be required if the delta is too large and the NTP client refuses to update.
 1. Delete default interfaces configs:
     - `wildcard range delete interface ge-0/0/[0-47]` (example, repeat for all FPCs/PICs)
 1. Disable unused interfaces:
@@ -161,14 +159,14 @@ breadcrumbs:
 1. Disable default VLAN:
     1. Delete logical interface (before disabling): `delete int vlan.0`
     1. Disable logical interface: `set int vlan.0 disable`
-1. Create VLANs (not interfaces):
+1. Create VLANs:
     - `set vlans <name> vlan-id <VID>`
-1. Setup port-ranges:
+1. Setup interface-ranges (apply config to multiple configured interfaces):
     - Declare range: `edit interfaces interface-range <name>`
     - Add member ports: `member-range <begin-if> to <end-if>`
     - Configure it as a normal interface, which will be applied to all members.
 1. Setup LACP:
-    1. Note: Make sure you allocate enough LACP interfaces and that the interface numbers are below 512 (empirically discovered on EX3300).
+    1. (Note) Make sure you allocate enough LACP interfaces and that the interface numbers are below 512 (empirically discovered on EX3300).
     1. Set number of available LACP interfaces: `set chassis aggregated-devices ethernet device-count <0-64>` (just set it to some standard large size)
     1. Add individual Ethernet interfaces (not using interface range):
         1. Delete logical units (or the whole interfaces): `wildcard range delete interfaces ge-0/0/[0-1] unit 0` (example)
@@ -180,9 +178,12 @@ breadcrumbs:
     1. Setup VLAN/address/etc.
 1. Setup VLAN interfaces:
     1. Setup trunk ports:
+        1. (Note) `vlan members` supports both numbers and names. Use the `[VLAN1 VLAN2 <...>]` syntax to specify multiple VLANs.
+        1. (Note) Alternatively, instead of listing which VLANs to add, specify `vlan members all` together with `vlan except <excluded-VLANs>`.
+        1. (Note) `vlan members` should not include the native VLAN (if any).
         1. Enter unit 0 and `family ethernet-switching` of the physical/LACP interface.
         1. Set mode: `set port-mode trunk`
-        1. Set non-native VLANs: `set vlan members [<VLAN-name-1> [VLAN-name-2] [...]]` (once per VLAN or repeated syntax)
+        1. Set VLANs: `set vlan members <VLANs>`
         1. (Optional) Set native VLAN: `set native-vlan-id <VID>`
     1. Setup access ports:
         1. Enter unit 0 and `family ethernet-switching` of the physical/LACP interface.
@@ -196,18 +197,18 @@ breadcrumbs:
     1. IPv4 default gateway: `set routing-options rib inet.0 static route 0.0.0.0/0 next-hop <next-hop>`
     1. IPv6 default gateway: `set routing-options rib inet6.0 static route ::/0 next-hop <next-hop>`
 1. Disable/enable Ethernet flow control:
-    - Note: Junos uses the symmetric/bidirectional PAUSE variant of flow control.
-    - Note: This simple PAUSE variant does not take traffic classes (for QoS) into account and will pause _all_ traffic for a short period (no random early detection (RED)) if the receiver detects that it's running out of buffer space, but it will prevent dropping packets _within_ the flow control-enabled section of the L2 network. Enabling it or disabling it boils down to if you prefer to pause (all) traffic or drop (some) traffic during congestion. As a guideline, keep it disabled generally (and use QoS or more sophisticated variants instead), but use it e.g. for dedicated iSCSI networks (which handle delays better than drops). Note that Ethernet and IP don't require guaranteed packet delivery.
-    - Note: It _may_ be enabled by default, so you should probably enable/disable it explicitly (the docs aren't consistent with my observations).
-    - Note: Simple/PAUSE flow control (`flow-control`) is mutually exclusive with priority-based flow control (PFC) and asymmetric flow control (`configured-flow-control`).
+    - (Note) Junos uses the symmetric/bidirectional PAUSE variant of flow control.
+    - (Note) This simple PAUSE variant does not take traffic classes (for QoS) into account and will pause _all_ traffic for a short period (no random early detection (RED)) if the receiver detects that it's running out of buffer space, but it will prevent dropping packets _within_ the flow control-enabled section of the L2 network. Enabling it or disabling it boils down to if you prefer to pause (all) traffic or drop (some) traffic during congestion. As a guideline, keep it disabled generally (and use QoS or more sophisticated variants instead), but use it e.g. for dedicated iSCSI networks (which handle delays better than drops). Note that Ethernet and IP don't require guaranteed packet delivery.
+    - (Note) It _may_ be enabled by default, so you should probably enable/disable it explicitly (the docs aren't consistent with my observations).
+    - (Note) Simple/PAUSE flow control (`flow-control`) is mutually exclusive with priority-based flow control (PFC) and asymmetric flow control (`configured-flow-control`).
     - Disable on Ethernet interface (explicit): `set interface <if> [aggregated-]ether-options no-flow-control`
     - Enable (explicit): `... flow-control`
 1. Enable EEE (Energy-Efficient Ethernet, IEEE 802.3az):
-    - Note: For reducing power consumption during idle periods. Supported on RJ45 copper ports.
-    - Note: There generally is no reason to not enable this on all ports, however, there may be certain devices or protocols which don't play nice with EEE (due to poor implementations).
+    - (Note) For reducing power consumption during idle periods. Supported on RJ45 copper ports.
+    - (Note) There is generally no reason not to enable this on all ports; however, certain devices or protocols may not play nice with EEE (due to poor implementations).
     - Enable on RJ45 Ethernet interface: `set interface <if> ether-options ieee-802-3az-eee`
 1. (Optional) Configure RSTP:
-    - Note: RSTP is the default STP variant for Junos.
+    - (Note) RSTP is the default STP variant for Junos.
     - Enter config section: `edit protocols rstp`
     - (ELS) Set interfaces: `set interfaces all` (or specific)
     - Set priority: `set bridge-priority <priority>` (default 32768, should be a multiple of 4096, use e.g. 32768 for access, 16384 for distro and 8192 for core)
@@ -218,7 +219,7 @@ breadcrumbs:
     - **TODO** Guards, e.g. `bpdu-block-on-edge` or something.
     - **TODO** Enabled on all interfaces and VLANs by default?
 1. Configure SNMP:
-    - Note: SNMP is extremely slow on the Juniper switches I've tested it on.
+    - (Note) SNMP is extremely slow on the Juniper switches I've tested it on.
     - Enable public RO access: `set snmp community public authorization read-only`
 1. Configure sFlow:
     - **TODO**

+ 11 - 11
config/network/tplink-jetstream-switches.md

@@ -74,9 +74,9 @@ breadcrumbs:
     1. Enable server: `ip ssh server`
     1. Disable Telnet: `telnet disable`
 1. Change Switch Database Management (SDM) template:
-    1. Note: Show SDM template info: `show sdm prefer {used|default|...}`
-    1. Note: Show actual usage: `ipv6 source binding`
-    1. Note: `enterpriseV6` is required for enabling IPv6 ND inspection.
+    1. (Note) Show SDM template info: `show sdm prefer {used|default|...}`
+    1. (Note) Show actual usage: `ipv6 source binding`
+    1. (Note) `enterpriseV6` is required for enabling IPv6 ND inspection.
     1. Allocate more resources to IPv6: `sdm prefer enterpriseV6`
     1. **TODO** Check how many entries are actually used. The max count seems low.
 1. Setup physical interfaces (basics):
@@ -124,7 +124,7 @@ breadcrumbs:
 1. Set time and NTP servers:
     1. Set recurring DST: `system-time dst recurring last Sun Mar 2:00 last Sun Oct 3:00` (Norway)
     1. (Optional) Set time and NTP servers: `system-time ntp UTC+01:00 <ip-1> <ip-2> <update-hours>`
-    1. Note: Both NTP servers must be IP addresses and using the same IP version, but they may be the same address.
+    1. (Note) Both NTP servers must be IP addresses and using the same IP version, but they may be the same address.
 1. Enable LLDP:
     1. Enable globally: `lldp`
     1. Enter physical interface configs.
@@ -132,12 +132,12 @@ breadcrumbs:
     1. (Optional) Disable receive: `no lldp receive`
     1. (Optional) Enable LLDP-MED: `lldp med-status`
 1. (Optional) Enable flow control:
-    1. Note: Flow control requires that the connected devices support it in order for it to work. As it pauses all traffic when "triggered", setting up QoS _instead_ of flow control is a much better option if possible.
+    1. (Note) Flow control requires that the connected devices support it in order for it to work. As it pauses all traffic when "triggered", setting up QoS _instead_ of flow control is a much better option if possible.
     1. Enter the interface configs (physical or LAG).
     1. Enable: `flow-control`
     1. Show status: `show int status`
 1. Enable Energy-Efficient Ethernet (EEE):
-    1. Note: EEE is safe to enable on all ports and does not require that the connected devices are compatible in any way.
+    1. (Note) EEE is safe to enable on all ports and does not require that the connected devices are compatible in any way.
     1. Enter the physical interfaces (preferably all ports).
     1. Enable: `eee`
     1. Show status: `show int eee`
@@ -149,13 +149,13 @@ breadcrumbs:
     1. Enable for multicast: `storm-control multicast <threshold>` (e.g. 1%)
     1. Enable for unknown unicast: `storm-control unicast <threshold>` (e.g. 1%)
 1. Enable DHCPv4/DHCPv6/ND snooping:
-    1. Note: Snooping by itself doesn't do anything but is used by other protection mechanisms.
+    1. (Note) Snooping by itself doesn't do anything but is used by other protection mechanisms.
     1. Enable globally (global): `{ip|ipv6} {dhcp|nd} snooping`
     1. Enable for VLAN (global): `{ip|ipv6} {dhcp|nd} snooping vlan <vid-range>`
     1. Set max number of bindings per port (interface): `{ip|ipv6} {dhcp|nd} snooping max-entries <n>` (e.g. 2)
     1. Show bindings: `show {ip|ipv6} source binding`
 1. Enable ARP (IPv4) inspection/detection:
-    1. Note: ARP detection prevents ARP spoofing and flooding.
+    1. (Note) ARP detection prevents ARP spoofing and flooding.
     1. Enable globally: `ip arp inspection`
     1. Enable for VLAN (global): `ip arp inspection vlan <vid-range>`
     1. (Debug) Enable logging (global): `ip arp inspection vlan <vid-range> logging`
@@ -164,16 +164,16 @@ breadcrumbs:
     1. Validate sender/target IP address (global): `ip arp inspection validate ip`
     1. Set trusted interface (interface): `ip arp inspection trust`
     1. **TODO** Rate limiting interfaces.
-    1. Note: To restore an interface that has exceeded the rate limit, run `ip arp inspection recover` on it.
+    1. (Note) To restore an interface that has exceeded the rate limit, run `ip arp inspection recover` on it.
 1. Enable ND (IPv6) detection:
-    1. Note: ND detection will validate the source IPv6 and MAC addresses for ND packets and will discard router adversisements and router redirects on untrusted ports.
+    1. (Note) ND detection will validate the source IPv6 and MAC addresses for ND packets and will discard router advertisements and router redirects on untrusted ports.
     1. Enable globally (global): `ipv6 nd detection`
     1. Enable for VLAN (global): `ipv6 nd detection vlan <vid-range>`
     1. (Debug) Enable logging (global): `ipv6 nd detection vlan <vid-range> logging`
     1. Set trusted interface (interface): `ipv6 nd detection trust`
     1. **TODO** Fix, seems to fail to learn link local addresses from newly connected devices and then drops RSes and NAs from them due to IMPB mismatch.
 1. Enable IPv4/IPv6 source guard:
-    1. Note: IP source guard validates the source IP and MAC addresses for normal traffic.
+    1. (Note) IP source guard validates the source IP and MAC addresses for normal traffic.
     1. Enable DHCPv4/DHCPv6/ND snooping (see above).
     1. **TODO** Enable globally?
     1. Enable for IP and MAC (interface): `{ip|ipv6} verify source sip[v6]-mac`

+ 2 - 2
config/network/vyos.md

@@ -40,7 +40,7 @@ An example of a full configuration. Except intuitive stuff I forgot to mention.
 1. Enter configuration mode: `configure`
     - This changes the prompt from `$` to `#`.
 1. Set hostname:
-    1. Note: `<host-name>.<domain-name>` should be an FQDN.
+    1. (Note) `<host-name>.<domain-name>` should be an FQDN.
     1. Hostname: `set system host-name <hostname>`
     1. Domain name: `set system domain-name <domain-name>`
 1. Set the DNS servers: `set system name-server <ip-address>` (for each server)
@@ -64,7 +64,7 @@ An example of a full configuration. Except intuitive stuff I forgot to mention.
     1. Enable server: `set service ssh`
     1. (Optional) Commit and log in through SSH instead of the console.
 1. Replace default user:
-    1. Note: You may want to skip ahead to the SSHD step so you can paste stuff vis SSH instead of manually writing it into the console.
+    1. (Note) You may want to skip ahead to the SSHD step so you can paste stuff via SSH instead of manually typing it into the console.
     1. Enter new user: `edit system login user <username>`
     1. Set password: `set authentication plaintext-password "<password>"`
         - Remember quotation marks if the password string contains spaces.

+ 1 - 1
config/pc/applications.md

@@ -254,7 +254,7 @@ Snippets for `/etc/pipewire/media-session.d/media-session.conf`:
 
 - Open serial session: `screen /dev/ttyUSB0 38400,-crtscts` (38400 baud, no flow control)
 - End session: `Ctrl+A, \`
-- Note: For some devices, you may need to use `Ctrl+H` instead of backspace.
+- (Note) For some devices, you may need to use `Ctrl+H` instead of backspace.
 
 ## SMB
 

+ 11 - 5
config/pc/arch-i3.md

@@ -91,7 +91,7 @@ Note: The use of `sudo` in the text below is a bit inconsistent, but you should
     - Mount root: `mount /dev/mapper/crypt_root /mnt`
     - Mount ESP: `mkdir -p /mnt/boot/efi && mount /dev/<partition> /mnt/boot/efi`
 1. Install packages to the new root:
-    - Base command and packages: `pacstrap /mnt base linux linux-firmware archlinux-keyring vim sudo bash-completion man-db man-pages xdg-utils xdg-user-dirs zsh vim htop git jq rsync openssh tmux screen reflector usbutils`
+    - Base command and packages: `pacstrap /mnt base linux linux-firmware archlinux-keyring sudo bash-completion man-db man-pages xdg-utils xdg-user-dirs smartmontools zsh vim tar zip unzip htop git jq rsync openssh tmux screen reflector usbutils tcpdump nmap`
     - **TODO** Maybe for laptops: `wpa_supplicant networkmanager`
 1. Generate the fstab file:
     1. `genfstab -U /mnt >> /mnt/etc/fstab`
@@ -138,15 +138,17 @@ Note: The use of `sudo` in the text below is a bit inconsistent, but you should
     - Set the editor: `export EDITOR=vim`
     - Set the visual editor: `export VISUAL=vim`
 1. Setup wired networking:
-    1. Enable and start: `systemctl enable --now systemd-networkd`
+    1. Enable: `systemctl enable systemd-networkd`
+    1. Don't wait for network during boot: `systemctl disable systemd-networkd-wait-online.service`
     1. Add a config for the main interface (or all interfaces): See the section with an example below.
-    1. Restart: `systemctl restart systemd-networkd`
+    1. (Re)start: `systemctl restart systemd-networkd`
     1. Wait for connectivity (see `ip a`).
 1. Setup DNS server(s):
     1. `echo nameserver 1.1.1.1 >> /etc/resolv.conf` (Cloudflare)
     1. `echo nameserver 2606:4700:4700::1111 >> /etc/resolv.conf` (Cloudflare)
 1. Setup Pacman:
     1. Enable color: In `/etc/pacman.conf`, uncomment `Color`.
+    1. Enable the multilib repo (for 32-bit apps): In `/etc/pacman.conf`, uncomment the `[multilib]` section.
 1. Update the system and install useful stuff:
     1. Upgrade: `pacman -Syu`
 1. Install display driver:
@@ -284,7 +286,6 @@ Note: Install _either_ the LightDM (X11 GUI) or Ly (TTY TUI) display manager, no
     1. (Optional) Download the Dracula theme: `curl https://raw.githubusercontent.com/dracula/alacritty/master/dracula.yml -o ~/.config/alacritty/dracula.yml`
     1. Configure: Setup `~/.config/alacritty/alacritty.yml`, see the example config below.
     1. Setup i3: In the i3 config, replace the `bindsym $mod+Return ...` line with `bindsym $mod+Return exec alacritty`
-    1. Fix `TERM` for SSH (since the remote probably don't have Alacritty terminal support): In `.zshrc` (or `.bashrc` if using BASH), set `alias ssh="TERM=xterm-256color ssh"`.
     1. (Note) Press `Ctrl+Shift+Space` to enter vi mode, allowing you to e.g. move around (and scroll up) using arrow keys and select text using `V` or `Shift+V`. Press `Ctrl+Shift+Space` again to exit.
 1. Setup the Rofi application launcher:
     1. Install: `sudo pacman -S rofi`
@@ -408,7 +409,7 @@ See [PipeWire (Applications)](../applications/#pipewire) for more config info.
     1. Enable tray icon on i3 start: In the i3 config, add `exec --no-startup-id blueman-applet`. (**TODO** Test.)
     1. (Optional) Try to run it. It's the "Bluetooth Manager" entry in e.g. Rofi.
 1. (Example) Connect a device using `bluetoothctl`:
-    1. Note: To avoid entering the interactive TUI and run single commands instead, use `bluetoothctl -- <cmd>`.
+    1. (Note) To avoid entering the interactive TUI and run single commands instead, use `bluetoothctl -- <cmd>`.
     1. Enter the TUI: `bluetoothctl`
     1. List controllers: `list`
     1. (Optional) Select a controller: `select <mac>`
@@ -456,6 +457,8 @@ See [PipeWire (Applications)](../applications/#pipewire) for more config info.
 1. Setup the 7-Zip CLI/GUI archiver:
     1. Install: `yay -S p7zip-gui`
     1. (Note) Don't use the `.7z` file format, it doesn't preserve owner info.
+1. Setup network tools:
+    1. Install: `sudo pacman -S nmap tcpdump wireshark-qt`
 
 ### Extra (Optional)
 
@@ -498,6 +501,9 @@ font:
   #   style: Regular
   size: 9
 
+env:
+  TERM: xterm-256color
+
 import:
   # Theme
   - ~/.config/alacritty/dracula.yml

+ 1 - 1
config/pc/windows.md

@@ -22,7 +22,7 @@ breadcrumbs:
 - Install all available updates.
 - Install graphics drivers and fix display frame rates, color ranges (use full range for PC displays and limited for TVs, generally) etc.
 - Enable BitLocker drive encryption (requires Pro edition):
-    - Note: Using passwords and not TPM because I don't want my PC to decrypt itself without me and because I need to move disks between PCs.
+    - (Note) Using passwords and not TPM because I don't want my PC to decrypt itself without me and because I need to move disks between PCs.
     - Allow using it without a TPM module:
         - Open `gpedit.msc`.
         - Go to: `Local Computer Policy/Computer Configuration/Administrative Templates/Windows Components/Bitlocker Drive Encryption/Operating System Drives`

+ 1 - 1
config/virt-cont/libvirt-kvm.md

@@ -82,7 +82,7 @@ I'll only focus on using it with KVM (and QEMU) here.
 - Edit network config (without applying it): `virsh net-edit <network>`
 - Apply changed network config: Restart libvirt or reboot the system.
 - Create bridge connected to physical NIC:
-    - Note: If you're connected remotely, try to avoid locking yourself out.
+    - (Note) If you're connected remotely, try to avoid locking yourself out.
     - Create bridge on the host: See [BridgeNetworkConnections (Debian Wiki)](https://wiki.debian.org/BridgeNetworkConnections) or something.
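+A minimal sketch of such a bridge in `/etc/network/interfaces` (Debian ifupdown with bridge-utils; the interface names are assumptions):
+
+```
+auto br0
+iface br0 inet dhcp
+    bridge_ports eno1
+    bridge_stp off
+    bridge_fd 0
+```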
 
 ### Storage

+ 1 - 1
config/virt-cont/podman.md

@@ -129,7 +129,7 @@ Warning: If you have any existing CNI networks, forcing Netavark will break thos
 
 ### Networking
 
-- Note: Podman 4.0 introduced a new network stack built from scratch and scrapped the CNI network stack (which targets Kubernetes more than Podman).
+- (Note) Podman 4.0 introduced a new network stack built from scratch and scrapped the CNI network stack (which targets Kubernetes more than Podman).
 - **TODO** Update the below notes for Podman 4.0.
 - Firewall:
     - Unlike Docker, you can't just restart some daemon to fix the firewall rules after reapplying your normal IPTables rules from a script or something.

+ 142 - 25
config/virt-cont/proxmox-ve.md

@@ -6,41 +6,47 @@ breadcrumbs:
 ---
 {% include header.md %}
 
-Using **Proxmox VE 7**.
+Using **Proxmox VE 7** (based on Debian 11).
 
 ## Host
 
 ### Installation
 
-1. Find a mouse.
-    - Just a keyboard is not enough.
-    - You don't need the mouse too often, though, so you can hot-swap between the keyboard and mouse during the install.
-1. Download PVE and boot from the installation medium (in UEFI mode if supported, otherwise BIOS is fine).
+1. Make sure UEFI and virtualization extensions are enabled in the BIOS settings.
+1. (Optional) Find a mouse.
+    - The GUI installer doesn't require it any more, but it's still somewhat practical.
+1. Download PVE and boot from the installation medium.
 1. Storage:
-    - Use 1-2 mirrored SSDs with ZFS.
+    - Note that you can use e.g. ZFS with 2 mirrored SSDs, but a single reliable disk with ext4 is fine too.
     - (ZFS) enable compression and checksums and set the correct ashift for the SSD(s). If in doubt, use ashift=12.
 1. Localization:
     - (Nothing special.)
 1. Administrator user:
-    - Set a root password. It should be different from your personal password.
-    - Set the email to "root@localhost" or something. It's not important before actually setting up email.
+    - Set a root password. It _should_ be different from your personal user's password.
+    - Set the email to "root@localhost" or something. It's not important (yet).
 1. Network:
-    - (Nothing special.)
+    - Just set up something temporary that works. You'll probably change this after installation to set up bonding, VLANs etc.
+1. Miscellanea:
+    - Make sure you set the correct FQDN during the install. This is a bit messy to change afterwards.
 
 ### Initial Configuration
 
-Follow the instructions for [Debian](/config/linux-server/debian/), but with the following changes:
+Follow the instructions for [Debian server](/config/linux-server/debian/) in addition to the notes and instructions below (read them first).
+
+Warning: Don't install any of the firmware packages; doing so will remove the PVE firmware packages.
+
+PVE-specific instructions:
 
-1. Before installing updates, setup the PVE repos (assuming no subscription):
+1. Setup the PVE repos (assuming no subscription):
+    1. (Note) More info: [Proxmox VE: Package Repositories](https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo)
     1. Comment out all content from `/etc/apt/sources.list.d/pve-enterprise.list` to disable the enterprise repo.
-    1. Create `/etc/apt/sources.list.d/pve-no-subscription.list` containing `deb http://download.proxmox.com/debian/pve buster pve-no-subscription` to enable the no-subscription repo.
-    1. More info: [Proxmox VE: Package Repositories](https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo)
-1. Don't install any of the firmware packages, it will remove the PVE firmware packages.
-1. Update network config and hostname:
-    1. Do NOT manually modify the configs for network, DNS, NTP, firewall, etc. as specified in the Debian guide.
-    1. (Optional) Install `ifupdown2` to enable live network reloading. This does not work if using OVS interfaces.
+    1. Create `/etc/apt/sources.list.d/pve-no-subscription.list` containing `deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription` to enable the no-subscription repo.
+    1. Run a full upgrade: `apt update && apt full-upgrade`
+1. Update network config:
+    1. (Note) Do NOT manually modify the configs for DNS, NTP, IPTables, etc. However, the network config (`/etc/network/interfaces`) and PVE configs _may_ be manually modified, but the GUI or API is still recommended.
+    1. (Note) For complicated VLAN setups, you want to use OVS stuff instead of plain Linux stuff. Plain Linux stuff (the way PVE uses it) may break for certain setups, e.g. where PVE has an L3 VLAN interface on the same bridge and VLAN that a VM is connected to.
+    1. (Note) OVS bonds: Use mode "LACP (balance-tcp)" and manually specify OVS option `lacp-time=fast`.
     1. Update network config: Use the web GUI.
-    1. (Optional) Update hostname: See the Debian guide. Note that the short and FQDN hostnames must resolve to the IPv4 and IPv6 management address to avoid breaking the GUI.
 1. Update MOTD:
     1. Disable the special PVE banner: `systemctl disable --now pvebanner.service`
     1. Clear or update `/etc/issue` and `/etc/motd`.
@@ -61,6 +67,46 @@ Follow the instructions for [Debian](/config/linux-server/debian/), but with the
     1. Setup backup pruning:
         - [Backup and Restore (Proxmox VE)](https://pve.proxmox.com/wiki/Backup_and_Restore)
         - [Prune Simulator (Proxmox BS)](https://pbs.proxmox.com/docs/prune-simulator/)
+1. Setup users (PAM realm):
+    1. Add a Linux user: `adduser <username>` etc.
+    1. Create a PVE group: In the "groups" menu, create e.g. an admin group.
+    1. Give the group permissions: In the "permissions" menu, add a group permission. E.g. path `/` and role `Administrator` for full admin access.
+    1. Add the user to PVE: In the "users" menu, add the PAM user and add it to the group. (This can also be done with `pveum`, as sketched after this list.)
+    1. (Optional) Relog as the new admin user and disable the root user.
+1. Setup backups:
+    1. Figure it out. You probably want to set up a separate storage for backups.
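+
+For reference, the user/group steps above can also be done from the CLI with `pveum` (a sketch; the group and user names are examples):
+
+```
+adduser alice                                    # Backing Linux user (PAM realm)
+pveum groupadd admins -comment "Administrators"
+pveum aclmod / -group admins -role Administrator
+pveum useradd alice@pam -comment "Alice"
+pveum usermod alice@pam -group admins
+```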
+
+### Manual Configuration
+
+This is generally not recommended if you want to avoid breaking the system.
+Most of this stuff may be changed in the GUI.
+None of this stuff is required for a normal, full setup.
+
+- Change domain:
+    - Note that changing the hostname (excluding the domain part) is rather messy. Check the wiki if you really need to.
+    - Update the search domain in `/etc/resolv.conf`.
+    - Update `/etc/hosts` with the new FQDN.
+- Change DNS:
+    - Update `/etc/resolv.conf`.
+- Change NTP:
+    - Update `/etc/chrony/chrony.conf`.
+- Change network interfaces:
+    - Change `/etc/network/interfaces`.
+    - Reload: **TODO:** How? OVS requires special care?
+- Change firewall:
+    - Do NOT manually change IPTables rules.
+    - Update the datacenter firewall in `/etc/pve/firewall/cluster.fw`.
+    - Update the node firewall in `/etc/pve/nodes/<node>/host.fw` (`/etc/pve/local/` is a symlink to the local node's directory).
+- Change storage:
+    - Update `/etc/pve/storage.cfg`.
+    - See the wiki for config options (a sketch follows after this list).
+- Change users, groups and permissions:
+    - Update `/etc/pve/user.cfg`.
+    - Note that PAM users need a backing local Linux user.
+    - This file is a bit messy, avoid breaking it.
+- Change tokens:
+    - Update `/etc/pve/user.cfg` (again).
+    - Update `/etc/pve/priv/token.cfg` with the token ID and the secret key.
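+
+For reference, a `storage.cfg` sketch roughly matching a default ZFS install (storage names and content types are examples):
+
+```
+dir: local
+        path /var/lib/vz
+        content iso,vztmpl,backup
+
+zfspool: local-zfs
+        pool rpool/data
+        content images,rootdir
+```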
 
 ### Configure PCI(e) Passthrough
 
@@ -161,7 +207,7 @@ If you lost quorum because if connection problems and need to modify something (
 
 - List: `qm list`
 
-### General Setup
+### General VM Setup
 
 The "Cloud-Init" notes can be ignored if you're not using Cloud-Init. See the separate section below first if you are.
 
@@ -213,24 +259,95 @@ The "Cloud-Init" notes can be ignored if you're not using Cloud-Init. See the se
     - Open a graphical console to show what's going on.
     - See the separate sections below for more specific stuff.
 
-### Linux Setup (Manual)
+### Linux VM Setup (Manual)
 
 1. Setup the VM (see the general setup section).
 1. (Recommended) Setup the QEMU guest agent: See the section about it.
 1. (Optional) Setup SPICE (for better graphics): See the section about it.
 1. More detailed Debian setup: [Debian](/config/linux-server/debian/)
 
-### Linux Setup (Cloud-Init)
+### Linux VM Cloud-Init Debian Template
+
+*Using Debian 11.*
+
+Example for creating a Cloud-Init-enabled Debian template using official cloud images.
+
+**Resources:**
+
+- [Proxmox: Cloud-Init Support](https://pve.proxmox.com/wiki/Cloud-Init_Support)
+- [Debian: Cloud](https://wiki.debian.org/Cloud/)
+- [Debian: Debian Official Cloud Images](https://cloud.debian.org/images/cloud/)
+
+**Instructions:**
+
+1. Download the VM image:
+    1. (Note) Supported formats: `qcow2`, `vmdk`, `raw` (use `qemu-img info <FILE>` to check)
+    1. Download the image.
+    1. (Optional) Verify the image integrity and authenticity: See [Debian: Verifying authenticity of Debian CDs](https://www.debian.org/CD/verify).
+1. Create the VM:
+    1. (Note) You may want to use a high VMID like 1000+ for templates to visually separate them from the other VMs, e.g. in the PVE UI.
+    1. (Note) Using legacy BIOS and chipset (SeaBIOS and i440fx).
+    1. Create: `qm create <VMID> --name <NAME> --description "<DESC>" --ostype l26 --numa 1 --cpu cputype=host --sockets <CPU_SOCKETS> --cores <CPU_CORES> --memory <MEM_MB> --scsihw virtio-scsi-pci --ide2 <STORAGE>:vm-<VMID>-cloudinit --net0 virtio,bridge=<NET_BRIDGE>[,tag=<VLAN_ID>][,firewall=1] --serial0 socket [--vga serial0] --boot c --bootdisk scsi0 --onboot no`
+1. Import the cloud disk image:
+    1. Import as unused disk: `qm importdisk <VMID> <FILE> <STORAGE>`
+    1. Attach the disk: `qm set <VMID> --scsi0 <STORAGE>:vm-<VMID>-disk-0` (or whatever disk ID it got)
+1. Make it a template:
+    1. (Note) The Cloud-Init disk will not be created automatically before starting the VM, so the template command might complain about it not existing.
+    1. Protect it (prevent destruction): `qm set <VMID> --protection 1`
+    1. Convert to template: `qm template <VMID>`
+1. (Example) Create a VM:
+    1. (Note) Only SSH login is enabled, no local credentials. Use user `debian` with the specified SSH key(s). Sudo is passwordless for that user.
+    1. Clone the template: `qm clone <TEMPL_VMID> <VMID> --name <NAME> --storage <STORAGE> --full`
+    1. Set Cloud-Init user and SSH pubkeys: `qm set <VMID> --ciuser <USERNAME> --sshkeys <PUBKEYS_FILE>`
+    1. Update the network interface: `qm set <VMID> --net0 virtio,bridge=vmbr1,tag=10,firewall=1` (example)
+    1. Set static IP config: `qm set <VMID> --ipconfig0 ip=<>,gw=<>,ip6=<>,gw6=<>` (for netif 0, using CIDR notation)
+        - (Alternative) Set dynamic IP config: `qm set <VMID> --ipconfig0 ip=dhcp,ip6=auto`
+    1. Set DNS server and search domain: `qm set <VMID> --nameserver "<DNS_1> <DNS_2> <DNS_3>" --searchdomain <DOMAIN>`
+    1. (Optional) Disable protection: `qm set <VMID> --protection 0`
+    1. (Optional) Enable auto-start: `qm set <VMID> --onboot yes`
+    1. (Optional) Enable the QEMU agent (must be installed in the guest): `qm set <VMID> --agent enabled=1`
+    1. Resize the volume (Cloud-Init will resize the FS): `qm resize <VMID> scsi0 <SIZE>` (e.g. `20G`)
+    1. Set firewall config: See the example file and notes below.
+    1. Start the VM: `qm start <VMID>`
+    1. Check the console in the web UI to see the status. Connect using SSH when it's up.
+
+**VM firewall example:**
+
+File `/etc/pve/firewall/<VMID>.fw`:
+
+```
+[OPTIONS]
+enable: 1
+ndp: 1
+dhcp: 0
+radv: 0
+policy_in: ACCEPT
+policy_out: REJECT
+
+[RULES]
+OUT ACCEPT -source fe80::/10 -log nolog # Allow IPv6 LL local source
+OUT ACCEPT -source <IPV4_ADDR> -log nolog # Verify IPv4 local source
+OUT ACCEPT -source <IPV6_ADDR> -log nolog # Verify IPv6 GUA/ULA local source
+```
+
+Notes:
+
+- `dhcp` and `radv` decide if the VM is allowed to act as a DHCP server and to send router advertisements. Most VMs should not be able to do this.
+- `ndp` enables IPv6 NDP, which is required for IPv6 to function properly.
+- The input policy is set to allow all since the VM is expected to implement its own firewall.
+- The output policy and rules are defined to enforce (static) IP source verification, to prevent it from spoofing other (non-local) addresses.
+
+#### Old Notes
 
 *Using Debian 10.*
 
-**TODO** Script this and use snippets. The UEFI boot order fix, though ...
+**Ignore this section.** I'm keeping it for future reference only.
 
 1. Download a cloud-init-ready Linux image to the hypervisor:
     - Debian cloud-init downloads: [Debian Official Cloud Images](https://cloud.debian.org/images/cloud/) (the `genericcloud` or `generic` variant and `qcow2` format)
     - **TODO**: `genericcloud` or `generic`? Does the latter fix the missing console?
     - Copy the download link and download it to the host (`wget <url>`).
-1. Note: It is an UEFI installation (so the BIOS/UEFI mode must be set accordingly) and the image contains an EFI partition (so you don't need a separate EFI disk).
+1. (Note) It is a UEFI installation (so the BIOS/UEFI mode must be set accordingly) and the image contains an EFI partition (so you don't need a separate EFI disk).
 1. Setup a VM as in the general setup section (take note of the specified Cloud-Init notes).
     1. Set the VM up as UEFI with an "EFI disk" added.
     1. Add a serial interface since the GUI console may be broken (it is for me).
@@ -266,9 +383,9 @@ The "Cloud-Init" notes can be ignored if you're not using Cloud-Init. See the se
     - Consider purging the cloud-init package to avoid accidental reconfiguration later.
     - Consider running `cloud-init status --wait` before configuring it to make sure the Cloud-Init setup has completed.
 
-### Windows Setup
+### Windows VM Setup
 
-*Using Windows 10.*
+Using Windows 10.
 
 [Proxmox VE Wiki: Windows 10 guest best practices](https://pve.proxmox.com/wiki/Windows_10_guest_best_practices)
 

+ 13 - 2
index.md

@@ -23,6 +23,11 @@ Random collection of config notes and miscellaneous stuff. _Technically not a wi
 - [Ansible](/config/automation/ansible/)
 - [Puppet](/config/automation/puppet/)
 
+### Cloud
+
+- [Azure](/config/cloud/azure/)
+- [AWS](/config/cloud/aws/)
+
 ### Computers
 
 - [Dell OptiPlex Series](/config/computers/dell-optiplex/)
@@ -54,7 +59,7 @@ Random collection of config notes and miscellaneous stuff. _Technically not a wi
 
 ### Linux Server
 
-- [Debian](/config/linux-server/debian/)
+- [Debian Server](/config/linux-server/debian/)
 - [Applications](/config/linux-server/applications/)
 - [Storage](/config/linux-server/storage/)
 - [Storage: ZFS](/config/linux-server/storage-zfs/)
@@ -71,6 +76,12 @@ Random collection of config notes and miscellaneous stuff. _Technically not a wi
 - [VLC](/config/media/vlc/)
 - [youtube-dl](/config/media/youtube-dl/)
 
+### Monitoring
+
+- [Grafana](/config/monitoring/grafana/)
+- [Prometheus](/config/monitoring/prometheus/)
+- [Grafana Loki](/config/monitoring/loki/)
+
 ### Network
 
 #### General
@@ -106,7 +117,7 @@ Random collection of config notes and miscellaneous stuff. _Technically not a wi
 - [Kubuntu](/config/pc/kubuntu/)
 - [Manjaro (KDE)](/config/pc/manjaro-kde/)
 - [Windows](/config/pc/windows/)
-- [PC Applications](/config/pc/applications/)
+- [Applications](/config/pc/applications/)
 
 ### Power
 

+ 1 - 1
it/services/dns.md

@@ -31,7 +31,7 @@ Everyone knows this, no point reiterating.
 - Host systems and recursive DNS servers may be configured to validate received RRs for DNSSEC-enabled domains.
 - The set of all RRs of the same type for a domain is called an "RRset".
 - The presence of a DS record for a child zone signals that the child zone is DNSSEC-enabled.
-- The NSEC RR may be used to search for all subdomains and which RRs exist for them (aka "zone walking"), so _secret_ subdomains are no longer possible, although NSEC3 _partially_ prevents this. See "DNSSEC white lies" as well for more info.
+- The NSEC RR may be used to enumerate all subdomains and which RRs exist for them (aka zone walking or zone enumeration), so hidden records are no longer possible. NSEC3 with "white lies" and NSEC5 (when supported) prevent this. Blocking NSEC altogether breaks DNSSEC-enabled resolvers, so don't do that.
 - A zone's RRsets may be signed in live mode, where the DNSKEY private key is present on the authorative DNS server(s), or in offline mode, where the zone's RRsets are signed in advance and the private key is somewhere safe.
 - Due to the size of DNSSEC record types, it makes the DNS server more vulnerable to amplification attacks.
 

+ 1 - 1
media/audio/basics.md

@@ -12,7 +12,7 @@ breadcrumbs:
     - High midrange (ca. 1kHz-10kHz)
     - Highs (ca. 10kHz-20kHz)
 - Signal levels:
-    - Note: This is the voltage (and somewhat impedance) inside cables/equipment.
+    - (Note) This is the voltage (and somewhat impedance) inside cables/equipment.
     - Mic level: Output from a microphone. Very weak, requires a preamp.
     - Instrument level: Output from e.g. a guitar. Like mic level but slightly stronger.
     - Line level (+4dBu): Professional equipment.

+ 1 - 1
se/general/web-security.md

@@ -89,7 +89,7 @@ breadcrumbs:
 
 ### Headers
 
-- Note: These are response headers unless otherwise stated.
+- (Note) These are response headers unless otherwise stated.
 - `X-Frame-Options`: Determines if the current page can be framed. Can prevent e.g. clickjacking. Unless the page is intended to be framed on other sites, set it to `SAMEORIGIN` or `DENY`.
 - `X-Content-Type-Options`: Can prevent e.g. MIME sniffing by forbidding browsers from ignoring the sent `Content-Type` and trying to determine the content type of a document themselves, which can lead to XSS. Always set to `nosniff`.
 - `X-XSS-Protection`: Determines if built-in XSS features in the browser (e.g. for detecting reflected XSS) should be enabled or disabled. The default (`1`) is to detect and sanitize unsafe parts (which could potentially be exploited). Set to `1; mode=block` to stop loading the page when detected instead.
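+
+A minimal nginx sketch setting the strict variants of the headers above (assuming nginx; adjust per site):
+
+```
+add_header X-Frame-Options "DENY" always;
+add_header X-Content-Type-Options "nosniff" always;
+add_header X-XSS-Protection "1; mode=block" always;
+```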