
Cloudflare, Ceph, ZFS

Håvard O. Nordstrand 5 years ago
parent
commit
87bcfa6597
2 files changed with 54 additions and 17 deletions
  1. config/general/general.md (+31 -6)
  2. config/linux-servers/storage.md (+23 -11)

+ 31 - 6
config/general/general.md

@@ -18,11 +18,36 @@ breadcrumbs:
 
 ## Addresses
 
-- Cloudflare DNS:
-  - `1.1.1.1`
-  - `1.0.0.1`
-  - `2606:4700:4700::1111`
-  - `2606:4700:4700::1001`
-- Justervesenet NTP: `ntp.justervesenet.no`
+- Cloudflare DNS (1.1.1.1):
+    - Notes:
+        - Privacy-focused.
+        - Supports DNSSEC.
+        - Supports DNS over HTTPS and DNS over TLS.
+        - Supports malware and adult content blocking.
+        - Supports DNS64.
+        - Does not allow ANY queries.
+        - Upstreams to a locally hosted F root server for privacy and reduced latency.
+        - Allows cache purging: [1.1.1.1: Purge Cache](https://1.1.1.1/purge-cache/)
+    - Direct:
+        - `1.1.1.1`
+        - `1.0.0.1`
+        - `2606:4700:4700::1111`
+        - `2606:4700:4700::1001`
+    - Malware blocking:
+        - `1.1.1.2`
+        - `1.0.0.2`
+        - `2606:4700:4700::1112`
+        - `2606:4700:4700::1002`
+    - Malware and adult content blocking:
+        - `1.1.1.3`
+        - `1.0.0.3`
+        - `2606:4700:4700::1113`
+        - `2606:4700:4700::1003`
+    - DNS64:
+        - `2606:4700:4700::64`
+        - `2606:4700:4700::6400`
+- Justervesenet NTP (JV-UTC):
+    - Info: [Justervesenet: NTP-tenester frå Justervesenet](https://www.justervesenet.no/maleteknikk/tid-og-frekvens/ntp-tjenester-fra-justervesenet/)
+    - Address: `ntp.justervesenet.no`
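+
+The resolver addresses above can be wired into systemd-resolved via a drop-in; a sketch (the drop-in path is an assumption, and `allow-downgrade` is one choice among several — `DNS=`, `DNSOverTLS=` and `DNSSEC=` are standard `resolved.conf` options):
+
+```ini
+# /etc/systemd/resolved.conf.d/cloudflare.conf (hypothetical drop-in path)
+[Resolve]
+DNS=1.1.1.1 1.0.0.1 2606:4700:4700::1111 2606:4700:4700::1001
+DNSOverTLS=yes
+DNSSEC=allow-downgrade
+```
+
+Restart with `systemctl restart systemd-resolved` for the drop-in to take effect.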
 
 {% include footer.md %}

+ 23 - 11
config/linux-servers/storage.md

@@ -199,7 +199,19 @@ This is just a suggestion for how to partition your main system drive. Since LVM
 
 - General:
     - List pools: `rados lspools` or `ceph osd lspools`
-    - Show pool utilization: `rados df`
+- Show utilization:
+    - `rados df`
+    - `ceph df [detail]`
+    - `ceph osd df`
+- Show health and status:
+    - `ceph status`
+    - `ceph health [detail]`
+    - `ceph osd stat`
+    - `ceph osd tree`
+    - `ceph mon stat`
+    - `ceph osd perf`
+    - `ceph osd pool stats`
+    - `ceph pg dump pgs_brief`
 - Pools:
     - Create: `ceph osd pool create <pool> <pg-num>`
     - Delete: `ceph osd pool delete <pool> [<pool> --yes-i-really-mean-it]`
@@ -207,8 +219,6 @@ This is just a suggestion for how to partition your main system drive. Since LVM
     - Make or delete snapshot: `ceph osd pool <mksnap|rmsnap> <pool> <snap>`
     - Set or get values: `ceph osd pool <set|get> <pool> <key>`
     - Set quota: `ceph osd pool set-quota <pool> [max_objects <count>] [max_bytes <bytes>]`
-- PGs:
-    - Status of PGs: `ceph pg dump pgs_brief`
 - Interact with pools directly using RADOS:
     - Ceph is built on RADOS.
     - List files: `rados -p <pool> ls`
@@ -273,12 +283,16 @@ Typically an early indicator of faulty hardware, so take note of which disk it i
     - Check that the new OSD is up: `ceph osd tree`
 1. Start the OSD daemon: `systemctl start ceph-osd@<id>`
 1. Wait for rebalancing: `ceph -s [-w]`
-1. Check the health: `ceph health`
+1. Check the health: `ceph health [detail]`
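+
+Steps 3 and 4 above can be scripted as a single blocking wait; a sketch (assumes the `ceph` CLI is on `PATH`, and the 60-second polling interval is arbitrary):
+
+```shell
+# Re-run the given command until its output starts with HEALTH_OK.
+# Usage: wait_for_health_ok ceph health
+wait_for_health_ok() {
+    while ! "$@" | grep -q '^HEALTH_OK'; do
+        sleep 60
+    done
+}
+```
+
+Passing the command as arguments (`"$@"`) rather than hardcoding `ceph health` also makes the loop easy to test with a stub command.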
 
 ## ZFS
 
+Using ZFS on Linux (ZoL).
+
 ### Info
 
+Note: ZFS's history (Oracle) and license (CDDL, which is incompatible with the Linux mainline kernel) are pretty good reasons to avoid ZFS.
+
 #### Features
 
 - Filesystem and physical storage decoupled
@@ -299,12 +313,13 @@ Typically an early indicator of faulty hardware, so take note of which disk it i
 #### Terminology
 
 - Vdev
-- Zpool
+- Pool
+- Dataset
 - Zvol
 - ZFS POSIX Layer (ZPL)
 - ZFS Intent Log (ZIL)
-- Adaptive Replacement Cache (ARC)
-- Dataset
+- Adaptive Replacement Cache (ARC) and L2ARC
+- ZFS Event Daemon (ZED)
 
 #### Encryption
 
@@ -407,10 +422,7 @@ Some guides recommend using backport repos, but this way avoids that.
 - Make sure regular automatic scrubs are enabled.
     - There should be a cron job/script or something.
     - Run it e.g. every 2 weeks or monthly.
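 
 The scrub schedule mentioned above could look like this as a cron entry (the file path and the pool name `tank` are placeholders):
 
 ```
 # /etc/cron.d/zfs-scrub (hypothetical): scrub "tank" at 03:00 on the 1st and 15th.
 0 3 1,15 * * root /usr/sbin/zpool scrub tank
 ```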
-- Snapshots are great for incremental backups. They're easy to send places too. If the dataset is encrypted then so is the snapshot.
-- Enabling features like encryption, compression, deduplication is not retro-active. You'll need to move the old data away and back for the features to apply to the data.
-
-### Tuning
+- Snapshots are great for incremental backups. They're easy to send places too. If the dataset is encrypted then so is the snapshot.
 
 - Use quotas, reservations and compression.
 - Very frequent reads: