diff --git a/blog.json b/blog.json index df06fcc..f1e0cd2 100644 --- a/blog.json +++ b/blog.json @@ -131,7 +131,7 @@ "get-most-out-your-hosthatch-vps": { "title": "how to get the most out of your HostHatch VPS", "description": "Unlock the full potential of your HostHatch VPS with my friendly and easy-to-follow guide! In this blog post, I'll take you step-by-step through optimizing your NVMe and Storage VPS setups using some cool techniques like NFSv4.2, Cachefilesd, zRAM, and IPTables. Whether you're just starting out or you're a seasoned professional, you'll discover how to manage your resources more efficiently, boost performance, and keep your server environment secure. I'll also cover everything from configuring reverse DNS to setting up swap space and implementing private networking. This guide is perfect for anyone eager to maximize their VPS capabilities and make the most out of their hosting experience!", - "content": "Hi!\n\nRecently I've migrated to HostHatch as my hosting provider, and while switching (even before, actually)\nI noticed that my target plan (NVME 16 GB) had only 75 GB of NVMe storage. This is why I also bought\nStorage VPS 1 TB on the side for $5 which has an HDD so it is not as expensive.\n\nThis blog post is meant to serve as a guide to how to get the most out of your HostHatch VPS\nby using NFSv4.2 (or whatever the latest is at the time you're reading this), Cachefilesd,\nzRAM, swap, and IPTables.\n\n## Disclaimer\n\nWhile I strive to provide accurate and helpful information in this guide, please note that any actions you take based on\nthe content provided are at your own risk. I am not liable for any damages, data loss, or other issues that may arise from\nfollowing the instructions outlined in this post. Always ensure you have proper backups and consult with a professional\nif you're unsure about any steps. Happy optimizing!\n\n## Hardware\n\n- Processing VPS: NVMe 16 GB\n - 4 AMD EPYC cores (2 dedicated, 2 fair-shared)\n - 16 GB of DDR4 RAM\n - 75 GB of NVMe storage\n - 4 TB of network bandwidth\n - Location: Stockholm, Sweden. (or whatever you want as long as the location supports Storage VPSes, if you're planning on using private networking) (a person I know has experienced performance problems using Swedish VPSes, you might want to use another location)\n- Storage VPS: Storage 1 TB\n - 1 vCPU core\n - 1024 MB of RAM\n - 1000 GB of storage\n - 2500 GB of network bandwidth\n - Location: Same one as the processing VPS. (this will be useful when using private networking)\n\nWe have extremely limited resources on the storage VPS, so we will try to work around that.\n\n## Operating systems & software stack\n\nThis guide should work for pretty much all Linux-based operating systems. Most commonly it is Debian Linux,\nalthough nobody is stopping you from using another distribution, such as Alpine Linux, which may even decrease\nthe resource usage.\n\nPersonally, I chose Debian Linux because it is very versatile and it has huge software repositories. It worked\nfine for me over and over again and I believe it to be a very reliable choice.\n\nIf you use anything other than Debian or Debian-based (such as Ubuntu) - adjust the procedures as needed based\non your software stack.\n\n## Reverse DNS\n\nThis is mainly a convenience feature, but you might want to change the rDNS of your\nHostHatch VPS(es). To change the rDNS of your VPS do the following steps:\n\n1. Log into HostHatch at .\n2. Go to your server's panel by clicking on its hostname.\n3. 
Go to the 'Network' tab.\n\nThen:\n\n- For IPv4\n 1. Click the arrow at the end of the IP row (looks like a gray `>` character at the edge of the row).\n 2. Enter your reverse DNS.\n 3. Press the confirm checkmark.\n- For IPv6, do the same steps, but for interface ID enter `0` the first time and then `1` the second time. This will ensure the best IPv6 rDNS compatibility: `::0` is oftentimes seen as a placeholder address, while `::1` should be your main IPv6 address. (if you enable IPv6 on HostHatch you get a whole /64 subnet)\n\n## zRAM and Swap space\n\nSwap space is an extra bit of virtual RAM so to say on your computer which your computer can fall back onto if it runs out of RAM.\nzRAM is like swap, although, it is compressed and all in-memory.\n\nzRAM might be useful for the processing VPS as it'll require CPU to compress and decompress the RAM, although, it will allow you to\nget better use out of RAM. While swap might be more useful on the storage VPS due to CPU and memory constraints.\n\nPersonally I have set up zRAM and normal swap (with a lower priority) on the processing VPS, and normal swap on the storage VPS.\n\n### zRAM\n\nFollowing the guide on zRAM on debian.org at you can easily set up zRAM as follows:\n\n apt install zram-tools\n echo -e \"ALGO=zstd\\nPERCENT=60\" | tee -a /etc/default/zramswap\n systemctl restart zramswap\n\nThis will allow zRAM to compress up to 60% of your normal RAM using the ZSTD compression algorithm which provides\nfast (de)compression with great compression ratios (around 5:1, which means for every 5 units of data it can compress\nit down to 1 unit).\n\nThis is only useful if you have spare CPU to give as the process will be using your CPU more than just using normal\nswap or just uncompressed RAM.\n\nTo mount it on boot, add this to your `/etc/fstab` file:\n\n /dev/zram0 none swap sw,pri=100 0 0\n\n### Swap\n\nThere's two main ways of setting swap up on Linux:\n\n- Swap partition: A separate partition where swap lives. This is faster than a swap file, but might be hard to achieve on a VPS due to having to modify the partition layout while the VPS is live.\n- Swap file: A normal file on your file system where swap space lives. This is more flexible as you can change the swap size at any point and you don't need to change your partition layout for it.\n\nI, personally, chose a swap file instead for both VPSes. This is how I set it up:\n\n fallocate -l 4G /swapfile # You can change the size at your accord\n chmod 600 /swapfile\n mkswap /swapfile\n\nAfter doing this, I added this to my `/etc/fstab` on my server:\n\n /swapfile none swap sw,pri=1 0 0\n\n### Finishing\n\nAfter setting swap up, you may want to reboot. Though in this case it's optional to reboot until the final reboot.\n\n## Private networking\n\nIf you were able to get both your storage VPS and processing VPS in the same location, do the following steps to enable\nand set private networking up. Do this for both of your VPSes:\n\n1. Log into HostHatch at .\n2. Go to your server's panel by clicking on its hostname.\n3. Go to the 'Network' tab.\n4. Press 'enable private networking'.\n5. Reboot the VPS.\n\nAfter enabling private networking, reboot the VPSes.\n\nAfter rebooting, log into your through ssh and follow the\n[private networking guide by HostHatch](https://docs.hosthatch.com/networking/#private-networking):\n\n1. Log in as root (either by pre-sshing as root or using the `su` command)\n2. 
Identify the interface name and MAC address using the command `ip -o link | grep 00:22` (the MAC address is the one that starts with `00:22:..`, and interface will usually be `enp2s0` or `eth1`)\n3. Identify the public IPv4 address of your VPS by running `curl -4 ip.me`. Remember the last digit. (for example last digit of `153.92.126.215` is `215`)\n4. Run `tee /etc/netplan/90-private.yaml` and paste in or type out the following text:\n\n network:\n version: 2\n ethernets:\n [interface name]:\n addresses:\n - 192.168.10.[last digit of the current server's public IP address]/24\n match:\n macaddress: 00:22:xx:xx:xx:xx\n dhcp4: no\n\nAfter you are done: press CTRL+D, and then reboot the VPS. (this is required for private networking to take change if running `/usr/sbin/netplan apply` won't work)\n\nNow you have private networking set up between the VPSes.\n\n### No private networking?\n\nNo worries - outside traffic will be blocked using IPTables, although, all the bandwidth will be taken into account while\nusing NFS and the performance might be worse.\n\n## IPTables rules\n\nAfter setting private networking up, you will most likely want to isolate the storage VPS from the rest of\nthe internet to avoid leakage of data. This can be done easily using Iptables and iptables-persistent.\nThis will only cover IPv4 rules, but this can be easily translated into ip6tables as well. I would recommend not\nusing IPv6 on the storage VPS as it is pretty useless in the case of a storage server, and it'll only be more\nwork to manage everything: keep it simple.\n\nFirstly, install the required dependencies:\n\n apt install iptables iptables-persistent\n\nThen create a script called `iptables.sh` as follows:\n\n #!/bin/sh\n\n # Add /usr/sbin to PATH\n export PATH=\"$PATH:/usr/sbin\"\n\n # Flush and discard all iptables policies\n iptables -F\n iptables -X\n\n # Set default policies\n iptables -P INPUT DROP\n iptables -P FORWARD DROP\n iptables -P OUTPUT ACCEPT\n\n # Accept loopback traffic\n iptables -A INPUT -i lo -j ACCEPT\n\n # Accept established and related connections\n iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT\n\n # Accept SSH connections on port 22\n iptables -A INPUT -p tcp --dport 22 -j ACCEPT\n\n # Accept TCP connections on NFS ports on server IPs\n iptables -A INPUT -s 192.168.10.[last digit of the storage server's public IP address]/32 -p tcp --dport 2049 -j ACCEPT\n iptables -A INPUT -s 192.168.10.[last digit of the processing server's public IP address]/32 -p tcp --dport 2049 -j ACCEPT\n\n # Rate limiting for new SSH connections\n iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --set --name DEFAULT --mask 255.255.255.255 --rsource\n\n # Drop SSH connections if more than 5 attempts occur within 60 seconds\n iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --update --seconds 60 --hitcount 5 --name DEFAULT --mask 255.255.255.255 --rsource -j DROP\n\n # Drop invalid packets\n iptables -A INPUT -m state --state INVALID -j DROP\n\n # Accept loopback traffic for outgoing connections\n iptables -A OUTPUT -o lo -j ACCEPT\n\n # Save iptables rules\n iptables-save >/etc/iptables/rules.v4\n\nYou may also want to add the following rules as well to block IPv6 traffic:\n\n # Block IPv6\n ip6tables -F\n ip6tables -X\n ip6tables -P INPUT DROP\n ip6tables -P OUTPUT DROP\n ip6tables -P FORWARD DROP\n iptables-save >/etc/iptables/rules.v6\n\nAfter creating this script, go into your HostHatch console and do this:\n\n1. 
Press on your server's hostname.\n2. Go into the 'Console' tab.\n3. Log in as root.\n4. Run the script.\n5. Enable the `netfilter-persistent` service: `systemctl enable netfilter-persistent`\n\nYou should do it this way because you may experience connection issues while applying these IPTables rules.\n\nThis script will protect your VPS from brute-force attacks on the SSH port and it'll cut off the VPS from\nthe rest of the internet for the most part.\n\n### Sysctl for IPv6\n\nIf you want to truly disable IPv6, you will need to edit `/etc/sysctl.conf` and add this to it:\n\n net.ipv6.conf.all.disable_ipv6=1\n net.ipv6.conf.default.disable_ipv6=1\n\nAfter which, run this as root to apply the settings:\n\n sysctl -p\n\nNow absolutely no IPv6 traffic will be available in the storage VPS.\n\n### Firewall with IPTables\n\nIf you want IPTables rules for your processing VPS, especially if you also allow IPv6, you are free to use\nmy `fw.sh` script located at :\n\n #!/bin/sh\n\n set -eu\n\n main() {\n for ip in iptables ip6tables; do\n echo '----------------------------------------------------------------'\n\n echo \"[$ip] Setting up iptables rules...\"\n\n echo \"[$ip] Flushing all rules...\"\n \"$ip\" -F\n \"$ip\" -X\n\n echo \"[$ip] Allowing established connections...\"\n \"$ip\" -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT\n\n echo \"[$ip] Allowing loopback interface...\"\n \"$ip\" -A INPUT -i lo -j ACCEPT\n \"$ip\" -A OUTPUT -o lo -j ACCEPT\n\n echo \"[$ip] Allowing SSH, HTTP, HTTPS, Email federation, Matrix federation, and XMPP federation on tcp...\"\n \"$ip\" -A INPUT -p tcp --dport 22 -j ACCEPT # SSH\n \"$ip\" -A INPUT -p tcp --dport 80 -j ACCEPT # HTTP\n \"$ip\" -A INPUT -p tcp --dport 443 -j ACCEPT # HTTPS\n \"$ip\" -A INPUT -p tcp -m multiport --dports 25,465,587,143,993,110,995,2525,4190 -j ACCEPT # Email federation\n \"$ip\" -A INPUT -p tcp --dport 8448 -j ACCEPT # Matrix federation\n \"$ip\" -A INPUT -p tcp -m multiport --dports 5222,5269,5223,5270,5280,5281 -j ACCEPT # XMPP federation\n\n echo \"[$ip] Rate limiting SSH traffic on tcp...\"\n \"$ip\" -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --set\n \"$ip\" -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --update --seconds 60 --hitcount 5 -j DROP\n\n echo \"[$ip] Dropping invalid packets on tcp...\"\n \"$ip\" -A INPUT -p tcp -m state --state INVALID -j DROP\n\n echo \"[$ip] Dropping other traffic...\"\n \"$ip\" -P INPUT DROP\n \"$ip\" -P FORWARD DROP\n\n echo \"[$ip] Rules:\"\n \"$ip\" -vL\n\n echo '----------------------------------------------------------------'\n done\n\n echo '[ICMP] Allowing limited ICMP traffic...'\n iptables -A INPUT -p icmp -m limit --limit 5/sec --limit-burst 10 -j ACCEPT\n iptables -A INPUT -p icmp -j DROP\n ip6tables -A INPUT -p icmpv6 -m limit --limit 5/sec --limit-burst 10 -j ACCEPT\n ip6tables -A INPUT -p icmpv6 -j DROP\n\n echo '----------------------------------------------------------------'\n\n echo '[iptables-save] Saving rules...'\n iptables-save | tee /etc/iptables/rules.v4\n\n echo '----------------------------------------------------------------'\n\n echo '[ip6tables-save] Saving rules...'\n ip6tables-save | tee /etc/iptables/rules.v6\n\n echo 'Meoww :3 done'\n }\n\n main \"$@\"\n\nMake sure no iptables or ip6tables rules are set on the server already so they don't get flushed and you experience\nnetworking problems.\n\n## NFS (storage server)\n\nIn this section, we will set up nfs-kernel-server on the *storage* server.\n\nFirstly do the 
prerequisite steps:\n\n1. Make sure you are logged in as root.\n2. Install the required dependencies: `apt install nfs-kernel-server nfs-common`\n3. Create the shared exports directory, I personally chose `/share/nfs`: `mkdir -p /share/nfs`\n4. Set up the correct ownership for the directories: `chown nobody:nogroup -R /share`\n5. Set up the correct permissions for the directories: `chmod 755 -R /share`\n6. Enable the NFS service: `systemctl enable nfs-kernel-server`\n\nNow, simply export the NFS share by editing `/etc/exports`:\n\n /share/nfs 192.168.10.[last digit of processing server's public IP](rw,sync,no_subtree_check,async)\n\nIf you are going to be using this share for database storage, make sure to remove the `async` flag as that may\nlead to data loss and/or corruption. I do that with PostgreSQL:\n\n /share/ 192.168.10.[last digit of processing server's public IP](rw,sync,no_subtree_check)\n\nNext, simply export the filesystems:\n\n exportfs -a\n\nAnd start the NFS service:\n\n systemctl start nfs-kernel-server\n\nNow, for the next steps, verify the available NFS versions:\n\n $ cat /proc/fs/nfsd/versions\n +3 +4 +4.1 +4.2\n\nRemember the biggest number that has a `+` in front of it.\n\nYou have successfully set NFS up on the storage server! The NFS server will only be accessible by\npurely the processing server and noone else.\n\n## NFS (processing server)\n\nNow, we are going to set up NFS and Cachefilesd on the processing VPS.\n\nFirstly do the prerequisite steps:\n\n1. Open `/etc/fstab`.\n2. Edit your `/` mount to have the following mount options: `rw,discard,errors=remount-ro,x-systemd.growfs,user_xattr,acl`.\n3. Reboot the VPS.\n4. Make sure you are logged in as root.\n5. Install the required dependencies: `apt install nfs-common`\n6. Make the NFS mountpoint: `mkdir -p /mnt/nfs`\n7. Set up correct ownership: `chown nobody:nogroup /mnt/nfs`\n8. Set up the correct permissions: `chmod 755 /mnt/nfs`\n\nNow open up your `/etc/fstab` and add this:\n\n 192.168.10.[last digit of the storage server's public IP]:/share/nfs /mnt/nfs nfs4 defaults,fsc,noatime,nodiratime,_netdev,x-systemd.automount,x-systemd.requires=network-online.target,timeo=600,rsize=65536,wsize=65536,hard,intr,nfsvers=[latest version of NFS available, such as 4.2],namlen=255,proto=tcp,retrans=2,sec=sys,clientaddr=192.168.10.[last digit of the processing server's public IP],local_lock=none,addr=192.168.10.[last digit of the storage server's public IP] 0 0\n\nFor database storage, you may want to modify these options to:\n\n 192.168.10.[same]:/share/[database path] /var/lib/[database path] nfs4 defaults,fsc,noatime,nodiratime,_netdev,x-systemd.automount,x-systemd.requires=network-online.target,timeo=600,rsize=65536,wsize=65536,hard,intr,nfsvers=[same],namlen=255,proto=tcp,retrans=2,sec=sys,clientaddr=192.168.10.[same],local_lock=none,addr=192.168.10.[same] 0 0\n\nDon't yet do anything. First, we will set Cachefilesd up (`fsc` mount option). This will give us better performance by being able to utilize the mass storage of the HDD server and the performance of the NVMe server:\n\n1. Install Cachefilesd: `apt install cachefilesd`.\n2. Edit `/etc/cachefilesd.conf` if needed. (or just use default configuration - it is okay)\n3. Edit `/etc/default/cachefilesd` and change the `RUN=no` to `RUN=yes`.\n4. Start and enable the cachefilesd service: `systemctl enable --now cachefilesd`.\n5. Check the status, and debug if needed: `systemctl status cachefilesd`.\n6. Done. 
You should now reboot the VPS.\n\nNFS is not successfully set up with caching. You can use the mountpoint as any mounted filesystem.\n\n## SSHD (SSH daemon) configuration\n\nOn the processing VPS you may want to use the following configuration **only after adding an unprivileged user, adding your public ssh key in ~/.ssh/authorized_keys, and testing it:**\n\nFirst run `rm /etc/ssh/ssh_host_* && dpkg-reconfigure openssh-server` and then edit `/etc/ssh/sshd_config`:\n\n ...\n Port 22\n AddressFamily any\n ...\n SyslogFacility AUTH\n LogLevel INFO\n ...\n PermitRootLogin no\n ...\n MaxAuthTries 3\n ...\n PubkeyAuthentication yes\n ...\n AuthorizedKeysFile .ssh/authorized_keys\n ...\n IgnoreRhosts yes\n ...\n PasswordAuthentication no\n PermitEmptyPasswords no\n ...\n KbdInteractiveAuthentication no\n ...\n UsePAM yes\n ..\n AllowAgentForwarding no\n AllowTcpForwarding no\n ...\n X11Forwarding no\n ...\n PrintMotd no\n ...\n TCPKeepAlive no\n ...\n UseDNS no\n ...\n Banner none\n ...\n AcceptEnv none\n ...\n Subsystem sftp /usr/lib/openssh/sftp-server\n ...\n ChallengeResponseAuthentication no\n\n KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256\n\n Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr\n\n MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com\n\n AuthenticationMethods publickey\n\n HostKey /etc/ssh/ssh_host_ed25519_key\n HostKey /etc/ssh/ssh_host_rsa_key\n HostKey /etc/ssh/ssh_host_ecdsa_key\n\n AllowUsers \n\nIf you also run a git server you may want to restrict it even more:\n\n Match User git\n X11Forwarding no\n AllowTcpForwarding no\n AllowAgentForwarding no\n PermitTTY no\n AuthorizedKeysFile /home/git/.ssh/authorized_keys\n PermitTunnel no\n ClientAliveInterval 300\n ClientAliveCountMax 0\n\nOn the storage VPS you may want to have a singular unprivileged user and only allow traffic from IPv4 (`AddressFamily inet`).\nYou may also want to specify a `Banner /etc/issue` to show a legal disclaimer by overwriting the issue and motd files in etc.\nFeel free to take this one:\n\n ********************************************************************************\n * WARNING: AUTHORIZED ACCESS ONLY *\n ********************************************************************************\n * *\n * You are accessing a private computer system owned by .......... and operated *\n * under the domain ....... This system, including all related equipment, *\n * networks, and network devices (specifically including Internet access), is *\n * provided only for authorized use. This system may be monitored for all *\n * lawful purposes, including to ensure that its use is authorized, for *\n * management of the system, to facilitate protection against unauthorized *\n * access, and to verify security procedures, survivability, and operational *\n * security. Monitoring includes active attacks by authorized entities to test *\n * or verify the security of this system. During monitoring, information may be *\n * examined, recorded, copied, and used for authorized purposes. Use of this *\n * system constitutes consent to monitoring for these purposes. *\n * *\n * Unauthorized or improper use of this system may result in civil and criminal *\n * penalties and administrative or disciplinary action, as appropriate. 
By *\n * continuing to use this system you indicate your awareness of and consent to *\n * these terms and conditions of use. LOG OFF IMMEDIATELY if you do not agree *\n * to the conditions stated in this warning. *\n * *\n ********************************************************************************\n\n System owned by Jane Dane - example.com\n\n## DNS servers\n\nFor best privacy, security, and generally reliable services - I recommend using [Quad9 DNS](https://quad9.net/).\nYou may use these DNS servers by editing `/etc/systemd/resolved.conf` and setting the following value as such:\n\n DNS=9.9.9.9#dns.quad9.net 149.112.112.112#dns.quad9.net 2620:fe::fe#dns.quad9.net 2620:fe::9#dns.quad9.net\n\nThen either reboot or run:\n\n systemctl restart systemd-resolved\n\n## Closing note\n\nThat's about it. Good luck and have fun with your new infrastructure!\n\n(btw that's basically the infrastructure ari.lt runs on at the moment, if I find any bottlenecks - I'll tackle them)\n\nMy storage server seems to be idling at about 100M of RAM and around 5% CPU on average, of course with spikes.\nThat play room might seem crazy, but the spikes are even crazier - keep it light and simple on the storage server!\nIt is *literally* responsible for your storage - be careful and make sure you understand what you are doing.\n\nCya next time!", + "content": "Hi!\n\nRecently I've migrated to [HostHatch](https://hosthatch.com/)\nas my hosting provider, and while switching (even before, actually)\nI noticed that my target plan (NVME 16 GB) had only 75 GB of NVMe storage. This is why I also bought\nStorage VPS 1 TB on the side for $5 which has an HDD so it is not as expensive.\n\nThis blog post is meant to serve as a guide to how to get the most out of your HostHatch VPS\nby using NFSv4.2 (or whatever the latest is at the time you're reading this), Cachefilesd,\nzRAM, swap, and IPTables.\n\n## Disclaimer\n\nWhile I strive to provide accurate and helpful information in this guide, please note that any actions you take based on\nthe content provided are at your own risk. I am not liable for any damages, data loss, or other issues that may arise from\nfollowing the instructions outlined in this post. Always ensure you have proper backups and consult with a professional\nif you're unsure about any steps. Happy optimizing!\n\n## Hardware\n\n- Processing VPS: NVMe 16 GB\n - 4 AMD EPYC cores (2 dedicated, 2 fair-shared)\n - 16 GB of DDR4 RAM\n - 75 GB of NVMe storage\n - 4 TB of network bandwidth\n - Location: Stockholm, Sweden. (or whatever you want as long as the location supports Storage VPSes, if you're planning on using private networking) (a person I know has experienced performance problems using Swedish VPSes, you might want to use another location)\n- Storage VPS: Storage 1 TB\n - 1 vCPU core\n - 1024 MB of RAM\n - 1000 GB of storage\n - 2500 GB of network bandwidth\n - Location: Same one as the processing VPS. (this will be useful when using private networking)\n\nWe have extremely limited resources on the storage VPS, so we will try to work around that.\n\n## Operating systems & software stack\n\nThis guide should work for pretty much all Linux-based operating systems. Most commonly it is Debian Linux,\nalthough nobody is stopping you from using another distribution, such as Alpine Linux, which may even decrease\nthe resource usage.\n\nPersonally, I chose Debian Linux because it is very versatile and it has huge software repositories. 
It worked\nfine for me over and over again and I believe it to be a very reliable choice.\n\nIf you use anything other than Debian or Debian-based (such as Ubuntu) - adjust the procedures as needed based\non your software stack.\n\n## Reverse DNS\n\nThis is mainly a convenience feature, but you might want to change the rDNS of your\nHostHatch VPS(es). To change the rDNS of your VPS, do the following steps:\n\n1. Log into HostHatch at .\n2. Go to your server's panel by clicking on its hostname.\n3. Go to the 'Network' tab.\n\nThen:\n\n- For IPv4\n 1. Click the arrow at the end of the IP row (looks like a gray `>` character at the edge of the row).\n 2. Enter your reverse DNS.\n 3. Press the confirm checkmark.\n- For IPv6, do the same steps, but for interface ID enter `0` the first time and then `1` the second time. This will ensure the best IPv6 rDNS compatibility: `::0` is oftentimes seen as a placeholder address, while `::1` should be your main IPv6 address. (if you enable IPv6 on HostHatch you get a whole /64 subnet)\n\n## zRAM and Swap space\n\nSwap space is, so to say, an extra bit of virtual RAM which your computer can fall back on if it runs out of physical RAM.\nzRAM is like swap, although it is compressed and lives entirely in memory.\n\nzRAM might be useful for the processing VPS: it'll use CPU time to compress and decompress memory, but it will let you\nget better use out of your RAM. Swap, on the other hand, might be more useful on the storage VPS due to its CPU and memory constraints.\n\nPersonally, I have set up zRAM and normal swap (with a lower priority) on the processing VPS, and normal swap on the storage VPS.\n\n### zRAM\n\nFollowing the guide on zRAM on debian.org at , you can easily set up zRAM as follows:\n\n    apt install zram-tools\n    echo -e \"ALGO=zstd\\nPERCENT=60\" | tee -a /etc/default/zramswap\n    systemctl restart zramswap\n\nThis will create a zRAM device sized at up to 60% of your RAM, compressed with the ZSTD algorithm, which provides\nfast (de)compression with great compression ratios (around 5:1, meaning every 5 units of data compress down to roughly 1 unit).\n\nThis is only useful if you have spare CPU to give, as compression will use more CPU than plain\nswap or uncompressed RAM.\n\nTo activate it on boot, add this to your `/etc/fstab` file:\n\n    /dev/zram0 none swap sw,pri=100 0 0\n\n### Swap\n\nThere are two main ways of setting up swap on Linux:\n\n- Swap partition: A separate partition where swap lives. This is faster than a swap file, but might be hard to achieve on a VPS due to having to modify the partition layout while the VPS is live.\n- Swap file: A normal file on your file system where swap space lives. This is more flexible as you can change the swap size at any point and you don't need to change your partition layout for it.\n\nI personally chose a swap file for both VPSes. This is how I set it up:\n\n    fallocate -l 4G /swapfile # You can change the size as you see fit\n    chmod 600 /swapfile\n    mkswap /swapfile\n\nAfter doing this, I added this to my `/etc/fstab` on my server:\n\n    /swapfile none swap sw,pri=1 0 0\n\n### Finishing\n\nAfter setting up swap, you may want to reboot, though in this case it's optional - you can wait until the final reboot.\n\n## Private networking\n\nIf you were able to get both your storage VPS and processing VPS in the same location, do the following steps to enable\nand set private networking up. Do this for both of your VPSes:\n\n1. Log into HostHatch at .\n2. 
Go to your server's panel by clicking on its hostname.\n3. Go to the 'Network' tab.\n4. Press 'enable private networking'.\n5. Reboot the VPS.\n\nAfter enabling private networking, reboot the VPSes.\n\nAfter rebooting, log into your VPSes through SSH and follow the\n[private networking guide by HostHatch](https://docs.hosthatch.com/networking/#private-networking):\n\n1. Log in as root (either by SSHing in as root directly or using the `su` command)\n2. Identify the interface name and MAC address using the command `ip -o link | grep 00:22` (the MAC address is the one that starts with `00:22:..`, and the interface will usually be `enp2s0` or `eth1`)\n3. Identify the public IPv4 address of your VPS by running `curl -4 ip.me`. Remember the last octet. (for example, the last octet of `153.92.126.215` is `215`)\n4. Run `tee /etc/netplan/90-private.yaml` and paste in or type out the following text:\n\n    network:\n      version: 2\n      ethernets:\n        [interface name]:\n          addresses:\n            - 192.168.10.[last octet of the current server's public IP address]/24\n          match:\n            macaddress: 00:22:xx:xx:xx:xx\n          dhcp4: no\n\nAfter you are done: press CTRL+D, and then reboot the VPS. (a reboot is required for private networking to take effect if running `/usr/sbin/netplan apply` doesn't work)\n\nNow you have private networking set up between the VPSes.\n\n### No private networking?\n\nNo worries - outside traffic will still be blocked using IPTables; however, all NFS traffic will count towards your bandwidth allowance and\nperformance might be worse.\n\n## IPTables rules\n\nAfter setting private networking up, you will most likely want to isolate the storage VPS from the rest of\nthe internet to avoid leakage of data. This can be done easily using iptables and iptables-persistent.\nThis section will only cover IPv4 rules, but they can easily be translated into ip6tables as well.
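\n\nFor example (an illustrative sketch, not part of the scripts below), an IPv4 rule is usually translated by simply swapping the binary:\n\n    # IPv4 rule\n    iptables -A INPUT -p tcp --dport 22 -j ACCEPT\n    # the same rule for IPv6\n    ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT\n\n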
I would recommend not\nusing IPv6 on the storage VPS as it is pretty useless in the case of a storage server, and it'll only be more\nwork to manage everything: keep it simple.\n\nFirstly, install the required dependencies:\n\n    apt install iptables iptables-persistent\n\nThen create a script called `iptables.sh` as follows:\n\n    #!/bin/sh\n\n    # Add /usr/sbin to PATH\n    export PATH=\"$PATH:/usr/sbin\"\n\n    # Flush all rules and delete user-defined chains\n    iptables -F\n    iptables -X\n\n    # Set default policies\n    iptables -P INPUT DROP\n    iptables -P FORWARD DROP\n    iptables -P OUTPUT ACCEPT\n\n    # Accept loopback traffic\n    iptables -A INPUT -i lo -j ACCEPT\n\n    # Accept established and related connections\n    iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT\n\n    # Rate limiting for new SSH connections\n    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --set --name DEFAULT --mask 255.255.255.255 --rsource\n\n    # Drop SSH connections if more than 5 attempts occur within 60 seconds\n    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --update --seconds 60 --hitcount 5 --name DEFAULT --mask 255.255.255.255 --rsource -j DROP\n\n    # Accept SSH connections on port 22 (this must come after the rate limiting rules, otherwise they would never match)\n    iptables -A INPUT -p tcp --dport 22 -j ACCEPT\n\n    # Accept TCP connections on the NFS port from the private network IPs\n    iptables -A INPUT -s 192.168.10.[last octet of the storage server's public IP address]/32 -p tcp --dport 2049 -j ACCEPT\n    iptables -A INPUT -s 192.168.10.[last octet of the processing server's public IP address]/32 -p tcp --dport 2049 -j ACCEPT\n\n    # Drop invalid packets\n    iptables -A INPUT -m state --state INVALID -j DROP\n\n    # Accept loopback traffic for outgoing connections\n    iptables -A OUTPUT -o lo -j ACCEPT\n\n    # Save iptables rules\n    iptables-save >/etc/iptables/rules.v4\n\nYou may also want to add the following rules to block IPv6 traffic:\n\n    # Block IPv6\n    ip6tables -F\n    ip6tables -X\n    ip6tables -P INPUT DROP\n    ip6tables -P OUTPUT DROP\n    ip6tables -P FORWARD DROP\n    ip6tables-save >/etc/iptables/rules.v6\n\nAfter creating this script, go into your HostHatch console and do this:\n\n1. Click on your server's hostname.\n2. Go into the 'Console' tab.\n3. Log in as root.\n4. Run the script.\n5. 
Enable the `netfilter-persistent` service: `systemctl enable netfilter-persistent`\n\nYou should do it this way because you may experience connection issues while applying these IPTables rules.\n\nThis script will protect your VPS from brute-force attacks on the SSH port and it'll cut off the VPS from\nthe rest of the internet for the most part.\n\n### Sysctl for IPv6\n\nIf you want to truly disable IPv6, you will need to edit `/etc/sysctl.conf` and add this to it:\n\n net.ipv6.conf.all.disable_ipv6=1\n net.ipv6.conf.default.disable_ipv6=1\n\nAfter which, run this as root to apply the settings:\n\n sysctl -p\n\nNow absolutely no IPv6 traffic will be available in the storage VPS.\n\n### Firewall with IPTables\n\nIf you want IPTables rules for your processing VPS, especially if you also allow IPv6, you are free to use\nmy `fw.sh` script located at :\n\n #!/bin/sh\n\n set -eu\n\n main() {\n for ip in iptables ip6tables; do\n echo '----------------------------------------------------------------'\n\n echo \"[$ip] Setting up iptables rules...\"\n\n echo \"[$ip] Flushing all rules...\"\n \"$ip\" -F\n \"$ip\" -X\n\n echo \"[$ip] Allowing established connections...\"\n \"$ip\" -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT\n\n echo \"[$ip] Allowing loopback interface...\"\n \"$ip\" -A INPUT -i lo -j ACCEPT\n \"$ip\" -A OUTPUT -o lo -j ACCEPT\n\n echo \"[$ip] Allowing SSH, HTTP, HTTPS, Email federation, Matrix federation, and XMPP federation on tcp...\"\n \"$ip\" -A INPUT -p tcp --dport 22 -j ACCEPT # SSH\n \"$ip\" -A INPUT -p tcp --dport 80 -j ACCEPT # HTTP\n \"$ip\" -A INPUT -p tcp --dport 443 -j ACCEPT # HTTPS\n \"$ip\" -A INPUT -p tcp -m multiport --dports 25,465,587,143,993,110,995,2525,4190 -j ACCEPT # Email federation\n \"$ip\" -A INPUT -p tcp --dport 8448 -j ACCEPT # Matrix federation\n \"$ip\" -A INPUT -p tcp -m multiport --dports 5222,5269,5223,5270,5280,5281 -j ACCEPT # XMPP federation\n\n echo \"[$ip] Rate limiting SSH traffic on tcp...\"\n \"$ip\" -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --set\n \"$ip\" -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --update --seconds 60 --hitcount 5 -j DROP\n\n echo \"[$ip] Dropping invalid packets on tcp...\"\n \"$ip\" -A INPUT -p tcp -m state --state INVALID -j DROP\n\n echo \"[$ip] Dropping other traffic...\"\n \"$ip\" -P INPUT DROP\n \"$ip\" -P FORWARD DROP\n\n echo \"[$ip] Rules:\"\n \"$ip\" -vL\n\n echo '----------------------------------------------------------------'\n done\n\n echo '[ICMP] Allowing limited ICMP traffic...'\n iptables -A INPUT -p icmp -m limit --limit 5/sec --limit-burst 10 -j ACCEPT\n iptables -A INPUT -p icmp -j DROP\n ip6tables -A INPUT -p icmpv6 -m limit --limit 5/sec --limit-burst 10 -j ACCEPT\n ip6tables -A INPUT -p icmpv6 -j DROP\n\n echo '----------------------------------------------------------------'\n\n echo '[iptables-save] Saving rules...'\n iptables-save | tee /etc/iptables/rules.v4\n\n echo '----------------------------------------------------------------'\n\n echo '[ip6tables-save] Saving rules...'\n ip6tables-save | tee /etc/iptables/rules.v6\n\n echo 'Meoww :3 done'\n }\n\n main \"$@\"\n\nMake sure no iptables or ip6tables rules are set on the server already so they don't get flushed and you experience\nnetworking problems.\n\n## NFS (storage server)\n\nIn this section, we will set up nfs-kernel-server on the *storage* server.\n\nFirstly do the prerequisite steps:\n\n1. Make sure you are logged in as root.\n2. 
Install the required dependencies: `apt install nfs-kernel-server nfs-common`\n3. Create the shared exports directory - I personally chose `/share/nfs`: `mkdir -p /share/nfs`\n4. Set up the correct ownership for the directories: `chown nobody:nogroup -R /share`\n5. Set up the correct permissions for the directories: `chmod 755 -R /share`\n6. Enable the NFS service: `systemctl enable nfs-kernel-server`\n\nNow, simply export the NFS share by editing `/etc/exports`:\n\n    /share/nfs 192.168.10.[last octet of the processing server's public IP](rw,async,no_subtree_check)\n\nIf you are going to be using this share for database storage, make sure to use `sync` instead of `async`, as `async` may\nlead to data loss and/or corruption. I do that with PostgreSQL:\n\n    /share/ 192.168.10.[last octet of the processing server's public IP](rw,sync,no_subtree_check)\n\nNext, simply export the filesystems:\n\n    exportfs -a\n\nAnd start the NFS service:\n\n    systemctl start nfs-kernel-server\n\nNow, for the next steps, verify the available NFS versions:\n\n    $ cat /proc/fs/nfsd/versions\n    +3 +4 +4.1 +4.2\n\nRemember the biggest number that has a `+` in front of it.\n\nYou have successfully set up NFS on the storage server! The NFS server will only be accessible by\nthe processing server and no one else.\n\n## NFS (processing server)\n\nNow, we are going to set up NFS and Cachefilesd on the processing VPS.\n\nFirstly, do the prerequisite steps:\n\n1. Open `/etc/fstab`.\n2. Edit your `/` mount to have the following mount options: `rw,discard,errors=remount-ro,x-systemd.growfs,user_xattr,acl`.\n3. Reboot the VPS.\n4. Make sure you are logged in as root.\n5. Install the required dependencies: `apt install nfs-common`\n6. Make the NFS mountpoint: `mkdir -p /mnt/nfs`\n7. Set up the correct ownership: `chown nobody:nogroup /mnt/nfs`\n8. Set up the correct permissions: `chmod 755 /mnt/nfs`\n\nNow open up your `/etc/fstab` and add this:\n\n    192.168.10.[last octet of the storage server's public IP]:/share/nfs /mnt/nfs nfs4 defaults,fsc,noatime,nodiratime,_netdev,x-systemd.automount,x-systemd.requires=network-online.target,timeo=600,rsize=65536,wsize=65536,hard,intr,nfsvers=[latest version of NFS available, such as 4.2],namlen=255,proto=tcp,retrans=2,sec=sys,clientaddr=192.168.10.[last octet of the processing server's public IP],local_lock=none,addr=192.168.10.[last octet of the storage server's public IP] 0 0\n\nFor database storage, you may want to modify these options to:\n\n    192.168.10.[same]:/share/[database path] /var/lib/[database path] nfs4 defaults,fsc,noatime,nodiratime,_netdev,x-systemd.automount,x-systemd.requires=network-online.target,timeo=600,rsize=65536,wsize=65536,hard,intr,nfsvers=[same],namlen=255,proto=tcp,retrans=2,sec=sys,clientaddr=192.168.10.[same],local_lock=none,addr=192.168.10.[same] 0 0\n\nDon't mount anything yet. First, we will set up Cachefilesd (the `fsc` mount option). This will give us better performance by combining the mass storage of the HDD server with the performance of the NVMe server:\n\n1. Install Cachefilesd: `apt install cachefilesd`.\n2. Edit `/etc/cachefilesd.conf` if needed. (or just use the default configuration - it is okay)\n3. Edit `/etc/default/cachefilesd` and change `RUN=no` to `RUN=yes`.\n4. Start and enable the cachefilesd service: `systemctl enable --now cachefilesd`.\n5. Check the status, and debug if needed: `systemctl status cachefilesd`.\n6. Done. You should now reboot the VPS.\n\nNFS is now successfully set up with caching.
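\n\nAs a quick sanity check (a minimal sketch - the path assumes the mountpoint above, and the FS-Cache statistics file is only present if your kernel exposes it), you can confirm that the share is mounted with the `fsc` option and that the cache is being used:\n\n    mount -t nfs4 | grep /mnt/nfs   # the mount options should include 'fsc'\n    df -h /mnt/nfs                  # should show the storage server's capacity\n    cat /proc/fs/fscache/stats      # FS-Cache hit/miss counters, if available\n\n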
You can use the mountpoint as any mounted filesystem.\n\n## SSHD (SSH daemon) configuration\n\nOn the processing VPS you may want to use the following configuration **only after adding an unprivileged user, adding your public ssh key in ~/.ssh/authorized_keys, and testing it:**\n\nFirst run `rm /etc/ssh/ssh_host_* && dpkg-reconfigure openssh-server` and then edit `/etc/ssh/sshd_config`:\n\n ...\n Port 22\n AddressFamily any\n ...\n SyslogFacility AUTH\n LogLevel INFO\n ...\n PermitRootLogin no\n ...\n MaxAuthTries 3\n ...\n PubkeyAuthentication yes\n ...\n AuthorizedKeysFile .ssh/authorized_keys\n ...\n IgnoreRhosts yes\n ...\n PasswordAuthentication no\n PermitEmptyPasswords no\n ...\n KbdInteractiveAuthentication no\n ...\n UsePAM yes\n ..\n AllowAgentForwarding no\n AllowTcpForwarding no\n ...\n X11Forwarding no\n ...\n PrintMotd no\n ...\n TCPKeepAlive no\n ...\n UseDNS no\n ...\n Banner none\n ...\n AcceptEnv none\n ...\n Subsystem sftp /usr/lib/openssh/sftp-server\n ...\n ChallengeResponseAuthentication no\n\n KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256\n\n Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr\n\n MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com\n\n AuthenticationMethods publickey\n\n HostKey /etc/ssh/ssh_host_ed25519_key\n HostKey /etc/ssh/ssh_host_rsa_key\n HostKey /etc/ssh/ssh_host_ecdsa_key\n\n AllowUsers \n\nIf you also run a git server you may want to restrict it even more:\n\n Match User git\n X11Forwarding no\n AllowTcpForwarding no\n AllowAgentForwarding no\n PermitTTY no\n AuthorizedKeysFile /home/git/.ssh/authorized_keys\n PermitTunnel no\n ClientAliveInterval 300\n ClientAliveCountMax 0\n\nOn the storage VPS you may want to have a singular unprivileged user and only allow traffic from IPv4 (`AddressFamily inet`).\nYou may also want to specify a `Banner /etc/issue` to show a legal disclaimer by overwriting the issue and motd files in etc.\nFeel free to take this one:\n\n ********************************************************************************\n * WARNING: AUTHORIZED ACCESS ONLY *\n ********************************************************************************\n * *\n * You are accessing a private computer system owned by .......... and operated *\n * under the domain ....... This system, including all related equipment, *\n * networks, and network devices (specifically including Internet access), is *\n * provided only for authorized use. This system may be monitored for all *\n * lawful purposes, including to ensure that its use is authorized, for *\n * management of the system, to facilitate protection against unauthorized *\n * access, and to verify security procedures, survivability, and operational *\n * security. Monitoring includes active attacks by authorized entities to test *\n * or verify the security of this system. During monitoring, information may be *\n * examined, recorded, copied, and used for authorized purposes. Use of this *\n * system constitutes consent to monitoring for these purposes. *\n * *\n * Unauthorized or improper use of this system may result in civil and criminal *\n * penalties and administrative or disciplinary action, as appropriate. By *\n * continuing to use this system you indicate your awareness of and consent to *\n * these terms and conditions of use. 
LOG OFF IMMEDIATELY if you do not agree *\n * to the conditions stated in this warning. *\n * *\n ********************************************************************************\n\n System owned by Jane Dane - example.com\n\n## DNS servers\n\nFor best privacy, security, and generally reliable services - I recommend using [Quad9 DNS](https://quad9.net/).\nYou may use these DNS servers by editing `/etc/systemd/resolved.conf` and setting the following value in the `[Resolve]` section:\n\n    DNS=9.9.9.9#dns.quad9.net 149.112.112.112#dns.quad9.net 2620:fe::fe#dns.quad9.net 2620:fe::9#dns.quad9.net\n\nThen either reboot or run:\n\n    systemctl restart systemd-resolved\n\n## Closing note\n\nThat's about it. Good luck and have fun with your new infrastructure!\n\n(btw, that's basically the infrastructure ari.lt runs on at the moment; if I find any bottlenecks - I'll tackle them)\n\nMy storage server seems to be idling at about 100 MB of RAM and around 5% CPU on average, of course with spikes.\nThat headroom might seem crazy, but the spikes are even crazier - keep it light and simple on the storage server!\nIt is *literally* responsible for your storage - be careful and make sure you understand what you are doing.\n\nCya next time!", "keywords": [ "cloud hosting", "server security",