How To Run Services on a Linux Server
I have been running my own services on Linux servers for a few years. It took a while to figure out what works best. Here's what I've learned.
First of all, all maintenance is done on headless servers via SSH.
Working this way might seem daunting at first, but it is truly unbeatable in terms of productivity and speed.
To easily log in via SSH, add your SSH public key to the server and then add the server to your ~/.ssh/config.
For example,
Host arnold
    HostName 203.0.113.12
    User rik
    IdentityFile ~/.ssh/arnold
Now you can log in via ssh arnold instead of having to manually type something like ssh rik@203.0.113.12 each time.
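If you don't have a key pair for the server yet, one common way to create one is with ssh-keygen; a sketch, using the filename from the config above (the user and host in the last step are examples):

```shell
# Create the key pair referenced by IdentityFile above.
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -f "$HOME/.ssh/arnold" -N "" -q

# Then copy the public key to the server, entering the password once:
#   ssh-copy-id -i ~/.ssh/arnold.pub rik@<server-ip>
```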
Next, Kubernetes. You probably don't need it: it was built to keep massive sites like YouTube up and running at all times, not to host a few personal services.
Instead, I went for Docker Compose and it has not disappointed. To get it running, follow the official installation instructions.
After that, say you want to run two public services on the same server.
To do that, I usually create a folder for each service.
Say the services are two blogs called Foo and Bar: create a folder for each and add a docker-compose.yml file to both.
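The layout from the steps above can be sketched as:

```shell
# One folder per service, each with its own docker-compose.yml.
mkdir -p foo bar
touch foo/docker-compose.yml bar/docker-compose.yml
```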
For example, say the blogs are using fx, then add the following docker-compose.yml to the foo folder:
services:
  foo:
    image: 'rikhuijzer/fx:1'
    container_name: 'foo'
    ports:
      - '3001:3000'
    volumes:
      - './data:/data:rw'
    restart: 'unless-stopped'
And run docker compose up -d inside the foo folder to start the service.
Next add another docker-compose.yml to the bar folder:
services:
  bar:
    image: 'rikhuijzer/fx:1'
    container_name: 'bar'
    ports:
      - '3002:3000'
    volumes:
      - './data:/data:rw'
    restart: 'unless-stopped'
And run docker compose up -d.
Now the two services are running on your server, available at ports 3001 and 3002, and will automatically restart when the server restarts.
But this doesn't make them available from the internet yet. To achieve that, you can use a so-called reverse proxy. The term sounds more complicated than it is: a reverse proxy simply receives all incoming requests and forwards each one to the right service.
To set up a reverse proxy, add another folder called caddy and place the following docker-compose.yml inside it:
services:
  caddy:
    image: 'caddy:2.10.0-alpine'
    network_mode: 'host'
    container_name: 'caddy'
    volumes:
      - '/data/caddy:/data:rw'
      - './Caddyfile:/etc/caddy/Caddyfile:ro'
    restart: 'unless-stopped'
Next, point two domains to your server IP.
Go into the DNS configuration for your domain, point foo.example.com to your server by setting the A record to your server's IPv4 address, and do the same for bar.example.com.
When this is done, anyone visiting foo.example.com or bar.example.com will reach your server, but your server will not respond yet.
To make that happen, also add a Caddyfile to the caddy folder:
{
    email email@example.com
    admin off
}

foo.example.com {
    reverse_proxy :3001
}

bar.example.com {
    reverse_proxy :3002
}
And start Caddy by running docker compose up -d inside the caddy folder.
When you do that, Caddy will automatically obtain and renew the HTTPS certificates for you.
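If you want to experiment first without running into Let's Encrypt's rate limits, Caddy can be pointed at the staging CA via the acme_ca global option. A sketch; remove the line again once everything works, since staging certificates are not trusted by browsers:

```
{
    email email@example.com
    admin off
    acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
```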
That's basically it: your services should now be available on the web.
This is essentially the setup that has worked for me for many years.
For maintenance, you can go overboard and try to fully automate it.
Or you can just set a recurring notification in your calendar to maintain the server every two weeks.
When I get the notification, I go into the server and check whether storage and memory usage look good.
Then I run sudo apt-get update and sudo apt-get upgrade.
And I check whether the containers need an update.
I prefer this over automation because it gives me a feel for how the server is doing, and it confirms that my SSH access still works.
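The manual check boils down to a handful of commands; a sketch of what I'd look at (the container commands are commented out since they assume Docker is present):

```shell
df -h /     # is the disk filling up?
free -h     # memory and swap usage
uptime      # load averages

# And for the containers:
#   docker ps
#   docker stats --no-stream
```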
For backups, a simple approach is to add a shell script on your own computer that pulls the data from the server via scp.
This can be as simple as
#!/usr/bin/env bash
set -euo pipefail
# Store the backup next to this script.
DIR="$(cd "$(dirname "$0")" && pwd)"
# Pull into a temporary directory first; only replace the
# previous backup after the transfer succeeded.
scp -r -v -s "arnold:/root/web" "$DIR/tmp-web-data"
rm -rf "$DIR/web-data"
mv "$DIR/tmp-web-data" "$DIR/web-data"
Run it every two weeks, or more often depending on how problematic data loss would be.
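If you'd rather not rely on a calendar notification for the backup itself, a cron entry on your own computer works too. A sketch, with a hypothetical script path; note that */14 in the day-of-month field resets at the start of every month, so it is only roughly every two weeks:

```
# m h dom mon dow  command
0 9 */14 * * /home/rik/backups/pull-web-data.sh
```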