After running my personal #Mastodon instance on a #DigitalOcean Droplet for a couple of weeks, a relatively small one at that with a single CPU and only 2GB of RAM, I was starting to feel some sluggishness when using either a desktop browser or a mobile app on my phone. My Droplet was constantly hovering at almost 100% memory utilization. I had peeked at #Oracle Cloud Infrastructure (#OCI) in the past when looking for cloud hosting solutions for my WordPress websites, and I had seen many articles and blog posts about running websites on their Free Tier service, which also includes their newer #arm64 instances. The interesting thing about the arm64 instances is that on the free tier you can get 4 CPU cores and up to 24GB of RAM. Setting up my Mastodon instance on even half of that would in theory be a huge jump in performance at zero cost. I couldn’t find much information about running Mastodon on arm64, except for one blog post, in Japanese, that talked about running an arm64 Docker image of Mastodon on Kubernetes. I really wasn’t feeling the urge to go down the road of all the setup required to run Kubernetes and Docker. It’s not that I don’t have the skill to manage it, but at the end of the day, for some personal stuff, I just don’t have any need to manage all of that. #HowTo
So most of this guide can be followed for setting up a brand new Mastodon instance on Oracle Cloud Infrastructure if you are just getting started, but the second half is more focused on moving an existing instance over from another hosting provider.
Generally the first thing you’ll need to do is go to cloud.oracle.com and log in, or sign up, for the service. I already had an account set up from some work I had done with OpenID Connect testing for the WordPress plugin that I maintain. Once you are logged in, head over to the Compute services to begin creating an instance.
Creating A New Instance
- Create an instance.
- Edit the Image and shape.
- Choose the Ubuntu 22.04 image.
- Choose the Ampere VM.Standard.A1.Flex shape, with the number of CPUs & RAM desired.
- Create a new Virtual Cloud Network or choose an existing one.
- Assign a public IPv4 address.
- Add an SSH public key for logging in remotely later.
Allowing Internet Traffic
You will need to set Ingress Rules for your Oracle Virtual Cloud Network to allow web traffic on both ports 80 & 443. The basic rundown is as follows:
- Open the navigation menu and click Networking, and then click Virtual Cloud Networks.
- Select the VCN you created with your compute instance.
- With your new VCN displayed, click the <your-subnet-name> subnet link. The public subnet information is displayed with the Security Lists at the bottom of the page, including a link to the Default Security List for your VCN.
- Click the Default Security List link. The default Ingress Rules for your VCN are displayed.
- Click Add Ingress Rules. An Add Ingress Rules dialog is displayed.
- Fill in the ingress rule as follows:
- Stateless: Checked
- Source Type: CIDR
- Source CIDR: 0.0.0.0/0
- IP Protocol: TCP
- Source port range: (leave-blank)
- Destination Port Range: 80
- Description: Allow HTTP connections
Then repeat the same rule with a Destination Port Range of 443 and a description of “Allow HTTPS connections” so that HTTPS traffic is allowed as well.
Preparing The OCI Instance
The Mastodon documentation has a rundown of the steps to perform. The first item is to configure SSH so that password logins are not possible. This should already be the default on the instance with Ubuntu 22.04, but follow the Mastodon instructions to confirm.
Note: Throughout the setup of the server, after installing system packages I would be instructed to reboot for those changes to take effect. In all of those cases I would proceed with a sudo reboot and then SSH back into the system to continue on.
Installing and Configuring fail2ban
This step was problematic. After installing fail2ban the configuration had problems. There were 2 changes required to the standard setup in the /etc/fail2ban/jail.local file in order to get the service running. I tracked those changes down through a couple of sources. The first was that sshd-ddos was not recognized as a service configured for fail2ban to monitor. The required change was to add filter = sshd under the [sshd-ddos] section, as documented on itgala.xyz.
The next issue was that fail2ban failed to start due to errors concerning the log file.
Failed during configuration: Have not found any log file for sshd jail
The issue was that fail2ban wasn’t properly looking to systemd for logging, essentially through journalctl, and an additional line of backend = systemd was needed, which I added to both the [sshd] and [sshd-ddos] sections. After those 2 changes I was able to successfully start the fail2ban service.
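Putting both fixes together, the relevant portion of /etc/fail2ban/jail.local ended up looking roughly like this (a sketch; your existing jail options may differ):

```ini
[sshd]
enabled = true
# Read auth events from the systemd journal instead of a log file.
backend = systemd

[sshd-ddos]
enabled = true
# The sshd-ddos jail has no filter of its own; point it at the sshd filter.
filter = sshd
backend = systemd
```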
Installing and Configuring the Firewall
With the #Ubuntu 22.04 image that was set up with the OCI instance, the iptables-persistent package was already installed. Initially I left this configuration alone, and I’ll revisit it later when I’m certain that making changes to the defaults set by OCI won’t cause issues.
Installing and Configuring the Required System Software
In the Mastodon installation guide this is referred to as “Installing from source”.
Pre-Installation of Required Software/Packages
Following the steps to install the required system packages and software, I found that I was only missing apt-transport-https from the first list of required packages; the rest were already installed.
$ sudo apt install -y curl wget gnupg apt-transport-https lsb-release ca-certificates
I added the #Node 16 repository exactly as indicated in the Mastodon setup guide. The default Node that comes with Ubuntu 22.04 is version 12, which is already EOL, and Node 14 will be EOL next year, so starting on Node 16 avoids having to upgrade to a whole new version so soon.
$ sudo curl -sL https://deb.nodesource.com/setup_16.x | bash -
There was a message about the possible need to have additional system packages installed which I went again and did to ensure I wouldn’t run into build issues later.
$ sudo apt install -y gcc g++ make
I proceeded to install Node as instructed by the initial setup output.
$ sudo apt install -y nodejs
And finally, as the pre-install message indicated, I also installed Yarn and ran the Yarn setup steps mentioned further down in the Mastodon install guide.
$ curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | sudo tee /usr/share/keyrings/yarnkey.gpg >/dev/null
$ echo "deb [signed-by=/usr/share/keyrings/yarnkey.gpg] https://dl.yarnpkg.com/debian stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
$ sudo apt update && sudo apt install yarn
For the Yarn setup steps it seems wise to run them as the root user instead of just using sudo.
$ sudo su - root
$ corepack enable
$ yarn set version classic
The official instructions for Mastodon have you set up the #PostgreSQL repositories directly, which I assume will install the latest version of PostgreSQL, v15 at the time of this post. The Ubuntu 22.04 repositories include version 14, which is supported until 2026, so I chose to install the included distribution version.
Note: I have found that if you later want to upgrade to a new LTS version of Ubuntu, having many custom repositories added can result in broken installs. Given this, it’s wise to be very selective about which additional repositories you add to the system.
The actual install of PostgreSQL happens in the install guide with a bunch of additional system packages, so proceed to the next step.
Despite the warning about adding additional repositories to the system I did however opt to setup the official Redis repository, as the Ubuntu 22.04 distributed version of #Redis is only at version 6.0 which is nearing the end of support with 6.2 & 7.0 ahead of it.
$ curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
$ echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
Additional System Package Install
Installing PostgreSQL, Redis, Nginx, and additional packages needed for the server happens at this point.
$ sudo apt update
$ sudo apt install -y \
  imagemagick ffmpeg libpq-dev libxml2-dev libxslt1-dev file git-core \
  g++ libprotobuf-dev protobuf-compiler pkg-config nodejs gcc autoconf \
  bison build-essential libssl-dev libyaml-dev libreadline6-dev \
  zlib1g-dev libncurses5-dev libffi-dev libgdbm-dev \
  nginx redis-server redis-tools postgresql postgresql-contrib \
  certbot python3-certbot-nginx libidn11-dev libicu-dev libjemalloc-dev
Enabling the Redis Server Service
After installing the Redis server software we need to configure Redis to be managed by systemd and enable the service.
Configure Redis by editing the /etc/redis/redis.conf file and changing the supervised option to systemd.
. . .
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous liveness pings back to your supervisor.
supervised systemd
. . .
Enable the Redis service.
$ sudo systemctl enable --now redis-server
As the Mastodon guide indicates, #Ruby should be set up via rbenv under the mastodon user that the Mastodon software will be installed and run as.
We’ll need to have the mastodon user created for much of the later setup, but an important piece needed to get things working after the migration was making sure that the /home/mastodon directory was world listable. I had things running, but a lot of the instance was broken and throwing 404 errors; that change made the final difference.
$ sudo adduser --disabled-login mastodon
$ sudo chmod a+x /home/mastodon
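To see why that chmod matters: the execute bit on a directory is the “traverse” permission, and the web server (e.g. Nginx running as the www-data user, an assumption about your setup) has to traverse /home/mastodon to reach the files under live/public, otherwise static assets 404. A quick demonstration on a scratch directory:

```shell
# The execute bit on a directory controls whether other users can
# traverse into it, independent of the read bit.
mkdir -p /tmp/perm-demo
chmod 750 /tmp/perm-demo       # "other" users cannot traverse
stat -c '%A' /tmp/perm-demo    # drwxr-x---
chmod a+x /tmp/perm-demo       # grant traverse to everyone
stat -c '%A' /tmp/perm-demo    # drwxr-x--x
```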
I simply hit <enter> at the prompts for additional details, going with the defaults, and wrapped up with Y to confirm the settings. Feel free to customize these to your liking.
rbenv Setup as the mastodon User
These steps are taken directly from the Mastodon guide.
$ sudo su - mastodon
$ git clone https://github.com/rbenv/rbenv.git ~/.rbenv
$ cd ~/.rbenv && src/configure && make -C src
$ echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
$ echo 'eval "$(rbenv init -)"' >> ~/.bashrc
$ exec bash
$ git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
At the time of this post the recommended version of Ruby is 3.0.4. Since we are managing Ruby for the mastodon user via rbenv, this can easily be upgraded as needed; the required version is typically noted in the Mastodon upgrade guides between versions.
$ cd
$ RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 3.0.4
$ rbenv global 3.0.4
Then, as per the official guide, install bundler and exit back to the previous user.
$ gem install bundler --no-document
$ exit
Setup and Configure the Required Software
I opted to go ahead and use pgTune to optimize the PostgreSQL database configuration. It is marked as optional, but knowing all of the activity that goes on with the distributed nature of Mastodon, I wanted to make sure things were optimized. I filled out the details based on my instance setup, which is 2 CPUs/12GB RAM with SSD for storage, as Oracle’s Block Storage is based on SSD disks. The configuration that was generated is as follows:
# DB Version: 14
# OS Type: linux
# DB Type: web
# Total Memory (RAM): 12 GB
# CPUs num: 2
# Data Storage: ssd

max_connections = 200
shared_buffers = 3GB
effective_cache_size = 9GB
maintenance_work_mem = 768MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 7864kB
min_wal_size = 1GB
max_wal_size = 4GB
Since I stuck with PostgreSQL 14, I dropped that configuration into /etc/postgresql/14/main/postgresql.conf, making sure to comment out any existing configuration items it duplicates, and then restarted PostgreSQL.
$ sudo systemctl restart postgresql
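One way to double-check that none of the pgTune settings are still set elsewhere in the file is a quick grep before restarting. This is a hypothetical helper, not from the official guide, and it only checks a few of the overridden settings:

```shell
# List uncommented occurrences of settings the pgTune block overrides,
# so duplicates can be commented out before the tuned values are added.
CONF=/etc/postgresql/14/main/postgresql.conf
if [ -f "$CONF" ]; then
  grep -nE '^[[:space:]]*(max_connections|shared_buffers|effective_cache_size|work_mem)[[:space:]]*=' "$CONF"
fi
```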
Creating the PostgreSQL User for Mastodon
Again this is pretty much taken exactly from the official guide.
$ sudo -u postgres psql
After entering the PostgreSQL console I executed:
CREATE USER mastodon CREATEDB;
\q
This is where we will actually install Mastodon as documented in the official guide.
$ sudo su - mastodon
Checkout the Mastodon Source Code
$ git clone https://github.com/mastodon/mastodon.git live && cd live
$ git checkout $(git tag -l | grep -v 'rc[0-9]*$' | sort -V | tail -n 1)
That second command automatically checks out the latest tagged release of Mastodon, skipping release candidates. At the time of this post the version checked out is v4.0.2.
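The pipeline inside that checkout can be sketched in isolation with a hypothetical tag list: grep -v drops release-candidate tags, sort -V orders the rest by version number, and tail -n 1 keeps the newest.

```shell
# Hypothetical tag list standing in for the output of `git tag -l`.
tags='v3.5.3
v4.0.0rc1
v4.0.0
v4.0.1
v4.0.2'
# Drop rc tags, version-sort, keep the newest stable tag.
printf '%s\n' "$tags" | grep -v 'rc[0-9]*$' | sort -V | tail -n 1   # prints v4.0.2
```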
$ bundle config deployment 'true'
$ bundle config without 'development test'
$ bundle install -j$(getconf _NPROCESSORS_ONLN)
$ yarn install --pure-lockfile
Note: At this point, if you are creating a brand new Mastodon instance, you can just follow the official installation guide starting at the “Generating a configuration” step. The rest of this guide will take you through the process of migrating an entire existing instance from its current hosting to the new OCI server.
Migrating an Existing Mastodon Server to the New OCI Mastodon Server
This is where things are going to get interesting, and there is a chance of a lot of downtime. If you are attempting this sort of migration with a Mastodon instance that has many users, I would make sure your users are on board with this change and the potential downtime that could result. Not only will you be taking your existing instance offline, but you’ll also be dealing with DNS propagation delays. One aspect that I haven’t researched or tested is what happens if someone has a post/toot of yours cached on their instance and they Like or Reply, or if they follow you and try to DM you, while your instance is completely down. I’m not sure whether there is a retry mechanism such that, within some expiration window, delivery will be reattempted so that once you are back online you’ll receive anything that was headed to your instance.
In order to make copying data between instances easier, I’d recommend setting up an SSH key pair so that you can connect to the new server from the old server. For the sake of long term security, and not having to worry about protecting keys long term, I’m going to create a key pair just for this migration. DigitalOcean has a decent guide for doing this that should work well enough for getting rsync going between the 2 servers. I’d recommend doing this as the mastodon user on the old server, and then you’ll connect to the new server as the ubuntu user.
Installing rsync on the New System
The OCI Ubuntu system is a minimal install and doesn’t come with rsync installed by default.
$ sudo apt install rsync
Create the New SSH Key Pair
You will do this on the old server, with no passphrase.
$ ssh-keygen -f ~/.ssh/id_rsa -q -P ""
Copy SSH Public Key to New Server
On the old server you can print out the public key.
$ cat ~/.ssh/id_rsa.pub
Then copy the output and insert it into the ~/.ssh/authorized_keys file on the new server. Once you save the new public key to the authorized_keys file, you should be able to test this by trying to SSH from the old server to the new server.
$ ssh ubuntu@<new-server-ip>
Shutting Down Your Old Instance And Preparing to Migrate
On the old server, stop all of the Mastodon services.
$ sudo systemctl stop 'mastodon-*.service'
Migrating the PostgreSQL Database
As the mastodon user on the old system, dump the PostgreSQL database.
$ su - mastodon
$ pg_dump -Fc mastodon_production -f backup.dump
Then rsync the backup over to the new system.
$ rsync -avz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --progress backup.dump <new-server-ip>:~/
On the new system, move the backup file into the mastodon home directory and change the ownership.
$ sudo mv backup.dump /home/mastodon/
$ sudo chown mastodon:mastodon /home/mastodon/backup.dump
Switch to the mastodon user on the new system to begin setup of the PostgreSQL database. Create the empty database for Mastodon, then import the old server dump.
$ sudo su - mastodon
$ createdb -T template0 mastodon_production
$ pg_restore -Fc -U mastodon -n public --no-owner --role=mastodon -d mastodon_production backup.dump
Copying Old System Mastodon Files to New System
Note: In order to simplify the copy process, without having to move files around after the rsync, I opted to set up an SSH key pair between the root users on both systems.
On the old system, switch to the root user and then rsync the files. This process will most likely take a long time. You may want to consider running the transfer under nohup so that you can end your remote SSH connection if needed but let the file transfer continue. Reference for using nohup with rsync.
$ rsync -avz /home/mastodon/live/public/system/ root@<new-server-ip>:/home/mastodon/live/public/system/
$ rsync -avz /home/mastodon/live/.env.production root@<new-server-ip>:/home/mastodon/live/
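A sketch of the nohup approach for the large transfer (the log path is my own choice, and the placeholder IP is the same as above):

```shell
# Run the long rsync under nohup in the background so it survives the
# SSH session ending; output goes to a log file instead of the terminal.
nohup rsync -avz /home/mastodon/live/public/system/ \
  root@<new-server-ip>:/home/mastodon/live/public/system/ \
  > /tmp/rsync-system.log 2>&1 &
tail -f /tmp/rsync-system.log   # watch progress; Ctrl-C only stops the tail
```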
Note: I had my Mastodon system/cache symlinked to a separate attached volume on my old instance, so performing that rsync took an extra step.
Copy additional system files for the Mastodon systemd service management, Nginx, and Let’s Encrypt.
$ rsync -avz /etc/systemd/system/mastodon-*.service root@<new-server-ip>:/etc/systemd/system/
$ rsync -avz /etc/nginx/sites-available/<mastodon-domain-name> root@<new-server-ip>:/etc/nginx/sites-available/
$ rsync -avz /etc/letsencrypt/ root@<new-server-ip>:/etc/letsencrypt/
In my case, on my old server the Nginx configuration file was named for my specific domain, mastodon.timnolte.com, so that is what I copied rather than just the default file indicated in the official documentation.
After copying over the Nginx configuration, you’ll need to ensure that the site configuration is enabled, then restart Nginx.
$ sudo ln -s /etc/nginx/sites-available/<mastodon-domain-name> /etc/nginx/sites-enabled/
$ sudo systemctl restart nginx
Building and Starting Mastodon
The final steps are to build Mastodon and then start things up.
$ sudo su - mastodon
$ cd live
$ RAILS_ENV=production bundle exec rails assets:precompile
$ RAILS_ENV=production ./bin/tootctl feeds build
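With the build complete, the systemd units copied over earlier still need to be enabled and started (back as the ubuntu user). The unit names below are the standard ones from a Mastodon source install; verify what you actually copied with systemctl list-unit-files 'mastodon-*':

```shell
# Reload systemd so it picks up the copied unit files, then enable and
# start the three standard Mastodon services.
sudo systemctl daemon-reload
sudo systemctl enable --now mastodon-web mastodon-sidekiq mastodon-streaming
```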
Wrapping Things Up
So the steps I outlined were not necessarily completed in the order I’ve posted here. As I ran into issues along the way, I determined at what point in this process a fix, or the changes required to prevent the need for a fix, should be made in order to keep things smooth for anyone else attempting to follow these instructions.
Post-Migration Optional Steps
The following items are a quick list of follow-ups I did once the migration was complete. These items are entirely optional.
- Setup Postfix for Gmail SMTP Relaying – This was to allow system emails to be reliably sent out. #Postfix
- Enable Automatic Updates – This was generally already partially setup but I reconfigured to my liking.
- Setup Boot Volume Automatic Backups – The instructions I found were geared towards the separate Block Volumes, but by editing the Boot Volume for the instance I was able to add the Backup Policy I wanted, which OCI defines as Policy-Based Backups. #backups
- Set a Password for the ubuntu Account – This involved switching to the root user with sudo su - root and running passwd ubuntu to set a password.
- Low Disk Space Notifications – OCI doesn’t appear to have alarm monitoring for disk usage, additionally for monitoring an instance you are limited to only 2 alarms on the free tier. So some internal server monitoring setup is required.
- Adding Support for IPv6 – New instances don’t seem to include IPv6 addresses by default. In order to be forward compatible, I wanted to make my instance available directly via IPv6 networks.
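As a starting point for the low disk space item above, a minimal check like this can run from cron on the instance (the threshold, message, and use of the root filesystem are my own choices; pair the output with the Postfix relay to get it emailed):

```shell
# Warn when usage on / crosses THRESHOLD percent; silent otherwise.
THRESHOLD=90
usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$usage" -ge "$THRESHOLD" ]; then
  echo "WARNING: root filesystem at ${usage}% capacity"
fi
```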