Unraid 321 Backups With Duplicati


Unraid offers an affordable platform for network-attached storage, largely because it makes it easy to repurpose second-hand consumer hardware. However, the Unraid array and its parity process can lull you into a false sense of security about your data. It is essential to be cautious with this setup and to ensure that a proper backup process fully protects your data.

This article will discuss recommendations for Unraid backups and how you can implement them through a 3–2–1 backup strategy using a single container called Duplicati, which provides a highly polished feature set for an open-source product. The post assumes that you already have Unraid set up and are comfortable using Docker, the Unraid UI, and the Docker CLI.

Our Backup Problem

Unraid provides a great way to protect the data in its shares from disk failure. However, as the Unraid docs repeatedly stress, the array is not a backup solution: a complete data-protection setup should combine redundancy against hardware failure with backups of that data to other platforms. We do not just want to copy data to another location, though. We also want the solution to make it easy to recover data and return it to the Unraid system. Easy restoration is especially valuable when you deploy or develop applications on your Unraid system, for example when you have corrupted a database and need to restore it to a point in time.

Most importantly, a backup protects you from complete hardware failure of your NAS or corruption of the operating system. With all the data we keep on Unraid, we may also want tiered storage that moves data from fast "hot" storage (often more expensive) to slow "cold" storage (usually cheaper, but slower to access). Through tiered storage, we can maximise the protection of our data while balancing concerns like cost.

Backup Requirements

With our backup problem defined, we can lay out the requirements the backup solution must meet to be fit for purpose and provide the data protection we seek. Since this is a backup, plan early to make sure nothing is missed: you do not want to discover a missing requirement at the moment your data is at risk, or after it has already caused data loss.

3–2–1 Backup Strategy

A robust solution that meets all the backup requirements is commonly known as the 3–2–1 backup strategy. It spreads your data across different platforms and media so that it remains safe in all scenarios, even if one of your backup targets fails or becomes inaccessible. You can see a diagram of this backup strategy below.

This backup strategy revolves around the following three principles to protect your data from failure as robustly and cost-effectively as possible:

  • 3 Copies of Data: Three separate copies ensure that even if one copy is lost or becomes inaccessible, you still have another copy to rely on.
  • 2 Different Media: Beyond having three copies, you also want them spread across two different media types. This could be physical HDDs, cloud storage, or something more extreme (and more long-lasting and reliable for the cost) like tape backups.
  • 1 Offsite Location: Finally, at least one of your backup copies should live offsite from the other two, so that in a disaster scenario where your primary storage is destroyed (think fire, flood, etc.) you still have a way to restore your data. This offsite backup is usually the most expensive component of the solution, whether through the cost of transferring the data or of storing it at the remote location.

The 3–2–1 backup strategy is widely recommended as a best practice for data backup and disaster recovery, providing a balanced approach to data resilience across scenarios such as hardware failures, data corruption, or disasters. To balance cost, move from warmer to colder storage as you go from your local backup to your offsite one. Similarly, balance the backup intervals: hourly or daily for your local backup, weekly or monthly for offsite backups.

Ease Of Use

Modern backup tools can often be clunky and are not usually tailored towards individual users, preferring to focus on enterprise offerings and features. So, on a more consumer-focused platform like Unraid, ease of use is critical. Ideally, our solution will have a UI that lets you view backup logs, browse all available backups, and trigger restores, all in one place.

Security

Finally, as a backup tool has access to all of your most sensitive files, you want this solution to be secure. On an Unraid setup, the general requirements are: run the application inside a container, restrict the data mounted into the container as much as possible (ideally with read-only mounts), and, most importantly, encrypt the data both in transit and at rest, as per best practice when handling sensitive data.

The Implementation

For my homelab solution, I use Duplicati as the backup and restoration orchestration tool. What drew me to it is that it runs efficiently via Docker, is open source, has easy-to-understand job configuration, and offers a high degree of configurability. With Duplicati, getting an instance running is as easy as the following Docker command.
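
What follows is a minimal sketch using the linuxserver.io Duplicati image; the host paths, PUID/PGID values, and timezone are assumptions to adjust for your own system.

```bash
# Minimal Duplicati instance via the linuxserver.io image.
# PUID/PGID 99/100 match Unraid's default nobody:users user.
docker run -d \
  --name=duplicati \
  -e PUID=99 \
  -e PGID=100 \
  -e TZ=Etc/UTC \
  -p 8200:8200 \
  -v /mnt/user/appdata/duplicati:/config \
  -v /mnt/user/backups:/backups \
  -v /mnt/user:/source:ro \
  --restart unless-stopped \
  lscr.io/linuxserver/duplicati:latest
```

Once running, the web UI is available on port 8200. The `:ro` flag keeps the source mount read-only, satisfying the security requirement above; note that restoring files back into that share would then need a separate writable mount.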

To give Duplicati access to the data it manages, you simply mount that data into the container with a local volume mount. Using Duplicati, I created a config for each storage medium in the 3–2–1 backup solution; for me, these included the following (example target URLs are sketched after the list):

  • local: a dedicated share on the Unraid NAS
  • external: a dedicated external drive connected to the NAS
  • cloud: Backblaze B2 as an affordable S3-compatible object storage provider
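
As an illustration, each tier maps to a Duplicati backend URL roughly like the following; the bucket, paths, and keys are placeholders, and the B2 option names are from memory, so verify them against Duplicati's backend documentation before use.

```bash
# Hypothetical backend URLs for the three tiers (all names are placeholders):
LOCAL_TARGET="file:///backups"      # dedicated share mounted at /backups above
EXTERNAL_TARGET="file:///external"  # external drive, needs its own mount (e.g. an Unassigned Devices path)
CLOUD_TARGET="b2://my-bucket/unraid?b2-accountid=KEY_ID&b2-applicationkey=APP_KEY"
```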

I looked at each of the Unraid shares I wanted to back up and decided how frequently each should be backed up; here's the general list I came up with.


From this setup, I ended up with nine different configs to do the syncing: one for each combination of storage type and share/frequency, so 3 x 3 = 9 configs. In the future, this is an area I want to explore, as there may be ways to reduce the number of configurations. However, this setup allows finer-grained control, as some shares are more stable to back up than others; the application shares, for example, contain many files I filter out, like logs and runtime application databases.
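
For reference, a filtered on-demand run can be sketched with Duplicati's command-line client (assuming the duplicati-cli wrapper is on the container's PATH); the paths, passphrase, and exclude patterns are placeholders, and the exact filter syntax should be checked against the Duplicati CLI help before relying on it.

```bash
# Hypothetical on-demand backup of the appdata share to the local tier,
# filtering out logs and runtime databases (all values are placeholders).
docker exec duplicati duplicati-cli backup \
  "file:///backups/appdata" \
  /source/appdata \
  --exclude="*.log" \
  --exclude="*.db" \
  --passphrase="CHANGE_ME"
```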

 

Below is a diagram of how this backup was implemented using a single NAS server, an external HDD, and Backblaze B2 storage.

Wrapping Up

Using the 3–2–1 backup strategy, you can quickly build a solution that balances ease of use and cost-effectiveness by choosing different services for each backup tier. This strategy enables a high level of reliability while ensuring the most expensive cloud or remote storage is used the least.

Overall, I have been happy with the setup so far: it is easy to change configurations over time, and email notifications with failure reports let me know if anything goes wrong. In a homelab environment, I recommend it as a backup orchestration solution.
