First, a short history of Slackware in case you have not heard of it. Slackware is a Linux distribution created in 1993 by Patrick Volkerding. It was initially based on the Softlanding Linux System, has since become the basis of many other Linux distributions, and is one of the oldest distributions still maintained. One notable distribution derived from Slackware is SUSE Linux.
Because so many distributions build on top of a common Slackware core, packages built for one can often be installed across the whole family. For Unraid users, this means that while installing extra packages is not the most accessible experience, you can still draw on the wide variety of Slackware packages available.
Unraid utilises the /boot/extra/ folder to look for Slackware package files. Files added to this folder beforehand are picked up automatically and installed during the boot process. This is a beneficial feature for anyone trying to keep their system running the latest version of favourite applications that are not installed by default as part of the Unraid operating system, and it saves the time and effort otherwise spent downloading and installing them manually via the Unraid go file or the terminal.
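As an aside, if you do prefer the go file route just mentioned, a minimal sketch of that approach (assuming the default Unraid go file location of /boot/config/go and the git package used later in this post) could look like this:

# Append an install command to the Unraid go file so the package is installed on every boot
echo "upgradepkg --install-new /boot/extra/git-2.43.1-i586-1.txz" >> /boot/config/go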
Now, let's look at the complete process for installing Slackware packages on Unraid to extend its software functionality for your needs. First, where can you find these Slackware packages? You can search the Slackware Linux project's package mirrors to see what standard Linux packages are available.
From this UI, you can select a package, download it to your machine, and then move the package file manually onto your Unraid host.
Or you can download packages directly via the terminal. Create a folder inside /tmp and run the following commands to download the package into it.
cd /tmp
mkdir slackware-pkgs && cd slackware-pkgs
curl -O http://mirrors.slackware.com/slackware/slackware-current/slackware/d/git-2.43.1-i586-1.txz
Once downloaded, move the package files into the /boot/extra/ folder.
mv /tmp/slackware-pkgs/git-2.43.1-i586-1.txz /boot/extra
To ensure your changes are applied on a system restart, the downloaded package files must be placed into the /boot/extra folder. On a system restart, Unraid will check this folder for valid .txz files and attempt to install the packages onto the system; any failed installs will be skipped. If you don't want to wait for a system restart to try out a Slackware package, you can also install it right away with the following command.
upgradepkg --install-new /boot/extra/git-2.43.1-i586-1.txz
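To confirm the install worked immediately, a quick sanity check (using the git package from the example above) could be:

# Verify the freshly installed package is on the PATH and reports its version
which git && git --version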
You should now have a better idea of how Slackware packages can easily extend your Unraid server's functionality. By simply adding these packages to your server or using the CLI tool, you can begin installing further utilities like Python, Ansible, Terraform and more.
I have recently gone from running the most minimal HTTP version of the registry on my machine, to support my local Docker image development workflows, to running my own private Docker registry available to private and public hosts with access control. I'll run you through all the steps and gotchas so you can set up whatever kind of registry you need. It has become a key part of my DevOps infrastructure and probably will be for you as well once you see how easy it is.
You can start by running the official registry image from Docker for a minimal Docker registry setup. This will start an HTTP version of the server, without access control, accessible on port 5000.
$ docker run -d -p 5000:5000 --name registry registry:latest
Once the registry image has been pulled and is up and running on your machine, you are ready to push your built images to it. Prefix your image tag with your host and port localhost:5000 whenever you tag or push to that registry.
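For instance, assuming you have already built a local image called test-image, you would first tag it against your registry host and port:

$ docker tag test-image localhost:5000/test-image

And then push it with the command below.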
$ docker push localhost:5000/test-image
If you get the following error when trying to push to your HTTP registry, you will need to do some configuration to work around a security restriction: the Docker daemon requires registries to use HTTPS unless the host is whitelisted as insecure.
$ docker push localhost:5000/ubuntu-local
Using default tag: latest
Error response from daemon: Get https://localhost:5000/v2/: http: server gave HTTP response to HTTPS client
To work around this error for local testing, we can configure our Docker daemon to allow HTTP connections to the local Docker registry. Add the following config to your Docker daemon.json file and restart the Docker service on your machine for the settings to take effect.
{
  "insecure-registries": ["localhost:5000"]
}
You can find this file at /etc/docker/daemon.json on a Unix-based machine and C:\ProgramData\docker\config\daemon.json on Windows. When running Docker Desktop for Windows or Mac, you can also edit it via the Docker Desktop GUI.
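On a systemd-based Linux host, restarting the daemon so the new setting takes effect is typically a single command (assuming systemd manages your docker service):

$ sudo systemctl restart docker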
Once you have restarted Docker, you should be able to push to the HTTP registry. You can read more about testing a local insecure HTTP registry at the following docs.
https://docs.docker.com/registry/insecure/
An HTTP Docker registry should only be used for local development, testing, or over a secure internal network, so use it at your own risk. After the image has been pushed to the registry, you can verify that your image is now available for consumption with the following command against your registry's API, or by calling docker pull with your local registry-tagged image.
$ curl -X GET http://localhost:5000/v2/_catalog
{"repositories":["my-image"]}

$ docker pull localhost:5000/ubuntu-local
Now, if you would like to restrict who can and can't write to your Docker registry, you can force users to log in to the registry before reading from or writing to it. In its simplest form, users and their passwords are handled in an htpasswd file. You can generate the password file for your users with the following commands, switching out the user and password as needed for your use case.
$ mkdir auth
$ docker run --rm \
    --entrypoint htpasswd \
    httpd:2 -Bbn testuser testpassword >> ./auth/htpasswd
With this password file generated for your users, you can mount the file into your Docker registry container and configure the REGISTRY_AUTH_HTPASSWD_PATH environment variable to point to this password file inside the container. You should also configure the REGISTRY_AUTH and REGISTRY_AUTH_HTPASSWD_REALM environment variables for basic auth like in the snippet below.
$ docker run -d \
    -p 5000:5000 \
    --restart=always \
    --name registry \
    -v "$(pwd)"/auth:/auth \
    -e "REGISTRY_AUTH=htpasswd" \
    -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
    -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
    registry:2
Your registry should now require basic authentication with the credentials you created before it will accept pushes. Before you push or pull from your access-controlled Docker registry, you should configure Docker for that registry by running the login command.
$ docker login -u testuser -p testpassword localhost:5000
You can now push and pull like normal from your private docker registry.
So now that you have a local Docker registry, you will want to do a few more things if you plan to make it externally accessible. For private or public Docker registries that are externally accessible, you will want to run them over HTTPS, so people downloading your images can be confident the images are coming from who you say you are. Whatever your use case, you will want to read Docker's documentation to understand your registry's security and make it work for your setup and requirements.
https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry
You should configure TLS for any externally accessible Docker registry to assure the consumers of your images that the image data they receive is coming from who they expect. Setting this up is easy with the Docker registry image: mount the .crt and .key files that you might generate with a tool like certbot into the container, and set the REGISTRY_HTTP_TLS_CERTIFICATE and REGISTRY_HTTP_TLS_KEY environment variables to the paths of your domain's certificate and key inside the container.
$ docker run -d \
    --restart=always \
    --name registry \
    -v "$(pwd)"/certs:/certs \
    -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
    -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
    -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
    -p 443:443 \
    registry:2
Once the image is up and running, you will be able to push and pull from the registry by using your domain as the registry host.
$ docker pull node
$ docker tag node myregistry.domain.com/dev-node
$ docker push myregistry.domain.com/dev-node
$ docker pull myregistry.domain.com/dev-node
This setup will not trigger the HTTP error from earlier that required whitelisting the registry host, and it is therefore the recommended way to have your Docker registry serve its images to your users.
If you already have a reverse proxy on your network doing your SSL termination, you can offload SSL to the reverse proxy and continue running your registry over HTTP. You can also have user authentication performed in the reverse proxy if that is where you centralise SSO for your network. The Docker registry docs are the best reference here, with a complete NGINX config example to start from, including all the necessary paths and redirects to serve the API properly and handle the possibility of multiple Docker versions connecting and implementing different versions of the Docker API.
https://docs.docker.com/registry/recipes/nginx/
Running a Docker registry is very easy with the registry image provided by Docker, and its configuration is flexible enough to work for whatever setup you need. One important thing we didn't discuss here that you might want to consider is the built-in storage backends for common object storage services like S3, Azure, etc. You can read more about them here.
https://docs.docker.com/registry/storage-drivers/
This will work perfectly for scaling your setup to multiple users, or if you plan to load balance the service and share its underlying storage.
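As a rough sketch of what that can look like, the registry's storage driver can be selected through its configuration, which can also be supplied as environment variables. The region, bucket and credentials below are placeholders, so check the storage driver docs linked above for the full set of options:

$ docker run -d \
    -p 5000:5000 \
    --name registry \
    -e REGISTRY_STORAGE=s3 \
    -e REGISTRY_STORAGE_S3_REGION=us-east-1 \
    -e REGISTRY_STORAGE_S3_BUCKET=my-registry-bucket \
    -e REGISTRY_STORAGE_S3_ACCESSKEY=MY_ACCESS_KEY \
    -e REGISTRY_STORAGE_S3_SECRETKEY=MY_SECRET_KEY \
    registry:2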
If you have built computers in the past or live in a household with multiple computers, you may want to build a NAS (Network Attached Storage) to provide storage accessible to all machines on the network. This can be used to run a media server, a game server, or simply a file server.
In this article, we will talk about how you effectively plan an Unraid build so you can avoid some common mistakes during the journey. Some hardware will be recommended for your build, and some good considerations for the server's future to ensure you don't build yourself into a corner. Some great resources for going further with your Unraid server will also be shared to keep you learning.
This post assumes that you have built a computer before, or at least opened one up to change some parts, but that is okay if you don't have experience with servers or storage.
The first step to getting started with your Unraid NAS build is to think about what hardware you want to use. The key factors are the storage you need and the compute (Docker containers and/or VMs) you plan to run on the machine, as these will drive what hardware you want to buy.
The amount of storage you want on your machine is the first thing to consider, and it will depend on your requirements. For example, if you are using it for media storage, you will want a lot of bulk storage using a set of spinning HDDs, but if you are using it mainly for compute, you will want more fast SSD storage and raw capacity may not be a concern for you.
One consideration in your build is Unraid's requirement for a parity disk. This disk stores parity data calculated from all the other disks in the array, which is used to rebuild a failed disk and is verified during a parity check. In Unraid, you can have either 1 or 2 parity disks for redundancy, and each must be equal to or larger than the largest non-parity disk in use. You can find more in-depth docs on how parity works on Unraid below.
Next, consider the computing components of the server: the CPU and RAM. If you plan to use Docker or KVM heavily on your machine, spec these components with a bit more headroom. But if you are using your server purely as a storage server, you can save a lot of money here, including the ongoing cost of the extra energy needed to run high-power components that are not being used.
Another consideration is leaving room for upgrades in your build if you need more storage. This can be done by leaving empty SATA ports on your motherboard.
If you have your build planned, consider again the importance of leaving room for upgrades in your system. Since Unraid is focused on storage, you will always want to ensure you have space to grow your machine's resources if needed.
This can be achieved by leaving one or two empty spots for hard drives or SSDs (for now). If you plan for this early, it will save you extra money later from having to get a new case or rebuild your system completely.
Now that we have covered what you need to be thinking about when planning your parts list, let's cover the minimum recommended hardware for a good, stable system. For a mid-range Unraid build, you will want to get:
Also, consider getting a PCI or PCIe SATA controller card for faster read/write speeds. The Promise TX4 is the most widely trusted PCI SATA controller card, but it is slow by today's standards. For a PCIe card, look for one that uses the latest SATA version.
If you're looking for other people's completed builds and parts lists, below are some hardware recommendations for a mid-range Unraid build for storage and virtualisation.
So now you should be close to having your Unraid server running or planning how to build one. For the future of your server, you can upgrade it or modify it more for your needs, so check out some of these great resources for educational content on administering Unraid.
Spaceinvader One YouTube channel
If you're looking to build your homelab or Unraid server, check out the following link for some HBA cards you can use to expand your NAS storage capability.
If you are in the market for some homelab or home server products, check our store's products.
You can also find some of our other blog posts.
Connect on our social media accounts over on Twitter, Facebook and Instagram.
Engineers end up with a lot of small notes, diagrams or doodles that need to be jotted down to get the job done. So why not invest in a nice notebook they can use daily on the job, with a friendly callback to what they are doing, like the linked product?
This is a great gift you can keep topping up year after year with more exciting notebooks, and they will always get used eventually. If you are environmentally conscious about paper usage, you can get notebooks that let you switch out the pages for a fresh set, so the cover can be kept for years to come.
Every engineer has come up against that moment when they are deep into a problem, thinking away, but they need something to fiddle with to keep that thinking ball rolling.
Studies have even found that fiddling stimulates your thinking and increases your recall. These “useless boxes” come in many different shapes and sizes with a single or many switches and various patterns they run through.
Once again, any good software is developed mainly with coffee, lots and lots of coffee. So to keep desks free of water rings, these circuit board coasters are a great touch for keeping a workspace clean.
The following product uses upcycled circuit boards, making sure what can be diverted from landfill gets a second use in our homes.
Engineers are natural tinkerers, always looking for solutions to the big problems in the world, so a perfect gift is a sandbox that provides endless possibilities to build things. The following gift is a Raspberry Pi computer kit that can create simple machines and automation using the provided sensors and connectors. With this kit, you can make many things based on guides, such as motion sensors and arcade machines, or create your own inventions.
Every engineer is a tinkerer at heart, so they will always need screwdrivers and an assortment of different bits to take apart and repair (or have a poke around) some of those products or electronics around the house.
The iFixit toolkit is an excellent pack that includes just about everything you might need to take apart almost any electronic or consumer product sold today. And if you ever need a guide to putting the thing back together without extra screws at the end, you can always check out their helpful teardown and repair guides for many everyday tech products.
For any engineer, constant learning is an important part of the profession. But especially in the software industry, as the landscape changes so fast, keeping up with new technologies and software frameworks can help them build new things and stay productive as engineers.
With a Udemy gift card, the engineer in your life can select from thousands of courses ranging from topics in software engineering and even topics outside of the engineering profession, such as business, finance and everyday life skills.
If you have been using computers for a while, you may be familiar with the blue screen of death and the dread it can cause when you see it depending on your current work. But it would also make a fantastic graphic for a T-shirt for the engineer in your life. Check out the following product that will get giggles around the office.
If a service goes down or you successfully release that new feature, this little machine will make celebrating that much easier. The Bev by BLACK+DECKER makes making your favourite cocktails easy with their capsule and your favourite spirits to start mixing up drinks in seconds.
Keeping your focus can be nice and easy with timers, be they 5, 10, 15 or longer, to keep you focused on what needs to be done.
Check out the following product for a sleek cube you can use as a timer with a simple turn to the face you want to use for the timer value.
Hopefully, you should now have some gift-giving ideas for the engineer in your life. It can be challenging, but you should see a theme between these gifts of providing something technical and fun in a single package.
If you want to check out some other posts on software and engineering, you can find some of the posts below.
In this post, we will discuss the basic history of the Dell H200 HBA card and its applications in server environments, where the "new" units you see for sale come from and how they reach the market, and finally the firmware running on the card and what that means for the features available to you when you install it in your system. If you are thinking about picking up the card, you can check out the following kit available in our store.
The Dell H200 is a standard card available in Dell's PowerEdge server series, used to connect the system to the backplane of the custom PowerEdge case. This configuration can be seen in the photo below.
Across the range of PowerEdge SKUs and generations, the Dell H200 has been a critical component, as it provides the range of RAID and JBOD features required for various use cases from computing to storage.
The Dell H200 is most commonly found in Dell PowerEdge servers where, as previously mentioned, it is used as an internal RAID card connecting the system to the drives attached to the case's backplane via MiniSAS 8087 cables. Using forward breakout MiniSAS 8087 to SATA cables (the cable below), you can also connect the card to regular hard drives without a backplane. This makes the card very flexible for any use case where you need storage and the motherboard does not provide enough connectivity or storage features.
Today the cards have been discontinued from new production. Any cards you find on marketplaces will be second-hand, refurbished or grey market cards. Sourcing these cards is discussed in more detail in the next section.
So if you are in the market for one of these cards for your homelab or home server, you may see a lot of listings and wonder, "Why are there so many cards available if the card has been discontinued?". Even though the cards have been discontinued, grey market factories still produce them if they have the machines set up to do so or still have production quotas on existing orders. Because these cards skip the usual distribution channel after they are produced, they are sold as refurbished on the second-hand market but are essentially new units. As existing deployments of PowerEdge systems are decommissioned when their warranty periods reach end of life, you will also find a lot of their internal components recycled back onto the second-hand market, leading to even more of these cards being available for less critical deployments.
Due to these two factors, these cards are very prevalent, and you can find them for affordable prices on eBay or our store here at the below link:
Along with the physical hardware that provides the storage features, the Dell H200 runs software in the form of firmware, which implements features like RAID and controls how the card manages the connected devices. Depending on your application for the HBA card, you may want to choose a different firmware to remove or unlock some features.
By default, the H200 firmware enables the disks to be managed as RAID devices and exposed to the operating system as a single RAID disk. Suppose you are running a storage server with something like Unraid or FreeNAS, or using a file system like ZFS, where disk devices need to be exposed transparently to the operating system so the software can manage the disks and the storage on them. For these storage implementations, the easiest approach is to run the Dell H200 as a JBOD (Just a Bunch Of Disks) card that exposes the disks, with all of their S.M.A.R.T. data, directly to the operating system. This is done by flashing the card with the LSI 9211-8i firmware, which unlocks the card's IT mode. You can check out the following guide for detailed information on how to flash these cards' firmware to your desired version.
https://www.dell.com/community/PowerEdge-HDD-SCSI-RAID/R515-with-H200-with-IT-firmware/td-p/3645137
You should now have a better idea of the Dell H200 HBA card's history, where some of these “New” units are coming from, and how the firmware running on the HBA card plays a vital role in the features you have available on the card. If you're looking to pick up a Dell H200, the Server Labs Aus store has Dell H200 kits with the card and connectors you need to get the most out of all the available storage bandwidth.
If you're looking for any other HBA cards or kits, you can view our full range in the following collection in our store.
All applications running in your homelab generate logs or metrics describing what the service is doing. To collect, store and visualise them, you will need a stack of applications, one for each component involved. In this stack, Grafana handles visualisation and alerting for your logs and metrics, Prometheus collects and stores your metrics, and Loki ingests log files from your hosts or containers.
With each part of the monitoring stack being its own application, the stack is very configurable for any system you are running, allowing you to use different log message formats or protocols. Check out this guide for information on setting up the stack's components and getting logs and metrics into Grafana, opening up a range of new things to explore in the world of logging, monitoring and observability.
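As a rough illustration of how lightweight the stack is to try out, the sketch below starts each component as a standalone container on its documented default port. Treat it as a starting point for experimenting rather than a production deployment, and refer to the guide above for wiring up persistent storage and scrape configs:

# Start Prometheus (metrics), Loki (logs) and Grafana (dashboards) on their default ports
$ docker run -d --name prometheus -p 9090:9090 prom/prometheus
$ docker run -d --name loki -p 3100:3100 grafana/loki
$ docker run -d --name grafana -p 3000:3000 grafana/grafana
# Then add Prometheus and Loki as data sources in the Grafana UI at http://localhost:3000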
If you are running containers in your homelab, Portainer is a fantastic application to run to get more administration features over your containers. Through Portainer, you get easier access to your available images and can run containers with many tools for monitoring and debugging your application workloads, all through the easy-to-use web dashboard seen below.
The application is built as a single container, making it easy to integrate into any environment. Portainer can connect to many container runtime engines running locally or on cloud providers, including Docker, Docker Swarm, Kubernetes and other container runtimes.
You may be like many of us with a Homelab and are running some media automation applications to gather or view the files. If you haven't heard of Plex, it is one of the leading free applications to view your media files, including personal videos, music, movies and TV.
You can create a Plex account for free and use the application's essential features, but some features, like downloading media onto your mobile devices for offline viewing, are paid for through a Plex Pass subscription. Plex often runs deals on lifetime Plex Pass subscriptions, so keep an eye on your email and you're likely to get a good one.
Take note that Plex depends on an external service you will rely on to use the application. A consequence of this is that if Plex goes down for any reason, you may not be able to use some Plex features at all. The Plex login servers have gone down from time to time, but the overall features and polish you get are worth it.
Suppose you are running Plex on your homelab (or will start soon after reading this article). In that case, Tautulli is an excellent addition to your media automation stack, giving you further insight into what is happening on your media server beyond the metrics available from Plex.
These metrics are both historical and real-time, providing deep insight into what your users are doing on the server and how you may be able to optimise the server and media to better fit their consumption needs. The analytics provided by Tautulli can help identify whether you are running background tasks at the wrong time for your users, or whether your resources are simply not enough for the type of media and codecs served by your server.
Another alternative to Plex you may want to consider is a new addition to the open-source application space. Jellyfin provides an entirely open source solution and a similar feature set to Plex without needing external services.
Compared to the other competitors to Plex out there, this is the most polished alternative available. The focus on building it to be open source should be an advantage of the project in the future. We have seen with both Plex and Emby that introducing a business element to the application leads users to lose out on features.
If you want to do any home automation or make your home smart, Home Assistant is the best-in-class open-source application for the use case. This complete solution provides a UI for accessing all your home automation information and automation scheduling/running capabilities.
The platform has integrations for almost all standard smart devices and external services, allowing you to efficiently connect Home Assistant running on your homelab with your other devices.
Home Assistant has many different deployment options, including a manual install, a Docker container, or the HassOS image, which is the recommended installation method. This VM image comes with everything you need to run Home Assistant and includes an improved way of running all of the Home Assistant integrations more securely as Docker containers inside the VM.
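If you just want to try Home Assistant before committing to HassOS, a minimal container-based sketch looks roughly like the following. The config path and timezone are placeholders you will need to adjust for your host:

$ docker run -d \
    --name homeassistant \
    --restart=unless-stopped \
    -e TZ=Australia/Sydney \
    -v /path/to/homeassistant/config:/config \
    --network=host \
    ghcr.io/home-assistant/home-assistant:stable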
Suppose you are running many Docker containers or VMs in your homelab and want to publish those web services under a domain, with standard features like proxying or SSL. You can do this with something like Nginx, but you will need to update the config files each time you add a new service or host, which can be hard to manage and leads to mistakes.
That's where Traefik comes in. It provides a dynamic way to create proxy configurations for your Docker containers or static sites.
The dynamic nature of its configuration is a strength of Traefik, allowing you to set up common entry points for your services and apply standard headers and middleware such as authentication forwarding. The Traefik Docker provider enables Traefik to monitor the Docker services running on a host. When a new container is created on the Docker service, any Traefik-prefixed labels it carries will be used to configure connections between your entry points and your Docker services and applications. The command below shows an example of configuring a container as a Traefik service using labels instead of static configs.
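The following is a minimal sketch of such a command, using Traefik's label conventions. The whoami image, router name and hostname are placeholders, and it assumes a Traefik instance is already running with the Docker provider enabled and an entry point named web:

$ docker run -d \
    --name whoami \
    --label 'traefik.enable=true' \
    --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
    --label 'traefik.http.routers.whoami.entrypoints=web' \
    traefik/whoami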
Through this pattern, your proxy configurations are attached to your application container, making managing them a lot easier over time. Check out the Traefik docs for instructions on installing and operating the proxy.
When you are running a server, you will need to access the files on the host, and it can be inconvenient to navigate, upload and download these files over the CLI alone. Sometimes it is simply better to have a UI for accessing these files, and Filestash is a lightweight option to add to your administrator toolkit.
The application supports many storage backends, making it fit in with whatever setup you want. Supported backends include local storage, S3, Git, Google Drive, Backblaze and more.
Filestash has both a self-hosted and a SaaS offering of the application. You can check out the docs for getting started here.
If you are in the market for a highly polished and full-featured personal or business accounting application, Akaunting could be a perfect fit. This application can give you better insight into your financial data and help you stay on top of those invoices and bills.
This application is perfect for anyone running the books for multiple companies, allowing you to easily separate the finances into their own views. It also allows you to easily share reports on the financial data collected with the relevant parties. There aren't many options for accounting software that can be self-hosted, and this is a highly polished application of the kind you would usually have to pay for to get the same level of features, so it is worth looking into for any use case.
You can find docs for getting started with Akaunting on the following page.
Another good alternative to accessing files on a remote machine like Filestash is FileBrowser. This application provides more features to manage your files with simple file editing and viewing through the browser.
One difference between FileBrowser and Filestash is that this application focuses only on exposing the local filesystem to the application's users. With its additional admin features for adding other users with their own permissions, it is perfect as an administration utility you can run on your homelab or home server host.
You can find documentation for getting started using FileBrowser here.
The library of applications you can install onto your homelab or home server keeps growing every year, with new and more polished features available all the time. The best part is that since they are open source, you get a lot of free features.
Hopefully, you have found something good you can use in your lab. Check out this article to learn more about some lessons and mistakes from running my homelab.
If you are in the market for some homelab or home server products check our store's products.
You can also find some of our other blog posts.
Connect on our social media accounts over on Twitter, Facebook and Instagram.
In this article, we will cover how to install Raspberry Pi OS (formerly Raspbian), how to configure remote access to the newly set-up device, and finally how to start using configuration-as-code tools like Ansible to easily manage what is running on your device.
Any code snippets in this post are for Raspberry Pi OS but should work on other Debian-like Linux systems.
To start deploying applications to your Raspberry Pi and making use of it, you will need to install an operating system. You should select this operating system based on what you wish to use the machine for, be that a basic machine to run services on or a full virtualisation machine running something like Docker.
Some common operating systems to look into include Raspberry Pi OS and Ubuntu. Whichever operating system you choose, try to pick a minimal install to keep the size of the OS small and make running things easier on less powerful devices. A step-by-step guide for installing Raspberry Pi OS on your Raspberry Pi can be found here.
One thing to note when running an operating system on a Raspberry Pi from the SD card slot is that SD card memory does fail, so be ready for it. Between how the Raspberry Pi cycles power and the low write endurance of SD cards, the SD card's memory is likely to get damaged with usage over time.
There are some mitigations you should be aware of, like switching your SD card mounts to read-only, or using more stable storage for your operating system, such as an add-on SSD or HDD like the one seen here.
With your operating system installed, we will want to configure it so we can access it from another machine via SSH. By default, your operating system will automatically get an IP address from your router via DHCP, but we will want to configure a static address so we can always connect to the same address to access our Raspberry Pi, and so that it persists between restarts.
To update your network interface to use a static IP address, edit the contents of the /etc/dhcpcd.conf file. Simply switch out RASPBERRY_PI_IP and GATEWAY_IP for the IP address you want the Raspberry Pi to use and the IP address of your router respectively.
---------
interface eth0
static ip_address=RASPBERRY_PI_IP/24
static routers=GATEWAY_IP
static domain_name_servers=1.1.1.1
---------
With your Raspberry Pi now accessible at a static IP address, we will want to enable SSH on the Raspberry Pi OS so we can access it remotely. On the Raspbian OS using the command line, you can easily enable the SSH service using the following commands.
---------
# Set the ssh service to start on boot
sudo systemctl enable ssh
# Start the ssh service immediately without a reboot
sudo systemctl start ssh
---------
With the SSH service running on the Raspberry Pi, you can now SSH into it from another machine. Using the SSH client on another machine, run the following command, substituting the Raspberry Pi's static IP address configured earlier.
---------
ssh pi@RASPBERRY_PI_IP
---------
You'll be prompted for your Raspberry Pi's password, and once logged in you can run commands on the Raspberry Pi machine as the pi user. This is a very helpful pattern for accessing the machine through other tooling and is much more convenient than having to set up another set of IO devices to work on it. In the next section, we will cover how you can use some configuration-as-code tools to manage the services and config on the Raspberry Pi in an easy and convenient manner.
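One last tip for the remote access setup: if you would rather not type the password on every connection (and to make tools like Ansible in the next section smoother), you can copy your public SSH key to the Raspberry Pi. This is the standard OpenSSH workflow and assumes you already have a key pair on your machine.
---------
# Copy your public key to the Pi so future SSH logins are key-based
ssh-copy-id pi@RASPBERRY_PI_IP
---------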
Your Raspberry Pi is now set up to be accessed remotely, so you can start running some of your own services on it or configure it further for your homelab's needs. To make this process easier, and more importantly repeatable if you want to run the same configuration steps on another Raspberry Pi, we can use configuration-as-code tools. Configuration as code is the idea that we define a computer's configuration and services through simple configuration written as code that can be run repeatedly and give the same results each time. This enables more predictable management of your machines and allows the same configuration settings to be run multiple times against one or many hosts.
In this section, we will provide an example of how you can use Ansible to write simple "playbooks" that define the sequential tasks you want to run on a target host. The example playbook below updates the apt cache and installs some common packages and tools on the host.
---------
# example-playbook.yaml
- name: Configure the raspberry pi
  hosts: RASPBERRY_PI_IP
  # Escalate privileges so the apt tasks below can install packages
  become: yes
  tasks:
- name: "Update Apt Cache"
apt:
update_cache: yes
tags: installation
- name: "Install Common packages"
apt:
name: ['runc', 'python-pip', 'docker.io', 'python3-venv', 'docker-compose']
state: latest
tags: installation, packages
- name: "Python Docker"
pip:
name:
- docker
tags: python
- name: "Install Minikube"
shell:
cmd: curl -Lo ~/minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-arm && chmod +x ~/minikube && mkdir -p /usr/local/bin/ && install ~/minikube /usr/local/bin/
tags: installation, minikube
---------
As you can see, in the playbook we configured the IP address of the target host. Ansible also needs login credentials, such as the SSH username and password (or key) and the sudo password it can use to escalate privileges when needed for some actions on a host; these are typically supplied through your Ansible inventory or command-line options. Ansible has many modules available for common Linux system functionality, and further modules for other services or applications you may want to interact with, such as AWS, other operating systems or something like Docker.
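As a sketch of how those connection details might be supplied, a minimal inventory file could look like the following. The group name, user and passwords are placeholders, and in practice you would prefer SSH keys and Ansible Vault over plaintext passwords. You would then point the playbook run at this file with ansible-playbook's -i flag.
---------
# inventory.ini (example only)
[raspberrypi]
RASPBERRY_PI_IP ansible_user=pi ansible_password=YOUR_SSH_PASSWORD ansible_become_password=YOUR_SUDO_PASSWORD
---------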
You can now use the following command to run the created playbook against the configured target host (adding -i inventory.ini if you created an inventory file like the sketch above).
---------
ansible-playbook example-playbook.yaml
---------
You will see the output for each task as it runs on your Raspberry Pi. As you can see, this configuration-as-code tooling allows for a very easy way to configure and run a set of commands on a machine to produce the desired result. It will save you time debugging, as the results of each command should be a lot more predictable. And in the worst-case scenario, if everything is broken, you can just start again with a fresh OS and run your playbook again.
If you're looking for some more services to run on your stack, check out the article below for 10 of the most popular self-hosted apps you can run.
The Raspberry Pi platform is very easy to work with and provides an excellent level of customisability for whatever you are looking to learn or run on it. You should now have a better idea of the considerations to make when setting up the Pi, how to configure it for easier remote access, and how to use configuration as code to manage the Pi and its software much more easily. Once you have your Raspberry Pi set up, I would recommend looking at some important supporting services for the machine, such as backups for important configurations and files that need to be saved off the Raspberry Pi, and alerting for important services or logs.
Now, if you're building out your lab and installing any apps, I would recommend that you start looking into configuration management tools sooner rather than later, such as Ansible, Chef, Terraform and a host of others.
These tools allow you to consistently apply configurations to your hosts based on the code you write, and let you start treating the components in your lab a bit more like cattle instead of pets. By focusing on configuration as code, you will hit fewer errors and weird one-off issues, as these tools encourage best practices when installing apps or managing your hosts.
These tools are also very useful in the industry if you are interested in DevOps or just regular software engineering. There is always configuration that can be stored and operated on as code, and this is becoming more important as tools make it easier to deploy apps closer to their source code. You will have seen this with the explosion of things like Docker and Kubernetes, which bring the configuration of the runtime environment close to the application source code.
Once you've started to set up a couple of apps in your lab, you might be wondering how to make it all easier and more manageable. I believe the best way to achieve this is to think of your lab as your personal computing platform and to run some shared services that better support that platform.
By setting up these kinds of shared tools, you can focus on learning about the things you want rather than having to implement things like SSL for every service. Develop a pattern for doing this in your lab, and better yet make it scriptable or configurable, so you can do more with your lab while also keeping it secure, since you are always able to implement those best practices in your own platform. If you want to look into this further, check out these kinds of projects for your lab.
Now, if you've spent some time on forums like r/homelab, you might have seen some of the nice labs that use multiple machines in a large server rack and look very impressive. I'm going to warn you that when you start to build out your lab and learn more about the equipment you could use, you will get the urge to just keep buying and adding loud, expensive machines to the lab.
You will want to stay away from spending time and money accumulating machines whose computing resources you don't actually utilise, so that you don't drive up the complexity, power bill and noise. While you're still learning, consider using just a single machine at first to understand the basics of administering your lab and managing the apps you may be running.
Later down the line, if you decide you want to learn how to manage groups of large hosts or how to host highly available apps, then you should look at how you can effectively expand your lab to fit your new learning goals. This should be a constant process: whenever you look at expanding your lab, plan out what you need it for and get the equipment to suit that job.
In your lab, you may be using a mix of new and used hardware, and generally, for a homelab, I would encourage this, as it gets you building faster for less money, which is always good for getting more people learning and reducing barriers to entry. But I will share a warning: it is important not to buy junk when buying used hardware for your homelab.
Before you go buying hardware for your lab, make sure you do your research first and work out what features you're actually looking for, and keep an eye on key things like power consumption and noise, which you may not have considered if you were only buying consumer products before. If you are looking for some current recommendations for hardware that is still worth your time and money, try looking at the following series of products, and if you can't find that series in your region on second-hand marketplaces, look at the next newer revision of it.
A general rule of thumb for finding enterprise equipment that is affordable on the second-hand market: check the support pages of major brands like HP and Dell and look at when the support lifespan for their equipment is ending. You will generally find an influx of equipment for sale, or going for free, on sites like Craigslist, Gumtree and eBay as these end-of-support dates approach, as organisations decommission old equipment and dump it onto the marketplaces to try and recoup some of the cost. We will happily pick up that equipment for our labs at a nice discount.
So to close this article, I just want to remind you to have fun with your lab and use it as another tool to engage and develop your skills in a nice practical way. But if it starts costing too much money or causing too much stress, maybe it's time to take a step back, do some planning, and even downsize if that's the best decision at the time.
Now get out there and build something interesting for you, and have fun doing it!
HBA cards are a simple way to extend a computer's connectivity, using some PCI bus bandwidth for other applications like storage. Adding an HBA card to your system and attaching drives to it saves your CPU from processing all of the storage IO operations. It can improve your system's overall performance and reduce the bottleneck caused by many IO operations happening at once. This performance saving is due to the HBA card's integrated processor, which is responsible for managing these IO operations and other features like RAID natively on the card. So in a server context, where you have high performance and high connectivity needs, HBA cards fit the bill perfectly without motherboards having to build in more and more SATA and SAS ports as connectivity needs continue to grow in the server landscape.
Initially, the LSI 9211-8i HBA cards were designed and manufactured by LSI, but LSI as a company is no more. Avago Technologies bought LSI in 2014 and continued to develop LSI's HBA card designs under their own brand. From that point on, we have not seen any new manufacturing of the old LSI designs like the 9211-8i series of HBA cards, and we probably won't, as Avago (now Broadcom) is developing new card designs and focusing on their further development.
This turbulent history, with the same HBA card designs manufactured by multiple companies, explains why you might see the same card model produced by LSI, Avago, Fujitsu etc. if you are searching for them online. But since all of these cards use the same underlying design, they are in fact identical cards regardless of the manufacturer.
As of 2022, however, this card is no longer officially produced following the acquisition of LSI. So any cards you see on the market these days will either be second-hand from existing systems, refurbished by factories in China that still have the appropriate tooling, or never-sold old stock from the original manufacturers; the latter do carry a price premium.
If you have seen any recommendations for the LSI 9211-8i HBA card, you may have seen the recommendation to have the card be in “IT mode” when the HBA card is used for applications like Unraid, FreeNAS or ZFS.
If you buy a stock 9211-8i HBA card, you will find it will most likely be in "IR" mode and have the full RAID functionality. Since the card handles the RAID operations on the disks, the individual drives you have connected to the HBA card won't be transparently passed through to your host OS. If you would like transparent passthrough of disks to your host OS, "IT" mode disables the RAID functionality of the HBA card. In this mode, each disk's diagnostic info, like SMART data, is exposed as normal for the OS and other software to handle.
But how do we switch between these two modes on the card? That's where the HBA card firmware comes in. The software on the card implements the features listed above in both the IR and IT modes, so if we want to change the card's mode, we update its firmware. If you feel brave, you can also use the following guide to flash a 9211-8i card into "IT mode".
But note that, similar to updating a BIOS, this process carries some inherent risk. When writing the new firmware to the card, if the card or system loses power, the card can be left in an unrecoverable state due to corruption of the card's internal storage where the boot loader is stored. If the card's storage ends up in a state where this loader can't be accessed, then the card's firmware cannot be updated again and the card is essentially bricked. At serverlabs.com, we provide pre-flashed LSI 9211-8i IT mode HBA cards that can be used in your builds without having to go through the steps to update the card's firmware yourself.
Hopefully, you have a better idea of the 9211-8i HBA card and how you can use it in your homelab and home server needs. This card has a fascinating history but is still affordable for the features it enables, whereas similar cards could cost hundreds of dollars more.
Hopefully, you can pick up one of these cards from our store, eBay or other great places to find these older pieces of homelab hardware. By searching for this card with the IT mode descriptor, you should be able to find plenty of pre-flashed cards like the one we sell in our store, but if you decide to flash the card yourself, we wish you the best of luck on the learning journey.
When you're looking to set up a homelab to learn more about running applications or administering computers effectively, you can get away with running labs on your own computer. Still, you might start hitting bottlenecks on CPU and memory resources, or your current computer may not have the features or hardware you want more hands-on time with.
So at some point, you may start thinking about running some of these labs somewhere else. Cloud services are an option but have high ongoing costs and can be too expensive depending on the workloads you want to learn about, particularly if you're looking at high availability or applications with high data transfer. You might then consider building your own machine, as it can be more cost-effective to purchase old enterprise equipment for learning purposes in your homelab. The Dell R610 is a great platform to consider for your homelab, as it's very flexible for starting to learn more about enterprise machines and what it takes to administer them correctly. I have picked up two of these in the past couple of years, using them for networking appliances, network-attached storage, and a virtualisation cluster for multimedia applications.
So with the Dell R610, you have a lot of options to build it out to fit your needs and your homelab goals. The following table outlines the basic features, but we will talk about some of these areas in more detail.
A detailed spec sheet can be found here. Check out LabGopher for a deal.
If the primary workloads you are looking to run are CPU-dependent, the R610 is an excellent option for virtualisation or application workloads. The Xeon CPUs and the dual sockets available give you a silly amount of cores to work with to run all of your containers or VMs. One thing you might not have considered coming from a personal workstation is that the machine's power consumption can be higher, even on standby. You should expect around 15W on standby and 144W with Windows Server 2003 running on a single populated CPU socket, with a peak consumption of 260W under an artificial load across all 16 logical cores with both sockets in use. But with this higher power consumption, you do get some features. The R610 comes with a redundant power supply that can do seamless failover if one of the power supplies fails or your power source is interrupted. If you are running essential workloads that may be sensitive to ungraceful shutdowns, this redundancy, paired with a backup power source, can give you time to gracefully shut down your machine and avoid any corruption of your data.
Since the R610 is a 1U rack-mount server, you need to consider that it only supports 2.5-inch drives rather than the larger 3.5-inch drives. This form factor affects the total amount of storage available to you, since you can't use the larger-capacity drives that come in the 3.5-inch format. However, if you are building a virtualisation server, you would be looking to fill the storage with faster SSDs to make your VMs perform better, much like you wouldn't run your desktop operating system on an HDD anymore. When working with these enterprise appliances, you will want a dedicated machine acting as network-attached storage for those large files you might need. The 1U form factor also means the fans it can fit are a lot smaller, so they have to spin quite fast to cool the machine; this machine's noise may become an issue if you are looking to run it near you at all times. You may be able to work around this by putting the machine somewhere else physically, or you could look into the larger 2U or 3U machines that can use larger, quieter fans, trading off size for noise.
The R610 platform also has excellent connectivity, including 4x 1Gb Ethernet ports. These ports give you lots of flexibility, either to trunk connections to maximise bandwidth for all the applications you may be running on the host, or to use VLANs to segregate the traffic on your network for increased security. The R610 also has an iDRAC slot that you can use with a supported card from Dell. The iDRAC provides another Ethernet port for connecting to your machine and managing the device remotely, letting you access things like the BIOS, a virtual desktop for the machine, and even upload data like ISOs.
So now that you have your machine, you might be thinking, how do I use this for my homelab? I'm assuming you have installed an OS before; you will have a choice of a Windows- or Linux-based server OS like Ubuntu, CentOS or Windows Server based on your learning goals. These operating systems provide you with a slimmer and more focused set of tools for running a server than your traditional desktop OS. You can then get learning by installing other software like Docker, KVM etc. in your homelab.
If you are specifically looking to do virtualisation, pick an operating system catering to this application, like ESXi, Hyper-V, Proxmox and others. I use Proxmox as it's open-source and pretty straightforward, being built on top of a Linux distribution and common open standards. A machine like the R610 is perfect for this application, with multiple CPU sockets paired with high core and thread count CPUs providing a massive bank of resources for virtualisation in a 1U form factor. The bank of 2.5-inch drive bays in the front of the chassis also gives you the ability to hot-swap drives for zero downtime for the workloads you are running.
Hopefully, you now have a better idea of all the components to think about when building out your homelab based on the R610 platform. The R610 is a competent machine that cannot be beaten at today's low prices for home labbers, given the density of resources you can fit into a single 1U form factor.
Portainer is an open-source project that came about from the realisation that Docker was going to be game-changing for organisations, but was tough to operate. For containers to become mainstream, developers and administrators had to solve these operational challenges quickly.
The Portainer project aims to provide a single application for the management of multiple different container runtime platforms, including Docker (both local and remote), Docker Swarm, Kubernetes and Nomad. Portainer reduces the operational complexity associated with multi-cluster management by bringing users easy-to-use tools to administrate common container operations and resources. These provided utilities bridge the skills gap and facilitate feature discovery for new users of Portainer and containers as a technology. Through Portainer, you are also equipped with a centralized access management application for all your container runtimes, allowing you to easily manage access, permissions, groups and log audit activity on your container runtimes.
Portainer comes in both business and community editions that you can run on your own infrastructure or have hosted by Portainer as a SaaS product. A breakdown of the features of the two editions can be seen below.
Generally, the Community edition of Portainer is recommended for testing the application or for a single-user, single-node setup. However, the core Portainer features like running, viewing and managing containers are not locked behind the Business edition or the purchase of a license.
Getting up and running with Portainer is nice and easy as the application runs in a container. The following docker commands will get a Portainer instance up and running on your machine so you can test its functionality.
$ docker volume create portainer_data
$ docker run -d -p 8000:8000 -p 9443:9443 --name portainer \
    --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    portainer/portainer-ce:2.9.3
Once the container has started, you can access the GUI at https://localhost:9443. Since the server uses a self-signed certificate, you will need to accept the warning from your browser, and you will then be prompted to complete the initial setup for the Portainer instance.
Configure the admin user for your needs; this will be the root user of the server and can be used to configure all other users and features on the server. Once you have created the root user, you will have access to the Portainer dashboard, where you can configure an environment for Portainer to interact with. The “Get Started” card will get you up and running, monitoring the same instance of Docker that Portainer is running on.
Portainer can integrate with environments other than Docker, such as Docker Swarm, Kubernetes and more. For information on setting these up, refer to the following documentation link.
If you want to run the Portainer server on other platforms such as WSL, Docker Swarm and Kubernetes, you can find more information at the following links.
With Portainer, you are provided with a single dashboard to administer your docker runtime. This section will cover some of the key features you may be interested in.
Portainer has its own RBAC model to allow you to quickly create users and provide them access to resources. Creating new users on your Portainer server is easy, and after their creation, they can log in to the dashboard and access resources just like the admin user.
You can assign these users roles to restrict their access to resources based on predefined roles. Note that a Business license is needed to use these role features. If you are running a multi-user environment, though, this feature will be essential for providing least-privilege access to your docker resources.
Portainer can be configured to access and manage many environments, which may be used by many developers or teams, each with their own access requirements. This is where the ability to group resources comes into play: creating environment groups allows you to organise your resources into logical groups for your needs.
With these groups created, you can then apply access control to specific developers or roles to allow easier administration of resources available in Portainer. Further details on managing this access can be found at the following link.
If you have container images in public or private registries, you can configure them to be accessible via your runtime engine through the Portainer dashboard.
With all of the runtime configuration done in Portainer, your developers are ready to start running containers on the configured environments through the Portainer dashboard. The GUI supports the same actions you can perform through the docker CLI, including running containers, starting docker-compose stacks, pulling images, and inspecting the resources and logs of containers currently running in the environment.
Through Portainer, you get quick, single-point access to many configured container environments without having to manage secure connectivity for each of your developers or users. Even if you are running Portainer in your homelab or just for yourself, it can be a one-stop shop for all things container operations and administration without needing to jump between hosts.
You should now have a better idea of how to get started with Portainer to make administering and operating your container runtimes easier. With its GUI, a generous Community edition, and room to grow into extra features if needed, your days of jumping between hosts over SSH just to run some Docker commands are over.
You may have lots of reasons for why you run your lab, but there is an always-growing list of new things to try in your homelab or server. Check out this quick list of 10 apps you can try and see if you can pick up some new functionality for free through self-hosting.
Docker is a powerful tool that allows you to host and run apps easily on your servers. Docker's main interface is a command-line tool, but with an application like Portainer you get a nice GUI to manage your host's Docker client.
Portainer supports your running Docker environments and Docker Swarms, local or remote, by interfacing either directly with the local docker.sock on your host or with a remote Docker endpoint. The UI can easily be used for monitoring the status of containers, reading current logs, or building and modifying docker-compose container stacks.
It's a well-thought-out interface, well supported with updates and very low maintenance. It tries to guide you, where it can, to use the more advanced features of Docker to your advantage.
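If you want Portainer to manage a remote host as well as the local docker.sock, the usual pattern is to run the Portainer agent on that host and add it as an environment in the dashboard. A rough sketch follows; the image tag is assumed to match the server version you are running, so check the Portainer docs for the exact command for your release.
# On the remote host: run the Portainer agent and expose it on port 9001
docker run -d --name portainer_agent --restart=always \
  -p 9001:9001 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent:2.9.3
Then, from the Portainer dashboard, add a new environment pointing at the remote host's IP on port 9001.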
This is one of the top-rated and most-downloaded media servers that you can run to get your own Amazon Prime or Netflix-level features with all your own content.
Plex lets you bring all of your own content, and it will automatically try to match your files to its database of movies and TV shows, giving you rich features like chapter detection and automatic subtitle detection. You can stream to devices outside your network, or sync content to your devices if you don't want your server to be accessible from outside your network.
This application is regularly updated and supported by the Plex company and they are always looking to improve features in the application. Recently the company has been expanding its streaming services further to offer free media streaming through their cloud services.
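If you want to try Plex out quickly, a rough sketch of running it in Docker is shown below. This assumes the linuxserver/plex image and placeholder media paths, so adjust the volumes, timezone and user IDs for your own setup.
# Run Plex with host networking, a config volume and example media mounts
docker run -d --name plex \
  --network=host \
  -e PUID=1000 -e PGID=1000 -e TZ=Europe/London \
  -v /path/to/plex/config:/config \
  -v /path/to/movies:/movies \
  -v /path/to/tv:/tv \
  --restart unless-stopped \
  lscr.io/linuxserver/plex:latest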
There are other options available to you, like Emby or Jellyfin, so have a look at all of them and their features and pick the one that gives you the features you want.
This application is best installed on a bare-metal machine, as it runs as an OS and allows more advanced virtualisation features to be used. The OS is a Debian spin with a custom kernel derived from Ubuntu. The base install has a tiny RAM and CPU draw compared to other enterprise-level virtualisation platforms like ESXi and Hyper-V, so it is well suited to lower-powered systems too. One thing to take note of if you're looking to use Proxmox is that it uses KVM under the hood for virtualisation, so converting VMDK or OVA files to qcow2 can prove to be a chore if you're looking to migrate your existing virtual disks.
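If you do need to migrate existing disks, the qemu-img tool that ships with Proxmox can do the conversion. A minimal sketch with placeholder file names:
# Convert a VMware VMDK disk image into qcow2 for use with KVM/Proxmox
qemu-img convert -f vmdk -O qcow2 appliance-disk1.vmdk appliance-disk1.qcow2

# An OVA is just a tar archive, so extract it first to get at the VMDK inside
tar -xvf appliance.ova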
Proxmox gives you the perfect environment to quickly spin VMs or Linux containers up and down for your applications, whether you want to try things out, iterate on your infrastructure quickly, or run your infrastructure efficiently only when needed, all through Proxmox's easy-to-use web interface for managing your new virtualised infrastructure.
Advertising is EVERYWHERE on the internet these days, and you can use extensions in your browsers to try to get around these pesky ads. But what if you could do this for your whole network, giving automatic ad-blocking to clients that may not have had ad-blocking capabilities in the first place?
The application can be installed as a container, or a small device like a Raspberry Pi would work perfectly. You get a nice GUI to administer the ad-blocking service and to set up normal router capabilities like routes, static IP addresses and DHCP servers if needed, should you want to replace your router with Pi-hole. With any blocking of network traffic, there will always be a time you need to whitelist a domain that simply has to be reachable, even if it leads to some ads, thanks to the cat-and-mouse game between us and the marketing companies. Pi-hole's GUI makes it nice and easy to update and manage these whitelists and blacklists, either by domain or by IP address.
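As a sketch, running Pi-hole as a container can look like the following. The ports and environment variables follow the pihole/pihole image's documented options, but double-check them against the image docs for your version.
# Run Pi-hole, exposing DNS on port 53 and the admin GUI on port 80
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 80:80 \
  -e TZ=Europe/London \
  -e WEBPASSWORD=changeme \
  -v pihole_data:/etc/pihole \
  -v pihole_dnsmasq:/etc/dnsmasq.d \
  --restart unless-stopped \
  pihole/pihole:latest
Point your router's DHCP settings (or individual clients) at the host's IP as their DNS server to start filtering.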
Log management is important; aside from monitoring, it will be the thing that provides you with the most information about your systems and what's running on them. There are plenty of options for this problem, from Graylog to the ELK stack, each with its own unique features and interfaces, so consider which one works best for you.
The true power of Papertrail for me, however, was being able to search logs in real time via the web browser. Much like ELK and similar software you may host yourself, you can also create alerts on specific log events, pushed via email, Slack or other channels, either when those events occur or when aggregated. The search bar is also pretty quick when searching back through the logs.
Now, if you're running applications across multiple containers, multiple hosts or even multiple clouds, you're going to get overrun trying to access the logs of these applications to check they are running correctly or to diagnose any issues you come across.
Graylog makes it easy to use standard protocols to export your applications' logs, either from files via rsyslog or through an automatic remote transport such as Docker's GELF messaging. Once centralised, you can set up metrics and dashboards from these imported logs to get more visibility into your applications and infrastructure.
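For example, Docker ships with a GELF logging driver, so a container can send its logs straight to a Graylog GELF input without any changes to the application. The address below is a placeholder for wherever your Graylog input is listening.
# Ship this container's logs to a Graylog GELF UDP input (placeholder address)
docker run -d --name my-app \
  --log-driver gelf \
  --log-opt gelf-address=udp://graylog.example.local:12201 \
  nginx:alpine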
Graylog is worth setting up as soon as possible, as being able to quickly see all your logs in one place makes debugging even complex issues a breeze.
Netdata is server monitoring on steroids. To start off, it's a local install with a one-liner found on the Netdata site. The seemingly never-ending downward scroll presents graphical real-time breakdowns of systems, applications and everything else running on the machine. Each release provides a greater set of plugins, delving into ever more far-reaching information on applications.
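The one-liner install looks roughly like this; the kickstart URL has moved around over the years, so copy the current command from the Netdata site rather than trusting this one verbatim.
# Download and run the Netdata kickstart installer (verify the URL on the Netdata site first)
bash <(curl -Ss https://my-netdata.io/kickstart.sh)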
You can run Netdata locally and use a pattern of Prometheus pulling data from Netdata, pushing it into InfluxDB and displaying the output in Grafana. If this sounds like hard work, netdata.cloud was launched recently: you can "claim" your Netdata installs and pull the data into a centralised Netdata Cloud instance. It's early days for the netdata.cloud interface, but it's slowly improving and is a useful interface for data aggregation.
While many items on this list have open-source alternatives, Netdata stands out as a really useful service both for real-time system monitoring and as a data source if you are using Grafana. The fact that Netdata is so usable right out of the box is also a huge advantage of the application.
Although it may have a steep learning curve, Ansible is a fantastic tool for creating reusable playbooks for common operations such as shell commands or file copies on your local and remote hosts. This makes complex operations nice and reproducible across multiple hosts, allowing for lots of automation across your physical and virtual infrastructure.
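Before committing to full playbooks, Ansible's ad-hoc commands are a quick way to get a feel for it; the inventory file and host group names below are placeholders.
# Check connectivity to every host listed in a (placeholder) inventory file
ansible all -i hosts.ini -m ping

# Install a package on a group of hosts using the apt module, escalating with sudo
ansible webservers -i hosts.ini -m apt -a "name=nginx state=present" --become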
Ansible joins a host of tools, such as Chef and Puppet, that offer similar configuration functionality, but over the past five years Ansible has been embraced by the community for its flexibility in building and sharing configuration scripts for common software and host configuration.
Another tool similar to Ansible, Terraform provides you with the ability to write a definition for your virtual infrastructure and apply it to your hosts. If you're using technology like Docker or virtualisation on your machines, you can use Terraform to create reusable modules of infrastructure that can be configured for their specific use cases.
An example of this would be a Terraform module that deploys a generic virtual machine, with passed-in module parameters used to define the resources for that VM. Another Terraform module can then define an array of application machines, each with its own resource configuration for its specific application. By looping that array over our generic virtual machine module, we create infrastructure where each VM is deployed the exact same way but configured uniquely for each application.
This lets you deploy and test identical infrastructure, giving you higher confidence in changes when updating your infrastructure.
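Whatever provider you target, the day-to-day loop is the same Terraform CLI workflow; the variable name below is a placeholder for whatever your module actually exposes.
# Initialise providers and modules, preview the change, then apply it
terraform init
terraform plan -var "vm_count=3"
terraform apply -var "vm_count=3"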
Now, there are many uses you might have for a web server in your homelab, be it an actual web server or a reverse proxy so that you can expose the services running in your homelab to the internet from a single port under subdomains.
You can easily learn the Nginx configuration language to write these configurations and get some extra utility out of your current applications, by securing them behind SSL or simply making them easier for your users to reach through a more memorable URL.
Nginx can easily be run via its binary installed through apt, or alongside your applications in a lightweight Docker container. The Docker container deployment is particularly good for shipping application-specific routing configuration with your applications through a docker-compose file.
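A sketch of the Docker approach: mount your own configuration into the official image and use nginx -t to validate it. The paths here are placeholders for your own reverse-proxy config.
# Run Nginx with a custom config mounted read-only (placeholder paths)
docker run -d --name reverse-proxy \
  -p 80:80 -p 443:443 \
  -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx:alpine

# Validate the mounted configuration inside the container
docker exec reverse-proxy nginx -t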