First off, I'm not about to sell you anything. I'm just sharing how I run multiple quasi-production websites and microservices on a budget. $50 per month buys you a decent server "rented" at a data center, which can be cheaper than colocating your own server.
For this example, I'm using a dedicated server from reliablesite.com with an i7, 32 GB of memory and an SSD. Not all that powerful compared to the Windows servers I run for true production sites, but sufficient for some websites and microservices. And, did I mention, cheap?
Why am I doing this? For one, my production sites depend on a number of microservices, and the Windows/Docker combination just didn't work out.
I have used Linux since 1994 to host numerous sites over the years. Then in the mid-2000s I jumped ship and switched to Windows servers. Switching back to Linux for Docker, Docker Compose, Docker Swarm, etc. was simply a no-brainer. Using cloud services proved too expensive (small company, no million $ budget for infrastructure :-) so I set out to find alternatives.
Linux now hosts my quasi-production sites: sites that clients use to review projects before they're finalized, demo sites, and of course microservices. No, this won't get you the high availability/scalability that Kubernetes or cloud services would, but you do get some of those benefits (better than my Windows servers) without breaking the bank.
Rant
But getting there is a bit harder with Linux because (you'll hate me) you're not using "real" products. You're using tools, pieced together like Legos, or an Erector set. You build it. You may say "Docker is a product". Well, no. It implements a solution, but giving me a command line to manage the whole thing? That's not a product. Portainer, on the other hand, is what I consider a product. It gives me point-and-click access to pretty much everything in Docker. Being a command line jockey is not my idea of knowing a product, understanding what it does or seeing the big picture. Portainer is what Docker should have been. Portainer + Docker is a real product, worth paying for. And with this combination, the learning curve is substantially reduced.
Piecing Things Together
So my $50 server runs Docker and Docker Compose. Nginx, Portainer and Certbot (for SSL certs) run in containers. This setup is sufficient to manage and maintain the websites and microservices. The sites use SQL, Redis and some other services, all running in containers.
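To give you an idea of the shape of this, here is a minimal sketch of such a Compose file. The images are real, but the volume names and paths are illustrative, not my exact setup:

    # docker-compose.yml - illustrative sketch of the base stack
    services:
      nginx:
        image: nginx:stable
        ports:
          - "80:80"     # the only ports published on the host
          - "443:443"
        volumes:
          - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
          - certs:/etc/letsencrypt:ro
        restart: always
      portainer:
        image: portainer/portainer-ce
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - portainer_data:/data
        restart: always
        # note: no "ports:" entry - Portainer is reached only through Nginx
      certbot:
        image: certbot/certbot
        volumes:
          - certs:/etc/letsencrypt
    volumes:
      certs:
      portainer_data: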
Nginx
Nginx is essential to running multiple sites on the same server, as all sites are (normally) accessed through ports 80 and 443. Multiple sites cannot "share" the same port, so Nginx has to sit in front, divvy up the traffic based on the requested host name, and send it to the correct site. The sites themselves all run in Docker containers.
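In Nginx terms this is just name-based virtual hosting: one server block per domain, each proxying to the right container. A stripped-down sketch (the domain, container name and port are placeholders):

    # one such server block per hosted site
    server {
        listen 443 ssl;
        server_name site1.example.com;
        ssl_certificate     /etc/letsencrypt/live/site1.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/site1.example.com/privkey.pem;
        location / {
            proxy_pass http://site1:8080;    # container name on the Docker network
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }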
But I quickly realized that configuring Nginx is a pain in the butt. And Blue/Green deployment, which I have used for years on Windows, was not really an option out of the box. Yes, I could build a bunch of bash scripts to switch, but come on, this is not the 1990s.
Nginx Config
The nginx.conf file can quickly become unwieldy with many sites and SSL certs. With Blue/Green deployment each site essentially has 3 URLs/domains (production, Blue and Green). And the Blue and Green sites should only be accessible from certain IP addresses, as I did not want them publicly accessible. And each URL has its own SSL cert. So many opportunities to make mistakes in the configuration...
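To make the moving parts concrete, here is roughly what this boils down to per site; the domains, container names and IP address are placeholders. Releasing Green to production just means repointing the production server block's upstream:

    # production URL - proxies to whichever slot is currently live
    # (ssl_certificate directives omitted for brevity)
    server {
        listen 443 ssl;
        server_name www.example.com;
        location / {
            proxy_pass http://site_blue:8080;    # switched to site_green on release
        }
    }
    # Green staging URL - reachable only from approved IP addresses
    server {
        listen 443 ssl;
        server_name green.example.com;
        allow 203.0.113.10;                      # my own IP (placeholder)
        deny  all;
        location / {
            proxy_pass http://site_green:8080;
        }
    }
    # plus an equivalent blue.example.com block for the Blue slot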
So I set out to develop a web-based tool to configure Nginx. Of course it is based on YetaWF and is simply a website running in a container. It lets me configure a site with SSL and Blue/Green deployment, including the URLs used to access both the Blue and Green sites for testing before one is released to become the active site.
Below is a screenshot of a server showing a few domains:

[screenshot: list of domains configured on one server]
After changing a site's properties, the tool generates a new nginx.conf file. It actually starts from a model file and merges the generated code into it, so predefined Nginx settings can be combined with the generated settings.
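Conceptually, the model file is just a regular nginx.conf with a placeholder where the generated server blocks get merged in; something like this (the marker shown is hypothetical, not the tool's actual syntax):

    # nginx.conf.model - hand-maintained settings live here
    worker_processes auto;
    events { worker_connections 1024; }
    http {
        sendfile on;
        gzip on;
        # {{GENERATED-SERVER-BLOCKS}}   <- hypothetical marker; the generated
        #                                  server blocks are merged in here
    }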
And clicking on "Reload Nginx Configuration" will activate the new settings in Nginx. No command line needed.
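Under the hood this amounts to the equivalent of validating and reloading the configuration, e.g.:

    # validate the new configuration, then signal the running Nginx container
    docker exec nginx nginx -t && docker exec nginx nginx -s reload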

Security
Sites, particularly the Blue/Green URLs, are only accessible from defined IP addresses. All of this is generated automatically.
The entire server only exposes ports 80 and 443. The SQL, Redis, Portainer, etc. ports cannot be accessed from the outside, as these services all run in containers and none of their ports is exposed on the host.
iptables is also used to lock down all ports except 80/443. This, of course, was a manual process. It's a shame there is no iptables UI. Oh well.
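For reference, the lockdown boils down to a handful of rules along these lines (a sketch; the SSH source address is a placeholder, and you'll want that rule in place before setting the DROP policy, or you'll lock yourself out):

    # keep loopback and established connections working
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # web traffic only
    iptables -A INPUT -p tcp --dport 80  -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT
    # SSH restricted to a known IP (placeholder address)
    iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
    # drop everything else
    iptables -P INPUT DROP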
Finally
With this extremely cheap and easy-to-use setup I get better availability (containers restart automatically), scalability for microservices and, of course, super simple website deployments. With Blue/Green deployment you'll never publish a site that doesn't work, because you can test the site before it becomes the active site. All for around $50 per month. Of course, none of this helps if the entire server dies. Finding an equally cheap solution for that is a problem for another day.