1. Introduction — The Problem and the Solution
Let's say you're a freelance developer with two clients — Faisal and Rifat. Both of them need their own web apps deployed, their own databases, their own SSH access, and their own GitHub credentials. But you've only got one VPS with a single public IP address.
The naive solution is to create two different user accounts on the same server. But this immediately breaks down: they can potentially see each other's processes, they fight over ports, and if one client's app crashes the system, the other client goes down too. It's a mess.
You might think about Docker. Docker is great for running applications, but it's not really designed to give a client a full Linux environment with root access. Docker containers aren't meant to be SSH'd into like a real server — they're process wrappers, not system containers.
LXC (Linux Containers) is the answer. LXC gives each client their own full Ubuntu (or any Linux) environment — a complete filesystem, their own root user, their own network interface, their own processes. From inside the container, it feels exactly like a real VPS. They can install packages with apt, run services, manage SSH keys — everything. And crucially, they cannot see anything outside their container.
This is actually how budget VPS providers work under the hood. When you buy a $5/month VPS, you're often getting an LXC container on a big dedicated machine. Now you're going to do the same thing yourself.
2. Architecture Overview
Before we dive into commands, let's get the full picture clear. Here's how everything connects:
Internet (Faisal's domain & Rifat's domain)
│
▼
┌─────────────────────────────────────────┐
│ Your VPS (1 Public IP) │
│ │
│ ┌─────────────────────────────────┐ │
│ │ Host Nginx (Reverse Proxy) │ │
│ │ faisaldomain.com → :3000 │ │
│ │ rifatdomain.com → :3000 │ │
│ └────────────┬────────────────────┘ │
│ │ │
│ ┌──────────┴──────────┐ │
│ ▼ ▼ │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ LXC: Faisal │ │ LXC: Rifat │ │
│ │ 10.4.39.212 │ │ 10.4.39.231 │ │
│ │ SSH: 2221 │ │ SSH: 2222 │ │
│ │ App: :3000 │ │ App: :3000 │ │
│ └──────────────┘ └──────────────┘ │
│ │
│ ┌─────────────────────────────────┐ │
│ │ MongoDB on Host (10.4.39.1) │ │
│ │ DB: faisaldb / rifatdb │ │
│ └─────────────────────────────────┘ │
└─────────────────────────────────────────┘
Key Points
- One public IP. All incoming traffic hits your VPS's single IP. Nginx on the host reads the domain name (HTTP Host header) and decides which container to forward the request to.
- Private internal IPs. LXC creates a virtual bridge network (lxdbr0) on the host. Each container gets its own private IP in the 10.x.x.x range. These IPs aren't reachable from the internet — only the host can talk to them.
- Host controls Nginx. The clients never touch Nginx. Only you (the host admin) configure which domains point where. Clients can use whatever port they want inside their container.
- Full root inside container. Each client gets root access inside their own container. They can install packages, create users, run any service. They just can't escape the container.
- SSH via different ports. You expose SSH on different host ports (2221, 2222) that forward into each container's port 22.
3. Step-by-Step Setup
Step 1 — Install LXD on Ubuntu 24.04
LXD is the daemon that manages LXC containers. On Ubuntu 24.04, install it via snap:
sudo snap install lxd
After installation, initialize LXD. It'll ask a bunch of questions — for a basic setup, just hit Enter on everything. This sets up the default storage pool and network bridge (lxdbr0).
sudo lxd init
Once initialized, add your user to the lxd group so you don't need sudo every time:
sudo usermod -aG lxd $USER
newgrp lxd
Step 2 — Create Your Containers
Creating a container is one command. LXD downloads the Ubuntu 24.04 image and fires it up:
lxc launch ubuntu:24.04 Faisal
lxc launch ubuntu:24.04 Rifat
Verify they're running and get their IPs:
lxc list
You'll see output like:
+--------+---------+---------------------+-----------+-----------+
|  NAME  |  STATE  |        IPV4         |   TYPE    | SNAPSHOTS |
+--------+---------+---------------------+-----------+-----------+
| Faisal | RUNNING | 10.4.39.212 (eth0)  | CONTAINER | 0         |
+--------+---------+---------------------+-----------+-----------+
| Rifat  | RUNNING | 10.4.39.231 (eth0)  | CONTAINER | 0         |
+--------+---------+---------------------+-----------+-----------+
Note these IPs — you'll need them for Nginx configuration. The IPs are assigned by LXD's built-in DHCP server and stay consistent as long as you don't delete and recreate the containers.
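If you want a stronger guarantee that these addresses never change, you can pin a static IP on each container's network device. This is a sketch assuming the default profile's NIC is named eth0 on the lxdbr0 bridge and uses the IPs shown above:

```shell
# Pin each container to a fixed address on the lxdbr0 bridge.
# "override" creates a container-local copy of the profile's eth0 device.
lxc config device override Faisal eth0 ipv4.address=10.4.39.212
lxc config device override Rifat eth0 ipv4.address=10.4.39.231

# Restart so the DHCP lease is renewed with the fixed address
lxc restart Faisal Rifat
```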
Step 3 — Set Root Password and Enable SSH Inside Containers
Jump into the Faisal container:
lxc exec Faisal -- bash
You're now root inside the container. Install SSH, set a password, and configure SSH to allow root with password authentication:
# Inside the Faisal container
apt update && apt install -y openssh-server

# Set root password
passwd root

# Allow root login with password in SSH config
sed -i 's/#PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sed -i 's/#PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config

# Note: Ubuntu 24.04 cloud images may ship drop-in files under
# /etc/ssh/sshd_config.d/ that override PasswordAuthentication — check there
# if password login still fails.

# Restart SSH (the service is named "ssh" on Ubuntu, not "sshd")
systemctl restart ssh
systemctl enable ssh

exit
Repeat the same steps inside the Rifat container:
lxc exec Rifat -- bash
# ... same steps as above ...
exit
Step 4 — Expose SSH on Different Host Ports
The containers are on a private network. To let clients SSH in from the outside world, we need to forward host ports to container ports using iptables NAT rules. On the host machine:
# Forward host port 2221 → Faisal container port 22
sudo iptables -t nat -A PREROUTING -p tcp --dport 2221 -j DNAT --to-destination 10.4.39.212:22
sudo iptables -A FORWARD -p tcp -d 10.4.39.212 --dport 22 -j ACCEPT

# Forward host port 2222 → Rifat container port 22
sudo iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.4.39.231:22
sudo iptables -A FORWARD -p tcp -d 10.4.39.231 --dport 22 -j ACCEPT
To make these rules survive a reboot, install iptables-persistent:
sudo apt install -y iptables-persistent
sudo netfilter-persistent save
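If you'd rather not manage iptables rules by hand, LXD's built-in proxy device achieves the same forwarding and persists across reboots on its own. A sketch for the same two ports (device name "ssh-proxy" is arbitrary):

```shell
# Forward host port 2221 → port 22 inside the Faisal container
lxc config device add Faisal ssh-proxy proxy \
  listen=tcp:0.0.0.0:2221 connect=tcp:127.0.0.1:22

# Forward host port 2222 → port 22 inside the Rifat container
lxc config device add Rifat ssh-proxy proxy \
  listen=tcp:0.0.0.0:2222 connect=tcp:127.0.0.1:22
```

One tradeoff: in the default proxy mode, connections appear to sshd as coming from the container's localhost, so auth logs lose the client's real IP; the device's nat=true mode can preserve it.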
Now clients can SSH in like this:
# Faisal connects on port 2221
ssh -p 2221 root@your.vps.ip

# Rifat connects on port 2222
ssh -p 2222 root@your.vps.ip
Step 5 — Install Nginx on Host as Reverse Proxy
On the host machine, install Nginx:
sudo apt update && sudo apt install -y nginx
sudo systemctl enable nginx
sudo systemctl start nginx
Step 6 — Configure Nginx to Route Domains to Containers
Create separate Nginx config files for each client. For Faisal:
sudo nano /etc/nginx/sites-available/faisaldomain.com
server {
listen 80;
server_name faisaldomain.com www.faisaldomain.com;
location / {
proxy_pass http://10.4.39.212:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache_bypass $http_upgrade;
}
}
For Rifat:
sudo nano /etc/nginx/sites-available/rifatdomain.com
server {
listen 80;
server_name rifatdomain.com www.rifatdomain.com;
location / {
proxy_pass http://10.4.39.231:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache_bypass $http_upgrade;
}
}
Enable both sites and test:
sudo ln -s /etc/nginx/sites-available/faisaldomain.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/rifatdomain.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
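In production you'll almost certainly want HTTPS on top of this. A hedged sketch using certbot's Nginx plugin on the host, assuming DNS for both domains already points at the VPS:

```shell
# Obtain certificates and let certbot rewrite the server blocks for HTTPS
sudo apt install -y certbot python3-certbot-nginx
sudo certbot --nginx -d faisaldomain.com -d www.faisaldomain.com
sudo certbot --nginx -d rifatdomain.com -d www.rifatdomain.com
```

TLS terminates at the host Nginx; traffic to the containers stays plain HTTP on the private bridge, which is fine since it never leaves the machine.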
4. Real World Example
Here's the exact scenario: both clients are running Next.js e-commerce apps, and both are running them on port 3000. This would be impossible with two users on the same server — you can't bind two processes to the same port. With LXC, there's zero conflict.
Client 1 — Faisal
Faisal SSHs in on port 2221, clones his repo, and starts his Next.js app:
# Faisal's container — 10.4.39.212

# Install Node.js
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt install -y nodejs

# Clone and build
git clone git@github.com:faisal/ecommerce-app.git
cd ecommerce-app
npm install
npm run build

# Start on port 3000 (using pm2)
npm install -g pm2
pm2 start npm --name "faisal-app" -- start
pm2 startup && pm2 save
Client 2 — Rifat
Rifat does the exact same thing inside his own container — port 3000, same setup:
# Rifat's container — 10.4.39.231
# Same commands — completely separate environment
git clone git@github.com:rifat/his-app.git
cd his-app
npm install && npm run build
pm2 start npm --name "rifat-app" -- start
How the Routing Works
When a browser visits faisaldomain.com:
- DNS resolves faisaldomain.com → your VPS IP (e.g., 103.x.x.x)
- Nginx on the host receives the request on port 80
- Nginx reads the Host header: faisaldomain.com
- Nginx proxies the request to 10.4.39.212:3000 (Faisal's container)
- Faisal's Next.js app responds, and Nginx sends the response back to the browser
When someone visits rifatdomain.com, the same process happens but Nginx forwards to 10.4.39.231:3000. Both apps can be on port 3000 because they're in completely separate network namespaces.
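You can verify this routing from the host before DNS is even configured by faking the Host header with curl (assumes both apps are already listening on port 3000):

```shell
# Ask the host Nginx for each site explicitly, regardless of DNS
curl -s -H "Host: faisaldomain.com" http://127.0.0.1/ | head -n 5
curl -s -H "Host: rifatdomain.com" http://127.0.0.1/ | head -n 5
```

If each command returns a different app's HTML, the virtual hosting is working.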
5. MongoDB Setup — One Instance, Multiple Isolated Databases
You don't need to install MongoDB separately inside each container. That would eat RAM fast. Instead, install MongoDB once on the host and create isolated databases with separate users per client.
Install MongoDB on Host
# Import MongoDB GPG key and repo
curl -fsSL https://www.mongodb.org/static/pgp/server-7.0.asc | sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor
# Note: the "jammy" (22.04) repo is used here — check MongoDB's docs for a
# native noble (24.04) build of your chosen version
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
sudo apt update && sudo apt install -y mongodb-org
sudo systemctl enable mongod && sudo systemctl start mongod
Bind MongoDB to the LXC Bridge IP
By default, MongoDB only listens on localhost (127.0.0.1). You need it to also listen on the LXC bridge interface so the containers can reach it. Edit the MongoDB config:
sudo nano /etc/mongod.conf
Change the bindIp line:
# Before:
net:
  bindIp: 127.0.0.1

# After (10.4.39.1 is the host's LXC bridge IP):
net:
  bindIp: 127.0.0.1,10.4.39.1
sudo systemctl restart mongod
Create Separate Database and User Per Client
Connect to MongoDB on the host and set up isolated databases:
mongosh
// Create admin user first
use admin
db.createUser({
user: "adminUser",
pwd: "strongAdminPassword",
roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
})
// Create Faisal's database and user
use faisaldb
db.createUser({
user: "faisal",
pwd: "faisalSecurePassword123",
roles: [{ role: "readWrite", db: "faisaldb" }]
})
// Create Rifat's database and user
use rifatdb
db.createUser({
user: "rifat",
pwd: "rifatSecurePassword456",
roles: [{ role: "readWrite", db: "rifatdb" }]
})
Enable Authentication
In /etc/mongod.conf, enable security:
security:
  authorization: enabled
sudo systemctl restart mongod
Now each client connects using their own credentials and can only access their own database. The connection string in Faisal's .env.local:
MONGODB_URI=mongodb://faisal:faisalSecurePassword123@10.4.39.1:27017/faisaldb
And in Rifat's .env.local:
MONGODB_URI=mongodb://rifat:rifatSecurePassword456@10.4.39.1:27017/rifatdb
Faisal literally cannot access rifatdb even if he tries, because MongoDB authentication would reject him.
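You can verify this isolation from inside Faisal's container: authenticating against faisaldb succeeds, while pointing the same credentials at rifatdb is rejected, because his user only exists in faisaldb. A sketch using the credentials above:

```shell
# Works: Faisal's user authenticating against his own database
mongosh "mongodb://faisal:faisalSecurePassword123@10.4.39.1:27017/faisaldb" \
  --eval "db.runCommand({ ping: 1 })"

# Fails with an authentication error: same credentials against Rifat's database
mongosh "mongodb://faisal:faisalSecurePassword123@10.4.39.1:27017/rifatdb" \
  --eval "db.runCommand({ ping: 1 })"
```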
6. GitHub SSH Setup Per Client
This is one of the most important isolation benefits. Each container has a completely separate ~/.ssh/ directory. SSH keys generated inside one container never leave that container.
Inside Faisal's Container
# SSH into Faisal's container first
ssh -p 2221 root@your.vps.ip

# Generate SSH key
ssh-keygen -t ed25519 -C "faisal@faisaldomain.com"
# Accept defaults (skip the passphrase if you want automation-friendly keys)

# Display the public key
cat ~/.ssh/id_ed25519.pub
Copy that public key and add it to Faisal's GitHub account under Settings → SSH and GPG Keys → New SSH Key. Then test:
ssh -T git@github.com
# Hi faisal! You've successfully authenticated...
Inside Rifat's Container
# SSH into Rifat's container
ssh -p 2222 root@your.vps.ip

# Same steps — totally separate SSH key
ssh-keygen -t ed25519 -C "rifat@rifatdomain.com"
cat ~/.ssh/id_ed25519.pub
Add to Rifat's GitHub account. These are two completely independent GitHub identities. Faisal cannot push to Rifat's repos and vice versa — the keys are physically in different containers.
7. RAM and Performance Considerations
This is where real-world constraints bite. Let's be honest about the numbers.
Memory Usage Breakdown
Ubuntu 24.04 host OS baseline:  ~300 MB
LXC + Nginx on host:            ~100 MB
Faisal's container (idle):      ~150 MB
Rifat's container (idle):       ~150 MB
MongoDB (idle):                 ~200-400 MB
Next.js (Faisal, running):      ~200-300 MB
Next.js (Rifat, running):       ~200-300 MB
──────────────────────────────────────────
Total (rough estimate):         ~1.3 - 1.8 GB
A 2GB RAM VPS is cutting it very close. The moment one client runs npm run build, you'll feel it.
Add 4GB Swap Space
Swap is your safety net. It's slow (disk-backed), but it prevents the OOM killer from terminating your processes. Always add swap on a low-RAM VPS:
# Create a 4GB swap file
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it permanent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Verify
free -h
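Because swap is disk-backed and slow, you usually want the kernel to prefer RAM and touch swap only under genuine memory pressure. Lowering vm.swappiness (default 60) is a common tweak for this setup:

```shell
# Prefer RAM; fall back to swap only under real memory pressure
sudo sysctl vm.swappiness=10

# Persist the setting across reboots
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf
```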
The Next.js Build Problem — Exit Code 137
If you've ever seen this during npm run build:
info  - Generating static pages (0/24)
Killed
npm ERR! code 137
Exit code 137 = process was killed by the OOM killer. The Linux kernel ran out of memory and killed the most memory-hungry process (your Node.js build) to keep the system alive.
The fix is the swap space above. Additionally:
- Stop unused containers before building: lxc stop Rifat
- Build one client at a time, never simultaneously
- After the build is done, start the container again: lxc start Rifat
- If you regularly hit OOM, upgrade to a 4GB RAM plan — it's absolutely worth it
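Another lever: cap Node's heap size so the build works within a budget instead of ballooning until the OOM killer fires. A sketch, run inside the client's container (1536 MB is an assumed cap; tune it to your available RAM):

```shell
# Limit the V8 old-space heap during the build
NODE_OPTIONS="--max-old-space-size=1536" npm run build
```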
Stopping and Starting Containers
# Stop a container (frees its RAM immediately)
lxc stop Rifat

# Start it back up
lxc start Rifat

# Check status
lxc list
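To keep one client's workload from starving the other, LXD can also enforce hard per-container resource caps (the values below are illustrative; size them to your VPS):

```shell
# Cap each container's RAM and CPU allocation
lxc config set Faisal limits.memory 1GB
lxc config set Rifat limits.memory 1GB
lxc config set Faisal limits.cpu 1
lxc config set Rifat limits.cpu 1
```

A container that hits its memory limit gets OOM-killed inside its own cgroup rather than dragging the whole host down.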
8. FAQ
Do clients have access to host Nginx?
No. Nginx runs on the host machine, not inside any container. The client only has root access inside their own container's filesystem. They can't see /etc/nginx/ on the host, they can't modify configurations, and they can't even reach the host's localhost. The host is an entirely separate environment from their perspective.
Can clients see each other's files or processes?
No. Each LXC container has its own isolated filesystem, process namespace, and network namespace. When Faisal runs ps aux inside his container, he only sees his own processes — not Rifat's, not the host's. When he browses /, he only sees his own container's root filesystem.
Can two clients run apps on the same port?
Yes, absolutely. Port numbers are per network namespace. Faisal's port 3000 exists inside his container's network namespace. Rifat's port 3000 exists inside his. These are completely separate networking stacks — they have no awareness of each other. The host Nginx is what bridges them to the outside world by proxying to their respective container IPs.
How do clients SSH into their container?
Via the iptables port forwarding we set up. Faisal SSHs to the VPS on port 2221, which gets forwarded to his container's port 22. Rifat uses port 2222. The commands are ssh -p 2221 root@your.vps.ip and ssh -p 2222 root@your.vps.ip respectively.
Will Faisal see Rifat's GitHub credentials?
No. SSH keys live at ~/.ssh/, which is inside each container's filesystem. Faisal literally cannot navigate to Rifat's container's filesystem. Git configs, GitHub tokens, SSH keys — all of it is siloed per container.
Do I need to install MongoDB separately for each client?
No. Install MongoDB once on the host. Create one database per client with its own user credentials. MongoDB's authentication system ensures Faisal can only read/write to faisaldb and Rifat can only read/write to rifatdb. This is the most RAM-efficient approach.
How does one public IP serve multiple clients?
Through the HTTP Host header (and, for HTTPS, TLS Server Name Indication). All traffic comes in on port 80/443. Nginx reads the domain name from the request and routes to the correct backend. faisaldomain.com goes to 10.4.39.212:3000 and rifatdomain.com goes to 10.4.39.231:3000. This is called virtual hosting.
What happens during Next.js build with low RAM? Explain OOM kill.
The Linux OOM (Out Of Memory) killer is a kernel mechanism that activates when the system runs completely out of physical RAM and swap. It picks the process consuming the most memory and kills it with SIGKILL (signal 9). The process exits with code 137 (128 + 9). Next.js builds are particularly memory-heavy because webpack/turbopack loads the entire app into memory for bundling. The fix: add swap space (4GB recommended) and stop other containers before building.
How do I switch from host root into a client container?
Use lxc exec. This directly executes a command inside the container as root without needing SSH:
# Get a shell inside Faisal's container
lxc exec Faisal -- bash

# Run a specific command
lxc exec Faisal -- pm2 status
lxc exec Rifat -- systemctl status ssh
9. Conclusion
Here's what you've built: a production-capable multi-tenant hosting environment on a single budget VPS. Each client gets their own isolated Ubuntu system with full root access. They can run any app on any port, manage their own GitHub keys, and connect to their own MongoDB database — all without having the slightest awareness of each other's existence.
The architecture: Internet → Host Nginx (routes by domain) → LXC containers (isolated environments) → MongoDB on host (isolated by credentials).
And here's a fun fact: this is exactly how cheap VPS providers work. DigitalOcean's $4/month droplet, Vultr's basic plan, Linode Nanode — many of these are LXC (or similar) containers on large physical servers. You just built your own mini cloud provider.
Practical recommendations by client count:
- 1-2 clients: 2GB RAM with 4GB swap can work, but it's tight. Always have swap enabled.
- 3-4 clients: Upgrade to 4GB RAM. You'll thank yourself during builds.
- 5+ clients: Go to 8GB RAM or distribute across multiple VPS instances.
The isolation you get from LXC is serious — kernel-level namespace separation, not just file permission tricks. It's the right tool for running multiple client environments without buying multiple servers. Scale smart.