Proxmox allows a lot of powerful options when it comes to shared file systems, but nothing could be simpler than using sshfs and adding the directory to Proxmox as a storage option for backups, VMs and ISOs.

In my case, I have 2 Proxmox servers and a 3rd server running Ubuntu which I would like to store backups on.

My 2 Proxmox servers are an i5 and an i7 from SerweryDedyKowane.pl, both hosted on the same network in Poland, and the backup server is an i5 V-Dedi from Wishosting, hosted in Canada.

Here's what I did:

First, I set up sshfs to point to the storage on the backup server.

In this example, I have set up sshfs on both Proxmox servers. So, on each Proxmox server there is a folder called /mnt/remotebackup which is where I'd like to store my backups.
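
If you haven't set up the mount yet, here's a rough sketch of the sshfs side on each Proxmox server - the user, hostname and remote path below are placeholders for your own:

apt install sshfs -y
mkdir -p /mnt/remotebackup
sshfs backupuser@backup.example.com:/home/backupuser/proxmox-backups /mnt/remotebackup -o reconnect,allow_other

The reconnect option helps the mount survive network blips, and allow_other lets processes other than the one that mounted it read the directory.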

In the Proxmox UI, we need to click on Datacenter then click on Storage. From there, we need to Add a Directory.

  • I gave mine the ID of remotebackup
  • For Directory, I put in /mnt/remotebackup - if you're following this example, feel free to change it to whatever path you're using.
  • For Content, I have selected VZ Dump Backup File which allows me to store backups there. I have not tried using it to store anything else, but I plan to test that out at some stage.
  • Nodes are set to all, Enabled set with a tick, same with Shared.
  • I have set Max Backups to 5, but you can change that as you wish. This is the number of copies of the backup it will keep before it starts deleting the old ones to make room for new ones.
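
As an aside, the same storage can also be added from the shell with pvesm - here's a sketch of what I believe the equivalent command looks like on Proxmox 5:

pvesm add dir remotebackup --path /mnt/remotebackup --content backup --maxfiles 5 --shared 1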

Now that we have that set up on BOTH Proxmox servers, we can start using the storage. (In my case the storage definition actually copied across by itself, so you may only need to add it once - but if not, repeat it on each of your Proxmox servers.)

To verify that backups will work, I will select a VM, go to the Backup tab and click Backup Now. On this screen, make sure that Storage is showing the new storage we made - in my case it's called remotebackup.

Go ahead and make a backup; it'll take a little while depending on the size of the VM, but it will be stored on the remote server.
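
If you'd rather do it from the shell, vzdump can push a backup to the same storage - 100 here is a placeholder VMID, so swap in your own:

vzdump 100 --storage remotebackup --mode snapshot --compress lzo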

Neat eh?

Remove a node from Proxmox 5

I have a Proxmox cluster set up using a few dedicated servers on the same network. I have decided to downsize my cluster and remove 2 of the nodes, leaving only 2 behind.

Normally, to remove a node from Proxmox it's a simple matter of shutting down the node to be removed and then, on one of the remaining cluster members, removing it like this:

pvecm nodes

That will list the nodes in the cluster.

pvecm delnode <nodename>

Which will remove it from the cluster.

However, when I went to do that I was given an error:

cluster not ready - no quorum?

Ruh roh! A simple fix is to tell Proxmox that there are only 2 servers left in the cluster, which lowers the number of "votes" the cluster expects before it considers itself quorate:

pvecm expected 2

Then try pvecm delnode again:

pvecm delnode testnode3

If successful, the output you see will be similar to this:

Killing node 3

To remove the old nodes from the web UI, we need to delete their folders like this:

rm -rf /etc/pve/nodes/nodename
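
If you're not sure of the exact folder name, list what's there first:

ls /etc/pve/nodes

Then substitute nodename above for the node you just removed.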

And we're done :)

Let's Encrypt SSL certificates for Proxmox 5

Today I will be setting up SSL certificates for Proxmox 5 so that the web UI is served over HTTPS with a trusted certificate instead of the self-signed one that ships with Proxmox, which browsers flag as untrusted.

I will be doing this with Certbot.

First, we need to install Certbot:

apt install certbot -y 

Now, we need to set up the domain we're using for PVE and obtain a certificate:

certbot certonly

I will be using option 2 ("standalone"), which spins up a temporary webserver so that certbot can verify that the domain points to the IP of the Proxmox server.
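
If you'd rather skip the interactive prompts, the equivalent one-liner looks like this, with pve.yourdomain.com standing in for whichever hostname you use for the web UI:

certbot certonly --standalone -d pve.yourdomain.com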

Now, we need to copy the cert files into the Proxmox directory like this:

cp /etc/letsencrypt/live/yourdomain.com/fullchain.pem /etc/pve/local/pveproxy-ssl.pem
cp /etc/letsencrypt/live/yourdomain.com/privkey.pem /etc/pve/local/pveproxy-ssl.key

And when that's done, we need to refresh Proxmox so it can be aware of the changes:

systemctl restart pveproxy

You should be able to see that it's now accessing through HTTPS and with a valid certificate - no more warnings :)

We need to make this permanent, so we'll create a cron job to keep it updated and renew the cert as needed:

crontab -e

Then paste the following on a new line:

30 6 1,15 * * /usr/bin/certbot renew --quiet --post-hook "/usr/local/bin/renew-pve-certs.sh"

Control-X to exit, Y to save and press Enter to save the file with the original name.
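
The cron entry assumes a small post-hook script at /usr/local/bin/renew-pve-certs.sh that repeats the copy-and-restart steps from above - mine isn't shown here, but a minimal sketch would be:

#!/bin/sh
# Copy the renewed Let's Encrypt cert into place for pveproxy,
# then restart it so the new cert is picked up.
# Replace yourdomain.com with the domain you issued the cert for.
cp /etc/letsencrypt/live/yourdomain.com/fullchain.pem /etc/pve/local/pveproxy-ssl.pem
cp /etc/letsencrypt/live/yourdomain.com/privkey.pem /etc/pve/local/pveproxy-ssl.key
systemctl restart pveproxy

Remember to make it executable with chmod +x /usr/local/bin/renew-pve-certs.sh.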

And we're done :)

I'm going to add something extra here because it might apply to you too. If you're also running VestaCP on your Proxmox server, with ports 80 and 443 forwarded to the VestaCP server, the certbot method shown above will fail. What we need to do instead is set up the PVE domain in VestaCP first (which will work), then copy the certificate files from VestaCP to Proxmox and follow the remaining steps. I'll clarify this if someone comments requesting more details.

Multiple VMs behind one public IP on Proxmox

When I was setting up my dedicated server for the first time, I wanted to be able to run multiple KVM VMs or LXC containers that share the same public IP address, since my dedicated server only has 1.

From what I understood, Proxmox was designed to give each VPS its own public IP, but that wouldn't suit my setup.

I searched the internet for hours trying to find a solution, and it turned out to be relatively simple.

What we need to do is edit the /etc/network/interfaces file and enable a few things: IPv4 forwarding and some iptables rules. Yuck! That sounds like hard work, but it's actually simple, especially if you like to copy/paste :)

What we are going to do is set the dedicated server up with an "internal" network, and that's where the VMs will communicate. They can communicate with each other, as well as the host server.

Before I post the contents of the /etc/network/interfaces file, I will point out a few things that you may need to change depending on your setup. The main thing is the way your network is laid out - mine looks like this (I'll use 10.10.10.0/24 as the internal network for this example; substitute your own private range if you prefer):

Dedicated Server/Proxmox Server - Public IP: x.x.x.x, Internal IP: 10.10.10.1

VM 1 - Internal IP: 10.10.10.2

VM 2 - Internal IP: 10.10.10.3

Let's say I want to run a web server on port 80 on VM 1 and an FTP server on VM 2. I would need to forward port 80 from the Proxmox public IP to port 80 on VM 1, and port 21 from Proxmox to port 21 on VM 2.

To complicate things, and for extra points, if you wanted to have multiple FTP servers - say, one on VM 1 and one on VM 2 - you can change the port on the Proxmox side: for example, port 2121 goes to VM 1 and port 2222 goes to VM 2. There's a sketch of this further down.

Have a look at your /etc/network/interfaces file:

nano /etc/network/interfaces

Mine looks a little like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
        address X.X.X.X
        netmask 255.255.255.0
        gateway X.X.X.1

Yours will be different, but very similar. The main thing we need to take note of is the interface - mine is eth0, and I believe in most cases that's what it would be, but double check, because if it's not, the next part will need to be modified.
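
A quick way to double-check is to list your interfaces and see which one carries the public IP:

ip a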

That's the minimal setup to get your Proxmox server talking to the internet, but it doesn't do anything for VM 1 or VM 2 until we add this underneath the above:

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE

This gives the Proxmox host an internal network called "vmbr0" with an IP of 10.10.10.1, enables IPv4 forwarding, and sets up basic iptables masquerading so traffic from the 10.10.10.0/24 network goes out via the public IP. Notice how it refers to eth0 - if your interface is different then change it to that.

So at this stage we've created the internal network, and the VMs will be able to use this new network "vmbr0" to access the internet, but it's not going to allow incoming connections or port forwarding until we add the next bit:

post-up   iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to 10.10.10.2:80
post-down iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to 10.10.10.2:80

This will take anything that's sent to your Proxmox public IP on port 80 and forward it to port 80 on 10.10.10.2 (which is VM 1 in this example).

If you wanted to forward port 8080, for example, to port 80 on the VM, you could change it to this:

post-up   iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to 10.10.10.2:80
post-down iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to 10.10.10.2:80

You'll notice there are 2 lines each time, post-up and post-down: when the interface comes up the forwarding rule is added, and when it goes down the rule is removed again.

Go ahead and repeat the process for any VMs you have and the ports you'd like to be forwarded.
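
For the two-FTP-server idea from earlier, the pairs would look something like this (10.10.10.2 and 10.10.10.3 being VM 1 and VM 2 on our example network):

post-up   iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2121 -j DNAT --to 10.10.10.2:21
post-down iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 2121 -j DNAT --to 10.10.10.2:21
post-up   iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to 10.10.10.3:21
post-down iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to 10.10.10.3:21

Bear in mind that FTP in passive mode needs its data ports handled too, so forwarding port 21 alone won't be the whole story.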

So, now that we've added the network config, we need to reboot to make it take effect. See you after the reboot...

... okay, so we're back.

Hopefully everything is fine at this stage and you're still able to access your Proxmox.
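
If you want to double-check that the rules loaded, list the NAT table - you should see the MASQUERADE rule and any DNAT entries we added:

iptables -t nat -L -n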

Let's move on to the next part, which is setting up the VMs themselves with this new config.

I will walk through adding a new VM.

Let's create a new LXC container: click "Create CT".

Give it a hostname (it can be anything), choose the image and so on, then stop at the Network tab.

For IP address, we need to give it a STATIC IP in the range we set before, so if you're following this example it would be 10.10.10.2/24 - notice the /24 at the end; leave that in, otherwise it won't work.

The gateway will be 10.10.10.1, which is the vmbr0 address on the Proxmox host.

Note that DHCP will not work, you MUST set the IP yourself.

For the DNS settings, you can either leave them blank or fill them in. My advice would be to leave them blank, and if you can't resolve a hostname in the VM, go back and change them. The best test is to "ping google.com" - if that fails, try pinging an IP directly, for example "ping 8.8.8.8"; if that works, the DNS needs to be set up.

Now that you've got it set up, your port forwarding should work!