Basically what you do is point your domain/subdomains (e.g. vm1.mydomain.com, vm2.mydomain.com, which you can set up via A records at your DNS provider) at your public IP. In your router, use NAT to forward the HTTP/HTTPS ports to your NGINX server. NGINX then decides which subdomain goes to which internal IP/port.
Hence you run ONE NGINX instance on your network, and it proxies to your internal servers accordingly.
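As a sketch, the subdomain-to-backend routing described above looks like this in plain NGINX (the internal IPs, ports, and domain names are placeholders for your own):

```nginx
# vm1.mydomain.com -> service on an internal VM
server {
    listen 80;
    server_name vm1.mydomain.com;

    location / {
        proxy_pass http://192.168.1.11:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

# vm2.mydomain.com -> service on a different internal VM
server {
    listen 80;
    server_name vm2.mydomain.com;

    location / {
        proxy_pass http://192.168.1.12:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Each `server` block matches one subdomain via `server_name` and forwards to one internal host, so a single NGINX instance can front any number of VMs.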
To add to this, I generally run three things on my "router" Proxmox instance:
OPNsense VM
Wi-Fi controller LXC (Omada, or previously UniFi)
NPM LXC (add an 80/443 rule to the firewall)
I set up Dynamic DNS in OPNsense to update my DNS records at my registrar. I then set up my proxy hosts in NPM to do DNS challenges to grab Let's Encrypt certificates, and point the subdomains at hosts on my local network (mostly separate LXCs).
Works great, and is very easy to add/modify services.
Why use nginx, why not haproxy?
You don’t need to install a proxy manager on each VM. You can set up Nginx Proxy Manager on one machine and point it to each VM using different subdomains or paths. This way, you have a single reverse proxy for all your VMs under one domain.
You just run it on one machine and point it at the IPs of the other services you want to access online.
The Proxmox helper scripts include a proxy manager.
And that is just asking to lose your whole database. The TTeck scripts are pretty good, but for critical services stick to what works, and for NPM that is Docker.
Your mileage may vary of course.
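Running NPM under Docker, as suggested above, only takes a short compose file. This is a minimal sketch using the image, ports, and volumes documented by the Nginx Proxy Manager project (the host-side volume paths are placeholders):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"     # HTTP traffic to be proxied
      - "443:443"   # HTTPS traffic to be proxied
      - "81:81"     # NPM admin web UI
    volumes:
      - ./data:/data                      # app data, incl. the database
      - ./letsencrypt:/etc/letsencrypt    # issued certificates
```

Keeping `./data` and `./letsencrypt` as bind mounts is what makes the database and certificates survive container upgrades.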
No. You can create an internal-only network between the two VMs which doesn't have to travel over the wire; the traffic is handled internally by Proxmox. You can then reverse proxy over that network.
Are you using a separate network between each pair of VMs, or the same one for all VMs?
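One way to sketch such an internal-only network is a second bridge with no physical port in the Proxmox host's /etc/network/interfaces (the bridge name and subnet here are placeholders):

```
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports none    # no physical NIC: traffic never leaves the host
    bridge-stp off
    bridge-fd 0
```

Attach the virtual NICs of both VMs to vmbr1 and give them static addresses in that subnet; traffic between them is then switched entirely inside the Proxmox host.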
Forget all those managers, you don't need them at all.