Implementation of OpenVPN and OpenZiti for Secure Remote Access
Summary
This documentation describes the implementation of two different approaches to secure remote access, specifically a classic VPN-based solution with OpenVPN and a zero-trust architecture implemented using OpenZiti. The project focuses on the practical aspects of deploying and operating both technologies in a real home lab environment, including architecture design decisions and configuration considerations. The goal is to use practical implementation experience to provide insights into the characteristics of each approach.
Background and Motivation
Secure remote access is a common requirement in modern IT environments, allowing users to access internal resources from external networks in a secure way. Traditionally, this has often been achieved using VPN technologies, which extend a private network over a public infrastructure. In recent years, alternative architectural approaches have emerged that aim to reduce implicit trust and limit network-level access. Zero Trust architectures follow a different design philosophy by focusing on identity-based and service-oriented access control. Both approaches address the same fundamental problem but differ significantly in terms of architecture, configuration, and operational characteristics.
Requirements
General Infrastructure Requirements
The following infrastructure requirements are based on the network conditions of the configured environment and apply to both OpenVPN and OpenZiti deployments. The described scenario assumes an internal network behind carrier-grade NAT (CGNAT); the requirements follow from the overall architecture decisions for the two scenarios.
- One publicly reachable VPS with a static public IPv4 address
- One or more internal host(s) located in a private home network
- One client used for remote access testing
- Reliable Internet connectivity for all systems
The VPS acts as the publicly exposed entry point, while the internal host provides services that are accessed remotely in both scenarios. The internal lab environment is hosted on a Proxmox VE system. All internal components (e.g., connector and service hosts) are deployed as virtual machines on Proxmox.
Operating Systems
- VPS: Ubuntu Server LTS
- Internal hosts: Ubuntu Server LTS
- Client system: macOS
OpenVPN-Specific Requirements
- OpenVPN server deployed on a publicly reachable VPS
- UDP connectivity between VPN clients and the VPS
- Certificate-based authentication using a public key infrastructure (PKI)
- IP routing enabled on the VPS to allow communication between multiple VPN clients
- One OpenVPN client located in the internal network acting as a connector to local services
OpenZiti-Specific Requirements
- OpenZiti controller and edge router reachable via the VPS
- OpenZiti connector installed on the internal host (in the form of an edge router)
- OpenZiti client installed on the remote client system
- Outbound connectivity from the internal network to the VPS
Architecture
The setup consists of a publicly reachable VPS, an internal home lab environment located behind carrier-grade NAT (CGNAT), and a remote client system. Both OpenVPN and OpenZiti are deployed using the same underlying infrastructure in order to ensure comparability between the two approaches.
The VPS serves as the only publicly reachable system and acts as the central entry point for remote access in both scenarios. Because of CGNAT, the internal network does not allow incoming connections from the Internet.
All internal components are hosted on a Proxmox VE system and deployed as virtual machines. Connectivity from the internal network to the VPS is established exclusively through outbound connections.
In the OpenVPN setup, the VPN server is deployed on the VPS and serves as a central hub. Remote clients establish a VPN tunnel to the VPS using UDP-based connectivity. An additional OpenVPN client is deployed within the internal network on the VPN-Connector-VM. This client acts as a connector by establishing an outbound VPN connection to the VPS and providing access to local services within the private network. IP routing is enabled on the VPS to allow communication between multiple VPN clients. As a result, traffic from the remote client can be routed through the VPS to the internal connector and further to the internal services located on the Service-VM.
In the OpenZiti setup, a Zero Trust overlay network is deployed. The OpenZiti controller and an edge router are hosted on the VPS and form the publicly reachable control and data plane components. Within the internal network, an OpenZiti connector is implemented using an edge router that is deployed on an internal host. This connector establishes outbound connections to the edge router on the VPS and provides service-level access to internal resources. Remote clients authenticate using cryptographic identities and connect to the OpenZiti overlay without gaining network-level access to the internal environment. Access is restricted to explicitly defined services based on identity and policy.
Description
Step 1: Proxmox-Based Lab Setup
1.1 Proxmox Installation and Access
Proxmox VE is installed on a dedicated host within the home network. The installation is performed using the official Proxmox VE installer with default settings. A complete installation guide can be found here.
An important remark: Proxmox VE is a so-called bare-metal installer. This means that the entire server is used and all existing data on the selected hard drives is deleted. It is best to provide a dedicated device for this purpose.
Download the ISO Image here. For the installation, a bootable USB stick can be used, for example.
You should ensure that a static IP address is configured for the Proxmox host. After installation, the Proxmox web interface is accessible via HTTPS and is used for all further configuration steps.
The Proxmox host is connected to the home network and configured with a Linux bridge. All virtual machines are attached to this bridge, providing Layer-2 connectivity to the internal network and outbound Internet access. No additional VLANs or isolated networks are used in this setup. The default bridge interface (vmbr0) is used.
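For reference, a typical Linux bridge definition in /etc/network/interfaces on the Proxmox host looks roughly like the following. This is only a sketch: the physical NIC name (eno1) and the addresses are assumptions and must be adapted to your own network.

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.68.10/24
    gateway 192.168.68.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```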
1.2 Virtual Machine Creation
The following virtual machines are created via the Proxmox web interface:
| VM Name | Role | Operating System | Network |
|---|---|---|---|
| service-vm | Internal service host | Ubuntu Server 24.04 LTS | vmbr0 |
| vpn-connector | OpenVPN internal connector | Ubuntu Server 24.04 LTS | vmbr0 |
| ziti-edge-private | OpenZiti connector (edge router) | Ubuntu Server 24.04 LTS | vmbr0 |
Each virtual machine is created using the following steps:
- Open the Proxmox web interface.
- Click Create VM
- Assign the VM name according to the table above
- Select the Ubuntu Server installation ISO, which can be downloaded here
- Configure CPU, memory, and storage according to your available resources
- Attach the network interface to the Proxmox bridge
After the OS is installed on all VMs, the basic system preparation is performed:
```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y openssh-server
sudo systemctl enable ssh
```
Step 2: OpenVPN Setup
The OpenVPN server in this setup is deployed on the VPS. The OpenVPN client inside the internal network acts as a connector by establishing an outbound VPN connection to the VPS and forwarding traffic to internal services.
The setup consists of:
- OpenVPN server on the VPS (publicly reachable)
- OpenVPN client on the remote client system (macOS)
- OpenVPN connector client on an internal VM (Proxmox) within the home network
OpenVPN Server Installation (VPS)
Install OpenVPN and Easy-RSA on the VPS:
```bash
sudo apt update
sudo apt install -y openvpn easy-rsa
```
Create a Public Key Infrastructure (PKI) using Easy-RSA:
```bash
make-cadir ~/easy-rsa
cd ~/easy-rsa
./easyrsa init-pki
./easyrsa build-ca
```
Then create server certificate/key:
```bash
./easyrsa gen-req server nopass
./easyrsa sign-req server server
```
Generate Diffie-Hellman parameters and the TLS-crypt key:
```bash
./easyrsa gen-dh
openvpn --genkey --secret ta.key
```
Create and sign the client certificates:
```bash
./easyrsa gen-req client-remote nopass
./easyrsa sign-req client client-remote
./easyrsa gen-req client-connector nopass
./easyrsa sign-req client client-connector
```
Copy the generated keys/certificates:
```bash
sudo cp ~/easy-rsa/pki/ca.crt /etc/openvpn/
sudo cp ~/easy-rsa/pki/issued/server.crt /etc/openvpn/
sudo cp ~/easy-rsa/pki/private/server.key /etc/openvpn/
sudo cp ~/easy-rsa/pki/dh.pem /etc/openvpn/
sudo cp ~/easy-rsa/ta.key /etc/openvpn/
```
Create the OpenVPN server configuration:
```bash
sudo mkdir -p /etc/openvpn/server
sudo nano /etc/openvpn/server/server.conf
```
```
port 1194
proto udp
dev tun

ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh.pem

topology subnet
server 10.8.0.0 255.255.255.0

# Allow VPN clients to talk to each other (needed for hub-and-spoke via VPS)
client-to-client

# Keepalive
keepalive 10 120

# Cryptographic hardening
tls-crypt /etc/openvpn/ta.key
cipher AES-256-GCM
auth SHA256

user nobody
group nogroup
persist-key
persist-tun
verb 3

# Route internal LAN via connector
route 192.168.68.0 255.255.255.0
# For this route to work end-to-end, it must also be pushed to the clients
# and mapped to the connector client via a client-config-dir entry:
push "route 192.168.68.0 255.255.255.0"
client-config-dir /etc/openvpn/ccd
# /etc/openvpn/ccd/client-connector must contain:
#   iroute 192.168.68.0 255.255.255.0
```
Enable IP forwarding on the VPS:
```bash
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-openvpn-forward.conf
sudo sysctl --system
```
Open the OpenVPN UDP port on the VPS firewall:
```bash
sudo ufw allow 1194/udp
sudo ufw status verbose
```
Start and enable the OpenVPN server:
```bash
sudo systemctl enable --now openvpn-server@server
sudo systemctl status openvpn-server@server --no-pager
```
Create Client Configurations
Create an .ovpn file for each client on the VPS (or locally). First create the client configuration (remote client):
```bash
nano client-remote.ovpn
```

```
client
dev tun
proto udp
remote {{VPS_PUBLIC_IP}} 1194
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
cipher AES-256-GCM
auth SHA256
verb 3
<ca>
# paste ca.crt here
</ca>
<cert>
# paste client-remote.crt here
</cert>
<key>
# paste client-remote.key here
</key>
<tls-crypt>
# paste ta.key here
</tls-crypt>
```
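Pasting the certificate material by hand is error-prone. The manual step can be scripted; the following is a minimal sketch that assembles an inline profile from a base config and the Easy-RSA output files (the function name and paths are illustrative, adjust them to your layout):

```shell
# Assemble an inline .ovpn profile: base config followed by the inline
# <ca>/<cert>/<key>/<tls-crypt> blocks expected by OpenVPN.
make_ovpn() {
  base="$1"; ca="$2"; cert="$3"; key="$4"; ta="$5"
  cat "$base"
  printf '<ca>\n';        cat "$ca";   printf '</ca>\n'
  printf '<cert>\n';      cat "$cert"; printf '</cert>\n'
  printf '<key>\n';       cat "$key";  printf '</key>\n'
  printf '<tls-crypt>\n'; cat "$ta";   printf '</tls-crypt>\n'
}

# Example (on the VPS, after the Easy-RSA steps above):
# make_ovpn base-client.conf ~/easy-rsa/pki/ca.crt \
#   ~/easy-rsa/pki/issued/client-remote.crt \
#   ~/easy-rsa/pki/private/client-remote.key \
#   ~/easy-rsa/ta.key > client-remote.ovpn
```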
Do the same for the connector VM: create client-connector.ovpn with its own certificate and key. Then transfer the client configuration files securely to their target systems. For example, use scp from the VPS:
```bash
scp client-remote.ovpn user@{{CLIENT_IP}}:~
scp client-connector.ovpn user@{{CONNECTOR_VM_IP}}:~
```
Internal OpenVPN Connector Setup
Install OpenVPN:
```bash
sudo apt update
sudo apt install -y openvpn
```
Run the OpenVPN client:
```bash
sudo openvpn --config ~/client-connector.ovpn
```
Move the config and enable systemd unit:
```bash
sudo mkdir -p /etc/openvpn/client
sudo cp ~/client-connector.ovpn /etc/openvpn/client/connector.conf
sudo systemctl enable --now openvpn-client@connector
sudo systemctl status openvpn-client@connector --no-pager
```
Enable IP forwarding on the connector VM:
```bash
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-connector-forward.conf
sudo sysctl --system
```
In order for the remote client to reach internal hosts, it must be clear how traffic gets from the VPS to the LAN. We can use NAT on the connector for this. The connector translates traffic from the VPN to the LAN (SNAT/MASQUERADE). The advantage is that no additional routes need to be set in the home network. On the connector (adjust interface names: tun0 for VPN, ens18 for LAN):
```bash
sudo apt install -y iptables-persistent
sudo iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o ens18 -j MASQUERADE
sudo netfilter-persistent save
```
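After saving, the NAT rule should be persisted in /etc/iptables/rules.v4; the relevant excerpt looks roughly like this (iptables-save format):

```
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.8.0.0/24 -o ens18 -j MASQUERADE
COMMIT
```

It can also be verified at runtime with `sudo iptables -t nat -S POSTROUTING`.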
Internal Service Setup
To verify end-to-end connectivity through the OpenVPN setup, an internal web service is deployed on the internal service host; nginx serves as the example here.
Install nginx on the internal service VM:
```bash
sudo apt update
sudo apt install -y nginx
```
Ensure that the service is running:
```bash
sudo systemctl enable --now nginx
sudo systemctl status nginx --no-pager
```
By default, nginx listens on TCP port 80 and serves a default test page. The service is reachable from within the internal network via the private IP address of the service VM. Subsequently, the nginx page can of course be changed or further services added.
Remote Client Setup
Install an OpenVPN client on your device (in this setup, OpenVPN Connect was used, which can be downloaded here), then import the client-remote.ovpn profile.
Then you can establish a connection.
Verify that the tunnel interface exists and a VPN IP address is assigned:
```bash
ifconfig | grep -A2 utun
```
Then test:
- Ping to the VPS tunnel interface (or to a known VPN peer)
- Ping to an internal host behind the connector
- Access to the internal service
Some common issues which occurred during the setup and how to troubleshoot them:
- If the VPN connects but internal hosts are unreachable, verify routing/NAT on the connector VM.
- Verify that IP forwarding is enabled on both the VPS and the connector VM.
- Check that the OpenVPN UDP port is reachable on the VPS firewall. Check if your provider has a firewall enabled on the VPS. You might need to configure the firewall via a web GUI.
- On the VPS, check logs using:
```bash
sudo journalctl -u openvpn-server@server -n 200 --no-pager
```
Step 3: OpenZiti Setup
The goal of this step is to provide the same internal nginx service as in Step 2, but service-based via OpenZiti and zero-trust principles. The first part of the setup is based on the official OpenZiti quickstart documentation (see the references below).
Firewall Ports on VPS
The following ports must be opened on the VPS. These ports are not generally visible to clients, but are only relevant for designated Ziti components.
- 8440/tcp – Controller Control Plane (Router to Controller)
- 8441/tcp – Edge API (Management + Client API and ZitiAdminConsole)
- 8442/tcp – Public Edge Router Listener (Clients connect here for data)
- 10080/tcp – Router Link Listener (Router to Router for link establishment)
The Ziti Admin Console does not run on its own port on our system, but is hosted by the controller via the WebListener on port 8441.
On the VPS:
```bash
sudo ufw allow 8440/tcp
sudo ufw allow 8441/tcp
sudo ufw allow 8442/tcp
sudo ufw allow 10080/tcp
sudo ufw status verbose
```
In addition, the same ports must be enabled in the VPS provider firewall (otherwise UFW will appear to be ‘open’, but it will still be blocked externally).
Controller & Public Edge Router Installation on the VPS
```bash
export EXTERNAL_IP="$(curl -s eth0.me)"
export ZITI_CTRL_ADVERTISED_ADDRESS="${EXTERNAL_IP}"
export ZITI_CTRL_ADVERTISED_PORT=8440
export ZITI_CTRL_EDGE_ADVERTISED_ADDRESS="${EXTERNAL_IP}"
export ZITI_CTRL_EDGE_ADVERTISED_PORT=8441
export ZITI_ROUTER_ADVERTISED_ADDRESS="${EXTERNAL_IP}"
export ZITI_ROUTER_PORT=8442

source /dev/stdin <<< "$(wget -qO- https://get.openziti.io/ziti-cli-functions.sh)"; expressInstall
```
Among other things, the Express Installer downloads the Ziti binaries, generates a PKI and certificates, and creates a controller configuration.
```bash
source /root/.ziti/quickstart/ubuntu/ubuntu.env
which ziti
```
Troubleshooting: if which ziti is empty, the .env may not have been sourced or the shell may have been reopened.
Ziti Admin Console on the VPS
Download and unzip Ziti Admin Console:
```bash
source /root/.ziti/quickstart/ubuntu/ubuntu.env
wget https://github.com/openziti/ziti-console/releases/latest/download/ziti-console.zip
unzip -d ${ZITI_HOME}/console ./ziti-console.zip
```
Adapt Controller Configuration on the VPS
In the controller YAML (in our case: /root/.ziti/quickstart/ubuntu/ubuntu.yaml) in the WebListener block (web: --> apis:), add the ZAC binding:
```yaml
- binding: zac
  options:
    location: ./console
    indexFile: index.html
```
The admin console is delivered via the same WebListener that is bound to 8441. Therefore, ZAC can be accessed at: https://{{PUBLIC_VPS_IP}}:8441/zac/login
Restart the controller:
```bash
sudo systemctl restart ziti-controller
```
Now the Ziti Admin Console should be reachable.
Deployment of the Edge Router in the Local Network
On the Ziti Edge VM in the local network:
```bash
curl -sS https://get.openziti.io/install.bash | sudo bash -s openziti-router
```
Check services:
```bash
sudo systemctl list-units --type=service | grep -i ziti
sudo systemctl list-unit-files | grep -i ziti
```
In our case, it was relevant that ziti-router.service is available and enabled.
Then continue in the Ziti Admin Console:
- Routers --> Create --> Type: Edge Router
- Name: home-edge-router (in this setup)
- Attributes: e.g. #home (this is important for our policies)
Afterwards, in the Admin Console: generate and download the enrollment JWT for this router.
The generated JWT must now be transferred to the home network VM.
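Enrollment tokens expire, and a stale JWT is a common cause of failed enrollments. As an optional sanity check before transferring it, the token's payload (including the exp claim) can be inspected with a small shell helper; this is a sketch using only base64 and standard tools, and the function name is illustrative:

```shell
# Decode the payload of a JWT (header.payload.signature, base64url-encoded)
jwt_payload() {
  # extract the payload and convert base64url to standard base64
  p="$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')"
  # re-pad to a multiple of 4 so base64 -d accepts it
  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
  printf '%s' "$p" | base64 -d
}

# Example:
# jwt_payload "$(cat home-edge-router.jwt)"   # shows iss, sub, exp, ...
```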
Now on the Ziti Edge VM in the local network:
```bash
sudo /opt/openziti/etc/router/bootstrap.bash
```
Now the controller address (in our case the public VPS IP), the controller port 8440, and the path to the enrollment JWT have to be entered.
Afterwards the router service can be started:
```bash
sudo systemctl enable --now ziti-router.service
sudo systemctl restart ziti-router.service
sudo systemctl status ziti-router.service --no-pager
```
You can view logs with:
```bash
sudo journalctl -u ziti-router -n 80 --no-pager
```
If a timeout appears in the logs, port 10080/tcp on the VPS (provider firewall or UFW) is probably still blocked.
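To rule that out, a quick reachability check of the link listener can be run from the home network (the placeholder stands for the VPS address used throughout this guide):

```bash
nc -vz {{VPS_PUBLIC_IP}} 10080
```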
Service Definition
After this step, the client should be able to access http://nginx.home (the service address we specified in this setup), even though nginx is running internally on a private IP address on port 80 in the local network.
First, the service has to be defined in the Ziti Admin Console:
In the Admin Console:
Services --> Create --> Simple Service
- Name: nginx-home
- Protocol: tcp
- Intercept Hostname: in our case ‘nginx.home’
- Intercept Port: 80
- Host Address (Target): In our case: 192.168.68.60
- Host Port: 80
Afterwards, under Services --> nginx-home --> Configurations, two configurations must exist:
- one of type intercept.v1 (contains host name/port)
- one of type host.v1 (contains target IP/port)
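For reference, the two configurations for this setup typically look as follows; the JSON bodies follow the intercept.v1 and host.v1 schemas, and the values are the ones used above:

intercept.v1:
```json
{
  "protocols": ["tcp"],
  "addresses": ["nginx.home"],
  "portRanges": [{ "low": 80, "high": 80 }]
}
```

host.v1:
```json
{
  "protocol": "tcp",
  "address": "192.168.68.60",
  "port": 80
}
```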
Verification via CLI on VPS:
```bash
source /root/.ziti/quickstart/ubuntu/ubuntu.env
ziti edge login https://{{PUBLIC_VPS_IP}}:8441 -u admin -p {{ADMIN_PASSWORD}}
ziti edge list services | grep -i nginx-home
ziti edge list configs | grep -E 'intercept|host'
ziti edge list service-configs 'service.name="nginx-home"'
```
If the intercept/host configurations are missing or contain incorrect values, access will not work correctly later on.
Generation of a Terminator
In addition to service definitions, configurations (intercept/host) and policies, an OpenZiti service requires a terminator to bind the service to a router and a specific destination in order to enable actual data flow. This is the assignment between the specific service, the router that provides this service (in our case, the home edge router) and the underlay destination that this router can reach, i.e. the IP of the service VM on port 80.
The terminator can e.g. be created in the CLI on the VPS:
```bash
source /root/.ziti/quickstart/ubuntu/ubuntu.env
ziti edge login https://{{PUBLIC_VPS_IP}}:8441 -u admin -p {{ZITI_ADMIN_PASSWORD}}

ziti edge create terminator \
  nginx-home \
  home-edge-router \
  tcp:192.168.68.60:80

ziti edge list terminators | grep -i nginx-home
```
Client Setup
Now the client identity must be created in the controller.
In the Ziti Admin Console:
- Identities --> Create
- Name, e.g.: client-macbook
- Set attribute --> in our case: #clients
Download the generated JWT and save it on the Mac. A new identity now exists in the controller, but it is not yet enrolled until the client redeems it.
The Ziti Desktop Edge client can be downloaded here.
Open the client application and import the JWT file. The enrollment then proceeds on the client.
Policies
Without policies, even a correct terminator and an enrolled client will not grant access. There are two main types of service policies: a Dial policy determines who is allowed to use a service, and a Bind policy determines which router is allowed to host the service.
In the Ziti Admin Console:
Policies --> Service Policies
- Policy type: Dial/Access
- Service: nginx-home
- Identity/Attributes: in our setup by attribute: #clients
A second policy of type Bind binds the nginx-home service to the router attribute #home.
One can use a specific identity (e.g. the one enrolled on the device) or use attributes (e.g. #clients) and assign this attribute to the client identity.
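As an alternative to the console, the two service policies can be sketched with the CLI (after the ziti edge login shown earlier). The policy names are illustrative, and the Bind side assumes the hosting router's identity carries the #home attribute:

```bash
ziti edge create service-policy nginx-home-dial Dial \
  --service-roles '@nginx-home' --identity-roles '#clients'

ziti edge create service-policy nginx-home-bind Bind \
  --service-roles '@nginx-home' --identity-roles '#home'
```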
Now you can test the setup. Open Ziti Desktop Edge, select the identity, and click 'Turn Ziti On'. The connection will now be established. After connecting, the services for this specific identity will be displayed.
Note that the service VM can no longer be pinged, but the test web page is accessible via http://nginx.home/. This illustrates the service-level (rather than network-level) access model.
References
- https://openvpn.net/community-docs/getting-started.html
- https://netfoundry.io/docs/openziti/
- https://www.proxmox.com/de/produkte/proxmox-virtual-environment/uebersicht
- https://www.proxmox.com/en/downloads/proxmox-virtual-environment/iso
- https://ubuntu.com/download/server
- https://netfoundry.io/docs/openziti/learn/quickstarts/network/hosted/
- https://netfoundry.io/docs/openziti/downloads/?os=Mac+OS