How-to: Self-Hosted Development Proxy Server
Introduction to Development Proxy Servers
For developers, a development proxy server is a vital part of the workflow, acting as a conduit between the local environment and the wider internet. It's particularly useful for tasks such as:
Real-time testing of webhooks and APIs
Showcasing in-progress work to clients without full deployment
Streamlining development by avoiding unnecessary deployment steps
While public proxy services like ngrok and Cloudflare Tunnel are commonly used, they may not suit all needs due to privacy, cost, and domain stability issues. Plus sometimes it’s fun to build and roll your own.
Redtun: A(nother) Self-Hosted Proxy Server
In the landscape of tunneling solutions, there's no shortage of commercial options and open-source projects, as highlighted in the comprehensive list here. However, when we scoured these resources, none fit our use case perfectly. Many were either too complex or didn't offer the simplicity and control we wanted over deployment.
The primary objectives for Redtun were:
Simplicity in Deployment: The server needed to be deployable as a straightforward Docker container, capable of sitting neatly behind an existing HTTPS webserver and managed via nginx.
Domain Flexibility: It was essential to forward static domains specific to developers' needs, such as my-dev-tunnel.dev.exampledomain.com, with SSL termination handled at the ingress to ensure secure connections.
Comprehensive Protocol Support: Full support for all HTTP methods and WebSocket connections was non-negotiable, as these features are critical for modern web development, including the convenience of hot module reload during development.
Ultimately we ended up building something based on an existing project (lite-http-tunnel) that met most of our needs.
In the following sections, we'll share some details about Redtun and guide you through deploying and using it for your own development workflows.
Getting Started with Redtun
Deploying Redtun involves setting up the server component on the public internet and running the client locally to establish a secure tunnel. Here's a detailed guide for developers.
Prerequisites for Deployment
Ensure you have:
A domain name you can manage, for a wildcard domain (you’ll need to set some records to pass Let’s Encrypt challenges)
A Kubernetes cluster with an ingress as an entrypoint (e.g. in Google GKE)
OR
A hosted/public server, with Docker available (and nginx running)
We will do a deep dive into an implementation on GKE that reflects the actual setup we use in practice.
Deploy the Server
Deploying via GKE/Kubernetes
(The following section makes some assumptions specific to the Google Cloud implementation of Kubernetes (GKE); some bits may be slightly different if you’re using AWS or Azure.)
In our Kubernetes cluster on Google Cloud, we already had an ingress, with nginx proxying web traffic to our apps. To extend this setup to include Redtun, we’ll need to do the following:
Deploy a Redtun server container secured with a key
Switch from a ManagedCert to a Certificate issued via cert-manager, so we can add a wildcard domain to our SSL certificate for Redtun (Google’s ManagedCert entities don’t support wildcard domains)
Configure a new service account, so cert-manager can perform ACME DNS challenges
Create the cert-manager resources needed for the new SSL certificate
Create a parallel ingress to use the new certificate we just made, to avoid downtime
Modify the existing nginx configuration in-place, to relay traffic to the new server
Standing up Redtun Server
For a Kubernetes deployment of Redtun, you can adapt the following YAML configuration to get both a `Service` and `Deployment` up and running:
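```yaml
# A sketch: the image name, port, and the env var carrying the auth key are
# placeholders — adjust them to match your build of the Redtun server.
apiVersion: v1
kind: Service
metadata:
  name: redtun
spec:
  selector:
    app: redtun
  ports:
    - port: 3000
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redtun
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redtun
  template:
    metadata:
      labels:
        app: redtun
    spec:
      containers:
        - name: redtun
          image: your-registry/redtun-server:latest  # placeholder image
          ports:
            - containerPort: 3000
          env:
            - name: API_KEY  # the key that secures the tunnel (name may differ)
              valueFrom:
                secretKeyRef:
                  name: redtun-secrets
                  key: api-key
```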
Create a namespace to run the Redtun server in, then deploy both the Service and Deployment there:
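```sh
# assuming the YAML above is saved as redtun.yaml
kubectl create namespace redtun
kubectl apply -n redtun -f redtun.yaml
```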
Installing cert-manager
We already have SSL certificates for our nginx via GCP; unfortunately, Google’s managed certificates don’t support wildcard domains, so we’ll need to leverage something else to support Redtun and its wildcard domain record alongside our existing infrastructure. Fortunately for us, this problem is straightforward and far from unique, so many solutions already exist.
We decided to go with a cloud-native solution called `cert-manager`, which installs itself as a set of Kubernetes resources inside an existing Kubernetes cluster. You configure your certificates via YAML, `kubectl apply` the YAML, and cert-manager takes care of creating and managing the certificates. Since our existing certificates were already in this form, we’ll feel right at home switching over.
Their installation instructions are quite good, and they even have their own guide for setting up SSL certificates for an ingress using Let’s Encrypt in GKE that was a very handy reference in the creation of this post. Simply run the `kubectl apply` command in step 6 to install it in your cluster. For us, that looks like:
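```sh
# substitute the latest cert-manager release tag for v1.14.4
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml
```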
Configuring Service Accounts (both Kubernetes and Google)
In order to validate a wildcard certificate, Let’s Encrypt mandates the DNS-01 ACME challenge, which is satisfied via DNS records. We’ll need to do a little legwork to allow cert-manager to modify our DNS records inside Cloud DNS. Fortunately for us, cert-manager has this process meticulously documented for each cloud provider they support.
To accomplish this, we’ll use workload identity to link the cert-manager Kubernetes service account (KSA) to a permission-limited Google service account (GSA) with access to DNS records, so we don’t have to generate another private key to store/rotate/manage.
To create the Google service account (GSA), we do the following:
Under GCP, go to IAM -> Roles and create a new role called dns01_solver.
Give it the following permissions:
dns.changes.create
dns.changes.get
dns.changes.list
dns.managedZones.list
dns.resourceRecordSets.create
dns.resourceRecordSets.delete
dns.resourceRecordSets.get
dns.resourceRecordSets.list
dns.resourceRecordSets.update
Then, run the following commands to create the GSA / prep it with the correct roles/permissions:
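```sh
# $PROJECT_ID stands in for your GCP project ID throughout

# create the GSA
gcloud iam service-accounts create dns01-solver --display-name "dns01-solver"

# grant it the custom dns01_solver role we created above
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member "serviceAccount:dns01-solver@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "projects/$PROJECT_ID/roles/dns01_solver"
```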
To bind the GSA we just created to the `cert-manager` service account (KSA) created during the installation of `cert-manager`, we do the following:
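```sh
# "cert-manager/cert-manager" is the namespace/name of the KSA from the
# default cert-manager install

# allow the cert-manager KSA to impersonate the GSA via workload identity
gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:$PROJECT_ID.svc.id.goog[cert-manager/cert-manager]" \
  dns01-solver@$PROJECT_ID.iam.gserviceaccount.com

# annotate the KSA so GKE knows which GSA it maps to
kubectl annotate serviceaccount --namespace cert-manager cert-manager \
  "iam.gke.io/gcp-service-account=dns01-solver@$PROJECT_ID.iam.gserviceaccount.com"
```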
Configuring the Certificate Issuers
Now that we have the service accounts configured and linked together, we can create the “certificate issuer” resource within Kubernetes. When you create a Certificate resource in Kubernetes, an Issuer (or ClusterIssuer) facilitates validating the details in the Certificate with Let’s Encrypt (or whatever provider), and afterwards, if successful, populates a Secret with the corresponding files needed to attach the certificate to a running web server like nginx.
We’ll create two issuers as ClusterIssuer resources: letsencrypt-staging and letsencrypt-production. We’ll start with staging, so we can validate that the certificate is created successfully (mirroring the flow found in this helpful blog post). Afterwards, we’ll create the second issuer, re-create the certificate against it, and have it populate the secret with a working, valid certificate that we can attach to our ingress.
The YAML for the staging issuer looks like:
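```yaml
# swap in your own email and GCP project ID
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt's staging endpoint: generous rate limits, untrusted certs
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@exampledomain.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - dns01:
          cloudDNS:
            # uses the workload-identity-linked GSA for DNS access
            project: your-gcp-project-id
```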
Create the cluster issuer in the `cert-manager` namespace:
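```sh
# assuming the issuer YAML above is saved as cluster-issuer-staging.yaml
kubectl apply -n cert-manager -f cluster-issuer-staging.yaml
```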
You can validate the resource is created/running by running:
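```sh
kubectl get clusterissuer letsencrypt-staging
```

The output should look something like:

```
NAME                  READY   AGE
letsencrypt-staging   True    30s
```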
If that looks good, we can create the new certificate resource next.
Configuring the Certificate
The `Certificate` we create will transparently create two other Kubernetes resources:
A `CertificateRequest`, which contains high-level details about the certificate (e.g. domains), and is handed off to the `ClusterIssuer`,
A `Secret`, which contains the successfully-generated certificate as a pair of files
Use the following YAML to create a `Certificate` - populate the various fields with your respective details. Be sure to include any existing domains on your certificate in the `dnsNames` list, so you don’t lose SSL when you switch over to this certificate!
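```yaml
# a sketch: the resource and secret names are yours to choose
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: exampledomain-cert
  namespace: default  # wherever your ingress lives
spec:
  secretName: exampledomain-tls
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  dnsNames:
    - dev.exampledomain.com        # any existing domains go here too
    - "*.dev.exampledomain.com"    # the wildcard for Redtun
```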
And then apply the change:
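```sh
kubectl apply -f certificate.yaml
```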
Check the `Certificate` out with `kubectl describe` - follow along as it creates the secret/validates the request, it may take a second to become ready:
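```sh
kubectl describe certificate exampledomain-cert
# or watch it until the Ready condition flips
kubectl get certificate exampledomain-cert -w
```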
Eventually you should see `True` under the Ready column - you can use `cmctl` to validate that the certificate has all the details you expect:
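```sh
cmctl status certificate exampledomain-cert
```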
Inspect the output. If it looks right, we can create the production issuer and re-create the certificate without worry.
The production issuer YAML looks very similar to the staging one:
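```yaml
# identical to staging, save for the name and the ACME server URL
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # Let's Encrypt's production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@exampledomain.com
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - dns01:
          cloudDNS:
            project: your-gcp-project-id
```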
Apply it in the `cert-manager` namespace:
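```sh
# assuming it's saved as cluster-issuer-production.yaml
kubectl apply -n cert-manager -f cluster-issuer-production.yaml
```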
Validate that this eventually says `True` via `kubectl get`:
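```sh
kubectl get clusterissuer letsencrypt-production
```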
Delete the old certificate:
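```sh
kubectl delete certificate exampledomain-cert
# optionally clear out the staging cert's secret too, so nothing stale lingers
kubectl delete secret exampledomain-tls
```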
Change the `certificate.yaml` file to use the new issuer:
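```diff
   issuerRef:
-    name: letsencrypt-staging
+    name: letsencrypt-production
     kind: ClusterIssuer
```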
Apply the modified YAML to create the new, valid certificate:
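```sh
kubectl apply -f certificate.yaml
```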
After a bit, it should create a valid certificate, at which point we can start working on the ingress:
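```sh
kubectl get certificate exampledomain-cert
```

```
NAME                 READY   SECRET              AGE
exampledomain-cert   True    exampledomain-tls   2m
```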
Modifying/Testing the Ingress
We couldn’t afford any downtime in our rollout, so we set out to minimize disruption by doing the following:
Stand up a new, separate ingress, with a new IP address, pointing to the existing nginx configuration
Modify the reverse proxy configurations live, have them pick up the changes safely/one by one
Validate the old ingress is still working, and the new ingress is working as expected, with use of redtun
Switch the DNS over to the new IP, delete the old ingress/IP
To create a new IP address, we run the following `gcloud` command:
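```sh
# "redtun-web-ip" is an arbitrary name; GKE's external HTTP(S) load balancer
# wants a global address
gcloud compute addresses create redtun-web-ip --global
```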
Once that finishes, get the IP address associated with it:
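```sh
gcloud compute addresses describe redtun-web-ip --global --format="value(address)"
```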
Reference this IP in the wildcard A record created for Redtun; we’ll switch the remaining domains over after validating.
While that propagates, we’ll modify the ingress to reference both the new IP address, and the new certificate - the diff of our ingress YAML looks like this:
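```diff
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
-  name: web-ingress
+  name: web-ingress-certmanager
   annotations:
-    kubernetes.io/ingress.global-static-ip-name: web-ip
-    networking.gke.io/managed-certificates: web-managed-cert
+    kubernetes.io/ingress.global-static-ip-name: redtun-web-ip
+    cert-manager.io/cluster-issuer: letsencrypt-production
 spec:
+  tls:
+    - hosts:
+        - dev.exampledomain.com
+        - "*.dev.exampledomain.com"
+      secretName: exampledomain-tls
   rules:
     # ... unchanged, still pointing at nginx ...
```

(Resource names here are illustrative; the `secretName` matches the `Certificate` from earlier.)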
We change the name and static IP address to avoid colliding with the old one, and we trade `networking.gke.io/managed-certificates` for `spec.tls.secretName` and `cert-manager.io/cluster-issuer` to apply the new cert.
Apply this YAML and create the new ingress:
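```sh
# assuming the modified ingress is saved as web-ingress-certmanager.yaml
kubectl apply -f web-ingress-certmanager.yaml
```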
Once you see both ingresses up, and the new one has an address associated with it, `kubectl get ing` should look something like this:
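```
NAME                      CLASS    HOSTS   ADDRESS        PORTS     AGE
web-ingress               <none>   *       203.0.113.10   80, 443   220d
web-ingress-certmanager   <none>   *       203.0.113.20   80, 443   5m
```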
We can configure nginx to forward traffic to Redtun now.
Configuring Nginx
The config is very similar to the one we use in our simple setup above, with a few key differences:
The redirect URL will reflect our cluster service name for redtun,
The resolver will reference `kube-dns` so we can resolve the redirect URL properly,
There is no HTTPS configuration, since the Ingress in front handles that.
The full nginx server directive for Kubernetes looks like this; append it to your config accordingly:
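```nginx
# A sketch: assumes the redtun Service from earlier (namespace "redtun", port 3000)
server {
    listen 80;
    server_name *.dev.exampledomain.com;

    # resolve the cluster service name via kube-dns at request time
    resolver kube-dns.kube-system.svc.cluster.local valid=10s;
    set $redtun_upstream http://redtun.redtun.svc.cluster.local:3000;

    location / {
        proxy_pass $redtun_upstream;
        proxy_http_version 1.1;

        # WebSocket support, for the tunnel itself and hot module reload
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```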
Apply that to your nginx instances and restart them to pick up the changes; do so in a tiered fashion to avoid downtime. In Kubernetes, that looks like:
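```sh
# re-apply the ConfigMap holding the nginx config (names are placeholders)
kubectl apply -f nginx-configmap.yaml
# rolling restart: pods are replaced one at a time, so no downtime
kubectl rollout restart deployment/nginx
kubectl rollout status deployment/nginx
```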
Once all your proxies are restarted, you can validate the entire end-to-end flow via `curl`:
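```sh
curl -i https://my-dev-tunnel.dev.exampledomain.com/
```

With no client connected yet, the response should look something like:

```
HTTP/2 404
...

Not Found
```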
Lots of good signs here:
The wildcard record is working, and pointing to the correct IP
The new ingress is working, complete with SSL termination (or curl would have failed), and is forwarding properly to nginx
Redtun is responding `Not Found`, which means nginx is working/redirecting traffic to it, which means it’s ready to use!
We can modify the remaining DNS records to point to the new IP now that it’s validated to work. Once a bit of time passes, and the domain names seem to work on the new IP, the old ingress / IP can be torn down.
We can configure the client now.
Run the Client
Configure the Redtun client on your development machine with the new domain you want to use, and the API key:
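```sh
# illustrative commands; check the Redtun README for the exact CLI
redtun config server https://dev.exampledomain.com
redtun config key <your-api-key>
```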
Start the client to establish a tunnel, replacing `www` with whatever endpoint you want to be reachable at:
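```sh
# illustrative flags; exposes your local port 3000 at www.dev.exampledomain.com
redtun start 3000 --subdomain www
```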
By following these instructions, you'll have a secure, private development proxy server that allows your local environment to be accessed over the internet.
Under the Hood: How Redtun is Built
Let's dive into the code to understand the mechanics of Redtun's tunneling capabilities, focusing on the `TunnelResponse` and `TunnelRequest` classes.
The TunnelResponse Class
The `TunnelResponse` class in `tunnel-response.ts` is designed to handle the HTTP response by extending Node.js's `Duplex` stream, which allows it to both read and write data.
Here's a snippet showing how the class listens for a response and then emits the response metadata:
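```typescript
// A sketch: event names mirror lite-http-tunnel (which Redtun is based on),
// so the real code may differ slightly.
import { Duplex } from 'stream';
import type { Socket } from 'socket.io-client';

class TunnelResponse extends Duplex {
  constructor(private socket: Socket, private responseId: string) {
    super();
    // the server relays the response status line and headers over the socket
    this.socket.on(`response-${this.responseId}`, ({ statusCode, statusMessage, headers }) => {
      this.emit('response', statusCode, statusMessage, headers);
    });
  }
  // _read/_write omitted here; the data path is shown below
}
```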
When data is received, it pushes the data chunks to the stream:
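```typescript
// also registered in the constructor: each body chunk arriving over the
// socket is pushed into the readable side of the Duplex
this.socket.on(`response-pipe-${this.responseId}`, (chunk: Buffer) => {
  this.push(chunk);
});
```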
And when the response is finished, it ends the stream:
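```typescript
// pushing null signals end-of-stream to anything reading the response
this.socket.on(`response-pipe-end-${this.responseId}`, () => {
  this.push(null);
});
```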
The TunnelRequest Classes
In `tunnel-request.ts`, the `WritableTunnelRequest` class is responsible for sending the request from the client to the Redtun server:
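```typescript
// A sketch of the writable side; again, names mirror lite-http-tunnel's
// conventions and may differ slightly in Redtun.
import { Writable } from 'stream';
import type { Socket } from 'socket.io-client';

class WritableTunnelRequest extends Writable {
  constructor(private socket: Socket, private requestId: string) {
    super();
  }

  // every chunk written to this stream is relayed over the WebSocket
  _write(chunk: Buffer, encoding: BufferEncoding, callback: (error?: Error | null) => void) {
    this.socket.emit(`request-pipe-${this.requestId}`, chunk);
    callback();
  }

  _final(callback: (error?: Error | null) => void) {
    // tell the other side there's no more request data coming
    this.socket.emit(`request-pipe-end-${this.requestId}`);
    callback();
  }
}
```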
The `ReadableTunnelRequest` class reads the incoming request data on the server side:
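```typescript
// Sketch of the mirror-image readable side on the server
import { Readable } from 'stream';
import type { Socket } from 'socket.io';

class ReadableTunnelRequest extends Readable {
  constructor(private socket: Socket, private requestId: string) {
    super();
    // push incoming request chunks into the stream as they arrive
    this.socket.on(`request-pipe-${this.requestId}`, (chunk: Buffer) => {
      this.push(chunk);
    });
  }

  // data is pushed as it arrives over the socket; nothing to do on demand
  _read() {}
}
```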
And it listens for the end of the request data stream:
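```typescript
// also in the constructor: end the readable stream when the sender finishes
this.socket.on(`request-pipe-end-${this.requestId}`, () => {
  this.push(null);
});
```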
These classes and their methods orchestrate the flow of HTTP requests and responses through the WebSocket tunnel. By emitting and listening for specific events, Redtun maintains a continuous stream of data between the client and server, encapsulating the complexity of real-time communication within a simple and elegant interface.
Feel free to check out the full source code for Redtun.
Conclusion
We've covered the essentials of setting up and using Redtun, a self-hosted development proxy server that offers security, cost savings, and control over your development environment. It's a practical solution for developers looking to streamline their workflow and maintain privacy.
For further details, to contribute, or to get started with your own instance, visit the Redtun repository on GitHub. Your feedback and contributions are welcome as we continue to improve and evolve this tool. Happy coding!