Multiple Web Applications - Same VM vs Multiple VMs - nginx

Firstly, I am more of a dev than an admin, and I have always asked questions here, but please let me know if there is a better place to ask this question.
Here's my situation. I have an application that is built to run on Linux. It serves both HTTPS (on port 443 using nginx) and SSH (on port 22). Due to organizational restrictions, I am forced to run it on a Windows host with a Linux guest using VirtualBox. There is also another web application on the host box; both web applications should be served based on the URL (for example: app1.com, app2.com), and the URLs need to be preserved. All SSH traffic can default to the guest.
One idea I have to make this work is below, and I would like to know
if I am making this more complicated than it should be. Any help is appreciated.
Steps:
Use an unused port for HTTPS (say 8443) on my host and forward all of that
traffic to the guest, using NAT-based port forwarding (8443 -> 443, 22 -> 22)
in VirtualBox.
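For reference, the forwarding rules described in that step would look roughly like this with VBoxManage; the VM name "app2-guest" is a placeholder, and modifyvm only applies while the VM is powered off (controlvm natpf1 can be used on a running VM instead):

    # NAT port-forwarding rules: name,protocol,hostip,hostport,guestip,guestport
    VBoxManage modifyvm "app2-guest" --natpf1 "https,tcp,,8443,,443"
    VBoxManage modifyvm "app2-guest" --natpf1 "ssh,tcp,,22,,22"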
The only thing left would be to set up another nginx on the host as a reverse
proxy. Add the two URL entries (app1.com and app2.com) with their IPs to the
Windows hosts file (the Windows equivalent of /etc/hosts), and have the host
nginx route app1 traffic to the web app on the host and app2 traffic to port 8443.
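A rough sketch of that host-side nginx config, assuming the existing Windows web app is moved to listen on 127.0.0.1:8080 and the guest's 443 is reachable through the forwarded port 8443 (the ports and certificate paths are placeholders, not from the question):

    server {
        listen 443 ssl;
        server_name app1.com;
        ssl_certificate     C:/nginx/certs/app1.com.crt;   # placeholder paths
        ssl_certificate_key C:/nginx/certs/app1.com.key;

        location / {
            proxy_pass http://127.0.0.1:8080;    # web app running on the Windows host
            proxy_set_header Host $host;
        }
    }

    server {
        listen 443 ssl;
        server_name app2.com;
        ssl_certificate     C:/nginx/certs/app2.com.crt;
        ssl_certificate_key C:/nginx/certs/app2.com.key;

        location / {
            proxy_pass https://127.0.0.1:8443;   # NAT-forwarded to the guest's 443
            proxy_set_header Host $host;
        }
    }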
Questions:
Can I avoid the extra nginx reverse proxy on the host while preserving the URL?
Also, what about SSL? Can I just set up HTTPS on the host and route it to port 80 on the guest, avoiding a second cert? Note: I am using NAT in VirtualBox, so I'm guessing there shouldn't be any security issues.
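If the guest were changed to serve plain HTTP on port 80 and a NAT rule forwarded, say, host port 8081 to guest port 80 (8081 is an arbitrary placeholder), the host nginx could terminate TLS with a single cert along these lines:

    server {
        listen 443 ssl;
        server_name app2.com;
        ssl_certificate     C:/nginx/certs/app2.com.crt;   # the only cert needed for app2
        ssl_certificate_key C:/nginx/certs/app2.com.key;

        location / {
            proxy_pass http://127.0.0.1:8081;            # forwarded to the guest's port 80
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;    # tell the app the client used HTTPS
        }
    }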

This is an administration question, and the user posted the identical question to serverfault, where it was answered: https://serverfault.com/questions/835220/multiple-web-applications-same-vm-vs-multiple-vms

Related

.NET 7 Blazor on Docker - Host Multiple Containers on Single Server

I am new to hosting, especially with Blazor & Docker. I have a Debian 11 server which currently hosts a Blazor Server project running in a Docker container. I do not have any web hosting service on my server itself; my project is currently hosted using my container's built-in Kestrel web host for ASP.NET. However, I am now trying to host a second website on my server.
From what I have read, DNS configuration can only point records to port 80, meaning I can't simply use a different port to host my other site (and have it indexed by search engines with SSL configured). I am trying to figure out how to host both Blazor containers on my server, with my DNS pointing to the same IP address, but I am not sure how to assign each domain to the separate containers. For example, my DNS looks like this:
A record / website1.com -> 10.0.0.1
A record / website2.com -> 10.0.0.1
How should I handle this configuration? Do I need to use a web hosting service directly on my server (such as Apache), or is there a way to map the DNS to my internal Docker IP addresses? Or is there another way to handle this through my DNS altogether? Thank you.
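One common pattern (a sketch, not the only option) is to run a reverse proxy such as nginx on the server, publish each container's Kestrel port on a different local port, and route by hostname; the ports 5001 and 5002 below are assumptions for illustration:

    server {
        listen 80;
        server_name website1.com;

        location / {
            proxy_pass http://127.0.0.1:5001;        # first Blazor container's published port
            proxy_set_header Host $host;
            proxy_http_version 1.1;                  # Blazor Server uses SignalR over WebSockets
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

    server {
        listen 80;
        server_name website2.com;

        location / {
            proxy_pass http://127.0.0.1:5002;        # second Blazor container's published port
            proxy_set_header Host $host;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

Both domains then resolve to the same server IP, and the proxy picks the container by the Host header; TLS can be terminated in the same config.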

What are the networking requirements for reverse proxy tunneling on a self-hosted vscode-server instance?

I recently moved over from coder/code-server to Microsoft's implementation of a code server. Included in this is the ability to set up tunnels to the remote host; however, I can't seem to get it to work. I'm using nginx as a reverse proxy to forward to the code-server service, which works just fine for the most part. Since the documentation doesn't include a list of networking requirements for hosting a code server, I'd like to know if there are any ports that should be open and forwarded, or any additional configuration.
I've configured nginx to forward all requests to the code-server instance, but this isn't enough to get tunnels working. I can connect to it through the domain I'm listening for.
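For what it's worth, a typical nginx block in front of a code server looks roughly like this (the port 8000 and domain are placeholders); the Upgrade/Connection headers matter because the editor runs over WebSockets, though the tunnel feature itself may rely on outbound connections to Microsoft's tunneling service rather than on any inbound port:

    server {
        listen 443 ssl;
        server_name code.example.com;                               # placeholder domain
        ssl_certificate     /etc/ssl/certs/code.example.com.crt;    # placeholder paths
        ssl_certificate_key /etc/ssl/private/code.example.com.key;

        location / {
            proxy_pass http://127.0.0.1:8000;        # assumed code-server port
            proxy_set_header Host $host;
            proxy_http_version 1.1;                  # WebSocket support
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }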

SSL certificate not working on Nginx Proxy Manager (Cloudflare DNS)

I set up a Cloudflare account and redirected my domain to its nameservers. During setup I left all the settings at default.
I added two DNS records:
An "A" record targetting my IP address and a "CNAME" record creating an alias for it.
In my Nginx Proxy Manager (running in Docker on a bridged network connected with a database), there is only one proxy host directing the "CNAME" alias to a LAN IP (https://192.168.0.50:9443; Portainer operates on HTTPS).
Everything works flawlessly until I add an SSL certificate. The only option I tick is "Force SSL". When I try to access the site at that point, it loads for a bit and then times out with a "522" error. The only way I can get the site to work again is to clear the Nginx volumes and restart the stack.
Turning the Cloudflare proxy off doesn't seem to make any difference, and neither does trying to access other Docker containers that operate on HTTP.
I managed to solve the problem. I didn't forward port "443" on my router to the target device... I hope that this helps anyone else who made this mistake.

Best software for dynamic DNS proxying to Docker containers

Currently I am using haproxy with manually updated backends that point to separate Docker nginx containers for different apps.
What is the best software for proxying requests to different local nginx containers based on hostname?
I would have a simple map file (or even /etc/hosts) which my script would update when Docker containers change, for example:
domain1 1.1.1.1
domain2 1.1.1.2
domain3 1.1.1.3
So the ideal setup would be haproxy -> some software proxy or DNS -> docker nginx,
where that software uses the map file on the fly (without reloading) and points requests to the local IP address.
Maybe I would put a Varnish cache in front, so it would need to be compatible with that too (and why wouldn't it be), so the flow would be:
request -> haproxy (for load balancing in multiple servers)
-> varnish on the public server IP (for in-memory caching based on host and route, so if there is a cached response it is returned immediately)
-> SOME PROXY OR DNS BASED ON A SIMPLE MAP FILE, which further proxies to the local IP of one of multiple docker nginx containers
-> docker nginx inside custom network
-> some app in container
What is the best practice for this flow, should I put Varnish somewhere else, and what is the software I am looking for?
I am currently using one extra nginx instance, mapping $host to a custom IP address in a custom maps.conf file and gracefully reloading nginx on change, but I have a feeling there is a better solution for this.
Also, I forgot to mention that I don't need only HTTP proxying based on the map file but TCP (ssh, smtp, ftp...) too; in those cases I will not have haproxy and varnish in front, and this app would be public-facing on those ports.
for example:
port:22
domain1 1.1.1.1
domain2 1.1.1.2
domain3 1.1.1.3
port:25
domain1 1.1.1.4
domain2 1.1.1.5
domain3 1.1.1.6
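For the HTTP part, here is a minimal sketch of the $host-to-IP map approach described at the end of the question (hostnames and IPs are the placeholder values from the map above); note that changing the map still requires an nginx reload, which is exactly the limitation being asked about:

    map $host $backend {
        default  1.1.1.1;
        domain1  1.1.1.1;
        domain2  1.1.1.2;
        domain3  1.1.1.3;
        # or: include /etc/nginx/maps.conf;  # key/value pairs maintained by the update script
    }

    server {
        listen 80;
        location / {
            proxy_pass http://$backend;       # the variable is evaluated per request using the map above
            proxy_set_header Host $host;
        }
    }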
I think something like Muguet might solve your issue.
From their GitHub repo:
When using Docker, it's sometimes a pain to access your containers
using specific IPs/ports.
Muguet provides you with a DNS Server that resolves auto-generated
hostnames to your containers IPs, plus a Reverse Proxy to access all
your web apps on port 80.
I think what you want is dnsmasq. This is basically a lightweight DNS service that you run on the host running your Docker containers, and it allows you to use hostnames instead of IP addresses. It's a pretty common way to solve this issue.
A nice guide to setting up dnsmasq can be found at:
http://docs.blowb.org/setup-host/dnsmasq.html
and searching dnsmasq and docker will point you to many more resources.
One thing to remember: on your haproxy host, make sure you modify /etc/resolv.conf to include your dnsmasq server.
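As a rough illustration (hostnames and IPs are the placeholder values from the question's map file, and the dnsmasq host IP is made up), the dnsmasq side could look like:

    # /etc/dnsmasq.conf on the Docker host:
    # each line resolves a hostname (and its subdomains) to a backend IP
    address=/domain1/1.1.1.1
    address=/domain2/1.1.1.2
    address=/domain3/1.1.1.3

    # /etc/resolv.conf on the haproxy host: query dnsmasq first
    nameserver 192.168.1.10   # placeholder: IP of the host running dnsmasq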

Is it possible to create a subdomain served by Tomcat based on a domain served by Apache on AWS EC2?

I'm new to web development. I'm planning to move my WordPress site to AWS; say it's "example.com". I'm also planning to create a subdomain "xxx.example.com" using Spring Boot. I'm wondering, is that possible?
Yes, it's possible, but remember that only one process can listen on a given port (80 for HTTP, 443 for HTTPS) on a machine.
Two options:
Have the subdomain on a different machine with a different IP address, so you can have WordPress on one machine and your Spring application on another.
Host both on the same machine and have one process (Apache, or a load balancer) listen to the traffic for both and route it appropriately. This is achieved with the ProxyPass directive in Apache (see the sketch after this list). Having a web server in front of an application server is often recommended anyway, as it can be better for security and performance reasons.
There is a third option, which is to use a non-standard port (e.g. 8443), but that just makes your URL look messy (https://xxx.subdomain.com:8443). That might be fine if you just want to test for your own sake, but it's not great for production applications.
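For the second option, a minimal sketch of the Apache side, assuming mod_proxy and mod_proxy_http are enabled and the Spring Boot app listens on localhost:8080 (both assumptions, not given in the question):

    <VirtualHost *:80>
        ServerName xxx.example.com

        # Forward all requests for the subdomain to the Spring Boot app
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>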
