.NET 7 Blazor on Docker - Host Multiple Containers on Single Server

I am new to hosting, especially with Blazor & Docker. I have a Debian 11 server which currently hosts a Blazor Server project running in a Docker container. I do not have any web hosting service on my server itself; my project is currently hosted using my container's built-in Kestrel web host for ASP.NET. However, I am now trying to host a second website on my server.
From what I have read, DNS records can only point to an IP address, not a port (browsers then default to 80/443), meaning I can't simply use a different port to host my other site (and have it indexed by search engines with SSL configured). I am trying to figure out how to host both Blazor containers on my server, with my DNS pointing to the same IP address, but I am not sure how to assign each domain to the separate containers. For example, my DNS looks like this:
A record / website1.com -> 10.0.0.1
A record / website2.com -> 10.0.0.1
How should I handle this configuration? Do I need to run a web server directly on my server (such as Apache), or is there a way to map the DNS to my internal Docker IP addresses? Or is there another way to handle this through DNS altogether? Thank you.
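One common way to do this (a sketch, not the only option): run a reverse proxy on the server that listens on ports 80/443 and routes by Host header to each container's published port. The ports 5001 and 5002 below are assumptions for how the two Blazor containers might be published on localhost:

    server {
        listen 80;
        server_name website1.com;
        location / {
            proxy_pass http://127.0.0.1:5001;  # first Blazor container, e.g. run with -p 5001:80
            # Blazor Server uses SignalR over WebSockets, so forward the upgrade headers
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name website2.com;
        location / {
            proxy_pass http://127.0.0.1:5002;  # second Blazor container
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }

Both A records keep pointing at the same server IP; the proxy (nginx here, but Apache, Caddy, or Traefik work the same way) does the per-domain routing, and SSL can then be added per server block (e.g. with certbot).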

Related

Is it possible to reverse proxy to native running applications from a containerized Nginx in K3S?

On my server I run some applications directly on the host. In parallel, I have a single-node K3s cluster that also contains a few applications. To manage the traffic routing and HTTPS certificates for the individual services in one central place, I want to use Nginx. A Traefik ingress controller runs in the cluster, which I use for routing in that context.
To be able to reverse proxy to each application, no matter whether it runs directly on the host or in a container in K3s, Nginx must be able to reach every application locally, wherever it runs (without the traffic leaving the server). E.g. proxying myservice.mydomain.com to localhost:8080 from Nginx should end up at the web server of a natively running application, and myservice2.mydomain.com at the web server of a container in K3s.
Now, is this possible if Nginx runs in the K3s cluster, or do I have to install it directly on the host machine?
Yes, if you want to use Nginx that way you can do it, keeping Nginx in front of both the host and K3s.
Expose your service as a NodePort from K3s, while the local service you run on the host machine listens on its own port.
Nginx will then forward the traffic like this:
Nginx -> MachineIp:8080 -> application on K3s (NodePort)
      -> MachineIp:3000 -> application running on the host
Example: https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
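A minimal sketch of that host-level Nginx config, using the ports from the diagram above (hostnames follow the question; all values are assumptions, and note that real NodePorts default to the 30000-32767 range):

    server {
        listen 80;
        server_name myservice2.mydomain.com;
        location / {
            # K3s service exposed as a NodePort on the node itself
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name myservice.mydomain.com;
        location / {
            # application running natively on the host, listening on port 3000
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
        }
    }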

How to send requests to a web server running inside a docker container inside an AWS EC2 machine from the outside world?

I have a Python Flask web server running inside a Docker container on an AWS EC2 Ubuntu machine. The container runs with the default network settings (docker0). From within the host EC2, I can send requests (GET, POST) to this web server using the Docker IP (172.x.x.x) and the forwarded ports (3000:3000) of the host.
URL: http://172.x.x.x:3000/<api address>
How can I send requests (GET, POST) to this web server from the outside world? For example, from another web server running in another EC2 machine, or even from the web using my browser?
Do I need to get a public IP address for my Docker host?
Is there another way to interact with such a web server from within another web server running in another EC2 instance?
If you have a solution, please explain with as many details as you can for me to understand it.
The only way I can think of is to write a web server on the main EC2 that listens to the requests and forwards them to the appropriate Docker container web servers?! But that would be a lot of redundant code, and I would rather just send requests directly to the web server running in the container!
The IP address of the Docker container is not public. Your EC2 instance usually has a public IP address, though. You need an agent listening on a port on your EC2 instance that passes traffic to your Docker/Flask server. Then you would be able to call it from outside using ec2-instance-ip:agent-port.
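In practice, Docker's own port publishing plus an open security-group port can play the role of that agent; a minimal sketch, assuming the image is named flask-app (a hypothetical name) and Flask listens on port 3000 inside the container:

    # Publish the container port on all host interfaces, not only docker0
    docker run -d -p 3000:3000 flask-app

    # Allow inbound traffic on that port in the instance's security group
    # (the group ID is a placeholder)
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 3000 --cidr 0.0.0.0/0

    # The server is then reachable at the instance's public address
    curl http://EC2_PUBLIC_IP:3000/<api address>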
It's still not a long-term solution, as EC2 IPs change when the instance is stopped. You'd better use a load balancer or an Elastic IP if you want the IP/port to be reliable.
That's right, it makes a lot of redundant code and an extra failure point. That's why it's better to use Amazon's managed Docker service (https://aws.amazon.com/ecs/). This way you just launch an EC2 instance which acts as a Docker host and has a public IP address. It still allows you to SSH into your EC2 instance and change stuff.

Multiple Web Applications - Same VM vs Multiple VMs

Firstly, I am more of a dev than an admin, and I have always asked questions here. But please let me know if there is a better place to ask this question.
Here's my situation. I have an application that is built to run on Linux. It serves both HTTPS (on port 443 using nginx) and SSH (on port 22). But due to organizational restrictions, I am forced to run it on a Windows host with a Linux guest using VirtualBox. Also, there is another web application on the host box; both these web applications should be served based on the URL (example: app1.com, app2.com). URLs need to be preserved. All SSH traffic can default to the guest.
One idea I have to make this work is below, and I would like to know if I am making this more complicated than it should be. Any help is appreciated.
Steps:
1. Use an unused port for HTTPS (say 8443) on my host and redirect all traffic to the guest, using NAT-based port forwarding (8443 -> 443, 22 -> 22) in VirtualBox.
2. Set up virtual hosts on Windows (/etc/hosts) with the two IP and URL entries (app1.com and app2.com).
3. Set up a separate nginx on the host as a reverse proxy that redirects app1 traffic to the web app on the host and app2 traffic to 8443 (a sketch follows below).
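A rough sketch of what the host-side nginx in step 3 could look like; the host app's port 8080 is a placeholder, 8443 is the NAT-forwarded guest port from step 1, and the certificate paths are placeholders too:

    server {
        listen 443 ssl;
        server_name app1.com;
        ssl_certificate     /etc/ssl/certs/app1.com.pem;     # placeholder paths
        ssl_certificate_key /etc/ssl/private/app1.com.key;
        location / {
            proxy_pass http://127.0.0.1:8080;  # web app running directly on the Windows host (port assumed)
        }
    }

    server {
        listen 443 ssl;
        server_name app2.com;
        ssl_certificate     /etc/ssl/certs/app2.com.pem;
        ssl_certificate_key /etc/ssl/private/app2.com.key;
        location / {
            # terminating TLS here and proxying plain HTTP to the guest would avoid a second cert
            proxy_pass https://127.0.0.1:8443;  # NAT-forwarded by VirtualBox to 443 on the Linux guest
        }
    }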
Questions:
Can I avoid the extra nginx reverse proxy on the host while preserving the URL?
Also, what about SSL? Can I just set up HTTPS on the host, route it to port 80 on the guest, and avoid having two certs? Note: I am using NAT in VirtualBox, so there should not be any security issues, I guess.
This is an administration question, and the user posted the identical question to serverfault, where it was answered: https://serverfault.com/questions/835220/multiple-web-applications-same-vm-vs-multiple-vms

Managing and Utilizing Multiple Docker Containers (Microservices) in a Single Server

I have a GCE (Google Compute Engine) server running the Nginx/Apache web server listening on port 80, which serves the website. At the same time, I have multiple microservices running on the same server as Docker containers. Each container serves a website at its own local IP address, and I have also bound each one to localhost:PORT. I don't want to bind the ports to the public IP address, since that would publicly expose the microservices to the outside world.
Now the problem is, I have to embed the website pages served by the containers into the website running at port 80 of the web server. Since the embed code will be executed by the browser, I cannot use either the local IP (172.17.0.x) or localhost:PORT in the Python/HTML code.
Now how do I embed the pages of microservices running locally inside the containers to the website serving the users?
For Example:
My Server's Public IP: 104.145.178.114
The website is served from: 104.145.178.114:80
Inside the same server we have multiple microservices running on local IPs like 172.17.0.1, 172.17.0.2, and so on. Each container has a server running inside it which serves pages at 172.17.0.1:8080/test.html, and similarly for the other containers. Now I need to embed this page test.html into another web page which is served by the Nginx/Apache web server at 104.145.178.114, without exposing the internal/local IP and port to the public.
I would like to hear suggestions and alternative solutions for this problem.
I'm assuming Nginx has access to all the internal Docker IPs (microservices). Unless I'm missing something, proxy_pass (http://nginx.org/en/docs/http/ngx_http_proxy_module.html) should work for you. You could choose a certain (externally available) URL pattern to proxy to your microservice container without exposing the microservice port to the world.
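A small sketch of that proxy_pass idea, using the addresses from the example above; the /micro1/ URL prefix is an assumption:

    server {
        listen 80;
        server_name 104.145.178.114;

        # ... existing website configuration ...

        location /micro1/ {
            # the trailing slash makes nginx strip the /micro1/ prefix before proxying
            proxy_pass http://172.17.0.1:8080/;
            proxy_set_header Host $host;
        }
    }

The container's page is then embeddable as http://104.145.178.114/micro1/test.html, while 172.17.0.1:8080 itself stays unreachable from outside.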

How to access my sites on IIS 7.5 from outside my network on Windows Server 2008

I'm new to Windows Server 2008 and IIS, so please be patient.
I want to access my sites from outside my network.
I can browse my sites from localhost; that works.
I've added a binding to my site, Type: http, Host Name: www.dev.com, port: 80.
I have a static IP from my ISP, and my router is forwarding http requests to my server.
If I remove the Host Name and access the site directly using my network IP address, I get my site, but I want to give the site a host name because I'm going to add other websites.
I've added www.dev.com to the DNS with my IP address.
What should I do next?
Thanks
I found the answer to what I was looking for; I should have done this:
Create a virtual application www.dev.com under the Default Web Site in IIS. Now I can access the website using STATIC_IP_ADDRESS/www.dev.com.
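For what it's worth, the host-header binding the question originally set up can also be created from the command line with appcmd; the site name and physical path below are placeholders:

    %windir%\system32\inetsrv\appcmd add site /name:"dev" ^
        /bindings:"http/*:80:www.dev.com" /physicalPath:"C:\inetpub\dev"

With a binding like that (and www.dev.com resolving in DNS), the site answers at http://www.dev.com directly rather than under a path on the default website.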
