I'm planning to create a new service for handling user image uploads. Although I know the best practice is to use external cloud storage dedicated to hosting static files (e.g. Amazon S3), I'm currently tight on budget and want to store the files on the same shared host that also serves my website. This host already has an Nginx reverse proxy that directs requests to the appropriate service.
In a traditional non-containerized application, the backend service and the Nginx reverse proxy live on the same host and can access the same filesystem: the backend service stores the uploads, and Nginx serves the static files directly. But how should this be approached in a containerized way? Currently, I can think of two approaches:
1. Create a container containing both the backend service and Nginx. Nginx will reverse-proxy between the service's API and the static image files.
2. Create a container containing only the backend service. The Nginx reverse-proxy container will share a volume with the backend service so it can access the files.
This is how traffic would be directed with each approach:

Approach 1 (single container with Nginx and backend):
Upload: Client -> Nginx reverse proxy -> Nginx reverse proxy #2 -> backend -> store static files
Download: Client -> Nginx reverse proxy -> Nginx reverse proxy #2 -> get static files

Approach 2 (shared volume):
Upload: Client -> Nginx reverse proxy -> backend -> store static files in shared volume
Download: Client -> Nginx reverse proxy -> get static files
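Here's a minimal docker-compose sketch of what I have in mind for approach 2 (image names, paths, and the volume name are all placeholders):

services:
  backend:
    image: my-backend:latest            # placeholder backend image
    volumes:
      - uploads:/app/uploads            # backend writes uploaded files here
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - uploads:/var/www/uploads:ro     # nginx serves the same files, read-only
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
volumes:
  uploads:                              # named volume shared by both containers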
I'm quite new to containerized architecture. Is there a better or more efficient architecture for the file-upload case?
I'm using EKS, AWS RDS, and bitnami/nginx (not bitnami/nginx-ingress-controller) with an AWS NLB. My RDS instance is private-access only; the EKS cluster can access it. An EC2 instance in another AWS account wants to access the private RDS through the Nginx server (via the NLB).

EC2 (VPC 2) -> Nginx (EKS, VPC 1) -> RDS (VPC 1)

How should I configure the bitnami/nginx chart's values.yaml? I have other solutions, but this question is specifically about the bitnami/nginx approach.
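To make the intent concrete, the kind of values.yaml override I'm imagining is a raw TCP (stream) proxy to the RDS endpoint. This is only a sketch: the streamServerBlock key and the endpoint below are assumptions that would need to be checked against the chart's actual values.yaml:

streamServerBlock: |-
  server {
    listen 5432;                                                # port the NLB forwards to
    proxy_pass my-db.example.us-east-1.rds.amazonaws.com:5432;  # placeholder RDS endpoint
  }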
I have 3 containers on ECS: web, api, and nginx. Basically, nginx is proxying traffic to the web and api containers:
upstream web {
    server web-container:3000;
}

upstream api {
    server api-container:3001;
}
But every time I redeploy web or api, they change their IPs, so I need to redeploy nginx afterwards to make it pick up the new IPs.

Is there a way to avoid this, so I could just update, say, the api service and have the nginx service automatically proxy to the correct IP address?
I assume these containers belong to 3 different task definitions and ultimately 3 different tasks (or better, 3 different services).

If that is the setup, then you want to use service discovery. This only works with ECS services: the idea is that you create 3 distinct services, each with 1+ tasks in it. You give each service a name (e.g. nginx, web, api), and every container in them can then resolve the other containers by their FQDN (e.g. api.local). When a container in the nginx service tries to connect to api.local, service discovery resolves that name to the IP of one of the tasks in the ECS service api.
If you want to see an example of how this is set up, you can look at this demo app, and particularly at this CloudFormation template.
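As a rough sketch, the upstream blocks from your question would then reference the stable service-discovery names instead of task IPs (the .local namespace is an assumption; use whatever namespace you created):

# names resolved via ECS service discovery (Route 53)
upstream web {
    server web.local:3000;
}

upstream api {
    server api.local:3001;
}

# note: nginx resolves these names when the config is loaded; for lookups that
# refresh at runtime you'd need a resolver directive and variable-based proxy_pass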
I'm planning to build a website to host static files. Users will upload their files, and I'll deploy a bunch of Deployments with nginx images for them to a Kubernetes node. My main goal is that, at some point, users will deploy their apps to a subdomain like my-blog-app.mysite.com. After some time, users can use custom domains.

I understand that when I deploy an nginx image in a pod, I have to create a Service to expose port 80 (or 443) to the internet via a load balancer.

I also read about Ingress; it looks like what I need, but I don't think I understand the concept yet.

My question is: if, for example, I have 500 nginx pods running (each a different website), do I need a Service for every pod on that node (in this case, 500 Services)?
You are looking for https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting.
With this type of Ingress, you route the traffic to the different nginx instances based on the Host header, which perfectly matches your use case.

In any case, yes: with your current architecture you need a Service for each pod. Have you considered a different approach, like having a general listener (nginx instances) that serves the correct content based on authorization or something similar?
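A minimal sketch of such a name-based virtual-hosting Ingress, with hosts and Service names as placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-sites
spec:
  rules:
  - host: my-blog-app.mysite.com        # routed by Host header
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-blog-app           # Service in front of that user's nginx pods
            port:
              number: 80
  - host: another-app.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: another-app
            port:
              number: 80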
I want to set up a web app using three components that I already have:
Domain name registered on domains.google.com
Frontend web app hosted on Firebase Hosting and served from example.com
Backend on Kubernetes cluster behind Load Balancer with external static IP 1.2.3.4
I want to serve the backend from example.com/api or api.example.com
My best guess is to use Cloud DNS to connect the IP address and the subdomain (or URL):

1.2.3.4 -> api.example.com
1.2.3.4 -> example.com/api
The problem is that Cloud DNS uses custom name servers, like this:
ns-cloud-d1.googledomains.com
So if I set Google's default name servers I can reach Firebase Hosting only, and if I use the custom name servers I can reach only the Kubernetes backend.
What is a proper way to be able to reach both api.example.com and example.com?
edit:
As a temporary workaround, I'm combining two default name servers and two custom name servers from Cloud DNS, like this:
ns-cloud-d1.googledomains.com (custom)
ns-cloud-d2.googledomains.com (custom)
ns-cloud-b1.googledomains.com (default)
ns-cloud-b2.googledomains.com (default)
But if someone knows the proper way to do it - please post the answer.
Approach 1:
example.com --> Firebase Hosting (A record)
api.example.com --> Kubernetes backend
Pro: Super-simple
Con: the browser needs a CORS preflight request before it can make API calls.
Approach 2:
example.com --> Firebase Hosting via k8s ExternalName service
example.com/api --> Kubernetes backend
Unfortunately, from my own efforts to make this work with service type: ExternalName, all I could manage was to get infinitely redirected, something I am still unable to debug.
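For reference, the kind of ExternalName Service this approach hinges on looks roughly like this (the Firebase hostname is a placeholder, and the redirect problem above may still bite):

apiVersion: v1
kind: Service
metadata:
  name: firebase-frontend
spec:
  type: ExternalName
  externalName: myproject.web.app   # placeholder Firebase Hosting hostname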
Approach 3:
example.com --> Google Cloud Storage via NGINX proxy to redirect paths to index.html
example.com/api --> Kubernetes backend
You will need to deploy the static files to Cloud Storage, with an NGINX proxy in front if you want SPA-like redirection to index.html for all routes. This approach does not use Firebase Hosting at all.
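A hedged sketch of the NGINX piece, with the bucket name as a placeholder:

server {
    listen 80;

    location / {
        # serve static assets straight from the bucket
        proxy_pass https://storage.googleapis.com/my-bucket/;

        # SPA-style fallback: any path the bucket can't serve gets index.html
        proxy_intercept_errors on;
        error_page 404 = /index.html;
    }
}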
The complication lies in the /api redirect, which depends on which Ingress you are using.
Hope that helps.
I would suggest creating two host paths. The first would route "example.com" to a NodePort-type Service. You can then use an ExternalName Service for "api.example.com".
Let's say we have 2 separate applications, a Web Api application and a MVC application both written in .NET 4.5. If you were to host the MVC application in IIS under the host header "https://www.mymvcapp.com/" would it be possible to host the Web Api application separately in IIS under the host header "https://www.mymvcapp.com/api/"?
The processes running the 2 applications in IIS need to be separate. I know of the different hosting methods: self-hosting and hosting in IIS. I would like to use IIS if at all possible.
Also, how would I host two applications (an API and a web application) if each were on a separate server so that I could serve the api from http://www.mymvcapp.com/api?
There are at least 4 ways of doing what you want. The first two methods apply if you have one web server and both applications are served from that one web server running IIS. They also work if you have multiple web servers behind a load balancer, as long as the API and the website run on the same server.

The second two methods use what's called a "Reverse Proxy": essentially a way to route traffic from one server (the proxy server) to multiple internal servers depending on what type of traffic you're receiving. This is for when you run your website on one set of servers and your API on a different set. You can use any reverse-proxy software you want; I mention nginx and HAProxy because I've used both in the past.
Single Web Server running IIS
There are two ways to do it in IIS:
If your physical folder structure is as follows:
c:\sites\mymvcapp
c:\sites\mymvcapp\api
You can do the following:
Create a Child Application
Creating a child application will allow your "API" site to be reachable from www.mymvcapp.com/api, without any routing changes needed.
To do that:
Open IIS Manager
Click on the appropriate site in the "Sites" folder tree on the left side
Right-click on the API folder
Click "Convert to Application"
The downside is that all Child Applications inherit the web config of their parent, and if you have conflicting settings in there, you'll see some runtime weirdness (if it works at all).
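If you do run into conflicting settings, one mitigation (a sketch; wrap only the sections you actually use) is to stop the parent web.config from flowing down to the child application:

<!-- parent site's web.config -->
<location path="." inheritInChildApplications="false">
  <system.web>
    <!-- parent-only settings go here and are not inherited by /api -->
  </system.web>
</location>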
Create a directory Junction
The second way keeps the applications separate, and again you don't have to do any routing.
Assuming two folder structures:
c:\sites\api
c:\sites\mvcapp
You can set up Junctions in Windows. From the command line*:
:: mklink /J creates a directory junction (no admin rights required)
cd c:\sites
mklink /J mymvcapp c:\sites\mvcapp
cd mymvcapp
mklink /J api c:\sites\api
Then go into IIS Manager, and convert both to applications. This way, the API will be available in \api\, but not actually share its web.config settings with the parent.
Multiple Servers
If you use nginx or HAProxy as a reverse proxy, you can set it up to route calls to each app depending on the path.
nginx Reverse Proxy settings
In your nginx.conf (best practice is to keep the config in sites-available and symlink it into sites-enabled, so you can remove the symlink when deploying), do the following:
location / {
    proxy_pass http://mymvcapp.com:80;
}

location /api {
    proxy_pass http://mymvcapp.com:81;
}
and then you'd set the correct IIS settings to have each site listen on port 80 (mymvcapp) and port 81 (api).
HAProxy
# these ACLs and routing rules belong in your frontend/listen section
acl acl_WEB hdr_beg(host) -i mymvcapp.com
acl acl_API path_beg -i /api
use_backend API if acl_API
use_backend WEB if acl_WEB

backend API
    server web mymvcapp.com:81

backend WEB
    server web mymvcapp.com:80
*I'm issuing the Junction command from memory; I did this a few months ago, but not recently, so let me know if there are issues with the command
NB: the config files are not meant to be complete config files -- only to show the settings necessary for reverse proxying. Depending on your environment there may be other settings you need to set.