I am using an Azure Cloud Service that packs two roles: an ASP.NET MVC project and an ASP.NET Web API project. The MVC project uses ports 80 (standard HTTP) and 443 (HTTPS), and the API project uses 8080 (HTTP) and 8443 (HTTPS).
What I would like to do is assign 8080, 8443, and 443 to the API project, keep 80 for the MVC project, and assign some other port (for example 8444) for its HTTPS.
So I edited the Endpoints under the roles' properties (as described above), and it worked, kind of.
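For reference, those endpoint edits end up in ServiceDefinition.csdef; a rough sketch (role and certificate names here are placeholders, not my actual config):

<WebRole name="WebApi">
  <Endpoints>
    <!-- one InputEndpoint per public port the role should listen on -->
    <InputEndpoint name="HttpIn"  protocol="http"  port="8080" />
    <InputEndpoint name="HttpsIn" protocol="https" port="8443" certificate="SslCert" />
  </Endpoints>
</WebRole>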
What is still troubling me is that when I enter https://mypage.com in a browser, I am taken to my API page and not my MVC web page.
So how can I fix this (meaning that I want both my MVC and API projects to be accessible through port 443)? Could having https://api.mypage.com for my API project fix this? If yes, how can I configure my DNS (which is on GoDaddy) and the Cloud Service to support https://api.mypage.com?
Application Gateway can do the routing for you.
You'll have 80 and 443/TCP facing the outside world, and then map requests, based on URL or subdomain, to custom ports on your WebRoles.
E.g.
example.com ---> 8080/TCP and 8443/TCP
api.example.com ---> 8081/TCP and 8444/TCP
More on that here:
https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-multi-site-overview
A pair of Varnish or nginx boxes will get you pretty much the same result, with the added benefit of caching things along the way. The downside is that you'll have to do it in IaaS, and that's the opposite of an elegant weapon for a more civilized age...
Sample config for varnish: Configure multiple sites with Varnish
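And a hedged nginx equivalent of the mapping above (the backend address is a placeholder for wherever your role instances actually answer):

# virtual host for the MVC site
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://10.0.0.4:8080;   # placeholder backend address/port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# virtual host for the API
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://10.0.0.4:8081;   # placeholder backend address/port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}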
EDIT
Actually you COULD do it in PaaS on App Service with Docker containers:
https://learn.microsoft.com/en-us/azure/app-service-web/app-service-linux-using-custom-docker-image
You know what, maybe it's time to move those services over to App Service as two different Web Apps, each listening on its own subdomain. If you don't need elevation and you don't do messy things like writing to the registry, you should be fine, and you could drop a locomotive's worth of complexity straight away.
It's a crazy time to be alive.
As far as I know, Instance Manager and Controller have the same function, namely managing NGINX Plus instances, but that doesn't make much sense to me.
So my questions are:
What are the differences between Instance Manager and Controller?
What is an Ingress controller?
NGINX Instance Manager: NGINX Instance Manager empowers you to:
Automate configuration and monitoring using APIs.
Ensure your fleet of NGINX web servers and proxies have fixes for active CVEs.
Seamlessly integrate with third-party monitoring solutions such as Prometheus and Grafana for insights.
For example, if you have multiple servers running NGINX, Instance Manager provides a dashboard where all events can be monitored, including spikes on specific events, and where you can keep an inventory of your instances, e.g. to spot which of many VMs has not been updated. To achieve this, nginx-agent needs to be installed alongside the NGINX server on each host.
Nginx Controller: NGINX Controller is cloud‑agnostic and includes a set of enterprise‑grade services that give you a clear line of sight to apps in development, test, or production. With per‑app analytics, you gain new insights into app performance and reliability so you can pinpoint performance issues before they impact production.
Example: to make use of Ingress resources in a cluster, an Ingress controller needs to be enabled first.
Nginx Ingress: Each LoadBalancer service requires its own load balancer with its own public IP address, whereas an Ingress only requires one, even when providing access to dozens of services. When a client sends an HTTP request to the Ingress, the host and path in the request determine which service the request is forwarded to.
For example, the built-in Ingress controller on Google Kubernetes Engine.
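A minimal sketch of that host-and-path routing (the hostname and all resource names are placeholders, and it assumes an NGINX Ingress controller is installed in the cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress               # placeholder name
spec:
  ingressClassName: nginx          # assumes the NGINX Ingress controller
  rules:
  - host: example.com
    http:
      paths:
      - path: /api                 # requests under /api go to the api Service
        pathType: Prefix
        backend:
          service:
            name: api              # placeholder Service
            port:
              number: 80
      - path: /                    # everything else goes to the web Service
        pathType: Prefix
        backend:
          service:
            name: web              # placeholder Service
            port:
              number: 80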
I followed this guide on how to set up an Nginx Ingress with cert-manager on DigitalOcean Kubernetes: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
The tutorial worked fine and I was able to set everything up as written. However (as the tutorial itself states), following it one ends up with three pods of which only one is "Running 1/1", while the other two are "Down". Checking the comments section, this seems to be quite a common problem. If all the traffic gets routed to only one pod, it is not really scalable. Or am I missing something? Quoting from the tutorial:
Note: By default the Nginx Ingress LoadBalancer Service has service.spec.externalTrafficPolicy set to the value Local, which routes all load balancer traffic to nodes running Nginx Ingress Pods. The other nodes will deliberately fail load balancer health checks so that Ingress traffic does not get routed to them.
Mainly my question is: is there a best practice that I am missing in order to host my website on Kubernetes? It seems I have to choose between scalability (having all the nodes healthy and receiving traffic) and getting the IP of the visiting client.
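For what it's worth, the trade-off can be flipped: setting the policy to Cluster makes every node accept traffic, at the cost of losing the original client IP (it gets SNATed inside the cluster). A sketch, assuming the Service name and namespace used by a typical ingress-nginx install (yours may differ):

# switch the ingress Service from Local to Cluster
kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'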
And for whoever ever finds himself/herself in my situation, this is the reply I got from DigitalOcean support:
Unfortunately with that Kubernetes setup it would show those other nodes as down without additional traffic configuration. It is possible to skip the nginx ingress part and just use a DigitalOcean load balancer, but this again does require a good deal of setup and can be more difficult than easy.
Their suggestion for having a website that is both scalable and keeps client IPs (for analytics) was to set up a droplet with Nginx and put a load balancer in front of it. More specifically:
As for using a droplet, this would be a normal website configuration with Nginx as your webserver configured to serve content for your app. You would have full access to your application and the Nginx logs on the droplet itself. Putting a load balancer in front of this would require additional configuration, as load balancers do not pass the X-Forwarded-For header, so the IP addresses of clients would not show up in the logs by default. You would need to configure proxy protocol on the load balancer and in your nginx configuration to be able to obtain those IPs.
https://www.digitalocean.com/blog/load-balancers-now-support-proxy-protocol/
This is also a bit more complex unfortunately.
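The nginx side of that proxy protocol setup is small; a hedged sketch (the trusted range is a placeholder that must match your load balancer's addresses, and PROXY protocol must also be enabled on the DO load balancer itself):

server {
    listen 80 proxy_protocol;          # accept the PROXY protocol header from the LB

    # trust the load balancer and recover the real client IP from that header
    set_real_ip_from 10.0.0.0/8;       # placeholder: your LB's address range
    real_ip_header proxy_protocol;

    server_name example.com;           # placeholder host
    access_log /var/log/nginx/access.log;  # now logs real client IPs

    location / {
        root /var/www/html;            # placeholder site root
    }
}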
Hope this saves someone some time.
I'm planning to build a website to host static files. Users will upload their files and I will deploy a bunch of Deployments with nginx images for them on a Kubernetes node. My main goal is that, at some point, users will reach their apps on a subdomain like my-blog-app.mysite.com. After some time users should be able to use custom domains.
I understand that when I deploy an nginx image in a pod, I have to create a service to expose port 80 (or 443) to the internet via a load balancer.
I also read about Ingress; it looks like what I need, but I don't think I understand the concept.
My question is: if I have, for example, 500 nginx pods running (each one a different website), do I need a service for every pod on that node (in this case 500 services)?
You are looking for https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting.
With this type of Ingress, you route the traffic to the different nginx instances based on the Host header, which perfectly matches your use case.
In any case, yes: with your current architecture you need a Service for each pod (one per website), though plain ClusterIP Services are enough; only the Ingress controller itself needs a public load balancer. Have you considered a different approach, like having a general listener (a shared set of nginx instances) that serves the correct content based on authorization or something similar?
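A hedged sketch of the name-based virtual hosting variant (hostnames and Service names are placeholders; each backing Service can be a plain ClusterIP):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-sites                  # placeholder name
spec:
  ingressClassName: nginx           # assumes an NGINX Ingress controller
  rules:
  - host: my-blog-app.mysite.com    # routed by Host header
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-blog-app       # ClusterIP Service for that user's pod
            port:
              number: 80
  - host: another-app.mysite.com    # one rule per site; custom domains work the same way
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: another-app       # placeholder Service
            port:
              number: 80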
I'm having trouble with the connection between my frontend and backend Deployments in Kubernetes. From inside my Nginx frontend I can:
curl http://abphost
But in the browser I'm getting:
net::ERR_NAME_NOT_RESOLVED
abphost is a ClusterIP service.
I'm using a NodePort service to access my nginx frontend.
But in the browser I'm getting:
Sure, that's because the cluster has its own DNS server, called kube-dns, which is designed to resolve names inside the cluster that would not ordinarily have any meaning whatsoever outside the cluster.
It is an improper expectation to think that http://my-service.my-ns.svc.cluster.local will work anywhere that doesn't have kube-dns's Service IP as its DNS resolver.
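You can see the difference for yourself (a hedged sketch; it assumes nslookup is available in the frontend image, and <frontend-pod> is a placeholder):

# inside the cluster: kube-dns answers, so the short name resolves
kubectl exec -it <frontend-pod> -- nslookup abphost

# on your own machine: your resolver knows nothing about cluster names,
# which is exactly the ERR_NAME_NOT_RESOLVED the browser reports
nslookup abphost.default.svc.cluster.local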
If you want to access the backend service, there are two tricks to do that: create a second Service of type: NodePort that points to the backend, and then point your browser at that new NodePort's port; or:
By far the more reasonable and scalable solution is to use an Ingress controller to surface as many Services as you wish, using the same nginx virtual hosting that you are likely already familiar with. That way you only expend one NodePort but can expose almost infinite Services, and have very, very fine-grained control over the manner in which those Services are exposed, something much harder to do using just type: NodePort.
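For completeness, a minimal sketch of the first trick (the Service name and the label selector are assumptions about your setup):

apiVersion: v1
kind: Service
metadata:
  name: abphost-external    # placeholder name
spec:
  type: NodePort
  selector:
    app: abphost            # assumed label on the backend pods
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080         # the browser then reaches http://<node-ip>:30080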
I'm new to web development. I'm planning to move my WordPress site to AWS; say it's "example.com". I'm also planning to create a subdomain, "xxx.example.com", using Spring Boot. I'm wondering: is that possible?
Yes, it's possible, but remember that only one process can listen on a given port (80 for HTTP, 443 for HTTPS) on a machine.
Two options:
Have the subdomain on a different machine, with a different IP address. That way you can have WordPress on one machine and your Spring application on another.
Host both on the same machine and have one process (Apache, or a load balancer) listen to traffic for both and route it appropriately. This is achieved with the ProxyPass directive in Apache; see the sketch below. Having a web server in front of an application server is often recommended anyway, as it can be better for security and performance reasons.
There is a third option, which is to use a non-standard port (e.g. 8443), but that just makes your URLs look messy (https://xxx.example.com:8443). That might be fine if you just want to test for your own sake, but it's not great for production applications.
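A hedged Apache sketch of the second option (paths, names and the Spring Boot port are assumptions; mod_proxy and mod_proxy_http need to be enabled):

# WordPress stays on the main domain
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/wordpress
</VirtualHost>

# the subdomain is reverse-proxied to the Spring Boot app
<VirtualHost *:80>
    ServerName xxx.example.com
    ProxyPreserveHost On
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>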