I'm reading up and doing a basic Mesos/Marathon installation. If I'm deploying a webapp as a Marathon application, the instance(s) of my webapp could run on any Mesos slave. How would I then configure my nginx upstream to point to the correct host(s)?
Should my webapp register its host in ZooKeeper and reconfigure nginx periodically?
Are there any examples of how to do this?
Thanks
Should my webapp register its host in ZooKeeper and reconfigure nginx periodically?
You don't need ZooKeeper. All the data required to configure nginx is available from Mesos or Marathon. You can periodically query Mesos/Marathon and generate the nginx configuration, as Nixy does.
To minimize unavailability you can use Marathon's SSE event stream to get notified about instance starts/stops, just as allegro/marathon-consul does.
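As a rough illustration of the polling approach, here is a minimal Python sketch that queries Marathon's `/v2/apps/{id}/tasks` endpoint and renders an nginx `upstream` block. The Marathon URL and app id are assumptions; a real setup (like Nixy) would also write the result to a conf file and reload nginx.

```python
import json
import urllib.request

MARATHON_URL = "http://marathon.example.com:8080"  # assumption: your Marathon endpoint

def fetch_tasks(app_id):
    """Fetch running tasks for a Marathon app via the /v2/apps/{id}/tasks API."""
    with urllib.request.urlopen(f"{MARATHON_URL}/v2/apps/{app_id}/tasks") as resp:
        return json.load(resp)["tasks"]

def render_upstream(name, tasks):
    """Render an nginx upstream block from Marathon task host/port pairs."""
    servers = "\n".join(f"    server {t['host']}:{t['ports'][0]};" for t in tasks)
    return f"upstream {name} {{\n{servers}\n}}\n"

# Usage sketch (the rendered block would be written to a conf file,
# followed by `nginx -s reload`):
# conf = render_upstream("webapp", fetch_tasks("/webapp"))
```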
I am running a Flask application on DigitalOcean via a Gunicorn and NGINX setup.
I have SSH access to my DigitalOcean Droplet and am able to log in via the terminal.
Gunicorn, NGINX and Flask are already running and this is a production server.
Now, I'd like to SSH into my Droplet and run a terminal command to see a printout of any errors that occur in my Flask application. I guess they would be Gunicorn errors.
Is such a thing possible? Or would I have to print things out to an error log? If so, I'll probably have questions about how to do that too! :D
Thank you in advance!!
Have a look at this DigitalOcean tutorial for Flask, Gunicorn and NGINX. It includes a section on obtaining logs at the end of section #5 that should be helpful.
A common approach with cloud-based deployments is to centralize logs by aggregating them automatically from multiple resources (e.g. Droplets), to save SSH'ing (or SCP'ing) to machines to query logs.
With a single Droplet it's relatively straightforward to SSH in and query the logs but, as the number of resources grows, this becomes burdensome. Some cloud providers offer logging-as-a-service; with DigitalOcean you may want to look at third-party solutions. I'm not an expert, but the ELK stack, Splunk and Datadog are often used.
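Since Flask's `app.logger` is a standard `logging.Logger`, one simple option is to attach a file handler so errors end up in a file you can `tail -f` over SSH. A minimal sketch, assuming a log path of your choosing (the logger name and path below are placeholders):

```python
import logging
from logging.handlers import RotatingFileHandler

def setup_error_log(logger_name, path):
    """Attach a rotating file handler so errors survive restarts and can be tailed."""
    handler = RotatingFileHandler(path, maxBytes=1_000_000, backupCount=3)
    handler.setLevel(logging.ERROR)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger = logging.getLogger(logger_name)
    logger.addHandler(handler)
    return logger

# In a Flask app you could attach the same handler to app.logger, e.g.:
# setup_error_log("flask.app", "/var/log/myapp/error.log")
```

Gunicorn itself also accepts `--error-logfile` and `--access-logfile` flags, which is often the quickest way to get its errors into a file.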
All,
We have an infrastructure with 1 Consul server, 2 nginx web servers and 2 application servers. The app servers connect to Consul to register their services. The nginx servers connect to Consul and update the nginx.conf file using the nginx.ctmpl (consul-template config) file, so that they have the latest information on the services via Consul.
The problem I see is that the nginx.conf is not getting updated on the 2 nginx servers. The following are the agents/services running on each server:
Consul server:
consul
consul-template
Nginx servers:
nginx
consul-template
Application Servers:
consul
Couple of questions here:
Which agent/process/service on the nginx servers will use the nginx.ctmpl file to update nginx.conf with the latest status?
What can be a problem on my nginx servers?
Nginx does not use that template directly. consul-template should be configured to use nginx.ctmpl to render the config, and then reload nginx.
See Load Balancing with NGINX and Consul Template for an example of this configuration.
Can you verify that the consul-template service is running, and perhaps provide any error logs it might be generating?
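For reference, a minimal consul-template configuration on each nginx server might look like the following (the paths are assumptions); the `command` is what actually reloads nginx after the template is rendered:

```hcl
template {
  source      = "/etc/consul-template/nginx.ctmpl"
  destination = "/etc/nginx/conf.d/services.conf"
  command     = "nginx -s reload"
}
```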
The problem is solved. I renamed Consul's data directory on the nginx servers and ran chef-client. I recreated the data directory and the Consul server was then able to identify the two nginx servers and the 2 application servers. Everything is back in working condition.
Sometimes the Flask app server may not be running, in which case the page will just say the server can't be reached. Is there any way to have nginx redirect to a different URL if the Flask app cannot be reached?
This kind of dynamic change of proxying is not possible in nginx directly. One way you could do it is by having a dedicated service (application) that takes care of this by polling your primary Flask endpoint at regular intervals.
If there is a negative response, your service could simply change the nginx config and then send a HUP signal to the nginx master process, which in turn reloads nginx with the newly available config. This method is pretty efficient and fast.
In case you are making this service in Python, you could use the standard signal module (together with os.kill) to send the signal to the nginx master process, and the nginxparser library to manipulate the nginx config.
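A minimal sketch of such a watchdog, assuming nginx's pid file lives at the conventional /var/run/nginx.pid (the config-rewriting step is left out):

```python
import os
import signal
import urllib.request
import urllib.error

def is_reachable(url, timeout=3):
    """Return True if the Flask endpoint answers with a non-error status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def reload_nginx(pid_file="/var/run/nginx.pid"):
    """Send HUP to the nginx master process so it re-reads its config."""
    with open(pid_file) as f:
        pid = int(f.read().strip())
    os.kill(pid, signal.SIGHUP)

# Watchdog loop sketch: poll is_reachable() at an interval; on failure,
# rewrite the nginx config and call reload_nginx().
```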
I have nginx receiving traffic for multiple domains on port 80, each with an upstream to a different application server on an application-specific port,
e.g.
abc.com:80 --> :3345
xyz.com:80 --> :3346
Is it possible to:
1. add/delete domains (abc/xyz) without downtime?
2. change the application-level port mapping (3345, 3346) without downtime?
If nginx can't do it, is there any other service that can do it without restarting the service and incurring downtime ?
Thanks in advance
In short: Yes.
Typically, you'd overwrite the existing config file(s) in place while nginx is running, test them using nginx -t and, once everything is fine, reload nginx using nginx -s reload. This causes nginx to spawn new worker processes that use your new config, while the old worker processes are shut down gracefully. Graceful means closing the listen sockets while still serving currently active connections. Every new request/connection will use the new config.
Note that in case nginx is not able to parse the new config file(s), the old config will stay in place.
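For reference, the mapping from the question might look like this in nginx config, with one server block per domain (the assumption here is that the application servers listen on localhost); adding or removing a block followed by a reload is zero-downtime:

```nginx
server {
    listen 80;
    server_name abc.com;
    location / {
        proxy_pass http://127.0.0.1:3345;
    }
}

server {
    listen 80;
    server_name xyz.com;
    location / {
        proxy_pass http://127.0.0.1:3346;
    }
}
```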
Let's say we have 2 separate applications, a Web Api application and a MVC application both written in .NET 4.5. If you were to host the MVC application in IIS under the host header "https://www.mymvcapp.com/" would it be possible to host the Web Api application separately in IIS under the host header "https://www.mymvcapp.com/api/"?
The processes running the 2 applications in IIS need to be separate. I know of the separate methods of hosting, self hosting and hosting using IIS. I would like to use IIS if at all possible.
Also, how would I host two applications (an API and a web application) if each were on a separate server so that I could serve the api from http://www.mymvcapp.com/api?
There are at least four ways of doing what you want to do. The first two methods apply when you have one web server and both applications are served from that one web server running IIS. They also work if you have multiple web servers running behind a load balancer, so long as the API and the website are running on the same server.
The second two methods are using what's called a "Reverse Proxy", essentially a way to route traffic from one server (the proxy server) to multiple internal servers depending on what type of traffic you're receiving. This is for when you run your web servers on a set of servers and run your API on a different set of servers. You can use any reverse proxy software you want, I mention nginx and HAProxy because I've used both in the past.
Single Web Server running IIS
There are two ways to do it in IIS:
If your physical folder structure is as follows:
c:\sites\mymvcapp
c:\sites\mymvcapp\api
You can do the following:
Create a Child Application
Creating a child application will allow your "API" site to be reachable from www.mymvcapp.com/api, without any routing changes needed.
To do that:
Open IIS Manager
Click on the appropriate site in the "Sites" folder tree on the left side
Right Click on the API folder
Click "Convert to Application"
The downside is that all Child Applications inherit the web config of their parent, and if you have conflicting settings in there, you'll see some runtime weirdness (if it works at all).
Create a directory Junction
The second way lets the applications maintain their separateness; and again you don't have to do any routing.
Assuming two folder structures:
c:\sites\api
c:\sites\mvcapp
You can set up junctions in Windows. From the command line*:
cd c:\sites
mklink /J mymvcapp c:\sites\mvcapp
cd mymvcapp
mklink /J api c:\sites\api
(mklink /J creates a directory junction; /D creates a symbolic link instead, and the two flags can't be combined.)
Then go into IIS Manager, and convert both to applications. This way, the API will be available at /api, but won't actually share its web.config settings with the parent.
Multiple Servers
If you use nginx or haproxy as a reverse proxy, you can set it up to route calls to each app depending.
nginx Reverse Proxy settings
In your nginx.conf (best practice is to create a sites-enabled conf that's a symlink to sites-available, and you can destroy that symlink whenever deploying) do the following:
location / {
    proxy_pass http://mymvcapp.com:80;
}

location /api {
    proxy_pass http://mymvcapp.com:81;
}
and then you'd set the correct IIS bindings so that each site listens on its own port: 80 (mymvcapp) and 81 (api).
HAProxy
acl acl_WEB hdr_beg(host) -i mymvcapp.com
acl acl_API path_beg -i /api
use_backend API if acl_API
use_backend WEB if acl_WEB
backend API
    server web mymvcapp.com:81

backend WEB
    server web mymvcapp.com:80
*I'm issuing the Junction command from memory; I did this a few months ago, but not recently, so let me know if there are issues with the command
NB: the config files are not meant to be complete config files -- only to show the settings necessary for reverse proxying. Depending on your environment there may be other settings you need to set.