I am running a Flask application on DigitalOcean via a Gunicorn and NGINX setup.
I have SSH access to my DigitalOcean Droplet and am able to log in via the terminal.
Gunicorn, NGINX and Flask are already running and this is a production server.
Now, I'd like to SSH into my Droplet and run a terminal command to see a printout of any errors that occur in my Flask application. I guess these would be Gunicorn errors.
Is such a thing possible? Or would I have to print things out to an error log? If so, I'll probably have questions about how to do that too! :D
Thank you in advance!!
Have a look at this DigitalOcean tutorial for Flask, Gunicorn and NGINX. It includes a section on obtaining logs at the end of section #5 that should be helpful.
A common approach with cloud-based deployments is to centralize logs by aggregating them automatically from multiple resources (e.g. Droplets), which saves SSH'ing (or SCP'ing) into machines to query logs.
With a single Droplet, it's relatively straightforward to SSH into the Droplet and query the logs, but as the number of resources grows this can become burdensome. Some cloud providers offer logging-as-a-service; with DigitalOcean you may want to look at third-party solutions. I'm not an expert, but the ELK stack, Splunk and Datadog are often used.
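If you do end up writing errors out to a log file that you can tail over SSH, a minimal sketch is shown below. It assumes a plain Flask app and that the log path (a placeholder here) is writable by the user Gunicorn runs as; unhandled exceptions are logged by Flask at the ERROR level, so they land in the same file.

```python
# Minimal sketch: send Flask application errors to a rotating log file.
# The path below is a placeholder -- use any location the app user can write to.
import logging
from logging.handlers import RotatingFileHandler

from flask import Flask

app = Flask(__name__)

handler = RotatingFileHandler(
    "/var/log/myflaskapp/error.log",  # hypothetical path
    maxBytes=1_000_000,               # rotate after ~1 MB
    backupCount=3,                    # keep three rotated files
)
handler.setLevel(logging.ERROR)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s in %(module)s: %(message)s")
)
app.logger.addHandler(handler)


@app.route("/")
def index():
    return "Hello"
```

Gunicorn itself also accepts --error-logfile and --access-logfile options (or the equivalent settings in a config file), so worker-level problems end up somewhere you can tail as well.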
Thanks in advance. My question is: how do I set up, in a Dockerfile, an nginx in front of a container? I saw other questions [5], and it seems that the only way to allow HTTP/2 for Odoo on Cloud Run is to create an nginx container, as sidecars are not allowed in Cloud Run. But I also read that it can be done with supervisord. Has anyone been able to do that, so as to handle HTTP/2 and increase the Cloud Run max request quota?
I wanted to try this: in the entrypoint.sh, write a command to install nginx, and then set its configuration to be used as a proxy that allows HTTP/2. But I'm asking here because I'm not sure it will work, as I read in [2] that nginx won't work with a Python server.
The whole story: I'm deploying Odoo CE on Google Cloud Run + Cloud SQL. I configured one Odoo instance as a demo with 5 blog posts, and when I tried to download a backup it said the request was too large. I assumed this was because of the 32 MB Cloud Run request quota [1], as the backup size was 52 MB. Then I saw that the quota for HTTP/2 connections was unlimited, so I enabled the HTTP/2 option in Cloud Run. But when I accessed the service, a "connection failure" error appeared.
I thought of two ways to address this. One was upgrading the Odoo HTTP server to one that can handle HTTP/2, like Quark. This first way seems impossible to me, because it would probably force me to rewrite many pieces of Odoo. The second option was running an nginx in front of the Odoo container (which runs a Python web server on Werkzeug). I read on the web that nginx can upgrade connections to HTTP/2, but I also read that Cloud Run runs its own internal load balancer [2]. So, my question: would it be possible to run, in the same Odoo container, an nginx that exposes this service on Cloud Run?
References:
[1] https://cloud.google.com/run/quotas
[2] Cloud Run needs NGINX or not?
[3] https://linuxize.com/post/configure-odoo-with-nginx-as-a-reverse-proxy/
[4] https://github.com/odoo/docker/tree/master/14.0
[5] How to setup nginx in front of node in docker for Cloud Run?
Has anyone been able to do that, so as to handle HTTP/2 and increase the Cloud Run max request quota?
Handling HTTP/2 does not help you increase your maximum requests per container limit on Cloud Run.
HTTP/2 only helps you reuse TCP connections by multiplexing concurrent requests over a single connection, but Cloud Run does not count connections, so you are not on the right track here. HTTP/2 won't help you.
Cloud Run today already supports 1000 container instances (with 250 concurrent requests in preview) so that's 250,000 simultaneous requests for your service. If you need more, contact support.
But I'm asking here because I'm not sure it will work, as I read in [2] that nginx won't work with a Python server.
Sounds incorrect.
If you configure a multi-process container you can run Python behind nginx on Cloud Run. But as you said, Cloud Run does not need nginx.
Overall you don't need HTTP/2 in this scenario.
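For completeness, the "multi-process container" route mentioned above is usually done with supervisord (as noted in the question), but a bare-bones entrypoint can also be sketched directly in Python. Everything in the sketch is an assumption: it presumes nginx is installed in the image with a config that proxies to 127.0.0.1:8000, and the gunicorn command is only a generic stand-in for whatever actually serves the app.

```python
# entrypoint.py -- illustrative sketch only: run nginx in front of a Python
# server inside a single container. Assumes nginx is installed and its config
# proxies to 127.0.0.1:8000; the gunicorn command is a generic placeholder.
import subprocess
import sys
import time

procs = [
    subprocess.Popen(["gunicorn", "--bind", "127.0.0.1:8000", "app:app"]),
    subprocess.Popen(["nginx", "-g", "daemon off;"]),
]

try:
    # If either process dies, fall through so the container exits and
    # the platform restarts it.
    while all(p.poll() is None for p in procs):
        time.sleep(1)
finally:
    for p in procs:
        if p.poll() is None:
            p.terminate()
            p.wait()

# Propagate a non-zero exit code if any child failed.
sys.exit(next((p.returncode for p in procs if p.returncode), 0))
```

Again, none of this is required for the scenario described; it only illustrates that nginx and a Python server can coexist in one container.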
I'm attempting to set up Apache Airflow with nginx as my proxy. Generally, when I create my own Flask apps, I'll tell gunicorn to --bind unix:/path/to/socket.sock and avoid much pain when configuring nginx. I have, so far, found nothing indicating that this is going to work with Airflow, but I could just be looking in the wrong place.
I have a simple Flask application deployed on an AWS EC2 instance. The Flask app accepts an incoming HTTP request, does some (potentially heavy and lengthy) computations with the request, and then returns the results.
Based on my limited understanding, it is recommended to always use the nginx + gunicorn stack for a real Flask app. As I try to keep things on the simple side, I just used gunicorn with 8 workers. The app works just fine: I can query the EC2 instance and get the results as expected. There is no (or very little) static content in the app.
As for traffic, I don't expect many simultaneous requests to the site (maybe ~10 at the same time), since it is for internal use. My question is: given my use case, will this (no nginx) harm me in the near future?
Have you deployed using Elastic Beanstalk or EC2?
If the latter, I recommend using Elastic Beanstalk for this app, as it handles a lot of the configuration for you.
From AWS:
Elastic Beanstalk uses nginx as the reverse proxy to map your application to your load balancer on port 80. If you want to provide your own nginx configuration, you can override the default configuration provided by Elastic Beanstalk by including the .ebextensions/nginx/nginx.conf file in your source bundle. If this file is present, Elastic Beanstalk uses it in place of the default nginx configuration file.
Otherwise, at this stage not having NGINX isn't going to affect your app's performance. However, since omitting it isn't best practice or future-proof, there's no harm in including it. There's a lot of content out there describing how to do just that.
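If you do stay with the gunicorn-only setup for now, it's worth at least making the worker count, timeout and logging explicit in a config file so they are easy to revisit later. A rough sketch (all values are illustrative assumptions based on the setup you described, not recommendations):

```python
# gunicorn.conf.py -- rough sketch for the gunicorn-only setup described
# above; every value is an illustrative assumption.
bind = "0.0.0.0:8000"   # listen directly, since nothing sits in front yet
workers = 8             # the worker count mentioned in the question
timeout = 120           # extra headroom for the heavy/lengthy computations
accesslog = "-"         # write the access log to stdout
errorlog = "-"          # write the error log to stderr
loglevel = "info"
```

Start it with something like gunicorn -c gunicorn.conf.py yourmodule:app (the module name is a placeholder), and you can slot NGINX in front later without touching the app.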
Cheers!
We have a Google Cloud project with several VM instances and also a Kubernetes cluster.
I can easily access the Kubernetes services with kubefwd, and I can ping and curl them. The problem is that kubefwd works only for Kubernetes, not for the other VM instances.
Is there a way to mount the network locally, so I could ping and curl any instance without it having a public IP, and with the same DNS as inside the cluster?
I would highly recommend rolling a VPN server like OpenVPN. You can also run this inside the Kubernetes cluster.
I have a make-install-ready repo for you to check out at https://github.com/mateothegreat/k8-byexamples-openvpn.
Basically, OpenVPN runs inside a container (inside a pod), and you can set the routes that you want the client(s) to be able to see.
I would not rely on kubefwd, as it isn't production-grade and will give you issues with persistent connections.
Hope this helps you out. If you still have questions or concerns, please reach out.
I'm looking to host a Python web app on Heroku. The backend consists of several Java programs (the servers), and ZeroMQ is used for communication. If I deploy everything on Heroku (the servers just need to be running and listening for requests), would they all run on the same LAN?
I'm asking because I don't want to run ZeroMQ over the internet, because then I'd need something like SSH tunneling, which I want to avoid.
Thanks!
Dynos do not share the same network. See this article on Heroku's website. This is by design, for resilience, but you can choose the (Amazon) region in which they all reside.