Nginx is used as a reverse proxy on the app server for all requests to a Python aiohttp web application. A browser client uploading a file of 220 KB or larger through nginx fails; without nginx in the loop, the upload works fine. Nginx returns no response for the larger file and simply hangs, only responding after the POST request is killed. I have tried modifying various client buffer sizes and timeouts, but that did not help.
I have tried different combinations of the following configuration settings:
client_body_in_file_only clean;
client_body_buffer_size 32K;
client_max_body_size 30M;
send_timeout 300s;
I don't have your exact setup, but here is a guide that may help with aiohttp and nginx. I use python-socketio, aiohttp, and nginx:
Declare the file upload size limit in web.Application(); this is easy to miss :)
web.Application(client_max_size=1024**2*30)
Check whether your nginx configuration has both HTTP and HTTPS server blocks, and declare the limit in both places:
client_max_body_size 30M;
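As a sketch, the two server blocks together would look something like this (the server name and certificate paths are placeholders, not taken from your config):

```nginx
# HTTP server block
server {
    listen 80;
    server_name example.com;               # placeholder
    client_max_body_size 30M;
}

# HTTPS server block: the limit must be declared here too,
# or uploads over HTTPS will still be rejected at the default 1M
server {
    listen 443 ssl;
    server_name example.com;               # placeholder
    ssl_certificate     /etc/nginx/certs/example.com.pem;  # placeholder
    ssl_certificate_key /etc/nginx/certs/example.com.key;  # placeholder
    client_max_body_size 30M;
}
```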
Whether or not you use Gunicorn does not matter here, although using it is highly recommended.
Hope this helps.
Steve
Related
I am doing a PoC on an nginx server. It listens on ports and redirects paths to different domains. The servers I am adding are dynamic in nature.
The server config blocks look like this:
(attached image)
I have to fetch the server name and port address from an API and create server blocks based on it. The number of servers may increase or decrease; it is dynamic in nature.
What I tried was creating a new-config.conf, which is already included from nginx.conf. I write the server config blocks dynamically into new-config.conf and restart nginx afterwards.
I need a way to apply the new server config without having to restart nginx.
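A common pattern here is to regenerate the included file from the API data and then ask nginx for a graceful reload with `nginx -s reload`, which re-reads the configuration without dropping in-flight connections, so no restart is needed. A minimal Python sketch of the generation step, assuming a list of (server_name, port) pairs fetched from your API; the names, ports, and file path below are placeholders for illustration:

```python
# Render nginx server blocks from dynamic (server_name, port) pairs,
# to be written into the new-config.conf included from nginx.conf.

def render_server_block(server_name: str, port: int) -> str:
    """Return one nginx server block proxying to the given upstream port."""
    return (
        "server {\n"
        "    listen 80;\n"
        f"    server_name {server_name};\n"
        "    location / {\n"
        f"        proxy_pass http://127.0.0.1:{port};\n"
        "    }\n"
        "}\n"
    )

def render_config(servers) -> str:
    """Concatenate server blocks for every (name, port) pair."""
    return "\n".join(render_server_block(name, port) for name, port in servers)

if __name__ == "__main__":
    # Placeholder data standing in for the API response
    servers = [("app1.example.com", 8081), ("app2.example.com", 8082)]
    config = render_config(servers)
    print(config)
    # Write `config` to new-config.conf, then run:
    #   nginx -t && nginx -s reload
    # nginx -t validates the config first; -s reload applies it gracefully.
```

Validating with `nginx -t` before reloading matters here: a malformed generated block would otherwise be rejected at reload time.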
I have a local network with some old, insecure services on it. I use an nginx reverse proxy with client-certificate authentication as a safe entry point into this local network from the Internet. Until now I have used it only to proxy HTTP servers, using
location / {
proxy_pass http://192.168.123.45:80/;
}
and everything works fine.
But now I would like to serve static files that are accessible through FTP on a local server. I tried simply:
location /foo {
proxy_pass ftp://user:password@192.168.100.200:5000/;
}
but that doesn't work, and I could not find anything that would simply proxy an HTTP request to an FTP request.
Is there any way to do this?
Nginx doesn't support proxying to FTP servers. At best, you can proxy the socket... and this is a real hassle with regular old FTP due to it opening new connections on random ports every time a file is requested.
What you can probably do instead is create a FUSE mount of that FTP server at some local path, and serve that path with Nginx as normal. CurlFtpFS is one tool for this. Tutorial: https://linuxconfig.org/mount-remote-ftp-directory-host-locally-into-linux-filesystem
(Note: For security and reliability, it's strongly recommended you migrate away from FTP when possible. Consider SSH/SFTP instead.)
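As a sketch of the FUSE approach: after mounting the FTP share locally, e.g. with `curlftpfs ftp://user:password@192.168.100.200:5000/ /mnt/ftp-share/ -o allow_other` (the mount point is a placeholder), a plain nginx location can serve it:

```nginx
location /foo/ {
    # serve the FUSE-mounted FTP contents as ordinary static files
    alias /mnt/ftp-share/;
}
```

Note that `allow_other` is needed so the nginx worker user can read the mount, and the mount must be re-established after reboot (e.g. via fstab or a systemd unit).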
I've recently set up a Crucible instance in AWS connected via an HTTPS ELB. I have an nginx reverse proxy set up on the instance as well to redirect HTTP requests to HTTPS.
This partially works. However, Crucible itself doesn't know it's running over HTTPS, so it serves up mixed content, and AJAX queries often break due to HTTP -> HTTPS conflicts.
I've found documentation for installing a certificate in Crucible directly...
https://confluence.atlassian.com/fisheye/fisheye-ssl-configuration-298976938.html
However, I'd really rather not do it this way. I want HTTPS terminated at the ELB, to make it easier to manage centrally through AWS.
I've also found documentation for using Crucible through a reverse proxy...
https://confluence.atlassian.com/kb/proxying-atlassian-server-applications-with-apache-http-server-mod_proxy_http-806032611.html
However, this doesn't specifically deal with HTTPS.
All I really need is a way to ensure that Crucible doesn't serve up content with hard coded internal HTTP references. It needs to either leave off the protocol, or set HTTPS for the links.
Setting up the reverse proxy configuration should accomplish this. Under Administration >> Global Settings >> Server >> Web Server, set the following:
Proxy scheme: https
Proxy host: elb.hostname.com
Proxy port: 443
And restart Crucible.
Making the configuration in the UI is one way. You can also edit config.xml in $FISHEYE_HOME:
<web-server site-url="https://your-public-crucible-url">
<http bind=":8060" proxy-host="your-public-crucible-url" proxy-port="443" proxy-scheme="https"/>
</web-server>
Make sure to shut down FishEye/Crucible before making this change.
AFAIK, this configuration is the only way to make the internal Jetty of FishEye/Crucible aware of the reverse proxy in front of it.
This question already has answers here:
What are the benefits of using Nginx in front of a webserver for Go?
(4 answers)
Closed 7 years ago.
Sorry, I cannot find the answer via Google search, and nobody seems to explain clearly the difference between a pure Go web server and an nginx reverse proxy. Everybody seems to put nginx in front of their web applications.
My question is: given that Go has all the HTTP serving functions, what is the benefit of using nginx over a pure Go web server?
In most cases, we set up the Go web server for all routes and put the nginx configuration in front.
Something like:
limit_req_zone $binary_remote_addr zone=limit:10m rate=2r/s;
log_format lf '[$time_local] $remote_addr';
server {
    listen 80;
    access_log /var/log/nginx/access.log lf;
    error_log /var/log/nginx/error.log;
    set_real_ip_from 0.0.0.0/0;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    server_name 1.2.3.4 mywebsite.com;
    location / {
        # pass all requests through to the Go server
        proxy_pass http://127.0.0.1:8080;
    }
}
When we have this Go:
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
Is the traffic to nginx and to the Go web server different? If not, why do we have two layers of web servers?
Please help me understand this.
Thanks,
There's nothing stopping you from serving requests from Go directly.
On the other hand, there are some features that nginx provides out-of-the box that may be useful, for example:
handle many virtual servers (e.g. have Go respond on app.example.com and a different app on www.example.com)
HTTP basic auth on some paths, say www.example.com/secure
access logs
etc
All of this can be done in Go, but it would require programming, while in nginx it's just a matter of editing a .conf file and reloading the configuration. Nginx doesn't even need a restart for these changes to take effect.
(From a "process" point of view, nginx could be managed by an ops employee, with root permissions, running in a well known port, while developers deploy their apps on higher ones.)
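For instance, the virtual-server and basic-auth points above might look like this in nginx (hostnames, ports, and the htpasswd path are placeholders):

```nginx
server {
    listen 80;
    server_name app.example.com;
    location / {
        proxy_pass http://127.0.0.1:8080;   # the Go app
    }
    location /secure/ {
        # basic auth on one path, configured in two lines
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://127.0.0.1:8080;
    }
}

server {
    listen 80;
    server_name www.example.com;
    location / {
        proxy_pass http://127.0.0.1:8081;   # a different app
    }
}
```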
The general idea of using nginx in this scenario is to serve up static resources via nginx and allow Go to handle everything else.
Search for "try_files" in the nginx documentation. It checks the disk for the existence of a file and serves it directly, instead of the Go app having to handle static assets.
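A sketch of that pattern (the root path and port are placeholders): try_files looks for a matching file on disk and falls back to the Go app only when none is found:

```nginx
location / {
    root /var/www/static;                # static assets, if present
    try_files $uri @go;                  # serve the file if it exists...
}

location @go {
    proxy_pass http://127.0.0.1:8080;    # ...otherwise hand off to Go
}
```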
This has been asked a few times before[1] but for posterity:
It depends.
Out of the box, putting nginx in front as a reverse proxy is going to give you:
Access logs
Error logs
Easy SSL termination
SPDY support
gzip support
Easy ways to set HTTP headers for certain routes in a couple of lines
Very fast static asset serving (if you're serving off S3/etc. though, this isn't that relevant)
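Several of the items above come down to a line or two each of nginx config; a sketch (certificate paths and hostnames are placeholders):

```nginx
server {
    listen 443 ssl;                      # SSL termination
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/example.com.pem;   # placeholder
    ssl_certificate_key /etc/nginx/certs/example.com.key;   # placeholder

    gzip on;                             # gzip support
    gzip_types text/plain application/json;

    location /api/ {
        # per-route HTTP header in one line
        add_header Cache-Control "no-store";
        proxy_pass http://127.0.0.1:8080;
    }
}
```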
The Go HTTP server is very good, but you will need to reinvent the wheel to do some of these things (which is fine: it's not meant to be everything to everyone).
I've always found it easier to put nginx in front—which is what it is good at—and let it do the "web server" stuff. My Go application does the application stuff, and only the bare minimum of headers/etc. that it needs to. Don't look at putting nginx in front as a "bad" thing.
Further, to extend on my answer there, there's also the question of crash resilience: your Go application isn't restricted by a configuration language and can do a lot of things.
Some of these things may crash your program. Having nginx (or HAProxy, or Varnish, etc.) as a reverse proxy can give you some request buffering (to allow your program to restart) and/or serve stale content from its local cache (i.e. your static home page), which may be better than having the browser time out and show a "cannot connect to server" error.
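The stale-content behaviour maps to nginx's proxy cache directives; a sketch (the cache path and zone name are placeholders):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=appcache:10m;

server {
    listen 80;
    location / {
        proxy_cache appcache;
        # serve a cached copy when the upstream is down or erroring
        proxy_cache_use_stale error timeout http_500 http_502 http_503;
        proxy_pass http://127.0.0.1:8080;
    }
}
```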
On the other hand, if you're building small internal services, 'naked' Go web servers with your own logging library can be easier to manage (in terms of ops).
If you do want to keep everything in your Go program, look at gorilla/handlers for gzip, logging and proxy header middleware, and lumberjack for log rotation (else you can use your system's logging tools).
The main idea: I want to use nginx or Apache as a TCP processor, so that it manages all the threads, connections, and client sockets. All packets received on a port, say port 9000, would be redirected to a program written in PHP or Python, and that program would process each request, storing the data in a database. The big problem is that this program also needs to send data back to the client socket currently connected to the nginx or Apache server. I've been told I should do something like this instead of creating my own TCP server, which is difficult and very hard to maintain, since socket communication under heavy load can lead to memory faults or even crash the server. I have done it before, and in fact the server crashed.
Any ideas how to achieve this?
Thanks.
Apache and nginx are web servers; they can serve static content to your customers and forward application service requests to other application servers.
I only know Django; here is a sample nginx configuration from Configuration for Django, Apache and Nginx:
location / {
# proxy / requests to apache running django on port 8081
proxy_pass http://127.0.0.1:8081/;
proxy_redirect off;
}
location /media/ {
# serve static media directly from nginx
root /srv/anuva_project/www/;
expires 30d;
break;
}
Based on this configuration, nginx serves local static files for URLs under /media/* and forwards other requests to the Django server at localhost port 8081.
I have the feeling HAProxy is a tool better suited to your needs, which apparently have to do with TCP rather than HTTP. You should at least give it a try.
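A minimal HAProxy sketch for plain TCP on port 9000 (the backend address is a placeholder for wherever your PHP/Python program listens):

```haproxy
frontend tcp_in
    bind *:9000
    mode tcp                      # raw TCP, no HTTP parsing
    default_backend app_servers

backend app_servers
    mode tcp
    server app1 127.0.0.1:9001    # placeholder upstream
```

HAProxy then owns the client sockets and connection handling, while your program only has to speak to HAProxy over a single local connection per request.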