I am trying to upload files through my web application, but I keep getting a 404 on uploads.
I am using Nginx as a reverse proxy for a .NET Core web application. Everything has worked fine so far, but for some reason file uploads fail.
I am new to nginx, so I might just be missing a simple config setting for this to work.
When uploading, I send a POST request with content type:
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryOge8Ovx1kqih4lfp
Nginx config:
server {
    proxy_request_buffering off;
    listen 80;

    location / {
        proxy_pass http://localhost:5000;
        client_max_body_size 500m;
    }
}
I really can't figure out where to look for the error.
Check the user permissions of /var/lib/nginx/tmp/client_body.
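If that directory is not writable by the nginx worker user, uploads can fail once nginx buffers the request body to disk. A minimal sketch of a workaround, assuming the worker runs as www-data and /var/cache/nginx/client_body is a directory you create and chown to that user yourself:

server {
    listen 80;

    # Assumed path: create it and chown it to the worker user first,
    # or simply fix the permissions on the default
    # /var/lib/nginx/tmp/client_body instead.
    client_body_temp_path /var/cache/nginx/client_body 1 2;

    location / {
        client_max_body_size 500m;
        proxy_pass http://localhost:5000;
    }
}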
I'm very new to nginx and I was tasked with building a reverse proxy for a few services. So far so good, but my next assignment was for the reverse proxy to make an app that we have on 192.168.0.16/app accessible from app.domain.com.
The nginx settings I've tried so far are:
server_name app.mydomain.com;

location / {
    proxy_pass http://192.168.0.16/app;
}
This gives me a 404 Not Found. I also tried:
server_name app.mydomain.com;

location / {
    proxy_pass http://192.168.0.16/app/;
}
This also gives a 404 Not Found, but the IIS server gives a little more info:
URL requested:
http://app.mydomain.com:80/app/APP/Account/Login?ReturnUrl=%2fapp%2f
So it's redirecting twice or entering a directory it shouldn't.
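As far as I can tell, the trailing slash changes what proxy_pass sends upstream, roughly like this (my understanding, which may be wrong):

location / {
    # Without a URI part, the request path is passed through unchanged:
    #   /Account/Login -> http://192.168.0.16/Account/Login
    # With "/app" (no trailing slash), the matched prefix "/" is replaced
    # by "/app", gluing the paths together:
    #   /Account/Login -> http://192.168.0.16/appAccount/Login
    # With "/app/", it becomes the expected:
    #   /Account/Login -> http://192.168.0.16/app/Account/Login
    proxy_pass http://192.168.0.16/app/;
}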
I appreciate any insight!
I have a server running an Nginx reverse proxy.
We have our application running on another server, which is served through this Nginx proxy. Below is the configuration I have used, and it's working fine.
location / {
    # Note: this rewrite is effectively a no-op (it maps /x back to /x).
    rewrite ^/(.*) /$1 break;
    proxy_pass http://10.0.0.121:8000;
}
I need to download a PDF file that lives on the application machine (10.0.0.121), under /home/ubuntu/app/pdf/data-2021-03-25.pdf.
How can I make the file on the application machine downloadable through the proxy server? Please help.
Thanks in advance.
I would simply install another nginx instance on 10.0.0.121 and configure it like this. (Not production-ready!)
server {
    listen 8080;
    server_name ...;
    root /home/ubuntu/app/pdf;

    location = /data-2021-03-25.pdf {
        try_files $uri $uri/ =404;
    }
}

server {
    listen 8090;

    location / {
        proxy_pass http://localhost:8080;
    }
}
Not tested, but this server will handle the request and serve the file. Then you can just use proxy_pass on the other server to proxy the request.
Besides this option, you can use Python, Perl, PHP, Java, Node.js, assembly, or whatever programming language you like to open an HTTP port and serve the file on an incoming request. It's really your choice.
Just make sure, if you're going for the proxy solution, that you sanitize the requests on your proxy. For example: with a small change in the setup above you could cheat and fetch any other file from the /home/ubuntu/app directory by sending a request like curl -v localhost:8090/pdf/../other/file. So make sure you use the root (/home/ubuntu/app/pdf/) directive and set a location matching the pdf file on the proxy server as well, as in the sketch below.
That worked in my demo app.
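For example, the proxy-side server could look roughly like this (untested sketch; ports and paths follow the demo above):

server {
    listen 8090;

    # Forward only the exact pdf path; traversal attempts such as
    # /pdf/../other/file never match and are rejected.
    location = /data-2021-03-25.pdf {
        proxy_pass http://10.0.0.121:8080;
    }

    # Reject everything else outright.
    location / {
        return 404;
    }
}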
I have a Single Page Application with the regular Browser Router (without hash). Whenever someone navigates through the page and hits the refresh button, nginx tries to find a file at that path. So if someone is on mypage.com/about, nginx looks for an about file and responds with 404 Not Found. How can I fix this issue?
I'm thinking about specifying a location with a wildcard, mypage.com/*, except /api though, because every backend endpoint in this app starts with /api. How do I match all paths except one? This is what my config looks like:
upstream frontend {
    server frontend:3000;
}

upstream backend {
    server backend:8000;
}

server {
    listen 80;

    location /api {
        proxy_pass http://backend;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        proxy_pass http://frontend;
        proxy_redirect default;
    }
}
Why do you proxy requests for the frontend app? I assume that you are using some kind of development server to serve your frontend application. It is better to build your frontend application into static files and serve them as regular static files, with no server other than nginx.
As for your question: if you build your frontend application into static files, you can configure the location in nginx like this:
root /var/www/your_site;

location / {
    try_files $uri /index.html;
}
Here index.html is the entry point into your application, and the root path should point to the place where it is stored.
If you still want to serve the frontend application from the development server through nginx, you can configure nginx to handle errors from the upstream and point the error page at the root of the dev server.
In this case the following directives should help you:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors
http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page
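Putting those together, an untested sketch (assuming the dev server can serve the SPA entry point at /index.html):

location / {
    proxy_pass http://frontend;
    # Let nginx handle upstream 404s instead of passing them through.
    proxy_intercept_errors on;
    # Re-issue any 404 as an internal request for the SPA entry point,
    # which this same location then proxies to the dev server.
    error_page 404 = /index.html;
}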
An nginx server serves http://server1.com, http://server2.com and http://server3.com.
nginx upstreams request processing to some Ruby code.
server1.com, server2.com and server3.com are actually some static files stored on Amazon S3.
I want to do the following: find the bucket name for the 'server1' host, write some logs to the db, and tell nginx to stream from Amazon.
Maybe by setting a header in the Ruby code with the URL of the Amazon S3 bucket, and having nginx use this URL later.
The flow: browser -> nginx -> ruby -> nginx -> amazon_s3 -> browser
I found out how I can do this on error:
http {
    server {
        listen 12345; # port that my custom app was assigned
        server_name mydomain.com;

        location / {
            proxy_intercept_errors on;
            error_page 400 403 502 503 504 = @fallback;
            proxy_pass http://the_old_site_domain.com;
        }

        location @fallback {
            proxy_pass http://myfallback.domain.com;
        }
    }
}
But is there a way to do something similar based on the appearance of a header?
Thanks!
UPD
This is how I can test for my header:
if ($http_x_custom_header) {
    ....
}
If it is set, nginx should do some internal redirect, right?
But how can that be invoked after the Ruby code?
There are special headers called X-Accel-....
You need X-Accel-Redirect.
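The usual pattern looks roughly like this (a sketch; the Ruby upstream on localhost:3000 and the internal /s3/ prefix are my assumptions, not part of your setup). The Ruby code writes its logs and then responds with an X-Accel-Redirect header instead of a body; nginx intercepts it and re-runs the request against the internal location:

server {
    listen 80;
    server_name server1.com server2.com server3.com;

    location / {
        # The Ruby app replies with e.g.
        #   X-Accel-Redirect: /s3/my-bucket/some/key
        # and nginx performs the internal redirect below.
        proxy_pass http://localhost:3000;
    }

    # "internal" means clients can never request /s3/... directly;
    # it is reachable only via X-Accel-Redirect.
    location /s3/ {
        internal;
        proxy_set_header Host s3.amazonaws.com;
        proxy_pass https://s3.amazonaws.com/;
    }
}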
I am trying to build a docker-registry server from source (not as a container) on Ubuntu 14.04.1. I was able to get most of the way there using the instructions found on DigitalOcean.
I am able to curl http://localhost:5000 and https://user:password@localhost:8000 with no problems.
When I try to open a web browser to hopefully see more than just that, that is when the issues seem to happen.
Here is my docker-registry file in /etc/nginx/sites-available/:
# For versions of Nginx > 1.3.9 that include chunked transfer encoding support
# Replace with appropriate values where necessary

upstream docker-registry {
    server 192.168.x.x:5000;
}

server {
    listen 8000;
    server_name docker-registry;

    ssl on;
    ssl_certificate /etc/nginx/ssl/docker-registry.crt;
    ssl_certificate_key /etc/nginx/ssl/docker-registry.key;

    proxy_set_header Host $http_host;        # required for the Docker client's sake
    proxy_set_header X-Real-IP $remote_addr; # pass on the real client IP
    client_max_body_size 0;                  # disable any limits to avoid HTTP 413 for large image uploads

    # required to avoid HTTP 411: see issue #1486 (https://github.com/dotcloud/docker/issues/1486)
    chunked_transfer_encoding on;

    location / {
        # let Nginx know about our auth file
        auth_basic "Restricted";
        auth_basic_user_file docker-registry.htpasswd;
        proxy_pass http://docker-registry;
    }

    location /_ping {
        auth_basic off;
        proxy_pass http://docker-registry;
    }

    location /v1/_ping {
        auth_basic off;
        proxy_pass http://docker-registry;
    }
}
I have my docker registry stored locally in /var/docker-registry and ensured that it is readable by the www-data user. Why can I not see my images in the web browser?
If I tag an image and push it to my repository, it works, and I can see it in the web browser:
https://192.168.x.x:8000/v1/repositories/ubuntu-test/tags/latest
I see the following:
"5ba9dab47459d81c0037ca3836a368a4f8ce5050505ce89720e1fb8839ea048a"
When I try to get to:
https://192.168.x.x:8000/v1
Or:
https://192.168.x.x:8000/v1/repositories
Or:
https://192.168.x.x:8000/v1/images
I get a "not found" error
How would I be able to see everything in my /var/docker-registry folder (which is where these are stored....and yes, they are owned by the www-data user) through the web interface?
This is by design. Not only is there no reason to expose the entire URL path for browsing, but there are severe security implications in doing so.
I'm assuming you don't have much experience with web programming. There is no directory '/v1/repositories', etc. Instead, there is a program (in this case written in either Python or Ruby) that listens on the URL path and has logic built in to determine what to do. In pseudocode:
if url == '/v1/_ping': return 'ok'