Looking for some insight into a problem I have been unable to solve, as I am inexperienced with server administration.
Main question: how can I see more detailed error information?
I have a web application that lets a user upload a CSV file; the CSV is altered and returned to the user as a download. This web app worked just fine in my localhost development environment, but when I try to use it on a separate server I run into an issue when the file is uploaded: a 500 Internal Server Error (111 connection refused).
Since I am inexperienced, I don't know how to diagnose this error beyond checking the error log at /var/log/myapp_error.log:
2018/10/04 09:24:53 [error] 24401#0: *286 connect() failed (111: Connection refused) while connecting to upstream, client: 128.172.245.174, server: _, request: "GET / HTTP/1.1", upstream: "http://[::1]:8000/", host: "ip addr here"
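One detail worth pulling out of that log line: the upstream nginx tried was http://[::1]:8000/, i.e. the IPv6 loopback, and a connection there will be refused if the backend only listens on IPv4. A small sketch (assuming gunicorn is meant to be listening on port 8000) to check which loopback addresses actually accept connections:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # nginx resolved "localhost" to ::1; gunicorn may only be bound on 127.0.0.1
    for host in ("127.0.0.1", "::1"):
        print(host, can_connect(host, 8000))
```

If 127.0.0.1 connects but ::1 is refused, pointing proxy_pass at 127.0.0.1:8000 (or binding gunicorn to the IPv6 loopback) would line the two up.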
I would like to know how I can learn more about what is causing this error. I have set flask into debug mode but can't get a stack trace on that either. I will put some of the flask app and nginx configurations below in case that is helpful.
@app.route('/<path:filename>')
def get_download(filename):
    return send_from_directory(UPLOAD_FOLDER, filename, as_attachment=True)

@app.route('/', methods=['GET', 'POST'])
@app.route('/index', methods=['GET', 'POST'])
def index():
    path = os.path.join(dirname(realpath(__file__)), 'uploads')
    uploaded = ''
    if request.method == 'POST':
        if 'file' not in request.files:
            flash('no file part')
            return redirect(request.url)
        file = request.files['file']
        if file.filename == '':
            flash('no selected file')
            return redirect(request.url)
        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)
            file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
            uploaded = str(path + '/' + filename)
            df = preprocess_table(uploaded)
            fp = classify(df, filename)
            print(fp)
            head, tail = os.path.split(fp)
            print(tail)
            get_download(tail)
            return send_from_directory(UPLOAD_FOLDER, tail, as_attachment=True)
    return render_template('index.html')
And here are my nginx settings.
server {
    # listen on port 80 http
    listen 80;
    server_name _;

    location / {
        # redirect requests to the same URL but https
        return 301 https://$host$request_uri;
    }
}

server {
    # listen on port 443 https
    listen 443 ssl;
    server_name _;

    # location of the self-signed ssl cert
    ssl_certificate /path/to/certs/cert.pem;
    ssl_certificate_key /path/to/certs/key.pem;

    # write access and error logs to /var/log
    access_log /var/log/myapp_access.log;
    error_log /var/log/myapp_error.log;

    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://localhost:8000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static {
        # handle static files directly, without forwarding to the application
        alias /path/to/static;
        expires 30d;
    }
}
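A side note on this config, hedged since the root cause isn't confirmed: proxy_pass http://localhost:8000; lets nginx resolve localhost itself, and on hosts where that resolves to ::1 first it would match the [::1]:8000 upstream in the error log. Naming the IPv4 loopback explicitly avoids the ambiguity:

```nginx
location / {
    # use the explicit IPv4 loopback so nginx doesn't try [::1]:8000 first
    proxy_pass http://127.0.0.1:8000;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```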
Appreciate any insight/advice. Thank you!
I have a very weird problem with socket.io and I was hoping someone could help me out.
For some reason, a few clients cannot connect to the server no matter what when I am using https.
I am getting the following error code: ERR_CRYPTO_OPERATION_FAILED (see the detailed log below).
Again, most of the time the connection is perfectly fine; only some (random) clients seem to have this problem.
I have created a super simple server.js and client.js to make it easy to test.
I am using socket.io#2.4.1 and socket.io-client#2.4.0.
Unfortunately, version 3.x.x is not an option.
The OS is Ubuntu 18.04 on both the server and the client side.
Nginx:
server {
    listen 80;
    server_name example.domain.com;
    return 301 https://example.domain.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.domain.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/cert.key;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:8000;
        include /etc/nginx/proxy_params;
    }

    location /socket.io {
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 30s;
        proxy_read_timeout 30s;
        proxy_send_timeout 30s;
        proxy_pass http://127.0.0.1:8000/socket.io;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
        internal;
    }
}
client.js:
const client = io.connect("https://example.domain.com", {
    origins: '*:*',
    transportOptions: {
        polling: {
            extraHeaders: {
                'Authorization': token
            }
        }
    },
});
Tried adding secure: true, reconnect: true, and rejectUnauthorized: false, but no difference.
Also, I tested it with and without the transportOptions.
server.js:
const express = require("express");
const socket = require("socket.io"); // socket.io#2.4.1

const port = 5000;
const app = express();
const server = app.listen(port, () => {
    console.log(`Listening on port: ${port}`);
});
const io = socket(server);

io.on("connection", (socket) => {
    console.log("Client connected", socket.id);
});
Of course, when I remove the redirect in nginx and use plain old http to connect, then everything is fine.
When I run DEBUG=* node client.js, I get the following:
socket.io-client:url parse https://example.domain.com/ +0ms
socket.io-client new io instance for https://example.domain.com/ +0ms
socket.io-client:manager readyState closed +0ms
socket.io-client:manager opening https://example.domain.com/ +1ms
engine.io-client:socket creating transport "polling" +0ms
engine.io-client:polling polling +0ms
engine.io-client:polling-xhr xhr poll +0ms
engine.io-client:polling-xhr xhr open GET: https://example.domain.com/socket.io/?EIO=3&transport=polling&t=NVowV1t&b64=1 +2ms
engine.io-client:polling-xhr xhr data null +2ms
engine.io-client:socket setting transport polling +61ms
socket.io-client:manager connect attempt will timeout after 20000 +66ms
socket.io-client:manager readyState opening +3ms
engine.io-client:socket socket error {"type":"TransportError","description":{"code":"ERR_CRYPTO_OPERATION_FAILED"}} +12ms
socket.io-client:manager connect_error +9ms
socket.io-client:manager cleanup +1ms
socket.io-client:manager will wait 1459ms before reconnect attempt +3ms
engine.io-client:socket socket close with reason: "transport error" +6ms
engine.io-client:polling transport not open - deferring close +74ms
socket.io-client:manager attempting reconnect +1s
...
Searching for the ERR_CRYPTO_OPERATION_FAILED error only leads me to the Node.js errors page, which has only the following description:
Added in: v15.0.0
A crypto operation failed for an otherwise unspecified reason.
I am using a Let's Encrypt certificate.
I don't get it: if it is an SSL issue, why am I getting this error for only a few clients?
Maybe I am missing something in nginx?
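One nginx-side thing that could be ruled out (an assumption on my part, not a confirmed cause): if only certain clients fail, the accepted TLS protocol/cipher set may be too narrow for them. A sketch of a more permissive baseline for the 443 server block, to be tightened again once the failing clients are identified:

```nginx
ssl_protocols TLSv1.2 TLSv1.3;
# illustrative cipher list: broaden first, then narrow once clients work
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers off;
```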
Any help is much appreciated.
I've seen a similar error with node-apn. My solution was to downgrade to Node.js v14. Maybe give that a try?
Two steps:
1. The Node version must be 14.x.
2. Add rejectUnauthorized: false to the connection options.
I am having a problem with my nginx configuration.
I have an nginx server (A) that adds custom headers and then proxy_passes to another server (B), which then proxy passes to my Flask app (C) that reads the headers. If I go from A -> C the Flask app can read the headers that are set, but if I go through B (A -> B -> C) the headers seem to be removed.
Config
events {
    worker_connections 512;
}

http {
    # Server B
    server {
        listen 127.0.0.1:5001;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://127.0.0.1:5000;
        }
    }

    # Server A
    server {
        listen 4999;
        server_name domain.com;

        location / {
            proxy_pass http://127.0.0.1:5001;
            proxy_set_header X-Forwarded-User 'username';
        }
    }
}
Flask app running on 127.0.0.1:5000
If I change the server A config to proxy_pass http://127.0.0.1:5000, the Flask app can see the X-Forwarded-User header, but if I go through server B the headers are "lost".
I am not sure what I am doing wrong. Any suggestions?
Thanks
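One way to narrow down which hop drops the header is to replace the Flask app with a tiny echo server that returns the headers it receives, then curl through A and through A -> B and compare. A stdlib-only sketch (the run() call is left commented out so the snippet doesn't block; port 5000 matches the question's Flask app):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HeaderEcho(BaseHTTPRequestHandler):
    """Responds to GET with a plain-text dump of the received request headers."""

    def do_GET(self):
        body = "\n".join(f"{k}: {v}" for k, v in self.headers.items()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

def run(port=5000):
    HTTPServer(("127.0.0.1", port), HeaderEcho).serve_forever()

# run()  # uncomment to listen on 127.0.0.1:5000 in place of the Flask app
```

If the echo through B is missing X-Forwarded-User while the direct hop shows it, the problem is in B's hop rather than the app.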
I cannot reproduce the issue. Sending the custom header X-custom-header: custom, my netcat server receives:
nc -l -vvv -p 5000
Listening on [0.0.0.0] (family 0, port 5000)
Connection from localhost 41368 received!
GET / HTTP/1.0
Host: 127.0.0.1:5000
Connection: close
X-Forwarded-User: username
User-Agent: curl/7.58.0
Accept: */*
X-custom-header: custom
(see? the X-custom-header is on the last line)
when I run this curl command:
curl -H "X-custom-header: custom" http://127.0.0.1:4999/
against an nginx server running this exact config:
events {
    worker_connections 512;
}

http {
    # Server B
    server {
        listen 127.0.0.1:5001;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://127.0.0.1:5000;
        }
    }

    # Server A
    server {
        listen 4999;
        server_name domain.com;

        location / {
            proxy_pass http://127.0.0.1:5001;
            proxy_set_header X-Forwarded-User 'username';
        }
    }
}
Thus I can only assume that the problem is in the part of your config that you aren't showing us. (You said it yourself: it's not the real config you're showing us but a replica; specifically, a replica that doesn't show the problem.)
Thus I have voted to close this question as "can not reproduce"; at least I can't.
I'm getting the following error when I hit my server URL.
My request: curl 'http://52.66.253.182/osrm/nearest?loc=73.920239,18.5649413'
{"message":"URL string malformed close to position 1: \"\/\/ne\"","code":"InvalidUrl"}
I think it's something related to my nginx server conf file.
Here's my conf file:
upstream osrm {
    server 0.0.0.0:5000;
}

server {
    listen 80;
    server_name 52.66.253.182;

    location /osrm {
        proxy_pass http://osrm/;
        proxy_set_header Host $http_host;
    }
}
Browser response
I'm able to run it on localhost.
My request: curl 'http://localhost:5000/route/v1/driving/73.9635837,18.5330435;74.0200603,18.5454127'
Response : {"code":"Ok","routes":[{"geometry":"mkbpByambM|e#xDIul#tKkf#pJeNGwTmJ{NuHoBXkIrBkGoIcIsD\\gD{Bk#aEfBsAoFyJ}[oDgc#yI_oAgMdFoX|Eyj#v#}}#","legs":[{"steps":[],"distance":9577.3,"duration":883.7,"summary":"","weight":883.7}],"distance":9577.3,"duration":883.7,"weight_name":"routability","weight":883.7}],"waypoints":[{"hint":"Ji2DgCktg4AAAAAAJgAAAAAAAADQAwAAAAAAABNd1EEAAAAAvawpRAAAAAAmAAAAAAAAANADAAD4AgAAwJloBMnDGgFAmGgEtMoaAQAAjwsvU8Mk","name":"","location":[73.963968,18.531273]},{"hint":"dwQogM0IdYAAAAAANAAAAPgLAACOBAAAAAAAAMSZD0KaBQVFm1qERAAAAAA0AAAA-AsAAI4EAAD4AgAA8HRpBMUIGwHcdGkEBfsaASkAnwUvU8Mk","name":"","location":[74.02008,18.548933]}]}
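The malformed-URL message itself hints at the cause: with location /osrm and proxy_pass http://osrm/;, nginx replaces the matched /osrm prefix with the URI /, so a request for /osrm/nearest reaches the backend as //nearest, which matches the "close to position 1: //ne" complaint. A sketch with the prefixes aligned (same upstream name as above):

```nginx
location /osrm/ {
    # trailing slashes on both sides: /osrm/nearest -> /nearest upstream
    proxy_pass http://osrm/;
    proxy_set_header Host $http_host;
}
```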
I have a CentOS server running nginx listening on 80 and a DB serving an app on 8080. I want to be able to type
http://example.com/dev/abc
and have it actually access
http://example.com:8080/apex/abc or http://localhost:8080/apex/abc
I have used this location configuration
location /dev {
    proxy_pass http://example.com:8080/apex;
}
However when I try it out, the URL displayed is
http://example.com/apex/apex
the page is not found, and the log says:
2018/06/14 12:51:33 [error] 7209#0: *2067 open()
"/usr/share/nginx/html/apex/apex" failed (2: No such file or directory),
client: 124.157.113.187, server: _, request: "GET /apex/apex HTTP/1.1", host: "example.com"
It looks like two strange things are happening:
1) Port 80, not 8080, is being used despite the proxy_pass.
2) Why does apex appear twice ("/apex/apex")?
Help please :)
Adding entire Server block from config file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.com;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location /dev {
        proxy_pass http://example.com:8080/apex;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
Update - more information on the app that might help:
The app is Oracle Application Express (APEX); it listens on port 8080.
The URL works as follows:
http://example.com:8080/apex/f?p=[APP]:[Page]:[Session] etc.
where [APP], [Page] and [Session] are all corresponding numbers.
The development environment URL is actually:
http://example.com:8080/apex/f?p=4550
This is the default, so if I try http://example.com:8080/apex/ it defaults to http://example.com:8080/apex/f?p=4550 and takes you to the login page.
Everything after the app number never changes, so that is what I want replaced by /dev/: http://example.com:8080/apex/f?p=4550:1 -> http://example.com/dev/:1
Once I have learnt how this works, I plan to set up three proxy_passes:
example.com/dev -> http://example.com:8080/apex/f?p=4550
example.com/desktop -> http://example.com:8080/apex/f?p=1001
example.com/mobile -> http://example.com:8080/apex/f?p=201
where the only thing that changes is the app number.
Rewrites are working fine for all three, but I don't want the rewrite to be visible in the URL.
Here are the rewrites:
location ~ /dev {
    rewrite ^/dev(.*) http://smallblockpro.com:8080/apex$1 last;
}

location ~ /desktop/ {
    rewrite ^/desktop/(.*) http://smallblockpro.com:8080/apex/f?p=1001:$1 last;
}

location ~ /desktop {
    rewrite ^/desktop(.*) http://smallblockpro.com:8080/apex/f?p=1001:$1 last;
}

location ~ /mobile/ {
    rewrite ^/mobile/(.*) http://smallblockpro.com:8080/apex/f?p=201:$1 last;
}

location ~ /mobile {
    rewrite ^/mobile(.*) http://smallblockpro.com:8080/apex/f?p=201:$1 last;
}
The reason you're getting the :8080 port number shown to the user is that you use absolute URLs in your rewrite directives, which makes nginx produce 301 Moved responses directly to the client. Your presumed expectation that it'll still go through proxy_pass after a rewrite like that is incorrect; see http://nginx.org/r/rewrite:
If a replacement string starts with “http://”, “https://”, or “$scheme”, the processing stops and the redirect is returned to a client.
If you want to just create the mapping between /desktop/$1 on the front-end and /apex/f?p=1001:$1 on the back-end of your Oracle Application Express (APEX), then the best way would be to use the following code on your nginx front-end server:
location /desktop/ {
    rewrite ^/desktop/?(.*)$ /apex/f?p=1001:$1 break;
    return 400;
    proxy_pass http://smallblockpro.com:8080;
}
I would recommend copy-pasting it for each of /dev/, /mobile/ and /desktop/. I would also recommend against keeping the slash-less versions; as per ServerFault's nginx-reverse-proxy-url-rewrite and how-to-remove-the-path-with-an-nginx-proxy-pass, nginx already takes care of requests without the trailing slash in a situation such as yours with the code I propose above.
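For completeness, the /dev/ variant of the same pattern would look like this (a sketch following the template above, with the app number 4550 taken from the question):

```nginx
location /dev/ {
    rewrite ^/dev/?(.*)$ /apex/f?p=4550:$1 break;
    return 400;
    proxy_pass http://smallblockpro.com:8080;
}
```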
Here's a copy-paste from what I'm using on our ORDS / SDW (sqldev-web) development server, as a basic example with ORDS for the REST side of the house.
The access is to:
https://xyz.oraclecorp.com/sdw/klrice/metadata-catalog/
Then it's proxied to:
https://xyz.oraclecorp.com:8083/ords/klrice/metadata-catalog/
with the config below. Note: don't rewrite to an absolute URI, as that triggers a full browser redirect instead of just rewriting the URL for the proxy_pass.
location /sdw/ {
    rewrite /sdw/(.*) /ords/$1 break;
    proxy_pass https://xyz.oraclecorp.com:8083/ords/;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
The issue you will face is this:
rewrite ^/desktop/(.*) http://smallblockpro.com:8080/apex/f?p=1001:$1 last;
APEX will see and write links/redirects/includes (javascript/css/...) as .../apex/XYZ, which will hit the nginx server, and nginx will not know what to do with /apex/.
Here's an example of that based on my setup above. Notice how my request to /sdw/ turns into a Location redirect to /ords/:
wget -S https://xyz.oraclecorp.com/sdw/
--2018-06-21 17:10:28-- https://xyz.oraclecorp.com/sdw/
Resolving xyz.oraclecorp.com... 123.456.789.123
Connecting to xyz.oraclecorp.com|123.456.789.123|:443... connected.
HTTP request sent, awaiting response...
HTTP/1.1 302 Found
Server: nginx/1.12.1
Location: https://xyz.oraclecorp.com/ords/f?p=4550:1:375440000433:::::
Location: https://xyz.oraclecorp.com/ords/f?p=4550:1:375440000433::::: [following]
So the easiest thing to do is match up the ORDS deployment (/apex/) with the rewrites/redirects and use proxy_pass to internalize the :8080 stuff. So:
location ~ /desktop/ {
    rewrite ^/desktop/(.*) http://smallblockpro.com/apex/f?p=1001:$1 last;
}

location ~ /apex/ {
    proxy_pass http://smallblockpro.com:8080/apex/;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
This option will let your users have a nice entry point of /desktop/, which then redirects to /apex/ for the app itself.
There is another option with the ORDS url-mappings.xml to keep /desktop/ as well: add mappings to ORDS so it knows about /desktop/, and then nginx can do the same proxy_pass for each of the entry URLs.
url-mapping.xml file contents:
<pool-config xmlns="http://xmlns.oracle.com/apex/pool-config">
    <pool name="mypool" base-path="/desktop" />
</pool-config>
Then in nginx:
location ~ /desktop/ {
    proxy_pass http://smallblockpro.com:8080/desktop/;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Before you read further, go through the SO thread below, which explains the extra /apex/:
Nginx proxy_pass only works partially
There are two issues in your config:
1. You need to pass the correct URL to the backend service.
2. You need to make sure you handle any redirects and replace the URL correctly.
Below is the config I think should work for you
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.com;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location /dev/ {
        proxy_pass http://example.com:8080/apex/;
        proxy_redirect http://example.com:8080/apex/ $scheme://$host/dev/;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
I have the following configuration in my nginx conf file for my domain
server {
    listen 80;
    server_name domain.com
    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header host http://domain.com;
        proxy_cache_bypass $http_upgrade;
    }
}
When I go to domain.com -> nothing. I have to append :8080 and bam! I get the app just fine.
However, I want to be able to just go to domain.com and reach localhost:8080, where the app is running.
How can I fix this in my nginx config file under sites-available?
Thank you,
Have a conf file in /nginx/sites-available as follows; afterwards, make a symbolic link to it in /nginx/sites-enabled/:
server {
    listen 80;
    server_name domain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Inside the javascript file, create the server with the private IP address of the server, NOT localhost, like the following:
// change the IP address from localhost to the private IP address of the server
app.listen(3000, '192.241.191.56', function () {
    console.log('Example app listening on port 3000!');
});