Network issue - Flask on Raspberry Pi gets stuck sending the response when accessed from the Internet - networking

I have an issue configuring Flask on a Raspberry Pi so that the web server can be accessed from the Internet. Flask is already configured to listen on 0.0.0.0:
if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=8080)
I have managed to access the server from the LAN, as shown below:
P:\Desktop\py>curl 218.191.220.131:8080/restful/demo
{
  "result": [
    {
      "humidity": 57.13673400878906,
      "id": 1,
      "temperature": 31.51284408569336,
      "time": "12:45:30"
    }
  ]
}
However, when I try to access it from the Internet, the response gets stuck. I can see from the debug output that the request reaches Flask successfully:
192.168.1.1 - - [18/Jan/2017 11:23:06] "GET /restful/demo HTTP/1.1" 200 - # accessed from LAN
14.0.229.145 - - [18/Jan/2017 11:23:17] "GET /restful/demo HTTP/1.1" 200 - # accessed from Internet
It looks like the response cannot be sent successfully; the connection stuck in FIN_WAIT1 likely means the response failed to reach the client.
pi@pi:~/Desktop/py $ netstat -n | grep 8080
tcp 0 155 192.168.1.116:8080 14.0.229.145:18934 FIN_WAIT1
tcp 0 155 192.168.1.116:8080 14.0.229.145:18935 FIN_WAIT1
tcp 0 0 192.168.1.116:8080 192.168.1.1:52304 TIME_WAIT
tcp 0 0 192.168.1.116:8080 192.168.1.1:52311 TIME_WAIT
Any ideas? I've already set up port forwarding/triggering and even tried DMZ mode, but it's still stuck.

Problem solved: my Synology router treated the packets as intrusive and blocked them. I disabled the intrusion prevention mode, and the server can now be accessed from the Internet.
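To confirm the fix from a machine outside the LAN, a quick reachability check like the following can help (a minimal sketch using only the standard library; the IP and port are the ones from the question):

```python
import socket

def port_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port completes within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from outside the LAN, with the IP/port from the question:
# port_reachable("218.191.220.131", 8080)
```

If this returns False while LAN access works, the problem is somewhere between the router and the Internet (port forwarding, firewall, or, as here, intrusion prevention), not in Flask itself.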

Related

Why does the browser client receive responses from an NGINX server at my remote address if my NGINX server is down?

I am using chrome Version 110.0.5481.77 (Official Build) (64-bit)
My web server is nginx/1.22.1; it is down and no service is listening on my IP:443
There are no running NGINX processes on my host
But, when I request my app bundle at IP:443 I receive a bundle.js with the following details in chrome dev tools:
Remote Address is IP:443 (MY IP and Port for HTTPS)
Size is 65.5 KB (I believe it would say disk if cached locally)
Response Header
Server: nginx/1.22.0
ETag: W/"SOME ETAG HASH"
There is an error in the console: net::ERR_INCOMPLETE_CHUNKED_ENCODING 200 (OK)
1 - I have not set up my own cache
2 - I am using nginx/1.22.1 NOT nginx/1.22.0
3 - My server is not up when I receive this response
4 - netstat -nptwc on my host shows:
tcp 0 0 192.168.1.14:42384 IP:443 TIME_WAIT -
tcp 0 0 192.168.1.14:49090 IP:443 ESTABLISHED 245476/chrome --typ
5 - netstat -nptwc on my host shows no traffic from my host.
Okay, so what is going on here... is my web server's response cached somewhere outside my server?
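One way to confirm that something other than your own nginx is answering is to fetch just the response headers and inspect the Server value (a sketch using only the standard library; the host and port are from the question):

```python
import http.client

def server_header(host, port=443, use_tls=True):
    """Fetch the Server response header for a HEAD / request."""
    cls = http.client.HTTPSConnection if use_tls else http.client.HTTPConnection
    conn = cls(host, port, timeout=5)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().getheader("Server")
    finally:
        conn.close()

# If this still reports nginx/1.22.0 while your own nginx/1.22.1 is stopped,
# the reply is coming from a cache or middlebox, not from your server.
```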

GCP deployment with nginx - uwsgi - flask fails

I have a very simple Flask app deployed on GKE and exposed via a Google external load balancer, and I am getting random 502 responses from the backend service. (I added custom headers on both the backend service and nginx to identify the source; I can see the backend service's header but not nginx's.)
The setup is;
LB -> backend-service -> NEG -> pod (nginx -> uwsgi), where the pod runs the Flask application served via uwsgi behind nginx.
The scenario is to handle image uploads in a simple, secured way. The sender includes a token with the upload request.
My Flask app:
receives the request and checks the sent token via another service using "requests";
if the token is valid, proceeds to handle the image and returns 200;
if the token is not valid, stops and sends back a 401 response.
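A minimal sketch of that flow (the endpoint path is taken from the nginx error log below; the token header name and the validation URL are assumptions, not from the question):

```python
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical token-validation endpoint; the real service is not named in the question.
TOKEN_SERVICE_URL = "http://token-service.internal/validate"

@app.route("/api/v1/imageUpload/image", methods=["POST"])
def upload_image():
    token = request.headers.get("X-Upload-Token", "")
    # Check the sent token via another service using "requests".
    check = requests.post(TOKEN_SERVICE_URL, json={"token": token}, timeout=5)
    if check.status_code != 200:
        # Token is not valid: stop and send back a 401 response.
        return jsonify(error="invalid token"), 401
    # Token is valid: handle the image and return 200.
    image_bytes = request.get_data()
    return jsonify(status="ok", size=len(image_bytes)), 200
```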
First, I got suspicious about the mix of 200s and 401s, so I reverted all responses to 200. After some of the expected responses, the server starts responding 502 and keeps doing so (some of the requests at the very beginning succeeded).
The nginx error log contains lines like this:
2023/02/08 18:22:29 [error] 10#10: *145 readv() failed (104: Connection reset by peer) while reading upstream, client: 35.191.17.139, server: _, request: "POST /api/v1/imageUpload/image HTTP/1.1", upstream: "uwsgi://127.0.0.1:21270", host: "example-host.com"
My uwsgi.ini file is as follows:
[uwsgi]
socket = 127.0.0.1:21270
master
processes = 8
threads = 1
buffer-size = 32768
stats = 127.0.0.1:21290
log-maxsize = 104857600
logdate
log-reopen
log-x-forwarded-for
uid = image_processor
gid = image_processor
need-app
chdir = /server/
wsgi-file = image_processor_application.py
callable = app
py-auto-reload = 1
pidfile = /tmp/uwsgi-imgproc-py.pid
My nginx.conf contains:
location ~ ^/api/ {
    client_max_body_size 15M;
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:21270;
}
Lastly, my app has a healthcheck method with a simple JSON response. It does no extra work and simply returns; it never fails, as explained above.
Edit: the nginx access log in the pod shows the response as 401 while the client receives 502.
For those who run into the same issue: the problem was POST data reading (or rather, not reading it).
nginx expects the POST data to be read by the proxied app (in our case, uwsgi), but in some cases my logic returned a response without reading it.
Setting uwsgi's post-buffering option solved the issue:
post-buffering = %(16 * 1024 * 1024)
This led me to the solution:
https://stackoverflow.com/a/26765936/631965
Nginx uwsgi (104: Connection reset by peer) while reading response header from upstream
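An application-side alternative (not from the answer above; the route, header name, and helper are hypothetical) is to drain the request body before returning the early 401, so uwsgi never resets the connection while nginx is still forwarding the upload:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/v1/imageUpload/image", methods=["POST"])
def upload_image():
    if not _token_is_valid(request.headers.get("X-Upload-Token", "")):
        # Consume the POST body even though we reject the request, so the
        # upstream connection is not reset mid-upload.
        request.get_data()
        return jsonify(error="invalid token"), 401
    data = request.get_data()
    return jsonify(status="ok", size=len(data)), 200

def _token_is_valid(token):
    # Placeholder for the real check against the token service.
    return bool(token)
```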

HTTP/HTTPS timeouts in/out because of DHCP?

I'm trying to debug a new server I ordered at OVH.com; they insist everything is working properly even though requests to, for example, github.com time out (about 9 out of 10 tries). Running
curl -L -v https://github.com
I get:
* Rebuilt URL to: https://github.com/
* Trying 140.82.118.4...
* connect to 140.82.118.4 port 443 failed: Connection timed out
* Failed to connect to github.com port 443: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to github.com port 443: Connection timed out
Even when I set up an NGINX server, the site times out on almost every second request.
So I thought the DHCP server might be the issue, so I checked it and see this (from var/lib/dhcp..):
lease {
interface "ens4";
fixed-address 10.0.X.XX;
option subnet-mask 255.255.255.0;
option routers 10.0.X.X;
option dhcp-lease-time 86400;
option dhcp-message-type 5;
option dhcp-server-identifier 10.0.X.X;
option domain-name-servers 10.0.X.X;
renew 6 2020/03/28 02:16:19;
rebind 6 2020/03/28 13:47:57;
expire 6 2020/03/28 16:47:57;
}
lease {
interface "ens4";
fixed-address 10.0.X.XX;
option subnet-mask 255.255.255.0;
option routers 10.0.X.X;
option dhcp-lease-time 86400;
option dhcp-message-type 5;
option dhcp-server-identifier 10.0.X.X;
option domain-name-servers 10.0.X.X;
renew 5 2020/03/27 16:51:54;
rebind 5 2020/03/27 16:51:54;
expire 5 2020/03/27 16:51:54;
}
I tried getting a new lease with the command below, but nothing changes; it's still the same as above.
sudo dhclient -r
Am I reading the DHCP lease wrong, or does it look normal? For the record, my public IP on this dedicated server starts with 5, not 1, and it runs Ubuntu 16.04 LTS.
What is the offer you have at OVH? They usually don't give a private IP to a dedicated server or a virtual private server, so that's quite odd.
You may want to collect some traces to check what is going wrong, with tools like:
tcptraceroute, to check whether the path to a domain on port 80 or 443 looks strange
ping, to see if there is packet loss
tcpdump, to capture raw network packets while a timeout is occurring and see what's going on
That's a good start, and it may also help you go back to OVH support and prove to them that something is wrong.
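To quantify the intermittent failures before going back to support, a repeated-connect probe like this can reproduce the "9 out of 10" pattern (a sketch using only the standard library):

```python
import socket
import time

def connect_stats(host, port, attempts=10, timeout=5.0):
    """Try several TCP connects in a row and return how many failed."""
    failures = 0
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass
        except OSError:
            failures += 1
        time.sleep(0.2)
    return failures

# e.g. connect_stats("github.com", 443) -- roughly 9 failures out of 10
# attempts would match the curl behaviour described above.
```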

How would a Jetty Servlet detect a request from a LocalConnector?

I want to use
LocalConnector.getResponses( "POST /myservlet/SpecialRequest HTTP/1.0\r\nContent-Length: x\r\n\r\n<content>\r\n\r\n" );
to send special requests to servlets from within my application (which uses an embedded Jetty server). How can the Servlet detect that the request is from a LocalConnector instead of an external source?
The doPost method only has a HttpServletRequest and HttpServletResponse objects as parameters.
Using Jetty 9.2.4 and Servlet 3.1 APIs.
I determined there is no direct way to identify the Connector, but looking inside the request there are a few 0.0.0.0 and port-0 values which I'm confident wouldn't occur for any external request (even localhost requests show up as 127.0.0.1):
LocalAddr = 0.0.0.0
LocalName = 0.0.0.0
LocalPort = 0
ServerName = <a real ip>
ServerPort = 80
RemoteAddr = 0.0.0.0
RemoteHost = 0.0.0.0
The ServerName and ServerPort are bogus - I don't have a Connector on port 80 and the request log shows
0.0.0.0 - - [21/Nov/2014:11:20:56 -0500] "POST /myservlet/SpecialRequest HTTP/1.0" - 0 "-" "-" "-"
which doesn't match the ServerName.
Conclusion: if LocalAddr and RemoteAddr are both 0.0.0.0, the request is internal, from the LocalConnector.
Hope this answer helps the next person - and thanks to the one who posted the other LocalConnector question which pointed me to that feature!

Binding external IP address to Rabbit MQ server

I have box A, which has a consumer on it listening on a RabbitMQ server.
I have box B, which will publish a message to that listener.
As long as all of this is on box A and I start the RabbitMQ server with defaults, it works fine.
The defaults are host=127.0.0.1 on port 5672, but
when I telnet box.a.ip.addy 5672 from box B I get:
Trying box.a.ip.addy...
telnet: connect to address box.a.ip.addy: No route to host
telnet: Unable to connect to remote host: No route to host
telnet on port 22 is fine, I can ssh into Box A from Box B
So I assume I need to change the IP that the RabbitMQ server uses.
I found this: http://www.rabbitmq.com/configure.html. I now have a config file named rabbitmq.config in the location the documentation said to use, and it contains:
[
{rabbit, [{tcp_listeners, {"box.a.ip.addy", 5672}}]}
].
So I stopped the server and started the RabbitMQ server again. It failed. Here are the errors from the error logs; it's a little over my head (in fact, most of this is):
=ERROR REPORT==== 23-Aug-2011::14:49:36 ===
FAILED
Reason: {{case_clause,{{"box.a.ip.addy",5672}}},
[{rabbit_networking,'-boot_tcp/0-lc$^0/1-0-',1},
{rabbit_networking,boot_tcp,0},
{rabbit_networking,boot,0},
{rabbit,'-run_boot_step/1-lc$^1/1-1-',1},
{rabbit,run_boot_step,1},
{rabbit,'-start/2-lc$^0/1-0-',1},
{rabbit,start,2},
{application_master,start_it_old,4}]}
=INFO REPORT==== 23-Aug-2011::14:49:37 ===
application: rabbit
exited: {bad_return,{{rabbit,start,[normal,[]]},
{'EXIT',{rabbit,failure_during_boot}}}}
type: permanent
and here is some more from the start up log:
Erlang has closed
Error: {node_start_failed,normal}
^M
Crash dump was written to: erl_crash.dump^M
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot}}}}})^M
Please help.
Did you try adding
RABBITMQ_NODE_IP_ADDRESS=box.a.ip.addy
to the /etc/rabbitmq/rabbitmq.conf file?
Per http://www.rabbitmq.com/configure.html#customise-general-unix-environment
Also, that documentation states that the default is to bind to all interfaces. Perhaps a configuration setting or environment variable already set on your system restricts the server to localhost, overriding anything else you do.
UPDATE: After reading again, I realize the telnet should have returned "Connection refused", not "No route to host". I would also check whether you are having a firewall-related issue.
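Incidentally, the case_clause crash in the question is consistent with tcp_listeners being given a bare tuple where RabbitMQ expects a list of listeners. A corrected rabbitmq.config would look like this (a sketch, keeping the question's placeholder address):

```
[
  {rabbit, [{tcp_listeners, [{"box.a.ip.addy", 5672}]}]}
].
```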
You need to open up the TCP port on your firewall.
On Linux, find the iptables config file:
eric@dev ~$ find / -name "iptables" 2>/dev/null
/etc/sysconfig/iptables
Edit the file:
sudo vi /etc/sysconfig/iptables
Fix the file by adding a port:
# Generated by iptables-save v1.4.7 on Thu Jan 16 16:43:13 2014
*filter
-A INPUT -p tcp -m tcp --dport 15672 -j ACCEPT
COMMIT