wget with proxy not bypassing IP block?

This is about a server, A, which I use to browse the website pixiv.net.
One day, all my requests to pixiv from this server (pings as well as wget) stopped working; they keep timing out. I concluded it was most likely an IP block by pixiv against server A's IP address.
Luckily I have access to another server, B, which I could use for testing; it is able to issue requests to pixiv just fine (but I can't use it permanently, since it's not mine).
To bypass what I thought was an IP block, I tried to issue the HTTP requests through proxies. I've tried a few different ones, courtesy of https://gimmeproxy.com/, but the requests still time out. However, the same requests work fine from server B even when they go through a proxy, which leads me to believe there is nothing wrong with the proxies themselves.
I've concluded that one of the following is true:
I'm misusing wget's proxy options and the proxy isn't actually being used at all. This is what I'm running (see the sketch after this list):
wget pixiv.net -e use_proxy=yes -e http_proxy=ip:port
Proxies don't help with my IP block issue.
The original issue is not an IP block. In that case I have no idea what it could be.
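For reference, this is a minimal sketch of what I believe a correct invocation should look like (proxy.example.net:3128 is just a placeholder, not a real proxy):
# pixiv serves over HTTPS, so the https_proxy setting is the one that matters
# for this request; http_proxy is included for completeness.
wget -e use_proxy=yes \
     -e http_proxy=http://proxy.example.net:3128 \
     -e https_proxy=http://proxy.example.net:3128 \
     https://www.pixiv.net/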

Related

Web request issue [OK from Postman but not from Python]

When I send web requests (any kind) from Postman, they go through the network and I can see the response. If I try to do the same from Python (I use the Spyder IDE), I get an HTTP connection error.
Basically, the requests time out.
When I do a tracert to any host (e.g. google.com), the requests start timing out after a number of hops.
I'm on a company network. We use a dynamic proxy file to direct requests.
My question is twofold:
What is the root cause of the issue?
How can I fix it on my end? (Not involving company IT.)
Many thanks
I was able to solve this issue with the help of company IT. The problem was (if anyone is interested) that I had wrongly defined the proxy in the request itself, so the request never reached the proxy. Once I corrected the proxy settings, the request went through.
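A quick way to confirm that the proxy path itself works, independently of the client code, is to test it from the command line first (proxy.corp.example:8080 is a placeholder for the actual corporate proxy):
# If this succeeds but the request from your own code does not, the proxy
# definition inside the request is the likely culprit, not the network.
curl -v -x http://proxy.corp.example:8080 https://www.google.com/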

When implementing a web proxy, how should the server report lower-level protocol errors?

I'm implementing an HTTP proxy. Sometimes when a browser makes a request via my proxy, I get an error such as ECONNRESET, Address not found, and the like. These indicate errors below the HTTP level. I'm not talking about bugs in my program, but about how other servers behave when I send them an HTTP request.
Some servers might simply not exist, others might close the socket, and still others might not answer at all.
What is the best way to report these errors to the caller? Is there a standard method such that, if I use it, browsers will convert my HTTP message into an appropriate error message (i.e. they get a reply from the proxy that tells them ECONNRESET, and they act as though they had received the ECONNRESET themselves)?
If not, how should it be handled?
Motivations
I really want my proxy to be totally transparent, so that the browser or other client works exactly as if it weren't connected through it. I want to replicate the organic behavior of errors such as ECONNRESET rather than send back an HTTP message with an error code, which would be completely different behavior.
I kind of thought that was the intention when writing an HTTP proxy.
There are several things to keep in mind.
Firstly, if the client is configured to use the proxy (which actually I'd recommend) then fundamentally it will behave differently than if it were directly connecting out over the Internet. This is mostly invisible to the user, but affects things like:
FTP URLs
some caching differences
authentication to the proxy if required
reporting of connection errors etc <= your question.
In the case of reporting errors, a browser will show its own connectivity error page if it can't connect to the proxy or open a tunnel via the proxy, but for upstream errors the proxy will be providing the page (depending on the error; e.g. if part of a response has already been sent, the proxy can't do much but close the connection). This page won't look anything like your browser's built-in error page.
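As a rough illustration, you can see this by pointing curl at a proxy and requesting a host that can never resolve (127.0.0.1:3128 is just a stand-in for whatever proxy is being tested; .invalid is reserved and never resolves). The status line and HTML body that come back are generated by the proxy itself, typically something like a 502, 503 or 504 plus the proxy's own error page:
# The response shown here is the proxy's, since "nonexistent.invalid" can never
# be reached upstream.
curl -i -x http://127.0.0.1:3128 http://nonexistent.invalid/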
If the browser is NOT configured to use a proxy, then you would need to divert or intercept the connection to the proxy. This can cause problems if you decide you want to authenticate your users against the proxy (to identify them / implement user-specific rules etc).
Secondly, HTTPS can be a real pain in the neck. This problem is growing as more and more sites move to HTTPS only. There are several issues:
browsers configured to use a proxy will, for HTTPS URLs, first open a tunnel via the proxy using the CONNECT method. If your proxy wants to prevent this, then any information it provides in the block response is ignored by the browser, and instead you get the generic browser connectivity error page.
if you want to provide any other benefits one normally wishes from a proxy (e.g. caching / scanning etc) you need to implement a MitM (Man-in-the-middle) and spoof server SSL certificates etc. In fact you need to do this if you just want to send back a block-page to deny things.
There is a way for a browser to behave a bit more like it was directly connected while still going via a proxy, and that's SOCKS. SOCKS has a way to return an error code if there's an upstream connection error; it's not the actual socket error code, however.
These are all reasons why we wrote the WinGate Internet Client, which is an LSP-based client for our WinGate product. Client applications then see the actual upstream error codes, etc.
It's not a favoured approach nowadays though, as it requires installation of software on the client computer.
I wouldn't give them too much info. Log what you need internally in case you have to troubleshoot the problem later, then return a 400, 403 or 418. Why? Perhaps they're just hacking.

What causes 'The underlying connection was closed' on nginx?

We have a payment gateway integration that posts data to a third party URL. The user then completes their payment process and when the transaction is complete the gateway posts back to a URL on our server.
That post is failing, and the gateway is reporting the following error:
ERROR 13326: Couldn't speak to ServerResultURL [https://foo.com/bar].
Full Error Details: The underlying connection was closed: An unexpected error occurred on a send.
Response object is null
When I post directly to https://foo.com/bar I get a 200 response as I'd expect, so I'm not sure where this is falling down.
This is on an Ubuntu box running nginx.
What could be causing that issue and how can I find more detail about it and a way to resolve it?
EDIT:
For brevity the example above uses the URL /bar, but in reality I have a rewrite in place (see below). The URL that actually gets posted to is /themes/third_party/cartthrob/lib/extload.php/cardsave_server/result, so I'm not sure whether the rewrite below is what's causing the issue.
I would still assume not, as I do get a 200 response when posting via Postman.
# http://expressionengine.stackexchange.com/questions/19296/404-when-sagepay-attempts-to-contact-cartthrob-notification-url-in-nginx
location /themes/third_party/cartthrob/lib/extload.php {
    rewrite ^(.*) /themes/third_party/cartthrob/lib/extload.php?$1 last;
}
Typical causes of this kind of error
I bet your server is responding to the POST to /bar with something that the gateway (PaymentSense, right?) doesn't expect. This might be because:
The gateway can't reach your Ubuntu box over the network, because a firewall or network hardware between the two is blocking it.
Your https cert is bad / expired / self-signed, and the gateway is refusing the connection.
A misconfiguration of NGINX or your web application software (PHP, I imagine? or whatever nginx is serving up) is causing /bar to return something the gateway doesn't expect, like a 30x redirect, a 50x error page, or simply the wrong content, such as an HTML page.
Something else is wrong with the response to the POST.
The script/controller running at /bar could be getting unexpected input in the POST request, so you might want to look at the request coming in.
You have a network connectivity issue.
I'll leave the first two items for you to troubleshoot, because I don't think that's what you're asking in this question.
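That said, a quick look at the certificate and the raw response over TLS can rule several of the items above in or out (https://foo.com/bar stands in for the real callback URL, and the POST body is a placeholder):
# -v prints the certificate chain and TLS handshake details; -k is deliberately
# omitted so that an invalid or self-signed certificate fails loudly.
curl -v -i -d 'test=1' https://foo.com/bar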
Troubleshooting NGINX Responses
I recommend configuring nginx to dump its response into a variable using body_filter_by_lua so that you can see what response is coming out. A good example of how to set this up is available here. I think that will lead you to understand why /bar is not behaving.
Troubleshooting NGINX Requests
If that doesn't reveal the cause, try logging the request data. You can do that with something like the following (note that the log_format directive has to be declared at the http level; nginx won't accept it inside a location block):
# in the http {} context:
log_format postdata $request_body;

# in the relevant server {} block:
location = /bar {
    access_log /var/log/nginx/postdata.log postdata;
    fastcgi_pass php_cgi;
}
Review the request headers and body of this POST, and if the error isn't immediately apparent, try to replay the exact same request (using an HTTP client that gives you complete control, such as curl) and debug what is happening with /bar. Is nginx running the script/controller that you think it should be running when you make an identical POST to /bar? Add logging to the /bar script/controller process.
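As a sketch, such a replay with curl might look something like this (the header and body values are placeholders; copy the real ones from the logged request):
# Replay the gateway's callback as faithfully as possible, then compare the
# response with what the gateway reports.
curl -i https://foo.com/bar \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'status=OK&example=placeholder'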
Use interactive debugging if necessary. (This might require remote Xdebug if you're working with PHP, but whatever you're running on your server, most web application stacks offer some form of interactive debugging.)
Network Troubleshooting
If none of this works, it's possible that the gateway simply can't reach the host and port you're running this on, or that you have some other kind of network connectivity issue. I would run tcpdump on your Ubuntu box to capture the network traffic. If you can recreate this on a quiet (network) system, that will be to your advantage. Still, it's TLS (https), so don't expect to see much other than that the connection opens and packets are arriving. If you find that you need to see inside the TLS traffic in order to troubleshoot, you might consider using mitmproxy to do so.
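A minimal capture might look like the following (assuming the callback arrives on port 443; adjust the interface name to match your system):
# Capture the connection lifecycle for the gateway's callbacks; -w writes a pcap
# file you can open later in Wireshark.
sudo tcpdump -i eth0 -nn 'tcp port 443' -w /tmp/gateway-callback.pcap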

Why does a telnet to port 80 seem to hit a different server than Firefox?

I'm new to low-level HTTP stuff and am not sure what to make of what I am seeing.
If I go to a particular internet web server (let's call it www.someserver.com for now... I'll give the real one if it's really needed), Firefox happily pulls up its home page. If, however, I do a
telnet www.someserver.com 80
GET / HTTP/1.0
...what comes back appears to be the Apache default "It works" page. Trying to GET another page on the server that Firefox happily pulls up returns a 404 over telnet. It's like they're hitting different servers, but both requests are coming from the same machine, so I'm not sure how.
What could cause such behavior?
It could be serving different sites based on the Host header sent by the browser. Your telnet connection wouldn't send that header unless you explicitly typed it.
http://support.microsoft.com/kb/308163
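To check this from telnet, you can type the Host header yourself (www.someserver.com is the placeholder name from the question); finish the request with a blank line:
telnet www.someserver.com 80
GET / HTTP/1.1
Host: www.someserver.com
Connection: close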

curl issue with URL not connecting

I'm not a very good network person, so I was hoping someone could point me in the right direction to figure out what I am doing wrong.
I am trying to use curl to post a SOAP message. I am running the following:
curl -d "string of xml message" -H "Content-Type:text/xml; charset=utf-8" "[ip]:[port]/[service]"
This results in a 'Connection refused' message.
So I try pinging the IP by itself... no problems.
Then I think maybe I need http://[ip]:[port]/[service], so I tried pinging http://[ip] and I get:
unknown host http://[ip]
yet if I ping the IP by itself I have no issues.
Any thoughts on where to start debugging this issue?
First of all, ping doesn't speak HTTP, and it takes a hostname or an IP address, not a URL, which is why pinging http://[ip] fails while pinging the bare IP works. Have a look at ping on Wikipedia to learn more.
Curl normally doesn't need anything fancy; just begin by typing curl [protocol]://[host]:[port]/[service] and see if you get a response at all. I think that's what you were looking for when you tried to ping the remote address.
Judging by the response of the cURL attempt, you'll know whether it was successful. It probably won't be, since it is indeed the connection that is being refused; you didn't pass bad parameters.
Now, assuming it's a connection problem, try curling something else (a regular domain, like google.com) to make sure you don't have a general connectivity problem. Then, to find out whether the remote server is the problem, perform the same curl attempt from another machine somewhere (or ask someone else to do it) and see if they, too, are refused. This is a good way to circle in on the problem and gain more clarity.
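As a sketch, with an explicit scheme and verbose output (192.0.2.10:8080/service stands in for the actual [ip]:[port]/[service]):
# -v shows whether the TCP connection is established before the request is sent,
# which distinguishes "connection refused" from an application-level error.
curl -v -d "string of xml message" \
  -H "Content-Type: text/xml; charset=utf-8" \
  http://192.0.2.10:8080/service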
Ping uses ICMP, which doesn't use ports at all, whereas HTTP runs over TCP, typically on port 80. So a successful ping doesn't tell you whether the HTTP port is reachable.
Try to telnet to the service (the IP and the port) using the following command:
telnet (ip) (port)
(without the parens; use the actual port your service listens on, usually 80 for plain HTTP).
If you are able to connect, then you have some other issue; however, if the connection is refused, then you know something is blocking access to the port on which the service is running.
