Force HTTP1.1 instead of HTTP2 through Proxy (Charles) [closed]

Since we updated our clients to HTTP/2, I've had problems with mapping files to local resources. We normally use Charles (the app) to do this, but since we updated to HTTP/2, we've had some errors.
It seems to cut the files short and only load a tiny part of them. Charles then gives back a failure message saying:
Client closed connection before receiving entire response
I've been looking through the big interwebs for answers, but haven't been able to find any yet.
Hopefully there are some brilliant minds in here.

We have addressed this issue in Charles 4.1.2b2. Please try it out from https://www.charlesproxy.com/download/beta/
Please let me know if this does or doesn't correct the issue for you! We plan to roll out this build to release pretty soon, especially once we've had more users confirm the solution.

One workaround I've found is using the --disable-http2 flag when launching Chrome. On macOS the terminal command would be:
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --disable-http2
On Windows you can alter your shortcut to launch with that same --disable-http2 option.
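For example, assuming Chrome's default install location (the path is an assumption; adjust it for your machine), the shortcut's Target would look like:
"C:\Program Files\Google\Chrome\Application\chrome.exe" --disable-http2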

As you say the problem appeared after the client was updated, have you double-checked everything related to client-side caching? (See the No Caching tool in Charles.)
You may be able to use the Upgrade header to force a change of HTTP protocol version:
The Upgrade header field is an HTTP header field introduced in HTTP/1.1. In the exchange, the client begins by making a cleartext request, which is later upgraded to a newer HTTP protocol version or switched to a different protocol. A connection upgrade must be requested by the client; if the server wants to enforce an upgrade, it may send a 426 Upgrade Required response. The client can then send a new request with the appropriate upgrade headers while keeping the connection open.
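For illustration, a cleartext HTTP/1.1-to-HTTP/2 upgrade (the "h2c" mechanism from RFC 7540) looks roughly like this on the wire; the SETTINGS payload is elided:

GET / HTTP/1.1
Host: example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url-encoded SETTINGS payload>

If the server agrees, it replies:

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c

and continues the response over HTTP/2; if it only speaks HTTP/1.1, it simply answers the request normally and the connection stays on HTTP/1.1.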

Related

X-Lite call failed forbidden error while setting up Asterisk telephony [closed]

I'm trying to set up Asterisk telephony on my system and I'm encountering an issue with X-Lite. Whenever I try to make a call using X-Lite, I get an error message saying "Call failed: Forbidden." I'm not sure what's causing this issue or how to resolve it.
Here's what I've tried so far:
I've double-checked my SIP settings in X-Lite and made sure they match the settings in my Asterisk configuration files.
I've also checked my firewall settings to ensure that SIP traffic is allowed through.
I've tried making calls to different SIP endpoints, but I still get the same error message.
I'm not sure what else to try at this point. Could anyone suggest some troubleshooting steps or possible solutions? Thank you in advance for your help.
99% of the time it is an incorrect password, or you have not pressed the "Apply" button in FreePBX.
The other 1% can be:
Blocked by a firewall at the Asterisk box or at your provider
Your router doing something weird with SIP NAT support (most likely a SIP ALG option in its menu)
You are in a country where VoIP is blocked.
To troubleshoot, you have to know something about Asterisk and Linux, plus basic knowledge of the SIP protocol. You can start from this page:
https://wiki.asterisk.org/wiki/display/AST/Collecting+Debug+Information
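A typical first step (assuming a reasonably recent Asterisk; use whichever command matches your SIP channel driver) is to watch the SIP traffic from the Asterisk console while you place the failing call:
asterisk -rvvv
sip set debug on
(or, for the PJSIP stack: pjsip set logger on)
The SIP trace will show the 403 Forbidden response and usually the reason the REGISTER or INVITE was rejected.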

How exactly does http.sys work [closed]

I'm trying to get a deeper understanding of how IIS works.
http.sys, I understand, is one of its major components. However, I have been having trouble finding easily digestible information about it. I couldn't get a good mental model going until I heard about WSK; then I think it all fell into place.
From a lot of random googling and a little experimentation, this is my current high-level understanding of why it exists and how it does its stuff.
Why:
Port sharing, and higher-performance caching.
How:
User-mode processes use the Winsock API to open a socket listening on a port, which gains them access to the networking subsystem, e.g. TCP/IP. Kernel-mode software like the http.sys driver uses the Winsock Kernel (WSK) API to achieve the same end, drawing from the same pool of TCP port numbers as the user-mode Winsock API.
IIS, a web service, or anything else that wants to use HTTP registers itself with http.sys using a unique URL/port combination. http.sys opens a socket on that port using WSK (if it hasn't already for another URL/port combination with the same port) and listens.
When the transport layer (tcpip.sys) has reconstructed a load of IP packets back into the HTTP request a client sent, it hands the request to http.sys, which uses the URL/port combination to route it to the appropriate process, which then parses it however it pleases.
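You can observe this registration mechanism from the command line with the standard netsh http subcommands (output details vary by Windows version):
netsh http show servicestate
netsh http show urlacl
The first lists the URL groups and registrations http.sys currently holds; the second lists URL reservations, i.e. who is allowed to register which URL prefix.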
I know it seems like I'm answering my own question, but I'm really not that sure of myself on this and would like some closure so I can get on with more interesting things.
Am I close?

Writing a cache-everything/quick-response HTTP proxy [closed]

Are there any open source HTTP caching proxies I can use to give myself a good starting point?
I want to write a personal HTTP caching proxy to achieve the following purposes
Serve content instantly even if the remote site is slow
Serve content even if the network is down
Allow me to read old content if I'd like to
Why do I want to do this?
The speed of Internet connection in my area is far from spectacular.
I want to cache contents even if the HTTP headers tell me not to
I really don't like it when I can't quickly access content that I've read in the past.
I feel powerless when a website removes useful content and I find no way to get it back
The project comprises
A proxy running on the local network (or perhaps on localhost), and
A browser plugin or a desktop program to show content-updated notifications
What's special about the proxy?
The browser initiates an HTTP request
The proxy serves the content first, if it's already in the cache
Then the proxy contacts the remote website and checks whether the content has been updated
If the content has been updated, send a notification to the desktop/browser (e.g. to show a little popup or change the color of a plug-in icon), and download the content in the background.
Every time the proxy downloads new content, it saves it into the cache
Let me choose to load the updated content or not (if not, stop downloading the new content; if yes, stream the new content to me)
Let me assign rules to always/never load fresh content from certain websites
Automatically set the rules if the proxy finds that (1) I always want to load fresh content from a certain website, or (2) the website's content frequently updates
Note:
Caching everything does not pose a security problem, as I'm the only one with physical access to the proxy, and the proxy is only serving me (from the local network)
I think this is technologically feasible (let me know if you see any architectural problems)
I haven't decided whether I should keep old versions of the webpages. But given that my everyday bandwidth usage is just 1-2 GB, a cheap 1TB hard drive can easily hold two years of data!
Does my plan make sense? Any suggestions/objections/recommendations?
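To make the cache-first, revalidate-in-the-background idea concrete, here is a minimal sketch in Python (standard library only; the cache layout is invented for the example, it handles plain-HTTP GETs only, and a real version would need eviction, HTTPS/CONNECT handling, and proper header passthrough):

import hashlib
import os
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

CACHE_DIR = "proxy-cache"
os.makedirs(CACHE_DIR, exist_ok=True)

def cache_path(url):
    # One file per URL, keyed by a hash of the URL.
    return os.path.join(CACHE_DIR, hashlib.sha256(url.encode()).hexdigest())

def fetch_and_store(url):
    # Download the content and cache it, deliberately ignoring Cache-Control.
    with urllib.request.urlopen(url, timeout=30) as resp:
        body = resp.read()
    with open(cache_path(url), "wb") as f:
        f.write(body)
    return body

class CachingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        url = self.path  # as a forward proxy, we receive the absolute URL here
        path = cache_path(url)
        if os.path.exists(path):
            # Serve the cached copy instantly...
            with open(path, "rb") as f:
                body = f.read()
            # ...and refresh it in the background for next time.
            threading.Thread(target=fetch_and_store, args=(url,), daemon=True).start()
        else:
            body = fetch_and_store(url)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("127.0.0.1", 8080), CachingProxy).serve_forever()

Point the browser's HTTP proxy setting at 127.0.0.1:8080 to try it; the content-updated notification piece would sit on top of fetch_and_store.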
Take a look at polipo:
http://www.pps.univ-paris-diderot.fr/~jch/software/polipo/
Source is here:
https://github.com/jech/polipo
It is a caching web proxy implemented in C. It should definitely help you.

What does a Server do once it receives a request from a client? [closed]

I'm trying to get down to the details of what happens once a server gets a request from a client...
Open a socket on the port specified by the request...
Then access the asset or resource?
What if the resource refers to a cgi/script?
What "layers" does the request info have to pass through?
How is the response generated?
I've looked up info on "how the internet works", and "request response cycle", but I'm looking for details as to what happens inside the server.
It seems like you're having a little trouble separating out the different parts of your question so I'll do my best to help you out with that.
First and foremost, a common method for understanding communication between two computers is described using what is called the OSI model. This model attempts to distinguish the responsibilities between each protocol in a protocol stack. For example, when you surf a website on your home network the protocol stack is most likely something like
Ethernet-IPv4-TCP-HTTP
This modularization of protocols is used to create a separation of concerns so that developers don't have to "reinvent the wheel" each time they try to get two computers to communicate in some way. If you're trying to write a chat program you don't want to worry about packet loss or internet routing methodologies so you go ahead and take advantage of the lower level protocols that already exist and handle more of the nitty gritty stuff for you.
When people refer to socket communication these days they're typically using TCP or UDP. These are both known as transport protocols. If you'd like to learn more of the fine details on socket communication I would start with UDP because it's a simpler protocol and then move on to TCP.
While your web server is aware of some information in the lower level protocols it doesn't really do much with it. Primarily that's all handled by the operating system libraries which eventually hand the web server some raw HTTP data which the web server then begins to process.
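To make the "hand the web server some raw HTTP data" step concrete, here is a toy sketch in Python; it is nothing like a production server, just the bare accept/read/respond loop at the socket level:

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 8080))
srv.listen(1)
while True:
    conn, addr = srv.accept()                 # TCP handled below us by the OS
    raw = conn.recv(65536).decode("latin-1")  # the "raw HTTP data" (one read; real servers loop)
    request_line = raw.split("\r\n", 1)[0]    # e.g. "GET /index.html HTTP/1.1"
    body = "You asked for: " + request_line + "\n"
    conn.sendall((
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: text/plain\r\n"
        "Content-Length: " + str(len(body)) + "\r\n"
        "\r\n" + body
    ).encode("latin-1"))
    conn.close()

Everything below the recv/sendall pair (IP routing, TCP retransmission, Ethernet framing) is the lower layers of the stack doing their job for you.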
To add another layer, HTTP has nothing to do with the gateway language running behind the scenes. This is fairly obvious given that the protocol is the same whether the web server is serving CGI Perl scripts, PHP, ASP.NET, or static HTML files. HTTP simply carries the request, and the web server processes it accordingly.
Hopefully this clarifies a few concepts for you and gives you a better idea what you're trying to understand.
It depends on the server. An Apache 2 server could do any amount of request rewriting, automatic responses (301, 303, 307, 403, 404, 500) based on rules, starting a CGI script, exchanging data with a FastCGI script, passing some data to a script module like mod_php, and so on. The CouchDB web server would do something else entirely.
Basically, aside from parsing the request and sending back the appropriate response, there's no real common aspect to web servers.
You could try looking into the documentation of the various web servers: Apache, IIS, lighttpd, nginx...

Are there any tools for monitoring HTTP responses? [closed]

Are there any tools for monitoring HTTP responses? That is, a tool you can open up and give a URL, and it goes to the URL and brings back not only the body of the HTTP response but the whole response, headers included.
Fiddler2
http://www.fiddlertool.com/fiddler2/version.asp
Fiddler acts as an HTTP proxy; it lets you examine outgoing requests and incoming responses (raw headers, data, everything). It also lets you change requests, resend them, and manipulate them directly. It is invaluable.
Use Firebug, if you're using Firefox. Among other things, it lets you examine HTTP headers and monitor XMLHttpRequest traffic.
Also check out the light version, Firebug Lite, which works in all browsers out of the box, no setup required. Firebug Lite doesn't show HTTP response headers or network monitoring, but it's good enough to play around with the DOM.
Try REDbot:
http://redbot.org/
If you're looking for a tool that lets you monitor HTTP requests/responses on the client side, you should take a look at Fiddler2.
Live HTTP Headers
Some options:
Download the "Live HTTP Headers" or Firebug add-ons for Firefox
Use wget with the -S (--server-response) option to print the response headers (if you're on Unix/Linux)
Download something like Wireshark to see the whole TCP/IP traffic stream
Paros Proxy is great for this:
http://www.parosproxy.org/index.shtml
http://www.httpdebugger.com/download.html
Or Wireshark.
Old question, but there's definitely a few tools available for this. The existing answers here give awesome recommendations if you just need to look at a request/response actively.
If you need automated HTTP response monitoring, you can use my tool: https://assertible.com. With it, you can set up requests and make 'assertions' on the response. If the response doesn't match what you expected, you can set up alerts to get notified.
There are, of course, many other ways to approach this. For something more manual, I would recommend Chrome Dev Tools or, for Firefox, Firebug as mentioned in another answer.
Hope it helps!
Well... I suppose that describes any browser, but if you want to analyze the response in code, cURL is one of the most-used tools for networking, not just HTTP but other protocols as well. It will give you the headers as well as the body and allows authentication, etc., all from the command line or embedded elsewhere, as in a PHP script.
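For example, with standard curl flags (example.com is a placeholder):
curl -i https://example.com
curl -v https://example.com
curl -sD - -o /dev/null https://example.com
The first prints the response headers followed by the body, the second additionally shows the request headers on stderr, and the third dumps only the headers and discards the body.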
I love Burp Suite but this is as much focused on intercepting and modifying HTTP requests as it is monitoring them.
Not the specific tools you are looking for, but I highly recommend getting familiar with tcpdump and/or wireshark packet analyzers if you are into any sort of network programming. The latter has a "Follow TCP Stream" feature to look at the bytes flowing through TCP pipe.
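For example, a standard tcpdump invocation to watch plain HTTP traffic as ASCII (the interface name is machine-specific):
sudo tcpdump -i eth0 -A -s0 'tcp port 80'
-A prints packet payloads as ASCII and -s0 captures full packets; Wireshark's "Follow TCP Stream" gives the same view interactively.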
GNU Wget
wget --save-headers URL
I know this is an old thread, but I came across it and wanted to leave a link to Postman. It's a great way to simulate requests and examine responses.
The open-source tool Insomnia works great and is available on macOS, Linux, and Windows. It's aimed at API testing, but you can also use it as a simple HTTP client.
