So I'm trying to implement the following scenario:
An application is protected by Basic Authentication. Let's say it is hosted on app.com
An HTTP proxy, in front of the application, requires authentication as well. It is hosted on proxy.com
The user must therefore provide credentials for both the proxy and the application in the same request, so he has two different username/password pairs: one to authenticate himself against the application, and another to authenticate himself against the proxy.
After reading the specs, I'm not really sure how I should implement this. What I was thinking of doing is:
The user makes an HTTP request to the proxy without any sort of authentication.
The proxy answers 407 Proxy Authentication Required and returns a Proxy-Authenticate header of the form: Proxy-Authenticate: Basic realm="proxy.com". Question: is this Proxy-Authenticate header correctly set?
The client then retries the request with a Proxy-Authorization header containing the Base64 representation of the proxy username:password.
This time the proxy authenticates the request, but the application answers with a 401 Unauthorized status: the user was authenticated by the proxy, but not by the application. The application adds a WWW-Authenticate header to the response, like WWW-Authenticate: Basic realm="app.com". Question: is this header value correct?
The client retries the request once more, with both a Proxy-Authorization header and an Authorization header containing the Base64 representation of the app's username:password.
At this point, the proxy successfully authenticates the request and forwards it to the application, which authenticates the user as well. The client finally gets a response back.
Is the whole workflow correct?
Yes, that looks like a valid workflow for the situation you described, and those Authenticate headers seem to be in the correct format.
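For concreteness, a minimal sketch of what that final request (step 4, with both headers) could look like, built with Python's standard http.client; the port and the credentials are made up for illustration:

import base64
from http.client import HTTPConnection

def basic(user, password):
    # "Basic " + base64(username:password), per RFC 7617
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return "Basic " + token

# Hypothetical credentials: one pair for the proxy, one for the application.
headers = {
    "Host": "app.com",
    "Proxy-Authorization": basic("proxyuser", "proxypass"),
    "Authorization": basic("appuser", "apppass"),
}

# The TCP connection goes to the proxy; the request-target is the absolute URI of the app.
conn = HTTPConnection("proxy.com", 8080)
conn.request("GET", "http://app.com/", headers=headers)
print(conn.getresponse().status)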
It's interesting to note that it's possible, albeit unlikely, for a given connection to involve multiple proxies chained together, each of which can itself require authentication. In that case, the client side of each intermediate proxy would itself get back a 407 Proxy Authentication Required response and itself repeat the request with a Proxy-Authorization header. Proxy-Authenticate and Proxy-Authorization are single-hop headers that do not get passed from one server to the next, whereas WWW-Authenticate and Authorization are end-to-end headers, considered to be from the client to the final server and passed through verbatim by the intermediaries.
Since the Basic scheme sends the password in the clear (base64 is a reversible encoding) it is most commonly used over SSL. This scenario is implemented in a different fashion, because it is desirable to prevent the proxy from seeing the password sent to the final server:
the client opens an SSL channel to the proxy to initiate the request, but instead of submitting a regular HTTP request it submits a special CONNECT request (still with a Proxy-Authorization header) to open a TCP tunnel to the remote server.
The client then proceeds to create another SSL channel nested inside the first, over which it transfers the final HTTP message including the Authorization header.
In this scenario the proxy only knows the host and port the client connected to, not what was transmitted or received over the inner SSL channel. Further, the use of nested channels allows the client to "see" the SSL certificates of both the proxy and the server, allowing the identity of both to be authenticated.
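As a rough sketch of that tunnelled variant, Python's http.client can send the CONNECT with a Proxy-Authorization header via set_tunnel. Note that in this stdlib sketch the hop to the proxy itself is plain TCP (an SSL channel to the proxy, as described above, would need something like urllib3), and the host names, port and credentials are illustrative:

import base64
from http.client import HTTPSConnection

def basic(user, password):
    return "Basic " + base64.b64encode(f"{user}:{password}".encode()).decode()

# Connect to the proxy, then ask it to open a TCP tunnel to app.com:443.
# The Proxy-Authorization header is sent only on the CONNECT request.
conn = HTTPSConnection("proxy.com", 8080)
conn.set_tunnel("app.com", 443,
                headers={"Proxy-Authorization": basic("proxyuser", "proxypass")})

# This request travels over the inner TLS channel; the proxy cannot read it.
conn.request("GET", "/",
             headers={"Authorization": basic("appuser", "apppass")})
print(conn.getresponse().status)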
Related
Imagine that there is a server that requires authentication for access to some endpoint.
A developer added some logic to bypass the authentication: if the server receives the HTTP header give_me_access=true, then it answers without requiring authentication.
Is there a way for an attacker to find out that the server accepts this header?
I'm working on an application that sends and receives HTTP messages to and from a router's web server.
The problem I'm facing is with HTTP Basic authentication.
RFC 7617 states:
"the server can reply with a challenge using the 401 (Unauthorized) status code"
From the browser HTTP captures I've seen, that isn't the case for every router. For example, the TP-LINK TL-WR840N doesn't send me a 401, and I can get the resource by simply sending an HTTP request with the correct credentials in the form base64(username:pass), as shown below.
GET //main/ddos.htm?_=1572950350469 HTTP/1.1
Host: 192.168.0.1
Accept: */*
Connection: keep-alive
Referer: http://192.168.0.1
Cookie: Authorization=Basic YeRtaW46YWRtaW5AMTIz
It gives me the requested content if the password is correct; otherwise it redirects me to the login page (why doesn't this router follow the 401 protocol?).
I have another TP-LINK TL-WR841N router which doesn't take credentials (in the HTTP message) in the form base64(username:pass) like the previous router; instead it takes credentials in the form base64(user):md5(password). I have two questions about this router (and all routers in general):
I want to know how the router communicates the credential scheme to the browser, so that I can embed the same thing in my application. I have inspected the HTTP messages (in Chrome/Firefox) but couldn't find the message where the scheme is being communicated.
When I log in to the TL-WR841N router, unlike the previous model, the browser shows a SessionID in the URL, e.g. www.192.168.0.1/SessionID/path/to/resource. I would like to know how this SessionID is communicated to the browser.
People who write router maintenance applications, as well as people who design graphics card driver installer screens (looking at you, AMD), do not adhere to any guidelines, best practices or protocols whatsoever.
But they don't need to, either. They've written an application that happens to use HTTP, but you're not obliged to use all of HTTP. They write the front-end as well as the back-end, so they can closely control their server as well as their client.
The client is most likely a couple of dumb HTML pages that make some requests using JavaScript.
If they were to decide that the web interface authenticates to the server with a request header that literally states LetMeIn: true, then that would work as well.
HTTP does not mandate that the server return a 401 when that header is missing or set to false, so they don't have to.
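As an illustration (not anyone's real firmware, just a sketch), a server that skips 401/WWW-Authenticate entirely and gates access on such an arbitrary header could be as simple as:

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No 401, no WWW-Authenticate challenge: just check a home-grown header.
        if self.headers.get("LetMeIn") == "true":
            body = b"secret admin page"
            self.send_response(200)
        else:
            # Redirect to a login page instead of issuing a Basic challenge.
            body = b""
            self.send_response(302)
            self.send_header("Location", "/login.html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()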
When a web server receives an HTTP(S) GET request from a client, it has access to some information such as:
The client IP
The request itself:
the headers (including the cookies)
the content
and... that's all?
I am wondering if there is something else.
Indeed, I am trying to make a server that accesses a page to collect some information and update its database. The site denies access to my server but not to web browsers, even when I replicate the IP, the headers and the content.
Thanks for your help.
Yes, it's only what is contained in the request itself. The server cannot reach back to the client to "pull" information, it only has the information contained in the HTTP request and the underlying TCP/IP packet. That's:
the requesting IP address
the HTTP headers, including requested URL and HTTP method
the HTTP request body, if any
if it's HTTPS, any data exchanged during the TLS handshake, which is usually not very relevant for identifying anything significant
All of that information is voluntarily provided by the requesting client.
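A quick way to see exactly what a given client hands over is to point it at a tiny handler that dumps everything available to the server; a sketch using Python's standard library, with the listen address chosen arbitrarily:

from http.server import BaseHTTPRequestHandler, HTTPServer

class DumpHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        print("client IP/port:", self.client_address)           # from the TCP connection
        print("request line:  ", self.command, self.path, self.request_version)
        print("headers:\n", self.headers)                        # includes Cookie, User-Agent, ...
        length = int(self.headers.get("Content-Length", 0))
        print("body:", self.rfile.read(length))                  # empty for a typical GET
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

HTTPServer(("0.0.0.0", 8000), DumpHandler).serve_forever()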
Imagine a webbrowser that makes an HTTP request to a remote server, such as site.example.com
If the browser is then configured to use a proxy server, let's call it proxy.example.com on port 8080, in what ways is the request now different?
Obviously the request is now sent to proxy.example.com:8080, but surely there must be other changes to enable the proxy to make a request to the original URL?
RFC 7230 - Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing, Section 5.3.2. absolute-form:
When making a request to a proxy, other than a CONNECT or server-wide
OPTIONS request (as detailed below), a client MUST send the target
URI in absolute-form as the request-target.
absolute-form = absolute-URI
The proxy is requested to either service that request from a valid
cache, if possible, or make the same request on the client's behalf
to either the next inbound proxy server or directly to the origin
server indicated by the request-target. Requirements on such
"forwarding" of messages are defined in Section 5.7.
An example absolute-form of request-line would be:
GET http://www.example.org/pub/WWW/TheProject.html HTTP/1.1
So, without proxy, the connection is made to www.example.org:80:
GET /pub/WWW/TheProject.html HTTP/1.1
Host: www.example.org
With proxy it is made to proxy.example.com:8080:
GET http://www.example.org/pub/WWW/TheProject.html HTTP/1.1
Host: www.example.org
In the latter case the Host header is optional (for HTTP/1.0 clients) and must be recalculated by the proxy anyway.
The proxy simply makes the request on behalf of the original client. Hence the name "proxy", the same meaning as in legalese. The browser sends their request to the proxy, the proxy makes a request to the requested server (or not, depending on whether the proxy wants to forward this request or deny it), the server returns a response to the proxy, the proxy returns the response to the original client. There's no fundamental difference in what the server will see, except for the fact that the originating client will appear to be the proxy server. The proxy may or may not alter the request, and it may or may not cache it; meaning the server may not receive a request at all if the proxy decides to deliver a cached version instead.
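A small sketch of that difference from the client side, using Python's http.client (proxy name and port as in the question above):

from http.client import HTTPConnection

# Direct: connect to the origin, request-target in origin-form.
direct = HTTPConnection("www.example.org", 80)
direct.request("GET", "/pub/WWW/TheProject.html",
               headers={"Host": "www.example.org"})

# Via proxy: connect to the proxy, request-target in absolute-form.
proxied = HTTPConnection("proxy.example.com", 8080)
proxied.request("GET", "http://www.example.org/pub/WWW/TheProject.html",
                headers={"Host": "www.example.org"})

On the wire, the request lines come out exactly as in the two examples above; only the target of the TCP connection and the form of the request-target change.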
I am trying to write (and understand) a transparent proxy.
My setup would look like this
Client Browser ---> TProxy ----> Upstream Proxy ------> cloud
When the client browser makes a GET request, the idea is TProxy would then CONNECT to the Upstream proxy. The upstream proxy requires digest authentication. So, essentially the flow would look like
Client Browser ---> TProxy: GET BBC.co.uk
TProxy ---> Upstream Proxy: CONNECT
Upstream Proxy ---> TProxy: 407 PROXY AUTH REQUIRED
TProxy ---> Upstream Proxy: CONNECT (with Proxy-Authorization)
Upstream Proxy ---> TProxy: 200 OK
TProxy ---> Upstream Proxy ---> cloud: GET BBC.co.uk
I am confused what happens once CONNECT with authorization succeeds.
Am I supposed to modify the original GET request now to include a
Proxy-Authorization header?
Or would the original GET request then be tunnelled inside another HTTP request, something like:
HTTP Header
Proxy Authorization
HTTP Header (GET BBC.CO.UK)
Data
Or can I just pass the original GET request as is?
I am just starting with HTTP and would appreciate any help.
Thanks
When you authenticate upstream from your transparent proxy, the Proxy-Authorization header applies only to the CONNECT.
The GET requests happen within the tunnel, so the upstream explicit proxy is not supposed to see them, and for sure does not expect any proxy authentication headers on them.
In short, you do not need to worry about the GET, not for the reason given in the other answer, but because there is a tunnel between the transparent proxy and the site, and the explicit proxy only sees and authenticates the CONNECT.
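A rough socket-level sketch of what the TProxy side does once it decides to go through the upstream explicit proxy. Basic is shown for the Proxy-Authorization value to keep it short; with Digest you would first read the 407 challenge and compute the response from its nonce. The upstream host, port and credentials are made up:

import base64, socket

UPSTREAM = ("upstream-proxy.example", 3128)   # hypothetical upstream explicit proxy
cred = base64.b64encode(b"proxyuser:proxypass").decode()

s = socket.create_connection(UPSTREAM)
# 1. Ask the upstream proxy for a tunnel; the Proxy-Authorization lives only here.
s.sendall((
    "CONNECT bbc.co.uk:80 HTTP/1.1\r\n"
    "Host: bbc.co.uk:80\r\n"
    f"Proxy-Authorization: Basic {cred}\r\n"
    "\r\n"
).encode())
reply = s.recv(4096)
assert b" 200 " in reply.splitlines()[0], reply   # proceed only once the tunnel is open

# 2. From here on the socket is a plain byte pipe to bbc.co.uk:80.
#    The original GET from the browser is forwarded unchanged, with no proxy headers.
s.sendall(b"GET / HTTP/1.1\r\nHost: bbc.co.uk\r\n\r\n")
print(s.recv(4096).decode(errors="replace"))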
There is no such thing as nested headers in HTTP.
A proxy - whether transparent or not - always terminates the HTTP connection from the client, and initiates a new one to the server.
That means that the HTTP GET from the client goes to your TProxy. TProxy creates a new GET request to Upstream Proxy. Ideally, TProxy will simply pass on all the headers. That would make it (nearly) undetectable.
The same goes in reverse for the response headers.
In reality, proxy servers will, and in many cases have to, manipulate some headers. They will often add their own header (for instance, to alert the communication partners to the presence of a proxy), and they can also manipulate existing headers.
So, the short answer to your question: whatever header field your TProxy receives, pass it on unchanged unless you fully understand the implications.