I am working on a WebSocket implementation and do not understand the purpose of the mask in a frame.
Could somebody explain to me what it does and why it is recommended?
0                   1                   2                   3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-------+-+-------------+-------------------------------+
|F|R|R|R| opcode|M| Payload len |    Extended payload length    |
|I|S|S|S|  (4)  |A|     (7)     |             (16/64)           |
|N|V|V|V|       |S|             |   (if payload len==126/127)   |
| |1|2|3|       |K|             |                               |
+-+-+-+-+-------+-+-------------+ - - - - - - - - - - - - - - - +
|     Extended payload length continued, if payload len == 127  |
+ - - - - - - - - - - - - - - - +-------------------------------+
|                               |Masking-key, if MASK set to 1  |
+-------------------------------+-------------------------------+
| Masking-key (continued)       |          Payload Data         |
+-------------------------------- - - - - - - - - - - - - - - - +
:                     Payload Data continued ...                :
+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
|                     Payload Data continued ...                |
+---------------------------------------------------------------+
WebSockets are defined in RFC 6455, which states in Section 5.3:
The unpredictability of the masking key is
essential to prevent authors of malicious applications from selecting
the bytes that appear on the wire.
In a blog entry about WebSockets I found the following explanation:
masking-key (32 bits): if the mask bit is set (and trust me, it is if you write for the server side) you can read four unsigned bytes here which are used to xor the payload with. It's used to ensure that shitty proxies cannot be abused by attackers from the client side.
But the clearest answer I found was in a mailing list archive, where John Tamplin states:
Basically, WebSockets is unique in that you need to protect the network
infrastructure, even if you have hostile code running in the client, full
hostile control of the server, and the only piece you can trust is the
client browser. By having the browser generate a random mask for each
frame, the hostile client code cannot choose the byte patterns that appear
on the wire and use that to attack vulnerable network infrastructure.
As kmkaplan stated, the attack vector is described in Section 10.3 of the RFC.
This is a measure to prevent proxy cache poisoning attacks.¹
What it does is add some randomness: you have to XOR the payload with the random masking key (see the sketch below).
By the way: it isn't just recommended, it is obligatory.
¹ See Huang, Lin-Shung, et al. "Talking to yourself for fun and profit." Proceedings of W2SP (2011).
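To make the mechanics concrete, here is a minimal sketch in Python (illustrative only, not taken from any particular library) of how a client masks a payload with the 4-byte masking key; since XOR is its own inverse, the server unmasks with the same operation:

import os

def mask(payload: bytes, masking_key: bytes) -> bytes:
    # XOR every payload byte with the masking key, repeating the key every
    # 4 bytes (RFC 6455, Section 5.3). The same call also unmasks.
    return bytes(b ^ masking_key[i % 4] for i, b in enumerate(payload))

key = os.urandom(4)  # a fresh, unpredictable key for every frame
wire_bytes = mask(b"GET /script.js HTTP/1.1", key)
assert mask(wire_bytes, key) == b"GET /script.js HTTP/1.1"

Because the browser picks a fresh random key per frame, a malicious script cannot make wire_bytes spell out an HTTP request of its choosing.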
From this article:
Masking of WebSocket traffic from client to server is required because of the unlikely chance that malicious code could cause some broken proxies to do the wrong thing and use this as an attack of some kind. Nobody has proved that this could actually happen, but since the fact that it could happen was reason enough for browser vendors to get twitchy, masking was added to remove the possibility of it being used as an attack.
So assuming attackers were able to compromise both the JavaScript code executed in a browser and the backend server, masking is designed to prevent the sequence of bytes sent between these two endpoints from being crafted in a special way that could disrupt any broken proxies between them (by broken, this means proxies that might attempt to interpret a WebSocket stream as HTTP when in fact they shouldn't).
The browser (and not the JavaScript code in the browser) has the final say on the randomly generated mask used to send the message which is why it's impossible for the attackers to know what the final stream of bytes the proxy might see will be.
Note that the mask is redundant if your WebSocket stream is encrypted (as it should be). An article from the author of Python's Flask puts it this way:
Why is there masking at all? Because apparently there is enough broken infrastructure out there that lets the upgrade header go through and then handles the rest of the connection as a second HTTP request which it then stuffs into the cache. I have no words for this. In any case, the defense against that is basically a strong 32bit random number as masking key. Or you know… use TLS and don't use shitty proxies.
I have struggled to understand the purpose of the WebSocket mask until I encountered the following two resources which summarize it clearly.
From the book High Performance Browser Networking:
The payload of all client-initiated frames is masked using the value specified in the frame header: this prevents malicious scripts executing on the client from performing a cache poisoning attack against intermediaries that may not understand the WebSocket protocol.
Since the WebSocket protocol is not always understood by intermediaries (e.g. transparent proxies), a malicious script can take advantage of it and create traffic that causes cache poisoning in these intermediaries.
But how?
The article Talking to Yourself for Fun and Profit (http://www.adambarth.com/papers/2011/huang-chen-barth-rescorla-jackson.pdf) further explains how a cache poisoning attack works:
1. The attacker's Java applet opens a raw socket connection to attacker.com:80 (as before, the attacker can also use a SWF to mount a similar attack by hosting an appropriate policy file to authorize this request).
2. The attacker's Java applet sends a sequence of bytes over the socket crafted with a forged Host header as follows:
GET /script.js HTTP/1.1
Host: target.com
3. The transparent proxy treats the sequence of bytes as an HTTP request and routes the request based on the original destination IP, that is, to the attacker's server.
4. The attacker's server replies with a malicious script file with an HTTP Expires header far in the future (to instruct the proxy to cache the response for as long as possible).
5. Because the proxy caches based on the Host header, the proxy stores the malicious script file in its cache as http://target.com/script.js, not as http://attacker.com/script.js.
6. In the future, whenever any client requests http://target.com/script.js via the proxy, the proxy will serve the cached copy of the malicious script.
The article further explains how WebSockets come into the picture in a cache-poisoning attack:
Consider an intermediary examining packets exchanged between the browser and the attacker’s server. As above, the client requests
WebSockets and the server agrees. At this point, the client can send
any traffic it wants on the channel. Unfortunately, the intermediary
does not know about WebSockets, so the initial WebSockets handshake
just looks like a standard HTTP request/response pair, with the
request being terminated, as usual, by an empty line. Thus, the client
program can inject new data which looks like an HTTP request and the
proxy may treat it as such. So, for instance, he might inject the following sequence of bytes:
GET /sensitive-document HTTP/1.1
Host: target.com
When the intermediary examines these bytes, it might conclude that
these bytes represent a second HTTP request over the same socket. If
the intermediary is a transparent proxy, the intermediary might route
the request or cache the response according to the forged Host header.
In the above example, the malicious script took advantage of the WebSocket traffic not being understood by the intermediary and "poisoned" its cache. The next time someone asks for sensitive-document from target.com they will receive the attacker's version of it. Imagine the scale of the attack if that document were something as widely embedded as the google-analytics script.
To conclude: by forcing a mask on the payload, this poisoning is no longer possible. The attacker cannot choose the bytes that appear on the wire, so the injected text never looks like a valid HTTP request to the intermediary, and the bytes the intermediary sees are different every time.
Related
I'd like to create a monitor that shows the near-realtime average response time of nginx.
The image below shows CPU usage as an example; I'd like to create something similar for average response time.
I know how to track the response time for individual requests (https://lincolnloop.com/blog/tracking-application-response-time-nginx/).
Although I'll have to think about how to ignore non-page / API requests such as static image requests.
This must be a pretty basic requirement, but I couldn't find out how to do it through Google.
This is actually trickier than you'd expect:
Metricbeat
The nginx module of Metricbeat doesn't contain this information. It's built around stub_status and is more about the nginx process itself than the timing of individual requests.
Filebeat
The nginx module for Filebeat is where you might expect this. It's built around the nginx access log and has the individual requests. Unfortunately the response time isn't part of the access log by default (at least on Ubuntu); only the number of bytes sent is. Here's an example (response code 200, 159 bytes sent):
34.24.14.22 - - [10/Nov/2019:06:54:51 +0000] "GET / HTTP/1.1" 200 159 "-" "Go-http-client/1.1"
Packetbeat
This one has a field called event.duration that sounds promising. But be careful with the HTTP module — this one is really only for HTTP traffic and not HTTPS (because you can't see the encrypted traffic). In most cases you'll want to use HTTPS for your application, so this isn't all that helpful and will mostly show redirects to HTTPS.
The other protocols such as TLS (this is only the time for the initial handshake) or Flow information (this is a group of packets) are not what you are after IMO.
Customization
I'm afraid you'll need some customization, and you basically have two options (a minimal nginx sketch follows after this list):
1. Customize the log format of nginx as described in the blog post you linked to. You'll also need to change the pattern in the Elasticsearch ingest pipeline so that the timing information is extracted correctly.
2. I assume you have an application behind nginx. Then you might want to get even more insight than just timing by using [APM / tracing](https://www.elastic.co/products/apm) with the agents for various languages. This way you'll also automatically skip static resources like images and focus on the relevant parts of your application.
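For option 1, a minimal sketch of such a log format (the format name timed and the log path are just illustrations; $request_time and $upstream_response_time are the nginx variables that carry the timing):

log_format timed '$remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '$request_time $upstream_response_time';
access_log /var/log/nginx/access.log timed;

Whatever format you pick, the pattern in the Filebeat module's ingest pipeline has to be adjusted to match it, otherwise the new fields won't show up in Elasticsearch.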
If an Akamai edge server has cached an url, will it share that content with other edge servers, or will edge servers that don't have the content cached locally go back to the origin to get the content?
I'd love to have an official, Akamai document for this, but will of course appreciate any input!
Note that I have tried this out, and I see what I expect is the answer: edge servers go back to origin at least some of the time to get the content, even if it already resides on another edge server.
For example, I left a curl running all weekend requesting a resource that is cached for 7 days, and I got 3 different cached responses (differing in their response headers), so the origin must have been accessed at least 3 times:
$ cat /t/akamai_dump_requests_all_weekend.txt | grep x-rate-limit-reset | sort | uniq -c
259 < x-rate-limit-reset: 1489776484
1 < x-rate-limit-reset: 1489779291
12 < x-rate-limit-reset: 1489781137
and I see a lot of different edge servers listed in my dumps as well, though this is normal, I think.
66 a80-12-96-140.deploy.akamaitechnologies.com
65 a204-237-142-14.deploy.akamaitechnologies.com
51 a204-237-142-44.deploy.akamaitechnologies.com
38 a80-12-96-230.deploy.akamaitechnologies.com
8 a65-158-180-197.deploy.akamaitechnologies.com
6 a23-58-92-92.deploy.akamaitechnologies.com
6 a23-58-92-39.deploy.akamaitechnologies.com
5 a65-158-180-190.deploy.akamaitechnologies.com
5 a64-145-68-25.deploy.akamaitechnologies.com
5 a64-145-68-15.deploy.akamaitechnologies.com
4 a65-158-180-180.deploy.akamaitechnologies.com
4 a204-141-247-173.deploy.akamaitechnologies.com
4 a204-141-247-143.deploy.akamaitechnologies.com
2 a66-110-100-23.deploy.akamaitechnologies.com
1 a72-37-154-53.deploy.akamaitechnologies.com
1 a23-61-206-205.deploy.akamaitechnologies.com
1 a205-185-195-182.deploy.akamaitechnologies.com
I didn't get an answer here, so I posted the same question on the Akamai Community Forums and got the following response from Neil Jedrzejewski, a Senior Solutions Architect at Akamai. Thanks, Neil!
First of all, which edge server answers a request will vary over time based on
our low-level mapping system and load in a specific network region. Don't read
too much into the fact that you get many different edge servers - swapping them
in and out is part and parcel of how we give our customers scale and
availability.
To answer your question about shared caching, a high level explanation goes like
this.
When a client makes a request our mapping system will return the IP address(es)
of an edge server best located to honour the request. These edge servers are
grouped together in what we call network "regions". If a specific edge server
receives a request and cannot fulfil it from its own cache, it will send out
a broadcast message (ICP) on its back-plane asking if any other edge machine
peer in the same region has the object. The timeout for a response to this
request is very short (as the request is local) and if a peer has it, the edge
machine will pass the request to the peer and serve the response before caching
it locally.
If no local peer is able to satisfy the request, the edge machine will then go
forward to its cache parent as a new client request and the parent will attempt
to satisfy the request (again, checking with its own ICP peers), serving the
object out of cache to the edge machine. The edge machine will then serve it
back to the client and cache it locally for next time. If the object is
unavailable or invalid (read: TTL expired) along the entire cache hierarchy
chain, then yes, it will go back to origin to re-validate or reacquire the
object.
An important point to remember is that caching is "best effort" only. Although
your TTL for an object was 7 days, that is simply an instruction to the cache on
how long to consider the content "fresh" and a valid response for a request.
However, it is not a guarantee that the object will remain in a server's cache.
Objects can and will drop out of cache if they are infrequently requested or due
to other operational factors. This is where ICP and parent caches help because
they help consolidate requests. So although an object may drop out of a specific
edge cache, it may well still be in the cache parent as many edges are passing
requests through it thus giving a high cache-hit ratio.
So in short, our caching system: will use different edge machines to respond to
a request over time, based on our mapping system's insight into which machine
will best serve the client request; will ask a local peer if it has a copy of an
object when it cannot satisfy the request itself; will forward the request to a
cache parent if necessary to fulfil the request; and will go back to origin for
the object if the object is "stale" or if neither itself nor a peer or parent
can satisfy the request.
Hope that helps.
I have a client-side GUI app for human usage that consumes some SOAP web services and uses cURL as the underlying HTTP communication lib. Depending on the input, processing a request can take a large amount of time, even one hour. Neither the client nor the server times out on its own for that reason; that's tested and works. Most requests get processed within a few minutes anyway, so this is an edge case.
One of my users is forced to use a proxy between my client app and my server and for various reasons has no control over it. That proxy has a timeout configured and closes the connection to my client after 4 minutes of no data transfer. So the user can (and did) upload data for e.g. 30 minutes; afterwards the server starts to process the data, and after 4 minutes the proxy closes the connection. The server silently continues to process the request, but the user is left with an error message AND won't get the processing result. My app already uses TCP keep-alive, so that shouldn't be the problem; instead the timeout seems to apply to higher-level data. It behaves like the read_timeout option of Squid, which I used to reproduce the behaviour in our internal setup.
What I would like to do now is start a background thread in my web service which simply outputs some garbage data to my client for as long as the request is being processed; that data is ignored by the client and tells the proxy that the connection is still active. I can recognize my client by its user agent and can configure server-side whether to output that data or not, so other clients consuming the web service wouldn't have a problem.
What I'm asking is whether there's any HTTP-compliant method to output such garbage data before the actual HTTP response. For example, would it be enough to simply output \r\n without any additional content over and over again to be HTTP-compliant with all requesting libs? Or maybe even binary 0? Or some full-fledged HTTP headers stating something like "real answer about to come, please be patient"? From my investigation this pretty much sounds like chunked HTTP encoding, but I'm not sure yet if this is applicable.
I would like to have something like the following, where all the "Wait" lines are simply ignored in the end and the real HTTP response at the end contains Content-Length and such.
Wait...\r\n
Wait...\r\n
Wait...\r\n
[...]
HTTP/1.1 200 OK\r\n
Server: Apache/2.4.23 (Win64) mod_jk/1.2.41\r\n
[...]
<?xml version="1.0" encoding="UTF-8"?><soap:Envelope[...]
Is that possible in some standard HTTP way and if so, what's the approach I need to take? Thanks!
HTTP Status 102
Isn't HTTP Status 102 exactly what I need? As I understand the spec, I can simply print that response line over and over again until the final response is available?
HTTP status 102 was a dead end. Two things might work, depending on the proxy used. An NPH script can be used to regularly print headers directly to the client. The important thing is that NPH scripts normally bypass the web server's header buffering and can therefore be sent over the wire as needed. They "only" need to be correct HTTP headers, and depending on the web server, the proxy and such, it might be a good idea to create incrementing, unique headers, simply by adding some counter to the header name. A sketch of this approach follows below.
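For illustration, a hedged sketch of such an NPH script in Python (the file name is assumed to start with nph- so Apache passes the output through unbuffered; the header name, timings and body are made up):

#!/usr/bin/env python3
# nph-wait.py - emit the status line, then unique padding headers while the
# real work runs, then the real headers and body.
import sys
import time

out = sys.stdout
out.write("HTTP/1.1 200 OK\r\n")
out.flush()

for i in range(5):               # stands in for polling the background job
    out.write("X-Please-Wait-%d: still working\r\n" % i)
    out.flush()
    time.sleep(60)               # well below the proxy's 4-minute timeout

body = '<?xml version="1.0" encoding="UTF-8"?><soap:Envelope>...</soap:Envelope>'
out.write("Content-Type: text/xml\r\n")
out.write("Content-Length: %d\r\n\r\n" % len(body))
out.write(body)
out.flush()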
The second thing is chunked transfer encoding, in which case small chunks of dummy data can be sent to the client in the response body. The good thing is that such small amounts of data can be pushed over the wire as needed using server-side flushes; the bad thing is that the client receives this data and by default treats it as part of the expected response body. That might break the application, of course, but most HTTP libs provide callbacks for processing received data, and if you print something unique, the client should be able to filter the garbage out. A sketch of this approach follows below as well.
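A hedged, self-contained sketch of the chunked variant, using Python's http.server instead of my actual Apache/SOAP setup (the heartbeat chunk is a single space the client has to filter out):

import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def slow_job(result):
    time.sleep(20)                        # stands in for the long computation
    result["body"] = b"<real response>"

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"         # chunked encoding needs HTTP/1.1

    def do_GET(self):
        result = {}
        worker = threading.Thread(target=slow_job, args=(result,))
        worker.start()

        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()

        def chunk(data):                  # write one chunk and push it out
            self.wfile.write(b"%X\r\n%s\r\n" % (len(data), data))
            self.wfile.flush()

        while worker.is_alive():          # heartbeat chunks keep the proxy happy
            chunk(b" ")
            time.sleep(5)

        chunk(result["body"])             # the actual payload
        self.wfile.write(b"0\r\n\r\n")    # terminating zero-length chunk

HTTPServer(("", 8080), Handler).serve_forever()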
In my case the web service spawns a background thread, and depending on the entry point of the service requested it either prints headers using NPH or chunks of data. In both cases the data can be the same, so an NPH header line can be used as the dummy data for chunked transfer encoding as well.
My NPH solution doesn't work with Squid, but the chunked one does. The problem with Squid is that its read_timeout setting does not apply at the low level of receiving any data at all on the connection, but to a logical HTTP unit. This means that Squid does receive my headers, but it expects a complete HTTP header block within the period of time defined by read_timeout. With my NPH approach this isn't the case, simply because by design I only send some garbage headers to be ignored until the real headers arrive.
Additionally, one has to be careful with NPH in Apache httpd, but in my use case it works. I can see the individual headers in Squid's log, and there is no garbage after the response body or such. Avoid the Action directive, though:
Apache2 sends two HTTP headers with a mapped "nph-" CGI
I've heard that you can (in some cases) prevent timeouts by sending the HTTP header back to the client before the whole HTTP body is prepared.
I know that this is impossible using gzip ... but is this possible using HTTPS?
I read in some posts that the secure part of HTTPS is done in the transport-layer (TLS/SSL) - therefore it should be possible, right?
Sorry for mixing gzip in here - it's a completely different level, I know ... and it may be more confusing than helpful as an example ;)
In HTTP/1.1 it's possible to send the response header before preparation of the response body is completed. To do this, one normally uses chunked encoding.
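For reference, a chunked response looks roughly like this on the wire (chunk sizes are hexadecimal, each chunk can be flushed as soon as it is ready, and a zero-length chunk followed by a final empty line ends the body):

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Type: text/plain

5
Hello
7
 world!
0
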
Some servers also stream the data as-is by not specifying the content length and indicating the end of the stream by closing the connection, but this is quite a brutal way to do things (chunked encoding was designed exactly for sending data before it's completely available).
As HTTPS is HTTP running over an SSL/TLS channel, TLS doesn't affect the above behaviour in any way.
Yes, you can do this. HTTPS is just HTTP over a TLS/SSL transport; the HTTP protocol is exactly the same.
Let's say I want to GET one byte from a server using the HTTP protocol and I want to minimize everything. No headers, just http://myserver.com/b, where b is a text file with one character in it, or better still b is just one character (not sure if that is possible).
Is there a way to do this with Apache, and what is the smallest possible amount of data required for a complete HTTP and a complete HTTPS transaction?
Alternatively, the transaction could be done with just a HEAD request if that is more data-efficient.
If you're planning to use HTTP/1.1 (more or less required if you end up on a virtual host), your GET request will need to include the host name, either in the Host header or as an absolute URI in the request line (see RFC 2616, Section 5.1.2).
Your response will also need a Content-Length header or transfer-encoding headers and delimiters.
If you're willing to "break" HTTP by using a HEAD request, it sounds like HTTP might not be the best choice of protocol. You might also be able to return something in a custom header, but that's not a clean way of doing it.
Note that, even if you implement your own protocol, you will need to implement a mechanism similar to what Content-Length or chunked encoding provide, to be able to determine when to stop reading from the remote party (otherwise, you won't be able to detect badly closed connections).
EDIT:
Here is a quick example; this will vary depending on your host name (assuming HTTP/1.1). I guess you could use OPTIONS instead. It depends on how much you're willing to break HTTP...
Request:
GET / HTTP/1.1
Host: www.example.com
That's 14 + 2 + 21 + 2 + 2 = 41 bytes (each 2 is a CRLF; the last one is the blank line ending the headers)
Response:
HTTP/1.1 200 OK
Content-Length: 1
Content-Type: text/plain
a
That's 15 + 2 + 17 + 2 + 24 + 2 + 2 + 1 = 65 bytes
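If you want to double-check the arithmetic, here is a tiny Python sketch (the host name and the one-byte body are the ones assumed above):

request = (b"GET / HTTP/1.1\r\n"
           b"Host: www.example.com\r\n"
           b"\r\n")
response = (b"HTTP/1.1 200 OK\r\n"
            b"Content-Length: 1\r\n"
            b"Content-Type: text/plain\r\n"
            b"\r\n"
            b"a")
print(len(request), len(response))   # prints: 41 65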
For HTTPS, there will be a small overhead for the SSL/TLS channel itself, but the bulk of it will be taken by the handshake, in particular, the server certificate (assuming you're not using client-cert authentication) should be the biggest. Check the size (in DER format) of your certificate.
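One quick way to check that size is a small Python sketch using the standard ssl module (www.example.com is a placeholder; this retrieves only the leaf certificate, not the intermediates):

import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        der_cert = tls.getpeercert(binary_form=True)  # server certificate in DER form
        print(len(der_cert), "bytes")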
What exactly are you trying to achieve? Is this a sort of keep-alive?
You could do a "GET /", which implies HTTP/1.0 being used, but that locks you out of stuff like virtual hosting etc. You can map "/" to a CGI script; it doesn't need to be a real file, depending on what you're trying to achieve. You can configure Apache to return only a minimum set of headers, which would basically be "Content-Type: text/plain" (or another, shorter MIME type, possibly a custom one, e.g. "Content-Type: a/b") and "Content-Length: 0", thus not returning a response body at all.
It is an old question, but maybe someone will find this useful, because nobody has answered the HTTPS part of the question.
For me this was needed for an easy validation of HTTPS communication in my proxy, which connects to other, untrusted proxies through a tunnel.
This site explains it clearly: http://netsekure.org/2010/03/tls-overhead/
Quotes from the article:
One thing to keep in mind that will influence the calculation is the variable size of most of the messages. The variable nature will not allow to calculate a precise value, but taking some reasonable average values for the variable fields, one can get a good approximation of the overhead. Now, let’s go through each of the messages and consider their sizes.
ClientHello – the average size of initial client hello is about 160 to 170 bytes. It will vary based on the number of ciphersuites sent by the client as well as how many TLS ClientHello extensions are present. If session resumption is used, another 32 bytes need to be added for the Session ID field.
ServerHello – this message is a bit more static than the ClientHello, but still variable size due to TLS extensions. The average size is 70 to 75 bytes.
Certificate – this message is the one that varies the most in size between different servers. The message carries the certificate of the server, as well as all intermediate issuer certificates in the certificate chain (minus the root cert). Since certificate sizes vary quite a bit based on the parameters and keys used, I would use an average of 1500 bytes per certificate (self-signed certificates can be as small as 800 bytes). The other varying factor is the length of the certificate chain up to the root certificate. To be on the more conservative side of what is on the web, let’s assume 4 certificates in the chain. Overall this gives us about 6k for this message.
ClientKeyExchange – let’s assume again the most widely used case – RSA server certificate. This corresponds to size of 130 bytes for this message.
ChangeCipherSpec – fixed size of 1 (technically not a handshake message)
Finished – depending whether SSLv3 is used or TLS, the size varies quite a bit – 36 and 12 bytes respectively. Most implementations these days support TLSv1.0 at least, so let’s assume TLS will be used and therefore the size will be 12 bytes
So the minimum can be as big (or small) as:
20 + 28 + 170 + 75 + 800 + 130 + 2*1 + 2*12 ≈ 1249 bytes
Though according to the article, the average is about 6449 bytes.
Also, it is important to know that TLS sessions can be resumed, so only the first connection has this full overhead; resumed handshakes add only about 330 bytes.