Atmosphere - Need to modify ws URL before connection

I need to modify the WebSocket URL before the client establishes the connection.
I'm using @ManagedService in the server code and subscribing from the client side using $.atmosphere.subscribe(request).
I have the following setup:
Cyberoam firewall --> Apache httpd (mod_proxy_ajp, mod_jk for load balancing) --> GlassFish
Primary Transport --> WebSocket
Fallback Transport --> long-polling
The problem I'm facing:
The Cyberoam firewall has a limitation (in its WAF) that the GET URL cannot exceed 50 characters. But when I call subscribe in Atmosphere, it constructs the URL with all the X-Atmosphere-* params appended to it.
For example: ws://localhost:8080/chat?X-Atmosphere-tracking-id=5ebed4c5-0b90-4166-88b2-9f273719ab75&X-Atmosphere-Framework=2.2.1-jquery&X-Atmosphere-Transport=websocket&Content-Type=application/json&X-atmo-protocol=true, which clearly exceeds the allowed limit.
Can I somehow construct the URL in my server code, appending all the necessary headers and params, before it connects?
Yes, I have set attachHeadersAsQueryString: false on the initial request, and, obviously, it then fails to connect because the header information is missing from the WebSocket connection.
Any suggestions/thoughts would be greatly appreciated.
Thank You.

Use request.attachHeadersAsQueryString = false so no query string will be passed. You may need to upgrade to the latest version of the JavaScript client:
https://github.com/Atmosphere/atmosphere-javascript
-- Jeanfrancois (Atmosphere's lead)
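For reference, a minimal sketch of the kind of client-side request object this refers to (property names as used by atmosphere.js 2.x; the URL and transports are placeholders taken from the question):
var request = {
    url: "http://localhost:8080/chat",
    contentType: "application/json",
    transport: "websocket",
    fallbackTransport: "long-polling",
    attachHeadersAsQueryString: false   // keep the X-Atmosphere-* params out of the query string
};
var socket = $.atmosphere.subscribe(request);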

Related

Mulesoft Http:Request - Receiving Error: HTTP packet header is too large

I have a Mule flow that uses http:request, and this flow runs multiple requests through it at the same time (synchronously). On only a few of the requests I receive the following error:
HTTP packet header is too large (java.lang.IllegalStateException)
The problem here is that the service I am sending the request to receives these requests as normal, and then it blows up in Mule, so I don't get the correct response I am looking for.
So in the other system it looks like the call was successful, but on my end it is a failure. I am fairly new to Mule, so go easy on me lol!
Any and all help would be greatly appreciated.
This is not a Mule error message. It looks like people report this issue with Grizzly.
I guess Mule sends a header that the server you're calling finds too big. I bet it's the serialized session. If you are using the http transport, this can be disabled like this:
<http:connector name="NoSessionConnector">
    <service-overrides sessionHandler="org.mule.session.NullSessionHandler"/>
</http:connector>
If you are using the new HTTP connector, well, someone else will have to tell you how to disable it...
EDIT: Adding comment from Anirban.
With the new HTTP connector, use:
<remove-property propertyName="MULE_SESSION" />
to remove the massive session header.

Custom response headers not sent by server (Rails Devise)

I'm trying to retrieve 3 response headers (the Rails Devise auth headers: uid, client, access-token) in every request to a Rails server.
Using Postman (an HTTP client) it works.
With OkHttp (a Java HTTP client) the headers just don't show up in the client (I've checked using Wireshark).
When I'm in debug mode it just works...
The additional headers with Postman are due to Postman sending an Origin header, to which the server replies with CORS headers, i.e. Access-Control-.... These headers are sent within the normal HTTP response headers, i.e. not after the response.
But these access-control headers are only relevant when the access is done from a browser, because they control the cross-origin behavior of XHR. Since you are not inside a browser, they should be irrelevant for what you are doing. What is relevant is the body of the response and some of the other headers, and there you'll find no differences. It should also be irrelevant whether multiple requests are sent within the same TCP connection (HTTP keep-alive, as Postman does) or over multiple connections (OkHttp), because each request is independent of the others, and reusing the same TCP connection is only a performance optimization.
If you really want to get these special headers, you should add an Origin header to your OkHttp request. See the OkHttp examples on how to add your own headers. But like I said: these access-control headers should be irrelevant for the real task, and there should be no need to get at them.
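If you do want to try it, a rough sketch with OkHttp might look like this (the URL and Origin value are placeholders, and error handling is omitted):
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
OkHttpClient client = new OkHttpClient();
Request request = new Request.Builder()
        .url("https://example.com/api/v1/sessions")   // placeholder endpoint
        .header("Origin", "https://example.com")      // prompts the server to reply with CORS headers
        .build();
Response response = client.newCall(request).execute();
System.out.println(response.headers());               // inspect every header the server sent back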
There is a property "config.batch_request_buffer_throttle" in the file "config/initializers/devise_token_auth.rb" of the Rails project. We changed it from 5 seconds to 0 seconds.
It is a property that keeps the current token valid for that amount of time, so that several requests in a row can share it.
As the original documentation says: "Sometimes it's necessary to make several requests to the API at the same time. In this case, each request in the batch will need to share the same auth token. This setting determines how far apart the requests can be while still using the same auth token."
So when we made the request using Postman, or in Java debug mode, the 5 seconds had already passed, allowing Devise to generate new tokens and return them to the client.

jetty BadMessage: 400 No Host for HttpChannelOverHttp

I have seen previous posts about Jetty BadMessage: 400 No Host for HttpChannelOverHttp and I can confirm that I am able to repeat the problem.
I have a Jetty route in Camel Blueprint, which creates another request and forwards on to a Dropwizard service via Camel HTTP.
.process(new Processor() {
    public void process(Exchange exchange) throws Exception {
        // Creates the Object for the request
    }
})
.marshal(jsonFormat)
.convertBodyTo(String.class)
.setHeader(Exchange.HTTP_URI, simple(serviceEndpoint))
.setHeader(Exchange.HTTP_METHOD, constant(HttpMethod.POST))
.to(userviceEndpoint)
When this request executes, I see the following error on Dropwizard
WARN [2014-11-12 23:15:35,333] org.eclipse.jetty.http.HttpParser: BadMessage: 400 No Host for HttpChannelOverHttp#3aa99dd2{r=0,a=IDLE,uri=-}
This happens constantly, and this problem does not occur when I send a request to the DW service using SOAP-UI (using the serviceEndpoint URL).
Please if anyone has solved this problem, I would like to know how. Thank you.
Capture your network traffic, and post the HTTP request headers you are sending to Jetty.
Odds are that your HTTP client is not sending the Host: header (which is required in HTTP/1.1).
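For reference, the request line and headers of a minimal well-formed HTTP/1.1 POST look like this (placeholder path and host; the body would follow after a blank line):
POST /some/path HTTP/1.1
Host: localhost:8181
Content-Type: application/json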
In my case, I was setting a header with a null value. Removing the header with the null value from the request solved the issue.
Jetty Version: 9.3.8
I got this error when I was making a request with incorrectly formatted headers. So instead of having the header as "X_S_ID: ABC" I had "X_S_ID: ["X_S_ID":BLAH]". So the error sometimes may not literally mean you need to pass a Host header.
Fixing the headers fixed this. Print the exact request you are making and make sure all headers are correctly formatted.

How do server-sent events actually work?

So I understand the concept of server-sent events (EventSource):
A client connects to an endpoint via EventSource
Client just listens to messages sent from the endpoint
The thing I'm confused about is how it works on the server. I've had a look at different examples, but the one that comes to mind is Mozilla's: http://hacks.mozilla.org/2011/06/a-wall-powered-by-eventsource-and-server-sent-events/
Now this may be just a bad example, but it kinda makes sense how the server side would work, as I understand it:
Something changes in a datastore, such as a database
A server-side script polls the datastore every N seconds
If the polling script notices a change, a server-sent event is fired to the clients
Does that make sense? Is that really how it works from a barebones perspective?
The HTML5 doctor site has a great write-up on server-sent events, but I'll try to provide a (reasonably) short summary here as well.
Server-sent events are, at their core, a long-running HTTP connection, a special mime type (text/event-stream) and a user agent that provides the EventSource API. Together, these form the foundation of a unidirectional connection between a server and a client, where messages can be sent from server to client.
On the server side, it's rather simple. All you really need to do is set the following HTTP headers:
Content-Type: text/event-stream
Cache-Control: no-cache
Connection: keep-alive
Be sure to respond with status code 200, not 204 or any other code, as anything else will cause compliant user agents to disconnect. Also, make sure not to end the connection on the server side. You are now free to start pushing messages down that connection. In Node.js (using express), this might look something like the following:
app.get("/my-stream", function(req, res) {
    res.status(200)
       .set({ "content-type"  : "text/event-stream"
            , "cache-control" : "no-cache"
            , "connection"    : "keep-alive"
            })
    res.write("data: Hello, world!\n\n")
})
On the client, you just use the EventSource API, as you noted:
var source = new EventSource("/my-stream")
source.addEventListener("message", function(message) {
console.log(message.data)
})
And that's it, basically.
Now, in practice, what actually happens here is that the connection is kept alive by the server and the client by means of a mutual contract. The server will keep the connection alive for as long as it sees fit. Should it want to, it may terminate the connection and respond with a 204 No Content the next time the client tries to connect. That will cause the client to stop trying to reconnect. I'm not sure if there's a way to end the connection such that the client is told not to reconnect at all, thereby skipping the one reconnect attempt the client would otherwise make.
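A sketch of what that might look like in the express handler shown above (streamIsDone is a made-up stand-in for whatever your application uses to decide the stream is over):
app.get("/my-stream", function(req, res) {
    if (streamIsDone) {
        res.status(204).end()   // 204 tells a compliant EventSource to stop reconnecting
        return
    }
    // ...otherwise keep the connection open and push events as shown above
})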
As mentioned, the client will keep the connection alive as well, and will try to reconnect if it is dropped. The reconnection algorithm is specified in the spec and is fairly straightforward.
One super important bit that I've so far barely touched on, however, is the mime type. The mime type defines the format of the messages coming down the connection. Note, however, that it doesn't dictate the format of the contents of the messages, just the structure of the messages themselves. The mime type is extremely straightforward. Messages are essentially key/value pairs of information. The key must be one of a predefined set:
id - the id of the message
data - the actual data
event - the event type
retry - milliseconds the user agent should wait before retrying a failed connection
Any other keys should be ignored. Messages are then delimited by the use of two newline characters: \n\n. A couple of data-only examples follow, and then one that uses the other fields.
The following is a valid message (the trailing newline characters are shown on their own line for clarity):
data: Hello, world!
\n
The client will see this as: Hello, world!.
As is this:
data: Hello,
data: world!
\n
The client will see this as: Hello,\nworld!.
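And a message that uses the other fields might look like this (the id and event name are made up for illustration):
id: 42
event: greeting
retry: 10000
data: Hello, world!
\n
Because it carries an event field, the client would receive it through addEventListener("greeting", ...) rather than the default "message" listener.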
That pretty much sums up what server-sent events are: a long-running, non-cached HTTP connection, a mime type and a simple JavaScript API.
For more information, I strongly suggest reading the specification. It's small and describes things very well (although the requirements of the server side could possibly be summarized a bit better.) I highly suggest reading it for the expected behavior with certain http status codes, for instance.
You also need to make sure to call res.flushHeaders(), otherwise Node.js won't send the HTTP headers until you call res.end(). See this tutorial for a complete example.
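A minimal tweak to the express example above shows where the call goes (flushHeaders() is available on Node's http.ServerResponse):
app.get("/my-stream", function(req, res) {
    res.status(200)
       .set({ "content-type"  : "text/event-stream"
            , "cache-control" : "no-cache"
            , "connection"    : "keep-alive"
            })
    res.flushHeaders()                      // send the headers right away
    res.write("data: Hello, world!\n\n")    // events can then follow at any time
})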

How do I get the response from an HTTP request on error?

I tried using both HTTPService and URLRequest/URLLoader. But I can't figure out how to get either the response output or the response headers in case of a server error (like 500). Some help would be really appreciated.
Listening to the HTTPStatusEvent should give you the right status code, but you won't have access to the response body.
You need to do this through sockets.
This project encapsulates requests through sockets, giving you access to the status code, the response body and other niceties (making PUT and DELETE requests, for example).
Note that since Flash Player 10, using sockets requires additional steps regarding crossdomain policies.
Cheers
