Sending huge message from HTTP server to an Angular client - http

From a win32 HTTP server I have to send a big (~32 MB) message to an Angular-based client running in Chrome.
Must I use WebSocket for this purpose?
Or should I use Server-Sent Events (SSE)?
What are my alternatives?
Thank you,
Zvika

Neither WebSocket nor SSE is meant for sending large messages specifically, and it makes no sense to adopt either one just to solve this problem.
HTTP works perfectly fine with large responses, even if they are gigabytes. Try it. If you run into problems, report back here with a detailed explanation of what failed.
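To make that concrete, here is a minimal sketch of serving a ~32 MB payload as one ordinary HTTP response. It uses the JDK's built-in HttpServer as a stand-in for the win32 server; the port, path and file name are invented for illustration:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import com.sun.net.httpserver.HttpServer;

    // Illustrative stand-in server: streams a large file as a single plain HTTP response.
    public class BigResponseServer {
        public static void main(String[] args) throws IOException {
            Path payload = Paths.get("big-message.bin");      // hypothetical ~32 MB file
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/api/message", exchange -> {
                exchange.getResponseHeaders().add("Content-Type", "application/octet-stream");
                exchange.sendResponseHeaders(200, Files.size(payload));  // declare the exact length
                try (InputStream in = Files.newInputStream(payload);
                     OutputStream out = exchange.getResponseBody()) {
                    byte[] buf = new byte[64 * 1024];
                    int n;
                    while ((n = in.read(buf)) != -1) {        // stream in chunks, never hold 32 MB in memory
                        out.write(buf, 0, n);
                    }
                }
            });
            server.start();
        }
    }

On the Angular side a plain GET with a blob or arraybuffer response type is enough; no special transport is required.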

Related

gRPC Java implementation v0.15 might lose data; was it fixed in v1? Backpressure?

When I was working with v0.15 of gRPC (Java version) I ran into an issue where, if the client is slow, some data might simply be lost. I.e. the server emits, say, 5 million records and the client receives around 80 percent of them.
To handle this situation I had to implement a local 'buffer', which seems like a problem to me if I want to use gRPC for communication between microservices.
I wonder whether this issue was fixed in v1. I remember seeing (via Google) a corresponding issue somewhere in the gRPC discussions, but I cannot find it now.
I would assume this somehow correlates with backpressure. Do we have backpressure in gRPC out of the box? In my tests 1.0.2 blows up with io.netty.util.internal.OutOfDirectMemoryError.
Should I manually implement backpressure?
Just sending a message does not imply the client has received it. The ClientCall and ServerCall APIs describe it as:
No generic method for determining message receipt or providing acknowledgement is provided. Applications are expected to utilize normal payload messages for such signals, as a response naturally acknowledges its request.
I would agree your issue appears to be related to flow control/backpressure. You should cast the StreamObserver to a ServerCallStreamObserver and use setOnReadyHandler() and isReady() for controlling the memory usage. There's a brief example in an issue comment.
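Roughly, that pattern looks like the sketch below (the method name, message types, and loadRecords() data source are placeholders, not real gRPC names): the server only calls onNext() while isReady() reports that the transport has room, and resumes from the on-ready callback once the slow client catches up.

    import java.util.Iterator;
    import java.util.concurrent.atomic.AtomicBoolean;
    import io.grpc.stub.ServerCallStreamObserver;
    import io.grpc.stub.StreamObserver;

    // Inside your generated *ImplBase subclass; ListRequest, Record and loadRecords are hypothetical.
    public void listRecords(ListRequest request, StreamObserver<Record> responseObserver) {
        ServerCallStreamObserver<Record> call =
                (ServerCallStreamObserver<Record>) responseObserver;
        Iterator<Record> source = loadRecords(request);   // placeholder data source
        AtomicBoolean done = new AtomicBoolean(false);

        // Emit only while the transport buffer has room; gRPC invokes this handler
        // again every time the (slow) client drains enough to become ready again.
        Runnable drain = () -> {
            while (call.isReady() && source.hasNext()) {
                call.onNext(source.next());
            }
            if (!source.hasNext() && done.compareAndSet(false, true)) {
                call.onCompleted();
            }
        };
        call.setOnReadyHandler(drain);
        drain.run();   // send the first batch immediately
    }

With this shape the server never queues unbounded data into Netty's direct buffers, which is what typically produces the OutOfDirectMemoryError you saw.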

SignalR connection is endless

I have a SignalR connection working, but something very weird happens: sometimes it connects perfectly within a few seconds, and other times, when I track the request, it takes more than 10 minutes trying to connect and gives me something like that.
Can anyone give me an explanation for this? Any hints on how to search for the problem?
The request you're looking at, /connect?transport=serverSentEvents&..., is supposed to be endless.
SignalR is using a Comet technique called Server-Sent Events, or SSE. The basic idea is that SignalR responds to SSE requests in chunks, but never actually closes the response unless the client asks it to.
Browsers with SSE support can read the chunks sent from the server as they are sent even though the response doesn't end. This allows an unlimited number of messages to be sent in response to a single request.
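As a rough illustration of that mechanism (not of SignalR itself), here is a tiny SSE endpoint built on the JDK's HttpServer; the endpoint path, port, and payloads are invented. The response is never closed, and each flushed data: line reaches the browser as a separate event:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.Executors;
    import com.sun.net.httpserver.HttpServer;

    // Minimal Server-Sent Events endpoint: one request, an endless chunked response.
    public class SseDemo {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.setExecutor(Executors.newCachedThreadPool());  // one blocked thread per open stream
            server.createContext("/connect", exchange -> {
                exchange.getResponseHeaders().add("Content-Type", "text/event-stream");
                exchange.getResponseHeaders().add("Cache-Control", "no-cache");
                exchange.sendResponseHeaders(200, 0);             // length 0 = chunked, open-ended body
                OutputStream body = exchange.getResponseBody();
                try {
                    for (int i = 0; ; i++) {                      // deliberately "endless"
                        body.write(("data: tick " + i + "\n\n").getBytes(StandardCharsets.UTF_8));
                        body.flush();                             // the browser sees each event as it is flushed
                        Thread.sleep(1000);
                    }
                } catch (IOException | InterruptedException e) {
                    exchange.close();                             // client disconnected
                }
            });
            server.start();
        }
    }

A browser-side new EventSource("/connect") would keep that single request open and fire a message event for every flushed chunk, which is exactly why the SignalR request you captured never finishes.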

Netty Snoop-like implementation does not receive server response data

I've implemented a Netty Snoop-like HTTP server and client.
The server can be tested easily with a browser - it works as expected. The client is harder to test, but through debugging I can tell that it receives the headers just fine; it doesn't seem to receive the HTTP response body, though.
Since I know the server is sending the body (by checking in the browser), I'm wondering why the client code can't see it, or maybe can't decode it.
I am using Netty 4.0.15 as it appears to be the most stable release right now. You can see my version of the ClientHandler and ClientInitalizer classes at http://pastebin.com/tQ6d72pn, and my ServerHandler and ServerInitalizer classes at http://pastebin.com/JbHrTEkg
No doubt I'm doing something stupid; any help would be really appreciated!
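Since the pastebin classes aren't reproduced here, what follows is only a generic Netty 4.0 HTTP client sketch rather than a fix for the asker's code. The usual pitfall it guards against is that, without HttpObjectAggregator, the response arrives as a separate HttpResponse (headers) followed by HttpContent chunks (the body), so a handler that only inspects the first message never sees the body. Host and port are placeholders:

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.*;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioSocketChannel;
    import io.netty.handler.codec.http.*;
    import io.netty.util.CharsetUtil;

    // Generic Netty 4.0.x HTTP client: the aggregator merges headers and body chunks
    // into one FullHttpResponse before the last handler ever sees it.
    public class SimpleHttpClient {
        public static void main(String[] args) throws Exception {
            EventLoopGroup group = new NioEventLoopGroup();
            try {
                Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new HttpClientCodec());
                            ch.pipeline().addLast(new HttpObjectAggregator(1024 * 1024));
                            ch.pipeline().addLast(new SimpleChannelInboundHandler<FullHttpResponse>() {
                                @Override
                                protected void channelRead0(ChannelHandlerContext ctx, FullHttpResponse msg) {
                                    System.out.println(msg.getStatus());
                                    System.out.println(msg.content().toString(CharsetUtil.UTF_8));  // the body
                                    ctx.close();
                                }
                            });
                        }
                    });
                Channel ch = b.connect("localhost", 8080).sync().channel();   // placeholder host/port
                FullHttpRequest req = new DefaultFullHttpRequest(
                        HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
                req.headers().set(HttpHeaders.Names.HOST, "localhost");
                ch.writeAndFlush(req);
                ch.closeFuture().sync();
            } finally {
                group.shutdownGracefully();
            }
        }
    }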

Is replying to the client before receiving the complete request allowed for an HTTP 1.0 server?

I couldn't find an RFC that answers this question. Perhaps you guys can point me in the right direction.
I'm implementing a stripped-down HTTP server whose only function is to accept big multipart-encoded uploads.
In certain cases, such as the file being too big or the client not being authorized to upload, I want the server to reply with an error and close the connection immediately.
It looks like the Chrome browser doesn't like this, because it thinks the server returned HTTP code zero.
Could not get any response
This seems to be like an error connecting to http://my_ubuntu:8080/api/upload. The response status was 0.
Check out the W3C XMLHttpRequest Level 2 spec for more details about when this happens.
Therefore, my question:
Is replying to the client before receiving the complete request allowed for an HTTP server?
Update: just tested it with an iOS 6 client. Same thing; it thinks the server abruptly closed the connection :(
This is a great question and apparently it is very ambiguous. You will probably enjoy reading this article on the "Million Dollar Bug" - http://jacquesmattheij.com/the-several-million-dollar-bug
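For reference, the failure mode that article describes (as best I recall) is this: if the server writes its error response and closes the socket while unread request data is still arriving, the OS answers the remaining upload with a TCP RST, and browsers then throw the whole response away, which is exactly the "status 0" Chrome reports. The common workaround is to drain the rest of the request body before replying. A rough sketch, using the JDK's HttpServer as a stand-in and with made-up limits and paths:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import com.sun.net.httpserver.HttpServer;

    // Stand-in upload endpoint: rejects oversized uploads but still consumes the
    // request body, so closing the connection doesn't turn into a TCP RST.
    public class RejectingUploadServer {
        static final long MAX_UPLOAD = 10L * 1024 * 1024;      // hypothetical 10 MB limit

        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/api/upload", exchange -> {
                String len = exchange.getRequestHeaders().getFirst("Content-Length");
                long declared = (len == null) ? 0 : Long.parseLong(len);
                int status;
                byte[] reply;
                if (declared > MAX_UPLOAD) {
                    // Drain what the client is going to send anyway before answering.
                    try (InputStream in = exchange.getRequestBody()) {
                        byte[] buf = new byte[64 * 1024];
                        while (in.read(buf) != -1) { /* discard */ }
                    }
                    status = 413;                              // Payload Too Large
                    reply = "upload too large".getBytes(StandardCharsets.UTF_8);
                } else {
                    // ... accept and store the upload ...
                    status = 200;
                    reply = "ok".getBytes(StandardCharsets.UTF_8);
                }
                exchange.sendResponseHeaders(status, reply.length);
                exchange.getResponseBody().write(reply);
                exchange.close();
            });
            server.start();
        }
    }

The trade-off is bandwidth: you pay for receiving the oversized upload once, but every client actually sees the error instead of a dead connection.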
I think this is a certificate trust issue. Try manually trusting the site, and subsequent requests should work.

Why can't I view Omegle's HTTP request/response headers?

I'm trying to write a small program that lets me talk to Omegle strangers via the command line, for school. However, I'm having some issues. I'm sure I could solve the problem if I could view the headers being sent, but if you talk to a stranger on Omegle while Live HTTP Headers (or a similar plug-in or program) is running, the headers don't show. Why is this? Are they not sending HTTP headers and using a different protocol instead?
I'm really lost with this, any ideas?
I had success writing a command-line Omegle chat client. However, it is hardcoded in C for POSIX and curses.
I'm not sure what exactly your problem is, maybe it's just something with your method of reverse engineering Omegle's protocol. If you want to make a chat client, use a network packet analyzer such as Wireshark (or if you're on a POSIX system I recommend tcpdump), study exactly what data is sent and received during a chat session and have your program emulate what the default web client is doing. Another option is to de-compile/reverse engineer the default web client itself, which would be a more thorough method but more complicated.
