Processing: How do I get a client to read multiple messages sent at once from a server as separate messages? - networking

I am working on setting up a basic network system in Processing:
import processing.net.*;
Server myServer;
Client myClient;
However, I'm having trouble with my server and client side communication setup. My client interprets all incoming messages as Strings, and the issue is that whenever multiple messages are sent from the server in the same frame, they get concatenated into a single string, which my program cannot interpret. After testing, I found that multiple messages sent from a single client to the server are given the same treatment.
My client reader looks like this:
while (myClient.available() > 0) {
  String dataIn = myClient.readString();
  // ... interpret dataIn ...
}
As of now, I don't know if the problem is in the reader (combining the strings), or in the fact that I'm using write() multiple times in a single frame (and the data is being sent as a single string).
I am wondering if the messages can somehow be sent/read separately or, if not, there is some method to test if a message has already been sent (that works for both the client and server side) so that I can set up a queue to keep track of messages to be sent.

Well, I decided to forgo the idea of checking if a message had already been sent, as I did not see any functions that would help with that. Instead, I created an ArrayList of Strings for the server called serverQ to act as a queue for messages to be sent:
ArrayList <String> serverQ = new ArrayList<String>();
I also added a function writeSQ(String) that would place any input string into the queue:
void writeSQ(String s) {
  serverQ.add(s);
}
I then replaced every usage of myServer.write(String) with writeSQ(String). At the end of my ServerUpdate function, I added a section that drains the queue, sending the next string to all clients, one message per frame:
// send data
if (serverQ.size() > 0) {
  myServer.write(serverQ.get(0));
  serverQ.remove(0);
}
However, messages still got combined, so I speculated that it might be due to how close together the messages were being sent (one every frame); so I added a boolean serverSent to throttle sending to every other frame. The new code looks like this:
// send data
if (serverQ.size() > 0) {
  if (!serverSent) {
    myServer.write(serverQ.get(0));
    serverQ.remove(0);
    serverSent = true;
  } else {
    serverSent = false;
  }
}
This worked perfectly, and the messages were interpreted individually by the clients. I added the same support code to the clients (swapping server for client where needed) and, after a good amount of testing, can confirm that it works properly in both directions.
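For reference, here is a minimal sketch of the server-side pieces described above assembled into one place, assuming the queue is drained from Processing's draw() loop (the port number and the placement of the writeSQ calls are illustrative):

import processing.net.*;

Server myServer;
ArrayList<String> serverQ = new ArrayList<String>();
boolean serverSent = false;

void setup() {
  myServer = new Server(this, 5204);  // illustrative port
}

// queue a message instead of writing it immediately
void writeSQ(String s) {
  serverQ.add(s);
}

void draw() {
  // ... game/update logic that calls writeSQ(...) goes here ...

  // send data: at most one queued message every other frame
  if (serverQ.size() > 0) {
    if (!serverSent) {
      myServer.write(serverQ.get(0));
      serverQ.remove(0);
      serverSent = true;
    } else {
      serverSent = false;
    }
  }
}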

Related

How to get continuous HTTP data?

I'm trying to get live trading data from the Internet via HTTP, but it is updated continuously, so if I GET the data, it keeps downloading for as long as there is data available. Only after I stop the download stream can I access the data.
How to access the stream of data while the downloading is in progress?
I'm using Indy's TIdHTTP so that I can use SSL. I tried TIdIOHandlerStream, but the IOHandler is already taken by TIdSSLIOHandlerSocketOpenSSL. So I'm absolutely clueless here.
This is in response to a "multipart/form-data" request.
Please guide me...
Lrequest.Values['__RequestVerificationToken'] := RequestVerificationToken;
Lrequest.Values['acct'] := 'demo';
Lrequest.Values['pwd'] := 'demo';
try
  Response.Text := Onhttp.Post('https://trading/data', Lrequest);
  Form1.Memo1.Lines.Add(TimeToStr(Time) + ': ' + Response.Text);
except
  on E: Exception do
    Form1.Memo1.Lines.Add(TimeToStr(Time) + ': ' + E.ClassName +
      ' error raised, with message : ' + E.Message);
end;
UPDATE:
The data is an endless JSON string, like this:
{"id":"data","val":[{"rc":2,"tpc":"\\RealTime\\Global\\SGDIDR.FX","item":[{"val":{"F009":"10454.90","F011":"-33.1"}}]}]}
{"id":"data","val":[{"rc":2,"tpc":"\\RealTime\\Global\\SGDIDR.FX","item":[{"val":{"F009":"10458.80","F011":"-29.2"}}]}]}
and so on, and so on...
You can't use TIdIOHandlerStream to interface with a TCP connection; that is not what it is designed for. It is meant for performing I/O operations using user-provided TStream objects, i.e. for debugging previously captured sessions.
TIdHTTP is not really designed to handle endless HTTP responses like the one you have described. What is the exact format the server is delivering its live data in? What do the HTTP response headers look like? It is really difficult to answer your question without knowing the exact format being used.
However, that being said, there are some cases to consider, depending on what the server is actually sending:
if the server is using a MIME-based server-push format, like multipart/x-mixed-replace, you can enable the hoNoReadMultipartMIME flag in the TIdHTTP.HTTPOptions property, and then read the MIME data yourself from the TIdHTTP.IOHandler after TIdHTTP.Get() exits. For instance, you can use TIdMessageDecoderMIME to help you parse the MIME parts, see New TIdHTTP hoNoReadMultipartMIME flag in Indy's blog, or Delphi Indy TIdHttp and multipart/x-mixed-replace with Text and jpeg image.
Otherwise, if the server is using Transfer-Encoding: chunked, where each data update is sent as a new HTTP chunk, you can use the TIdHTTP.OnChunkReceived event. Or, you can enable the hoNoReadChunked flag in the TIdHTTP.HTTPOptions property, and then read the chunks yourself from the TIdHTTP.IOHandler after TIdHTTP.Get() exits. See New TIdHTTP flags and OnChunkReceived event in Indy's blog.
Otherwise, you could give TIdHTTP.Get() a TIdEventStream to write into, and then use that stream's OnWrite event to access the raw bytes. Or, you could write your own TStream-derived class that overrides the virtual Write() method. Either way, you would be responsible for manually parsing and buffering the raw body data as they are being written to the stream.
Otherwise, you may have to resort to using TIdTCPClient instead, implementing the HTTP protocol manually, then you would be solely responsible for reading in the HTTP response body however you want.
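Not Indy-specific, but to make the general idea concrete (read the body incrementally as it arrives instead of waiting for the response to finish), here is a rough sketch in Java using the java.net.http client; the URL is the question's placeholder, and the request is simplified to a plain GET without the login form fields:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.stream.Stream;

public class LiveFeedSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://trading/data"))  // placeholder URL from the question
                .build();

        // BodyHandlers.ofLines() exposes the body as a stream of lines while
        // the download is still in progress, so each JSON update can be
        // handled as soon as it arrives.
        HttpResponse<Stream<String>> response =
                client.send(request, HttpResponse.BodyHandlers.ofLines());

        response.body().forEach(line -> System.out.println("update: " + line));
    }
}

The same principle applies to whichever Indy option you pick from the list above: the goal is to get hold of each update as it streams in rather than waiting for the whole (endless) response body.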

How does a reactive stream work over HTTP? What is reactive HTTP?

I'm kind of new to Reactive Streams, so I have a question about using Spring WebFlux and Reactor.
I made a snippet like below:
@RestController
public class TestController {
    @GetMapping("responsebody/flux")
    public Flux<String> tt2() {
        return Flux.range(1, 5)
                .delayElements(Duration.ofMillis(1000))
                .map(l -> "hi");
    }
}
Interestingly, Chrome shows each element in the sequence separately, rather than showing everything at once, when I request it just using the browser. (But the dev tools show the whole body at once.)
But I'm wondering HOW IT WORKS when HTTP/1 uses only one connection and the data the server sends is put in the body of the HTTP response. HOW can the client know what separates each element and when the sequence completes? And what if the client is not ready to use reactive streams?
I don't need any code using a reactive library; I just want to know how the protocol works.
and what if client is not ready to use reactive stream?
The client has no idea about "reactive streams".
The behaviour you're wondering about is achieved with the chunked transfer encoding mechanism. When a client sends a request, the server responds with the header Transfer-Encoding: chunked.
The client then starts receiving the data in a series of chunks.
The Content-Length header is omitted in this case. Each chunk starts with the length of the current chunk in hexadecimal format, followed by '\r\n', then the chunk itself, followed by another '\r\n'. The terminating chunk is a regular chunk, except that its length is zero. It is followed by the trailer, which consists of a (possibly empty) sequence of header fields.
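For the Flux controller above, the response on the wire could therefore look roughly like this, with each emitted "hi" arriving as its own chunk (headers abbreviated and illustrative; the exact headers and chunk boundaries are up to the server):

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

2\r\n          <- size of the next chunk, in hex
hi\r\n         <- the chunk itself (one element of the Flux)
2\r\n
hi\r\n
... three more chunks, one per remaining element ...
0\r\n          <- zero-length terminating chunk
\r\n

The browser can render each chunk as it arrives (which is why Chrome shows the elements one by one), and it knows the sequence is complete when the zero-length chunk arrives.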

Send a File as well as parameters (through JSON) inside one HTTP request

I am creating a server using Go that allows the client to upload a file and then use a server function to parse the file. Currently, I am using two separate requests:
1) The first request sends the file the user has uploaded.
2) The second request sends the parameters that the server needs to parse the file.
However, I have realised that, due to the nature of the program, there can be a concurrency problem if multiple users try to use the server at the same time. My solution was to use mutex locks. However, since I receive the file, send a response, and then receive the parameters, it seems that Go cannot send a response back while the mutex is locked. I am thinking about solving this by sending both the file and the parameters in a single HTTP request. Is there a way to do that? Thanks.
Sample code (only relevant parts):
Code to send file from client:
handleUpload() {
  const data = new FormData()
  for (var x = 0; x < this.state.selectedFile.length; x++) {
    data.append('myFile', this.state.selectedFile[x])
  }
  var self = this;
  let url = *the appropriate url*
  axios.post(url, data, {})
    .then(res => {
      // other logic
      self.handleParser();
    })
}
Code for handleParser():
handleNessusParser() {
  let parserParameter = {
    SourcePath : location,
    ProjectName : this.state.projectName
  }
  // fetch the response from the server
  let self = this;
  let url = *url*
  fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(parserParameter),
  }).then( (response) => {
    if (response.status === 200) {
      // success logic
    }
  }).catch (function (error) {
    console.log("error: ", error);
  });
}
The question is not really about Go or reactjs or any particular software library.
To solve your problem you'd first need to understand how HTTP POST works,
hence I invite you to first read this intro on MDN.
In short:
There are multiple ways to encode the data sent in a POST request.
The way the receiver should deal with this data depends on how it's encoded by the sender.
The sender has to communicate the encoding with its request — usually via the Content-Type header field.
I won't go into the details of possible encodings — the referenced introductory material covers them, and you should do your own research on them, but to maybe recap what's written there, here is some perspective.
Back in the '80s and '90s the web was "static" and the dreaded era of JavaScript-heavy "web apps" had not yet arrived. "Static" means you could not run any code in the client's browser and had to encode any communication with the server in terms of plain HTML.
An HTML document had two ways to make the client rendering it send something back to the server: a) embed a URL that includes query parameters, which makes the client perform a GET request with these parameters sent to the server; b) embed an HTML "form" which, when "submitted", results in a rather more complex POST request carrying the data taken from the filled-in form.
The latter approach was the way to leverage the browser's ability to perform reasonably complex data processing, such as slurping a file selected by the user in a specific form control, encoding it appropriately and sending it to the server along with the other form data.
There were two ways to encode the form's data, and they are both covered by the linked article, please read about them.
The crucial thing to understand about this "static web with forms" approach is that it worked like this: the server sends an HTML document containing a web form, the browser renders the document, the user fills the form in and clicks the "submit" button rendered by the browser; the browser collects the data from the form's controls, for entries of type "file" it reads and encodes the contents of those files and finally performs an HTTP POST request with this stuff encoded to the URL specified by the form. The server would typically respond with another HTML document and so on.
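For illustration, a plain HTML form that would produce such a POST on submission might look like this (the action URL is hypothetical; the field names myFile and ProjectName are borrowed from the question's client code):

<form action="/upload" method="post" enctype="multipart/form-data">
  <!-- the browser itself reads and encodes the selected file on submit -->
  <input type="file" name="myFile">
  <input type="text" name="ProjectName">
  <button type="submit">Upload</button>
</form>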
OK, so then came "web 2.0", and "XHR" (XMLHttpRequest) was invented. It has "XML" in its name because that was the time when XML was perceived by some as a holy grail that would solve any computing problem (which it, of course, failed to do). That thing was invented to be able to send almost arbitrary data payloads; XML and JSON encodings were supported at least.
The crucial thing to understand is that this way to communicate with the server is completely parallel to the original one, and the only thing they share is that they both use HTTP POST requests.
By now you should possibly see the whole picture: contemporary JS libs allow you to construct and perform any sort of request: they let you create a "web form"-style request, or create a JS object, serialise it to JSON, and send the result in an HTTP POST request.
As you can see, any approach allows you to pass structured data containing multiple distinct pieces of data to the server, and the way to handle this all is a matter of agreement between the server and the client, that is, the API convention, if you want.
The difference between various approaches is that the web-form-style approach would take care of encoding the contents of the file for you, while if you opt to send your file in a JSON object, you'll need to encode it yourself — say, using base64 encoding.
Combined approaches are possible, too.
For instance, you can directly send binary data of a file as a POST request's body, and submit a set of parameters along with the request by encoding them as query-parameters of the URL. Again, it's up to the agreement between the client and the server about how the latter encodes the data to be sent and the former decodes them.
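To make this concrete for the question at hand, the two requests from the client code above could be collapsed into a single multipart/form-data POST that carries the file in one part and the parsing parameters in another; roughly like this on the wire (the boundary, the filename and the "params" part name are illustrative; the server reads each part by name):

POST /upload HTTP/1.1
Content-Type: multipart/form-data; boundary=----boundary123

------boundary123
Content-Disposition: form-data; name="myFile"; filename="report.dat"
Content-Type: application/octet-stream

...raw file bytes...
------boundary123
Content-Disposition: form-data; name="params"
Content-Type: application/json

{"SourcePath":"...","ProjectName":"demo"}
------boundary123--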
All in all, I'd recommend taking a pause to educate yourself on the stuff outlined above, and then having another stab at solving the problem, this time with a reasonably complete understanding of how it all works under the hood and how you intend to wield it.

Why Tomcat returns different headers for HEAD and GET requests to my RESTful API?

My initial purpose was to verify the HTTP chunked transfer. But accidentally found this inconsistency.
The API is designed to return a file to client. I use HEAD and GET methods against it. Different headers are returned.
For GET, I get these headers: (This is what I expected.)
For HEAD, I get these headers:
According to this thread, HEAD and GET SHOULD return identical headers but not necessarily.
My question is:
If Transfer-Encoding: chunked is used because the file is dynamically fed to the client and the Tomcat server cannot know its size beforehand, how can Tomcat know the Content-Length when the HEAD method is used? Does Tomcat just dry-run the handler and count all the file bytes? Why doesn't it simply return the same Transfer-Encoding: chunked header?
Below is my RESTful API implemented with Spring Web MVC:
@RestController
public class ChunkedTransferAPI {

    @Autowired
    ServletContext servletContext;

    @RequestMapping(value = "bootfile.efi", method = { RequestMethod.GET, RequestMethod.HEAD })
    public void doHttpBoot(HttpServletResponse response) {
        String filename = "/bootfile.efi";
        try {
            ServletOutputStream output = response.getOutputStream();
            InputStream input = servletContext.getResourceAsStream(filename);
            BufferedInputStream bufferedInput = new BufferedInputStream(input);
            int datum = bufferedInput.read();
            while (datum != -1) {
                output.write(datum);
                datum = bufferedInput.read();
            }
            output.flush();
            output.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
ADD 1
In my code, I didn't explicitly add any headers, so it must be Tomcat that adds the Content-Length and Transfer-Encoding headers as it sees fit.
So, what are the rules for Tomcat to decide which headers to send?
ADD 2
Maybe it's related to how Tomcat works. I hope someone can shed some light here. Otherwise, I will debug into the source of Tomcat 8 and share the result. But that may take a while.
Related:
HTTP HEAD and GET different result
Content-Length header with HEAD requests?
Does Tomcat just dry-run the handler and count all the file bytes?
Yes, the default implementation of javax.servlet.http.HttpServlet.doHead() does that.
You can look at helper classes NoBodyResponse, NoBodyOutputStream in HttpServlet.java
The DefaultServlet class (the Tomcat servlet that is used to serve static files) is wiser. It is capable of sending the correct Content-Length value, as well as serving GET requests for a subset of the file (the Range header). You can forward your request to that servlet with:
servletContext.getNamedDispatcher("default").forward(request, response);
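In the controller from the question, that could look roughly like this (a sketch; it assumes the HttpServletRequest is added as a handler parameter and that /bootfile.efi is resolvable as a static resource by the default servlet):

@RequestMapping(value = "bootfile.efi", method = { RequestMethod.GET, RequestMethod.HEAD })
public void doHttpBoot(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // Delegate to Tomcat's DefaultServlet, which sets Content-Length itself
    // for both GET and HEAD and also supports Range requests.
    servletContext.getNamedDispatcher("default").forward(request, response);
}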
Although it seems strange, it might make sense to send the size only in response to a HEAD request and chunked in response to a GET request, depending on the type of data that has to be returned by the server.
While your API seems to provide a static file, you also talk about dynamically created files or data, so I will be talking in general here (also for webservers in general).
First let's have a look at the different usages for GET and HEAD:
With GET the client is requesting the whole file or data (or a range of the data), and wants it as fast as possible. So there is no specific reason for the server to send the size of the data first, especially when it could start sending faster/sooner in chunked mode. So the fastest possible way is preferred here (the client will have the size after the download anyway).
With HEAD on the other hand, the client usually wants some specific information. This could just be a check on existence or 'last-changed', but it could also be used if the client wants a certain part of the data (with a range request, including a check to see whether range requests are supported for that resource), or just needs to know the size of the data up front for some reason.
Let's look at some possible scenarios:
Static file:
HEAD: there's no reason to not include the size in the response-header because that information is available.
GET: most of the time the size will be included in the header and the data sent in one go, unless there are specific performance reasons to send it in chunks. On the other hand, it seems you are expecting a chunked transfer for your file, so this could make sense here.
Live logfile:
Ok, somewhat strange, but possible: downloading a file where the size could change while downloading.
HEAD: again, the client probably wants the size, and the server can easily provide the size of the file at that specific time in the header.
GET: since loglines could be added while downloading, the size is unknown up front. Only option is to send chunked.
Table with fixed-sized records:
Let's imagine a server needs to send back a table with fixed-length records coming from multiple sources/databases:
HEAD: size is probably wanted by the client. The server could quickly do a query for count in each database, and send the calculated size back to the client.
GET: instead of doing a query for count in each database first, the server better starts sending the resulting records from each database in chunks.
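For example (numbers made up for illustration): if the count queries report 1,000 matching records in total and each record is a fixed 64 bytes, the HEAD response can immediately carry Content-Length: 64000 without fetching a single row, while the GET response just starts streaming the records in chunks.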
Dynamically generated zip-files:
Maybe not common, but an interesting example.
Imagine you want to provide dynamically generated zip-files to the user based on some parameters.
Let's first have a look at the structure of a zip-file:
There are two parts: first there's a block for each file: a small header followed by the compressed data for that file. Then there's a list of all the files inside the zip-file (including sizes/positions).
So the prepared blocks for each file could be pre-generated on disk (and the names/sizes stored in some data structure).
HEAD: the client probably wants to know the size here. The server can easily calculate the size of all the needed blocks + the size of the second part with the list of the files inside.
If the client wants to extract a single file, it could directly ask for the last part of the zip (with a range request) to get the list, and then ask for that single file with a second request. Although the size is not strictly needed to get the last n bytes, it could be handy if, for example, you wanted to store the different parts in a sparse file with the same size as the full zip-file.
GET: no need to do the calculations first (including generating the second part to know its size). It would be better and faster to just start sending each block in chunks.
Fully dynamically generated file:
In this case it wouldn't be very efficient to return the size to a HEAD request of course, since the whole file would need to be generated just to know its size.

How do I return multiple "documents" from a web forms application?

We have a webforms application that generates parametric documents. The user supplies some information, clicks a button, and our web service generates Word documents.
The service works for one document at a time but not batches. We want to add the ability to process more than one document. We now have the code below, where contactIdsForLetters is a List<int>.
foreach (int contactId in contactIdsForLetters)
{
    string parameters = string.Format("ContactID~{0}", contactId);
    string defaultFilename = Reporting.Utilities.CreateDefaultFileName(outputformat);
    byte[] bytes = Reporting.Reports.CreateReport(selectedReportId, parameters, outputformat, out serviceCallWasSuccessful);

    if (!serviceCallWasSuccessful || bytes == null)
    {
        Reporting.Reports.LogReportActivity(selectedReportId, string.Empty, parameters, userLogin, false);
        return;
    }

    Reporting.Reports.LogReportActivity(selectedReportId, string.Empty, parameters, userLogin, true);
    Reporting.Utilities.SendResponse(defaultFilename, bytes);
}
When running the above code, only one document is ever returned. One document is processed (the For-Each never gets to the second item in contactIdsForLetters), a dialog pops up asking to open or save the file, and after clicking open, Word opens with the document. Everything is happening like it should but we can't get the For-Each to process the second and subsequent documents.
The users want a separate Word session for each document returned. Subsequent documents will need to open in their own Word sessions.
How do I loop through a List<int>, send each int to a service one-at-a-time, and open a Word session for each returned document?
Here is SendResponse() ...
public static void SendResponse(string defaultFilename, byte[] bytes)
{
    HttpContext.Current.Response.Clear();
    HttpContext.Current.Response.ClearHeaders();
    HttpContext.Current.Response.Buffer = false;
    HttpContext.Current.Response.ContentType = "application/octet-stream";
    HttpContext.Current.Response.AppendHeader("Content-Disposition", string.Format("attachment; filename={0}", defaultFilename));
    HttpContext.Current.Response.AppendHeader("Content-Length", bytes.Length.ToString());
    HttpContext.Current.Response.BinaryWrite(bytes);
    HttpContext.Current.Response.Flush();
    // Response.End() terminates the response for this request, which is why
    // only the first document in the loop ever reaches the browser.
    HttpContext.Current.Response.End();
}
There should be no problem sending a List<int> to the service and having the service return List<byte[]>.
I don't know what you mean by "open a Word session". I hope you're not trying to call Word from an ASP.NET application. That's not supported, is unreliable, and doesn't work very well.
ASP.NET runs on top of HTTP.
HTTP works on a per-request basis.
Each request returns a response if the browser is able to reach the server.
This response contains a stream (we can call it File in your case).
After the server flushes that stream (the first file), the connection for that request is closed and the request life cycle ends, closing any threads related to that request.
This is why you never get to the second document.
You are going to have to deliver the documents in a compressed container like a zip file so all the documents go in a single stream. First get all the documents, package them and send the response to the client.
Hope this helps.
UPDATE: You might also want to look at using AJAX to make a JavaScript call to the server that returns different links to pull the different documents. It would be like clicking on different invisible links that open different documents after the user clicks a single button or link. With this approach it is really easy to achieve what you want. You can trigger the click event for all the invisible links with JavaScript.
