I have written a server using QTcpSocket. The server handles POST requests from a client app, and I use the Content-Length field to determine the end of the request body. The code is as follows:
int length = 0;
QString line = "";
while (this->canReadLine())
{
    line = this->readLine();
    if (line.contains("Content-Length"))
    {
        length = line.split(':')[1].toInt();
    }
    if (line == "\r\n")
    {
        break;
    }
}
for (; this->bytesAvailable() < length; ) {}
QString requestBody = this->readAll();
It works on localhost, but when it runs on a remote server, bytesAvailable() always returns a fixed value: 0 or 2896. Is there something wrong in my code, or is the network causing this?
The bytesAvailable() function only tells you how many bytes are in QTcpSocket's internal buffer; it does not instruct the socket to look for more data coming across the network. Calling this function repeatedly is thus a pointless exercise and can cause your program to hang.
What you are trying to do is wait for more data to arrive. To do this, you must let your program go back to the event loop or call one of the blocking functions, such as waitForReadyRead().
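For example, a minimal blocking sketch could replace the busy-wait like this (the 3-second timeout and the error handling here are illustrative assumptions, not part of your original code):
// Keep blocking until enough data has arrived or the wait times out.
while (this->bytesAvailable() < length)
{
    if (!this->waitForReadyRead(3000)) // returns false on timeout or disconnect
    {
        break; // give up and handle the error as appropriate
    }
}
QString requestBody = this->readAll();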
EDIT:
If you're using standard HTTP processing, also consider using QNetworkAccessManager along with QNetworkReply to simplify the data retrieval process.
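For illustration, a rough client-side sketch of that approach (the URL and payload below are placeholders) might look like:
QNetworkAccessManager manager; // must outlive the request in real code
QNetworkRequest request(QUrl("http://example.com/endpoint")); // placeholder URL
QByteArray postData = "key=value";                            // placeholder body
QNetworkReply *reply = manager.post(request, postData);
QObject::connect(reply, &QNetworkReply::finished, [reply]() {
    QByteArray body = reply->readAll(); // the reply body, fully assembled
    reply->deleteLater();
});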
My understanding of when to use a pointer receiver versus a value receiver is rather weak. Here's a scenario where I can't decide between the two:
I recently learned to reuse an existing http.Client instead of creating a new http.Client each time, in order to benefit from connection pooling. So I did something like this:
type MailClient struct {
    HTTPClient *http.Client
    // ... bunch of other stuff
}

func newMailClient(/* ... arguments for initializing stuff */) *MailClient {
    return &MailClient{
        HTTPClient: &http.Client{},
        // ... init other stuff
    }
}

func (c *MailClient) SendMail(/* ... arguments that form an email request */) {
    // ... prepare the email request
    httpResp, err := c.HTTPClient.Do(/* ... args for sending */)
    if err != nil {
        // ... handle error
    }
    defer httpResp.Body.Close()
    // ...
}
This way, as long as SendMail() is called on the same MailClient, I expect the connection pool to kick in (I understand http.Transport's MaxIdleConnsPerHost defaults to 2 without customization).
But notice I defined SendMail() with a pointer receiver? Yup, I don't really know why I did it. I'm just hoping that by using a pointer receiver, each time the method is called it's the same instance of MailClient doing the work, not a copy of it. I also assumed that the reason a value receiver prevents modifying the receiver is that it operates on a copy.
I also cautiously defined the HTTPClient field in the MailClient struct as a pointer to http.Client, for the same reason: I'm not sure how a non-pointer value would behave here.
So, to summarise my questions:
Will value receivers result in a copy of the receiver being used in the method?
Will changing the receiver type to MailClient affect the connection pooling behavior?
Will changing the HTTPClient field to a http.Client affect the connection pooling behavior?
Connection pooling is implemented in http.Transport. Your application uses the default transport because it never sets the client's Transport field.
No matter what the application does with pointer vs. value receivers, there is no impact on connection pooling, because the application uses the default transport.
The most important part to remember is that method calls translate to function calls with the receiver as an argument. All the rules that apply to function calls apply to method calls with the corresponding kind of receiver (value or pointer). A good way to see this is with the following syntax:
package main

import "log"

type XType struct {
    A int
    B *int
}

func (x XType) Mutate() {
    x.A = 1
    *x.B = 2
}

func (x *XType) MutateRef() {
    x.A = 2
    *x.B = 3
}

func main() {
    i := 5
    x1 := XType{
        A: 9999,
        B: &i,
    }
    // calling Mutate as a function, passing the receiver by value:
    // x1.A stays 9999 (only the copy was mutated), but *x1.B becomes 2
    XType.Mutate(x1)
    log.Printf("%#v", x1)
    log.Printf("%#v", *x1.B)
    // calling MutateRef as a function, passing a pointer to the receiver:
    // both x1.A and *x1.B are updated
    (*XType).MutateRef(&x1)
    log.Printf("%#v", x1)
    log.Printf("%#v", *x1.B)
}
Another thing to note is that when a struct is passed by value, all of its fields are copied. Think of a pointer field as just an integer holding an address: the copy still holds the same address, so it points at the same underlying value.
Now, it's easy to follow the answers to your questions:
Will value receivers result in a copy of the receiver being used in the method?
Yes, because it is the same thing as calling a function and passing a value; the same rules apply.
Will changing the receiver type to MailClient affect the connection pooling behavior?
Not really, since the client field is a pointer. Even if you call the method on a value receiver, the copy still holds the same pointer, so you are still using the same client.
Will changing the HTTPClient field to a http.Client affect the connection pooling behavior?
I don't think so, as the http.Transport is in charge of connection pooling and the client only holds a reference to it. Still, your best bet is to stick with a pointer to the client.
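A small sketch of that last point (the type and field names mirror the question; the rest is illustrative): copying the struct copies the pointer field, so every copy talks to the same underlying client and therefore the same connection pool.
package main

import (
    "fmt"
    "net/http"
)

type MailClient struct {
    HTTPClient *http.Client
}

// Value receiver: c is a copy of the MailClient, but c.HTTPClient is the
// same pointer value, so requests still reuse the same Transport and pool.
func (c MailClient) SameClientAs(other MailClient) bool {
    return c.HTTPClient == other.HTTPClient
}

func main() {
    m := MailClient{HTTPClient: &http.Client{}}
    copyOfM := m                          // copies the pointer, not the client
    fmt.Println(m.SameClientAs(copyOfM))  // prints: true
}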
I currently use the callback below to check the PAYLOAD of incoming MQTT messages, but does anyone know how I could keep doing this while also filtering for messages arriving on a specific TOPIC?
void callback(char *topic, byte *payload, unsigned int length) {
    char p[length + 1];
    memcpy(p, payload, length);
    p[length] = '\0';
    if (!strcmp(p, "home")) {
        Particle.publish(DEVICE_NAME, HOME_MSSG, 60, PRIVATE);
    } else if (!strcmp(p, "chome")) {
        Particle.publish(DEVICE_NAME, CHOME_MSSG, 60, PRIVATE);
    }
}
The topic can be handled in pretty much the same way as the payload; e.g.
if (!strcmp(topic, "thisIsATopic")) {
    // do something
}
Note that the payload is copied for two reasons:
The buffer is reused once the callback returns (so if you store that pointer and refer to it later it may not contain what you expect).
The message is binary so it is important to ensure a \0 is added to the end if using functions like strcmp (to avoid overruns).
It looks like the library you are using copies the topic so you should be fine using that as-is (unlike with some other libraries).
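Putting the two checks together, a sketch of the callback might look like this (the topic string "devices/frontdoor" is just an illustrative placeholder):
void callback(char *topic, byte *payload, unsigned int length) {
    char p[length + 1];
    memcpy(p, payload, length);     // copy: the buffer is reused after the callback
    p[length] = '\0';               // terminate: the payload is binary, not a C string

    if (!strcmp(topic, "devices/frontdoor")) {   // filter on the topic first
        if (!strcmp(p, "home")) {
            Particle.publish(DEVICE_NAME, HOME_MSSG, 60, PRIVATE);
        } else if (!strcmp(p, "chome")) {
            Particle.publish(DEVICE_NAME, CHOME_MSSG, 60, PRIVATE);
        }
    }
}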
I'm learning gRPC using the official docs, but I found the method signatures for client streaming and bidirectional streaming very confusing (the two are identical).
From the doc here, the function takes a StreamObserver<ResponseType> as the input parameter and returns a StreamObserver<RequestType> instance, like the following:
public StreamObserver<RequestType> bidirectionalStreamingExample(
    StreamObserver<ResponseType> responseObserver)
But in my mind, it should take the RequestType as input and return the ResponseType:
public StreamObserver<ResponseType> bidirectionalStreamingExample(
    StreamObserver<RequestType> requestObserver)
This confuses me a lot, and I'm actually a little surprised the answer didn't pop up when I searched Google; I thought many people would have the same question. Am I missing something obvious here? Why does gRPC define the signature like this?
Your confusion probably stems from being used to REST or non-streaming frameworks, where request-response is often mapped to a function's parameter and return value. The paradigm shift here is that you're no longer supplying a request and receiving a response, but rather channels to drop requests and responses into. If you've studied C or C++, it's very much like going from
int get_square_root(int input);
to
void get_square_root(int input, int& output);
See how output's now a parameter? But in case that makes no sense at all (my fault :-) here's a more organic path:
Server Streaming
Let's start with the server streaming stub, even if your eventual goal is client streaming.
public void serverStreamingExample(
    RequestType request,
    StreamObserver<ResponseType> responseObserver)
Q: Why is the "response" in the parameter list? A: It's not the response that's in the parameter list, but rather a channel to feed the eventual response to. So for example:
public void serverStreamingExample(
    RequestType request,
    StreamObserver<ResponseType> responseObserver) {
  ResponseType response = processRequest(request);
  responseObserver.onNext(response); // this is the "return"
  responseObserver.onCompleted();
}
Why? Because the point of streaming is to keep alive a channel through which responses can keep flowing. If you could only return one response and then the function's done, that's not a stream. By supplying a channel, you as the developer can choose to pass it along as needed, feeding it as many responses as you'd like via onNext() until you're satisfied and call onCompleted().
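For instance, a variant of the stub above could stream several responses before completing (processRequestIntoPieces is a hypothetical helper, not part of the gRPC API):
public void serverStreamingExample(
    RequestType request,
    StreamObserver<ResponseType> responseObserver) {
  for (ResponseType response : processRequestIntoPieces(request)) { // hypothetical helper
    responseObserver.onNext(response); // each piece flows down the same channel
  }
  responseObserver.onCompleted(); // end the stream when satisfied
}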
Client Streaming
Now, let's move on to the client streaming stub:
public StreamObserver<RequestType> clientStreamingExample(
    StreamObserver<ResponseType> responseObserver)
Q: Wait, what?! We know why the response is in the parameter list now, but how does it make sense to return a request? A: Again, we're not actually returning a request, but a channel for the client to drop requests into! Why? Because the point of client streaming is to allow the client to supply its request in pieces. It can't do that with a single, traditional call to the server. So here's one way this can be implemented:
class ClientStreamingExample {
  int piecesRcvd = 0;

  public StreamObserver<RequestType> myClientStreamingEndpoint(
      StreamObserver<ResponseType> responseObserver) {
    return new StreamObserver<RequestType>() {
      @Override
      public void onNext(RequestType requestPiece) {
        // do whatever you want with the request pieces
        piecesRcvd++;
      }

      @Override
      public void onCompleted() {
        // when the client says they're done sending request pieces,
        // send them a response back (but you don't have to! or it can
        // be conditional!)
        ResponseType response =
            new ResponseType("received " + piecesRcvd + " pieces");
        responseObserver.onNext(response);
        responseObserver.onCompleted();
        piecesRcvd = 0;
      }

      @Override
      public void onError(Throwable t) {
        piecesRcvd = 0;
      }
    };
  }
}
You might have to spend a little time studying this to fully understand, but basically, since the client may now send a stream of requests, you have to define handlers for each request piece, as well as handlers for the client saying it's done or errored out. (In my example, I have the server only respond when the client says it's done, but you're free to do anything you want. You can even have the server respond even before the client says it's done or not respond at all.)
Bidirectional Streaming
This isn't really a thing! :-) What I mean is, the tutorials just mean to point out that nothing's stopping you from implementing exactly the above on both sides. So you end up with two applications that send and receive requests in pieces, and send and receive responses in pieces. They call this setup bidirectional streaming, and they're right to, but it's a little misleading, since it's not doing anything technically different from client streaming. That's exactly why the signatures are the same. IMHO, tutorials should just add a note like this one rather than repeat the stub.
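For concreteness, here is a minimal sketch (type names are placeholders, mirroring the earlier examples) where the server replies to each request piece as it arrives rather than waiting for onCompleted():
public StreamObserver<RequestType> bidirectionalStreamingExample(
    StreamObserver<ResponseType> responseObserver) {
  return new StreamObserver<RequestType>() {
    @Override
    public void onNext(RequestType requestPiece) {
      // reply per piece; how responses interleave with requests is up to you
      responseObserver.onNext(new ResponseType("got a piece"));
    }

    @Override
    public void onError(Throwable t) {
      // the client errored out; nothing more to send
    }

    @Override
    public void onCompleted() {
      responseObserver.onCompleted(); // close the response stream
    }
  };
}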
Optional: Just for "fun"...
We began with the C++ analogy of going from
int get_square_root(int input); // "traditional" request-response
to
void get_square_root(int input, int& output); // server streaming
Do we want to carry on this analogy? Of course we do.
🎵 Hello, C++ function pointers, my old friend... 🎶
void (*get_square_root_fn(int& output))(int); // client streaming
And a demonstration of its use(lessness):
int main() { // aka the client
    int result;
    void (*fnPtr)(int) = server.get_square_root_fn(result);
    fnPtr(2);
    std::cout << result << std::endl; // 1.4142 assuming the fn actually does sqrt
}
As far as I understand, one has two options to port a C program to Native Client:
1. Implement a number of initializing functions like PPP_InitializeModule and PPP_GetInterface.
2. Simply pass your main function to PPAPI_SIMPLE_REGISTER_MAIN.
So the question is: how can I implement JS message handling (handling messages emitted by JS code in native code) in the second case?
Take a look at some of the examples in the SDK's examples/demo directory: earth, voronoi, flock, pi_generator, and life all use ppapi_simple.
Here's basically how it works:
When using ppapi_simple, all events (e.g. input events, messages from JavaScript) are added to an event queue. The following code is from the life example (though some of it is modified and untested):
PSEventSetFilter(PSE_ALL);
while (true) {
  PSEvent* ps_event;
  /* Process all waiting events without blocking */
  while ((ps_event = PSEventTryAcquire()) != NULL) {
    earth.HandleEvent(ps_event);
    PSEventRelease(ps_event);
  }
  ...
}
The event handler then determines what kind of event it is and handles it in an application-specific way:
void ProcessEvent(PSEvent* ps_event) {
  ...
  if (ps_event->type == PSE_INSTANCE_HANDLEINPUT) {
    ...
  } else if (ps_event->type == PSE_INSTANCE_HANDLEMESSAGE) {
    // ps_event->as_var is a PP_Var with the value sent from JavaScript.
    // See docs for it here: https://developers.google.com/native-client/dev/pepperc/struct_p_p___var
    if (ps_event->as_var.type == PP_VARTYPE_STRING) {
      const char* message;
      uint32_t len;
      message = PSInterfaceVar()->VarToUtf8(ps_event->as_var, &len);
      // Do something with the message. Note that it is NOT null-terminated.
    }
  }
}
To send messages back to JavaScript, use the PostMessage function on the messaging interface:
PP_Var message;
message = PSInterfaceVar()->VarFromUtf8("Hello, World!", 13);
// Send a string message to JavaScript
PSInterfaceMessaging()->PostMessage(PSGetInstanceId(), message);
// Release the string resource
PSInterfaceVar()->Release(message);
You can send and receive other JavaScript types too: ints, floats, arrays, array buffers, and dictionaries. See also PPB_VarArray, PPB_VarArrayBuffer and PPB_VarDictionary interfaces.
I have a page that needs to combine data from four different webrequests into a single list of items. Currently, I'm running these sequentially, appending to a single list, then binding that list to my repeater.
However, I would like to be able to call these four webrequests asynchronously so that they can run simultaneously and save load time. Unfortunately, all the async tutorials and articles I've seen deal with a single request, using the finished handler to continue processing.
How can I perform the four (this might even increase!) simultaneously, keeping in mind that each result has to be fed into a single list?
many thanks!
EDIT: a simplified example of what I'm doing:
var itm1 = Serialize(GetItems(url1));
list.AddRange(itm1);
var itm2 = Serialize(GetItems(url2));
list.AddRange(itm2);

string GetItems(string url)
{
    var webRequest = WebRequest.Create(url) as HttpWebRequest;
    var response = webRequest.GetResponse() as HttpWebResponse;
    string retval;
    using (var sr = new StreamReader(response.GetResponseStream()))
    {
        retval = sr.ReadToEnd();
    }
    return retval;
}
This should be really simple since your final data depends on the result of all the four requests.
What you can do is create four async delegates, each pointing to the appropriate web method, do a BeginInvoke on all of them, and then use a WaitHandle to wait for them all. There is no need to use callbacks in your case, as you do not want to continue while the web methods are being processed, but rather wait till all of them finish execution.
Only after all web methods are executed, will the code after the wait statement execute. Here you can combine the 4 results.
Here's some sample code I developed for you:
using System;
using System.Collections.Generic;
using System.Runtime.Remoting.Messaging;
using System.Threading;

class Program
{
    delegate string DelegateCallWebMethod(string arg1, string arg2);

    static void Main(string[] args)
    {
        // Create a delegate to point to the web method
        // If the web methods have different signatures you can put them in a common method and call the web methods from within
        // If that is not possible you can have a List of DelegateCallWebMethod
        DelegateCallWebMethod del = new DelegateCallWebMethod(CallWebMethod);

        // Create lists of IAsyncResults and WaitHandles
        List<IAsyncResult> results = new List<IAsyncResult>();
        List<WaitHandle> waitHandles = new List<WaitHandle>();

        // Call the web methods asynchronously and store the results and wait handles for later use
        for (int counter = 0; counter < 4; )
        {
            IAsyncResult result = del.BeginInvoke("Method ", (++counter).ToString(), null, null);
            results.Add(result);
            waitHandles.Add(result.AsyncWaitHandle);
        }

        // Make sure that further processing is halted until all the web methods are executed
        WaitHandle.WaitAll(waitHandles.ToArray());

        // Collect the web responses
        string webResponse = String.Empty;
        foreach (IAsyncResult result in results)
        {
            DelegateCallWebMethod invokedDel = (result as AsyncResult).AsyncDelegate as DelegateCallWebMethod;
            webResponse += invokedDel.EndInvoke(result);
        }
    }

    // Web method or a class method that sends web requests
    public static string CallWebMethod(string arg1, string arg2)
    {
        // Code that calls the web method and returns the result
        return arg1 + " " + arg2 + " called\n";
    }
}
How about launching each request on its own thread and then appending the results to the list?
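A rough sketch of that idea, reusing the GetItems and Serialize helpers and the list from the question (url3 and url4 are assumed additional URLs):
var urls = new[] { url1, url2, url3, url4 };
var threads = new List<Thread>();
var syncRoot = new object();

foreach (var url in urls)
{
    var u = url; // capture a copy for the closure
    var t = new Thread(() =>
    {
        var items = Serialize(GetItems(u));
        lock (syncRoot) { list.AddRange(items); } // guard the shared list
    });
    threads.Add(t);
    t.Start();
}

// Wait for all downloads to finish before binding the list to the repeater
foreach (var t in threads) { t.Join(); }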
You can test the following code:
Parallel.Invoke(() =>
{
//TODO run your requests...
});
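Filling in that TODO, a sketch using the GetItems and Serialize helpers from the question might look like this (completion order is not guaranteed, but Parallel.Invoke returns only after all four calls finish):
string r1 = null, r2 = null, r3 = null, r4 = null;
Parallel.Invoke(
    () => r1 = GetItems(url1),
    () => r2 = GetItems(url2),
    () => r3 = GetItems(url3),
    () => r4 = GetItems(url4));

list.AddRange(Serialize(r1));
list.AddRange(Serialize(r2));
list.AddRange(Serialize(r3));
list.AddRange(Serialize(r4));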
You need to reference the Parallel Extensions:
http://msdn.microsoft.com/en-us/concurrency/bb896007.aspx
@Josh: Regarding your question about sending four (potentially more) asynchronous requests and keeping track of the responses (for example, to feed them into a list): you can write four requests and four response handlers, but since you may eventually have more requests, you can write an asynchronous loop instead. A classic for loop is made of an init, a condition, and an increment; it can be broken down into an equivalent while loop, the while loop can be turned into a recursive function, and that recursive function can then be made asynchronous.
I put some sample scripts at http://asynchronous.me/ ; in your case, select the for loop in the options. If you want the requests to be sent in sequence, i.e. one request after the previous response (request1, response1, request2, response2, request3, response3, etc.), then choose serial (i.e. sequential) communication, but the code is a bit more intricate. On the other hand, if you don't care about the order in which the responses are received, choose parallel (i.e. concurrent) communication; the code is more intuitive. In either case, each response is associated with its corresponding request by an identifier (a simple integer), so you can keep track of them all.
The site will give you a sample script. The samples are written in JavaScript, but the idea applies to any language; adapt the script to your language and coding preferences. With that script, your browser will send the four requests asynchronously, and the identifier lets you track which request each response corresponds to. Hope this helps. /Thibaud Lopez Schneider