How does the client contact the impalad daemon when a query is fired?
What exactly happens in the background when a client fires a query that has to be executed by Impala?
Take impala-shell as an example: it is an ImpalaShell Python class that extends cmd.Cmd. The user will:
1) Type connect ip:port in the shell, which calls do_connect(...) and connects to the Impala backend through Thrift; a Thrift client is created as self.imp_service = ImpalaService.Client(protocol).
2) Type select xxx from table ... in the shell, which calls do_select(...); this in turn calls self.imp_service.query(query), which is a Thrift RPC.
3) The query RPC is then executed on the impalad side by void ImpalaServer::query(QueryHandle&, const Query&):
the coordinator parses the query and creates a fragmented plan tree, assigning each fragment to a set of hosts to execute;
RPC calls are issued in parallel to each host for each fragment;
a parent fragment waits until its child fragments are done.
4) When all fragments are done, the data is shown on the screen after fetch(), which is a Thrift call from the client side.
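Putting steps 1), 2) and 4) together, the client-side round trip looks roughly like the sketch below. The stub module names (ImpalaService, beeswaxd), host, port, and query text are assumptions for illustration, not verbatim impala-shell code:

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from ImpalaService import ImpalaService        # generated Thrift stubs, assumed importable
from beeswaxd.ttypes import Query

socket = TSocket.TSocket('impalad-host', 21000)        # step 1: connect ip:port
transport = TTransport.TBufferedTransport(socket)
protocol = TBinaryProtocol.TBinaryProtocol(transport)
imp_service = ImpalaService.Client(protocol)           # the shell's self.imp_service
transport.open()

handle = imp_service.query(Query(query='select xxx from some_table'))  # step 2: the Thrift RPC
results = imp_service.fetch(handle, False, 1024)       # step 4: fetch() pulls rows back
for row in results.data:
    print(row)
transport.close()

The query() call returns a handle rather than data because execution is distributed across hosts (step 3); fetch() then streams the assembled results back over the same Thrift connection.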
I have a problem with a message that I send from my custom TCP client app service to the server (which also runs my custom application-layer service) in an OMNeT++ simulation.
My TCPCustomClientApp service is derived from the TCPBasicClientApp service in the INET framework. I overrode some methods such as initialize, handleMessage, and socketEstablished, and I added some helper methods for my needs.
I have my custom message; now, after some trigger from the network, I would like to send this message to the server, encapsulated in a GenericAppMsg.
This is my code:
...
if (trigger) {
    connect(); // connect to the server - 3-way TCP handshake
    auto customMsg = new MyCustomMessage();
    customMsg->set ...
    msgBuffer.push_back(customMsg); // list of pending messages
}
Then, in the method socketEstablished(int connId, void *ptr), I have this code for sending:
auto msg = new GenericAppMsg();
msg->setByteLength(requestLength);
msg->setExpectedReplyLength(replyLength);
msg->setServerClose(false);
msg->setKind(1); // set message kind to 1 = TCP_I_DATA (defined in enum TcpStatusInd in TCPCommand.msg)
msg->encapsulate(msgBuffer.front()); // encapsulate my custom message into GenericAppMsg
sendPacket(msg);
The problem is that when this message arrives at the server, its kind is 3 = ESTABLISHED.
What am I missing? Is the sending wrong?
The kind field is a freely usable field in messages that can be used for anything, but you should be aware that there is absolutely no guarantee that you will get the same value in the kind field on the receiving side. It is considered metadata that is bound to the actual message object. Down in the lower OSI layers the packet may be aggregated or fragmented, so the identity of the message object is not preserved.
In short, exchanging data in the kind field is safe only for communication between two modules that are directly connected. If there is anything between them, you cannot be sure whether the message is forwarded unchanged, recreated with the same content, or whether some module on the path decides to use the kind field for something else.
Anything that you want to pass to the other end must be encapsulated inside the message; for example, as a field of MyCustomMessage rather than as the kind of the outer GenericAppMsg.
I have a server-side streaming gRPC service that may have messages coming in very rapidly. A nice-to-have client feature would be knowing whether more updates are already queued by the time this onNext execution is ready to display in the UI, as I would simply display the next one instead.
StreamObserver<Info> streamObserver = new StreamObserver<Info>()
{
    @Override
    public void onNext( Info info )
    {
        doStuffForALittleWhile();
        if( !someHasNextFunction() )
            render();
    }

    @Override
    public void onError( Throwable t ) { }

    @Override
    public void onCompleted( ) { }
};
Is there some hasNext function or method of detection that I'm unaware of?
There's no API to determine if additional messages have been received, but not yet delivered to the application.
The client-side stub API (e.g., StreamObserver) is implemented using the more advanced ClientCall/ClientCall.Listener API. It does not provide any received-but-not-delivered hint.
Internally, gRPC processes messages lazily. gRPC waits until the application is ready for more messages (typically by returning from StreamObserver.onNext()) to try to decode another message. If it decodes another message then it will immediately begin delivering that message.
One way would be to keep a small buffer of messages coming out of onNext. That would let you show the current message, and then check whether another has arrived in the meantime.
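The question is about the Java stub, but the buffering idea is language-independent. Here is a sketch of it using gRPC's Python API; the stub, the Subscribe method, the info_pb2 module, and render() are hypothetical names. A background thread drains the stream into a queue so the consumer can skip straight to the newest buffered update:

import queue
import threading

def consume(stub):
    buf = queue.Queue()

    def drain():
        # A server-streaming call in Python returns an iterator; pull
        # messages off it as fast as they arrive so they pile up in our
        # own buffer instead of inside gRPC.
        for info in stub.Subscribe(info_pb2.SubscribeRequest()):
            buf.put(info)
        buf.put(None)  # sentinel: stream finished

    threading.Thread(target=drain, daemon=True).start()

    while True:
        info = buf.get()
        if info is None:
            return
        # Coalesce: if newer updates are already buffered, jump to the
        # most recent one instead of rendering each stale message.
        while True:
            try:
                newer = buf.get_nowait()
            except queue.Empty:
                break
            if newer is None:
                render(info)
                return
            info = newer
        render(info)  # hypothetical UI call

The Java equivalent would be the same shape: onNext does nothing but enqueue, and the UI thread polls the queue and renders only the last element it finds.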
I use grpc::CompletionQueue in my program; you can also find the example in
grpc/examples/cpp/helloworld/greeter_async_client.cc.
The problem code is as follows:
// stub_->PrepareAsyncSayHello() creates an RPC object, returning
// an instance to store in "call" but does not actually start the RPC
// Because we are using the asynchronous API, we need to hold on to
// the "call" instance in order to get updates on the ongoing RPC.
call->response_reader =
stub_->PrepareAsyncSayHello(&call->context, request, &cq_);
// StartCall initiates the RPC call
call->response_reader->StartCall();
// Request that, upon completion of the RPC, "reply" be updated with the
// server's response; "status" with the indication of whether the operation
// was successful. Tag the request with the memory address of the call object.
call->response_reader->Finish(&call->reply, &call->status, (void*)call);
The client sends 1, 2, 3, ..., 100 to the server, but the server receives the numbers as "100, 99, 98, ..., 2, 1". Why? I could not find anything about this in the source code... thank you very much.
Also, does gRPC use the Nagle algorithm?
CompletionQueue is somewhat of a misnomer: it returns events in the order they finish, rather than the order they are issued.
gRPC C++ disables the Nagle algorithm by default.
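This is not specific to gRPC; any completion-based API behaves this way. As an illustration only (a Python sketch, not gRPC code), the following submits work in order 1..100 and collects results in completion order; if the receiver needs the original order, carry a sequence number in the message (or, in the gRPC case, in the tag) and sort on arrival:

import concurrent.futures
import random
import time

def fake_rpc(seq):
    time.sleep(random.uniform(0, 0.05))  # each "RPC" finishes at its own time
    return seq

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(fake_rpc, seq) for seq in range(1, 101)]  # issued in order
    # as_completed plays the role of cq_.Next(): it yields whichever
    # operation completed first, so the sequence comes back shuffled.
    completed = [f.result() for f in concurrent.futures.as_completed(futures)]

print(completed)          # some completion-order permutation of 1..100
print(sorted(completed))  # reordering by sequence number restores 1..100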
I am trying to use Skype's DBus API in order to retrieve the list of messages (message IDs) I've exchanged with a contact. However, both the SEARCH CHATMESSAGES <target> (protocol >= 3) and the SEARCH MESSAGES <target> (protocol < 3) commands return unexpectedly empty results.
Here is the trace of a few exchanges I had with the API. I used d-feet to send my requests, but the result is exactly the same when I send the request from my own program.
Bus name: com.Skype.API
Object: /com/Skype
Interface: com.Skype.API
Method used: Invoke(String request)
Trace:
-> NAME dfeet
<- OK
-> PROTOCOL 8
<- PROTOCOL 8
-> SEARCH CHATMESSAGES mycontact
<-
The same thing happens with two other SEARCH commands:
SEARCH MESSAGES <target> (with PROTOCOL 2).
SEARCH CHATS
Additionally, I also get an empty result when I try to request a message list based on a chat ID: GET CHAT <chat_id> GETMESSAGES.
However, commands such as SEARCH FRIENDS, SEARCH CALLS, or SEARCH ACTIVECHATS work just fine, and return their lists of IDs (contacts IDs, calls IDs, or chat IDs) as expected.
It might also be worth noting that this happens for all contacts, regardless of how many messages I've exchanged with them (I thought at first that there might be too many messages involved, but the result is the same whether I've sent 3 or thousands of messages to the contact).
Is there anything that would explain why I get these empty responses through DBus, for these requests?
Skype will not use Invoke's return value when its reply is too heavy. As it happens, when Skype has too much data to prepare and transfer after a request, it automatically returns an empty string to the Invoke call. The true, heavy reply is then prepared asynchronously by Skype, and the client program must be ready to receive it when it eventually arrives.
Whenever you are communicating with Skype over DBus, your application must act as both a client (calling Invoke) and a server (providing a DBus object for Skype to reach). This design is a little unexpected (I guess we could argue about its quality), but here is what it requires you to do:
Make your program a DBus "server" (providing objects to reach). On the connection you use to talk to Skype, register an object path called /com/Skype/Client implementing the com.Skype.API.Client interface.
Prepare a message handler for the only method of this interface: Notify(s). This is the method Skype will call to send you the heavy reply to one of your previous requests.
Implement your own mechanism to match each Invoke request with the asynchronous Notify message that comes in as its answer later on.
The creation of the object can be done through dbus_connection_register_object_path, the parameters of which are:
The DBusConnection structure representing your connection to the bus.
The object path you are registering, here /com/Skype/Client.
A table of message handlers (DBusObjectPathVTable) used to process all incoming requests.
Data to be sent to these handlers when they are called. This is additional user data, not the actual message being received, since you're only setting up the handler here.
For instance...
DBusHandlerResult notify_handler(DBusConnection *connection,
                                 DBusMessage *message,
                                 void *user_data) {
    // Process the incoming Notify(s) call here...
    return DBUS_HANDLER_RESULT_HANDLED;
}

void unregister_handler(DBusConnection *connection,
                        void *user_data) {}

DBusObjectPathVTable vtable = {
    unregister_handler, // called when the object path is unregistered
    notify_handler,     // called for each incoming message on the path
    NULL
};

if (!dbus_connection_register_object_path(connection,
                                          "/com/Skype/Client",
                                          &vtable, NULL)) {
    // Error...
}
Note that this is just the object's definition. In order to actually hook on the Notify calls, you'll have to select() on a DBusWatch file descriptor and dispatch the incoming DBusMessage so that your message handler gets called.
If you are working with other bindings, you'll probably find much faster ways to setup objects and start working as a client application. See:
GLib's g_dbus_connection_register_object
Exporting objects with dbus-python
QtDBus's QDBusConnection::registerObject
... (other bindings)
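For example, with dbus-python (second bullet above), a minimal sketch of this dual client/server role might look as follows. The client name, query text, and GLib main loop are assumptions for illustration, and all error handling and request/reply matching are omitted:

import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()

class SkypeNotifyReceiver(dbus.service.Object):
    # The object Skype reaches back to with heavy replies.
    @dbus.service.method(dbus_interface='com.Skype.API.Client', in_signature='s')
    def Notify(self, message):
        print('Asynchronous reply from Skype:', message)

receiver = SkypeNotifyReceiver(bus, '/com/Skype/Client')

skype = bus.get_object('com.Skype.API', '/com/Skype')
invoke = skype.get_dbus_method('Invoke', dbus_interface='com.Skype.API')
print(invoke('NAME myclient'))           # -> OK
print(invoke('PROTOCOL 8'))              # -> PROTOCOL 8
invoke('SEARCH CHATMESSAGES mycontact')  # heavy reply arrives via Notify

GLib.MainLoop().run()  # stay alive so Skype can deliver the Notify call

The main loop replaces the manual select()/dispatch work required with raw libdbus: the binding watches the connection and routes incoming calls to Notify for you.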
I have an application that does the following:
1) After the app receives a GET request, it reads the client's cookies for identification.
2) It stores the identification information in a PostgreSQL DB.
3) It sends the appropriate response and finishes the handling process.
But this way the client also waits while I store the data in PostgreSQL. I don't want that; what I want is:
1) After the app receives a GET request, it reads the client's cookies for identification.
2) It sends the appropriate response and finishes the handling process.
3) It stores the identification information in a PostgreSQL DB.
In the second version, the storing happens after the client has received the response, so the client doesn't have to wait for it. I've searched for a solution but haven't found anything thus far. I believe I'm searching with the wrong keywords, because this must be a common problem.
Any feedback is appreciated.
You should add a callback to the IOLoop, via some code like this:
from functools import partial

from tornado import ioloop

def somefunction(*args):
    # call the DB
    ...

# ... now in your get() or post() handler
...
io_loop = ioloop.IOLoop.instance()
io_loop.add_callback(partial(somefunction, arg, arg2))
# ... rest of your handler ...
self.finish()
The callback will run after the response is returned to the user: on the next iteration through the event loop, the IOLoop invokes your DB processor somefunction.
If you don't want to wait for Postgres to respond, you could try:
1) An async Postgres driver
2) Putting the DB jobs on a queue and letting a queue worker handle the DB writes. Try RabbitMQ.
Remember: because you return to the user before you write to the DB, you have to think about how to handle write errors.
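As a sketch of option 2), assuming a local RabbitMQ broker and the pika client (the queue name and payload are made up for the example), the handler publishes the job and a separate worker performs the write:

import json
import pika

def enqueue_db_write(identification):
    # Called from the request handler after self.finish(): hand the job
    # to RabbitMQ instead of writing to PostgreSQL in the request path.
    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = conn.channel()
    channel.queue_declare(queue='db_writes', durable=True)
    channel.basic_publish(exchange='',
                          routing_key='db_writes',
                          body=json.dumps(identification))
    conn.close()

def worker():
    # A separate process drains the queue and talks to PostgreSQL.
    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = conn.channel()
    channel.queue_declare(queue='db_writes', durable=True)

    def handle(ch, method, properties, body):
        identification = json.loads(body)
        # ... INSERT into PostgreSQL; on failure, log or requeue, since
        # the HTTP response has already been sent to the client ...
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue='db_writes', on_message_callback=handle)
    channel.start_consuming()

The ack-after-write pattern is one answer to the write-error caveat above: a job is removed from the queue only once the DB write succeeds, so a crashed worker or failed insert leaves the job available for redelivery.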