Would a private MessageQueue object with an invalid path trigger a MessageQueueException?

First off, I have no experience with MSMQ.
Supposing an invalid path is passed into the constructor for a private MessageQueue object, would it trigger a MessageQueueException when the Send method is called? If so, would the error code still be QueueNotFound? MSDN states that this error is returned for public queues not registered in the directory service and for Internet queues that do not exist in the Message Queuing namespace. However, from what I have read, a private queue is not published in Active Directory, so I'm not sure what would happen.
I apologize if this makes no sense.

If you try to send to a queue that does not exist, the call to Send will not fail as long as the queue name is in a valid format. In other words, the destination queue address does not need to exist, but the format of the queue name must be correct.
Instead, the MSMQ subsystem will queue the message locally in a temporary outgoing queue, which you should be able to see appear on the box.
It does this because it assumes the destination queue may only be temporarily unavailable, and that the message can be transmitted once the queue becomes reachable again.
However, after a period of time (I can't remember the default) it will move the message into the dead-letter queue as undeliverable.
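To make the distinction concrete, here is a minimal C# sketch; the machine and queue names are made up. Addressing a remote private queue by a DIRECT format name skips the queue lookup entirely, so Send succeeds even though the queue does not exist:

// Sketch only: assumed names; requires a reference to System.Messaging.
// The message lands in a local outgoing queue and is eventually moved
// to the dead-letter queue as undeliverable.
using System.Messaging;

class Demo
{
    static void Main()
    {
        var queue = new MessageQueue(
            @"FormatName:DIRECT=OS:somebox\private$\no-such-queue");
        queue.Send("hello");   // no exception here; delivery is asynchronous
    }
}

By contrast, a nonexistent local private queue (e.g. @".\private$\no-such-queue") is resolved against the local queue manager, so there Send should throw a MessageQueueException with QueueNotFound.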
Hope this helps.

Related

Grpc C++: How to wait until a unary request has been sent?

I'm writing a wrapper around gRPC unary calls, but I'm having an issue: let's say I have a ClientAsyncResponseReader object which is created and starts a request like so
response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
// Set a breakpoint here
// Set a breakpoint here
where all of the arguments are valid.
I was under the impression that when the Finish call returned, the request object was guaranteed to have been sent out over the wire. However, by setting a breakpoint after that Finish() call (in the client program, to be clear) and inspecting my server's logs, I've discovered that the server does not log the request until after I resume from the breakpoint.
This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out, and moreover that the thread executing the code above still has some role in sending out the request, which appears post-breakpoint.
Of course, perhaps my assumptions are wrong and the server isn't logging the request as soon as it comes in. If not though, then clearly I don't understand gRPC's semantics as well as I should, so I was hoping for some more experienced insight.
You can see the code for my unary call abstraction here. It should be sufficient, but if anything else is required I'm happy to provide it.
EDIT: The plot thickens. After setting a breakpoint on the server's handler for incoming requests, it looks like the call to Finish generally does "ensure" that the request has been sent out, except for the first request sent by the process. I guess there is some state maintained either in grpc::channel or maybe even in grpc::completion_queue which delays the initial request.
From the documentation
response_reader_ = std::unique_ptr<grpc::ClientAsyncResponseReader<ResponseType>>(
    grpc::internal::ClientAsyncResponseReaderFactory<ResponseType>::Create(
        channel.get(), completion_queue, rpc_method, &client_context_, request, true
    )
);
This will start the call and write the request out (start=true). The factory function does not take a tag parameter, so there is no way for the completion queue to notify you when starting the call has finished. Issuing an RPC is a bit involved: it means building the network packets and putting them on the wire, and it can fail if there is a transient transport failure, if the channel has gone away entirely, or if the caller misuses the API. Another reason the tag notification matters is that the completion queue is a real contention point: all RPC objects talk to it, so it can happen that the completion queue is busy and the request is still pending.
response_reader_->Finish(
    response_sharedptr_.get(), status_sharedptr_.get(), static_cast<void*>(some_tag)
);
This asks the RPC runtime to receive the server's response. When the server's response arrives, the completion queue notifies the client with the given tag. At that point we assume there was no error on the client side, everything is okay, and the request is already in flight, so the ok status reported for the Finish tag will never be false for a unary RPC.
This would seem to indicate that there's something else I need to wait on in order to ensure that the request is really sent out, and moreover that the thread executing the code above still has some role in sending out the request, which appears post-breakpoint.
Perhaps you want to reuse the request object (I did some experimenting with that). For my part, I keep the request object in memory until the response arrives; there is no guarantee that the request object won't still be needed after the Create call.

Handling defunct deferred (timeout) messages?

I am new to Rebus and am trying to get up to speed with some patterns we currently use in Azure Logic Apps. The current target implementation would use Azure Service Bus, with saga storage preferably in Cosmos DB (still investigating that sample implementation). We might even use Rebus's MongoDB support against Cosmos DB via its MongoDB API (not sure if that is possible, though).
One major use case we have is an event/timeout pattern, and after doing some reading of samples/forums/Stack Overflow this is not uncommon. The tricky part is that our Sagas would behave more as a Finite State Machine vs. a Directed Acyclic Graph. This mainly happens because dates are externally changed and therefore timeouts for events change.
The Defer() method does not return a timeout identifier, which we assume is an implementation restriction (Azure Service Bus returns a long). Since we must ignore timeouts that had been scheduled for an event which has now shifted in time, we see a way of having those timeouts "ignored" (since they cannot be cancelled) as follows:
Use a Dictionary<string, Guid> in our own SagaData-derived base class, where the key is some derivative of the timeout message type, and the Guid is the identifier given to the timeout message when it was created. I don't believe this needs to be a concurrent dictionary but that is why I am here...
On receipt of the event message, remove the corresponding timeout message type key from the above dictionary;
On receipt of the timeout message:
Ignore it if its timeout message type key is not present or the Guid does not match the dictionary entry; else
Process it. We could also remove the dictionary key at this point.
When event rescheduling occurs, simply add the timeout message type/Guid dictionary entry, or update the Guid with the new timeout message Guid.
Is this on the right track, or is there a more 'correct' way of handling defunct timeout (deferred) messages?
You are on the right track 🙂
I don't believe this needs to be a concurrent dictionary but that is why I am here...
Rebus lets your saga handler work on its own copy of the saga data (using optimistic concurrency), so you're free to model the saga data as if it's only being accessed by one handler at a time.
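For concreteness, here is a minimal C# sketch of the pattern described above, assuming Rebus's usual Saga<TSagaData> shape; all type, message, and property names are illustrative, not from the question:

// Sketch only: supersede a scheduled timeout by overwriting its Guid,
// and ignore any timeout whose Guid no longer matches the saga data.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Rebus.Bus;
using Rebus.Handlers;
using Rebus.Sagas;

class ShipmentSagaData : SagaData
{
    // Key: timeout message type; value: Guid stamped into the latest timeout.
    public Dictionary<string, Guid> PendingTimeouts { get; set; }
        = new Dictionary<string, Guid>();
}

class ShipmentDateChanged { public Guid ShipmentId { get; set; } public DateTime NewDateUtc { get; set; } }
class ShipmentTimeout { public Guid ShipmentId { get; set; } public Guid TimeoutId { get; set; } }

class ShipmentSaga : Saga<ShipmentSagaData>,
    IAmInitiatedBy<ShipmentDateChanged>, IHandleMessages<ShipmentTimeout>
{
    readonly IBus _bus;
    public ShipmentSaga(IBus bus) { _bus = bus; }

    protected override void CorrelateMessages(ICorrelationConfig<ShipmentSagaData> config)
    {
        config.Correlate<ShipmentDateChanged>(m => m.ShipmentId, d => d.Id);
        config.Correlate<ShipmentTimeout>(m => m.ShipmentId, d => d.Id);
    }

    public async Task Handle(ShipmentDateChanged message)
    {
        // Rescheduling: overwrite the Guid so any older timeout becomes stale.
        var timeoutId = Guid.NewGuid();
        Data.PendingTimeouts[nameof(ShipmentTimeout)] = timeoutId;
        await _bus.Defer(message.NewDateUtc - DateTime.UtcNow,
            new ShipmentTimeout { ShipmentId = message.ShipmentId, TimeoutId = timeoutId });
    }

    public async Task Handle(ShipmentTimeout message)
    {
        // Stale: the event already arrived (key removed) or the schedule
        // shifted and a newer Guid replaced this one.
        if (!Data.PendingTimeouts.TryGetValue(nameof(ShipmentTimeout), out var current)
            || current != message.TimeoutId)
            return;

        Data.PendingTimeouts.Remove(nameof(ShipmentTimeout));
        // ...process the genuine timeout...
    }
}

Because the dictionary lives in the saga data itself, the check-and-update is protected by the same optimistic concurrency mentioned above, which is why a plain Dictionary suffices.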

Can Meteor call a method on the server twice if the client gets disconnected?

In LivedataConnection, line 357, there's a comment that says:
// Sends the method message to the server. May be called additional times if
// we lose the connection and reconnect before receiving a result.
Does this mean that if a client calls a method and gets disconnected before the method returns, it will re-call that message when it reconnects? What if that method isn't idempotent?
Basically, yes, as of version 1.0: a method call can be re-sent when the client reconnects, and Meteor does nothing to make methods idempotent for you. I've reported this issue, as have others, and it's basically confirmed by a core developer:
https://github.com/meteor/meteor/issues/2407#issuecomment-52375372
The best way to fix this in most cases is to write a unique key to the database, associated with the method request, or to make use of some other clever conditional database update. There are some discussions of how to do that in these threads:
https://groups.google.com/d/msg/meteor-talk/j1YF7JO5Rdo/cYHR5kbhC8UJ
https://stackoverflow.com/a/26430334/586086
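Since the fix is language-agnostic, here is a sketch of the unique-key idea in C# against MongoDB (the database Meteor uses), assuming the client generates a requestId and passes it with each method call; all names are illustrative:

// Sketch only: claim a request id exactly once via the unique _id index.
// Assumes the MongoDB .NET driver and a client-generated requestId.
using MongoDB.Bson;
using MongoDB.Driver;

static bool TryClaimRequest(IMongoDatabase db, string requestId)
{
    var requests = db.GetCollection<BsonDocument>("method_requests");
    try
    {
        // _id is unique, so a retried call with the same requestId throws.
        requests.InsertOne(new BsonDocument { { "_id", requestId } });
        return true;   // first time this request has been seen: do the work
    }
    catch (MongoWriteException e)
        when (e.WriteError.Category == ServerErrorCategory.DuplicateKey)
    {
        return false;  // a reconnect retry of an already-processed request
    }
}

A reconnect causes Meteor to re-send the same method invocation with the same arguments, so the repeated requestId makes the retried call a no-op.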

Implementing a robust and efficient RPC system

I need to have a server which is able to call functions on the client. I have always used RPCs in various networking game APIs, but I have never implemented one myself.
How would I do it?
My naive approach would be:
connect client to the server:

server:
fn update_position_client() {
    unique_id = 1;
    send.to_client(unique_id);
}

client:
while recv_messages {
    if id == 1 {
        update_position();
    }
}
Is this how I would do it?
This works if you only have a few messages that you want to send, and if the data is known in advance. To be more robust, you would want the ability to dynamically add and remove the messages that can be called, and a way to look up which method to invoke when an RPC arrives.
Assuming you want this to be completely transparent to the user, what typically happens is that when a message is sent, the RPC library waits until the response comes back. Assuming bi-directional capabilities, there is normally a single thread that listens for data. When an RPC message comes in, this thread figures out what to do with it, i.e. which method to call in your (local) address space and which parameters to call it with. When you send an RPC message out, the sending thread is blocked (probably on a semaphore) until the return message comes back, at which point it is unblocked and allowed to continue.
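For concreteness, here is a minimal C# sketch of that lookup table; the framing and all names are illustrative assumptions rather than a complete RPC library:

// Sketch only: a dynamic message-id -> handler table. Real code would
// add framing, responses, error handling, and thread-safety as needed.
using System;
using System.Collections.Generic;
using System.IO;

class RpcDispatcher
{
    // Handlers can be registered and removed at runtime.
    readonly Dictionary<ushort, Action<BinaryReader>> _handlers =
        new Dictionary<ushort, Action<BinaryReader>>();

    public void Register(ushort id, Action<BinaryReader> handler) => _handlers[id] = handler;
    public void Unregister(ushort id) => _handlers.Remove(id);

    // Called by the single listener thread for each incoming message.
    public void Dispatch(ushort id, BinaryReader payload)
    {
        if (_handlers.TryGetValue(id, out var handler))
            handler(payload);
        // else: unknown message id; log and drop it
    }
}

// Usage: id 1 means "update position"; the payload carries the coordinates.
// dispatcher.Register(1, r => UpdatePosition(r.ReadSingle(), r.ReadSingle()));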
A Linux-specific RPC library you could look at would be DBus.

Managing state server side to be requested by a hub client

I have created a really basic hub. I get the IHubContext and call my JavaScript client-side method via a group, so I can always push the data to the same user no matter how many different connections they are on. I pass some text to it, and that text is appended to a multiline textbox in the browser.
It all works really nicely. The thread will typically run long operations, often appending text that reports on the status of the operation, using the hub context to call the client.
However, I wish to cater for the situation where someone closes their browser and later returns to the page with the textbox.
At the moment they would just start receiving the text from the point in the operation at which they connect. How can I send a request from the client to the server to retrieve all the text from the beginning of the operation?
My idea was to have a StringBuilder object to which I append each line, identical to the text I send to the client hub.
Then, on connecting to the hub, call a server-side function from the client which asks for the full text, taken from the StringBuilder object's ToString().
But how can the hub know where to get the StringBuilder object from in the still-executing thread?
OR
If there is a way to push it to the client instead, how can I know in the executing thread that the user has connected, and send the StringBuilder's ToString() to the user?
NB: I do not want to resend the full appended string every time! Only when the user has just connected and needs to catch up.
I think understanding how to do this would help me understand how to deal with SignalR and state on the server outside of the hub. Thank you.
Well, for starters, you should probably store this data in some kind of persistent storage (not just in an in-memory StringBuilder). Regardless of that, though, what you really need to do is store the individual strings with timestamps. Then just remember the last time you saw the logical user and dump all entries since that time when they first connect to your Hub.
JabbR, the flagship test-bed application for SignalR, does something just like this, except it uses message ids and asks for all messages since the last message id the client received. Check out the Chat Hub's GetPreviousMessages for details.
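Here is a sketch of the timestamped-entries idea, assuming classic ASP.NET SignalR 2.x; OperationLog, StatusHub, and GetEntriesSince are illustrative names, not part of SignalR:

// Sketch only: a thread-safe, timestamped log shared between the
// long-running worker thread and the hub.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNet.SignalR;

static class OperationLog
{
    static readonly ConcurrentQueue<Tuple<DateTime, string>> _entries =
        new ConcurrentQueue<Tuple<DateTime, string>>();

    // The worker thread calls this for each status line, alongside the
    // existing group push through IHubContext.
    public static void Append(string line) =>
        _entries.Enqueue(Tuple.Create(DateTime.UtcNow, line));

    public static IEnumerable<string> Since(DateTime stampUtc) =>
        _entries.Where(e => e.Item1 > stampUtc).Select(e => e.Item2).ToList();
}

public class StatusHub : Hub
{
    // The client calls this once, right after connecting, passing the
    // timestamp of the last line it saw (or DateTime.MinValue to replay
    // everything). New lines keep arriving via the normal group push.
    public IEnumerable<string> GetEntriesSince(DateTime lastSeenUtc)
    {
        return OperationLog.Since(lastSeenUtc);
    }
}

In-memory state like this disappears on an app-domain recycle, which is why the answer recommends persistent storage for anything that must survive.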
