The Photon docs state that
In the application frameworks we provide (Lite, LoadBalancing, etc.),
the server automatically responds encrypted, if an operation was sent encrypted.
This makes it safe to fetch critical data by simply requesting with encryption turned on.
Can this be made mandatory on the server side, so that Photon doesn't process certain operations if they are invoked or received unencrypted?
Yes - you can check if the client sent the request encrypted in OnOperationRequest in the peer:
protected override void OnOperationRequest(OperationRequest request, SendParameters sendParameters)
{
    ...

    // Reject any operation that was not sent encrypted.
    if (!sendParameters.Encrypted)
    {
        var response = new OperationResponse
        {
            ReturnCode = (short)ErrorCode.OperationDenied,
            DebugMessage = "Only encrypted operations allowed.",
            OperationCode = request.OperationCode
        };
        this.SendOperationResponse(response, sendParameters);
        return;
    }

    ...
}
You would implement your own peer by inheriting from the framework's peer; have a look at MyApplication in Lite.
You would simply need to trap OnStatusChanged on the client and, upon receiving a Connect status change, invoke the EstablishEncryption method on the client.
This negotiates the encryption keys with the server. You will want to discard all requests until you receive a status change of type EncryptionEstablished, and disconnect the client if you see EncryptionFailedToEstablish. This gives you the guarantee that no events are processed until encryption is completely established.
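In code, the client side could look roughly like this. This is a minimal sketch assuming the standard Photon C# client library (ExitGames.Client.Photon); the listener class name is made up, and only the encryption-related handling is shown:

using ExitGames.Client.Photon;

// Minimal sketch: enforce encryption from the moment the connection is up.
public class EncryptingListener : IPhotonPeerListener
{
    public PhotonPeer Peer { get; set; }
    public bool EncryptionReady { get; private set; }

    public void OnStatusChanged(StatusCode statusCode)
    {
        switch (statusCode)
        {
            case StatusCode.Connect:
                // Connected: negotiate encryption before doing anything else.
                this.Peer.EstablishEncryption();
                break;

            case StatusCode.EncryptionEstablished:
                // Only from this point on do we send operations (with encrypt = true).
                this.EncryptionReady = true;
                break;

            case StatusCode.EncryptionFailedToEstablish:
                // Treat failure as fatal and drop the connection.
                this.Peer.Disconnect();
                break;
        }
    }

    public void DebugReturn(DebugLevel level, string message) { }
    public void OnEvent(EventData eventData) { }
    public void OnOperationResponse(OperationResponse operationResponse) { }
}

Until EncryptionReady is true, simply drop (or queue) outgoing operations; when you do send, pass encrypt = true so the server-side check above accepts them.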
As far as doing this on a per-operation basis, that's going to be a bit trickier. I recommend just encrypting everything: the overhead is minimal and it solves your problem. The additional CPU time, RAM utilization, and network traffic are completely negligible.
For specifics about the implementation of this behavior, you can review this reference.
I would like to ask about the Rebus timeout manager. I know we have the internal timeout manager and the external timeout manager, and I have been using the internal timeout manager for quite some time, sharing one timeout database (SQL Server) across all my endpoints.
I would like to know if this is correct.
Secondly, I would like to know if I can also use one external timeout manager for all my endpoints.
My question comes from the fact that the information contained in the Timeouts table (id, due_time, headers, body) has no connection with the endpoint that sent a message to the timeout manager.
I would just like to get assurance.
Regards
You can definitely use the internal timeout manager like you're currently doing.
The SQL Server-based timeout storage is safe to use concurrently from multiple instances, as it uses some finely trimmed lock hints when reading due messages, preventing issues that could otherwise have happened due to concurrent access.
But it's also a valid (and often very sensible) approach to create a dedicated timeout manager and then configure all other Rebus instances to use that.
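For illustration, both setups might be configured like this. This is only a sketch, assuming the Rebus.Msmq and Rebus.SqlServer packages; the queue names and connection string are made up:

using Rebus.Activation;
using Rebus.Config;

var activator = new BuiltinHandlerActivator();
var connectionString = "server=.; database=RebusTimeouts; trusted_connection=true";

// Option 1: internal timeout manager, all endpoints sharing one timeouts table.
Configure.With(activator)
    .Transport(t => t.UseMsmq("my-endpoint"))
    .Timeouts(t => t.StoreInSqlServer(connectionString, "Timeouts"))
    .Start();

// Option 2: a dedicated timeout manager endpoint; all other endpoints simply
// forward their deferred messages to it.
Configure.With(activator)
    .Transport(t => t.UseMsmq("my-endpoint"))
    .Options(o => o.UseExternalTimeoutManager("timeout-manager"))
    .Start();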
And you are absolutely right that the sender of the timeout is irrelevant. The recipient is determined when sending the timeout, so that
await bus.DeferLocal(TimeSpan.FromMinutes(2), "HELLO FROM THE PAST 🙂");
will send the string to the bus' own input queue, and
await bus.Defer(TimeSpan.FromMinutes(2), "HELLO FROM THE PAST 🙂");
will send the string to the queue mapped as the owner of string:
.Routing(r => r.TypeBased().Map<string>("string-owner"))
In both cases, the message will actually be sent to the timeout manager, which will read the rbs2-deferred-until and rbs2-defer-recipient headers and keep the message until it is due.
I am yet to understand the behavior of a web server thread when I make an async call to, say, a database, and immediately return a response (say, OK) to the client without even waiting for the async call to return. First of all, is this a good approach? What happens to the thread that made the async call if it is reused to serve another request, and the previous async call then returns to this particular thread? Or does the web server hold this thread waiting until the async call it made returns? In that case many hanging threads would be left open while the web server stays available to take more requests. I am looking for an answer.
It depends on the way your HTTP server works. But you should be very cautious.
Let's say you have a main event loop taking care of incoming HTTP connections, and worker threads which manage the HTTP communications.
A worker thread should be considered ready to accept a new HTTP request only when it is effectively, completely ready for that.
In terms of pure HTTP, the most important thing is to avoid sending a response before having received the whole query. It seems simple, and it's usually the case. But if the query has a body, which may be a chunked body, it can take time to receive the whole message.
You should never send a response before that, unless it's something like a 400 Bad Request response followed by a real TCP/IP connection close. If you fail to do so, and you have a message-length parsing issue, the fact that you sent a response before the end of the query may lead to security problems. It could be used to exploit differences in message parsing between your server and any other HTTP agent in front of it (SSL terminator, reverse proxy, etc.) in some sort of HTTP smuggling issue: for that agent, if you sent a response, it means you had the whole message, so it can send the next message, which you will in fact treat as just another part of the body.
Now, if you have the whole message, you can decide to send an early response and detach an asynchronous task to really perform the work. But this means:
you have to assume that no more output should be generated: you will not try to send any output to the request issuer, and you should consider the communication closed;
the worker thread should not receive new requests to manage, and this is the hard part. If this thread is marked as available for a new request, it may also be killed by the thread manager (Nginx and Apache keep request counters associated with workers and kill them after a limit is reached, to create fresh ones), and it may also receive a graceful-reload command (usually a kill), etc.
So you start to enter a zone where you need to know the internals of the HTTP server, which may or may not be managed by you, and where changes may appear sooner or later. And you start to do very strange things, which usually leads to strange issues that are hard to reproduce.
Usually the best way to handle asynchronous tasks, while still being able to understand what happens, is to use a messaging system. Put the tasks in a queue, and have a parallel asynchronous worker process do things with these tasks. Track the status of these tasks if you need it.
The same may apply to the client: after receiving a very fast HTTP answer, it may need to do some AJAX polling for the task status. Then you may only have to check the status of the task in the queue to send a response.
You will get more control on the whole thing.
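As a rough C# sketch of that idea (an in-memory queue stands in for a real messaging system, and all the names here are made up):

using System;
using System.Collections.Concurrent;
using System.Threading;

public enum JobState { Queued, Running, Done, Failed }

public class JobQueue
{
    private readonly BlockingCollection<Guid> queue = new BlockingCollection<Guid>();
    private readonly ConcurrentDictionary<Guid, JobState> status = new ConcurrentDictionary<Guid, JobState>();

    // Called from the HTTP handler: enqueue and return an id immediately,
    // so the handler can answer right away (e.g. 202 Accepted + the id).
    public Guid Enqueue()
    {
        var id = Guid.NewGuid();
        status[id] = JobState.Queued;
        queue.Add(id);
        return id;
    }

    // Called from the handler that serves the client's status polling.
    public JobState GetStatus(Guid id)
    {
        JobState s;
        return status.TryGetValue(id, out s) ? s : JobState.Failed;
    }

    // Runs in a separate worker process/thread, outside any HTTP worker thread.
    public void RunWorker(CancellationToken token)
    {
        foreach (var id in queue.GetConsumingEnumerable(token))
        {
            status[id] = JobState.Running;
            try
            {
                Thread.Sleep(1000); // stand-in for the real heavy work
                status[id] = JobState.Done;
            }
            catch (Exception)
            {
                status[id] = JobState.Failed;
            }
        }
    }
}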
Personally, I really dislike having detached threads, coming from strange code, performing heavy tasks without any way of outputting a status or reporting errors, and maybe preventing a clean application stop (still waiting for strange threads to join) short of a killall.
It depends whether this asynchronous operation performs something which the client should be notified about.
If you return 200 OK (i.e. successfully completed) and later the asynchronous operation fails then the client will not know about the error.
You of course have some options, like sending some kind of push notification over a websocket, or sending another request which would return the actual result, and things like that. So it basically depends on your needs...
Here is my scenario. BizTalk needs to transfer a file from a shared/central document library. First BizTalk receives an incoming message with a reference/path to this document in the library. Then it simply needs to read it out from this library and send it (potentially through different adapters). This is in essence, a scenario not so remote from the ClaimCheck EAI pattern.
Some ways to implement a claim check have been documented, notably BizTalk ESB Toolkit Claim Check, and BizTalk 2009: Dealing with Extremely Large Messages, Part I & Part II. These implementations do, however, assume that the send pipeline can immediately read the stream that has been “checked in.”
That is not my case: the document will take some time before it is available in the shared library, and I cannot delay the initial received message. That leaves me with 2 options: either introduce some delay via an orchestration or ensure the send port will later on retry if the document is not there yet.
(A delay can only be introduced via an orchestration; there are no time-based subscriptions in BizTalk. Right?)
Since this is a message-only flow, I figured I could skip the orchestration. I have seen ways of implementing "Custom Retry Logic in Message Only Solution Using Pipeline", but what I need is not only a way to control the retry behavior (as performed by the adapter) but also to enforce it right from within the pipeline…
Every attempt I made so far just ended up with a suspended message that won't be automatically retried, even though the send adapter had retry configured… If this is indeed possible, then where/what should I do?
Oh right… and there is queuing… but unfortunately neither on premises nor in the cloud ;)
OK I may be pushing the limits… but just out of curiosity…
Many thanks for your help and suggestions!
I'm puzzled as to how this could be done without an Orch. The only way I can think of would be along the lines of:
The receive port for the initial messages just 'eats' the messages, e.g. by subscribing them to a dummy send port with the Null Adapter, ignoring them totally.
You monitor the shared document library with a receive port, looking for any new document there.
Any located documents are subscribed by a send port and sent downstream.
An orchestration based approach would be along the lines of:
Orch is triggered by a receive of the initial notification of an 'upcoming' new file to the library. If your initial notification is request-response (e.g. an exposed web service), you can immediately and synchronously issue the response.
Another receive port is used to monitor the availability of, and retrieve, the file from the shared library, correlating to the original notification message (e.g. by filename, or another key).
A mechanism to handle the retry if the document isn't available, and potentially an eventual timeout, e.g. if the document never makes it to the shared library.
And on success, a send port to then send the document downstream
Placing the Delay shape in the orch will offer more scalability than e.g. using Thread.Sleep() or similar in custom adapter or pipeline code, since BTS just calculates and stamps the 'awaken' timestamp on the SQL record and can then dehydrate the orch, freeing up the thread.
The 'is the file there yet?' check can be done with a retry loop, delaying after each failed check, with a parallel branch with a timeout e.g. after an hour or so.
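Outside of the orchestration designer, the control flow of that retry loop looks roughly like this (a plain C# sketch; File.Exists stands in for the 'is the file there yet?' check, and the intervals are made up):

using System;
using System.IO;
using System.Threading;

public static class FilePoller
{
    public static bool WaitForFile(string path, TimeSpan pollInterval, TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout; // e.g. one hour from now
        while (DateTime.UtcNow < deadline)
        {
            if (File.Exists(path))
            {
                return true; // document arrived: hand it to the send port
            }
            Thread.Sleep(pollInterval); // in the orch this is the Delay shape, which dehydrates
        }
        return false; // timed out: the document never made it to the library
    }
}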
The polling interval can be controlled in the receive location, so I do not understand what you mean by there being no time-based subscriptions in BizTalk. You also have a schedule window.
One way to introduce a delay is to send that initial message to an internal web service, which will simply post the message back to BizTalk after a specified time interval.
There are also loopback adapters, which simply post the message back into the MessageBox. This can be amended to add a delay.
In an app, I have a network server and clients.
After a handshake, let's say the client sends "userId sessionId SOME_COMMAND param param param".
I have already identified the client and the sessionId is checked on the server accordingly, so identity is no more an issue.
But I'd like to prevent a hacker to modify the message or create a false one, for example sending "userId sessionId SOME_COMMAND paramModified paramModified paramModified".
I thought about using a private/public encryption key pair and sending the hash of the message in the message itself. But since it's automated in the client program, I may have to send the public key during the handshake, so the hacker could simply retrieve it and generate the proper hash.
I could also use complex encryption seeds or algorithms, but my experience with hackers has shown me that they will decompile anything.
So the bottom line is: I can hide everything that runs on the server, but I can't hide anything in the client program. And I'd like to prevent modification of the messages the client program is supposed to send.
I don't even know if it's possible, and I'm open to any suggestion. By the way, I'm using Java, although that should not be very relevant. Thanks.
Forget it. Use SSL like everybody else. There are complexities which you haven't even begun to address.
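To make that concrete, here is a minimal TLS client sketch using .NET's SslStream (Java's SSLSocket plays the same role); the host name and port are made up:

using System.Net.Security;
using System.Net.Sockets;
using System.Text;

class TlsClient
{
    static void Main()
    {
        using (var tcp = new TcpClient("game.example.com", 4433))
        using (var ssl = new SslStream(tcp.GetStream()))
        {
            // Handshake: negotiates keys and validates the server certificate,
            // giving an encrypted and tamper-evident channel.
            ssl.AuthenticateAsClient("game.example.com");

            var request = Encoding.UTF8.GetBytes("userId sessionId SOME_COMMAND param param param");
            ssl.Write(request, 0, request.Length);
        }
    }
}

On the wire this stops third parties from modifying or forging messages; it does not stop the user from modifying their own client, which no purely client-side scheme can.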
In a general chat application, the client's browser always polls the server to check for new messages.
// the function to check for new messages on the server
function check() {
    // but this question is less about jQuery.
    $.ajax({
        type: "POST",
        url: "check.aspx",
        data: "someparam=123",
        success: function (msg) {
            // process msg here

            // CHECK IT AGAIN, but sometimes we need to add a delay here
            check();
        }
    });
}
Then I read Nicholas Allen's blog about Keeping Connections Open in IIS.
It makes me wonder whether it is possible to push data from my server to the client's browser via chunked HTTP transfer (that means something like streaming, right?) and keep the connection open.
While keeping the connection open, on the server, my idea is to keep something running to check for new messages. Something like this, maybe:
while (connectionStillOpen)
{
    // any new message?
    if (AnyMessage())
    {
        // send chunked data, can I?
        SendMessageToBrowser();
    }

    // delay between checks so the loop doesn't spin at full speed
    Sleep(forSomeTime);
}
That's a raw idea.
My chat app is created in ASP.NET. With my limited understanding of WCF and the advanced IIS streaming modules, I need your advice on implementing this idea.
Yeah, "impossible" is probably the answer, but I need to know why, if it is still impossible.
UPDATE (3 years later):
This is exactly what I was looking for:
Microsoft ASP.NET SignalR
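For reference, a minimal hub in classic ASP.NET SignalR 2 looks roughly like this (a sketch; the hub and method names are made up):

using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    // Clients call Send; the server pushes to every connected browser over the
    // best available transport (WebSockets, server-sent events, or long polling).
    public void Send(string name, string message)
    {
        Clients.All.addMessage(name, message);
    }
}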
Yes, it's impossible to push data from the server directly to your browser client.
But you can check the server for new messages every, let's say, 3 seconds and refresh your client interface.
Maybe you want to take a look at some Comet implementations.
A server cannot initiate communication with the client, so the server cannot push data to the client. But you can achieve a push mechanism using "Reverse AJAX". The following article should shed more light:
Reverse AJAX
There is a method called Reverse AJAX, by which the server can transfer data to the client without any request from the client.
The current generation of JavaScript / Ajax libraries don't provide access to partial responses; you only receive notification when the entire request is complete.
If you're open to using Silverlight, you can use a raw TCP connection.
Comet is another option -- but that's basically just long polling that still originates from client-side script.
It's not possible to push data from the server, because HTTP only responds to requests and cannot contact the client directly. But we have a workaround here called Comet, or Reverse AJAX; using this technique we can simulate duplex calls.
It's nothing but long-lived AJAX calls, which respond to the client if an expected event happens on the server side, and otherwise stay quiet. The Comet (programming) Wikipedia article explains more about the approach.
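To illustrate the long-polling idea in .NET terms, here is a self-hosted sketch using HttpListener (the URL and the message source are made up; a real server would also handle each request on its own thread):

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Text;

class LongPollServer
{
    static readonly BlockingCollection<string> Messages = new BlockingCollection<string>();

    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/poll/");
        listener.Start();

        while (true)
        {
            var context = listener.GetContext();

            // Hold the request open until a message arrives or 30 seconds pass;
            // the browser re-issues the request as soon as it gets a response.
            string message;
            if (!Messages.TryTake(out message, TimeSpan.FromSeconds(30)))
            {
                message = ""; // timed out: empty response, the client just polls again
            }

            var payload = Encoding.UTF8.GetBytes(message);
            context.Response.ContentLength64 = payload.Length;
            context.Response.OutputStream.Write(payload, 0, payload.Length);
            context.Response.Close();
        }
    }
}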
I have answered a similar question here: asp-net-chat-with-wcf. Please check it out.