Using Rebus, is it possible to have more than one transport configured within an application?
Our business domain is image processing. Due to the potentially large size of the images being processed, I would like to use an InMemory transport for communication within one service (WebApi 2) to tokenize (Guid) the images to be processed and persist them to a database.
In addition, after the images are tokenized we would like to use the RabbitMQ transport to send the images for processing to the ImageProcessingService (a console app using Topshelf), and reply back to the calling application (WebApi) once processing is complete.
I can't seem to figure out the correct way to handle this scenario, where I would like to use messaging within the application (WebApi) over an InMemory transport, and also have the WebApi send to the ImageProcessingService via the RabbitMQ transport.
Obviously I don't know the details of your problem, but did you consider saving the image data somewhere else (a network share, MongoDB GridFS, a SQL Server LOB, etc)?
In my experience everything turns out to be easier to handle if you use messages for coordination only, and not so much for transporting the actual bulk of the data.
PS: I did experiment with a multi-modal transport at some point, which would allow you to do something like this:
Configure.With(...)
    .Transport(c => c.Multi()
        .Add("amqp", t => t.UseRabbitMq(connStr, "rabbitqueue"))
        .Add("inmem", t => t.UseInMemoryTransport(network, "inmemqueue")))
    .Start();
and then specify addresses of the form amqp://rabbitqueue and inmem://inmemqueue to address the configured endpoint via the RabbitMQ and in-mem transports respectively.
It would simply wrap any number of transports, which you would then need to qualify with a protocol, which could be anything you feel is appropriate.
This way of addressing turned out to raise a bunch of questions though, so it wasn't that simple to introduce.
I'm using Node.js and ws for my WebSocket servers and want to know the best-practice methods of tracking connections and incoming and outgoing messages with Azure Application Insights.
It appears as though this service is really only designed for HTTP requests and responses, so would I be fine if I tracked everything as an event? I'm currently passing the JSON.parse'd connection message values.
What to do here really depends on the semantics of your websocket operations. You will have to track these manually since the Application Insights SDK can't infer the semantics to map to Request/Dependency/Event/Trace the same way it can for HTTP. The method names in the API do indeed make this unclear for non-HTTP, but it becomes clearer if you map the methods to the telemetry schema generated and what those item types actually represent.
If you consider receiving a socket message to semantically begin an "operation" that triggers dependencies in your code, you should use trackRequest to record this information. This will populate the information in the most useful way for you to take advantage of the UI in the Azure Portal (e.g. response-time analysis in the Performance blade or failure-rate analysis in the Failures blade). Because this request isn't HTTP, you'll have to bend your data to fit the schema a bit. An example:
client.trackRequest({name:"WS Event (low cardinality name)", url:"WS Event (high cardinality name)", duration:309, resultCode:200, success:true});
In this example, use the name field to indicate that items sharing this name are related and should be grouped in the UI. Use the url field for information that more completely describes the operation (like GET parameters would in HTTP). For example, name might be "SendInstantMessage" and url might be "SendInstantMessage/user:Bob".
In the same way, if you consider sending a socket message to be a request for information from your app, and it has a meaningful impact on how your "operation" behaves, you should use trackDependency to record this information. Much like above, doing this will populate the data in the most useful way to take advantage of the Portal UI (the Application Map, in this case, would then be able to show you the percentage of failed WebSocket calls).
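For instance, a minimal sketch with the Node.js applicationinsights SDK, mirroring the trackRequest example above (the target, names and numbers are invented for illustration):
client.trackDependency({
    target: "wss://chat.example.com",        // hypothetical socket endpoint
    name: "SendInstantMessage",              // low-cardinality name, used for grouping
    data: "SendInstantMessage/user:Bob",     // high-cardinality detail, like GET parameters
    duration: 42,                            // ms spent waiting for the socket round trip
    resultCode: 0,
    success: true,
    dependencyTypeName: "WebSocket"          // free-form type shown in the Application Map
});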
If you find you're using WebSockets in a way that doesn't really fit into either of these, tracking as an event as you are now would be the correct use of the API.
Structure
This is a question about how best to use Apollo Client 2 in our current infrastructure. We have one GraphQL server (Apollo Server 2) which connects to multiple other endpoints. Mutations are sent via RabbitMQ, and our GraphQL server also listens to RabbitMQ; the results are then pushed to the client via subscriptions.
Implementation
We have an Apollo server to which we send mutations; these always return null because they are async. Results are sent back via a subscription.
I have currently implemented it like this (a rough sketch follows the list):
Send mutation.
Create an optimisticResponse.
Update the state optimistically via writeQuery in the update function.
When the real response comes (which is null) use the optimisticResponse again in the update method.
Wait for the subscription to come back with the response.
Then refresh the state/component with the actual data.
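Roughly like this (a sketch only; ADD_IMAGE, GET_IMAGES, IMAGE_PROCESSED, file and the cache shape are placeholders, not our actual schema):
const tempId = 'optimistic-' + Date.now();

client.mutate({
    mutation: ADD_IMAGE,
    variables: { file },
    // fabricate an optimistic response, because the real mutation result is null
    optimisticResponse: { addImage: { __typename: 'Image', id: tempId, status: 'PENDING' } },
    update: (cache, { data }) => {
        // the real response is null, so fall back to the optimistic shape again
        const image = (data && data.addImage) || { __typename: 'Image', id: tempId, status: 'PENDING' };
        const prev = cache.readQuery({ query: GET_IMAGES });
        cache.writeQuery({ query: GET_IMAGES, data: { images: prev.images.concat(image) } });
    }
});

// the subscription eventually delivers the real result, and the cache/component is refreshed
client.subscribe({ query: IMAGE_PROCESSED }).subscribe(({ data }) => {
    // merge data.imageProcessed into the cache here
});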
As you can see, this is not the most ideal way, and it adds a lot of complexity to the client.
Would love to keep the client as dumb as possible and reduce complexity.
It seems Apollo is mostly designed for synchronous mutations (which is logical).
What are your thoughts on a better way to implement this?
Mutations can be asynchronous
Mutations are generally not synchronous in Apollo Client; the client can wait for a mutation result as long as it needs to. You might not want your GraphQL service to keep the HTTP connection open for that duration, and this seems to be the problem you are dealing with here. Your approach to responding to mutations goes against how GraphQL is designed to work, and that starts to create complexity that, in my opinion, you don't want to have.
Solve the problem at the implementation level, not the API level
My idea to make this work better is the following: follow the GraphQL spec and the dogmatic approach, and let mutations return mutation results. This will create an API that any developer is familiar working with. Instead, treat the delivery of these results as the actual problem you want to solve. GraphQL does not specify the transport protocol of the client-server communication, so if you already have websockets running between server and client, throw away HTTP and operate entirely at the socket level.
Leverage Apollo Client's flexible link system
This is where Apollo Client 2 comes in. Apollo Client 2 lets you write your own network links that handle client-server communication. If you solve the communication at the link level, developers can use the client as they are used to, without any knowledge of the network communication details.
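A rough sketch of what such a link could look like, assuming an already-connected WebSocket and a made-up message format correlated by id (socket, nextId and the payload shape are all assumptions, not a real API contract):
import { ApolloLink, Observable } from 'apollo-link';
import { print } from 'graphql';

let nextId = 0;

const socketLink = new ApolloLink(operation =>
    new Observable(observer => {
        const id = ++nextId;
        // ship the operation over the existing websocket instead of HTTP
        socket.send(JSON.stringify({
            id,
            query: print(operation.query),
            variables: operation.variables,
            operationName: operation.operationName
        }));
        // complete the operation only when the matching result comes back over the socket
        const onMessage = event => {
            const msg = JSON.parse(event.data);
            if (msg.id === id) {
                observer.next({ data: msg.data });
                observer.complete();
            }
        };
        socket.addEventListener('message', onMessage);
        return () => socket.removeEventListener('message', onMessage);
    })
);
Calling code then simply awaits client.mutate(...) as usual; the link decides when that promise resolves, namely when the matching result arrives over the socket.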
I hope this helps and that you still have the chance to go in this direction. I know this might require changes on the server side, but it could be totally worth it if your application is frontend-heavy (as most applications are these days).
I am writing an application using SignalR which acts as a push service of resources for clients. The service sends notifications to its clients regarding state changes of resources at regular intervals. Sometimes the resource state remains the same; in such cases I need a way to convey that to clients without sending the same resource again. In a way, I want to implement something like ETags with SignalR. To do that I need to modify the SignalR response headers (or I could use the query string).
Is there a way to do this?
You are saying you want to either pass the same state to clients (but with a NOT MODIFIED kind of header) or not pass anything at all if it is the same.
In that case you should actually not pass any message to the clients if the server has nothing new to give them. This is how real-time applications are supposed to work.
Also, instead of adding the overhead of extra headers, why not just send a simple SignalR ping to the clients saying "Hey, nothing modified. Just chill!"?
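For example, on the client side it could look something like this (this sketch assumes the ASP.NET Core SignalR JavaScript client and invented hub/method names; render() is a placeholder, and the older jQuery client would look different, but the idea is the same):
import * as signalR from '@microsoft/signalr';

const connection = new signalR.HubConnectionBuilder()
    .withUrl('/resourceHub')   // hypothetical hub endpoint
    .build();

// full payload is pushed only when the resource actually changed
connection.on('resourceChanged', resource => render(resource));

// lightweight "nothing modified" ping carries only an id, no resource body
connection.on('resourceUnchanged', resourceId => {
    console.log('Resource ' + resourceId + ' unchanged, keeping the cached copy');
});

connection.start();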
This is a theoretical question.
Imagine an ASP.NET website. Clicking a button makes the site send mail. Now:
I can send mail asynchronously in code
I can send mail using QueueBackgroundWorkItem
I can call a ONE-WAY web service located in the same website
I can call a ONE-WAY web service located in ANOTHER website (or another subdomain)
None of the above solutions wait for the mail operation to complete, so they are all fine.
My question is why I should use a service solution instead of the other solutions. Is there an advantage?
The 4th solution adds additional TCP/IP traffic to use the service, so it's not efficient, right?
If so, using a service under the same website (3rd solution) also generates additional traffic. Is that correct?
I need to understand why people use services under the same website. Is there any reason besides making something available to AJAX calls?
Any information would be great. I really need to get opinions.
Best
The most appropriate architecture will depend on several factors:
the volume of emails that needs to be sent
the need to reuse the email sending capability beyond the use case described
the simplicity of implementation, deployment, and maintenance of the code
Separating out the sending of emails in a service either in the same or another web application will make it available to other applications and from client side code. It also adds some complexity to the code calling the service as it will need to deal with the case when the service is not available and handle errors that may occur when placing the call.
Using a separate web application for the service is useful if the volume of emails sent is really large, as it allows offloading the work to one or more servers if needed. Given the described use case (a user clicks a button), this seems rather unlikely unless the website will have really large traffic. Creating a separate web application adds significant development, deployment, and maintenance work, both initially and over time.
Unless the volume of emails to be sent is really large (millions per day) or there is a need to reuse the email capability in other systems, creating the email sending function within the same web application (first two options listed in the question) is almost certainly the best way to go. It will result in the least amount of initial work, is easy to deploy, and (perhaps most importantly) will be the easiest to maintain.
An important concern to pay significant attention to when implementing an email sending function is robustness. Robustness can be achieved with any of the possible architectures and is somewhat of a different concern than the one emphasized by the question. However, it is important to consider the proper course of action if (1) the receiving SMTP server refuses to take the message (e.g., mailbox full, non-existent account, rejection as spam) or (2) an NDR is generated after the message is sent (e.g., rejection as spam). Depending on the kind of email sent, it may be OK to ignore these errors, or some corrective action may be needed (e.g., retry sending, alert the user who originated the email, ...).
I am interested in creating an application using the Meteor framework that will be disconnected from the network for long periods of time (multiple hours). I believe Meteor stores local data in RAM in a mini-mongodb JS structure. If the user closes the browser or refreshes the page, all local changes are lost. It would be nice if local changes were persisted to disk (localStorage? IndexedDB?). Any chance that's coming soon for Meteor?
Related question... how does Meteor deal with document conflicts? In other words, if 2 users edit the same MongoDB JSON doc, how is that conflict resolved? Optimistic locking?
Conflict resolution is "last writer wins".
More specifically, each MongoDB insert/update/remove operation on a client maps to an RPC. RPCs from a given client always play back in order. RPCs from different clients are interleaved on the server without any particular ordering guarantee.
If a client tries to issue RPCs while disconnected, those RPCs queue up until the client reconnects, and then play back to the server in order. When multiple clients are executing offline RPCs, the order they finally run on the server is highly dependent on exactly when each client reconnects.
For some offline mutations like MongoDB's $inc and $addToSet, this model works pretty well as is. But many common modifiers like $set won't behave very well across long disconnects, because the mutation will likely conflict with intervening changes from other clients.
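For example (Scores, scoreId and points are made-up names), two clients editing the same document while offline merge very differently depending on the modifier:
// Commutative modifier: two offline clients each doing $inc end up with
// both increments applied, regardless of replay order on reconnect.
Scores.update(scoreId, { $inc: { points: 1 } });

// Non-commutative modifier: whichever client reconnects last wins,
// silently overwriting the other client's intervening $set.
Scores.update(scoreId, { $set: { points: 10 } });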
So building "offline" apps is more than persisting the local database. You also need to define RPCs that implement some type of conflict resolution. Eventually we hope to have turnkey packages that implement various resolution schemes.