One of the projects I'm on is currently using a JMS Topic setup, with the client application containing listeners for two different durable subscribers.
Long story short, we're looking at several different ways to solve an ordering issue, and one of them is to look at JMSTimestamps. At first we were thinking we might use whatever the durable-subscription equivalent of a QueueBrowser is, but so far I haven't found anything.
Is there any way to accomplish either browsing the contents of a durable subscription, or seeing the next message without actually consuming it?
JMS does not provide any API to browse messages on a topic (QueueBrowser only works with queues). However, there is a TopicBrowser interface, which is an Oracle-specific extension to JMS.
You can also use JMSToolBox on SourceForge to subscribe to the topic in parallel with your regular clients and capture all the messages posted to that topic.
I need to schedule actions (HTTP requests is enough) at a certain point in time. Every programmed request will only run once.
Sure, I could implement this myself: save the event to a database, then have an event loop check whether an action should be launched.
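Roughly, that DIY version could look like the sketch below. It is only meant to show the shape of the event loop; the store interface, the request shape and the 30-second poll interval are all made up, and a real version would need locking/retries so multiple workers don't double-send.

```csharp
// Minimal sketch of the DIY approach: poll a store of saved requests and fire
// the ones that are due. IScheduledRequestStore is a hypothetical persistence
// abstraction (e.g. a database table of saved requests).
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public record ScheduledRequest(Guid Id, Uri Target, string Body);

public interface IScheduledRequestStore
{
    Task<IReadOnlyList<ScheduledRequest>> GetDueAsync(DateTimeOffset now);
    Task MarkSentAsync(Guid id);
}

public class SchedulerLoop
{
    private static readonly HttpClient Http = new();

    public static async Task RunAsync(IScheduledRequestStore store)
    {
        while (true)
        {
            foreach (var request in await store.GetDueAsync(DateTimeOffset.UtcNow))
            {
                // Fire the saved HTTP request once it is due, then mark it as sent.
                await Http.PostAsync(request.Target, new StringContent(request.Body));
                await store.MarkSentAsync(request.Id);
            }

            await Task.Delay(TimeSpan.FromSeconds(30));
        }
    }
}
```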
However, this is such a generic need that there must be an existing service for it; it feels like something I shouldn't implement myself. Any ideas where this can be found? I'm thinking one could just specify the HTTP request to be saved (URI, body, headers).
AWS does have a way of doing this using CloudWatch Events with a cron expression configured for the specific point in time, but that is way too clunky IMO. Is there an existing service/solution for this?
Agenda-Rest is a solution that does exactly what I asked for. It has to be self-hosted, though, as there seems to be no commercial hosting for it. It's also not actively developed, which could very well be because it's pretty much feature complete; after all, it's a small wrapper on top of the Agenda library.
There's an alternative, suggested in a GitHub issue on Agenda-Rest, called AgenDash, built on top of the same library. It's actively developed as of autumn 2022. It's primarily a UI on top of Agenda, but it has REST routes that can be called.
There are also several libraries in various languages that expose this functionality, provided a persistence mechanism:
agenda (Node.js + MongoDB)
redbeat (Python + Redis)
db-scheduler (Java + any RDBMS)
I'm quite surprised that I can't find this functionality as a first-class citizen in the public cloud providers.
Edit:
AWS introduced EventBridge Scheduler in November 2022. It does not allow for an HTTP request per se, but things like invoking a Lambda or posting a message to a queue are possible. It supports one-time schedules, so there is no need for cron and no need to remove the schedule later, as mentioned in my question above.
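For completeness, a rough sketch of creating such a one-time schedule from code, assuming the AWSSDK.Scheduler package for .NET; the schedule name, ARNs and exact member names are placeholders and may differ slightly from the real SDK.

```csharp
// Rough sketch: create a one-time EventBridge Scheduler schedule that invokes a
// target (e.g. a Lambda or an SQS queue) at a fixed point in time.
// Assumes the AWSSDK.Scheduler NuGet package; the ARNs below are placeholders.
using System.Threading.Tasks;
using Amazon.Scheduler;
using Amazon.Scheduler.Model;

public static class OneTimeSchedule
{
    public static async Task CreateAsync()
    {
        var client = new AmazonSchedulerClient();

        await client.CreateScheduleAsync(new CreateScheduleRequest
        {
            Name = "send-reminder-42",
            // One-time schedule: fire once at the given timestamp.
            ScheduleExpression = "at(2023-01-15T09:00:00)",
            FlexibleTimeWindow = new FlexibleTimeWindow { Mode = FlexibleTimeWindowMode.OFF },
            Target = new Target
            {
                Arn = "arn:aws:lambda:eu-west-1:123456789012:function:send-reminder",
                RoleArn = "arn:aws:iam::123456789012:role/scheduler-invoke-role",
                Input = "{\"userId\": 42}"
            }
        });
    }
}
```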
I need to implement the following logic: I send a message to the user, and if he doesn't reply, I send it again after 12 hours.
I wonder what the best way to do this is? I was thinking about using Akka.NET: after a certain amount of time, the actor would check whether the user replied to my message and, if not, would send it again.
Is there maybe an easier way? If not, I have some questions about Akka.NET:
Do you know any good sources where I can see how this library should be used in ASP.NET Core? The documentation is not clear enough for me.
Where should I keep the actors and the logic associated with them? In a separate project? Where should I create the ActorSystem?
I'm new to this topic, thank you in advance for all the answers.
In theory you could just use the standard actor system scheduler to schedule a message telling the actor to resend the email after 12 hours, but this has the natural problem that if your process crashes, all of its in-memory state will be lost.
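A minimal sketch of that purely in-memory variant; the actor and message types are made up for illustration, and nothing here survives a process restart.

```csharp
// Sketch of the in-memory approach: when a message is sent to the user, schedule
// a reminder for 12 hours later and cancel it if a reply arrives first.
using System;
using Akka.Actor;

public record MessageSent(string UserId);
public record UserReplied(string UserId);
public record ResendMessage(string UserId);

public class ReminderActor : ReceiveActor
{
    private ICancelable _pending;

    public ReminderActor()
    {
        Receive<MessageSent>(msg =>
        {
            // Schedule a ResendMessage to ourselves in 12 hours (in-memory only).
            _pending = Context.System.Scheduler.ScheduleTellOnceCancelable(
                TimeSpan.FromHours(12), Self, new ResendMessage(msg.UserId), Self);
        });

        Receive<UserReplied>(_ => _pending?.Cancel());

        Receive<ResendMessage>(msg =>
        {
            // Re-send the original message here (e.g. call your mail/notification service).
        });
    }
}
```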
In practice you could use one of two existing plugins, which give you durable schedules:
Akka.Persistence.Reminders, which works on top of Akka.Persistence, so you can use it with any Akka.NET persistence plugin.
Another way is to use Akka.Quartz.Actor, which offers dedicated actors on top of Quartz.NET and makes use of Quartz's persistence capabilities.
I'd like to create a system that 'appends' mails to each other.
Situation: every time an entity is changed, I'd like to send a mail to the subscribers of that entity.
But when the entity is changed 10 times within a short period (like 5-10 minutes), the subscribers shouldn't be spammed with emails.
So I was thinking of creating a 'queue', and to be more precise, of using Azure Service Bus.
After searching through some of the documentation, I found two interesting properties:
SessionId => would be the Id of the entity
BatchFlushInterval (Client-side batching)
'If the client sends additional messages during this time period, it transmits the messages in a single batch'
This sounded perfect.
In this way I receive all the 'changes of the entity' in a single batch and could construct a single e-mail to send.
But I can't seem to find this option anymore in the new Azure Service Bus NuGet package.
Now that I've searched for alternatives, I have a feeling this is not a 'normal' practice.
Does someone have some experience in this field?
I would like to avoid having to use a cron job, but if that is the best solution, please let me know.
I know this is a really broad question and more of a 'need for information', so even comments with links would make me really happy.
Thanks in advance
Brecht
I don't think message sessions or BatchFlushInterval are the approach to take here. What you're looking for is to buffer messages in order to create a single notification rather than multiple ones. I'd personally go with receiving a batch from Azure Service Bus and processing the batch to "append" notifications.
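For example, a rough sketch with the Azure.Messaging.ServiceBus client. The queue name, the use of the Subject property to carry the entity id, and the batch limits are assumptions, not part of the original question.

```csharp
// Rough sketch: pull a batch of change messages, group them by entity id
// (assumed to be carried in the message Subject) and build one notification
// per entity instead of one per message.
using System;
using System.Linq;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class ChangeDigest
{
    public static async Task ProcessAsync(string connectionString)
    {
        await using var client = new ServiceBusClient(connectionString);
        ServiceBusReceiver receiver = client.CreateReceiver("entity-changes");

        // Wait up to 5 minutes to collect a batch of up to 100 messages.
        var messages = await receiver.ReceiveMessagesAsync(
            maxMessages: 100, maxWaitTime: TimeSpan.FromMinutes(5));

        foreach (var group in messages.GroupBy(m => m.Subject))
        {
            // One mail per entity: join all change descriptions for this entity id.
            var digest = string.Join(Environment.NewLine, group.Select(m => m.Body.ToString()));
            Console.WriteLine($"Would send one mail for entity {group.Key}:{Environment.NewLine}{digest}");
        }

        foreach (var message in messages)
        {
            await receiver.CompleteMessageAsync(message);
        }
    }
}
```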
We are using a microservices architecture where top-level services expose REST APIs to the end user and backend services do the work of querying the database.
When we get one user request, we make ~30k requests to the backend service. We are using RxJava for the top-level service, so all 30k requests get executed in parallel.
We are using HAProxy to distribute the load between the backend services.
However, when we get 3-5 user requests, we start getting network connection exceptions: "No route to host" exceptions and socket connection exceptions.
What are the best practices for this kind of use case?
Well, you ended up with the classic microservice mayhem. It's completely irrelevant what technologies you employ; the problem lies in the way you applied the concept of microservices!
It is natural in this architecture that services call each other (preferably that should happen asynchronously!). Since I know only a little about your service APIs, I'll have to make some assumptions about what went wrong in your backend:
I assume that a user makes a request to one service. This service will then (obviously synchronously) query another service and receive the 30k records you described. Since you probably need to know more about these records, you now have to make another request per record to a third service/endpoint to aggregate all the information your frontend requires!
This tells me that you probably got the whole bounded-contexts thing wrong! So much for the analytical part. Now to the solution:
Your API should return all the required information along with the query that enumerates the records! Sometimes that can seem to contradict the kind of isolation and authority over data/state that the microservices pattern prescribes, but it is not feasible to isolate data/state in a single service only, because that leads to the problem you currently have: all other services HAVE to query that data every time in order to return correct data to the frontend! However, it is possible to duplicate data as long as the authority over the data/state stays clear!
Let me illustrate that with an example: let's assume you have a classic shop system. Articles are grouped. Now you would probably write two microservices, one that handles articles and one that handles groups, and you would be right to do so! You might have already decided that the group-service will hold the relation to the articles assigned to a group. Now if the frontend wants to show all items in a group, what happens? The group-service receives the request and returns 30,000 article numbers in a beautiful JSON array, which the frontend receives. This is where it all goes south: the frontend now has to query the article-service for every article it received from the group-service!!! Aaand you're screwed!
Now there are multiple ways to solve this problem. One is (as previously mentioned) to duplicate article information into the group-service: every time an article is assigned to a group via the group-service, it has to read all the information for that article from the article-service and store it, to be able to return it with the get-me-all-the-articles-in-group-x query. This is fairly simple, but keep in mind that you will need to update this information when it changes in the article-service, or you'll be serving stale data from the group-service. Event sourcing can be a very powerful tool in this use case and I suggest you read up on it! You can also use simple messages sent from one service (in this case the article-service) to a message bus of your preference and make the group-service listen and react to these messages.
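A tiny sketch of what such a listener in the group-service could look like, shown in C# for consistency with the rest of this write-up; the event type, the store interface and the bus wiring are all hypothetical and stand in for whatever transport and persistence you already use.

```csharp
// Sketch of the duplication approach: the group-service keeps its own copy of
// the article fields it needs and refreshes that copy whenever the
// article-service publishes a change event.
using System.Threading.Tasks;

public record ArticleChanged(long ArticleId, string Name, decimal Price);

public interface IGroupArticleStore
{
    Task UpsertArticleCopyAsync(long articleId, string name, decimal price);
}

public class ArticleChangedHandler
{
    private readonly IGroupArticleStore _store;

    public ArticleChangedHandler(IGroupArticleStore store) => _store = store;

    // Invoked by the message bus whenever the article-service announces a change,
    // so the get-me-all-the-articles-in-group-x query can be answered locally.
    public Task HandleAsync(ArticleChanged evt) =>
        _store.UpsertArticleCopyAsync(evt.ArticleId, evt.Name, evt.Price);
}
```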
Another very simple, quick-and-dirty solution could be to provide a new REST endpoint on the article-service that takes an array of article ids and returns the information for all of them in one call, which would be much faster. This could probably solve your problem very quickly.
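That quick fix could look roughly like the following, sketched as an ASP.NET Core controller; ArticleDto and IArticleRepository are placeholders, and the same idea applies in any REST framework.

```csharp
// Quick-and-dirty batch endpoint: accept a list of article ids and return all
// matching articles in one response instead of one request per article.
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record ArticleDto(long Id, string Name, decimal Price);

public interface IArticleRepository
{
    Task<IReadOnlyList<ArticleDto>> GetByIdsAsync(IReadOnlyCollection<long> ids);
}

[ApiController]
[Route("articles")]
public class ArticlesController : ControllerBase
{
    private readonly IArticleRepository _repository;

    public ArticlesController(IArticleRepository repository) => _repository = repository;

    // POST /articles/batch with a JSON array of ids in the body.
    [HttpPost("batch")]
    public async Task<ActionResult<IReadOnlyList<ArticleDto>>> GetBatch([FromBody] long[] ids)
        => Ok(await _repository.GetByIdsAsync(ids));
}
```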
A good rule of thumb in a microservice backend is to aim for a constant number of these cross-service calls, meaning the number of calls that cross service boundaries should never be directly related to the amount of data that was requested! We closely monitor which service calls are made as a result of a given request that comes through our API, to keep track of which services call which other services and where our performance bottlenecks will arise or have been caused. Whenever we detect that a service makes many calls to other services (there is no fixed threshold, but every time I see more than 4 I start asking questions!), we investigate why and how this could be fixed. There are some great metrics tools out there that can help you with tracing requests across service boundaries!
Let me know if this was helpful or not, and whatever solution you implemented!
I am looking at SignalR as a possible way to manage messaging between clients on a web application. The scenario is that one person would create a session/room and a few other people would join it. Then everyone within that room would send messages to each other, a lot like a chat room, except they would be sending variable-update messages etc.
Now I keep seeing it said that static variables should not be used, which I completely agree with, but if a new Hub instance is created for each request (I am planning to use a hub due to the different types of messages), how does it store the group each client is in?
I suggest you review the source code of the JabbR project below; it does very much what you want.
https://github.com/davidfowl/JabbR
https://github.com/davidfowl/JabbR/blob/master/JabbR/Hubs/Chat.cs
You can also look at the section titled "Calling methods on specific clients or groups" here:
https://github.com/SignalR/SignalR/wiki/Hubs
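To make the group part concrete: hub instances are indeed created per invocation, but group membership is tracked by SignalR itself (per connection), not by your hub, so you don't need static state for it. A minimal sketch, shown with ASP.NET Core SignalR; classic SignalR uses Groups.Add and Clients.Group with the same idea, and the method and event names here are just illustrative.

```csharp
// Minimal room hub: SignalR keeps track of which connections belong to which
// group, so the (transient) hub instance never needs to store that itself.
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class RoomHub : Hub
{
    public async Task JoinRoom(string roomName)
    {
        // Add the current connection to the named group; SignalR keeps this
        // membership for the lifetime of the connection.
        await Groups.AddToGroupAsync(Context.ConnectionId, roomName);
        await Clients.Group(roomName).SendAsync("UserJoined", Context.ConnectionId);
    }

    public Task SendUpdate(string roomName, string payload) =>
        // Fan the message out to everyone currently in the room.
        Clients.Group(roomName).SendAsync("ReceiveUpdate", payload);
}
```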