Spring-Flex BlazeDS Multi-User + Global Chat Messaging - apache-flex

I'm working on an application that allows users to send internal messages to one another.
I'll tell you what the current setup is; please help me figure out how to make it work, or perhaps suggest another angle to take. We're using BlazeDS with Spring.
User A listens for messages on message topic Chat.A.
User B listens for messages on message topic Chat.B.
Both users listen for global messages (system-wide messages) on topic Chat.System.
So we have a multi-topic consumer for the personal message topic and one for the global message topic.
So a couple of questions I have:
Is it better to do it as two distinct consumers (that share the same handler function) or as one multi-topic consumer?
How do I check that client A is actually the one listening on Chat.A and not just someone else who knows how to write BlazeDS clients? We have Spring Security in place, but how can I listen for subscription requests and block them if the user name (pulled from the security context) doesn't match the sub-topic they requested?
I've also read about selectors. That looked promising, but again, how do I check that, when a consumer uses selector="for = 'A' OR for = 'System'", that consumer belongs to a client that has authenticated as that "for" user?
How do selectors compare/contrast to sub-topics? What's the best situation for each of them?

A selector is basically an expression you can use to filter which messages will be dispatched to your consumer. According to the docs, it uses SQL-92 conditional expression syntax:
http://livedocs.adobe.com/blazeds/1/blazeds_devguide/help.html?content=messaging_6.html
A subtopic is sort of a special case of a selector, filtering out messages whose "DSSubtopic" header doesn't match the provided value.
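To make that concrete, here is a minimal plain-Java sketch (not the BlazeDS API; class and method names are made up for illustration) of what a selector such as for = 'A' OR for = 'System' effectively does: match a message header against a set of accepted values.

```java
import java.util.Map;
import java.util.Set;

public class SelectorSketch {
    // Hypothetical evaluator: a real selector parses a SQL-92 expression.
    // Here we hard-code the equivalent of "for = 'A' OR for = 'System'"
    // as a membership test on the "for" header.
    static boolean matches(Map<String, String> headers, Set<String> allowedFor) {
        return allowedFor.contains(headers.get("for"));
    }
}
```

The key point the next paragraph makes still applies: this filter runs on behalf of the client, so it selects what the client receives but proves nothing about who the client is.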
The important thing to understand with both of these is that the client determines which messages are sent to it, and as such it cannot be relied upon entirely for security.
To implement secure server-based filtering of messages based on an authenticated user's identity, see my answer to a related question here:
Flex Messaging Security
As far as multiple Consumers vs. a MultiTopicConsumer, I'm not sure. They're both going to use the same underlying ChannelSet, so there shouldn't be a big performance difference. I think it's mostly a question of whether it's convenient to have one event handler that responds to all messages from the MultiTopicConsumer or whether it's easier to have separate event handlers for each Consumer.

I usually use subtopics for this. But if you do it that way, make sure you disable subscriptions to wildcard subtopics.
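If I remember correctly, BlazeDS lets you enforce this on the server by implementing the MessagingSecurity interface (allowSubscribe/allowSend) in a custom messaging adapter. The core of such a check might look like this plain-Java sketch (class and method names are illustrative, not the real adapter API):

```java
public class SubtopicGuard {
    // Allow a client to subscribe to "Chat.<name>" only when <name> is its
    // own authenticated user name, or when it is the shared "Chat.System"
    // subtopic. Wildcard subtopics are rejected outright.
    static boolean allowSubscribe(String authenticatedUser, String subtopic) {
        if (subtopic == null || subtopic.contains("*")) return false; // no wildcards
        if (subtopic.equals("Chat.System")) return true;              // global topic
        return subtopic.equals("Chat." + authenticatedUser);          // own topic only
    }
}
```

The authenticated user name would come from the server-side security context (e.g. Spring Security), never from anything the client sends.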


How to handle network calls in Microservices architecture

We are using a microservices architecture where top services expose REST APIs to end users and backend services do the work of querying the database.
When we get 1 user request, we make ~30k requests to the backend service. We are using RxJava for the top service, so all 30k requests get executed in parallel.
We are using HAProxy to distribute the load between backend services.
However, when we get 3-5 user requests, we start getting network connection exceptions: No Route to Host exceptions and socket connection exceptions.
What are the best practices for this kind of use case?
Well, you ended up with the classical microservice mayhem. It's completely irrelevant what technologies you employ - the problem lies in the way you applied the concept of microservices!
It is natural in this architecture that services call each other (preferably, that should happen asynchronously!!). Since I know only a little about your service APIs, I'll have to make some assumptions about what went wrong in your backend:
I assume that a user makes a request to one service. This service will now (obviously synchronously) query another service and receive these 30k records you described. Since you probably have to know more about these records you now have to make another request per record to a third service/endpoint to aggregate all the information your frontend requires!
This shows me that you probably got the whole thing with bounded contexts wrong! So much for the analytical part. Now to the solution:
Your API should return all the information along with the query that enumerates them! Sometimes that can seem like a contradiction of the kind of isolation and authority over data/state that the microservices pattern prescribes - but it is not feasible to isolate data/state in one service only, because that leads to the problem you currently have: all other services HAVE to query that data every time to be able to return correct data to the frontend! It is, however, possible to duplicate it, as long as the authority over the data/state stays clear!
Let me illustrate that with an example: let's assume you have a classical shop system. Articles are grouped. Now you would probably write two microservices - one that handles articles and one that handles groups! And you would be right to do so! You might have already decided that the group-service will hold the relation to the articles assigned to a group! Now if the frontend wants to show all items in a group, what happens? The group-service receives the request and returns 30'000 article numbers in a beautiful JSON array that the frontend receives. This is where it all goes south: the frontend now has to query the article-service for every article it received from the group-service!!! Aaand you're screwed!
Now there are multiple ways to solve this problem. One is (as previously mentioned) to duplicate article information into the group-service: every time an article is assigned to a group using the group-service, it has to read all the information for that article from the article-service and store it, to be able to answer the get-me-all-the-articles-in-group-x query itself. This is fairly simple, but keep in mind that you will need to update this information when it changes in the article-service, or you'll be serving stale data from the group-service. Event sourcing can be a very powerful tool in this use case, and I suggest you read up on it! You can also use simple messages sent from one service (in this case the article-service) to a message bus of your preference and make the group-service listen and react to these messages.
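As a sketch of that last idea (all class and method names here are hypothetical): the group-service keeps its own copy of the article data and keeps it fresh by reacting to "article changed" messages from the bus, so reads never have to cross the service boundary.

```java
import java.util.HashMap;
import java.util.Map;

public class GroupServiceProjection {
    // Local, duplicated copy of article data. The article-service keeps
    // authority over the data; this map is only a read model that is
    // updated whenever an ArticleChanged event arrives on the bus.
    private final Map<Integer, String> articlesById = new HashMap<>();

    // Event handler: called for every ArticleChanged message.
    // Last write wins, so the projection converges on the latest state.
    void onArticleChanged(int articleId, String articleData) {
        articlesById.put(articleId, articleData);
    }

    // Served directly by the group-service, no cross-service call needed.
    String articleFor(int articleId) {
        return articlesById.get(articleId);
    }
}
```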
Another very simple, quick-and-dirty solution could be to provide a new REST endpoint on the article-service that takes an array of article ids and returns the information for all of them in one response. This could probably solve your problem very quickly.
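A sketch of the batch lookup behind such an endpoint (the endpoint and all names are hypothetical); the point is that one request resolves the whole id array instead of one call per article:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ArticleBatchLookup {
    // One round trip: resolve many article ids at once. The store stands in
    // for whatever backing query the article-service really runs.
    static List<String> findArticles(Map<Integer, String> store, List<Integer> ids) {
        List<String> result = new ArrayList<>();
        for (Integer id : ids) {
            String article = store.get(id);
            if (article != null) result.add(article); // silently skip unknown ids
        }
        return result;
    }
}
```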
A good rule of thumb in a backend with microservices is to aspire to a constant number of these cross-service calls, which means the number of calls that go across service boundaries should never be directly related to the amount of data that was requested! We closely monitor which service calls are made because of a given request that comes through our API, to keep track of which services call which other services and where our performance bottlenecks will arise or have been caused. Whenever we detect that a service makes many calls to other services (there is no fixed threshold, but every time I see >4 I start asking questions!), we investigate why and how this could be fixed! There are some great metrics tools out there that can help you with tracing requests across service boundaries!
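That rule of thumb can even be checked mechanically with a per-request counter; a sketch (the threshold of 4 is just the example number from above, not a fixed rule, and the class name is invented):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CrossServiceCallBudget {
    private final AtomicInteger calls = new AtomicInteger();
    private final int threshold;

    CrossServiceCallBudget(int threshold) {
        this.threshold = threshold;
    }

    // Call this before every outbound cross-service request made on behalf
    // of one user request. Returns true the moment the count exceeds the
    // threshold, i.e. when it is time to log, alert, or trace.
    boolean recordCallAndCheck() {
        return calls.incrementAndGet() > threshold;
    }
}
```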
Let me know if this was helpful or not, and whatever solution you implemented!

use webservice in same project or handle it with code?

This is a theoretical question.
Imagine an ASP.NET website. By clicking a button, the site sends mail. Now:
I can send mail async with code
I can send mail using QueueBackgroundWorkItem
I can call a ONEWAY webservice located in same website
I can call a ONEWAY webservice located in ANOTHER website (or another subdomain)
None of the above solutions waits for the mail operation to complete, so they are all fine.
My question is: why should I use the service solution instead of the other solutions? Is there an advantage?
The 4th solution adds additional TCP/IP traffic to use the service - it's not efficient, right?
If so, using a service under the same website (3rd solution) also generates additional traffic. Is that correct?
I need to understand why people use services under the same website. Is there any reason besides making something available to AJAX calls?
Any information would be great. I really need to get opinions.
Best
The most appropriate architecture will depend on several factors:
the volume of emails that needs to be sent
the need to reuse the email sending capability beyond the use case described
the simplicity of implementation, deployment, and maintenance of the code
Separating out the sending of emails into a service, either in the same or another web application, will make it available to other applications and to client-side code. It also adds some complexity to the code calling the service, as it will need to deal with the case where the service is not available and handle errors that may occur when placing the call.
Using a separate web application for the service is useful if the volume of emails sent is really large, as it allows offloading the work to one or more servers if needed. Given the use case described (user clicks on a button), this seems rather unlikely, unless the web site will have really large traffic. Creating a separate web application adds significant development, deployment, and maintenance work, initially and over time.
Unless the volume of emails to be sent is really large (millions per day) or there is a need to reuse the email capability in other systems, creating the email sending function within the same web application (first two options listed in the question) is almost certainly the best way to go. It will result in the least amount of initial work, is easy to deploy, and (perhaps most importantly) will be the easiest to maintain.
An important concern to pay attention to when implementing an email sending function is robustness. Robustness can be achieved with any of the possible architectures and is a somewhat different concern from the one emphasized by the question. However, it is important to consider the proper course of action if (1) the receiving SMTP server refuses to take the message (e.g., mailbox full; non-existent account; rejection as spam) or (2) an NDR is generated after the message is sent (e.g., rejection as spam). Depending on the kind of email sent, it may be OK to ignore these errors, or some corrective action may be needed (e.g., retry sending, alert the user who originated the email, ...).
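One of those corrective actions, retrying a transient send failure, can be sketched like this; the Sender interface is made up for the example so the retry logic stays independent of any concrete mail API:

```java
public class EmailRetry {
    // Hypothetical abstraction over whatever mail API is actually used.
    interface Sender {
        void send(String to, String body) throws Exception;
    }

    // Try a few times; report whether the message eventually went out, so
    // the caller can decide whether to alert the originating user or queue
    // some other corrective action.
    static boolean sendWithRetry(Sender sender, String to, String body, int attempts) {
        for (int i = 0; i < attempts; i++) {
            try {
                sender.send(to, body);
                return true;
            } catch (Exception e) {
                // transient failure (e.g. mailbox busy): fall through and retry
            }
        }
        return false; // persistent failure: surface it instead of swallowing it
    }
}
```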

What is the best practice for a REST api to manage maximum characters for a field?

I have a Web Api RESTful service that has a few POST endpoints to insert a new object to the database. We want to limit the maximum characters accepted for the object name to 20. Should the DB, API, or UI handle this?
Obviously, we could prevent more than 20 characters on any of those layers. However, if it gets past the UI then the form has been submitted. At that point, we would want the Service layer or the DB layer to return an informative explanation as to why it was not accepted. What would be the best practice to handle this?
Should the DB, API, or UI handle this?
At the very least, your API must handle data validation. Everyone has their own opinions on how REST should work, but one good approach would be to return HTTP 400 (Bad Request), with some content that includes information about why the request was bad.
Client-side checking is a definite bonus on top of this. Ideally you'll never even see this code get hit at your API layer. The client should also be capable of handling the potential "Bad Request" response in a graceful way. If there's ever a mismatch between the rules applied by the API and the client, you'll want the client to recognize that its action didn't succeed, and display an appropriate error to help the user respond to the issue.
Database-level checks are also a good idea if you ever allow data to bypass your API layer, through bulk imports or something. If not, the database might just be one more place you'll have to remember to change if you ever change your validation requirements.
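A sketch of the API-layer check for the 20-character limit from the question (names are illustrative); the endpoint would turn a non-empty Optional into an HTTP 400 response body:

```java
import java.util.Optional;

public class NameValidation {
    static final int MAX_NAME_LENGTH = 20; // single place to change the rule

    // Returns an error message suitable for a 400 (Bad Request) body,
    // or empty when the name is acceptable.
    static Optional<String> validateName(String name) {
        if (name == null || name.isEmpty())
            return Optional.of("name is required");
        if (name.length() > MAX_NAME_LENGTH)
            return Optional.of("name must be at most " + MAX_NAME_LENGTH + " characters");
        return Optional.empty();
    }
}
```

Keeping the limit in one constant also makes it easier to keep the client-side check and any database column length in sync with the API rule.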

Does SignalR really broadcast every message to all connected clients?

We are currently evaluating solutions for implementing server-sent events (not necessarily using the Server-Sent Events EventSource transport).
Our use case is actually quite similar to Stack Overflow's. We've got a custom CMS implemented as an SPA that supports collaborative editing. As a first step, we want the server to inform all clients on the same page when another user has modified it.
The obvious choice would be choosing SignalR for this, but this statement on XSockets' comparison page got me thinking:
If a framework broadcasts data to all clients connected we have just
inverted the AJAX issue and now we have a server hammering clients
with information they do not want.
Is this still true with SignalR 2? I mean, wouldn't broadcasting to all clients regardless of group membership make groups totally useless in the first place?
Some Stats:
> 10000 active users
> 10000 pages
A message sent to a group in SignalR will only arrive at the clients in that group. So there is no need to worry about broadcasting if you use groups.
Groups are smart until you have to do something like @Lars Höppner describes: groups are not dynamic, and they are really just a very simple subscription. It is just like pub/sub where you subscribe to a topic "A", except that in SignalR you are a member of a group "A" instead.
If you do not use groups, SignalR will broadcast. That can be OK, but consider this:
You have 2 pages, "A" and "B", and they both connect to the hub "MyHub".
When you send a message from "MyHub" to all clients on the method "MyMessage", and only page "A" has implemented a client-side handler for "MyMessage", the clients on page "B" will still get that message sent to the browser - you can see it in the Frames tab (in Chrome).
So messages will arrive at clients not interested in them; not a big deal until you get a lot of clients or send sensitive data.
If you never connect to the same hub from 2 different pages, you will be fine. And if you always use groups, you will also be fine. Otherwise, you will have to think about where the messages will arrive!
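The group model described above boils down to server-side membership tracking; a minimal sketch in plain Java (not the SignalR API, all names invented):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class GroupDispatch {
    // group name -> connection ids. Membership lives on the server, so a
    // client cannot opt itself into messages it should not receive.
    private final Map<String, Set<String>> groups = new HashMap<>();

    void addToGroup(String group, String connectionId) {
        groups.computeIfAbsent(group, g -> new HashSet<>()).add(connectionId);
    }

    // Exactly the connections a group message would be delivered to;
    // everyone else never sees the payload on the wire.
    Set<String> recipientsOf(String group) {
        return groups.getOrDefault(group, Set.of());
    }
}
```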
Off topic:
- My favorite thing about SignalR is the nice transports!
- The worst thing about SignalR is that it needs those transports, because it was designed treating WebSockets as an OS feature (you need Windows 8 / Server 2012 to get WebSockets).
When you send messages to a group, the clients that aren't in the group don't receive the message. This is based on a pub/sub model on the server (so when you use groups, it's not the client that decides whether it's interested in the message), so there is no "hammering all clients" going on in that case.
I think what the XSockets team is talking about in this paragraph is that with XSockets, you have more fine-grained control over who to send messages to. So instead of sending to a group with a specific name (which you have to create first), you can send to clients with certain properties (sort of like a dynamic group):
But what happens if you want to send to all clients within the range
of x km from y? Or if you want to target only males between 25-30 yrs
that have red hair?
You'd have to write your own code to do that in SignalR. This could, for example, be done on top of the code that maps users to connections. A simple solution (not necessarily the best) could use LINQ to select the appropriate users depending on the properties you're interested in, then get all corresponding connection ids from the ConnectionMapping instance to send to.
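A Java-streams analogue of that LINQ selection (the User record and its properties are invented for the example): pick connection ids by user properties at send time, which is what makes the "group" dynamic.

```java
import java.util.List;
import java.util.stream.Collectors;

public class DynamicGroup {
    // Hypothetical view of the connection-mapping data.
    record User(String connectionId, String hairColor, int age) {}

    // "Dynamic group": instead of a pre-created named group, select the
    // matching connections by property each time a message is sent.
    static List<String> redHaired25to30(List<User> users) {
        return users.stream()
                .filter(u -> u.hairColor().equals("red"))
                .filter(u -> u.age() >= 25 && u.age() <= 30)
                .map(User::connectionId)
                .collect(Collectors.toList());
    }
}
```

The resulting connection ids would then be passed to whatever per-connection send mechanism the framework offers.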
In SignalR you can subscribe to a topic (group) and get messages only for the topics you are subscribed to, plus global messages.
Working with groups.
I am not completely sure what that paragraph is meant to say.

Workflow services and operations not supported by states

I have two workflow services (state machines) that should cooperate and exchange information to accomplish the desired behavior.
The problem I have (but I also had it with only one state machine) is that sometimes I try to send an operation which is not allowed by the current state.
There are two problems: 1) I have to wait for the operation timeout to know that the operation was not allowed, and 2) I'm "masking" real timeouts due to other problems.
So far, I have found two possible solutions: 1) change the signatures to return true (allowed) or false (not allowed) and add all operations to all states (not-allowed operations would trigger a self-transition), or 2) always add all transitions to all states (not-allowed ones would trigger a self-transition), but reply to transitions that are not allowed with an exception.
I would like to know which is the best option (and, of course, I'd appreciate other possible solutions too).
I would also like to know how I could reply to a request with an exception (maybe throwing it within a try/catch?).
Thanks
Another option here is to use the information in the workflow persistence store. One of the columns contains the active bookmarks, and in the case of a Receive activity that is the SOAP operation. You could have a separate service that exposes this information for a given workflow instance.
You still need to cater for the fact that you might send messages to a workflow that is in a different state, because the workflow persistence store isn't updated right away (unless you make it do so) and because multiple people might send messages to the same workflow instance. Still, this basic technique works really well, and I have used it to enable/disable buttons in the UI based on the state of a workflow.
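A sketch of the resulting state-to-operations check such a service could expose (the states and operations are invented for the example; in the real system the allowed set comes from the persistence store's active-bookmarks column):

```java
import java.util.Map;
import java.util.Set;

public class WorkflowOperations {
    // Illustrative state machine: which SOAP operations (active Receive
    // bookmarks) are valid in each state.
    static final Map<String, Set<String>> ALLOWED = Map.of(
            "Draft",     Set.of("Submit", "Cancel"),
            "Submitted", Set.of("Approve", "Reject"),
            "Approved",  Set.of("Archive"));

    // The UI calls this to enable/disable buttons up front, instead of
    // sending the operation and waiting for a timeout to learn it was
    // not allowed.
    static boolean isAllowed(String state, String operation) {
        return ALLOWED.getOrDefault(state, Set.of()).contains(operation);
    }
}
```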
