Load balancing MessageQ (ActiveMQ) - asynchronous

This is my scenario:
1) A REST-based web service (say X) takes in requests and puts them into ActiveMQ.
2) There is a listener on the other side of the queue that reads and processes the messages. This is asynchronous.
I decided to go with ActiveMQ, but I'm trying to find a solution where the queue and the queue listeners are scalable.
1) I have many instances of X running, hence there are multiple producers to the queue.
2) Ordering is important to me.
3) Since my REST service is sessionless, I don't have a way to tag a bunch of requests with the same message ID.
4) If I use a single queue, it works fine.
But I want to scale it up and use multiple queues and multiple queue consumers without compromising on the order.
Can someone suggest a solution to this problem?
Thanks much,

To achieve ordering of messages, there are two ways defined in ActiveMQ:
1) Message Groups, based on JMSXGroupID
2) Exclusive Consumer
Message Groups are more useful than an Exclusive Consumer: with an Exclusive Consumer only one consumer is used at a time, and the broker only switches to another consumer if that one fails.
You can read the ActiveMQ documentation for the same here
http://activemq.apache.org/how-do-i-preserve-order-of-messages.html
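As a rough illustration of the Message Groups approach (a minimal Java/JMS sketch; the broker URL, queue name, and the idea of deriving the group key from something like an account id in the request are my assumptions, not part of the question), the producer only has to stamp each message with a JMSXGroupID. The broker then keeps all messages of one group in order on a single consumer, while different groups are spread across your pool of consumers:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class GroupedProducer {
    public static void main(String[] args) throws JMSException {
        // Hypothetical broker URL and queue name, for illustration only.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("requests"));

        // Assumption: requests that must stay in order share some key,
        // e.g. an account id extracted from the REST payload.
        String groupKey = "account-42";

        TextMessage message = session.createTextMessage("request payload");
        // All messages carrying the same JMSXGroupID are delivered, in order,
        // to one consumer; different groups can go to different consumers.
        message.setStringProperty("JMSXGroupID", groupKey);
        producer.send(message);

        connection.close();
    }
}
```

The consumers need no special code - they are plain queue consumers, and ActiveMQ does the group-to-consumer assignment.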
hope this helps!
Good luck!

We can use the following kind of connection URL to spread load across ActiveMQ brokers:
failover://(tcp://192.nnn.nn.nn:61616,tcp://192.nnn.nn.nn:61616)?randomize=false
randomize=true makes the client shuffle messages between the two ActiveMQ brokers in active/active fashion, rather than only switching brokers on failover.
The complete reference for this can be found under the following Apache site link:
http://activemq.apache.org/failover-transport-reference.html
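For example (a minimal sketch assuming the Java ActiveMQ client; the broker host names are placeholders), the failover URL is simply passed to the connection factory, and with randomize=false the client always tries the first broker and only moves to the second on failure:

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
    public static void main(String[] args) throws JMSException {
        // Placeholder broker addresses. randomize=false: always prefer the first
        // broker; randomize=true: spread connections across both brokers.
        String url = "failover:(tcp://broker1:61616,tcp://broker2:61616)?randomize=false";

        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(url);
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers and consumers as usual; the failover
        // transport transparently reconnects to the other broker if one dies.
        connection.close();
    }
}
```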
Still, a high availability (i.e. cluster) configuration makes things more stable for your app, although Apache still needs to advance ActiveMQ's high availability support before things work really smoothly.
Note that a KahaDB restriction limits the load balancing / fault tolerant configurations that are possible. The current Apache ActiveMQ high availability options are described at the following link:
http://activemq.apache.org/clustering.html
However, since KahaDB has a file lock restriction, the following tweaks / alternative configurations can be used:
1) Shared File System Master Slave - a shared file system such as a SAN
http://activemq.apache.org/shared-file-system-master-slave.html
2) JDBC Master Slave - a shared database
http://activemq.apache.org/jdbc-master-slave.html
3) Replicated LevelDB Store - a ZooKeeper server
http://activemq.apache.org/replicated-leveldb-store.html
Over & above by having JCA connectors,- AS like JBoss, Weblogic, Websphere, Geronimo, Glassfish,- ActimeMQ patching as a kind of Resource Adapter can be done. And with Apache Camel (karaf), JBoss Fuse ESB kind of products HA & clustering of ActiveMQ can be done.

Related

uWSGI and Flask: keep objects in memory between requests

My stack is currently uWSGI, Flask and nginx. I need to store data between requests (basically I receive push notifications from another service about events and I want to store those events in the server's memory, so the client can just query the server every n milliseconds to receive the latest update).
Normally this would not work, because of many reasons. One is a good deployment requires you to have several processes in uwsgi in production (and even maybe several machines to scale this out). But my case is very specific: I'm building a web app for a piece of hardware (You can think of your home router configuration page as a good example). This means no need to scale. I also do not have a database (at least not a traditional one) and probably normally 1-2 clients simultaneously.
If I specify --processes 1 --threads 4 in uWSGI, is this enough to ensure the data is kept in memory as a single instance? Or do I also need to use --threads 1?
I'm also aware that some web servers clear memory randomly from time to time and restart the hosted app. Does nginx/uwsgi do that and where can I read about the rules?
I'd also welcome advice on how to design all of this, if there are better ways to handle it. Please note that I do not consider using any persistent storage for this - it is not worth the effort and may even be impossible due to hardware limitations.
Just to clarify: When I'm talking about one instance of data, I'm thinking of my app.py executing exactly one time and keeping the instances defined there for as long as the server lives.
If you don't need data to persist past a server restart, why not just build a cache object into your application that can do push and pop operations?
A simple array of objects should suffice, one flask route pushes new data to the array and another can pop the data off the array.

BizTalk 2016 Singleton

We have a flow that processes incoming bank statements from an external system to SAP. The process itself is rather simple
Receive bank statement via SFTP
Send to SAP via FTP
Call SAP RFC with filename as parameter
This all happens in an orchestration and on BizTalk side it's working fine.
Now, they have noticed that SAP has some issues when too many bank statements arrive at the same time. So we need to redesign the orchestration so it handles them 1-by-1.
So, my first thought was to redesign it as a Singleton orchestration to solve this issue. Does anybody have some other suggestions to fix this issue?
The messages don't need to be processed in a specific order. Just slower. :-)
I'm just a bit afraid of the possible side effects of a Singleton.
You could consider putting the port on a dedicated host and configure resource based throttling.
If that doesn't suit, consider a Resource Dispenser pattern which is described here:
https://social.technet.microsoft.com/wiki/contents/articles/23924.biztalk-server-resource-dispenser-send-port-edition.aspx
Basically, messages are queued for a limited number of receiving service instances, and the destination send ports are set to ordered delivery, so each will have only one active instance.

Highly configurable and efficient ESB / SOA / integration framework

My plan is to develop or use a Java-based integration framework (ESB, SOA, whatever) that deals with services, with the following constraints:
a Service can be deployed on multiple machines but doesn't have to be present on every one of them
a Service can be deployed and re-deployed (with a newer version) separately
a Service is connected to other services either by:
in-memory connections
(async / sync) remoting to other machines
the routing logic of the Service connections should be configurable on the fly, without re-deploying or restarting anything
I know that OpenESB is close to these requirements, however it requires redeployment of the service to change the routing (suppose the connections are HTTP BC based), but I'm unfamiliar in this regard with MuleESB, WSO2, JBossESB, whatever open source ESB... Is there any good solution for this (e.g. configurable in-memory and/or remoting routing)? I don't really care about clustering as I plan to use the servers separately, and the designated (if required) JMS solution would be HornetQ if that matters.
You mention several different concepts, but a combination of an ESB pattern, an Apache load balancer and Maven should get you close. Do not get too hung up on the product; settle on a paradigm/pattern and the decision of the product will be easy - it either does things the way you like or it does not.
Here is the pattern I use.
SOA Design Patterns
This may also interest you SOA for executives
Cheers
After a long discussion about the pros and cons, we are going to have a HornetQ-based (JMS MQ) solution, where we create message routing rules and, where needed, processing code that handles the different kinds of routing. HornetQ is able to handle the in-JVM requirement too, but that part will be covered under the hood.
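To make the "message routing rules" idea a bit more concrete, here is a minimal JMS sketch (the queue names and the routing property are hypothetical, and the same code works against HornetQ or any other JMS broker): a small consumer reads from an inbound queue and forwards each message to a destination looked up from a routing table, so the routes can be changed at runtime without redeploying the services themselves.

```java
import javax.jms.*;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RoutingConsumer implements MessageListener {
    private final Session session;
    private final MessageProducer producer;
    // Routing table that can be updated at runtime (e.g. from a config service).
    private final Map<String, Destination> routes = new ConcurrentHashMap<>();

    public RoutingConsumer(Connection connection) throws JMSException {
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        producer = session.createProducer(null); // destination is chosen per message

        // Hypothetical routes: logical service name -> target queue.
        routes.put("billing", session.createQueue("billing.in"));
        routes.put("reporting", session.createQueue("reporting.in"));

        MessageConsumer consumer = session.createConsumer(session.createQueue("router.in"));
        consumer.setMessageListener(this);
    }

    @Override
    public void onMessage(Message message) {
        try {
            // Hypothetical message property carrying the logical target service.
            String target = message.getStringProperty("targetService");
            Destination destination = routes.get(target);
            if (destination != null) {
                producer.send(destination, message);
            }
            // else: dead-letter or log, depending on your error handling policy.
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}
```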

Logging across multiple web servers

I would like to know how people deal with logging across multiple web servers. E.g. assume there are 2 web servers and some events during a user's session are serviced from one, some from the other. How would you go about logging events from the session coherently in one place (without, e.g., creating single points of failure)? Assume we are using: ASP.NET MVC, log4net.
Or am I looking at this the wrong way - should I log separately and then merge later?
Thanks,
S
UPDATE
Please also assume that the load balancers will not guarantee that a session is stuck to one server.
You definitely want your web servers to log locally rather than over a network. You don't want potential network outages to prevent logging operations and you don't want the overhead of a network operation for logging. You should have log rotation set up and all your web servers' clocks synced. When log rotation rolls your log files over to a new file, have the completed log files from each web server shipped to a common destination where they can be merged. I'm not a .NET guy but you should be able to find software out there to merge IIS logs (or whatever web server you're using). Then you analyze the merged logs. This strategy is optimal unless you need real-time log analysis. Do you? Probably not. It's fairly resilient to failures (assuming you have redundant disks) because if a server goes down, you can just reboot it and reprocess any log ship, log merge or log analysis operations that were interrupted.
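As a small illustration of the merge step (a hedged sketch only - the file names are made up and it assumes each line starts with an ISO-8601 timestamp, so lexicographic order equals chronological order):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LogMerger {
    public static void main(String[] args) throws IOException {
        // Hypothetical shipped log files, one per web server.
        List<Path> shippedLogs = List.of(Path.of("web1.log"), Path.of("web2.log"));

        List<String> merged = new ArrayList<>();
        for (Path log : shippedLogs) {
            merged.addAll(Files.readAllLines(log));
        }

        // Because the servers' clocks are synced and each line starts with a
        // timestamp, a plain sort interleaves the events chronologically.
        Collections.sort(merged);

        Files.write(Path.of("merged.log"), merged);
    }
}
```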
An interesting alternative solution: have two log appenders.
The first one logs to the local machine - in case of a network failure you'll still keep this log.
The second one logs remotely to a Unix syslog service (over a very reliable network connection, of course).
I used a similar approach a long time ago, and it worked really well; there are a lot of nice tools for analyzing Unix logs.
Normally your load balancing would lock the user to one server after the session is started. Then you wouldn't have to deal with logs for a specific user being spread across multiple servers.
One thing you could try is to have the log file in a location that is accessible by all web servers and have log4net configured to write to it. This may be problematic, however, with multiple processes trying to write to the same file. I have read about NLog which may work better in this scenario.
Also, the log4net FAQ has a question and possible solution to this exact problem

Are there any open standards for server failover?

I'm building a client-server application and I am looking at adding failover to the client so that when a server is down it will try to connect to another available server. Are there any standards or specifications covering server failover? I'd rather adopt an existing standard than implement my own mechanism.
I don't think there is one, or that there needs to be. It's pretty straightforward and all depends on how you can connect to your server, but basically you need to keep sending pings/keepalives/heartbeats (whatever you want to call them), and when a failure occurs (or n failures in a row, if you want) flip a switch in your config.
Typically, the above would be running as a separate service on the client machine. Alternatively, you could create a method execution handler which handles the execution of all server calls you make, and on a communication failure, in your 'catch' block, flip your switch in config.
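A rough sketch of that idea (hypothetical hosts and thresholds; the health check here is just a TCP connect, not any formal standard):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;

public class FailoverMonitor {
    private final List<InetSocketAddress> servers; // primary first, then backups
    private volatile InetSocketAddress active;

    public FailoverMonitor(List<InetSocketAddress> servers) {
        this.servers = servers;
        this.active = servers.get(0);
    }

    public InetSocketAddress getActive() {
        return active;
    }

    /** Ping the active server; after maxFailures consecutive failures, switch to the next one. */
    public void run(int maxFailures, long intervalMillis) throws InterruptedException {
        int failures = 0;
        while (true) {
            if (isAlive(active)) {
                failures = 0;
            } else if (++failures >= maxFailures) {
                int next = (servers.indexOf(active) + 1) % servers.size();
                active = servers.get(next); // "flip the switch" to the next server
                failures = 0;
            }
            Thread.sleep(intervalMillis);
        }
    }

    private boolean isAlive(InetSocketAddress server) {
        try (Socket socket = new Socket()) {
            socket.connect(server, 2000); // 2 second connect timeout as the heartbeat
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```

Server calls would then use getActive() to pick the endpoint, which is the "switch in your config" from the answer above.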
Your question is very general. Here are some general answers:
Google for Fault Tolerant Computing
Google for High Availability Solutions
This is usually handled at either the load balancer or the server level. This isn't something you normally do in code at the client.
Typically, you multihome the servers, each having its own IP plus one that is shared between all of them. Further, they communicate with each other over TCP for the heartbeat, to know which is the active node in an active/passive cluster.
I can't tell what type of servers you have, but most of the windows servers can do this natively.
You might consider asking the question at serverfault to see how to properly configure your servers to support this.
