Migrating to Axon Server

Currently I am running Axon Framework and using PostgreSQL as the event store. I am investigating how to scale this horizontally, but adding nodes via the k8s HPA functionality regularly results in errors:
org.axonframework.modelling.command.ConcurrencyException: An event for aggregate [x] at sequence [y] was already inserted
I have read that Axon Server enables this type of scaling, but I cannot find anything on migrating the current event store to Axon Server. Is this possible?

Well, it seems my Google skills are lacking. I found the process here:
https://docs.axoniq.io/reference-guide/axon-server/migration/non-axon-server-to-axon-server
This only covers migration to Axon Server EE, though, so I will have to dig more, I guess.
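For anyone landing here later: once the events themselves have been migrated, pointing an Axon Framework 4 application at Axon Server instead of PostgreSQL is mostly a configuration change. A minimal sketch for a Spring Boot setup, assuming the axon-server-connector dependency is on the classpath (the hostname below is a placeholder):

# application.properties - tell the framework where Axon Server lives
axon.axonserver.servers=axonserver:8124

With the connector present, Axon Framework uses Axon Server for both event storage and message routing; the idea is that commands for the same aggregate are then routed consistently to a single instance, which is what avoids the ConcurrencyException above when scaling out.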

Related

Processing Requests in the Background

I'm writing a REST API using ASP.NET Core and Microsoft SQL Server. One of my requirements is that clients will POST certain data to this API, and the API will have to transform/process the data in some way before it is used or read. It turns out this processing is costly, so I'm thinking of doing it asynchronously in the background, without blocking the POST request. I'm considering doing the processing:
In a scheduled SQL job
Using a separate Windows service running in the background that reads from the DB, does the processing, and writes back to it. It'll be slower than the SQL job, I presume, but the code will be more readable.
Using Hangfire. I've never used it, so I'm not sure how well it works.
What are the best options for this? Are there any best practices around this kind of thing?
Boilerplate
Store that data somewhere (RDBMS, NoSQL, etc.)
Respond to the user that their data has been scheduled for processing
Run some worker or pool of workers for job processing
Store the result somewhere
Notify the client that the background job is complete (could be just a GET /jobs/id endpoint which the client can check)
Show the result
You can use your own daemon, process, or script. If that's not enough and you need more features, use Hangfire, which looks solid.
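The pattern itself is language-agnostic. Here is a minimal in-memory sketch of the boilerplate above (in Java, purely for illustration; all names are made up, and a real implementation would persist jobs in the database so they survive restarts):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the accept/queue/poll boilerplate; not production code.
public class JobService {
    enum Status { PENDING, DONE }

    private final Map<String, Status> jobs = new ConcurrentHashMap<>();
    private final Map<String, String> results = new ConcurrentHashMap<>();
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // POST handler: store the data, schedule processing, return a job id immediately.
    public String submit(String payload) {
        String jobId = UUID.randomUUID().toString();
        jobs.put(jobId, Status.PENDING);
        workers.submit(() -> {
            String transformed = payload.toUpperCase(); // stand-in for the costly transform
            results.put(jobId, transformed);
            jobs.put(jobId, Status.DONE);
        });
        return jobId; // the client polls GET /jobs/{id} with this
    }

    // GET /jobs/{id} handler: report status, and the result once it is done.
    public String poll(String jobId) {
        Status status = jobs.get(jobId);
        if (status == null) return "unknown job";
        return status == Status.DONE ? "done: " + results.get(jobId) : "pending";
    }
}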
I have been using Hangfire in production for almost 3 years, and yes, it is a great option: a retry policy out of the box, a UI dashboard. But there are extra options, such as:
Serverless (Azure Functions, AWS Lambda)
AWS SQS or Azure Queue + hosted services (see the docs)
Another option I've found is to implement IHostedService, a built-in interface in ASP.NET Core. See this page for details.

Ruby on Rails - Conditional database connection depending on action

I am no master, but I have been using Ruby on Rails for quite a few years now and consider myself well-versed in it. Additionally, I have been working as a web developer for the last 10 years, starting with .NET.
In .NET we used to manually create a database connection before firing any query or making a transaction. Rails, on the other hand, while spawning a new thread for a request, fires off a bag of initialization processes which includes setting up a database connection.
Now we are working on a project where we may not need a DB connection for every action. Is it somehow possible to override the default DB connection behavior and do it action-wise (a before_filter, maybe)?
PS: Another way I thought of is creating an additional Sinatra web application which houses all such actions, and using that instead to do the work or get the data.
Ehm, where did you read that Rails sets up a database connection for every request? My understanding is that a connection is checked out from the connection pool when needed.
Also, I'm surprised this is a big issue! If you don't need to hit the database (which implies no authentication, right?), then you should be caching the entire response, server-side and client-side.
Check out the guide on caching: http://guides.rubyonrails.org/caching_with_rails.html and Dalli: https://github.com/mperham/dalli
Separating the client app from the data layer (so Rails on top of an API) is a nice architecture I've used for a project with success. I'd suggest Grape instead of Sinatra, however.

MeteorJS - how real-time is it, really?

I have had a chance to play with this tool for a while now and made a chat application instead of a hello world. My project has 2 Meteor applications sharing the same Mongo database:
client
operator
When I type a message from the operator console, it sometimes takes as much as 7-8 seconds to appear for the subscribed client. So my question is... how much real time can I expect from Meteor? Right now I see better results with other services such as PubNub or Pusher.
Could the delay come from the fact that it's 2 applications subscribed to the same DB?
P.S. I need 2 applications because the client and operator apps are totally different, mostly in design and media libraries (CSS/jQuery plugins, etc.), which is the only way I found to make the client app much lighter.
If you use two databases without DDP, your apps are not going to operate in real time. You should either use one complete app or use DDP to relay messages to the other instance (via Meteor.connect).
This is a bit of an issue at the moment if you want to do the subscription on the server, as there isn't really server-to-server DDP support with subscriptions yet. So you need to use the client to make the subscription:
connection = Meteor.connect("http://YourOtherMeteorInstanceUrl");
connection.subscribe("messages");
Instead of
Meteor.subscribe("messages");
In your client app, of course using the same subscription names as you do for the corresponding publish functions on the other Meteor instance.
Akshat's answer is good, but there's a bit more explanation of why:
When Meteor is running it adds an observer to the collection, so any changes to data in that collection are immediately reactive. But, if you have two applications writing to the same database (and this is how you are synchronizing data), the observer is not in place. So it's not going to be fully real-time.
However, the server does regularly poll the database for outside changes, hence the 7-8 second delay.
It looks like your applications are designed this way to overcome the limitation Meteor has right now where all client code is delivered to all clients. Fixing this is on the roadmap.
In the meantime, in addition to Akshat's suggestion, I would also recommend using Meteor methods to insert messages. Then, from the client application, use Meteor.call('insertMessage', options, ...) to add messages via DDP, which will keep the application real-time.
You would also want to separate the databases.

Migrating from IBM MQ to javax.jms.* (Weblogic)

I've been looking for days into how one could migrate from IBM WebSphere MQ to using only the QueueManager within WebLogic 10.3.x server. This would save the cost of licenses for IBM MQ. The closest I came was finding an external link which stated that IBM examples existed that did something similar (moving away from MQ to standard JMS libraries), but when I attempted to follow the link: http://www.developer.ibm.com/isv/tech/sampmq.html
it led to a dead page :\
More specifically, I am interested in:
What classes to use in my attempts to replace the following com.ibm.mq.* classes:
MQEnvironment
MQQueueManager
MQGetMessageOptions
MQPutMessageOptions
and other classes which don't have an obvious javax.jms.* alternative.
Some of the nuances & workarounds I may encounter in this migration process.
The database we are forwarding the queue messages to is Oracle 11 Standard (with Advanced Queuing), if that changes anything; so basically we are looking to "cut out the middle-man", so to speak. Your learned responses will be highly appreciated!
You seem to be using the MQI API for MQ, for which there is no drop-in replacement. There is no way around actually rewriting your MQ application logic to use the JMS API.
A good approach might be to first migrate to JMS while still using the same WebSphere MQ server, since that allows you to verify your results in a reliable way.
You ask which classes replace, say, MQGetMessageOptions. There is no single 1-to-1 replacement (there are even some aspects of MQI that JMS cannot easily replace). Most of the put/get options are available by setting parameters on sessions and messages in JMS. You really need to understand the JMS API before attempting this switch.
Then, when you have JMS working with WebSphere MQ, you can do as Beryllium suggests: swap the libraries to WebLogic, switch any reference to com.ibm.mq.jms.MQConnectionFactory, configure the new parameters, pray to any available god, and press run :)
I have completed an application which supported both JBossMQ and MQSeries/WebSphere MQ.
The MQSeries-specific classes I required were:
import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.mq.jms.MQTopicConnectionFactory;
These were sufficient to create the javax.jms.QueueConnection/TopicConnection.
As for WebSphere MQ, I connected directly.
As for JBossMQ I looked up the factories using JNDI.
So on top of this there is only JMS.
So the first step is to rewrite your application so that only the initializing part uses WebSphere MQ-specific classes (the ones I have listed above).
Replace the remaining MQ-specific part with a JNDI/directory lookup for a queue connection factory provided by your application server.
Remove the MQ-specific parts from your source.
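As a sketch of those steps, the provider-specific initialization can be isolated behind one factory class, so everything else depends on javax.jms.* only (the host, port, channel, queue manager, and JNDI name below are placeholders):

import javax.jms.QueueConnectionFactory;
import javax.naming.InitialContext;
import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQQueueConnectionFactory;

// Only this class knows which provider is in use.
public class ConnectionFactoryProvider {

    // WebSphere MQ: connect directly (as I did above).
    static QueueConnectionFactory forWebSphereMQ() throws Exception {
        MQQueueConnectionFactory factory = new MQQueueConnectionFactory();
        factory.setHostName("mqhost.example.com");
        factory.setPort(1414);
        factory.setChannel("SYSTEM.DEF.SVRCONN");
        factory.setQueueManager("QM1");
        factory.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
        return factory;
    }

    // Application server (WebLogic, JBossMQ, ...): look the factory up via JNDI instead.
    static QueueConnectionFactory fromJndi() throws Exception {
        InitialContext ctx = new InitialContext();
        return (QueueConnectionFactory) ctx.lookup("jms/QueueConnectionFactory");
    }
}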
Here is a simple example which shows how to send a message.
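Something along these lines, assuming the connection factory and queue are bound in JNDI under the hypothetical names jms/QueueConnectionFactory and jms/ExampleQueue:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class JmsSender {
    public static void main(String[] args) throws Exception {
        // Everything here is plain javax.jms - no provider-specific classes.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/QueueConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/ExampleQueue");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("Hello from JMS");
            producer.send(message);
        } finally {
            connection.close(); // also closes the session and producer
        }
    }
}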

designing a distributed (over many servers) error logging feature, WCF or?

I am designing an error logging feature so our servers (each doing different things) can have a central data store for logging errors.
Would it be a good idea to have the various applications write to the error log using a WCF service, or is that a bad idea?
They could do it just via ADO.NET to the database, which I think is the simpler route.
How about having a look at syslog? It was made for exactly that purpose.
I'd say just log to your local data store. The advantages are:
Speed - it's pretty rapid to just dump your chosen error report to an existing data connection.
Traceability - What happens if you have an error in your service? You lose all ability to chase down errors on all servers.
Simplicity - If you change the endpoint for your errors service, you have to update every other application that uses the error service.
Reporting - Do you really want to trawl through error reports from tens/hundreds of applications in one place when you could easily find them in the data store local to the app?
Of course, any of these points could be viewed from the other side; these are just my opinions.
We're looking at a similar approach, except for audit logging as well as error handling.
We're looking at using WCF over netTcp, and also at using the event log, but that seems to require high-trust settings and may have performance issues.
I'm not convinced by ZombieSheep's objections:
It's pretty rapid to dump your chosen error report over an existing WCF connection. Seriously. Plus, you can do it async/queued. Not a key factor for me.
You log to the central service and to the local store. When the error service comes back online, you poll your machines for events since the last timestamp. Problem solved.
Use a DNS alias, and don't change the path - that's how you should do internal addressing anyway, IMO.
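To make the async/queued point concrete, here is a minimal sketch (in Java, purely for illustration; the names are made up): errors are queued locally and a background thread forwards them to the central store, so a slow or unavailable central service never blocks the application, and a local copy is always kept for traceability.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of async/queued error forwarding with a local fallback; not production code.
public class AsyncErrorLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(10000);

    public AsyncErrorLogger() {
        Thread forwarder = new Thread(this::drain, "error-forwarder");
        forwarder.setDaemon(true); // never keeps the application alive
        forwarder.start();
    }

    // Called from application code; never blocks on the central service.
    public void log(String error) {
        if (!queue.offer(error)) {
            // Queue full (central store slow or down): fall back to local-only logging.
            logLocally(error);
        }
    }

    private void drain() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                String error = queue.take();
                logLocally(error);         // local copy first, for traceability
                sendToCentralStore(error); // e.g. a WCF call or a plain DB insert
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    private void sendToCentralStore(String error) { /* remote call goes here */ }

    private void logLocally(String error) { System.err.println(error); }
}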
What if you have multiple apps on a single machine? What if you want to see the timing of errors across multiple apps?
