I am working on an application that is proposed to be a set of webapps (each called an agent) running on Tomcat 7 on different nodes. I have been tasked with making these webapps (agents) discover each other automatically. The idea is that each webapp (say agent X), once up, will communicate a 'request pattern' to all the other webapps. The other webapps (say agents A, B, C) will in turn store this information (the 'request pattern') and use it to route any matching request to agent X over an HTTP call.
I am looking for an option wherein each webapp has a component listening on a particular port, and agent X, while registering itself, sends a multicast request to all the nodes on that port.
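To make it concrete, here is a rough sketch of the announce/listen pair I have in mind (the group address, port, and payload format are made up for illustration):

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;
    import java.nio.charset.StandardCharsets;

    public class AgentDiscovery {
        static final int PORT = 4446;                       // illustrative port

        static InetAddress group() throws Exception {
            return InetAddress.getByName("230.0.0.1");      // illustrative group
        }

        // Agent X calls this once its webapp is up.
        static void announce(String requestPattern) throws Exception {
            byte[] payload = requestPattern.getBytes(StandardCharsets.UTF_8);
            try (MulticastSocket socket = new MulticastSocket()) {
                socket.send(new DatagramPacket(payload, payload.length, group(), PORT));
            }
        }

        // Every agent runs this in a background thread to collect patterns.
        static void listen() throws Exception {
            try (MulticastSocket socket = new MulticastSocket(PORT)) {
                socket.joinGroup(group());   // deprecated since Java 14, still functional
                byte[] buffer = new byte[1024];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);
                    String pattern = new String(packet.getData(), 0, packet.getLength(),
                            StandardCharsets.UTF_8);
                    // store pattern -> sender address for later HTTP routing
                    System.out.println(packet.getAddress() + " -> " + pattern);
                }
            }
        }
    }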
I think Apache Camel might be useful here, but I am not sure.
It would be great if somebody could comment on the technical viability of this approach, or suggest any alternatives.
My first thought was that you could use Apache httpd with mod_proxy_balancer to balance all requests over the available nodes. You can define a different balancer for each kind of agent. Requests are sent to the balancer, and the balancer routes each one to an available node.
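A minimal sketch of such a balancer in httpd.conf (the node URLs are made up; requires mod_proxy, mod_proxy_http, and mod_proxy_balancer, plus an lbmethod module on httpd 2.4):

    <Proxy "balancer://agents">
        BalancerMember "http://node1:8080/agent"
        BalancerMember "http://node2:8080/agent"
    </Proxy>
    ProxyPass "/agent" "balancer://agents"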
This is more of a messaging problem than a routing problem. Add Camel only if you need complex routing or have to adapt to legacy protocols.
This looks like a classic publish-and-subscribe use case. You can do it with any messaging technology. Look at JMS (ActiveMQ is what Camel uses) or AMQP (I've used RabbitMQ very successfully for this); both use the "topic" paradigm for this. A quick search found this as an example: http://jmsexample.zcage.com/index2.html. Or Jabber.
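For instance, a rough sketch with plain JMS and ActiveMQ (the broker URL, topic name, and pattern are made up; in a real setup the consumer and the producer would live in different agents):

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.jms.Topic;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PatternBroadcast {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://broker-host:61616"); // illustrative
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("agent.registrations");         // illustrative

            // Subscriber side: every agent stores patterns announced by others.
            MessageConsumer consumer = session.createConsumer(topic);
            consumer.setMessageListener(message -> {
                try {
                    String pattern = ((TextMessage) message).getText();
                    // store pattern -> agent mapping for HTTP routing
                    System.out.println("registered pattern: " + pattern);
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            });

            // Publisher side: agent X announces its pattern once it is up.
            MessageProducer producer = session.createProducer(topic);
            producer.send(session.createTextMessage("/orders/*"));            // illustrative
        }
    }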
Julian
There's an IIB HTTP SOAP service exposed to multiple channels - the service has 4 operations and one of them is being consumed very frequently by a particular channel (less than 1 transaction per second).
Is there any way within IBM Integration Bus (at the broker or service level) to limit the number of HTTP requests per channel (IP address) to 1 or n transactions per second?
You could implement it manually using the standard facilities of IIB, but rate limiting is an API management feature and is best implemented using the out-of-the-box features of IBM API Connect. It works well with IIB, by the way.
As already suggested above, this kind of logic should be implemented outside of IIB if you need it globally.
At the IIB level you can configure many things, such as the maximum number of connections, but there is no built-in logic to maintain this kind of pool per user.
The best solution, in my opinion, is to use a network component specialized in this kind of logic. In my case, I implemented this rule on the load balancer I have in front of my IIB server. A proxy could probably also do it.
For your specific case, if this is the only place where you need such logic, you could also consider creating a different entry point for each application. If this is SOAP, and the consumers currently call /kimbertService/, you could have multiple SOAPInput nodes with routes such as /kimbertService/App1, /kimbertService/App2, and /kimbertService/App3; then you can be sure that App1 will never block App2.
IIB has a throttling feature that limits the number of messages processed through a given message flow per second.
For example, to set the maximumRateMsgsPerSec property for a message flow included in an application, you can use the following sample code:
    mqsiapplybaroverride -b BARfile -k applicationName -m sampleFlow#maximumRateMsgsPerSec=100
You can also do it through workload management policies by using the IIB web user interface.
Below is the link:
https://www.ibm.com/support/knowledgecenter/en/SSMKHH_9.0.0/com.ibm.etools.mft.doc/bj58270_.htm
A WORKAROUND SOLUTION
The ideal solution, as others have mentioned, would be to have an API management gateway sitting in front of IIB to manage your APIs.
Now, a workaround solution could be the following:
1) Duplicate your main service flow so that you have two different message flows. These are your back-end flows performing the same work, but on one of them you can enable throttling.
2) Build a new router IIB flow that takes HTTP requests from consumers. This flow identifies the requester and routes the request to the appropriate back-end flow, along the lines of the sketch below.
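As a hypothetical sketch, the routing decision in the router flow's Compute node could look like this (the caller IP and back-end URLs are made up; the node's Compute mode must include LocalEnvironment for the override to reach the HTTPRequest node):

    -- Route the chatty consumer to the throttled back-end flow; everyone
    -- else goes to the normal one. X-Remote-Addr is the pseudo-header the
    -- HTTPInput node adds with the caller's IP address.
    CREATE COMPUTE MODULE Router_Compute
      CREATE FUNCTION Main() RETURNS BOOLEAN
      BEGIN
        SET OutputLocalEnvironment = InputLocalEnvironment;
        SET OutputRoot = InputRoot;
        IF InputRoot.HTTPInputHeader."X-Remote-Addr" = '10.0.0.42' THEN
          SET OutputLocalEnvironment.Destination.HTTP.RequestURL =
              'http://localhost:7800/kimbertService/throttled';
        ELSE
          SET OutputLocalEnvironment.Destination.HTTP.RequestURL =
              'http://localhost:7800/kimbertService/normal';
        END IF;
        RETURN TRUE;
      END;
    END MODULE;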
Hope this helps.
I am new to NServiceBus and have recently started working with it. I am stuck on a point and need input from you guys. I have 2 ASP.NET Core Web API projects, and I want to use NServiceBus to send messages between them in some scenarios.
What I have found so far is that I can provide a name to EndpointConfiguration. But what if one of my APIs is deployed on one server and the second on another server; what should my configuration look like in that case?
I tried giving a URL instead of a name in EndpointConfiguration, but it threw an exception.
Thanks in advance for your help
NServiceBus endpoints communicate over whatever messaging infrastructure your system is using. Endpoint names represent the queues that messages are sent to, not machine addresses, which is why a URL is rejected. The messaging infrastructure is abstracted by what NServiceBus calls a Transport. You will need to decide on the transport you'd like to use (see the options here). Once you've decided which transport your solution will use, you could have a look at the samples for that specific transport to get an idea of how to set up your endpoints.
For example, if you decide to use Azure Service Bus as your transport, you could download and try the Send/Reply sample.
A good starting point could be the tutorials available on the documentation site here.
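To make the queue-name idea concrete, here is a minimal sketch in the NServiceBus 7 style (the endpoint names and message type are made up, and LearningTransport is only a local stand-in; in production you would configure a real transport with a connection string pointing at the shared broker both servers can reach):

    using System.Threading.Tasks;
    using NServiceBus;

    public class PlaceOrder : ICommand { }

    public static class Program
    {
        public static async Task Main()
        {
            // The endpoint name is a queue name on the transport, not a URL.
            var endpointConfiguration = new EndpointConfiguration("ApiA");

            // Stand-in transport for local experiments; swap in RabbitMQ,
            // Azure Service Bus, etc. and set its connection string so both
            // servers talk to the same broker.
            var transport = endpointConfiguration.UseTransport<LearningTransport>();

            // Route the message type to the *logical* name of the other
            // endpoint ("ApiB"), wherever that endpoint is physically deployed.
            var routing = transport.Routing();
            routing.RouteToEndpoint(typeof(PlaceOrder), "ApiB");

            var endpointInstance = await Endpoint.Start(endpointConfiguration);
            await endpointInstance.Send(new PlaceOrder());
            await endpointInstance.Stop();
        }
    }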
Twilio and other HTTP-driven web services have the concept of a fallback URL, where the web service sends a GET or POST to a URL of your choice if the main URL times out or otherwise fails. In the case of Twilio, they will not retry the request if the fallback URL also fails. I'd like the fallback URL to be hosted on a separate machine so that the error doesn't get lost in the ether if the primary server is down or unreachable.
I'd like some way for the secondary to:
Store requests to the fallback URL
Replay the requests to a slightly different URL on the primary server
Retry #2 until success, then delete the request from the queue/database
Is there some existing piece of software that can do this? I can build something myself if need be, I just figured this would be something someone would have already done. I'm not familiar enough with HTTP and the surrounding tools (proxies, reverse proxies, etc.) to know the right buzzword to search for.
There are a couple of possibilities.
One option is to use the Common Address Redundancy Protocol, or carp. A brief description from the man page follows.
"carp allows multiple hosts on the same local network to share a set of IP addresses. Its primary purpose is to ensure that these addresses are always available, but in some configurations carp can also provide load balancing functionality."
It should be possible to configure IP balancing such that when the primary or master HTTP service fails, the secondary or backup HTTP service becomes the master. carp is host-oriented rather than application-service-oriented, so when the HTTP service goes down, something must also take down the network interface for carp to do its thing. This means you would need more than one IP address on each machine in order to log in and do maintenance. You would also need a script to perform the follow-up actions once the original service comes back online.
The second option is to use nginx. This is probably better suited to what you are trying to do.
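nginx can mark one upstream server as a backup that only receives traffic when the primary fails. A minimal sketch (hostnames and ports are made up):

    upstream service {
        server primary.example.com:8080;
        server fallback.example.com:8080 backup;  # used only when the primary is down
    }
    server {
        listen 80;
        location / {
            proxy_pass http://service;
            # treat errors and timeouts as a reason to try the next upstream
            proxy_next_upstream error timeout http_502 http_503;
        }
    }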
Many years ago I needed something similar to what you are trying to do, and I ended up hacking together something that did it. Essentially it was a switch: when 'A' fails, switch over to 'B'. The re-sync process was to take time-stamped logs from 'B' and play them back to 'A' once 'A' was back online.
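A rough sketch of that replay idea in Java (the queue is in-memory here; in practice you would persist the captured requests, and the replay URL is illustrative):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Requests captured on the fallback host are queued and replayed
    // against the primary until it accepts them.
    public class ReplayWorker implements Runnable {
        private final BlockingQueue<String> captured = new LinkedBlockingQueue<>();
        private final HttpClient client = HttpClient.newHttpClient();
        private final URI primary = URI.create("https://primary.example.com/replay");

        public void capture(String requestBody) {
            captured.add(requestBody);   // called by the fallback URL handler
        }

        @Override
        public void run() {
            try {
                while (true) {
                    String body = captured.take();   // oldest request first
                    while (!deliver(body)) {
                        Thread.sleep(5_000);         // back off, then retry
                    }
                    // a delivered request simply isn't re-queued: "deleted"
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        private boolean deliver(String body) {
            try {
                HttpRequest request = HttpRequest.newBuilder(primary)
                        .POST(HttpRequest.BodyPublishers.ofString(body))
                        .build();
                HttpResponse<Void> response =
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                return response.statusCode() < 300;
            } catch (Exception e) {
                return false;                        // primary still unreachable
            }
        }
    }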
I've read of an ESB being used as a SOA approach. What are some other approaches?
This is a very broad question; you may want to focus it.
If you are asking about approaches to use instead of an ESB, then you may consider direct access to services rather than a service bus.
This approach is often used with a directory or lookup service, such as UDDI, to look up the service endpoint location.
When using an ESB, you send the message to the ESB, which is responsible for routing it to the service provider.
When using direct access, the client must know the address of the service provider in advance and sends the message directly to it.
When using a lookup service, you first query it for the address of the service provider (much like using DNS to look up IP addresses), and then use this address to send the message to the service provider.
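As a hypothetical illustration of the lookup-then-direct-call pattern (a real registry would be UDDI, Consul, Eureka, etc.; here a plain Map stands in, and the service name and URL are made up):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Map;

    public class DirectAccessClient {
        private static final Map<String, URI> REGISTRY = Map.of(
                "order-service", URI.create("http://orders.internal:8080/api"));

        public static void main(String[] args) throws Exception {
            // Step 1: look up the provider's address by logical service name.
            URI endpoint = REGISTRY.get("order-service");

            // Step 2: call the provider directly -- no bus in between.
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(endpoint).GET().build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }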
Beyond addressing and routing, the ESB may provide other functions that you lose (or have to implement another way) if you use the direct access approach:
multicast routing - sending the request to more than one service provider
content-based routing - deciding which service provider should receive the request, based on the content of the request
central logging
central policy enforcement
load balancing / fault tolerance
format or protocol translation
buffering and asynchronous service invocation
First, ask yourself which SOA philosophy you are adhering to. If you are in the IBM camp, there are 4 different products that provide ESB functionality. Each product is optimized for different scenarios, but basically each one performs similar functions.
Think of it this way: SOA == a car. IBM is one manufacturer. Different products == different types of cars for different types of drivers.
I'm building a client-server application and I am looking at adding failover to the client so that when a server is down it will try to connect to another available server. Are there any standards or specifications covering server failover? I'd rather adopt an existing standard than implement my own mechanism.
I don't think there is, or needs to be, any. It's pretty straightforward and all depends on how you connect to your server, but basically you need to keep sending pings/keepalives/heartbeats (whatever you want to call them), and when a failure occurs (or n failures in a row, if you want), flip a switch in your config.
Typically, the above would run as a separate service on the client machine. Alternatively, you could create a method execution handler that wraps all the server calls you make and, on a communication failure in your 'catch' block, flips the switch in your config.
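For illustration, a minimal sketch of that heartbeat-plus-switch idea in Java (hostnames, port, timeout, and failure threshold are all made up):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.util.List;

    // Probe the current server and, after N consecutive failures, switch
    // to the next one in the list; clients read activeServer() before calls.
    public class FailoverMonitor {
        private final List<InetSocketAddress> servers = List.of(
                new InetSocketAddress("server-a.example.com", 9000),
                new InetSocketAddress("server-b.example.com", 9000));
        private volatile int active = 0;        // the "switch" in the config
        private int consecutiveFailures = 0;
        private static final int MAX_FAILURES = 3;

        public InetSocketAddress activeServer() {
            return servers.get(active);
        }

        public void heartbeatOnce() {
            try (Socket probe = new Socket()) {
                probe.connect(activeServer(), 2_000);   // 2s timeout is the "ping"
                consecutiveFailures = 0;
            } catch (IOException e) {
                if (++consecutiveFailures >= MAX_FAILURES) {
                    active = (active + 1) % servers.size();  // flip the switch
                    consecutiveFailures = 0;
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            FailoverMonitor monitor = new FailoverMonitor();
            while (true) {              // would normally run as a background service
                monitor.heartbeatOnce();
                Thread.sleep(5_000);
            }
        }
    }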
Your question is very general. Here are some general answers:
Google for Fault Tolerant Computing
Google for High Availability Solutions
This is usually handled at either the load balancer or the server level. This isn't something you normally do in code at the client.
Typically, you multihome the servers, each having its own IP plus one that is shared between all of them. Further, they communicate with each other over TCP, using a heartbeat to know which is the active node in an active/passive cluster.
I can't tell what type of servers you have, but most Windows servers can do this natively.
You might consider asking the question at serverfault to see how to properly configure your servers to support this.