What is the role of the "Service Broker" in SOA Model Architecture?
The service broker is meant to be a registry of services; it stores information about which services are available and who may use them. For example, UDDI, which was originally conceived as a web service registry, is now often considered an SOA service broker.
The SOA registry is responsible for registering business services, while the SOA broker is responsible for orchestrating the connections between these components. The service broker reads the interface information from the registry and then makes the right connection to the corresponding interfaces.
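To make the split between the two roles concrete, here is a minimal sketch in Java. The `ServiceRegistry` and `ServiceBroker` interfaces below are purely illustrative names, not part of any standard:

```java
// Illustrative sketch only: these interfaces are hypothetical and simply
// separate the two responsibilities described above.
import java.net.URI;
import java.util.List;

// The registry stores which services exist and where their interfaces live.
interface ServiceRegistry {
    void register(String serviceName, URI interfaceDescription); // e.g. a WSDL location
    List<URI> lookup(String serviceName);
}

// The broker reads interface information from the registry and wires up
// the connection between a consumer and a matching provider.
interface ServiceBroker {
    URI connect(String consumerId, String serviceName);
}

final class SimpleBroker implements ServiceBroker {
    private final ServiceRegistry registry;

    SimpleBroker(ServiceRegistry registry) {
        this.registry = registry;
    }

    @Override
    public URI connect(String consumerId, String serviceName) {
        List<URI> candidates = registry.lookup(serviceName);
        if (candidates.isEmpty()) {
            throw new IllegalStateException("No provider registered for " + serviceName);
        }
        // A real broker would also check whether this consumer is allowed to
        // use the service; here we just hand back the first matching interface.
        return candidates.get(0);
    }
}
```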
Related
I have a situation where messages are being generated by an internal application, but the consumers of the messages are outside our enterprise network. Will either HTTP(S) transport or REST connectivity work in this scenario, with an HTTP reverse proxy in the DMZ? If not, is it safe to have a broker in the DMZ that acts as a gateway to outside consumers?
Well, the REST/HTTP approach to connecting to ActiveMQ is very limited, as it does not support true messaging semantics.
Exposing an ActiveMQ broker is no less secure than exposing any other communication software, provided the usual precautions are taken: TLS, default passwords changed, high-entropy passwords and/or mutual authentication, recent patches applied, and the web console/Jolokia not exposed externally without safeguards.
In fact, you can buy hosted ActiveMQ instances from Amazon, which suggests that at least they don't think it's such a bad idea to put brokers on the Internet.
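For illustration, connecting a JMS client to such a broker over the TLS (`ssl://`) transport could look roughly like the sketch below; the broker URL, trust store path, queue name and credentials are all placeholders:

```java
// Minimal sketch: a JMS producer talking to ActiveMQ over the ssl:// transport.
// The broker URL, trust store location and credentials are placeholders.
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQSslConnectionFactory;

public class TlsProducer {
    public static void main(String[] args) throws Exception {
        ActiveMQSslConnectionFactory factory =
                new ActiveMQSslConnectionFactory("ssl://broker.example.com:61617");
        factory.setTrustStore("/path/to/truststore.jks");   // trusts the broker's certificate
        factory.setTrustStorePassword("changeit");

        Connection connection = factory.createConnection("appUser", "strong-password");
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("OUTBOUND.EVENTS"));
            producer.send(session.createTextMessage("hello from inside the enterprise"));
        } finally {
            connection.close();
        }
    }
}
```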
I have the service consumer and service provider on the same machine. The service consumer sends a large chunk of data to the service provider.
Can the web service provider be configured to accept this large payload through a 'local' interface, instead of relying on the underlying network to carry the packets? I'm looking for the most efficient way to transfer this large data set from the consumer to the provider on the same host.
If it's traditional WebSphere (not Liberty), check out in-process connections: https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/rrun_inbound.html. That applies to the same process, not just the same machine.
Every major service in OpenStack has an API service as the endpoint for clients to access, e.g. openstack-nova-api, openstack-glance-api, etc. But each major service also has other internal services, like openstack-nova-scheduler and openstack-nova-conductor, and it is suggested that these be deployed on nodes other than the one running the API service, to get some degree of isolation.
My question is: how does openstack-nova-api know where the real services (openstack-nova-scheduler/openstack-nova-conductor) are running, and how do they communicate with each other? When openstack-nova-api receives a new request, how does it distribute it to the real services that can process it and send back the results?
Internal communication between OpenStack modules is done through the AMQP message queue, typically managed by RabbitMQ.
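The real RPC layer is OpenStack's oslo.messaging library, so the sketch below is only an illustration of the underlying pattern, not actual Nova code: an "API" process publishes a request onto a RabbitMQ queue and a "scheduler" process consumes it wherever it happens to be running. The host and queue names are made up.

```java
// Illustration of the pattern only: an "API" process hands work to a
// "scheduler" process through a RabbitMQ queue, so neither needs to know
// where the other is running. Host and queue names are made up.
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.nio.charset.StandardCharsets;

public class AmqpDispatchSketch {
    private static final String QUEUE = "scheduler.requests";

    // Run on the API node: publish a request without knowing where the worker lives.
    static void publishRequest(String request) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbit.example.com");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.queueDeclare(QUEUE, true, false, false, null);
            channel.basicPublish("", QUEUE, null, request.getBytes(StandardCharsets.UTF_8));
        }
    }

    // Run on the scheduler node: consume requests from the shared queue.
    static void consumeRequests() throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbit.example.com");
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        channel.queueDeclare(QUEUE, true, false, false, null);
        DeliverCallback onMessage = (tag, delivery) ->
                System.out.println("scheduling: "
                        + new String(delivery.getBody(), StandardCharsets.UTF_8));
        channel.basicConsume(QUEUE, true, onMessage, tag -> { });
    }
}
```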
I want to deploy a backend WCF service in a WebRole in Cloud Service 1, with only an internal endpoint.
And deploy an ASP.NET MVC frontend in a WebRole in Cloud Service 2.
Is it possible to use an Azure Virtual Network to call the backend from the frontend via the internal endpoint?
UPDATED: I am just trying to build a simple SOA architecture like this:
Yes and No.
An internal endpoint essentially means that the role instance has been configured to accept traffic on a given port, but that port can NOT receive traffic from outside of the cloud service (hence it being "internal" to the cloud service). Internal endpoints are also not load balanced, so you're going to need to "juggle" traffic management from the callers yourself.
Now here is where the issues arise: a virtual network allows you to securely traverse cloud service boundaries, letting a role instance in cloud service 1 call a role instance in cloud service 2. However, to do this, the calling role instance needs to know how to address the receiving instance. If they were in the same cloud service, then you could crawl the cloud service topology via the RoleEnvironment class. But this class only works for the cloud service it exists in; it's not aware of a virtual network.
Now you could have the receiving role instance publish its FQDN to a shared area (say Azure table storage). However, a cloud service will only use its own internal DNS resolution (which only allows you to resolve short names in the same cloud service) unless you have configured the virtual network with a self-hosted DNS server.
So yes, you can do what you're trying to accomplish, but it does present some challenges. Given this, I'd have to ask whether the convenience of separating the deployments is enough to justify the additional complexity of the solution. If so, then I'd also look and see if perhaps there's a better way to interconnect the two services than direct calls (like a queue-based pattern, sketched below).
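As one possible flavour of that queue-based pattern, here is a minimal sketch using the classic Azure Storage SDK for Java (`com.microsoft.azure.storage`); the connection string and queue name are placeholders, and in a real WCF/.NET solution you would use the equivalent .NET storage client instead:

```java
// Sketch of the queue-based alternative, assuming the classic Azure Storage SDK
// (com.microsoft.azure.storage). Connection string and queue name are placeholders.
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.queue.CloudQueue;
import com.microsoft.azure.storage.queue.CloudQueueMessage;

public class QueueHandoffSketch {
    private static final String CONN =
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...";

    // Frontend web role: drop a work item on the queue instead of calling the backend directly.
    static void enqueueWork(String payload) throws Exception {
        CloudQueue queue = CloudStorageAccount.parse(CONN)
                .createCloudQueueClient()
                .getQueueReference("backend-work");
        queue.createIfNotExists();
        queue.addMessage(new CloudQueueMessage(payload));
    }

    // Backend role: poll the queue from any instance, with no addressing required.
    static void processWork() throws Exception {
        CloudQueue queue = CloudStorageAccount.parse(CONN)
                .createCloudQueueClient()
                .getQueueReference("backend-work");
        CloudQueueMessage msg = queue.retrieveMessage();
        if (msg != null) {
            System.out.println("processing: " + msg.getMessageContentAsString());
            queue.deleteMessage(msg);
        }
    }
}
```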
#BrentDaCodeMonkey makes some very valid points in his answer, so read that first.
I, personally, would not want to give up automatic discovery and scale via load balancing. My suggestion would be that you expose the WCF endpoint via an Azure Service Bus Relay endpoint. This will give you a "fixed" endpoint with which to communicate (solving the discovery issue) and infinite scalability because multiple servers can register and listen on the same Service Bus relay address. Additionally it introduces some basic security into the mix via shared key authentication when your web application(s) connect to your WCF services.
If you co-locate the Service Bus instance with your Cloud Services the overhead of the relay in the middle is totally negligible and, IMHO, worth it for the benefits explained above.
When looking at SOA (Service Oriented Architectures), the main problems seem to be distributing a service's load across multiple hosts and then managing those hosts (taking hosts offline or adding new hosts).
When one service talks to another, it should not need to know any host information (at the application level). Rather, the SOA environment should be able to route a service request to a specific host based on the host's current load characteristics (so it must know all hosts a service is running on and their relative load).
Are there any existing open protocols for services to report their existence and load to an SOA environment?
SOA is a set of high-level software architecture guidelines. It is not a technical standard or recommendation and it has nothing to do with technical implementation details, like load balancing.
Load balancing is based on addressing, which is dependent on the service access technology.
Systems built in "SOA-way" may be using different service access technology, like SOAP (over HTTP, JMS, etc.), REST, asynchronous XML messages over JMS, etc.
With SOAP, the service consumer may look up a UDDI registry to locate the service provider. Some of the latest UDDI registry products provide simple (e.g. round-robin) load balancing.
Another SOAP idea is using WS-Addressing, but it is not really meant for load balancing.
I think currently the best place for load-balancing is the underlying network transport layer. With HTTP transport you can choose hardware or software (e.g. Apache HTTPD modules) load balancers that can adapt the distribution based on response times and time-outs. With JMS transport, the most popular JMS servers provide some form of load balancing. Other protocols - like CORBA or Rendezvous - usually require a custom solution.
You can also utilize ESB software, e.g. Oracle Service Bus or TIBCO AMX Service Bus. With an ESB you can easily create a load-balancing proxy for your service instances. The proxy may be enhanced with some logic, like looking up a database table for guidance.
As you can see, there is no one-size-fits-all solution for service load balancing. The optimal solution will be based on the actual implementation architecture and vendors' recommendations.
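If you do end up needing something at the application level, the simplest fallback is a client-side dispatcher. The sketch below is purely illustrative (plain Java, hypothetical endpoint list) and just rotates through known provider instances, which is the most basic form of what a round-robin registry or HTTP load balancer would otherwise do for you:

```java
// Illustrative only: a trivial client-side round-robin dispatcher over a
// fixed endpoint list, i.e. the simplest form of service load distribution.
import java.net.URI;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

final class RoundRobinEndpoints {
    private final List<URI> endpoints;
    private final AtomicInteger counter = new AtomicInteger();

    RoundRobinEndpoints(List<URI> endpoints) {
        this.endpoints = List.copyOf(endpoints);
    }

    // Each call returns the next provider instance in rotation.
    URI next() {
        int i = Math.floorMod(counter.getAndIncrement(), endpoints.size());
        return endpoints.get(i);
    }
}

// Usage:
// RoundRobinEndpoints lb = new RoundRobinEndpoints(List.of(
//         URI.create("http://host-a:8080/service"),
//         URI.create("http://host-b:8080/service")));
// URI target = lb.next();
```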
After reading a lot, the concept I was actually looking for was the Enterprise Service Bus (ESB).
Though it does not define an explicit protocol, it describes an architectural style that allows the problems I stated above to be solved.