I have an EJB3 bean that needs to GET or POST from multiple HTTP servers. I have read the documentation on writing JCA adapters, as well as the documentation on Apache HTTPComponents, specifically the managed connections, managed connection factories, etc. that HTTPClient offers.
I note that the documentation for BasicHttpClientConnectionManager says to use it, rather than PoolingHttpClientConnectionManager, "Inside of an EJB Container." It is unclear whether "Inside of an EJB Container" refers to user code in an EJB to be run in a container, or to the container's own implementation code (e.g. something you might put in a JCA adapter).
I'm still a little unclear, architecturally, on how to handle this task so as to take maximum advantage of the services offered by the container. Thus far, my choices seem to be:
From within the EJB, create a new BasicHttpClientConnectionManager, and then create a client, like so:
BasicHttpClientConnectionManager cxMgr = new BasicHttpClientConnectionManager();
CloseableHttpClient client = HttpClients.custom().setConnectionManager(cxMgr).build();
This would, I believe, result in no connection pooling, but rather EJB instance pooling, which probably wouldn't be all that performant since the EJB container has no way of knowing which bean instance is holding an active connection to which remote HTTP server.
Write a (fairly minimal) JCA adapter that wraps the PoolingHttpClientConnectionManager, and then, within the EJB, grab that adapter with a @Resource annotation and use it to build the HTTPClient (a rough sketch of what I imagine this would look like follows the list of options)
Write a JCA adapter that manages a pool of HTTPClients and hands those out when needed.
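To make the second option more concrete, here is roughly what I imagine the EJB side would look like. Everything adapter-specific in this sketch is hypothetical: the HttpConnectionFactory and HttpConnection interfaces and the JNDI name are placeholders for whatever a (not yet written) JCA adapter would actually expose.

import javax.annotation.Resource;
import javax.ejb.Stateless;

@Stateless
public class RemoteHttpFetcherBean {

    // Hypothetical interfaces that the JCA adapter would define and implement.
    public interface HttpConnectionFactory {
        HttpConnection getConnection() throws Exception;
    }

    public interface HttpConnection extends AutoCloseable {
        String get(String url) throws Exception;
    }

    @Resource(lookup = "java:/eis/HttpConnectionFactory")  // made-up JNDI name
    private HttpConnectionFactory connectionFactory;

    public String fetch(String url) throws Exception {
        // The container-managed factory would hand out pooled connections.
        try (HttpConnection connection = connectionFactory.getConnection()) {
            return connection.get(url);
        }
    }
}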
I am unclear on which approach I should take, or on whether or not I'm ignoring some sort of HTTP Connection management service that's already built into the container (in this case, TomEE plus). How should I do this?
It's not that simple: none of those options match exactly. The answer depends on whether the remote side of your HTTP connection is a single logical resource or not.
If it's a single logical resource, then JCA is the best match. Still, you should avoid exposing its HTTP nature in your adapter as much as possible: design a proper logical interface for your remote system and then implement a resource adapter behind that interface. Your clients request the data they need and receive it not as HTML strings but as ready-to-use Java beans. This way you can take advantage of much of what the EJB container provides without violating its requirements.
If you need an entry point for accessing an unspecified number of external services (indexing HTML pages for search might be that kind of job), then things get tricky. JCA is not really intended for that kind of work, but it handles it quite well: I maintain the TCP SocketConnector project, which does almost the same thing, and everything works like a charm. EJB won't fit here: stateless bean pooling is hard to control, singletons would require you to write a non-blocking pooling implementation yourself, and stateful beans are, well, effectively single-threaded.
There's also an extra option: while JCA will be easier to integrate into your existing Java EE environment, it might be hard to implement a resource adapter from scratch if you're doing it for the first time. An external HTTP querying tool connected to a JMS queue might come in handy here. JMS will make it easier to balance the workload, but it removes any interactivity and may slow the whole chain down.
P.S. Regarding the third option in your original question: there's no need to implement pooling in JCA, as the container already maintains a connection pool for all resource adapter instances by default. Just keep a single HTTP client connection per JCA ManagedConnection and everything will work out of the box.
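For illustration, the HTTP-specific state such a ManagedConnection would hold can be as small as this sketch (the surrounding ManagedConnection SPI methods are omitted; destroy() would simply call close() below):

import java.io.IOException;

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.BasicHttpClientConnectionManager;

// One client over one non-pooling connection manager per ManagedConnection;
// the JCA container, not HttpClient, is responsible for pooling these objects.
public class HttpManagedConnectionState {

    private final BasicHttpClientConnectionManager connectionManager =
            new BasicHttpClientConnectionManager();

    private final CloseableHttpClient client =
            HttpClients.custom().setConnectionManager(connectionManager).build();

    public CloseableHttpClient getClient() {
        return client;
    }

    public void close() throws IOException {
        client.close();               // called from ManagedConnection#destroy()
        connectionManager.shutdown(); // releases the single underlying connection
    }
}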
I am designing a CorDapp, which would require user input as well as API integration, and I am considering various approaches to expose flows and vault queries to the outside world.
The default option seems to be to use Corda RPC. Unless I missed something, there are only Java bindings for it, which effectively restricts the clients to being JVM-based. This is somewhat inconvenient, and ideally I would like something like OpenAPI to make it more open and implementation-agnostic.
Another option is to use some kind of Corda RPC to OpenAPI proxy. I know about Braid, and I'm sure there are others. Braid seems to support deployment as a Corda service packed together with the flows into the CorDapp itself, effectively making it run embedded in the Corda JVM.
Braid can be deployed as a standalone proxy too, which I suppose is option three.
Instinctively I find the embedded mode more attractive, as it reduces the number of moving parts compared to a standalone mode. However, I am concerned that such a model may in fact become discouraged at some point, either because Corda developers consider it a misuse of the services facility, or because some organisations will not be keen to deploy such code onto their nodes, especially when they may be running multiple CorDapps. I would imagine anything deployed as part of the Corda JVM would at least require more scrutiny due to its potential impact on other things running there, which in turn would reduce agility.
I wonder which approach to integrating with a CorDapp is actually recommended?
Edit 1: I know it is technically possible to embed a webserver into the node and expose a REST API from there, at least in the current version of Corda (4.3 at the time of writing). The question is more about whether it is a good idea to do so, or not, and why.
Take a look at the question I had asked on Stack Overflow regarding a front end for a CorDapp. This might be of some help.
Following is the link -
"Corda: Can we develop Dapps that will be run by IIS webserver to talk to Corda platform?"
You can use any front-end technology you want.
As of Corda 3, your backend must be JVM-based, for two reasons:
1. You need to load various flow, state and other class definitions onto the classpath to pass as arguments to flows, retrieve objects from the vault, etc.
2. You need to use the CordaRPCClient library to create an RPC connection to the node (see the sketch at the end of this answer).
If you really need to write your back-end in another language, there are a few workarounds:
1. Create a thin Java webserver that sits between your main webserver and the node. The Java webserver translates HTTP requests from the main webserver into RPC calls to the node, and RPC responses from the node into HTTP responses to the main webserver. This is the approach taken by libraries such as Braid.
2. Use a library such as GraalVM to compile non-JVM languages to JVM bytecode. An example of writing a JVM webserver in Javascript using GraalVM is available here: https://github.com/nitesh7sid/cordapp-example-nodejs-server-graalvm
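For reference, the RPC connection itself (the sketch mentioned above) looks roughly like this in Java; the host, port and credentials are placeholders for whatever your node configuration uses:

import net.corda.client.rpc.CordaRPCClient;
import net.corda.client.rpc.CordaRPCConnection;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.utilities.NetworkHostAndPort;

public class NodeClient {
    public static void main(String[] args) {
        // Connect to the node's RPC address with RPC user credentials.
        CordaRPCClient client = new CordaRPCClient(new NetworkHostAndPort("localhost", 10006));
        CordaRPCConnection connection = client.start("user1", "password");
        try {
            CordaRPCOps proxy = connection.getProxy();
            System.out.println(proxy.nodeInfo());   // e.g. confirm which node we are talking to
            // flow starts and vault queries would go through `proxy` here
        } finally {
            connection.notifyServerAndClose();
        }
    }
}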
Let's say I have a gRPC Java service that is hosting a Server.
So when a client wants to call this service, they use:
ManagedChannel channel = ManagedChannelBuilder
.forAddress(grpcHost, grpcPort)
.usePlaintext(true)
.build();
No problem.
Now what if I want to call my service from the same JVM? Is this even possible? Or perhaps this is completely invalid?
You are free to use plain ManagedChannelBuilder to connect to the same JVM. It will work just like normal.
If you want to optimize this case because it happens frequently, and the client and server share the same ClassLoader (so it wouldn't work between separate Servlets, for example), you can use the in-process transport. The in-process transport has relatively low overhead and can even avoid serializing/deserializing the Protobufs.
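A minimal sketch of the in-process transport, assuming the grpc-inprocess (and grpc-core) artifacts are on the classpath; the commented-out service registration stands in for your own generated service implementation:

import io.grpc.ManagedChannel;
import io.grpc.Server;
import io.grpc.inprocess.InProcessChannelBuilder;
import io.grpc.inprocess.InProcessServerBuilder;

public class InProcessExample {
    public static void main(String[] args) throws Exception {
        // A unique name ties the in-process server and channel together.
        String name = InProcessServerBuilder.generateName();

        Server server = InProcessServerBuilder.forName(name)
                .directExecutor()                   // run calls on the calling thread
                // .addService(new MyServiceImpl()) // register your generated service impl here
                .build()
                .start();

        ManagedChannel channel = InProcessChannelBuilder.forName(name)
                .directExecutor()
                .build();

        // Build a stub from `channel` and call the service as usual.

        channel.shutdownNow();
        server.shutdownNow();
    }
}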
I'm trying to implement an inbound resource adapter that will receive data over HTTP. I have two candidate implementations: embed Jetty as an internal server, or use the web container from WildFly. I know how to use Jetty, but I think using Undertow would be better. But how? WildFly does not see @WebServlet in a RAR. How can I tell WildFly to deploy a servlet that is located in a RAR?
When it comes to the point where the whole ecosystem is against me, I usually ask myself whether I'm sure the whole idea is right. Your idea does not seem to be right at all. Still, if you're sure, I'll explain a way of doing something close to what you want.
The idea of using a servlet inside a resource adapter is a bit odd. Implementing an inbound HTTP adapter is odd as well. In a logical sense, a servlet container is itself an inbound HTTP resource adapter. It doesn't actually use the JCA container, but it's quite close to what an inbound resource adapter stands for.
Another reason for not doing so is that resource adapters and application deployments have quite different life cycles. While a WAR/EAR deployment represents an application that 'serves as it lives', the RAR semantics are quite different: instead of doing any business logic, a resource adapter just provides interfaces for other deployments. You can bundle a RAR into your EAR for sure, but unless you're targeting a monstrous monolith, you'll end up deploying the RAR as a separate artefact for your applications to use. A resource adapter should not contain any particular business logic. If you need it to, consider rethinking whether you need an application server in the first place: the JCA container is quite poor compared to the EJB and web containers, and if you don't need all that power, Java SE might come in handy.
Now, if you're still 100% sure you need this, let's take a look at your options:
You may try to implement a ServiceActivator -- a JBoss-specific starting point for custom extensions. From within this activator you can access UndertowService and perform the servlet container bootstrap manually. Here is an example of an SA-powered artefact from the WildFly team. Since your question is quite unusual, I cannot confirm whether a JCA deployment will support it, but it seems like it should.
If you cannot force WildFly's web container to process the RAR deployment, you can fall back to manual container instantiation. Undertow itself is just a module inside WildFly, so you can access it by specifying a module dependency clause in your RAR's JAR manifest, like so:
Dependencies: io.undertow
Undertow's classes will then be available to you upon deployment through your classloader and you'll be able to instantiate a new server with custom servlets inside.
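As a rough sketch of that second option, your resource adapter could bootstrap Undertow manually along these lines (the port, interface and handler are placeholders; a full servlet deployment would go through io.undertow.servlet.Servlets instead of a plain handler):

import io.undertow.Undertow;
import io.undertow.util.Headers;

// Holds the embedded server; start() and stop() would be called from
// ResourceAdapter#start() and ResourceAdapter#stop() respectively.
public class EmbeddedHttpEndpoint {

    private Undertow server;

    public void start() {
        server = Undertow.builder()
                .addHttpListener(8080, "0.0.0.0")   // illustrative port/interface
                .setHandler(exchange -> {
                    exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                    exchange.getResponseSender().send("received");
                })
                .build();
        server.start();
    }

    public void stop() {
        if (server != null) {
            server.stop();
        }
    }
}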
I'm working on an application running on JBoss EAP 6.4 in domain mode. I need to handle multiple (up to 1000) parallel TCP connections from the server to different hardware devices.
Connections have to be opened on the server side and will be kept open for up to 90 seconds. All connections use the same port and protocol, but the destination IP addresses are different.
I wonder if a JCA adapter is suitable for this use case. Should I create a separate activation specification for every single device, or use something other than a JCA resource adapter?
The question is a bit old, but JCA is a not-that-popular technology, so some knowledge about it may still be useful even if you have already implemented everything.
Firstly, JCA is a spec for writing your connector; it is not a connector implementation itself. If you need your application to work with external TCP connections, it's up to you to implement a connector.
Whether JCA is suitable for your particular needs depends solely on the nature of your "different hardware devices". Resource adapters have been designed to be used on a per-service basis with optional connection pooling, i.e. all the connections a particular adapter fires should be quite similar in nature and interchangeable. However, JCA adapters also have sensible advantages outside of this scope, especially in an EJB environment. EJB imposes quite harsh restrictions upon your code: no locking, no thread spawning, etc. It might be quite difficult to implement TCP I/O without all those things. Here JCA comes in with its threading freedom (MDBs are capable of providing an EJB-friendly scope on their activation regardless of the underlying thread source).
In conclusion: you should consider using JCA for TCP networking if you're working in an EJB environment (or are going to in the foreseeable future). If you're using a Spring-powered environment on top of a Java EE stack, it may be worth skipping such a complicated technology and sticking to plain Netty networking.
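For comparison, the plain-Netty alternative (outside an EJB container) looks roughly like this: one shared event loop group serving many outbound connections to different device addresses, with the handler pipeline left empty as a placeholder for your device protocol.

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

public class DeviceConnector {

    private final EventLoopGroup group = new NioEventLoopGroup();

    public ChannelFuture connect(String deviceAddress, int port) {
        Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5000)
                .handler(new ChannelInitializer<Channel>() {
                    @Override
                    protected void initChannel(Channel ch) {
                        // add protocol-specific handlers for the device here
                    }
                });
        return bootstrap.connect(deviceAddress, port);
    }

    public void shutdown() {
        group.shutdownGracefully();
    }
}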
If you choose to proceed with JCA, consider inspecting the JCA SocketConnector I am currently maintaining. It's built on the Netty stack and could be a good starting point for you. You can also use it as a source of ideas for building your own implementation instead.
Suppose I have two or more different server applications developed in Clojure using ZeroMQ and BSON as protocols. How can I deploy them using a single JVM instance while also sharing common dependencies?
It seems a waste of memory to use a JVM instance for each standalone application. I plan to develop several Clojure applications in the future and VPS memory is not cheap.
Although not explicitly stated, applications running in an application server (Jetty, Glassfish) seem to share the same JVM while isolating their state. However, they require a container, and neither Servlets nor Enterprise JavaBeans have an implementation that I could easily adapt to my custom protocol.
I've been thinking about using Servlets and implementing a dummy service() method, though I don't like the idea of a pointless HTTP server's overhead. As for the EJB container, I cannot even figure out how I would implement this with it.
It would be nice to have a container requiring only init() and destroy() methods but I can't find an application server providing it.
Maybe there is a way around this, or I don't even need an application server. Could somebody point me in the right direction?
It sounds like you would be okay using an EJB container, but only if it were easier or simpler to work with. Have you looked at Immutant? It's basically a wrapper around JBossAS for Clojure, written by guys at Red Hat (who also own JBossAS).
In addition to being an application server, those guys have wrapped JMS and other Java EE features around Clojure, such that sending messages between apps appears pretty simple.
Also, they have Daemons and Jobs, which may provide something similar to what you were describing as simple services with init() and destroy().
That being said, I haven't used it, so I can't vouch for its awesomeness/awfulness.
So you have two applications that both share the same dependencies and both want to respond to and/or generate events on a message bus.
If I understand what you're saying, this should be as simple as starting the JVM with access to all code in the classpath and initializing your message bus and your code from a main method.
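A sketch of that idea in Java, assuming both apps are on the classpath and expose a -main entry point (the namespaces here are made up):

import clojure.java.api.Clojure;
import clojure.lang.IFn;

public class Launcher {
    public static void main(String[] args) {
        IFn require = Clojure.var("clojure.core", "require");

        // Load both applications into the same JVM, sharing dependencies.
        require.invoke(Clojure.read("app-one.core"));
        require.invoke(Clojure.read("app-two.core"));

        // Start each application; in practice each -main would likely run on its own thread.
        Clojure.var("app-one.core", "-main").invoke();
        Clojure.var("app-two.core", "-main").invoke();
    }
}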
If you wanted to use a container, you could create some dummy message-driven beans that sit between your Clojure code and the message bus, assuming there is a JMS adapter for your message bus. Using NetBeans/GlassFish, this may not be that bad. You might gain something in terms of monitoring, but I'm not sure what else you would gain.
I kept searching and found out that some application servers implementing the OSGi service platform have simpler lightweight containers than those offered by Java EE.
Apache Karaf for instance can load POJO applications directly from JAR files.
I am not sure what DDs are, but any JAR is a valid bundle. Since Clojure is not type safe, you will need to bridge the Clojure world and the OSGi/Java world, but the OSGi API is a wet dream for such bridges.
Note that in OSGi bundles do not automatically expose their content; a bundle is private by default. However, the API allows you to punch holes wherever you want.