Kaa Custom Transport

I am trying to implement CoAP as a custom transport protocol for the Kaa platform. I've followed the customization guide for creating a custom transport step by step, but in the last part (transport provisioning), it seems that some config files should be created automatically, yet I can't find any *.config file related to my CoAP implementation class in the server/transports package, neither in /kaa/server nor in the /etc/conf sub-folders. Any idea what I should do?
The attached image shows how my implementation is laid out (the project hierarchy).

If this is still relevant for you:
First of all, I don't see either a transport configuration schema or a transport descriptor implemented in your project. Follow these steps:
Create an Avro schema that defines the configuration parameters for the transport and compile it with the Avro Maven plugin (look at how this is done for the TCP transport).
Once you are done with the schema, implement the transport descriptor so that the Kaa node can look up your custom transport.
Second, you should create the config file for your custom transport manually. Example config files for the TCP and HTTP transports are located in server/node/src/main/resources, for both the operations and bootstrap servers.
Finally, compile and package your custom transport project into jar artifacts (two jars: the config classes and the classes related to the CoAP transport). Put these jars into /usr/lib/kaa-node/lib on your server, and the config files into /usr/lib/kaa-node/conf. After that, restart the node and enjoy your newly created CoAP transport.
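For illustration, a minimal Avro configuration schema for a hypothetical CoAP transport might look like the following. The namespace, record name, and field names here are assumptions modeled on the stock TCP transport, not Kaa's actual definitions:

```json
{
  "namespace": "org.kaaproject.kaa.server.transport.coap.config.gen",
  "type": "record",
  "name": "CoapConfig",
  "fields": [
    { "name": "bindInterface", "type": "string" },
    { "name": "bindPort", "type": "int" }
  ]
}
```

A matching config file (e.g. a hypothetical coap-transport.config placed in /usr/lib/kaa-node/conf) would then supply concrete values for those fields, following the pattern of the TCP/HTTP config files:

```json
{
  "bindInterface": "0.0.0.0",
  "bindPort": 5683
}
```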

Related

grpc-java define service and methods without proto file

In my gRPC application, I need a service that has just two client-side streaming methods. Is it possible to define the service and messages without creating a .proto file in grpc-java? If yes, can you point me to an example or tutorial?
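For reference, grpc-java's low-level API does allow a service to be assembled without generated stubs, by building a MethodDescriptor and a ServerServiceDefinition by hand. A hedged sketch of one client-streaming method with raw byte payloads (the service and method names are made up, and the handler is a stub):

```java
import io.grpc.MethodDescriptor;
import io.grpc.ServerServiceDefinition;
import io.grpc.stub.ServerCalls;
import io.grpc.stub.StreamObserver;
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class ManualServiceSketch {
    // Trivial marshaller that passes raw bytes through (requires Java 9+ for readAllBytes).
    static final MethodDescriptor.Marshaller<byte[]> BYTES = new MethodDescriptor.Marshaller<byte[]>() {
        public InputStream stream(byte[] value) { return new ByteArrayInputStream(value); }
        public byte[] parse(InputStream stream) {
            try { return stream.readAllBytes(); } catch (Exception e) { throw new RuntimeException(e); }
        }
    };

    static ServerServiceDefinition buildService() {
        MethodDescriptor<byte[], byte[]> upload = MethodDescriptor.<byte[], byte[]>newBuilder()
                .setType(MethodDescriptor.MethodType.CLIENT_STREAMING)
                .setFullMethodName(MethodDescriptor.generateFullMethodName("MyService", "Upload"))
                .setRequestMarshaller(BYTES)
                .setResponseMarshaller(BYTES)
                .build();
        return ServerServiceDefinition.builder("MyService")
                .addMethod(upload, ServerCalls.asyncClientStreamingCall(
                        responseObserver -> new StreamObserver<byte[]>() {
                            public void onNext(byte[] chunk) { /* accumulate chunks here */ }
                            public void onError(Throwable t) { }
                            public void onCompleted() {
                                responseObserver.onNext(new byte[0]);
                                responseObserver.onCompleted();
                            }
                        }))
                .build();
    }
}
```

The resulting ServerServiceDefinition can be registered on a ServerBuilder via addService, just like a generated service implementation.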

Starting a flow using Corda RPC

By reading the docs, it is clear that the only way to interact with Corda is via RPC. If you want to interact over HTTP, you have to write a web server exposing specific endpoints.
I am trying to write an RPC client that starts a flow in the CorDapp without a webserver.
rpcOps.startTrackedFlowDynamic(ExampleFlow.Initiator.class, iouValue, otherParty)
I don't properly understand this part. Should I duplicate the ExampleFlow class both on the client end and in the CorDapp? What is the structure of the RPC client and the CorDapp in this case of not having a web server?
tl;dr: How do I write a client to start a flow on an already running Corda node without a webserver? Thanks
Yes: currently, the client must depend on ExampleFlow.Initiator and have it available on the classpath. This is true whether it's a webserver or a regular command-line client.
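A minimal standalone client along those lines might look like the sketch below. The host, port, and credentials are placeholders, and the flow invocation is shown commented out because ExampleFlow has to come from your CorDapp jar on the client's classpath:

```java
import net.corda.client.rpc.CordaRPCClient;
import net.corda.client.rpc.CordaRPCConnection;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.utilities.NetworkHostAndPort;

public class FlowStarter {
    public static void main(String[] args) {
        // RPC address and credentials are assumptions; use your node's values.
        CordaRPCClient client = new CordaRPCClient(NetworkHostAndPort.parse("localhost:10006"));
        CordaRPCConnection connection = client.start("user1", "test");
        try {
            CordaRPCOps rpcOps = connection.getProxy();
            // otherParty would typically be resolved through the proxy, e.g.
            // rpcOps.wellKnownPartyFromX500Name(...), and the flow started with:
            // rpcOps.startTrackedFlowDynamic(ExampleFlow.Initiator.class, iouValue, otherParty)
            //       .getReturnValue().get();
        } finally {
            connection.notifyServerAndClose();
        }
    }
}
```

The key point is that no webserver is involved; the client process talks to the running node directly over RPC.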

Corda: Can we develop Dapps that will be run by IIS webserver to talk to Corda platform?

We have used "Yo!CorDapp" example (https://github.com/corda/spring-observable-stream) to build a POC.
In this POC, can we replace Angular with .NET for the frontend and use an IIS webserver in place of the Spring Boot webserver to talk to the Corda platform?
Thanks
You can use any front-end technology you want.
As of Corda 3, your backend must be JVM-based, for two reasons:
You need to load various flow, state, and other class definitions onto the classpath to pass as arguments to flows, retrieve objects from the vault, etc.
You need to use the CordaRPCClient library to create an RPC connection to the node.
If you really need to write your back-end in another language, there are a few workarounds:
Create a thin Java webserver that sits between your main webserver and the node. The Java webserver translates HTTP requests from the main webserver into RPC calls to the node, and RPC responses from the node into HTTP responses to the main webserver.
This is the approach taken by libraries such as Braid.
Use a library such as GraalVM to compile non-JVM languages to JVM bytecode.
An example of writing a JVM webserver in JavaScript using GraalVM is available here: https://github.com/nitesh7sid/cordapp-example-nodejs-server-graalvm
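The first workaround (a thin JVM bridge) can be sketched with the JDK's built-in HTTP server. Here the RPC call is stubbed out; in a real bridge it would go through CordaRPCClient, and the class, endpoint, and method names are all illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class RpcBridge {

    // Stand-in for a real CordaRPCOps invocation.
    static String startFlowViaRpc(String flowName) {
        return "started:" + (flowName == null ? "unknown" : flowName);
    }

    public static void main(String[] args) throws IOException {
        // Port 0 asks the OS for any free port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/start-flow", exchange -> {
            // e.g. GET /start-flow?ExampleFlow
            String flow = exchange.getRequestURI().getQuery();
            byte[] body = startFlowViaRpc(flow).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Bridge listening on port " + server.getAddress().getPort());
        // Stopped immediately here so the example terminates; a real bridge keeps running.
        server.stop(0);
    }
}
```

The non-JVM main webserver then only ever speaks HTTP to this bridge, and the bridge alone carries the Corda classpath and RPC dependency.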

How to use HTTP protocol in Resource Adapter in WildFly

I'm trying to implement an inbound resource adapter that will receive data over the HTTP protocol. I have two implementation variants: use Jetty as an embedded server, or use the web container from WildFly. I know how to use Jetty, but I think using Undertow would be best. But how? WildFly does not see @WebServlet in a RAR. How can I tell WildFly to deploy a servlet that is located in a RAR?
When it comes to the point where the whole ecosystem is against me, I usually ask myself whether I'm sure the whole idea is right. Your idea does not seem to be right at all. Still, if you're sure, I'll explain the way of doing something that is close to what you want.
The idea of using a servlet inside a resource adapter is a bit odd. Implementing an inbound HTTP adapter is odd as well. In some logical sense, the servlet container is itself an inbound HTTP resource adapter. It doesn't really use the JCA container, but it's quite close to what an inbound resource adapter would stand for.
Another reason for not doing so is that resource adapters and application deployments have quite different life cycles. While a WAR/EAR deployment represents an application that 'serves as it lives', the RAR semantics are quite different: instead of doing some business logic, a resource adapter just provides interfaces for other deployments. You can bundle a RAR into your EAR for sure, but unless you're targeting a monstrous monolith, you'll end up deploying the RAR as a separate artefact for your applications to use. A resource adapter should not contain any particular business logic. If you need it to do so, consider rethinking whether you need an application server in the first place: the JCA container is quite poor compared to the EJB and web ones, and if you don't need all that power, Java SE might come in handy.
Now, if you're still 100% sure you need this, let's take a look at your options:
You may try to implement a ServiceActivator, a JBoss-specific entry point for custom extensions. From within this activator you can access the UndertowService and perform the servlet container bootstrap manually. Here is an example of an SA-powered artefact from the WildFly team. Since your question is quite unusual, I cannot confirm whether a JCA deployment will support it, but it seems to.
If you cannot force WildFly's web container to process the RAR deployment, you can fall back to manual container instantiation. Undertow itself is just a module inside WildFly, so you can access it by specifying a module dependency clause in your RAR's JAR manifest, like so:
Dependencies: io.undertow
Undertow's classes will then be available to you upon deployment through your classloader, and you'll be able to instantiate a new server with custom servlets inside.
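Following that manifest approach, a minimal embedded bootstrap, e.g. invoked from your ResourceAdapter's start() callback, might look like this sketch using Undertow's public builder API (the port and handler are illustrative):

```java
import io.undertow.Undertow;
import io.undertow.util.Headers;

public class EmbeddedUndertow {
    public static void main(String[] args) {
        Undertow server = Undertow.builder()
                .addHttpListener(8080, "localhost") // bind address is an assumption
                .setHandler(exchange -> {
                    // Hand the request body off to the adapter's inbound logic here.
                    exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                    exchange.getResponseSender().send("inbound data received");
                })
                .build();
        server.start();
    }
}
```

For full servlet support (rather than a plain HttpHandler), Undertow's servlet extension would be needed on top of this, which is where the manual bootstrap becomes considerably more involved.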

BizTalk Archiving Pipeline Component Consideration

In my scenario, I have a Pipeline that (1) decrypts and then (2) disassembles a flat file on a receive port.
My requirement is to capture the file, and put it on a local fileshare, between (1) and (2).
My initial approach was to introduce an archive component between these, but I have run into issues with it. The archiving component uses direct access to storage to dump the file. This is essentially poor methodology; per BizTalk principles, this is a function of a send port/send adapter. So if, for example, the archiving destination is an FTP host, the archiving component is useless.
Hence two ideas come to mind:
A) Somehow configure the archiving component to use a send port (if that's even possible)
B) Abandon the idea of the archiving component and just use BizTalk's native functionality as follows:
- Receive the file using a decrypt-only pipeline
- Send the file to temporary local storage using a send port
- Subscribe to the receive port to send the file to an archive
- Pick up the file from local storage using a disassemble pipeline (second receive port)
- Use an orchestration to process the file from the second receive port
Are there any issues with Option B)?
If not, then what's the point of even using an archive component?
Other options also include
C) Have an archive send port and a loop-back send port subscribe to the receive port; the loop-back send port would have the flat-file disassembler on the receive side.
D) Have an archive send port and an orchestration that subscribe to the receive port, and call the disassemble pipeline in the orchestration.
We've used both of these scenarios for different solutions.
If you are using native BizTalk functionality, setting up send ports that subscribe to the message type for archiving is sufficient.
If you are using the BizTalk ESB Toolkit, it is very difficult to split the message for archiving since you are executing in the pipeline context. Using an orchestration in your itinerary will allow you to split the message, but that of course requires the itinerary to leave the pipeline and drop the message on the MessageBox. If you are just doing simple message archiving, this solution may be overkill.
You can use a custom pipeline component such as the one below. It is a reusable pipeline component that works in a BizTalk ESB Toolkit scenario (very handy if you want the original message because it gets transformed), supports both file and SQL archiving, and works in both inbound and outbound pipeline scenarios.
BizTalk Archiving - SQL and File
You will only be responsible for cleaning up old/unwanted messages to avoid bloat.
