WSO2 ESB TCP/UDP Axis2 server (case study)

I have been studying WSO2 ESB for this particular case:
We have some remote devices that monitor various types of data (temperature, wind, warnings, alarms, panic, etc.). These devices send data packets to a server over UDP and TCP, mostly in a binary format (start bit, protocol, values, time, stop bit).
I know that WSO2 ESB can support TCP and UDP transports via an Axis2 server; however, all the examples I have found need the data to be in SOAP format (or XML-like).
Is there any way to configure the Axis2 server to receive the raw packets?
Thanks in advance.

AFAIK this can be achieved by configuring message builders and formatters in the axis2.xml.
Apache Axis2, which is the base for Apache Synapse's SOAP processing, lets users add their own custom message formats through Builders and Formatters.
A Builder accepts a binary data stream and creates an XML message, and a Formatter accepts an XML message and converts it to bytes.
Add the following message builders and formatters to the corresponding sections in conf/axis2.xml (for both Apache Synapse and WSO2 Enterprise Service Bus).
<messageBuilder contentType="text/html"
                class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
<messageFormatter contentType="text/html"
                  class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
The above example shows how to enable the Binary Relay for the text/html content type.
You need to repeat the above pair of configurations for each content type you want handled at the byte level.
For more information on setting up the Binary Relay, please refer to the documentation.
Hope this information will help you.
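Once the Binary Relay hands the raw bytes to your mediation logic, something still has to parse the device frames. A minimal sketch in Python of what that parsing could look like; the frame layout used here (1-byte start marker, 1-byte protocol id, big-endian temperature and timestamp, 1-byte stop marker) is entirely hypothetical and only illustrates the technique:

```python
import struct

START, STOP = 0x02, 0x03  # hypothetical frame delimiters

def parse_frame(frame: bytes) -> dict:
    # Hypothetical layout: start(1) | protocol(1) | temp(2, BE, tenths of a
    # degree) | time(4, BE, unix seconds) | stop(1) -> 9 bytes total
    start, proto, temp, ts, stop = struct.unpack(">BBhIB", frame)
    if start != START or stop != STOP:
        raise ValueError("malformed frame")
    return {"protocol": proto, "temperature": temp / 10.0, "timestamp": ts}

frame = struct.pack(">BBhIB", START, 7, 215, 1700000000, STOP)
print(parse_frame(frame))  # {'protocol': 7, 'temperature': 21.5, 'timestamp': 1700000000}
```

The real field widths, byte order, and delimiters would have to come from the devices' protocol specification.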

Related

Most important format features: redesign the format of the client-to-server messages into an HTTP message

I need to redesign the format of the messages sent from a simple client/server Java application into a format that is supported by HTTP.
I don't need to actually change the program, but rather just come up with the key design changes that would need to be implemented.
I understand that HTTP still uses the TCP transport protocol, and that some reformatting of CRUD messages into GET, POST, DELETE, and PUT is needed.
Are there any other important design/format requirements that I need to consider?
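One way to start such a redesign is to write the CRUD-to-method mapping down explicitly and wrap each existing message in an HTTP request line plus headers. A rough sketch in Python; the resource paths and header choices are illustrative, not prescriptive:

```python
# Conventional mapping of CRUD operations onto HTTP methods
CRUD_TO_HTTP = {"create": "POST", "read": "GET", "update": "PUT", "delete": "DELETE"}

def to_http_request(op: str, resource: str, body: str = "",
                    host: str = "example.com") -> str:
    # Build a minimal HTTP/1.1 request; real code would also set
    # Content-Type and handle URL encoding.
    method = CRUD_TO_HTTP[op]
    payload = body.encode()
    lines = [f"{method} {resource} HTTP/1.1",
             f"Host: {host}",
             f"Content-Length: {len(payload)}",
             "",
             body]
    return "\r\n".join(lines)

print(to_http_request("read", "/sensors/42"))
```

Beyond the method mapping, you would also need to decide on resource naming (URIs), status-code handling, and a body encoding such as JSON.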

How to inspect Firestore network traffic with charles proxy?

As far as I can tell, Firestore uses protocol buffers when making a connection from an Android/iOS app. Out of curiosity I want to see what network traffic is going up and down, but I can't seem to make Charles Proxy show any real decoded info. I can see the open connection, but I'd like to see what's going over the wire.
Firestore's SDKs appear to be open source, so it should be possible to use them to help decode the output. https://github.com/firebase/firebase-js-sdk/tree/master/packages/firestore/src/protos
A few Google services (like AdMob: https://developers.google.com/admob/android/charles) have documentation on how to read network traffic with Charles Proxy, but I think your question is whether this is possible with Cloud Firestore, since Charles has support for protobufs.
The answer is: it is not possible right now. The Firestore requests can be seen, but you can't actually read any of the data being sent, since it's using protocol buffers. There is no documentation on how to use Charles with Firestore requests; there is an open issue (feature request) on this with the product team, which has no ETA. In the meantime, you can try the Protocol Buffers Viewer.
Alternatives for viewing Firestore network traffic could be:
From the Firestore documentation:
For all app types, Performance Monitoring automatically collects a trace for each network request issued by your app, called an HTTP/S network request trace. These traces collect metrics for the time between when your app issues a request to a service endpoint and when the response from that endpoint is complete. For any endpoint to which your app makes a request, Performance Monitoring captures several metrics:
Response time — Time between when the request is made and when the response is fully received
Response payload size — Byte size of the network payload downloaded by the app
Request payload size — Byte size of the network payload uploaded by the app
Success rate — Percentage of successful responses compared to total responses (to measure network or server failures)
You can view data from these traces in the Network requests subtab of the traces table, which is at the bottom of the Performance dashboard (learn more about using the console later on this page). This out-of-the-box monitoring includes most network requests for your app. However, some requests might not be reported, or you might use a different library to make network requests. In these cases, you can use the Performance Monitoring API to manually instrument custom network request traces. Firebase displays URL patterns and their aggregated data in the Network tab in the Performance dashboard of the Firebase console.
From a Stack Overflow thread:
The wire protocol for Cloud Firestore is based on gRPC, which is indeed a lot harder to troubleshoot than the websockets that the Realtime Database uses. One way is to enable debug logging with:
firebase.firestore.setLogLevel('debug');
Once you do that, the debug output will start getting logged.
Firestore uses gRPC for its API, and Charles does not support gRPC at the moment.
In this case you can use Mediator. Mediator is a cross-platform GUI gRPC debugging proxy, like Charles but designed for gRPC.
You can dump all gRPC requests without any configuration.
To decode the gRPC/TLS traffic, you need to download and install the Mediator root certificate on your device, following the documentation.
To decode the request/response messages, you need to download the proto files mentioned in your description, then configure the proto root in Mediator, following the documentation.

How to read Modbus TCP/IP data with Apache NiFi?

I have data coming in over Modbus TCP/IP, and I have to read it with Apache NiFi. I don't know which processor to use exactly (e.g. GetTCP, ListenTCP, Plc4xSourceProcessor). Can you help me with this? Is this feasible with Apache NiFi?
The Plc4xSourceProcessor is what you are looking for. The Apache PLC4X project provides drivers for accessing PLCs using various protocols. One of them is the Modbus protocol. So if you use the Plc4xSourceProcessor, configure a Modbus connection string, and list the addresses you want to collect, you will be able to do so.
I happen to have written the PLC4X-NiFi Integration documentation on our website just a couple of days ago: https://plc4x.apache.org/users/integrations/apache-nifi.html
I think this will be helpful.
Chris
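As an illustration, the processor configuration boils down to a PLC4X connection string plus one or more field addresses. The values below are hypothetical; the host, port, and exact address syntax must be checked against the PLC4X Modbus driver documentation:

```
PLC connection string : modbus-tcp://192.168.0.10:502
Field address (e.g.)  : holding-register:1
```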
I don't really know what Modbus TCP/IP is, but it basically comes down to whether you want NiFi to be a client or a server.
ListenTCP creates a TCP server that is waiting for some client to make a connection and start sending data. The most common case would be a log forwarding system like syslog which can be configured to forward logs to a host/port over TCP.
GetTCP is a client that connects to some host/port which is the server, and starts reading data.
Plc4xSourceProcessor is not part of the official Apache NiFi code, but from quickly looking at it, it seems like more of a client processor similar to GetTCP since you give it a connection string telling it where to connect to.

protobuf vs gRPC

I'm trying to understand protobuf and gRPC and how I can use both. Could you help me understand the following:
Considering the OSI model, what is where? For example, is protobuf at layer 4?
Thinking through a message transfer, what is the "flow"? What does gRPC do that protobuf misses?
If the sender uses protobuf, can the server use gRPC, or does gRPC add something that only a gRPC client can deliver?
If gRPC makes synchronous and asynchronous communication possible, protobuf is just for the marshalling and therefore does not have anything to do with state - true or false?
Can I use gRPC in a frontend application instead of REST or GraphQL?
I already know - or assume I do - that:
Protobuf
Binary protocol for data interchange
Designed by Google
Uses a generated "struct"-like description at client and server to un-/marshal messages
gRPC
Uses protobuf (v3)
Again from Google
Framework for RPC calls
Makes use of HTTP/2 as well
Synchronous and asynchronous communication possible
I assume it's an easy question for someone already using the technology. I would still thank you to be patient with me and help me out. I would also be really thankful for any network deep dive into these technologies.
Protocol buffers is (are?) an Interface Definition Language and serialization library:
You define your data structures in its IDL i.e. describe the data objects you want to use
It provides routines to translate your data objects to and from binary, e.g. for writing/reading data from disk
gRPC uses the same IDL but adds an "rpc" syntax which lets you define Remote Procedure Call method signatures using the Protobuf data structures as data types:
You define your data structures
You add your rpc method definitions
It provides code to serve up and call the method signatures over a network
You can still serialize the data objects manually with Protobuf if you need to
In answer to the questions:
gRPC works at layers 5, 6 and 7. Protobuf works at layer 6.
When you say "message transfer", Protobuf is not concerned with the transfer itself. It only works at either end of any data transfer, turning bytes into objects
Using gRPC by default means you are using Protobuf. You could write your own client that uses Protobuf but not gRPC to interoperate with gRPC, or plug other serializers into gRPC - but using gRPC would be easier
True
Yes you can
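To make the "turning bytes into objects at layer 6" point concrete: protobuf encodes scalar integers on the wire as base-128 varints (7 payload bits per byte, least significant group first, high bit meaning "more bytes follow"). A minimal sketch in pure Python, deliberately not using the protobuf library:

```python
def encode_varint(n: int) -> bytes:
    # Emit 7 bits at a time, least significant group first; set the high
    # bit on every byte except the last.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    # Accumulate 7-bit groups until a byte with a clear high bit appears.
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            break
        shift += 7
    return result

print(encode_varint(300).hex())  # ac02
```

This is only the innermost building block of the encoding; real protobuf messages additionally prefix each field with a tag (field number and wire type), which the generated code handles for you.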
Actually, gRPC and protobuf are two completely different things. Let me simplify:
gRPC manages the way a client and a server can interact (just like a web client/server with a REST API)
protobuf is just a serialization/deserialization tool (just like JSON)
gRPC has 2 sides: a server side, and a client side that is able to dial a server. The server exposes RPCs (i.e. functions that you can call remotely). And you have plenty of options there: you can secure the communication (using TLS), add an authentication layer (using interceptors), ...
You can use protobuf inside any program; it has no need to be client/server. If you need to exchange data, and want it to be strongly typed, protobuf is a nice option (fast & reliable).
That being said, you can combine both to build a nice client/server system: gRPC will be your client/server code, and protobuf your data protocol.
PS: I wrote this paper to show how one can build a client/server with gRPC and protobuf using Go, step by step.
gRPC is a framework built by Google, and it is used in production projects at Google itself; Hyperledger Fabric is built with gRPC, and there are many open-source applications built with gRPC.
Protobuf is a data representation, like JSON; it is also by Google. In fact, they have thousands of proto files generated in their production projects.
gRPC
gRPC is an open-source framework developed by Google
It allows us to create Requests & Responses for RPC and lets the framework handle the rest
REST is CRUD-oriented, but gRPC is API-oriented (no constraints)
Built on top of HTTP/2
Provides auth, load balancing, monitoring, logging
[HTTP/2]
HTTP/1.1 was released in 1997, a long time ago
HTTP/1 opens a new TCP connection to the server on each request
It doesn't compress headers
No server push; it just works with request/response
HTTP/2 was released in 2015 (it grew out of SPDY)
Supports multiplexing
Client & server can push messages in parallel over the same TCP connection
Greatly reduces latency
HTTP/2 supports header compression
HTTP/2 is binary
Protobuf is binary, so it is a great match for HTTP/2
[TYPES]
Unary
client streaming
server streaming
Bi-directional streaming
gRPC servers are async by default
gRPC clients can be sync or async
protobuf
Protocol buffers are language agnostic
Parsing protocol buffers (binary format) is less CPU-intensive
[Naming]
Use CamelCase for message names
underscore_separated for field names
Use CamelCase for enums and CAPITALS_WITH_UNDERSCORES for value names
[Comments]
Support //
Support /* */
[Advantages]
Data is fully typed
Serialized data is compact (less bandwidth usage)
A schema (message) is needed to generate code and read the data
Documentation can be embedded in the schema
Data can be read across any language
The schema can evolve at any time in a safe manner
Faster than XML
Code is generated for you automatically
Google invented Protobuf; they use 48,000 protobuf messages & 12,000 .proto files
Lots of RPC frameworks, including gRPC, use protocol buffers to exchange data
gRPC is an instantiation of the RPC integration style that is based on the protobuf serialization library.
There are five integration styles: RPC, File Transfer, MOM, Distributed Objects, and Shared Database.
RMI is another example of instantiation of RPC integration style. There are many others. MQ is an instantiation of MOM integration style. RabbitMQ as well. Oracle database schema is an instantiation of Shared Database integration style. CORBA is an instantiation of Distributed Objects integration style. And so on.
Avro is an example of another (binary) serialization library.
gRPC (google Remote Procedure Call) is a client-server structure.
Protocol buffers are a language-neutral, platform-neutral extensible mechanism for serializing structured data.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloResponse) {}
}
message HelloRequest {
  string myname = 1;
}
message HelloResponse {
  string responseMsg = 1;
}
Protocol buffers are used to exchange data between the gRPC client and the gRPC server; they are the protocol between them. The protocol buffer is implemented as a .proto file in a gRPC project. It defines the interface, e.g. the service provided by the server side, the message format between client and server, and the rpc methods used by the client to access the server.
Both the client and server sides have the same proto files. (One real example: the envoy xDS gRPC client-side proto files and server-side proto files.) It means that both the client and server know the interface, the message format, and the way the client accesses services on the server side.
The proto files (e.g. protocol buffer) will be compiled into real language.
The generated code contains both stub code for clients to use and an abstract interface for servers to implement, both with the method defined in the service.
A service defined in the proto file (e.g. protocol buffer) will be translated into an abstract class xxxxImplBase (e.g. the interface on the server side).
Creating a stub (e.g. via newStub(), or newBlockingStub() for synchronous calls in Java) is the way a client invokes a remote procedure call (e.g. rpc in the proto file).
And the methods which build request and response messages are also implemented in the generated files.
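To make "the proto files will be compiled into real language" concrete, the compilation step is an invocation of protoc; the output directories and the exact plugin depend on your language and toolchain, so treat these commands as a sketch:

```shell
# Generate plain protobuf message classes for Java:
protoc --java_out=./gen helloworld.proto

# Generate messages plus gRPC client/server stubs for Python
# (requires the grpcio-tools package):
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. helloworld.proto
```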
I re-implemented simple client and server-side samples based on samples in the official doc. cpp client, cpp server, java client, java server, springboot client, springboot server
Recommended Useful Docs:
cpp/helloworld/README.md#generating-grpc-code,
cpp/basics/#generating-client-and-server-code,
cpp/basics/#defining-the-service,
generated-code/#client-stubs,
a blocking/synchronous stub
StreamObserver
how-to-use-grpc-with-spring-boot
Others: core-concepts,
gRPC can use protocol buffers as both its Interface Definition Language (IDL) and as its underlying message interchange format
In its simplest form, gRPC is like a public vehicle: it will exchange data between client and server.
The protocol buffer is like your bus ticket: it decides where you should or shouldn't go.

Where is the Network Block Device format described?

What is the format of the network block device protocol? It is stated to be simple, but I can't find an RFC or similar document that describes what the client and server should send.
Found it myself. It looks like this is the document: https://github.com/yoe/nbd/blob/master/doc/proto.md . Not so simple...
Additionally there is simple Python-based server: http://lists.canonical.org/pipermail/kragen-hacks/2004-May/000397.html
Further adding to this, the exact communication format between the NBD client and NBD server can be briefly described as follows:
Client:
The client component of NBD configures a local block device such as /dev/nbdX. Requests submitted to this device are sent through a socket to the server side, which is implemented in userspace. The client can be configured with a userspace utility called nbd-client [1].
Server:
The server implements userspace handlers for the requests sent by the client. NBD can use Unix-domain sockets instead of network sockets to eliminate the overhead of connection management. Furthermore, multiple connections can be used to increase performance, due to the parallelism introduced in the server part [1]. High-performance server implementations with plugin support exist, such as nbdkit or nbd-server.
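To give a feel for the wire format, the transmission-phase request header can be packed and unpacked with fixed-width big-endian fields. The layout below (32-bit magic 0x25609513, 16-bit command flags, 16-bit type, 64-bit handle, 64-bit offset, 32-bit length) is my reading of proto.md, so verify it against that document before relying on it:

```python
import struct

NBD_REQUEST_MAGIC = 0x25609513
# magic(u32) | command flags(u16) | type(u16) | handle(u64) | offset(u64) | length(u32)
REQUEST_FMT = ">IHHQQI"  # 28 bytes, big-endian

def pack_request(flags: int, cmd_type: int, handle: int,
                 offset: int, length: int) -> bytes:
    return struct.pack(REQUEST_FMT, NBD_REQUEST_MAGIC, flags, cmd_type,
                       handle, offset, length)

def unpack_request(data: bytes):
    magic, flags, cmd_type, handle, offset, length = struct.unpack(REQUEST_FMT, data)
    if magic != NBD_REQUEST_MAGIC:
        raise ValueError("bad request magic: %#x" % magic)
    return flags, cmd_type, handle, offset, length

hdr = pack_request(0, 0, 1, 4096, 512)  # a read-style request (type 0)
assert len(hdr) == 28
```

The server replies with its own smaller header (magic, error, handle) followed by any data, which can be handled the same way.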
In addition to the earlier useful answer that mentions proto.md, below are some more useful resources that can help you understand the functions of the client and server in more detail.
References and Resources:
[1] BUSE: Block Device in Userspace
[2] The Network Block Device-Linux Journal
[3] BDUS: implementing block devices in user space
