Spring Integration and Reactive: trying to understand the constraints - servlets

We have a Spring Integration workflow with RESTful HTTP inbound and outbound calls. The workflow is mostly expressed as XML declarations of channels, chains, a splitter, and an aggregator.
In the Servlet realm, we use the http:inbound-gateway and http:outbound-gateway components for input/output to the internal workflow. This seems to work well using Spring Boot autoconfiguration for Tomcat/Jetty/Undertow.
We've been trying the Reactive realm, using the webflux:inbound-gateway and webflux:outbound-gateway components on the same internal workflow. This seems to work OK on Tomcat and Jetty, but we get no responses from Netty and some errors from Undertow. I have yet to discover why those last two configurations misbehave.
What I'm wondering is whether the same internal workflow can be hooked up to either reactive or servlet components without requiring changes. We do use a splitter/aggregator, and my reading of the Spring Integration documentation on WebFlux hasn't quite cleared up for me whether these constructs can be used in both realms. ( https://docs.spring.io/spring-integration/reference/html/reactive-streams.html#splitter-and-aggregator )
Any pointers on this subject?

The webflux:inbound-gateway is the server side of the HTTP protocol. It has to be used in a Reactive Streams HTTP server environment. I'm not sure about Undertow and Jetty, but Tomcat works in a simulating mode. I usually use io.projectreactor.netty:reactor-netty-http.
The webflux:outbound-gateway is the client side of the HTTP protocol. It is fully based on WebClient, and it doesn't matter in what environment it is used.
The same applies to the splitter and aggregator components: they don't require any server implementation and they don't expose any external ports, so there are no environment specifics to worry about. They can be used in a reactive stream definition just as in regular flows.
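Not from the original answer, but as a hedged illustration: a minimal Java DSL sketch of such a flow fronted by a WebFlux inbound gateway, with a splitter/aggregator pair in the middle. The /items path, the comma-splitting, and the uppercasing step are made-up placeholders, and Spring Integration 6.x with spring-integration-webflux on the classpath is assumed.

import java.util.Arrays;
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.http.HttpMethod;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.webflux.dsl.WebFlux;

@Bean
public IntegrationFlow reactiveFlow() {
    return IntegrationFlow
            // reactive HTTP entry point (the WebFlux counterpart of http:inbound-gateway)
            .from(WebFlux.inboundGateway("/items")
                    .requestMapping(m -> m.methods(HttpMethod.POST))
                    .requestPayloadType(String.class))
            // turn a comma-separated payload into a list, then one message per item
            .<String, List<String>>transform(p -> Arrays.asList(p.split(",")))
            .split()
            // per-item work; any handler or chain fits here
            .<String, String>transform(String::toUpperCase)
            // reassemble using the sequence headers that split() added
            .aggregate()
            .get();
}

The splitter/aggregator pair here is the same component pair the question uses in XML; nothing about it is tied to the servlet or the reactive front end.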

Related

Angular 6 - how to make a single HTTP request and listen to multiple responses?

My backend generates a log while processing some data, and I would like to show it as a console in my frontend.
How can I implement a method in Angular 6 that, over a single HTTP request, keeps listening to multiple responses until a certain parameter from the backend is true?
You can make use of WebSockets, i.e. open a WebSocket connection to your backend and receive data over it. This is a push mechanism, where the server pushes data onto the connection and the client receives it as new data becomes available.
It is not possible with a single HTTP request, because HTTP follows a pull mechanism: you only get the data that is available at request time; to get new data you have to perform another HTTP request.
Unfortunately, an HTTP request cannot remain open listening for multiple responses, once it receives a response it will close the connection.
Fortunately, you can use websockets.
Implementing websockets is not too difficult, and there are many tutorials for implementing with Angular such as this one: https://tutorialedge.net/typescript/angular/angular-websockets-tutorial/
I'm not sure what back end technology you're using, but most modern ones have websocket support.
If you're not familiar with websockets in general, check out this article: https://medium.com/@dominik.t/what-are-web-sockets-what-about-rest-apis-b9c15fd72aac
“WebSockets” is an advanced technology that allows real-time interactive communication between the client browser and a server. It uses a completely different protocol that allows bidirectional data flow, making it unique against HTTP.
The article also compares/contrasts it to HTTP, so it may give you a better understanding of HTTP as well.
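The backend technology isn't specified in the question, so purely as a hedged, neutral sketch: this is what consuming such a push channel looks like with the plain JDK 11+ WebSocket client (the ws://localhost:8080/logs URL is a made-up placeholder). One connection stays open and onText fires once per pushed message - the "multiple responses" a single HTTP request can't give you.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class LogListener {
    public static void main(String[] args) throws InterruptedException {
        WebSocket.Listener listener = new WebSocket.Listener() {
            @Override
            public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
                System.out.println("log line: " + data); // append to your console view
                ws.request(1);                           // ask for the next pushed message
                return null;
            }
        };
        HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("ws://localhost:8080/logs"), listener)
                .join();
        Thread.sleep(60_000); // keep the demo alive while messages arrive
    }
}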

protobuf vs gRPC

I'm trying to understand protobuf and gRPC and how I can use both. Could you help me understand the following:
Considering the OSI model, what sits where? For example, is Protobuf at layer 4?
Thinking through a message transfer, what is the "flow", and what does gRPC do that protobuf misses?
If the sender uses protobuf, can the server use gRPC, or does gRPC add something that only a gRPC client can deliver?
If gRPC makes synchronous and asynchronous communication possible, is Protobuf just for the marshalling and therefore has nothing to do with state - true or false?
Can I use gRPC in a frontend application instead of REST or GraphQL?
I already know - or assume I do - that:
Protobuf
Binary protocol for data interchange
Designed by Google
Uses a generated "struct"-like description at client and server to marshal/unmarshal messages
gRPC
Uses protobuf (v3)
Again from Google
Framework for RPC calls
Makes use of HTTP/2 as well
Synchronous and asynchronous communication possible
I assume it's an easy question for someone already using the technology. I would still ask you to be patient with me and help me out. I would also be really thankful for any network deep dive into these technologies.
Protocol buffers is (are?) an Interface Definition Language and serialization library:
You define your data structures in its IDL i.e. describe the data objects you want to use
It provides routines to translate your data objects to and from binary, e.g. for writing/reading data from disk
gRPC uses the same IDL but adds the "rpc" syntax, which lets you define Remote Procedure Call method signatures using the Protobuf data structures as data types:
You define your data structures
You add your rpc method definitions
It provides code to serve up and call the method signatures over a network
You can still serialize the data objects manually with Protobuf if you need to
In answer to the questions:
gRPC works at layers 5, 6 and 7. Protobuf works at layer 6.
When you say "message transfer", Protobuf is not concerned with the transfer itself. It only works at either end of any data transfer, turning bytes into objects
Using gRPC by default means you are using Protobuf. You could write your own client that uses Protobuf but not gRPC to interoperate with gRPC, or plug other serializers into gRPC - but using gRPC would be easier
True
Yes you can
Actually, gRPC and Protobuf are 2 completely different things. Let me simplify:
gRPC manages the way a client and a server can interact (just like a web client/server with a REST API)
protobuf is just a serialization/deserialization tool (just like JSON)
gRPC has 2 sides: a server side, and a client side that is able to dial a server. The server exposes RPCs (i.e. functions that you can call remotely). And you have plenty of options there: you can secure the communication (using TLS), add an authentication layer (using interceptors), ...
You can use protobuf inside any program; it has no need to be client/server. If you need to exchange data, and want it to be strongly typed, protobuf is a nice option (fast & reliable).
That being said, you can combine both to build a nice client/server system: gRPC will be your client/server code, and protobuf your data protocol.
PS: I wrote this paper to show how one can build a client/server with gRPC and protobuf using Go, step by step.
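To make that separation concrete, here is a minimal Java sketch (my illustration, not the answerer's) of using protobuf with no gRPC at all. It assumes the HelloRequest message from the .proto file quoted further down this thread has been compiled with protoc:

import com.google.protobuf.InvalidProtocolBufferException;

public class ProtoOnly {
    public static void main(String[] args) throws InvalidProtocolBufferException {
        HelloRequest request = HelloRequest.newBuilder()
                .setMyname("Alice")
                .build();

        byte[] wire = request.toByteArray();                // serialize: object -> bytes
        HelloRequest parsed = HelloRequest.parseFrom(wire); // deserialize: bytes -> object

        System.out.println(parsed.getMyname());             // prints "Alice"
    }
}

The bytes could just as well be written to disk or sent over any transport; gRPC is only one of the ways to move them.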
gRPC is a framework built by Google, and it is used in production projects at Google itself; Hyperledger Fabric is built with gRPC, and there are many open-source applications built with gRPC.
Protobuf is a data representation, like JSON; it is also by Google. In fact, they have many thousands of .proto files generated in their production projects.
gRPC
gRPC is an open-source framework developed by Google
It lets us define the request & response for an RPC and have the framework handle the rest
REST is CRUD-oriented, but gRPC is API-oriented (no constraints)
Built on top of HTTP/2
Provides auth, load balancing, monitoring, and logging
[HTTP/2]
HTTP/1.1 was released in 1997, a long time ago
HTTP/1 opens a new TCP connection to the server on each request
It doesn't compress headers
No server push; it just works with request/response
HTTP/2 was released in 2015 (based on SPDY)
Supports multiplexing
Client & server can push messages in parallel over the same TCP connection
Greatly reduces latency
HTTP/2 supports header compression
HTTP/2 is binary
Protobuf is binary, so it is a great match for HTTP/2
[TYPES]
Unary
Client streaming
Server streaming
Bidirectional streaming
gRPC servers are async by default
gRPC clients can be sync or async (see the sketch below)
Protobuf
Protocol buffers are language-agnostic
Parsing protocol buffers (binary format) is less CPU-intensive
[Naming]
Use CamelCase for message names
underscore_separated for field names
Use CamelCase for enum names and CAPITALS_WITH_UNDERSCORES for enum value names
[Comments]
Supports //
Supports /* */
[Advantages]
Data is fully typed
Data is fully compressed (less bandwidth usage)
A schema (message) is needed to generate code and to read the data
Documentation can be embedded in the schema
Data can be read across any language
The schema can evolve at any time in a safe manner
Faster than XML
Code is generated for you automatically
Google invented Protobuf; they use 48,000 protobuf messages & 12,000 .proto files
Lots of RPC frameworks, including gRPC, use protocol buffers to exchange data
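As a hedged sketch of the sync/async point (mine, not the answerer's): with grpc-java, the Greeter service from the .proto shown a bit further down this page generates both newBlockingStub() (synchronous) and newStub() (asynchronous). A minimal async call, with host and port made up:

import java.util.concurrent.TimeUnit;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

public class AsyncClientDemo {
    public static void main(String[] args) throws InterruptedException {
        ManagedChannel channel =
                ManagedChannelBuilder.forAddress("localhost", 50051).usePlaintext().build();

        // newStub() = async: the call returns immediately and the
        // StreamObserver callbacks fire when the server responds.
        GreeterGrpc.GreeterStub asyncStub = GreeterGrpc.newStub(channel);
        asyncStub.sayHello(
                HelloRequest.newBuilder().setMyname("Alice").build(),
                new StreamObserver<HelloResponse>() {
                    @Override public void onNext(HelloResponse resp) {
                        System.out.println(resp.getResponseMsg());
                    }
                    @Override public void onError(Throwable t) { t.printStackTrace(); }
                    @Override public void onCompleted() { channel.shutdown(); }
                });

        channel.awaitTermination(5, TimeUnit.SECONDS);
    }
}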
gRPC is an instantiation of the RPC integration style that is based on the protobuf serialization library.
There are five integration styles: RPC, File Transfer, MOM, Distributed Objects, and Shared Database.
RMI is another instantiation of the RPC integration style; there are many others. MQ is an instantiation of the MOM integration style, and so is RabbitMQ. An Oracle database schema is an instantiation of the Shared Database integration style. CORBA is an instantiation of the Distributed Objects integration style. And so on.
Avro is an example of another (binary) serialization library.
gRPC (Google Remote Procedure Call) is a client-server structure.
Protocol buffers are a language-neutral, platform-neutral, extensible mechanism for serializing structured data.
syntax = "proto3";

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloResponse) {}
}

message HelloRequest {
  string myname = 1;
}

message HelloResponse {
  string responseMsg = 1;
}
The protocol buffer is used to exchange data between the gRPC client and the gRPC server; it is the protocol between them. The protocol buffer is implemented as a .proto file in a gRPC project. It defines the interface, i.e. the service provided by the server side, the message format between client and server, and the rpc methods the client uses to access the server.
Both the client and server sides have the same proto files. (One real example: envoy xds grpc client side proto files, server side proto files.) This means that both the client and the server know the interface, the message format, and the way the client accesses services on the server side.
The proto files (i.e. the protocol buffers) are compiled into a real language.
The generated code contains both stub code for clients to use and an abstract interface for servers to implement, both with the methods defined in the service.
A service defined in the proto file (i.e. the protocol buffer) is translated into an abstract class xxxxImplBase (i.e. the interface on the server side).
newBlockingStub() creates a synchronous stub and is one way to invoke a remote procedure call (i.e. an rpc in the proto file); newStub() creates an asynchronous one.
The methods which build request and response messages are also implemented in the generated files.
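Putting the generated pieces together, here is a minimal runnable sketch (mine, based on the Greeter proto above; the port and greeting text are arbitrary): the server extends the generated GreeterGrpc.GreeterImplBase, and the client calls it through a blocking stub.

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.stub.StreamObserver;

public class GreeterDemo {
    // Server side: implement the generated abstract class (the "xxxxImplBase").
    static class GreeterImpl extends GreeterGrpc.GreeterImplBase {
        @Override
        public void sayHello(HelloRequest req, StreamObserver<HelloResponse> responseObserver) {
            HelloResponse reply = HelloResponse.newBuilder()
                    .setResponseMsg("Hello, " + req.getMyname())
                    .build();
            responseObserver.onNext(reply);  // send the single unary response
            responseObserver.onCompleted();  // close the call
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(50051).addService(new GreeterImpl()).build().start();

        // Client side: a blocking/synchronous stub generated from the same proto.
        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 50051)
                .usePlaintext() // no TLS, for the demo only
                .build();
        GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);
        HelloResponse response = stub.sayHello(HelloRequest.newBuilder().setMyname("Alice").build());
        System.out.println(response.getResponseMsg());

        channel.shutdownNow();
        server.shutdownNow();
    }
}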
I re-implemented simple client and server-side samples based on the samples in the official docs: cpp client, cpp server, java client, java server, springboot client, springboot server
Recommended useful docs:
cpp/helloworld/README.md#generating-grpc-code
cpp/basics/#generating-client-and-server-code
cpp/basics/#defining-the-service
generated-code/#client-stubs
a blocking/synchronous stub
StreamObserver
how-to-use-grpc-with-spring-boot
Others: core-concepts
gRPC can use protocol buffers as both its Interface Definition Language (IDL) and as its underlying message interchange format
In its simplest form, gRPC is like a public vehicle: it exchanges data between client and server.
The protocol buffer is the protocol, like your bus ticket: it decides where you may or may not go.

HTTP Client in DoFn

I would like to make POST requests through a DoFn for an Apache Beam pipeline running on Dataflow.
For that, I have created a client which instantiates a CloseableHttpClient configured on a PoolingHttpClientConnectionManager.
However, I currently instantiate a client for each element that I process.
How could I set up a persistent client shared by all my elements?
And is there another class for parallel and high-speed HTTP requests that I should use instead?
You can put the client into a member variable, use the @Setup method to open it, and @Teardown to close it. The implementation of almost all IOs in Beam uses this pattern, e.g. see JdbcIO.
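A minimal sketch of that pattern (the endpoint URL is a placeholder; Apache HttpClient 4.x is assumed, as in the question):

import java.io.IOException;

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class PostFn extends DoFn<String, String> {

    // transient: the client is created per worker in @Setup, never serialized
    private transient CloseableHttpClient client;

    @Setup
    public void setup() {
        client = HttpClients.custom()
                .setConnectionManager(new PoolingHttpClientConnectionManager())
                .build();
    }

    @ProcessElement
    public void processElement(ProcessContext c) throws IOException {
        HttpPost post = new HttpPost("https://example.com/api"); // placeholder endpoint
        post.setEntity(new StringEntity(c.element()));
        try (CloseableHttpResponse response = client.execute(post)) {
            c.output(EntityUtils.toString(response.getEntity()));
        }
    }

    @Teardown
    public void teardown() throws IOException {
        if (client != null) {
            client.close();
        }
    }
}

The pooled client is reused across all elements processed by that DoFn instance, instead of being rebuilt per element.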

Understanding websockets in terms of REST and Server vs Client Events

For a while now I have been implementing a RESTful API in the design of my project, because in my case it is very useful for others to be able to interact with the data in a consistent format (and I find REST to be a clean way of handling requests). I am now trying to not only keep my current REST API for my resources, but also expose some pieces of information via a bidirectional WebSocket connection.
Upon searching for a good .NET library that implements the WebSocket protocol, I found out about SignalR. There were a few problems I had with it (maybe specific to my project?):
I want to be able to initialize a WebSocket connection through my existing REST API. (I don't know the proper practice for doing this, but I figured a custom header would work fine.) I would like the client to be able to close the connection and get an HTTP response back (101?) to signify its completion.
The problems I had with SignalR were:
that there was no clean way outside of a hub instance to get a user's connection id and map it to an external controller, where the REST call made determines which piece of data gets broadcast to the specific client (I don't want to use external memory)
the huge reliance on client-side code. I really want to make this process as simple as possible for the client and handle the majority of the work on the server side (which I had hoped modifying my current REST API would accomplish). The only responsibility I see for the client is to disconnect peacefully.
So now the question..
Is there a good server-side WebSocket library for .NET that implements the latest WebSocket protocol? The client can use any client library that adheres to the protocol. What is the best practice for incorporating both WebSocket connections and a RESTful API?
ASP.NET supports WebSockets itself if you have IIS8 (only Windows 8/2012 and later). SignalR is just a polyfill.
If you do not have IIS8, you can use external WebSocket frameworks like mine: http://vtortola.github.io/WebSocketListener/
Cheers.

How to intercept and modify HTTP responses on the server side?

I am working with a client/server application which uses HTTP, and my goal is to add new features to it. I can extend the client by hooking my own code to some specific events, but unfortunately the server is not customizable. Both client and server are in a Windows environment.
My current problem is that performance is awful when a lot of data is received from the server: it takes time to transmit and time to process. The solution could be to have an application on the server side do the processing and send only the result (which is much smaller). The problem is that there are no built-in functions for manipulating responses from the server before they are sent.
I was thinking of listening to all traffic on port 80, identifying relevant HTTP responses, and sending them to my application while blocking the original response (to avoid sending a huge volume of data which won't be processed by the client). As I am lacking a lot of network knowledge, I am a bit lost when thinking about how to do it.
I had a look at some low-level packet-intercepting methods like WinPcap, but it seems to require a lot of work to do what I need. Moreover, I think it is not possible to block or modify responses with that API.
A reverse proxy which allows user scripts to be triggered by specific requests or responses would be perfect, but I am wondering if there is a simpler way to do this intercept-and-redirect work.
What would be the simplest and cleanest method to enable this behavior?
Thanks!
I ended up making a simple reverse proxy to access the HTTP server. The reverse proxy extracts the relevant information from the server response and sends it to the server-side processing component, replacing the extracted information in the response with an ID that the client can use to ask that component for the processing results.
The article at http://www.codeproject.com/KB/web-security/HTTPReverseProxy.aspx was very helpful for making the first draft of the reverse proxy.
Hmm... too many choices.
2 ideas:
configure an HTTP proxy on all clients; there are some out there that let you manipulate what goes through in both directions (with scripts, plugins)
or
make a pass-through project that listens on port 80 and forwards the needed traffic to port 8080 (where your original server app runs) - see the sketch below
The question is what software the server app is running on, and what (dev) knowledge you have.
Ah, and what is "huge data"? Kilobytes? Megabytes? Gigabytes?
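A rough Java sketch of the second idea - a plain TCP pass-through. Everything here is illustrative, including the ports; a real solution would parse the HTTP stream at the marked spot to inspect or rewrite responses, and would manage sockets and errors far more carefully:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class PassThrough {
    public static void main(String[] args) throws IOException {
        // ports below 1024 may need elevated privileges on most systems
        try (ServerSocket listener = new ServerSocket(80)) {
            while (true) {
                Socket client = listener.accept();
                Socket server = new Socket("localhost", 8080);
                pump(client, server); // client -> server (requests)
                pump(server, client); // server -> client (responses)
            }
        }
    }

    private static void pump(Socket from, Socket to) {
        new Thread(() -> {
            try (InputStream in = from.getInputStream(); OutputStream out = to.getOutputStream()) {
                // relay bytes untouched; hook HTTP parsing/modification here
                in.transferTo(out);
            } catch (IOException ignored) {
                // connection closed; a real proxy would clean up both sockets
            }
        }).start();
    }
}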
