Spring Integration. TCP Server Factory

There is probably an easy way to do this.
What I want to do is:
I have a TCP server which listens for incoming connections.
I would like to be informed somehow when a client connects.
TcpNetServerConnectionFactory has this information internally ("Accepted connection ...").
There is a TcpConnectionSupport class, but I cannot find a way to use it. I am looking for something similar to the subscriber pattern.
Is there some way to do this?

On the one hand, it isn't clear how you want to implement the subscriber pattern, since this is a Spring Integration out-of-the-box feature with <int-ip:tcp-inbound-channel-adapter connection-factory="connectionFactory"/>. When a new connection from a client is established and the client starts to send data, that component will be ready to receive it and convert it to a message sent to the channel for the rest of the integration flow.
On the other hand, there is the ApplicationEvent infrastructure: when a connection is opened, the TcpNetServerConnectionFactory emits a TcpConnectionOpenEvent. You can listen for this event using <int-event:inbound-channel-adapter event-types="org.springframework.integration.ip.tcp.connection.TcpConnectionOpenEvent"/>.
And again: it will be a message flow.
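If you would rather consume the event in plain Java than through the event channel adapter, a minimal sketch could be an ApplicationListener bean; the class name and log message below are illustrative assumptions, not part of the original answer:

```java
import org.springframework.context.ApplicationListener;
import org.springframework.integration.ip.tcp.connection.TcpConnectionOpenEvent;
import org.springframework.stereotype.Component;

// Illustrative listener bean; the name and message are assumptions.
@Component
public class ClientConnectedListener implements ApplicationListener<TcpConnectionOpenEvent> {

    @Override
    public void onApplicationEvent(TcpConnectionOpenEvent event) {
        // The event carries the id of the connection that was just accepted.
        System.out.println("Client connected: " + event.getConnectionId());
    }
}
```

Either way you end up on the same event infrastructure; the XML adapter shown above is simply the message-flow flavour of the same idea.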

Related

BizTalk - how to subscribe to two-way send ports response, but access data from the request

I need to submit something to a web service, then send something over MLLP using the HL7 MLLP adapter, and that message needs to contain both something returned by the service and something that was sent to the service. I'd like to use a pure messaging solution if possible, not an orchestration.
So basically I have two send ports. The second needs to subscribe to the response of the first, which means that its message will be the first send port's response.
The trick is I also need some data from the first send port's request message. Is it possible to get that somehow?
The correct way to do this is to use an Orchestration.
There is nothing wrong with using an Orchestration, and Orchestrations exist exactly for this purpose.
If someone is telling you Orchestrations are not right or you've read that somewhere...they're wrong. That's it. If you're having problems using Orchestrations...telling you straight up, you're doing it wrong.
In an Orchestration, you can probably use a Map to merge the content into the service response. Exactly the use case it's meant for.
Here's a starter Suspend/Resume pattern: BizTalk Server: Suspend and Resume an Orchestration on Two Way Port Error
You have no control over this in a messaging-only solution.

AMQP, RabbitMQ: how does the push API work?

I'm trying to get a deep understanding of how the push API communication between the client and the RabbitMQ server works.
As far as I know - but correct me if I'm wrong - the client opens a TCP connection to the broker (RabbitMQ) and keeps this connection alive until the client decides to close it. During this connection the client can get messages immediately.
My question is: during this connection, does the client poll the broker asking for messages, or when the broker routes a message to the queue the client is subscribed to, does it just take that connection and push the data to the client?
First case: the client polls the broker for messages.
Second case: the client doesn't need to poll the broker; the broker just pushes the data.
Or something else?
There are two options for receiving messages:
The client registers a consumer callback (basicConsume) on the channel; the broker then "pushes" messages to the consumer.
The client sends the broker a basicGet and receives one message (if present).
The first use case is the most common.
Since you tagged the question with spring-amqp I assume you are interested in Spring. For the first case, Spring AMQP has a listener container (and the @RabbitListener annotation); for the second case, one of the RabbitTemplate receive operations can be used.
I suggest you look at the tutorials to get a basic understanding. They cover several languages including pure java and Spring AMQP.
You can also look at the Spring AMQP Reference Manual.
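To make the two styles concrete, here is a minimal Spring AMQP sketch; the queue name "myQueue" and the class name are assumptions for illustration:

```java
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

// Illustrative sketch; "myQueue" is an assumed queue name.
@Component
public class QueueConsumers {

    // Push style: the listener container registers a consumer (basicConsume)
    // and the broker delivers messages to this method as they arrive.
    @RabbitListener(queues = "myQueue")
    public void onMessage(String payload) {
        System.out.println("Pushed by the broker: " + payload);
    }

    // Pull style: a basicGet under the covers; returns null if the queue is empty.
    public Message pollOnce(RabbitTemplate template) {
        return template.receive("myQueue");
    }
}
```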

Best way to have a Netty client attempt reconnection if the remote peer disconnects?

I've implemented Netty client code that connects to a server. If the server closes the connection via a disconnect, we want the client to continually try to reconnect.
Is handling the TCP disconnect through the channelInactive callback the best way to do this?
Also, channelInactive will not handle a TCP timeout, correct?
channelInactive, or adding a ChannelFutureListener to Channel.closeFuture(), is the right way to handle this. Just be aware you cannot "reconnect" an existing Channel; you need to bootstrap a new one.
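A minimal sketch of that idea, assuming a fully configured client Bootstrap is passed in; the class name, host/port fields and the 5-second back-off are illustrative, not part of the original answer:

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.concurrent.TimeUnit;

// Illustrative handler; it must be (re)added to every freshly bootstrapped pipeline.
public class ReconnectHandler extends ChannelInboundHandlerAdapter {

    private final Bootstrap bootstrap; // fully configured client bootstrap
    private final String host;
    private final int port;

    public ReconnectHandler(Bootstrap bootstrap, String host, int port) {
        this.bootstrap = bootstrap;
        this.host = host;
        this.port = port;
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        // The closed Channel cannot be reused; schedule a fresh connect attempt
        // on the event loop after a short back-off.
        ctx.channel().eventLoop().schedule(
                () -> bootstrap.connect(host, port),
                5, TimeUnit.SECONDS);
        ctx.fireChannelInactive();
    }
}
```

A failed connect attempt does not fire channelInactive, so in practice you would also add a listener to the ChannelFuture returned by connect() and retry from there.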

Rebus HTTP gateway and MSMQ health state

Let's say we have
Client node with HTTP gateway outbound service
Server node with HTTP gateway inbound service
Consider a situation where MSMQ itself stops for some reason on the client node. In the current implementation, the Rebus HTTP gateway will catch the exception.
What do you think about the idea that, instead of just being caught, the MessageQueueException could also be sent to the server node and put on the error queue? (The name of the error queue could be gathered from the headers.)
That way, without additional infrastructure, the server would know that the client has a problem and someone could react.
UPDATE:
I guessed the problems described in the answer would be raised. I should have explained my scenario in more depth :) Sorry about that. Here it is:
I'm going to modify the HTTP gateway so that the InboundService can do both - send and receive messages. The OutboundService would then be the only one that initiates the connection (periodically, e.g. once per 5 minutes) in order to get new messages from the server and send its own messages to the server. That is because the client node is not considered a server, but one of many clients behind NAT.
Indeed, the server itself is not interested in client health, but I thought that instead of creating a separate alerting service on the client side which would reuse the HTTP gateway code, the HTTP gateway itself could do this, since it's very much the HTTP gateway's business to have both sides running.
What if the client can't reach the server at all?
Since MSMQ would be dead, I thought about using a standalone in-process persistent queue like this one: http://ayende.com/blog/4540/building-a-managed-persistent-transactional-queue
(just an example implementation; I'm not sure what kind of license it has)
to aggregate exceptions on the client side until the server is reachable.
And how often will the client notify the server that it has experienced an error?
I'm not sure about that part - I thought it could be tied to the scheduled time of message synchronization, like once per 5 minutes, but what if there is no scheduled time, just like in the current implementation (a while(true) loop)? Maybe it could just be set by config?
I like to have a consistent strategy about handling errors which usually involves plain old NLog logging
Since the client nodes will be on the Internet behind NAT, standard monitoring techniques won't work. I thought about using a queue as the NLog transport, but since MSMQ would be dead, it wouldn't work.
I also thought about using HTTP as the NLog transport, but on the server side it would require a queue (not strictly, but I would like to store it in a queue), so we are back to sbus and the HTTP gateway... that kind of NLog transport would be a de facto clone of the HTTP gateway.
UPDATE2: HTTP as an NLog transport (by transport I mean target) would also require a client-side queue like the one I described in the "What if the client can't reach the server at all?" section. It would be a clone of the HTTP gateway embedded into NLog. Madness :)
The thing is that the client is unreliable, so I want to have all the information about the client on the server side and log it there.
UPDATE3
An alternative solution could be creating a separate service which would nevertheless be part of the HTTP gateway (e.g. an OutboundAlertService). Then three goals would be fulfilled:
shared sending-loop code
no additional server infrastructure required
no negative impact on the OutboundService (no complexity from adding an in-process queue to it)
It wouldn't take exceptions from the OutboundService; instead it would check MSMQ periodically itself.
Yet another alternative would be simply using a queue other than MSMQ as the NLog target, but that's ugly overkill.
Regarding your scenario, my initial thought is that it should never be the server's problem that a client has a problem, so I probably wouldn't send a message to the server when the client fails.
As I see it, there would be multiple problems/obstacles/challenges with that approach because, e.g., what if the client can't reach the server at all? And how often will the client notify the server that it has experienced an error?
Of course I don't know the details of your setup, so it's hard to give specific advice, but in general I like to have a consistent strategy for handling errors, which usually involves plain old NLog logging and configuring the WARN and ERROR levels to go to the Windows Event Log.
This allows for setting up various tools (e.g. System Center Operations Manager or similar) to monitor all of your machines' event logs and raise error flags when something goes wrong.
I hope I've said something you can use :)
UPDATE
After thinking about it some more, I think I'm beginning to understand your problem, and I would prefer a solution where the client lets the HTTP listener at the other end know that it's having a problem, and then that HTTP listener could (maybe?) log it as an error.
Another option is that the HTTP listener in the other end could have an event, ReceivedClientError or something, that one could attach to and then do whatever is right in the given situation.
In your case, you might put a message in an error queue. I would just avoid putting anything in the error queue as a general solution because I think it confuses the purpose of the error queue - the "thing" in the error queue wouldn't be a message, and as such it would not be retryable etc.

Ensure proper order of spring-integration events

We are using spring-integration for TCP communication, and see behaviour where a TcpConnectionCloseEvent is received just before a message on that connection.
This is a problem because we are using the TCP events to keep track of connections, etc. and it makes for much more complex scenarios when we need to accept messages on connections that we consider closed.
The same is the case the other way around - sometimes we receive a message for a connection that we do not yet know has been opened.
Is there any way to ensure the correct order of these events, even though they are asynchronous in nature?
(Thanks for the great answers here on stackoverflow, Gary).
Hmmm...
On the server side, the open event is published by the thread that accepts the new connection rather than by the connection itself. While we could possibly do something there, it still wouldn't be foolproof when using NIO, because the threading model there is much more complex, and there would be no way to guarantee the order even if the connection itself published the event.
To be honest, we didn't anticipate the events being used in this way - the primary driver (for the open events) was to allow an application to detect a new connection without the client actually sending anything (just connecting), allowing a server-side application to accept a new connection and get a handle to the connection id so it can send a welcome message.
One workaround might be to use an event inbound channel adapter and a <delayer/> to delay the event delivery to your application (in the case of the close).
I don't really have a good solution for the late delivery of the open event; perhaps just treat an inbound message for a "new" connection as an "open" event (e.g. publish your own open event when you detect this condition on the thread that's processing the message, and ignore the "real" event).
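A minimal sketch of that last workaround, assuming a service activator sitting on the inbound message channel; the channel names, class name and log message are assumptions, not part of the original answer:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.ip.IpHeaders;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

// Illustrative sketch; "tcpInbound" and "downstream" are assumed channel names.
@Component
public class ConnectionTracker {

    private final Set<String> knownConnections = ConcurrentHashMap.newKeySet();

    @ServiceActivator(inputChannel = "tcpInbound", outputChannel = "downstream")
    public Message<?> track(Message<?> message) {
        String connectionId = (String) message.getHeaders().get(IpHeaders.CONNECTION_ID);
        if (connectionId != null && knownConnections.add(connectionId)) {
            // A message arrived before (or instead of) the TcpConnectionOpenEvent,
            // so treat it as the "open" signal and ignore the real event later.
            onNewConnection(connectionId);
        }
        return message; // pass the message downstream unchanged
    }

    private void onNewConnection(String connectionId) {
        System.out.println("Connection considered open: " + connectionId);
    }
}
```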
