Setting TCP_KEEPALIVE_THRESHOLD in Netty on Solaris

As described in the Oracle documentation below, Solaris supports setting TCP_KEEPALIVE_THRESHOLD and TCP_KEEPALIVE_ABORT_THRESHOLD per socket:
https://docs.oracle.com/cd/E19120-01/open.solaris/819-2724/fsvdh/index.html
We are using Netty to set SO_KEEPALIVE to true, and we change the interval system-wide in the OS:
ndd -set /dev/tcp tcp_keepalive_interval 1440000
Is there any way in Netty to set the keepalive wait/abort interval per socket? If not, is there an interface or native method we can use for this?

From the Netty documentation for ServerBootstrap.option():
Allow to specify a ChannelOption which is used for the Channel
instances once they got created. Use a value of null to remove a
previously set ChannelOption
Another solution that should work is to take the ServerBootstrap object and set the option to false using:
...
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, false)
.handler(new LoggingHandler(LogLevel.INFO))
...
It should work in Netty 4 and 5. Hope it helps :)
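Beyond a plain boolean SO_KEEPALIVE, JDK 11+ exposes keep-alive timing as jdk.net.ExtendedSocketOptions.TCP_KEEPIDLE (seconds before the first probe). Whether the JDK maps this to Solaris's TCP_KEEPALIVE_THRESHOLD under the hood is platform-specific, so the sketch below probes supportedOptions() first. This is a minimal sketch in plain JDK code (no Netty dependency), assuming JDK 11+; the 1440-second value mirrors the 1440000 ms from the ndd example above:

```java
import java.io.IOException;
import java.net.SocketOption;
import java.net.StandardSocketOptions;
import java.nio.channels.SocketChannel;

import jdk.net.ExtendedSocketOptions;

public class KeepAliveTuning {

    /**
     * Enables SO_KEEPALIVE and, where the JDK/OS expose it, sets the
     * keep-alive idle time (seconds before the first probe is sent).
     * Returns true if the per-socket idle time could be applied.
     */
    public static boolean tuneKeepAlive(SocketChannel ch, int idleSeconds) throws IOException {
        ch.setOption(StandardSocketOptions.SO_KEEPALIVE, true);
        SocketOption<Integer> keepIdle = ExtendedSocketOptions.TCP_KEEPIDLE;
        if (ch.supportedOptions().contains(keepIdle)) {
            ch.setOption(keepIdle, idleSeconds);
            return true;
        }
        return false; // this platform does not expose per-socket keep-alive timing
    }

    public static void main(String[] args) throws IOException {
        try (SocketChannel ch = SocketChannel.open()) {
            boolean tuned = tuneKeepAlive(ch, 1440); // 24 minutes, as in the ndd example
            System.out.println("keep-alive on: " + ch.getOption(StandardSocketOptions.SO_KEEPALIVE)
                    + ", idle time tuned: " + tuned);
        }
    }
}
```

If the per-socket option is supported, Netty 4.1+ can (as far as I know) pass the same JDK option through the bootstrap via NioChannelOption.of(ExtendedSocketOptions.TCP_KEEPIDLE) as a childOption; otherwise a JNI call to setsockopt() with TCP_KEEPALIVE_THRESHOLD remains the fallback.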

Related

How to manually acknowledge on the listener?

I'm creating an application that reads from some source cluster/topic then does some processing on the message and finally writes to a destination cluster/topic. I would like to only move the offset when the message passes the processing phase. Do I just need to set spring.kafka.consumer.enable-auto-commit=false? Or do I need to also implement an AcknowledgingMessageListener or a ConsumerAwareMessageListener and do either consumer.commitSync() or ack.acknowledge()?
I created a plain MessageListener.
Yes, you need at least an AcknowledgingMessageListener, and you must set the container's AckMode to MANUAL or MANUAL_IMMEDIATE.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#committing-offsets
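To make this concrete, here is a minimal sketch assuming spring-kafka: the consumer is configured with spring.kafka.consumer.enable-auto-commit=false, the container factory is switched to AckMode.MANUAL, and the listener acknowledges only after processing succeeds. The topic name "source-topic" and the process(...) helper are placeholders, not part of the original question:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.support.Acknowledgment;

public class ManualAckListener {

    // Container setup: with enable-auto-commit=false in the consumer config,
    // switch the listener container to manual acknowledgment.
    public static void configure(ConcurrentKafkaListenerContainerFactory<String, String> factory) {
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    }

    // The offset moves only after process(...) returns normally; if it
    // throws, acknowledge() is skipped and the record is redelivered.
    @KafkaListener(topics = "source-topic")
    public void onMessage(ConsumerRecord<String, String> record, Acknowledgment ack) {
        process(record.value()); // your processing + write to the destination topic
        ack.acknowledge();       // commit the offset only now
    }

    private void process(String value) {
        // hypothetical processing step
    }
}
```

With MANUAL, acknowledgments are batched and committed at the end of the poll cycle; MANUAL_IMMEDIATE commits as soon as acknowledge() is called.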

How to turn off reasoning in the Grakn python client

I am using the Grakn python client and I want to query for data without reasoning turned on.
client = GraknClient(uri=uri)
session = client.session(keyspace=keyspace)
tx = session.transaction().read()
Do I pass an argument in the transaction() method?
You can turn reasoning off for a specific query by passing the infer=False parameter, like this:
transaction.execute(query, infer=False, explain=False, batch_size=50)
Check out the documentation http://dev.grakn.ai/docs/client-api/python#lazily-execute-a-graql-query

Using Winsock (TCP/IP) functions in the ATEasy development environment

I am using the WsReceive() function of the ATEasy framework and wanted to ask: what is the meaning of the values aioDefault and aioDisableWsReceiveEarlyReturn of the enMode parameter?
I found this in the ATEasy documentation:
If enMode, the input receive mode, includes aioDisableWsReceiveEarlyReturn,
it prevents WsReceive from an "early return" when there is a momentary
interruption in the data being received.
And this from the ATEasy online help (via a tip from an expert on the ATEasy forum):
If sEos parameter is an empty string and aioDisableWsReceiveEarlyReturn mode flag is not used (default case), the function will return immediately if characters are found in the input buffer, and the timeout will be ignored. Using the aioDisableWsReceiveEarlyReturn flag will ensure that the function will return only if the timeout is reached or all lBytes characters were received.

Twain always returns TWRC_NOTDSEVENT

I use TWAIN 2.3 (TWAINDSM.DLL) in my application with an HP Scanjet 200 (TWAIN Protocol 1.9).
My TWAIN calls are:
OpenDSM: DG_CONTROL, DAT_PARENT, MSG_OPENDSM
OpenDS: DG_CONTROL, DAT_IDENTITY, MSG_OPENDS
EnableDS: DG_CONTROL, DAT_USERINTERFACE, MSG_ENABLEDS
ProcessDeviceEvent: DG_CONTROL, DAT_EVENT, MSG_PROCESSEVENT
and as a result of the last call I always get TWRC_NOTDSEVENT instead of TWRC_DSEVENT.
Could someone please help with this?
Once you use DG_CONTROL / DAT_EVENT / MSG_PROCESSEVENT, all messages from the application's message loop must be sent to the data source for processing. Receiving TWRC_NOTDSEVENT means the forwarded message isn't for the source, so the application should process it as normal.
Keep forwarding all messages to the source until you receive MSG_XFERREADY, which means there is data to transfer. Once the transfer is finished and you have sent MSG_DISABLEDS, you can stop forwarding messages to the source.
TWAIN is a standard, and when many companies implement that standard, not all of them do it the same way. While supporting TWAIN you learn and adjust your code to handle the different implementations.
I experienced this situation before, and this is my workaround:
Instead of checking (rc == TWRC_DSEVENT) at the beginning of the code (which would skip the MSG_XFERREADY processing that follows), move the comparison to the end, after the MSG_XFERREADY processing, so that MSG_XFERREADY is always checked and handled.
(rc == TWRC_DSEVENT) then only determines whether or not to forward the window message.
I don't know your specific situation. I ran into a similar issue because I called OpenDSM with an HWND/wId from another process. You should call OpenDSM with the HWND of the active window/dialog owned by the current process.

Apache Camel using seda

I want to have a behavior like this:
Camel reads a file from a directory, splits it into chunks (using streaming), sends each chunk to a seda queue for concurrent processing, and after the processing is done, a report generator is invoked.
This is my camel route:
from("file://c:/mydir?move=.done")
.to("bean:firstBean")
.split(ExpressionBuilder.beanExpression("splitterBean", "split"))
.streaming()
.to("seda:processIt")
.end()
.to("bean:reportGenerator");
from("seda:processIt")
.to("bean:firstProcessingBean")
.to("bean:secondProcessingBean");
When I run this, the reportGenerator bean runs concurrently with the seda processing.
How can I make it run once, after the whole seda processing is done?
The splitter has built-in parallel processing, so you can do this more easily as follows:
from("file://c:/mydir?move=.done")
.to("bean:firstBean")
.split(ExpressionBuilder.beanExpression("splitterBean", "split"))
.streaming().parallelProcessing()
.to("bean:firstProcessingBean")
.to("bean:secondProcessingBean")
.end()
.to("bean:reportGenerator");
You can see more details about the parallel option at the Camel splitter page: http://camel.apache.org/splitter
I think you can use Camel's delayer pattern on the second route to achieve this.
delay(long) takes the delay time in milliseconds. You can read more about this pattern in the Camel documentation.
For example: from("seda:processIt").delay(2000)
.to("bean:firstProcessingBean"); // delays this route by 2 seconds
I'd also suggest using startupOrder to configure the startup order of the routes, though.
The official documentation provides good details on the topic.
Point to note - " The routes with the lowest startupOrder is started first. All startupOrder defined must be unique among all routes in your CamelContext."
So, I'd suggest something like this -
from("endpoint1").startupOrder(1)
.to("endpoint2");
from("endpoint2").startupOrder(2)
.to("endpoint3");
Hope that helps.
PS: I'm new to Apache Camel and to Stack Overflow as well. Kindly pardon any mistakes that might have occurred.