How to: Async Callbacks using Netty with Avro

I'm trying to implement asynchronous Avro calls using its NettyServer implementation. After digging through the source code, I found an example of how to use NettyServer in TestNettyServerWithCallbacks.java.
When running a few tests, I realized that NettyServer never calls the hello(Callback) method; instead it keeps calling the synchronous hello() method. The client program prints "Hello", but I'm expecting "Hello-ASYNC" as the result. I really have no clue what's going on.
I hope someone can shed some light on this and perhaps point out the mistake. Below is the code I use to perform a simple asynchronous Avro test.
AvroClient.java - Client code.
public class AvroClient {
    public static void main(String[] args) throws InterruptedException, ExecutionException, TimeoutException {
        try {
            NettyTransceiver transceiver = new NettyTransceiver(new InetSocketAddress(6666));
            Chat.Callback client = SpecificRequestor.getClient(Chat.Callback.class, transceiver);
            final CallFuture<CharSequence> future1 = new CallFuture<CharSequence>();
            client.hello(future1);
            System.out.println(future1.get());
            transceiver.close();
        } catch (IOException ex) {
            System.err.println(ex);
        }
    }
}
AvroNetty.java - The Server Code
public class AvroNetty {
    public static void main(String[] args) {
        Chat chatImpl = new ChatImpl();
        Server server = new NettyServer(new SpecificResponder(Chat.class, chatImpl), new InetSocketAddress(6666));
        server.start();
        System.out.println("Server is listening at port " + server.getPort());
    }
}
ChatImpl.java
public class ChatImpl implements Chat.Callback {
    @Override
    public void hello(org.apache.avro.ipc.Callback<CharSequence> callback) throws IOException {
        callback.handleResult("Hello-ASYNC");
    }

    @Override
    public CharSequence hello() throws AvroRemoteException {
        return new Utf8("Hello");
    }
}
This interface is auto-generated by avro-tools:
Chat.java
@SuppressWarnings("all")
public interface Chat {
    public static final org.apache.avro.Protocol PROTOCOL = org.apache.avro.Protocol.parse("{\"protocol\":\"Chat\",\"namespace\":\"avro.test\",\"types\":[],\"messages\":{\"hello\":{\"request\":[],\"response\":\"string\"}}}");

    java.lang.CharSequence hello() throws org.apache.avro.AvroRemoteException;

    @SuppressWarnings("all")
    public interface Callback extends Chat {
        public static final org.apache.avro.Protocol PROTOCOL = avro.test.Chat.PROTOCOL;

        void hello(org.apache.avro.ipc.Callback<java.lang.CharSequence> callback) throws java.io.IOException;
    }
}
Here is the Avro Schema
{
  "namespace": "avro.test",
  "protocol": "Chat",
  "types": [],
  "messages": {
    "hello": {
      "request": [],
      "response": "string"
    }
  }
}

The NettyServer implementation actually doesn't implement the async style at all; it is a deficiency in the library. Instead, you need to specify an asynchronous execution handler rather than trying to chain services together through callbacks. Here is what I use to set up my NettyServer to allow for this:
ExecutorService es = Executors.newCachedThreadPool();
OrderedMemoryAwareThreadPoolExecutor executor = new OrderedMemoryAwareThreadPoolExecutor(Runtime.getRuntime().availableProcessors(), 0, 0);
ExecutionHandler executionHandler = new ExecutionHandler(executor);
final NettyServer server = new NettyServer(responder, addr, new NioServerSocketChannelFactory(es, es), executionHandler);
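The ExecutionHandler hands incoming events off to the wrapped thread pool, so the responder logic runs outside the Netty I/O threads instead of blocking them. Note that the pools then have to be released explicitly on shutdown; a minimal sketch (the ordering is the important part, where you hook it in is up to your shutdown path):
// Hedged sketch: release the pools once the server is no longer needed
server.close();                               // stop the Avro NettyServer
executionHandler.releaseExternalResources();  // shuts down the wrapped executor
es.shutdown();                                // boss/worker thread pool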

Related

Open and close channel in the gRPC client with every request

I have a gRPC client in a Kafka application. This means the client will constantly open and close channels.
public class UserAgentClient {
    protected final Logger logger = LoggerFactory.getLogger(getClass());
    private static final Config uaConfig = ConfigFactory.load().getConfig("ua");
    private final ManagedChannel channel;
    private final UserAgentServiceGrpc.UserAgentServiceBlockingStub userAgentBlockingStub;

    public UserAgentClient() {
        this(ManagedChannelBuilder.forAddress(uaConfig.getString("host"), uaConfig.getInt("port")).usePlaintext());
    }

    public UserAgentClient(ManagedChannelBuilder<?> channelBuilder) {
        channel = channelBuilder.build();
        userAgentBlockingStub = UserAgentServiceGrpc.newBlockingStub(channel);
    }

    public UserAgentParseResponse getUserAgent(String userAgent) {
        UserAgentRequest request = UserAgentRequest.newBuilder().setUserAgent(userAgent).build();
        UserAgentParseResponse response = null;
        try {
            response = userAgentBlockingStub.parseUserAgent(request);
        } catch (Exception e) {
            logger.warn("An exception occurred during the gRPC call to the user agent service: {}", e.getMessage());
        }
        shutdown();
        return response;
    }

    public void shutdown() {
        try {
            channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            logger.warn("Interrupted while closing the gRPC channel", ie);
        }
    }
}
I was wondering: can I keep the channel open the whole time, or do I have to open a channel every time I make a new call? I ask because when I tested performance, it improved drastically when I just kept the channel open. On the other hand, is there something I'm missing?
Creating a new channel has huge overhead; you should keep the channel open as long as possible.
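A common pattern is to build the channel once, share it across all calls, and shut it down only when the application stops. A minimal sketch (the holder class and the five-second timeout are hypothetical choices, not part of the original code):
public final class GrpcChannelHolder {
    // One channel per target, reused by every stub in the application
    private static final ManagedChannel CHANNEL =
            ManagedChannelBuilder.forAddress("localhost", 50051).usePlaintext().build();

    public static ManagedChannel get() {
        return CHANNEL;
    }

    // Call once on application shutdown
    public static void shutdown() throws InterruptedException {
        CHANNEL.shutdown().awaitTermination(5, TimeUnit.SECONDS);
    }
}
Stubs, by contrast, are lightweight: creating a new stub per request against the shared channel is fine.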
Since opening and closing a channel is expensive, I removed the channel = channelBuilder.build(); call from my client completely.
Instead I'm opening and closing it in my Kafka Transformer, in my class UserAgentDataEnricher that implements Transformer.
public class UserAgentDataEnricher implements Transformer<byte[], EnrichedData, KeyValue<byte[], EnrichedData>> {
    private final Logger logger = LoggerFactory.getLogger(getClass());
    private ProcessorContext context;
    private ManagedChannel channel;
    private UserAgentClient userAgentClient;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        open();
        // schedule a punctuate() method every 15 minutes
        this.context.schedule(900000, PunctuationType.WALL_CLOCK_TIME, (timestamp) -> {
            close();
            open();
            logger.info("Re-opening of user agent channel is initialized");
        });
    }

    @Override
    public void close() {
        userAgentClient.shutdown();
    }

    private void open() {
        channel = ManagedChannelBuilder.forAddress("localhost", 50051).usePlaintext().build();
        userAgentClient = new UserAgentClient(channel);
    }
    ...
}
And now I initialize my client like this:
public UserAgentClient(ManagedChannel channel) {
this.channel = channel;
userAgentBlockingStub = UserAgentServiceGrpc.newBlockingStub(channel);
}

How to use ContainerStoppingErrorHandler in @KafkaListener to terminate the application in case of a Kafka server DisconnectException

I want to handle the server DisconnectException and terminate the application when it occurs.
How do I catch this error and stop the application?
@KafkaListener(topics = { "${kafka.status-topic}", "${kafka.start-topic}" }, containerFactory = "kafkaListenerContainerFactory")
public void listen(@Payload final String message,
        @Header(KafkaHeaders.RECEIVED_TOPIC) final String topic) {
    log.debug("Received '{}'-message {} from Kafka", topic, message);
    LinkedList<IMessageListener> topicListeners = listeners.get(topic);
    for (final IMessageListener l : topicListeners) {
        // call listeners in a separate thread
        executor.execute(new Runnable() {
            @Override
            public void run() {
                l.messageReceived(topic, message);
            }
        });
    }
}
You can try catching the exception and then calling System.exit(0) inside the catch block.
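Alternatively, since the title asks about ContainerStoppingErrorHandler: you can register it on the container factory so that an unrecoverable error stops the listener container, and then decide in your own code whether to exit the JVM. A sketch, assuming a Spring Kafka version where setErrorHandler is still available (newer versions use setCommonErrorHandler with CommonContainerStoppingErrorHandler instead):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Stops the listener container (not the JVM) when a listener throws
    factory.setErrorHandler(new ContainerStoppingErrorHandler());
    return factory;
}
One caveat: DisconnectException is a retriable exception that the consumer usually retries internally, so it may never surface in the listener at all.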

SoapFault handling with Spring WS client - WebServiceGatewaySupport and WebServiceTemplate

I am trying to write a Spring WS client using WebServiceGatewaySupport. I managed to test the client for a successful request and response. Now I want to write test cases for SOAP faults.
public class MyClient extends WebServiceGatewaySupport {
    public ServiceResponse method(ServiceRequest serviceRequest) {
        return (ServiceResponse) getWebServiceTemplate().marshalSendAndReceive(serviceRequest);
    }
}

@ActiveProfiles("test")
@RunWith(SpringRunner.class)
@SpringBootTest(classes = SpringTestConfig.class)
@DirtiesContext
public class MyClientTest {
    @Autowired
    private MyClient myClient;
    private MockWebServiceServer mockServer;

    @Before
    public void createServer() throws Exception {
        mockServer = MockWebServiceServer.createServer(myClient);
    }
}
My question is: how do I stub the SOAP fault response in the mock server, so that my custom FaultMessageResolver will be able to unmarshal the SOAP fault?
I tried a couple of things below, but nothing worked.
// responsePayload being a SoapFault wrapped in a SoapEnvelope
mockServer.expect(payload(requestPayload))
        .andRespond(withSoapEnvelope(responsePayload));

// tried to build an error message
mockServer.expect(payload(requestPayload))
        .andRespond(withError("soap fault string"));

// tried with an Exception
mockServer.expect(payload(requestPayload))
        .andRespond(withException(new RuntimeException()));
Any help is appreciated. Thanks!
Follow up:
OK, so with withSoapEnvelope(payload) I managed to get the call routed to my custom MySoapFaultMessageResolver:
public class MyCustomSoapFaultMessageResolver implements FaultMessageResolver {
    private Jaxb2Marshaller jaxb2Marshaller;

    @Override
    public void resolveFault(WebServiceMessage message) throws IOException {
        if (message instanceof SoapMessage) {
            SoapMessage soapMessage = (SoapMessage) message;
            SoapFaultDetailElement soapFaultDetailElement = (SoapFaultDetailElement) soapMessage.getSoapBody()
                    .getFault()
                    .getFaultDetail()
                    .getDetailEntries()
                    .next();
            Source source = soapFaultDetailElement.getSource();
            jaxb2Marshaller = new Jaxb2Marshaller();
            jaxb2Marshaller.setContextPath("com.company.project.schema");
            Object object = jaxb2Marshaller.unmarshal(source);
            if (object instanceof CustomerAlreadyExistsFault) {
                throw new CustomerAlreadyExistsException(soapMessage);
            }
        }
    }
}
But seriously!!! I had to unmarshal every message and check its type. As the client, I'd have to know every possible fault of the service, create custom runtime exceptions, and throw them from the resolver. And still, in the end, it's caught in WebServiceTemplate and re-thrown as just a RuntimeException.
You could try with something like this:
@Test
public void yourTestMethod() { // no throws clause here
    Source requestPayload = new StringSource("<your request>");
    String errorMessage = "Your error message from WS";
    mockWebServiceServer
            .expect(payload(requestPayload))
            .andRespond(withError(errorMessage));
    YourRequestClass request = new YourRequestClass();
    // TODO: set request properties...
    try {
        yourClient.callMethod(request);
    } catch (Exception e) {
        assertThat(e.getMessage()).isEqualTo(errorMessage);
    }
    mockWebServiceServer.verify();
}
In this code, mockWebServiceServer is an instance of the MockWebServiceServer class.
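If you need the custom FaultMessageResolver to actually run, note that withError simulates a transport-level error rather than a SOAP fault, so the fault-resolution path is never exercised; responding with a real fault envelope via withSoapEnvelope should do it. A minimal sketch (the fault code and detail content are made-up placeholders):
Source faultEnvelope = new StringSource(
        "<SOAP-ENV:Envelope xmlns:SOAP-ENV='http://schemas.xmlsoap.org/soap/envelope/'>"
      + "<SOAP-ENV:Body><SOAP-ENV:Fault>"
      + "<faultcode>SOAP-ENV:Client</faultcode>"
      + "<faultstring>Customer already exists</faultstring>"
      + "<detail><!-- service-specific fault detail --></detail>"
      + "</SOAP-ENV:Fault></SOAP-ENV:Body></SOAP-ENV:Envelope>");
mockServer.expect(payload(requestPayload)).andRespond(withSoapEnvelope(faultEnvelope));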

How can a native Servlet Filter be used when using Spark web framework?

I'm playing around with Spark (the Java web framework, not Apache Spark).
I find it really nice and easy to define routes and filters; however, I'm looking to apply a native servlet filter to my routes and can't seem to find a way to do that.
More specifically, I would like to use Jetty's DoSFilter, which is a servlet filter (in contrast to Spark's own Filter type). Since Spark is using embedded Jetty, I don't have a web.xml in which to register the DoSFilter. However, Spark doesn't expose the server instance, so I can't find an elegant way of registering the filter programmatically either.
Is there a way to apply a native servlet filter to my routes?
I thought of wrapping the DoSFilter in my own Spark Filter, but it seemed like a weird idea.
You can do it like this:
public class App {
    private static Logger LOG = LoggerFactory.getLogger(App.class);

    public static void main(String[] args) throws Exception {
        ServletContextHandler mainHandler = new ServletContextHandler();
        mainHandler.setContextPath("/base/path");
        Stream.of(
            new FilterHolder(new MyServletFilter()),
            new FilterHolder(new SparkFilter()) {{
                this.setInitParameter("applicationClass", SparkApp.class.getName());
            }}
        ).forEach(h -> mainHandler.addFilter(h, "/*", null));
        GzipHandler compression = new GzipHandler();
        compression.setIncludedMethods("GET");
        compression.setMinGzipSize(512);
        compression.setHandler(mainHandler);
        Server server = new Server(new ExecutorThreadPool(new ThreadPoolExecutor(10, 200, 60000, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(200),
                new CustomizableThreadFactory("jetty-pool-"))));
        final ServerConnector serverConnector = new ServerConnector(server);
        serverConnector.setPort(9290);
        server.setConnectors(new Connector[] { serverConnector });
        server.setHandler(compression);
        server.start();
        hookToShutdownEvents(server);
        server.join();
    }

    private static void hookToShutdownEvents(final Server server) {
        LOG.debug("Hooking to JVM shutdown events");
        server.addLifeCycleListener(new AbstractLifeCycle.AbstractLifeCycleListener() {
            @Override
            public void lifeCycleStopped(LifeCycle event) {
                LOG.info("Jetty Server has been stopped");
                super.lifeCycleStopped(event);
            }
        });
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                LOG.info("About to stop Jetty Server due to JVM shutdown");
                try {
                    server.stop();
                } catch (Exception e) {
                    LOG.error("Could not stop Jetty Server properly", e);
                }
            }
        });
    }

    /**
     * @implNote {@link SparkFilter} needs to access a public class
     */
    @SuppressWarnings("WeakerAccess")
    public static class SparkApp implements SparkApplication {
        @Override
        public void init() {
            System.setProperty("spring.profiles.active", ApplicationProfile.readProfilesOrDefault("dev").stream().collect(Collectors.joining()));
            AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(ModocContext.class);
            ctx.registerShutdownHook();
        }
    }
}
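For the DoSFilter itself you would register it the same way as MyServletFilter above; its limits are configured through init parameters. A hedged sketch with example values (25 requests/sec and a 100 ms delay are just the filter's documented defaults, spelled out):
FilterHolder dosFilter = new FilterHolder(new DoSFilter());
dosFilter.setInitParameter("maxRequestsPerSec", "25"); // max requests per connection per second
dosFilter.setInitParameter("delayMs", "100");          // delay imposed on requests over the limit
mainHandler.addFilter(dosFilter, "/*", null);
This works because SparkFilter is the same servlet filter Spark uses when deployed to an external servlet container, so embedding Jetty yourself lets you put any native filter in front of it.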

Netty: What is the right way to share NioClientSocketChannelFactory among multiple Netty Clients

I am new to Netty. I am using Netty 3.6.2.Final. I have created a Netty client (MyClient) that talks to a remote server (the server implements a custom protocol based on TCP). I create a new ClientBootstrap instance for each MyClient instance (within the constructor).
My question: if I share the NioClientSocketChannelFactory object among all instances of MyClient, when and how do I release all the resources associated with it?
In other words, since my Netty client runs inside a JBoss container 24x7, should I release all resources by calling bootstrap.releaseExternalResources(), and when/where should I do so?
More info: my Netty client is called in two scenarios inside the JBoss container. First, in an infinite for loop, each time passing the string that needs to be sent to the remote server (in effect, similar to the code below):
for (;;) {
    // Prepare the stringToSend
    // Send a string and receive a string
    String returnedString = new MyClient().handle(stringToSend);
}
The other scenario is that my Netty client is called from concurrent threads, with each thread calling new MyClient().handle(stringToSend);.
I have given the skeleton code below. It is very similar to the TelnetClient example on the Netty website.
MyClient
import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;

public class MyClient {
    // Instantiate this only once per application
    private final static Timer timer = new HashedWheelTimer();
    // All below must come from configuration
    private final String host = "127.0.0.1";
    private final int port = 9699;
    private final InetSocketAddress address = new InetSocketAddress(host, port);
    private ClientBootstrap bootstrap;
    private Channel channel;
    // Timeout when the server sends nothing for n seconds.
    static final int READ_TIMEOUT = 5;

    public MyClient() {
        bootstrap = new ClientBootstrap(NioClientSocketFactorySingleton.getInstance());
    }

    public String handle(String messageToSend) {
        bootstrap.setOption("connectTimeoutMillis", 20000);
        bootstrap.setOption("tcpNoDelay", true);
        bootstrap.setOption("keepAlive", true);
        bootstrap.setOption("remoteAddress", address);
        bootstrap.setPipelineFactory(new MyClientPipelineFactory(messageToSend, bootstrap, timer));
        // Start the connection attempt.
        ChannelFuture future = bootstrap.connect();
        // Wait until the connection attempt succeeds or fails.
        channel = future.awaitUninterruptibly().getChannel();
        if (!future.isSuccess()) {
            return null;
        }
        // Wait until the connection is closed or the connection attempt fails.
        channel.getCloseFuture().awaitUninterruptibly();
        MyClientHandler myClientHandler = (MyClientHandler) channel.getPipeline().getLast();
        String messageReceived = myClientHandler.getMessageReceived();
        return messageReceived;
    }
}
Singleton NioClientSocketChannelFactory
public class NioClientSocketFactorySingleton {
    private static NioClientSocketChannelFactory nioClientSocketChannelFactory;

    private NioClientSocketFactorySingleton() {
    }

    public static synchronized NioClientSocketChannelFactory getInstance() {
        if (nioClientSocketChannelFactory == null) {
            nioClientSocketChannelFactory = new NioClientSocketChannelFactory(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool());
        }
        return nioClientSocketChannelFactory;
    }

    protected void finalize() throws Throwable {
        try {
            if (nioClientSocketChannelFactory != null) {
                // Shut down thread pools to exit.
                nioClientSocketChannelFactory.releaseExternalResources();
            }
        } catch (Exception e) {
            // Can't do anything much
        }
    }
}
MyClientPipelineFactory
public class MyClientPipelineFactory implements ChannelPipelineFactory {
    private String messageToSend;
    private ClientBootstrap bootstrap;
    private Timer timer;

    public MyClientPipelineFactory() {
    }

    public MyClientPipelineFactory(String messageToSend) {
        this.messageToSend = messageToSend;
    }

    public MyClientPipelineFactory(String messageToSend, ClientBootstrap bootstrap, Timer timer) {
        this.messageToSend = messageToSend;
        this.bootstrap = bootstrap;
        this.timer = timer;
    }

    public ChannelPipeline getPipeline() throws Exception {
        // Create a default pipeline implementation.
        ChannelPipeline pipeline = pipeline();
        // Add the text line codec combination first,
        //pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
        pipeline.addLast("decoder", new StringDecoder());
        pipeline.addLast("encoder", new StringEncoder());
        // Add read timeout
        pipeline.addLast("timeout", new ReadTimeoutHandler(timer, MyClient.READ_TIMEOUT));
        // and then business logic.
        pipeline.addLast("handler", new MyClientHandler(messageToSend, bootstrap));
        return pipeline;
    }
}
MyClientHandler
public class MyClientHandler extends SimpleChannelUpstreamHandler {
    private String messageToSend = "";
    private String messageReceived = "";
    private ClientBootstrap bootstrap;

    public MyClientHandler(String messageToSend, ClientBootstrap bootstrap) {
        this.messageToSend = messageToSend;
        this.bootstrap = bootstrap;
    }

    public String getMessageReceived() {
        return messageReceived;
    }

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        e.getChannel().write(messageToSend);
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        messageReceived = e.getMessage().toString();
        // This hands control back to MyClient
        e.getChannel().close();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        // Close the connection when an exception is raised.
        e.getChannel().close();
    }
}
You should only call releaseExternalResources() once you are sure you no longer need the factory. That may be, for example, when the application gets stopped or undeployed.
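In a container like JBoss, a natural place for that is a lifecycle callback that runs on undeploy. A minimal sketch, assuming the Servlet 3.0 API is available (the listener class name is hypothetical):
@WebListener
public class NettyResourceReleaser implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on deploy
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Releases the boss/worker thread pools exactly once, on undeploy
        NioClientSocketFactorySingleton.getInstance().releaseExternalResources();
    }
}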
