Retrofit RxJava simple test

I'm learning Retrofit and RxJava, and I've created a test that connects to GitHub:
public class GitHubServiceTests {
RestAdapter restAdapter;
GitHubService service;
@Before
public void setUp(){
Gson gson = new GsonBuilder()
.setFieldNamingPolicy(FieldNamingPolicy.LOWER_CASE_WITH_UNDERSCORES)
.create();
restAdapter = new RestAdapter.Builder()
.setEndpoint("https://api.github.com")
.setConverter(new GsonConverter(gson))
.build();
service = restAdapter.create(GitHubService.class);
}
@Test
public void GitHubUsersListObservableTest(){
service.getObservableUserList().flatMap(Observable::from)
.subscribe(user -> System.out.println(user.login));
}
When I execute this test, I see nothing in my console. But when I execute another test:
@Test
public void GitHubUsersListTest(){
List<User> users = service.getUsersList();
for (User user : users) {
System.out.println(user.login);
}
}
it works, and I see the users' logins in my console.
Here is my Interface for Retrofit:
public interface GitHubService {
@GET("/users")
List<User> getUsersList();
@GET("/users")
Observable<List<User>> getObservableUserList();
}
Where am I going wrong?

Because of the asynchronous call, your test completes before a result is downloaded. That's a typical issue, and you have to 'tell' the test to wait for the result. In plain Java it would be:
@Test
public void GitHubUsersListObservableTest(){
CountDownLatch latch = new CountDownLatch(N); // N = the number of onNext calls you expect
service.getObservableUserList()
.flatMap(Observable::from)
.subscribe(user -> {
System.out.println(user.login);
latch.countDown();
});
latch.await();
}
Or you can use BlockingObservable from RxJava:
// This does not block.
BlockingObservable<User> observable = service.getObservableUserList()
.flatMap(Observable::from)
.toBlocking();
// This blocks and is called for every emitted item.
observable.forEach(user -> System.out.println(user.login));
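Another option on RxJava 1.x is TestSubscriber (rx.observers.TestSubscriber), which blocks until the stream terminates and lets you assert on what was emitted. A minimal sketch, assuming the same service field as in your test class:
@Test
public void gitHubUsersListObservableTest() {
    TestSubscriber<User> testSubscriber = new TestSubscriber<>();
    service.getObservableUserList()
           .flatMap(Observable::from)
           .subscribe(testSubscriber);
    // Block until onCompleted/onError arrives, then assert on the emissions.
    testSubscriber.awaitTerminalEvent();
    testSubscriber.assertNoErrors();
    testSubscriber.getOnNextEvents().forEach(user -> System.out.println(user.login));
}
This keeps the waiting and the assertions in the test framework instead of a hand-rolled latch.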

Related

How to use spring-kafka for sending a message again

We are using spring-kafka 1.2.2.RELEASE.
What we want
1. As soon as a message is consumed and processed successfully, the offset is committed in spring-kafka. I am using manual commit/acknowledgement for it, and it is working fine.
2. In case of any exception we want spring-kafka to resend the same message. We are throwing a RuntimeException on any system error, which is logged by spring-kafka and never committed.
This is fine, as we don't want it to commit, but that message stays in spring-kafka and never comes back unless we restart the service. On restart the message comes back, executes once again, and then stays in spring-kafka.
What we tried
1. I have tried both ErrorHandler and RetryingMessageListenerAdapter, but in both cases we have to code in the service how to process the message again.
This is my consumer
public class MyConsumer{
@KafkaListener
public void receive(...){
// application logic to return success/failure
if(success){
acknowledgement.acknowledge();
}else{
throw new RuntimeException();
}
}
}
Also, I have the following configuration for the container factory:
factory.getContainerProperties().setErrorHandler(new ErrorHandler(){
@Override
public void handle(...){
throw new RuntimeException("");
}
});
While executing the flow, control first comes into the receive method and then into the handle method. After that, the service waits for a new message. However, I was expecting that, since we threw an exception and the message was not committed, the same message would land in the receive method again.
Is there any way we can tell spring-kafka "do not commit this message and send it again asap"?
1.2.x is no longer supported; 1.x users are recommended to upgrade to at least 1.3.x (currently 1.3.8) because of its much simpler threading model, thanks to KIP-62.
The current version is 2.2.2.
2.0.1 introduced the SeekToCurrentErrorHandler which re-seeks the failed record so that it is redelivered.
With earlier versions, you had to stop and restart the container to redeliver a failed message, or add retry to the listener adapter.
I suggest you upgrade to the newest possible release.
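For reference, with 2.0.1 and later the wiring looks roughly like this (a sketch reusing the factory bean from your configuration below; I have not run it against your setup):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, MyModel> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, MyModel> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
    factory.getContainerProperties().setAckMode(AckMode.MANUAL);
    // When the listener throws, the handler re-seeks the failed record,
    // so the next poll() returns it again instead of skipping it.
    factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}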
Unfortunately, the version available for us to use is 1.3.7.RELEASE.
I have tried implementing the ConsumerSeekAware interface. Below is how I am doing it, and I can see the message being delivered repeatedly.
Consumer
public class MyConsumer implements ConsumerSeekAware{
private ConsumerSeekCallback consumerSeekCallback;
@KafkaListener
public void receive(...){
// application logic to return success/failure
if(condition) {
acknowledgement.acknowledge();
}else {
// re-seek to the failed record so the next poll() returns it again
consumerSeekCallback.seek((String)headers.get("kafka_receivedTopic"),
(int) headers.get("kafka_receivedPartitionId"),
(long) headers.get("kafka_offset"));
}
}
@Override
public void registerSeekCallback(ConsumerSeekCallback consumerSeekCallback) {
this.consumerSeekCallback = consumerSeekCallback;
}
@Override
public void onIdleContainer(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
LOGGER.debug("onIdleContainer called");
}
@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
LOGGER.debug("onPartitionsAssigned called");
}
}
Config
public class MyConsumerConfig {
@Bean
public Map<String, Object> consumerConfigs() {
Map<String, Object> props = new HashMap<>();
// Set server, deserializer, group id
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
return props;
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, MyModel> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, MyModel> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
factory.getContainerProperties().setAckMode(AckMode.MANUAL);
return factory;
}
@Bean
public MyConsumer receiver() {
return new MyConsumer();
}
}
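If you are stuck on 1.3.x, another option (the "add retry to the listener adapter" route) is to let the container factory wrap the listener in a stateless retry. A sketch, assuming spring-retry is on the classpath and that setRetryTemplate/setRecoveryCallback are available in your exact 1.3.x release:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, MyModel> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, MyModel> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
    factory.getContainerProperties().setAckMode(AckMode.MANUAL);

    // Retry the listener in-memory before giving up on the record.
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));   // up to 3 attempts
    FixedBackOffPolicy backOff = new FixedBackOffPolicy();
    backOff.setBackOffPeriod(1000L);                          // 1 second between attempts
    retryTemplate.setBackOffPolicy(backOff);
    factory.setRetryTemplate(retryTemplate);

    // Invoked once retries are exhausted, e.g. log or publish to a dead-letter topic.
    factory.setRecoveryCallback(context -> {
        return null;
    });
    return factory;
}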

spring boot kafka consumer application to implement heartbeat

Below is my Spring Boot Kafka consumer application that reads data from a Kafka topic (and loads my JSON input data into a DB). In this application we are planning to implement heartbeat functionality that posts to a URL using the @Scheduled annotation, so that an application monitoring tool knows the app is alive and running.
To achieve this I placed my heartbeat code in many places of my application, but I could not make it work, because @PostConstruct / consumer.poll() never allows the heartbeat code to run.
We are using Apache Kafka 2.12. What would be the right approach to implement this behaviour in my Spring Boot app? Is there any other API to make such a POST request to a URL every few minutes throughout the application's lifetime? Would writing a background thread resolve this issue? Why do @PostConstruct and poll() block the other recurring code from running?
Please help me. Thanks in advance.
@SpringBootApplication
@EnableScheduling
public class KafkaApp {
@Autowired
ConsumerService kcService;
public static void main(String[] args) {
SpringApplication.run(KafkaApp.class, args);
}
@PostConstruct
public void init(){
kcService.getMessagesFromKafka();
}
}
and two @Service definitions:
import org.apache.kafka.clients.consumer.Consumer;
@Service
public class ConsumerService {
// Called once at startup from KafkaApp's @PostConstruct
public void getMessagesFromKafka() {
final Consumer<Long, String> consumer = createConsumer();
final int giveUp = 100;
int noRecordsCount = 0;
while (true) {
final ConsumerRecords<Long, String> consumerRecords = consumer.poll(1000);
if (consumerRecords.count()==0) {
noRecordsCount++;
if (noRecordsCount > giveUp) break;
else continue;
}
consumerRecords.forEach(record -> {
System.out.printf("Consumer Record:(%d, %s, %d, %d)\n",
record.key(), record.value(),
record.partition(), record.offset());
});
consumer.commitAsync();
}
}
@Scheduled(fixedDelay = 180000)
public void heartbeat() {
RestTemplate restTemplate = new RestTemplate();
String url = "endpoint url";
String requestJson = "{\"I am alive\":\"App name?\"}";
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON);
HttpEntity<String> entity = new HttpEntity<String>(requestJson,headers);
String answer = restTemplate.postForObject(url, entity, String.class);
System.out.println(answer);
}
Add the @EnableScheduling annotation to your main class, like:
@SpringBootApplication
@EnableScheduling
public class KafkaApp {
@Autowired
ConsumerService kcService;
public static void main(String[] args) {
SpringApplication.run(KafkaApp.class, args);
}
@PostConstruct
public void init(){
kcService.getMessagesFromKafka();
}
}
For more detail you can visit this link: spring-boot-task-scheduling-with-scheduled-annotation.
If you want to write a cron job for this purpose then in application.properties add this:
cron.expression=*/5 * * * * * //This means it will execute every 5 seconds (Spring uses a 6-field cron expression)
You can build a cron expression online; here is a link: cron-expression-generator-quartz.
Then annotate your heartbeat function like this:
@Scheduled(cron = "${cron.expression}")
public void heartbeat() {
//Your code here.
}
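As to why the heartbeat never fires: the infinite poll loop runs inside @PostConstruct, so context startup never completes and the scheduler never starts running the @Scheduled tasks. A minimal sketch of moving the loop onto its own thread (the executor and event-listener wiring here are assumptions, not part of your code):
@SpringBootApplication
@EnableScheduling
public class KafkaApp {

    @Autowired
    ConsumerService kcService;

    public static void main(String[] args) {
        SpringApplication.run(KafkaApp.class, args);
    }

    // Start polling only after the context is fully started, on a dedicated thread,
    // so the @Scheduled heartbeat is free to run on the scheduler thread.
    @EventListener(ApplicationReadyEvent.class)
    public void startConsumer() {
        Executors.newSingleThreadExecutor().submit(kcService::getMessagesFromKafka);
    }
}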

SoapFault handling with Spring WS client - WebServiceGatewaySupport and WebServiceTemplate

I am trying to write a Spring WS client using WebServiceGatewaySupport. I managed to test the client for a successful request and response. Now I want to write test cases for SOAP faults.
public class MyClient extends WebServiceGatewaySupport {
public ServiceResponse method(ServiceRequest serviceRequest) {
return (ServiceResponse) getWebServiceTemplate().marshalSendAndReceive(serviceRequest);
}
}
@ActiveProfiles("test")
@RunWith(SpringRunner.class)
@SpringBootTest(classes = SpringTestConfig.class)
@DirtiesContext
public class MyClientTest {
@Autowired
private MyClient myClient;
private MockWebServiceServer mockServer;
@Before
public void createServer() throws Exception {
mockServer = MockWebServiceServer.createServer(myClient);
}
}
My question is: how do I stub the SOAP fault response in the mock server, so that my custom FaultMessageResolver will be able to unmarshal the SOAP fault?
I tried a couple of things below, but nothing worked.
// responsePayload being SoapFault wrapped in SoapEnvelope
mockServer.expect(payload(requestPayload))
.andRespond(withSoapEnvelope(responsePayload));
// tried to build error message
mockServer.expect(payload(requestPayload))
.andRespond(withError("soap fault string"));
// tried with Exception
mockServer.expect(payload(requestPayload))
.andRespond(withException(new RuntimeException()));
Any help is appreciated. Thanks!
Follow Up:
OK, so with withSoapEnvelope(payload) I managed to get control to reach my custom MySoapFaultMessageResolver.
public class MyCustomSoapFaultMessageResolver implements FaultMessageResolver {
private Jaxb2Marshaller jaxb2Marshaller;
@Override
public void resolveFault(WebServiceMessage message) throws IOException {
if (message instanceof SoapMessage) {
SoapMessage soapMessage = (SoapMessage) message;
SoapFaultDetailElement soapFaultDetailElement = (SoapFaultDetailElement) soapMessage.getSoapBody()
.getFault()
.getFaultDetail()
.getDetailEntries()
.next();
Source source = soapFaultDetailElement.getSource();
jaxb2Marshaller = new Jaxb2Marshaller();
jaxb2Marshaller.setContextPath("com.company.project.schema");
Object object = jaxb2Marshaller.unmarshal(source);
if (object instanceof CustomerAlreadyExistsFault) {
throw new CustomerAlreadyExistsException(soapMessage);
}
}
}
}
But seriously!!! I had to unmarshal every message and check its instance type. Being a client, I have to know every possible exception the service can return, create custom runtime exceptions, and throw them from the resolver. And still, at the end, it's caught in WebServiceTemplate and rethrown as just a runtime exception.
You could try with something like this:
@Test
public void yourTestMethod() // with no throw here
{
Source requestPayload = new StringSource("<your request>");
String errorMessage = "Your error message from WS";
mockWebServiceServer
.expect(payload(requestPayload))
.andRespond(withError(errorMessage));
YourRequestClass request = new YourRequestClass();
// TODO: set request properties...
try {
yourClient.callMethod(request);
}
catch (Exception e) {
assertThat(e.getMessage()).isEqualTo(errorMessage);
}
mockWebServiceServer.verify();
}
In this part of the code, mockWebServiceServer is an instance of the MockWebServiceServer class.
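If you specifically want your custom FaultMessageResolver (rather than the generic error path) to be exercised, the withSoapEnvelope approach from the follow-up can be written out like this; a hedged sketch in which the fault detail element and its namespace are placeholders, not your real schema:
Source requestPayload = new StringSource("<your request>");
// A complete SOAP 1.1 envelope whose body carries a Fault; the detail element
// below stands in for whatever your service really returns.
Source faultEnvelope = new StringSource(
        "<SOAP-ENV:Envelope xmlns:SOAP-ENV='http://schemas.xmlsoap.org/soap/envelope/'>"
      + "<SOAP-ENV:Body>"
      + "<SOAP-ENV:Fault>"
      + "<faultcode>SOAP-ENV:Client</faultcode>"
      + "<faultstring>Customer already exists</faultstring>"
      + "<detail><ns:CustomerAlreadyExistsFault xmlns:ns='http://example.com/placeholder/schema'/></detail>"
      + "</SOAP-ENV:Fault>"
      + "</SOAP-ENV:Body>"
      + "</SOAP-ENV:Envelope>");

mockServer.expect(payload(requestPayload))
          .andRespond(withSoapEnvelope(faultEnvelope));

try {
    myClient.method(new ServiceRequest());
}
catch (CustomerAlreadyExistsException expected) {
    // The custom resolver translated the fault detail into this exception.
}
mockServer.verify();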

Netty: What is the right way to share NioClientSocketChannelFactory among multiple Netty Clients

I am new to Netty. I am using “Netty 3.6.2.Final”. I have created a Netty Client (MyClient) that talks to a remote server (The server implements a custom protocol based on TCP). I create a new ClientBootstrap instance for each MyClient instance (within the constructor).
My question is if I share “NioClientSocketChannelFactory” factory object among all the instances of MyClient then when/how do I release all the resources associated with the “NioClientSocketChannelFactory”?
In other words, since my Netty Client runs inside a JBOSS container running 24x7, should I release all resources by calling “bootstrap.releaseExternalResources();” and when/where should I do so?
More info: my Netty client is called in two scenarios inside a JBoss container. First, in an infinite for loop, each iteration passing the string that needs to be sent to the remote server (in effect, similar to the code below):
for( ; ; ){
//Prepare the stringToSend
//Send a string and receive a string
String returnedString=new MyClient().handle(stringToSend);
}
The other scenario is that my Netty client is called from concurrent threads, with each thread calling “new MyClient().handle(stringToSend);”.
I have given the skeleton code below. It is very similar to the TelnetClient example at Netty website.
MyClient
import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
public class MyClient {
//Instantiate this only once per application
private final static Timer timer = new HashedWheelTimer();
//All below must come from configuration
private final String host ="127.0.0.1";
private final int port =9699;
private final InetSocketAddress address = new InetSocketAddress(host, port);
private ClientBootstrap bootstrap;
//Timeout when the server sends nothing for n seconds.
static final int READ_TIMEOUT = 5;
public MyClient(){
bootstrap = new ClientBootstrap(NioClientSocketFactorySingleton.getInstance());
}
public String handle(String messageToSend){
bootstrap.setOption("connectTimeoutMillis", 20000);
bootstrap.setOption("tcpNoDelay", true);
bootstrap.setOption("keepAlive", true);
bootstrap.setOption("remoteAddress", address);
bootstrap.setPipelineFactory(new MyClientPipelineFactory(messageToSend,bootstrap,timer));
// Start the connection attempt.
ChannelFuture future = bootstrap.connect();
// Wait until the connection attempt succeeds or fails.
Channel channel = future.awaitUninterruptibly().getChannel();
if (!future.isSuccess()) {
return null;
}
// Wait until the connection is closed or the connection attempt fails.
channel.getCloseFuture().awaitUninterruptibly();
MyClientHandler myClientHandler=(MyClientHandler)channel.getPipeline().getLast();
String messageReceived=myClientHandler.getMessageReceived();
return messageReceived;
}
}
Singleton NioClientSocketChannelFactory
public class NioClientSocketFactorySingleton {
private static NioClientSocketChannelFactory nioClientSocketChannelFactory;
private NioClientSocketFactorySingleton() {
}
public static synchronized NioClientSocketChannelFactory getInstance() {
if ( nioClientSocketChannelFactory == null) {
nioClientSocketChannelFactory=new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool());
}
return nioClientSocketChannelFactory;
}
protected void finalize() throws Throwable {
try{
if(nioClientSocketChannelFactory!=null){
// Shut down thread pools to exit.
nioClientSocketChannelFactory.releaseExternalResources();
}
}catch(Exception e){
//Can't do anything much
}
}
}
MyClientPipelineFactory
public class MyClientPipelineFactory implements ChannelPipelineFactory {
private String messageToSend;
private ClientBootstrap bootstrap;
private Timer timer;
public MyClientPipelineFactory(){
}
public MyClientPipelineFactory(String messageToSend){
this.messageToSend=messageToSend;
}
public MyClientPipelineFactory(String messageToSend,ClientBootstrap bootstrap, Timer timer){
this.messageToSend=messageToSend;
this.bootstrap=bootstrap;
this.timer=timer;
}
public ChannelPipeline getPipeline() throws Exception {
// Create a default pipeline implementation.
ChannelPipeline pipeline = pipeline();
// Add the text line codec combination first,
//pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
pipeline.addLast("decoder", new StringDecoder());
pipeline.addLast("encoder", new StringEncoder());
//Add readtimeout
pipeline.addLast("timeout", new ReadTimeoutHandler(timer, MyClient.READ_TIMEOUT));
// and then business logic.
pipeline.addLast("handler", new MyClientHandler(messageToSend,bootstrap));
return pipeline;
}
}
MyClientHandler
public class MyClientHandler extends SimpleChannelUpstreamHandler {
private String messageToSend="";
private String messageReceived="";
private ClientBootstrap bootstrap;
// Read back by MyClient once the channel is closed.
public String getMessageReceived(){
return messageReceived;
}
public MyClientHandler(String messageToSend,ClientBootstrap bootstrap) {
this.messageToSend=messageToSend;
this.bootstrap=bootstrap;
}
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e){
e.getChannel().write(messageToSend);
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e){
messageReceived=e.getMessage().toString();
//This take the control back to the MyClient
e.getChannel().close();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
// Close the connection when an exception is raised.
e.getChannel().close();
}
}
You should only call releaseExternalResources() once you are sure you do not need it anymore. This may be, for example, when the application gets stopped or undeployed.
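For example, you could keep the shared factory in one container-managed singleton and tie the release to its lifecycle; a minimal sketch (the @PreDestroy wiring is an assumption, not part of the original code):
public class NettyClientResources {

    // One factory (and its boss/worker thread pools) shared by every MyClient instance.
    private final NioClientSocketChannelFactory channelFactory =
            new NioClientSocketChannelFactory(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool());

    public NioClientSocketChannelFactory getChannelFactory() {
        return channelFactory;
    }

    // Invoked by the container when the application is stopped or undeployed.
    @PreDestroy
    public void shutdown() {
        channelFactory.releaseExternalResources();
    }
}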

How to: Async Callbacks using Netty with Avro

I'm trying to implement asynchronous Avro calls by using its NettyServer implementation. After digging into the source code, I found an example of how to use NettyServer in TestNettyServerWithCallbacks.java.
When running a few tests, I realized that NettyServer never calls the hello(Callback) method; instead it keeps calling the synchronous hello() method. The client program prints out "Hello", but I'm expecting "Hello-ASYNC" as the result. I really have no clue what's going on.
I hope someone can shed some light on this and perhaps point out the mistake. Below is the code I use to perform a simple asynchronous Avro test.
AvroClient.java - Client code.
public class AvroClient {
public static void main(String[] args) throws InterruptedException, ExecutionException, TimeoutException {
try {
NettyTransceiver transceiver = new NettyTransceiver(new InetSocketAddress(6666));
Chat.Callback client = SpecificRequestor.getClient(Chat.Callback.class, transceiver);
final CallFuture<CharSequence> future1 = new CallFuture<CharSequence>();
client.hello(future1);
System.out.println(future1.get());
transceiver.close();
} catch (IOException ex) {
System.err.println(ex);
}
}
}
AvroNetty.java - The Server Code
public class AvroNetty {
public static void main(String[] args) {
Index indexImpl = new AsyncIndexImpl();
Chat chatImpl = new ChatImpl();
Server server = new NettyServer(new SpecificResponder(Chat.class, chatImpl), new InetSocketAddress(6666));
server.start();
System.out.println("Server is listening at port " + server.getPort());
}
}
ChatImpl.java
public class ChatImpl implements Chat.Callback {
@Override
public void hello(org.apache.avro.ipc.Callback<CharSequence> callback) throws IOException {
callback.handleResult("Hello-ASYNC");
}
@Override
public CharSequence hello() throws AvroRemoteException {
return new Utf8("Hello");
}
}
This interface is auto-generated by avro-tool
Chat.java
@SuppressWarnings("all")
public interface Chat {
public static final org.apache.avro.Protocol PROTOCOL = org.apache.avro.Protocol.parse("{\"protocol\":\"Chat\",\"namespace\":\"avro.test\",\"types\":[],\"messages\":{\"hello\":{\"request\":[],\"response\":\"string\"}}}");
java.lang.CharSequence hello() throws org.apache.avro.AvroRemoteException;
@SuppressWarnings("all")
public interface Callback extends Chat {
public static final org.apache.avro.Protocol PROTOCOL = avro.test.Chat.PROTOCOL;
void hello(org.apache.avro.ipc.Callback<java.lang.CharSequence> callback) throws java.io.IOException;
}
}
Here is the Avro Schema
{
"namespace": "avro.test",
"protocol": "Chat",
"types" : [],
"messages": {
"hello": {
"request": [],
"response": "string"
}
}
}
The NettyServer implementation actually doesn't implement the async style at all. It is a deficiency in the library. Instead, you need to specify an asynchronous execution handler rather than trying to chain services together through callbacks. Here is what I use to set up my NettyServer to allow for this:
ExecutorService es = Executors.newCachedThreadPool();
OrderedMemoryAwareThreadPoolExecutor executor = new OrderedMemoryAwareThreadPoolExecutor(Runtime.getRuntime().availableProcessors(), 0, 0);
ExecutionHandler executionHandler = new ExecutionHandler(executor);
final NettyServer server = new NettyServer(responder, addr, new NioServerSocketChannelFactory(es, es), executionHandler);
