How to disable HTTP Multiplexing in a Netty Client - http

I have a Netty HttpClient that is created using the code below. I wish to disable HTTP/2 connection multiplexing for this client. How can this be achieved?
public HttpClient getHttpClient(Configuration config,
                                ConnectionProvider connectionProvider) {
    HttpClient httpClient = HttpClient
            .create(connectionProvider)
            .responseTimeout(Duration.ofMillis(config.getGlobalResponseTimeout()))
            .tcpConfiguration(tcpClient -> {
                // LoopResources.create(prefix, selectCount, workerCount, daemon) builds a
                // simple LoopResources that provides the EventLoopGroup and Channel factories.
                LoopResources loopResources = LoopResources.create("webClient-event-loop",
                        config.getSelectorThreadCount(), config.getWorkerThreadCount(), Boolean.TRUE);
                // TcpClient is immutable, so the configuration calls must be chained and the
                // result returned; calling them on tcpClient and discarding the result has no effect.
                return tcpClient
                        .runOn(loopResources)
                        .doOnConnected(conn -> conn
                                .addHandler(new ReadTimeoutHandler(config.getGlobalReadTimeout(), TimeUnit.MILLISECONDS))
                                .addHandler(new WriteTimeoutHandler(config.getGlobalWriteTimeout(), TimeUnit.MILLISECONDS)))
                        .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, config.getGlobalConnectionTimeout())
                        .option(ChannelOption.TCP_NODELAY, true);
            });
    return httpClient;
}
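In case it is useful as a starting point: one commonly suggested approach (assuming a Reactor Netty version that exposes HttpClient#protocol, which the tcpConfiguration-based API above suggests) is to restrict the client to HTTP/1.1, to which stream multiplexing does not apply. This is only a sketch, not a verified answer; the rest of the configuration can be chained on unchanged.

import reactor.netty.http.HttpProtocol;
import reactor.netty.http.client.HttpClient;

// Sketch: pin the supported protocol to HTTP/1.1 so HTTP/2 cannot be
// negotiated and requests cannot be multiplexed over a single connection.
HttpClient http11Client = HttpClient
        .create(connectionProvider)
        .protocol(HttpProtocol.HTTP11);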

Related

Apache Http EntityUtils.consume() vs EntityUtils.toString()?

I have written an HTTP client where I read the response data from a REST web service. My confusion arises after reading multiple blogs on EntityUtils.consume() and EntityUtils.toString(). I wanted to know the following:
Whether EntityUtils.toString(..) ONLY is sufficient, since it also closes the stream after reading the char bytes, or whether I should also call EntityUtils.consume(..) as good practice.
Whether both toString() and consume() can be used, and if yes, what their order should be.
If EntityUtils.toString() closes the stream, why does entity.isStreaming() in the subsequent EntityUtils.consume(..) call still return true?
Could anyone guide me on using these operations in a standard way? I am using HttpClient version 4+.
I have to use these configurations in a multithreaded (web-app) environment.
Thanks
I looked at the recommended example from the Apache HttpComponents HttpClient website.
In the example, they used EntityUtils.toString(..) without needing to use EntityUtils.consume(..) before or after.
They mention that calling httpclient.close() ensures all resources are closed.
source: https://hc.apache.org/httpcomponents-client-ga/httpclient/examples/org/apache/http/examples/client/ClientWithResponseHandler.java
CloseableHttpClient httpclient = HttpClients.createDefault();
try {
    HttpGet httpget = new HttpGet("http://httpbin.org/");
    System.out.println("Executing request " + httpget.getRequestLine());
    // Create a custom response handler
    ResponseHandler<String> responseHandler = new ResponseHandler<String>() {
        @Override
        public String handleResponse(
                final HttpResponse response) throws ClientProtocolException, IOException {
            int status = response.getStatusLine().getStatusCode();
            if (status >= 200 && status < 300) {
                HttpEntity entity = response.getEntity();
                return entity != null ? EntityUtils.toString(entity) : null;
            } else {
                throw new ClientProtocolException("Unexpected response status: " + status);
            }
        }
    };
    String responseBody = httpclient.execute(httpget, responseHandler);
    System.out.println("----------------------------------------");
    System.out.println(responseBody);
} finally {
    httpclient.close();
}
This is what is quoted for the above example:
This example demonstrates how to process HTTP responses using a response handler. This is the recommended way of executing HTTP requests and processing HTTP responses. This approach enables the caller to concentrate on the process of digesting HTTP responses and to delegate the task of system resource deallocation to HttpClient. The use of an HTTP response handler guarantees that the underlying HTTP connection will be released back to the connection manager automatically in all cases.
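To address the ordering question directly: when a response handler is not used, the two calls can be combined as below (a minimal sketch for HttpClient 4.x, reusing httpclient and httpget from the example above and assuming the enclosing method handles IOException). As far as I understand, calling consume() after toString() is harmless because the content stream has already been fully read, and entity.isStreaming() only indicates that the entity is backed by a stream, not whether that stream is still open.

CloseableHttpResponse response = httpclient.execute(httpget);
try {
    HttpEntity entity = response.getEntity();
    String body = entity != null ? EntityUtils.toString(entity) : null;
    // consume() tolerates a null or already-read entity, so the order
    // toString() -> consume() is safe; it simply ensures the stream is closed.
    EntityUtils.consume(entity);
    System.out.println(body);
} finally {
    response.close();
}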

Does the HttpGet / HttpPost abort() method abort the request even if it is taking more time to establish the connection?

I have a scenario where, in certain cases, the request needs to be terminated based on an alternate configuration. From https://www.baeldung.com/httpclient-timeout I understood that we can set a hard timeout, but I am not sure how to test this.
Does the code below abort the request within the given time even in a connection, socket, or read timeout scenario?
HttpGet getMethod = new HttpGet(
        "http://localhost:8080/httpclient-simple/api/bars/1");
int hardTimeout = 5; // seconds
TimerTask task = new TimerTask() {
    @Override
    public void run() {
        if (getMethod != null) {
            getMethod.abort();
        }
    }
};
new Timer(true).schedule(task, hardTimeout * 1000);
HttpResponse response = httpClient.execute(getMethod);
For instance, if the connection timeout is set to 10 seconds and establishing the connection takes more than 10 seconds, does the request still terminate after 5 seconds? Similarly for the other timeout scenarios.
If the Apache HttpClient library does not support this, is there an alternative?
Thanks in advance.
Look here for setting connection and read timeouts with the Apache HTTP client.
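As a rough sketch of what that usually looks like (the values are illustrative): the three standard HttpClient 4.x timeouts are configured through RequestConfig. Note that none of them is a total-request hard timeout, which is why the TimerTask/abort() approach above is sometimes combined with them.

RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout(10000)            // max time to establish the TCP connection
        .setConnectionRequestTimeout(2000)   // max time to obtain a connection from the pool
        .setSocketTimeout(5000)              // max inactivity between two data packets
        .build();

CloseableHttpClient client = HttpClients.custom()
        .setDefaultRequestConfig(requestConfig)
        .build();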

Having trouble implementing PoolingHttpClientConnectionManager

I am trying to implement HTTP connection pooling in my Java code, and when I try to use it I get a handshake exception. If I take out the one line that sets the connection manager, it works. This makes no sense to me. I am using these jar files:
httpclient-4.5.2.jar
httpcore-4.4.4.jar
With connection pooling in place:
RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout(10000)
        .setConnectionRequestTimeout(10000)
        .setSocketTimeout(5000)
        .build();

SSLContext sslContext = SSLContexts.custom()
        .loadKeyMaterial(readStore(), KEYSTOREPASS)
        .build();

HttpClientConnectionManager poolingConnManager = new PoolingHttpClientConnectionManager();

httpClient = HttpClients.custom()
        .setConnectionManager(poolingConnManager)
        .setDefaultRequestConfig(requestConfig)
        .setSSLContext(sslContext)
        .build();
It throws a "Received fatal alert: handshake_failure" exception:
main, RECV TLSv1.2 ALERT: fatal, handshake_failure
%% Invalidated: [Session-1, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]
%% Invalidated: [Session-2, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]
main, called closeSocket()
main, handling exception: javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
main, called close()
main, called closeInternal(true)
00:22:51,523 ERROR TestHttps:155 - Received fatal alert: handshake_failure
With connection pooling commented out:
RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout(10000)
        .setConnectionRequestTimeout(10000)
        .setSocketTimeout(5000)
        .build();

SSLContext sslContext = SSLContexts.custom()
        .loadKeyMaterial(readStore(), KEYSTOREPASS) // use null as second param if you don't have a separate key password
        .build();

HttpClientConnectionManager poolingConnManager = new PoolingHttpClientConnectionManager();

httpClient = HttpClients.custom()
        //.setConnectionManager(poolingConnManager)
        .setDefaultRequestConfig(requestConfig)
        .setSSLContext(sslContext)
        .build();
it works successfully and returns my value (obfuscated here):
main, READ: TLSv1.2 Application Data, length = 40
Padded plaintext after DECRYPTION: len = 16
0000: xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx 1234567890
main, called close()
main, called closeInternal(true)
main, SEND TLSv1.2 ALERT: warning, description = close_notify
Padded plaintext before ENCRYPTION: len = 2
0000: 01 00 ..
main, WRITE: TLSv1.2 Alert, length = 26
[Raw write]: length = 31
0000: 15 03 03 00 1A 00 00 00 00 00 00 00 01 2C 7D 6E .............,.n
0010: 66 04 BA 1D FF 4A EB 54 0F 60 C7 A4 41 4A 68 f....J.T.`..AJh
main, called closeSocket(true)
What am I doing wrong? Thanks
I found this documentation for a standard implementation of the HTTP connection manager:
HTTP connection managers
2.3.1. Managed connections and connection managers
HTTP connections are complex, stateful, thread-unsafe objects which need to be properly managed to function correctly. HTTP connections can only be used by one execution thread at a time. HttpClient employs a special entity to manage access to HTTP connections called HTTP connection manager and represented by the HttpClientConnectionManager interface. The purpose of an HTTP connection manager is to serve as a factory for new HTTP connections, to manage life cycle of persistent connections and to synchronize access to persistent connections making sure that only one thread can have access to a connection at a time. Internally HTTP connection managers work with instances of ManagedHttpClientConnection acting as a proxy for a real connection that manages connection state and controls execution of I/O operations. If a managed connection is released or get explicitly closed by its consumer the underlying connection gets detached from its proxy and is returned back to the manager. Even though the service consumer still holds a reference to the proxy instance, it is no longer able to execute any I/O operations or change the state of the real connection either intentionally or unintentionally.
This is an example of acquiring a connection from a connection manager:
HttpClientContext context = HttpClientContext.create();
HttpClientConnectionManager connMrg = new BasicHttpClientConnectionManager();
HttpRoute route = new HttpRoute(new HttpHost("localhost", 80));
// Request new connection. This can be a long process
ConnectionRequest connRequest = connMrg.requestConnection(route, null);
// Wait for connection up to 10 sec
HttpClientConnection conn = connRequest.get(10, TimeUnit.SECONDS);
try {
    // If not open
    if (!conn.isOpen()) {
        // establish connection based on its route info
        connMrg.connect(conn, route, 1000, context);
        // and mark it as route complete
        connMrg.routeComplete(conn, route, context);
    }
    // Do useful things with the connection.
} finally {
    connMrg.releaseConnection(conn, null, 1, TimeUnit.MINUTES);
}
The connection request can be terminated prematurely by calling ConnectionRequest#cancel() if necessary. This will unblock the thread blocked in the ConnectionRequest#get() method.
2.3.2. Simple connection manager
BasicHttpClientConnectionManager is a simple connection manager that maintains only one connection at a time. Even though this class is thread-safe it ought to be used by one execution thread only. BasicHttpClientConnectionManager will make an effort to reuse the connection for subsequent requests with the same route. It will, however, close the existing connection and re-open it for the given route, if the route of the persistent connection does not match that of the connection request. If the connection has already been allocated, then java.lang.IllegalStateException is thrown.
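For example, wiring this single-connection manager into a client uses the same builder call shown later for the pooling manager (a minimal sketch):

BasicHttpClientConnectionManager basicConnManager = new BasicHttpClientConnectionManager();
CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(basicConnManager)
        .build();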
Note: this connection manager implementation should be used inside an EJB container; maybe you are using it in a main method, and because of this it is creating the issue.
2.3.3. Pooling connection manager
PoolingHttpClientConnectionManager is a more complex implementation that manages a pool of client connections and is able to service connection requests from multiple execution threads. Connections are pooled on a per route basis. A request for a route for which the manager already has a persistent connection available in the pool will be serviced by leasing a connection from the pool rather than creating a brand new connection.
PoolingHttpClientConnectionManager maintains a maximum limit of connections on a per route basis and in total. Per default this implementation will create no more than 2 concurrent connections per given route and no more than 20 connections in total. For many real-world applications these limits may prove too constraining, especially if they use HTTP as a transport protocol for their services.
This example shows how the connection pool parameters can be adjusted:
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
// Increase max total connection to 200
cm.setMaxTotal(200);
// Increase default max connection per route to 20
cm.setDefaultMaxPerRoute(20);
// Increase max connections for localhost:80 to 50
HttpHost localhost = new HttpHost("localhost", 80);
cm.setMaxPerRoute(new HttpRoute(localhost), 50);
CloseableHttpClient httpClient = HttpClients.custom()
.setConnectionManager(cm)
.build();
2.3.4. Connection manager shutdown
When an HttpClient instance is no longer needed and is about to go out of scope it is important to shut down its connection manager to ensure that all connections kept alive by the manager get closed and system resources allocated by those connections are released.
CloseableHttpClient httpClient = <...>
httpClient.close();
2.4. Multithreaded request execution
When equipped with a pooling connection manager such as PoolingClientConnectionManager, HttpClient can be used to execute multiple requests simultaneously using multiple threads of execution.
The PoolingClientConnectionManager will allocate connections based on its configuration. If all connections for a given route have already been leased, a request for a connection will block until a connection is released back to the pool. One can ensure the connection manager does not block indefinitely in the connection request operation by setting 'http.conn-manager.timeout' to a positive value. If the connection request cannot be serviced within the given time period ConnectionPoolTimeoutException will be thrown.
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
CloseableHttpClient httpClient = HttpClients.custom()
        .setConnectionManager(cm)
        .build();

// URIs to perform GETs on
String[] urisToGet = {
    "http://www.domain1.com/",
    "http://www.domain2.com/",
    "http://www.domain3.com/",
    "http://www.domain4.com/"
};

// create a thread for each URI
GetThread[] threads = new GetThread[urisToGet.length];
for (int i = 0; i < threads.length; i++) {
    HttpGet httpget = new HttpGet(urisToGet[i]);
    threads[i] = new GetThread(httpClient, httpget);
}

// start the threads
for (int j = 0; j < threads.length; j++) {
    threads[j].start();
}

// join the threads
for (int j = 0; j < threads.length; j++) {
    threads[j].join();
}
While HttpClient instances are thread safe and can be shared between multiple threads of execution, it is highly recommended that each thread maintains its own dedicated instance of HttpContext .
static class GetThread extends Thread {

    private final CloseableHttpClient httpClient;
    private final HttpContext context;
    private final HttpGet httpget;

    public GetThread(CloseableHttpClient httpClient, HttpGet httpget) {
        this.httpClient = httpClient;
        this.context = HttpClientContext.create();
        this.httpget = httpget;
    }

    @Override
    public void run() {
        try {
            CloseableHttpResponse response = httpClient.execute(
                    httpget, context);
            try {
                HttpEntity entity = response.getEntity();
            } finally {
                response.close();
            }
        } catch (ClientProtocolException ex) {
            // Handle protocol errors
        } catch (IOException ex) {
            // Handle I/O errors
        }
    }

}
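Regarding the 'http.conn-manager.timeout' setting mentioned above: in HttpClient 4.3+ it corresponds to the connection request timeout on RequestConfig. A minimal sketch, reusing the cm from the example above (the 5000 ms value is illustrative):

RequestConfig config = RequestConfig.custom()
        .setConnectionRequestTimeout(5000) // max wait for a pooled connection, in milliseconds
        .build();
CloseableHttpClient clientWithPoolTimeout = HttpClients.custom()
        .setConnectionManager(cm)
        .setDefaultRequestConfig(config)
        .build();
// A request now fails with ConnectionPoolTimeoutException if no connection
// for its route becomes available within 5 seconds.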
2.5. Connection eviction policy
One of the major shortcomings of the classic blocking I/O model is that the network socket can react to I/O events only when blocked in an I/O operation. When a connection is released back to the manager, it can be kept alive however it is unable to monitor the status of the socket and react to any I/O events. If the connection gets closed on the server side, the client side connection is unable to detect the change in the connection state (and react appropriately by closing the socket on its end).
HttpClient tries to mitigate the problem by testing whether the connection is 'stale', that is no longer valid because it was closed on the server side, prior to using the connection for executing an HTTP request. The stale connection check is not 100% reliable. The only feasible solution that does not involve a one thread per socket model for idle connections is a dedicated monitor thread used to evict connections that are considered expired due to a long period of inactivity. The monitor thread can periodically call ClientConnectionManager#closeExpiredConnections() method to close all expired connections and evict closed connections from the pool. It can also optionally call ClientConnectionManager#closeIdleConnections() method to close all connections that have been idle over a given period of time.
public static class IdleConnectionMonitorThread extends Thread {

    private final HttpClientConnectionManager connMgr;
    private volatile boolean shutdown;

    public IdleConnectionMonitorThread(HttpClientConnectionManager connMgr) {
        super();
        this.connMgr = connMgr;
    }

    @Override
    public void run() {
        try {
            while (!shutdown) {
                synchronized (this) {
                    wait(5000);
                    // Close expired connections
                    connMgr.closeExpiredConnections();
                    // Optionally, close connections
                    // that have been idle longer than 30 sec
                    connMgr.closeIdleConnections(30, TimeUnit.SECONDS);
                }
            }
        } catch (InterruptedException ex) {
            // terminate
        }
    }

    public void shutdown() {
        shutdown = true;
        synchronized (this) {
            notifyAll();
        }
    }

}
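For instance, the monitor might be started next to the connection manager it watches and stopped when the client shuts down (a minimal usage sketch):

PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
IdleConnectionMonitorThread monitor = new IdleConnectionMonitorThread(cm);
monitor.setDaemon(true); // do not keep the JVM alive just for the monitor
monitor.start();
// ... build a client on cm and execute requests ...
monitor.shutdown();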
2.6. Connection keep alive strategy
The HTTP specification does not specify how long a persistent connection may be and should be kept alive. Some HTTP servers use a non-standard Keep-Alive header to communicate to the client the period of time in seconds they intend to keep the connection alive on the server side. HttpClient makes use of this information if available. If the Keep-Alive header is not present in the response, HttpClient assumes the connection can be kept alive indefinitely. However, many HTTP servers in general use are configured to drop persistent connections after a certain period of inactivity in order to conserve system resources, quite often without informing the client. In case the default strategy turns out to be too optimistic, one may want to provide a custom keep-alive strategy.
ConnectionKeepAliveStrategy myStrategy = new ConnectionKeepAliveStrategy() {

    public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
        // Honor 'keep-alive' header
        HeaderElementIterator it = new BasicHeaderElementIterator(
                response.headerIterator(HTTP.CONN_KEEP_ALIVE));
        while (it.hasNext()) {
            HeaderElement he = it.nextElement();
            String param = he.getName();
            String value = he.getValue();
            if (value != null && param.equalsIgnoreCase("timeout")) {
                try {
                    return Long.parseLong(value) * 1000;
                } catch (NumberFormatException ignore) {
                }
            }
        }
        HttpHost target = (HttpHost) context.getAttribute(
                HttpClientContext.HTTP_TARGET_HOST);
        if ("www.naughty-server.com".equalsIgnoreCase(target.getHostName())) {
            // Keep alive for 5 seconds only
            return 5 * 1000;
        } else {
            // otherwise keep alive for 30 seconds
            return 30 * 1000;
        }
    }

};
CloseableHttpClient client = HttpClients.custom()
        .setKeepAliveStrategy(myStrategy)
        .build();
2.7. Connection socket factories
HTTP connections make use of a java.net.Socket object internally to handle transmission of data across the wire. However they rely on the ConnectionSocketFactory interface to create, initialize and connect sockets. This enables the users of HttpClient to provide application specific socket initialization code at runtime. PlainConnectionSocketFactory is the default factory for creating and initializing plain (unencrypted) sockets.
The process of creating a socket and that of connecting it to a host are decoupled, so that the socket could be closed while being blocked in the connect operation.
HttpClientContext clientContext = HttpClientContext.create();
PlainConnectionSocketFactory sf = PlainConnectionSocketFactory.getSocketFactory();
Socket socket = sf.createSocket(clientContext);
int timeout = 1000; //ms
HttpHost target = new HttpHost("localhost");
InetSocketAddress remoteAddress = new InetSocketAddress(
InetAddress.getByAddress(new byte[] {127,0,0,1}), 80);
sf.connectSocket(timeout, socket, target, remoteAddress, null, clientContext);
2.7.1. Secure socket layering
LayeredConnectionSocketFactory is an extension of the ConnectionSocketFactory interface. Layered socket factories are capable of creating sockets layered over an existing plain socket. Socket layering is used primarily for creating secure sockets through proxies. HttpClient ships with SSLSocketFactory that implements SSL/TLS layering. Please note HttpClient does not use any custom encryption functionality. It is fully reliant on standard Java Cryptography (JCE) and Secure Sockets (JSSE) extensions.
2.7.2. Integration with connection manager
Custom connection socket factories can be associated with a particular protocol scheme such as HTTP or HTTPS and then used to create a custom connection manager.
ConnectionSocketFactory plainsf = <...>
LayeredConnectionSocketFactory sslsf = <...>
Registry<ConnectionSocketFactory> r = RegistryBuilder.<ConnectionSocketFactory>create()
.register("http", plainsf)
.register("https", sslsf)
.build();
HttpClientConnectionManager cm = new PoolingHttpClientConnectionManager(r);
HttpClients.custom()
.setConnectionManager(cm)
.build();
2.7.3. SSL/TLS customization
HttpClient makes use of SSLConnectionSocketFactory to create SSL connections. SSLConnectionSocketFactory allows for a high degree of customization. It can take an instance of javax.net.ssl.SSLContext as a parameter and use it to create custom configured SSL connections.
KeyStore myTrustStore = <...>
SSLContext sslContext = SSLContexts.custom()
.loadTrustMaterial(myTrustStore)
.build();
SSLConnectionSocketFactory sslsf = new SSLConnectionSocketFactory(sslContext);
Customization of SSLConnectionSocketFactory implies a certain degree of familiarity with the concepts of the SSL/TLS protocol, a detailed explanation of which is out of scope for this document. Please refer to the Java™ Secure Socket Extension (JSSE) Reference Guide for a detailed description of javax.net.ssl.SSLContext and related tools.
Hostname verification
In addition to the trust verification and the client authentication performed on the SSL/TLS protocol level, HttpClient can optionally verify whether the target hostname matches the names stored inside the server's X.509 certificate, once the connection has been established. This verification can provide additional guarantees of authenticity of the server trust material. The javax.net.ssl.HostnameVerifier interface represents a strategy for hostname verification. HttpClient ships with two javax.net.ssl.HostnameVerifier implementations. Important: hostname verification should not be confused with SSL trust verification.
DefaultHostnameVerifier: The default implementation used by HttpClient is expected to be compliant with RFC 2818. The hostname must match any of alternative names specified by the certificate, or in case no alternative names are given the most specific CN of the certificate subject. A wildcard can occur in the CN, and in any of the subject-alts.
NoopHostnameVerifier: This hostname verifier essentially turns hostname verification off. It accepts any SSL session as valid and matching the target host.
Per default HttpClient uses the DefaultHostnameVerifier implementation. One can specify a different hostname verifier implementation if desired
SSLContext sslContext = SSLContexts.createSystemDefault();
SSLConnectionSocketFactory sslsf = new SSLConnectionSocketFactory(
sslContext,
NoopHostnameVerifier.INSTANCE);
As of version 4.4 HttpClient uses the public suffix list kindly maintained by Mozilla Foundation to make sure that wildcards in SSL certificates cannot be misused to apply to multiple domains with a common top-level domain. HttpClient ships with a copy of the list retrieved at the time of the release. The latest revision of the list can be found at https://publicsuffix.org/list/. It is highly advisable to make a local copy of the list and download the list no more than once per day from its original location.
PublicSuffixMatcher publicSuffixMatcher = PublicSuffixMatcherLoader.load(
PublicSuffixMatcher.class.getResource("my-copy-effective_tld_names.dat"));
DefaultHostnameVerifier hostnameVerifier = new DefaultHostnameVerifier(publicSuffixMatcher);
One can disable verification against the public suffix list by using a null matcher.
DefaultHostnameVerifier hostnameVerifier = new DefaultHostnameVerifier(null);
HttpClient proxy configuration
Even though HttpClient is aware of complex routing schemes and proxy chaining, it supports only simple direct or one hop proxy connections out of the box.
The simplest way to tell HttpClient to connect to the target host via a proxy is by setting the default proxy parameter:
HttpHost proxy = new HttpHost("someproxy", 8080);
DefaultProxyRoutePlanner routePlanner = new DefaultProxyRoutePlanner(proxy);
CloseableHttpClient httpclient = HttpClients.custom()
.setRoutePlanner(routePlanner)
.build();
One can also instruct HttpClient to use the standard JRE proxy selector to obtain proxy information:
SystemDefaultRoutePlanner routePlanner = new SystemDefaultRoutePlanner(
ProxySelector.getDefault());
CloseableHttpClient httpclient = HttpClients.custom()
.setRoutePlanner(routePlanner)
.build();
Alternatively, one can provide a custom RoutePlanner implementation in order to have a complete control over the process of HTTP route computation:
HttpRoutePlanner routePlanner = new HttpRoutePlanner() {

    public HttpRoute determineRoute(
            HttpHost target,
            HttpRequest request,
            HttpContext context) throws HttpException {
        return new HttpRoute(target, null, new HttpHost("someproxy", 8080),
                "https".equalsIgnoreCase(target.getSchemeName()));
    }

};
CloseableHttpClient httpclient = HttpClients.custom()
        .setRoutePlanner(routePlanner)
        .build();

How to detect whether the endpoint (Kaa SDK) is connected to the Kaa server or not from the application

Is there any mechanism, method, or set of steps to detect the endpoint's (Kaa SDK) connectivity to the Kaa server from the application?
If not, how can we identify failed devices remotely? Or how can we identify devices that are not able to communicate with the Kaa server after they have been deployed in the field?
How can one achieve this requirement to unlock the power of IoT?
If your endpoint meets some problem connecting to the Kaa server, a "failover" will happen.
So you must define your own failover strategy and set it for your Kaa client. Every time a failover happens, the strategy's onFailover() method will be called.
Below you can see a code example for the Java SDK.
import org.kaaproject.kaa.client.DesktopKaaPlatformContext;
import org.kaaproject.kaa.client.Kaa;
import org.kaaproject.kaa.client.KaaClient;
import org.kaaproject.kaa.client.SimpleKaaClientStateListener;
import org.kaaproject.kaa.client.channel.failover.FailoverDecision;
import org.kaaproject.kaa.client.channel.failover.FailoverStatus;
import org.kaaproject.kaa.client.channel.failover.strategies.DefaultFailoverStrategy;
import org.kaaproject.kaa.client.exceptions.KaaRuntimeException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
/**
* A demo application that shows how to use the Kaa credentials API.
*/
public class CredentialsDemo {
private static final Logger LOG = LoggerFactory.getLogger(CredentialsDemo.class);
private static KaaClient kaaClient;
public static void main(String[] args) throws InterruptedException, IOException {
LOG.info("Demo application started");
try {
// Create a Kaa client and add a startup listener
kaaClient = Kaa.newClient(new DesktopKaaPlatformContext(), new SimpleKaaClientStateListener() {
@Override
public void onStarted() {
super.onStarted();
LOG.info("Kaa client started");
}
}, true);
kaaClient.setFailoverStrategy(new CustomFailoverStrategy());
kaaClient.start();
// ... Do some work ...
LOG.info("Stopping application.");
kaaClient.stop();
} catch (KaaRuntimeException e) {
LOG.info("Cannot connect to server - no credentials found.");
LOG.info("Stopping application.");
}
}
// Give a possibility to manage device behavior when it loses connection
// or has other problems dealing with Kaa server.
private static class CustomFailoverStrategy extends DefaultFailoverStrategy {
@Override
public FailoverDecision onFailover(FailoverStatus failoverStatus) {
LOG.info("Failover happen. Failover type: " + failoverStatus);
// See enum DefaultFailoverStrategy from package org.kaaproject.kaa.client.channel.failover
// to list all possible values
switch (failoverStatus) {
case CURRENT_BOOTSTRAP_SERVER_NA:
LOG.info("Current Bootstrap server is not available. Trying connect to another one.");
// ... Do some recovery, send notification messages, etc. ...
// Trying to connect to another bootstrap node one-by-one every 5 seconds
return new FailoverDecision(FailoverDecision.FailoverAction.USE_NEXT_BOOTSTRAP, 5L, TimeUnit.SECONDS);
default:
return super.onFailover(failoverStatus);
}
}
}
}
UPDATED (2016/10/28)
From the server side you can check the endpoint credentials status, as shown in the method checkCredentialsStatus() in the code below. The status IN_USE shows that the endpoint has had at least one successful connection attempt.
Unfortunately, in the current Kaa version there is no way to directly check whether an endpoint is connected to the server or not. I describe workarounds after the code example.
package org.kaaproject.kaa.examples.credentials.kaa;
import org.kaaproject.kaa.common.dto.ApplicationDto;
import org.kaaproject.kaa.common.dto.admin.AuthResultDto;
import org.kaaproject.kaa.common.dto.credentials.CredentialsStatus;
import org.kaaproject.kaa.examples.credentials.utils.IOUtils;
import org.kaaproject.kaa.server.common.admin.AdminClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.List;
public class KaaAdminManager {
private static final Logger LOG = LoggerFactory.getLogger(KaaAdminManager.class);
private static final int DEFAULT_KAA_PORT = 8080;
private static final String APPLICATION_NAME = "Credentials demo";
public String tenantAdminUsername = "admin";
public String tenantAdminPassword = "admin123";
private AdminClient adminClient;
public KaaAdminManager(String sandboxIp) {
this.adminClient = new AdminClient(sandboxIp, DEFAULT_KAA_PORT);
}
// ...
/**
* Check credentials status for getting information
* @return credential status
*/
public void checkCredentialsStatus() {
LOG.info("Enter endpoint ID:");
// Reads endpoint ID (aka "endpoint key hash") from user input
String endpointId = IOUtils.getUserInput().trim();
LOG.info("Getting credentials status...");
try {
ApplicationDto app = getApplicationByName(APPLICATION_NAME);
String appToken = app.getApplicationToken();
// CredentialsStatus can be: AVAILABLE, IN_USE, REVOKED
// if endpoint is not found on Kaa server, exception will be thrown
CredentialsStatus status = adminClient.getCredentialsStatus(appToken, endpointId);
LOG.info("Credentials for endpoint ID = {} are now in status: {}", endpointId, status.toString());
} catch (Exception e) {
LOG.error("Get credentials status for endpoint ID = {} failed. Error: {}", endpointId, e.getMessage());
}
}
/**
* Get application object by specified application name
*/
private ApplicationDto getApplicationByName(String applicationName) {
checkAuthorizationAndLogin();
try {
List<ApplicationDto> applications = adminClient.getApplications();
for (ApplicationDto application : applications) {
if (application.getName().trim().equals(applicationName)) {
return application;
}
}
} catch (Exception e) {
LOG.error("Exception has occurred: " + e.getMessage());
}
return null;
}
/**
* Checks authorization and log in
*/
private void checkAuthorizationAndLogin() {
if (!checkAuth()) {
adminClient.login(tenantAdminUsername, tenantAdminPassword);
}
}
/**
* Do authorization check
* @return true if user is authorized, false otherwise
*/
private boolean checkAuth() {
AuthResultDto.Result authResult = null;
try {
authResult = adminClient.checkAuth().getAuthResult();
} catch (Exception e) {
LOG.error("Exception has occurred: " + e.getMessage());
}
return authResult == AuthResultDto.Result.OK;
}
}
You can see more examples of using AdminClient in the KaaAdminManager class of the Credentials Demo Application from the Kaa sample-apps project on GitHub.
Known workarounds
Using Kaa Notifications in conjunction with the Kaa Data Collection feature. The server sends a specific unicast notification to the endpoint (using the endpoint ID), then the endpoint replies by sending data with the Data Collection feature. The server waits a bit and checks the timestamp of the last appender record (typically in a database) for your endpoint (by endpoint ID). All messages go asynchronously, so you must select the response-wait time according to your real environment.
Using the Kaa Data Collection feature only. This method is simpler but has certain performance drawbacks. You can use it if your endpoints must send data to the Kaa server by their nature (measuring sensors, etc.). The endpoint just sends data to the server at regular intervals. When the server needs to check whether an endpoint is "online", it queries the saved data logs (typically a database) for the last record by endpoint ID (key hash) and analyzes the timestamp field.
* To make effective use of the Kaa Data Collection feature, you must add the following metadata in the settings of the selected log appender (in the Kaa Admin UI): "Endpoint key hash" (the same as "Endpoint ID") and "Timestamp". This will automatically add the needed fields to every log record received from endpoints.
I'm new to Kaa myself and unsure whether there is a method to determine that directly in the SDK, but a work-around is that you could have an extra endpoint from which you periodically send an event to all the other endpoints and expect a reply. When an endpoint does not reply, you know there's a problem.

How to wrap a JMS to WebSphere MQ bridge in a synchronous call using the request-reply pattern?

I am just dealing with a new scenario for me, which I believe might be common to some :).
As per the requirements I need to build a user experience that behaves like a synchronous online transaction for a web service call, which actually delegates the call to IBM WebSphere MQ using an asynchronous JMS-to-MQ bridge.
The client calls the web service and then its message should be published to a JMS queue on the app server, which is delivered to WebSphere MQ; after processing, a response is delivered back to the app server on a FIXED JMS queue endpoint.
The requirement is that this transaction must time out in case WebSphere MQ does not deliver the response within a defined amount of time; the web service should then send a timeout signal to the client and ignore the transaction.
The sketch of the problem follows.
I need to block the request on the web service until the response arrives or it times out.
So I am looking for some open library to help me with this task.
Or is the only solution blocking a thread and polling for the response?
Maybe I could implement the block with a listener to be notified when the response arrives?
A bit of discussion would be very helpful for me now to clear my ideas on this.
Any suggestions?
I have a sketch that I hope will help clarify the picture ;)
Hey, thanks for posting your own solution!
Yep, receive() with a timeout is the most elegant way to go in this case.
Beware of what happens to messages that aren't read because of the timeout. If your client accesses the same queue again, it might pick up a stale message.
Make sure the messages that time out are deleted in a timely manner (if for no other reason, then not to fill up the queue with unprocessed messages).
You can do this easily either through code (setting the time-to-live on the message producer; a sketch follows below) or on the WebSphere MQ server (using queues that expire messages automatically).
The latter is easier if you can't/don't want to modify the MQ side of the code. It's what I would do :)
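A minimal sketch of the producer-side option using the standard javax.jms API (queueConnection, responseQueue and correlationId are placeholders standing in for the surrounding code; JMSException handling is omitted):

// Any message sent by this producer is discarded by the broker once it has
// sat on the queue for more than 10 seconds without being consumed.
QueueSession session = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
QueueSender sender = session.createSender(responseQueue);
sender.setTimeToLive(10000); // milliseconds
TextMessage reply = session.createTextMessage("response payload");
reply.setJMSCorrelationID(correlationId);
sender.send(reply);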
After a couple of days of coding I got to a solution for this. I am using standard EJB3 with JAX-WS annotations and standard JMS.
The code I have written so far to meet the requirements follows. It is a stateless session bean with bean-managed transactions (BMT), because using standard container-managed transactions (CMT) was causing some kind of hang. I believe this was because I was trying to put both JMS interactions in the same transaction, as they are in the same method; notice that I had to start and finish a transaction for each interaction with the JMS queues. I am using WebLogic for this solution. I have also coded an MDB which basically consumes the message from the queue endpoint jms/Pergunta and places a response message on the jms/Resposta queue; I did this to mock the expected behavior on the MQ side of the problem. In a real scenario we would probably have some COBOL application on the mainframe, or even another Java application, dealing with the messages and placing the response on the response queue.
If someone needs to try this code, basically all you need is a Java EE 5 container and two queues configured with the JNDI names jms/Pergunta and jms/Resposta.
The EJB/Webservice code:
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
@WebService(name="DJOWebService")
public class DJOSessionBeanWS implements DJOSessionBeanWSLocal {
Logger log = Logger.getLogger(DJOSessionBeanWS.class.getName());
@Resource
SessionContext ejbContext;
// Defines the JMS connection factory.
public final static String JMS_FACTORY = "weblogic.jms.ConnectionFactory";
// Defines request queue
public final static String QUEUE_PERG = "jms/Pergunta";
// Defines response queue
public final static String QUEUE_RESP = "jms/Resposta";
Context ctx;
QueueConnectionFactory qconFactory;
/**
* Default constructor.
*/
public DJOSessionBeanWS() {
log.info("Construtor DJOSessionBeanWS");
}
@WebMethod(operationName = "processaMensagem")
public String processaMensagem(String mensagemEntrada, String idUnica)
{
//gets UserTransaction reference as this is a BMT EJB.
UserTransaction ut = ejbContext.getUserTransaction();
try {
ctx = new InitialContext();
//get the factory before any transaction it is a weblogic resource.
qconFactory = (QueueConnectionFactory) ctx.lookup(JMS_FACTORY);
log.info("Got QueueConnectionFactory");
ut.begin();
QueueConnection qcon = qconFactory.createQueueConnection();
QueueSession qsession = qcon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
Queue qs = (Queue) (new InitialContext().lookup("jms/Pergunta"));
TextMessage message = qsession.createTextMessage("this is a request message");
message.setJMSCorrelationID(idUnica);
qsession.createSender(qs).send(message);
ut.commit();
qcon.close();
// had to finish and start a new transaction; I also decided to get new references for all JMS-related objects, not sure if this is REALLY required
ut.begin();
QueueConnection queuecon = qconFactory.createQueueConnection();
Queue qreceive = (Queue) (new InitialContext().lookup("jms/Resposta"));
QueueSession queuesession = queuecon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
String messageSelector = "JMSCorrelationID = '" + idUnica + "'";
// creates the receiver and sets a message selector to get only the related message from the response queue.
QueueReceiver qr = queuesession.createReceiver(qreceive, messageSelector);
queuecon.start();
//sets the timeout to keep waiting for the response...
TextMessage tresposta = (TextMessage) qr.receive(10000);
if(tresposta != null)
{
ut.commit();
queuecon.close();
return(tresposta.toString());
}
else{
// commits anyway, even though there is no response
ut.commit();
queuecon.close();
log.info("null reply, returned by timeout..");
return "Got no reponse message.";
}
} catch (Exception e) {
log.severe("Unexpected error occurred ==>> " + e.getMessage());
e.printStackTrace();
try {
ut.commit();
} catch (Exception ex) {
ex.printStackTrace();
}
return "Error committing transaction after some other error executing ==> " + e.getMessage();
}
}
}
And this is the code for the MDB which mocks the MQ side of this problem. I had a Thread.sleep fragment during my tests to simulate and test the timeout on the client side to validate the solution but it is not present in this version.
/**
* Mock to get message from request queue and publish a new one on the response queue.
*/
@MessageDriven(
activationConfig = { @ActivationConfigProperty(
propertyName = "destinationType", propertyValue = "javax.jms.Queue"
) },
mappedName = "jms/Pergunta")
public class ConsomePerguntaPublicaRespostaMDB implements MessageListener {
Logger log = Logger.getLogger(ConsomePerguntaPublicaRespostaMDB.class.getName());
// Defines the JMS connection factory.
public final static String JMS_FACTORY = "weblogic.jms.ConnectionFactory";
// Defines the response queue
public final static String QUEUE_RESP = "jms/Resposta";
Context ctx;
QueueConnectionFactory qconFactory;
/**
* Default constructor.
*/
public ConsomePerguntaPublicaRespostaMDB() {
log.info("Executou construtor ConsomePerguntaPublicaRespostaMDB");
try {
ctx = new InitialContext();
} catch (NamingException e) {
e.printStackTrace();
}
}
/**
* @see MessageListener#onMessage(Message)
*/
public void onMessage(Message message) {
log.info("Recuperou mensagem da fila jms/FilaPergunta, executando ConsomePerguntaPublicaRespostaMDB.onMessage");
TextMessage tm = (TextMessage) message;
try {
log.info("Mensagem recebida no onMessage ==>> " + tm.getText());
// gets the id of the message from the request queue in order to set it correctly on the response message.
String idMensagem = tm.getJMSCorrelationID();
log.info("Id de mensagem que sera usada na resposta ==>> " + idMensagem);
qconFactory = (QueueConnectionFactory) ctx.lookup(JMS_FACTORY);
log.info("Inicializou contexto jndi e deu lookup na QueueConnectionFactory do weblogic com sucesso. Enviando mensagem");
QueueConnection qcon = qconFactory.createQueueConnection();
QueueSession qsession = qcon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = (Queue) (ctx.lookup("jms/Resposta"));
TextMessage tmessage = qsession.createTextMessage("Mensagem jms para postar na fila de resposta...");
tmessage.setJMSCorrelationID(idMensagem);
qsession.createSender(queue).send(tmessage);
} catch (JMSException e) {
log.severe("Erro no onMessage ==>> " + e.getMessage());
e.printStackTrace();
} catch (NamingException e) {
log.severe("Erro no lookup ==>> " + e.getMessage());
e.printStackTrace();
}
}
}
Regards
