We are using Netflix Feign to connect to a downstream service, but our Request.Options connect and read timeouts are not working.
This is how we are passing parameters to the builder:
Feign.builder()
.client(new OkHttpClient(okHttpClient))
.encoder(new GsonEncoder())
.decoder(new GsonDecoder())
.options(new Request.Options(connectTimeoutInMS, readTimeoutInMs))
.target(*,*);
We have set readTimeout and connectTimeout to 1 second.
But what we see is that even when the target takes more than 1 second to respond, it does not time out and keeps trying to connect.
Your Request.Options configuration is not working because you are supplying your own OkHttpClient. According to Feign's documentation:
OkHttpClient directs Feign's http requests to OkHttp, which enables SPDY and better network control.
So, if your OkHttpClient doesn't have these values defined, it will use the default values, which are 10000 ms (you can find these defaults around line 373 here: https://github.com/square/okhttp/blob/master/okhttp/src/main/java/okhttp3/OkHttpClient.java). You should therefore configure your OkHttpClient like this:
OkHttpClient okHttpClient = new OkHttpClient();
okHttpClient.setConnectTimeout(timeout, TimeUnit.MILLISECONDS);
okHttpClient.setReadTimeout(timeout, TimeUnit.MILLISECONDS);
okHttpClient.setWriteTimeout(timeout, TimeUnit.MILLISECONDS);
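Then pass the configured client to the Feign builder. A hedged sketch of the full wiring, assuming the feign-okhttp module and OkHttp 3.x (where the timeouts live on a builder rather than the setters above; YourApi and baseUrl are placeholders):

okhttp3.OkHttpClient okHttpClient = new okhttp3.OkHttpClient.Builder()
        .connectTimeout(1, TimeUnit.SECONDS)  // connect timeout enforced by OkHttp
        .readTimeout(1, TimeUnit.SECONDS)     // read timeout enforced by OkHttp
        .writeTimeout(1, TimeUnit.SECONDS)
        .build();

YourApi api = Feign.builder()
        .client(new feign.okhttp.OkHttpClient(okHttpClient)) // Feign delegates the HTTP call to OkHttp
        .encoder(new GsonEncoder())
        .decoder(new GsonDecoder())
        .target(YourApi.class, baseUrl);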
I have a Spring Boot application that makes an API call to a remote service and parses the incoming data.
I am connecting to the secure site using a .pfx and a truststore.jks. I have used plain Java HttpsURLConnection to establish the SSL connection, write the output (JSON payload), and then read the input (JSON response). It used to work fine, but now it has started giving a Connection Reset error.
A partial error stack trace is below:
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
I tried the alternative of using RestTemplate, building the SSLContext that way, and it works fine.
What could be the reason the HttpsURLConnection code is now not working? Is there a difference between the SSL contexts built in these two ways? Below are the two different creations:
Using HttpsURLConnection:
SSLContext sslContext = null;
sslContext = SSLContext.getInstance("TLSv1.1");
sslContext.init(kms, tms, new SecureRandom());
HttpsURLConnection.setDefaultSSLSocketFactory(sslContext.getSocketFactory());
Using RestTemplate:
SSLContext sslContext2 = SSLContextBuilder.create()
.loadKeyMaterial(ResourceUtils.getFile(<PFX File Path>),<store_password>,
<key_password>)
.loadTrustMaterial(ResourceUtils.getFile(<truststore path>),
<truststore_password>).build();
HttpClient client = HttpClients.custom()
.setSSLContext(sslContext2)
.build();
RestTemplateBuilder builder = new RestTemplateBuilder();
RestTemplate restTemplate = builder.requestFactory(() -> new HttpComponentsClientHttpRequestFactory(client)).build();
I am not even sure whether the difference is in how the SSL context is built or something else. I am testing with the same data in both cases, and the same data was working earlier.
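For reference, one concrete difference between the two snippets above: SSLContextBuilder.create() without an explicit protocol uses "TLS", which lets the JVM negotiate the highest version both sides support, while SSLContext.getInstance("TLSv1.1") caps the handshake at TLS 1.1 by default. A hedged sketch of building the HttpsURLConnection context the same way (kms and tms as in the snippet above):

SSLContext sslContext = SSLContext.getInstance("TLS"); // negotiate, rather than pin TLSv1.1
sslContext.init(kms, tms, new SecureRandom());
HttpsURLConnection.setDefaultSSLSocketFactory(sslContext.getSocketFactory());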
I am implementing an ApiGateway-to-microservice communication protocol in my app with MassTransit and RabbitMQ. This protocol is meant to replace "traditional" REST API communication between the ApiGateway and microservices (I am talking about simple request-response here and not about any kind of events, sagas, etc.). So on the microservice side I have consumers (which respond to requests) and on the ApiGateway side I have request clients. Usually a microservice has, let's say, ~10 consumers (for example, the OrderingMicroservice has consumers for the following requests: CreateOrder, UpdateOrder, GetOrderById, ListUserOrders, etc.). I am trying to figure out the best topology (MassTransit + RabbitMQ) for this scenario.
Here are my goals, at least I think it should work like this:
A. Request messages (that are routed to a consumer queue) should be durable for a short time only (for example 20s) and then be removed from the consumer queue (with the request client receiving a timeout error), not routed to any other queue. So when a microservice is temporarily down, or temporarily too busy to receive the next request from the queue, request messages should be kept in the queue for 20s and then disappear.
B. Since the RequestClient should time out after ~20s, response messages (that are routed to the client's "response queue") should also be durable only for a short time (~20s); then they can disappear. If the ApiGW is offline or too busy to receive responses, the responses should be discarded.
So basically I want to use MassTransit/RabbitMQ as a short-lived buffer between ApiGW and microservice(s).
// ApiGw MassTransit configuration
services.AddMassTransit(x =>
{
    x.SetKebabCaseEndpointNameFormatter();

    x.UsingRabbitMq((context, cfg) =>
    {
    });

    x.AddRequestClient<ICreateGroupPayload>();
});

// Service MassTransit configuration
services.AddMassTransit(x =>
{
    x.SetKebabCaseEndpointNameFormatter();

    var entryAssembly = Assembly.GetEntryAssembly();
    x.AddConsumers(entryAssembly);

    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.ConfigureEndpoints(context);
    });
});

// Single consumer definition in service
public class CreateGroupActionDefinition : ConsumerDefinition<CreateGroupAction>
{
    public CreateGroupActionDefinition()
    {
        EndpointName = "group-service";
    }
}
This setup creates the following exchanges and queues:
exchange ICreateGroupPayload (fanout, durable) => bind exchange:group-service
exchange group-service (fanout, durable) => bind queue:group-service
exchange PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3 (fanout, autoDelete) => bind queue:PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3
queue group-service (durable)
queue PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3 (x-expires: 60000)
When I terminate the ApiGw, the following exchanges/queues are removed from RabbitMQ within ~1 min:
exchange PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3
queue PublicGateway_bus_4wdoyyro5ycgmgbybdcx1gp3r3
My questions are:
Should I use separate queues (endpoint names) for different consumers in a microservice? Or can I use the same queue (group-service, for example) for different consumers/message types?
How can I modify my configuration to set an expiration time on my consumer queues? Right now they are durable, but I want messages to be removed after ~20s. I also think such a queue should not be deleted when its consumer disconnects, because the client should still be able to send requests while the consumer is offline (but only for 20s).
How can I modify my configuration to set the expiration time on my request client's response queue to 20s (currently it seems to be 60s by default)?
Maybe someone has other suggestions on how to adjust the topology to best fit this scenario? The aim is to keep the setup as fast as possible for simple request-response, plus short-lived buffering for edge cases.
All the work is done by MassTransit, as you can see from the request documentation. You can change the default request timeout from 30 seconds to 20 seconds when adding the request client to the container. There is also an .AddGenericRequestClient() method to automatically add request clients for whatever request type is needed.
You can also specify the request timeout for each request, and it will set the message TimeToLive to match that value. The responses should be sent with a TimeToLive as needed.
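A hedged sketch of the registration described above, applied to the ApiGw configuration from the question (IGroupCreated, client, and payload are placeholder names, not from the original post):

services.AddMassTransit(x =>
{
    x.SetKebabCaseEndpointNameFormatter();

    // 20 second timeout instead of the 30 second default; the request message
    // TimeToLive follows this value
    x.AddRequestClient<ICreateGroupPayload>(RequestTimeout.After(s: 20));

    x.UsingRabbitMq((context, cfg) => { });
});

// ...or override the timeout (and thus the TimeToLive) for a single request:
var response = await client.GetResponse<IGroupCreated>(payload, timeout: RequestTimeout.After(s: 20));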
What is the rationale behind the following exception when trying to Defer the sending of a message on a one-way client:
System.InvalidOperationException "Cannot use ourselves as timeout manager because we're a one-way client"
A one-way client is a Rebus client that is not capable of receiving messages, so it has no input queue.
The way await bus.Defer(...) works is by sending a message with some special headers to a "timeout manager", which by default is the endpoint that defers the message.
But since a one-way client has no input queue, it has no place to send the deferred message to.
You can make a one-way client defer messages by configuring an external timeout manager like this:
Configure.With(...)
.(...)
.Options(o => o.UseExternalTimeoutManager(anotherQueue))
.Start();
which will then cause the client to send the deferred message to that queue.
Moreover, you would have to manually set the rbs2-defer-recipient header to some other input queue, so that the timeout manager knows where to send the message when it is time to be consumed(*).
I hope that explains it :) please let me know if it is not clear.
*) This is actually not the case with Rebus 4, because bus.Defer uses the normal endpoint mappings to route messages.
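Putting the pieces together, a minimal sketch of a one-way client that can defer (the queue names, SomeMessage, and the RabbitMQ transport are assumptions, not from the original question; on Rebus 4+ the explicit header is not needed, per the note above):

var activator = new BuiltinHandlerActivator();

var bus = Configure.With(activator)
    .Transport(t => t.UseRabbitMqAsOneWayClient("amqp://localhost")) // one-way client: no input queue
    .Options(o => o.UseExternalTimeoutManager("timeouts"))           // defer via this external queue
    .Start();

// Tell the timeout manager where to deliver the deferred message when it is due
var headers = new Dictionary<string, string>
{
    [Headers.DeferredRecipient] = "destination-queue"
};

await bus.Defer(TimeSpan.FromMinutes(5), new SomeMessage(), headers);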
If Rebus.AzureServiceBus is used, there is a simpler (or hackier) way to send delayed messages.
You have to specify two headers, rbs2-deferred-until and rbs2-defer-recipient, and call the Publish method as in the example:
var deferredUntil = DateTimeOffset.UtcNow.AddDays(1);
var headers = new Dictionary<string, string>();
headers.Add(Headers.DeferredUntil, deferredUntil.ToString("O", CultureInfo.InvariantCulture));
headers.Add(Headers.DeferredRecipient, #"Rebus requires this ¯\_(ツ)_/¯");
await bus.Publish(new SomeMessage(), headers);
Note: rbs2-defer-recipient is required by Rebus, so any dummy value is okay.
Be careful: this looks like a workaround, so it may stop working after a Rebus.AzureServiceBus update. It works for me in 5.0.1.
I just want to add WebSockets to my app, which uses WinHTTP in async mode.
When I need a WebSocket I call the following.
Before sending request:
WinHttpSetOption(context->hRequest, WINHTTP_OPTION_UPGRADE_TO_WEB_SOCKET, NULL, 0);
In WINHTTP_CALLBACK_STATUS_SENDREQUEST_COMPLETE:
appContext->pIoRequest->hWebSocketHandle = WinHttpWebSocketCompleteUpgrade(appContext->hRequest, NULL);
WinHttpWebSocketReceive(appContext->pIoRequest->hWebSocketHandle, appContext->pszOutBuffer,RESPONSE_BUFFER_SIZE, NULL, NULL);
All without errors.
Now I see in Fiddler that the server sends some data to my WebSocket, but no WINHTTP_CALLBACK_STATUS_READ_COMPLETE is triggered.
Any ideas why this is? How can I read asynchronously from my WebSocket? Sending data to the WebSocket works well.
OMG! I found out how it works!
You MUST call WinHttpSetStatusCallback again to set a WebSocket callback for the WebSocket handle returned by WinHttpWebSocketCompleteUpgrade, and this callback MUST be different from the one on the request handle that WinHttpWebSocketCompleteUpgrade was called on!
It is not possible to set a context pointer via WinHttpSetOption with the WINHTTP_OPTION_CONTEXT_VALUE flag! It does not work: dwContext in WebSocketCallback contains wrong data, and calling WinHttpQueryOption in WebSocketCallback also returns wrong context data. I think that is a BUG in Windows 8.1. I wrote my own lookup to associate my context with the WebSocket handle.
All of this is NOT documented on MSDN! What's more, I could not find any info about async WinHTTP WebSocket usage... So, I am the first =) I will be very glad if my research helps you!
It also seems WebSockets do not deliver PING and PONG messages to the callback!
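A rough sketch of the two steps described above (WebSocketCallback is a placeholder name for the separate callback; the context fields are the ones from the question):

appContext->pIoRequest->hWebSocketHandle =
    WinHttpWebSocketCompleteUpgrade(appContext->hRequest, NULL);

// Register a SEPARATE callback for the WebSocket handle
WinHttpSetStatusCallback(appContext->pIoRequest->hWebSocketHandle,
                         WebSocketCallback,
                         WINHTTP_CALLBACK_FLAG_ALL_NOTIFICATIONS,
                         0);

// Only now start the asynchronous receive; completions arrive in WebSocketCallback
WinHttpWebSocketReceive(appContext->pIoRequest->hWebSocketHandle,
                        appContext->pszOutBuffer,
                        RESPONSE_BUFFER_SIZE,
                        NULL, NULL);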
I'm trying to get a client/server program exchanging HTTP messages over SSL. To start, I created client and server programs that successfully exchange HTTP requests using DefaultHttpRequest. The code that sends the request looks something like this:
HttpRequest request = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.POST, "https://localhost:8443");
ChannelBuffer buf = ChannelBuffers.copiedBuffer(line, "UTF-8");
request.setContent(buf);
request.setHeader(HttpHeaders.Names.HOST, host);
request.setHeader(HttpHeaders.Names.CONNECTION, HttpHeaders.Values.CLOSE);
request.setHeader(HttpHeaders.Names.CONTENT_TYPE, "text/xml");
request.setHeader(HttpHeaders.Names.CONTENT_LENGTH, Integer.toString(buf.capacity()));
ChannelFuture writeFuture = channel.write(request);
The client pipeline factory contains this:
pipeline.addLast("decoder", new HttpResponseDecoder());
pipeline.addLast("encoder", new HttpRequestEncoder());
// and then business logic.
...
The server pipeline factory contains this:
pipeline.addLast("decoder", new HttpRequestDecoder());
pipeline.addLast("encoder", new HttpResponseEncoder());
// and then business logic.
....
So far so good. Client sends, server receives and decodes the request. The messageReceived method on my handler is called with the correct data.
In order to enable SSL, I've taken some code from the SecureChat example and added it to both the client and server pipeline factories.
For the server:
SSLEngine engine = SecureChatSslContextFactory.getServerContext().createSSLEngine();
engine.setUseClientMode(false);
pipeline.addLast("ssl", new SslHandler(engine));
// On top of the SSL handler, add the text line codec.
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(
8192, Delimiters.lineDelimiter()));
For the client:
SSLEngine engine = SecureChatSslContextFactory.getClientContext().createSSLEngine();
engine.setUseClientMode(true);
pipeline.addLast("ssl", new SslHandler(engine));
// On top of the SSL handler, add the text line codec.
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(
8192, Delimiters.lineDelimiter()));
Now when I send the request from the client, nothing seems to happen on the server. When I start up the applications, the server seems to connect (channelConnected is called), but when I send the message none of the data gets to the server (messageReceived is never called).
Is there something obviously wrong with what I am doing? Is this the way that https should work? Or is there a different method for sending http requests over ssl?
Thanks,
Weezn
You need to call SslHandler.handshake() on the client side. Check the example again; it's in there.
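A minimal sketch of what that looks like in a Netty 3.x client handler (the handler method and the follow-up write are illustrative, not from the original post):

@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
    // Start the TLS handshake before writing any HTTP request
    SslHandler sslHandler = ctx.getPipeline().get(SslHandler.class);
    ChannelFuture handshakeFuture = sslHandler.handshake();
    handshakeFuture.addListener(future -> {
        if (future.isSuccess()) {
            // safe to write the DefaultHttpRequest here
        } else {
            future.getChannel().close();
        }
    });
}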
Oops, it seems like I copied and pasted too much from the SecureChat example.
Removing the DelimiterBasedFrameDecoder seems to fix the issue.