I'm working on a microservice powered by Spring MVC and Spring Cloud Kafka.
For simplicity I will only focus on the part that makes the HTTP request.
I have a binding function like the following (please note that I'm using functional-style binding).
@SpringBootApplication
public class ExampleApplication {

    // PayloadSender uses RestTemplate to send the HTTP request.
    @Autowired
    private PayloadSender payloadSender;

    @Bean
    public Function<KStream<String, Input>, KStream<String, Output>> process() {
        // payloadSender.send() is a blocking call which sends the payload using RestTemplate;
        // once the response is received it collects all the info and creates the "Output" object.
        return input -> input
                .map((k, v) -> KeyValue.pair(k, payloadSender.send(v))); // "send" is a blocking call
        // Question: if autoCommitOffset is set to true, would the offset be committed automatically
        // right after the "map" function from KStream?
    }

    public static void main(String[] args) {
        SpringApplication.run(ExampleApplication.class, args);
    }
}
From this example you can see that payloadSender sends the payload from the input stream using RestTemplate and, upon receiving the response, creates the "Output" object and produces it to the output topic.
Since payloadSender.send() is blocking, I'm worried that this will cause performance issues. Most importantly, if the HTTP request times out, I'm afraid it will exceed the commit interval (usually the HTTP timeout is much, much greater than the consumer commit interval) and cause the Kafka broker to think the consumer is dead (please correct me if I'm wrong).
So is there a better solution for this case? I would eventually switch over to spring-reactive, but for the time being I need to make sure the MVC model works, although I'm not sure spring-reactive would magically solve this issue.
The default max.poll.interval.ms is 5 minutes; you can increase it or reduce max.poll.records. You can also set a timeout on the REST call.
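For the timeout part, here is a minimal sketch of giving the RestTemplate bounded connect/read timeouts (the Spring Boot RestTemplateBuilder wiring and the 10s/30s values are illustrative, not taken from the original setup); max.poll.interval.ms and max.poll.records themselves go into the binder's consumer configuration:

import java.time.Duration;

import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    // Bounded timeouts keep a slow HTTP call from holding the stream thread
    // much longer than max.poll.interval.ms; the values are placeholders.
    @Bean
    public RestTemplate restTemplate(RestTemplateBuilder builder) {
        return builder
                .setConnectTimeout(Duration.ofSeconds(10))
                .setReadTimeout(Duration.ofSeconds(30))
                .build();
    }
}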
This article shows a well-known problem with HttpClient that can lead to socket exhaustion.
I have an ASP.NET Core 3.1 web application. In a .NET Standard 2.0 class library I've added a WCF web service reference in Visual Studio 2019, following these instructions.
In a service I'm using the WCF client the way it's described in the documentation. Creating an instance of the WCF client and then closing the client for every request.
public class TestService
{
    public async Task<int> Add(int a, int b)
    {
        CalculatorSoapClient client = new CalculatorSoapClient();

        var resultat = await client.AddAsync(a, b);

        // this is a bad way to close the client, I should also check
        // if I need to call Abort()
        await client.CloseAsync();

        return resultat;
    }
}
I know it's bad practice to close the client without any checks but for the purpose of this example it does not matter.
When I start the application, make five requests to an action method that uses the WCF client, and then take a look at the result from netstat, I discover open connections with status TIME_WAIT, much like the problems described in the article above about HttpClient.
It looks to me like using the WCF client out of the box like this can lead to socket exhaustion, or am I missing something?
The WCF client inherits from ClientBase<TChannel>. Reading this article it looks to me like the WCF client uses HttpClient. If that is the case then I probably shouldn't create a new client for every request, right?
I've found several articles (this and this) talking about using a singleton or reusing the WCF client in some way. Is this the way to go?
UPDATE
Debugging the appropriate parts of the WCF source code, I discovered that a new HttpClient and HttpClientHandler were created each time I created a new WCF client, which I do for every request.
You can inspect the code here:
internal virtual HttpClientHandler GetHttpClientHandler(EndpointAddress to, SecurityTokenContainer clientCertificateToken)
{
    return new HttpClientHandler();
}
This handler is used to create a new HttpClient in the GetHttpClientAsync method:
httpClient = new HttpClient(handler);
This explains why the WCF client in my case behaves just like an HttpClient that is created and disposed for every request.
Matt Connew writes in an issue in the WCF repo that he has made it possible to inject your own HttpMessage factory into the WCF client.
He writes:
I implemented the ability to provide a Func<HttpClientHandler,
HttpMessageHandler> to enable modifying or replacing the
HttpMessageHandler. You provide a method which takes an
HttpClientHandler and returns an HttpMessageHandler.
Using this information I injected my own factory to be able to control the generation of HttpClientHandlers in HttpClient.
I created my own implementation of IEndpointBehavior that injects IHttpMessageHandlerFactory to get a pooled HttpMessageHandler.
public class MyEndpoint : IEndpointBehavior
{
    private readonly IHttpMessageHandlerFactory messageHandlerFactory;

    public MyEndpoint(IHttpMessageHandlerFactory messageHandlerFactory)
    {
        this.messageHandlerFactory = messageHandlerFactory;
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters)
    {
        Func<HttpClientHandler, HttpMessageHandler> myHandlerFactory = (HttpClientHandler clientHandler) =>
        {
            return messageHandlerFactory.CreateHandler();
        };

        bindingParameters.Add(myHandlerFactory);
    }

    // <other empty methods needed for implementation of IEndpointBehavior>
}
As you can see in AddBindingParameters I add a very simple factory that returns a pooled HttpMessageHandler.
I add this behavior to my WCF client like this.
public class TestService
{
    private readonly MyEndpoint endpoint;

    public TestService(MyEndpoint endpoint)
    {
        this.endpoint = endpoint;
    }

    public async Task<int> Add(int a, int b)
    {
        CalculatorSoapClient client = new CalculatorSoapClient();
        client.Endpoint.EndpointBehaviors.Add(endpoint);

        var resultat = await client.AddAsync(a, b);

        // this is a bad way to close the client, I should also check
        // if I need to call Abort()
        await client.CloseAsync();

        return resultat;
    }
}
Be sure to update any package references to System.ServiceModel.* to at least version 4.5.0 for this to work. If you're using Visual Studio's 'Add service reference' feature, VS will pull in the 4.4.4 versions of these packages (tested with Visual Studio 16.8.4).
When I run the applications with these changes I no longer have an open connection for every request I make.
You should consider disposing your CalculatorSoapClient. Be aware that a simple Dispose() is usually not enough, because of the implementation of ClientBase.
Have a look at https://learn.microsoft.com/en-us/dotnet/framework/wcf/samples/use-close-abort-release-wcf-client-resources?redirectedfrom=MSDN, where the problem is explained.
Also consider that the underlying code is managing your connections; sometimes it will keep them alive for later use. Try calling the server a lot of times to see if there is a new connection for each call, or if the connections are being reused.
The meaning of TIME_WAIT is also discussed here:
https://superuser.com/questions/173535/what-are-close-wait-and-time-wait-states
https://serverfault.com/questions/450055/lot-of-fin-wait2-close-wait-last-ack-and-time-wait-in-haproxy
It looks like your client has done everything required to close the connection and is just waiting for the confirmation of the server.
You should not have to use a singleton since the framework is (usually) taking good care of the connections.
I created an issue in the WCF repository in Github and got some great answers.
According to Matt Connew and Stephen Bonikowsky who are authorities in this area the best solution is to reuse the client or the ChannelFactory.
Bonikowsky writes:
Create a single client and re-use it.
var client = new ImportSoapClient();
And Connew adds:
Another possibility is you could create a channel proxy instance from
the underlying channelfactory. You would do this with code similar to
this:
public void Init()
{
    _client?.Close();
    _factory?.Close();

    _client = new ImportSoapClient();
    _factory = _client.ChannelFactory;
}

public void DoWork()
{
    var proxy = _factory.CreateChannel();
    proxy.MyOperation();
    ((IClientChannel)proxy).Close();
}
According to Connew there is no problem reusing the client in my ASP.NET Core web application with potentially concurrent requests.
Concurrent requests all using the same client is not a problem as long
as you explicitly open the channel before any requests are made. If
using a channel created from the channel factory, you can do this with
((IClientChannel)proxy).Open();. I believe the generated client also
adds an OpenAsync method that you can use.
UPDATE
Since reusing the WCF client also means reusing the HttpClient instance, and that could lead to the well-known DNS caching problem, I decided to go with my original solution using my own implementation of IEndpointBehavior, as described in the question.
I was trying to understand the asynchronous controller implementation from one of these links:
http://shengwangi.blogspot.in/2015/09/asynchronous-spring-mvc-hello-world.html
I was puzzled by the point that the controller thread receives the request and exits, and the service method then receives the request for further processing.
@RequestMapping("/helloAsync")
public Callable<String> sayHelloAsync() {
    logger.info("Entering controller");

    Callable<String> asyncTask = new Callable<String>() {
        @Override
        public String call() throws Exception {
            return helloService.doSlowWork();
        }
    };

    logger.info("Leaving controller");
    return asyncTask;
}
Since the controller exits and passes control to the appropriate handler mapping/JSP, what will the user see in the browser?
The browser waits for the response and then processes it.
The asynchronous processing takes place only at the server end and has nothing to do with the browser. The browser sends the request and waits for the server to write the response back.
Just because you returned a Callable doesn't mean the controller exits the flow. Spring's response handlers will wait for the async task to finish before writing the response back.
Please go through AsyncHandlerMethodReturnValueHandler which handles Asynchronous response returned from the controller.
If you return a Callable, it will be handled by CallableMethodReturnValueHandler:
public void handleReturnValue(Object returnValue, MethodParameter returnType,
        ModelAndViewContainer mavContainer, NativeWebRequest webRequest) throws Exception {

    if (returnValue == null) {
        mavContainer.setRequestHandled(true);
        return;
    }

    Callable<?> callable = (Callable<?>) returnValue;
    WebAsyncUtils.getAsyncManager(webRequest).startCallableProcessing(callable, mavContainer);
}
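Under the hood, startCallableProcessing hands the Callable to an async task executor; if you don't register one, Spring MVC falls back to a simple per-request executor. Below is a minimal sketch of registering your own executor and timeout, assuming a Spring version where WebMvcConfigurer has default methods; the class name, pool sizes and timeout are illustrative:

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.web.servlet.config.annotation.AsyncSupportConfigurer;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class AsyncWebConfig implements WebMvcConfigurer {

    @Override
    public void configureAsyncSupport(AsyncSupportConfigurer configurer) {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);   // pool sizes are illustrative
        executor.setMaxPoolSize(10);
        executor.initialize();

        configurer.setTaskExecutor(executor);  // runs the returned Callable
        configurer.setDefaultTimeout(30_000);  // async request timeout in ms
    }
}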
I cleared up my doubt with this link:
https://dzone.com/articles/jax-rs-20-asynchronous-server-and-client
However, they used a different way to accomplish the asynchronous processing, but the core concept should be the same for every approach.
Some important part of the article:
The idea behind the asynchronous processing model is to separate the
connection-accepting and request-processing operations. Technically
speaking, it means allocating two different threads, one to accept the
client connection and the other to handle heavy and time-consuming
operations. In this model, the container dispatches a thread to accept the
client connection (acceptor), hands the request over to a processing
(worker) thread and releases the acceptor. The result is sent back
to the client by the worker thread. In this mechanism the client's
connection remains open. While it may not impact performance that much,
such a processing model has a big impact on the server's THROUGHPUT and SCALABILITY.
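To make the acceptor/worker split described above concrete in Spring MVC terms, here is a rough sketch using DeferredResult, another asynchronously handled return type; the path, pool size and message are made up for illustration. The container ("acceptor") thread returns immediately, and a worker thread completes the response later:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class DeferredHelloController {

    // Plays the role of the "worker" thread from the quote above.
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    @RequestMapping("/helloDeferred")
    public DeferredResult<String> helloDeferred() {
        DeferredResult<String> result = new DeferredResult<>();
        // The container thread returns right away; the worker sets the
        // result later and Spring writes the response back at that point.
        workers.submit(() -> result.setResult("slow hello"));
        return result;
    }
}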
I managed to successfully get my RestTemplate client to discover a remote service using Eureka and forward calls to it using Ribbon, as described in the documentation.
Basically, it was just a matter of adding the following annotations to my Application class and letting the magic of Spring Boot do the rest:
@Configuration
@ComponentScan
@EnableAutoConfiguration
@EnableDiscoveryClient
(PS: you'll notice I'm using spring-cloud:1.0.0-SNAPSHOT-BUILD and not 1.0.0.M3, but this doesn't seem to affect my problem.)
When two service instances are started, the RestTemplate client successfully load balances requests between the two. However, the client won't fall back to the second instance if the first is stopped before the Eureka load balancer notices; instead an exception is thrown.
Hence my question: is there a way to configure the RestTemplate/Ribbon/Eureka stack to automatically retry the call on another instance if the one selected in the first place is not available? The Zuul proxy and Feign clients do this "out of the box", so I believe the library holds the necessary features...
Any idea/hint?
Thx,
/Bertrand
The RestTemplate support on its own does not know how to do any retrying (whereas the Feign client and the proxy support in Spring Cloud do, as you noticed). I think this is probably a good thing because it gives you the option to add it yourself. For instance, using Spring Retry you can do it in a simple declarative style:
@Retryable
public Object doSomething() {
    // use your RestTemplate here
}
(and add @EnableRetry to your @Configuration). It makes a nice combination with @HystrixCommand (from Spring Cloud / Javanica):
@HystrixCommand
@Retryable
public Object doSomething() {
    // use your RestTemplate here
}
In this form, every failure counts towards the circuit breaker metrics (maybe we could change that, or maybe it makes sense to leave it like that), even if the retry is successful.
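For completeness, here is a minimal sketch of how the wiring could look; the class names, the service URL and the retry settings are placeholders, not taken from the original question, and the Ribbon-backed RestTemplate from the question is assumed to be defined elsewhere in the context:

import org.springframework.context.annotation.Configuration;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.EnableRetry;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Configuration
@EnableRetry
class RetryConfig {
    // Only enables the Spring Retry interceptors; the load-balanced
    // RestTemplate bean is assumed to exist already.
}

@Service
class RemoteCallService {

    private final RestTemplate restTemplate;

    RemoteCallService(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // With a Ribbon-backed RestTemplate, each retry goes through the load
    // balancer again and may pick a different (healthy) instance.
    @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 500))
    public String doSomething() {
        return restTemplate.getForObject("http://some-service/endpoint", String.class);
    }
}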
I couldn't get it to work with both @HystrixCommand and @Retryable, regardless of the order of the annotations on the @Configuration class or on the @Retryable method, due to the order of the interceptors. I solved this by creating another class with a matching set of methods and having the @HystrixCommand-annotated methods delegate to the corresponding @Retryable methods in the second class. You could probably have the two classes implement the same interface. This is kind of a pain in the butt, but until the order can be configured, this is all I could come up with. Still waiting on a real solution from Dave Syer and the Spring Cloud guys.
public class HystrixWrapper {

    @Autowired
    private RetryableWrapper retryableWrapper;

    @HystrixCommand
    public Response doSomething(...) {
        return retryableWrapper.doSomething(...);
    }
}

public class RetryableWrapper {

    @Autowired
    private RestTemplate restTemplate;

    @Retryable
    public Response doSomething(...) {
        // do something with restTemplate;
    }
}
I'm just playing and testing a bit with the Restlet client API 2.2, but I can't get a non-blocking asynchronous request with a callback to work. I have already googled extensively, but really haven't found an answer with a (working) non-blocking solution.
I have the following two approaches:
Approach 1 (Client - Request):
Client c = new Client(Protocol.HTTP);
Request r = new Request(Method.GET, url);

System.out.println("START1");

c.handle(r, new Uniform() {
    @Override
    public void handle(Request request, Response response) {
        int statusCode = response.getStatus().getCode();
        System.out.println(statusCode);
    }
});

System.out.println("START2");
Approach 2 (ClientResource - setOnResponse() - get()):
ClientResource cr = new ClientResource(url);

cr.setOnResponse(new Uniform() {
    @Override
    public void handle(Request request, Response response) {
        int statusCode = response.getStatus().getCode();
        System.out.println(statusCode);
    }
});

System.out.println("START1");
cr.get();
System.out.println("START2");
The console output for both approaches is always:
START1
Starting the internal HTTP client
SOME WAITING HERE
200
START2
Can anyone give me a hint to make one of these approaches non-blocking? Is that at all possible with the Restlet API? What am I missing: do I need another connector, or must I define a separate thread for the request myself?
I'll make a quick answer; an issue has been created: https://github.com/restlet/restlet-framework-java/issues/943
Initially, asynchronous support was available using the internal NIO connector.
As this connector is not fully stabilized, it was decided to extract it from the core module and expose it inside a dedicated org.restlet.ext.nio module.
This explains why your code is blocking, as the current internal connector (in both the 2.2 and 2.3 branches) does not support it.
At this time, that support is available using the nio extension, but this extension is not fully stabilized yet, so we are not inclined to encourage you to use it.
We are working on another scenario where we rely on the client connector provided by Jetty.
Stay tuned.
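In the meantime, one way to keep the calling thread from blocking is to run the blocking call on your own worker thread. This is only a rough sketch under that assumption (the URL and executor setup are illustrative), not Restlet's own async support:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.restlet.resource.ClientResource;

ExecutorService executor = Executors.newSingleThreadExecutor();
String url = "http://example.com"; // placeholder

System.out.println("START1");

// The blocking GET runs on the worker thread, so the caller continues immediately.
CompletableFuture
        .supplyAsync(() -> new ClientResource(url).get(), executor)
        .thenAccept(representation -> System.out.println("Got response: " + representation));

System.out.println("START2"); // printed before the response arrives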
I've followed this link http://spring.io/guides/gs/messaging-stomp-websocket/ and got the app up and running.
What I wanted was a little more than that: I wanted to be able to push data back to the client without the client having to send anything.
So I've set up a long-running task with a listener similar to the one below:
GreetingController implements RunnableListener
and the RunnableListener has a method
public Greeting greeting(HelloMessage message);
The implementation of the method is to kick off a thread and then call the listener method.
I see the output on the console when that happens, but I don't see anything on the browser.
Could anyone please show me how to kick off a running task and let the server push the content to the browser using Spring, instead of polling (the setTimeout stuff in JavaScript)?
Regards
Tin
What is this RunnableListener interface?
What is triggering this task - is it scheduled regularly?
Once the client has subscribed to a given topic (here, /topic/greetings), you can send messages to that topic whenever you want using a MessagingTemplate. For example, you could schedule this task and let it send messages regularly on a given topic:
@Service
public class GreetingService {

    private SimpMessagingTemplate template;

    @Autowired
    public GreetingService(SimpMessagingTemplate template) {
        this.template = template;
    }

    @Scheduled(fixedDelay = 10000)
    public void greet() {
        this.template.convertAndSend("/topic/greetings", "Hello");
    }
}
Check out the reference documentation for more details.
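One thing to double-check, assuming the broker setup from the guide: the @Scheduled method above only fires if scheduling is enabled somewhere in your configuration, for example like this (the class name is illustrative):

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;

@Configuration
@EnableScheduling
public class SchedulingConfig {
    // Without @EnableScheduling the @Scheduled greet() method never runs.
}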