I have two questions:
1. Can we have non-blocking async routes in Camel? I do see async with SEDA, but that offloads work to another thread, which then blocks.
2. If so, can we use multicast in such routes?
The following is my multi-step Camel route, which seems to work, but I am not sure whether it is async or non-blocking async.
from("direct:multiStep")
    .to("bean:routeHandler?method=monoReturningMethod1")
    .process(new UnwrapStreamProcessor())
    .to("bean:routeHandler?method=monoReturningMethod2")
    .process(new UnwrapStreamProcessor());
The above works, and the web request gets a response containing the results of both monoReturningMethods. In this case I want to make sure all the processing is non-blocking.
For multicast I am experimenting with the following route. I am not sure where to put the UnwrapStreamProcessor; I have tried putting it after end(), but that does not work. Do I need a custom Processor? Or how can I tie all the Mono returns together into one?
from("direct:incoming")
    .multicast()
    .parallelProcessing()
    .to("bean:routeHandler?method=monoReturningMethod1", "bean:routeHandler?method=monoReturningMethod2")
    .end();
I am using Apache Camel 3.0.1 with the Spring Boot starter.
@Component("routeHandler")
public class RouteHandler {

    public Mono<Entity> monoReturningMethod1(Exchange exchange) {
        // make some WebClient request which returns a Mono
    }

    public Mono<Entity> monoReturningMethod2(Exchange exchange) {
        // make some WebClient request which returns a Mono
    }
}
This route handles an incoming web request. How can I make all route processing non-blocking and async? I have tried using process(new UnwrapStreamProcessor()) as a processing step after each monoReturningMethod, and it works when the calls are in sequence. But it does not work with multicast and fails with an "original message overwrite not allowed" error.
Any suggestions?
PS: I am initiating my async flow like the following:
producerTemplate.asyncSend("RouteName", exchange);
According to this page, all EIPs are supported by the asynchronous routing engine, but not all endpoint types (components). Some components have only limited support, i.e. they can only consume or produce asynchronously.
You can use the Threads DSL in your route to tell Camel that, from that point forward, messages should be routed asynchronously on a new thread.
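For the multicast case, one pattern worth trying is to give the multicast an AggregationStrategy to combine the branch results, and to unwrap each Mono inside its own branch rather than after end(). This is only a sketch, not verified against your exact setup; the branch route names are mine:

```java
// Sketch, assuming camel-core and camel-reactive-streams 3.x on the classpath.
// GroupedExchangeAggregationStrategy collects the branch exchanges into a list;
// swap in your own AggregationStrategy to merge the two entities differently.
from("direct:incoming")
    .multicast(new GroupedExchangeAggregationStrategy())
    .parallelProcessing()
    .to("direct:branch1", "direct:branch2")
    .end();

from("direct:branch1")
    .to("bean:routeHandler?method=monoReturningMethod1")
    .process(new UnwrapStreamProcessor()); // unwrap the Mono before aggregation

from("direct:branch2")
    .to("bean:routeHandler?method=monoReturningMethod2")
    .process(new UnwrapStreamProcessor());
```

The idea is that each branch finishes with a plain Entity in its body, so the aggregation step never has to deal with an unresolved Mono.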
I am using SignalR with .NET 5.0 and leveraging Hub Filters to execute code on incoming invocations to my SignalR Hub.
I am looking for a way to do the same thing with outgoing messages from the Hub to the Client, but seem to be coming up with no options.
Perhaps alternatively, I would love to hook into and execute code specifically when the built-in Ping Messages are sent out.
It looks like similar functionality used to be possible in the old version's HubPipelineModule, but I have not been able to find any way to achieve this in current SignalR. Is this possible?
HubLifetimeManager is responsible for sending messages to clients and it is possible to inject your custom one, right before registering SignalR:
builder.Services.AddSingleton(typeof(HubLifetimeManager<>), typeof(CustomHubLifetimeManager<>));
builder.Services.AddSignalR();
where
public class CustomHubLifetimeManager<THub> : DefaultHubLifetimeManager<THub> where THub : Hub
{
    // DefaultHubLifetimeManager has no parameterless constructor, so pass its logger through
    public CustomHubLifetimeManager(ILogger<DefaultHubLifetimeManager<THub>> logger) : base(logger)
    {
    }

    public override Task SendAllAsync(string methodName, object?[] args, CancellationToken cancellationToken = default)
    {
        // todo: do your stuff
        return base.SendAllAsync(methodName, args, cancellationToken);
    }

    // todo: override the rest of the methods
}
It works fine; however, this approach looks a bit hacky to me.
I am trying to track HTTP request and response times in Jetty. I have extended the Jetty server, and I am able to get the request timestamp using the following snippet:
public void handle(HttpChannel connection) throws IOException, ServletException {
    super.handle(connection);
    connection.getRequest().getTimeStamp();
}
I need to get the exact time of the response for the request.
How can I achieve this by extending the Jetty server?
If there is any way of doing it other than extending Jetty, please let me know.
Thank you
Since you seem to be interested only in the latency, do this:
1. The RequestLog mechanism is the way to do this. Instantiate a new RequestLogHandler and add it as the root of your server's Handler tree (think nested, not collection).
2. Add a custom implementation of RequestLog to the RequestLogHandler.
3. In your custom RequestLog.log(Request, Response) method, grab Request.getTimeStamp() and work out the latency.
This approach is more durable to internal changes in Jetty, and does not require a fork-Jetty-and-modify approach to work.
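The steps above can be sketched roughly as follows (written against the Jetty 9 API; class names and wiring may need adjusting for your Jetty version):

```java
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.RequestLog;
import org.eclipse.jetty.server.Response;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.RequestLogHandler;

public class LatencyServer {

    // Custom RequestLog: called once per completed request/response pair.
    static class LatencyLog implements RequestLog {
        @Override
        public void log(Request request, Response response) {
            long latencyMs = System.currentTimeMillis() - request.getTimeStamp();
            System.out.println(request.getRequestURI() + " -> " + response.getStatus()
                    + " in " + latencyMs + " ms");
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        RequestLogHandler logHandler = new RequestLogHandler();
        logHandler.setRequestLog(new LatencyLog());
        // logHandler should sit at the root and wrap your real handler:
        // logHandler.setHandler(yourHandler);
        server.setHandler(logHandler);
        server.start();
        server.join();
    }
}
```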
I'm just playing and testing a bit with the Restlet Client API 2.2, but I can't get a non-blocking asynchronous request with a callback to work. I have already googled extensively but have not found a (working) non-blocking solution.
I have the following two approaches:
Approach 1 (Client - Request):
Client c = new Client(Protocol.HTTP);
Request r = new Request(Method.GET, url);
System.out.println("START1");
c.handle(r, new Uniform() {
    @Override
    public void handle(Request request, Response response) {
        int statusCode = response.getStatus().getCode();
        System.out.println(statusCode);
    }
});
System.out.println("START2");
System.out.println("START2");
Approach 2 (ClientResource - setOnResponse() - get()):
ClientResource cr = new ClientResource(url);
cr.setOnResponse(new Uniform() {
    @Override
    public void handle(Request request, Response response) {
        int statusCode = response.getStatus().getCode();
        System.out.println(statusCode);
    }
});
System.out.println("START1");
cr.get();
System.out.println("START2");
The Console-Output for both approaches is always:
START1
Starting the internal HTTP client
SOME WAITING HERE
200
START2
Can anyone give me a hint to make one of these approaches non-blocking? Is that possible at all with the Restlet API? What am I missing, do I need another connector, or must I define a separate thread for the request myself?
I'll make a quick answer; an issue has been created: https://github.com/restlet/restlet-framework-java/issues/943
Initially, asynchronous support was available using the internal NIO connector.
As this connector is not fully stabilized, it was decided to extract it from the core module and expose it in a dedicated org.restlet.ext.nio module.
This explains why your code is blocking: the current internal connector (in both the 2.2 and 2.3 branches) does not support it.
At this time, the support is available using the nio extension, but this extension is not fully stabilized yet, so we are not inclined to encourage you to use it.
We are working on another scenario where we rely on the client connector provided by Jetty.
Stay tuned.
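To answer the last part of the question: until that lands, you can get the non-blocking behavior yourself by running the blocking call on a separate thread. A minimal JDK-only sketch, where the supplier stands in for the blocking cr.get() and thenAccept plays the role of the Uniform callback:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

public class AsyncCallDemo {

    // Runs the blocking call on the common pool and returns immediately;
    // attach the callback with thenAccept.
    static CompletableFuture<Integer> callAsync(Supplier<Integer> blockingCall) {
        return CompletableFuture.supplyAsync(blockingCall);
    }

    public static void main(String[] args) {
        System.out.println("START1");
        CompletableFuture<Integer> status = callAsync(() -> {
            sleep(200); // stand-in for the HTTP round trip
            return 200;
        });
        status.thenAccept(code -> System.out.println(code)); // the callback
        System.out.println("START2"); // printed before the status: the caller is not blocked
        status.join(); // only here do we wait, so the demo doesn't exit early
    }

    private static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The console then shows START1 and START2 first, with the status code arriving later from the pool thread.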
At some point I'm going to want to run my application against something like the real web service. The web service has an API call limit that I could see hitting. I considered serializing out some JSON files manually, but it seems like this would basically be caching the hard way.
Is there a HTTP cache I could run on my local machine which would aggressively (until I manually reset it) cache requests to a certain site?
You say "cache" but I think you really mean "filter" or "proxy". The first solution that comes to mind is iptables, which can be used with limit and hitcount match rules to drop packets to the webserver after some threshold. I won't even pretend to be competent at iptables configuration.
The second course might be a web proxy like Squid using its delay pool mechanism. Expect a learning curve there as well.
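A delay pool is configured in squid.conf; a minimal class-1 pool looks roughly like this (the numbers are illustrative only — restore rate and bucket size are in bytes):

```
# squid.conf fragment: one class-1 delay pool throttling all traffic
# to ~64 KB/s with a 256 KB burst bucket
delay_pools 1
delay_class 1 1
delay_parameters 1 64000/256000
delay_access 1 allow all
```

Note that delay pools throttle bandwidth rather than cache; for the "cache aggressively until I reset it" behavior you would tune Squid's refresh_pattern rules instead.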
I've built a proxy server that handles development requests and ensures that API calls aren't hammered during testing. This is how I do it with my ASP.NET MVC proxy:
public ActionResult ProxyRequest(string url, string request)
{
    string cachedResponse = Cache[url] as string;
    if (cachedResponse == null)
    {
        // make the http request here and assign its body to cachedResponse
        Cache[url] = cachedResponse;
    }
    return Content(cachedResponse, "application/json");
}
I'm not on my development box right now, so I'm doing this off the top of my head, but the concept is the same. Instead of Cache[url] = cachedResponse I use a Cache.Insert method, but that has a lot of parameters that I couldn't remember (I got lazy and built a wrapper class around it so I don't have to remember them).
This setup proxies all of my JSON requests. My JS code has a var isDevelopment = true (or false) flag and uses it to decide whether to proxy the request or hit the server directly.
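The cache-or-fetch logic above is small enough to sketch fully in any language; here it is in Java, with the upstream HTTP call abstracted behind a fetcher function so only the caching is shown (all names here are mine, not from any framework):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class CachingProxy {

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> fetch; // stands in for the real HTTP call

    public CachingProxy(Function<String, String> fetch) {
        this.fetch = fetch;
    }

    // First request for a URL goes upstream; every later one is served
    // from memory until clear() is called.
    public String get(String url) {
        return cache.computeIfAbsent(url, fetch);
    }

    // The manual reset the question asks for.
    public void clear() {
        cache.clear();
    }
}
```

computeIfAbsent gives the "aggressive until manually reset" behavior for free: the fetcher runs at most once per URL per cache lifetime.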
I have a Tomcat service running on localhost:8080 and I have installed BlazeDS. I created and configured a simple hello world application like this...
package com.adobe.remoteobjects;

import java.util.Date;

public class RemoteServiceHandler {

    public RemoteServiceHandler() {
        // This is required for BlazeDS to instantiate the class
    }

    public String getResults(String name) {
        String result = "Hi " + name + ", the time is: " + new Date();
        return result;
    }
}
With what query string can I invoke RemoteServiceHandler to my Tomcat instance via just a browser? Something like... http://localhost:8080/blazeds/?xyz
Unfortunately you can't. First, the requests (and responses) are encoded in AMF, and second, I believe they have to be POSTs. If you dig through the BlazeDS source code and the Flex SDK's RPC library, you can probably figure out what it's sending, but AFAIK this hasn't been documented anywhere else.
I think that AMFX (which is AMF in XML) will work for you, using HTTPChannel instead of AMFChannel.
From http://livedocs.adobe.com/blazeds/1/blazeds_devguide/help.html?content=lcarch_2.html#1073189, Channels and channel sets:
Flex clients can use different channel types such as the AMFChannel and HTTPChannel. Channel selection depends on a number of factors, including the type of application you are building. If non-binary data transfer is required, you would use the HTTPChannel, which uses a non-binary format called AMFX (AMF in XML). For more information about channels, see Channels and endpoints.
This way you can use plain netcat to send the request.
I'm not sure how authentication will be handled, though; you will probably need to log in using Flash, extract the authentication cookie, and then submit it as part of your request.
Please update this thread once you make progress so that we all can learn.