Spring Cloud Stream - Autowiring the underlying Consumer for a given PollableMessageSource - spring-kafka

Is it possible to get hold of the underlying KafkaConsumer bean for a defined PollableMessageSource?
I have a binding defined as:
public interface TestBindings {

    String TEST_SOURCE = "test";

    @Input(TEST_SOURCE)
    PollableMessageSource testTopic();
}
and a config class:
@EnableBinding(TestBindings.class)
public class TestBindingsPoller {

    @Bean
    public ApplicationRunner testPoller(PollableMessageSource testTopic) {
        // Get the Kafka consumer for the PollableMessageSource
        KafkaConsumer kafkaConsumer = getConsumer(testTopic);
        return args -> {
            while (true) {
                if (!testTopic.poll(...)) {
                    Thread.sleep(500);
                }
            }
        };
    }
}
The question is: how can I get the KafkaConsumer that corresponds to testTopic? Is there any way to get it from the beans that Spring Cloud Stream wires up?

The KafkaMessageSource populates a KafkaConsumer into the message headers, so it is available wherever you receive messages: https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/main/java/org/springframework/kafka/support/converter/MessageConverter.java#L57.
If you are going to do things like poll yourself, I would suggest injecting a ConsumerFactory and creating a consumer from it instead.
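For illustration, a minimal sketch of both options (the configuration class, bean names, and the "test" topic are my assumptions; the header lookup relies on the binder populating KafkaHeaders.CONSUMER, and the second option assumes Spring Boot's Kafka auto-configuration exposes a ConsumerFactory bean):

import java.time.Duration;
import java.util.Collections;

import org.apache.kafka.clients.consumer.Consumer;
import org.springframework.boot.ApplicationRunner;
import org.springframework.cloud.stream.binder.PollableMessageSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.support.KafkaHeaders;

@Configuration
public class ConsumerAccessSketch {

    // Option 1: the consumer arrives with each received message as a header.
    @Bean
    public ApplicationRunner headerPoller(PollableMessageSource testTopic) {
        return args -> testTopic.poll(message -> {
            Consumer<?, ?> consumer =
                    message.getHeaders().get(KafkaHeaders.CONSUMER, Consumer.class);
            // Inspect it (metrics(), paused(), ...), but do not poll it here;
            // the binder owns its poll loop.
        });
    }

    // Option 2: create a separate consumer from the binder-configured factory.
    @Bean
    public ApplicationRunner ownConsumer(ConsumerFactory<String, String> consumerFactory) {
        return args -> {
            try (Consumer<String, String> consumer = consumerFactory.createConsumer()) {
                consumer.subscribe(Collections.singletonList("test"));
                consumer.poll(Duration.ofMillis(500))
                        .forEach(rec -> System.out.println(rec.value()));
            }
        };
    }
}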

Related

TelemetryProcessor - Multiple instances overwrite Custom Properties

I have a very basic HTTP POST-triggered API which creates a TelemetryClient. I needed to provide a custom property in this telemetry for each individual request, so I implemented a TelemetryProcessor.
However, when subsequent POST requests are handled and a new TelemetryClient is created, that seems to interfere with the first request. I end up seeing maybe a dozen or so entries in App Insights containing the first customPropertyId, and close to 500 for the second, when in reality the count should be split evenly. It seems as though creating the second TelemetryClient somehow interferes with the first.
Basic code is below; if anyone has any insight (no pun intended) as to why this might occur, I would greatly appreciate it.
ApiController which handles the POST request:
public class TestApiController : ApiController
{
    public HttpResponseMessage Post([FromBody]RequestInput request)
    {
        try
        {
            Task.Run(() => ProcessRequest(request));
            return Request.CreateResponse(HttpStatusCode.OK);
        }
        catch (Exception)
        {
            return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, Constants.GenericErrorMessage);
        }
    }

    private async void ProcessRequest(RequestInput request)
    {
        string customPropertyId = request.customPropertyId;
        // trace handler creates the TelemetryClient for custom property
        CustomTelemetryProcessor handler = new CustomTelemetryProcessor(customPropertyId);
        // etc.....
    }
}
CustomTelemetryProcessor which creates the TelemetryClient:
public class CustomTelemetryProcessor
{
    private readonly string _customPropertyId;
    private readonly TelemetryClient _telemetryClient;

    public CustomTelemetryProcessor(string customPropertyId)
    {
        _customPropertyId = customPropertyId;
        var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;
        builder.Use((next) => new TelemetryProcessor(next, _customPropertyId));
        builder.Build();
        _telemetryClient = new TelemetryClient();
    }
}
TelemetryProcessor:
public class TelemetryProcessor : ITelemetryProcessor
{
    private string CustomPropertyId { get; }
    private ITelemetryProcessor Next { get; set; }

    // Link processors to each other in a chain.
    public TelemetryProcessor(ITelemetryProcessor next, string customPropertyId)
    {
        CustomPropertyId = customPropertyId;
        Next = next;
    }

    public void Process(ITelemetry item)
    {
        if (!item.Context.Properties.ContainsKey("CustomPropertyId"))
        {
            item.Context.Properties.Add("CustomPropertyId", CustomPropertyId);
        }
        else
        {
            item.Context.Properties["CustomPropertyId"] = CustomPropertyId;
        }
        Next.Process(item);
    }
}
It's better to avoid creating a TelemetryClient for each request; instead, re-use a single static TelemetryClient instance. Telemetry processors and/or telemetry initializers should also typically be registered only once for the telemetry pipeline, not once per request. TelemetryConfiguration.Active is static, so adding a new processor with each request means the chain of processors only grows.
The appropriate setup is to add a telemetry initializer (telemetry processors are typically used for filtering, initializers for data enrichment) once into the telemetry pipeline, e.g. through an entry in the ApplicationInsights.config file (if present) or via code on TelemetryConfiguration.Active somewhere in Global.asax, e.g. Application_Start:
TelemetryConfiguration.Active.TelemetryInitializers.Add(new MyTelemetryInitializer());
Initializers are executed in the same context/thread where Track..(..) was called / the telemetry was created, so they have access to thread-local storage and/or local objects to read parameter values from.

Is it possible to run Spring WebFlux and MVC (CXF, Shiro, etc.) services together in Undertow?

We are looking at implementing a few services using the new Spring 5 "Reactive" API.
We currently use Apache CXF and Apache Shiro (both somewhat dependent on MVC) for our REST services and security. All of this currently runs in Undertow.
We can get one or the other to work, but not both together. It appears that when we switch over to the reactive application it knocks out the servlets, filters, etc. Conversely, when we use the MVC-style application it does not see the reactive handlers.
Is it possible to run the Spring 5 reactive services alongside the REST/servlet/filter components, or to customize the Spring Boot startup to run the REST and reactive services on different ports?
Update:
I "seem" to be able to get the reactive handlers working doing this but I don't know if this is the right approach.
@Bean
RouterFunction<ServerResponse> routeGoodbye(TrackingHandler endpoint)
{
    RouterFunction<ServerResponse> route = RouterFunctions
            .route(GET("/api/rx/goodbye")
                    .and(accept(MediaType.TEXT_PLAIN)), endpoint::trackRedirect2);
    return route;
}

@Bean
RouterFunction<ServerResponse> routeHello(TrackingHandler endpoint)
{
    RouterFunction<ServerResponse> route = RouterFunctions
            .route(GET("/api/rx/hello")
                    .and(accept(MediaType.TEXT_PLAIN)), endpoint::trackRedirect);
    return route;
}

@Bean
ContextPathCompositeHandler servletReactiveRouteHandler(TrackingHandler handler)
{
    final Map<String, HttpHandler> handlers = new HashMap<>();
    handlers.put("/hello", toHttpHandler(this.routeHello(handler)));
    handlers.put("/goodbye", toHttpHandler(this.routeGoodbye(handler)));
    return new ContextPathCompositeHandler(handlers);
}

@Bean
public ServletRegistrationBean servletRegistrationBean(final ContextPathCompositeHandler handlers)
{
    ServletRegistrationBean registrationBean = new ServletRegistrationBean<>(
            new ReactiveServlet(handlers),
            "/api/rx/*");
    registrationBean.setLoadOnStartup(1);
    registrationBean.setAsyncSupported(true);
    return registrationBean;
}

@Bean
TrackingHandler trackingEndpoint(final TrackingService trackingService)
{
    return new TrackingHandler(trackingService,
            null,
            false);
}

public class ReactiveServlet extends ServletHttpHandlerAdapter
{
    ReactiveServlet(final HttpHandler httpHandler)
    {
        super(httpHandler);
    }
}
OK, after playing around with this for far too long, I finally managed to cobble together a solution that works for me. Hopefully this is the right way to do what I need.
Now, executing normal CXF RESTful routes shows Undertow using a blocking task, and executing my reactive routes shows Undertow using NIO directly. When I tried using the ServletHttpHandlerAdapter, it looked like it was just invoking the service as a Servlet 3 async call.
The handlers run completely separately from each other, which lets me run my REST services beside my reactive services.
1) Create an annotation that will be used to map the RouterFunction to an Undertow Handler
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Target({ElementType.METHOD, ElementType.TYPE})
public @interface ReactiveHandler
{
    String value();
}
2) Create an UndertowReactiveHandlerProvider so that I can lazily get the injected RouterFunction and return the UndertowHttpHandlerAdapter when I configure Undertow.
final class UndertowReactiveHandlerProvider implements Provider<UndertowHttpHandlerAdapter>
{
    @Inject
    private ApplicationContext context;

    private String path;
    private String beanName;

    @Override
    public UndertowHttpHandlerAdapter get()
    {
        final RouterFunction router = context.getBean(beanName, RouterFunction.class);
        return new UndertowHttpHandlerAdapter(toHttpHandler(router));
    }

    public String getPath()
    {
        return path;
    }

    public void setPath(final String path)
    {
        this.path = path;
    }

    public void setBeanName(final String beanName)
    {
        this.beanName = beanName;
    }
}
3) Create the NonBlockingHandlerFactoryPostProcessor (implements BeanFactoryPostProcessor). This looks for any of my @Bean methods annotated with @ReactiveHandler and dynamically creates an UndertowReactiveHandlerProvider bean definition for each annotated router function, which is used later to provide the handlers to Undertow.
@Override
public void postProcessBeanFactory(final ConfigurableListableBeanFactory configurableListableBeanFactory) throws BeansException
{
    final BeanDefinitionRegistry registry = (BeanDefinitionRegistry) configurableListableBeanFactory;
    final String[] beanDefinitions = registry.getBeanDefinitionNames();
    for (String name : beanDefinitions)
    {
        final BeanDefinition beanDefinition = registry.getBeanDefinition(name);
        if (beanDefinition instanceof AnnotatedBeanDefinition
                && beanDefinition.getSource() instanceof MethodMetadata)
        {
            final MethodMetadata beanMethod = (MethodMetadata) beanDefinition.getSource();
            final String annotationType = ReactiveHandler.class.getName();
            if (beanMethod.isAnnotated(annotationType))
            {
                // Get the current bean details
                final String beanName = beanMethod.getMethodName();
                final Map<String, Object> attributes = beanMethod.getAnnotationAttributes(annotationType);
                // Create the new bean definition
                final GenericBeanDefinition rxHandler = new GenericBeanDefinition();
                rxHandler.setBeanClass(UndertowReactiveHandlerProvider.class);
                // Set the new bean properties
                MutablePropertyValues mpv = new MutablePropertyValues();
                mpv.add("beanName", beanName);
                mpv.add("path", attributes.get("value"));
                rxHandler.setPropertyValues(mpv);
                // Register the new bean (Undertow handler) with a matching route suffix
                registry.registerBeanDefinition(beanName + "RxHandler", rxHandler);
            }
        }
    }
}
4) Create the Undertow ServletExtension. This looks for any UndertowReactiveHandlerProvider beans and adds each one as an Undertow HttpHandler.
public class NonBlockingHandlerExtension implements ServletExtension
{
    @Override
    public void handleDeployment(DeploymentInfo deploymentInfo, final ServletContext servletContext)
    {
        deploymentInfo.addInitialHandlerChainWrapper(handler -> {
            final WebApplicationContext ctx = getWebApplicationContext(servletContext);
            // Get all of the reactive handler providers
            final Map<String, UndertowReactiveHandlerProvider> providers =
                    ctx.getBeansOfType(UndertowReactiveHandlerProvider.class);
            // Create the root handler
            final PathHandler rootHandler = new PathHandler();
            rootHandler.addPrefixPath("/", handler);
            // Iterate the providers and add to the root handler
            for (Map.Entry<String, UndertowReactiveHandlerProvider> p : providers.entrySet())
            {
                final UndertowReactiveHandlerProvider provider = p.getValue();
                // Append the HttpHandler to the root
                rootHandler.addPrefixPath(
                        provider.getPath(),
                        provider.get());
            }
            // Return the root handler
            return rootHandler;
        });
    }
}
5) Under META-INF/services, create an "io.undertow.servlet.ServletExtension" file containing:
com.mypackage.NonBlockingHandlerExtension
6) Create a Spring Boot auto-configuration that loads the post-processor if Undertow is on the classpath.
@Configuration
@ConditionalOnClass(Undertow.class)
public class UndertowAutoConfiguration
{
    @Bean
    BeanFactoryPostProcessor nonBlockingHandlerFactoryPostProcessor()
    {
        return new NonBlockingHandlerFactoryPostProcessor();
    }
}
7) Annotate any RouterFunctions that I want to map to an UndertowHandler.
@Bean
@ReactiveHandler("/api/rx/service")
RouterFunction<ServerResponse> routeTracking(TrackingHandler handler)
{
    RouterFunction<ServerResponse> route = RouterFunctions
            .nest(path("/api/rx/service"), route(
                    GET("/{cid}.gif"), handler::trackGif).andRoute(
                    GET("/{cid}"), handler::trackAll));
    return route;
}
With this I can call my REST services (and Shiro works with them), use Swagger2 with my REST services, and call my reactive services (which do not use Shiro), all in the same Spring Boot application.
In my logs, a REST call shows Undertow using the blocking (task-#) handler, while a reactive call shows Undertow using the non-blocking (I/O-# and nioEventLoopGroup) handlers.

Mono: Return results from a long running method

Currently I'm beginning with Spring + reactive programming. My aim is to return a result from a long-running method (polling on a database) in a REST endpoint. I'm stuck on the API: I simply don't know how to return the result as a Mono in my FooService.findFoo method.
@RestController
public class FooController {

    @Autowired
    private FooService fooService;

    @GetMapping("/foo/{id}")
    private Mono<ResponseEntity<Foo>> findById(@PathVariable String id) {
        return fooService.findFoo(id).map(foo -> ResponseEntity.ok(foo))
                .defaultIfEmpty(ResponseEntity.notFound().build());
    }
    ...
}
@Service
public class FooService {

    public Mono<Foo> findFoo(String id) {
        // this is the part where I'm stuck
        // I want to return the results of the pollOnDatabase method
    }

    private Foo pollOnDatabase(String id) {
        // polling on database for a while
    }
}
Use the Mono.fromSupplier method! :)
@Service
public class FooService {

    public Mono<Foo> findFoo(String id) {
        return Mono
                .fromSupplier(() -> pollOnDatabase(id))
                .subscribeOn(Schedulers.boundedElastic());
    }

    private Foo pollOnDatabase(String id) {
        // polling on database for a while
    }
}
With this method we return a Mono value immediately, in constant time, with a supplier that is evaluated on demand when the caller subscribes. This is the non-blocking way to call a long-running blocking method.
BE AWARE that without subscribing on boundedElastic, the blocking pollOnDatabase method will block the original thread, which leads to thread starvation. You can find the different schedulers for each kind of task here.
DO NOT use Mono.just with long-running calculations, as it runs the calculation before returning the Mono instance, thereby blocking the calling thread.
+1: Watch this video to learn how to avoid "reactor meltdown". Use a library to detect blocking calls from non-blocking threads.
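As a concrete example of such a library (a sketch of mine, assuming the io.projectreactor.tools:blockhound dependency is on the classpath), BlockHound throws a BlockingOperationError as soon as a blocking call runs on a non-blocking Reactor thread:

import java.time.Duration;

import reactor.blockhound.BlockHound;
import reactor.core.publisher.Mono;

public class BlockingDetectionDemo {

    public static void main(String[] args) {
        BlockHound.install(); // instrument Reactor/Netty threads before any reactive code runs

        Mono.delay(Duration.ofMillis(1))          // continues on a parallel scheduler thread
                .doOnNext(n -> sleepQuietly(100)) // blocking call on a non-blocking thread -> BlockHound error
                .block();                         // the BlockingOperationError surfaces here
    }

    private static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}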
It's pretty simple. You could just do
@Service
public class FooService {

    public Mono<Foo> findFoo(String id) {
        return Mono.just(pollOnDatabase(id));
    }

    private Foo pollOnDatabase(String id) {
        // polling on database for a while
    }
}

Flink Retrofit not Serializable Exception

I have a Flink job reading events from a Kafka queue and calling another service if certain conditions are met.
I wanted to use Retrofit2 to call the REST endpoint of that service, but I get a "is not Serializable" exception. I have several flat maps connected to each other (in series), and the service call happens in the last FlatMap. The exception I get:
Exception in thread "main" org.apache.flink.api.common.InvalidProgramException: The implementation of the RichFlatMapFunction is not serializable. The object probably contains or references non serializable fields.
...
Caused by: java.io.NotSerializableException: retrofit2.Retrofit$1
...
The way I am initializing Retrofit:
RetrofitClient.getClient(BASE_URL).create(NotificationService.class);
And the NotificationService interface
public interface NotificationService {

    @PUT("/test")
    Call<String> putNotification(@Body Notification notification);
}
The RetrofitClient class
public class RetrofitClient {

    private static Retrofit retrofit = null;

    public static Retrofit getClient(String baseUrl) {
        if (retrofit == null) {
            retrofit = new Retrofit.Builder()
                    .baseUrl(baseUrl)
                    .addConverterFactory(GsonConverterFactory.create())
                    .build();
        }
        return retrofit;
    }
}
Post your Notification class code for more details, but it looks like this answer helps:
java.io.NotSerializableException with "$1" after class
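A common fix in Flink (a sketch of mine under stated assumptions, not the linked answer verbatim) is to keep the Retrofit client out of the serialized function closure: mark the field transient and build the client in open(), which runs on each task manager after the function has been deserialized. Event, Notification, needsNotification(), and BASE_URL are assumed stand-ins for the question's actual types and condition:

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class NotifyingFlatMap extends RichFlatMapFunction<Event, Event> {

    private static final String BASE_URL = "http://notification-service:8080"; // assumed URL
    private transient NotificationService service; // transient: excluded from serialization

    @Override
    public void open(Configuration parameters) {
        // Built per task instance on the worker, after deserialization
        service = RetrofitClient.getClient(BASE_URL).create(NotificationService.class);
    }

    @Override
    public void flatMap(Event event, Collector<Event> out) throws Exception {
        if (event.needsNotification()) { // assumed condition
            service.putNotification(new Notification(event)).execute();
        }
        out.collect(event);
    }
}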

FeignClients get published as REST endpoints in a Spring Cloud application

I've got REST FeignClient defined in my application:
@FeignClient(name = "gateway", configuration = FeignAuthConfig.class)
public interface AccountsClient extends Accounts {
}
I share the endpoint interface between the server and the client:
@RequestMapping(API_PATH)
public interface Accounts {

    @PostMapping(path = "/register",
            produces = APPLICATION_JSON_VALUE,
            consumes = APPLICATION_JSON_VALUE)
    ResponseEntity<?> registerAccount(@RequestBody ManagedPassUserVM managedUserDTO)
            throws EmailAlreadyInUseException, UsernameAlreadyInUseException, URISyntaxException;
}
Everything works fine except that the FeignClient definition in my client application also gets registered as an independent REST endpoint.
At the moment I prevent this behavior with a filter that returns a 404 status code for the FeignClient mappings in my client application. However, this workaround seems very inelegant.
Is there another way to prevent Feign clients from registering as separate REST endpoints?
It is a known limitation of Spring Cloud's Feign support. By adding @RequestMapping to the interface, Spring MVC (not Spring Cloud) assumes you want it as an endpoint. @RequestMapping on Feign interfaces is not currently supported.
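One alternative sketch (my illustration, not an official fix): since RequestMappingHandlerMapping only registers beans whose type carries a class-level @Controller or @RequestMapping, dropping the class-level @RequestMapping from the shared interface and folding API_PATH into the method mapping keeps Spring MVC from picking the Feign proxy up as a controller:

// Shared interface with no class-level @RequestMapping; the Feign client
// still sees the full path, and a server-side @RestController that
// implements this interface still inherits the method-level mapping.
public interface Accounts {

    @PostMapping(path = API_PATH + "/register",
            produces = APPLICATION_JSON_VALUE,
            consumes = APPLICATION_JSON_VALUE)
    ResponseEntity<?> registerAccount(@RequestBody ManagedPassUserVM managedUserDTO)
            throws EmailAlreadyInUseException, UsernameAlreadyInUseException, URISyntaxException;
}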
I've used a workaround for this faulty Spring Framework behavior:
@Configuration
@ConditionalOnClass({Feign.class})
public class FeignMappingDefaultConfiguration {

    @Bean
    public WebMvcRegistrations feignWebRegistrations() {
        return new WebMvcRegistrationsAdapter() {
            @Override
            public RequestMappingHandlerMapping getRequestMappingHandlerMapping() {
                return new FeignFilterRequestMappingHandlerMapping();
            }
        };
    }

    private static class FeignFilterRequestMappingHandlerMapping extends RequestMappingHandlerMapping {
        @Override
        protected boolean isHandler(Class<?> beanType) {
            return super.isHandler(beanType) && (AnnotationUtils.findAnnotation(beanType, FeignClient.class) == null);
        }
    }
}
I found it in a Spring Cloud issue.
