Logging MDC with @Async and TaskDecorator - spring-mvc

Using Spring MVC, I have the following setup:
An AbstractRequestLoggingFilter derived filter for logging requests.
A TaskDecorator to marshal the MDC context mapping from the web request thread to the @Async thread.
I'm attempting to collect context info using MDC (or a ThreadLocal object) for all components involved in handling the request.
I can correctly retrieve the MDC context info from the @Async thread. However, if the @Async thread were to add context info to the MDC, how can I now marshal the MDC context info to the thread that handles the response?
TaskDecorator
public class MdcTaskDecorator implements TaskDecorator {

    @Override
    public Runnable decorate(Runnable runnable) {
        // Web thread context: capture the logging MDC context
        Map<String, String> contextMap = MDC.getCopyOfContextMap();
        return () -> {
            try {
                // @Async thread context: restore the web thread's MDC context
                if (contextMap != null) {
                    MDC.setContextMap(contextMap);
                } else {
                    MDC.clear();
                }
                // Run the wrapped task
                runnable.run();
            } finally {
                MDC.clear();
            }
        };
    }
}
Async method
@Async
public CompletableFuture<String> doSomething_Async() {
    MDC.put("doSomething", "started");
    return doit();
}
Logging Filter
public class ServletLoggingFilter extends AbstractRequestLoggingFilter {

    @Override
    protected void beforeRequest(HttpServletRequest request, String message) {
        MDC.put("webthread", Thread.currentThread().getName()); // Will be webthread-1
    }

    @Override
    protected void afterRequest(HttpServletRequest request, String message) {
        MDC.put("responsethread", Thread.currentThread().getName()); // Will be webthread-2
        String s = MDC.get("doSomething"); // Will be null
        // logthis();
    }
}

I hope you have solved the problem by now, but if not, here comes a solution.
Everything you need can be summarized in the following two simple steps:
Keep your MdcTaskDecorator class.
Extend AsyncConfigurerSupport in your main class and override getAsyncExecutor() to set your customized decorator on the executor, as follows:
@SpringBootApplication
@EnableAsync // required for @Async methods to be proxied
public class AsyncTaskDecoratorApplication extends AsyncConfigurerSupport {

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setTaskDecorator(new MdcTaskDecorator());
        executor.initialize();
        return executor;
    }

    public static void main(String[] args) {
        SpringApplication.run(AsyncTaskDecoratorApplication.class, args);
    }
}

Create a bean that will pass the MDC properties from the parent thread to the successor thread.
@Configuration
@Slf4j
public class AsyncMDCConfiguration {

    @Bean
    public Executor asyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        // MDCTaskDecorator is a custom class implementing TaskDecorator that is
        // responsible for passing on the MDC properties
        executor.setTaskDecorator(new MDCTaskDecorator());
        executor.initialize();
        return executor;
    }
}
@Slf4j
public class MDCTaskDecorator implements TaskDecorator {

    @Override
    public Runnable decorate(Runnable runnable) {
        Map<String, String> contextMap = MDC.getCopyOfContextMap();
        return () -> {
            try {
                if (contextMap != null) { // guard: the map is null when the caller had no MDC
                    MDC.setContextMap(contextMap);
                }
                runnable.run();
            } finally {
                MDC.clear();
            }
        };
    }
}
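If it helps, here is a minimal sanity check, a sketch with a hypothetical method name (assuming an @Slf4j logger on the enclosing class): once the decorator is registered, MDC keys put on the web thread should be readable inside any @Async method.
// A quick sanity check, not from the original answer: keys put on the web thread,
// such as "webthread" from the logging filter above, should now be visible here.
@Async
public CompletableFuture<String> verifyMdcPropagation() {
    log.info("webthread = {}", MDC.get("webthread")); // copied over by the decorator
    return CompletableFuture.completedFuture("ok");
}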
All good now. Happy coding!

I have some solutions, roughly divided into Callable (for @Async), AsyncExecutionInterceptor (for @Async), and CallableProcessingInterceptor (for the controller).
1. The Callable solution for putting context info onto the @Async thread:
The key is using a ContextAwarePoolExecutor to replace the default executor of @Async:
@Configuration
public class DemoExecutorConfig {

    @Bean("demoExecutor")
    public Executor contextAwarePoolExecutor() {
        return new ContextAwarePoolExecutor();
    }
}
The ContextAwarePoolExecutor overrides the submit and submitListenable methods, wrapping each task in a ContextAwareCallable:
public class ContextAwarePoolExecutor extends ThreadPoolTaskExecutor {

    private static final long serialVersionUID = 667815067287186086L;

    @Override
    public <T> Future<T> submit(Callable<T> task) {
        return super.submit(new ContextAwareCallable<T>(task, newThreadContextContainer()));
    }

    @Override
    public <T> ListenableFuture<T> submitListenable(Callable<T> task) {
        return super.submitListenable(new ContextAwareCallable<T>(task, newThreadContextContainer()));
    }

    /**
     * Capture the info we need from the submitting (web) thread.
     */
    private ThreadContextContainer newThreadContextContainer() {
        ThreadContextContainer container = new ThreadContextContainer();
        container.setRequestAttributes(RequestContextHolder.currentRequestAttributes());
        container.setContextMapOfMDC(MDC.getCopyOfContextMap());
        return container;
    }
}
The ThreadContextContainer is just a POJO that stores the info for convenience:
public class ThreadContextContainer implements Serializable {

    private static final long serialVersionUID = -6809291915300091330L;

    private RequestAttributes requestAttributes;
    private Map<String, String> contextMapOfMDC;

    public RequestAttributes getRequestAttributes() {
        return requestAttributes;
    }

    public Map<String, String> getContextMapOfMDC() {
        return contextMapOfMDC;
    }

    public void setRequestAttributes(RequestAttributes requestAttributes) {
        this.requestAttributes = requestAttributes;
    }

    public void setContextMapOfMDC(Map<String, String> contextMapOfMDC) {
        this.contextMapOfMDC = contextMapOfMDC;
    }
}
The ContextAwareCallable (a Callable proxy for the original task) overrides the call method to restore the MDC and other context info before the original task executes:
public class ContextAwareCallable<T> implements Callable<T> {

    /**
     * The original task.
     */
    private Callable<T> task;

    /**
     * Stores the info captured on the submitting thread.
     */
    private ThreadContextContainer threadContextContainer;

    public ContextAwareCallable(Callable<T> task, ThreadContextContainer threadContextContainer) {
        this.task = task;
        this.threadContextContainer = threadContextContainer;
    }

    @Override
    public T call() throws Exception {
        // Restore the captured info on the executing thread
        if (threadContextContainer != null) {
            RequestAttributes requestAttributes = threadContextContainer.getRequestAttributes();
            if (requestAttributes != null) {
                RequestContextHolder.setRequestAttributes(requestAttributes);
            }
            Map<String, String> contextMapOfMDC = threadContextContainer.getContextMapOfMDC();
            if (contextMapOfMDC != null) {
                MDC.setContextMap(contextMapOfMDC);
            }
        }
        try {
            // Execute the original task
            return task.call();
        } finally {
            // Clear the info after the task completes
            RequestContextHolder.resetRequestAttributes();
            MDC.clear();
        }
    }
}
In the end, use @Async with the configured bean "demoExecutor", like this:
@Async("demoExecutor")
void yourTaskMethod();
2. In regard to your question of handling the response:
I regret to say that I don't have a verified solution. Perhaps org.springframework.aop.interceptor.AsyncExecutionInterceptor#invoke could solve that.
I also don't think the response can be handled in your ServletLoggingFilter, because the @Async method returns instantly: the afterRequest method executes immediately, before the @Async method has done its work. You won't get what you want unless you synchronously wait for the @Async method to finish executing.
But if you just want to log something, you can add this code to my example ContextAwareCallable, after the original task executes its call method:
try {
    // Execute the original task
    return task.call();
} finally {
    String something = MDC.get("doSomething"); // will not be null
    // logthis(something);
    // Clear the info after the task completes
    RequestContextHolder.resetRequestAttributes();
    MDC.clear();
}
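For completeness, one unverified approach to the original "marshal back" question: only the @Async thread can safely snapshot its own MDC, so the async method can return that snapshot alongside its result, and the caller can merge it after joining the future. A minimal sketch under assumptions (doWork() is a hypothetical synchronous helper, and the caller must be able to block):
// A sketch, not a verified solution: the snapshot is taken on the @Async
// thread itself, before the future completes.
@Async("demoExecutor")
public CompletableFuture<Map.Entry<String, Map<String, String>>> doSomethingWithContext() {
    MDC.put("doSomething", "started");
    String result = doWork(); // hypothetical synchronous work
    return CompletableFuture.completedFuture(Map.entry(result, MDC.getCopyOfContextMap()));
}

// Caller: only works where a synchronous wait is acceptable
Map.Entry<String, Map<String, String>> pair = service.doSomethingWithContext().join();
pair.getValue().forEach(MDC::put); // merge the @Async thread's context into this thread
String s = MDC.get("doSomething"); // now "started"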

Related

Processing strategy of message in Spring Kafka listener

I just want to make sure messages are processed in the correct way. When a message is received by the listener, it is always processed by a new thread (the processor bean is defined with prototype scope). Is this implementation correct? (I assumed the listener is not thread-safe, which is why a prototype-scoped bean is used to process each message.)
(Input: TestTopic - 5 partitions - 1 consumer) or (Input: TestTopic - 5 partitions - 5 consumers)
public class EventListener {

    @Autowired
    private EventProcessor eventProcessor;

    @KafkaListener(topics = "TestTopic", containerFactory = "kafkaListenerContainerFactory",
            autoStartup = "true")
    public void onMessage(
            @Payload List<ConsumerRecord<String, String>> consumerRecords, Acknowledgment acknowledgment) {
        eventProcessor.processAndAcknowledgeBatchMessages(consumerRecords, acknowledgment);
    }
}
// Event processor
@Slf4j
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
@NoArgsConstructor
@SuppressWarnings("unused")
public class EventProcessorImpl implements EventProcessor {

    @Autowired
    private KafkaProducerTemplate kafkaProducerTemplate;

    @Autowired
    private ObjectMapper localObjectMapper;

    @Autowired
    private Dao dao;

    public void processAndAcknowledgeBatchMessages(
            List<ConsumerRecord<String, String>> consumerRecords, Acknowledgment acknowledgment) {
        long start = System.currentTimeMillis();
        consumerRecords.forEach(consumerRecord -> {
            try {
                Event event = localObjectMapper.readValue(consumerRecord.value(), Event.class);
                dao.save(process(event));
            } catch (IOException e) { // readValue throws a checked exception
                throw new UncheckedIOException(e);
            }
        });
        acknowledgment.acknowledge();
    }
}
No, it is not correct; you should not execute on another thread; it will cause problems with committing offsets and error handling.
Also, making the EventProcessorImpl a prototype bean won't help. That just means a new instance is used each time the bean is referenced.
Since it is @Autowired, it is only referenced once, during initialization. To get a new instance for each request, you would need to call getBean() on the application context each time.
It is better to make your code thread-safe.
EDIT
There are (at least) a couple of ways to deal with a non-thread-safe service defined in prototype scope.
Use a ThreadLocal:
@SpringBootApplication
public class So68447863Application {

    public static void main(String[] args) {
        SpringApplication.run(So68447863Application.class, args);
    }

    private static final ThreadLocal<NotThreadSafeService> SERVICES = new ThreadLocal<>();

    @Autowired
    ApplicationContext context;

    @KafkaListener(id = "so68447863", topics = "so68447863", concurrency = "5")
    void listen(String in) {
        NotThreadSafeService service = SERVICES.get();
        if (service == null) {
            service = this.context.getBean(NotThreadSafeService.class);
            SERVICES.set(service);
        }
        service.process(in);
    }

    @EventListener
    void removeService(ConsumerStoppedEvent event) {
        System.out.println("Consumer stopped; removing TL");
        SERVICES.remove();
    }

    @Bean
    NewTopic topic() {
        return TopicBuilder.name("so68447863").partitions(10).replicas(1).build();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    NotThreadSafeService service() {
        return new NotThreadSafeService();
    }
}

class NotThreadSafeService {

    void process(String msg) {
        System.out.println(msg + " processed by " + this);
    }
}
Use a pool of instances:
@SpringBootApplication
public class So68447863Application {

    public static void main(String[] args) {
        SpringApplication.run(So68447863Application.class, args);
    }

    private static final BlockingQueue<NotThreadSafeService> SERVICES = new LinkedBlockingQueue<>();

    @Autowired
    ApplicationContext context;

    @KafkaListener(id = "so68447863", topics = "so68447863", concurrency = "5")
    void listen(String in) {
        NotThreadSafeService service = SERVICES.poll();
        if (service == null) {
            service = this.context.getBean(NotThreadSafeService.class);
        }
        try {
            service.process(in);
        }
        finally {
            SERVICES.add(service);
        }
    }

    @Bean
    NewTopic topic() {
        return TopicBuilder.name("so68447863").partitions(10).replicas(1).build();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    NotThreadSafeService service() {
        return new NotThreadSafeService();
    }
}

class NotThreadSafeService {

    void process(String msg) {
        System.out.println(msg + " processed by " + this);
    }
}
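If the motivation for the extra thread was throughput rather than thread-safety, note that the container can parallelize for you. Here is a minimal sketch (assumed bean and type names, not part of the answer above): with listener concurrency set, each partition is owned by at most one container thread, so offset commits and error handling keep working.
@Bean
ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Up to 5 container threads; each partition is handled by exactly one of them
    factory.setConcurrency(5);
    return factory;
}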

How to bind a Store using Spring Cloud Stream and Kafka?

I'd like to use a Kafka state store of type KeyValueStore in a sample application using the Kafka Binder of Spring Cloud Stream.
According to the documentation, it should be pretty simple.
This is my main class:
@SpringBootApplication
public class KafkaStreamTestApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaStreamTestApplication.class, args);
    }

    @Bean
    public BiFunction<KStream<String, String>, KeyValueStore<String, String>, KStream<String, String>> process() {
        return (input, store) -> input.mapValues(v -> v.toUpperCase());
    }

    @Bean
    public StoreBuilder myStore() {
        return Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("my-store"), Serdes.String(),
                Serdes.String());
    }
}
I suppose that the KeyValueStore should be passed as the second parameter of the "process" method, but the application fails to start with the message below:
Caused by: java.lang.IllegalStateException: No factory found for binding target type: org.apache.kafka.streams.state.KeyValueStore among registered factories: channelFactory,messageSourceFactory,kStreamBoundElementFactory,kTableBoundElementFactory,globalKTableBoundElementFactory
at org.springframework.cloud.stream.binding.AbstractBindableProxyFactory.getBindingTargetFactory(AbstractBindableProxyFactory.java:82) ~[spring-cloud-stream-3.0.3.RELEASE.jar:3.0.3.RELEASE]
at org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsBindableProxyFactory.bindInput(KafkaStreamsBindableProxyFactory.java:191) ~[spring-cloud-stream-binder-kafka-streams-3.0.3.RELEASE.jar:3.0.3.RELEASE]
at org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsBindableProxyFactory.afterPropertiesSet(KafkaStreamsBindableProxyFactory.java:103) ~[spring-cloud-stream-binder-kafka-streams-3.0.3.RELEASE.jar:3.0.3.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1855) ~[spring-beans-5.2.5.RELEASE.jar:5.2.5.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1792) ~[spring-beans-5.2.5.RELEASE.jar:5.2.5.RELEASE]
I found the solution by reading a unit test in Spring Cloud Stream.
The code below shows how I applied that solution to my code.
The transformer uses the store provided by the Spring bean method "myStore":
@SpringBootApplication
public class KafkaStreamTestApplication {

    public static final String MY_STORE_NAME = "my-store";

    public static void main(String[] args) {
        SpringApplication.run(KafkaStreamTestApplication.class, args);
    }

    @Bean
    public Function<KStream<String, String>, KStream<String, String>> process2() {
        return (input) -> input
                .transformValues(() -> new MyValueTransformer(), MY_STORE_NAME);
    }

    @Bean
    public StoreBuilder<?> myStore() {
        return Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore(MY_STORE_NAME), Serdes.String(),
                Serdes.String());
    }
}
public class MyValueTransformer implements ValueTransformer<String, String> {

    private KeyValueStore<String, String> store;
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        store = (KeyValueStore<String, String>) this.context.getStateStore(KafkaStreamTestApplication.MY_STORE_NAME);
    }

    @Override
    public String transform(String value) {
        String tValue = store.get(value);
        if (tValue == null) {
            store.put(value, value.toUpperCase());
        }
        return tValue;
    }

    @Override
    public void close() {
        if (store != null) {
            store.close();
        }
    }
}

How to pass data from grpc rpc call to server interceptor in java

I am trying to set some metadata with a value from the response after the RPC server call has been processed. The plan was to use a server interceptor and override the close method.
Something like this: https://github.com/dconnelly/grpc-error-example/blob/master/src/main/java/example/Errors.java#L38
Since the metadata value depends on the response, I need some way to pass data from the RPC server call to the server interceptor, or to access the response from the interceptor.
In Go, the metadata can be set easily in the RPC call with grpc.SetTrailer after processing, but in Java there is no way to do it in the RPC call. So I am trying to use a server interceptor for the same purpose.
Can someone help?
You can use grpc-java's Contexts for that.
In the interceptor you attach a Context with a custom key containing a mutable reference. Then in the call you access that key again and extract the value from it.
public static final Context.Key<TrailerHolder> TRAILER_HOLDER_KEY = Context.key("trailerHolder");
Context context = Context.current().withValue(TRAILER_HOLDER_KEY, new TrailerHolder());
Context previousContext = context.attach();
[...]
context.detach(previousContext);
You can access the context value like this:
TrailerHolder trailerHolder = TRAILER_HOLDER_KEY.get();
You might want to implement your code similar to this method:
Contexts#interceptCall(Context, ServerCall, Metadata, ServerCallHandler)
EDIT:
import io.grpc.Context;
import io.grpc.ForwardingServerCall.SimpleForwardingServerCall;
import io.grpc.ForwardingServerCallListener;
import io.grpc.Metadata;
import io.grpc.ServerCall;
import io.grpc.ServerCall.Listener;
import io.grpc.ServerCallHandler;
import io.grpc.ServerInterceptor;
import io.grpc.Status;

public class TrailerServerInterceptor implements ServerInterceptor {

    public static final Context.Key<Metadata> TRAILER_HOLDER_KEY = Context.key("trailerHolder");

    @Override
    public <ReqT, RespT> Listener<ReqT> interceptCall(final ServerCall<ReqT, RespT> call, final Metadata headers,
            final ServerCallHandler<ReqT, RespT> next) {
        final TrailerCall<ReqT, RespT> call2 = new TrailerCall<>(call);
        final Context context = Context.current().withValue(TRAILER_HOLDER_KEY, new Metadata());
        final Context previousContext = context.attach();
        try {
            return new TrailerListener<>(next.startCall(call2, headers), context);
        } finally {
            context.detach(previousContext);
        }
    }

    private class TrailerCall<ReqT, RespT> extends SimpleForwardingServerCall<ReqT, RespT> {

        public TrailerCall(final ServerCall<ReqT, RespT> delegate) {
            super(delegate);
        }

        @Override
        public void close(final Status status, final Metadata trailers) {
            trailers.merge(TRAILER_HOLDER_KEY.get());
            super.close(status, trailers);
        }
    }

    private class TrailerListener<ReqT> extends ForwardingServerCallListener.SimpleForwardingServerCallListener<ReqT> {

        private final Context context;

        public TrailerListener(final ServerCall.Listener<ReqT> delegate, final Context context) {
            super(delegate);
            this.context = context;
        }

        @Override
        public void onMessage(final ReqT message) {
            final Context previous = this.context.attach();
            try {
                super.onMessage(message);
            } finally {
                this.context.detach(previous);
            }
        }

        @Override
        public void onHalfClose() {
            final Context previous = this.context.attach();
            try {
                super.onHalfClose();
            } finally {
                this.context.detach(previous);
            }
        }

        @Override
        public void onCancel() {
            final Context previous = this.context.attach();
            try {
                super.onCancel();
            } finally {
                this.context.detach(previous);
            }
        }

        @Override
        public void onComplete() {
            final Context previous = this.context.attach();
            try {
                super.onComplete();
            } finally {
                this.context.detach(previous);
            }
        }

        @Override
        public void onReady() {
            final Context previous = this.context.attach();
            try {
                super.onReady();
            } finally {
                this.context.detach(previous);
            }
        }
    }
}
In your gRPC service method you can simply use TRAILER_HOLDER_KEY.get().put(...).
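For example, a service method built on the stock Greeter stubs from the grpc-java examples could stash a response-derived value like this (the stub classes and the trailer name are assumptions for illustration, not part of the original answer):
public class GreeterService extends GreeterGrpc.GreeterImplBase {

    // Hypothetical trailer key, for illustration only
    private static final Metadata.Key<String> RESULT_KEY =
            Metadata.Key.of("x-result", Metadata.ASCII_STRING_MARSHALLER);

    @Override
    public void sayHello(HelloRequest request, StreamObserver<HelloReply> responseObserver) {
        HelloReply reply = HelloReply.newBuilder()
                .setMessage("Hello " + request.getName())
                .build();
        // The interceptor's close() merges this into the response trailers
        TrailerServerInterceptor.TRAILER_HOLDER_KEY.get().put(RESULT_KEY, reply.getMessage());
        responseObserver.onNext(reply);
        responseObserver.onCompleted();
    }
}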

SpringBoot Undertow: how to dispatch to worker thread

I'm currently having a look at Spring Boot with Undertow, and it's not really clear (to me) how to dispatch an incoming HTTP request to a worker thread for blocking operation handling.
Looking at the class UndertowEmbeddedServletContainer.class, it looks like there is no way to get this behaviour, since the only HttpHandler is a ServletHandler, which allows @Controller configurations:
private Undertow createUndertowServer() {
    try {
        HttpHandler servletHandler = this.manager.start();
        this.builder.setHandler(getContextHandler(servletHandler));
        return this.builder.build();
    }
    catch (ServletException ex) {
        throw new EmbeddedServletContainerException(
                "Unable to start embedded Undertow", ex);
    }
}

private HttpHandler getContextHandler(HttpHandler servletHandler) {
    if (StringUtils.isEmpty(this.contextPath)) {
        return servletHandler;
    }
    return Handlers.path().addPrefixPath(this.contextPath, servletHandler);
}
By default in Undertow, all requests are handled by an IO thread, which is intended for non-blocking operations.
Does this mean that every @Controller execution will be processed by a non-blocking thread? Or is there a way to choose between the IO thread and a worker thread?
I tried to write a workaround, but the code is pretty ugly, and maybe someone has a better solution:
BlockingHandler.class
@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface BlockingHandler {
    String contextPath() default "/";
}
UndertowInitializer.class
public class UndertowInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext configurableApplicationContext) {
        configurableApplicationContext.addBeanFactoryPostProcessor(new UndertowHandlerPostProcessor());
    }
}
UndertowHandlerPostProcessor.class
public class UndertowHandlerPostProcessor implements BeanDefinitionRegistryPostProcessor {

    @Override
    public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry beanDefinitionRegistry) throws BeansException {
        ClassPathScanningCandidateComponentProvider scanner = new ClassPathScanningCandidateComponentProvider(false);
        scanner.addIncludeFilter(new AnnotationTypeFilter(BlockingHandler.class));
        for (BeanDefinition beanDefinition : scanner.findCandidateComponents("org.me.lah")) {
            try {
                Class clazz = Class.forName(beanDefinition.getBeanClassName());
                beanDefinitionRegistry.registerBeanDefinition(clazz.getSimpleName(), beanDefinition);
            } catch (ClassNotFoundException e) {
                throw new BeanCreationException(format("Unable to create bean %s", beanDefinition.getBeanClassName()), e);
            }
        }
    }

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory configurableListableBeanFactory) throws BeansException {
        // no need to post-process defined beans
    }
}
Override UndertowEmbeddedServletContainerFactory.class
public class UndertowEmbeddedServletContainerFactory extends AbstractEmbeddedServletContainerFactory implements ResourceLoaderAware, ApplicationContextAware {

    private ApplicationContext applicationContext;

    @Override
    public EmbeddedServletContainer getEmbeddedServletContainer(ServletContextInitializer... initializers) {
        DeploymentManager manager = createDeploymentManager(initializers);
        int port = getPort();
        if (port == 0) {
            port = SocketUtils.findAvailableTcpPort(40000);
        }
        Undertow.Builder builder = createBuilder(port);
        Map<String, Object> handlers = applicationContext.getBeansWithAnnotation(BlockingHandler.class);
        return new UndertowEmbeddedServletContainer(builder, manager, getContextPath(),
                port, port >= 0, handlers);
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }
}
...
Override UndertowEmbeddedServletContainer.class
public UndertowEmbeddedServletContainer(Builder builder, DeploymentManager manager,
        String contextPath, int port, boolean autoStart, Map<String, Object> handlers) {
    this.builder = builder;
    this.manager = manager;
    this.contextPath = contextPath;
    this.port = port;
    this.autoStart = autoStart;
    this.handlers = handlers;
}

private Undertow createUndertowServer() {
    try {
        HttpHandler servletHandler = this.manager.start();
        String path = this.contextPath.isEmpty() ? "/" : this.contextPath;
        PathHandler pathHandler = Handlers.path().addPrefixPath(path, servletHandler);
        for (Entry<String, Object> entry : handlers.entrySet()) {
            Annotation annotation = entry.getValue().getClass().getDeclaredAnnotation(BlockingHandler.class);
            System.out.println(((BlockingHandler) annotation).contextPath());
            pathHandler.addPrefixPath(((BlockingHandler) annotation).contextPath(), (HttpHandler) entry.getValue());
        }
        this.builder.setHandler(pathHandler);
        return this.builder.build();
    }
    catch (ServletException ex) {
        throw new EmbeddedServletContainerException(
                "Unable to start embedded Undertow", ex);
    }
}
Set the initializer on the application context:
public static void main(String[] args) {
    new SpringApplicationBuilder(Application.class).initializers(new UndertowInitializer()).run(args);
}
Finally, create an HttpHandler that dispatches to a worker thread:
@BlockingHandler(contextPath = "/blocking/test")
public class DatabaseHandler implements HttpHandler {

    @Autowired
    private EchoService echoService;

    @Override
    public void handleRequest(HttpServerExchange httpServerExchange) throws Exception {
        if (httpServerExchange.isInIoThread()) {
            // Re-dispatch this handler to a worker thread and stop processing on the IO thread
            httpServerExchange.dispatch(this);
            return;
        }
        echoService.getMessage("my message");
    }
}
As you can see, my "solution" is really heavy, and I would really appreciate any help to simplify it.
Thank you
You don't need to do anything.
Spring Boot's default Undertow configuration uses Undertow's ServletInitialHandler in front of Spring MVC's DispatcherServlet. This handler performs the exchange.isInIoThread() check and calls dispatch() if necessary.
If you place a breakpoint in your @Controller, you'll see that it's called on a thread named XNIO-1 task-n, which is a worker thread (the IO threads are named XNIO-1 I/O-n).
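For reference, the raw Undertow idiom that the built-in handler applies on your behalf looks roughly like this (a sketch of Undertow's documented dispatch pattern, not Spring Boot's actual source):
public class BlockingAwareHandler implements HttpHandler {

    @Override
    public void handleRequest(HttpServerExchange exchange) throws Exception {
        if (exchange.isInIoThread()) {
            exchange.dispatch(this); // re-run this same handler on a worker thread
            return;                  // the IO thread must not continue past the dispatch
        }
        // Safe to block here: we are now on an XNIO worker thread
    }
}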

@Context WebConfig not injected when using JerseyTest 2.0

I have a simple resource like:
#Path("/")
public class RootResource {
#Context WebConfig wc;
#PostConstruct
public void init() {
assertNotNull(wc);
}
#GET
public void String method() {
return "Hello\n";
}
}
Which I am trying to use with JerseyTest (2.x, not 1.x) and the GrizzlyTestContainerFactory.
I can't work out what I need to do in terms of config to get the WebConfig object injected.
I solved this issue by creating a subclass of GrizzlyTestContainerFactory and explicitly loading the Jersey servlet. This triggers the injection of the WebConfig object. The code looks like this:
public class ExtendedGrizzlyTestContainerFactory implements TestContainerFactory {

    private static class GrizzlyTestContainer implements TestContainer {

        private final URI uri;
        private final ApplicationHandler appHandler;
        private HttpServer server;

        private static final Logger LOGGER = Logger.getLogger(GrizzlyTestContainer.class.getName());

        private GrizzlyTestContainer(URI uri, ApplicationHandler appHandler) {
            this.appHandler = appHandler;
            this.uri = uri;
        }

        @Override
        public ClientConfig getClientConfig() {
            return null;
        }

        @Override
        public URI getBaseUri() {
            return uri;
        }

        @Override
        public void start() {
            if (LOGGER.isLoggable(Level.INFO)) {
                LOGGER.log(Level.INFO, "Starting GrizzlyTestContainer...");
            }
            try {
                this.server = GrizzlyHttpServerFactory.createHttpServer(uri, appHandler);
                // Initialize and register the Jersey servlet
                WebappContext context = new WebappContext("WebappContext", "");
                ServletRegistration registration = context.addServlet("ServletContainer", ServletContainer.class);
                registration.setInitParameter("javax.ws.rs.Application",
                        appHandler.getConfiguration().getApplication().getClass().getName());
                // Add an init parameter - this could be loaded from a parameter in the constructor
                registration.setInitParameter("myparam", "myvalue");
                registration.addMapping("/*");
                context.deploy(server);
            } catch (ProcessingException e) {
                throw new TestContainerException(e);
            }
        }

        @Override
        public void stop() {
            if (LOGGER.isLoggable(Level.INFO)) {
                LOGGER.log(Level.INFO, "Stopping GrizzlyTestContainer...");
            }
            this.server.stop();
        }
    }

    @Override
    public TestContainer create(URI baseUri, ApplicationHandler application) throws IllegalArgumentException {
        return new GrizzlyTestContainer(baseUri, application);
    }
}
Notice that the Jersey servlet configuration is loaded from the ApplicationHandler passed in as a parameter, using the inner Application object's class name (ResourceConfig is a subclass of Application). Therefore, you also need to create a subclass of ResourceConfig for this approach to work. The code for this is very simple:
package com.example;

import org.glassfish.jersey.server.ResourceConfig;

public class MyResourceConfig extends ResourceConfig {
    public MyResourceConfig() {
        super(MyResource.class);
    }
}
This assumes the resource you are testing is MyResource. You also need to override a couple of methods in your test like this:
public class MyResourceTest extends JerseyTest {

    public MyResourceTest() throws TestContainerException {
    }

    @Override
    protected Application configure() {
        return new MyResourceConfig();
    }

    @Override
    protected TestContainerFactory getTestContainerFactory() throws TestContainerException {
        return new ExtendedGrizzlyTestContainerFactory();
    }

    @Test
    public void testCreateSimpleBean() {
        final String beanList = target("test").request().get(String.class);
        Assert.assertNotNull(beanList);
    }
}
Finally, for completeness, here is the code for MyResource:
#Path("test")
public class MyResource {
#Context WebConfig wc;
#PostConstruct
public void init() {
System.out.println("WebConfig: " + wc);
String url = wc.getInitParameter("myparam");
System.out.println("myparam = "+url);
}
#GET
#Produces(MediaType.APPLICATION_JSON)
public Collection<TestBean> createSimpleBean() {
Collection<TestBean> res = new ArrayList<TestBean>();
res.add(new TestBean("a", 1, 1L));
res.add(new TestBean("b", 2, 2L));
return res;
}
#POST
#Produces(MediaType.APPLICATION_JSON)
#Consumes(MediaType.APPLICATION_JSON)
public TestBean roundTrip(TestBean s) {
return s;
}
}
The output of running the test shows that the WebConfig is loaded and the init param is now available:
WebConfig: org.glassfish.jersey.servlet.WebServletConfig#107d0f44
myparam = myvalue
The solution from @ametke worked well but wasn't picking up my ExceptionMapper classes. To solve this I simplified the start() method to:
@Override
public void start() {
    try {
        // initParams is assumed to be a Map<String, String> field on the container
        initParams.put("jersey.config.server.provider.packages", "my.resources;my.config");
        this.server = GrizzlyWebContainerFactory.create(uri, initParams);
    } catch (ProcessingException | IOException e) {
        throw new TestContainerException(e);
    }
}
This was based on Problems running JerseyTest when dealing with HttpServletResponse
