I am using the Firebase Admin Java SDK in a server-side Spring WebFlux environment. The Firebase SDK provides two methods for each operation: a synchronous .doOperation() method, which returns a result, and a .doOperationAsync() method, which returns an instance of ApiFuture.
The Javadoc for ApiFuture can be found here.
ApiFuture extends java.util.concurrent.Future, which Mono.fromFuture() does not accept; it requires a CompletableFuture.
Is there a way to convert Google's ApiFuture into a Mono?
It's possible to convert an ApiFuture to a CompletableFuture, which can be used with Mono.fromFuture:
internal class ApiCompletableFuture<V>(
private val future: ApiFuture<V>,
executor: Executor = directExecutor(),
) : CompletableFuture<V>(), ApiFutureCallback<V> {
init {
ApiFutures.addCallback(future, this, executor)
}
override fun cancel(mayInterruptIfRunning: Boolean): Boolean {
future.cancel(mayInterruptIfRunning)
return super.cancel(mayInterruptIfRunning)
}
override fun onSuccess(result: V) {
complete(result)
}
override fun onFailure(t: Throwable) {
completeExceptionally(t)
}
}
Some utility methods for you:
public static <T> CompletableFuture<T> toCompletableFuture(ApiFuture<T> apiFuture) {
final CompletableFuture<T> cf = new CompletableFuture<>();
ApiFutures.addCallback(apiFuture,
new ApiFutureCallback<T>() {
@Override
public void onFailure(Throwable t) {
cf.completeExceptionally(t);
}
@Override
public void onSuccess(T result) {
cf.complete(result);
}
},
MoreExecutors.directExecutor());
return cf;
}
public static <T> Mono<T> toMono(ApiFuture<T> apiFuture) {
return Mono.create(sink -> ApiFutures.addCallback(apiFuture,
new ApiFutureCallback<T>() {
@Override
public void onFailure(Throwable t) {
sink.error(t);
}
@Override
public void onSuccess(T result) {
sink.success(result);
}
},
MoreExecutors.directExecutor()));
}
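For example, usage with a Firestore write might look like this. A minimal sketch: the saveUser method, collection, document, and payload are our own placeholders, assuming the Firebase Admin Firestore API (com.google.firebase.cloud.FirestoreClient, com.google.cloud.firestore.WriteResult):
// Convert a Firestore write (ApiFuture<WriteResult>) into a Mono
public static Mono<WriteResult> saveUser() {
    ApiFuture<WriteResult> future = FirestoreClient.getFirestore()
            .collection("users")   // placeholder collection
            .document("alice")     // placeholder document
            .set(Collections.singletonMap("active", true)); // placeholder payload
    return toMono(future);
}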
If I send multiple commands to the same aggregate, only the first is handled.
Is this a configuration problem, or am I missing something?
The message I am getting after the 2nd command is sent:
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.axonframework.commandhandling.CommandExecutionException: Cannot invoke "Object.hashCode()" because "key" is null
The service method where I do my sending of the command is:
public void maakAanvraag() {
UUID aanvraagId = UUID.randomUUID();
commandGateway.sendAndWait(
VerwerkAanvraag.builder()
.aanvraagId(aanvraagId)
.build()
);
commandGateway.sendAndWait(
VerwerkPersoonsgegevensVastgesteld.builder()
.aanvraagId(aanvraagId)
.build()
);
commandGateway.sendAndWait(
VerwerkOrganisatiegegevensVastgesteld.builder()
.aanvraagId(aanvraagId)
.organisatieId(organisatieView.getOrganisatieId())
.rolOrganisatie(rolOrganisatie)
.build()
);
commandGateway.sendAndWait(
VerwerkBeperkingErkenningsdoelGematcht.builder()
.aanvraagId(aanvraagId)
.build());
}
The aggregate I am using is:
@Aggregate
@Getter
@NoArgsConstructor
public class Aanvraag {
public static final String META_DATA_ZAAKNUMMER = "aanvraag_zaaknummer";
@AggregateIdentifier
private UUID aanvraagId;
@CommandHandler
public Aanvraag(VerwerkAanvraag command) {
AanvraagGeregistreerd aanvraagGeregistreerd =
AanvraagGeregistreerd.builder()
.aanvraagId(command.getAanvraagId())
.build();
apply(aanvraagGeregistreerd, MetaData.with(META_DATA_ZAAKNUMMER, "123456789"));
}
@EventSourcingHandler
public void on(AanvraagGeregistreerd event) {
aanvraagId = event.getAanvraagId();
}
@CommandHandler
public void verwerkOrganisatiegegevensVastgesteld(VerwerkOrganisatiegegevensVastgesteld command) {
OrganisatiegegevensVastgesteld persoonsgegevensVastgesteld =
OrganisatiegegevensVastgesteld.builder()
.aanvraagId(command.getAanvraagId())
.build();
apply(persoonsgegevensVastgesteld);
}
@EventSourcingHandler
public void on(OrganisatiegegevensVastgesteld event) {
aanvraagId = event.getAanvraagId();
}
@CommandHandler
public void verwerkPersoonsgegevensVastgesteld(VerwerkPersoonsgegevensVastgesteld command) {
PersoonsgegevensVastgesteld persoonsgegevensVastgesteld =
PersoonsgegevensVastgesteld.builder()
.aanvraagId(command.getAanvraagId())
.build();
apply(persoonsgegevensVastgesteld);
}
@EventSourcingHandler
public void on(PersoonsgegevensVastgesteld event) {
aanvraagId = event.getAanvraagId();
}
@CommandHandler
public void verwerkBeperkingErkenningsdoelGematcht(VerwerkBeperkingErkenningsdoelGematcht command) {
BeperkingErkenningsdoelGematcht beperkingErkenningsdoelGematcht =
BeperkingErkenningsdoelGematcht.builder()
.aanvraagId(command.getAanvraagId())
.build();
apply(beperkingErkenningsdoelGematcht);
}
@EventSourcingHandler
public void on(BeperkingErkenningsdoelGematcht event) {
aanvraagId = event.getAanvraagId();
}
}
The project uses Spring Boot 2.6.6 with axon-spring-boot-starter 4.5.9.
It all runs on Java Temurin 17.0.3.
We solved the problem.
The issue had nothing to do with Axon: the problem was a logging interceptor.
After removing this interceptor, Axon worked as expected.
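For reference, a dispatch interceptor that only logs and returns each message unchanged avoids this class of problem. A minimal sketch, assuming Axon 4's MessageDispatchInterceptor API (the class and logger names are ours); it can be registered with commandBus.registerDispatchInterceptor(...):
import java.util.List;
import java.util.function.BiFunction;
import org.axonframework.commandhandling.CommandMessage;
import org.axonframework.messaging.MessageDispatchInterceptor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SafeLoggingDispatchInterceptor implements MessageDispatchInterceptor<CommandMessage<?>> {
    private static final Logger log = LoggerFactory.getLogger(SafeLoggingDispatchInterceptor.class);

    @Override
    public BiFunction<Integer, CommandMessage<?>, CommandMessage<?>> handle(
            List<? extends CommandMessage<?>> messages) {
        return (index, command) -> {
            // log only; always return the original message unchanged
            log.info("Dispatching command [{}]", command.getCommandName());
            return command;
        };
    }
}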
We are using Spring Cloud Stream Hoxton.SR4 to consume messages from a Kafka topic. We've enabled spring.cloud.stream.bindings..consumer.batch-mode=true, fetching 2000 records per poll. I would like to know if there is a way we can manually acknowledge/commit the entire batch.
SR4 is quite old; the current Hoxton release is SR9 and the current Spring Cloud Stream version is 3.0.10.RELEASE (Hoxton.SR9 pulls in 3.0.9).
You need to consume a Message and get the acknowledgment from a header.
@SpringBootApplication
public class So652289261Application {
public static void main(String[] args) {
SpringApplication.run(So652289261Application.class, args);
}
@Bean
Consumer<Message<List<Foo>>> consume() {
return msg -> {
System.out.println(msg.getPayload());
msg.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class).acknowledge();
};
}
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
return (container, dest, group) -> container.getContainerProperties()
.setCommitLogLevel(LogIfLevelEnabled.Level.INFO);
}
@Bean
public ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template) {
return args -> {
template.send("consume-in-0", "{\"bar\":\"baz\"}".getBytes());
template.send("consume-in-0", "{\"bar\":\"qux\"}".getBytes());
};
}
public static class Foo {
private String bar;
public Foo() {
}
public Foo(String bar) {
this.bar = bar;
}
public String getBar() {
return this.bar;
}
public void setBar(String bar) {
this.bar = bar;
}
@Override
public String toString() {
return "Foo [bar=" + this.bar + "]";
}
}
}
Properties for Boot 2.3.6 and Cloud Hoxton.SR9
spring.cloud.stream.bindings.consume-in-0.group=so65228926
spring.cloud.stream.bindings.consume-in-0.consumer.batch-mode=true
spring.cloud.stream.kafka.bindings.consume-in-0.consumer.auto-commit-offset=false
spring.kafka.producer.properties.linger.ms=50
Properties for Boot 2.4.0 and Cloud 2020.0.0-M6
spring.cloud.stream.bindings.consume-in-0.group=so65228926
spring.cloud.stream.bindings.consume-in-0.consumer.batch-mode=true
spring.cloud.stream.kafka.bindings.consume-in-0.consumer.ack-mode=MANUAL
spring.kafka.producer.properties.linger.ms=50
[Foo [bar=baz], Foo [bar=qux]]
... Committing: {consume-in-0-0=OffsetAndMetadata{offset=14, leaderEpoch=null, metadata=''}}
I'd like to use a Kafka state store of type KeyValueStore in a sample application using the Kafka Binder of Spring Cloud Stream.
According to the documentation, it should be pretty simple.
This is my main class:
@SpringBootApplication
public class KafkaStreamTestApplication {
public static void main(String[] args) {
SpringApplication.run(KafkaStreamTestApplication.class, args);
}
@Bean
public BiFunction<KStream<String, String>, KeyValueStore<String,String>, KStream<String, String>> process(){
return (input,store) -> input.mapValues(v -> v.toUpperCase());
}
@Bean
public StoreBuilder myStore() {
return Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore("my-store"), Serdes.String(),
Serdes.String());
}
}
I suppose that the KeyValueStore should be passed as the second parameter of the "process" method, but the application fails to start with the message below:
Caused by: java.lang.IllegalStateException: No factory found for binding target type: org.apache.kafka.streams.state.KeyValueStore among registered factories: channelFactory,messageSourceFactory,kStreamBoundElementFactory,kTableBoundElementFactory,globalKTableBoundElementFactory
at org.springframework.cloud.stream.binding.AbstractBindableProxyFactory.getBindingTargetFactory(AbstractBindableProxyFactory.java:82) ~[spring-cloud-stream-3.0.3.RELEASE.jar:3.0.3.RELEASE]
at org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsBindableProxyFactory.bindInput(KafkaStreamsBindableProxyFactory.java:191) ~[spring-cloud-stream-binder-kafka-streams-3.0.3.RELEASE.jar:3.0.3.RELEASE]
at org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsBindableProxyFactory.afterPropertiesSet(KafkaStreamsBindableProxyFactory.java:103) ~[spring-cloud-stream-binder-kafka-streams-3.0.3.RELEASE.jar:3.0.3.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1855) ~[spring-beans-5.2.5.RELEASE.jar:5.2.5.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1792) ~[spring-beans-5.2.5.RELEASE.jar:5.2.5.RELEASE]
I found the solution by reading a unit test in Spring Cloud Stream that shows how to use a store.
The code below is how I applied that solution to my code.
The transformer uses the store provided by the Spring bean method "myStore".
@SpringBootApplication
public class KafkaStreamTestApplication {
public static final String MY_STORE_NAME = "my-store";
public static void main(String[] args) {
SpringApplication.run(KafkaStreamTestApplication.class, args);
}
@Bean
public Function<KStream<String, String>, KStream<String, String>> process2(){
return (input) -> input.
transformValues(() -> new MyValueTransformer(), MY_STORE_NAME);
}
@Bean
public StoreBuilder<?> myStore() {
return Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore(MY_STORE_NAME), Serdes.String(),
Serdes.String());
}
}
public class MyValueTransformer implements ValueTransformer<String, String> {
private KeyValueStore<String,String> store;
private ProcessorContext context;
@Override
public void init(ProcessorContext context) {
this.context = context;
store = (KeyValueStore<String, String>) this.context.getStateStore(KafkaStreamTestApplication.MY_STORE_NAME);
}
@Override
public String transform(String value) {
String tValue = store.get(value);
if(tValue==null) {
store.put(value, value.toUpperCase());
}
return tValue;
}
@Override
public void close() {
if(store!=null) {
store.close();
}
}
}
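For completeness, the function still needs its bindings configured. A minimal sketch of the properties, assuming Spring Cloud Stream 3.0.x; the destination names are placeholders:
spring.cloud.stream.function.definition=process2
spring.cloud.stream.bindings.process2-in-0.destination=words-in
spring.cloud.stream.bindings.process2-out-0.destination=words-out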
Using Spring MVC, I have the following setup:
An AbstractRequestLoggingFilter derived filter for logging requests.
A TaskDecorator to marshal the MDC context mapping from the web request thread to the @Async thread.
I'm attempting to collect context info using MDC (or a ThreadLocal object) for all components involved in handling the request.
I can correctly retrieve the MDC context info from the @Async thread. However, if the @Async thread were to add context info to the MDC, how can I now marshal the MDC context info to the thread that handles the response?
TaskDecorator
public class MdcTaskDecorator implements TaskDecorator {
@Override
public Runnable decorate(Runnable runnable) {
// Web thread context
// Get the logging MDC context
Map<String, String> contextMap = MDC.getCopyOfContextMap();
return () -> {
try {
// #Async thread context
// Restore the web thread MDC context
if(contextMap != null) {
MDC.setContextMap(contextMap);
}
else {
MDC.clear();
}
// Run the new thread
runnable.run();
}
finally {
MDC.clear();
}
};
}
}
Async method
@Async
public CompletableFuture<String> doSomething_Async() {
MDC.put("doSomething", "started");
return doit();
}
Logging Filter
public class ServletLoggingFilter extends AbstractRequestLoggingFilter {
@Override
protected void beforeRequest(HttpServletRequest request, String message) {
MDC.put("webthread", Thread.currentThread().getName()); // Will be webthread-1
}
@Override
protected void afterRequest(HttpServletRequest request, String message) {
MDC.put("responsethread", Thread.currentThread().getName()); // Will be webthread-2
String s = MDC.get("doSomething"); // Will be null
// logthis();
}
}
I hope you have solved the problem by now, but if not, here is a solution.
All you have to do can be summarized in two simple steps:
Keep your MdcTaskDecorator class.
Extend AsyncConfigurerSupport in your main class and override getAsyncExecutor() to set your customized decorator, as follows:
@SpringBootApplication
@EnableAsync // required for @Async methods to be intercepted
public class AsyncTaskDecoratorApplication extends AsyncConfigurerSupport {
@Override
public Executor getAsyncExecutor() {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setTaskDecorator(new MdcTaskDecorator());
executor.initialize();
return executor;
}
public static void main(String[] args) {
SpringApplication.run(AsyncTaskDecoratorApplication.class, args);
}
}
Create a bean that will pass the MDC properties from the parent thread to the successor thread.
@Configuration
@Slf4j
public class AsyncMDCConfiguration {
@Bean
public Executor asyncExecutor() {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setTaskDecorator(new MDCTaskDecorator()); // MDCTaskDecorator is a custom class that implements TaskDecorator and is responsible for passing on the MDC properties
executor.initialize();
return executor;
}
}
@Slf4j
public class MDCTaskDecorator implements TaskDecorator {
@Override
public Runnable decorate(Runnable runnable) {
Map<String, String> contextMap = MDC.getCopyOfContextMap();
return () -> {
try {
MDC.setContextMap(contextMap);
runnable.run();
} finally {
MDC.clear();
}
};
}
}
All good now. Happy coding!
I have some solutions, roughly divided into Callable (for @Async), AsyncExecutionInterceptor (for @Async), and CallableProcessingInterceptor (for controllers; a sketch of that one is at the end of this answer).
1. The Callable solution for putting context info onto the @Async thread:
The key is using the ContextAwarePoolExecutor to replace the default executor of @Async:
#Configuration
public class DemoExecutorConfig {
#Bean("demoExecutor")
public Executor contextAwarePoolExecutor() {
return new ContextAwarePoolExecutor();
}
}
And the ContextAwarePoolExecutor overrides the submit and submitListenable methods, wrapping each task in a ContextAwareCallable:
public class ContextAwarePoolExecutor extends ThreadPoolTaskExecutor {
private static final long serialVersionUID = 667815067287186086L;
@Override
public <T> Future<T> submit(Callable<T> task) {
return super.submit(new ContextAwareCallable<T>(task, newThreadContextContainer()));
}
@Override
public <T> ListenableFuture<T> submitListenable(Callable<T> task) {
return super.submitListenable(new ContextAwareCallable<T>(task, newThreadContextContainer()));
}
/**
* capture the info we need from the current (web) thread
*/
private ThreadContextContainer newThreadContextContainer() {
ThreadContextContainer container = new ThreadContextContainer();
container.setRequestAttributes(RequestContextHolder.currentRequestAttributes());
container.setContextMapOfMDC(MDC.getCopyOfContextMap());
return container;
}
}
The ThreadContextContainer is just a POJO that stores this info for convenience:
public class ThreadContextContainer implements Serializable {
private static final long serialVersionUID = -6809291915300091330L;
private RequestAttributes requestAttributes;
private Map<String, String> contextMapOfMDC;
public RequestAttributes getRequestAttributes() {
return requestAttributes;
}
public Map<String, String> getContextMapOfMDC() {
return contextMapOfMDC;
}
public void setRequestAttributes(RequestAttributes requestAttributes) {
this.requestAttributes = requestAttributes;
}
public void setContextMapOfMDC(Map<String, String> contextMapOfMDC) {
this.contextMapOfMDC = contextMapOfMDC;
}
}
The ContextAwareCallable (a Callable proxy for the original task) overrides the call method to restore the MDC and other context info before the original task executes its own call method:
public class ContextAwareCallable<T> implements Callable<T> {
/**
* the original task
*/
private Callable<T> task;
/**
* for storing infos what we need
*/
private ThreadContextContainer threadContextContainer;
public ContextAwareCallable(Callable<T> task, ThreadContextContainer threadContextContainer) {
this.task = task;
this.threadContextContainer = threadContextContainer;
}
@Override
public T call() throws Exception {
// set infos
if (threadContextContainer != null) {
RequestAttributes requestAttributes = threadContextContainer.getRequestAttributes();
if (requestAttributes != null) {
RequestContextHolder.setRequestAttributes(requestAttributes);
}
Map<String, String> contextMapOfMDC = threadContextContainer.getContextMapOfMDC();
if (contextMapOfMDC != null) {
MDC.setContextMap(contextMapOfMDC);
}
}
try {
// execute the original task
return task.call();
} finally {
// clear infos after task completed
RequestContextHolder.resetRequestAttributes();
MDC.clear();
}
}
}
In the end, use @Async with the configured bean "demoExecutor", like this:
@Async("demoExecutor")
void yourTaskMethod();
2. In regard to your question about handling the response:
I regret to say I don't have a verified solution; maybe org.springframework.aop.interceptor.AsyncExecutionInterceptor#invoke could solve it.
I also don't think the response can be handled from your ServletLoggingFilter, because the @Async method returns instantly: the afterRequest method executes immediately, before the @Async method has done its work. You won't get what you want unless you wait synchronously for the @Async method to finish.
But if you just want to log something, you can add this code to my example ContextAwareCallable, after the original task executes its call method:
try {
// execute the original task
return task.call();
} finally {
String something = MDC.get("doSomething"); // will not be null
// logthis(something);
// clear infos after task completed
RequestContextHolder.resetRequestAttributes();
MDC.clear();
}
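For the CallableProcessingInterceptor (controller) case listed at the start, a minimal, unverified sketch, assuming Spring 5's CallableProcessingInterceptor and AsyncSupportConfigurer APIs (the class names and attribute key are ours; Spring 5's default methods cover the remaining callbacks):
import java.util.Map;
import java.util.concurrent.Callable;
import org.slf4j.MDC;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.context.request.NativeWebRequest;
import org.springframework.web.context.request.RequestAttributes;
import org.springframework.web.context.request.async.CallableProcessingInterceptor;
import org.springframework.web.servlet.config.annotation.AsyncSupportConfigurer;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

public class MdcCallableProcessingInterceptor implements CallableProcessingInterceptor {
    private static final String MDC_KEY = "MDC_CONTEXT"; // our own attribute key

    @Override
    public <T> void beforeConcurrentHandling(NativeWebRequest request, Callable<T> task) {
        // capture the MDC on the original request thread, scoped to this request
        request.setAttribute(MDC_KEY, MDC.getCopyOfContextMap(), RequestAttributes.SCOPE_REQUEST);
    }

    @Override
    public <T> void preProcess(NativeWebRequest request, Callable<T> task) {
        // restore the MDC on the async worker thread
        @SuppressWarnings("unchecked")
        Map<String, String> contextMap =
                (Map<String, String>) request.getAttribute(MDC_KEY, RequestAttributes.SCOPE_REQUEST);
        if (contextMap != null) {
            MDC.setContextMap(contextMap);
        }
    }

    @Override
    public <T> void postProcess(NativeWebRequest request, Callable<T> task, Object concurrentResult) {
        MDC.clear();
    }
}

@Configuration
class AsyncWebConfig implements WebMvcConfigurer {
    @Override
    public void configureAsyncSupport(AsyncSupportConfigurer configurer) {
        configurer.registerCallableInterceptors(new MdcCallableProcessingInterceptor());
    }
}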
I have a simple resource like:
#Path("/")
public class RootResource {
#Context WebConfig wc;
#PostConstruct
public void init() {
assertNotNull(wc);
}
#GET
public void String method() {
return "Hello\n";
}
}
Which I am trying to use with JerseyTest (2.x, not 1.x) and the GrizzlyTestContainerFactory.
I can't work out what I need to do in terms of config to get the WebConfig object injected.
I solved this issue by creating a subclass of GrizzlyTestContainerFactory and explicitly loading the Jersey servlet. This triggers the injection of the WebConfig object. The code looks like this:
public class ExtendedGrizzlyTestContainerFactory implements TestContainerFactory {
private static class GrizzlyTestContainer implements TestContainer {
private final URI uri;
private final ApplicationHandler appHandler;
private HttpServer server;
private static final Logger LOGGER = Logger.getLogger(GrizzlyTestContainer.class.getName());
private GrizzlyTestContainer(URI uri, ApplicationHandler appHandler) {
this.appHandler = appHandler;
this.uri = uri;
}
@Override
public ClientConfig getClientConfig() {
return null;
}
@Override
public URI getBaseUri() {
return uri;
}
@Override
public void start() {
if (LOGGER.isLoggable(Level.INFO)) {
LOGGER.log(Level.INFO, "Starting GrizzlyTestContainer...");
}
try {
this.server = GrizzlyHttpServerFactory.createHttpServer(uri, appHandler);
// Initialize and register Jersey Servlet
WebappContext context = new WebappContext("WebappContext", "");
ServletRegistration registration = context.addServlet("ServletContainer", ServletContainer.class);
registration.setInitParameter("javax.ws.rs.Application",
appHandler.getConfiguration().getApplication().getClass().getName());
// Add an init parameter - this could be loaded from a parameter in the constructor
registration.setInitParameter("myparam", "myvalue");
registration.addMapping("/*");
context.deploy(server);
} catch (ProcessingException e) {
throw new TestContainerException(e);
}
}
@Override
public void stop() {
if (LOGGER.isLoggable(Level.INFO)) {
LOGGER.log(Level.INFO, "Stopping GrizzlyTestContainer...");
}
this.server.stop();
}
}
@Override
public TestContainer create(URI baseUri, ApplicationHandler application) throws IllegalArgumentException {
return new GrizzlyTestContainer(baseUri, application);
}
}
Notice that the Jersey Servlet configuration is loaded from the ApplicationHandler that is passed in as a parameter, using the inner Application object's class name (ResourceConfig is a subclass of Application). Therefore, you also need to create a subclass of ResourceConfig for this approach to work. The code for this is very simple:
package com.example;
import org.glassfish.jersey.server.ResourceConfig;
public class MyResourceConfig extends ResourceConfig {
public MyResourceConfig() {
super(MyResource.class);
}
}
This assumes the resource you are testing is MyResource. You also need to override a couple of methods in your test like this:
public class MyResourceTest extends JerseyTest {
public MyResourceTest() throws TestContainerException {
}
@Override
protected Application configure() {
return new MyResourceConfig();
}
@Override
protected TestContainerFactory getTestContainerFactory() throws TestContainerException {
return new ExtendedGrizzlyTestContainerFactory();
}
@Test
public void testCreateSimpleBean() {
final String beanList = target("test").request().get(String.class);
Assert.assertNotNull(beanList);
}
}
Finally, for completeness, here is the code for MyResource:
#Path("test")
public class MyResource {
@Context WebConfig wc;
@PostConstruct
public void init() {
System.out.println("WebConfig: " + wc);
String url = wc.getInitParameter("myparam");
System.out.println("myparam = "+url);
}
@GET
@Produces(MediaType.APPLICATION_JSON)
public Collection<TestBean> createSimpleBean() {
Collection<TestBean> res = new ArrayList<TestBean>();
res.add(new TestBean("a", 1, 1L));
res.add(new TestBean("b", 2, 2L));
return res;
}
@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public TestBean roundTrip(TestBean s) {
return s;
}
}
The output of running the test shows that the WebConfig is loaded and the init param is now available:
WebConfig: org.glassfish.jersey.servlet.WebServletConfig@107d0f44
myparam = myvalue
The solution from @ametke worked well but wasn't picking up my ExceptionMapper classes. To solve this, I simplified the start() method to:
@Override
public void start() {
try {
// initParams is assumed to be a Map<String, String> field on this test container
initParams.put("jersey.config.server.provider.packages", "my.resources;my.config");
this.server = GrizzlyWebContainerFactory.create(uri, initParams);
} catch (ProcessingException | IOException e) {
throw new TestContainerException(e);
}
}
This was based on Problems running JerseyTest when dealing with HttpServletResponse
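Package scanning like this registers any @Provider-annotated classes in the listed packages. For illustration, a hypothetical mapper that would now be picked up (the class name and mapped exception are assumptions):
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

@Provider
public class IllegalArgumentExceptionMapper implements ExceptionMapper<IllegalArgumentException> {
    @Override
    public Response toResponse(IllegalArgumentException e) {
        // map the exception to a 400 instead of the default 500
        return Response.status(Response.Status.BAD_REQUEST)
                .entity(e.getMessage())
                .build();
    }
}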