How can I pass MessageGroupId for FIFO SNS - amazon-sns

I've tried the following code:
private final NotificationMessagingTemplate notificationMessagingTemplate;

public void send(final T payload, final Object groupId) {
    final ImmutableMap<String, Object> headers = ImmutableMap.of(
            "message-group-id", groupId.toString(),
            "message-deduplication-id", UUID.randomUUID().toString());
    notificationMessagingTemplate.convertAndSend(topicName, payload, headers);
}
Passing those headers works fine with SQS, but with SNS it does not work and fails with this error:
Caused by: com.amazonaws.services.sns.model.InvalidParameterException: Invalid parameter: The MessageGroupId parameter is required for FIFO topics (Service: AmazonSNS; Status Code: 400; Error Code: InvalidParameter; Request ID: 1aa83814-abc8-56e9-ae15-619723438fe9; Proxy: null)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1819) ~[aws-java-sdk-core-1.11.933.jar:na]
Do I have to change the headers, or is there another way around this?

I found a workaround myself by using the AWS SDK directly, without Spring. Because FIFO for SNS is new, Spring has not implemented a solution for this problem yet, and I could not find a way to pass these parameters to the topic through Spring. Here is the link that helped me solve it: https://docs.aws.amazon.com/sns/latest/dg/fifo-topic-code-examples.html
And here is how my method ended up:
private final String topicArn;
private final AmazonSNS amazonSNS;
private final ObjectMapper objectMapper;

public void send(final T payload, final Object groupId) {
    try {
        amazonSNS.publish(new PublishRequest()
                .withTopicArn(topicArn)
                .withMessageDeduplicationId(UUID.randomUUID().toString())
                .withMessage(objectMapper.writeValueAsString(payload))
                .withMessageGroupId(groupId.toString()));
    } catch (final IOException e) {
        throw new RuntimeException(e);
    }
}
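With this in place, every message published with the same group id is delivered in order within that group. A usage sketch (the sender field, event classes, and group id are hypothetical, not from the original code):

// Both events share a group id, so SNS FIFO preserves their relative order
sender.send(new OrderShippedEvent("order-123"), "customer-42");
sender.send(new OrderDeliveredEvent("order-123"), "customer-42");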

Related

How to test a BadRequest exception on a POST request

I made a test that checks whether a POST method from my controller does what it's supposed to. It worked great! Now I'm supposed to make a test to see whether the right message pops up when I get a 400 error for that POST method.
Here's what I've got:
@Test
public void shouldReturnBadRequestExceptionWhenGivenBadArguments() throws Exception {
    ObjectMapper objectMapper = new ObjectMapper();
    String json = objectMapper.writeValueAsString(user);
    mvc.perform(post("/users")
            .contentType(MediaType.APPLICATION_JSON)
            .content(json))
            .andExpect(status().isBadRequest())
            .andExpect(result -> assertEquals("Email is already taken!",
                    Objects.requireNonNull(result.getResolvedException()).getMessage()));
    // .andExpect(result -> assertTrue(result.getResolvedException() instanceof BadRequestException));
}
The test fails with: Status expected:<400> but was:<200>.
Now I do understand that this just means I didn't get an error and the POST method simply worked. What I don't know is how to trigger that error on purpose. Does anyone know how to do this?
EDIT: I was told to post the controller endpoint, so here it is:
@PostMapping
public UserDTO addUser(@RequestBody CreateUserDTO newUser) {
    log.info(newUser.toString());
    return userService.createUser(newUser);
}
and my user was created in my setUp() (with the @BeforeEach annotation) as such:
@BeforeEach
public void setUp() throws Exception {
    user = new User(
            "ime",
            "prezime",
            "imeprezime@consulteer.com",
            "1234");
}
Hope this helps!
EDIT 2: added the part of the service class concerning the POST method:
@Override
public UserDTO createUser(CreateUserDTO newUser) {
    if (userRepository.findByEmail(newUser.getEmail()).isPresent())
        throw new BadRequestException("Email is already taken!");
    return userMapper.convertEntityToDTO(userRepository.save(userMapper.convertCreateDTOToEntity(newUser)));
}
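One way to trigger the 400 on purpose is to make the duplicate-email branch fire: stub the repository so findByEmail reports the email as already taken. A minimal sketch, assuming the test class can replace UserRepository with a @MockBean and uses the usual static imports from Mockito and MockMvc (the stubbing is an assumption, not part of the original test):

@MockBean
private UserRepository userRepository; // replaces the real repository in the test context

@Test
public void shouldReturnBadRequestExceptionWhenGivenBadArguments() throws Exception {
    // Pretend the email is taken so createUser() throws BadRequestException
    when(userRepository.findByEmail(anyString())).thenReturn(Optional.of(user));
    String json = new ObjectMapper().writeValueAsString(user);
    mvc.perform(post("/users")
            .contentType(MediaType.APPLICATION_JSON)
            .content(json))
            .andExpect(status().isBadRequest())
            .andExpect(result -> assertEquals("Email is already taken!",
                    Objects.requireNonNull(result.getResolvedException()).getMessage()));
}

Note the status is only 400 if BadRequestException is annotated with @ResponseStatus(HttpStatus.BAD_REQUEST) or mapped by a @ControllerAdvice; otherwise MockMvc surfaces it as a server error.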

Unable to throw exception from KafkaTemplate when topic is not available / Kafka broker is down

I am sending a message to Kafka with KafkaTemplate, and I wanted to test the failure case, so I provided a wrong topic name. But when I run the code it only logs "Error while fetching metadata with correlation id 2 : {ocf-oots-gr-outbound_123=LEADER_NOT_AVAILABLE}" and the topic itself then gets created in Kafka (I can see it through Kafka Tool). When the broker is stopped, it also does not throw an exception.
Code:
KafkaTemplate<String, Object> kafkaTemplate = (KafkaTemplate<String, Object>)
        CommonAppContextProvider.getApplicationContext().getBean("kafkaTemplate");
ListenableFuture<SendResult<String, Object>> listenableFuture = kafkaTemplate.send(
        CommonAppContextProvider.getApplicationContext().getEnvironment()
                .getProperty("kafka.transalators.outbound.topic"), kafkaMessageFormat);
listenableFuture.addCallback(new ListenableFutureCallback<SendResult<?, ?>>() {
    @Override
    public void onSuccess(SendResult<?, ?> result) {
        System.out.println("Sent");
    }

    @Override
    public void onFailure(Throwable ex) {
        throw new KafkaException();
    }
});
I expected it to throw an exception, maybe KafkaException, TimeoutException, InterruptedException, etc.
You have to call get(time, timeUnit) on the future to get the result (success or otherwise).
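A sketch of what that looks like (the topic variable and the timeout value are illustrative; KafkaException here is Spring's org.springframework.kafka.KafkaException):

try {
    // Blocking on the future surfaces send failures as exceptions
    SendResult<String, Object> result = kafkaTemplate.send(topic, kafkaMessageFormat)
            .get(30, TimeUnit.SECONDS);
    System.out.println("Sent " + result.getRecordMetadata());
} catch (ExecutionException e) {
    throw new KafkaException("Send failed", e.getCause()); // broker rejected the record
} catch (TimeoutException e) {
    throw new KafkaException("Send timed out", e); // e.g. broker down, metadata never arrived
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new KafkaException("Send interrupted", e);
}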

How to use spring-kafka for sending a message again

We are using spring-kafka 1.2.2.RELEASE.
What we want
1. As soon as a message is consumed and processed successfully, the offset is committed in spring-kafka. I am using manual commit/acknowledgement for this, and it is working fine.
2. In case of any exception we want spring-kafka to resend the same message. We throw a RuntimeException on any system error, which spring-kafka logs and never commits.
This is fine, as we don't want it to commit, but that message stays in spring-kafka and never comes back unless we restart the service. On restart the message comes back and executes once again, and then stays in spring-kafka again.
What we tried
1. I have tried both ErrorHandler and RetryingMessageListenerAdapter, but in both cases we have to code in the service how to process the message again.
This is my consumer
public class MyConsumer {

    @KafkaListener
    public void receive(...) {
        // application logic to return success/failure
        if (success) {
            acknowledgement.acknowledge();
        } else {
            throw new RuntimeException();
        }
    }
}
Also I have following configurations for container factory
factory.getContainerProperties().setErrorHandler(new ErrorHandler(){
#Override
public void handle(...){
throw new RunTimeException("");
}
});
While executing the flow, control first comes to receive and then to the handle method. After that the service waits for a new message. However, I expected that since we threw an exception and the message was not committed, the same message would land in receive again.
Is there any way we can tell spring-kafka "do not commit this message and send it again asap"?
1.2.x is no longer supported; 1.x users are recommended to upgrade to at least 1.3.x (currently 1.3.8) because of its much simpler threading model, thanks to KIP-62.
The current version is 2.2.2.
2.0.1 introduced the SeekToCurrentErrorHandler which re-seeks the failed record so that it is redelivered.
With earlier versions, you had to stop and restart the container to redeliver a failed message, or add retry to the listener adapter.
I suggest you upgrade to the newest possible release.
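With 2.x the container factory configuration becomes roughly the following (a sketch, assuming spring-kafka 2.0.1 or later; the bean and type names mirror the question's config):

@Bean
public ConcurrentKafkaListenerContainerFactory<String, MyModel> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, MyModel> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
    factory.getContainerProperties().setAckMode(AckMode.MANUAL);
    // Re-seeks the failed record so the listener receives it again on the next poll
    factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}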
Unfortunately, the version available for us to use is 1.3.7.RELEASE.
I have tried implementing the ConsumerSeekAware interface. Below is how I am doing it, and I can see the message being delivered repeatedly.
Consumer
public class MyConsumer implements ConsumerSeekAware {

    private ConsumerSeekCallback consumerSeekCallback;

    @KafkaListener
    public void receive(...) {
        // application logic; on failure, re-seek to the same offset so the record is redelivered
        if (condition) {
            acknowledgement.acknowledge();
        } else {
            consumerSeekCallback.seek((String) headers.get("kafka_receivedTopic"),
                    (int) headers.get("kafka_receivedPartitionId"),
                    (long) headers.get("kafka_offset"));
        }
    }

    @Override
    public void registerSeekCallback(ConsumerSeekCallback consumerSeekCallback) {
        this.consumerSeekCallback = consumerSeekCallback;
    }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
        LOGGER.debug("onIdleContainer called");
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
        LOGGER.debug("onPartitionsAssigned called");
    }
}
Config
public class MyConsumerConfig {

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // Set server, deserializer, group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        return props;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, MyModel> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, MyModel> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        return factory;
    }

    @Bean
    public MyConsumer receiver() {
        return new MyConsumer();
    }
}

Kafka consumer turn on/off at runtime, process messages in series

My Kafka listener should process messages in sequential order; the onMessage method should process messages synchronously. I don't want my listener to process multiple messages at the same time. The onMessage method first stops the
org.springframework.kafka.listener.MessageListenerContainer
then delegates the payload to a synchronized method and, after processing completes, starts the listener back up. Other options, of course, are to use a blocking queue, an executor service, etc. I need advice on a better strategy to achieve this. Does the Kafka consumer have any built-in feature to process messages in series?
Here is my code. I changed the implementation to this:
public static class KafkaReadMsgTask implements Runnable {

    @Override
    public void run() {
        KakfaMsgConumerImpl kakfaMsgConumerImpl = null;
        try {
            kakfaMsgConumerImpl = SpContext.getBean(KakfaMsgConumerImpl.class);
            kakfaMsgConumerImpl.pollFormDef();
            kakfaMsgConumerImpl.pollFormData();
        } catch (Exception e) {
            logger.error(" kafka listener errors " + e);
            kakfaMsgConumerImpl.pauseTask();
        }
    }
}
@Component
public static class KakfaMsgConumerImpl {

    @Autowired
    ObjectMapper mapper;

    @Autowired
    FormSink formSink;

    @Autowired
    Environment env;

    @Resource(name = "formDefConsumer")
    Consumer<Long, String> formDefConsumer;

    @Resource(name = "formDataConsumer")
    Consumer<Long, String> formDataConsumer;

    ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    public void startPolling() throws Exception {
        executor.scheduleAtFixedRate(new KafkaReadMsgTask(), 10, 3, TimeUnit.SECONDS);
    }

    public void pauseTask() {
        try {
            Thread.sleep(120000L);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void pollFormDef() throws Exception {
        ConsumerRecords<Long, String> records = formDefConsumer.poll(0);
        if (!records.isEmpty()) {
            int recordsCount = records.count();
            if (logger.isDebugEnabled()) {
                logger.debug(" form-def consumer poll records size " + recordsCount);
            }
            if (records.count() > 1) {
                logger.warn(" form-def consumer poll returned records more than 1 , expected 1 , received " + recordsCount);
            }
            ConsumerRecord<Long, String> record = records.iterator().next();
            processFormDef(record.key(), record.value());
        }
    }

    void pollFormData() throws Exception {
        ConsumerRecords<Long, String> records = formDataConsumer.poll(0);
        if (!records.isEmpty()) {
            int recordsCount = records.count();
            if (logger.isDebugEnabled()) {
                logger.debug(" form-data consumer poll records size " + recordsCount);
            }
            if (records.count() > 1) {
                logger.warn(" form-data consumer poll returned records more than 1 , expected 1 , received " + recordsCount);
            }
            ConsumerRecord<Long, String> record = records.iterator().next();
            processFormData(record.key(), record.value());
        }
    }

    void processFormDef(Long key, String msg) throws Exception {
        if (logger.isDebugEnabled()) {
            logger.debug(" key " + key + " payload : " + msg);
        }
        FormDefinition formDefinition = mapper.readValue(msg, FormDefinition.class);
        formSink.createFromDef(formDefinition);
        logger.debug(" processed message, key: " + key + " msg : " + msg);
        Thread.sleep(60000L);
    }

    void processFormData(Long key, String msg) throws Exception {
        if (logger.isDebugEnabled()) {
            logger.debug(" key " + key + " payload : " + msg);
        }
        FormData formData = mapper.readValue(msg, FormData.class);
        formSink.persists(formData);
        logger.debug(" processed message, key: " + key + " msg : " + msg);
        Thread.sleep(60000L);
    }
}
Using a message-driven listener container is not the right technology for this application; it looks like you want to consume messages alternately from two different topics.
Furthermore, stopping the container on the consumer thread won't take effect anyway, until the thread exits the method, at which time the consumer will be closed.
I would suggest you use the consumer factory to create two consumers; subscribe to the topics, set the max.poll.records on each to 1 and call the poll() method on each alternately.
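A rough sketch of that suggestion (topic names, group id, and bootstrap address are illustrative, not taken from the question):

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "form-processor");
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1); // at most one record per poll
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

Consumer<Long, String> formDefConsumer = new KafkaConsumer<>(props);
Consumer<Long, String> formDataConsumer = new KafkaConsumer<>(props);
formDefConsumer.subscribe(Collections.singletonList("form-def"));
formDataConsumer.subscribe(Collections.singletonList("form-data"));

while (running) {
    // Alternate between the two topics, handling at most one record from each per pass
    for (ConsumerRecord<Long, String> record : formDefConsumer.poll(1000)) {
        processFormDef(record.key(), record.value());
        formDefConsumer.commitSync();
    }
    for (ConsumerRecord<Long, String> record : formDataConsumer.poll(1000)) {
        processFormData(record.key(), record.value());
        formDataConsumer.commitSync();
    }
}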

SoapFault handling with Spring WS client - WebServiceGatewaySupport and WebServiceTemplate

I am trying to write a Spring WS client using WebServiceGatewaySupport. I managed to test the client for a successful request and response. Now I want to write test cases for SOAP faults.
public class MyClient extends WebServiceGatewaySupport {

    public ServiceResponse method(ServiceRequest serviceRequest) {
        return (ServiceResponse) getWebServiceTemplate().marshalSendAndReceive(serviceRequest);
    }
}

@ActiveProfiles("test")
@RunWith(SpringRunner.class)
@SpringBootTest(classes = SpringTestConfig.class)
@DirtiesContext
public class MyClientTest {

    @Autowired
    private MyClient myClient;

    private MockWebServiceServer mockServer;

    @Before
    public void createServer() throws Exception {
        mockServer = MockWebServiceServer.createServer(myClient);
    }
}
My question is: how do I stub the SOAP fault response in the mock server, so that my custom FaultMessageResolver will be able to unmarshal the SOAP fault?
I tried a couple of things below, but nothing worked.
// responsePayload being a SoapFault wrapped in a SoapEnvelope
mockServer.expect(payload(requestPayload))
        .andRespond(withSoapEnvelope(responsePayload));

// tried to build an error message
mockServer.expect(payload(requestPayload))
        .andRespond(withError("soap fault string"));

// tried with an Exception
mockServer.expect(payload(requestPayload))
        .andRespond(withException(new RuntimeException()));
Any help is appreciated. Thanks!
Follow Up:
OK, so with withSoapEnvelope(payload) I managed to get the client to route the fault to my custom MySoapFaultMessageResolver.
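The envelope I stubbed looked roughly like this (a sketch; the fault code, fault string, and the CustomerAlreadyExistsFault detail element stand in for the service's real fault):

Source responsePayload = new StringSource(
        "<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                + "<SOAP-ENV:Body><SOAP-ENV:Fault>"
                + "<faultcode>SOAP-ENV:Client</faultcode>"
                + "<faultstring>Customer already exists</faultstring>"
                + "<detail><ns:CustomerAlreadyExistsFault xmlns:ns=\"http://company.com/project/schema\"/></detail>"
                + "</SOAP-ENV:Fault></SOAP-ENV:Body></SOAP-ENV:Envelope>");
mockServer.expect(payload(requestPayload))
        .andRespond(withSoapEnvelope(responsePayload));

And the resolver: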
public class MyCustomSoapFaultMessageResolver implements FaultMessageResolver {

    private Jaxb2Marshaller jaxb2Marshaller;

    @Override
    public void resolveFault(WebServiceMessage message) throws IOException {
        if (message instanceof SoapMessage) {
            SoapMessage soapMessage = (SoapMessage) message;
            SoapFaultDetailElement soapFaultDetailElement = (SoapFaultDetailElement) soapMessage.getSoapBody()
                    .getFault()
                    .getFaultDetail()
                    .getDetailEntries()
                    .next();
            Source source = soapFaultDetailElement.getSource();
            jaxb2Marshaller = new Jaxb2Marshaller();
            jaxb2Marshaller.setContextPath("com.company.project.schema");
            Object object = jaxb2Marshaller.unmarshal(source);
            if (object instanceof CustomerAlreadyExistsFault) {
                throw new CustomerAlreadyExistsException(soapMessage);
            }
        }
    }
}
But seriously!!! I had to unmarshal every message and check its type. As the client I would have to know every possible fault of the service, create custom runtime exceptions, and throw them from the resolver. And still, in the end, it gets caught in WebServiceTemplate and rethrown as just a runtime exception.
You could try with something like this:
@Test
public void yourTestMethod() { // with no throws clause here
    Source requestPayload = new StringSource("<your request>");
    String errorMessage = "Your error message from WS";
    mockWebServiceServer
            .expect(payload(requestPayload))
            .andRespond(withError(errorMessage));

    YourRequestClass request = new YourRequestClass();
    // TODO: set request properties...

    try {
        yourClient.callMethod(request);
    } catch (Exception e) {
        assertThat(e.getMessage()).isEqualTo(errorMessage);
    }

    mockWebServiceServer.verify();
}
In this code, mockWebServiceServer is an instance of the MockWebServiceServer class.
