@RetryableTopic showing weird behaviour when used with topicPartitions to reset offset - spring-kafka

I am trying to use @RetryableTopic for non-blocking retries together with topicPartitions in order to read messages from the beginning.
Below is my listener (I have only one partition):
@Slf4j
@Component
public class SingleTopicRetryConsumer {

    @RetryableTopic(
            attempts = "4",
            backoff = @Backoff(delay = 1000),
            fixedDelayTopicStrategy = FixedDelayStrategy.SINGLE_TOPIC)
    @KafkaListener(topicPartitions = {@TopicPartition(topic = "products",
            partitionOffsets = @PartitionOffset(partition = "0", initialOffset = "0"))})
    public void listen(ConsumerRecord<String, String> message, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        log.info("message consumed - \nkey: {} , \nvalue: {}, \ntopic: {}, \nat: {}",
                message.key(),
                message.value(),
                message.topic(),
                LocalDateTime.now());
    }

    @DltHandler
    public void dltListener(ConsumerRecord<String, String> message, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        log.info("message consumed at DLT - \nkey: {} , \nvalue: {}, \ntopic: {}, \nat: {}",
                message.key(),
                message.value(),
                message.topic(),
                LocalDateTime.now());
    }
}
Config properties:
spring:
  kafka:
    consumer:
      bootstrap-servers: localhost:9092
      group-id: group_id
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      bootstrap-servers: localhost:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
The above code starts to exhibit weird behaviour: it consumes the same message twice in the main listener and once in the DLT handler, but always from the main topic only.
logs:
15:10:50.950 [org.springframework.kafka.KafkaListenerEndpointContainer#8-0-C-1] INFO c.m.s.c.n.SingleTopicRetryConsumer
- message consumed -
key: product1 ,
value: This is Product1,
topic: products,
at: 2022-04-07T15:10:50.950810
15:10:50.950 [org.springframework.kafka.KafkaListenerEndpointContainer#9-retry-0-C-1] INFO c.m.s.c.n.SingleTopicRetryConsumer -
message consumed -
key: product1 ,
value: This is Product1,
topic: products,
at: 2022-04-07T15:10:50.950810
15:10:50.950 [org.springframework.kafka.KafkaListenerEndpointContainer#10-dlt-0-C-1] INFO c.m.s.c.n.SingleTopicRetryConsumer -
message consumed at DLT -
key: product1 ,
value: This is Product1,
topic: products,
at: 2022-04-07T15:10:50.950810
If I use the above code without topicPartitions, by removing the line below, the listener works as expected.
partitionOffsets = @PartitionOffset(partition = "0", initialOffset = "0"))}
Any clues as to why this might be happening?

UPDATE: This bug has been fixed in Spring for Apache Kafka 2.8.5.
That's a bug. The problem is that we set the retry topic name on the topics property of the endpoint instead of setting it on the topicPartitions, so we end up with two listeners for the main endpoint and none for the retry topic.
If you can, please open an issue: https://github.com/spring-projects/spring-kafka/issues
Not sure there's a workaround for this using topic partitions - it should be fixed in the 2.8.5 release in a couple of weeks.
Thanks for reporting.

Related

Amplify JS API GraphQL Elasticsearch throws "ResolverExecutionLimitReached" error

I've implemented the Amplify JS Library in a Vue project and have had success with all of the features of the library, except for this issue. When I query a model with Elasticsearch, it returns the appropriate results, but also the error "ResolverExecutionLimitReached".
This is the request:
let destinations = await API.graphql(graphqlOperation(queries.searchDestinations, {filter: { deviceId: { eq: params.id }}}))
This is the schema:
type Destination
  @model
  @searchable
  @auth(rules: [{ allow: public }, { allow: private }])
  @key(name: "byXpoint", fields: ["xpoint"])
  @key(name: "byDevice", fields: ["deviceId"])
{
  id: ID!
  index: Int!
  levels: [String]
  name: String!
  xpoint: String
  sourceId: ID
  Source: Source @connection
  lock: Boolean
  breakaway: Boolean
  breakaways: String
  probeId: ID!
  probe: Probe @connection(fields: ["probeId"])
  deviceId: ID!
  device: Device @connection(fields: ["deviceId"])
  orgId: ID!
  org: Org @connection(fields: ["orgId"])
}
And this returns:
{
  data: {
    searchDestinations: { items: Array(100), nextToken: "ba1dc119-2266-4567-9b83-f7eee4961e63", total: 384 }
  },
  errors: [
    {
      data: null,
      errorInfo: null,
      errorType: "ResolverExecutionLimitReached",
      locations: [],
      message: "Resolver invocation limit reached.",
      path: []
    }
  ]
}
My understanding is that the AppSync API has a hard limit of 1000 returned entries, but this query is against a table with only ~600 entries and is only returning 384. I am executing the same query against AppSync directly from a NodeJS application and it works without issue.
Not sure where to investigate further to determine what is triggering this error. Any help or direction is greatly appreciated.
Connections in the schema were causing the single request to go beyond the 1000-resolver-invocation limit (exactly as stated by Mickers in the comments). Updated the schema with fewer connections on fetch and the issue was resolved.
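For illustration only, here is a query-level variation on the same idea (not necessarily what was done here): each @connection field in the selection set triggers extra resolver invocations per returned item, so fetching with a trimmed custom query that skips the connected fields also keeps a request under the limit. The query and type names below are assumptions based on the schema above.
import { API, graphqlOperation } from "aws-amplify";

// Hypothetical trimmed query: scalar fields only, no @connection fields,
// so the nested resolvers (Source, probe, device, org) are never invoked.
const searchDestinationsLite = /* GraphQL */ `
  query SearchDestinationsLite($filter: SearchableDestinationFilterInput) {
    searchDestinations(filter: $filter) {
      items {
        id
        index
        name
        xpoint
        deviceId
      }
      nextToken
      total
    }
  }
`;

async function fetchDestinations(deviceId: string) {
  return API.graphql(
    graphqlOperation(searchDestinationsLite, {
      filter: { deviceId: { eq: deviceId } },
    })
  );
}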

RabbitMQ Connection refused 127.0.0.1:5672

I am preparing a simple ASP.NET Core MVC web application.
I have installed RabbitMQ server on my laptop. The RabbitMQ Management UI is running on localhost:15672.
The RabbitMQ cluster name is something like: rabbit@CR00001.ABC.COM.LOCAL
I am trying to send a message to RabbitMQ in a controller, but I am getting a None of the specified endpoints were reachable error.
If I use 'localhost' as the host name, I get Connection refused 127.0.0.1:5672 in the inner exceptions.
If I use rabbit as the host name, I get Name or service not known.
I've tried to solve the problem according to other StackOverflow questions; however, none of them solved my problem.
Home controller:
[HttpPost]
public void SendMessage([FromBody]Message message)
{
    try
    {
        var factory = new ConnectionFactory()
        {
            UserName = _username,
            Password = _password,
            HostName = _hostname,
            VirtualHost = "/",
            Port = _port,
        };

        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: _queueName,
                                 durable: false,
                                 exclusive: false,
                                 autoDelete: false,
                                 arguments: null);

            var body = Encoding.UTF8.GetBytes(message.Text);

            channel.BasicPublish(exchange: "",
                                 routingKey: _queueName,
                                 basicProperties: null,
                                 body: body);
        }
    }
    catch (Exception ex)
    {
    }
}
appsettings.json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*",
  "RabbitMq": {
    "Hostname": "localhost",
    "QueueName": "WordQueue",
    "UserName": "test",
    "Password": "test",
    "Port": 5672
  }
}
Here is the test user configuration in the RabbitMQ Management UI.
Have you set up your test user in the UI portal? This is probably the cause of your connection refused error. You can set up users via http://localhost:15672/#/users.
You should also debug your factory to check that your config values are being passed in correctly.
I would also suggest that you put some code in your catch block to ensure you aren't missing an exception.
The references to http://rabbit are for use within containers. These will only work if you are running both your ASP.NET and RabbitMQ applications within a containerised network (for example Docker Compose). I found using containers was a much better approach for learning RabbitMQ.
Appreciate that this is going a little off-topic now, but if you are not familiar with containerisation I would suggest taking a look at this post (and the respective series) from Wolfgang Ofner https://www.programmingwithwolfgang.com/rabbitmq-in-an-asp-net-core-3-1-microservice/ and the getting started with Docker video from Brad Traversy on YouTube https://www.youtube.com/watch?v=Kyx2PsuwomE
After creating the user, do not forget to set its permissions. Basically, the user needs to:
- have access to the virtual host (/)
- have the topic permission set (AMQP default)
Note: of course you can use the RabbitMQ UI for this operation (creating the user and setting permissions).
var factory = new ConnectionFactory()
{
    HostName = "hostname_or_ip_address_here",
    UserName = "username here..",
    Password = "psw here.."
};
This will work!

DynamoDB provisioned Read/Write Capacity Units exceeded unexpectedly

I run a program that sends data to DynamoDB using API Gateway and Lambdas.
All the data sent to the DB is small, and it is only sent from about 200 machines.
I'm still on the free tier, and sometimes, unexpectedly in the middle of the month, I start getting higher provisioned read/write capacity, and from that day on I pay a constant amount each day until the end of the month.
Can someone tell from the image below what happened on 03/13 that caused this spike in the charts and made the provisioned capacity rise from 50 to 65?
I can't tell what happened based on those charts alone, but some things to consider:
You may not be aware of the new "PAY_PER_REQUEST" billing mode option for DynamoDB tables which allows you to mostly forget about manually provisioning your throughput capacity: https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/
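For example, switching an existing table to on-demand billing is a one-call change. A minimal sketch using the AWS SDK for JavaScript v2 (same style as the code further down); the table name is a placeholder:
import * as AWS from 'aws-sdk';

const dynamodb = new AWS.DynamoDB();

// Switch the table from PROVISIONED to PAY_PER_REQUEST (on-demand) billing,
// so you no longer manage read/write capacity units yourself.
async function enableOnDemandBilling() {
  await dynamodb.updateTable({
    TableName: 'my-table', // placeholder table name
    BillingMode: 'PAY_PER_REQUEST',
  }).promise();
}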
Also, it might not make sense for your use case, but for free tier projects I've found it useful to proxy all writes to DynamoDB through an SQS queue (use the queue as an event source for a Lambda with a reserved concurrency that is compatible with your provisioned throughput). This is easy if your project is reasonably event-driven, i.e. you build your DynamoDB request object/params, write it to SQS, and then have the next step be a Lambda that is triggered from the DynamoDB stream (so you aren't expecting a synchronous response from the write operation in the first Lambda). Like this:
Example serverless config for SQS-triggered Lambda:
dynamodb_proxy:
  description: SQS event function to write to DynamoDB table '${self:custom.dynamodb_table_name}'
  handler: handlers/dynamodb_proxy.handler
  memorySize: 128
  reservedConcurrency: 95 # see custom.dynamodb_active_write_capacity_units
  environment:
    DYNAMODB_TABLE_NAME: ${self:custom.dynamodb_table_name}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:PutItem
      Resource:
        - Fn::GetAtt: [ DynamoDbTable, Arn ]
    - Effect: Allow
      Action:
        - sqs:ReceiveMessage
        - sqs:DeleteMessage
        - sqs:GetQueueAttributes
      Resource:
        - Fn::GetAtt: [ DynamoDbProxySqsQueue, Arn ]
  events:
    - sqs:
        batchSize: 1
        arn:
          Fn::GetAtt: [ DynamoDbProxySqsQueue, Arn ]
Example write to SQS:
await sqs.sendMessage({
  MessageBody: JSON.stringify({
    method: 'putItem',
    params: {
      TableName: DYNAMODB_TABLE_NAME,
      Item: {
        ...attributes,
        created_at: {
          S: createdAt.toString(),
        },
        created_ts: {
          N: createdAtTs.toString(),
        },
      },
      ...conditionExpression,
    },
  }),
  QueueUrl: SQS_QUEUE_URL_DYNAMODB_PROXY,
}).promise();
SQS-triggered Lambda:
import retry from 'async-retry';
import { getEnv } from '../lib/common';
import { dynamodb } from '../lib/aws-clients';

const {
  DYNAMODB_TABLE_NAME
} = process.env;

export const handler = async (event) => {
  const message = JSON.parse(event.Records[0].body);
  if (message.params.TableName !== DYNAMODB_TABLE_NAME) {
    console.log(`DynamoDB proxy event table '${message.params.TableName}' does not match current table name '${DYNAMODB_TABLE_NAME}', skipping.`);
  } else if (message.method === 'putItem') {
    let attemptsTaken;
    await retry(async (bail, attempt) => {
      attemptsTaken = attempt;
      try {
        await dynamodb.putItem(message.params).promise();
      } catch (err) {
        if (err.code && err.code === 'ConditionalCheckFailedException') {
          // expected exception
          // if (message.params.ConditionExpression) {
          //   const conditionExpression = message.params.ConditionExpression;
          //   console.log(`ConditionalCheckFailed: ${conditionExpression}. Skipping.`);
          // }
        } else if (err.code && err.code === 'ProvisionedThroughputExceededException') {
          // retry
          throw err;
        } else {
          bail(err);
        }
      }
    }, {
      retries: 5,
      randomize: true,
    });
    if (attemptsTaken > 1) {
      console.log(`DynamoDB proxy event succeeded after ${attemptsTaken} attempts`);
    }
  } else {
    console.log(`Unsupported method ${message.method}, skipping.`);
  }
};

Websocket disconnects on subscribing

I'm building an application where I can see real-time changes within the logs.
This application is built with Symfony v4.1, using a bundle that provides a WebSocket server and client based on Ratchet and Autobahn.js.
I've set up all the requirements to make it work according to the documentation:
- There's a Topic class
- pubsub routing is configured
- The server runs
- The client runs in JavaScript when the page is loaded
The script to connect works fine until I subscribe to a channel/topic. The connection is immediately closed on the client side, without the server detecting it. Does anyone know how to solve this? Also, I'm curious what the response code WS-1007 means.
Javascript:
var ws = WS.connect("ws://" + $websocket_host + ":" + $websocket_port);

ws.on("socket/connect", function(session) {
    if (window.$debug) {
        console.log("websocket connected");
    }
    console.log(session);
    session.subscribe("log/channel", function(uri, payload) {
        console.log(payload);
    });
});

ws.on("socket/disconnect", function(e) {
    if (window.$debug) {
        console.log("websocket disconnected [reason:" + e.reason + " code:" + e.code + "]");
    }
});
Javascript logs:
~ websocket connected
~ websocket disconnected [reason:Connection was closed properly [WS-1007: ] code:0]
Server logs:
14:15:39 DEBUG [websocket] INSERT CLIENT 2926 ["user" => "s:37:"anon-19491835335b991f8bde43b229754494";"] []
14:15:39 INFO [websocket] anon-19491835335b991f8bde43b229754494 connected ["connection_id" => 2926,"session_id" => "19491835335b991f8bde43b229754494","storage_id" => 2926] []
14:15:39 DEBUG [websocket] GET CLIENT 2926 [] []
14:15:39 INFO [websocket] anon-19491835335b991f8bde43b229754494 subscribe to log/channel [] []
14:15:39 DEBUG [websocket] Matched route "shop4_log" [] []
14:15:39 DEBUG [websocket] Matched route "shop4_log" [] []
Topic class:
namespace App\Service\WebSocket\Topic;

use Gos\Bundle\WebSocketBundle\Router\WampRequest;
use Gos\Bundle\WebSocketBundle\Topic\TopicInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Wamp\Topic;

class LogTopic implements TopicInterface
{
    public function onPublish(ConnectionInterface $connection, Topic $topic, WampRequest $request, $event, array $exclude, array $eligible)
    {
        $topic->broadcast(['msg' => $event]);
    }

    public function getName()
    {
        return "log_topic";
    }

    ....
}
pubsub.yaml
shop4_log:
    channel: log/channel
    handler:
        callback: "log_topic"
So, I eventually found the solution: the topic needs to be tagged in your services configuration.
services.yaml:
App\Service\WebSocket\Topic\LogTopic:
    tags:
        - { name: gos_web_socket.topic }

Log 'jsonPayload' in Firebase Cloud Functions

TL;DR;
Does anyone know if it's possible to use console.log in a Firebase/Google Cloud Function to log entries to Stackdriver using the jsonPayload property so my logs are searchable? (Currently anything I pass to console.log gets stringified into textPayload.)
I have a multi-module project with some code running on Firebase Cloud Functions, and some running in other environments like Google Compute Engine. Simplifying things a little, I essentially have a 'core' module, and then I deploy the 'cloud-functions' module to Cloud Functions, 'backend-service' to GCE, which all depend on 'core', etc.
I'm using bunyan for logging throughout my 'core' module, and when deployed to GCE the logger is configured using '@google-cloud/logging-bunyan' so my logs go to Stackdriver.
Aside: using this configuration in Google Cloud Functions is causing issues with an Error: Endpoint read failed, which I think is due to functions not going cold and trying to reuse dead connections, but I'm not 100% sure what the real cause is.
So now I'm trying to log using console.log(arg) where arg is an object, not a string. I want this object to appear in Stackdriver under jsonPayload, but it's being stringified and put into the textPayload field.
It took me a while, but I finally came across this example in the firebase functions samples repository. In the end I settled on something a bit like this:
const { Logging } = require('@google-cloud/logging');

const logging = new Logging();
const log = logging.log('my-func-logger');

const logMetadata = {
  resource: {
    type: 'cloud_function',
    labels: {
      function_name: process.env.FUNCTION_NAME,
      project: process.env.GCLOUD_PROJECT,
      region: process.env.FUNCTION_REGION
    },
  },
};

const logData = { id: 1, score: 100 };
const entry = log.entry(logMetadata, logData);
log.write(entry);
You can add a string severity property value to logMetadata (e.g. "INFO" or "ERROR"). Here is the list of possible values.
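For instance, a minimal sketch building on the snippet above (the severity string follows Cloud Logging's LogSeverity values):
// Same structure as logMetadata above, with a top-level severity added so the
// entry is recorded as ERROR instead of the DEFAULT severity.
const errorMetadata = {
  severity: 'ERROR',
  resource: {
    type: 'cloud_function',
    labels: { function_name: process.env.FUNCTION_NAME },
  },
};
log.write(log.entry(errorMetadata, { id: 1, reason: 'something failed' }));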
Update for available node 10 env vars. These seem to do the trick:
labels: {
  function_name: process.env.FUNCTION_TARGET,
  project: process.env.GCP_PROJECT,
  region: JSON.parse(process.env.FIREBASE_CONFIG).locationId
}
UPDATE: Looks like for Node 10 runtimes they want you to set env values explicitly during deploy. I guess there has been a grace period in place because my deployed functions are still working.
I ran into the same problem, and as stated in the comments on @wtk's answer, I would like to add a snippet that replicates all of the default Cloud Function logging behaviour I could find, including execution_id.
At least when using Cloud Functions with the HTTP trigger option, the following produced correct logs for me. I have not tested it with Firebase Cloud Functions.
// global
const { Logging } = require("@google-cloud/logging");
const logging = new Logging();
const Log = logging.log("cloudfunctions.googleapis.com%2Fcloud-functions");
const LogMetadata = {
  severity: "INFO",
  type: "cloud_function",
  labels: {
    function_name: process.env.FUNCTION_NAME,
    project: process.env.GCLOUD_PROJECT,
    region: process.env.FUNCTION_REGION
  }
};

// per request
const data = { foo: "bar" };
const traceId = req.get("x-cloud-trace-context").split("/")[0];
const metadata = {
  ...LogMetadata,
  severity: 'INFO',
  trace: `projects/${process.env.GCLOUD_PROJECT}/traces/${traceId}`,
  labels: {
    execution_id: req.get("function-execution-id")
  }
};
Log.write(Log.entry(metadata, data));
The GitHub link in @wtk's answer should be updated to:
https://github.com/firebase/functions-samples/blob/2f678fb933e416fed9be93e290ae79f5ea463a2b/stripe/functions/index.js#L103
as it refers to the repository as of when the question was answered, and has the following function in it:
// To keep on top of errors, we should raise a verbose error report with Stackdriver rather
// than simply relying on console.error. This will calculate users affected + send you email
// alerts, if you've opted into receiving them.
// [START reporterror]
function reportError(err, context = {}) {
  // This is the name of the StackDriver log stream that will receive the log
  // entry. This name can be any valid log stream name, but must contain "err"
  // in order for the error to be picked up by StackDriver Error Reporting.
  const logName = 'errors';
  const log = logging.log(logName);

  // https://cloud.google.com/logging/docs/api/ref_v2beta1/rest/v2beta1/MonitoredResource
  const metadata = {
    resource: {
      type: 'cloud_function',
      labels: {function_name: process.env.FUNCTION_NAME},
    },
  };

  // https://cloud.google.com/error-reporting/reference/rest/v1beta1/ErrorEvent
  const errorEvent = {
    message: err.stack,
    serviceContext: {
      service: process.env.FUNCTION_NAME,
      resourceType: 'cloud_function',
    },
    context: context,
  };

  // Write the error log entry
  return new Promise((resolve, reject) => {
    log.write(log.entry(metadata, errorEvent), (error) => {
      if (error) {
        return reject(error);
      }
      resolve();
    });
  });
}
// [END reporterror]
