Use Case:
I am using Spring Boot 2.2.5.RELEASE and Kafka 2.4.1
JAAS/SASL configurations are done properly on Kafka/ZooKeeper as topics are created without issue with kafka-topics.bat
Issue:
When I start the Spring Boot application, I immediately get the following errors:
kafka-server-start.bat console:
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
IDE console:
WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=xxx, groupId=yyy] Bootstrap broker localhost:9093 (id: -3 rack: null) disconnected
My application.properties configuration:
spring.kafka.jaas.enabled=true
spring.kafka.properties.security.protocol=SASL_PLAINTEXT
spring.kafka.properties.sasl.mechanism=PLAIN
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="spring_bO0t" password="i_am_a_spring_bO0t_user";
kafka_server_jaas.conf:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="12345"
user_admin="12345"
user_spring_bO0t="i_am_a_spring_bO0t_user";
};
Am I missing something?
Thanks in advance.
The answer provided by @jumping_monkey is correct; however, I didn't know where to put those configurations in the ProducerFactory & ConsumerFactory beans, so I'll leave an example below for those who want to know:
In your producer or consumer configuration beans respectively (mine is named generalMessageProducerFactory):
@Bean
public ProducerFactory<String, GeneralMessageDto> generalMessageProducerFactory() {
Map<String, Object> configProps = new HashMap<>();
configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
configProps.put("sasl.mechanism", "PLAIN");
configProps.put("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required username='YOUR_KAFKA_CLUSTER_USERNAME' password='YOUR_KAFKA_CLUSTER_PASSWORD';");
configProps.put("security.protocol", "SASL_SSL");
return new DefaultKafkaProducerFactory<>(configProps);
}
And also in your TopicConfiguration Class in kafkaAdmin method:
@Bean
public KafkaAdmin kafkaAdmin() {
Map<String, Object> configs = new HashMap<>();
configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
configs.put("sasl.mechanism", "PLAIN");
configs.put("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required username='YOUR_KAFKA_CLUSTER_USERNAME' password='YOUR_KAFKA_CLUSTER_PASSWORD';");
configs.put("security.protocol", "SASL_SSL");
return new KafkaAdmin(configs);
}
Hope this was helpful guys!
I had defined the properties in the wrong place, i.e. in application.properties. As I have ProducerFactory & ConsumerFactory beans, those application.properties entries are ignored by Spring Boot.
Configuring the same properties in the bean definitions resolved the issue, i.e. move your properties from application.properties to where you define your beans.
Here's an example:
@Bean
public ProducerFactory<Object, Object> producerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
props.put(SaslConfigs.SASL_JAAS_CONFIG, String.format(
"%s required username=\"%s\" " + "password=\"%s\";", PlainLoginModule.class.getName(), "username", "password"
));
return new DefaultKafkaProducerFactory<>(props);
}
@Bean
public ConsumerFactory<Object, Object> consumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
props.put(ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS, JsonDeserializer.class);
props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class);
props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
props.put(SaslConfigs.SASL_JAAS_CONFIG, String.format(
"%s required username=\"%s\" " + "password=\"%s\";", PlainLoginModule.class.getName(), "username", "password"
));
return new DefaultKafkaConsumerFactory<>(props);
}
@Bean
public KafkaAdmin kafkaAdmin() {
Map<String, Object> configs = new HashMap<>();
configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
configs.put("security.protocol", "SASL_PLAINTEXT");
configs.put("sasl.mechanism", "PLAIN");
configs.put("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required " +
"username=username" +
"password=password;");
return new KafkaAdmin(configs);
}
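As a side note (my addition, not part of the original answer): if you would rather keep the SASL settings in application.properties, you can build your custom factories on top of Spring Boot's KafkaProperties so the spring.kafka.* entries are still honoured. A minimal sketch, assuming the auto-configured org.springframework.boot.autoconfigure.kafka.KafkaProperties bean is available for injection and using illustrative deserializers:
@Bean
public ConsumerFactory<Object, Object> consumerFactory(KafkaProperties kafkaProperties) {
    // Start from everything Boot resolved from application.properties,
    // including spring.kafka.properties.security.protocol / sasl.* entries,
    // then override or add whatever this bean needs.
    Map<String, Object> props = new HashMap<>(kafkaProperties.buildConsumerProperties());
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}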
Related
Spring Boot: 2.7.1
Spring Kafka: 2.8.7
I tested Kafka retry with error handling locally, where I publish around 1,500 messages, listen in batches, and process them in a loop (only 1 topic, which has 1 partition). It works as expected: the message is retried, and upon exhaustion of retries it is sent to the error topic.
KafkaConfig
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory(KafkaTemplate<String, String> kafkaTemplate){
ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setBatchListener(true);
DeadLetterPublishingRecoverer deadLetterPublishingRecoverer = new DeadLetterPublishingRecoverer(kafkaTemplate,
(consumer, e) -> new TopicPartition("errortopic", -1));
FixedBackOff fixedBackOff = new FixedBackOff(10,2);
DefaultErrorHandler defaultErrorHandler = new DefaultErrorHandler(deadLetterPublishingRecoverer, fixedBackOff);
factory.setCommonErrorHandler(defaultErrorHandler);
return factory;
}
@Bean
public ConsumerFactory<String, Object> consumerFactory() {
Map<String, Object> props = getDefaultParams();
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 20);
return new DefaultKafkaConsumerFactory<>(props);
}
Listener
@Autowired
private MyService myservice;
@KafkaListener(id = "sample-listener", topics = {"topic1", "topic2"})
public void listen(final List<ConsumerRecord<String, String>> records) {
    for (ConsumerRecord<String, String> record : records) {
        try {
            myservice.process(record.value());
        } catch (Exception e) {
            throw new BatchListenerFailedException("Failure in processing message", record);
        }
    }
}
Service
@Transactional
public void process(String record){
count++;
if(count == 2 || count ==3){
throw new RuntimeException();
}
}
When I run it in the dev environment, where we have 2 topics and each topic has 64 partitions (around 100k messages an hour), the message is not retried at all.
I have a KafkaListener that throws an NPE.
@KafkaListener(topics = CASE6_TOPIC, groupId = "demo-case-6", clientIdPrefix = "case6", containerFactory = "kafkaListenerContainerFactory")
private void consume(DemandeAvro payload, @Header(RECEIVED_PARTITION_ID) Integer partitionId, @Header(KafkaHeaders.OFFSET) int offset) {
log.info(String.format("Partition %s Offset %s Payload : %s", partitionId, offset, payload));
throw new NullPointerException("other than serde");
}
My configuration looks like:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setReplyTemplate(kafkaTemplate());
factory.setCommonErrorHandler(new DefaultErrorHandler(recoverer(kafkaTemplate()),new FixedBackOff(2000L, 4)));
return factory;
}
And the recoverer to publish to the DLT:
@Bean
public DeadLetterPublishingRecoverer recoverer(KafkaTemplate<?, ?> bytesTemplate) {
return new DeadLetterPublishingRecoverer(bytesTemplate);
}
I'm not sure if this is bad usage or a bug, but when I drop the recoverer, the consumer stops after 4 attempts as requested by the FixedBackOff policy; otherwise it retries forever...
Thank you for your help
It is not clear what you are saying; with no recoverer the failed message will be logged after 4 retries; with the DLPR, it will be published to the DLT. If the publishing fails, the record will be delivered again as if retries were not exhausted.
If you are seeing this behavior, the DLT publishing must be failing for some reason.
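In case it helps with diagnosis, here is a minimal sketch (my addition, not from the original answer) of one common reason DLT publishing fails: by default the DeadLetterPublishingRecoverer publishes to "<original topic>.DLT" using the same partition number as the failed record, so if that topic does not exist or has fewer partitions than the source topic, the publish fails and the record keeps being redelivered. Declaring the DLT explicitly rules that out; with Spring Boot's auto-configured KafkaAdmin, NewTopic beans are created at startup. The bean name, partition count and replica count below are illustrative assumptions:
// Requires org.apache.kafka.clients.admin.NewTopic and org.springframework.kafka.config.TopicBuilder
@Bean
public NewTopic case6DeadLetterTopic() {
    // Give the DLT at least as many partitions as the source topic so the
    // default same-partition destination resolver always has a valid target.
    return TopicBuilder.name(CASE6_TOPIC + ".DLT")
            .partitions(3)
            .replicas(1)
            .build();
}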
I am trying to implement a custom ILogger in .NET Core 3.1.
My CustomLogger class implements ILogger. One of the methods that needs to be implemented is:
public class AuditLogLogger: ILogger
{
public IDisposable BeginScope<TState>(TState state)
{
return null;
}
public bool IsEnabled(LogLevel logLevel)
{
return true;
}
public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
{
// How can I access the parameters I sent it from the controller?
}
}
From my controller I called the LogInformation method and passed a string and a List of KeyValuePair, as such:
List<KeyValuePair<string, string>> udf = new List<KeyValuePair<string, string>>();
udf.Add(new KeyValuePair<string, string>("Test", "Test"));
_logger.LogInformation("This is a test", udf);
My code is able to make it to the Log<TState> but I need to perform some logic based on the parameters passed in. How can I access the parameters passed?
I ended up doing a dirty solution.
Basically, I have my controller send in a JSON string containing the message and the list, then have the Log function deserialize it, as such:
string message = "this is a test";
List<KeyValuePair<string, string>> udf = new List<KeyValuePair<string, string>>();
udf.Add(new KeyValuePair<string, string>("Test", "Test"));
JObject obj = new JObject();
obj["Message"] = message;
obj["MyList"] = JArray.FromObject(udf);
The Log method then needs to deserialize this.
I am sure there is a cleaner solution.
Uploading and downloading a zip attachment containing a text file in Corda is not working.
I tried to attach and download a zip file manually, and also tried to send the attachment from the client via the RPC proxy.
Flow code:
public class IOUFlow extends FlowLogic<Void> {
private final Integer iouValue;
private final Party otherParty;
private final SecureHash attachmentHash;
public IOUFlow(Integer iouValue, Party otherParty,SecureHash attachmentHash) {
this.iouValue = iouValue;
this.otherParty = otherParty;
this.attachmentHash=attachmentHash;
}
@Override
public ProgressTracker getProgressTracker() {
return progressTracker;
}
private static final Step ID_OTHER_NODES = new Step("Identifying other nodes on the network.");
private static final Step SENDING_AND_RECEIVING_DATA = new Step("Sending data between parties.");
private static final Step EXTRACTING_VAULT_STATES = new Step("Extracting states from the vault.");
private static final Step OTHER_TX_COMPONENTS = new Step("Gathering a transaction's other components.");
private static final Step TX_BUILDING = new Step("Building a transaction.");
private static final Step TX_SIGNING = new Step("Signing a transaction.");
private static final Step TX_VERIFICATION = new Step("Verifying a transaction.");
private static final Step SIGS_GATHERING = new Step("Gathering a transaction's signatures.") {
// Wiring up a child progress tracker allows us to see the
// subflow's progress steps in our flow's progress tracker.
@Override
public ProgressTracker childProgressTracker() {
return CollectSignaturesFlow.tracker();
}
};
private static final Step VERIFYING_SIGS = new Step("Verifying a transaction's signatures.");
private static final Step FINALISATION = new Step("Finalising a transaction.") {
@Override
public ProgressTracker childProgressTracker() {
return FinalityFlow.tracker();
}
};
private final ProgressTracker progressTracker = new ProgressTracker(
ID_OTHER_NODES,
SENDING_AND_RECEIVING_DATA,
EXTRACTING_VAULT_STATES,
OTHER_TX_COMPONENTS,
TX_BUILDING,
TX_SIGNING,
TX_VERIFICATION,
SIGS_GATHERING,
FINALISATION
);
@Suspendable
@Override
public Void call() throws FlowException {
progressTracker.setCurrentStep(ID_OTHER_NODES);
// We retrieve the notary identity from the network map.
Party notary = getServiceHub().getNetworkMapCache().getNotaryIdentities().get(0);
progressTracker.setCurrentStep(SENDING_AND_RECEIVING_DATA);
// We create the transaction components.
IOUState outputState = new IOUState(iouValue, getOurIdentity(), otherParty);
List<PublicKey> requiredSigners = Arrays.asList(getOurIdentity().getOwningKey(), otherParty.getOwningKey());
Command command = new Command<>(new IOUContract.Create(), requiredSigners);
TimeWindow ourAfter = TimeWindow.fromOnly(Instant.MIN);
progressTracker.setCurrentStep(TX_BUILDING);
// We create a transaction builder and add the components.
TransactionBuilder txBuilder = new TransactionBuilder(notary)
.addOutputState(outputState, IOUContract.ID)
.addCommand(command)
.addAttachment(attachmentHash);
// Verifying the transaction.
txBuilder.verify(getServiceHub());
progressTracker.setCurrentStep(TX_SIGNING);
// Signing the transaction.
SignedTransaction signedTx = getServiceHub().signInitialTransaction(txBuilder);
// Creating a session with the other party.
FlowSession otherPartySession = initiateFlow(otherParty);
// Obtaining the counterparty's signature.
SignedTransaction fullySignedTx = subFlow(new CollectSignaturesFlow(
signedTx, Arrays.asList(otherPartySession), CollectSignaturesFlow.tracker()));
progressTracker.setCurrentStep(TX_VERIFICATION);
// Finalising the transaction.
subFlow(new FinalityFlow(fullySignedTx, otherPartySession));
return null;
}
}
Client code:
public class Client {
private static final Logger logger = LoggerFactory.getLogger(Client.class);
public static void main(String[] args) throws Exception {
// Create an RPC connection to the node.
if (args.length != 3) throw new IllegalArgumentException("Usage: Client <node address> <rpc username> <rpc password>");
final NetworkHostAndPort nodeAddress = parse(args[0]);
final String rpcUsername = args[1];
final String rpcPassword = args[2];
final CordaRPCClient client = new CordaRPCClient(nodeAddress);
final CordaRPCOps proxy = client.start(rpcUsername, rpcPassword).getProxy();
// Interact with the node.
// For example, here we print the nodes on the network.
final List<NodeInfo> nodes = proxy.networkMapSnapshot();
logger.info("{}", nodes);
InputStream inputstream = new FileInputStream("corda.zip");
SecureHash hashId= proxy.uploadAttachment(inputstream);
System.out.println(hashId);
CordaX500Name x500Name = CordaX500Name.parse("O=ICICI,L=New York,C=US");
final Party otherParty = proxy.wellKnownPartyFromX500Name(x500Name);
/* proxy
.startFlowDynamic(IOUFlow.class, "10", otherParty,hashId)
.getReturnValue()
.get();*/
InputStream stream = proxy.openAttachment(hashId);
JarInputStream in = new JarInputStream(stream);
BufferedReader br =new BufferedReader(new InputStreamReader(in));
System.out.println("Output from attachment : "+br.readLine());
}
}
Output:
Task :clients:runTemplateClient
I 16:36:28 1 RPCClient.logElapsedTime - Startup took 2066 msec
I 16:36:28 1 Client.main - [NodeInfo(addresses=[localhost:10005], legalIdentitiesAndCerts=[O=PNB, L=London, C=GB], platformVersion=4, serial=1559037129874), NodeInfo(addresses=[localhost:10002], legalIdentitiesAndCerts=[O=Notary, L=London, C=GB], platformVersion=4, serial=1559037126875), NodeInfo(addresses=[localhost:10008], legalIdentitiesAndCerts=[O=ICICI, L=New York, C=US], platformVersion=4, serial=1559037128218)]
DF3C198E05092E52F47AE8EAA0D5D26721F344B3F5E0DF80B5A53CA2B7104C9C
Output from attachment : null
Another output, when trying to send the attachment from the client using RPC:
Task :clients:runTemplateClient
I 16:41:46 1 RPCClient.logElapsedTime - Startup took 2045 msec
I 16:41:47 1 Client.main - [NodeInfo(addresses=[localhost:10005], legalIdentitiesAndCerts=[O=PNB, L=London, C=GB], platformVersion=4, serial=1559037129874), NodeInfo(addresses=[localhost:10002], legalIdentitiesAndCerts=[O=Notary, L=London, C=GB], platformVersion=4, serial=1559037126875), NodeInfo(addresses=[localhost:10008], legalIdentitiesAndCerts=[O=ICICI, L=New York, C=US], platformVersion=4, serial=1559037128218)]
B7F5F70FC9086ED594883E6EB8B0B53B666B92CC4412E27FF3D6531446E9E40C
Exception in thread "main" net.corda.core.CordaRuntimeException: net.corda.core.flows.IllegalFlowLogicException: A FlowLogicRef cannot be constructed for FlowLogic of type com.template.flows.IOUFlow: due to missing constructor for arguments: [class java.lang.String, class net.corda.core.identity.Party, class net.corda.core.crypto.SecureHash$SHA256]
Thanks for raising this question. This is definitely something we could improve in the developer experience.
Essentially there are two steps to adding an attachment to a TX.
The first step is importing the attachment itself into the node (which it seems you've already done):
SecureHash attachmentHash = getServiceHub().getAttachments().importAttachment(INPUT-FILE, getOurIdentity().getName().toString(), INPUT-FILE.getName());
Replace INPUT-FILE with a Java File instance.
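For reference, a minimal sketch (my addition) of what that first step could look like inside a flow; the file path and variable names are illustrative, and the File is wrapped in a FileInputStream because this importAttachment overload takes an InputStream, an uploader name and a filename:
// Requires java.io.File, java.io.FileInputStream, java.io.InputStream, java.io.IOException
File attachmentZip = new File("corda.zip"); // illustrative path
try (InputStream in = new FileInputStream(attachmentZip)) {
    SecureHash attachmentHash = getServiceHub().getAttachments().importAttachment(
            in,                                     // the zip contents as a stream
            getOurIdentity().getName().toString(),  // uploader
            attachmentZip.getName());               // filename
    // attachmentHash can then be passed to TransactionBuilder.addAttachment(...)
} catch (IOException e) {
    throw new FlowException("Could not import attachment", e);
}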
The second is to add the attachment hash to the TX (which you have already done as well)
.addAttachment(attachmentHash);
You can facilitate the first step either explicitly via flow logic OR using the RPC proxy as you've done here.
For the second error you've listed:
The exception you've posted, however, is related to the way you're invoking your IOUFlow. The IOUFlow expects an Integer as the first argument but you are providing a String, which causes the IllegalFlowLogicException. Try providing an Int, or change the IOUFlow to expect a String input and convert it to an Int.
A FlowLogicRef cannot be constructed for FlowLogic of type com.template.flows.IOUFlow: due to missing constructor for arguments: [class java.lang.String, class net.corda.core.identity.Party, class net.corda.core.crypto.SecureHash$SHA256]
startFlowDynamic(IOUFlow.class, "10", otherParty,hashId)
It looks like your IOUFlow expects an Integer, and you are sending a String instead?
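For completeness, a minimal sketch of the corrected invocation: it is simply the commented-out call from the client code with the first argument changed from the String "10" to the int 10 so that it matches the flow's Integer constructor parameter:
proxy.startFlowDynamic(IOUFlow.class, 10, otherParty, hashId)
        .getReturnValue()
        .get();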
I am trying a simple Pact test but it's failing with the following error. Below is my code. Is there any issue with the way I'm trying to call Pact?
ERROR:
groovy.json.JsonException: Unable to determine the current character, it is not a string, number, array, or object The current character read is 'T' with an int value of 84
CODE
public class PactTest1 {
@Rule
//public PactProviderRule rule = new PactProviderRule("assessments", this);
public PactProviderRule provider = new PactProviderRule("test_provider", "localhost", 8080, this);
@Pact(state = "default", provider = "test_provider", consumer = "test_consumer")
public PactFragment createFragment(PactDslWithProvider builder) {
Map<String, String> headers = new HashMap<>();
headers.put("content-type", "application/json");
return builder
.given("test GET")
.uponReceiving("GET REQUEST")
.path("/assessments")
.method("GET")
.willRespondWith()
.status(200)
.headers(headers)
.body("Test Successful")
.toFragment();
}
@Test
@PactVerification("test_provider")
public void runTest() {
final RestTemplate call = new RestTemplate();
// when
final String response = call.getForObject(provider.getConfig().url()+"/assessments", String.class);
assertEquals(response, "Test Successful");
}
}
It worked after I changed the header content type to text/json. However, I'm not able to find the pact file. Where can I find it?
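For anyone else hitting this: the original error occurred because the stubbed body "Test Successful" is not valid JSON while the declared content type was application/json, so the matcher tried to parse it as JSON and choked on the leading 'T'. If you want to keep application/json instead of switching to text/json, a hedged alternative (assuming the PactDslJsonBody builder from pact-jvm's consumer DSL; the field name is illustrative) is to return a JSON body:
@Pact(state = "default", provider = "test_provider", consumer = "test_consumer")
public PactFragment createFragment(PactDslWithProvider builder) {
    Map<String, String> headers = new HashMap<>();
    headers.put("content-type", "application/json");
    return builder
            .given("test GET")
            .uponReceiving("GET REQUEST")
            .path("/assessments")
            .method("GET")
            .willRespondWith()
            .status(200)
            .headers(headers)
            // a JSON body keeps the application/json content type consistent
            .body(new PactDslJsonBody().stringValue("message", "Test Successful"))
            .toFragment();
}
The assertion in runTest() would then need to compare against the returned JSON rather than the plain "Test Successful" string.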