SASL_SSL integration with EmbeddedKafka - spring-kafka

I've been following this blog post to implement an embedded SASL_SSL Kafka broker:
https://sharebigdata.wordpress.com/2018/01/21/implementing-sasl-plain/
@SpringBootTest
@RunWith(SpringRunner.class)
@TestPropertySource(properties = {
        "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}",
        "spring.kafka.consumer.group-id=notify-integration-test-group-id",
        "spring.kafka.consumer.auto-offset-reset=earliest"
})
public class ListenerIntegrationTest2 {
    static final String INBOUND = "inbound-topic";
    static final String OUTBOUND = "outbound-topic";

    static {
        System.setProperty("java.security.auth.login.config", "src/test/java/configs/kafka/kafka_jaas.conf");
    }

    @ClassRule
    public static final EmbeddedKafkaRule KAFKA = new EmbeddedKafkaRule(1, true, 1,
            ListenerIntegrationTest2.INBOUND, ListenerIntegrationTest2.OUTBOUND)
            .brokerProperty("listeners", "SASL_SSL://localhost:9092, PLAINTEXT://localhost:9093")
            .brokerProperty("ssl.keystore.location", "src/test/java/configs/kafka/kafka.broker1.keystore.jks")
            .brokerProperty("ssl.keystore.password", "pass")
            .brokerProperty("ssl.key.password", "pass")
            .brokerProperty("ssl.client.auth", "required")
            .brokerProperty("ssl.truststore.location", "src/test/java/configs/kafka/kafka.broker1.truststore.jks")
            .brokerProperty("ssl.truststore.password", "pass")
            .brokerProperty("security.inter.broker.protocol", "SASL_SSL")
            .brokerProperty("sasl.enabled.mechanisms", "PLAIN,SASL_SSL")
            .brokerProperty("sasl.mechanism.inter.broker.protocol", "SASL_SSL");
When I use the PLAINTEXT://localhost:9093 config, I get the following:
WARN org.apache.kafka.clients.NetworkClient - [Controller id=0, targetBrokerId=0] Connection to node 0 terminated during authentication. This may indicate that authentication failed due to invalid credentials.
However, when I remove it, I get org.apache.kafka.common.KafkaException: Tried to check server's port before server was started or checked for port of non-existing protocol
I've tried changing the SecurityProtocol type so it autodiscovers which style of broker communication it should be using (it's hardcoded to PLAINTEXT in spring-kafka - this should probably get fixed):
if (this.kafkaPorts[i] == 0) {
    // or whatever property can give me the security protocol I should be using to communicate
    this.kafkaPorts[i] = TestUtils.boundPort(server, SecurityProtocol.forName(
            this.brokerProperties.getOrDefault("security.protocol", SecurityProtocol.PLAINTEXT.name).toString()));
}
I still get the following error: WARN org.apache.kafka.clients.NetworkClient - [Controller id=0, targetBrokerId=0] Connection to node 0 terminated during authentication. This may indicate that authentication failed due to invalid credentials.
Is there a way to correctly configure embedded kafka to be sasl_ssl enabled?
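One thing worth noting about the configuration above: SASL_SSL is a security protocol, not a SASL mechanism, so it is not a valid value for sasl.enabled.mechanisms or sasl.mechanism.inter.broker.protocol; those properties take mechanism names such as PLAIN. A sketch of broker properties along those lines (untested with EmbeddedKafkaRule, and assuming kafka_jaas.conf defines a KafkaServer section with a PlainLoginModule):
new EmbeddedKafkaRule(1, true, 1, INBOUND, OUTBOUND)
        .brokerProperty("listeners", "SASL_SSL://localhost:9092")
        .brokerProperty("security.inter.broker.protocol", "SASL_SSL")
        // Mechanisms, not protocols: "SASL_SSL" is not valid in the next two properties.
        .brokerProperty("sasl.enabled.mechanisms", "PLAIN")
        .brokerProperty("sasl.mechanism.inter.broker.protocol", "PLAIN")
        .brokerProperty("ssl.keystore.location", "src/test/java/configs/kafka/kafka.broker1.keystore.jks")
        .brokerProperty("ssl.keystore.password", "pass")
        .brokerProperty("ssl.key.password", "pass")
        .brokerProperty("ssl.truststore.location", "src/test/java/configs/kafka/kafka.broker1.truststore.jks")
        .brokerProperty("ssl.truststore.password", "pass");
Even with valid mechanism names, the port-discovery problem described above may remain, since EmbeddedKafkaRule assumes a PLAINTEXT listener when resolving the broker port.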

Related

GRPC call for a service which is inside a subdirectory? (Android grpc client)

This question is similar to the one below, but my issue is with the Android gRPC client:
How can I make a GRPC call for a service which is inside a subdirectory? (in .Net Framework)
I am getting a 404 error while accessing the gRPC streaming API:
UNIMPLEMENTED: HTTP status code 404
invalid content-type: text/html
headers: Metadata(:status=404,content-length=1245,content-type=text/html,server=Microsoft-IIS/10.0,request-id=5154500d-fb58-7903-65d6-3d3711129101,strict-transport-security=max-age=31536000; includeSubDomains; preload,alt-svc=h3=":443",h3-29=":443",x-preferredroutingkeydiagnostics=1,x-calculatedfetarget=PS2PR02CU003.internal.outlook.com,x-backendhttpstatus=404,x-calculatedbetarget=PUZP153MB0788.APCP153.PROD.OUTLOOK.COM,x-backendhttpstatus=404,x-rum-validated=1,x-proxy-routingcorrectness=1,x-proxy-backendserverstatus=404,x-feproxyinfo=MA0PR01CA0051.INDPRD01.PROD.OUTLOOK.COM,x-feefzinfo=MAA,ms-cv=DVBUUVj7A3ll1j03ERKRAQ.1.1,x-feserver=PS2PR02CA0054,x-firsthopcafeefz=MAA,x-powered-by=ASP.NET,x-feserver=MA0PR01CA0051,date=Tue, 11 Oct 2022 06:24:18 GMT)
The issue is that the /subdirectory_path is getting ignored by the service in the final outgoing call.
Here's the code I am using to create the gRPC channel in Android (gives 404):
val uri = Uri.parse("https://examplegrpcserver.com/subdirectory_path")
private val channel = let {
    val builder = ManagedChannelBuilder.forTarget(uri.host + uri.path)
    if (uri.scheme == "https") {
        builder.useTransportSecurity()
    } else {
        builder.usePlaintext()
    }
    builder.executor(Dispatchers.IO.asExecutor()).build()
}
The URI is correct since it works with the web client.
For the web client the channel is defined like this (working):
var handler = new SubdirectoryHandler(httpHandler, "/subdirectory_path");
var userToken = "<token string>";
var grpcWebHandler = new GrpcWebHandler(handler);
using var channel = GrpcChannel.ForAddress("https://examplegrpcserver.com", new GrpcChannelOptions
{
    HttpHandler = grpcWebHandler,
    Credentials = ChannelCredentials.Create(new SslCredentials(), CallCredentials.FromInterceptor((context, metadata) =>
    {
        metadata.Add("Authorization", $"Bearer {userToken}");
        return Task.CompletedTask;
    }))
});
I tried to inject the subdirectory_path into the URI for my Android client but was unable to find an appropriate API; grpc-kotlin doesn't expose the underlying HTTP client used in the channel.
Could someone please help me with this issue? How can I specify the subdirectory_path (before the service and method name)?
The path for an RPC is fixed by the .proto definition. Adding prefixes to the path is unsupported.
The URI passed to forTarget() points to the resource containing the addresses to connect to. So the fully-qualified form is normally of the form dns:///example.com. If you specified a host in the URI like dns://1.1.1.1/example.com, then that would mean "look up example.com at the DNS server 1.1.1.1." But there's no place to put a path prefix in the target string, as that path would only be used for address lookup, not actual RPCs.
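To make the target-string semantics concrete, here is a minimal sketch with grpc-java (the host is a placeholder; grpc-kotlin builds on the same channel API):
// "dns:///<host>" means: resolve <host> with the default DNS resolver.
// There is nowhere to put an HTTP path prefix here - the path of each RPC
// is derived from the .proto service and method names.
ManagedChannel channel = ManagedChannelBuilder
        .forTarget("dns:///examplegrpcserver.com")
        .useTransportSecurity()
        .build();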
If the web client supports path prefixes, that is a feature specific to it. It would also be using a tweaked grpc protocol that requires translation to normal backends.

Corda - Failed to find a store at certificates\sslkeystore.jks

Corda open source on Linux. Node RPC SSL is enabled. I am getting the error "Failed to find a store at certificates\sslkeystore.jks". Any ideas? I have entered an absolute path in keyStorePath.
You must follow the steps in this section of the docs: https://docs.corda.net/clientrpc.html#wire-security, which I've detailed for you below.
When you enable RPC SSL, you must run this command once (you will be asked to supply 2 new passwords):
java -jar corda.jar generate-rpc-ssl-settings
It will create rpcsslkeystore.jks under the certificates folder, and rpcssltruststore.jks under the certificates/export folder.
Inside your node.conf, supply the path and password of rpcsslkeystore.jks:
rpcSettings {
    useSsl = true
    ssl {
        keyStorePath = ${baseDirectory}/certificates/rpcsslkeystore.jks
        keyStorePassword = password
    }
    standAloneBroker = false
    address = "0.0.0.0:10003"
    adminAddress = "0.0.0.0:10004"
}
Now, if you have a webserver, inside NodeRPCConnection you must use the constructor that takes a ClientRpcSslOptions parameter:
// RPC SSL properties.
@Value("${config.rpc.ssl.truststorepath}")
private String trustStorePath;

@Value("${config.rpc.ssl.truststorepassword}")
private String trustStorePassword;

@PostConstruct
public void initialiseNodeRPCConnection() {
    NetworkHostAndPort rpcAddress = new NetworkHostAndPort(host, rpcPort);
    ClientRpcSslOptions clientRpcSslOptions = new ClientRpcSslOptions(Paths.get(trustStorePath),
            trustStorePassword, "JKS");
    CordaRPCClient rpcClient = new CordaRPCClient(rpcAddress, clientRpcSslOptions, null);
    rpcConnection = rpcClient.start(username, password);
    proxy = rpcConnection.getProxy();
}
We added 2 extra attributes above that you must now supply when starting the webserver; to do that, modify your clients module's build.gradle:
task runNodeServer(type: JavaExec, dependsOn: jar) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'com.example.server.ServerKt'
    args '--server.port=50005', '--config.rpc.host=localhost',
            '--config.rpc.port=10005', '--config.rpc.username=user1', '--config.rpc.password=test',
            '--config.rpc.ssl.truststorepath=/path-to-project/build/nodes/your-node/certificates/export/rpcssltruststore.jks',
            '--config.rpc.ssl.truststorepassword=password'
}
If you're planning to connect to the node with a standalone shell, you must do something similar; but it didn't work for me, and I reported the following bug: https://github.com/corda/corda/issues/5955

Exactly-once delivery: is it possible through spring-cloud-stream-binder-kafka or spring-kafka, and which one to use?

I am trying to achieve exactly-once delivery using spring-cloud-stream-binder-kafka in a Spring Boot application.
The versions I am using are:
spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE
spring-cloud-stream-binder-kafka-1.2.1.RELEASE
spring-cloud-stream-codec-1.2.2.RELEASE
spring-kafka-1.1.6.RELEASE
spring-integration-kafka-2.1.0.RELEASE
spring-integration-core-4.3.10.RELEASE
zookeeper-3.4.8
Kafka version: 0.10.1.1
This is my configuration (cloud-config):
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
I have two main classes:
FeedSink interface:
package au.com.xyz.proxy.interfaces;

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.MessageChannel;

public interface FeedSink {
    String FEED_PLATFORM_EVENTS_INPUT = "feed_platform_events_input";

    @Input(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    MessageChannel feedPlatformEventsInput();
}
EventConsumer:
package au.com.xyz.proxy.consumer;

@Slf4j
@EnableBinding(FeedSink.class)
public class EventConsumer {

    public static final String SUCCESS_MESSAGE =
            "SEND-SUCCESS : Successfully sent message to platform.";
    public static final String FAULT_MESSAGE = "SOAP-FAULT Code: {}, Description: {}";
    public static final String CONNECT_ERROR_MESSAGE = "CONNECT-ERROR Error Details: {}";
    public static final String EMPTY_NOTIFICATION_ERROR_MESSAGE =
            "EMPTY-NOTIFICATION-ERROR Empty Event Received from platform";

    @Autowired
    private CapPointService service;

    /**
     * Method associated with the stream to process messages.
     */
    @StreamListener(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    public void message(@Payload final EventNotification eventNotification,
            @Header(KafkaHeaders.ACKNOWLEDGMENT) final Acknowledgment acknowledgment) {
        String caseMilestone = "UNKNOWN";
        if (!ObjectUtils.isEmpty(eventNotification)) {
            SysMessage sysMessage = processPayload(eventNotification);
            caseMilestone = sysMessage.getCaseMilestone();
            try {
                ClientResponse response = service.sendPayload(sysMessage);
                if (response.hasFault()) {
                    Fault faultDetails = response.getFaultDetails();
                    log.error(FAULT_MESSAGE, faultDetails.getCode(), faultDetails.getDescription());
                } else {
                    log.info(SUCCESS_MESSAGE);
                }
                acknowledgment.acknowledge();
            } catch (Exception e) {
                log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
            }
        } else {
            log.error(EMPTY_NOTIFICATION_ERROR_MESSAGE);
            acknowledgment.acknowledge();
        }
    }

    private SysMessage processPayload(final EventNotification eventNotification) {
        Gson gson = new Gson();
        String jsonString = gson.toJson(eventNotification.getData());
        log.info("Consumed message for platform events with payload : {} ", jsonString);
        SysMessage sysMessage = gson.fromJson(jsonString, SysMessage.class);
        return sysMessage;
    }
}
I have set the autocommit property for the Kafka client and the Spring container to false.
As you can see in the EventConsumer class, I have used Acknowledgment in the cases where service.sendPayload succeeds and there are no exceptions, and I want the container to move the offset and poll for the next records.
What I have observed is:
Scenario 1 - In the case where an exception is thrown and no new messages are published on Kafka, there is no retry to process the message and there seems to be no activity, even after the underlying issue is resolved. The issue I am referring to is downstream server unavailability. Is there a way to retry the processing n times and then give up? Note this is a retry of processing, or a re-poll from the last committed offset; it is not about the Kafka instance being unavailable.
If I restart the service (EC2 instance), the processing resumes from the offset of the last successful acknowledge.
Scenario 2 - In the case where an exception happened and a subsequent message is then pushed to Kafka, I see the new message is processed and the offset moves. That means I lost the message which was not acknowledged. So the question is: given that I have handled the Acknowledgment, how do I control reading from the last commit, not just the latest message, and process it? I am assuming there is internally a poll happening, and it did not take into account (or did not know about) the last message not being acknowledged. I don't think there are multiple threads reading from Kafka. I don't know how the @Input and @StreamListener annotations are controlled; I assume the thread is controlled by the property consumer.concurrency, which defaults to 1.
So I have done research and found a lot of links, but unfortunately none of them answers my specific questions.
I looked at https://github.com/spring-cloud/spring-cloud-stream/issues/575, which has a comment from Marius (https://stackoverflow.com/users/809122/marius-bogoevici):
Do note that Kafka does not provide individual message acking, which
means that acknowledgment translates into updating the latest consumed
offset to the offset of the acked message (per topic/partition). That
means that if you're acking messages from the same topic partition out
of order, a message can 'ack' all the messages before it.
I am not sure if ordering is the issue when there is only one thread.
Apologies for the long post, but I wanted to provide enough information. The main thing is that I am trying to avoid losing messages when consuming from Kafka, and I am trying to see if spring-cloud-stream-binder-kafka can do the job or if I have to look at alternatives.
Update 6th July 2018
I saw this post: https://github.com/spring-projects/spring-kafka/issues/431
Is this a better approach to my problem? I can try the latest version of spring-kafka:
@KafkaListener(id = "qux", topics = "annotated4", containerFactory = "kafkaManualAckListenerContainerFactory",
        containerGroup = "quxGroup")
public void listen4(@Payload String foo, Acknowledgment ack, Consumer<?, ?> consumer) {
    // ...
}
Will this help in controlling the offset to be set to where the last successfully processed record was? How can I do that from the listen method? consumer.seekToEnd(); and then how will the listen method reset to get that record?
Does putting the Consumer in the signature provide support to get a handle on the consumer? Or do I need to do anything more?
Should I use Acknowledgment or consumer.commitSync()?
What is the significance of containerFactory? Do I have to define it as a bean?
Do I need @EnableKafka and @Configuration for the above approach to work, bearing in mind the application is a Spring Boot application?
By adding Consumer to the listen method, do I avoid having to implement the ConsumerAware interface?
Last but not least: is it possible to provide some example of the above approach, if it is feasible?
Update 12 July 2018
Thanks Gary (https://stackoverflow.com/users/1240763/gary-russell) for providing the tip of using maxAttempts. I have used that approach, and I am able to achieve exactly-once delivery and preserve the order of the messages.
My updated cloud-config:
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
          consumer:
            maxAttempts: 2147483647
            backOffInitialInterval: 1000
            backOffMaxInterval: 300000
            backOffMultiplier: 2.0
The EventConsumer remains the same as my initial implementation, except for rethrowing the error so the container knows the processing has failed. If you just catch the exception, there is no way for the container to know the message processing failed. By doing acknowledgement.acknowledge you are just controlling the offset commit; in order for a retry to happen, you must throw the exception. Don't forget to set the Kafka client autocommit property and the Spring (container level) autoCommitOffset property to false. That's it.
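For illustration, the change described above amounts to something like this in the catch block (the wrapper exception type is just an example):
} catch (Exception e) {
    log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
    // Rethrow so the binder's retry (maxAttempts/backOff*) kicks in and the
    // offset for this record is not committed.
    throw new RuntimeException(e);
}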
As explained by Marius, Kafka only maintains an offset in the log. If you process the next message and update the offset, the failed message is lost.
You can send the failed message to a dead-letter topic (set enableDlq to true).
Recent versions of Spring Kafka (2.1.x) have special error handlers: ContainerStoppingErrorHandler, which stops the container when an exception occurs, and SeekToCurrentErrorHandler, which will cause the failed message to be redelivered.
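As a sketch of wiring up the second of those (spring-kafka 2.1.x style; the factory bean name matches the @KafkaListener snippet above, and the rest is illustrative):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaManualAckListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Re-seek the unprocessed records instead of committing past a failure,
    // so the failed record is redelivered on the next poll.
    factory.setErrorHandler(new SeekToCurrentErrorHandler());
    // Manual ack mode so the listener's Acknowledgment controls the commit.
    factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL);
    return factory;
}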

Spring Integration TCP. Get connection ID of the connected clients

I have a problem here with the dynamic TCP connection approach (Spring-IP Dynamic FTP Sample). When a message is received, I want to get the TCP connection details for the received message; this way I can keep track, in my application, of the sender of that message. But in the service activator I am not able to get this detail.
I also need the connection details when my TCP client is connected to the server, so that if the server wants to initiate the communication, it has the connection details.
For info, my application has more than one TCP client and server.
I got an answer in another post from Mr. Gary Russell.
Answer:
For normal request/reply processing, using an inbound gateway, the framework will take care of routing the service activator reply to the correct socket. It does this by using the connection id header.
If you need to provide arbitrary replies (e.g. more than one reply for a message), you have to use inbound and outbound channel adapters, and your application is responsible for setting up the connection id header.
There are two ways to access the required header in a POJO invoked by a service activator:
public void foo(byte[] payload, @Header(IpHeaders.CONNECTION_ID) String connectionId) {
    ...
}

public void foo(Message<byte[]> message) {
    String connectionId = (String) message.getHeaders().get(IpHeaders.CONNECTION_ID);
}
Then, when you send your replies, you need to set that header somehow.
EDIT
Below Is My Implementation
To get all the connected clients, simply get the server connection factory from the context and call getOpenConnectionIds(). It returns the list of connectionIds for the connected clients.
AbstractServerConnectionFactory connFactory = (AbstractServerConnectionFactory) appContext.getBean("server");
List<String> openConns = connFactory.getOpenConnectionIds();
As mentioned above in Gary's response, use this connectionId and set it in the connection header when sending the message to a client. Sample code follows.
MessageChannel serverOutAdapter = null;
try {
    serverOutAdapter = (MessageChannel) appContext.getBean("toObAdapter");
} catch (Exception ex) {
    LOGGER.error(ex.getMessage());
    throw ex;
}
if (null == serverOutAdapter) {
    throw new Exception("output channel not available");
}
AbstractServerConnectionFactory connFactory =
        (AbstractServerConnectionFactory) appContext.getBean("serverConnFactoryBeanId");
List<String> openConns = connFactory.getOpenConnectionIds();
if (null == openConns || openConns.size() == 0) {
    throw new Exception("No Client connection registered");
}
for (String connId : openConns) {
    MessageBuilder<String> mb = MessageBuilder.withPayload(message)
            .setHeader(IpHeaders.CONNECTION_ID, connId);
    serverOutAdapter.send(mb.build());
}
Note 1: If you want to send messages from the server, be cautious to configure the server and client connection factories in a way that they do not time out; i.e. put so-keep-alive="true" on the client connection factory.
Note 2: If the server has to communicate with the client, make sure that the client connects to the server as soon as the context is loaded, because the Spring-IP client connection factory connects only when the first message is sent out. To connect the client after context load, put client-mode="true" on the tcp-outbound-channel-adapter in the TCP client context.
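For reference, a sketch of what that client-side configuration might look like (bean ids, host, and port are illustrative):
<int-ip:tcp-connection-factory id="clientFactory"
        type="client" host="localhost" port="1234"
        so-keep-alive="true"/>

<int-ip:tcp-outbound-channel-adapter channel="toTcp"
        connection-factory="clientFactory"
        client-mode="true"/>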

Getting Unknown Host Exception While using RestFb

I want to connect to get data from Facebook using RestFB, but it is throwing an UnknownHostException.
My code:
package com.resrfb;

import com.restfb.DefaultFacebookClient;
import com.restfb.FacebookClient;
import com.restfb.types.User;

public class SimpleMeExample {
    public static void main(String[] args) {
        FacebookClient facebookClient = new DefaultFacebookClient("Key");
        User user = facebookClient.fetchObject("me", User.class);
        System.out.println("User= " + user);
        System.out.println("UserName= " + user.getUsername());
        System.out.println("Birthday= " + user.getBirthday());
    }
}
Also, I wanted to know how to get data from any user that logs in to my web app using RestFB; here I am getting my access token manually, so how do I get it for any user when they log in using the Facebook SDK?
Stack trace:
Exception in thread "main" com.restfb.exception.FacebookNetworkException: A network error occurred while trying to communicate with Facebook: Facebook request failed (HTTP status code null)
at com.restfb.DefaultFacebookClient.makeRequestAndProcessResponse(DefaultFacebookClient.java:1024)
at com.restfb.DefaultFacebookClient.makeRequest(DefaultFacebookClient.java:952)
at com.restfb.DefaultFacebookClient.makeRequest(DefaultFacebookClient.java:914)
at com.restfb.DefaultFacebookClient.fetchObject(DefaultFacebookClient.java:392)
at com.resrfb.SimpleMeExample.main(SimpleMeExample.java:14)
Caused by: java.net.UnknownHostException: graph.facebook.com
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:195)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:529)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:559)
at com.sun.net.ssl.internal.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:141)
at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:272)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:329)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:172)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:911)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:158)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:133)
at com.restfb.DefaultWebRequestor.execute(DefaultWebRequestor.java:374)
at com.restfb.DefaultWebRequestor.executeGet(DefaultWebRequestor.java:96)
at com.restfb.DefaultFacebookClient$3.makeRequest(DefaultFacebookClient.java:965)
at com.restfb.DefaultFacebookClient.makeRequestAndProcessResponse(DefaultFacebookClient.java:1022)
... 4 more
Your machine cannot connect to Facebook; this error is on a lower level and is not RestFB-dependent. You should check your DNS settings, hosts file, and proxy settings.
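As a quick way to confirm that, a minimal check outside RestFB (plain JDK, no assumptions about your setup):
import java.net.InetAddress;

public class DnsCheck {
    public static void main(String[] args) throws Exception {
        // If this throws java.net.UnknownHostException, the problem is
        // DNS/proxy configuration on this machine, not RestFB.
        System.out.println(InetAddress.getByName("graph.facebook.com"));
    }
}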
