I have a very simple Java 8 project (an FTP server) that uses the Apache FtpServer (Mina) library (v1.1.1). It is as simple as the following code:
ListenerFactory factory = new ListenerFactory();
factory.setPort(2221);
// SSL config
SslConfigurationFactory ssl = new SslConfigurationFactory();
ssl.setKeystoreFile(new File("keystore.jks"));
ssl.setKeystorePassword("password");
// set the SSL configuration for the listener
factory.setSslConfiguration(ssl.createSslConfiguration());
factory.setImplicitSsl(true);
FtpServerFactory serverFactory = new FtpServerFactory();
// replace the default listener
serverFactory.addListener("default", factory.createListener());
//Configure user manager and set admin user
PropertiesUserManagerFactory userManagerFactory = new PropertiesUserManagerFactory();
userManagerFactory.setFile(new File("users.properties"));
UserManager userManager = userManagerFactory.createUserManager();
if (!userManager.doesExist("admin")) {
BaseUser user = new BaseUser();
user.setName("admin");
user.setPassword("password");
user.setEnabled(true);
user.setHomeDirectory(USER_HOME_DIR);
user.setAuthorities(Collections.<Authority>singletonList(new WritePermission()));
userManager.save(user);
}
serverFactory.setUserManager(userManager);
// start the server
FtpServer server = serverFactory.createServer();
server.start();
Required Maven dependency:
<dependency>
<groupId>org.apache.ftpserver</groupId>
<artifactId>ftpserver-core</artifactId>
<version>1.1.1</version>
</dependency>
To create a self-signed keystore:
keytool -genkey -keyalg RSA -alias self-signed -keystore keystore.jks -validity 360 -keysize 2048
I followed the official guide to write this code: https://mina.apache.org/ftpserver-project/embedding_ftpserver.html
If I compile and run this code with Java 8, my FTPS server works perfectly fine: I can reach it at localhost:2221 with username "admin" and password "password". From my FTP client (I use FileZilla), I can see that the TLS connection is established successfully.
If I compile and run the same code with Java 11+ (I tried with 11 and 15), I see the following message in my FTP client, and the directory listing fails:
Status: Connecting to 127.0.0.1:2223...
Status: Connection established, initializing TLS...
Status: Verifying certificate...
Status: TLS connection established, waiting for welcome message...
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is current directory.
Command: TYPE I
Response: 200 Command TYPE okay.
Command: PASV
Response: 227 Entering Passive Mode (127,0,0,1,225,229)
Command: MLSD
Response: 150 File status okay; about to open data connection.
Error: Received TLS alert from the server: User canceled (90)
Error: Could not read from transfer socket: ECONNABORTED - Connection aborted
Response: 226 Closing data connection.
Error: Failed to retrieve directory listing
And this is the full application log (with VM parameter ):
2021-03-30 22:59:09.304 INFO 10557 --- [ main] com.example.ftp.demo.DemoApplication : Starting DemoApplication using Java 11.0.7 on Kara's-MBP with PID 10557 (...)
2021-03-30 22:59:09.306 INFO 10557 --- [ main] com.example.ftp.demo.DemoApplication : No active profile set, falling back to default profiles: default
2021-03-30 22:59:09.601 INFO 10557 --- [ main] com.example.ftp.demo.DemoApplication : Started DemoApplication in 0.487 seconds (JVM running for 1.046)
javax.net.ssl|DEBUG|01|main|2021-03-30 22:59:09.886 CEST|SSLCipher.java:438|jdk.tls.keyLimits: entry = AES/GCM/NoPadding KeyUpdate 2^37. AES/GCM/NOPADDING:KEYUPDATE = 137438953472
2021-03-30 22:59:09.966 INFO 10557 --- [ main] o.a.ftpserver.impl.DefaultFtpServer : FTP server started
2021-03-30 22:59:24.393 INFO 10557 --- [ NioProcessor-3] o.a.f.listener.nio.FtpLoggingFilter : CREATED
2021-03-30 22:59:24.395 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : OPENED
javax.net.ssl|DEBUG|1B|NioProcessor-3|2021-03-30 22:59:24.443 CEST|SSLCipher.java:1840|KeyLimit read side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|DEBUG|1B|NioProcessor-3|2021-03-30 22:59:24.444 CEST|SSLCipher.java:1994|KeyLimit write side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|DEBUG|1B|NioProcessor-3|2021-03-30 22:59:24.472 CEST|SSLCipher.java:1994|KeyLimit write side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|DEBUG|1B|NioProcessor-3|2021-03-30 22:59:24.490 CEST|SSLCipher.java:1840|KeyLimit read side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
2021-03-30 22:59:24.493 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : SENT: 220 Service ready for new user.
2021-03-30 22:59:24.501 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: USER admin
2021-03-30 22:59:24.503 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : SENT: 331 User name okay, need password for admin.
2021-03-30 22:59:24.503 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: PASS *****
2021-03-30 22:59:24.505 INFO 10557 --- [pool-3-thread-1] org.apache.ftpserver.command.impl.PASS : Login success - admin
2021-03-30 22:59:24.505 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : SENT: 230 User logged in, proceed.
2021-03-30 22:59:24.505 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: OPTS UTF8 ON
2021-03-30 22:59:24.506 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : SENT: 200 Command OPTS okay.
2021-03-30 22:59:24.506 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: PBSZ 0
2021-03-30 22:59:24.506 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : SENT: 200 Command PBSZ okay.
2021-03-30 22:59:24.507 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: PROT P
2021-03-30 22:59:24.508 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : SENT: 200 Command PROT okay.
2021-03-30 22:59:24.508 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: OPTS MLST size;modify;type;
2021-03-30 22:59:24.509 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : SENT: 200 Command OPTS okay.
2021-03-30 22:59:24.509 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: CWD /
2021-03-30 22:59:24.511 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : SENT: 250 Directory changed to /
2021-03-30 22:59:24.511 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: TYPE I
2021-03-30 22:59:24.512 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : SENT: 200 Command TYPE okay.
2021-03-30 22:59:24.512 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: PASV
2021-03-30 22:59:24.513 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : SENT: 227 Entering Passive Mode (127,0,0,1,226,235)
2021-03-30 22:59:24.513 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: MLSD
javax.net.ssl|DEBUG|1D|pool-3-thread-2|2021-03-30 22:59:24.526 CEST|SSLCipher.java:1840|KeyLimit read side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|DEBUG|1D|pool-3-thread-2|2021-03-30 22:59:24.527 CEST|SSLCipher.java:1994|KeyLimit write side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|DEBUG|1D|pool-3-thread-2|2021-03-30 22:59:24.528 CEST|SSLCipher.java:1994|KeyLimit write side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|DEBUG|1D|pool-3-thread-2|2021-03-30 22:59:24.529 CEST|SSLCipher.java:1840|KeyLimit read side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|ALL|1D|pool-3-thread-2|2021-03-30 22:59:24.533 CEST|SSLSocketImpl.java:994|Closing output stream
javax.net.ssl|DEBUG|1D|pool-3-thread-2|2021-03-30 22:59:24.533 CEST|SSLSocketImpl.java:466|duplex close of SSLSocket
javax.net.ssl|DEBUG|1D|pool-3-thread-2|2021-03-30 22:59:24.534 CEST|SSLSocketImpl.java:1372|close the SSL connection (passive)
2021-03-30 22:59:24.535 WARN 10557 --- [pool-3-thread-2] org.apache.ftpserver.impl.PassivePorts : Releasing unreserved passive port: 58091
2021-03-30 22:59:24.535 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : SENT: 150 File status okay; about to open data connection.
2021-03-30 22:59:24.535 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : SENT: 226 Closing data connection.
Additionally, if I remove SSL support from the code, my FTP server works perfectly fine even with Java 11+.
Has anybody experienced similar issues with Apache FTPS and Java 11+? If so, how did you solve it?
I can reproduce the problem only when using FileZilla. When I use lftp, for example, I can connect to the server successfully (after trusting the self-signed certificate).
FileZilla seems to have a problem with the JDK's implementation of TLSv1.3. There is a closed (rejected) ticket about this in FileZilla's bug tracker [1].
I can also reproduce the problem with JDK 8: TLSv1.3 has been available and enabled in JDK 8 since 8u261-b12 [2].
As a workaround, you can disable TLSv1.3 via the jdk.tls.disabledAlgorithms security property [3], which forces the JVM to negotiate a different protocol version during the handshake (hopefully TLSv1.2). As this is a security setting, it's best to discuss it with your security team if your company has one.
The security property can be set or updated in the JDK's configuration file java.security. Its path depends on the JDK and OS you're using; usually it is under $JAVA_HOME/jre/lib/security or $JAVA_HOME/lib/security.
If you can't find it, you can print its path by launching the JVM with -Djava.security.debug=all. You should see the path in the startup logs (there may be several files). Look for lines similar to the following:
properties: reading security properties file: /usr/lib/jvm/java-11-openjdk-11.0.11.0.9-4.fc34.x86_64/conf/security/java.security
...
properties: reading system security properties file /etc/crypto-policies/back-ends/java.config
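In java.security the property is a comma-separated list, so you keep your JDK's existing entries and append TLSv1.3. A sketch of the edited line — the entries other than TLSv1.3 are illustrative only, since the default list varies by JDK version:

```properties
# Keep your JDK's existing values; only TLSv1.3 is the addition here
jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, TLSv1.3
```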
You can also update jdk.tls.disabledAlgorithms programmatically by adding the following two lines before ssl.createSslConfiguration():
String disabledAlgorithms = Security.getProperty("jdk.tls.disabledAlgorithms") + ", TLSv1.3";
Security.setProperty("jdk.tls.disabledAlgorithms", disabledAlgorithms);
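A slightly more defensive variant of those two lines, as a standalone sketch: it avoids a NullPointerException when the property is unset and avoids appending a duplicate entry. The class name is just for illustration.

```java
import java.security.Security;

public class DisableTls13 {
    public static void main(String[] args) {
        String disabled = Security.getProperty("jdk.tls.disabledAlgorithms");
        if (disabled == null || disabled.isEmpty()) {
            disabled = "TLSv1.3";
        } else if (!disabled.contains("TLSv1.3")) {
            disabled = disabled + ", TLSv1.3";
        }
        // Must run before any SSLContext/SSLEngine is initialized
        Security.setProperty("jdk.tls.disabledAlgorithms", disabled);
        System.out.println(Security.getProperty("jdk.tls.disabledAlgorithms").contains("TLSv1.3"));
    }
}
```

Note that the property is consulted when TLS contexts are created, so this must run before the listener is built.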
Here is the complete program with the added two lines:
import org.apache.ftpserver.FtpServer;
import org.apache.ftpserver.FtpServerFactory;
import org.apache.ftpserver.ftplet.Authority;
import org.apache.ftpserver.ftplet.FtpException;
import org.apache.ftpserver.ftplet.UserManager;
import org.apache.ftpserver.listener.ListenerFactory;
import org.apache.ftpserver.ssl.SslConfigurationFactory;
import org.apache.ftpserver.usermanager.PropertiesUserManagerFactory;
import org.apache.ftpserver.usermanager.impl.BaseUser;
import org.apache.ftpserver.usermanager.impl.WritePermission;
import java.io.File;
import java.security.Security;
import java.util.Collections;
public class Main {
public static void main(String[] args) throws FtpException {
String disabledAlgorithms = Security.getProperty("jdk.tls.disabledAlgorithms") + ", TLSv1.3";
Security.setProperty("jdk.tls.disabledAlgorithms", disabledAlgorithms);
ListenerFactory factory = new ListenerFactory();
factory.setPort(2221);
// SSL config
SslConfigurationFactory ssl = new SslConfigurationFactory();
ssl.setKeystoreFile(new File("keystore.jks"));
ssl.setKeystorePassword("password");
// set the SSL configuration for the listener
factory.setSslConfiguration(ssl.createSslConfiguration());
factory.setImplicitSsl(true);
FtpServerFactory serverFactory = new FtpServerFactory();
// replace the default listener
serverFactory.addListener("default", factory.createListener());
//Configure user manager and set admin user
PropertiesUserManagerFactory userManagerFactory = new PropertiesUserManagerFactory();
userManagerFactory.setFile(new File("users.properties"));
UserManager userManager = userManagerFactory.createUserManager();
if (!userManager.doesExist("admin")) {
BaseUser user = new BaseUser();
user.setName("admin");
user.setPassword("password");
user.setEnabled(true);
user.setHomeDirectory("/tmp/admin");
user.setAuthorities(Collections.<Authority>singletonList(new WritePermission()));
userManager.save(user);
}
serverFactory.setUserManager(userManager);
// start the server
FtpServer server = serverFactory.createServer();
server.start();
}
}
[1] : https://trac.filezilla-project.org/ticket/12099
[2] : https://www.oracle.com/java/technologies/javase/8u261-relnotes.html
[3] : https://docs.oracle.com/en/java/javase/11/security/java-secure-socket-extension-jsse-reference-guide.html#GUID-0A438179-32A7-4900-A81C-29E3073E1E90
Thanks for the detailed information from @Mohamed.
I hit this issue recently and would like to share my test results. I can reproduce the issue with JDK 16.0.1_64 and FileZilla Pro 3.57.1; JDK 16.0.1_64 with WinSCP 5.15.5 works fine; and JDK 17.0.1_64 with FileZilla Pro 3.57.1 works fine.
This means upgrading to JDK 17.0.1_64 can be a solution.
I am running a Spring Kafka consumer that should poll the given topic every 10 seconds and fetch all records, up to the maximum I specified. The topic contains base64 strings of images, usually 700x400 in dimensions. Below is my config:
@Bean
public ConsumerFactory<String, String> consumerConfig() {
Map<String, Object> config = new HashMap<>();
config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
config.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, "120000");
config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 2000);
config.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000);
return new DefaultKafkaConsumerFactory<>(config);
}
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> listener = new ConcurrentKafkaListenerContainerFactory<>();
listener.setBatchListener(true);
listener.getContainerProperties().setIdleBetweenPolls(10000);
listener.setConsumerFactory(consumerConfig());
listener.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
return listener;
}
Below is how I have my listener:
@KafkaListener(id = "feedconsumer", topicPattern = ".*_hello")
public void messageListener(List<ConsumerRecord<String, String>> records, Acknowledgment acknowledgment) {
log.info(String.valueOf(records.size()));
acknowledgment.acknowledge();
}
In my logs I can see only this:
2021-03-29 17:48:12.793 INFO 25102 --- [dconsumer-0-C-1] o.s.k.l.KafkaMessageListenerContainer : feedconsumer: partitions assigned: [test_hello-0]
2021-03-29 17:48:13.338 DEBUG 25102 --- [dconsumer-0-C-1] essageListenerContainer$ListenerConsumer : Received: 1 records
2021-03-29 17:48:13.341 DEBUG 25102 --- [dconsumer-0-C-1] l.a.BatchMessagingMessageListenerAdapter : Processing [GenericMessage [payload=org.springframework.kafka.support.KafkaNull#4f27e57e, headers={id=a9dea384-5f4a-5a59-22ad-45be4ac0c819, timestamp=1617020279053}]]
2021-03-29 17:48:13.342 INFO 25102 --- [dconsumer-0-C-1] c.r.i.t.m.s.s.i.KafkaConsumerServiceImpl : 1
2021-03-29 17:48:13.344 DEBUG 25102 --- [dconsumer-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {test_hello-0=OffsetAndMetadata{offset=92, leaderEpoch=null, metadata=''}}
2021-03-29 17:48:23.359 DEBUG 25102 --- [dconsumer-0-C-1] essageListenerContainer$ListenerConsumer : Received: 1 records
As you can see, I am getting only 1 record every 10 seconds even though the batch listener is enabled and the max record count is 2000. What am I missing?
EDIT: I tried the following config as well:
config.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 10000000);
config.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 50000000);
config.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 10000);
More logs:
2021-03-30 13:15:10.835 DEBUG 34356 --- [dconsumer-0-C-1] l.a.BatchMessagingMessageListenerAdapter : Processing [GenericMessage [payload=org.springframework.kafka.support.KafkaNull#b4ddc5, headers={id=72ae298d-1a89-d632-342a-282569e5c400, timestamp=1617090254725}]]
2021-03-30 13:15:10.836 INFO 34356 --- [dconsumer-0-C-1] c.r.i.t.m.s.s.i.KafkaConsumerServiceImpl : 1
2021-03-30 13:15:10.836 DEBUG 34356 --- [dconsumer-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {test_hello-0=OffsetAndMetadata{offset=46, leaderEpoch=null, metadata=''}}
2021-03-30 13:15:10.836 DEBUG 34356 --- [dconsumer-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=consumer-feedconsumer-1, correlationId=59) and timeout 120000 to node 2147483646: {group_id=feedconsumer,generation_id=7,member_id=consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee,group_instance_id=null,topics=[{name=test_hello,partitions=[{partition_index=0,committed_offset=46,committed_leader_epoch=-1,committed_metadata=,_tagged_fields={}}],_tagged_fields={}}],_tagged_fields={}}
2021-03-30 13:15:10.847 DEBUG 34356 --- [dconsumer-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=consumer-feedconsumer-1, correlationId=59): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='test_hello', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])])
2021-03-30 13:15:10.848 DEBUG 34356 --- [dconsumer-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Committed offset 46 for partition test_hello-0
2021-03-30 13:15:11.015 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=11, clientId=consumer-feedconsumer-1, correlationId=58): org.apache.kafka.common.requests.FetchResponse#66229066
2021-03-30 13:15:11.015 DEBUG 34356 --- [ng-feedconsumer] o.a.kafka.clients.FetchSessionHandler : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Node 0 sent an incremental fetch response with throttleTimeMs = 1 for session 1615838501 with 1 response partition(s)
2021-03-30 13:15:11.016 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Fetch READ_UNCOMMITTED at offset 46 for partition test_hello-0 returned fetch data (error=NONE, highWaterMark=4513, lastStableOffset = 4513, logStartOffset = 0, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=1048576)
2021-03-30 13:15:12.263 DEBUG 34356 --- [alina-utility-2] org.apache.catalina.session.ManagerBase : Start expire sessions StandardManager at 1617090312260 sessioncount 0
2021-03-30 13:15:12.264 DEBUG 34356 --- [alina-utility-2] org.apache.catalina.session.ManagerBase : End expire sessions StandardManager processingTime 4 expired sessions: 0
2021-03-30 13:15:12.751 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending Heartbeat request with generation 7 and member id consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee to coordinator 192.168.1.3:9092 (id: 2147483646 rack: null)
2021-03-30 13:15:12.752 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=60) and timeout 120000 to node 2147483646: {group_id=feedconsumer,generation_id=7,member_id=consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee,group_instance_id=null,_tagged_fields={}}
2021-03-30 13:15:12.858 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=60): org.apache.kafka.common.requests.HeartbeatResponse#2e91937c
2021-03-30 13:15:12.858 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received successful Heartbeat response
2021-03-30 13:15:15.831 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending Heartbeat request with generation 7 and member id consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee to coordinator 192.168.1.3:9092 (id: 2147483646 rack: null)
2021-03-30 13:15:15.831 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=61) and timeout 120000 to node 2147483646: {group_id=feedconsumer,generation_id=7,member_id=consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee,group_instance_id=null,_tagged_fields={}}
2021-03-30 13:15:15.937 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=61): org.apache.kafka.common.requests.HeartbeatResponse#124bda17
2021-03-30 13:15:15.937 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received successful Heartbeat response
2021-03-30 13:15:18.906 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending Heartbeat request with generation 7 and member id consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee to coordinator 192.168.1.3:9092 (id: 2147483646 rack: null)
2021-03-30 13:15:18.907 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=62) and timeout 120000 to node 2147483646: {group_id=feedconsumer,generation_id=7,member_id=consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee,group_instance_id=null,_tagged_fields={}}
2021-03-30 13:15:19.012 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=62): org.apache.kafka.common.requests.HeartbeatResponse#50bb3548
2021-03-30 13:15:19.012 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received successful Heartbeat response
2021-03-30 13:15:20.857 DEBUG 34356 --- [dconsumer-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Added READ_UNCOMMITTED fetch request for partition test_hello-0 at position FetchPosition{offset=47, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[192.168.1.3:9092 (id: 1 rack: null)], epoch=0}} to node 192.168.1.3:9092 (id: 1 rack: null)
2021-03-30 13:15:20.857 DEBUG 34356 --- [dconsumer-0-C-1] o.a.kafka.clients.FetchSessionHandler : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Built incremental fetch (sessionId=1615838501, epoch=6) for node 1. Added 0 partition(s), altered 1 partition(s), removed 0 partition(s) out of 1 partition(s)
2021-03-30 13:15:20.857 DEBUG 34356 --- [dconsumer-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(test_hello-0), toForget=(), implied=()) to broker 192.168.1.3:9092 (id: 1 rack: null)
2021-03-30 13:15:20.857 DEBUG 34356 --- [dconsumer-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=11, clientId=consumer-feedconsumer-1, correlationId=63) and timeout 120000 to node 1: {replica_id=-1,max_wait_time=10000,min_bytes=10000000,max_bytes=50000000,isolation_level=0,session_id=1615838501,session_epoch=6,topics=[{topic=test_hello,partitions=[{partition=0,current_leader_epoch=0,fetch_offset=47,log_start_offset=-1,partition_max_bytes=1048576}]}],forgotten_topics_data=[],rack_id=}
2021-03-30 13:15:20.858 DEBUG 34356 --- [dconsumer-0-C-1] essageListenerContainer$ListenerConsumer : Received: 1 records
2021-03-30 13:15:20.858 DEBUG 34356 --- [dconsumer-0-C-1] l.a.BatchMessagingMessageListenerAdapter : Processing [GenericMessage [payload=org.springframework.kafka.support.KafkaNull#b4ddc5, headers={id=72ae298d-1a89-d632-342a-282569e5c400, timestamp=1617090254725}]]
2021-03-30 13:15:20.859 INFO 34356 --- [dconsumer-0-C-1] c.r.i.t.m.s.s.i.KafkaConsumerServiceImpl : 1
2021-03-30 13:15:20.859 DEBUG 34356 --- [dconsumer-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {test_hello-0=OffsetAndMetadata{offset=47, leaderEpoch=null, metadata=''}}
2021-03-30 13:15:20.860 DEBUG 34356 --- [dconsumer-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=consumer-feedconsumer-1, correlationId=64) and timeout 120000 to node 2147483646: {group_id=feedconsumer,generation_id=7,member_id=consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee,group_instance_id=null,topics=[{name=test_hello,partitions=[{partition_index=0,committed_offset=47,committed_leader_epoch=-1,committed_metadata=,_tagged_fields={}}],_tagged_fields={}}],_tagged_fields={}}
2021-03-30 13:15:20.866 DEBUG 34356 --- [dconsumer-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=consumer-feedconsumer-1, correlationId=64): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='test_hello', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])])
2021-03-30 13:15:20.867 DEBUG 34356 --- [dconsumer-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Committed offset 47 for partition test_hello-0
2021-03-30 13:15:21.164 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=11, clientId=consumer-feedconsumer-1, correlationId=63): org.apache.kafka.common.requests.FetchResponse#22e83e99
2021-03-30 13:15:21.165 DEBUG 34356 --- [ng-feedconsumer] o.a.kafka.clients.FetchSessionHandler : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Node 0 sent an incremental fetch response with throttleTimeMs = 1 for session 1615838501 with 1 response partition(s)
2021-03-30 13:15:21.165 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Fetch READ_UNCOMMITTED at offset 47 for partition test_hello-0 returned fetch data (error=NONE, highWaterMark=4563, lastStableOffset = 4563, logStartOffset = 0, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=1048576)
2021-03-30 13:15:21.991 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending Heartbeat request with generation 7 and member id consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee to coordinator 192.168.1.3:9092 (id: 2147483646 rack: null)
2021-03-30 13:15:21.992 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=65) and timeout 120000 to node 2147483646: {group_id=feedconsumer,generation_id=7,member_id=consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee,group_instance_id=null,_tagged_fields={}}
2021-03-30 13:15:22.093 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=65): org.apache.kafka.common.requests.HeartbeatResponse#4053cb
2021-03-30 13:15:22.093 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received successful Heartbeat response
Try the below settings:
config.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 10000000);
config.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 250000000);
config.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 10000);
config.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 50000000);
Your messages are too big to be fetched with the default settings, so add the max.partition.fetch.bytes property as well.
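The per-partition cap is the key here: each fetch response is limited per partition by max.partition.fetch.bytes (default 1048576 bytes — which matches the partition_max_bytes=1048576 and recordsSizeInBytes=1048576 visible in the logs above), so large base64 image records leave room for only one record per fetch. A self-contained sketch of the combined fetch-related settings, using the plain string keys that correspond to the ConsumerConfig constants used above:

```java
import java.util.HashMap;
import java.util.Map;

public class FetchConfigSketch {
    public static void main(String[] args) {
        Map<String, Object> config = new HashMap<>();
        // Wait up to 10 s for at least fetch.min.bytes of data to accumulate...
        config.put("fetch.min.bytes", 10_000_000);
        config.put("fetch.max.wait.ms", 10_000);
        // ...and raise both the overall and the per-partition response caps
        config.put("fetch.max.bytes", 250_000_000);
        config.put("max.partition.fetch.bytes", 50_000_000); // default is 1_048_576
        config.put("max.poll.records", 2000);
        System.out.println(config.get("max.partition.fetch.bytes"));
    }
}
```

The exact byte values are the ones suggested above, not tuned recommendations; size them to a few multiples of your largest expected record.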
I am using Camel version 3.1.0 and its http component to call a REST endpoint that requires headers such as date, host, digest, authorization, etc. for security. However, when I set all these headers in the route before sending the request to the http component, the "date" HTTP header is always missing. I tested with both WireMock and the real test environment. Also weird is that the test passes if I run it together with several other tests, but not individually.
Here is the code:
.setHeader("flowId", simple("\${body.flowId}"))
.process { exchange ->
    with(pubsubConfig) { exchange.setPubsubTimeHeaders(minIntervalValueMills, maxIntervalValueMills, timeoutMillis) }
    val settlementAdviceRequestDto = exchange.message.body as SettlementAdviceRequestDto
    exchange.setProperty("settlementAdviceRequestDto", settlementAdviceRequestDto)
    // Setting security headers
    request.entity = StringEntity(jacksonObjectMapper().writeValueAsString(settlementAdviceRequestDto))
    signer.signRequest(request, exchange.message)
}
.marshal(settlementAdviceRequestJacksonDataFormat)
.logInfoWithBreadCrumbId("Received SettlementAdviceRequestDto from PubSub, flowId: \${header.flowId}, sending to N&TS with \${headers}")
.to(with(ntsConfigProperties) {
    "$protocol://$host:$port/$path?" +
        "httpClient.maxConnTotal=$maxConnectionsTotal" +
        "&httpClient.maxConnPerRoute=$maxConnectionsPerRoute" +
        "&httpClient.connectionRequestTimeout=$connectionRequestTimeout" +
        "&httpClient.connectTimeout=$connectionTimeout" +
        "&httpClient.socketTimeout=$socketTimeout" +
        "&httpClient.redirectsEnabled=$followRedirects"
})
.logInfoWithBreadCrumbId("SettlementAdviceRequestDto sent to N&TS, flowId: \${header.flowId} \${headers}")
Here is the logs from the test with checking the date header:
2020-05-13 11:11:22.267 INFO 5514 --- [tlement.advice]] o.c.f.f.s.route.SettlementAdviceRoute : [ID-C02VC27UHTD6-liping-1589361082160-0-1] Received SettlementAdviceRequestDto from PubSub, flowId: 550e8400-e29b-41d4-a716-446655440000, sending to N&TS with {CamelGooglePubsub.MsgAckId=TgQhIT4wPkVTRFAGFixdRkhRNxkIaFEOT14jPzUgKEUQC1MTUVx2B0YQajNcdQdRDRh1f2Ehbg4UBQEXWX5VWwk8aH58dAZUDRt2eGJ1aF8bCANCW1a0tP24kajpRx1tNZCxo6RASsXWuO52Zhg9XBJLLD5-KTBFQV5AEkwiBURJUytDCypYEQ, CamelGooglePubsub.MessageId=1150245302589178, CamelGooglePubsub.PublishTime=2020-05-13T09:11:20.194Z, breadcrumbId=ID-C02VC27UHTD6-liping-1589361082160-0-1, flowId=550e8400-e29b-41d4-a716-446655440000, CamelGooglePubsub.AckDeadline=2, CamelPubsubHeader.Lifetime=2047, CamelPubsubHeader.LifeTimeout=5000, date=Wed, 13 May 2020 09:11:22 GMT, host=localhost, content-type=application/json, content-length=417, digest=SHA-256=Ocdp4q+ZLUshftQIsycfkidD2SEEnvU29TpX/AFkMt4=, Authorization= keyId="id",algorithm="hmac-sha256",headers="date (request-target) host content-length content-type digest",signature="j/GUbOD0UijQJEjuwCrehQ+seoJ9yeHObYGXbuZgJJY="}
2020-05-13 11:11:22.328 INFO 5514 --- [tp293974199-118] / : RequestHandlerClass from context returned com.github.tomakehurst.wiremock.http.StubRequestHandler. Normalized mapped under returned 'null'
2020-05-13 11:11:22.373 ERROR 5514 --- [tp293974199-118] WireMock :
Request was not matched
=======================
date [contains] : GMT | <<<<< Header is not present
host [contains] : localhost | host: localhost:14685
content-length [contains] : 417 | content-length: 417
content-type [contains] : application/json | content-type: application/json
digest [contains] : | digest:
...
Here is logs from the test without checking the date header:
2020-05-13 11:20:09.092 INFO 5628 --- [tlement.advice]] o.c.f.f.s.route.SettlementAdviceRoute : [ID-C02VC27UHTD6-liping-1589361608990-0-1] Received SettlementAdviceRequestDto from PubSub, flowId: 550e8400-e29b-41d4-a716-446655440000, sending to N&TS with {CamelGooglePubsub.MsgAckId=BCEhPjA-RVNEUAYWLF1GSFE3GQhoUQ5PXiM_NSAoRRIGCBQFfH1yR1B1XjN1B1ENGXN6Y3U-XxYGVEUCdF9RGx9ZXH5VBlAIGXB-ZnZvWxoFA0BTeXfQ16DUpajANUsxIYq6v7BfeuyjqYNhZhs9XxJLLD5-KStFQV5AEkwiHkRJUytDCypYEU4, CamelGooglePubsub.MessageId=1150237351941507, CamelGooglePubsub.PublishTime=2020-05-13T09:20:07.119Z, breadcrumbId=ID-C02VC27UHTD6-liping-1589361608990-0-1, flowId=550e8400-e29b-41d4-a716-446655440000, CamelGooglePubsub.AckDeadline=1, CamelPubsubHeader.Lifetime=1949, CamelPubsubHeader.LifeTimeout=5000, date=Wed, 13 May 2020 09:20:09 GMT, host=localhost, content-type=application/json, content-length=417, digest=SHA-256=Ocdp4q+ZLUshftQIsycfkidD2SEEnvU29TpX/AFkMt4=, Authorization= keyId="id",algorithm="hmac-sha256",headers="date (request-target) host content-length content-type digest",signature="POpggmhIDI8tNBsb7239ksYEGfwY+IB/Rn93PCrkGsY="}
2020-05-13 11:20:09.145 INFO 5628 --- [tp294593670-119] / : RequestHandlerClass from context returned com.github.tomakehurst.wiremock.http.StubRequestHandler. Normalized mapped under returned 'null'
2020-05-13 11:20:09.434 INFO 5628 --- [tlement.advice]] o.c.f.f.s.route.SettlementAdviceRoute : [ID-C02VC27UHTD6-liping-1589361608990-0-1] SettlementAdviceRequestDto sent to N&TS, flowId: 550e8400-e29b-41d4-a716-446655440000 {CamelHttpResponseCode=200, CamelHttpResponseText=OK, Matched-Stub-Id=4efce26c-dbfa-457e-864d-98f29ef38f97, Vary=Accept-Encoding, User-Agent, Transfer-Encoding=chunked, Server=Jetty(9.2.z-SNAPSHOT), CamelGooglePubsub.MsgAckId=BCEhPjA-RVNEUAYWLF1GSFE3GQhoUQ5PXiM_NSAoRRIGCBQFfH1yR1B1XjN1B1ENGXN6Y3U-XxYGVEUCdF9RGx9ZXH5VBlAIGXB-ZnZvWxoFA0BTeXfQ16DUpajANUsxIYq6v7BfeuyjqYNhZhs9XxJLLD5-KStFQV5AEkwiHkRJUytDCypYEU4, CamelGooglePubsub.MessageId=1150237351941507, CamelGooglePubsub.PublishTime=2020-05-13T09:20:07.119Z, breadcrumbId=ID-C02VC27UHTD6-liping-1589361608990-0-1, flowId=550e8400-e29b-41d4-a716-446655440000, CamelGooglePubsub.AckDeadline=1, CamelPubsubHeader.Lifetime=1949, CamelPubsubHeader.LifeTimeout=5000, digest=SHA-256=Ocdp4q+ZLUshftQIsycfkidD2SEEnvU29TpX/AFkMt4=, Authorization= keyId="id",algorithm="hmac-sha256",headers="date (request-target) host content-length content-type digest",signature="POpggmhIDI8tNBsb7239ksYEGfwY+IB/Rn93PCrkGsY="}
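One plausible explanation, offered as an assumption based on how camel-http generally behaves rather than something these logs alone prove: camel-http runs outbound headers through a HeaderFilterStrategy, and typical HTTP protocol headers are filtered out so the underlying HttpClient can manage them itself. HttpClient then regenerates host and content-length (note that host arrives as localhost:14685 although the route set localhost), but it does not add a Date header on its own, which would leave date missing, exactly as in the failing match. A toy model of that kind of filtering:

```java
import java.util.Locale;
import java.util.Set;

public class ProtocolHeaderFilter {

    // Illustrative list only: headers treated as HTTP protocol headers and
    // therefore not copied from the Camel message onto the outgoing request.
    // The real set lives in Camel's header filter strategy and varies by version.
    private static final Set<String> FILTERED = Set.of(
            "host", "content-length", "content-type", "date", "connection");

    // Returns true if a Camel message header would survive onto the HTTP request
    public static boolean isPropagated(String headerName) {
        return !FILTERED.contains(headerName.toLowerCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        System.out.println("date propagated? " + isPropagated("date"));
        System.out.println("digest propagated? " + isPropagated("digest"));
    }
}
```

If that is indeed the cause, things worth trying include the http endpoint's headerFilterStrategy option with a custom strategy that lets date through, or signing with a custom header name that the filter does not treat as a protocol header.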
We have a very simple Spring Boot service (@EnableConfigServer) running behind an nginx proxy.
The service basically works, but it is restarting all the time (the context is closed and started continuously).
See the log files here: http://pastebin.com/GErCF5x6
The setup is basically just one Java Class and the two configs (bootstrap.properties as well as application.properties).
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.config.server.EnableConfigServer;
import org.springframework.context.annotation.Configuration;

/**
 * Main application, which starts the Spring Boot context
 */
@Configuration
@EnableAutoConfiguration
@EnableConfigServer
public class Application {

    @SuppressWarnings("PMD.SignatureDeclareThrowsException")
    public static void main(String[] args) throws Exception {
        SpringApplication.run(Application.class, args);
    }
}
bootstrap.properties
spring.application.name = configserver
spring.cloud.config.enabled = false
encrypt.failOnError = false
encrypt.key = secret
application.properties
# HTTP Configuration
server.port = 8888
# Management Configuration
management.context-path=/manage
# SSH Based Git-Repository
spring.cloud.config.server.git.uri=git@bitbucket.org:xyz.git
spring.cloud.config.server.git.basedir = cache/config
security.user.name=ads
security.user.password={cipher}secret2
security.basic.realm=Config Server
Log-File
11:13:47.105 [qtp1131266554-101] INFO o.s.boot.SpringApplication - Started application in 0.176 seconds (JVM running for 245.66)
11:13:47.105 [qtp1131266554-101] INFO o.s.c.a.AnnotationConfigApplicationContext - Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@7709b120: startup date [Wed Apr 08 11:13:47 UTC 2015]; root of context hierarchy
11:13:47.690 [qtp1131266554-51] INFO o.s.b.a.audit.listener.AuditListener - AuditEvent [timestamp=Wed Apr 08 11:13:47 UTC 2015, principal=ads, type=AUTHENTICATION_SUCCESS, data={details=org.springframework.security.web.authentication.WebAuthenticationDetails@ffffe21a: RemoteIpAddress: 10.10.100.207; SessionId: null}]
11:13:48.324 [qtp1131266554-19] INFO o.s.boot.SpringApplication - Starting application on api01.prd.rbx.xxxx.com with PID 24204 (started by ads in /home/ads/config-server)
11:13:48.328 [qtp1131266554-19] INFO o.s.c.a.AnnotationConfigApplicationContext - Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@473cffd3: startup date [Wed Apr 08 11:13:48 UTC 2015]; root of context hierarchy
11:13:48.332 [qtp1131266554-19] INFO o.s.boot.SpringApplication - Started application in 0.092 seconds (JVM running for 246.887)
11:13:48.332 [qtp1131266554-19] INFO o.s.c.a.AnnotationConfigApplicationContext - Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@473cffd3: startup date [Wed Apr 08 11:13:48 UTC 2015]; root of context hierarchy
11:13:48.612 [qtp1131266554-55] INFO o.s.b.a.audit.listener.AuditListener - AuditEvent [timestamp=Wed Apr 08 11:13:48 UTC 2015, principal=ads, type=AUTHENTICATION_SUCCESS, data={details=org.springframework.security.web.authentication.WebAuthenticationDetails@ffffe21a: RemoteIpAddress: 10.10.100.207; SessionId: null}]
11:13:50.601 [qtp1131266554-77] INFO o.s.boot.SpringApplication - Starting application on api01.prd.rbx.xxxx.com with PID 24204 (started by ads in /home/ads/config-server)
11:13:50.604 [qtp1131266554-77] INFO o.s.c.a.AnnotationConfigApplicationContext - Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@44330486: startup date [Wed Apr 08 11:13:50 UTC 2015]; root of context hierarchy
11:13:50.607 [qtp1131266554-77] INFO o.s.boot.SpringApplication - Started application in 0.088 seconds (JVM running for 249.162)
11:13:50.607 [qtp1131266554-77] INFO o.s.c.a.AnnotationConfigApplicationContext - Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@44330486: startup date [Wed Apr 08 11:13:50 UTC 2015]; root of context hierarchy
11:13:51.831 [qtp1131266554-55] INFO o.s.boot.SpringApplication - Starting application on api01.prd.rbx.xxxx.com with PID 24204 (started by ads in /home/ads/config-server)
11:13:51.834 [qtp1131266554-55] INFO o.s.c.a.AnnotationConfigApplicationContext - Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@1843040d: startup date [Wed Apr 08 11:13:51 UTC 2015]; root of context hierarchy
11:13:51.840 [qtp1131266554-55] INFO o.s.boot.SpringApplication - Started application in 0.094 seconds (JVM running for 250.395)
Any idea how to solve the problem?
best,
fritz
That's normal. It's not the application that is starting and stopping, it's just the mini-contexts that are used to create the config resources for remote clients. Completely harmless.
I found the cause of the problem I am facing here. Basically, we have two remote services running on AWS behind a configured load balancer, which continuously checks the /health endpoint. Every time this endpoint is invoked, the config client's health indicator calls our ConfigServer.
I don't understand why there is a HealthIndicator for the config server. Is there a way to disable this HealthIndicator? Every request to the /health endpoint queries our config server again and again. Another disadvantage is that the /health request no longer responds as quickly as possible, which leads to timeouts in the load balancer (default 2s).
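For the client side of this, the Spring Cloud Config documentation describes a switch for the config-client health indicator. A minimal sketch, assuming the clients use bootstrap.properties (property support may vary by Spring Cloud version):

```properties
# In the *clients'* bootstrap.properties: disable the config-client health
# indicator so /health no longer contacts the config server on every check
health.config.enabled=false
```

With the indicator off, /health stops triggering a round trip to the config server, which should also remove the load-balancer timeout symptom.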
So, the config server isn't restarting. For each client request it creates a new Spring application context, loads the files it grabbed from git into that context, and then formats the values to send back to the client. Below are the logs from my config server for one client connection. The logs just look confusing, as if it were restarting.
2015-04-08 12:22:52.206 INFO 85076 --- [nio-8888-exec-1] o.s.b.a.audit.listener.AuditListener : AuditEvent [timestamp=Wed Apr 08 12:22:52 EDT 2015, principal=user, type=AUTHENTICATION_SUCCESS, data={details=org.springframework.security.web.authentication.WebAuthenticationDetails@957e: RemoteIpAddress: 127.0.0.1; SessionId: null}]
2015-04-08 12:22:52.944 INFO 85076 --- [nio-8888-exec-2] o.s.b.a.audit.listener.AuditListener : AuditEvent [timestamp=Wed Apr 08 12:22:52 EDT 2015, principal=user, type=AUTHENTICATION_SUCCESS, data={details=org.springframework.security.web.authentication.WebAuthenticationDetails@957e: RemoteIpAddress: 127.0.0.1; SessionId: null}]
2015-04-08 12:22:53.490 INFO 85076 --- [nio-8888-exec-1] o.s.boot.SpringApplication : Starting application on sgibb-mbp.local with PID 85076 (started by sgibb in /Users/sgibb/workspace/spring/spring-cloud-samples/configserver)
2015-04-08 12:22:53.494 INFO 85076 --- [nio-8888-exec-1] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@571e4b84: startup date [Wed Apr 08 12:22:53 EDT 2015]; root of context hierarchy
2015-04-08 12:22:53.497 INFO 85076 --- [nio-8888-exec-1] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2015-04-08 12:22:53.498 INFO 85076 --- [nio-8888-exec-1] o.s.boot.SpringApplication : Started application in 0.151 seconds (JVM running for 23.747)
2015-04-08 12:22:53.499 INFO 85076 --- [nio-8888-exec-1] o.s.c.c.s.NativeEnvironmentRepository : Adding property source: file:/Users/sgibb/workspace/spring/spring-cloud-samples/configserver/target/config/foo.properties
2015-04-08 12:22:53.500 INFO 85076 --- [nio-8888-exec-1] o.s.c.c.s.NativeEnvironmentRepository : Adding property source: file:/Users/sgibb/workspace/spring/spring-cloud-samples/configserver/target/config/application.yml
2015-04-08 12:22:53.500 INFO 85076 --- [nio-8888-exec-1] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@571e4b84: startup date [Wed Apr 08 12:22:53 EDT 2015]; root of context hierarchy
2015-04-08 12:22:54.090 INFO 85076 --- [nio-8888-exec-2] o.s.boot.SpringApplication : Starting application on sgibb-mbp.local with PID 85076 (started by sgibb in /Users/sgibb/workspace/spring/spring-cloud-samples/configserver)
2015-04-08 12:22:54.096 INFO 85076 --- [nio-8888-exec-2] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@416d044c: startup date [Wed Apr 08 12:22:54 EDT 2015]; root of context hierarchy
2015-04-08 12:22:54.098 INFO 85076 --- [nio-8888-exec-2] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2015-04-08 12:22:54.099 INFO 85076 --- [nio-8888-exec-2] o.s.boot.SpringApplication : Started application in 0.433 seconds (JVM running for 24.348)
2015-04-08 12:22:54.099 INFO 85076 --- [nio-8888-exec-2] o.s.c.c.s.NativeEnvironmentRepository : Adding property source: file:/Users/sgibb/workspace/spring/spring-cloud-samples/configserver/target/config/foo.properties
2015-04-08 12:22:54.099 INFO 85076 --- [nio-8888-exec-2] o.s.c.c.s.NativeEnvironmentRepository : Adding property source: file:/Users/sgibb/workspace/spring/spring-cloud-samples/configserver/target/config/application.yml
2015-04-08 12:22:54.099 INFO 85076 --- [nio-8888-exec-2] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@416d044c: startup date [Wed Apr 08 12:22:54 EDT 2015]; root of context hierarchy