Unauthorized error when hitting the userinfo endpoint - spring-security-oauth2

I have configured an OAuth2 client application using Okta and I am working through the authorization_code grant flow.
The application is able to get the auth code and the token, but a call to the userinfo endpoint returns a 401 error when I have specified the user-info-uri.
I have enabled debug logging for the org.springframework.security package but it does not give much detail. Where am I going wrong?
Update: I am getting this error only when the user-info-uri property is present in the configuration; if it is removed, the endpoint is accessible.
application.yml
server:
  port: 8555
spring:
  security:
    oauth2:
      client:
        registration:
          okta:
            client-id: masked
            client-secret: masked
        provider:
          okta:
            authorization-uri: https://domain/oauth2/default/v1/authorize
            token-uri: https://domain/oauth2/default/v1/token
            user-info-uri: https://domain/oauth2/v1/userinfo
            jwk-set-uri: https://domain/oauth2/default/v1/keys
debug: true
logging:
  level:
    org.springframework.security: debug
ApplicationSecurityConfiguration
@Configuration
public class ApplicationSecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                .anyRequest()
                .authenticated()
                .and()
                .oauth2Login();
    }
}
Update:
I ran the application in debug mode and was able to gather the logs below:
: Reading to [org.springframework.security.oauth2.core.endpoint.OAuth2AccessTokenResponse] as "application/json;charset=UTF-8"
2021-12-16 19:42:40.180 DEBUG 11880 --- [nio-8555-exec-3] o.s.web.client.RestTemplate : HTTP GET https://dev-7858070.okta.com/oauth2/default/v1/keys
2021-12-16 19:42:40.180 DEBUG 11880 --- [nio-8555-exec-3] o.s.web.client.RestTemplate : Accept=[text/plain, application/json, application/*+json, */*]
2021-12-16 19:42:40.757 DEBUG 11880 --- [nio-8555-exec-3] jdk.event.security : ValidationChain: 1341898239, 128597027, -1751274746
2021-12-16 19:42:41.032 DEBUG 11880 --- [nio-8555-exec-3] jdk.event.security : TLSHandshake: dev-7858070.okta.com:443, TLSv1.2, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, -1751274746
2021-12-16 19:42:41.033 DEBUG 11880 --- [nio-8555-exec-3] s.n.www.protocol.http.HttpURLConnection : sun.net.www.MessageHeader#6172186f5 pairs: {GET /oauth2/default/v1/keys HTTP/1.1: null}{Accept: application/json, application/jwk-set+json}{User-Agent: Java/11.0.7}{Host: dev-7858070.okta.com}{Connection: keep-alive}
2021-12-16 19:42:41.493 DEBUG 11880 --- [nio-8555-exec-3] s.n.www.protocol.http.HttpURLConnection : sun.net.www.MessageHeader#5920bb2c17 pairs: {null: HTTP/1.1 200 OK}{Date: Thu, 16 Dec 2021 14:12:41 GMT}{Content-Type: application/json}{Transfer-Encoding: chunked}{Connection: keep-alive}{Server: nginx}{Public-Key-Pins-Report-Only: pin-sha256="r5EfzZxQVvQpKo3AgYRaT7X2bDO/kj3ACwmxfdT2zt8="; pin-sha256="MaqlcUgk2mvY/RFSGeSwBRkI+rZ6/dxe/DuQfBT/vnQ="; pin-sha256="72G5IEvDEWn+EThf3qjR7/bQSWaS2ZSLqolhnO6iyJI="; pin-sha256="rrV6CLCCvqnk89gWibYT0JO6fNQ8cCit7GGoiVTjCOg="; max-age=60; report-uri="https://okta.report-uri.com/r/default/hpkp/reportOnly"}{x-xss-protection: 0}{p3p: CP="HONK"}{content-security-policy: default-src 'self' dev-7858070.okta.com *.oktacdn.com; connect-src 'self' dev-7858070.okta.com dev-7858070-admin.okta.com *.oktacdn.com *.mixpanel.com *.mapbox.com app.pendo.io data.pendo.io pendo-static-5634101834153984.storage.googleapis.com https://oinmanager.okta.com data:; script-src 'unsafe-inline' 'unsafe-eval' 'self' dev-7858070.okta.com *.oktacdn.com; style-src 'unsafe-inline' 'self' dev-7858070.okta.com *.oktacdn.com app.pendo.io cdn.pendo.io pendo-static-5634101834153984.storage.googleapis.com; frame-src 'self' dev-7858070.okta.com dev-7858070-admin.okta.com login.okta.com; img-src 'self' dev-7858070.okta.com *.oktacdn.com *.tiles.mapbox.com *.mapbox.com app.pendo.io data.pendo.io cdn.pendo.io pendo-static-5634101834153984.storage.googleapis.com data: blob:; font-src 'self' dev-7858070.okta.com data: *.oktacdn.com fonts.gstatic.com}{expect-ct: report-uri="https://oktaexpectct.report-uri.com/r/t/ct/reportOnly", max-age=0}{cache-control: max-age=5751840, must-revalidate}{expires: Mon, 21 Feb 2022 03:56:41 GMT}{vary: Origin}{x-content-type-options: nosniff}{Strict-Transport-Security: max-age=315360000; includeSubDomains}{X-Okta-Request-Id: YbtJWMz4hSJnMbK89S9YAAAABd8}
2021-12-16 19:42:41.493 DEBUG 11880 --- [nio-8555-exec-3] o.s.web.client.RestTemplate : Response 200 OK
2021-12-16 19:42:41.493 DEBUG 11880 --- [nio-8555-exec-3] o.s.web.client.RestTemplate : Reading to [java.lang.String] as "application/json"
2021-12-16 19:42:41.502 DEBUG 11880 --- [nio-8555-exec-3] o.s.web.client.RestTemplate : HTTP GET https://dev-7858070.okta.com/oauth2/v1/userinfo
2021-12-16 19:42:41.503 DEBUG 11880 --- [nio-8555-exec-3] o.s.web.client.RestTemplate : Accept=[application/json, application/*+json]
2021-12-16 19:42:41.503 DEBUG 11880 --- [nio-8555-exec-3] s.n.www.protocol.http.HttpURLConnection : sun.net.www.MessageHeader#3bdc7ab6 pairs: {GET /oauth2/v1/userinfo HTTP/1.1: null}{Accept: application/json}{Authorization: Bearer eyJraWQiOiJ3Wi0tT29HeTlURnFReVlfN1hPXzgzdnlmYlE3LWtuYUFIOUQ3MmN5S0F3IiwiYWxnIjoiUlMyNTYifQ.eyJ2ZXIiOjEsImp0aSI6IkFULlkzSldCaVMxUDYxeXR1ekZtUjUxMDlCRVM5MThKRWUwcTNkbFItSTlrWG8iLCJpc3MiOiJodHRwczovL2Rldi03ODU4MDcwLm9rdGEuY29tL29hdXRoMi9kZWZhdWx0IiwiYXVkIjoiYXBpOi8vZGVmYXVsdCIsImlhdCI6MTYzOTY2Mzk1OSwiZXhwIjoxNjM5NjkzOTU5LCJjaWQiOiIwb2EzYzk3dDFtaVBUa0pqVjVkNyIsInVpZCI6IjAwdTNteXk1c09sOVNEYnYzNWQ2Iiwic2NwIjpbIm9wZW5pZCIsInByb2ZpbGUiLCJlbWFpbCJdLCJzdWIiOiJwcmFkZWVwLmt1bWFyNDRAZ21haWwuY29tIiwiZ3JvdXBzIjpbIkV2ZXJ5b25lIiwic3VwZXJfYWRtaW5zIiwiYWRtaW5zIl19.PWdjnf4WCOpCCn84U-v3V8cdgVferDihMq5BYPcOlYR3yQbLHUdeHvXus22r_sre0mVJVbEQycF8z0fpkuAgOXLh-8KEEWj6WuEisvzW6dE9xwULODzZS5gE9ntolwcqix64DWX0BegFK1_WdZhRTTyM07RVdR2XFBq7POdiDb2Vkk9_dfc7--n3ax2eFFnsWaj3nXV95mRQD-xD_0MG-2k9JpzdpbS6M6KJ1egtu9fBCwD8U-bsFQbDe4LL58RGSeLvpAIqJochUhzS1cSl4_UNUwgS9l7V-MHDzt_53_BAyGRM2WiqnWmeG43sgXroRj2KQiRkX0XSHn268WnJiw}{User-Agent: Java/11.0.7}{Host: dev-7858070.okta.com}{Connection: keep-alive}
2021-12-16 19:42:42.008 DEBUG 11880 --- [nio-8555-exec-3] s.n.www.protocol.http.HttpURLConnection : sun.net.www.MessageHeader#3b99722114 pairs: {null: HTTP/1.1 401 Unauthorized}{Date: Thu, 16 Dec 2021 14:12:41 GMT}{Content-Length: 0}{Connection: keep-alive}{Server: nginx}{Public-Key-Pins-Report-Only: pin-sha256="r5EfzZxQVvQpKo3AgYRaT7X2bDO/kj3ACwmxfdT2zt8="; pin-sha256="MaqlcUgk2mvY/RFSGeSwBRkI+rZ6/dxe/DuQfBT/vnQ="; pin-sha256="72G5IEvDEWn+EThf3qjR7/bQSWaS2ZSLqolhnO6iyJI="; pin-sha256="rrV6CLCCvqnk89gWibYT0JO6fNQ8cCit7GGoiVTjCOg="; max-age=60; report-uri="https://okta.report-uri.com/r/default/hpkp/reportOnly"}{x-okta-request-id: YbtJWdz9vdX0rhB3Ae0VzAAADGc}{x-xss-protection: 0}{p3p: CP="HONK"}{access-control-expose-headers: WWW-Authenticate}{www-authenticate: Bearer authorization_uri="http://dev-7858070.okta.com/oauth2/v1/authorize", realm="http://dev-7858070.okta.com", scope="openid", error="invalid_token", error_description="The access token is invalid.", resource="/oauth2/v1/userinfo"}{content-language: en}{Strict-Transport-Security: max-age=315360000; includeSubDomains}{set-cookie: sid=""; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/}
2021-12-16 19:42:42.011 DEBUG 11880 --- [nio-8555-exec-3] o.s.web.client.RestTemplate : Response 401 UNAUTHORIZED
2021-12-16 19:42:42.014 DEBUG 11880 --- [nio-8555-exec-3] .s.a.DefaultAuthenticationEventPublisher : No event was found for the exception org.springframework.security.oauth2.core.OAuth2AuthenticationException
2021-12-16 19:42:42.014 DEBUG 11880 --- [nio-8555-exec-3] o.s.s.web.DefaultRedirectStrategy : Redirecting to /login?error
2021-12-16 19:42:42.014 DEBUG 11880 --- [nio-8555-exec-3] w.c.HttpSessionSecurityContextRepository : Did not store empty SecurityContext
2021-12-16 19:42:42.015 DEBUG 11880 --- [nio-8555-exec-3] w.c.HttpSessionSecurityContextRepository : Did not store empty SecurityContext
2021-12-16 19:42:42.015 DEBUG 11880 --- [nio-8555-exec-3] s.s.w.c.SecurityContextPersistenceFilter : Cleared SecurityContextHolder to complete request
From the logs, it seems that the client application is using the access token to fetch the user-info endpoint, and that request is the one returning the 401.
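One way to see exactly which user-info URI the client registration resolved is to print the provider details at startup. The following is only a sketch (the configuration class, bean name and output wording are mine, assuming the auto-configured ClientRegistrationRepository is available):

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.registration.ClientRegistration;
import org.springframework.security.oauth2.client.registration.ClientRegistrationRepository;

@Configuration
public class RegistrationDebugConfig {

    // Prints the endpoints Spring Security resolved for the "okta" registration at startup.
    @Bean
    CommandLineRunner logOktaRegistration(ClientRegistrationRepository registrations) {
        return args -> {
            ClientRegistration okta = registrations.findByRegistrationId("okta");
            System.out.println("authorization-uri: " + okta.getProviderDetails().getAuthorizationUri());
            System.out.println("token-uri        : " + okta.getProviderDetails().getTokenUri());
            System.out.println("user-info-uri    : " + okta.getProviderDetails().getUserInfoEndpoint().getUri());
        };
    }
}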

I was able to solve this issue: the user-info endpoint was incorrect. It should be user-info-uri: https://dev-7858070.okta.com/oauth2/default/v1/userinfo. The /default/ segment was missing from the URL, so the access token issued by the default authorization server was being sent to the org-level userinfo endpoint, which rejected it.
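For reference, the corrected provider can also be expressed programmatically with Spring Security's ClientRegistration builder. This is only a sketch of the equivalent registration (it assumes Spring Security 5.4+; the scopes, redirect template and user-name attribute are my assumptions, and only the URIs come from the configuration above):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.registration.ClientRegistration;
import org.springframework.security.oauth2.client.registration.ClientRegistrationRepository;
import org.springframework.security.oauth2.client.registration.InMemoryClientRegistrationRepository;
import org.springframework.security.oauth2.core.AuthorizationGrantType;

@Configuration
public class OktaClientConfig {

    @Bean
    public ClientRegistrationRepository clientRegistrationRepository() {
        ClientRegistration okta = ClientRegistration.withRegistrationId("okta")
                .clientId("masked")
                .clientSecret("masked")
                .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE)
                .redirectUri("{baseUrl}/login/oauth2/code/{registrationId}")
                .scope("openid", "profile", "email")
                .authorizationUri("https://domain/oauth2/default/v1/authorize")
                .tokenUri("https://domain/oauth2/default/v1/token")
                // userinfo must live under the same /oauth2/default authorization server as the token endpoint
                .userInfoUri("https://domain/oauth2/default/v1/userinfo")
                .jwkSetUri("https://domain/oauth2/default/v1/keys")
                .userNameAttributeName("sub")
                .build();
        return new InMemoryClientRegistrationRepository(okta);
    }
}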

Related

Empty S3 remote log files in Airflow 2.3.2

I configured remote S3 logging with the following variables:
- name: AIRFLOW__LOGGING__REMOTE_LOGGING
  value: 'True'
- name: AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
  value: 's3://my-airflow/airflow/logs'
- name: AIRFLOW__LOGGING__REMOTE_LOG_CONN_ID
  value: 'my_s3'
- name: AIRFLOW__LOGGING__LOGGING_LEVEL
  value: 'ERROR'
- name: AIRFLOW__LOGGING__ENCRYPT_S3_LOGS
  value: 'False'
So far, the log files are created under the DAG/task path with names like attempt=1.log, but they are always 0 bytes (empty). When I try to view the logs from Airflow I get this message (I'm using the KubernetesExecutor):
*** Falling back to local log
*** Trying to get logs (last 100 lines) from worker pod ***
*** Unable to fetch logs from worker pod ***
(400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'f3e0dd67-c8f4-42fc-945f-95dc42e8c2b5', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Mon, 01 Aug 2022 13:07:07 GMT', 'Content-Length': '136'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"name must be provided","reason":"BadRequest","code":400}\n'
Why are my logs files empty?

Apache embedded FTPS (Mina) issue on Java 11+

I have a very simple Java 8 project (an FTP server), which uses the Apache FTPS (Mina) server library (v. 1.1.1). It is as simple as the following code:
ListenerFactory factory = new ListenerFactory();
factory.setPort(2221);
// SSL config
SslConfigurationFactory ssl = new SslConfigurationFactory();
ssl.setKeystoreFile(new File("keystore.jks"));
ssl.setKeystorePassword("password");
// set the SSL configuration for the listener
factory.setSslConfiguration(ssl.createSslConfiguration());
factory.setImplicitSsl(true);
FtpServerFactory serverFactory = new FtpServerFactory();
// replace the default listener
serverFactory.addListener("default", factory.createListener());
//Configure user manager and set admin user
PropertiesUserManagerFactory userManagerFactory = new PropertiesUserManagerFactory();
userManagerFactory.setFile(new File("users.properties"));
UserManager userManager = userManagerFactory.createUserManager();
if (!userManager.doesExist("admin")) {
    BaseUser user = new BaseUser();
    user.setName("admin");
    user.setPassword("password");
    user.setEnabled(true);
    user.setHomeDirectory(USER_HOME_DIR);
    user.setAuthorities(Collections.<Authority>singletonList(new WritePermission()));
    userManager.save(user);
}
serverFactory.setUserManager(userManager);
// start the server
FtpServer server = serverFactory.createServer();
server.start();
Needed maven dependencies:
<dependency>
    <groupId>org.apache.ftpserver</groupId>
    <artifactId>ftpserver-core</artifactId>
    <version>1.1.1</version>
</dependency>
To create a simple self-signed keystore:
keytool -genkey -keyalg RSA -alias self-signed -keystore keystore.jks -validity 360 -keysize 2048
I followed the official guide to write this code: https://mina.apache.org/ftpserver-project/embedding_ftpserver.html
If I compile and run this code with Java 8, my FTPS server works perfectly fine: I can reach it at localhost:2221 with username "admin" and password "password". From my FTP client (I use FileZilla), I can see that the TLS connection was successfully established.
If I compile and run the same code with Java 11+ (I tried with 11 and 15), I see the following message in my FTP client, and the directory listing fails:
Status: Connecting to 127.0.0.1:2223...
Status: Connection established, initializing TLS...
Status: Verifying certificate...
Status: TLS connection established, waiting for welcome message...
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is current directory.
Command: TYPE I
Response: 200 Command TYPE okay.
Command: PASV
Response: 227 Entering Passive Mode (127,0,0,1,225,229)
Command: MLSD
Response: 150 File status okay; about to open data connection.
Error: Received TLS alert from the server: User canceled (90)
Error: Could not read from transfer socket: ECONNABORTED - Connection aborted
Response: 226 Closing data connection.
Error: Failed to retrieve directory listing
And this is the full application log (with VM parameter ):
2021-03-30 22:59:09.304 INFO 10557 --- [ main] com.example.ftp.demo.DemoApplication : Starting DemoApplication using Java 11.0.7 on Kara's-MBP with PID 10557 (...)
2021-03-30 22:59:09.306 INFO 10557 --- [ main] com.example.ftp.demo.DemoApplication : No active profile set, falling back to default profiles: default
2021-03-30 22:59:09.601 INFO 10557 --- [ main] com.example.ftp.demo.DemoApplication : Started DemoApplication in 0.487 seconds (JVM running for 1.046)
javax.net.ssl|DEBUG|01|main|2021-03-30 22:59:09.886 CEST|SSLCipher.java:438|jdk.tls.keyLimits: entry = AES/GCM/NoPadding KeyUpdate 2^37. AES/GCM/NOPADDING:KEYUPDATE = 137438953472
2021-03-30 22:59:09.966 INFO 10557 --- [ main] o.a.ftpserver.impl.DefaultFtpServer : FTP server started
2021-03-30 22:59:24.393 INFO 10557 --- [ NioProcessor-3] o.a.f.listener.nio.FtpLoggingFilter : CREATED
2021-03-30 22:59:24.395 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : OPENED
javax.net.ssl|DEBUG|1B|NioProcessor-3|2021-03-30 22:59:24.443 CEST|SSLCipher.java:1840|KeyLimit read side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|DEBUG|1B|NioProcessor-3|2021-03-30 22:59:24.444 CEST|SSLCipher.java:1994|KeyLimit write side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|DEBUG|1B|NioProcessor-3|2021-03-30 22:59:24.472 CEST|SSLCipher.java:1994|KeyLimit write side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|DEBUG|1B|NioProcessor-3|2021-03-30 22:59:24.490 CEST|SSLCipher.java:1840|KeyLimit read side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
2021-03-30 22:59:24.493 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : SENT: 220 Service ready for new user.
2021-03-30 22:59:24.501 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: USER admin
2021-03-30 22:59:24.503 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : SENT: 331 User name okay, need password for admin.
2021-03-30 22:59:24.503 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: PASS *****
2021-03-30 22:59:24.505 INFO 10557 --- [pool-3-thread-1] org.apache.ftpserver.command.impl.PASS : Login success - admin
2021-03-30 22:59:24.505 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : SENT: 230 User logged in, proceed.
2021-03-30 22:59:24.505 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: OPTS UTF8 ON
2021-03-30 22:59:24.506 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : SENT: 200 Command OPTS okay.
2021-03-30 22:59:24.506 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: PBSZ 0
2021-03-30 22:59:24.506 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : SENT: 200 Command PBSZ okay.
2021-03-30 22:59:24.507 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: PROT P
2021-03-30 22:59:24.508 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : SENT: 200 Command PROT okay.
2021-03-30 22:59:24.508 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: OPTS MLST size;modify;type;
2021-03-30 22:59:24.509 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : SENT: 200 Command OPTS okay.
2021-03-30 22:59:24.509 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: CWD /
2021-03-30 22:59:24.511 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : SENT: 250 Directory changed to /
2021-03-30 22:59:24.511 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: TYPE I
2021-03-30 22:59:24.512 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : SENT: 200 Command TYPE okay.
2021-03-30 22:59:24.512 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: PASV
2021-03-30 22:59:24.513 INFO 10557 --- [pool-3-thread-1] o.a.f.listener.nio.FtpLoggingFilter : SENT: 227 Entering Passive Mode (127,0,0,1,226,235)
2021-03-30 22:59:24.513 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : RECEIVED: MLSD
javax.net.ssl|DEBUG|1D|pool-3-thread-2|2021-03-30 22:59:24.526 CEST|SSLCipher.java:1840|KeyLimit read side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|DEBUG|1D|pool-3-thread-2|2021-03-30 22:59:24.527 CEST|SSLCipher.java:1994|KeyLimit write side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|DEBUG|1D|pool-3-thread-2|2021-03-30 22:59:24.528 CEST|SSLCipher.java:1994|KeyLimit write side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|DEBUG|1D|pool-3-thread-2|2021-03-30 22:59:24.529 CEST|SSLCipher.java:1840|KeyLimit read side: algorithm = AES/GCM/NOPADDING:KEYUPDATE-countdown value = 137438953472
javax.net.ssl|ALL|1D|pool-3-thread-2|2021-03-30 22:59:24.533 CEST|SSLSocketImpl.java:994|Closing output stream
javax.net.ssl|DEBUG|1D|pool-3-thread-2|2021-03-30 22:59:24.533 CEST|SSLSocketImpl.java:466|duplex close of SSLSocket
javax.net.ssl|DEBUG|1D|pool-3-thread-2|2021-03-30 22:59:24.534 CEST|SSLSocketImpl.java:1372|close the SSL connection (passive)
2021-03-30 22:59:24.535 WARN 10557 --- [pool-3-thread-2] org.apache.ftpserver.impl.PassivePorts : Releasing unreserved passive port: 58091
2021-03-30 22:59:24.535 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : SENT: 150 File status okay; about to open data connection.
2021-03-30 22:59:24.535 INFO 10557 --- [pool-3-thread-2] o.a.f.listener.nio.FtpLoggingFilter : SENT: 226 Closing data connection.
Additionally, if I remove SSL support from the code, my FTP server works perfectly fine even with Java 11+.
Has anybody experienced similar issues with Apache FTPS and Java 11+? If yes, how did you solve it?
I can reproduce the problem only when using FileZilla. When I use lftp, for example, I can connect to the server successfully (after trusting the self-signed certificate).
FileZilla seems to have a problem with the JDK's implementation of TLSv1.3. There is a closed (rejected) ticket about this in FileZilla's bug tracker [1].
Also, I can reproduce the problem when using JDK 8: TLSv1.3 has been included and enabled in JDK 8 since 8u261-b12 [2].
As a workaround, you can disable TLSv1.3 via the jdk.tls.disabledAlgorithms security property [3], which forces the JVM to pick another protocol for the TLS handshake (hopefully TLSv1.2). As this is a security setting, it's best to discuss it with your security team if you have one in your company.
The security property can be set or updated in the JDK's java.security configuration file. Its path depends on the JDK and OS you're using; usually it is under $JAVA_HOME/jre/lib/security or $JAVA_HOME/lib/security.
If you can't find it, you can print its path by launching the JVM with -Djava.security.debug=all. You should see the path in the startup logs (there may be several files). Look for something similar to the following lines:
properties: reading security properties file: /usr/lib/jvm/java-11-openjdk-11.0.11.0.9-4.fc34.x86_64/conf/security/java.security
...
properties: reading system security properties file /etc/crypto-policies/back-ends/java.config
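Whichever file ends up being read, a quick sanity check is to print the effective value with the standard java.security.Security API (a minimal sketch, not part of the original answer):

import java.security.Security;

public class TlsPropertyCheck {
    public static void main(String[] args) {
        // Shows the value after java.security (and any override files) have been applied.
        System.out.println("jdk.tls.disabledAlgorithms = "
                + Security.getProperty("jdk.tls.disabledAlgorithms"));
    }
}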
You can also update jdk.tls.disabledAlgorithms programmatically by adding the following two lines before ssl.createSslConfiguration():
String disabledAlgorithms = Security.getProperty("jdk.tls.disabledAlgorithms") + ", TLSv1.3";
Security.setProperty("jdk.tls.disabledAlgorithms", disabledAlgorithms);
Here is the complete program with the added two lines:
import org.apache.ftpserver.FtpServer;
import org.apache.ftpserver.FtpServerFactory;
import org.apache.ftpserver.ftplet.Authority;
import org.apache.ftpserver.ftplet.FtpException;
import org.apache.ftpserver.ftplet.UserManager;
import org.apache.ftpserver.listener.ListenerFactory;
import org.apache.ftpserver.ssl.SslConfigurationFactory;
import org.apache.ftpserver.usermanager.PropertiesUserManagerFactory;
import org.apache.ftpserver.usermanager.impl.BaseUser;
import org.apache.ftpserver.usermanager.impl.WritePermission;

import java.io.File;
import java.security.Security;
import java.util.Collections;

public class Main {

    public static void main(String[] args) throws FtpException {
        String disabledAlgorithms = Security.getProperty("jdk.tls.disabledAlgorithms") + ", TLSv1.3";
        Security.setProperty("jdk.tls.disabledAlgorithms", disabledAlgorithms);

        ListenerFactory factory = new ListenerFactory();
        factory.setPort(2221);

        // SSL config
        SslConfigurationFactory ssl = new SslConfigurationFactory();
        ssl.setKeystoreFile(new File("keystore.jks"));
        ssl.setKeystorePassword("password");

        // set the SSL configuration for the listener
        factory.setSslConfiguration(ssl.createSslConfiguration());
        factory.setImplicitSsl(true);

        FtpServerFactory serverFactory = new FtpServerFactory();
        // replace the default listener
        serverFactory.addListener("default", factory.createListener());

        // Configure user manager and set admin user
        PropertiesUserManagerFactory userManagerFactory = new PropertiesUserManagerFactory();
        userManagerFactory.setFile(new File("users.properties"));
        UserManager userManager = userManagerFactory.createUserManager();
        if (!userManager.doesExist("admin")) {
            BaseUser user = new BaseUser();
            user.setName("admin");
            user.setPassword("password");
            user.setEnabled(true);
            user.setHomeDirectory("/tmp/admin");
            user.setAuthorities(Collections.<Authority>singletonList(new WritePermission()));
            userManager.save(user);
        }
        serverFactory.setUserManager(userManager);

        // start the server
        FtpServer server = serverFactory.createServer();
        server.start();
    }
}
[1] : https://trac.filezilla-project.org/ticket/12099
[2] : https://www.oracle.com/java/technologies/javase/8u261-relnotes.html
[3] : https://docs.oracle.com/en/java/javase/11/security/java-secure-socket-extension-jsse-reference-guide.html#GUID-0A438179-32A7-4900-A81C-29E3073E1E90
Thanks for the detailed information from @Mohamed.
I ran into this issue recently and would like to share my test results: JDK 16.0.1_64 with FileZilla Pro 3.57.1 reproduces the issue; JDK 16.0.1_64 with WinSCP 5.15.5 works fine; and JDK 17.0.1_64 with FileZilla Pro 3.57.1 works fine.
This means that using JDK 17.0.1_64 can be a solution.

Spring Kafka: Polls only 1 record when in batch listener mode

I am running a Spring Kafka consumer that I want to poll the given topic every 10 seconds and fetch all available records, up to the maximum I specified. The topic contains base64 strings of images, usually around 700x400 in dimensions. Below is what my config looks like:
@Bean
public ConsumerFactory<String, String> consumerConfig() {
    Map<String, Object> config = new HashMap<>();
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    config.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, "120000");
    config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 2000);
    config.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000);
    return new DefaultKafkaConsumerFactory<>(config);
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> listener = new ConcurrentKafkaListenerContainerFactory<>();
    listener.setBatchListener(true);
    listener.getContainerProperties().setIdleBetweenPolls(10000);
    listener.setConsumerFactory(consumerConfig());
    listener.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return listener;
}
Below is how I have my listener:
@KafkaListener(id = "feedconsumer", topicPattern = ".*_hello")
public void messageListener(List<ConsumerRecord> records, Acknowledgment acknowledgment) {
    log.info(String.valueOf(records.size()));
    acknowledgment.acknowledge();
}
In my logs I can see only this:
2021-03-29 17:48:12.793 INFO 25102 --- [dconsumer-0-C-1] o.s.k.l.KafkaMessageListenerContainer : feedconsumer: partitions assigned: [test_hello-0]
2021-03-29 17:48:13.338 DEBUG 25102 --- [dconsumer-0-C-1] essageListenerContainer$ListenerConsumer : Received: 1 records
2021-03-29 17:48:13.341 DEBUG 25102 --- [dconsumer-0-C-1] l.a.BatchMessagingMessageListenerAdapter : Processing [GenericMessage [payload=org.springframework.kafka.support.KafkaNull#4f27e57e, headers={id=a9dea384-5f4a-5a59-22ad-45be4ac0c819, timestamp=1617020279053}]]
2021-03-29 17:48:13.342 INFO 25102 --- [dconsumer-0-C-1] c.r.i.t.m.s.s.i.KafkaConsumerServiceImpl : 1
2021-03-29 17:48:13.344 DEBUG 25102 --- [dconsumer-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {test_hello-0=OffsetAndMetadata{offset=92, leaderEpoch=null, metadata=''}}
2021-03-29 17:48:23.359 DEBUG 25102 --- [dconsumer-0-C-1] essageListenerContainer$ListenerConsumer : Received: 1 records
As you can see, I am getting only 1 record every 10 seconds even though the batch listener is enabled and the max record count is 2000. What am I missing?
EDIT: I tried the following config as well:
config.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 10000000);
config.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 50000000);
config.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 10000);
More logs:
2021-03-30 13:15:10.835 DEBUG 34356 --- [dconsumer-0-C-1] l.a.BatchMessagingMessageListenerAdapter : Processing [GenericMessage [payload=org.springframework.kafka.support.KafkaNull#b4ddc5, headers={id=72ae298d-1a89-d632-342a-282569e5c400, timestamp=1617090254725}]]
2021-03-30 13:15:10.836 INFO 34356 --- [dconsumer-0-C-1] c.r.i.t.m.s.s.i.KafkaConsumerServiceImpl : 1
2021-03-30 13:15:10.836 DEBUG 34356 --- [dconsumer-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {test_hello-0=OffsetAndMetadata{offset=46, leaderEpoch=null, metadata=''}}
2021-03-30 13:15:10.836 DEBUG 34356 --- [dconsumer-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=consumer-feedconsumer-1, correlationId=59) and timeout 120000 to node 2147483646: {group_id=feedconsumer,generation_id=7,member_id=consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee,group_instance_id=null,topics=[{name=test_hello,partitions=[{partition_index=0,committed_offset=46,committed_leader_epoch=-1,committed_metadata=,_tagged_fields={}}],_tagged_fields={}}],_tagged_fields={}}
2021-03-30 13:15:10.847 DEBUG 34356 --- [dconsumer-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=consumer-feedconsumer-1, correlationId=59): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='test_hello', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])])
2021-03-30 13:15:10.848 DEBUG 34356 --- [dconsumer-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Committed offset 46 for partition test_hello-0
2021-03-30 13:15:11.015 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=11, clientId=consumer-feedconsumer-1, correlationId=58): org.apache.kafka.common.requests.FetchResponse#66229066
2021-03-30 13:15:11.015 DEBUG 34356 --- [ng-feedconsumer] o.a.kafka.clients.FetchSessionHandler : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Node 0 sent an incremental fetch response with throttleTimeMs = 1 for session 1615838501 with 1 response partition(s)
2021-03-30 13:15:11.016 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Fetch READ_UNCOMMITTED at offset 46 for partition test_hello-0 returned fetch data (error=NONE, highWaterMark=4513, lastStableOffset = 4513, logStartOffset = 0, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=1048576)
2021-03-30 13:15:12.263 DEBUG 34356 --- [alina-utility-2] org.apache.catalina.session.ManagerBase : Start expire sessions StandardManager at 1617090312260 sessioncount 0
2021-03-30 13:15:12.264 DEBUG 34356 --- [alina-utility-2] org.apache.catalina.session.ManagerBase : End expire sessions StandardManager processingTime 4 expired sessions: 0
2021-03-30 13:15:12.751 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending Heartbeat request with generation 7 and member id consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee to coordinator 192.168.1.3:9092 (id: 2147483646 rack: null)
2021-03-30 13:15:12.752 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=60) and timeout 120000 to node 2147483646: {group_id=feedconsumer,generation_id=7,member_id=consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee,group_instance_id=null,_tagged_fields={}}
2021-03-30 13:15:12.858 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=60): org.apache.kafka.common.requests.HeartbeatResponse#2e91937c
2021-03-30 13:15:12.858 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received successful Heartbeat response
2021-03-30 13:15:15.831 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending Heartbeat request with generation 7 and member id consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee to coordinator 192.168.1.3:9092 (id: 2147483646 rack: null)
2021-03-30 13:15:15.831 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=61) and timeout 120000 to node 2147483646: {group_id=feedconsumer,generation_id=7,member_id=consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee,group_instance_id=null,_tagged_fields={}}
2021-03-30 13:15:15.937 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=61): org.apache.kafka.common.requests.HeartbeatResponse#124bda17
2021-03-30 13:15:15.937 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received successful Heartbeat response
2021-03-30 13:15:18.906 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending Heartbeat request with generation 7 and member id consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee to coordinator 192.168.1.3:9092 (id: 2147483646 rack: null)
2021-03-30 13:15:18.907 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=62) and timeout 120000 to node 2147483646: {group_id=feedconsumer,generation_id=7,member_id=consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee,group_instance_id=null,_tagged_fields={}}
2021-03-30 13:15:19.012 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=62): org.apache.kafka.common.requests.HeartbeatResponse#50bb3548
2021-03-30 13:15:19.012 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received successful Heartbeat response
2021-03-30 13:15:20.857 DEBUG 34356 --- [dconsumer-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Added READ_UNCOMMITTED fetch request for partition test_hello-0 at position FetchPosition{offset=47, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[192.168.1.3:9092 (id: 1 rack: null)], epoch=0}} to node 192.168.1.3:9092 (id: 1 rack: null)
2021-03-30 13:15:20.857 DEBUG 34356 --- [dconsumer-0-C-1] o.a.kafka.clients.FetchSessionHandler : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Built incremental fetch (sessionId=1615838501, epoch=6) for node 1. Added 0 partition(s), altered 1 partition(s), removed 0 partition(s) out of 1 partition(s)
2021-03-30 13:15:20.857 DEBUG 34356 --- [dconsumer-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(test_hello-0), toForget=(), implied=()) to broker 192.168.1.3:9092 (id: 1 rack: null)
2021-03-30 13:15:20.857 DEBUG 34356 --- [dconsumer-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=11, clientId=consumer-feedconsumer-1, correlationId=63) and timeout 120000 to node 1: {replica_id=-1,max_wait_time=10000,min_bytes=10000000,max_bytes=50000000,isolation_level=0,session_id=1615838501,session_epoch=6,topics=[{topic=test_hello,partitions=[{partition=0,current_leader_epoch=0,fetch_offset=47,log_start_offset=-1,partition_max_bytes=1048576}]}],forgotten_topics_data=[],rack_id=}
2021-03-30 13:15:20.858 DEBUG 34356 --- [dconsumer-0-C-1] essageListenerContainer$ListenerConsumer : Received: 1 records
2021-03-30 13:15:20.858 DEBUG 34356 --- [dconsumer-0-C-1] l.a.BatchMessagingMessageListenerAdapter : Processing [GenericMessage [payload=org.springframework.kafka.support.KafkaNull#b4ddc5, headers={id=72ae298d-1a89-d632-342a-282569e5c400, timestamp=1617090254725}]]
2021-03-30 13:15:20.859 INFO 34356 --- [dconsumer-0-C-1] c.r.i.t.m.s.s.i.KafkaConsumerServiceImpl : 1
2021-03-30 13:15:20.859 DEBUG 34356 --- [dconsumer-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {test_hello-0=OffsetAndMetadata{offset=47, leaderEpoch=null, metadata=''}}
2021-03-30 13:15:20.860 DEBUG 34356 --- [dconsumer-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=consumer-feedconsumer-1, correlationId=64) and timeout 120000 to node 2147483646: {group_id=feedconsumer,generation_id=7,member_id=consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee,group_instance_id=null,topics=[{name=test_hello,partitions=[{partition_index=0,committed_offset=47,committed_leader_epoch=-1,committed_metadata=,_tagged_fields={}}],_tagged_fields={}}],_tagged_fields={}}
2021-03-30 13:15:20.866 DEBUG 34356 --- [dconsumer-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=consumer-feedconsumer-1, correlationId=64): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='test_hello', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])])
2021-03-30 13:15:20.867 DEBUG 34356 --- [dconsumer-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Committed offset 47 for partition test_hello-0
2021-03-30 13:15:21.164 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=11, clientId=consumer-feedconsumer-1, correlationId=63): org.apache.kafka.common.requests.FetchResponse#22e83e99
2021-03-30 13:15:21.165 DEBUG 34356 --- [ng-feedconsumer] o.a.kafka.clients.FetchSessionHandler : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Node 0 sent an incremental fetch response with throttleTimeMs = 1 for session 1615838501 with 1 response partition(s)
2021-03-30 13:15:21.165 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Fetch READ_UNCOMMITTED at offset 47 for partition test_hello-0 returned fetch data (error=NONE, highWaterMark=4563, lastStableOffset = 4563, logStartOffset = 0, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=1048576)
2021-03-30 13:15:21.991 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending Heartbeat request with generation 7 and member id consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee to coordinator 192.168.1.3:9092 (id: 2147483646 rack: null)
2021-03-30 13:15:21.992 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=65) and timeout 120000 to node 2147483646: {group_id=feedconsumer,generation_id=7,member_id=consumer-feedconsumer-1-ca5f91a1-e17b-40ad-a98f-770abbba1cee,group_instance_id=null,_tagged_fields={}}
2021-03-30 13:15:22.093 DEBUG 34356 --- [ng-feedconsumer] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=consumer-feedconsumer-1, correlationId=65): org.apache.kafka.common.requests.HeartbeatResponse#4053cb
2021-03-30 13:15:22.093 DEBUG 34356 --- [ng-feedconsumer] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-feedconsumer-1, groupId=feedconsumer] Received successful Heartbeat response
Try the below settings:
config.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 10000000);
config.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 250000000);
config.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 10000);
config.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 50000000);
Your messages are also too big to be fetched with the defaults: the FETCH request in your logs still shows partition_max_bytes=1048576 (the 1 MB default), so add the max.partition.fetch.bytes property as well.
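Putting it together, this is roughly what the consumer factory from the question looks like with those fetch settings folded in (a sketch; the byte and wait values are the ones suggested above and should be tuned to your actual message sizes):

@Bean
public ConsumerFactory<String, String> consumerConfig() {
    Map<String, Object> config = new HashMap<>();
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 2000);
    // Let the broker wait up to 10 s, or until ~10 MB are available, before answering a fetch.
    config.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 10000000);
    config.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 10000);
    config.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 250000000);
    // Raise the per-partition cap; the default is 1 MB (partition_max_bytes=1048576 in the logs above).
    config.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 50000000);
    return new DefaultKafkaConsumerFactory<>(config);
}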

Some of the Camel IN exchange Headers are not mapped to HTTP headers although both mapHttpMessageHeaders and copyHeaders are default true

I am using Camel version 3.1.0 and its http component to call a REST endpoint that requires headers like date, host, digest, authorization, etc. for security. However, when I set all these headers in the route before sending the request to the http component, the "date" HTTP header is always missing. I tested with both WireMock and the real test environment. Also weird is that the test passes if I run it together with several other tests, but not when run individually.
Here is the code:
.setHeader("flowId", simple("\${body.flowId}"))
.process { exchange ->
with(pubsubConfig) { exchange.setPubsubTimeHeaders(minIntervalValueMills, maxIntervalValueMills, timeoutMillis) }
val settlementAdviceRequestDto = exchange.message.body as SettlementAdviceRequestDto
exchange.setProperty("settlementAdviceRequestDto", settlementAdviceRequestDto)
// Setting security headers
request.entity = StringEntity(jacksonObjectMapper().writeValueAsString(settlementAdviceRequestDto))
signer.signRequest(request, exchange.message)
}
.marshal(settlementAdviceRequestJacksonDataFormat)
.logInfoWithBreadCrumbId("Received SettlementAdviceRequestDto from PubSub, flowId: \${header.flowId}, sending to N&TS with \${headers}")
.to(with(ntsConfigProperties) {
"$protocol://$host:$port/$path?" +
"httpClient.maxConnTotal=$maxConnectionsTotal" +
"&httpClient.maxConnPerRoute=$maxConnectionsPerRoute" +
"&httpClient.connectionRequestTimeout=$connectionRequestTimeout" +
"&httpClient.connectTimeout=$connectionTimeout" +
"&httpClient.socketTimeout=$socketTimeout" +
"&httpClient.redirectsEnabled=$followRedirects"
})
.logInfoWithBreadCrumbId("SettlementAdviceRequestDto sent to N&TS, flowId: \${header.flowId} \${headers}")
Here are the logs from the test that checks the date header:
2020-05-13 11:11:22.267 INFO 5514 --- [tlement.advice]] o.c.f.f.s.route.SettlementAdviceRoute : [ID-C02VC27UHTD6-liping-1589361082160-0-1] Received SettlementAdviceRequestDto from PubSub, flowId: 550e8400-e29b-41d4-a716-446655440000, sending to N&TS with {CamelGooglePubsub.MsgAckId=TgQhIT4wPkVTRFAGFixdRkhRNxkIaFEOT14jPzUgKEUQC1MTUVx2B0YQajNcdQdRDRh1f2Ehbg4UBQEXWX5VWwk8aH58dAZUDRt2eGJ1aF8bCANCW1a0tP24kajpRx1tNZCxo6RASsXWuO52Zhg9XBJLLD5-KTBFQV5AEkwiBURJUytDCypYEQ, CamelGooglePubsub.MessageId=1150245302589178, CamelGooglePubsub.PublishTime=2020-05-13T09:11:20.194Z, breadcrumbId=ID-C02VC27UHTD6-liping-1589361082160-0-1, flowId=550e8400-e29b-41d4-a716-446655440000, CamelGooglePubsub.AckDeadline=2, CamelPubsubHeader.Lifetime=2047, CamelPubsubHeader.LifeTimeout=5000, date=Wed, 13 May 2020 09:11:22 GMT, host=localhost, content-type=application/json, content-length=417, digest=SHA-256=Ocdp4q+ZLUshftQIsycfkidD2SEEnvU29TpX/AFkMt4=, Authorization= keyId="id",algorithm="hmac-sha256",headers="date (request-target) host content-length content-type digest",signature="j/GUbOD0UijQJEjuwCrehQ+seoJ9yeHObYGXbuZgJJY="}
2020-05-13 11:11:22.328 INFO 5514 --- [tp293974199-118] / : RequestHandlerClass from context returned com.github.tomakehurst.wiremock.http.StubRequestHandler. Normalized mapped under returned 'null'
2020-05-13 11:11:22.373 ERROR 5514 --- [tp293974199-118] WireMock :
Request was not matched
=======================
date [contains] : GMT | <<<<< Header is not present
host [contains] : localhost | host: localhost:14685
content-length [contains] : 417 | content-length: 417
content-type [contains] : application/json | content-type: application/json
digest [contains] : | digest:
...
Here are the logs from the test without checking the date header:
2020-05-13 11:20:09.092 INFO 5628 --- [tlement.advice]] o.c.f.f.s.route.SettlementAdviceRoute : [ID-C02VC27UHTD6-liping-1589361608990-0-1] Received SettlementAdviceRequestDto from PubSub, flowId: 550e8400-e29b-41d4-a716-446655440000, sending to N&TS with {CamelGooglePubsub.MsgAckId=BCEhPjA-RVNEUAYWLF1GSFE3GQhoUQ5PXiM_NSAoRRIGCBQFfH1yR1B1XjN1B1ENGXN6Y3U-XxYGVEUCdF9RGx9ZXH5VBlAIGXB-ZnZvWxoFA0BTeXfQ16DUpajANUsxIYq6v7BfeuyjqYNhZhs9XxJLLD5-KStFQV5AEkwiHkRJUytDCypYEU4, CamelGooglePubsub.MessageId=1150237351941507, CamelGooglePubsub.PublishTime=2020-05-13T09:20:07.119Z, breadcrumbId=ID-C02VC27UHTD6-liping-1589361608990-0-1, flowId=550e8400-e29b-41d4-a716-446655440000, CamelGooglePubsub.AckDeadline=1, CamelPubsubHeader.Lifetime=1949, CamelPubsubHeader.LifeTimeout=5000, date=Wed, 13 May 2020 09:20:09 GMT, host=localhost, content-type=application/json, content-length=417, digest=SHA-256=Ocdp4q+ZLUshftQIsycfkidD2SEEnvU29TpX/AFkMt4=, Authorization= keyId="id",algorithm="hmac-sha256",headers="date (request-target) host content-length content-type digest",signature="POpggmhIDI8tNBsb7239ksYEGfwY+IB/Rn93PCrkGsY="}
2020-05-13 11:20:09.145 INFO 5628 --- [tp294593670-119] / : RequestHandlerClass from context returned com.github.tomakehurst.wiremock.http.StubRequestHandler. Normalized mapped under returned 'null'
2020-05-13 11:20:09.434 INFO 5628 --- [tlement.advice]] o.c.f.f.s.route.SettlementAdviceRoute : [ID-C02VC27UHTD6-liping-1589361608990-0-1] SettlementAdviceRequestDto sent to N&TS, flowId: 550e8400-e29b-41d4-a716-446655440000 {CamelHttpResponseCode=200, CamelHttpResponseText=OK, Matched-Stub-Id=4efce26c-dbfa-457e-864d-98f29ef38f97, Vary=Accept-Encoding, User-Agent, Transfer-Encoding=chunked, Server=Jetty(9.2.z-SNAPSHOT), CamelGooglePubsub.MsgAckId=BCEhPjA-RVNEUAYWLF1GSFE3GQhoUQ5PXiM_NSAoRRIGCBQFfH1yR1B1XjN1B1ENGXN6Y3U-XxYGVEUCdF9RGx9ZXH5VBlAIGXB-ZnZvWxoFA0BTeXfQ16DUpajANUsxIYq6v7BfeuyjqYNhZhs9XxJLLD5-KStFQV5AEkwiHkRJUytDCypYEU4, CamelGooglePubsub.MessageId=1150237351941507, CamelGooglePubsub.PublishTime=2020-05-13T09:20:07.119Z, breadcrumbId=ID-C02VC27UHTD6-liping-1589361608990-0-1, flowId=550e8400-e29b-41d4-a716-446655440000, CamelGooglePubsub.AckDeadline=1, CamelPubsubHeader.Lifetime=1949, CamelPubsubHeader.LifeTimeout=5000, digest=SHA-256=Ocdp4q+ZLUshftQIsycfkidD2SEEnvU29TpX/AFkMt4=, Authorization= keyId="id",algorithm="hmac-sha256",headers="date (request-target) host content-length content-type digest",signature="POpggmhIDI8tNBsb7239ksYEGfwY+IB/Rn93PCrkGsY="}

Spring Boot (ConfigServer) is restarting all the time

We have a very simple Spring Boot service (@EnableConfigServer) running behind an nginx proxy.
The service basically works, but it is restarting all the time (the context is closed and started continuously).
See the log files here: http://pastebin.com/GErCF5x6
The setup is basically just one Java class and two config files (bootstrap.properties and application.properties).
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.config.server.EnableConfigServer;
import org.springframework.context.annotation.Configuration;

/**
 * Main Application, which starts the Spring Boot context
 */
@Configuration
@EnableAutoConfiguration
@EnableConfigServer
public class Application {

    @SuppressWarnings("PMD.SignatureDeclareThrowsException")
    public static void main(String[] args) throws Exception {
        SpringApplication.run(Application.class, args);
    }
}
bootstrap.properties
spring.application.name = configserver
spring.cloud.config.enabled = false
encrypt.failOnError= false
encrypt.key= secret
application.properties
# HTTP Configuration
server.port = 8888
# Management Configuration
management.context-path=/manage
# SSH Based Git-Repository
spring.cloud.config.server.git.uri=git@bitbucket.org:xyz.git
spring.cloud.config.server.git.basedir = cache/config
security.user.name=ads
security.user.password={cipher}secret2
security.basic.realm=Config Server
Log-File
11:13:47.105 [qtp1131266554-101] INFO o.s.boot.SpringApplication - Started application in 0.176 seconds (JVM running for 245.66)
11:13:47.105 [qtp1131266554-101] INFO o.s.c.a.AnnotationConfigApplicationContext - Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#7709b120: startup date [Wed Apr 08 11:13:47 UTC 2015]; root of context hierarchy
11:13:47.690 [qtp1131266554-51] INFO o.s.b.a.audit.listener.AuditListener - AuditEvent [timestamp=Wed Apr 08 11:13:47 UTC 2015, principal=ads, type=AUTHENTICATION_SUCCESS, data={details=org.springframework.security.web.authentication.WebAuthenticationDetails#ffffe21a: RemoteIpAddress: 10.10.100.207; SessionId: null}]
11:13:48.324 [qtp1131266554-19] INFO o.s.boot.SpringApplication - Starting application on api01.prd.rbx.xxxx.com with PID 24204 (started by ads in /home/ads/config-server)
11:13:48.328 [qtp1131266554-19] INFO o.s.c.a.AnnotationConfigApplicationContext - Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#473cffd3: startup date [Wed Apr 08 11:13:48 UTC 2015]; root of context hierarchy
11:13:48.332 [qtp1131266554-19] INFO o.s.boot.SpringApplication - Started application in 0.092 seconds (JVM running for 246.887)
11:13:48.332 [qtp1131266554-19] INFO o.s.c.a.AnnotationConfigApplicationContext - Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#473cffd3: startup date [Wed Apr 08 11:13:48 UTC 2015]; root of context hierarchy
11:13:48.612 [qtp1131266554-55] INFO o.s.b.a.audit.listener.AuditListener - AuditEvent [timestamp=Wed Apr 08 11:13:48 UTC 2015, principal=ads, type=AUTHENTICATION_SUCCESS, data={details=org.springframework.security.web.authentication.WebAuthenticationDetails#ffffe21a: RemoteIpAddress: 10.10.100.207; SessionId: null}]
11:13:50.601 [qtp1131266554-77] INFO o.s.boot.SpringApplication - Starting application on api01.prd.rbx.xxxx.com with PID 24204 (started by ads in /home/ads/config-server)
11:13:50.604 [qtp1131266554-77] INFO o.s.c.a.AnnotationConfigApplicationContext - Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#44330486: startup date [Wed Apr 08 11:13:50 UTC 2015]; root of context hierarchy
11:13:50.607 [qtp1131266554-77] INFO o.s.boot.SpringApplication - Started application in 0.088 seconds (JVM running for 249.162)
11:13:50.607 [qtp1131266554-77] INFO o.s.c.a.AnnotationConfigApplicationContext - Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#44330486: startup date [Wed Apr 08 11:13:50 UTC 2015]; root of context hierarchy
11:13:51.831 [qtp1131266554-55] INFO o.s.boot.SpringApplication - Starting application on api01.prd.rbx.xxxx.com with PID 24204 (started by ads in /home/ads/config-server)
11:13:51.834 [qtp1131266554-55] INFO o.s.c.a.AnnotationConfigApplicationContext - Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#1843040d: startup date [Wed Apr 08 11:13:51 UTC 2015]; root of context hierarchy
11:13:51.840 [qtp1131266554-55] INFO o.s.boot.SpringApplication - Started application in 0.094 seconds (JVM running for 250.395)
Any idea how to solve the problem?
best,
fritz
That's normal. It's not the application that is starting and stopping, it's just the mini-contexts that are used to create the config resources for remote clients. Completely harmless.
I found out the problem I am facing here. Basically, we have two remote services running on AWS behind a configured Load Balancer, which continuously checks the /health endpoint. Every time that endpoint is invoked, the ConfigServerClient calls our ConfigServer.
I don't understand why there is a HealthIndicator for the ConfigServer. Is there a way to disable this HealthIndicator, given that every request to the endpoint queries our config server again and again? Another disadvantage is that the /health request no longer responds as quickly as possible, which leads to timeouts in the Load Balancer (default 2s).
So, the config server isn't restarting. For each client request it creates a new Spring ApplicationContext, loads the files it grabbed from git into that context, and then formats the values to send back to the client. Below are the logs from my config server for one client connection. The logs just look confusing, as if it were restarting.
2015-04-08 12:22:52.206 INFO 85076 --- [nio-8888-exec-1] o.s.b.a.audit.listener.AuditListener : AuditEvent [timestamp=Wed Apr 08 12:22:52 EDT 2015, principal=user, type=AUTHENTICATION_SUCCESS, data={details=org.springframework.security.web.authentication.WebAuthenticationDetails#957e: RemoteIpAddress: 127.0.0.1; SessionId: null}]
2015-04-08 12:22:52.944 INFO 85076 --- [nio-8888-exec-2] o.s.b.a.audit.listener.AuditListener : AuditEvent [timestamp=Wed Apr 08 12:22:52 EDT 2015, principal=user, type=AUTHENTICATION_SUCCESS, data={details=org.springframework.security.web.authentication.WebAuthenticationDetails#957e: RemoteIpAddress: 127.0.0.1; SessionId: null}]
2015-04-08 12:22:53.490 INFO 85076 --- [nio-8888-exec-1] o.s.boot.SpringApplication : Starting application on sgibb-mbp.local with PID 85076 (started by sgibb in /Users/sgibb/workspace/spring/spring-cloud-samples/configserver)
2015-04-08 12:22:53.494 INFO 85076 --- [nio-8888-exec-1] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#571e4b84: startup date [Wed Apr 08 12:22:53 EDT 2015]; root of context hierarchy
2015-04-08 12:22:53.497 INFO 85076 --- [nio-8888-exec-1] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2015-04-08 12:22:53.498 INFO 85076 --- [nio-8888-exec-1] o.s.boot.SpringApplication : Started application in 0.151 seconds (JVM running for 23.747)
2015-04-08 12:22:53.499 INFO 85076 --- [nio-8888-exec-1] o.s.c.c.s.NativeEnvironmentRepository : Adding property source: file:/Users/sgibb/workspace/spring/spring-cloud-samples/configserver/target/config/foo.properties
2015-04-08 12:22:53.500 INFO 85076 --- [nio-8888-exec-1] o.s.c.c.s.NativeEnvironmentRepository : Adding property source: file:/Users/sgibb/workspace/spring/spring-cloud-samples/configserver/target/config/application.yml
2015-04-08 12:22:53.500 INFO 85076 --- [nio-8888-exec-1] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#571e4b84: startup date [Wed Apr 08 12:22:53 EDT 2015]; root of context hierarchy
2015-04-08 12:22:54.090 INFO 85076 --- [nio-8888-exec-2] o.s.boot.SpringApplication : Starting application on sgibb-mbp.local with PID 85076 (started by sgibb in /Users/sgibb/workspace/spring/spring-cloud-samples/configserver)
2015-04-08 12:22:54.096 INFO 85076 --- [nio-8888-exec-2] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#416d044c: startup date [Wed Apr 08 12:22:54 EDT 2015]; root of context hierarchy
2015-04-08 12:22:54.098 INFO 85076 --- [nio-8888-exec-2] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2015-04-08 12:22:54.099 INFO 85076 --- [nio-8888-exec-2] o.s.boot.SpringApplication : Started application in 0.433 seconds (JVM running for 24.348)
2015-04-08 12:22:54.099 INFO 85076 --- [nio-8888-exec-2] o.s.c.c.s.NativeEnvironmentRepository : Adding property source: file:/Users/sgibb/workspace/spring/spring-cloud-samples/configserver/target/config/foo.properties
2015-04-08 12:22:54.099 INFO 85076 --- [nio-8888-exec-2] o.s.c.c.s.NativeEnvironmentRepository : Adding property source: file:/Users/sgibb/workspace/spring/spring-cloud-samples/configserver/target/config/application.yml
2015-04-08 12:22:54.099 INFO 85076 --- [nio-8888-exec-2] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#416d044c: startup date [Wed Apr 08 12:22:54 EDT 2015]; root of context hierarchy
