Mixed sync/async logging with Log4j 2 does not work as expected (asynchronous)

I am trying to analyze and implement mixed sync and async logging. I am using a Spring Boot application along with the Disruptor API. My Log4j 2 configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="debug">
<Appenders>
<Console name="Console-Appender" target="SYSTEM_OUT">
<PatternLayout>
<pattern>
[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n
</pattern>
</PatternLayout>
</Console>
</Appenders>
<Loggers>
<AsyncLogger name="com.example.logging" level="debug">
<AppenderRef ref="Console-Appender"/>
</AsyncLogger>
<Root level="info">
<AppenderRef ref="Console-Appender"/>
</Root>
</Loggers>
</Configuration>
Demo class 1:
package com.example.logging;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;
@SpringBootApplication
public class DemoApplication2 {
static Logger logger = LogManager.getLogger(DemoApplication2.class);
public static void main(String[] args) {
SpringApplication.run(DemoApplication2.class, args);
long startTime = System.currentTimeMillis();
for(int i = 0; i < 2000; i++) {
logger.debug("Async : " + i);
}
System.out.println("time taken:: " + (System.currentTimeMillis() - startTime));
}
}
With the code above, I expect the System.out line to print before all of the "debug" statements have been logged, since I am using async logging for the debug level. That is, some debug logs would be written first (e.g. a few hundred), then the System.out line, and then the remaining debug logs. But when I run my application, all debug statements are logged first and only then does the System.out line print, which is not the expected result.
Furthermore, if I set additivity="false" on the AsyncLogger (<AsyncLogger name="com.example.logging" level="debug" additivity="false">), then I do see the expected result described above. Now I have a second demo class:
package com.example.logging;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;
@SpringBootApplication
public class DemoApplication3 {
static Logger logger = LogManager.getLogger(DemoApplication3.class);
public static void main(String[] args) {
SpringApplication.run(DemoApplication3.class, args);
long startTime = System.currentTimeMillis();
for(int i = 0; i < 2000; i++) {
logger.info("Sync : " + i);
}
System.out.println("time taken:: " + (System.currentTimeMillis() - startTime));
}
}
Now with the class above, I expect all the info logs to be written synchronously, with the System.out line printed only after all of them. But if I add additivity="false" to my configuration, then all logging becomes asynchronous.
In short, I have not been able to configure sync and async logging at the same time. Kindly help and suggest.

I'm not really sure what you think you are testing.
When additivity is enabled, the log event is copied and placed into the Disruptor's ring buffer, where it is routed to the Console appender on a different thread. After the copied event has been placed in the buffer, the event is also passed to the Root logger and routed to the Console appender on the calling thread. Since both the async logger and the sync logger are doing the same thing, they will take approximately the same time, so I am not sure why you believe anything would still be pending by the time the System.out call is made.
When you use only the async logger, the main thread does nothing but place events in the queue, so it returns much more quickly, and it is quite likely that your System.out message will appear before all log events have been written.
I suspect there is one very important piece of information you are overlooking. When an event is routed to a Logger, the level specified on the LoggerConfig that the Logger is associated with is checked. When additivity is true, the event is not routed to a parent Logger (there isn't one); it is routed to the LoggerConfig's parent LoggerConfig. A LoggerConfig calls isFiltered(event), which ONLY checks Filters that have been configured on that LoggerConfig. So even though you have level="info" on your Root logger, debug events passed to it from the AsyncLogger will still be logged. You would have to add a ThresholdFilter to the Root logger to prevent that.
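For example, a minimal sketch of what that could look like, assuming the rest of the configuration above stays the same (the filter level simply mirrors the Root logger's level):
<Root level="info">
    <!-- reject DEBUG events handed up from the AsyncLogger through additivity -->
    <ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
    <AppenderRef ref="Console-Appender"/>
</Root>
With this filter in place, debug events still reach the Root LoggerConfig through additivity, but they are dropped there instead of being written to the Console appender again on the calling thread.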

Related

How to use JWKS with Spring?

I have been given the task of implementing JWKS on the project. On our project we have implemented token validation with OAuth2. We use a JKS-format certificate to obtain the public key; the private key is not used in our project, since we only need to check the validity of the token. Our goal is to get rid of the .jks file.
There are few resources on JWKS, so some points are not clear.
If I understand correctly, JWKS means that there is a jwks.json file in the resources with the keys inside, from which we select a key by the kid in the token header. From the documentation it is not clear what kind of file it is, how it is loaded, and at what moment the lookup by kid happens. Does anyone have a project that can be used as an example? Thanks in advance.
https://docs.spring.io/spring-security-oauth2-boot/docs/2.2.x-SNAPSHOT/reference/html/boot-features-security-oauth2-authorization-server.html
You can use the Spring Boot resource server implementation.
First, add the following dependency to your project:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-oauth2-resource-server</artifactId>
</dependency>
Second, you need to add the authentication server configuration. The JSON file that you mentioned lives on the authentication server, or you can use the authentication server's JWKS URL.
You should have a configuration like this in your properties file:
spring.security.oauth2.resourceserver.jwt.jwk-set-uri=https://example.com/.well-known/openid-configuration/jwks
spring.security.oauth2.resourceserver.jwt.issuer-uri=https://example.com
Finally, you need the usual Spring Security API configuration, something like the following:
@Configuration
@EnableWebSecurity
public class SecureSecurityConfiguration extends WebSecurityConfigurerAdapter {
@Value("${spring.security.oauth2.resourceserver.jwt.jwk-set-uri}")
private String jwtSetUri;
@Override
protected void configure(HttpSecurity http) throws Exception {
http.requiresChannel().anyRequest().requiresInsecure().and().cors()
.and().csrf().disable()
.authorizeRequests()
.antMatchers(HttpMethod.GET, "some path1").permitAll()
.antMatchers(HttpMethod.POST, "some path2").permitAll()
.antMatchers(HttpMethod.GET, "some path3").permitAll()
.antMatchers("/**").hasAuthority("some scope") // if you need this scope.
.anyRequest()
.authenticated()
.and()
.oauth2ResourceServer()
.jwt().decoder(jwtDecoder());
}
@Bean
CorsConfigurationSource corsConfigurationSource() {
final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
CorsConfiguration config = new CorsConfiguration().applyPermitDefaultValues();
config.addAllowedMethod("PUT");
config.addAllowedMethod("DELETE");
source.registerCorsConfiguration("/**", config);
return source;
}
private JwtDecoder jwtDecoder() {
return NimbusJwtDecoder.withJwkSetUri(jwtSetUri)
.jwtProcessorCustomizer(p -> p.setJWSTypeVerifier(
new DefaultJOSEObjectTypeVerifier<>(new JOSEObjectType("at+jwt")))).build();
}
}
After this, each request to your APIs will be verified automatically by Spring against the authentication server.

Need to write WARNING-level logs to a different file using FileHandler, while the console handler still shows INFO and SEVERE but NO WARNING, when .level=INFO

We have a Java application on WebSphere where SystemOut.log should only print the SEVERE and INFO levels (using the existing java.util.logging default ConsoleHandler), while WARNING records need to be written to a separate file using a FileHandler.
I created a LevelBasedFileHandler that takes the log level and the file to write to, and I can see the log file being updated as needed.
But the WARNING records are written to SystemOut.log too; I need a way to stop them from appearing there.
logger.addHandler(new LevelBasedFileHandler("../logs/warning.log", Level.WARNING));
logger.setFilter(new LevelBasedFilter()); // trying to see if I can filter here
logger.setUseParentHandlers(false);
With logger.setUseParentHandlers(false), nothing is printed to SystemOut.log at all; if I remove it, I see the WARNING records there too. Any idea how I can filter out the WARNING level?
If you filter at the logger level, that will suppress log records before they reach any of the handlers. What you should do instead is install filters on the existing handlers.
For example, create a filter which takes a boolean:
import java.util.logging.Filter;
import java.util.logging.Level;
import java.util.logging.LogRecord;
public class WarningFilter implements Filter {
private final boolean complement;
public WarningFilter(final boolean complement) {
this.complement = complement;
}
@Override
public boolean isLoggable(LogRecord r) {
return Level.WARNING.equals(r.getLevel()) != complement;
}
}
Next you should install your filter on each handler. For example:
private static final Logger logger = Logger.getLogger("some.other.logger.name");

public static void main(String[] args) throws Exception {
    // Console handlers get everything except WARNING; the file handler gets only WARNING.
    boolean found = false;
    for (Handler h : Logger.getLogger("").getHandlers()) {
        h.setFilter(new WarningFilter(h instanceof ConsoleHandler));
        if (h instanceof ConsoleHandler) {
            found = true;
        }
    }
    if (!found) {
        // No console handler was installed on the root logger, so add one that rejects WARNING.
        Handler h = new ConsoleHandler();
        h.setFilter(new WarningFilter(true));
        Logger.getLogger("").addHandler(h);
    }
    Handler h = new FileHandler("../logs/warning.log");
    h.setFilter(new WarningFilter(false));
    logger.addHandler(h);
}

Why does C3p0's ComboPooledDataSource successfully connect to a database, but its clone doesn't?

In a Tomcat 8.5.15 environment using an Oracle 11 database, I want to implement a data source that handles encrypted passwords in the context.xml. I'm having trouble with that, as described in this Stack Overflow question.
In hopes of determining the underlying problem, I simplified the scenario. First, I verified that the C3p0 resource specification worked fine.
<Resource
auth="Container"
description="MyDataSource"
driverClass="oracle.jdbc.OracleDriver"
maxPoolSize="100"
minPoolSize="10"
acquireIncrement="1"
name="jdbc/MyDataSource"
user="me"
password="mypassword"
factory="org.apache.naming.factory.BeanFactory"
type="com.mchange.v2.c3p0.ComboPooledDataSource"
jdbcUrl="jdbc:oracle:thin:@mydb:1521:dev12c"
/>
It worked fine. Then, I created a clone of the ComboPooledDataSource, based on decompiling the class file:
public final class ComboPooledDataSourceCopy
extends AbstractComboPooledDataSource
implements Serializable, Referenceable {
private static final long serialVersionUID = 1L;
private static final short VERSION = 2;
public ComboPooledDataSourceCopy() {
}
public ComboPooledDataSourceCopy(boolean autoregister) {
super(autoregister);
}
public ComboPooledDataSourceCopy(String configName) {
super(configName);
}
private void writeObject(ObjectOutputStream oos) throws IOException {
oos.writeShort(2);
}
private void readObject(ObjectInputStream ois) throws IOException, ClassNotFoundException {
short version = ois.readShort();
switch(version) {
case 2:
return;
default:
throw new IOException("Unsupported Serialized Version: " + version);
}
}
}
I created a revised resource specification using the cloned class:
<Resource
auth="Container"
description="MyDataSource"
driverClass="oracle.jdbc.OracleDriver"
maxPoolSize="100"
minPoolSize="10"
acquireIncrement="1"
name="jdbc/MyDataSource"
user="me"
password="mypassword"
factory="org.apache.naming.factory.BeanFactory"
type="com.mycompany.ComboPooledDataSourceCopy"
jdbcUrl="jdbc:oracle:thin:@mydb:1521:dev12c"
/>
When I try to connect to the database using this specification, the connection attempt fails.
...
Caused by: java.sql.SQLException: com.mchange.v2.c3p0.impl.NewProxyConnection@6950dfda
[wrapping: oracle.jdbc.driver.T4CConnection@765426dd]
is not a wrapper for or implementation of oracle.jdbc.OracleConnection
at com.mchange.v2.c3p0.impl.NewProxyConnection.unwrap(NewProxyConnection.java:1744)
at org.jaffa.security.JDBCSecurityPlugin.executeStoredProcedure(JDBCSecurityPlugin.java:117)
... 67 more
Why does the clone attempt fail to connect?
UPDATE:
With assistance from our local DBA, we’ve been able to audit my connection attempts. It appears that we are successfully connecting to the database and logging in. Based on this, it sounds like the problem may be in how the code is handling the database’s response, rather than in our request generation.
The error was a result of a class loading problem, where the Oracle classes were being loaded from multiple jars (%CATALINA_HOME%\lib\ojdbc7-12.1.0.2.0.jar and %CATALINA_HOME%\webapps\my-webapp-1.0.0\WEB-INF\lib\ojdbc7-12.1.0.2.0.jar) by different class loaders. When I deleted %CATALINA_HOME%\webapps\my-webapp-1.0.0\WEB-INF\lib\ojdbc7-12.1.0.2.0.jar, my problem went away.
These sources (1, 2, 3) discuss this in more detail.
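For anyone hitting the same "is not a wrapper for or implementation of" error, a quick way to confirm a duplicate-jar situation like this is to print where the Oracle classes are actually being loaded from. The helper below is purely a hypothetical diagnostic, not part of the original setup; call it from inside the web application (for example right before the failing unwrap() call):
public final class ClassOrigin {
    private ClassOrigin() {
    }

    // Prints the code source (jar) a class was loaded from and the class loader that loaded it.
    // Seeing two different locations/loaders for oracle.jdbc.OracleConnection in different parts
    // of the application points to the duplicate-jar problem described above.
    public static void print(String className) throws ClassNotFoundException {
        Class<?> c = Class.forName(className);
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        System.out.println(className + " loaded from "
                + (src != null ? src.getLocation() : "<bootstrap/unknown>")
                + " by " + c.getClassLoader());
    }
}
For example: ClassOrigin.print("oracle.jdbc.OracleConnection");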

Spring Cloud Contract: is it possible to use a constant from Java in a Groovy contract file?

I would like to share constants from Java classes in Groovy contracts.
Test base class:
@SpringBootTest(classes = TestBase.class, webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class TestBase {
public static final String NAME = "just constant";
@Before
public void setup() {
RestAssured.baseURI = "https://rest-service.com";
RestAssured.port = 443;
}
}
Contract file:
package contracts.test_contract
import org.springframework.cloud.contract.spec.Contract
import static test.TestBase.NAME;
Contract.make {
request {
method 'GET'
url ''
body (
"""${value(client(NAME), server(NAME))}"""
)
}
response {
status 200
}
}
pom.xml - spring cloud contract plugin config:
<plugin>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-contract-maven-plugin</artifactId>
<extensions>true</extensions>
<configuration>
<testMode>EXPLICIT</testMode>
<baseClassForTests>test.TestBase</baseClassForTests>
</configuration>
</plugin>
Running mvn clean install I get:
[ERROR] Exception occurred while trying to evaluate the contract at path [C:\dev\_idea_workspace\test_1\src\test\resources\contracts\test_contract\c1.groovy]
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
C:\dev\_idea_workspace\test_1\src\test\resources\contracts\test_contract\c1.groovy: 7: unable to resolve class test.TestBase
@ line 7, column 1.
import static test.TestBase.NAME;
^
1 error
But when I statically import a constant from another class, such as Long.MAX_VALUE, it works.
Any suggestions on how to get around this, or how to share a variable across multiple Groovy contract files?
Thanks!
Yes, please read this section of the documentation: https://docs.spring.io/spring-cloud-contract/docs/current/reference/html/advanced.html#customization-customization . Here you can see the code with the shared classes: https://github.com/spring-cloud-samples/spring-cloud-contract-samples/tree/main/common . Here you can see how it is added on the producer side: https://github.com/spring-cloud-samples/spring-cloud-contract-samples/blob/main/producer/pom.xml#L106-L111 , and here on the consumer: https://github.com/spring-cloud-samples/spring-cloud-contract-samples/blob/main/consumer/pom.xml#L63-L68 .
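In short, the approach in those samples is to move TestBase (and the shared constants) into a separate module and make it visible both to the contract plugin and to the generated tests. A rough sketch of what the producer's plugin section could look like, with hypothetical coordinates for the shared module:
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <testMode>EXPLICIT</testMode>
        <baseClassForTests>test.TestBase</baseClassForTests>
    </configuration>
    <dependencies>
        <!-- hypothetical shared module containing test.TestBase and the constants;
             adding it here lets the Groovy contracts resolve the static import at conversion time -->
        <dependency>
            <groupId>com.example</groupId>
            <artifactId>contract-common</artifactId>
            <version>${project.version}</version>
        </dependency>
    </dependencies>
</plugin>
The same module is then added as a regular test dependency of the producer (and of the consumer), as the linked pom.xml files show.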

Glassfish - Preserve sessions across redeployment - SessionListener is not called on session recreation

So I made a simple session listener (there are many examples on the web):
import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;
@WebListener
class SessionListener implements ServletContextListener, HttpSessionListener {
private static final int MAX_INACTIVE_INTERVAL = 1000; // in secs
// static AtomicInteger numOfSessions;
// singleton ? static ?
static int numOfSessions;
static ServletContext context;
@Override
public void sessionCreated(HttpSessionEvent se) {
se.getSession().setMaxInactiveInterval(MAX_INACTIVE_INTERVAL);
increase();
}
@Override
public void sessionDestroyed(HttpSessionEvent se) {
decrease();
}
private synchronized void increase() {
++numOfSessions;
context.setAttribute("numberOfSessions", numOfSessions);
System.out.println("SessionListener - increase - numberOfSessions = " +
numOfSessions);
}
private synchronized void decrease() {
--numOfSessions;
context.setAttribute("numberOfSessions", numOfSessions);
System.out.println("SessionListener - decrease - numberOfSessions = " +
numOfSessions);
}
@Override
public void contextDestroyed(ServletContextEvent sce) {
System.out.println("SessionListener - contextDestroyed");
}
@Override
public void contextInitialized(ServletContextEvent sce) {
context = sce.getServletContext();
System.out.println("SessionListener - contextInitialized : " +
context);
}
}
(heavily edited)
I am on GlassFish 3.1.2 with Eclipse Juno. The session is created via request.getSession() in the doPost() method of the relevant servlet. When I redeploy the project (on save), decrease() is called - the session gets invalidated, naturally.
Now, "Preserve sessions across redeployment" is on by default in the Eclipse GlassFish plugin - so when I save the project in Eclipse again and it is redeployed, I get:
INFO: SessionListener - decrease - numberOfSessions = -1
Meaning: GlassFish recreates the sessions BUT does not call the listener. So on redeployment a session is invalidated, but since sessionCreated() was never called for the recreated session, my session count was still at 0 (and is now at -1).
I need a workaround for this!
Historical note (it helped me understand what was going on): if you modify and recompile a Java program while Tomcat is running, Tomcat first removes all sessions by calling the session listener, and then re-creates new session objects with the same session IDs [edit: and all attributes apart from non-serializable objects (?) /edit], but this time it does not call the registered session listeners when it does this.
NB: I knew nothing about session preservation, and since the session was not preserved entirely (a POJO session attribute was annihilated - as I understand it now, it must be Serializable to be preserved - right? docs?), it really took a while to understand what was going on.
