Karaf does not start

I am using Karaf 4.1.1 on Windows. When I start the Karaf container by double-clicking karaf.bat, I only see the lines below in the log. Nothing is shown in the shell. Any help appreciated. Thanks
org.apache.karaf.main.lock.SimpleFileLock lock
INFO: Trying to lock <userlocation>target\assembly\lock
org.apache.karaf.main.lock.SimpleFileLock lock
INFO: Lock acquired
org.apache.karaf.main.Main$KarafLockCallback lockAquired
INFO: Lock acquired. Setting startlevel to 100

Check the configuration section of the manual and look for the logging settings. Karaf uses these files to define logging, including logging to the console; it is log4j by default:
http://karaf.apache.org/manual/latest/#_configuration_files
You can see that the console appender is not defined by default:
# CONSOLE appender not used by default
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n
To enable it you need to add stdout to the root logger, something like this:
# Root logger
log4j.rootLogger=INFO, out, stdout, osgi:*
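Putting it together, the relevant lines in etc/org.ops4j.pax.logging.cfg would look something like this (a minimal sketch, assuming the log4j 1.x style configuration quoted above):
# Root logger, with stdout added alongside the defaults
log4j.rootLogger=INFO, out, stdout, osgi:*
# CONSOLE appender, now actually defined
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n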
PS: if it still doesn't work, I would download a fresh copy; installs easily get corrupted on Windows.

Related

How to configure log4j

How to configure log4j to show only my
log.debug("test log");
messages in the console, without other system-generated information?
It's very annoying when, in a small app, your console gets flooded with tons of (at least for me) useless information like
DEBUG org.springframework.beans.CachedIntrospectionResults: Getting BeanInfo for class [org.thymeleaf.spring4.view.ThymeleafView]
My log4j.properties file:
log4j.rootLogger=DEBUG, stdout, file
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
You can use the LoggerMatchFilter and DenyAllFilter to restrict your logging appender to messages coming from your code only.
LoggerMatchFilter filter = new LoggerMatchFilter();
filter.setLoggerToMatch("Your.Root.Namespace");
filter.setAcceptOnMatch(true);

// Attach the filters to the appender object, not to the properties file.
Appender stdout = Logger.getRootLogger().getAppender("stdout");
stdout.addFilter(filter);              // Match your messages only.
stdout.addFilter(new DenyAllFilter()); // Don't match anything else.
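Note that the log4j 1.x properties format has no syntax for declaring filters, so they have to be attached in code as above, or configured in a log4j.xml file instead.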
Is your application already using log4j, or is it yet to be configured?
log4j configuration steps
If log4j is already in use, change the logger level to ERROR (see the sketch after these steps).
Check whether your application uses an XML configuration file or a properties file for log4j, so you know where to make the change.
logger level configuration steps.
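A minimal sketch of that idea in a log4j.properties file (the package name com.example.myapp is a placeholder for your own root package):
# Silence everything by default...
log4j.rootLogger=ERROR, stdout, file
# ...but keep loggers under your own package at DEBUG
log4j.logger.com.example.myapp=DEBUG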

Rsyslog: imfile does not switch to inotify mode

I'm trying to send multiple nginx logs to loggly...
Config file: /etc/rsyslog.d/21-nginx.conf
$ModLoad imfile
#$InputFilePollInterval 10
$InputFileMode inotify
$WorkDirectory /var/spool/rsyslog
$PrivDropToGroup adm
# nginx access file:
$InputFileName /var/log/nginx/*access.log
$InputFileTag nginx-access:
$InputFileStateFile stat-nginx-access
$InputFileSeverity info
$InputFilePersistStateInterval 20000
$InputRunFileMonitor
# other stuff continues......
After a restart I get this error in syslog:
imfile: The to-be-monitored file "/var/log/nginx/*access.log" contains wildcards. This is not supported in polling mode. [v8.16.0 try http://www.rsyslog.com/e/2420 ]
activation of module imfile failed [v8.16.0 try http://www.rsyslog.com/e/-3 ]
Did I do something wrong?
Are there other places in your rsyslog configuration where the file mode is changed to polling, or where the file poll interval is active? The problem with this kind of legacy syntax is that the configuration is loaded globally, so directives in other configuration files can interact. You might consider using the new action syntax so that the inotify mode is applied to the specific source. You can see an example of it here: http://www.rsyslog.com/doc/v8-stable/configuration/modules/imfile.html
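For example, the legacy directives from the question might translate to something like this (a sketch only; check the linked imfile doc for the exact parameters supported by your rsyslog version):
module(load="imfile" mode="inotify")
global(workDirectory="/var/spool/rsyslog")

input(type="imfile"
      File="/var/log/nginx/*access.log"
      Tag="nginx-access:"
      Severity="info"
      PersistStateInterval="20000")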

Weblogic 12C sending logs to syslog

I want to send my WebLogic logs to syslog. Here is what I have done so far.
1. Included the following log4j.properties in the managed server classpath:
log4j.rootLogger=DEBUG,syslog
log4j.appender.syslog=org.apache.log4j.net.SyslogAppender
log4j.appender.syslog.Threshold=DEBUG
log4j.appender.syslog.Facility=LOCAL7
log4j.appender.syslog.FacilityPrinting=false
log4j.appender.syslog.Header=true
log4j.appender.syslog.SyslogHost=localhost
log4j.appender.syslog.layout=org.apache.log4j.PatternLayout
log4j.appender.syslog.layout.ConversionPattern=[%p] %c:%L - %m%n
2. Added the following to the managed server arguments:
-Dlog4j.configuration=file:<path to log4j properties file> -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger -Dweblogic.log.Log4jLoggingEnabled=true
3. Added wllog4j.jar and log4j-1.2.14.jar to the domain's lib folder.
4. Then, from the Admin console, changed the logging implementation: "my_domain_name"--->Configuration--->Logging--->(Advanced options)-->Logging implementation: Log4J
5. Restarted the managed server.
I used this as a reference, but didn't get anything in syslog (/var/log/messages). What am I doing wrong?
I would recommend a couple of items to check:
Remove the space in "DEBUG, syslog" in the file.
Your last two server arguments have a space between the - and the D, so make sure that wasn't just a copy-and-paste error in this post.
Double-check that the log4j properties file is actually on the classpath.
Double-check with a ps command that the -D options made it correctly into the start command that was executed (see the example one-liner below).
Make sure the managed server has its own copy of the JARs, as they get synchronized from the admin server during the restart.
Hopefully something in there will help or give an idea of what to look for.
--John
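For the ps check, an illustrative one-liner (adjust the grep pattern to your environment):
# list the -D flags of running java processes, one per line
ps -ef | grep '[j]ava' | tr ' ' '\n' | grep '^-D'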
I figured out the problem. My appender was working fine; the problem was in rsyslog.conf. I just uncommented the following properties:
# Provides UDP syslog reception
#$ModLoad imudp
#$UDPServerRun 514
We were sending the messages, but the listener was absent, so rsyslog didn't know what to do with them.
The selector line *.debug;mail.none;authpriv.none;cron.none /var/log/messages then tells it to write everything at debug level and above (excluding mail, authpriv, and cron) to the messages file.
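So the working section of /etc/rsyslog.conf is simply the same lines, uncommented (followed by an rsyslog restart):
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514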

SBT not passing credentials when publishing to Artifactory

I am coding a Java project and I'm automating the build and the publishing to JFrog Artifactory using SBT.
Whenever it's time to publish to Artifactory I want to do it using the Ivy directory layout and obviously publish the Ivy XML file along with the jar. I managed to achieve this by using the following lines in the build.sbt file:
crossPaths := false
publishTo := Some("Artifactory Realm" at "http://<Artifactory IP>:<Artifactory Port>/artifactory/org.project.my")
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
publishMavenStyle := false
However, it only works when anonymous users are allowed to deploy to Artifactory. I realized that sbt is not actually passing my credentials to Artifactory but is instead logging in as anonymous.
My $HOME/.ivy2/.credentials file looks like this:
realm=Artifactory Realm
host=http://<Artifactory IP>:<Artifactory Port>/artifactory/org.project.my
user=<my user name>
password=<my password>
However, if I change the Artifactory configuration to prevent anonymous users from deploying new artifacts, running "sbt publish" gives the following output:
[error] Unable to find credentials for [Artifactory Realm # <Artifactory IP>].
java.io.IOException: Access to URL http://<Artifactory IP>:<Artifactory Port>/artifactory//org.project.my/org/project/my/project-my/1.0.0/project-my-1.0.0.jar was refused by the server: Unauthorized
The Artifactory request.log file then contains:
20160219011657|319|REQUEST|10.0.2.2|anonymous|PUT|/org.project.my/org/project/my/project-my/1.0.0/project-my-1.0.0.jar|HTTP/1.1|401|24978
I have also tried passing the credentials manually instead of using a file:
credentials += Credentials("Artifactory Realm", "localhost", "<USERNAME>", "<PASS>")
But I am getting the same result.
Any idea what I might be missing?
Try:
host=<Artifactory IP>
sbt matches credentials by realm and host, so the host entry must contain just the hostname or IP, with no port or path.
Old answer (doesn't work):
host=<Artifactory IP>:<Artifactory port>
I had a different problem: I had the wrong realm set in my .credentials file.
Looking at the error output from sbt, I was able to figure out that I should use:
realm=Artifactory Realm
The error shows the expected values for realm and host:
[error] Unable to find credentials for [Artifactory Realm # myhost].
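Combining the two answers, a $HOME/.ivy2/.credentials file that should work looks like this (placeholders as in the question):
realm=Artifactory Realm
host=<Artifactory IP>
user=<my user name>
password=<my password>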

How to retrieve currently applied node configuration from Riak v2.0+

Showing currently applied configuration values
In v2.0+ of Riak there is a new command option: riak config effective
I read this as meaning it should tell you the current running values of Riak.
At any time, you can get a snapshot of currently applied
configurations through the command line. For a listing of all of the
configs currently applied in the node
Config changes applied only on start of each node?
In multiple locations in the Riak documentation there are statements like:
Remember that you must stop and then re-start each node when you
change storage backends or modify any other configuration
Problem:
However, when I make a change to a setting (I've tested this in both riak.conf and advanced.config), I see the newest value when running riak config effective,
i.e.:
Start node: riak start
View current setting for log level: riak config effective | grep log.console.level
log.console.level = info
Change the level to debug (something that will output a lot to console.log)
Re-run: riak config effective | grep log.console.level, we get:
log.console.level = debug
Checking the console log file for debug output with cat /var/log/riak/console.log | grep debug gives no results (indicating the config change has not been applied)
So the question is, how can I retrieve and verify what config setting each Riak node is running under?
When Riak starts, it creates two files, app.config and vm.args. The default location is a 'generated.configs' directory under the platform data directory (usually /var/lib/riak).
These files contain the settings that were in place when Riak was started. The command riak config effective, by contrast, processes the current riak.conf and advanced.config files, so it reflects what is on disk now, not necessarily what the running node loaded at boot.
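So, to verify what a running node actually loaded, look at the files generated at the last start rather than at riak config effective, for example:
# default platform data directory assumed
ls -lt /var/lib/riak/generated.configs/
# then inspect the newest app.*.config listed there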
