Enabling NCSA access log with wsadmin scripting - console

How can I enable the NCSA access log by using a wsadmin script?
To view the settings page for an HTTP channel in the WAS console, we follow these steps:
Servers > Server Types > WebSphere application servers > server > Web Container Settings > Web container transport chains > Chain > HTTP inbound channel.
On the console, there is no administrative command assistance for this task!
Thank you

Use the snippet below; I have added comments for each step.
Update the first two variables, serverName and chainName, to match your environment before running it.
serverName = 'server1'
chainName = 'Chain'
# Update this variable to 'true'/'false' to toggle logging on/off
loggingEnabled = 'true'

# Get the server configuration id
serverId = AdminConfig.getid('/Server:%s/' % serverName)
# Get the list of all web container transport chains
wcTransportChains = AdminTask.listChains(AdminConfig.list('TransportChannelService', serverId), '[-acceptorFilter WebContainerInboundChannel]').splitlines()
# Iterate over the chains and find the one we are interested in
for chain in wcTransportChains:
    if chain.startswith(chainName):
        # List all transport channels for this chain
        transportChannels = AdminConfig.showAttribute(chain, 'transportChannels').split(' ')
        # Iterate over the channels and find the HTTPInboundChannel to enable NCSA logging on
        for channel in transportChannels:
            if channel.find('HTTPInboundChannel') != -1:
                # Enable the logging config
                print('\nEnabling NCSA logging for transport channel: %s on server: %s\n' % (AdminConfig.showAttribute(channel, 'name'), serverName))
                AdminConfig.modify(channel, [['enableLogging', loggingEnabled]])

# Save the changes
AdminConfig.save()
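To run it, save the snippet to a file (the name enableNCSA.py here is just my assumption) and invoke the wsadmin client in Jython mode:

wsadmin.sh -lang jython -f enableNCSA.py

Note that transport channel configuration changes generally take effect only after the server is restarted.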

Related

How to collect Jolokia data via Telegraf, but only if the Jolokia connection is active?

When my application is up, telegraf works fine and collects data related to jolokia since my application opens the port 11722 that telegraf uses to get the metrics. But then, when my application is down, telegraf starts to get errors since it can't connect to Jolokia. My telegraf version is 1.5.3 and this is a Production environment, so I don't have much flexibility to change the version. Is there a way to collect the jolokia metrics just when my application is up and running?
I've tried to create a script that checks whether Jolokia is running and emits a tag that I could then use with my agent, but this didn't work:
[[inputs.exec]]
  commands = ["sh /local/1/home/svcegctp/telegraf/inputs/scripts/check_jolokia.sh"]
  timeout = "1s"
  data_format = "influx"
  name_override = "jvm_status"
  [inputs.exec.tags]
    running = "true"
(...)
[[inputs.jolokia2_agent]]
  # Add agents URLs to query
  urls = ["http://localhost:11722/jolokia"]
  [inputs.jolokia2_agent.tags]
    running = "true"
This is my script:
check_jolokia.sh
#!/bin/bash
if curl -s -u <username>:<password> http://localhost:11722/jolokia/version >/dev/null 2>&1; then
  echo "jvm_status running=true"
else
  echo "jvm_status running=false"
fi
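For what it's worth, the script can be sanity-checked by running it directly; with data_format = "influx", Telegraf parses its output as influx line protocol, i.e. a measurement named jvm_status with a boolean field named running:

$ sh check_jolokia.sh
jvm_status running=true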

Error while trying to send logs with rsyslog without local storage

I'm trying to send logs into Datadog using rsyslog. Ideally, I'd like to do this without having the logs stored on the server hosting rsyslog. I've run into an error in my config that I haven't been able to find much information about. The error occurs on startup of rsyslog.
omfwd: could not get addrinfo for hostname '(null)':'(null)': Name or service not known [v8.2001.0 try https://www.rsyslog.com/e/2007 ]
Here's the portion I've added to the default rsyslog.conf:
module(load="imudp")
input(type="imudp" port="514" ruleset="datadog")
ruleset(name="datadog"){
action(
type="omfwd"
action.resumeRetryCount="-1"
queue.type="linkedList"
queue.saveOnShutdown="on"
queue.maxDiskSpace="1g"
queue.fileName="fwdRule1"
)
$template DatadogFormat,"00000000000000000 <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% - - - %msg%\n "
$DefaultNetstreamDriverCAFile /etc/ssl/certs/ca-certificates.crt
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer *.logs.datadoghq.com
*.* ##intake.logs.datadoghq.com:10516;DatadogFormat
}
First things first:
The module imudp enables log reception over UDP.
The module omfwd enables log forwarding over TCP, UDP, and other protocols.
So most probably, or at least as far as I can tell, you just want rsyslog to collect messages locally and then send them to Datadog.
I don't know anything about the $ActionSendStreamDriver directives, so I can't help you there. But what jumps out is that your action doesn't define where the logs should be sent.
ruleset(name="datadog"){
action(
type="omfwd"
target="10.100.1.1"
port="514"
protocol="udp"
...
)
...
}
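As a side note, and this is an assumption based on the rsyslog v8 documentation rather than anything in your post: the legacy $ActionSendStreamDriver* directives have equivalent omfwd action() parameters, so the whole Datadog forwarding could plausibly be expressed in a single action, along these lines:

ruleset(name="datadog"){
  action(
    type="omfwd"
    target="intake.logs.datadoghq.com"    # destination host from your last selector line
    port="10516"
    protocol="tcp"
    template="DatadogFormat"
    StreamDriver="gtls"                   # equivalents of the $ActionSendStreamDriver* directives
    StreamDriverMode="1"
    StreamDriverAuthMode="x509/name"
    StreamDriverPermittedPeers="*.logs.datadoghq.com"
  )
}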

Several tcp connections from same syslog-ng clients

We have a syslog-ng server with several rsyslog clients. Over time, some of them open many connections to the server instead of just one TCP connection. From the client's perspective, netstat shows only one connection, but on the server side netstat shows several for the same client.
Has anyone ever had a similar problem? What could it be?
Server conf:
#version:3.2
# syslog-ng configuration file.
#
# This should behave pretty much like the original syslog on RedHat. But
# it could be configured a lot smarter.
#
# See syslog-ng(8) and syslog-ng.conf(5) for more information.
#
options {
  time_reopen (10);
  long_hostnames (off);
  use_dns (no);
  use_fqdn (no);
  create_dirs (no);
  keep_hostname (no);
  stats-freq (3600);
};
source s_sys {
  file ("/proc/kmsg" program_override("kernel: "));
  unix-stream ("/dev/log");
  internal();
};
source s_network {
  tcp(ip(serverIP) port(601) max-connections(100) log-fetch-limit(100) log-iw-size(10000));
};
#FROM REMOTE CLIENTS
destination d_clients { file("/var/log/messages_${HOST}" perm(0644)); };
template log2db {
  template("INSERT INTO logs (host, facility, priority, level, tag, datetime, program, msg) VALUES ( '$HOST', '$FACILITY', '$PRIORITY', '$LEVEL', '$TAG', '$YEAR-$MONTH-$DAY $HOUR:$MIN:$SEC', '$PROGRAM', '$MSG' );\n");
  template_escape(no);
};
destination go2db {
  program( "/usr/bin/mysql -u myusername --password=mypass mybddname -Bs > /dev/null"
           template(log2db) log_fifo_size(30000) flush_lines (100));
};
log { source(s_network); destination(d_clients); };
log { source(s_network); destination(go2db); flags(flow-control); };
Client conf:
# rsyslog v5 configuration file
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
#### MODULES ####
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imklog # provides kernel logging support (previously done by rklogd)
$ModLoad immark # provides --MARK-- message capability
#### GLOBAL DIRECTIVES ####
# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on
$MarkMessagePeriod 3600
$preserveFQDN on
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
#### RULES ####
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages
# The authpriv file has restricted access.
authpriv.* /var/log/secure
# Log all the mail messages in one place.
mail.* -/var/log/maillog
# Log cron stuff
cron.* /var/log/cron
# Everybody gets emergency messages
*.emerg *
# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# Save boot messages also to boot.log
local7.* /var/log/boot.log
*.*;cron.none;cron.warning @@serverIP:601
Thanks,

Cloudtrax HTTP Authentication API debug

I'm using the Cloudtrax HTTP Authentication API
to create custom logic for authenticating to a router that has a captive portal.
When the router issues a status request, the server responds with the following:
"CODE" "ACCEPT"
"RA" "1c65684265a2bb1a7c87e4d9565c2b18"
"SECONDS" "3600"
"DOWNLOAD" "2000"
"UPLOAD" "800"
This should be the correct format of an answer to log in the user. The problem is that the captive portal is still present.
I don't know what could be the problem and I can't find a log on the router or cloudtrax to see what could be wrong.
Edit:
I am processing the RA string in Django (Python):
import hashlib

def calculate_ra(request, response):
    code = response.get('CODE')
    if not code:
        return ''
    previus_ra = request.GET.get('ra')
    if not previus_ra:
        return ''
    if len(previus_ra) != 32:
        return ''
    # Python 2: turn the 32-char hex string into its 16 raw bytes
    previus_ra = previus_ra.decode('hex')
    m = hashlib.md5()
    # SECRET is the shared secret configured for the portal
    m.update('{}{}{}'.format(code, previus_ra, SECRET))
    response['RA'] = m.hexdigest()
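For illustration, with entirely hypothetical values, the calculation that calculate_ra performs reduces to (Python 2, matching the code above):

import hashlib
previous_ra = '0123456789abcdef0123456789abcdef'  # hypothetical previous RA from the request
secret = 'my-shared-secret'                        # hypothetical shared secret
# MD5 over response code + raw bytes of the previous RA + secret
ra = hashlib.md5('ACCEPT' + previous_ra.decode('hex') + secret).hexdigest()
print ra  # 32-character hex digest to return in the "RA" field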
Use SSH to log into the router and enable debug mode by sending these commands:
uci set http_portal.general.syslog=debug
uci commit
/etc/init.d/underdogsplash restart
Then use logread -f to view the logs in real time.
Don't forget to disable the debug mode when you're done:
uci set http_portal.general.syslog=err
uci commit
/etc/init.d/underdogsplash restart

Run R/Rook as a web server on startup

I have created a server using Rook in R - http://cran.r-project.org/web/packages/Rook
The code is as follows:
#!/usr/bin/Rscript
library(Rook)
s <- Rhttpd$new()
s$add(
  name="pingpong",
  app=Rook::URLMap$new(
    '/ping' = function(env){
      req <- Rook::Request$new(env)
      res <- Rook::Response$new()
      res$write(sprintf('<h1><a href="%s">Pong</a></h1>', req$to_url("/pong")))
      res$finish()
    },
    '/pong' = function(env){
      req <- Rook::Request$new(env)
      res <- Rook::Response$new()
      res$write(sprintf('<h1><a href="%s">Ping</a></h1>', req$to_url("/ping")))
      res$finish()
    },
    '/?' = function(env){
      req <- Rook::Request$new(env)
      res <- Rook::Response$new()
      res$redirect(req$to_url('/pong'))
      res$finish()
    }
  )
)
## Not run:
s$start(port=9000)
$ ./Rook.r
Loading required package: tools
Loading required package: methods
Loading required package: brew
starting httpd help server ... done
Server started on host 127.0.0.1 and port 9000 . App urls are:
http://127.0.0.1:9000/custom/pingpong
Server started on 127.0.0.1:9000
[1] pingpong http://127.0.0.1:9000/custom/pingpong
Call browse() with an index number or name to run an application.
$
And the process ends here.
It runs fine in the R shell, but I want to run it as a server on system startup.
So once start is called, R should not exit but should wait for requests on the port.
How can I convince R to simply wait or sleep rather than exiting?
I could use Sys.sleep() to wait N seconds, but that doesn't fit the bill perfectly.
Here is one suggestion:
First split the example you gave into (at least) two files: one file contains the definition of the application, which in your example is the value of the app parameter to the Rhttpd$add() function. The other file is the Rscript that starts the application defined in the first file.
For example, if your application function is named pingpong and defined in a file named Rook.R, then the Rscript might look something like:
#!/usr/bin/Rscript --default-packages=methods,utils,stats,Rook
# This script takes as a single argument the port number on which to listen.
args <- commandArgs(trailingOnly=TRUE)
if (length(args) < 1) {
  cat(paste("Usage:",
            substring(grep("^--file=", commandArgs(), value=T), 8),
            "<port-number>\n"))
  quit(save="no", status=1)
} else if (length(args) > 1)
  cat("Warning: extra arguments ignored\n")
s <- Rhttpd$new()
app <- RhttpdApp$new(name='pingpong', app='Rook.R')
s$add(app)
s$start(port=args[1], quiet=F)
suspend_console()
As you can see, this script takes one argument that specifies the listening port. Now you can create a shell script that will invoke this Rscript multiple times to start multiple instances of your server listening on different ports in order to enable some concurrency in responding to HTTP requests.
For example, if the Rscript above is in a file named start.r then such a shell script might look something like:
#!/bin/sh
if [ $# -lt 2 ]; then
  echo "Usage: $0 <start-port> <instance-count>"
  exit 1
fi
start_port=$1
instance_count=$2
end_port=$((start_port + instance_count - 1))
fifo=/tmp/`basename $0`$$
exit_command="echo $(basename $0) exiting; rm $fifo; kill \$(jobs -p)"
mkfifo $fifo
trap "$exit_command" INT TERM
cd `dirname $0`
for port in $(seq $start_port $end_port)
do
  ./start.r $port &
done
# block until interrupted
read < $fifo
The above shell script takes two arguments: (1) the lowest port-number to listen on and (2) the number of instances to start. For example, if the shell script is in an executable file named start.sh then
./start.sh 9000 3
will start three instances of your Rook application listening on ports 9000, 9001 and 9002, respectively.
As you can see, the last line of the shell script reads from the fifo, which prevents the script from exiting until a signal is received. When one of the specified signals is trapped, the shell script kills all the Rook server processes it started, then exits.
Now you can configure a reverse proxy to forward incoming requests to any of the server instances. For example, if you are using Nginx, your configuration might look something like:
upstream rookapp {
  server localhost:9000;
  server localhost:9001;
  server localhost:9002;
}
server {
  listen your.ip.number.here:443;
  location /pingpong/ {
    proxy_pass http://rookapp/custom/pingpong/;
  }
}
Then your service can be available on the public Internet.
The final step is to create a control script with options such as start (to invoke the above shell script) and stop (to send it a TERM signal to stop your servers). Such a script will handle things such as causing the shell script to run as a daemon and keeping track of its process id number. Install this control script in the appropriate location and it will start your Rook application servers when the machine boots. How to do that will depend on your operating system, the identity of which is missing from your question.
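As an illustration only, since the operating system is unspecified, here is a minimal SysV-style sketch of such a control script (the path /opt/rookapp/start.sh and the port/instance values are hypothetical):

#!/bin/sh
# Hypothetical control script for the launcher shell script above.
PIDFILE=/var/run/rookapp.pid
case "$1" in
  start)
    /opt/rookapp/start.sh 9000 3 &   # assumed install path; 3 instances from port 9000
    echo $! > "$PIDFILE"             # remember the launcher's pid
    ;;
  stop)
    kill -TERM "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac

A real control script would also daemonize properly and offer a status command; this only shows the start/stop wiring described above.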
Notes
For an example of how the fifo in the shell script can be used to take different actions based on received signals, see this stack overflow question.
Jeffrey Horner has provided an example of a complete Rook server application.
You will see that the example shell script above traps only INT and TERM signals. I chose those because INT results from typing control-C at the terminal and TERM is the signal used by control scripts on my operating system to stop services. You might want to adjust the choice of signals to trap depending on your circumstances.
Have you tried this?
while (TRUE) {
  Sys.sleep(0.5)
}
