NSS + PAM + TACACS+: first session fails - pam

I have a device that I want to authorize against a TACACS+ server.
The TACACS+ daemon is tac_plus version F4.0.4.26.
The server has the following configuration:
accounting file = /var/log/tac_plus.acct
key = testing123
default authentication = file /etc/passwd

user = sf {
    default service = permit
    login = cleartext 1234
}

user = DEFAULT {
    # login = PAM
    service = ppp protocol = ip {}
}
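As a side note (not from the original setup), tac_plus can parse the configuration and exit without starting the daemon, which quickly catches config typos; the config path here is an assumption:

tac_plus -P -C /etc/tac_plus.conf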
On the device, NSS is configured in /etc/nsswitch.conf:
passwd: files rf
group: files
shadow: files
hosts: files dns
networks: files dns
protocols: files
services: files
ethers: files
rpc: files
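A quick way to check that the rf service actually resolves users is to query NSS directly (the user name is just an example):

getent passwd alex

If the module loads and finds the user, this prints a passwd line; if not, it prints nothing.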
and /etc/pam.d contains this sshd file:
# SERVER 1
auth required /lib/security/pam_rf.so
auth [success=done auth_err=die default=ignore] /lib/security/pam_tacplus.so server=172.18.177.162:49 secret=testing123 timeout=5
account sufficient /lib/security/pam_tacplus.so server=172.18.177.162:49 service=ppp protocol=ip timeout=5
session required /lib/security/pam_rf.so
session sufficient /lib/security/pam_tacplus.so server=172.18.177.162:49 service=ppp protocol=ip timeout=5
password required /lib/security/pam_rf.so
# PAM configuration for the Secure Shell service
# Standard Un*x authentication.
auth include common-auth
# Disallow non-root logins when /etc/nologin exists.
account required pam_nologin.so
# Standard Un*x authorization.
account include common-account
# Set the loginuid process attribute.
session required pam_loginuid.so
# Standard Un*x session setup and teardown.
session include common-session
# Standard Un*x password updating.
password include common-password
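If pamtester happens to be available on the device, you can exercise this stack without a full SSH login (service and user names as above):

pamtester sshd alex authenticate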
And here is the problem: when I connect to the device for the first time via TeraTerm, I can see that the entered user name is added to /etc/passwd and /etc/shadow at session start, but the login does not succeed, and on the TACACS+ server I see in the logs:
Mon Dec 17 19:00:05 2018 [25418]: session.peerip is 172.17.236.2
Mon Dec 17 19:00:05 2018 [25418]: forked 5385
Mon Dec 17 19:00:05 2018 [5385]: connect from 172.17.236.2 [172.17.236.2]
Mon Dec 17 19:00:05 2018 [5385]: Found entry for alex in shadow file
Mon Dec 17 19:00:05 2018 [5385]: verify
IN $6$DUikjB1i$4.cM87/pWRZg2lW3gr3TZorAReVL7JlKGA/2.BRi7AAyHQHz6bBenUxGXsrpzXkVvpwp0CrtNYAGdQDYT2gaZ/
Mon Dec 17 19:00:05 2018 [5385]:
IN encrypts to $6$DUikjB1i$AM/ZEXg6UAoKGrFQOzHC6/BpkK0Rw4JSmgqAc.xJ9S/Q7n8.bT/Ks73SgLdtMUAGbLAiD9wnlYlb84YGujaPS/
Mon Dec 17 19:00:05 2018 [5385]: Password is incorrect
Mon Dec 17 19:00:05 2018 [5385]: Authenticating ACLs for user 'DEFAULT' instead of 'alex'
Mon Dec 17 19:00:05 2018 [5385]: pap-login query for 'alex' ssh from 172.17.236.2 rejected
Mon Dec 17 19:00:05 2018 [5385]: login failure: alex 172.17.236.2 (172.17.236.2) ssh
After that, if I close TeraTerm, open it again and try to connect, the connection is established successfully. If I then close and reopen TeraTerm once more, the same problem appears; it fails on every second try.
What could the problem be? It is driving me crazy already.

After digging deeper into the problem, I found out that it was my fault: I had compiled my name service module with g++ instead of gcc.
The name service uses
#include <pwd.h>
which declares the interface used by entry points like _nss_<service>_getpwnam_r. Those entry points must be plain C symbols, and g++ mangles symbol names, so glibc could not find them in my module. I therefore had to either wrap things in an extern "C" block:
extern "C" {
#include <pwd.h>
}
or compile the module with gcc. I hope this helps anyone who runs into the same problem. Good luck!
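For reference, here is a minimal sketch of what the exported entry point has to look like when the module is built as C++. The rf service name matches the nsswitch.conf above; the body is only a stub, not the real implementation:

#include <nss.h>   /* enum nss_status, NSS_STATUS_* */
#include <pwd.h>   /* struct passwd */
#include <errno.h>

#ifdef __cplusplus
/* Without C linkage, g++ mangles the symbol name and glibc's
   dynamic lookup of _nss_rf_getpwnam_r fails. */
extern "C" {
#endif

enum nss_status _nss_rf_getpwnam_r(const char *name, struct passwd *pwd,
                                   char *buffer, size_t buflen, int *errnop)
{
    /* A real module fills *pwd from its backing store and returns
       NSS_STATUS_SUCCESS; this stub just reports "user not found". */
    (void)name; (void)pwd; (void)buffer; (void)buflen;
    *errnop = ENOENT;
    return NSS_STATUS_NOTFOUND;
}

#ifdef __cplusplus
}
#endif

Build it as libnss_rf.so.2 (e.g. g++ -shared -fPIC -o libnss_rf.so.2 nss_rf.cpp) so glibc can locate it for the rf entry in nsswitch.conf.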

Related

Unable to setup floating IP using keepalived and VRRP

I have an Ubuntu machine hosting two VMs, each of them running an application for which I need to provide high availability, so I implemented a floating IP using keepalived and VRRP. But I cannot ping the master VM from the host using the virtual IP; it says destination host unreachable.
keepalived.conf for VM1:
vrrp_instance VI_1 {
    interface enp1s0
    state MASTER              # BACKUP here for VM2
    virtual_router_id 51
    priority 200              # 100 in case of VM2
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass monkey
    }
    virtual_ipaddress {
        192.168.122.150/24
    }
}
When I start the keepalived service, it shows the messages below:
Dec 19 14:31:37 secondaryvm Keepalived_vrrp[1419]: Unknown keyword '}'
Dec 19 14:31:37 secondaryvm Keepalived_vrrp[1419]: Unknown keyword 'virtual_ipaddress'
Dec 19 14:31:37 secondaryvm Keepalived_vrrp[1419]: Unknown keyword '192.168.122.150'
Dec 19 14:31:37 secondaryvm Keepalived_vrrp[1419]: Unknown keyword '}'
Dec 19 14:31:37 secondaryvm Keepalived_vrrp[1419]: Unknown keyword '}'
Dec 19 14:31:37 secondaryvm Keepalived_vrrp[1419]: Using LinkWatch kernel netlink reflector...
Dec 19 14:31:37 secondaryvm systemd[1]: Started Keepalive Daemon (LVS and VRRP).
Dec 19 14:31:39 secondaryvm Keepalived_vrrp[1419]: VRRP_Instance(VI_1) Transition to MASTER STATE
Dec 19 14:31:41 secondaryvm Keepalived_vrrp[1419]: VRRP_Instance(VI_1) Entering MASTER STATE
Dec 20 01:55:40 secondaryvm Keepalived_vrrp[1419]: VRRP_Instance(VI_1) Received advert with lower priori
A little late to answer, but I ran into a similar issue myself. I kept receiving an error saying "vrrp_track_process" is an unknown keyword, even though the same config worked on one VM and not the other.
Looking at man keepalived.conf, I noticed that one VM had "vrrp_track_process" in its documentation and the other did not, so the package/repo needed to be updated.
Very likely the currently installed version of the package doesn't support the keyword you are using.
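A quick way to confirm this is to compare the installed versions on both machines:

keepalived --version

and then upgrade the older side so both understand the same keywords.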

My ssh config is 'PasswordAuthentication no'; why can MobaXterm still log in?

My ssh config is 'PasswordAuthentication no', so why can MobaXterm still log in?
I want to know which mechanism MobaXterm uses to log in over SSH with a username and password.
I can confirm that my /etc/ssh/sshd_config contains
PasswordAuthentication no
PermitRootLogin no
I'm not using any private key in MobaXterm.
#To disable tunneled clear text passwords, change to no here!
PasswordAuthentication no
#PermitEmptyPasswords no
I tried PuTTY, and it can also log in with a password.
Does this behaviour come from the dialog below?
login as: root
Using keyboard-interactive authentication.
Password:
Last failed login: Fri Apr 26 09:04:12 UTC 2019 from ipxxxx on ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Fri Apr 26 08:58:44 2019 from ipxxxx
09:04:29 root# aa:~>
# is a comment character, meaning that
# PasswordAuthentication no
actually does nothing.
Also, you can enable logging with
LogLevel VERBOSE
This will make it easier to see what is actually happening.
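Another useful check is sshd's extended test mode, which prints the configuration the daemon actually resolved (usually needs root):

sshd -T | grep -iE 'passwordauthentication|challengeresponse|kbdinteractive'

Note that the "keyboard-interactive" prompt in the PuTTY transcript above is controlled by the challenge-response/keyboard-interactive settings, not by PasswordAuthentication alone.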

Shiny server Connection closed. Info: {"type":"close","code":4503,"reason":"The application unexpectedly exited","wasClean":true}

I've encountered a problem deploying my Shiny app on Ubuntu 16.04 LTS.
After I run sudo systemctl start shiny-server and open my browser at http://192.168..*:3838/StockVis/, the web page greys out within a second.
I found some warnings in the web console, shown below, and have been searching the web for about two weeks, but still have no solution. :(
***"Thu Feb 16 2017 12:20:49 GMT+0800 (CST) [INF]: Connection opened. http://192.168.**.***:3838/StockVis/"
Thu Feb 16 2017 12:20:49 GMT+0800 (CST) [DBG]: Open channel 0
The application unexpectedly exited.
Diagnostic information is private. Please ask your system admin for permission if you need to check the R logs.
**Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [INF]: Connection closed. Info: {"type":"close","code":4503,"reason":"The application unexpectedly exited","wasClean":true}
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [DBG]: SockJS connection closed
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [DBG]: Channel 0 is closed
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [DBG]: Removed channel 0, 0 left*****
Any suggestions on how to move forward would be much appreciated.
This can indicate something in your R code is causing an error. As that R error could be anything, this answer is to help you gather that info. The browser console messages will not tell you what that is. In order to access the error, you need to configure Shiny to not delete the log upon exiting the application.
Assuming you have sudo access:
$ sudo vi /etc/shiny-server/shiny-server.conf
Place the following line in the file after the run_as shiny; line:
preserve_logs true;
Restart shiny:
sudo systemctl restart shiny-server
Reload your Shiny app.
In the /var/log/shiny-server/ directory there will be a log file named after your application. Viewing that file will give you more information on what is going on.
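For example (the exact file name includes the application name, the run_as user and a timestamp, so adjust the glob to taste):

sudo tail -n 50 /var/log/shiny-server/StockVis-*.log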
Warning: after you are done, take the preserve_logs true; line back out of the conf file and restart Shiny. Otherwise you will keep generating a bunch of log files you don't want.

jMeter Distributed Testing: Master won't shut down

I have a simple 4 server setup running jMeter (3 slaves, 1 master):
Slave 1: 10.135.62.18 running ./jmeter-server -Djava.rmi.server.hostname=10.135.62.18
Slave 2: 10.135.62.22 running ./jmeter-server -Djava.rmi.server.hostname=10.135.62.22
Slave 3: 10.135.62.20 running ./jmeter-server -Djava.rmi.server.hostname=10.135.62.20
Master: 10.135.62.11 with remote_hosts=10.135.62.18,10.135.62.22,10.135.62.20
I start the test with ./jmeter -n -t /root/jmeter/simple.jmx -l /root/jmeter/result.jtl -r
With the following output:
Writing log file to: /root/apache-jmeter-3.0/bin/jmeter.log
Creating summariser <summary>
Created the tree successfully using /root/jmeter/simple.jmx
Configuring remote engine: 10.135.62.18
Configuring remote engine: 10.135.62.22
Configuring remote engine: 10.135.62.20
Starting remote engines
Starting the test @ Mon Aug 29 11:22:38 UTC 2016 (1472469758410)
Remote engines have been started
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
The Slaves print:
Starting the test on host 10.135.62.22 @ Mon Aug 29 11:22:39 UTC 2016 (1472469759257)
Finished the test on host 10.135.62.22 @ Mon Aug 29 11:22:54 UTC 2016 (1472469774871)
Starting the test on host 10.135.62.18 @ Mon Aug 29 11:22:39 UTC 2016 (1472469759519)
Finished the test on host 10.135.62.18 @ Mon Aug 29 11:22:57 UTC 2016 (1472469777173)
Starting the test on host 10.135.62.20 @ Mon Aug 29 11:22:39 UTC 2016 (1472469759775)
Finished the test on host 10.135.62.20 @ Mon Aug 29 11:22:56 UTC 2016 (1472469776670)
Unfortunately the master waits for messages on port 4445 indefinitely, even though all slaves have finished the test.
Is there anything I have missed?
I figured it out myself just before submitting the question. I guess the solution could be useful nonetheless:
Once I start the test (on the main server) with this:
./jmeter -n -t /root/jmeter/simple.jmx -l /root/jmeter/result.jtl -r -Djava.rmi.server.hostname=10.135.62.11 -Dclient.rmi.localport=4001
It works just fine. I wonder why the documentation doesn't mention something like this.
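For what it's worth, the same settings can live in the master's property files instead of the command line (assuming a stock JMeter layout):

# bin/user.properties (JMeter property)
client.rmi.localport=4001

# bin/system.properties (JVM system property)
java.rmi.server.hostname=10.135.62.11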

Why does cron (DEC OSF1 V4.0 1229 alpha) send mail from a user without a crontab?

I cannot seem to find an answer for this anywhere. Superuser root has a crontab with a couple of jobs that send the resultant output to root's mailbox addressed from my non-superuser account foo.
It is my understanding that the owner of the cron job is supposed to be the sender of the resultant cron job output. Account foo does not have a crontab, and in fact I have even tried explicitly removing foo's crontab, but root still receives root's cron job output from user foo.
When I edit root's crontab, I log into the system as foo, and then su - to root. Does this have anything to do with it?
When I ls -alF /var/spool/cron/crontabs there is no file for user foo.
Does anyone know why my non-superuser account foo, which does not have a crontab file, seems to be sending mail to superuser root?
It also seems that some of root's cron jobs execute both as root and as foo, and both send email to root's mailbox.
Example:
From foo Sat Oct 30 19:01:01 2010
Received: by XXXXXX (8.8.8/1.1.22.3/15Jan03-1152AM)
id TAA0000027883; Sat, 30 Oct 2010 19:01:01 -0400 (EDT)
Date: Sat, 30 Oct 2010 19:01:01 -0400 (EDT)
From: foo
Message-Id: <201010302301.TAA0000027883@XXXXXX>
redacted
Cron: The previous message is the standard output
and standard error of one of your cron commands.
From root Sat Oct 30 19:01:01 2010
Received: by XXXXXX (8.8.8/1.1.22.3/15Jan03-1152AM)
id TAA0000025999; Sat, 30 Oct 2010 19:01:01 -0400 (EDT)
Date: Sat, 30 Oct 2010 19:01:01 -0400 (EDT)
From: system privileged account
Message-Id: <201010302301.TAA0000025999@XXXXXX>
redacted
Cron: The previous message is the standard output
and standard error of one of your cron commands.
You should show us the actual crontab entry. Some crons allow you to specify a user, not just a command. If that user doesn't have a mailbox, cron may by default send the output to root's inbox with the sender still set to 'foo' (which is easily done by putting From: foo in the mail header).
vixie-cron supports a system-wide crontab file in /etc/crontab that allows per-user cron jobs to be specified. The syntax is similar to the usual cron syntax, except that a username is specified in the sixth column and the command to be run follows it. For example:
0 22 * * 1-5 foo mail -s "Mail to root from foo" root
So check /etc/crontab for any entries with foo in the sixth column.
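For example:

grep foo /etc/crontab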
