I'm running OpenLDAP 2.4-28 on Xubuntu 12.04.
I'm reading "Mastering OpenLDAP" and configuring along with the book.
When I try to perform the following search (page 47):
$ ldapsearch -x -W -D 'cn=Manager,dc=example,dc=com' -b "" -s base
I am prompted for the password. Then I enter "secret" but I get the following error:
ldap_bind: Invalid Credentials (49).
The following is my slapd.conf:
# slapd.conf - Configuration file for LDAP SLAPD
##########
# Basics #
##########
include /etc/ldap/schema/core.schema
include /etc/ldap/schema/cosine.schema
include /etc/ldap/schema/inetorgperson.schema
pidfile /var/run/slapd/slapd.pid
argsfile /var/run/slapd/slapd.args
loglevel none
modulepath /usr/lib/ldap
# modulepath /usr/local/libexec/openldap
moduleload back_hdb
##########################
# Database Configuration #
##########################
database hdb
suffix "dc=example,dc=com"
rootdn "cn=Manager,dc=example,dc=com"
rootpw secret
directory /var/lib/ldap
# directory /usr/local/var/openldap-data
index objectClass,cn eq
########
# ACLs #
########
access to attrs=userPassword
by anonymous auth
by self write
by * none
access to *
by self write
by * none
and here is the ldap.conf:
# LDAP Client Settings
URI ldap://localhost
BASE dc=example,dc=com
BINDDN cn=Manager,dc=example,dc=com
SIZELIMIT 0
TIMELIMIT 0
I don't see an obvious problem with the above.
It's possible your ldap.conf is being overridden, but command-line options take precedence, and ldapsearch ignores BINDDN in the system-wide ldap.conf, so the only client-side parameter that could be wrong is the URI.
(The order is ETCDIR/ldap.conf, then ~/ldaprc or ~/.ldaprc, and then ldaprc in the current directory, though there are environment variables which can influence this too; see man ldap.conf.)
Try an explicit URI:
ldapsearch -x -W -D 'cn=Manager,dc=example,dc=com' -b "" -s base -H ldap://localhost
or prevent defaults with:
LDAPNOINIT=1 ldapsearch -x -W -D 'cn=Manager,dc=example,dc=com' -b "" -s base
If that doesn't work, then some troubleshooting (you'll probably need the full path to the slapd binary for these):
make sure your slapd.conf is being used and is correct (as root)
slapd -T test -f slapd.conf -d 65535
You may have a left-over or default slapd.d configuration directory which takes precedence over your slapd.conf (unless you specify your config explicitly with -f; slapd.conf is officially deprecated in OpenLDAP 2.4); a quick check for a stray slapd.d is sketched after these steps. If you don't get several pages of output then your binaries were built without debug support.
stop OpenLDAP, then manually start slapd in a separate terminal/console with debug enabled (as root, ^C to quit)
slapd -h ldap://localhost -d 481
then retry the search and see if you can spot the problem (unfortunately there will be a lot of schema noise at the start of the output). Note: running slapd without the -u/-g options can change file ownerships, which can cause problems; you should usually use those options, probably -u ldap -g ldap.
if debug is enabled, then try also
ldapsearch -v -d 63 -W -D 'cn=Manager,dc=example,dc=com' -b "" -s base
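Putting those checks together, a minimal sketch on a stock Debian/Ubuntu layout (the /etc/ldap and /usr/sbin paths and the openldap service account are assumptions; adjust to your install):
# is there a cn=config directory that would shadow slapd.conf?
ls -ld /etc/ldap/slapd.d
# test the config file you think is in use, explicitly
/usr/sbin/slapd -T test -f /etc/ldap/slapd.conf -d 65535
# manual foreground start, keeping the service's usual user/group
# (often openldap on Debian/Ubuntu, ldap on Red Hat style systems)
/usr/sbin/slapd -h ldap://localhost -u openldap -g openldap -d 481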
Having a lot of aliases can be confusing, for instance when they are brought in by the zsh git plugin. I wonder if it's possible to display the definition next to each alias. I'm surprised I'm not able to find anything, so maybe I've used the wrong terms when searching for it. If so, my apologies.
Current behavior:
$ gg<Tab>
gg gga ggf ggfl ggl ggp ggpnp ggpull ggpur ggpush ggsup ggu
Desired behavior:
$ gg<Tab>
gg 'git gui citool'
gga 'git gui citool --amend'
ggpull 'git pull origin "$(git_current_branch)"'
ggpur ggu
ggpush 'git push origin "$(git_current_branch)"'
ggsup 'git branch --set-upstream-to=origin/$(git_current_branch)'
glgg 'git log --graph'
glgga 'git log --graph --decorate --all'
I've found a similar idea for displaying file list info, which can be configured like this:
zstyle ':completion:*' file-list all. But I've found nothing in the docs about a similar feature for alias expansion.
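For reference, zsh's builtins can already print these expansions manually (this is not the completion integration being asked about, just a stopgap; the gg aliases are the ones shown above):
alias | grep '^gg'   # list every alias starting with "gg" together with its definition
which ggpull         # for an alias, zsh's which prints what it is aliased to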
A similar problem is also solved for parameter completion:
docker -<Tab>
--config -- Location of client config files
--debug -D -- Enable debug mode
--help -h -- Print usage
--host -H -- tcp://host:port to bind/connect to
--log-level -l -- Logging level
--tls -- Use TLS
--tlsverify -- Use TLS and verify the remote
--userland-proxy -- Use userland proxy for loopback traffic
--version -v -- Print version information and quit
Thanks!
When I need to copy a file from the local server (server A) to a remote server (server B) via SSH, using a user with enough privileges, I do it successfully like this:
localpath='/this/is/local/path/file1.txt'
remotepath='/this/is/remote/path/'
mypass='MyPassword123'
sshpass -p $mypass scp $localpath username@hostname:$remotepath
Now, I have to transfer a file from server A to server C with a user that doesn't have enough privileges to copy. Once I'm connected to server C, I need to run su in order to be able to run commands like cd, ls, etc.
Manually, I access the server C via SSH like this:
[root@ServerA ~]# ssh username@hostname
You are trying to access a restricted zone. Only Authorized Users allowed.
Password:
Last login: Sat Jun 13 10:17:40 2020 from XXX.XXX.XXX.XXX
ServerC ~ $
ServerC ~ $ su
Password:
ServerC /home/myuser #
ServerC /home/myuser # cd /documents/backups/
ServerC /documents/backups #
At this moment myuser has superuser privileges and I can send commands.
So, how can I automate copying files from server A to server C, given the need to run su once I'm connected to server C?
So far I've tried this:
sshpass -p $mypass ssh -t username@hostname "su -c \"cd /documents/backups/ && ls\""
It prompts for the su password and I'm able to run cd and ls, but this command doesn't copy any files from server A to server C; it only semi-automates the access to server C and the su step there.
Thanks in advance for any help.
UPDATE
# $TAR | ssh $username@$hostname "$COMMAND"
+ tar -cv -C /this/is/local/path/file1.txt .
+ ssh username@X.X.X.X 'set -x; rm -f /tmp/copy && mknod /tmp/copy p; su - <<< "su_password
set -x; tar -xv -C /this/is/remote/path/ . < /tmp/copy" & cat > /tmp/copy'
tar: /this/is/local/path/file1.txt: Cannot chdir: Not a directory
tar: Error is not recoverable: exiting now
You are trying to access a restricted zone. Only Authorized Users allowed.
Password:
+ rm -f /tmp/copy
+ mknod /tmp/copy p
+ su -
+ cat
Password:
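The tar error in this trace comes from handing a file path to -C, which expects a directory to cd into (see the explanation of -C in the answer below). A possible adjustment, assuming $localpath names a single file, is to split it with dirname/basename and drop the trailing . from the extracting side:
# archive just the one file, from its parent directory
TAR="tar -cv -C $(dirname "$localpath") $(basename "$localpath")"
# extract everything in the stream into $remotepath
UNTAR="tar -xv -C $remotepath"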
Editorial note: the previous version of this answer used sudo, the current version uses su as requested in the question.
You could use tar and pipes, like so:
TAR="tar -cv -C $localpath ."
UNTAR="tar -xv -C $remotepath ."
PREPARE_PIPE="rm -f /tmp/copy && mknod /tmp/copy p"
NEWLINE=$'\n' # that's the easiest way to get a literal newline
ROOT_PASSWORD=rootpasswordverydangerous
COMMAND="set -x; $PREPARE_PIPE; su - <<< \"${ROOT_PASSWORD}${NEWLINE} set -x; $UNTAR < /tmp/copy\" & cat > /tmp/copy"
$TAR | ssh username@hostname "$COMMAND"
Explanation:
tar -c . archives the current directory into a single file. We aren't passing -f to tar, so that single file is standard output.
tar -x . extracts the content of a single tar archive file to the current directory. We aren't passing -f to tar, so that single file is standard input.
-C <path> tells tar to cd into <path> so that it will be the current directory in which files are copied from/to.
-v just tells tar to list the files tar archives/extracts, for debugging purposes.
Likewise, set -x is just to have bash emit trace information, for debugging purposes.
So we're archiving $localpath into stdout, and piping it to ssh, which will pipe it to $COMMAND.
If there were a way to give su the password on the command line, we would have used something like:
$TAR | ssh ... su --password ${ROOT_PASSWORD} -c "$UNTAR"
and things would have been simple.
But su doesn't have that. su runs like a shell, reading from stdin. So it will first read the password, and once the password is read and su has established a root session, it reads commands from stdin. That's why we have su - <<< \"${ROOT_PASSWORD}${NEWLINE}${UNTAR}\".
But now stdin is used for the password and the command, so we can't use it for the archive. We could use another file descriptor, but I prefer not to, because avoiding it makes the solution easier to port to sudo instead of su: sudo closes all file descriptors, and sudo -C 200 (only close file descriptors above 200) may not work (it didn't work on my test machine).
If we went that direction, we would have used something like
$TAR | ssh ... 'exec 9<&0 && sudo -S <<< $mypass bash -c "$UNTAR <&9"'
Our next option is to do something like cat > /tmp/archive.tar in order to write the entire archive into a file, and then have something like $UNTAR < /tmp/archive.tar. But the archive may be huge and we may run out of disk space.
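For completeness, a sketch of that temporary-file variant (same variables and remote-bash assumption as above; it works, but needs enough free space on server C to hold the whole archive):
COMMAND="set -x; cat > /tmp/archive.tar; su - <<< \"${ROOT_PASSWORD}${NEWLINE} $UNTAR < /tmp/archive.tar; rm -f /tmp/archive.tar\""
$TAR | ssh username@hostname "$COMMAND"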
So the idea is to create a dedicated pipe - that's PREPARE_PIPE. Pipes don't save anything to disk, and don't store the entire stream in memory, so the reader and the writer have to work concurrently (you know, like with a real pipe).
So, having redirected su's stdin from $ROOT_PASSWORD, we pull ssh's stdin into our pipe with cat > /tmp/copy and, in parallel (&), have $UNTAR read from the pipe (< /tmp/copy).
Notes:
You could also pass -z to both tar commands to pass it compressed, if your network is too slow.
tar will preserve the source's metadata, e.g. timestamps and ownership.
Passing $ROOT_PASSWORD to commands is not good practice; anyone who runs ps -ef can see the password. There are ways to pass the password to server C in a more secure way, but I didn't include them so as not to further complicate this answer (a small mitigation is sketched after these notes).
I would suggest asking the server's owner to install sudo, so that if the password is compromised via ps -ef, at least it's not the root password.
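A small mitigation for the password note above (a sketch, not a full fix): prompt for the password at run time instead of hard-coding it, so it at least never sits in the script or shell history on server A. It still travels inside the remote command string, so the ps -ef exposure on server C remains.
read -rs -p 'root password for server C: ' ROOT_PASSWORD; echo   # -s keeps the password off the terminal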
I had been using a proxy for a long time. Now I need to remove it. I have forgotten how I added the proxy to wget. Can someone please help me get back to normal wget, where it doesn't use any proxy? As of now, I'm using
wget <link> --proxy=none
But I'm facing a problem when I'm installing using a pre-written script. It's painstaking to search all through the scripts and change each command.
Any simpler solution will be very much appreciated.
Thanks
Check your
~/.wgetrc
/etc/wgetrc
and remove proxy settings.
Or use wget's --no-proxy command line option to override them.
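A quick way to locate the offending lines in those files (and a reminder that wget also honors the proxy environment variables, which are worth checking at the same time):
grep -ni proxy ~/.wgetrc /etc/wgetrc 2>/dev/null   # find proxy lines to delete or comment out
env | grep -i proxy                                # http_proxy / https_proxy / no_proxy also affect wget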
If your OS is Alpine/BusyBox, then wget might differ from the one used by @Logu.
There the correct command is
wget --proxy off http://server:port/
Running wget --help outputs:
/ # wget --help
BusyBox v1.31.1 () multi-call binary.
Usage: wget [-c|--continue] [--spider] [-q|--quiet] [-O|--output-document FILE]
[-o|--output-file FILE] [--header 'header: value'] [-Y|--proxy on/off]
[-P DIR] [-S|--server-response] [-U|--user-agent AGENT] [-T SEC] URL...
Retrieve files via HTTP or FTP
--spider Only check URL existence: $? is 0 if exists
-c Continue retrieval of aborted transfer
-q Quiet
-P DIR Save to DIR (default .)
-S Show server response
-T SEC Network read timeout is SEC seconds
-O FILE Save to FILE ('-' for stdout)
-o FILE Log messages to FILE
-U STR Use STR for User-Agent header
-Y on/off Use proxy
I've been stuck for almost a day on the following issue.
I installed LDAP using: apt-get install slapd
and use the following configuration:
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema
allow bind_v2
loglevel 0
moduleload back_sbdb.la
database bdb
suffix "dc=test,dc=nl"
rootdn "cn=Directory Manager,dc=test,dc=nl"
rootpw test
directory /var/lib/ldap
index objectClass eq
index userPassword eq,pres
index givenName,mail,mobile,sn,title,cn,description eq,sub,pres
index displayName eq,sub,pres
index postalAddress,facsimileTelephoneNumber pres
access to *
by self write
by * read
and I then try to bind using
ldapsearch -D cn=Directory Manager,dc=test,dc=nl -w test
but I still receive the error ldap_bind: Invalid Credentials (49)
Anyone has any idea or clues what this could be?
Thanks in advance
Try it using quotes, like:
ldapsearch -D "cn=Directory Manager,dc=test,dc=nl" -w test
The space character in Directory Manager may be causing the problem.
Edit: Also, are you sure you don't need -h -p parameters?
-h The host name of the directory server
-p The port number of the directory server
Edit 2: Just figured out what is wrong. You are using an unencrypted rootpw in your slapd config file. You should use a password hash generated with the slappasswd tool. An unencrypted value may cause problems under special circumstances.
Check this link for details: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-ldap-quickstart.html
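For example, generate the hash with slappasswd and paste its output into slapd.conf (the hash below is just a placeholder; yours will differ):
slappasswd -s test
# prints something like {SSHA}..., which then replaces the plaintext value:
rootpw {SSHA}xxxxxxxxxxxxxxxxxxxxxxxxxxxx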
A few things you could try:
Turn on more verbose logging (loglevel 255), and see if anything shows up in the log file.
Verify that the server really is reading the configuration file you think it is, by checking the access time on the slapd.conf file (ls -lu slapd.conf)
Try binding using an invalid dn (ldapsearch -D cn=no-such-user -w test) and see if the error message changes (if so, that confirms that the problem is with the password, not the dn).
Try man ldapsearch.
I'm not really sure about Debian/Ubuntu, but on FreeBSD you need to add -x to use simple authentication instead of SASL. I think this might be your issue?
Also, you could use -W instead of passing the password in plain text on the command line.
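Putting these answers together, a sketch of the bind from the question with quotes, simple authentication and a prompted password (the base DN is taken from the posted config):
ldapsearch -x -W -D "cn=Directory Manager,dc=test,dc=nl" -b "dc=test,dc=nl" "(objectClass=*)"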
How do you use an identity file with rsync?
This is the syntax I think I should be using with rsync to use an identity file to connect:
rsync -avz -e 'ssh -p1234 -i ~/.ssh/1234-identity' \
"/local/dir/" remoteUser#22.33.44.55:"/remote/dir/"
But it's giving me an error:
Warning: Identity file ~/.ssh/1234-identity not accessible: No such file or directory.
The file is fine and permissions are set correctly; it works when doing ssh, just not with rsync, at least with my syntax. What am I doing wrong? Is it trying to look for the identity file on the remote machine? If so, how do I specify that I want to use an identity file on my local machine?
Use either $HOME
rsync -avz -e "ssh -p1234 -i \"$HOME/.ssh/1234-identity\"" dir remoteUser#server:
or full path to the key:
rsync -avz -e "ssh -p1234 -i /home/username/.ssh/1234-identity" dir user#server:
Tested with rsync 3.0.9 on Ubuntu
You may want to use ssh-agent and ssh-add to load the key into memory. ssh will try identities from ssh-agent automatically if it can find them. Commands would be
eval $(ssh-agent) # Create agent and environment variables
ssh-add ~/.ssh/1234-identity
ssh-agent is a user daemon which holds unencrypted ssh keys in memory. ssh finds it through environment variables which ssh-agent outputs when run; using eval to evaluate this output creates those environment variables. ssh-add is the command which manages the keys held in memory; the agent can also be locked using ssh-add. A default key lifetime can be specified when ssh-agent is started, or per key when it is added.
You might also want to set up a ~/.ssh/config file to supply the port and key definition. (See man ssh_config for more options.)
host 22.33.44.55
IdentityFile ~/.ssh/1234-identity
Port 1234
Single quoting the ssh command prevents the shell expansion that is needed for ~ or $HOME. You could use the full or relative path to the key inside single quotes.
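With that config entry in place, rsync can usually drop the -e option entirely, since its default ssh transport picks up the Port and IdentityFile for that host (a sketch using the paths from the question):
rsync -avz "/local/dir/" remoteUser@22.33.44.55:"/remote/dir/"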
You have to specify the absolute path to your identity key file. This is probably some sort of quirk in rsync (it can't be perfect, after all).
I ran into this issue just a few days ago :-)
This works for me
rsync -avz --rsh="ssh -p1234 -i ~/.ssh/1234-identity" \
"/local/dir/" remoteUser@22.33.44.55:"/remote/dir/"
Use a key file with rsync:
rsync -rave "ssh -i /home/test/pkey_new.pem" /var/www/test/ ubuntu@231.210.24.48:/var/www/test
Are you executing the command in bash or sh? This might make a difference. Try replacing ~ with $HOME. Try double-quoting the string for the -e option.