Problem with Python script when setting up LDAP for MacOS - python-ldap

I am trying to set up Google secure LDAP on my MacBook Pro running Monterey 12.3, following these instructions from Google. When I run the setup script, it fails with:
request.appendData_(NSData.dataWithBytes_length_(CONFIG, len(CONFIG)))
TypeError: Expecting byte-buffer, got str
See the script from the guide:
#!/usr/bin/python
from OpenDirectory import ODNode, ODSession, kODNodeTypeConfigure
from Foundation import NSMutableData, NSData
import os
import sys
# Reading plist
GOOGLELDAPCONFIGFILE = open(sys.argv[1], "r")
CONFIG = GOOGLELDAPCONFIGFILE.read()
GOOGLELDAPCONFIGFILE.close()
# Write the plist
od_session = ODSession.defaultSession()
od_conf_node, err = ODNode.nodeWithSession_type_error_(od_session, kODNodeTypeConfigure, None)
request = NSMutableData.dataWithBytes_length_(b'\x00'*32, 32)
request.appendData_(NSData.dataWithBytes_length_(CONFIG, len(CONFIG)))
response, err = od_conf_node.customCall_sendData_error_(99991, request, None)
# Edit the default search path and append the new node to allow for login
os.system("dscl -q localhost -append /Search CSPSearchPath /LDAPv3/ldap.google.com")
os.system("bash -c 'echo -e \"TLS_IDENTITY\tLDAP Client\" >> /etc/openldap/ldap.conf' ")
I have tried to find some solutions on Google (e.g. .encode, b'..), but I do not really understand them.
Thanks for the help.

Okay, I found the solution; it was actually posted here earlier:
Error running python script to create google ldap configuration on Macos
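For anyone hitting the same TypeError: under Python 3, NSData.dataWithBytes_length_ expects bytes, not str. A minimal sketch of the fix (based on the linked answer; it assumes the plist is plain UTF-8) is to read the config in binary mode, or encode it before appending:
#!/usr/bin/python3
import sys
from Foundation import NSMutableData, NSData

# Read the plist as bytes so NSData.dataWithBytes_length_ receives a byte-buffer
with open(sys.argv[1], "rb") as f:      # "rb" instead of "r"
    CONFIG = f.read()
# (If you keep text mode, CONFIG = CONFIG.encode("utf-8") achieves the same thing.)

request = NSMutableData.dataWithBytes_length_(b'\x00' * 32, 32)
request.appendData_(NSData.dataWithBytes_length_(CONFIG, len(CONFIG)))
The rest of the script (the customCall and the dscl commands) stays as in the guide.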

Related

Forcing a TLS 1.0 POST request with Requests

To start, I know that TLSv1.0 is ancient and should not be used, but I need to connect to some really old local hardware that doesn't support anything else at the moment.
import ssl  # needed for the ssl.OP_NO_* flags used below
from OpenSSL import SSL
try:
    import urllib3.contrib.pyopenssl
    urllib3.contrib.pyopenssl.inject_into_urllib3()
except ImportError:
    pass
import requests, sys, os, select, socket
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager
from requests.packages.urllib3.util import ssl_
from requests.packages.urllib3.contrib import pyopenssl
CIPHERS = (
    'ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:'
    'ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:AES256-SHA:'
)

class TlsAdapter(HTTPAdapter):

    def __init__(self, ssl_options=0, **kwargs):
        self.ssl_options = ssl_options
        super(TlsAdapter, self).__init__(**kwargs)

    def init_poolmanager(self, *pool_args, **pool_kwargs):
        ctx = SSL.Context(SSL.TLSv1_METHOD)
        self.poolmanager = PoolManager(*pool_args,
                                       ssl_context=ctx,
                                       **pool_kwargs)

session = requests.Session()
adapter = TlsAdapter(ssl.OP_NO_TLSv1_1 | ssl.OP_NO_TLSv1_2)
session.mount("https://", adapter)

data = {"key": "value"}
try:
    r = session.post("https://192.168.1.1", data)
    print(r)
except Exception as exception:
    print(exception)
I've tried several ways. The above code is mostly ripped from similar issues posted here in the past, but Python 3's ssl module no longer supports TLSv1, so it throws an unsupported protocol error. I added the "import urllib3.contrib.pyopenssl" to try to force it to use pyOpenSSL instead, per this urllib3 documentation. The current error with this code is:
load_verify_locations() takes from 2 to 3 positional arguments but 4 were given
I know this is from the verify part of urllib3 context and I need to fix the context for pyOpenSSL but I've been stuck here trying to fix the context.
I analyzed the website in question with "https://www.ssllabs.com/"; the simulator there doesn't use Python for testing. I haven't been successful using Python. However, with JDK 1.8 I was able to comment out the line in the security file as shown in "https://www.youtube.com/watch?v=xSejtYOh4C0" and work around the issue.
The server prefers these cipher suites. Are these ciphers supported in urllib3?
TLS_RSA_WITH_RC4_128_MD5 (0x4) INSECURE 128
TLS_RSA_WITH_RC4_128_SHA (0x5) INSECURE 128
TLS_RSA_WITH_3DES_EDE_CBC_SHA (0xa) WEAK
Right now I'm stuck with the below error:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='{}', port={}): Max retries exceeded with url: /xxx.htm (Caused by ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory')))
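For anyone stuck at the same point: since Python 3.7 the stock ssl module can still be pinned to TLS 1.0 through an explicit SSLContext, as long as the underlying OpenSSL build allows it. Below is a minimal sketch, not a drop-in fix for the code above; the host, the disabled verification, and the SECLEVEL=0 cipher string are assumptions for a local legacy device:
import ssl
import requests
from requests.adapters import HTTPAdapter
from urllib3.poolmanager import PoolManager

class TLSv1Adapter(HTTPAdapter):
    """Pin the connection to TLS 1.0 via an explicit SSLContext."""
    def init_poolmanager(self, connections, maxsize, block=False, **kwargs):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.minimum_version = ssl.TLSVersion.TLSv1
        ctx.maximum_version = ssl.TLSVersion.TLSv1
        ctx.check_hostname = False               # local legacy device, self-signed cert assumed
        ctx.verify_mode = ssl.CERT_NONE
        ctx.set_ciphers("DEFAULT:@SECLEVEL=0")   # re-enable legacy ciphers on OpenSSL 1.1+
        self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize,
                                       block=block, ssl_context=ctx)

session = requests.Session()
session.mount("https://", TLSv1Adapter())
print(session.post("https://192.168.1.1", data={"key": "value"}, verify=False))
If OpenSSL itself was built without TLS 1.0 support, no amount of Python-side configuration will bring it back, and the pyOpenSSL route (or a proxy in front of the device) is what remains.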

Jupyter Password Not Hashed

When I try to set up the jupyter notebook password, I don't get a password hash when I open up the jupyter_notebook_config.json file.
This is the output of the json file:
{
"NotebookApp": {
"password":
"argon2:$argon2id$v=19$m=10240,t=10,p=8$pcTg1mB/X5a3XujQqYq/wQ$/UBQBRlFdzmEmxs6c2IzmQ"
}
}
I've tried running passwd() from Python as well, following the "Preparing a hashed password" instructions found online, but it produces the same result as above: no hash.
Can someone please let me know what I'm doing wrong?
I'm trying to set up a Jetson Nano in similar fashion to the Deep Learning Institute Nano build. With that build you can run Jupyter Lab remotely so the Nano can run headless. I'm trying to do the same thing with no luck.
Thanks!
This is the default algorithm (argon2):
https://github.com/jupyter/notebook/blob/v6.5.2/notebook/auth/security.py#L23
You can provide a different algorithm, such as sha1, if you like:
>>> from notebook.auth import passwd
>>> from notebook.auth.security import passwd_check
>>>
>>> password = 'myPass123'
>>>
>>> hashed_argon2 = passwd(password)
>>> hashed_sha1 = passwd(password, 'sha1')
>>>
>>> print(hashed_argon2)
argon2:$argon2id$v=19$m=10240,t=10,p=8$JRz5GPqjOYJu/cnfXc5MZw$LZ5u6kPKytIv/8B/PLyV/w
>>>
>>> print(hashed_sha1)
sha1:c29c6aeeecef:0b9517160ce938888eb4a6ec9ca44e3a31da9519
>>>
>>> passwd_check(hashed_argon2, password)
True
>>>
>>> passwd_check(hashed_sha1, password)
True
Check whether you have a different Jupyter server running on your machine. It happened to me that I was trying a password over and over on port 8888 while my intended server was on port 8889.
Another time, Anaconda started a server on localhost:8888 while I was trying to reach a mapped port from a Docker container, also on port 8888, and the only way to access it was actually via 0.0.0.0:8888.
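As an aside, for the headless Jetson setup the generated hash just needs to end up in the notebook config. A rough sketch (paths and hash are placeholders; create the file with jupyter notebook --generate-config if it doesn't exist):
# ~/.jupyter/jupyter_notebook_config.py
c = get_config()
c.NotebookApp.ip = '0.0.0.0'            # listen on all interfaces so the Nano can run headless
c.NotebookApp.open_browser = False
c.NotebookApp.port = 8888
c.NotebookApp.password = 'argon2:$argon2id$...'   # paste the full hash returned by passwd()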

pyodbc - [unixODBC][Driver Manager]Data source name not found, and no default driver specified

I am setting up a system to connect to an AWS Redshift database from Python. I suspect there's something wrong in the Python script, because I can connect via isql. I've installed all the relevant packages and can connect via isql as follows:
$ isql rndredshift readonly ***** -v
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
SQL> quit
However, my python script is failing to connect. Here's the script:
import pyodbc
import pandas.io.sql as psql  # psql below is assumed to be pandas.io.sql (provides read_sql)
import sys

def main():
    redshift_conn_str = assemble_connection_string(
        Driver='{PostgreSQL}',
        Server='10.191.4.97',
        ServerName='rndredshift',
        Port='5439',
        Database='prod',
        Uid='readonly',
        Pwd='*******'
    )
    print("===========")
    print(redshift_conn_str)
    print("===========")
    new_conn2 = pyodbc.connect(redshift_conn_str)
    print(psql.read_sql('select top 10 * from rawdb.raw_imprequest_20150101', new_conn2))

def assemble_connection_string(**kwargs):
    return ';'.join([k + '=' + v for (k, v) in kwargs.items()])

if __name__ == '__main__':
    sys.exit(main())
Here's the output:
===========
Uid=readonly;Database=prod;ServerName=rndredshift;Driver={PostgreSQL}; Server=10.191.4.97;Pwd=********;Port=5439
===========
Traceback (most recent call last):
File "test_redshift.py", line 24, in <module>
sys.exit(main())
File "test_redshift.py", line 17, in main
new_conn2 = pyodbc.connect(redshift_conn_str)
pyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnectW)')
The PosgreSQL driver is installed:
$ odbcinst -q -d
[PostgreSQL]
[MySQL]
And the data source is configured:
$ odbcinst -q -s
[rndredshift]
If you're using DSNs, you're going to need to specify that in your connection string. Also, if you want to use DSN-less connections, I believe the keyword is SERVER and not SERVERNAME.
Try this connection string?
Uid=readonly;Database=prod;DSN=rndredshift;Driver={PostgreSQL};Pwd=********;
Make sure you specify the full server name and port in odbc.ini as well. Also, since you're using PostgreSQL, any reason you're not using the native PostgreSQL driver?
https://wiki.postgresql.org/wiki/Psycopg
Good luck!
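Building on that, a quick sketch of a DSN-based pyodbc connection reusing the rndredshift DSN already defined in odbc.ini (password elided as in the question):
import pyodbc

# Connect through the DSN defined in odbc.ini rather than a DSN-less string
conn = pyodbc.connect('DSN=rndredshift;UID=readonly;PWD=*****')
cursor = conn.cursor()
cursor.execute('select top 10 * from rawdb.raw_imprequest_20150101')
for row in cursor.fetchall():
    print(row)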
Also, I've been perplexed over the ways to obtain and install the PostgreSQL driver. When I installed unixODBC, the odbcinst.ini file was created and contained an entry for the PostgreSQL driver that looked like this:
[PostgreSQL]
Description = ODBC for PostgreSQL
Driver = /usr/lib/psqlodbc.so
Setup = /usr/lib/libodbcpsqlS.so
Driver64 = /usr/lib64/psqlodbc.so
Setup64 = /usr/lib64/libodbcpsqlS.so
FileUsage = 1
However, the files for Driver and Driver64 were not on the system. So I installed postgresql-odbc, which gave me the missing libraries. Is there a better way to do this? As I mentioned earlier, isql works fine, so I'm still thinking it's a Python issue.
I decided to try using the psycopg2 package, and I got a connection to work! Here's my script:
import sys
import psycopg2

def main():
    conn_string = "host='10.191.4.97' dbname='prod' user='readonly' password='****' port='5439'"
    print("===========")
    print(conn_string)
    print("===========")
    new_conn2 = psycopg2.connect(conn_string)
    print("Connected using psycopg2!")

if __name__ == '__main__':
    sys.exit(main())
So, while I'm happy that I can connect, the question still remains about pyodbc and the PostgreSQL connection string. Thoughts?
Here's the connection string:
Uid=readonly;Database=prod;ServerName=rndredshift;Driver={PostgreSQL}; Server=10.191.4.97;Pwd=********;Port=5439
Using DSN instead of ServerName didn't work.
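If the DSN route keeps failing, it may still be worth trying a fully DSN-less string with SERVER rather than ServerName, as the earlier answer suggests. A sketch using the values from the question (the DRIVER name has to match the [PostgreSQL] entry reported by odbcinst -q -d):
import pyodbc

# DSN-less connection: the host goes in SERVER, not ServerName
conn_str = (
    'DRIVER={PostgreSQL};'
    'SERVER=10.191.4.97;'
    'PORT=5439;'
    'DATABASE=prod;'
    'UID=readonly;'
    'PWD=*******;'
)
conn = pyodbc.connect(conn_str)
print(conn.cursor().execute('select 1').fetchone())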

Error when starting a secure public server for a notebook - IPython 2.2 and tornado 4.0.2 (Debian)

I created a new profile and set it up to be accessible publicly over HTTPS, as described in the IPython documentation.
Find below the steps I followed.
Generated a hashed password:
In [1]: from IPython.lib import passwd
In [2]: passwd()
Enter password:
Verify password:
Out[2]: 'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed'
Created a certificate:
openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
and created a new profile:
ipython profile create publicServer
Edited the ipython_notebook_config.py file in ~/.ipython/profile_publicServer/:
c = get_config()
# Kernel config
c.IPKernelApp.pylab = 'inline' # if you want plotting support always
# Notebook config
c.NotebookApp.certfile = u'/absolute/path/to/your/certificate/mycert.pem'
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False
c.NotebookApp.password = u'sha1:bcd259ccf...[your hashed password here]'
# It is a good idea to put it on a known, fixed port
c.NotebookApp.port = 9999
Then I executed ipython from a terminal to start the notebook using the created profile:
ipython notebook --profile=publicServer
When I try to access it using a browser, from any IP (including localhost):
https://localhost:9999
The browser hangs and never loads the page.
On the terminal I get the following error message
ERROR:tornado.application:Exception in callback (<socket._socketobject object at 0x7f76ba974980>, <function null_wrapper at 0x7f76ba918848>)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 833, in start
handler_func(fd_obj, events)
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 201, in accept_handler
callback(connection, address)
File "/usr/local/lib/python2.7/dist-packages/tornado/tcpserver.py", line 225, in _handle_connection
do_handshake_on_connect=False)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 434, in ssl_wrap_socket
context = ssl_options_to_context(ssl_options)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 411, in ssl_options_to_context
context.load_cert_chain(ssl_options['certfile'], ssl_options.get('keyfile', None))
TypeError: coercing to Unicode: need string or buffer, NoneType found
Could anybody help me fix this issue?
Cheers
I ran into this problem with a customer. It looks like the Tornado library updated how it does things, and needs to be explicitly told that the certificate/key generated by openssl are the same file.
Here is what you need: in ~/.ipython/profile_{yourprofile}/ipython_notebook_config.py, add the line
c.NotebookApp.keyfile = u'/absolute/path/to/your/certificate/mycert.pem'
Essentially, copy the certfile line and replace certfile with keyfile.
See: Running the Notebook Server, specifically the section "Using SSL/HTTPS".
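For reference, the SSL-related block of ipython_notebook_config.py would then look roughly like this (the same self-signed .pem is used for both settings, since the openssl command above writes the key and the certificate into one file):
c = get_config()
c.NotebookApp.certfile = u'/absolute/path/to/your/certificate/mycert.pem'
c.NotebookApp.keyfile = u'/absolute/path/to/your/certificate/mycert.pem'  # same file as certfile
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False
c.NotebookApp.password = u'sha1:bcd259ccf...[your hashed password here]'
c.NotebookApp.port = 9999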

Openstack-Keystone failing to start

I've tried almost everything in the past couple of days to get keystone running, to no avail.
Everything runs on the same host: the virtualization, OpenStack, and keystone. I've tried setting up keystone with 127.0.0.1, localhost, and the host's IP, with no luck. Here is my keystone.conf:
[DEFAULT] log_file = /var/log/keystone/keystone.log
admin_token = ***
bind_host = 192.168.33.11
public_port = 5000
admin_port = 35357
compute_port = 8774
# === Logging Options ===
# Print debugging output verbose = True
# Print more verbose output
# (includes plaintext request logging, potentially including passwords)
# debug = False
# Name of log file to output to. If not set, logging will go to stdout. log_file = keystone.log
# The directory to keep log files in (will be prepended to --logfile) log_dir = /var/log/keystone
# Use syslog for logging.
# use_syslog = False
# syslog facility to receive log lines
# syslog_log_facility = LOG_USER
# If this option is specified, the logging configuration file specified is
# used and overrides any other logging options specified. Please see the
# Python logging module documentation for details on logging configuration
# files. log_config = logging.conf
# A logging.Formatter log message format string which may use any of the
# available logging.LogRecord attributes.
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# Format string for %(asctime)s in log records.
# log_date_format = %Y-%m-%d %H:%M:%S
# onready allows you to send a notification when the process is ready to serve
# For example, to have it notify using systemd, one could set shell command:
# onready = systemd-notify --ready
# or a module with notify() method:
# onready = keystone.common.systemd
[sql] connection = mysql://keystone:***@localhost/keystone
# idle_timeout = 200
[identity] driver = keystone.identity.backends.sql.Identity
[catalog] template_file = /etc/keystone/default_catalog.templates driver = keystone.catalog.backends.sql.Catalog
# dynamic, sql-based backend (supports API/CLI-based management commands)
# driver = keystone.catalog.backends.sql.Catalog
# static, file-based backend (does *NOT* support any management commands)
# driver = keystone.catalog.backends.templated.TemplatedCatalog
# template_file = default_catalog.templates
[token] driver = keystone.token.backends.sql.Token
# driver = keystone.token.backends.kvs.Token
# Amount of time a token should remain valid (in seconds)
# expiration = 86400
I've enabled logging in the logging.conf file and set the level to DEBUG and INFO, but nothing shows up in the log files.
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# ps aux | grep keystone
root 25580 0.0 0.0 103236 880 pts/1 S+ 09:41 0:00 grep keystone
[root@* keystone]#
Any ideas will be greatly appreciated. Thank you.
As I mentioned in the comment, I've never seen a config file with the section headings on the same line as config option:
[DEFAULT] log_file = /var/log/keystone/keystone.log
I've also seen it like this instead:
[DEFAULT]
log_file = /var/log/keystone/keystone.log
However, I have no idea if this is related to your issue.
To enable debug-level logging, make sure you set the following in /etc/keystone/logging.conf:
[logger_root]
level=DEBUG
Then try running keystone manually instead of as a service:
$ sudo -u keystone bash
$ HOME=/var/lib/keystone keystone-all --debug
Hopefully you'll see a relevant error message on standard out.
(I believe it will still send the logging to /var/log/keystone/keystone.log, not sure how to actually get it to log to standard out when running manually like this).
Add a valid token for admin_token. It should not be "*".
Check the below line:
[sql] connection = mysql://keystone:*@localhost/keystone
It should be something like:
connection = mysql://keystone:keystone@localhost/keystone
Refer to this url for an example keystone.conf file
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/keystone-conf-file.html
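One way to sanity-check that connection value before restarting keystone is to open the same URL with SQLAlchemy, which keystone uses underneath. A small sketch, assuming the keystone database and user already exist and a MySQL driver such as MySQL-Python is installed:
from sqlalchemy import create_engine, text

# Same URL format as the [sql] connection setting in keystone.conf
engine = create_engine('mysql://keystone:keystone@localhost/keystone')
with engine.connect() as conn:
    print(conn.execute(text('SELECT 1')).scalar())  # prints 1 if credentials and DB are reachable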
I ran into this issue as well. I am running Ubuntu 12.04 LTS. What I found was that the service start command in /etc/init/keystone.conf uses start-stop-daemon to run the service. It was written for a newer version than the one on my box: the --chdir variable is not accepted as an input. Once I removed that line, keystone started right up.
Try running:
start-stop-daemon --start --chuid keystone --name keystone --exec /usr/bin/keystone-all
/etc/init/keystone.conf after the change:
description "Keystone API server"
author "Soren Hansen <soren#linux2go.dk>"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec start-stop-daemon --start --chuid keystone \
--name keystone \
--exec /usr/bin/keystone-all
Check if your IP address is equal to HOST_IP=... in localrc.
This might be due to keystone not starting properly, which leaves port 35357 not in listening mode.
This seems to be anomalous behavior of the keystone service.
Below are steps that worked on my system for a Havana installation on Ubuntu 12.04, kernel 3.2.0-67-generic, after a day of headache around this issue. Try these steps, preferably in the same order.
1) Remove keystone package:-
apt-get remove keystone
2) Reboot your system
reboot
3) After the reboot, install keystone again.
apt-get install keystone
4) Check status of keystone service
service keystone status
It will show start/running
5) Now make the changes you need in /etc/keystone/keystone.conf.
After making changes in the conf file, DO NOT RESTART THE KEYSTONE SERVICE.
Use the stop and start commands to get the effect of a restart, but don't use restart:
service keystone stop
service keystone start
For further help, here is a dump of my CLI:
http://pastebin.com/sduuFCL7
There are multiple problems with the Icehouse documentation and install. packstack is broken, so the only way to get started is to manually follow the upstream docs for your distro. It is important to set up keystone correctly first, before moving on, because the other services rely on it.
The paste file /usr/share/keystone/keystone-dist-paste.ini should be copied to /etc/keystone/ so it is accessible to the config scripts:
cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/
chown keystone:keystone /etc/keystone/*
Make sure to update keystone.conf with the new config_file value.
The documentation is wrong about the MySQL connection: it should go under [sql], not [database], so:
openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@controller/keystone
The name controller should resolve to whatever MySQL is bound to. I would add it to /etc/hosts like this, if [mysqld]/bind-address in /etc/my.cnf is 10.1.1.100:
10.1.1.100 controller
Make sure to uncomment log_file in keystone.conf to see what is happening.
I was facing a similar issue. I followed the steps below and the openstack-keystone service started.
Edit the /etc/keystone/keystone.conf file and complete the following actions:
In the [DEFAULT] section
[DEFAULT]
admin_token = ADMIN_TOKEN
In the [database] section
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
In the [token] section, configure the UUID token provider and SQL driver
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
In the [revoke] section
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
After making the above changes, populate the Identity service database using the command:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Start the openstack-keystone service using the command below:
systemctl start openstack-keystone
