OpenLDAP: I tried to load a custom schema file in the OpenLDAP server, but Apache Directory Studio and the OpenLDAP server can't find the custom attributes and object class

Working environment:
OpenLDAP on Rocky Linux 8.5 (Green Obsidian); I followed this installation guide.
GUI: Apache Directory Studio
I wrote the custom schema file below, added it (hospitalperson.schema) to slapd.conf, and restarted the slapd daemon.
attributetype ( 1.3.6.1.4.1.59394.3.1 NAME 'departmentName'
    DESC 'departmentName'
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.12 )
attributetype ( 1.3.6.1.4.1.59394.3.2 NAME 'serviceName'
    DESC 'serviceName'
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.12 )
objectclass ( 1.3.6.1.4.1.59394.4.1 NAME 'hospitalperson'
    DESC 'hospitalperson'
    SUP inetOrgPerson
    STRUCTURAL
    MAY ( serviceName $ departmentName ) )
And this is the relevant part of the slapd.conf file:
...
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/ppolicy.schema
include /etc/openldap/schema/hospitalperson.schema
include /etc/openldap/schema/nis.schema
include /etc/openldap/schema/misc.schema
...
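One thing worth double-checking in this list: hospitalperson is declared with SUP inetOrgPerson, and inetOrgPerson itself is defined in inetorgperson.schema (which in turn needs core and cosine). If that include is in the elided part of the file, it must appear before hospitalperson.schema, roughly in this order (a sketch of the ordering only, not the full file):
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/hospitalperson.schema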
After that, I tried to add a new entry whose objectclass is hospitalperson in Apache Directory Studio, but I couldn't find it in the object classes list (even after refreshing!).
[image: Apache Directory Studio showing no custom objectclass]
I also tried to add the attributes serviceName and departmentName to an existing entry, but ldapmodify gives this error message:
sudo ldapmodify -w (password) -x -D "cn=admin,dc=ldapmaster,dc=xxxxxx,dc=com" -H ldapi:/// -f addnew.ldif
modifying entry "uid=test1,ou=h00003,ou=hospitals,dc=ldapmaster,dc=xxxxxx,dc=com"
ldap_modify: Undefined attribute type (17)
additional info: departmentName: attribute type undefined
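(For context, the kind of LDIF that triggers this would look roughly like the following; this is a hypothetical reconstruction of addnew.ldif based on the command above, not the actual file, and the departmentName value is invented:)
dn: uid=test1,ou=h00003,ou=hospitals,dc=ldapmaster,dc=xxxxxx,dc=com
changetype: modify
add: objectClass
objectClass: hospitalperson
-
add: departmentName
departmentName: cardiology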
What other factors should I check?
I searched other questions and answers, but nothing solved my problem :(
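A quick way to tell whether the server ever loaded the schema is to query the live schema through the config backend; a minimal sketch, assuming root shell access on the server and slapd listening on ldapi:///:
ldapsearch -H ldapi:/// -Y EXTERNAL -b cn=schema,cn=config -LLL dn
If no hospitalperson schema entry shows up, note that on Rocky Linux 8 the packaged slapd normally runs from the dynamic /etc/openldap/slapd.d (cn=config) directory and ignores slapd.conf whenever slapd.d exists. In that case the schema has to be converted into cn=config form, for example with slaptest against an empty target directory (the /tmp path here is only for illustration):
slaptest -f /etc/openldap/slapd.conf -F /tmp/slapd.d.new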

Related

Percona tool pt-table-checksum does not return results

I have a MariaDB (10.4.14) Master-Slave configuration and I want to use Percona pt-table-checksum. I have installed Percona Toolkit on the Master host, which includes pt-table-checksum 3.2.1.
I have created a user to run pt-table-checksum and granted it privileges:
GRANT REPLICATION SLAVE, PROCESS, SUPER, SELECT ON *.* TO 'checksum_user'@'%' IDENTIFIED BY 'checksum_password';
GRANT ALL PRIVILEGES ON percona.* TO 'checksum_user'@'%';
However, when I try to run the tool, I always get the following error:
pt-table-checksum --replicate=percona.checksums --ignore-databases mysql --no-check-binlog-format h=localhost, u=checksum_user, p=checksum_password
Usage: pt-table-checksum [OPTIONS] [DSN]
Errors in command-line arguments:
* More than one host specified; only one allowed
Instead of using the DSN, I have also tried the options --host, --user and --password, but the results are the same.
What am I doing wrong?
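One detail stands out in the command as pasted: a pt-table-checksum DSN is a single comma-separated argument with no spaces, so the shell splits "h=localhost, u=checksum_user, p=checksum_password" into three separate arguments, each parsed as its own DSN, which would produce exactly the "More than one host specified" error. A sketch of the same invocation with the spaces removed:
pt-table-checksum --replicate=percona.checksums --ignore-databases mysql --no-check-binlog-format h=localhost,u=checksum_user,p=checksum_password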

Spring XD - Could not find module with name 'ftphdfs' and type 'source'

I am running a spring-xd-1.3.1.RELEASE runtime container. When I tried to look up the module that sources files from FTP to HDFS, I got the exception below in the shell:
xd:>module info --name source:ftphdfs
Command failed org.springframework.xd.rest.client.impl.SpringXDException: Could
not find module with name 'ftphdfs' and type 'source'
Also, when I tried to use the http endpoint as a source, I get an exception like this in the shell:
xd:>module info --name source:http
Information about source module 'http':
Injects data from http endpoint.
Options (name: description, default, type):
https: true for https:// (default: false, type: boolean)
maxContentLength: the maximum allowed content length (default: 1048576, type: int)
messageConverterClass: the name of a custom MessageConverter class, to convert HttpRequest to Message; must have a constructor with a 'MessageBuilderFactory' parameter (default: org.springframework.integration.x.http.NettyInboundMessageConverter, type: java.lang.String)
port: the port to listen to (default: 9000, type: int)
sslPropertiesLocation: location (resource) of properties containing the location of the pkcs12 keyStore and pass phrase (default: classpath:httpSSL.properties, type: java.lang.String)
outputType: how this module should emit messages it produces (default: <none>, type: org.springframework.util.MimeType)
The tech stack I'm currently using is given below.
1) Hadoop 2.7.2
2) Spring-XD-1.3.1.RELEASE
3) Redis 2.6 (Windows version) - I use this as the transport
4) ZooKeeper 3.8
Any help would be appreciated.
It's a job, not a stream source...
xd:>module info job:ftphdfs
Information about job module 'ftphdfs':
...
I don't see an exception for source:http above - just a description of the source.
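For completeness, a job module is deployed with job create rather than stream create; a minimal sketch (the job name is arbitrary and the module's FTP/HDFS options are omitted here):
xd:>job create myFtpHdfsJob --definition "ftphdfs" --deploy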

Error on starting the application Puppet in the Generic enablers Cosmos

Good afternoon,
I have installed the Cosmos Generic Enabler, following the manual BigData Analysis - Installation and Administration Guide. When I reached 'Step 7: applying Puppet' and executed the commands, the following errors appeared in the file puppet.err:
Error: Could not prefetch yumrepo provider 'inifile': Section 'openvz-utils' is already defined, cannot re-define in /etc/yum.repos.d/openvz.repo
Description: there is a conflict between the section names of the files /etc/yum.repos.d/cosmos-openvz.repo and /etc/yum.repos.d/openvz.repo:
cat /etc/yum.repos.d/cosmos-openvz.repo
[openvz-utils]
...
[openvz-kernel-rhel6]
...
cat /etc/yum.repos.d/openvz.repo
[openvz-utils]
...
[openvz-kernel-rhel6]
...
[openvz-kernel-rhel6-testing]
...
Solution: I changed the section names in the file /etc/yum.repos.d/openvz.repo, for example: [openvz-utils_1]
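(For illustration, after the rename the sections of /etc/yum.repos.d/openvz.repo no longer collide with those of cosmos-openvz.repo; the _1 suffix is arbitrary:
[openvz-utils_1]
...
[openvz-kernel-rhel6_1]
...
[openvz-kernel-rhel6-testing_1]
...)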
Error: Could not prefetch database_grant provider 'mysql': Execution of '/usr/bin/mysql mysql -Be describe user' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
Description: the file mysql.sock was not found in the folder /var/lib/mysql/.
Solution: I installed mysql-server.x86_64:
yum install mysql-server.x86_64
At the end of the installation, I restarted the service:
/etc/init.d/mysqld stop
/etc/init.d/mysqld start
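(A quick check that the fix took, assuming the default socket path from the error message and a fresh installation where the MySQL root user has no password yet:
ls -l /var/lib/mysql/mysql.sock
mysqladmin status)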
Error: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y list vzstats' returned 1: Error: Cannot retrieve repository metadata (repomd.xml) for repository: ambari. Please verify its path and try again
Description: this error appears on the machine of the Master node; it is caused by the configuration of the file [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/hieradata/my_environment/common.yaml, indicated in 'Step 6: Puppet configuration'. Concretely, by the URL that uses the IP 130.206.81.65.
Solution: in the file [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/hieradata/my_environment/common.yaml, change the line:
ambari::params::repo_url: 'http://130.206.81.65/cosmos/ambari/'
to
ambari::params::repo_url: 'http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA'
Error: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y list vzstats' returned 1: Error: Cannot retrieve repository metadata (repomd.xml) for repository: cosmos-libvirt. Please verify its path and try again
Description: it is the same problem as the previous error. The difficulty with this one is that I cannot modify the file [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/hieradata/my_environment/common.yaml at the line:
cosmos::params::cosmos_repo_deps_url: 'http://130.206.81.65/cosmos/rpms/cosmos-deps'
because this line is used in several files:
cat /etc/yum.repos.d/cosmos-libvirt.repo
[cosmos-libvirt]
name=Cosmos LibVirt with OpenVZ - v1.0.5 - NO PolKIT
baseurl=http://130.206.81.65/cosmos/rpms/cosmos-deps//libvirt
gpgcheck=0
priority=10
enabled=1
cat /etc/yum.repos.d/cosmos-openvz.repo
[openvz-utils]
name=OpenVZ utilities
baseurl=http://130.206.81.65/cosmos/rpms/cosmos-deps//OpenVZ/openvz-utils
enabled=1
gpgcheck=0
priority=1
[openvz-kernel-rhel6]
name=OpenVZ RHEL6-based kernel
baseurl=http://130.206.81.65/cosmos/rpms/cosmos-deps//OpenVZ/openvz-kernel-rhel6
enabled=1
gpgcheck=0
priority=1
It is also not possible simply to modify the previous file, because executing the command below (indicated in 'Step 7: applying Puppet') overwrites the modification:
puppet apply --debug --verbose \
--modulepath [COSMOS_TMP_PATH]/puppet/modules/:[COSMOS_TMP_PATH]/puppet/modules_third_party/ \
--environment my_environment --hiera_config [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/hiera.yaml \
--manifestdir [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/ [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/site.pp \
> puppet.out 2> puppet.err
Solution: https://github.com/telefonicaid/fiware-cosmos-platform/issues/4
I need help with the error:
Error: /Stage[main]/Ambari::Server::Config/Augeas[ambari-config-repoinfo]: Could not evaluate: Saving failed, see debug
Could someone lend me a hand with this last error?
Thank you in advance.
PS: Apologies if anything is badly written.

~/.ssh/id_rsa.pub not found error while installing capistrano as ansible playbook

I am trying to install https://github.com/roots/bedrock-ansible to get a Bedrock deployment (http://roots.io/wordpress-stack/) running.
When I run "vagrant up", after some time I get this error:
TASK: [capistrano-setup | Setup deploy group] *********************************
skipping: [default]
TASK: [capistrano-setup | Setup deploy user] **********************************
skipping: [default]
TASK: [capistrano-setup | Adding public key to server] ************************
fatal: [default] => could not locate file in lookup: ~/.ssh/id_rsa.pub
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/johannes/site.retry
default : ok=46 changed=16 unreachable=1 failed=0
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
I do not have a clue how I can fix this. Do you have an idea?
It seems the role is trying to find your local public key. It should be in the location in the error message '~/.ssh/id_rsa.pub', but it's not. So either you don't have one, or you keep it in another location.
If you're not familiar with generating SSH keys you probably don't have one. I personally like the GitHub help page for this: https://help.github.com/articles/generating-ssh-keys/
(you only have to perform steps 1 and 2).
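(In short, generating a default key pair looks like this; the email is only a comment label:
ssh-keygen -t rsa -C "your_email@example.com"
Accepting the default file location creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub, which is exactly the path the role is looking for.)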
If you do have SSH keys, but in a different location, the capistrano-install role in bedrock uses some variables:
deploy_user: deploy
deploy_keys:
  - "~/.ssh/id_rsa.pub"
So you can set (multiple) public key files in the deploy_keys list and they will be added to the deploy_user's authorized keys.
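For example, pointing the role at keys stored elsewhere could look like this (a sketch; the file names are placeholders, and where you set the variable depends on how you organize your Ansible variables):
deploy_keys:
  - "~/.ssh/my_project_key.pub"
  - "~/.ssh/colleague_key.pub"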
All this is needed because Capistrano will use the deploy user to connect to the remote server later. http://blakesmith.me/2010/02/08/understanding-public-key-private-key-concepts.html

Configuring Hue With CDH4.3

I am trying to configure Hue with CDH 4.3. I am facing a configuration error for HDFS. It says: "Current value: http://XXX.XX.XX.XXX:50070/webhdfs/v1/ Filesystem root '/' should be owned by 'hdfs'"
But in my case the owner of the root folder is 'user', so how can I tell Hue that the owner of the root folder is 'user'?
You can update DEFAULT_HDFS_SUPERUSER to 'user', but note that this is not the officially recommended way and it might break things.
Modify the custom "hdfs-site" in the HDFS part and add the key-values below:
hadoop.proxyuser.hue.hosts=*
hadoop.proxyuser.hue.groups=*
Then you should restart the cluster.
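(If you maintain hdfs-site.xml by hand rather than through a management console, the equivalent XML is this sketch:
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>)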
I thought maybe usermod -a -G root 'user', so that 'user' gets the /root filesystem ACLs. You also need to modify the Hue configuration file pseudo-distributed.ini to make sure the webserver runs as this user:
server_user='user'
server_group='user'
