I am setting up SSSD for one of our HDP setups.
While SSSD is trying to sync users and groups, I am getting the following error message:
(Tue Aug 29 07:58:12 2017) [sssd[be[LDAP]]] [sdap_save_user] (0x0020): User [ambari-qa] filtered out! (primary gid out of range)
(Tue Aug 29 07:58:12 2017) [sssd[be[LDAP]]] [sdap_save_user] (0x0020): Failed to save user [ambari-qa]
Any idea how to overcome this "primary gid out of range" error?
Do you use min_id or max_id in your config? If so, you need to widen that range to include the primary GID of the user being cached.
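A minimal sketch of the relevant part of /etc/sssd/sssd.conf (the range values here are assumptions; adjust them to your environment):

[domain/LDAP]
# widen the range so the user's primary GID falls inside it;
# max_id = 0 means no upper limit
min_id = 500
max_id = 0

After changing the range, you usually need to clear the cache (for example with sss_cache -E) and restart sssd before the user appears.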
I have been trying to run ClickHouse on an EC2 instance provisioned with Terraform. So far the EC2 instance runs well and I have access to http://localhost:8123. However, when I try to access localhost:8123/play I get the following message:
There is no handle /play
Use / or /ping for health checks.
Or /replicas_status for more sophisticated health checks.
Send queries from your program with POST method or GET /?query=...
Use clickhouse-client:
For interactive data analysis:
clickhouse-client
For batch query processing:
clickhouse-client --query='SELECT 1' > result
clickhouse-client < query > result
I don't understand why this is happening, as I was not getting that error when running locally.
When I check the status of the ClickHouse server I get the following output:
● clickhouse-server.service - ClickHouse Server
Loaded: loaded (/lib/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
Mar 25 12:14:35 systemd[1]: Started ClickHouse Server.
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: clickhouse_remote_servers
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: clickhouse_compression
Mar 25 12:14:35 clickhouse-server[11774]: Logging warning to /var/log/clickhouse-server/clickhouse-server.log
Mar 25 12:14:35 clickhouse-server[11774]: Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: networks
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: networks
Mar 25 12:14:37 clickhouse-server[11774]: Include not found: clickhouse_remote_servers
Mar 25 12:14:37 clickhouse-server[11774]: Include not found: clickhouse_compression
I don't know if this will help, but maybe it is related to the problem (the log files are empty).
Another question I have, unrelated to the problem above, is about how ClickHouse actually works, because the many articles on it are not very clear to me. The articles I've been reading often mention "nodes". So far my understanding is that ClickHouse runs on servers on which we define clusters; inside those clusters we put shards, and in each shard we put replicas, the so-called "nodes" (see the sketch below). As we will be running in production, I just want to make sure that when we talk about "nodes" we are talking about containers acting as compute units, or whether it is something else entirely.
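For reference, this is the kind of layout I mean, going by the remote_servers section of the server config (a sketch; the cluster name, hosts, and ports are made up):

<remote_servers>
  <my_cluster>
    <shard>
      <replica><host>ch-host-1</host><port>9000</port></replica>
      <replica><host>ch-host-2</host><port>9000</port></replica>
    </shard>
    <shard>
      <replica><host>ch-host-3</host><port>9000</port></replica>
      <replica><host>ch-host-4</host><port>9000</port></replica>
    </shard>
  </my_cluster>
</remote_servers>

Each <replica> is a separate clickhouse-server process, usually one per machine or container, which is what I assume the articles mean by a "node".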
So far I've tried opening all ingress and egress ports, but that did not fix the problem. I've checked the ClickHouse documentation, which mentions custom HTTP endpoints, but nothing covers this error.
After upgrading Artifactory from 6.16 to the latest release (7.2.1), I get a 404 when trying to access the UI in the browser.
In derby.log I see the following:
Tue Mar 24 06:26:39 UTC 2020 Thread[localhost-startStop-2,5,main] (XID = 67000717), (SESSIONID = 3), (DATABASE = /opt/jfrog/artifactory/var/data/artifactory/derby), (DRDAID = null), Cleanup action starting
Tue Mar 24 06:26:39 UTC 2020 Thread[localhost-startStop-2,5,main] (XID = 67000717), (SESSIONID = 3), (DATABASE = /opt/jfrog/artifactory/var/data/artifactory/derby), (DRDAID = null), Failed Statement is: INSERT INTO access_configs (config_name, modified, data) VALUES (?, ?, ?) with 3 parameters begin parameter #1: shared.security.joinKey :end parameter begin parameter #2: 1585031199475 :end parameter begin parameter #3: BLOB:Length=93 :end parameter
ERROR 23505: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'ACCESS_CONFIGS_PK' defined on 'ACCESS_CONFIGS'.
This error should not prevent Artifactory from starting. A 404 after migration to Artifactory 7 usually means you didn't change your reverse proxy configuration from port 8081 (on which Artifactory ran in version 6) to 8082 (the new port for Artifactory in version 7).
While the embedded Tomcat redirects directly, if you use a reverse proxy, such as Nginx, you have to update the redirection rules manually, as described here.
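For example, with Nginx the upstream port is typically the only thing that needs to change (a sketch; the host and location are assumptions, adjust to your setup):

location / {
    # Artifactory 6 listened on 8081; Artifactory 7 serves the UI via 8082
    proxy_pass http://127.0.0.1:8082;
    proxy_set_header Host $host;
}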
Hi, I have been trying to start up nodes in non-dev mode for Corda V3.
After starting the node, I am experiencing the following error during restart: java.security.cert.CertPathValidatorException: The issuing certificate for C=UK, L=London, O=NetworkMapAndNotary has role NETWORK_MAP, expected one of [INTERMEDIATE_CA, NODE_CA]
The roles I followed are described in this link: https://docs.corda.net/head/permissioning.html#certificate-role-extension
The role is obtained from the Corda Role OID (1.3.6.1.4.1.50530.1.1).
Any pointers for this issue?
When I followed dev mode and assigned my NetworkMapAndNotary certificate role 4, it fails to start up with the error: java.lang.IllegalArgumentException: Incorrect cert role: NODE_CA at net.corda.nodeapi.internal.network.NetworkMapKt.verifiedNetworkMapCert(NetworkMap.kt:48) ~[corda-node-api-corda-3.0.jar:?]
On a side note: I tried to follow the dev-mode certificate creation and noticed that the dev-mode NetworkMapAndNotary certificate is tagged as a node (role 4). Why is that so?
Certificate[2]:
Owner: O=NetworkMapAndNotary, L=London, C=UK
Issuer: C=UK, L=London, OU=corda, O=R3, CN=Corda Node Intermediate CA
Serial number: 39551bff61207fb6
Valid from: Mon Mar 26 07:00:00 ICT 2018 until: Thu May 20 07:00:00 ICT 2027
Certificate fingerprints:
MD5: D1:8C:4D:83:F2:A7:F4:DA:60:05:E3:69:2C:30:FF:20
SHA1: E5:4D:01:A5:68:01:73:59:8B:7A:3D:0B:28:4E:35:C4:CD:DE:C7:52
SHA256: 3F:D6:24:E5:C8:9F:BE:EE:D4:99:D7:2C:85:50:F0:A8:26:46:84:D7:FB:3A:42:54:F2:12:64:51:48:58:FD:CF
Signature algorithm name: SHA256withECDSA
Version: 3
Extensions:
#1: ObjectId: 1.3.6.1.4.1.50530.1.1 Criticality=false
0000: 02 01 04
(this extension value, 02 01 04, is the DER encoding of INTEGER 4, i.e. the node role mentioned above)
I resolved it by creating two different certificates, following this diagram: https://docs.corda.net/_images/certificate_structure.png
Basically, I needed to create two certificates instead of one:
a self-signed certificate for the network map (network map role)
another signed certificate for the node CA (node role)
One issue here is that Corda's network bootstrapper tool (NetworkBootstrapper.kt) comes with a hard-coded call inside its installNetworkParameters function: it always calls createDevNetworkMapCa() to generate a dev key pair, regardless of whether you are in dev mode or not.
I customized the file to use the self-signed network map certificate, adding the role extension. The node certificate remains as is, but the network map key becomes a one-time key used only to generate the network-parameters file for each node; the node role is always used for node startup.
The restart was failing because it saw a network-map-role certificate acting as another node role in the network.
The Network Map has been redesigned in Corda V3. Take a look at the following blog post and the docs here.
Try removing the Network Map identity.
Emails are not being delivered to particular email IDs.
We are using Sentora panel and Postfix mail server.
Error message:
Command died with signal 6: "/usr/libexec/dovecot/deliver"
Mail log:
Feb 14 09:50:27 host postfix/pipe[24913]: CBD7D2010A5: to=,
relay=dovecot, delay=13047, delays=13045/0/0/1.3, dsn=4.3.0,
status=SOFTBOUNCE (Command died with signal 6:
"/usr/libexec/dovecot/deliver")
Please help.
Signal 6 is SIGABRT, which a process typically raises against itself (via abort()) when it hits an internal error, in this case inside Dovecot's deliver binary. There are a number of reasons this could happen.
You can turn on LDA logging within your Dovecot config to get more insight on what's actually happening:
protocol lda {
  ...
  # remember to give proper permissions for these files as well
  log_path = /var/log/dovecot-lda-errors.log
  info_log_path = /var/log/dovecot-lda.log
}
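You can also reproduce the crash outside Postfix by invoking the deliver binary by hand and checking the exit status (a sketch; the user name and message file are assumptions):

# feed a saved test message to the LDA for one of the affected users
/usr/libexec/dovecot/deliver -d someuser < test-mail.eml
echo $?   # 134 = 128 + 6, i.e. SIGABRT, matching the Postfix log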
This can also happen when mail_temp_dir (default: /tmp) does not have enough space to extract attachments. It was fixed in https://github.com/dovecot/core/commit/43d7f354c44b358f45ddd10deb3742ec1cc94889, but the fix is not yet available in some Linux distributions (such as Debian Bullseye).
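Until that fix reaches your distribution, one workaround is to point the temp directory at a filesystem with more room (a sketch, assuming /var/tmp has the space):

# dovecot.conf: extract attachments somewhere roomier than /tmp
mail_temp_dir = /var/tmp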
proenv>proserve dbname -S 2098 -H hostname -B 10000
OpenEdge Release 11.6 as of Fri Oct 16 19:02:26 EDT 2015
11:00:35 BROKER This broker will terminate when session ends. (5405)
11:00:35 BROKER The startup of this database requires 46Mb of shared memory. Maximum segment size is 1024Mb.
11:00:35 BROKER 0: dbname is a void multi-volume database. (613)
11:00:35 BROKER : Removed shared memory with segment_id: 39714816 (16869)
11:00:35 BROKER ** This process terminated with exit code 1. (8619)
I am getting the above error when I try to start the Progress database.
This is the problem:
11:00:35 BROKER 0: dbname is a void multi-volume database. (613)
My guess is you have just created the DB using prostrct create. You need to procopy an empty DB into your DB so that it has the schema tables.
procopy empty yourdbname
See: http://knowledgebase.progress.com/articles/Article/P7713
The database is void, meaning it does not have any metaschema.
First create your database using a .st file (with prostrct create), then copy the metaschema tables using emptyn.
Example: procopy emptyn urdbname
Then try to start your database.
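For completeness, the whole sequence might look like this (a sketch; the structure file below is a minimal made-up example, not your actual layout):

# dbname.st: one before-image extent and the schema area, both in the current directory
b .
d "Schema Area":6,32;1 .

proenv>prostrct create dbname dbname.st
proenv>procopy emptyn dbname
proenv>proserve dbname -S 2098 -H hostname -B 10000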