I am using Ansible to create a server in the Hetzner Cloud; the playbook reads:
- name: create the server at Hetzner
  hetzner.hcloud.hcloud_server:
    name: "{{ server_hostname }}"
    enable_ipv4: false
    enable_ipv6: false
    server_type: cx11
    location: "{{ server_location }}"
    image: ubuntu-22.04
    ssh_keys:
      - "mykey"
    state: present
    api_token: "{{ hetzner_secret }}"
    private_networks: ipfire
  register: server
My aim is to integrate the new server into the private network named 'ipfire' that I have previously created. The server should not be reachable from the internet, so I have disabled IPv4 and IPv6. Instead, I'd like to connect to the private network 'ipfire' via OpenVPN and SSH into the server from there.
Unfortunately, I get an error message as follows:
PLAY [Order servers] ********************************************************************************************************
TASK [hetznerserver : create the server at Hetzner] *************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (hetzner.hcloud.hcloud_server) module: private_networks. Supported parameters include: rebuild_protection, api_token, location, enable_ipv6, upgrade_disk, ipv4, endpoint, ipv6, firewalls, server_type, state, force, labels, ssh_keys, delete_protection, image, id, name, enable_ipv4, placement_group, force_upgrade, user_data, datacenter, rescue_mode, allow_deprecated_image, volumes, backups."}
PLAY RECAP ******************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The private_networks parameter does not seem to work like this?
Error messages like "Unsupported parameters for (<moduleName>) module: <givenParameter>. Supported parameters include: <supportedParametersList>" usually mean the module was called with a parameter it does not recognize.
Therefore one may need to look up the respective documentation, in this case hcloud_server module – Create and manage cloud servers on the Hetzner Cloud.
If the documentation shows that the parameter in question is available, this indicates
either a version mismatch, meaning the installed version of the module is too old and an update is necessary,
or a bug within the module code, in which case further debugging and investigation of the module code is necessary.
Code and Documentation Links
Community Authors > hetzner > hcloud
ansible-collections / hetzner.hcloud
After further investigation it might turn out that the parameter in question was introduced only recently, for example
GitHub hetzner.hcloud Issue #150 "Unable to create cloud server without public ipv4 and ipv6"
GitHub hetzner.hcloud Pull #160 "Add possibility to specify private network when creating or updating servers"
which indicates, in your example case, that you'll need to update the Ansible collection in question, since the private_networks parameter was not available in the version of the module you are using but was only introduced as of v1.9.0.
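A minimal sketch of how to check and update the collection from the command line (this assumes you manage collections via ansible-galaxy; any version at or above v1.9.0 should include the parameter):

    # show the currently installed version of the collection
    ansible-galaxy collection list hetzner.hcloud

    # upgrade to the latest released version (>= 1.9.0 ships private_networks)
    ansible-galaxy collection install hetzner.hcloud --upgrade

If I read the module documentation correctly, private_networks expects a list of network names or IDs, so after the upgrade the value in the playbook would become a one-item list rather than a plain scalar.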
Why are some OpenStack instances visible in the dashboard and found with 'nova list --ip [ip]', but not found with 'openstack server list | grep [ip]' or 'nova list | grep [ip]'?
It is hard to say for sure what the cause of your problem is from the information you provided.
However, nova list and openstack server list both default to listing instances for the current project only. To list instances for all projects, you need to include the --all-tenants or --all-projects option respectively.
Refer also to the nova list command's parameters, for example:
--limit
Maximum number of servers to display. If limit == -1, all servers will be displayed. If limit is bigger than 'CONF.api.max_limit' option of Nova API, limit 'CONF.api.max_limit' will be used instead.
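For example, assuming you have the admin rights that listing other projects' instances requires:

    # openstack client: search across all projects
    openstack server list --all-projects | grep [ip]

    # nova client: search across all tenants, optionally filtering by IP directly
    nova list --all-tenants --ip [ip]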
I am using Laravel Forge to manage my servers and websites, so generating SSL certificates via Let's Encrypt is also done via Forge. Somehow one of my domains throws an error (see attached).
This domain is running on a server which holds several other domains. The nginx configuration is exactly the same.
The application is a Laravel app running on Laravel Octane.
Error:
2022-06-13 10:41:26 URL:https://forge-certificates.laravel.com/le/1441847/1663342/ecdsa?
env=production [4653] -> "letsencrypt_script1655109686" [1] Cloning
into 'letsencrypt1655109686'... Note: switching to
'91cccc0c234e4decf0a19595fa19a6f306788032'.
You are in 'detached HEAD' state. You can look around, make
experimental changes and commit them, and you can discard any commits
you make in this state without impacting any branches by switching
back to a branch.
If you want to create a new branch to retain commits you create, you
may do so (now or later) by using -c with the switch command. Example:
git switch -c
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to
false
HEAD is now at 91cccc0 ensure newline before new section in
openssl.cnf ERROR: Challenge is invalid! (returned: invalid) (result:
["type"] "http-01" ["status"] "invalid"
["error","type"] "urn:ietf:params:acme:error:connection"
["error","detail"] "111.222.333.444: Fetching
http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw:
Timeout during connect (likely firewall problem)"
["error","status"] 400
["error"] {"type":"urn:ietf:params:acme:error:connection","detail":"111.222.333.444:
Fetching
http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw:
Timeout during connect (likely firewall problem)","status":400}
["url"] "https://acme-v02.api.letsencrypt.org/acme/chall-v3/119151352296/awZDUg"
["token"] "_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",0,"url"] "http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",0,"hostname"] "www.my-domain.de"
["validationRecord",0,"port"] "80"
["validationRecord",0,"addressesResolved",0] "111.222.333.444"
["validationRecord",0,"addressesResolved",1] "2a01:4f8:141:333::84"
["validationRecord",0,"addressesResolved"] ["111.222.333.444","2a01:4f8:141:333::84"]
["validationRecord",0,"addressUsed"] "2a01:4f8:141:333::84"
["validationRecord",0] {"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"} ["validationRecord",1,"url"] "http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",1,"hostname"] "www.my-domain.de"
["validationRecord",1,"port"] "80"
["validationRecord",1,"addressesResolved",0] "111.222.333.444"
["validationRecord",1,"addressesResolved",1] "2a01:4f8:141:333::84"
["validationRecord",1,"addressesResolved"] ["111.222.333.444","2a01:4f8:141:333::84"]
["validationRecord",1,"addressUsed"] "111.222.333.444"
["validationRecord",1] {"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"111.222.333.444"}
["validationRecord",2,"url"] "http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",2,"hostname"] "my-domain.de"
["validationRecord",2,"port"] "80"
["validationRecord",2,"addressesResolved",0] "111.222.333.444"
["validationRecord",2,"addressesResolved",1] "2a01:4f8:141:333::84"
["validationRecord",2,"addressesResolved"] ["111.222.333.444","2a01:4f8:141:333::84"]
["validationRecord",2,"addressUsed"] "2a01:4f8:141:333::84"
["validationRecord",2] {"url":"http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"} ["validationRecord"] [{"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"},{"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"111.222.333.444"},{"url":"http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"}] ["validated"] "2022-06-13T08:41:47Z")
I've finally found the solution. Laravel Forge does not support IPv6 out of the box. So you either have to configure Forge to use IPv6 as well or remove all AAAA records pointing to the server managed by Laravel Forge.
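A quick way to check whether an AAAA record is still published for the domain (my-domain.de stands in for the real domain, as in the log above):

    # list any IPv6 (AAAA) records for the apex and the www host
    dig +short AAAA my-domain.de
    dig +short AAAA www.my-domain.de

If these return an address but the server does not actually answer on IPv6, the http-01 challenge may fail with a timeout as shown above.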
I am trying to start the Cloudera cluster after a restart of the machine, but the server is not starting.
I am getting the below error in the cloudera-scm-server logs:
2014-12-23 21:29:26,870 WARN [Task-Thread-for-com.mchange.v2.async.ThreadPerTaskAsynchronousRunner#2e39060b:resourcepool.BasicResourcePool#1841] com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#6ec135d6 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "192.168.6.109", user "scm", database "scm", SSL off
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:291)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:108)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:125)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:22)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:30)
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)
at org.postgresql.Driver.makeConnection(Driver.java:393)
at org.postgresql.Driver.connect(Driver.java:267)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:137)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPerTaskAsynchronousRunner$TaskThread.run(ThreadPerTaskAsynchronousRunner.java:255)
I tried changing the permissions of the db-data folder to 700 and also dropped the SCHEMA_VERSION table as per this link, but no luck.
EDIT
In the DB logs from /var/log/cloudera-scm-server/db.log I got the following FATAL error:
FATAL: no pg_hba.conf entry for host "192.168.6.109", user "scm", database "scm", SSL off
I found the solution to this problem.
One of the other applications that I am running had updated the hosts file, so it contained two entries for localhost (a broken hosts file). After fixing that, the problem was resolved.
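A minimal sketch of how to check for this and restart the affected services afterwards (the service names assume the standard Cloudera Manager packages with the embedded PostgreSQL database):

    # look for duplicate or conflicting localhost entries in the hosts file
    grep -n 'localhost' /etc/hosts

    # after correcting /etc/hosts, restart the embedded database and the SCM server
    sudo service cloudera-scm-server-db restart
    sudo service cloudera-scm-server restart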
Using impala-shell, I can see the Hive metastore, use any database created by Hive and query any table created by Hive. When I try to create a table in impala-shell or do an "invalidate metadata", I get
"ERROR: Couldn't open transport for localhost:26000(connect() failed: Connection refused)"
I have the following configuration. This is a multi-node cluster, built by hand, i.e. without using Cloudera Manager:
CentOS 6
CDH4.5
Impala 1.2.1
Hive MySQL Metastore
impalad is running on multiple nodes, co-located with the data nodes
statestored and catalogd are running on a single node that is NOT an impalad node
In /etc/default/impala I have changed IMPALA_STATE_STORE_HOST to point to the IP of the statestored machine
From /var/log/impala/catalogd.INFO, it seems port 26000 is used by the catalog service, as there is a line in this file "--catalog_service_port=26000"
Just as /etc/default/impala has to tell impalad where the statestore is (using IMPALA_STATE_STORE_HOST), I am wondering whether, for 1.2.1 (where catalogd was introduced), there has to be an additional entry for the catalogd location as well - just a guess ....
Any help is appreciated.
Thanks,
You have to start impalad with the option -catalog_service_host=fqdn_to_your_catalog_host.
Unfortunately this is not yet in the default configuration, so you have to add it yourself.
Change /etc/default/impala:
CATALOG_SERVICE_HOST=fqdn_to_your_catalog_host
IMPALA_SERVER_ARGS: add -catalog_service_host=${CATALOG_SERVICE_HOST}
Restart impalad and it should work now :-)
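A sketch of what the relevant part of /etc/default/impala could then look like (the FQDN is a placeholder, and the other flags normally present in IMPALA_SERVER_ARGS are omitted for brevity):

    # /etc/default/impala (excerpt)
    CATALOG_SERVICE_HOST=catalog-node.example.com

    IMPALA_SERVER_ARGS=" \
        -state_store_host=${IMPALA_STATE_STORE_HOST} \
        -catalog_service_host=${CATALOG_SERVICE_HOST}"

    # then restart impalad on every node, e.g. on a CDH package install:
    # sudo service impala-server restart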