OpenStack heat-engine

I am not able to start the OpenStack heat-engine service. Here are the relevant excerpts from the installation output and the service logs:
Registering service and endpoints for heat with type orchestration at http://10.216.59.10:8004/v1/%(tenant_id)s
Failed to discover available identity versions when contacting http://127.0.0.1:5000/v3/. Attempting to parse version from URL.
Unable to establish connection to http://127.0.0.1:5000/v3/auth/tokens: HTTPConnectionPool(host='127.0.0.1', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe114073940>: Failed to establish a new connection: [Errno 111] Connection refused'))
Header line is expected but missing in file -
Failed to discover available identity versions when contacting http://127.0.0.1:5000/v3/. Attempting to parse version from URL.
Unable to establish connection to http://127.0.0.1:5000/v3/auth/tokens: HTTPConnectionPool(host='127.0.0.1', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f65123b57b8>: Failed to establish a new connection: [Errno 111] Connection refused'))
dpkg: error processing package heat-api (--configure):
installed heat-api package post-installation script subprocess returned error exit status 1
Setting up libopts25:amd64 (1:5.18.12-4) ...
ctl1.hq.mlsecloud.amd.com heat-engine[244460]: Traceback (most recent call last):
Jan 24 09:49:24 ctl1.hq.mlsecloud.amd.com heat-engine[244460]: File "/usr/bin/heat-engine", line 10, in
Jan 24 09:49:24 ctl1.hq.mlsecloud.amd.com heat-engine[244460]: sys.exit(main())
Jan 24 09:49:24 ctl1.hq.mlsecloud.amd.com heat-engine[244460]: File "/usr/lib/python3/dist-packages/heat/cmd/engine.py", line 80, in main
Jan 24 09:49:24 ctl1.hq.mlsecloud.amd.com heat-engine[244460]: launcher = launch_engine()
Jan 24 09:49:24 ctl1.hq.mlsecloud.amd.com heat-engine[244460]: File "/usr/lib/python3/dist-packages/heat/cmd/engine.py", line 51, in launch_engine
Jan 24 09:49:24 ctl1.hq.mlsecloud.amd.com heat-engine[244460]: logging.setup(cfg.CONF, 'heat-engine')
Jan 24 09:49:24 ctl1.hq.mlsecloud.amd.com heat-engine[244460]: File "/usr/lib/python3/dist-packages/oslo_log/log.py", line 274, in setup
Jan 24 09:49:24 ctl1.hq.mlsecloud.amd.com heat-engine[244460]: _setup_logging_from_conf(conf, product_name, version)
Jan 24 09:49:24 ctl1.hq.mlsecloud.amd.com heat-engine[244460]: File "/usr/lib/python3/dist-packages/oslo_log/log.py", line 394, in _setup_logging_from_conf
Jan 24 09:49:24 ctl1.: Scheduled restart
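For reference, a hedged first check based on the "Connection refused" errors above: verify that Keystone is actually listening on the auth URL heat is configured to use. The config path below is the packaged default and may differ on your install.
# Is anything answering on the Keystone endpoint heat is trying to reach?
curl -sS http://127.0.0.1:5000/v3/ || echo "Keystone is not reachable on 127.0.0.1:5000"
# Which auth URLs is heat configured with? (default packaged config path; adjust if needed)
grep -nE "auth_url|www_authenticate_uri" /etc/heat/heat.conf
If Keystone is only listening on 10.216.59.10 rather than 127.0.0.1, updating the auth settings to the reachable address and re-running the failed package configuration may be the fix.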

Related

redis start error "Job for redis-server.service failed because a configured resource limit was exceeded."

How do I resolve this issue? Here is the error.
I tried to start redis-server on my server (which also runs nginx).
It shows me the errors below.
I followed a guide, but it doesn't work for me, and I am not sure how to resolve it.
root@li917-222:~# service redis restart
Job for redis-server.service failed because a configured resource limit was exceeded. See "systemctl status redis-server.service" and "journalctl -xe" for details.
root@li917-222:~# systemctl status redis-server.service
● redis-server.service - Advanced key-value store
Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Thu 2021-07-01 13:29:01 UTC; 39s ago
Docs: http://redis.io/documentation,
man:redis-server(1)
Process: 13798 ExecStopPost=/bin/run-parts --verbose /etc/redis/redis-server.post-down.d (code=exited, status=0/SUCCESS)
Process: 13794 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=0/SUCCESS)
Process: 13789 ExecStop=/bin/run-parts --verbose /etc/redis/redis-server.pre-down.d (code=exited, status=0/SUCCESS)
Process: 13784 ExecStartPost=/bin/run-parts --verbose /etc/redis/redis-server.post-up.d (code=exited, status=0/SUCCESS)
Process: 13781 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCCESS)
Process: 13776 ExecStartPre=/bin/run-parts --verbose /etc/redis/redis-server.pre-up.d (code=exited, status=0/SUCCESS)
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Unit entered failed state.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Failed with result 'resources'.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Service hold-off time over, scheduling restart.
Jul 01 13:29:01 li917-222 systemd[1]: Stopped Advanced key-value store.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Start request repeated too quickly.
Jul 01 13:29:01 li917-222 systemd[1]: Failed to start Advanced key-value store.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Unit entered failed state.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Failed with result 'start-limit-hit'.
I also tried running the ExecStart line directly from the shell:
root@li917-222:~# ExecStart=/usr/bin/redis-server /etc/redis/redis.conf --supervised systemd
/etc/redis/redis.conf: line 42: daemonize: command not found
/etc/redis/redis.conf: line 46: pidfile: command not found
/etc/redis/redis.conf: line 50: port: command not found
/etc/redis/redis.conf: line 59: tcp-backlog: command not found
/etc/redis/redis.conf: line 69: bind: warning: line editing not enabled
Try 'timeout --help' for more information.
/etc/redis/redis.conf: line 95: tcp-keepalive: command not found
/etc/redis/redis.conf: line 103: loglevel: command not found
/etc/redis/redis.conf: line 108: logfile: command not found
/etc/redis/redis.conf: line 123: databases: command not found
/etc/redis/redis.conf: line 147: save: command not found
/etc/redis/redis.conf: line 148: save: command not found
/etc/redis/redis.conf: line 149: save: command not found
/etc/redis/redis.conf: line 164: stop-writes-on-bgsave-error: command not found
/etc/redis/redis.conf: line 170: rdbcompression: command not found
/etc/redis/redis.conf: line 179: rdbchecksum: command not found
/etc/redis/redis.conf: line 182: dbfilename: command not found
backup.db dump.rdb exp.so root
/etc/redis/redis.conf: line 230: slave-serve-stale-data: command not found
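The shell errors above come from the config file's directives being interpreted as shell commands; the ExecStart line belongs in the systemd unit, not at the prompt. A hedged sketch of how that change is usually made with a drop-in override, matching the Debian/Ubuntu unit shown in the status output (it assumes a Redis version that supports supervised mode):
# Add a drop-in override instead of typing ExecStart at the shell
systemctl edit redis-server.service
# In the editor, add:
# [Service]
# ExecStart=
# ExecStart=/usr/bin/redis-server /etc/redis/redis.conf --supervised systemd
systemctl daemon-reload
systemctl restart redis-server.service
# Then check why the service actually failed, rather than only the start-limit message:
journalctl -u redis-server.service -n 50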

Airflow - local variable 'filename' referenced before assignment

I'm having an annoying issue in Airflow that keeps queuing a lot of tasks in the UI, and to keep them running I have to restart the scheduler and the workers. My Airflow configuration uses CeleryExecutor, running on 2 workers backed by Redis.
I had a look at the logs on the workers and they show this:
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: airflow.exceptions.AirflowException: dag_id could not be found: dc2_phd_nw_5225_processing. Either the dag did not exist or it failed to parse.
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: [2018-02-05 06:53:37,385: ERROR/ForkPoolWorker-17] Command 'airflow run dc2_phd_nw_5225_processing phd_5225_stage_4_add_new_gcs_segments_to_etl_unload_C 2018-02-04T02:00:00 --local --pool dc2 -sd /home/airflow/airflow/dags/doubleclick/dc2_processing.py' returned non-zero exit status 1
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: [2018-02-05 06:53:37,388: ERROR/ForkPoolWorker-17] Task airflow.executors.celery_executor.execute_command[a1821a3b-5ca5-430f-84ce-eb0625a7bbca] raised unexpected: AirflowException('Celery command failed',)
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: Traceback (most recent call last):
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: File "/usr/local/lib/python3.5/dist-packages/airflow/executors/celery_executor.py", line 56, in execute_command
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: subprocess.check_call(command, shell=True)
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: raise CalledProcessError(retcode, cmd)
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: subprocess.CalledProcessError: Command 'airflow run dc2_phd_nw_5225_processing phd_5225_stage_4_add_new_gcs_segments_to_etl_unload_C 2018-02-04T02:00:00 --local --pool dc2 -sd /home/airflow/airflow/dags/doubleclick/dc2_processing.py' returned non-zero exit status 1
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: During handling of the above exception, another exception occurred:
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: Traceback (most recent call last):
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: File "/usr/local/lib/python3.5/dist-packages/celery/app/trace.py", line 367, in trace_task
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: R = retval = fun(*args, **kwargs)
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: File "/usr/local/lib/python3.5/dist-packages/celery/app/trace.py", line 622, in __protected_call__
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: return self.run(*args, **kwargs)
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: File "/usr/local/lib/python3.5/dist-packages/airflow/executors/celery_executor.py", line 59, in execute_command
Feb 05 06:53:37 ip-172-31-46-75 airflow[3656]: raise AirflowException('Celery command failed')
I followed this solution, which suggests adding --raw to the airflow run command to see the real exception, and it says the following:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 28, in <module>
args.func(args)
File "/usr/local/lib/python3.5/dist-packages/airflow/bin/cli.py", line 403, in run
print("Logging into: " + filename)
UnboundLocalError: local variable 'filename' referenced before assignment
Has anyone had the same issue, or any idea how to solve it?
Make sure that when you switch the run command to use --raw, you're not still passing --local. The command-line parser doesn't enforce this, but the code assumes only one of those is set. As you can see for yourself, here it only sets the filename variable if raw is not passed. Then here it assumes filename is set if local is set. That logic doesn't work out if both are set!
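For illustration, a hedged sketch of the debug invocation based on the command in the worker log above: pass --raw without --local so the filename logic described above is not triggered (the other flags are copied from the log, not verified against every Airflow version):
airflow run dc2_phd_nw_5225_processing \
    phd_5225_stage_4_add_new_gcs_segments_to_etl_unload_C 2018-02-04T02:00:00 \
    --raw --pool dc2 -sd /home/airflow/airflow/dags/doubleclick/dc2_processing.py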

Error while installing WordPress on CentOS 7

I've got a problem installing WordPress on CentOS 7.
I used this guide to install it, because I'm quite new to Linux:
https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-on-centos-7
I did everything step by step, but it didn't work. So I decided to reinstall my root server, because I wanted a clean machine on which to try again. Then I used this guide to install it:
http://www.linuxveda.com/2015/09/15/install-wordpress-centos-7/
Same error as before.
The last entries in my error_log are:
[Wed Apr 13 12:16:06.794902 2016] [:error] [pid 23495] [client 66.249.93.167:41393] PHP Warning: Unknown: failed to open stream: Permission denied in Unknown on line 0
[Wed Apr 13 12:16:06.794974 2016] [:error] [pid 23495] [client 66.249.93.167:41393] PHP Fatal error: Unknown: Failed opening required '/home/Soluna/homepages/stormcloud/index.php' (include_path='.:/usr/share/pear:/usr/share/php') in Unknown on line 0
[Wed Apr 13 12:16:07.352614 2016] [:error] [pid 23496] [client 79.196.171.71:51928] PHP Warning: Unknown: failed to open stream: Permission denied in Unknown on line 0
[Wed Apr 13 12:16:07.352706 2016] [:error] [pid 23496] [client 79.196.171.71:51928] PHP Fatal error: Unknown: Failed opening required '/home/Soluna/homepages/stormcloud/index.php' (include_path='.:/usr/share/pear:/usr/share/php') in Unknown on line 0
[Wed Apr 13 12:16:07.671419 2016] [:error] [pid 23497] [client 5.79.100.165:49709] PHP Warning: Unknown: failed to open stream: Permission denied in Unknown on line 0
[Wed Apr 13 12:16:07.671488 2016] [:error] [pid 23497] [client 5.79.100.165:49709] PHP Fatal error: Unknown: Failed opening required '/home/Soluna/homepages/stormcloud/index.php' (include_path='.:/usr/share/pear:/usr/share/php') in Unknown on line 0
[Wed Apr 13 12:16:18.058248 2016] [:error] [pid 23494] [client 79.196.171.71:51933] PHP Warning: Unknown: failed to open stream: Permission denied in Unknown on line 0
[Wed Apr 13 12:16:18.058303 2016] [:error] [pid 23494] [client 79.196.171.71:51933] PHP Fatal error: Unknown: Failed opening required '/home/Soluna/homepages/stormcloud/index.php' (include_path='.:/usr/share/pear:/usr/share/php') in Unknown on line 0
[Wed Apr 13 12:16:21.985259 2016] [:error] [pid 23495] [client 79.196.171.71:51934] PHP Warning: Unknown: failed to open stream: Permission denied in Unknown on line 0
[Wed Apr 13 12:16:21.985311 2016] [:error] [pid 23495] [client 79.196.171.71:51934] PHP Fatal error: Unknown: Failed opening required '/home/Soluna/homepages/stormcloud/index.php' (include_path='.:/usr/share/pear:/usr/share/php') in Unknown on line 0
Can anyone help me? Right now I've got nothing installed on the server except TeamSpeak 3. On my old root server (Hetzner Online) everything worked well, but now it doesn't. Any ideas how to fix it? I already changed the permissions of all the WordPress files, but I still get the same error.
The key is
Permission denied
Check the permissions of the folder and files where you are storing your WordPress files; Apache should be able to read them.
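A hedged sketch of those permission checks, using the path from the error log. The web server user on CentOS 7 is typically apache (adjust if yours differs), and SELinux, enabled by default on CentOS 7, can also deny access even when the classic permissions look right:
# Every parent directory needs execute (traverse) permission for the apache user
ls -ld /home /home/Soluna /home/Soluna/homepages /home/Soluna/homepages/stormcloud
chmod o+x /home/Soluna /home/Soluna/homepages
chown -R apache:apache /home/Soluna/homepages/stormcloud
# Label the files so SELinux allows httpd to read them (skip if SELinux is disabled)
chcon -R -t httpd_sys_content_t /home/Soluna/homepages/stormcloud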

Why can't I install this old WordPress website on my local web server?

I am pretty new to WP (I came from Joomla) and I am having some difficulty putting an old backup of a website (made using WP 3.5) onto my local web server.
I performed the following operations:
1) I put the website backup into a directory named blog in my Apache www directory on my local Ubuntu system.
2) Then I imported the database backup into my local MySQL server.
3) Finally, I changed the values in the wp-config.php file to use my local DB.
The problem is that when I try to open the URL to see the website, I see nothing (a white screen).
In the Apache log file (/var/log/apache2/error.log) I found the following error messages:
[Fri Jan 10 22:04:50 2014] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.9 with Suhosin-Patch configured -- resuming normal operations
[Fri Jan 10 22:05:08 2014] [error] [client 127.0.0.1] PHP Warning: require_once(/var/www/blog/wp-load.php): failed to open stream: No such file or directory in /var/www/blog/wp-blog-header.php on line 12
[Fri Jan 10 22:05:08 2014] [error] [client 127.0.0.1] PHP Fatal error: require_once(): Failed opening required '/var/www/blog/wp-load.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/blog/wp-blog-header.php on line 12
[Fri Jan 10 22:38:37 2014] [error] [client 127.0.0.1] PHP Warning: require_once(/var/www/blog/wp-load.php): failed to open stream: No such file or directory in /var/www/blog/wp-blog-header.php on line 12
[Fri Jan 10 22:38:37 2014] [error] [client 127.0.0.1] PHP Fatal error: require_once(): Failed opening required '/var/www/blog/wp-load.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/blog/wp-blog-header.php on line 12
[Fri Jan 10 22:38:38 2014] [error] [client 127.0.0.1] PHP Warning: require_once(/var/www/blog/wp-load.php): failed to open stream: No such file or directory in /var/www/blog/wp-blog-header.php on line 12
[Fri Jan 10 22:38:38 2014] [error] [client 127.0.0.1] PHP Fatal error: require_once(): Failed opening required '/var/www/blog/wp-load.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/blog/wp-blog-header.php on line 12
[Fri Jan 10 22:38:39 2014] [error] [client 127.0.0.1] PHP Warning: require_once(/var/www/blog/wp-load.php): failed to open stream: No such file or directory in /var/www/blog/wp-blog-header.php on line 12
[Fri Jan 10 22:38:39 2014] [error] [client 127.0.0.1] PHP Fatal error: require_once(): Failed opening required '/var/www/blog/wp-load.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/blog/wp-blog-header.php on line 12
[Fri Jan 10 22:38:40 2014] [error] [client 127.0.0.1] PHP Warning: require_once(/var/www/blog/wp-load.php): failed to open stream: No such file or directory in /var/www/blog/wp-blog-header.php on line 12
[Fri Jan 10 22:38:40 2014] [error] [client 127.0.0.1] PHP Fatal error: require_once(): Failed opening required '/var/www/blog/wp-load.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/blog/wp-blog-header.php on line 12
Why? What could be the problem? How can I try to solve it?
Thanks,
Andrea
Make sure the URL structure in the database matches your local setup. It also looks like a file is missing or the path structure is not correct: the log says wp-load.php cannot be opened in /var/www/blog, so check that the backup was extracted completely and at the right level.
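As a hedged check of both points, assuming the standard wp_ table prefix and a database name of your own (both are assumptions, not taken from the question):
# The error says wp-load.php cannot be found, so confirm the core files are at the expected level
ls /var/www/blog/wp-load.php /var/www/blog/wp-blog-header.php
# Point the site URL at the local address so WordPress stops referencing the old host
mysql -u root -p your_database -e \
  "UPDATE wp_options SET option_value='http://localhost/blog' WHERE option_name IN ('siteurl','home');"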

Ceilometer HTTP 500

I was installing Ceilometer using this guide: http://docs.openstack.org/developer/ceilometer/install/manual.html
After I finished everything, I tried to test it by running ceilometer meter-list, and it gives me this error: HTTPInternalServerError (HTTP 500)
Here's what I have in the log:
root@iaas-hk01:/etc/apache2/sites-enabled# tail -f /var/log/apache2/ceilometer_error.log
[Wed Jul 24 18:35:48 2013] [error] File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 80, in connect
[Wed Jul 24 18:35:48 2013] [error] return dialect.connect(*cargs, **cparams)
[Wed Jul 24 18:35:48 2013] [error]
[Wed Jul 24 18:35:48 2013] [error] File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 281, in connect
[Wed Jul 24 18:35:48 2013] [error] return self.dbapi.connect(*cargs, **cparams)
[Wed Jul 24 18:35:48 2013] [error]
[Wed Jul 24 18:35:48 2013] [error] OperationalError: (OperationalError) unable to open database file None None
[Wed Jul 24 18:35:48 2013] [error]
[Wed Jul 24 18:35:48 2013] [error] [client 192.168.10.16] mod_wsgi (pid=2178): Exception occurred processing WSGI script '/opt/stack/ceilometer/ceilometer/api/app.wsgi'.
[Wed Jul 24 18:35:48 2013] [error] [client 192.168.10.16] TypeError: expected byte string object for header value, value of type int found
Hopefully someone can give me some guidance on fixing this.
You should set the Keystone API endpoints so that the Ceilometer API can talk with Keystone.
Here are the steps:
Create a service for ceilometer in keystone
$ keystone service-create --name=ceilometer \
--type=metering \
--description="Ceilometer Service"
Create an endpoint in keystone for ceilometer
$ keystone endpoint-create --region RegionOne \
--service_id $CEILOMETER_SERVICE \
--publicurl "http://$SERVICE_HOST:8777/" \
--adminurl "http://$SERVICE_HOST:8777/" \
--internalurl "http://$SERVICE_HOST:8777/"
Note: 8777 is the default Ceilometer port; if you have customised it, use your customised port.
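One hedged way to capture the service id referenced above as $CEILOMETER_SERVICE, using the same legacy keystone client (the awk column assumes the default table output):
CEILOMETER_SERVICE=$(keystone service-list | awk '/ ceilometer / {print $2}')
keystone endpoint-list    # verify the new endpoint afterwards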
