The 'salt==3000.3' distribution was not found and is required by the application - salt-stack

I have upgraded my salt-master and now it doesn't start, with the following error:
avril 02 18:28:05 aksalt salt-master[1881]: Traceback (most recent call last):
avril 02 18:28:05 aksalt salt-master[1881]: File "/usr/bin/salt-master", line 6, in <module>
avril 02 18:28:05 aksalt salt-master[1881]: from pkg_resources import load_entry_point
avril 02 18:28:05 aksalt salt-master[1881]: File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 3191, in <module>
avril 02 18:28:05 aksalt salt-master[1881]: @_call_aside
avril 02 18:28:05 aksalt salt-master[1881]: File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 3175, in _call_aside
avril 02 18:28:05 aksalt salt-master[1881]: f(*args, **kwargs)
avril 02 18:28:05 aksalt salt-master[1881]: File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 3204, in _initialize_master_working_set
avril 02 18:28:05 aksalt salt-master[1881]: working_set = WorkingSet._build_master()
avril 02 18:28:05 aksalt salt-master[1881]: File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 583, in _build_master
avril 02 18:28:05 aksalt salt-master[1881]: ws.require(__requires__)
avril 02 18:28:05 aksalt salt-master[1881]: File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 900, in require
avril 02 18:28:05 aksalt salt-master[1881]: needed = self.resolve(parse_requirements(requirements))
avril 02 18:28:05 aksalt salt-master[1881]: File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 786, in resolve
avril 02 18:28:05 aksalt salt-master[1881]: raise DistributionNotFound(req, requirers)
avril 02 18:28:05 aksalt salt-master[1881]: pkg_resources.DistributionNotFound: The 'salt==3000.3' distribution was not found and is required by the application
avril 02 18:28:05 aksalt systemd[1]: salt-master.service: Main process exited, code=exited, status=1/FAILURE
If any of you has a clue about this, it would really help!
Xavier

Hi, my solution was to completely reinstall salt-master and salt-minion, purging all configuration files.

In the latest CVE releases a breaking change was introduced and all previous releases were removed from the live repos.
If you still want 3000.3 you can get it from https://archive.repo.saltproject.io/.
Many people are questioning whether this policy is a good idea, on this issue and a few others.
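For an apt-based master, a minimal sketch of re-pinning 3000.3 from the archive repo. The exact path under archive.repo.saltproject.io depends on your distro and release, so the path below is an assumption; check the archive index first.

```shell
# Replace the (now-empty) live SaltStack repo with the frozen archive for 3000.3.
# The path shown assumes Ubuntu 20.04/amd64 -- confirm it against the archive
# index at https://archive.repo.saltproject.io/ before use.
echo "deb https://archive.repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3000.3 focal main" \
    | sudo tee /etc/apt/sources.list.d/saltstack.list
sudo apt-get update
# Reinstall so the installed files match the 'salt==3000.3' entry point again.
sudo apt-get install --reinstall salt-common salt-master
```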

Related

MariaDB: MySQL installation on Arch

Error: Job for mariadb.service failed because the control process exited with error code. See "systemctl status mariadb.service" and "journalctl -xeu mariadb.service" for details.
I want to use MySQL on Arch,
so I installed it using pacman by running "sudo pacman -S mysql"
and then selecting "1. mariadb" for installation.
I checked the MySQL version: mysql Ver 15.1 Distrib 10.9.4-MariaDB, for Linux (x86_64) using readline 5.1
Then I wanted to start the service, so I ran: sudo systemctl start mariadb.service
This error appears:
"Job for mariadb.service failed because the control process exited with error code.
See "systemctl status mariadb.service" and "journalctl -xeu mariadb.service" for details."
I ran the given commands and got the following.
For "sudo systemctl status mariadb.service":
× mariadb.service - MariaDB 10.9.4 database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; preset: disabled)
Active: failed (Result: exit-code) since Wed 2023-02-01 20:43:54 PKT; 13min ago
Docs: man:mariadbd(8)
https://mariadb.com/kb/en/library/systemd/
Process: 3250 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 3251 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`cd /usr/bin/..; /usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (code=exited, stat>
Process: 3278 ExecStart=/usr/bin/mariadbd $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION (code=exited, status=1/FAILURE)
Main PID: 3278 (code=exited, status=1/FAILURE)
Status: "MariaDB server is down"
CPU: 241ms
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] InnoDB: Plugin initialization aborted with error Data structure corruption
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [Note] InnoDB: Starting shutdown...
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] Plugin 'InnoDB' init function returned error.
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] Could not open mysql.plugin table: "Table 'mysql.plugin' doesn't exist". Some plugins may be not loaded
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] Unknown/unsupported storage engine: InnoDB
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] Aborting
Feb 01 20:43:54 zain-hpelitebook840g5 systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Feb 01 20:43:54 zain-hpelitebook840g5 systemd[1]: mariadb.service: Failed with result 'exit-code'.
Feb 01 20:43:54 zain-hpelitebook840g5 systemd[1]: Failed to start MariaDB 10.9.4 database server.
For "journalctl -xeu mariadb.service":
Feb 01 20:38:54 zain-hpelitebook840g5 systemd[1]: Failed to start MariaDB 10.9.4 database server.
░░ Subject: A start job for unit mariadb.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit mariadb.service has finished with a failure.
░░
░░ The job identifier is 1576 and the job result is failed.
Feb 01 20:43:54 zain-hpelitebook840g5 systemd[1]: Starting MariaDB 10.9.4 database server...
░░ Subject: A start job for unit mariadb.service has begun execution
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit mariadb.service has begun execution.
░░
░░ The job identifier is 1739.
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [Note] /usr/bin/mariadbd (server 10.9.4-MariaDB) starting as process 3278 ...
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [Note] InnoDB: Compressed tables use zlib 1.2.13
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [Note] InnoDB: Number of transaction pools: 1
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [Note] InnoDB: Using liburing
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [Note] InnoDB: Completed initialization of buffer pool
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] InnoDB: Invalid flags 0x4800 in ./ibdata1
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] InnoDB: Plugin initialization aborted with error Data structure corruption
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [Note] InnoDB: Starting shutdown...
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] Plugin 'InnoDB' init function returned error.
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] Could not open mysql.plugin table: "Table 'mysql.plugin' doesn't exist". Some plugins may be not loaded
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] Unknown/unsupported storage engine: InnoDB
Feb 01 20:43:54 zain-hpelitebook840g5 mariadbd[3278]: 2023-02-01 20:43:54 0 [ERROR] Aborting
Feb 01 20:43:54 zain-hpelitebook840g5 systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ An ExecStart= process belonging to unit mariadb.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Feb 01 20:43:54 zain-hpelitebook840g5 systemd[1]: mariadb.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ The unit mariadb.service has entered the 'failed' state with result 'exit-code'.
Feb 01 20:43:54 zain-hpelitebook840g5 systemd[1]: Failed to start MariaDB 10.9.4 database server.
░░ Subject: A start job for unit mariadb.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit mariadb.service has finished with a failure.
░░
░░ The job identifier is 1739 and the job result is failed.
I have tried everything I am able to do but cannot solve this error.
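For what it's worth: on Arch, MariaDB's data directory has to be initialized with mariadb-install-db before the first start, and the "[ERROR] InnoDB: Invalid flags 0x4800 in ./ibdata1" line additionally suggests the files already in /var/lib/mysql were written by an older or incompatible installation. A sketch of reinitializing on a fresh install (this destroys any existing databases, so it is only sensible when there is nothing to preserve):

```shell
# DANGER: removes all existing databases. Only do this on a fresh install.
sudo systemctl stop mariadb.service
sudo mv /var/lib/mysql /var/lib/mysql.bak   # keep the old files just in case
# Recreate the system tables with the currently installed MariaDB version.
sudo mariadb-install-db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
sudo systemctl start mariadb.service
```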

Redis start error: "Job for redis-server.service failed because a configured resource limit was exceeded."

How do I resolve this issue? Here is the error.
I tried to start redis-server on my nginx server.
It shows me the errors below.
I just followed this, but it doesn't work for me.
I am not sure how to resolve it.
root@li917-222:~# service redis restart
Job for redis-server.service failed because a configured resource limit was exceeded. See "systemctl status redis-server.service" and "journalctl -xe" for details.
root@li917-222:~# systemctl status redis-server.service
● redis-server.service - Advanced key-value store
Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Thu 2021-07-01 13:29:01 UTC; 39s ago
Docs: http://redis.io/documentation,
man:redis-server(1)
Process: 13798 ExecStopPost=/bin/run-parts --verbose /etc/redis/redis-server.post-down.d (code=exited, status=0/SUCCESS)
Process: 13794 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=0/SUCCESS)
Process: 13789 ExecStop=/bin/run-parts --verbose /etc/redis/redis-server.pre-down.d (code=exited, status=0/SUCCESS)
Process: 13784 ExecStartPost=/bin/run-parts --verbose /etc/redis/redis-server.post-up.d (code=exited, status=0/SUCCESS)
Process: 13781 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCCESS)
Process: 13776 ExecStartPre=/bin/run-parts --verbose /etc/redis/redis-server.pre-up.d (code=exited, status=0/SUCCESS)
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Unit entered failed state.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Failed with result 'resources'.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Service hold-off time over, scheduling restart.
Jul 01 13:29:01 li917-222 systemd[1]: Stopped Advanced key-value store.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Start request repeated too quickly.
Jul 01 13:29:01 li917-222 systemd[1]: Failed to start Advanced key-value store.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Unit entered failed state.
Jul 01 13:29:01 li917-222 systemd[1]: redis-server.service: Failed with result 'start-limit-hit'.
root@li917-222:~# ExecStart=/usr/bin/redis-server /etc/redis/redis.conf --supervised systemd
/etc/redis/redis.conf: line 42: daemonize: command not found
/etc/redis/redis.conf: line 46: pidfile: command not found
/etc/redis/redis.conf: line 50: port: command not found
/etc/redis/redis.conf: line 59: tcp-backlog: command not found
/etc/redis/redis.conf: line 69: bind: warning: line editing not enabled
Try 'timeout --help' for more information.
/etc/redis/redis.conf: line 95: tcp-keepalive: command not found
/etc/redis/redis.conf: line 103: loglevel: command not found
/etc/redis/redis.conf: line 108: logfile: command not found
/etc/redis/redis.conf: line 123: databases: command not found
/etc/redis/redis.conf: line 147: save: command not found
/etc/redis/redis.conf: line 148: save: command not found
/etc/redis/redis.conf: line 149: save: command not found
/etc/redis/redis.conf: line 164: stop-writes-on-bgsave-error: command not found
/etc/redis/redis.conf: line 170: rdbcompression: command not found
/etc/redis/redis.conf: line 179: rdbchecksum: command not found
/etc/redis/redis.conf: line 182: dbfilename: command not found
backup.db dump.rdb exp.so root
/etc/redis/redis.conf: line 230: slave-serve-stale-data: command not found
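One observation on the transcript above: the ExecStart=/usr/bin/redis-server ... line was pasted straight into a root shell, so bash ended up executing /etc/redis/redis.conf as a script. That is where all the "daemonize: command not found" errors come from, not from Redis itself; ExecStart= is a unit-file directive, not a shell command. A hedged sketch of enabling systemd supervision the usual way, demonstrated here on a scratch copy of the config (the real paths are in the comments):

```shell
set -e
# Demonstrate the edit on a scratch copy of the config (safe to run anywhere).
printf 'daemonize no\nsupervised no\n' > /tmp/redis.conf.demo
sed -i 's/^supervised .*/supervised systemd/' /tmp/redis.conf.demo
grep '^supervised' /tmp/redis.conf.demo
# On the real server, the equivalent would be:
#   sudo sed -i 's/^supervised .*/supervised systemd/' /etc/redis/redis.conf
#   sudo systemctl daemon-reload
#   sudo systemctl restart redis-server.service
```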

Run a script with systemd timers on NixOS

I have a small shell script scrape.sh that scrapes a website and puts the resulting data into a new directory:
website='test.com'
dir_name="datev_data/$(date -u +'%Y-%m-%dT%H:%M:%S')"
mkdir -p "$dir_name"
wget --directory-prefix="$dir_name" "$website"
(I don't really care where the data ends up, as long as it gets a new directory every time and I can get to the data. I therefore put it into my home directory /home/kaligule for now.)
Running this script by hand works fine, so now I want to run it every hour on my NixOS server. Therefore I put the following into my config there (inspired by this post):
systemd.services.test_systemd_timers = {
serviceConfig.Type = "oneshot";
script = ''
echo "Will start scraping now."
{pkgs.bash}/bin/bash /home/kaligule/scrape.sh
echo "Done scraping."
'';
};
systemd.timers.test_systemd_timers = {
wantedBy = [ "timers.target" ];
partOf = [ "test_systemd_timers.service" ];
timerConfig.OnCalendar = [ "*-*-* *:00:00" ];
};
Then I test it out:
sudo nixos-rebuild switch # everything is fine
sudo systemctl start test_systemd_timers # run it immediately for testing
I get:
Job for test_systemd_timers.service failed because the control process exited with error code.
See "systemctl status test_systemd_timers.service" and "journalctl -xe" for details.
The first suggested command gives me this:
● test_systemd_timers.service
Loaded: loaded (/nix/store/f8348svxpnn6qx08adrv5s7ksc2zy1sk-unit-test_systemd_timers.service/test_systemd_timers.service; linked; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2021-04-02 14:50:02 CEST; 2min 36s ago
TriggeredBy: ● test_systemd_timers.timer
Process: 5686 ExecStart=/nix/store/4smyxxxlhnnmw8l6l3nnfjyvmg0wxcwh-unit-script-test_systemd_timers-start/bin/test_systemd_timers-start (code=exited, status=127)
Main PID: 5686 (code=exited, status=127)
IP: 0B in, 0B out
CPU: 11ms
Apr 02 14:50:02 regulus systemd[1]: Starting test_systemd_timers.service...
Apr 02 14:50:02 regulus test_systemd_timers-start[5686]: Will start scraping now.
Apr 02 14:50:02 regulus test_systemd_timers-start[5687]: /nix/store/4smyxxxlhnnmw8l6l3nnfjyvmg0wxcwh-unit-script-test_systemd_timers-start/bin/test_systemd_timers-start: line 3: {pkgs.bash}/bin/bash: No such file or directory
Apr 02 14:50:02 regulus systemd[1]: test_systemd_timers.service: Main process exited, code=exited, status=127/n/a
Apr 02 14:50:02 regulus systemd[1]: test_systemd_timers.service: Failed with result 'exit-code'.
Apr 02 14:50:02 regulus systemd[1]: Failed to start test_systemd_timers.service.
The second suggested command gives me:
Apr 02 14:54:42 regulus systemd[1]: Starting test_systemd_timers.service...
░░ Subject: A start job for unit test_systemd_timers.service has begun execution
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit test_systemd_timers.service has begun execution.
░░
░░ The job identifier is 34454.
Apr 02 14:54:42 regulus test_systemd_timers-start[5734]: Will start scraping now.
Apr 02 14:54:42 regulus test_systemd_timers-start[5735]: /nix/store/4smyxxxlhnnmw8l6l3nnfjyvmg0wxcwh-unit-script-test_systemd_timers-start/bin/test_systemd_timers-start: line 3: {pkgs.bash}/bin/bash: No such file or directory
Apr 02 14:54:42 regulus systemd[1]: test_systemd_timers.service: Main process exited, code=exited, status=127/n/a
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ An ExecStart= process belonging to unit test_systemd_timers.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 127.
Apr 02 14:54:42 regulus systemd[1]: test_systemd_timers.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ The unit test_systemd_timers.service has entered the 'failed' state with result 'exit-code'.
Apr 02 14:54:42 regulus systemd[1]: Failed to start test_systemd_timers.service.
░░ Subject: A start job for unit test_systemd_timers.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit test_systemd_timers.service has finished with a failure.
░░
░░ The job identifier is 34454 and the job result is failed.
Apr 02 14:54:42 regulus sudo[5731]: pam_unix(sudo:session): session closed for user root
So because I got the "Will start scraping now." in the logs, I think the job started but wasn't able to run the script. My questions are:
where (and with which permissions) should I put my script?
how should I adapt my config so that the script runs as I described?
(Of course, I am thankful for any feedback on my approach.)
Your problem is that, in your script, {pkgs.bash} should be ${pkgs.bash}; without the $, you won't get variable interpolation.
On NixOS, systemd services run with a very minimal default environment. You can add the missing packages in systemd.services.<name>.path:
systemd.services.test_systemd_timers = {
serviceConfig.Type = "oneshot";
path = [
pkgs.wget
pkgs.gawk
pkgs.jq
];
script = ''
echo "Will start scraping now."
${pkgs.bash}/bin/bash /home/kaligule/scrape.sh
echo "Done scraping."
'';
};
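After applying this and rebuilding, the timer and the service's output can be checked with the usual systemd tooling, for example:

```shell
# Confirm the timer is registered and see its next scheduled activation:
systemctl list-timers | grep test_systemd_timers
# Follow the service's output from recent runs:
journalctl -u test_systemd_timers.service -n 50
```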

502 Bad Gateway and "Failed to read PID from file /run/nginx.pid: Invalid argument" using nginx and gunicorn

I already successfully deployed nginx and gunicorn on my CentOS 7 server but got a 502 Bad Gateway error message. I'm using nginx/1.12.2. I already checked the status of both gunicorn and nginx.
gunicorn status
● deepagi.service - Gunicorn instance to serve deepagi
Loaded: loaded (/etc/systemd/system/deepagi.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2017-12-02 10:49:30 UTC; 41min ago
Main PID: 1829 (gunicorn)
CGroup: /system.slice/deepagi.service
├─1829 /root/deepagi/deepagienv/bin/python2 /root/deepagi/deepagienv/bin/gunicorn --workers 3 --bind unix:deepagi.sock -m 007 wsgi
├─1834 /root/deepagi/deepagienv/bin/python2 /root/deepagi/deepagienv/bin/gunicorn --workers 3 --bind unix:deepagi.sock -m 007 wsgi
├─1839 /root/deepagi/deepagienv/bin/python2 /root/deepagi/deepagienv/bin/gunicorn --workers 3 --bind unix:deepagi.sock -m 007 wsgi
└─1840 /root/deepagi/deepagienv/bin/python2 /root/deepagi/deepagienv/bin/gunicorn --workers 3 --bind unix:deepagi.sock -m 007 wsgi
Dec 02 10:49:30 DeepAGI systemd[1]: Started Gunicorn instance to serve deepagi.
Dec 02 10:49:30 DeepAGI systemd[1]: Starting Gunicorn instance to serve deepagi...
Dec 02 10:49:30 DeepAGI gunicorn[1829]: [2017-12-02 10:49:30 +0000] [1829] [INFO] Starting gunicorn 19.7.1
Dec 02 10:49:30 DeepAGI gunicorn[1829]: [2017-12-02 10:49:30 +0000] [1829] [INFO] Listening at: unix:deepagi.sock (1829)
Dec 02 10:49:30 DeepAGI gunicorn[1829]: [2017-12-02 10:49:30 +0000] [1829] [INFO] Using worker: sync
Dec 02 10:49:30 DeepAGI gunicorn[1829]: [2017-12-02 10:49:30 +0000] [1834] [INFO] Booting worker with pid: 1834
Dec 02 10:49:30 DeepAGI gunicorn[1829]: [2017-12-02 10:49:30 +0000] [1839] [INFO] Booting worker with pid: 1839
Dec 02 10:49:30 DeepAGI gunicorn[1829]: [2017-12-02 10:49:30 +0000] [1840] [INFO] Booting worker with pid: 1840
nginx status
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2017-12-02 11:16:18 UTC; 14min ago
Main PID: 2317 (nginx)
CGroup: /system.slice/nginx.service
├─2317 nginx: master process /usr/sbin/nginx
└─2318 nginx: worker process
Dec 02 11:16:18 DeepAGI systemd[1]: Starting The nginx HTTP and reverse proxy server...
Dec 02 11:16:18 DeepAGI nginx[2312]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Dec 02 11:16:18 DeepAGI nginx[2312]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Dec 02 11:16:18 DeepAGI systemd[1]: Failed to read PID from file /run/nginx.pid: Invalid argument
Dec 02 11:16:18 DeepAGI systemd[1]: Started The nginx HTTP and reverse proxy server.
But in the nginx status I saw this error message:
Dec 02 11:16:18 DeepAGI systemd[1]: Failed to read PID from file /run/nginx.pid: Invalid argument
How do I solve this?
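Regarding the PID-file warning specifically: it is a known startup race where systemd reads /run/nginx.pid before nginx has finished writing it, and it is usually harmless (note the unit still reached "active (running)"). A widely circulated workaround is a drop-in that delays the check slightly, sketched here against a scratch path with the real one in the comments. The 502 itself is more likely a separate problem, such as nginx being unable to reach the gunicorn socket (for example, a unix socket under /root that the nginx worker user cannot traverse into), so the proxy and socket permissions are worth checking too.

```shell
# Demonstrated against a scratch directory; on the server the drop-in lives at
# /etc/systemd/system/nginx.service.d/override.conf (written with sudo).
d=/tmp/nginx.service.d.demo
mkdir -p "$d"
printf '[Service]\nExecStartPost=/bin/sleep 0.1\n' > "$d/override.conf"
cat "$d/override.conf"
# Then: sudo systemctl daemon-reload && sudo systemctl restart nginx.service
```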

OpenStack Horizon service fails with "openstack-dashboard: 500" -- Packstack installation on CentOS 7

I tried several times to install OpenStack via Packstack on my CentOS 7 (Linux 3.10.0-514.26.2.el7.x86_64) VM. The problem is that the Horizon service is not working. If I check via "openstack-status" then I can see this error:
[root@server1 httpd]# openstack-status
== Keystone service ==
openstack-keystone: inactive (disabled on boot)
== Horizon service ==
openstack-dashboard: 500
Here is the output of horizon_error.log
[Fri Sep 22 10:34:27.281917 2017] [:error] [pid 29584] WARNING:root:"dashboards" and "default_dashboard" in (local_)settings is DEPRECATED now and may be unsupported in some future release. The preferred way to specify the order of dashboards and the default dashboard is the pluggable dashboard mechanism (in /usr/share/openstack-dashboard/openstack_dashboard/enabled, /usr/share/openstack-dashboard/openstack_dashboard/local/enabled).
[Fri Sep 22 10:35:41.865062 2017] [:error] [pid 29586] WARNING:root:"dashboards" and "default_dashboard" in (local_)settings is DEPRECATED now and may be unsupported in some future release. The preferred way to specify the order of dashboards and the default dashboard is the pluggable dashboard mechanism (in /usr/share/openstack-dashboard/openstack_dashboard/enabled, /usr/share/openstack-dashboard/openstack_dashboard/local/enabled).
[Fri Sep 22 10:39:31.927589 2017] [core:error] [pid 29611] [client ::1:50612] End of script output before headers: django.wsgi
[Fri Sep 22 10:40:48.146592 2017] [core:error] [pid 29632] [client ::1:50682] End of script output before headers: django.wsgi
[Fri Sep 22 10:50:48.754993 2017] [:error] [pid 110191] WARNING:root:"dashboards" and "default_dashboard" in (local_)settings is DEPRECATED now and may be unsupported in some future release. The preferred way to specify the order of dashboards and the default dashboard is the pluggable dashboard mechanism (in /usr/share/openstack-dashboard/openstack_dashboard/enabled, /usr/share/openstack-dashboard/openstack_dashboard/local/enabled).
[Fri Sep 22 10:55:53.987761 2017] [core:error] [pid 110223] [client ::1:51630] End of script output before headers: django.wsgi
[Fri Sep 22 11:04:00.792056 2017] [:error] [pid 111616] WARNING:root:"dashboards" and "default_dashboard" in (local_)settings is DEPRECATED now and may be unsupported in some future release. The preferred way to specify the order of dashboards and the default dashboard is the pluggable dashboard mechanism (in /usr/share/openstack-dashboard/openstack_dashboard/enabled, /usr/share/openstack-dashboard/openstack_dashboard/local/enabled).
[Fri Sep 22 11:09:05.859637 2017] [core:error] [pid 111625] [client ::1:52294] End of script output before headers: django.wsg>
and error.log
[Fri Sep 22 10:49:49.691043 2017] [mpm_prefork:notice] [pid 29575] AH00170: caught SIGWINCH, shutting down gracefully
[Fri Sep 22 10:50:16.431096 2017] [suexec:notice] [pid 110182] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Sep 22 10:50:16.442501 2017] [so:warn] [pid 110182] AH01574: module access_compat_module is already loaded, skipping
[Fri Sep 22 10:50:16.442516 2017] [so:warn] [pid 110182] AH01574: module actions_module is already loaded, skipping
[Fri Sep 22 10:50:16.442528 2017] [so:warn] [pid 110182] AH01574: module alias_module is already loaded, skipping
[Fri Sep 22 10:50:16.442613 2017] [so:warn] [pid 110182] AH01574: module auth_basic_module is already loaded, skipping
[Fri Sep 22 10:50:16.442620 2017] [so:warn] [pid 110182] AH01574: module auth_digest_module is already loaded, skipping
[Fri Sep 22 10:50:16.442624 2017] [so:warn] [pid 110182] AH01574: module authn_anon_module is already loaded, skipping
[Fri Sep 22 10:50:16.442646 2017] [so:warn] [pid 110182] AH01574: module authn_core_module is already loaded, skipping
[Fri Sep 22 10:50:16.442738 2017] [so:warn] [pid 110182] AH01574: module authn_dbm_module is already loaded, skipping
[Fri Sep 22 10:50:16.442744 2017] [so:warn] [pid 110182] AH01574: module authn_file_module is already loaded, skipping
[Fri Sep 22 10:50:16.442832 2017] [so:warn] [pid 110182] AH01574: module authz_core_module is already loaded, skipping
[Fri Sep 22 10:50:16.442912 2017] [so:warn] [pid 110182] AH01574: module authz_dbm_module is already loaded, skipping
[Fri Sep 22 10:50:16.442920 2017] [so:warn] [pid 110182] AH01574: module authz_groupfile_module is already loaded, skipping
[Fri Sep 22 10:50:16.442924 2017] [so:warn] [pid 110182] AH01574: module authz_host_module is already loaded, skipping
[Fri Sep 22 10:50:16.442927 2017] [so:warn] [pid 110182] AH01574: module authz_owner_module is already loaded, skipping
[Fri Sep 22 10:50:16.442931 2017] [so:warn] [pid 110182] AH01574: module authz_user_module is already loaded, skipping
[Fri Sep 22 10:50:16.442935 2017] [so:warn] [pid 110182] AH01574: module autoindex_module is already loaded, skipping
[Fri Sep 22 10:50:16.442939 2017] [so:warn] [pid 110182] AH01574: module cache_module is already loaded, skipping
[Fri Sep 22 10:50:16.443172 2017] [so:warn] [pid 110182] AH01574: module deflate_module is already loaded, skipping
[Fri Sep 22 10:50:16.443178 2017] [so:warn] [pid 110182] AH01574: module dir_module is already loaded, skipping
[Fri Sep 22 10:50:16.443321 2017] [so:warn] [pid 110182] AH01574: module env_module is already loaded, skipping
[Fri Sep 22 10:50:16.443328 2017] [so:warn] [pid 110182] AH01574: module expires_module is already loaded, skipping
[Fri Sep 22 10:50:16.443333 2017] [so:warn] [pid 110182] AH01574: module ext_filter_module is already loaded, skipping
[Fri Sep 22 10:50:16.443337 2017] [so:warn] [pid 110182] AH01574: module filter_module is already loaded, skipping
[Fri Sep 22 10:50:16.443423 2017] [so:warn] [pid 110182] AH01574: module include_module is already loaded, skipping
[Fri Sep 22 10:50:16.443518 2017] [so:warn] [pid 110182] AH01574: module log_config_module is already loaded, skipping
[Fri Sep 22 10:50:16.443525 2017] [so:warn] [pid 110182] AH01574: module logio_module is already loaded, skipping
[Fri Sep 22 10:50:16.443530 2017] [so:warn] [pid 110182] AH01574: module mime_magic_module is already loaded, skipping
[Fri Sep 22 10:50:16.443534 2017] [so:warn] [pid 110182] AH01574: module mime_module is already loaded, skipping
[Fri Sep 22 10:50:16.443538 2017] [so:warn] [pid 110182] AH01574: module negotiation_module is already loaded, skipping
[Fri Sep 22 10:50:16.443685 2017] [so:warn] [pid 110182] AH01574: module rewrite_module is already loaded, skipping
[Fri Sep 22 10:50:16.443720 2017] [so:warn] [pid 110182] AH01574: module setenvif_module is already loaded, skipping
[Fri Sep 22 10:50:16.444194 2017] [so:warn] [pid 110182] AH01574: module substitute_module is already loaded, skipping
[Fri Sep 22 10:50:16.444205 2017] [so:warn] [pid 110182] AH01574: module suexec_module is already loaded, skipping
[Fri Sep 22 10:50:16.444291 2017] [so:warn] [pid 110182] AH01574: module unixd_module is already loaded, skipping
[Fri Sep 22 10:50:16.444377 2017] [so:warn] [pid 110182] AH01574: module version_module is already loaded, skipping
[Fri Sep 22 10:50:16.444385 2017] [so:warn] [pid 110182] AH01574: module vhost_alias_module is already loaded, skipping
[Fri Sep 22 10:50:16.444402 2017] [so:warn] [pid 110182] AH01574: module dav_module is already loaded, skipping
[Fri Sep 22 10:50:16.444412 2017] [so:warn] [pid 110182] AH01574: module dav_fs_module is already loaded, skipping
[Fri Sep 22 10:50:16.445171 2017] [so:warn] [pid 110182] AH01574: module mpm_prefork_module is already loaded, skipping
[Fri Sep 22 10:50:16.447082 2017] [so:warn] [pid 110182] AH01574: module systemd_module is already loaded, skipping
[Fri Sep 22 10:50:16.447130 2017] [so:warn] [pid 110182] AH01574: module cgi_module is already loaded, skipping
[Fri Sep 22 10:50:16.447142 2017] [so:warn] [pid 110182] AH01574: module wsgi_module is already loaded, skipping
[Fri Sep 22 10:50:16.451773 2017] [alias:warn] [pid 110182] AH00671: The Alias directive in /etc/httpd/conf.d/autoindex.conf at line 21 will probably never match because it overlaps an earlier Alias.
[Fri Sep 22 10:50:16.452539 2017] [auth_digest:notice] [pid 110182] AH01757: generating secret for digest authentication ...
[Fri Sep 22 10:50:16.556372 2017] [lbmethod_heartbeat:notice] [pid 110182] AH02282: No slotmem from mod_heartmonitor
[Fri Sep 22 10:50:16.866065 2017] [mpm_prefork:notice] [pid 110182] AH00163: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 configured -- resuming normal operations
[Fri Sep 22 10:50:16.866112 2017] [core:notice] [pid 110182] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[Fri Sep 22 11:02:12.621022 2017] [mpm_prefork:notice] [pid 110182] AH00170: caught SIGWINCH, shutting down gracefully
[Fri Sep 22 11:03:01.278069 2017] [suexec:notice] [pid 111604] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Sep 22 11:03:01.327078 2017] [so:warn] [pid 111604] AH01574: module access_compat_module is already loaded, skipping
[Fri Sep 22 11:03:01.327093 2017] [so:warn] [pid 111604] AH01574: module actions_module is already loaded, skipping
[Fri Sep 22 11:03:01.327099 2017] [so:warn] [pid 111604] AH01574: module alias_module is already loaded, skipping
[Fri Sep 22 11:03:01.327174 2017] [so:warn] [pid 111604] AH01574: module auth_basic_module is already loaded, skipping
[Fri Sep 22 11:03:01.327181 2017] [so:warn] [pid 111604] AH01574: module auth_digest_module is already loaded, skipping
[Fri Sep 22 11:03:01.327185 2017] [so:warn] [pid 111604] AH01574: module authn_anon_module is already loaded, skipping
[Fri Sep 22 11:03:01.327189 2017] [so:warn] [pid 111604] AH01574: module authn_core_module is already loaded, skipping
[Fri Sep 22 11:03:01.327271 2017] [so:warn] [pid 111604] AH01574: module authn_dbm_module is already loaded, skipping
[Fri Sep 22 11:03:01.327277 2017] [so:warn] [pid 111604] AH01574: module authn_file_module is already loaded, skipping
[Fri Sep 22 11:03:01.327360 2017] [so:warn] [pid 111604] AH01574: module authz_core_module is already loaded, skipping
[Fri Sep 22 11:03:01.327452 2017] [so:warn] [pid 111604] AH01574: module authz_dbm_module is already loaded, skipping
[Fri Sep 22 11:03:01.327459 2017] [so:warn] [pid 111604] AH01574: module authz_groupfile_module is already loaded, skipping
[Fri Sep 22 11:03:01.327464 2017] [so:warn] [pid 111604] AH01574: module authz_host_module is already loaded, skipping
[Fri Sep 22 11:03:01.327467 2017] [so:warn] [pid 111604] AH01574: module authz_owner_module is already loaded, skipping
[Fri Sep 22 11:03:01.327471 2017] [so:warn] [pid 111604] AH01574: module authz_user_module is already loaded, skipping
[Fri Sep 22 11:03:01.327475 2017] [so:warn] [pid 111604] AH01574: module autoindex_module is already loaded, skipping
[Fri Sep 22 11:03:01.327479 2017] [so:warn] [pid 111604] AH01574: module cache_module is already loaded, skipping
[Fri Sep 22 11:03:01.327740 2017] [so:warn] [pid 111604] AH01574: module deflate_module is already loaded, skipping
[Fri Sep 22 11:03:01.327747 2017] [so:warn] [pid 111604] AH01574: module dir_module is already loaded, skipping
[Fri Sep 22 11:03:01.327927 2017] [so:warn] [pid 111604] AH01574: module env_module is already loaded, skipping
[Fri Sep 22 11:03:01.327937 2017] [so:warn] [pid 111604] AH01574: module expires_module is already loaded, skipping
[Fri Sep 22 11:03:01.327947 2017] [so:warn] [pid 111604] AH01574: module ext_filter_module is already loaded, skipping
[Fri Sep 22 11:03:01.327952 2017] [so:warn] [pid 111604] AH01574: module filter_module is already loaded, skipping
[Fri Sep 22 11:03:01.328046 2017] [so:warn] [pid 111604] AH01574: module include_module is already loaded, skipping
[Fri Sep 22 11:03:01.328181 2017] [so:warn] [pid 111604] AH01574: module log_config_module is already loaded, skipping
[Fri Sep 22 11:03:01.328188 2017] [so:warn] [pid 111604] AH01574: module logio_module is already loaded, skipping
[Fri Sep 22 11:03:01.328193 2017] [so:warn] [pid 111604] AH01574: module mime_magic_module is already loaded, skipping
[Fri Sep 22 11:03:01.328197 2017] [so:warn] [pid 111604] AH01574: module mime_module is already loaded, skipping
[Fri Sep 22 11:03:01.328202 2017] [so:warn] [pid 111604] AH01574: module negotiation_module is already loaded, skipping
[Fri Sep 22 11:03:01.328357 2017] [so:warn] [pid 111604] AH01574: module rewrite_module is already loaded, skipping
[Fri Sep 22 11:03:01.328369 2017] [so:warn] [pid 111604] AH01574: module setenvif_module is already loaded, skipping
[Fri Sep 22 11:03:01.328850 2017] [so:warn] [pid 111604] AH01574: module substitute_module is already loaded, skipping
[Fri Sep 22 11:03:01.328861 2017] [so:warn] [pid 111604] AH01574: module suexec_module is already loaded, skipping
[Fri Sep 22 11:03:01.328959 2017] [so:warn] [pid 111604] AH01574: module unixd_module is already loaded, skipping
[Fri Sep 22 11:03:01.329044 2017] [so:warn] [pid 111604] AH01574: module version_module is already loaded, skipping
[Fri Sep 22 11:03:01.329051 2017] [so:warn] [pid 111604] AH01574: module vhost_alias_module is already loaded, skipping
[Fri Sep 22 11:03:01.329070 2017] [so:warn] [pid 111604] AH01574: module dav_module is already loaded, skipping
[Fri Sep 22 11:03:01.329075 2017] [so:warn] [pid 111604] AH01574: module dav_fs_module is already loaded, skipping
[Fri Sep 22 11:03:01.329754 2017] [so:warn] [pid 111604] AH01574: module mpm_prefork_module is already loaded, skipping
[Fri Sep 22 11:03:01.331569 2017] [so:warn] [pid 111604] AH01574: module systemd_module is already loaded, skipping
[Fri Sep 22 11:03:01.331650 2017] [so:warn] [pid 111604] AH01574: module cgi_module is already loaded, skipping
[Fri Sep 22 11:03:01.331669 2017] [so:warn] [pid 111604] AH01574: module wsgi_module is already loaded, skipping
[Fri Sep 22 11:03:01.346650 2017] [alias:warn] [pid 111604] AH00671: The Alias directive in /etc/httpd/conf.d/autoindex.conf at line 21 will probably never match because it overlaps an earlier Alias.
[Fri Sep 22 11:03:01.347663 2017] [auth_digest:notice] [pid 111604] AH01757: generating secret for digest authentication ...
[Fri Sep 22 11:03:01.464545 2017] [lbmethod_heartbeat:notice] [pid 111604] AH02282: No slotmem from mod_heartmonitor
[Fri Sep 22 11:03:01.563230 2017] [mpm_prefork:notice] [pid 111604] AH00163: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 configured -- resuming normal operations
[Fri Sep 22 11:03:01.563299 2017] [core:notice] [pid 111604] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[root@server1 httpd]#
Can somebody help me to troubleshoot the issue?
Got it! This solved the issue!
https://bugs.launchpad.net/horizon/+bug/1573488/comments/6
