I'm currently trying to set up Bitnami's WordPress + NGINX Docker image on Azure App Service for Containers, and I am stuck at the error message: Stopping site because it failed during startup.
I already tried increasing WEBSITES_CONTAINER_START_TIME_LIMIT, without success. I also tried to find anything in the other logs; however, there is no relevant information.
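For reference, here is roughly how I applied that setting with the Azure CLI (the resource group is a placeholder; 1800 seconds is, as far as I know, the maximum):
az webapp config appsettings set \
  --resource-group <my-resource-group> \
  --name as-wp-weu-prod \
  --settings WEBSITES_CONTAINER_START_TIME_LIMIT=1800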
This is my log:
2021-03-26T20:26:39.850Z INFO - Starting multi-container app..
2021-03-26T20:26:40.686Z INFO - Pulling image from Docker hub: registry.hub.docker.com/bitnami/wordpress-nginx
2021-03-26T20:26:42.009Z INFO - latest Pulling from bitnami/wordpress-nginx
2021-03-26T20:26:42.011Z INFO - Digest: sha256:08ea17712bc6cd31a20128557aa5bbad562dc81af2e9a26b4f45bfca40e5538c
2021-03-26T20:26:42.013Z INFO - Status: Image is up to date for registry.hub.docker.com/bitnami/wordpress-nginx:latest
2021-03-26T20:26:42.021Z INFO - Pull Image successful, Time taken: 0 Minutes and 1 Seconds
2021-03-26T20:26:42.050Z INFO - Starting container for site
2021-03-26T20:26:42.052Z INFO - docker run -d -p 9180:80 --name as-wp-weu-prod_wordpress_0_f1793421 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=true -e WEBSITE_SITE_NAME=as-wp-weu-prod -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=as-wp-weu-prod.azurewebsites.net -e WEBSITE_INSTANCE_ID=8d9749e1ad54987267d51cda41194309f78e9fe336b09f44ea04d81342bdd40c -e HTTP_LOGGING_ENABLED=1 registry.hub.docker.com/bitnami/wordpress-nginx
2021-03-26T20:56:42.520Z ERROR - multi-container unit was not started successfully
2021-03-26T20:56:42.546Z INFO - Container logs from as-wp-weu-prod_wordpress_0_f1793421 = 2021-03-26T20:26:49.138042862Z
2021-03-26T20:26:49.138217863Z Welcome to the Bitnami wordpress-nginx container
2021-03-26T20:26:49.138376864Z Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-wordpress-nginx
2021-03-26T20:26:49.138532065Z Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-wordpress-nginx/issues
2021-03-26T20:26:49.147750834Z
2021-03-26T20:26:56.839294655Z nami INFO Initializing php
2021-03-26T20:26:56.907101461Z nami INFO php successfully initialized
2021-03-26T20:27:04.576803980Z nami INFO Initializing nginx
2021-03-26T20:27:04.676840017Z nami INFO nginx successfully initialized
2021-03-26T20:27:12.622656961Z nami INFO Initializing mysql-client
2021-03-26T20:27:12.721293503Z nami INFO mysql-client successfully initialized
2021-03-26T20:27:22.196788004Z nami INFO Initializing wordpress
2021-03-26T20:27:22.519271584Z wordpre INFO ==> Preparing Varnish environment
2021-03-26T20:27:22.527241945Z wordpre INFO ==> Preparing Apache environment
2021-03-26T20:27:22.537316623Z wordpre INFO ==> Preparing PHP environment
2021-03-26T20:27:22.588082613Z wordpre INFO WordPress has been already initialized, restoring...
2021-03-26T20:27:24.832051104Z mysql-c INFO Trying to connect to MySQL server
2021-03-26T20:27:24.861626532Z mysql-c INFO Found MySQL server listening at mariadb-weu-prod.mariadb.database.azure.com:3306
2021-03-26T20:27:25.130808807Z mysql-c INFO MySQL server listening and working at mariadb-weu-prod.mariadb.database.azure.com:3306
2021-03-26T20:27:25.130828007Z wordpre INFO Upgrading WordPress Database ...
2021-03-26T20:27:27.133719647Z wordpre INFO
2021-03-26T20:27:27.134393052Z wordpre INFO ########################################################################
2021-03-26T20:27:27.135004057Z wordpre INFO Installation parameters for wordpress:
2021-03-26T20:27:27.141716808Z wordpre INFO Persisted data and properties have been restored.
2021-03-26T20:27:27.142344313Z wordpre INFO Any input specified will not take effect.
2021-03-26T20:27:27.142926818Z wordpre INFO This installation requires no credentials.
2021-03-26T20:27:27.143452822Z wordpre INFO ########################################################################
2021-03-26T20:27:27.143975626Z wordpre INFO
2021-03-26T20:27:27.144505830Z nami INFO wordpress successfully initialized
2021-03-26T20:27:27.205315299Z INFO  ==> Starting gosu...
2021-03-26T20:27:27.243704595Z INFO  ==> Starting PHP-FPM...
2021-03-26T20:27:27.245663910Z INFO  ==> Starting NGINX...
2021-03-26T20:27:27.335050999Z 2021/03/26 20:27:27 [warn] 83#83: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /opt/bitnami/nginx/conf/nginx.conf:2
2021-03-26T20:27:27.335712104Z nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /opt/bitnami/nginx/conf/nginx.conf:2
2021-03-26T20:56:46.234Z INFO - Stopping site as-wp-weu-prod because it failed during startup
This is my docker-compose:
version: '3.9'
services:
  wordpress:
    image: registry.hub.docker.com/bitnami/wordpress-nginx
    restart: always
    ports:
      - '8080:80'
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/bitnami/wordpress
      - ${WEBAPP_STORAGE_HOME}/site/server_blocks:/opt/bitnami/nginx/conf/server_blocks/wordpress-server-block.conf
Thank you!
Install NebulaGraph
sudo rpm -ivh nebula-graph-3.3.0.el7.x86_64.rpm
Start NebulaGraph
sudo /usr/local/nebula/scripts/nebula.service start all
[INFO] Starting nebula-metad...
[INFO] Done
[INFO] Starting nebula-graphd...
[INFO] Done
[INFO] Starting nebula-storaged...
[INFO] Done
Run the following command to check the service status of NebulaGraph.
sudo /usr/local/nebula/scripts/nebula.service status all
The returned result is the following; there is a problem with nebula-storaged. How do I deal with this issue?
[INFO] nebula-metad(33fd35e): Running as 29020, Listening on 9559
[INFO] nebula-graphd(33fd35e): Running as 29095, Listening on 9669
[WARN] nebula-storaged after v3.0.0 will not start service until it is added to cluster.
[WARN] See Manage Storage hosts:ADD HOSTS in https://docs.nebula-graph.io/
[INFO] nebula-storaged(33fd35e): Running as 29147, Listening on 9779
To the best of my knowledge, starting from NebulaGraph 3.0.0, the Meta service cannot directly read or write data in the Storage service that you add in the configuration file; the configuration file only registers the Storage service with the Meta service.
You must run the ADD HOSTS command to enable the Meta to read and write data in the Storage service.
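A minimal sketch of that step from a nebula-console session (the Storage IP is a placeholder; port 9779 is taken from the status output above):
ADD HOSTS <storaged_ip>:9779;
# The storaged host should now report ONLINE
SHOW HOSTS;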
Since upgrading to Artifactory 6.8.7 using the RPM on RHEL 7, systemctl start artifactory fails.
Looking in the log, it's failing at this point:
2019-03-16 09:50:28,952 [art-init] [INFO ] (o.a.s.a.ArtifactoryAccessClientConfigStore:593) - Using Access Server URL: http://localhost:8040/access (bundled) source: detected
2019-03-16 09:50:29,379 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:353) - Waiting for access server...
2019-03-16 09:50:30,625 [art-init] [WARN ] (o.j.a.c.AccessClientHttpException:41) - Unrecognized ErrorsModel by Access. Original message: Failed on executing /api/v1/system/ping, with response: Not Found
2019-03-16 09:50:30,634 [art-init] [ERROR] (o.a.s.a.AccessServiceImpl:364) - Could not ping access server: {}
org.jfrog.access.client.AccessClientHttpException: HTTP response status 404:Failed on executing /api/v1/system/ping, with response: Not Found
Previously we would get
2019-03-13 09:56:06,293 [art-init] [INFO ] (o.a.s.a.ArtifactoryAccessClientConfigStore:593) - Using Access Server URL: http://localhost:8040/access (bundled) source: detected
2019-03-13 09:56:06,787 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:353) - Waiting for access server...
2019-03-13 09:56:24,068 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:360) - Got response from Access server after 17280 ms, continuing.
Any suggestions on debugging whether this Access server has started?
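For what it's worth, the ping endpoint can be probed directly; this is just the "Access Server URL" from the log above plus the path from the error:
curl -v http://localhost:8040/access/api/v1/system/ping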
Further to this, I found logs showing that when it worked, it used to start a jar file:
2019-03-08 09:19:11,609 [localhost-startStop-2] [INFO ] (o.j.a.AccessApplication:48) - Starting AccessApplication v4.1.48 on hostname.nexor.co.uk with PID 5913 (/opt/jfrog/artifactory/tomcat/webapps/access/WEB-INF/lib/access-application-4.1.48.jar started by artifactory in /)
Now when I look, I find /opt/jfrog/artifactory/tomcat/webapps/access/ is empty, so there is no jar file to run.
The RPM did deliver an access.war file, and that is there:
$ ls -l /opt/jfrog/artifactory/webapps
total 104692
-rwxrwxr-x. 1 root root 51099759 Mar 14 12:14 access.war
-rwxrwxr-x. 1 root root 56099348 Mar 14 12:14 artifactory.war
Is there some manual step I can run to expand this war file to get the jar? (As you can guess, I am not up on my Java apps.)
Eventually I got it working by deleting the empty /opt/jfrog/artifactory/tomcat/webapps/access directory; on restart, a new one containing the required jar files was created.
I'm not sure why this happened, but that got it working for me.
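In case it helps, the steps were roughly the following (assuming the default RPM layout and systemd):
sudo systemctl stop artifactory
sudo rm -rf /opt/jfrog/artifactory/tomcat/webapps/access
sudo systemctl start artifactory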
I had a similar problem on CentOS 7; the solution was to downgrade the newly updated Java packages by running this command:
yum downgrade java-1.8.0*
After that, restart Artifactory:
systemctl restart artifactory
Try changing the port number in your tomcat/conf/server.xml from 8081 to a different, unused port. Then restart the Artifactory service to ensure the change takes effect.
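A sketch of the relevant change in server.xml (8082 is just an example of an unused port; the other attributes are illustrative):
<!-- tomcat/conf/server.xml -->
<Connector port="8082" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"/>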
We currently have Artifactory running on a DigitalOcean droplet under Tomcat.
Artifactory itself starts quickly, but the Tomcat deployment doesn't actually complete until much later, and I can't work out how to get any further information out of either the Artifactory deployment or the Tomcat server.
2017-01-11 13:18:20,707 [art-init] [INFO ] (o.a.w.s.ArtifactoryContextConfigListener:219) -
###########################################################
### Artifactory successfully started (14.442 seconds) ###
###########################################################
From the catalina.log:
11-Jan-2017 13:18:00.921 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDescriptor Deploying configuration descriptor /opt/jfrog/artifactory/tomcat/conf/Catalina/localhost/artifactory.xml
11-Jan-2017 13:35:28.086 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDescriptor Deployment of configuration descriptor /opt/jfrog/artifactory/tomcat/conf/Catalina/localhost/artifactory.xml has finished in 1,047,164 ms
I'm confused as to what's going on between 13:18:20 and 13:35:28. Artifactory doesn't have a forum anymore and directs everyone here.
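One generic way to see what the deployment is blocked on is to take a thread dump of the Tomcat JVM while it hangs (jstack ships with the JDK; the pgrep pattern assumes a standard Tomcat launch command):
# Dump the threads of the running Tomcat JVM during the hang
jstack $(pgrep -f catalina) > /tmp/tomcat-threads.txt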
I am new to Docker and currently struggling with the following problem.
After running this command in the Docker terminal:
OAUTH_CLIENT_ID=<...> OAUTH_CLIENT_SECRET=<...> OAUTH_URL_CALLBACK=http://192.168.99.100/api/v1/auth/login docker-compose --file test/docker-compose.yml up
I get the following error message:
ERROR: for platform  Cannot start service platform: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:53: mounting \\\\\\\"/c/users/m_konk01/documents/GitHub/o2r-platform/test/nginx.conf\\\\\\\" to rootfs \\\\\\\"/mnt/sda1/var/lib/docker/aufs/mnt/29c14c514916cf09070c6dd084bee55fa899d979b3f7b9521f1ab25e3a8232a0\\\\\\\" at \\\\\\\"/mnt/sda1/var/lib/docker/aufs/mnt/29c14c514916cf09070c6dd084bee55fa899d979b3f7b9521f1ab25e3a8232a0/etc/nginx/nginx.conf\\\\\\\" caused \\\\\\\"not a directory\\\\\\\"\\\"\"\n"
ERROR: Encountered errors while bringing up the project.
The docker-compose.yml starts several docker containers and contains the following platform settings:
platform:
  image: nginx:latest
  depends_on:
    - container1
    - container2
    - container3
    - container4
  volumes:
    - "./nginx.conf:/etc/nginx/nginx.conf:ro"
    - "../client:/etc/nginx/html"
  ports:
    - "80:80"
Docker version
$ docker version
Client:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 23:26:11 2016
 OS/Arch:      windows/amd64

Server:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 23:26:11 2016
 OS/Arch:      linux/amd64
I would be happy for any ideas; my search has not been successful so far.
It seems like you want to mount a file onto a file; if I remember correctly, that is not allowed in your Docker version. You have to bake your nginx.conf into the image via your Dockerfile instead:
Dockerfile:
....
ADD nginx.conf /tmp/
RUN mv /tmp/nginx.conf /etc/nginx/nginx.conf && \
....
This should work for you. If not, show me your next error message.
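If you go this route, point the compose service at your Dockerfile instead of the stock image, roughly like this (a sketch; the build context path is an assumption):
platform:
  build: .            # directory containing the Dockerfile and nginx.conf
  depends_on:
    - container1
    - container2
    - container3
    - container4
  volumes:
    - "../client:/etc/nginx/html"   # the nginx.conf file mount is gone
  ports:
    - "80:80"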
I faced a similar issue because the volume mount was not working properly.
In that case Docker's default behavior is to treat ./nginx.conf as a directory, which produces this error.
To confirm this, you can open a shell inside your container, go to the shared directory, and check whether the files exist.
Example: run docker exec -it nginx sh and go to /etc/nginx/html; you won't see your local files. That confirms the volume share is not working, and you need to fix that.
If you are using Windows 10 (Hyper-V), you first need to share your drive and check again.
Either way, you need to find out why your shared directory is not being mounted first.
We're running Red Hat 6.4 on two of our nodes.
We've installed the new Cloudera Manager 5.5.0 and we've been trying to create a cluster and add a first node to it (the node is initially clean of any Cloudera component). Unfortunately, during the cluster installation, Cloudera Manager gets stuck every time at:
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager Server (check firewall rules).
Ensure that ports 9000 and 9001 are not in use on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added. (Some of the logs can be found in the installation details).
If Use TLS Encryption for Agents is enabled in Cloudera Manager (Administration -> Settings -> Security), ensure that /etc/cloudera-scm-agent/config.ini has use_tls=1 on the host being added. Restart the corresponding agent and click the Retry link here.
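For reference, the agent-side flag from the last item lives in /etc/cloudera-scm-agent/config.ini and looks roughly like this (a sketch; we are not using TLS ourselves):
[Security]
# Set to 1 only if "Use TLS Encryption for Agents" is enabled in Cloudera Manager
use_tls=1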
We looked around and saw that this is usually caused by a misconfigured /etc/hosts file. So we edited ours on both the Cloudera Manager host and the new node, did a service network restart as well as a service cloudera-scm-server restart, but it didn't work either.
Here's what the /etc/hosts file looks like:
127.0.0.1 localhost
10.186.80.86 domain.node2.fr.net host
10.186.80.105 domain.node1.fr.net mgrnode
We also tried some cleaning up before relaunching the cluster creation by deleting scm_prepare_node.* and .scm_prepare_node.lock.
We also looked at service cloudera-scm-agent status on the new node after each installation failure, and we noticed that the service isn't running (even when we do a service restart, the result is still the same):
service cloudera-scm-agent start
Starting cloudera-scm-agent: [ OK ]
service cloudera-scm-agent status
cloudera-scm-agent dead but pid file exists
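A common suggestion for "dead but pid file exists" is to remove the stale pidfile and restart (the exact pidfile name is an assumption based on the /var/run/cloudera-scm-agent directory in the logs below):
sudo rm -f /var/run/cloudera-scm-agent/cloudera-scm-agent.pid
sudo service cloudera-scm-agent restart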
Here's the agent logs on the new node side :
tail -f /var/log/cloudera-scm-agent/cloudera-scm-agent.log
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Agent Logging Level: INFO
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO No command line vars
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Missing database jar: /usr/share/java/mysql-connector-java.jar (normal, if you're not using this database type)
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Missing database jar: /usr/share/java/oracle-connector-java.jar (normal, if you're not using this database type)
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Found database jar: /usr/share/cmf/lib/postgresql-9.0-801.jdbc4.jar
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Agent starting as pid 24529 user cloudera-scm(420) group cloudera-scm(207).
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Because agent not running as root, all processes will run with current user.
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent WARNING Expected mode 0751 for /var/run/cloudera-scm-agent but was 0755
[30/Nov/2015 15:07:27 +0000] 24529 MainThread agent INFO Re-using pre-existing directory: /var/run/cloudera-scm-agent
[30/Nov/2015 15:07:29 +0000] 24529 MainThread agent INFO Re-using pre-existing directory: /var/run/cloudera-scm-agent/cgroups
Is there anything we're doing wrong?
Thanks in advance for your help!
This time we just created the cluster with the root user (we didn't check the single-user mode).
Besides, our host had no internet access, and since we had created our own repository, we needed to do one last step before launching the cluster creation: importing the GPG key on the host using this command:
sudo rpm --import
If anybody finds themselves facing the same problem, I hope this helps!
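To double-check that the key was imported, you can list the installed GPG public keys with a standard rpm query (independent of the key URL, which is omitted above):
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'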