Persistence with Bitnami's WordPress Docker Setup

I'm trying to set up Wordpress with this documentation:
https://github.com/bitnami/bitnami-docker-wordpress#mount-host-directories-as-data-volumes-with-docker-compose
My host directories for the volumes look like this in the docker-compose file:
volumes:
  - './mariadb_data:/bitnami'
...
volumes:
  - './wordpress_data:/bitnami'
When running docker-compose up, the following errors occur:
mariadb_1 | INFO ==> Starting mysqld_safe...
mariadb_1 | Could not open required defaults file: /opt/bitnami/mariadb/conf/my.cnf
mariadb_1 | Fatal error in defaults handling. Program aborted
mariadb_1 | WARNING: Defaults file '/opt/bitnami/mariadb/conf/my.cnf' not found!
mariadb_1 | Could not open required defaults file: /opt/bitnami/mariadb/conf/my.cnf
mariadb_1 | Fatal error in defaults handling. Program aborted
mariadb_1 | WARNING: Defaults file '/opt/bitnami/mariadb/conf/my.cnf' not found!
mariadb_1 | 171105 05:15:41 mysqld_safe Logging to '/opt/bitnami/mariadb/data/200101d1b330.err'.
mariadb_1 | 171105 05:15:41 mysqld_safe Starting mysqld daemon with databases from /opt/bitnami/mariadb/data
mariadb_1 | /opt/bitnami/mariadb/bin/mysqld_safe_helper: Can't create/write to file '/opt/bitnami/mariadb/data/200101d1b330.err' (Errcode: 2 "No such file or directory")
myproject_mariadb_1 exited with code 1
However, if I change my docker-compose file to use named volumes instead of host directories:
volumes:
  - 'mariadb_data:/bitnami'
...
volumes:
  - 'wordpress_data:/bitnami'
... the docker-compose up works.
If I then stop the containers and revert my docker-compose file to use host directories again, docker-compose up now works, and the host directories are populated correctly.
This is a solution to my problem, but I would like to know why, and whether there is a way to make things work without this workaround.
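For reference, in compose file format 2 and later the named-volume variant also needs those names declared at the top level of the file; a minimal sketch of that part (assuming the file uses version '2', not my full compose file):
volumes:
  mariadb_data:
  wordpress_data: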

Check whether bitnami/bitnami-docker-mariadb issue 123 is relevant in your case:
It seems that docker-compose up did not create a container from scratch (with a clean filesystem), but instead reused a preexisting one. I deduce this from the beginning of the output:
Starting mariadb_mariadb_1
Attaching to mariadb_mariadb_1
...
It seems to me that this container, in a previous run, was started with a volume attached at /bitnami/mariadb. After that, the container was stopped, the volume was detached, and the container was restarted. It didn't configure anything and just tried to run the MySQL server binary. Since we create symbolic links from /opt/bitnami/mariadb pointing into /bitnami/mariadb (my.cnf included), that file went missing and the binaries crashed at start time.
Could you please try using the docker-compose file we provide in this repo? If you only modify it to add environment variables, you shouldn't run into this kind of issue.
As a workaround, just run the following:
docker-compose down -v
docker-compose up
It will remove the MariaDB container, along with any associated volumes, and start from scratch. Bear in mind that you will lose any state you stored in the container.
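For comparison, a compose file that mounts host directories along the lines of the linked README might look like this (a minimal sketch; image names and relative paths are assumptions based on the Bitnami docs, not the exact file from the repo):
version: '2'
services:
  mariadb:
    image: bitnami/mariadb:latest
    volumes:
      - './mariadb_data:/bitnami'
  wordpress:
    image: bitnami/wordpress:latest
    depends_on:
      - mariadb
    volumes:
      - './wordpress_data:/bitnami'
Started from a clean slate (docker-compose down -v first), this should populate ./mariadb_data and ./wordpress_data on the host.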

Related

DynamoDB local behaving erratically

This is a very strange situation that's driving me nuts, and I would really appreciate some help here.
I am using CDK to define the DynamoDB table and associated indices. To test them locally, I installed cdklocal and DynamoDB local using localstack. When the computer (Mac running Ventura 13.1) is restarted, everything works as expected. Here is the script I use to bootstrap and start the stack (this is in a file called startStack.sh):
docker-compose up -d
echo "Waiting for 5 seconds"
sleep 5
cd test-app
cdklocal bootstrap
echo "Waiting for 5 seconds"
sleep 5
cdklocal deploy TestAppStack
#cdklocal deploy TestAppStack/ops-table
DYNAMO_ENDPOINT="http://localhost:4566/" dynamodb-admin &
open http://0.0.0.0:8001
cd ..
The test-app directory contains a local copy of the DynamoDB (and associated indices) definition. I do not encounter any errors running the cdklocal (or cdk) deploy commands so I am assuming that the CDK definition is not an issue.
The docker-compose looks like this:
version: "3.8"
services:
localstack:
container_name: AWS-DEVELOPMENT-WITH-LOCALSTACK
image: localstack/localstack:latest
network_mode: bridge
ports:
- "127.0.0.1:53:53"
- "127.0.0.1:53:53/udp"
- "127.0.0.1:443:443"
- "127.0.0.1:4566:4566"
- "127.0.0.1:4571:4571"
- "127.0.0.1:${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
environment:
- DYNAMODB_SHARE_DB=1
- DISABLE_CORS_CHECKS=1
- SERVICES=s3,dynamodb,sns,sqs,firehose,kinesis,ses,sts,cloudformation
- DEBUG=1
- DATA_DIR=/tmp/localstack/data
- PORT_WEB_UI=8080
- LAMBDA_EXECUTOR=local
- KINESIS_ERROR_PROBABILITY=1.0
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=./.localstack
volumes:
- './.localstack:/var/lib/localstack'
- '/var/run/docker.sock:/var/run/docker.sock'
Everything works as expected when I first run the startStack.sh file - the dynamodb-admin window opens up correctly and other interfaces can interact with the local DynamoDB table. But after some time (and I have not been able to pinpoint the cause), all interactions with local DynamoDB start failing with the following error(s):
Bootstrapping environment aws://000000000000/us-west-2...
❌ Environment aws://000000000000/us-west-2 failed bootstrapping: UnknownEndpoint: Inaccessible host: `localhost' at port `4566'. This service may not be available in the `us-west-2' region.
at Request.ENOTFOUND_ERROR (/usr/local/lib/node_modules/aws-sdk/lib/event_listeners.js:611:46)
at Request.callListeners (/usr/local/lib/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/usr/local/lib/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/usr/local/lib/node_modules/aws-sdk/lib/request.js:686:14)
at error2 (/usr/local/lib/node_modules/aws-sdk/lib/event_listeners.js:443:22)
at ClientRequest.<anonymous> (/usr/local/lib/node_modules/aws-sdk/lib/http/node.js:99:9)
at ClientRequest.emit (node:events:513:28)
at ClientRequest.emit (node:domain:489:12)
at Socket.socketErrorListener (node:_http_client:494:9)
at Socket.emit (node:events:513:28) {
code: 'UnknownEndpoint',
region: 'us-west-2',
hostname: 'localhost',
retryable: true,
originalError: [Error],
time: 2023-01-15T06:46:40.614Z
}
Inaccessible host: `localhost' at port `4566'. This service may not be available in the `us-west-2' region.
The script hangs at the following message:
[16:52:01] Retrieved account ID 000000000000 from disk cache
[16:52:01] Assuming role 'arn:aws:iam::000000000000:role/cdk-hnb659fds-deploy-role-000000000000-us-west-2'.
[16:52:01] Assuming role failed: Inaccessible host: `localhost' at port `4566'. This service may not be available in the `us-west-2' region.
[16:52:01] Could not assume role in target account using current credentials Inaccessible host: `localhost' at port `4566'. This service may not be available in the `us-west-2' region. . Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
current credentials could not be used to assume 'arn:aws:iam::000000000000:role/cdk-hnb659fds-deploy-role-000000000000-us-west-2', but are for the right account. Proceeding anyway.
[16:52:01] Waiting for stack CDKToolkit to finish creating or updating...
Restarting the computer fixes it, but it's not clear what causes the issue in the first place. Restarting Docker alone does not help.
Any thoughts on what could be causing the problem and how I can avoid it?
I'm adding this as an answer although I do not have a definitive one; I thought I would try to help.
I believe the port is already occupied, so the process you are starting cannot bind to it, which results in the error. Before running the script, check whether the port is in use:
sudo lsof -i :4566
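If the port does turn out to be held by a stale process or container, one way to recover without rebooting (the PID is a placeholder taken from the lsof output) is:
# stop whatever owns port 4566, then recreate the stack
kill <PID>
docker-compose down
docker-compose up -d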

Showing error while mounting EFS to an instance in my Elastic Beanstalk environment

I followed this procedure for attaching an EFS file system to instances created using Elastic Beanstalk:
https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-mount-efs-volumes/#:~:text=In%20an%20Elastic%20Beanstalk%20environment,scale%20up%20to%20multiple%20instances.
But the Elastic Beanstalk logs show the following error:
[Instance: i-06593*****] Command failed on instance. Return code: 1 Output: (TRUNCATED)...fs ... mount -t efs -o tls fs-d9****:/ /efs Failed to resolve "fs-d9****.efs.us-east-1.amazonaws.com" - check that your file system ID is correct. See https://docs.aws.amazon.com/console/efs/mount-dns-name for more detail. ERROR: Mount command failed!. command 01_mount in .ebextensions/storage-efs-mountfilesystem.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I just used **** in the EFS ID for security.
Based on the comments, the solution was to create a new EFS file system instead of using the original one.
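Before rerunning the deployment, a quick sanity check from an instance in the environment can confirm the new file system ID actually resolves and mounts (fs-xxxxxxxx is a placeholder; the mount helper comes from amazon-efs-utils):
# does the mount target DNS name resolve from inside the VPC?
nslookup fs-xxxxxxxx.efs.us-east-1.amazonaws.com
# try the same mount the .ebextensions config runs
sudo mkdir -p /efs
sudo mount -t efs -o tls fs-xxxxxxxx:/ /efs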

Airflow (LocalExecutor) - Docker :: Job is failing with Log file does not exist

Airflow version: 1.10.9
Executor: LocalExecutor
Setup: Docker
Sometimes when a job runs we get the following error. I have searched the web; many people have faced this issue with the CeleryExecutor, but we are using the LocalExecutor (Docker setup). How can I resolve this problem?
*** Log file does not exist: /home/ubuntu/airflow/airflow/logs/es_update_relevance_score/es_update_relevance_score/2020-05-14T16:26:06.062416+00:00/1.log
*** Fetching from: http://:8793/log/es_update_relevance_score/es_update_relevance_score/2020-05-14T16:26:06.062416+00:00/1.log
*** Failed to fetch log file from worker. Invalid URL 'http://:8793/log/es_update_relevance_score/es_update_relevance_score/2020-05-14T16:26:06.062416+00:00/1.log': No host supplied
Here is one approach I've seen when running the scheduler and webserver in their own containers and using LocalExecutor:
Mount a host log directory as a volume into both the scheduler and webserver containers:
volumes:
  - /location/on/host/airflow/logs:/opt/airflow/logs
Make sure the user within the Airflow containers (usually airflow) has permission to read and write that directory. If the permissions are wrong, you will see an error like the one in your post.
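A minimal sketch of that layout (the image name, host path, and the 50000 UID are assumptions; check what user your image actually runs as):
services:
  webserver:
    image: my-airflow-image:1.10.9   # placeholder for whatever image you already run
    command: webserver
    volumes:
      - /location/on/host/airflow/logs:/opt/airflow/logs
  scheduler:
    image: my-airflow-image:1.10.9
    command: scheduler
    volumes:
      - /location/on/host/airflow/logs:/opt/airflow/logs
On the host, make the directory writable by the container's airflow user, for example:
sudo chown -R 50000:0 /location/on/host/airflow/logs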
This probably won't scale beyond LocalExecutor usage though.

Spark - No Space Left on device error

I am getting the error below. SPARK_LOCAL_DIRS has been set, and the directory has enough space and inodes left.
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at org.apache.spark.storage.TimeTrackingOutputStream.write(TimeTrackingOutputStream.java:58)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
at org.xerial.snappy.SnappyOutputStream.dumpOutput(SnappyOutputStream.java:294)
at org.xerial.snappy.SnappyOutputStream.compressInput(SnappyOutputStream.java:306)
at org.xerial.snappy.SnappyOutputStream.rawWrite(SnappyOutputStream.java:245)
at org.xerial.snappy.SnappyOutputStream.write(SnappyOutputStream.java:107)
at org.apache.spark.io.SnappyOutputStreamWrapper.write(CompressionCodec.scala:190)
at org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:218)
at org.apache.spark.util.collection.ChainedBuffer.read(ChainedBuffer.scala:56)
at org.apache.spark.util.collection.PartitionedSerializedPairBuffer$$anon$2.writeNext(PartitionedSerializedPairBuffer.scala:137)
at org.apache.spark.util.collection.ExternalSorter.writePartitionedFile(ExternalSorter.scala:757)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:70)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
cat spark-env.sh |grep -i local
export SPARK_LOCAL_DIRS=/var/log/hadoop/spark
disk usage
df -h /var/log/hadoop/spark
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/meta 200G 1.1G 199G 1% /var/log/hadoop
inodes
df -i /var/log/hadoop/spark
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/meta 209711104 185 209710919 1% /var/log/hadoop
I also encountered the same issue. To resolve it, I first checked my HDFS disk usage by running hdfs dfsadmin -report.
The Non DFS Used figure was above 250 GB, which implied that my logs, tmp, or intermediate data was consuming too much space.
After running du -lh | grep G from the root folder, I found that spark/work was consuming over 200 GB.
Looking at the folders inside spark/work, I realized I had mistakenly left a System.out.println statement in the code, and hence the logs were consuming so much space.
If you're running Spark on YARN in yarn-cluster mode, the local dirs used by both the executors and the driver are taken from the YARN config (yarn.nodemanager.local-dirs); spark.local.dir and your environment variable are ignored.
If you're running in yarn-client mode, the executors again use the local dirs configured in the YARN config, but the driver uses the one you specified in your environment variable, because in that mode the driver is not run on the YARN cluster.
So try setting that config.
You can find a bit more information in the documentation,
and there's even a whole section on running Spark on YARN.
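For example, in yarn-client mode you could point the driver's scratch space at a larger volume while the executors keep using the NodeManager's local dirs (the path and jar name below are placeholders):
spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf spark.local.dir=/mnt/big-disk/spark-tmp \
  your_app.jar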
Please also check how many inodes Hadoop has used. If they are all gone, you get the same generic "no space left" error even though there is still free space.

~/.ssh/id_rsa.pub not found error while installing capistrano as ansible playbook

I'm trying to install https://github.com/roots/bedrock-ansible to get a Bedrock deployment (http://roots.io/wordpress-stack/) running.
When I run "vagrant up", after some time I get the error:
TASK: [capistrano-setup | Setup deploy group] *********************************
skipping: [default]
TASK: [capistrano-setup | Setup deploy user] **********************************
skipping: [default]
TASK: [capistrano-setup | Adding public key to server] ************************
fatal: [default] => could not locate file in lookup: ~/.ssh/id_rsa.pub
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #/Users/johannes/site.retry
default : ok=46 changed=16 unreachable=1 failed=0
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
I do not have a clue how I can fix this. Do you have an idea?
It seems the role is trying to find your local public key. It should be at the location given in the error message, '~/.ssh/id_rsa.pub', but it's not. So either you don't have one, or you keep it in another location.
If you're not familiar with generating SSH keys you probably don't have one. I personally like the GitHub help page for this: https://help.github.com/articles/generating-ssh-keys/
(you only have to perform steps 1 and 2).
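If you need to generate one, a typical invocation is (the email is only a comment label on the key):
ssh-keygen -t rsa -b 4096 -C "you@example.com"
# press Enter at the path prompt so the key lands at ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub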
If you do have SSH keys but keep them in a different location, note that the capistrano-install role in bedrock uses these variables:
deploy_user: deploy
deploy_keys:
  - "~/.ssh/id_rsa.pub"
So you can set (multiple) public key files in the deploy_keys list and they will be added to the deploy_user's authorized keys.
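For example, an override pointing at a key stored somewhere else could look like this (the file name is a placeholder):
deploy_keys:
  - "~/.ssh/my_project_key.pub"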
All this is needed because Capistrano will use the deploy user to connect to the remote server later. http://blakesmith.me/2010/02/08/understanding-public-key-private-key-concepts.html
