Mount EFS to wp-content on Elastic Beanstalk - WordPress

So I'm having a problem setting up a WordPress site on EB. I got the EFS to mount correctly on wp-content/uploads/wpfiles (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-hawordpress-tutorial.html), but this only allows the pages to be stored, not the plugins. Is it possible to mount the entire wp-content folder onto EFS? I've tried and so far failed.

I'm not sure if this issue was resolved and it passed silently. I'm having the same issue as you, but with a different error. My knowledge is fairly limited, so take what I say with a grain of salt, but according to what I saw in your log, the problem is that your instance can't see the server. I think it could be that your EB application is getting deployed in a different Availability Zone than your EFS mount targets. What I mean is that maybe you have mount targets for AZs a, b and d, and your EB instance is getting deployed in AZ c. I hope this helps.
I tried a different approach (it basically does the same thing, but I'm manually linking each of the subfolders instead of the wp-content folder itself). For it to work I deleted the original folders inside /var/app/ondeck (which eventually gets copied to /var/app/current/, the folder that actually gets served). Of course, once this is done your WordPress won't work since it doesn't have any themes; the solution here is to quickly log in to the EC2 instance on which your Elastic Beanstalk app is running and manually copy the contents to the mounted EFS (in my case the /wpfiles folder). To connect to the EC2 instance (you can find the instance ID under your EB health configuration) you can follow this link, and to mount your EFS you can follow this link. Of course, if the config works you won't have to mount it, since it will already be mounted, though empty. Here is the content of my config file:
option_settings:
  aws:elasticbeanstalk:application:environment:
    EFS_NAME: '`{"Ref" : "FileSystem"}`'
    MOUNT_DIRECTORY: '/wpfiles'
    REGION: '`{"Ref": "AWS::Region"}`'
packages:
  yum:
    nfs-utils: []
    jq: []
files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      # Read the settings passed in through the environment options above
      EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.REGION')
      EFS_NAME=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_NAME')
      MOUNT_DIRECTORY=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.MOUNT_DIRECTORY')
      # Mount the EFS file system and prepare the shared wp-content folders
      mkdir -p $MOUNT_DIRECTORY
      mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $EFS_NAME.efs.${EFS_REGION}.amazonaws.com:/ $MOUNT_DIRECTORY || true
      mkdir -p $MOUNT_DIRECTORY/uploads
      mkdir -p $MOUNT_DIRECTORY/plugins
      mkdir -p $MOUNT_DIRECTORY/themes
      chown webapp:webapp -R $MOUNT_DIRECTORY/uploads
      chown webapp:webapp -R $MOUNT_DIRECTORY/plugins
      chown webapp:webapp -R $MOUNT_DIRECTORY/themes
commands:
  01_mount:
    command: "/tmp/mount-efs.sh"
container_commands:
  01-rm-wp-content-uploads:
    command: rm -rf /var/app/ondeck/wp-content/uploads && rm -rf /var/app/ondeck/wp-content/plugins && rm -rf /var/app/ondeck/wp-content/themes
  02-symlink-uploads:
    command: ln -snf $MOUNT_DIRECTORY/uploads /var/app/ondeck/wp-content/uploads && ln -snf $MOUNT_DIRECTORY/plugins /var/app/ondeck/wp-content/plugins && ln -snf $MOUNT_DIRECTORY/themes /var/app/ondeck/wp-content/themes
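If the folders on EFS start out empty, the manual copy mentioned above could look roughly like this after SSHing into the instance (a sketch only; it assumes the /wpfiles mount point and seeds the themes and plugins from a fresh WordPress download, since the originals were deleted during deployment):
# Hypothetical: seed the empty EFS folders with stock WordPress content
curl -sLO https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
sudo cp -a wordpress/wp-content/themes/. /wpfiles/themes/
sudo cp -a wordpress/wp-content/plugins/. /wpfiles/plugins/
sudo chown -R webapp:webapp /wpfiles/themes /wpfiles/plugins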
I'm using another config file to create my EFS as in here. In case you have already created your EFS, you must change EFS_NAME: '`{"Ref" : "FileSystem"}`' to EFS_NAME: id_of_your_EFS.
I hope this helps user3738338.

You can do it following this link - https://github.com/aws-samples/eb-php-wordpress/blob/master/.ebextensions/efs-mount.config
Just note that it uses the uploads folder; you can change it to wp-content.
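As a rough sketch (assuming the sample mounts EFS at /wpfiles and only removes and symlinks the uploads folder), the container_commands could be adapted along these lines:
container_commands:
  01-rm-wp-content:
    command: rm -rf /var/app/ondeck/wp-content
  02-symlink-wp-content:
    command: ln -snf /wpfiles /var/app/ondeck/wp-content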

Related

Your environment may not have any index with Wazuh's alerts

I'm getting this error when I'm trying to reinstall the ELK stack with Wazuh.
We need more precise information about your use case in order to help you troubleshoot this problem.
I recommend you follow the uninstallation guide from the official documentation https://documentation.wazuh.com/current/user-manual/uninstall/elastic-stack.html, and after that install it again following https://documentation.wazuh.com/current/installation-guide/more-installation-alternatives/elastic-stack/all-in-one-deployment/unattended-installation.html.
If you want to preserve your configuration, make sure to back up the following files:
cp -p /var/ossec/etc/client.keys /var/ossec_backup/etc/client.keys
cp -p /var/ossec/etc/ossec.conf /var/ossec_backup/etc/ossec.conf
cp -p /var/ossec/queue/rids/sender_counter /var/ossec_backup/queue/rids/sender_counter
If you have made local changes to any of the following, back them up as well:
cp -p /var/ossec/etc/local_internal_options.conf /var/ossec_backup/etc/local_internal_options.conf
cp -p /var/ossec/etc/rules/local_rules.xml /var/ossec_backup/rules/local_rules.xml
cp -p /var/ossec/etc/decoders/local_decoder.xml /var/ossec_backup/etc/local_decoder.xml
If you use centralized configuration, you must also preserve:
cp -p /var/ossec/etc/shared/default/agent.conf /var/ossec_backup/etc/shared/agent.conf
Optionally, the following can also be backed up to preserve alert log files and the syscheck/rootcheck databases:
cp -rp /var/ossec/logs/archives/* /var/ossec_backup/logs/archives/
cp -rp /var/ossec/logs/alerts/* /var/ossec_backup/logs/alerts/
cp -rp /var/ossec/queue/rootcheck/* /var/ossec_backup/queue/rootcheck/
cp -rp /var/ossec/queue/syscheck/* /var/ossec_backup/queue/syscheck/
After reinstalling, you need to place those files back in their original paths.
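For example, restoring the main configuration files could look like this (a sketch, assuming the backup locations used above):
cp -p /var/ossec_backup/etc/client.keys /var/ossec/etc/client.keys
cp -p /var/ossec_backup/etc/ossec.conf /var/ossec/etc/ossec.conf
cp -p /var/ossec_backup/queue/rids/sender_counter /var/ossec/queue/rids/sender_counter
# restart the manager so it picks up the restored files (service name may differ in your setup)
systemctl restart wazuh-manager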
Also, in case you want to preserve your indices, consider making a backup of them following this blog https://wazuh.com/blog/index-backup-management/.

Azure ARM - mount StorageAccount FileShare to a linux VM

I prepared an ARM template; the template creates the listed Azure resources: a Linux VM deployment, a Storage Account deployment, and a file share in that Storage Account.
The ARM template works fine, but I would like to add one thing: mounting the file share to the Linux VM (using the script from the file share blade, the script proposed by Microsoft).
I would like to use the Custom Script Extension and then use the "commandToExecute" option to paste an inline Linux script (the one for file share mounting).
My question is: how do I retrieve the password to the file share and then pass it as a parameter to the inline script? Is that possible? Is it possible to paste the file share mounting script as an inline script in the ARM template? Maybe there is another way to complete my task? I know that I can store the script in a storage account and put its "blob SAS URL" in the Custom Script Extension area of the ARM template, but the question remains how to retrieve the password to the file share. Below is the script for the file share mount.
sudo mkdir /mnt/wsustorageaccount
if [ ! -d "/etc/smbcredentials" ]; then
  sudo mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/StorageAccountName.cred" ]; then
  sudo bash -c 'echo "username=xxxxx" >> /etc/smbcredentials/StorageAccountName.cred'
  sudo bash -c 'echo "password=xxxxxxx" >> /etc/smbcredentials/StorageAccountName.cred'
fi
sudo chmod 600 /etc/smbcredentials/StorageAccountName.cred
sudo bash -c 'echo "//StorageAccount.file.core.windows.net/test /mnt/StorageAccount cifs nofail,vers=3.0,credentials=/etc/smbcredentials/StorageAccountName.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab'
sudo mount -t cifs //StorageAccountName.file.core.windows.net/test /mnt/StorageAccountName -o vers=3.0,credentials=/etc/smbcredentials/StorageAccountName.cred,dir_mode=0777,file_mode=0777,serverino
You can retrieve the storage account key in the template with listKeys, as in this quickstart example:
listKeys(variables('storageAccountId'), '2019-04-01').keys[0].value
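For example, a sketch of a Custom Script Extension resource that passes the key to the mount script via protectedSettings (names such as vmName, storageAccountId and mount-share.sh are placeholders/assumptions, not part of the quickstart):
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2021-03-01",
  "name": "[concat(parameters('vmName'), '/mountFileShare')]",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Azure.Extensions",
    "type": "CustomScript",
    "typeHandlerVersion": "2.1",
    "autoUpgradeMinorVersion": true,
    "protectedSettings": {
      "commandToExecute": "[concat('bash mount-share.sh ', listKeys(variables('storageAccountId'), '2019-04-01').keys[0].value)]"
    }
  }
}
Using protectedSettings keeps the key out of the deployment logs; the script itself would read the key as its first argument instead of the hard-coded password line.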

Configuring a docker container to use host UID and generate files on the host system - Preferably at runtime

I am currently working on a research tool that is supposed to be containerized using Docker, to hopefully run on as many different systems as possible. This works fine for the most part, but we have run into a permission problem because of the workflow: the tool takes an input file (which we mount into the container), evaluates it using R scripts, and is then supposed to generate a report on the input file exactly where the file was taken from on the host system.
The latter part is problematic because, at least in our university context, the internal container user lacks write permissions in the (non-root) user home folders from which we are currently taking our testing data. This would obviously also be bad in a production context, since we don't know how a potential user's system is set up, which is why we are trying to dynamically and temporarily set the permissions of the container user to match the host user.
I have found different solutions that involve passing the UID/GID to the Docker daemon when building the container in some way or another:
docker build --build-arg USER_ID=$(id -u ${USER}) --build-arg GROUP_ID=$(id -g ${USER}) -t IMAGE .
I also changed the Dockerfile accordingly, following a tutorial that suggested replacing the internal www-data user:
[...Package installation steps that are supposed to be run as root...]
ARG USER_ID
ARG GROUP_ID
RUN if [ ${USER_ID:-0} -ne 0 ] && [ ${GROUP_ID:-0} -ne 0 ]; then \
        userdel -f www-data && \
        if getent group www-data; then groupdel www-data; fi && \
        groupadd -g ${GROUP_ID} www-data && \
        useradd -l -u ${USER_ID} -g www-data www-data && \
        install -d -m 0755 -o www-data -g www-data /work/ && \
        chown --changes --silent --no-dereference --recursive \
            --from=33:33 ${USER_ID}:${GROUP_ID} \
            /work \
    ; fi
USER www-data
WORKDIR /work
RUN mkdir files
COPY data/ /opt/MTB/data/
COPY helpers/ /opt/MTB/helpers/
COPY src/www/ /opt/MTB/www/
COPY tmp/ /opt/MTB/tmp/
COPY example_data/ /opt/MTB/example_data/
COPY src/ /opt/MTB/src/
EXPOSE 8080
ENTRYPOINT ["/opt/MTB/src/starter_s_c.sh"]
The entrypoint script starter_s_c.sh is a small bash script that feeds the trailing argument to the corresponding R script as an input file; the R script writes the report.
This works, but it requires the container to be rebuilt for every new user. What we are looking for is a solution that handles the dynamic permission setting at runtime, so that we only have to build the container once and can use it with many different user configurations.
I have found this, but I am not entirely sure how to implement it, as it would replace our entrypoint script, and I'm not sure how to integrate this solution into our project.
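As far as I understand it, the gist of that approach is a wrapper entrypoint along these lines (a rough sketch of my understanding, not something we have working; it assumes the container is started as root, gosu is installed in the image, and the host UID/GID are passed in as environment variables):
#!/bin/sh
# Hypothetical wrapper entrypoint: re-map www-data to the host user's IDs,
# fix ownership of the work directory, then drop privileges for the real entrypoint.
HOST_UID="${HOST_UID:-1000}"
HOST_GID="${HOST_GID:-1000}"
groupmod -o -g "$HOST_GID" www-data
usermod -o -u "$HOST_UID" www-data
chown -R www-data:www-data /work
exec gosu www-data /opt/MTB/src/starter_s_c.sh "$@"
The container would then be started with something like docker run -e HOST_UID=$(id -u) -e HOST_GID=$(id -g) ..., so the generated report ends up owned by the host user.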
Here is our current entrypoint script, which already needs the permissions to be set so that localmaster.r can generate the report in the host directory:
#!/bin/sh
file="$1"
cd $(dirname $0)/..
if [ $# -eq 0 ]; then
  echo '.libPaths(c("~/lib/R/library", .libPaths())); library(shiny); library(shinyjs); runApp("src")' | R --vanilla
else
  echo "Rscript --vanilla /opt/MTB/src/localmaster.r "$file""
  Rscript --vanilla /opt/MTB/src/localmaster.r "$file"
fi
(If no arguments are given, it starts a shiny app, just to avoid confusion)
Any help or tips would be much appreciated! Thank you.

Elastic file system not able to persist data across different EC2 instances?

I am trying to leverage the power of Elastic Beanstalk with a fresh WordPress install. To keep it stateless, I am trying to use EFS to persist wp-content files between EC2 instances, but for some reason I can't get my EFS setup to persist my wp-content folder.
The following is my efs.config file.
packages:
  yum:
    nfs-utils: []
    jq: []
files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      mkdir -p /mnt/efs
      EFS_NAME=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_NAME')
      mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $EFS_NAME:/ /mnt/efs || true
      mkdir -p /mnt/efs/wp-content
      chown webapp:webapp /mnt/efs/wp-content
      mkdir -p /mnt/efs/wp-content/themes
      chown webapp:webapp /mnt/efs/wp-content/themes
      mkdir -p /mnt/efs/wp-content/plugins
      chown webapp:webapp /mnt/efs/wp-content/plugins
      mkdir -p /mnt/efs/wp-content/uploads
      chown webapp:webapp /mnt/efs/wp-content/uploads
commands:
  01_mount:
    command: "/tmp/mount-efs.sh"
container_commands:
  01-rm-wp-content-uploads:
    command: rm -rf /var/app/ondeck/wp-content
  02-symlink-uploads:
    command: ln -snf /mnt/efs/wp-content /var/app/ondeck/wp-content
It seems like it's mounting and then deleting the files. Each time a new instance is auto-created, I was able to SSH into it and see it mount a wp-content folder, but without my existing files.
Thanks in advance!
Also, would I be able to see what files are in EFS directly from the AWS console?
Thanks.
You need to set up your EFS folder to mount automatically when your instance reboots.
Follow the steps mentioned in this article.
http://docs.aws.amazon.com/efs/latest/ug/mount-fs-auto-mount-onreboot.html
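For reference, the automatic remount usually boils down to an /etc/fstab entry along these lines (the file system DNS name and mount point below are placeholders; the mount options mirror the ones in your script):
fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0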

Change chmod of dir from volume

I'm trying to run a CakePHP 2 app inside a container. I have everything set up and PHP works properly, but I have one problem: /var/www/app/tmp has incorrect write permissions. This directory is loaded from a volume.
Did you already take a look at the CakePHP 2.0 docs? Maybe this is useful:
One common issue is that the app/tmp directories and subdirectories must be writable both by the web server and the command line user. On a UNIX system, if your web server user is different from your command line user, you can run the following commands just once in your project to ensure that permissions will be setup properly:
HTTPDUSER=`ps aux | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1`
setfacl -R -m u:${HTTPDUSER}:rwx app/tmp
setfacl -R -d -m u:${HTTPDUSER}:rwx app/tmp
Source: https://book.cakephp.org/2.0/en/installation.html#permissions
This happens a lot if you're running PHP via a container passthrough. In this scenario, you are passing a directory through to the application with pre-defined permissions. What you'll need to do is periodically make sure permissions are being updated for the web server user inside the container. Let's say your container is called web:
docker exec web chown -R www-data /var/www/html
(/var/www/html being replaced with wherever your code resides)
This will make it work perfectly fine inside the container, but may actually cause issues accessing the data from the host OS if you're using Linux. I had this issue several times with Laravel and PHP using a volume passthrough from the host, since the volume's files end up owned by a user ID that the host OS doesn't have.
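Another option, if you want new files in the volume to stay readable from the host, is to run the container with your host UID/GID (a sketch; the image name and paths are placeholders, and the image must not rely on running as root):
docker run --user "$(id -u):$(id -g)" -v "$PWD/app":/var/www/app my-cakephp-image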
