I've taken over some DAGs left by some consultants. All I have is admin access to view the webserver at "https://airflow.companyname.com".
When I go into the Configuration page, it shows something like this:
# The home folder for airflow, default is ~/airflow
airflow_home = /usr/local/airflow
I do not have access to any of those local files.
Is there a way I can connect to this Airflow instance and edit the DAGs?
Drupal 9 requires that the private file directory be set by editing the Drupal settings.php file. However, when I deploy the Drupal by Bitnami container (https://hub.docker.com/r/bitnami/drupal/) on AWS ECS Fargate, I am unable to SSH in to the machine and update the settings.php file.
I'm aware that one solution may be to configure ECS Exec (https://aws.amazon.com/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/) so that I can tunnel into the container and edit the settings.php file. I'm wondering if that's the ideal solution, or if there is some way to use Docker/ECS commands to potentially:
Copy a custom settings.php file from an outside source (local, or S3)
or
Append text to the settings.php file this container creates after site installation
or
Something I'm not considering
The ultimate goal would be to set a private file directory which I can mount to an EFS file system. So, if there's an EFS-related shortcut I'm missing here, that could also prove useful.
I tried looking through the Drupal by Bitnami documentation to see if there was an environment variable or parameter I could set for the private file directory, but I did not see anything.
Thanks in advance,
Jordan
Trying to modify the settings after the container is running is the wrong way to do something like this. That method doesn't scale, and doesn't work in any sort of automated/fault-tolerant/highly-available environment. You shouldn't think of a running docker container as a server that you can then modify after the fact.
If you absolutely want to change the volume path in settings.php, then you probably need to create a custom Docker image, using this Bitnami image as the base, and as part of the Dockerfile for that custom image you would either copy a new settings.php file in or modify the existing file.
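For example, a minimal Dockerfile sketch of that approach might look like the following; custom-settings.php is a hypothetical file in your build context, and the target path is an assumption based on where the Bitnami image normally installs Drupal, so verify it against the image you are actually using:

# Hypothetical custom image built on top of the Bitnami Drupal image
FROM bitnami/drupal:latest
# Overwrite the shipped settings.php with a pre-configured copy.
# Verify the real settings.php location inside the image before building.
COPY custom-settings.php /opt/bitnami/drupal/sites/default/settings.php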
However, I'm confused why you need to edit settings.php to change the path. The documentation for the image you are using clearly states that you need to create a persistent volume that is mapped to the /bitnami/drupal path. There's no need to change that to anything else; just configure your ECS Fargate task definition to map an EFS volume into the container at /bitnami/drupal.
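In the task definition, that mapping is just a volume with an efsVolumeConfiguration plus a mountPoint on the container; a trimmed sketch, with a placeholder file system ID and container name:

{
  "volumes": [
    {
      "name": "drupal-data",
      "efsVolumeConfiguration": { "fileSystemId": "fs-12345678" }
    }
  ],
  "containerDefinitions": [
    {
      "name": "drupal",
      "mountPoints": [
        { "sourceVolume": "drupal-data", "containerPath": "/bitnami/drupal" }
      ]
    }
  ]
}

Everything else in the task definition (image, CPU/memory, networking) stays as you already have it; note that EFS volumes on Fargate require platform version 1.4.0 or later.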
I have a WooCommerce website and use a dropshipping plugin. Because I am on a shared server, I tried to set up a cron job to reduce the CPU usage of my site.
I checked the dropshipping plugin and they have 5 scheduled cron jobs, and I wanted to add that schedule to my setup, but I am not sure if I did it correctly.
So they have the following structure in my cpanel:
public_html/wp-content/plugins/bdroppy/src/CronJob/Jobs
In the CronJob folder they have a Jobs folder and one file, CronJob.php.
Under the Jobs folder they have 6 PHP files.
So I added a task for each file in the CronJob/Jobs folder to my SiteGround account:
php /home/******/www/ *********/public_html/wp-content/plugins/bdroppy/src/CronJob/Jobs/QueueJob.php
but I got a "Could not open input file" error for each task.
Should I add CronJob.php also to the cron job command?
Hope I explained properly and thanks for any help! :)
As far as I understand, you have disabled WP-Cron for performance reasons, and instead you want to run wp-cron.php directly from your shared server. Running wp-cron.php will take care of executing any events that have been scheduled using the wp_schedule_event function, including the events of the dropshipping plugin you are using. In fact, the plugin registers its events in the admin_footer hook, so as long as you have activated the plugin, you don't need to do anything else.
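For reference, scheduling with wp_schedule_event generally looks like the sketch below; the hook name is one of the plugin's real hooks, but the hourly interval and the surrounding code are just an illustrative assumption, not the plugin's actual implementation:

<?php
// Register a recurring event unless it is already scheduled.
// 'hourly' is an assumed interval; the plugin may define its own.
if ( ! wp_next_scheduled( 'bdroppy_queues_event' ) ) {
    wp_schedule_event( time(), 'hourly', 'bdroppy_queues_event' );
}

Events registered this way only fire when something triggers WordPress cron, which is exactly what running wp-cron.php does.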
The correct command for the cron job (to set up on your server) is:
php -q /home/******/www/ *********/public_html/wp-cron.php
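If your host expects a full crontab entry rather than separate schedule fields, the complete line would look something like this, using the same path as above; the 30-minute interval is only an example, so pick whatever frequency suits your site:

*/30 * * * * php -q /home/******/www/ *********/public_html/wp-cron.php >/dev/null 2>&1

The redirect at the end just stops the shared host from emailing you the output on every run.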
In order to check that the plugin events (i.e. bdroppy_queues_event, bdroppy_change_catalog_event, bdroppy_update_catalog_event, bdroppy_update_product_event and bdroppy_sync_order_event) have been correctly scheduled, you can either use a dedicated plugin (e.g. WP Crontrol), or use the WP command line interface (wp-cli), which should be available on your shared server too.
If you want to use wp-cli, you need to:
ssh into your shared server;
navigate to the public_html directory;
run this command: wp cron event list
For more information on how to use wp-cli to list scheduled cron events, check its documentation.
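As a concrete sketch, assuming wp-cli is on your PATH and you start from your home directory on the shared server:

cd public_html                            # or wherever your WordPress root (the folder with wp-config.php) lives
wp cron event list                        # shows each hook with its next run time and recurrence
wp cron event run bdroppy_queues_event    # optionally trigger one of the plugin's events by hand to verify it works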
I am new to HTML/CSS and I'm trying to use Amplify to host my static site. I can easily use the "manual deploy" option and upload a .zip file.
What is the preferred method of using Gitlab with Amplify so that changes can be easily made?
My goal is to have everything in a repository that's not zipped so I can make constant changes.
You'll want to set up hosting for your website using the Hosting with Amplify Console option, which provides a git-based workflow for building, deploying, and hosting your website direct from source control.
You can trigger this workflow by running amplify add hosting in your project directory. Next, select the Hosting with Amplify Console option. Then, select the Continuous deployment option. This will open a browser window and take you to your amplify-project homepage. Here, click on the Frontend Environments tab, select your repository provider, and click "Connect branch". You will have to follow the steps to authorize Amplify to access your repository. Once your provider is authorized, you should see a dropdown menu with a list of your repositories. Select the appropriate repository and branch and click "Next".
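The CLI flow looks roughly like this; the exact prompt wording varies between Amplify CLI versions:

amplify add hosting
? Select the plugin module to execute
> Hosting with Amplify Console (Managed hosting with custom domains, Continuous deployment)
? Choose a type
> Continuous deployment (Git-based deployments)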
Next, confirm your build settings. Assuming your website resides at the root of your project directory, your build settings should likely look something like the following:
version: 1
frontend:
  phases:
    # IMPORTANT - Please verify your build commands
    build:
      commands: []
  artifacts:
    # IMPORTANT - Please verify your build output directory
    baseDirectory: /
    files:
      - '**/*'
  cache:
    paths: []
Again, click "Next". On the next screen, choose "Save and Deploy". Assuming everything was configured correctly, Amplify should now clone your repository and deploy it. You can confirm that the process executed properly by following the link provided by the CLI. From now on, every time you push changes to your website to GitLab, Amplify will redeploy it automatically.
See docs for further info!
I searched thoroughly and could not find any info on this issue.
Currently, I am building a WordPress site, hosted on Google Cloud's Compute Engine. The goal is to build an autoscaling WP site. I am currently attempting to build the instance template, which I can then use to deploy an instance group and subsequently set up the HTTP load balancer.
I am using a base Ubuntu 16.04 image, and from there I installed the rest of the LEMP stack.
I was successfully able to use the SQL proxy to connect to Cloud SQL so that all of the ephemeral instances to be deployed will share the same database. To have the proxy initiated each time a new instance is spun up, I am using a startup script:
./cloud_sql_proxy -dir=/cloudsql &
But here's my issue right now: the WordPress files. I can't figure out a way to have all of the instances use the same WP files. I already tried to use Google Cloud Storage FUSE, but when I mount the bucket at the root directory, it deletes all of the WordPress folders (wp-content, wp-admin, wp-includes).
It's as if the instance can't "see" the folders I have in the google cloud storage. Here was my work flow:
setup LEMP with WordPress--got the site working on the front end--all systems go
copied all of the files/folders in my root directory (/var/www/html/) to google cloud bucket
gsutil -m cp -r /var/www/html/. gs://example-bucket/
Then I mounted the bucket at my directory
gcsfuse --dir-mode "777" -o allow_other example-bucket /var/www/html/
But now, when I "ls" inside of the root directory from the Linux terminal, I DON'T see any of the WP folders (wp-includes, wp-admin, wp-content).
My end goal is to have a startup script that:
initiates the DB proxy so all WP instances can share the same DB
initiates the mounting of the google cloud bucket so all instances can share the same WP files
Obviously, that isn't working right now.
What is going wrong here? I am also open to other ideas for how to make the WP files "persistent" and "shared" across all of the ephemeral instances that the instance group spins up.
Try:
sudo nano /etc/rc.local
In the file, after the line:
# By default this script does nothing...
add the gcsfuse mount, keeping exit 0 as the last line:
gcsfuse --dir-mode=777 -o allow_other [bucket-name] /var/www/html/[folder-name, if one applies]
exit 0
This will mount your bucket in every scaled instance.
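If you would rather do this from the instance template's startup script (the approach the question describes) instead of rc.local, a minimal sketch reusing the two commands from the question could look like this; adjust flags, paths, and the bucket name to your own setup:

#!/bin/bash
# Start the Cloud SQL proxy, same invocation as in the question
# (add your -instances flag or other options as in your existing setup).
mkdir -p /cloudsql
./cloud_sql_proxy -dir=/cloudsql &
# Mount the shared bucket containing the WordPress files.
# If directories copied with gsutil do not show up in the mount,
# gcsfuse's --implicit-dirs flag may help.
gcsfuse --dir-mode "777" -o allow_other example-bucket /var/www/html/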
Hope this helps somebody; I realize your question is more than 6 months old. :)
Well, there should not be a problem with giving each instance its own WordPress files, unless those files will be modified by the instance.
Each instance can have its own copy of the WordPress files and all instances can share a common database. You can host the media files such as images and videos, as well as JavaScript and CSS files, on a Content Delivery Network (CDN).
I have a git repo that includes submodules for wordpress and a wordpress theme. I am trying to configure this so that I can run "git pull" on the server whenever there is a change, to update the files from the repo. The problem I am having is that after I do a git pull, I end up with a 500 error on the front end and my server logs saying "file is writeable by group". Basically, I need all of the files to have the permissions of "0755" and to stay that way after I update them with git. How can I set this up correctly?
Check out the documentation on filemode. In your repository under .git/, the config file has a section starting with [core]. If you set fileMode to false, Git will ignore permission (mode) changes on the files and stop resetting them. Then, you can just update them to the right permissions and leave them alone.
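A quick sketch of that, run from the repository root on the server; the 755 value matches the requirement in the question:

# ignore file mode changes in this repository
git config core.fileMode false
# apply the same setting inside each submodule
git submodule foreach git config core.fileMode false
# then fix the permissions once and leave them alone
chmod -R 755 .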
Note that you could run into other permissions errors if you are having git run as a separate user (we do this with a git user who runs automated updates). Just something to be aware of as you set things up.