Google App Engine Flex: WordPress Plugin's Read/Write File Permissions

We are trying to see if Google App Engine will be a good fit for our WordPress sites. I just ran into an issue with a plugin that needs a folder with read/write/execute permissions.
"All in One WP Migration is not able to create /app/wordpress/wp-content/plugins/all-in-one-wp-migration/storage folder. You will need to create this folder and grant it read/write/execute permissions (0777) for the All in One WP Migration plugin to function properly."
I noticed that in order to upload media files you need to activate the Google Cloud Storage plugin. That takes care of media, but how should I handle plugins and other file I/O?
I thought using Flex instead of Standard would fix this.
app.yaml
runtime: php
env: flex
beta_settings:
  cloud_sql_instances: my-project:us-east4:test-instance
runtime_config:
  document_root: wordpress
env_variables:
  WHITELIST_FUNCTIONS: escapeshellarg,escapeshellcmd,exec,pclose,popen,shell_exec,phpversion,php_uname
php.ini
extension=bcmath.so
extension=gd.so
zend_extension=opcache.so
short_open_tag=On
google_app_engine.disable_readonly_filesystem = 1
EDIT:
I found something to put in app.yaml; however, I don't know whether it should be used in production.
In runtime_config I added:
skip_lockdown_document_root: true
I'd like to know if this is acceptable to put on a live site.
I also put:
handlers:
- url: /(.*\.(htm|html|css|js))$
  static_files: wordpress/\1
  upload: wordpress/.*\.(htm|html|css|js)$
  application_readable: true
- url: /wp-content/(.*\.(ico|jpg|jpeg|png|gif|woff|ttf|otf|eot|svg))$
  static_files: wordpress/wp-content/\1
  upload: wordpress/wp-content/.*\.(ico|jpg|jpeg|png|gif|woff|ttf|otf|eot|svg)$
  application_readable: true
- url: /(.*\.(ico|jpg|jpeg|png|gif|woff|ttf|otf|eot|svg))$
  static_files: wordpress/\1
  upload: wordpress/.*\.(ico|jpg|jpeg|png|gif|woff|ttf|otf|eot|svg)$
  application_readable: true
- url: /wp-includes/images/media/(.*\.(ico|jpg|jpeg|png|gif|woff|ttf|otf|eot|svg))$
  static_files: wordpress/wp-includes/images/media/\1
  upload: wordpress/wp-includes/images/media/.*\.(ico|jpg|jpeg|png|gif|woff|ttf|otf|eot|svg)$
  application_readable: true
- url: /wp-admin/(.+)
  script: wordpress/wp-admin/\1
  secure: always
- url: /wp-admin/
  script: wordpress/wp-admin/index.php
  secure: always
- url: /wp-login.php
  script: wordpress/wp-login.php
  secure: always
- url: /wp-cron.php
  script: wordpress/wp-cron.php
  login: admin
- url: /xmlrpc.php
  script: wordpress/xmlrpc.php
- url: /wp-(.+).php
  script: wordpress/wp-\1.php
- url: /(.+)?/?
  script: wordpress/index.php

If you want to use standard, there's the limitation that GAE apps can't write to the filesystem.
If the plugin requires writing to the filesystem, then you should use flex.
Even when using flex, whatever is written to the filesystem is not persisted: the only current storage option is to create a ramdisk in the instance, and data stored there is not shared among instances and is lost when an instance dies.
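For reference, that per-instance ramdisk corresponds to a tmpfs volume declared under resources in the flex app.yaml; a minimal sketch (the sizes are illustrative, and the volume is still local to each instance):
resources:
  cpu: 1
  memory_gb: 2
  volumes:
  # tmpfs volume, i.e. a ramdisk visible only to this instance
  - name: ramdisk1
    volume_type: tmpfs
    size_gb: 0.5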
There seems to be a workaround that uses gcsfuse to mount somewhat persistent storage in GAE flex, but I would not suggest it, as you would run into concurrent-write issues.
To summarize, if you need to read and write data in persistent storage shared between all instances, GAE is not the solution for you. After all, the whole point of serverless is that executions don't rely on local state. If your app depends on locally stored files (as this WordPress plugin seems to do), then the result of a request will depend on which instance happens to handle it.

Related

Unable to load the CSS and JS link in laravel on google cloud

Hi all, I just deployed my Laravel project on Google Cloud, but the CSS and JS are not working.
Here's my app.yaml:
runtime: php73
handlers:
- url: /public/assets
  static_dir: assets
env_variables:
  ## Put production environment variables here.
  APP_KEY: Already Get this from .env
  APP_STORAGE: /tmp
  VIEW_COMPILED_PATH: /tmp
  SESSION_DRIVER: cookie
Can anyone help me make it work?
Here's one of the links from my head.php file:
<link rel="stylesheet" href="{{ asset('assets') }}/css/bootstrap.css">
You may have these backwards. Instead of:
handlers:
- url: /public/assets
  static_dir: assets
Shouldn't it be:
handlers:
- url: /assets
  static_dir: public/assets
because the URL you are hitting doesn't include /public. I would need to see your directory tree to say for sure.
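Assuming the compiled assets really do live in public/assets, a sketch of the corrected app.yaml with everything else from the question kept as-is (the APP_KEY value is a placeholder):
runtime: php73
handlers:
# serve /assets/... directly from the public/assets directory
- url: /assets
  static_dir: public/assets
env_variables:
  APP_KEY: your-production-app-key
  APP_STORAGE: /tmp
  VIEW_COMPILED_PATH: /tmp
  SESSION_DRIVER: cookie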

Use Google Cloud Build to deploy Firebase without Firebase CI token

In my package.json I have this line:
{
  "deploy:ci": "firebase deploy --force --only functions --token \"$SECRET\""
}
And my cloudbuild.yaml:
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "--tag", "gcr.io/$PROJECT_ID/functions", "."]
- name: "gcr.io/$PROJECT_ID/functions"
  args: ["yarn", "deploy:ci"]
  secretEnv: ["FIREBASE_TOKEN"]
secrets:
- kmsKeyName: projects/myproject/locations/global/keyRings/enviroment/cryptoKeys/firebase
  secretEnv:
    FIREBASE_TOKEN: VERY_LONG_UNGLY_AND_BORING_BASE64_STRING
I want to know if it is possible to grant Cloud Build some "special" permissions in order to allow the deployment without this FIREBASE_TOKEN.
(all files are in the same project)
Yes, it is possible, but there are a couple of things you need to do.
Assuming your cloudbuild.yaml looks like this:
steps:
- name: "gcr.io/cloud-builders/npm"
  args: ["install"]
- name: "gcr.io/cloud-builders/npm"
  args: ["run", "build"]
- name: "gcr.io/cloud-builders/firebase"
  args: ["deploy"]
You need to build and upload your own firebase builder image first. I assume you are already familiar with this; otherwise, I wrote a similar post about how to do that part anyway...
After that, the IAM role you are asking for is Firebase Admin; you need to assign it to your ...@cloudbuild.gserviceaccount.com account.
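For reference, a sketch of that role grant with gcloud (the project ID and project number below are placeholders for your own values):
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:123456789@cloudbuild.gserviceaccount.com" \
  --role="roles/firebase.admin"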
Voilà! You can test it like this (using the SDK):
gcloud builds submit --config cloudbuild.yaml .
Of course, point it at your own file location.
Opinionated comment: I'm not a big fan of this approach; I tried it a few times and there were always some issues with it. But well, that's why it's called an opinionated comment :)
Good luck.

Issue in publishing swagger doc for google cloud endpoints

I am trying to publish Swagger documentation for my cloud endpoints. My application is in Python. As per the documentation [1]:
1.) I downloaded the repository from https://github.com/swagger-api/swagger-ui.git.
2.) Copied the dist directory from the downloaded repository and placed it into the docs folder inside my project (I created a new folder named docs).
3.) As per step 3, the project name should be the FQDN. The entry I have in openapi.yaml is: host: "sample-project-******.appspot.com"
4.) As given in step 4 of the documentation, I added the following to app.yaml:
handlers:
- url: /docs
  static_files: docs/index.html
  upload: docs/index.html
- url: /docs/(.*)
  static_files: docs/\1
  upload: docs/.*
5.) Updated the URL entry in docs/index.html with the following:
url = "../api-docs";
6.) Added the following to openapi.yaml:
"/docs":
get:
description: "Documentation"
operationId: "docs"
produces:
- "application/json"
responses:
200:
description: "Documentation"
Questions:
What is the significance of adding url = "../api-docs" in step 5?
When I hit /docs, I get a 404.
[1] https://cloud.google.com/endpoints/docs/openapi/adding-swagger
In regards to adding "url", from the documentation at https://cloud.google.com/endpoints/docs/openapi/adding-swagger:
This directs the Swagger UI to retrieve your OpenApi spec from this URL. Add a handler for this path which reads your openapi.yaml and serves it as json.
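One simple way to satisfy that handler requirement, sketched here under the assumption that you convert openapi.yaml to an openapi.json file before deploying (the filename and the pre-conversion step are my assumptions, not from the documentation), is a static handler with an explicit MIME type; the Swagger UI's url = "../api-docs", relative to /docs/index.html, resolves to /api-docs and would then find it:
# serve the pre-converted spec as JSON at /api-docs
- url: /api-docs
  static_files: openapi.json
  upload: openapi.json
  mime_type: application/json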
Also, for exposing your Swagger documentation, you might want to consider that Endpoints is working on a developer portal feature and looking for users: https://cloudplatform.googleblog.com/2018/03/now-you-can-automatically-document-your-API-with-Cloud-Endpoints.html

How to transfer file only when it changed in salt?

I am using the following approach to deliver a bundled software project to Salt minions:
proj-archive:
  cmd:
    - run
    - name: "/bin/tar -zxf /home/myhome/Proj.tgz -C {{ proj_dir }}"
    - require:
      - file: /home/myhome/Proj.tgz
      - {{ proj_dir }}
  file:
    - managed
    - user: someone
    - group: someone
    - mode: '0600'
    - makedirs: True
    - name: /home/myhome/Proj.tgz
    - source: salt://Proj.tgz
As far as I can tell, it does the job, but these rules are always active, even when the archive has not changed. This adds unnecessary delays to deployment. In a similar situation, for example a service restart with a watch clause on a file, it is possible to act only when the file has changed. How do I tell Salt to copy the file over the network only when it has changed? Is there an automatic way to do it?
The Proj.tgz in the Salt directory is a symlink to the file's location, if that matters.
The archive.extracted state is not that useful here, because it does not trigger when the changes are inside files and no files are added to or removed from the archive.
Some relevant info: https://github.com/saltstack/salt/issues/40484 , but I am unsure of the resolution / workaround.
You can replace both states with salt.states.archive. It might look like this:
proj-archive:
  archive.extracted:
    - name: {{ proj_dir }}
    - source: salt://Proj.tgz
    - user: someone
    - group: someone
    - source_hash_update: True
The key feature here is source_hash_update. From the docs:
Set this to True if archive should be extracted if source_hash has changed. This would extract regardless of the if_missing parameter.
I'm not sure whether the archive gets transferred on each state.apply, but I would guess it does not.
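If you would rather keep the two-state layout from the question, another option (the onchanges requisite, which is not what the answer above uses) is to leave file.managed in charge of the transfer and only run the extraction command when the managed archive actually changed; a sketch:
proj-archive:
  file.managed:
    - name: /home/myhome/Proj.tgz
    - source: salt://Proj.tgz
    - user: someone
    - group: someone
    - mode: '0600'
    - makedirs: True
  cmd.run:
    - name: "/bin/tar -zxf /home/myhome/Proj.tgz -C {{ proj_dir }}"
    # only runs when the file.managed state above reports changes
    - onchanges:
      - file: /home/myhome/Proj.tgz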

Google App Engine modules: routing second module to subdirectory

Has anyone run two different WordPress installations as separate modules inside Google App Engine?
I have the following:
/app/
- wordpress1
- wordpress2
- app.yaml
- second.yaml
- dispatch.yaml
- php.ini
Inside wordpress1 and wordpress2 are somewhat clean installations of WordPress, with some GAE helper plugins.
app.yaml contains the default module config, which redirects traffic to wordpress1 using URL handlers.
second.yaml contains the second module config (module: second) and redirects traffic to wordpress2.
In dispatch.yaml I only check for a subdir second to load the second.yaml config:
dispatch:
- url: "*/second*"
  module: second
Everything is fine and dandy:
http://localhost.dev:8080 -> wordpress1/index.php
http://localhost.dev:8080/second/ -> wordpress2/index.php
But I can't seem to work out how to set the edge cases:
http://localhost.dev:8080/secondwithextra -> dispatcher error (no URL set)
http://localhost.dev:8080/second (missing trailing slash) -> same as above
I tried to add the following to second.yaml handlers:
- url: /second[^/].+/?
  script: wordpress1/index.php  # Reroute to `wordpress1` because not a directory match.
But that didn't really work out.
How can I make the second module accept request URI /second, /second/, /second/abc but not /secondxyz?
Having the dispatch.yaml URL glob set to */second/* breaks the slashless /second.
I think you could try to add both */second/* and /second in dispatch.yaml.
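A sketch of that suggestion in dispatch.yaml (the slashless rule is written as */second here, since dispatch URL patterns start with a host glob; this is my reading and is untested):
dispatch:
# /second without a trailing slash
- url: "*/second"
  module: second
# /second/ and anything below it
- url: "*/second/*"
  module: second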
