I have a site based on WordPress 4.9.6 that is deployed using AWS ECS, with EFS used to store WordPress' files. For the database I use an AWS Aurora MySQL 5.7 compatible instance. I have also set up an application load balancer in front of the containerized WordPress instances. (More specifics on the setup below.)
Problem overview
This setup does seem to work in most cases, i.e. I am able to do GET requests on the site, I can log in, see the dashboard, and often do updates successfully. The problem I face is that just as often my update attempts result in a 502 Bad Gateway response when I commit my update, i.e. doing POST /wp-admin/post.php.
Specifics
Setup
First, my DB instance writer is populated with a dump of my local dev database. The URL entries pointing back to the site itself, e.g. siteurl and home in the options table, have https values.
The EFS was created with performance mode General Purpose. I originally tried mode Max I/O, but resources suggested using the former instead. However, toggling the performance mode has not changed the frequency of 502 errors.
My Amazon Linux ECS cluster instances are provisioned to mount an EFS volume using NFSv4.1 as suggested by AWS (mount options nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2).
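For reference, the full mount command from the EFS guide looks roughly like this (the file system ID, region, and mount point below are placeholders, not my actual values):
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-12345678.efs.eu-west-1.amazonaws.com:/ /mnt/efs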
I have a Docker image derived from the official WordPress image, in which I update the original image's /usr/src/wordpress with my custom content. The custom content includes, among other things, an updated wp-config.php in which I set $_SERVER['HTTPS']='on';
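For what it's worth, a common variant of that override behind a load balancer keys off the X-Forwarded-Proto header the ALB sets, rather than hard-coding the value; a minimal sketch:
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on'; // only force https when the ALB terminated TLS
}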
Putting the pieces together, my ECS task definition uses my custom Docker image and mounts a directory on my EFS drive as a Docker volume in the container. The bottom line is that when I create an ECS service from the task definition, the container spins up and writes to /var/www/html, which I can subsequently see on my EFS drive. This all seems fine and dandy.
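To illustrate the shape of such a task definition (all names, paths, and the image URI below are hypothetical, not my actual values):
{
  "family": "wordpress",
  "volumes": [
    { "name": "wp-html", "host": { "sourcePath": "/mnt/efs/wordpress" } }
  ],
  "containerDefinitions": [
    {
      "name": "wordpress",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-wordpress:latest",
      "memory": 512,
      "portMappings": [ { "containerPort": 80 } ],
      "mountPoints": [
        { "sourceVolume": "wp-html", "containerPath": "/var/www/html" }
      ]
    }
  ]
}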
My containerized WordPress instances are then successfully registered in a target group I've previously set up for an application load balancer.
I can then access my site over https. If I try to use http, I am redirected to https as planned. I can open the landing page, and after logging in I try to edit it.
Problem
This is where I face the real problem. Often, but not always, when I make an update on the landing page and click the Update button, I get a 502 Bad Gateway response to the POST /wp-admin/post.php request. Just as often, when I start editing and GET /wp-admin/post.php?post=2&action=edit is requested, I get the same 502.
I don't see much pattern in when I do and do not get the 502. I have tried to update both textual contents as well as adding images to the landing page. The 502 happens sometimes but not always in either case.
I also tried to remedy the problem, as I suspected it had to do with the use of EFS and synchronization problems between the two ECS instances I had set up for the test. The following attempts were made, without significant improvement:
Add mount option sync as suggested by the EFS user guide in order to avoid caching on the ECS instance
Increase the Idle timeout of the load balancer
Finally I reduced the number of ECS service tasks from 2 to 1 but the problem still persists.
When I hit the 502, I often (possibly always) see the following error message in the browser console: The character encoding of the HTML document was not declared. The document will render with garbled text in some browser configurations if the document contains characters from outside the US-ASCII range. The character encoding of the page must be declared in the document or in the transfer protocol.
So, does anyone have a clue about what to try next and where to look for indications of the reason for the problem?
It looks like following this:
https://wordpress.org/support/article/editing-wp-config-php/#increasing-memory-allocated-to-php
helped resolve my issue; the timeouts that used to occur after the usual period have stopped.
TL;DR:
During the Dockerfile build process, I added the following lines
# Copy php.ini Over to Container
COPY php.ini /usr/local/etc/php/
COPY phpinfo.php /var/www/html/wp-admin/
COPY .htaccess /var/www/html
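As an aside, if you'd rather not replace php.ini wholesale, the official image also picks up drop-in .ini files from /usr/local/etc/php/conf.d/, so a line like the following (uploads.ini being a hypothetical file holding just the overrides) should work too:
COPY uploads.ini /usr/local/etc/php/conf.d/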
php.ini
upload_max_filesize = 500M
post_max_size = 256M
memory_limit = 256M
max_execution_time = 1800
max_input_time = 180
max_input_vars = 5000
(Note that post_max_size caps the entire request body, so with these values uploads are effectively limited to 256M despite the larger upload_max_filesize.)
.htaccess
php_value upload_max_filesize 500M
php_value post_max_size 256M
php_value memory_limit 256M
phpinfo.php
This is accessible by navigating to /wp-admin/phpinfo.php.
It will give you a summary of the current PHP configuration so you can see whether the changes have been applied; it's also great for troubleshooting (remember to remove it once you're done, since it exposes server details).
<?php
phpinfo();
?>
You could either update your task definitions with the new container image pushed to ECR (not sure if you've automated this?),
or just access the EC2 instances via a bastion host, docker exec -it into the container, and edit the files manually.
Seeing that you have an EFS volume, the files will persist.
This helped my timeout issue.
Let me know how you go!
I'm getting this error on an avatar upload on my site. I've never gotten it before, and nothing was changed recently that would explain why I'm starting to get this error...
Warning: is_writable() [function.is-writable]:
open_basedir restriction in effect.
File(/) is not within the allowed path(s):
Modify the open_basedir setting in your hosting account and set it to none. Find the open_basedir setting under the 'PHP Settings' area of your Plesk/cPanel and set it to 'none' from the dropdown given there.
To resolve this error, you must edit the file httpd.conf.
Its path can be seen in phpinfo() in the apache2handler section, under the Server Root directive.
For example, in my case it is /etc/httpd/httpd.conf.
Open httpd.conf, find the open_basedir parameter, and set it to none. (php_admin_value open_basedir none)
If you're running the script with php file.php, you need to edit php.ini.
Find the file:
locate php.ini
/etc/php/php.ini
And append your file's path to the open_basedir property:
open_basedir = /srv/http/:/home/:/tmp/:/usr/share/pear/:/usr/share/webapps/:/etc/webapps/:/run/media/andrew/ext4/protected
(Entries ending in a slash limit access to that directory and its subdirectories; without the trailing slash, an entry acts as a prefix, so /tmp would also match /tmp_foo and the like.)
For me the problem was bad/missing config values for the Plesk server running the whole thing.
I just followed the directions here:
http://davidseah.com/blog/2007/04/separate-php-error-logs-for-multiple-domains-with-plesk/
You can configure PHP to have a separate error log file for each VirtualHost definition. The trick is knowing exactly how to set it up, because you can’t touch the configuration directly without breaking Plesk.
Every domain name on your (dv) has its own directory in /var/www/vhosts. A typical directory has the following top level directories:
cgi-bin/
conf/
error_docs/
httpdocs/
httpsdocs/
...and so on
You’ll want to create a vhost.conf file in the domain directory’s conf/ folder with the following lines:
php_value error_log /path/to/error_log
php_flag display_errors off
php_value error_reporting 6143
php_flag log_errors on
Change the first value to match your actual installation (I used /tmp/phperrors.log). After you’re done editing the vhost.conf file, test the configuration from the console with:
apachectl configtest
…or if you don’t have apachectl (as Plesk 8.6 doesn’t seem to)…
/etc/init.d/httpd configtest
And finally tell Plesk that you’ve made this change.
/usr/local/psa/admin/bin/websrvmng -a
Laravel
If you have this problem when using Laravel,
just go to the bootstrap/cache folder, rename config.php to anything you want, and reload the site.
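If you have console access, Artisan can do the same cleanup for you; config:clear simply deletes the cached bootstrap/cache/config.php:
php artisan config:clear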
If you use ispconfig3:
Go to the Website section -> Options -> PHP open_basedir:
This field lists the allowed paths, each separated
with ":"
/var/www/clients/client2/web3/image:/var/www/clients/client2/web3/web:/var/www/...
and so on
So here you must put the path you want to have access to; in my case it is:
/var/www/clients/client2/web3/image:
The problem appears because:
When a script tries to access the filesystem, for example using include, or fopen(), the location of the file is checked. When the file is outside the specified directory-tree, PHP will refuse to access it.
The path you're referring to is incorrect, and not within the directory root of your workspace. Try building an absolute path to the file you want to access, where you are now probably using a relative path...
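For example, a minimal sketch in PHP (data/config.txt is a hypothetical file, purely for illustration):
<?php
// Build the path from the script's own directory instead of relying
// on the current working directory.
$path = __DIR__ . '/data/config.txt';
if (is_readable($path)) {
    $contents = file_get_contents($path);
}
?>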
If you have this kind of problem with ispconfig3 and get an error like this:
open_basedir restriction in effect.
File(/var/www/clients/client7/web15) is not within the allowed
path(s):.........
To solve it (in my case), just set PHP to SuPHP in the Website panel of ispconfig3.
Hope it helps someone :)
I had this problem at one of my WordPress sites after updating and/or moving :)
Check the 'upload_path' entry in the database table 'wp_options' and edit it properly...
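For example, from a MySQL console (assuming the default wp_ table prefix; an empty value or 'wp-content/uploads' is the usual default):
SELECT option_value FROM wp_options WHERE option_name = 'upload_path';
UPDATE wp_options SET option_value = 'wp-content/uploads' WHERE option_name = 'upload_path';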
For Plesk, you can change or set the open_basedir settings via the panel:
https://support.plesk.com/hc/en-us/articles/360006170513-How-to-add-custom-or-additional-path-to-the-open-basedir-option-for-Plesk-domain-
Edit the php.ini or .user.ini located within the main directory:
open_basedir = none
If you are running a PHP IIS stack and have this error, it is usually a quick permission fix.
If you administer the windows server yourself and have access, try this FIRST:
Navigate to the folder that is giving you grief on writing, right-click it > Properties > Security.
See which users have access to the folder, which ones have read-only and which have full access. Is a group blocking write access?
The fix will be specific to your IIS setup, are you using Anonymous Authentication with specific user IUSR or with the Application Pool identity?
At any rate, you are going to end up adding new full write permissions for one of IUSR, IIS_IUSRS, or your application pool identity. Like I said, this will vary depending on your setup and how you want to do it; you can go down the Google rabbit hole on this one (one such post: IIS_IUSRS and IUSR permissions in IIS8). For me, I use anonymous authentication with my app pool identity, so I can get away with MACHINE_NAME\IIS_IUSRS having full read/write on any temp or upload folders.
I do not need to add anything extra to my open_basedir = in the php.ini.
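For example, from an elevated command prompt (the folder path here is hypothetical; adjust it to your own uploads directory), this grants IIS_IUSRS modify rights that inherit to subfolders and files:
icacls "C:\inetpub\wwwroot\uploads" /grant "IIS_IUSRS:(OI)(CI)M"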
In addition to @yogihosting's answer, if you are using DirectAdmin, then follow these steps:
Go to the DirectAdmin login page. Usually its port is 2222.
Log in as administrator. The username is admin by default.
From the "Access Level" on the right panel, make sure you are on "Admin Level". If not, change to it.
From the "Extra Features" section, click on "Custom HTTPD Configurations".
Choose the domain you want to change.
Enter the configurations you want to change in the textarea at the top of the page. You should consider the existing configuration file and modify values based on it. For example, if you see that open_basedir is set inside a <Directory>, maybe you should surround your change in the related <Directory> tag:
<Directory "/path/to/directory">
php_admin_value open_basedir none
</Directory>
After making necessary changes, click on the "Save" button.
You should now see your changes saved to the configuration file if they were valid.
There is another way of editing the configuration file, however:
Caution: Be careful, and use the following steps at your own risk, as you may run into errors or cause downtime. The recommended way is the previous one, as it prevents you from modifying the configuration file improperly and shows you the error.
Login to your server as root.
Go to /usr/local/directadmin/data/users. From the listed users, go to one related to the domain you want to change.
Here, there is an httpd.conf file. Make a backup from it:
cp httpd.conf httpd.conf.back
Now edit the configuration file with your editor of choice. For example, change the existing open_basedir value to none. Do not try to remove things, or you may experience downtime. Save the file after editing.
Restart the Apache web server using one of the following ways (use sudo if needed):
httpd -k graceful
apachectl -k graceful
apache2 -k graceful
If you encounter any errors, then replace the main configuration file with the backed-up file, and restart the web server.
Again, the first solution is the preferred one, and you should not try the second method first. As noted in the caution, the advantage of the first way is that it prevents you from saving badly configured settings.
Hope it helps!
I am using an Apache vhost file to run PHP with application-specific ini options on my Windows server. For this I use the -d option of the php command.
I am setting the open_basedir for every application as one of these options.
I needed to set multiple paths in open_basedir, including a UNC path, and the syntax for this case was a bit hard to find. You have to separate the paths with semicolons, and if your first path starts with a drive letter, you might have to start the list with a semicolon too. At least that's what works for me.
Example:
php.exe -d open_basedir=;d:/www/applicationRoot;//internal.unc.path/ressource/
I uploaded my CodeIgniter project to a DirectAdmin panel and was getting the same error.
Then I changed these PHP settings:
open_basedir =
session.save_path = ./temp/
Then it worked for me.
Most people do not find a solution here because the possible causes are broad for WordPress, and most don't fully know why things are the way they are.
I've found that you may have to whitelist your server's IP, especially when using Cerber: in some cases it can think you are uploading .js files when you are actually uploading .png files.
The server IP needs to be whitelisted, and in some rare cases the uploaders too.
It is also good to have a tmp folder with 755 permissions in your base directory (it does not actually need to be called tmp). Also remember to set the paths properly, as below:
open_basedir = "/home/user/site.com/:/tmp"
upload_tmp_dir = /home/user/site.com/tmp
The best option for a quick setup is cPanel's MultiPHP INI Editor: when you save, both .htaccess and php.ini are updated, and the settings take effect on the site at the same time.
It is NOT recommended to set open_basedir to 'none', since you would be exposing root files that could then be edited with just a single file editor in WordPress, if that is truly possible.
Check the \httpdocs\bootstrap\cache\config.php file in Plesk to see if there are some unwanted paths.
Just search for
open_basedir =
in php.ini and disable it. That's the simplest way to solve this issue.
Before the change: open_basedir =
After the change: ;open_basedir =
P.S. After the changes, don't forget to restart your server.
Enjoy ;)
Modify the open_basedir settings in your PHP configuration (See Runtime Configuration).
The open_basedir setting is primarily used to prevent PHP scripts for a particular user from accessing files in another user's account. So usually, any files in your own account should be readable by your own scripts.
Example settings via .htaccess if PHP runs as Apache module on a Linux system:
<DirectoryMatch "/home/sites/site81/">
php_admin_value open_basedir "/home/sites/site81/:/tmp/:/"
</DirectoryMatch>
When I try to upload a 200KB file, for example, I get an error message saying the file is bigger than 8MB. When I try to upload a 10KB image, the upload process completes successfully.
I am using Drupal 7 with SQLite as the database engine, and the upload limit in php.ini is 8MB; the server is hosted on hordeeasy.com.
Can someone give me some advice?
This may be a problem with your web server/hosting.
File uploads are limited in two ways when using Drupal with Apache:
The PHP/Apache-enforced limit
The Drupal-enforced software limit
If you can get to your php.ini, check these settings are correct:
post_max_size = 20M
upload_max_filesize = 20M
I developed a web site using Drupal 6 and I need to allow users to upload and download large files (up to 200 MB). Can anyone please tell me how this is usually done? I don't know if HTTP is the best option; maybe there is another way.
The site is hosted on a dedicated web host and I don't have access to php.ini or other server configuration.
Which is the best way to do this?
Thank you.
FYI, the values are: upload_max_filesize and post_max_size
To start with, you need to find out what PHP's upload_max_filesize and post_max_size are. Have a look at the output of phpinfo() to find the current settings. If they are > 200MB, you're OK to use HTTP already, using any Drupal module that deals with file uploads.
If either setting is smaller, you can try to alter them via .htaccess, a custom php.ini, or a couple of other methods as per here (note that upload_max_filesize and post_max_size cannot be changed at runtime with ini_set()), but this may or may not work, depending on your host.
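If the .htaccess route is available to you (PHP running as an Apache module and AllowOverride permitting it), a sketch looks like this; the values are illustrative, and post_max_size should be at least as large as upload_max_filesize:
php_value upload_max_filesize 200M
php_value post_max_size 210M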
I have recently moved a Drupal site (both servers run on a Debian-based LAMP stack). Everything works great here, including the uploading of images via a CCK filefield. Original url:
dev.example.com/foo
Deploying it to a test folder on the production server for an environmental shakedown cruise led it here:
www.example.com/foo
Everything works here too, including image uploads. After adjusting sites/default/settings.php, then making it read-only again, I renamed the folder to its production name:
www.example.com/bar
Everything works fine here except for image uploading. I've adjusted the webroot variable within settings.php.
Things I have tried so far:
Gave the PHP system user write permissions to sites/default/files (images are set to go in sites/default/files/images but imagecache just puts them in sites/default/files)
Enabled PHP file uploading for www.example.com/bar/sites/default/files
Are there any other configuration settings I should be looking out for here? I'm running low on relevant solutions.
Edit: I had quite the typo there; I adjusted sites/default/settings.php, not sites/default.settings.php.
Your question is slightly confusingly framed. default.settings.php has no impact on Drupal -- it's merely a template. The file that contains the actual database connection information and other configuration is settings.php.
You may also want to look at your .htaccess file in your root Drupal folder and try changing the RewriteBase directive to the folder you are accessing your site on. Usually you should not have to change the $base_url directive in the settings.php file that you may/may not have done. Reverse that change for now if you have (you may need to play around with that later though).
imagecache will always put the image derivatives in sites/default/files, but imagefield will upload the original image to the folder you specify (within sites/default/files). You will find the setting for the imagefield under Manage Fields -> [Name of Image field] -> Configure, under Path Settings.
Please google to understand the difference between imagecache and imagefield. Make sure your sites/default/files directory (and its subfolders) is writable by the Apache user (usually www-data).
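For example, on a Debian-style setup where Apache runs as www-data, something like this from the Drupal root (run with sudo if needed):
chown -R www-data:www-data sites/default/files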
In such situations, it's usually a good idea to pick up a book on Apache (if you haven't already) and try to understand how it works. It will be time consuming but will help you out in the future when you encounter configuration issues like this.
This worked for me. When having issues uploading images to a CCK field, I gave write permissions to the directory:
sites/default/files/field/image
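In case it helps, a sketch of what that can look like from the Drupal root, assuming the web server user is in the directory's group:
chmod -R 775 sites/default/files/field/image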