Hey fellow overflowdians,
I've started to use Docker to facilitate my local development, but I'm facing an issue. I'm a WP developer, and every time I set up a new WP project I have to manually add the hostname of the machine to my Docker Compose file, like so:
    extra_hosts:
      - "projectname.local:IP-of-the-container"
Well, as you know, the IP of the container changes every time the container is recreated, just as its ID does.
So I'm looking for a way to set this up in my Docker Compose file so that the right container IP is automatically added to the hosts file, letting the PHP container "talk" to the frontend nginx webserver via the local domain, and vice versa.
Any ideas?
Thanks!
Andrew
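One way this is often handled is with Compose's built-in networking instead of extra_hosts: containers on the same user-defined network resolve each other by service name or alias, so no IP ever needs to be hard-coded. A minimal sketch (the service names and the appnet network are hypothetical):

    # docker-compose.yml -- DNS-based discovery instead of extra_hosts
    version: "3"
    services:
      nginx:
        image: nginx:latest
        networks:
          appnet:
            aliases:
              - projectname.local   # nginx answers on the local dev domain
      php:
        image: php:7-fpm
        networks:
          - appnet
    networks:
      appnet: {}

With this, the php container can reach nginx at projectname.local, and nginx can reach PHP-FPM at php, regardless of which IPs Docker assigns.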
Related
I have a Bitnami WordPress instance on AWS Lightsail. I need to migrate the site to another, identical AWS instance. I created a PHP installer from the first instance using the WordPress Duplicator plugin.
I then uploaded the file to a folder on the new server (opt/bitnami/apache2/htdocs/). The instructions I've seen online say I just need to navigate to the location of the installer in a browser and it will run. However, when I attempt to access the PHP file from a browser, I just get a front-end error saying:
OOPS! THAT PAGE CAN’T BE FOUND
I have seen suggestions online that the installer be placed into a public folder called html_public, but my instance doesn't have a folder like that. I changed the access rights on the folder, and on the installer, to full read-write access.
Any ideas how I can get this to work?
You must configure the Security Groups firewall to open the HTTP ports.
Open the Amazon VPC console at https://console.aws.amazon.com/vpc/
On the Inbound Rules tab, choose Edit, set the port range to 80, 443, set Source to Anywhere (IP address 0.0.0.0/0), and Save.
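If you prefer the command line, the same rule can be added with the AWS CLI; a sketch, assuming your instance's security group (the group ID is a placeholder, and you would repeat the call for port 443):

    # open HTTP to the world on a security group
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 80 --cidr 0.0.0.0/0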
I'm running Jenkins and SonarQube in containers on Debian. The Jenkins image is started with the option --link sonar:sonar, where sonar is the name of the running Sonar container. Hence, Jenkins' /etc/hosts contains a hostname entry pointing to the IP of the dockerized Sonar. In the Sonar config section (in Jenkins), sonar:9000 is entered as the Sonar server.
When building, the scanner runs as expected, but the Sonar link on the project page contains the internal Docker address, i.e. sonar:9000! This address is meaningless outside Docker. I want the link to point to the host machine. When I navigate directly to the host on port 9000, I am able to see Sonar's analysis.
How can I separate the URL to Sonar that Jenkins uses in the build process from the URL that is displayed on the project build page?
Here is a screenshot from the project build page in Sonar.
The SonarQube link points to the URL that is set in Manage Jenkins->Configure System under SonarQube Servers. In this instance, the URL is sonar:9000, which is the hostname as known inside the container running Jenkins. This URL is meaningless outside of the container. It should be possible to enter one internal URL for the server and one external URL for the Jenkins users.
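One thing worth checking, though this is an assumption rather than something confirmed in this thread: SonarQube has a "Server base URL" setting (sonar.core.serverBaseURL) that controls the links it generates, and pointing it at the externally reachable address may help:

    # sonar.properties (or Administration -> General -> Server base URL)
    # the hostname below is a placeholder for your externally visible host
    sonar.core.serverBaseURL=http://ci.example.com:9000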
I installed Vagrant on a Mac and I want to achieve this:
Launch vagrant up and get a Vagrant machine with Docker and Docker Compose installed
Install WordPress with MySQL inside this Vagrant machine with docker-compose up
Have the folder (/var/www/html) of the Docker container mapped as a volume in my Vagrant machine at /dockermapinVagrant
Have this /dockermapinVagrant mapped onto my host (OS X) so I can modify files directly from the host
I achieved that and everything works perfectly.
I can add templates from my OS X host with no problem (dragging the theme into the themes folder on my host), and see the changes directly on the WordPress site in my browser...
The problem
What I noticed is that I cannot install any plugins on WordPress (dashboard -> updates). I get this message:
To perform the requested action, WordPress needs to access your web server. Please enter your FTP credentials to proceed. If you do not remember your credentials, you should contact your web host.
Solution I tried
I changed the permissions in the wp Docker container by setting the owner to www-data and chmod-ing to 777, but on some folders, such as wp-content, the mode doesn't change to 777. Moreover, setting the owner to www-data doesn't work at all; it always stays 1000:1000 in the container.
Is there a way to update plugins on WordPress in a Docker container without FTP?
Or is there maybe a better way to do this? Use a data-only container on the Vagrant machine with FTP access to the mounted volume? I could then map the data container volume onto the Vagrant machine, and then onto the host, to have direct access and see changes during development. But I do not want to give FTP access directly to the data-only container (FTP is not secure, and I prefer to manage the backups and the data from the Vagrant machine directly, and delete the mapped volumes in production).
You sure can achieve this, and it's probably just the missing line
define( 'FS_METHOD', 'direct' );
in your wp-config file. You should set up each WordPress site outside of the Docker environment first, IMHO, then import the database, WordPress files, etc. into Docker using your Dockerfile(s) as part of staging and deployment, which should be distinct from development (although some of the deployment will be shared).
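For reference, a minimal sketch of where that line goes, assuming a standard wp-config.php:

    /* wp-config.php -- force direct filesystem writes instead of FTP */
    define( 'FS_METHOD', 'direct' );

Note that 'direct' only works if the web server user can actually write to wp-content, which is exactly the ownership problem described above. VirtualBox shared folders generally ignore chown (ownership is fixed at mount time), which would explain why the owner stays at 1000:1000 no matter what you do inside the container.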
On better ways of managing this: I would not put Docker inside Vagrant if at all possible; it adds unnecessary complexity at that stage of development. I would use Vagrant exclusively, focus on getting my provisioning scripts ready (I share scripts between Vagrant and Docker), and work via SFTP directly against the Vagrant box, committing changes via git. You can then focus your efforts on the necessary code and pull what you need when you need it.
Once it gets to the stage of testing, or staging, I use the provisioner scripts to help me build my Docker environment consistently (probably sharing some of the provisioning code). I can then pull a specific release from my repo and build it into my Docker image, which I can deploy.
As another alternative, if you really want to spin up and tear down WordPress quickly, get to grips with WP-CLI, the command-line tool that can install WordPress and plugins, manage updates, and verify install integrity.
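A few illustrative WP-CLI commands (the URL, credentials, and plugin name here are just examples):

    wp core download                       # fetch WordPress itself
    wp core install --url=projectname.local --title="Dev Site" \
        --admin_user=admin --admin_password=secret \
        --admin_email=dev@example.com      # non-interactive install
    wp plugin install akismet --activate   # install and enable a plugin
    wp plugin update --all                 # update every installed plugin
    wp core verify-checksums               # check core file integrity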
I have a Symfony2 application with JS+PHP files that should access
"http://localhost/blahblah"
when on my development machine, but
"http://mydomain.com/blahblah"
when I push them to the production server. What's the appropriate way to configure these domains in Symfony2, to avoid manually changing the files with each server push?
I think the best option is to create a virtual host on your local machine and add these hosts: mydomain.com www.mydomain.com
Put them in your hosts file: on Linux that's /etc/hosts; on Windows, system32\drivers\etc\hosts (I guess).
When you want to see the production site, you can comment out those lines in your hosts file.
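A minimal sketch of the hosts entries, assuming the site is served by a vhost on the local machine (hosts files map hostnames only; paths like /blahblah are handled by the web server):

    # /etc/hosts -- point the production domain at the local machine
    127.0.0.1   mydomain.com www.mydomain.com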
I am trying to introduce a staging step in my company's code-production process. We currently have ~10 eng clients who commit code individually and update/debug against their local codebase; we then deploy the code to the production environment and have other employees QA it. Obviously we would like a better pre-production test process to help catch bugs before they go live to the public.
My first attempt is to create a staging environment on an extra Ubuntu box with the most recently committed code from the eng clients. I could then allow the Product Managers to check this site and find bugs, test features, expose bottlenecks, etc.
What I have: the Ubuntu machine (local server) is currently configured as a normal eng client. It has a local Drupal installation and a complete backup of the db, and all of this is accessible locally. Let's say mysite.com = the official site, and ms.com = the local staging domain I use on the Ubuntu box. This local ms.com works just fine, so in essence I need to let other people at the company navigate to some URL that behaves exactly the way ms.com currently does. I have DNS servers pointing to the Ubuntu box, and it is running some side projects out of the /www folder.
In an effort to keep the side projects running, I think my solution is to create a name-based virtual host that points to the directory of the local Drupal installation. Is this the right thing to do to achieve my goals? Is there an easier way to open up this local config to the employees?
In trying to set up the virtual host I did the following:
I added the static IP address of the local server to /etc/hosts
I added a VirtualHost to /etc/apache2/sites-available with the DocumentRoot set to dir/DrupalInstallation (a sketch of such a vhost follows these steps)
I enabled the site with a2ensite
Then I restarted Apache.
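A minimal sketch of the kind of vhost meant above, assuming Apache 2.4 (the ms.com.conf filename and paths are hypothetical):

    # /etc/apache2/sites-available/ms.com.conf
    <VirtualHost *:80>
        ServerName ms.com
        DocumentRoot /var/www/DrupalInstallation
        <Directory /var/www/DrupalInstallation>
            AllowOverride All      # honour Drupal's .htaccess rewrite rules
            Require all granted
        </Directory>
    </VirtualHost>

Enabled with a2ensite ms.com followed by an Apache reload.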
Halfway success. I can get to the main page, but none of the modules load. I tried adding more hosts/variations and started changing all localhost references to the external address, but I don't really know what the underlying issue is or how to diagnose it. The one interesting bit is that if you click on some of the links, it kicks you back out to the index page of the /www folder; I don't think the site alias is 100% sticking for requests.
Let me know if there is any sort of log or report I can share to help diagnose/debug this. Any and all help greatly appreciated - thanks!
It sounds like your specific error accessing pages beyond the homepage is related to not having mod_rewrite enabled/configured.
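If that's the case, enabling it is usually just (a sketch for a Debian/Ubuntu-style Apache layout):

    sudo a2enmod rewrite            # enable the module
    sudo service apache2 restart    # pick up the change

combined with AllowOverride All in the vhost so Drupal's .htaccess rules are honoured.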
A different approach:
On a bigger scale, it sounds to me like you might not have what it takes to administer the staging server when something goes wrong. If you're unskilled at Linux server admin, save yourself the headache and use a preconfigured virtual appliance (e.g. Quickstart, AegirDev, or Walid) instead of the dedicated box. If your staging box isn't beefy enough to host virtual machines, then just run the QuickStart install scripts over a base Ubuntu build.
Now that you know your staging server works and runs imported Drupal sites successfully, install git, create a shared repo, and make sure you and your developers are set up to use git as their source control in their IDE.
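A minimal sketch of that shared-repo setup (the paths and the deploy user are hypothetical):

    # on the staging server: create a bare repo for everyone to push to
    git init --bare /srv/git/mysite.git

    # on each developer machine: add it as a remote and push
    git remote add staging deploy@ms.com:/srv/git/mysite.git
    git push staging master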