I added CloudFront to hosting with
amplify configure hosting
but once this is pushed to the cloud, all environments are updated, so both dev and prod now have it.
If I remove it from dev and push the changes, it gets removed from prod as well.
How can I have CloudFront only on the prod environment?
I'm not sure how you'd do this. You may be able to add conditionals in the CloudFormation.
It's generally a best practice to have your dev & test environments mirror your prod environment. In Amplify, all environments are identical, excluding of course the data and the load on resources from actual use of the service.
Also, without CloudFront, I'm not sure how you'd access the (deployed) front-end pages.
Can you elaborate on why you want to eliminate it?
EDIT
Amplify does not expose the S3 bucket or CloudFront distribution when hosting your front-end; so you won't be able to alter this behavior.
I was trying to follow this post on using remote branches to create a staging environment.
I could not figure out the hooks part of it, but I created an empty repo and added a staging remote:
git remote add staging https://github.com/myusername/newEmptyRepo.git
git push staging master
This did fill my staging repo with project code, but the code was several months old and missing many of the components I have added since then. I tried to make a small change and go through the add/commit/push process, but was then told "Everything up-to-date", which it obviously isn't in any way, especially since I had just made a change.
How do I force it to actually push the new code to my staging repo?
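Concretely, the sequence I'm running looks like this (the commit message is just a placeholder):
git status                      # shows which branch I'm on and what has changed
git add -A
git commit -m "small test change"
git push staging master         # this is the step that reports "Everything up-to-date"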
Alternatively, is there a better way to make a staging env? I have seen lots of recommendations for GitHub Actions, but those seem to only work with cloud deployments like GCP/AWS. I am deploying with Firebase and want to deploy staging to a subdomain and then, when ready, manually deploy to prod.
I'm pretty new to CDNs. I found a good free service that fits my needs for WordPress development (and it also has a plugin for it). I already tried it on a remote project and it works.
Now, I usually develop websites in a local environment (http://my-project.test:9000) and then push to production on the final domain (https://my-project.com).
Well, I don't understand how to work in a local environment with a CDN.
For instance, I tried configuring the plugin locally, but it says it needs HTTPS (I've never had to install an SSL certificate for local development). Does a CDN always need HTTPS?
Also, in my head I should be able to use the same CDN key for two domains (for example the two above: one for local and one for production) so the static assets are "shared" between the two WordPress installations.
So in general I'm really confused: how do you develop in a local environment when you use a CDN?
I'm trying to set up a Next.js app on Amplify with container-based hosting on Fargate, but when I run amplify add hosting, the only two options I get are Amplify Console and CloudFront + S3.
I've already configured the project to enable container-based deployments, but I'm just not presented with the option to do so.
The Amplify CLI version is v4.41.2 and the container-hosting plugin is correctly listed among the active plugins.
The region is eu-west-1, the CLI is configured, and I've gone through all the steps more than once:
amplify init
amplify configure project
amplify add hosting
Are there any prerequisites, or something I missed or don't know of? I can't find anything about it.
According to this video, it's currently only available in us-east-1.
https://youtu.be/rA5l82vypXc
Here's the limitation they have mentioned in their docs: https://docs.amplify.aws/cli/usage/containers/#:~:text=Hosting%20with%20Fargate%20in%20Amplify%20is%20only%20available%20in%20US%2DEast%2D1%20at%20this%20time
Hosting with Fargate in Amplify is only available in US-East-1 at this time
So the best solution would be to change the region of your project itself, if possible. Otherwise, use ECS directly and build your CI/CD pipeline manually (e.g., with CodePipeline).
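As a rough sketch of the first option (the profile name below is just an example, and this assumes the project can be re-initialized): pin an AWS profile to us-east-1 and initialize the project with it, since the backend region should follow the selected profile:
# in ~/.aws/config (profile name is hypothetical)
#   [profile amplify-use1]
#   region = us-east-1
amplify init                 # choose the AWS profile option and pick amplify-use1
amplify configure project    # answer Yes to enabling container-based deployments
amplify add hosting          # the container-based option should now be offered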
We're using Google Cloud Build to deploy pull-request-specific versions of our app to GAE so we can share dev versions with stakeholders before launching them into the wild. On GAE, a URL looks like http://[VERSION_ID]-dot-[YOUR_PROJECT_ID].appspot.com or https://my-pr-name-dot-projectname.appspot.com
We want to allow stakeholders to preview and to run E2E tests (including Firebase login), but because of what's essentially a wildcard subdomain, we'd have to manually whitelist each subdomain in the Firebase control panel under "Authorized domains" after every deploy. Unfortunately, Firebase doesn't allow wildcard-style whitelisting (e.g. *-dot-projectname.appspot.com).
We've reached out to Google support but they've confirmed that whitelisting can only be done manually.
One possibility would be to use a separate staging project for PR testing.
You'd whitelist http://[YOUR_STAGING_PROJECT_ID].appspot.com or https://staging_projectname.appspot.com, and you'd manage the mapping from a specific PR to the staging project via traffic migrations, which can be done programmatically from your PR automation scripts.
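A minimal sketch of what that automation could run (the project ID, version name, and service below are hypothetical):
# deploy the PR build to the staging project as its own version, without shifting traffic yet
gcloud app deploy --project my-staging-project --version pr-1234 --no-promote
# then route all traffic of the default service to that PR's version
gcloud app services set-traffic default --project my-staging-project --splits pr-1234=1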
The drawback would be that you'd effectively be verifying only one PR at a time. But that's not necessarily all bad: serializing PR verifications eliminates the risk of breakages due to conflicting changes that each pass in isolation.
There are also other advantages to using a separate project for testing purposes that you might find of interest; see Advantages of implementing CI/CD environments at GAE project/app level vs service/module level?
We encountered the same issue and decided on authorizing a constant number (say N) of "preview environments". We use GCP Cloud Run and our CI/CD is run by GitHub Actions. The idea is to cycle through the environments so that every N PRs we reuse the same environment; this is done by taking the PR number modulo N.
Here's the main bash command in the GitHub Actions YAML:
run: echo "PREVIEW_ENV=$((${{github.event.number}} % ${{ env.N_PREVIEW_ENVS }}))" >> $GITHUB_ENV
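A later step can then use PREVIEW_ENV to pick the target environment; for example, with Cloud Run (the service name, image, and region below are hypothetical):
# deploy the PR build to the preview slot chosen above
gcloud run deploy "myapp-preview-${PREVIEW_ENV}" --image "gcr.io/my-project/myapp:${GITHUB_SHA}" --region europe-west1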
The Setup:
I'm setting up a WordPress-powered application using Elastic Beanstalk from Amazon Web Services. All development is being done locally under a MAMP apache2/php5 server environment, with a Git repository controlling the entire application root.
Deployment Workflow:
After committing any code changes (edits, new plugins, etc.) to the repo, the application is deployed using the AWS EB CLI's eb deploy command, which pushes the latest version out to any running EC2 instances managed by Elastic Beanstalk.
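Concretely, a deploy cycle looks roughly like this (the commit message is just a placeholder):
git add -A
git commit -m "install/configure plugin"
eb deploy        # pushes the latest committed version to the Elastic Beanstalk environment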
My Issue:
Sometimes the code changes aren't exactly in sync between my development and production environments and I'm not sure how to overcome it, especially when trying to install and set up plugins like W3 Total Cache or WP Super Cache.
Since my local environment doesn't have things like a memcached server installed, but my production environment does (ElastiCache), I'm unable to save the proper settings file and deploy it for use in my production environment. These plugins won't allow me to select the needed services because they see them as not available...
It seems I can only get W3 Total Cache to work if I install it directly on a live production environment, which seems like a bad idea.
Given the above:
Am I going about deployments the wrong way?
Should plugins like W3 Total Cache be installed and configured on local development environments and pushed to production environments?
I cannot comment on the issues specific to Elastic Beanstalk, but based on experience I can make a suggestion about the second part of your issue statement:
You are better off running a development environment that mirrors your production environment as closely as possible. I suggest that you convert from MAMP to a VM environment like VirtualBox. You might want to check out puphpet.com for help in getting it set up. It requires some startup effort, but gives you an environment similar to or the same as your production servers. For example, you could run memcached yourself so you could actually test it with W3 Total Cache.
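For example, a sketch only, assuming a Debian/Ubuntu-based VM (package names vary by PHP version):
# install and start memcached plus the PHP extension inside the VM
sudo apt-get install -y memcached php-memcached
sudo service memcached start
# W3 Total Cache can then be pointed at 127.0.0.1:11211, mirroring the ElastiCache endpoint used in production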
As for your second question, just installing a plugin in the production environment without testing it beforehand has obvious risks (but then again clients do that all the time). I would prefer to test first. To a certain extent it probably depends on how critical it is if the site experiences downtime or weirdness.
I would suggest creating another environment on Beanstalk.
It's easy, fast, and more reliable than a VM in your case because it will also allow you to test your deployment process.
I usually have 3 environments for every website. Each environment is on its own branch. If your configuration differs between environments (URL and database access, for example), just store your wp-config and other config files in S3 (you may not want production passwords in your Git repository), and through .ebextensions you can download them into your website automatically.
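For example (environment, bucket, and file names are hypothetical; the exact destination path depends on the platform):
# create and target a second Beanstalk environment for staging, then deploy the staging branch to it
eb create my-site-staging
eb use my-site-staging
eb deploy
# an .ebextensions command can pull the per-environment config from S3 during deployment, e.g.:
aws s3 cp s3://my-config-bucket/staging/wp-config.php wp-config.php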
I use AWS Beanstalk that way for 16 websites, some of which are WordPress sites, all with autoscaling and able to handle thousands of simultaneous users.
Don't hesitate to ask me for further details.