Using the Elastic Beanstalk console, I'm trying to create a super basic Node.js environment using the preconfigured platform and the sample application.
When I try to configure more options and then modify Rolling updates and deployments, I only have two options for Deployment policy. Why don't I have the option to select Rolling or Rolling with additional batch? What else would I need to do either within Elastic Beanstalk or in another service (resource) to be able to do this?
The region is N. California and the IAM user has full admin permissions.
The Configuration preset needs to be changed from Low cost (Free Tier eligible) to either High availability or Custom configuration: Rolling and Rolling with additional batch are only available for load-balanced environments, and the Low cost preset creates a single-instance environment. It would still be good to know how to change this after the application has already been created using the Low cost preset.
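One way to change this after the environment has been created is through option settings; a rough sketch with the AWS CLI, where the environment name and batch values are placeholders and us-west-1 matches the N. California region mentioned above:

```bash
# Switch the existing environment to load balanced and enable rolling deployments.
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --region us-west-1 \
  --option-settings \
    Namespace=aws:elasticbeanstalk:environment,OptionName=EnvironmentType,Value=LoadBalanced \
    Namespace=aws:elasticbeanstalk:command,OptionName=DeploymentPolicy,Value=Rolling \
    Namespace=aws:elasticbeanstalk:command,OptionName=BatchSizeType,Value=Percentage \
    Namespace=aws:elasticbeanstalk:command,OptionName=BatchSize,Value=30
```

The same option settings can also live in an .ebextensions config file so that new environments pick them up automatically.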
Amplify does not support the CLI option --profile; it always uses the profile specified when the application was generated, but different team members use different AWS profiles.
How can I change/configure/use a profile other than the one used when the application was generated?
The aim is to publish changes from a different computer. The final goal is to use a CI server to publish the application to different regions.
Amplify does not work like "other" development tools where the tool is detached from Git. Amplify goes hand in hand with Git and requires initialization after cloning. When running amplify init and choosing an existing environment (one already pushed by another developer), it is possible to select a different AWS profile.
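For the CI goal, one possible sketch uses the Amplify CLI's headless init; the environment name "dev", the profile name "ci-profile", and the exact JSON payloads are assumptions, so check the headless-mode documentation for your CLI version:

```bash
# Re-initialize the cloned project against an existing environment with a different AWS profile.
AMPLIFY='{"envName":"dev"}'
PROVIDERS='{"awscloudformation":{"configLevel":"project","useProfile":true,"profileName":"ci-profile"}}'
amplify init --amplify "$AMPLIFY" --providers "$PROVIDERS" --yes
```

Each team member (or CI job) can point profileName at whichever profile from ~/.aws/credentials they want to deploy with.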
I have been using Firebase for a long time (since 2018) and I love it. Back then the southamerica-east1 (São Paulo) location did not exist. Now I would like to host the project (web app, Cloud Functions, and database) in southamerica-east1 to reduce cost and bring it closer to my end users (also based in Brazil).
I have source control, and all environment parameter values are stored in custom environment variables. The application works fine when no data is found. I have no concerns about backing up data and no problem with downtime; this is not a critical app.
However, I can't just delete the application, because I already have users logged in and IoT devices sending data through Pub/Sub.
How can I rebuild my Firebase/Firestore/web application/Functions from the ground up and make sure the new location is southamerica-east1? If possible, I need to keep users and passwords, and web
Looking ahead (I don't think moving the bucket location is the best solution here): based on the page Select locations for your project I can't update the location, but since it is tied to the bucket location, and if it doesn't break the project, I could use the Google Cloud Transfer page following Moving and renaming buckets.
Would that be a better solution than rebuilding the app (Firebase/Firestore/web application/Functions)?
Could it break my Firestore database, Cloud Functions, or web app?
Could I lose my project domain or any other related URL parameter such as authDomain, databaseURL, or storageBucket?
Would I need to update some web app parameters after the change?
Projects and their resources cannot be moved to a new location at present, and migrating the data is a manual process. The difficulty varies by product.
General guidance
Do not delete the old project until you have fully migrated.
Hosting
This migration is nearly trivial, with the understanding that there is likely a minor service interruption while moving custom domains.
Deploy to the new site (see the sketch after these steps)
CNAME your custom domain to the new site (myproject.firebaseapp.com)
Delete custom domain from old site
Add custom domain to new site
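A minimal sketch of the deploy and CNAME steps, assuming the Firebase CLI, myproject as the new project id, and www.example.com as the custom domain (all placeholders):

```bash
# Deploy the existing site build to the new project's Hosting site.
firebase deploy --only hosting --project myproject

# The CNAME change happens at your DNS provider; as a zone-file entry it would look like:
#   www  IN  CNAME  myproject.firebaseapp.com.
```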
Cloud Functions
This migration is trivial.
Create a local directory for your new project
Run firebase init and set up project normally (enable Functions)
Copy your Functions code into the new project's functions/ directory
Deploy to the new project (see the sketch below)
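A minimal sketch of those steps, assuming myproject as the new project id and that the old Functions source lives in ../old-project/functions (both placeholders):

```bash
# Create a fresh directory and initialize Functions in it (select the new project when prompted).
mkdir myproject && cd myproject
firebase init functions

# Copy the existing Functions code over the generated skeleton, then deploy to the new project.
cp -R ../old-project/functions/. functions/
firebase deploy --only functions --project myproject
```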
Database
This migration is tricky, difficult, and highly specific to your use case and tolerance for downtime. What follows is a general template to adapt.
Reference docs for import/export: Firestore import/export, Realtime Database backups
In the old project:
Lock the database using security rules to prevent changes
Export existing database
In the new project:
Import the database backup
You will probably need to migrate existing users as well (see account export/import) so that user ids stored in your database still reference the correct accounts; a command sketch follows this list
Point existing apps to the new project
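A sketch of the export/import and account-migration steps with the gcloud and Firebase CLIs; old-project-id, myproject, the bucket name, and the export folder are placeholders, and the scrypt hash parameters for password users come from the old project's Authentication settings:

```bash
# Old project: export Firestore into a Cloud Storage bucket it can write to.
gcloud firestore export gs://my-migration-bucket --project old-project-id

# New project: import the export (its service account needs read access to that bucket).
gcloud firestore import gs://my-migration-bucket/<EXPORT_FOLDER> --project myproject

# Migrate Auth accounts so the user ids referenced in the database keep resolving.
firebase auth:export users.json --project old-project-id
firebase auth:import users.json --project myproject \
  --hash-algo=SCRYPT \
  --hash-key=<BASE64_SIGNER_KEY> \
  --salt-separator=<BASE64_SALT_SEPARATOR> \
  --rounds=<ROUNDS> \
  --mem-cost=<MEM_COST>
```

After the import, repoint your apps at the new project's config (apiKey, authDomain, databaseURL, storageBucket, and so on), which is the last step above.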
If downtime is not an option, or if you'll be deploying a new mobile app version and need time for changes to propagate, then you'll need to set up a dual write model:
Dual sync: Create Cloud Functions on both the new and old databases that duplicate all create/update/delete operations to the respective partner endpoint.
Sync pre-existing data: Perform the export/import process as above on all data created before the dual sync was implemented, excluding the step to lock the old database
Shut down your old mobile app version (once enough accounts have migrated)
Shut down the dual sync Functions and turn down the old site
Based on your information, the main issue is Firestore, because the other products, such as Cloud IoT Core and Hosting, are globally balanced (they can't be pinned to a specific region).
Other products, like Cloud Functions, can be redeployed with the same code and name into another region.
I think you could create another project just to move the database to the new region and configure all of your Cloud resources to reach the newly located database.
As a caveat, you need to add another domain/subdomain and create new credentials to work with the new project; this step can't be skipped because it is required for authentication.
On the application side you can then add access to the new database.
If you need assistance during your migration, you can open a case with GCP/Firestore support.
This is a hard pill to swallow, but the cost and time of migrating to another region may well be higher than keeping your application working as it does today.
The Issue
I am currently in the process of integrating a pre-rendering service for SEO optimization; however, we use an Azure App Service plan to scale up or down when necessary.
One of the steps for setting up the proper configuration requires placing an applicationHost.xdt file in the /site/ directory, which is one level above the /site/wwwroot directory where the application itself gets deployed to.
What steps should I take in order to have the applicationHost.xdt file persist to new instances spawned by the scaling process?
Steps I have taken to solve the issue
So far I have been Googling a lot, but I haven't found much documentation on using an applicationHost.xdt file in combination with an Azure App Service plan.
I am able to upload the file to an instance manually; however, I assume that when we then scale up to more instances, the manually uploaded file will not be present on the new instance(s).
Etcetera
We are using Prerender.io as pre-rendering service.
Should an easier-to-set-up and similarly priced service be available, we would be open to suggestions, as we are in an exploratory phase regarding pre-rendering.
I suppose this won't be a problem, because all files under an Azure web app are shared between all of your instances. You can check this in the Kudu wiki: Persisted files. In my test, all instances kept the file.
As for uploading the applicationHost.xdt, you don't have to do it manually; there is an IIS Manager site extension that lets you very easily create XDT files, and it provides some sample XDTs for you.
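If you would rather script the upload than place the file by hand, one option (a sketch, assuming the Kudu VFS REST API and the site-level deployment credentials from your publish profile; <app-name> and <publish-password> are placeholders) is:

```bash
# PUT applicationHost.xdt into the shared /site folder through the Kudu VFS API.
# The If-Match header allows overwriting an existing copy of the file.
curl -u '$<app-name>:<publish-password>' \
     -H "If-Match: *" \
     -T applicationHost.xdt \
     "https://<app-name>.scm.azurewebsites.net/api/vfs/site/applicationHost.xdt"
```

Because /site sits on the shared content volume, the transform should then be visible to every instance in the plan.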
I'm running a WordPress multisite which, for short periods every week, experiences a large number of users and needs more CPU and RAM.
I therefore wish to make use of Azure autoscale to turn on more instances when the demand is there. However, is it possible to make a setup where the different instances share the same storage and database? And if so, how could it be done?
It is supported out of the box:
Go to "Web Sites" and add a new website from the "Gallery".
Select "WordPress"
Follow the rest of the wizard.
The wizard allows you to create a MySQL database. The website runs as a cluster and uses a database which also runs on a cluster of servers (hosted by ClearDB).
I installed WordPress on EC2. I created a load balancer by creating an image (AMI) and then adding both Wordpress1 and Wordpress2 to the load balancer, but I'm still getting database errors and have to restart the instances. If I'd like to put 4 instances behind the load balancer, are the steps the same? I ask because I saw a "Number of Instances" option when I launched the AMI; the default value is 1, and I'm not sure whether I should enter 3 or 4 to create multiple instances in one click.
Also, if I make updates on the Wordpress1 instance, will those updates show when the domain loads the Wordpress2 instance?
If you want to launch multiple instances, a database, and so on, you should consider using AWS CloudFormation. A CloudFormation template is just a big JSON document that contains the configuration of your environment, including the servers, autoscaling, access, registration with the load balancer, etc.
See http://aws.amazon.com/en/cloudformation/ for more details.
There is already an example template for WordPress, including a database and autoscaling groups (example WordPress template).
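A rough sketch of launching such a stack from the CLI; the template URL points at what I believe is the public AWS sample, and the stack name, key pair, and parameter names/values are illustrative, so check the parameters of the template you actually use:

```bash
# Launch the sample multi-AZ WordPress stack (autoscaled web tier plus an RDS database).
aws cloudformation create-stack \
  --stack-name wordpress-demo \
  --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/WordPress_Multi_AZ.template \
  --parameters ParameterKey=KeyName,ParameterValue=my-keypair \
               ParameterKey=DBUser,ParameterValue=wpadmin \
               ParameterKey=DBPassword,ParameterValue=<password>
```

CloudFormation then creates the load balancer, Auto Scaling group, and database described in the template and registers the instances for you.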
However, as datasage mentioned, you will need to make adjustments to WordPress to make it work in a multi-server environment.
The "problem" with multi-server environments is that if you upload a file or, in your case, upgrade WordPress, it only happens on one server, which could be terminated at any point. Furthermore, the upgrade could contain changes to the database structure, and then it gets complicated.
If you are building something in the cloud, you should always keep in mind that every service you build (in your case the frontend web servers and the database) should be allowed to fail without interrupting your service.
Another point: you should avoid doing things by hand; automation is key.
An environment where you need to attach your servers to a load balancer by hand is not very useful in the cloud, where servers are continuously terminated, rebooted, and replaced.
For your web servers you can use Auto Scaling groups to get this behavior.
If you are using Auto Scaling groups and a server is terminated or considered unhealthy, a new one will be started automatically and registered with the load balancer as soon as it is considered healthy.
For your database, Amazon offers Multi-AZ deployments for RDS, which provide automatic failover.
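A rough sketch of creating such an instance with the CLI; the identifier, instance class, storage size, and credentials are placeholders:

```bash
# Create a Multi-AZ MySQL instance so the WordPress database fails over automatically.
aws rds create-db-instance \
  --db-instance-identifier wordpress-db \
  --engine mysql \
  --db-instance-class db.t3.small \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password <password> \
  --multi-az
```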
Applying upgrades in the cloud can be tricky, and there are different ways to do it: for example, using a shared NFS mount for the code base, Git deployments, or the way you already started, creating a new AMI for every upgrade and then replacing the servers. There are a lot of options, and they all have their benefits and drawbacks.
As far as I understand your use case, the cloud may not be the right choice at the moment.
Normally, hosting a small business in the cloud is much more expensive than using a single server. You will only save money if you need, say, 20 servers in the evening and only 2 or 3 for the rest of the day. Of course there are a lot more points to consider, but that would be too much here.
Autoscaling in EC2 is horizontal scaling, which means that instances are added as your infrastructure scales up. This is in contrast to vertical scaling, where a single instance is given more resources.
In order to use this effectively, each instance must not store data that may be needed by other instances. The most common requirement is the database, which will need to live on its own instance outside of the autoscaled instances. You could use RDS for this.
WordPress also stores file uploads, plugins, and themes in the wp-content folder within the WordPress install. By default, if you upload a file, it will be stored on one instance but not on any of the others. You could store everything on an NFS volume shared by one of the instances, or you could try a plugin like this: http://wordpress.org/plugins/wp2cloud-wordpress-to-cloud/
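A minimal sketch of the shared-NFS option; the exporting instance's private IP and the paths are placeholders, and the NFS export itself must already be configured on that instance:

```bash
# On each autoscaled web instance: mount wp-content from the instance that exports it over NFS.
sudo apt-get install -y nfs-common   # or: sudo yum install -y nfs-utils
sudo mount -t nfs 10.0.0.10:/var/www/html/wp-content /var/www/html/wp-content
```

Keep in mind that the exporting instance then becomes a single point of failure, which is one reason an external store (such as the plugin above) is often preferred.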