How to configure Wordpress via AWS Fargate Container using AWS CDK

I would like to configure Wordpress via AWS Fargate in the container variant (i.e. without EC2 instances) using AWS CDK.
I have already implemented a working configuration for this purpose. However, it is currently not possible to install themes or upload files in this setup, since Wordpress runs in one or more Docker containers whose file systems are ephemeral.
Here is my current cdk implementation:
AWS-CDK
export class WordpressWebsiteStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // GENERAL
    const vpc = new ec2.Vpc(this, 'Vpc', {
      // 2 is the minimum requirement for a cluster
      maxAzs: 2,
      // Only create public subnets, to prevent AWS from creating
      // a NAT gateway, which would incur additional costs.
      // This will create 1 public subnet in each AZ.
      subnetConfiguration: [
        {
          name: 'Public',
          subnetType: ec2.SubnetType.PUBLIC,
        },
      ],
    });

    // DATABASE CONFIGURATION
    // Security group used for the database
    const wordpressSg = new ec2.SecurityGroup(this, 'WordpressSG', {
      vpc: vpc,
      description: 'Wordpress SG',
    });
    // Database cluster for the wordpress database
    const dbCluster = new rds.DatabaseCluster(this, 'DBCluster', {
      clusterIdentifier: 'wordpress-db-cluster',
      instances: 1,
      defaultDatabaseName: DB_NAME,
      engine: rds.DatabaseClusterEngine.AURORA, // TODO: AURORA_MYSQL?
      port: DB_PORT,
      masterUser: {
        username: DB_USER,
        password: cdk.SecretValue.plainText(DB_PASSWORD),
      },
      instanceProps: {
        instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
        vpc,
        securityGroup: wordpressSg,
      },
    });

    // FARGATE CONFIGURATION
    // ECS cluster which will be used to host the Fargate services
    const ecsCluster = new ecs.Cluster(this, 'ECSCluster', {
      vpc: vpc,
    });
    // FARGATE CONTAINER SERVICE
    const fargateService = new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'WordpressFargateService', {
      cluster: ecsCluster, // Required
      desiredCount: 1, // Default is 1
      cpu: 512, // Default is 256
      memoryLimitMiB: 1024, // Default is 512
      // Because we are running tasks using the Fargate launch type in a public subnet, we must choose ENABLED
      // for "Auto-assign public IP" when we launch the tasks.
      // This allows the tasks to have outbound network access to pull an image.
      // #see https://aws.amazon.com/premiumsupport/knowledge-center/ecs-pull-container-api-error-ecr/
      assignPublicIp: true,
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry(wordpressRegistryName),
        environment: {
          WORDPRESS_DB_HOST: dbCluster.clusterEndpoint.socketAddress,
          WORDPRESS_DB_USER: DB_USER,
          WORDPRESS_DB_PASSWORD: DB_PASSWORD,
          WORDPRESS_DB_NAME: DB_NAME,
        },
      },
    });

    fargateService.service.connections.addSecurityGroup(wordpressSg);
    fargateService.service.connections.allowTo(wordpressSg, ec2.Port.tcp(DB_PORT));
  }
}
Perhaps someone knows how I can set up Fargate via CDK so that the individual Wordpress containers share a common volume on which the data is stored? Or maybe there is another elegant solution for this :)
Many thanks in advance :)
Found a solution 🤗
Thanks to the comments in the open GitHub-Issue and the provided Gist, I was finally able to configure a working solution.
I provided my current solution in this Gist, so feel free to have a look at it, leave some comments, and adapt it if it suits your problem.

I am part of the AWS container service team and I would like to give you a bit of background regarding where we stand. We recently (5/11/2020) announced the integration of Amazon ECS / AWS Fargate with Amazon EFS (Elastic File System). This is the plumbing that will allow you to achieve what you want: a shared, persistent file system mounted into every Wordpress task. You can read more about the theory here, and here for a practical example.
The example I linked above uses the AWS CLI simply because CloudFormation support for this feature has not been released yet (stay tuned). Once CFN support is released, CDK will pick it up and, at that point, you will be able to adjust your CDK code to achieve what you want.
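To give an idea of the direction, here is a minimal sketch of what the CDK wiring might look like once support lands. It assumes the efs.FileSystem construct and efsVolumeConfiguration support on the task definition are available in your CDK version, so treat the names and props as illustrative and check the released API:
// Sketch only: assumes ECS/EFS support has landed in CloudFormation and CDK.
const fileSystem = new efs.FileSystem(this, 'WordpressEfs', {
  vpc: vpc,
});

const volumeName = 'wp-content';
fargateService.taskDefinition.addVolume({
  name: volumeName,
  efsVolumeConfiguration: {
    fileSystemId: fileSystem.fileSystemId,
  },
});
fargateService.taskDefinition.defaultContainer!.addMountPoints({
  containerPath: '/var/www/html',
  sourceVolume: volumeName,
  readOnly: false,
});

// The tasks need NFS access (port 2049) to the file system.
fileSystem.connections.allowDefaultPortFrom(fargateService.service);
With something like this in place, themes and uploads written by one Wordpress task land on EFS and survive task replacement and scale-out.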

Related

Add storage to notification bot

I followed this tutorial to create a Teams notification bot with Teams Toolkit: https://learn.microsoft.com/en-us/microsoftteams/platform/sbs-gs-notificationbot?tabs=vscode&tutorial-step=3
To persist the channels where the bot is installed, I tried to add custom blob storage, but the documentation was not really clear to me.
In initialize.js I added
const { BlobsStorage } = require("botbuilder-azure-blobs");
const myStorage = new BlobsStorage(
  config.blobConnectionString,
  config.blobContainerName
);
and
notification: {
  enabled: true,
  storage: myStorage,
},
In the config.js I added
blobConnectionString: process.env.BLOB_CONNECTION_STRING,
blobContainerName: process.env.BLOB_CONTAINER_NAME,
and in .env.teamsfx.local I added
blobConnectionString=<my connection string>
blobContainerName=<my container name>
But it is not working; the Azure Function fails. How should I add blob storage for this purpose?
For additional information on how to incorporate notification storage into your application, you can review the following document, which includes a sample implementation available on GitHub:
https://learn.microsoft.com/en-us/microsoftteams/platform/bots/how-to/conversations/notification-bot-in-teams?tabs=ts%2Cjsts%2Cts4#add-storage
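One likely cause of the failure: per the linked article, the notification storage option expects a custom store implementing read, list, write, and delete, rather than a botbuilder Storage such as BlobsStorage. Below is a rough sketch of that shape backed by @azure/storage-blob; the class name and details are illustrative, so compare against the interface shown in the article:
const { BlobServiceClient } = require("@azure/storage-blob");

// Sketch only: one JSON blob per stored record, keyed by the store key.
class BlobNotificationStore {
  constructor(connectionString, containerName) {
    this.container = BlobServiceClient
      .fromConnectionString(connectionString)
      .getContainerClient(containerName);
  }

  async read(key) {
    try {
      const buffer = await this.container.getBlockBlobClient(key).downloadToBuffer();
      return JSON.parse(buffer.toString("utf-8"));
    } catch (error) {
      return undefined; // treat a missing blob as "no stored state"
    }
  }

  async list() {
    const items = [];
    for await (const blob of this.container.listBlobsFlat()) {
      const data = await this.read(blob.name);
      if (data) items.push(data);
    }
    return items;
  }

  async write(key, object) {
    const payload = JSON.stringify(object);
    await this.container.getBlockBlobClient(key).upload(payload, Buffer.byteLength(payload));
  }

  async delete(key) {
    await this.container.getBlockBlobClient(key).deleteIfExists();
  }
}
An instance of this class would then replace myStorage in initialize.js. Also make sure the container exists, e.g. by calling createIfNotExists() on the container client at startup.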

Why do hydration errors happen only in production, a few hours after deploying a Quasar application

I'm running into a weird situation that is becoming extremely hard to debug.
I made a post at https://forum.cleavr.io/t/cloudflare-caching-in-a-quasar-app/844 thinking that the problem was about caching.
We are having hydration errors in our webapp codotto.com ONLY after a few hours (or minutes, depending on website traffic). As soon as I redeploy the app, everything works well: no hydration errors anymore.
We had the idea that it was caching, but we have disabled caching completely in our Cloudflare dashboard:
Then we can verify that the cache is not being used:
Note the CF-Cache-Status header set to DYNAMIC (per Cloudflare's documentation, DYNAMIC in CF-Cache-Status means the response is not served from cache).
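For reference, a quick way to check that header from the command line:
curl -sI https://codotto.com | grep -i cf-cache-status
# cf-cache-status: DYNAMIC  -> the response was not served from Cloudflare's cache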
The app works great locally, and we are not able to reproduce the issue locally or in staging; this only happens in production. Here are our pm2 configurations:
production
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
module.exports = {
  name: "codotto.com",
  script: "index.js",
  args: "",
  log_type: "json",
  cwd: "xxxx/codotto.com/artifact",
  // Note: We detected that this web app is running on a server with multiple CPUs and hence we are
  // setting *instances* to "max" and *exec_mode* to "cluster_mode" for better performance.
  instances: 1, // change the value to "1" if your server has only 1 CPU
  // exec_mode : "cluster_mode", // remove this line if your server has only 1 CPU
  env: {
    "PORT": 7145,
    "CI": 1,
    "NUXT_TELEMETRY_DISABLED": 1
  }
}
staging
// DO NOT MODIFY THIS FILE. THIS FILE IS RECREATED WITH EVERY DEPLOYMENT.
module.exports = {
  name: "staging.codotto.com",
  script: "index.js",
  args: "",
  log_type: "json",
  cwd: "xxxxx/staging.codotto.com/artifact",
  // Note: We detected that this web app is running on a server with multiple CPUs and hence we are
  // setting *instances* to "max" and *exec_mode* to "cluster_mode" for better performance.
  instances: 1, // change the value to "1" if your server has only 1 CPU
  // exec_mode : "cluster_mode", // remove this line if your server has only 1 CPU
  env: {
    "PORT": 9892,
    "CI": 1,
    "NUXT_TELEMETRY_DISABLED": 1
  }
}
We are running out of ideas; this only happens in production, which makes it extremely hard to debug, and we are scoping the problem down to the server configuration.
We understand that this question might be tagged as off-topic since it's not too specific, but what I'm looking for are some ideas on things that might help debug this issue. The webapp is built with the Quasar framework in SSR mode, and we use https://cleavr.io/ to deploy our application.
We have tried following this guide on Quasar's website to debug hydration errors, but it hasn't gotten us anywhere.
In case you would like to reproduce the bug, you will need to sign up for an account on codotto.com, then visit https://codotto.com so that you are redirected to the dashboard instead of the landing page.
Can anyone here help or explain why we have these hydration errors?
The problem was not related to caching or any of the other causes we suspected.
In one of our bootfiles we had the following:
export default boot(({ router }) => {
  router.beforeEach((to, from, next) => {
    const requiresAuthentication = to.matched.some(
      (record) => record.meta.needsAuthentication
    );
    const appStore = useAppStore();
    if (requiresAuthentication && !appStore.isLoggedIn) {
      next({ name: 'auth-login' });
    } else {
      const onlyGuestCanSee = to.matched.some(
        (record) => record.meta.onlyGuestCanSee
      );
      if (onlyGuestCanSee && appStore.isLoggedIn) {
        next({ name: 'dashboard' });
      } else {
        next();
      }
    }
  });
});
In this file we didn't pass the store to useAppStore. During SSR, calling useAppStore() without the request-scoped store instance makes the store resolution fall back to a shared instance, so state from one request leaked into others (request pollution) and the server-rendered HTML no longer matched what the client rendered, which is exactly what produces hydration errors. The fix was to pass store to useAppStore:
export default boot(({ router, store }) => {
  const appStore = useAppStore(store);
  ...
})

Terraform: how to support different providers

I have a set of Terraform files in a directory called myproject:
\myproject\ec2.tf
\myproject\provider.tf
\myproject\s3.tf
....
the provider.tf shows:
provider "aws" {
region = "us-west-1"
profile = "default"
}
So if I run terraform apply in the myproject folder, a set of AWS resources is launched in us-west-1 under my account.
Now I want to introduce an AWS Glue resource, which is only available in a different region, us-west-2. How should I lay out the glue.tf file?
Currently I store it in a sub-directory under myproject and run terraform apply in that sub-directory, i.e.
\myproject\glue\glue.tf
\myproject\glue\another_provider.tf
another_provider.tf is:
provider "aws" {
region = "us-west-2"
profile = "default"
}
Is this the only way to lay out files that launch resources in different regions? Is there a better way?
If there is no better way, then I need another backend file in the glue sub-folder as well; besides, the common variables in the myproject directory cannot be shared.
--------- update:
I followed the link posted by Phuong Nguyen:
provider "aws" {
  region  = "us-west-1"
  profile = "default"
}

provider "aws" {
  alias   = "oregon"
  region  = "us-west-2"
  profile = "default"
}

resource "aws_glue_connection" "example" {
  provider = "aws.oregon"
  ....
}
But I saw:
Error: aws_glue_connection.example: Provider doesn't support resource: aws_glue_connection
You can use provider aliases to define multiple providers, e.g.:
# this is the default provider
provider "aws" {
  region  = "us-west-1"
  profile = "default"
}

# additional provider
provider "aws" {
  alias   = "west-2"
  region  = "us-west-2"
  profile = "default"
}
and then in your glue.tf, you can refer to the aliased provider:
resource "aws_glue_job" "example" {
provider = "aws.west-2"
# ...
}
More details in the Multiple Provider Instances section: https://www.terraform.io/docs/configuration/providers.html
Read my comment ...
It basically means that you should keep AWS profiles, regions, and the like out of your Terraform code as much as possible and treat them as configuration, as follows:
terraform {
  required_version = "1.0.1"
  required_providers {
    aws = {
      version = ">= 3.56.0"
      source  = "hashicorp/aws"
    }
  }
  backend "s3" {}
}

provider "aws" {
  region  = var.region
  profile = var.profile
}
Then use tfvars configuration files:
cat cnf/env/spe/prd/tf/03-static-website.backend-config.tfvars
profile = "prd-spe-rcr-web"
region = "eu-north-1"
bucket = "prd-bucket-spe"
foobar = "baz"
which you pass to the terraform plan and apply calls as follows:
terraform -chdir=$tf_code_path plan -var-file=cnf/env/spe/prd/tf/03-static-website.backend-config.tfvars
terraform -chdir=$tf_code_path apply -var-file=cnf/env/spe/prd/tf/03-static-website.backend-config.tfvars -auto-approve
As a rule of thumb you SHOULD always separate your code and configuration; the more they are mixed, the deeper you will get into trouble. This applies to any programming language or project. Some wise heads will argue that Terraform code is in itself configuration, but no, it is not: the Terraform code is the declarative source code used to provision the infrastructure that your application source code runs on.
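A detail implied but not spelled out above: with the empty backend "s3" {} block, the backend settings are supplied once, at init time, via -backend-config (only keys the s3 backend recognizes, such as bucket, region, and profile, may appear in a file passed this way):
terraform -chdir=$tf_code_path init -backend-config=cnf/env/spe/prd/tf/03-static-website.backend-config.tfvars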

alexa skill local could not write to dynamodb

I am writing a Node.js skill using ask-sdk and using alexa-skill-local to test the endpoint. I need to persist data to DynamoDB in one of the handlers, but I keep getting a "missing region" error. Please find my code below:
'use strict';
// use 'ask-sdk' if standard SDK module is installed
const Alexa = require('ask-sdk');
const { launchRequestHandler, HelpIntentHandler, CancelAndStopIntentHandler, SessionEndedRequestHandler } = require('./commonHandlers');
const ErrorHandler = {
  canHandle() {
    return true;
  },
  handle(handlerInput, error) {
    return handlerInput.responseBuilder
      .speak('Sorry, I can\'t understand the command. Please say again.')
      .reprompt('Sorry, I can\'t understand the command. Please say again.')
      .getResponse();
  },
};
////////////////////////////////
// Code for the handlers here //
////////////////////////////////
exports.handler = Alexa.SkillBuilders
  .standard()
  .addRequestHandlers(
    launchRequestHandler,
    HelpIntentHandler,
    CancelAndStopIntentHandler,
    SessionEndedRequestHandler,
    ErrorHandler
  )
  .withTableName('devtable')
  .withDynamoDbClient()
  .lambda();
And in one of the handler I am trying to get persisted attributes like below:
handlerInput.attributesManager.getPersistentAttributes().then((data) => {
  console.log('--- the attributes are ----', data);
});
But I keep getting the following error:
(node:12528) UnhandledPromiseRejectionWarning: AskSdk.DynamoDbPersistenceAdapter Error: Could not read item (amzn1.ask.account.AHJECJ7DTOPSTT25R36BZKKET4TKTCGZ7HJWEJEBWTX6YYTLG5SJVLZH5QH257NFKHXLIG7KREDKWO4D4N36IT6GUHT3PNJ4QPOUE4FHT2OYNXHO6Z77FUGHH3EVAH3I2KG6OAFLV2HSO3VMDQTKNX4OVWBWUGJ7NP3F6JHRLWKF2F6BTWND7GSF7OVQM25YBH5H723VO123ABC) from table (EucerinSkinCareDev): Missing region in config
at Object.createAskSdkError (E:\projects\nodejs-alexa-sdk-v2-eucerin-skincare-dev\node_modules\ask-sdk-dynamodb-persistence-adapter\dist\utils\AskSdkUtils.js:22:17)
at DynamoDbPersistenceAdapter.<anonymous> (E:\projects\nodejs-alexa-sdk-v2-eucerin-skincare-dev\node_modules\ask-sdk-dynamodb-persistence-adapter\dist\attributes\persistence\DynamoDbPersistenceAdapter.js:121:45)
Can we read and write attributes from DynamoDB using alexa-skill-local? Do we need a different setup to achieve this?
Thanks
I know that this is a really old topic, but I had the same problem a few days ago, and I'm going to explain how I got it to work.
You have to download DynamoDB Local and follow the instructions from here.
Once you have configured your local DynamoDB and checked that it is working, you have to pass it through your code to the DynamoDbPersistenceAdapter constructor.
Your code should look similar to:
var awsSdk = require('aws-sdk');
var myDynamoDB = new awsSdk.DynamoDB({
  endpoint: 'http://localhost:8000', // If you change the default url, change it here
  accessKeyId: <your-access-key-id>,
  secretAccessKey: <your-secret-access-key>,
  region: <your-region>,
  apiVersion: 'latest'
});

const { DynamoDbPersistenceAdapter } = require('ask-sdk-dynamodb-persistence-adapter');
return new DynamoDbPersistenceAdapter({
  tableName: tableName || 'my-table-name',
  createTable: true,
  dynamoDBClient: myDynamoDB
});
Where <your-access-key-id>, <your-secret-access-key> and <your-region> are defined in your AWS config and credentials files.
The next step is to launch your server with the alexa-skill-local command as always.
Hope this will be helpful! =)
Presumably you have an AWS config profile that your skill is using when running locally.
You need to edit the AWS config file (~/.aws/config) and set a default region (e.g. us-east-1) there. The region should match the region where your table exists.
Alternatively, if you want to be able to run completely isolated, you may need to write some conditional logic and swap the DynamoDB client for one targeting an instance of DynamoDB Local running on your machine.
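A minimal sketch of that swap, assuming the standard skill builder from the question and DynamoDB Local on its default port (the environment check and region are illustrative choices):
'use strict';
const Alexa = require('ask-sdk');
const AWS = require('aws-sdk');

// AWS_EXECUTION_ENV is set by the Lambda runtime, so its absence is a
// reasonable signal that the skill is running locally.
const runningLocally = !process.env.AWS_EXECUTION_ENV;

const dynamoDbClient = new AWS.DynamoDB(
  runningLocally
    ? { endpoint: 'http://localhost:8000', region: 'us-east-1' } // DynamoDB Local
    : {} // in Lambda, the region comes from the execution environment
);

exports.handler = Alexa.SkillBuilders
  .standard()
  .addRequestHandlers(/* ...handlers as in the question... */)
  .withTableName('devtable')
  .withAutoCreateTable(true) // convenient for a fresh local table
  .withDynamoDbClient(dynamoDbClient)
  .lambda();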

Access Couchbase cluster from config file (sbt)

I'm pretty new to Couchbase and am trying to access a cluster from my config file. I'm not 100% sure how to do this.
The general framework I have is:
couchbase {
  buckets = [{
    host = // string
    port = // string
    ...
  }]
  servers = [{
    uri = // node1
    uri = // node2
    uri = // node3
    ...
  }]
}
Is this the right way to do it? Or, am I totally missing something?
The couchbase-scala project may be a starting point for you.
It uses Typesafe Config to define and load configuration information.
See CBCluster.scala to see how the configuration is used to connect to a Couchbase cluster.
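As a side note on the config sketch in the question: in HOCON (the format Typesafe Config reads), repeating a uri key inside one object makes later values override earlier ones, so a list of nodes is better expressed as an array. A rough shape, with illustrative key names rather than couchbase-scala's actual schema:
couchbase {
  # one array of node URIs instead of repeated uri keys
  servers = ["http://node1:8091", "http://node2:8091", "http://node3:8091"]
  buckets = [
    { name = "default", host = "node1", port = "8091" }
  ]
}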
