I'm pretty new to Couchbase and am trying to set up access to a cluster from my config file, but I'm not 100% sure how to do this.
The general framework I have is:
couchbase {
  buckets = [{
    host = // string
    port = // string
    ...
  }]
  servers = [{
    uri = // node1
    uri = // node2
    uri = // node3
    ...
  }]
}
Is this the right way to do it? Or, am I totally missing something?
The couchbase-scala project may be a starting point for you.
It uses Typesafe Config to define and load configuration information.
See CBCluster.scala to see how the configuration is used to connect to a Couchbase cluster.
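One structural note on your sketch: in HOCON, repeating uri = inside a single object simply overwrites the earlier values, so your servers block would end up holding only one node. A list of strings is the usual shape for multiple nodes. A minimal hypothetical sketch of that layout (illustrative only, not couchbase-scala's actual schema):
couchbase {
  buckets = [{
    name = "default"   // bucket name, illustrative
    host = "127.0.0.1" // string
    port = 8091        // int
  }]
  // one entry per node
  servers = ["http://node1:8091", "http://node2:8091", "http://node3:8091"]
}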
I am sending logs to an Azure Event Hub with Serilog (using WriteTo.AzureEventHub(eventHubClient)). After that I run a Filebeat process with the azure module enabled, so these logs are sent to Elasticsearch and I can explore them with Kibana.
The problem I have is that all the information goes into the field "message". I need to separate the information in my logs into different fields to be able to run good queries.
The way I found was to create an ingest pipeline in Kibana and, through a grok processor, separate the fields inside "message" into multiple fields. In filebeat.yml I set the pipeline name, but nothing happens; it seems the pipeline is not being applied:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipeline: "filebeat-otc"
Does anybody know what I am missing? Thanks in advance.
EDIT: I will add an example of my pipeline and my data. In the simulation it works properly:
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "%{TIME:timestamp}\\s%{LOGLEVEL}\\s{[a-zA-Z]*:%{UUID:CorrelationID},[a-zA-Z]*:%{TEXT:OperationTittle},[a-zA-Z]*:%{TEXT:OriginSystemName},[a-zA-Z]*:%{TEXT:TargetSystemName},[a-zA-Z]*:%{TEXT:OperationProcess},[a-zA-Z]*:%{TEXT:LogMessage},[a-zA-Z]*:%{TEXT:ErrorMessage}}"
          ],
          "pattern_definitions": {
            "LOGLEVEL": "\\[[^\\]]*\\]",
            "TEXT": "[a-zA-Z0-9- ]*"
          }
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "15:13:59 [INF] {CorrelationId:83355884-a351-4c8b-af8d-b77c48462f36,OperationTittle:Operation1,OriginSystemName:Fexa,TargetSystemName:Usina,OperationProcess:Testing Log Data,LogMessage:Esto es una buena prueba,ErrorMessage:null}"
      }
    },
    {
      "_source": {
        "message": "20:13:48 [INF] {CorrelationId:8451ee54-efca-40be-91c8-8c8e18e33f58,OperationTittle:null,OriginSystemName:Fexa,TargetSystemName:Donna,OperationProcess:Testing Log Data,LogMessage:null,ErrorMessage:null}"
      }
    }
  ]
}
It seems that when you use a Filebeat module, it creates and uses its own ingest pipeline in Elasticsearch, and the pipeline option in the output is ignored.
So my solution was to modify the index.final_pipeline setting. In Kibana I went to Stack Management / Index Management, found my index there, opened Edit Settings, and set "index.final_pipeline": "the-name-of-my-pipeline".
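The same setting can also be applied through the Elasticsearch API instead of the Kibana UI. A minimal sketch, assuming your index is named filebeat-index (replace it with your actual index):
PUT /filebeat-index/_settings
{
  "index.final_pipeline": "the-name-of-my-pipeline"
}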
I hope this helps somebody.
This was thanks to leandrojmp.
I would like to set up WordPress on AWS Fargate in the container variant (i.e. without EC2 instances) using the AWS CDK.
I have already implemented a working configuration for this purpose. However, it is currently not possible to install themes or upload files in this form, since WordPress runs in one or more Docker containers whose file systems are not shared or persisted.
Here is my current cdk implementation:
AWS-CDK
export class WordpressWebsiteStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // GENERAL
    const vpc = new ec2.Vpc(this, 'Vpc', {
      // 2 is the minimum requirement for a cluster
      maxAzs: 2,
      // only create public subnets in order to prevent AWS from creating
      // a NAT gateway, which causes additional costs.
      // This will create 1 public subnet in each AZ.
      subnetConfiguration: [
        {
          name: 'Public',
          subnetType: ec2.SubnetType.PUBLIC,
        },
      ],
    });

    // DATABASE CONFIGURATION
    // Security Group used for the database
    const wordpressSg = new ec2.SecurityGroup(this, 'WordpressSG', {
      vpc: vpc,
      description: 'Wordpress SG',
    });

    // Database cluster for the wordpress database
    const dbCluster = new rds.DatabaseCluster(this, 'DBluster', {
      clusterIdentifier: 'wordpress-db-cluster',
      instances: 1,
      defaultDatabaseName: DB_NAME,
      engine: rds.DatabaseClusterEngine.AURORA, // TODO: AURORA_MYSQL?
      port: DB_PORT,
      masterUser: {
        username: DB_USER,
        password: cdk.SecretValue.plainText(DB_PASSWORD),
      },
      instanceProps: {
        instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
        vpc,
        securityGroup: wordpressSg,
      },
    });

    // FARGATE CONFIGURATION
    // ECS cluster which will be used to host the Fargate services
    const ecsCluster = new ecs.Cluster(this, 'ECSCluster', {
      vpc: vpc,
    });

    // FARGATE CONTAINER SERVICE
    const fargateService = new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'WordpressFargateService', {
      cluster: ecsCluster, // Required
      desiredCount: 1, // Default is 1
      cpu: 512, // Default is 256
      memoryLimitMiB: 1024, // Default is 512
      // Because we are running tasks using the Fargate launch type in a public subnet, we must choose ENABLED
      // for Auto-assign public IP when we launch the tasks.
      // This allows the tasks to have outbound network access to pull an image.
      // #see https://aws.amazon.com/premiumsupport/knowledge-center/ecs-pull-container-api-error-ecr/
      assignPublicIp: true,
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry(wordpressRegistryName),
        environment: {
          WORDPRESS_DB_HOST: dbCluster.clusterEndpoint.socketAddress,
          WORDPRESS_DB_USER: DB_USER,
          WORDPRESS_DB_PASSWORD: DB_PASSWORD,
          WORDPRESS_DB_NAME: DB_NAME,
        },
      },
    });

    fargateService.service.connections.addSecurityGroup(wordpressSg);
    fargateService.service.connections.allowTo(wordpressSg, ec2.Port.tcp(DB_PORT));
  }
}
Perhaps someone knows how I can set up Fargate via CDK so that the individual WordPress containers share a common volume on which the data is then located? Or maybe there is another elegant solution for this :)
Many thanks in advance :)
Found a solution 🤗
Thanks to the comments in the open GitHub issue and the provided Gist, I was finally able to configure a working solution.
I provided my current solution in this Gist. So feel free to have a look at it, leave some comments, and adapt it if it suits your problem.
I am part of the AWS container service team and I would like to give you a bit of background regarding where we stand. We recently (5/11/2020) announced the integration of Amazon ECS / AWS Fargate with Amazon EFS (Elastic File System). This is the plumbing that will allow you to achieve what you want. You can read more about the theory here and here for a practical example.
The example I linked above uses the AWS CLI simply because CloudFormation support for this feature has not been released yet (stay tuned). Once CFN support is released, CDK will pick it up and, at that point, you will be able to adjust your CDK code to achieve what you want.
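To give a sense of the direction, here is a hypothetical sketch of what the EFS wiring might look like in CDK once support lands. The construct names and the /var/www/html mount path are assumptions (based on the current EFS constructs and the official WordPress image), not a tested configuration:
import * as efs from '@aws-cdk/aws-efs';

// Shared file system for the WordPress containers (sketch only)
const fileSystem = new efs.FileSystem(this, 'WordpressFiles', {
  vpc,
  securityGroup: wordpressSg,
});

// Register the file system as a volume on the task definition
const volumeName = 'wordpress-data';
fargateService.taskDefinition.addVolume({
  name: volumeName,
  efsVolumeConfiguration: {
    fileSystemId: fileSystem.fileSystemId,
  },
});

// Mount it where the WordPress image keeps its content
fargateService.taskDefinition.defaultContainer!.addMountPoints({
  containerPath: '/var/www/html',
  sourceVolume: volumeName,
  readOnly: false,
});

// Allow the tasks to reach EFS on the NFS port
fileSystem.connections.allowDefaultPortFrom(fargateService.service);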
I am writing a Node.js skill using the ask-sdk and using alexa-skill-local to test the endpoint. I need to persist data to DynamoDB in one of the handlers, but I keep getting a "missing region" error. Please find my code below:
'use strict';
// use 'ask-sdk' if standard SDK module is installed
const Alexa = require('ask-sdk');
const { launchRequestHandler, HelpIntentHandler, CancelAndStopIntentHandler, SessionEndedRequestHandler } = require('./commonHandlers');
const ErrorHandler = {
  canHandle() {
    return true;
  },
  handle(handlerInput, error) {
    return handlerInput.responseBuilder
      .speak('Sorry, I can\'t understand the command. Please say again.')
      .reprompt('Sorry, I can\'t understand the command. Please say again.')
      .getResponse();
  },
};
////////////////////////////////
// Code for the handlers here //
////////////////////////////////
exports.handler = Alexa.SkillBuilders
  .standard()
  .addRequestHandlers(
    launchRequestHandler,
    HelpIntentHandler,
    CancelAndStopIntentHandler,
    SessionEndedRequestHandler,
    ErrorHandler
  )
  .withTableName('devtable')
  .withDynamoDbClient()
  .lambda();
In one of the handlers I am trying to get the persisted attributes like below:
handlerInput.attributesManager.getPersistentAttributes().then((data) => {
  console.log('--- the attributes are ----', data);
});
But I keep getting the following error:
(node:12528) UnhandledPromiseRejectionWarning: AskSdk.DynamoDbPersistenceAdapter Error: Could not read item (amzn1.ask.account.AHJECJ7DTOPSTT25R36BZKKET4TKTCGZ7HJWEJEBWTX6YYTLG5SJVLZH5QH257NFKHXLIG7KREDKWO4D4N36IT6GUHT3PNJ4QPOUE4FHT2OYNXHO6Z77FUGHH3EVAH3I2KG6OAFLV2HSO3VMDQTKNX4OVWBWUGJ7NP3F6JHRLWKF2F6BTWND7GSF7OVQM25YBH5H723VO123ABC) from table (EucerinSkinCareDev): Missing region in config
at Object.createAskSdkError (E:\projects\nodejs-alexa-sdk-v2-eucerin-skincare-dev\node_modules\ask-sdk-dynamodb-persistence-adapter\dist\utils\AskSdkUtils.js:22:17)
at DynamoDbPersistenceAdapter.<anonymous> (E:\projects\nodejs-alexa-sdk-v2-eucerin-skincare-dev\node_modules\ask-sdk-dynamodb-persistence-adapter\dist\attributes\persistence\DynamoDbPersistenceAdapter.js:121:45)
Can we read and write attributes from DynamoDB using alexa-skill-local? Do we need some different setup to achieve this?
Thanks
I know that this is a really old topic, but I had the same problem a few days ago, and I'm going to explain how I got it to work.
You have to download DynamoDB Local and follow the instructions from here.
Once you have configured your local DynamoDB and checked that it is working, you have to pass it in your code to the DynamoDbPersistenceAdapter constructor.
Your code should look similar to:
var awsSdk = require('aws-sdk');

var myDynamoDB = new awsSdk.DynamoDB({
  endpoint: 'http://localhost:8000', // If you change the default url, change it here
  accessKeyId: <your-access-key-id>,
  secretAccessKey: <your-secret-access-key>,
  region: <your-region>,
  apiVersion: 'latest'
});

const { DynamoDbPersistenceAdapter } = require('ask-sdk-dynamodb-persistence-adapter');

return new DynamoDbPersistenceAdapter({
  tableName: tableName || 'my-table-name',
  createTable: true,
  dynamoDBClient: myDynamoDB
});
Where <your-access-key-id>, <your-secret-access-key> and <your-region> are defined in your AWS config and credentials files.
The next step is to launch your server with the alexa-skill-local command as always.
Hope this will be helpful! =)
Presumably you have an AWS config profile that your skill is using when running locally.
You need to edit the .config file and set the default region (e.g. us-east-1) there. The region should match the region where your table exists.
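For reference, the default region lives in the AWS config file (typically ~/.aws/config) and looks like this for the default profile:
[default]
region = us-east-1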
Alternatively, if you want to be able to run completely isolated, you may need to write some conditional logic and swap the Dynamo client for one targeting an instance of DynamoDB Local running on your machine, as in the sketch below.
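A minimal sketch of that conditional swap, assuming a LOCAL_DYNAMODB environment variable that you define yourself (it is not part of the ask-sdk):
const awsSdk = require('aws-sdk');
const { DynamoDbPersistenceAdapter } = require('ask-sdk-dynamodb-persistence-adapter');

// Point at DynamoDB Local when the flag is set, at the real service otherwise
const dynamoDbClient = process.env.LOCAL_DYNAMODB
  ? new awsSdk.DynamoDB({ endpoint: 'http://localhost:8000', region: 'us-east-1' })
  : new awsSdk.DynamoDB({ apiVersion: 'latest' });

const persistenceAdapter = new DynamoDbPersistenceAdapter({
  tableName: 'devtable', // the table name used in the question
  createTable: true,
  dynamoDBClient: dynamoDbClient
});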
I have used the following code to set my reset email subject:
Accounts.emailTemplates.resetPassword.subject = function(user, url) {
  var ul = Meteor.absoluteUrl();
  var myArray = ul.split("//");
  var array = myArray[1].split('/');
  return "How to reset your password on " + array[0];
};
I want it to contain the current browser's URL, but that's not happening.
This is what the subject looks like:
How to reset your password on 139.59.9.214
but the desired outcome is:
How to reset your password on someName.com
where someName.com is my URL.
I would recommend handling this a bit differently. Your host name is tied to your environment, and depending on what your production environment looks like, deriving your hostname from the server might not always be the easiest thing to do (especially if you're behind proxies, load balancers, etc.). You could instead look into leveraging Meteor's Meteor.settings functionality, and create a settings file for each environment with a matching hostname setting. For example:
1) Create a settings_local.json file with the following contents:
{
  "private": {
    "hostname": "localhost:3000"
  }
}
2) Create a settings.json file with the following contents:
{
  "private": {
    "hostname": "somename.com"
  }
}
3) Adjust your code to look like:
Accounts.emailTemplates.resetPassword.subject = function (user, url) {
  const hostname = Meteor.settings.private.hostname;
  return `How to reset your password on ${hostname}`;
};
4) When working locally, start meteor like:
meteor --settings=settings_local.json
5) When deploying to production, make sure the contents of your settings.json file are taken into consideration. How you do this depends on how you're deploying to your prod environment. If using mup, for example, it will automatically look for a settings.json file to use in production. MDG's Galaxy will do the same.
I created a migration and ran it. It says it worked fine, but nothing happened. I don't think it is even connecting to my database.
My Migration file:
var util = require("util");

module.exports = {
  up: function(migration, DataTypes, done) {
    migration.createTable('nameOfTheNewTable', {
      attr1: DataTypes.STRING,
      attr2: DataTypes.INTEGER,
      attr3: {
        type: DataTypes.BOOLEAN,
        defaultValue: false,
        allowNull: false
      }
    }).success(function() {
      migration.describeTable('nameOfTheNewTable').success(function(attributes) {
        util.puts("nameOfTheNewTable Schema: " + JSON.stringify(attributes));
        done();
      });
    });
  },
  down: function(migration, DataTypes, done) {
    // logic for reverting the changes
  }
};
My Config.json:
{
  "development": {
    "username": "user",
    "password": "pw",
    "database": "my-db",
    "dialect": "sqlite",
    "host": "localhost"
  }
}
The command:
./node_modules/sequelize/bin/sequelize --migrate --env development
Loaded configuration file "config/config.json".
Using environment "development".
Running migrations...
20130921234513-initial.js
nameOfTheNewTable Schema: {"attr1":{"type":"VARCHAR(255)","allowNull":true,"defaultValue":null},"attr2":{"type":"INTEGER","allowNull":true,"defaultValue":null},"attr3":{"type":"TINYINT(1)","allowNull":false,"defaultValue":false}}
Completed in 8ms
I can run this over and over and the output is always the same. I've tried it on a database which I know to have existing tables, and tried to describe those tables, and still nothing happens.
Am I doing something wrong?
EDIT:
I'm pretty sure I'm not connecting to the db, but try as I might I cannot connect using the migration. I can connect using sqlite3 my-db.sqlite and run commands such as .tables to see the tables I have created previously, but I cannot for the life of me get the "nameOfTheNewTable" table created using a migration. (I want to create indexes in the migration too.) I have tried using "development" and changing values in the config.json, like the host and the database (my-db, ../my-db, my-db.sqlite), etc.
Here's a good example: in the config.json I put "database": "bad-db" and the output from the migration is exactly the same. When it is done, there is no bad-db.sqlite file to be found.
You need to specify the 'storage' parameter in your config.json so that Sequelize knows which file to use as the SQLite DB.
Sequelize defaults to in-memory storage for SQLite, so it's migrating an in-memory database and then exiting, effectively destroying the DB it just migrated.
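A sketch of the asker's config.json with the storage parameter added (the file name is an assumption; point it at the SQLite file you actually use):
{
  "development": {
    "username": "user",
    "password": "pw",
    "database": "my-db",
    "dialect": "sqlite",
    "host": "localhost",
    "storage": "my-db.sqlite"
  }
}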
You most likely have to wait for migration.createTable to finish:
migration.createTable(/* your options */).success(function() {
  migration.describeTable('nameOfTheNewTable').success(function(attributes) {
    util.puts("nameOfTheNewTable Schema: " + JSON.stringify(attributes));
    done();
  });
});