How to connect to DynamoDB using the Karate framework [duplicate] - amazon-dynamodb

I want to know whether Karate supports the Neo4j database. If yes, an example feature would be helpful.

Karate can call any Java code, so indirectly you should be able to do anything you need.
Please look at this JDBC example, which will get you started: dogs.feature. You will need to write a little bit of Java code (one time only), so if you don't have that skill, please ask someone to help; a rough sketch of such a helper follows the feature snippet below.
# use jdbc to validate
* def config = { username: 'sa', password: '', url: 'jdbc:h2:mem:testdb', driverClassName: 'org.h2.Driver' }
* def DbUtils = Java.type('com.intuit.karate.demo.util.DbUtils')
* def db = new DbUtils(config)
# since the DbUtils returns a Java Map, it becomes normal JSON here !
# which means that you can use the full power of Karate's 'match' syntax
* def dogs = db.readRows('SELECT * FROM DOGS')
* match dogs contains { ID: '#(id)', NAME: 'Scooby' }
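For reference, a minimal sketch of what such a helper class could look like, assuming Spring's JdbcTemplate is on the classpath. The demo project ships its own DbUtils; the code below is only illustrative.

package com.intuit.karate.demo.util;

import java.util.List;
import java.util.Map;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class DbUtils {

    private final JdbcTemplate jdbc;

    // the config keys match the JSON passed in from the feature file
    public DbUtils(Map<String, Object> config) {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName((String) config.get("driverClassName"));
        dataSource.setUrl((String) config.get("url"));
        dataSource.setUsername((String) config.get("username"));
        dataSource.setPassword((String) config.get("password"));
        jdbc = new JdbcTemplate(dataSource);
    }

    // each row comes back as a Map, which Karate treats as plain JSON
    public List<Map<String, Object>> readRows(String query) {
        return jdbc.queryForList(query);
    }
}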


How to retrieve data from a Python function and use it in an EMR operator

Airflow version: 2.0.2
I am trying to create an EMR cluster by retrieving data from AWS Secrets Manager.
I am writing an Airflow DAG, and my task is to get data from this get_secret function and use it in SPARK_STEPS:
import json

import boto3
from airflow.models import Variable

def get_secret():
    secret_name = Variable.get("secret_name")
    region_name = Variable.get("region_name")
    # Create a Secrets Manager client
    session = boto3.session.Session()
    client = session.client(service_name='secretsmanager', region_name=region_name)
    account_id = boto3.client('sts').get_caller_identity().get('Account')
    try:
        get_secret_value_response = client.get_secret_value(SecretId=secret_name)
        if 'SecretString' in get_secret_value_response:
            secret_str = get_secret_value_response['SecretString']
            secret = json.loads(secret_str)
            airflow_path = secret["airflow_path"]
            return airflow_path
    ...
I need to use the "airflow_path" return value in the SPARK_STEPS below:
SPARK_STEPS = [
    {
        'Name': 'Spark-Submit Command',
        'ActionOnFailure': 'CONTINUE',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': [
                'spark-submit',
                '--py-files',
                's3://' + airflow_path + '-pyspark/pitchbook/config.zip,s3://' + airflow_path + '-pyspark/pitchbook/jobs.zip,s3://' + airflow_path + '-pyspark/pitchbook/DDL.zip',
                's3://' + airflow_path + '-pyspark/pitchbook/main.py'
            ],
        },
    },
]
I saw on the internet that I need to use XCom. Is this right? And do I need to run this function in a PythonOperator first and then get the value? Please provide an example, as I am a newbie.
Thanks for your help.
Xi
Yes, if you would like to pass dynamic values, leveraging XCom push/pull is the easier route.
Use a PythonOperator to push the data into XCom, then pull it from the EMR operator; a sketch follows the links below.
See these reference implementations:
https://github.com/apache/airflow/blob/7fed7f31c3a895c0df08228541f955efb16fbf79/airflow/providers/amazon/aws/example_dags/example_emr.py
https://github.com/apache/airflow/blob/7fed7f31c3a895c0df08228541f955efb16fbf79/airflow/providers/amazon/aws/example_dags/example_emr.py#L108
https://www.startdataengineering.com/post/how-to-submit-spark-jobs-to-emr-cluster-from-airflow/
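A minimal sketch of how this might fit together, assuming Airflow 2.0.x with the Amazon provider package installed. The DAG id, task ids, the my_dag_helpers module and the create_emr_cluster task it pulls the cluster id from are all illustrative; get_secret is the function from the question.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.operators.emr_add_steps import EmrAddStepsOperator

from my_dag_helpers import get_secret  # hypothetical module holding the question's get_secret()

# Jinja expression that pulls the pushed value back out of XCom at run time
AIRFLOW_PATH = "{{ ti.xcom_pull(task_ids='get_airflow_path') }}"

SPARK_STEPS = [
    {
        'Name': 'Spark-Submit Command',
        'ActionOnFailure': 'CONTINUE',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': [
                'spark-submit',
                '--py-files',
                's3://' + AIRFLOW_PATH + '-pyspark/pitchbook/config.zip',
                's3://' + AIRFLOW_PATH + '-pyspark/pitchbook/main.py',
            ],
        },
    },
]

with DAG('emr_secret_example', start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:

    # returning a value from the callable pushes it into XCom automatically
    get_path = PythonOperator(
        task_id='get_airflow_path',
        python_callable=get_secret,
    )

    # 'steps' and 'job_flow_id' are templated fields, so the xcom_pull
    # expressions above are rendered when the task runs
    add_steps = EmrAddStepsOperator(
        task_id='add_steps',
        job_flow_id="{{ ti.xcom_pull(task_ids='create_emr_cluster', key='return_value') }}",
        steps=SPARK_STEPS,
    )

    get_path >> add_steps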

Initialising a driver instance with callSingle does not work for automated UI tests [duplicate]

I was trying to find a way to launch all features in Karate testing through Maven, using an external variable to set up the browser (with a local webdriver or using a Selenium grid).
So something like:
mvn test -Dbrowser=chrome (or firefox, safari, etc)
or using a Selenium grid:
mvn test -Dbrowser=chrome (or firefox, safari, etc) -Dgrid="grid url"
With Cucumber and Java this was quite simple using a singleton for setting up a global webdriver that was then used in all tests. In this way I could run the tests with different local or remote webdrivers.
In Karate I tried different solutions; the last one was to:
define a variable "browser" in the Karate config file
use the variable "browser" in a single feature "X" in which I set up only the Karate driver
from all the other features, use callonce to call the feature "X" and re-use that driver
but it didn't work, and to be honest it doesn't seem to me to be the right approach.
Probably being able to set the Karate driver from a JavaScript function inside the features is the right way, but I was not able to find a way to do that.
Another problem I found with Karate is differentiating the behavior when using a local or a remote webdriver, as they are set up in different ways in the feature files.
So, has anyone had the same needs, and how did you solve this?
Following the suggestions of Peter Thomas, I used this karate-config.js:
function fn() {
    // browser settings, if not set it takes chrome
    var browser = karate.properties['browser'] || 'chrome';
    karate.log('the browser set is: ' + browser + ', default: "chrome"');
    // grid flag, if not set it takes false. The grid url is in this format http://localhost:4444/wd/hub
    var grid_url = karate.properties['grid_url'] || false;
    karate.log('the grid url set is: ' + grid_url + ', default: false');
    // configurations.
    var config = {
        host: 'http://httpstat.us/'
    };
    if (browser == 'chrome') {
        if (!grid_url) {
            karate.configure('driver', { type: 'chromedriver', executable: 'chromedriver' });
            karate.log("Selected Chrome");
        } else {
            karate.configure('driver', { type: 'chromedriver', start: false, webDriverUrl: grid_url });
            karate.log("Selected Chrome in grid");
        }
    } else if (browser == 'firefox') {
        if (!grid_url) {
            karate.configure('driver', { type: 'geckodriver', executable: 'geckodriver' });
            karate.log("Selected Firefox");
        } else {
            karate.configure('driver', { type: 'geckodriver', start: false, webDriverUrl: grid_url });
            karate.log("Selected Firefox in grid");
        }
    }
    return config;
}
In this way I was able to call the test suite specifying the browser to use directly from the command line (to be used in a Jenkins pipeline):
mvn clean test -Dbrowser=firefox -Dgrid_url=http://localhost:4444/wd/hub
Here are a couple of principles. Karate is responsible for starting the driver (the equivalent of the Selenium WebDriver). All you need to do is set up configure driver as described here: https://github.com/intuit/karate/tree/master/karate-core#configure-driver
Finally, depending on your environment, just switch the driver config. This can easily be done in karate-config.js actually (globally) instead of in each feature file:
function fn() {
    var config = {
        baseUrl: 'https://qa.mycompany.com'
    };
    if (karate.env == 'chrome') {
        karate.configure('driver', { type: 'chromedriver', start: false, webDriverUrl: 'http://somehost:9515/wd/hub' });
    }
    return config;
}
And on the command-line:
mvn test -Dkarate.env=chrome
I suggest you get familiar with Karate's configuration: https://github.com/intuit/karate#configuration - it actually ends up being simpler than typical Java / Maven projects.
Another way is to set variables in the karate-config.js and then use them in feature files.
* configure driver = { type: '#(myVariableFromConfig)' }
Keep these principles in mind:
Any driver instances created by a "top level" feature will be available to "called" features
You can even call a "common" feature, create the driver there, and it will be set in the "calling" feature (see the sketch after this list)
Any driver created will be closed when the "top level" feature ends
You don't need any other patterns.
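For example, a minimal sketch of that "common" feature pattern, with the driver configured in karate-config.js as shown above (the URL suffix and locators are purely illustrative):

# common.feature - creates the driver once
Feature: shared browser setup

Scenario:
* driver baseUrl + '/login'

# login.feature - re-uses the driver created by the common feature
Feature: login test

Background:
* call read('common.feature')

Scenario: user can log in
* input('#username', 'admin')
* input('#password', 'secret')
* click('#login-button')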
EDIT: there's some more details in the documentation: https://github.com/intuit/karate/tree/develop/karate-core#code-reuse
And for parallel execution or trying to re-use a single browser for all tests, refer: https://stackoverflow.com/a/60387907/143475

Using a DynamoDB client in my custom ask-sdk webhook

I have built my own custom webhook using ask-sdk, and it is deployed on my EC2 instance. Now I want to use DynamoDB via DynamoDbPersistenceAdapter,
but I cannot find any reference for how to do that.
DynamoDbPersistenceAdapter will need AWS keys, a table name and some other DynamoDB details, but where do I initialize them? I found some code, but it doesn't cover that:
persistenceAdapter = new DynamoDbPersistenceAdapter({
    tableName: 'global_attr_table',
    createTable: true,
    partitionKeyGenerator: keyGenerator
});
This can probably be solved by adding environment variables and by setting up an AWS CLI profile.
Here's how you set up an AWS CLI profile:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
Once you have a profile set up with your AWS access information, you can export environment variables in your command line or in a shell script:
$> export AWS_PROFILE=YourNewAWSCLIProfileName
$> export AWS_REGION=us-east-1
$> export AWS_DEFAULT_REGION=us-east-1
You can check that these variables are set by typing:
$> echo $AWS_PROFILE
$> echo $AWS_REGION
$> echo $AWS_DEFAULT_REGION
This is what I use. If for some reason that doesn't work, here is some research into how you might add a DynamoDB client.
I was trying to solve a different problem, so let me address yours as I walk through mine.
In node_modules/ask-sdk/dist/skill/factory/StandardSkillFactory.js there is a reference to something similar to what you have above:
new ask_sdk_dynamodb_persistence_adapter_1.DynamoDbPersistenceAdapter({
    tableName: thisTableName,
    createTable: thisAutoCreateTable,
    partitionKeyGenerator: thisPartitionKeyGenerator,
    dynamoDBClient: thisDynamoDbClient,
})
I believe you need to create a DynamoDB client instance, which I found referenced here in the AWS SDK docs:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/dynamodb-example-document-client.html
You'd have to set your own service:
In: node_modules/aws-sdk/lib/dynamodb/document_client.js
/**
 * Creates a DynamoDB document client with a set of configuration options.
 *
 * @option options params [map] An optional map of parameters to bind to every
 *   request sent by this service object.
 * @option options service [AWS.DynamoDB] An optional pre-configured instance
 *   of the AWS.DynamoDB service object to use for requests. The object may
 *   have bound parameters used by the document client.
 * @option options convertEmptyValues [Boolean] set to true if you would like
 *   the document client to convert empty values (0-length strings, binary
 *   buffers, and sets) to be converted to NULL types when persisting to
 *   DynamoDB.
 * @see AWS.DynamoDB.constructor
 *
 */
constructor: function DocumentClient(options) {
    var self = this;
    self.options = options || {};
    self.configure(self.options);
},

/**
 * @api private
 */
configure: function configure(options) {
    var self = this;
    self.service = options.service;
    self.bindServiceObject(options);
    self.attrValue = options.attrValue =
        self.service.api.operations.putItem.input.members.Item.value.shape;
},

/**
 * @api private
 */
bindServiceObject: function bindServiceObject(options) {
    var self = this;
    options = options || {};
    if (!self.service) {
        self.service = new AWS.DynamoDB(options);
    } else {
        var config = AWS.util.copy(self.service.config);
        self.service = new self.service.constructor.__super__(config);
        self.service.config.params =
            AWS.util.merge(self.service.config.params || {}, options.params);
    }
},
I'm not sure what those options might look like.
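Putting the pieces together, a rough sketch of wiring a pre-configured client into the adapter might look like the following. This assumes the aws-sdk v2 and ask-sdk-dynamodb-persistence-adapter packages; the region and table name are illustrative, and credentials come from the environment / CLI profile set up above.

const AWS = require('aws-sdk');
const { DynamoDbPersistenceAdapter } = require('ask-sdk-dynamodb-persistence-adapter');

// the client picks up credentials and region from the environment / AWS CLI profile
const dynamoDbClient = new AWS.DynamoDB({
    apiVersion: 'latest',
    region: process.env.AWS_REGION || 'us-east-1'
});

const persistenceAdapter = new DynamoDbPersistenceAdapter({
    tableName: 'global_attr_table',
    createTable: true,
    dynamoDBClient: dynamoDbClient
});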

Firebase auth token from Graphcool

Can I generate a custom auth token, for use with a third party, with a resolver in graph.cool? Something like this?
type FirebaseTokenPayload {
  token: String!
}

extend type Query {
  FirebaseToken(userIdentifier: String!): FirebaseTokenPayload
}

const fb = require('myNodeFirebaseAuthLib')
module.exports = event => fb.generateTokenWithPayload({ id: event.data.userId })
Authentication required - restrict who can read data in fields. Permission query:
query ($user_id: ID!, $node_firebaseIdentifier: String) {
  SomeUserExists(filter: {
    id: $user_id,
    firebaseIdentifier: $node_firebaseIdentifier
  })
}
--
I think this question boils down to two parts:
"Is it possible to install node modules in the graph.cool instance, or for that sort of thing do we need to use a webhook?" If it must be a webhook, what is the flow of identity verification and how do I pass the payload parameters?
"can we add permissions queries and authentication to resolvers?"
Notes, addendums:
According to this alligator.io blog post, it seems that using the Graphcool framework you can install node modules! So I wouldn't need to use a webhook. However, that is with an ejected app, and I lose auth0 authentication that way: the template does not produce a createUser and signinUser that work with the same auth0 data that the integration offers.
I forgot to post the answer to this: I had to eject Graphcool; I could not use any of the node_modules I tried in my custom functions.

Symfony Workflow initial_place parameter not working

workflow.yaml:
framework:
    workflows:
        test_workflow:
            type: 'workflow'
            marking_store:
                type: 'single_state'
                arguments:
                    - 'currentPlace'
            supports:
                - App\Entity\Call
            initial_place: draft
            places:
                - draft
                - ok
                - notok
            transitions:
                go:
                    from: draft
                    to: ok
                reject:
                    from: draft
                    to: notok
My Controller:
public function twf(Registry $workflows)
{
    $c = new Call();
    $workflow = $workflows->get($c);
    return $this->render('page/twf.html.twig', [
        'cp' => $c->getCurrentPlace()
    ]);
}
It just shows nothing, but after applying the 'go' transition it displays 'ok', which is expected. I wonder why it's not taking the configured initial_place when the Call object is first instantiated!
Any hints?
I think it only sets the state to the initial place after something triggers it; requesting the workflow object is not enough.
Try calling getMarking; the marking should be set after that. You can see the part that sets it here: https://github.com/symfony/workflow/blob/master/Workflow.php#L63
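If that behaviour applies to your Symfony version, a sketch of the controller from the question with that change would be:

public function twf(Registry $workflows)
{
    $c = new Call();
    $workflow = $workflows->get($c);

    // getMarking() should fall back to the configured initial_place when the
    // subject has no marking yet (see the linked source) and write it back
    // through the marking store
    $marking = $workflow->getMarking($c);

    return $this->render('page/twf.html.twig', [
        'cp' => $c->getCurrentPlace()
    ]);
}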
