Data Driven Approach with JMeter

I am using the CSV Data Set Config element to read a CSV file. My requirement is to read a line only if the RunTest column value is "Y"; otherwise I want to skip it.
JSON Body Data:
{
  "UserName": "${Username}",
  "Password": "${Password}"
}
CSV File Content:
Username,Password,RunTest
testuser,test#123,Y
testuser1,test#12,N

Add an If Controller just before the request. In its Condition field, use the following syntax:
"${RunTest}" == "Y"
At run time, if the value is Y, your HTTP request is triggered; otherwise it is skipped.
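As a side note, the If Controller documentation in recent JMeter versions advises against plain JavaScript conditions for performance reasons and recommends the __jexl3 (or __groovy) function with "Interpret Condition as Variable Expression?" ticked, e.g.:
${__jexl3("${RunTest}" == "Y")}
This evaluates to true/false without spinning up the JavaScript engine on every iteration.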

Related

Why am I not getting the NULL values when using FILTER to remove CSV Headers in PIG?

I have this data below in a .csv file:
Needed_values,TEMP,Desc
,022.3,NewYork
3,022.30,India
,027.0,Australia
,027.00,Russia
1,027.1,Austria
,027.10,Norway
,036.2,Hungary
,036.20,Lithunia
2,785.52,Nigeria
I saw in one of the StackOverflow questions a way to remove the header using FILTER. But when I load this file in my Pig script and use FILTER to remove the header of my CSV, all the NULL values under Needed_values also get removed!
LOAD_DATA = LOAD 'DATA.csv' Using PigStorage(',') as
(
NEEDED_VALUES:chararray,
TEMP:chararray,
DESC:chararray
);
FILTER_HEADER = FILTER LOAD_DATA BY NEEDED_VALUES != 'Needed_values';
ACTUAL OUTPUT:
(3,022.30,India)
(1,027.1,Austria)
(2,785.52,Nigeria)
I'm expecting the output to include everything except the header row (Needed_values,TEMP,Desc):
,022.3,NewYork
3,022.30,India
,027.0,Australia
,027.00,Russia
1,027.1,Austria
,027.10,Norway
,036.2,Hungary
,036.20,Lithunia
2,785.52,Nigeria
In Pig, a comparison with a null value evaluates to null rather than true, so the rows with empty Needed_values do not pass the filter. Change the filter to:
FILTER_HEADER = FILTER LOAD_DATA BY NEEDED_VALUES != 'Needed_values' OR NEEDED_VALUES IS NULL;
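As an aside, if the piggybank jar is available you can skip the header row at load time instead and drop the filter entirely; a sketch, assuming piggybank's CSVExcelStorage loader (the jar path is illustrative; its SKIP_INPUT_HEADER option drops the first line of each input file):
REGISTER piggybank.jar;
LOAD_DATA = LOAD 'DATA.csv'
    USING org.apache.pig.piggybank.storage.CSVExcelStorage(',', 'NO_MULTILINE', 'UNIX', 'SKIP_INPUT_HEADER')
    AS (NEEDED_VALUES:chararray, TEMP:chararray, DESC:chararray);
-- no header filter needed, so the null rows are preserved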

Is there a way to input multiple environment variables into a url for testing on Postman?

I am testing an endpoint in Postman using a URL like this: {{api_url}}/stackoverflow/help/{{customer_id}}/{{client_id}}.
I have the api_url, customer_id, and client_id stored in my environment variables. I would like to test multiple customer_id and client_id values without having to change the environment variables manually each time. I created a CSV to store a list of customer_id values and one to store client_id values. When I go to run the collection, it only allows me to add one file. Is there another way to do this if I want to iterate through my tests to automate them?
You can add both customer_id and client_id to one CSV file. Postman will iterate n times (n = number of CSV data rows, excluding the header).
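For example, a single data file (hypothetical values) would look like this, with Postman reading one row per iteration:
customer_id,client_id
101,abc
102,def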
Alternatively, you can use postman.setNextRequest to control the flow. The code below re-runs the request once for each value in the arrays.
URL:
{{api_url}}/stackoverflow/help/{{customer_id}}/{{client_id}}
Now add this pre-request script:
// read back the working arrays; on the first iteration they are undefined
const tempArraycustomer_id = pm.variables.get("tempArraycustomer_id")
const tempArrayclient_id = pm.variables.get("tempArrayclient_id")
// modify these defaults to the values you want to test
const arrcustomer_id = tempArraycustomer_id ? tempArraycustomer_id : ["value1", "value2", "value3"]
const arrclient_id = tempArrayclient_id ? tempArrayclient_id : ["value1", "value2", "value3"]
// take the next value from each array and expose it to the URL variables
pm.variables.set("customer_id", arrcustomer_id.pop())
pm.variables.set("client_id", arrclient_id.pop())
// store the shrunken arrays for the next iteration
pm.variables.set("tempArraycustomer_id", arrcustomer_id)
pm.variables.set("tempArrayclient_id", arrclient_id)
// keep re-running this request until the array is exhausted
if (arrcustomer_id.length !== 0) {
    postman.setNextRequest(pm.info.requestName)
}
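Note that pm.variables are local to the running collection, so the shrinking arrays survive between iterations only while the runner is executing; if you need the values to persist across runs, store them in an environment or collection variable with JSON.stringify and JSON.parse them on read. Also keep both arrays the same length, since the loop condition only checks arrcustomer_id.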

Producing files in dagster without caring about the filename

In the dagster tutorial, in the Materializations section, we choose a filename (sorted_cereals_csv_path) for our intermediate output and then yield it as a materialization:
@solid
def sort_by_calories(context, cereals):
    # Sort the data (removed for brevity)
    sorted_cereals_csv_path = os.path.abspath(
        'calories_sorted_{run_id}.csv'.format(run_id=context.run_id)
    )
    with open(sorted_cereals_csv_path, 'w') as fd:
        writer = csv.DictWriter(fd, fieldnames)
        writer.writeheader()
        writer.writerows(sorted_cereals)
    yield Materialization(
        label='sorted_cereals_csv',
        description='Cereals data frame sorted by caloric content',
        metadata_entries=[
            EventMetadataEntry.path(
                sorted_cereals_csv_path, 'sorted_cereals_csv_path'
            )
        ],
    )
    yield Output(None)
However, this relies on the local filesystem being available (which may not be true), the file will likely get overwritten by later runs (which is not what I want), and it forces us to come up with a filename that will never be used.
What I'd like to do in most of my solids is just say "here is a file object, please store it for me", without concerning myself with where it's going to be stored. Can I materialize a file without considering all these things? Should I use Python's tempfile facility for this?
Actually it seems this is answered in the output_materialization example.
You basically define a type:
@usable_as_dagster_type(
    name='LessSimpleDataFrame',
    description='A more sophisticated data frame that type checks its structure.',
    input_hydration_config=less_simple_data_frame_input_hydration_config,
    output_materialization_config=less_simple_data_frame_output_materialization_config,
)
class LessSimpleDataFrame(list):
    pass
This type has an output_materialization strategy that reads the config:
def less_simple_data_frame_output_materialization_config(
    context, config, value
):
    csv_path = os.path.abspath(config['csv']['path'])
    # Save data to this path
And you specify this path in the config:
execute_pipeline(
    output_materialization_pipeline,
    {
        'solids': {
            'sort_by_calories': {
                'outputs': [
                    {'result': {'csv': {'path': 'cereal_out.csv'}}}
                ],
            }
        }
    },
)
You still have to come up with a filename for each intermediate output, but you can do it in the config, which can differ per run, instead of defining it in the pipeline itself.
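The same per-run choice can also be expressed as run config YAML if you launch the pipeline from Dagit rather than from code; a minimal sketch mirroring the dict above (solid and output names taken from the example):
solids:
  sort_by_calories:
    outputs:
      - result:
          csv:
            path: cereal_out.csv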

How to handle inner Json when using JsonOutputter

I'm converting some CSV files into JSON using the JsonOutputter. In the CSV files I have a field containing JSON, like this (the pipe character is the delimiter):
...|{ "type":"Point", "coordinates":[ 18.7726, 74.5091 ] }|...
When it's output to JSON, the result looks like this:
"Location": "{ \"type\":\"Point\", \"coordinates\":[ 18.7726, 74.5091 ] }"
I would like to get rid of the outer quotes to make the JSON look like this:
"Location": { "type":"Point", "coordinates":[ 18.7726, 74.5091 ] }
What is the best way to accomplish this? The output JSON will be stored in Cosmos DB, so I guess the "cleaning up" of the JSON could be done either in U-SQL or in Cosmos DB?
The sample outputter only generates flat JSON. Since U-SQL does not have a JSON datatype, any string value has to be escaped to be a valid JSON string value.
You can write your own custom outputter that, for example, takes SqlMap instances for nested values and outputs them as nested JSON, or, if you know that some strings in the rowsets are really JSON and not just strings, serializes them without the quotes.
If JsonOutputter is not the only choice, we could also convert the CSV file to JSON with our own custom code.
I tested it with the following CSV file:
number|Location
1|{ "type":"Point", "coordinates":[ 13.7726, 73.5091 ] }
2|{ "type":"Point", "coordinates":[ 14.7726, 74.5091 ] }
Please give the following code a try; it works correctly on my side:
// requires System.IO, System.Linq, Newtonsoft.Json and Newtonsoft.Json.Linq
var lines = File.ReadAllText(@"C:\Tom\tomtest.csv").Replace("\r", "").Split('\n');
var csv = lines.Select(l => l.Split('|')).ToList();
var headers = csv[0];
// zip each data row with the headers, parsing the Location field as real JSON
var dicts = csv.Skip(1)
    .Select(row => headers.Zip(row, Tuple.Create).ToDictionary(p => p.Item1, p => p.Item2))
    .Select(x => new
    {
        number = x["number"],
        location = JObject.Parse(x["Location"])
    });
string json = JsonConvert.SerializeObject(dicts);
Console.WriteLine(json);
Test result:
[{"number":"1","location":{"type":"Point","coordinates":[13.7726,73.5091]}},{"number":"2","location":{"type":"Point","coordinates":[14.7726,74.5091]}}]

Nginx-redis module returning string length along with the value from Redis

I am using the redis2-nginx-module to serve HTML content stored as a value in Redis. The following nginx config gets the value for a key from Redis:
redis2_query get $fullkey;
redis2_pass localhost:6379;
#default_type text/html;
When the URL is hit, the following unwanted prefix is rendered along with the value for that key:
$14
How do I remove this unwanted output? Also, if the key passed as an argument doesn't exist in Redis, how do I detect this and display some default page instead?
(Here's a similar question on ServerFault)
There's no way to do this with just the redis2 module, as it always returns the raw Redis protocol response; the $14 you see is the bulk-reply header announcing a 14-byte value.
If you only need GET and SET commands you may try HttpRedisModule (redis_pass). If you need something fancier, like hashes, you should probably try filtering the raw response from Redis with Lua, e.g. something along the lines of:
content_by_lua '
    -- fetch the raw Redis reply from an internal location
    local res = ngx.location.capture("/redis",
        { args = { key = ngx.var.fullkey } }
    )
    local body = res.body
    -- strip the first protocol line (e.g. "$14\r\n") and print the payload
    local s, e = string.find(body, "\r\n", 1, true)
    ngx.print(string.sub(body, e + 1))
';
(Sorry, the code's untested, don't have an OpenResty instance at hand.)
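The capture above assumes an internal location that proxies to Redis; a sketch (the location name and the set_unescape_uri usage are illustrative, and set_unescape_uri comes from the set-misc-nginx-module):
location = /redis {
    internal;
    set_unescape_uri $fullkey $arg_key;
    redis2_query get $fullkey;
    redis2_pass localhost:6379;
}
For the missing-key case, Redis answers a GET on a nonexistent key with the nil bulk reply $-1, so you can test whether body starts with "$-1" in the Lua block and, for example, ngx.exec("/default_page") instead of printing.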
