Passing a list of search strings to contains() in a FilterExpression - amazon-dynamodb

Is there any way to pass a list of search strings in the contains() method of FilterExpression in DynamoDb?
Something like below:
search_str = ['value-1', 'value-2', 'value-3']
result = kb_table.scan(
    FilterExpression="contains (title, :titleVal)",
    ExpressionAttributeValues={":titleVal": search_str}
)
For now I can only think of looping through the list and scanning the table once per value (as in the code below), but I think it will be resource heavy.
for item in search_str:
    result += kb_table.scan(
        FilterExpression="contains (title, :titleVal)",
        ExpressionAttributeValues={":titleVal": item}
    )
Any suggestions?

For the above scenario, CONTAINS should be used with an OR condition. When you give a list as input to CONTAINS, DynamoDB checks it against set attributes ("SS", "NS", or "BS"); it does not look for a substring match on a string attribute.
If the target attribute of the comparison is of type String, then the operator checks for a substring match. If the target attribute of the comparison is of type Binary, then the operator looks for a subsequence of the target that matches the input. If the target attribute of the comparison is a set ("SS", "NS", or "BS"), then the operator evaluates to true if it finds an exact match with any member of the set.
Example:
movies1 = "MyMovie"
movies2 = "Big New"
fe1 = Attr('title').contains(movies1)
fe2 = Attr('title').contains(movies2)
response = table.scan(
    FilterExpression=fe1 | fe2
)
Note that the two conditions are combined with the | operator; Python's or would simply return the first condition object rather than building an OR expression.
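If there are more than a couple of values, the same idea can be generalized by building one contains() condition per value and OR-ing them together. A minimal sketch, assuming the kb_table object and search_str list from the question:
from functools import reduce
from boto3.dynamodb.conditions import Attr

search_str = ['value-1', 'value-2', 'value-3']

# One contains() condition per search value, combined with | (OR)
filter_expression = reduce(
    lambda a, b: a | b,
    [Attr('title').contains(s) for s in search_str]
)

# kb_table is the Table resource from the question
result = kb_table.scan(FilterExpression=filter_expression)
This still scans the whole table once, but only once, and pushes all the OR logic into a single FilterExpression.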

A little bit late, but to help people find a solution, here is my method.
Let's assume that in your DB you have an attribute called 'EMAIL' and you want to filter your scan on this EMAIL with a list of values. You can proceed as follows.
import boto3
from boto3.dynamodb.conditions import Attr

list_of_elem = ['mail1#mail.com', 'mail2#mail.com', 'mail3#mail.com']
# set an empty string to build your query
stringquery = ""
# loop over each element in your list
for index, value in enumerate(list_of_elem):
    # add a contains() condition for this mail value
    stringquery = stringquery + f"Attr('EMAIL').contains('{value}')"
    # while the value is not the last element in the list, add the OR operator
    if index < len(list_of_elem) - 1:
        stringquery = stringquery + ' | '

dynamodb = boto3.resource('dynamodb')
tableUser = dynamodb.Table('mytable')
# eval() the query string to parse it into a filter expression
tableUser.scan(
    FilterExpression=eval(stringquery)
)
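A side note, assuming the values here are complete addresses and exact matches are what's wanted: boto3's condition builder also has is_in(), which checks the attribute against a list of values in a single condition and avoids both the string building and the eval() call. A minimal sketch using the same table and attribute names as above:
import boto3
from boto3.dynamodb.conditions import Attr

list_of_elem = ['mail1#mail.com', 'mail2#mail.com', 'mail3#mail.com']

dynamodb = boto3.resource('dynamodb')
tableUser = dynamodb.Table('mytable')

# is_in() translates to the IN operator: EMAIL must equal one of the listed values exactly
response = tableUser.scan(
    FilterExpression=Attr('EMAIL').is_in(list_of_elem)
)
If substring matching is actually required, the contains() conditions can still be OR-ed together directly with the | operator (as in the sketch after the first answer) without going through eval().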

Related

r json mongodb query $in operator syntax error due to double quotes?

I'm building a json query to pass to a mongodb database in R.
In one scenario, I have a vector of dates and I want to query the database to return all records which have a date in the relevant field that matches a date in my vector of dates.
The second scenario is the same as the first, but this time I have a vector of character strings (IDs) and need to return all the records with matching IDs.
I understood the correct way to do this in a json query is to use the $in operator, and then put my vector in an array.
However, when I pass the query to my mongodb database, the exportLogId returns NULL. I'm quite sure that the problem is something to do with how I am representing the $in operator in the final query, since I have very similarly structured queries without the $in operator and they are all working. If I look for just one of my target dates or character strings, I get the desired result.
I followed the mongodb manual here to construct my query, and the only issue I can see is that the $in operator in the output of jsonlite::toJSON() is enclosed in double quotes; whereas I think it might need to be in single quotes (or no quotes at all, but I don't know how to write the syntax for that).
I'm creating my query in two steps:
Create the query as a series of nested lists
Convert the list object to json with jsonlite::toJSON()
Here is my code:
# Load libraries:
library(jsonlite)

# Create list of example dates to query in mongodb format:
sampledates <- c("2022-08-11T00:00:00.000Z",
                 "2022-08-15T00:00:00.000Z",
                 "2022-08-16T00:00:00.000Z",
                 "2022-08-17T00:00:00.000Z",
                 "2022-08-19T00:00:00.000Z")

# Create query as a list object:
query_list_l <- list(filter =
                       # Add where clause:
                       list(where =
                              # Filter results by list of sample dates:
                              list(dateSampleTaken = list('$in' = sampledates),
                                   # Define format of column names and values:
                                   useDbColumns = "true",
                                   dontTranslateValues = "true",
                                   jsonReplaceUndefinedWithNull = "true"),
                            # Define columns to return:
                            fields = c("id",
                                       "updatedAt",
                                       "person.visualId",
                                       "labName",
                                       "sampleIdentifier",
                                       "dateSampleTaken",
                                       "sequence.hasSequence")))

# Convert list object to JSON:
query_json = jsonlite::toJSON(x = query_list_l,
                              pretty = TRUE,
                              auto_unbox = TRUE)
The JSON query now looks like this:
> query_json
{
  "filter": {
    "where": {
      "dateSampleTaken": {
        "$in": ["2022-08-11T00:00:00.000Z", "2022-08-15T00:00:00.000Z", "2022-08-16T00:00:00.000Z", "2022-08-17T00:00:00.000Z", "2022-08-19T00:00:00.000Z"]
      },
      "useDbColumns": "true",
      "dontTranslateValues": "true",
      "jsonReplaceUndefinedWithNull": "true"
    },
    "fields": ["id", "updatedAt", "person.visualId", "labName", "sampleIdentifier", "dateSampleTaken", "sequence.hasSequence"]
  }
}
As you can see, $in is now enclosed in double quotes, even though I put it in single quotes when I created the query as a list object. I have tried replacing with sprintf() but that just adds a lot of backslashes to my query. I also tried:
query_fixed <- gsub(pattern = "\\"\\$\\in\\"",
                    replacement = "\\'$in\\'",
                    x = query_json)
... but this fails with an error.
I would be very grateful to know:
Is the syntax problem that is preventing $in from working actually the double quotes?
If the double quotes are the problem, how do I replace them with single quotes without messing up the JSON format?
UPDATE:
The issue seems to occur when R is passing the query to the database, but I still can't work out exactly why.
If I try the query out in the Loopback explorer in the database, it works, and using the export log ID produced I can then fetch the results with httr::GET() in R. Example query results are shown below (sorry for the hashes - the main point is that you can see the format of the returned values):
[1] "[{\"_id\":\"e59953b6-a106-4b69-9e25-1c54eef5264a\",\"updatedAt\":\"2022-09-12T20:08:39.554Z\",\"dateSampleTaken\":\"2022-08-16T00:00:00.000Z\",\"labName\":\"LNG_REFERENCE_DATA_CATEGORY_LAB_NAME_LAB_A\",\"sampleIdentifier\":\"LS0044-SCV2-PCR\",\"sequence\":{\"hasSequence\":false},\"person\":{\"visualId\":\"C-2022-0002\"}},{\"_id\":\"af5cd9cc-4813-4194-b60b-7d130bae47bc\",\"updatedAt\":\"2022-09-12T20:11:07.467Z\",\"dateSampleTaken\":\"2022-08-17T00:00:00.000Z\",\"labName\":\"LNG_REFERENCE_DATA_CATEGORY_LAB_NAME_LAB_A\",\"sampleIdentifier\":\"LS0061-SCV2-PCR\",\"sequence\":{\"hasSequence\":false},\"person\":{\"visualId\":\"C-2022-0003\"}},{\"_id\":\"b5930079-8d57-43a8-85c0-c95f7e0338d9\",\"updatedAt\":\"2022-09-12T20:13:54.378Z\",\"dateSampleTaken\":\"2022-08-16T00:00:00.000Z\",\"labName\":\"LNG_REFERENCE_DATA_CATEGORY_LAB_NAME_LAB_A\",\"sampleIdentifier\":\"LS0043-SCV2-PCR\",\"sequence\":{\"hasSequence\":false},\"person\":{\"visualId\":\"C-2022-0004\"}}]"

Kamailio - get first occurrence value of a param in a string

I need to get the value of the first tgrp param in this string using Kamailio:
$var(x) = <sip:xxxxxxxxx;tgrp=0001000;trunk-context=xx.xx.xx.xx#xx.xx.xx.xx:5060;transport=UDP;user=phone;tgrp=237>
I'm trying $var(y) = $(var(x){param.value,tgrp}); but it's getting the last value of tgrp, which is 237>.
Note that the first tgrp is not always at the second index; other parameters can be added to the string.
How can I get the value of the first occurrence of the tgrp param?
The param.value string transformation is designed to work with unique param names.
You can loop over all params using a for loop and checking {param.name,index}.
Try a solution based on xavp_params_explode():
https://kamailio.org/docs/modules/stable/modules/pv.html#pv.f.xavp_params_explode
Something like:
xavp_params_explode("$(var(x){s.unbracket})", "x");
xdbg("$xavp(x=>tgrp[0])"); # <- print the value of first parameter tgrp
The index [0] can be omitted; without it the first value is returned. If you want the second param value, use [1] as the index.

Invalid type for parameter error when using put_item dynamodb

I want to write data in a dataframe to a DynamoDB table:
item = {}
for row in datasource_archived_df_join_repartition.rdd.collect():
    item['x'] = row.x
    item['y'] = row.y
    client.put_item(TableName='tryfail',
                    Item=item)
but I'm getting this error:
Invalid type for parameter Item.x, value: 478.2, type: <type 'float'>, valid types: <type 'dict'>
Invalid type for parameter Item.y, value: 696- 18C 12, type: <type 'unicode'>, valid types: <type 'dict'>
Old question, but it still comes up high in a search and hasn't been answered properly, so here we go.
When putting an item in a DynamoDB table it must be a dictionary in a particular nested form that indicates to the database engine the data type of the value for each attribute. The form looks like below. The way to think of this is that an AttributeValue is not a bare variable value but a combination of that value and its type. For example, an AttributeValue for the AlbumTitle attribute below is the dict {'S': 'Somewhat Famous'} where the 'S' indicates a string type.
response = client.put_item(
    TableName='Music',
    Item={
        'AlbumTitle': {                  # <-- Attribute
            'S': 'Somewhat Famous',      # <-- AttributeValue with type string ('S')
        },
        'Artist': {
            'S': 'No One You Know',
        },
        'SongTitle': {
            'S': 'Call Me Today',
        },
        'Year': {
            'N': '2021'                  # <-- Note that numeric values are supplied as strings
        }
    }
)
In your case (assuming x and y are numbers) you might want something like this:
for row in datasource_archived_df_join_repartition.rdd.collect():
    item = {
        'x': {'N': str(row.x)},
        'y': {'N': str(row.y)}
    }
    client.put_item(TableName='tryfail', Item=item)
Two things to note here: first, each item corresponds to a row, so if you are putting items in a loop you must instantiate a new one with each iteration. Second, regarding the conversion of the numeric x and y into strings, the DynamoDB docs explain that the reason the AttributeValue dict requires this is "to maximize compatibility across languages and libraries. However, DynamoDB treats them as number type attributes for mathematical operations." For fuller documentation on the type system for DynamoDB take a look at this or read the Boto3 doc here since you are using Python.
The error message is indicating that you are using the wrong type; it looks like you need to be using a dictionary when assigning values to item['x'] and item['y'], with the DynamoDB type as the key, e.g.
item['x'] = {'N': str(row.x)}
item['y'] = {'S': row.y}
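As another option, if you don't need the low-level client, the higher-level Table resource accepts plain Python types and builds the typed AttributeValue dicts for you; the one catch is that DynamoDB does not accept float, so numbers must be wrapped in Decimal. A minimal sketch, assuming x is numeric and y is a string as the error message suggests:
import boto3
from decimal import Decimal

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('tryfail')

for row in datasource_archived_df_join_repartition.rdd.collect():
    # The Table resource serializes native Python types itself;
    # floats are rejected, so convert numbers to Decimal first.
    table.put_item(Item={
        'x': Decimal(str(row.x)),
        'y': row.y,
    })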

Xquery variable concat based on condition

I am trying to concat the value of an element based on a certain condition, but I am unable to do so. What's wrong here?
For the sample structure given below, I need to concat the value of CId based upon the OutcomeCode. Say we have OutcomeCode values of OC and PC; then we should display the concatenated value of CId in a string variable.
<v4:ValidateResponse xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v4="http://service.com/v4">
    <v4:Details>
        <v4:Detail>
            <v4:CId>001</v4:CId>
        </v4:Detail>
        <v4:OutcomeCode>FC</v4:OutcomeCode>
    </v4:Details>
    <v4:Details>
        <v4:Detail>
            <v4:CId>002</v4:CId>
        </v4:Detail>
        <v4:OutcomeCode>PC</v4:OutcomeCode>
    </v4:Details>
    <v4:Details>
        <v4:Detail>
            <v4:CId>003</v4:CId>
        </v4:Detail>
        <v4:OutcomeCode>OC</v4:OutcomeCode>
    </v4:Details>
</v4:ValidateResponse>
Here is my transformation
as xs:string
{
    for $Details in $ValidateResponse/*:Details
    let $OutcomeCode := data($Details/*:OutcomeCode)
    return
        if (($OutcomeCode = 'OC') or ($OutcomeCode = 'PC'))
        then
            contact('CID is-', data($Details/*:Detail/*:CId))
        else
            fn:data('Technical_Check')
};
I am unable to get the concatenated values.
The expected result should be like: CID is- 002,003
as these two meet the OC and PC condition.
You could simplify this for loop and combine the criteria into a single XPath to select the CId from Details that have OutcomeCode of "OC" or "PC".
Then, use string-join() in order to produce a comma separated value.
Then, use concat() to produce a string with the prefix and the CSV value:
concat('CID is- ',
       string-join(
           $ValidateResponse/*:Details[*:OutcomeCode = ('OC','PC')]/*:Detail/*:CId,
           ",")
)

In Biztalk mapper how to use split array concept

I require a suggestion on the part below; please can anyone give a solution?
We have a mapping from 850 to FlatFile.
X12/PO1Loop1/PO1/PO109 needs to map to the field VALUE, which is under the record Option, which is unbounded.
Split PO109 into substrings delimited by '.'; for each substring after the first, create a new Option with value=substring.
So in the input sample we have a value like 147895632qwerqtyuui.789456123321456987
Similarly, the field repeats under PO1Loop1.
So I need to split the value based on '.' and then pass each value to the VALUE field under the Option record (unbounded).
I tried using the code snippet below:
public string SplitValues(string strValue)
{
    string[] arrValue = strValue.Split(".".ToCharArray());
    foreach (string strDisplay in arrValue)
    {
        return strDisplay;
    }
}
But it doesn't work, and I am not really familiar with the String methods, so I am not sure if there's an easy way to do this. I have a String which contains a couple of values delimited with '.'.
So I need to separate the values based on the delimiter '.' and pass each value to the field.
How can I do this?
As I mentioned, it's not too clear what your objective is, but I think you want to split a node that has some kind of delimiter into multiple nodes... if so, try this: https://seroter.wordpress.com/2008/10/07/splitting-delimited-values-in-biztalk-maps/
He is doing exactly that: given a node with a|b|c|d as the value, output multiple nodes, each containing a value after splitting by |, so node1 = a, node2 = b, node3 = c, node4 = d.
