This is what my table looks like:
toHash is my primary partition key and timestamp is the sort key.
So, when I execute this code (to get the timestamps in reverse sorted order):
import boto3
from boto3.dynamodb.conditions import Key, Attr

client = boto3.client('dynamodb')

response = client.query(
    TableName='logs',
    Limit=1,
    ScanIndexForward=False,
    KeyConditionExpression="toHash = :X",
)
I get the following error:
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the Query operation: Invalid KeyConditionExpression: An expression attribute value used in expression is not defined; attribute value: :X
Am I doing something wrong here? Why isn't :X considered a valid attribute value?
This is why I don't use boto directly anymore. The API for what should be simple stuff like this is nuts. #notionquest is correct, you need to add the ExpressionAttributeValues. I don't mess with these things anymore. I use dynamof. Here's an example with your query:
from boto3 import client
from dynamof import db as make_db
from dynamof import query, attr

client = client('dynamodb', endpoint_url='http://localstack:4569')
db = make_db(client)

db(query(
    table_name='logs',
    conditions=attr('toHash').equals('somevalue')
))
dynamof builds the KeyConditionExpression and ExpressionAttributeValues for you.
disclaimer: I wrote dynamof
Add ExpressionAttributeValues as shown below:
response = client.query(
    TableName='logs',
    Limit=1,
    ScanIndexForward=False,
    KeyConditionExpression="toHash = :X",
    ExpressionAttributeValues={":X": {"S": "somevalue"}}
)
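Alternatively, the resource-level API builds the expression and its placeholder values for you. A minimal sketch, assuming the same logs table and a placeholder key value of 'somevalue':

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('logs')

# Key(...).eq(...) generates the KeyConditionExpression and the
# ExpressionAttributeValues under the hood
response = table.query(
    Limit=1,
    ScanIndexForward=False,
    KeyConditionExpression=Key('toHash').eq('somevalue'),
)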
I'm attempting to send a batch of PartiQL statements in the NodeJS AWS SDK v3. The statement works fine for a single ExecuteStatementCommand, but the Batch command doesn't.
The statement looks like
const statement = `
    SELECT *
    FROM "my-table"
    WHERE "partitionKey" = '1234'
    AND "filterKey" = '5678'
`
This code snippet works as expected:
const result = await dynamodbClient.send(new ExecuteStatementCommand(
    { Statement: statement }
))
The batch snippet does not:
const result = await dynamodbClient.send(new BatchExecuteStatementCommand({
    Statements: [
        {
            Statement: statement
        }
    ]
}))
The batch call produces the following error:
"Code": "ValidationError",
"Message": "Select statements within BatchExecuteStatement must specify the primary key in the where clause."
Any insight is greatly appreciated. Thanks for taking the time to read my question!
Seems like what I needed was a rubber duck.
DynamoDB primary keys consist of a partition key plus an optional sort key. My particular table has a sort key, which is missing from the statement. Batch statements cannot do any filtering of responses; each statement must match a single item in the database by its full primary key.
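For illustration, here is a sketch of a batched statement that pins down the full primary key, shown with boto3's batch_execute_statement to match the rest of this page; the sort key name sortKey and its value are assumptions:

import boto3

client = boto3.client('dynamodb')

# each SELECT in a batch must identify exactly one item by its complete
# primary key; 'sortKey' is a hypothetical sort key name
response = client.batch_execute_statement(
    Statements=[
        {'Statement': "SELECT * FROM \"my-table\" WHERE \"partitionKey\" = '1234' AND \"sortKey\" = '5678'"}
    ]
)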
I understand that SQLite doesn't support RETURNING, at least that is what SQLAlchemy is telling me:
sqlalchemy.exc.CompileError: RETURNING is not supported by this dialect's statement compiler.
I get this error when using SQLAlchemy's Core API. Here is a code example:
from sqlalchemy.engine.url import URL
from sqlalchemy import create_engine, MetaData
from sqlalchemy import Table, Column, Integer, String

engine = create_engine('sqlite:///:memory:', echo=False)

# create table
meta = MetaData(engine)
table = Table('userinfo', meta,
              Column('id', Integer, primary_key=True),
              Column('first_name', String),
              Column('age', Integer),
              )
meta.create_all()

# generate rows
data = [{'first_name': f'Name {i}', 'age': 18 + i} for i in range(10)]

# this seems to work on PostgreSQL only
stmt = table.insert().values(data).returning(table.c.id)
for rowid in engine.execute(stmt).fetchall():
    print(rowid['id'])
Now, when I use similar code with SQLAlchemy's ORM, the IDs are returned. Here is the source code:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import sessionmaker, scoped_session

Base = declarative_base()

class UserInfo(Base):
    __tablename__ = "userinfo"
    id = Column(Integer, primary_key=True)
    first_name = Column(String)
    age = Column(Integer)

engine = create_engine('sqlite:///:memory:', echo=False)
Base.metadata.create_all(engine)

session = scoped_session(sessionmaker(bind=engine))

data = [dict(first_name=f'Name {i}', age=18 + i) for i in range(10)]
session.bulk_insert_mappings(UserInfo, data, return_defaults=True)
session.commit()

print([s['id'] for s in data])
How come this works while the Core version does not? When I look at the generated SQL I don't see a RETURNING clause.
After some digging I found this link. In that document, bulk_insert_mappings is described as just "Batched INSERT statements via the ORM 'bulk', using dictionaries." When return_defaults=True is set, I assume SQLAlchemy is repeatedly reading back the last row id after each INSERT; hence the IDs are available.
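For comparison, a Core workaround on SQLite is to insert row by row and read inserted_primary_key, which is populated from the cursor's lastrowid. A sketch reusing the table, engine, and data objects from the Core example above:

# one INSERT per row instead of a single multi-row INSERT
ids = []
for row in data:
    result = engine.execute(table.insert().values(**row))
    ids.append(result.inserted_primary_key[0])
print(ids)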
I have a table named AdvgCountries which has two columns:
a. CountryId (String) (Partition Key)
b. CountryName (String) (Sort Key)
While creating the table, I created it with only the partition key and then later added a Global Secondary Index with the following settings:
Index name: CountryName-index
Type: GSI
Partition key: CountryId
Sort key: CountryName
I am able to retrieve CountryName based upon CountryId, but unable to retrieve CountryId based upon CountryName. Based upon my reading, I found that there are options to do this by providing an IndexName, but I get the following error:
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the Query operation: Query condition missed key schema element: CountryId
import boto3
import json
import os
from boto3.dynamodb.conditions import Key, Attr

def query_bycountryname(pCountryname, dynamodb=None):
    if not dynamodb:
        dynamodb = boto3.resource('dynamodb', endpoint_url="https://dynamodb.us-east-1.amazonaws.com")
    table = dynamodb.Table('AdvgCountries')
    print("table")
    attributes = table.query(
        IndexName="CountryName-index",
        KeyConditionExpression=Key('CountryName').eq(pCountryname),
    )
    if 'Items' in attributes and len(attributes['Items']) == 1:
        attributes = attributes['Items'][0]
    print("before return")
    return attributes

if __name__ == '__main__':
    CountryName = "India"
    print(f"Data for {CountryName}")
    countries = query_bycountryname(CountryName)
    for country in countries:
        print(country['CountryId'], ":", country['CountryName'])
Any help is appreciated.
You can't fetch the partition key value based on the sort key alone. DynamoDB does not work like this.
In DynamoDB, each item's location is determined by the hash value of its partition key.

The Query operation in Amazon DynamoDB finds items based on primary key values.

KeyConditionExpressions are used to write conditional statements by using comparison operators that evaluate against a key and limit the items returned. In other words, you can use special operators to include, exclude, and match items by their sort key values.
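If you really need lookups by CountryName, the usual pattern is to add a GSI whose partition key is CountryName itself and query that index instead. A hedged sketch, not your existing index; the index name and throughput values below are assumptions:

import boto3

client = boto3.client('dynamodb')

# hypothetical index keyed on CountryName; once it reaches ACTIVE,
# a query with Key('CountryName').eq(...) against it will work
client.update_table(
    TableName='AdvgCountries',
    AttributeDefinitions=[{'AttributeName': 'CountryName', 'AttributeType': 'S'}],
    GlobalSecondaryIndexUpdates=[{
        'Create': {
            'IndexName': 'CountryNameLookup-index',
            'KeySchema': [{'AttributeName': 'CountryName', 'KeyType': 'HASH'}],
            'Projection': {'ProjectionType': 'ALL'},
            'ProvisionedThroughput': {'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
        }
    }],
)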
I am having trouble using AWS Boto3 to query DynamoDB with a hash key and a range key at the same time using the recommended KeyConditionExpression. I have attached an example query:
import boto3
from boto3 import dynamodb
from boto3.session import Session

dynamodb_session = Session(aws_access_key_id=AWS_KEY,
                           aws_secret_access_key=AWS_PASS,
                           region_name=DYNAMODB_REGION)

dynamodb = dynamodb_session.resource('dynamodb')
table = dynamodb.Table(TABLE_NAME)

request = {
    'ExpressionAttributeNames': {
        '#n0': 'hash_key',
        '#n1': 'range_key'
    },
    'ExpressionAttributeValues': {
        ':v0': {'S': MY_HASH_KEY},
        ':v1': {'N': GT_RANGE_KEY}
    },
    'KeyConditionExpression': '(#n0 = :v0) AND (#n1 > :v1)',
    'TableName': TABLE_NAME
}

response = table.query(**request)
When I run this against a table with the following schema:
Table Name: TABLE_NAME
Primary Hash Key: hash_key (String)
Primary Range Key: range_key (Number)
I get the following error and I cannot understand why:
ClientError: An error occurred (ValidationException) when calling the Query operation: Invalid KeyConditionExpression: Incorrect operand type for operator or function; operator or function: >, operand type: M
From my understanding, type M is a map or dictionary type, while I am using type N, which is a number type and matches my table schema for the range key. Can someone explain why this error is happening? I am also open to a different way of accomplishing the same query, even if you cannot explain why this error exists.
The Boto 3 SDK constructs a Condition Expression for you when you use the Key and Attr functions imported from boto3.dynamodb.conditions:
from boto3.dynamodb.conditions import Key

response = table.query(
    KeyConditionExpression=Key('hash_key').eq(hash_value) & Key('range_key').eq(range_key_value)
)
Reference: Step 4: Query and Scan the Data
Hope it helps
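The snippet above matches both keys with eq; since the question used a greater-than condition on the range key, the equivalent with the condition builders would be (a sketch reusing the question's variable names):

from boto3.dynamodb.conditions import Key

response = table.query(
    KeyConditionExpression=Key('hash_key').eq(MY_HASH_KEY) & Key('range_key').gt(GT_RANGE_KEY)
)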
Adding this solution because the accepted answer did not address why the query in the question did not work.

TL;DR: Using query on a Table resource in boto3 has subtle differences compared with client.query(...) and requires a different syntax.

The syntax is valid for a query on a client, but not on a Table. ExpressionAttributeValues on a Table do not require you to specify the data type, and if you are executing a query on a Table resource you do not have to specify the TableName again.
Working solution:

from boto3.session import Session

dynamodb_session = Session(aws_access_key_id=AWS_KEY,
                           aws_secret_access_key=AWS_PASS,
                           region_name=DYNAMODB_REGION)

dynamodb = dynamodb_session.resource('dynamodb')
table = dynamodb.Table(TABLE_NAME)

request = {
    'ExpressionAttributeNames': {
        '#n0': 'hash_key',
        '#n1': 'range_key'
    },
    'ExpressionAttributeValues': {
        ':v0': MY_HASH_KEY,
        ':v1': GT_RANGE_KEY
    },
    'KeyConditionExpression': '(#n0 = :v0) AND (#n1 > :v1)',
}

response = table.query(**request)
I am the author of a package called botoful, which might be useful for avoiding these complexities. The equivalent code using botoful would be:
import boto3
from botoful import Query

client = boto3.Session(
    aws_access_key_id=AWS_KEY,
    aws_secret_access_key=AWS_PASS,
    region_name=DYNAMODB_REGION
).client('dynamodb')

results = (
    Query(TABLE_NAME)
    .key(hash_key=MY_HASH_KEY, range_key__gt=GT_RANGE_KEY)
    .execute(client)
)

print(results.items)
I am trying to move from DynamoDB to DynamoDB2 to use tables with global secondary indexes. I need to create a table and then batch-write items into it. Here's a block of test code:
from boto.dynamodb2.fields import HashKey, RangeKey, GlobalAllIndex
from boto.dynamodb2.layer1 import DynamoDBConnection
from boto.dynamodb2.table import Table
from boto.dynamodb2.items import Item
import boto

conn = DynamoDBConnection(aws_access_key_id=<MYID>, aws_secret_access_key=<MYKEY>)
tables = conn.list_tables()
table_name = 'myTable001'

if table_name not in tables['TableNames']:
    Table.create(table_name,
                 schema=[HashKey('firstKey')],
                 throughput={'read': 5, 'write': 2},
                 global_indexes=[GlobalAllIndex('secondKeyIndex',
                                                parts=[HashKey('secondKey')],
                                                throughput={'read': 5, 'write': 3})],
                 connection=conn)

table = Table(table_name, connection=conn)

with table.batch_write() as batch:
    batch.put_item(data={'firstKey': 'fk01', 'secondKey': 'sk001', 'message': '{"firstKey":"fk01", "secondKey":"sk001", "comments": "fk01-sk001"}'})
    # ...
    batch.put_item(data={'firstKey': 'fk74', 'secondKey': 'sk112', 'message': '{"firstKey":"fk74", "secondKey":"sk012", "comments": "fk74-sk012"}'})
When I run this code for the first time with a new value of table_name, I get the following error on the last line of the block:
boto.exception.JSONResponseError: JSONResponseError: 400 Bad Request
{u'message': u'Requested resource not found', u'__type': u'com.amazonaws.dynamodb.v20120810#ResourceNotFoundException'}
When I run it one more time, it gets executed fine. I suspect that the reason is simply that the table is still being created when I run it for the first time. How do I check a table's status in DDB2? In DDB I used table.status, but this does not seem to be available in DDB2. What should I use instead?
UPDATE: Based on the final response here, the right way to extract the table status is:

tdescr = conn.describe_table(tName)
print(tdescr['Table']['TableStatus'])
Here are the other elements of the description dictionary:
for key in tdescr['Table'].keys():
    print(key)
GlobalSecondaryIndexes
AttributeDefinitions
ProvisionedThroughput
TableSizeBytes
TableName
TableStatus
KeySchema
ItemCount
CreationDateTime
You can use conn.describe_table('table') to fetch the details about the table and then check the TableStatus field in the returned output.
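A minimal polling sketch along those lines, assuming the conn and table_name objects from the question (the sleep interval is arbitrary):

import time

def wait_until_active(conn, table_name, interval=5):
    # poll DescribeTable until the table reports ACTIVE
    while conn.describe_table(table_name)['Table']['TableStatus'] != 'ACTIVE':
        time.sleep(interval)

wait_until_active(conn, table_name)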