Why does querying with ORDER BY fail in DynamoDB?

I have a query where I ORDER BY the expected_delivery value, which is a number (a UNIX timestamp), but it fails with this error:
ValidationException: Must have at least one non-optional hash key condition in WHERE clause when using ORDER BY clause.
Here is the query:
SELECT * FROM "transactions"."composite_pk_1-index"
WHERE begins_with(composite_pk_1, '0#3#435634652#69992528')
ORDER BY expected_delivery ASC
If I run it without the ORDER BY clause, it runs successfully and returns data:
SELECT * FROM "transactions"."composite_pk_1-index"
WHERE begins_with(composite_pk_1, '0#3#435634652#69992528')
I tried adding other conditions to the query, but it keeps returning the same error. The error message doesn't make the problem obvious, and I don't understand what it is.
Can someone help? I am new to DynamoDB.
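In PartiQL for DynamoDB, ORDER BY is only allowed when the WHERE clause contains an equality condition on the partition (hash) key of the table or index being queried; begins_with is valid only on the sort key. A sketch of a query that satisfies this, assuming composite_pk_1 is the partition key of composite_pk_1-index and expected_delivery is its sort key:

```sql
-- Equality on the partition key (not begins_with) allows ORDER BY on the sort key.
SELECT * FROM "transactions"."composite_pk_1-index"
WHERE composite_pk_1 = '0#3#435634652#69992528'
ORDER BY expected_delivery ASC
```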

Related

DynamoDB PartiQL error: "ValidationException: Overlapping conditions with range keys are not supported in where clause"

I'm getting the following error when I try to run a query in DynamoDB's PartiQL:
An error occurred during the execution of the command.
ValidationException: Overlapping conditions with range keys are not supported in where clause
The query looks like:
SELECT * FROM "tableName"
WHERE "columnName" IN (
'abc',
'def',
'def'
)
The error message is unnecessarily confusing, but it means that you have a duplicate value in your IN clause. If you remove the duplicate, the query will work. If you had a long list in the IN clause, it might have been hard to spot that you had a duplicate.
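If the IN list is built programmatically, deduplicating it before constructing the statement avoids the error entirely. A minimal sketch in Python; the table and column names come from the question, and the quoting is simplified (it assumes the values contain no single quotes):

```python
def build_in_query(table, column, values):
    # dict.fromkeys preserves insertion order while dropping duplicates
    unique = list(dict.fromkeys(values))
    quoted = ", ".join(f"'{v}'" for v in unique)
    return f'SELECT * FROM "{table}" WHERE "{column}" IN ({quoted})'

query = build_in_query("tableName", "columnName", ["abc", "def", "def"])
print(query)  # the duplicate 'def' is gone from the IN list
```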

How can I show only the failed results, and make the overall result fail rather than pass?

There are 22 records in both tables. Records 1-21 match, but record 22 differs between the two tables, so they can't be compared as equal. I would like to show only the failed record that doesn't match, and have the overall result be a failure.
Connect To Database Using Custom Params    cx_Oracle    ${DB_CONNECT_STRING}
${queryResults1}=    Query    SELECT * FROM QA_USER.SealTest_Security_A ORDER BY SECURITY_ID
Log    ${queryResults1}
${queryResults2}=    Query    SELECT * FROM QA_USER.SealTest_Security_B ORDER BY SECURITY_ID
Log    ${queryResults2}
Should Be Equal    ${queryResults1}    ${queryResults2}
Disconnect From Database
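One way to report only the mismatching rows (and still fail overall) is to diff the two result sets row by row and assert on the differences. A sketch of that comparison in plain Python; the row tuples stand in for the cx_Oracle query results, and the helper name is illustrative:

```python
def diff_rows(rows_a, rows_b):
    """Return only the mismatched (index, row_a, row_b) triples.

    The shorter result set is padded with None so a missing row
    also counts as a failure.
    """
    failures = []
    for i in range(max(len(rows_a), len(rows_b))):
        a = rows_a[i] if i < len(rows_a) else None
        b = rows_b[i] if i < len(rows_b) else None
        if a != b:
            failures.append((i, a, b))
    return failures

a = [(1, "x"), (2, "y"), (3, "z")]
b = [(1, "x"), (2, "y"), (3, "w")]
print(diff_rows(a, b))  # only the failing row is reported
```

In Robot Framework this could be wrapped as a keyword (e.g. via the Evaluate keyword or a custom library) that logs the failures and then fails the test if the list is non-empty.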

Using DBMS error log, how can I bulk insert values from one table into another skipping (but logging) errors?

I am trying to bulk insert rows from one table to another. I want to skip any errors that occur while doing this but log them as well. I used the following PL/SQL command to create an error log table for the table I am trying to insert all the values into:
BEGIN
DBMS_ERRLOG.create_error_log (
dml_table_name => 'ROBOT_ID_TOOL_DUMP_IDPK.TEST'
, skip_unsupported => TRUE);
END;
I then do a simple insert with logging turned on:
INSERT INTO table2
SELECT * FROM table1
LOG ERRORS INTO ERR$_table2;
The logging works, but my insert stops and rolls back after the first exception is hit (a PK or unique-constraint violation, for example). I don't want to roll back when an exception is encountered; I want that row to be skipped and the insert to continue with all remaining rows (while logging the problem row). I thought that is what the skip_unsupported parameter was for. I have tried setting it to FALSE and still encounter the same issue.
I missed a step on my insert.
INSERT INTO table2
SELECT * FROM table1
LOG ERRORS INTO ERR$_table2 REJECT LIMIT UNLIMITED;
The REJECT LIMIT UNLIMITED clause tells Oracle to continue past every error instead of rolling back.
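Once the insert has run, the skipped rows can be inspected in the error table. The ORA_ERR_* columns below are the standard ones DBMS_ERRLOG creates; the error-table name follows the example above:

```sql
-- Inspect the rows that were skipped, with the Oracle error for each.
SELECT ora_err_number$, ora_err_mesg$, ora_err_optyp$, ora_err_tag$
FROM   ERR$_table2;
```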

How to select several aggregate values in one query in Azure Cosmos DB

I'm trying to select min timestamp and max timestamp from a collection in one query.
All variants, like
SELECT value min(c._ts), value max(c._ts) FROM c
SELECT value min(c._ts), max(c._ts) FROM c
SELECT values min(c._ts), max(c._ts) FROM c
produce errors like
: {"code":400,"body":"{\"code\":\"BadRequest\",\"message\":\"Message: {\\\"errors\\\":[{\\\"severity\\\":\\\"Error\\\",\\\"location\\\":{\\\"start\\\":23,\\\"end\\\":24},\\\"code\\\":\\\"SC1001\\\",\\\"message\\\":\\\"Syntax error, incorrect syntax near ','.\\\"}]}\\r\\nActivityId: ad845eae-8b97-4f24-b372-dd5ce8f4d2a6, Microsoft.Azure.Documents.Common/2.0.0.0\"}","activityId":"ad845eae-8b97-4f24-b372-dd5ce8f4d2a6"}
Is there such possibility in Azure Cosmos DB?
Looking at the documentation for the VALUE keyword, it doesn't look like you can have multiple VALUE keywords in one query; if there is a comma in the SELECT list, you will get an error.
A UDF or a stored procedure sounds like a more suitable solution for your problem.
Keep in mind that a query like select max(c._ts), min(c._ts) from c will throw the error: Cross partition query only supports 'VALUE <AggreateFunc>' for aggregates.
That suggests the query should work if you specify the partition key, but I just tried it with the partition key specified and it still failed with the same error.
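Short of a UDF or stored procedure, a common workaround is to issue the two aggregates as separate single-aggregate queries, each using the VALUE form (query shape taken from the question), and combine the results client-side:

```sql
-- Run as two separate requests; each returns a bare number.
SELECT VALUE MIN(c._ts) FROM c
SELECT VALUE MAX(c._ts) FROM c
```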

sqlite - use a column field in a limit clause

Using SQLite, I'm trying to run a query with a LIMIT clause, but instead of specifying a literal I want to use a column value. Sadly I'm getting a 'no such column' error. Is there a way to achieve this without writing an external program?
Example
select * from ep where code=2 limit code
You have to use a subquery:
SELECT * FROM ep WHERE code = 2 LIMIT (SELECT code FROM ep WHERE ...)
Please note that the subquery must return a single value (if it returns multiple records, only the first one is used).
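The subquery approach can be demonstrated end to end with Python's built-in sqlite3 module. The table and column names (ep, code) follow the question; the sample data is made up:

```python
import sqlite3

# In-memory database with a table shaped like the question's `ep`.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ep (code INTEGER, name TEXT)")
conn.executemany("INSERT INTO ep VALUES (?, ?)",
                 [(2, "a"), (2, "b"), (2, "c"), (2, "d")])

# SQLite accepts an expression, including a scalar subquery, in LIMIT.
rows = conn.execute(
    "SELECT * FROM ep WHERE code = 2 "
    "LIMIT (SELECT code FROM ep WHERE code = 2 LIMIT 1)"
).fetchall()
print(len(rows))  # the subquery yields 2, so at most 2 rows come back
```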