I am looking to add some items to DynamoDB via the console (please see the screenshot below). When I click the "Save" button, nothing happens; no new items are created. I also checked the "DynamoDB JSON" checkbox to convert the JSON into DynamoDB-compatible JSON and clicked Save again, but still nothing happens. There are no error messages either. Can someone please advise what I am doing wrong?
You haven't provided your table definition, so it's difficult to say exactly what a valid item for your table would look like, but I can tell you for sure that:
1) You shouldn't be creating an array of JSON objects: each item you create must be an individual valid JSON object, like so:
{
    "sub": 1234,
    "EventID": ["B213", "B314"]
}
2) Each item you create must include attributes matching the key schema for your table. This means that if your table has just a partition key defined, then each item must include one attribute whose name matches the name of the partition key. If the table has both a partition and a sort key, then each item you create must include at least two attributes, one matching the partition key, the other matching the sort key. And finally, the partition and sort keys must be of string, number, or binary type.
Assuming your table has a partition key called sub and no sort key, then the item example above would work.
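For reference, the same item in DynamoDB JSON (the representation the "DynamoDB JSON" checkbox expects) carries explicit type descriptors, and you can also insert it from the CLI as a sanity check. A minimal sketch, assuming a placeholder table name MyTable with the numeric partition key sub:
aws dynamodb put-item \
    --table-name MyTable \
    --item '{"sub": {"N": "1234"}, "EventID": {"L": [{"S": "B213"}, {"S": "B314"}]}}'
Here "N" marks a number (always passed as a string), "L" a list, and "S" a string.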
Update:
Based on the comments, it sounds like the OP was looking for a way to insert multiple items in a single operation. This is not possible with the console, and it actually goes deeper than that: DynamoDB fundamentally operates on a single item at a time for write operations. It is of course possible to batch up to 25 item writes using the API, but that is just a convenience.
If you need to add multiple items to your table, consider writing a small script using the AWS CLI or the API. It's relatively easy to do!
The scripting solution looks something like this:
aws-dynamodb-upload-json.sh
#!/bin/bash
set -e

# print usage if no arguments are given
if [[ $# -eq 0 ]]; then set -- "--help"; fi
if [[ "$1" = "--help" ]]; then
    echo "Usage:"
    echo "  aws-dynamodb-upload-json {table-name} {file.json}"
    exit 1
fi

# config
AWS_DYNAMODB_UPLOAD_TABLE_NAME=$1
AWS_DYNAMODB_UPLOAD_FILE_INPUT=$2
echo "Configuration"
echo "AWS_DYNAMODB_UPLOAD_TABLE_NAME=$AWS_DYNAMODB_UPLOAD_TABLE_NAME"
echo "AWS_DYNAMODB_UPLOAD_FILE_INPUT=$AWS_DYNAMODB_UPLOAD_FILE_INPUT"

# main: split the JSON array into one compact object per line
# and put-item each object individually
jq -c '.[]' < "$AWS_DYNAMODB_UPLOAD_FILE_INPUT" |
while read -r row
do
    echo "Entry: $row"
    echo ""
    aws dynamodb put-item \
        --region us-east-1 \
        --table-name "$AWS_DYNAMODB_UPLOAD_TABLE_NAME" \
        --item "$row"
done
This relies on the aws CLI and the jq CLI being installed and on your $PATH.
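If you only have a handful of items (up to 25 per call), you can also skip the loop and use batch-write-item directly. A sketch, assuming a placeholder table named MyTable and a hand-written request file:
aws dynamodb batch-write-item --request-items file://requests.json
where requests.json looks like:
{
    "MyTable": [
        {"PutRequest": {"Item": {"sub": {"N": "1234"}, "EventID": {"L": [{"S": "B213"}]}}}},
        {"PutRequest": {"Item": {"sub": {"N": "5678"}, "EventID": {"L": [{"S": "B314"}]}}}}
    ]
}
As noted above, this is still one write per item under the hood; the batching is just a convenience.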
Hopefully AWS adds an easier way to do this via the web interface someday.
Related
I went through some posts and learned that case-insensitive search is not possible in DynamoDB, so I am trying to update an existing DynamoDB table's column values to lowercase.
I searched for the syntax but haven't found any satisfactory result. In MySQL we achieve the same thing with:
set name = LOWER(name)
Please help me write the same thing in DynamoDB.
I wrote this query:
aws dynamodb update-item --profile test --table-name test-event-tickets --key '{"university_id": {"S": "112"}}' --update-expression 'SET #nameAttribute = :inputname' --expression-attribute-names '{"#nameAttribute":"name"}' --expression-attribute-values '{":inputname":{"S":"george philips"}}'
but here I have hardcoded inputname to "george philips". Instead of this, I want to read the column's current value and convert it to lowercase.
Unfortunately, there is no such syntax in DynamoDB. Although DynamoDB is capable of doing some transformations to data in place, such as incrementing a counter, the syntax for this is very limited, and lowercasing a value is NOT one of the things you can do.
So you'll have to scan the entire table, reading the old value of the attribute, calculating the lowercase version in your application, and writing the value back. If your application is doing regular writes in parallel with this transformation, you'll need to be very careful not to overwrite data that is being updated in parallel. You can do this with a condition expression, but I think it will be easier if the new lowercase attribute has a different name from the old not-always-lowercase attribute, so that your transformation process writes the new attribute (using ConditionExpression) only if it is not yet set.
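To make that concrete, here is a minimal sketch of the scan-and-rewrite loop using the AWS CLI and jq, assuming the table and key names from your command (test-event-tickets / university_id) and glossing over the concurrent-write caveat above:
#!/bin/bash
TABLE=test-event-tickets

# "name" is a DynamoDB reserved word, so it needs the #n alias
aws dynamodb scan \
    --table-name "$TABLE" \
    --projection-expression "university_id, #n" \
    --expression-attribute-names '{"#n":"name"}' \
    --output json |
jq -c '.Items[]' |
while read -r item; do
    key=$(echo "$item" | jq -c '{university_id: .university_id}')
    lower=$(echo "$item" | jq -r '.name.S' | tr '[:upper:]' '[:lower:]')
    aws dynamodb update-item \
        --table-name "$TABLE" \
        --key "$key" \
        --update-expression 'SET #n = :lower' \
        --expression-attribute-names '{"#n":"name"}' \
        --expression-attribute-values "$(jq -nc --arg v "$lower" '{":lower":{"S":$v}}')"
done
For a table with live parallel writes, you would add the ConditionExpression described above rather than overwriting blindly.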
I'm trying to add an attribute to a whole table, without specifying a key for each item.
In the examples I've found, a specific item key is always used:
aws dynamodb update-item \
    --region MY_REGION \
    --table-name MY_TABLE_NAME \
    --key '{"AccountId": {"S": "accountId"}}' \
    --update-expression 'SET conf = :newconf' \
    --expression-attribute-values '{":newconf":{"S":"new conf value"}}'
Plus, that's an update for an attribute that is already in the table.
How can I add a new attribute to each record of a table?
There is no API that will automatically add an attribute to all items in a table. DynamoDB just doesn't work that way.
The only way to add an attribute to all items in a table is to scan the table and, for each item, make an UpdateItem request to add the attribute you want (see the sketch after the caveats below). This works both for attributes that are missing (i.e., adding a new one) and for attributes that already exist and are just being updated.
Some caveats:
If the table is small and not being updated too often, this may work as intended in a single pass
If the table is larger and being updated relatively fast (i.e., every second), then you will need to make sure the code updating the table is also adding the attribute to new or updated items, and that the two sets of updates don't clobber each other
Lastly, if the table is large, this can consume a LOT of capacity because of the scan plus an update for each item, so plan on it taking a long time (and mind the consumed capacity vs. provisioned capacity); it's better to have some rate limiting in the update script
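For illustration, a minimal sketch of that scan-and-update loop with the AWS CLI and jq, reusing the placeholder names from the question (MY_TABLE_NAME, AccountId, conf) and ignoring scan pagination:
#!/bin/bash
TABLE=MY_TABLE_NAME

aws dynamodb scan \
    --table-name "$TABLE" \
    --projection-expression "AccountId" \
    --output json |
jq -c '.Items[]' |
while read -r key; do
    # if_not_exists() avoids clobbering a value that a parallel
    # writer has already set
    aws dynamodb update-item \
        --table-name "$TABLE" \
        --key "$key" \
        --update-expression 'SET conf = if_not_exists(conf, :newconf)' \
        --expression-attribute-values '{":newconf":{"S":"new conf value"}}'
    sleep 0.1  # crude rate limiting to spare your provisioned capacity
done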
Could somebody please tell me what a valid key condition expression would be? I am trying to run a query on a simple table called MyKeyTable. It has two "columns," namely Id and AnotherNumberThatICareAbout, which is of type Long.
I would like to see all the values I put in. So I tried:
aws dynamodb query --select ALL_ATTRIBUTES --table-name MyKeyTable \
    --endpoint http://localhost:8000 \
    --key-condition-expression "WHAT DO I PUT IN HERE?"
What hash do I need to put in? The docs are a bit lame on this imho. Any help appreciated, even if it's just a link to a good doc.
Here's a command-line-only approach you can use with no intermediate files.
First, use value placeholders to construct your key condition expression, e.g.,
--key-condition-expression "Id = :idValue"
(Don't forget the colon prefix for placeholders!)
Next, construct an expression-attribute-values argument. Note that it expects JSON format. The tricky bit I always seem to forget is that you can't just plug in 42 for a number or "foo" for a string: you have to tell DynamoDB both the type and the value. See the AWS docs for the complete breakdown of how you can format the value specification, which can be quite complex if you need it to be.
For Windows you can escape quotation marks in it by doubling them, e.g.,
--expression-attribute-values "{"":idValue"":{""N"":""42""}}"
For macOS/Linux, single quotes are required around the JSON:
--expression-attribute-values '{":idValue":{"N":"42"}}'
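Putting it all together (macOS/Linux quoting; assuming Id is your numeric partition key and 42 the value you are looking for):
aws dynamodb query \
    --table-name MyKeyTable \
    --endpoint-url http://localhost:8000 \
    --select ALL_ATTRIBUTES \
    --key-condition-expression "Id = :idValue" \
    --expression-attribute-values '{":idValue":{"N":"42"}}'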
create a file containing your key conditions: test.json
{
    "yourHashKeyName": {
        "AttributeValueList": [{"S": "abc"}],
        "ComparisonOperator": "EQ"
    },
    "YourRangeKey": {
        "AttributeValueList": [{"S": "xyz"}],
        "ComparisonOperator": "EQ"
    }
}
(The range key condition is optional; JSON does not allow // comments, so omit the whole entry if you don't need it.)
Run
aws dynamodb query --table-name "your table name" --key-conditions file://test.json
refer: http://docs.aws.amazon.com/cli/latest/reference/dynamodb/query.html
For scanning the table:
aws dynamodb scan --table-name "your table name"
No need to pass any keys, as we scan the whole table (note: it will return at most 1MB of data per call)
refer: http://docs.aws.amazon.com/cli/latest/reference/dynamodb/scan.html
I have an SQLite 3 database (db) like in this simplified example:
CREATE TABLE user(id integer primary key, name text);
INSERT INTO "user" VALUES(1,'user1');
INSERT INTO "user" VALUES(2,'user2');
If I enter .dump, sqlite3 will wrap these statements in a transaction and write them to the file db.sql previously set with .output. This is fine if I need to import the data into an empty DB.
I want to be able to import the user data into a different DB that has other users already defined. If I try, I will most likely get something like this (as the ids may already be in use in the target DB):
Error: near line 4: PRIMARY KEY must be unique
Error: near line 5: PRIMARY KEY must be unique
My approaches:
I can tweak the dumped SQL manually to remove everything but the inserts and drop the id column from the statements (mentioning only name), but this approach does not scale, as I want to automate the process.
I can select from the db and write the SQL myself.
Is there any easy or more elegant approach that I am missing? sqlite3 will be run from a Bash script in the real world problem.
You can automate the tweaking (but this still requires that you know the table structure):
$ (echo ".mode insert"; echo "SELECT name FROM user;") | \
sqlite3 my.db | \
sed -e 's/^INSERT INTO table /INSERT INTO user(name) /'
INSERT INTO user(name) VALUES('user1');
INSERT INTO user(name) VALUES('user2');
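To finish the import, you can pipe the rewritten statements straight into the target database (source.db and target.db are placeholder file names):
$ (echo ".mode insert"; echo "SELECT name FROM user;") | \
  sqlite3 source.db | \
  sed -e 's/^INSERT INTO table /INSERT INTO user(name) /' | \
  sqlite3 target.db
The target assigns fresh ids because the id column is omitted from the INSERT column list.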
I'm pretty new to SQLite 3, and just now I had to add a column to an existing table. I went about it by running: ALTER TABLE thetable ADD COLUMN category;
Of course, I forgot to specify that column's type. The first thing I thought about doing was dropping the column and then re-adding it. However, it seems that SQLite does not have a simple way of doing this, and I would have had to back up the table and re-create it without the column.
This seems messy, and I was wondering if there were just a way of modifying or adding a column's type. I would imagine so, but my searching around yielded no results; being new to SQLite, I imagine that was due to my wording being off in the query.
SQLite doesn't support modifying columns (recent versions have added ALTER TABLE ... DROP COLUMN and RENAME COLUMN, but there is still no way to change a column's declared type). Do remember, though, that column data types aren't rigid in SQLite, either.
See also:
SQLite Modify Column
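To see that flexibility in action, here is a small demonstration (my.db and thetable are placeholders; this assumes thetable's other columns accept NULL):
sqlite3 my.db <<'EOF'
ALTER TABLE thetable ADD COLUMN category;
-- with no declared type the column has BLOB affinity and stores values as-is
INSERT INTO thetable (category) VALUES (42);
INSERT INTO thetable (category) VALUES ('books');
SELECT category, typeof(category) FROM thetable;
EOF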
If you prefer a GUI, DB Browser for SQLite will do this with a few clicks.
"File" - "Open Database"
In the "Database Structure" tab, click on the table content (not table name), then "Edit" menu, "Modify table", and now you can change the data type of any column with a drop down menu. I changed a 'text' field to 'numeric' in order to retrieve data in a number range.
DB Browser for SQLite is open source and free. For Linux it is available from the repository.
There is a much simpler way:
ALTER TABLE your_main_table
ADD COLUMN new_column_name new_column_data_type;

UPDATE your_main_table
SET new_column_name = CAST(old_column_name AS new_column_data_type);

I tried this on my machine locally, and it works.
It is possible by recreating the table. This worked for me; please follow these steps (a sketch follows the list):
create a temporary table using CREATE TEMPORARY TABLE ... AS SELECT * FROM your table
drop your table, then re-create it with the modified column type
insert the records from the temp table into your newly created table
drop the temporary table
do all of the above steps in a worker thread to reduce load on the UI thread
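A minimal sketch of those steps in SQL, assuming a hypothetical table thetable with columns (id, name, category), where category should become TEXT:
sqlite3 my.db <<'EOF'
BEGIN;
-- 1. stash the data
CREATE TEMPORARY TABLE thetable_tmp AS SELECT * FROM thetable;
-- 2. drop and re-create with the modified column type
DROP TABLE thetable;
CREATE TABLE thetable (id INTEGER PRIMARY KEY, name TEXT, category TEXT);
-- 3. copy the records back
INSERT INTO thetable (id, name, category)
    SELECT id, name, category FROM thetable_tmp;
-- 4. clean up
DROP TABLE thetable_tmp;
COMMIT;
EOF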
It is possible by dumping, editing, and reimporting the table.
This script will do it for you (adapt the values at the start of the script to your needs):
#!/bin/bash
DB=/tmp/synapse/homeserver.db
TABLE="public_room_list_stream"
FIELD=visibility
OLD="BOOLEAN NOT NULL"
NEW="INTEGER NOT NULL"
TMP=/tmp/sqlite_$TABLE.sql

echo "### create dump"
echo ".dump '$TABLE'" | sqlite3 "$DB" > "$TMP"

echo "### editing the create statement"
sed -i "s|$FIELD $OLD|$FIELD $NEW|g" "$TMP"

read -rsp "Press any key to continue deleting and recreating the table $TABLE ..." -n1 key
echo

echo "### rename the original to '${TABLE}_backup'"
sqlite3 "$DB" "PRAGMA busy_timeout=20000; ALTER TABLE \"$TABLE\" RENAME TO \"${TABLE}_backup\""

echo "### delete the old indexes"
for idx in $(echo "SELECT name FROM sqlite_master WHERE type = 'index' AND tbl_name LIKE '${TABLE}%';" | sqlite3 "$DB"); do
    echo "DROP INDEX '$idx';" | sqlite3 "$DB"
done

echo "### reinserting the edited table"
sqlite3 "$DB" < "$TMP"