Record/Revert DynamoDB state for integration testing

I want to write integration tests for my API Gateway, which uses DynamoDB as its backend. Is there a method/framework/library that provides the flexibility to record the DynamoDB state before tests and revert it back to the original state after the tests?
Ideally, I want something that can keep track of the changes made in DynamoDB since the beginning of the tests and revert all those changes once the tests are completed.

I use DynamoDB Local in my test environment, instead of running tests against DynamoDB directly. This saves costs and time. I use a test framework (RSpec) where I can delete anything stored in the database after a test is run.
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html
If you need to run tests against a real DynamoDB table, look into DynamoDB streams + AWS Lambda. You can write a Lambda function that is triggered on item changes from your table. That function can, for example, store a record of the change in another table. Once your test is done, it can kick off a second Lambda function which goes through the change table and reverts each change in your original table.
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
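To make the recording half of that idea concrete, here is a minimal sketch of such a change-recording Lambda handler. It is not a ready-made library: the audit table name ("test-change-log"), its "changeId" key, and the assumptions that you are on aws-lambda-java-events 2.x (where the stream record types come from the v1 DynamoDB SDK) and that the stream is configured with NEW_AND_OLD_IMAGES are all mine.
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent.DynamodbStreamRecord;

// Sketch: records every stream event into a hypothetical "test-change-log" table
// so a second Lambda (or the test teardown) can replay the old images to revert.
public class ChangeRecorder implements RequestHandler<DynamodbEvent, Void> {

    private static final String AUDIT_TABLE = "test-change-log"; // hypothetical name
    private final AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.defaultClient();

    @Override
    public Void handleRequest(DynamodbEvent event, Context context) {
        for (DynamodbStreamRecord record : event.getRecords()) {
            Map<String, AttributeValue> auditItem = new HashMap<>();
            auditItem.put("changeId", new AttributeValue(UUID.randomUUID().toString()));
            auditItem.put("eventName", new AttributeValue(record.getEventName())); // INSERT / MODIFY / REMOVE
            auditItem.put("keys", new AttributeValue().withM(record.getDynamodb().getKeys()));
            if (record.getDynamodb().getOldImage() != null) {
                // The old image is what the revert step writes back (or uses to decide to delete).
                auditItem.put("oldImage", new AttributeValue().withM(record.getDynamodb().getOldImage()));
            }
            ddb.putItem(AUDIT_TABLE, auditItem);
        }
        return null;
    }
}
The revert step would then read test-change-log in reverse order of creation, delete items whose eventName is INSERT, and put back the stored oldImage for MODIFY and REMOVE events.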

As readyornot recommended, I am also using DynamoDB Local for integration tests. I implemented it with the following approach:
1. Add the DynamoDBLocal dependency to your dependency management (in my case Gradle: testCompile 'com.amazonaws:DynamoDBLocal:1.11.86').
2. The DynamoDBLocal server needs some native libraries; add them to the test resources. You will find them in the extracted files of the lib (sqlite4java-win32-x86.dll, libsqlite4java-linux-i386.so, etc.).
3. In the @Before setup of the JUnit class, set the sqlite4java library path to the location of the native libs you placed in step 2:
SQLite.setLibraryPath("src/test/resources/location/of/native");
4. Also in the setup method, start the DynamoDB Local server in inMemory mode, so you don't need to delete any records after the tests finish:
final String[] localArgs = { "-inMemory" };
// Keep the DynamoDBProxyServer as a field so it can be stopped in tearDown.
server = ServerRunner.createServerFromCommandLineArgs(localArgs);
server.start();
// Point the client at the local endpoint; the region and credentials are arbitrary.
ddb = AmazonDynamoDBClientBuilder
        .standard()
        .withEndpointConfiguration(new EndpointConfiguration("http://localhost:8000", "local"))
        .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("test", "password")))
        .build();
table = createTable(ddb, TABLE_NAME, HASH_KEY_NAME, SORT_KEY_NAME);
Create the table like this:
private CreateTableResult createTable(AmazonDynamoDB ddb, String tableName, String hashKeyName,
        String sortKeyName)
{
    List<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();
    attributeDefinitions.add(new AttributeDefinition(hashKeyName, ScalarAttributeType.S));
    attributeDefinitions.add(new AttributeDefinition(sortKeyName, ScalarAttributeType.S));

    List<KeySchemaElement> ks = new ArrayList<KeySchemaElement>();
    ks.add(new KeySchemaElement(hashKeyName, KeyType.HASH));
    ks.add(new KeySchemaElement(sortKeyName, KeyType.RANGE));

    ProvisionedThroughput provisionedthroughput = new ProvisionedThroughput(1000L, 1000L);

    CreateTableRequest request =
            new CreateTableRequest()
                    .withTableName(tableName)
                    .withAttributeDefinitions(attributeDefinitions)
                    .withKeySchema(ks)
                    .withProvisionedThroughput(provisionedthroughput);

    return ddb.createTable(request);
}
And here is an example of a test method that tests the table metadata
@Test
public void createTableTest()
{
    TableDescription tableDesc = table.getTableDescription();
    assertEquals(TABLE_NAME, tableDesc.getTableName());
    assertEquals("[{AttributeName: "
            + HASH_KEY_NAME
            + ",KeyType: HASH}, {AttributeName: "
            + SORT_KEY_NAME
            + ",KeyType: RANGE}]",
            tableDesc.getKeySchema().toString());
    assertEquals("[{AttributeName: "
            + HASH_KEY_NAME
            + ",AttributeType: S}, {AttributeName: "
            + SORT_KEY_NAME
            + ",AttributeType: S}]", tableDesc.getAttributeDefinitions().toString());
    assertEquals(Long.valueOf(1000L), tableDesc.getProvisionedThroughput().getReadCapacityUnits());
    assertEquals(Long.valueOf(1000L), tableDesc.getProvisionedThroughput().getWriteCapacityUnits());
    assertEquals("ACTIVE", tableDesc.getTableStatus());
    assertEquals("arn:aws:dynamodb:ddblocal:000000000000:table/" + TABLE_NAME, tableDesc.getTableArn());

    ListTablesResult tables = ddb.listTables();
    assertEquals(1, tables.getTableNames().size());
}
Implement an @After method that deletes the table and shuts down the client and the local server:
@After
public void tearDown() throws Exception
{
    ddb.deleteTable(TABLE_NAME);
    ddb.shutdown();
    server.stop(); // stop the in-memory DynamoDB Local server started in setup
}

Related

DynamoDb streams, just get new updates since

I'm trying to work with DynamoDB streams. I am using the example code shown in this article, modified to work in a basic Spring Boot app (initializr), against an existing DynamoDB table which has streams enabled. Everything appears to work; however, I'm not seeing any new updates.
This particular database has a bulk update once per day at a specific time, and it may get some minor changes now and then during the day. I'm trying to monitor these minor updates. When I run the application I can see the records from the bulk update, but if my application is running and I use the AWS Console to modify, create, or delete a record, I don't seem to get any output.
I'm using:
Spring Boot:2.3.9.RELEASE
amazon-kinesis-client:1.14.2
Java 11
Running on Mac Catalina (though that shouldn't matter)
In my test application I did the following:
package com.test.dynamodb_streams_test_kcl.service;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreams;
import com.amazonaws.services.dynamodbv2.model.*;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import javax.annotation.PostConstruct;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.List;
@Slf4j
@Service
@RequiredArgsConstructor
public class LowLevelKclProcessor {
    private static final String dynamoDbTableName = "global-items";
    private final AmazonDynamoDB dynamoDB;
    private final AmazonDynamoDBStreams dynamoDBStreams;
    private final ZonedDateTime startTime = ZonedDateTime.now();

    @PostConstruct
    public void initialize() {
        log.info("Describing table={}", dynamoDbTableName);
        DescribeTableResult itemTableDescription = dynamoDB.describeTable(dynamoDbTableName);
        log.info("Got description");
        String itemTableStreamArn = itemTableDescription.getTable().getLatestStreamArn();
        log.info("Got stream arn ({}) for table={} tableArn={}", itemTableStreamArn,
                itemTableDescription.getTable().getTableName(), itemTableDescription.getTable().getTableArn());
        // Get all the shard IDs from the stream. Note that DescribeStream returns
        // the shard IDs one page at a time.
        String lastEvaluatedShardId = null;
        do {
            DescribeStreamResult describeStreamResult = dynamoDBStreams.describeStream(
                    new DescribeStreamRequest()
                            .withStreamArn(itemTableStreamArn)
                            .withExclusiveStartShardId(lastEvaluatedShardId));
            List<Shard> shards = describeStreamResult.getStreamDescription().getShards();
            // Process each shard on this page
            for (Shard shard : shards) {
                String shardId = shard.getShardId();
                System.out.println("Shard: " + shard);
                // Get an iterator for the current shard
                GetShardIteratorRequest getShardIteratorRequest = new GetShardIteratorRequest()
                        .withStreamArn(itemTableStreamArn)
                        .withShardId(shardId)
                        .withShardIteratorType(ShardIteratorType.LATEST);
                GetShardIteratorResult getShardIteratorResult =
                        dynamoDBStreams.getShardIterator(getShardIteratorRequest);
                String currentShardIter = getShardIteratorResult.getShardIterator();
                // Shard iterator is not null until the Shard is sealed (marked as READ_ONLY).
                // To prevent running the loop until the Shard is sealed, which will be on average
                // 4 hours, we process only the items that were written into DynamoDB and then exit.
                int processedRecordCount = 0;
                while (currentShardIter != null && processedRecordCount < 100) {
                    System.out.println(" Shard iterator: " + currentShardIter.substring(380));
                    // Use the shard iterator to read the stream records
                    GetRecordsResult getRecordsResult = dynamoDBStreams.getRecords(new GetRecordsRequest()
                            .withShardIterator(currentShardIter));
                    List<Record> records = getRecordsResult.getRecords();
                    for (Record record : records) {
                        // I set a breakpoint on the line below, but it was never hit after the bulk update info
                        if (startTime.isBefore(ZonedDateTime.ofInstant(record.getDynamodb()
                                .getApproximateCreationDateTime().toInstant(), ZoneId.systemDefault()))) {
                            System.out.println(" " + record.getDynamodb());
                        }
                    }
                    processedRecordCount += records.size();
                    currentShardIter = getRecordsResult.getNextShardIterator();
                }
            }
            // If LastEvaluatedShardId is set, then there is
            // at least one more page of shard IDs to retrieve
            lastEvaluatedShardId = describeStreamResult.getStreamDescription().getLastEvaluatedShardId();
        } while (lastEvaluatedShardId != null);
    }
}
Note that your test is based on the low-level DynamoDB Streams API, not on the Kinesis Client Library, so it's normal to have some tricky technical details to deal with.
Your test application has some similarities with the example given in the doc, but it has issues:
When I run the application I can see the records from the bulk update
ShardIteratorType.LATEST will not pick up old records written before the test started running (it starts reading just after the most recent stream records in the shard).
So I will assume that the iterator type was different (e.g. TRIM_HORIZON) and was changed to LATEST later during your tests.
The main issue comes from the fact that your application polls shards sequentially, and it will block on the first shard until it finds 100 new records in that shard (due to the LATEST iterator type).
So you may not see the new minor changes while the test is running if they belong to a different shard.
Solutions:
1- Poll shards in parallel using threads (see the sketch after this list).
2- Filter returned shards using the sequence number of the last logged record, and try to guess the shard that may contain minor changes.
3- Dangerous, and I'm not sure if it works :)
In a test table, and if your data model allows it: disable the current stream and enable a new one, then make sure that all your writes belong to one partition. In the majority of cases, table partitions have a one-to-one relationship with active shards, so theoretically you have only one active shard to deal with.
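For solution 1, a rough sketch (not the author's code) of polling each shard on its own thread could look like this. It assumes the same AmazonDynamoDBStreams client and stream ARN as the example above; the pool type and sleep interval are arbitrary choices.
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreams;
import com.amazonaws.services.dynamodbv2.model.GetRecordsRequest;
import com.amazonaws.services.dynamodbv2.model.GetRecordsResult;
import com.amazonaws.services.dynamodbv2.model.GetShardIteratorRequest;
import com.amazonaws.services.dynamodbv2.model.Record;
import com.amazonaws.services.dynamodbv2.model.Shard;
import com.amazonaws.services.dynamodbv2.model.ShardIteratorType;

// Each shard gets its own polling loop, so a quiet shard cannot block the others.
public class ParallelShardPoller {

    private final AmazonDynamoDBStreams streams;
    private final ExecutorService pool = Executors.newCachedThreadPool();

    public ParallelShardPoller(AmazonDynamoDBStreams streams) {
        this.streams = streams;
    }

    public void pollAll(String streamArn, List<Shard> shards) {
        for (Shard shard : shards) {
            pool.submit(() -> pollShard(streamArn, shard.getShardId()));
        }
    }

    private void pollShard(String streamArn, String shardId) {
        String iterator = streams.getShardIterator(new GetShardIteratorRequest()
                .withStreamArn(streamArn)
                .withShardId(shardId)
                .withShardIteratorType(ShardIteratorType.LATEST))
                .getShardIterator();
        while (iterator != null) {
            GetRecordsResult result = streams.getRecords(
                    new GetRecordsRequest().withShardIterator(iterator));
            for (Record record : result.getRecords()) {
                System.out.println(shardId + " -> " + record.getDynamodb());
            }
            iterator = result.getNextShardIterator();
            try {
                Thread.sleep(1000); // avoid hammering an empty shard
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
For anything beyond an experiment, the Kinesis Client Library with the DynamoDB Streams adapter (as used in the linked article) already handles shard discovery, checkpointing, and parallelism for you.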

How to migrate dynamodb data on major table change?

During development structures and requirements change. Key and index settings need to be changed, that might break incremental table update. So my solution so far is to delete the table and recreate it from the cloudformation stack.
But how to solve this problem with a production deployment? Is it possible to automate dynamodb deployment as follows?
Create new table
Migrate data from old table to new table
Delete old table
Yes, it is perfectly possible to automate such a deployment. As long as you have code to create a table, it should be fairly straightforward to get all of the data from the old table, change the data, and then upload it all to a new table without any drop in uptime. If you say which language you would like to do this in, I can help a bit more.
I've done this before, and I've added below a small generic code sample showing how you could do this in Java.
Java method for creating a table given the class of the object type stored in dynamo:
/**
* Creates a single table with its appropriate configuration (CreateTableRequest)
*/
public void createTable(Class tableClass) {
DynamoDBMapper mapper = createMapper(); // you'll need your own function to do this.
ProvisionedThroughput pt = new ProvisionedThroughput(1L, 1L);
CreateTableRequest ctr = mapper.generateCreateTableRequest(tableClass);
ctr.withProvisionedThroughput(pt);
// Provision throughput and configure projection for secondary indices.
if (ctr.getGlobalSecondaryIndexes() != null) {
for (GlobalSecondaryIndex idx : ctr.getGlobalSecondaryIndexes()) {
if (idx != null) {
idx.withProvisionedThroughput(pt).withProjection(new Projection().withProjectionType("ALL"));
}
}
}
TableUtils.createTableIfNotExists(client, ctr);
}
Java method to delete table:
private static void deleteTable(String tableName) {
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);
Table table = dynamoDB.getTable(tableName);
try {
System.out.println("Issuing DeleteTable request for " + tableName);
table.delete();
System.out.println("Waiting for " + tableName + " to be deleted...this may take a while...");
table.waitForDelete();
}
catch (Exception e) {
System.err.println("DeleteTable request failed for " + tableName);
System.err.println(e.getMessage());
}
}
I would scan the whole table, put all of the content into a List, and map over that list, converting the objects into your new type. Then create a new table of that type but with a different name, push all of your new objects into it, and delete the old table after switching any references you might have from the old table to the new one. A rough sketch of that copy step is shown below. Unfortunately this does mean that everything consuming your tables will need to be able to switch between your two staging tables.
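To make that concrete, here is a sketch of the copy step using DynamoDBMapper. OldItem, NewItem, and convert() are hypothetical names: OldItem and NewItem would be your @DynamoDBTable-annotated classes pointing at the old and new table names, and convert() holds your own mapping logic.
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBScanExpression;

public class TableMigrator {

    public void migrate(DynamoDBMapper mapper) {
        // Scan everything from the old table (the mapper pages through results for you).
        List<OldItem> oldItems = mapper.scan(OldItem.class, new DynamoDBScanExpression());

        // Convert each record into the new shape.
        List<NewItem> newItems = new ArrayList<NewItem>();
        for (OldItem item : oldItems) {
            newItems.add(convert(item));
        }

        // Write the converted records into the new table in batches.
        mapper.batchSave(newItems);
    }

    private NewItem convert(OldItem item) {
        NewItem converted = new NewItem();
        // Copy/rename attributes and recompute keys to match the new schema here.
        return converted;
    }
}
For tables too large to hold in memory, the same idea works with a paginated or parallel scan and per-page batchSave calls instead of one big list.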

Blackberry not creating a valid sqlite database

I have a very unusual problem.
I'm trying to create a simple database (6 tables, 4 of which only have 2 columns).
I'm using an in-house database library which I've used in a previous project, and it does work.
However with my current project there are occasional bugs. Basically the database isn't created correctly. It is added to the sdcard but when I access it I get a DatabaseException.
When I access the device from the desktop manager and try to open the database (with SQLite Database Browser v2.0b1) I get "File is not a SQLite 3 database".
UPDATE
I found that this happens when I delete the database manually off the sdcard.
Since there's no way to stop a user doing that, is there anything I can do to handle it?
CODE
public static boolean initialize()
{
boolean memory_card_available = ApplicationInterface.isSDCardIn();
String application_name = ApplicationInterface.getApplicationName();
if (memory_card_available == true)
{
file_path = "file:///SDCard/" + application_name + ".db";
}
else
{
file_path = "file:///store/" + application_name + ".db";
}
try
{
uri = URI.create(file_path);
FileClass.hideFile(file_path);
} catch (MalformedURIException mue)
{
}
return create(uri);
}
private static boolean create(URI db_file)
{
boolean response = false;
try
{
db = DatabaseFactory.create(db_file);
db.close();
response = true;
} catch (Exception e)
{
}
return response;
}
My only suggestion is to keep a default database in your assets - if there is a problem with the one on the SD card, attempt to recreate it by copying the default one.
Not a very good answer, I expect.
Since it looks like your problem is that the user is deleting your database, just make sure to catch exceptions when you open it (or access it ... wherever you're getting the exception):
try {
URI uri = URI.create("file:///SDCard/Databases/database1.db");
sqliteDB = DatabaseFactory.open(uri);
Statement st = sqliteDB.createStatement( "CREATE TABLE 'Employee' ( " +
"'Name' TEXT, " +
"'Age' INTEGER )" );
st.prepare();
st.execute();
} catch ( DatabaseException e ) {
System.out.println( e.getMessage() );
// TODO: decide if you want to create a new database here, or
// alert the user if the SDCard is not available
}
Note that even though it's probably unusual for a user to delete a private file that your app creates, it's perfectly normal for the SDCard to be unavailable because the device is connected to a PC via USB. So, you really should always be testing for this condition (file open error).
See this answer regarding checking for SDCard availability.
Also, read this about SQLite db storage locations, and make sure to review this answer by Michael Donohue about eMMC storage.
Update: SQLite Corruption
See this link describing the many ways SQLite databases can be corrupted. It definitely sounded to me like maybe the .db file was deleted, but not the journal / wal file. If that was it, you could try deleting database1* programmatically before you create database1.db. But, your comments seem to suggest that it was something else. Perhaps you could look into the file locking failure modes, too.
If you are desperate, you might try changing your code to use a different name (e.g. database2, database3) each time you create a new db, to make sure you're not getting artifacts from the previous db.

SQLITE_BUSY The database file is locked (database is locked) in wicket

I am doing a project in wicket
How to solve the problem.
I came across such a message:
WicketMessage: Can't instantiate page using constructor public itucs.blg361.g03.HomePage()
Root cause:
java.lang.UnsupportedOperationException: [SQLITE_BUSY] The database file is locked (database is locked)
at itucs.blg361.g03.CategoryEvents.CategoryEventCollection.getCategoryEvents(CategoryEventCollection.java:41)
public List<CategoryEvent> getCategoryEvents() {
List<CategoryEvent> categoryEvents = new LinkedList<CategoryEvent>();
try {
String query = "SELECT id, name, group_id"
+ " FROM event_category";
Statement statement = this.db.createStatement();
ResultSet result = statement.executeQuery(query);
while (result.next()) {
int id = result.getInt("id");
String name = result.getString("name");
int group_id = result.getInt("group_id");
categoryEvents.add(new CategoryEvent(id, name, group_id));
}
} catch (SQLException ex) {
throw new UnsupportedOperationException(ex.getMessage());
}
return categoryEvents;
}
at itucs.blg361.g03.HomePage.<init>(HomePage.java:71)
categories = categoryCollection.getCategoryEvents();
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
SQLite allows only one writer to the whole database at a time and, unless you selected "WAL" journal mode, no reader while writing. Moreover, unless you explicitly ask it to wait, it simply returns the SQLITE_BUSY status for any attempt to access the database while a conflicting operation is running.
You can tell sqlite to wait for the database to become available for a specified amount of time. The C-level API is sqlite3_busy_timeout; I never used sqlite from Java though, so I don't know where to find it there.
(...) tell sqlite to wait for the database to become available for specified amount of time.
In order to do it from Java, run the following statement just like a simple SQL statement:
pragma busy_timeout=30000; -- Busy timeout set to 30000 milliseconds
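With the JDBC-style access shown in the question, that would be just another statement executed once per connection, before the queries run (a sketch; the exact driver in use isn't shown in the question):
Statement statement = db.createStatement();
statement.execute("pragma busy_timeout=30000;"); // wait up to 30 s instead of failing with SQLITE_BUSY
statement.close();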

How do I create a unit test that updates a record into database in asp.net

How do I create a unit test that updates a record into database in asp.net
While technically we don't call this a 'unit test' but an 'integration test' (as Oded explained), you can do this by using a unit testing framework such as MSTest (part of Visual Studio 2008/2010 Professional) or one of the freely available unit testing frameworks, such as NUnit.
However, testing an ASP.NET web project is usually pretty hard, especially when you've put all your logic inside web pages. The best thing to do is to extract all your business logic into a separate layer (usually a separate project within your solution) and call that logic from within your web pages. But perhaps you've already got this separation, which would be great.
This way you can also call this logic from within your tests. For integration tests, it is best to have a separate test database. A test database must contain a known (and stable) set of data (or be completely empty). Do not use a copy of your production database, because when data changes, your tests might suddenly fail. Also, you should make sure that all changes made to the database by an integration test are rolled back. Otherwise, the data in your test database is constantly changing, which could cause your tests to suddenly fail.
I always use the TransactionScope in my integration tests (and never in my production code). This ensures that all data will be rolled back. Here is an example of what such an integration test might look like, while using MSTest:
[TestClass]
public class CustomerMovedCommandTests
{
// This test checks whether the Execute method of the
// CustomerMovedCommand class in the business layer
// does the expected changes in the database.
[TestMethod]
public void Execute_WithValidAddress_Succeeds()
{
using (new TransactionScope())
{
// Arrange
int custId = 100;
using (var db = ContextFactory.CreateContext())
{
// Insert customer 100 into test database.
db.Customers.InsertOnSubmit(new Customer()
{
Id = custId, City = "London", Country = "UK"
});
db.SubmitChanges();
}
string expectedCity = "New York";
string expectedCountry = "USA";
var command = new CustomerMovedCommand();
command.CustomerId = custId;
command.NewAddress = new Address()
{
City = expectedCity, Country = expectedCountry
};
// Act
command.Execute();
// Assert
using (var db = ContextFactory.CreateContext())
{
var c = db.Customers.Single(x => x.Id == custId);
Assert.AreEqual(expectedCity, c.City);
Assert.AreEqual(expectedCountry, c.Country);
}
} // Dispose rolls back everything.
}
}
I hope this helps, but next time, please be a little more specific in your question.
