How to get a DynamoDB object from a DynamoDB Local setup - amazon-dynamodb

Trying to write an integration test for my logic, using the recommended way to launch DynamoDB Local:
final String port = getAvailablePort();
this.server = ServerRunner.createServerFromCommandLineArgs(new String[] { "-inMemory", "-port", port });
server.start();
amazonDynamoDB = AmazonDynamoDBClientBuilder.standard()
        .withEndpointConfiguration(
                // we can use any region here
                new AwsClientBuilder.EndpointConfiguration("http://localhost:" + port, "us-west-2"))
        .build();
I am planning to use com.amazonaws.services.dynamodbv2.document.DynamoDB in my production code to read from and write to DynamoDB. I want to reuse some of the production write code to set up test data, so I will need a com.amazonaws.services.dynamodbv2.document.DynamoDB object. However, the DynamoDB Local setup above only gives me a com.amazonaws.services.dynamodbv2.AmazonDynamoDB; any clue/suggestion on how to convert?

You can create an instance of DynamoDB from an AmazonDynamoDB using the constructor with the signature DynamoDB(AmazonDynamoDB client).
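A minimal sketch, assuming the amazonDynamoDB client built in the test setup above (the table name is just an illustration):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Table;

// Wrap the low-level client from DynamoDB Local in the document API
DynamoDB dynamoDB = new DynamoDB(amazonDynamoDB);

// The wrapper can now be handed to the existing production write code,
// e.g. to look up a table before inserting test data
Table table = dynamoDB.getTable("MyTable");

The wrapper talks to whatever endpoint the underlying AmazonDynamoDB was built with, so it points at the local instance without any further configuration.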

Related

Creating Connection Pool and using the connection pool in JanusGraph

I am using JanusGraph. How do I create a connection pool while connecting to JanusGraph remotely, and then use the pool to borrow connections?
Right now I am doing something like:
private static void init() {
    String uri = "localhost";
    int poolSize = 5;

    graph = JanusGraphFactory.open("inmemory");

    cluster = Cluster.build()
            .addContactPoint(uri)
            .port(8182)
            .serializer(new GryoMessageSerializerV1d0(
                    GryoMapper.build().addRegistry(JanusGraphIoRegistry.getInstance())))
            .maxConnectionPoolSize(poolSize)
            .minConnectionPoolSize(poolSize)
            .create();

    gts = graph
            .traversal()
            .withRemote(DriverRemoteConnection.using(cluster));
}
This init method is called once, and then anyone who requires a connection simply calls the method below:
public GraphTraversalSource getConnection() {
    return gts.clone();
}
Please note that the withRemote() method is deprecated now. I am not sure whether I am doing this correctly.
I think you're confusing some concepts. You only need to use the TinkerPop driver (i.e. Cluster) if you are connecting to a Graph instance remotely. In your case you're creating your JanusGraph instance locally, so you can simply do graph.traversal() and start writing Gremlin. If, on the other hand, you hosted your JanusGraph instance in Gremlin Server, then you would need to use the withRemote() option. As you mention, withRemote() in the fashion you are calling it is deprecated, but the javadoc mentions the new method, which can also be found in the documentation:
import static org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource.traversal;
GraphTraversalSource g = traversal().withRemote(DriverRemoteConnection.using(cluster));
To understand all the different options for connecting to a Graph instance I'd suggest reading this section of TinkerPop's Reference Documentation.
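For the embedded case described in the question, a minimal sketch (using the same in-memory backend as the question; no Cluster or driver is involved):

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

// Open the local JanusGraph instance and traverse it directly
JanusGraph graph = JanusGraphFactory.open("inmemory");
GraphTraversalSource g = graph.traversal();

// Example Gremlin: count the vertices in the graph
long vertexCount = g.V().count().next();

Because the graph lives in the same JVM there is no connection pool to manage; pooling only becomes relevant when you connect to a remote Gremlin Server through Cluster.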

Optimize connection to SQLite DB using EF Core in UWP app

I'm currently working on a C# UWP application that runs on Windows 10 IoT Core OS on an ARM processor. For this application, I am using a SQLite DB for my persistence, with Entity Framework Core as my ORM.
I have created my own DBContext and call the Migrate function on startup which creates my DB. I can also successfully create a DBContext instance in my main logic which can successfully read/write data using the model. All good so far.
However, I've noticed that the performance of creating a DbContext for each interaction with the DB is painfully slow. Although I can guarantee that only my application is accessing the database (I'm running on custom hardware with a controlled software environment), I do have multiple threads in my application that need to access the database via the DbContext.
I need to find a way to optimize the connection to my SQLite DB in a way that is thread safe in my application. As I mentioned before, I don't have to worry about any external applications.
At first, I tried to create a SqliteConnection object externally and then pass it in to each DbContext that I create:
_connection = new SqliteConnection(@"Data Source=main.db");
... and then make that available to my DbContext and use it in the OnConfiguring override:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlite(_connection);
}
... and then use the DbContext in my application like this:
using (var db = new MyDbContext())
{
    var data = new MyData { Timestamp = DateTime.UtcNow, Data = "123" };
    db.MyData.Add(data);
    db.SaveChanges();
}

// Example data read
MyDataListView.ItemsSource = db.MyData.ToList();
Taking the above approach, I noticed that the connection is closed down automatically when the DbContext is disposed, regardless of the fact that the connection was created externally. So this ends up throwing an exception the second time I create a DbContext with the connection.
Secondly, I tried to create a single DbContext once statically and share it across my entire application. So instead of creating the DbContext in a using statement as above, I tried the following:
// Where the Context property returns a singleton instance of MyDbContext
var db = MyDbContextFactory.Context;
var data = new MyData { Timestamp = DateTime.UtcNow, Data = "123" };
db.MyData.Add(data);
db.SaveChanges();
This offers me the performance improvements I hoped for but I quickly realized that this is not thread safe and wider reading has confirmed that I shouldn't do this.
So does anyone have any advice on how to improve the performance when accessing SQLite DB in my case with EF Core and a multi-threaded UWP application? Many thanks in advance.
"Secondly, I tried to create a single DbContext once statically and share it across my entire application. So instead of creating the DbContext in a using statement as above, I tried the following... This offers me the performance improvements I hoped for but I quickly realized that this is not thread safe and wider reading has confirmed that I shouldn't do this."
I don't know why you shouldn't do this; maybe you can share something about what you read. But I think you can make the DbContext object global and static, and when you want to do CRUD, you can do it on the main thread like this:
await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
    // App.BloggingDB is the static global DbContext defined in the App class
    var blog = new Blog { Url = NewBlogUrl.Text };
    App.BloggingDB.Add(blog);
    App.BloggingDB.SaveChanges();
});
But do dispose the DbContext at a proper time, as it won't get disposed automatically.

Can a single Meteor instance listen and react to multiple MongoDB databases?

I would like to build a network of distributed Meteor apps that each share some parts of their Mongo database with each other. Is this possible?
Yes, you can do this by changing how you define the collection on the server side:
var database = new MongoInternals.RemoteCollectionDriver("<other mongo url>");
MyCollection = new Mongo.Collection("collection_name", { _driver: database });

LINQ to SQL over multiple databases - Failover Partner?

I have an ASP.NET site using LINQ to SQL set up with multiple databases (by changing the Source for the tables on the other/"secondary" server). As seen here.
How do I set up a Failover Partner for this? I have it set up in the connection string, but I had to hardcode the Source with the server name, so that doesn't work.
The best method I could find/come up with was to create two LINQ External Mappings. On startup, I check to see if the main server is running using the main Mapping. If it's not, I set my connections to use the second LINQ mapping that has the FailOver database:
var xmlPath = @"C:\myAppFailOver.map";
System.Data.Linq.Mapping.XmlMappingSource linqMapping = System.Data.Linq.Mapping.XmlMappingSource.FromReader(System.Xml.XmlReader.Create(xmlPath));

using (DataClassesDataContext db = new DataClassesDataContext(connString, linqMapping))
{
    //code
}

How to avoid DTC when using PersistenceIOParticipant in WF4.0?

I am using PersistenceIOParticipant in WF 4.0 to save something into the database together with the persistence of the workflow instance. I have no idea how to use the same connection object as the workflow persistence, so I am forced to use a distributed transaction. Is there any way to avoid using DTC?
I found the WF4 sample project "WorkflowApplication ReadLine Host" useful to see an example of PersistenceIOParticipant in action. I toggled the booleans in the constructor to verify that a transaction was being used and that MSDTC was required. See http://msdn.microsoft.com/en-us/library/dd764467.aspx
If using SQL Server 2008+, then it shouldn't matter if multiple connections are required. After using reflector on the SqlWorkflowInstanceStore, I discovered it was setting some additional properties on the connection string. Here is the code it uses to create a connection string:
SqlConnectionStringBuilder builder2 = new SqlConnectionStringBuilder(connectionString);
builder2.AsynchronousProcessing = true;
builder2.ConnectTimeout = (int)TimeSpan.FromSeconds(15.0).TotalSeconds;
builder2.ApplicationName = "DefaultPool";
SqlConnectionStringBuilder builder = builder2;
return builder.ToString();
I verified with profiler that MSDTC is not involved when using a custom IO participant and this connection string code. Don't forget to pass true to the base PersistenceIOParticipant constructor and flow Transaction.Current appropriately. Obviously, Microsoft could change that at anytime so use at your own discretion.
