I am using CosmosDB to store my BotState and ConversationState. Now that my codebase has grown, we have started to refactor, and we moved some objects into a common library and such.
After doing so, making the call
userData = await _botAccessors.BotStateAccessor.GetAsync(turnContext, () => new UserData(), cancellationToken);
fails with the following exception
Error resolving type specified in JSON '...'. Path '['BotAccessors.BotState'].ConversationContext.PreviousResponses.$values[0].channelData.$type'.
I have looked at the stored document in Cosmos DB and I can see the problem.
I have tried to set the TypeNameHandling and TypeNameAssemblyFormatHandling like this:
var requestOptions = new RequestOptions
{
JsonSerializerSettings = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.None, TypeNameAssemblyFormatHandling = TypeNameAssemblyFormatHandling.Simple }
};
when creating the CosmosDbStorageOptions, but this did not resolve the issue.
Not sure what to try next.
This is a bug in v2 of the Azure Cosmos DocumentDB Client and has been for a while. Reviewing that issue, it doesn't appear that they plan to fix it in v2; they have instead opted to fix it in v3. Unfortunately:
Bot Framework doesn't plan to update CosmosDbStorage to v3 since CosmosDbPartitionedStorage was released in its place.
Currently, CosmosDbPartitionedStorage is not compatible with non-partitioned databases. A migration plan and backwards-compatibility is currently being discussed but may not be released soon.
So, your only option right now is to basically clone CosmosDbStorage, calling it something like NiteLordsCosmosDbStorage, and performing JSON serialization in the ReadAsync() or WriteAsync() methods, as necessary.
Related
We are in the process of updating our service code from DocumentDB SDK 2.7.0 to Cosmos SDK 3.12.0. Since the change will likely be huge, we would like to do it incrementally, which will result in our service using both SDKs to access the same databases (one executable loading assemblies of both SDKs). Please let us know if that is supported, or if you see any issues in doing so. Also, I couldn't figure out how to do some things the same way with the Cosmos SDK (e.g. specifying "enable cross partition query" when querying items – the query method in 2.7.0 takes FeedOptions as a parameter, whereas the new one in 3.12.0 doesn't). I found this wiki and some sample code, but if you have more info/guidelines for converting from the DocumentDB SDK to the Cosmos SDK, please let me know.
Yes, you can use both the DocumentDB SDK and the Cosmos SDK to access the same databases.
In Cosmos SDK 3.12.0 there is no need to set EnableCrossPartitionQuery to true; cross-partition queries are handled automatically. Just do something like this:
QueryDefinition queryDefinition = new QueryDefinition("select * from c");
FeedIterator<JObject> feedIterator = container.GetItemQueryIterator<JObject>(queryDefinition);
while (feedIterator.HasMoreResults)
{
    foreach (var item in await feedIterator.ReadNextAsync())
    {
        Console.WriteLine(item.ToString());
    }
}
I have a NativeScript (Angular) app that makes API calls to a server to get data. I want to implement bi-directional synchronization once a device gets online, but using the current API, with no BaaS.
I could do a sort of caching: once in a while the app invalidates info in the database and fetches it again. I don't like this approach because there are big lists that may change. They are fetched in batches, i.e. by page. One of them is a list of files downloaded to and stored on the device, so I have to keep those that are still in the list and delete those that are not. It sounds like a nightmare.
How would you solve such a problem?
I use the nativescript-couchbase plugin to store the data. We have the following services:
Connectivity
Data
API Service
Depending on whether the device is online or offline, we either fetch data from the remote API or from the Couchbase DB. Please note that the API service always returns data from Couchbase only.
So in online mode
API Call -> Write to DB -> Return latest data from Couchbase
Offline mode
Read DB -> Return latest data from Couchbase
Also, along with this, we maintain all API calls in a queue, so whenever connectivity returns, the API calls are processed in sequence. Another challenge that you may face when coming back online from offline mode is token expiry. This problem can be solved by showing a small popup to the user after you come online.
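The queued-call idea above can be sketched in plain JavaScript (ApiQueue and sendFn are illustrative names, not part of any NativeScript plugin):

```javascript
// Minimal sketch of an offline API-call queue.
class ApiQueue {
    constructor(sendFn) {
        this.sendFn = sendFn; // performs the actual API call when online
        this.pending = [];    // calls recorded while offline
    }

    // Record a call while the device is offline.
    enqueue(call) {
        this.pending.push(call);
    }

    // When connectivity returns, replay the queued calls in order.
    async flush() {
        while (this.pending.length > 0) {
            const call = this.pending.shift();
            await this.sendFn(call);
        }
    }
}
```

On the connectivity-restored event you would call flush(); token expiry can be handled inside sendFn by refreshing the token before replaying the queued calls.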
I do this by serializing my data to a JSON string and saving it to the device's file system.
When the app loads/reloads I read it from the file.
ie.
const fileSystemModule = require("tns-core-modules/file-system");
const appSettings = require("tns-core-modules/application-settings");

var siteid = appSettings.getNumber("siteid");
var fileName = viewName + ".json"; // viewName identifies the view being cached
const documents = fileSystemModule.knownFolders.documents();
const site_folder = documents.getFolder("site");
const siteid_folder = site_folder.getFolder(siteid.toString());
const filePath = fileSystemModule.path.join(siteid_folder.path, fileName);
const file = fileSystemModule.File.fromPath(filePath);

// json_string is the serialized data; retFun is the caller's callback
file.writeText(json_string)
    .then((result) => {
        file.readText().then((res) => {
            retFun(res);
        });
    }).catch((err) => {
        console.log(err.stack);
    });
I'm currently working on a C# UWP application that runs on Windows 10 IoT Core OS on an ARM processor. For this application, I am using a SQLite DB for my persistence, with Entity Framework Core as my ORM.
I have created my own DBContext and call the Migrate function on startup which creates my DB. I can also successfully create a DBContext instance in my main logic which can successfully read/write data using the model. All good so far.
However, I've noticed that the performance of creating a DbContext for each interaction with the DB is painfully slow. Although I can guarantee that only my application is accessing the database (I'm running on custom hardware with a controlled software environment), I do have multiple threads in my application that need to access the database via the DbContext.
I need to find a way to optimize the connection to my SQLite DB in a way that is thread safe in my application. As I mentioned before, I don't have to worry about any external applications.
At first, I tried to create a SqliteConnection object externally and then pass it in to each DbContext that I create:
_connection = new SqliteConnection(@"Data Source=main.db");
... and then make that available to my DbContext and use it in the OnConfiguring override:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder.UseSqlite(_connection);
}
... and then use the DbContext in my application like this:
using (var db = new MyDbContext())
{
    var data = new MyData { Timestamp = DateTime.UtcNow, Data = "123" };
    db.MyData.Add(data);
    db.SaveChanges();

    // Example data read (inside the using block, while the context is still alive)
    MyDataListView.ItemsSource = db.MyData.ToList();
}
Taking the above approach, I noticed that the connection is closed down automatically when the DbContext is disposed, regardless of the fact that the connection was created externally. So this ends up throwing an exception the second time I create a DbContext with the connection.
Secondly, I tried to create a single DbContext once statically and share it across my entire application. So instead of creating the DbContext in a using statement as above, I tried the following:
// Where Context property returns a singleton instance of MyDbContext
var db = MyDbContextFactory.Context;
var data = new MyData { Timestamp = DateTime.UtcNow, Data = "123" };
db.MyData.Add(data);
db.SaveChanges();
This offers me the performance improvements I hoped for but I quickly realized that this is not thread safe and wider reading has confirmed that I shouldn't do this.
So does anyone have any advice on how to improve the performance when accessing SQLite DB in my case with EF Core and a multi-threaded UWP application? Many thanks in advance.
Secondly, I tried to create a single DbContext once statically and share it across my entire application. So instead of creating the DbContext in a using statement as above, I tried the following...This offers me the performance improvements I hoped for but I quickly realized that this is not thread safe and wider reading has confirmed that I shouldn't do this.
I don't know why we shouldn't do this. Maybe you can share something about what you read. But I think you can make the DbContext object global and static, and when you want to do CRUD, you can do it on the main thread like this:
await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
    // App.BloggingDB is the static global DbContext defined in the App class
    var blog = new Blog { Url = NewBlogUrl.Text };
    App.BloggingDB.Add(blog);
    App.BloggingDB.SaveChanges();
});
But do dispose the DbContext at a proper time, as it won't be disposed automatically.
I am trying to read all existing messages on an Azure ServiceBus Subscription, using the Microsoft.Azure.ServiceBus.dll (in .Net Core 2.1) but am struggling.
I've found many examples that the following should work, but it doesn't:
var subscriptionClient = new SubscriptionClient(ServiceBusConnectionString, topicName, subscription, ReceiveMode.PeekLock, null);
var totalRetrieved = 0;
while (totalRetrieved < count)
{
    var messageEnumerable = subscriptionClient.PeekBatch(count);
    //// ... code removed from this example as not relevant
}
My issue is that the .PeekBatch method isn't available, and I'm confused as to how I need to approach this.
I've downloaded the source for the ServiceBusExplorer from GitHub (https://github.com/paolosalvatori/ServiceBusExplorer) and the above code example is pretty much as it's doing it. But not in .Net Core / Microsoft.Azure.ServiceBus namespace.
For clarity though, I'm trying to read messages that are already on the queue - I've worked through other examples that create listeners that respond to new messages, but I need to work in this disconnected manner, after the message has already been placed on the queue.
ServiceBusExplorer uses WindowsAzure.ServiceBus Library, which is a .Net Framework Library and you cannot use it in .Net Core applications. You should use Microsoft.Azure.ServiceBus (.Net Standard Library) in .Net Core applications.
Check here for samples of Microsoft.Azure.ServiceBus
var subscriptionClient = new SubscriptionClient(ServiceBusConnectionString, topicName, subscription, ReceiveMode.PeekLock, null);
subscriptionClient.RegisterMessageHandler(
    async (message, token) =>
    {
        await subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
    },
    new MessageHandlerOptions(exceptionArgs => Task.CompletedTask)); // an exception handler is required by this overload
Try using RegisterMessageHandler. It will receive messages continuously from the entity. It registers a message handler and begins a new thread to receive messages. This handler is awaited every time a new message is received by the receiver.
I am having some trouble with the Serverless Framework and DynamoDB.
After my Lambda function executes, context.succeed(result) returns the result, but nothing is written to DynamoDB.
Here is the link of demo repo.
I've read this question
I added the resource to the s-resources-cf.json, then ran serverless resources deploy again.
After sending the request, it still does nothing with DynamoDB.
Here's what I've done:
Create a table: posts with primary key in specific region
Attach AdministratorAccess to my IAM role (I know it's bad to do that.)
Add {"Effect": "Allow", "Action": ["*"], "Resource":"arn:aws:dynamodb:${region}:*:table/*"} to the s-resources-cf.json
Is there anything I'm still misunderstanding?
Your demo repo does not appear to include the AWS SDK or to set the region, as noted in the Getting Started guide. I.e.:
var AWS = require("aws-sdk");
var DOC = require("dynamodb-doc");
AWS.config.update({region: "us-west-1"});
var docClient = new DOC.DynamoDB();
...
Note that dynamodb-doc was deprecated almost a year ago. You may want to try the DynamoDB DocumentClient instead. This updated API has much clearer error-handling semantics that will probably help point out where the problem is.
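A minimal sketch of what a DocumentClient-based write might look like (the table name posts comes from the question; the helper name and item fields are hypothetical):

```javascript
// Hypothetical helper that builds the params object for DocumentClient.put().
function buildPutParams(tableName, item) {
    return { TableName: tableName, Item: item };
}

// In the Lambda handler you would then do something like (requires the aws-sdk package):
// var AWS = require("aws-sdk");
// AWS.config.update({ region: "us-west-1" });
// var docClient = new AWS.DynamoDB.DocumentClient();
// docClient.put(buildPutParams("posts", { id: "1", title: "hello" }), function (err, data) {
//     if (err) context.fail(err);      // surfaces the real error instead of silently succeeding
//     else context.succeed(data);
// });
```

Checking err in the put callback is the key part: if the write is failing for a permissions or region reason, context.fail(err) will surface it instead of the function appearing to succeed with nothing written.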