My async method returns a list with duplicated elements. It is an async method that uses the MySQL connector to connect to the database. It executes a query (SELECT *) and, using a MySqlDataReader, I read rows and add them to a list until the last ReadAsync() call.
Asynchronous programming is still black magic for me - I would appreciate any feedback, or pointers to illogical lines of code with an explanation.
This method will be used in my web API controller, and its purpose is to return all entries from the 'Posts' table. The code works fine when I 'reset' the temp object on each loop iteration with temp = new Post(); but I assume that is unacceptable? What if my database had not 15 but 15,000 entries?
public async Task<List<Post>> GetPostsAsync()
{
    List<Post> posts = new List<Post>();
    Post temp = new Post();
    try
    {
        await _context.conn.OpenAsync();
        MySqlCommand cmd = new MySqlCommand("USE idunnodb; SELECT * FROM Posts;", _context.conn);
        await using MySqlDataReader reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync())
        {
            temp.PostID = (int)reader[0];
            temp.UserID = (int)reader[1];
            temp.PostDate = reader[2].ToString();
            temp.PostTitle = reader[3].ToString();
            temp.PostDescription = reader[4].ToString();
            temp.ImagePath = reader[5].ToString();
            posts.Add(temp);
        }
        await _context.conn.CloseAsync();
    }
    catch (Exception ex)
    {
        return Enumerable.Empty<Post>().ToList();
    }
    return posts;
}
You can only read data by looping through a MySqlDataReader: each iteration reads one row, which you then add to the list.
Your code behaves this way because temp is declared and instantiated outside of your reader.ReadAsync() loop, so you are updating the same object reference on every iteration; that is why you see repeated objects in your list. Post is a reference type, so the list ends up holding many references to one object.
So you need to instantiate it inside the reader.ReadAsync() loop:
while (await reader.ReadAsync())
{
    Post temp = new Post();
    ...
}
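Allocating one Post per row is exactly what you want here; 15,000 small allocations are cheap in .NET, and there is no way to build a list of 15,000 distinct objects without creating 15,000 objects. For reference, here is a minimal corrected sketch of the whole method (keeping your Post, _context.conn, and column order as given, and assuming the connection string already selects the idunnodb database):

public async Task<List<Post>> GetPostsAsync()
{
    List<Post> posts = new List<Post>();
    try
    {
        await _context.conn.OpenAsync();
        using MySqlCommand cmd = new MySqlCommand("SELECT * FROM Posts;", _context.conn);
        await using MySqlDataReader reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync())
        {
            // A fresh Post per row, so every list entry is a distinct object.
            posts.Add(new Post
            {
                PostID = (int)reader[0],
                UserID = (int)reader[1],
                PostDate = reader[2].ToString(),
                PostTitle = reader[3].ToString(),
                PostDescription = reader[4].ToString(),
                ImagePath = reader[5].ToString()
            });
        }
    }
    catch (Exception)
    {
        return new List<Post>();
    }
    finally
    {
        // Close in finally so the connection is released even when the query throws.
        await _context.conn.CloseAsync();
    }
    return posts;
}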
From a basic standpoint, what I am trying to do is get a list of keys (key names) from session storage.
The way I am trying to do this is by calling the JsRuntime.InvokeAsync method to:
get the number of keys in session storage, and
loop through the items in session storage and get each key name.
public async Task<List<string>> GetKeysAsync()
{
    var dataToReturn = new List<string>();
    var storageLength = await JsRuntime.InvokeAsync<string>("sessionStorage.length");
    if (int.TryParse(storageLength, out var slength))
    {
        for (var i = 1; i <= slength; i++)
        {
            dataToReturn.Add(await JsRuntime.InvokeAsync<string>($"sessionStorage.key({i})"));
        }
    }
    return dataToReturn;
}
When calling JsRuntime.InvokeAsync<string>("sessionStorage.length") or JsRuntime.InvokeAsync<string>("sessionStorage.key(0)") I get the error "The value 'sessionStorage.length' is not a function." or "The value 'sessionStorage.key(0)' is not a function."
I am able to get a single item from session storage by key name without issue, as in the following example.
public async Task<string> GetStringAsync(string key)
{
    return await JsRuntime.InvokeAsync<string>("sessionStorage.getItem", key);
}
When I use .length or .key(0) in the Chrome console they work as expected, but not when called through JsRuntime.
I was able to get this to work without using the sessionStorage.length property. I am not 100% happy with the solution, but it does work as needed.
Please see the code below. The main thing with .key was to pass the index as a separate argument to the InvokeAsync method.
I think the reason is that JsRuntime.InvokeAsync treats its first argument as a function identifier and adds the () to the end automatically, so sessionStorage.length was becoming sessionStorage.length() and thus would not work, and sessionStorage.key(0) was becoming sessionStorage.key(0)(), etc. But that is just a guess.
public async Task<List<string>> GetKeysAsync()
{
    var dataToReturn = new List<string>();
    var dataPoint = "1";
    while (!string.IsNullOrEmpty(dataPoint))
    {
        // Pass the index as an InvokeAsync argument rather than embedding
        // it in the identifier string.
        dataPoint = await JsRuntime.InvokeAsync<string>("sessionStorage.key", dataToReturn.Count);
        if (!string.IsNullOrEmpty(dataPoint))
            dataToReturn.Add(dataPoint);
    }
    return dataToReturn;
}
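If you do want to read the length property directly, one workaround (a sketch of my own, not part of the solution above) is to invoke window.eval, which is a real function, and let it evaluate the property expression:

// Sketch: read a property via eval, since InvokeAsync can only call functions.
// Assumes eval is available (it is in browsers, unless blocked by a Content Security Policy).
public async Task<int> GetSessionStorageLengthAsync()
{
    return await JsRuntime.InvokeAsync<int>("eval", "sessionStorage.length");
}

With the length known, sessionStorage.key can then be called with indexes 0 through length - 1.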
Recently I have been working a lot with Cosmos and ran into an issue when looking at deleting documents.
I need to delete around ~40 million documents in my Cosmos container. I've looked around quite a bit and found a few options, which I have tried. The two fastest I've tried are using a stored procedure within Cosmos to delete records, and using a bulk executor.
Both of these options have given subpar results compared to what I am looking for. I believe this should be obtainable within a couple of hours, but at the moment I am getting performance of around 1 hour per million records.
The two methods I used can also be seen here:
Stack Overflow Post on Document Deletion
My documents are about 35 keys long, where half are string values and the other half are float/integer values, if that matters, and there are around 100k records per partition.
Here are the two examples that I am using to attempt the deletion:
The first one is using C#, and the documentation that helped me with this is here:
GitHub Documentation azure-cosmosdb-bulkexecutor-dotnet-getting-started
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.CosmosDB.BulkExecutor;
using Microsoft.Azure.CosmosDB.BulkExecutor.BulkImport;
using Microsoft.Azure.CosmosDB.BulkExecutor.BulkDelete;

namespace BulkDeleteSample
{
    class Program
    {
        private static readonly string EndpointUrl = "xxxx";
        private static readonly string AuthorizationKey = "xxxx";
        private static readonly string DatabaseName = "xxxx";
        private static readonly string CollectionName = "xxxx";

        static ConnectionPolicy connectionPolicy = new ConnectionPolicy
        {
            ConnectionMode = ConnectionMode.Direct,
            ConnectionProtocol = Protocol.Tcp
        };

        static async Task Main(string[] args)
        {
            DocumentClient client = new DocumentClient(new Uri(EndpointUrl), AuthorizationKey, connectionPolicy);
            DocumentCollection dataCollection = GetCollectionIfExists(client, DatabaseName, CollectionName);

            // Set retry options high during initialization (default values).
            client.ConnectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 30;
            client.ConnectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 9;

            BulkExecutor bulkExecutor = new BulkExecutor(client, dataCollection);
            await bulkExecutor.InitializeAsync();

            // Set retries to 0 to pass complete control to bulk executor.
            client.ConnectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 0;
            client.ConnectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 0;

            List<Tuple<string, string>> pkIdTuplesToDelete = new List<Tuple<string, string>>();
            for (int i = 0; i < 99999; i++)
            {
                pkIdTuplesToDelete.Add(new Tuple<string, string>("1", i.ToString()));
            }

            BulkDeleteResponse bulkDeleteResponse = await bulkExecutor.BulkDeleteAsync(pkIdTuplesToDelete);
        }

        static DocumentCollection GetCollectionIfExists(DocumentClient client, string databaseName, string collectionName)
        {
            return client.CreateDocumentCollectionQuery(UriFactory.CreateDatabaseUri(databaseName))
                .Where(c => c.Id == collectionName).AsEnumerable().FirstOrDefault();
        }
    }
}
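One thing worth checking with the bulk executor (my suggestion, not from the original post) is what the response actually reports, since throughput problems usually show up as RU consumption. A minimal sketch using the response properties shown in the getting-started repository:

// Sketch: inspect the BulkDeleteResponse to see where the time and RUs went.
Console.WriteLine($"Deleted {bulkDeleteResponse.NumberOfDocumentsDeleted} documents " +
                  $"in {bulkDeleteResponse.TotalTimeTaken} " +
                  $"consuming {bulkDeleteResponse.TotalRequestUnitsConsumed} RUs");

If the RUs consumed per second are close to the container's provisioned throughput, the bottleneck is the RU budget rather than the client code.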
The second one is using a stored procedure I found, which deletes data from a given partition using a query, and which I am running via a Python notebook.
Here is the stored procedure:
/**
 * A Cosmos DB stored procedure that bulk deletes documents for a given query.
 * Note: You may need to execute this sproc multiple times (depending whether the sproc is able to delete every document within the execution timeout limit).
 *
 * @function
 * @param {string} query - A query that provides the documents to be deleted (e.g. "SELECT c._self FROM c WHERE c.founded_year = 2008"). Note: For best performance, reduce the # of properties returned per document in the query to only what's required (e.g. prefer SELECT c._self over SELECT * )
 * @returns {Object.<number, boolean>} Returns an object with the two properties:
 *   deleted - contains a count of documents deleted
 *   continuation - a boolean whether you should execute the sproc again (true if there are more documents to delete; false otherwise).
 */
function bulkDeleteSproc(query) {
    var collection = getContext().getCollection();
    var collectionLink = collection.getSelfLink();
    var response = getContext().getResponse();
    var responseBody = {
        deleted: 0,
        continuation: true
    };

    // Validate input.
    if (!query) throw new Error("The query is undefined or null.");

    tryQueryAndDelete();

    // Recursively runs the query w/ support for continuation tokens.
    // Calls tryDelete(documents) as soon as the query returns documents.
    function tryQueryAndDelete(continuation) {
        var requestOptions = {continuation: continuation};
        var isAccepted = collection.queryDocuments(collectionLink, query, requestOptions, function (err, retrievedDocs, responseOptions) {
            if (err) throw err;
            if (retrievedDocs.length > 0) {
                // Begin deleting documents as soon as documents are returned from the query results.
                // tryDelete() resumes querying after deleting; no need to page through continuation tokens.
                // - this is to prioritize writes over reads given timeout constraints.
                tryDelete(retrievedDocs);
            } else if (responseOptions.continuation) {
                // Else if the query came back empty, but with a continuation token; repeat the query w/ the token.
                tryQueryAndDelete(responseOptions.continuation);
            } else {
                // Else if there are no more documents and no continuation token - we are finished deleting documents.
                responseBody.continuation = false;
                response.setBody(responseBody);
            }
        });

        // If we hit execution bounds - return continuation: true.
        if (!isAccepted) {
            response.setBody(responseBody);
        }
    }

    // Recursively deletes documents passed in as an array argument.
    // Attempts to query for more on empty array.
    function tryDelete(documents) {
        if (documents.length > 0) {
            // Delete the first document in the array.
            var isAccepted = collection.deleteDocument(documents[0]._self, {}, function (err, responseOptions) {
                if (err) throw err;
                responseBody.deleted++;
                documents.shift();
                // Delete the next document in the array.
                tryDelete(documents);
            });

            // If we hit execution bounds - return continuation: true.
            if (!isAccepted) {
                response.setBody(responseBody);
            }
        } else {
            // If the document array is empty, query for more documents.
            tryQueryAndDelete();
        }
    }
}
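For what it's worth, the sproc can also be driven from C# with the same SDK the bulk-executor sample already uses. The following is a minimal sketch under my own assumptions (the sproc registered as "bulkDeleteSproc", one call per partition key value, and a _self-only query as its own comments recommend), not code from the original post:

// Sketch: run the bulk-delete sproc for one partition until it reports no continuation.
static async Task DeletePartitionAsync(DocumentClient client, string partitionKeyValue)
{
    Uri sprocUri = UriFactory.CreateStoredProcedureUri(DatabaseName, CollectionName, "bulkDeleteSproc");
    var options = new RequestOptions { PartitionKey = new PartitionKey(partitionKeyValue) };

    bool continuation = true;
    while (continuation)
    {
        // The sproc returns { deleted, continuation }.
        var result = await client.ExecuteStoredProcedureAsync<dynamic>(
            sprocUri, options, "SELECT c._self FROM c");
        continuation = (bool)result.Response.continuation;
    }
}

Because a sproc executes against a single partition, calls like this for different partition key values can run in parallel, which matters with ~40 million documents at ~100k per partition (roughly 400 partitions).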
I'm not sure if I am doing anything wrong or if the performance just isn't there with Cosmos, but I'm finding it quite difficult to achieve what I'm looking for; any advice is greatly appreciated.
I currently have a problem with the DataReader when creating Microsoft.SqlServer.Management.Smo.Table objects asynchronously. Note: I derived my SmoTable from TableView and IDisposable.
private async Task Generate()
{
    await Task.Run(() =>
    {
        MSSMSDatabase db = CreateDB(txtDBname.Text);
        List<string> tableNames = GetTableNameList();
        foreach (string tableName in tableNames)
        {
            using (SmoTable tbl = new Table(db, tableName)) // <=== after a few loops, the error occurs within here.
            {
                foreach (var col in columnList)
                {
                    tbl.AddColumns(col);
                }
                tbl.Create();
            }
        }
    });
}
Microsoft.SqlServer.Management.Smo.FailedOperationException: InvalidOperationException: There is already an open DataReader associated with this Connection which must be closed first.
I tried implementing IDisposable on the SmoTable class that I derived from the TableView class, but I still get the same error.
Thanks in advance.
Through trial and error I found out that you need to create a new connection for each table creation, so that a separate DataReader is created for it. So, if you include the instantiation of Server in the loop, it will create a new connection and hence a new DataReader.
foreach (string tableName in tableNames)
{
    using (SmoTable tbl = new Table(db, tableName))
    {
        foreach (var col in columnList)
        {
            _server = GetSQLServer(); // <=== this is basically a Server server = new Server(); return server; kind of method.
            db = _server.Databases[_databaseName];
            tbl.AddColumns(col);
        }
        tbl.Create();
    }
}
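Note that, as written above, the new Server (and therefore the new connection) is created inside the inner column loop, once per column. A slightly tidier sketch of the same idea (still using the hypothetical GetSQLServer helper) creates one fresh connection per table instead:

foreach (string tableName in tableNames)
{
    // One new Server, and therefore one new connection/DataReader, per table.
    _server = GetSQLServer();
    db = _server.Databases[_databaseName];
    using (SmoTable tbl = new Table(db, tableName))
    {
        foreach (var col in columnList)
        {
            tbl.AddColumns(col);
        }
        tbl.Create();
    }
}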
I am trying to execute a stored procedure in ASP.NET. The stored procedure requires 3 parameters, all of which are IDs (ints): TaskID, ExhibitID, and InvestigatorID.
I have a hidden field that contains an array of ExhibitIDs that came from a JavaScript function.
My question is: how do I get the query to execute as I am looping through the array?
Here is an example of how I call the stored procedure:
var cnSaveTask = new SqlConnection(ConfigurationManager.ConnectionStrings["OSCIDConnectionString"].ToString());
var comLinkExhibitToTask = new SqlCommand("p_CaseFileTasksExhibitLinkAdd", cnSaveTask) { CommandType = CommandType.StoredProcedure };
foreach (string exhibit in hidExhibitsIDs.Value.Split(','))
{
    comLinkExhibitToTask.Parameters.AddWithValue("@TaskID", taskID);
    comLinkExhibitToTask.Parameters.AddWithValue("@ExhibitID", Convert.ToInt32(exhibit));
    comLinkExhibitToTask.Parameters.AddWithValue("@InvestigatorID", int.Parse(Session["InvestigatorID"].ToString()));
}
try
{
    cnSaveTask.Open();
    comLinkExhibitToTask.ExecuteNonQuery();
}
It is not working in my DB though; nothing gets added. My guess is that since the loop only adds parameters and never executes, it just keeps replacing the exhibit ID every time and then eventually tries to execute once. But I don't think simply adding comLinkExhibitToTask.ExecuteNonQuery() outside the try is a good idea. Any suggestions?
You can either move the try block into the foreach loop or wrap the foreach loop with a try block, depending on what error handling you want: continue with the next exhibit on error, or abort execution completely.
I've never used AddWithValue, so I can't speak to its functionality. Here's how I typically write a DB call like this.
using (SqlConnection cnSaveTask = new SqlConnection(ConfigurationManager.ConnectionStrings["OSCIDConnectionString"].ConnectionString))
{
    cnSaveTask.Open();
    using (SqlCommand comLinkExhibitToTask = new SqlCommand("p_CaseFileTasksExhibitLinkAdd", cnSaveTask))
    {
        comLinkExhibitToTask.CommandType = CommandType.StoredProcedure;
        comLinkExhibitToTask.Parameters.Add(new SqlParameter("@TaskID", SqlDbType.Int) { Value = taskID });
        // etc.
        comLinkExhibitToTask.ExecuteNonQuery();
    }
}
The solution:
var cnSaveTask = new SqlConnection(ConfigurationManager.ConnectionStrings["OSCIDConnectionString"].ToString());
try
{
    var comLinkExhibitToTask = new SqlCommand("p_CaseFileTasksExhibitLinkAdd", cnSaveTask) { CommandType = CommandType.StoredProcedure };
    cnSaveTask.Open();
    comLinkExhibitToTask.Parameters.Add(new SqlParameter("@TaskID", SqlDbType.Int));
    comLinkExhibitToTask.Parameters.Add(new SqlParameter("@ExhibitID", SqlDbType.Int));
    comLinkExhibitToTask.Parameters.Add(new SqlParameter("@InvestigatorID", SqlDbType.Int));
    foreach (string exhibit in hidExhibitsIDs.Value.Split(','))
    {
        comLinkExhibitToTask.Parameters["@TaskID"].Value = taskID;
        comLinkExhibitToTask.Parameters["@ExhibitID"].Value = Convert.ToInt32(exhibit);
        comLinkExhibitToTask.Parameters["@InvestigatorID"].Value = int.Parse(Session["InvestigatorID"].ToString());
        comLinkExhibitToTask.ExecuteNonQuery();
    }
}
catch (Exception ex)
{
    ErrorLogger.Log(0, ex.Source, ex.Message);
}
finally
{
    if (cnSaveTask.State == ConnectionState.Open)
    {
        cnSaveTask.Close();
    }
}
Since I was in a loop, it kept adding parameters. So declare the parameters once outside the loop and only assign their values inside the loop. That way there are only three parameters, and the values are passed in accordingly.
I've got data returned from my JavaScript client that includes only the data that has changed. That is, I may have downloaded an array where each row contains 10 columns of JSON, but on the update, the only data returned to me is the data that was updated. On my update, I want to update only the columns that changed (not all of them).
In other words, I have code like the below, but because I'm passing in an instance of the President class, I have no way of knowing what actually came in on the original JSON.
How can I update just what comes into my MVC3 update method and not all columns? That is, 8 of the columns may not come in and will be null in the data parameter passed in. I don't want to wipe out all my data because of that.
[HttpPost]
public JsonResult Update(President data)
{
    bool success = false;
    string message = "no record found";
    if (data != null && data.Id > 0)
    {
        using (var db = new USPresidentsDb())
        {
            var rec = db.Presidents.FirstOrDefault(a => a.Id == data.Id);
            rec.FirstName = data.FirstName;
            db.SaveChanges();
            success = true;
            message = "Update method called successfully";
        }
    }
    return Json(new
    {
        data,
        success,
        message
    });
}
rec.FirstName = data.FirstName ?? rec.FirstName;
I would use reflection in this case, because otherwise the code will be too messy, like

if (data.FirstName != null)
    rec.FirstName = data.FirstName;
...

and so on for all the fields.
Using reflection, it is easier to do this. See this method:
public static void CopyOnlyModifiedData<T>(T source, ref T destination)
{
    foreach (var propertyInfo in source.GetType().GetProperties())
    {
        object value = propertyInfo.GetValue(source, null);
        if (value != null && !value.GetType().IsValueType)
        {
            destination.GetType().GetProperty(propertyInfo.Name, value.GetType()).SetValue(destination, value, null);
        }
    }
}
USAGE
CopyOnlyModifiedData<President>(data, ref rec);
Please mind that this won't work for value-type properties.
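One way around that limitation (my own sketch, not part of the answer above) is to declare the optional value-type columns as nullable on the incoming model, e.g. int? instead of int, and then copy every non-null property regardless of its type:

public static void CopyOnlyModifiedData<T>(T source, ref T destination)
{
    foreach (var propertyInfo in typeof(T).GetProperties())
    {
        object value = propertyInfo.GetValue(source, null);
        // Null means "not sent" for both reference types and Nullable<T>
        // value types, so only properties that actually arrived are copied.
        if (value != null && propertyInfo.CanWrite)
        {
            propertyInfo.SetValue(destination, value, null);
        }
    }
}

The usage stays CopyOnlyModifiedData<President>(data, ref rec); the caveat is that a non-nullable value-type property always has a value (its default), so it would always be copied, which is why the optional columns need to be nullable.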