Anybody know how to do an asynchronous creation of tables using Microsoft.SqlServer.Management.Smo.Table?

I currently have a problem with the DataReader when creating Microsoft.SqlServer.Management.Smo.Table objects asynchronously. Note: I derived my SmoTable class from TableView and IDisposable.
private async Task Generate()
{
    await Task.Run(() =>
    {
        MSSMSDatabase db = CreateDB(txtDBname.Text);
        List<string> tableNames = GetTableNameList();
        foreach (string tableName in tableNames)
        {
            using (SmoTable tbl = new SmoTable(db, tableName)) // <=== after a few loops, the error occurs in here.
            {
                foreach (var col in columnList)
                {
                    tbl.AddColumns(col);
                }
                tbl.Create();
            }
        }
    });
}
Microsoft.SqlServer.Management.Smo.FailedOperationException: InvalidOperationException: There is already an open DataReader associated with this Connection which must be closed first.
I tried implementing IDisposable on my SmoTable class (the one derived from TableView), but I still get the same error.
Thanks in advance.

Through trial and error I found that you need a new connection for each table creation, so that each one gets its own DataReader. If you move the instantiation of Server inside the foreach loop, each iteration gets a new connection and hence a new DataReader.
foreach (string tableName in tableNames)
{
    _server = GetSQLServer();              // <=== basically a method that does: Server server = new Server(); return server;
    db = _server.Databases[_databaseName]; // re-fetch the database from the new connection
    using (SmoTable tbl = new SmoTable(db, tableName))
    {
        foreach (var col in columnList)
        {
            tbl.AddColumns(col);
        }
        tbl.Create();
    }
}
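A variation on the same idea, as a rough sketch only (it reuses GetSQLServer(), _databaseName, columnList and the SmoTable helper from the question, and assumes GetSQLServer() returns a freshly connected Server), is to keep each per-table connection in a narrow scope and disconnect it explicitly once the table has been created, so connections don't accumulate over a long run:

// Sketch: one fresh connection per table, explicitly released afterwards.
// GetSQLServer(), _databaseName, columnList and SmoTable come from the question.
foreach (string tableName in tableNames)
{
    Server server = GetSQLServer();                // new connection, new DataReader
    Database db = server.Databases[_databaseName];
    try
    {
        using (SmoTable tbl = new SmoTable(db, tableName))
        {
            foreach (var col in columnList)
            {
                tbl.AddColumns(col);
            }
            tbl.Create();
        }
    }
    finally
    {
        server.ConnectionContext.Disconnect();     // release this table's connection
    }
}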

Related

Async method reading a MySQL table produces a list with duplicated elements

My async method returns a list with duplicated elements. It is an async method that uses the MySQL connector to connect to the database. It executes a query (SELECT *) and, using a MySqlDataReader, I read rows and add them to a list until the last ReadAsync() call.
Asynchronous programming is still black magic for me, so I would appreciate any feedback or pointers to illogical lines of code, with an explanation.
This method will be used in my Web API controller, and its purpose is to return all entries from the 'Posts' table. The code works fine when I 'reset' the temp object in each loop iteration with temp = new Post();, but I assume that is unacceptable? What if my database had not 15 but 15,000 entries?
public async Task<List<Post>> GetPostsAsync()
{
    List<Post> posts = new List<Post>();
    Post temp = new Post();
    try
    {
        await _context.conn.OpenAsync();
        MySqlCommand cmd = new MySqlCommand("USE idunnodb; SELECT * FROM Posts;", _context.conn);
        await using MySqlDataReader reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync())
        {
            temp.PostID = (int)reader[0];
            temp.UserID = (int)reader[1];
            temp.PostDate = reader[2].ToString();
            temp.PostTitle = reader[3].ToString();
            temp.PostDescription = reader[4].ToString();
            temp.ImagePath = reader[5].ToString();
            posts.Add(temp);
        }
        await _context.conn.CloseAsync();
    }
    catch (Exception ex)
    {
        return Enumerable.Empty<Post>().ToList();
    }
    return posts;
}
Looks like you can only read data by looping through the MySqlDataReader; according to your code logic, you read each row and add it to the list one by one for output.
Your code behaves this way because temp is declared and instantiated outside of your reader.ReadAsync() loop, so you are updating the same object reference on every iteration, which is why you see repeated objects in your list.
So you need to instantiate it inside the reader.ReadAsync() loop:
while (await reader.ReadAsync())
{
    Post temp = new Post();
    ...
}
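For completeness, a minimal sketch of the corrected loop (it reuses the column order and Post properties from the question's code):

while (await reader.ReadAsync())
{
    // A fresh Post per row, so every list entry is a distinct object.
    Post temp = new Post
    {
        PostID = (int)reader[0],
        UserID = (int)reader[1],
        PostDate = reader[2].ToString(),
        PostTitle = reader[3].ToString(),
        PostDescription = reader[4].ToString(),
        ImagePath = reader[5].ToString()
    };
    posts.Add(temp);
}

Allocating a new Post per row is not wasteful: the list has to hold one distinct object per row anyway, whether you have 15 or 15,000 entries, so this is exactly what the 'reset' you were worried about should look like.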

ServiceStack OrmLite - Elegant way to handle SQL Server Connection Drops

We are currently using ORMLite and it is working really well.
One of the places that we are using it is for running large batch processes.
These processes run a single large batch all within a single transaction, if there are any errors then it rolls back the transaction and then it needs to be run again.
Is there a way that something like a connection drop (which could be very brief) could be handled better, so that the connection is re-established and the work continues from where it left off?
The only thing that resembles what you're after is using a custom OrmLite Exec Filter, which you can use to inject your own custom execution strategy.
OrmLite's home page shows an example of using an Exec Filter to execute each query 3 times:
public class ReplayOrmLiteExecFilter : OrmLiteExecFilter
{
    public int ReplayTimes { get; set; }

    public override T Exec<T>(IDbConnection dbConn, Func<IDbCommand, T> filter)
    {
        var holdProvider = OrmLiteConfig.DialectProvider;
        var dbCmd = CreateCommand(dbConn);
        try
        {
            var ret = default(T);
            for (var i = 0; i < ReplayTimes; i++)
            {
                ret = filter(dbCmd);
            }
            return ret;
        }
        finally
        {
            DisposeCommand(dbCmd);
            OrmLiteConfig.DialectProvider = holdProvider;
        }
    }
}

OrmLiteConfig.ExecFilter = new ReplayOrmLiteExecFilter { ReplayTimes = 3 };

using (var db = OpenDbConnection())
{
    db.DropAndCreateTable<PocoTable>();
    db.Insert(new PocoTable { Name = "Multiplicity" });
    var rowsInserted = db.Count<PocoTable>(x => x.Name == "Multiplicity"); //3
}
But it uses the same IDbConnection, i.e. it doesn't create a new DB Connection.
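If you also want it to try to re-establish a dropped connection before retrying, a rough, untested sketch along the same lines (using only the OrmLiteExecFilter hooks shown above plus plain ADO.NET) could look like the following. Note that it can only retry an individual command; a transaction that was open on the dropped connection is still lost, so the batch as a whole would need to be re-run:

using System;
using System.Data;
using ServiceStack.OrmLite;

public class RetryOnDroppedConnectionExecFilter : OrmLiteExecFilter
{
    public int MaxRetries { get; set; } = 3;

    public override T Exec<T>(IDbConnection dbConn, Func<IDbCommand, T> filter)
    {
        for (var attempt = 0; ; attempt++)
        {
            var dbCmd = CreateCommand(dbConn);
            try
            {
                return filter(dbCmd);
            }
            catch (Exception) when (attempt < MaxRetries)
            {
                // In practice you would inspect the exception to confirm it's a
                // transient connectivity error. Re-open the connection (plain
                // ADO.NET, not OrmLite-specific) before the next attempt.
                if (dbConn.State != ConnectionState.Open)
                {
                    dbConn.Close(); // safe even if already closed/broken
                    dbConn.Open();
                }
            }
            finally
            {
                DisposeCommand(dbCmd);
            }
        }
    }
}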

TransactionScope doesn't rollback on exception CSLA 4.3

I have a BO (Country) with a child BO (State), which in turn has a child BO (City). When I update the parent BO (Country), add a child State, and save, and an exception is then thrown in the DAL (on purpose), the transaction is not rolled back. I am using SQL CE. I am attaching a sample stripped-down project that demonstrates the issue. What am I doing wrong?
Test code:
Country originalCountry = null;
try
{
    originalCountry = Country.GetCountry(1);

    var country = Country.GetCountry(1);
    country.CountryName = "My new name";
    var state = country.States.AddNew();
    state.StateName = "Dummy state";
    country.States.EndNew(country.States.IndexOf(state));
    country.Save();
}
catch (Exception exception)
{
    var country = Country.GetCountry(1);
    if (originalCountry.CountryName != country.CountryName)
    {
        System.Console.WriteLine("Values ARE NOT the same: " + originalCountry.CountryName + " vs. " + country.CountryName);
    }
    else
    {
        System.Console.WriteLine("Values are the same: " + originalCountry.CountryName + " vs. " + country.CountryName);
    }
}
Country.cs
[Transactional(TransactionalTypes.TransactionScope)]
protected override void DataPortal_Update()
{
    Update();
}

private void Update()
{
    using (var ctx = DalFactory.GetManager())
    {
        var dal = ctx.GetProvider<ICountryDal>();
        using (BypassPropertyChecks)
        {
            var dto = new CountryDto();
            TransferToDto(dto);
            dal.Update(dto);
        }
        FieldManager.UpdateChildren(this);
        throw new Exception("Rollback should occur.");
    }
}
Sample project
From my understanding of SQL CE and transactions, they only support a transaction on a single database connection when using TransactionScope.
It looks like your code is following the model put forward by some of the CSLA samples, but the actual opening/closing of the database connection is hidden in the GetManager or GetProvider abstraction, so there's no way to say for sure how that's handled.
It does seem that SQL CE has some limitations on transactions with TransactionScope though, so you should make sure you aren't violating one of their restrictions somehow.
The DalManager (and the ConnectionManager) relies on reference counting to determine when to close the actual connection.
The rules are not disposing of the DalManager, so the reference counting is off. This results in the update happening on a connection that was created and opened in one of the Fetch operations, and which is therefore not enlisted in the TransactionScope of the Update method.
See: http://msdn.microsoft.com/en-us/library/bb896149%28v=sql.100%29.aspx
All rules must be changed to dispose of the DalManager. Original rule:
protected override void Execute(RuleContext context)
{
    var name = (string)context.InputPropertyValues[_nameProperty];
    var id = (int)context.InputPropertyValues[_idProperty];

    var dal = DalFactory.GetManager();
    var countryDal = dal.GetProvider<ICountryDal>();
    var exists = countryDal.Exists(id, name);
    if (exists)
    {
        context.AddErrorResult("Country with the same name already exists in the database.");
    }
}
DalManager is IDisposable but is not explicitly disposed here, so the actual close depends on when the GC collects the object.
Should be:
protected override void Execute(RuleContext context)
{
    var name = (string)context.InputPropertyValues[_nameProperty];
    var id = (int)context.InputPropertyValues[_idProperty];

    using (var dal = DalFactory.GetManager())
    {
        var countryDal = dal.GetProvider<ICountryDal>();
        var exists = countryDal.Exists(id, name);
        if (exists)
        {
            context.AddErrorResult("Country with the same name already exists in the database.");
        }
    }
}

RavenDB DocumentStore throws an OutOfMemoryException

I have code like this:
public bool Set(IEnumerable<WhiteForest.Common.Entities.Projections.RequestProjection> requests)
{
    var documentSession = _documentStore.OpenSession();
    //{
    try
    {
        foreach (var request in requests)
        {
            documentSession.Store(request);
        }
        //requests.AsParallel().ForAll(x => documentSession.Store(x));
        documentSession.SaveChanges();
        documentSession.Dispose();
        return true;
    }
    catch (Exception e)
    {
        _log.LogDebug("Exception in RavenRequstRepository - Set. Exception is [{0}]", e.ToString());
        return false;
    }
    //}
}
This code gets called many times. After around 50,000 documents have passed through it, I get an OutOfMemoryException.
Any idea why? Perhaps after a while I need to declare a new DocumentStore?
Thank you.
UPDATE:
I ended up using the Batch/Patch API to perform the update I needed.
You can see the discussion here: https://groups.google.com/d/topic/ravendb/3wRT9c8Y-YE/discussion
Basically, since I only needed to update one property on my objects, and after considering Ayende's comments about re-serializing all the objects back to JSON, I did something like this:
internal void Patch()
{
    List<string> docIds = new List<string>() { "596548a7-61ef-4465-95bc-b651079f4888", "cbbca8d5-be45-4e0d-91cf-f4129e13e65e" };
    using (var session = _documentStore.OpenSession())
    {
        session.Advanced.DatabaseCommands.Batch(GenerateCommands(docIds));
    }
}

private List<ICommandData> GenerateCommands(List<string> docIds)
{
    List<ICommandData> retList = new List<ICommandData>();
    foreach (var item in docIds)
    {
        retList.Add(new PatchCommandData()
        {
            Key = item,
            Patches = new[]
            {
                new Raven.Abstractions.Data.PatchRequest()
                {
                    Name = "Processed",
                    Type = Raven.Abstractions.Data.PatchCommandType.Set,
                    Value = new RavenJValue(true)
                }
            }
        });
    }
    return retList;
}
Hope this helps. Thanks a lot.
I just did this for my current project. I chunked the data into pieces and saved each chunk in a new session. This may work for you, too.
Note: this example chunks 1024 documents at a time, but only bothers chunking once there are at least 2000 documents. So far, my inserts got the best performance with a chunk size of 4096; I think that's because my documents are relatively small.
internal static void WriteObjectList<T>(List<T> objectList)
{
    int numberOfObjectsThatWarrantChunking = 2000; // Don't bother chunking unless we have at least this many objects.

    if (objectList.Count < numberOfObjectsThatWarrantChunking)
    {
        // Just write them all at once.
        using (IDocumentSession ravenSession = GetRavenSession())
        {
            objectList.ForEach(x => ravenSession.Store(x));
            ravenSession.SaveChanges();
        }
        return;
    }

    int numberOfDocumentsPerSession = 1024; // Chunk size

    List<List<T>> objectListInChunks = new List<List<T>>();
    for (int i = 0; i < objectList.Count; i += numberOfDocumentsPerSession)
    {
        objectListInChunks.Add(objectList.Skip(i).Take(numberOfDocumentsPerSession).ToList());
    }

    Parallel.ForEach(objectListInChunks, listOfObjects =>
    {
        using (IDocumentSession ravenSession = GetRavenSession())
        {
            listOfObjects.ForEach(x => ravenSession.Store(x));
            ravenSession.SaveChanges();
        }
    });
}

private static IDocumentSession GetRavenSession()
{
    return _ravenDatabase.OpenSession();
}
Are you trying to save it all in one call?
The DocumentSession needs to turn all of the objects that you pass it into a single request to the server, which means it may allocate a lot of memory for that write.
Usually we recommend batches of about 1,024 items if you are doing bulk saves.
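Applied to the Set method from the question, a rough sketch of that advice (error handling omitted; SaveBatch is a hypothetical helper, and _documentStore is the same field as above) would be to open a fresh session per batch of roughly 1,024 documents:

public bool Set(IEnumerable<WhiteForest.Common.Entities.Projections.RequestProjection> requests)
{
    const int batchSize = 1024;
    var batch = new List<WhiteForest.Common.Entities.Projections.RequestProjection>(batchSize);

    foreach (var request in requests)
    {
        batch.Add(request);
        if (batch.Count == batchSize)
        {
            SaveBatch(batch);
            batch.Clear();
        }
    }
    if (batch.Count > 0)
        SaveBatch(batch);

    return true;
}

// Hypothetical helper: one short-lived session per batch keeps the
// per-session change tracking (and memory) bounded.
private void SaveBatch(List<WhiteForest.Common.Entities.Projections.RequestProjection> batch)
{
    using (var session = _documentStore.OpenSession())
    {
        foreach (var request in batch)
        {
            session.Store(request);
        }
        session.SaveChanges();
    }
}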
DocumentStore is a disposable class, so I worked around this problem by disposing the instance after each chunk. I highly doubt this is the most efficient way to run operations, but it will prevent significant memory overhead from happening.
I was running a sort of "delete all" operation like so. You can see the using blocks disposing both the DocumentStore and the IDocumentSession objects after each chunk.
static DocumentStore GetDataStore()
{
    DocumentStore ds = new DocumentStore
    {
        DefaultDatabase = "test",
        Url = "http://localhost:8080"
    };
    ds.Initialize();
    return ds;
}

static IDocumentSession GetDbInstance(DocumentStore ds)
{
    return ds.OpenSession();
}

static void Main(string[] args)
{
    int deleteCount = 0;
    int deleteSum = 0;
    do
    {
        using (var ds = GetDataStore())
        using (var db = GetDbInstance(ds))
        {
            // The `Take` operation will cap out at 1,024 by default, per Raven documentation
            var list = db.Query<MyClass>().Skip(deleteSum).Take(5000).ToList();
            deleteCount = list.Count;
            deleteSum += deleteCount;

            foreach (var item in list)
            {
                db.Delete(item);
            }
            db.SaveChanges();
            list.Clear();
        }
    } while (deleteCount > 0);
}

ASP.NET MySQL update multiple records

I have a web page that needs to update multiple records. The page gathers all the information and then begins a transaction, sending multiple UPDATE queries to the database.
foreach row
{
    // Prepare the query
    Hashtable Item = new Hashtable();
    Item.Add("Id", Id);
    Item.Add("Field1", Field1);
    Item.Add("Field2", Field2);
    Item.Add("Field3", Field3);
    ...
}
Then we launch the transaction:
DO CHANGES()
public void execute_NonQuery_procedure_transaction(string StoredProcedure, List<Hashtable> Params)
{
    using (MySqlConnection oConnection = new MySqlConnection(ConfigurationManager.AppSettings[DB]))
    {
        MySqlTransaction oTransaction;
        bool HasErrors = false;

        oConnection.Open();
        oTransaction = oConnection.BeginTransaction();
        try
        {
            MySqlCommand oCommand = new MySqlCommand(StoredProcedure, oConnection);
            oCommand.CommandType = CommandType.StoredProcedure;
            oCommand.Transaction = oTransaction;

            foreach (Hashtable hParams in Params)
            {
                oCommand.Parameters.Clear();

                IDictionaryEnumerator en = hParams.GetEnumerator();
                while (en.MoveNext())
                {
                    oCommand.Parameters.AddWithValue("_" + en.Key.ToString(), en.Value);
                    oCommand.Parameters["_" + en.Key.ToString()].Direction = ParameterDirection.Input;
                }

                oCommand.ExecuteNonQuery();
            }
        }
        catch (Exception e)
        {
            HasErrors = true;
            throw e;
        }
        finally
        {
            if (HasErrors)
                oTransaction.Rollback();
            else
                oTransaction.Commit();

            oConnection.Close();
        }
    }
}
Is there another way to do this, or is this the most efficient way?
It depends on the situation. If you have multiple row updates, new rows, deleted rows, or a combination of these modifying the table, then the efficient way to do this is a batch update.
Please go through this link: Batch Update
Hope this helps.
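As a rough illustration only of what a batched update can look like with Connector/NET (the table, column, and parameter names below are hypothetical, not taken from the question, and this assumes your MySqlDataAdapter version supports UpdateBatchSize):

using System.Data;
using MySql.Data.MySqlClient;

// Sketch: send many UPDATEs in fewer round trips via MySqlDataAdapter batching.
// Table/column names (my_table, field1, id) are illustrative only.
public static void BatchUpdate(string connectionString, DataTable changedRows)
{
    using (var connection = new MySqlConnection(connectionString))
    {
        var update = new MySqlCommand(
            "UPDATE my_table SET field1 = @field1 WHERE id = @id", connection);
        update.Parameters.Add("@field1", MySqlDbType.VarChar, 255, "field1");
        update.Parameters.Add("@id", MySqlDbType.Int32, 0, "id");
        // Required for batching: don't try to refresh each row after its statement runs.
        update.UpdatedRowSource = UpdateRowSource.None;

        var adapter = new MySqlDataAdapter
        {
            UpdateCommand = update,
            UpdateBatchSize = 100 // group up to 100 statements per round trip
        };

        connection.Open();
        adapter.Update(changedRows); // sends UPDATEs for rows whose RowState is Modified
    }
}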
It looks fine to me. You could avoid clearing the Command.Parameters list and just reassign the values on the following iterations, but that probably leads to no visible improvement.
Also, pay attention: your throw is wrong. In C#, don't use throw e;, just use throw; so the original stack trace is preserved.
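In other words, the catch block would become (sketch):

catch (Exception)
{
    HasErrors = true;
    throw; // rethrow, preserving the original stack trace
}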
