Any side effects of using the SqlParameterCollection.Clear method? - asp.net

I have a specific situation where I need to execute a stored procedure up to 3 times before I declare it failed. Why 3 times? Because it checks whether a job that was started earlier has finished. I am going to ask a separate question about whether there is a better approach, but for now here is what I am doing.
// mysqlparametersArray is built once and reused on each attempt
do {
    reader = MyStaticExecuteReader(query, mysqlparametersArray);
    Read();
    if (field(1) == true) {
        return field(2);
    }
    else {
        // wait 1 sec, then retry
    }
} while (field(1) == false);

MyStaticExecuteReader(query, mysqlparametersArray)
{
    // declare command
    // loop through mysqlparametersArray and add each parameter to the command
    // ExecuteReader
    return reader;
}
Now this occasionally gave me this error:
The SqlParameter is already contained by another
SqlParameterCollection.
After doing some searching I found a workaround: clear the parameters collection. So I did this:
MyStaticExecuteReader(query, mysqlparametersArray)
{
    // declare command
    // loop through mysqlparametersArray and add each parameter to the command's Parameters collection
    // ExecuteReader
    command.Parameters.Clear();
    return reader;
}
Now I am not getting that error.
Question: Are there any side effects of using the .Clear() method above?
Note: the above is sample pseudocode. I actually execute the reader and create the parameters collection in a separate method in a DAL class which is used by others too, so I am not sure whether checking if the parameters collection is empty before adding any parameters is a good way to go.

I have not run into any side effects when I have used this method.

Aside from overhead or possibly breaking other code that is shared, there is no issue with clearing parameters.
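For illustration, here is a minimal sketch of what such a shared DAL helper might look like with the Clear() call in place. This is an assumption-laden sketch, not the asker's actual code: it assumes SqlClient, a stored procedure, and a placeholder GetConnectionString() helper.

// Sketch only (requires System.Data and System.Data.SqlClient).
// Clearing the collection detaches the SqlParameter instances from this command,
// so the same array can be reused on the next retry without the
// "already contained by another SqlParameterCollection" error.
// GetConnectionString() is a placeholder for however the DAL obtains its connection string.
private static SqlDataReader MyStaticExecuteReader(string query, SqlParameter[] parameters)
{
    var connection = new SqlConnection(GetConnectionString());
    var command = new SqlCommand(query, connection);
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.AddRange(parameters);

    connection.Open();
    // CloseConnection ties the connection's lifetime to the reader the caller receives.
    SqlDataReader reader = command.ExecuteReader(CommandBehavior.CloseConnection);

    // Detach the SqlParameter instances so the caller can pass the same array to the next call.
    command.Parameters.Clear();
    return reader;
}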

Related

Is there a tangible benefit to using wrapper requests over plain messages in gRPC service calls?

Let's say we have a message containing the ID of some record in the database:
message Record {
  uint64 id = 1;
}
We also have an rpc call that returns all of the rows from table DATA that said record is mentioned in.
rpc GetDataForRecord(Record) returns (Data) {}
If we, for example, wrap Record in
message RqData {
  Record id = 1;
}
then, once we need to return only "active" data, for example, we won't need to add a separate
GetActiveDataForRecord
method; instead we could extend RqData as:
message RqData {
  Record id = 1;
  bool use_active = 2;
}
and use
rpc GetDataForRecord(RqData) returns (Data) {}
and clients that know of this new functionality will be able to use it, while older clients will keep calling it as before, passing only the Record part within the RqData wrapper, without specifying whether to use active data or not.
Here's the question: is there really a reason to use this kind of wrapping of everything into a separate request, or am I overthinking things and just passing plain structures will do?
I am trying to think about the future, but I'm not sure whether I'm overcomplicating things.
In general, making a method-specific request and response is a Good Thing™ and is encouraged. For a Foo method you'd have FooRequest and FooResponse. Having specialized messages for the method allows you to add new "arguments," as you mentioned.
But for some cases it turns out fine to break the pattern and avoid the wrapping; it's a judgement call. Although you're asking from a different perspective, you may be interested in this answer about related methods.
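As a rough sketch of that per-method convention, assuming the Record and Data messages from the question are defined elsewhere (the service and message names here are illustrative, not from the question):

syntax = "proto3";

// One request/response pair per method. New optional fields can be added to the
// request later (like use_active) without breaking older clients.
message GetDataForRecordRequest {
  Record record = 1;
  bool use_active = 2;
}

message GetDataForRecordResponse {
  repeated Data rows = 1;
}

service DataService {
  rpc GetDataForRecord(GetDataForRecordRequest) returns (GetDataForRecordResponse) {}
}

The trade-off is a little more boilerplate per method in exchange for being able to evolve each method's arguments and results independently.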

How to change the dart-sqlite code from synchronous style to asynchronous?

I'm trying to use Dart with sqlite, with this project dart-sqlite.
But I found a problem: the API it provides is synchronous in style. The code looks like this:
// Iterating over a result set
var count = c.execute("SELECT * FROM posts LIMIT 10", callback: (row) {
print("${row.title}: ${row.body}");
});
print("Showing ${count} posts.");
With such code, I can't use Dart's Future support, and the code will block on SQL operations.
I wonder how to change the code to an asynchronous style. You can see it defines some native functions here: https://github.com/sam-mccall/dart-sqlite/blob/master/lib/sqlite.dart#L238
_prepare(db, query, statementObject) native 'PrepareStatement';
_reset(statement) native 'Reset';
_bind(statement, params) native 'Bind';
_column_info(statement) native 'ColumnInfo';
_step(statement) native 'Step';
_closeStatement(statement) native 'CloseStatement';
_new(path) native 'New';
_close(handle) native 'Close';
_version() native 'Version';
The native functions are mapped to some c++ functions here: https://github.com/sam-mccall/dart-sqlite/blob/master/src/dart_sqlite.cc
Is it possible to change to asynchronous? If possible, what shall I do?
If it's not possible, and I have to rewrite it, do I have to rewrite all of:
The dart file
The c++ wrapper file
The actual sqlite driver
UPDATE:
Thanks to @GregLowe's comment: Dart's Completer can convert callback style to Future style, which lets me use Dart's doSomething().then(...) instead of passing a callback function.
But after reading the source of dart-sqlite, I realized that, in the implementation of dart-sqlite, the callback is not event-based:
int execute([params = const [], bool callback(Row)]) {
  _checkOpen();
  _reset(_statement);
  if (params.length > 0) _bind(_statement, params);
  var result;
  int count = 0;
  var info = null;
  while ((result = _step(_statement)) is! int) {
    count++;
    if (info == null) info = new _ResultInfo(_column_info(_statement));
    if (callback != null && callback(new Row._internal(count - 1, info, result)) == true) {
      result = count;
      break;
    }
  }
  // If update affected no rows, count == result == 0
  return (count == 0) ? result : count;
}
Even if I use Completer, it won't improve performance. I think I may have to rewrite the C++ code to make it event-based first.
You should be able to write a wrapper without touching the C++. Have a look at how to use the Completer class in dart:async. Basically you need to create a Completer, return Completer.future immediately, and then call Completer.complete(row) from the existing callback.
Re: update. Have you seen this article, specifically the bit about asynchronous extensions? i.e. If the C++ API is synchronous you can run it in a separate thread, and use messaging to communicate with it. This could be a way to do it.
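To make the first suggestion concrete, here is a rough sketch of the Completer approach. The names follow the snippets above but are otherwise assumptions about the dart-sqlite API; note that a Completer completes only once, so the rows are collected and delivered together, and this changes only the calling style, not the blocking behaviour described in the update.

import 'dart:async';

// Sketch only: wraps the synchronous callback API in a Future.
// `db` stands for an open dart-sqlite connection as in the example above.
Future<List<Row>> queryAsync(db, String sql) {
  var completer = new Completer<List<Row>>();
  var rows = <Row>[];
  try {
    db.execute(sql, callback: (row) {
      rows.add(row);
      return false; // keep iterating; returning true would stop early
    });
    completer.complete(rows);
  } catch (e) {
    completer.completeError(e);
  }
  return completer.future;
}

// Usage: queryAsync(db, "SELECT * FROM posts LIMIT 10").then((rows) => print(rows.length));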
The big problem you've got is that SQLite is an embedded database; in order to process your query and provide your results, it must do computation (and I/O) in your process. What's more, in order for its transaction handling system to work, it either needs its connection to be in the thread that created it, or for you to run in serialized mode (with a performance hit).
Because these are fairly hard constraints, your plan of switching things to an asynchronous operation mode is unlikely to go well except by using multiple threads. Since using multiple connections complicates things a lot (as you can't share some things between them, such as TEMP TABLEs) let's consider going for a single serialized connection; all activity will be serialized at the DB level, but for an application that doesn't use the DB a lot it will be OK. At the C++ level, you'd be talking about calling that execute from another thread and then sending messages back to the caller thread to indicate each row and the completion.
But you'll take a real hit when you do this; in particular, you're committing to only doing one query at a time, as the technique runs into significant problems with semantic effects when you start using two connections at once and the DB forces serialization on you with one connection.
It might be simpler to do the above by putting the synchronous-asynchronous coupling at the Dart level by managing the worker thread and inter-thread communication there. That would let you avoid having to change the C++ code significantly. I don't know Dart well enough to be able to give much advice there.
Myself, I'd just stick with synchronous connection processing so that I can make my application use multi-threaded mode more usefully. I'd be taking the hit with the semantics and giving each thread its own connection (possibly allocated lazily) so that overall speed was better, but I do come from a programming community that regards threads as relatively heavyweight resources, so make of that what you will. (Heavy threads can do things that reduce the number of locks they need that it makes no sense to try to do with light threads; it's about overhead management.)

suppress events for Flex objects

[Edit]
The main question here loosely translates as 'is Flex multi-threaded'? I have since found out that it is not, so I won't have data mysteriously changing half way through an operation. The code below worked, but made things awkward and confusing. I eventually fixed the problem with an architecture change, eliminating the need to suppress events. As the first commenter suggested.
Infinite loops were eliminated by changing the way events were listened to and performing certain actions explicitly rather than via events.
Collating events was made easier using a command pattern.
Basically, do not use the code below if you come across this page!
[/Edit]
I'm building some Flex applications using a simple, lightweight MVC pattern. Models extend or encapsulate a dispatcher and fire events when updated. I'm stuck with Flex 3.5.
In some situations, I'll want to suppress these events to avoid infinite loops or help collate multiple actions into a single event.
My first stab at a solution that doesn't litter the models with unnecessary and confusing code is this:
private var _suppressEvents:Boolean = false;

public function suppressEvents(callback:Function):void
{
    // In case of error, ensure the suppression is turned off, then re-throw
    var err:Error = null;
    _suppressEvents = true;
    try
    {
        callback();
    }
    catch(e:Error)
    {
        err = e;
    }
    _suppressEvents = false;
    if (err)
    {
        throw (err);
    }
}

public function dispatch(type:String, data:*):void
{
    // Suppress if called from a suppress callback.
    if (!_suppressEvents)
    {
        _dispatcher.dispatchEvent(new DataEvent(type, data));
    }
}
Obviously I call 'suppressEvents' with a function containing the model code I wish to run.
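For example, a call site might look something like this (model, setValueA, setValueB and the event type are hypothetical names, shown only to illustrate the calling pattern):

model.suppressEvents(function():void
{
    // neither call dispatches while suppression is on
    model.setValueA(1);
    model.setValueB(2);
});
// then dispatch one collated event explicitly afterwards if needed
model.dispatch("modelChanged", null);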
My questions:
1: Is there a chance I could accidentally lose events using this technique?
2: Do I need to think about any other error edge cases when it comes to ensuring I don't accidentally end up in a suppressed state after a call?
3: Is there a cleaner way I'm missing?
Thanks!

"Store update, insert, or delete statement affected an unexpected number of rows (0)" error in delete function

Each task has a reference to the goal it is assigned to. When I try to delete the tasks and then the goal, I get the error
"Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Refresh ObjectStateManager entries." on the line _goalRepository.Delete(goalId);
What am I doing wrong?
[HttpPost]
public void DeleteGoal(int goalId, bool deleteTasks)
{
    try
    {
        if (deleteTasks)
        {
            Goal goalWithTasks = _goalRepository.GetWithTasks(goalId);
            foreach (var task in goalWithTasks.Tasks)
            {
                _taskRepository.Delete(task.Id);
            }
            goalWithTasks.Tasks = null;
            _goalRepository.Update(goalWithTasks);
        }
        _goalRepository.Delete(goalId);
    }
    catch (Exception ex)
    {
        Exception deleteException = ex;
    }
}
Most likely the problem is because you're attempting to hold onto and reuse a context across page views. You should create a new context, do your work, and dispose of the context atomically. It's called the Unit Of Work pattern.
The main reason for this is that the context maintains some state information about the database rows it has seen, if that state information becomes stale or out of date then you get exceptions like this.
There are a lot of other reasons to use the Unit of Work pattern, I would suggest you do a web search and do a little reading as an educational exercise.
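As a hedged sketch of what that looks like in practice, assuming an EF DbContext (GoalsContext, Goals and Tasks are placeholders standing in for your actual context and repositories):

// Sketch only: one context per operation (unit of work); requires System.Linq.
[HttpPost]
public void DeleteGoal(int goalId, bool deleteTasks)
{
    using (var db = new GoalsContext())
    {
        var goal = db.Goals.Include("Tasks").Single(g => g.Id == goalId);

        if (deleteTasks)
        {
            // ToList() snapshots the collection so removing entities
            // does not modify the collection being iterated.
            foreach (var task in goal.Tasks.ToList())
            {
                db.Tasks.Remove(task);
            }
        }

        db.Goals.Remove(goal);
        db.SaveChanges(); // all changes are tracked and saved by the same context
    }
}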
This may have nothing to do with data access though. You are removing items from a list as you are iterating it, which would cause problems if you were using a normal List. Without knowing much about the internals of EF, my guess is that your delete calls to the repository are changing the same list that you are iterating.
Try iterating the list in one pass and recording the Task IDs you want to delete in a separate list. Then, when you have finished iterating, call delete for each ID in that list. For example:
var tasksToDelete = new List<int>();
foreach (var task in goalWithTasks.Tasks)
{
    tasksToDelete.Add(task.Id);
}
foreach (var id in tasksToDelete)
{
    _taskRepository.Delete(id);
}
This may not be the cause of your problem but it is good practice to never change the collection you are iterating.
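A more compact variant of the same idea (assuming LINQ is available) is to snapshot the collection before iterating:

// ToList() copies the collection, so the foreach iterates the copy
// while the repository modifies the original.
foreach (var task in goalWithTasks.Tasks.ToList())
{
    _taskRepository.Delete(task.Id);
}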
I ran across this issue at work (I am an intern). I was getting this error when trying to delete a piece of Equipment that was referenced in other data tables.
I was deleting all references before attempting to delete the Equipment, BUT the reference deletion was happening in another Model which had its own database context, and the reference deletion would be saved within that Model's context.
The Equipment Model's context would not know about the changes that had just happened in the other Model's context, which is why, when I tried to delete the Equipment and then save the changes (e.g. db.SaveChanges()), the error occurred: the Equipment context still thought there were references to that Equipment in other tables.
My solution for this was to re-allocate the context before attempting to delete the Equipment:
db = new DatabaseContext();
Now the newly allocated context has the latest snapshot of the database and is aware of all changes made. Deletion happens without issues.
Hope my experience helps.

Does the using statement keep me from closing or destroying objects?

If I use something like:
using (OdbcConnection conn = new OdbcConnection(....))
{
    conn.Open();
    // my commands and SQL, etc.
}
Do I have to call conn.Close(), or does the using statement save me from having to make that last call? Does it dispose of everything in the using block? For example, if I created other, unrelated objects inside it, would it dispose of those automatically as well?
Thank you. I was unclear after reading about using on Microsoft's site. I want to make sure I don't have any memory leaks.
The using block will dispose of the OdbcConnection.
Normal scope rules work for anything declared inside the using block.
The using block will not clean up any other IDisposable objects. It only cleans up the declared item.
Note that you can nest using blocks, or, if the items are of the same type, multiple items can be initialized in the same statement.
See the top bit of my other answer for How do I use the using keyword in C# for a little more information.
I should also mention that you can close (dispose of) the connection as soon as you are done with it to release the resource. The guidelines say that the caller should be able to call the Dispose method repeatedly. The using block is essentially just a safety net and allows writing clearer code in most circumstances.
[Edit]
For example, multiple initialization in a using statement: you can initialize more than one object in the same using, without nesting using blocks, if the objects are of the same type:
using (Bitmap b1 = new Bitmap("file1"), b2 = new Bitmap("file2"))
{ ... }
Joel Coehoorn mentioned stacking, which is nesting but omitting the braces, much as you can omit the braces in a for or if statement. The IDE doesn't reformat it with an extra indent. I'd be curious what the IL looks like.
using(Bitmap b = new Bitmap("filex"))
using(Graphics g = Graphics.FromImage(b))
{
}
It is an error to put objects of different types in the same using statement: error CS1044: Cannot use more than one type in a for, using, fixed, or declaration statement.
// error CS1044
using(Bitmap b = new Bitmap("filex"), Graphics g = Graphics.FromImage(b))
The using statement will handle calling the Close and Dispose methods for you.
Scott Hanselman has a pretty good explanation of the using statement.
The using statement ensures that an object which implements IDisposable gets disposed. It will only dispose the object that is referenced in the using statement, so your code is basically equivalent to:
OdbcConnection conn = new ....;
try
{
    conn.Open();
    conn.....
}
finally
{
    conn.Dispose();
}
