I am creating a program in which a user can search for and add their desired order. The problem I'm facing is that when I throw the exception, the program never executes it, so the user cannot tell whether the entered ID exists in the database or not. I will provide the code snippet of the program that I'm working on.
Problems
Your code will not throw an error if the item_code does not exist in your database. It will simply not enter the while loop.
This is not the proper use of an exception. It is not an error if the record is not found. The proper way of checking whether the item_code exists is to check if the data reader has results.
You must properly defend yourself against SQL injection. By concatenating the SQL query you are opening yourself up to a whole host of problems. For example, if a user maliciously enters the following text, it will delete the entire Products table: ';DROP TABLE Products;--
You are not disposing of the OleDbConnection or the OleDbCommand objects correctly. If an exception occurs, your code will not run the Dispose() method. This can cause you to quickly run out of resources.
Solutions
You should check if the dataRead has any rows. If it does not, then you can alert the user via JavaScript, like so:
If dataRead.HasRows Then
    ' Read data
Else
    ' Alert user
End If
Solution #1 addresses Problem #2 as well.
Use a parameterized query. The .NET framework will prevent these kinds of attacks (SQL Injection).
selectProductQuery = "SELECT * FROM Products WHERE item_code = ?"
...
newCmd.Parameters.AddWithValue("item_code", txtItemCode.Text)
Note that OleDb matches parameters by position rather than by name, so the placeholder is ? and the name passed to AddWithValue is ignored; only the order in which parameters are added matters.
Wrap all objects that implement Dispose() in a using block. This will guarantee everything is properly disposed of, whether an error is thrown or not.
Using newCon As New OleDbConnection(....)
    Using newCmd As New OleDb.OleDbCommand(...)
        ...
    End Using
End Using
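Putting the three fixes together, a rough sketch (shown in C# here; the same shape works in VB.NET) might look like the following. It assumes txtItemCode is the TextBox from your page and connectionString holds your existing OleDb connection string:

using System.Data.OleDb;

// Sketch only: combines the HasRows check, a positional OleDb
// parameter, and using blocks for guaranteed disposal.
private void LookUpProduct(string connectionString)
{
    // OleDb matches parameters by position (?), not by name.
    const string selectProductQuery = "SELECT * FROM Products WHERE item_code = ?";

    using (var connection = new OleDbConnection(connectionString))
    using (var command = new OleDbCommand(selectProductQuery, connection))
    {
        // txtItemCode is the page's TextBox (from the question).
        command.Parameters.AddWithValue("item_code", txtItemCode.Text);
        connection.Open();

        using (var reader = command.ExecuteReader())
        {
            if (reader.HasRows)
            {
                while (reader.Read())
                {
                    // read data
                }
            }
            else
            {
                // alert the user that the item code was not found
            }
        }
    }
}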
To be perfectly honest, there is quite a bit "wrong" with your code, but this should get you headed in the right direction.
The line:
Response.Write(<script>alert('The ...')</script>)
Needs to be (note the quotes):
Response.Write("<script type='text/javascript'>alert('The ...')</script>")
Same for the other one at the top, but I don't think that will fix your overall problem.
Instead, use JavaScript like this:
if(!alert('Whoops!')){window.location.reload();}
to pop up an alert box and then reload the page after they click on the button.
Related
I have a SysOperation Framework process that creates a ReliableAsynchronous batch to post packing slips and several get created at a time.
Depending on how quickly I click to create them, I get:
Cannot edit a record in LastValue (SysLastValue).
An update conflict occurred due to another user process deleting the record or changing one or more fields in the record.
And
Cannot create a record in LastValue (SysLastValue). User ID: t edit a, Class.
The record already exists.
On a couple of them in the BatchHistory. I have this.parmLoadFromSysLastValue(false); set, and I'm not sure how to prevent writing to the SysLastValue table.
Any idea what could be going on?
I get this exception a lot too, so I've made a habit of catching DuplicateKeyException in my service operation. When it is thrown, I catch it and retry (up to 5 times by default).
The error occurs when a lot of processes run simultaneously, like you are doing now.
DuplicateKeyException can be caught inside a transaction, so you could improve on this by putting a try/catch around the code that does the insert into the SysLastValue table, if you can find that code.
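As a sketch of that catch-and-retry habit (written in C# for illustration; in X++ you would combine try/catch with the kernel's retry statement, and the DuplicateKeyException class below is just a stand-in for Exception::DuplicateKeyException):

using System;
using System.Threading;

// Stand-in for the kernel's duplicate-key error, for illustration only.
public class DuplicateKeyException : Exception { }

public static class RetryExample
{
    // Run the service operation; on a duplicate-key conflict, back off
    // briefly and retry, up to five attempts by default.
    public static void RunWithRetry(Action serviceOperation, int maxRetries = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                serviceOperation();
                return; // success
            }
            catch (DuplicateKeyException) when (attempt < maxRetries)
            {
                // Another batch task won the race to insert the same
                // SysLastValue record; wait a moment and try again.
                Thread.Sleep(100 * attempt);
            }
        }
    }
}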
As far as I can see, these are the only two occurrences where a record is inserted into this table (except maybe in the kernel):
InventUnusedDimCleanUp.serialize()
SysAutoSemaphore.autoSemaphore()
Put a breakpoint there and see if that code is executed. If so, you can add a try/catch with retry and see if that "fixes" it.
You could also use the tracing cockpit and the trace parser to figure out where that record is inserted if it's not one of those two.
My theory about LoadFromSysLastValue: I believe setting this.parmLoadFromSysLastValue(false) does not work, since it is only taken into account when the dialog is started, not when your operation is executed. When in batch, no SysLastValue should be used to initialize your data contract, as you want it to use the exact parameters you have supplied in your data contract.
It's caused by the code calling SysOperationController.saveLast() while in batch. My solution is to set loadFromSysLastValue to false in SysOperationController.loadFromSysLastValue() as part of the in-batch check:
if (!this.isInBatch())
{
    .....
}
//Begin
else
{
    loadFromSysLastValue = false;
}
//End
ASP.NET WEB PROGRAMMING
How to optimally handle
Open
Close
Dispose
Exception
when retrieving data from a database.
LOOKING TO OPTIMISE
I have always used the following to make a connection, catch any errors, and then correctly dispose of the connection in either event.
VB.NET
Try
    con.Open()
    ' ...
    con.Close()
Catch ex As Exception
    lbl.Text = ex.Message
Finally
    If con IsNot Nothing Then
        con.Dispose()
    End If
End Try
After reading many articles I find people practically throwing up at this code; however, I do not see any other way to accommodate the four steps required efficiently.
The alternative, and I believe more CPU-friendly, Using statement seems to be the tool of choice for the correct disposal of a SQL connection. But what happens when bad data is retrieved from the database and there is nothing in place to indicate to an end user what went wrong?
QUESTION
Between Using, Try/Catch, and other approaches: which is the fastest, cleanest, and/or most efficient way to handle a data retrieval statement?
I am happy to hear people's opinions but I am looking for facts.
You can also use the following block of code as a template. It combines the Using...End Using and Try...Catch blocks.
Using conn As New SqlConnection(My.Settings.SQLConn)
    Try
        conn.Open()
        ' ... run your commands here ...
    Catch ex As SqlException
        ' Handle or surface the error to the user here
    End Try
End Using
There is no need to call conn.Dispose() because the Using block does that automatically.
Use Entity Framework: it implements the Unit of Work pattern for you efficiently, and you can perform your operations within a transaction scope.
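As a minimal sketch of what that looks like (the Product entity and ShopContext names are hypothetical; EF6-style API): the DbContext tracks changes as a unit of work, and SaveChanges flushes them, optionally inside an explicit transaction.

using System.Data.Entity;

// Hypothetical entity and context, for illustration only.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

public static class OrderExample
{
    public static void AddProduct(string name)
    {
        using (var context = new ShopContext())
        using (var transaction = context.Database.BeginTransaction())
        {
            try
            {
                context.Products.Add(new Product { Name = name });
                context.SaveChanges();   // flush the unit of work
                transaction.Commit();
            }
            catch
            {
                transaction.Rollback();  // nothing persists on failure
                throw;
            }
        }
    }
}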
Always use "Using", as it will automatically acquire and then free up system resources. You are still able to perform a Try/Catch within it to present errors to the user.
In the past I have avoided ORMs and always handcrafted parameterised queries etc. This is very time consuming and a real pain when first developing an application. Recently I decided to have another look at ORMs, specifically the Sqlite.NET ORM.
I would like to use SQLite ORM features but also be able to run a batch of native SQL commands to prepopulate a database.
We are using the SqliteNetExtensions-MvvmCross dll to enable one-to-many relationships etc., and this all looks fine. My issue comes when I want to seed the database with configuration data. I was hoping to simply provide a SQL file containing a series of SQL statements that would run one after another.
I have grabbed the SQLite.NET code from GitHub and run the tests. I have then extended the StringQueryTests class, which has a simple [Product] table, to do the following:
[Test]
public void AlanTest()
{
    StringBuilder sb = new StringBuilder(200);
    sb.Append(" DELETE FROM Product;");
    sb.Append(" INSERT INTO Product VALUES (1,\"Name1\",1,1);");
    sb.Append(" INSERT INTO Product VALUES (2,\"Name2\",2,3);");
    db.Execute(sb.ToString());
}
When I run this it does not throw an error, and in fact the behaviour seems to be that it only runs the first command. If I paste the contents of sb.ToString() into a sqlite database query window, it works just fine.
Is this the expected behaviour? If so, how do I go about overcoming this so that I can use an approach like the one above? I don't really want to have to create objects to manage all the SQL statements if possible.
I can see that there are a number of approaches that could be adopted to overcome this issue - has anyone got a workaround or suggestions that they think can solve it?
Kind regards
Alan.
I just ran into this issue too. I found a blog post that explains why.
Here is what the post says in case it goes missing.
All of the code [in sqlite-net] correctly checks the result codes and throws exceptions accordingly.
Although I haven't posted all relevant code here, I did review it, and the real origin of this behavior is elsewhere - in the native sqlite3.dll sqlite3_prepare_v2 method. Here's the relevant part of the documentation:
These routines only compile the first statement in zSql, so *pzTail is left pointing to what remains uncompiled.
Since sqlite-net doesn't do anything with the uncompiled tail, only the first statement in the command is actually executed. The remainder is silently ignored. In most cases you won't notice that when using sqlite-net. You will either use its micro ORM layer or execute individual statements. The only common exception that comes to mind, is trying to execute DDL or migration scripts which are typically multi statement batches.
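One way around this (a sketch, not part of sqlite-net itself) is to split the script and feed sqlite-net one statement at a time. Naive splitting on ';' works for simple seed scripts that contain no triggers or semicolons inside string literals:

// Hypothetical helper; db is a sqlite-net SQLiteConnection.
public static void ExecuteScript(SQLiteConnection db, string script)
{
    foreach (var statement in script.Split(';'))
    {
        if (!string.IsNullOrWhiteSpace(statement))
            db.Execute(statement); // one prepared statement per call
    }
}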
Can't you do:
[Test]
public void AlanTest()
{
    var queries = new List<string>()
    {
        " DELETE FROM Product",
        " INSERT INTO Product VALUES (1,\"Name1\",1,1)",
        " INSERT INTO Product VALUES (2,\"Name2\",2,3)"
    };

    db.BeginTransaction();
    queries.ForEach(query => db.Execute(query));
    db.Commit();
}
You don't really need the transaction; it just gives you faster execution and a rollback checkpoint...
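If your sqlite-net version exposes RunInTransaction, the same batch can also be written so a failure rolls everything back automatically (a sketch, reusing db from the test above):

[Test]
public void AlanTestTransactional()
{
    var queries = new List<string>
    {
        "DELETE FROM Product",
        "INSERT INTO Product VALUES (1, \"Name1\", 1, 1)",
        "INSERT INTO Product VALUES (2, \"Name2\", 2, 3)"
    };

    // RunInTransaction begins a transaction, runs the action, commits,
    // and rolls back if the action throws.
    db.RunInTransaction(() =>
    {
        queries.ForEach(query => db.Execute(query));
    });
}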
I am running into a strange problem I don't fully understand. The main symptom is that when I double click a link (that points to a controller action) in my MVC application, my database server connection gets blown, and I get the error :
Execution of the command requires an open and available connection. The connection's current state is broken.
If I step through starting at a breakpoint at the top of the controller action, it will step down a couple lines and then jump back up to the breakpoint. Somehow the first request isn't executing fully before the second one gets there, and somehow my database connection breaks when it gets to any query. Every time this happens, I have to restart the application server.
It was happening intermittently at first, but double-clicking links seems to reproduce it every time. Does this happen to anyone else? What am I missing here?
Thanks,
rusty
Update:
A.) I incorrectly tagged this as Linq-to-sql when we are actually using Linq-to-entities.
B.) The connection object is defined as a member variable of the controller :
namespace C2S.Controllers
{
    public class ArtifactController : Controller
    {
        private c2sEntities _entities = new c2sEntities();
        ...
I noticed in some of the asp.net tutorials they declare the variable in the same spot but have a separate constructor for the controller where the db object is initialized. Does this make any difference?
C.) The problem is not only with the double-clicking as described above. The connection breaks at other seemingly random times; I cannot seem to reproduce the error consistently (even double-clicking does not always break it). Restarting the web site usually fixes it, although sometimes I have to restart the host machine. After it's back up, repeating the same sequence of actions usually does not reproduce the same error!
Maybe there's something I don't understand about setting up my linq-to-entities classes or the nature of the database connection. Does anyone have any thoughts? I really don't even know how to investigate this one!
Thanks again
Rusty
It's a bit difficult to say from your description of the problem, but a first guess would be:
Is your connection object static (i.e. controller or application level) or defined locally within the action? Double-clicking a link would fire the event twice, and that sounds like what you are describing here. So the first call creates the connection, then the 2nd call comes in and tramples all over the 1st call to the method, breaking the connection it thinks it has.
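If the context does turn out to be shared, or you want to rule that out, a conventional fix (a sketch, using the names from your update; MVC creates a new controller instance per request) is to dispose the context together with the controller so no two requests ever share a live connection:

namespace C2S.Controllers
{
    public class ArtifactController : Controller
    {
        // One context per controller instance, i.e. per request.
        private readonly c2sEntities _entities = new c2sEntities();

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                _entities.Dispose(); // release the underlying connection
            }
            base.Dispose(disposing);
        }
    }
}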
Edit: Does the problem only occur on double clicks? Does it work as expected if you only single-click the link? An example of the code in question would help.
I am guessing that you're not properly closing your database connection. You should consider making use of the using statement.
using(SqlConnection conn = new SqlConnection("connstring")) {
using (SqlCommand cmd = new SqlCommand("SQLSTATEMENT", conn)) {
// more code here......
}
}
This will ensure that your connection is closed even if there's an error in your code somewhere.
Read all about it here: http://davidhayden.com/blog/dave/archive/2005/01/13/773.aspx
I have a requirement to create an HttpHandler that will serve an image file (a simple static file) and also insert a record into a SQL Server table (e.g. http://site/some.img, where some.img is an HttpHandler). I need an in-memory object (like a generic List) that I can add items to on each request (I also have to consider a few hundred or thousand requests per second), and I should be able to unload this in-memory object to a SQL table using SqlBulkCopy.
List --> DataTable --> SqlBulkCopy
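For concreteness, a sketch of that pipeline might look like this (the RequestLog type and destination table name are hypothetical):

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Hypothetical per-request record.
public class RequestLog
{
    public string ImageName { get; set; }
    public DateTime RequestedAt { get; set; }
}

public static class BulkFlush
{
    // Convert the buffered list to a DataTable and bulk-insert it.
    public static void Flush(List<RequestLog> buffer, string connectionString)
    {
        var table = new DataTable();
        table.Columns.Add("ImageName", typeof(string));
        table.Columns.Add("RequestedAt", typeof(DateTime));

        foreach (var log in buffer)
            table.Rows.Add(log.ImageName, log.RequestedAt);

        using (var bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.DestinationTableName = "dbo.ImageRequests"; // hypothetical table
            bulkCopy.WriteToServer(table);
        }
    }
}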
I thought of using the Cache object: create a generic List, save it in HttpContext.Cache, and insert a new item into it on every request. This will NOT work, as the CacheItemRemovedCallback fires right away when the HttpHandler tries to add a new item. I can't use the Cache object as an in-memory queue.
Can anybody suggest anything? Would I be able to scale in the future if the load increases?
Why would CacheItemRemovedCallback fire when you ADD something to the queue? That doesn't make sense to me... Even if it does fire, there's no requirement to do anything there. Perhaps I am misunderstanding your requirements?
I have quite successfully used the Cache object in precisely this manner. That is what it's designed for and it scales pretty well. I stored a Hashtable which was accessed on every app page request and updated/cleared as needed.
Option two... do you really need the queue? SQL Server will scale pretty well also if you just want to write directly into the DB. Use a shared connection object and/or connection pooling.
How about just using the generic List to store requests and using a different thread to do the SqlBulkCopy?
This way storing requests in the list won't block the response for too long, and the background thread will be able to update SQL Server on its own schedule, say every 5 minutes or so.
You can even base the background thread on the Cache mechanism by performing the work in a CacheItemRemovedCallback.
Just insert some object with a removal time of 5 minutes and re-insert it at the end of the processing work.
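A sketch of that cache-as-timer trick (the key name is hypothetical; the expired item's removal callback becomes the "tick"):

using System;
using System.Web;
using System.Web.Caching;

public static class FlushTimer
{
    private const string Key = "BulkCopyFlushSentinel"; // hypothetical key

    // Insert a throwaway item that expires in 5 minutes; its removal
    // callback acts as the timer tick.
    public static void Schedule()
    {
        HttpRuntime.Cache.Insert(
            Key,
            new object(),
            null,
            DateTime.UtcNow.AddMinutes(5),
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnRemoved);
    }

    private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        if (reason != CacheItemRemovedReason.Expired)
            return; // ignore explicit removes/replacements

        // Do the flush here (e.g., run the SqlBulkCopy), then re-arm.
        Schedule();
    }
}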
Thanks Alex & Bryan for your suggestions.
Bryan: When I try to replace the List object in the Cache for the second request (now the count should be 2), the CacheItemRemovedCallback gets fired because I'm replacing the current Cache object with a new one. Initially I also thought this was weird behavior, so I've got to look deeper into it.
Also, for the second suggestion, I will try to insert records (with the cached SqlConnection object) and see what performance I get when I do the stress test. I doubt I'll be getting fantastic numbers as it's an I/O operation.
I'll keep digging on my side for an optimal solution meanwhile with your suggestions.
You can create a conditional requirement within the callback to ensure you are working on a cache entry that has been hit from an expiration instead of a remove/replace (in VB since I had it handy):
Private Shared Sub CacheRemovalCallbackFunction(ByVal cacheKey As String, ByVal cacheObject As Object, ByVal removalReason As Web.Caching.CacheItemRemovedReason)
    Select Case removalReason
        Case Web.Caching.CacheItemRemovedReason.Expired, Web.Caching.CacheItemRemovedReason.DependencyChanged, Web.Caching.CacheItemRemovedReason.Underused
            ' By leaving off Web.Caching.CacheItemRemovedReason.Removed, this excludes items that are replaced or removed explicitly (Cache.Remove)
    End Select
End Sub
Edit: Here it is in C# if you need it:
private static void CacheRemovalCallbackFunction(string cacheKey, object cacheObject, System.Web.Caching.CacheItemRemovedReason removalReason)
{
    switch (removalReason)
    {
        case System.Web.Caching.CacheItemRemovedReason.DependencyChanged:
        case System.Web.Caching.CacheItemRemovedReason.Expired:
        case System.Web.Caching.CacheItemRemovedReason.Underused:
            // This excludes System.Web.Caching.CacheItemRemovedReason.Removed, which is triggered when you overwrite a cache item or remove it explicitly (e.g., HttpRuntime.Cache.Remove(key))
            break;
    }
}
To expand on my previous comment... I get the picture you are thinking about the cache incorrectly. If you have an object stored in the Cache, say a Hashtable, any update/storage into that Hashtable will be persisted without you explicitly modifying the contents of the Cache. You only need to add the Hashtable to the Cache once, either at application startup or on the first request.
If you are worried about the bulk copy and page-request updates happening simultaneously, then I suggest you simply have TWO cached lists: one list which is updated as page requests come in, and one list for the bulk copy operation. When one bulk copy is finished, swap the lists and repeat. This is similar to double-buffering video RAM in video games or video apps.
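A sketch of that double-buffering idea (reusing the hypothetical RequestLog type from the earlier sketch; the lock keeps the swap atomic):

using System.Collections.Generic;

public static class DoubleBuffer
{
    private static readonly object Gate = new object();
    private static List<RequestLog> _active = new List<RequestLog>();   // page requests append here
    private static List<RequestLog> _flushing = new List<RequestLog>(); // bulk copy drains here

    public static void Add(RequestLog log)
    {
        lock (Gate)
        {
            _active.Add(log);
        }
    }

    // Called by the bulk-copy worker: swap the buffers so requests keep
    // writing to a fresh list while the old one is flushed.
    public static List<RequestLog> SwapForFlush()
    {
        lock (Gate)
        {
            var toFlush = _active;
            _active = _flushing;
            _active.Clear();      // previous flush finished; reuse its list
            _flushing = toFlush;
            return toFlush;
        }
    }
}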