A specific aspx page hangs on postback forever - asp.net

In my application, out of 500 or so pages, only one specific page hangs forever. It keeps loading forever and never stops (I waited for 30 minutes).
The problem is that this happens only in one or two odd cases. Normally the same page works fine. It is a data entry page, so basically the user enters some data and we save it in 2-3 different tables using a transaction. If I enter the data five times, it may hang once, randomly. I tried saving the exact same data five times, and it hung only twice, so clearly the data is not at fault.
I also checked the database tables, and nothing seems to be locked either.
I am not sure exactly why it is happening. I know it is an extremely weird request, but I just want a few suggestions for debugging.

Never mind. I finally got it. It was a really stupid problem. I had three insert queries and one select query in the transaction. The insert queries were fine. The select query was the one timing out and locking the transaction. I added an index on the table used by the select query, and now it works perfectly.
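For anyone hitting the same symptom, here is a minimal sketch of the pattern described above (assuming SQL Server; table and column names are made up), with an explicit CommandTimeout so a slow statement throws instead of appearing to hang:

```csharp
// Minimal sketch (System.Data.SqlClient): three inserts plus the select,
// all in one transaction, with an explicit CommandTimeout so a slow query
// throws a SqlException instead of letting the postback hang.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tran = conn.BeginTransaction())
    {
        try
        {
            using (var insert = new SqlCommand(
                "INSERT INTO Orders (CustomerId, Total) VALUES (@c, @t)",
                conn, tran))
            {
                insert.CommandTimeout = 30; // seconds - fail fast
                insert.Parameters.AddWithValue("@c", customerId);
                insert.Parameters.AddWithValue("@t", total);
                insert.ExecuteNonQuery();
            }
            // ...two more inserts like the above...

            // The select that was timing out; the fix was an index on the
            // column it filters by (here, a hypothetical Orders.CustomerId).
            using (var select = new SqlCommand(
                "SELECT COUNT(*) FROM Orders WHERE CustomerId = @c",
                conn, tran))
            {
                select.CommandTimeout = 30;
                select.Parameters.AddWithValue("@c", customerId);
                var count = (int)select.ExecuteScalar();
            }

            tran.Commit();
        }
        catch
        {
            tran.Rollback();
            throw;
        }
    }
}
```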

Related

Need to run multiple copies of ASP.net website with datagridview sharing single url C#

I have a drop-down menu that fills 4 datagridviews based on the branch selected or, when the start button is pressed, loops through 80 branches.
There are 4 SQL Server procs, 1 per datagridview, each against its own SQL table, read access only.
I need to access multiple copies over a single URL.
Database retrieval time = (# of copies run, i.e. the single ASP.NET website on a single URL, called multiple times) × database runtime.
So if data retrieval takes 30 seconds, running 3 copies takes 90 seconds, and it seems to fragment the data or time out.
I'm using NOLOCK hints, so there are no deadlocks.
But I need to optimize this.
Should I create one web service, and would that solve the problem by hitting the database only once instead of once per URL hit?
Thank you.
David
Thank you; the timer was taking over and behaving differently on the server than on my local machine. Also, the UI, the timer, and the database were out of sync. So adding a Thread.Sleep helped. Setting a longer interval on the timer helped. Also, putting all the database calls together, instead of one connection per database call, helped. Now it all runs at the same time.
The main takeaway, I think, is that the timer and the Thread.Sleep were the main things.
I also had a UI button, to which I added some code so that once it's pushed, pushing it again does nothing.
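The button guard was roughly this (a sketch, not the exact code; btnStart is a hypothetical control):

```csharp
// WebForms code-behind sketch: disable the button on the client as soon as
// it is clicked, so repeated pushes do nothing while the postback runs.
protected void Page_Load(object sender, EventArgs e)
{
    // UseSubmitBehavior=false makes ASP.NET emit a __doPostBack call, so
    // the postback still fires even though the button disables itself.
    btnStart.UseSubmitBehavior = false;
    btnStart.OnClientClick = "this.disabled = true; this.value = 'Working...';";
}
```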
Thank you to everyone that posted an answer.
Well, this will come down not so much to the number of records pulled, but to whether you are executing multiple SQL statements over and over.
I mean, to fill 4 GVs with 4 queries? That's going to be pretty much instant, assuming the record-set size for each grid is, say, in the 100-row range. Such a button click and filling of the grids should take very little time.
And even if you are using a row databound event - once again, it will run fast. But ONLY if you are not executing a whole bunch of additional SQL queries. So the number of "hits", that is, SQL statements sent to the database, is what for the most part determines the speed of this setup.
So say you have one grid that pulls 100 rows. But then the next grid needs data based on 100 rows' worth of "new" SQL queries. In that case, what you can often do is fill a recordset with the child data and filter against that recordset, and thus not execute 100 SQL queries, but only 1.
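A minimal sketch of that idea (hypothetical table and column names; parentData is the already-filled parent table):

```csharp
// Pull all the child rows once, then filter in memory instead of issuing
// one SQL query per parent row. Needs System.Data and System.Data.SqlClient.
var childData = new DataTable();
using (var da = new SqlDataAdapter(
    "SELECT BranchId, Amount FROM BranchTotals", connectionString))
{
    da.Fill(childData); // one round trip instead of ~100
}

foreach (DataRow parent in parentData.Rows)
{
    // DataTable.Select filters the in-memory rows - no database hit.
    DataRow[] children = childData.Select(
        "BranchId = " + (int)parent["BranchId"]);
    // ... bind or aggregate the children here ...
}
```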
So, this will really come down to how many separate SQL queries you execute in total.
To fill 4 grids with 4 queries? I don't see that as being a problem, and thus we are no doubt missing some big "whopper" of a detail you haven't shared with us.
Expand in your question on how many SQL statements are generated in total - that's the bottleneck here. Reduce that, and your performance should be just fine.
And if the 4 simple stored procedures have cursors that loop and again generate many SQL commands - get rid of that.
4 basic SQL query pulls are nothing - something else is at work that you are not sharing. Why would each single stored procedure take so long? That's the detail we are missing here.

DynamoDB inserts broken items if putting items too fast?

I'm facing a weird phenomenon when putting items into DynamoDB.
It seems like if I put items too fast, DynamoDB can't write the whole item to the table (kind of like broken data: the item has partial attributes, but with some weird values)?
I'm using the AWS JavaScript SDK to put the items; no errors show up and everything seems to work fine, but once I checked the data from the web console, some of the inserted items were broken. Is this related to write capacity units? (But no errors tell me it's caused by write capacity units...) I can confirm the spike of my write capacity units was about 60/min, and the table is set to "on-demand".
I tried slowing down the put rate to one-second intervals, and with exactly the same data, everything was inserted correctly...
Does anyone know why this happens and how to fix it?
The answer is no: if DynamoDB decides to throttle your requests because you exceeded your provisioned capacity, or exceeded its own hardware's capacity, or whatever - it will refuse whole requests, or in the case of BatchWriteItem do some of the writes and not others (and it will tell you which it did and which it didn't). DynamoDB will never write part of a request or corrupt part of one attribute.
If you are seeing that, the most likely culprit is a bug in your own code that does the write. Maybe your code is not thread-safe, so when it tries to prepare two items for writing concurrently, the code doing this preparation has a data race and produces a corrupt item that then gets written. Obviously, it is also possible that DynamoDB has a bug causing this, but it can't be as simple a bug as "writing more than 60 items a minute causes corruption" - if that were the case, everyone would have encountered it...
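To make the "not thread-safe" point concrete, here is a deliberately buggy sketch of that kind of data race (shown in C# with AWSSDK.DynamoDBv2 for illustration; the question uses the JavaScript SDK, but the failure mode is the same - the table name and record fields are hypothetical):

```csharp
// Deliberately BUGGY sketch (namespaces: Amazon.DynamoDBv2, Amazon.DynamoDBv2.Model).
// One shared dictionary is reused for every put; while one PutItemAsync is
// still in flight, the loop is already overwriting the values, so DynamoDB
// can receive a mix of two logical items.
var client = new AmazonDynamoDBClient();
var item = new Dictionary<string, AttributeValue>(); // shared mutable state - the bug

var tasks = new List<Task>();
foreach (var record in records) // "records" is a hypothetical input list
{
    item["pk"] = new AttributeValue { S = record.Id };     // mutates the shared item
    item["name"] = new AttributeValue { S = record.Name }; // ...so can this
    tasks.Add(client.PutItemAsync(new PutItemRequest
    {
        TableName = "MyTable", // hypothetical table name
        Item = item            // same reference on every iteration!
    }));
}
await Task.WhenAll(tasks);

// Fix: allocate a fresh Dictionary<string, AttributeValue> inside the loop
// so that every request owns its own data.
```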

Can't see SQLite database changes on a database opened by multiple processes

I have a process that opens a database using sqlite3_open and sets journal mode to WAL.
Another process uses sqlite3_open to open that same database. Everything seems to work, but the problem is that the second process does not seem to see changes made by the first process. I am trying to fetch a count, or rowids, and they stay the same.
I am sure the database is being updated, because refreshing it in SQLiteDatabaseBrowser shows the changes.
I tried multiple ways of opening databases, and multiple ways of querying, but no luck so far. What am I missing? Thanks!
Transactions are used to isolate connections from each other, especially to make changes visible only after a transaction has completed.
So for changes to be visible, the writing connection must end its transaction, and the reading connection must not have started its own transaction before that. (When using automatic transactions, ensure that statements are reset or finalized.)
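A hedged sketch of that rule (the question uses the C API; this uses C#'s Microsoft.Data.Sqlite for brevity, assuming journal_mode=WAL and a hypothetical items table; disposing the reader is the managed equivalent of sqlite3_reset/sqlite3_finalize):

```csharp
// Sketch with Microsoft.Data.Sqlite, assuming journal_mode=WAL and a
// hypothetical "items" table.
using Microsoft.Data.Sqlite;

using var writeConn = new SqliteConnection("Data Source=app.db");
using var readConn = new SqliteConnection("Data Source=app.db");
writeConn.Open();
readConn.Open();

// Leave a statement "pending reset" on the reading connection:
var cmd = readConn.CreateCommand();
cmd.CommandText = "SELECT id FROM items";
var pending = cmd.ExecuteReader();
pending.Read(); // statement mid-execution -> read transaction/snapshot pinned

// The writer commits a new row...
var ins = writeConn.CreateCommand();
ins.CommandText = "INSERT INTO items (id) VALUES (42)";
ins.ExecuteNonQuery();

// ...but the reading connection keeps seeing its old snapshot until the
// pending statement ends:
pending.Dispose(); // ends the read transaction

var count = readConn.CreateCommand();
count.CommandText = "SELECT COUNT(*) FROM items";
long n = (long)count.ExecuteScalar(); // now sees the new row
```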
I figured out what the problem was, and as usual in cases where things make no sense, it was a mistake on my side. The problem is subtle, however, so it is worth mentioning.
I was doing sqlite3_reset calls on cached prepared statements lazily, that is, right before reusing a prepared statement rather than immediately after I was done executing it. The problem is that this pattern means there is always a prepared statement pending reset. Apparently, the reset is necessary to be able to see updates to the database (most likely because the un-reset statement keeps a read transaction, and thus a snapshot, open).
Thanks for your help.
EDIT: after sleeping on it, this behavior actually makes sense. Updates should not become visible during the execution of a prepared statement; otherwise the statement might never finish, or might not be accurate.

Asp.Net MVC weird error when filling a really big form

On a new website, I have a huge form (meaning really big: it needs at least 15-20 minutes to complete) that configures the whole website for one client for the next year.
It's spread across several tabs (it's a wizard). Every time we go to the next tab, it makes a regular (non-AJAX) call to the server that generates the next "page". The information entered so far is stored in the session (an object with a custom binder).
Everything was working fine until we tested it today with all real data. Real data requires reflection and work to find the correct elements... and that takes time.
The problem we get is that the view receives a partially empty model. The session duration is set to 1440 minutes (and in IIS too). For now, what I know is that I get a NullReferenceException the first time I try to access the model in my view.
I've been checking the controller for about an hour, but it's just impossible for it to return a null model. If I enter all the data very fast, I don't have any problem (but that's with random data).
For now I have only managed to reproduce this problem on the IIS server, and I'm checking the ELMAH logs to debug it, so it's not easy to reproduce.
Do you have any ideas about how I should debug this? I'm a little lost here.
I think you should assume the session does not offer reliable persistence. I am not sure about the details, but I guess it starts freeing some elements when it exceeds its memory limit.
You will be safer if you use a database to store that information, or you could introduce your own implementation for persisting state.
In addition to the answer provided by #Ufuk:
you can easily send an AJAX request every minute that does nothing on the server; by doing this the session won't expire and the site will continue to run over extended periods.
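A sketch of the server side of that keep-alive idea (controller and action names are hypothetical; a client-side setInterval would POST to it every minute):

```csharp
// MVC controller sketch: an action that does nothing except touch the
// session so the sliding expiration resets. A client-side timer
// (e.g. setInterval posting to /KeepAlive/Ping) calls it every minute.
public class KeepAliveController : Controller
{
    [HttpPost]
    public ActionResult Ping()
    {
        Session["KeepAlive"] = DateTime.UtcNow; // touching the session refreshes the timeout
        return new EmptyResult();
    }
}
```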
The problem was that the session didn't have enough space, I think. I temporarily resolved my problem by restarting the application pool. I'm still searching for a solution that won't imply changing all this code. Maybe another session-state mode, but then I need to make my models serializable.
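For reference, the out-of-process session-state modes (StateServer and SQLServer) binary-serialize whatever is stored in the session, so in the simplest case making the models serializable is just an attribute (a sketch with a hypothetical wizard model):

```csharp
// StateServer/SQLServer session modes serialize session contents, so every
// type stored in Session must be marked [Serializable] (hypothetical model).
[Serializable]
public class WizardState
{
    public int CurrentTab { get; set; }
    public Dictionary<string, string> Answers { get; set; }
        = new Dictionary<string, string>();
}
```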

Am I using the cache correctly?

I have a page where I pull a dataset from the database, a few thousand records. I get it when the page is loaded and store it in the cache. Each time an operation is performed on the page, I check the cache to see if it's still there, and if not, go get it again (20-minute expiration); a fairly typical setup.
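The setup looks roughly like this (a sketch with hypothetical names, using System.Web.Caching; note the null check on every access, since the cache may evict the entry at any time):

```csharp
// Cache-aside sketch (System.Web.Caching): load once, cache for 20 minutes,
// and reload whenever the entry has expired or been evicted.
private DataSet GetReportData()
{
    var data = Cache["ReportData"] as DataSet; // may be null at any time
    if (data == null)
    {
        data = LoadReportDataFromDatabase(); // hypothetical helper
        Cache.Insert("ReportData", data, null,
            DateTime.UtcNow.AddMinutes(20),
            System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return data;
}
```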
When I run the page, the initial data loads fine, and a default RowFilter is applied to the data. When I change the value of a dropdown (which changes the RowFilter), the page hangs for a moment, then returns a javascript error:
Line: 80772370 (yes, that's line 80 million...)
Char: 17
Error: Syntax error
Code: 0
URL: -the url of the page I'm on-
This error is repeated EXACTLY 20 times.
When I re-run the page and the operation that produces that error, I get a different line number (for example, the next time I ran it after I posted the above message, the line was 80718666), again exactly 20 times.
Now a few curveballs:
I was having the exact same issue when I was using the session to store the data rather than the cache.
I do not have this problem in the development environment (it happens in QA). The web.config files for the two environments are nearly identical, but the primary difference between them is that QA uses a session-state server that is separate from the web server itself. This is why I moved from session variables to the cache in the first place.
When the search criteria are intended to return no results at all, the page performs as it should (shows no results).
Now this hasn't exactly been my best week, so maybe I'm missing something big, but I could use some guidance.
Thanks SO community.
If you use an UpdatePanel, remove it to see what the real error is, because right now the error is hidden in a JavaScript return string, at the position you mention.
After you find your error, bring the UpdatePanel back.
I can assume that the error is a null object/control that has been cached, and you forgot to check whether it is null.
When you cache parts of your page, and controls, you need to check in your code-behind whether they are null before using them.
