I have an ASP.NET page with four GridViews connecting to a MySQL database for data population. The average response time for a round trip to the server is 20.55 seconds. That is way too much time. I have since applied HTTP compression (GZip) to improve the speed, but I don't see any improvement in load time. Any suggestions or ideas will be greatly appreciated.
I've also used pagination, but it had no effect.
You will have to nail down where it is taking time. Debug the application and measure the response time of the SQL queries and the databind operation separately. If it's the query or the stored procedure that is taking time, you should add indexes or refine the query to improve performance. But if it's the databinding to the grid that's taking time (which I don't really suspect), post some code here; without it we can't help much on that.
As @Daniel said, start with profiling the page to see exactly where the time is spent.
Namely, execute the queries that the grid views run independently of the page. How long do they take to run? If most of your time is here, then try and figure out how to make them more performant.
Second, you might consider using something other than a grid view. Those can store a tremendous amount of information in view state. Maybe look into using repeaters or something similar depending on the functionality you actually need.
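For instance, if a grid is read-only (no sorting or editing postbacks), you can switch its view state off entirely and rebind on every request; a minimal sketch, where ordersGrid and GetOrders() are hypothetical stand-ins for your own grid and data access:

protected void Page_Load(object sender, EventArgs e)
{
    // A read-only grid that is rebound on every request doesn't need view
    // state, which otherwise bloats both the page and the postback payload.
    ordersGrid.EnableViewState = false;
    ordersGrid.DataSource = GetOrders();   // hypothetical data-access call
    ordersGrid.DataBind();
}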
The first thing to do is to check how long your database queries are taking to run. Until you know how long they are taking it is hard to guess at what might be taking the time.
We use a tool that operates on a certain database, updating certain values when we make changes in the tool.
This takes a long time for some simple tasks.
I just need to find out which table, which column, and which value for that column gets updated.
For this I need to search the whole database for which column has the value "XYZ" and find the corresponding table or tables.
Are there any scripts for this?
Just because something can be done does not mean it should be done.
I know you've got your process designed this way and you very likely don't want to change it but, really, your life will get a whole lot better if you redesign this to avoid doing something that really, seriously, shouldn't be done. Searching through every text field in an entire database in search of some magical character string is a Bad Idea. It's actually only ONE of The Big Bad Ideas and it probably isn't the Baddest Idea, but it's a big enough bad enough idea that you should give Serious Consideration to doing something else, better.
OK, so what's wrong with it?
First, it indicates that you're not using a database, you're using a midden. You dump stuff in and then hope to dig it out later. This is the kind of thing people did thousands of years ago (it was popular back when flint was cutting edge technology), and while it helps keep archaeologists employed digging through these trash heaps, we are software developers, not archaeologists, and we don't want to have to do this kind of thing on a regular basis.
Second, this is a serious performance killer. You're going to either write some god-awful static code to laboriously check every field in every table, or you're going to write some middling-bright code to dynamically create some even more god-awful query that will laboriously check every field in every table. The word to focus on here is "laborious". And "god-awful", if it comes to that. Scanning through every row in every table in your database and testing every field in every one of those rows is going to be slow. Very, very slow. It's going to be dead-turtle-on-the-side-of-the-road-with-tire-marks-on-its-shell slow. This is not a good thing to do, unless you own stock in the local electric utility and want to make sure every generated electron has a happy home in your employer's computers.
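In case it isn't obvious what that middling-bright dynamic version involves, here is a sketch (SQL Server assumed purely for illustration; connectionString is yours), shown only to make the laboriousness concrete:

// Probe every character column in every table - one full scan per column.
var columns = new List<KeyValuePair<string, string>>();   // (table, column)
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var cmd = new SqlCommand(
        "SELECT TABLE_NAME, COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS " +
        "WHERE DATA_TYPE IN ('char','varchar','nchar','nvarchar')", conn))
    using (var rdr = cmd.ExecuteReader())
        while (rdr.Read())
            columns.Add(new KeyValuePair<string, string>(
                rdr.GetString(0), rdr.GetString(1)));

    foreach (var col in columns)   // ...and here comes the laborious part
    {
        using (var probe = new SqlCommand(string.Format(
            "SELECT COUNT(*) FROM [{0}] WHERE [{1}] = @v",
            col.Key, col.Value), conn))
        {
            probe.Parameters.AddWithValue("@v", "XYZ");
            if ((int)probe.ExecuteScalar() > 0)
                Console.WriteLine("{0}.{1}", col.Key, col.Value);
        }
    }
}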
Third, people will have strong emotions when they see your code. Those destined for careers in management will laugh, for they know that they won't have to maintain it or try to solve the performance issues. The technically challenged will cry, because they'll know there's nothing they can do to fix it. The true Code Warriors will stare in amazement for a moment, and will then grit their teeth, hunt you down, and beat you to death with their ceremonial Wands Of Green-Bar, for only they will know that this evil could have been prevented.
So give some thought to a re-design. Once again, just because something can be done does not mean it should be done.
Share and enjoy.
To learn ASP.NET MVC, I am thinking of creating a community forum like SO where people can rate posts, users, etc., and users can thereby gain points. I just can't figure out whether the points should be added to the user profile whenever an action is taken (post rated up/down, user created a new post, etc.) or whether they should be calculated from the different activities the user has done.
I have a few pros and cons for both ways of doing it:
Add rating:
Pro:
Easier to implement, and much faster and less resource intensive.
Con:
If the value of the different activities changes, you can't do anything about it.
No way of showing a history of how you have earned your points.
Calculating rating:
Pro:
Much easier to have a point-history for both the user and people viewing the account.
Possibility to change the amount of points for a given activity.
Con:
A little more difficult to implement.
More resource intensive (can be mitigated by caching the data, or by creating a job which calculates the points).
I think you've pretty much thought of everything. I can just offer some engineering tips. All things being equal, always start off with what's easier to implement.
Now there are some cons with that, as you say, so all things aren't equal: the two don't offer the same functionality. So can you live without the history? If not, implement the calculating approach first. Your model will be tight and well defined, which is always nice.
If you determine later on that this is too CPU intensive, only then do you go about fixing it with a cache or a job. Good ideas, both, by the way. 90% of the time, unless you really measure it, you'll be laboring on optimizations that are not necessary. Unnecessary optimizations are wrong.
It looks like you are trying to build something like Stack Overflow, and Stack Overflow does have a history of where your points came from. If you use LINQ, the calculation could be done purely in SQL, without a lot of effort on the programming side (although it'd be a bit more advanced than normal LINQ queries).
I'd go for the second option, if only because it's more interesting: you'll learn more about LINQ, caching, and MVC overall.
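A rough sketch of that calculation in LINQ to SQL (the Activities and PointValues tables and their columns are made up for illustration):

// Hypothetical schema: Activities(UserId, ActivityType, CreatedAt) records
// what happened; PointValues(ActivityType, Points) maps each activity type
// to its current worth, so values can change without rewriting history.
int totalPoints =
    (from a in db.Activities            // 'db' is your DataContext
     join p in db.PointValues on a.ActivityType equals p.ActivityType
     where a.UserId == userId
     select (int?)p.Points).Sum() ?? 0; // int? avoids an exception on empty sets

// The same join grouped by day doubles as the point history.
var history =
    from a in db.Activities
    join p in db.PointValues on a.ActivityType equals p.ActivityType
    where a.UserId == userId
    group p.Points by a.CreatedAt.Date into g
    select new { Day = g.Key, Points = g.Sum() };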
You can use an ActionFilter class to catch every action that adds/deletes user points, like an AuditActionFilter class. This can be done just by putting an action filter attribute on top of the corresponding methods. In the audit action filter class, you can figure out which method was executed easily using the filterContext object, and track the progress of points for each user in a flat file or XML, which you can parse and show when he wants to see his history.
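A minimal sketch of such a filter (the attribute name and the AuditLog helper that appends to your flat file or XML store are hypothetical):

public class AuditPointsAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Which controller/action just ran, and for which user?
        string controller = filterContext.ActionDescriptor
                                         .ControllerDescriptor.ControllerName;
        string action = filterContext.ActionDescriptor.ActionName;
        string user = filterContext.HttpContext.User.Identity.Name;

        AuditLog.Write(user, controller, action, DateTime.UtcNow);
        base.OnActionExecuted(filterContext);
    }
}

// Then decorate each point-affecting action:
[AuditPoints]
public ActionResult RatePostUp(int postId) { /* ... */ }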
I have an ASP.NET web application (.NET 2008) using MS SQL Server 2005. I want to increase the performance of the web site. Does anyone know of an article containing steps to do that, step by step, both in SQL (indexes, etc.) and in the code?
Performance tuning is a very specific process. I don't know of any articles that discuss directly how to achieve this, but I can give you a brief overview of the steps I follow when I need to improve performance of an application/website.
Profile.
Start by gathering performance data. At the end of the tuning process you will need some numbers to compare to actually prove you have made a difference. This means you need to choose some specific processes that you monitor and record their performance and throughput.
For example, on your site you might record how long a login takes. You need to keep this very narrow: pick a specific action that you want to record, and time it. (Use a tool to do the timing, or put some Stopwatch code in your app to report times.) Also, don't just run it once; run it multiple times. Try to ensure you know the whole environment setup so you can duplicate it again at the end.
Try to make this as close to your production environment as possible. Make sure your code is compiled in release mode, and running on real separate servers, not just all on one box etc.
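For instance, a bare-bones Stopwatch harness for the login example (PerformLogin is a stand-in for whatever action you're measuring):

var sw = System.Diagnostics.Stopwatch.StartNew();
const int runs = 20;                    // never trust a single measurement
for (int i = 0; i < runs; i++)
    PerformLogin("testuser", "password");
sw.Stop();
Console.WriteLine("Average login: {0:F1} ms",
                  sw.ElapsedMilliseconds / (double)runs);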
Instrument.
Now that you know what action you want to improve and have a target time to beat, you can instrument your code. This means injecting (manually or automatically) extra code that times each method call, or each line, and records times and/or memory usage right down the call stack.
There are lots of tools out there that can help you with this and automate some of it (Microsoft's CLR Profiler (free), Red Gate ANTS (commercial), the higher editions of Visual Studio have profiling built in, and loads more). But you don't have to use automatic tools; it's perfectly acceptable to just use the Stopwatch class to time each block of your code. What you are looking for is a bottleneck. The likelihood is that you will find a high proportion of the overall time is spent in a very small bit of code.
Tune.
Now you have some timing data, you can start tuning.
There are two approaches to consider here. First, take an overall perspective. Consider whether you need to redesign the whole call stack. Are you repeating something unnecessarily? Or are you doing something you don't need to do at all?
Second, now that you have an idea of where your bottleneck is, you can try to figure out ways to improve that bit of code. I can't offer much advice here, because it depends on what your bottleneck is; just look to optimise it. Perhaps you need to cache data so you don't have to loop over it twice. Or batch up SQL calls so you can make just one. Or tighten your query filters so you return less data.
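As an illustration of the batching idea, a sketch that collapses N per-id queries into one (the Customers table and Customer class are hypothetical):

// Before: one round trip per id.
// foreach (int id in customerIds) { /* SELECT ... WHERE Id = @id */ }

// After: a single round trip. Inlining the ids is safe here only because
// they are ints, not user-supplied strings.
string idList = string.Join(",",
    customerIds.Select(i => i.ToString()).ToArray());
using (var cmd = new SqlCommand(
    "SELECT Id, Name FROM Customers WHERE Id IN (" + idList + ")", connection))
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
        customers.Add(new Customer { Id = reader.GetInt32(0),
                                     Name = reader.GetString(1) });
}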
Re-profile.
This is the most important step that people often miss out. Once you have tuned your code, you absolutely must re-profile it in the same environment that you ran your initial profiling in. It is very common to make minor tweaks that you think might improve performance and actually end up degrading it because of some unknown way that the CLR handles something. This is much more common in managed languages because you often don't know exactly what is going on under the covers.
Now just repeat as necessary.
If you are likely to be performance tuning often I find it good to have a whole batch of automated performance tests that I can run that check the performance and throughput of various different activities. This way I can run these with every release and record performance changes each release. It also means that I can check that after a performance tuning session I know I haven't made the performance of some other area any worse.
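One simple shape for such a batch (a sketch; the CSV log is just one way to keep numbers release to release):

// Time a named action over many iterations and append the result to a log
// so each release's numbers can be compared with the last.
public static void TimeAction(string name, int iterations, Action action)
{
    action();                                        // warm-up run
    var sw = System.Diagnostics.Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
        action();
    sw.Stop();
    System.IO.File.AppendAllText("perf-history.csv",
        string.Format("{0:u},{1},{2},{3}{4}",
            DateTime.UtcNow, name, iterations,
            sw.ElapsedMilliseconds, Environment.NewLine));
}

// e.g. TimeAction("login", 50, () => PerformLogin("testuser", "password"));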
When you are profiling, don't always just think about the time to run a single action. Also consider profiling under load, with lots of users logged in. Sometimes apps perform great when there's just one user connected, but when they hit a certain number of users, suddenly the whole thing grinds to a halt. Perhaps because suddenly they are spending more time context switching or swapping memory in and out to disk. If it's throughput you want to improve, you need to figure out what is causing the limit on throughput.
Finally. Check out this huge MSDN article on Improving .NET Application Performance and Scalability. Specifically, you might want to look at chapter 6 and chapter 17.
I think the best we can do from here is give you some pointers:
query less data from the sql server (caching, appropriate query filters)
write better queries (indexing, joins, paging, etc)
minimise any inappropriate blockages such as locks between different requests
make sure session-state hasn't exploded in size
use bigger metal / more metal
use appropriate looping code etc
But to stress: from here, anything is guesswork. You need to profile to find the general area of the suckage, and then profile more to isolate the specific area(s); but start by looking at:
sql trace between web-server and sql-server
network trace between web-server and client (both directions)
cache / state servers if appropriate
CPU / memory utilisation on the web-server
I think first of all you have to find your bottlenecks and then try to improve those.
That lets you put your effort exactly where you have a serious problem.
In addition, you need to improve your connection handling to the DB, for example by using a lazy or singleton pattern, and by creating batched requests instead of single requests.
That helps you decrease the number of DB connections.
Check your cache and use suitable loop structures.
Another thing is to use appropriate types; for example, if you need an int, don't create a long, and so on.
At the end you can use a profiler (especially on the SQL side) and check whether your queries are implemented as well as possible.
We're experimenting with appending timestamps to some URLs to let things cache but freshen them when they do change. We have code that boils down to this:
DateTime ts = File.GetLastWriteTime(absPath);
where absPath is the mapped path of a URL. So the web server will be checking this file's last write time every time we serve up a link to the file. Kinda gives me the willies - should it?
You should performance-test it, but off-hand I doubt it's any more expensive than testing a file's existence (e.g. whether it's read-only), and certainly less expensive than actually opening the file.
If (after testing) you decide that it's a problem, you could also cache your calls to GetLastWriteTime (e.g. don't call it more than once every 5 seconds for any given file).
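A minimal sketch of that time-boxed cache (thread safety via a single lock; tune the 5 seconds to taste):

// Remember each file's last write time for up to 5 seconds.
static readonly Dictionary<string, KeyValuePair<DateTime, DateTime>> mtimes =
    new Dictionary<string, KeyValuePair<DateTime, DateTime>>();
static readonly object mtimesLock = new object();

static DateTime GetCachedWriteTime(string absPath)
{
    lock (mtimesLock)
    {
        KeyValuePair<DateTime, DateTime> entry;  // Key = when checked, Value = last write
        if (mtimes.TryGetValue(absPath, out entry) &&
            (DateTime.UtcNow - entry.Key).TotalSeconds < 5)
            return entry.Value;

        DateTime ts = File.GetLastWriteTime(absPath);
        mtimes[absPath] = new KeyValuePair<DateTime, DateTime>(DateTime.UtcNow, ts);
        return ts;
    }
}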
Also, I've never used it but if caching is a concern I hope you've considered delegating its implementation to some specialist like Squid instead of rolling your own.
I have not tried this, but your question is relevant to a situation that I have been thinking about.
You did not indicate what data is changing (database, XML data, etc.).
ASP.NET caching does support updating the cache based on a variety of dependencies.
Check out this article, particularly the sections on File-based Dependency, Time-based Dependency, and Key-based Dependency.
"Dependencies allow us to invalidate a particular item within the Cache based on changes to files, changes to other Cache keys, or at a fixed point in time. Let's look at each of these dependencies."
Here is the article:
http://msdn.microsoft.com/en-us/library/ms972379.aspx
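For example, a file-based dependency along the lines the article describes (the file path and LoadProducts loader are hypothetical):

// The cached item is evicted automatically when the file changes.
string absPath = Server.MapPath("~/data/products.xml");
Cache.Insert("products", LoadProducts(absPath),
             new System.Web.Caching.CacheDependency(absPath));

// On later requests, Cache["products"] returns null once the file has
// changed, so reload and re-insert at that point.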
Thanks, Joe
Essentially, there are three answers to your question of "how expensive?".
Too expensive - you've tested it and something has to change for the system to be usable.
Acceptable - you've tested it and it isn't great, but it is fast enough to use.
Rather cheap - you've tested it and there is no noticeable impact on performance.
We can't really answer the question for you, so you'll just have to try it out. If you decide that it was too expensive or that it's worth your time to move it from acceptable to rather cheap, change the question to ask how to speed things up.
It'll incur additional small disk I/Os when the links are generated. If you create many URLs in a short period of time, this could be a bottleneck. No one can say for sure whether this will impact your scenario - you really need to measure and see if it is going to be an issue.
Or if you're worried about it, why not cache it for a minute?
I'm using ASP.NET together with MySQL, and my site requires some quite heavy calculations on the data. These calculations are made with MySQL; I thought it was easier to do the calculations in MySQL so that I could work with the data easily in ASP.NET and databind my GridViews and other data controls without much code.
But I'm starting to think I made a mistake doing all the calculations in the back end, because everything seems quite slow. In one of my queries I need to get data from several tables at once and add them together into my GridView, so what I do in that query is:
Select from four different tables, each having some inner joins. Then union them together using UNION ALL. Then do some summing and grouping.
I can't post the full query here because it's quite long. But do you think it's a bad way to do the calculations, the way I've done it? Would it be better to do them in ASP.NET? What is your opinion?
MySQL does not allow embedded assembly, so writing a query which will take a sequence of RGB records as input and output MPEG-4 is probably not the best idea.
On the other hand, it's good in summing or averaging.
Seeing what calculations you are talking about will surely help to improve the answer.
Self-update:
"MySQL does not allow embedded assembly, so writing a query which will take a sequence of RGB records as input and output MPEG-4 is probably not the best idea."
I'm really thinking about how to implement this query, and worse than that: I feel it's possible :)
My experience with MySQL is that it can be quite slow for calculations. I would suggest moving a substantial amount of the work (particularly the grouping and summing) to ASP.NET. This has not been my experience with other database engines, but a 30-day moving average in MySQL seems quite slow.
Without knowing the actual query, it sounds like you are doing relational/table work in your query, which an RDBMS is good at, so it seems that you are doing it in the correct place... The problem may be in the optimization of the query. You may want to run "EXPLAIN <query>", get an idea of the query plan that MySQL is using, and try to optimize that way...
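For instance, assuming the MySQL Connector/NET provider (MySql.Data.MySqlClient), you can dump the plan from code; yourUnionQuery stands in for the long query:

using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();
    using (var cmd = new MySqlCommand("EXPLAIN " + yourUnionQuery, conn))
    using (var rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
            // type=ALL means a full table scan; key shows which index was used.
            Console.WriteLine("table={0} type={1} key={2} rows={3}",
                rdr["table"], rdr["type"], rdr["key"], rdr["rows"]);
    }
}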