Heavy calculations in MySQL - ASP.NET

I'm using ASP.NET together with MySQL, and my site requires some fairly heavy calculations on the data. I do these calculations in MySQL; I thought it would be easier to keep them in the database so I could work with the data easily in ASP.NET and databind my GridViews and other data controls without much code.
But I'm starting to think I made a mistake by doing all the calculations on the database side, because everything seems quite slow. In one of my queries I need to get data from several tables at once and combine it in my GridView, so what that query does is:
Select from four different tables, each with some inner joins, combine the results with UNION ALL, and then do some SUM and GROUP BY.
I can't post the full query here because it's quite long, but the sketch below shows its general shape. Do you think it's a bad way to do the calculations? Would it be better to do them in ASP.NET? What is your opinion?
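Roughly (all table and column names below are made up for illustration):

-- table1/table2, lookup1/lookup2 and the columns are hypothetical names
select k.item_id, sum(k.amount) as total_amount
from (
    select t1.item_id, t1.amount
    from table1 t1
    inner join lookup1 l1 on l1.id = t1.lookup_id
    union all
    select t2.item_id, t2.amount
    from table2 t2
    inner join lookup2 l2 on l2.id = t2.lookup_id
    -- two more selects like the above, one per remaining table
) k
group by k.item_id;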

MySQL does not allow embedded assembly, so writing a query that takes a sequence of RGB records as input and outputs MPEG-4 is probably not the best idea.
On the other hand, it's good at summing and averaging.
Seeing the actual calculations you're talking about would certainly help improve this answer.
Self-update: about that query that takes RGB records and outputs MPEG-4 - I'm now seriously thinking about how to implement it, and worse: I feel it's possible :)

My experience with MySQL is that it can be quite slow for calculations. I would suggest moving a substantial amount of the work (particularly the grouping and summing) into ASP.NET. This has not been my experience with other database engines, but a 30-day moving average in MySQL, for example, seems quite slow.
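For example, without window functions a 30-day moving average in MySQL typically ends up as a correlated subquery like the sketch below (daily_sales is a made-up table), which re-scans the data for every output row - one reason it can be so slow:

-- daily_sales(sale_date, amount) is a hypothetical table
select s.sale_date,
       (select avg(s2.amount)
          from daily_sales s2
         where s2.sale_date between s.sale_date - interval 29 day and s.sale_date) as moving_avg_30d
  from daily_sales s
 order by s.sale_date;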

Without seeing the actual query, it sounds like you are doing relational, set-based work, which an RDBMS is good at, so it seems you are doing it in the right place. The problem may be the optimization of the query itself: run EXPLAIN on it to get an idea of the query plan MySQL is using and try to optimize from there.
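For instance (using the same made-up table names as the sketch in the question), prefix the statement with EXPLAIN and look at the join types, keys, and row estimates:

-- table1/lookup1 are the hypothetical tables from the question sketch
explain
select t1.item_id, sum(t1.amount)
from table1 t1
inner join lookup1 l1 on l1.id = t1.lookup_id
group by t1.item_id;
-- rows showing type = ALL (full table scan) or key = NULL with large row
-- estimates are usually the first candidates for a new index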

Related

What are some good practices for modular code?

In PL/SQL there are some fancy new concepts, like table functions, objects, and probably others I have not discovered yet.
But then again, there is also plain, simple code generation (dynamic PL/SQL) that you can EXECUTE IMMEDIATE.
Both can help with code reuse.
From what I can tell, table functions and objects can help with creating modular code, but still not enough to eliminate duplication entirely (maybe I am not using them to their full potential; I have to admit my objects only contain data for now and no logic).
On the other hand, code generation is much simpler and can reduce duplicate code further. But it is kind of hard to read what the actual business logic is behind the code-generation layer.
I want modular, non-duplicated code. Should I stick with plain code generation? What are the pros and cons of each approach?
Dynamic SQL is generally better than advanced PL/SQL features like table functions, object-relational types, data cartridge, the ANY* types, etc. With a few simple tips you can avoid the pitfalls of dynamic SQL and use it to create modular systems.
Advanced PL/SQL features are cool, and at some point you'll have to use them at least a little. They're great for solving weird, specific problems. But you will almost certainly regret creating an Oracle system that is centered around one of those features. I've wasted weeks or months of my life on each of the above PL/SQL features.
Dynamic SQL
Pro - It always works. It might be painful, but there's always a way to make it work in SQL and make it run fast.
Con - A little harder to read and write.
Advanced PL/SQL
Pro - Cool features, elegant code that can perfectly solve certain problems.
Con - Will let you down at a critical moment.
It's hard to give examples of advanced PL/SQL failures without writing a novel. The stories typically go something like this: "We combined features A, B, C ... we hit bugs X, Y, Z ... everyone got angry ... we spent a month re-writing it."
Dynamic SQL doesn't have to be so bad. It just takes some discipline.
Good formatting and instrumenting. Make sure the dynamic SQL looks beautiful and it can be easily printed out for debugging. Follow good programming practices - indent, add comments, use meaningful names, etc. It will be a shock to the point-and-click programmers when the "Beautifier" button on the IDE doesn't help them. Don't let anyone get away with sloppy code - just because it's technically a string shouldn't allow anybody to avoid common style rules.
Alternative quoting mechanism. Use the q syntax to avoid constantly escaping things. For example, q'[I'll use single quotes if I want to!]' instead of 'I''ll use single quotes if I want to!'.
Templates instead of concatenation. Write the code in un-interrupted blocks and then replace the dynamic parts later. Combine it with the q strings to avoid a million quotation marks and pipes in the code. For example:
v_dynamic_sql_template constant varchar2(32767) :=
q'[
select a, b, $DYNAMIC_SELECT_LIST$
from table1
$DYNAMIC_JOIN_1$
where table1.a > 1
$DYNAMIC_WHERE_1$
]';
...
v_dynamic_sql := replace(v_dynamic_sql_template, '$DYNAMIC_SELECT_LIST$', v_variable);
...
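A hedged continuation of the sketch above: once every placeholder has been replaced, the finished string is typically opened as a ref cursor or run with EXECUTE IMMEDIATE, for example:

-- once every $...$ placeholder has been replaced:
dbms_output.put_line(v_dynamic_sql);        -- instrument: log exactly what will run
open v_result_cursor for v_dynamic_sql;     -- v_result_cursor: a hypothetical sys_refcursor declared alongside the template
-- or, for DML/DDL built the same way:
-- execute immediate v_dynamic_sql;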
(In this question I assume you are an intermediate or advanced Oracle developer. If you're a beginner, the answer is probably static SQL statements, but you haven't seen enough SQL features to realize that yet.)

Does it make sense to make multiple SQLite databases to improve performance?

I'm just learning SQL/SQLite, and plan to use SQLite 3 for a new website I'm building. It's replacing XML, so concurrency isn't a big concern. But I would like to make it as performant as possible with the technology I'm using. Are there any benefits to using multiple databases for performance, or is the best performance keeping all the data for the site in one file? I ask because 99% of the data will be read-only 99% of the time, but that last 1% will be written to 99% of the time. I know databases don't read in and re-write the whole file for every little change, but I guess I'm wondering if the writes will be much faster if the data is going to a separate 5KB database, rather than part of the ~ 250MB main database.
With proper performance tuning, SQLite can do around 63,300 inserts per second. Unless you're planning on some really heavy volume, I would avoid pre-optimizing. Splitting into two databases doesn't feel right to me, and if you plan on doing joins in the future you'll be hosed. Especially since you say concurrency isn't a big concern, I would avoid complicating the database design.
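If the writes ever do become a bottleneck, the usual first steps are batching them in a transaction and tuning a couple of pragmas rather than splitting databases; a rough sketch (page_hits is a made-up table):

PRAGMA journal_mode = WAL;      -- write-ahead logging, available since SQLite 3.7
PRAGMA synchronous = NORMAL;    -- trade a little durability for write speed
BEGIN TRANSACTION;
INSERT INTO page_hits (page_id, hit_time) VALUES (42, strftime('%s', 'now'));   -- page_hits is hypothetical
INSERT INTO page_hits (page_id, hit_time) VALUES (43, strftime('%s', 'now'));
COMMIT;                         -- one sync for the whole batch instead of one per statement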
Actually, with 50,000 databases you will get very bad performance.
You could try several tables in a single database - sometimes that really can speed things up. But since the description of the initial task is very general, it's hard to say exactly what you need; try a single table and multiple tables, and measure the speed.

Big database and how to proceed

I'm working on a website running ASP.NET with an MSSQL database. I realized that the number of rows in several tables can become very high (possibly something like a hundred million rows). I think this would make the website run very slowly - am I right?
How should I proceed? Should I base it on a multi-database system, so that users are separated into different databases and each database is smaller? Or is there a different, more effective and easier approach?
Thank you for your help.
Oded's comment is a good starting point and you may be able to just stop there.
Start by indexing properly and only returning relevant result sets. Consider archiving unused (or rarely accessed) data.
If that isn't enough, partitioning or sharding is your next step. This is better than a "multidatabase" solution because your logical entities remain intact.
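For illustration, a rough sketch of what table partitioning looks like in SQL Server (the table, columns, and date boundaries are made up, and partitioning requires an edition that supports it):

-- partition a hypothetical Orders table by year of OrderDate
CREATE PARTITION FUNCTION pfOrderYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2010-01-01', '2011-01-01', '2012-01-01');

CREATE PARTITION SCHEME psOrderYear
    AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

CREATE TABLE Orders
(
    OrderId    bigint   IDENTITY(1,1) NOT NULL,
    OrderDate  datetime NOT NULL,
    CustomerId int      NOT NULL,
    Amount     money    NOT NULL
) ON psOrderYear (OrderDate);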
Finally, if that doesn't work, you could introduce caching. Jesper Mortensen gives a nice summary of the options that are out there for SQL Server:
Sharedcache -- open source, mature.
AppFabric -- from Microsoft, quite mature despite being "late beta".
NCache -- commercial, I don't know much about it.
StateServer and family -- commercial, mature.
Try partitioning the data. This should make each query faster and the website shouldn't be as slow.
I don't know what kind of data you'll be displaying, but try to give users the option to filter it. As someone already commented, partitioned data will make everything faster.

ASP.NET page performance issue

I have an ASP.NET page with 4 GridViews that connect to a MySQL database for data population. The average response time for a round trip to the server is 20.55 seconds, which is way too long. I have since applied GZip HTTP compression to improve the speed, but I don't see any improvement in load time. Any suggestions or ideas will be greatly appreciated.
I've also used pagination, but it had no effect.
You will have to nail down where the time is being spent. Debug the application and measure the response time of the SQL queries and the databind operation separately. If it's the query or the stored procedure that is taking the time, you should add indexes or refine the query to improve performance. But if it's the databinding to the grid that's taking the time (which I don't really suspect), post some code here, without which we can't help much on that.
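One quick way to time the queries on the MySQL side is the session profiler (a hedged sketch; it has to be enabled per session and is deprecated in favour of the performance_schema in newer MySQL versions):

SET profiling = 1;
-- run one of the real grid view queries here; "sales" below is just a made-up stand-in
SELECT item_id, SUM(amount) FROM sales GROUP BY item_id;
SHOW PROFILES;                -- lists the statements just run and how long each took
SHOW PROFILE FOR QUERY 1;     -- per-stage breakdown for statement number 1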
As @Daniel said, start with profiling the page to see exactly where the time is spent.
Namely, execute the queries that the grid views run independently of the page. How long do they take to run? If most of your time is here, then try and figure out how to make them more performant.
Second, you might consider using something other than a grid view. Those can store a tremendous amount of information in view state. Maybe look into using repeaters or something similar depending on the functionality you actually need.
The first thing to do is to check how long your database queries are taking to run. Until you know how long they are taking it is hard to guess at what might be taking the time.

Writing updates to OLAP cube

What is the easiest way to write user-entered measure values (a sales forecast) to a SQL Server Analysis Services OLAP cube from a .NET client application?
I'm aware that the underlying fact table can be updated with DML statements and that the cube can then be reprocessed, but I'm looking for alternatives.
Regards,
Aleksandar
We use the Ranet OLAP pivot table for editing cube data.
See the samples "Simple PivotTable Widget - PivotTable with Updateable" and "Writing updates to OLAP cube".
I nearly got into a project like this once. It did not go ahead, which I was very grateful for, after looking into the work involved. My advice to you is to run away!!!
You do not have to update actual cube data, or reprocess though - depending on how complex your user-entered data is going to be. I believe this is covered in Microsoft's standard MDX course, the notes of which you may be able to find online (sorry, I've since disposed of my copy). This depends on whether you want to learn MDX though, which is not easy.
I think you can use ADOMD.NET to do writeback. You can use an AdomdCommand to wrap UPDATE CUBE statements.
ADOMD.NET:
http://msdn.microsoft.com/en-us/library/ms123483(v=SQL.100).aspx
The link below talks about some of the issues with this approach if you are doing too many updates together.
http://www.developmentnow.com/g/112_2006_1_0_0_677198/writeback-in-ADOMD-NET.htm
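For illustration, the kind of statement an AdomdCommand would wrap looks roughly like this (the cube, hierarchy, and member keys below are made up, and write-back has to be enabled on the measure group):

-- hypothetical cube, hierarchy, and member keys; write-back must be enabled on the measure group
UPDATE CUBE [Sales Planning]
SET ( [Measures].[Forecast Amount],
      [Date].[Calendar].[Month].&[201201],
      [Product].[Products].&[1234] ) = 50000
USE_EQUAL_ALLOCATION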
