How to query data from an AspenTech IP21 Historian?

Old subject, combined with new tools: What would be the best/appropriate way to query data for a web application from an AspenTech IP21 (InfoPlus.21) data historian?
In the past, I've used some pretty awful queries via the Aspen SqlPlus ODBC driver, but that doesn't seem like the right approach, as it doesn't seem to install on Win 7 at all.
Anyone here have experience with that?

1) Make sure you have an appropriate version of the Aspen tools; later ones (7.1, 7.2) will run on Windows 7 with no problems.
2) I have worked with Aspen IP21 for over 15 years and have never had issues with SQL performance compared to other databases like Oracle or SQL Server, as long as IP21 is on an appropriate server and the query is written appropriately for the structure of the database. Doing a join against a timestamp is going to produce a slow query. Depending on what you want to accomplish, there are multiple other ways to get data: the HISTORY pseudo table, the AGGREGATES table, or other query techniques that are specific to IP21 (see the sketch just after this list).
3) ODBC is still the most standard, easiest, and in my experience best-performing way to get data from IP21 from any client: ASP, .NET, web pages, other databases, VB programs, Excel VBA, etc. You may just need some optimization tweaking, probably in how the SQL is written.
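For illustration, here is a rough sketch of a SQLplus query against the HISTORY pseudo table. The tag name, timestamps, and PERIOD value are placeholders ('ATCAI' is the stock demo tag), and the PERIOD units should be verified against the SQLplus documentation for your IP21 version:
-- Aspen SQLplus sketch: one hour of history for a single tag.
SELECT name, ts, value
FROM history
WHERE name = 'ATCAI'
AND ts BETWEEN '05-JAN-24 08:00:00' AND '05-JAN-24 09:00:00'
AND period = 60*10;  -- resample interval; 60 seconds if PERIOD is in tenths of a second (verify)
Keeping the tag name and time range tight like this, rather than joining on timestamps, is what keeps the query fast.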

I've had extensive experience using the normal SQLplus drivers in C#/ASP.NET and performance has never been an issue. While the ODBC drivers work, I have encountered certain limitations, such as not always returning SELECT results.
As for how to check 'out of spec':
If this is for real-time values and not for ranges of time, I would suggest using record references to simply select the current value. That way the entire query stays in memory.
For time ranges you will have to select the ranges and iterate over them, which is more costly.
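One sketch of reading a current value straight from a tag's definition record, as an approximation of the record-reference idea (ATCAI is the stock demo tag; IP_ANALOGDEF and its IP_INPUT_VALUE/IP_INPUT_TIME fields are the usual analog definition record and fields, but your tags may live in other definition records):
-- SQLplus sketch: read the current value of one analog tag directly
-- from its record fields, with no HISTORY scan involved.
SELECT name, ip_input_value, ip_input_time
FROM ip_analogdef
WHERE name = 'ATCAI';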

Related

ASP.NET/SQL 2008 Performance issue

We've developed a system with a search screen (original screenshot hosted at nsourceservices.com). There is some fairly serious search functionality: you can use any combination of statuses, channels, languages, and campaign types, and then narrow it down by name and so on as well.
Then, once you've searched and the leads pop up at the bottom, you can sort the headers.
The query uses ROWNUM to do a paging scheme, so we only return something like 70 rows at a time.
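(For context, and assuming ROWNUM here refers to SQL Server's ROW_NUMBER() function, the paging pattern is roughly the following shape; the table and column names are hypothetical:)
-- Sketch of ROW_NUMBER()-based paging: number the filtered, sorted rows,
-- then return a single 70-row page.
WITH NumberedLeads AS (
    SELECT LeadID, Name, Status, Channel,
           ROW_NUMBER() OVER (ORDER BY Name) AS RowNum
    FROM Leads
    WHERE Status IN ('New', 'Open')   -- example filters
)
SELECT *
FROM NumberedLeads
WHERE RowNum BETWEEN 71 AND 140;      -- second page of 70 rows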
The Problem
Even though we're only returning 70 rows, an awful lot of IO and sorting is going on. This makes sense of course.
This has always caused some minor spikes to the Disk Queue. It started slowing down more when we hit 3 million leads, and now that we're getting closer to 5, the Disk Queue pegs for up to a second or two straight sometimes.
That would actually still be workable, but this system has another area with a time-sensitive process; let's say for simplicity that it's a web service that needs to serve up responses very quickly or it will cause a timeout on the other end. The Disk Queue spikes are causing that part to bog down, which is causing timeouts downstream. The end result is dropped phone calls in our automated VoiceXML-based IVR, and that's very bad for us.
What We've Tried
We've tried:
Maintenance tasks that reduce the number of leads in the system to the bare minimum.
Added the obvious indexes to help.
Ran the index tuning wizard in profiler and applied most of its suggestions. One of them was going to more or less reproduce the entire table inside an index so I tweaked it by hand to do a bit less than that.
Added more RAM to the server. It was a little low, but now it always has something like 8 gigs idle, and SQL Server is configured to use no more than 8 gigs; however, it never uses more than 2 or 3. I found that odd. Why isn't it just putting the whole table in RAM? It's only 5 million leads and there's plenty of room.
Pored over query execution plans. I can see that at this point the indexes seem to be mostly doing their job -- about 90% of the work is happening during the sorting stage.
Considered partitioning the Leads table out to a different physical drive, but we don't have the resources for that, and it seems like it shouldn't be necessary.
In Closing...
Part of me feels like the server should be able to handle this. Five million records is not that many given the power of that server, which is a decent quad core with 16 gigs of RAM. However, I can see how the sorting part is causing millions of rows to be touched just to return a handful.
So what have you done in situations like this? My instinct is that we should maybe slash some functionality, but if there's a way to keep this intact, that will save me a war with the business unit.
Thanks in advance!
Database bottlenecks can frequently be improved by improving your SQL queries. Without knowing what those look like, consider creating an operational data store or a data warehouse that you populate on a scheduled basis.
Sometimes flattening out your complex relational databases is the way to go. It can make queries run significantly faster, and make it a lot easier to optimize your queries, since the model is very flat. That may also make it easier to determine if you need to scale your database server up or out. A capacity and growth analysis may help to make that call.
Transactional/highly normalized databases are not usually as scalable as an ODS or data warehouse.
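As a minimal sketch of that idea, a flattened search/reporting table rebuilt on a schedule (all names below are illustrative):
-- Hypothetical flattened table, repopulated by a scheduled job so the
-- search screen never has to join the normalized transactional tables.
TRUNCATE TABLE dbo.LeadSearch;
INSERT INTO dbo.LeadSearch (LeadID, Name, StatusName, ChannelName, LanguageName, CampaignType)
SELECT l.LeadID, l.Name, s.StatusName, c.ChannelName, lg.LanguageName, ct.CampaignType
FROM dbo.Leads l
JOIN dbo.Statuses s ON s.StatusID = l.StatusID
JOIN dbo.Channels c ON c.ChannelID = l.ChannelID
JOIN dbo.Languages lg ON lg.LanguageID = l.LanguageID
JOIN dbo.CampaignTypes ct ON ct.CampaignTypeID = l.CampaignTypeID;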
Edit: Your ORM may also support optimizations worth looking into, rather than just looking at how to optimize the queries it sends to your database. Perhaps bypassing your ORM altogether for the reports could be one way to have full control over your queries and gain better performance.
Consider how your ORM is creating the queries.
If you're having poor search performance perhaps you could try using stored procedures to return your results and, if necessary, multiple stored procedures specifically tailored to which search criteria are in use.
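A rough sketch of one such tailored procedure (the procedure, table, and parameter names are invented for illustration):
-- Hypothetical: one narrow procedure per common search pattern, so each
-- can get its own covering index and a stable, cheap plan.
CREATE PROCEDURE dbo.SearchLeadsByStatusAndName
    @StatusID int,
    @NamePrefix nvarchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT TOP (70) LeadID, Name, StatusID, ChannelID
    FROM dbo.Leads
    WHERE StatusID = @StatusID
      AND Name LIKE @NamePrefix + '%'
    ORDER BY Name;
END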
Determine which ad-hoc queries will most likely be run, or limit the search criteria with stored procedures. Can you summarize data? Treat this app like a data warehouse.
Create indexes on each column involved in the search to avoid table scans.
Create fragments on expressions.
Periodically reorg the data and update statistics as more leads are loaded.
Put the temporary files created by queries (result sets) in ramdisk.
Consider migrating to a high-performance RDBMS engine like Informix OnLine.
Initiate another thread to start displaying N rows from the result set while the query continues to execute.

Passing whole dataset to stored procedure in MSSQL 2005

How do I pass a dataset object to a stored procedure? The dataset comprises multiple tables and I'll need to be able to access them from within the SQL.
You can use a table-valued parameter to pass a single table in SQL Server 2008: http://msdn.microsoft.com/en-us/library/bb675163.aspx
or
refer to this article and use a SQL CLR procedure to pass a dataset: http://blogs.msdn.com/b/jpapiez/archive/2005/09/26/474059.aspx
It looks like you can do this with SQL Server 2008 or newer (at least with a DataTable). Here are the links:
http://www.eggheadcafe.com/community/aspnet/10/10138579/passing-dataset-to-stored-procedure.aspx
http://www.sqlteam.com/article/sql-server-2008-table-valued-parameters
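A minimal sketch of the table-valued parameter pattern those links describe (the type, procedure, and column names are placeholders):
-- SQL Server 2008+: define a table type, then take it as a READONLY parameter.
CREATE TYPE dbo.IdList AS TABLE (Id int NOT NULL PRIMARY KEY);
GO
CREATE PROCEDURE dbo.ProcessIds
    @Ids dbo.IdList READONLY
AS
BEGIN
    SET NOCOUNT ON;
    SELECT Id FROM @Ids;   -- replace with the real processing
END
GO
From ADO.NET you can then pass a DataTable as a parameter with SqlDbType.Structured and TypeName = 'dbo.IdList'.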
As the article from MusiGenesis' answer states:
In SQL Server 2005 and earlier, it is not possible to pass a table variable as a parameter to a stored procedure. When multiple rows of data need to be sent to SQL Server, developers either had to send one row at a time or come up with other workarounds to meet requirements. While a VB.Net developer recently informed me that there is a SQLBulkCopy object available in .Net to send multiple rows of data to SQL Server at once, the data still cannot be passed to a stored proc.
At the risk of stating the obvious, here are two more approaches.
Parametrize your processing procedure
You might re-evaluate whether you truly need to pass a general table variable. While sometimes this cannot be avoided, the reason it is a late addition to MS SQL Server's feature set is partly that you can usually get around it by structuring your stored procedures and the flow of your data processing.
If you are able to 'parametrize' your process, then you should be able to let stored procedures retrieve the full dataset based on a limited number of parameters.
This will make the process less flexible, but it will also make it more controlled, which is not a bad thing. (Similarly, a database that interfaces with applications only at the level of stored procedures is more robust; by limiting flexibility, this approach reduces the number of possible cases and consequently the number of possibly unhandled cases -- read: security holes and general bugs.)
Temp tables
Besides the above, there's always the approach with temp tables, which can be more or less complicated depending on the scope of sharing you need on the data (sharing can be between DB users, app users, connections, processes, etc.).
A nice side effect is that such an approach allows persistence of the process (which brings you closer to having undo, redo, and the ability to continue interrupted work).
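For example, a sketch of the temp-table variant (names are hypothetical; a local #table is visible to procedures called on the same connection, while a global ##table widens sharing to other connections):
-- Caller materializes the rows, then the procedure reads them directly.
CREATE TABLE #IncomingRows (Id int NOT NULL, Amount decimal(18,2) NOT NULL);
INSERT INTO #IncomingRows (Id, Amount) VALUES (1, 10.00);
INSERT INTO #IncomingRows (Id, Amount) VALUES (2, 25.50);   -- in practice, bulk-loaded from the app
EXEC dbo.ProcessIncomingRows;   -- hypothetical proc that selects from #IncomingRows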

Simulate records in database without entering any

I've nearly finished the development of a project and would like to test its performance, especially the database query calls. I'm using Linq to SQL to search via usernames, but I've only got around 10 'users' in my database, so I can't really get a decent speed reading. How can I simulate thousands/millions of users in the database without actually creating new records? I've read about Selenium, but it seems that is good for repeat actions (simulating concurrent users?). Are there any other tools I should look into, or are there any options in VS 2008 (Professional Edition)?
Thanks
You can "trick" SQL Server into thinking there are more records than there actually are in a table using the approach outlined in this article. See the section on False SQL Server Statistics
e.g.
UPDATE STATISTICS TableName WITH ROWCOUNT=100000
will create statistics for the table as if it had 100,000 rows in it. You can then see what effect this has on the execution plan. But note this is undocumented functionality, so it may give quirky behaviour.
You could just populate your table with sample data. There are various tools available to help out with that, like Red Gate's SQL Data Generator. I prefer actually having large data volumes, as I think that is what will be more accurate.
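If you would rather generate the rows yourself, a quick-and-dirty sketch in T-SQL (the table and column names are hypothetical):
-- Hypothetical: insert a million throwaway users by cross-joining a large
-- system catalog view against itself to get a big row source.
INSERT INTO dbo.Users (UserName)
SELECT TOP (1000000)
       'user_' + CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS varchar(10))
FROM sys.all_objects a
CROSS JOIN sys.all_objects b;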

How to improve asp.net AJAX autocomplete performance

My web site has city, state, and zip code autocomplete.
If the user types 3 characters of a city into the textbox, the top 20 cities starting with those characters are shown.
As of now, the Autocomplete method in our application queries a SQL 2005 database which has around 900,000 records related to city, state, and zip.
But the response time to show the cities list seems to be very slow.
Hence, for performance optimization, is it a good idea to store the location data in a Lucene index, or maybe in Active Directory, and then pull the data from there?
Which one will be faster... Lucene or Active Directory? And what are the pros and cons of each? Any suggestions please?
Thanks a bunch!
Taking a nuclear option (like changing backing data stores) probably shouldn't be the first option. Rather, you need to look at why the query is performing so slowly. I'd start with looking at the query performance in SQL Profiler and the execution plan in Sql Management Studio and see if I am missing anything stupid like an index. After you cover that angle, then check the web layer and ensure that you are not sending inordinate amounts of data or otherwise tossing a spanner in the works. Once you have established that you aren't killing yourself in the db or on the wire, then it is time to think about re-engineering.
On a side note, my money would be on Sql Server handling the data end of this task better than either of those options. Lucene is better suited for full-text searches and AD is a poor database at best.
I would cache the data into a separate table. Depending on how fresh you need that data to be, you can rebuild it as often as necessary.
--Create the table
SELECT DISTINCT city, state, zip INTO myCacheTable FROM theRealTable
--Rebuild the table anytime
TRUNCATE TABLE myCacheTable
INSERT INTO myCacheTable (city, state, zip) SELECT DISTINCT city, state, zip FROM theRealTable
Your AJAX calls can access myCacheTable instead, which will have far fewer rows than 900k.
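With an index on the cached column, the autocomplete lookup itself stays a cheap prefix seek (names are illustrative; @prefix stands for the three typed characters):
-- Index the column the autocomplete searches on...
CREATE INDEX IX_myCacheTable_city ON myCacheTable (city);
-- ...so the top-20 prefix lookup becomes a simple index seek.
SELECT TOP 20 city, state, zip
FROM myCacheTable
WHERE city LIKE @prefix + '%'
ORDER BY city;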
Adding to what Wyatt said, you first need to figure out which area is slow. Is the SQL query slow, or is the network connection between the browser and the server slow? Or is there something else?
And I completely agree with Wyatt that SQL Server is much more suitable for this task than Lucene or Active Directory.

Profiling SQL Server and/or ASP.NET

How would one go about profiling a few queries that are being run from an ASP.NET application? There is some software where I work that runs extremely slow because of the database (I think). The tables have indexes but it still drags because it's working with so much data. How can I profile to see where I can make a few minor improvements that will hopefully lead to larger speed improvements?
Edit: I'd like to add that the webserver likes to timeout during these long queries.
Sql Server has some excellent tools to help you with this situation. These tools are built into Management Studio (which used to be called Enterprise Manager + Query Analyzer).
Use SQL Profiler to show you the actual queries coming from the web application.
Copy each of the problem queries out (the ones that eat up lots of CPU time or IO). Run the queries with "Display Actual Execution Plan". Hopefully you will see some obvious index that is missing.
You can also run the tuning wizard (the button is right next to "Display Actual Execution Plan"). It will run the query and make suggestions.
Usually, if you already have indexes and queries are still running slow, you will need to re-write the queries in a different way.
Keeping all of your queries in stored procedures makes this job much easier.
To profile SQL Server, use the SQL Profiler.
And you can use ANTS Profiler from Red Gate to profile your code.
Another .NET profiler which plays nicely with ASP.NET is dotTrace. I have personally used it and found lots of bottlenecks in my code.
I believe you have the answer you need to profile the queries. However, this is the easiest part of performance tuning. Once you know it is the queries and not the network or the app, how do you find and fix the problem?
Performance tuning is a complex thing. But there are some places to look first. You say you are returning lots of data? Are you returning more data than you need? Are you really returning only the columns and records you need? Returning 100 columns by using SELECT * can be much slower than returning the 5 columns you are actually using.
Are your indexes and statistics up to date? Look up how to update statistics and re-index in BOL if you haven't done this in a while. Do you have indexes on all the join fields? How about the fields in the WHERE clause?
Have you used a cursor? Have you used subqueries? How about UNION -- if you are using it, can it be changed to UNION ALL?
Are your queries sargable? (Google the term if it's unfamiliar.)
Are you using distinct when you could use group by?
Are you getting locks?
There are many other things to look at; these are just a starting place.
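To illustrate the sargability point: the first query below defeats an index on the date column, while the second can seek on it (table and column names are made up):
-- Non-sargable: wrapping the column in a function forces a scan.
SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2010;
-- Sargable: a bare column compared against a range can use an index seek.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20100101' AND OrderDate < '20110101';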
If there is a particular query or stored procedure I want to tune, I have found turning on statistics before the query to be very useful:
SET STATISTICS TIME ON
SET STATISTICS IO ON
When you turn on statistics in Query Analyzer, the statistics are shown in the Messages tab of the Results pane.
IO statistics have been particularly useful for me, because it lets me know if I might need an index. If I see a high read count from the IO statistics, I might try adding different indexes to the affected tables. As I try an index, I run the query again to see if the read count has gone down. After a few iterations, I can usually find the best index(es) for the tables involved.
Here are links to MSDN for these statistics commands:
SET STATISTICS TIME
SET STATISTICS IO
