Is it possible to run a calculation when presenting a number from a recordset to a webpage?
I want to deduct 10% off the number in the database.
I've tried the below:
<%=((Recordset1.Fields.Item("FullCover").Value)*90/100)%>
But it just returns
-1.#IND
Your code is effectively returning Not a Number (NaN).
The answer is:
<%=((parseInt(Recordset1.Fields.Item("FullCover").Value, 10))*90/100)%>
It parses the item out of the recordset into an integer and then performs the calculation.
Also, a simple * 0.90 would do the trick; it saves one operation.
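Combining the two suggestions, the whole expression would become:
<%= parseInt(Recordset1.Fields.Item("FullCover").Value, 10) * 0.90 %>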
Related
I'm currently optimizing our data warehouse and the processes that use it, and I'm looking for some suggestions.
The problem is that I'm not sure about the calculations on retrieved data.
To make things clearer, for example, we have the following data structure:
id : 1
param: static_value
param2: static_value
And let's say we have about 50 million entries with this structure.
Also, let's assume that we query this data set about 30 times per minute, and each query returns at least 10k entries.
So, in short, we have these stats:
Data set: 50 million entries.
Access frequency: ~30 queries per minute.
Result size: ~10k entries per query.
On every query, I have to go through every entry in the resulting data set and apply some calculations to it, which produces a field (for example param3) with a dynamic value. For example:
Query 2 (2k results) and one of its entries:
id : 2
param: static_value_2
param2: static_value_2
param3: dynamic_value_2
Query 3 (10k results) and one of its entries:
id : 3
param: static_value_3
param2: static_value_3
param3: dynamic_value_3
And so on.
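In code, the per-query work described above looks roughly like the following sketch (C#; all names such as Entry and ComputeParam3 are hypothetical placeholders, not from a real implementation):

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: param3 is derived on the fly from the stored
// fields plus an input that differs on every query.
class Entry
{
    public int Id;
    public string Param;
    public string Param2;
    public double Param3;   // computed per query, never stored
}

class Sketch
{
    static double ComputeParam3(Entry e, double dynamicInput) =>
        (e.Param.Length + e.Param2.Length) * dynamicInput;   // placeholder formula

    static void Main()
    {
        List<Entry> results = Enumerable.Range(1, 10000)     // ~10k entries per query
            .Select(i => new Entry { Id = i, Param = "static_value_" + i, Param2 = "static_value_" + i })
            .ToList();

        double dynamicInput = 0.9;                           // changes on every query
        foreach (var e in results)
            e.Param3 = ComputeParam3(e, dynamicInput);
    }
}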
The problem is that I can't precompute the value of param3 before the query runs, because the calculations use many dynamic values.
Main question:
Are there any guidelines, practices, or even technologies for optimizing this kind of problem or implementing this kind of solution?
Thanks for any information.
Update 1:
The field param3 is calculated on every query for every entry in the result set; the calculated value is not stored anywhere, it is just computed on each query. I can't store this value because it's dynamic and depends on many variables, so it can't be persisted as a static value.
I guess it's not good practice to have such an implementation?
Following is the query that I use for getting a fixed number of records from a database with millions of records:
select * from myTable LIMIT 100 OFFSET 0
What I observed is, if the offset is very high like say 90000, then it takes more time for the query to execute. Following is the time difference between 2 queries with different offsets:
select * from myTable LIMIT 100 OFFSET 0 //Execution Time is less than 1sec
select * from myTable LIMIT 100 OFFSET 95000 //Execution Time is almost 15secs
Can anyone suggest how to optimize this query? I mean, the query execution time should be equally fast for any number of records I wish to retrieve, from any OFFSET.
Newly added:
The actual scenario is that I have a database with more than 1 million records. But since it's an embedded device, I just can't do "select * from myTable" and then fetch all the records from the query; my device crashes. Instead, I keep fetching records batch by batch (batch size = 100 or 1000 records) using the query mentioned above. But as I mentioned, it becomes slow as the offset increases. So my ultimate aim is to read all the records from the database, and since I can't fetch them all in a single execution, I need some other efficient way to achieve this.
As JvdBerg said, indexes are not used in LIMIT/OFFSET.
Simply adding 'ORDER BY indexed_field' will not help either.
To speed up pagination you should avoid LIMIT/OFFSET and use a WHERE clause instead. For example, if your primary key field is named 'id' and has no gaps, then your code above can be rewritten like this:
SELECT * FROM myTable WHERE id>=0 AND id<100 //very fast!
SELECT * FROM myTable WHERE id>=95000 AND id<95100 //as fast as previous line!
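If the id column can have gaps, the same idea still works as a keyset-style loop that remembers the last id seen instead of computing fixed ranges. A minimal C# sketch, assuming the Microsoft.Data.Sqlite package and the myTable/id names from the question:

using Microsoft.Data.Sqlite;

// Read the whole table in batches of 100 without ever using OFFSET:
// each batch seeks to "id > lastId" via the primary-key index, so the
// cost per batch stays constant no matter how deep into the table we are.
using var conn = new SqliteConnection("Data Source=mydb.sqlite");
conn.Open();

long lastId = long.MinValue;
while (true)
{
    using var cmd = conn.CreateCommand();
    cmd.CommandText = "SELECT * FROM myTable WHERE id > $last ORDER BY id LIMIT 100";
    cmd.Parameters.AddWithValue("$last", lastId);

    int rowsInBatch = 0;
    using (var reader = cmd.ExecuteReader())
    {
        int idCol = reader.GetOrdinal("id");
        while (reader.Read())
        {
            rowsInBatch++;
            lastId = reader.GetInt64(idCol);   // highest id seen so far
            // ... process the current row here ...
        }
    }
    if (rowsInBatch == 0) break;               // table exhausted
}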
By doing a query with an offset of 95000, all the previous 95000 records are processed. You should create an index on the table, and use that for selecting records.
As #user318750 said, if you know you have a contiguous index, you can simply use
select * from Table where index >= %start and index < %(start+size)
However, those cases are rare. If you don't want to rely on that assumption, use a sub-query, for example one using rowid, which is always indexed:
select * from Table where rowid in (
    select rowid from Table limit %size offset %start)
This speeds things up especially if you have "fat" rows (e.g. that contain blobs).
If maintaining the record order is important (it usually isn't), you need to order the indices first:
select * from Table where rowid in (
    select rowid from Table order by rowid limit %size offset %start)
The same trick works for jumping straight to a single row deep in the table: the sub-query only scans rowids rather than full rows to find the rowid at the desired offset, and the outer query then fetches just that one row:
select * from data where rowid = (select rowid from data limit 1 offset 999999);
With SQLite, you don't need to get all rows returned at once in one big array; you can get called back for every row. This way, you can process the results as they come in, which should address both your crashing and performance issues.
I guess you're not using C as you would already be using a callback, but this technique should be available in any other language.
JavaScript example (from https://www.npmjs.com/package/sqlite3):
db.each("SELECT rowid AS id, info FROM lorem", function(err, row) {
    console.log(row.id + ": " + row.info);
});
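For comparison, the same row-at-a-time pattern in C#, sketched with the Microsoft.Data.Sqlite package:

using System;
using Microsoft.Data.Sqlite;

// A data reader streams rows one by one instead of materializing the
// whole result set, so memory use stays flat on a constrained device.
using var conn = new SqliteConnection("Data Source=lorem.db");
conn.Open();
using var cmd = conn.CreateCommand();
cmd.CommandText = "SELECT rowid AS id, info FROM lorem";
using var reader = cmd.ExecuteReader();
while (reader.Read())
    Console.WriteLine(reader.GetInt64(0) + ": " + reader.GetString(1));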
I am currently working on a classic ASP application that extracts data from an Excel sheet. Before the data is saved to the server, I first validate that the required data is populated.
Do Until myRecordSet.EOF
    ' Do processing here
    If Len(myRecordSet.Fields(0)) > 0 Then
        ' Something has to be done inside
    End If
    myRecordSet.MoveNext
Loop
I was able to handle this accordingly, although I noticed an issue with the EOF property. Suppose my Excel sheet has 50 rows populated, and the user added 5 more rows but deleted them afterwards: the EOF property then points to the end of the additional 5 rows (instead of hitting EOF at row 50, it hits it at row 55). It would be tedious to check the length of every column just to determine whether the current row is empty. Any leads to make the check easier?
If you have an ID field of some description, or a reasonably small number of fields (columns), it should be possible to SELECT using a WHERE clause; even with a large number of columns, it may be possible to treat certain fields as essential. Furthermore, it should not take all that long to run two statements, one with WHERE and one without, and compare the record counts.
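If you have to stay with the recordset loop, a small helper that walks the Fields collection saves writing a check per column. A rough sketch in the same classic ASP/VBScript style as the question:

Function IsRowEmpty(rs)
    Dim fld
    IsRowEmpty = True
    For Each fld In rs.Fields
        ' "" & fld.Value turns a Null value into an empty string first
        If Len(Trim("" & fld.Value)) > 0 Then
            IsRowEmpty = False
            Exit For
        End If
    Next
End Function

Do Until myRecordSet.EOF
    If Not IsRowEmpty(myRecordSet) Then
        ' Process the populated row here
    End If
    myRecordSet.MoveNext
Loop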
I have a SQL stored proc that returns a dataset to an ASP.NET 3.5 application. One of the columns in the dataset is called Attend and is a nullable bit column in the SQL table. The SELECT for that column is this:
CASE WHEN Attend IS NULL THEN -1 ELSE Attend END AS Attend
When I execute the SP in Query Analyzer, the row values are returned as they should be: the value of Attend is -1 in some rows, 0 in others, and 1 in others. However, when I debug the C# code and examine the dataset, the Attend column always contains -1.
If I SELECT any other columns or constant values for Attend, the results are always correct. It is only the above SELECT of the bit field that behaves strangely. I suspect the type being bit is causing this, so to test it I instead selected CONVERT(int, Attend), but the behavior is the same.
I have tried using ExecuteDataset to retrieve the data and I have also created a .NET Dataset schema with TableAdapter and DataTable. Still no luck.
Does anyone know what the problem is here?
Like you, I suspect the data type. If you can change the data type of Attend, change it to smallint, which supports negative numbers. If not, try changing the name of the alias from Attend to IsAttending (or whatever suits the column).
Also, you can make your query more concise by using this instead of CASE:
ISNULL(Attend, -1) AS Attend
You've suggested that the Attend field is a bit, yet it contains three values (-1, 0, 1). A bit, however, can only hold two values: often (-1, 0) when converted to an integer, but possibly (0, 1), depending on whether the BIT is considered signed (two's complement) or unsigned (one's complement).
If your client (the ASP code) is converting all values of that field to a BIT type, then both -1 and 1 will likely show up as the same value. So I would ensure two things:
- The SQL returns an INTEGER
- The Client isn't converting that to a BIT
[Though this doesn't explain the absence of 0's]
One needs to be careful with implicit type conversion. When not specifying types explicitly, double-check the precedence. Or, to be certain, explicitly specify every type...
Just out of interest, what do you get when using the following?
CASE
    WHEN [table].attend IS NULL THEN -2
    WHEN [table].attend = 0 THEN 0
    ELSE 2
END
-- Note: the searched CASE form is needed here; a simple
-- "CASE attend WHEN NULL THEN ..." compares with = and never matches NULL.
a) SqlCommand.ExecuteNonQuery is used for update, insert and delete operations.
Besides the fact that by using ExecuteNonQuery instead of ExecuteReader we automatically know there won’t be any query results returned, are there some other benefits/reasons why ExecuteNonQuery should be used?
b) Similarly, if we want a database operation to return a single value, we should use ExecuteScalar instead of ExecuteNonQuery, where with the latter the result would be returned via a SqlParameter. Is there any particular reason why we should prefer ExecuteScalar over ExecuteNonQuery?
Thanks.
For the 2nd point...
ExecuteScalar returns the first column of the first row, but the entire result set is still passed back to the client. Even if the result set is exactly one row and one column, it's still less efficient to return a result set than to use an output/return parameter.
The same applies to the 1st point too: it is more efficient not to produce a result set at all.
For the 1st point...
You have said:
we automatically know there won’t be any query results returned
Actually, ExecuteNonQuery returns the number of affected rows, which is very important if your logic after executing the query depends on whether the database has been changed or not.
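A minimal sketch of all three options (C#, assuming System.Data.SqlClient; the Customers table and the GetCustomerCount proc are hypothetical):

using System;
using System.Data;
using System.Data.SqlClient;

class Demo
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=Shop;Integrated Security=true"))
        {
            conn.Open();

            // ExecuteNonQuery: no result set; returns the number of affected rows.
            var update = new SqlCommand(
                "UPDATE Customers SET Active = 0 WHERE LastOrder < @cutoff", conn);
            update.Parameters.AddWithValue("@cutoff", DateTime.UtcNow.AddYears(-1));
            int affected = update.ExecuteNonQuery();
            Console.WriteLine(affected + " rows deactivated.");

            // ExecuteScalar: first column of the first row of a result set.
            var count = new SqlCommand("SELECT COUNT(*) FROM Customers", conn);
            int total = (int)count.ExecuteScalar();
            Console.WriteLine(total + " customers.");

            // Output parameter via ExecuteNonQuery: no result set is produced at all,
            // which is the more efficient route described above.
            var proc = new SqlCommand("dbo.GetCustomerCount", conn);
            proc.CommandType = CommandType.StoredProcedure;
            var outTotal = proc.Parameters.Add("@total", SqlDbType.Int);
            outTotal.Direction = ParameterDirection.Output;
            proc.ExecuteNonQuery();
            Console.WriteLine((int)outTotal.Value + " customers (via output parameter).");
        }
    }
}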