I have a table with columns date, points and categoryType.
How do I write an SQLite query to get records from the table between two given dates, where the points value is greater than zero (> 0)? This is what I tried:
Select * from tableName
where (Select points FROM tableName
WHERE date(DateValue) BETWEEN date('2016-12-26') AND date('2016-12-27')
> 0)
A subquery would make sense only if it were correlated.
Just use a simple query, and combine multiple conditions with AND:
SELECT *
FROM MyTable
WHERE date(DateValue) BETWEEN '2016-12-26' AND '2016-12-27'
AND points > 0;
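For contrast, a correlated subquery references columns of the outer query. A minimal sketch of what that would look like here (hypothetical, reusing the question's categoryType column):
SELECT *
FROM MyTable AS t
WHERE date(t.DateValue) BETWEEN '2016-12-26' AND '2016-12-27'
  AND EXISTS (SELECT 1
              FROM MyTable AS other
              WHERE other.categoryType = t.categoryType  -- subquery sees the outer row
                AND other.points > 0);
The original subquery never refers to the outer row, so wrapping it in WHERE adds nothing that a plain condition wouldn't.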
I'm trying to write a query in Teradata but I'm not sure how to do it; my table looks like this:
col1: text (account_number)
col2: text (secondary account number)
col3: text (Primary_cust)
The business requirements are:
Group records by account number.
If there is only one record for an account, then keep that record.
If there are multiple records for an account number, then:
(1) if only one record has Primary_CUST = 'Y', keep that one;
(2) if multiple records have Primary_CUST = 'Y', keep the one with the lowest SCDRY_ACCT_NBR;
(3) if no records have Primary_CUST = 'Y', keep the one with the lowest SCDRY_ACCT_NBR.
I know I need a CASE statement and I'm able to write the first requirement, but I'm not sure about the second. Any help would be greatly appreciated.
You just have to think about how to order the rows so that the row you want ends up on top, which seems to be like this:
SELECT * FROM tab
QUALIFY
Row_Number()
Over (PARTITION BY account_number -- for each account
ORDER BY Primary_CUST DESC -- 'Y' before 'N' (assuming it's a Y/N column)
,SCDRY_ACCT_NBR -- lowest number
) = 1 -- return the top row
Of course QUALIFY is proprietary Teradata syntax, if you need to do this on Oracle you have to wrap it in a Derived Table:
SELECT *
FROM
(
SELECT t.*,
Row_Number()
Over (PARTITION BY account_number -- for each account
ORDER BY Primary_CUST DESC -- 'Y' before 'N' (assuming it's a Y/N column)
,SCDRY_ACCT_NBR) AS rn -- lowest number
FROM tab
) AS dt
WHERE rn = 1 -- return the top row
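To sanity-check the ordering logic on made-up rows, here is a sketch using PostgreSQL-style VALUES syntax (the rows are hypothetical; Teradata itself would need a real table):
SELECT *
FROM
(
  SELECT t.*,
         Row_Number()
         Over (PARTITION BY account_number
               ORDER BY Primary_CUST DESC, SCDRY_ACCT_NBR) AS rn
  FROM (VALUES ('A1', '002', 'N'),
               ('A1', '001', 'Y'),
               ('A1', '003', 'Y')) AS t(account_number, SCDRY_ACCT_NBR, Primary_CUST)
) AS dt
WHERE rn = 1;
-- keeps ('A1', '001', 'Y'): the 'Y' rows sort first under DESC, and among those the lowest SCDRY_ACCT_NBR wins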
I have the following pyDAL table:
from datetime import datetime

market = db.define_table(
'market',
Field('name'),
Field('ask', type='double'),
Field('timestamp', type='datetime', default=datetime.now)
)
I would like to use the expression language to execute the following SQL:
SELECT * FROM market
GROUP BY name
HAVING COUNT(name) > 1
ORDER BY timestamp DESC
I know how to do the ORDER BY and the GROUP BY:
db().select(
db.market.ALL,
orderby=~db.market.timestamp,
groupby=db.market.name
)
but I do not know how to do a count within a having clause even after reading the section in the web2py book on the HAVING clause.
The count() function returns an expression which can be used both as a field in the select query, and to build an argument to the query's having parameter. The Grouping and counting section from the web2py manual has a few hints on this topic.
The following code will give the desired result. The row objects will hold both the market objects and their respective row counts.
count = db.market.name.count()
rows = db().select(
db.market.ALL,
count,
groupby=db.market.name,
orderby=~db.market.timestamp,
having=(count > 1)
)
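For reference, the query above is intended to correspond to SQL along these lines (a sketch; the exact statement pyDAL generates may differ in naming and quoting):
SELECT market.*, COUNT(market.name)
FROM market
GROUP BY market.name
HAVING COUNT(market.name) > 1
ORDER BY market.timestamp DESC;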
I have the following SQLite database.
I expect 3 rows to be returned if I run the following query.
SELECT name, sum(heart) FROM test_table;
However, even though I am not using GROUP BY, only 1 row is being returned.
C:\Users\yan-cheng.cheok\Desktop>sqlite3.exe
SQLite version 3.7.13 2012-06-11 02:05:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .restore abc
sqlite> SELECT name, sum(heart) FROM test_table;
Record3|102
I am expecting result :
Record1|102
Record2|102
Record3|102
In conventional SQL, if I do not use GROUP BY, every individual row is returned.
http://www.w3schools.com/sql/sql_groupby.asp
Is there anything I can do to make all 3 rows be returned?
Try this; you can use a cross join:
SELECT a.name, b.totalHeart
FROM test_table a,
(
SELECT SUM(heart) totalHeart
FROM test_table
) b
This behaviour is documented:
If the SELECT statement is an aggregate query without a GROUP BY clause, then each aggregate expression in the result-set is evaluated once across the entire dataset. Each non-aggregate expression in the result-set is evaluated once for an arbitrarily selected row of the dataset. The same arbitrarily selected row is used for each non-aggregate expression. Or, if the dataset contains zero rows, then each non-aggregate expression is evaluated against a row consisting entirely of NULL values.
You could do this:
sqlite> select a.name, b.s from abc as a, (select sum(heart) as s from abc) as b;
Record1|102
Record2|102
Record3|102
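On SQLite 3.25 or newer, a window function avoids the self-join entirely (an alternative sketch, not part of the answers above):
sqlite> SELECT name, SUM(heart) OVER () AS totalHeart FROM test_table;
Record1|102
Record2|102
Record3|102
SUM(heart) OVER () computes the total over the whole result set while still returning one output row per input row.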
I have a sqlite table with the following schema:
CREATE TABLE foo (bar VARCHAR)
I'm using this table as storage for a list of strings.
How do I select a random row from this table?
Have a look at Selecting a Random Row from an SQLite Table
SELECT * FROM table ORDER BY RANDOM() LIMIT 1;
The following solutions are much faster than anktastic's (the count(*) costs a lot, but if you can cache it, then the difference shouldn't be that big), which itself is much faster than "order by random()" when you have a large number of rows, although they have a few drawbacks.
If your rowids are rather densely packed (i.e., few deletions), then you can do the following (using (select max(rowid) from foo)+1 instead of max(rowid)+1 gives better performance, as explained in the comments):
select * from foo where rowid = (abs(random()) % (select (select max(rowid) from foo)+1));
If you have holes, you will sometimes try to select a non-existent rowid, and the select will return an empty result set. If this is not acceptable, you can provide a default value like this:
select * from foo where rowid = (abs(random()) % (select (select max(rowid) from foo)+1)) or rowid = (select max(rowid) from foo) order by rowid limit 1;
This second solution isn't perfect: the probability distribution is skewed toward the last row (the one with the highest rowid), but if you often add rows to the table, it will become a moving target and the distribution should be much better.
Yet another solution: if you often select random rows from a table with lots of holes, you might want to create a helper table that holds the ids of the original table packed together without holes:
create table random_foo(foo_id);
Then, periodically, re-fill the table random_foo:
delete from random_foo;
insert into random_foo select rowid from foo;
And to select a random row, you can use my first method (there are no holes here). Of course, this last method has some concurrency problems, but re-building random_foo is a maintenance operation that's not likely to happen very often.
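Putting the pieces together, the selection step could look like this (a sketch, assuming random_foo was just rebuilt and is therefore hole-free):
select foo.*
from foo
join random_foo on foo.rowid = random_foo.foo_id
where random_foo.rowid = (abs(random()) % (select max(rowid) from random_foo)) + 1;
-- (abs(random()) % max) + 1 picks a rowid uniformly in [1, max]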
Yet, yet another way, which I recently found on a mailing list, is to put a trigger on delete that moves the row with the biggest rowid into the rowid of the just-deleted row, so that no holes are left.
Lastly, note that the behavior of rowid and of an INTEGER PRIMARY KEY AUTOINCREMENT is not identical (with rowid, when a new row is inserted, max(rowid)+1 is chosen, whereas it is highest-value-ever-seen+1 for an autoincrement primary key), so the last solution won't work with an autoincrement in random_foo, but the other methods will.
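A quick demonstration of that difference (a scratch sketch; plain_t and auto_t are made-up tables):
create table plain_t(x);
create table auto_t(id integer primary key autoincrement, x);
insert into plain_t(x) values (1), (2);
insert into auto_t(x) values (1), (2);
delete from plain_t where rowid = 2;
delete from auto_t where id = 2;
insert into plain_t(x) values (3); -- gets rowid 2 again (max(rowid)+1)
insert into auto_t(x) values (3);  -- gets id 3 (highest value ever seen, plus 1)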
You need to put "order by RANDOM()" in your query.
Example:
select * from quest order by RANDOM();
Let's see a complete example.
Create a table:
CREATE TABLE quest (
id INTEGER PRIMARY KEY AUTOINCREMENT,
quest TEXT NOT NULL,
resp_id INTEGER NOT NULL
);
Inserting some values:
insert into quest(quest, resp_id) values ('1024/4',6), ('256/2',12), ('128/1',24);
A default select:
select * from quest;
| id | quest  | resp_id |
|  1 | 1024/4 |       6 |
|  2 | 256/2  |      12 |
|  3 | 128/1  |      24 |
--
A select random:
select * from quest order by RANDOM();
| id | quest  | resp_id |
|  3 | 128/1  |      24 |
|  1 | 1024/4 |       6 |
|  2 | 256/2  |      12 |
-- Each time you select, the order will be different.
If you want to return only one row
select * from quest order by RANDOM() LIMIT 1;
| id | quest  | resp_id |
|  2 | 256/2  |      12 |
-- Each time you select, the returned row will be different.
What about:
SELECT COUNT(*) AS n FROM foo;
then choose a random number m in [0, n) and
SELECT * FROM foo LIMIT 1 OFFSET m;
You can even save the first number (n) somewhere and only update it when the database count changes. That way you don't have to do the SELECT COUNT every time.
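As a concrete sketch of the two-step approach (n and m here are placeholders; m is chosen by the application, uniformly in [0, n)):
SELECT COUNT(*) AS n FROM foo;       -- e.g. n = 42
-- the application picks m, say m = 17, then:
SELECT * FROM foo LIMIT 1 OFFSET 17; -- returns the 18th row in scan order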
Here is a modification of #ank's solution:
SELECT *
FROM table
LIMIT 1
OFFSET ABS(RANDOM()) % MAX((SELECT COUNT(*) FROM table), 1)
This solution also works for indices with gaps, because we randomize an offset in the range [0, count). MAX is used to handle the case of an empty table.
Here are simple test results on a table with 16k rows:
sqlite> .timer on
sqlite> select count(*) from payment;
16049
Run Time: real 0.000 user 0.000140 sys 0.000117
sqlite> select payment_id from payment limit 1 offset abs(random()) % (select count(*) from payment);
14746
Run Time: real 0.002 user 0.000899 sys 0.000132
sqlite> select payment_id from payment limit 1 offset abs(random()) % (select count(*) from payment);
12486
Run Time: real 0.001 user 0.000952 sys 0.000103
sqlite> select payment_id from payment order by random() limit 1;
3134
Run Time: real 0.015 user 0.014022 sys 0.000309
sqlite> select payment_id from payment order by random() limit 1;
9407
Run Time: real 0.018 user 0.013757 sys 0.000208
SELECT bar
FROM foo
ORDER BY Random()
LIMIT 1
I came up with the following solution for large sqlite3 databases:
SELECT * FROM foo WHERE rowid = abs(random()) % (SELECT max(rowid) FROM foo) + 1;
The abs(X) function returns the absolute value of the numeric argument
X.
The random() function returns a pseudo-random integer between
-9223372036854775808 and +9223372036854775807.
The operator % outputs the integer value of its left operand modulo its right operand.
Finally, you add +1 to prevent the computed rowid from being equal to 0.
I have an SQLite database with a certain column of type "double".
I want to get the row whose value in this column is closest to a specified one.
For example, in my table I have:
id: 1; value: 47
id: 2; value: 56
id: 3; value: 51
And I want to get a row that has its value closest to 50. So I want to receive id: 3 (value = 51).
How can I achieve this goal?
Thanks.
Using an order-by, SQLite will scan the entire table and load all the values into a temporary b-tree to order them, making any index useless. This will be very slow and use a lot of memory on large tables:
explain query plan select * from 'table' order by abs(10 - value) limit 1;
0|0|0|SCAN TABLE table
0|0|0|USE TEMP B-TREE FOR ORDER BY
You can get the next lower or higher value using the index like this:
select min(value) from 'table' where value >= N;
select max(value) from 'table' where value <= N;
And you can use union to get both from a single query:
explain query plan
select min(value) from 'table' where value >= 10
union select max(value) from 'table' where value <= 10;
1|0|0|SEARCH TABLE table USING COVERING INDEX value_index (value>?)
2|0|0|SEARCH TABLE table USING COVERING INDEX value_index (value<?)
0|0|0|COMPOUND SUBQUERIES 1 AND 2 USING TEMP B-TREE (UNION)
This will be pretty fast even on large tables. You could simply load both values and evaluate them in your code, or use even more sql to select one in various ways:
explain query plan select v from
( select min(value) as v from 'table' where value >= 10
union select max(value) as v from 'table' where value <= 10)
order by abs(10-v) limit 1;
2|0|0|SEARCH TABLE table USING COVERING INDEX value_index (value>?)
3|0|0|SEARCH TABLE table USING COVERING INDEX value_index (value<?)
1|0|0|COMPOUND SUBQUERIES 2 AND 3 USING TEMP B-TREE (UNION)
0|0|0|SCAN SUBQUERY 1
0|0|0|USE TEMP B-TREE FOR ORDER BY
or
explain query plan select 10+v from
( select min(value)-10 as v from 'table' where value >= 10
union select max(value)-10 as v from 'table' where value <= 10)
group by v having max(abs(v)) limit 1;
2|0|0|SEARCH TABLE table USING COVERING INDEX value_index (value>?)
3|0|0|SEARCH TABLE table USING COVERING INDEX value_index (value<?)
1|0|0|COMPOUND SUBQUERIES 2 AND 3 USING TEMP B-TREE (UNION)
0|0|0|SCAN SUBQUERY 1
0|0|0|USE TEMP B-TREE FOR GROUP BY
Since you're interested in values both arbitrarily greater and less than the target, you can't avoid doing two index searches. If you know that the target is within a small range, though, you could use "between" to only hit the index once:
explain query plan select * from 'table' where value between 9 and 11 order by abs(10-value) limit 1;
0|0|0|SEARCH TABLE table USING COVERING INDEX value_index (value>? AND value<?)
0|0|0|USE TEMP B-TREE FOR ORDER BY
This will be around 2x faster than the union query above when it only evaluates 1-2 values, but if you start having to load more data it will quickly become slower.
This should work:
SELECT * FROM table
ORDER BY ABS(? - value)
LIMIT 1
Where ? represents the value you want to compare against.
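With the question's sample data (values 47, 56, 51) and 50 bound to the placeholder, the query becomes:
SELECT * FROM table
ORDER BY ABS(50 - value)
LIMIT 1;
It returns id 3 (value 51), since |50 - 51| = 1 is the smallest difference.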