WHERE clause and aggregate functions

Deposit(accno,cname,bname,amount)
Question: List the name of the customer having the maximum deposit in branch B1.
Answer:
SELECT cname
FROM Deposit
WHERE amount IN (SELECT MAX(amount)
                 FROM Deposit
                 WHERE bname = 'B1');
Is the answer correct? If not, please point out the mistake and explain the correct answer.
Thank You.

It is correct, but it will probably be slow on a large dataset, especially if you don't have indexes on amount and bname.
I would rather use something like:
SELECT cname FROM Deposit WHERE bname='B1' ORDER BY amount DESC LIMIT 1;
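If the Deposit table is large, a composite index on bname and amount would let SQLite find the maximum amount for branch B1 directly from the index instead of scanning the table. A minimal sketch (the index name is just an assumption):
CREATE INDEX idx_deposit_bname_amount ON Deposit(bname, amount);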

Can't navigate into a specific column in SQLite

The first link shows the problems.
I am very inexperienced with SQLite and need some help.
Thanks in advance!
https://imgur.com/v7BdVe3
These are all the tables, expanded:
https://imgur.com/a/VMOxAuc
https://imgur.com/a/LrDcCBQ
The schema
https://imgur.com/a/bv5KTHN
This is as far as I could get, which is close, but I couldn't figure out how to also limit it to marina 1.
SELECT BOAT_NAME, OWNER.OWNER_NUM, LAST_NAME, FIRST_NAME FROM OWNER INNER JOIN MARINA_SLIP ON OWNER.OWNER_NUM = MARINA_SLIP.OWNER_NUM;
If you know anything about the other questions, feel free to help me with those too. Thanks!
I believe that you want
SELECT BOAT_NAME, OWNER.OWNER_NUM, LAST_NAME, FIRST_NAME
FROM OWNER INNER JOIN MARINA_SLIP ON OWNER.OWNER_NUM = MARINA_SLIP.OWNER_NUM
WHERE MARINA_NUM = 1
ORDER BY BOAT_NAME;
The second question involves multiple joins.
The third question asks you to use the count(*) function. Note that this is an aggregate function: it returns the number of rows in each GROUP as defined by the GROUP BY clause (if there is no GROUP BY clause, there is just the one GROUP, i.e. all resultant rows).
The fourth question progresses a little further, asking you to extend the GROUP BY clause with a HAVING clause (see the link above for GROUP BY).
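As a rough sketch of where questions three and four are heading (the table and column names are assumed from the screenshots, so adjust them to your actual schema, and boat_count is just an illustrative alias):
-- question three: count(*) as an aggregate, one row per GROUP BY group
SELECT OWNER_NUM, COUNT(*) AS boat_count
FROM MARINA_SLIP
GROUP BY OWNER_NUM;

-- question four: the same grouping, extended with a HAVING clause
SELECT OWNER_NUM, COUNT(*) AS boat_count
FROM MARINA_SLIP
GROUP BY OWNER_NUM
HAVING COUNT(*) > 1;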

SQLite, Sorting a database by a timestamp

This is my first time asking a question, so bear with me, and thanks in advance for any response I get.
I am using sqlite3 on a MacBook Pro.
Every record in my database has a timestamp in the form YYYY-MM-DD HH:MM:SS, and I need to sort the entire database by those timestamps. The closest answer I have found is SELECT * FROM Table ORDER BY date(dateColumn) DESC Limit 1 from SQLite Order By Date, but this returns only the most recent date. I would love to be able to apply this, but I am just learning SQLite and can't figure out how to do so.
Change the limit to the number of rows you want:
SELECT * FROM Table ORDER BY dateColumn DESC LIMIT 10000000;
You can figure out how many rows you have using
SELECT count(*) FROM Table;
and give a limit greater than that number. Beware: if you leave the limit off and simply do
SELECT * FROM Table ORDER BY dateColumn DESC;
some client tools cap how many rows they display, depending on your configuration, so you might not see all rows.
When you don't want a limit, omit it.
Please note that it is not necessary to call the date function:
SELECT * FROM MyTable ORDER BY dateColumn;
Just leave off the "Limit 1". The query means "SELECT *" (the star means return all the columns) "FROM Table" (kind of obvious, but from the table name you enter here) "ORDER BY date(dateColumn)" (again, somewhat obvious, but this is the sort order where you put your data column name) "DESC" (backwards sort, leave this off if you want ascending, aka forward, sort) and "Limit 1" (only return the first record in the record set).

How to query data that appears more than twice in a table?

I would like to query customers from the order table, and only customers who order more than twice are qualified.
Which clause or filter should I use to find those customers who order more than twice?
Please help
I am using SQLite.
Best regards
tom
SELECT customer_id FROM orders GROUP BY customer_id HAVING count(*) > 2;
In the future, please provide more information, such as the table definition, or you'll probably get a few downvotes :)
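As a small extension, assuming the same orders table and customer_id column as above, you can also return the count itself:
SELECT customer_id, COUNT(*) AS order_count
FROM orders
GROUP BY customer_id
HAVING COUNT(*) > 2;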

How can I count how many duplicates there are for each distinct value in sqlite?

I have a table:
ref,type
1,red
2,red
3,green
4,blue
5,black
6,black
I want the result of a sqlite query to be:
red,2
green,1
blue,1
black,2
I think the hardest thing to do is find a question that matches my problem; then I am sure the answer is around the corner....
:)
My quick google with the terms "count unique values sqlite3" landed me on this post. However, I was trying to count the overall number of unique values, instead of how many duplicates there are for each category.
From Chris's result table above, I just want to know how many unique colors there are. The correct answer here would be four [4].
This can be done using
SELECT COUNT(DISTINCT type) FROM table;
A quick google gave me this: http://www.mail-archive.com/sqlite-users#sqlite.org/msg38339.html
SELECT type, COUNT(type) FROM table GROUP BY type;
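For a self-contained check, here is a minimal sketch built from the sample data in the question (the table is named colors here, since TABLE itself is an SQL keyword):
CREATE TABLE colors (ref INTEGER, type TEXT);
INSERT INTO colors VALUES
  (1,'red'), (2,'red'), (3,'green'), (4,'blue'), (5,'black'), (6,'black');
-- duplicates per distinct value (2 red, 1 green, 1 blue, 2 black)
SELECT type, COUNT(type) FROM colors GROUP BY type;
-- overall number of distinct values: 4
SELECT COUNT(DISTINCT type) FROM colors;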

Does a multi-column index work for single column selects too?

I've got (for example) an index:
CREATE INDEX someIndex ON orders (customer, date);
Does this index only accelerate queries where both customer and date are used, or does it also accelerate queries on a single column like this one?
SELECT * FROM orders WHERE customer > 33;
I'm using SQLite.
If the answer is yes, why is it possible to create more than one index per table?
Yet another question: How much faster is a combined index compared with two separate indexes when you use both columns in a query?
marc_s has the correct answer to your first question. The first key in a multi-key index can work just like a single-key index, but any subsequent keys will not.
As for how much faster the composite index is: that depends on your data and on how you structure your index and query, but it is usually significant. The indexes essentially allow SQLite to do a binary search on the fields.
Using the example you gave if you ran the query:
SELECT * FROM orders WHERE customer > 33 AND date > 99;
With the composite index, SQLite would first narrow the results with a binary search for customer > 33, then do a binary search within only those results looking for date > 99.
If you did the same query with two separate indexes on customer and date, SQLite would have to binary search the whole table twice, first for the customer and again for the date.
So how much of a speed increase you will see depends on how you structure your index with regard to your query. Ideally, the first field in your index and your query should be the one that eliminates the most possible matches as that will give the greatest speed increase by greatly reducing the amount of work the second search has to do.
For more information see this:
http://www.sqlite.org/optoverview.html
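If you want to confirm what SQLite actually does for a particular query, EXPLAIN QUERY PLAN reports which index (if any) is chosen; a rough sketch with the example schema (the exact output text varies between SQLite versions):
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer INTEGER, date INTEGER);
CREATE INDEX someIndex ON orders (customer, date);
-- the leftmost column of the composite index can satisfy this on its own
EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer > 33;
-- both columns of the index come into play here
EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 33 AND date > 99;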
I'm pretty sure this will work, yes - it does in MS SQL Server anyway.
However, this index doesn't help you if you need to select on just the date, e.g. a date range. In that case, you might need to create a second index on just the date to make those queries more efficient.
Marc
I commonly use combined indexes to sort through data I wish to paginate or request in a streaming fashion.
Assume a customer can make more than one order, customers 0 through 11 exist, and there are several orders per customer, all inserted in random order. I want to sort a query by customer number followed by the date. You should sort on the id field as well, last, to split sets where a customer has several identical dates (even if that may never happen).
sqlite> CREATE INDEX customer_asc_date_asc_index_asc ON orders
(customer ASC, date ASC, id ASC);
Get page 1 of a sorted query (limited to 10 items):
sqlite> SELECT id, customer, date FROM orders
ORDER BY customer ASC, date ASC, id ASC LIMIT 10;
2653|1|1303828585
2520|1|1303828713
2583|1|1303829785
1828|1|1303830446
1756|1|1303830540
1761|1|1303831506
2442|1|1303831705
2523|1|1303833761
2160|1|1303835195
2645|1|1303837524
Get the next page:
sqlite> SELECT id, customer, date FROM orders WHERE
(customer = 1 AND date = 1303837524 and id > 2645) OR
(customer = 1 AND date > 1303837524) OR
(customer > 1)
ORDER BY customer ASC, date ASC, id ASC LIMIT 10;
2515|1|1303837914
2370|1|1303839573
1898|1|1303840317
1546|1|1303842312
1889|1|1303843243
2439|1|1303843699
2167|1|1303849376
1544|1|1303850494
2247|1|1303850869
2108|1|1303853285
And so on...
Having the indexes in place reduces the index scanning you would otherwise get from a query OFFSET coupled with a LIMIT: with OFFSET, the query time gets longer and the drives seek harder the higher the offset goes, and this method eliminates that.
Using this method is advised if you plan on joining data later but only need a limited set of data per request. Join against a subselect as described above to reduce memory overhead for large tables.
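For comparison, the OFFSET-based pagination this approach avoids would look like the following; SQLite still has to step through and discard every skipped row, which is why high offsets get slow:
SELECT id, customer, date FROM orders
ORDER BY customer ASC, date ASC, id ASC
LIMIT 10 OFFSET 10;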
