SQLite: SELECT from grouped and ordered result

I'm new to SQL(ite), so I'm sorry if there is a simple answer I was just too slow to find the right search terms for.
I have 2 tables: one for user information and another holding the points a user achieved. It's a simple one-to-many relation (a user can achieve points multiple times).
table1 contains "userID" and "Username" ...
table2 contains "userID" and "Amount" ...
Now I want to get the highscore rank for a given username.
To get the highscores I did:
SELECT Username, SUM(Amount) AS total FROM table2 JOIN table1 USING (userID) GROUP BY Username ORDER BY total DESC
How can I select a single Username and get its position from this grouped and ordered result? I have no idea what a subselect would have to look like to achieve this. Is it even possible in a single query?

You cannot calculate the position of the user without referencing the other data. SQLite does not have a ranking function, which would be ideal for your use case, nor does it have a row number feature that would serve as an acceptable substitute.
I suppose the closest you could get would be to drop this data into a temp table that has an incrementing ID, but I think that would get very messy.
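A minimal sketch of that temp-table idea, assuming SQLite's behavior of assigning rowids in insertion order (which holds in practice, though it is not a standard SQL guarantee); the table and column names are from the question:
CREATE TEMP TABLE ranked AS
SELECT Username, SUM(Amount) AS total
FROM table2 JOIN table1 USING (userID)
GROUP BY Username
ORDER BY total DESC;

-- rowid doubles as the rank
SELECT rowid AS rank, Username, total FROM ranked WHERE Username = ?;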
It's best to handle this within the application. Get all the users and calculate rank. Cache individual user results as necessary.
Without knowing anything more about the operating context of the app/DB, it's hard to provide a more specific recommendation.

For a specific user, this query gets the total amount:
SELECT SUM(Amount)
FROM Table2
WHERE userID = ?
You then have to count how many users have a total at least as high as that one; since the >= comparison counts the user himself as well, the count is directly his 1-based rank:
SELECT COUNT(*)
FROM table1
WHERE (SELECT SUM(Amount)
       FROM Table2
       WHERE userID = table1.userID)
      >=
      (SELECT SUM(Amount)
       FROM Table2
       WHERE userID = ?);
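Note that SQLite 3.25 and later do include window functions, so on a current version the rank can come from a single query; a sketch using the schema from the question:
SELECT user_rank FROM (
    SELECT Username,
           RANK() OVER (ORDER BY SUM(Amount) DESC) AS user_rank
    FROM table2 JOIN table1 USING (userID)
    GROUP BY Username
)
WHERE Username = ?;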

Related

Sessions by hits.page.pagePath in GA BigQuery tables

I am new to BigQuery, so sorry if this is a noob question! I am interested in breaking out sessions by page path or title. I understand one session can contain multiple paths/titles, so the sum would be greater than total sessions. Essentially, I want to create a 'session id' and do a count distinct of session ids where path like a or b.
It might actually be helpful to start at the very beginning and manually calculate total sessions. I tried to concatenate visit id and full visitor id to create a unique visit id, but apparently that is quite different from sessions. Can someone help enlighten me? Thanks!
I am working with our GA site data. The schema is the standard one in GA exports.
DATA SAMPLE
Let's use an example out of the sample BigQuery (London Helmet) data:
There are 63 sessions in this day:
SELECT count(*) FROM [google.com:analytics-bigquery:LondonCycleHelmet.ga_sessions_20130910]
How many of those sessions are where hits.page.pagePath like /vests% or /helmets%? How many were vests only vs helmets only? Thanks!
Here is an example of how to calculate whether a session had only helmets, only vests, both helmets and vests, or neither:
SELECT
  visitId,
  has_helmets AND has_vests AS both_helmets_and_vests,
  has_helmets AND NOT has_vests AS helmets_only,
  NOT has_helmets AND has_vests AS vests_only,
  NOT has_helmets AND NOT has_vests AS neither_helmets_nor_vests
FROM (
  SELECT
    visitId,
    SOME(hits.page.pagePath LIKE '/helmets%') WITHIN RECORD AS has_helmets,
    SOME(hits.page.pagePath LIKE '/vests%') WITHIN RECORD AS has_vests
  FROM [google.com:analytics-bigquery:LondonCycleHelmet.ga_sessions_20130910]
)
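To answer the "how many were vests only vs helmets only" part, the same inner query can be aggregated; a sketch in the same legacy BigQuery SQL dialect:
SELECT
  SUM(IF(has_helmets AND has_vests, 1, 0)) AS both_helmets_and_vests,
  SUM(IF(has_helmets AND NOT has_vests, 1, 0)) AS helmets_only,
  SUM(IF(NOT has_helmets AND has_vests, 1, 0)) AS vests_only
FROM (
  SELECT
    visitId,
    SOME(hits.page.pagePath LIKE '/helmets%') WITHIN RECORD AS has_helmets,
    SOME(hits.page.pagePath LIKE '/vests%') WITHIN RECORD AS has_vests
  FROM [google.com:analytics-bigquery:LondonCycleHelmet.ga_sessions_20130910]
)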
Way 1, easier but you need to repeat on each field
Obviously you can do something like this:
SELECT COUNT(*) FROM [google.com:analytics-bigquery:LondonCycleHelmet.ga_sessions_20130910] WHERE hits.page.pagePath LIKE '/helmets%'
And then run multiple queries, one per substring (one with '/vests%', one with '/helmets%', etc.).
Way 2, works fine, but not with repeated fields
If you want ONE query that groups by the first part of the path, you can do something like this:
SELECT a, COUNT(*)
FROM (SELECT FIRST(SPLIT(hits.page.pagePath, '/')) AS a
      FROM [google.com:analytics-bigquery:LondonCycleHelmet.ga_sessions_20130910])
GROUP BY a
When I run this on the day's 63 sessions, the group counts add up to a total of 63 :).
Way 3, using a FLATTEN on the table to get each hit individually
Since the "hits" field is repeatable, you would need a FLATTEN in your query :
SELECT a, COUNT(*)
FROM (SELECT FIRST(SPLIT(hits.page.pagePath, '/')) AS a
      FROM FLATTEN([google.com:analytics-bigquery:LondonCycleHelmet.ga_sessions_20130910], hits))
GROUP BY a
The reason you need FLATTEN here is that the "hits" field is repeated. If you don't flatten, the query won't look at ALL the "hits" in each session. Adding FLATTEN makes the query work off a sub-table where each hit is in its own row, so you can query all of them.
If you want it by session instead of by hit (it will actually be both), do something like this:
SELECT b, a, COUNT(*)
FROM (SELECT FIRST(SPLIT(hits.page.pagePath, '/')) AS a, visitId AS b
      FROM FLATTEN([google.com:analytics-bigquery:LondonCycleHelmet.ga_sessions_20130910], hits))
GROUP BY b, a

What is the fastest way of selecting by a list of strings in sqlite database?

I have a database with roughly the following structure:
table1 (name) -< table2 -< table3 (score)
where -< means a 1-to-many relationship. What I need to do is, for every string in a given list, find the linked entry from table3 with the maximum score value. The way I do it now is quite slow, and I wonder if it could be sped up.
How I am doing it now:
SELECT k.score, k.yaw, k.pitch, k.roll, k.kp_number, k.ke_number, k.points, k.elems -- various fields of the third table
FROM File
JOIN FaceDetection AS d ON d.f_id = File.file_id -- joining the second table
JOIN FaceKey AS k ON k.face_det = d.fd_id -- joining the third table
WHERE name = :fld
ORDER BY k.score DESC
I open a transaction, prepare a query with the above text, retrieve the entries I am interested in inside a loop, then commit the transaction. What are better, faster ways?
Indexes can be used for all the columns that are used for lookups or sorting, but a query cannot use more than one index per table.
Check the EXPLAIN QUERY PLAN output to see whether this query does table scans or uses indexes.
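For example (illustrative only; the exact wording of the plan output varies between SQLite versions):
EXPLAIN QUERY PLAN
SELECT k.score
FROM File
JOIN FaceDetection AS d ON d.f_id = File.file_id
JOIN FaceKey AS k ON k.face_det = d.fd_id
WHERE name = :fld;
-- look for "SEARCH ... USING INDEX" (index lookup) rather than "SCAN" (full table scan)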
You are not returning values from any table but FaceKey, so you do not actually need to do a join.
However, rewriting the query as below might or might not help:
SELECT score,
       yaw,
       pitch,
       roll,
       kp_number,
       ke_number,
       points,
       elems
FROM FaceKey
WHERE face_det IN (SELECT fd_id
                   FROM FaceDetection
                   WHERE f_id IN (SELECT file_id
                                  FROM File
                                  WHERE name = :fld))
ORDER BY score DESC
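Either way, the lookups can only be fast if the filtered columns are indexed; a sketch of the supporting indexes (the index names are made up, the columns are those used above):
CREATE INDEX IF NOT EXISTS File_name_idx ON File(name);
CREATE INDEX IF NOT EXISTS FaceDetection_f_id_idx ON FaceDetection(f_id);
CREATE INDEX IF NOT EXISTS FaceKey_face_det_idx ON FaceKey(face_det);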

How to create database table dynamically and insert data selected by query

I'm working on a website where I need to find the rank of a user on the basis of score. Earlier I was calculating the score and rank of a user with this SQL query:
SELECT * FROM (
    SELECT
        usrid,
        ROW_NUMBER()
            OVER (ORDER BY (COUNT(*) + SUM(sup) + SUM(opp) + SUM(visited) * 0.3) DESC) AS rank,
        (COUNT(*) + SUM(sup) + SUM(opp) + SUM(visited) * 0.3) AS score
    FROM [DB_].[dbo].[dsas]
    GROUP BY usrid
) AS cash
WHERE usrid = #userid
Please don't concentrate too much on the query itself; it is only there to explain how I select the data.
Problem: I can't use the above query any more, because every time I need a rank it has to compute ranks over the whole dsas table, and the data in that table is growing day by day, which slows down my website.
What I need is to select the data with the above query and insert it into another table named score. Can we do anything like this?
A better solution is to either include score as a field in your user table or have a separate table for scores. Any time you add new sup, opp, or visited data for a user, also recalculate their score at that time.
Then to get the highest ranking users, you will be able to perform a very simple select statement, ordering by score descending, and only fetching the number of rows you want. It will be very fast.
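A minimal sketch of the separate scores table, assuming SQL Server to match the ROW_NUMBER()/[dbo] syntax above; the score table and the @userid parameter are illustrative:
-- one row per user, updated whenever that user's dsas rows change
CREATE TABLE score (
    usrid INT PRIMARY KEY,
    score DECIMAL(18, 2) NOT NULL DEFAULT 0
);

-- recalculate a single user's score after new sup/opp/visited data arrives
UPDATE score
SET score = (SELECT COUNT(*) + SUM(sup) + SUM(opp) + SUM(visited) * 0.3
             FROM [DB_].[dbo].[dsas]
             WHERE usrid = @userid)
WHERE usrid = @userid;

-- ranking is then a cheap ordered read
SELECT TOP 10 usrid, score
FROM score
ORDER BY score DESC;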

SQL Query Help - Duplicate Removal

I wasn't sure whether to put this in Software or here, so I figured I'd start here. I know this will be a straightforward answer from you SQL geniuses...
I have a table containing contacts that I import on a daily basis. There will be an ASP.NET front end for user interaction. From this table, my intention is to send them all mailers - but only one to each address. So the end result is: a user enters a date (which corresponds to the date imported) and gets back a grid with all the unique addresses associated with that date. I only want to send a mailer to each address once - my original imported list will often contain multiple businesses at the same address.
Table: ContactTable
Fields:
ID, CompanyName, Address, City, State, Zip, Phone
I can use the SELECT DISTINCT clause, but I need all the data associated to it (company name, etc.)
I have over 262,000 records in this table.
If I select a sample date of 1/10/2011, I get 2401 records. SELECT DISTINCT Address for the same date gives me 2092 records. This is workable; I would send those 2092 people a mailer.
Secondly, I'd have to be able to check historically whether a mailer was already sent to an address; I would not want to send another mailer to the same business tomorrow either.
What's my best way?
I would start by creating a table to look up sent mailers.
ID | DateSent
-------------
Every time you send a mailer, insert the ID and the DateTime into this table. When you go to pull the mailers, you can check against it to see whether a mailer has already been sent within your specified mailing time frame. If you have multiple types of mailers, you can extend the table with a mailer type column.
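A minimal sketch of that lookup table in T-SQL (to match the DATEDIFF syntax used below; the SentMailer name is an assumption carried through the queries that follow):
CREATE TABLE SentMailer (
    ID INT NOT NULL,           -- the contact's ID from ContactTable
    DateSent DATETIME NOT NULL
);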
Plain Old SQL
SELECT a.ID, a.CompanyName, a.Address, a.City, a.State, a.Zip, a.Phone
FROM ContactTable a
JOIN (SELECT MIN(ID) AS ID
      FROM ContactTable
      GROUP BY Address, City, State, Zip) b
ON a.ID = b.ID
The sub-query keeps one ID per DISTINCT address (like a temp table of unique addresses), and the join pulls in the rest of the info for that one row.
To add the lookup against your new table, add the following:
SELECT a.ID, a.CompanyName, a.Address, a.City, a.State, a.Zip, a.Phone
FROM ContactTable a
JOIN (SELECT MIN(ID) AS ID
      FROM ContactTable
      GROUP BY Address, City, State, Zip) b
ON a.ID = b.ID
LEFT JOIN SentMailer c
ON a.ID = c.ID
WHERE c.ID IS NULL
   OR DATEDIFF(mm, c.DateSent, GETDATE()) > 12 -- everything that hasn't been sent a mailer within the last year
Edit
Without the data being standardized, it's hard to get quality results. I've found in the past that the more creative I have to get with my queries, the more likely it is a flag for bad table structure or data collection. I still think you should create a lookup table for ID/DateSent to manage the sending time frames.
Edit
Yes, I'm basically looking for the unique address, city, state, zip. I would only require one instance of each address so we can send a mailer to that address. At this point, the company name would not be required.
If this is the case you can simply do the following:
SELECT DISTINCT Address, City, State, Zip, Phone
FROM ContactTable
Keep in mind this won't scrub entries like Main Street vs Main St.
RogueSpear, I work in the address verification (and thus de-duplication) field for SmartyStreets, where we deal with this scenario a lot and tackle the challenge.
If you're getting daily lists from a company and have hundreds of thousands of records, then removing duplicate addresses using stored procedures or mere queries won't be enough to match the varying possibilities of each address. There are services which do this, and I'd point you to CASS-Certified vendors which provide that.
You can flag duplicates in a table using something like CASS-Certified Scrubbing, or you can prevent duplicates at point-of-entry with an API like LiveAddress. Anyway, I'd be happy to personally help you with any other address questions.
I would select, then remove, the duplicates like this:
SELECT a.ID, a.PurgedID, a.CAMPAIGNTYPE, a.COMPANY, a.DBANAME, a.COADDRESS, a.COCITY, a.COSTATE, a.COZIP, a.FIRSTNAME1, a.DIALERPHONENUM, a.Purged
FROM PurgeReportDetail a
WHERE EXISTS (
    SELECT * FROM PurgeReportDetail b
    WHERE b.COADDRESS = a.COADDRESS
      AND b.COCITY = a.COCITY
      AND b.COSTATE = a.COSTATE
      AND b.COZIP = a.COZIP
      AND b.ID <> a.ID
) -- this clause only includes rows that have at least one duplicate on the address columns
AND a.ID NOT IN (
    SELECT TOP 1 c.ID FROM PurgeReportDetail c
    WHERE c.COADDRESS = a.COADDRESS
      AND c.COCITY = a.COCITY
      AND c.COSTATE = a.COSTATE
      AND c.COZIP = a.COZIP
    ORDER BY c.ID -- if you want the *newest* entry to be kept instead, add "DESC" here
) -- this clause excludes the first ID of each matching set, leaving the redundant rows
or something like this.
This keeps the first ID of each redundant address set; just replace the SELECT with DELETE when ready.
EDIT: Of course this will only work on exact matches.
EDIT2: If you wanted to only check where you hadn't sent mailers, you should join both to a table of sent mailers from a specified date range

Does a multi-column index work for single column selects too?

I've got (for example) an index:
CREATE INDEX someIndex ON orders (customer, date);
Does this index only accelerate queries where customer and date are used or does it accelerate queries for a single-column like this too?
SELECT * FROM orders WHERE customer > 33;
I'm using SQLite.
If the answer is yes, why is it possible to create more than one index per table?
Yet another question: how much faster is a combined index compared with two separate indexes when you use both columns in a query?
marc_s has the correct answer to your first question. The first key in a multi-key index can work just like a single-key index, but any subsequent keys will not.
How much faster the composite index is depends on your data and how you structure your index and query, but it is usually significant. The indexes essentially allow SQLite to do a binary search on the fields.
Using the example you gave, if you ran the query:
SELECT * FROM orders WHERE customer > 33 AND date > 99
SQLite would first get all results using a binary search on the entire table where customer > 33. Then it would do a binary search on only those results looking for date > 99.
If you did the same query with two separate indexes on customer and date, SQLite would have to binary search the whole table twice, first for the customer and again for the date.
So how much of a speed increase you will see depends on how you structure your index with regard to your query. Ideally, the first field in your index and your query should be the one that eliminates the most possible matches, as that will give the greatest speed increase by greatly reducing the amount of work the second search has to do.
For more information see this:
http://www.sqlite.org/optoverview.html
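To see whether a particular query can use someIndex, you can ask SQLite for its query plan; a sketch (the output wording varies by SQLite version):
EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer > 33;
-- expected: a SEARCH using someIndex, since customer is the leftmost indexed column
EXPLAIN QUERY PLAN SELECT * FROM orders WHERE date > 99;
-- expected: a full SCAN, since date alone cannot use someIndex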
I'm pretty sure this will work, yes - it does in MS SQL Server anyway.
However, this index doesn't help you if you need to select on just the date, e.g. a date range. In that case, you might need to create a second index on just the date to make those queries more efficient.
Marc
I commonly use combined indexes to sort through data I wish to paginate or request "streamily".
Assume a customer can make more than one order, customers 0 through 11 exist, and there are several orders per customer, all inserted in random order. I want to sort the query by customer number followed by the date. You should also sort by the id field last, to split sets where a customer has several identical dates (even if that may never happen).
sqlite> CREATE INDEX customer_asc_date_asc_index_asc ON orders
(customer ASC, date ASC, id ASC);
Get page 1 of a sorted query (limited to 10 items):
sqlite> SELECT id, customer, date FROM orders
ORDER BY customer ASC, date ASC, id ASC LIMIT 10;
2653|1|1303828585
2520|1|1303828713
2583|1|1303829785
1828|1|1303830446
1756|1|1303830540
1761|1|1303831506
2442|1|1303831705
2523|1|1303833761
2160|1|1303835195
2645|1|1303837524
Get the next page:
sqlite> SELECT id, customer, date FROM orders WHERE
(customer = 1 AND date = 1303837524 and id > 2645) OR
(customer = 1 AND date > 1303837524) OR
(customer > 1)
ORDER BY customer ASC, date ASC, id ASC LIMIT 10;
2515|1|1303837914
2370|1|1303839573
1898|1|1303840317
1546|1|1303842312
1889|1|1303843243
2439|1|1303843699
2167|1|1303849376
1544|1|1303850494
2247|1|1303850869
2108|1|1303853285
And so on...
Having the indexes in place avoids the server-side index scanning you would otherwise get from OFFSET coupled with LIMIT: the query time gets longer and the drives seek harder the higher the offset goes. Using this method eliminates that.
Using this method is advised if you plan on joining data later but only need a limited set of data per request. Join against a SUBSELECT as described above to reduce memory overhead for large tables.
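As a side note, on SQLite 3.15 or newer the three OR'ed conditions above can be collapsed into a single row-value comparison, which the same index can serve; a sketch:
SELECT id, customer, date FROM orders
WHERE (customer, date, id) > (1, 1303837524, 2645)
ORDER BY customer ASC, date ASC, id ASC LIMIT 10;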
