I want to select the 1st row of the result X below and update the "name" column to '53656C696E613333', as shown below. SQLite keeps reporting a syntax error. Can someone please assist with this problem? Many thanks!
CREATE TABLE Ages (name VARCHAR(128),age INTEGER);
DELETE FROM Ages;
INSERT INTO Ages (name,age) Values ('Alex',25);
INSERT INTO Ages (name,age) Values ('Mel',31);
INSERT INTO Ages (name,age) Values ('Fred',30);
INSERT INTO Ages (name,age) Values ('Nancy',35);
INSERT INTO Ages (name,age) Values ('Nathan',13);
INSERT INTO Ages (name,age) Values ('Oscar',24);
SELECT hex (name||age) AS X FROM Ages ORDER BY X
SELECT * FROM X LIMIT 1
UPDATE X SET name = '53656C696E613333'
I think you might be wanting something like :-
UPDATE Ages
SET name = '53656C696E613333'
WHERE name = (SELECT name FROM Ages ORDER BY hex(name||age) LIMIT(1))
AND age = (SELECT age FROM Ages ORDER BY hex(name||age) LIMIT(1))
;
as a replacement for :-
SELECT hex (name||age) AS X FROM Ages ORDER BY X
SELECT * FROM X LIMIT 1
UPDATE X SET name = '53656C696E613333'
This would result in Alex's row being updated, since hex('Alex'||25) = '416C65783235' sorts first :-
name              age
53656C696E613333  25
Mel               31
Fred              30
Nancy             35
Nathan            13
Oscar             24
However, as you haven't specified WITHOUT ROWID for the table, you could use the simpler :-
UPDATE Ages
SET name = '53656C696E613333'
WHERE rowid = (SELECT rowid FROM Ages ORDER BY hex(name||age) LIMIT(1))
;
Consider the following schema and table:
CREATE TABLE IF NOT EXISTS `names` (
`id` INTEGER,
`name` TEXT,
PRIMARY KEY(`id`)
);
INSERT INTO `names` VALUES (1,'zulu');
INSERT INTO `names` VALUES (2,'bene');
INSERT INTO `names` VALUES (3,'flip');
INSERT INTO `names` VALUES (4,'rossB');
INSERT INTO `names` VALUES (5,'albert');
INSERT INTO `names` VALUES (6,'zuse');
INSERT INTO `names` VALUES (7,'rossA');
INSERT INTO `names` VALUES (8,'juss');
I access this table with the following query:
SELECT *
FROM names
ORDER BY name
LIMIT 10
OFFSET 4;
Where OFFSET 4 is used because four rows precede the first occurrence of an 'R%' name in the ordered list. This returns:
1="7" "rossA"
2="4" "rossB"
3="1" "zulu"
4="6" "zuse"
My question is: is there an SQL statement which can return the OFFSET value (in the 'R' case above it's 4) given a starting first letter? (I don't really want to resort to stepping through the results, counting rows, until the first 'R%' is reached!)
I've tried the following without success:
SELECT MIN(ROWID)
FROM
(
SELECT *
FROM names
ORDER BY name
)
WHERE name LIKE 'R%'
It always returns single row of NULL data.
As background, this table is a phone book list and I want to provide a subset of the results (from the main table) back to the caller, starting at an initial-letter offset.
Just count the rows before the string of interest:
select count(*) from names where name < 'r';
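If needed, that count can be plugged straight in as the OFFSET for the paging query. A sketch, keeping the page size of 10 from the question, and assuming a non-correlated subquery is acceptable as the OFFSET expression:
SELECT *
FROM names
ORDER BY name
LIMIT 10
OFFSET (SELECT count(*) FROM names WHERE name < 'r');  -- skip everything that sorts before 'r'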
The following has a number of options. Basically, your issue is that the sub-query doesn't return the rowid, hence NULL as the minimum. However, there is no need to use the rowid directly, as the id column is an alias of the rowid, so that could be used:-
SELECT name, id, MIN(rowid), min(id) -- shows how rowid and id are the same
FROM
(
SELECT rowid, * -- returns rowid from the subquery so min(rowid) now works
FROM names
ORDER BY name
)
WHERE name LIKE 'R%' ORDER BY id ASC LIMIT 1 -- will effectively do the same (no need for the sub-query)
Extra columns added for demonstration.
As such your query could be :-
SELECT min(rowid) FROM names where name LIKE 'R%';
Or :-
SELECT min(id) FROM names where name LIKE 'R%';
You could also use :-
SELECT id FROM names WHERE name LIKE 'R%' ORDER BY id ASC LIMIT 1;
Or :-
SELECT rowid FROM names WHERE name LIKE 'R%' ORDER BY id ASC LIMIT 1;
I have created a subset of the pg_table_def table with table_name, col_name and data_type. I have also added a column active with 'Y' as the value for some of the rows. Let us call this table config. Table config looks like below:
table_name column_name
interaction_summary name_id
tag_transaction name_id
interaction_summary direct_preference
bulk_sent email_image_click
crm_dm web_le_click
Now I want to be able to map the table names from this config table to the actual tables and fetch values for the corresponding columns. name_id will be the key here, and it is available in all the tables. My output should look like below:
name_id direct_preference email_image_click web_le_click
1 Y 1 2
2 N 1 2
The solution needs to be dynamic, so that if the table list grows tomorrow the query can accommodate the new tables. Since I am new to Redshift, any help is appreciated. I am also considering doing the same via R using the dplyr package.
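For the sample config above, a static query along these lines would give me that output (name_id used as the join key; the column choices are just illustrative). What I am after is a way to generate the equivalent dynamically as the config grows:
select i.name_id,
       i.direct_preference,
       b.email_image_click,
       c.web_le_click
from interaction_summary i
left join bulk_sent b on b.name_id = i.name_id
left join crm_dm c on c.name_id = i.name_id;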
I understood that dynamic queries don't work with Redshift.
My objective was to pull any new table that comes in and use their columns for regression analysis in R.
I made this work by using the LISTAGG function and concatenation, and then wrote the output to a dataframe in R. This dataframe has 'n' rows, each containing a different generated SELECT query.
Below is the format:
df <- as.data.frame(tbl(conn, sql("
  select 'select ' || col_names || ' from ' || table_name as q1
  from (
    select distinct table_name,
           listagg(col_name, ',') within group (order by col_name)
             over (partition by table_name) as col_names
    from attribute_config
    where active = 'Y'
    order by table_name
  )
  group by 1")))
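Each row of df then holds one generated query string. With the sample config above, and assuming direct_preference and email_image_click are among the columns flagged active = 'Y', the rows would look something like:
-- illustrative only; the actual strings depend on which config rows are active
select direct_preference,name_id from interaction_summary
select email_image_click,name_id from bulk_sent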
Once done, I assigned each row of this dataframe to a new dataframe and fetched the output using the following:
df1 <- tbl(conn,sql(df[1,]))
I know this is a roundabout solution, but it works! It fetches about 17M records in under 1 second.
So I've been looking at this for the past week and learning. I'm used to SQL Server, not SQLite. I understand rowid now, and that my own "id" column (an INTEGER PRIMARY KEY, declared for convenience) is actually an alias for the rowid. I've done running totals in SQL Server using ROW_NUMBER, but that doesn't seem to be an option with SQLite. The most useful post was...
How do I calculate a running SUM on a SQLite query?
My issue is that it works as long as I keep adding data at the "bottom" of the table. I say "bottom" in quotes because my display of the data is always sorted on some other column, such as a month. In other words, if I insert a new record for a missing month, it gets inserted with a higher "id" (aka rowid). My running total below that month now needs to reflect this new data for all subsequent months, which means I cannot order by "id".
With SQL Server, ROW_NUMBER took care of my sequencing: in the SELECT where I now use a.id > running.id, I would have used a.rownum > running.rownum.
Here's my table
CREATE TABLE `Test` (
`id` INTEGER,
`month` INTEGER,
`year` INTEGER,
`value` INTEGER,
PRIMARY KEY(`id`)
);
Here's my query
WITH RECURSIVE running (id, month, year, value, rt) AS
(
SELECT id, month, year, value, value
FROM Test AS row1
WHERE row1.id = (SELECT a.id FROM Test AS a ORDER BY a.id LIMIT 1)
UNION ALL
SELECT rowN.id, rowN.month, rowN.year, rowN.value, (rowN.value + running.rt)
FROM Test AS rowN
INNER JOIN running ON rowN.id = (
SELECT a.id FROM Test AS a WHERE a.id > running.id ORDER BY a.id LIMIT 1
)
)
SELECT * FROM running
I can order my CTE by year, month, id, similar to what is suggested in the original example I linked above. However, unless I'm mistaken, that example solution relies on the records in the table already being ordered by year, month, id. If I'm right, then if I insert an earlier "month", it will break, because that row's "id" will have the largest value of all the rowids.
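Something like the sketch below, seeding and stepping on (year, month, id) instead of id alone, is what I imagine is needed (it leans on row-value comparisons, which I believe need SQLite 3.15 or later), but I'm not sure it's right:
WITH RECURSIVE running (id, month, year, value, rt) AS
(
    SELECT id, month, year, value, value
    FROM Test AS row1
    WHERE row1.id = (SELECT a.id FROM Test AS a
                     ORDER BY a.year, a.month, a.id LIMIT 1)
    UNION ALL
    SELECT rowN.id, rowN.month, rowN.year, rowN.value, (rowN.value + running.rt)
    FROM Test AS rowN
    INNER JOIN running ON rowN.id = (
        -- next row in (year, month, id) order, regardless of insertion order
        SELECT a.id FROM Test AS a
        WHERE (a.year, a.month, a.id) > (running.year, running.month, running.id)
        ORDER BY a.year, a.month, a.id
        LIMIT 1
    )
)
SELECT * FROM running;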
Appreciate if someone can set me straight.
I have a table containing user_id, movie_id, rating. These are all INT, and ratings range from 1-5.
I want to get the median rating and group it by user_id, but I'm having some trouble doing this.
My code at the moment is:
SELECT AVG(rating)
FROM (SELECT rating
FROM movie_data
ORDER BY rating
LIMIT 2 - (SELECT COUNT(*) FROM movie_data) % 2
OFFSET (SELECT (COUNT(*) - 1) / 2
FROM movie_data));
However, this seems to return the median value of all the ratings. How can I group this by user_id, so I can see the median rating per user?
The following gives the required median:
DROP TABLE IF EXISTS movie_data2;
CREATE TEMPORARY TABLE movie_data2 AS
SELECT user_id, rating FROM movie_data order by user_id, rating;
SELECT a.user_id, a.rating FROM (
SELECT user_id, rowid, rating
FROM movie_data2) a JOIN (
SELECT user_id, cast(((min(rowid)+max(rowid))/2) as int) as midrow FROM movie_data2 b
GROUP BY user_id
) c ON a.rowid = c.midrow
;
The logic is straightforward but the code is not beautified. Given encouragement or comments I will improve it. In a nutshell, the trick is to use SQLite's rowid.
This is not easily possible because SQLite does not allow correlated subqueries to refer to outer values in the LIMIT/OFFSET clauses.
Add WHERE clauses for the user_id to all three subqueries, and execute them for each user ID.
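A sketch of that suggestion, with the user ID bound as a parameter (shown here as ?1) in all three subqueries:
SELECT AVG(rating)
FROM (SELECT rating
      FROM movie_data
      WHERE user_id = ?1          -- restrict every subquery to the same user
      ORDER BY rating
      LIMIT 2 - (SELECT COUNT(*) FROM movie_data WHERE user_id = ?1) % 2
      OFFSET (SELECT (COUNT(*) - 1) / 2
              FROM movie_data
              WHERE user_id = ?1));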
SELECT user_id,AVG(rating)
FROM movie_data
GROUP BY user_id
ORDER BY rating
SQLite doesn't have a row number function. My database, however, could have several thousand records. I need to sort a table based on a date (the date field is actually an INTEGER) and then return a specific range of rows. So if I wanted all the rows from 600 to 800, I need to somehow create a row number and limit the results to fall within my desired range. I cannot use rowid or any auto-incremented ID field because the data is inserted with random dates. The closest I can get is this:
CREATE TABLE Test (ID INTEGER, Name TEXT, DateRecorded INTEGER);
Insert Into Test (ID, Name, DateRecorded) Values (5,'fox', 400);
Insert Into Test (ID, Name, DateRecorded) Values (1,'rabbit', 100);
Insert Into Test (ID, Name, DateRecorded) Values (10,'ant', 800);
Insert Into Test (ID, Name, DateRecorded) Values (8,'deer', 300);
Insert Into Test (ID, Name, DateRecorded) Values (6,'bear', 200);
SELECT ID,
Name,
DateRecorded,
(SELECT COUNT(*)
FROM Test AS t2
WHERE t2.DateRecorded > t1.DateRecorded) AS RowNum
FROM Test AS t1
where RowNum > 2
ORDER BY DateRecorded Desc;
This will work, except it's really ugly: the correlated SELECT COUNT(*) is carried out for every row encountered. So if I have several thousand rows, performance will be very poor.
This is what the LIMIT/OFFSET clauses are for:
SELECT *
FROM Test
ORDER BY DateRecorded DESC
LIMIT 200 OFFSET 600