sqlite - How do I get a one-row result back? (LuaSQLite3)

How can I get a single-row result (e.g. in the form of a table/array) back from an SQL statement, using Lua SQLite (LuaSQLite3)? For example, this one:
SELECT * FROM sqlite_master WHERE name ='myTable';
So far I have noted:
using "nrows"/"rows" gives an iterator back
using "exec" doesn't seem to give a result back(?)
Specific questions are then:
Q1 - How to get a single row (say first row) result back?
Q2 - How to get row count? (e.g. num_rows_returned = db:XXXX(sql))

In order to get a single row use the db:first_row method. Like so.
row = db:first_row("SELECT `id` FROM `table`")
print(row.id)
In order to get the row count, use the SQL COUNT aggregate. Like so.
row = db:first_row("SELECT COUNT(`id`) AS count FROM `table`")
print(row.count)
EDIT: Ah, sorry for that. Here are some methods that should work.
You can also use db:nrows, which returns an iterator over the result rows. Like so.
for row in db:nrows("SELECT `id` FROM `table`") do
  print(row.id)
  break -- only the first row is needed
end
We can also modify this to get the number of rows.
for row in db:nrows("SELECT COUNT(`id`) AS count FROM `table`") do
  print(row.count)
end

Here is a demo of getting the returned count:
> require "lsqlite3"
> db = sqlite3.open":memory:"
> db:exec "create table foo (x,y,z);"
> for x in db:urows "select count(*) from foo" do print(x) end
0
> db:exec "insert into foo values (10,11,12);"
> for x in db:urows "select count(*) from foo" do print(x) end
1
>

Just loop over the iterator you get back from rows (or whichever function you use), but put a break at the end so you only iterate once.
Getting the count is all about using SQL. You compute it with the SELECT statement:
SELECT count(*) FROM ...
This will return one row containing a single value: the number of rows in the query.
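For example, a minimal sketch of both ideas in lsqlite3 (assuming db is an already-open database handle, and reusing the sqlite_master query from the question):
local row_result             -- will hold the first row as a table keyed by column name

-- Q1: take the first row from the iterator, then stop.
for row in db:nrows("SELECT * FROM sqlite_master WHERE name = 'myTable'") do
  row_result = row
  break                      -- only iterate once
end

-- Q2: let SQL compute the count; urows yields the bare value.
local num_rows_returned
for n in db:urows("SELECT count(*) FROM sqlite_master WHERE name = 'myTable'") do
  num_rows_returned = n
end
print(num_rows_returned)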

This is similar to what I'm using in my project and works well for me.
local query = "SELECT content FROM playerData WHERE name = 'myTable' LIMIT 1"
local queryResultTable = {}
local queryFunction = function(userData, numberOfColumns, columnValues, columnTitles)
  for i = 1, numberOfColumns do
    queryResultTable[columnTitles[i]] = columnValues[i]
  end
  return 0 -- lsqlite3 expects the callback to return 0 to continue with the next row
end
db:exec(query, queryFunction)
for k, v in pairs(queryResultTable) do
  print(k, v)
end
You can even concatenate values into the query, so it can be placed inside a generic method/function.
local query = "SELECT * FROM ZQuestionTable WHERE ConceptNumber = "..conceptNumber.." AND QuestionNumber = "..questionNumber.." LIMIT 1"

Related

Converting a string to be used in an IN clause (Teradata)

I have a string like this: ('car, bus, train')
I want to convert it to be used in an IN clause. Basically I want to convert it to
('car','bus','train'). How do I do this in Teradata?
I don't know how you are getting data like that, but if you have no control over that, you can use STRTOK_SPLIT_TO_TABLE.
select t.* from table (strtok_split_to_table(1,'car, bus, train',',')
returns (outkey integer,tokennum integer,resultstring varchar(25))) as t
Run by itself, that gives you:
outkey  tokennum  resultstring
1       1         car
1       2         bus
1       3         train
You can use that as a derived table and join it to the table you want to filter by. Something like:
select
<your table>.*
from
<your table>
inner join (select t.* from table (strtok_split_to_table(1,'car, bus, train',',')
returns (outkey integer,tokennum integer,resultstring varchar(25))) as t) dt
on yourtable.yourcolumn = dt.resultstring
Here is another way of splitting the input, for any number of commas, and using it in an IN clause.
SELECT regexp_substr('car,bus,train','[^,]+',1,day_of_calendar) fields
FROM sys_calendar.calendar
WHERE day_of_calendar <= (CHAR('car,bus,train') - CHAR(oreplace('car,bus,train',',','')))+1;
Output of the Query
fields
~~~~~~~~
bus
car
train
Here is the syntax to use it in a WHERE clause:
SELECT * FROM <your table>
WHERE yourtable.requiredColumn in
(
SELECT regexp_substr('car,bus,train','[^,]+',1,day_of_calendar) fields
FROM sys_calendar.calendar
WHERE
day_of_calendar <= (CHAR('car,bus,train') - CHAR(oreplace('car,bus,train',',','')))+1
);
Basically, what we are doing here is splitting the string at each comma; the expression below calculates the number of commas in the string:
(CHAR('car,bus,train') - CHAR(oreplace('car,bus,train',',','')))+1

Select Top 1 From a Table For Each row in another Table

I am just starting to work with OpenEdge and I need to join information from two tables, but I only need the first row from the second one.
Basically I need to do a typical SQL CROSS APPLY, but in Progress. I looked in the documentation, and the statement FETCH FIRST 10 ROWS ONLY is only available in OpenEdge 11.
My query is:
SELECT * FROM la_of JOIN PUB.la_ofart ON la_of.empr_cod = la_ofart.empr_cod
AND la_of.Cod_Ordf = la_ofart.Cod_Ordf
AND la_of.Num_ordex = la_ofart.Num_ordex AND la_of.Num_partida = la_ofart.Num_partida
CROSS APPLY (
SELECT TOP 1 ofart.Cod_Ordf AS Cod_Ordf_ofart ,
ofart.Num_ordex AS Num_ordex_ofart
FROM la_ofart AS ofart
WHERE ofart.empr_cod = la_ofart.empr_cod
AND ofart.Num_partida = la_ofart.Num_partida
AND la_ofart.doc1_num = ofart.doc1_num
AND la_ofart.doc2_linha = ofart.doc2_linha
ORDER BY ofart.Cod_Ordf DESC) ofart
I am using SSMS to extract data from OE10 through an ODBC connector, querying OE using OPENQUERY.
Thanks for all help.
If I understood your question correctly, maybe you can use something like this. It may not be the best solution for your problem, but it may suit your needs.
DEF BUFFER ofart FOR la_ofart.
DEF TEMP-TABLE tt-ofart NO-UNDO LIKE ofart
    FIELD seq AS INT
    INDEX ch-seq seq.
DEF VAR i-count AS INT NO-UNDO.

EMPTY TEMP-TABLE tt-ofart.

blk:
FOR EACH la_ofart NO-LOCK,
    EACH la_of NO-LOCK
        WHERE la_of.empr_cod    = la_ofart.empr_cod
          AND la_of.Cod_Ordf    = la_ofart.Cod_Ordf
          AND la_of.Num_ordex   = la_ofart.Num_ordex
          AND la_of.Num_partida = la_ofart.Num_partida,
    EACH ofart NO-LOCK
        WHERE ofart.empr_cod    = la_ofart.empr_cod
          AND ofart.Num_partida = la_ofart.Num_partida
          AND ofart.doc1_num    = la_ofart.doc1_num
          AND ofart.doc2_linha  = la_ofart.doc2_linha
    BREAK BY ofart.Cod_Ordf DESCENDING:

    ASSIGN i-count = i-count + 1.
    CREATE tt-ofart.
    BUFFER-COPY ofart TO tt-ofart
        ASSIGN tt-ofart.seq = i-count.
    IF i-count >= 10 THEN
        LEAVE blk.
END.

FOR EACH tt-ofart USE-INDEX ch-seq:
    DISP tt-ofart WITH SCROLLABLE 1 COL 1 DOWN NO-ERROR.
END.

How to execute a complex sql statement and get the results in an array?

I would like to execute a fairly complex SQL statement using SQLite.swift and get the result preferably in an array to use as a data source for a tableview. The statement looks like this:
SELECT defindex, AVG(price) FROM prices WHERE quality = 5 AND price_index != 0 GROUP BY defindex ORDER BY AVG(price) DESC
I was studying the SQLite.swift documentation to find out how to do it properly, but I couldn't find a way. I could call prepare on the database and iterate through the Statement object, but that wouldn't be optimal performance-wise.
Any help would be appreciated.
Most sequences in Swift can be unpacked into an array by simply wrapping the sequence itself in an array:
let stmt = db.prepare(
    "SELECT defindex, AVG(price) FROM prices " +
    "WHERE quality = 5 AND price_index != 0 " +
    "GROUP BY defindex " +
    "ORDER BY AVG(price) DESC"
)
let rows = Array(stmt)
Building a data source from this should be relatively straightforward at this point.
If you use the type-safe API, it would look like this:
let query = prices.select(defindex, average(price))
    .filter(quality == 5 && price_index != 0)
    .group(defindex)
    .order(average(price).desc)
let rows = Array(query)

How can I get the row number of results when querying with DQL (Doctrine)?

Example: SELECT title, ROW_NUM FROM article ORDER BY count_read.
What should ROW_NUM be replaced by?
I don't want to generate the index programmatically after getting the results, because I want to insert the result data into a Rank table using the example query above.
What I want to achieve may look like:
"INSERT INTO RANK r (title, index, lastIndex)
SELECT title,ROW_NUM,(SELECT index FROM RANK WHERE id = :id - 1) FROM article ORDER BY count_read"
Thanks in advance..
I think you might use variables, like this:
"
SET @row_num := 0;
INSERT INTO RANK (title, index, lastIndex)
SELECT title,
       (@row_num := @row_num + 1),
       (SELECT index FROM RANK WHERE id = :id - 1)
FROM article ORDER BY count_read
"

Issue in SQL query

I have a column named items in a table; it contains values like this:
itemID items
1 school,college
2 place, country
3 college,cricket
4 School,us,college
5 cricket,country,place
6 football,tennis,place
7 names,tennis,cricket
8 sports,tennis
Now I need to write a search query
Ex: if the user types 'cricket' into a textbox and clicks the button I need to check in the column items for cricket.
In the table I have 3 rows with cricket in the items column (ItemId = 3, 5, 7)
If the user types in tennis,cricket then I need to get the records that match either one. So I need to get 5 rows (ItemId = 3, 5, 6, 7, 8).
How do I write a query for this requirement?
You need to start by redesigning your database, as this is a very bad structure. You NEVER store a comma-delimited list in a field. First think about what fields you need, and then design a proper database.
The very bad structure of this table (holding multiple values in one column) is the reason you are facing this issue. Your best option is to normalize the table.
But if you can't, then you can use the LIKE operator with a wildcard:
Select * From Table
Where items Like '%cricket%'
or
Select * From Table
Where items Like '%cricket%'
or items Like '%tennis%'
You will need to dynamically construct these SQL queries from the inputs the user makes. The other alternative is to write code on the server to turn the comma-delimited list of parameters into a table variable or temp table, and then join to it.
Delimited values in columns are almost always a bad table design. Fix your table structure.
If for some reason you are unable to do that, the best you can hope for is this:
SELECT * FROM [MyTable] WHERE items LIKE '%CRICKET%'
This is still very bad, for two important reasons:
Correctness. It would return any value that merely contains the word cricket. Using your tennis example, what if you also had a "tennis shoes" item?
Performance. It's not sargable, which means the query won't work with any indexes you may have on that column. That means your query will probably be incredibly slow.
If you need help fixing this structure, the solution is to add another table (we'll call it TableItems) with a column for your ItemID, which will be a foreign key to your original table, and an item field (singular) for each of your item values. Then you can join to that table and match a column value exactly. If these items work more like categories, where you want all rows with the "Cricket" item to refer to the same cricket item, you also want a third table as an intersection between your original table and the one I just had you create.
For a single item:
SELECT itemID, items FROM MyTable WHERE items LIKE '%cricket%'
For multiple items:
SELECT itemID, items FROM MyTable WHERE items LIKE '%tennis%' or items LIKE '%cricket%'
You'll need to parse the input, split it up, and add each item to the query:
items LIKE '%item1%' or items LIKE '%item2%' or items LIKE '%item3%' ...
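For illustration only, here is one way that clause could be built from the user's comma-separated input (sketched in Lua; any client language works the same way, and buildItemsFilter is a made-up helper name, with MyTable and items taken from the queries above):
-- Hypothetical helper: "tennis,cricket" -> "items LIKE '%tennis%' or items LIKE '%cricket%'"
local function buildItemsFilter(userInput)
  local clauses = {}
  for item in string.gmatch(userInput, "[^,]+") do
    item = item:gsub("^%s+", ""):gsub("%s+$", "")   -- trim surrounding spaces
    item = item:gsub("'", "''")                     -- escape single quotes
    clauses[#clauses + 1] = "items LIKE '%" .. item .. "%'"
  end
  return table.concat(clauses, " or ")
end

local sql = "SELECT itemID, items FROM MyTable WHERE " .. buildItemsFilter("tennis,cricket")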
I think that, in the interest of data validity, this should be normalized so that you split the items into a separate table with one item per row.
In either case, here is a working sample that uses a user-defined function to split the incoming string into a table variable, and then uses a JOIN with a LIKE:
CREATE FUNCTION dbo.udf_ItemParse
(
    @Input VARCHAR(8000),
    @Delimeter CHAR(1) = '|'
)
RETURNS @ItemList TABLE
(
    Item VARCHAR(50),
    Pos INT
)
AS
BEGIN
    DECLARE @Item VARCHAR(50)
    DECLARE @StartPos INT, @Length INT
    DECLARE @Pos INT
    SET @Pos = 0

    WHILE LEN(@Input) > 0
    BEGIN
        SET @StartPos = CHARINDEX(@Delimeter, @Input)
        IF @StartPos < 0 SET @StartPos = 0
        SET @Length = LEN(@Input) - @StartPos - 1
        IF @Length < 0 SET @Length = 0
        IF @StartPos > 0
        BEGIN
            SET @Pos = @Pos + 1
            SET @Item = SUBSTRING(@Input, 1, @StartPos - 1)
            SET @Input = SUBSTRING(@Input, @StartPos + 1, LEN(@Input) - @StartPos)
        END
        ELSE
        BEGIN
            SET @Pos = @Pos + 1
            SET @Item = @Input
            SET @Input = ''
        END
        INSERT @ItemList (Item, Pos) VALUES (@Item, @Pos)
    END
    RETURN
END
GO
DECLARE @Itemstable TABLE
(
    ItemId INT,
    Items VARCHAR(1000)
)

INSERT INTO @Itemstable
SELECT 1 itemID, 'school,college' items UNION
SELECT 2, 'place, country' UNION
SELECT 3, 'college,cricket' UNION
SELECT 4, 'School,us,college' UNION
SELECT 5, 'cricket,country,place' UNION
SELECT 6, 'football,tennis,place' UNION
SELECT 7, 'names,tennis,cricket' UNION
SELECT 8, 'sports,tennis'

DECLARE @SearchParameter VARCHAR(100)

SET @SearchParameter = 'cricket'
SELECT DISTINCT ItemsTable.*
FROM @Itemstable ItemsTable
INNER JOIN udf_ItemParse(@SearchParameter, ',') udf
    ON ItemsTable.Items LIKE '%' + udf.Item + '%'

SET @SearchParameter = 'cricket,tennis'
SELECT DISTINCT ItemsTable.*
FROM @Itemstable ItemsTable
INNER JOIN udf_ItemParse(@SearchParameter, ',') udf
    ON ItemsTable.Items LIKE '%' + udf.Item + '%'
Why exactly are you using a database in the first place?
I mean: you are clearly not using its potential. If you like using comma-separated stuff, try a file.
In MySQL, create a fulltext index on your table:
CREATE FULLTEXT INDEX fx_mytable_items ON mytable (items)
and issue this query:
SELECT *
FROM mytable
WHERE MATCH(items) AGAINST ('cricket tennis' IN BOOLEAN MODE)
