BULK INSERT WITH INDEX IN PL/SQL - plsql

The table contains id, mobile and CNIC columns. I need to insert 1000 records with a bulk insert, where the records are inserted randomly, i.e. the mobile and CNIC values are randomly generated.

The example below creates 10 rows with random data. Check the DBMS_RANDOM documentation for other formats (lower case, mixed case, numeric, etc.).
create table some_random_data (mobile VARCHAR2(10), cnic VARCHAR2(10));

INSERT INTO some_random_data (mobile, cnic)
SELECT DBMS_RANDOM.STRING(opt => 'u', len => 10),
       DBMS_RANDOM.STRING(opt => 'u', len => 10)
FROM dual
CONNECT BY LEVEL <= 10;
select * from some_random_data;
MOBILE CNIC
---------- ----------
KCAHKDLFTS IFCPMVWKXF
PHBKAIYILO CNHZCPMETJ
XLMQEFVGCP CYYGCKLZQW
RNSBSKUOTG FTVKWUMXRC
YAHKVEYTZS MESMLPWHFY
LAZYAYTLDT NJZXHNSRUR
DHSMSQIRSB TXXRBYYAVY
XLCYPTRKDF GZOYLDQIQS
ZMIAKOHYXC LQCXQSAKXV
WPRHZEBNZR QYMEZIAKXN
10 rows selected.
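To get the 1000 rows from the question, simply raise the CONNECT BY LEVEL limit, and if digit-only values should look more like mobile numbers and CNICs, DBMS_RANDOM.VALUE can be used instead of DBMS_RANDOM.STRING. A sketch (the 10-digit length is an assumption so the values still fit the VARCHAR2(10) columns above):
-- 1000 rows of 10-digit numeric strings
INSERT INTO some_random_data (mobile, cnic)
SELECT TO_CHAR(TRUNC(DBMS_RANDOM.VALUE(1000000000, 9999999999))),
       TO_CHAR(TRUNC(DBMS_RANDOM.VALUE(1000000000, 9999999999)))
FROM dual
CONNECT BY LEVEL <= 1000;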

Related

Combining select statements to format output properly SQLite3

I want to combine these outputs to have one single table output instead of two. I also need to change the names of the rows from numbers to Protected and Non-Protected.
Here is my code:
SELECT
MediaTypeID Media, count(MediaTypeID) FROM Track t
WHERE MediaTypeID = 2 OR MediaTypeID = 3;
SELECT
MediaTypeID Media, count(MediaTypeID) FROM Track t
WHERE MediaTypeID = 1 OR MediaTypeID = 4;
This is my output:
Media count(MediaTypeID)
---------- ------------------
2 451
Media count(MediaTypeID)
---------- ------------------
4 3041
This is the kind of output required:
Media Tracks
--------------- ---------------
Protected 451
non-Protected 3052
In the general case, you can combine multiple queries (with the same number of output columns) with UNION ALL:
SELECT 'Protected' AS Media, COUNT(*) FROM Track WHERE MediaTypeID IN (2, 3)
UNION ALL
SELECT 'non-Protected' , COUNT(*) FROM Track WHERE MediaTypeID IN (1, 4);
However, to run aggregate functions with multiple groups, just use GROUP BY:
SELECT CASE WHEN MediaTypeID IN (2, 3) THEN 'Protected'
ELSE 'non-Protected'
END AS Media,
COUNT(*)
FROM Track
GROUP BY Media;

Consolidating values from multiple tables

I have an application which has data spread across 2 tables.
There is a main table Main which has columns - Id, Name, Type.
Now there is a SubMain table that has columns - MainId (FK), StartDate, EndDate, City,
and this is a 1-to-many relation (each Main can have multiple entries in SubMain).
Now I want to display the columns Main.Id, City (comma-separated from the various SubMain rows for that Main item), the min of StartDate and the max of EndDate (both from SubMain for that Main item).
I thought of having a function, but that will slow things down since there will be 100k records. Is there some other way of doing this? By the way, the application is in ASP.NET. Can we have a SQL query or some LINQ kind of thing?
This is off the top of my head, but firstly I would suggest you create a user-defined function in SQL to build the comma-separated city list string. It accepts @mainid, then does the following:
DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr + ',', '') + city
FROM submain
WHERE mainid = @mainid
... and then returns @listStr, which will now be a comma-separated list of cities. Let's say you call your function MainIDCityStringGet().
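Wrapped up as a scalar function it might look something like this (a sketch only; the INT parameter and VARCHAR(MAX) return type are assumptions):
CREATE FUNCTION dbo.MainIDCityStringGet (@mainid INT)
RETURNS VARCHAR(MAX)
AS
BEGIN
    DECLARE @listStr VARCHAR(MAX)

    -- builds 'city1,city2,...' for the given main id
    SELECT @listStr = COALESCE(@listStr + ',', '') + city
    FROM submain
    WHERE mainid = @mainid

    RETURN @listStr
END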
Then for your final result you can simply execute the following
select cts.mainid,
       cts.cities,
       sts.minstartdate,
       sts.maxenddate
from   ( select distinct mainid,
                dbo.MainIDCityStringGet(mainid) as 'cities'
         from submain ) as cts
       join
       ( select mainid,
                min(startdate) as 'minstartdate',
                max(enddate) as 'maxenddate'
         from submain
         group by mainid ) as sts on sts.mainid = cts.mainid
where startdate <is what you want it to be>
and   enddate <is what you want it to be>
Depending on how exactly you would like to filter by startdate and enddate, you may need to put the WHERE filter within each subquery, and in the second subquery of the join you may then need to use a HAVING filter on the grouped values. You did not clearly state the nature of your filter.
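For example, if you wanted to keep only main items whose aggregated dates fall inside a window, the second subquery could carry a HAVING clause like this (a sketch only; @From and @To are hypothetical parameters, since the question does not say what the filter should be):
select mainid,
       min(startdate) as 'minstartdate',
       max(enddate) as 'maxenddate'
from submain
group by mainid
having min(startdate) >= @From   -- hypothetical lower bound
   and max(enddate) <= @To       -- hypothetical upper bound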
I hope that helps.
This will of course be in stored procedure. May need some debugging.
An alternative to creating a stored procedure is performing the complex operations on the client side. (untested):
var result = (from main in context.Main
              join sub in context.SubMain on main.Id equals sub.MainId into subs
              let StartDate = subs.Min(s => s.StartDate)
              let EndDate = subs.Max(s => s.EndDate)
              let Cities = subs.Select(s => s.City).Distinct()
              select new { main.Id, main.Name, main.Type, StartDate, EndDate, Cities })
             .ToList()
             .Select(x => new
             {
                 x.Id,
                 x.Name,
                 x.Type,
                 x.StartDate,
                 x.EndDate,
                 Cities = string.Join(", ", x.Cities.ToArray())
             })
             .ToList();
I am unsure how well this is supported in other implementations of SQL, but if you have SQL Server this works like a charm for this type of scenario.
As a disclaimer I would like to add that I am not the originator of this technique. But I immediately thought of this question when I came across it.
Example:
For a table
Item ID Item Value Item Text
----------- ----------------- ---------------
1 2 A
1 2 B
1 6 C
2 2 D
2 4 A
3 7 B
3 1 D
Suppose you want the following output, with the strings concatenated and the values summed:
Item ID Item Value Item Text
----------- ----------------- ---------------
1 10 A, B, C
2 6 D, A
3 8 B, D
The following avoids a multi-statement looping solution:
if object_id('Items') is not null
drop table Items
go
create table Items
( ItemId int identity(1,1),
ItemNo int not null,
ItemValue int not null,
ItemDesc nvarchar(500) )
insert Items
( ItemNo,
ItemValue,
ItemDesc )
values ( 1, 2, 'A'),
( 1, 2, 'B'),
( 1, 6, 'C'),
( 2, 2, 'D'),
( 2, 4, 'A'),
( 3, 7, 'B'),
( 3, 1, 'D')
select it1.ItemNo,
sum(it1.ItemValue) as ItemValues,
stuff((select ', ' + it2.ItemDesc --// Stuff is just used to remove the first 2 characters, instead of a substring.
from Items it2 with (nolock)
where it1.ItemNo = it2.ItemNo
for xml path(''), type).value('.','varchar(max)'), 1, 2, '') as ItemDescs --// Does the actual concatenation..
from Items it1 with (nolock)
group by it1.ItemNo
So you see, all you need is a subquery in your SELECT that retrieves the set of values you need to concatenate, and the FOR XML PATH trick in that subquery does the actual concatenation. It does not matter where the values you need to concatenate come from; you just need to retrieve them with the subquery.
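If you are on SQL Server 2017 or later, STRING_AGG gives the same result without the FOR XML PATH trick; a brief sketch against the same Items table (this is an alternative, not what the answer above uses):
SELECT ItemNo,
       SUM(ItemValue) as ItemValues,
       STRING_AGG(ItemDesc, ', ') as ItemDescs
FROM Items
GROUP BY ItemNo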

SQLITE equivalent for Oracle's ROWNUM?

I'm adding an 'index' column to a table in SQLite3 to allow the users to easily reorder the data, by renaming the old database and creating a new one in its place with the extra columns.
The problem I have is that I need to give each row a unique number in the 'index' column when I INSERT...SELECT the old values.
A search I did turned up a useful term in Oracle called ROWNUM, but SQLite3 doesn't have that. Is there something equivalent in SQLite?
You can use one of the special row names ROWID, OID or _ROWID_ to get the rowid of a row. See http://www.sqlite.org/lang_createtable.html#rowid for further details (and note that the built-in rowid can be hidden by an ordinary column called ROWID and so on).
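For example, assuming the old table is an ordinary rowid table (not created WITHOUT ROWID):
-- rowid is available implicitly even though it is not declared as a column
SELECT rowid, * FROM old;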
Many people here seem to mix up ROWNUM with ROWID. They are not the same concept, and Oracle has both.
ROWID is a unique ID of a database row. It is almost invariant (it can change during an import/export, but it stays the same across different SQL queries).
ROWNUM is a calculated field corresponding to the row number in the query result. It's always 1 for the first row, 2 for the second, and so on. It is absolutely not linked to any table row, and the same table row could have very different ROWNUMs depending on how it is queried.
SQLite has a ROWID but no ROWNUM. The only equivalent I found is the ROW_NUMBER() window function (see http://www.sqlitetutorial.net/sqlite-window-functions/sqlite-row_number/).
You can achieve what you want with a query like this:
insert into new
select *, row_number() over ()
from old;
No, SQLite doesn't have a direct equivalent to Oracle's ROWNUM.
If I understand your requirement correctly, you should be able to add a numbered column based on ordering of the old table this way:
create table old (col1, col2);
insert into old values
('d', 3),
('s', 3),
('d', 1),
('w', 45),
('b', 5465),
('w', 3),
('b', 23);
create table new (colPK INTEGER PRIMARY KEY AUTOINCREMENT, col1, col2);
insert into new select NULL, col1, col2 from old order by col1, col2;
The new table contains:
.headers on
.mode column
select * from new;
colPK col1 col2
---------- ---------- ----------
1 b 23
2 b 5465
3 d 1
4 d 3
5 s 3
6 w 3
7 w 45
The AUTOINCREMENT does what its name suggests: each additional row gets the previous row's value incremented by 1.
I believe you want to use the LIMIT clause in SQLite.
SELECT * FROM TABLE can return thousands of records.
However, you can constrain this by adding the LIMIT keyword.
SELECT * FROM TABLE LIMIT 5;
This will return the first 5 records from your query's result, if available.
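LIMIT can also be combined with OFFSET to pick a specific window of rows, which is closer to ROWNUM-style paging; a sketch with placeholder table and column names:
-- rows 21-40 of the result, assuming a stable ORDER BY
SELECT * FROM my_table ORDER BY some_column LIMIT 20 OFFSET 20;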
Use this pattern to create a row number running from 0 to row count - 1 (the tie-break on rowid assumes an ordinary rowid table):
SELECT (SELECT COUNT(*)
        FROM Table_name AS t2
        WHERE t2.col1 < t1.col1) +
       (SELECT COUNT(*)
        FROM Table_name AS t3
        WHERE t3.col1 = t1.col1 AND t3.rowid < t1.rowid) AS rowNum,
       t1.*
FROM Table_name AS t1
ORDER BY t1.col1 ASC

sql query selection

I am trying to build a query which will do this:
Let's say for example I have 100 records in the table. I have a .NET form which calls the query, and a querystring parameter pageindex, something like http://mysite.com?id=2.
What I want to do now: if id is NULL, get the 1st set of records from that table (rows 1 to 20); if id=2, get the 2nd set of records (rows 20 to 40); if id=3, get the 3rd set (rows 40 to 60); and so on.
I want to know if this is possible.
Thanks a lot in advance, Laziale
If the IDs really are sequential from 1 to 100 as you describe, you can do
@Page is the page number (0-based)
SELECT TOP 20 * FROM MyTable WHERE (ID > @Page*20) ORDER BY ID
If you wish to use the MS SQL paging style and the IDs are not sequential, you can do
WITH NewTable AS (SELECT *, ROW_NUMBER() OVER (ORDER BY ID) AS RowNumber FROM MyTable)
SELECT TOP 20 * FROM NewTable WHERE (RowNumber > @Page*20)
Reference :
http://msdn.microsoft.com/en-us/library/ms186734.aspx
http://msdn.microsoft.com/en-us/library/ms175972.aspx
SELECT col1, col2
FROM (
    SELECT col1, col2, ROW_NUMBER() OVER (ORDER BY ID) AS RowNum
    FROM MyTable
) AS MyDerivedTable
WHERE MyDerivedTable.RowNum BETWEEN @startRow AND @endRow
taken from Row Offset in SQL Server, the first result on Google for the "mssql limit offset" search query.
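On SQL Server 2012 and later, OFFSET/FETCH expresses the same paging more directly; a sketch assuming the same 0-based @Page parameter:
SELECT col1, col2
FROM MyTable
ORDER BY ID
OFFSET @Page * 20 ROWS
FETCH NEXT 20 ROWS ONLY;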

issue in sql Query

I have a column in a table; this column is named items and it contains values like this:
itemID items
1 school,college
2 place, country
3 college,cricket
4 School,us,college
5 cricket,country,place
6 football,tennis,place
7 names,tennis,cricket
8 sports,tennis
Now I need to write a search query
Ex: if the user types 'cricket' into a textbox and clicks the button, I need to check the items column for cricket.
In the table I have 3 rows with cricket in the items column (ItemId = 3, 5, 7).
If the user types in tennis,cricket then I need to get the records that match either one, so I need to get 5 rows (ItemId = 3, 5, 6, 7, 8).
How do I write a query for this requirement?
You need to start by redesigning your database, as this is a very bad structure. You NEVER store a comma-delimited list in a field. First think about what fields you need, and then design a proper database.
The very bad structure of this table (holding multiple values in one column) is the reason you are facing this issue. Your best option is to normalize the table.
But if you can't, then you can use the "Like" operator, with a wildcard
Select * From Table
Where items Like '%cricket%'
or
Select * From Table
Where items Like '%cricket%'
or items Like '%tennis%'
You will need to dynamically construct these SQL queries from the inputs the user makes. The other alternative is to write code on the server to turn the comma-delimited list of parameters into a table variable or temp table and then join to it.
Delimited values in columns is almost always a bad table design. Fix your table structure.
If for some reason you are unable to do that, the best you can hope for is this:
SELECT * FROM [MyTable] WHERE items LIKE '%CRICKET%'
This is still very bad, for two important reasons:
Correctness. It would also return values that merely contain the word as a substring. Using your tennis example, what if you also had a "tennis shoes" item?
Performance. It's not sargable, which means the query won't work with any indexes you may have on that column. That means your query will probably be incredibly slow.
If you need help fixing this structure, the solution is to add another table — we'll call it TableItems — with a column for your ItemID that will be a foreign key to your original table, and an Item field (singular) for each of your item values. Then you can join to that table and match a column value exactly. If these items work more like categories, where you want all rows with the "Cricket" item to point at the same cricket record, you also want a third table acting as an intersection (junction) table between your original table and the one I just had you create.
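A minimal sketch of that normalized layout, using the TableItems name suggested above (the MyTable name and column sizes are assumptions for illustration):
-- one row per item per original record
CREATE TABLE TableItems (
    ItemID INT NOT NULL,         -- foreign key back to the original table
    Item VARCHAR(50) NOT NULL    -- a single value such as 'cricket'
)

-- exact matches replace the LIKE '%...%' scan
SELECT DISTINCT t.itemID
FROM MyTable t
INNER JOIN TableItems ti ON ti.ItemID = t.itemID
WHERE ti.Item IN ('tennis', 'cricket')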
For a single item:
SELECT itemID, items FROM MyTable WHERE items LIKE '%cricket%'
For multiple items:
SELECT itemID, items FROM MyTable WHERE items LIKE '%tennis%' or items LIKE '%cricket%'
You'll need to parse the input and split them up and add each item to the query:
items LIKE '%item1%' or items LIKE '%item2%' or items LIKE '%item3%' ...
I think that in the interest of validity of data, it should be normalized so that you split the Items into a separate table with an item on each row.
In either case, here is a working sample that uses a user-defined function to split the incoming string into a table variable and then uses a JOIN with a LIKE:
CREATE FUNCTION dbo.udf_ItemParse
(
    @Input VARCHAR(8000),
    @Delimeter char(1) = '|'
)
RETURNS @ItemList TABLE
(
    Item VARCHAR(50),
    Pos int
)
AS
BEGIN
    DECLARE @Item varchar(50)
    DECLARE @StartPos int, @Length int
    DECLARE @Pos int

    SET @Pos = 0

    WHILE LEN(@Input) > 0
    BEGIN
        SET @StartPos = CHARINDEX(@Delimeter, @Input)
        IF @StartPos < 0 SET @StartPos = 0

        SET @Length = LEN(@Input) - @StartPos - 1
        IF @Length < 0 SET @Length = 0

        IF @StartPos > 0
        BEGIN
            SET @Pos = @Pos + 1
            SET @Item = SUBSTRING(@Input, 1, @StartPos - 1)
            SET @Input = SUBSTRING(@Input, @StartPos + 1, LEN(@Input) - @StartPos)
        END
        ELSE
        BEGIN
            SET @Pos = @Pos + 1
            SET @Item = @Input
            SET @Input = ''
        END

        INSERT @ItemList (Item, Pos) VALUES (@Item, @Pos)
    END

    RETURN
END
GO
DECLARE @Itemstable TABLE
(
    ItemId INT,
    Items VarChar (1000)
)

INSERT INTO @Itemstable
SELECT 1 itemID, 'school,college' items UNION
SELECT 2, 'place, country' UNION
SELECT 3, 'college,cricket' UNION
SELECT 4, 'School,us,college' UNION
SELECT 5, 'cricket,country,place' UNION
SELECT 6, 'football,tennis,place' UNION
SELECT 7, 'names,tennis,cricket' UNION
SELECT 8, 'sports,tennis'

DECLARE @SearchParameter VarChar (100)

SET @SearchParameter = 'cricket'

SELECT DISTINCT ItemsTable.*
FROM @Itemstable ItemsTable
INNER JOIN dbo.udf_ItemParse (@SearchParameter, ',') udf
    ON ItemsTable.Items LIKE '%' + udf.Item + '%'

SET @SearchParameter = 'cricket,tennis'

SELECT DISTINCT ItemsTable.*
FROM @Itemstable ItemsTable
INNER JOIN dbo.udf_ItemParse (@SearchParameter, ',') udf
    ON ItemsTable.Items LIKE '%' + udf.Item + '%'
Why exactly are you using a database in the first place?
I mean: you are clearly not using its potential. If you like comma-separated stuff that much, try a file.
In MySQL, create a fulltext index on your table:
CREATE FULLTEXT INDEX fx_mytable_items ON mytable (items)
and issue this query:
SELECT *
FROM mytable
WHERE MATCH(items) AGAINST ('cricket tennis' IN BOOLEAN MODE)
