Oracle 11g fetch values using offset value - oracle11g

I am trying to fetch a set of records from the database part by part.
I tried to use LIMIT and FETCH, but they do not seem to work with Oracle 11g. Is there an alternative way to do this? I have tried many of the suggestions that Google turned up, but nothing works properly.

You can use this query and do what you want.
SELECT T.*
FROM (SELECT A.*, ROWNUM ROWNUMBER
        FROM Table1 A
       WHERE ROWNUM <= :to_row) T
WHERE ROWNUMBER > :from_row;
:from_row is the row number to start after and :to_row is the last row number to return.
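For example, to fetch rows 11 through 20 of a hypothetical EMPLOYEES table (the table and column names here are only for illustration), put the ORDER BY in an inner query so the pages are deterministic:
SELECT T.*
FROM (SELECT A.*, ROWNUM ROWNUMBER
        FROM (SELECT * FROM employees ORDER BY hire_date) A
       WHERE ROWNUM <= 20) T
WHERE ROWNUMBER > 10;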

A sound application is based on sound design. Check whether you are trying to meet a procedural requirement with a single SQL statement; if so, it is better to use PL/SQL instead of plain SQL.
Create a cursor using the required SQL, without any limits.
Create an associative array type to hold a batch of records.
Declare an associative array of that type.
Open the cursor and loop over it, fetching in batches:
FETCH created_cursor BULK COLLECT INTO created_associative_array LIMIT batch_size;
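A minimal sketch of the whole pattern, assuming a hypothetical table named Table1 and a batch size of 100:
DECLARE
  CURSOR c_data IS
    SELECT * FROM Table1;  -- the required SQL, with no limits

  -- associative array type and variable to hold one batch of records
  TYPE t_rows IS TABLE OF c_data%ROWTYPE INDEX BY PLS_INTEGER;
  l_batch t_rows;
BEGIN
  OPEN c_data;
  LOOP
    FETCH c_data BULK COLLECT INTO l_batch LIMIT 100;
    EXIT WHEN l_batch.COUNT = 0;

    FOR i IN 1 .. l_batch.COUNT LOOP
      NULL;  -- process l_batch(i) here
    END LOOP;
  END LOOP;
  CLOSE c_data;
END;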
Hope this helps.

Related

Why TableAdapter doesn't recognize @parameter

I am using the TableAdapter Query Configuration Wizard in Visual Studio 2013 to get data from my database. For some queries like this:
SELECT *
FROM ItemsTable
ORDER BY date_of_creation desc, time_of_creation desc
OFFSET (@PageNumber - 1) * @RowsPerPage ROWS
FETCH NEXT @RowsPerPage ROWS ONLY
it doesn't recognize @PageNumber as a parameter, and it cannot generate a function with these arguments, while it works fine for queries like:
Select Top (@count) * from items_table
Why does the TableAdapter fail to generate a function with the mentioned arguments for the first query, when it can generate one fine for the second, for example tableadapter.getDataByCount(int count)?
Am I forced to use a stored procedure? If yes, how, since I don't know anything about them?
Update: The problem occurs in the TableAdapter Configuration Wizard in the DataSet editor (VS 2013). It doesn't generate functions with these parameters, and sometimes it says @RowsPerPage should be declared, even though it should generate a function with these arguments. I found that it happens when a @parameter_name is used in a clause other than SELECT or WHERE; in this query, for example, it is used in the OFFSET clause.
I can't tell you how to fix it in ASP, but here is a simple stored procedure that should do the same thing:
CREATE PROCEDURE dbo.ReturnPageOfItems
(
    @pageNumber INT,
    @rowsPerPage INT
)
AS
BEGIN
    SELECT *
    FROM dbo.ItemsTable
    ORDER BY date_of_creation DESC,
             time_of_creation DESC
    OFFSET (@pageNumber - 1) * @rowsPerPage ROWS
    FETCH NEXT @rowsPerPage ROWS ONLY;
END;
This will also perform better than simply passing the query, because SQL Server can reuse the cached query plan created for the procedure on its first execution. It is also best practice not to use SELECT *, as that can cause maintenance trouble if the schema of the table(s) involved changes, so I encourage you to spell out the columns you're actually interested in. The documentation for the CREATE PROCEDURE command spells out the many options you have in greater detail, but the code above should work fine as is.
If you need to grant access to your application user so they can use this proc, that code is
GRANT EXECUTE ON OBJECT::dbo.ReturnPageOfItems TO userName;
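For example, to fetch the third page of 25 rows you would call it like this:
EXEC dbo.ReturnPageOfItems @pageNumber = 3, @rowsPerPage = 25;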

run query in all schemas

Hi, I am a newbie to Oracle. I have a query that I need to run against every schema in the database (say scott, hr, example, etc.) that has a table named transaction.
Can someone tell me the best way to do this? I have about 30 schemas in the database, so running the query against each schema by hand is too time consuming.
I was thinking PL/SQL would be the best way to do it, but I am not knowledgeable enough to do this myself.
query example:
select sum(amount)
from abc.transaction t1
where t1.payment_method = 'transfer'
and TO_CHAR(t1.result_time_stamp, 'MONTH') = TO_CHAR(sysdate, 'MONTH');
Thanks in advance for the help.
I'm not sure what kind of queries you want to run, if they need to aggregate data from all tables at once or if you want to look through and run a command on each, but here's something that should get you started. It's a cursor that will find all tables named "transaction". If these are going to be ad-hoc queries, I would just copy the table name output from the below cursor, write a basic query and save it somewhere or make it into a procedure of some sort. Then just modify and re-use it in the future.
DECLARE
  CURSOR c1 IS
    SELECT owner, table_name
      FROM all_tables
     WHERE upper(table_name) = 'TRANSACTION';
BEGIN
  FOR c1rec IN c1
  LOOP
    dbms_output.put_line(c1rec.owner || '.' || c1rec.table_name);
  END LOOP;
END;
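If you would rather have PL/SQL run the aggregate for every schema and print the results, here is a rough sketch built on the same cursor. It assumes the columns from your example (amount, payment_method, result_time_stamp) exist in every copy of the table:
DECLARE
  l_sum NUMBER;
BEGIN
  FOR r IN (SELECT owner, table_name
              FROM all_tables
             WHERE upper(table_name) = 'TRANSACTION')
  LOOP
    -- build and run the query against each schema's copy of the table
    EXECUTE IMMEDIATE
      'SELECT SUM(amount) FROM ' || r.owner || '.' || r.table_name ||
      ' WHERE payment_method = ''transfer''' ||
      ' AND TO_CHAR(result_time_stamp, ''MONTH'') = TO_CHAR(SYSDATE, ''MONTH'')'
      INTO l_sum;
    dbms_output.put_line(r.owner || ': ' || NVL(l_sum, 0));
  END LOOP;
END;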

Simple.Data Default Generated Queries and Performance

I am thinking of using Simple.Data Micro-ORM for my ASP.NET 4.5 website. However, there is something that I need to know before deciding whether to use it or not.
Let's take the following Join query for example:
var albums = db.Albums.FindAllByGenreId(1)
.Select(
db.Albums.Title,
db.Albums.Genre.Name);
This query will be translated to:
select
[dbo].[Albums].[Title],
[dbo].[Genres].[Name]
from [dbo].[Albums]
LEFT JOIN [dbo].[Genres] ON ([dbo].[Genres].[GenreId] = [dbo].[Albums].[GenreId])
WHERE [dbo].[Albums].[GenreId] = @p1
@p1 (Int32) = 1
Let's assume that the Genres table has thousands or even millions of rows. I think it might be very inefficient to filter the data after the JOIN has taken place, which is what this query translates to in Simple.Data.
Would it be better to filter the data first in the Genres table, that is, to run a SELECT on it first and make the JOIN against that filtered result?
Wouldn't it be better to filter the data ahead of time?
Furthermore, is there an option to build that type of complex query (a JOIN on a filtered table) using Simple.Data?
I need your answer to decide whether to proceed with Simple.Data or dump it in favor of another micro-ORM.
You are confused about how SQL is interpreted and executed by the database engine. Modern databases are incredibly smart about the best way to execute queries, and the order in which instructions appear in SQL statements has nothing to do with the order in which they are executed.
Try running some queries through SQL Management Studio and looking at the Execution Plan to see how they are actually optimised and executed. Or just try the SQL you think would work better and see how it actually performs compared to what is generated by Simple.Data.
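For example, in Management Studio you could run both shapes of the query side by side with statistics turned on and compare the plans and I/O (the literal value below just mirrors the example above):
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- the query as Simple.Data generates it
SELECT [dbo].[Albums].[Title], [dbo].[Genres].[Name]
FROM [dbo].[Albums]
LEFT JOIN [dbo].[Genres] ON [dbo].[Genres].[GenreId] = [dbo].[Albums].[GenreId]
WHERE [dbo].[Albums].[GenreId] = 1;

-- the hand-written "filter Genres first" variant
SELECT a.[Title], g.[Name]
FROM (SELECT [GenreId], [Name] FROM [dbo].[Genres] WHERE [GenreId] = 1) g
JOIN [dbo].[Albums] a ON a.[GenreId] = g.[GenreId];
In practice the optimiser usually produces the same plan for both.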
The SQL that Simple.Data is generating is idiomatic T-SQL; to be honest, it's what I would write if I were drafting the SQL myself.
This SQL lets SQL Server optimise the execution plan, which should mean the most efficient retrieval of data.
The beauty of Simple.Data is that if you have any doubts or issues with the sql it generates you can just call a stored proc:
db.ProcedureWithParameters(1, 2);

Basic SQL count with LINQ

I have a trivial issue that I can't resolve. Currently our app uses Linq to retrieve data and get a basic integer value of the row count. I can't form a query that gives back a count without a 'select i'. I don't need the select, just the count(*) response. How do I do this? Below is a sample:
return (from io in db._Owners
        where io.Id == Id && io.userId == userId
        join i in db._Instances on io.Id equals i.Id
        select i).Count();
The select i is fine - it's not actually going to be fetching any data back to the client, because the Count() call will be translated into a Count(something) call at the SQL side.
When in doubt, look at the SQL that's being generated for your query, e.g. with the DataContext.Log property.
Using the LINQ query syntax requires a select statement. There's no way around that.
That being said, the statement will get transformed into a COUNT()-based query; the select i is there only to satisfy the expression system that underlies the LINQ query providers (otherwise the type of the expression would be unknown).
Including the select will not affect performance here, because the final query gets translated into SQL, where it is optimized into something like SELECT COUNT(*) FROM ...
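For illustration, the SQL that ends up at the server is shaped roughly like this (the table names just mirror the entities in the query, not the exact generated text):
SELECT COUNT(*)
FROM [_Owners] AS [t0]
INNER JOIN [_Instances] AS [t1] ON [t0].[Id] = [t1].[Id]
WHERE [t0].[Id] = @p0 AND [t0].[userId] = @p1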

SQL Server 2005 - Pass In Name of Table to be Queried via Parameter

Here's the situation. Due to the design of the database I have to work with, I need to write a stored procedure in such a way that I can pass in the name of the table to be queried against, if at all possible. The program in question does its processing by jobs, and each job gets its own table created in the database, i.e. table-jobid1, table-jobid2, table-jobid3, etc. Unfortunately, there's nothing I can do about this design; I'm stuck with it.
However, now, I need to do data mining against these individualized tables. I'd like to avoid doing the SQL in the code files at all costs if possible. Ideally, I'd like to have a stored procedure similar to:
SELECT *
FROM @TableName AS tbl
WHERE @Filter
Is this even possible in SQL Server 2005? Any help or suggestions would be greatly appreciated. Alternate ways to keep the SQL out of the code behind would be welcome too, if this isn't possible.
Thanks for your time.
The best solution I can think of is to build your SQL inside the stored proc, such as:
DECLARE @query NVARCHAR(MAX);
SET @query = 'SELECT * FROM ' + @TableName + ' AS tbl WHERE ' + @Filter;
EXEC(@query);
Not an ideal solution, probably, but it works.
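If you do go this route, it is worth bracketing the table name with QUOTENAME and executing through sp_executesql, so a bad table name fails to parse instead of being concatenated blindly. A rough sketch (the procedure name is just an example; the filter still has to be plain dynamic text here):
CREATE PROCEDURE dbo.QueryJobTable
(
    @TableName SYSNAME,
    @Filter NVARCHAR(MAX)
)
AS
BEGIN
    DECLARE @query NVARCHAR(MAX);
    -- QUOTENAME brackets the identifier so it cannot break out of the FROM clause
    SET @query = N'SELECT * FROM ' + QUOTENAME(@TableName) + N' AS tbl WHERE ' + @Filter;
    EXEC sys.sp_executesql @query;
END;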
The best answer I can think of is to build a view that unions all the tables together, with an id column in the view telling you where the data in the view came from. Then you can simply pass that id into a stored proc which will go against the view. This is assuming that the tables you are looking at all have identical schema.
example:
create view test1 as
select *, 'tbl1' as src
from [job-1]
union all
select *, 'tbl2' as src
from [job-2]
union all
select *, 'tbl3' as src
from [job-3]
Now you can select * from test1 where src = 'tbl3' and you will only get records from the table job-3
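To keep the SQL out of the code behind entirely, you could then wrap the view in a small proc (the names here just follow the example above):
CREATE PROCEDURE dbo.GetJobRecords
(
    @src VARCHAR(10)
)
AS
BEGIN
    SELECT *
    FROM test1
    WHERE src = @src;
END;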
This would be a meaningless stored proc. Select from some table using some parameters? You are basically defining the entire query again in whatever you are using to call this proc, so you may as well generate the SQL yourself.
The only reason I would write a proc that builds dynamic SQL is if you want to do something you can change without redeploying your codebase.
But in this case, you are just SELECT *'ing. You can't define the columns, WHERE clause, or ORDER BY differently, since you are trying to use it for multiple tables, so there is no meaningful change you could make to it.
In short: it's not even worth doing. Just write your table-specific sprocs, or write your SQL in strings (but make sure it's parameterized) in your code.