I have 1 million records that I was uploading into a database table using PuTTY, but only 0.5 million records were uploaded before I got an error message. How do I identify which 0.5 million records were uploaded and which were not?
Flashback Query. Find all rows, then subtract the rows that were already there, e.g., 1 day ago. (This works with Oracle 9i and up; for older versions you'll have to analyze the logs.)
select * from emp
minus
select * from emp
as of timestamp sysdate - 1
The flashback query
select * from emp
as of timestamp sysdate - 1
gives you the result as it would have been (in this case) 1 day ago; pick a timestamp just before the first row was loaded. This is the data in the table from before the failed load. Then subtract those rows from the actual table and, voilà, you've got all rows inserted since that time.
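To make this concrete: if you know roughly when the failed load began, flash back to just before that moment instead of a fixed one-day offset. A minimal sketch, assuming a hypothetical load start shortly after 09:00 on 2021-06-01 and that the undo retention still covers that time:
SELECT * FROM emp
MINUS
SELECT * FROM emp
  AS OF TIMESTAMP TO_TIMESTAMP('2021-06-01 08:55:00', 'YYYY-MM-DD HH24:MI:SS');
The result is exactly the set of rows inserted after that timestamp, i.e. the rows that made it in before the load failed (assuming no other sessions were inserting into the table at the same time).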
And by the way, unless you are interviewing for a senior position, you are not expected to know this.
Related
I have a table like this:
Assume there are lots of names (i.e., E, F, G, H, I, etc.) with their respective Date and Produced Items in this table. It's a massive table, so I want to write an optimised query.
From this table, I want to query the latest A, B, C, and D records.
I was using the following query:
SELECT * FROM c WHERE c.Name IN ('A','B','C','D') ORDER BY c.Date DESC OFFSET 0 LIMIT 4
But the problem with this query is that, since I'm ordering by Date, the latest 4 records I get are:
I want to get this result:
Please help me in modifying the query. Thanks.
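What you describe is the classic "latest row per group" pattern. In a SQL dialect that supports window functions (your OFFSET ... LIMIT syntax suggests a document-store SQL flavor, which may need a different approach), a hedged sketch looks like this, with rn a name introduced purely for illustration:
SELECT *
FROM (
    SELECT t.*,
           ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Date DESC) AS rn
    FROM c AS t
    WHERE Name IN ('A', 'B', 'C', 'D')
) ranked
WHERE rn = 1
This returns one row per name, each being that name's most recent record, instead of the four most recent rows overall.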
I caught a nice and simple bug in my "next data" request with SQLite.
I'm requesting rows with a limit of 15 and before date X.
The problem/bug is that if row 16 (ordered by date) has the same date as row 15 (the last row in the request),
then the next time I call my "next" request with limit 15 and before date X (the date of the last row from the previous request), it will skip row 16.
Now, I know I can request rows before and equal to that date and check whether I already have a row,
but I wonder if there is a magic word in SQLite, and maybe it's my lucky day, so I can say to SQLite:
"Hey, I need the next 15 rows ordered by date, but don't stop if row 16 (and those after it) have the same date"?
What are others doing in this situation?
I prefer using the date as the "cursor" rather than the rowid, in case I delete and insert rows during app usage.
UPDATE:
This is my SQL for the next request:
SELECT * FROM feedItems WHERE object_date < 1600954500 ORDER BY object_date DESC LIMIT 15
What I want: as @forpas said in the comments, I don't want strictly 15 rows if row 16 (and those after it) have the same date (a tie).
If your version of SQLite supports DENSE_RANK (window functions are available from SQLite 3.25.0 onward), then you can rank the rows by date and filter on the rank:
WITH cte AS (
    SELECT *, DENSE_RANK() OVER (ORDER BY object_date DESC) dr
    FROM feedItems
    WHERE object_date < 1600954500
)
SELECT *
FROM cte
WHERE dr <= 15;
This approach retains the rows for the 15 most recent dates occurring before the given date, possibly returning more than 15 records in the event that the same date occurs more than once.
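On an older SQLite without window functions, a subquery variant of the same idea is possible: look up the date of the 15th row and keep everything that ties with it. A sketch, under the assumption that object_date is a non-negative Unix timestamp:
SELECT *
FROM feedItems
WHERE object_date < 1600954500
  AND object_date >= COALESCE(
        (SELECT object_date
         FROM feedItems
         WHERE object_date < 1600954500
         ORDER BY object_date DESC
         LIMIT 1 OFFSET 14),  -- the date of the 15th row, if one exists
        0)                    -- fewer than 15 rows: keep them all
ORDER BY object_date DESC;
Unlike the DENSE_RANK version, this returns 15 rows plus any ties on the boundary date, rather than all rows for the 15 most recent distinct dates.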
I was thinking about how to get better performance when showing data from a table with thousands of rows, and whether I could split a SELECT into parts.
For example, I have a Repeater in ASP.NET that only shows 10 rows at a time. I want to select only 10 rows from the table; on the next page it selects the next 10 rows, and so on.
The problem is that I can't find anything to give me a head start on this, and I was hoping someone with knowledge of it could refer me to some good starting points. Thank you.
Try this sample SQL script.
First it selects only 10 rows from the table; on the next page it selects the next 10 rows, and so on.
DECLARE @i_PageIndex INT = 1, -- change the page index to 1, 2, ... and you will see the exact difference
        @i_PageSize INT = 10

SELECT COUNT(1) OVER () AS recordCnt,
       ROW_NUMBER() OVER (ORDER BY TABLE_NAME) AS Seq,
       *
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY ROW_NUMBER() OVER (ORDER BY TABLE_NAME)
OFFSET (COALESCE(@i_PageIndex, 1) - 1) * @i_PageSize ROWS FETCH NEXT @i_PageSize ROWS ONLY
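Applied to an application table, the same OFFSET ... FETCH pattern (available from SQL Server 2012 onward) might look like the sketch below; Products, Id, and Name are hypothetical stand-ins for your own table and columns:
DECLARE @PageIndex INT = 2, -- second page
        @PageSize INT = 10

SELECT Id, Name
FROM Products
ORDER BY Id -- paging needs a deterministic ORDER BY
OFFSET (@PageIndex - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY
Bind @PageIndex to the Repeater's current page number and re-run the query each time the user pages.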
I have an SQLite database with a table that logs electric power values over time, i.e., there is a timestamp column and one for the associated power value.
With a value coming in roughly every second, this table grows significantly over time, which is why I want to thin out old values, for example by replacing all 60 values within a minute with their average.
I know how to query for the average.
I know how to insert the query's result back into the table.
But how do I delete the original values without also deleting the newly inserted average value (which has a timestamp within the same range)?
Note that I would like to perform the operation entirely in SQLite's query language, i.e., without storing, for example, row IDs in the C code that is executing the queries.
The easiest way would be to use a temporary table:
BEGIN;
CREATE TEMP TABLE Averages AS
    SELECT MIN(Timestamp) AS Timestamp, -- keep the earliest timestamp of each group
           AVG(Value) AS Value
    FROM MyTable
    WHERE (old)        -- your condition for "old enough to thin out"
    GROUP BY (minute); -- an expression that groups timestamps into minutes
DELETE FROM MyTable WHERE (old);
INSERT INTO MyTable(Timestamp, Value) SELECT Timestamp, Value FROM Averages;
DROP TABLE Averages;
COMMIT;
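A concrete instantiation, under the assumptions that Timestamp is a Unix epoch in seconds and that "old" means more than 7 days ago (adjust both to your schema):
BEGIN;
CREATE TEMP TABLE Averages AS
    SELECT MIN(Timestamp) AS Timestamp,
           AVG(Value) AS Value
    FROM MyTable
    WHERE Timestamp < CAST(strftime('%s', 'now', '-7 days') AS INTEGER)
    GROUP BY Timestamp / 60; -- integer division buckets seconds into minutes
DELETE FROM MyTable
WHERE Timestamp < CAST(strftime('%s', 'now', '-7 days') AS INTEGER);
INSERT INTO MyTable(Timestamp, Value) SELECT Timestamp, Value FROM Averages;
DROP TABLE Averages;
COMMIT;
If strict consistency matters, compute the cutoff once in the calling code and bind the same value into both statements, so the two 'now' evaluations cannot straddle a second boundary.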
I'm trying to create a report for our support ticketing system, and I want two results in the report that show a rolling average of how many tickets were opened per day and how many were closed per day.
Basically: query the entire tickets table, separate everything out by the individual days the tickets were created on, count the number of tickets for each individual day, then average those counts.
My friend gave me this query:
SELECT AVG(ticket_count)
FROM (SELECT COUNT(*) AS ticket_count FROM tickets
GROUP BY DATE(created_at, '%Y'), DATE(created_at, '%m'), DATE(created_at, '%d')) AS ticket_part
But it doesn't seem to work for me. All I get is a single row with the number of tickets created last year.
Here's what finally worked for me:
SELECT round(CAST(AVG(TicketsOpened) AS REAL), 1) as DailyOpenAvg
FROM
(SELECT date(created_at) as Day, COUNT(*) as TicketsOpened
FROM tickets
GROUP BY date(created_at)
) AS X
The middle part of your query is collapsing the table to a single row, so the outer part has nothing left to group. It's hard to say exactly what you need without seeing the schema for ticket_count, but at a guess I'd try this:
SELECT
AVG(CAST(TicketsOpened AS REAL)) -- The cast to REAL ensures that { 1, 2 } averages to 1.5 rather than 1
FROM
(
SELECT
CAST(created_at AS DATE) AS Day, -- The cast to DATE truncates any time element; if you're storing the date alone, you can omit this
COUNT(*) AS TicketsOpened
FROM
ticket_count
GROUP BY
CAST(created_at AS DATE)
) AS X
Hope that helps!
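For the "closed per day" half of the report, the same shape works against the closing timestamp. A sketch in the style of the query that worked above, assuming a hypothetical closed_at column that is NULL while a ticket is still open:
SELECT round(CAST(AVG(TicketsClosed) AS REAL), 1) AS DailyCloseAvg
FROM
(SELECT date(closed_at) AS Day, COUNT(*) AS TicketsClosed
 FROM tickets
 WHERE closed_at IS NOT NULL
 GROUP BY date(closed_at)
) AS X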