Find which instances got terminated via Athena query - amazon-cloudtrail

I am running a query that gives me the list of instances launched in a particular month for my security group.
Let's say: [A, B, C, D]
My goal is to also find which of those instances got terminated, and when.
The issue I am facing is that I don't want to execute my query repeatedly, once per instance, to check whether each instance got terminated or not, like:
SELECT eventname, useridentity.username, eventtime, requestparameters
FROM your_athena_tablename
WHERE (requestparameters like '%instanceid%')
  AND eventtime > '2017-02-15T00:00:00Z'
ORDER BY eventtime ASC;
How can I pass multiple values here?
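For what it's worth, one way to avoid running the query once per instance is a single alternation pattern (a sketch, assuming Presto's regexp_like function, which Athena supports; the instance IDs below are placeholders for A, B, C, D, and TerminateInstances is the EC2 API call CloudTrail records for terminations):
SELECT eventname, useridentity.username, eventtime, requestparameters
FROM your_athena_tablename
WHERE regexp_like(requestparameters, 'i-0aaa1111|i-0bbb2222|i-0ccc3333')
  AND eventname = 'TerminateInstances'
  AND eventtime > '2017-02-15T00:00:00Z'
ORDER BY eventtime ASC;
Each alternative in the regex pattern plays the role of one instance ID, so one query covers the whole list.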

Related

Why does querying with ORDER BY fail in DynamoDB?

I have a query where I ORDER BY the expected_delivery value, which is a number (a UNIX timestamp), but it fails with this error:
ValidationException: Must have at least one non-optional hash key condition in WHERE clause when using ORDER BY clause.
Here is the query:
SELECT * FROM "transactions"."composite_pk_1-index"
WHERE begins_with(composite_pk_1, '0#3#435634652#69992528')
ORDER BY expected_delivery ASC
If I run it without the ORDER BY part, then it runs successfully and returns data:
SELECT * FROM "transactions"."composite_pk_1-index"
WHERE begins_with(composite_pk_1, '0#3#435634652#69992528')
I tried adding other conditions to the query, but it keeps returning the same error. The error message obviously doesn't state what the actual problem is, and I don't get what it is.
Can someone help? I am new to DynamoDB.
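For context, DynamoDB's PartiQL only allows ORDER BY when the WHERE clause pins the index's partition (hash) key with an equality condition - begins_with alone doesn't qualify, which is what the "non-optional hash key condition" wording hints at. A sketch of a form the planner should accept, assuming composite_pk_1 is the partition key of composite_pk_1-index, expected_delivery is that index's sort key, and the full key value is known:
SELECT * FROM "transactions"."composite_pk_1-index"
WHERE composite_pk_1 = '0#3#435634652#69992528'
ORDER BY expected_delivery ASC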

How can I show only the failed result and make the result fail, not pass?

There are 22 records in both tables. Records 1-21 compare as equal, but record 22 differs between the two tables, so they can't be matched. I would like the output to show only the failed record that doesn't compare, and the overall result to be a failure.
Connect To Database Using Custom Params    cx_Oracle    ${DB_CONNECT_STRING}
${queryResults1}=    Query    Select * from QA_USER.SealTest_Security_A Order by SECURITY_ID
Log    ${queryResults1}
${queryResults2}=    Query    Select * from QA_USER.SealTest_Security_B Order by SECURITY_ID
Log    ${queryResults2}
Should Be Equal    ${queryResults1}    ${queryResults2}
Disconnect From Database
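One way to report only the mismatching rows and still fail the test is to compare row by row and continue on failure (a minimal sketch, assuming the Collections library is imported and both result sets have the same number of rows):
${row_count}=    Get Length    ${queryResults1}
FOR    ${index}    IN RANGE    ${row_count}
    ${row_a}=    Get From List    ${queryResults1}    ${index}
    ${row_b}=    Get From List    ${queryResults2}    ${index}
    Run Keyword And Continue On Failure    Should Be Equal    ${row_a}    ${row_b}    Row ${index} differs
END
Run Keyword And Continue On Failure logs each failing row and still marks the test FAIL at the end, while the matching rows produce no noise.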

SQL Code for Running Total does not recognise table name

I have a question on creating running totals in MS Access 2010 similar to the one here:
Access 2010 - query showing running total for multiple records, dropping old record and adding new record on each line
However, when I input the equivalent code from that thread, I get an error saying that the database cannot be found (Access seems to think the table I have specified is the database name).
Here is the code from the original thread:-
SELECT hbep1.EmployeeID, hbep1.PayPeriodID,
(
SELECT Sum(hbep2.HoursUsed)
FROM Hours_by_Empl_PP hbep2
WHERE hbep2.EmployeeID=hbep1.EmployeeID
AND (hbep2.PayPeriodID Between hbep1.[PayPeriodID]-3
And hbep1.[PayPeriodID])
) AS Sum_of_Hours_last_4_PPs
FROM Hours_by_Empl_PP hbep1;
Here is the code I inputted into my query:-
SELECT
V4_Try.ID_NIS_INV_HDR,
V4_Try.ID_ITM,
V4_Try.RunTot3,
V4_Try.BomVsActQty,
DMin("RunTot3","V4_Try","[ID_Itm]=" & [ID_ITM]) AS IDItmMin,
DMax("RunTot3","V4_Try","[ID_Itm]=" & [ID_ITM]) AS IDItmMax,
(
SELECT Sum([V4_Try].[BomVsActQty])
FROM [V4_Try].[BomVsActQty]
WHERE [V4_Try].[ID_ITM]=[V4_Try].[ID_ITM]
AND (IDItmMax < IDItmMin)
) AS RunTot6
FROM V4_Try
ORDER BY V4_Try.ID_ITM, V4_Try.RunTot3;
One thing I notice is that the main query uses DMax() and DMin() to create some aliased columns
...
DMin("RunTot3","V4_Try","[ID_Itm]=" & [ID_ITM]) AS IDItmMin,
DMax("RunTot3","V4_Try","[ID_Itm]=" & [ID_ITM]) AS IDItmMax,
...
and then the subquery tries to use those aliases in its WHERE clause
(
SELECT ...
WHERE...
AND (IDItmMax < IDItmMin)
) AS RunTot6
I'm pretty sure that the subquery will have no knowledge of the column aliases in the "parent" query, so they may be the items that are unrecognized. Note also that the subquery's FROM clause names a column rather than a table (FROM [V4_Try].[BomVsActQty]); Access parses a dotted name in a FROM clause as database.table, which would explain why it complains that the database cannot be found.
Start by running this query:
SELECT * FROM V4_Try;
Then build the complexity up gradually: get the nested query working before anything else. Once you know that runs, try adding your aliases, then the DMax() and DMin() functions, and so on. Isolate the point at which the error appears.
This is the process for debugging a query.
Oh, and please specify the precise error that Access raises. Also, if this is being run from VBA, please let us know, because that affects your troubleshooting.
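For reference, a hedged sketch of what the corrected query might look like once the subquery selects FROM the table rather than from a column, and correlates through its own aliases instead of the outer query's DMin()/DMax() aliases (assuming RunTot3 orders the rows within each ID_ITM; all names come from the question):
SELECT
    t1.ID_NIS_INV_HDR,
    t1.ID_ITM,
    t1.RunTot3,
    t1.BomVsActQty,
    (
        SELECT Sum(t2.BomVsActQty)
        FROM V4_Try AS t2
        WHERE t2.ID_ITM = t1.ID_ITM
          AND t2.RunTot3 <= t1.RunTot3
    ) AS RunTot6
FROM V4_Try AS t1
ORDER BY t1.ID_ITM, t1.RunTot3;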

Column 'AuctionStatus' cannot be used in an IF UPDATE clause because it is a computed column

I am developing an auction site in ASP.NET 3.5 and SQL Server 2008 R2. My database has an Auction table with a computed column "AuctionStatus":
(case when [EndDateTime] < getdate() then '0' else '1' end)
It marks the auction as Active or Inactive based on the end date.
Now I want to call a stored procedure that sends email notifications to buyers and sellers as soon as AuctionStatus becomes '0'. For that I tried to create an AFTER UPDATE trigger that could call the email notification stored procedure, but I am not able to do so.
I am getting the following error message:
Msg 2114, Level 16, State 1, Procedure trgAuctionEmailNotification,
Line 6 Column 'AuctionStatus' cannot be used in an IF UPDATE clause
because it is a computed column.
The trigger is:
CREATE TRIGGER trgAuctionEmailNotification ON SE_Auctions
AFTER UPDATE
AS
BEGIN
IF (UPDATE (AuctionStatus))
BEGIN
IF EXISTS (SELECT * FROM inserted WHERE currentbidderid > 0
AND AuctionStatus='0' )
BEGIN
DECLARE @ID int
SELECT @ID = AuctionID from inserted
EXEC spSelectSE_AuctionsByAuctionID @ID
END
END
END
You could just replace AuctionStatus with the corresponding expression:
IF EXISTS (SELECT * FROM inserted WHERE currentbidderid > 0 AND [EndDateTime] < getdate() )
But the point is, I don't see how your trigger will ever be "triggered", as [AuctionStatus] is never actually updated; its value is just calculated whenever you need it.
You could go for a SQL Agent job that runs every x minutes and sends a notification for each auction that ended during the last x minutes.
You need to add a real column containing a flag to indicate whether the notifications have been sent, and then implement a polling technique to scan the table for rows where the status is inactive and notifications haven't been sent.
The computed column doesn't really transition from one state to another, so it's not as if an UPDATE has occurred. Even if SQL Server did implement this, it would be hideously expensive, since it would have to query the entire table for transitioning rows every 3 ms (or even more frequently if you're using datetime2 with a higher precision).
Whereas you can pick a suitable polling interval yourself. This could be an SQL agent job, or in some service code somewhere, whatever best fits the rest of your architecture.
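A minimal sketch of that polling approach (NotificationSent is a hypothetical flag column you would add to SE_Auctions; the stored procedure name comes from the question):
-- One-time schema change (hypothetical column):
-- ALTER TABLE SE_Auctions ADD NotificationSent bit NOT NULL DEFAULT 0;
DECLARE @ID int;
SELECT TOP (1) @ID = AuctionID
FROM SE_Auctions
WHERE EndDateTime < GETDATE()
  AND currentbidderid > 0
  AND NotificationSent = 0;
WHILE @ID IS NOT NULL
BEGIN
    EXEC spSelectSE_AuctionsByAuctionID @ID;  -- sends the notifications
    UPDATE SE_Auctions SET NotificationSent = 1 WHERE AuctionID = @ID;
    SET @ID = NULL;
    SELECT TOP (1) @ID = AuctionID
    FROM SE_Auctions
    WHERE EndDateTime < GETDATE()
      AND currentbidderid > 0
      AND NotificationSent = 0;
END
Schedule that as the body of a SQL Agent job at whatever interval suits your notification latency.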

How to find out which package/procedure is updating a table?

I would like to find out whether it is possible to determine which package, or which procedure within a package, is updating a table.
A certain project was handed over without proper documentation (the person who handed it over has since left), and data that we know we have updated always reverts to some strange source value.
We are guessing that this could be a database job or scheduler that is running the update command without our knowledge. I am hoping there is a way to find out where the updating code is being called from, perhaps by putting a trigger on the table we are monitoring that records the source.
Any ideas?
Thanks.
UPDATE: I poked around and found out how to trace a statement back to its owning PL/SQL object.
In combination with what Tony mentioned, you can create a logging table and a trigger that looks like this:
CREATE TABLE statement_tracker
( SID NUMBER
, serial# NUMBER
, date_run DATE
, program VARCHAR2(48) null
, module VARCHAR2(48) null
, machine VARCHAR2(64) null
, osuser VARCHAR2(30) null
, sql_text CLOB null
, program_id number
);
CREATE OR REPLACE TRIGGER smb_t_t
AFTER UPDATE
ON smb_test
BEGIN
INSERT
INTO statement_tracker
SELECT ss.SID
, ss.serial#
, sysdate
, ss.program
, ss.module
, ss.machine
, ss.osuser
, sq.sql_fulltext
, sq.program_id
FROM v$session ss
, v$sql sq
WHERE ss.sql_address = sq.address
AND ss.SID = USERENV('sid');
END;
/
In order for the trigger above to compile, you'll need to grant the owner of the trigger these permissions, when logged in as the SYS user:
grant select on V_$SESSION to <user>;
grant select on V_$SQL to <user>;
You will likely want to protect the insert statement in the trigger with some condition that only makes it log when the change you're interested in is occurring - on my test server this statement runs rather slowly (1 second), so I wouldn't want to be logging all these updates. Of course, in that case, you'd need to change the trigger to be a row-level one so that you could inspect the :new or :old values. If you are really concerned about the overhead of the select, you can change it to not join against v$sql, and instead just save the SQL_ADDRESS column, then schedule a job with DBMS_JOB to go off and update the sql_text column with a second update statement, thereby offloading the update into another session and not blocking your original update.
Unfortunately, this will only tell you half the story. The statement you're going to see logged is going to be the most proximal statement - in this case, an update - even if the original statement executed by the process that initiated it is a stored procedure. This is where the program_id column comes in. If the update statement is part of a procedure or trigger, program_id will point to the object_id of the code in question - you can resolve it thusly:
SELECT * FROM all_objects where object_id = <program_id>;
In the case where the update statement was executed directly from the client, I don't know what program_id represents, but you wouldn't need it - you'd have the name of the executable in the "program" column of statement_tracker. If the update was executed from an anonymous PL/SQL block, I'm not sure how to track it back - you'll need to experiment further.
It may be, though, that the osuser/machine/program/module information may be enough to get you pointed in the right direction.
If it is a scheduled database job, then you can find out what scheduled database jobs exist and look into what they do (see the sketch at the end of this answer). Other things you can do are:
Look at the dependency views, e.g. ALL_DEPENDENCIES, to see what packages/triggers etc. use that table. Depending on the size of your system, that may return a lot of objects to trawl through.
Search all the database source code for references to the table like this:
select distinct type, name
from all_source
where lower(text) like lower('%mytable%');
Again that may return a lot of objects, and of course there will be some "false positives" where the search string appears but isn't actually a reference to that table. You could even try something more specific like:
select distinct type, name
from all_source
where lower(text) like lower('%insert into mytable%');
but of course that would miss cases where the command was formatted differently.
Additionally, could there be SQL scripts being run through "cron" jobs on the server?
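To enumerate the scheduled database jobs mentioned at the start of this answer, queries along these lines should work (a sketch; the DBA_ views need elevated privileges, so fall back to the ALL_ or USER_ variants if necessary):
-- Jobs in the newer scheduler
select job_name, job_action from dba_scheduler_jobs;
-- Jobs in the legacy DBMS_JOB queue
select job, what from dba_jobs;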
Just write an "after update" trigger and, in this trigger, log the results of "DBMS_UTILITY.FORMAT_CALL_STACK" in a dedicated table.
The purpose of this function is precisely to give you the complete call stack of all the stored procedures and triggers that were fired to reach your code.
I am writing from a mobile app, so I can't give you more detailed examples, but if you Google for it you'll find many of them.
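A minimal sketch of that idea (the table and trigger names are illustrative; table_of_interest stands in for the table being monitored):
CREATE TABLE call_stack_log
( logged_at   DATE
, call_stack  VARCHAR2(4000)
);
CREATE OR REPLACE TRIGGER trg_log_call_stack
AFTER UPDATE ON table_of_interest
BEGIN
  -- FORMAT_CALL_STACK returns a formatted string naming each PL/SQL unit
  -- and line number on the call stack at the moment the trigger fires
  INSERT INTO call_stack_log (logged_at, call_stack)
  VALUES (SYSDATE, SUBSTR(DBMS_UTILITY.FORMAT_CALL_STACK, 1, 4000));
END;
/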
A quick and dirty option if you're working locally, and are only interested in the first thing that's altering the data, is to throw an error in the trigger instead of logging. That way you get the usual stack trace, it's a lot less typing, and you don't need to create a new table:
CREATE OR REPLACE TRIGGER tr_fail_on_update  -- trigger name is illustrative
AFTER UPDATE ON table_of_interest
BEGIN
  RAISE_APPLICATION_ERROR(-20001, 'something changed it');
END;
/
