Teradata abort session that makes the DBA sleep better at night

Since EXECUTE on Syslib.AbortSessions allows the grantee to abort practically every session on the server, I wonder if the macro below might be a more sensible alternative from the perspective of a DBA granting these rights to power accounts:
create macro Dbc.mcAbort ( sess_id number )
as
(
Select Syslib.AbortSessions( '-1', USER, :sess_id, 'Y', 'Y' );
);
grant exec on Dbc.mcAbort to grantee_id;
Curious if this could work as designed.
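By way of illustration, the grantee would then abort a given session only through the macro, along these lines (the session number is made up):
exec Dbc.mcAbort(123456);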

Related

How to recover queries with Teradata?

I am submitting queries with a VBScript (NOT SQL Assistant). I accidentally deleted that VBScript file. How can I recover the queries that I submitted? Where are they stored in Teradata?
Most TD systems enable the Database Query Log (DBQL), so there's a high probability that your SQL was captured. If you have access to it, you might try:
SELECT * FROM dbc.QryLogV
WHERE UserName = USER;
But even if this works, you might still not find the required queries, as all data is regularly moved from the DBQL base tables to a history database (probably daily). So you'd better contact your DBA and ask for assistance :-)
If QueryText in dbc.QryLogV is empty (or contains only partial text), you can check dbc.QryLogSQLV (hopefully it's populated):
SELECT * FROM dbc.QryLogSQLV
WHERE QueryId IN
(
SELECT QueryId FROM dbc.QryLogV
WHERE UserName = USER
-- AND some more conditions to find the correct queries
);
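As a sketch of the kind of extra conditions that help narrow it down (column names as in the standard DBQL views; the date and the search text are only illustrative):
SELECT QueryId, StartTime, QueryText
FROM dbc.QryLogV
WHERE UserName = USER
AND StartTime >= TIMESTAMP '2013-10-01 00:00:00'
AND QueryText LIKE '%mytable%';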

Why are records added so slowly?

I'm reading records from SQL Server 2005 and writing the returned recordset to SQLite with the following piece of code.
My compiler is Lazarus 1.0.12; Qt1 is an SQLQuery and qryStkMas is a ZTable from Zeos DBO.
The operation is quite slow. The test timings are:
start time: 15:47:11
finish time: 16:19:04
record count: 19,500
With a SQL Server and SQL Server CE pair, the same transfer takes less than 2-3 minutes in a Delphi project.
How can I speed up this process?
Code:
Label2.Caption:=TimeToStr(Time);
if Dm.Qt1.Active then Dm.Qt1.Close;
Dm.Qt1.SQL.Clear;
Dm.Qt1.SQL.Add(' select ');
Dm.Qt1.SQL.Add(' st.sto_kod, st.sto_isim,st.sto_birim1_ad, ');
Dm.Qt1.SQL.Add(' st.sto_toptan_vergi,st.sto_perakende_vergi,');
Dm.Qt1.SQL.Add(' st.sto_max_stok,st.sto_min_stok, ');
Dm.Qt1.SQL.Add(' sba.bar_kodu, ');
Dm.Qt1.SQL.Add(' stf.sfiyat_fiyati ');
Dm.Qt1.SQL.Add(' from MikroDB_V14_DEKOR2011.dbo.STOKLAR st ');
Dm.Qt1.SQL.Add(' left JOIN MikroDB_V14_DEKOR2011.dbo.BARKOD_TANIMLARI sba on sba.bar_stokkodu=st.sto_kod ');
Dm.Qt1.SQL.Add(' left JOIN MikroDB_V14_DEKOR2011.dbo.STOK_SATIS_FIYAT_LISTELERI stf on stf.sfiyat_stokkod=st.sto_kod ');
Dm.Qt1.SQL.Add(' where LEFT(st.sto_kod,1)=''5'' --and stf.sfiyat_listesirano=1 ');
Dm.Qt1.Open;
Dm.qryStkMas.Open;
Dm.qrystkmas.First;
While not Dm.Qt1.EOF do
begin
Dm.qryStkMas.Append;
Dm.qryStkMas.FieldByName('StkKod').AsString :=Dm.Qt1.FieldByName('sto_kod').AsString;
Dm.qryStkMas.FieldByName('StkAd').AsString :=Dm.Qt1.FieldByName('sto_isim').AsString;
Dm.qryStkMas.FieldByName('StkBrm').AsString :=Dm.Qt1.FieldByName('sto_birim1_ad').AsString;
Dm.qryStkMas.FieldByName('StkBar').AsString :=Dm.Qt1.FieldByName('bar_kodu').AsString;
Dm.qryStkMas.FieldByName('StkKdv1').AsFloat :=Dm.Qt1.FieldByName('sto_toptan_vergi').AsFloat;
Dm.qryStkMas.FieldByName('StkKdv2').AsFloat :=Dm.Qt1.FieldByName('sto_perakende_vergi').AsFloat;
Dm.qryStkMas.FieldByName('StkGir').AsFloat :=0;
Dm.qryStkMas.FieldByName('StkCik').AsFloat :=0;
Dm.qryStkMas.FieldByName('YeniStk').AsBoolean :=False;
Dm.qryStkMas.FieldByName('MinStk').AsFloat :=Dm.Qt1.FieldByName('sto_min_stok').AsFloat;
Dm.qryStkMas.FieldByName('MaxStk').AsFloat :=Dm.Qt1.FieldByName('sto_max_stok').AsFloat;
Dm.qryStkMas.FieldByName('StkGrp1').AsString:='';
Dm.qryStkMas.FieldByName('StkGrp2').AsString:='';
Dm.qryStkMas.FieldByName('StkGrp3').AsString:='';
Dm.qryStkMas.FieldByName('StkGrp4').AsString:='';
Dm.qryStkMas.FieldByName('StkFytno').AsInteger:=1;
Label1.Caption:=Dm.Qt1.FieldByName('sto_isim').AsString;
Dm.qryStkMas.Post;
Dm.Qt1.Next;
end;
Dm.qryStkMas.Close;
label3.Caption:=timetostr(time);
The first step in speeding things up is diagnosis.
MEASURING
You can obviously measure by splitting up the select and the insert, but you can also get some diagnostics out of SQL itself.
If you prefix your query with the keyword EXPLAIN in SQLite, it will tell you which indexes are used and how the statement is handled internally; see here: http://www.sqlite.org/eqp.html
This is invaluable info for optimizing.
In MS SQL Server you go into the GUI, put in the query and click on the estimated query plan button; see: what is the equivalent of EXPLAIN from SQLite in SQL Server?
What's taking the most time? Is the select slow, or the insert?
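For the SQLite side, the EXPLAIN QUERY PLAN prefix mentioned above is simply put in front of the statement; a sketch (table and column names here are only illustrative):
EXPLAIN QUERY PLAN
SELECT StkKod, StkAd FROM StkMas WHERE StkKod LIKE '5%';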
SELECT
Selects are usually sped up by putting indexes on the fields that are evaluated.
In your case that means the fields involved in the join criteria.
The field in the where clause is wrapped in a function, and MS SQL Server has no function-based indexes the way PostgreSQL and Oracle do (the closest it offers is an index on a computed column), so that predicate cannot use a plain index on sto_kod as written; see the rewrite sketched below.
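One simple option is to make the predicate sargable by replacing the LEFT() call with a LIKE prefix match, which an index on sto_kod can support (the rest of the query stays the same):
-- instead of: where LEFT(st.sto_kod,1) = '5'
where st.sto_kod like '5%'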
INSERT
Inserts are sped up by disabling indexes.
One common trick is to disable all indexing prior to the insert batch and re-enable it after the insert batch is done.
This is usually more efficient because it's faster (per item) to sort the whole lot in one go than to keep re-sorting after each individual insert.
You can also disable transaction safeguards.
This can corrupt your data in case of a power/disk failure, so consider yourself warned; see here: Improve large data import performance into SQLite with C#. A sketch of the relevant SQLite settings follows below.
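On the SQLite side, relaxing those safeguards boils down to a couple of PRAGMA statements issued on the connection before the batch (with the warning above in mind):
PRAGMA synchronous = OFF;     -- don't wait for each write to be confirmed on disk
PRAGMA journal_mode = MEMORY; -- keep the rollback journal in RAM instead of on disk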
Comments on the code
You select data using an SQL select statement; however, you insert using the dataset's Append and FieldByName() methods. FieldByName is notoriously slow because it does a name lookup every time.
FieldByName should never be used in a loop.
Construct an insert SQL statement instead.
Remember that you can use parameters, or even hard-code the values in there.
Do some experimentation to see which is faster.
About.com has a nice article on how to speed up database access by eliminating FieldByName: http://delphi.about.com/od/database/ss/faster-fieldbyname-delphi-database.htm
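A parameterized insert for the SQLite side might look like the sketch below; the target table name StkMas is an assumption, and the column names are taken from the FieldByName calls above. Prepare it once and bind the parameters per row:
INSERT INTO StkMas
  (StkKod, StkAd, StkBrm, StkBar, StkKdv1, StkKdv2, StkGir, StkCik,
   YeniStk, MinStk, MaxStk, StkGrp1, StkGrp2, StkGrp3, StkGrp4, StkFytno)
VALUES
  (:StkKod, :StkAd, :StkBrm, :StkBar, :StkKdv1, :StkKdv2, 0, 0,
   0, :MinStk, :MaxStk, '', '', '', '', 1);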
Did you try wrapping your insertions in a transaction? You would need to BEGIN a transaction before your While... and COMMIT it after the ...End. Try it; it might help.
Edit: If you get an improvement, that would be because your database connection to SQLite is set up in "autocommit" mode, where every operation (such as your .Append/.Post) is done independently of all the others, and SQLite is smart enough to ensure the ACID properties of the database. This means that for every write operation you make, the database will make one or more writes to your hard drive, which is slow. By explicitly creating a transaction (which turns off autocommit), you group write operations together, and the database can issue a much smaller number of writes to the hard drive when you explicitly commit the transaction.
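At the SQL level the pattern is simply the sketch below; how you actually issue it depends on your connection component (e.g. a StartTransaction/Commit pair on the connection object):
BEGIN TRANSACTION;
-- ... run the whole Append/Post loop here ...
COMMIT;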

System.Web.Providers.DefaultMembershipProvider having performance issues/deadlocks

We have started to use the updated System.Web.Providers provided in the Microsoft.AspNet.Providers.Core package from NuGet. We started to migrate our existing users and found performance slowing and then deadlocks occurring. This was with less than 30,000 users (much less than the 1,000,000+ we need to create). When we were calling the provider, it was from multiple threads on each server and there were multiple servers running this same process. This was to be able to create all the users we required as quickly as possible and to simulate the load we expect to see when it goes live.
The logs SQL Server generated for a deadlock contained the EF-generated SQL below:
SELECT
[Limit1].[UserId] AS [UserId]
, [Limit1].[ApplicationId] AS [ApplicationId]
, [Limit1].[UserName] AS [UserName]
, [Limit1].[IsAnonymous] AS [IsAnonymous]
, [Limit1].[LastActivityDate] AS [LastActivityDate]
FROM
(SELECT TOP (1)
[Extent1].[UserId] AS [UserId]
, [Extent1].[ApplicationId] AS [ApplicationId]
, [Extent1].[UserName] AS [UserName]
, [Extent1].[IsAnonymous] AS [IsAnonymous]
, [Extent1].[LastActivityDate] AS [LastActivityDate]
FROM
[dbo].[Users] AS [Extent1]
INNER JOIN [dbo].[Applications] AS [Extent2] ON [Extent1].[ApplicationId] = [Extent2].[ApplicationId]
WHERE
((LOWER([Extent2].[ApplicationName])) = (LOWER(@p__linq__0)))
AND ((LOWER([Extent1].[UserName])) = (LOWER(@p__linq__1)))
) AS [Limit1]
We ran the query manually and the execution plan said that it was performing a table scan even though there was an underlying index. The reason for this is the use of LOWER([Extent1].[UserName]).
We looked at the provider code to see if we were doing something wrong or if there was a way to either intercept or replace the database access code. We didn't see any options to do this, but we did find the source of the LOWER issue: .ToLower() is being called on both the column and the parameter.
return (from u in ctx.Users
join a in ctx.Applications on u.ApplicationId equals a.ApplicationId into a
where (a.ApplicationName.ToLower() == applicationName.ToLower()) && (u.UserName.ToLower() == userName.ToLower())
select u).FirstOrDefault<User>();
Does anyone know of a way that we change the behaviour of the provider to not use .ToLower() so allowing the index to be used?
You can create an index on lower(username) per Sql Server : Lower function on Indexed Column
ALTER TABLE dbo.users ADD LowerFieldName AS LOWER(username) PERSISTED
CREATE NONCLUSTERED INDEX IX_users_LowerFieldName_ ON dbo.users(LowerFieldName)
I was using the System.Web.Providers.DefaultMembershipProvider membership provider too but found that it was really slow. I changed to the System.Web.Security.SqlMembershipProvider and found it to be much faster (>5 times faster).
This tutorial shows you how to set up the SQL database that you need to use the SqlMembershipProvider http://weblogs.asp.net/sukumarraju/archive/2009/10/02/installing-asp-net-membership-services-database-in-sql-server-expreess.aspx
The database that is auto-generated uses stored procedures, which may or may not be an issue for your DB guys.

Slow query when connecting to linked server

I've got this query
UPDATE linkeddb...table SET field1 = 'Y' WHERE column1 = '1234'
This takes 23 seconds to select and update one row.
But if I use openquery (which I don't want to) then it only takes half a second.
The reason I don't want to use openquery is so I can add parameters to my query securely and be safe from SQL injections.
Does anyone know of any reason for it to be running so slowly?
Here's a thought as an alternative. Create a stored procedure on the remote server to perform the update and then call that procedure from your local instance.
/* On remote server */
create procedure UpdateTable
@field1 char(1),
@column1 varchar(50)
as
update table
set field1 = @field1
where column1 = @column1
go
/* On local server */
exec linkeddb...UpdateTable @field1 = 'Y', @column1 = '1234'
If you're looking for the why, here's a possibility from Linchi Shea's Blog:
To create the best query plans when you are using a table on a linked server, the query processor must have data distribution statistics from the linked server. Users that have limited permissions on any columns of the table might not have sufficient permissions to obtain all the useful statistics, and might receive a less efficient query plan and experience poor performance. If the linked server is an instance of SQL Server, to obtain all available statistics, the user must own the table or be a member of the sysadmin fixed server role, the db_owner fixed database role, or the db_ddladmin fixed database role on the linked server.
(Because of Linchi's post, this clarification has been added to the latest BooksOnline SQL documentation).
In other words, if the linked server is set up with a user that has limited permissions, then SQL can't retrieve accurate statistics for the table and might choose a poor method for executing a query, including retrieving all rows.
Here's a related SO question about linked server query performance. Their conclusion was: use OpenQuery for best performance.
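If the objection to OPENQUERY is parameterization, note that SQL Server also supports parameterized pass-through execution with EXEC ... AT (it requires "RPC Out" to be enabled on the linked server). A sketch, reusing the placeholder table and column names from the question:
EXEC ('UPDATE table SET field1 = ? WHERE column1 = ?', 'Y', '1234') AT linkeddb;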
Update: some additional excellent posts about linked server performance from Linchi's blog.
Is column1 the primary key? Probably not. Try to select the records to update using the primary key (WHERE pk_field = xxx); otherwise (at least sometimes) all rows may have to be read just to find the ones to update.
Is column1 a varchar field? Is that why you are surrounding the value 1234 with single quotation marks? Or is that simply a typo in your question?

How to find out which package/procedure is updating a table?

Is it possible to find out which package, or which procedure in a package, is updating a table?
Due to a certain project being handed over without proper documentation (the person who handed over the project has since left), data that we know we have updated keeps reverting to some strange source value.
We are guessing that this could be a database job or scheduler running the update command without our knowledge. I am hoping there is a way to find out where the update is coming from, for example by putting a trigger on the table we are monitoring that records the source of the statement.
Any ideas?
Thanks.
UPDATE: I poked around and found out how to trace a statement back to its owning PL/SQL object.
In combination with what Tony mentioned, you can create a logging table and a trigger that looks like this:
CREATE TABLE statement_tracker
( SID NUMBER
, serial# NUMBER
, date_run DATE
, program VARCHAR2(48) null
, module VARCHAR2(48) null
, machine VARCHAR2(64) null
, osuser VARCHAR2(30) null
, sql_text CLOB null
, program_id number
);
CREATE OR REPLACE TRIGGER smb_t_t
AFTER UPDATE
ON smb_test
BEGIN
INSERT
INTO statement_tracker
SELECT ss.SID
, ss.serial#
, sysdate
, ss.program
, ss.module
, ss.machine
, ss.osuser
, sq.sql_fulltext
, sq.program_id
FROM v$session ss
, v$sql sq
WHERE ss.sql_address = sq.address
AND ss.SID = USERENV('sid');
END;
/
In order for the trigger above to compile, you'll need to grant the owner of the trigger these permissions, when logged in as the SYS user:
grant select on V_$SESSION to <user>;
grant select on V_$SQL to <user>;
You will likely want to protect the insert statement in the trigger with some condition that only makes it log when the change you're interested in is occurring - on my test server this statement runs rather slowly (1 second), so I wouldn't want to be logging all these updates. Of course, in that case, you'd need to change the trigger to be a row-level one so that you could inspect the :new or :old values. If you are really concerned about the overhead of the select, you can change it to not join against v$sql, and instead just save the SQL_ADDRESS column, then schedule a job with DBMS_JOB to go off and update the sql_text column with a second update statement, thereby offloading the update into another session and not blocking your original update.
Unfortunately, this will only tell you half the story. The statement you're going to see logged is going to be the most proximal statement - in this case, an update - even if the original statement executed by the process that initiated it is a stored procedure. This is where the program_id column comes in. If the update statement is part of a procedure or trigger, program_id will point to the object_id of the code in question - you can resolve it thusly:
SELECT * FROM all_objects where object_id = <program_id>;
In the case when the update statement was executed directly from the client, I don't know what program_id represents, but you wouldn't need it - you'd have the name of the executable in the "program" column of statement_tracker. If the update was executed from an anonymous PL/SQL block, I'm not sure how to track it back - you'll need to experiment further.
It may be, though, that the osuser/machine/program/module information may be enough to get you pointed in the right direction.
If it is a scheduled database job then you can look at what database jobs exist (e.g. in DBA_JOBS and DBA_SCHEDULER_JOBS) and what they do. Other things you can do are:
look at the dependency views, e.g. ALL_DEPENDENCIES, to see which packages/triggers etc. use that table. Depending on the size of your system, that may return a lot of objects to trawl through.
Search all the database source code for references to the table like this:
select distinct type, name
from all_source
where lower(text) like lower('%mytable%');
Again that may return a lot of objects, and of course there will be some "false positives" where the search string appears but isn't actually a reference to that table. You could even try something more specific like:
select distinct type, name
from all_source
where lower(text) like lower('%insert into mytable%');
but of course that would miss cases where the command was formatted differently; a regular-expression search, sketched below, is more tolerant of that.
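For instance, using REGEXP_LIKE (available since Oracle 10g) to allow arbitrary whitespace between the keywords:
select distinct type, name
from all_source
where regexp_like(text, 'insert[[:space:]]+into[[:space:]]+mytable', 'i');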
Additionally, could there be SQL scripts being run through "cron" jobs on the server?
Just write an "after update" trigger and, in this trigger, log the results of "DBMS_UTILITY.FORMAT_CALL_STACK" in a dedicated table.
The purpose of this function is exactly to give you the complete call stack of all the stored procedures and triggers that have been fired to reach your code.
I am writing from a mobile app, so I can't give you more detailed examples, but if you google for it you'll find many of them.
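A minimal sketch of that idea (the log table, trigger, and column names are illustrative):
CREATE TABLE call_stack_log
( logged_at  DATE
, call_stack VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER trg_log_call_stack
AFTER UPDATE ON table_of_interest
DECLARE
  v_stack VARCHAR2(4000);
BEGIN
  v_stack := SUBSTR(DBMS_UTILITY.FORMAT_CALL_STACK, 1, 4000);
  INSERT INTO call_stack_log (logged_at, call_stack)
  VALUES (SYSDATE, v_stack);
END;
/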
A quick and dirty option if you're working locally, and are only interested in the first thing that's altering the data, is to throw an error in the trigger instead of logging. That way, you get the usual stack trace, it's a lot less typing, and you don't need to create a new table:
CREATE OR REPLACE TRIGGER who_changed_it  -- trigger name is illustrative
AFTER UPDATE ON table_of_interest
BEGIN
  RAISE_APPLICATION_ERROR(-20001, 'something changed it');
END;
/
