I am submitting queries with VBScript (NOT SQL Assistant). I accidentally deleted that VBScript file. How can I recover the queries that I submitted? Where are they stored in Teradata?
Most Teradata systems enable the Database Query Log (DBQL), so there's a high probability that your SQL was captured. You might try the following if you have access to it:
SELECT * FROM dbc.QryLogV
WHERE UserName = USER;
But even if this works, you might still not find the required queries, as all data is regularly moved from the DBQL base tables to a history database (probably daily). So you'd better contact your DBA and ask for assistance :-)
If QueryText in dbc.QryLogV is empty (or holds only partial text), you can check dbc.QryLogSQLV (hopefully it's populated):
SELECT * FROM dbc.QryLogSQLV
WHERE QueryId IN
(
SELECT QueryId FROM dbc.QryLogV
WHERE UserName = USER
AND StartTime >= CURRENT_TIMESTAMP - INTERVAL '7' DAY -- add more conditions to find the correct queries
)
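Note that long statements are split across multiple rows in QryLogSQLV, so stitch the text back together in order. As a sketch (SqlRowNo and SqlTextInfo are the standard DBQL columns for the fragment sequence and the text itself):
SELECT QueryId, SqlRowNo, SqlTextInfo
FROM dbc.QryLogSQLV
WHERE QueryId IN (SELECT QueryId FROM dbc.QryLogV WHERE UserName = USER)
ORDER BY QueryId, SqlRowNo;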
I have a SQL query that runs against a view and uses a lot of wildcard operators on top, hence it takes a long time to complete.
The data is consumed by an ASP.NET application. Is there any way I could pre-run the query once a day so the data is already there when the ASP.NET application needs it, and only pass on a parameter to fetch specific records?
A much simplified example would be
select * from table
run every day with the result stored somewhere, and when ASP.NET passes on the parameter, only specific records are fetched, like
select * from table where field3 = 'something'
Either use SQL Server Agent (MSSQL) or an equivalent scheduler to run a scheduled process that stores the result into a table, like this...
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[MyTemporaryTable]') AND type IN (N'U'))
BEGIN
    -- Table already exists: empty it and reload from the view
    TRUNCATE TABLE [dbo].[MyTemporaryTable];
    INSERT INTO [dbo].[MyTemporaryTable]
    SELECT * FROM [dbo].[vwMyTemporaryTableDataSource];
END
ELSE
BEGIN
    -- First run: create the table from the view's result set
    SELECT *
    INTO [dbo].[MyTemporaryTable]
    FROM [dbo].[vwMyTemporaryTableDataSource];
END
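The ASP.NET application then queries the pre-built table directly. As a sketch (field3 is just the placeholder column from the example above), an index on the filter column keeps those parameterized lookups fast:
-- Hypothetical: index the filter column so request-time lookups seek instead of scan
CREATE NONCLUSTERED INDEX IX_MyTemporaryTable_field3
ON [dbo].[MyTemporaryTable] (field3);

-- What the ASP.NET application runs at request time
SELECT * FROM [dbo].[MyTemporaryTable] WHERE field3 = 'something';
Note that the SELECT ... INTO branch recreates the table on the first run, so the index would need to be created again after that branch executes.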
Or you could store the result in ASP.NET as an Application/Session variable, or even as a property of a class that is stored in Application/Session. The property approach will load the data the first time it is requested and serve it from memory thereafter.
private MyObjectType _objMyStoredData;

public MyObjectType MyStoredData
{
    get
    {
        // Lazy-load: fetch the data on first access, serve the cached copy afterwards
        if (_objMyStoredData == null)
        {
            _objMyStoredData = GetMyData();
        }
        return _objMyStoredData;
    }
}
However, if your source data for this report is only 2,000 rows... I wonder if all this is really necessary. Perhaps increasing the efficiency of the query could solve the problem without delving into pre-caching and the downsides that go with it, such as re-using data that could be out of date.
You can use Redis. Run the view once when the user logs in, then fill Redis with the view data and set that object in the user's session context so that it is accessible on all pages. When the user logs out, clean up the Redis entries. This way the user won't go to the database every time for the result but will get the data from the Redis cache instead, which is very fast.
We have started to use the updated System.Web.Providers provided in the Microsoft.AspNet.Providers.Core package from NuGet. We started to migrate our existing users and found performance slowing and then deadlocks occurring. This was with less than 30,000 users (much less than the 1,000,000+ we need to create). When we were calling the provider, it was from multiple threads on each server and there were multiple servers running this same process. This was to be able to create all the users we required as quickly as possible and to simulate the load we expect to see when it goes live.
The logs SQL Server generated for a deadlock contained the EF-generated SQL below:
SELECT
[Limit1].[UserId] AS [UserId]
, [Limit1].[ApplicationId] AS [ApplicationId]
, [Limit1].[UserName] AS [UserName]
, [Limit1].[IsAnonymous] AS [IsAnonymous]
, [Limit1].[LastActivityDate] AS [LastActivityDate]
FROM
(SELECT TOP (1)
[Extent1].[UserId] AS [UserId]
, [Extent1].[ApplicationId] AS [ApplicationId]
, [Extent1].[UserName] AS [UserName]
, [Extent1].[IsAnonymous] AS [IsAnonymous]
, [Extent1].[LastActivityDate] AS [LastActivityDate]
FROM
[dbo].[Users] AS [Extent1]
INNER JOIN [dbo].[Applications] AS [Extent2] ON [Extent1].[ApplicationId] = [Extent2].[ApplicationId]
WHERE
((LOWER([Extent2].[ApplicationName])) = (LOWER(@p__linq__0)))
AND ((LOWER([Extent1].[UserName])) = (LOWER(@p__linq__1)))
) AS [Limit1]
We ran the query manually and the execution plan showed that it was performing a table scan even though there was an underlying index. The reason for this is the use of LOWER([Extent1].[UserName]).
We looked at the provider code to see if we were doing something wrong or if there was a way to either intercept or replace the database access code. We didn't see any options to do this, but we did find the source of the LOWER issue: .ToLower() is being called on both the column and the parameter.
return (from u in ctx.Users
join a in ctx.Applications on u.ApplicationId equals a.ApplicationId into a
where (a.ApplicationName.ToLower() == applicationName.ToLower()) && (u.UserName.ToLower() == userName.ToLower())
select u).FirstOrDefault<User>();
Does anyone know of a way that we can change the behaviour of the provider to not use .ToLower(), so that the index can be used?
You can create an index on LOWER(username), per Sql Server : Lower function on Indexed Column:
ALTER TABLE dbo.users ADD LowerFieldName AS LOWER(username) PERSISTED
CREATE NONCLUSTERED INDEX IX_users_LowerFieldName_ ON dbo.users(LowerFieldName)
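Queries that filter on the new computed column can then seek on that index instead of scanning. A minimal sketch, where @userName is a hypothetical parameter:
-- Seek on the indexed computed column instead of scanning with LOWER(username)
SELECT UserId, UserName
FROM dbo.users
WHERE LowerFieldName = LOWER(@userName);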
I was using the System.Web.Providers.DefaultMembershipProvider membership provider too but found that it was really slow. I changed to the System.Web.Security.SqlMembershipProvider and found it to be much faster (>5 times faster).
This tutorial shows you how to set up the SQL database that you need to use the SqlMembershipProvider: http://weblogs.asp.net/sukumarraju/archive/2009/10/02/installing-asp-net-membership-services-database-in-sql-server-expreess.aspx
The database that is auto-generated uses stored procedures, which may or may not be an issue for your DB guys.
I've got this query
UPDATE linkeddb...table SET field1 = 'Y' WHERE column1 = '1234'
This takes 23 seconds to select and update one row.
But if I use OPENQUERY (which I don't want to), it takes only half a second.
The reason I don't want to use OPENQUERY is so that I can add parameters to my query securely and be safe from SQL injection.
Does anyone know of any reason for it running so slowly?
Here's a thought as an alternative: create a stored procedure on the remote server to perform the update and then call that procedure from your local instance. Since the values are passed as typed procedure parameters, this also keeps you safe from SQL injection.
/* On remote server */
create procedure UpdateTable
    @field1 char(1),
    @column1 varchar(50)
as
update table
set field1 = @field1
where column1 = @column1
go
/* On local server */
exec linkeddb...UpdateTable @field1 = 'Y', @column1 = '1234'
If you're looking for the why, here's a possibility from Linchi Shea's Blog:
To create the best query plans when you are using a table on a linked server, the query processor must have data distribution statistics from the linked server. Users that have limited permissions on any columns of the table might not have sufficient permissions to obtain all the useful statistics, and might receive a less efficient query plan and experience poor performance. If the linked server is an instance of SQL Server, to obtain all available statistics, the user must own the table or be a member of the sysadmin fixed server role, the db_owner fixed database role, or the db_ddladmin fixed database role on the linked server.
(Because of Linchi's post, this clarification has been added to the latest SQL Server Books Online documentation.)
In other words, if the linked server is set up with a user that has limited permissions, then SQL can't retrieve accurate statistics for the table and might choose a poor method for executing a query, including retrieving all rows.
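If that is the cause, one way to test it is to map the linked-server login to a remote account with enough rights to read statistics. A sketch using sp_addlinkedsrvlogin ('linkeddb' matches the question; 'stats_reader' is a hypothetical remote account):
-- Hypothetical: map all local logins to a remote account that can read statistics
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'linkeddb',
    @useself = 'FALSE',
    @locallogin = NULL,          -- apply to all local logins
    @rmtuser = N'stats_reader',  -- hypothetical account with db_ddladmin or higher
    @rmtpassword = N'********';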
Here's a related SO question about linked server query performance. Their conclusion was: use OpenQuery for best performance.
Update: some additional excellent posts about linked server performance from Linchi's blog.
Is column1 the primary key? Probably not. Try to select the records for update using the primary key (WHERE PK_field = xxx); otherwise (sometimes?) all records will be read to find the PK of the records to update.
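For example, a sketch where PK_field stands in for the table's actual primary key column:
-- Hypothetical: locate the row by its primary key so only that row is read remotely
UPDATE linkeddb...table
SET field1 = 'Y'
WHERE PK_field = 1234;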
Is column1 a varchar field? Is that why you are surrounding the value 1234 with single quotation marks? Or is that simply a typo in your question?
I use SqlServices.Uninstall() to uninstall ASP.NET Membership tables and other stuff programmatically from the database. But when the tables hold old data, it fails with the following error message:
Cannot uninstall the specified feature(s) because the SQL table 'aspnet_Membership' in the database '[DBNAME]' is not empty. You must first remove all rows from the table.
Is there a way to tell SqlServices, or any other class in .NET, to erase that old data too?
You can use the following, which deletes every user from your database:
// Delete every user one by one, then run the uninstall
foreach (MembershipUser user in Membership.GetAllUsers())
    Membership.DeleteUser(user.UserName, true);

SqlServices.Uninstall(); // your existing Uninstall call
Be careful:
The code above executes 1 + N DELETE statements plus the SQL statements from the Uninstall method, where N is the number of users in the database. So it's not very efficient.
If you want something more efficient, you will have to write your own stored procedure.
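As a sketch of that approach (these are the standard aspnet_* membership tables; adjust the list to the features you actually installed), a set-based cleanup could empty the tables in child-to-parent order before calling Uninstall:
-- Hypothetical bulk cleanup: delete child tables first to satisfy the foreign keys
DELETE FROM dbo.aspnet_Membership;
DELETE FROM dbo.aspnet_UsersInRoles;
DELETE FROM dbo.aspnet_Profile;
DELETE FROM dbo.aspnet_Users;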
I have an error occurring frequently from our Community Server installation whenever googlesitemap.ashx is traversed on a specific sectionID. I suspect that a username has been amended but the posts haven't been recached to reflect this.
Is there a way I can check the data integrity by performing a SELECT statement on the database? Alternatively, is there a way to force the database to recache?
This error could be thrown by Community Server if it finds users that aren't in the instance of MemberRoleProfileProvider.
See CommunityServer.Users AddMembershipDataToUser() as an example.
UPDATE:
I solved this problem in my case by noticing that the usernames are stored in two tables: cs_Users and aspnet_Users. It turns out the username was somehow DIFFERENT in each table. Manually updating the tables so the names matched fixed the problem.
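As a sketch, a query along these lines can surface the mismatched rows (it assumes the two tables join on UserID and that aspnet_Users stores the lowered name in LoweredUserName, as the stored procedure below suggests):
-- Hypothetical integrity check: list users whose names differ between the two tables
SELECT c.UserID, c.UserName AS CsUserName, a.UserName AS AspnetUserName
FROM dbo.cs_Users c
JOIN dbo.aspnet_Users a ON a.UserId = c.UserID
WHERE LOWER(c.UserName) <> a.LoweredUserName;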
Also, the user would be left out of membership in the following line of the stored procedure cs_Membership_GetUsersByName:
INSERT INTO #tbUsers
SELECT UserId
FROM dbo.aspnet_Users ar, #tbNames t
WHERE LOWER(t.Name) = ar.LoweredUserName AND ar.ApplicationId = @ApplicationId
#tbNames is a table of names that comes from cs_Users(?) at some point; since the usernames didn't match, the user was not inserted into the result later on.
See also: http://dev.communityserver.com/forums/t/490899.aspx?PageIndex=2
Not so much an answer, but you can find the affected data entries by running the following query...
SELECT *
FROM cs_Posts
WHERE UserID NOT IN (SELECT UserID
                     FROM cs_Users
                     WHERE UserAccountStatus = 2)