Can QPad queries be recovered if QPad is closed but the port is open?

I ran multiple queries, but QPad crashed before I could save them. However, the q port the queries were running against (on my Windows machine) is still open. I can recover the variables and functions with \v and \f respectively.
Is there a way to recover all the q statements I ran through QPad? I forgot to maintain a log file, so I am trying to find a way to recover the queries using the q port.
Thanks

Unfortunately there's no way to retrieve your old queries, for the reasons Davis.Leong explained. But if you can't or don't want to create a table on your server to save them, you can also check the log queries box in QPad's settings:
Q > Settings > Editor > Log queries to "queries_date.log"
Now when you run queries, they will be written to this log file in the same directory as QPad.exe, along with the server and a timestamp, like this:
/ 02/26/19 09:54:52 on `:localhost:1234:: from QPad1*
show `logthis
/ 02/26/19 10:03:03 on `:localhost:1234:: from QPad1*
a:10

Unfortunately I don't think there is a way to retrieve your command history. Others have already mentioned why, so I will not go into that. You can easily maintain a log file in the future, however.
When you start your server, adding the -l flag allows you to define a path to a log file; any client messages that update the server will then be logged. For example,
q ../log/logtest -l -p 5555
q)t:([]date:`date$();sym:`sym$();price:`float$())
will start a q process listening on port 5555, logging any messages that cause the server to update. So if I open a handle to 5555 in another q session and insert into t:
q)h:hopen `::5555
q)h"insert[`t](2000.01.01;`appl;102.3)"
,0
the server will have updated t like so:
q)t
date       sym  price
---------------------
2000.01.01 appl 102.3
A log file will be created which records the commands sent to the server. NOTE, however, that it only logs those commands that change the state of the server's data. In the event of a server crash, the log can be replayed by restarting the process with the same command as before.

The answer is no. QPad is the GUI that interacts with the q process. The reason why you can still retrieve the variables and functions is that the process did not die. The queries themselves q will not save by default, unless you customize your .z.pg handler to insert a record into a queryHistory table (note that .z.pg only sees synchronous messages; asynchronous ones go through .z.ps).
e.g.
q)queryHistory:([]queryTime:`timestamp$();query:())
q).z.pg:{[x]`queryHistory insert ([]queryTime:.z.P;query:enlist x);value x} / log the query, then evaluate it and return the result
q)10+10
20
q)testTab:([]sym:10?`1;val:10?100)
q)queryHistory
queryTime query
---------------
queryHistory is not appended to with records here because these statements were run in the q process itself. If you instead run them from QPad:
10+10
testTab:([]sym:10?`1;val:10?100)
you can see that records are appended, so even if your GUI crashes, you can trace the queries:
q)queryHistory
queryTime query
-------------------------------------
2019.02.26D17:32:38.471063000 "10+10"
q)queryHistory
queryTime query
----------------------------------------------------------------
2019.02.26D17:32:38.471063000 "10+10"
2019.02.26D17:32:52.790863000 "testTab:([]sym:10?`1;val:10?100)"

I recently got to know that there is a backup of your q scripts under C:\Users\<your username>\AppData\Local, autosaved every 5-6 minutes. These are temporary files which are deleted when you save the script. However, if your QPad crashed, you can find your files there :)

Various difficulties creating ASP.NET Session tables via aspnet_regsql.exe

We're trying to move ASP.NET session state for one of our Azure web apps into a database, and it seems like the aspnet_regsql.exe tool is the way to go. Unfortunately, I'm getting stuck on a few issues below. It's an Azure SQL database, and I'm connecting using the server's admin account.
I initially wanted to add the session tables to our existing database, so I ran .\aspnet_regsql.exe -U adminusername -P adminpassword -S servername.database.windows.net -d databasename -ssadd -sstype c, which throws the exception "Database 'databasename' already exists. Choose a different database name".
Omitting the database name and running it again throws the exception "Execution Timeout Expired" after about 30 seconds, which is just the default for SqlCommand.CommandTimeout. This occurs while executing the CREATE DATABASE command. I tried creating a database manually, and it takes about 50 seconds for some reason; the database is S0 tier and is not under any load.
Running aspnet_regsql again on the already-created database (because it's idempotent, right?) leads to the "Database already exists" error, as does pre-creating an empty database for it to start from.
There's no flag that lets me increase the timeout, and I can't set command timeout using the -C (connection string) flag
Adding the -sqlexportonly flag to generate a script and just running that directly doesn't work either (yes, I know I'm not supposed to run InstallSqlState.sql directly). It throws a whole load of error messages saying things like:
Reference to database and/or server name in 'msdb.dbo.sp_add_job' is not supported in this version of SQL Server.
USE statement is not supported to switch between databases.
Which makes me think this script might have some issues with an Azure SQL database...
Does anyone have any ideas?
Update:
It looks like all the errors involving 'msdb' are related to removing and re-adding a database job called 'Job_DeleteExpiredSessions'. Azure SQL doesn't support database jobs, so the only options I can see are
Run SQL on a VM instead (vastly more expensive, and I'd rather stick with the platform services than have to manage VMs)
Implement one of those "Elastic Job Agents"
Perhaps move the same functionality elsewhere (e.g. a stored proc; a sketch of that idea is at the end of this post)?
Turns out Microsoft has an article about how to do exactly what I need, which I somehow missed during my searching yesterday. Hopefully this answer saves someone else a few hours of frustration. All the info you need is at https://azure.microsoft.com/en-au/blog/using-sql-azure-for-session-state/.
Note that YMMV since it's from 2010 and also says in scary red letters
"Microsoft does not support SQL Session State Management using SQL Azure databases for ASP.net applications"
Nevertheless, they provide a working script that seems to do exactly what I need.
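For the stored-procedure route, here is a minimal sketch of what the cleanup job's replacement might look like, assuming the standard ASPState schema (the dbo.ASPStateTempSessions table and its Expires column; verify the names against the script from the article above). It would be invoked on a schedule by an Elastic Job Agent, WebJob, or Azure Function instead of a SQL Agent job:
-- Sketch only: a stand-in for the Job_DeleteExpiredSessions agent job on Azure SQL.
-- Table and column names assume the standard ASPState session-state schema.
CREATE PROCEDURE dbo.DeleteExpiredSessions
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @now datetime;
    SET @now = GETUTCDATE();
    DELETE FROM dbo.ASPStateTempSessions
    WHERE Expires < @now;
END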

SQL Job, Linked Server, Domain Account

I think I have tried everything, but I can't figure out how to set up a SQL Job that has a T-SQL step performing a SELECT on a linked server.
1) I have a domain user, mydomain\SQLJob.
2) I have a SQL Server 2017 instance 'JOBSERVER' with the SQL Agent service logging on as the domain user mydomain\sv_agent (this cannot be changed, and no further rights should be given to this user either).
3) On the JOBSERVER I created a linked server
EXEC sp_addlinkedserver 'LINKEDSERVER'
4) mydomain\SQLJob has the db_datareader role on a database on LINKEDSERVER.
5) I am able to do a SELECT * FROM LINKEDSERVER... from JOBSERVER in a regular Query Window.
On JOBSERVER I have tried
ALTER DATABASE MyDatabase SET TRUSTWORTHY ON
And then set the job to execute in MyDatabase
I have also tried
Job Step Properties > Advanced > Run as user > mydomain\SQLJob
I have tried adding mydomain\SQLJob to the linked server on JOBSERVER both with and without Impersonate
Could someone let me know what the correct steps are?
Thanks
I found another post stating that it could not be done.
What I did instead was change the T-SQL step to a PowerShell step; there the Run As setting works just fine.
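As a rough sketch of that workaround (all names here are hypothetical, and it assumes a credential for mydomain\SQLJob and a PowerShell-subsystem SQL Agent proxy built on that credential already exist), the job step could be created like this:
-- Hypothetical job, database, table and proxy names throughout.
USE msdb;
EXEC dbo.sp_add_jobstep
    @job_name   = N'MyLinkedServerJob',
    @step_name  = N'Query linked server',
    @subsystem  = N'PowerShell',
    @command    = N'Invoke-Sqlcmd -ServerInstance "JOBSERVER" -Query "SELECT * FROM LINKEDSERVER.MyDb.dbo.MyTable"',
    @proxy_name = N'SQLJobProxy';
The proxy is what lets the step run as mydomain\SQLJob rather than the agent's own service account.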

Teradata: Keep running the script although there are errors

I have a long script of SQL statements to run on Teradata. I want the script to keep running to the end and save the errors in a log file, rather than stopping on every error. How can I do it?
thanks
Assuming you are using Teradata SQL Assistant:
Click on Tools in the menu bar, then Options, then Query. There is a checkbox that says "Stop query execution if an SQL error occurs"; uncheck it.
To see the most recent error, hit F11. Otherwise, from the menu bar click Tools, then Show History. Double-click the row number on the left side of one of the history records and it will bring up a screen with the result messages for each statement. You can also query this sort of info directly from one of the QryLog views in DBC.
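For example, a hedged sketch of such a query, assuming DBQL query logging is enabled on your system (the exact column set varies by Teradata release):
/* List recent failed statements from the DBQL log view. */
SELECT StartTime, UserName, ErrorCode, QueryText
FROM DBC.QryLogV
WHERE ErrorCode <> 0
ORDER BY StartTime DESC;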
Errors can be of multiple types; some can be bypassed and some cannot. For example, with the native Teradata Tools and Utilities you can make a script ignore run-time errors, or even syntax errors, but it is generally impossible to ignore network connectivity errors and still get the remaining part of your queries executed.
Generally in such scenarios, you want to use the BTEQ tool for executing the SQL, in which you can ignore execution errors. BTEQ is a standard Teradata tool which can be downloaded from the Teradata website for free and is commonly installed by users querying Teradata through plain SQL.
To create a workable BTEQ script, simply copy-paste all of your queries into a plain text file, separate all queries with semicolons (;), and at the very top of that file add a logon statement as shown below:
.logon Teradata_IP_Address/your_UserName,your_Password;
example script:
.logon 127.0.0.1/dbc,dbc;
/*Some sample queries. Replace these with your actual queries*/
SELECT Current_Timestamp;
CREATE TABLE My_Table (Dummy INTEGER) PRIMARY INDEX (Dummy);
So BTEQ gets you through the execution errors. To avoid network connectivity issues, you ideally want to execute this on a server which has a constant connection to Teradata and has the Teradata Tools and Utilities installed. Such a server may be called an ETL server, landing server, edge node or managed server (or something else, depending on your environment). You will definitely need login credentials to that server (if you don't already have access). The preferred commands to execute a BTEQ script are:
Windows: bteq < yourscriptname >routine_logfile 2>error_logfile
Linux (bash/ksh): nohup bteq < yourscriptname >routine_logfile 2>error_logfile &
Make sure not to close the command prompt if you are on Windows. On Linux you can close the current window or even terminate your network session with your ETL server if you use the recommended command, since nohup keeps the job running in the background.
If you see a warning about an EOL line found at the end of your logs, just ignore it; it is because, for simplicity, I left out some optional BTEQ statements which ensure a cleaner exit.
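For reference, a sketch of that cleaner exit: appending these optional dot-commands at the end of the script logs the session off and terminates BTEQ explicitly.
.LOGOFF;
.QUIT;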

Is it OK to change from full recovery to simple recovery in SQL Server?

I have an old database - a users membership/roles database that was set up automatically by an ASP.NET 2 application years ago.
The SQL Server version currently running is 10.50.1617 (SQL Server 2008 R2).
The users database log file is huge (the ldf file is approx 400 times the size of the mdf file).
The recovery model is currently set to "Full". I understand what that is - and I don't need point-in-time restoration.
If I simply changed the recovery model to "Simple" from within SQL Server Management Studio and clicked OK to save the changes - would I be risking my current database in any way? Or is SQL Server fine with making changes like this to live databases? And would the log file automatically shrink itself?
Thanks for your advice,
Mark
You should be fine; the transactions have been committed. The log file is waiting to be backed up and therefore released. Changing to simple recovery means that you cannot take rolling log backups (and hence do point-in-time restores), but data will be committed to the db in the same way as before; log records are simply discarded once SQL Server has finished writing the transaction.
To answer both of your questions:
Changing the recovery model on a live database is safe. You shouldn't incur any downtime, blocking, etc.
The log file won't shrink itself. You may find that once you've set the recovery model to simple that it may not be shrinkable right away. If you find that you're unable to shrink it, take a look at dbcc loginfo, specifically the 'status' column. Each row in the output of that command represents one virtual log file (vlf). The shrink command will only be able to clear a contiguous block of inactive (i.e. status = 0) vlfs at the end of the file. TL;DR - If you've got rows with status = 2 at the bottom, wait until you don't and then shrink.
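Putting both answers together, a minimal sketch in T-SQL (the database and logical log file names below are placeholders; find your real logical file name with sp_helpfile):
-- Placeholder names: replace aspnetdb / aspnetdb_log with your own.
ALTER DATABASE aspnetdb SET RECOVERY SIMPLE;
CHECKPOINT;                           -- helps mark inactive VLFs as reusable
DBCC LOGINFO;                         -- rows with status = 0 are shrinkable
DBCC SHRINKFILE (aspnetdb_log, 256);  -- shrink the log to a 256 MB target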

Timeout when uploading images

I am currently testing Tridion 2011 and am having problems creating multimedia components with uploaded content (as opposed to external).
I fill out the title, schema, and multimedia type, select a file from my system, then click save. I get a "Saving item..." information message, then approximately 30 seconds later I receive a "The wait operation timed out" message.
There doesn't appear to be any error messages in the C:\Program Files (x86)\Tridion\log directory. Looking at the event viewer I see the following information relating to the save action
Unable to save Component (tcm:4-738361).
The wait operation timed out
Error Code:
0x8004033F (-2147220673)
Call stack:
System.Data.SqlClient.SqlConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject,Boolean,Boolean)
System.Data.SqlClient.TdsParser.TryRun(RunBehavior,SqlCommand,SqlDataReader,BulkCopySimpleResultSet,TdsParserStateObject,Boolean&)
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader,RunBehavior,String)
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior,RunBehavior,Boolean,Boolean,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior,RunBehavior,Boolean,String,TaskCompletionSource`1,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1,String,Boolean,Int32,Boolean)
System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.SetBinaryContent(Int32,Stream)
Tridion.ContentManager.Data.AdoNet.ContentManagement.ItemDataMapper.Tridion.ContentManager.Data.ContentManagement.IItemDataMapper.SetBinaryContent(Stream,TcmUri)
Tridion.ContentManager.ContentManagement.RepositoryLocalObject.SetBinaryContent(BinaryContent)
Tridion.ContentManager.ContentManagement.Component.OnSaved(SaveEventArgs)
Tridion.ContentManager.IdentifiableObject.Save(SaveEventArgs)
Tridion.ContentManager.ContentManagement.VersionedItem.Save(Boolean)
Tridion.ContentManager.ContentManagement.VersionedItem.Save()
Tridion.ContentManager.BLFacade.ContentManagement.VersionedItemFacade.UpdateAndCheckIn(UserContext,String,Boolean,Boolean)
XMLState.Save
Component.Save
I already have my timeout settings in the Content Manager Snap-In set to high values (more than 10 minutes) due to another issue.
The BINARIES table in the Content Manager database is 25GB, if that helps.
Any ideas? Thanks.
Edit 1
Following suggestions from Bart Koopman, my DBA has rebuilt the indexes but does not reckon the Transaction log has any impact on performance. The problem persists.
Edit 2
I have just found more details of the error
Unable to save Component (tcm:0-0-0).
Timeout expired.
The timeout period elapsed prior to completion of the operation or the server is not responding.
A database error occurred while executing Stored Procedure "EDA_ITEMS_UPDATEBINARYCONTENT".
After taking a look at this procedure it looks like the following statement could be the root cause
SELECT 1 FROM BINARIES WHERE ID = @iBINARY_ID AND CONTENT IS NULL
I executed it manually with @iBINARY_ID as -1 and after 2 minutes it still hadn't completed. I assume that when I insert a new multimedia component the query will be something similar (i.e. the id will not exist in the table).
The BINARIES table currently has a NON-CLUSTERED primary key. Maybe the solution would be to change this to a CLUSTERED primary key? However, I assume it is NON-CLUSTERED for a reason.
Just had a response from SDL customer support. Apparently this is a known issue related to statistics and the chosen query plan.
Running the following statement manually from SQL Server Management Studio fixes the problem (it didn't even need to complete for me)
SELECT 1 FROM BINARIES WHERE ID = -1 AND CONTENT IS NULL
Hope this helps someone else out!
Timeouts on database operations are usually an indication of a misconfiguration or a lack of maintenance. By increasing the timeout you are just working around the problem rather than solving it.
With a binaries table that big you will want to make sure you have proper database setup with data files that are separated from your log files (separated on different physical partitions/disks) and possibly even multiple data files on multiple physical partitions to take advantage of performance gains.
Next to that, you will want to ensure that standard database maintenance is performed daily/hourly. Things like backing up and truncating the transaction log every hour will greatly improve your database performance (on MS SQL Server a transaction log of more than 1GB slows the database down drastically; you should always try to keep it below that size through timely backup/truncation). Updating statistics and rebuilding indexes is also something you should not forget to do on a regular basis.
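As a sketch of the statistics and index part of that maintenance, targeted at the table implicated above (the dbo schema is an assumption; adjust for your Content Manager database):
-- Assumes the BINARIES table lives in the dbo schema of the CM database.
UPDATE STATISTICS dbo.BINARIES WITH FULLSCAN;  -- refresh the stale statistics behind the bad query plan
ALTER INDEX ALL ON dbo.BINARIES REBUILD;       -- rebuild fragmented indexes (can be heavy on a 25GB table)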
