Is it possible in PeopleSoft to schedule a query daily and transfer the result file (for example as XLS) to a server or to a local drive? In other words, to extract the result out of the PeopleSoft application itself? I don't mean transferring it by mail...
Yes, it is possible. You can create the query in PSQUERY and then schedule it.
See Scheduling Queries for more details.
The summary is:
1. Create a query in PSQUERY.
2. Schedule the query (Reporting Tools->Query->Schedule Query).
3. Select your run control.
4. Select your query (from step 1) and enter any parameters.
5. Press Run.
6. Change the type to File.
7. Change the format if needed (XLS is an option).
8. Enter the folder path in the output destination.
9. Select a recurrence.
10. Press OK.
The output destination is from the point of view of the process scheduler, so c:\temp\ would put the files on the local drive of the process scheduler, not your PC. If the process scheduler has the correct permissions (and it's running on Windows), you can put a UNC path here; on Linux, you can mount the location you need and save the output there.
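For example, a rough sketch of mounting a Windows share on a Linux process scheduler host (the share path, mount point, account and domain below are placeholders, not anything PeopleSoft-specific):
# Placeholders throughout; adjust to your file server and service account
sudo mount -t cifs //fileserver/query_output /mnt/query_output -o username=psoft_svc,domain=MYDOMAIN
The run control's output destination would then point at /mnt/query_output/.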
I ran multiple queries, but before I could save them QPad crashed. However, the q process serving the port these queries were running against (on my Windows machine) is still up. I can recover the variables and functions with \v and \f respectively.
Is there a way to recover all the q statements I ran from QPad? I forgot to maintain a log file, so I am trying to find a way to recover the queries from the still-running q process.
Thanks
Unfortunately, there's no way to retrieve your old queries, for the reasons Davis.Leong gave. But if you can't or don't want to create a table on your server to save them, you can also check the log queries box in QPad settings:
Q > Settings > Editor > Log queries to "queries_date.log"
Now when you run queries, they will be written to this log file in the same directory as QPad.exe, along with the server and timestamp, like this:
/ 02/26/19 09:54:52 on `:localhost:1234:: from QPad1*
show `logthis
/ 02/26/19 10:03:03 on `:localhost:1234:: from QPad1*
a:10
Unfortunately, I don't think there is a way to retrieve your command history. Others have already mentioned why, so I won't go into that. However, you can easily maintain a log file in the future:
When you start your server, passing a log path together with the -l flag turns on logging: any commands sent to the server from a client that update its state will be logged. For example,
q ../log/logtest -l -p 5555
t:([]date:`date$();sym:`sym$();price:`float$())
will start a q process listening on port 5555, logging any messages that cause the server to update. So if I open a handle to 5555 in another q session
h:hopen `::5555
and update table t from the client
q)h"insert[`t](2000.01.01;`appl;102.3)"
,0
the server will have updated t like so:
q)t
date sym price
---------------------
2000.01.01 appl 102.3
There will be a log file created which will show any commands sent to the server. NOTE however it will only log those commands that change the state of the server's data.
This log file can be reloaded in the event of a server crash using the same command as before.
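For instance, assuming the same paths as above, restarting the server with
q ../log/logtest -l -p 5555
replays ../log/logtest.log at startup, restoring t to its last logged state.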
The answer is no. QPad is the GUI that interacts with the q process. The reason you can still retrieve the variables and functions is that the process did not die. Queries, however, are not saved by q by default, unless you customize .z.pg to insert a record into a queryHistory table (while still evaluating the incoming query).
e.g.
q)queryHistory:([]queryTime:`timestamp$();query:())
q).z.pg:{[x]`queryHistory insert ([]queryTime:enlist .z.P;query:enlist x);value x}
q)10+10
20
q)testTab:([]sym:10?`1;val:10?100)
q)queryHistory
queryTime query
---------------
queryHistory is not appended to here because these statements were run in the q process itself. If you run the same statements from QPad:
10+10
testTab:([]sym:10?`1;val:10?100)
you can see that records are appended, so even if your GUI crashes, you can trace the queries:
q)queryHistory
queryTime query
-------------------------------------
2019.02.26D17:32:38.471063000 "10+10"
q)queryHistory
queryTime query
----------------------------------------------------------------
2019.02.26D17:32:38.471063000 "10+10"
2019.02.26D17:32:52.790863000 "testTab:([]sym:10?`1;val:10?100)"
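If you also want the history to survive a restart of the q process itself, one option (just a sketch; the file name and interval are arbitrary) is to persist the table to disk periodically:
q)save `queryHistory            / writes ./queryHistory to disk once
q).z.ts:{save `queryHistory}    / or keep refreshing it on the timer
q)\t 60000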
I recently got to know that there is a backup of your q scripts under "C:/Users/<your username>/AppData/Local", autosaved every 5-6 minutes. These are temporary files which are deleted when you save the script. However, if your QPad crashed, you can find your files there :)
I think I have tried everything, but I can't figure out how to set up a SQL Agent job that has a T-SQL step performing a SELECT on a linked server.
1) I have a domain user mydomain\SQLJob
2) I have a SQL Server 2017 server 'JOBSERVER' with the SQL Agent service logged on as a domain user mydomain\sv_agent (this cannot be changed). No further rights should be given to this user either.
3) On the JOBSERVER I created a linked server
EXEC sp_addlinkedserver 'LINKEDSERVER'
4) mydomain\SQLJob is data reader on a database on LINKEDSERVER
5) I am able to do a SELECT * FROM LINKEDSERVER... from JOBSERVER in a regular Query Window.
On JOBSERVER I have tried
ALTER DATABASE MyDatabase SET TRUSTWORTHY ON
And then set the job to execute in MyDatabase
I have also tried
Job Step Properties > Advanced > Run as user > mydomain\SQLJob
I have tried adding mydomain\SQLJob to the linked server on JOBSERVER both with and without Impersonate
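For reference, the "with Impersonate" variant of that mapping looks roughly like this in T-SQL (sketched from memory):
-- map the domain login to the linked server using impersonation
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'LINKEDSERVER', @useself = N'TRUE', @locallogin = N'mydomain\SQLJob';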
Could someone let me know what the correct steps are?
Thanks
I found another post stating that it could not be done.
What I did instead was change the T-SQL step to a PowerShell step. There, the Run As works just fine.
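Roughly, the PowerShell step can connect straight to the remote server under the Run As account, so the linked-server hop isn't needed. A minimal sketch (the database and table names are placeholders, and it assumes Invoke-Sqlcmd from the SqlServer module is available on JOBSERVER):
# Runs under the step's Run As / proxy account (mydomain\SQLJob) with integrated security.
# MyDatabase and dbo.MyTable are placeholders for the real objects on the remote server.
Invoke-Sqlcmd -ServerInstance "LINKEDSERVER" -Database "MyDatabase" -Query "SELECT * FROM dbo.MyTable;"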
I have a long script with SQL statements to run on Teradata. I want the script to keep running until the end and save the errors in a log file, rather than stopping at every error. How can I do that?
thanks
Assuming you are using Teradata SQL Assistant:
Click on Tools in the menu bar, then Options, then Query. There is a checkbox that says "Stop query execution if an SQL error occurs"; uncheck it if you want the script to keep running past errors.
To see the most recent error, hit F11. Otherwise, from the menu bar click Tools, then Show History. Double-click the row number on the left side of one of the history records and it will bring up a screen with the result messages for each statement. You can also query this sort of information directly from one of the QryLog views in DBC.
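For example, a rough sketch of such a query (it assumes DBQL is enabled and that you have SELECT rights on the view; the column names are the usual DBQL ones):
/* recent statements that failed, newest first */
SELECT StartTime, UserName, ErrorCode, ErrorText, QueryText
FROM DBC.QryLogV
WHERE ErrorCode <> 0
ORDER BY StartTime DESC;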
Errors can be of multiple types; some can be bypassed and some cannot. For example, with the native Teradata Tools and Utilities you can make a script ignore run-time errors, or even syntax errors, but generally it is impossible to ignore network connectivity errors and still get the remaining part of your queries executed.
Generally in such scenarios you want to use the BTEQ tool for executing the SQL, since it lets you ignore execution errors. BTEQ is a standard Teradata tool which can be downloaded from the Teradata website for free and is commonly installed by users querying Teradata through plain SQL.
To create a workable BTEQ script, simply copy-paste all of your queries into a plain text file, separate the queries with semicolons, and at the very top of that plain text file add a logon statement as shown below:
.logon Teradata_IP_Address/your_UserName,your_Password;
example script:
.logon 127.0.0.1/dbc,dbc;
/*Some sample queries. Replace these with your actual queries*/
SELECT Current_Timestamp;
CREATE TABLE My_Table (Dummy INTEGER) PRIMARY INDEX (Dummy);
So BTEQ gets you past the execution errors. To avoid network connectivity issues, ideally you want to execute this on a server which has a constant connection to Teradata and has the Teradata Tools and Utilities installed. Such a server may be called an ETL server, landing server, edge node or managed server (or something else, depending on your environment). You will definitely need login credentials for that server (if you don't already have access). The preferred commands to execute a BTEQ script are:
Windows: bteq < yourscriptname >routine_logfile 2>error_logfile
Linux (bash/ksh): nohup bteq < yourscriptname >routine_logfile 2>error_logfile &
Make sure not to close the command prompt if you are on Windows. On Linux you can close the current window or even terminate your network session with your ETL server if you use the recommended command.
If you see a warning about an EOL line found at the end of your logs, just ignore it; it is because, for simplicity, I left out some optional BTEQ statements which ensure a cleaner exit.
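For completeness, the same sketch with those optional cleanup statements added (same placeholder logon and queries as above):
.logon 127.0.0.1/dbc,dbc;
/*Some sample queries. Replace these with your actual queries*/
SELECT Current_Timestamp;
CREATE TABLE My_Table (Dummy INTEGER) PRIMARY INDEX (Dummy);
.logoff;
.quit;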
Is there any way to see Control-M jobs on the command line?
Every day I use the Control-M Enterprise Manager GUI, but I'm looking for a way to see all jobs from the command line.
Query the DB of Control-M Server:
Select * From Cms_Jobdef Where Tasktype = 'C '
If you have access to the Automation API, you can get the response in JSON format:
ctm run jobs:status
If you go to the "planning" or "monitoring" tools and then select "View" and then "List" (instead of "Map") then it will list all job in non-graphical mode - so you can see all the commands listed (where applicable).
If you have access to the Control-M Server machine itself then this query on the DB will list all commands (where the command line contains text) -
select SCHEDTAB, JOBNAME, CMDLINE from CMS_JOBDEF where CMDLINE IS NOT NULL;
Assuming that you are not using version 8 or higher, you can use CTRL+L on EM GUI to view the jobs as a list.
On the header of the list, right click and use the column chooser to choose to display the commands of the jobs.
I would like to count the number of open connections to an SQLite database. Is there a way to do that?
According to these posts on the mailing list there is no way to check the number of open connections through code or the database itself. There is no API.
According to this post, if you are running on a POSIX type system you can use the lsof command to count how many processes have opened the database.
If you are on Windows you can use Process Explorer to count the number of connections with the following steps:
In Process Explorer click on 'Find' -> Find Handle or DLL...
Type in the name of your sqlite database and click on 'Search' (or hit Enter)
In the results window, click on your database. It will become highlighted as a 'file' in the main Process Explorer window.
Back in the main window, right-click on your database file and click Properties
You can now see the number of References and Handles
An open-file monitor like lsof will do it:
lsof dbName.sql
will give you a list of connections, for example:
OpenerName 6158 User 39u REG 1,2 20480 20397113 dbName.sql
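If you just want the count, a quick sketch using the same file name as above:
# number of processes holding the database open (drop lsof's header line)
lsof dbName.sql | tail -n +2 | wc -l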