Passing parameters to a package run by sp_start_job - asp.net

We have an SSIS package that is run via a SQL Agent job. We initiate the job (via sp_start_job) from within an ASP.NET web page. The user who is logged into the UI needs to be logged with the SSIS package that the user initiates - hence we need the userId to be passed to the SSIS package. The issue is that we cannot pass parameters to sp_start_job.
Does anyone know how this can be achieved, and/or know of an alternative to the above approach?

It cannot be done through sp_start_job. You can't pass a parameter to a job step, so that option is out.
If you have no concern about concurrency - and since the same job cannot run twice at the same time, you probably don't - you could hack it by changing your job step from type SQL Server Integration Services Package to Operating system (CmdExec). Have that step call a batch script that the web page creates/modifies before starting the job. The net result is that you start your package with a dtexec.exe call whose /Set option pushes the value in, so the variable DomainUser in your package ends up with the value Domain\MyUser.
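For illustration, the batch file might contain little more than a single dtexec line like the following (the package path and user name here are hypothetical):
REM run_package.bat - rewritten by the web page before each run
dtexec.exe /file "C:\packages\MyPackage.dtsx" /Set \Package.Variables[User::DomainUser].Properties[Value];"DOMAIN\MyUser"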
I don't know your requirements, so perhaps you can just call into the .NET Framework and start your package from the web page. You'd probably want to make that call asynchronous, though. Otherwise, unless your SSIS package is very fast, users might try to navigate away, spam refresh, etc. while waiting for the page to "work".
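A minimal sketch of that in-process approach, assuming the web server has the SSIS runtime installed and the Microsoft.SqlServer.ManagedDTS assembly referenced (the package path and variable name are hypothetical):
using Microsoft.SqlServer.Dts.Runtime;

// Load the package, push the user name into a package variable, and
// execute it in-process. In practice, run this off the request thread.
public static DTSExecResult RunPackageFor(string domainUser)
{
    var app = new Application();
    Package pkg = app.LoadPackage(@"C:\packages\MyPackage.dtsx", null);
    pkg.Variables["User::DomainUser"].Value = domainUser; // e.g. User.Identity.Name
    return pkg.Execute();
}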
All of this, by the way, is simply pushing a value into an SSIS package - in this case, a user name. It doesn't pass along the user's credentials, so calls to things like SYSTEM_USER would report the SQL Agent service account (or the proxy account assigned to the job step).

Related

Control-M: SMART Folder - get information about a job inside

I use a SMART Folder to get email notifications if jobs inside that folder change their status to failed.
When an email notification is sent, I need to get the name of the failing job inside the SMART folder.
Is there a way to get information about failed jobs inside a SMART folder via some variables?
I tried %%SCHEDTAB and %%JOBNAME, but these only relate to the SMART folder itself and not to the failing job inside it.
[Screenshot: On-Do Action in the SMART folder]
[Screenshot: Monitoring view of the SMART table with a failed job]
You can set the alert to be raised from the individual jobs instead. I often just have a standardised post-processing panel on the job that generates an alert using local variables, e.g.:
%%JOBNAME failed on %%NODEID with code = %%COMPSTAT. This is a %%APPLGROUP job running on the %%APPLIC system.

Detecting whether code is running in a mirror when using Velocity to test a Meteor app

I have a simple Meteor application. I would like to run some code periodically on the server end. I need to poll a remote site for XML orders.
It would look something like this (CoffeeScript):
unless process.env.ORDERS_NO_FETCH
  Meteor.setInterval ->
    checkForOrder()
  , 600000
I am using Velocity to test. I do not want this code to run in the mirrored instance that runs the tests (otherwise it will poach my XML orders and I won't see them in the real instance). So, to that end, I would like to know how to tell whether server code is running in the testing environment, so that I can avoid setting up the periodic checks.
EDIT: I realised that I had missed faking one of my server calls in the tests, which is why my test code was grabbing one of the XML orders from the real server. So this might not be an issue after all. I am not yet sure how the tests are run for server code, and whether server code runs in a mirror (or is that a client-only concept)?
The server and the client both run in a mirror when using mocha/jasmine integration tests.
If you want to know if you are in a mirror, you can use:
Meteor.call('velocity/isMirror', function(err, isMirror) {
  if (isMirror) {
    // do something
  }
});
Also, on the server you can use:
process.env.IS_MIRROR
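Combining that with the snippet from the question, a minimal sketch of the guard might look like this (ORDERS_NO_FETCH and checkForOrder as in the question):
# skip the polling loop entirely when running inside a Velocity mirror
unless process.env.IS_MIRROR or process.env.ORDERS_NO_FETCH
  Meteor.setInterval ->
    checkForOrder()
  , 600000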
You've already got the fake working, and that is the right approach.

QlikView fails to reload on server when executing stored procedure

I cannot seem to figure out this issue. I have a QlikView document that pulls in a bunch of data and aggregates/joins it up - typical QlikView stuff. At the end of my process I have an Oracle stored procedure call. I am not retrieving anything back; it is a simple call to the database to trigger a process. I have set up my ODBC connection and User DSN for the connection on my local machine. When I run my .qvw file on my local machine everything works just fine: the proc call is made and the script executes without any errors.
However, when I put the document on our reload server and set up a reload task for it, the process throws a general script error when the SQL proc is called. What could cause this? The user running the document has execute permissions. Do I need to set up a DSN on the reload server?
Really not sure at all here. Hopefully someone here can help me out. Thanks.
Unfortunately, QlikView's SQL error messages are not that helpful for debugging purposes. In this case you can try turning on ODBC logging (http://support2.microsoft.com/kb/274551) and then reload the script to try to capture the cause of the error.
Also, if your script refers to a "local" DSN, then that DSN needs to be present on the machine that performs the reload as well - in this case the QlikView server. Note that a User DSN is only visible to the account that created it, so the account the reload service runs under will generally need its own (typically System) DSN of the same name.
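For reference, the relevant load-script pattern looks something like the following; the DSN and procedure names are hypothetical, and the exact CALL syntax depends on your driver:
// the DSN named here must exist on whichever machine performs the reload
ODBC CONNECT TO OracleDSN;
SQL CALL my_schema.trigger_process();
DISCONNECT;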

How to get the user who initiated the process in IBM BPM 8.5?

How do you get the user who initiated a process in IBM BPM 8.5? I want to reassign my task to the user who actually initiated the process. How can this be achieved in IBM BPM?
There are several ways to get who initiated a task, but who initiated a process instance is somewhat different.
You can do one of the following:
Add a private variable and assign it tw.system.user_loginName in a post assignment at the start. You can then read that variable to find the user who initiated the process. (It will be null or undefined if the instance was started by a REST API call or a UCA.) See the sketch after this list.
Place a tracking group after the Start event. Add an input variable to it, username, and assign it the value tw.system.user_loginName. Then, whenever the process is started, an entry is inserted into the tracking table, and you can retrieve the value from the corresponding view in the Performance DB.
There may also be a table logging process instance details where you can find the user_id directly.
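For the first option above, the post assignment (or an inline server script) can be a one-liner; processStarter is a hypothetical private string variable:
tw.local.processStarter = tw.system.user_loginName;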
I suggest you look at the getStarter() method of the ProcessInstanceData API.
Official Documentation on API
This link on IBM Developerworks should help you too: Process Starter
Unfortunately, there's no out-of-the-box way to do this - nothing recorded in the process instance indicates "who" started it. I presume this is because there are many ways to launch a process instance: from the Portal, via a message event, from an API call, etc.
Perhaps the best way to handle this is to add a required input parameter to your BPD and supply "who" started the process when you launch it. Unfortunately you can't supply any inputs from the OOTB Portal "New" action, but you can easily build your own launcher.
If you want to route the first task in the process to the user who started it, the easiest approach is simply to put the start point in the lane and, on the activity, set routing to "Last User In Lane". This takes care of the use case without requiring you to do the bookkeeping to track the user.
It's been a while since I implemented this, so I can't remember whether it works elegantly if you have system steps before the first task, but that can easily be handled by moving the system steps into the human service, executed as part of that call rather than as a separate step in the BPD.
Define a variable of string type and use a script task to capture the login user, assigning it to that variable so it is available throughout the process as the initiator of the task.
You can use this line of code to achieve the same:
tw.system.user_loginName

have R halt the EC2 machine it's running on

I have a few workflows where I would like R to halt the Linux machine it's running on after a script completes. I can think of two similar ways to do this:
run R as root and then call system("halt")
run R from a root shell script (the R script itself could run as any user), then have the shell script run halt after the R bit completes.
Are there other easy ways of doing this?
The use case here is for scripts running on AWS where I would like the instance to stop after the script completes, so that I don't get charged for machine time after the job has run. The instance I use for data analysis is EBS-backed, so I don't want to terminate it, simply suspend it. Issuing a halt command from inside the instance has the same effect as a stop/suspend from the AWS console.
I'm impressed that works. (For anyone else surprised that an instance can stop itself, see notes 1 & 2.)
You can also try system("sudo halt"), so you don't need to run R as root, as long as the user account running R is able to run sudo. This is pretty common on a lot of EC2 AMIs.
Be careful about assuming R will always quit cleanly - believe it or not, one can crash R. Issuing the command inside R means that if R crashes, it never reaches the call to halt; calling it from within another wrapper script can be fragile too. It may be better to have a separate script that watches the R PID: take the PID from starting R, pass it to a script that checks ps, say every second, and stops the instance once that PID is no longer running (see the sketch below).
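A minimal sketch of such a watcher, assuming halt can be run via passwordless sudo (see the sudoers note further down):
#!/bin/sh
# usage: watch_and_halt.sh <pid-of-R-process>
RPID="$1"
# poll once per second until the R process disappears, then halt
while kill -0 "$RPID" 2>/dev/null; do
  sleep 1
done
sudo halt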
I think a better solution is to use the EC2 API tools (see http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ for documentation) to terminate or stop the instance. There's a difference between the two, and it matters whether your instance is EBS-backed or S3-backed. You needn't run as root to terminate the instance - the fact that you have the private key and certificate shows Amazon that you're the BOSS, way above the hoi polloi who merely have root access on your instance.
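From R, that could be as simple as shelling out to the API tools - a sketch, assuming the tools are installed and configured, with a hypothetical instance id:
# stop (suspend) an EBS-backed instance; use ec2-terminate-instances to terminate
system("ec2-stop-instances i-12345678")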
Because these credentials can be used for mischief, be careful about running the API tools from a given server: you'll need your certificate and private key on that server, which is a bad idea in the event of a security problem. It would be better to send a message to a master server and have it shut down the instance. If you have messaging set up in any way between instances, it can do all the work for you.
Note 1: Eric Hammond reports that halt will only suspend an EBS instance, so you still incur storage fees. If you happen to start a lot of such instances, this can clutter things up. (Your original question seems unclear about whether you mean to terminate or stop an instance.) He has other good advice on this page.
Note 2: A short thread on the EC2 developers forum gives advice for Linux & Windows users.
Note 3: EBS instances are billed for partial hours, even when restarted. (See this thread from the developer forum.) Having an auto-suspend kick in close to the hour mark can be useful, assuming the R process has finished its work, in case one might re-task that instance (i.e. to save on not restarting). Other useful tools to consider: setTimeLimit and setSessionTimeLimit, and various checkpointing tools (I have a Q that mentions a couple). Using an auto-kill is useful if one has potentially badly behaved code.
Note 4: I recently learned of the shutdown command in package fun. This is multi-platform. See this blog post for commentary, and code is here. Dangerous stuff, but it could be useful if you want to adapt to Windows. I haven't tried it, though.
Update 1. Three more ideas:
You could use .Last() and runLast = TRUE for q() and quit(), which could shut down the instance (see the sketch after this list).
If using littler, or invoking the script via Rscript, the same command-line functions could be used.
My favorite package of today, tcltk2, has a neat timer mechanism, tclTaskSchedule(), that can be used to schedule the execution of an expression. You could then go crazy executing stuff just before an hourly interval has elapsed.
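A minimal sketch of the .Last() idea, again assuming halt can be run via passwordless sudo:
# run halt when R exits normally; note that a crash would never reach this
.Last <- function() system("sudo halt")
q(save = "no", runLast = TRUE)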
system("echo 'rootpassword' | sudo halt")
However, the downside is having your root password in plain text in the script.
AFAIK, the ways you mentioned are the only ones. In any case, the script will have to run as root to be able to shut down the machine (if you find a way to do it without root, that's probably an exploit). You ask for an easier way, but system("halt") is just one additional line at the end of your script.
sudo is an option -- it allows you to run certain commands without prompting for any password. Just put something like this in /etc/sudoers
<username> ALL=(ALL) PASSWD: ALL, NOPASSWD: /sbin/halt
(of course replacing <username> with the name of the user running R) and system('sudo halt') should just work.
