How to load an Autosys profile in JIL - autosys

I have an Autosys job which has a profile value specified in the JIL. I am very new to Autosys and I need to check the logs of the job. When I go to the host, it says that the log location specified in the JIL (like $AUTLOG/errorlogs) does not exist. Do I have to load some profile on the host? How can I do this so that I am able to access the variables in the profile?

While defining the JIL for the job, add the following attribute:
profile: /path/profile.ini
All the variables defined in the /path/profile.ini file will be read.
Remember to export the variables inside the profile.ini file, e.g.:
export Var1="some_path"
export Var2="DB_USER"
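On the host itself, you can source the same profile so its exported variables resolve in your shell. A minimal sketch, using a hypothetical profile under /tmp as a stand-in for the real path from the JIL:

```shell
# hypothetical stand-in for the profile file referenced in the JIL
cat > /tmp/profile.ini <<'EOF'
export AUTLOG="/tmp/autosys/logs"
EOF

# source it so the exported variables take effect in the current shell;
# the log path from the JIL ($AUTLOG/errorlogs) now resolves
. /tmp/profile.ini
echo "$AUTLOG/errorlogs"   # -> /tmp/autosys/logs/errorlogs
```

With the real profile, `. /path/profile.ini` followed by `cd "$AUTLOG/errorlogs"` should take you to the job's logs.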
Good luck!

Related

Using R and paws: How to set credentials using profile in config file?

I use SSO and a profile as defined in ~/.aws/config (MacOS) to access AWS services, for instance:
aws s3 ls --profile myprofilename
I would like to access AWS services from within R, using the paws package. In order to do this, I need to set my credentials in the R code. I want to do this by accessing the profile in the ~/.aws/config file (as opposed to listing access keys in the code), but I haven't been able to figure out how to do this.
I looked at the extensive documentation here, but it doesn't seem to cover my use case.
The best I've been able to come up with is:
x = s3(config = list(credentials = list(profile = "myprofilename")))
x$list_objects()
... which throws an error: "Error in f(): No credentials provided", suggesting that the first line of code above does not connect to my profile as stored in ~/.aws/config.
An alternative is to generate a user/key with programmatic access to your S3 data. Then, assuming that ~/.aws/env contains the values of the generated key:
AWS_ACCESS_KEY_ID=abc
AWS_SECRET_ACCESS_KEY=123
AWS_REGION=us-east-1
insert the following line at the beginning of your file:
readRenviron("~/.aws/env")
This AWS blog provides details about how to get temporary credentials for programmatic access. If you can get the credentials and set the appropriate environment variables, then the code should work fine without the profile name.
Alternatively, you can try the following if you can obtain temporary credentials using the AWS CLI.
Check if you can generate temporary credentials
aws sts assume-role --role-arn <value> --role-session-name <some-meaningful-session-name> --profile myprofilename
If you can execute the above successfully, then you can use this method to automate the process of generating credentials before your code runs.
Put the above command in a bash script get-temp-credentials.sh and generate a JSON containing the temporary credentials as per the documentation.
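The reshaping step inside that script can be sketched with jq. This is a hedged example: the `response` value is a canned stand-in for real `aws sts assume-role` output, and the key and token strings are placeholders.

```shell
# canned stand-in for the JSON returned by `aws sts assume-role`
response='{"Credentials":{"AccessKeyId":"ASIAEXAMPLE","SecretAccessKey":"wJalrEXAMPLE","SessionToken":"FwoGEXAMPLE","Expiration":"2024-01-01T00:00:00Z"}}'

# reshape into the schema that credential_process expects
# (the Version field is required and must be 1)
echo "$response" | jq '{Version: 1,
  AccessKeyId: .Credentials.AccessKeyId,
  SecretAccessKey: .Credentials.SecretAccessKey,
  SessionToken: .Credentials.SessionToken,
  Expiration: .Credentials.Expiration}'
```

In the real script, the `response` assignment would be replaced by the assume-role call above.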
Add a new profile programmatic-access in the ~/.aws/config
[profile programmatic-access]
credential_process = "/path/to/get-temp-credentials.sh"
Finally, update the R code to use programmatic-access as the profile name.
If you have AWS CLI credentials set up in a profile, e.g. ~/.aws/config:
[profile myprof]
region=eu-west-2
output=json
.. and credentials, e.g. ~/.aws/credentials:
[myprof]
aws_access_key_id = XXX
aws_secret_access_key = xxx
.. paws will use these if you add a line to ~/.Renviron:
AWS_PROFILE=myprof

How to capture user who triggered Airflow DAG

I've enabled RBAC as environment variable in docker-compose file.
- AIRFLOW__WEBSERVER__RBAC=True
I want to capture the user who kicked off a dag inside my dag files.
I tried using from flask_login import current_user. But, I get the value of current_user as None.
May I know how to capture user details using RBAC?
According to the Airflow documentation, the RBAC security model is handled by Flask AppBuilder (FAB):
Airflow uses flask_login and exposes a set of hooks in the
airflow.default_login module. You can alter the content and make it
part of the PYTHONPATH and configure it as a backend in
airflow.cfg.
The flask_login module provides user management operations, so you can fetch the current user via the flask_login.current_user property, adding some extra fields as described in pull request #3438:
if current_user and hasattr(current_user, 'user'):
    user = current_user.user.username
elif current_user and hasattr(current_user, 'username'):
    user = current_user.username
I suppose that you can use current_user.user.username to fetch a user login.

Alfresco Activiti - How to modify process variables?

How can I modify process variables in Alfresco's workflows? (embedded Activiti)
I know that they are created when the process is initiated but I'm unable to change them in the Java/JavaScript/process code. (unless I use REST API directly)
I can successfully change local and execution variables ( execution.setVariable("san_value", "1000"); ) but I am unable to change variables which are shown in the Workflow details page.
For the workflow details page, the data is fetched from the start task. Could you share how you've added the process variables and how you're accessing them?
Instead of using a process variable, you can add a new property (using an aspect) to the workflow model; you should then be able to access and fetch the variable across all the tasks in the workflow process.

SaltStack: Get logged in user name

I'm trying to use Salt to set up my dev environment so that it can be identical to my staging environment. One of the things I need to do is add some environment variables to the current users .bashrc file.
I currently have this in my .sls file:
/home/fred/.bashrc:
  file.append:
    - text:
      - export GOROOT=/usr/local/go
      - export GOPATH=/home/fred/dev/go
      - export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
I then use salt-call to run the state locally under root (since there are other things that I need root for). This isn't ideal, however, if your name isn't fred. How can I rewrite this so that it will work for the current user even when salt-call is run under root?
It seems like I can get to the name of the machine, so if the machine name is something like username-dev, would I be able to parse that somehow and replace all of the instances of fred with the new username? Is there a better way?
This method gives you the ability to pass a different username each time. I assume that you don't have a specific list of users:
/home/{{ pillar.newuser }}/.bashrc:
  file.append:
    - text:
      - export GOROOT=/usr/local/go
      - export GOPATH=/home/{{ pillar.newuser }}/dev/go
      - export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
The command line:
salt-call state.sls golang_env pillar='{"newuser": "fred"}'
But if you have a specific users list then you can follow this method as mentioned at the docs
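If you do have a fixed list of users, a loop over a pillar value works too. This is a sketch assuming a hypothetical pillar key dev_users holding a list of usernames:

```yaml
{% for user in pillar.get('dev_users', []) %}
/home/{{ user }}/.bashrc:
  file.append:
    - text:
      - export GOROOT=/usr/local/go
      - export GOPATH=/home/{{ user }}/dev/go
      - export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
{% endfor %}
```

The loop is rendered by Jinja at state-compile time, so one state is generated per user in the list.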

nginx: write access log to SQLite

Please could you help me:
Can I have NGINX write its access log directly to an SQLite table, with the same fields as in access.log?
I know I can try to do it with Lua, but I do not know how to trigger nginx to run a Lua script for every record in the access.log file.
You would use the log_by_lua phase to write access logs, as it runs last and allows you to access variables like upstream_response_time, etc.
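A minimal sketch of that approach, assuming OpenResty (or nginx built with lua-nginx-module) plus the lsqlite3 Lua library; the database path and table schema here are assumptions, and the table would be created once beforehand:

```nginx
location / {
    # ... normal request handling ...

    log_by_lua_block {
        -- runs once per request, after the response has been sent
        local sqlite3 = require("lsqlite3")
        local db = sqlite3.open("/var/log/nginx/access.db")
        -- columns mirroring common access.log fields
        local stmt = db:prepare([[
            INSERT INTO access_log (remote_addr, request, status, body_bytes_sent)
            VALUES (?, ?, ?, ?)
        ]])
        stmt:bind_values(ngx.var.remote_addr, ngx.var.request,
                         ngx.var.status, ngx.var.body_bytes_sent)
        stmt:step()
        stmt:finalize()
        db:close()
    }
}
```

Opening the database on every request is the simplest form but slow; in practice you would keep a connection per worker, or buffer rows and flush them in batches.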
