I am trying to run the workflows using pmcmd or a .ksh script, but I am facing an error saying:
Failed to authenticate login. [UM_10205] [UM_10205] Failed to authenticate the user [user_name] that belongs to the security domain [e-directory]. For more information, see the domain logs.
Disconnecting from Integration Service.
I have tried logging into Informatica and running the workflows through Workflow Manager, and they run fine. I cannot find the reason it reports an authentication issue; if it really were an authentication issue, it should not let me log in to Informatica either.
pmcmd startworkflow -sv intg_ser -d Domain_name -usd e-directory -u user -pv encrypted_passwd -f app_CCMLB_FPERX -wait wf_4100_CCML_CAP_PKG_EXT
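For what it's worth, a minimal sketch of how the encrypted password variable referenced by -pv is usually wired up in a .ksh wrapper; the password value below is a placeholder, and the variable name encrypted_passwd matches the command above:
# encrypt the plain-text password once with Informatica's pmpasswd utility
pmpasswd 'MyPlainTextPassword'
# export the encrypted string in the variable that -pv points to, then start the workflow
export encrypted_passwd='<encrypted string printed by pmpasswd>'
pmcmd startworkflow -sv intg_ser -d Domain_name -usd e-directory -u user -pv encrypted_passwd -f app_CCMLB_FPERX -wait wf_4100_CCML_CAP_PKG_EXT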
When I used the workflow feature of NebulaGraph Explorer, the task failed with this error:
There are 0 NebulaGraph Analytics available. clusterSize should be less than or equal to it
You can check according to the following procedure:
Check whether SSH password-free login between nodes is configured correctly. You can run the ssh <user_name>@<node_ip> command on the Dag Controller machine to check whether the login succeeds (see the command sketch after these steps).
Note that if the Dag Controller and Analytics are on the same machine, you also need to configure SSH password-free login.
Check the configuration files of the Dag Controller:
Check whether the SSH user in etc/dag-ctrl-api.yaml is the same as the user who starts the Dag Controller service and configures SSH password-free login.
Check whether the algorithm path in etc/tasks.yaml is correct.
Check whether Hadoop and Java paths in scripts/set_env.sh are correct.
Restart the Dag Controller for the settings to take effect.
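As a quick sketch of the first check (the user name and node IP below are placeholders, not values from any particular setup):
ssh analytics_user@192.168.8.100    # should log in without a password prompt
ssh analytics_user@127.0.0.1        # also required when the Dag Controller and Analytics share a machine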
When I used the workflow feature of NebulaGraph Explorer, the task reported the following error:
handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain?
How do I resolve this error?
You need to reconfigure the permissions to 744 on the .ssh folder and 600 on the .ssh/authorized_keys file.
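For example, assuming the key files live under the home directory of the user that runs the Dag Controller:
chmod 744 ~/.ssh
chmod 600 ~/.ssh/authorized_keys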
The workflow/Dag Controller leverages SSH for some of its RPC calls (dirty, but it works), which requires SSH key-based authentication.
To debug this, log in to the workflow machine and manually perform the SSH login with exactly the same user and host (even when it calls itself, SSH is still used). The tip is to add the -vvv argument to show, in verbose mode, where it goes wrong, as @lisa liu posted; it could be a permission issue on the corresponding files or a cipher handshake issue.
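A minimal sketch of that manual check, assuming the Dag Controller runs as user dag and connects to itself on localhost:
ssh -vvv dag@127.0.0.1
The verbose output shows which keys are offered and why they are rejected, which usually points at the failing file permission or cipher negotiation step.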
I've used the Ansible install to run all services on a single host, plus two separate physical node controllers.
Everything installed fine and all of my services are green, but I don't think imaging workers are launching to handle my first image uploads. While troubleshooting, I noticed that no node controllers are reported by:
euserv-describe-node-controllers
It doesn't return an error, just blank output. I've unregistered and re-registered the two node controllers and copied the CLC admin keys with no errors, but I still can't see any output from that command. cloud-output and the various NC log files seem to show successful startup.
I switched to ImagingServiceAdministrator to look for imaging worker instances with the command below and got blank output, which is what started me looking at the NCs:
euca-describe-instances --filter tag-value=euca-internal-imaging-workers
The imaging service is not required for installing instance-store images, e.g.:
python <(curl -Ls https://eucalyptus.cloud/images)
or (on an ansible deployed cloud):
eucalyptus-images --size 1
To check on the status of node controllers in a deployment you will need to have cloud administrator credentials. You can check this using:
euare-getcallerid
euare-accountlist
and verifying that the eucalyptus account is being used.
Node controllers are managed via a cluster controller so you should check the status for both:
euserv-describe-services -a --filter service-type=cluster
euserv-describe-services -a --filter service-type=node
This differs from euserv-describe-node-controllers in that it does not include information on running instances.
If there are any issues you can check for service events:
euserv-describe-events
and look at the logs (/var/log/eucalyptus/...) to further investigate.
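For example, on the cluster and node controller hosts (cc.log and nc.log are typical file names; the exact set varies by component):
tail -n 100 /var/log/eucalyptus/cc.log
tail -n 100 /var/log/eucalyptus/nc.log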
Check that the IP addresses you used when registering the node controllers are the ones the node controllers are actually listening on (NC_ADDR in /etc/eucalyptus/eucalyptus.conf).
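A quick way to check this on each node controller host, for example:
grep NC_ADDR /etc/eucalyptus/eucalyptus.conf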
If you are using firewalld, restart/reload it after deployment to ensure it is running with the latest settings.
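For example:
sudo firewall-cmd --reload
or, to restart the service entirely:
sudo systemctl restart firewalld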
Does anyone know how I can get the credentials of the logged-in user in a Heroku dyno?
Context: we are trying to log access to the Heroku production Rails console. People access it via heroku run or heroku exec. Heroku does authenticate the user who executed the command; however, I'm unable to retrieve this information once the Rails console is started.
It looks like the Heroku CLI is not available in the dyno, and whoami returns dyno.
Eg:
$ heroku run rails c -a opt-prod
<in rails console>
irb(main):001:0> ENV
irb(main):001:0> `whoami`
We contacted Heroku and there is no option to do this. Instead, they suggested the workaround below.
A running dyno doesn't contain
information on the user that started the process, so there's no way to
get the "current user" from inside a dyno on Heroku.
However, one idea for a workaround is to grab the current Heroku CLI
user and send it along with the console command when starting.
Something like this will work (the CLI user is resolved locally and passed into the dyno's environment):
heroku run CURRENT_HEROKU_USER="$(heroku auth:whoami)" rails c --app my-app-name
> Running CURRENT_HEROKU_USER=chap@heroku.com rails c...
ENV['CURRENT_HEROKU_USER']
> "chap#heroku.com"
The downside of this approach is that there's no way to validate that value from inside the dyno. However, you could use
your application logs to retroactively verify the info. The API will
log the user who started the console process in your application logs:
CURRENT_HEROKU_USER=chap@heroku.com rails c by user chap@heroku.com
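For example, a sketch of pulling that line back out of the logs afterwards (the app name is a placeholder):
heroku logs --app my-app-name | grep "by user"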
I'm sorry we don't have a better answer for you here.
Please let me know if I can be of more help.
I don't have hands-on experience with Stackdriver monitoring configuration for Google Cloud Platform VM instance monitoring. Basic monitoring for our project works fine, but when trying to install the Stackdriver agent on Ubuntu 14.04 it gives us an error and Stackdriver with the agent does not work for us. Below is the error for your reference.
Jan 3 10:43:42 ubuntu-uat01 collectd[2283]: write_gcm: Unsuccessful HTTP request 403: {
  "error": {
    "code": 403,
    "message": "User is not authorized to access the project monitoring records.",
    "status": "PERMISSION_DENIED"
  }
}
Jan 3 10:43:42 ubuntu-uat01 collectd[2283]: write_gcm: Error -2 from wg_curl_get_or_post
Jan 3 10:43:42 ubuntu-uat01 collectd[2283]: write_gcm: wg_transmit_unique_segment failed.
Can someone help me set up Stackdriver monitoring with the agent installed on the server, or point me to documentation if any is available?
I got this precise error on my instances until I added the 'Monitoring Metric Writer' role to the service account.
You could also, as Igor suggested, add the monitoring API scope to the instance.
See the Stackdriver Monitoring docs.
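If you go the scope route instead, a rough sketch of changing the instance's access scopes (instance name, zone, and service account email are placeholders, and the instance typically has to be stopped before its scopes can be changed):
gcloud compute instances set-service-account INSTANCE_NAME --zone ZONE --service-account SERVICE_ACCOUNT_EMAIL --scopes https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/logging.write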
Most likely you either don't have the Stackdriver Monitoring API enabled in your project, or your VM does not have the correct scopes. There are extensive instructions on the Google Cloud site for installing the agent, including the troubleshooting page.
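For the first point, the API can be enabled from the CLI, for example (replace the project name with your own):
gcloud services enable monitoring.googleapis.com --project PROJECT_NAME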
If you are installing the Stackdriver monitoring and logging agents on your instance, you need to make sure the service account attached to your instance has the proper rights to write data to Stackdriver. Simply run the following commands to assign the proper roles:
gcloud projects add-iam-policy-binding PROJECT_NAME --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" --role="roles/logging.logWriter"
gcloud projects add-iam-policy-binding PROJECT_NAME --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" --role="roles/monitoring.metricWriter"
Replace PROJECT_NAME and SERVICE_ACCOUNT_EMAIL with the proper values from your environment.
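Once the roles are assigned, the agent usually needs a restart to pick up the new permissions, for example:
sudo service stackdriver-agent restart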