I want to use SSHLibrary to connect to a remote server.
*** Settings ***
Library    SSHLibrary

*** Test Cases ***
Connection
    ${RemoteServer}=    Open Connection    127.0.0.1    port=2123
    Login    127.0.0.1    gfi
    ${username}=    Execute Command    pwd
But I am getting an authentication failed error:
TRACE : Arguments: [ '127.0.0.1' | port=2123 ]
TRACE : Return: 1
INFO : ${RemoteServer} = 1
TRACE : Arguments: [ '127.0.0.1' | 'gfi' | delay='0.5 seconds' ]
INFO : Logging into '127.0.0.1:2123' as '127.0.0.1'.
DEBUG : Adding ssh-ed25519 host key for [127.0.0.1]:2123: 56cde5c5d3a8494218b68ed41b4e837d
FAIL : Authentication failed for user '127.0.0.1'.
DEBUG :
Traceback (most recent call last):
File "c:\python27\lib\site-packages\SSHLibrary\library.py", line 914, in login
is_truthy(look_for_keys), delay, proxy_cmd)
File "c:\python27\lib\site-packages\SSHLibrary\library.py", line 973, in _login
raise RuntimeError(e)
Ending test: Launchvm.Launchvm.Connection
This is the first time I am using SSHLibrary. Does it require any preconditions?
Can someone help me figure out how to solve the authentication failure?
You have to take a look at the arguments of the SSHLibrary Login keyword.
As seen in the documentation, Login's first argument is the username.
However, in your code you pass 127.0.0.1 as the username:
Login    127.0.0.1    gfi
And I assume that is not the username.
You can also see this in the log message: it tries to log into 127.0.0.1:2123 as 127.0.0.1.
INFO : Logging into '127.0.0.1:2123' as '127.0.0.1'.
If you update the code and call the Login keyword with a username and password as expected, it should run fine:
Login    <username>    <password>
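For example, here is a minimal sketch of the corrected test, assuming gfi was meant to be the username (replace the credentials with the real ones for your server):

*** Settings ***
Library    SSHLibrary

*** Test Cases ***
Connection
    ${RemoteServer}=    Open Connection    127.0.0.1    port=2123
    Login    gfi    <password>
    ${output}=    Execute Command    pwd
    Log    ${output}
    [Teardown]    Close All Connections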
How to telnet using the Telnet library of Robot Framework when no login password is required to telnet to the server
My code is:
*** Settings ***
Library    Telnet
Test Teardown    Close All Connections

*** Test Cases ***
Telnet to DUT
    Open Connection    192.168.2.254
    Login    ls    date    login_prompt=#    password_prompt=""
    Execute Command    ls
I just gave ls and date as placeholders, since no username or password is required to connect, and the correct/expected prompt is #.
I do get the "ls" output, but when the library then expects a password prompt it fails with the error below, because there is no password prompt:
"No match found for '""' in 3 seconds. Output:"
Can someone please help? Maybe this is easy and I am just not able to figure it out.
Thanks in advance.
I found the answer myself, kindly ignore the question.
Adding prompt_is_regexp=yes and prompt=# (the expected prompt) to Open Connection made it work.
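For reference, a minimal sketch of the working test as described above, assuming the device drops straight to a # prompt with no login step:

*** Settings ***
Library    Telnet
Test Teardown    Close All Connections

*** Test Cases ***
Telnet to DUT
    # The prompt is set on Open Connection so Execute Command knows where the output ends
    Open Connection    192.168.2.254    prompt=#    prompt_is_regexp=yes
    ${output}=    Execute Command    ls
    Log    ${output}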
Composer is failing a task because it is not able to read a log file; it complains about incorrect encoding.
Here's the log that appears in the UI:
*** Unable to read remote log from gs://bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** 'ascii' codec can't decode byte 0xc2 in position 6986: ordinal not in range(128)
*** Log file does not exist: /home/airflow/gcs/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Fetching from: http://airflow-worker-68dc66c9db-x945n:8793/log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-68dc66c9db-x945n', port=8793): Max retries exceeded with url: /log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1c9ff19d10>: Failed to establish a new connection: [Errno -2] Name or service not known',))
I tried viewing the file in the Google Cloud console, and it also throws an error:
Failed to load
Tracking Number: 8075820889980640204
But I am able to download the file via gsutil.
When I view the file, it seems to have text overwriting other text.
I can't show the entire file, but it looks like this:
--------------------------------------------------------------------------------
Starting attempt 1 of 1
--------------------------------------------------------------------------------
#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,313] {models.py:1569} INFO - Executing <Task(BigQueryOperator): merge_campaign_exceptions> on 2019-08-03T10:00:00+00:00#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,314] {base_task_runner.py:124} INFO - Running: ['bash', '-c', u'airflow run __campaign_exceptions_0_0_1 merge_campaign_exceptions 2019-08-03T10:00:00+00:00 --job_id 22767 --pool _bq_pool --raw -sd DAGS_FOLDER//-campaign-exceptions.py --cfg_path /tmp/tmpyBIVgT']#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:24,658] {base_task_runner.py:107} INFO - Job 22767: Subtask merge_campaign_exceptions [2019-08-04 10:01:24,658] {settings.py:176} INFO - setting.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
The #-#{} pieces seem to be "on top of" the typical log.
I faced the same problem. In my case the problem was that I had removed the google_cloud_default connection that was being used to retrieve the logs.
Check the configuration and look for the connection name.
[core]
remote_log_conn_id = google_cloud_default
Then check that the credentials used for that connection have the right permissions to access the GCS bucket.
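For context, here is a minimal sketch of the remote-logging settings in airflow.cfg that this connection id ties into (the bucket path is a placeholder; on Composer these values are normally managed for you):

[core]
remote_logging = True
remote_base_log_folder = gs://<your-composer-bucket>/logs
remote_log_conn_id = google_cloud_default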
I'm having a similar problem with viewing logs in GCP Cloud Composer. It doesn't appear to prevent the failing DAG task from running, though. It looks like a permissions error between GKE and the storage bucket where the log files are kept.
You can still view the logs by going into your cluster's storage bucket, in the same directory as your /dags folder, where you should also see a logs/ folder.
Your Helm chart should set up a global env:
- name: AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT
  value: "google-cloud-platform://"
Then you should deploy a Dockerfile with the root account only (not the airflow account); additionally, set up your Helm uid and gid as:
uid: 50000 #airflow user
gid: 50000 #airflow group
Then upgrade the Helm chart with the new config, for example:
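A possible upgrade command, assuming the official apache-airflow/airflow chart, a release named airflow, and a values.yaml holding the settings above (adjust to your deployment):

helm upgrade airflow apache-airflow/airflow -f values.yaml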
*** Unable to read remote log from gs://bucket
1) I found the solution after assigning the proper roles to the service account (a sketch follows below).
2) The SA key (JSON or txt) has to be added and configured on the connection referenced by remote_log_conn_id = google_cloud_default.
3) Restart the Airflow scheduler and webserver.
4) Restart the DAGs in Airflow.
You can then find the logs in the GCS bucket where they are configured.
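As a sketch of the first step, granting read access on the log bucket could look like this (the service account, project, and bucket names are placeholders; use roles/storage.objectAdmin instead if the same account also writes the logs):

gsutil iam ch serviceAccount:<sa-name>@<project>.iam.gserviceaccount.com:roles/storage.objectViewer gs://<your-composer-bucket>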
I uploaded a DAG file via the web UI, and when I click 'Graph View' -> ${my_dag} -> 'View Log', it shows:
*** Log file isn't local.
*** Fetching here: http://:8793/log/demo_dag/hello_task/2018-11-14T15:06:00
*** Failed to fetch log file from worker.
*** Reading remote logs...
*** Unsupported remote log location.
I have checked airflow.cfg and found this config info:
worker_log_server_port = 8793
base_log_folder = /root/airflow/logs
My questions are:
How do I set up the IP address for the log service (only the port is set up)?
I have set up a directory for the log service, so why does it still go to /log/.. ?
Any help is appreciated.
This can happen when the task status was manually changed (likely through the "Mark Success" option) and the task never received a hostname value on its record.
The webserver is attempting to reach out to a server, with no name, to get logs for a task that never ran.
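If you want to confirm this, you can check the hostname column on the task_instance row in the Airflow metadata database; a hypothetical query for the DAG and task in the URL above:

SELECT dag_id, task_id, execution_date, hostname
FROM task_instance
WHERE dag_id = 'demo_dag' AND task_id = 'hello_task';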
PS: Be careful running processes as the root user.
I've been getting this error and fixed it by correcting the socket volume path:
WARNING - OSError while attempting to symlink the latest log directory
On Windows the volume path needs a double slash, like this:
volumes:
  - //var/run/docker.sock:/var/run/docker.sock
See also: Bind to docker socket on Windows, and Setting up Airflow to run with Docker Swarm’s orchestration.
I am trying to run a couple of simple commands on Cisco IOS devices using Ansible (the ios_config module).
Specifically, I want to remove a user profile, but it requires answering a prompt and I am getting a timeout error.
I have noticed that there are prompt/answer parameters in the ios_command module, but it seems they are not supported in the ios_config module.
Has anyone run into a similar problem?
Ansible Task:
- name: remove user on remote devices
  ios_config:
    lines:
      - no username testuser
    provider: "{{ provider }}"
Output from Cisco device:
Cisco_Router(config)#no username testuser
This operation will remove all username related configurations with same name.Do you want to continue? [confirm]
Playbook output:
TASK [remove user on remote devices] *************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.module_utils.connection.ConnectionError: timeout trying to send command: end
fatal: [Cisco_Router]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_3_OlXK/ansible_module_ios_config.py\", line 583, in <module>\n main()\n File \"/tmp/ansible_3_OlXK/ansible_module_ios_config.py\", line 512, in main\n load_config(module, commands)\n File \"/tmp/ansible_3_OlXK/ansible_modlib.zip/ansible/module_utils/network/ios/ios.py\", line 168, in load_config\n File \"/tmp/ansible_3_OlXK/ansible_modlib.zip/ansible/module_utils/connection.py\", line 149, in __rpc__\nansible.module_utils.connection.ConnectionError: timeout trying to send command: end\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
Starting with Ansible 2.4 there is an ios_user module that can be used to create, edit and remove users.
You can remove a specific user with state: absent:
- name: set user view/role
  ios_user:
    name: testuser
    state: absent
    provider: "{{ provider }}"
The full documentation and further examples can be found at: https://docs.ansible.com/ansible/latest/modules/ios_user_module.html
_command modules and prompts
The various _command modules, including ios_command, support passing prompts.
For example:
- name: run commands that require answering a prompt
  ios_command:
    commands:
      - command: 'clear counters GigabitEthernet0/1'
        prompt: 'Clear "show interface" counters on this interface \[confirm\]'
        answer: 'y'
      - command: 'clear counters GigabitEthernet0/2'
        prompt: '[confirm]'
        answer: "\r"
See https://docs.ansible.com/ansible/latest/modules/ios_command_module.html for further info.
It seems the prompt waits for a confirmation, so you likely need to confirm the command with a second line, something like this:
- name: remove user on remote devices
  ios_config:
    lines:
      - no username testuser
      - 'yes'
    provider: "{{ provider }}"
I have tried this as well.
It seems that the ios_config module looks for the hostname(config)# prompt after executing each line. That's why the second line is never processed and I got the same timeout.
I'm unable to connect to SFTP with the script below, any ideas why?
*** Variables ***
${HOST} =        mysite.com
${USERNAME} =    username
${PASSWORD} =    password
${PORT} =        22222
${keyfile} =     /Users/victor/.ssh/

*** Test Cases ***
Open Connec
    Open Connection    stage1.globalcashcard.com    alias=LaborReady    port=22222
Enter Credentials
    Login    ${USERNAME}    ${PASSWORD}
List Dir
    List Directory    /
Close ALL Conns
    Close All Connections
Side note:
I even tried it with:
Login with P. Key
    Login With Public Key    username    /Users/username/.ssh/known_hosts
Results log
Status:
FAIL (critical)
Message:
SSHException: SSH session not active
00:00:00.004
KEYWORD SSHLibrary . List Directory /
Documentation:
Returns and logs items in the remote `path`, optionally filtered with `pattern`.
Start / End / Elapsed:
20170607 14:18:33.338 / 20170607 14:18:33.342 / 00:00:00.004
14:18:33.342
FAIL
SSHException: SSH session not active
I'm able to log in with FileZilla without any issues. One thing I did notice is that I cannot log in via terminal with the command below:
ssh username@stg1.mysite.com -p 22222
I get prompted for the password; after I enter it, I get the message below:
shell request failed on channel 0
I am able to connect via terminal with the command below:
sftp -o "Port 22222" username@stg1.mysite.com
Could this be the issue?
SSH and SFTP are different protocols. That is what your tests above confirmed.
When you use SSHLibrary you can use functions similar to what SFTP gives you, but not the exact same commands.
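Since your server seems to allow SFTP sessions but refuses an interactive shell ("shell request failed on channel 0"), one thing worth trying is to stick to SSHLibrary's file-oriented keywords, which talk to the SFTP subsystem, and avoid keywords that need a shell or exec channel (such as Execute Command or Write). A minimal sketch, assuming authentication itself succeeds with your existing credentials:

*** Settings ***
Library    SSHLibrary

*** Variables ***
${USERNAME} =    username
${PASSWORD} =    password

*** Test Cases ***
List Remote Directory Over SFTP
    Open Connection    stage1.globalcashcard.com    port=22222
    Login    ${USERNAME}    ${PASSWORD}
    # List Directory uses SFTP under the hood and does not need a remote shell
    ${items}=    List Directory    /
    Log    ${items}
    [Teardown]    Close All Connections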