[Atom][Remote-ftp][colfax] Unable to connect remote server - atom-editor

I've recently been trying to work with the Intel® AI DevCloud (see "Connecting from Linux or a Mac" in its documentation).
I can connect to the remote server colfax via SSH, but I'm not able to set up the atom-remote-ftp .ftpconfig correctly for colfax.
Here is what I did:
Download the Linux access key and put it at key_path.
Add the following to ~/.ssh/config:
Host colfax
User xxxxxx
IdentityFile key_path
ProxyCommand ssh -T -i key_path guest@cluster.colfaxresearch.com
Log in using:
ssh colfax
Would anyone please let me know what the host (?), user (xxxxxx), and pass ("") should be?
{
"protocol": "ftp",
"host": "***FTP_HOSTNAME_HERE***",
"port": 21,
"user": "***YOUR_USERNAME_HERE***",
"pass": "***YOUR_PASSWORD_HERE***",
"promptForPass": false,
"remote": "***REMOTE_PATH_HERE***",
"secure": true,
"secureOptions": {"rejectUnauthorized": false, "requestCert": true, "agent": false},
"connTimeout": 10000, // integer - How long (in milliseconds) to wait for the control connection to be established. Default: 10000
"pasvTimeout": 10000, // integer - How long (in milliseconds) to wait for a PASV data connection to be established. Default: 10000
"keepalive": 10000, // integer - How often (in milliseconds) to send a 'dummy' (NOOP) command to keep the connection alive. Default: 10000
"watch":[]
}
The config above refers to @Sanjay Verma's answer at "[Atom][Remote-ftp] Unable to connect ftps/ftpes". Thank you!

Please find the procedure below. Add the following to your ~/.ssh/config:
Host colfax
User uXXXX
IdentityFile ~/Downloads/colfax-access-key-xxxx
ProxyCommand ssh -T -i ~/Downloads/colfax-access-key-xxxx guest@cluster.colfaxresearch.com
Set the correct restrictive permissions on the private SSH key and the config file. To do this, run the following commands in a terminal:
chmod 600 ~/Downloads/colfax-access-key-xxxx
chmod 600 ~/.ssh/config
After the preparation steps above, you should be able to log in to your login node
ssh colfax
Once your connection is set up, you can copy local files to your login node like this:
scp /path/to/local/file colfax:/path/to/remote/directory/
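If you want Remote-FTP itself to talk to the cluster, the .ftpconfig would need to use the sftp protocol with your private key rather than plain FTP on port 21. Below is a minimal sketch based on the remote-ftp package's SFTP options; the host, user, remote path, and key path are placeholders for this setup, and since remote-ftp does not read ~/.ssh/config, the ProxyCommand hop above may still prevent a direct connection to your login node:
{
    "protocol": "sftp",
    "host": "cluster.colfaxresearch.com",
    "port": 22,
    "user": "uXXXX",
    "pass": "",
    "promptForPass": false,
    "remote": "/home/uXXXX",
    "agent": "",
    "privatekey": "~/Downloads/colfax-access-key-xxxx",
    "passphrase": "",
    "hosthash": "",
    "ignorehost": true,
    "connTimeout": 10000,
    "keepalive": 10000,
    "watch": []
}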

Related

After changing passwords in vault.yml, deployment fails in Trellis

I had a WordPress site set up using Trellis. Initially I set up the server and deployed without encrypting vault.yml.
Once everything was working fine, I changed the passwords in vault.yml and encrypted the file. But my deployment fails now,
and I get the following error:
TASK [deploy : WordPress Installed?]
**************************
System info:
Ansible 2.6.3; Darwin
Trellis version (per changelog): "Allow customizing Nginx `worker_connections`"
---------------------------------------------------
non-zero return code
Error: Error establishing a database connection. This either means that
the username and password information in your `wp-config.php` file is
incorrect or we can’t contact the database server at `localhost`. This
could mean your host’s database server is down.
fatal: [mysite.org]: FAILED! => {"changed": false,
"cmd": ["wp", "core", "is-installed", "--skip-plugins", "--skip-
themes", "--require=/srv/www/mysite.org/shared/tmp_multisite_constants.php"], "delta":
"0:00:00.224955", "end": "2019-01-04 16:59:01.531111",
"failed_when_result": true, "rc": 1, "start": "2019-01-04
16:59:01.306156", "stderr_lines": ["Error: Error establishing a
database connection. This either means that the username and password
information in your `wp-config.php` file is incorrect or we can’t
contact the database server at `localhost`. This could mean your host’s
database server is down."], "stdout": "", "stdout_lines": []}
to retry, use: --limit
@/Users/praneethavelamuri/Desktop/path/to/my/project/trellis/deploy.retry
Is there any step I missed? I followed these steps:
ansible-playbook server.yml -e env=staging
./bin/deploy.sh staging mysite.org
change passwords in staging/vault.yml
set vault password
inform ansible about password
encrypt the file (see the sketch after this list)
commit the file and push the repo
re-deploy, and then I get the error!
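A minimal sketch of the vault steps above, assuming Trellis's conventional layout (group_vars/staging/vault.yml, a .vault_pass file, and ansible.cfg in the trellis directory); adjust names and paths to your project:
# set the vault password in a local file (kept out of git)
echo "my-vault-password" > .vault_pass
# inform Ansible about the password file via ansible.cfg:
#   [defaults]
#   vault_password_file = .vault_pass
# encrypt the file, then commit and push it
ansible-vault encrypt group_vars/staging/vault.yml
git add group_vars/staging/vault.yml
git commit -m "Encrypt staging vault"
git push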
I got it solved. I had changed the sudo user's password in my vault too, so SSHing into the server, changing the sudo password to the one mentioned in the vault, and then provisioning and deploying again solved the issue.
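A sketch of that fix, assuming the Trellis default admin user and the hostname/environment from this question (substitute your own sudo user, host, and environment):
ssh admin@mysite.org                         # log in as the existing sudo user
sudo passwd admin                            # set its password to the one now in vault.yml
exit
ansible-playbook server.yml -e env=staging   # re-provision
./bin/deploy.sh staging mysite.org           # re-deploy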

salt-ssh permission denied when attempting to log into remote system

I am new to salt-ssh, and I have gotten it to work successfully for setting up a remote system. However, I have a login issue that I don't know how to address. What is happening is that when I try to run salt-ssh commands, I have to fight with the initial login process before eventually it just works. I am looking to see if I can narrow down what is causing me to have to fight with the login process.
I am using OS X to run my salt-ssh commands against an ubuntu vagrant vm.
I have added my root user's SSH key to the root user's authorized_keys on the vagrant VM. I have verified that I can log into the system using SSH without any issues:
sudo ssh root@192.168.33.10
Here are what my config files look like:
roster
managed:
  host: 192.168.33.10
  user: root
  sudo: true
Saltfile
salt-ssh:
  config_dir: /users/vmcilwain/projects/salt-ssh-rails
  roster_file: /users/vmcilwain/projects/salt-ssh-rails/roster
  log_file: /users/vmcilwain/projects/salt-ssh-rails/saltlog.txt
master
file_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/states
pillar_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/pillars
I run this command:
sudo salt-ssh -i '*' test.ping
I enter my local user's password and I get this output:
Permission denied for host 192.168.33.10, do you want to deploy the salt-ssh key? (password required):
[Y/n]
This is where my fight is. If the vagrant VM has the SSH key for the user I am executing salt-ssh as, why am I being told that permission is denied, especially when I verified I could SSH into the system without using salt-ssh?
Answering yes prompts me for the remote root user's password, which I didn't set and don't necessarily want to set, since an SSH key should have worked.
I'm hoping someone can tell me the best way to setup connections between both systems so that I don't have to have this fight every time.
I needed to set the priv in my roster to the rsa key that I am using to connect to the remote host:
priv: /Users/vmcilwain/.ssh/id_rsa
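For reference, the full roster entry with that change would look like this (host, user, and paths taken from this question; use whichever private key you actually connect with):
managed:
  host: 192.168.33.10
  user: root
  sudo: true
  priv: /Users/vmcilwain/.ssh/id_rsa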

Connection to MySQL Server using RMySQL Library through Bastion in R

On my local machine, I have SSHed into the bastion, from which I can then connect to the remote MySQL server. I know this is working because the terminal says I have successfully connected, and when I use an app like SQLPro and attempt to connect to the MySQL server with the correct permissions, I am able to log in successfully. Also, the command line
mysql -u username -p
works after I ssh.
Now I am trying to use the RMySQL library to connect to the server, and using
con<-dbConnect(MySQL(), user = "username", password = "pw", host = "127.0.0.1")
I get the return
Error in .local(drv, ...) : Failed to connect to database: Error: Can't connect to MySQL server on '127.0.0.1' (61)
It seems that R cannot determine that I have connected to the bastion. I say this because I have used the line below on the remote server itself before, and it worked just fine:
con<-dbConnect(MySQL(), user = "username", password = "pw", host = "localhost")
If you have MySQL Workbench, go to Server -> Client Connections and check the host name. Your host name might be incorrect.
I'm running R on Linux.
After a few hours of searching, the following documentation for AWS finally gave me the command I needed to connect to an RDS instance via an AWS bastion host:
https://aws.amazon.com/premiumsupport/knowledge-center/rds-connect-using-bastion-host-linux/
The "syntax 2" at the above link worked for me to set up the tunnel:
ssh -i "Private_key.pem" -f -N -L 3306:RDS_Instance_Endpoint:3306 ec2-user#EC2-Instance_Endpoint -v
This successfully forwarded my local port 127.0.0.1:3306 to the RDS port 3306.
I then connected to the RDS instance from within R with just:
cn = dbConnect(RMariaDB::MariaDB(), user = "myDataBaseUserName", password = "myPassword", host = "127.0.0.1", dbname = "mySchemaName")
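As a quick sanity check once the tunnel is up (a sketch using standard DBI functions on the cn connection object from the line above):
dbListTables(cn)   # should list the tables in mySchemaName if the tunnel and credentials are correct
dbDisconnect(cn)   # close the connection when finished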

Openstack-Keystone failing to start

I've tried almost everything in the past couple of days to get keystone running to no avail.
The setup is all on one machine: the virtualization, OpenStack, and Keystone are all on the same host. I've tried setting up Keystone with 127.0.0.1, localhost, and the IP of the host, with no luck. Here is my keystone.conf:
[DEFAULT] log_file = /var/log/keystone/keystone.log
admin_token = ***
bind_host = 192.168.33.11
public_port = 5000
admin_port = 35357
compute_port = 8774
# === Logging Options ===
# Print debugging output verbose = True
# Print more verbose output
# (includes plaintext request logging, potentially including passwords)
# debug = False
# Name of log file to output to. If not set, logging will go to stdout. log_file = keystone.log
# The directory to keep log files in (will be prepended to --logfile) log_dir = /var/log/keystone
# Use syslog for logging.
# use_syslog = False
# syslog facility to receive log lines
# syslog_log_facility = LOG_USER
# If this option is specified, the logging configuration file specified is
# used and overrides any other logging options specified. Please see the
# Python logging module documentation for details on logging configuration
# files. log_config = logging.conf
# A logging.Formatter log message format string which may use any of the
# available logging.LogRecord attributes.
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# Format string for %(asctime)s in log records.
# log_date_format = %Y-%m-%d %H:%M:%S
# onready allows you to send a notification when the process is ready to serve
# For example, to have it notify using systemd, one could set shell command:
# onready = systemd-notify --ready
# or a module with notify() method:
# onready = keystone.common.systemd
[sql] connection = mysql://keystone:***@localhost/keystone
# idle_timeout = 200
[identity] driver = keystone.identity.backends.sql.Identity
[catalog] template_file = /etc/keystone/default_catalog.templates driver = keystone.catalog.backends.sql.Catalog
# dynamic, sql-based backend (supports API/CLI-based management commands)
# driver = keystone.catalog.backends.sql.Catalog
# static, file-based backend (does *NOT* support any management commands)
# driver = keystone.catalog.backends.templated.TemplatedCatalog
# template_file = default_catalog.templates
[token] driver = keystone.token.backends.sql.Token
# driver = keystone.token.backends.kvs.Token
# Amount of time a token should remain valid (in seconds)
# expiration = 86400
I've enabled logging in the logging.conf file and set the level to DEBUG and INFO; however, nothing appears in the log files.
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# ps aux | grep keystone
root 25580 0.0 0.0 103236 880 pts/1 S+ 09:41 0:00 grep keystone
[root@* keystone]#
Any ideas will be greatly appreciated. Thank you!
As I mentioned in the comment, I've never seen a config file with the section headings on the same line as a config option:
[DEFAULT] log_file = /var/log/keystone/keystone.log
I've always seen it like this instead:
[DEFAULT]
log_file = /var/log/keystone/keystone.log
However, I have no idea if this is related to your issue.
To enable debug-level logging, make sure you set the following in /etc/keystone/logging.conf:
[logger_root]
level=DEBUG
Then try running keystone manually instead of as a service:
$ sudo -u keystone bash
$ HOME=/var/lib/keystone keystone-all --debug
Hopefully you'll see a relevant error message on standard out.
(I believe it will still send the logging to /var/log/keystone/keystone.log, not sure how to actually get it to log to standard out when running manually like this).
Add a valid token for admin_token. It should not be "*".
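For example, a sketch of generating a random token and setting it; the openssl/openstack-config invocation here is just one common way to do it, and any sufficiently random string works:
ADMIN_TOKEN=$(openssl rand -hex 10)
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN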
Check the below line:
[sql] connection = mysql://keystone:*@localhost/keystone
It should be something like:
connection = mysql://keystone:keystone@localhost/keystone
Refer to this URL for an example keystone.conf file:
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/keystone-conf-file.html
I ran into this issue as well. I am running Ubuntu 12.04 LTS. What I found was that the service start command in /etc/init/keystone.conf uses start-stop-daemon to run the service. It was written for a newer version than the one on my box: the --chdir variable is not accepted as an input. Once I removed that line, keystone started right up.
Try running:
start-stop-daemon --start --chuid keystone --name keystone --exec /usr/bin/keystone-all
/etc/init/keystone.conf after the change:
description "Keystone API server"
author "Soren Hansen <soren#linux2go.dk>"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec start-stop-daemon --start --chuid keystone \
--name keystone \
--exec /usr/bin/keystone-all
Check if your IP address is equal to HOST_IP=... in localrc.
This might be due to Keystone not being started properly, leaving port 35357 not in listening mode.
This seems to be anomalous behavior of the keystone service.
I am mentioning steps which have worked on my system for a Havana installation on Ubuntu 12.04, kernel version 3.2.0-67-generic, after a day of headache around this issue. Try these steps, preferably in the same order.
1) Remove the keystone package:
apt-get remove keystone
2) Reboot your system
reboot
3) After the reboot, install keystone again:
apt-get install keystone
4) Check the status of the keystone service:
service keystone status
It will show start/running.
5) Now make the necessary changes in /etc/keystone/keystone.conf.
After making changes in the conf file, DO NOT RESTART THE KEYSTONE SERVICE.
Use the stop and start commands to get the effect of a restart, but don't use restart:
service keystone stop
service keystone start
For further help, here is a dump of my CLI:
http://pastebin.com/sduuFCL7
There are multiple problems with the Icehouse documentation and install. Packstack is broken, so the only way to get started is to manually follow the upstream docs for your distro. It is very important to set up Keystone correctly first before moving on, because other services rely on it.
The paste file /usr/share/keystone/keystone-dist-paste.ini should be copied to /etc/keystone/ to be accessible to the config scripts, like this:
cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/
chown keystone:keystone /etc/keystone/*
Make sure to update keystone.conf with the new config_file value.
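A sketch of that update using the same openstack-config tool used below; the [paste_deploy] section name is assumed from standard Keystone configuration:
openstack-config --set /etc/keystone/keystone.conf paste_deploy config_file /etc/keystone/keystone-dist-paste.ini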
The documentation is wrong about the mysql connection; it should go under [sql] and not [database], so:
openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@controller/keystone
The name controller should resolve to whatever mysql is bound to. I would add it to /etc/hosts like this, if [mysqld]/bind-address in /etc/my.cnf is 10.1.1.100:
10.1.1.100 controller
Make sure to uncomment log_file in keystone.conf to see what is happening.
I was facing a similar issue. I followed the steps below and the openstack-keystone service started.
Edit the /etc/keystone/keystone.conf file and complete the following actions:
In the [DEFAULT] section
[DEFAULT]
admin_token = ADMIN_TOKEN
In the [database] section
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
In the [token] section, configure the UUID token provider and SQL driver
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
In the [revoke] section
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
After making the above changes, populate the Identity service database using the command:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Start the openstack-keystone service using the command below:
systemctl start openstack-keystone
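Optionally, a short follow-up (assuming the same systemd-based setup as in the answer above) to enable the service at boot and confirm it is running:
systemctl enable openstack-keystone
systemctl status openstack-keystone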

Binding external IP address to Rabbit MQ server

I have box A, and it has a consumer on it that listens on a RabbitMQ server.
I have box B that will publish a message to the listener.
As long as all of this is on box A and I start the RabbitMQ server with defaults, it works fine.
The defaults are host=127.0.0.1 on port 5672, but
when I telnet box.a.ip.addy 5672 from box B I get:
Trying box.a.ip.addy...
telnet: connect to address box.a.ip.addy: No route to host
telnet: Unable to connect to remote host: No route to host
Telnet on port 22 is fine; I can SSH into box A from box B.
So I assume I need to change the IP that the RabbitMQ server uses.
I found this: http://www.rabbitmq.com/configure.html. I now have a config file in the location the documentation said to use, named rabbitmq.config, and it contains:
[
{rabbit, [{tcp_listeners, {"box.a.ip.addy", 5672}}]}
].
So I stopped the server and started the RabbitMQ server again. It failed. Here are the errors from the error logs. It's a little over my head (in fact most of this is).
=ERROR REPORT==== 23-Aug-2011::14:49:36 ===
FAILED
Reason: {{case_clause,{{"box.a.ip.addy",5672}}},
[{rabbit_networking,'-boot_tcp/0-lc$^0/1-0-',1},
{rabbit_networking,boot_tcp,0},
{rabbit_networking,boot,0},
{rabbit,'-run_boot_step/1-lc$^1/1-1-',1},
{rabbit,run_boot_step,1},
{rabbit,'-start/2-lc$^0/1-0-',1},
{rabbit,start,2},
{application_master,start_it_old,4}]}
=INFO REPORT==== 23-Aug-2011::14:49:37 ===
application: rabbit
exited: {bad_return,{{rabbit,start,[normal,[]]},
{'EXIT',{rabbit,failure_during_boot}}}}
type: permanent
And here is some more from the startup log:
Erlang has closed
Error: {node_start_failed,normal}
^M
Crash dump was written to: erl_crash.dump^M
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot}}}}})^M
Please help
Did you try adding
RABBITMQ_NODE_IP_ADDRESS=box.a.ip.addy
to the /etc/rabbitmq/rabbitmq.conf file?
Per http://www.rabbitmq.com/configure.html#customise-general-unix-environment
Also, that documentation states that the default is to bind to all interfaces. Perhaps there is a configuration setting or environment variable already set on your system that restricts the server to localhost, overriding anything else you do.
UPDATE: After reading again I realize that the telnet should have returned "Connection Refused" not "No route to host." I would also check to see if you are having a firewall related issue.
You need to open up the TCP port on your firewall.
On Linux, find the iptables config file:
eric@dev ~$ find / -name "iptables" 2>/dev/null
/etc/sysconfig/iptables
Edit the file:
sudo vi /etc/sysconfig/iptables
Fix the file by adding a port:
# Generated by iptables-save v1.4.7 on Thu Jan 16 16:43:13 2014
*filter
-A INPUT -p tcp -m tcp --dport 15672 -j ACCEPT
COMMIT
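After editing the rules you still need to apply them; a sketch for the RHEL-style layout implied by /etc/sysconfig/iptables (note that the AMQP listener in this question is on port 5672, so that is the --dport you would open for it):
sudo service iptables restart     # reload the firewall rules
telnet box.a.ip.addy 5672         # re-test the connection from box B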
