Why does Meteor Up (MUP) fail on authentication?

I am currently trying to deploy a Meteor project to an external server for the first time. The server is hosted by DigitalOcean, running Ubuntu 16.04, and has an SSH key set up for password-free access.
The error I am getting from MUP is:
[159.203.165.13] - Setup Docker
events.js:165
throw er; // Unhandled 'error' event
^
Error: All configured authentication methods failed
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:290:17)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
at SSH2Stream.emit (events.js:180:13)
at parsePacket (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:3647:10)
at SSH2Stream._transform (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:551:13)
at SSH2Stream.Transform._read (_stream_transform.js:185:10)
at SSH2Stream._read (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:212:15)
at SSH2Stream.Transform._write (_stream_transform.js:173:12)
at doWrite (_stream_writable.js:410:12)
at writeOrBuffer (_stream_writable.js:396:5)
at SSH2Stream.Writable.write (_stream_writable.js:294:11)
at Socket.ondata (_stream_readable.js:651:20)
at Socket.emit (events.js:180:13)
at addChunk (_stream_readable.js:274:12)
at readableAddChunk (_stream_readable.js:261:11)
at Socket.Readable.push (_stream_readable.js:218:10)
Emitted 'error' event at:
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:292:12)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
[... lines matching original stack trace ...]
at Socket.Readable.push (_stream_readable.js:218:10)
At this point I have tried several solutions involving the mup file, as per other recommendations, such as:
1) Adding in a password - gives the exact same error, as though the change didn't occur.
2) Adding in the same SSH key that I use for authentication to the server as per DigitalOcean - says 'privateKey value does not contain a (valid) private key'. I have tried both the key that is used for authentication to the server and every other key I could find, short of generating a new one just for Meteor's use.
3) Leaving both blank and allowing it to 'try' ssh-agent - it pretends it doesn't know what ssh-agent is and throws an error saying the same thing as when I use a password.
I have looked through and followed the same instructions in the following article: http://meteortips.com/deployment-tutorial/digitalocean-part-1/
This article assumes that there are only two possible states: one where an SSH key has NOT been used or set up, so it needs to be generated, and one where an SSH key exists and is set up exactly where they expect it. Unfortunately I seem to be in a different situation. I generated a key using PuTTY prior to setting up the DigitalOcean server and created the droplet using that. After creation, the private key file did not exist on the server; the only thing in the ~/.ssh/ directory was a single file named "authorized_keys" that held the key I would use to connect to the server. This file cannot be used, nor can any file on the server in the other SSH key locations. I also tried copying the key file directly onto the server, to no avail.
In some vain hope of finding a solution I also tried running these same commands in both the Meteor build bundle and the source code folder. Neither worked. I should mention that although this is the only article I still have open to try for a solution, I have tried every one I could find using MUP.
If anyone can point me in the right direction with this so I can stop flailing wildly in the dark I would be incredibly grateful.
Edit: As requested, below is the current mup.js file with credentials removed.
module.exports = {
  servers: {
    one: {
      // TODO: set host address, username, and authentication method
      host: '111.111.111.11',
      username: 'root',
      // ssh-agent: '/home/Meteor/MeteorKey.pem'
      pem: '~/.ssh/id_rsa.pub'
      // password: 'password1'
      // or neither for authenticate from ssh-agent
    }
  },

  app: {
    // TODO: change app name and path
    name: 'app-name',
    path: '../',

    servers: {
      one: {},
    },

    buildOptions: {
      serverOnly: true,
    },

    env: {
      // TODO: Change to your app's url
      // If you are using ssl, it needs to start with https://
      ROOT_URL: 'http://www.app-name.com',
      MONGO_URL: 'mongodb://mongodb/meteor',
      MONGO_OPLOG_URL: 'mongodb://mongodb/local',
    },

    docker: {
      // change to 'abernix/meteord:base' if your app is using Meteor 1.4 - 1.5
      image: 'abernix/meteord:node-8.4.0-base',
    },

    // Show progress bar while uploading bundle to server
    // You might need to disable it on CI servers
    enableUploadProgressBar: true
  },

  mongo: {
    version: '3.4.1',
    servers: {
      one: {}
    }
  },

  // (Optional)
  // Use the proxy to setup ssl or to route requests to the correct
  // app when there are several apps
  // proxy: {
  //   domains: 'mywebsite.com,www.mywebsite.com',
  // }
};

The error message you are receiving:
Error: All configured authentication methods failed
means that the SSH connection is failing, so the credentials you are using (a pity you removed them from the config) are not working. Try a command-line ssh using these same credentials and troubleshoot that; once you can ssh into the server, mup should be able to do its work.
You can get more information out of ssh by specifying one or more -v parameters, e.g.:
ssh -v -v my_user@remote.com
and it will give you information about the authentication methods it is trying as it goes through them. This will help you narrow down the problem.
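A likely culprit, given the 'privateKey value does not contain a (valid) private key' message mentioned above, is that pem points at the public key: the config in the question uses ~/.ssh/id_rsa.pub, but pem must reference the private key file. Also, since the key was generated with PuTTY, it may be stored in .ppk format; exporting it to OpenSSH format first (puttygen: Conversions -> Export OpenSSH key) is the safest bet, since not every version of the ssh2 library that mup uses (visible in the stack trace paths above) parses .ppk files. As a minimal sketch (the key path is an assumption; the host is taken from the error output in the question):
servers: {
  one: {
    host: '159.203.165.13',
    username: 'root',
    // must be the PRIVATE key in OpenSSH format, not the .pub file
    pem: '~/.ssh/id_rsa'
  }
},
You can verify the same key works outside of mup with ssh -v -i ~/.ssh/id_rsa root@159.203.165.13 before running mup setup again.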

Related

Ansible Ad-Hoc command with ssh keys

I would like to set up Ansible on my Mac. I've done something similar in GNS3 and it worked, but here there are more factors I need to take into account. So I have Ansible installed. I added hostnames in /etc/hosts and I can ping using the hostnames I provided there.
I have created an ansible folder which I am going to use, and put ansible.cfg inside:
[defaults]
hostfile = ./hosts
host_key_checking = false
timeout = 5
inventory = ./hosts
In the same folder I have hosts file:
[tp-lab]
lab-acc0
When I try to run the following command: ansible tx-edge-acc0 -m ping
I am getting the following errors:
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
[WARNING]: Unhandled error in Python interpreter discovery for host tx-edge-acc0: unexpected output from Python interpreter discovery
[WARNING]: sftp transfer mechanism failed on [tx-edge-acc0]. Use ANSIBLE_DEBUG=1 to see detailed information
[WARNING]: scp transfer mechanism failed on [tx-edge-acc0]. Use ANSIBLE_DEBUG=1 to see detailed information
[WARNING]: Platform unknown on host tx-edge-acc0 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information.
tx-edge-acc0 | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "module_stderr": "Shared connection to tx-edge-acc0 closed.\r\n",
    "module_stdout": "\r\nerror: unknown command: /bin/sh\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 0
}
Any idea what might be the problem here? Much appreciated.
At first glance it seems that your Ansible controller does not load the configuration file (especially ansible.cfg) when the playbook is fired.
(From documentation) Ansible searches for configuration files in the following order, processing the first file it finds and ignoring the rest:
$ANSIBLE_CONFIG if the environment variable is set.
ansible.cfg if it’s in the current directory.
~/.ansible.cfg if it’s in the user’s home directory.
/etc/ansible/ansible.cfg, the default config file.
Edit: For peace of mind, it is good to use full paths.
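For instance, you can take the search order out of the equation entirely by pointing Ansible at the config file explicitly via the environment variable (using the /home/ansible path from the edit below and the tp-lab group from the question's hosts file):
$ ANSIBLE_CONFIG=/home/ansible/ansible.cfg ansible tp-lab -m ping
If that works while the bare command does not, the controller was simply not picking up your ansible.cfg from the directory you ran it in.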
EDIT Based on comments
$ cat /home/ansible/ansible.cfg
[defaults]
host_key_checking = False
inventory = /home/ansible/hosts # <-- use full path to inventory file
$ cat /home/ansible/hosts
[servers]
server-a
server-b
Command & output:
# Supplying inventory host group!
$ ansible servers -m ping
server-a | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
server-b | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

Euca 5.0 Ansible Console Task Failing

Background:
I am only able to get past the Ansible console install/config tasks by adding --region localhost in /usr/share/eucalyptus-ansible/roles/cloud-post/tasks/console.yml, wherever it calls tools that take that argument.
Otherwise each sub task fails like this: ["euca-describe-images: error: connection error (('Connection aborted.', gaierror(-2, 'Name or service not known')))"]
Running the commands from that playbook directly on the euca server being configured gives the same result unless I specify --region localhost
Problem:
I'm stuck here: [cloud-post : update console route53 system domain for eucalyptus-cloud authentication]
Error: "euform-update-stack: error (ValidationError): No updates are to be performed.", "stderr_lines": ["euform-update-stack: error (ValidationError): No updates are to be performed."]
All services are running, except the ImagingBackend, which is Not Ready.
No instances are running according to euca-describe-instances
Images are available:
IMAGE ami-5be483c81cf8bd65c eucalyptus-console-image-5-0-823/eucalyptus-console-image-5-0-823.raw.manifest.xml 000216594841 available private x86_64 machine instance-store hvm
TAG image ami-5be483c81cf8bd65c type eucalyptus-console-image
TAG image ami-5be483c81cf8bd65c version 5.0.823
IMAGE ami-f31092ddb73e29af9 eucalyptus-service-image-v5.0.100/eucalyptus-service-image.raw.manifest.xml 000216594841 available private x86_64 machine instance-store hvm
TAG image ami-f31092ddb73e29af9 provides imaging,loadbalancing
TAG image ami-f31092ddb73e29af9 type eucalyptus-service-image
TAG image ami-f31092ddb73e29af9 version 5.0.100
---
all:
  hosts:
    exp-euca.lan.com:
    exp-enc-[01:02].lan.com:
  vars:
    vpcmido_public_ip_range: "192.168.100.5-192.168.100.254"
    vpcmido_public_ip_cidr: "192.168.100.1/24"
    cloud_system_dns_dnsdomain: "cloud.lan.com"
    cloud_public_port: 443
    eucalyptus_console_cloud_deploy: yes
    cloud_service_image_rpm: no
    cloud_properties:
      services.imaging.worker.ntp_server: "x.x.x.x"
      services.loadbalancing.worker.ntp_server: "x.x.x.x"
  children:
    cloud:
      hosts:
        exp-euca.lan.com:
    console:
      hosts:
        exp-euca.lan.com:
    node:
      hosts:
        exp-enc-[01:02].lan.com:
EDIT:
Solved. Details are in the comments of the marked answer.
The name error most likely means that DNS for the domain cloud.lan.com is not being correctly delegated to your deployment. To test this, check if the nameserver is found:
dig +short NS cloud.lan.com
you should see "ns1.cloud.lan.com" and then should be able to use that nameserver to resolve services, e.g.
dig +short ec2.cloud.lan.com @ns1.cloud.lan.com
which should be the IP of the host for the compute service.
The second item is a bug in the ansible playbook that occurs when the stack is already present and up to date. To work around it, you can either update your playbook or delete the stack before running the playbook. Depending on how far the playbook progressed you may have a script to do this:
/usr/local/bin/console-manage-stack -a delete
the related playbook change is https://github.com/AppScale/ats-deploy/pull/36

Mupx deployment with Meteor.js fails when "Installing Docker"

I have Ubuntu 14.04, develop on it, and want to have a test server on the same computer. It is run in VirtualBox.
So I followed all the steps on GitHub for the Mupx setup and watched the video that the Meteor.js guide told me to watch. When I get to the command:
mupx setup
it shows me the screen with the error:
nejc@nejc-vb:~/Meteor Projects/CSGO/CSGO-deploy$ mupx setup
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
Configuration file : mup.json
Settings file : settings.json
“ Checkout Kadira!
It's the best way to monitor performance of your app.
Visit: https://kadira.io/mup ”
Started TaskList: Setup (linux)
[my_public_IP] - Installing Docker
events.js:72
throw er; // Unhandled 'error' event
^
Error: Timed out while waiting for handshake
at null._onTimeout (/usr/local/lib/node_modules/mupx/node_modules/nodemiral/node_modules/ssh2/lib/client.js:138:17)
at Timer.listOnTimeout [as ontimeout] (timers.js:121:15)
My mup.json file looks like this:
{
  // Server authentication info
  "servers": [
    {
      "host": "my_public_IP",
      "username": "nejc",
      "password": "123456",
      // or pem file (ssh based authentication)
      // WARNING: Keys protected by a passphrase are not supported
      //"pem": "~/.ssh/id_rsa"
      // Also, for non-standard ssh port use this
      //"sshOptions": { "port" : 49154 },
      // server specific environment variables
      "env": {}
    }
  ],

  // Install MongoDB on the server. Does not destroy the local MongoDB on future setups
  "setupMongo": true,

  // Application name (no spaces).
  "appName": "CSGO",

  // Location of app (local directory). This can reference '~' as the users home directory.
  // i.e., "app": "~/Meteor Projects/CSGO",
  // This is the same as the line below.
  "app": "/home/nejc/Meteor Projects/CSGO",

  // Configure environment
  // ROOT_URL must be set to your correct domain (https or http)
  "env": {
    "PORT": 80,
    "ROOT_URL": "http://my_public_IP"
  },

  // Meteor Up checks if the app comes online just after the deployment.
  // Before mup checks that, it will wait for the number of seconds configured below.
  "deployCheckWaitTime": 30,

  // show a progress bar while uploading.
  // Make it false when you deploy using a CI box.
  "enableUploadProgressBar": true
}

Intern target QT webdriver on remote machine

I have installed Intern on my local machine (192.168.1.50) and want to use the QT Browser webdriver on a remote machine (192.168.1.76). I've changed intern.js and added the correct hostname, as shown below:
tunnelOptions: {
  hostname: '192.168.1.207:9517'
},
The QT browser is specified as well:
environments: [
  { browserName: 'QTBrowser', version: '5.4', platform: [ 'LINUX' ] }
],
Tunnel is set to NullTunnel.
When executing the tests, the following error is shown:
C:\intern-tutorial>intern-runner config=tests/intern.js
Listening on 0.0.0.0:9000
Tunnel started
Suite QTBrowser 5.4 on LINUX FAILED
Error: [POST http://192.168.1.207:9517/wd/hub/session] connect ETIMEDOUT 192.168.1.207:4444
    at Server.createSession
    at retry
    at runCallbacks
    at run
    at nextTickCallbackWith0Args
    at process._tickCallback
TOTAL: tested 0 platforms, 0/0 tests failed; fatal error occurred
Error: Run failed due to one or more suite errors
    at emitLocalCoverage
    at finishSuite
    at runCallbacks
    at run
    at nextTickCallbackWith0Args
    at process._tickCallback
I am able to access the remote webdriver myself via the browser using the URL http://192.168.1.76:9517/status
So the connection is correct, but Intern adds the /wd/hub/session part, which actually isn't needed.
How can I stop Intern from doing this?
You can get past the 'wd/hub' issue by setting pathname in the tunnel options:
tunnelOptions: {
  pathname: '/',
  hostname: '192.168.1.207',
  port: 9517
}
However, there are currently a couple of incompatibilities between Intern and QtWebDriver. One is that QtWebDriver requires that headers use a specific capitalization scheme, like 'Content-Type'. However, the library Intern uses to handle its requests currently normalizes header names to lowercase. This should be fine, because headers are supposed to be case insensitive, but not everything follows the standard.
Another problem is that, unlike most other WebDriver implementations, QtWebDriver responds to a session creation call with a 303 response rather than a 200, and the redirect address is relative. While that should be fine, the version of the Leadfoot library used by Intern doesn't properly follow relative redirect addresses.
These issues should be fixed in a future version of Intern, but for the moment Intern doesn't work out-of-the-box with QtWebDriver.

Grunt connect or grunt serve?

I don't quite get the difference between the two. From the description, it seems like both are for opening a webserver.
If I use the grunt-serve plugin with the following configuration in my gruntfile.js:
serve: {
  options: {
    port: 9000
  }
}
I can open a webserver at the specified port, though I have to open the webserver manually in the browser (not sure how to make it open automatically in my default browser). The webserver is working fine and can load JSON files without any problem.
However, when I tried to do it with the grunt connect plugin, with the following configuration:
connect: {
  server: {
    options: {
      port: 9000,
      livereload: 35729,
      hostname: 'localhost',
      keepalive: true,
      open: true
    }
  }
},
open: {
  dev: {
    url: 'http://localhost:<%= connect.server.options.port %>/index.html'
  }
}

grunt.registerTask('serve', function (target) {
  grunt.task.run([
    'connect',
    'open:dev'
  ]);
});
I could automatically open a webserver at the specified port in my default browser, but the catch is that it couldn't load the JSON data the way grunt serve did.
I'd like to make the webserver work like Yeoman's, where running the command grunt serve connects to the webserver, automatically opens it in my default browser, and can load all my PHP/JSON files. It seems like the grunt-serve plugin is the right plugin for this, but I'm sure grunt-connect can do the same thing as grunt-serve too.
According to https://github.com/gruntjs/grunt-contrib-connect, the connect task makes the server available for a limited amount of time in order to run other tasks, such as unit testing. Once the tasks are complete, the server stops. As you have shown, there is a keepalive option to prevent the server from stopping. Connect is also useful for connecting to resources on another domain, such as a REST API. Typically this would be denied by the browser due to the same-origin policy - see https://github.com/drewzboto/grunt-connect-proxy.
So for development I would use the standard pattern "grunt serve" and connect for testing and proxying to resources on another domain :-)
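For completeness, a minimal sketch of a connect target that behaves like grunt serve for static assets (the base value is an assumption; grunt-contrib-connect serves the current directory by default, so point it at wherever your JSON files live):
connect: {
  server: {
    options: {
      port: 9000,
      hostname: 'localhost',
      keepalive: true,  // keep serving after the task list finishes
      open: true,       // launch the default browser automatically
      base: '.'         // directory to serve; .json files under it are plain static files
    }
  }
}
Note that connect is a static file server, so .json files are served as-is, but PHP will not execute without proxying to a real PHP server (e.g. via grunt-connect-proxy).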
