Meteor Down SSL support

I use "meteor down" to load-test my Meteor app. I am totally happy with this tool, but since I switched to HTTPS it no longer works.
I only get this output:
Time : 10/31/2016, 9:06:52 AM
--------------------------------------------------
Time : 10/31/2016, 9:06:57 AM
--------------------------------------------------
Time : 10/31/2016, 9:07:02 AM
--------------------------------------------------
Time : 10/31/2016, 9:07:07 AM
--------------------------------------------------
Time : 10/31/2016, 9:07:12 AM
--------------------------------------------------
Time : 10/31/2016, 9:07:17 AM
Here is my configuration:
meteorDown.run({
  concurrency: 10,
  url: 'https://example.com'
});
What am I doing wrong?
UPDATE
After switching to the fork, something has changed, but unfortunately I am now getting an error:
/root/.nvm/versions/node/v4.6.0/lib/node_modules/meteor-down/lib/mdown.js:47
if(error) throw error;
^
Error during WebSocket handshake: Unexpected response code: 404
Here is my new configuration:
meteorDown.run({
  concurrency: 10,
  url: 'wss://example.com/websocket'
});

I think you are OK (it's not you!): there is an outstanding pull request on the meteor-down package to allow the use of secure WebSockets as well as regular ones.
https://github.com/meteorhacks/meteor-down/pull/20/commits/68a297ac987390b4df1f2e8e616e118d8291a4ce
You should ask the author to accept the PR, or use the fork:
https://github.com/louis49/meteor-down
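For reference, a complete meteor-down script pairs run with an init callback that drives each simulated client. The following is only a sketch, assuming the fork keeps the same init/run API as the original package; 'ping' is a placeholder for a method your app actually defines:

var meteorDown = require('meteor-down');

// Each simulated client runs this callback.
meteorDown.init(function (Meteor) {
  // Call a method your app actually defines; 'ping' is just a placeholder.
  Meteor.call('ping', function (err, res) {
    // End this client's session once the call returns.
    Meteor.kill();
  });
});

meteorDown.run({
  concurrency: 10,
  url: 'wss://example.com/websocket'
});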

Related

SSL error when creating Vue.js app using API Platform client generator

I tried to create a client app for my API created using API Platform, following this guide: https://api-platform.com/docs/client-generator/vuejs/.
I use Laravel Homestead as the development VM.
I've added the myapp .crt file to Keychain Access.
The problem is that when I try to execute
generate-api-platform-client --generator vue https://myapp-api.local/api src/
it returns an error message like this:
{
api: Api { entrypoint: 'https://myapp-api.local/api', resources: [] },
error: FetchError: request to https://myapp-api.local/api failed, reason: unable to verify the first certificate
at ClientRequest.<anonymous> (/Users/permana.jayanta/.config/yarn/global/node_modules/node-fetch/index.js:133:11)
at ClientRequest.emit (events.js:209:13)
at TLSSocket.socketErrorListener (_http_client.js:406:9)
at TLSSocket.emit (events.js:209:13)
at emitErrorNT (internal/streams/destroy.js:91:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
name: 'FetchError',
message: 'request to https://myapp-api.local/api failed, reason: unable to verify the first certificate',
type: 'system',
errno: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE',
code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
},
response: undefined,
status: undefined
}
I'm thinking this is related to the SSL certificate, i.e. that Node doesn't recognise it. How do I make Node.js recognise the custom SSL certificate generated by Homestead?
It failed to verify the HTTPS certificate. To disable TLS verification, type this in a shell:
export NODE_TLS_REJECT_UNAUTHORIZED=0
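Disabling verification gets past the error but is insecure. If you would rather have Node trust the Homestead certificate, the sketch below shows the underlying mechanism in a plain Node script; this is an illustration, not the generator CLI itself, and the certificate path is an assumption. For CLI tools such as generate-api-platform-client, pointing the NODE_EXTRA_CA_CERTS environment variable at the same certificate file achieves the equivalent.

// Illustration: make a Node HTTPS request trust a custom certificate
// instead of disabling TLS verification globally.
const fs = require('fs');
const https = require('https');

const agent = new https.Agent({
  // Assumed path; use the certificate file Homestead generated for your site.
  ca: fs.readFileSync('/path/to/myapp-api.local.crt')
});

https.get('https://myapp-api.local/api', { agent: agent }, (res) => {
  console.log('status:', res.statusCode);
});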

Session not created error with physical device

I've been trying to run automated tests with Appium. They are already running on physical devices, but I get the error: [WD Proxy] Got an unexpected response: {"value":{"error":"session not created","message":"'capabilities' is mandatory to create a new session"}
I've gone through the configuration guide and the WebDriverAgent seems to be running correctly on the device. When I make a request to the WebDriverAgent running on the device, I get the response:
[WD Proxy] Determined that the downstream protocol for proxy is W3C
[XCUITest] WebDriverAgent information:
[XCUITest] {
[XCUITest] "message": "WebDriverAgent is ready to accept commands",
[XCUITest] "state": "success",
However, when Appium makes the request to create a new WDA session, it receives the following response:
[WD Proxy] Got an unexpected response: {"value":{"error":"session not created","message":"'capabilities' is mandatory to create a new session"},"sessionId":"595F87C8-0564-4B75-94B4-7D67BA0AF382"}
Using these capabilities:
'app': app,
'bundleId' : bundle_id,
'platformName': platform_name,
'automationName': automation_name,
'platformVersion': platform_version,
'deviceName': device_name,
'udid': udid,
'xcodeOrgId': xcode_org_id,
'xcodeSigningId': xcode_signing_id,
'newCommandTimeout': new_command_timeout,
'updatedWDABundleId': updated_WDA_Bundle_Id,
'agentPath': "/usr/local/lib/node_modules/appium/node_modules/appium-xcuitest-driver/WebDriverAgent/WebDriverAgent.xcodeproj",
'bootstrapPath': "/usr/local/lib/node_modules/appium/node_modules/appium-xcuitest-driver/WebDriverAgent"
Is there anything else I might be missing?
Not sure if these capabilities are actually required:
'updatedWDABundleId': updated_WDA_Bundle_Id,
'agentPath': "/usr/local/lib/node_modules/appium/node_modules/appium-xcuitest-driver/WebDriverAgent/WebDriverAgent.xcodeproj",
'bootstrapPath': "/usr/local/lib/node_modules/appium/node_modules/appium-xcuitest-driver/WebDriverAgent"
Try removing them. Also, in my tests I have noticed that it is necessary to restart the Appium server every time you switch from a real device to a simulator.
Hope it helps.

Why does Meteor Up (MUP) fail on authentication?

I am currently trying to deploy a Meteor project to an external server for the first time. The server is hosted by DigitalOcean, running Ubuntu 16.04, and has an SSH key set up for password-free access.
The error I am getting from MUP is:
[159.203.165.13] - Setup Docker
events.js:165
throw er; // Unhandled 'error' event
^
Error: All configured authentication methods failed
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:290:17)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
at SSH2Stream.emit (events.js:180:13)
at parsePacket (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:3647:10)
at SSH2Stream._transform (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:551:13)
at SSH2Stream.Transform._read (_stream_transform.js:185:10)
at SSH2Stream._read (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:212:15)
at SSH2Stream.Transform._write (_stream_transform.js:173:12)
at doWrite (_stream_writable.js:410:12)
at writeOrBuffer (_stream_writable.js:396:5)
at SSH2Stream.Writable.write (_stream_writable.js:294:11)
at Socket.ondata (_stream_readable.js:651:20)
at Socket.emit (events.js:180:13)
at addChunk (_stream_readable.js:274:12)
at readableAddChunk (_stream_readable.js:261:11)
at Socket.Readable.push (_stream_readable.js:218:10)
Emitted 'error' event at:
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:292:12)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
[... lines matching original stack trace ...]
at Socket.Readable.push (_stream_readable.js:218:10)
At this point I have tried several solutions involving the mup file as per other recommendations such as:
1) Adding in a password - gives the exact same error as though the change didn't occur.
2) Adding in the same SSH key that I use for authentication to the server, as per DigitalOcean - says 'privateKey value does not contain a (valid) private key'. I have tried both the key that is used for authentication to the server and every other key I could find, short of generating a new one just for Meteor's use.
3) Leaving both blank and allowing it to 'try' ssh-agent - it pretends it doesn't know what ssh-agent is and throws an error saying the same thing as when I use a password.
I have looked through and followed the same instructions in the following article: http://meteortips.com/deployment-tutorial/digitalocean-part-1/
This article assumes that there are only two possible states: either an SSH key has NOT been set up and needs to be generated, or an SSH key exists and is set up exactly where they expect it. Unfortunately I seem to be in a different situation. I generated a key using PuTTY prior to setting up the DigitalOcean server and created the droplet using that. After creation, the key file did not exist on the server; the only thing in the ~/.ssh/ directory was a single file named "authorized_keys" that held the key I use to connect to the server. That file cannot be used, nor can any file on the server in the other SSH key locations. I also tried copying the key file directly onto the server, to no avail.
In some vain hope of finding a solution I also tried running these same commands in both the Meteor build bundle and the source code folder. Neither worked. I should mention that although this is the only article I still have open to try, I have tried every one I could find that uses MUP.
If anyone can point me in the right direction with this so I can stop flailing wildly in the dark I would be incredibly grateful.
Edit: As requested, below is the current mup.js file with credentials removed:
module.exports = {
  servers: {
    one: {
      // TODO: set host address, username, and authentication method
      host: '111.111.111.11',
      username: 'root',
      // ssh-agent: '/home/Meteor/MeteorKey.pem'
      pem: '~/.ssh/id_rsa.pub'
      // password: 'password1'
      // or neither for authenticate from ssh-agent
    }
  },
  app: {
    // TODO: change app name and path
    name: 'app-name',
    path: '../',
    servers: {
      one: {},
    },
    buildOptions: {
      serverOnly: true,
    },
    env: {
      // TODO: Change to your app's url
      // If you are using ssl, it needs to start with https://
      ROOT_URL: 'http://www.app-name.com',
      MONGO_URL: 'mongodb://mongodb/meteor',
      MONGO_OPLOG_URL: 'mongodb://mongodb/local',
    },
    docker: {
      // change to 'abernix/meteord:base' if your app is using Meteor 1.4 - 1.5
      image: 'abernix/meteord:node-8.4.0-base',
    },
    // Show progress bar while uploading bundle to server
    // You might need to disable it on CI servers
    enableUploadProgressBar: true
  },
  mongo: {
    version: '3.4.1',
    servers: {
      one: {}
    }
  },
  // (Optional)
  // Use the proxy to setup ssl or to route requests to the correct
  // app when there are several apps
  // proxy: {
  //   domains: 'mywebsite.com,www.mywebsite.com',
The error message you are receiving:
Error: All configured authentication methods failed
means that the SSH connection is failing. So the credentials you are using (a pity you removed them from the config) are not working. Try a command-line ssh using these same credentials and troubleshoot that; once you can ssh into the server, mup should be able to do its work.
You can get more information out of ssh by specifying one or more -v parameters, e.g.:
ssh -v -v my_user@remote.com
and it will give you information about the authentication methods it is trying as it goes through them. This will help you narrow down the problem.
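One thing worth double-checking in the posted mup.js: mup's pem option expects the path to the private key, not the .pub public key, which would explain the 'privateKey value does not contain a (valid) private key' message. Below is a sketch of the servers block, assuming a standard key location; adjust the path and username to your setup:

servers: {
  one: {
    host: '111.111.111.11',
    username: 'root',
    // pem must point at the private key file, not the .pub public key
    pem: '~/.ssh/id_rsa'
    // or: password: '...'
    // or omit both to authenticate via ssh-agent
  }
},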

Intern target QT webdriver on remote machine

I have installed Intern on my local machine (192.168.1.50) and want to use the QT Browser webdriver on a remote machine (192.168.1.76). I've changed intern.js and added the correct hostname as shown below:
tunnelOptions: {
  hostname: '192.168.1.207:9517'
},
The QT browser is specified as well:
environments: [
  { browserName: 'QTBrowser', version: '5.4', platform: ['LINUX'] }
],
Tunnel is set to NullTunnel.
When executing the tests, the following error is shown:
C:\intern-tutorial>intern-runner config=tests/intern.js
Listening on 0.0.0.0:9000
Tunnel started
Suite QTBrowser 5.4 on LINUX FAILED
Error: [POST http://192.168.1.207:9517/wd/hub/session] connect ETIMEDOUT 192.168.1.207:4444
  at Server.createSession
  at retry
  at runCallbacks
  at run
  at nextTickCallbackWith0Args
  at process._tickCallback
TOTAL: tested 0 platforms, 0/0 tests failed; fatal error occurred
Error: Run failed due to one or more suite errors
  at emitLocalCoverage
  at finishSuite
  at runCallbacks
  at run
  at nextTickCallbackWith0Args
  at process._tickCallback
I am able to access the remote webdriver myself via the browser using url http://192.168.1.76:9517/status
So the connection is correct, but Intern adds /wd/hub/session to the URL, which actually isn't needed.
How can I stop Intern from doing this?
You can get past the 'wd/hub' issue by setting pathname in the tunnel options:
tunnelOptions: {
  pathname: '/',
  hostname: '192.168.1.207',
  port: 9517
}
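For context, here is a minimal sketch of how those options might fit together in an Intern 3 style intern.js (an AMD config; NullTunnel and the environment come from the question, the rest is assumed):

define({
  tunnel: 'NullTunnel',
  tunnelOptions: {
    pathname: '/',
    hostname: '192.168.1.207',
    port: 9517
  },
  environments: [
    { browserName: 'QTBrowser', version: '5.4', platform: ['LINUX'] }
  ]
});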
However, there are currently a couple of incompatibilities between Intern and QtWebDriver. One is that QtWebDriver requires that headers use a specific capitalization scheme, like 'Content-Type'. However, the library Intern uses to handle its requests currently normalizes header names to lowercase. This should be fine, because headers are supposed to be case insensitive, but not everything follows the standard.
Another problem is that, unlike most other WebDriver implementations, QtWebDriver responds to a session creation call with a 303 response rather than a 200, and the redirect address is relative. While that should be fine, the version of the Leadfoot library used by Intern doesn't properly follow relative redirect addresses.
These issues should be fixed in a future version of Intern, but for the moment Intern doesn't work out-of-the-box with QtWebDriver.

Symfony2 and RabbitMqBundle. Can't publish a message

I am trying to use the Symfony2 framework with RabbitMqBundle from here.
I am sure that my RabbitMQ server is up and running, and I have written the configuration and publisher code according to the docs on GitHub. Unfortunately I can't add any message to the queue.
I have a queue named according to the Symfony configuration file.
Does anyone have any clue what is wrong?
Thanks in advance for any suggestions.
Well... try this simple example:
# app/config.yml
old_sound_rabbit_mq:
    connections: %rabbitmq_connections%
    producers: %rabbitmq_producers%
    consumers: %rabbitmq_consumers%

parameters:
    # connection parameters
    rabbitmq_connections:
        default: { host: 'localhost', port: 5672, user: 'guest', password: 'guest', vhost: '/' }

    # define producers
    rabbitmq_producers:
        sample:
            connection: default
            exchange_options: {name: 'exchange_name', type: direct, auto_delete: false, durable: true}

    # define consumers
    rabbitmq_consumers:
        sample:
            connection: default
            exchange_options: {name: 'exchange_name', type: direct, auto_delete: false, durable: true}
            queue_options: {name: 'sample', auto_delete: false}
            callback: rabbitmq.callback.service
Then you should define your callback service. Feel free to put it in app/config.yml:
services:
    rabbitmq.callback.service:
        class: RabbitMQ\Callback\Service
And yes, you should write this callback service yourself. Here is a simple implementation; it should be enough to understand the idea and check whether it works for you.
namespace RabbitMQ\Callback;

use OldSound\RabbitMqBundle\RabbitMq\ConsumerInterface;
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Message\AMQPMessage;

class Service implements ConsumerInterface
{
    public function execute(AMQPMessage $msg)
    {
        var_dump(unserialize($msg->body));
    }
}
Then you should start the RabbitMQ server, run the consumer, and check whether the new exchange and queue were added.
To run the test consumer, run:
app/console rabbitmq:consumer sample --route="sample"
In your controller (where you want to send a message to RabbitMQ), put the following code:
# get producer service
$producer = $this->get('old_sound_rabbit_mq.sample_producer');
# publish message
$producer->publish(serialize(array('foo'=>'bar','_FOO'=>'_BAR')), 'sample');
Hope it's more or less clear and helps you with RabbitMQ.
PS: it's easier to debug if you have the RabbitMQ management plugin. If you don't, use console commands like rabbitmqctl to check queues/exchanges/consumers and so on.
It would also be nice to see your configuration for producers/consumers, and the callback service code as well.
I also had some issues sending messages with this bundle; I recommend trying SonataNotificationBundle instead.
You can also install the RabbitMq management plugin to see the queued messages.
