Configure the RemoteFTP Atom package for Dreamhost SFTP

SFTP uploads fail when using the RemoteFTP Atom package with Dreamhost servers.
RemoteFTP over FTP works, so I know the credentials are good.
SFTP via FileZilla works, so I know the server-side SFTP configuration is good; FileZilla prompted about the host's SSH hash, which I visually confirmed against the Dreamhost info.
SFTP via RemoteFTP connects but does not show the server's folders/files.
But when a file upload is attempted, RemoteFTP gives the error "RemoteFTP: Upload Error. No such file".
Here's a sanitized .ftpconfig:
{
  "protocol": "sftp",
  "host": "example.com",
  "port": 22,
  "user": "user",
  "pass": "password",
  "promptForPass": false,
  "remote": "/server-folder-name/",
  "local": "",
  "agent": "",
  "privatekey": "",
  "passphrase": "",
  "hosthash": "",
  "ignorehost": true,
  "connTimeout": 10000,
  "keepalive": 10000,
  "keyboardInteractive": false,
  "keyboardInteractiveForPass": false,
  "remoteCommand": "",
  "remoteShell": "",
  "watch": [],
  "watchTimeout": 500
}
I suspected that "hosthash" needed a value, so I placed the Dreamhost-provided fingerprint string there, but that did not work.

The fix was to put the entire server folder path in the config file's "remote" field:
"remote": "/root/id/server-folder-name/",

Related

Debug Next.js App with VSCode in NX monorepo

I'm currently trying to debug a Next.js application inside an NX monorepo.
I have enabled the Auto Attach setting in VSCode's User Settings.
When I start the Application using the serve command, I can see output in the Debug Console and also print out the current process by typing process or console.log(process) into the Debug Console.
However, I cannot set any breakpoints in the server-side code, for example in getServerSideProps.
I checked the Next.js debugging documentation for the missing pieces and tried setting NODE_OPTIONS='--inspect' in my Next.js application via an .env file.
Update: It seems this is a missing feature in NX.
Got it working, thanks to the information from this Pull Request.
.vscode/launch.json
{
  "version": "0.2.0",
  "resolveSourceMapLocations": ["${workspaceFolder}/**", "!**/node_modules/**"],
  "configurations": [
    {
      "name": "name-of-the-app – Server",
      "type": "node",
      "request": "launch",
      "runtimeExecutable": "yarn",
      "runtimeArgs": [
        "nx",
        "run",
        "name-of-the-app:serve",
        "-r",
        "ts-node/register",
        "-r",
        "tsconfig-paths/register"
      ],
      "outputCapture": "std",
      "internalConsoleOptions": "openOnSessionStart",
      "console": "internalConsole",
      "env": {
        "TS_NODE_IGNORE": "false",
        "TS_NODE_PROJECT": "${workspaceFolder}/apps/name-of-the-app/tsconfig.json"
      },
      "cwd": "${workspaceFolder}/apps/name-of-the-app/"
    }
  ]
}
Note: I'm using yarn. You might have to replace it with npm instead.
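If you do switch to npm, a minimal sketch of the equivalent launcher fields (assuming npx is available and resolves the workspace-local nx binary; untested) might be:

```json
{
  "runtimeExecutable": "npx",
  "runtimeArgs": [
    "nx",
    "run",
    "name-of-the-app:serve",
    "-r",
    "ts-node/register",
    "-r",
    "tsconfig-paths/register"
  ]
}
```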

Ansible job failed because of failed cert validation

I run an Ansible job on server1. This deploys an application to server2.
It fails on this step:
- name: Check {{ my_app }} runs at "https://{{ host }}:{{ port }}{{ endpoint }}" - returns a status 200
  uri:
    url: 'https://{{ host }}:{{ port }}{{ endpoint }}'
    return_content: yes
  register: result
  until: result.status == 200
  retries: 5
  delay: 20
It gives this error:
fatal: [server2.url.com]: FAILED! => {
"attempts": 5,
"changed": false,
"invocation": {
"module_args": {
"attributes": null,
"backup": null,
"body": null,
"body_format": "raw",
"client_cert": null,
"client_key": null,
"content": null,
"creates": null,
"delimiter": null,
"dest": null,
"directory_mode": null,
"follow": false,
"follow_redirects": "safe",
"force": false,
"force_basic_auth": false,
"group": null,
"headers": {},
"http_agent": "ansible-httpget",
"method": "GET",
"mode": null,
"owner": null,
"regexp": null,
"remote_src": null,
"removes": null,
"return_content": true,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"status_code": [
200
],
"timeout": 30,
"unix_socket": null,
"unsafe_writes": null,
"url": "https://server2.url.com:1234/my/endpoint",
"url_password": null,
"url_username": null,
"use_proxy": true,
"validate_certs": true
}
},
"msg": "Failed to validate the SSL certificate for server2.url.com:1234. Make sure your managed systems have a valid CA certificate installed. You can use validate_certs=False if you do not need to confirm the servers identity but this is unsafe and not recommended. Paths checked for this platform: /etc/ssl/certs, /etc/pki/ca-trust/extracted/pem, /etc/pki/tls/certs, /usr/share/ca-certificates/cacert.org, /etc/ansible. The exception msg was: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618).",
"status": -1,
"url": "https://server2.url.com:1234/my/endpoint"
I think I need to install a cert somewhere on server2, but I'm not sure how or where this is done. I think I have the correct cert, though. How do I add it?
Additionally, I'm aware that Ansible uses Python. server1 has Python 3.6.8 and server2 has Python 2.7.5. Is there any possible conflict between the versions?
Regarding your question
I run an Ansible job on server1. ... I think I need to install cert somewhere on server2 ...
and the error message (msg)
Failed to validate the SSL certificate for server2.url.com:1234. Make sure your managed systems have a valid CA certificate installed. You can use validate_certs=False if you do not need to confirm the servers identity but this is unsafe and not recommended.
it is server1, the initiator of the connection attempt, that is failing to confirm the identity of the target server (server2). Therefore you need to trust the certificate on server1.
Regarding your question
I'm not sure how or where this is done.
and the error message (msg)
Paths checked for this platform: /etc/ssl/certs, /etc/pki/ca-trust/extracted/pem, /etc/pki/tls/certs, /usr/share/ca-certificates/cacert.org, /etc/ansible
you may need to import and trust the self-signed server certificates in one of the mentioned paths on server1.
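On a RHEL/CentOS-style server1 (the distro is an assumption, matching the /etc/pki paths in the error message), a minimal sketch of importing the certificate with Ansible itself could look like this; the file name server2.crt is a placeholder for your actual certificate file:

```yaml
# Hypothetical tasks run against server1: place the certificate among the
# system trust anchors, then rebuild the trust store so Python/OpenSSL see it.
- name: Copy the server2 certificate into the trust anchors
  copy:
    src: server2.crt
    dest: /etc/pki/ca-trust/source/anchors/server2.crt
  become: true

- name: Rebuild the CA trust store
  command: update-ca-trust extract
  become: true
```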
Regarding
server1 has Python 3.6.8 and server2 has Python 2.7.5. Is there any possible conflict between versions?
Not in your current case.
Try this:
- name: Check {{ my_app }} runs at "https://{{ host }}:{{ port }}{{ endpoint }}" - returns a status 200
  uri:
    url: 'https://{{ host }}:{{ port }}{{ endpoint }}'
    validate_certs: no
    return_content: yes
  register: result
  until: result.status == 200
  retries: 5
  delay: 20

Xdebug logging connection errors when it's not supposed to be running

WordPress site built using a Lando development environment.
Debugging in WordPress is enabled, as is debugging to a log file.
In VS Code I have the following launch.json:
{
"version": "0.2.0",
"configurations": [
{
"name": "Listen for XDebug",
"type": "php",
"request": "launch",
"port": 9003,
"log": false,
"pathMappings": {
"/app/": "${workspaceFolder}/"
}
}
]
}
And this is my php.ini:
; Xdebug
xdebug.max_nesting_level = 256
xdebug.show_exception_trace = 0
xdebug.collect_params = 0
xdebug.mode = debug
xdebug.start_with_request = yes
xdebug.client_host = ${LANDO_HOST_IP}
; Remote settings
xdebug.remote_enable = 1
xdebug.remote_autostart = 1
xdebug.remote_host = ${LANDO_HOST_IP}
When I start debugging in VS Code everything works as expected.
When debugging isn't active in VS Code (i.e. the green triangle hasn't been clicked), I get lots of the following error in my WordPress debug log:
Xdebug: [Step Debug] Could not connect to debugging client. Tried: 172.20.0.1:9003 (from HTTP_X_FORWARDED_FOR HTTP header), 192.168.1.18:9003 (fallback through xdebug.client_host/xdebug.client_port) :-(
Is there a way to prevent Xdebug constantly trying to connect?
Is there a way to prevent Xdebug constantly trying to connect?
Yes: don't tell it to connect if you don't want it to connect. You have set xdebug.start_with_request = yes, which means that Xdebug will do as asked and try to make a connection on every request. If it can't, you'll get a warning.
FWIW, the xdebug.remote_* settings do nothing in Xdebug 3.
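If you still want to be able to debug on demand without the log noise, one option (a sketch using Xdebug 3 settings; the Xdebug 2-era xdebug.remote_* lines can simply be dropped) is trigger mode, where Xdebug only connects when a trigger such as the XDEBUG_TRIGGER cookie, GET/POST parameter, or environment variable is present:

```ini
; Xdebug 3: only start a debug session when a trigger is present,
; e.g. the XDEBUG_TRIGGER cookie set by a browser helper extension.
xdebug.mode = debug
xdebug.start_with_request = trigger
xdebug.client_host = ${LANDO_HOST_IP}
```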

404 Error at Graylog login

I'm currently trying to get Graylog to work. I installed it with the following graylog-settings.json:
local-ip is the Graylog server's local IP on our network.
graylog.domain.com is our external Graylog domain.
{
  "timezone": "Europe/Paris",
  "smtp_server": "smtp.gmail.com",
  "smtp_port": 465,
  "smtp_user": "xxxx",
  "smtp_password": "xxxx",
  "smtp_from_email": "graylog@graylog",
  "smtp_web_url": "http://graylog",
  "smtp_no_tls": false,
  "smtp_no_ssl": false,
  "master_node": "127.0.0.1",
  "local_connect": false,
  "current_address": "local-ip",
  "last_address": "local-ip",
  "enforce_ssl": false,
  "journal_size": 1,
  "node_id": false,
  "internal_logging": true,
  "web_listen_uri": false,
  "web_endpoint_uri": false,
  "rest_listen_uri": false,
  "rest_transport_uri": false,
  "external_rest_uri": "http://graylog.domain.com:9000/",
  "custom_attributes": {
  }
}
We have a pfSense firewall (on which I'm whitelisted on every port).
I configured a NAT entry to forward all port 9000 requests to my Graylog server.
I configured my Nginx proxy to send all graylog.domain.com traffic to local-ip.
Here is the problem:
If I reach graylog.domain.com:80, I can see the login page, but at any login attempt I get:
Error - the server returned: 404 - cannot POST
http://graylog.domain.com:9000/system/sessions (404)
If I reach graylog.domain.com:9000, I directly get this error (without the login page):
We are experiencing problems connecting to the Graylog server running
on http://local-ip:9000/api/. Please verify that the server is
healthy and working correctly.
You will be automatically redirected to the previous page once we can
connect to the server.
Do you need a hand? We can help you.
More details
I've read the manual but I can't get the right configuration. Can anybody help?
EDIT:
Thanks to @joschi, I managed to get this to work. Here is my conf file now:
{
  "timezone": "Europe/Paris",
  "smtp_server": "smtp.gmail.com",
  "smtp_port": 465,
  "smtp_user": "xxx",
  "smtp_password": "xxx",
  "smtp_from_email": "graylog@graylog",
  "smtp_web_url": "http://graylog",
  "smtp_no_tls": false,
  "smtp_no_ssl": false,
  "master_node": "127.0.0.1",
  "local_connect": false,
  "current_address": "local-ip",
  "last_address": "local-ip",
  "enforce_ssl": false,
  "journal_size": 1,
  "node_id": false,
  "internal_logging": true,
  "web_listen_uri": false,
  "web_endpoint_uri": false,
  "rest_listen_uri": false,
  "rest_transport_uri": false,
  "external_rest_uri": "http://external-ip:9000/api/",
  "custom_attributes": {
  }
}
And I used the following command to update my conf file :
sudo graylog-ctl set-external-ip "http://external-ip:9000/api/"
Of course, external-ip is our public IP.
Your external_rest_uri setting is wrong. It has to point to the URI of the Graylog REST API.
You're also not supposed to edit graylog-settings.json by hand (unless you really need some advanced settings); use the graylog-ctl command instead.
Please read http://docs.graylog.org/en/2.1/pages/configuration/graylog_ctl.html for further information about the graylog-ctl command.
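The asker's edit above already shows set-external-ip; per that documentation, settings changed via graylog-ctl are then re-applied with reconfigure. A sketch of the full sequence (assuming the omnibus package's graylog-ctl, where reconfigure re-applies graylog-settings.json):

```shell
# Set the external REST API URI, then re-apply the configuration.
sudo graylog-ctl set-external-ip "http://external-ip:9000/api/"
sudo graylog-ctl reconfigure
```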

events.js:72 meteor up deploy

I tried everything I found on the net, but nothing helped. I'm trying to deploy an app to my server running Debian 8.2, and every time after mup deploy I get this:
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
Building Started: /Volumes/Macintosh HD/Users/myName/Google Drive/_projects/Coda/lottato_com
events.js:72
throw er; // Unhandled 'error' event
^
Error: spawn ENOENT
at errnoException (child_process.js:1011:11)
at Process.ChildProcess._handle.onexit (child_process.js:802:34)
My mup.json looks like:
{
  "servers": [
    {
      "host": "server IP",
      "username": "root",
      "password": "blablabla"
    }
  ],
  "setupMongo": false,
  "setupNode": true,
  "nodeVersion": "0.10.36",
  "enableUploadProgressBar": true,
  "appName": "myAppName",
  "app": "/Volumes/Macintosh HD/Users/myName/Google Drive/_projects/Coda/myAppName",
  "env": {
    "MONGO_URL": "//<login>:<password>@ds061464.mongolab.com:61111/myAppdb",
    "ROOT_URL": "http://myApp.com"
  },
  "deployCheckWaitTime": 15
}
I haven't been able to solve this issue for almost 3 days! I tried deploying from the server and changing the path, but it still doesn't work.
And when I look in the log, I get this:
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
[178.63.41.196] tail: cannot open ‘/var/log/upstart/lottato.log’ for reading: No such file or directory
tail: no files remaining
I also tried using mupx instead of mup, and now I get:
Invalid configuration file mup.json: There is no meteor app in the current app path.
The new mup.json looks like:
{
  "servers": [
    {
      "host": "server IP",
      "username": "root",
      "password": "blablabla",
      "env": {}
    }
  ],
  "setupMongo": false,
  "appName": "appName",
  "app": "~/Google Drive/_projects/Coda/appName",
  "env": {
    "PORT": 80,
    "ROOT_URL": "http://appName.com",
    "MONGO_URL": "mongodb://login:pass@ds035735.mongolab.com:35735/appName"
  },
  "deployCheckWaitTime": 15,
  "enableUploadProgressBar": true
}
I tried every type of path, with ~ or the full path, and it's always the same; the installation only starts when I put this in the app path field:
"app": ".",
After upgrading to 0.10.40 you should run 'mup setup' again, followed by 'mup deploy'.
In my project I have mup.json in the project root (at the same level as .meteor), and instead of
"app": "/Volumes/Macintosh HD/Users/myName/Google Drive/_projects/Coda/myAppName"
it looks like
"app": ".",
Not sure if that is important.
I resolved this problem only with mupx, plus moving the project onto the server and deploying it from the server to the same server.