I have downloaded the Apache TinkerPop Gremlin Console, but I cannot figure out how to connect it to my AWS Neptune instance. Please provide step-by-step instructions to get it connected to Neptune.
The official procedure is provided by AWS here: https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-console.html
Please be aware that, by default, your Neptune instance is not reachable from outside its VPC. Remote access has to be arranged via an Application Load Balancer or an AWS VPN connection into your VPC. For this reason, I highly recommend that you first launch a small Linux instance in the same VPC and SSH to it to follow these instructions. You will also need to install Java 8 or later on that machine. If using a VPN, also ensure that inbound traffic to port 8182 is allowed on the VPC subnet(s) served by the AWS Client VPN endpoint. These are not the only options, but the others are covered elsewhere.
Download the AWS CA certificate from https://www.amazontrust.com/repository/AmazonRootCA1.pem. It will appear as plain text in your browser. Save it as something like aws.pem. This certificate is what allows a TLS connection from the Gremlin Console.
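If you're working from a terminal on that Linux instance, a one-line fetch with curl works too (curl may need to be installed first):
curl -o aws.pem https://www.amazontrust.com/repository/AmazonRootCA1.pem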
Using the openssl tool (install one if you don't have it), export this .pem to a .p12 file. p12 (PKCS#12) is a format the Java certificate store recognizes. It goes like this (openssl will prompt you for an export password; remember it if you set one):
openssl pkcs12 -export -out aws.p12 -in aws.pem
From here on, I have cd'd to the root of the Gremlin Console distribution.
Copy the aws.p12 file created above into the conf directory.
Obtain the full DNS address of your Neptune instance from the AWS Console.
Open conf/remote.yaml, edit the hosts entry, and add the connectionPool configuration, following this example:
hosts: [test.cluster-abcdefzxyz.planet-earth-1.neptune.amazonaws.com]
connectionPool: { enableSsl: true, trustStore: conf/aws.p12 }
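If you gave the .p12 an export password in the openssl step, the console needs it as well. A sketch, assuming your Gremlin Console version supports the trustStorePassword setting (replace the value with your own):
connectionPool: { enableSsl: true, trustStore: conf/aws.p12, trustStorePassword: your-export-password }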
Create a file conf/remote.txt with the following lines. This step is optional, but without it you will be typing these two :remote commands each time you start the console.
:remote connect tinkerpop.server conf/remote.yaml
:remote console
Finally, issue the following on your terminal from the root of the distribution (on Windows, use bin\gremlin.bat instead):
bin/gremlin.sh -i conf/remote.txt
The Gremlin Console should start, connect to Neptune, and be ready to accept your Gremlin queries. To quickly test it:
g.V().limit(1)
I have created a batch file to transfer files using SSH keys. I checked the public and private key mapping on both servers, and it's working fine.
My Windows batch code using the SFTP command is as follows:
open sftp://sftp_user@ssh_dest_server -privatekey="D:\directory_full_path\private_key.ppk" -rawsettings TryAgent=0 AuthKI=0 AuthGSSAPI=0
CD "/XYZ_Directory/folder1"
Put "\\full_directory_path\FILE1.zip"
exit
When I execute the batch manually, it runs fine without any issue, but when I execute the batch from a SQL Job (using a different user), it shows the error below:
Searching for host...
Connecting to host...
Authenticating...
Continue connecting to an unknown server and add its host key to a cache?
The server's host key was not found in the cache. You have no guarantee that the server is the computer you think it is.
The server's RSA key details are:
Algorithm: ssh-rsa 2048
SHA-256: finger_print_key
MD5: zz:xx:yy:xx:yy:xx:yy:xx:yy:xx:yy:xx:yy:zz:zz:00
If you trust this host,
press Yes. To connect without adding host key to the cache,
press No. To abandon the connection press Cancel.
In scripting, you should use a -hostkey switch to configure the expected host key.
(Y)es, (N)o, C(a)ncel (10 s), (C)opy Key, (P)aste key: Cancel
Host key wasn't verified!
Host key fingerprint is ssh-rsa 2048 finger_print_key.
Authentication failed.
I already tried the WinSCP -hostkey switch, but it says "unknown command". Suggestions are most welcome.
I want to do something like the linked "WinSCP" answer, but automatically inside my Windows batch, to verify the host.
To verify a host key in a WinSCP script, add the -hostkey switch to the open command:
open sftp://sftp_user@ssh_dest_server -hostkey=... -privatekey="D:\directory_full_path\private_key.ppk" -rawsettings TryAgent=0 AuthKI=0 AuthGSSAPI=0
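For example, using the fingerprint format WinSCP printed in the error output above (finger_print_key is the question's placeholder; substitute your server's actual fingerprint):
open sftp://sftp_user@ssh_dest_server -hostkey="ssh-rsa 2048 finger_print_key" -privatekey="D:\directory_full_path\private_key.ppk" -rawsettings TryAgent=0 AuthKI=0 AuthGSSAPI=0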
See Verifying the host key ... in script in the WinSCP documentation. It covers everything you need to know, in particular where to get the host key value.
Also note that the WinSCP GUI can generate a script template for you, including the -hostkey switch.
Also covered in "My script works fine when executed manually, but fails or hangs when run by Windows Scheduler, SSIS or other automation service. What am I doing wrong?"
I would like to use RStudio to analyze data on a MySQL instance. This is an AWS RDS MySQL instance that is only accessible via a jump box / bastion host. I have the credentials necessary to connect to the jump box, and from the jump box to the RDS instance. What do I need to do to be able to query this DB directly from within the RStudio console?
I can connect (using the Terminal tab in RStudio) to the jump box using:
ssh -p 22xx user@ip.add.re.ss
Then I can connect to RDS MySQL using:
mysql -u username -p database -h hostname.us-east-1.rds.amazonaws.com
I can connect and run MySQL commands manually from within the RStudio terminal, but I don't seem to be able to do anything with the DB from the RStudio console.
Sorry for reviving a two-year-old thread, but for everyone dealing with this issue as I am: I found this thread, and it looks like it works (connecting to MySQL via SSH from RStudio).
You should use something called port forwarding. Some details are here: https://help.ubuntu.com/community/SSH/OpenSSH/PortForwarding
For example, say you wanted to connect from your laptop to http://www.ubuntuforums.org using an SSH tunnel. You would use source port number 8080 (the alternate HTTP port), destination port 80 (the HTTP port), and destination server www.ubuntuforums.org:
ssh -L 8080:www.ubuntuforums.org:80 <host>
where <host> should be replaced by the name of your remote SSH server.
This is done for the whole computer, so you don't need to do it from RStudio.
Of course, you need to forward your port to 3306 (the MySQL port) instead. But you may need special privileges on the database server, because on most hosting setups you can only connect from localhost (for example from PHP).
Source: https://www.py4u.net/discuss/881859
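Applied to this question's setup, a minimal sketch (port 22xx, user, and the hostnames are the question's placeholders): keep the tunnel running in one terminal, then point your MySQL client at 127.0.0.1:3306.
# forward local port 3306 through the jump box to the RDS endpoint; -N skips the remote shell
ssh -p 22xx -N -L 3306:hostname.us-east-1.rds.amazonaws.com:3306 user@ip.add.re.ss
With the tunnel up, the RStudio console can connect to host 127.0.0.1, port 3306, with the usual R database packages (for example DBI with RMariaDB), as if the database were local.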
Connect to instance: i-38942195
To connect to your instance, be sure security group my-test-security-group has TCP port 22 open to inbound traffic and then perform the following steps (these instructions do not apply if you did not select a key pair when you launched this instance):
Open an SSH terminal window.
Change your directory to the one where you stored your key file my-test-keypair.pem.
Run the following command to set the correct permissions for your key file:
chmod 400 my-test-keypair.pem
Connect to your instance via its public IP address by running the following command:
ssh -i my-test-keypair.pem root@192.168.0.29
Eucalyptus no longer supports VMware, but to troubleshoot instance connectivity generally, you would first check that you are using a known-good image, such as those available via:
# python <(curl -Ls https://eucalyptus.cloud/images)
and ensure that the instance booted correctly:
# euca-get-console-output i-38942195
If that looks good (check the console output for instance metadata access for the SSH key), then check that the security group rules are correct and that the instance is running with the expected security group and SSH key.
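For example, to inspect and open port 22 with euca2ools (group name taken from the question; a sketch, as exact options can vary by euca2ools version):
# show the group's current rules
euca-describe-groups my-test-security-group
# allow inbound SSH from anywhere
euca-authorize -P tcp -p 22 -s 0.0.0.0/0 my-test-security-group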
VMware deprecation notice from version 4.1:
Support for VMWare features in Eucalyptus has been deprecated and will be removed in a future release.
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#release-notes/4.1.0/4.1.0_rn_features.html
Euca2ools command:
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#euca2ools-guide/euca-get-console-output.html
I have seen the instructions here, https://cloud.google.com/dataproc/docs/tutorials/jupyter-notebook, for setting up Jupyter notebooks with Dataproc, but I can't figure out how to alter the process to use Cloud Shell instead of creating an SSH tunnel locally. I have been able to connect to a Datalab notebook by running
datalab connect vmname
from Cloud Shell and then using the preview function. I would like to do something similar, but with Jupyter notebooks and a Dataproc cluster.
In theory, you can mostly follow the same instructions as found at https://cloud.google.com/shell/docs/features#web_preview to use local port forwarding to access your Jupyter notebooks on Dataproc via Cloud Shell's "web preview" feature. Something like the following in your Cloud Shell:
gcloud compute ssh my-cluster-m -- -L 8080:my-cluster-m:8123
However, there are two issues which prevent this from working:
You need to modify the Jupyter config to add the following to the bottom of /root/.jupyter/jupyter_notebook_config.py:
c.NotebookApp.allow_origin = '*'
Cloud Shell's web preview needs to add support for websockets.
If you don't do (1), you'll get popup errors when trying to create a notebook, because Jupyter refuses the Cloud Shell proxy domain. Unfortunately, (2) requires deeper support from Cloud Shell itself; it manifests as errors like "A connection to the notebook server could not be established."
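A sketch of applying (1) on the cluster's master node (the file path comes from this answer; how to restart the notebook server depends on how the initialization action launched it):
# append the CORS override, then restart Jupyter so it takes effect
echo "c.NotebookApp.allow_origin = '*'" >> /root/.jupyter/jupyter_notebook_config.py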
Another possible option, without waiting for (2), is to run your own nginx proxy as part of the Jupyter initialization action on a Dataproc cluster, if you can get it to proxy websockets suitably. See this thread for a similar situation: https://github.com/jupyter/notebook/issues/1311
Generally this type of broken websocket support in proxy layers is a common problem since it's still relatively new; over time more and more things will start to support websockets out of the box.
Alternatively:
Dataproc also supports a Datalab initialization action; it is set up so that the websocket proxying is already taken care of. Thus, if you're not too dependent on Jupyter specifically, the following works in Cloud Shell:
gcloud dataproc clusters create my-datalab-cluster \
--initialization-actions gs://dataproc-initialization-actions/datalab/datalab.sh
gcloud compute ssh my-datalab-cluster-m -- -L 8080:my-datalab-cluster-m:8080
Then select the usual "Web Preview" on port 8080. Or you can select other Cloud Shell-supported ports for the local binding, like:
gcloud compute ssh my-datalab-cluster-m -- -L 8082:my-datalab-cluster-m:8080
In which case you'd select 8082 as the web preview port.
You can't connect to Dataproc through a Datalab instance installed on a VM (on GCE).
As the documentation you mentioned says, you must launch Dataproc with a Datalab initialization action.
Moreover, the datalab connect command is only available if you created the Datalab VM with the datalab create command.
You must create an SSH tunnel to your master node ("vmname-m" if your cluster name is "vmname") with:
gcloud compute ssh --zone YOUR-ZONE --ssh-flag="-D 1080" --ssh-flag="-N" --ssh-flag="-n" "vmname-m"
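Note that -D 1080 opens a dynamic (SOCKS) tunnel rather than a single-port forward, so you then point a browser at the master node through that proxy. A sketch following the pattern in Google's cluster-web-interfaces docs, assuming a local Chrome and Jupyter on the initialization action's default port 8123:
# route browser traffic (including DNS) through the SOCKS proxy, then browse to http://vmname-m:8123
google-chrome --proxy-server="socks5://localhost:1080" --host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE localhost" --user-data-dir=/tmp/vmname-m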
I'm trying to SFTP to a Compute Engine instance from a Mac using FileZilla. I can SSH on port 22 without any problem, but I need R/W/D access to my files, so I'm trying SFTP on port 21 and getting the following error:
Command: keyfile "/bitnami-google-api-project-4xxxxxxxxxx.pem"
Command: open "bitnami@104.xxx.xxx.xxx" 21
Error: Connection refused
Error: Could not connect to server
I referred to a couple of similar threads here, but nothing made this work. So far I have:
Added the Bitnami key in Google Compute Engine, and added the same PEM key file in FileZilla settings (on the Mac).
Used the root password with the default username.
Is there anything I'm missing from the docs to get access through port 21?
SFTP runs over an SSH session, usually on TCP port 22, and in the Bitnami stack SFTP is configured to use port 22. The link below has information about how to upload files using SFTP with Bitnami Cloud Images on Google Cloud:
https://docs.bitnami.com/google/faq/#how-to-upload-files-to-the-server-with-sftp
If you want to use SFTP on any other port, you need to open that port on your server and configure the SSH daemon to listen on it (port 21 in your case). You can open a port on your server by following the steps described in the guide below:
https://docs.bitnami.com/google/faq/#how-to-open-the-server-ports-for-remote-access
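A sketch of what that involves, assuming the gcloud CLI and a Debian-based image (the firewall rule name here is made up; the sed assumes the stock commented "#Port 22" line in sshd_config):
# open TCP port 21 to the instance
gcloud compute firewall-rules create allow-sftp-21 --allow tcp:21
# on the server, have sshd listen on 21 in addition to 22, then restart it
sudo sed -i 's/^#Port 22/Port 22\nPort 21/' /etc/ssh/sshd_config
sudo service ssh restart
That said, the simplest path is to let FileZilla use SFTP on the default port 22, as the first link describes.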