We have a (route-based) VPN between our data center and Google Cloud. I'm trying to set up a replica of one of our on-prem databases in Google Cloud.
With the current setup, the Google Cloud SQL instance is unable to communicate with our on-prem instance via its local IP through the VPN tunnel because (as I understand it) no routes exist from the subnetwork on which Google places the Cloud SQL instance.
Will this work with Cloud Router?
Is the only option to expose our on-prem DB to the internet to get this working?
Based on this link, a Cloud Router is needed to connect to an external source; it works by advertising all the subnets that are visible to the Cloud Router. You can check the link for more information.
Regarding exposing your DB to the internet: no, that is not the only option. If you want to connect your on-prem DB to the Google Cloud network, your Cloud VPN idea will also work, since a VPN tunnel is a secure connection between the on-prem network and the Google Cloud network.
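For reference, here is a rough sketch of what setting up a Cloud Router with BGP over an existing tunnel looks like with gcloud. The names (my-vpc, my-router, my-tunnel), the region, the ASNs, and the link-local addresses are all placeholder assumptions you would replace with your own values:

# Create a Cloud Router in the VPC and region that terminate the tunnel
gcloud compute routers create my-router \
    --network=my-vpc --region=us-central1 --asn=65001

# Attach a BGP interface to the VPN tunnel (placeholder addresses)
gcloud compute routers add-interface my-router \
    --region=us-central1 --interface-name=if-tunnel1 \
    --ip-address=169.254.0.1 --mask-length=30 --vpn-tunnel=my-tunnel

# Peer with the on-prem router so subnet routes are exchanged dynamically
gcloud compute routers add-bgp-peer my-router \
    --region=us-central1 --peer-name=onprem-peer \
    --interface=if-tunnel1 --peer-ip-address=169.254.0.2 --peer-asn=65002

Note that a tunnel originally created with static (route-based) routing generally has to be recreated with the --router flag for dynamic routing to take effect.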
Can someone let me know if it's possible to connect to or ping a Databricks cluster via its public IP address?
For example, I have issued the command hostname --all-ip-addresses and I get the IP address 10.172.226.115.
I would like to be able to ping that IP address (10.172.226.115) from my on-premises PC (or connect to the cluster with an application using that IP address).
Can someone let me know if that is possible?
That IP is not guaranteed to belong to your cluster. Unless you've somehow installed Databricks into your own cloud provider account, where you fully control the network routes, you would be connecting to Databricks-managed infrastructure, where any public-facing IP is likely an API gateway or router that serves traffic for more than one account.
Note: just because you can ping Google DNS with outbound traffic doesn't mean inbound traffic from the internet is even allowed through the firewall
"connect to the cluster with an application"
I'd suggest using other Databricks support channels (e.g. their community forum) to see whether that's even possible, but I thought you're just supposed to upload and run code within their ecosystem, at least on the community plans.
Specifically, they have a REST API for submitting a remote job from your local system, but if you want to send data back to your local machine, I think you'd have to write to and then download from DBFS or another cloud filesystem.
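As a rough sketch of that flow (the workspace URL, token, cluster ID, and notebook path below are all placeholders; check the Jobs API documentation for the exact payload your workspace expects):

# Submit a one-off notebook run via the Jobs API from a local machine
# (all identifiers below are placeholder values)
curl -X POST https://<your-workspace>.cloud.databricks.com/api/2.0/jobs/runs/submit \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "run_name": "example-run",
        "existing_cluster_id": "<cluster-id>",
        "notebook_task": { "notebook_path": "/Users/me@example.com/my-notebook" }
      }'

The results still live inside Databricks; to get data out, the notebook would write to DBFS or object storage, and you would download it from there.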
For example, let's say I have services A and B in GCP. Imagine that we are sending data from a VM in GCP to Cloud Storage.
A sends B 10 GB of traffic using the public API. In GCP, would this result in the data exiting the GCP network and then coming back in, or would the entire exchange stay local to the GCP network?
Google VPC provides private communication between the compute resources you create, and you can also enable private communication to Google-managed services like Cloud Storage, Spanner, big data and analytics, and machine learning.
For more details, see https://cloud.google.com/compute/docs/private-google-access/private-google-access
The traffic is all internal and private, even though the API resolves to a public destination IP address; the network address translation happens in Google's infrastructure and is transparent to the user.
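For instance, Private Google Access is enabled per subnet; with gcloud that would look something like this (the subnet name and region are placeholders):

# Enable Private Google Access on the subnet hosting the VM, so instances
# without external IPs can still reach Google APIs such as Cloud Storage
gcloud compute networks subnets update my-subnet \
    --region=us-central1 --enable-private-ip-google-access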
When you create a VPN connection with Amazon VPC, Amazon offers a configuration file for various router brands. This file can be downloaded as soon as the VPN has been created.
So the natural procedure is to start the configuration process in Amazon and then simply run the provided commands on your router.
My question is: can you do it the other way around? Is there a way to modify the VPN settings in Amazon based on my physical router's settings, or are Amazon's VPN settings read-only?
Yes, you can modify various VPN settings in Amazon.
For an existing VPN connection, you can edit the static routes for your connection from the VPC Console (in the navigation pane, choose VPN Connections) - see Editing Static Routes for a VPN Connection.
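The same edit can also be scripted; as a rough sketch with the AWS CLI (the connection ID and CIDR below are placeholders):

# Add a static route for an on-prem CIDR to an existing VPN connection;
# delete-vpn-connection-route removes one the same way
aws ec2 create-vpn-connection-route \
    --vpn-connection-id vpn-0123456789abcdef0 \
    --destination-cidr-block 192.168.0.0/24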
You can also specify many options when manually setting up a new VPN connection - see Setting up the VPN Connection in the Amazon VPC documentation.
For more general info on Amazon VPC/VPN connectivity options, see the Amazon Virtual Private Cloud Connectivity Options whitepaper.
I've set up Neo4j on port 7474 on a Rackspace cloud server. I want to access this server from another Rackspace cloud server (the app server), but the connection is refused.
I've tried enabling access for the app server to port 7474 on the Neo4j server using ufw:
sudo ufw allow from 22.234.298.297 to any port 7474
I can see this rule when I run 'ufw status', but it doesn't seem to make any difference when I try to connect from the app server. I can SSH between these two servers.
How do I open port 7474 between cloud servers on Rackspace?
(My apologies for this very basic question, but Rackspace support are not helping and I can't find Rackspace-specific information on this.)
Glad we could solve the problem (see comments on the question).
It so happens that Neo4j accepts only connections from localhost by default. When trying to access Neo4j via the REST API from an app server within the same network, one has to configure the Neo4j server to open up.
The neo4j-server.properties configuration file has a configuration key, org.neo4j.server.webserver.address. You have a couple of options here:
1. Grant app servers in the same local network access to the Neo4j REST API.
2. Grant everybody access and let the firewall handle it.
For the first case, use the local IP address of the machine on which Neo4j is running. Let's say your machines are connected via a private class C network and the machine with Neo4j has the IP 192.168.1.4: that's the value to enter for org.neo4j.server.webserver.address, so that your app server running in the same network, with an IP of, say, 192.168.1.5, can make network requests that the Neo4j web server answers.
For the second case, enter 0.0.0.0 as the value of org.neo4j.server.webserver.address to accept connections on all available IP addresses on that machine. In that case you want to set up your firewall to control who can and cannot talk to the server, even with authentication enabled.
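As a minimal sketch, the relevant line in conf/neo4j-server.properties would be one of the following (192.168.1.4 is just the example address from above):

# Option 1: listen only on the private interface shared with the app server
org.neo4j.server.webserver.address=192.168.1.4

# Option 2: listen on all interfaces and let the firewall restrict access
org.neo4j.server.webserver.address=0.0.0.0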
Extra
In a production environment that requires high availability, one can use Neo4j's enterprise edition with a high-availability cluster in a master-slave setting. I've used it with one master and two slaves. I configured the Neo4j servers so that they could only be accessed from the proxy server, which routes writing Cypher queries to the master and reading queries to the slaves. The proxy itself was protected by a hardware firewall to ensure that only specific app servers within the network had access to the Neo4j database.
We have a site-to-site VPN set up between Rackspace and Azure. When I'm inside the Rackspace network (i.e. remote-desktopped onto a Rackspace server) I have no trouble connecting to our Azure VMs. However, when I connect to Rackspace using a VPN client, I don't get the same behavior.
Rackspace have told me that packets are getting passed through but nothing is coming back. They're telling me it's down to the VPN firewall configuration on the Azure side. I've added our IP ranges under the local network section in Azure.
Any help figuring this out would be much appreciated.
As a bit of background, the only reason we're going this route is that we can't VPN directly into Azure (we don't want to use Connect, as we want to do this at the router level) and we don't have an external IP, so we can't use a site-to-site VPN from the office.
Any input really appreciated.
The Azure team are providing direct support in this forum thread:
http://social.msdn.microsoft.com/Forums/en-US/WAVirtualMachinesVirtualNetwork/thread/75c87385-fefc-4edc-876d-4be577986a27