When you create a VPN connection with Amazon VPC, Amazon offers a configuration file for various router vendors. This file can be downloaded as soon as the VPN connection has been created.
So the natural procedure is to start the configuration process in Amazon and then run the provided commands on your router.
My question is: can you do it the other way around? Is there a way to modify the VPN settings in Amazon based on my physical router's settings? Or are Amazon's VPN settings just read-only?
Yes, you can modify various VPN settings in Amazon.
For an existing VPN connection, you can edit the static routes for your connection from the VPC Console (in the navigation pane, choose VPN Connections) - see Editing Static Routes for a VPN Connection.
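If you prefer to script it, the same static-route change can be made with the AWS CLI; here's a minimal sketch, where the VPN connection ID and CIDR are placeholders:

    # Add a static route to an existing VPN connection
    aws ec2 create-vpn-connection-route \
        --vpn-connection-id vpn-0123456789abcdef0 \
        --destination-cidr-block 192.168.10.0/24

    # Remove a static route you no longer need
    aws ec2 delete-vpn-connection-route \
        --vpn-connection-id vpn-0123456789abcdef0 \
        --destination-cidr-block 192.168.10.0/24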
You can also specify many options when manually setting up a new VPN connection - see Setting up the VPN Connection in the Amazon VPC documentation.
For more general info on Amazon VPC/VPN connectivity options, see the Amazon Virtual Private Cloud Connectivity Options whitepaper.
I followed this tutorial to set up two EC2 instances: 12. Creation of two EC2 instances and how to establish ping communication - YouTube
The only difference is that I used a Linux image.
I set up a simple Python HTTP server on one machine (on port 8000). But I cannot access it from my other machine; whenever I curl, the request just hangs. (It might eventually time out, but I wasn't patient enough to witness that.)
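(The setup, for reference, was essentially something like this, with a placeholder private IP:)

    # On the host instance: serve the current directory on port 8000
    python3 -m http.server 8000

    # On the other instance: fetch a page via the host's private IP
    curl http://172.31.5.10:8000/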
The workaround, I figured out, was to add a port rule via the security group. I don't like this option, since it means that port (on the machine hosting the web server) can be accessed from the internet.
I was looking for an experience similar to what people usually have at home with their routers: machines connected to the same home router can reach other machines on any port (provided the destination machine has some service hosted on that port).
What is the solution to achieve something like this when working with EC2?
The instance is open to the internet because you are allowing access from '0.0.0.0/0' (anywhere) in the inbound rule of the security group.
If you want communication to be allowed only between the instances and not from the public internet, you can achieve that by assigning the same security group to both instances and modifying the inbound rule in the security group to allow all traffic (or just ICMP traffic) sourced from the security group itself.
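A minimal sketch of that rule with the AWS CLI, assuming a placeholder security group ID that is attached to both instances:

    # Allow all traffic between instances that share this security group
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol all \
        --source-group sg-0123456789abcdef0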
You can read more about it here:
AWS Reference
Can someone let me know if it's possible to connect to or ping a Databricks cluster via its public IP address?
For example, I have issued the command hostname --all-ip-addresses and I get the IP address 10.172.226.115.
I would like to be able to ping that IP address (10.172.226.115) from my on-premise PC, or connect to the cluster with an application using that IP address.
Can someone let me know if that is possible?
That public IP is not guaranteed to be your cluster. Unless you've somehow installed Databricks into your own cloud provider account, where you fully control the network routes, you would be connecting to Databricks-managed infrastructure, where the public IP would likely be an API gateway or router that serves traffic for more than one account.
Note: just because you can ping Google DNS with outbound traffic doesn't mean inbound traffic from the internet is even allowed through the firewall.
"connect to the cluster with an application"
I'd suggest using other Databricks support channels (i.e. their community forum) to see if that's even possible, but I thought you're just supposed to upload and run code within their ecosystem - at least on the community plans.
Specifically, they have a REST API to submit a remote job from your local system, but if you want to be able to send data back to your local machine, I think you'd have to write to and download from DBFS or another cloud filesystem.
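For example, here's a minimal sketch of submitting a one-off run through the Jobs REST API; the workspace URL, token, notebook path, and cluster spec below are all placeholders:

    # Submit a one-time notebook run (Jobs API 2.1)
    curl -X POST https://<workspace>.cloud.databricks.com/api/2.1/jobs/runs/submit \
        -H "Authorization: Bearer <personal-access-token>" \
        -H "Content-Type: application/json" \
        -d '{
              "run_name": "remote-submit-example",
              "tasks": [{
                "task_key": "main",
                "notebook_task": {"notebook_path": "/Users/me@example.com/my-notebook"},
                "new_cluster": {
                  "spark_version": "13.3.x-scala2.12",
                  "node_type_id": "i3.xlarge",
                  "num_workers": 1
                }
              }]
            }'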
Please bear with me, as my background is development and not sysadmin. Networking is something I'm learning as I go, which is why I'm writing here :)
A couple of months ago I started the process of designing the network structure of our cloud. After a couple of exchanges here, I settled on having one project that hosts a VPN tunnel to the on-premises resources, and some other projects that host our products once they are moved from the on-premises servers.
All is good and I managed to set things up.
Now, one of the projects is dedicated to "storage": for us that means databases, buckets for static data to be accessed around, etc.
I created a first MySQL database (2nd gen) to start testing and noticed that the only option available to access the SQL databases from internal IPs was via the "parent project" subnetwork.
I realised that SQL Engine creates a subnetwork dedicated to just that. It's written in the documentation as well, silly me.
No problem: I tore it down, enabled Private Service Connection, created an allocated IP range in the VPC management and set it to export routes.
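(In gcloud terms, what I did was roughly the following; the network and range names are placeholders:)

    # Reserve an IP range for service producers
    gcloud compute addresses create sql-peering-range \
        --global --purpose=VPC_PEERING \
        --prefix-length=20 --network=my-vpc

    # Create the private connection to Google-managed services
    gcloud services vpc-peerings connect \
        --service=servicenetworking.googleapis.com \
        --ranges=sql-peering-range --network=my-vpc

    # Export custom routes over the service networking peering
    gcloud compute networks peerings update servicenetworking-googleapis-com \
        --network=my-vpc --export-custom-routes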
Then I went back to SQL Engine and created a new database. As expected, the new one had an IP assigned from the allocated range set up previously.
Now, I expected every peered network to be able to see the SQL subnetwork as well, but apparently not. Again, RTFM, you silly goose. It was written there as well.
I activated a Bronze support subscription with GCP to get some guidance, but all I got was a repeated "create a VPN tunnel between the two projects", which left me a little disappointed, as the concept of peered VPCs is so good.
But anyway, let's do that then.
I created a tunnel pointing to a gateway on the project that will have K8s clusters and vice-versa.
The dashboard tells me that the tunnels are established, but apparently there is a problem with the BGP settings, because they have been hanging on "Waiting for peer" on both sides since forever.
At this point I'm looking for anything related to BGP, but all I can find is how it works in theory, what it is used for, which ASN ranges are reserved, etc.
I really need someone to point out the obvious and tell me what I fucked up here, so:
This is the VPN tunnel on the project that hosts the databases:
And this is the VPN tunnel on the project where the products will be deployed, which needs to access the databases.
Any help is greatly appreciated!
Regarding the BGP status "Waiting for peer" on your VPN tunnel, I believe this is due to how the Cloud Router BGP IP and BGP peer IP are configured. The Cloud Router BGP IP address of tunnel1 must be the BGP peer IP address for tunnel2, and the BGP peer IP address for tunnel1 must be the Cloud Router BGP IP address of tunnel2.
Referring to your scenario, the IP address for stage-tunnel-to-cerberus should be:
Router BGP IP address: 169.254.1.2
and,
BGP Peer IP address: 169.254.1.1
This should put your VPN tunnel's BGP session status in "BGP established".
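A minimal sketch of that side's configuration with gcloud, where the router name, region, and ASN are placeholders:

    # Interface for stage-tunnel-to-cerberus: local BGP IP 169.254.1.2
    gcloud compute routers add-interface stage-router \
        --interface-name=if-tunnel-to-cerberus \
        --ip-address=169.254.1.2 --mask-length=30 \
        --vpn-tunnel=stage-tunnel-to-cerberus --region=us-central1

    # BGP peer pointing at the other side's BGP IP 169.254.1.1
    gcloud compute routers add-bgp-peer stage-router \
        --peer-name=peer-cerberus \
        --interface=if-tunnel-to-cerberus \
        --peer-ip-address=169.254.1.1 --peer-asn=65002 \
        --region=us-central1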
You can't achieve what you want with VPN or VPC peering. In fact, there is a rule in VPC that prevents peering transitivity, described in the restrictions section:
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
Now, consider what you want to achieve. When you use a Cloud SQL private IP, you create a peering between your VPC and the VPC of Cloud SQL. And you have another peering (or VPN tunnel) for the SQL Engine project:
SQL Engine -> Peering -> Project -> Peering -> Cloud SQL
Like this, it can't work.
But you can use a Shared VPC. Create a Shared VPC, add your two projects to it, and create a common subnet for the SQL Engine project and the Cloud SQL peering. That should work.
But be careful: not all VPC features are available with Shared VPC. For example, the Serverless VPC Access connector isn't compatible with it yet.
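A minimal sketch of the Shared VPC wiring with gcloud, where the project IDs are placeholders:

    # Enable the host project, then attach the service projects
    gcloud compute shared-vpc enable host-project-id
    gcloud compute shared-vpc associated-projects add storage-project-id \
        --host-project host-project-id
    gcloud compute shared-vpc associated-projects add k8s-project-id \
        --host-project host-project-id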
Hope this helps!
The original setup in the OP question should work, i.e.
Network 1 <--- (VPN) ---> Network 2 <--- (Peered) ---> CloudSQL network
(the network and the peering are created by GCP)
Then a resource in Network 1 is able to access a MySQL instance created in the CloudSQL network.
I've set up Neo4j on port 7474 on a Rackspace cloud server. I want to access this server from another Rackspace cloud server (appserver), but the connection is refused.
I've tried enabling access for the appserver to port 7474 on the neo4j server using ufw:
sudo ufw allow from 22.234.298.297 to any port 7474
I can see this rule when I run 'ufw status', but it doesn't seem to make any difference when I try to connect from the appserver. I can ssh between these two servers.
How do I open port 7474 between cloud servers on Rackspace?
(My apologies for this very basic question, but Rackspace support isn't helping and I can't find Rackspace-specific information on this.)
Glad we could solve the problem (see comments on the question).
It so happens that Neo4j accepts only connections from localhost by default. When trying to access Neo4j via the REST API from an app server within the same network, one has to configure the Neo4j server to open up.
The neo4j-server.properties configuration file has a configuration key, org.neo4j.server.webserver.address. You have a couple of options here:
Grant app servers in the same local network access to the Neo4j REST API
Grant everybody access and let the firewall handle it
For the first case, use the local IP address of the machine where Neo4j is running. Let's say your machines are connected via a private class C network. The machine with Neo4j has the IP 192.168.1.4 - that's the value you want to enter for org.neo4j.server.webserver.address, so your app server running in the same network, with maybe an IP of 192.168.1.5, can make network requests that are answered by the Neo4j web server.
For the second case, enter 0.0.0.0 as the value for org.neo4j.server.webserver.address to denote that you want to accept connections on all available IP addresses on that machine. In that case you want to set up your firewall to handle who can talk to the server and who can't - even with authentication enabled.
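A minimal sketch of the second option for a Neo4j 2.x-era install, assuming the default config layout:

    # Bind the web server to all interfaces (use a specific private IP
    # such as 192.168.1.4 instead for the first option)
    echo "org.neo4j.server.webserver.address=0.0.0.0" >> conf/neo4j-server.properties

    # Restart Neo4j so the change takes effect
    bin/neo4j restart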
Extra
In a production environment that requires high availability, one can use Neo4j's enterprise edition with a high-availability cluster in a master-slave setting. I've used it with one master and two slaves. I configured the Neo4j servers so that they could only be accessed from the proxy server, which routes write Cypher queries to the master and read queries to the slaves. The proxy itself had a hardware firewall on it to ensure that only specific app servers within the network had access to the Neo4j database.
I develop my project on localhost, on Apache on an Ubuntu machine.
Sometimes I need to show progress to my customer.
Is it possible to access localhost from a remote machine?
You can use a service that provides a tunnel to your local service, such as localtunnel, pagekite or ngrok. These services simplify setting up remote demos and mobile testing, and some provide request inspection as well.
I find ngrok useful because it provides a https address, which is needed to test things like webcam access.
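A minimal sketch with ngrok, assuming Apache is serving on local port 80:

    # Expose local port 80 through a public https URL
    ngrok http 80

    # ngrok prints a forwarding address, something like:
    #   https://<random-subdomain>.ngrok.io -> http://localhost:80
    # Share that https URL with the customer for the demo.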
Terms used in this answer:
Host = machine with site on it
Client = machine you are trying to access the host from
If the host and client are on the same network, you can access the host from the client by entering
http://(hostname or ip address)
in your client's browser. If the site is not running on port 80 (for http) or port 443 (for https), add the port like so (this example is for a server on 8080, a common alternate port):
http://(hostname or ip address):8080
If the host and client are not on the same network, and you need to reach across the internet from the client to see the host, you will need to make your host available on the internet for the client to access.
This can be extremely dangerous for your information security if you're not sure what you're doing, and I'd recommend getting a cheap hosting account instead (you can get one for around $10/month at places like 1&1 Hosting).
There are many methods to do this - the differences are security, ease of configuration, and cost.
Below I describe some methods, with a brief analysis of each.
Port Forwarding (with Dynamic DNS and SSL encryption)
This requires router configuration (to forward a public port on your router to the localhost port), and it also requires a fixed IP address. If your IP address is not fixed (as in most cases), you need a Dynamic DNS service so you can use a domain name instead of an IP address (there are lots of free services available). That still leaves the security question open. To solve it, i.e. to set up an SSL certificate, we can use the Let's Encrypt service ( https://letsencrypt.org/ ) to get a free certificate; however, we then have to configure the local server to use the certificate, or set up a reverse proxy (in most cases nginx or Apache) and configure the proxy to use the certificate.
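A minimal sketch of the certificate step using certbot's nginx plugin, assuming a placeholder domain that already points at your router:

    # Install certbot and its nginx plugin (Ubuntu package names)
    sudo apt-get install certbot python3-certbot-nginx

    # Obtain a free certificate and let certbot edit the nginx config
    sudo certbot --nginx -d demo.example.com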
Conclusion – Hard to set up if we want a secure connection (can be done for free)
VPN
For this scenario we use a VPN service. We connect our local machine to the VPN, and on the other side the client's machine connects to the same VPN, which allows it to reach localhost by local IP address. We can also set up our own VPN server, but that requires the knowledge to do it right.
Conclusion – Easy, Paid, Secure, Bad User Experience (connecting to VPN every time you need to connect to localhost)
Tunneling
For this scenario we can use free tunneling services (e.g. https://tunnelin.com/). The process is very straightforward: register a user, connect your device to the service (by running a one-line command on the device), and use the web interface to open/close secure tunnels to the device.
Conclusion – Free, Secure, Easy
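If you'd rather not depend on a third-party service, a plain SSH reverse tunnel achieves much the same thing, assuming you have any server reachable on the internet (placeholder host below):

    # Forward port 8080 on the remote server to localhost:80 on this machine
    ssh -R 8080:localhost:80 user@your-server.example.com

    # The customer then browses http://your-server.example.com:8080
    # (the server needs "GatewayPorts yes" in sshd_config to listen on
    # public interfaces rather than only on its loopback address)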
Yes, if you have a public and static IP. Usually, ISPs offer an IP that is static only for a session (i.e. until you disconnect and connect again).