AWS EC2 - Unable to consume HTTP server from a different machine on the same network - networking

Followed this tutorial to set up two EC2 instances: 12 . Creation of two EC2 instances and how to establish ping communication - YouTube
The only difference is that I used a Linux image.
I set up a simple Python HTTP server on one machine (on port 8000), but I cannot access it from my other machine; whenever I curl, the request just hangs. (It might eventually time out, but I wasn't patient enough to witness that.)
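For reference, the setup looks roughly like this (10.0.0.12 is a placeholder for the server instance's private IP):
# On the instance hosting the server:
python3 -m http.server 8000
# From the other instance in the same VPC; this just hangs when the port is blocked:
curl http://10.0.0.12:8000/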
The workaround I found was to add a rule for that port via the security group. I don't like this option, since it means that port (on the machine hosting the web server) can be reached from the internet.
I was looking for an experience similar to what people usually have at home with their routers: machines connected to the same home router can reach other machines on any port (provided the destination machine has a service listening on that port).
What is the solution to achieve something like this when working with ec2?

The instance is open to the internet because you are allowing access from '0.0.0.0/0' (anywhere) in the inbound rule of the security group.
If you want communication to be allowed only between the instances and not from the public internet, you can achieve that by assigning the same security group to both instances and modifying the inbound rule to allow the desired traffic (all traffic, or just ICMP, or a specific port) sourced from the security group itself.
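For example, with the AWS CLI such a self-referencing rule might look like this (sg-0123456789abcdef0 is a placeholder group ID, and port 8000 matches the example above):
# Allow TCP 8000 only from instances that carry the same security group:
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8000 \
    --source-group sg-0123456789abcdef0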
You can read more about it here:
AWS Reference

Related

Accessing a specific instance of an application installed on different machines behind a router (NAT), from the outside

Question:
Is there a way to safely reach an open socket on a machine from the internet, knowing the router's external IP, the private network's internal IP and the port, without having to explicitly configure the router and the firewall? Given that many machines on the private network run the targeted application and will only accept authenticated and encrypted requests.
Things I've explored:
Hole punching, SSH, proxy/VPN: they all seem to require configuring the router to forward the port.
UPnP: would seem like the solution, but Google tells me it has been shown to be used by malicious software, so maybe not a long-term solution?
Context:
I'm designing a system with a program (mini-server) that collects local data from the computer it is installed on and waits for a request from a known outside program (big-client), installed somewhere in the cloud, to send the collected data only when requested. I expect to have many of these mini-servers installed on many physical machines within a customer's private network, behind its router and firewall, where they would send the big-client their external IP (internet-exposed), local IP (on the local network) and port upon being booted. These mini-servers can talk to each other on the network to resolve port conflicts and such, but I want to be able to dynamically add new ones without having to go back into the router config again, as this isn't scalable.
I understand the security reasons why this should be difficult to do, but when I think about software like TeamViewer and other remote desktop applications, they seem to be able to act on many machines behind a router without any conflict, and to do so upon request from the outside.
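For comparison, remote-desktop tools generally rely on an outbound (reverse) connection from the machine behind the router, which needs no router configuration. A minimal sketch of that idea using an SSH reverse tunnel (big-client.example.com, tunneluser and the ports are placeholders):
# Run on the mini-server, behind the NAT; it dials out to the big-client:
ssh -N -R 9000:localhost:8000 tunneluser@big-client.example.com
# The big-client can now reach the mini-server's local port 8000 via its own localhost:9000.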

K8s: routing traffic to a subnet via a pod (accessing VPN clients from pods)

I'm running an app on Kubernetes / GKE.
I have a bunch of devices without a public IP. I need to access SSH and VNC of those devices from the app.
The initial thought was to run an OpenVPN server within the cluster and have the devices connect, but then I hit the problem:
There doesn't seem to be any elegant / idiomatic way to route traffic from the app to the VPN clients.
Basically, all I need is to be able to tell route 10.8.0.0/24 via vpn-pod
Possible solutions I've found:
Modifying routes on the nodes. I'd like to keep nodes ephemeral and have everything in K8s manifests only.
DaemonSet to add the routes on nodes with K8s manifests. It's not clear how to keep track of OpenVPN pod IP changes, however.
Istio. Seems like overkill, and I wasn't able to find a solution to my problem in the documentation. L3 routing doesn't seem to be supported, so it would have to involve port mapping.
Calico. It is natively supported at GKE and it does support L3 routing, but I would like to avoid introducing such far-reaching changes for something that could have been solved with a single custom route.
OpenVPN client sidecar. Would work quite elegantly and it wouldn't matter where and how the VPN server is hosted, as long as the clients are allowed to communicate with each other. However, I'd like to isolate the clients and I might need to access the clients from different pods, meaning having to place the sidecar in multiple places, polluting the deployments. The isolation could be achieved by separating clients into classes in different IP ranges.
Routes within GCP / GKE itself. They only allow specifying a node as the next hop. This also means that both the app and the VPN server must run within GCP.
I'm currently leaning towards running the OpenVPN server on a bare-bones VM and using the GCP routes. It works, I can ping the VPN clients from the K8s app, but it still seems brittle and hard-wired.
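A sketch of what that custom route could look like with the gcloud CLI (the network, instance name and zone are placeholders):
# Send the OpenVPN client subnet to the VM acting as the VPN gateway:
gcloud compute routes create vpn-clients \
    --network=default \
    --destination-range=10.8.0.0/24 \
    --next-hop-instance=openvpn-gateway \
    --next-hop-instance-zone=us-central1-a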
However, only the sidecar solution provides a way to fully separate the concerns.
Is there an idiomatic solution to accessing the pod-private network from other pods?
The solution you devised - with the OpenVPN server acting as a gateway for multiple devices (I assume there will be dozens or even hundreds of simultaneous connections) - is the best way to do it.
GCP's Cloud VPN unfortunately doesn't offer the needed functionality (just site-to-site connections), so we can't use it.
You could simplify your solution by putting the OpenVPN server in GCP (in the same VPC network as your application), so your app could talk directly to the server and then to the clients. I believe that by doing this you would get rid of the "brittle and hard-wired" part.
You will have to decide which solution works best for you - OpenVPN in or out of GCP.
In my opinion, hosting the OpenVPN server in GCP will be more elegant and simple, but not necessarily cheaper.
Regardless of the solution, you can put the clients in different IP ranges, but I would go for configuring some iptables rules (on the OpenVPN server) to block communication and allow clients to reach only a few IPs in the network. That way, if in the future you needed some clients to communicate, it would just be a matter of iptables configuration.
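A rough sketch of that iptables idea (tun0 is the usual OpenVPN interface; 10.128.0.5 stands in for the application's VPC address):
# No client-to-client traffic over the VPN:
iptables -A FORWARD -i tun0 -o tun0 -j DROP
# Clients may only reach the application host; everything else from clients is dropped:
iptables -A FORWARD -i tun0 -d 10.128.0.5 -j ACCEPT
iptables -A FORWARD -i tun0 -j DROP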

Can't access port 7474 across Rackspace cloud servers

I've set up Neo4j on port 7474 on a Rackspace cloud server. I want to access this server from another Rackspace cloud server (appserver), but the connection is refused.
I've tried enabling access for the appserver to port 7474 on the neo4j server using ufw:
sudo ufw allow from 22.234.298.297 to any port 7474
I can see this rule when I run 'ufw status', but it doesn't seem to make any difference when I try to connect from the appserver. I can SSH between these two servers.
How do I open port 7474 between cloud servers on Rackspace?
(My apologies for this very basic question, but Rackspace support are not helping and I can't find Rackspace-specific information on this.)
Glad we could solve the problem (see comments on the question).
It so happens that Neo4j accepts only connections from localhost by default. When trying to access Neo4j via the REST API from an app server within the same network, one has to configure the Neo4j server to open up.
The neo4j-server.properties configuration file has a configuration key, org.neo4j.server.webserver.address. You have a couple of options here.
Grant app servers in the same local network access to the Neo4j REST API
Grant everybody access and let the firewall handle it
For the first case, use the local IP address of the machine where Neo4j is running. Let's say your machines are connected via a private class C network, and the machine with Neo4j has the IP 192.168.1.4 - that's the value you want to enter in org.neo4j.server.webserver.address, so your app server running in the same network, with maybe an IP of 192.168.1.5, can make network requests that are answered by the Neo4j web server.
For the second case, enter 0.0.0.0 as the value of org.neo4j.server.webserver.address to denote that you want to accept connections on all available IP addresses on that machine. In that case, you want to set up your firewall to control who can talk to the server and who can't - even with authentication enabled.
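As an illustration, using the example addresses above (the exact file location and REST path depend on your Neo4j version):
# In conf/neo4j-server.properties on the Neo4j host, set one of:
#   org.neo4j.server.webserver.address=192.168.1.4   (option 1: local network address only)
#   org.neo4j.server.webserver.address=0.0.0.0       (option 2: all interfaces, firewalled)
# Then restart Neo4j and test from the app server:
curl http://192.168.1.4:7474/db/data/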
Extra
In a production environment that requires high availability, one can use Neo4j's enterprise edition with a high-availability cluster in a master-slave setting. I've used it with one master and two slaves. I configured the Neo4j servers so that they can only be accessed from the proxy server that routes write queries to the master and read queries to the slaves. The proxy itself had a hardware firewall on it to ensure only specific app servers within the network have access to the Neo4j database.

Access to a site on localhost from remote

I usually develop my project on localhost, on Apache on an Ubuntu machine.
Sometimes I need to show progress to my customer.
Is it possible to access localhost from a remote machine?
You can use a service that provides a tunnel to your local service, such as localtunnel, pagekite or ngrok. These services simplify setting up remote demos, mobile testing and some provide request inspection as well.
I find ngrok useful because it provides a https address, which is needed to test things like webcam access.
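A minimal sketch with ngrok, assuming it is installed and the site is served on local port 80:
# Expose the local site; ngrok prints a public https URL that tunnels to localhost:80:
ngrok http 80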
Terms used in this answer:
Host = machine with site on it
Client = machine you are trying to access the host from
If the host and client are on the same network, you can access the host from the client by entering
http://(hostname or ip address)
in your client's browser. If the site is not running on port 80 (for HTTP) or port 443 (for HTTPS), add the port like so (this example assumes your server is on 8080, a common alternate port):
http://(hostname or ip address):8080
If the host and client are not on the same network, and you need to reach across the internet from the client to see the host, you will need to make your host available on the internet for the client to access.
This can be extremely dangerous for your information security if you're not sure what you're doing, and I'd recommend getting a cheap hosting account instead (you can get one for around $10/month at places like 1:1 hosting).
There are many methods to do this - the differences are security, ease of configuration, and cost of the solution.
Below are some methods, each with a short analysis.
Port Forwarding (with Dynamic DNS and SSL encryption)
This requires router configuration (to forward a public port on your router to the localhost port), and it also requires you to have a fixed IP address. If your IP address is not fixed (the most common case), you need to use a Dynamic DNS service so you can use a domain name instead of the IP address (there are a lot of free services available). That still leaves the security question open. To solve it, i.e. to set up an SSL certificate, we can use the Let's Encrypt service ( https://letsencrypt.org/ ) to get a free certificate; however, we then have to configure the local server to use the certificate, or set up a reverse proxy (in most cases nginx or Apache) and configure the proxy to use the certificate.
Conclusion – Hard to set up if we want a secure connection (can be done for free)
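For the certificate step, a minimal sketch with certbot, assuming nginx is the local reverse proxy and demo.example.com is a placeholder domain already pointing at your router:
# Obtain a free Let's Encrypt certificate and let certbot configure nginx to use it:
sudo certbot --nginx -d demo.example.com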
VPN
For this scenario we use a VPN service. We connect our local machine to the VPN, and on the other side the client's machine connects to the same VPN, which allows it to reach our localhost by its local IP address. We could also set up our own VPN server, but that requires the knowledge to do it right.
Conclusion – Easy, Paid, Secure, Bad User Experience (connecting to VPN every time you need to connect to localhost)
Tunneling
For this scenario we can use free tunneling services (e.g. https://tunnelin.com/). The process is very straightforward: register a user, connect your device to the service (by running a one-line command on the device), and use the web interface to open/close secure tunnels to the device.
Conclusion – Free, Secure, Easy
Yes, if you have a public and static IP. Usually, though, ISPs only give you an address that stays the same for the duration of a session (i.e. until you disconnect and connect again).

Create a Windows (win32) service discoverable across the network

In short: How to reliably discover a server running somewhere on a (presumably multi-segmented) local area network with zero client configuration
My client application has to locate the server application without knowing the server IP address. It has to work on a local LAN that may be split into segments with hubs or other switching devices.
I already have a working solution, but it is a bit cumbersome to get it working on multi-segment networks. It works as follows:
When the client starts up, it sends UDP broadcasts on its own network segment. If the server is running on the same segment, it works without any issues - the server responds with the appropriate messages.
If the server and client are running on networks separated by a hub / switch that won't forward UDP (the most likely case), then I have a server instance running on each segment, and they forward client requests to each other via TCP - but I need to configure this for the server instances (simple, but still a pain for tech support.) This is the main problem that I need to address. There are sites where we have hundreds of clients running on 5 or 6 separate segments.
The problems I'm facing:
1. Although my application installer enables the appropriate ports on the firewall, sometimes I come across situations where this doesn't seem to happen correctly.
2. Having to run multiple server instances (and therefore configure and maintain them) on hub/switched networks that won't forward UDP
Finally I need a solution that will work without maintenance on a minimal Windows network (XP / 2000 / Vista) that probably doesn't have Active Directory or other lookup services configured.
I don't want to tag on any runtime stuff for this - should be able to do it with plain VC++ or Delphi.
What approaches do commercial apps usually take? I know that SQL Server uses a combination of broadcast and NetBEUI calls (I may be wrong about this).
Thanks in advance.
You have a few terminology issues:
Where you say "network segment" you appear to mean "IP subnet". Devices on the same network segment can see the same IP broadcasts.
Where you say "hub/switch" you appear mean "IP router".
Where you say "won't forward UDP", the problem is actually "won't forward IP broadcasts".
Once we get past that, you have a few options:
Your servers could register themselves under a well-known name in DNS, if you have a DNS server that allows dynamic DNS updates. You should probably use an SRV record as specified in RFC 2782 (a client-side lookup sketch follows this list). The clients then do a DNS lookup to find the server(s).
You could statically assign your server(s) well-known names in the organisation's DNS, perhaps with an SRV record as with the previous option.
Your servers could join an IP multicast group, if your routers support IP multicast. The clients then send their initial discovery request as a UDP packet to the (pre-ordained) multicast address.
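For the SRV-record options above, client-side discovery is just a DNS query; a minimal sketch (_myapp._tcp.example.local is a placeholder service name):
# Works from a Windows or Linux client with no extra configuration:
nslookup -type=SRV _myapp._tcp.example.local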
If you have a domain server, I would go with a small service on it. You can connect other services to it and use it as a distribution point.
Why a domain server? It is relatively easy to find its name (DsGetDcName).
Other choices would include a DHCP server, a DNS server, or something of that kind that maintenance staff have to configure anyhow.
