I have the following scenario:
Elastic beanstalk with N instances
ELB for load balancing to the Elastic Beanstalk instances
External datacenter with IP Filtering
Since I can't filter by name (FQDN), and there is no single IP to filter on either, is there a way to make all the requests that come from the AWS machines share one IP? Or maybe use a third machine as a proxy for the AWS machines' calls and attach an EIP to it?
Not really. Or at least, if there's a way to do it, I'd love to hear about it. One of the biggest problems with beanstalk is its requirement to exist outside of VPCs, and thus, in arbitrary Amazon IP space. About the only workaround I've found for this after talking to AWS engineers is to forward traffic from them to something like a bastion server, and allow the bastion server to communicate with your data center firewall. Maybe there's something I'm missing, but I know of no other way to get it working without some server in between the beanstalk instances and the data center; not if the IP of the server matters.
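A rough sketch of that in-between server, assuming a Linux instance with an EIP attached that NATs the Beanstalk traffic toward the data center (the interface name and data-center address are illustrative):

```shell
# On the bastion instance (Elastic IP attached): enable forwarding.
sudo sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic leaving eth0 toward the data center, so the
# firewall there only ever sees the bastion's elastic IP.
# 203.0.113.10 stands in for the data-center endpoint.
sudo iptables -t nat -A POSTROUTING -o eth0 -d 203.0.113.10 -j MASQUERADE
```

The Beanstalk instances would then be pointed at the bastion instead of the data center directly, and the data-center firewall whitelists the single elastic IP.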
I followed this tutorial to set up two EC2 instances: "12. Creation of two EC2 instances and how to establish ping communication" - YouTube
The only difference is that I used a Linux image.
I set up a simple Python HTTP server on one machine (on port 8000), but I cannot access it from my other machine; whenever I curl, the request just hangs. (It might eventually time out, but I wasn't patient enough to witness that.)
However, the workaround, I figured, was to add a port rule via the security group. I don't like this option, since it means that port (on the machine hosting the web server) can be accessed from the internet.
I was looking for an experience similar to what people usually have at home with their routers; machines connected to the same home router can reach out to other machines on any port (provided the destination machine has some service hosted on that port).
What is the solution to achieve something like this when working with ec2?
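The setup described above can be reproduced locally; a minimal sketch using Python's built-in HTTP server (the security group, not the server, decides whether the second instance can reach port 8000):

```python
import http.server
import threading
import urllib.request

# The question's server: `python3 -m http.server 8000` serves the
# current directory on all interfaces.
server = http.server.HTTPServer(("0.0.0.0", 8000),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# From the second instance this is `curl http://<private-ip>:8000/`.
# With no inbound rule for port 8000 the packets are dropped and curl
# hangs, exactly as described; locally, the request succeeds.
resp = urllib.request.urlopen("http://127.0.0.1:8000/", timeout=5)
print(resp.status)  # 200 when the port is reachable
server.shutdown()
```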
The instance is open to the internet because you are allowing access from 0.0.0.0/0 (anywhere) in the inbound rule of the security group.
If you want communication to be allowed only between the instances and not from the public internet, you can achieve that by assigning the same security group to both instances and modifying its inbound rule to allow all traffic (or just ICMP traffic) sourced from the security group itself.
You can read more about it here:
AWS Reference
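The self-referencing rule can be sketched with the AWS CLI (the security-group ID is illustrative; it is the one group attached to both instances):

```shell
# Allow all traffic between instances that share this security group.
# The source is the security group itself, not a CIDR block, so
# nothing is opened to the public internet.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol -1 \
    --source-group sg-0123456789abcdef0
```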
I'm running an app on Kubernetes / GKE.
I have a bunch of devices without a public IP. I need to access SSH and VNC of those devices from the app.
The initial thought was to run an OpenVPN server within the cluster and have the devices connect, but then I hit the problem:
There doesn't seem to be any elegant / idiomatic way to route traffic from the app to the VPN clients.
Basically, all I need is to be able to tell route 10.8.0.0/24 via vpn-pod
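On a plain Linux host that single route would simply be (10.4.0.17 stands in for the OpenVPN pod's IP, and the fact that it changes is exactly the problem):

```shell
# Send the whole VPN client subnet to the OpenVPN pod as next hop.
ip route add 10.8.0.0/24 via 10.4.0.17
```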
Possible solutions I've found:
Modifying routes on the nodes. I'd like to keep nodes ephemeral and have everything in K8s manifests only.
DaemonSet to add the routes on nodes with K8s manifests. It's not clear how to keep track of OpenVPN pod IP changes, however.
Istio. Seems like an overkill, and I wasn't able to find a solution to my problem in the documentation. L3 routing doesn't seem to be supported, so it would have to involve port mapping.
Calico. It is natively supported at GKE and it does support L3 routing, but I would like to avoid introducing such far-reaching changes for something that could have been solved with a single custom route.
OpenVPN client sidecar. Would work quite elegantly and it wouldn't matter where and how the VPN server is hosted, as long as the clients are allowed to communicate with each other. However, I'd like to isolate the clients and I might need to access the clients from different pods, meaning having to place the sidecar in multiple places, polluting the deployments. The isolation could be achieved by separating clients into classes in different IP ranges.
Routes within GCP / GKE itself. They only allow to specify a node as the next hop. This also means that both the app and the VPN server must run within GCP.
I'm currently leaning towards running the OpenVPN server on a bare-bones VM and using the GCP routes. It works, I can ping the VPN clients from the K8s app, but it still seems brittle and hard-wired.
However, only the sidecar solution provides a way to fully separate the concerns.
Is there an idiomatic solution to accessing the pod-private network from other pods?
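The GCP-routes variant described above, with the OpenVPN server on a VM, looks roughly like this (all names and the zone are illustrative, and the VM must be created with --can-ip-forward):

```shell
# Route the VPN client subnet to the VM running the OpenVPN server.
# Every instance in the VPC (including GKE nodes, and therefore pods)
# can then reach 10.8.0.0/24 through it.
gcloud compute routes create vpn-clients \
    --network=default \
    --destination-range=10.8.0.0/24 \
    --next-hop-instance=openvpn-gateway \
    --next-hop-instance-zone=us-central1-a
```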
The solution you devised - with the OpenVPN server acting as a gateway for multiple devices (I assume there will be dozens or even hundreds of simultaneous connections) - is the best way to do it.
GCP's VPN unfortunately doesn't offer the needed functionality (it only does site-to-site connections), so we can't use it.
You could simplify your solution by putting OpenVPN in the GCP (in the same VPC network as your application) so your app could talk directly to the server and then to the clients. I believe by doing this you would get rid of that "brittle and hardwired" part.
You will have to decide which solution works best for you - OpenVPN in or out of GCP.
In my opinion, hosting the OpenVPN server in GCP will be more elegant and simpler, but not necessarily cheaper.
Regardless of the solution, you can put the clients in different IP ranges, but I would go for configuring some iptables rules (on the OpenVPN server) to block communication and allow clients to reach only a few IPs in the network. That way, if in the future you need some clients to communicate, it's just a matter of iptables configuration.
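The isolation described above can be sketched as iptables rules on the OpenVPN server (tun0 is the usual OpenVPN interface; the application host's address is illustrative):

```shell
# Drop client-to-client traffic that would be forwarded between
# VPN clients on the same tunnel interface...
iptables -A FORWARD -i tun0 -o tun0 -j DROP

# ...but insert an exception first so every client can still reach
# the application host at 10.132.0.10.
iptables -I FORWARD -i tun0 -d 10.132.0.10 -j ACCEPT
```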
Hi, I don't want to use any cloud-based services for deployment. I have created a React app (create-react-app) with a backend (MERN stack). I want to deploy it on my local server with the Bitnami nginx stack (Ubuntu 14.04). I can't find anything about Bitnami configuration; can anyone help?
You can install the Bitnami stack from their website: https://bitnami.com/stack/nginx/installer
You can find getting-started docs on their site as well: https://docs.bitnami.com/installer/get-started/
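Once the stack is installed, a typical approach is to serve the React production build as static files and proxy API calls to the Node backend. A sketch of such an nginx server block (the paths, the /api/ prefix, and the backend port 5000 are assumptions you would adapt to your app):

```nginx
server {
    listen 80;
    server_name _;

    # Static files produced by `npm run build` in the React app.
    root /opt/bitnami/nginx/html/my-app/build;
    index index.html;

    # Client-side routing: unknown paths fall back to index.html.
    location / {
        try_files $uri /index.html;
    }

    # Proxy API calls to the Express/Node backend.
    location /api/ {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
    }
}
```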
I want to deploy it in my local server
Hosting/deploying an application from a local server is a really bad idea, because usually your computer or your router sits behind the ISP's NAT, and you don't have control over the common ports (80, 443), which means you can't use them. This is generally the case with IPv4 addresses; ISPs do it to conserve their IPv4 space. It's not generally the case with IPv6.
Also, in most cases your IP address is dynamic, which means it keeps changing over time. If you are planning to use a domain for your application, you will also need a dynamic DNS service, which usually isn't free.
On top of that, home broadband upstream speeds are very poor, and you will have to keep your local server up 24/7.
You might wanna change your mind.
Hope this Helps!
I have an ELB (in EC2-Classic) running, and one of my clients wants to hardcode an IP into his firewall rule to access our site.
I know that ELB doesn't provide a static IP, but is there a way to set up an instance (just for them) that they could hit, to be used as a gateway to our API?
(I was thinking of using HAProxy on OpsWorks, but it points directly to my instances, and I need something that points to my ELB, because SSL termination happens at that level.)
Any recommendation would be very helpful.
I assume you are running one or more instances behind your ELB.
You should be able to assign an Elastic IP to one of those instances. Note that in EC2-Classic, EIPs need to be re-associated after the instance is stopped and started.
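Allocating and attaching the address can be sketched with the AWS CLI (the instance ID and address are illustrative; note the EC2-Classic form of associate-address takes the public IP rather than an allocation ID):

```shell
# Allocate an Elastic IP in EC2-Classic...
aws ec2 allocate-address

# ...and associate it with the chosen instance behind the ELB.
aws ec2 associate-address --instance-id i-0abc123 --public-ip 198.51.100.7
```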
I have got a dedicated server running an nginx web server, located in a UK datacenter; nginx acts as a front-end server that directs users to other instances I have on AWS, located in America. Which IP address would the client see: that of the nginx front server (the desired result), or would the client still learn the IP addresses of the instances located in America?
P.S. nginx acts as a load balancer here.
Typically, users connect to "www.yoursite.com", and that gets looked up in DNS.
Assuming there is only one DNS entry (corresponding to your nginx frontend), then as far as those users are concerned, they are only talking to that one machine.
Sometimes people use round-robin DNS, where multiple machines correspond to a given host name.
Presumably you would know if you were doing this, though (:
You can confirm this by tracing your traffic when connecting. Maybe use Wireshark?
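Because nginx proxies the request, it terminates the client's TCP connection and opens its own connections to the AWS backends, so clients only ever see the UK server's IP. A sketch of such a load-balancing config (hostnames are illustrative):

```nginx
# The AWS instances nginx balances across; clients never see these.
upstream aws_backend {
    server ec2-backend-1.example.com:443;
    server ec2-backend-2.example.com:443;
}

server {
    listen 443 ssl;
    # Clients connect here, to the UK frontend only.
    location / {
        proxy_pass https://aws_backend;
    }
}
```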