I created an instance group with 8 instances, all running identical programs. Even though I checked the Allow HTTP traffic and Allow HTTPS traffic boxes during creation, running mpirun -n 8 --hostfile host_file hostname hangs. The host file contains the internal IPs of all the instances, and every instance has the same host file. I also tried skipping the host file and providing the hosts one by one. On a single VM, all MPI programs worked as expected. How can I configure GCE to allow MPI communication?
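For context: the HTTP/HTTPS checkboxes only open ports 80 and 443 from the outside, while MPI communicates between the nodes on arbitrary high TCP ports, so those need their own firewall rule. A minimal sketch of such a rule, assuming the instances sit on the default network (whose internal range is 10.128.0.0/9) and that the rule name allow-mpi-internal is free to use:

Example (gcloud, run once per project)
gcloud compute firewall-rules create allow-mpi-internal --network=default --allow=tcp,udp,icmp --source-ranges=10.128.0.0/9

This permits all TCP/UDP traffic between machines in the internal range, which covers the dynamically chosen ports mpirun uses.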
I followed this tutorial to set up two EC2 instances: Creation of two EC2 instances and how to establish ping communication - YouTube
The only difference is that I used a Linux image.
I set up a simple Python HTTP server on one machine (on port 8000), but I cannot access it from my other machine; whenever I curl it, the request just hangs. (It might eventually time out, but I wasn't patient enough to witness that.)
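(For reference, the server is just the standard library one, started with something like the line below; the exact flags are an assumption, but --bind 0.0.0.0 makes it listen on all interfaces, so a loopback-only bind is not the issue.)

Example (on the serving machine)
python3 -m http.server 8000 --bind 0.0.0.0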
The workaround, I figured out, is to add a port rule via the security group. I do not like this option, since it means that port (on the machine hosting the web server) can be accessed from the internet.
I was looking for an experience similar to what people usually have at home with their routers: machines connected to the same home router can reach each other on any port (provided the destination machine has a service listening on that port).
What is the solution to achieve something like this when working with EC2?
The instance is open to the internet because you are allowing access from '0.0.0.0/0' (anywhere) in the inbound rule of the security group.
If you want communication to be allowed only between the instances and not from the public internet, you can achieve that by assigning the same security group to both instances and modifying the security group's inbound rule to allow all traffic (or just ICMP traffic) sourced from the security group itself.
You can read more about it here:
AWS Reference
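As a concrete sketch, assuming a hypothetical security group ID of sg-0123456789abcdef0 attached to both instances, the self-referencing rule for the port-8000 case could be added like this (use protocol -1 instead of tcp/8000 to allow all traffic):

Example (AWS CLI)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8000 --source-group sg-0123456789abcdef0

After that, the instances can reach each other on port 8000 while nothing is opened to the public internet.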
All - First off, this may not be possible, but I thought I would try. Our company security policy has us use Linux bastions to SSH-tunnel into segmented networks. It is nice from the desktop (as opposed to Windows bastions) because we can run PuTTY, keep the terminal session in the background, and reach the sites via a standard browser/apps. In short, we add 127.x (loopback) versions of the endpoints to the local hosts files on our machines, then launch a batch file that connects to the Linux bastion with local host/port to remote host/port mappings. So far so good.
Example (hosts file)
127.22.22.22 website.com # real IP would be 10.22.22.22
Example (batch file mapping, for port 443 in this example)
-L 127.22.22.22:443:10.22.22.22:443
Now we have a new application that we can reach, but as part of its login logic it picks a random port from a range of roughly 1000 ports. These connections always go to the same known remote port (for example, 18501). Is there a way to map a range of local ports to a single port on the remote side? And is it made even harder by the fact that we are already technically remapping hosts with the loopback step (if that makes any sense)? I'd appreciate any ideas!
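One possible approach, sketched under the assumption that the client side can run OpenSSH (plink accepts the same -L syntax) and that the application's range is 18000-18999 (a placeholder, since the real range isn't specified): ssh accepts any number of -L options, so the range can be generated instead of typed, with every local port in the range pointing at the single known remote port.

Example (shell, generating one -L flag per port)
opts=""
for p in $(seq 18000 18999); do
  opts="$opts -L 127.22.22.22:$p:10.22.22.22:18501"
done
ssh $opts user@bastion

The existing loopback aliasing is not an obstacle; each -L just binds one more listener on 127.22.22.22. A thousand forwards does mean a thousand listeners, though, so command-line length and per-process file-descriptor limits may come into play.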
I'm trying to set up an OpenStack environment with two Kubernetes clusters, one for production and one for testing. My idea was to separate them with two networks in OpenStack and then put a VPN in front, to limit the exposure through floating IPs (for this I would have a proxy that routes requests to the correct internal addresses).
However, issues arise when I try to tunnel requests to both networks while connected to the VPN. Whether I run the VPN in its own network or in one of the two, I can't seem to make requests across network boundaries.
Is there a better way to configure the networking in OpenStack or OpenVPN, so that I can keep the clusters separated and still have access to all resources through a single OpenVPN installation?
Is it better to run everything in the same OpenStack network and separate the clusters with subnets? Could I still have the production and test clusters expose different IP addresses externally? Would they still be separated enough to limit the risk of them accessing each other?
Side note: I use Terraform to deploy the infrastructure and Ansible to install resources, in case someone has suggestions along the lines of already-prepared scripts.
Thanks,
The solution I went for was to give each environment its own network and CIDR, and then attach both networks to the VPN instance so that it has access to them. From there I just tunnel everything.
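For anyone reproducing this, the essential part is just pushing routes for both CIDRs to VPN clients. A minimal sketch of the server configuration, with 10.10.0.0/24 (production) and 10.20.0.0/24 (testing) as placeholder cluster networks:

Example (OpenVPN server.conf fragment)
push "route 10.10.0.0 255.255.255.0"
push "route 10.20.0.0 255.255.255.0"

The VPN instance also needs a port on each network and IP forwarding enabled, but with that in place one OpenVPN installation reaches both clusters.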
Windows Server 2008 R2, fully patched and updated.
I have 4 static IPs on a dedicated server. I will refer to them as follows:
x.x.x.x
y.y.y.y
z.z.z.z
a.a.a.a
x.x.x.x is the default external and internal IP address of the server.
All external IPs are the same as the internal IPs, all running on the same NIC.
x.x.x.x and y.y.y.y were running on port 80 for HTTP through IIS, with different host headers handling the destinations. That worked perfectly.
I recently added two new IP addresses, z.z.z.z and a.a.a.a, for a different application that uses two ports, but we want external port-80 traffic to translate to the internal ports it is using.
We want incoming traffic to work as follows:
Incoming traffic on x.x.x.x:80 map to x.x.x.x:8080
Incoming traffic on y.y.y.y:80 map to y.y.y.y:8080
Incoming traffic on z.z.z.z:80 map to y.y.y.y:8088
Incoming traffic on a.a.a.a:80 map to y.y.y.y:8089
We changed the bindings in IIS to listen only on the specific IPs and internal ports, so that port 80 was handled exclusively by netsh portproxy.
We have been able to accomplish this with 4 separate netsh portproxy rules, and everything works great. All traffic to the two HTTP IPs works fine, and the traffic for the other two IPs gets routed properly to the other two internal ports as well.
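For reference, the four rules look like this (same placeholder addresses as above):

Example (batch file)
netsh interface portproxy add v4tov4 listenaddress=x.x.x.x listenport=80 connectaddress=x.x.x.x connectport=8080
netsh interface portproxy add v4tov4 listenaddress=y.y.y.y listenport=80 connectaddress=y.y.y.y connectport=8080
netsh interface portproxy add v4tov4 listenaddress=z.z.z.z listenport=80 connectaddress=y.y.y.y connectport=8088
netsh interface portproxy add v4tov4 listenaddress=a.a.a.a listenport=80 connectaddress=y.y.y.y connectport=8089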
The problem is that everything works as expected until, occasionally, something hangs (usually around 4 PM EST) and the websites are no longer available. There are no application pool or website crashes; the ports simply stop routing.
When it hangs, the easiest fix is to run netsh interface portproxy reset and re-create the portproxy rules through a batch file, and everything works again.
My question is whether there is some kind of idle timeout built into netsh portproxy, or possibly some type of buffer-overrun protection.
Not a single log shows any faults.
The application pools in IIS were adjusted to recycle at shorter intervals (30 minutes) to prevent any long running worker processes in case that was the issue. Same result.
I can easily create a Windows service that checks the port status at a very short interval and resets the specific portproxy rule, but this is not ideal: there is still the potential for packet loss and service unavailability (even if only for a few seconds), requiring the HTTP requests to be re-sent.
Again, I should reiterate that everything works great until a certain point in the day, and that this has absolutely nothing to do with the Windows Firewall, as we get the exact same results with it on or off. There are no apparent DDoS or other types of attacks either. All separate websites and other applications still run on their internal ports when the portproxy hangs (i.e. accessing http://example.com:8080 still works without issue).
The point of failure is the netsh portproxy.
Has anyone experienced similar issues? I am considering adding a Fortinet hardware firewall that has this functionality built in, but I am wondering if that will handle it any better than what is already in place.
I have a host laptop running Debian, and a client VM running Debian. On the client, I run NGINX, and it serves up a complex web application with several hostnames (e.g. www.host, api.host, blog.host). The laptop moves between several different networks, with a seemingly ever-changing IP address.
I'm trying to meet the following conditions with this VM:
The IP address of the client shouldn't change (e.g. always 192.168.10.10)
With a static IP, I could edit the host's /etc/hosts file and keep the complex hostnames (see the example after this list)
The client should have access to the Internet
No other machines need to access the client
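For concreteness, the hosts entry I have in mind would look something like this (the hostnames are the examples from above):

Example (host's /etc/hosts)
192.168.10.10 www.host api.host blog.host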
What is the best way to set up the Attached to settings for this client?
To do this, simply add two network interfaces to the box.
The first interface will use Host-Only, and that is how your host can connect to the client. This will create an additional network adapter on the host.
The second interface will use NAT, and that is the gateway to the internet. This will create an additional network adapter on the client.
If you've already got a client running, bring the new network adapter up by executing sudo ifconfig eth1 up; to get an IP address, run sudo dhclient eth1.
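Since the goal is a fixed address rather than a DHCP lease, the host-only interface can instead be configured statically inside the client. A sketch assuming Debian's classic ifupdown, a host-only network of 192.168.10.0/24, and that the host-only adapter shows up as eth0 (check with ip link and adjust the name):

Example (client /etc/network/interfaces)
auto eth0
iface eth0 inet static
    address 192.168.10.10
    netmask 255.255.255.0

The NAT interface keeps using DHCP for internet access; only the host-only leg is pinned, which satisfies the always-192.168.10.10 requirement.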