How to create an Ant Media Server auto-scaling cluster with a custom VPC using CloudFormation on AWS? - ant-media-server

I'm looking to use a custom VPC, rather than the default one, when setting up an Ant Media Server cluster on AWS using CloudFormation.
Could you please let me know how to create a custom VPC, and whether there's anything critical to keep in mind when creating one to use with the CloudFormation template?
Thanks.

One thing to keep in mind: say you created a VPC with the 10.0.0.0/16 CIDR block. You need to create two subnets, for example 10.0.0.0/24 and 10.0.1.0/24, and then create an internet gateway for those subnets so they can reach the internet: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html
You should also check the route table to confirm the subnets route through the internet gateway.
Also, double-check that "Auto-assign public IP" is enabled on the subnets; the setup will not work if the instances don't get auto-assigned public IPs. A scripted sketch of these steps follows below.
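For reference, the same steps can be done with the AWS CLI, roughly like this (all IDs such as vpc-XXXX are placeholders you would substitute from each command's output):
# Create the VPC and two subnets in different availability zones
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-XXXX --cidr-block 10.0.0.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-XXXX --cidr-block 10.0.1.0/24 --availability-zone us-east-1b
# Create an internet gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --vpc-id vpc-XXXX --internet-gateway-id igw-XXXX
# Route all outbound traffic through the internet gateway and associate the table with each subnet
aws ec2 create-route-table --vpc-id vpc-XXXX
aws ec2 create-route --route-table-id rtb-XXXX --destination-cidr-block 0.0.0.0/0 --gateway-id igw-XXXX
aws ec2 associate-route-table --route-table-id rtb-XXXX --subnet-id subnet-XXXX
# Enable auto-assign public IP on each subnet
aws ec2 modify-subnet-attribute --subnet-id subnet-XXXX --map-public-ip-on-launch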
Then it should be fine with these adjustments.
Cheers

Related

Connect to OpenStack instance via the internet through the router

I've recently found out that the external network for our OpenStack (Ocata) setup has run out of available IP addresses in its allocation table; in fact it is over-allocated, showing -9 free IPs. So, to manage the limited IP addresses, is it possible to access an instance in a project directly from an external network (the internet) via the project's router? That way only a single IP address would need to be allocated per project, instead of one per instance.
The short answer is NO, but there are a couple of workarounds that come to mind (not that they're pretty, but they will work).
If any instance in your private network has a floating IP, you can use that host as a jump host (bastion host) to SSH into the target host. This also brings the benefits of port forwarding/SSH tunnelling if you need to reach some other port.
You can always access any host on a private network through the qdhcp or qrouter namespace from the network node:
ip netns exec qdhcp-XXXXXXX ssh user@internal-IP
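For the jump-host option, a minimal sketch (user names and the floating IP are placeholders):
# Hop through the bastion to the target host (OpenSSH 7.3+ ProxyJump)
ssh -J user@floating-IP user@internal-IP
# Or tunnel a port on the internal host to your local machine, e.g. a web UI on 8080
ssh -L 8080:internal-IP:8080 user@floating-IP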

Assign a domain name to a floating IP

I want to assign a domain name to an OpenStack floating IP so the instance can be reached over the internet.
I found that you can set dnsmasq_dns_servers = 1.1.1.1 and configure dhcp_agent.ini accordingly, which seems to be a step in the right direction, but I couldn't find a way to assign a domain name to an OpenStack instance (via Horizon or the CLI).
The dnsmasq server that is managed by the DHCP agent is used to implement DHCP in subnets where DHCP is enabled. It does not resolve hostnames. If you want to be able to resolve hostnames internally, you could look into running a DNS server in your subnet, or maintaining a hosts file on each instance that needs to communicate with the instance.
You could look at Designate, the DNS-as-a-Service component of OpenStack. It is also possible to integrate Designate with an external service to manage external DNS.
See SysEleven's How to set up DNS for a Server/Website.
It walks you through the process of:
Creating the zone,
adding the DNS record, and finally
making the zone authoritative in global DNS.
It assumes you can use the OpenStack CLI, but there's also documentation on doing the same thing with Terraform, which I'd recommend since it fully automates the setup as infrastructure as code (IaC).
It should apply to any OpenStack provider.
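For a rough idea of the CLI flow with Designate (the zone name, email, and IP below are placeholders, and your provider's setup may differ):
# Create the zone
openstack zone create --email admin@example.com example.com.
# Add an A record pointing the name at the floating IP
openstack recordset create --type A --record 203.0.113.10 example.com. www.example.com.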

How to set the external IP of a specific node in Google Kubernetes Engine?

Unfortunately, we have to interface with a third-party service which instead of implementing authentication, relies on the request IP to determine if a client is authorized or not.
This is problematic because Kubernetes starts and destroys nodes, and each time the external IP changes. Is there a way to make sure the external IP is chosen from a fixed set of IPs? That way we could communicate those IPs to the third party and requests from them would be authorized. I only found a way to fix the service IP, but that does not affect the individual nodes' IPs.
To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.
Yes, it's possible using KubeIP.
You can create a pool of shareable IP addresses and use KubeIP to automatically attach an address from the pool to a Kubernetes node.
IP addresses can be created by:
opening the Google Cloud Console,
going to VPC Network -> External IP addresses, and
clicking "Reserve Static Address" and following the wizard (for the Network Service Tier, I think it needs to be "Premium" for this to work).
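The same reservation can be scripted with gcloud; as far as I know KubeIP picks up pool addresses by label, so something like this (the name, region, and label value are placeholders that would have to match your KubeIP config):
# Reserve a regional static external IP on the Premium tier
gcloud compute addresses create kubeip-ip1 --region us-central1 --network-tier PREMIUM
# Label it so KubeIP can find it in the pool
gcloud beta compute addresses update kubeip-ip1 --region us-central1 --update-labels kubeip=reserved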
The easiest way to get a single static IP for GKE nodes or the entire cluster is to use NAT.
You can either use a custom NAT solution or Google Cloud NAT with a private cluster.
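As a sketch of the Cloud NAT route (network, region, and resource names are placeholders; node egress only goes through the NAT if the cluster is private):
# Reserve a fixed IP for the NAT so the egress address never changes
gcloud compute addresses create nat-ip --region us-central1
# Create a Cloud Router and a NAT that uses the reserved IP for all subnets
gcloud compute routers create nat-router --network my-vpc --region us-central1
gcloud compute routers nats create nat-config --router nat-router --router-region us-central1 --nat-external-ip-pool nat-ip --nat-all-subnet-ip-ranges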

TCP connection between two openshift containers

I have two applications (DIY container type) that have to be connected via TCP. Take as an example the applications clusternode1 and clusternode2.
Each one has a TCP listener set up on $OPENSHIFT_DIY_IP:$OPENSHIFT_DIY_PORT.
For some reason clusternode1 fails to connect to any of the following options for clusternode2:
$OPENSHIFT_DIY_IP:$OPENSHIFT_DIY_PORT
$OPENSHIFT_APP_DNS
Can you please help me understand what the URL for an external TCP connection should be?
You might check the logs to see whether the OPENSHIFT_DIY_IP values for the two apps are within the same subnet. If one, say, is...
1.2.3.4
...and the other is...
1.5.6.7
...for example, then you wouldn't expect Amazon's firewalls to arbitrarily allow TCP traffic from one subnet to another. If that were allowed by default, one person's app could try to hack another's.
I know that when you're dealing directly with Amazon AWS and you spin up multiple virtual servers, you have to set up security zones to allow traffic between them. Something similar may be necessary here.
Proxy ports: I don't know if this is useful, but it's possible that a private IP address is bound to your application(s) and a NAT server translates it into a public IP address.
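To isolate where it fails, you could probe the listener directly (the target IP and port are placeholders for whatever clusternode2 reports in its own environment):
# On clusternode2: confirm what the listener is actually bound to
echo "listening on $OPENSHIFT_DIY_IP:$OPENSHIFT_DIY_PORT"
# On clusternode1: probe that address/port over TCP
nc -vz <clusternode2-IP> <clusternode2-port>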

AWS Connecting to instance in public subnet

I'm trying to SSH into a known-good instance inside a new AWS VPC.
Setup so far:
Elastic IP attached to the VPC instance inside the public subnet
IGW attached to the VPC, with a 0.0.0.0/0 route to it in the subnet's route table
Security groups set up
Does anyone have any debugging tips? Does the configuration matter?
Mostly I want to know how to debug and isolate issues like this.
Check your security group: make sure you're allowing the SSH port (22) inbound, and that the rule's source CIDR covers the IP you're connecting from.
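A few concrete checks (the key file, user, IPs, and IDs are placeholders):
# Verbose SSH shows whether it fails at TCP connect (network path) or at auth (keys)
ssh -vvv -i mykey.pem ec2-user@ELASTIC-IP
# Inspect the security group's inbound rules for port 22
aws ec2 describe-security-groups --group-ids sg-XXXX --query "SecurityGroups[].IpPermissions"
# Confirm the subnet's route table has a 0.0.0.0/0 route to the internet gateway
aws ec2 describe-route-tables --filters Name=association.subnet-id,Values=subnet-XXXX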
