Is it possible to associate an Elastic Network Interface with an OpsWorks instance? - aws-opsworks

I'd like to have the instances I create in AWS OpsWorks use preconfigured Elastic Network Interfaces so that I have some predictable internal IP addresses to use when I configure all my services. I'm not seeing a way to do this in the UI yet; has anyone come up with a slick way to do this via a recipe or custom JSON?

You can associate Elastic IPs with OpsWorks instances through the OpsWorks API. Boto, a Python library for the AWS API, will let you do exactly what you are looking for.
The specific method is:
associate_elastic_ip(elastic_ip, instance_id=None)
Boto docs
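For illustration, here is a minimal sketch using boto3 (the current AWS SDK for Python; the boto method above has an equivalent call). The region, Elastic IP, and instance ID below are placeholders for your own values.

```python
import boto3

# Placeholders: substitute your own region, Elastic IP, and OpsWorks instance ID.
client = boto3.client("opsworks", region_name="us-east-1")

# Per the OpsWorks API, the Elastic IP must already be registered with the stack
# (see register_elastic_ip) before it can be associated with an instance.
client.associate_elastic_ip(
    ElasticIp="203.0.113.10",
    InstanceId="11111111-2222-3333-4444-555555555555",
)
```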

Related

Firebase Functions cannot connect to Azure SQL Database [duplicate]

I would like to develop a Google Cloud Function that will subscribe to file changes in a Google Cloud Storage bucket and upload the file to a third party FTP site. This FTP site requires allow-listed IP addresses of clients.
As such, is it possible to get a static IP address for Google Cloud Functions containers?
Update: This feature is now available in GCP https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip
First of all, this is not an unreasonable request, so don't let anyone gaslight you. AWS Lambda has already supported this feature for a while now. If you're interested in this feature, please star this feature request: https://issuetracker.google.com/issues/112629904
Secondly, we arrived at a workaround, which I also posted to that issue; maybe it will work for you too:
Set up a VPC connector.
Create a Cloud NAT on the VPC.
Create a proxy host without a public IP, so its egress traffic is routed through Cloud NAT.
Configure the Cloud Function to use the VPC connector and to send all of its outbound traffic through the proxy server (sketched below).
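For the last step, a minimal sketch of what the function code could look like, assuming the requests library is available; the proxy's internal address and the third-party URL are placeholders:

```python
import requests

# Hypothetical internal IP/port of the proxy VM, reachable via the VPC connector.
PROXY = "http://10.8.0.2:3128"

def call_third_party(request):
    # Routing the call through the proxy means it egresses via Cloud NAT,
    # so the third party sees the NAT's (allow-listed) static IP.
    resp = requests.get(
        "https://ftp-gateway.example.com/upload",  # placeholder endpoint
        proxies={"http": PROXY, "https": PROXY},
        timeout=10,
    )
    return resp.text
```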
A caveat to this approach:
We wanted to put the proxy in a Managed Instance Group behind a GCP internal load balancer so that it would scale dynamically, but GCP Support confirmed this is not possible, because the internal LB essentially allow-lists the subnet, and the Cloud Function CIDR is outside that subnet.
I hope this is helpful.
Update: Just the other day, they announced an early-access beta for this exact feature!!
"Cloud Functions PM here. We actually have an early-access preview of this feature if you'd like to test it out.
Please complete this form so we can add you..."
The form can be found in the Issue linked above.
See answer below -- it took a number of years, but this is now supported.
https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip
For those wanting to associate Cloud Functions with a static IP address in order to whitelist the IP for an API or something of the sort, I recommend this step-by-step guide, which helped me a lot:
https://dev.to/alvardev/gcp-cloud-functions-with-a-static-ip-3fe9
I also want to specify that this solution works for Google Cloud Functions and Firebase Functions (as it is based on GCP).
This functionality is now natively part of Google Cloud Functions (see here)
It's a two-step process according to the GCF docs:
Associating function egress with a static IP address
In some cases, you might want traffic originating from your function to be associated with a static IP address. For example, this is useful if you are calling an external service that only allows requests from whitelisted IP addresses.
1. Route your function's egress through your VPC network. See the previous section, Routing function egress through your VPC network.
2. Set up Cloud NAT and specify a static IP address. Follow the guides at Specify subnet ranges for NAT and Specify IP addresses for NAT to set up Cloud NAT for the subnet associated with your function's Serverless VPC Access connector.
Refer to the link below:
https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip
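Once the connector and Cloud NAT are in place, a quick way to confirm the egress IP is to call a public IP echo service from the function. A minimal sketch, assuming the requests library is available (api.ipify.org is just one such echo service):

```python
import requests

def check_egress_ip(request):
    # The address reported by the echo service is the function's egress IP.
    # With all egress routed through the VPC connector and Cloud NAT, this
    # should be the static IP you reserved for the NAT gateway.
    resp = requests.get("https://api.ipify.org", timeout=10)
    return f"Egress IP: {resp.text}"
```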
As per Google, the feature has been released; check out the whole thread:
https://issuetracker.google.com/issues/112629904
It's not possible to assign a static IP to Google Cloud Functions, as doing so runs counter to the "serverless" nature of the architecture, i.e. servers are allocated and deallocated on demand.
You can, however, leverage an HTTP proxy to achieve a similar effect. Set up a Google Compute Engine instance, assign it a static IP, and install a proxy library such as https://www.npmjs.com/package/http-proxy. You can then route all your external API calls through this proxy.
This reduces scalability and flexibility, but it can serve as a workaround.

Direct pod networking with Kubernetes

I am trying to move a currently Docker-based app to Kubernetes.
My app inspects network traffic that passes through it, and because of that it needs an accessible External IP, and it needs to accept traffic on all ports, not just some.
Right now, I am using Docker with a macvlan network driver in order to attach containers to multiple interfaces and allow them to inspect traffic that way.
After some research, I've found that the only way to access pods in Kubernetes is through Services, but Services only expose specific ports, because they are mostly intended for "server"-type applications and not the "forwarder"/"sniffer" type that I am looking for.
Is Kubernetes a good fit for this type of application? Does it offer tools to cope with this problem?
Is Kubernetes a good fit for this type of application? Does it offer tools to cope with this problem?
Whether it is a good fit is more of an opinion. Pods in Kubernetes have their own PodCIDR that is not exposed to the outside world, and a sniffer doesn't quite fit either a Service or a Job definition, which are the typical workloads in Kubernetes.
That said, it can be done if you use a custom CNI plugin that supports macvlan.
You can also use something like Multus, which supports the macvlan plugin.
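As an illustration, here is a rough sketch of creating a Multus NetworkAttachmentDefinition for a macvlan attachment with the Kubernetes Python client. It assumes Multus is already installed in the cluster; the attachment name, host interface, and subnet are placeholders:

```python
import json
from kubernetes import client, config

config.load_kube_config()

# macvlan CNI config; "master" is the host interface to attach to (placeholder).
cni_config = {
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth0",
    "mode": "bridge",
    "ipam": {"type": "host-local", "subnet": "192.168.1.0/24"},
}

attachment = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "macvlan-sniffer"},
    "spec": {"config": json.dumps(cni_config)},
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="k8s.cni.cncf.io",
    version="v1",
    namespace="default",
    plural="network-attachment-definitions",
    body=attachment,
)
```

A pod can then request the extra interface with the annotation k8s.v1.cni.cncf.io/networks: macvlan-sniffer, giving it a second, directly reachable interface alongside the regular pod network.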

Use an existing Gateway with Azure Machine Learning?

We want to access an on-premises SQL database with an existing gateway; is that possible in AML? The tool only seems to allow creating new gateways.
Confirmed that this is not possible; AML only allows the use of AML-created gateways.

NFV on OpenStack

I am fairly new to NFV + SDN. I have downloaded OpenDaylight and OpenStack in one Fedora 20 VM, and I have a Mininet network as the underlying physical topology in a separate VM. I want to run services like VPN, L3 routing and NAT, load balancing, etc. on OpenStack, but I don't have a very clear picture of how to start. As far as I have understood, I have to run these services on OpenStack nodes (through VM instances) and route the traffic through the Mininet topology with OpenDaylight as the controller in the middle.
My confusions are:
How do I start writing the applications (firewall, VPN, NAT, etc.) on OpenStack?
Do I have to write code for such services, or is it command-line configuration?
I came across the Neutron API; is that of any help?
Came across this: http://docs.openstack.org/api/openstack-network/2.0/content/API_extensions.html
I have looked at the other questions regarding writing "Hello World" on OpenStack but could not find anything. I shall be grateful to you for any information that could get me started on this project.
I would suggest that you check out OpenBaton.
I'm working with it nowadays; it can be used as an NFV MANO. In addition, it's ETSI compliant, and its components are easy to implement and configure.
Regarding your confusion: you do NOT need to write code explicitly for the firewall / VPN / LB services. You need to configure OpenStack Neutron to enable them; the code is already present, you just configure it to use these services. For NAT, there is an L3 agent already running in the default setup (at least via Packstack).
Is the Neutron API of any use? I assume you are referring to the REST API and not the CLI.
Everything that you do on the Dashboard is actually issued as a REST API call to the Neutron server (and not just Neutron, but all the other components of OpenStack). All OpenStack components (Neutron, Nova, Glance, Keystone, etc.) interact with each other via REST APIs, and via an RPC mechanism within each component. Every click on the Dashboard ultimately becomes a REST API call to one of the component servers; a small sketch of such a call follows below.
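For example, a minimal sketch of such a REST call with Python's requests library: authenticate against Keystone v3 for a token, then list Neutron networks. The endpoints and credentials are placeholders for your own deployment:

```python
import requests

# Placeholders: substitute your own Keystone/Neutron endpoints and credentials.
KEYSTONE_URL = "http://controller:5000/v3"
NEUTRON_URL = "http://controller:9696"

# 1. Authenticate against Keystone (v3 password auth) to obtain a token.
auth_body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {"id": "default"},
                    "password": "secret",
                }
            },
        },
        "scope": {"project": {"name": "admin", "domain": {"id": "default"}}},
    }
}
resp = requests.post(f"{KEYSTONE_URL}/auth/tokens", json=auth_body)
resp.raise_for_status()
token = resp.headers["X-Subject-Token"]

# 2. Call the Neutron REST API with the token -- here, list networks.
networks = requests.get(
    f"{NEUTRON_URL}/v2.0/networks", headers={"X-Auth-Token": token}
).json()
for net in networks["networks"]:
    print(net["id"], net["name"])
```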

OpenStack Swift is there a module to redirect client by region location?

I am currently playing with OpenStack Swift; my goal is to deploy a multi-region cluster. For example, one node of the Swift cluster would be deployed in the US and one in the EU.
Is there a module or an option in swift-proxy to redirect client by region location?
If it is not possible, what other solutions do you suggest? Should I develop my own proxy server that redirects clients to the nearest node (with geolocation/MaxMind, etc.)?
Resources:
Configuring a multi-region cluster
Proxy server configuration
EDIT: One of the OpenStack contributors answered me: the code for geographically distributed Swift clusters does not yet exist in the Git repository; the link I posted in the resources is a set of proposed changes. There is no code in Swift to do that sort of redirection, so I will need to write a piece of WSGI middleware and stick it in the proxy server's middleware pipeline.
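For reference, a minimal sketch of what such middleware could look like as plain WSGI, using the Paste Deploy filter_factory convention that the Swift proxy pipeline expects. The remote-region endpoint and the region-lookup logic are placeholders (a real version would plug in a GeoIP/MaxMind lookup and handle auth, ranged requests, etc.):

```python
class RegionRedirectMiddleware(object):
    """Sketch of a WSGI filter that redirects clients to a closer Swift region."""

    def __init__(self, app, conf):
        self.app = app
        # Hypothetical endpoint of the remote-region Swift proxy.
        self.eu_endpoint = conf.get("eu_endpoint", "https://eu.swift.example.com")

    def _client_prefers_eu(self, environ):
        # Placeholder: plug in a GeoIP/MaxMind lookup on REMOTE_ADDR here.
        return False

    def __call__(self, environ, start_response):
        if self._client_prefers_eu(environ):
            location = self.eu_endpoint + environ.get("PATH_INFO", "")
            start_response(
                "301 Moved Permanently",
                [("Location", location), ("Content-Length", "0")],
            )
            return [b""]
        return self.app(environ, start_response)


def filter_factory(global_conf, **local_conf):
    # Paste Deploy entry point referenced from proxy-server.conf.
    conf = dict(global_conf, **local_conf)

    def factory(app):
        return RegionRedirectMiddleware(app, conf)

    return factory
```

The filter would then be declared in proxy-server.conf and added to the proxy pipeline like any other Swift middleware.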
Not exactly an answer to your needs, but as you know, OpenStack has a side project, Keystone, in which endpoints are stored with region information. If you want to write your own implementation, that can be a starting point. Also, since there is a cdn tag on your question, there is a project named SOS that makes OpenStack Swift work as a CDN server. I hope these can help you with your implementation.
