IP addresses / ranges required for Azure Search to index Cosmos DB

I have a Cosmos DB account locked down to a VNet (and its subnets), and I want to use it as a data source for Azure Search.
When I attempt to configure the indexer, it complains about being blocked by the firewall.
If I enable "Accept connections from within public Azure datacenters", it still complains about being blocked by a firewall.
If I remove the "Accept connections from within public Azure datacenters" setting and instead grant access to the IP 13.76.208.129, it works.
Is 13.76.208.129 the only address actually required for the integration, or is there a larger range I need to add to the firewall?

As mentioned in this documentation page:
If your search service has only one search unit (that is, one replica and one partition), the IP address will change during routine service restarts, invalidating an existing ACL with your search service's IP address.
One way to avoid the subsequent connectivity error is to use more than one replica and one partition in Azure Search. Doing so increases the cost, but it also solves the IP address problem. In Azure Search, IP addresses don't change when you have more than one search unit.
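As a sketch of how that advice could be applied with the Azure CLI (the service and resource-group names are placeholders; assumes the `az search service update` command with `--replica-count`/`--partition-count` parameters):

```shell
# Scale the search service to 2 replicas x 1 partition (2 search units)
# so its outbound IP address stays stable across routine restarts.
az search service update \
    --name my-search-service \
    --resource-group my-rg \
    --replica-count 2 \
    --partition-count 1
```

Note that going from one to two replicas roughly doubles the service cost, so this is a trade-off rather than a free fix.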

Are the Google Cloud Endpoints only for REST?
I have a virtual machine running Cassandra, and now I need to expose this machine to the world temporarily (the idea is to run a Cassandra client on some computers in my home/office/...). Is Google Cloud Endpoints the best way to expose this machine to the world?
I am assuming that you are running Cassandra on a Google Compute Engine (GCE) instance. When one runs a Compute Engine instance, one can specify that a public Internet address should be associated with it. This allows an Internet-connected client application to connect to it at that address. The IP address can be declared as ephemeral (it can be changed by GCP over time) or it can be static (I believe there is a modest charge for its allocation).
When one attempts to connect to the software running on the instance, a firewall rule will (by default) block the vast majority of incoming connections. Fortunately, since you own the instance, you also own the firewall configuration. If we look here:
https://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/secureFireWall.html
we see the set of ports Cassandra needs for different purposes. This gives us a hint as to which firewall rules to add.
Cloud Endpoints is for exposing APIs that YOU develop in your own applications, and it doesn't feel like an appropriate component for accessing Cassandra.
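A minimal sketch of opening the client port with a GCE firewall rule (the rule name, source range, and target tag are assumptions; 9042 is the CQL native-transport port listed on the DataStax page above):

```shell
# Allow CQL native-transport traffic (tcp:9042) from one trusted address
# range into instances tagged "cassandra" on the default network.
gcloud compute firewall-rules create allow-cassandra-cql \
    --network default \
    --allow tcp:9042 \
    --source-ranges 203.0.113.0/24 \
    --target-tags cassandra
```

The instance then needs the matching tag, e.g. `gcloud compute instances add-tags my-cassandra-vm --tags cassandra`. Restricting `--source-ranges` to known home/office addresses is safer than opening the port to 0.0.0.0/0.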

Setup VPN for database instance on Google Cloud SQL

I have a MySQL database instance on Google Cloud SQL. Currently it has over 10 authorized IP addresses, since multiple teams access it from various locations. I would like to know if I can set up a VPN to this database instance and authorize just that one IP address instead of 10.
If that's possible, I would also like to know how many user accounts I can create for one VPN. I could not understand the Google Cloud documentation about setting up a VPN. Please provide links to websites/tutorials/documentation that can help me with setting up a VPN on Google Cloud Platform.
Thanks.
Using Cloud VPN and authorizing only its external IP will not work, for two reasons:
1) You cannot specify a private network range (for example, 10.x.x.x) as an authorized network in Cloud SQL, as documented here.
2) Packets arrive at Cloud SQL after decapsulation, which means Cloud SQL sees them as coming from different source IPs.
If you want a more secure connection without IP whitelisting, consider using the Cloud SQL Proxy.
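A minimal sketch of the Cloud SQL Proxy approach (the instance connection name `my-project:us-central1:my-instance` is a placeholder):

```shell
# Start the proxy: it authenticates with IAM credentials and opens an
# encrypted tunnel to the instance, so no client IPs need whitelisting.
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:3306 &

# Clients then connect to the local port as if MySQL were running locally.
mysql --host 127.0.0.1 --port 3306 -u root -p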

Kubernetes update changes static+reserved external IPs for nodes in Google Cloud

I have three nodes in my Google Container Engine cluster.
Every time I perform a Kubernetes update through the web UI on the cluster, my external IPs change, and I have to manually re-assign the previous IPs to all three instances in the Google Cloud Console.
These are reserved static external IPs, set up using the following guide:
Reserving a static external IP
Has anyone run into the same problem? I'm starting to think this is a bug.
Perhaps you can set up the same static outbound external IP for all the instances to use, but I cannot find any information on how to do so. That would be a solution as long as it persists through updates; otherwise we've got the same issue.
It's only updates that cause this, not restarts.
I was having the same problem as you. We found some possible solutions:
KubeIP - but this needs cluster version 1.10 or higher, and ours is 1.8.
NAT - the GCP documentation describes this method, but it was too complex for me.
Our solution
We followed the documentation for assigning IP addresses on GCE, using the command line. With this method we haven't had any problems so far. I don't know the risks of it yet; if anyone has an idea, that would be good to know.
We basically just ran:
gcloud compute instances delete-access-config [INSTANCE_NAME] --access-config-name [CONFIG_NAME]
gcloud compute instances add-access-config [INSTANCE_NAME] --access-config-name "external-nat-static" --address [IP_ADDRESS]
If anyone has any feedback on this solution, please share it.
Ahmet Alp Balkan - Google
You should not rely on the IP addresses of individual nodes. Instances can come and go (especially when you use Cluster Autoscaler), and their IP addresses can change.
You should always expose your applications with Service or Ingress resources; the IP addresses of the load balancers created for these resources do not change between upgrades. Furthermore, you can convert a load balancer's IP address to a static (reserved) IP address.
I see that you're assigning static IP addresses to your nodes, and I don't see any reason to do that. When you expose your services with Service/Ingress resources, you can associate a static external IP with them.
See this tutorial: https://cloud.google.com/container-engine/docs/tutorials/http-balancer#step_5_optional_configuring_a_static_ip_address
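A sketch of that approach with the gcloud and kubectl CLIs (the address name, deployment name, region, and IP are placeholders; assumes `kubectl expose` with its `--load-balancer-ip` flag):

```shell
# Reserve a regional static external IP for the load balancer.
gcloud compute addresses create my-app-ip --region us-central1

# Expose the deployment through a LoadBalancer Service pinned to the
# reserved address (substitute the IP the previous command reserved).
kubectl expose deployment my-app \
    --type LoadBalancer --port 80 \
    --load-balancer-ip 203.0.113.10
```

Because the static IP belongs to the load balancer rather than to any node, it survives cluster upgrades, node restarts, and autoscaling.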

How do I restrict access to a website hosted on an Azure VM to a given IP and itself?

I am doing some testing on an Azure VM and have an ASP.NET website that I wish to limit access to. The website should only be accessible from a given IP (our office), but I also want the server to be able to make requests to other websites hosted on itself.
I have successfully set up an IP and Domain Restriction for our office IP, but cannot find a way to allow requests internally from the server itself.
Is this possible without setting up a static IP in Azure?
This question should be migrated to ServerFault, but given that there are programmatic approaches to it, I'll answer:
You need to set up Access Control Lists on the input endpoints, to specify ranges (via CIDR notation) of allowable or blocked IP addresses.
While this may be done via the portal, you may do it via the CLI:
azure vm endpoint acl-rule create [vm-name] [endpoint-name] [order] [action] [remote-subnet]
You may do this via PowerShell as well. Rough outline:
Use New-AzureAclConfig to set up a config object
Use Set-AzureAclConfig to add rules to the config
Use Get-AzureVM to retrieve the config of your given virtual machine and add the ACL config to the VM
Use Update-AzureVM to save your changes
More PowerShell details here.
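As a concrete sketch of the CLI form shown above (the VM name, endpoint name, and office range are placeholders):

```shell
# Rule order 1: permit only the office range on the web endpoint;
# traffic from all other addresses is blocked by the ACL's default deny.
azure vm endpoint acl-rule create myvm web-endpoint 1 permit 203.0.113.0/24
```

Note that endpoint ACLs filter traffic arriving through the public endpoint; requests the VM makes to itself over its internal address are not affected by them.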

AWS SimpleDB: can I find out which IPs connect to it or which domains get queried the most?

We've been working on a collaborative project using AWS, and in particular SimpleDB. Lately, our SDB costs have been going through the roof and we're trying to figure out what's going on. Is there any way to find out which IP addresses connect to it?
EDIT: If we can't find out which IPs are accessing SDB, is it at least possible to determine how heavily each of our SDB domains gets queried, in terms of the number of queries to a domain and/or the total amount of data pulled from a domain?
AWS IAM allows you to put a condition on the user's IP address using AWS-wide IAM policy keys. Here is the link: Managing Users for Amazon SimpleDB using AWS IAM.
Here is an example that allows requests only if they come from a certain IP address or range (source):
Allow requests only if they come from a certain IP address or range
This policy is for an IAM group that all users in a company belong to. The policy denies access to all actions in the account unless the request comes from the IP range 192.0.2.0 to 192.0.2.255 or 203.0.113.0 to 203.0.113.255. (The policy assumes the IP addresses for the company are within the specified ranges.) A typical use is for Amazon VPC, where you might expect all your users' requests to originate from a particular IP address, and so you want to deny requests from any other address.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "NotIpAddress": {
        "aws:SourceIp": ["192.0.2.0/24", "203.0.113.0/24"]
      }
    }
  }]
}
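Attaching such a policy to the company-wide group could look like this (the group name, policy name, and file name are placeholders; assumes the policy JSON above is saved to a local file):

```shell
# Attach the inline deny-outside-office-IPs policy to the IAM group
# that all company users belong to.
aws iam put-group-policy \
    --group-name all-employees \
    --policy-name deny-outside-office-ips \
    --policy-document file://deny-outside-office-ips.json
```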
