Azure Network Security Group vs Route Tables

Networking newbie here. From the documentation it feels like both NSGs and route tables (UDRs) do the same thing - defining ACLs at multiple levels (VNet, subnet, VM).
https://learn.microsoft.com/en-us/azure/virtual-network/network-security-group-how-it-works
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview
So how are they different and when is each used?
thanks.

Azure automatically creates a route table for each subnet within an Azure virtual network and adds system default routes to the table. The route table is like a map: it tells traffic how to get from one place to another via a next hop. It defines the "path" but does not filter traffic.
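For example, a user-defined route can override the system routes, say to force all Internet-bound traffic through a network virtual appliance. A rough Azure CLI sketch (the resource names and the appliance IP 10.0.2.4 are placeholders for illustration):

az network route-table create --resource-group myRG --name myRouteTable
# send all Internet-bound traffic to a virtual appliance instead of straight out
az network route-table route create --resource-group myRG --route-table-name myRouteTable \
  --name to-appliance --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.2.4
# associate the route table with a subnet so it takes effect
az network vnet subnet update --resource-group myRG --vnet-name myVnet \
  --name mySubnet --route-table myRouteTable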
The Azure network security group is used to filter network traffic to and from Azure resources in an Azure virtual network. It contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. If there is no route from a subnet to a destination, there is no path to filter, so you do not even need to configure security rules for it. In other words, an NSG only comes into play for traffic that already has a working route.
For example, we usually access an Azure VM in a virtual network via SSH or RDP over the Internet, but exposing port 22 or 3389 to the whole Internet is insecure. We can restrict access to the VM by specifying the source IP address in the NSG. This setting allows only traffic from a specific IP address or range of IP addresses to connect to the VM. In this scenario, we also need to ensure that there is a route to the Internet from the Azure virtual network and vice versa.
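As a concrete sketch, restricting SSH to a single source address with the Azure CLI might look like this (myRG, myNSG, and the office address 203.0.113.10 are placeholders):

# allow SSH only from one office IP; everything else hits the default deny rule
az network nsg rule create --resource-group myRG --nsg-name myNSG \
  --name Allow-SSH-From-Office --priority 100 \
  --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes 203.0.113.10/32 \
  --destination-port-ranges 22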

Related

Unable to understand DMZ

Could anyone please explain whether, by default (in a firewall application), all users can access the DMZ, or only inside users? Also, are the users added to the access control list?
The most common form of a DMZ is a kind of "proxy" network between your intranet (LAN), where all your clients are connected, and the WAN. Imagine you have a network with some web servers, PCs like laptops or workstations, and some other servers or services with databases or similar. In front of your LAN there's a firewall acting as the gateway to the WAN.
If everything is inside the same network you'll have security issues since, if one machine gets compromised, basically everything will be possible.
As long as you're communicating in the same subnet, let's say a class C network of 192.168.0.0 (IP range 192.168.0.1 - 192.168.0.254), the traffic will not be routed through your gateway, which is usually your firewall. That means every request from 192.168.0.2 to 192.168.0.3 will not be monitored and/or restricted by your firewall. This is an issue.
Web servers, for instance, have to be accessible from the outside. If an attacker gains access to such a server, he could mess with anything in your network.
Now you introduce a DMZ: basically a proxy network between your LAN and the WAN (at least in most cases). Since it's a separate subnet, the traffic will be routed through your gateway (firewall), so your rules apply. Also, to get into the intranet the data has to pass two firewalls (or the same firewall twice). You can now create firewall rules that allow or disallow communication from servers or clients in the DMZ to your actual LAN and vice versa. This way you can define that all communication into your LAN is denied by default and then start adding rules to allow specific traffic, for instance if some service has to connect to a database in your LAN or similar.
Many networks only filter inbound traffic that way, but in my opinion you should also deny all outbound traffic until it is approved by a firewall rule.
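As an illustration, a deny-by-default forwarding policy on a Linux firewall could look like this with iptables (the interface names eth0=WAN, eth1=LAN, eth2=DMZ and the ports are assumptions for this sketch, not a definitive setup):

iptables -P FORWARD DROP                                            # deny all forwarded traffic by default
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT  # allow replies to permitted connections
iptables -A FORWARD -i eth0 -o eth2 -p tcp --dport 443 -j ACCEPT    # WAN may reach the DMZ web server
iptables -A FORWARD -i eth2 -o eth1 -p tcp --dport 5432 -j ACCEPT   # one DMZ service may reach a LAN database port

Everything not explicitly allowed, in either direction, is dropped.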
Also, depending on the situation, the clients are often placed in the DMZ as well, meaning only critical infrastructure sits in your actual intranet. In this case, commonly only administrative users will have "full" access to the intranet itself. Generally, it's a good idea to put the clients in a separate, restricted network, since the clients are often the biggest vulnerability in your network (like users who happily open Word documents that are clearly some kind of phishing, and similar).

How to set the external IP of a specific node in Google Kubernetes Engine?

Unfortunately, we have to interface with a third-party service which, instead of implementing authentication, relies on the request IP to determine whether a client is authorized.
This is problematic because nodes are started and destroyed by Kubernetes, and each time the external IP changes. Is there a way to make sure the external IP is chosen from a fixed set of IPs? That way we could communicate those IPs to the third party and they would be authorized to perform requests. I only found a way to fix the service IP, but that does not affect the individual nodes' IPs at all.
To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.
Yes, it's possible by using KubeIP.
You can create a pool of shareable IP addresses and use KubeIP to automatically attach an IP address from the pool to each Kubernetes node.
IP addresses can be created by:
opening the Google Cloud Dashboard
going to VPC Network -> External IP addresses
clicking on "Reserve Static Address" and following the wizard (for the Network Service Tier, I think it needs to be "Premium" for this to work).
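The reservation can also be scripted; a hedged gcloud equivalent (the address name and region are placeholders) would be:

# reserve a regional static external IP in the Premium tier
gcloud compute addresses create kubeip-ip-1 \
  --region us-central1 --network-tier PREMIUM

KubeIP then needs to be able to discover the pool - check its documentation for how it expects the reserved addresses to be labeled.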
The easiest way to have a single static IP for GKE nodes or the entire cluster is to use a NAT.
You can either use a custom NAT solution or use Google Cloud NAT with a private cluster.
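A rough gcloud sketch of the Cloud NAT option (the router/NAT names, region, and the reserved address name my-static-ip are placeholders):

# create a Cloud Router, then a NAT that sources all egress from one reserved IP
gcloud compute routers create nat-router --network default --region us-central1
gcloud compute routers nats create nat-config --router nat-router --region us-central1 \
  --nat-all-subnet-ip-ranges --nat-external-ip-pool my-static-ip

With a private cluster behind this NAT, all nodes egress through that single static IP, which you can then hand to the third party.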

How to create firewall for kubernetes cluster in google container engine

This may be an extremely simple question, but I can't seem to figure out how to make my kubernetes cluster accessible ONLY from my office IP.
In my firewall rules I see rules for the GKE nodes covering 2 internal IPs and my office IP.
I also see a firewall rule for an external IP range that I don't see in my external IP addresses. That IP address also doesn't appear in my load balancer IPs...
Finally I have a load-balancing firewall rule that allows the external IP ranges from the load balancing tab, which correspond to my kubernetes ingress rules.
Long story short, how do I make my kubernetes cluster accessible only from my office IP?
This isn't currently possible in Google Container Engine.
You don't see any firewall rules for your cluster control plane because it isn't running inside your cloud project. Therefore the endpoint for your cluster won't show up in your networking views and you cannot add firewall rules to restrict access to it.
This is a shortcoming that the team is aware of and we hope to be able to provide a solution for you in the future.

Can't access port 7474 across Rackspace cloud servers

I've set up neo4j on port 7474 on a Rackspace cloud server. I want to access this server from another Rackspace cloud server (the appserver), but the connection is refused.
I've tried enabling access for the appserver to port 7474 on the neo4j server using ufw:
sudo ufw allow from 22.234.298.297 to any port 7474
I can see this rule when I run 'ufw status', but it doesn't seem to make any difference when I try to connect from the appserver. I can ssh between these two servers.
How do I open port 7474 between cloud servers on Rackspace?
(my apologies for this very basic question, but Rackspace support are not helping and I can't find Rackspace-specific information on this)
Glad we could solve the problem (see comments on the question).
It so happens that Neo4j accepts only connections from localhost by default. When trying to access Neo4j via its REST API from an app server within the same network, one has to configure the Neo4j server to open up.
The neo4j-server.properties configuration file has a configuration key, org.neo4j.server.webserver.address. You have a couple of options here:
Allow app servers in the same local network to consume the Neo4j REST API
Grant everybody access and let the firewall handle it
For the first case, use the local IP address of the machine where Neo4j is running. Let's say your machines are connected via a private class C network. The machine with Neo4j has the IP 192.168.1.4 - that's the value you want to enter for org.neo4j.server.webserver.address, so your app server running in the same network, with maybe an IP of 192.168.1.5, can make network requests that are answered by the Neo4j web server.
For the second case, enter 0.0.0.0 as the value for org.neo4j.server.webserver.address to denote that you want to accept connections on all available IP addresses on that machine. In that case you want to set up your firewall to control who can talk to the server and who can't - even with authentication enabled.
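For example, with the addresses above, the relevant line in neo4j-server.properties would look something like this (this applies to older Neo4j server versions that use that file; newer releases moved to neo4j.conf with different key names):

# bind only to the private LAN address (first option)
org.neo4j.server.webserver.address=192.168.1.4
# ...or bind to all interfaces and let the firewall decide (second option)
#org.neo4j.server.webserver.address=0.0.0.0

Restart the Neo4j server after changing the file so the new bind address takes effect.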
Extra
In a production environment that requires high availability, one can use Neo4j's enterprise edition with a high-availability cluster in a master-slave setting. I've used it with one master and two slaves. I configured the Neo4j servers so they could only be accessed from the proxy server that routes writing cypher queries to the master and reading queries to the slaves. The proxy itself had a hardware firewall on it to ensure only specific app servers within the network had access to the Neo4j database.

What's my IP and subnet from an Azure website?

I'm building out an Azure-hosted website, but it needs to reach into our home office to connect to some internally hosted web services. Our firewall is set up to only allow traffic from certain IPs, so we're looking to determine what IP range we need to allow.
Currently I'm still using the MSDN "Free" Azure subscription, so I don't know what options may be limited, but is there a way I can determine what source IP, subnet, or whatever, my Azure-hosted site will attempt to call my web services from?
Thanks!
Be careful opening your firewall to the entire Azure datacenter IP ranges. Anybody can host anything in Azure, including malicious software, so if you open your firewall to the entire Azure IP range you may as well just open it to 0.0.0.0-255.255.255.255, because in effect you get the same security.
A better option is to deploy your service and just whitelist that one IP address. That IP address is guaranteed to remain the same until you delete your service. With the ability to do in-place upgrades and VIP swaps there should be no reason why you would need to delete your hosted service and lose your IP address. If you ever do run into a scenario where you need to delete/redeploy you can always update your firewall at that time.
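If you're not sure what that IP is, one quick way to check (assuming a classic hosted service; myservice.cloudapp.net is a placeholder name) is to resolve the service's DNS name:

# the answer is the VIP your outbound calls will come from
nslookup myservice.cloudapp.net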
It sounds like this is what you're looking for:
Windows Azure Datacenter IP Ranges
