OpenShift/Kubernetes kube-dns best practice (ndots = 5) - networking

I have been using OpenShift/Kubernetes for some time, and this has been my understanding.
For service-to-service communication:
use the DNS name ${service-name} if the services are in the same namespace
use the DNS name ${service-name}.${namespace}.svc.cluster.local if they are in different namespaces (with the networks joined)
Recently I was introduced to the idea that "we should add a dot after svc.cluster.local to make it an FQDN, for better DNS lookup speed". I did some testing, and the lookup is indeed much faster with the trailing dot (~100ms without the dot, ~10ms with it).
After some research, it turned out to be caused by the default DNS settings in Kubernetes:
sh-4.2$ cat /etc/resolv.conf
search ${namespace}.svc.cluster.local svc.cluster.local cluster.local
nameserver X.X.X.X
options ndots:5
With ndots:5, any name containing fewer than 5 dots is first tried against the search domains (sequentially) before being tried as an absolute name.
In the case of ${service-name}.${namespace}.svc.cluster.local (only 4 dots), the search sequence is:
${service-name}.${namespace}.svc.cluster.local + ${namespace}.svc.cluster.local // FAILED LOOKUP
${service-name}.${namespace}.svc.cluster.local + svc.cluster.local // FAILED LOOKUP
${service-name}.${namespace}.svc.cluster.local + cluster.local // FAILED LOOKUP
${service-name}.${namespace}.svc.cluster.local // SUCCESS LOOKUP
And for ${service-name}.${namespace}.svc.cluster.local. (the trailing dot makes it an absolute name), the lookup is simply:
${service-name}.${namespace}.svc.cluster.local // SUCCESS LOOKUP
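You can reproduce the difference from inside a pod, assuming getent is available in the image (the service and namespace names below are placeholders):
sh-4.2$ time getent hosts my-service.my-namespace.svc.cluster.local    # 4 dots < ndots:5, so the search list is walked first
sh-4.2$ time getent hosts my-service.my-namespace.svc.cluster.local.   # absolute name, resolved in a single query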
References
link
how to debug
Questions:
Since ndots = 5 is the default setting for Kubernetes, why is ${service-name}.${namespace}.svc.cluster.local. not documented on the official side?
Should we change all service calls to ${service-name}.${namespace}.svc.cluster.local.? Any potential downsides?

Since ndots = 5 is the default setting for Kubernetes, why is
${service-name}.${namespace}.svc.cluster.local. not documented on
the official side?
Well, it's a really good question. I searched through the official docs, and it looks like this is not a documented feature. For that reason, a much better place to post your doubts, and also to request a documentation improvement, is the official GitHub repository for Kubernetes DNS.
Should we change all service calls to
${service-name}.${namespace}.svc.cluster.local.? Any potential
downsides?
If it works well for you and definitely increases performance, I would say: why not? I can't see any potential downsides here. By adding the final dot you simply skip the first 3 lookups, which are doomed to fail anyway if you use the Service domain name in the form ${service-name}.${namespace}.svc.cluster.local (without the trailing dot).
Inferring from the lookup process you described and from your tests, I would guess that if you use only ${service-name} (of course only within the same namespace), the DNS lookup should also be much faster and closer to the ~10ms you observed with the trailing dot, because ${service-name} + ${namespace}.svc.cluster.local is matched in the very first search iteration.

Based on the latest documentation here, we should use ${service}.${namespace} to call a service in a different namespace, and we can expect it to resolve on the second attempt: the first search domain yields ${service}.${namespace}.${namespace}.svc.cluster.local, which fails, and the second yields ${service}.${namespace}.svc.cluster.local, which succeeds.
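You can actually watch that search-domain expansion happen from inside a pod, assuming the host utility (bind-utils) is available in the image; its verbose "Trying ..." output lists each candidate name in order (service and namespace names are placeholders):
sh-4.2$ host -v my-service.other-namespace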

Related

Can I use a wildcard in Info.plist for Bonjour services

My apps use Bonjour services to communicate with each other over the local network.
I am facing a problem on Xcode 12 with an iOS 14 device.
Device A publishes a service whose service type name is derived from the device's own IP address
(example: 192.168.33.20 -> _1921683320._tcp)
Device B browses for a service whose type is derived from device A's IP address
(example: _1921683320._tcp)
According to the Apple documentation, from iOS 14:
https://developer.apple.com/documentation/multipeerconnectivity
Important
Apps that use the local network must provide a usage string in their Info.plist with the key NSLocalNetworkUsageDescription. Apps that use Bonjour must also declare the services they browse, using the NSBonjourServices key.
Because my service type name is derived from the local network IP, it can change with the local network settings, so I was thinking of using a wildcard to define the service type name.
example: _*._tcp
but it seems wildcards are not allowed in this definition (I tried it).
I am also thinking about changing the naming method on device A
(example: 192.168.33.20 -> _20._tcp)
and adding _1._tcp ~ _255._tcp to Info.plist.
But if I change the naming method, device B will not be able to find device A until it gets the updated version.
Any ideas for this problem? Please help.
I'm currently working through the same issue - the Bonjour service name is dynamically created based off the iPad name to form a local mesh network. The conclusion I have come to is that the com.apple.developer.networking.multicast entitlement is required for this to function without completely overhauling how all that logic is done. (More info here)
You will have to request permission from Apple by filling out a form here. Let me know if this works for you!
The thing I am finding is: you "might" not be able to use a wildcard, but you can put multiple entries in the plist:
Item 0 _multicastapp0-p._tcp
Item 1 _multicastapp1-p._tcp
Item 2 _multicastapp2-p._tcp
Item 3 _multicastapp3-p._tcp
etc
Item N _multicastappN-p._tcp
So if, for some reason, you are trying to have multiple "groups" of 8, or have a device keep its own "collection" (i.e. be a server and have 3 devices connect to it), you can.
I haven't "fully" tested this, but I am going to be doing it in my apps. I did test using multiple keys, though not exhaustively, and saw no errors...

How to define https connection in Airflow using environment variables

In Airflow, http (and other) connections can be defined as environment variables. However, it is hard to use an https scheme for these connections.
Such a connection could be:
export AIRFLOW_CONN_MY_HTTP_CONN=http://example.com
However, defining a secure connection is not possible:
export AIRFLOW_CONN_MY_HTTP_CONN=https://example.com
This is because Airflow strips the scheme (https), and in the final connection object the URL ends up with http as its scheme.
It turns out that there is a possibility to use https by defining the connection like this:
export AIRFLOW_CONN_MY_HTTP_CONN=https://example.com/https
The second https is called schema in the Airflow code (as in DSNs, e.g. postgresql://user:passw@host/schema). This schema is then used as the scheme when the final URL is constructed in the connection object.
I am wondering if this is by design, or just an unfortunate mix-up of scheme and schema.
For those who land on this question in the future, I confirm that @jjmurre's answer works well for Airflow 2.1.3.
In this case we need a URI-encoded string.
export AIRFLOW_CONN_SLACK='http://https%3a%2f%2fhooks.slack.com%2fservices%2f...'
See this post for more details.
Hope this can save other fellows the hour I spent investigating.
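If you need to produce that percent-encoded value, one option is Python's urllib (the elided Slack path is kept as-is here):
python3 -c 'from urllib.parse import quote; print("http://" + quote("https://hooks.slack.com/services/...", safe=""))'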
You should use Connections, and then you can specify the schema.
This is what worked for me using bitnami airflow:
.env
MY_SERVER=my-conn-type://xxx.com:443/https
docker-compose.yml
environment:
  - AIRFLOW_CONN_MY_SERVER=${MY_SERVER}
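If you are on Airflow 2.x, the CLI should be able to show you how the URI was parsed (the connection id is the part of the variable name after AIRFLOW_CONN_):
airflow connections get my_server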

How to automatically scrape all Docker instances from Kubernetes with Prometheus?

I've successfully installed Prometheus in Google Container Engine and I have these targets up:
kubernetes-apiservers
kubernetes-cadvisor
kubernetes-nodes
Now I would like to scrape Nginx stats from each of the Docker containers inside this Kubernetes cluster (which seems like a sensible thing to do).
But how can I make Prometheus automatically pull the metrics from all the Nginx instances running in all of the Docker containers?
From my research so far, the answer involves kubernetes_sd_config but I simply could not find enough documentation on how to put the pieces together.
Thank you!
Edit: This is not about exposing the Nginx stats. This is just about scraping any stats that are exposed by all Docker containers.
You are correct that you need to use the kubernetes_sd_config directive. Before continuing, let me just say that what you should be asking is "How do I automatically scrape all pods from Kubernetes?". This is because a pod is the lowest unit of scale in Kubernetes. Regardless, it is clear what you are trying to do.
So kubernetes_sd_config can be used to discover all pods carrying a given label, like so:
- job_name: 'some-app'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_label_app]
      regex: python-app
      action: keep
The source label [__meta_kubernetes_pod_label_app] uses the Kubernetes API to look at pods that carry a label named app, and the keep action retains only those whose label value matches the regex given on the line below (in this case, python-app).
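Put differently, the job keeps exactly the pods that the equivalent label selector would return (label value taken from the example config above):
kubectl get pods --all-namespaces -l app=python-app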
Hope that helps. You can follow blog post here for more detail. Also for more information about kubernetes_sd_config check out docs here.
Note: it is worth mentioning that kubernetes_sd_config is still in beta. Thus breaking changes to configuration may occur in future releases.

How to migrate Wordpress between Compute Engine instances

I have recently created a very small Google Compute Engine instance, naively thinking it's one of those easily scalable things Google people keep raving about.
I used the quick deployment feature of Wordpress and it all installed itself nicely, so I started configuring and adding data etc.
However, I then found out that I can't scale an existing instance (i.e. it won't allow me to change the instance type to a bigger one. I don't get why not, but there you go.), so it looks like I need to find a way to migrate my Wordpress installation to a new instance.
Will I simply be able to create a new instance and point it at the persistent disk my small instance currently uses, et voila, Bob's your uncle?
Or do I need to manually get the files and MySQL data off the first instance and re-import them into an empty new instance?
What's the easiest way?
Any advice or helpful links would be appreciated.
Thanks.
P.S.: Btw, should I try to use the Google Cloud SQL service instead of a local MySQL installation?
In order to upgrade your VM:
1. Access the VM's settings in the Developers Console (your project -> Compute -> Compute Engine -> VM instances -> click on the VM's name).
2. Scroll down to the "Disks" section, and un-check "Delete boot disk when instance is deleted".
3. Delete the VM in question. Take note that the disk, named after the instance, will remain.
4. Create a new VM, selecting "Existing disk" under Boot disk - Boot source. In the next box down, select the disk from step 3, as well as a bigger machine type.
The resulting new instance will use the existing disk from the old one, with improved hardware / performance.
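The same flow can be scripted with the gcloud CLI, roughly as follows (instance and disk names are placeholders; the kept boot disk inherits the old instance's name):
gcloud compute instances delete wordpress-vm --keep-disks=boot
gcloud compute instances create wordpress-vm-2 --machine-type=n1-standard-2 --disk=name=wordpress-vm,boot=yes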
As for using Cloud SQL in lieu of a VM-installed database, it's perfectly feasible, and it allows you to adjust the Cloud SQL instance to match your actual usage. A few considerations when setting up this kind of instance:
limit the IPs allowed to connect to your Cloud SQL instance to your frontend's IP, and perhaps the workstation IP or subnet from which you maintain the database
configure Cloud SQL to use SSL certificates
Sammy's answer covers the important stuff; I just wanted to clarify how your files are arranged on the two disks that are attached to your instance:
The data disk contains /var/www/, which holds all of the WordPress files. It's mounted on the instance at /wordpress.
The boot disk contains everything else, including the MySQL database that was created for the WordPress installation.
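For completeness, if you do end up going the manual route the question mentions (copying files and data to a fresh instance), a rough sketch would be (paths and names are illustrative):
# on the old instance
mysqldump --all-databases -u root -p > wordpress-backup.sql
tar czf wordpress-files.tar.gz /var/www/
# copy both archives to the new instance, restore the files into the web root,
# then feed the dump back into MySQL:
mysql -u root -p < wordpress-backup.sql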

Test to identify your development environment?

The code has a runtime dependency which is not available in our development environment (but is available in test and prod). It's expensive to actually test for the dependency, and I want to test for the environment instead.
if (isDevEnvironment) {
// fake it
}
else {
// actually do it
}
Without using appSettings, what code/technique/test would you use to set isDevEnvironment?
Example answers:
check machine name (partial or full)
check for a running instance of Visual Studio
check for environment variable
I'm hoping for a test I hadn't considered.
You should try not to test your environment in the code! That's what dependency inversion (and then injection) was invented for.
Draw some inspiration from Newspeak, where the complete platform is abstracted into an object and passed as a parameter down the chain of method calls.
The code you provided (if (isDevEnvironment) ...) smells of test code in production.
Without using appSettings, what code/technique/test would you use to set isDevEnvironment?
Generally, Dependency Injection.
But also see the possible solution in the link provided.
You should not check the environment, instead you need to provide the environment.
You've hit upon the major techniques. At my current job, we use the environment-variable technique.
At a previous job, all servers had three NICs: the public front end, the middle tier for server-to-server traffic, and the back end that Network Operations would connect to.
They were on different IP subnets. That made it easy to detect not only where something was coming from, but also where the code itself was running.
Example:
10.100.x.xxx - Production Subnet
10.100.1.xxx - Back
10.100.2.xxx - Middle
10.100.3.xxx - Front
10.0.1.x - Development Subnet
This required nothing special to be installed on the servers, just detection code in the application and then caching.
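A minimal C# sketch of that subnet check (the dev prefix follows the example above; adjust it to your network and cache the result):

using System.Linq;
using System.Net;
using System.Net.Sockets;

static bool IsDevEnvironment()
{
    // Resolve this machine's IPv4 addresses and test for the dev subnet prefix.
    return Dns.GetHostEntry(Dns.GetHostName()).AddressList
        .Where(a => a.AddressFamily == AddressFamily.InterNetwork)
        .Any(a => a.ToString().StartsWith("10.0.1."));
}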
I prefer to do this:
if (Properties.Settings.Default.TestEnvironment || HttpContext.Current.Request.ServerVariables["Server_Name"] == "localhost")
{
    // do something
}
