SaltStack: Is there a way to make the minion use a different SNI for publishing and returning?

I'm running a Salt master in a very constrained Kubernetes environment where the ingress controller only listens on a single port.
Can I configure my minion so that it uses a different SNI for publishing and returning?
e.g. publish https://salt-master.publish.com
ret https://salt-master.ret.com

Unfortunately, it is not possible. Salt is designed to make sure that return data goes back to the same master the minion picked the job up from.
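For reference, a minimal sketch of the relevant minion settings (the hostname is hypothetical): the publish and return channels get separate ports, but both use the single configured master address, which is why separate names per channel are not possible.
# /etc/salt/minion (sketch)
master: salt-master.example.com   # one address for both channels (hypothetical)
publish_port: 4505                # port the master publishes jobs on
master_port: 4506                 # port the minion returns results to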

Related

Symfony Messenger different consumers for different app servers

I have a Symfony application that is running on several servers behind a load balancer. So I have separate hosts www1, www2, www3, etc.
At the moment I'm running messenger:consume only on www1, in fear of race conditions and potentially messages being handled twice.
Now I have a scenario where I need to execute a command on each host.
I was thinking of using separate transports for each host and running messenger:consume on each, consuming only messages from its respective queue. However I want the configuration to be dynamic, i.e. I don't want to do another code release with different transports configuration when a new host is added or removed.
Can you suggest a strategy to achieve this?
If you want to use different queues and different consumers... just configure a different DSN for each www host, stored in environment variables (not code). Then you could have different queues or transports for each server.
The transport configuration can include the desired queue name within the DSN, and best practice is to store that configuration in an environment variable, not in code, so you wouldn't need "another code release with different transports config when a new host is added or removed". Simply add the appropriate environment variables when each instance is deployed, the same as you do with the rest of the configuration.
framework:
    messenger:
        transports:
            my_transport:
                dsn: "%env(MESSENGER_TRANSPORT_DSN)%"
On each "www" host you would have a different value for MESSENGER_TRANSPORT_DSN, which would include a different queue name (or a completely different transport).
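As a rough sketch (hypothetical queue names and message class, assuming the Doctrine transport), only the per-host environment value changes, plus a routing entry for the command that must run everywhere:
# Hypothetical per-host values, set in each host's environment (not in code):
#   www1: MESSENGER_TRANSPORT_DSN="doctrine://default?queue_name=host_www1"
#   www2: MESSENGER_TRANSPORT_DSN="doctrine://default?queue_name=host_www2"
# config/packages/messenger.yaml stays identical everywhere:
framework:
    messenger:
        routing:
            # hypothetical message class that must run on every host
            'App\Message\RunOnEachHost': my_transport
# each host then runs: php bin/console messenger:consume my_transport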
You would need to create a separate consumer for each instance, with a matching configuration (or run the consumer off the same instance).
But if all the hosts are actually running the same app, you'd generally use a single consumer, and all the instances should publish to the same queue.
The consumer does not even need to run on the same server as any of the web instances; it simply needs to be configured to consume from the appropriate transport/queue.

How to set unique Salt Minion Ids when provisioned using vCenter

For my use case, I am provisioning VMs using a predefined VM template in vCenter. The hostname in this template is already set, and the salt minion is installed with no minion_id file. Once a VM is provisioned and the minion service starts, it automatically uses the hostname as the minion id.
The same template is used for provisioning more machines, so all machines get the same minion id.
One way to solve the problem is to manually change the minion_id file inside the newly created VM, but for business reasons this is not possible.
Another way I can think of is to set a unique minion id in a VM guest advanced option like guestinfo and read it when the VM boots, but this can only be set when the VM is powered off.
I need help setting a different minion id for each VM. How can this be accomplished without going inside the provisioned VM?
In our case, hostname collisions are a possibility, so we set the minion id to the UUID of the device. On Linux that's obtainable with dmidecode -s system-uuid; there's a similar command for Windows.
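A minimal sketch of that approach, assuming the template image runs cloud-init (or any equivalent first-boot hook), so nobody has to log in to the provisioned VM:
#cloud-config
# Hypothetical first-boot step: write the DMI system UUID as the minion id,
# then restart salt-minion so it reconnects to the master under that id.
runcmd:
  - dmidecode -s system-uuid > /etc/salt/minion_id
  - systemctl restart salt-minion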

Traefik instance loadbalance to Kubernetes NodePort services

Intro:
On AWS, load balancers are expensive ($20/month + usage), so I'm looking for a way to achieve flexible load balancing between the k8s nodes without having to pay that expense. The load is not that big, so I don't need the scalability of the AWS load balancer any time soon; I just need the services to be HA. I can get a small EC2 instance for $3.5/month that can easily handle the current traffic, so I'm chasing that option now.
Current setup
Currently, I've set up a regular standalone Nginx instance (outside of k8s) that load-balances between the nodes in my cluster, on which all services are exposed through NodePorts. This works really well, but whenever my cluster topology changes (nodes added, restarted, or removed), I have to manually update the upstream config on the Nginx instance, which is far from optimal, given that cluster nodes cannot be expected to stay around forever.
So the question is:
Can Træfik be set up outside of Kubernetes to do simple load balancing between the Kubernetes nodes, just like my Nginx setup, but keep the upstream/backend servers of the Træfik config in sync with the Kubernetes list of nodes, so that my Kubernetes services are still HA when I make changes to my node setup? All I really need is for Træfik to listen to the Kubernetes API and change the backend servers whenever the cluster changes.
Sounds simple, right? ;-)
When looking at the Træfik documentation, it seems to want an ingress resource to send its traffic to, and an ingress resource requires an ingress controller, which, I guess, requires a load balancer to become accessible? Doesn't that defeat the purpose, or is there something I'm missing?
Here is something that could be useful in your case: https://github.com/unibet/ext_nginx. I'm not sure if the project is still in development, though, and configuration is probably hard, as you need to allow the external ingress to access the internal k8s network.
Maybe you can try to do that at the AWS level? You could add a cron job on the Nginx EC2 instance that queries AWS via the CLI for all EC2 instances tagged "k8s" and updates the nginx configuration if something has changed.
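If you do point a standalone Træfik at the nodes, a rough sketch of what such a config could look like (Traefik v2 file provider; the hostname, node IPs, and NodePort are hypothetical); whatever syncs with the node list, such as the cron job above, would simply rewrite this file:
http:
  routers:
    myapp:
      rule: "Host(`myapp.example.com`)"
      service: myapp-nodeport
  services:
    myapp-nodeport:
      loadBalancer:
        healthCheck:          # drop nodes that stop answering
          path: /
          interval: "10s"
        servers:
          - url: "http://10.0.1.11:30080"   # node 1 (hypothetical)
          - url: "http://10.0.1.12:30080"   # node 2 (hypothetical)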

Kubernetes service with exactly one pod from a deployment?

I've got a k8s deployment with 3 pods in it and I've set up a NodePort service to forward SSH (port 22) to the 3 pods. Everything works as expected, but each time I SSH in, I get a random pod. I'd like to make it sticky so that I always get the same pod, but I'm unsure if this is possible.
According to the documentation, setting sessionAffinity: ClientIP probably won't work for NodePorts. I don't think externalTrafficPolicy: Local will work because you need to use a LoadBalancer service. I don't think LoadBalancer services are feasible for me because I need to create hundreds of these and each LoadBalancer costs money and uses up quota.
What I'm wondering here is whether it's possible to create a service that doesn't point to all 3 pods in the deployment, but instead exactly 1 pod. That would help for my situation. I could manually attach a special label to 1 pod and set the service selector to that label, but it feels brittle to me in case that pod dies and is replaced.
One way to get around this would be to create your pods using a StatefulSet instead of a Deployment. Then your pods have deterministic names and will retain them when restarted. That way you can create a service that points to myapp-0, myapp-1, etc. and be reasonably sure that, while connections will break for a while when a pod is rescheduled or restarted, things will get back to a working state. You will need to automate the creation of such services when scaling the StatefulSet, though, and your "affinity" would be based on the service port the client connects to (you can't have multiple services on the same port).
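A rough sketch of such a per-pod Service, relying on the statefulset.kubernetes.io/pod-name label that the StatefulSet controller puts on each pod (the names here are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: myapp-ssh-0
spec:
  type: NodePort
  selector:
    # added automatically to every StatefulSet pod
    statefulset.kubernetes.io/pod-name: myapp-0
  ports:
    - name: ssh
      port: 22
      targetPort: 22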
That said, this is definitely not a good pattern to follow. You should ensure that your client can connect to any of the pods and that they share required state by means of another service they all use or a shared RWX volume if it's about files.

Automating Salt-Minion Installation

I have to set up a new Salt configuration.
For minion setup I want to devise an approach, and I came up with this:
Make an entry for the new minion in the /etc/salt/roster file so that I can use salt-ssh.
Run a Salt formula to install salt-minion on this new minion.
Generate the minion fingerprint with salt-call key.finger --local on the minion and somehow (still figuring that out) get it to the master, keeping it in some file until the minion actually tries to connect.
When the minion actually tries to connect to the master, the master verifies the minion's identity against the stored fingerprint and then accepts the key.
Once this is done, a Salt state can bring the minion up to its desired state.
The manual chores associated with this:
I'll have to make manual entries (minion id, IP, and user) in the /etc/salt/roster file for every new minion that I want up.
Other than this I can't figure any drawbacks.
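For context, each such roster entry is just a few lines of YAML (the id, host, and user below are hypothetical):
# /etc/salt/roster
web01:
  host: 192.0.2.21
  user: admin
  sudo: True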
My questions are:
Is this approach feasible?
Are there any security risks?
Is a better approach already out there?
P.S. Master and minions may or may not be on public network.
There is salt-cloud for provisioning new nodes. Among other providers, it includes saltify, which uses SSH for the provisioning. See here for the online documentation. It will do the following, all in one step:
create a new set of keys for the minion
register the minion's key with the master
connect to the minion using SSH and bootstrap the minion with salt and the minion's keys
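A minimal sketch of the provider and profile configuration that approach needs (the names, host, and key path are hypothetical):
# /etc/salt/cloud.providers.d/saltify.conf
my-saltify:
  driver: saltify

# /etc/salt/cloud.profiles.d/saltify.conf
saltify-linux:
  provider: my-saltify
  ssh_host: 192.0.2.10
  ssh_username: root
  key_filename: /root/.ssh/id_rsa
Provisioning a new machine is then a single salt-cloud -p saltify-linux <minion-id> run, which replaces the manual roster workflow from the question.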
If you want the minions to verify the master's key once they connect, you can publish a certificate to the minions and sign the master's key with the certificate as described here. Please double-check whether saltify already supports this.
Some time ago I prepared a salt-cloud setup on my GitHub account that works both with DigitalOcean and with Vagrant. The Vagrant provisioning uses salt-cloud with saltify. Have a look at the included cheatsheet.adoc for the Vagrant commands.
