Salt Stack: grains vs pillars

In the Salt system there are grains and pillars. I understand how I can assign custom grains, but when would it be better to consider using pillars?

In Salt, grains are used for immutable aspects of your minion, such as the CPU, memory, location, time zone, etc.
Pillar is data kept on the master (in SLS format) that you need to distribute to your minions. Pillar allows you to set variables that the minions can access, for example a database configuration option.

In short, custom static grains are usually a worse alternative than pillars.
| Differences | Grains | Pillars |
|------------------------------|-----------------------------------------------|--------------------------------------------------|
| This is info which... | ...the Minion knows about itself | ...the Minion asks the Master about |
| Distributed: | Yes (different per minion) | No (single version per master) |
| Centralized: | No | Yes |
| Computed automatically: | Yes (preset/computed value) | No (only rendered from Jinja/YAML) |
| Assigned manually: | No (too elaborate) | Yes (Jinja/YAML sources) |
| Conceptually intrinsic to... | ...the individual Minion node | ...the entire system managed by the Master |
| Data under revision control: | No (computed values) | Yes (Jinja/YAML sources) |
| They define rather... | _provided_ resources (e.g. minion OS version) | _required_ resources (e.g. packages to install) |

The fundamental difference here is that a custom grain is set as an innate property of the minion itself, whereas pillar data has to be assigned to the minion from the master at some point.
For example, there are two practical ways to assign a role to a minion: the minion id or using custom grains. You can then match against the minion id or custom grains inside your top.sls file like so:
# salt/top.sls
base:
  # match against custom grain
  'G@role:webserver':
    - match: compound
    - webserver
  'G@role:search':
    - match: compound
    - elasticsearch
  # match against minion id
  'minion_db*':
    - database
You CANNOT do this with pillar. While you can indeed target with pillar, you first need a way to assign pillar data to your minions (and that must be by minion id or by grains, as stated above). Think about how you would assign pillar in the pillar top file: you need to assign the pillar data using an innate attribute of the minion.
# pillar/top.sls
base:
  'G@env:dev':
    - match: compound
    - dev_settings
  'G@env:prod':
    - match: compound
    - prod_settings
The pattern here is that you use grains (or minion id) as a minimal way to set type/role/environment of your minion. After that, you use pillar data to feed it all the appropriate detailed settings.

Pillar is also useful for ensuring that only certain minions get a particular bit of information.
There are some great docs here:
http://docs.saltstack.com/topics/pillar/index.html
and here:
http://docs.saltstack.com/topics/tutorials/pillar.html
You can also use an External Pillar to allow an arbitrary database or config file to set your Pillar data for you. This allows for very powerful integration with other aspects of your infrastructure.
There are several built in external pillars listed here:
http://docs.saltstack.com/ref/pillar/all/index.html
And it's pretty straightforward to build a custom external pillar:
http://docs.saltstack.com/topics/development/external_pillars.html
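For a rough idea of what that involves, here is a minimal external pillar sketch in Python; the module name, file path, and the app_settings key are assumptions made up for this example, but ext_pillar(minion_id, pillar, ...) is the entry point the master calls for each minion, as described in the development docs above.

# json_file_pillar.py -- hypothetical module, placed in the master's
# extension_modules/pillar directory. Salt calls ext_pillar() per minion and
# merges the returned dict into that minion's pillar data.
import json
import logging

log = logging.getLogger(__name__)


def ext_pillar(minion_id, pillar, config_path="/srv/pillar-data/settings.json"):
    """Return extra pillar data for minion_id, read from a JSON file."""
    try:
        with open(config_path) as handle:
            all_settings = json.load(handle)
    except OSError as exc:
        log.error("Could not read %s: %s", config_path, exc)
        return {}

    # Hand each minion only its own section plus anything under "common".
    data = dict(all_settings.get("common", {}))
    data.update(all_settings.get(minion_id, {}))
    return {"app_settings": data}

It would then be enabled in the master config, roughly (again, names are illustrative): ext_pillar: [{json_file_pillar: /srv/pillar-data/settings.json}].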

Related

DynamoDB access pattern for storing shopping history

What is a solid DynamoDB access pattern for storing data from a bunch of receipts of identical format? I would use SQL for maximum flexibility on more advanced analytics, but as a learning exercise I want to see how far one can go with DynamoDB here. For starters, I'd like to query for aggregate overall and per-product spending for a given time range, track product price history, sort receipts by total, things along those lines. But I also want it to be as flexible as possible for future queries I haven't thought of yet. Would something like this, plus some GSIs, work?
| pk | sk | unit $ | qty | total $ | receipt total | items |
|-------------|------------------------|--------|-----|---------|---------------|--------------------------|
| "product a" | "2021-01-01T12:00:00Z" | 2 | 2 | 4 | | |
| "product b" | "2021-01-01T12:00:00Z" | 2 | 3 | 6 | | |
| "receipt" | "2021-01-01T12:00:00Z" | | | | 10 | array of above item data |
| "product a" | "2021-01-02T12:00:00Z" | 1.75 | 3 | 5.25 | | |
| "product c" | "2021-01-02T12:00:00Z" | 2 | 2 | 4 | | |
| "receipt" | "2021-01-02T12:00:00Z" | | | | 9.25 | array of above item data |
You have to decide on your access patterns first and build the DynamoDB design from those, not the other way around. No one outside your team/product can tell you what your access patterns are; that depends entirely on your product's needs.
You have to ask: what pieces of information do you have, and what do you need to retrieve when you have those pieces of information? Then decide which queries will be run most often and craft your PK/SK combinations around them. If you can't fit all your queries into just one or two pieces of information, you may want to set up an index, but indexes should be reserved for queries that are run far less often.
If you need to, it's also accepted practice to store the same information twice, in two items in the table, because writes are easier/cheaper than extra reads (a write is pretty much one WCU per item, while any query/scan can consume multiple RCUs even if you only need one part; also, since indexes are replications of the table, there is a chance of desync if you write and read too quickly, or write/read the same item in parallel calls).
Take your time now to sit down and consider everything your app will need to query DynamoDB for. The more you can figure out now, the better, and if you can set your PK to something that will almost always be available to the calling function trying to query, you will be in a much better state.
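As a concrete illustration of querying such a design, here is a minimal boto3 sketch, assuming a table named "receipts" with the pk/sk layout above (the table name, date range, and attribute names are just taken from the example):

# Minimal boto3 sketch: fetch all rows for one product in a date range,
# e.g. to aggregate per-product spending client-side.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("receipts")  # assumed table name

response = table.query(
    KeyConditionExpression=Key("pk").eq("product a")
    & Key("sk").between("2021-01-01T00:00:00Z", "2021-01-31T23:59:59Z")
)
items = response["Items"]

# DynamoDB has no SUM/GROUP BY, so aggregation happens in the application.
total_spent = sum(item.get("total $", 0) for item in items)
print(total_spent)

Note that a key condition is always "given a pk, plus optionally a range of sk", which is exactly why the access patterns have to drive the choice of pk/sk in the first place.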

How to search for unique users in this dynamodb table?

How does one return a list of unique users from a DynamoDB table with the following (simplified) schema? Does it require a GSI? This is for an app with a small number of users, and I can think of ways that would work for my needs without creating a GSI (like scanning and filtering on SK, or creating a new item with a list of user ids inside). But what is the scalable solution?
| pk | sk | amount | balance |
|---------|------------------------|--------|---------|
| "user1" | "2021-01-01T12:00:00Z" | 7 | |
| "user1" | "2021-01-03T12:00:00Z" | 5 | |
| "user2" | "2021-01-01T12:00:00Z" | 3 | |
| "user2" | "2021-01-03T12:00:00Z" | 2 | |
| "user1" | "user1" | | 12 |
| "user2" | "user2" | | 5 |
Your data model isn't designed to fetch all unique users efficiently.
You certainly could use a scan operation and filter with your current data model, but that is inefficient.
If you want to fetch all users in a single query, you'll need to get all user information into a single partition. As you've identified, you could do this with a GSI. You could also re-organize your data model to accommodate this access pattern.
For example, you mentioned that the application has a small number of users. If the number of users is small enough, you could create a partition that stores a list of all users (e.g. PK=USERS). If that single item stays under the 400 KB item size limit, that may be a viable solution.
The idiomatic solution is to create a global secondary index.
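For illustration, here is a sketch of that GSI approach with boto3; the table name, index name, and the constant entity attribute are assumptions for this example (the marker attribute would only be set on the per-user summary items such as "user1"/"user1", making the index sparse):

# Query a (hypothetical) GSI whose partition key is a constant marker attribute,
# so that all per-user summary items land in one index partition.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("balances")    # assumed table name

users = []
kwargs = {
    "IndexName": "entity-index",                         # assumed GSI name
    "KeyConditionExpression": Key("entity").eq("USER"),  # assumed marker attribute
}
while True:
    page = table.query(**kwargs)
    users.extend(item["pk"] for item in page["Items"])
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

print(users)  # e.g. ["user1", "user2"]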

Cisco ASA Concurrent VPN Users - Timechart in KQL / Azure Sentinel

I need to make a timechart of concurrent VPN users connected to my Cisco ASA like the one in the following screenshot:
[screenshot: the desired concurrent-VPN-users timechart in Splunk]
Another timechart screenshot here: https://drive.google.com/file/d/1dW8nyG3dz3GbPiXuiXZofuhccoHpEHSP/view?usp=sharing
In Splunk this was made possible by the awesome query posted here:
https://community.splunk.com/t5/Splunk-Search/Concurrent-Active-VPN-Sessions-on-a-Timechart/m-p/493141#M137524
If the same logic can be used to achieve the desired result, I just need help converting the following part of the above Splunk query into KQL:
| sort 0 _time
| eval time2=_time
| bin span=20m time2
| eval time2=if(status="disconnected",NULL,time2)
| eval _time=coalesce(time2,_time)
| streamstats count(eval(status="assigned")) as session by user
| stats values(eval(if(status="assigned",round(_time),NULL))) as start values(eval(if(status="disconnected",round(_time),NULL))) as end by user session
| eval timerange=mvrange(start,end,1200)
| mvexpand timerange
| rename timerange as _time
| timechart span=20m count(user)
Expected Output (from splunk) : https://drive.google.com/file/d/11F5p_zOGlgenIqVsToXiPlL2UplSIRNa/view?usp=sharing
Sample Data (from Sentinel, parsed) :
https://drive.google.com/file/d/1wzansi1MfCnUylNHSeUHiw8POIxzS4q_/view?usp=sharing
Yea, we had to switch from splunk to Azure Sentinel. (Don't ask why.)
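For reference, the core of that Splunk query is: pair each user's "assigned" event with the following "disconnected" event, expand each session into 20-minute buckets, and count sessions per bucket. A minimal Python sketch of that logic (field names and sample events are made up), independent of Splunk or KQL, in case it helps whoever translates it:

# Sketch of the concurrency logic: pair assigned/disconnected events per user,
# expand each session into 20-minute buckets, count sessions per bucket.
from collections import Counter, defaultdict
from datetime import datetime, timedelta

BUCKET = timedelta(minutes=20)

# (time, user, status) events; illustrative sample data
events = [
    (datetime(2021, 1, 1, 12, 5), "alice", "assigned"),
    (datetime(2021, 1, 1, 13, 40), "alice", "disconnected"),
    (datetime(2021, 1, 1, 12, 50), "bob", "assigned"),
    (datetime(2021, 1, 1, 14, 10), "bob", "disconnected"),
]

def floor_to_bucket(ts):
    step = int(BUCKET.total_seconds())
    return datetime.fromtimestamp(int(ts.timestamp()) // step * step)

open_sessions = defaultdict(list)   # user -> pending session start times
concurrent = Counter()              # bucket start -> number of active sessions

for ts, user, status in sorted(events):
    if status == "assigned":
        open_sessions[user].append(ts)
    elif status == "disconnected" and open_sessions[user]:
        start = open_sessions[user].pop(0)
        bucket = floor_to_bucket(start)
        while bucket < ts:          # one count per bucket the session spans
            concurrent[bucket] += 1
            bucket += BUCKET

for bucket in sorted(concurrent):
    print(bucket, concurrent[bucket])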

How does Telegram check if a newly joined user's number exists in another user's contact list?

I've been trying to research this for a while now; what I want is very simple. I'm trying to compare two phone numbers and check if they match, because I want to implement something similar to Telegram: notify a user when someone in their contact list creates an account.
My problem is the following:
If I saved my contact using the format 0791234567 and my contact joined using the number +962791234567, both numbers are the same, but the first uses the local format and the second the international format. Does Telegram treat these two numbers as a match and send me a notification indicating that my contact has joined the network?
I tried to use Google's library for parsing the numbers, but unfortunately the library doesn't always parse numbers in every format, especially if the region is not provided.
Any hints? Or is this just not possible, and must all numbers be stored in a specific format to be able to find a match?
I think you should have two fields, country_code and phone_number, and when registering, logging in, changing the mobile number, etc., collect each field individually.
For example:

| id | first_name | last_name | password | country_code | phone_number | ... |
|----|------------|-----------|----------|--------------|--------------|-----|
| 1 | alihossein | shahabi | XXXXX | +98 | 9377548654 | |

Or two tables, users and phone_numbers:

| id | first_name | last_name | password |
|----|------------|-----------|----------|
| 1 | alihossein | shahabi | XXXXX |

| id | user_id | country_code | phone_number | active |
|----|---------|--------------|--------------|--------|
| 1 | 1 | +98 | 9377541258 | 1 |
| 2 | 1 | +98 | 9377543333 | 0 |
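Whichever schema you choose, the parsing problem from the question (0791234567 vs +962791234567) is what Google's libphonenumber handles; here is a minimal sketch using its Python port, phonenumbers, assuming you know a default region for locally formatted numbers (here "JO", since +962 is Jordan):

# Normalize raw input to E.164 so that 0791234567 and +962791234567
# compare as the same key. The default region is an assumption: it is what
# lets the locally formatted number be interpreted.
import phonenumbers

def to_e164(raw, default_region="JO"):
    try:
        parsed = phonenumbers.parse(raw, default_region)
    except phonenumbers.NumberParseException:
        return None
    if not phonenumbers.is_valid_number(parsed):
        return None
    return phonenumbers.format_number(parsed, phonenumbers.PhoneNumberFormat.E164)

print(to_e164("0791234567"))     # should print +962791234567
print(to_e164("+962791234567"))  # should print +962791234567

Storing the normalized E.164 value (or the country_code/phone_number split above) at write time is what makes the later "has my contact joined?" lookup a simple equality match.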

How to handle additional columns in join tables when using Symfony?

Let's assume I have two entities in my Symfony2 bundle, User and Group, associated by a many-to-many relationship.
┌────────────────┐ ┌────────────────┐ ┌────────────────┐
| USER | | USER_GROUP_REL | | GROUP |
├────────────────┤ ├────────────────┤ ├────────────────┤
| id# ├---------┤ user_id# | ┌----┤ id# |
| username | | group_id# ├----┘ | groupname |
| email | | created_date | | |
└────────────────┘ └────────────────┘ └────────────────┘
What would be a good practice or a good approach to add additional columns to the join table, like a created date which represents the date when User joined Group?
I know that I could use the QueryBuilder to write an INSERT statement.
But so far I have not seen any INSERT example using the QueryBuilder or native SQL, which makes me believe that the ORM/Doctrine tries to avoid direct INSERT statements (e.g. for security reasons). Plus, as far as I understand Symfony and Doctrine, I would be surprised if such a common requirement weren't covered by the framework.
You want to set a property on the relation itself. In Doctrine, that means promoting the join table to its own entity (e.g. UserGroup) with two many-to-one associations plus the extra created_date column. This is how it's done in Doctrine:
doctrine 2 many to many (Products - Categories)
I answered that question with a use case like yours.
Here is an additional question/answer which considers the benefits and use cases: Doctrine 2 : Best way to manage many-to-many associations
