Is it possible to define nodegroups outside of the master config file (/etc/salt/master)? It seems that something as dynamic as node groups shouldn't live in the master config, or am I thinking about nodegroups the wrong way?
Here's my scenario: our servers are classified by role in their naming convention, which means we have multiple servers named location-nginxXX.mydomain.com, where XX denotes the node. The applications on a node can and will change over time. As we move an application from one node to another, we want to target the new node with additional states and pillar items. Ideally, we'd update some config (pillar, maybe?) with the list of servers assigned to a given application. Then we'd update the servers with the states for the new application and remove the states no longer needed.
Is our approach sound, and if so, how do you target a changing set of minions with states and pillars?
If you're using nodegroups 'dynamically', this is usually done with one of the compound matchers. You define a nodegroup (in the master config) that matches something that can be changed elsewhere (grains being the most common, but pillar data also works). When you want to change a server's group, just modify its config so it is matched by a different nodegroup.
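To make this concrete, here's a minimal sketch of a grain-based nodegroup setup (the `role` grain and the group names are hypothetical examples). Note that the master reads `/etc/salt/master.d/*.conf` via its default include, so the nodegroup definitions can live in their own file rather than in `/etc/salt/master` itself:

```yaml
# /etc/salt/master.d/nodegroups.conf -- picked up by the master's
# default include. The 'role' grain and group names are examples.
nodegroups:
  app1: 'G@role:app1'
  app2: 'G@role:app2 and G@location:us-east'
```

Moving a server between groups is then just a grain change on the minion, e.g. `salt 'location-nginx01*' grains.setval role app2`, followed by targeting `-N app2` in your top files.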
You could use an ENC (external node classifier) - either build your own one, or use something like reclass.
When I generate resources, methods etc. using "for_each", how can I make a deployment depend on them? Terraform requires a static list as value for "depends_on"
I think what you are looking for here is this (somewhat hidden) reference in terraform documents about triggers
I was facing the same issue (using for_each to create gateway methods and integrations) but was not able to reliably trigger API Gateway redeployment, until this:
... or removing the .id references to calculate a hash against whole resources. Be aware that using whole resources will show a difference after the initial implementation. It will stabilize to only change when resources change afterwards.
This allows us to do the following in triggers:
triggers = {
  redeployment = sha1(jsonencode([
    aws_api_gateway_resource.gateway_resources,
    aws_api_gateway_method.gateway_methods,
    aws_api_gateway_integration.gateway_integrations,
  ]))
}
By removing .id (and thus not needing to reference each.key, or any element in the dynamic list) you let Terraform decide whether the hash of the serialized resources changed. If it did, it will redeploy; if it didn't change, then no redeploy is required :)
Reference https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/api_gateway_deployment#terraform-resources
Look at the comments on 'triggers'
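For context, here is a fuller sketch of the deployment resource around that triggers block. The resource names are carried over from the snippet above and are assumptions; `create_before_destroy` is what the linked docs recommend so the old deployment isn't torn down before the new one exists:

```hcl
resource "aws_api_gateway_deployment" "this" {
  rest_api_id = aws_api_gateway_rest_api.this.id

  triggers = {
    # Hash whole resources (not just their .id) so any attribute change
    # forces a new deployment.
    redeployment = sha1(jsonencode([
      aws_api_gateway_resource.gateway_resources,
      aws_api_gateway_method.gateway_methods,
      aws_api_gateway_integration.gateway_integrations,
    ]))
  }

  lifecycle {
    create_before_destroy = true
  }
}
```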
I wrote a program which retrieves all user attributes from Active Directory using the Novell library, but unfortunately I don't get the value of the lastLogon attribute, even though this attribute contains a value.
The lastLogon attribute is not replicated. That means it will only be accurate on the domain controller that the user last authenticated against. Any other DC will have either an old value, or no value.
You have two options:
Query each DC and use the most recent value, or
Use the lastLogonTimestamp attribute, which was created for just this reason. It won't give you the exact time of the last logon, but it is guaranteed to be accurate within 2 weeks.
Also make sure you are reading from the domain (LDAP://) and not the Global Catalog (GC://). Neither attribute will be available from a GC.
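Whichever attribute you read, note that both come back as a Windows FILETIME large integer (100-nanosecond intervals since 1601-01-01 UTC), not a normal date. A minimal sketch of the conversion, shown in Python rather than the Novell library the question uses:

```python
from datetime import datetime, timedelta, timezone

# Windows FILETIME epoch: 1601-01-01 00:00:00 UTC
FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(filetime: int) -> datetime:
    """Convert an AD large-integer timestamp (100-ns intervals since
    1601-01-01 UTC), such as lastLogon or lastLogonTimestamp, to a
    timezone-aware datetime. A raw value of 0 means 'never logged on'."""
    if filetime == 0:
        raise ValueError("account has never logged on")
    # 10 FILETIME ticks per microsecond
    return FILETIME_EPOCH + timedelta(microseconds=filetime // 10)

# The Unix epoch as a FILETIME is the well-known constant below:
print(filetime_to_datetime(116444736000000000))
# -> 1970-01-01 00:00:00+00:00
```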
I have an app using React + Redux and coupled with Firebase for the backend.
Oftentimes, I will want to add some new attributes to existing objects.
When doing so, existing objects won't get the new attribute until they're modified by the new version of the app that handles those attributes.
For example, let's say I have a /categories/ node, and in there I've got objects such as this:
{
name: "Medical"
}
Now let's say I want to add an icon field with some default value.
Is it possible to update all categories at once so that field always exists with the default value?
Or do you handle this in the client code?
Right now I'm always testing the values to see if they're here or not, but it doesn't seem like a very good way to go about it. I'd like to have one place to define defaults.
It seems like having classes for each object type would be interesting but I'm not sure how to go about this in Redux.
Do you just use the reducer to turn all categories into class instances when you fetch them for example? I'm worried this would be heavy performance wise.
Any write operation to the Firebase Database requires that you know the exact path to the node that you're writing.
There is no built-in operation to bulk update nodes with a path that is only partially known.
You can either keep your client-side code robust enough to handle the missing properties, or you can indeed run a migration script to add the new property to each relevant node. But since that script will have to know the exact path of each node to write, it will likely first have to read/query the database to determine those paths. Depending on the number of items to update, it could possibly use multi-location updates after that to update multiple nodes in one call. E.g.
firebase.database().ref("categories").update({
"idOfMedicalCategory/icon": "newIconForMedical",
"idOfCommercialCategory/icon": "newIconForCommercial",
"idOfTechCategory/icon": "newIconForTech"
})
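The read-then-update migration can be scripted. Here is a sketch, in Python purely to illustrate the logic, of just the part that builds the multi-location update map from a fetched /categories/ snapshot; the function name and the icon default are assumptions, and the actual read and write would go through whichever Firebase SDK you use:

```python
def build_icon_migration(categories: dict, default_icon: str) -> dict:
    """Given the fetched /categories/ snapshot (a dict keyed by push id),
    build a multi-location update map that adds a default icon to every
    category that is missing one. Categories that already have an icon
    are left untouched."""
    updates = {}
    for key, category in categories.items():
        if "icon" not in category:
            updates[f"{key}/icon"] = default_icon
    return updates

snapshot = {
    "idOfMedicalCategory": {"name": "Medical"},
    "idOfTechCategory": {"name": "Tech", "icon": "chip"},
}
print(build_icon_migration(snapshot, "default"))
# -> {'idOfMedicalCategory/icon': 'default'}
```

The resulting map is exactly the shape the `update()` call above expects, so the whole migration is one read plus one write.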
Suppose there is an Active Directory; how can I check for or detect any changes, such as adding, deleting, or modifying an object, or even moving an object from one OU or container to another?
Edit: Is there any way to perform this using LDAP queries?
There are three documented ways to track AD changes: https://msdn.microsoft.com/en-us/library/ms677625(v=vs.85).aspx
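Roughly, those are change notifications, the DirSync control, and polling the uSNChanged attribute. The one that works with plain LDAP queries is the uSNChanged poll: read highestCommittedUSN from the rootDSE, remember it, and on the next poll search with a filter like the following (12345 is a placeholder for your saved USN):

```
(uSNChanged>=12345)
```

Any object returned was created, modified, or moved since your last poll. Note that uSNChanged is not replicated, so the saved USN must be tracked per domain controller.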
I'd like to try to expose some connections for a web part at runtime; at compile time I don't know what they are, and I'm wondering if anyone can provide any suggestions on where to start.
All the examples I've read seem to do so statically using [ConnectionConsumer] and [ConnectionProvider], which obviously needs to be done in code; I don't, however, know at this point in time what I need to expose.
My use case would be something like a grid that uses a DataTable. The DataTable is retrieved by using a SQL statement:
select * from myTable
The connections I want to expose are when this changes to
select * from myTable where columnA = myConnection1
At this point I want to expose a connection for my WebPart called 'myConnection1', if I add multiple where clauses I want multiple connections that can be linked from other WebParts.
EDIT
An example of this would be like how ReportingServices within SharePoint handles connections. It seems to use a custom WebPartManager that determines at runtime the number, names and types of connections that need exposing.
You can create connections between web parts dynamically:
wpMgr.ConnectWebParts(wp1, cp1, wp2, cp2)
Ted Pattison: http://msdn.microsoft.com/en-us/magazine/cc188696.aspx#S6
Not sure what is dynamic in your question:
- the schema of the data that flows through the connections, or
- the creation of these connections from provider to consumer web parts at runtime?
Hope this helps.
In the end I determined that the best way was to use the IWebPartParameters interface and expose them manually.
http://blog.mindbusiness.de/blog/2011/09/05/implementation-of-iwebpartparameters-web-part/