In a legacy project we use Zuul, and we need to base the Zuul variable 'voting' on which branch triggered the job.
- name: ^job_name$
  branch: <?>
  voting: <true or false or ?>
Note that we use an older version of Zuul (2.x).
thanks
You cannot make voting conditional, but you can split your jobs into two categories: ones that are voting and ones that are not.
A common approach is to have one job per branch, as sketched below.
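For example, in a Zuul 2.x layout.yaml, two separately named jobs can carry the per-branch voting behaviour (the job names and branch patterns here are made up for illustration):

jobs:
  # Voting job, restricted to the master branch:
  - name: ^myproject-tests$
    branch: ^master$
    voting: true
  # Separate non-voting job for stable branches:
  - name: ^myproject-tests-stable$
    branch: ^stable/.*$
    voting: false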
We have several golden AMIs that teams have built AMIs from (children), and some AMIs are built from those (grandchildren). We'd now like to figure out how to trace a descendant back to its parent golden AMI. There is an /etc/os-release on the Amazon AMIs, which is useful, but it is harder to identify the AMIs in between.
Possible solutions
Tagging of AMIs and tagging of descendant AMIs
This would work, but it would require this tagging approach in every Packer script, which someone may forget to include.
"tags": {
"source_ami": "{{ .SourceAMI }}",
"source_ami_name": "{{ .SourceAMIName }}",
"source_ami_date": "{{ .SourceAMICreationDate }}"
}
In addition to that, we can create a Cloud Custodian policy that automatically deregisters any new AMI (created after a specific date) that does not carry the mandated tags above.
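A rough sketch of such a policy (using an age-based filter as an approximation of the date cutoff; the policy name, tag key, and threshold are placeholders):

policies:
  - name: deregister-amis-missing-lineage-tags
    resource: ami
    filters:
      - type: value
        key: CreationDate
        value_type: age
        op: less-than
        value: 90                  # placeholder: AMIs created in the last 90 days
      - "tag:source_ami": absent   # the mandated lineage tag is missing
    actions:
      - deregister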
Another problem with this approach is that tags are not carried over when these AMIs are shared with other accounts. This solution would therefore also require either a Lambda or a Packer post-processor that can assume a role in the child accounts in order to copy the AMI tags from the primary build account into each child account.
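A minimal sketch of that tag copy using the AWS CLI (the account ID, role name, AMI ID, and environment variables are all placeholders):

# Assume a role in the child account, then re-create the lineage tags there.
CREDS=$(aws sts assume-role \
    --role-arn arn:aws:iam::111111111111:role/ami-tag-copier \
    --role-session-name copy-ami-tags \
    --query Credentials --output json)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .SessionToken)
aws ec2 create-tags --resources ami-0123456789abcdef0 \
    --tags Key=source_ami,Value="$SOURCE_AMI" Key=source_ami_name,Value="$SOURCE_AMI_NAME"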
Manifest JSON file (example) downloaded to the EC2 instance upon boot
This would not put the resulting AMI ID on the AMI itself, since we do not know the AMI ID until the build is complete. What we can do instead is use a manifest post-processor to output manifest.json, upload it to a prefix named after the resulting AMI (e.g. aws s3 cp manifest.json s3://bucket-ami-output/<ami-id>/manifest.json), and then have the EC2 instance run an /etc/rc.local script on boot that queries the instance metadata for its AMI ID, downloads the matching manifest.json, and checks for a non-existent /etc/os-<parent-id>.json (e.g. /etc/os-0.json). If os-0.json already exists, increment the parent ID until a free one is found, then move the JSON file into place.
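A sketch of such a boot-time script (assuming IMDSv1 metadata access, the bucket layout above, and the AWS CLI present on the instance):

#!/bin/bash
# /etc/rc.local sketch: fetch this instance's AMI ID, pull the matching
# manifest, and store it under the first free /etc/os-<parent-id>.json slot.
AMI_ID=$(curl -s http://169.254.169.254/latest/meta-data/ami-id)
aws s3 cp "s3://bucket-ami-output/${AMI_ID}/manifest.json" /tmp/manifest.json
i=0
while [ -e "/etc/os-${i}.json" ]; do
  i=$((i + 1))
done
mv /tmp/manifest.json "/etc/os-${i}.json"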
Or we could create a file that contains the source AMI instead of the resulting AMI. This is possible with a script that hits the metadata endpoint http://169.254.169.254/latest/meta-data/ami-id to get the current AMI during the packing process and then dumps that information into an /etc/os-0.json file.
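That build-time variant could be as small as the following, run from a Packer shell provisioner (the incrementing convention mirrors the one above):

# During the build, the instance's current AMI *is* the source AMI.
AMI_ID=$(curl -s http://169.254.169.254/latest/meta-data/ami-id)
i=0
while [ -e "/etc/os-${i}.json" ]; do i=$((i + 1)); done
echo "{\"source_ami\": \"${AMI_ID}\"}" > "/etc/os-${i}.json"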
I'm leaning toward the first approach because it seems much simpler.
I've successfully installed Prometheus in Google Container Engine and I have these targets up:
kubernetes-apiservers
kubernetes-cadvisor
kubernetes-nodes
Now I would like to scrape Nginx stats from each of the Docker containers inside this Kubernetes cluster (which seems like a sensible thing to do).
But how can I make Prometheus automatically pull the metrics from all the Nginx instances running in all of the Docker containers?
From my research so far, the answer involves kubernetes_sd_config but I simply could not find enough documentation on how to put the pieces together.
Thank you!
Edit: This is not about exposing the Nginx stats. This is just about scraping any stats that are exposed by all Docker containers.
You are correct that you need to use the kubernetes_sd_config directive. Before continuing, note that what you should really be asking is how to "automatically scrape all pods in Kubernetes", because the pod, not the container, is the smallest unit of scale in Kubernetes. Regardless, it is clear what you are trying to do.
So the kubernetes_sd_config can be used to discover all pods with a given tag like so:
- job_name: 'some-app'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_label_app]
      regex: python-app
      action: keep
The source label __meta_kubernetes_pod_label_app uses the Kubernetes API to look at pods that carry a label named app, and the keep action drops every pod whose value for that label is not matched by the regex (in this case, python-app).
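If, instead of keying on one specific label, you want to scrape every pod that opts in, the example configuration from the Prometheus documentation uses the prometheus.io/* pod annotations, roughly like this:

- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Only keep pods annotated with prometheus.io/scrape: "true".
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # Let pods override the scrape port with a prometheus.io/port annotation.
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__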
Hope that helps. You can follow the blog post here for more detail. Also, for more information about kubernetes_sd_config, check out the docs here.
Note: it is worth mentioning that kubernetes_sd_config is still in beta, so breaking changes to its configuration may occur in future releases.
This is related to the Hyperledger Fabric v1.0 network topology.
From the example, configtx.yaml contains the following definitions:
Profiles:
  TwoOrgsOrdererGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *OrdererOrg
    Consortiums:
      SampleConsortium:
        Organizations:
          - *Org1
          - *Org2
  TwoOrgsChannel:
    Consortium: SampleConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Org1
        - *Org2
There are two main sections.
TwoOrgsOrdererGenesis
This defines the orderer service and the genesis block details.
TwoOrgsChannel
This defines the channel details, such as which organizations/entities are going to be part of the channel.
What I understood from the documentation is that the Consortiums section defines which organizations/entities belong to a Consortium.
My questions:
What is the role of Consortium?
Can a Consortium entity have peer nodes running of its own?
If yes, how do I configure that in this yaml file?
What is the meaning of the <<: line?
What is Application in this context?
Can I define multiple profiles in this yaml file?
I would appreciate it if anyone could explain in detail.
My questions:
What is the role of Consortium?
A consortium consists of organizations, and an organization contains peers or orderers. One channel is associated with exactly one consortium, but one consortium can be associated with many channels.
Can a Consortium entity have peer nodes running of its own?
If yes, how do I configure that in this yaml file?
No, but if you want to define a consortium, you have to provide the MSPs of its organizations.
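For reference, an organization and its MSP are typically defined in the Organizations section of configtx.yaml; in the fabric-samples layout that looks roughly like:

Organizations:
  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
    AnchorPeers:
      - Host: peer0.org1.example.com
        Port: 7051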
What is the meaning of the <<: line?
It's YAML syntax (a merge key); see the explanation of anchors and merge keys further down.
What is Application in this context?
A channel application, such as a Node.js application. But for now, my channel works without that section.
Can I define multiple profiles in this yaml file?
Sorry, I don't understand what you want to define multiple profiles for.
To reply to "Can I define multiple profiles in this yaml file?": the answer is yes.
As you can see in this sample file, multiple profiles are defined there.
Profiles are used to define the configuration of the genesis block and the first channel configuration transaction. In the code that you provided, TwoOrgsOrdererGenesis should be used as the profile parameter for the configtxgen command that generates the genesis block:
configtxgen -profile TwoOrgsOrdererGenesis -channelID sys-channel -outputBlock ./channel-artifacts/genesis.block
while the second is used to generate the artifacts for the channel transaction:
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID channel-name
In the above file you can look through several different configurations... I think it is a good starting point for understanding the network configuration, which is still something I'm trying to fully figure out myself.
Regarding the question "What is the meaning of <<: line?"
It's YAML syntax:
The & marks an anchor for a node (in configtx.yaml, for example, &OrdererDefaults anchors the orderer defaults node), and * references the anchored node by that name. The <<: merge key inserts the content of that node at the current position.
This type of reference is used for repeated nodes (objects): they are first identified by an anchor (marked with the ampersand, &) and then aliased (referenced with an asterisk, *) thereafter.
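A minimal standalone example of the same mechanism:

defaults: &defaults          # "&" defines an anchor named "defaults"
  Organization: SampleOrg
  LogLevel: info

development:
  <<: *defaults              # "<<:" merges the anchored mapping's keys here
  LogLevel: debug            # locally defined keys override merged ones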
Currently, all parameters passed to a template are hardcoded (for instance, Windows VM version: 2012-Datacenter, 2016-Datacenter, and so on). Is there a way to dynamically update these values based on the type of subscription or the location?
As far as I know, there is no built-in feature for you to dynamically update parameter values based on the type of subscription or the location. You could add your feedback here. In order to achieve this, I assume that you would need to add your own logic to generate the TemplateParameterFile based on the type of subscription or the location of your resource group, then leverage the New-AzureRmResourceGroupDeployment command to deploy your Azure resources. Moreover, there are some common ARM templates here that you could refer to.
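A minimal sketch of that logic in PowerShell (the resource group name, parameter file names, and the location-to-file mapping are all hypothetical):

# Pick a parameter file based on the resource group's location,
# then deploy it with the AzureRM module.
$rg = Get-AzureRmResourceGroup -Name "my-rg"
$paramFile = switch ($rg.Location) {
    "westeurope" { ".\parameters.westeurope.json" }
    "eastus"     { ".\parameters.eastus.json" }
    default      { ".\parameters.default.json" }
}
New-AzureRmResourceGroupDeployment -ResourceGroupName $rg.ResourceGroupName `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile $paramFile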
I've been trying to provision a two-node-type Service Fabric cluster using ARM. The secondary node type (backend) should not be exposed to the internet. For that I've created a load balancer with an internal IP address.
Everything gets provisioned correctly but I cannot get the nodes added to the cluster. From the Azure portal when I open the cluster it says it has no nodes in it even though it has the node types configured.
I have even tried downloading the template produced by the Azure portal after creating a Service Fabric cluster. I have also executed one of the templates provided on GitHub, and I still cannot see any nodes in the cluster.
Any suggestions as to what I could be missing?
Thanks
Glad to hear you got that sorted. Regarding your follow-up question on deploying to the backend node types, that's where you'd use placement constraints. When you create clusters in Azure through ARM, it automatically sets up a placement property on each node using the node type name you defined. So on your backend nodes, assuming your node type is called "backendnode", you'll have the following placement property defined:
NodeTypeName: backendnode
When you deploy your services, just use that as your placement constraint:
New-ServiceFabricService -ApplicationName "fabric:/myapp" -ServiceName "fabric:/myapp/myservice" `
    -ServiceTypeName "myservicetype" -Stateful -MinReplicaSetSize 2 -TargetReplicaSetSize 3 `
    -PartitionSchemeSingleton -PlacementConstraint "NodeTypeName == backendnode"