When integrating Cloudify with OpenStack nova-network, if nova-network doesn't support floating IPs, how should openstack-nova-net-manager-blueprint.yaml be defined?
1. cloudify-manager-blueprints version: cloudify-manager-blueprints-3.2.1
https://github.com/cloudify-cosmo/cloudify-manager-blueprints/tree/3.2.1-build
2. The blueprint DSL looks like this:
[screenshot of the blueprint DSL]
How can I solve this problem? Thanks for your kind help!
The floating IP is used to connect to the manager once it has been bootstrapped.
If you do not have a floating IP, you can bypass it with one of two options:
1. Manually create an IP connected to the external network and use it as an external resource, so it would look like:
manager_server_ip:
  type: string
  default: 1.1.1.1

manager_server:
  type: cloudify.openstack.nodes.Server
  properties:
    resource_id: { get_input: manager_server_name }
    manager_server_ip: { get_input: manager_server_ip }
    install_agent: false
    server:
      image: { get_input: image_id }
      flavor: { get_input: flavor_id }
    openstack_config: { get_property: [openstack_configuration, openstack_config] }
  relationships:
    - target: management_security_group
      type: cloudify.openstack.server_connected_to_security_group
    - target: management_keypair
      type: cloudify.openstack.server_connected_to_keypair
2. Just create a regular IP on the same network; that will let you connect to the manager after bootstrap.
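For the first option, the bootstrap inputs would look something like this (a minimal sketch; the input names follow the blueprint above, and all values are placeholders):

manager_server_name: cloudify-manager
manager_server_ip: 1.1.1.1   # the pre-created IP on the external network
image_id: <your-image-id>
flavor_id: <your-flavor-id>

You would then bootstrap with that inputs file (assuming the CLI of that release):

cfy bootstrap -p openstack-nova-net-manager-blueprint.yaml -i inputs.yaml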
I have an existing vNet and subnet, and I'm trying to deploy a new NIC into the subnet with the following Bicep:
param location string = resourceGroup().location
param nicName string
param vNetName string
param subnetName string

resource vnet 'Microsoft.Network/virtualNetworks@2021-02-01' existing = {
  name: vNetName
  scope: resourceGroup('myRgName')
}

resource subnet 'Microsoft.Network/virtualNetworks/subnets@2021-02-01' existing = {
  parent: vnet
  name: subnetName
}

resource nsg 'Microsoft.Network/networkSecurityGroups@2021-08-01' = {
  name: '${nicName}-nsg'
  location: location
}

resource nic 'Microsoft.Network/networkInterfaces@2021-08-01' = {
  name: nicName
  location: location
  dependsOn: [
    subnet
  ]
  properties: {
    ipConfigurations: [
      {
        name: 'ipConfig'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          subnet: subnet
          primary: true
          privateIPAddressVersion: 'IPv4'
        }
      }
    ]
    networkSecurityGroup: nsg
  }
}
I compile the template and try to deploy, but I get the error "Value for reference id is missing. Path properties.ipConfigurations[0].properties.subnet.", which appears to be caused by ARM not finding the subnet (which exists and which I have access to).
The JSON portion of it looks like this:
"subnet": "[reference(extensionResourceId(format('/subscriptions/{0}/resourceGroups/{1}', subscription().subscriptionId, 'myRgName'), 'Microsoft.Network/virtualNetworks/subnets', split(parameters('subnetName'), '/')[0], split(parameters('subnetName'), '/')[1]), '2021-02-01', 'full')]",
Use

subnet: {
  id: subnet.id
}

for the subnet reference in the NIC's properties. You'll need the same for the networkSecurityGroup, as Thomas mentioned.
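Put together, the NIC resource from the question would look something like this sketch (only the two references change; the explicit dependsOn can go, since referencing subnet.id creates an implicit dependency):

resource nic 'Microsoft.Network/networkInterfaces@2021-08-01' = {
  name: nicName
  location: location
  properties: {
    ipConfigurations: [
      {
        name: 'ipConfig'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          subnet: {
            id: subnet.id // pass the resource ID, not the resource object
          }
          primary: true
          privateIPAddressVersion: 'IPv4'
        }
      }
    ]
    networkSecurityGroup: {
      id: nsg.id // same pattern for the NSG
    }
  }
}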
I deployed a simple NFT smart contract on the Polygon Mumbai testnet, but when I try to verify it I get an error. Please guide me on how to verify it.
This is the error I am getting:
PS C:\Users\Sumits\Desktop\truffle> truffle run verify MyNFT --network matic --debug
DEBUG logging is turned ON
Running truffle-plugin-verify v0.5.20
Retrieving network's chain ID
Verifying MyNFT
Reading artifact file at C:\Users\Sumits\Desktop\truffle\build\contracts\MyNFT.json
Failed to verify 1 contract(s): MyNFT
PS C:\Users\Sumits\Desktop\truffle>
This is my truffle-config.js:
const HDWalletProvider = require('@truffle/hdwallet-provider');
const fs = require('fs');
const mnemonic = fs.readFileSync(".secret").toString().trim();

module.exports = {
  networks: {
    development: {
      host: "127.0.0.1",   // Localhost (default: none)
      port: 8545,          // Standard Ethereum port (default: none)
      network_id: "*",     // Any network (default: none)
    },
    matic: {
      provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.maticvigil.com`),
      network_id: 80001,
      confirmations: 2,
      timeoutBlocks: 200,
      skipDryRun: true
    },
  },
  // Set default mocha options here, use special reporters etc.
  mocha: {
    // timeout: 100000
  },
  // Configure your compilers
  compilers: {
    solc: {
      version: "^0.8.0",
    }
  },
  plugins: ['truffle-plugin-verify'],
  api_keys: {
    polygonscan: 'BTWY55K812M*******WM9NAAQP1H3'
  }
}
First deploy the contract:
truffle migrate --network matic --reset
I am not sure you successfully deployed it to the matic network, because your configuration does not seem to be correct:

matic: {
  // make sure you set up the provider correctly
  provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.maticvigil.com/v1/YOURPROJECTID`),
  network_id: 80001,
  confirmations: 2,
  timeoutBlocks: 200,
  skipDryRun: true
},
Then verify:

truffle run verify ContractName --network matic

ContractName should be the name of the contract, not the name of the file.
Also, please make sure the polygonscan API key entry in api_keys is lowercase.
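Putting it together, the api_keys section would look like this (the value is a placeholder for your own key):

api_keys: {
  polygonscan: 'YOUR_POLYGONSCAN_API_KEY' // key name must be lowercase
}

after which you can rerun truffle run verify MyNFT --network matic.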
After using Terraform for a very long time, I decided to start learning Azure Bicep. So far I am trying to get a grip on the logic, and I have been playing around with deploying a storage account and a key vault. What I am doing here is the following:
1. create a storage account
2. use an existing key vault to store the storage account connection string as a secret
3. create a key based on the storage account name
And this works as I expected.
So I wanted to take one step forward, and here is where I am a bit confused.
What I wanted to do is use the same Bicep template to create a new secret, but in a different key vault in a different resource group.
Now, according to my understanding of the Azure documentation, the template comes with a default scope, which in my specific case targets my default subscription. To run my Bicep template from the terminal I use the command:

az deployment group create -f ./template.bicep -g <resource-group-name>

and this is my template:
// Default values I'm using to test
param keyVaultName string = '<keyvault-name>'
param managedIdentityName string = 'test-managed-identity'
param tenantCodes array = [
  'elhm'
  'feor'
]

// I'm using prefixes so I don't need to create additional arrays
var keyVaultKeyPrefix = 'Client-Key-'
var storagePrefix = 'sthrideveur'

// Get a reference to the key vault
resource keyVault 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  name: keyVaultName
}

// Create a managed identity
resource managedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
  name: managedIdentityName
  location: resourceGroup().location
}

// Grant permissions to the key vault
resource accessPolicy 'Microsoft.KeyVault/vaults/accessPolicies@2019-09-01' = {
  name: '${keyVault.name}/add'
  properties: {
    accessPolicies: [
      {
        tenantId: subscription().tenantId
        objectId: managedIdentity.properties.principalId
        permissions: {
          // minimum required permissions
          keys: [
            'get'
            'unwrapKey'
            'wrapKey'
          ]
        }
      }
    ]
  }
}

// Create key vault keys
resource keyVaultKeys 'Microsoft.KeyVault/vaults/keys@2021-06-01-preview' = [for tenantCode in tenantCodes: {
  name: '${keyVault.name}/${keyVaultKeyPrefix}${tenantCode}'
  properties: {
    keySize: 2048
    kty: 'RSA'
    // storage key should only need these operations
    keyOps: [
      'unwrapKey'
      'wrapKey'
    ]
  }
}]

// Create storage accounts
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = [for tenantCode in tenantCodes: {
  name: '${storagePrefix}${tenantCode}'
  location: resourceGroup().location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_RAGRS'
  }
  // Assign the identity
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${managedIdentity.id}': {}
    }
  }
  properties: {
    allowCrossTenantReplication: true
    minimumTlsVersion: 'TLS1_2'
    allowBlobPublicAccess: false
    allowSharedKeyAccess: true
    networkAcls: {
      bypass: 'AzureServices'
      virtualNetworkRules: []
      ipRules: []
      defaultAction: 'Allow'
    }
    supportsHttpsTrafficOnly: true
    encryption: {
      identity: {
        // specify which identity to use
        userAssignedIdentity: managedIdentity.id
      }
      keySource: 'Microsoft.Keyvault'
      keyvaultproperties: {
        keyname: '${keyVaultKeyPrefix}${tenantCode}'
        keyvaulturi: keyVault.properties.vaultUri
      }
      services: {
        file: {
          keyType: 'Account'
          enabled: true
        }
        blob: {
          keyType: 'Account'
          enabled: true
        }
      }
    }
    accessTier: 'Cool'
  }
}]

// Store the connection strings in KV if specified
resource storageAccountConnectionStrings 'Microsoft.KeyVault/vaults/secrets@2019-09-01' = [for (name, i) in tenantCodes: {
  name: '${keyVault.name}/${storagePrefix}${name}'
  properties: {
    value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccount[i].name};AccountKey=${listKeys(storageAccount[i].id, storageAccount[i].apiVersion).keys[0].value};EndpointSuffix=${environment().suffixes.storage}'
  }
}]
According to the documentation here, https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/deploy-to-resource-group?tabs=azure-cli, when I need to target a specific resource group I can use scope on the resource, so I created this:
resource keyvaultApi 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  name: keyVaultApiName
  scope: resourceGroup('secondresourcegroup')
}
So far no errors, but the problem happens when I have to create the managed identity resource:
resource keyvaultApi 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  name: keyVaultApiName
  scope: resourceGroup('secondresourcegroup')
}

resource managedIdentityTwo 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
  name: managedIdentityNameTwo
  location: resourceGroup().location
}

resource accessPolicyApi 'Microsoft.Media/videoAnalyzers/accessPolicies@2021-11-01-preview' = {
  name: '${keyvaultApi.name}/add'
  properties: {
    accessPolicies: [
      {
        tenantId: subscription().tenantId
        objectId: managedIdentityTwo.properties.principalId
        permissions: {
          // minimum required permissions
          keys: [
            'get'
            'unwrapKey'
            'wrapKey'
          ]
        }
      }
    ]
  }
}
On the key vault I can declare the scope, but on the underlying resources, such as the access policy, I cannot. So how can Bicep understand that those resources need to target a specific resource group and a specific key vault?
When I run the terminal command I am targeting a specific resource group, so I don't really understand how I can use one template to target different resource groups and resources accordingly.
I hope I made my point clear; if I didn't, please feel free to ask me for more information.
Thank you so much for your time and help.
UPDATE:
When I try to run the code as it is, I get the following error:
{"status":"Failed","error":{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"NotFound","message":"{\r\n \"error\": {\r\n \"code\": \"ParentResourceNotFound\",\r\n \"message\": \"Can not perform requested operation on nested resource. Parent resource 'secondkeyvault' not found.\"\r\n }\r\n}"}]}}
UPDATE:
So I followed Daniel's lead, and in a second template I deployed the code I needed, as follows:
template2.bicep
param deploymentIdOne string = newGuid()
param deploymentIdTwo string = newGuid()

output deploymentIdOne string = '${deploymentIdOne}-${deploymentIdTwo}'
output deploymentIdTwo string = deploymentIdTwo

// Default values I'm using to test
param keyVaultApiName string = 'secondkeyvaultapi'
param managedIdentityNameTwo string = 'second-second-identity'
var keyVaultKeyPrefixTw = 'Client-Key-'
param tenantCodes array = [
  'tgrf'
]

resource keyvaultApi 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  name: keyVaultApiName
}

resource managedIdentityTwo 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
  name: managedIdentityNameTwo
  location: resourceGroup().location
}

resource accessPolicyApi 'Microsoft.KeyVault/vaults/accessPolicies@2019-09-01' = {
  name: '${keyvaultApi.name}/add'
  properties: {
    accessPolicies: [
      {
        tenantId: subscription().tenantId
        objectId: managedIdentityTwo.properties.principalId
        permissions: {
          // minimum required permissions
          keys: [
            'get'
            'unwrapKey'
            'wrapKey'
          ]
        }
      }
    ]
  }
}

// Store the connection strings in KV if specified
resource clientApiKeys 'Microsoft.KeyVault/vaults/secrets@2019-09-01' = [for name in tenantCodes: {
  name: '${keyvaultApi.name}/${keyVaultKeyPrefixTw}${name}'
  properties: {
    value: '${deploymentIdOne}-${deploymentIdTwo}'
  }
}]
and in my main template I added the module:

module clientKeyApi 'template2.bicep' = {
  name: 'testfrgs'
  scope: resourceGroup('secondresourcegroup')
}
But there is something that is not 100% clear to me.
How does the module override the for loops and parameter names I have declared in template2.bicep? And the module requires a scope: if I declare the scope, wouldn't this override the default value?
Sorry guys for the newbie questions; I am trying to break my mindset from Terraform and understand better how Bicep works.
Any explanation would be amazing and helpful.
You can't specify scope on the resource, but you can specify it on a module. You'll need to turn the resource that adds the access policy to the key vault into a separate module, then specify scope on the module. You can also make the scope for your deployment the subscription, but then you'll need to break everything that targets a specific resource group into modules as well.
This is due to how ARM deployments work. The default scope for an ARM deployment is the resource group. You can't point a resource at a different resource group because it's outside the scope of the deployment.
Modules, however, run as sub-deployments and therefore can have a different scope set.
This is a case where Terraform is more straightforward, since it calls the Azure APIs directly instead of using the ARM deployment model. Terraform doesn't care about deployment scopes because it doesn't use them.
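As a concrete sketch (file and symbol names here are hypothetical, not from the question), the access-policy resource moves into its own file, and the calling template sets the module's scope:

// accessPolicy.bicep -- runs as its own sub-deployment
param keyVaultName string
param principalId string

resource keyVault 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  name: keyVaultName
}

resource accessPolicy 'Microsoft.KeyVault/vaults/accessPolicies@2019-09-01' = {
  parent: keyVault
  name: 'add'
  properties: {
    accessPolicies: [
      {
        tenantId: subscription().tenantId
        objectId: principalId
        permissions: {
          keys: [
            'get'
            'unwrapKey'
            'wrapKey'
          ]
        }
      }
    ]
  }
}

// main.bicep -- scope sends the whole module to the other resource group
module keyVaultPolicy 'accessPolicy.bicep' = {
  name: 'keyVaultPolicyDeployment'
  scope: resourceGroup('secondresourcegroup')
  params: {
    keyVaultName: keyVaultApiName
    principalId: managedIdentityTwo.properties.principalId
  }
}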
Hi,
I have OpenStack deployed on my laptop and I'm trying to create a stack with Heat.
I have created a keypair with openstack keypair create heat_key > heat_key.priv, which is recognized by Nova; nova keypair-list gives the following output:
+----------+------+-------------------------------------------------+
| Name | Type | Fingerprint |
+----------+------+-------------------------------------------------+
| heat_key | ssh | 0b:7a:36:20:e2:e3:19:3b:ab:a1:95:ac:67:41:67:d7 |
+----------+------+-------------------------------------------------+
This is my simple HOT template:
heat_template_version: 2013-05-23

description: Hot Template to deploy a single server

parameters:
  image_id:
    type: string
    description: Image ID
  key_name:
    type: string
    description: name of keypair to enable ssh to the instance

resources:
  test_stack:
    type: OS::Nova::Server
    properties:
      name: "test_stack"
      image: { get_param: image_id }
      flavor: "ds1G"
      key_name:{ get_param: key_name }

outputs:
  test_stack_ip:
    description: IP of the server
    value: { get_attr: [ test_stack, first_address ] }
When I try to create the stack with

openstack stack create -t myTemp.hot --parameter key_name=heat_key --parameter image_id=trusty-server-cloudimg-amd64-disk1 test_stack

I get the following error:

ERROR: Property error: : resources.test_stack.properties: : Unknown Property key_name:{ get_param

I have tried with different versions of templates but I get the same error.
Any idea why this is happening?
The WORST part about YAML is that it is SPACE sensitive, so we need to be really careful while editing or copying the content of a HEAT template. There is no space between "key_name" and "{", which is why it is failing:

key_name:{ get_param: key_pair_name }

Just put a space between these and it will work. I tested it :-)

key_name: { get_param: key_pair_name }
I was able to do it by providing the details in parameters. Please find below a sample script which worked for me:
heat_template_version: 2016-10-14

description: Admin VM - Test Heat

parameters:
  image_name_1:
    type: string
    label: Centos-7.0
    description: Centos Linux 7.0
    default: Centos-7.0
  network_id_E1:
    type: string
    label: 58e867ce-841c-48cf-8116-e72d998dbc89
    description: Admin External Network
    default: Admin
  network_id_I1:
    type: string
    label: 4f69c8e5-8f52-4804-89e0-2c8232f9f3aa
    description: Internal-1 Network
    default: SR-IOV Interface
  network_id_I2:
    type: string
    label: 28120cdb-7e8b-4e8b-821f-7c7d8df37c1d
    description: Internal-2 Network
    default: Internal-2
  KeyName:
    type: string
    default: IO_Perf_Cnt
    description: Name of an existing key pair to use for the instance
    constraints:
      - custom_constraint: nova.keypair
        description: Must name a public key (pair) known to Nova

resources:
  AdminVM1:
    type: OS::Nova::Server
    properties:
      availability_zone: naz3
      image: { get_param: image_name_1 }
      flavor: 4vcpu_8192MBmem_40GBdisk
      key_name: { get_param: KeyName }
      networks:
        - network: { get_param: network_id_E1 }
Try changing the parameter name key_name to some other name and execute it:
heat_template_version: 2015-10-15

description: Hot Template to deploy a single server

parameters:
  image_id:
    type: string
    description: Image ID
  key_pair_name:
    type: string
    description: name of keypair to enable ssh to the instance

resources:
  test_stack:
    type: OS::Nova::Server
    properties:
      name: "test_stack"
      image: { get_param: image_id }
      flavor: "ds1G"
      key_name: { get_param: key_pair_name }

outputs:
  test_stack_ip:
    description: IP of the server
    value: { get_attr: [ test_stack, first_address ] }
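With the renamed parameter, the create command from the question becomes:

openstack stack create -t myTemp.hot --parameter key_pair_name=heat_key --parameter image_id=trusty-server-cloudimg-amd64-disk1 test_stack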
I have a Meteor app based on Angular 1.3 + Meteor 1.5.2.2.
I am using Ubuntu 17.
I am trying to deploy my Meteor app on my local machine first, before going to a live server, using Meteor Up.
But I am facing this issue when running the mup setup command:
martinihenry#martinihenry:~/mytestapp-prod/.deploy$ mup setup
Started TaskList: Setup Docker
[192.168.100.12] - Setup Docker
events.js:141
throw er; // Unhandled 'error' event
^
Error: connect ECONNREFUSED 192.168.100.12:22
at Object.exports._errnoException (util.js:907:11)
at exports._exceptionWithHostPort (util.js:930:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1078:14)
Here is my mup.js:
module.exports = {
  servers: {
    one: {
      // TODO: set host address, username, and authentication method
      host: '192.168.100.12',
      username: 'root',
      // pem: './path/to/pem'
      // password: 'server-password'
      // or neither for authenticate from ssh-agent
    }
  },

  app: {
    // TODO: change app name and path
    name: 'mytestapp-prod',
    path: '../',

    servers: {
      one: {},
    },

    buildOptions: {
      serverOnly: true,
    },

    env: {
      // TODO: Change to your app's url
      // If you are using ssl, it needs to start with https://
      ROOT_URL: '192.168.100.12:3000',
      MONGO_URL: 'mongodb://localhost/meteor',
    },

    // ssl: { // (optional)
    //   // Enables let's encrypt (optional)
    //   autogenerate: {
    //     email: 'email.address@domain.com',
    //     // comma separated list of domains
    //     domains: 'website.com,www.website.com'
    //   }
    // },

    docker: {
      // change to 'kadirahq/meteord' if your app is using Meteor 1.3 or older
      image: 'abernix/meteord:base',
    },

    // Show progress bar while uploading bundle to server
    // You might need to disable it on CI servers
    enableUploadProgressBar: true
  },

  mongo: {
    version: '3.4.1',
    servers: {
      one: {}
    }
  }
};
What could be wrong here?
It looks like you don't have sshd running on your machine, or you have not enabled remote SSH access for root.
You need to edit /etc/ssh/sshd_config and comment out the following line:
PermitRootLogin without-password
Just below it, add the following line:
PermitRootLogin yes
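After the edit, that part of sshd_config reads:

#PermitRootLogin without-password
PermitRootLogin yes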
Then restart SSH:
service ssh restart
I know this is late, but this is a known and reproducible bug resulting from inotify using all of the available watch slots, and while very misleading, it actually has absolutely nothing to do with disk space.
The easy fix? Increase the number of watch slots:
sudo -i
echo 1048576 > /proc/sys/fs/inotify/max_user_watches
exit
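Note that this setting lasts only until reboot. To make it permanent, a common approach is to add the corresponding key to /etc/sysctl.conf and reload:

echo fs.inotify.max_user_watches=1048576 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p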