I'm studying FIWARE Health - Sanity Checks, and some configuration is needed before executing the Sanity Checks. To run them, I need to edit the configuration file, etc/settings.json.
But I don't understand what information I need to provide in region_configuration. I created some regions in my Keystone, but I don't know how to obtain the external_network_name and shared_network_name.
As you say, there are three parameters that you have to take into consideration:
external_network_name, the network that is actually connected to the internet. This means that you should configure this network in Neutron with the following values:
provider:physical_network = external
router:external = true
shared = false
shared_network_name, a network that we have created in FIWARE to be used by all tenants (projects). This means that you should configure this network in Neutron with the following values:
provider:physical_network = (no value)
router:external = false
shared = true
test_object_storage, this parameter specifies whether you want to check the Object Storage functionality.
I think that if you create those networks in the region, you can use them without problems in your OpenStack instance.
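For reference, networks matching those attributes could be created with the neutron CLI along these lines (the network names are placeholders, and the provider network type depends on your deployment):
neutron net-create ext-net --router:external=True --provider:network_type flat --provider:physical_network external
neutron net-create shared-net --shared
The resulting names would then go into etc/settings.json. A rough sketch, since the exact layout may differ between Sanity Checks versions, so check it against your own file:
"region_configuration": {
    "external_network_name": "ext-net",
    "shared_network_name": "shared-net",
    "test_object_storage": true
}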
I would like to write the Airflow logs to S3. The following are the parameters that we need to set according to the docs:
remote_logging = True
remote_base_log_folder =
remote_log_conn_id =
If Airflow is running in AWS, why do I have to pass the AWS keys? Shouldn't the boto3 API be able to write/read to S3 if the correct permissions are set on the IAM role attached to the instance?
Fair point, but I think it allows for more flexibility if Airflow is not running on AWS, or if you want to use a specific set of credentials rather than give the entire instance access. It might also have been an easier implementation, because the underlying code for writing logs to S3 uses the S3Hook (https://github.com/apache/airflow/blob/1.10.9/airflow/utils/log/s3_task_handler.py#L47), which requires a connection id.
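That said, if you are running on EC2 with an instance profile, you should still be able to avoid hard-coding keys: create an aws-type connection with the login and password left empty, and boto3 falls back to its default credential chain, which picks up the IAM role. A sketch of the airflow.cfg entries, with a hypothetical bucket name:
remote_logging = True
remote_base_log_folder = s3://my-airflow-logs
remote_log_conn_id = aws_default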
Is it possible to scope OpenStack CLI output to list networks for only a single project? I have tried multiple options like --os-project-id, --os-project-name, etc., but it seems to list all networks across multiple projects/tenants.
Currently, the command I am using is:
openstack network list --os-username XXX --os-password YYY --os-project-id ZZZ
Note: the credentials that I am using here are for an 'admin' account.
The parameters set in the environment are:
OS_PROJECT_ID=XXX
OS_REGION_NAME=XXX
OS_TENANT_ID=XXX
OS_USER_DOMAIN_NAME=XXX
OS_PROJECT_NAME=XXX
OS_AUTH_VERSION=XXX
OS_IDENTITY_API_VERSION=XXX
OS_PASSWORD=XXX
OS_AUTH_URL=XXX
OS_USERNAME=XXX
OS_TENANT_NAME=XXX
OS_INTERFACE=XXX
OS_PROJECT_DOMAIN_NAME=XXX
Maybe your networks are shared by all tenants. If you only have a few networks, you can verify with neutron net-show Network-Name and review the shared attribute.
BTW, I use the env variable OS_PROJECT_NAME to switch between projects.
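For example, assuming a network named my-net (a placeholder), either client will show that attribute:
neutron net-show my-net
openstack network show my-net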
Without any explicit filter specified in the parameters, Neutron's network API returns all networks that the user accessing the API has privileges to list. The recommended way to scope down the list of networks to a specific project is to explicitly specify that filter.
Via CLI, you can scope the list to a specific project "demo" using the following example:
openstack network list --project demo
You can see more filtering options via the help text:
openstack help network list
The issues were caused by an older version of the OpenStack CLI, v3.7.0.
Using OpenStack CLI v3.13.0, I was able to solve my requirement. By default, with the domain admin account, the CLI still dumped the entire network list, but with the --long flag the 'Project' field was now populated and I could filter the results for the specific project.
This was not the case with the previous CLI versions, where even with the '--long' flag all the values of 'Project' were None.
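With a recent client you can also trim the output to the relevant columns, e.g.:
openstack network list --long -c ID -c Name -c Project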
I've been trying to provision a two-node-type Service Fabric cluster using ARM. The secondary node type (backend) should not be exposed to the internet. For that, I've created a load balancer with an internal IP address.
Everything gets provisioned correctly, but I cannot get the nodes added to the cluster. When I open the cluster in the Azure portal, it says it has no nodes, even though it has the node types configured.
I have even tried downloading the template produced by the Azure portal after creating a Service Fabric cluster. I have also executed one of the templates provided on GitHub, and I still cannot see any nodes in the cluster.
Any suggestions on what I could be missing?
Thanks
Glad to hear you got that sorted. Regarding your follow-up question about deploying to the backend node types, that's where you'd use placement constraints. When you create clusters in Azure through ARM, it automatically sets up a placement property on each node using the node type name you defined. So on your backend nodes, assuming your node type is called "backendnode", you'll have the following placement policy defined:
NodeTypeName: backendnode
When you deploy your services, just use that as your placement constraint:
New-ServiceFabricService -ApplicationName "fabric:/myapp" -ServiceName "fabric:/myapp/myservice" -ServiceTypeName "myservicetype" -Stateful -MinReplicaSetSize 2 -TargetReplicaSetSize 3 -PartitionSchemeSingleton -PlacementConstraint "NodeTypeName == backendnode"
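If you'd rather bake the constraint into the service itself instead of passing it at creation time, the same expression can also be declared in the service manifest. A minimal sketch, reusing the "myservicetype" name from above:
<StatefulServiceType ServiceTypeName="myservicetype" HasPersistedState="true">
  <PlacementConstraints>NodeTypeName == backendnode</PlacementConstraints>
</StatefulServiceType>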
I need to keep track of instance creation in OpenStack Nova.
That is, I need to perform some special operations when a new instance is created in OpenStack.
For that, I need to know where all the details are stored (in a log file).
Please can someone guide me regarding the log file for tracking instance creation, or some other way to track it.
As far as I am aware, you have to look in the following services' log files:
nova-scheduler (often installed on the controller node). This will show which 'server' will host the newly created virtual machine.
The logs of the nova-compute service running on the host where the virtual machine was instantiated.
You can additionally check the logs of qemu and libvirt (again, on the host where the virtual machine was instantiated).
Keep in mind that the info you will find there depends on the logging level you have set in each service's configuration file. For more information about configuring logging for the OpenStack components, refer to the official documentation, "Logging and Monitoring".
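For example, once you have the instance UUID, you can grep for it in the relevant logs (the paths below are the typical defaults and may differ on your distribution):
grep <instance-uuid> /var/log/nova/nova-scheduler.log
grep <instance-uuid> /var/log/nova/nova-compute.log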
I am using the .NET SDK for AWS and trying to create a service that can create/manage instances. As part of this, I want to create an EC2 instance in a specific VPC (non-default). There may be more than one VPC in a region, and I want to be able to programmatically create/manage instances in any of the VPCs rather than just the default VPC.
Is this possible? If yes, how? I looked through the API docs and could not find a way to specify the VPC at the time of creation of an EC2 instance.
The VPC appears to be implied by the subnet-id that you specify. If this doesn't get you there, it might at least get you an error message explaining what you've missed.
http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/PEC2Instance_SubnetId_NET4_5.html
http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/TEC2RunInstancesRequest_NET4_5.html
http://docs.aws.amazon.com/AWSSdkDocsNET/latest/DeveloperGuide/run-instance.html
From the underlying REST API:
SubnetId
[EC2-VPC] The ID of the subnet to launch the instance into.
Type: String
Default: None
Required: No
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-RunInstances.html
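A minimal C# sketch under that assumption (the AMI, instance type, and subnet ID are placeholders to replace with your own):
using Amazon.EC2;
using Amazon.EC2.Model;

// Credentials and region are resolved from your app config / credential profile.
var client = new AmazonEC2Client();

var request = new RunInstancesRequest
{
    ImageId = "ami-12345678",            // placeholder AMI
    InstanceType = InstanceType.T2Micro,
    MinCount = 1,
    MaxCount = 1,
    SubnetId = "subnet-0abc1234"         // the subnet determines the VPC
};

var response = client.RunInstances(request);
Because every subnet belongs to exactly one VPC, picking the subnet is how you pick the (non-default) VPC; there is no separate VpcId parameter on the request.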