AWS API: Create instance in a non-default VPC - asp.net

I am using the .NET SDK for AWS and am trying to create a service that can create/manage instances. As part of this I want to create an EC2 instance in a specific VPC (non-default). There may be more than one VPC in a zone, and I want to be able to programmatically create/manage instances in any of the VPCs rather than just the default VPC.
Is this possible? If yes, how? I looked through the API docs and could not find a way to specify the VPC at the time of creation of an EC2 instance.

The VPC appears to be implied by the subnet-id that you specify. If this doesn't get you there, it might at least get you an error message explaining what you've missed.
http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/PEC2Instance_SubnetId_NET4_5.html
http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/TEC2RunInstancesRequest_NET4_5.html
http://docs.aws.amazon.com/AWSSdkDocsNET/latest/DeveloperGuide/run-instance.html
From the underlying REST API:
SubnetId
[EC2-VPC] The ID of the subnet to launch the instance into.
Type: String
Default: None
Required: No
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-RunInstances.html
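To make that concrete, here is a minimal sketch using the synchronous .NET SDK client (the AMI and subnet ids are placeholders, and the exact response shape may differ slightly between SDK versions):
using System;
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

var ec2 = new AmazonEC2Client(RegionEndpoint.USEast1);
var request = new RunInstancesRequest
{
    ImageId = "ami-12345678",            // placeholder AMI id
    InstanceType = InstanceType.T2Micro,
    MinCount = 1,
    MaxCount = 1,
    // The subnet determines the VPC: pick a subnet that belongs
    // to the target (non-default) VPC.
    SubnetId = "subnet-12345678"         // placeholder subnet id
};
var response = ec2.RunInstances(request);
Console.WriteLine(response.Reservation.Instances[0].InstanceId);
Omitting SubnetId is what lands the instance in the default VPC, so specifying a subnet in the VPC you want is the whole trick.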

Related

VPC creation problem in aws via terraform

I have been trying to create VPC infrastructure in AWS through Terraform, but I am unable to run the "terraform apply" command. Has anyone had a similar problem while using a free trial account?
Error: Error creating VPC: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: 4HZVo3-eWCS-YLhRy55P_0T13F_fPtA29TYrJrSe5_dyPxIcqRbh7_wCcrCZr2cpmb-B5--_fxVaOngBfHD_7yfnPH7NLf1rrqpb7ge1mvQrK8P0Ltfpgpm37nZXezZUoYf1t4peB25aCxnbfeboHpgJjcFnHvqvf5so5G2PufnGZSB4FUZMfdaqppnJ-sNT7b36TonHUDNbLhBVUl5Fwd8d02R-6ZraRYvDx-o4lDfP9xSWs6PMUFXNr1qzruYaeMYMxIe-9kGOQptgBLYZXsxr966ajor-p6aLJAKlIwPGN7Iz7v893oGpGgz_8wxTv4oEb5GnfYOuPOqSyEMLKI69b2JUvVU1m4tCcjKBaHJARP5sIiFSGhh4lb_E0_cKkmmFfKzyET2h8YkSD8U9Lm4rRtGbAEJvIoDZYDkNxlW7W2XvsccmLnQFeSxpLolVhguExkP7DT9uXffJzFEjQn-VkhqKnWlwv0vxIcOcoLP04Li5WAqRRr3l7yK2bYznfg
│ status code: 403, request id: 5c297a4d-7bcf-4bb4-b311-37480e1f26b8
Make sure you have properly set up your AWS credentials and permissions.
check these two files
~/.aws/credentials
~/.aws/config
These docs can help you:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
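For reference, ~/.aws/credentials is a plain INI file; a minimal sketch with placeholder values:
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>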
Did you configure your access keys?
provider "aws" {
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
There are multiple ways to do it (described here).
My example above can be a good start, but you don't want to commit those keys, so I recommend configuring them in ~/.aws/credentials (as you need them for the AWS CLI anyway). The aws provider will pick them up automatically, so you don't need to define them anywhere in your Terraform code.
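As a side note on the error itself: the long encoded blob in an UnauthorizedOperation error can be decoded with the AWS CLI (this requires the sts:DecodeAuthorizationMessage permission), and the decoded message tells you exactly which action or policy denied the request:
aws sts decode-authorization-message --encoded-message <encoded-message-from-the-error>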

Airflow logs in s3 bucket

I would like to write the Airflow logs to S3. The following are the parameters that we need to set, according to the docs:
remote_logging = True
remote_base_log_folder =
remote_log_conn_id =
If Airflow is running in AWS, why do I have to pass the AWS keys? Shouldn't the boto3 API be able to write/read to S3 if the correct permissions are set on the IAM role attached to the instance?
Fair point, but I think it allows for more flexibility if Airflow is not running on AWS, or if you want to use a specific set of credentials rather than give the entire instance access. It might also have been an easier implementation, because the underlying code for writing logs into S3 uses the S3Hook (https://github.com/apache/airflow/blob/1.10.9/airflow/utils/log/s3_task_handler.py#L47), which requires a connection id.
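For what it's worth, a minimal airflow.cfg sketch for the 1.10 layout (the bucket name and connection id below are made up); if you create the S3 connection with empty credentials, the S3Hook falls back to boto3's default credential chain, which does include the instance's IAM role:
[core]
remote_logging = True
# made-up bucket/prefix - replace with your own
remote_base_log_folder = s3://my-airflow-logs/logs
# an Airflow connection of type S3; leave login/password empty
# to fall back to the instance profile
remote_log_conn_id = my_s3_conn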

Openstack CLI does not honour project scope

Is it possible to scope OpenStack CLI output, when listing networks, to a single project? I have tried multiple options like --os-project-id, --os-project-name, etc., but it seems to list all networks across multiple projects/tenants.
Currently, the command I am using is:
openstack network list --os-username XXX --os-password YYY --os-project-id ZZZ
Note: The credentials that I am using here are of an 'admin' account
Parameters set in the environment are :
OS_PROJECT_ID=XXX
OS_REGION_NAME=XXX
OS_TENANT_ID=XXX
OS_USER_DOMAIN_NAME=XXX
OS_PROJECT_NAME=XXX
OS_AUTH_VERSION=XXX
OS_IDENTITY_API_VERSION=XXX
OS_PASSWORD=XXX
OS_AUTH_URL=XXX
OS_USERNAME=XXX
OS_TENANT_NAME=XXX
OS_INTERFACE=XXX
OS_PROJECT_DOMAIN_NAME=XXX
Maybe your networks are shared by all tenants. If you only have a few networks, you can verify with neutron net-show <network-name> and review the shared attribute.
BTW, I use the env variable OS_PROJECT_NAME to switch between projects.
Without any explicit filter specified in the parameters, Neutron's network API returns all networks that the user accessing the API has privileges to list. The recommended way to scope down the list of networks to a specific project is to explicitly specify that filter.
Via CLI, you can scope the list to a specific project "demo" using the following example:
openstack network list --project demo
You can see more filtering options via the help text:
openstack help network list
The issues were caused by an older version of the OpenStack CLI (v3.7.0).
Using OpenStack CLI version v3.13.0, I was able to meet my requirement. By default, with the domain admin account, the CLI still dumped the entire network list, but with the --long flag the 'Project' field was now populated, and I could filter the results for the specific project.
This was not the case with previous CLI versions: with the '--long' flag, all the values of 'Project' were none.
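For example, on a recent client you can combine --long with column selection to check which project each network belongs to (column names may differ slightly across client versions):
openstack network list --long -c ID -c Name -c Project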

Provision 2 node-type Service Fabric ARM

I've been trying to provision a 2-node-type Service Fabric cluster using ARM. The secondary node type (backend) should not be exposed to the internet. For that, I've created a load balancer with an internal IP address.
Everything gets provisioned correctly but I cannot get the nodes added to the cluster. From the Azure portal when I open the cluster it says it has no nodes in it even though it has the node types configured.
I have even tried downloading the template produced by the Azure portal after creating a Service Fabric cluster. I have also executed one of the templates provided on GitHub, and I still cannot see any nodes in the cluster.
Any suggestion what I could be missing?
Thanks
Glad to hear you got that sorted. Regarding your follow-up question on deploying to the backend node-types, that's where you'd use placement constraints. When you create clusters in Azure through ARM, it automatically sets up a placement property on each node using the node type name you defined. So on your back-end nodes, assuming your node type is called "backendnode" you'll have the following placement policy defined:
NodeTypeName: backendnode
When you deploy your services, just use that as your placement constraint:
New-ServiceFabricService -ApplicationName "fabric:/myapp" -ServiceName "fabric:/myapp/myservice" -ServiceTypeName "myservicetype" -Stateful -MinReplicaSetSize 2 -TargetReplicaSetSize 3 -PartitionSchemeSingleton -PlacementConstraint "NodeTypeName == backendnode"
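Alternatively, the same constraint can be declared in the service manifest so it is applied on every deployment; a sketch, reusing the hypothetical "myservicetype" and "backendnode" names from above:
<ServiceTypes>
  <StatefulServiceType ServiceTypeName="myservicetype" HasPersistedState="true">
    <PlacementConstraints>(NodeTypeName == backendnode)</PlacementConstraints>
  </StatefulServiceType>
</ServiceTypes>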

Instance creation in Openstack Nova - Logfile

I need to keep track of instance creation in OpenStack Nova.
That is, I need to perform some special operations whenever a new instance is created in OpenStack.
For that, I need to know where all the details get stored (in which log file).
Could someone please guide me to the log file for tracking instance creation, or to some other way to track the same?
As far as I am aware, you have to look in the following services' log files:
nova-scheduler (often installed on the controller node). This will show which 'server' will host the newly created Virtual Machine.
The logs of the nova-compute service running on the host where the Virtual Machine was instantiated.
You can additionally check the logs of qemu and libvirt (again, on the host where the Virtual Machine was instantiated).
Keep in mind that the info you will find there depends on the 'logging level' you have set in each service's configuration file. For more information about how to configure OpenStack component logging, refer to the official documentation, "Logging and Monitoring".
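In practice, the quickest way to follow one instance through those files is to grep for its UUID; a sketch, assuming the common default log locations under /var/log/nova/:
# on the controller node: which host was selected
grep <instance-uuid> /var/log/nova/nova-scheduler.log
# on the selected compute host: the actual build steps
grep <instance-uuid> /var/log/nova/nova-compute.log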
