Cloud Foundry - Installing Micro BOSH in a VM (OpenStack)

I have followed the instructions on http://cloudfoundry.github.com/docs/running/deploying-cf/openstack/install_microbosh_openstack.html to install Micro BOSH in a VM.
I'm a little confused about the micro_bosh.yml:
name: microbosh-openstack
env:
  bosh:
    password: $6$u/dxDdk4Z4Q3$MRHBPQRsU83i18FRB6CdLX0KdZtT2ZZV7BLXLFwa5tyVZbWp72v2wp.ytmY3KyBZzmdkPgx9D3j3oHaDZxe6F.
level: DEBUG
network:
  name: default
  type: dynamic
  label: private
  ip: 192.168.22.34
resources:
  persistent_disk: 4096
  cloud_properties:
    instance_type: m1.small
cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://10.0.0.2:5000/v2.0/tokens
      username: admin
      api_key: f00bar
      tenant: admin
      default_key_name: admin-keypair
      default_security_groups: ["default"]
      private_key: /root/.ssh/admin-keypair.pem
What is the api_key used for? I don't understand the meaning of this key.
And what is the default key name for?
Can someone please explain these configuration options better?
thanks
Bruno
EDIT
The answer to this question can be found here:
https://github.com/drnic/bosh-getting-started/blob/master/create-a-bosh/creating-a-micro-bosh-from-stemcell-openstack.md

http://10.0.0.2:5000/v2.0/tokens
This likely refers to the Keystone Service API.
This API authenticates you to OpenStack's Keystone identity service. All REST API services are catalogued there in the service catalog. Additionally, all of OpenStack relies on Keystone to authenticate API queries.
Knowing nothing about BOSH, the attribute 'api_key' requires better context for me to interpret.
Generally, OpenStack doesn't require an API key in its own concept of API authentication.
More about OpenStack API authentication here:
http://docs.openstack.org/api/quick-start/content/index.html#Getting-Credentials-a00665
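For illustration, a token request against that endpoint looks roughly like this (a sketch using the values from the manifest above; adjust to your deployment):

# Sketch: authenticating against the Keystone v2.0 tokens endpoint
curl -s -X POST http://10.0.0.2:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "f00bar"}}}'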
However, there is a concept of an API key in relation to EC2 keys. These can be generated with this command:
keystone ec2-credentials-create
My guess is that's what it requires there.
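For reference, its output looks roughly like this (illustrative only; the values are placeholders):

$ keystone ec2-credentials-create
+-----------+----------------------+
|  Property |        Value         |
+-----------+----------------------+
|   access  | <access-key>         |
|   secret  | <secret-key>         |
| tenant_id | <tenant-id>          |
|  user_id  | <user-id>            |
+-----------+----------------------+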
More alternatives:
Credentials could be in the novarc file generated for your OpenStack project with the nova-manage project zipfile command. This is also available from the Horizon interface.
Alternatively, it could refer to a provider-specific API key such as Rackspace's (I doubt this):
http://docs.rackspace.com/servers/api/v2/cs-devguide/content/curl_auth.html
'default_key_name' probably refers to the name of a keypair that has been previously registered with OpenStack. This would be a keypair that can be injected into an image at instance run time. It should correspond to the .pem filename. The key would need to be available to the user and tenant you choose in the config.
Check out a keypair creation / use example here:
http://docs.openstack.org/developer/nova/runnova/managing.instances.html
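For example, the keypair and .pem file referenced in the manifest above could have been created along these lines (a sketch using the standard nova client; the name matches default_key_name in the config):

# Register a keypair and save its private key; the name must match
# default_key_name in micro_bosh.yml.
nova keypair-add admin-keypair > /root/.ssh/admin-keypair.pem
chmod 600 /root/.ssh/admin-keypair.pem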
Best effort man. Hope that gives you what you need.

Related

VPC creation problem in aws via terraform

I have been trying to create VPC infrastructure in AWS through Terraform, but I am unable to run the "terraform apply" command. Has anyone had a similar problem while using a free trial account?
Error: Error creating VPC: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: 4HZVo3-eWCS-YLhRy55P_0T13F_fPtA29TYrJrSe5_dyPxIcqRbh7_wCcrCZr2cpmb-B5--_fxVaOngBfHD_7yfnPH7NLf1rrqpb7ge1mvQrK8P0Ltfpgpm37nZXezZUoYf1t4peB25aCxnbfeboHpgJjcFnHvqvf5so5G2PufnGZSB4FUZMfdaqppnJ-sNT7b36TonHUDNbLhBVUl5Fwd8d02R-6ZraRYvDx-o4lDfP9xSWs6PMUFXNr1qzruYaeMYMxIe-9kGOQptgBLYZXsxr966ajor-p6aLJAKlIwPGN7Iz7v893oGpGgz_8wxTv4oEb5GnfYOuPOqSyEMLKI69b2JUvVU1m4tCcjKBaHJARP5sIiFSGhh4lb_E0_cKkmmFfKzyET2h8YkSD8U9Lm4rRtGbAEJvIoDZYDkNxlW7W2XvsccmLnQFeSxpLolVhguExkP7DT9uXffJzFEjQn-VkhqKnWlwv0vxIcOcoLP04Li5WAqRRr3l7yK2bYznfg
│ status code: 403, request id: 5c297a4d-7bcf-4bb4-b311-37480e1f26b8
Make sure you have properly set up your AWS credentials and permissions.
Check these two files:
~/.aws/credentials
~/.aws/config
These docs can help you:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
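For reference, those two files typically look like this (a minimal sketch; the values are placeholders):

# ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

# ~/.aws/config
[default]
region = us-west-2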
Did you configure your access keys?
provider "aws" {
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
There are multiple ways to do it (described here).
My example above can be a good start, but you don't want to commit those keys, so I recommend configuring them in ~/.aws/credentials (just as you need them for the AWS CLI). The aws provider will pick them up automatically, so you don't need to define them anywhere in your Terraform code.
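With the keys in ~/.aws/credentials, the provider block shrinks to just the region:

provider "aws" {
  region = "us-west-2"
}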

BAD_GATEWAY when connecting Google Cloud Endpoints to Cloud SQL

I am trying to connect from GCP endpoints to a Cloud SQL (PostgreSQL) database in a different project. My endpoints backend is an app engine in the flexible environment using Python.
The endpoints API works fine for non-db requests and for db requests when run locally. But the deployed API produces this result when requiring DB access:
{
  "code": 13,
  "message": "BAD_GATEWAY",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
I've followed this link (https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine) to create the endpoints project, and this (https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql-postgres) to link to Cloud SQL from a different project.
The one difference is that I don't use the SQLALCHEMY_DATABASE_URI env variable to connect, but take the connection string from a config file and use it with psycopg2 SQL strings. This code works on Compute Engine servers in the same project.
I've also double-checked that the project with the PostgreSQL db gave Cloud SQL Editor access to the service account of the Endpoints project. And the db connection string works fine if the app engine is in the same project as the Cloud SQL db (i.e. not coming from the endpoints project).
Not sure what else to try. How can I get more details on the BAD_GATEWAY? That's all that's in the endpoints logfile and there's nothing in the Cloud SQL logfile.
Many thanks --
Dan
Here's my app.yaml:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
  python_version: 3
env_variables:
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://postgres:password@/postgres?host=/cloudsql/cloudsql-project-id:us-east1:instance-id
beta_settings:
  cloud_sql_instances: cloudsql-project-id:us-east1:instance-id
endpoints_api_service:
  name: api-project-id.appspot.com
  rollout_strategy: managed
And requirements.txt:
Flask==0.12.2
Flask-SQLAlchemy==2.3.2
flask-cors==3.0.3
gunicorn==19.7.1
six==1.11.0
pyyaml==3.12
requests==2.18.4
google-auth==1.4.1
google-auth-oauthlib==0.2.0
psycopg2==2.7.4
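(For concreteness, the psycopg2 connection the question describes would presumably look something like this when using the Cloud SQL unix socket from app.yaml above; a sketch with placeholder values, not the actual code:)

import psycopg2

# Connect over the Cloud SQL unix socket exposed to the flexible
# environment; the path matches beta_settings.cloud_sql_instances above.
conn = psycopg2.connect(
    host='/cloudsql/cloudsql-project-id:us-east1:instance-id',
    dbname='postgres',
    user='postgres',
    password='password',  # placeholder
)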
(This should be a comment, but the formatting would really hurt readability, so I will post and update here.)
I am trying to reproduce your error and I come up with some questions:
How are you handling the variables from the tutorials? Have you hard-coded them, or are you using environment variables? Environment variables are reset when Cloud Shell restarts (if you are using Cloud Shell).
This is not clear to me: do you see any kind of log in Cloud SQL (just without errors), or do you not see any logs at all?
The Cloud SQL, app.yaml, and requirements.txt configurations are related. Could you provide more information on these? If you update the post, be careful not to include usernames, passwords, or other sensitive information.
Are both projects in the same region/zone? Sometimes this is a requirement, but I don't see anything pointing to it in the documentation.
My intuition points to a credentials issue, but it would be useful if you added more information to the post to better understand where the issue comes from.

Multiple applications in the same Symfony2 application

This is quite a long question, but there's quite a lot to it.
It feels like it should be a reasonably common use case, so I'm hoping the Stack Overflow community can provide me with a 'best practice in Symfony2' answer.
The solution I describe below works, but there are several consequences I'd like to avoid:
In my local dev environment, if I have used the wrong db connection, a test will work in dev but fail in production
The routes of the ADMIN API are accessible on the PUBLIC API URL, just denied.
If I have a mirror of live in my dev environment (3 separate checkouts with the corresponding parameters.yml files), then the feature tests for the other bundles fail
Is there a 'best practice in Symfony2' way to set up my project?
We're running a LAMP stack. We use git/(Atlassian) stash for version control.
We're using doctrine for the ORM and FOS-REST with OAuth plus symfony firewalls to authenticate and authorise the users.
We're committed to use Symfony2, so I am trying to find a 'best practice' solution:
I have a project with 3 applications:
A public-facing API (which gives read-only access to the data)
A protected API (which provides admin functionality)
A set of batch processes (to e.g. import data and monitor data quality)
Each application uses a set of shared models.
I have created 4 bundles, one for each application and a 4th for the shared models.
Each application must use a different database user to access the database.
There's only one database.
There are several tables; one is called 'prices'
The admin API must be accessible only from one hostname (e.g. admin-api.server1)
The public API must be accessible only from a different hostname (e.g. public-api.server2)
Each application is hosted on a different server
In parameters.yml in my dev environment I have this
# parameters.yml
api_public_db_user: user1
api_public_db_pass: pass1
api_admin_db_user: user2
api_admin_db_pass: pass2
batch_db_user: user3
batch_db_pass: pass3
In config.yml I have this:
# config.yml
doctrine:
    dbal:
        connections:
            api_public:
                user: "%api_public_db_user%"
                password: "%api_public_db_pass%"
            api_admin:
                user: "%api_admin_db_user%"
                password: "%api_admin_db_pass%"
            batch:
                user: "%batch_db_user%"
                password: "%batch_db_pass%"
In my code I can do this (I believe this can be done from the service container too, but I haven't got that far yet)
$entityManager = $this->getContainer()->get('doctrine')->getManager('api_public');
$entityRepository = $this->getContainer()->get('doctrine')->getRepository('CommonBundle:Price', 'api_admin');
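(For the service-container approach mentioned above, a sketch of injecting a named entity manager; this assumes matching entity_managers are configured under doctrine.orm, which makes Doctrine expose services such as doctrine.orm.api_public_entity_manager. The class and service names are hypothetical.)

# services.yml (hypothetical)
services:
    common.price_service:
        class: CommonBundle\Service\PriceService
        arguments: ["@doctrine.orm.api_public_entity_manager"]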
When I deploy my code to each of the live servers, I put junk values in the parameters.yml for the other applications
# parameters.yml on the public api server
api_public_db_user: user1
api_public_db_pass: pass1
api_admin_db_user: **JUNK**
api_admin_db_pass: **JUNK**
batch_db_user: **JUNK**
batch_db_pass: **JUNK**
I have locked down my application so that the database isn't accessible (and thus the other API features don't work)
I have also set up Symfony firewall security so that the different routes require different permissions
There's also security in the Apache vhost to deny access to, say, the admin API path from the public API directory.
So, I have secured my application and met the requirement of the security audit, but the dev process isn't ideal and something feels wrong.
As background:
We have previously looked at splitting it up into different applications within the same project (like this: Symfony2 multiple applications and api centric application; we actually followed this method: http://jolicode.com/blog/multiple-applications-with-symfony2), but we ran into difficulties, and in any case Fabien says not to (https://groups.google.com/forum/#!topic/symfony-devs/yneojUuFiqw). That this existed in Symfony1 and was removed in Symfony2 is enough of an argument for me.
We have previously gone down the route of splitting up each bundle and importing it using Composer, but this caused too much development overhead (for example, having to modify many repositories to implement a feature, and not being able to see all of the changes for a feature in a single pull request).
We are receiving an ever-growing number of requests to create APIs, and we're similarly worried about putting each application in its own repository.
So, putting each of the three applications in a separate Symfony project / git repository is something we want to avoid too.

Instance creation in Openstack Nova - Logfile

I need to keep track of instance creation in OpenStack Nova.
That is, I need to perform some special operations upon creation of a new instance in OpenStack.
For that, I need to know where all the details get stored (in which log file).
Can someone please guide me to the log file for tracking instance creation, or some other way to track the same?
As far as I am aware, you have to look in the following services' log files:
nova-scheduler (often installed on the controller node). This will show which 'server' will host the newly created Virtual Machine.
The logs of the nova-compute service running on the host where the Virtual Machine was instantiated.
You can additionally check the logs of qemu and libvirt (again, on the host where the Virtual Machine was instantiated).
Bear in mind that the info you will find there depends on the 'logging level' you have set in each service's configuration file. For more information about how you can configure OpenStack component logging, refer to the official documentation, "Logging and Monitoring".
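A quick way to trace a specific instance across these logs (a sketch assuming the default /var/log/nova locations and that you have the instance's UUID):

# On the controller node: which host was the instance scheduled to?
grep <instance-uuid> /var/log/nova/nova-scheduler.log
# On that compute host: the actual instance build
grep <instance-uuid> /var/log/nova/nova-compute.log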

AWS API Create instance in non default VPC

I am using the .NET SDK for AWS and trying to create a service that can create/manage instances. As part of this I want to create an EC2 instance in a specific VPC (non-default). There may be more than one VPC in a region, and I want to be able to programmatically create/manage instances in any of the VPCs rather than just the default VPC.
Is this possible? If yes, how? I looked through the API docs and could not find a way to specify the VPC at the time of creating an EC2 instance.
The VPC appears to be implied by the subnet-id that you specify. If this doesn't get you there, it might at least get you an error message explaining what you've missed.
http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/PEC2Instance_SubnetId_NET4_5.html
http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/TEC2RunInstancesRequest_NET4_5.html
http://docs.aws.amazon.com/AWSSdkDocsNET/latest/DeveloperGuide/run-instance.html
From the underlying REST API:
SubnetId
[EC2-VPC] The ID of the subnet to launch the instance into.
Type: String
Default: None
Required: No
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-RunInstances.html
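Putting that together with the .NET SDK, a launch into a specific VPC would look roughly like this (a sketch; the region, AMI ID, and subnet ID are placeholders):

using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

class LaunchInVpcExample
{
    static void Main()
    {
        // Region is a placeholder; use the region that holds your VPC.
        var ec2 = new AmazonEC2Client(RegionEndpoint.USWest2);

        var request = new RunInstancesRequest
        {
            ImageId = "ami-12345678",            // placeholder AMI ID
            InstanceType = InstanceType.M1Small,
            MinCount = 1,
            MaxCount = 1,
            // Launching into this subnet implicitly selects its (non-default) VPC.
            SubnetId = "subnet-0abc1234"         // placeholder subnet ID
        };

        RunInstancesResponse response = ec2.RunInstances(request);
        foreach (var instance in response.Reservation.Instances)
        {
            System.Console.WriteLine("Launched: " + instance.InstanceId);
        }
    }
}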
