Wait Condition Heat/CloudFormation and instance startup order - openstack

I have an OpenStack Heat template which borrows heavily from the CloudFormation parameters, which is why I added the CF tag.
My template contains two instances which should be started (or at least configured through user-data) in a specific order. I thought I would use a WaitCondition to make that happen, but it seems it doesn't fully work, or at least doesn't do what I expect.
Here's a snippet:
resources:
  first:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: fedora19
      flavor: { get_param: instance_type }
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            [configuration code here]
            curl -X PUT -H 'Content-Type:application/json' -d '{"Status" : "SUCCESS", "Data" : "Application has completed configuration."}' "$wait_handle$"
          params:
            $wait_handle$: { get_resource: first_wait_handle }
  first_wait_handle:
    type: AWS::CloudFormation::WaitConditionHandle
  first_wait:
    type: AWS::CloudFormation::WaitCondition
    depends_on: first
    properties:
      Handle:
        get_resource: first_wait_handle
      Timeout: 1000
  second:
    type: OS::Nova::Server
    depends_on: first_wait
    properties:
      key_name: { get_param: key_name }
      image: fedora19
      flavor: { get_param: instance_type }
      user_data: |
        #!/bin/bash
        [configuration code 2]
Currently, the stack correctly stays in the "create in progress" state for as long as Heat hasn't received the curl signal back. The problem is that the 'second' instance is created as soon as the stack is launched, and its configuration runs immediately.
I added a depends_on to the 'second' instance, but it appears to have no effect (or, again, not the effect I expected).
Is it possible to enforce this instance startup order with Heat/CloudFormation? What am I missing?
Thanks!

Read the blog post linked here; it gives a correct explanation for your question. This functionality doesn't actually work as expected, but there is a workaround you can make use of.
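I can't say exactly which workaround the blog describes, but one commonly used shape, assuming a Heat version that ships the native OS::Heat:: wait-condition resources, is to let Heat render the signalling command itself via the handle's curl_cli attribute:
resources:
  first_wait_handle:
    type: OS::Heat::WaitConditionHandle
  first_wait:
    type: OS::Heat::WaitCondition
    properties:
      handle: { get_resource: first_wait_handle }
      timeout: 1000
  first:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: fedora19
      flavor: { get_param: instance_type }
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            [configuration code here]
            # curl_cli expands to a ready-made curl command that signals the handle
            $signal$
          params:
            $signal$: { get_attr: [first_wait_handle, curl_cli] }
  second:
    type: OS::Nova::Server
    depends_on: first_wait
    properties:
      key_name: { get_param: key_name }
      image: fedora19
      flavor: { get_param: instance_type }
      user_data: |
        #!/bin/bash
        [configuration code 2]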

I'm not at all familiar with Heat templates, but the thing that immediately stood out is your curl command.
Change the curl command:
curl -X PUT -H 'Content-Type:' --data-binary '{"Status" : ....
I've had all sorts of problems signalling AWS with what is ostensibly JSON; it doesn't accept a header of Content-Type: application/json.
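For reference, the general shape AWS expects for a wait-condition signal (the Reason and UniqueId values below are illustrative) is:
curl -X PUT -H 'Content-Type:' \
  --data-binary '{"Status": "SUCCESS", "Reason": "Configuration complete", "UniqueId": "first-0001", "Data": "Application has completed configuration."}' \
  "$wait_handle$"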

There seems to be a mismatch between the last deployed API and the API actually in use

According to Google Cloud Console > Endpoints > Services > Deployment History this is the currently deployed API spec:
swagger: "2.0"
info:
title: "JSON Ingester"
description: "Receive JSON files, transform and load them."
version: "1.0.0"
host: "project-id-123.appspot.com"
schemes:
- "https"
paths:
"/upload":
post:
summary: "ETL JSON file."
security:
- api_key: []
operationId: "upload"
consumes:
- multipart/form-data
parameters:
- in: formData
name: file
type: string
responses:
200:
description: "File uploaded."
schema:
type: string
400:
description: "Error during file upload."
securityDefinitions:
api_key:
type: "apiKey"
name: "apikey"
in: "query"
But the key "apikey" is not accepted; instead, the service requires "key", which was specified in an openapi.yaml that I deployed a few hours ago.
This works, while it shouldn't:
$ curl -X POST -F "file=@data/file_6.json" https://project-id-123.appspot.com/upload\?key\=AIzaS...Eaoog
And this doesn't work while it should:
$ curl -X POST -F "file=@data/file_6.json" https://project-id-123.appspot.com/upload\?apikey\=AIzaS...Eaoog
{
  "code": 16,
  "message": "Method doesn't allow unregistered callers (callers without established identity). Please use API Key or other form of API consumer identity to call this API.",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "service_control"
    }
  ]
}
Do I have to clear a cache or something?
For deploying the API I use:
gcloud endpoints services deploy "./openapi.yaml"
Any ideas?
What rollout_strategy did you use when you deployed ESP? If not specified, the default is "fixed"; you should use "managed".
Please also check the generated service config with the CLI command "gcloud endpoints configs describe". Check its system_parameters field to see whether your new "apikey" parameter was created properly.
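If you are running ESP on App Engine flexible (an assumption here), the rollout strategy is set in app.yaml; a minimal sketch, reusing the service name from the spec above:
endpoints_api_service:
  name: "project-id-123.appspot.com"
  rollout_strategy: managed
With "managed", ESP automatically picks up the latest deployed service configuration instead of staying pinned to a fixed config ID.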

mup setup : Error: Timed out while waiting for handshake

The problem is that when I try to run the command mup setup,
I get the following error. Where am I going wrong?
Started TaskList: Setup Docker
[54.186.xx.xxx] - Setup Docker
events.js:183
throw er; // Unhandled 'error' event
My mup.js file looks like this:
module.exports = {
  servers: {
    one: {
      host: '54.186.xx.xxx',
      username: 'ubuntu',
      pem: '~/.ssh/mypem.pem'
    }
  },
  app: {
    name: 'myapp',
    path: '/var/www/meteor/myapp',
    servers: {
      one: {},
    },
    buildOptions: {
      serverOnly: true,
    },
    env: {
      ROOT_URL: 'http://ec2-54-186-xx-xxx.us-west-2.compute.amazonaws.com',
      MONGO_URL: 'mongodb://127.0.0.1:27017/myapp',
      PORT: 3027,
    },
    docker: {
      image: 'abernix/meteord:node-8.4.0-base',
    },
    deployCheckWaitTime: 60,
    enableUploadProgressBar: true
  },
  mongo: {
    oplog: true,
    port: 27017,
    version: '3.4.1',
    servers: {
      one: {}
    }
  }
};
Meteor version is 1.6.
Thanks in advance!
Nothing looks wrong with your mup.js file.
The problem may be that you cannot SSH in from your current IP address. For instance, if you are using AWS, make sure that the security groups grant your current IP address access.
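If you prefer the AWS CLI over the console, a sketch of opening SSH in the instance's security group (the group ID and CIDR below are placeholders):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 203.0.113.0/24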
Not sure what is happening exactly, but there are a few potential issues:
deployCheckWaitTime: 60,
You could make this longer, e.g. 90 or 120, to give it more time to deploy (in case that is the problem).
path: '/var/www/meteor/myapp',
This might be the cause of the problem. Usually it is a relative path to the app's source code, not to where you deploy it on the server, so typically it is something like ../app
ROOT_URL: 'http://ec2-54-186-xx-xxx.us-west-2.compute.amazonaws.com',
Presumably you intend to use something like http://myapp.com/ for your app; that's what should go here.
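Putting those suggestions together, the relevant parts of mup.js would look something like this (myapp.com is a placeholder for your real domain):
app: {
  name: 'myapp',
  // Relative path from this config to the app source, not the deploy target on the server.
  path: '../app',
  env: {
    // The public URL the app will be reached at.
    ROOT_URL: 'http://myapp.com',
    MONGO_URL: 'mongodb://127.0.0.1:27017/myapp',
    PORT: 3027,
  },
  // Allow more time before the deploy check gives up.
  deployCheckWaitTime: 120,
},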
In the security groups, the SSH source rule was "My IP", which I changed to "Anywhere"; then I created an Elastic IP and bound it to the instance. Now I can connect and log in.
One can use this link to get help.
Check your host IP.
I had the same issue, and changing the host IP fixed it for me.
The IP changes when you restart your EC2 instance.

Prevent creating multiple connections in RabbitMQ while using RabbitMQ Bundle for Symfony2

I'm using the RabbitMQ Bundle for the Symfony2 web framework. My question is: how can I avoid creating multiple connections (to prevent overloading the broker) after running many workers in the terminal? In the example below, I've run two workers and ended up with two connections/channels.
config.yml
old_sound_rabbit_mq:
  connections:
    default:
      host: 127.0.0.1
      port: 5672
      user: guest
      password: guest
      vhost: /
      lazy: true
  producers:
    order_create_bmw:
      connection: default
      exchange_options: { name: order_create_ex, type: direct }
      queue_options:
        name: order_create_bmw_qu
        routing_keys:
          - bmw
  consumers:
    order_create_bmw:
      connection: default
      exchange_options: { name: order_create_ex, type: direct }
      queue_options:
        name: order_create_bmw_qu
        routing_keys:
          - bmw
      callback: application_frontend.consumer.order_create_bmw
services.yml
services:
  application_frontend.producer.order_create_bmw:
    class: Application\FrontendBundle\Producer\OrderCreateBmwProducer
    arguments:
      - '@old_sound_rabbit_mq.order_create_bmw_producer'
Producer
namespace Application\FrontendBundle\Producer;

use Application\FrontendBundle\Entity\Order;
use OldSound\RabbitMqBundle\RabbitMq\ProducerInterface;

class OrderCreateBmwProducer
{
    private $producer;

    public function __construct(ProducerInterface $producer)
    {
        $this->producer = $producer;
    }

    public function add(Order $order)
    {
        $message = [
            'order_id' => $order->getId(),
            'car_model' => $order->getCarModel(),
            'timestamp' => date('Y-m-d H:i:s')
        ];

        $this->producer->publish(json_encode($message), 'bmw');
    }
}
Running workers
$ app/console rabbitmq:consumer order_create_bmw
$ app/console rabbitmq:consumer order_create_bmw
[RabbitMQ Management screenshot showing the two resulting connections]
Every client (regardless of whether it is a publisher or a subscriber) that connects to RabbitMQ will create a connection. Aside from using fewer clients, I can't think of any other way to achieve this. I also can't think of a reason to do so :) If it's about performance, then actually having more subscribers will help to "empty" the exchanges (and queues).
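That said, if the concern is the number of connections per machine rather than throughput, the bundle's multiple_consumers section lets one worker process serve several queues over a single connection. A sketch, assuming a bundle version that supports it (the order_create group name is made up here):
old_sound_rabbit_mq:
  multiple_consumers:
    order_create:
      connection: default
      exchange_options: { name: order_create_ex, type: direct }
      queues:
        order_create_bmw_qu:
          name: order_create_bmw_qu
          routing_keys:
            - bmw
          callback: application_frontend.consumer.order_create_bmw
You would then run a single worker with app/console rabbitmq:multiple-consumer order_create instead of one consumer per queue.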

LiipImagineBundle: change the path where the images are saved after applying the filter

I have this config:
liip_imagine:
  resolvers:
    default:
      web_path: ~
  filter_sets:
    cache: ~
    subitem_in_category:
      path: ~ # how to change the default path where the images are saved?
      filters:
        my_custom_filter: { }
        relative_resize: { heighten: 210 }
I'm trying to change the name of the directory where the images are saved, but I get:
InvalidConfigurationException: Unrecognized options "path" under
"liip_imagine.filter_sets.subitem_in_category"
I have read this: https://github.com/liip/LiipImagineBundle/blob/master/Resources/doc/configuration.md
This feature was removed some time ago due to bad design around component dependencies. See this pull request for further motivation.
To get the same behaviour, I suggest configuring several resolvers, as described here:
liip_imagine:
  resolvers:
    foo:
      web_path:
        cache_prefix: foo
    bar:
      web_path:
        cache_prefix: bar
  filter_sets:
    foo:
      cache: foo
    bar:
      cache: bar
Otherwise, you can use an old branch of the bundle.
Hope this helps!

Connect Doctrine to memcached pool

Does anybody know how to connect Doctrine to a memcached pool, to use it as a cache driver?
I've checked the official bundle documentation and a lot of other sources, but didn't find any examples of such a connection.
Also, going by the source code, I could not find any option for using a pool, but perhaps I missed something.
Didn't test, but the following should work:
in app/config/parameters.yml, set/add
parameters:
  memcached.servers:
    - { host: 127.0.0.1, port: 11211 }
    - { host: 127.0.0.2, port: 11211 }
in app/config/config.yml set/add
services:
  memcached:
    # class 'Memcache' or 'Memcached', depending on which PHP module you use
    class: Memcache
    calls:
      - [ addServers, [ '%memcached.servers%' ] ]
  doctrine.cache.memcached:
    class: Doctrine\Common\Cache\MemcachedCache
    calls:
      - [ setMemcached, [ '@memcached' ] ]
in app/config/config_prod.yml, set
doctrine:
  orm:
    metadata_cache_driver:
      type: service
      id: doctrine.cache.memcached
    query_cache_driver:
      type: service
      id: doctrine.cache.memcached
    result_cache_driver:
      type: service
      id: doctrine.cache.memcached
As I said, I can't test it, but this is the combination of several known-to-work techniques.
UPDATE: solution updated based on CrazySquirrel's findings.
Thanks lxg for your ideas. I've built the right configuration using them. Please find the correct service definition below:
application config:
result_cache_driver:
  type: service
  id: doctrine.cache.memcached
service.yml
memcached:
  class: Memcached
  calls:
    - [ addServers, [ '%memcached_servers%' ] ]
doctrine.cache.memcached:
  class: Doctrine\Common\Cache\MemcachedCache
  calls:
    - [ setMemcached, [ '@memcached' ] ]
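As a quick usage sketch (assuming Doctrine ORM 2.x, an EntityManager $em, and a hypothetical AppBundle\Entity\Order entity), queries can then opt in to the memcached-backed result cache:
// Cache the hydrated result in memcached for one hour under the key 'orders_all'.
$orders = $em->createQuery('SELECT o FROM AppBundle\Entity\Order o')
    ->useResultCache(true, 3600, 'orders_all')
    ->getResult();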
