I have created a Heat template which creates networks (flat and VLAN), a subnet, a trunk, ports, and a VM. Now I want to create another VM that shares the flat network/subnet already created, though the VLAN network created on the same network will be different.
Actually it is an all-in-one setup (a single machine), and I would also like to know if there is some concept like a namespace where multiple VMs can run without any conflicts with other VMs?
Another thing: I have to change all the resource names as well, since otherwise it says they already exist. Is there any way for all resource names to be prefixed/suffixed automatically by some provided variable?
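For illustration, the kind of thing I am hoping for is a template where a single parameter is woven into every resource name, roughly like this (a sketch; the `name_prefix` parameter and the port resource are illustrative, not from my actual template):

```yaml
heat_template_version: 2016-04-08
parameters:
  name_prefix:
    type: string
    description: Prefix applied to every named resource
resources:
  flat_port:
    type: OS::Neutron::Port
    properties:
      # str_replace splices the prefix into the resource name
      name:
        str_replace:
          template: PREFIX-flat-port
          params:
            PREFIX: { get_param: name_prefix }
      network: flat-net
```

Passing a different `name_prefix` per stack (or using the built-in `OS::stack_name` pseudo-parameter instead of a user parameter) would avoid the name collisions.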
The problem is I have 3 virtual machines built from the same source image in three different zones of a region. I can't put them in a MIG because each of them has to attach to a specific persistent disk, and from what I researched, I have no control over which VM in the MIG will be attached to which persistent disk (please correct me if I'm wrong). I explored the unmanaged instance group option too, but it only has zonal scope. Is there any way to create a load balancer that works with my VMs, or do I have to build another solution (e.g. NGINX)?
You have two options.
Create an Unmanaged Instance Group. This allows you to set up your VM instances as you want. Unmanaged instance groups are zonal, so you can create one group per zone and use them together as backends.
Creating groups of unmanaged instances
Use a Network Endpoint Group. This supports specifying backends based on the Compute Engine VM instance's internal IP address. You can also specify other endpoint types, such as an Internet address.
Network endpoint groups overview
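As a sketch of the first option (the group, VM, and zone names here are illustrative, not from the question), one zonal unmanaged group per VM can be created and wired into a backend service with gcloud:

```shell
# Unmanaged instance groups are zonal: create one per zone.
gcloud compute instance-groups unmanaged create web-group-b --zone=us-central1-b
gcloud compute instance-groups unmanaged add-instances web-group-b \
    --zone=us-central1-b --instances=web-vm-b

# Repeat for the other two zones, then add each group as a backend.
gcloud compute backend-services add-backend web-backend \
    --global \
    --instance-group=web-group-b \
    --instance-group-zone=us-central1-b
```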
The solution in my case was using a dedicated VM, installing NGINX as a load balancer, and assigning a static IP to each VM in the group. I couldn't get managed or unmanaged instance groups and the managed load balancer to work.
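The NGINX side of that is just an upstream block listing the three static IPs (a sketch; the addresses are illustrative):

```
# /etc/nginx/conf.d/lb.conf
upstream backend {
    server 10.128.0.2;   # VM in zone a (static internal IP)
    server 10.132.0.2;   # VM in zone b
    server 10.140.0.2;   # VM in zone c
}
server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```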
That worked fine, but another solution I found in Qwiklabs was adding all instances to an "instance pool"; maybe in the future I will implement that solution.
I have set up Akka.NET actors running inside an ASP.NET application to handle some asynchronous and lightweight procedures. I'm wondering how Akka scales when I scale out the website on Azure. Let's say that in code I have a single actor to process messages of type FooBar. When I have two instances of the website, is there still a single actor, or are there now two actors?
By default, whenever you call the ActorOf method, you order the creation of a new actor instance. If you call it in two actor systems, you'll end up with two separate actors, having the same relative paths inside their systems but different global addresses.
There are several ways to share an information about actors between actor systems:
When using Akka.Remote you can call actors living in another actor system given their addresses or IActorRefs. Requirements:
You must know the path to a given actor.
You must know the actual address (URL or IP) of an actor system, on which that actor lives.
Both actor systems must be able to communicate with each other via TCP (i.e. open ports on the firewall).
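As a sketch, the classic Akka.NET remoting setup looks like this in HOCON (the hostname, port, and system name are illustrative):

```
akka {
  actor.provider = "Akka.Remote.RemoteActorRefProvider, Akka.Remote"
  remote.helios.tcp {
    hostname = "192.168.0.1"   # address other systems use to reach this one
    port = 8081
  }
}
```

With that in place, a remote actor is addressable by its full path, e.g. akka.tcp://MySystem@192.168.0.1:8081/user/foobar, via ActorSelection.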
When using Akka.Cluster, actor systems (also known as nodes) can form a cluster. They will exchange information about their location in the network, track incoming nodes, and eventually detect dead or unreachable ones. On top of that, you can use higher-level components, e.g. cluster routers. Requirements:
Every node must be able to open a TCP channel to every other node (so again, firewalls etc.)
A new incoming node must know at least one node that is already part of the cluster. This is easily achievable with the pattern known as Lighthouse, or via plugins and third-party services like Consul.
All actor systems in the cluster must share the same system name.
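A minimal cluster configuration sketch (the addresses and the system name MySystem are illustrative) shows all three requirements at once:

```
akka {
  actor.provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  remote.helios.tcp {
    hostname = "192.168.0.5"
    port = 0                 # 0 = pick any free port
  }
  cluster {
    # At least one seed node must already be up (e.g. a Lighthouse node);
    # the system name in the address must match this node's system name.
    seed-nodes = ["akka.tcp://MySystem@192.168.0.10:4053"]
  }
}
```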
Finally, when using the cluster configuration you can make use of Akka.Cluster.Sharding - it's essentially a higher-level abstraction over an actor's location inside the cluster. When using it, you don't need to explicitly tell where to find or when to create an actor. Instead, all you need is a unique actor identifier. When sending a message to such an actor, it will be created ad hoc somewhere in the cluster if it didn't exist before, and rebalanced to spread the workload evenly across the cluster. This plugin also handles all the logic associated with routing messages to that actor.
Let's consider a scenario:
I have two systems, A and B.
IP address of A - 192.168.0.1; the PACS database IP is 192.168.0.1
IP address of B - 192.168.0.2; the PACS database IP is 192.168.0.2
I have sent a DICOM image to A using the dcmsnd command.
How can I access system A's data from system B? That is, what do I need to configure in system A or system B so that system A's DICOM data is accessible from system B?
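(For reference, the dcmsnd call I mean has this general form, using dcm4chee's default AE title and DICOM port; the file path is illustrative:)

```
dcmsnd DCM4CHEE@192.168.0.1:11112 /tmp/image.dcm
```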
I can recommend two options depending on your needs.
Option 1
The first option assumes you actually want redundant data (i.e. two separate storage locations and two separate databases) and not just two dcm4chee instances.
In this case you can set up DICOM forwarding from A to B. This is set up in the Forward Service bean of dcm4chee (via the jmx-console or via the JBoss twiddle.sh script). More complex forwarding (e.g. based on modalities) can be configured in the Forward Service2 bean.
The official docs are here:
Forward Service
Forward Service2
If you need more details, I have written a blog post that goes into more depth about using and setting up Forward Service here:
DCM4CHEE: PACS SYNCHRONIZATION VIA DICOM FORWARDING
Option 2
The second option assumes you don't really need data redundancy, but you do need two separate dcm4chee instances.
No problem. You can set up two dcm4chee instances on separate boxes that share the same database (which lives either at 192.168.0.1 or 192.168.0.2, or perhaps somewhere else) and the same storage device.
For this to really work, you will need to configure both dcm4chee instances to not only connect to the same db, but to also store their archives on the same shared network storage device which you mount on each box.
The storage directory is configured via the DefaultStorageDirectory property of the FileSystemMgt group=ONLINE_STORAGE bean in the jmx-console.
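For example, the shared archive could be an NFS export mounted at the same path on both boxes, with DefaultStorageDirectory pointing at that path on each instance (a sketch; the server name and paths are illustrative):

```
# /etc/fstab on both dcm4chee boxes
nas.example.com:/export/dcm4chee-archive  /var/dcm4chee/archive  nfs  defaults  0  0
```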
Note: My answer assumes the dcm4chee-2.x series, and not the successor arc-light series (though the steps should be conceptually similar in either case - i.e. set up forwarding or shared storage).
I am wondering what a stack is in OpenStack.
It seems to be a set of components (network, containers and such).
Can I mix those items with Docker containers, for example?
Thanks
OpenStack's version of AWS CloudFormation (infrastructure as code) is provided through Heat (an OpenStack component). A "stack" is a set of resources that can be deployed in the cloud. What are those resources? Everything that can be created within the cloud environment:
- A network port.
- A set of instances (virtual machines).
- A load balancer.
- A storage device (block or object).
Basically, you use a template (or a set of templates) in order to create a complete infrastructure inside the cloud (database servers, web servers, a load balancer for the web servers, maybe the storage too... etc.).
The template uses a common language to define those components and how they are created in the cloud.
Note that many "AWS CloudFormation" objects can be defined in OpenStack Heat templates, but Heat also has its very own set of object definitions for OpenStack.
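A minimal HOT template illustrates the idea (a sketch; the image and flavor values are illustrative):

```yaml
heat_template_version: 2016-04-08
description: A tiny stack - one network, one subnet, one server
parameters:
  image:
    type: string
resources:
  app_net:
    type: OS::Neutron::Net
  app_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: app_net }
      cidr: 10.0.0.0/24
  app_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.small
      networks:
        - network: { get_resource: app_net }
```

Creating it with `openstack stack create -t tiny.yaml --parameter image=cirros my-stack` would bring up all three resources together as one stack.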
At the end of my site you will find a sample Heat template for a stack that includes servers, a load balancer, and autoscaling:
https://tigerlinux.github.io/recipes/openstack/index.html
Also, more examples directly from the OpenStack community:
https://git.openstack.org/cgit/openstack/heat-templates/tree/hot
I'm on 14.04 On-Prem
I have an Active and DR setup
see here: http://www.slideshare.net/michaelgeiser/apigee-dc-failover
When I failover to the DR site, I update my DNS entry (at Akamai)
Virtual hosts work fine; Target Servers are giving me a headache.
How can I set up and work with the Target Servers so that I do not have to modify the API proxy bundle, but still have traffic flow to the right VIP based on the DC?
I would prefer not to do something like MyService-Target-dc1 and MyService-Target-dc2 and use the deploy script to modify the target name in the bundle.
I do not want JavaScript that modifies the target or anything else in the API proxy; I need to define this in the environment setup.
I also cannot put the two DCs into separate Orgs; I need to use the same API key when I move between the Active and DR sites, and different Orgs mean different API keys (right?).
TIA
One option is to modify the DNS lookup on each set of MPs per DC so that a name like 'myservice.target.dc' resolves to a different VIP. You'll of course want to document this well, especially since it is external to the Apigee product.
I know you weren't too keen on modifying the target, but if you were open to that option, you could try using the host header of an ELB in front (if you have one), or the client IP address (e.g. in geo-based routing), to identify which DC a call is passing through. From there, you can modify the target URL.
And yes, different Orgs do mean different API keys.
You can try Named Target Servers. They are part of the load-balancing function, but you can set them up individually and have different targets for different environments. See:
Load Balancing Across Backend Servers
http://apigee.com/docs/api-services/content/load-balancing-across-backend-servers
Create a Named Target Server
http://apigee.com/docs/management/apis/post/organizations/%7Borg_name%7D/environments/%7Benv_name%7D/targetservers
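For example, the payload for that POST defines a Target Server; create one with the same name in each environment, each pointing at that DC's VIP (a sketch; the name and host are illustrative):

```xml
<!-- POSTed to the dc1 environment -->
<TargetServer name="MyService-Target">
  <Host>vip.dc1.example.com</Host>
  <Port>443</Port>
  <IsEnabled>true</IsEnabled>
</TargetServer>
```

The DR environment gets a Target Server with the same name but the dc2 VIP, and the proxy's target endpoint references only the name (via a LoadBalancer/Server element), so the bundle never changes between DCs.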