Let's say we have 10-15 microservices running, each with its own database. How do we maintain all 15 config files on a single deployment server? Every DB server has different credentials, IP addresses and URLs. Can we manage all of these in a single file, or do we have to create a separate config file per microservice?
Each configuration refers to a single database, so you need to create one file per DB for properties such as URL and credentials. Probably the best way to handle this is to have a repository of configurations named after their DBs, and have your deployment mechanism pick the appropriate one. However, you can have a base configuration for common settings and keep only the connection details per database, using the configFiles parameter, e.g.:
flyway migrate -configFiles=/usr/configurations/base.conf,/usr/configurations/db1.conf
will start from the base configuration and override any settings that also appear in the db1-specific configuration.
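As an illustration of that layout, here is a minimal sketch (Python, assuming the Flyway CLI is on the PATH and a hypothetical /usr/configurations directory holding base.conf plus one <db>.conf per microservice database) of how a single deployment server could run every migration in one loop:

import subprocess

CONFIG_DIR = "/usr/configurations"      # assumed repository of configuration files
DATABASES = ["db1", "db2", "db3"]       # one entry per microservice database

for db in DATABASES:
    # base.conf holds shared settings; <db>.conf only needs flyway.url, flyway.user
    # and flyway.password, and overrides any key defined in both files.
    subprocess.run(
        ["flyway", "migrate",
         f"-configFiles={CONFIG_DIR}/base.conf,{CONFIG_DIR}/{db}.conf"],
        check=True,
    )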
I am following this document: https://is.docs.wso2.com/en/5.9.0/setup/changing-datasource-bpsds/ It gives the following deployment.toml configuration:
[bps_database.config]
url = "jdbc:mysql://localhost:3306/IAMtest?useSSL=false"
username = "root"
password = "root"
driver = "com.mysql.jdbc.Driver"
Executing database scripts:
Navigate to <IS-HOME>/dbscripts and execute the scripts in the following files against the database you created.
<IS-HOME>/dbscripts/bps/bpel/create/mysql.sql
<IS-HOME>/dbscripts/bps/bpel/drop/mysql-drop.sql
<IS-HOME>/dbscripts/bps/bpel/truncate/mysql-truncate.sql
Now, create/mysql.sql creates the tables, while the other two files are responsible for dropping and truncating the same tables. What do I do?
Can anyone also explain the use case of the BPS datasource?
Please help.
You only need to change your BPS database if you require the workflow feature [1] of the WSO2 Identity Server. This is mentioned in the documentation: https://is.docs.wso2.com/en/5.9.0/setup/changing-to-mysql/
The document is supposed to mention only the relevant DB script, but it is misleading because it asks you to execute all three scripts. If you are using the workflow feature, just use the
/dbscripts/bps/bpel/create/mysql.sql
script to create the tables in your MySQL database.
[1]. https://is.docs.wso2.com/en/5.9.0/learn/workflow-management/
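If it helps, here is a minimal sketch (Python, assuming the mysql client is installed, that <IS-HOME> is /opt/wso2is-5.9.0, and the credentials and database name from the deployment.toml above) that runs only the create script:

import subprocess

IS_HOME = "/opt/wso2is-5.9.0"   # assumed <IS-HOME>; adjust to your installation
SCRIPT = f"{IS_HOME}/dbscripts/bps/bpel/create/mysql.sql"

# Run only the create script against the IAMtest database; the drop and truncate
# scripts are not needed for an initial setup.
with open(SCRIPT) as sql:
    subprocess.run(["mysql", "-u", "root", "-proot", "IAMtest"], stdin=sql, check=True)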
Environment
ASP.NET WebForms app over IIS
Docker container host
AWS ECS hosting platform
Each client hosting its own copy of the app with private database connection string
Background
In the non-docker environment, each copy is a virtual directory under IIS, and thus has its own individual web.config pointing to a dedicated database. The underlying codebase is the same for each client, with no client-specific customization involved. The route becomes / here.
In the docker environment (one container per client), each copy is deployed as the central root application.
Challenge
Since the root image is going to be the same, how do we get the web.config overridden for each client deployment?
We shouldn't create multiple images (one per client), as that would mean extra deployment jobs and losing out on centralization. The connection strings should ideally be stored in some kind of dictionary storage applicable at the ECS level, which can provide client-specific values when the corresponding containers load.
Presenting the approach we used to solve this issue; hope it helps others stuck in similar situations.
With the problem statement tied to having a single root image and any customization being applied at runtime, we knew there needed to be a transformation of web.config at the time the corresponding container loads.
The solution was a PowerShell script that reads the web.config and replaces the specific values whose keys have a custom prefix embedded. The values are passed in via custom environment variables within ECS, and the web.config was also updated so that those keys carry the prefix.
Since a docker container can have only a single entry point, a new base image was created which started an IIS server and called a PowerShell script at startup. That startup script invoked the transformation script and then set ServiceMonitor on the w3wp process.
Thanks a lot for this article https://anthonychu.ca/post/overriding-web-config-settings-environment-variables-containerized-aspnet-apps/
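For anyone curious about what the transformation step does, here is a rough sketch of the idea. The actual solution used PowerShell; this Python version only illustrates the logic, and the ENV_ prefix and web.config path are assumptions:

import os
import xml.etree.ElementTree as ET

PREFIX = "ENV_"                                   # assumed custom prefix embedded in the keys
CONFIG_PATH = r"C:\inetpub\wwwroot\web.config"    # assumed location inside the container

tree = ET.parse(CONFIG_PATH)
root = tree.getroot()

# Replace any appSetting whose key carries the prefix with the value of the
# matching environment variable injected by ECS for this client's container.
for setting in root.findall("./appSettings/add"):
    key = setting.get("key", "")
    if key.startswith(PREFIX) and key in os.environ:
        setting.set("value", os.environ[key])

tree.write(CONFIG_PATH)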
I would use environment variables with a startup transform for this, as the OP suggests; however, I want to make the point that you do not want sensitive information, like DB passwords, in plain ENV variables in your ECS task definition.
For that protected information, you should use ECS secrets coupled with Parameter Store in Systems Manager. These values can be stored encrypted in the Parameter Store (using a KMS key) and the ECS Agent will 'inject' them as ENV variables on task startup.
To simplify matters, I simply use secrets for everything, although you can choose to encrypt only the sensitive information and leave the rest in the clear.
I dynamically add the secrets for the given application into my task definitions at deploy time by looking up the 'secrets' for the given app by 'namespace' (something that Parameter Store supports). Then, if I need to add a new parameter, I can just add a new secret to the store in the given namespace and re-deploy the app. It will pick up and inject into the task definition any newly defined secrets automatically (or remove ones that have been retired).
Sample Ruby code for creating the task definition:
ssm_client = Aws::SSM::Client.new  # requires the aws-sdk-ssm gem
# Look up every parameter stored under the app's namespace in Parameter Store
params = ssm_client.get_parameters_by_path(path: '/production/my_app/').parameters
# ENV variable name = last path segment; the value is injected from the parameter's ARN
secrets = params.map{ |p| { name: p.name.split("/")[-1], value_from: p.arn } }
task_def.container_definitions[0].secrets = secrets
This last transform injects the secrets such that the secret 'name' is the ENV variable name... which ends up looking like this:
"secrets": [
    {
        "valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_HOSTNAME",
        "name": "DB_HOSTNAME"
    },
    {
        "valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_PASSWORD",
        "name": "DB_PASSWORD"
    }
]
You can see there are no values now in the task definition. They are retrieved and injected when ECS starts up your task.
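For reference, the same namespace lookup can be done from Python with boto3, under the same /production/my_app/ namespace assumption; only the secrets list is built here, ready to be passed to a register_task_definition call:

import boto3

ssm = boto3.client("ssm")

# Collect every parameter stored under the app's namespace in Parameter Store
parameters = []
for page in ssm.get_paginator("get_parameters_by_path").paginate(Path="/production/my_app/"):
    parameters.extend(page["Parameters"])

# ENV variable name = last path segment; the value is resolved from the ARN at task start
secrets = [{"name": p["Name"].split("/")[-1], "valueFrom": p["ARN"]} for p in parameters]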
More information:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
In Cloudera, is there a way to update a list of configurations at once using the CM API or curl?
Currently I am updating them one by one using the CM API call below.
services_api_instance.update_service_config()
How can we update all the configurations stored in a JSON/config file at once?
The CM API endpoint you're looking for is PUT /cm/deployment. From the CM API documentation:
Apply the supplied deployment description to the system. This will create the clusters, services, hosts and other objects specified in the argument. This call does not allow for any merge conflicts. If an entity already exists in the system, this call will fail. You can request, however, that all entities in the system are deleted before instantiating the new ones.
This basically allows you to configure all your services with one call rather than doing them one at a time.
If you are using services that require a database (Hive, Hue, Oozie, ...) then make sure you set them up before you call the API. It expects all the parameters you pass in to work, so external dependencies must be resolved first.
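As a rough sketch (Python with requests; the CM host, credentials, API version and the deployment.json file name are assumptions to adjust for your cluster), the call looks like this:

import json
import requests

CM_URL = "http://cm-host:7180/api/v19/cm/deployment"   # assumed host and API version
AUTH = ("admin", "admin")                               # assumed CM credentials

with open("deployment.json") as f:                      # assumed exported deployment description
    deployment = json.load(f)

# deleteCurrentDeployment=true removes existing entities first, since the call
# does not merge and fails if an entity already exists.
resp = requests.put(CM_URL, params={"deleteCurrentDeployment": "true"},
                    json=deployment, auth=AUTH)
resp.raise_for_status()

A convenient starting point for deployment.json is to export the current deployment with GET /cm/deployment and edit it.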
This is quite a long question, but there's quite a lot to it.
It feels like it should be a reasonably common use case, so I'm hoping the Stack Overflow community can provide me with a 'best practice in Symfony2' answer.
The solution I describe below works, but there are several consequences I'd like to avoid:
In my local dev environment, if I use the wrong DB connection, the tests will work in dev but fail in production.
The routes of the ADMIN API are accessible on the PUBLIC API URL; they are just denied.
If I have a mirror of live in my dev environment (3 separate checkouts with the corresponding parameters.yml files), then the feature tests for the other bundles fail.
Is there a 'best practice in Symfony2' way to set up my project?
We're running a LAMP stack. We use git/(Atlassian) stash for version control.
We're using doctrine for the ORM and FOS-REST with OAuth plus symfony firewalls to authenticate and authorise the users.
We're committed to use Symfony2, so I am trying to find a 'best practice' solution:
I have a project with 3 applications:
A public-facing API (which gives read-only access to the data)
A protected API (which provides admin functionality)
A set of batch processes (to e.g. import data and monitor data quality)
Each application uses a set of shared models.
I have created 4 bundles, one each for the application and a 4th for the shared models.
Each application must use a different database user to access the database.
There's only one database.
There are several tables; one is called 'prices'.
The admin API must only be accessible from one hostname (e.g. admin-api.server1).
The public API must only be accessible from a different hostname (e.g. public-api.server2).
Each application is hosted on a different server
In parameters.yml in my dev environment I have this
# parameters.yml
api_public_db_user: user1
api_public_db_pass: pass1
api_admin_db_user: user2
api_admin_db_pass: pass2
batch_db_user: user3
batch_db_pass: pass3
In config.yml I have this:
# config.yml
doctrine:
    dbal:
        connections:
            api_public:
                user: "%api_public_db_user%"
                password: "%api_public_db_pass%"
            api_admin:
                user: "%api_admin_db_user%"
                password: "%api_admin_db_pass%"
            batch:
                user: "%batch_db_user%"
                password: "%batch_db_pass%"
In my code I can do this (I believe this can be done from the service container too, but I haven't got that far yet)
$entityManager = $this->getContainer()->get('doctrine')->getManager('api_public');
$entityRepository = $this->getContainer()->get('doctrine')->getRepository('CommonBundle:Price', 'api_admin');
When I deploy my code to each of the live servers, I put junk values in the parameters.yml for the other applications
# parameters.yml on the public api server
api_public_db_user: user1
api_public_db_pass: pass1
api_admin_db_user: **JUNK**
api_admin_db_pass: **JUNK**
batch_db_user: **JUNK**
batch_db_pass: **JUNK**
I have locked down my application so that the database isn't accessible (and thus the other API features don't work)
I have also set up Symfony firewall security so that the different routes require different permissions
There's also security in the Apache vhost to deny access to, say, the admin API path from the public API directory.
So, I have secured my application and met the requirement of the security audit, but the dev process isn't ideal and something feels wrong.
As background:
We have previously looked at splitting it up into different applications within the same project (like this: 'Symfony2 multiple applications and api centric application'; we actually followed this method: http://jolicode.com/blog/multiple-applications-with-symfony2), but ran into difficulties, and in any case, Fabien says not to (https://groups.google.com/forum/#!topic/symfony-devs/yneojUuFiqw). That this existed in Symfony1 and was removed in Symfony2 is enough of an argument for me.
We have previously gone down the route of splitting up each bundle and importing it using composer, but this caused too many development overheads (for example, having to modify many repositories to implement a feature; it not being possible to see all of the changes for a feature in a single pull request).
We are receiving an ever growing number of requests to create APIs, and we're similarly worried about putting each application in its own repository.
So, putting each of the three applications in a separate Symfony project / git repository is something we want to avoid too.
I have recently created a very small Google Compute Engine instance, naively thinking it's one of those easily scalable things Google people keep raving about.
I used the quick deployment feature for WordPress and it all installed itself nicely, so I started configuring and adding data, etc.
However, I then found out that I can't scale an existing instance (i.e. it won't let me change the instance type to a bigger one; I don't get why not, but there you go), so it looks like I need to find a way to migrate my WordPress installation to a new instance.
Will I simply be able to create a new instance and point it at the persistent disk my small instance currently uses, et voila, Bob's your uncle?
Or do I need to manually get the files and MySql data off the first instance and re-import into an empty new instance?
What's the easiest way?
Any advice or helpful links would be appreciated.
Thanks.
P.S.: By the way, should I try to use Google Cloud SQL instead of a local MySQL installation?
In order to upgrade your VM:
1. Access the VM's settings in the Developers Console (your project -> Compute -> Compute Engine -> VM instances -> click on the VM's name).
2. Scroll down to the "Disks" section, and un-check "Delete boot disk when instance is deleted".
3. Delete the VM in question. Take note that the disk, named after the instance, will remain.
4. Create a new VM, selecting "Existing disk" under Boot disk - Boot source. In the next box down, select the disk from point 3 above, as well as a bigger machine type.
The resulting new instance will use the existing disk from the old one, with improved hardware and performance (a CLI sketch of the same steps follows below).
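If you prefer the command line, here is a rough sketch of the same steps using the gcloud CLI driven from Python; the instance names, zone and machine type are assumptions, so double-check the flag names against your gcloud version:

import subprocess

ZONE = "us-central1-a"     # assumed zone
OLD_VM = "wordpress-1"     # assumed name of the existing instance
NEW_VM = "wordpress-2"     # name for the re-created instance
BOOT_DISK = OLD_VM         # the boot disk is named after the original instance

# Delete the old VM but keep its boot disk (the CLI equivalent of un-checking
# "Delete boot disk when instance is deleted" and then deleting the VM)
subprocess.run(["gcloud", "compute", "instances", "delete", OLD_VM,
                "--zone", ZONE, "--keep-disks", "boot", "--quiet"], check=True)

# Re-create the instance on a bigger machine type, booting from the existing disk
subprocess.run(["gcloud", "compute", "instances", "create", NEW_VM,
                "--zone", ZONE, "--machine-type", "n1-standard-2",
                "--disk", f"name={BOOT_DISK},boot=yes"], check=True)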
As for using Cloud SQL in lieu of a VM-installed database, it's perfectly feasible, and it allows you to adjust the Cloud SQL instance to match your actual use. A few considerations when setting up this kind of instance (a CLI sketch follows the list):
limit the IPs allowed to connect to your Cloud SQL instance to your frontend's IP, and perhaps the workstation IP or subnet from which you maintain the database.
configure Cloud SQL to use SSL certificates.
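A rough sketch of applying both recommendations with the gcloud CLI, where the instance name and frontend IP are placeholders and flag names may differ between gcloud versions:

import subprocess

# Restrict connections to the frontend's IP and require SSL for all client connections
subprocess.run(["gcloud", "sql", "instances", "patch", "wordpress-db",
                "--authorized-networks", "203.0.113.10/32",
                "--require-ssl"], check=True)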
Sammy's answer covers the important stuff; I just wanted to clarify how your files are arranged on the two disks attached to your instance:
The data disk contains /var/www/, which is all of the WordPress files. It's mounted on the instance at /wordpress.
The boot disk contains everything else, including the MySQL database that was created for the WordPress installation.