Using same cloudControl MySQLd addon with multiple apps [closed]

It is unclear to me how the cloudControl MySQLd add-on works.
My understanding of MySQLd is that it is a MySQL server that can work with any number of apps.
But since all add-ons are app-based, this could also mean that I cannot use the same MySQLd server with multiple apps.
Could anyone please help me understand whether one MySQLd instance can be used with multiple apps hosted on cloudControl?

There are two concepts on the cloudControl PaaS. Applications and deployments. An application is basically just grouping developers and deployments together. Each deployment is a distinct running version of the app from a branch matching the deployment name. More details on this can be found in the Apps, Users and Deployments documentation.
All add-ons are always per deployment. We do this because this way we can provide all credentials as part of the runtime environment. This means you don't have to keep credentials in version-controlled files, which is a huge benefit when merging between branches, because you don't risk accidentally talking to e.g. the live database from a dev deployment. Also, add-on credentials can change at any time at the add-on provider's discretion.
For this reason separation between deployments makes a lot of sense. Usually your dev deployments also don't need the same database power as the production deployment for example. So you can easily use a smaller plan or even a shared database (e.g. MySQLs) for development. You can read more about how to use this feature inside your code in the Add-on documentation.
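As a minimal sketch, assuming cloudControl exposes the add-on credentials in a JSON file whose path is set in the CRED_FILE environment variable (the exact key names below are illustrative), your code would read them at startup rather than hard-coding them:

```python
# Read add-on credentials from the runtime environment (a sketch; the
# JSON key names vary by add-on and plan, so check your add-on's docs).
import json
import os

with open(os.environ["CRED_FILE"]) as f:
    creds = json.load(f)

mysql = creds["MYSQLS"]  # e.g. the shared MySQL add-on section
db_config = {
    "host": mysql["MYSQLS_HOSTNAME"],
    "port": int(mysql["MYSQLS_PORT"]),
    "user": mysql["MYSQLS_USERNAME"],
    "password": mysql["MYSQLS_PASSWORD"],
    "database": mysql["MYSQLS_DATABASE"],
}
```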
Also, as explained earlier, add-on credentials are always provided as part of the runtime environment, and credentials can change at any time at the add-on provider's discretion. These changes are automatically provided in the environment and the app processes are restarted. If you had hard-coded the credentials, as would be required for the second app, the app would probably experience downtime.
Last but not least, it's usually very bad practice to connect to the same database from two different code bases in different repositories, which would be the reason to have a second app. This causes all kinds of potential conflicts and dependencies that make code changes and database migrations extremely hard to maintain over time. The recommended way would be to have the data owned by one code base only and provide an API to access that data from the second code base.
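As a minimal sketch of that pattern (the framework choice, route, and hostnames are illustrative assumptions), the owning code base exposes an API and the second code base consumes it:

```python
# The data-owning code base exposes a small HTTP API instead of sharing
# its database with a second repository (all names are illustrative).
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/customers/<int:customer_id>")
def get_customer(customer_id):
    # The owning app would query its own database here; stubbed for brevity.
    return jsonify({"id": customer_id, "name": "example"})

if __name__ == "__main__":
    app.run()

# In the second code base, talk to the API instead of the database:
#   import requests
#   customer = requests.get("https://data-owner.example.com/api/customers/42").json()
```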
All this being said, it is technically possible to connect multiple deployments or even apps to the same add-on (database or anything else) but highly advised against.
If you have a good reason to connect two apps/deployments to the same database I would suggest you manually launch an RDS instance at Amazon (MySQLd is based on RDS) and provide credentials for that through the custom config add-on to both of your apps/deployments.
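As a hedged sketch of that setup (instance identifier, region, class, and credentials below are illustrative assumptions), launching the shared RDS MySQL instance with boto3 might look like:

```python
# Launch a standalone RDS MySQL instance; its endpoint and credentials
# would then be handed to both apps/deployments via the custom config
# add-on (all identifiers and sizes are placeholders).
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_instance(
    DBInstanceIdentifier="shared-app-db",
    DBInstanceClass="db.t3.micro",
    Engine="mysql",
    AllocatedStorage=20,
    MasterUsername="appuser",
    MasterUserPassword="change-me",  # supply your own secret
)
```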
I hope this answers your question and also explains the reasons.

Related

Use existent VM Instace (bitnami) for Autoscale Group of Instances [closed]

I am using Bitnami WordPress for Google Cloud. Now I need to set up an Instance Template -> Group of instances -> Load balancer, and with this my system will be autoscaling :)
But I have the VM instance created using a boot image from Bitnami, and I need to put it in a group of instances.
Can you help me with this, please?
The full answer for creating a highly scalable web application on GCP is very long and could be a blog post of its own. Since writing the whole answer in one place would be difficult to read, I have split it into 3 parts.
As you have mentioned, the steps for creating a highly scalable web application on GCP can be divided into:
Instance template
Managed Instance Group and autoscaling
Network / HTTP(s) load balancer
1. Instance Template: This is the first step in creating this high-scale web app. I have listed out the steps for creating an Instance Template here. The one change you have to make in the template is to use the Bitnami image instead of the CentOS 6 image.
Best practices: From my perspective, it is better to create a custom image with all your software installed than to use a startup script, since the time taken to launch new instances in the group should be as short as possible. This will increase the speed at which you can scale your web app.
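For a rough idea of what this step looks like in code, here is a hedged sketch using the google-api-python-client library (project, names, and machine type are illustrative assumptions; the console or gcloud works just as well):

```python
# Create an instance template from a custom image baked with your
# software (all names are placeholders).
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

compute.instanceTemplates().insert(
    project="my-project",
    body={
        "name": "wordpress-template",
        "properties": {
            "machineType": "e2-medium",
            "disks": [{
                "boot": True,
                "autoDelete": True,
                # Custom image created from the Bitnami boot disk:
                "initializeParams": {
                    "sourceImage": "projects/my-project/global/images/wordpress-image",
                },
            }],
            "networkInterfaces": [{"network": "global/networks/default"}],
        },
    },
).execute()
```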
2. Managed Instance group and Autoscaling: I have written about the steps for creating a Managed Instance group and Autoscaling here. As autoscaling and load balancing are independent, either of them can be set up first.
Best practices: Both autoscaling and load balancers offer health checks for the instances. From my perspective, setting up health checks in both services is redundant; a health check on the load balancer alone should do.
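Continuing the hedged sketch from step 1 (project, zone, and resource names remain illustrative assumptions), the managed instance group and autoscaler could be created like this:

```python
# Create a managed instance group from the template above, then attach
# an autoscaler to it (all names are placeholders).
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project, zone = "my-project", "us-central1-a"

compute.instanceGroupManagers().insert(
    project=project,
    zone=zone,
    body={
        "name": "wordpress-mig",
        "instanceTemplate": f"projects/{project}/global/instanceTemplates/wordpress-template",
        "targetSize": 2,
    },
).execute()

compute.autoscalers().insert(
    project=project,
    zone=zone,
    body={
        "name": "wordpress-autoscaler",
        "target": f"projects/{project}/zones/{zone}/instanceGroupManagers/wordpress-mig",
        "autoscalingPolicy": {
            "minNumReplicas": 2,
            "maxNumReplicas": 10,
            # Scale out when average CPU across the group exceeds 60%:
            "cpuUtilization": {"utilizationTarget": 0.6},
        },
    },
).execute()
```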
3. Load balancer: GCP offers two types of load balancers, namely the Network and the HTTP(s) load balancer. I have written about the differences between Network and HTTP(s) load balancing here. Since I assume you will be building a web stack out of the Bitnami image, I have written about the steps for setting up the HTTP load balancer here.
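And a hedged sketch of chaining the HTTP load balancer resources in front of the managed instance group (all names are illustrative; it assumes the instance group exposes a named port "http", and each insert returns an operation you should wait on before the next call):

```python
# Health check -> backend service -> URL map -> proxy -> forwarding rule.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project, zone = "my-project", "us-central1-a"

compute.healthChecks().insert(project=project, body={
    "name": "http-basic-check",
    "type": "HTTP",
    "httpHealthCheck": {"port": 80},
}).execute()

compute.backendServices().insert(project=project, body={
    "name": "wordpress-backend",
    "protocol": "HTTP",
    "portName": "http",
    "healthChecks": [f"projects/{project}/global/healthChecks/http-basic-check"],
    "backends": [{"group": f"projects/{project}/zones/{zone}/instanceGroups/wordpress-mig"}],
}).execute()

compute.urlMaps().insert(project=project, body={
    "name": "wordpress-map",
    "defaultService": f"projects/{project}/global/backendServices/wordpress-backend",
}).execute()

compute.targetHttpProxies().insert(project=project, body={
    "name": "wordpress-proxy",
    "urlMap": f"projects/{project}/global/urlMaps/wordpress-map",
}).execute()

compute.globalForwardingRules().insert(project=project, body={
    "name": "wordpress-http-rule",
    "IPProtocol": "TCP",
    "portRange": "80",
    "target": f"projects/{project}/global/targetHttpProxies/wordpress-proxy",
}).execute()
```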
By following these three steps, I hope you will be able to build a highly scalable web app. This answer is based on my perspective. If anything is incorrect or if I have missed something, please feel free to comment and I will add it to the answer.

Datapower service migration to different region/environment [closed]

I am new to DataPower and have developed/configured a service which is working fine at the moment. I want to take this to production, and for that I need to create artefacts. Could you help me with the standard practice and what files I should include? I have heard about including a manifest file, but I am not sure where I should find it.
I have also heard about mkick but don't know what it does.
Thanks in advance!
As Stefan suggests, deployment policies will likely be of interest for changing settings between your development and production environments.
You will want to take a configuration export of your service and use the options to include referenced objects.
Also keep in mind that certificates and keys are not included in the export, so if the configuration references any, you will need to update those settings in your prod environment before the service can be active.
As Jimb answered earlier, we can export the service from the DEV and STG environments and import it into the production environment.
You can use deployment policies; be sure to import the deployment policy first and then the service, because you have to select the deployment policy when importing the service.
You also have to export the keys, certs, and other necessary artifacts from the previous environment.
Hope this helps.
Thank You!
Deployment is an integral part of any development architecture. Code deployment is the process of moving code from your development environment to the QA (Quality Assurance) environment, or from one environment to the pre-production environment, and so on.
In DataPower (DP), code deployment means bundling all your code and dependent resources in one environment and moving them to the target environment. However, moving from one environment to another presents some key practical challenges.
For instance, when moving code from dev to QA, the structure remains the same but the details differ. Why? Because an IP address and port number that work in the dev environment may not work in the QA environment, so you have to change them. Likewise, the backend server details of the dev environment differ from those of the QA environment and also need to change. To address these challenges, DP provides a tool: the so-called deployment policy.
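Deployment policies are configured on the appliance itself, but as a rough analogy (this is not DataPower's actual mechanism), their match-and-change rules behave like this Python sketch that rewrites environment-specific values in an exported configuration:

```python
# Rewrite environment-specific values (IPs, ports, backend hosts) in an
# exported configuration, mimicking deployment-policy change rules.
ENV_OVERRIDES = {
    # dev value -> QA value (all values illustrative)
    "10.0.1.15:8080": "10.9.1.15:8080",
    "backend-dev.example.com": "backend-qa.example.com",
}

def apply_overrides(exported_config: str) -> str:
    for dev_value, qa_value in ENV_OVERRIDES.items():
        exported_config = exported_config.replace(dev_value, qa_value)
    return exported_config

with open("export-dev.xml") as f:
    migrated = apply_overrides(f.read())

with open("export-qa.xml", "w") as f:
    f.write(migrated)
```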
Generally, whenever we perform a deployment or migration, we need to keep the following in mind:
Identify the source and target application domains. If the migration goes from a higher-level DP appliance to a lower-level one, the process will definitely fail.
If the migration happens between same-level appliances, say from an XI50 to an XI52, take care: code written for the lower-level firmware may not work on the new appliance, because the new one may have more advanced features.
Migration involves environment-specific values, and we need to check them. How? Use the deployment policy. However, the deployment policy has one weakness: it cannot look inside your SSL files and make changes there. Those changes you have to make yourself.

Windows Azure Can I run multiple WebSites on the same Extra small instance or Small instance [closed]

I'm evaluating MS cloud Windows Azure for hosting 3 completely separate websites.
Every website has its own database and they are not connected, so 3 websites and 3 databases.
My aim is to optimize costs for a start-up project with the possibility to scale up on demand.
I would like to know:
If it is possible to host 3 websites on the same instance (Extra Small instance or Small instance).
If it is possible to host 3 databases in the same SQL Azure database (so I would use the total amount of SQL storage for my 3 databases), or whether I have to pay for a separate SQL Azure instance for each website's database.
Thanks for your time on this.
You can absolutely run multiple web sites on the same instance, starting with SDK 1.3, as full IIS is now running in Web Roles. As Jonathan pointed out with the MSDN article link, you can set up the Sites element to define each website. You should also check out the Windows Azure Platform Training Kit, which has a lab specifically around building a multi-site web role.
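As a hedged illustration of that Sites element (site names, physical directories, and host headers below are placeholders), a multi-site ServiceDefinition.csdef can look like this:

```xml
<WebRole name="WebRole1" vmsize="Small">
  <Sites>
    <Site name="SiteA" physicalDirectory="..\SiteA">
      <Bindings>
        <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.site-a.com" />
      </Bindings>
    </Site>
    <Site name="SiteB" physicalDirectory="..\SiteB">
      <Bindings>
        <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.site-b.com" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="HttpIn" protocol="http" port="80" />
  </Endpoints>
</WebRole>
```

IIS routes requests arriving on the shared port 80 endpoint to the right site by host header.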
You can also take advantage of something like Cloud Ninja or Windows Azure Accelerator for Web Roles, which provides a multi-tenant solution that you can load into your Web Role (check out the Cloud Cover Show video here for more info).
When hosting multiple websites, remember that they're all sharing the same resources on an instance. So you might find that an Extra Small instance won't meet your performance needs (it's limited to 768MB RAM and approx. 5Mbps bandwidth). I think you'll be fine with Small instances and scaling out as you need to handle more traffic.
For the past several months, I've been running three websites on a pair of extra small instances, including albahari.com, linqpad.net and the LINQPad licensing server (which uses LINQ to SQL). The trick is to serve large static content directly from blob storage so that it's not subject to the 5MBit/second I/O bandwidth restriction. And I've never got anywhere close to running out of memory.
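As a hedged sketch of that blob-storage trick using the current azure-storage-blob Python package (container name, paths, and connection string are placeholders, and the original setup predates this SDK):

```python
# Push a large static asset to blob storage so the web role instance
# doesn't have to serve it itself (all names are placeholders).
from azure.storage.blob import BlobServiceClient, ContentSettings

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("static")

with open("site.css", "rb") as f:
    container.upload_blob(
        name="css/site.css",
        data=f,
        overwrite=True,
        content_settings=ContentSettings(content_type="text/css"),
    )
```

Pages then reference the asset at its blob URL directly, so the bytes never pass through the instance.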
A pair of extra small Azure instances is a great alternative to shared hosting when you need better reliability, security and performance.
Edit: close to a year now, still no problems with multiple websites on Azure. I will never go back to shared hosting.
You can definitely run 3 websites in the same instance. Check out this MSDN article that shows you how to form your configuration file such that you can host multiple websites within a single role. One thing to note though since you mentioned "scaling on demand" - when you scale an instance with multiple websites, you are scaling the instance, which means all of the sites will scale together. You can't scale just one of the sites on the shared instance.
For the databases, in theory this can be done, but it would be "manual": you would have to put all of the tables from the three databases into the same database, and you would probably want to prefix them with some sort of indicator so that you know which table belongs to which application. This is certainly not a recommended practice, but if it works for your solution, there is nothing technical preventing you from doing it. If at all possible, I would recommend multiple databases.

ASP.NET deployment and regulatory compliance (SOX, et al) [closed]

I have a customer who is being dogged pretty hard by SOX auditors regarding the deployment practices of our ASP.NET applications. Care is taken to use appropriate file- and folder-level security and authorization. Only the few with deployment privileges can copy an app up to the production server (typically done using secure FTP).
However, the file/folder-level security and the requirement of secure FTP isn't enough for the bean counters. They want system logs of who deployed what when, what version replaced what version (and why), and generally lots of other minutiae designed to keep the business from being Office Spaced (the bean counters apparently want the rounded cents all to themselves).
What are your suggestions for making the auditors happy? We don't mind throwing some dollars at this (in fact, I think we would probably throw big dollars at a good enough solution).
You probably want to look at an automated deployment solution, and you are going to need a formal change control process. We use AnthillPro. It can track what version was deployed and when.
To satisfy SOX we had a weekly meeting about what was getting deployed when. Each deployment had to be approved by the compliance manager and needed a form filled out explaining what, why, and how something was being changed. Once the form was filled out, a third person had to be involved (not the person requesting or approving; neither of them can have access to the production environment, because of the separation-of-duties rule you have to follow) to make the change, and the change was based only on what was in the "change document", with no outside communication from the person making the request. Once deployed, all people had to sign off that it was done and when.
It shouldn't be too hard to meet the requirements; it might require some changes to your development processes, but it's definitely possible.
What you need is:
A task tracking system, showing descriptions of work, and approvals
The ability to link documents, as well as packages, to this system.
A test system to test your deployments on.
Finally, all deployments must be done via installation packages or other scripted means.
Any manual changes must be documented and approved too.
Also turn on auditing, run regular security tests, and document almost everything.
All of this is possible with a number of systems; the biggest change will be to your internal processes.
You might want to take a look at the auditing features provided by NTFS.

Best practice for moving live web apps to new servers? [closed]

I am tasked with moving quite a few web apps, including their databases, to new servers; they are ASP.NET. I was not the one who created and set these up originally, so I must figure out exactly what I need to replicate in order not to break anything, so that the customers have no idea anything was moved.
Does anyone have any tips for this, or know any automated ways?
Is there any software that can help with this?
I know the web app sends emails, so I will need to set up SMTP, and it connects to a database, which I will also need to move. I suppose I should do this at night and take down the servers so I can move the database in its latest state...
Any tips or tricks?
This might help: IIS 6.0 Migration Tool
"The Internet Information Services 6.0
Migration Tool is a command line tool
that automates several of the steps
needed to move a Web application from
IIS 4.0, IIS 5.0 or IIS 6.0 to a clean
installation of Internet Information
Services (IIS) 6.0 and Windows Server
2003.
The tool transfers configuration data,
Web site content, and application
settings to a new IIS 6.0 server if
desired, or can move just application
settings using the copy functionality.
"
I don't think it will help with the database migration, though.
Here's a link to more detailed information about using the tool.
May I suggest setting up the new servers in a staging environment. Allow business users to verify the functionality in the staging environment before flipping the switch and going live. Once you are ready, then bring over a fresh copy of the data. As far as the emails go... you should be fine with ASP.NET but some classic ASP programs require COM components in order to send email.
The route I've taken in the past is to do a live/current copy (whatever that entails) of $CURRENT_SERVER to $NEW_SERVER. If the DB is not moving, just make sure $NEW_SERVER can reach $DB_SERVER, and that it will continue to run once copied.
Then update DNS to point to $NEW_SERVER.
After some period of time (2-3x the TTL for the DNS record), remove the old server.
We just went through the same thing: we bought a new server and had to transfer ASP.NET sites and databases to it. We experienced problems with the IIS Migration Tool, so we followed a "staging environment" approach, as described in Berkshire's answer, and had much success. When all issues are cleared in the staging environment, you can make it "live" with confidence.
One other thing to watch out for is that you'll have to skim the ASP & VB/C# code for any hard-keyed connection strings to the database. These will have to change to reference the new location of the database.
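For example, a connection string may be hard-keyed in code or tucked into web.config; either way it has to be repointed at the new database server (a hedged illustration, with placeholder server and database names):

```xml
<connectionStrings>
  <!-- Old value pointing at the retired server: -->
  <!-- <add name="Default" connectionString="Server=OLDSQL01;Database=AppDb;Integrated Security=True" providerName="System.Data.SqlClient" /> -->
  <add name="Default"
       connectionString="Server=NEWSQL01;Database=AppDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

Moving hard-keyed strings into web.config while you are at it makes the next migration a one-line change.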
