Datapower service migration to different region/environment [closed] - ibm-datapower

I am new to DataPower and have developed/configured a service which is working fine at the moment. I now want to take it to production, and for that I need to create deployment artifacts. Could you tell me the standard practice and which files I should include? I have heard about including a manifest file, but I am not sure where to find it.
I have also heard about mkick, but I don't know what it does.
Thanks in advance!

As Stefan suggests, deployment policies will likely be of interest for changing settings between your development and production environments.
You will want to take a configuration export of your service and use the export options to include referenced objects.
Also keep in mind that certificates and keys are not included in the export, so if any are referenced by the configuration, you will need to update those settings in your production environment before the service can be active.
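If you end up scripting the export instead of using the WebGUI, a rough sketch of a do-export request to the XML Management Interface (SOMA, normally https://<appliance>:5550/service/mgmt/current) might look like the one below. The domain, object class, and service name are placeholders, and the exact element and attribute names should be checked against the management schema of your firmware:

<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <dp:request xmlns:dp="http://www.datapower.com/schemas/management" domain="MY_DEV_DOMAIN">
      <!-- export one service plus the objects and files it references -->
      <dp:do-export format="ZIP" all-files="true">
        <dp:object class="MultiProtocolGateway" name="MyService" ref-objects="true" ref-files="true"/>
      </dp:do-export>
    </dp:request>
  </env:Body>
</env:Envelope>

The response carries the export as a base64-encoded ZIP, which you can keep in version control as your deployment artifact; keys and certificates still have to be handled separately, as noted above.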

As Jimb answered earlier, you can export the service from the DEV or STG environment and import it into the production environment.
You can use deployment policies. Be sure to import the deployment policy first and then the service, because you have to select the deployment policy when importing the service.
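If you script the import, the do-import request can reference the deployment policy by name, which is why the policy object has to exist in the target domain before you import the service. This is only a sketch with placeholder names; verify the attributes against your firmware's management schema:

<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <dp:request xmlns:dp="http://www.datapower.com/schemas/management" domain="PROD_DOMAIN">
      <!-- apply the previously imported deployment policy while importing the service -->
      <dp:do-import source-type="ZIP" overwrite-objects="true" overwrite-files="true" deployment-policy="DevToProdPolicy">
        <dp:input-file>BASE64_ENCODED_EXPORT_ZIP</dp:input-file>
      </dp:do-import>
    </dp:request>
  </env:Body>
</env:Envelope>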
You also have to export the keys, certs, and any other necessary artifacts from the previous environment.
Hope this helps.
Thank You!

Deployment is an integral part of any development architecture. Code deployment is the process of moving code from your development environment to a QA (Quality Assurance) environment, from QA to a pre-production environment, and so on.
In DataPower, code deployment means bundling your code and its dependent resources in one environment and moving them to the target environment. In practice, however, moving from one environment to another raises some key challenges.
For instance, when moving code from dev to QA, the structure stays the same but the details differ. Why? Because the IP address and port number that work in the dev environment may not work in the QA environment, so they have to change. The backend server details of the dev environment are also different from those of the QA environment and need to change as well. To address these challenges, DataPower has a tool: the so-called deployment policy.
Generally, whenever you perform a deployment or migration you need to keep in mind:
Identify which application domains the migration is from and to; if the migration is attempted from a higher-level DataPower appliance to a lower-level one, the process will usually fail.
If the migration is between appliances of the same level, say from an XI50 to an XI52, be aware that code written for the older firmware may not work on the newer one, because the newer firmware may have different or more advanced features.
Migration works with environment-specific values, and you need to check those values. How? Use a deployment policy. One weakness of a deployment policy, however, is that it cannot look inside your SSL (key and certificate) files and make changes there; you have to handle those yourself.
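To make that concrete, a deployment policy is itself a configuration object (ConfigDeploymentPolicy) holding match/change rules. The sketch below is illustrative only: the names and the property are placeholders, and the match expression is best generated with the Build helper in the WebGUI rather than written by hand:

<ConfigDeploymentPolicy name="DevToProdPolicy">
  <mAdminState>enabled</mAdminState>
  <!-- change the backend URL of MyService when the export is imported into prod -->
  <ModifiedConfig>
    <Match>*/*/services/multiprotocolgateway?Name=MyService&amp;Property=BackendUrl&amp;Value=*</Match>
    <Type>change</Type>
    <Property>BackendUrl</Property>
    <Value>https://prod-backend.example.com:9443/service</Value>
  </ModifiedConfig>
</ConfigDeploymentPolicy>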

Related

Select web.config or app.config Connection String based on Machine Name

This question loosely relates to "Choose settings based on machine name" asked previously; however, I have a much more specific use for it, which I am hoping is baked into .NET by default.
I am one of several people in a small team writing DotNet desktop and web applications. We use git as the source repository, and it becomes tiresome to have to constantly change the .config file connection strings for each of the development environments.
I know there are several ways to overcome this problem, ranging from not storing the .config files in the repo in the first place (and using .gitignore), through to writing code to parse the configuration file manually and add prefixes, as is suggested in the other question.
However, this seems both overly simplistic and tedious. In a production environment there may be lots of legitimate reasons to store multiple connection strings in the config file - such as having several servers - which makes me think there has to be an easier way to do it.
So my question is this: Is there a way in the DotNet .config files to have multiple connection strings that the framework 'automatically' knows which one to load based on a property, such as the environment or machine name?
A question for clarity: is there something about config transforms that does not meet this requirement? Set up the active solution configuration based on the build configuration manager settings (environment) and replace the nodes as needed. We do this all the time for web, console, and service projects. It is perfectly suited for web (built-in) and works for the others with some minor post-build tweaking.
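For what it's worth, a minimal Web.Release.config transform for the connection-string case looks roughly like this (the connection string name and server are placeholders):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- replaces the attributes of the matching entry in Web.config at build/publish time -->
    <add name="DefaultConnection"
         connectionString="Server=prod-sql;Database=MyApp;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>

Note that transforms are applied at build/publish time per configuration, not at runtime per machine name, which may or may not matter for your scenario.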

Dividing up Symfony2 Directory Structure by Client [closed]

In a multi-client scenario, where client entry points are sub-domains, I am wondering if I could separate my clients into directories that contain essentially the app (like cache, config, logs, kernel) and then symlink back to a "core" Symfony directory for the rest (vendor, src, and web). This lets me keep the application unified with regard to my bundles, but provides me with a separate config for each client. Then I point my sub-domains to their respective directories.
On the surface, it seems to be promising, and simpler than some other approaches I've been considering.
Later down the road if I want to upgrade a client to version 2 or add a component, I can switch bundles, or even point the symlinks to a whole new source. It might also scale well.
I am also wondering if using this approach would allow me to maintain separate security contexts between clients, as opposed to checking sub-domains and redirecting to authenticate if a user manually switched sub-domain.
Downsides would be duplication of several config files, and more involved initial client setup (but honestly nothing too bad in my opinion).
Is Symfony2 flexible enough to handle such a re-arrangement?
Are there benefits like speed or security separating the caches in multi client app?
Would using separate firewalls in each config result in separate security contexts for each sub-domain?
Background/Additional information:
I am re-developing an application to be multi client/tenant. I am using Symfony2 in the re-design since the original sat on top of Doctrine and I need more robust framework features now. I want to maintain a single application (it's the same across all clients) and have individual databases for each client. My expectations are 100-200 clients max realistically (if I go over that, I'll celebrate and then worry about it). The schema is the same between databases; I am separating them for ease of backup/restore and for separate upgrade paths later.
I have spent time reading numerous questions and answers about multi-tenancy, and about using routing and kernel listeners to use sub-domains to glean client IDs and then dynamically select the database connections, etc. I ended up finding a blog post from Orm-designer.com outlining what they did when they moved their site to a new VPS. They detailed their directory structure in the post, and it got me thinking about how I might adapt the concept to suit my purposes.
If I understand you correctly you have:
your application based on Symfony 2,
database structure,
several machines,
You need to:
deploy your app on ~200 nodes (they could be located on different machines),
keep the configuration easy to maintain,
use the same source code for each application.
To do that, you need to:
implement a maintenance mode in your application,
decide which files are user specific,
sym-link all files that are not user specific into the client folder from some shared place,
implement an rsync script to update the files that are common to all users.
Each time you want to update the version of your application:
let clients know that your application will not be available for some time,
put all applications into maintenance mode,
rsync the common code among all machines and clear the cache/deploy assets for each client,
if needed, prepare a backup and run database update scripts,
run functional tests on all client machines and switch off maintenance mode.
Client Folder:
Machine 1
  SourceFolder
    app
    bin
    src
      Company
        YourBundle
          *Controller
          Resources
            config
            *css
            ...
    ...
    *vendor
    web
Directories with * are sym-links to a folder shared by several clients on the same machine, and the rest of the directories are user specific.

Using same cloudControl MySQLd addon with multiple apps [closed]

It is unclear to me how cloudControl MySQLd addon works.
My understanding of MySQLd is that it is a MySQL server that can/will work with unlimited apps.
But since all addons are only app based, this could also mean that I cannot use the same MySQLd server on multiple apps.
Could anyone please help me understand if one MySQLd instance can be used with multiple apps hosted on cloudControl?
There are two concepts on the cloudControl PaaS. Applications and deployments. An application is basically just grouping developers and deployments together. Each deployment is a distinct running version of the app from a branch matching the deployment name. More details on this can be found in the Apps, Users and Deployments documentation.
All add-ons are always per deployment. We do this because this way we can provide all credentials as part of the runtime environment. This means you don't have to keep credentials in version-controlled files, which is a huge benefit when merging between branches, because you don't risk accidentally talking to e.g. the live database from a dev deployment. Also, add-on credentials can change at any time at the add-on provider's discretion.
For this reason separation between deployments makes a lot of sense. Usually your dev deployments also don't need the same database power as the production deployment for example. So you can easily use a smaller plan or even a shared database (e.g. MySQLs) for development. You can read more about how to use this feature inside your code in the Add-on documentation.
Also, as explained earlier, add-on credentials are always provided as part of the runtime environment. Credentials can change at any time at the add-on provider's discretion. These changes are automatically provided in the environment and the app processes are restarted. If you had hard-coded the credentials, as would be required for the second app, this would mean the app would probably experience downtime.
Last but not least, it's usually very bad practice to connect to the same database from two different code bases in different repositories, which would be the reason to have a second app. This causes all kinds of potential conflicts and dependencies that make code changes and database migrations extremely hard to maintain over time. The recommended way would be to have the data owned by one code base only and provide an API to access that data from the second code base.
All this being said, it is technically possible to connect multiple deployments or even apps to the same add-on (database or anything else) but highly advised against.
If you have a good reason to connect two apps/deployments to the same database I would suggest you manually launch an RDS instance at Amazon (MySQLd is based on RDS) and provide credentials for that through the custom config add-on to both of your apps/deployments.
I hope this answers your question and also explains the reasons.

Does Azure Service Configuration seem backwards to anyone else? [closed]

I've just migrated and deployed my first Azure Web Role this week. Now that the pressure to get it deployed is off, I'm reading "Azure in Action", and after reading about configuration settings the whole thing rubs me the wrong way.
This seems fine for migrating AppSettings-type configuration settings. However, what about settings in system.web, system.webServer, system.serviceModel, or other more complex configuration sections? If I want to be able to modify my WCF configuration settings, my current options are:
Make the change and do a full deploy (build, upload to staging, switch VIP)
Extend WCF through a custom behavior or similar to use the Service Configuration (.cscfg) instead.
I thought maybe I was misunderstanding the use - like the examples were simply the very naive case and that in practice they were used differently. However, after googling for a while it seems that this is exactly how everyone is doing it. For example, instead of using the connectionStrings configuration element for Entity Framework connections I have to write a custom connection factory.
This not only seems like too much work, but it ties my entire configuration implementation to Azure. Yes, I can use an interface so I can abstract the details and replace the implementation if I need to. But I still don't like all the extra work; connectionStrings are simple, but there are much more complex things to override.
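For comparison, the only thing the service configuration natively models is a flat per-role name/value setting, roughly like this sketch (names and values are placeholders), read back at runtime with RoleEnvironment.GetConfigurationSettingValue:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="MyWebRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <!-- flat name/value pairs only; no nested sections like system.serviceModel -->
      <Setting name="DatabaseConnectionString" value="Server=tcp:myserver.database.windows.net;Database=MyDb" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>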
What I'm thinking is that I should be able to read the Service Configuration at startup and use the ConfigurationManager to update my web.config. If something changes at runtime then, again, I can update web.config. This way my application is still portable and I'm not hardwired to the Azure configuration system.
Does anyone agree? Or is it just me?
What I'm thinking is that I should be able to read the Service Configuration at startup and use the ConfigurationManager to update my web.config. If something changes at runtime then, again, I can update web.config. This way my application is still portable and I'm not hardwired to the Azure configuration system.
In that case, what would happen if Azure restarted your role? The configuration would revert to that in the Service Configuration. If you're running multiple instances, configuration can then differ between them with potentially dangerous results.
An option is to build (once) a custom configuration provider that picks up settings from somewhere else (such as Table Storage) rather than web.config or .cscfg.
With your configuration provider abstracted behind an interface, you can exploit Dependency Injection to provide the appropriate configuration mechanism for your deployment model.
I feel your pain, but it's really only a problem that needs solving once.
it ties my entire configuration implementation to Azure
For an application to properly take advantage of Azure you'll end up tying much more than just configuration implementation!
For example, table storage is much much faster than SQL Azure, and even with SQL Azure there are differences regarding e.g. the requirement for clustered indexes.
It's worth remembering that unlike virtual hosts, Azure is not an abstraction of Windows Server: it is a platform in its own right, with its strengths and weaknesses.
In the case of configuration settings it's in my view entirely reasonable for them to be relatively hard to change on production boxes. It's obviously a different matter when developing and testing, however; and to that end there's Azure Web Deploy, which lets you do a "disposable" deployment in a few moments.

ASP.NET deployment and regulatory compliance (SOX, et al) [closed]

I have a customer who is being dogged pretty hard by SOX auditors regarding the deployment practices of our ASP.NET applications. Care is taken to use appropriate file- and folder-level security and authorization. Only those few with deployment privileges can copy an app up to the production server (typically done using secure FTP).
However, the file/folder-level security and the requirement of secure FTP isn't enough for the bean counters. They want system logs of who deployed what when, what version replaced what version (and why), and generally lots of other minutiae designed to keep the business from being Office Spaced (the bean counters apparently want the rounded cents all to themselves).
What are your suggestions for making the auditors happy? We don't mind throwing some dollars at this (in fact, I think we would probably throw big dollars at a good enough solution).
You probably want to look at an automated deployment solution, and you are going to need a formal change control process. We use AnthillPro; it can track what version was deployed and when.
To satisfy SOX we had a weekly meeting about what was getting deployed and when. Each deployment had to be approved by the compliance manager, and a form had to be filled out explaining what was being changed, why, and how. Once the form was filled out, a third person had to be involved (not the person requesting or approving; neither of them can have access to the production environment, because of the separation-of-duties rule you have to follow) to make the change, and the change was based solely on what was in the "change document", with no outside communication from the person making the request. Once deployed, all people had to sign off that it was done and when.
It shouldn't be too hard to meet the requirements. It might require some changes to your development processes, but it's definitely possible.
What you need is:
A task tracking system, showing descriptions of work, and approvals
The ability to link documents, as well as packages, to this system.
A test system to test your deployments on.
Finally, all deployments must be done via installation packages or other scripted means.
Any manual changes must be documented and approved too.
Also turn on auditing, run regular security tests, and document almost everything.
All of this is possible with a number of systems, the biggest change is the changes to your internal processes.
You might want to take a look at the auditing features provided by NTFS.
