In a multi-client scenario where client entry points are sub-domains, I am wondering if I could separate my clients into directories that contain essentially just the app directory (cache, config, logs, kernel) and then symlink back to a "core" Symfony directory for the rest (vendor, src, and web). This lets me keep the application unified with regard to my bundles, but gives me a separate config for each client. Then I point my sub-domains to their respective directories.
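For illustration, provisioning one client directory under this layout might look roughly like the following sketch; the paths, the client name, and the exact split between shared and per-client folders are hypothetical, not part of any real setup.

    # Rough sketch only: create a per-client directory with its own app/ folders
    # and symlinks back to a shared "core" checkout. All paths are hypothetical.
    from pathlib import Path

    CORE = Path("/srv/symfony/core")        # shared vendor/, src/, web/ (hypothetical)
    CLIENTS = Path("/srv/symfony/clients")  # one sub-directory per client (hypothetical)

    def provision(client: str) -> None:
        root = CLIENTS / client
        for private in ("app/cache", "app/logs", "app/config"):
            (root / private).mkdir(parents=True, exist_ok=True)   # client-specific
        for shared in ("vendor", "src", "web"):
            link = root / shared
            if not link.exists():
                link.symlink_to(CORE / shared)                     # shared via symlink

    provision("acme")  # e.g. acme.example.com would point at /srv/symfony/clients/acme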
On the surface, it seems to be promising, and simpler than some other approaches I've been considering.
Later down the road if I want to upgrade a client to version 2 or add a component, I can switch bundles, or even point the symlinks to a whole new source. It might also scale well.
I am also wondering if using this approach would allow me to maintain separate security contexts between clients, as opposed to checking sub-domains and redirecting users to re-authenticate if they manually switch sub-domains.
Downsides would be duplication of several config files, and a more involved initial client setup (but honestly nothing too bad in my opinion).
Is Symfony2 flexible enough to handle such a re-arrangement?
Are there benefits, such as speed or security, to separating the caches in a multi-client app?
Would using separate firewalls in each config result in separate security contexts for each sub-domain?
Background/Additional information:
I am re-developing an application to be multi-client/tenant. I am using Symfony2 in the re-design since the original sat on top of Doctrine and I need more robust framework features now. I want to maintain a single application (it's the same across all clients) and have individual databases for each client. My expectation is 100-200 clients max realistically (if I go over that, I'll celebrate and then worry about it). The schema is the same between databases; I am separating them for ease of backup/restore and for separate upgrade paths later.
I have spent time reading numerous questions and answers about multi-tenancy, and also about using routing and kernel listeners to read sub-domains, glean client ids, and then dynamically select the database connections, etc. I ended up finding a blog post from Orm-designer.com outlining what they did when they moved their site to a new VPS. They detailed their directory structure in the post, and it got me thinking about how I might adapt the concept to suit my purposes.
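That listener approach essentially boils down to resolving the client from the request host and picking that client's connection settings. A framework-agnostic sketch of the idea; the host names, tenant ids, and settings below are made up:

    # Illustrative, framework-agnostic sketch: resolve a tenant database from
    # the request's sub-domain. Hosts and settings are hypothetical.
    TENANTS = {
        "acme":   {"dbname": "app_acme",   "host": "db1.internal"},
        "globex": {"dbname": "app_globex", "host": "db1.internal"},
    }

    def tenant_from_host(http_host: str, base_domain: str = "example.com") -> str:
        host = http_host.split(":", 1)[0]            # strip any port
        if not host.endswith("." + base_domain):
            raise ValueError("unknown host: " + http_host)
        return host[: -(len(base_domain) + 1)]       # "acme.example.com" -> "acme"

    def connection_settings(http_host: str) -> dict:
        return TENANTS[tenant_from_host(http_host)]

    print(connection_settings("acme.example.com"))   # {'dbname': 'app_acme', ...}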
If I understand you correctly, you have:
your application based on Symfony 2,
a database structure,
several machines,
You need to:
deploy your app on ~200 nodes (they could be located on different machines),
keep the configuration easy to maintain,
use the same source code for each application,
To achieve this, you need to:
implement a maintenance mode in your application,
decide which files are client-specific,
sym-link all files that are not client-specific into each client folder from a shared place,
implement an rsync script to update the files that are common to all clients,
Each time you want to update the version of your application:
let clients know that your application will not be available for some time,
put all applications into maintenance mode,
rsync the common code among all machines and clear the cache/deploy the assets for each client (see the sketch after this list),
if needed, prepare backups and run database update scripts,
run functional tests on all client machines and switch maintenance mode off,
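A very rough orchestration sketch of that update step; the hosts, paths, and client list are placeholders for whatever your setup actually uses.

    # Rough sketch: push the shared code to each machine, then clear every
    # client's Symfony cache there. Hosts and paths are hypothetical.
    import subprocess

    MACHINES = ["app1.example.com", "app2.example.com"]      # hypothetical hosts
    SHARED_SRC = "/srv/symfony/core/"                        # trailing slash: sync contents
    CLIENT_ROOTS = ["/srv/symfony/clients/acme", "/srv/symfony/clients/globex"]

    def update_machine(host: str) -> None:
        # Sync the common code into the shared folder on the host.
        subprocess.run(["rsync", "-az", "--delete", SHARED_SRC, host + ":" + SHARED_SRC],
                       check=True)
        # Clear each client's cache so the new code is picked up.
        for root in CLIENT_ROOTS:
            subprocess.run(["ssh", host, "php " + root + "/app/console cache:clear --env=prod"],
                           check=True)

    for machine in MACHINES:
        update_machine(machine)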
Client Folder:
Machine 1
    SourceFolder
        app
        bin
        src
            Company
                YourBundle
                    *Controller
                    Resources
                        config
                        *css
                        ...
        ...
        *vendor
        web
Directories marked with * are sym-links to a folder shared by several clients on the same machine; the rest of the directories are client-specific.
Related
I've never worked with any CMS and simply wanted to play with one. I originally come from .NET roots, so I was thinking about choosing Orchard Core CMS.
Let's imagine a very simple scenario: together with my colleague, I'd like to create a blog. As I'm used to working on web-based business systems and applications, it's quite normal for me to work with a code repository, have multiple environments (dev/test/stage/prod), implement CI/CD, and adjust the database via migrations or scripts.
Now the question is: do I need all of this when working on our blog using a CMS?
To be more specific, I can ask a few questions:
Should I create the blog with the CMS locally (on my PC), create a few articles, and then deploy it to the web, or should I create the blog on the internet and add articles directly in the prod environment?
How do I synchronize databases between environments (dev/prod)?
I can add that, as I do not expect many visitors to the website, I was thinking of using Orchard Core CMS together with SQLite. I also expect to customize code, add new modules, extend existing ones, etc., not only add content (articles). You can take that into consideration when answering the question.
So basically my question is: what should the workflow be for someone who wants to create, administer, and maintain a CMS (let it be a blog), either as a single person or as a team?
Should I work and create content locally, then publish it and somehow synchronize both the application and the database (the database is my main question mark, also in the context of how to do that properly with SQLite)?
Or should all the changes, code plus content, simply be managed directly on a server, let's call it the production environment?
Excuse me if the question is silly or hard to understand, but I'm looking for any advice, as I really didn't find any good examples or information about this; or maybe I'm searching in a totally wrong direction.
Thanks in advance.
Great question, not at all silly ;)
When dealing with a CMS, you need to think about the data/content in very different terms from the code/modules, despite the fact that the boundary between them is not always completely obvious.
For Orchard, the recommendation is not to install modules in production, but to have a dev - staging - production type of environment: install new modules in a dev environment, test them in staging, and then deploy to production when it's safe to do so. Depending on the scale of the project, the staging step may be skipped in favor of a more agile dev-to-prod setup, but the idea remains the same, and is not very different from any modular application.
Then you have the activation and configuration of the settings of the modules you deploy. Because in a CMS like Orchard, those settings are considered data and stored in the database, they should be handled like content. This includes metadata such as the very shape of the content of your site: content types are data.
Data is typically not deployed like code is, with staging and prod environments (although it can, to a degree, more on that in a moment). One reason for this is that a CMS will often feature user-provided data, such as reviews, ratings, comments or usage stats. Synchronizing all that two-ways is very impractical. Another even more important reason is that the very reason to use a CMS is to let non-technical owners of the site manage content themselves in a fast and direct manner.
The difference between code and data is also visible in the way you secure their changes: for code, usual source control is still the rule, whereas for the content, you'll setup database backups.
Also important to mention is the structure of the database. You typically don't have to worry about this until you write your own modules: Orchard comes with a rich data migration feature that makes sure the database structure gets updated with the code that uses it. So don't worry about that, the database will just update itself as you deploy code to production.
Finally, I must mention that some CMS sites do need to be able to stage content and test it before exposing it to end-users. There are variations of that: in some cases, being able to draft and preview content items is enough. Orchard supports that out of the box: any content type can be marked draftable. When that is not enough, there is an optional feature called Deployments that enables rich content deployment workflows that can be repeated, scheduled and validated. An important point concerning that module is that the deployment only applies to the subset of the site's content you decide it should apply to (and excludes, obviously, stuff like user-provided content).
So in summary, treat code and modules as something you deploy in a one-way fashion from the dev box all the way to production, with ordinary source control and deployment methods, and treat data depending on the scenario, from simple direct in production database instances with a good backup policy, to drafts stored in production, and then all the way to complex content deployment rules.
I am new to DataPower and have developed/configured a service which is working fine at the moment. I want to take this to production, and for that I need to create artefacts. Could you tell me the standard practice and what files I should include? I heard about a manifest file to include, but I'm not sure where I should find it.
I've also heard about mkick, but I don't even know what it does.
Thanks in advance!
As Stefan suggests, deployment policies will likely be of interest for changing settings between your development and production environments.
You will want to take a configuration export of your service and use the options to include referenced objects.
Also keep in mind that certificates and keys are not included in the export, so if you have any referenced in the configuration, you will need to update those settings in your prod environment before this service can be active.
As Jimb answered earlier, we can export the service from the DEV/STG environments and import it into the production environment.
You can use deployment policies. Be sure you first import the deployment policy and then the service (because you have to select the deployment policy when importing the service).
You also have to export the keys, certs, and other necessary artifacts from the previous environment.
Hope this helps.
Thank You!
Deployment is an integral part of any development architecture. Code deployment is the process of moving code from your development environment to a QA (quality assurance) environment, or from there to a pre-production environment, and so on.
In DataPower, code deployment means bundling all your code and dependent resources in one environment and moving them to the target environment. However, to move from one environment to another, in practice you may face some key challenges:
For instance, when moving code from dev to QA, the structure remains the same but the details differ. Why? Because an IP address and port number that work in the dev environment may not work in the QA environment, so you have to change them. Second, the backend server details of the dev environment are also different from those of the QA environment; those need to change too. To address these challenges, DataPower has a tool: the so-called deployment policy.
Generally, whenever we do a deployment and migration, we need to keep the following in mind:
Identify which application domain the migration goes from and to. If the migration goes from a higher-level DataPower appliance to a lower-level one, the process will most likely fail.
If the migration takes place between appliances of a similar level, say from an XI50 to an XI52, we still need to take care: code from the lower-level firmware may not work on the new one, because the new one may have more advanced features.
Migration also involves environment-specific variables, and we need to check those. How? Use the deployment policy. However, the deployment policy has one weakness: it cannot look inside your SSL files and cannot make changes there. You have to handle that part yourself.
We have different web based products. All the products share same underlying authentication and authorization mechanism. All are on same database server and are ultimately published to same server.
Each project has its own namespace, folder structure and pages. Still, because authentication and authorization are shared, we use the login and other shared pages across all the projects.
Also, to keep the look and feel uniform across the projects/products, we use the same master pages.
Currently we have a separate project which contains the code, markup, scripts, etc. for the shared things. We copy the markup and other things into all the projects to build and run them. It is really hell. We have to include/exclude files and change namespaces all the time, and on top of that make sure that the shared things are at the same version in all dependent projects.
What would be the best methodology to handle all this in a way that we don't go to asylum?
We are on ASP.Net 4.0, Visual Studio 2010, Telerik 2013 Q1 release.
You have several options to improve your situation. The best option for you will likely depend on more than the information you have provided; however, the following may be worth investigating.
Decouple authorisation system. If more than one application is using a single common authorisation code base then you may want to consider decoupling the functionality into a standalone (probably web service based) application. Authorisation through such an architecture is tricky, and easy to get wrong from a security point of view, but is achievable. The authorisation code base will then only need to be maintained in one location which will inevitably reduce deployment and building mishaps.
Extended configuration management. Does your application have any configuration management capability? If not, it should. It may well solve your problems with regards to includes and excludes and namespace chopping, especially when combined with point 3.
Improved version control management. It sounds as if you possibly aren't making the most of your version control system. Although you allude to versions in your question, if you were to maintain different branches of a common trunk for your different applications, the chopping up of namespaces and includes/excludes would probably be reduced or even unnecessary, since customisations could co-exist.
It is unclear to me how the cloudControl MySQLd add-on works.
My understanding of MySQLd is that it is a MySQL server that can/will work with unlimited apps.
But since all add-ons are app-based, this could also mean that I cannot use the same MySQLd server with multiple apps.
Could anyone please help me understand if one MySQLd instance can be used with multiple apps hosted on cloudControl?
There are two concepts on the cloudControl PaaS. Applications and deployments. An application is basically just grouping developers and deployments together. Each deployment is a distinct running version of the app from a branch matching the deployment name. More details on this can be found in the Apps, Users and Deployments documentation.
All add-ons are always per deployment. We do this because this way we can provide all credentials as part of the runtime environment. This means you don't have to keep credentials in version-controlled files, which is a huge benefit when merging between branches, because you don't risk accidentally talking to e.g. the live database from a dev deployment. Also, add-on credentials can change at any time at the add-on provider's discretion.
For this reason separation between deployments makes a lot of sense. Usually your dev deployments also don't need the same database power as the production deployment for example. So you can easily use a smaller plan or even a shared database (e.g. MySQLs) for development. You can read more about how to use this feature inside your code in the Add-on documentation.
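In code, that simply means reading the injected credentials at runtime instead of shipping them in a versioned file. A minimal sketch of the idea; the environment variable name is hypothetical, and the real names come from the add-on documentation:

    # Minimal sketch: read database credentials from the runtime environment
    # rather than from a version-controlled config file. The variable name is
    # hypothetical; the actual names are listed in the add-on documentation.
    import os

    def database_settings() -> dict:
        url = os.environ.get("MYSQL_URL")   # hypothetical variable name
        if url is None:
            raise RuntimeError("database credentials not provided by the environment")
        return {"url": url}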
Also, as explained earlier, add-on credentials are always provided as part of the runtime environment. Now, credentials can change at any time at the add-on provider's discretion. These changes are automatically provided in the environment and the app processes are restarted. If you had hard-coded the credentials, as would be required for the second app, the app would probably experience downtime.
Last but not least, it's usually very bad practice to connect to the same database from two different code bases in different repositories, which would be the reason to have a second app. This causes all kinds of potential conflicts and dependencies that make code changes and database migrations extremely hard to maintain over time. The recommended way would be to have the data owned by one code base only and provide an API to access that data from the second code base.
All this being said, it is technically possible to connect multiple deployments or even apps to the same add-on (database or anything else) but highly advised against.
If you have a good reason to connect two apps/deployments to the same database I would suggest you manually launch an RDS instance at Amazon (MySQLd is based on RDS) and provide credentials for that through the custom config add-on to both of your apps/deployments.
I hope this answers your question and also explains the reasons.
I've just migrated and deployed my first Azure Web Role this week. Now that the pressure to get it deployed is off, I'm reading "Azure in Action", and after reading about configuration settings the whole thing rubs me the wrong way.
This seems fine for migrating AppSettings-type configuration settings. However, what about settings in system.web, system.webServer and system.webService, or other more complex configuration systems? If I want to be able to modify my WCF configuration settings, my current options are:
Make the change and do a full deploy (build, upload to staging, switch VIP)
Extend WCF through a custom behavior or whatnot to use the Service Config (cscfg) instead.
I thought maybe I was misunderstanding the use - like the examples were simply the very naive case and that in practice they were used differently. However, after googling for a while it seems that this is exactly how everyone is doing it. For example, instead of using the connectionStrings configuration element for Entity Framework connections I have to write a custom connection factory.
This not only seems like too much work, but it ties my entire configuration implementation to Azure. Yes, I can use an interface so I can abstract the details and replace the implementation if I need to. But I still don't like all the extra work, connectionStrings are simple, but there are much more complex things to override.
What I'm thinking is that I should be able to read the Service Configuration at startup and use the ConfigurationManager to update my web.config. If something changes at runtime, then again, I can update web.config. This way my application is still portable and I'm not hardwired to the Azure configuration system.
Does anyone agree? Or is it just me?
What I'm thinking is that I should be able to read the Service Configuration at startup and use the ConfigurationManager to update my web.config. If something changes at runtime, then again, I can update web.config. This way my application is still portable and I'm not hardwired to the Azure configuration system.
In that case, what would happen if Azure restarted your role? The configuration would revert to that in the Service Configuration. If you're running multiple instances, configuration can then differ between them with potentially dangerous results.
An option is to build (once) a custom configuration provider that picks up settings from somewhere else (such as Table Storage) rather than web.config or .cscfg.
With your configuration provider abstracted behind an interface, you can exploit Dependency Injection to provide the appropriate configuration mechanism for your deployment model.
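The shape of that abstraction is roughly the following; it is sketched in Python because the pattern itself is language-agnostic, and all class, key, and method names are hypothetical rather than any real Azure API.

    # Illustrative sketch: put the configuration source behind an interface so
    # the concrete provider (local file, platform service configuration, table
    # storage, ...) can be injected per deployment model. Names are hypothetical.
    from abc import ABC, abstractmethod

    class SettingsProvider(ABC):
        @abstractmethod
        def get(self, key: str) -> str: ...

    class FileSettingsProvider(SettingsProvider):
        # Reads key=value pairs from a local file (e.g. for on-premise or dev).
        def __init__(self, path: str) -> None:
            self._values = {}
            with open(path) as fh:
                for line in fh:
                    if "=" in line:
                        key, value = line.split("=", 1)
                        self._values[key.strip()] = value.strip()

        def get(self, key: str) -> str:
            return self._values[key]

    class InjectedSettingsProvider(SettingsProvider):
        # Wraps whatever platform-specific lookup the hosting environment offers.
        def __init__(self, lookup) -> None:
            self._lookup = lookup

        def get(self, key: str) -> str:
            return self._lookup(key)

    def build_services(settings: SettingsProvider) -> dict:
        # Application code depends only on the interface; the concrete provider
        # is chosen at composition time depending on where the app runs.
        return {"connection_string": settings.get("Database.ConnectionString")}

Swapping the provider at composition time keeps the application portable while still letting the cloud deployment source its settings from wherever is most appropriate.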
I feel your pain, but it's really only a problem that needs solving once.
it ties my entire configuration implementation to Azure
For an application to properly take advantage of Azure you'll end up tying much more than just configuration implementation!
For example, table storage is much much faster than SQL Azure, and even with SQL Azure there are differences regarding e.g. the requirement for clustered indexes.
It's worth remembering that unlike virtual hosts, Azure is not an abstraction of Windows Server: it is a platform in its own right, with its strengths and weaknesses.
In the case of configuration settings it's in my view entirely reasonable for them to be relatively hard to change on production boxes. It's obviously a different matter when developing and testing, however; and to that end there's Azure Web Deploy, which lets you do a "disposable" deployment in a few moments.