How to effectively develop Azure Blueprints? - azure-resource-manager

I'm starting to develop Azure Blueprints, and I can see that the structure of an ARM template artifact is different from the one used in a standalone ARM deployment. I like to modularize code and am trying to figure out how to properly develop individual ARM templates and then incorporate them into the final blueprint. Right now, instead of putting the ARM artifact directly into the blueprint (along with 100 others), I manually debug the ARM template and then cut and paste it into the artifact. I'm wondering if there is a more effective way of doing that, or am I missing something? Based on the documentation, the suggested approach is to incorporate templates directly into artifacts and then deploy/publish/assign the blueprint, which takes far too long when you just need to work on a single ARM template.

An effective, dynamic, and automated approach is to use the Blueprints as Code repository to manage and configure the lifecycle of your blueprints, which reduces the effort compared to managing blueprints through the Portal.
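To make that concrete, here is a minimal sketch of what the automated lifecycle can look like when each ARM template lives in its own file (so it can still be debugged with a normal ARM deployment) and is then pushed into the blueprint as a template artifact through the Blueprints REST API listed below. The scope, blueprint name, artifact name, file path, version string, and token helper are all assumptions for illustration, and the sketch assumes the blueprint definition itself already exists.

```typescript
// Minimal sketch (not the repository's actual tooling): push a standalone ARM
// template file into an existing blueprint as a template artifact, then publish
// a new blueprint version via the Blueprints REST API.
// Scope, names, paths and the token parameter are placeholders/assumptions.
import { readFileSync } from "fs";

const scope = "providers/Microsoft.Management/managementGroups/myMgmtGroup"; // or "subscriptions/<subId>"
const blueprintName = "corp-baseline";
const apiVersion = "2018-11-01-preview";
const base = `https://management.azure.com/${scope}/providers/Microsoft.Blueprint/blueprints/${blueprintName}`;

async function put(url: string, body: unknown, token: string): Promise<void> {
  const res = await fetch(`${url}?api-version=${apiVersion}`, {
    method: "PUT",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`PUT ${url} failed: ${res.status} ${await res.text()}`);
}

export async function publishStorageArtifact(token: string): Promise<void> {
  // The template stays in its own file so it can be debugged with a plain
  // resource-group deployment before it ever touches the blueprint.
  const template = JSON.parse(readFileSync("artifacts/storage.json", "utf8"));

  // Create or update the template artifact inside the blueprint definition.
  await put(`${base}/artifacts/storage`, {
    kind: "template",
    properties: { template, parameters: {}, resourceGroup: "workload" },
  }, token);

  // Publish an immutable version so the blueprint can be (re)assigned.
  await put(`${base}/versions/1.0.${Date.now()}`, {
    properties: { changeNotes: "Updated storage artifact" },
  }, token);
}
```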
Other related references:
Functions for use with Blueprints
Blueprints REST API Reference
Blueprints Az PowerShell Reference

Related

Alfresco Content Services - Extensions/AMP/customization - How does it work?

I have recently started learning about Alfresco Content Service.
I have some questions:
My understanding is that the standard way to add customizations is to create AMPs.
Why create an AMP for each customization instead of adding it directly to the ACS configuration? Are there some benefits, like not having to restart the service or something?
If apply_amps adds all custom AMPs to the Alfresco server (.war files), won't there be a risk of customizations overwriting each other?
E.g. if two different AMPs change the same standard button in the Share service.
I have found that there are 2 ways to add these customizations as well:
Add dependency to the pom file. (works only for .jar)
Actually compile the .amp and move it to the correct folder and run apply_amps.sh.
From the documentation it seems to me like AMP files used to be the standard way of adding customizations, but that there has since been a move away from this in favor of regular JAR files, and from 7.1 onwards, JSON instead.
Yet other tutorials I find mention things like "always use .amp", which seems strange if it contradicts the official documentation.
Also, I found something about adding AMPs through the Share interface? Or must they always be added when building the server (.war)?
Could someone provide me with a thorough explanation of the best practice for applying customizations to Alfresco Content Services? Preferably with details regarding a live production setting.
Thanks for helping me make some of this clearer.
I'll try to give you helpful answers:
Making app packages (AMPs or JARs) is much better than changing the configuration manually. It's good for versioning, portability (TEST vs PROD, or between projects), and composition (you can add add-ons, which are often very useful). It is the standard and proper way to build a web app.
Regarding conflicts between customizations, I'm not sure exactly how it works. It is good practice to always use your own namespace for every AMP.
If AMPs write to the same file, the result is always appended (share-config-custom.xml can get very big).
The JAR vs. AMP question is simple: older versions of Alfresco supported AMPs better than JARs. Now it does not matter which one you use. Try looking inside these packages; they look very similar.
I have never heard of adding AMPs through the Share interface. Do you have a source? The only similar thing is creating a content model through the Model Manager (https://docs.alfresco.com/content-services/latest/tutorial/model/).
For PROD I use a combination of AMPs and JARs. I have a lot of legacy code and add-ons in AMPs and new things in JARs. Alfresco works with them the same way.

ARM ADF Deployment Features

I am not sure that this is the appropriate forum, but we are having some issues with the recommended CI/CD flow for Azure Data Factory, which is requiring us to create our own script to deploy ADF resources using the ADF REST APIs instead of the auto-generated ARM templates.
Before undertaking this work, we wanted to clarify a few assumptions about the ARM template deployment for ADF resources. Are all of the assumptions true?
ARM template deployment for ADF resources simply calls the ADF REST APIs to deploy resources and has the same limitations as calling the REST APIs ourselves?
ARM template deployment for ADF does not perform any optimizations before calling the REST APIs such as reading the current definition of resources before writing and only writing if the definition has changed.
Are there any other ARM limitations or optimizations that we should be aware of in order to make sure that our performance is as optimal as ARM?
As we know, an ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax: you describe your intended deployment without writing the sequence of programming commands to create it.
ARM has its own benefits over the REST APIs, such as incremental and complete deployment modes, which are not available when calling the REST APIs directly.
David Gaspard: Coming to your question, an ARM template is a collection of JSON definitions in which you can include multiple REST API calls in a single module and use it for deployment. You can reuse it, and it also allows you to use variables. It depends on what best suits your requirements.
You can use incremental deployment mode in ARM, which takes care of some of that optimization.
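To illustrate the read-before-write behavior asked about in the question (which you would otherwise have to implement yourself when calling the REST APIs directly), here is a rough sketch against the documented ADF REST endpoints. The subscription, resource group, factory, and pipeline names are placeholders, and the naive JSON comparison is only indicative; a real script would have to strip server-populated fields (such as the etag) before comparing.

```typescript
// Sketch only: read-before-write against the ADF REST API (api-version 2018-06-01).
// Subscription, resource group, factory and pipeline names are placeholders.
const apiVersion = "2018-06-01";
const base =
  "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<rg>" +
  "/providers/Microsoft.DataFactory/factories/<factoryName>";

async function call(method: string, path: string, token: string, body?: unknown): Promise<any> {
  const res = await fetch(`${base}${path}?api-version=${apiVersion}`, {
    method,
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: body ? JSON.stringify(body) : undefined,
  });
  if (res.status === 404) return undefined;           // resource does not exist yet
  if (!res.ok) throw new Error(`${method} ${path} failed: ${res.status}`);
  return res.json();
}

// Only PUT the pipeline when the desired definition differs from what is deployed.
export async function upsertPipeline(name: string, desired: object, token: string): Promise<void> {
  const current = await call("GET", `/pipelines/${name}`, token);
  // Naive comparison for illustration; server-populated fields (etag, timestamps)
  // would need to be removed before a real diff.
  if (current && JSON.stringify(current.properties) === JSON.stringify(desired)) {
    console.log(`${name}: unchanged, skipping PUT`);
    return;
  }
  await call("PUT", `/pipelines/${name}`, token, { properties: desired });
  console.log(`${name}: deployed`);
}
```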
A few references that might help:
https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-rest
Related discussion: Azure ARM Templates and REST API
ARM limitations: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/best-practices

Combine Meteor and Express

I am evaluating Meteor as an alternative to developing real-time capabilities with socket.io, and it looks like an awesome framework for single-page real-time apps. It is a great time saver that lets the developer focus on the business logic of the app rather than writing boilerplate code. However, I find it still premature for a medium-size app with multiple pages/routes and a REST API. Plus, a number of features like i18n are still not available, which would require some time investment to develop myself.
I think that it would be great if I could combine Meteor and Express and use Meteor in use cases where it really shines.
Is it possible to develop an app using standard Express/Mongo stack and use Meteor for only specific part of the app where I need real time collaboration?
For example, can I share a session between Express/Connect and Meteor?
Thanks!
This does not directly answer your question, but I thought I'd throw it out there:
You should check out the community packages on Atmosphere. Specifically, I'd recommend having a look at iron-router and i18n (I'll note I have not used the latter).
I've built a large production app that uses iron-router and it's running smoothly. You may also be able to use its server-side routing capabilities to implement your REST API.
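For reference, a server-side route in iron-router looks roughly like the sketch below; the route path, collection, and field names are made up for illustration, and the app needs the iron:router package added.

```typescript
// Rough sketch of a JSON endpoint built on iron-router's server-side routes.
// Router is the global exposed by the iron:router package; Items stands in for
// a Meteor collection defined elsewhere in the app.
declare const Router: any;
declare const Items: any;

Router.route('/api/items/:_id', { where: 'server' })
  .get(function (this: any) {
    // Server routes expose the raw Node request/response objects.
    const item = Items.findOne(this.params._id);
    this.response.setHeader('Content-Type', 'application/json');
    if (!item) {
      this.response.statusCode = 404;
      this.response.end(JSON.stringify({ error: 'not found' }));
      return;
    }
    this.response.end(JSON.stringify(item));
  });
```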

How do you structure your ASP.net sources in Visual Studio?

Do you have one solution with the web application project, class libraries, database project and tests? Or, do you segment it into multiple solutions? Why?
I'm asking because we're trying to streamline this scenario for Visual Studio 2010 and I'd like to get input from the community on how you'd prefer to work.
I tend (but not always) to have one solution per job, but I import existing projects from other solutions, such as my WebControlLibrary where I keep common user controls and classes, etc.
I then tend to break the actual solution for the job down into the Web Application, Business Logic Layer, Data Access Layer and Entity Layer, i.e.:
Solution
...MyCompany.WebControlLibrary
...Project
...Project.BusinessLogic
...Project.DataAccess
...Project.Entities
...Project.Scripts
...Project.Testing
...Project.Deployment
If a project requires something such as a mobile device, I'll always put that in a new solution, but it may perhaps share some projects of the current solution, i.e.
MobileSolution
...MobileProject
...Project.Entities
...MobileProject.BusinessLogic
The more 'stuff' you have combined, the slower Visual Studio becomes at building. You can obviously stop certain projects building by default, but that's when you have to start creating your own build configurations. If you are going to be creating large applications, I'd suggest breaking them down into multiple solutions. I find it much easier to flick between solutions than to keep changing build configurations.
Another option is to reference the compiled DLLs when you build your projects. I prefer to import said projects into my solution, as you never have to worry about referencing the correct build configuration, i.e. selecting the DLL from the Debug or Release folder.
Stand-alone libraries can be their own solutions. References for those libraries can be made in the project that you're working with. Related items like the web application, the test setups, and specific libraries such as data access or business rules can be set up as projects within one solution. It really all comes down to how much you want to break things out for reusability.
This depends a little on the job the project performs.
For ease of use it's simple to have a solution that just contains all the projects required. If it's a large solution this can hamper you later on when the IDE starts to get slow and build times rocket through the roof.
Let's say one of the projects is a library used by your company to take card payments and interface with 3-D Secure. You present your own GUI page to take the details, etc.
If you had numerous sites that all take card payments, you would benefit greatly from having this project in a separate solution and referencing the compiled DLL. For any change you require, you would need to open up that solution, make the change, build it, then go to the solution you're working in and test it. That sounds like a pain, and you may find it's just simpler to have it all in one big solution. But then, if you have this library in every solution and make a generic change to it, you need to repeat that change throughout.
So you just need to make a decision on whether you're developing a separate project in the same solution or something that might be used elsewhere. If you needed more functionality than the library provides, you could implement a partial class in your project and extend the library that way, or perhaps a wrapper class will suffice. Either way, you know you're not affecting the other sites that use this library, and you are keeping your solution smaller and more manageable, with a smaller memory footprint during development.

.NET automated build with cruisecontrol.net + nant - multiple assembly structure / best practice

I'm doing some work with several shared .NET assemblies and a generic web application that I would like to handle better in our CC.NET/NAnt build environment.
Currently, we have several .NET assemblies (shared common code that we use in client projects) that exist in different .NET solutions within different repositories in our SCM (Vault incidentally). They are all configured under CC.NET separately so we have a decent amount of control over their build and deployment at present.
We have developed a CMS system that uses some of the .NET assemblies and includes a common administration website project and a template website example project. Out of this one solution we have the following elements that need to be managed separately:
Admin interface is not tied to .NET so it is template based and we are developing a PHP backend for it currently.
CMS shared assembly built on top of our other common company-wide assemblies.
Control over functionality within each major CMS build/release.
I'd like the build output of this solution to be a Visual Studio template, which we can use to develop other client sites and better manage version changes within the CMS itself, as we add features to the codebase.
I have a rough approach for all this and think it is achievable, however, I wanted to open this topic up for discussion and see what everyone else is doing when it comes to managing the build and deployment of multiple solutions.
Main considerations for us are:
Do we make use of the integration queue functionality in CC.NET to ensure a build order and pull together the assemblies we need for the CMS at build time?
Debugging within a CMS client site i.e. stepping into the shared assemblies' code when the client solution is a version of the base CMS system and therefore separate.
Developing and extending the CMS when it uses shared assemblies i.e. do we add the assembly projects to the trunk solution during development (across source control repositories) and then rely on the build to pull it together or do we use a different approach entirely?
Any other issues people might have experienced that could change our way of thinking?
Hopefully this question isn't too vague and some of you will have dealt with these issues. I look forward to hearing everyone's experiences.
Many thanks!
Tim
I unfortunately cannot answer all of your points, but let me start with this one:
Do we make use of the integration queue functionality in CC.NET to ensure a build order and pull together the assemblies we need for the CMS at build time?
The short answer is: yes, you should. The queue attribute ensures a build order within the running instance of CC.NET and gives you serialization of the builds that depend on each other. For specifying which projects depend on each other, you should use project triggers. Do not rely on queuePriority for this task.
You should most likely pull the pieces you need for the build at build time, unless you have some time constraints on your individual builds.
Re:
Developing and extending the CMS when it uses shared assemblies i.e. do we add the assembly projects to the trunk solution during development (across source control repositories) and then rely on the build to pull it together or do we use a different approach entirely?
I'm fundamentally against distributing binaries in the trunk unless it's a library that does not need to be updated or changed on a frequent basis. If you build the shared assemblies yourself, you should consider pulling them from the artifacts on the build server(s).
