I am really new to Logic Apps, and I have created my first LA to delete ARM deployment history, to work around this error about the maximum deployment quota:
Creating the deployment [DEPLOYMENT_NAME] would exceed the quota of
‘800’. The current deployment count is ‘800’, please delete some
deployments before creating a new one.
The LA I have created lists all deployments and then deletes them via a For Each loop action. But this deletes ALL my deployment history, and I want to keep the latest 100 deployments.
How can I skip the first 100 deployments in my Logic App? I am not sure how to use the Filter or Top parameters.
In PowerShell I can accomplish this easily with this line:
Get-AzResourceGroupDeployment -ResourceGroupName myRG | Select -Skip 100 | Remove-AzResourceGroupDeployment
How can I do this in my Logic App?
The input of your For Each could use the skip function for this. The input expression would be something like this:
skip(body('List_template_deployments'), 100)
Note that this assumes the list action returns deployments newest first (the same ordering your PowerShell one-liner relies on), so that skipping the first 100 leaves only the older deployments to be deleted.
I'm creating an app where a user can create a piece of data that could be presented in the UI at a later date. I'm trying to find a way to create cron entries dynamically using either Java code (for Android devices) or Node.js code (a Firebase cloud function that generates a cron job). I haven't been able to find a way to do it dynamically, and based on what I've read it may not be possible. Does anyone out there know a way?
Presently the only way to create GAE cron jobs is via deployment of a cron configuration file (by itself or, in some cases, together with the app code), which today can only be done via CLI tools (from the GAE or Cloud SDKs).
I'm unsure if you'd consider programmatically invoking such CLI tools as qualifying to 'create cron entries dynamically'. If you do, generating the cron config file and scripting the desired CLI-based deployment would be one approach.
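If you take that route, here is a minimal PowerShell sketch, assuming the Cloud SDK's gcloud is on the path; the job URL and schedule are purely illustrative:

# Regenerate cron.yaml from the user-supplied data, then deploy only the
# cron configuration (deploying cron.yaml by itself updates just the cron jobs).
@"
cron:
- description: user-defined job
  url: /tasks/user-job-42
  schedule: every 24 hours
"@ | Set-Content cron.yaml

gcloud app deploy cron.yaml --quiet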
But creating jobs via an API is still a feature request; see How to schedule repeated jobs or tasks from user parameters in Google App Engine?. That question also contains another potentially acceptable approach (along the same lines as the one mentioned in #ceejayoz's comment).
I am in the process of converting our legacy custom database deployment process, with its custom-built tools, into a full-fledged SSDT project. So far everything has gone very well. I have composite projects that can deploy a base database, as well as projects that deploy sample and test data.
The problem I am having now is finding a solution for running some sort of code that can call a web service to get an activation code and add it to the database as the final step of the process. Can anyone point me to a hook that I might be able to use?
UPDATE: To be clearer I am doing this to make it easier to maintain and deploy our sample and test data to a local machine. We can easily use Jenkins to activate the sites when they are deployed nightly to our official testing environments. I'm just hoping to be able to do this in a single step to replace the homegrown database deploy tool that we use now.
In my deployment scenario I wrapped the database deployment process in some PowerShell scripts which take care of the necessary prerequisites. For example:
the PowerShell script is started, and it stops some services
next it runs sqlpackage.exe or pre-produced SQL deployment scripts
finally the PowerShell script starts the services again.
You can pass parameters from PowerShell to the SQL scripts or to sqlpackage.exe as sqlcmd variables. So you can call the web service first, then pass the activation code as a sqlcmd variable and use that variable in the post-deployment script.
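Here is a minimal sketch of that wrapper in PowerShell; the licensing endpoint is hypothetical, and the post-deployment script is assumed to read the $(ActivationCode) sqlcmd variable:

# Call the (hypothetical) licensing web service to get an activation code.
$code = Invoke-RestMethod -Uri 'https://licensing.example.com/api/activation'

# Publish the dacpac, passing the code through as a sqlcmd variable; the
# post-deployment script can then use it, e.g.
# INSERT INTO dbo.Settings (Name, Value) VALUES ('ActivationCode', '$(ActivationCode)');
& sqlpackage.exe /Action:Publish `
    /SourceFile:MyDatabase.dacpac `
    /TargetServerName:localhost `
    /TargetDatabaseName:MyDatabase `
    /v:ActivationCode=$code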
Particularly if it's the final step, I'd be tempted to do this separately, using whatever tool you're using to do the deployment: Powershell scripts, msbuild, TFS, Jenkins, whatever. Presumably there's also a front-end of some description that gets provisioned this way?
SSDT isn't an 'eierlegende Wollmilchsau' (German for an egg-laying wool-milk-sow, i.e. a single tool that does everything); it's a set of tools for managing database changes.
I suspect if the final step were "provision a Google App Engine Instance and deploy a Python script", for example, it wouldn't appear to be a natural candidate for inclusion in an SSDT post-deploy script, and I reckon this falls into the same category.
I am working on an ASP.NET MVC4 project where the same project needs to be deployed for many clients on a daily basis; each client will have its own domain/subdomain, a separate app pool, and a separate DB (MS SQL).
Doing each deployment manually could take at least 1-2 hours if everything goes well. Is there any way I can do this in some automated fashion?
Moreover, we also need to update all of the apps when a new version is released, maybe one by one or all of them at the same time. Doing this manually could take weeks, and once we have more clients it will not be possible to do these updates manually.
The update involves suspending the app for some time, taking a full backup of files and the DB, updating the application code/files in the app folder, upgrading the DB with a script, starting the app again, and running a diagnostic script to check whether the update was successful; if not, we need to find out what went wrong.
How can we automate these updates? Any ideas on how to approach this issue would be great.
As a developer for BuildMaster, I can say that this scenario, known as the "Core Version" pattern, is a common one. If you're OK with a paid solution, you can set up deployment plans within the tool that do exactly what you described.
As a more concrete example, we experience this exact situation in a slightly different way. BuildMaster has a set of 60+ extensions that rely on a specific SDK version. In our recent 4.0 release, we had to re-deploy every extension because of breaking API changes within the SDK. This is essentially equivalent to having a bunch of customers and deploying to them all at once. We have set up our deployment plans such that any time we create a new release of the SDK application, we have the option to set a variable that says to build every extension that relies on the SDK.
In BuildMaster, the idea is to promote a build (i.e. an immutable object that travels through various environments like Dev, Test, Staging, Prod) to its final environment, where it becomes the deployed build for the release. In your case, this would be pushing your MVC application to its final environment, which would then trigger the deployments of all dependent applications (i.e. your customers' instances of your application). That is how the plan for our SDK works.
For your scenario, you would only need the single action, "Promote Build". As I mentioned before, any dependents would then be promoted to their final environments, so all your customer deployments would kick off once that action is run during deployment. Our Azure extension's deployment plan for its final environment, for instance, works the same way.
These plans are marked "Shared", which means every extension we have uses the exact same deployment plan but utilizes different variables to handle the minor differences like names, paths, etc.
Since this is such an enormous topic I could go on for ages, but I think that should be sufficient for your use case if you wanted to try it out.
There are others, but you could set up Team Foundation Server to deploy automated builds.
http://msdn.microsoft.com/en-us/library/ff650529.aspx
I find the easiest way to do this from an MVC project is to create a publish profile.
This is done by right-clicking your project, selecting Publish, and then configuring it to your needs.
Then from TFS you create a new build definition; this kicks off a wizard which takes you through it.
There are quite a few options which would be too long to go into for every scenario.
The change I usually find most important is setting an MSBuild argument to deploy with the publish profile.
This can be found at Process > Advanced > MSBuild Arguments.
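For example, the equivalent full command line looks like this ("Production" is a hypothetical publish profile name; in the build definition you would paste only the /p: switches into the MSBuild Arguments field):

msbuild MyApp.csproj /p:DeployOnBuild=true /p:PublishProfile=Production /p:Configuration=Release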
Once this is configured correctly, it's a simple case of right-clicking and selecting Queue New Build to build and deploy.
You will need a different publish profile/build configuration per deployment environment.
For backups I use a PowerShell script which can be called manually or from TFS.
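A minimal sketch of such a script, with illustrative paths (Compress-Archive requires PowerShell 5 or later):

# Zip the current site folder before deploying over it.
$source = 'D:\Sites\MyApp'
$stamp  = Get-Date -Format 'yyyyMMdd-HHmmss'
Compress-Archive -Path "$source\*" -DestinationPath "D:\Backups\MyApp-$stamp.zip"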
You also have a drop folder in TFS which keeps a backup of the last x releases.
The databases are automatically configured via SQL Server to back up; TBH I didn't set that up, it was a DB admin guy who is also involved with releases.
From the dev testing side I use jMeter (http://jmeter.apache.org/) to run some automated scripts that check that users can log in and view certain screens, just to confirm nothing major has gone wrong. However, there is usually a testing team to run more detailed tests; again, not set up by me.
All of the above will probably take you some time to set up, but in the long run it will literally save you weeks of time over a year.
A free alternative to TFS is http://www.cruisecontrolnet.org/; I have used this in the past too and it is pretty good.
You can automate your .Net deployments with Beanstalk, which will give you a way to trigger deployments with a single click, watch progress, manage permissions and see history of deployments. Check out this guide on the topic:
http://guides.beanstalkapp.com/deployments/deploy-dotnet.html
I hope you will find it useful.
P.S. - I work at Beanstalk.
On the project I'm working on, we have a continuous deployment setup. The goal is to always install the latest working build to production, unless someone manually overrides this behaviour.
In order to make this work, we:
Run static code analysis
Run unit tests
Run integration tests
Run automatic UI tests, to the extent this is feasible
If any of the above steps fails, the build process is halted and the build marked as failed. If the installation package is created, it is then installed, step by step, to
CI --> staging --> production
At each step we run integration and UI tests for the environment, to make sure we didn't introduce something new that fails in the subsequent environments. If none of the tests fail, and N minutes pass without anyone pressing the panic button, the build gets promoted to the next env. If the tests fail, we want to delete the package and discard it completely. The installation packages are, however, delivered to other servers, so we need to run a bunch of remote (shell) scripts to make this step happen.
The problem is that there is a big set of failure cases which we cannot reliably test in the normal automated cycle, e.g. page layout, or integrations that fail only in production, and so on.
So the actual question: How shall I demote/delete builds, once they've been promoted? Is it possible to either run a remote script when doing delete build or use any of the promotion plugins to achieve this functionality? Is there some think-outside-the-box solution for this that I might not have thought about?
Instead of deleting builds manually, you may write a Jenkins job that accepts the build number as a parameter, deletes that build, and then does the rest of the housekeeping. You can configure Jenkins access privileges so that people do not delete builds manually by accident.
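As a rough sketch, the job body could boil down to something like this (the job name and URL are illustrative, and authentication/CSRF-crumb handling depends on your Jenkins configuration):

# Delete a given build of a given job via the Jenkins remote API.
param(
    [string]$JobName = 'my-app',   # hypothetical job name
    [int]$BuildNumber
)
$jenkins = 'https://jenkins.example.com'
$cred    = Get-Credential          # an account allowed to delete builds

Invoke-RestMethod -Method Post -Credential $cred -Uri "$jenkins/job/$JobName/$BuildNumber/doDelete"

# ...followed by the rest of the housekeeping (e.g. removing the package
# from the remote servers).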
This might be a very particular case, but we decided against creating a separate job for removing the builds, for the very simple reason of keeping all the logging related to a specific build number in one single place. The setup was the following:
Promotion here means make the installation package (RPM) available to the given server, where auto-update handles the actual upgrade of the package.
We have one main build that builds every time a new commit is available. We had some fine-tuning related to quiet times etc., but basically every newly pushed set of commits resulted in a new build. The build runs all the relevant and available testing, which is far from complete, and probably never will be.
Every hour a separate promotion step handles promotion from staging to production. This build then kicks off another promotion, which takes the latest accepted build from CI to staging. There is a 30-minute delay before builds are promoted CI --> staging, to prevent accidental promotions of last-second commits; the delays were achieved with some bash find scripting. The promotions are done in this order to make sure a build is available in staging for (at least) one hour before going to production.
The actual answer:
The promotion steps were done as separate builds. In order to do a real promotion, rather than just a separate build with a separate log, that build kicks off an actual promotion on the main build, using curl to call the remote HTTP API. This leaves the relevant promotion star in the main build's log. Using different colors, the promotions are visible at a glance.
To demote builds I decided to create a separate "demote build" promotion step. This issues a purple star as a sign of the build being defective, and thus removed. The demotion is done by accessing the correct build in the UI and pressing the "Remove build" button. No automation has been added to this step, but by creating a separate step it would be fairly easy to automate the demotion as well. We, however, have not gotten quite that far yet.
The benefits of this approach include:
A build is deleted by accessing the failed build, not by providing parameters. This makes it much easier to document, and to get right under pressure.
Having a "panic button" like this available for anyone to press, builds trust and ownership for the process not only amongst the developers but also managers and DevOps.
It's dead simple to spot demoted builds, as the log is available alongside the other promotion logs.
Having all the relevant promotion calls in the same build makes further scripting easier
The most acute things we still have to improve include automating the testing in the later stages of the build pipeline, and finding a suitable way of downgrading builds after a demotion. E.g. in production, a defective build and a demotion must always lead to installing the last good build, which has turned out to be fairly hard to achieve: production data centers are rarely accessible at this level from the development DC where the CI system sits. Also, stopping and starting the build pipeline must be automated, as otherwise there is the chance of slipping back into the manual state.
Naturally, in the spirit of continuous improvement, there are always things to improve. The whole setup is something of a bash/perl scripting mess, but since it's scripted and repeatable, there is always the option of improving one small piece at a time. The most important thing is the automation, as it allows for incremental steps, which any manual steps more or less prevent.
For anyone looking for an easy way to delete a build with custom steps:
Create a 'defective' promotion.
Make it manually triggered.
Force it to run on the master.
Add a choice parameter DELETE with choices NO and YES.
Add an Execute Shell action.
if [ "${DELETE}" == "YES" ]; then
# TODO: my custom steps
curl -X POST ${PROMOTED_URL}/doDelete"
fi
To delete a build now, just go to promotions, flip the choice to YES and click approve.
How can I automate the web-application build process, which includes the following steps:
Change the connection string.
Recreate the database from scripts.
Deploy the web site by FTP.
Copy some files to the server in addition to the application.
And maybe perform some initialization operations.
Should I write a script/program, use Visual Studio, or some other tool?
Personally I use a Continuous Integration tool to do this kind of work.
The one I mainly use is Team City by JetBrains.
This kind of software can watch your source control repo for new check-ins, perform builds, and publish builds to servers, as well as run pre/post build events.
You have to start learning MSBuild. It is VERY simple and straightforward, so just start and you'll see ;)
In addition to the built-in features, it has the MSBuild Community Tasks pack with many tasty things, so you will be able to:
Replace connection string in config file using regex or replace whole config with predefined connection string (FileUpdate or Copy task)
Execute database scripts (MSBuild.Community.Tasks.SqlServer.ExecuteDDL)
Deploy site using Copy task
And many others. A minimal sketch combining these tasks is shown below.
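This is only a sketch, assuming MSBuild.Community.Tasks is installed in its default location; the connection strings, file names, and paths are illustrative, and you should verify the task attributes against your version's documentation:

<!-- deploy.proj -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Deploy">
  <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

  <ItemGroup>
    <SiteFiles Include="publish\**\*.*" />
  </ItemGroup>

  <Target Name="Deploy">
    <!-- Replace the connection string in the deployed config via regex. -->
    <FileUpdate Files="publish\Web.config"
                Regex="connectionString=&quot;[^&quot;]*&quot;"
                ReplacementText="connectionString=&quot;Server=prod;Database=App;Integrated Security=SSPI&quot;" />

    <!-- Execute the database scripts. -->
    <ExecuteDDL ConnectionString="Server=prod;Database=App;Integrated Security=SSPI"
                Files="Scripts\recreate-db.sql" />

    <!-- Copy the site plus any extra files to the server. -->
    <Copy SourceFiles="@(SiteFiles)"
          DestinationFolder="\\webserver\wwwroot\%(RecursiveDir)" />
  </Target>
</Project>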
You can run pre- and post-build events in Visual Studio. To do this, simply right-click on the project, and in the project properties navigate to the 'Build Events' options. Here you can specify the pre- and post-build events (you can also specify when the event runs: on a successful build or otherwise).
Once the project has been successfully built, the post-build event can be set up to perform the tasks specified. You can detail the steps either in a separate file or in the Visual Studio project's build events themselves.
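As a small illustration, the post-build event command line can hand the build output to a PowerShell script (Deploy.ps1 and its -TargetDir parameter are hypothetical; $(ProjectDir) and $(TargetDir) are standard build-event macros):

powershell.exe -ExecutionPolicy Bypass -File "$(ProjectDir)Deploy.ps1" -TargetDir "$(TargetDir)"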
More information
Pre/Post Build event command line arguments
How to: Specify Build Events (C#)
Much along the lines of the continuous integration concept Jamie mentions, we use BuildMaster internally for all of our applications, since we develop it :)
Now that we have a version offered for free, I'll share some thoughts on each of your bullet points:
Change connection string
This is something that is handled uniquely by the tool. Each environment would get its own "instance" of a configuration file and in a deployment plan you can use the "deploy configuration files" action to put them in any environment. This means there are no transforms to worry about since the config file is stored and versioned within the tool.
Recreate database by scripts
This is another major feature we have. Object code (stored procs, views, etc.) can be run every time with a DROP/CREATE combo, but change scripts that add indexes or drop columns can only be run once (you can't bring a column's data back without a restore!).
BuildMaster handles these types of change scripts differently - they can only be run at most once against an environment's instance of your database. This makes it super easy to bring any new or existing initialized database schema up-to-date.
Deploy web-site by FTP
Just add an action to your deployment plan, and when you click Create Build or Promote Build, it will do that.
Copy some files to server in addition to application
If the process is repeatable you can do this easily, if need be by using a manual action that will remind you to do it.
And maybe perform some initialization operations
This sounds like a "change control" to me, a one-time change when you release. We support these as well but not in the free version unfortunately.