I want to install APS (Alfresco Process Services) in a production environment.
There are two ways to do this installation:
By using the installer file
By using the WAR files
Which is the best approach, and what are the issues with the other one?
Thanks in advance.
For a production installation you should deploy the WAR file into your own Java container.
The reason is that it gives you a lot more flexibility to configure the APS properties outside the WAR file.
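As a rough sketch of that approach on a standalone Tomcat (the WAR and properties file names, paths, and service name here are assumptions; check your APS version and distribution):

```bash
# Stop the container before deploying.
sudo systemctl stop tomcat

# Drop the APS web application into the container (file name assumed).
sudo cp activiti-app.war /opt/tomcat/webapps/

# Keep the configuration outside the WAR: properties placed on the shared
# classpath survive redeploys and upgrades of the WAR itself.
sudo cp activiti-app.properties /opt/tomcat/lib/

sudo systemctl start tomcat
```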
While the installer is easy, it makes many assumptions about your environment.
Another option, in case you weren't aware, is that APS is now available in the AWS environment as a Quick Start application. This makes setting up a hosted environment quick and easy.
I want to deploy a Meteor app to a custom Linux server. Of course, the currently installed project packages must be preserved on the destination server.
So I need to pack my local project structure, upload it to the server and unpack it there (or something else).
I think I can (or should) at least remove the contents of the .meteor/local/* folder.
What about .meteor/.id file content? Anything else?
I can't find any documentation explaining how to do this, but given Meteor's philosophy of simple usage there must be some simple command to pack an application distribution.
If you are deploying, you should use the meteor build command (docs), which creates a tarball containing your application.
To make the setup even simpler you could use Meteor Up, which takes care of the whole deployment process, including preparation of the target server.
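A minimal sketch of the manual route (the app name, server, paths and environment variables below are placeholders for illustration):

```bash
# Build a production tarball of the app (run from the project root).
meteor build ../output --architecture os.linux.x86_64

# Copy it to the server and unpack and run it there.
scp ../output/myapp.tar.gz user@myserver:/opt/myapp/
ssh user@myserver '
  cd /opt/myapp &&
  tar xzf myapp.tar.gz &&
  (cd bundle/programs/server && npm install) &&
  MONGO_URL=mongodb://localhost/myapp ROOT_URL=http://myserver PORT=3000 \
    node bundle/main.js
'
```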
I am a front-end developer. I do a lot of PHP, CSS, JS and HTML. Currently, the way we deploy to the staging environment is to push our code to the Git server, go to our staging server and pull into some directory, and then manually move the files from that directory to the correct directories in our Apache web server.
Will it be overkill if I use TeamCity to do this? I intend to write an Ant script that does the copying, which is to say the runner type will be Ant. So every time there is a push to the Git repo, TeamCity will pull and then run the Ant script to copy the affected code to the correct directories.
If not, I would gladly listen to any suggestions.
Thanks
TeamCity may be overkill right now, as you would just be using it as a fancy trigger for your build.
But consider adding custom build parameters, which it can pass to your script. You can then start automating builds to different environments through a friendly UI.
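As an illustration of what such a parameterised script might look like (this is just a hedged sketch in shell rather than Ant; the environment names, web roots and rsync usage are all assumptions, not TeamCity specifics):

```bash
#!/usr/bin/env bash
# deploy.sh -- copy the checked-out working copy into the web root for a given
# environment. TeamCity could pass a build parameter as the first argument.
set -euo pipefail

ENVIRONMENT="${1:?usage: deploy.sh <staging|production>}"

case "$ENVIRONMENT" in
  staging)    WEB_ROOT=/var/www/staging ;;
  production) WEB_ROOT=/var/www/production ;;
  *) echo "unknown environment: $ENVIRONMENT" >&2; exit 1 ;;
esac

# Mirror the repository contents into the Apache web root, excluding VCS metadata.
rsync -av --delete --exclude='.git' ./ "$WEB_ROOT/"
```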
You then have a platform on which to base a proper deployment process further down the road.
Come the time when you need PHP compilation, JS minification, or unit testing, it's all just another step in your TeamCity configuration.
I would recommend it.
While investigating CI tools, I've found that many CI installations also integrate with artifact repositories like Sonatype Nexus and JFrog Artifactory.
Those tools seem tightly integrated with Maven. We do not use Maven; we don't even compile Java. We compile C++ using Qt/qmake/make, and this build works really well for us. We are still investigating CI tools.
What is the point of using an Artifact repository?
Is archiving to Nexus or Artifactory (or Archiva) supposed to be a step in our make chain, or part of the CI chain, or could it be either?
How might I make our "make" builds or perl/bash/batch scripts interact with them?
An artifact repository has several purposes. The main purpose is to hold a copy of Maven Central (or any other Maven repo) so that downloads are faster and you can still build if the internet is down. Since you're not using Maven, this is irrelevant for you.
The second purpose is to store files that you want to use as dependencies but cannot download freely from the internet. So you buy them, or get them from your vendors, and put them in your repo. This too is more applicable to Maven users and their dependency mechanism.
The third important purpose is to have a central place where you can store your releases. If you build a release v1.0 you can upload it to such a repository, and thanks to Maven's clean naming scheme it is easy to know how to find v1.0 and to use it with all the other tools. You could, for example, write a script that downloads your release with wget and installs it on a host (a sketch of this follows below).
Most of these repositories also support a staging process: you store v1.0 in a staging repo, someone tests it, and when it's fine they promote it to the release repo where everybody can find and use it.
It's simple to integrate them with Maven projects, and plenty of other build tools and frameworks can connect to them easily, such as Ant/Ivy, Groovy Grape and so on. Because of the naming scheme, there is also nothing stopping you from using bash or Perl to download/upload files from them.
So if you have releases or files that should be shared between projects and you don't yet have a good solution for that, an artifact repository could be a good starting point to see how this could work.
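As a hedged illustration of the wget idea (the repository URL, group, artifact name and version below are made up; the path simply follows the standard Maven repository layout):

```bash
#!/usr/bin/env bash
# Download release v1.0 of a hypothetical artifact from a Nexus/Artifactory-style
# repository using nothing but the predictable Maven directory layout, then unpack it.
set -euo pipefail

REPO=https://repo.example.com/repository/releases   # made-up repository URL
GROUP=com/example/myapp                             # groupId with dots replaced by slashes
ARTIFACT=myapp
VERSION=1.0

wget -O "/tmp/${ARTIFACT}-${VERSION}.tar.gz" \
  "${REPO}/${GROUP}/${ARTIFACT}/${VERSION}/${ARTIFACT}-${VERSION}.tar.gz"

mkdir -p /opt/myapp
tar -xzf "/tmp/${ARTIFACT}-${VERSION}.tar.gz" -C /opt/myapp
```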
As mentioned here:
Providing stable and reliable access to repositories
Supporting a large number of common binaries across different environments
Security and access control
Tracing any action done to a file back to the user
Transferring a large number of binaries to a remote location
Managing infrastructure configuration across different environments
I've always deployed my web applications via FTP (sometimes even xcopy) and then run the database scripts manually.
I started deploying this way in the 90s, but lately I've seen a few web apps with installers, and I'm starting to question whether I'm locked into an outdated process. I'm a consultant and my apps are usually internal, so I don't worry about distributing them or having others install them.
But I'm curious; does anybody create installers to deploy internal asp.net web applications?
If so, why? (Voluntarily, mandated, or part of an automation process)
And have you had any problems doing it this way?
Absolutely. We do this for all of our apps. We create the installer and run it in the QA and UAT environments to test, so we know exactly what is going to happen in production. There are no guesses as to what order someone might do something in, or whether they missed a step. It makes things a lot easier.
Oh, and I forgot about the automated process too. We have systems in place (AnthillPro) which automatically deploy to the proper environments. The QA people don't have to wait for something to be done, because it's all done at 2 am. If they need to rerun the build with updates, the devs check the code in, we push a button, and it's automatically deployed. No waiting for the build engineer because he's in a meeting or sick or whatever.
You always want to have an automated way to build and deploy - it greatly reduces the chances of a one-off error if you forget a certain step. Also, it allows you to offload the deploy to someone else easily without having to teach them 100 customized steps. Whether the project is internal or not, all applications should follow best practices.
Personally I'm a bit like the OP; generally I just deploy using FTP, but then again my applications are typically internal or, in the case of other projects, 100% managed by me.
I've been thinking about this lately, however, and about how a proper deployment process might improve things; having to document a detailed install process can be a real pain.
I use PowerShell and have found it really easy to automate lots of tasks. It will probably feel a bit different at the very beginning, but in the end you will see that it's all about the power of the .NET libraries!
I have used the "Web Setup Project" to create an MSI that installed the output of a "Web Deployment Project" for an internal app. Our server admin wasn't up to the task of doing a 50-step manual install. For my current app, my server admin doesn't like the 'black box' feel of MSI installers and prefers getting a pile of files and a 50-step deployment manual. (See a pattern here? Ask your server admin what he wants.)
The Web Setup Project doesn't make it immediately obvious how to install to anything other than the "Default Web Site"; other than that, it made the installation process repeatable and provided a built-in way to roll back (by just running the installer from one version ago).
This of course assumes that your virtual directory doesn't hold any user-modified content; I wouldn't trust an MSI to properly merge user-created and new files.
We use the "XCopy" deploy model here, since the Ops folks have their own method of setting up security on a new web application on the server.
However, we did need to use an installer when we had to install a web application that used a newer version of Crystal Reports, since it had to do something special with a key and we didn't have a full-blown version of CR on the server itself. So keep that in mind when working with third-party apps: they may need some kind of merge module, which an MSI handles easily.
Yep... we have an app that needs a lot of prerequisites set up: a web service, a Windows service, user accounts, security, folder creation, GAC bits, etc. I rolled it all up into a nice MSI with custom actions that can install and uninstall cleanly. It saves about an hour's worth of work when deploying on a new box.
A lot of the other smaller apps are just deployed by doing Publish Website to a local folder then ftp'ing the contents to the target.
It greatly depends upon the scale of your project, your environment and your internal user base. I rarely deploy with an MSI because we are too small an operation to have multiple environments (except for SharePoint, which is different altogether). We develop and use VS to deploy web apps to a development box; assuming they are approved, we then use VS again to deploy to the live box.
The only proviso is that we have multiple copies of the web.config (suffixed with test, dev and live) and we delete the suffix off the relevant file depending upon where it's being deployed.
It's probably not the best methodology (I know it's not), but it works and it aids rapid deployment of small to medium sized solutions in a small-scale user environment.
F5ToDebug...
You're saying it's OK to take shortcuts if you don't have time to do it properly?
"who's going to test the code on the test environment?" You said it yourself that you have config files for _test - why would that not be a suitable test?
It seems like there are so many different ways of automating one's build/deployment that it becomes difficult to parse all the different scenarios that people cover in tutorials on the web. So I wanted to put the question to the Stack Overflow crowd: what would be the best way to set up an automated build and deployment system using the following configuration?
Visual Studio 2008
Web Application Project
CruiseControl.NET
One of the first things I tried was to have CCNet automatically zip the output and copy it to the server, but that requires manual work to unzip at the destination. However, if we try to copy all the files individually, it could potentially take a long time if it's a large application (the build server lives outside the datacenter, in our office ... I know).
Also of particular interest is how we would support multiple environments as we have dev, qa, uat, and then of course prod.
MSDeploy seems really interesting, but unless I'm interpreting the literature incorrectly, doesn't help in the scenario of deploying from the output of a build server. If anything, it seems like it'll be useful in deploying one build across a build farm ... but even for deploying from one environment to another, one would have to manually change config settings and web service URLs, etc.
I recently spent a few days working on automating deployments at my company.
We use a combination of CruiseControl, NAnt and MSBuild to generate a release version of the app. Then a separate script uses MSDeploy and XCopy to back up the live site and transfer the new files over.
Our solution is briefly described in an answer to this question: Automate Deployment for Web Applications?
You might be interested in MSDeploy. Here's a Scott Hanselman post on this. It's only available as a technical preview at the moment (September 2008) but is worth evaluating against your requirements.
There is another new build tool (a very intelligent wrapper) called NUBuild. It's lightweight, open source, extremely easy to set up, and provides almost no-touch maintenance. I really like this new tool, and we have made it the standard tool for the continuous build and integration process of our projects (we have about 400 projects across 75 developers). Try it out.
http://nubuild.codeplex.com/
Easy-to-use command-line interface
Ability to target all .NET Framework versions, i.e. 1.1, 2.0, 3.0 and 3.5
Supports XML-based configuration
Supports both project and file references
Automatically generates the “complete ordered build list” for a given project; no-touch maintenance
Ability to detect and display circular dependencies
Performs parallel builds; automatically decides which of the projects in the generated build list can be built independently
Ability to handle proxy assemblies
Provides visual cues to the build process, e.g. showing “% completed”, “current status”, etc.
Generates a detailed execution log in both XML and text format
Easily integrated with the CruiseControl.NET continuous integration system
Can use a custom logger like XMLLogger when targeting .NET 2.0 and above
Ability to parse error logs
Ability to deploy built assemblies to a user-specified location
Ability to synchronize source code with the source-control system
Version management capability
Do you have the ability to run commands remotely? The PsExec utility from Sysinternals would let you run a command-line unzip program on the remote machine. If you have a script that copies the build as a .zip file to the remote site, you would just need one more line for the PsExec call to unzip the files.
I had a related question about getting a deployable set of files from an automated build. I found Web Deployment Projects (links and all in the old question) did what I needed - they're a VS and MSBuild add-on.
This is a common problem (and I wish I had read this question sooner) for all development, not just ASP.NET. As one of its developers, I can say that my team naturally uses BuildMaster internally for the entire release process, and for most scenarios it's free. Within the tool, we are able to perform all the standard CI builds to create artifacts and then set up an automation process to deploy these artifacts to any one of the 40+ servers we have hosted internally or externally, depending on the specific application or environment.
Since you specifically mentioned deployment to different testing environments, this is a fundamental aspect of the tool. The idea is to model the environment workflow (e.g. Integration -> QA -> Production) you already have in place and essentially promote a build all the way from source control to production. Most times, it's as simple as adding a deployment action that deploys an artifact to the environment, other times it can be much more complex.
You also casually mentioned configuration file changes are part of deployment, which is another built-in component to BuildMaster. The idea we had was to use the tool itself as the central hub for all configuration files and deployments, thus ensuring the latest changes are applied automatically with a simple "deploy configuration files" action in your deployment plan.
One thing you didn't mention with regard to this process is the database deployment aspect. Most ASP.NET applications require an associated database, otherwise they could just be static HTML files. It is crucial that the database schema gets updated to the appropriate database version with every deployment. There is, not surprisingly, a module within BuildMaster that handles this for you as well. The idea is to store DDL-DML scripts within the tool itself, and by executing scripts only once per environment, it ensures that all of your databases across each environment are up-to-date as your builds are deployed through them. Other scripts (e.g. stored procedures, views, triggers, etc.) are essentially code files and therefore belong in source control. These DROP-CREATE-CONFIGURE type scripts can be run each and every time in most cases with a simple deployment action.
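BuildMaster tracks this for you, but as a rough, tool-agnostic sketch of the "run each script only once per environment" idea (the sqlcmd call, file layout and bookkeeping file are assumptions for illustration, not BuildMaster's mechanism):

```bash
#!/usr/bin/env bash
# Apply numbered database change scripts exactly once per environment by
# remembering which ones have already run. Purely illustrative.
set -euo pipefail

ENVIRONMENT="${1:?usage: apply-db-changes.sh <environment>}"
APPLIED_LOG="applied-${ENVIRONMENT}.log"   # hypothetical bookkeeping file
touch "$APPLIED_LOG"

for script in db-changes/*.sql; do
  if ! grep -qxF "$script" "$APPLIED_LOG"; then
    echo "applying $script to $ENVIRONMENT"
    sqlcmd -S "sql-$ENVIRONMENT" -d MyAppDb -i "$script"   # assumed server naming
    echo "$script" >> "$APPLIED_LOG"
  fi
done
```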
Another piece of the deployment puzzle that most developers do not think about is process automation. Many developers need to perform sign-offs or fill out change request forms in order to manually perform these processes. Again, this is all available as part of the automated workflow setup within BuildMaster. You can set up blockers that do not allow promotion to, say, the QA environment unless all unit tests have passed, or block promotion to the Staging environment unless someone from the QA team approves the build and all issues in your issue tracking tool are resolved/closed for that particular release.
While I realize I left CC.NET out of the answer, our applications are all built and deployed through BuildMaster, so we no longer need it; we could, however, just as easily pick up the artifacts from a drop location and deploy them to later environments.
I see that many people use CC for their .NET projects, but why not use Jenkins and SonarQube? They have everything you need. I set all of this up in three days: a Windows Server 2008 R2 box with MSSQL, Jenkins, VisualSVN and SonarQube.
It all works great and you get all the metrics on your project. SonarQube uses Gallio, Gendarme, FxCop, StyleCop, NDepths and PartCover to get your metrics, and all of this is pretty straightforward since SonarQube does it automatically without much configuration.
I posted some pictures so you can get a feel for it. Here is Jenkins, which builds and gets the Sonar metrics, plus another job for deploying automatically to IIS.
And SonarQube, with all the metrics for my project. This is a simple MVC4 app, but it works great!
If you want more information I can be more specific, but I think you should at least consider Jenkins. If CC suits you better, at least you looked at a good alternative before you chose.
This whole setup uses MSBuild to build and deploy the apps.