Updating labels via XPO export - axapta

I have modified labels in my dev environment along with other code changes, but when I export this XPO and then import it into another environment, the labels in the target AOT are not updated.
If I open the XPO in Notepad, I can indeed see the newly modified labels. But at import time, the dialog does not seem to detect the changes.
All label IDs that I would want updated in the target are set to "Do not import" in the Details part of the import dialog.
If I have 10, 20, or 30 labels that changed, I would like to think AX would be smart enough to select "Use an existing label".
Any way to achieve this?
Thanks!
EDIT: Even when I manually set it to "Use an existing label" and set the ID of the label to use, the labels are not updated in the target :|

For AX 2009, instead of importing labels using XPOs, I would recommend the following:
Use a version control system such as TFS (especially when working with multiple devs).
Set up a build. (This could be an environment where you connect to your version control system and sync all the code that was checked in, or a script that uses combinexpo to combine all the XPOs from your version control system and imports the result.)
You should now have a stable build environment -> copy the ald and aod files from here
Stop the AOSes of your target environment, delete all .aoi, .ali, .alc and .alt files, and copy your .ald files from the build into the target environment. I would suggest you do the same for the .aod files to move the code.
The reason you shouldn't be using XPOs for deployment is that the process is prone to human error. The XPOs themselves should work fine, so they aren't the problem; the problems come from the fact that importing XPOs is a manual action.
The advantage of using source control is that you have traceability (you know what code is being delivered) and that it opens the door to an automated build procedure (which will result in fewer errors than manually transferring XPOs). With this build you can set up a daily build for your test environment, which again improves quality through better testing. When all tests pass for a build, you have a tested build which you can then deliver to your customer using .aod files (no XPOs are used, so you are delivering the exact code you have tested).
Of course, it could be that setting up an automated build and so on is overkill for you (I do think you should always use version control, though), in which case you can leave that out; the important thing is that you deliver code and labels from dev to test and all the way to your customer using .aod and .ald files.
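A rough PowerShell sketch of that file promotion, run on the target AOS machine (the service name, shares, and paths are assumptions for illustration):
$aosService = 'AOS50$01'                                  # hypothetical AX 2009 AOS instance name
$build  = '\\BUILDSRV\DynamicsAX\Appl\Standard'           # your stable build environment
$target = 'C:\Program Files\Microsoft Dynamics AX\50\Application\Appl\Standard'

Stop-Service $aosService
# Delete the index/cache files so they are rebuilt from the new application files.
Get-ChildItem $target -Recurse -Include '*.aoi','*.ali','*.alc','*.alt' | Remove-Item
# Copy the labels (.ald) and, to move code as well, the application object files (.aod).
Copy-Item (Join-Path $build '*.ald') $target -Force
Copy-Item (Join-Path $build '*.aod') $target -Force
Start-Service $aosService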

My tried-and-tested procedure for updating labels in AX 2009 is as follows:
Copy the modified *.ald files (which contain the labels; copy only the ones you need, for example only EN-US + CS) from DEV to PROD. It doesn't matter whether the AOS service is running or not.
That is all! The rest is done automatically once no user is connected (and no background job is running) to AX for a minute or so. Of course you can restart the AOS service to pick up the update sooner, but in my case that is not necessary.
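A one-line sketch of that copy (share names are hypothetical; AX label files are named like ax<LabelFileId><language>.ald, so the wildcards pick out just the EN-US and CS files):
Copy-Item '\\DEV\Appl\Standard\ax*en-us.ald', '\\DEV\Appl\Standard\ax*cs.ald' -Destination '\\PROD\Appl\Standard' -Force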
Good luck!

I ended up copying the label file (.ALD) to the application directory of the target environment. I guess if I had added or deleted labels, some files other than the .ALD file would have needed to be copied as well.

I have come across this issue a number of times. Please see the following blog entry in which I detail how to import labels as part of an XPO.
http://blog.m1cr0sux0r.com/2011/04/exporting-labels-with-xpos-in-dynamics.html

Related

How to upgrade (merge) web.config with web deploy (msdeploy)?

I'm trying to set up a deployment chain for some of our ASP.NET applications. The tool of choice is Web Deploy (msdeploy) - for now. Unfortunately I'm stuck on a problem.
A high level overview of the chain is thus:
Web developer creates the code and checks it into SVN;
The build server sees the update and builds the msdeploy .zip package of the website;
The .zip package is automatically put inside our installer and sent to various clients;
The clients run the installer on their webserver(-s);
The installer uses msdeploy internally to deploy the .zip package and create a new website or upgrade an existing one.
Msdeploy makes it easy to deploy a new instance, but I'm stumped about how to perform an "upgrade" install. The main problem is the web.config file. Each client will most certainly have made some customizations there to suit their specific environment. The installer itself offers to set some more critical parameters at the first-time installation (achieved by msdeploy's parameter mechanism), but they can do others by hand.
On the other hand, we developers also occasionally make changes to web.config, adding some new settings or removing obsolete ones. So I can't just tell msdeploy to ignore the file entirely. I need some kind of advanced XML modification mechanism. It could be a script that the developers maintain, but then it needs to be run ONLY at upgrades, not new installs.
I've no idea how to accomplish this.
Besides that, sometimes there's also some completely weird upgrade logic. For example, the application comes with our company logo, but some clients have replaced that .png file to show their own logo. Recently we needed to update the logo - but only for clients that hadn't replaced it with their own.
Similarly, there might be some cache folders that might need to be cleaned at SOME upgrades but not at others. Or folders with user content that may not be touched (but come with default content at the initial installation). Etc.
How do you normally achieve this dual behavior for msdeploy packages? Do I really need to create 2 distinct packages for every application?
Suggestion from personal experience:
Isolate customisations
Your customers should have the ability to customise their setup, and the best way is to provide them with something like an override file. That way you install the new package and follow up by superimposing your customer's customisations on top of your standard setup. If it's a brand new install then there will be nothing to superimpose.
top-level
├── standard files            <- never touched or changed by the customer
│   ├── images
│   └── settings.txt
└── customer files            <- the customer hacks this to their heart's content
    ├── images
    └── settings.txt_override
Yes, this does mean that some kind of merging process needs to happen, and there needs to be a script that does it, but this approach has several advantages:
For settings that suddenly become redundant just issue a warning to that effect
If a customer has their own logo provide the ability to specify this in the override file
The message is clear to customers. Stay off standard files.
If customers request more customisable settings, write the defaults into the override file during upgrades if they do not already exist.
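A minimal sketch of the superimpose/merge step, assuming the layout above and simple key=value settings files:
$settings = @{}
Get-Content 'standard\settings.txt' | ForEach-Object {
    $key, $value = $_ -split '=', 2
    $settings[$key] = $value
}
Get-Content 'customer\settings.txt_override' | ForEach-Object {
    $key, $value = $_ -split '=', 2
    if (-not $settings.ContainsKey($key)) {
        # The setting no longer exists in the standard file: warn, per the first advantage above.
        Write-Warning "Override '$key' is redundant in this version"
    }
    $settings[$key] = $value               # the customer's value wins
}
$settings.GetEnumerator() |
    ForEach-Object { "$($_.Key)=$($_.Value)" } |
    Set-Content 'merged\settings.txt'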
Vilx, in answer to your question, the logic for knowing whether it is an upgrade or not must be contained in the script itself.
To run an upgrade script before installation
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -preSync:runcommand="c:\UpgradeScript.bat"
Or to run an upgrade script after installation
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -postSync:runcommand="c:\UpgradeScript.bat"
More info here
As to how you know it's an upgrade: your script could check for a text file called "version.txt", and if it exists, the upgrade bat script will run; the version is contained within the text file. A bit basic, but it should work.
This also has the added advantage of letting you merge a customer's custom settings between versions more elegantly, as you know which properties could be overridden for that particular version.
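A minimal sketch of that check, written here in PowerShell rather than a .bat for readability (the install path and version values are hypothetical):
$versionFile = 'C:\Test2\version.txt'
if (Test-Path $versionFile) {
    # version.txt exists: this is an upgrade, so run the upgrade-only logic.
    $installed = (Get-Content $versionFile -TotalCount 1).Trim()
    Write-Host "Upgrading from version $installed -- merge settings, clean caches, etc."
}
else {
    Write-Host 'Fresh install -- no upgrade steps to run.'
}
Set-Content $versionFile '2.0.0'   # record the newly installed version (hypothetical)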
Here are some general suggestions (not specific to msdeploy), but I hope they help:
I think you'll need to provide several installers anyway: one for the initial setup and one for each version-to-version upgrade.
I would suggest letting your clients merge the config files themselves. You could provide them with a detailed description of what was added/changed/removed, and/or include a utility that simplifies the merge. Maybe this and this links will give you some pointers.
As for merging the replaced logos and other client customizations, I think the best approach would be to support branding your application. I mean: move all branding details to a place where your new/upgrade installers won't touch them.
As for the rest of the adjustments made by your clients: they make those at their own risk, so the only help you can provide is a detailed list of changes (maybe even the list of changed files since the previous version) and a How-To article about merging the sources with tools like Araxis Merge or similar.
Or... you could create a utility, included in the installer, which tries to do all the tricky merging on the client's machine. I would not recommend this approach, as it requires a lot of effort/resources to maintain.
One more thing: you could focus on backing up the previous client copy before the upgrade. Then even if a client has trouble upgrading, it will always be possible to roll back. The only other thing to provide is a good feedback channel your clients can use to report their troubles. That feedback will let you figure out what troubles your clients run into and how to make their upgrade process more comfortable.
I would build on what the others have said, but I would do it with transformations and strict documentation about who configures what. The way you have it now relies on customer intervention against a config that is mission-critical to the app deploy process.
Create three config file areas: one for development, one for the "production generic" build, and one that is an empty template for the customer to edit.
The development instance should be self-explanatory. This is the transform that takes the production generic template and creates a web.config for your development server. (It sounds like you are shooting for a CI-type process here.)
The "production generic" transform should set the app up for a hypothetically perfect instance of the app. This is what the install would look like if the architect had his way.
The customer transform is used by the customers to set up the web config as required to meet their own needs. Write some documentation and see what happens. Edit the docs as you help customers through the process.
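If you go the transform route, here is a sketch of applying a customer transform with the Microsoft.Web.XmlTransform library (the DLL location and all file paths are assumptions):
Add-Type -Path 'C:\tools\Microsoft.Web.XmlTransform.dll'    # ships with Visual Studio / the Microsoft.Web.Xdt package

$config = New-Object Microsoft.Web.XmlTransform.XmlTransformableDocument
$config.PreserveWhitespace = $true
$config.Load('C:\build\Web.config')                         # the "production generic" config

# Superimpose the customer's transform on top of the generic build.
$transform = New-Object Microsoft.Web.XmlTransform.XmlTransformation('C:\customer\Web.Customer.config')
if ($transform.Apply($config)) {
    $config.Save('C:\deploy\Web.config')
}
else {
    Write-Error 'Customer transform failed to apply.'
}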
Is that what you were looking for? Thoughts?

Development shell in ASP.NET

I write a lot of code, most of which I eventually throw away when I am done with it. Recently I was thinking that if I just kept every small piece of utility script I wrote, named it, tagged it, and filed it in a dev shell, I would never lose the code; on top of that, I wouldn't need to redo something I have already done, which is the main motivation, as I keep finding myself rewriting things I've written before.
Is there an ASP.NET shell-style environment anywhere?
If not, what would be the best way to go about this?
I am looking to be able to do the following:
Write big or small bits of code.
Derive from or chain together already-written code/libraries/services.
Have everything on my desktop (would that mean IIS on the desktop, or is there a lighter-weight mechanism?), synced with the server at home, so if I am on the move I can still access this and make it part of my day-to-day workflow.
You could build a unique solution, with many class library projects inside. Each project would address a specific scenario, something like this:
MyStuff (Solution)
MyStuff.Common
MyStuff.Validation
MyStuff.Web
MyStuff.Encryption
etc.
Then you can put this solution on an online versioning service like Bitbucket or Assembla, so you can access your source code from anywhere, edit it, and commit it back to the server. This way you get the advantages of versioning, and you store your code on a remote server, so even if your hard disk breaks it's not a problem, because what's on the server is what matters.
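For example, with Git and a hypothetical Bitbucket repository, getting the solution onto the server is just:
git init
git add .
git commit -m "Initial import of MyStuff utility libraries"
git remote add origin https://bitbucket.org/yourname/mystuff.git
git push -u origin master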
You should either look into a source control system (Git perhaps?) or into a file storage / syncing / sharing service like DropBox.
DropBox would allow you to access code snippets from wherever you are and works really easily (just drop a file into a folder).
If you need versioning and branching you're going to have to look into a source control system. Since you have a server at home, that should be no problem.

What should I actually *do* in AnkhSVN when two people edit the same file under SVN

Our code is in SVN. We develop using Visual Studio and the AnkhSVN plugin.
Having used VSS before SVN I was used to the idea of locking files so other users know not to edit it while you are (in fact I thought this was the main point of source control, to prevent lost data from these conflicts).
I've been told this rarely happens and cases where you can't work because another dev is locking you out are more frequent (which sounds like a principle that might only apply to a certain subset of dev projects). But anyway, SVN is better and we're using it.
So when I do edit a file, and go to check it in, and find out the other user has edited it too, what do I actually do?
Surely there's a better way than saving a copy of my file, reverting changes, updating it from server, then merging my changes back in with winmerge? When I right-click the file and click 'merge' I'm told I should update first, so that's obviously not what I need.
Update: partial answer
OK, it sounds like I just hit update, then SVN merges non-conflicting changes automatically, and should let AnkhSVN know about any conflicting changes to allow some kind of resolution. Does anyone know how this works in AnkhSVN - what I'd actually do?
(if not I'll try it myself, accept the current top answer and update this question with the second half for posterity).
Actually, that's exactly what you need.
Edit: Clarification: what you need to do is just hit that update. You don't need to make a separate copy, revert, etc. Updating from the repository will merge those changes with your own.
When you do the update, where you have local changes to a file that has also been changed in the repository, SVN will merge the file in the repository with your local file, preserving both sets of changes.
In effect, it should do what you would do with Winmerge automatically.
If the changes conflict, typically meaning they occur within the same lines, there will be a merge conflict, which has to be resolved. Not knowing AnkhSVN, I don't know exactly what it will do in this case, but it should have some means of fixing things. Usually it involves looking over three files (your local file, the repository file, and what will become the merged result), where you pick the parts you want to keep from the two changed versions of the file.
After you've updated your local copy, merged and fixed any conflicts, you commit as usual.
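For reference, the command-line equivalent of that flow (AnkhSVN drives the same steps from inside Visual Studio; the file name and commit message are hypothetical):
svn update                                        # merges repository changes into your working copy
svn status                                        # files still marked 'C' are in conflict
# ...resolve the conflicts in your merge tool, then tell SVN they are fixed:
svn resolve --accept working MyConflictedFile.cs
svn commit -m "My changes merged with the latest from trunk"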
It's not a direct answer to your question, but I would recommend using SVN-Monitor in addition to AnkhSVN or any other Subversion client like TortoiseSVN.
With it you can watch your repository and be notified of changes. That way you can see what the other devs did in the repository, and probably tell whether your commit will conflict with other check-ins or whether you can update your local copy without any conflict.

With subversion or TFS, how would you go about setting up automatic builds?

I need some guidance with regards to naming convention and how this would happen automatically.
I am using /branches /trunk /tags folder structure.
I am using a build app (finalbuilder).
Which tag name would I tell it to pull from (or revision # etc)? Since it is going to change all the time, how do people perform nightly builds? Using the date in the name of the release?
Just use the revision number. Something like CruiseControl.NET should make this pretty easy for you.
Use TeamCity and set up a separate build for trunk + every branch. We do this and it's very helpful.
I would set up the build server to monitor the /trunk folder and trigger a build whenever anything is committed there. If wanted, you could have the build script end by creating a tag for the build (even though that might be a bit ambitious, depending on how often things are committed to the trunk). When I have done that, I have usually included the Subversion revision number in the tag name, and also in the version number of the files (to the extent that this is applicable).
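A sketch of that tagging step at the end of a build script (the repository URL is hypothetical; --show-item needs svn 1.9+):
# Tag exactly what was just built, using the revision number in the tag name.
$rev = svn info --show-item revision https://svn.example.com/myapp/trunk
svn copy "https://svn.example.com/myapp/trunk@$rev" `
         "https://svn.example.com/myapp/tags/build-r$rev" `
         -m "Nightly build from r$rev"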
You should be able to pull right from /trunk (and possibly run other nightly builds from branches that you think are important). It's not particularly useful to do a nightly build from a tag, since tags are generally static. When the build checks out, you can identify the checkout by the revision number; that way, if you ever need to find out what has changed since then, you can diff from that revision (or branch, whatever).
We use Hudson, which periodically checks (at an interval you set) for changes to whatever SVN path you give it. It then has the ability to run a shell script (we are building for iPhone so we use xcodebuild, but you could use whatever is appropriate for ASP.NET). We then upload the results to our local server under $REVISION. It would be easy to run automated tests as part of this as well.
Since you are asking about TFS:
We use a CommonAssemblyInfo file to increment the version of the DLLs. Nightly builds are typically made from the trunk.
We have a Main folder from which a "Dev" folder is branched for the current release. We make nightly builds from the current Dev branch, and manual, so-called reference builds once we merge the Dev stuff back into Main.
Builds are defined via the Build Agent functionality; custom tasks like incrementing the version number come into play via MSBuild.

Improving Your Build Process

Or, actually establishing a build process when there isn't much of one in place to begin with.
Currently, that's pretty much the situation my group faces. We do web-app development primarily (but no desktop development at this time). Software deployments are ugly and unwieldy even with our modest apps, and we've had far too many issues crop up in the two years I have been a part of this team (and company). It's past time to do something about that, and the upshot is that we'll be able to kill two Joel Test birds with one stone (daily builds and one-step builds, neither of which exists in any form whatsoever).
What I'm after here is some general insight on the kinds of things I need to be doing or thinking about, from people who have been in software development for longer than I have and also have bigger brains. I'm confident that will be most of the people currently posting in the beta.
Relevant Tools:
Visual Build
Source Safe 6.0 (I know, but I can't do anything about whether or not we use Source Safe at this time. That might be the next battle I fight.)
Tentatively, I've got a Visual Build project that does this:
Get source and place in local directory, including necessary DLLs needed for project.
Get config files and rename as needed (we're storing them in a special sub directory that isn't part of the actual application, and they are named according to use).
Build using Visual Studio
Precompile using command line, copying into what will be a "build" directory
Copy to destination.
Get any necessary additional resources - mostly things like documents, images, and reports that are associated with the project (and put them into the directory from step 5). There's a lot of this stuff, and I didn't want to include it previously. However, I'm going to copy only changed items, so maybe it's irrelevant. I wasn't sure whether I really wanted to include this stuff in earlier steps.
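For reference, a rough command-line equivalent of steps 2-5 (all paths and names are assumptions; Visual Build wraps similar actions, and msbuild/aspnet_compiler must be on the PATH):
# 2: drop the right config into place after getting source.
Copy-Item 'C:\build\src\configs\Production.web.config' 'C:\build\src\Web\web.config'
# 3: build the solution.
msbuild 'C:\build\src\MyApp.sln' /p:Configuration=Release
# 4: precompile the site into a build directory.
aspnet_compiler -v /MyApp -p 'C:\build\src\Web' 'C:\build\out'
# 5: copy to the destination server.
Copy-Item 'C:\build\out\*' '\\webserver\wwwroot\MyApp' -Recurse -Force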
I still need to coax some logging out of Visual Build for all of this, but I'm not at a point where I need to do that yet.
Does anyone have any advice or suggestions to make? We're not currently using a Deployment Project, I'll note. It would remove some of the steps necessary in this build I presume (like web.config swapping).
When taking on a project that has never had an automated build process, it is easier to take it in steps. Do not try to swallow too much at one time, otherwise it can feel overwhelming.
First, get your code compiling in one step using an automated build tool (e.g. NAnt/MSBuild). I am not going to debate which one is better; find one that feels comfortable to you and use it. Have the build scripts live with the project in source control.
Figure out how you want your automated build to be triggered, whether that is hooking it up to CruiseControl or running a nightly build task using Scheduled Tasks. CruiseControl or TeamCity is probably the best choice for this, because they include a lot of tools you can use to make this step easier. CruiseControl is free, and TeamCity is free up to a point, beyond which you might have to pay depending on how big the project is.
OK, by this point you will be pretty comfortable with the tools. Now you are ready to add more tasks based on what you want to do for testing, deployment, etc.
Hope this helps.
I have a set of Powershell scripts that do all of this for me.
Script 1: Build - this one is simple; it is mostly handled by a call to msbuild, and it also creates my database scripts.
Script 2: Package - This one takes various arguments to package a release for various environments, such as test, and subsets of the production environment, which consists of many machines.
Script 3: Deploy - This is run on each individual machine from within the folder created by the Package script (the Deploy script is copied in as a part of packaging)
From the deploy script, I do sanity checks on things like the machine name so things don't accidentally get deployed to the wrong place.
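A minimal sketch of the kind of sanity check described, with hypothetical machine names and paths:
$allowedTargets = 'WEB01', 'WEB02'        # the machines this package may be deployed to
if ($env:COMPUTERNAME -notin $allowedTargets) {
    throw "Refusing to deploy: $env:COMPUTERNAME is not an expected target machine."
}
Copy-Item '.\site\*' 'C:\inetpub\wwwroot\MyApp' -Recurse -Force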
For web.config files, I use the
<appSettings file="Local.config">
feature to have overrides that are already on the production machines, and they are read-only so they don't accidentally get written over. The Local.config files are not checked in, and I don't have to do any file switching at build time.
[Edit] The equivalent of appSettings file= for a config section is configSource="Local.config"
We switched from using a perl script to MSBuild two years ago and haven't looked back.
Building Visual Studio solutions can be done by just specifying them in the main XML file.
For anything more complicated (getting your source code, executing unit tests, building install packages, deploying web sites) you can just create a new .NET class deriving from Task that overrides the Execute function, and then reference it from your build XML file.
There is a pretty good introduction here:
introduction
I've only worked on a couple of .NET projects (I've done mostly Java), but one thing I would recommend is using a tool like NAnt. I have a real problem with coupling my build to the IDE; it ends up making it a real pain to set up build servers down the road, since you have to do a full VS install on any box that you want to build from in the future.
That being said, any automated build is better than no automated build.
Our build process is a bunch of homegrown Perl scripts that have evolved over a decade or so, nothing fancy but it gets the job done. One script gets the latest source code, another builds it, a third stages it to a network location. We do desktop application development so our staging process also builds install packages for testing and eventually shipping to customers.
I suggest you break it down into individual steps, because there will be times when you want to rebuild but not get latest, or maybe just need to re-stage. Our scripts can also handle building from different branches, so consider that as well in whatever solution you develop.
Finally we have a dedicated build machine that rebuilds the trunk and maintenance branches every night and sends out an email with any problems or if it completed successfully.
One thing I would suggest: ensure your build script (and installer project, if relevant in your case) is in source control. I tend to have a very simple script that just checks out/gets latest the "main" build script, then launches it.
I say this because I see teams just running the latest version of the build script on the server, but either never putting it in source control or only checking it in sporadically. If you make the build process "get" from source control, it will force you to keep the latest and greatest build script in there.
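A tiny bootstrap sketch of that idea (the repository URL and script name are hypothetical):
# Always fetch the latest build script from source control before running it.
svn export --force https://svn.example.com/build/Build-Main.ps1 .\Build-Main.ps1
& .\Build-Main.ps1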
Our build system is a makefile (or two). It has been rather fun getting it working, as it needs to run both on Windows (as a build task under VS) and under Linux (as a normal "make bla" task). The really fun thing is that the build gets the actual file list from a .csproj file, builds (another) makefile from that, and runs that. In the process, the makefile actually calls itself.
If that thought doesn't scare the reader, then (either they are crazy or) they can probably get make + "your favorite string mangler" to work for them.
We use UppercuT.
UppercuT uses NAnt to build and it is extremely easy to use.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
