AX2012 R2 CU7: VSProject node compilation issues

I am currently creating a build for AX using TFS activities. All of the steps are in place, and everything works in a simple scenario with a couple of dummy XPOs in TFS version control. But now I need to run the full scenario of building our codebase, and I'm running into compilation issues with the Visual Studio project nodes.
This is roughly what I do for code import and compilation:
import the label files
import the XPO with all of the code
import the Visual Studio projects using the SysTreeNodeVSProject\ImportProject method
When I run a full compilation, there are still compiler errors in code that depends on the assemblies produced by the VSProject nodes in the AOT.
This is caused by the output of the projects still being empty. Selecting them all and hitting compile still produces no result, but compiling them one by one does: the output of each project is generated in the AOT, and the dependent classes can then be compiled directly.
Compiling them separately causes the compiler to detect that each is a VSProject node, so the kernel calls the export and build functionality on the VSProjects, resulting in the output being generated.
The real question here is: for my build I now have to create an AutoRun file that compiles those VSProject nodes, but isn't the compiler supposed to be doing that during a full compile?!

Found a workaround: I modified SysTreeNodeVSProject.importProject() to compile the tree node after it has been imported.
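For anyone who would rather script the AutoRun route than customize importProject, here is a rough sketch of what that build step could look like. Everything named here is an assumption: BuildCompileVsProjects is a hypothetical X++ class you would write yourself (for example, looping over the AOT's Visual Studio project nodes and calling TreeNode.AOTcompile() on each), and the paths are placeholders.

# Write an AutoRun definition, then launch the AX client against it.
# BuildCompileVsProjects is a hypothetical X++ class (see note above).
$autoRunXml = @"
<AxaptaAutoRun exitWhenDone="true" logFile="C:\Builds\Logs\CompileVsProjects.log">
  <Run type="class" name="BuildCompileVsProjects" method="main" />
</AxaptaAutoRun>
"@
Set-Content -Path 'C:\Builds\CompileVsProjects.xml' -Value $autoRunXml -Encoding UTF8

$ax32 = 'C:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin\Ax32.exe'
Start-Process -FilePath $ax32 -ArgumentList '-startupcmd=autorun_C:\Builds\CompileVsProjects.xml' -Wait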

Related

Include SASS compiler in build definition in Visual Studio? (and avoid merging the CSS files when using TFS)

We plan on using SASS instead of plain CSS for our SharePoint project very soon. While testing and trying to set everything up, I ran into some problems:
We're using Visual Studio 2015, and on my developer machine I installed the Web Compiler extension to compile the .scss files and partials into a regular .css file.
That worked very nicely, but the problem is that a few developers will be working simultaneously on the styles. I want to avoid merging the resulting .css file every time someone checks something into source control (we're using Team Foundation Server).
Since a build runs every time someone checks in their changes, and deploys the resulting solution to the nightly build machine, the idea was to somehow include the SASS compiler in the build definition. This way the more readable .scss files get merged, and the build creates the resulting .css file and includes it in the solution.
Maybe I'm overcomplicating things, but I just couldn't get this to work so far.
Any ideas how I can achieve that?
(Maybe I should also mention that none of the dev machines have any internet connection.)
If you're building an MVC app, you can use MVC's bundling feature along with the SASS NuGet package. And, be sure to enable minification. There's a UseNativeMinification property on SassAndScssSettings. That way you don't need to deal with merging the css file when you get latest or check in. Reference this thread: SASS/TFS best practice
Another way is to run a script (e.g. with a PowerShell task) on the server that installs the gulp components and then calls the sass compile task to compile the SASS. Refer to Powershell build - compiling SASS for details.
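If you go the script route, a minimal sketch of such a build step, assuming Node.js is already on the build agent, the npm packages can be restored from a local/offline registry (relevant given the no-internet constraint), and your gulpfile.js defines a (hypothetical) sass task:

# Restore build-time packages from package.json, then compile .scss -> .css.
Push-Location $env:BUILD_SOURCESDIRECTORY
try {
    npm install --no-audit                 # resolves against your local registry
    & .\node_modules\.bin\gulp sass        # 'sass' is a hypothetical gulp task
}
finally {
    Pop-Location
}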

Qt, Visual Studio 2017 and .vcxproj.user Files

I am using Visual Studio 15.9.14 and Qt 5.13.0 on multiple machines. When I check out my source from version control:
If I open the VS IDE to build my solutions, everything compiles and links correctly.
If I build the solutions from the command line using devenv.exe, there are multiple compile and link errors in the Qt projects.
The problem is that when building from the command line, the .vcxproj.user files are NOT generated, and therefore $QTDIR is not defined for use in my projects. The result is my automated/nightly builds fail.
I can build a tool to create the files and integrate it into my build process, but I shouldn't have to. This problem seems to be related to the VS/Qt integration. I have also encountered a similar problem in the IDE where I had to force the files to regenerate by touching the projects.
Any suggestions/help would be appreciated.
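For what it's worth, the "tool" can be a short script run before devenv.exe. A sketch that stamps a minimal .vcxproj.user next to every project so $(QTDIR) resolves in command-line builds; the Qt path is a placeholder for whatever kit the build machine uses:

# Generate <project>.vcxproj.user files defining QTDIR before the build.
$qtDir = 'C:\Qt\5.13.0\msvc2017_64'   # placeholder: this machine's Qt kit
Get-ChildItem -Recurse -Filter *.vcxproj | ForEach-Object {
@"
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <QTDIR>$qtDir</QTDIR>
  </PropertyGroup>
</Project>
"@ | Set-Content -Path "$($_.FullName).user" -Encoding UTF8
}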

How do you enforce dependencies among java folders in Netbeans?

I am new to NetBeans. I am wondering if someone can help me with project setup in NetBeans. I am moving half a million lines of Java code from a different IDE to NetBeans. I was able to get the code to build and run in NetBeans easily. I have a project with many folders, with dependencies among those folders; they have to be built in a specific order. This is to enforce layering, so that a module in a lower layer cannot call into higher layers. I couldn't get that configured in NetBeans. Below is what my project looks like:
project/
  libA/
  libB/
  libC/
  libD/
  libE/
  appA/
  ...
I have one project that builds all the libs and appA. The project build.xml is stored under the project/ folder, but the libs have dependencies among them: libB should be built after libA, libC after libA, libE depends on libD and libB, etc.
I tried changing the order of the source folders for the libs in the project properties. That didn't seem to make any difference: even when I moved libA after libB, everything still built fine. I expected it to fail because libA hadn't been built yet.
I am lost. Just wondering what the trick is to enforce these kinds of dependencies. I created my project using the "Java project using existing sources" wizard.
I appreciate your help
Thanks
Video guy.
Even though it would be a pain, you could just write your own Ant build script and then have NetBeans use that.
Basically:
write the custom ant build file
install the Ant plugin
create an Ant build file
right click the build file
run the selected target.
This would enable you to enforce whatever you need to, but if NetBeans is figuring out the correct order, then why not just use it?
Does something break when you just compile and run in NetBeans?
Well! Let's say a team member adds a piece of code in a lower-level package that calls into higher-layer code. The build should fail because that breaks the layering. Because NetBeans seems to compile all the files in one javac invocation, the build compiles just fine. I want NetBeans to break the build in this case.
Writing my own ant script is another way of enforcing it, but the whole point of using an IDE is to save yourself from writing your own make files (or ant scripts). This is something any IDE could accomplish out of the box ten years ago. I am wondering if I am missing something here.
Thanks
Video Guy
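One pragmatic stopgap, whether or not you adopt a custom Ant build: a small standalone check, runnable from an Ant <exec> target or a CI step, that scans each layer's sources for imports of any higher layer and fails the build on a hit. A sketch in PowerShell; the folder and package names below are made up, ordered lowest to highest:

# Fail the build when a lower layer imports from a higher layer.
$layers = @(
    @{ Dir = 'project\libA'; Pkg = 'com.example.liba' },   # lowest
    @{ Dir = 'project\libB'; Pkg = 'com.example.libb' },
    @{ Dir = 'project\appA'; Pkg = 'com.example.appa' }    # highest
)
$violations = @()
for ($i = 0; $i -lt $layers.Count - 1; $i++) {
    # Regexes matching imports of every package that sits above layer $i.
    $forbidden = $layers[($i + 1)..($layers.Count - 1)] |
        ForEach-Object { 'import\s+' + [regex]::Escape($_.Pkg) }
    $violations += Get-ChildItem -Path $layers[$i].Dir -Recurse -Filter *.java |
        Select-String -Pattern $forbidden
}
if ($violations) {
    $violations | ForEach-Object { Write-Host "Layering violation: $_" }
    exit 1
}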

Automatic BizTalk Versioning in My Build Process

In all of my other .NET apps, my build process (a mixture of NAnt and custom tasks) automatically updates the [AssemblyVersionAttribute] in AssemblyInfo.cs with the current build number before the call to msbuild, stamping the build number into the version number.
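(As a sketch of that stamping step: the following assumes the CI server exposes the build number in an environment variable and that AssemblyInfo.cs carries a literal four-part version.)

# Replace the fourth part of AssemblyVersion with the current build number.
$build   = $env:BUILD_NUMBER   # assumption: set by the build server
$pattern = '\[assembly: AssemblyVersion\("(\d+\.\d+\.\d+)\.\d+"\)\]'
Get-ChildItem -Recurse -Filter AssemblyInfo.cs | ForEach-Object {
    $text = Get-Content -Path $_.FullName -Raw
    $text = $text -replace $pattern, ('[assembly: AssemblyVersion("$1.' + $build + '")]')
    Set-Content -Path $_.FullName -Value $text
}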
I'm now working on my first BizTalk project and I'd like to do the same thing with the version numbers of the BizTalk assemblies, but I've run into trouble!
First of all, the assembly version numbers are stored in the .btproj files, so I did some googling and found www.codeplex.com/biztalk, which looked like the answer to my problem. But there is a deeper problem!
I have a project for my schemas and another for my pipelines; the pipelines project references my schemas project, as I have flat-file dis/assemblers. The problem comes when I update the version numbers: updating them, even from within Visual Studio, does not update the pipeline components' references to the schemas.
So if I update all the version numbers manually in the VS IDE from 1.0.0.0 to 1.1.0.0, the build fails, as the pipeline components' flat-file dis/assemblers still reference the old 1.0.0.0 version of the schemas! They don't automatically update!
Is this really a manual process of updating the version numbers of the BizTalk projects in the property pages, then building the projects and manually updating the references to them in the properties of all the pipeline components that reference them?
This means that I can't have my build process control the build number part of my version numbers!
Or is there a better method of managing the version numbers of the BizTalk assemblies?
I'm sorry to disappoint you, but I've been down the exact same road and had to give up. I guess it could be possible to achieve, but it would require a lot of changes to both the binding files and other XML files (as you mentioned, and even more if you have published services, etc.).
Maybe it would be possible to wrap all these necessary changes in a build step (an MSBuild step, or similar in other build frameworks) - that would be useful!
Developer- :)
We had a similar problem, and we ended up developing a small utility which changes the version number in all the projects, i.e. *.csproj (AssemblyInfo.cs) and *.btproj, accordingly. Apart from this, it opens and modifies the *.btp files with the new version of the schemas. In a nutshell, all you have to do is configure this utility in your VS.NET Tools menu and execute it.
I guess it's not very difficult to develop such a utility in any .NET language.
Caveat: do not forget to save the files after the updates with the same encoding they originally had.
Cheers!
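A sketch of what the core of such a utility can boil down to, assuming the version appears as a literal a.b.c.d string in the files; note how it detects and reuses each file's original encoding when writing, per the caveat above:

param(
    [string]$Root       = '.',
    [string]$OldVersion = '1.0.0.0',
    [string]$NewVersion = '1.1.0.0'
)
# Touch every file type that embeds the version: assembly info, BizTalk
# project files, and the pipeline definitions that reference the schemas.
Get-ChildItem -Path $Root -Recurse -Include AssemblyInfo.cs, *.btproj, *.btp | ForEach-Object {
    $reader = New-Object System.IO.StreamReader($_.FullName, $true)  # $true: detect encoding
    $text   = $reader.ReadToEnd()
    $enc    = $reader.CurrentEncoding
    $reader.Close()
    if ($text.Contains($OldVersion)) {
        # Write back with the same encoding the file originally had.
        [System.IO.File]::WriteAllText($_.FullName, $text.Replace($OldVersion, $NewVersion), $enc)
    }
}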
Gutted, thought that might be the case. Maybe BizTalk 2009 projects will play more nicely when updating references after changing version numbers.
I started to go through and automate it manually, but I took a biiig step back when I realised just how many places I'd have to modify to get it working. Thank god for Undo Checkout.
I do have a standard C# class library included in my project (various helper functions), which I am able to update the version number of during my build process, so I'm basically using that one assembly to version the whole application. If anyone wants to know what version is in any environment, they can check the version number of that one assembly.
Not ideal, but it's working.
We've done this successfully on our project - I'll see if I can get the developer of the tool to post details...
This problem arises when you perform an integration build to the latest versions of your dependent components as file references (aka schemas here).
Keep in mind that upgrading the assembly version must always be performed manually; that way you are always in charge of changes to assembly versions.
A possible solution to the build-break issue is to add a file reference to a specific version of a dependent component's build, rather than to the latest version, and to use a subst drive and a copy script to pick up the latest component builds.
For example:
SchemaA, assembly version 1.0.0.0
PipelineA (with pipelinecomponent XMLValidator for example), assembly version 1.0.0.0
PipelineA has a file reference to a subst drive (say the R: drive, which maps to a workspace D:\MyComponents) and to version 1.0.0.0 of SchemaA, as follows:
R:\SchemaA\1.0.0.0\SchemaA.dll.
The copy script copies the build output of SchemaA locally to your R: drive.
When SchemaA updates to version 1.1.0.0, you don't have any issues, because you still use version 1.0.0.0, and YOU have the choice of when to move to the 1.1.0.0 version of your schema. When you want to upgrade, you alter your copy script and replace the file reference with R:\SchemaA\1.1.0.0\SchemaA.dll.
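The copy script itself can stay small. A sketch with made-up paths: map the drive once with subst, then publish each component's build output into its versioned folder:

# One-time: map R: onto the components workspace.
subst R: D:\MyComponents

# Per build: publish the component's output under its version number.
$component = 'SchemaA'
$version   = '1.0.0.0'
$source    = "D:\BuildOutput\$component"   # made-up build drop location
$target    = "R:\$component\$version"
New-Item -ItemType Directory -Path $target -Force | Out-Null
Copy-Item -Path "$source\*" -Destination $target -Recurse -Force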

Best way to manage generated code in an automated build?

In my automated NAnt build we have a step that generates a lot of code off of the database (using SubSonic) and the code is separated into folders that match the schema name in the database. For example:
/generated-code
  /dbo
    SomeTable.cs
    OtherTable.cs
  /abc
    Customer.cs
    Order.cs
The schema names are there to isolate the generated classes that each app will need. For example, there is an ABC app that will pull in the generated code from this central folder. I'm doing that in a pre-build event, like this:
del /F /Q $(ProjectDir)Entities\generated\*.cs
copy $(ProjectDir)..\..\generated-code\abc\*.cs $(ProjectDir)Entities\generated\*.cs
So on every build, the NAnt script runs the generator, which puts all the code into a central holding place, and then it kicks off the solution build... which includes the pre-build events for each of the projects that need their generated classes.
So here's the friction I'm seeing:
1) Each new app needs to set up this pre-build event. It kind of sucks to have to do this.
2) On our build server we don't generate code, so I actually have an IF $(ConfigurationName) == "Debug" before each of those commands, so they don't run for release builds.
3) Sometimes the commands fail, which fails our local build. They will fail if:
- there is no generated code yet (just setting up a new project, no database yet)
- there is no existing code in the directory (first build)
Usually these are minor fixes, and we've just hacked our way to getting a new project or a new machine up and running with the build, but it's preventing me from reaching my 1-click-build Nirvana.
So I'd like to hear suggestions on how to make this a bit more durable. Maybe move the copying of the code into the application folders into the NAnt script? That seems kind of backwards to me, but I'm willing to listen to arguments for it.
OK, fire away :)
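One suggestion for friction point 3: the pre-build step doesn't have to be fragile batch. A sketch of an equivalent PowerShell step that skips quietly when there is nothing to copy yet (same layout as the commands above):

param(
    [string]$ProjectDir,      # pass $(ProjectDir) in from the pre-build event
    [string]$Schema = 'abc'
)
$source = Join-Path $ProjectDir "..\..\generated-code\$Schema"
$target = Join-Path $ProjectDir 'Entities\generated'

if (Test-Path $source) {
    New-Item -ItemType Directory -Path $target -Force | Out-Null
    Remove-Item -Path (Join-Path $target '*.cs') -ErrorAction SilentlyContinue
    Copy-Item -Path (Join-Path $source '*.cs') -Destination $target
}
# No generated code yet (new project, first build): do nothing and let the build go on.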
How often does your DB schema change? Wouldn't it be possible to generate the database-related files on demand (e.g. when the schema changes) and then check them into your code repository?
If your database schema doesn't change, you can also package the compiled *.cs classes and distribute the archive to other projects.
We have two projects in our solution that are built completely out of generated code. Basically, we run the code generator .exe as a post-build step for another project, and along with generating the code it automates the active instance of Visual Studio to make sure that the generated project is in the solution, that it has all of the generated code files, and that they are checked out/added to TFS as necessary.
It very rarely flakes out during the VS automation stage and we have to run it "by hand", but that's usually only if you have several instances of VS open with more than one instance of the solution open and it can't figure out which one it's supposed to automate.
Our solution and process are such that the generation should always be done and correct before our auto-build gets to it, so this approach might not work for you.
Yeah, I'd like to take VS out of the equation so that a build from VS is simply compiling the code and references.
I can manage the NAnt script... I'm just wondering if people have advice around having one NAnt script, or possibly one per project, which can push the code into the projects rather than having it pulled.
This does mean that you have to opt in to generating code.
