How do teams use Flex Builder Pro to develop large applications?

We develop applications for the Flash platform, which have LOTS of run-time loaded assets (graphics, data, audio, code-libraries, etc.) Those assets are logically organized within project folders. Programmers and designers get the latest from version control, check-out their code or design work, test within a full local copy of the application, then check-in when they are done. The organization is important to this workflow and our communication.
Using Flash or FlashDevelop, I can easily work with any organization of folders and files. I can easily compile a single package or class to create a run-time loadable swf. In Flex Builder Pro, it seems my options are to create a project for each swf that I need to compile, or to create a project which references modules, which can also be set up as projects. Neither technique seems ideal for a team, or even as good as our current workflow. Note: I've got about 10 years of experience with Flash, but only a couple of months using Flex Builder Pro. It is quite likely that I simply haven't discovered a better workflow.
Would you please share a few tips on how you and your team use Flex Builder Pro to develop large applications, which have lots of runtime libraries and other assets?

Generally, you are correct: using Flex Builder, the idea is that each project builds a single thing (.swf, .swc, etc.).
A file repository (as you mentioned you use) is definitely a MUST HAVE for large-scale development.
Adding build tools such as Ant (my personal favorite) and Maven (growing in popularity and ability pretty quickly) to your toolbox allow you to do more advanced building (and even unit-testing). These types of tools will allow the building of a large application with many aspects and dependencies in a single action. This is a must for large-scale applications and development environments. For larger projects with a lot of sub-projects I will often have a single "master" project that is little more than a build script that calls sub-project build scripts and puts everything together. Maven is particularly good at that. There are Eclipse plug-ins that help out with both of those tools.
Different situations require different ways to use the projects together. It may be helpful to directly link to a Library project as a dependency; that way your projects are able to debug the linked code and modify it as needed. Or, if library projects are less commonly modified, their output can be dropped into the /libs folder of the dependent project, eliminating the need to have the .swc-generating project open while development goes on.
Keeping your projects to a one-asset-per-project situation goes a long way to keeping it organized. Generally I have a folder for each client and in there a folder for each project. If that project needs to be broken down into sub-projects then ALL of those sub-projects will be located in that project folder (no matter what the relationship is to each other).
Sometimes, of course, it is helpful for a single project to produce a number of assets. This could be multiple .swfs for different situations, different .zips for distribution to different places or clients, or a dozen other situations. "Asset" projects are sometimes a good example of this: sometimes I have a "project" that is just my collection of assets. I don't normally access this project from within Flex Builder, and it doesn't normally have the .project and other Eclipse files.

I also use a combination of library projects and regular projects. One issue I ran into with a large project is compilation time. Here are some links on compilation:
Any advice for speeding up the compile time in Flex Builder 3?
http://code.seanhess.net/?p=184
http://www.rogue-development.com/blog2/2007/11/slow-flex-builder-compile-and-refresh-solution-modules/
http://blog.iconara.net/2008/02/22/quick-tips-to-speed-up-your-ant-build/
Setting the right eclipse.ini and using the -incremental compiler option really helped me out.

Unless you are developing a Hello World application, you should have more than one Flex Builder project. The main one has the bare minimum of classes, and possibly shared libraries that are required to display the first screen of your app.
Fonts and CSS go into a separate project and are compiled into a separate swf. Load them at runtime via StyleManager. This alone will speed up the compilation of your app.
The rest of the code should be split into separate projects (either Flex Library projects or projects containing modules). Read about the differences in how libraries are linked into the main project (RSL vs. merged into code vs. external).
We use Ant for building each of the projects and the entire app. Our open-sourced Fx2Ant utility generates Ant scripts for your Flex Builder projects in seconds.
For example, here's the project I was working on last year: http://www.mbusa.com. It consists of more than 15 Flex Builder projects.

Related

Typescript in VS 2015 ASP.NET 4 MVC 5 - what are the working combinations of settings and choices?

I am adding a TypeScript project to a VS 2015 MVC 5 project (that makes it ASP.NET 4, not ASP.NET 5 or ASP.NET 6; only the MVC is version 5). This is the sole target for all aspects of my question; I cannot use generalized or theoretical guidance on TypeScript, Node.js, module loaders, etc.
The problem is simpler with ASP.NET Core. But that's not what I'm facing. All the usual sources of examples and guidance avoid or provide scraps when it comes to ASP.NET 4 MVC 5 because it is hard. And no one will state exactly how hard or what precisely are the obstacles.
Worse, TypeScript documentation is like open-source documentation: you can only get one issue, one step deep. This produces a research workflow consisting of endless issue-tree-recursion.
I understand opinions; I even have one. But I'm looking for the experiential answer: what is the one combination that has proven to work for a production team?
So here are the specific items that need to be addressed and made to work within the confines of a working, medium-sized ASP.NET 4 MVC 5 LOB app:
Visual Studio's version of TypeScript. This is an installation issue (most simply handled via Node), and the Tools > Options > TypeScript settings have to match.
Browser-style testing (typically a manual TDD workflow) or Node.js testing (automated). This has to be chosen up-frontly to prevent more issue-tree-recursion. We are going with browser-based... PhantomJS using Wallaby.js.
NPM @types/library-name: supposed to fill a node_modules folder with both library-name and library-name.d.ts based only on a package.json with @types references. But it actually requires the package.json to hold a reference to both @types/library-name and library-name to work in my VS 2015 ENT v3 and ASP.NET 4 MVC 5 project. And all the versions specified then require manual correction, and even then the version look-up process is a little suspicious. This @types process may not be the way to go with ASP.NET 4 MVC 5, but I can't tell what the correct alternative might be. @types is currently the only recommended option for TypeScript.
Which version of ECMAScript: ES6 is apparently too far ahead. ES2015 is likely, but this may be related to several of the other issues. Supposedly these designations are the same, but there are two places they can be set. I've chosen ES2015 in Tools > Options > TypeScript. But getting any of these (now 3) settings wrong could be a problem.
Module system: CommonJS is for Node and automated testing, and VS development testing is automated only for server-side; VS UI tests are a manual process. So AMD/RequireJS is probably an option for VS, but it adds its own workflow, maintenance, and considerations that are really hard to get right in ASP.NET. Using ASP.NET bundling and triple-slash references (dependable) might work, but after you have put the libraries in node_modules, you would want to use the full path into node_modules in the file name slug of an import statement. This is all very clumsy and involves the most guesswork. But solving this whole item might be the 'key' to the overall question.
There are probably a lot of other, smaller issues. But someone who has done this will have solved all the mentioned items and the others as well.
What I'm looking for is all the settings across all these issues, in detail, based on a working TypeScript app in an ASP.NET 4 MVC 5 implementation with browser-based unit/behavior tests in VS 2015. Those who have done it will understand.
Thanks very much for your consideration.
What you're missing is separation of concerns. In spite of the initial benefit of such starter templates, they start to cause incidental dependencies and complicate the mental model. It's much easier to have your front end in a separate project.
Regardless:
Visual Studio's version of typescript.
Always use the very latest available. This controls the version of TypeScript that powers the IDE. You will probably end up compiling in a separate process or in the browser during development. Again, you will want to use the latest, but it will likely be installed with a different package manager.
Browser-style testing (typically manual TDD workflow) or node.js
testing (automated). This has to be chosen up-frontly to prevent more
issue-tree-recursion.
Firstly, I definitely agree with the importance of choosing up front; but it is still possible, just unpleasant, to add tests to an existing project.
TDD workflows involve automated testing as they rely on rapid feedback. This is orthogonal to whether you run your tests in the browser or using NodeJS.
You should use whichever approach makes the most sense for your application and that may be a mix of both.
Since you are writing a frontend JavaScript application you will likely want to run some tests in the browser. However, as Uncle Bob (Robert C. Martin) has stated, views should be dumb and require little testing. My interpretation of this is that we should not spend too much time testing things like Angular or React components to ensure that they render correctly, and instead focus on testing behavioral elements of the system such as services and plain old functions.
That said, you may well want to run tests of your client side services against an actual browser runtime, as opposed to just Node.js, and that is reasonable.
There are a number of testing libraries to help you with this. I do not have a specific recommendation beyond saying that you should find a reliable test runner and a simple assertion library. Tried-and-true testing libraries like QUnit and Tape are examples of solid options.
One last note: it is important not to confuse the concept of integration testing with running tests in a web browser; it's perfectly valid to run TDD-style tests, which implies unit tests, in a web browser.
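To make that concrete, here is a minimal sketch of a browser-run unit test of a plain function, assuming QUnit as the runner/assertion pair (any comparable pair works) and that @types/qunit supplies the global declarations; the file and function names are purely illustrative, not from the question.

// pricing.ts - a plain function under test, the kind of behavioral code worth covering
export function totalPrice(unit: number, quantity: number): number {
  return unit * quantity;
}

// pricing.tests.ts - loaded by the QUnit test page in the browser
import { totalPrice } from "./pricing";

QUnit.module("pricing");

QUnit.test("totalPrice multiplies unit price by quantity", (assert) => {
  assert.equal(totalPrice(2.5, 4), 10);
});

Nothing here is Visual Studio-specific; the same file runs in whichever browser-based runner you settle on.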
NPM @types/library-name: supposed to fill a node_modules folder with
both library-name and library-name.d.ts, but requires the package.json
to hold a reference for both the @types/library-name and the
library-name to work in my VS 2015 ENT v3 and asp.net 4 mvc 5 project.
Simply put, this goes back to decoupling your front end from your back end. Visual Studio, and certainly ASP.NET, have nothing to do with versioning your types packages.
If a package comes with its own type declarations then you don't need to install an auxiliary types package otherwise you do.
Either way, install JavaScript and TypeScript dependencies using a JavaScript-oriented package manager (such as NPM, JSPM, or Yarn).
Do not use NuGet for these!
As you suggest, there are versioning issues; this is currently a difficult problem in TypeScript. However, once again, it has nothing to do with ASP.NET or Visual Studio.
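As a hedged illustration (lodash is just a stand-in here, not a recommendation): a library that ships no bundled declarations needs both the runtime package and its @types package installed by the JavaScript package manager, after which consuming it from TypeScript looks like any other import; none of this touches NuGet or the .csproj.

// Assumes `npm install --save lodash` and `npm install --save-dev @types/lodash`
// were run with a JavaScript package manager - NuGet plays no part.
import * as _ from "lodash";

const names: string[] = _.uniq(["ann", "bob", "ann"]);
console.log(names); // ["ann", "bob"]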
Which version of ECMAScript: es6 is apparently too far ahead. es2015
is likely, but this appears to have (maybe) relationships to several
of the other issues.
ES6 is the same as ES2015, the latter being the name under which the former was ultimately released. ECMAScript now follows a yearly cadence, roughly, with ES2017 just around the corner.
The nice thing about having a transpiler such as TypeScript is that you can use the latest features from ES2017 and still target ES5 for emit, and you'll be fine.
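As a small, purely illustrative sketch: the source below uses ES2015+/ES2017 syntax (const, arrow functions, template strings, async/await) yet still runs on old browsers when the compiler is told to emit ES5, e.g. "target": "es5" in tsconfig.json. Downlevel async/await assumes a reasonably recent TypeScript, and a Promise polyfill is assumed for genuinely old runtimes.

// Modern syntax in the source...
async function fetchGreeting(name: string): Promise<string> {
  const wait = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));
  await wait(100); // simulate latency
  return `Hello, ${name}!`;
}

// ...and the ES5 emit still behaves the same at runtime.
fetchGreeting("world").then((msg) => console.log(msg));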
Module system: CommonJS is for NodeJS and automated testing, and VS
development testing is automated only for server-side, and VS UI tests
are a manual process. So AMD/UMD require JS is probably the option for
VS, but it adds its own workflow and maintenance and considerations.
Using triple-slash references (dependable) might work, but after you
have put your/their libraries in node-modules you would want to use
the full path into node-modules in the file name slug in an import
statement. (solving this whole item might be the 'key' for the overall
question).
This is a very complex subject and probably the only one of your questions that you really need to spend a lot of time considering. As I said earlier using NodeJS or not is orthogonal to automated testing. But if you're targeting NodeJS natively with your test code then you will need to use CommonJS output.
For the actual application code, the choice has nothing remotely to do with whether or not you are using Visual Studio. I'm sorry to keep reiterating this, but it really is important that you separate these ideas.
The question of which module format to use for your front end application code is a very important and contentious one.
Triple /// references are not a module format but rather a way of declaring the dependencies between global variables that are declared and referenced across multiple files.
They do not scale well, working acceptably when you have only a handful of files.
Triple /// references should not be used. They are not a modularity mechanism and their use is completely different from using any of the module systems/module formats you mention, including CommonJS.
Never combine them with a module system, which is what you would have to do in order to run your tests under NodeJS or load your app with RequireJS or anything else.
RequireJS is an excellent option, which would imply AMD modules as you say. RequireJS does not require any use of triple-slash references; in fact they should be avoided like the plague when using this format or any other module format!
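To illustrate the distinction (file and function names here are my own, not from the question): module code uses ordinary ES-style import/export, and the same TypeScript source can be emitted as AMD for RequireJS, CommonJS for NodeJS-run tests, or System for SystemJS simply by changing the compiler's --module option. No triple-slash references appear anywhere.

// cart.ts - depends on another module via import, not on globals
import { totalPrice } from "./pricing";

export function checkout(unit: number, quantity: number): string {
  return `Total due: ${totalPrice(unit, quantity).toFixed(2)}`;
}

Switching module formats is then a compiler and loader decision, not a source-code rewrite.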
I recommend strongly against using UMD modules. Isomorphic JavaScript is a problematic idea, and it offers you no benefits since you are creating a browser application with a .NET backend.
Many developers actually do use CommonJS modules in a browser. This requires bundling them continuously, using tools such as Webpack. This approach has advantages and disadvantages. The primary advantages are the ability to lean on existing NodeJS JavaScript server-side tools, such as npm, by way of Webpack or Browserify. This may not sound like a big advantage but the amount of rich tooling available for CommonJS modules is nothing to scoff at, making it a strong option.
Consider using the System module format and the SystemJS loader via jspm to both manage your packages and load your code. With this approach, you gain the advantages of RequireJS and are able to run your tests under NodeJS and the browser using jspm run, without needing to switch target formats or bundle your code just to test it. There's also no need to bundle your code during development, although this is supported. More importantly, you gain the advantage of writing future-compatible code, as it offers the only module format and loader which correctly models the semantics that ES Modules will eventually have when implemented natively in browsers. JSPM has first-class support for TypeScript, Babel, and Traceur.
For posterity here is the description of the System module format taken from the link above:
System.register can be considered as a new module format designed to support the exact semantics of ES6 modules within ES5. It is a format that was developed out of collaboration and is supported as a module output in Traceur (as instantiate), Babel and TypeScript (as system). All dynamic binding and circular reference behaviors supported by ES6 modules are supported by this format. In this way it acts as a safe and comprehensive target format for the polyfill path into ES6 modules.
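As a hand-written illustration of that shape (not literal compiler output, which varies by tool and version), a one-line module such as export const greeting = "hello"; comes out of a System-format compile looking roughly like this:

System.register([], function (exports_1) {
  "use strict";
  var greeting;
  return {
    setters: [],            // would receive imported module bindings; none here
    execute: function () {  // the module body runs here
      exports_1("greeting", greeting = "hello");
    }
  };
});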
Disclaimer:
I am a member of the JSPM GitHub organization, playing a role in maintaining the registry and have made very minor contributions to the jspm cli.

How can I make an SBT build for multi-projects and multi-platforms?

I'm starting on a medium project with many independent components that can run either on Android or the JVM and I'm wondering how to break it into SBT projects so that the dependencies behave nicely. Here's what I've got so far:
core/ for platform agnostic core code, must not break on either platform, this includes interfaces for component launchers
android-core/ for implementations of the core interfaces that depend on android libraries (note, this project depends on sbt-android)
jvm-core/ for implementations of the core interfaces that depend on libraries that don't play well with or depend on android
So far so good, but now it's time to consume the core projects in the individual components. My requirements are:
Each component should compile to a separate Android app (perhaps sharing an aar library?), and the apps can be individually installed (still via sbt, a la the android:install task)
There is a wrapper build so that all project builds can be done from the same place.
It is so easily extensible that fresh grad students can correctly add components (bonus points if adding a component needs no change whatsoever to the build).
If a component depends on a platform specific library it does not prevent other components from being compiled agnostically.
Some of the questions I have are:
Should each component have an SBT project? (I'm inclined to think so, so that students could add dependencies on libraries that don't run on both platforms, but I'm open to being wrong.)
If so, will each component's project require an sbt build?
If so, how can I bootstrap the component builds to require minimum skill from the component author?
Later I'm going to be adding code generation to generate message classes from descriptions (think protobuf/thrift); that will want to run as a first pass before the components get compiled. I'm assuming this can be done, but do you have a link that explains how?
If two components each compile against the messages of each other will that create impossible circular dependencies?
Basically I'm looking for wisdom and experience; the nitty-gritty code I'm sure I can hack my way through once I know what terms to search the docs for and roughly how the whole thing wants to hang together. Thanks for your help!

How big should a Flex SWF be?

I've a fairly small Flex 4 project targeting Flash 10, developed in FlashDevelop. I know Flex SWFs carry extra overhead compared to a plain AS3 project, but 240 KB for a release build still seems like a lot - is it? Or is this a realistic minimum?
In case it's relevant, FlashDevelop builds my project with the following (anonymized):
mxmlc -load-config+=obj\********.xml -incremental=true -benchmark=false -optimize=true -static-link-runtime-shared-libraries=true -o obj\*****************
Doesn't Flash player include Flex runtimes or something sensible like that?
The player does not include the Flex framework, and it shouldn't. The Flex framework is independent of the player, and if it were included the player would have to bundle every version of the framework so that each swf could use the one it was built against. Instead, the framework is distributed separately from the player.
The resolution to large swfs is to use Framework Runtime Shared Libraries. This way the player will load a shared library swf once for the specific framework version you used and this shared library will be used across all swfs that were compiled against the same framework version.
You can find more information here:
http://livedocs.adobe.com/flex/3/html/help.html?content=rsl_09.html
In practice, it's somewhat like having the framework in the player, but it's just not pre-loaded. The frameworks get loaded as needed.
240KB is not a lot. Yet.
However, in my opinion, Flex does make files quite large when you start developing bigger applications, which is the reason I do plain ActionScript projects.
Flash Player does not come with the framework preloaded. Therefore: 1) do what sam said with runtime shared libraries; 2) load almost all files after the main SWF has loaded, thus giving the user meaningful info while the rest loads (you could load the homepage, display it, and only then start loading the other sections). You could use something nice called BulkLoader.
hth

Maven - which projects or technologies are you using it for?

I've been leading a rather large project that strives to "Mavenize" various testing apps produced by the engineering tools group over the past 5+ years to test and optimize our home-built database. So far our group has managed to successfully retrofit (besides the obvious Java) a few ColdFusion-based apps, a PHP app, and a large .NET app with about 30 modules, and we are currently working on roughly 40 C/C++ apps. Actually, once you abstract yourself from the Java-centric nature of Maven and throw in a few useful plugins such as antrun, exec, assembler and resource, you can pretty much figure out a way of "Mavenizing" just about anything.
So my question is - are there people who have had this sort of experience - using Maven to manage non-Java projects? What was it? What language/technology? What did you end up using? How? Were you successful? And if not - what did you end up using as an alternative?
Conceptually, Maven is not Java-centric, but Java monopolizes most of the effort, as noted on Wikipedia:
Theoretically, [Maven's plugin-based architecture] would allow anyone to write plugins to interface with build tools (compilers, unit test tools, etc.) for any other language. In reality, support and use for languages other than Java has been minimal.
Having said that, I don't have any personal experience using Maven with anything other than Java. But I can suggest checking out "Maven for other languages?" :)
We're using Maven to build a Flex application, and it's working quite nicely :).
I have used maven for generating documentation based on LaTeX source files. Using exec and some wrapper scripts, I can create PDF files and handle SCM releases.
One of the PDF files generated is included in a web app by letting maven package it into a jar file, which is referenced from the web app as a regular dependency. The web app can then access the PDF file on the class path.

When should one use a project reference as opposed to a binary reference?

My company has a common code library which consists of many class library projects along with supporting test projects. Each class library project outputs a single binary, e.g. Company.Common.Serialization.dll. Since we own the compiled, tested binaries as well as the source code, there's debate as to whether our consuming applications should use binary or project references.
Some arguments in favor of project references:
Project references would allow users to debug and view all solution code without the overhead of loading additional projects/solutions.
Project references would assist in keeping up with common component changes committed to the source control system as changes would be easily identifiable without the active solution.
Some arguments in favor of binary references:
Binary references would simplify solutions and make for faster solution loading times.
Binary references would allow developers to focus on new code rather than potentially being distracted by code which is already baked and proven stable.
Binary references would force us to appropriately dogfood our stuff as we would be using the common library just as those outside of our organization would be required to do.
Since a binary reference can't be debugged (stepped into), one would be forced to replicate and fix issues by extending the existing test projects rather than testing and fixing within the context of the consuming application alone.
Binary references will ensure that concurrent development on the class library project will have no impact on the consuming application, as a stable version of the binary will be referenced rather than an in-flux version. It would be the decision of the project lead whether or not to incorporate a newer release of the component if necessary.
What is your policy/preference when it comes to using project or binary references?
It sounds to me as though you've covered all the major points. We've had a similar discussion at work recently and we're not quite decided yet.
However, one thing we've looked into is to reference the binary files, to gain all the advantages you note, but have the binaries built by a common build system where the source code is in a common location, accessible from all developer machines (at least if they're sitting on the network at work), so that any debugging can in fact dive into library code, if necessary.
However, on the same note, we've also tagged a lot of the base classes with appropriate attributes in order to make the debugger skip them completely, because any debugging you do in your own classes (at the level you're developing) would only be vastly outsized by code from the base libraries. This way when you hit the Step Into debugging shortcut key on a library class, you resurface into the next piece of code at your current level, instead of having to wade through tons of library code.
Basically, I definitely vote up (in SO terms) your comments about keeping proven library code out of sight for the normal developer.
Also, if I load the global solution file that contains all the projects and basically just everything, ReSharper 4 seems to have some kind of coronary problem, as Visual Studio practically comes to a standstill.
In my opinion the greatest problem with using project references is that it does not provide consumers with a common baseline for their development. I am assuming that the libraries are changing. If that's the case, building them and ensuring that they are versioned will give you an easily reproducible environment.
Not doing this will mean that your code will mysteriously break when the referenced project changes. But only on some machines.
I tend to treat common libraries like this as 3rd-party resources. This allows the library to have its own build processes, QA testing, etc. When QA (or whomever) "blesses" a release of the library, it's copied to a central location available to all developers. It's then up to each project to decide which version of the library to consume by copying the binaries to a project folder and using binary references in the projects.
One thing that is important is to create debug symbol (pdb) files with each build of the library and make those available as well. The other option is to actually create a local symbol store on your network and have each developer add that symbol store to their VS configuration. This would allow you to debug through the code and still have the benefits of using binary references.
As for the benefits you mention for project references, I don't agree with your second point. To me, it's important that the consuming projects explicitly know which version of the common library they are consuming and for them to take a deliberate step to upgrade that version. This is the best way to guarantee that you don't accidentally pick up changes to the library that haven't been completed or tested.
When you don't want it in your solution, or there is potential to split your solution, send all library output to a common bin directory and reference it there.
I have done this in order to allow developers to open a tight solution that only has the Domain, tests, and Web projects. Our Windows services, Silverlight stuff, and web control libraries are in separate solutions that include the projects you need when looking at those, but NAnt can build it all.
I believe your question is actually about when projects go together in the same solution; the reason being that projects in the same solution should have project references to each other, and projects in different solutions should have binary references to each other.
I tend to think solutions should contain projects that are developed closely together. Such as your API assemblies and your implementations of those APIs.
Closeness is relative, however. A designer for an application, by definition, is closely related to the app, however you wouldn't want to have the designer and the application within the same solution (if they are at all complex, that is). You'd probably want to develop the designer against a branch of the program that is merged at intervals further spaced apart than the normal daily integration.
I think that if the project is not part of the solution, you shouldn't include it there... but that's just my opinion
In short, I separate it by concept.
