Java/Flash application not working when compiled with flexmojos - apache-flex

I'm working on a Java web application which contains some Flash modules.
So far I've used Adobe Flash Builder to compile the Flash source code and manually integrated the SWF file into the WAR, and the web application deploys and runs successfully.
Recently the customer, who also owns the source code, asked us to manage the Flash modules with the flexmojos Maven plugin.
The problem is that the application compiles and deploys with no errors but no longer works: when you access the application from a browser, a blank screen appears after the login phase and there is no way to interact with the application.
The pom.xml used to build the Flash module is the following (I omitted irrelevant parts):
<plugins>
  <plugin>
    <groupId>org.sonatype.flexmojos</groupId>
    <artifactId>flexmojos-maven-plugin</artifactId>
    <version>4.0-RC2</version>
    <extensions>true</extensions>
    <configuration>
      <contextRoot>myapp</contextRoot>
      <services>../myapp-war/src/main/webapp/WEB-INF/flexCompile/services-config.xml</services>
      <localesSourcePath>${basedir}/locale/{locale}</localesSourcePath>
      <debug>true</debug>
      <output>target/myapp.swf</output>
      <definesDeclaration>
        <property>
          <name>BUILD::buildNumber</name>
          <value>"Versione: ${project.version}"</value>
        </property>
      </definesDeclaration>
      <localesRuntime>
        <locale>en_US</locale>
      </localesRuntime>
      <localesCompiled>
        <locale>en_US</locale>
      </localesCompiled>
    </configuration>
  </plugin>
</plugins>
It's important to note that the sizes of the two compiled SWF files are different, so this seems to be a compiler version issue.
Is there anyone who develops with Flash and Java who can give me a hint about solving this issue, or even just point me to resources, forums, and so on?

Did you try adding the swfVersion property to your configuration? It would be something like this:
<swfVersion>13</swfVersion>
The version is specific to your targeted player. You can find which version you need here.
As a general good practice, do not forget to check myapp-config.xml, which contains every option passed by Flexmojos to mxmlc. This file is located in your target directory.
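If it helps, a minimal sketch of where that property would sit in the plugin configuration shown above (13 is only an example value; use the SWF version that matches the player you target):
<configuration>
  <contextRoot>myapp</contextRoot>
  <debug>true</debug>
  <output>target/myapp.swf</output>
  <!-- match the player version Flash Builder was targeting -->
  <swfVersion>13</swfVersion>
  <!-- ...rest of the existing configuration... -->
</configuration>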

Related

Tomcat on CentOS behaves differently from Tomcat on Mac?

I am developing my Spring MVC application on a Mac and have tried different Tomcat versions there. All worked fine.
When I deploy the WAR to a Tomcat running on CentOS 7, the application overall works fine as well, but there are some issues I couldn't find the root cause for:
1) Symbols of Font Awesome do not show on the CentOS Tomcat. The font is provided by the WAR and I checked that the references do work.
2) The unit tests during the build (Jenkins on the CentOS machine) get some errors when reading property files. It seems like some entries of those property files are not read. However, most entries work fine, and I don't see obvious differences.
3) A favicon of type *.ico could not be rendered. After renaming it to *.gif, it worked fine.
4) A third-party JavaScript produces odd results, which might be a consequence of 2).
Do you have some hints on what I could check?
I found the cause. In my pom.xml I had filtering activated, which kept some resources out of the war file. When I tested locally, I started the application from Eclipse, so this error was not seen.
This change helped:
<webResources>
  <resource>
    <directory>src/main/webapp</directory>
    <filtering>false</filtering>
  </resource>
</webResources>
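For context, this snippet presumably lives inside the maven-war-plugin configuration; a sketch (the plugin version is illustrative):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <version>2.6</version>
  <configuration>
    <webResources>
      <resource>
        <directory>src/main/webapp</directory>
        <!-- leave binary resources (fonts, icons) unfiltered -->
        <filtering>false</filtering>
      </resource>
    </webResources>
  </configuration>
</plugin>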

Why does ASP.NET Web Deploy re-publish everything after changing dev machines

I've encountered this issue several times: every time I change dev machines and make a minor change, it re-publishes everything, even images (jpg, png, gif), and the comparison window shows no difference at all.
The "solution" is to re-publish everything; then on this machine it will work correctly. However, once I change to another machine, the same issue happens. I can't stand it any more...
This happens because by default Web Deploy uses file modification dates to check whether a file needs to be copied to the target. When you change dev PCs you rebuild everything, effectively setting file modification timestamps to a newer date than they were when publishing from the old dev machine.
As of the new ASP.NET and Web Tools for Visual Studio 2013 release you can configure your web project to use checksums instead of modification timestamps. This should solve your problem - see http://msdn.microsoft.com/en-us/library/ee942158.aspx#use_checksum
Just modify your publish profile to include the following:
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <MSDeployUseChecksum>true</MSDeployUseChecksum>
    <!-- other settings omitted to keep the example short -->
    <PublishDatabaseSettings>
      <!-- this section omitted to keep the example short -->
    </PublishDatabaseSettings>
  </PropertyGroup>
</Project>

Use Visual Studio web.config transform for debugging [duplicate]

Possible Duplicate:
How can I use Web.debug.config in the built-in visual studio debugger server?
I want to use the Web.config transformation that works fine for publishing for debugging as well.
When I publish a web app, Visual Studio automatically transforms the Web.config based on my current build configuration.
How can I tell Visual Studio to do the same when I start debugging?
On debug start it simply uses the default Web.config without transformation.
Any idea?
OK, with the understanding that web.debug.config & web.release.config are for package/publish only, I have come up with a way to enable what you are trying to do. I've blogged about it at https://devblogs.microsoft.com/aspnet/asp-net-web-projects-web-debug-config-web-release-config/.
Here is the summary.
Now let’s see how we can enable what the question asker wants to do.
To recap, when he builds on a particular configuration he wants a specific transform to be applied to web.config. So obviously you do not want to maintain a web.config file, because it is going to be overwritten.
So what we need to do is to create a new file web.template.config, which is just a copy of web.config. Then just delete web.config by using Windows Explorer (don’t delete using Visual Studio because we do not want to delete it from the project).
Note: If you are using a source control provider which is integrated into Visual Studio then you probably want to delete web.config from source control.
Also, we do not want to use web.debug.config or web.release.config because these already have a well-defined role in the Web Publishing Pipeline, and we do not want to disturb that. So instead we will create two new files in the same folder as the project and web.template.config: web.dev.debug.config and web.dev.release.config.
The idea is that these will be the transforms applied when you debug, or run, your application from Visual Studio. Now we need to hook into the build/package/publish process to get this all wired up. With Web Application Projects (WAP) there is an extensibility point: you can create a project file in the same folder with the name {ProjectName}.wpp.targets, where {ProjectName} is the name of the project. If this file is on disk in the same folder as the WAP then it will automatically be imported into the project file. So I have created this file and placed the following content in it:
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Make sure web.config will be there even for package/publish -->
  <Target Name="CopyWebTemplateConfig" BeforeTargets="Build">
    <Copy SourceFiles="web.template.config"
          DestinationFiles="web.config"/>
  </Target>
  <PropertyGroup>
    <PrepareForRunDependsOn>
      $(PrepareForRunDependsOn);
      UpdateWebConfigBeforeRun;
    </PrepareForRunDependsOn>
  </PropertyGroup>
  <!-- This target will run right before you run your app in Visual Studio -->
  <Target Name="UpdateWebConfigBeforeRun">
    <Message Text="Configuration: $(Configuration): web.dev.$(Configuration).config"/>
    <TransformXml Source="web.template.config"
                  Transform="web.dev.$(Configuration).config"
                  Destination="web.config" />
  </Target>
  <!-- Exclude the config template files from the created package -->
  <Target Name="ExcludeCustomConfigTransformFiles" BeforeTargets="ExcludeFilesFromPackage">
    <ItemGroup>
      <ExcludeFromPackageFiles Include="web.template.config;web.dev.*.config"/>
    </ItemGroup>
    <Message Text="ExcludeFromPackageFiles: @(ExcludeFromPackageFiles)" Importance="high"/>
  </Target>
</Project>
Let me explain this a bit. I have created the CopyWebTemplateConfig target which will always copy web.template.config to web.config on build, even if you are not debugging your application in Visual Studio.
This is needed because we still need to support the package/publish process of Visual Studio. Then I extended the property PrepareForRunDependsOn to include the UpdateWebConfigBeforeRun target. This property is used to identify the list of targets which need to be executed before any managed project is run from Visual Studio.
In this target I am using the TransformXml task to transform web.template.config, using the correct web.dev.***.config file. After that your app starts up using the correct web.config based on your build configuration.
After that I have another target, ExcludeCustomConfigTransformFiles, which I inject into the package/publish process via the attribute BeforeTargets="ExcludeFilesFromPackage". This is needed because we do not want these files to be included when the application is packaged or published.
So that is really all there is to it.
To explain the package/publish process a bit more for this scenario: when you package/publish, web.debug.config or web.release.config (depending on build configuration) will still be used. But ultimately the file that it is transforming is web.template.config, so you may have to adjust depending on what you have in that file. Questions/Comments?
Andrew is on the right path. When you are using this feature here is how it was designed to be used.
web.config
This is the config file which developers should use locally. Ideally you should get this to be standardized. For instance you could use localhost for DB strings, and what not. You should strive for this to work on dev machines without changes.
web.debug.config
This is the transform that is applied when you publish your application to the development staging environment. This would make changes to the web.config which are required for the target environment.
web.release.config
This is the transform that is applied when you publish your application to the "production" environment. Obviously you'll have to be careful with passwords depending on your application/team.
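To make the role of these transforms concrete, here is a minimal sketch of what a web.release.config might contain (the connection string name and server are invented for the example):
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- swap the dev connection string for the production one -->
    <add name="MainDb"
         connectionString="Server=prod-sql;Database=MyApp;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <!-- strip the debug attribute for release builds -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>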
The problem with transforming the web.config that you are currently running is that a transform can perform destructive actions on the web.config. For example it may delete attributes, delete elements, etc.
You could just use the 'default' web.config as your development/debugging version, and then the web.release.config would of course continue to be the release version, since its transforms are applied when you publish.
In your debug configuration, add a post-build step, and use it to replace/transform your web.config
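One way to sketch that idea, assuming a Web Application Project and the TransformXml task that ships with Visual Studio's web publishing targets (file names and the task assembly path are illustrative and may need adjusting for your VS version):
<!-- added near the end of the .csproj, before </Project> -->
<UsingTask TaskName="TransformXml"
           AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" />
<Target Name="AfterBuild" Condition="'$(Configuration)' == 'Debug'">
  <!-- transform a template rather than web.config itself, so the source file is never destroyed -->
  <TransformXml Source="Web.template.config"
                Transform="Web.Debug.config"
                Destination="Web.config" />
</Target>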
Although I agree that the simplest approach is usually the best, I can easily imagine a circumstance where for some period of time you want to connect your IDE to a test database instead of your development database. Although you can specify the development connection strings in your default web.config file, it would be really nice to have a Web.Test.config file so that when you swap your build configuration to "Test", you would automatically get the new settings while still in your IDE.
The historical alternative is commenting out one set of connection strings for another, but these new config transforms held out the hope of finally putting a stake in the heart of that ugly practice. Although one default file for development and a transform for release may work much of the time, adding a post-build step to transform the web.config file is the more complete answer in my opinion.

Unit Testing ASP.net Web Site Project code stored in App_Code

I have an ASP.net Web Site Project (.net 3.5). Currently all of the non-code behind code files (including Linq2Sql stuff, data contexts, business logic, extension methods, etc) are located in the App_Code folder.
I am interested in introducing Unit Testing (using nunit) in at least some sections of the project moving forward. Any Unit Testing that I would be doing would need to have full access to all of the code that is currently located in the App_Code folder. I have done some initial reading so far, and the consensus seems to be:
This will not be possible given my current setup
Unit testing requires referencing classes that are part of a compiled dll, and a Web Site Project by definition only compiles at run time.
In order to proceed, I will need to either convert my entire project to a Web Application, or move all of the code that I would like to test (ie: the entire contents of App_Code) to a class library project and reference the class library project in the web site project. Either of these will provide access to the classes that I need in compiled dll format, which will allow me to Unit Test them.
Is this correct? Or is there another way that I can Unit Test without restructuring/refactoring my entire project?
My shop has finally worked through an answer for this for our MVC project. And I want to share it as I chased a lot of dead ends here on StackOverflow hearing a lot of people say it couldn't be done. We do it like this:
Open the MVC folder "as a website, from local IIS", which gets IntelliSense and debugging working properly
Add a unit test project that lives in our source-controlled directory
Add a pre-build step to the TEST project, since we can't add one to a project that is open as a website. Imagine the website is \FooSite and our test project is \FooSite.Tests. The compiled app code will end up in FooSite.Tests\FooSite_Precompiled\bin.
<Target Name="BeforeBuild">
  <AspNetCompiler VirtualPath="FooSite"
                  TargetPath="$(ProjectDir)\FooSite_Precompiled"
                  Force="true"
                  Debug="true" />
</Target>
Add a reference to the FooSite_Precompiled/bin/App_Code.dll in your test project (a sketch of the reference entry follows below).
Boom, that's it. You can have your cake and eat it too. Every time you click Build in your solution you call the aspnet_compiler.exe tool on your website csproj (which does still exist), which is able, unlike MSBuild, to compile App_Code, and Debug="true" allows you to step into the App_Code.dll code when debugging your unit test. And you only need to Build when you're running updated unit tests. When you're looking at the effects of your change on the page, you just Change Code/Save/Refresh Page, since the App_Code folder dynamically compiles when called from your web server.
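If it helps, a sketch of what step 4 (referencing the precompiled assembly) can look like in the test project's .csproj, following the folder names used above:
<ItemGroup>
  <Reference Include="App_Code">
    <!-- output of the pre-build AspNetCompiler step -->
    <HintPath>FooSite_Precompiled\bin\App_Code.dll</HintPath>
    <Private>True</Private>
  </Reference>
</ItemGroup>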
Your conclusions seem correct. I would vote for moving functionality into one or several class library projects, since that may open the door for reusing the same functionality in other projects as well.
We have this issue at my company (My boss doesn't like DLLs, some rubbish about versioning...)
We have two ways round it that we use frequently:
1) Get the CI tool to do the unit testing: We use TeamCity which has a pretty tight NUnit integration, and our solution builds quick enough (and has few enough tests) for this to be a valid option.
2) Manually precompile and unit test the resulting binaries: It's perfectly possible to run the ASP.net compiler / MSBuild from the command line (as if you were doing a 'Publish' build) and just unit test the resulting binaries.
However, if you have the option of segregating the code into binaries (class libraries) or just using a web application, I'd suggest that as a better alternative.
Should anyone find themselves implementing Brian's solution, here's a Website.targets file you can include in your unit testing project. It (re)compiles the website only when App_Code changes. Just add something like
<PropertyGroup>
  <WebsiteName>MyWebsite</WebsiteName>
  <WebsitePath>..</WebsitePath>
</PropertyGroup>
<Import Project="$(ProjectDir)\Website.targets" />
<Target Name="BeforeBuild" DependsOnTargets="CompileWebsite">
</Target>
to your .csproj, customizing WebsiteName and WebsitePath and you should be ready to go. Website.targets:
<?xml version="1.0" encoding="utf-8"?>
<!--
  Target that compiles the Website's App_Code to be used for testing
-->
<Project DefaultTargets="CompileWebsite" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <AppCodeFiles Include="$(WebsitePath)\$(WebsiteName)\App_Code\**\*.*" />
  </ItemGroup>
  <Target Name="CompileWebsite" Inputs="@(AppCodeFiles)" Outputs="$(ProjectDir)\PrecompiledWeb\bin\App_Code.dll">
    <AspNetCompiler VirtualPath="$(WebsiteName)"
                    PhysicalPath="$(WebsitePath)\$(WebsiteName)"
                    TargetPath="$(ProjectDir)\PrecompiledWeb"
                    Force="true"
                    Debug="true" />
  </Target>
  <Target Name="CleanWebsite">
    <RemoveDir Directories="$(WebsitePath)\$(WebsiteName)\PrecompiledWeb" />
  </Target>
</Project>
It looks like this is possible whilst still using App_code, but I would either move this logic out to its own class library project or change the project type to Web Application, as Fredrik and Colin suggest.
I always create my own ASP.NET projects as Web Application projects not Websites.
And as the OP stated, it's also possible to move to a Web App project, which I would say is cleaner as well: your pages can stay in the web app project and you will have them in one DLL (testable). All your business logic etc. goes into a separate class library or libraries.
It is possible to unit test classes stored in the App_Code folder without converting your project to a Web App or moving your classes to a Class Library project.
All that is necessary is setting the code files' Build Action to Compile. This will cause debugging and unit testing your website to output a .dll file.
Now when you reference your website project from the unit test project, the classes in the app_code folder will be visible.
NOTE:
Setting your .cs files' Build Action to Compile will cause your website to generate a .dll file on debugging and unit testing. The .dll file will cause problems when you debug your website, because IIS will now find your code in two places, the bin and the App_Code folder, and will not know which one to use. I currently just delete the .dll file when I want to debug.
I had to change Brian White's solution by adding the PhysicalPath attribute. In addition I am not using the Default Web Site and had to change the VirtualPath property to my website name.
<Target Name="BeforeBuild">
  <AspNetCompiler VirtualPath="myIISsitename.com"
                  PhysicalPath="$(SolutionDir)MySiteFolder"
                  TargetPath="$(ProjectDir)\MySite_Precompiled"
                  Force="true"
                  Debug="true" />
</Target>
The resulting dll will be at MySite_Precompiled\App_Code.dll

Using Maven to setup a Drupal PHP project

What do I want to achieve?
We are currently working on a PHP project that uses Drupal.
I desperately want to learn how to create a One-step build for the whole project.
Preferably by using something new (for me) that seems very powerful: Maven
Basically I want to automate the following process:
Checkout Drupal from the official CVS repository.
Checkout official 3rd party modules from their respective CVS repositories.
Checkout our custom modules from our mercurial repository.
Copy/move all the modules to the appropriate directory within Drupal.
Checkout and install our custom theme.
Add a custom drupal installation profile.
Create a new MySQL database schema.
If possible, automate the drupal db connection setup.
In the future I would like to run this build on a Hudson (or any other) continuous integration server.
Why Maven? (why not Ant or Phing?)
Other than the desire to learn something new (I have used Ant before) I think the dependency management of Maven might work well for the drupal modules.
Do you think this is enough reason to use Maven, even though Maven was not originally intended for PHP projects? I know Ant was not originally used for PHP either, but there are far more examples of people using Ant and PHP together.
BTW I think I will switch to Ant if I can't get Maven to work soon. The procedural style of Ant is just easier for me to understand.
What do I have so far?
I have a pom.xml file, that uses the SCM plugin, to checkout the drupal source.
When I run:
mvn scm:checkout
the source is checked out into a new directory:
target/checkout
When I try:
mvn scm:bootstrap
it complains about the install goal not being defined.
Here is the pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>drupal</artifactId>
  <version>1.0</version>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-scm-plugin</artifactId>
        <version>1.1</version>
        <configuration>
          <username>anonymous</username>
          <password>anonymous</password>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <scm>
    <connection>scm:cvs:pserver:cvs.drupal.org:/cvs/drupal:drupal</connection>
    <developerConnection>scm:cvs:pserver:cvs.drupal.org:/cvs/drupal:drupal</developerConnection>
    <tag>DRUPAL-6-12</tag>
    <url>http://cvs.drupal.org/viewvc.py/drupal/drupal/?pathrev=DRUPAL-6</url>
  </scm>
</project>
Finally, what are my questions?
Is Maven the wrong tool for this?
If no,
How would you do this?
Is it the scm:bootstrap goal that I should be using?
What is the Maven way of moving directories around on the file system?
Should the install goal be used to move the modules into the drupal directory?
Currently all our custom modules are in one mercurial repository. Is it possible to create a pom.xml and checkout each module individually?
Any general advice would be appreciated.
Thanks for your time!
I'm 98% certain that what you really need is Drush Make, which can recursively build Drupal projects, provided they provide their own .make file listing their dependencies. It can download from multiple SCMs, web, patch files, and you can control where they get downloaded. It also support external libs, such as wysiwyg, PHP files, or JS libraries.
See the Open Atrium make file for a sample of what it can do.
Definitely, you should not be using Maven for this; here are some thoughts:
Maven is a Java build tool and dependency management tool with a well-defined lifecycle that goes like this: validate, compile, test, package, integration-test, verify, install, deploy. What you are using is the scm plugin, which can attach to any of the phases defined here and perform some actions, but unless you make complicated changes in the POM (I haven't heard of anyone doing this) the lifecycle will continue being executed.
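For illustration, binding the scm plugin to a phase looks roughly like this (the phase and checkout directory are arbitrary choices for the sketch, not something from the question):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-scm-plugin</artifactId>
  <version>1.1</version>
  <executions>
    <execution>
      <id>checkout-drupal</id>
      <!-- run the checkout early in the lifecycle -->
      <phase>generate-sources</phase>
      <goals>
        <goal>checkout</goal>
      </goals>
      <configuration>
        <checkoutDirectory>${project.build.directory}/drupal</checkoutDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>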
Maven is also designed to package JARs, WARs and, with the use of some plugins, EARs, SARs, RARs (not those RARs) and some other files; you might have to write a new plugin to get the type of package you expect, or use the assembly plugin, which will make things more complicated.
Because of the previous points, there is no native Maven command to move files into a specific directory, and you shouldn't invoke the install phase to copy files to any location other than the local repository. What you're doing is like taking a laundry machine and converting it into a blender.
After reading what you want to do with your project, I'd suggest you create a script (a shell script or batch script depending on your OS) to do the job. SVN and CVS have command-line tools which can be invoked from inside your build scripts. I guess you opted for Maven, among other reasons, because Hudson and many other continuous integration servers are well integrated with it, but you can use them with scripts too.
If you are comfortable using Ant and you think it will ease building your app, I'd say it's not so bad ;) (I haven't used Ant for purposes other than Java projects)
The Drush 'module' is a great tool for scripting out things in Drupal. But, beyond that, I think your approach of doing CVS checkouts for each 'build' is a little off base - unless you have -really- good reasons to have every chunk of the project in a separate repository, your best bet is to have fixed checkouts of Drupal core & contributed modules committed to your project's repository. Not only does this take out a dependency on a network connection and the stability of an external server but it allows you to have local modifications of the contributed modules (unfortunately, you're probably going to end up doing this somewhere down the line).
Once you take out the requirement to do checkouts from multiple repositories, you'll probably notice that your task becomes -much- easier, leaving you with some simple MySQL manipulation and writing out a settings.php.
The project http://www.php-maven.org now comes with a build plugin that brings Maven to the PHP world (or the PHP world to Maven projects). A version 2 snapshot can be found in our Google group (news thread available at https://groups.google.com/group/maven-for-php/t/e055e49c89ccb8c5?hl=de).
However, this gives you full control over the project and respects the default Maven lifecycle, so that the Maven commands:
mvn clean
mvn package
mvn deploy
mvn site
will work correctly.
Drupal support may be enabled in version 2.1, where we are focused on frameworks (Zend, FLOW3...) and project types (web, cli, libs...). It would be too much to clarify here what Maven is and how it can help you during PHP development. As Victor Hugo stated in his earlier comment, Maven's benefits come not from executing a specific command manually but from embedding the whole project structure and the whole project lifecycle in Maven.
Since most PHP folks have not yet had contact with Java and especially Maven, we are creating tutorials so that everyone has a fairly simple entry into the Maven world.
I love Maven, although I think it is very Java-specific, as mentioned above.
I had success handling repeatable tasks with Phing. I used it in a Zend project to prepare a build or just speed up the normal repeatable tasks (e.g. clean up the db, load a db dump).
Phing won't give you complete lifecycle management like Maven, but you can write it yourself by hand. You can embed shell commands in build.xml, so you can use everything that you would use in a normal shell script.
I prefer Phing over a plain shell script because it can handle dependent targets, so if your build.xml contains well-designed targets that depend on each other, you'll get very useful chains to achieve specified goals.
It works for me.
Another great tool for Drupal is Drush, which makes Drupal administration scriptable. You can do lots of Drupal-specific things from the console. I think you can call Drush commands from Phing scripts.
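As a rough illustration of those two points (dependent targets plus calling Drush), a minimal Phing build.xml sketch; the target names, database name and commands are invented for the example:
<?xml version="1.0" encoding="UTF-8"?>
<project name="drupal-site" default="build">

  <!-- drop and recreate the development database -->
  <target name="clean-db">
    <exec command="mysql -u root -e 'DROP DATABASE IF EXISTS drupal_dev; CREATE DATABASE drupal_dev'"
          checkreturn="true" />
  </target>

  <!-- reload a database dump once the database exists again -->
  <target name="load-db" depends="clean-db">
    <exec command="mysql -u root drupal_dev &lt; dumps/dev.sql" checkreturn="true" />
  </target>

  <!-- run Drush against the freshly loaded site -->
  <target name="build" depends="load-db">
    <exec command="drush cache-clear all" dir="htdocs" checkreturn="true" />
  </target>

</project>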
