Is there any way to find unreferenced code in Flex Builder? - apache-flex

We've got several Flex projects, one of which has just been refactored. I'm wondering if there's an easy way to tell which classes and functions (if any) aren't being used any more?
I've discovered that we've definitely got some unused code, because running ASDoc on the entire project reports some compilation errors which don't get reported by Flex Builder (implying that those classes aren't being used any more). I'm hoping to find a more robust and complete method, and preferably one which can work at function level too.

My ugly hack:
Using the swfdump tool from SWFTools, dump the disassembly of your swf (or of all of your swfs):
swfdump -a my.swf > dump
Get a list of all your classes:
find . -name "*.as" -exec basename {} .as \; > classes
find . -name "*.mxml" -exec basename {} .mxml \; >> classes
Apply one list to the other:
for class in $(<classes) ; do grep -q \\\<$class\\\> dump || echo $class ; done
I am doing this on Windows, using Cygwin.
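Putting the three steps together, here is a rough sketch of the whole hack as a single script; SWF and SRC are placeholders you will need to adjust, and it assumes swfdump is on your PATH:
#!/usr/bin/env bash
# find-unused.sh -- rough sketch of the hack above
SWF=my.swf    # the compiled swf to inspect
SRC=.         # root of the ActionScript/MXML sources
swfdump -a "$SWF" > dump
find "$SRC" -name "*.as" -exec basename {} .as \; > classes
find "$SRC" -name "*.mxml" -exec basename {} .mxml \; >> classes
# report every class name that never appears in the disassembly
while read -r class ; do
    grep -q "\\<$class\\>" dump || echo "$class"
done < classes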

Check out the Flex PMD tool. It was recently released in beta, but we've been using it at work for a few weeks, and it seems to work pretty nicely.

Note: the swfdump tool included with the Flex SDK will work in place of the SWFTools version in the bash script listed above.

This doesn't really answer your question, but you can find the references to a single class, variable or function by selecting it (in the code editor) and pressing Ctrl+Shift+G. I think that's about all you can get out of Flex / Flash Builder at the moment.

Related

"find" command returning nothing when searching through absolute path

Thought there might be a simple solution to this, but I can't seem to find it anywhere. It's a simple-enough problem. Say I have the following folder/file structure:
/home/
text1.txt
/mydir/
text2.txt
Then I input the command:
find . -name *.txt
This command returns "text1.txt" when called from within /home, and returns "text2.txt" when called from within /home/mydir, as it should.
However, when calling the following from /home...:
find /home/mydir -name *.txt
it returns nothing. My expectation is that it would return "text2.txt." Any thoughts? I have already checked to see if I have any wayward aliases assigned to find, and I have nothing.
It is also worth noting that I have two unix machines. The use of an absolute path for "find" works on one machine and not the other. Can't go into much more detail than that, I'm afraid. Just looking for a direction to investigate this more.
Thanks to anyone who can help :-)
You should use
find . -name "*.txt"
otherwise bash will expand *.txt to text1.txt, resulting in the following command:
find . -name text1.txt
and it will no longer match text2.txt.
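A quick way to see the expansion for yourself is to let echo print what the shell actually hands to find (using the same /home layout as above):
$ cd /home
$ echo find . -name *.txt
find . -name text1.txt
$ echo find . -name "*.txt"
find . -name *.txt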

How to generate translation file (.po, .xliff, .yml,...) from a Symfony2/Silex project?

I'm going to build a Silex/Symfony2 project and I have been looking around for a method to generate XLIFF/PO/YAML translation files based on the texts-to-be-translated inside the project, but I haven't found any instructions or documentation on it.
My question is: Is there an automated way to generate translation file(s) in specific format for a Symfony2/Silex project?
If yes, please tell me how to generate the file and then update the translations after that.
If no, please tell me how to create the translation file(s) and then add more text to them as the project grows. I am looking for a desktop-based or web-based editor instead of a normal text editor; I have looked at Transifex and GetLocalization, but they don't have an option to create a new file or add more text.
After a long time searching the internet, I found a good one:
https://github.com/schmittjoh/JMSTranslationBundle
I see you've found a converter, but to answer your first question about generating your initial translation file -
If you have Gettext installed on your system you could generate a PO file from your "texts-to-be-translated inside the project". The command line program xgettext will scan the source files looking for whatever function you're using.
Example:
To scan PHP files for instances of the trans method call, you could use the following command:
find . -name "*.php" | xargs xgettext --language=PHP --keyword=trans --output=messages.pot
To your question about editors:
You could use any PO editor, such as POEdit, to manage your translations, but as you say you eventually need to convert the PO file to either an XLIFF or YAML language pack for Symfony.
I see you've already found a converter tool. You may also like to try the one I wrote for Loco. It supports PO to YAML, and PO to XLIFF
Workaround for busy people (UNIX)
You can run the following command in the Terminal:
$ grep -rEo --no-filename "'.+'\|\btrans\b" templates/ > output.txt
This will output the list of messages to translate:
'Please provide your email'|trans
'Phone'|trans
'Please provide your phone number'|trans
...
Well, almost. But you can usually do some useful work from here...
Obviously you must tweak the command to your liking (transchoice, double-quotes instead of single...).
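For example, one possible (untested) tweak that also catches double-quoted strings and the transchoice filter might look like this:
grep -rEo --no-filename "('[^']+'|\"[^\"]+\")\|\b(trans|transchoice)\b" templates/ > output.txt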
Not ideal but can help!
grep options
grep -R, -r, --recursive: Read all files under each directory, recursively; this is equivalent to the -d recurse option.
grep -E, --extended-regexp: Interpret PATTERN as an extended regular expression.
grep -o, --only-matching: Show only the part of a matching line that matches PATTERN.
grep -h, --no-filename: Suppress the prefixing of filenames on output when multiple files are searched.

Complex command execution in Makefile

I have a query regarding the execution of a complex command in the makefile of the current system.
I am currently using the shell function in the makefile to execute the command. However, my command fails because it is a combination of many commands and its execution collects a huge amount of data. The makefile content is something like this:
variable=$(shell ls -lart | grep name | cut -d/ -f2- )
However, the make execution fails with an execvp failure, since the file listing is huge and I need to parse all of it.
Please suggest any ways to overcome this issue. Basically I would like to execute a complex command and assign its output to a makefile variable which I want to use later in the program.
(This may take a few iterations.)
This looks like a limitation of the architecture, not a Make limitation. There are several ways to address it, but you must show us how you use variable, otherwise even if you succeed in constructing it, you might not be able to use it as you intend. Please show us the exact operations you intend to perform on variable.
For now I suggest you do a couple of experiments and tell us the results. First, try the assignment with a short list of files (e.g. three) to verify that the assignment does what you intend. Second, in the directory with many files, try:
variable=$(shell ls -lart | grep name)
to see whether the problem is in grep or cut.
Rather than store the list of files in a variable, you can easily use shell functionality to get the same result. It's a bit odd that you're flattening a recursive ls to only get the leaves and then running mkdir -p, which is really only useful if the parent directory doesn't exist, but if you know which depths you want to search (for example the current directory and all subdirectories one level down) you can do something like this:
directories:
	for path in ./*name* ./*/*name*; do \
		mkdir "/some/path/$$(basename "$$path")" || exit 1; \
	done
or even
find . -name '*name*' -exec sh -c 'mkdir "/some/path/$(basename "$1")"' sh {} \;

Parallel Make Output

When running a CMake generated Makefile with multiple processes (make -jN), the output often gets messed up like this:
[ 8%] [ 8%] [ 9%] Building CXX object App/CMakeFiles/App.dir/src/File1.cpp.o
Building CXX object App/CMakeFiles/App.dir/src/File2.cpp.o
Building CXX object App/CMakeFiles/App.dir/src/File3.cpp.o
I'm not sure, but I think this behavior is also there for Makefiles not generated by CMake. I'd say it happens when multiple processes write to stdout at the same time.
I know I'm probably being pedantic, but is there any (simple) fix to this? ;)
If you're using GNU make, you can do it by redefining SHELL so that commands are wrapped by a trivial utility that ensures atomicity of the information printed to standard output; a simple working example of such a wrapper is shown in an answer below.
I tried to get the CMake people to fix this, but apparently they don't want to. See http://www.cmake.org/Bug/view.php?id=7062.
The specific CMake bug related to interleaved make output using -jN with N>1 is CMake bug 0012991: "Parallel build output mess". It is still open in the "backlog" state waiting to be fixed.
This bug is actually annoying enough that it's a strong reason to switch to Ninja instead of make, plus Ninja is faster than make. Ninja also uses an appropriate number of parallel jobs based on the number of CPU cores present. Also cool is how Ninja is, by default, very quiet: all progress happens on a single line in the terminal unless the build process emits messages or a build step fails. If a build step fails, Ninja prints the full command line that invoked it and displays the output, which makes any warning or error messages stand out. Currently there is no colored terminal output; that would be a nice improvement, but for me the advantages of Ninja over make are tremendous.
Looks like it is already fixed. Add a -Oline parameter to the command line:
make -j 8 -Oline
Version of make:
GNU Make 4.3
Built for x86_64-pc-msys
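For reference, -Oline is a short spelling of GNU make's --output-sync option (added in GNU make 4.0); syncing per target instead of per line is often even easier to read:
make -j 8 --output-sync=target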
Sun's (now Oracle's) dmake, available on Linux and Solaris, takes care of that.
Here is a simple working example of using a wrapper for Make. I'm not sure if I'd encourage its use, but it's an idea.
# Makefile
SHELL = /tmp/test/wrapper
test: test1 test2
test1:
	$(eval export TARGET=$@)
	env
test2:
	$(eval export TARGET=$@)
	env
and this:
#!/usr/bin/env bash
# wrapper -- invoked by make as SHELL; prefixes each output line with TARGET
bash "$@" | sed -e "s/^/${TARGET} /"

Memorizing *nix command line arguments

For my developer work I reside in the *nix shell environment pretty much all day, but I still can't seem to memorize the names and argument specifics of programs I don't use daily. I wonder how other 'casual amnesiacs' handle this. Do you maintain a big cheat sheet? Do you rehearse the emacs shortcuts when you take your weekly shower? Or is your desk covered in sticky notes?
Using bash_completion is one way of not having to remember the precise syntax of program arguments.
> svn [tab][tab]
--help checkout delete lock pdel propget revert
--version ci diff log pedit proplist rm
-h cleanup export ls pget propset status
add co help merge plist pset switch
annotate commit import mkdir praise remove unlock
blame copy info move propdel rename update
cat cp list mv propedit resolved
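If completion isn't already active in your shell, on many distributions it is enabled by sourcing the bash_completion script from your ~/.bashrc; the exact path varies by system, so the one below is just the common Debian/Ubuntu location:
# Debian/Ubuntu-style path; adjust for your distribution
[ -f /etc/bash_completion ] && . /etc/bash_completion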
If I don't use a command regularly enough to remember what I want, I tend to just use --help or the man pages when I need to.
Or, if I'm lucky, I use CTRL+R and let bash's history search find when I last used it.
Eventually you just remember them, or at least the set that you use. I used to maintain a README in my home directory when I was starting out, but that disappeared many years ago.
One useful command is man -k, which you pass a word to; it returns a list of all commands whose man page summary contains that word.
'apropos' is also a very useful command. It will list all commands whose man page descriptions contain the keyword.
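For example, both of the following search the man page name/description database for a keyword:
man -k compress
apropos compress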
