In terms of performance, which is better: Flex or Silverlight? - apache-flex

In general, which performs better? How do they compare when processing vector graphics?

Bubblemark is the premier benchmarking site for RIA frameworks.
Note that at least one comparison image there shows better drawing quality in Flash.
The GUIMark test is very interesting; the initial results showed Silverlight performance as poor. If you read down into the comments, the problem was identified as partly a coding issue with timing and partly a major speed issue with Silverlight rendering text.
So, one key issue is whether text forms a major part of what you wish to render at speed.
Aside: I did a realtime graphing engine in Cocoa back in 2003 where I ended up using traditional QuickDraw rendering, because the heavily anti-aliased and fancy text rendering of Quartz graphics on Mac OS X at the time was far too slow. Fast, good-looking text is not easy!

Related

Adding calculation power to flex application

I have been tasked with making several Flex-driven visualizations of immense Excel spreadsheets. The client wants the finished product to be as self-contained as possible. The problem is that Flex doesn't offer as much computing horsepower as is needed. Is there an easy (or not easy) way to accomplish this? I am just trolling for pointers. Thanks in advance!
If you don't mind doing it the hard way, I have two options for you:
Pixel Bender: a tool originally designed for creating complex, CPU-intensive graphic filters and offloading those calculations to hardware. But it can be used for number crunching too. Here's an article that covers that topic: Using Pixel Bender with Flash Builder 4 as a number crunching engine. The language may not be like anything you're used to; I had a hard time wrapping my head around it.
Alchemy: a tool that compiles C or C++ code so it can be executed in the Flash VM. I am not certain how much performance can be gained for simple number crunching, but if you know C, this might be a path worth investigating.
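To give a feel for the Alchemy route, here is a minimal sketch modeled on the pattern of Adobe's published Alchemy samples. The function name and interop calls should be checked against the AS3.h header shipped with your Alchemy SDK; sumOfSquares is an illustrative name, not part of any real API.

    /* Hypothetical sketch: a C number-cruncher exposed to ActionScript
       via Alchemy, following the pattern of Adobe's Alchemy samples. */
    #include "AS3.h"

    /* Sum i*i for i = 1..n in compiled C, returning the result as an AS3 Number. */
    static AS3_Val sumOfSquares(void* self, AS3_Val args)
    {
        int n = 0;
        AS3_ArrayValue(args, "IntType", &n);  /* unpack the ActionScript argument */

        double total = 0.0;
        int i;
        for (i = 1; i <= n; i++)
            total += (double)i * i;

        return AS3_Number(total);
    }

    int main()
    {
        /* Wrap the C function and publish it to the ActionScript side. */
        AS3_Val method = AS3_Function(NULL, sumOfSquares);
        AS3_Val lib = AS3_Object("sumOfSquares: AS3ValType", method);
        AS3_Release(method);

        AS3_LibInit(lib);  /* hands control to the Flash VM; never returns */
        return 0;
    }

On the ActionScript side you would then load the compiled SWC and call the exposed method, keeping the tight loop entirely in compiled code.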
The first thing that comes to my mind is building a web service to do the hard work, but that is not a self-contained product.
Apart from that, take a look at Apparat (http://code.google.com/p/apparat); it allows various optimizations, access to low-level AVM2 code (http://code.google.com/p/apparat/wiki/AsmExpansion), and more. I do not think the AS3/Flex compiler is that bad at math, though. Try writing a sample math function and testing it in different languages.

Dynamic 3D Map Visualization Toolkit?

I'm seeking a dynamically updatable, "real-time" map visualization toolkit that would support the following concept:
A user-controlled pilot's-eye view flying above a landscape where the topography is dynamically changing (hills rising and falling, slip/sliding around, valleys opening and filling) in real time. (Currently just a color-coded landscape surface would be acceptable, although the eventual goal is to overlay terrain/map imagery.)
Another process is dynamically updating the landscape topography data as our fearless flyer passes over it.
There's lots of 3D visualization "explorers" out there, but they all seem to either be limited to a static data set, or require that the dynamic evolution of the data visualizations all be calculated in advance and then strung together as an animation. And flight simulators of course all pretty much assume that the topography doesn't change while in flight.
Technical wishlist:
Linux
C API preferred, but open to C++ or Java (or Ada :-)
Free/Open source preferred, but will consider proprietary
Performance: Well, I'll try to work with whatever it's got
If C++ is OK, and you don't require something too high-level, I'd HIGHLY recommend OpenSceneGraph for a project like this. I used it on a project several years ago to display various forms of geospatial data on a globe (vector coastline data, satellite imagery, etc.).
Do keep in mind you're not limited to writing your entire solution in C++ :)
Our 3D visualization app combined our C++/OSG 3D library for graphics, a Java front-end for the GUI, and some old Fortran code for the serious number crunching :O
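To make the dynamic-terrain part concrete, here is a minimal sketch (assuming OpenSceneGraph 2.x-era APIs; all names are illustrative, not from the original project) of a height field whose samples are rewritten every frame by an update callback:

    #include <osg/Geode>
    #include <osg/NodeCallback>
    #include <osg/Shape>
    #include <osg/ShapeDrawable>
    #include <osgViewer/Viewer>
    #include <cmath>

    // Rewrites every height sample each frame: the "topography changes
    // while you fly over it" requirement in its simplest form.
    class TerrainUpdater : public osg::NodeCallback {
    public:
        TerrainUpdater(osg::HeightField* hf, osg::ShapeDrawable* d)
            : _hf(hf), _drawable(d), _t(0.0) {}

        virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
        {
            _t += 0.05;
            for (unsigned r = 0; r < _hf->getNumRows(); ++r)
                for (unsigned c = 0; c < _hf->getNumColumns(); ++c)
                    _hf->setHeight(c, r,
                        float(5.0 * std::sin(c * 0.3 + _t) * std::cos(r * 0.3 + _t)));
            _drawable->dirtyBound(); // heights moved, so the bounds did too
            traverse(node, nv);
        }

    private:
        osg::ref_ptr<osg::HeightField> _hf;
        osg::ref_ptr<osg::ShapeDrawable> _drawable;
        double _t;
    };

    int main()
    {
        osg::ref_ptr<osg::HeightField> hf = new osg::HeightField;
        hf->allocate(64, 64);   // 64x64 grid of height samples
        hf->setXInterval(1.0f);
        hf->setYInterval(1.0f);

        osg::ref_ptr<osg::ShapeDrawable> drawable = new osg::ShapeDrawable(hf.get());
        drawable->setUseDisplayList(false); // geometry changes every frame

        osg::ref_ptr<osg::Geode> geode = new osg::Geode;
        geode->addDrawable(drawable.get());
        geode->setUpdateCallback(new TerrainUpdater(hf.get(), drawable.get()));

        osgViewer::Viewer viewer;
        viewer.setSceneData(geode.get());
        return viewer.run(); // a real app would add a pilot's-eye camera manipulator
    }

In a real application the update callback would pull heights from your other process rather than computing a sine pattern, but the structure stays the same.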

wxGraphicsContext dreadfully slow on Windows

I've implemented a plotter using wxGraphicsContext. The development was done using wxGTK, and the graphics were very fast.
Then I switched to Windows (XP), using wxWidgets 2.9.0, and the same code is extremely slow. It takes about 350 ms to render a frame. Since the user can drag the plotter with the mouse to navigate, it feels very sluggish with such a slow update rate.
I've tried to implement some parts using wxDC and benchmarked the difference. With wxDC the code runs just about 100 times faster.
As far as I know both Cairo and GDI+ are implemented in software at this point, so there's no real reason Cairo should be so much faster than GDI+.
Am I doing something wrong? Or is the GDI+ implementation just not on par with Cairo?
One small note: I'm rendering to a wxBitmap now, with the wxGraphicsContext created from a wxMemoryDC. This is to avoid flicker on XP, since double buffering doesn't work there.
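For reference, a stripped-down sketch of that setup (wxWidgets 2.9-era API; the class and sizes are illustrative): render with wxGraphicsContext into a wxBitmap through a wxMemoryDC, then just blit the bitmap in the paint handler.

    #include <wx/wx.h>
    #include <wx/graphics.h>

    class PlotPanel : public wxPanel {
    public:
        PlotPanel(wxWindow* parent)
            : wxPanel(parent, wxID_ANY, wxDefaultPosition, wxSize(800, 600)),
              m_buffer(800, 600)
        {
            SetBackgroundStyle(wxBG_STYLE_CUSTOM); // we repaint everything ourselves
            Bind(wxEVT_PAINT, &PlotPanel::OnPaint, this);
            RenderToBuffer();
        }

    private:
        // Draw the (expensive) plot once into the off-screen bitmap.
        void RenderToBuffer()
        {
            wxMemoryDC memDC(m_buffer);
            wxGraphicsContext* gc = wxGraphicsContext::Create(memDC);
            if (gc) {
                gc->SetBrush(*wxWHITE_BRUSH);
                gc->DrawRectangle(0, 0, 800, 600);
                gc->SetPen(wxPen(*wxBLUE, 2));
                gc->StrokeLine(0, 300, 800, 300); // placeholder plot content
                delete gc;                        // flush before the DC goes away
            }
        }

        // The paint handler is now just a cheap, flicker-free blit.
        void OnPaint(wxPaintEvent&)
        {
            wxPaintDC dc(this);
            dc.DrawBitmap(m_buffer, 0, 0);
        }

        wxBitmap m_buffer;
    };

    class PlotApp : public wxApp {
    public:
        virtual bool OnInit()
        {
            wxFrame* frame = new wxFrame(NULL, wxID_ANY, "Plotter",
                                         wxDefaultPosition, wxSize(800, 600));
            new PlotPanel(frame);
            frame->Show();
            return true;
        }
    };

    wxIMPLEMENT_APP(PlotApp);

The point of the design is that the slow wxGraphicsContext path runs only when the data actually changes, while drags and repaints cost one bitmap blit.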
Excerpt from the cairo homepage:
Cairo is designed to produce consistent output on all output media while taking advantage of display hardware acceleration when available (eg. through the X Render Extension).

What are the options and best practices for PV3D inspired modeling

The studio I work at is currently developing the Tony Hawk XI website, and I am responsible for the Flash/AS3 development. As part of the pitch, I entered an augmented-reality skateboard example to be shown, which impressed the client very much.
After a few weeks of getting stronger with Papervision3D and getting to know the FlarToolkit, I have successfully imported MD2 and DAE files that load and interact with my custom marker.
Now it has come time to develop some of my own models; I will be using 3DSMAX. I want to know what the limitations are on things like poly-count, character rigging and animation, texturing, tricks for exporting and creating the proper format file and any other bits of information that may save me some serious headaches down the road.
Currently I have a Quake2 MD2 model, Ernie, pulled inside of a FlarToolkit demo here.
This is very low-poly, and I was wondering how many polys I could expect to get away with, given that today's machines are so much faster.
Brian Hodge (blog.hodgedev.com, hodgedev.com)
I've heard that 2000 polys is about the threshold for good performance. In practice though, it's been hit or miss, and a lot of things can have an impact. So far I've run into performance hits when using animated MovieClip materials, animated materials with an alpha channel, and precise materials.
Having to clip objects seems to be a double-edged sword. In some cases it will increase performance by a good deal, and in others (seemingly primarily when there are a lot of polys on the edge of the viewport) it'll drop the framerate by a good 10-15 fps. So, I'd say the view you set up is something to think about as well.
For example, we have a model of the interior of a store, with some shelves and products and customers walking around. In total we have just under 600 triangles (according to StatsView, which you should check out if you haven't yet: org.papervision3d.view.stats.StatsView). On my computer, a new quad-core machine, it runs at a steady 30 fps (which is where we want it), but on an old Dell XPS (Pentium 4) it runs between 20 and 30 fps, depending on what objects are being clipped, etc.
We try to reduce the poly count and texture creatively to fix as many of the performance issues as possible. Unfortunately our minimum specs are really low, so we need to do a lot to get it to run well.
Edit:
Another thing we're doing is swapping out less-detailed models for higher-detailed ones when zoomed in. If you aren't zooming at all, then this probably won't help.
Hope that helps a bit.

How can you program if you're blind?

Sight is one of the senses most programmers take for granted. Most programmers would spend hours looking at a computer monitor (especially during times when they are in the zone), but I know there are blind programmers (such as T.V. Raman who currently works for Google).
If you were a blind person (or slowly becoming blind), how would you set up your development environment to assist you in programming?
(One suggestion per answer please. The purpose of this question is to bring the good ideas to the top. In addition, screen readers can read the good ideas earlier.)
I am a totally blind college student who's had several programming internships, so my answer will be based on these. I use Windows XP as my operating system and JAWS to read what appears on the screen to me in synthetic speech. For Java programming I use Eclipse, since it's a fully featured IDE that is accessible.
In my experience, as a general rule, Java programs that use SWT as the GUI toolkit are more accessible than programs that use Swing, which is why I stay away from NetBeans. For any .NET programming I use Visual Studio 2005, since it was the standard version used at my internship and is very accessible using JAWS and a set of scripts that were developed to make things such as the form designer more accessible.
For C and C++ programming I use Cygwin with gcc as my compiler, and Emacs or Vim as my editor, depending on what I need to do. A lot of my internship involved programming for z/OS. I used an rlogin session through Cygwin to access the USS subsystem on the mainframe, and c3270 as my 3270 emulator to access the ISPF portion of the mainframe.
I usually rely on synthetic speech but do have a braille display. I find I usually work faster with speech, but use the braille display in situations where punctuation matters and gets complicated. Examples of this are if statements with lots of nested parentheses, and JCL, where punctuation is incredibly important.
Update
I'm playing with Emacspeak under Cygwin (http://emacspeak.sourceforge.net). I'm not sure if this will be usable as a programming editor, since it appears to be somewhat unresponsive, but I haven't looked at any of the configuration options yet.
I'm blind, and have been programming for about 13 years on Windows, Mac, Linux and DOS, in languages from C/C++, Python, Java, C# and various smaller languages along the way. Though the original question was around configuring the environment, I think it's best answered by looking at how a blind person would use a computer.
Some people use a talking environment, such as T. V. Raman and the Emacspeak environment mentioned in other answers. The more common solution by far is to have a screen reader which runs in the background monitoring OS activity and alerting the user via synthetic speech or a physical braille display (generally showing somewhere from 20 to 80 characters at a time). This then means a blind person can use any accessible application.
So, I personally use Visual Studio 2008 these days, and run it with very few modifications. I turn off certain features like displaying errors as I type since I find this distracting. Prior to joining Microsoft all my development was done in a standard text editor like Notepad, so once again no customisations.
It is possible to configure a screen reader to announce indentation. I personally don't use this, since Visual Studio takes care of this, and C# uses braces. But this would be very important in a language like Python where whitespace matters. Finally, Emacspeak does make use of different voices/pitches to indicate different parts of syntax (keywords, comments, identifiers, etc).
I am blind and have been a programmer for the last 12 years or so. Currently I am a senior architect and work with Sapient Corporation (a Cambridge-based consulting company creating both web-based and thick-client enterprise solutions).
I use several screen readers but mostly stick with JAWS for Windows and NVDA.
I have mostly worked on the Microsoft platform, with Visual Studio as my environment. I also use tools like MS SQL Enterprise Studio and others for DB access, network monitoring, etc.
I tried to spend some time with Emacspeak, but since my work was mostly based on the MS platform, I never really spent a lot of time there.
I have also spent a couple of years working on C++ on Linux: I mostly used Notepad or Visual Studio on Windows for all the coding, and then Samba to share files with the Linux environment.
I also used Borland C for some experimental stuff. I have recently been playing around with Python, which as other people have noted above is particularly unfriendly for a blind user, because it uses indentation as its nesting mechanism. Having said that, NVDA, the most popular open-source screen reader, is written completely in Python, and some of the committers on that project are themselves blind.
A particularly interesting question I frequently get asked as an architect is how I deal with diagrams: UML, Visio, Rational Rose, etc. Visio is probably the most accessible diagramming tool out there. I was able to write JAWS scripts to read Rational Rose diagrams for me. I've used a tool called T-dub (technical diagram understanding for the blind), developed by a German university, for accessing UML 2.0 diagrams. I have used a Java-based (ugly) tool called MagicDraw for doing model-driven development, and was a committer on the AndroMDA project, where I helped develop the .NET code generator from a UML model.
In general, I find that I thrive most in a team environment where I can play to my strengths. For example, while a diagram is extremely useful for communicating and documenting a design, the actual design process involves a lot of thinking and brainstorming, and once the design has been thought out, one of your teammates can help you quickly put together a neatly drawn picture of it.
People misconstrue the above as a lack of independence or ability, while I see it as pure interdependence: I am sure that the teammate alone could never have come up with that design on his/her own, and in turn, if I depend on him to document the design, so be it.
Most hurdles I face are tool-based inaccessibility. For example, all Oracle products have been progressively declining in accessibility over the years (shame on them), and a team environment basically gives me an extra layer of defense against these, over and above my screen readers and custom scripts.
I am a blind developer and I work under Windows, GNU/Linux and Mac OS X. Each platform has a different workflow for blind users, depending on the screen reader the developer uses.
Development tools are not completely accessible to blind developers. I can type code and use the compile functions in all IDEs, but there are many problems if I have to design an interface using design tools such as Interface Builder, XGlade or others. When I was developing with Borland Delphi I could add a control, a button for example, and modify each of its visual attributes using the Object Inspector window. Many IDEs use object-inspector windows to modify visual and non-visual attributes, but the problem for a blind developer is adding new controls, because the usual method is to drag and drop a control from the palette to the canvas. Visual Studio 200x uses alternative methods for this, but the interface of the IDE changes with each new version, and this is a big problem, because screen readers for Windows need special support, via scripts, to identify each area of some non-standard applications. A blind developer can use Visual Studio 2008 with his screen reader, but when a new version of the IDE appears, he has to wait for a new version of the scripts for it.
Xcode with Interface Builder still has no alternative to drag-and-drop tasks. I have asked Apple for this many times, but they are working on other things. I published 3 apps in the App Store (Accessible Minesweeper, Accessible Fruitmachine and Programar a Ciegas RSS) and I had to build the entire interface in code. It's hard work, but it means I can manage every feature of each control.
Eclipse has an accessible code editor, but other development tools, such as the debug console, designer plugins and the documentation area, present problems for blind users' assistive tools.
Documentation is a problem for blind developers too. Many samples and demonstrations use images to carry the explanation ("set the environment settings as shown in the picture").
I think the question is not about being blind. The question is that companies and development groups assume accessibility matters for the final software but not for development software. They think a blind user can be a client but not a development colleague.
Blind associations demand accessibility in products and services, but they forget blind developers. Blind people can work as lawyers, journalists or teachers, but a blind developer is a strange concept, even to the blind. Many times I feel alone, because some blind friends of mine can't understand my work.
You can read my opinion on this issue, in Spanish, in this article on my blog: http://www.programaraciegas.net/2010/11/05/la-accesibilidad-en-crisis-para-los-desarrolladores-ciegos/
There is a translation tool on the web page. Sorry, but I didn't translate it.
Emacs has a number of extensions to allow blind users to manipulate text files. You'd have to consult an expert on the topic, but Emacs has text-to-speech capabilities. And probably more.
In addition, there's BLinux:
http://leb.net/blinux/
Linux for the blind. Been around for a very long time. More than ten years I think, and very mature.
Keep in mind that "blind" is a range of conditions - there are some who are legally blind that could read a really large monitor or with magnification help, and then there are those who have no vision at all. I remember a classmate in college who had a special device to magnify books, and special software she could use to magnify a part of the screen. She was working hard to finish college, because her eyesight was getting worse and was going to go away completely.
Programming also has a spectrum of needs - some people are good at cranking out lots and lots of code, and some people are better at looking at the big picture and architecture. I would imagine that given the difficulty imposed by the screen interface, blindness may enhance your ability to get the big picture...
Hanselman had a really interesting podcast with a blind developer recently.
I worked for the Greater Detroit Society for the Blind for three years running a BBS tailored for blind access and worked with a number of blind users on how to better meet their needs, and with newly blind users to get them acclimated to the available hardware and software offerings that were available at the time. If nothing else, I at least learned to read Braille as a hedge against the case where I ever wound up in the same situation!
The majority of blind computer users and programmers use a screen reader of some sort. JAWS in particular is popular. Fortunately, most major applications these days offer some form of handicapped access. You may have to tune your environment slightly to cut down on the chatter, e.g. consider disabling IntelliSense in Visual Studio.
A braille display is less common and comparatively much more expensive; it can show 40 or 80 columns of text and is used when exact positioning/punctuation is important. While a screen reader can be configured to rattle off punctuation, a lot of people find it distracting, and it is easier in many cases to feel your way through it. JAWS can be configured to drive the display, so you're not juggling accessibility applications.
Also, a lot of legally blind users still have some modicum of sight left to them. Using high contrast backgrounds and the magnification functionality can help a lot of these users.
Using ToggleKeys in Windows will let you hear when you accidentally tap one of the modal 'caps lock', 'num lock', 'scroll lock', etc. keys as well.
I know at least one Haskell programmer who uses a screen reader and who explicitly programs without using Haskell's layout rules, and instead opts to use the rather non-idiomatic, but supported {;}'s instead, because it is easier/less distracting for him to get his screen reader to read off punctuation than for him to figure out exact indentation that complies with Haskell's layout rules. On that same note, I've heard some grumbling from a couple of blind programmers about when they have to write Python.
Ultimately, you learn to play on your strengths.
I can't recall the source, but I've heard/read about a form of audible syntax "colouring" - so that instead of a string assignment being read as
foo equals quote this is a string quote
the string part would be read with a different pitch or voice to make the separation of elements clearer.
One place to start is the Blinux project:
http://leb.net/blinux/
That project describes how to get Emacspeak (editor with text-to-speech) and has a lot of other resources.
I worked with one person whose eyesight all but prevented them from using a monitor. They did well with screen-reader software and spent a lot of time using text-based applications and the shell.
Wikipedia's list of screen reader packages is another place to start: http://en.wikipedia.org/wiki/List_of_screen_readers
I'm a postgraduate student in Beijing, China. I major in computer science, and a lot of my work is programming.
I was born with low vision; I need to use magnification tools to see fonts on screen clearly. I use Microsoft's Magnifier on Windows and Compiz's magnifier plugin on Linux. I usually set the tool to magnify to three times the original font size.
For me, magnification tools are OK. The main problem is speed: I have to keep moving the mouse so the cursor follows the text I'm looking at. Microsoft's Magnifier provides an option to automatically follow the text edit point, which frees me from continuous mouse movement when editing or coding, but it doesn't always work, because the editor or IDE may not support it.
Magnification tools on Linux are hard to use. KMag, which comes with KDE, has a terrible refresh rate that makes my eyes uncomfortable; Compiz's magnifier plugin, which I'm using now, is OK but has no auto-focus (automatic focus following).
iOS provides an almost perfect solution for me with full-screen magnification, especially on the iPad's 9.7-inch screen. There, auto-focus is not necessary, because I hardly use these devices to code or do other editing.
Android provides very few accessibility functions, only things like shake feedback, which is useless for me. There are no good magnification tools on Android, not to mention advanced functions like iOS's full-screen magnification.
I have studied Qt, wanting to build a useful magnification tool for Linux, or even Android, but I have hardly made any progress.
When I was in grad school, we had a member of our research team who was blind. He was a bit older, maybe mid-40s. He told us about how he programmed his first computer (which was well before text-to-speech was common) to output the contents of the screen in Morse Code. To overcome the obvious chicken-and-egg problem, he had to completely rewrite the code each time through from scratch until it was working well enough for him to have it read back to him.
Now he uses text-to-speech, though he plans the code very thoroughly before actually writing any of it, to minimize the debug loop.
He was also pretty good at giving PowerPoint presentations that, despite his lack of sight, were just about as well formatted as any sighted presenter's.
This blog post has some information about how the Visual Studio team is making their product accessible:
Visual Studio Core Team's Accessibility Lab Tour Activity
Many programmers use Emacspeak:
Emacspeak --The Complete Audio Desktop
Back in New Zealand I knew someone who had macular degeneration, so he was partially sighted. He's a very talented programmer and wound up using Delphi because he could work by recognizing word shapes. This was easier to do with a Pascal-like syntax than a C-ish squiggly-bracket one. He has a web site, but doesn't seem to mention macular degeneration at all, so I won't name him.
I'm blind, and for some months I've been using Vinux (a Linux distro based on Ubuntu) with Sodbeans (a version of NetBeans with a plugin named SAPPY that adds TTS support).
This solution works quite well, but sometimes I prefer to launch Windows XP and NVDA for opening many pages in Firefox, because Vinux doesn't work very well when you try to open more than 3 Firefox windows...
As many have pointed out, Emacspeak has been the enduring cross-platform solution for many of the older hackers out there. Since it supports Linux and Mac out of the box, it has become my preferred means of developing Windows-agnostic projects.
On the issue of actually taking in syntax through an auditory channel as opposed to a visual one, I have found that a variety of techniques exist to get one close to, if not on, the same playing field.
Auditory icons can stand in place of verbal descriptors, for one example. You can play tones for how far a line is indented: the longer the tone, the deeper the indent (see the sketch below). Since tones can play in parallel with text-to-speech, the information comes through in the same timeframe and doesn't serialize the communication of something so basic.
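As a purely hypothetical illustration of that indentation-tone mapping (the Win32 Beep() call stands in for a real audio icon; an actual screen-reader add-on would hook the editor and mix tones with speech):

    #include <windows.h>
    #include <string>

    // Play a tone whose length grows with the line's indent depth.
    void announceIndent(const std::string& line)
    {
        std::string::size_type indent = line.find_first_not_of(" \t");
        if (indent == std::string::npos)
            return;                             // blank line: nothing to announce
        if (indent > 0)
            Beep(440, 40 + (DWORD)indent * 20); // 440 Hz; longer tone = deeper indent
    }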
Braille can quickly and precisely convey to the user the exact syntax of a line. This is more useful for people who use braille in daily life; the biggest advantage is random access to the contents of the display. Refreshable units typically have router keys above each character cell which can place the cursor at that cell. No fiddling with arrow keys: an O(n) operation versus O(1) access.
Auditory dimensionality (pitch, rate, volume, inflection, richness, stress, etc) can convey a concept (keyword, class, variable, error, etc). For example, comments can be read in a monotone inflection...suiting, if I might say so :).
Emacs, and to a lesser extent other editors (Visual Studio), allow a coder to peruse a program semantically (next block, fold block, down defun, jump to def, walk up the parse tree, etc.). You can very quickly get the "big picture" of the structure of an entire project this way; with extensions like CEDET, you can get the goodness of VS/Eclipse/etc. cross-platform and in a textual editor.
I could probably go on and on, but that, in a nutshell, is the basis of why a few of us are out there hacking away in industry, academia, or in our basements :).
A group of students from Southern Illinois University Edwardsville and Washington State University are working on a programming language for the blind:
http://www.youtube.com/watch?v=lC1mOSdmzFc
Harald van Breederode is a well-known Dutch Oracle DBA expert, trainer and presenter who is blind. His blog contains some useful tips for visually impaired people.
What in the world would a braille keyboard even be??
There are such things as braille writers but you would never use one as an input device for a computer.
If you're simply talking about a keyboard with the braille symbols on it this would also be a very bad idea. You're going to have a lot more keys to reach while typing and it would still be slower.
Touch typing is NOT a visual skill, a blind person can do it just as well as a sighted person.
I think this would work well in extreme programming using the pair-programming principle. If you're making software for blind people, who better to make it than someone who would literally be in touch with the business requirements? So I don't think it's far-fetched at all.
As for writing code, well unless there was some kind of feedback I think a person may struggle with syntax. Audio feedback may help to a point though.
NVDA is a good open-source screen reader for Windows.
What about inventing some kind of device that you plug into a USB port and that would basically be a "sheet of rubber" that reshapes itself to show your code in braille, allowing blind people to read it instead of hearing it?
There are a variety of tools to aid blind or partially sighted people, including speech feedback and braille keyboards. http://www.rnib.org.uk/Pages/Home.aspx is a good site for help and advice on these issues.
I once met Sam Hartman; he has been a well-known Debian developer since 2000, and is blind. In this interview he talks about accessibility for a Linux user. He uses Debian, and gnome-orca as his screen reader; it works with GNOME and "does a relatively good job of speaking Iceweasel/Firefox and Libreoffice".
Specifically speaking about programming he says:
While [gnome-orca] does speak gnome-terminal, it’s not really good enough at speaking terminal programs that I am comfortable using it. So, I run Emacs with the Emacspeak package. Within that, I run the Emacs terminal emulator, and within that, I tend to run Screen. For added fun, I often run additional instances of Emacs within the inner screens.
