Is there a reliable way to program by using voice commands (dictation)?

I wonder if we can get rid of the keyboard one day. I've tried Apple's built-in software, but it seems focused on ordinary textual dictation rather than tuned for programming.

Related

ECMA-48 and ANSI X3.64 Implementation

I'm currently developing a console application with a text-based user interface, in Python, which is meant to be an open-source UNIX alternative to a series of industry-standard packages, while minimizing (in my opinion) the hardware/software requirements for software of that scale.
However, I prefer not to use curses/ncurses, so that I can understand the principles behind such libraries and implement them for my own purposes. To that end I've been, let's say, experimenting with ANSI escape sequences for a while.
At some point, I felt the need to split (or, in UNIX terminology, multiplex) my terminal into sessions, windows, or panes, as in GNU Screen or tmux. However, for the reasons I mentioned above, instead of using them in my project I tried to explore their source code for inspiration. But the truth is that they both require an extensive knowledge of C, at least at my current level, and searching through the source code is exhausting.
I recently found out that the GNU Screen documentation refers to ECMA-48 and ANSI X3.64.
Screen User's Manual
Each virtual terminal provides the functions of the DEC VT100 terminal
and, in addition, several control functions from the ISO 6429 (ECMA
48, ANSI X3.64) and ISO 2022 standards (e.g. insert/delete line and
support for multiple character sets). There is a scrollback history
buffer for each virtual terminal and a copy-and-paste mechanism that
allows the user to move text regions between windows.
I also checked ECMA-48 and ANSI X3.64 for what I want to achieve, but couldn't find a clue.
My questions actually arise at this point.
How can I find out which of the control functions in those standards are actually implemented? For example, §8.3.123 of ECMA-48 describes DEVICE COMPONENT SELECT MODE, but I could find no sign of it being implemented or used anywhere.
What is the working principle of Screen or tmux in creating a window or pane? What path do they follow to create windows or panes?
I've spent quite a lot of time on the second question, and it has taken me nowhere. I thought they might be sweeping the whole screen, drawing borders, and delimiting lines or columns with coded characters each time an update is needed (resizing, creating a new pane, etc.). That seemed like a basic but frustrating solution, and I could not confirm that Screen or tmux actually works that way. But I am pretty sure that I am missing a critical point here.
Any help, opinion or recommendations are appreciated.
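For the second question, a rough answer: as far as I can tell, both Screen and tmux keep an in-memory character grid for each window, parse the escape sequences the inner programs emit into that grid, and then repaint only the affected region of the real terminal using cursor addressing. Below is a minimal, hypothetical sketch of that repaint idea in plain Python (no curses), using only two ECMA-48 control functions, CUP (cursor position) and ED (erase in page):

import shutil
import sys

CSI = "\x1b["  # Control Sequence Introducer

def move_to(row, col):
    # CUP (CURSOR POSITION): 1-based row;col addressing
    sys.stdout.write(f"{CSI}{row};{col}H")

def draw_panes(left_lines, right_lines):
    cols, rows = shutil.get_terminal_size()
    split = cols // 2
    sys.stdout.write(f"{CSI}2J")  # ED (ERASE IN PAGE): clear the screen
    for row in range(1, rows):    # vertical border between the two panes
        move_to(row, split)
        sys.stdout.write("|")
    for i, line in enumerate(left_lines, start=1):   # repaint each pane's buffer
        move_to(i, 1)
        sys.stdout.write(line[:split - 1])
    for i, line in enumerate(right_lines, start=1):
        move_to(i, split + 1)
        sys.stdout.write(line[:cols - split])
    move_to(rows, 1)
    sys.stdout.flush()

draw_panes(["left pane, line %d" % n for n in range(1, 4)],
           ["right pane, line %d" % n for n in range(1, 4)])

A real multiplexer adds scrolling regions, damage tracking (repainting only the cells that changed), and a parser for the inner programs' output, but the drawing itself really is just cursor positioning and erasing.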

What language to use when prototyping a small game

I am currently considering writing a small game. It is essentially a map you can zoom in and out of, with info boxes you can click in certain places, where I hope at some point to integrate minigames. Granted, "game" might be overstating it; think of it as an interactive map. The theme is how mathematics applies to people's everyday lives, to raise awareness of the usefulness of mathematics.
The question is how I can put together a reasonable prototype as fast as possible. If I receive enough positive response to it, I might try to code "the real thing" and use the prototype to obtain funding.
However, I am at a crossroads. I want something working rather fast, and I have some C++ experience coding optimization problems, mainly in a C style. I am not convinced, though, that coding it in C++ is the fast way to obtain a prototype, and I have no experience coding any sort of GUI.
As I see it, there are a number of possibilities:
C++, possibly using some library, such as Boost or ???.
Start out purely web-based, using e.g. HTML5 and Java.
Python
C#/.NET
Others, such as?
I have to admit I have little experience with anything besides C++ and the STL.
So my question to this wonderful forum is basically: is there a language that provides a significant advantage? Also, any additional insight or comments are more than welcome!
Python is a simpler language than C++, and for prototyping it will help you focus on the task at hand. You can use Pygame, a game library built on the excellent cross-platform SDL library. It provides 2D graphics, input, and audio mixing features. SDL is mainly a C library (and thus compatible with C++), and there are a number of very useful libraries that integrate with it:
SDL_image for loading images in various formats
SDL_ttf for rendering text using TrueType fonts
SDL_mixer for audio mixing
SDL_net for networking
SDL_gfx for graphics drawing primitives
So if you prototype in Python using Pygame, there is a reasonable chance you’ll be able to port what you make over to C++ with minimal hassle, if and when you choose to do so.
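To give a sense of how little code a Pygame prototype needs, here is a minimal, hypothetical sketch (assuming pygame is installed): a window with a draggable rectangle standing in for the zoomable map.

import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
pygame.display.set_caption("Map prototype")
clock = pygame.time.Clock()
map_rect = pygame.Rect(100, 100, 200, 150)  # stand-in for the map view
dragging = False

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.MOUSEBUTTONDOWN and map_rect.collidepoint(event.pos):
            dragging = True
        elif event.type == pygame.MOUSEBUTTONUP:
            dragging = False
        elif event.type == pygame.MOUSEMOTION and dragging:
            map_rect.move_ip(event.rel)  # pan the "map" with the mouse
    screen.fill((30, 30, 30))
    pygame.draw.rect(screen, (70, 130, 180), map_rect)
    pygame.display.flip()
    clock.tick(60)  # cap the frame rate at 60 fps

pygame.quit()

Zooming, info boxes, and minigames would all hang off the same event loop, which is why the prototype stays small.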
Possible options:
Go with what you know the best. Anything else will require a learning curve, which may be weeks to months long. If you're willing to take that road in order to make your prototype, then there are some really great tools available.
BlitzBasic is a good way to go; it is designed specifically for games.
I've done little games in Java using Slick2D - but you'll need a good grounding in object-oriented coding to work effectively in Java. If you've got that from C++, take a look at a tech demo I built in Slick2D called Pedestrians. It's open source, and has demo videos here.
You might also ask your question on https://gamedev.stackexchange.com/ - a Q&A site dedicated to game programming.

wav-to-midi conversion

I'm new to this field, but I need to perform a WAV-to-MIDI conversion in Java.
Is there a way to know exactly what steps are involved in WAV-to-MIDI conversion?
I have a very rough idea: you need to sample the WAV file, filter it, use an FFT for spectral analysis, do feature extraction, and then write the extracted features out as MIDI.
But I cannot find solid sources or papers on how to do all that.
Can someone give me clues on how and where to start?
Are there any open-source APIs available for this WAV-to-MIDI conversion process?
Thanks in advance.
It's a more involved process than you might imagine.
This research problem is often referred to as music transcription: the act of converting a low-level representation of music (e.g., waveform) into a higher-level representation such as MIDI or even sheet music.
The sophistication of your solution will depend upon the complexity of your input data. Tons of research papers address music transcription only on monophonic piano or drums... because they are easy to transcribe. (Relatively.) Violin is harder. Voice is even harder. Violin plus voice plus piano is much harder. A symphony is nearly impossible. You get the picture.
The basic elements of music transcription involve any of the following overlapping areas:
(multi)pitch estimation
instrument recognition, timbral modeling
rhythm detection
note onset/offset detection
form/structure modeling
Search for papers on "music transcription" on Google Scholar or from the ISMIR proceedings: http://www.ismir.net. If you are more interested in one of the above subtopics, I can point you further. Good luck.
EDIT: That being said, there are existing solutions that we can all find on the web. Feel free to try them. But as you do, evaluate them with a critical eye and ear. What types of audio signals would cause transcription to fail?
EDIT 2: Ah, you are only doing this for piano. Okay, this is doable. Music transcription has advanced to the point where it can transcribe monophonic piano pretty well. A Rachmaninov concerto will still pose problems.
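To make the pitch-estimation step concrete, here is a deliberately naive, hypothetical sketch (assuming NumPy and a mono 16-bit WAV): it takes one analysis frame, finds the strongest FFT bin, and maps that frequency to a MIDI note number. It only works for clean monophonic input; real instruments often have harmonics stronger than the fundamental, which is exactly why transcription is hard.

import wave
import numpy as np

def frame_to_midi(path, frame_start=0, frame_size=4096):
    # Read one frame of a mono 16-bit WAV file.
    with wave.open(path, "rb") as wf:
        sample_rate = wf.getframerate()
        wf.setpos(frame_start)
        raw = wf.readframes(frame_size)
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64)
    samples *= np.hanning(len(samples))  # window to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    peak_hz = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    # Standard mapping: A4 = 440 Hz = MIDI note 69, 12 semitones per octave.
    midi_note = int(round(69 + 12 * np.log2(peak_hz / 440.0)))
    return peak_hz, midi_note

# e.g. frame_to_midi("piano.wav") might return (261.6, 60) for middle C

A real transcriber repeats this per frame, tracks note onsets and offsets over time, and then writes the result out with a MIDI library.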
Our recommendations depend upon your end goal. You state "need to perform... in Java." So it sounds like you just want something to work regardless of how it gets you there. In that case, I agree 100% with others: use something that exists.
That's actually an interesting question; all of the MIR libraries I know are typically C/C++/Python/Matlab, but not Java. The Echo Nest has a Java API, but I don't think it does note-level transcription. http://developer.echonest.com. (Edit: It does do note-level transcription. The returned data includes pitch, timbre, beat, tatum, and more. But I find polyphony is still a problem.)
Oh, Marsyas is Java-based. Cool. I thought it was just C++. http://marsyas.info/ I recommend this. It's developed by George Tzanetakis, a professor in MIR. It does signal-level analysis and should be a good option.
Now, if this is for a fun learning experience, I think you can use the sound manipulation utilities in Java to experiment with the WAV signal and see what comes out.
EDIT: This page describes MIR software better than I can: The Tools We Use
For Matlab, you may be interested in the MIR Toolbox
Here is a nice page of common datasets: MIR Datasets
This is a very big undertaking for someone new to the field, unless you mean you are familiar with signal analysis and feature detection in general and want to look more specifically into automatic transcription.
There is no ready-made API for WAV-to-MIDI conversion. Vamp is a framework for feature-extraction plugins, but to do automatic transcription you would need to use all the functionality of the existing plugins, plus implement functionality that none of them provides yet.
Browse through the descriptions of the plugins on the Vamp download page; any description you do not understand is a topic you should start researching if you want to do this.
If you don't need to automate this task (i.e., for a website where people can upload MP3s and get MIDI files back), then you should consider using a tool like Melodyne, which is already quite good at doing this. As Steve noted, this is a very difficult task to accomplish, and even the best algorithms and solutions available at the moment are not 100% reliable.
So if you are just doing studio work and need a few conversions, it will probably save you a bit of time (and a lot of headache) to use a tool already designed for this task.
This is a field that is still very much under development, but there are some (experimental) algorithms available.
You can install Sonic Annotator and use a few Vamp plugins.
For example:
./sonic-annotator file.wav -d vamp:qm-vamp-plugins:qm-transcription:transcription -w midi
./sonic-annotator file.wav -d vamp:silvet:silvet:notes -w midi
./sonic-annotator file.wav -d vamp:ua-vamp-plugins:mf0ua:mf0ua -w midi
Dolphin, sorry to be brusque, but you have completely underestimated the problem. What you want to achieve - a full piano transcription capturing every parameter used while playing - would require an enormous amount of research by people who have worked in the field for many years. Even a group of PhDs in signal processing would have to invest a lot of work to come close to what you mean. Music transcription has needed decades of work to become even halfway reliable. I'd suggest you pick a different problem that you can manage better than this.

Hardware Programming - Hands-On Learning

Besides Arduino, what other ways are there to learn hardware programming in a hands-on way? Are there any nifty kits available, either a pre-assembled robot, that you can program to move a certain way, or do certain things, or anything similar to that?
Atmel AVR and the PIC both have experiment boards you can solder components onto; they usually come with a couple of buttons and some lights pre-soldered. These boards let you program/flash the microprocessor and play with the output pins. You can write the programs in either assembly or C.
Parallax has a number of kits. They have two product lines suited for "playing around": the Basic Stamp and something called the Propeller. The former is a small microprocessor that runs programs written in Basic (a tad disgusting ;)) and the latter runs something called Spin, or assembly (well, after compilation, obviously).
I would go with either AVR or the PIC. I've done PIC but I've heard good things about AVR, they seem to ship with better software.
At first look Microsoft's VPL sounds good, but when it comes to actually LEARNING how hardware works, it goes a LONG way toward hiding those details from you. As a matter of fact, it is pretty much designed for people who don't program, and is distasteful to someone who's actually written embedded software. If you just want to make stuff happen and not delve into the details, it's fine, but if you want to get down to the metal, like programming the Arduino boards, it's not for you.
If you're used to something like the Arduino, then something like the PIC will be an easy transition. SparkFun Electronics has all sorts of DIY-type projects and hardware available. If you have a decent bookstore in your area, I would suggest looking for Circuit Cellar magazine. It runs monthly articles with projects for someone looking to get into hardware, everything from homebrew software-defined radio to FPGA-based 3D graphics (ray tracing, actually). Usually the authors describe the project and WHY they made the decisions they did, give a description and schematics of the hardware, and provide a link to source code.
Cypress Semiconductor has one of the most interesting embedded processors on the market and several high-quality dev boards for sale. The PSoC lets you configure not only the software but also "drop in" software-configured hardware such as analog-to-digital converters, serial I/O, digital-to-analog converters, and various amps and filters. It's a REALLY cool concept, and the touch-sensor capability of the PSoC was actually used in several models of the iPod.
One thing about programming these little micros is that they don't put much between you and the hardware; you get to see how things really work. It doesn't matter whether you're talking about an 8-bit microcontroller or a quad-core Pentium: programming hardware is largely the same concept. You write to a memory-mapped register for some piece of hardware, like a serial controller, and the hardware responds in some way.
If you program a baud-rate generator in a PIC or a PC, it's largely the same idea: you write a value that will be used as a division factor from a given clock to achieve a given baud rate. The numbers and names may be different, but the concept is the same (the sketch below works the arithmetic for both). On a PC you may also have to map to the PCI address of the card, which adds some complications, but if you looked underneath the OS you would see that even that is done just by writing values to registers, similar to programming a PIC to use a different "page" of memory.
Is it worth learning an 8-bitter? Well, there are approximately $5 billion in sales of the little 8-bit micros today, with projections showing only growth in that market. I saw one reference stating that the average car has 25 microcontrollers in it. That's not too bad.
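To make the baud-rate example concrete, here is the arithmetic side by side, as a hypothetical sketch (SPBRG is the PIC's baud-rate register in high-speed asynchronous mode; the 16550 is the classic PC UART; exact formulas vary by part, so check the datasheet):

def pic_spbrg(fosc_hz, baud):
    # PIC async high-speed mode (BRGH=1): SPBRG = Fosc / (16 * baud) - 1
    return round(fosc_hz / (16 * baud)) - 1

def uart16550_divisor(clock_hz, baud):
    # 16550 UART: divisor latch value = input clock / (16 * baud)
    return round(clock_hz / (16 * baud))

print(pic_spbrg(20_000_000, 9600))         # 129 on a 20 MHz PIC
print(uart16550_divisor(1_843_200, 9600))  # 12 on a classic PC UART

Different register names, same concept: a clock divided down to a baud rate.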
I haven't played with it much, but the iRobot looks pretty cool.
The ability to simulate how your robot will work which some of the other answers mentioned is nice, but there's nothing like seeing a real-life robot do what you programmed it to do. That, to me, is what really makes robots fun and cool.
There's the .NET Micro Framework.
It's incredibly simple to use/setup and there's lots of hardware being made to target this framework.
You should take a look at Microsoft Robotics Developer Studio which supports many different kits.
I have always been curious about Gumstix. It seems more professional than Arduino, and it is aimed at the Linux programmer. I can't give you a real recommendation, as I've never played with one, but I would definitely go for one of these toys if I had to do and learn some cool hardware programming.

How can you program if you're blind?

Sight is one of the senses most programmers take for granted. Most programmers would spend hours looking at a computer monitor (especially during times when they are in the zone), but I know there are blind programmers (such as T.V. Raman who currently works for Google).
If you were a blind person (or slowly becoming blind), how would you set up your development environment to assist you in programming?
(One suggestion per answer please. The purpose of this question is to bring the good ideas to the top. In addition, screen readers can read the good ideas earlier.)
I am a totally blind college student who's had several programming internships, so my answer will be based on those. I use Windows XP as my operating system and JAWS to read what appears on the screen to me in synthetic speech. For Java programming I use Eclipse, since it's a fully featured IDE that is accessible.
In my experience, as a general rule, Java programs that use SWT as the GUI toolkit are more accessible than programs that use Swing, which is why I stay away from NetBeans. For any .NET programming I use Visual Studio 2005, since it was the standard version used at my internship and is very accessible using JAWS and a set of scripts that were developed to make things such as the form designer more accessible.
For C and C++ programming I use Cygwin with gcc as my compiler and Emacs or Vim as my editor, depending on what I need to do. A lot of my internship involved programming for z/OS. I used an rlogin session through Cygwin to access the USS subsystem on the mainframe and c3270 as my 3270 emulator to access the ISPF portion of the mainframe.
I usually rely on synthetic speech, but I do have a Braille display. I find I usually work faster with speech, but use the Braille display in situations where punctuation matters and gets complicated. Examples of this are if statements with lots of nested parentheses, and JCL, where punctuation is incredibly important.
Update
I'm playing with Emacspeak under Cygwin (http://emacspeak.sourceforge.net). I'm not sure if it will be usable as a programming editor, since it appears to be somewhat unresponsive, but I haven't looked at any of the configuration options yet.
I'm blind, and have been programming for about 13 years on Windows, Mac, Linux and DOS, in languages from C/C++, Python, Java, C# and various smaller languages along the way. Though the original question was around configuring the environment, I think it's best answered by looking at how a blind person would use a computer.
Some people use a talking environment, such as T. V. Raman and the Emacspeak environment mentioned in other answers. The more common solution by far is to have a screen reader which runs in the background monitoring OS activity and alerting the user via synthetic speech or a physical braille display (generally showing somewhere from 20 to 80 characters at a time). This then means a blind person can use any accessible application.
So, I personally use Visual Studio 2008 these days, and run it with very few modifications. I turn off certain features like displaying errors as I type since I find this distracting. Prior to joining Microsoft all my development was done in a standard text editor like Notepad, so once again no customisations.
It is possible to configure a screen reader to announce indentation. I personally don't use this, since Visual Studio takes care of it and C# uses braces, but it would be very important in a language like Python, where whitespace matters. Finally, Emacspeak does make use of different voices/pitches to indicate different parts of syntax (keywords, comments, identifiers, etc.).
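To illustrate what announcing indentation amounts to, here is a toy, hypothetical sketch that prints each line of source with a spoken-style indentation cue (the wording is invented; real screen readers have their own phrasing, and 4-space indents are assumed):

def speak_indentation(source):
    # Prefix each non-blank line with a spoken-style indentation level.
    for line in source.splitlines():
        stripped = line.lstrip(" ")
        level = (len(line) - len(stripped)) // 4  # assume 4-space indents
        if stripped:
            print(f"indent {level}: {stripped}")

speak_indentation(
    "def greet(name):\n"
    "    if name:\n"
    "        print('hello', name)\n"
)
# prints:
# indent 0: def greet(name):
# indent 1: if name:
# indent 2: print('hello', name)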
I am blind and have been a programmer for the last 12 years or so. I am currently a senior architect and work with Sapient Corporation (a Cambridge-based consulting company creating both web-based and thick-client enterprise solutions).
I use several screen readers, but mostly stick with JAWS for Windows and NVDA.
I have mostly worked on the Microsoft platform, with Visual Studio as my environment. I also use tools like MS SQL Enterprise Studio and others for DB access, network monitoring, etc.
I tried to spend some time with Emacspeak, but since my work was mostly based on the MS platform, I never really spent a lot of time there.
I have also spent a couple of years working on C++ on Linux - mostly using Notepad or Visual Studio on Windows for all the coding, and then Samba to share files with the Linux environment.
I also used Borland C for some experimental stuff. I have recently been playing around with Python, which, as other people have noted above, is particularly unfriendly for a blind user because it uses indentation as its nesting mechanism. Having said that, NVDA, the most popular open-source screen reader, is written completely in Python, and some of the committers on that project are themselves blind.
A particularly interesting question I am frequently asked as an architect is how I deal with diagrams - UML, Visio, Rational Rose, etc. Visio is probably the most accessible diagramming tool out there. I was able to write JAWS scripts to read Rational Rose diagrams to me. I've used a tool called T-dub (technical diagram understanding for the blind), developed by a German university, for accessing UML 2.0 diagrams. I have used a Java-based, ugly tool called MagicDraw for doing model-driven development, and was a committer on the AndroMDA project, helping develop the .NET code generator from a UML model.
In general, I find that I thrive most in a team environment where I can work to my strengths. For example, while a diagram is extremely useful for communicating/documenting a design, the actual design process involves a lot of thinking and brainstorming, and once the design has been thought out, one of your teammates can help you quickly put together a neatly drawn picture of it.
People incorrectly construe the above as a lack of independence or ability, while I see it as pure interdependence: I am sure that the teammate alone could never have come up with that design on his or her own, and if, in turn, I depend on him to document the design, so be it.
Most hurdles I face are tool-based inaccessibility. For example, all Oracle products have been progressively declining in accessibility over the years (shame on them), and a team environment basically gives me an extra layer of defense against these, over and above my screen readers and custom scripts.
I am a blind developer and I work under Windows, GNU/Linux, and Mac OS X. Each platform has a different workflow for blind users, depending on the screen reader the developer uses.
Development tools are not completely accessible to blind developers. I can type code and run compile commands in all IDEs, but there are many problems when I have to design an interface using design tools such as Interface Builder, XGlade, or others. When I was developing with Borland Delphi I could add a control, a button for example, and modify each of its visual attributes in the Object Inspector window. Many IDEs use object-inspector windows to modify visual and non-visual attributes, but the problem for a blind developer is adding new controls, because the usual method is dragging and dropping a control from the palette onto the canvas. Visual Studio 200x offers alternative methods for this, but the interface of the IDE changes with each new version, and this is a big problem because screen readers for Windows need special support, via scripts, to identify each area of some non-standard applications. A blind developer can use Visual Studio 2008 with his screen reader, but when a new version of the IDE appears, he has to wait for a new version of the scripts for it.
Xcode with Interface Builder still offers no alternative to drag-and-drop tasks. I have asked Apple for one many times, but they are working on other things. I published three apps in the App Store (Accessible Minesweeper, Accessible Fruitmachine, and Programar a ciegas RSS) and had to design the entire interface in code. It's hard work, but it lets me manage every feature of each control.
Eclipse has an accessible code editor, but other development tools, such as the debug console, plugins for design, or the documentation area, present problems for the assistive tools blind users rely on.
Documentation is a problem for blind developers too. Many samples and demonstrations use images to carry the explanation ("set the environment settings as shown in the picture").
I think the problem is not being blind. The problem is that companies and development groups assume accessibility matters for the final software but not for the development software. They think a blind user should be a client, but that a blind user can't be a development colleague.
Blind associations demand accessibility in products and services, but they forget blind developers. Blind people can work as lawyers, journalists, and teachers, but a blind developer is a strange concept even to the blind. Many times I feel alone, because some blind friends of mine can't understand my work.
You can read my opinion about this issue in this article (in Spanish) on my blog: http://www.programaraciegas.net/2010/11/05/la-accesibilidad-en-crisis-para-los-desarrolladores-ciegos/
There is a translation tool on the page. Sorry, but I didn't translate it.
Emacs has a number of extensions that allow blind users to manipulate text files. You'd have to consult an expert on the topic, but Emacs has text-to-speech capabilities, and probably more.
In addition, there's BLinux:
http://leb.net/blinux/
Linux for the blind. It has been around for a very long time, more than ten years I think, and it is very mature.
Keep in mind that "blind" covers a range of conditions - there are some who are legally blind who could read a really large monitor or get by with magnification help, and then there are those who have no vision at all. I remember a classmate in college who had a special device to magnify books, and special software she could use to magnify a part of the screen. She was working hard to finish college, because her eyesight was getting worse and was going to go away completely.
Programming also has a spectrum of needs - some people are good at cranking out lots and lots of code, and some people are better at looking at the big picture and architecture. I would imagine that given the difficulty imposed by the screen interface, blindness may enhance your ability to get the big picture...
Hanselman had a really interesting podcast with a blind developer recently.
I worked for the Greater Detroit Society for the Blind for three years running a BBS tailored for blind access and worked with a number of blind users on how to better meet their needs, and with newly blind users to get them acclimated to the available hardware and software offerings that were available at the time. If nothing else, I at least learned to read Braille as a hedge against the case where I ever wound up in the same situation!
The majority of blind computer users and programmers use a screen reader of some sort; JAWS in particular is popular. Fortunately, most major applications these days offer some form of accessibility support. You may have to tune your environment slightly to cut down on the chatter, e.g. consider disabling IntelliSense in Visual Studio.
A Braille display is less common and comparatively much more expensive; it can show 40 or 80 columns of text and is useful when exact positioning/punctuation matters. While a screen reader can be configured to rattle off punctuation, a lot of people find it distracting, and it is easier in many cases to feel your way through it. JAWS can be configured to drive the display, so you're not juggling accessibility applications.
Also, a lot of legally blind users still have some modicum of sight left to them. Using high contrast backgrounds and the magnification functionality can help a lot of these users.
Using ToggleKeys in Windows will also let you hear when you accidentally tap one of the modal keys: Caps Lock, Num Lock, Scroll Lock, etc.
I know at least one Haskell programmer who uses a screen reader and who explicitly programs without using Haskell's layout rules, and instead opts to use the rather non-idiomatic, but supported {;}'s instead, because it is easier/less distracting for him to get his screen reader to read off punctuation than for him to figure out exact indentation that complies with Haskell's layout rules. On that same note, I've heard some grumbling from a couple of blind programmers about when they have to write Python.
Ultimately, you learn to play on your strengths.
I can't recall the source, but I've heard/read about a form of audible syntax "colouring" - so that instead of a string assignment being read as
foo equals quote this is a string quote
the string part would be read with a different pitch or voice to make the separation of elements clearer.
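That idea is easy to prototype with Python's tokenize module: give each token category its own voice parameter. This is a hypothetical sketch; the pitch names are stand-ins for parameters a real implementation would hand to a speech synthesizer.

import io
import tokenize

# Invented mapping from token category to a speech pitch.
PITCH = {
    tokenize.STRING: "high",
    tokenize.NUMBER: "low",
    tokenize.NAME: "middle",
    tokenize.OP: "middle",
}

code = 'foo = "this is a string"\n'
for tok in tokenize.generate_tokens(io.StringIO(code).readline):
    if tok.string.strip():
        print(f"{PITCH.get(tok.type, 'middle')} voice: {tok.string}")

Run on the assignment above, this announces foo and = in the middle voice and the string literal in the high voice, which is exactly the separation of elements described.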
One place to start is the Blinux project:
http://leb.net/blinux/
That project describes how to get Emacspeak (editor with text-to-speech) and has a lot of other resources.
I worked with one person whose eyesight all but prevented them from using a monitor - they did well with screen reader software and spent a lot of time using text-based applications and the shell.
Wikipedia's list of screen reader packages is another place to start: http://en.wikipedia.org/wiki/List_of_screen_readers
I'm a postgraduate student in Beijing, China. I major in computer science, and a lot of my work is programming.
I was born with low vision; I need to use magnifying tools to see fonts on screen clearly. I use Microsoft's Magnifier on Windows and Compiz's magnifier plugin on Linux. I usually set the tool to magnify to three times the original font size.
For me the magnifying tools are OK; the main problem is speed. I have to move the mouse to keep the cursor following the text I'm looking at. Microsoft's Magnifier provides an option to automatically follow the text edit point, which frees me from continuous mouse movement when editing or coding, but it doesn't always work, because the editor or IDE may not support it.
Magnifying tools on Linux are hard to use. KMag, which comes with KDE, has a terrible refresh rate that makes my eyes uncomfortable. The Compiz magnifying plugin I'm using now is OK, but it has no auto-focus (automatically following the focus).
iOS provides a nearly perfect solution for me with full-screen magnification, especially on the iPad's 9.7-inch screen. There, auto focus is not necessary, because I hardly use it to code or do other editing.
Android provides very few accessibility functions, only things like shake feedback, which is useless for me. There are no good magnifying tools on Android, not to mention advanced functions like iOS's full-screen magnification.
I have studied Qt, wanting to build a useful magnifying tool for Linux, or even Android, but I have hardly made any progress.
When I was in grad school, we had a member of our research team who was blind. He was a bit older, maybe mid-40s. He told us about how he programmed his first computer (which was well before text-to-speech was common) to output the contents of the screen in Morse Code. To overcome the obvious chicken-and-egg problem, he had to completely rewrite the code each time through from scratch until it was working well enough for him to have it read back to him.
Now he uses text-to-speech, though he plans the code very thoroughly before actually writing any of it, to minimize the debug loop.
He was also pretty good at giving PowerPoint presentations that, despite his lack of sight, were just about as well formatted as any sighted presenter's.
This blog post has some information about how the Visual Studio team is making their product accessible:
Visual Studio Core Team's Accessibility Lab Tour Activity
Many programmers use Emacspeak:
Emacspeak --The Complete Audio Desktop
Back in New Zealand I knew someone who had macular degeneration, so he was partially sighted. He's a very talented programmer, and he wound up using Delphi because he could work by recognizing word shapes. This was easier to do with a Pascal-like syntax than with a C-ish squiggly-bracket one. He has a web site, but doesn't seem to mention macular degeneration at all, so I won't name him.
I'm blind, and for some months I've been using Vinux (a Linux distro based on Ubuntu) with Sodbeans (a version of NetBeans with a plugin named SAPPY that adds TTS support).
This solution works quite well, but sometimes I prefer to boot Windows XP and use NVDA to open many pages in Firefox, because Vinux doesn't work very well when you try to open more than three Firefox windows.
As many have pointed out, Emacspeak has been the enduring cross-platform solution for many of the older hackers out there. Since it supports Linux and Mac out of the box, it has become my preferred means of developing Windows-agnostic projects.
On the issue of actually taking in syntax through an auditory channel as opposed to a visual one, I have found that there exists a variety of techniques to get one close to, if not on, the same playing field.
Auditory icons can stand in place of verbal descriptors, for one example. You can play tones for how far a line is indented: the longer the tone, the deeper the indent. Since tones can play in parallel with text-to-speech, the information comes through in the same timeframe and doesn't serialize the communication of something so basic.
Braille can quickly and precisely convey to the user the exact syntax of a line. This is more useful for people who use Braille in daily life; the biggest advantage is random access to the contents of the display. Refreshable units typically have router keys above each character cell which can place the cursor at that cell. No fiddling with arrow keys: O(1) access instead of an O(n) operation.
Auditory dimensionality (pitch, rate, volume, inflection, richness, stress, etc.) can convey a concept (keyword, class, variable, error, etc.). For example, comments can be read in a monotone inflection... fitting, if I might say so :).
Emacs and, to lesser extents, other editors (Visual Studio) allow a coder to peruse a program semantically (next block, fold block, down defun, jump to definition, walk up the parse tree, etc.). You can very quickly get the "big picture" of the structure of an entire project this way; with extensions like CEDET, you can get the goodness of VS/Eclipse/etc. cross-platform and in a textual editor.
I could probably go on and on, but that, in a nutshell, is the basis of why a few of us are out there hacking away in industry, academia, or our basements :).
A group of students from Southern Illinois University Edwardsville and Washington State University are working on a programming language for the blind:
http://www.youtube.com/watch?v=lC1mOSdmzFc
Harald van Breederode is a well-known Dutch Oracle DBA expert, trainer, and presenter who is blind. His blog contains some useful tips for visually impaired people.
What in the world would a braille keyboard even be??
There are such things as Braille writers, but you would never use one as an input device for a computer.
If you're simply talking about a keyboard with the Braille symbols on it, that would also be a very bad idea. You're going to have a lot more keys to reach while typing, and it would still be slower.
Touch typing is NOT a visual skill; a blind person can do it just as well as a sighted person.
I think this would work well in extreme programming, using the pair programming principle. If you're making software for blind people, who better to make it than someone who would literally be in touch with the business requirements? So I don't think it's far-fetched at all.
As for writing code, well, unless there were some kind of feedback, I think a person might struggle with syntax. Audio feedback may help to a point, though.
NVDA is a good open-source screen reader for Windows.
What about inventing some kind of device that you plug into a USB port and that would basically be a "sheet of rubber" that reshapes itself to show your code in Braille, allowing blind people to read it instead of hearing it?
There is a variety of tools to aid blind or partially sighted people, including speech feedback and Braille keyboards. http://www.rnib.org.uk/Pages/Home.aspx is a good site for help and advice on these issues.
I once met Sam Hartman, a famous Debian developer since 2000, who is blind. In this interview he talks about accessibility for a Linux user. He uses Debian with gnome-orca as his screen reader; it works with GNOME and "does a relatively good job of speaking Iceweasel/Firefox and Libreoffice".
Specifically speaking about programming he says:
While [gnome-orca] does speak gnome-terminal, it’s not really good enough at
speaking terminal programs that I am comfortable using it. So, I run
Emacs with the Emacspeak package. Within that, I run the Emacs
terminal emulator, and within that, I tend to run Screen. For added
fun, I often run additional instances of Emacs within the inner
screens.
