We're using JAWS to test accessibility in our web application on IE11. One of our controls requires a CTRL + click to bring up a context menu. Is there a way to do this in JAWS with keyboard commands?
Thank you
Attaching a context menu to Ctrl+left-click is generally not good practice in an accessible application, unless you explicitly tell users about it. You should consider intercepting the standard context menu keys (the Applications key / Shift+F10) instead.
However, there is indeed a key combination for this in JAWS: Ctrl+NumPad Slash, since NumPad Slash simulates a left mouse click at the JAWS cursor position.
But please note that users navigate a web page with the virtual PC cursor, not the JAWS cursor. So to carry out your command, the user first has to route the JAWS cursor to the virtual PC cursor (Insert+NumPad Minus) and then perform the Ctrl+left-click. This is an extremely awkward solution, because it is not at all obvious that in this particular place the user must route JAWS to PC and then Ctrl+click.
Please think about a better approach for JAWS users.
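As a rough illustration of the suggestion above (the element id and the menu function are hypothetical), the standard context menu triggers could be intercepted like this:
var grid = document.getElementById('grid'); // the control that needs the context menu
grid.addEventListener('contextmenu', function (event) {
  // Fired by right-click, Shift+F10 and the Applications key in most browsers, including IE11.
  event.preventDefault(); // suppress the native browser menu
  openCustomContextMenu(grid); // hypothetical function that shows your own menu
});
grid.addEventListener('keydown', function (event) {
  // Fallback in case the browser does not map Shift+F10 to 'contextmenu'.
  if (event.shiftKey && (event.key === 'F10' || event.keyCode === 121)) {
    event.preventDefault();
    openCustomContextMenu(grid);
  }
});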
Below is a dummy implementation of our web application:
https://roleapplication.herokuapp.com/index.html
The appArea element has role="application" because it contains highly complex widgets (think MS Paint, an editor, or MS Office).
The navigator contains standard web widgets such as dropdowns and buttons.
The HTML is similar to the structure below.
<body>
  <div class="appArea" role="application">
    <!-- complex widgets -->
  </div>
  <div class="toolbar">
    <!-- buttons, dropdowns -->
  </div>
</body>
Keyboard functionality of appArea is handled by its own code; for the toolbar we rely on the screen reader's standard keyboard handling, since those widgets behave like ordinary web controls.
Issue - When the user presses Escape in the navigator area, we blur the navigator, so focus goes to the body by default.
With focus on the body, the arrow keys move focus to the toolbar, so the user can never get into appArea. If focus is already inside appArea, everything works fine.
Expectation - When focus is on the body and the user presses Down Arrow, focus should move inside appArea, so that appArea receives the key instead of the screen reader.
Check the Down Arrow behaviour when the page is loaded, both with and without a screen reader.
Keyboard notes
Press F6 to move from widget 1 to widget 2 to the navigator.
You can use the arrow/Tab keys to navigate inside the widgets.
Move to the navigator using F6, press Tab to reach any button, and then press Escape. Focus is now on the body (check using document.activeElement).
Without a screen reader, our widgets capture keys on the body and process them even though they don't have focus.
With a screen reader, however, when the body has focus and the user presses Down Arrow, the screen reader consumes the key and moves focus to the navigator instead of the application area that holds the widgets, so the user can never reach appArea with the arrow keys or any other keys the screen reader consumes.
Note -
If we give role="application" to the complete application, the default arrow-key handling in the navigator stops working, which is not desired.
Removing role="application" is not possible either, because appArea is quite complex, with hundreds of widgets that each have their own keyboard handling.
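For reference, a rough sketch of the kind of document-level handling described above (all names are hypothetical); the catch is that in browse mode the screen reader consumes the arrow key before this handler ever sees it:
document.addEventListener('keydown', function (event) {
  // When focus sits on <body>, Down Arrow moves focus into appArea.
  var isDown = event.key === 'ArrowDown' || event.key === 'Down'; // 'Down' covers IE11
  if (document.activeElement === document.body && isDown) {
    var firstWidget = document.querySelector('.appArea [tabindex]'); // hypothetical selector
    if (firstWidget) {
      event.preventDefault();
      firstWidget.focus();
    }
  }
});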
There are three ways to interact with role="application".
Hit Enter on the application element, exit out of edit mode (or forms mode), and use the application as if it were another web page. You can put other elements there and the screen reader will move through those elements in browse mode.
Hit Enter on the application, which pops the screen reader into edit mode, where all keys are passed to the edit widget inside the application and you handle everything within your application yourself, probably in a keydown event.
Control the tabindex as the screen reader presses keys using a roving tabindex.
You currently have 1 and 3, which is really confusing. If you removed the application element, it would still work just fine. It sounds as if you want 2, though. Option 2 is highly discouraged unless you have a screen reader user constantly testing UX or building your app. Number 2 is mostly for games and is considered the "canvas" element for screen readers.
You implement option 2 with something like the following:
<div role="application">
<input type="button" autoFocus="true" value="Click me" />
<p aria-live="polite" id="spk"></p>
</div>
The spk element is used to send messages to the screen reader, which you need to do in this Window, Icon, Menu, Message (WIMM) interface. Remember that in this mode you need to program everything yourself, and users get upset if expectations are not met.
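A minimal sketch of that pattern (moveCursorRight is a hypothetical piece of application logic):
var app = document.querySelector('[role="application"]');
var spk = document.getElementById('spk');
app.addEventListener('keydown', function (event) {
  // In this mode you are responsible for every key yourself.
  if (event.key === 'ArrowRight' || event.key === 'Right') {
    event.preventDefault();
    moveCursorRight(); // hypothetical application logic
    spk.textContent = 'Moved right'; // announced through the aria-live region
  }
});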
You said you are making a word processor. This last option (number 2) is NOT meant for building a word processor. As a screen reader user, I have expectations and workflows for word processors, and you can't replicate that functionality by programming it manually in JavaScript.
Instead, use the existing edit fields HTML provides for exactly this purpose, such as:
This text editor example
Please let me know if there is some reason why you would not want to use the above widget.
You could get away with using 3 along with normal widgets, but it is better to do what Google Drive does and let users enter edit mode when the page loads, or press a key, such as Escape, to enter the tabindex application area (which does not need to be inside an application element, although it can be).
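A rough sketch of option 3, a roving tabindex, assuming a container with class "appArea" whose focusable widgets carry a hypothetical "widget" class:
var widgets = Array.prototype.slice.call(
  document.querySelectorAll('.appArea .widget'));
var current = 0;
widgets.forEach(function (widget, index) {
  widget.tabIndex = index === 0 ? 0 : -1; // only one widget is tabbable at a time
  widget.addEventListener('keydown', function (event) {
    if (event.key !== 'ArrowRight' && event.key !== 'ArrowLeft') return;
    widgets[current].tabIndex = -1;
    current = event.key === 'ArrowRight'
      ? Math.min(current + 1, widgets.length - 1)
      : Math.max(current - 1, 0);
    widgets[current].tabIndex = 0; // move the roving tab stop
    widgets[current].focus();
  });
});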
Edit: After reading your question again, it sounds as if you can't figure out how to enter the application element. You arrow to where the screen reader says "application" and hit Enter. To get out, you either Tab to the next tabbable element outside the application or press the special key command that exits the application. In NVDA, this key command is Ctrl+NVDA+Space. In your application, the application element is the first element.
role='application' should be used only on rare occasions. As you noted, it causes all keyboard events to skip the screen reader and go directly to your app, which means the screen reader's virtual cursor stops working. Typically, a screen reader will automatically go into "application" mode (often called "forms mode") for certain types of widgets, such as an input field. If you are using widget roles, you get this "forms mode" for free.
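As a minimal illustration of that point (not taken from your page, and the exact behaviour varies by screen reader), a native input or an element with a standard widget role switches the screen reader into forms mode on its own, without role="application":
<input type="text" aria-label="Search">
<div role="slider" tabindex="0" aria-label="Volume"
     aria-valuemin="0" aria-valuemax="100" aria-valuenow="50"></div>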
When you say "arrow keys" are not working, are you talking about up/down arrows or left/right arrows? They have different behaviors for a screen reader.
I need to capture the current time each time the spacebar is pressed in the browser while the JAWS screen reader is running. I can capture the spacebar when JAWS is not running; however, once JAWS is on, the system cannot capture any spacebar presses.
Here is my code:
$(document).keypress(function (event) {
  var chCode = ('charCode' in event) ? event.charCode : event.keyCode;
  if (chCode == 32) { // 32 is the key code for the spacebar
    addTime = addTime + Number(new Date()) + ",";
    var x = document.getElementById("spacebar");
    alert("spacebars!!!");
  }
});
I would like to know what to do so that I can capture the current time each time the spacebar is pressed.
Funnily enough, each time the spacebar is pressed, JAWS reads out "space", but the event is never captured at the code level.
OR, since JAWS reads out "space" when I press the spacebar, does anyone know how I can capture the JAWS event? Since JAWS recognizes the spacebar when I press it, I am wondering if I can capture the event directly from JAWS. Any thoughts?
This happens because most screen readers, and notably JAWS, provide a so-called virtual cursor in browsers. It is needed for quick navigation of web pages and similar documents.
To test this, try pressing a letter on a web page while JAWS is on. For instance, if you press B, JAWS will say "No buttons", because B moves to the next button (if any). To type text, you need to enter forms mode.
The spacebar, on the other hand, works only when you are on a clickable element (a link, button, check box, or just an element with an onClick handler attached), in which case it activates the element; or in forms mode, in which case it types a space into the edit field.
In order to accomplish what you want to do, you need to declare a part of your web page as role="application" (more on this here):
When the user navigates an element assigned the role of application, assistive technologies that typically intercept standard keyboard events SHOULD switch to an application browsing mode, and pass keyboard events through to the web application.
The intent is to hint to certain assistive technologies to switch from normal browsing mode into a mode more appropriate for interacting with a web application; some user agents have a browse navigation mode where keys, such as up and down arrows, are used to browse the document, and this native behavior prevents the use of these keys by a web application.
So, in order to be able to record your times, just declare a parent div as role="application", and your spacebar presses will be passed directly to the application instead of being intercepted by JAWS.
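A minimal sketch of that suggestion (the wrapper id is made up): the role="application" div receives the raw key events, so the handler can record the time.
<div role="application" id="timerArea" tabindex="0">
  Press Space to record a timestamp.
</div>
var addTime = "";
document.getElementById("timerArea").addEventListener("keydown", function (event) {
  if (event.key === " " || event.key === "Spacebar" || event.keyCode === 32) { // spacebar
    addTime = addTime + Number(new Date()) + ",";
  }
});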
I am working on a project which uses the Facebook Graph API to log in. I have the requirement of using only a virtual keyboard (no hardware keyboard will be present). I have looked everywhere, but can't find a solution for adding a virtual QWERTY keyboard to the popup.
I can put the keyboard into a popup, or I could add the QWERTY keyboard to the screen with the addChild() method, but I still have one problem: the virtual keyboard does not give focus to the text inputs of the popup, and when I press a key, everything blows up.
Does anyone know how I could solve the focus problem?
I mean: when I press a virtual key, I call a Java function which simulates a physical keyboard, but I lose focus on the Facebook text input, so the letter never appears there, and I don't know how to restore the focus.
Thanks in advance for the help!
We had the same problem with a desktop app written in C#, so I can only answer for a Windows-based application. Assuming you are working on a desktop app and showing the login in a web browser control, you can use the SendInput API to direct keyboard-like input to a field in the browser. We had our own custom keyboard; I don't think you will be able to use the built-in on-screen keyboard MS provides.
We had a Windows form that hosted a web browser control and the custom keyboard control. The user touches the field they want to fill in and types their input using the on-screen keyboard; the keyboard uses SendInput to send the appropriate character for the key that was touched to the web browser control. Other problems to look out for:
the Facebook login form takes a lot of space, so having both the keyboard and the login visible at the same time is difficult
sending non-ASCII characters; see this for help (SendInput sequence to create unicode character fails)
the user will have to touch the input field to select it
there are other links on the FB login page you may want to restrict (like "create an account")
you need an on-screen keyboard where touching a key doesn't steal focus from the browser field
These can all be solved, but they are not trivial.
Not sure why no one has been complaining about this, but I'm having a lot of problems with the BlackBerry PlayBook virtual keyboard on the simulator.
I have a richedit component in the middle of the screen, and as soon as the virtual keyboard appears to enter text, it completely hides the text input. I'd like to move the text input up when the keyboard appears/disappears. Is there any way to do this? I don't want to muck around with the focus_in and focus_out events on the richedit; I've tried, and it's not very reliable.
Thank you in advance!
We expect the next release of the SDK (long overdue at this point but, I think, imminent) to provide much more complete support for the virtual keyboard. Until that occurs, I think it's a waste of time to attempt to do anything special with it.
I also think there's a chance it will automagically move your whole stage up when it would cover up a text input, so maybe you won't have to do anything about it anyway.
Edit: Actually I published code in January describing an undocumented way to support this, using some rudimentary PPS support. It also shows how you can programmatically control the keyboard opening and closing. I don't recommend it yet for real code...
I've written a little video game in Flex that runs in the browser. The player moves by pressing the arrow keys on the keyboard, so I need to capture those keystrokes. In fact, the game action starts when the player presses one of those keys.
In order to capture the keystrokes, the Flash/Flex application, not just the browser, needs to have the focus.
How can I ensure that the application has the focus? I've implemented a bit of a hack: A "Begin" button you must click to start the game. The only point of this button is to ensure that the app has the focus. Is there a better solution to this?
No, this is the only way, but I think your present solution is a great one. The reason that you (as a user) have to click to give focus is so that an application cannot quietly steal focus and then log keystrokes without your knowledge, e.g. to steal passwords.
In some browsers (IE) you can give a SWF focus via JavaScript. Unfortunately this doesn't work in Firefox, so some users will have to click the SWF to give it focus. You could fairly easily check the browser in your game: if it's IE, give the SWF focus automatically and don't show the "Begin" button; in Firefox, show the "Begin" button.
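For reference, the JavaScript focus call mentioned above might look something like this (the element id is made up, and as noted it is generally honoured by IE but not by Firefox):
window.onload = function () {
  var swf = document.getElementById("gameSwf"); // id of the <object>/<embed> element (hypothetical)
  if (swf && typeof swf.focus === "function") {
    swf.focus(); // give keyboard focus to the SWF so it receives the arrow keys
  }
};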
Take a look at the link below. It worked well for me.
http://www.flexjunk.com/2010/12/30/managing-initial-swf-focus-in-all-browsers/