How to map controls to physical keyboard locations - Qt

I've been wanting to create a game engine, but first I want to sort out some issues, such as how to handle controls. Is it possible to map controls to physical locations on the keyboard, as opposed to the individual keys themselves?
I would like to do this because hard-coding controls like "W" for up and "S" for down is a nuisance for anyone who isn't using QWERTY and has to reconfigure the keys back to the locations the game's creators intended; that includes Dvorak users like myself, or anyone who simply changes the system default.
I'll probably be using C++ with Boost, SFML and Qt, if that matters.

Probably the best way to do it would be to simply keep the controls in an .ini or text file, which the user can then edit. Alternatively, offer an in-game menu from which you can select each key, i.e. "Press the key for up", etc. In terms of physical layouts, there aren't really any standards.
Anyway, even those using QWERTY keyboards may want to customise their keys, e.g. for roguelike bindings.
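That said, SFML can address keys by physical position: newer releases (2.6 and later) expose layout-independent scancodes (sf::Keyboard::Scancode) alongside the layout-dependent key codes. Here is a minimal sketch combining scancodes with a user-editable bindings file; the controls.ini format is made up for illustration, and the sketch assumes SFML 2.6:

    // Bind actions to physical key positions, overridable from a file.
    #include <SFML/Window.hpp>
    #include <fstream>
    #include <map>
    #include <string>

    int main()
    {
        // Defaults: the physical W/S positions, even on Dvorak or AZERTY.
        std::map<std::string, sf::Keyboard::Scancode> bindings = {
            {"up",   sf::Keyboard::Scan::W},
            {"down", sf::Keyboard::Scan::S},
        };

        // Hypothetical format: one "<action> <scancode number>" per line,
        // rewritten by hand or by an in-game "press the key for up" menu.
        std::ifstream file("controls.ini");
        std::string action;
        int code;
        while (file >> action >> code)
            bindings[action] = static_cast<sf::Keyboard::Scancode>(code);

        sf::Window window(sf::VideoMode(640, 480), "controls demo");
        while (window.isOpen())
        {
            sf::Event event;
            while (window.pollEvent(event))
                if (event.type == sf::Event::Closed)
                    window.close();

            // Poll by physical location, not by the symbol it produces.
            if (sf::Keyboard::isKeyPressed(bindings["up"]))   { /* move up */ }
            if (sf::Keyboard::isKeyPressed(bindings["down"])) { /* move down */ }
        }
        return 0;
    }

SFML can also describe a scancode in the user's current layout (sf::Keyboard::getDescription), which lets a Dvorak user see the right key name in your key-binding menu.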

Related

Is there any way to control programs by finding them in the task manager and manipulating their contents?

Hello, I guess my title doesn't explain the question well, but I am trying to understand: is there any way to control and automate things just by finding tasks in the task manager? I have seen "Spy++" in Visual Studio. At first I didn't understand its aim or how far we can go with it; I only gathered that it can provide a wide range of logs.
I would like to give an example:
I want to log in to Facebook/Twitter and do casual things with software developed by myself (I don't want to use Selenium or anything of that kind), or I want to get information from a game, such as a character's actual health, attack power, or ability power... or send commands to that game from my software, like pressing a, b or 1.
Can someone tell me the exact name of the subject I am talking about?
Terminology: what Selenium and AutoIt do is called "UI automation"; reading and modifying in-game values is the job of a "memory editor" or "trainer".
There is no universal way to control programs if you want your tool to be transparent to its target. A browser may listen to OS input events (Windows messages telling it which keys were pressed or where the mouse was clicked), games may use DirectInput, and yet other apps may subscribe to low-level system events or hooks.
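At the most generic level, you can synthesize input at the OS layer and let whatever has focus receive it. Here is a minimal Win32 sketch of that idea; the Sleep and the 'A' key are placeholders, and note that games reading input via DirectInput/Raw Input may ignore synthesized events:

    // Synthesize a keystroke with SendInput: the focused application
    // receives it as if the user had typed it.
    #include <windows.h>

    void pressKey(WORD virtualKey)
    {
        INPUT inputs[2] = {};
        inputs[0].type = INPUT_KEYBOARD;
        inputs[0].ki.wVk = virtualKey;           // key down
        inputs[1].type = INPUT_KEYBOARD;
        inputs[1].ki.wVk = virtualKey;
        inputs[1].ki.dwFlags = KEYEVENTF_KEYUP;  // key up
        SendInput(2, inputs, sizeof(INPUT));
    }

    int main()
    {
        Sleep(3000);   // time to manually focus the target window
        pressKey('A'); // 'A' is also the virtual-key code for the A key
        return 0;
    }

Per-application approaches go through the program's own interfaces instead.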
Take browser automation, for example:
Using plugins/extensions gives you a JavaScript API that allows you to inspect pages, forms on those pages, modify browser behavior and whatnot.
Browsers can also have their own external API. This can be done by linking to their DLLs, or passing command line arguments, or passing messages in other ways. For Firefox, this API is named "Marionette".
Then there's Selenium, which provides a common API for various browsers. It controls them using "drivers".
Selenium "knows" how to drive a browser, as it's coded against the browser's APIs. Spy++ "knows" that it's inspecting a Win32 window and looks for known controls, their classes and their names so you could write another program to send specific messages to those specific controls of those specific applications.
As for "log in to Facebook", no, you cannot do that in a reasonable amount of time for the currently popular browsers if you want to code it from the ground on up.
You'll have to, in one way or the other, interface with the browser and ask for a handle to the username/password textboxes, enter data into them and then submit the form. Then you'll practically be rebuilding Selenium, so why not use that tool in the first place?
Or you'll have to scrape the pixels on the screen, recognize those textboxes, click the mouse there and send some keys. And then Facebook redesigns their login form and you'll have to start over.
tl;dr: use the right tool for the job. If you want to automate a site's UI, then use Selenium.

Do you need to use keyboard shortcuts to comply with WCAG 2.1 AA?

I have been reading all the guidelines and I am slightly confused. Do all links in the navigation bar need to have a keyboard shortcut in order to comply with WCAG 2.1 AA? The guidelines seem to describe how to comply if you use shortcuts, but don't state that you have to use them, so I am confused.
Thanks in advance.
Short Answer
No
Long Answer
No - you are confusing several sections of WCAG.
Keyboard shortcuts are separate from skip links, which are what I think you are getting confused by.
Skip links allow a screen reader user to jump past the navigation at the top of a page; this avoids having to tab past all of the navigation each time they enter a page.
Menus, as long as they are semantically correct (a <nav> containing a <ul> of links), are accessible anyway, since screen reader users navigate via links, tab stops, headings, etc. using the shortcuts on their screen readers (if you have drop-down menus, there is a lot to consider beyond the scope of this question).
Shortcut keys allow different actions and sections to be accessed quickly via an assigned key.
I would advise against setting these. If you do, you need to:
provide a way to change the keys via a settings menu
provide a way to disable the keys (as they may interfere with the keyboard shortcuts a user relies on for their screen reader)
explain what the shortcut keys are (and update these descriptions if a user changes their preferred shortcuts), etc.
They are not worth the effort for a simple website and should only be used in complex applications for specific features (not generally for navigation, but for things like a WYSIWYG editor).
You need to make sure that the default system keyboard shortcuts are not overridden by your website/application. If you do override a shortcut key, you need to let the user know and provide a mechanism to remap those shortcut keys to the user's choice.

Best solution for user inputs like text input

I just wonder what would be the best solution for receiving text input from the user in PlayN.
I didn't find anything that I can use to achieve this. I think the best solution would be to render something like an HTML input for writing text, but it won't be that simple, because we need to be able to use, for example, the virtual keyboard on Android (on the Android platform) and the regular keyboard on the HTML backend. Even then, I think it would be very difficult (or impossible) to invoke the Android keyboard in a game...
I'm thinking about creating a widget in the Tripleplay UI library (because I will use it anyway), but this would end up rendering a virtual keyboard on screen for user input... buttons from a to z, etc.
I wonder, is there any better solution for this, or do I need to implement something like I described above (like a Tripleplay widget)?
There is already a Tripleplay widget for receiving text input called Field.
However, it is very primitive and does not yet work on mobile platforms (it will work on an Android device with a hardware keyboard). We need to provide an API in PlayN to display the virtual keyboard, but until then, there's no way for Field to trigger it.
I don't recommend using this for any substantial text input, however. It doesn't (and never will) support cut and paste, language input methods, or any of the other extremely complex features that users expect for text input.
I would like to add an API to PlayN like:
Keyboard.requestTextInput(String label, Callback<String> callback)
which would pop up the virtual keyboard, with an attached (native) text box, and allow the user to enter a single line of text using all of the machinery of the platform's native text entry support. This will allow them to cut-and-paste, and use language input methods, and provide an experience with which they are comfortable on the platform in question.
If your game needs more sophisticated text input (like a chat interface, or the ability to take pages of notes), you will probably have to create a separate interface for each platform that you wish to support, using native multiline text editing widgets and then "wire" those into your PlayN game. This will be more complicated than can be described in a simple SO answer, so you'll have to do some research and learn how PlayN manages the display on each of the backends that you wish to support.

Is there an "easy" way to add customizable keyboard shortcuts to my Qt4 app?

I've got a sizable Qt app that has been in development since the Qt 3 days, and it now contains dozens of windows with thousands of menu items, controls, and other user-initiated actions. It currently compiles under Qt 4.6, for Linux, Mac OS X, and Windows.
The new feature request from on high is that the user should be able to customize any and all keyboard shortcuts in this app... i.e. there should be a "Customize Key Bindings..." menu item that, when chosen, opens up a dialog that lists all of the actions in the application and their current key bindings (if any), and allows the user to assign or change key bindings for any and all actions he cares to, then save his settings and use the application with his own customized key bindings.
This seems like a rather ambitious thing to implement, considering the number of keyboard-able actions in the app, and I'm wondering if there are any existing classes or code libraries available to assist with this sort of thing, or if it's something I'm going to have to implement from scratch myself. The Qt internationalization system, in particular, seems like it might be adapted to help with something like this -- the difference being that instead of (actually, in addition to) the developer choosing key combinations before shipping the app, the users could choose/alter key combinations while using the app (if they aren't happy with the shipped defaults, of course).
Does anyone have any hints or pointers on code or approaches towards implementing this feature?
I agree with JimDaniel: it sounds like the most generic approach would be to create a QAction for everything that you would like to be executed through a keyboard shortcut. The user then configures the appropriate shortcut for each action.
This is definitely a cleaner way to implement it than overriding events, and it also lets you put your actions into menus and toolbars; I don't know how much work this would be for your application.
You could store the bindings in an application config file, read it in at app startup, and update the file whenever the user changes a binding. Keys are just enums in the Qt framework. You can override the appropriate keyPressEvent() or keyReleaseEvent(), check the key(s) pressed, and match against the current bindings.
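A minimal sketch of the QAction-plus-config-file idea using QSettings; the organization/application names are hypothetical, and it assumes every rebindable action has been given a unique objectName:

    // Persist and restore user-defined shortcuts for every named QAction
    // (Qt 4; QAction and QKeySequence live in QtGui there).
    #include <QtGui/QAction>
    #include <QtGui/QKeySequence>
    #include <QtGui/QWidget>
    #include <QtCore/QSettings>

    void saveBindings(QWidget* root)
    {
        QSettings settings("MyOrg", "MyApp"); // hypothetical identifiers
        settings.beginGroup("shortcuts");
        foreach (QAction* action, root->findChildren<QAction*>()) {
            if (!action->objectName().isEmpty())
                settings.setValue(action->objectName(),
                                  action->shortcut().toString());
        }
        settings.endGroup();
    }

    void loadBindings(QWidget* root)
    {
        QSettings settings("MyOrg", "MyApp");
        settings.beginGroup("shortcuts");
        foreach (QAction* action, root->findChildren<QAction*>()) {
            if (action->objectName().isEmpty())
                continue;
            QVariant stored = settings.value(action->objectName());
            if (stored.isValid())
                action->setShortcut(QKeySequence(stored.toString()));
        }
        settings.endGroup();
    }

The "Customize Key Bindings..." dialog can then just be a table of action names and shortcuts; Qt 4.6 has no stock key-capture widget (QKeySequenceEdit only arrived in Qt 5.2), so capturing the new key typically means a small QLineEdit subclass that records keyPressEvent().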
Here is what I found before this thread:
http://doc.qt.nokia.com/qq/qq14-actioneditor.html
It's written for Qt 3, but I guess it's possible to port it to Qt 4.

The best approach for a multilingual user interface

I am working on a multilingual web application. I'm wondering how to design the best user interface so that the user can localize data for various languages. For instance, when making a page whose title differs in every language, do I put a textbox for each one? That's not a suitable way to do it (in the case of 10 languages, the user gets 10 textboxes! Too silly).
What is your idea about this?
Edit: I have no problem with globalization in my system. In fact, I'm looking for a good interface design through which the user can enter data into my forms in various languages.
Thanks in advance.
What about only one textfield and a dropdown containing the languages? After selecting the language and filling out the textfield, the field gets submitted and the chosen language disappears from the dropdown list.
The entered value and language then appear beneath the dropdown and textbox, with a way to edit/delete them. This way it's always clear to the user which languages are already covered and which values are assigned to them. Furthermore, it's a nicer way if not all 10 languages have to be filled in, e.g. if the user only knows English and French.
Hope you know what I mean, otherwise I'll have to create an example screenshot :-)
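The bookkeeping behind that pattern is small. Sketched here as a plain data model (C++ only for concreteness; the language codes and values are invented):

    // One stored value per language; a language leaves the "pending"
    // dropdown once it has been filled in.
    #include <algorithm>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    int main()
    {
        std::vector<std::string> pending = {"en", "fr", "de", "ar"};
        std::map<std::string, std::string> titles; // language -> title

        // What the submit button does: store the value, shrink the list.
        auto submit = [&](const std::string& lang, const std::string& text) {
            titles[lang] = text;
            pending.erase(std::remove(pending.begin(), pending.end(), lang),
                          pending.end());
        };
        submit("en", "Home");
        submit("fr", "Accueil");

        for (const auto& t : titles)     // shown beneath the form, editable
            std::cout << "covered: " << t.first << " = " << t.second << '\n';
        for (const auto& lang : pending) // still offered in the dropdown
            std::cout << "still missing: " << lang << '\n';
        return 0;
    }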
You could have 3 text boxes, and that's fine... get to 10, and it starts getting a bit crazy; beyond that, it starts looking pretty bad.
Maybe you could show up to, say, 5 text boxes... but if it goes beyond 5 (because the user wants localization for more than 5 locales), switch to a single textbox with a dropdown next to it containing the current language.
The textbox would auto-populate with the current value for the language selected in the dropdown. This should work well in ASP.NET, and it can be done either client-side or server-side on a postback pretty easily, so you don't need to do anything crazy for people not running JavaScript.
You have one text box.
On load, you populate the text box depending on the language.
The content will be populated from some kind of resource file. If there isn't much text, it could even go in your config file.
Be aware of the following:
Different content lengths depending on the language.
Right-to-left alphabets screwing up your alignment.
This is a classic project for using NUnit or similar to prove that things still work after new translations are added!
What language do you use in development? If this is something like PHP, then you definitely should use templates and load text strings into them from configuration files for every language. In Smarty, for example, I use configuration files for that.
Text strings for error messages and the like could be put into files such as .ini files and loaded from there.
The Google Web Toolkit (GWT) demo shows the same page with versions available in English, French, Arabic and Chinese.
The GWT docs have a thorough discussion of internationalization. You could emulate their implementation.
Constants: Useful for localizing typed constant values
Messages: Useful for localizing messages requiring arguments
ConstantsWithLookup: Like Constants but with extra lookup flexibility for highly data-driven applications
Dictionary: Useful when adding a GWT module to existing localized web pages
Remember that dates and times are represented differently in different locales, if your forms use them.
The W3C also discusses Internationalization Best Practices in HTML content.
Normally, a user navigating a website will have a preference specifying the language of the whole site. I think it would be confusing to break this pattern.
So, in an intro page, or a user preferences page, allow the user to select a language; then, on the other pages, display a consistent set of controls to be able to edit the content on each page.
Are you making an administration page that allows users to change the text used on other pages in the application?
If so, you could use a grid like in Zeta Resource Editor:
(screenshot: http://img202.imageshack.us/img202/7813/zetaresourceeditor02.th.png)
Or you could make a per-language list like in nopCommerce:
(screenshot: http://img249.imageshack.us/img249/9079/nopcommerce.th.png)
You can use JavaScript files as resource files for your languages, e.g. language_arabic.js, language_english.js, etc. When a user wants to see his/her preferred language, he/she selects it from the available languages in a drop-down list. Otherwise, the user has to change the language settings on his/her computer. This is what I did while working on a GIS project to customize a Geocortex IMF site ( http://demos.geocortex.net/imf-5.2.2/sites/demo_geocortex/jsp/launch.jsp ) for an Arabic client.
