Create a program where, if the user types a chess-style coordinate, the mouse is transported to that location while the screen is overlaid with a grid - accessibility

Create a program that allows a user to type a chess-style coordinate (e.g. G38), and the mouse will be transported to that location while the screen is overlaid with a Cartesian coordinate grid to serve as a reference.
What language?
What classes do I need?
OOP or functional?
Is this re-inventing the wheel, or is it a new, patentable idea?
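As a rough illustration of the behaviour being asked about: a minimal sketch, assuming Qt/C++ and a hypothetical fixed grid of labelled cells over the primary screen. The cell size, the labelling scheme, and the coordinateToPoint() helper are illustrative assumptions, not something from the question, and the grid overlay itself is not shown.

    #include <QCursor>
    #include <QGuiApplication>
    #include <QPoint>
    #include <QScreen>
    #include <QString>

    // Hypothetical mapping: a column letter (A..Z) and a row number index into a
    // fixed grid of cells laid over the primary screen. The 40-pixel cell size is
    // an assumption for illustration.
    static QPoint coordinateToPoint(const QString &coord, int cellSize = 40)
    {
        const int column = coord.at(0).toUpper().unicode() - 'A'; // "G" -> 6
        const int row = coord.mid(1).toInt() - 1;                 // "38" -> 37
        return QPoint(column * cellSize + cellSize / 2,           // centre of the cell
                      row * cellSize + cellSize / 2);
    }

    int main(int argc, char *argv[])
    {
        QGuiApplication app(argc, argv);
        // Warp the cursor to the cell the user typed, e.g. "G38".
        const QPoint target = coordinateToPoint("G38");
        if (QGuiApplication::primaryScreen()->geometry().contains(target))
            QCursor::setPos(target);
        return 0;
    }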

Related

Display QML rectangles on video stream based on object recognition

I have a video stream as described in the Qt Video Overview, using the MyVideoProducer mechanics. The source images are analyzed and I have a list of connected components (x, y, width, height), and I want to overlay rectangles on the video.
Can I do this by sending a list of rectangle co-ordinates to QML and having it place the rectangles, or do I need to create my own overlay images?
I looked at the QtQuick particle system but it doesn't seem to fit. Other questions have the layout of the rectangles managed by Qt/QML, but I need the rectangles to be placed according to the co-ordinates that the vision pipeline has determined in C++ and sent to the QML front-end, and they must stay related to the video frames or they become stale.
There is an example, but the overlay is unrelated to the video. I think I need an overlay that is synced to onNewVideoContentReceived(). QML won't easily be able to keep a list of rectangles in sync with the video.
I just modified the original buffer creation (debayered from a camera) to draw the rectangles myself in the RGBA format. This avoids the synchronization issue between the video frame and the object-location data. I did not use alpha blending, just replacement of pixels. For my content, the number of boxes relative to the video area was not great. With alpha-blended rectangles and a lot of objects, it may be more efficient to involve a GPU. In fact, you could use fixed-size squares rather than the CCL bounding regions, and this might be significantly faster on a GPU.
A QML solution would be more elegant, but this solution works.
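As a rough illustration of the pixel-replacement approach described above: a minimal sketch that draws an unblended rectangle outline directly into a tightly packed RGBA8888 buffer. The buffer layout and the drawRectOutline() helper are assumptions for illustration, not the poster's actual code.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Draw a 1-pixel rectangle outline into an RGBA8888 frame buffer by replacing
    // pixels (no alpha blending), as described in the answer above.
    static void drawRectOutline(std::vector<uint8_t> &rgba, int frameWidth, int frameHeight,
                                int x, int y, int w, int h,
                                uint8_t r, uint8_t g, uint8_t b)
    {
        auto putPixel = [&](int px, int py) {
            if (px < 0 || py < 0 || px >= frameWidth || py >= frameHeight)
                return;                                              // clip to the frame
            const std::size_t idx = (static_cast<std::size_t>(py) * frameWidth + px) * 4;
            rgba[idx + 0] = r;
            rgba[idx + 1] = g;
            rgba[idx + 2] = b;
            rgba[idx + 3] = 255;                                     // fully opaque
        };

        for (int px = x; px < x + w; ++px) {                         // top and bottom edges
            putPixel(px, y);
            putPixel(px, y + h - 1);
        }
        for (int py = y; py < y + h; ++py) {                         // left and right edges
            putPixel(x, py);
            putPixel(x + w - 1, py);
        }
    }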
An alternative option is QVideoFrame::setMetaData(); this can tie the CCL QRect list to the frame, so that the association is explicit and travels with the frame. The onNewVideoContentReceived() method of MyVideoProducer could then render the rectangles from C++.
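A minimal sketch of that idea, assuming Qt 5's QVideoFrame metadata API; the "objects" key name is an illustrative assumption.

    #include <QList>
    #include <QRect>
    #include <QString>
    #include <QVariantList>
    #include <QVideoFrame>

    // Attach the connected-component rectangles to the frame they were computed
    // from, so the association travels with the frame itself.
    void attachObjectRects(QVideoFrame &frame, const QList<QRect> &rects)
    {
        QVariantList list;
        for (const QRect &r : rects)
            list.append(QVariant(r));                  // QVariant can hold a QRect
        frame.setMetaData(QStringLiteral("objects"), list);
    }

    // Wherever the frame is consumed, read the rectangles back:
    QList<QRect> objectRects(const QVideoFrame &frame)
    {
        QList<QRect> rects;
        const QVariantList list = frame.metaData(QStringLiteral("objects")).toList();
        for (const QVariant &v : list)
            rects.append(v.toRect());
        return rects;
    }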
Another option is QAbstractVideoFilter, which will modify the original buffer to add additional data to the images presented. This is easy to enable/disable via the QML front end.
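A skeletal sketch of that option, assuming Qt 5.5+'s QAbstractVideoFilter and QVideoFilterRunnable; the actual pixel modification (for example, something like the drawRectOutline() sketch above) and pixel-format handling are omitted.

    #include <QAbstractVideoBuffer>
    #include <QAbstractVideoFilter>
    #include <QVideoFilterRunnable>
    #include <QVideoFrame>
    #include <QVideoSurfaceFormat>

    // Runnable that receives each frame before presentation and can draw into it.
    class OverlayRunnable : public QVideoFilterRunnable
    {
    public:
        QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat,
                        RunFlags flags) override
        {
            Q_UNUSED(surfaceFormat);
            Q_UNUSED(flags);
            if (input->map(QAbstractVideoBuffer::ReadWrite)) {
                // Modify input->bits() here, e.g. with a pixel-replacement helper,
                // then release the mapping.
                input->unmap();
            }
            return *input;
        }
    };

    // Filter exposed to QML; attach or detach it via VideoOutput's `filters` list.
    class OverlayFilter : public QAbstractVideoFilter
    {
        Q_OBJECT
    public:
        QVideoFilterRunnable *createFilterRunnable() override { return new OverlayRunnable; }
    };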
All of these solutions rely on C++, so it is not easy to change coloring, etc. in QML. For example, if the object has a recognized property such as 'male', 'female', 'cat', 'vehicle', etc., the QML could update the highlighting appropriately and keep an accounting of the object types.

What device/instrument/technology should I use for detecting objects lying on a given surface?

First off: Thanks for taking the time to help me with my problem. It is much appreciated :)
I am building a natural user interface. I’d like the interface to detect several (up to 40) objects lying on it. The interface should detect if the objects are moved on its canvas. It is not important what the actual object on the surface is (e.g. “bottle”) or what color it has – only the shape and the placement of the object are of interest (e.g. “circle”).
So far I’m using a webcam connected to my computer and Processing’s blob functionality to detect the objects on the surface of the interface (see picture 1). This has some major disadvantages for what I am trying to accomplish:
I do not want the user to see the camera or any alternative device, because this distracts the user’s attention. Actually, the surface should be completely dark.
Whenever I reach with my hand to rearrange the objects on the interface, the blob detection gets very busy and recognizes objects (my hand) which are not touching the canvas directly. This problem can hardly be tackled using a Kinect, because its depth sensing does not work through glass/acrylic glass – correct me if I am wrong.
It would be nice to install a few LEDs on the canvas controlled by an Arduino. Unfortunately, the light of the LEDs would disturb the blob detection.
Because of the camera’s focal length, the table needs to be unnecessarily high (60 cm / 23 inch).
Do you have any idea on an alternative device/technology to detect the objects? Would be nice if the device would work well with Processing and Arduino.
Thanks in advance! :)
Possibilities:
Use reflective tinted glass so that the surface appears dark or reflective.
Illuminate the area where you place the webcam with an array of IR LEDs.
I would suggest colour-based detection and contouring of the objects.
If you are using colour-based detection, convert the frames to the HSV and YCrCb colour spaces; these are much better for segmenting the required area in colour-based detection (see the sketch after this list).
I do recommend checking out https://github.com/atduskgreg/opencv-processing. It interfaces OpenCV with Processing, so you get a lot of OpenCV's functionality inside Processing.
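The answer above recommends the opencv-processing library, but purely as an illustration of the suggested pipeline, here is a minimal sketch using the OpenCV C++ API, assuming a BGR input frame; the HSV threshold range is a placeholder to tune for the actual objects.

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Segment objects by colour in HSV space and extract their contours.
    std::vector<std::vector<cv::Point>> findObjectContours(const cv::Mat &bgrFrame)
    {
        cv::Mat hsv, mask;
        cv::cvtColor(bgrFrame, hsv, cv::COLOR_BGR2HSV);

        // Keep pixels whose hue/saturation/value fall inside the assumed range.
        cv::inRange(hsv, cv::Scalar(0, 80, 80), cv::Scalar(30, 255, 255), mask);

        // Clean up the binary mask a little before contouring.
        cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                         cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5)));

        // Outer contours only; each contour gives the shape and placement of one object.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        return contours;
    }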
One possibility:
Use a webcam with infrared capability (such as a security camera with built-in IR illumination). Apparently some normal webcams can be converted to IR use by removing a filter; I have no idea how common that is.
Make the tabletop out of some material that is IR-transparent, but opaque or nearly so to visible light. (Look at the lens on most any IR remote control for an example.)
This doesn't help much with #2, unfortunately. Perhaps you can be a bit pickier about the size/shape of the blobs you recognize as being your objects?
If you only need a few distinct points of illumination for #3, you could put laser diodes under the table, out of the path of the camera - that should make a visible spot on top, if the tabletop material isn't completely opaque. If you need arbitrary positioning of the lights - perhaps a projector on the ceiling, pointing down?
Look into OpenCV. It's an open source computer vision project.
In addition to existing ideas (which are great), I'd like to suggest trying TUIO Processing.
Once you have the camera setup (with the right field of view/lens/etc. based on your physical constraints) you could probably get away with sticking TUIO markers to the bottom of your objects.
The software will detect the markers and you'll differentiate the objects by ID, but you'll also be able to get position/rotation/etc., and your hands will not be part of that.

What's a sub-buffer object in OpenCL for?

The alignment requirement seems to render at least the region part of this functionality almost completely useless.
Could anyone give me an example of when to create a sub-buffer from a region of a buffer?
And am I right that I can create a read-only or write-only sub-buffer from a read-write buffer? If I can, will I benefit from this read-only/write-only reference to what is actually a read-write buffer?
The purpose is to allow different parts of a buffer to be independently updated. One example would be if you want different devices to update different parts of your data structure. Rather than copying regions into new buffers, passing them to devices, getting the data back and re-merging, you can create sub-buffers and pass those to the devices.
You can create a read-write sub-buffer though. clCreateSubBuffer allows CL_MEM_READ_WRITE.
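As a minimal sketch of that use case, the following splits a parent buffer into two sub-buffers, e.g. one per device. The half-size split is illustrative, and each region's origin must be a multiple of the device's CL_DEVICE_MEM_BASE_ADDR_ALIGN (reported in bits) for clCreateSubBuffer to succeed.

    #include <CL/cl.h>

    /* Split an existing read-write buffer into two sub-buffers covering its two
     * halves, so that each half can be passed to a different device. Assumes the
     * parent buffer is 2 * halfSize bytes and that halfSize satisfies the devices'
     * base-address alignment requirement. */
    cl_int makeHalfSubBuffers(cl_mem parent, size_t halfSize,
                              cl_mem *firstHalf, cl_mem *secondHalf)
    {
        cl_int err = CL_SUCCESS;

        cl_buffer_region firstRegion  = { 0,        halfSize };
        cl_buffer_region secondRegion = { halfSize, halfSize };

        *firstHalf = clCreateSubBuffer(parent, CL_MEM_READ_WRITE,
                                       CL_BUFFER_CREATE_TYPE_REGION,
                                       &firstRegion, &err);
        if (err != CL_SUCCESS)
            return err;

        *secondHalf = clCreateSubBuffer(parent, CL_MEM_READ_WRITE,
                                        CL_BUFFER_CREATE_TYPE_REGION,
                                        &secondRegion, &err);
        return err;
    }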

QGraphicsView: How to efficiently get the viewport coordinates of QGraphicsItems?

Is there a fast way to get the viewport coordinates of QGraphicsItems in a QGraphicsView? The only way I can think of is to call QGraphicsView::items(), and then QGraphicsItem::pos() followed by QGraphicsView::mapFromScene.
I must be missing something, though, because items are already converted to viewport coordinates to position them correctly on the QGraphicsView, so converting them to viewport coordinates again with mapFromScene seems inefficient, especially because in my case this occurs often and for many items. Is there a more direct approach?
Probably not. A QGraphicsScene can be rendered by more than one QGraphicsView simultaneously, so it makes no sense to keep only one set of viewport coordinates.
Also, all operations between QGraphicsItems are calculated directly in scene coordinates. Events from the viewport are converted to scene coordinates before processing. Working in viewport coordinates, which are integer-based, can also lose precision. A QGraphicsView is only a representation of the mathematical model of a scene; it is not the actual model.
Maybe you can ask a more specific question about what exactly you are trying to accomplish. There may be a better way to do it in scene coordinates.
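For reference, the conversion the question describes would look roughly like the sketch below; using scenePos() rather than pos() is an assumption here, since pos() is expressed in parent coordinates while mapFromScene() expects scene coordinates.

    #include <QGraphicsItem>
    #include <QGraphicsView>
    #include <QList>
    #include <QPoint>
    #include <QWidget>

    // Map each item currently visible in the view into viewport coordinates.
    QList<QPoint> itemViewportPositions(QGraphicsView *view)
    {
        QList<QPoint> positions;
        const QList<QGraphicsItem *> items = view->items(view->viewport()->rect());
        for (QGraphicsItem *item : items)
            positions.append(view->mapFromScene(item->scenePos()));
        return positions;
    }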

How to adjust a sound clip's volume in real time using DShow.h and strmiids.lib with C++

I am trying to figure out how to set, in real time, the volume at which my sound clips play in my C++ program, and do things like make the volume of a sound increase as two objects move closer to one another. Right now, I am using "DShow.h" as well as "strmiids.lib", and I am using the interfaces provided by the following data member pointers:
IGraphBuilder* m_graphBuilder;
IMediaControl* m_mediaControl;
IMediaEvent* m_mediaEvent;
IMediaSeeking* m_mediaSeeking;
Using the interface provided by these, is there a way to alter the volume of the media stream playing?
Have a look at the IBasicAudio interface.
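A minimal sketch of that suggestion: query the filter graph manager for IBasicAudio and set the volume, which is expressed in hundredths of a decibel of attenuation (0 is full volume, -10000 is effectively silent). The distance-to-attenuation mapping shown in the comment is an illustrative assumption.

    #include <dshow.h>

    // Set playback volume through the graph's IBasicAudio interface.
    // attenuationHundredthsDb: 0 = full volume, -10000 = effectively silent.
    HRESULT setGraphVolume(IGraphBuilder* graphBuilder, long attenuationHundredthsDb)
    {
        IBasicAudio* basicAudio = nullptr;
        HRESULT hr = graphBuilder->QueryInterface(IID_IBasicAudio,
                                                  reinterpret_cast<void**>(&basicAudio));
        if (FAILED(hr))
            return hr;

        hr = basicAudio->put_Volume(attenuationHundredthsDb);
        basicAudio->Release();
        return hr;
    }

    // Example usage: attenuate more as the two objects move apart (hypothetical mapping).
    // setGraphVolume(m_graphBuilder, static_cast<long>(-distance * 100));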
