Displaying different content on multiple projectors - directshow

I want to connect multiple projectors to a single laptop. I found a VGA splitter (http://www.kvmswitchtech.com/vga-splitter-350mhz-8-port-p46359.htm) which can be used to connect multiple projectors to a single PC.
But I don't want to display the whole screen on both projectors; in short, the projectors should display different content at the same time.
For example:
Projector 1 could display a PowerPoint presentation while Projector 2 displays a movie running in a player.
I have the following set of questions:
Is there any software available to perform this operation?
If I want to write my own application, is DirectShow (provided by Microsoft) a good one to start with?
Is there any other VGA programming language available?

Is there any software available to perform this operation?
The primary question you are going to get here is how the projectors are connected to the PC. The device might need specific integration, in which case you are immediately at item #2 below. Otherwise it can act as a sort of secondary monitor: you can extend your desktop onto it, and then any full-screen application running on the secondary monitor is going to be projected.
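As a rough illustration of the "extended desktop" route, here is a minimal sketch (plain Win32, C++) that enumerates the attached displays so an application can place a borderless full-screen window on the rectangle of the secondary (projected) monitor; the window-creation part is only hinted at in a comment.

```cpp
// Rough sketch (Win32, C++): list the attached displays so that a full-screen
// window can be placed on the secondary (projected) monitor's rectangle.
#include <windows.h>
#include <cstdio>
#include <vector>

static BOOL CALLBACK OnMonitor(HMONITOR monitor, HDC, LPRECT, LPARAM userData)
{
    auto *rects = reinterpret_cast<std::vector<RECT> *>(userData);
    MONITORINFO info = { sizeof(MONITORINFO) };
    if (GetMonitorInfo(monitor, &info))
        rects->push_back(info.rcMonitor);   // display rectangle in virtual-screen coordinates
    return TRUE;                            // keep enumerating
}

int main()
{
    std::vector<RECT> displays;
    EnumDisplayMonitors(nullptr, nullptr, OnMonitor, reinterpret_cast<LPARAM>(&displays));

    for (const RECT &r : displays)
        std::printf("display at %ld,%ld size %ldx%ld\n",
                    r.left, r.top, r.right - r.left, r.bottom - r.top);

    // A borderless window moved onto displays[1] (if present) with SetWindowPos
    // and shown maximized would appear only on the projector.
    return 0;
}
```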
You will probably find more software choices by asking on Super User.
If I want to write my own application, is DirectShow (provided by Microsoft) a good one to start with?
As mentioned above, the hardware might require a specific SDK from the vendor, and that would be your starting point. DirectShow is an API that covers several related tasks and might be of use here:
it is capable of building media pipelines terminating at a DirectShow-compatible video output device (the projector might not be capable/compatible)
it allows you to play media files in your application and otherwise control video/audio and integrate it into higher-level software
DirectShow as an API does not fully cover the requested task, but it is definitely relevant and could be used in an in-house-built Windows application, for example along the lines of the sketch below.
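A minimal, hedged sketch of that idea (C++ / DirectShow): build a playback graph for a media file and move its video window onto the secondary display. The file path and the window coordinates are placeholders, and error handling is omitted for brevity.

```cpp
// Hedged sketch (C++ / DirectShow): build a playback graph for a media file and
// move its video window onto the secondary display. The file path and the window
// coordinates are placeholders; a real application would check every HRESULT.
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

int main()
{
    CoInitialize(nullptr);

    IGraphBuilder *graph = nullptr;
    CoCreateInstance(CLSID_FilterGraph, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, reinterpret_cast<void **>(&graph));

    // Let DirectShow pick the filters needed to play the file.
    graph->RenderFile(L"C:\\media\\example.wmv", nullptr);

    // Position the video renderer's window; the rectangle is assumed to be the
    // secondary monitor's area in virtual-screen coordinates (see the monitor
    // enumeration sketch above).
    IVideoWindow *video = nullptr;
    graph->QueryInterface(IID_IVideoWindow, reinterpret_cast<void **>(&video));
    video->SetWindowPosition(1920, 0, 1280, 720);

    IMediaControl *control = nullptr;
    graph->QueryInterface(IID_IMediaControl, reinterpret_cast<void **>(&control));
    control->Run();

    Sleep(10000);                // keep playing for a while in this sketch

    control->Release();
    video->Release();
    graph->Release();
    CoUninitialize();
    return 0;
}
```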
Is there any other VGA programming language available?
The "language" here is not actually a language; it is a matter of how a particular device is integrated with the PC. This is typically covered by the hardware vendors, who provide the hardware with accompanying development kits and samples.

Related

Will ASP.NET Form Get value from Barcode

ASP.NET form: if I run a form in a browser on a small (Android) device with a barcode scanner, will the scanned barcode go into the ASP.NET textbox? Or do I need to add something to the application?
Well, it is going to depend on which of the 150+ barcode scanners you decide to grab from Google Play.
However, the answer is yes, or no. It will depend on the kind of scanner.
If you download just a scanning application (software based, not a built-in scanner), then in general the answer is no.
The reason is that Android (and even iOS) doesn't allow one application to set focus on, or get/grab/take data from, another application. Nor is the reverse allowed. If that were possible, then such an app could also grab values while you are, say, running your on-line banking application.
So Android does not let a scanning app push its result into whatever application currently has focus. Now, if the scanner is factory-supplied software on the phone? Then yes, it works like a desktop keyboard "wedge". That means the program does not know whether you are typing on the keyboard or the input comes from the scanner (hence the name keyboard wedge). These will work with a web form.
However, we are now seeing the rise of software-based keyboard wedges. That means the software scanner is installed on Android as a custom keyboard. In that case, once again, it will work in a web form.
So, for devices with a built-in scanner? Yes, that will work in all applications. For a software-only scanner (one that uses the built-in camera), this is again possible if the software in question works as a keyboard/wedge scanner.
If you are going to adopt Android scanning, then use a purpose-built Android scanner.
Another possibility, if you want to use a software scanner: write a small Android application and have it talk to your web site. I think this is the best solution, but it of course means you have to adopt some Android dev tools.
So how this works will depend on whether the Android device has a built-in scanner or is a software + camera based scanner. However, it seems that even installable software-based scanners can in theory be made to work with any application, since they run as a user-installed keyboard.
So you have to check the particular device. The answer is not "in all cases"; it depends on whether you are using an Android device with a built-in scanner or looking to use any Android phone as the scanner.

Screen Sharing with Qt WebGL (like VNC)

I just tried out Qt WebGL and was thrilled to see my app running in the browser without making any changes (other than starting it with -platform webgl)!
I would like to use WebGL for screen sharing, so that the app would still be usable on the device while also being able to interact with it through the browser. Is this somehow possible with the current platform plugin, or would it be possible to extend the platform plugin to support this in the future?
Qt WebGL streaming is intentionally limited to a single user per application. The reasons are mentioned here, in a presentation about Qt WebGL streaming:
Why single user?
Problem with user input
Problem with querying the GPU
We can improve security
However, I found a blog post presenting a solution: start multiple parallel processes of the Qt application, one per user, and then sync the state of all these processes using Qt Remote Objects. One of the processes would be the "master application", and the others would duplicate what it shows. The application's state has to include everything that influences its rendered content, including model content and window size.
A detailed recipe for this technique can be found in this article.
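As a rough, hedged sketch of that master/replica sync (C++ / Qt Remote Objects, using a dynamic replica): the class SharedState, the object name and the "local:appstate" URL are made up for illustration, and a real application would replicate every property that influences rendering. It needs the remoteobjects module and moc.

```cpp
// Hedged sketch (C++ / Qt Remote Objects) of the master/replica state sync idea.
// SharedState, the object name and the "local:appstate" URL are illustrative only;
// a real application would replicate every property that influences rendering.
// Requires the remoteobjects module and moc (Q_OBJECT).
#include <QtCore/QCoreApplication>
#include <QtCore/QUrl>
#include <QtRemoteObjects/QRemoteObjectNode>
#include <QtRemoteObjects/QRemoteObjectHost>
#include <QtRemoteObjects/QRemoteObjectReplica>
#include <QtRemoteObjects/QRemoteObjectDynamicReplica>

// State object owned by the master process; replicas mirror its properties.
class SharedState : public QObject
{
    Q_OBJECT
    Q_PROPERTY(int currentPage READ currentPage WRITE setCurrentPage NOTIFY currentPageChanged)
public:
    int currentPage() const { return m_page; }
    void setCurrentPage(int page)
    {
        if (page != m_page) {
            m_page = page;
            emit currentPageChanged(page);
        }
    }
signals:
    void currentPageChanged(int page);
private:
    int m_page = 0;
};

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    const bool isMaster = app.arguments().contains(QStringLiteral("--master"));

    QRemoteObjectHost host;                    // used only by the master process
    SharedState state;                         // the master's authoritative state
    QRemoteObjectNode node;                    // used only by replica processes
    QRemoteObjectDynamicReplica *replica = nullptr;

    if (isMaster) {
        // Master: publish the state object on a local socket.
        host.setHostUrl(QUrl(QStringLiteral("local:appstate")));
        host.enableRemoting(&state, QStringLiteral("SharedState"));
    } else {
        // Replica (e.g. the process started with "-platform webgl" for one browser
        // session): acquire the remote state and mirror whatever it reports.
        node.connectToNode(QUrl(QStringLiteral("local:appstate")));
        replica = node.acquireDynamic(QStringLiteral("SharedState"));
        QObject::connect(replica, &QRemoteObjectReplica::initialized, [replica]() {
            qDebug("master is on page %d", replica->property("currentPage").toInt());
        });
    }
    return app.exec();
}

#include "main.moc"
```

In this setup the master drives its state from its own UI, while each replica pushes the replicated property values into an identical UI of its own; user input on a replica can be forwarded back by invoking slots on the source object.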
There is also this section in a Qt blog post that confirms that this approach is possible:
By the way, there is an idea to complement streaming with an ability of mirroring as in some cases having the latter is more important.
Speaking about mirroring, I would like to mention our recent webinar [edit: link update by me] that we had together with Toradex. There you can see an interesting combination of WebGL streaming and Remote Objects, which allows you to implement mirroring functionality as of now already.
Within the webinar video linked above, the demonstration of mirroring / screen sharing starts here. This type of mirroring is even two-way, allowing the application to be operated from multiple screens simultaneously.
Unlike with "real" screen sharing, the mouse pointer would not be shared. You might, however, be able to track the mouse pointer position as a state property of the master application and then paint an "artificial" mouse pointer at that position in the client applications.

How to get and change the values of the projector lens system?

I am trying to write a Gatan DigitalMicrograph script to control the tilting of the incident electron beam before and after a specimen. I think that the values of the pre-specimen lens system can be read and changed using commands such as EMGetBeamTilt, EMSetBeamTilt and EMChangeBeamTilt. However, I don't know how to get or control the status of the post-specimen lens system, such as a projector lens. What command or code should be written in order to control the projector lens system?
It would be appreciated if you could share some wisdom. Thank you very much in advance.
Unfortunately, only a limited number of microscope hardware components can be accessed by DM-script via a generalized interface. The generalized commands communicate to the microscope via a software interface which is implemented by the microscope vendor, so that the exact behaviour of each command (i.e. which lenses are driven when a value is changed) lies completely within the control of the microscope software and not DM. Commands to access specific lenses or microscope-specific controls are most often not available.
All available commands have been officially supported and documented since GMS 2.3 (although many of them can also be found in earlier versions). You will find the complete list of commands in the F1 help documentation (on online systems).

Using Windows Tablet PC Input to implement handwriting recognition

I want to write an app (initially for Windows) that includes handwriting-to-text recognition. I want to use the Windows built-in Tablet PC Input. My question is: is there a way to capture the strokes as an image, send these to the OCR engine used by the Tablet PC Input, and get the recognised text back?
Or, are there any good open source handwriting libraries that could be used directly?
The primary development language is Qt.
I am not aware of any open source or free software libraries for handwriting recognition, so I wrote an adapter. My target was my tablet PC running Linux, but part of my solution can also be used directly on Windows, although you will need to adapt it to your needs.
You will need to read through the licenses for the components I used and validate your own use of them.
The source is available here: Ink2Text project
Part of this solution is a server which uses the XP Handwriting Recognition libraries to interpret the strokes which make up handwriting. As an aside, this does not use OCR - it uses connected graphs of the flow of the strokes.
Another complementary project provides a client handwriting widget: Stylus/Handwriting Input Panel. This is written in Java, and it's GPL3. It accepts the handwriting and sends it off to the server. Unless you wish to use it as is, it's of value solely to see the data format for the ink, although that's simple enough and you can probably deduce that with just the Ink2Text source code.
An earlier solution used the S/HIP with my MS Ink Server, which accepted input over regular network connections. That may also be useful depending on your architecture, but requires a running copy of Windows.
This system provides very good recognition of printed and cursive handwriting.
I will answer questions about it only in its associated SourceForge forums, so that others may benefit from the answers as well - please don't ask here.
Cheers,
Bret
I want to be wrong, but unfortunately there is no open-source offline handwriting recognition system available that comes even close to MS' or Apple's Ink.
On Windows you can play with Ink recognition (About Handwriting Recognition, Advanced Recognition Sample). A C++ interface is available, but it is not as well documented as the .NET implementation. So you will need to put in more effort and do a lot of research to achieve what you want.
For other systems (including Windows, too) there is a way to use Tesseract-OCR in your application; see Tesseract's base API. For better recognition quality, you may train Tesseract and use your own trained data.
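A minimal sketch of the Tesseract base API route (C++), assuming the image path and the stock "eng" traineddata as placeholders; handwriting will normally need custom trained data to give usable results.

```cpp
// Hedged sketch (C++): minimal use of Tesseract's base API to recognise text from
// an image of the captured strokes. "strokes.png" and the "eng" traineddata are
// placeholders; handwriting generally needs custom-trained data to work well.
#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>
#include <cstdio>

int main()
{
    tesseract::TessBaseAPI api;
    if (api.Init(nullptr, "eng") != 0) {        // nullptr = default tessdata path
        std::fprintf(stderr, "could not initialise tesseract\n");
        return 1;
    }

    Pix *image = pixRead("strokes.png");        // rendered image of the ink strokes
    api.SetImage(image);

    char *text = api.GetUTF8Text();             // run recognition
    std::printf("%s\n", text ? text : "");

    delete[] text;
    pixDestroy(&image);
    api.End();
    return 0;
}
```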
If you do not want to spend your time on the R&D tasks above, you can use paid solutions like the MyScript SDK, the WritePad SDK and so on...

Control System For Sensor Networks

I'm making a distributed sensor network. The basic architecture of my network is to have several slave nodes (up to about 10) reporting back to a master node on a regular basis.
I'm looking for a software framework that I can use for this; so far I have thought of:
CORBA
PubSubHubbub
XMTP
making my own
I have some basic requirements (like basic security and fault awareness).
Does anyone have any suggestions?
In specific answer to your question, TinyOS provides a lot of what you'll need.
There's quite a large body of academic work on getting these up and running, especially combining agent-based infrastructures with sensor networks -- take a look on Google Scholar for example.
There are also some very good links on Wikipedia.
Are you specifically interested in an OS to run on your sensors, or something at higher level that plugs into some sensor infra you already have? Are you intending to build your own kit, or work on something that already exists (e.g. BTNode)?
You can also use RL-ARM or FreeRTOS if you want to use microcontrollers for your project. For the network layer you can use lwIP.
There are many other libraries, both free and open source, in case you want to use ARM-based microcontrollers.
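As a rough, hedged sketch of that route (a FreeRTOS task using the lwIP netconn API, written as C-style C++): a slave node reports a reading to the master over UDP. The read_sensor() function, the master address 192.168.1.1 and port 5000 are assumptions made up for illustration.

```cpp
// Hedged sketch (FreeRTOS + lwIP): a slave-node task that reports a sensor reading
// to the master node over UDP once per second. read_sensor(), the master address
// and the port are placeholders; a real node would add framing, authentication
// and retry/fault handling to meet the security and fault-awareness requirements.
#include <string.h>

#include "FreeRTOS.h"
#include "task.h"
#include "lwip/api.h"
#include "lwip/ip_addr.h"

static int read_sensor(void)
{
    return 42;                                 // placeholder for the real ADC/driver read
}

static void vSensorReportTask(void *params)
{
    (void)params;

    ip_addr_t master;
    ipaddr_aton("192.168.1.1", &master);       // assumed master-node address

    struct netconn *conn = netconn_new(NETCONN_UDP);
    netconn_connect(conn, &master, 5000);      // assumed report port on the master

    for (;;) {
        int value = read_sensor();

        struct netbuf *buf = netbuf_new();
        void *payload = netbuf_alloc(buf, sizeof value);
        memcpy(payload, &value, sizeof value);
        netconn_send(conn, buf);               // fire-and-forget report to the master
        netbuf_delete(buf);

        vTaskDelay(pdMS_TO_TICKS(1000));       // report roughly once per second
    }
}

void start_sensor_reporting(void)
{
    xTaskCreate(vSensorReportTask, "report", configMINIMAL_STACK_SIZE + 256,
                NULL, tskIDLE_PRIORITY + 1, NULL);
}
```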
