How to encrypt patches on screen

I am wondering (out of curiosity) how to encrypt a chunk of pixels (e.g. a captcha) in a server application, such that a client cannot use any kind of pattern recognition (neural networks, etc.) to decrypt the pixels, but will still see the correct pixels on his/her screen. I have heard of techniques such as HDCP and I am wondering if there are any libraries to implement this. So my questions are:
Is HDCP the droid I am looking for, or are there other solutions?
Are there any libraries that help me implement this (in C++, Python, Go, Java, whatever)?
Is it possible to use this technique for various (small) patches of the screen (not fullscreen)?
Maybe it is even possible to encrypt/decrypt pixel patches with transparency?
Thank you for your help.

From your description I'm assuming you're talking about a server-client relationship across the internet here. In that case: No. Way.
In order to display anything on screen, something has to decrypt/decode the data on the client and then send it to the screen. That decryption/decoding would be happening in the browser, on the CPU/GPU, and the decoded image would then be stored in memory. From there it's available to any other process, including neural networks and whatnot.
What you would need for this is some way to send encrypted data over the internet directly to the monitor, where it would be decrypted and immediately displayed. You would also somehow need to keep the implementation details secret, so nobody could build a "fake monitor" to do the decryption elsewhere and get at the data that way. That's fundamentally infeasible, even more so given the open, standards-based protocols and file formats of the internet.

Related

Is a user space on an MCU possible?

I'm currently researching the possibility of making some kind of user space on an MCU. "User space" is probably not the right term here.
What we want is a region of memory that a user can freely use to add code and execute it. The rest of the memory will be used for the bootloader and some kind of library/package that the user space can "use".
Normally you would just compile the whole thing, including the (precompiled) library and the user code, and then flash it to the MCU. But we are afraid that the library could be reverse engineered, so we want some way to "fix" the library on the MCU to at least make it harder to reverse engineer.
When I try to research this subject I only get results about operating systems, so "user space" is probably the wrong term. I'm struggling to find more information about this. Is this possible, and does it even make sense to do something like this?
The MCU in question has a Cortex-M4 core.

Encrypt on server, decrypt on client

I'm building a simple trivia game that has "hangman"-style clues (where letters are revealed as the player asks for hints). I don't want to flat-out send the answer with the question (any user with sufficient smarts could figure it out); rather, I'd like to encrypt answers on the server and decrypt them on the client. Security isn't of huge importance; I just want to make the process more trouble than it's worth for players. I was wondering if anyone could recommend a strategy for doing this?
A simple approach, which might be sufficiently difficult for most users, would be to send the answer and encryption key to the web client (as hidden form fields), and use Javascript to decrypt it on the fly (inside the browser). A simple exclusive-or'ing of the answer string characters with the key string should be sufficient to "shroud" the answer without requiring large amounts of crypto processing on the client side. Using more than one key string might increase the difficulty of cracking it, too.
I'm assuming that you don't want to implement full commercial-grade crypto on the client side, and also assuming that you only want to hide the answers for a few minutes at most.
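
A minimal sketch of that XOR shrouding, in Python for brevity (the answer and key strings are placeholders; the client side would mirror the same loop in Javascript):

    # XOR "shrouding": applying the same key twice recovers the original text.
    def xor_shroud(text: str, key: str) -> bytes:
        return bytes(ord(c) ^ ord(key[i % len(key)]) for i, c in enumerate(text))

    def xor_unshroud(data: bytes, key: str) -> str:
        return "".join(chr(b ^ ord(key[i % len(key)])) for i, b in enumerate(data))

    shrouded = xor_shroud("MOCKINGBIRD", "trivia")  # send to the client, e.g. hex-encoded
    assert xor_unshroud(shrouded, "trivia") == "MOCKINGBIRD"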

QR code decoding on a uC

I am working on a project that uses QR codes to check in guests at an event. I intended to implement it as a mobile app on Android, but my professor requires a hardware element to the project. So my questions are:
Can I decode a QR code image on a microcontroller with a CMOS camera, and which one is recommended?
If not, is it possible to use a CMOS camera with a microcontroller to take the picture and send it to a PC to do the decoding, and which microcontroller is recommended?
Any other suggestions will be appreciated.
I wouldn't try to decode QR codes with anything less powerful than an ARM.
Ad 1.
Of course you can, but, as I said, I wouldn't try it on anything less powerful than an ARM (unless you're a C ninja and can fit into, say, an AVR for this task).
Decoding a QR code itself isn't that hard, and I think you'll be able to write it yourself (or use an existing library).
Ad 2.
You'll need some connectivity to do that. There are many Bluetooth, Ethernet and WLAN boards around (in my experience, the best choice may be Bluetooth, since you may get away without implementing a network stack).
Useful link.
Decoding QR codes is relatively easy as barcodes go. You can use source code from the ZXing library, running on the server side (it's primarily Java) to do the decoding. Decoding is "fast"; on the original Android (ARM7) devices it would still decode in about 100ms.
But I think your question is about image quality. I am not familiar with the output of CMOS sensors, but for QR codes, you don't need color data and you don't need much resolution (240x240 works for most QR codes). If anything the issue is focus.
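
If the decoding ends up on the PC side, a minimal sketch of it in Python, assuming the pyzbar bindings rather than ZXing (a different library, used here purely for illustration; "frame.png" is a placeholder for an image received from the microcontroller):

    # Decode a QR code from an image file using pyzbar + Pillow
    # (pip install pyzbar pillow); "frame.png" is a placeholder.
    from PIL import Image
    from pyzbar.pyzbar import decode

    for symbol in decode(Image.open("frame.png")):
        print(symbol.type, symbol.data.decode("utf-8"))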

How to prevent the user from decrypting a data file while the program can read it freely?

I want to ask if there is a mature model for doing this.
Suppose I want to deliver a program and a sensitive data file to a user. The program is able to read any data stored in the data file, but the user is not allowed to easily break into the data file. The data file will be encrypted with a standard algorithm such as AES. Now the problem becomes how to manage the key. Putting the key in the program seems to be a bad idea, but what else can I do? Apparently I can't give the key to the user directly.
There is no way to do this securely, i.e. to really prevent the user from reading the data. As long as they have the data and the program that can read it, anyone competent with a disassembler will be able to figure out how the program reads it and do the same thing. Or, even easier, they could let the program do it and then pull the decrypted version out of its memory.
Having said that, if you just want to prevent the average user from doing it, hardcoding the key in your source code should be fine. :) Just be realistic about the level of protection this provides.
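
To illustrate that level of protection, a minimal sketch of the hardcoded-key approach, assuming Python's cryptography package (the key bytes and file name are placeholders, and anyone who inspects the program can recover them):

    # Decrypt a data file with a key embedded in the program
    # (pip install cryptography). EMBEDDED_KEY and "data.enc" are
    # placeholders; this deters casual users, not a disassembler.
    from cryptography.fernet import Fernet

    EMBEDDED_KEY = b"0123456789abcdefghijklmnopqrstuvwxyzABCDEFG="

    def read_data_file(path: str = "data.enc") -> bytes:
        with open(path, "rb") as f:
            return Fernet(EMBEDDED_KEY).decrypt(f.read())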
Does it have to be pure software? If not, you could look at a solution which does decryption and storage of the key on a hardware device, e.g. a USB dongle.
You can also potentially prevent the whole problem by having your software retrieve the data from a web service instead of a data file. In this case you can control access to the data much more tightly (i.e. who gets how much of what, and when). This might or might not work for your application.
Otherwise as others pointed out, there is no known good pure software solution.
There is no 100% safe solution to this because at some point you have to have the key loaded into memory so that de/encryption can take place and a savvy-enough hacker will be able to capture it. The best you can do is to make it very difficult to capture and to mitigate exposure to data (by limiting access as much as possible) if the key is compromised.
As far as how to make it safer... you could use a combined key, made up of something stored in the program and something derived from the user's passcode.
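
A minimal sketch of that combined-key idea, using only Python's standard library (the embedded secret, salt, and iteration count are placeholder values):

    # Derive a key from an embedded program secret plus the user's
    # passcode; all constants below are placeholders.
    import hashlib

    EMBEDDED_SECRET = bytes.fromhex("00112233445566778899aabbccddeeff")
    SALT = b"app-specific-salt"

    def combined_key(passcode: str) -> bytes:
        user_part = hashlib.pbkdf2_hmac("sha256", passcode.encode(), SALT, 200_000)
        # Hash both halves together so neither alone is enough to rebuild the key.
        return hashlib.sha256(EMBEDDED_SECRET + user_part).digest()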
Is your expected user determined? Are they skilled enough to reverse the application or the key? If the user is just a generic desktop user, you could probably implement a partial key using some general encryption, just to make the key non-obvious; beyond that, a determined individual will be able to reverse most simple means of encrypting keys and data.
A DVD Jon conundrum, eh? Why is having a key in the program bad? You could have a super-obscured function which computes it reliably once. Someone with a disassembler and a debugger can break your key given enough time, IMO.

Implement IP camera

We have a device that has an analog camera. We have a card that samples it and digitizes it. This is all done in DirectX. At this point in time, replacing hardware is not an option, but we need to code such that we can see this video feed in real time regardless of any hardware or underlying operating system changes that occur in the future.
Along this line, we've chosen Qt to implement a GUI to view this camera feed. However, if we move to Linux or another embedded platform in the future and change other hardware (including the physical device where the camera/video sampler lives), we will need to change the camera display software as well, and that's going to be a pain because we need to integrate it into our GUI.
What I proposed was migrating to a more abstract model where data is sent over a socket to the GUI, and the video is displayed live after being parsed from the socket stream.
First, is this a good idea or a bad idea?
Secondly, how would you implement such a thing? How do video samplers usually provide usable output? How can I push this output over a socket? Once I am on the receiving end parsing the output, how do I know what to do with it (as in, how to get the output to render)? The only thing I can think of would be to write each sample to a file and then display the contents of the file every time a new sample arrives. This seems like an inefficient solution to me, if it would work at all.
How do you recommend I handle this? Are there any cross-platform libraries available for such a thing?
Thank you.
Edit: I am willing to accept suggestions for something different than what is listed above.
Have you looked at QVision? It is a Qt based framework for managing video and video processing. You don't need the processing, but I think it will do what you want.
Anything that duplicates the video stream is going to cost you in performance, especially in an embedded space. In most situations for video, I think you're better off trying to use local hardware acceleration to blast the video directly to the screen. With some proper encapsulation, you should be able to use Qt for the GUI surrounding the video, and have a class that is platform specific that you use to control the actual video drawing to the screen (where to draw, and how big, etc.).
Edit:
You may also want to look at the Phonon library. I haven't looked at it much, but it appears to support showing video that may be acquired from a range of different sources.
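
For reference, a minimal sketch of the socket model the question proposes: raw frames with a length-prefixed header over TCP, in Python for brevity (the framing convention is an assumption, not an established protocol):

    # Push raw video frames over TCP with a 4-byte big-endian length prefix.
    import socket
    import struct

    def send_frame(sock: socket.socket, frame: bytes) -> None:
        sock.sendall(struct.pack("!I", len(frame)) + frame)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("socket closed mid-frame")
            buf += chunk
        return buf

    def recv_frame(sock: socket.socket) -> bytes:
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)

The receiver would still need to know the pixel format and frame dimensions to render anything; those could travel in a small handshake message when the connection opens.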
