Detecting VR capability on the Quest in A-Frame

I tried detecting the Quest headset with aframe.utils, but it reports that no headset is connected,
using this code: https://xr.dadako.com/famicase/dc.html
Is there a way to detect the Quest?

OK, I'll answer my own question here, in case someone has the same issue:
device detection through navigator.xr needs to be done asynchronously; otherwise it will fail.
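Here's a minimal sketch of what I mean, using the plain WebXR isSessionSupported() call rather than aframe.utils. The function name and log messages are just illustrative:

```ts
// Minimal sketch of the asynchronous check; the function name and log
// messages are illustrative. (The cast to `any` is only because WebXR type
// definitions may not be installed; in plain JavaScript use navigator.xr directly.)
async function detectVRHeadset(): Promise<boolean> {
  const xr = (navigator as any).xr;
  if (!xr) {
    console.log('WebXR is not available in this browser');
    return false;
  }
  // isSessionSupported() returns a Promise, so it must be awaited;
  // querying it synchronously is what makes detection appear to fail.
  const supported: boolean = await xr.isSessionSupported('immersive-vr');
  console.log(supported ? 'VR-capable headset detected' : 'No VR headset detected');
  return supported;
}

window.addEventListener('load', () => { void detectVRHeadset(); });
```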


Metastability example in Verilog

I'm learning about metastability and using a 2-FF synchronizer, and I'm trying to figure out what it actually means.
Almost every example I find is about a button click causing metastability, with a 2-FF synchronizer needed to solve it.
Is there any other Verilog example code that I can test on my own computer?
I think it would be more understandable if I could compare a 1-FF and a 2-FF synchronizer by their results.
Thanks, all.
Metastability is the result of analog behavior. You cannot simulate metastability problems with a digital event-driven simulator; you can only verify that the solution doesn't cause other functional issues. CDC/RDC tools can help statically (i.e., without dynamic simulation) catch design flaws where metastability might arise and make sure they have been properly corrected.
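If you still want a feel for the "compare 1-FF and 2-FF by result" question, one option is to plug numbers into the standard MTBF estimate, MTBF = exp(t_res / tau) / (T0 * f_clk * f_data), rather than simulate. A rough sketch, with invented ballpark constants (not from any datasheet):

```ts
// Back-of-the-envelope MTBF comparison for a 1-FF vs. a 2-FF synchronizer.
// All device constants below are made-up illustrative numbers; only the
// relative difference between the two cases matters here.
const tau = 50e-12;    // assumed regeneration time constant of the FF (s)
const T0 = 100e-12;    // assumed metastability window (s)
const fClk = 100e6;    // destination clock frequency (Hz)
const fData = 1e6;     // toggle rate of the asynchronous input (Hz)
const tClk = 1 / fClk;

// t_res is the time the possibly-metastable output has to resolve before use.
const mtbfSeconds = (tRes: number) => Math.exp(tRes / tau) / (T0 * fClk * fData);

// 1 FF: downstream logic samples the output in the same cycle, so assume only
// a small slice of the clock period is left for resolution.
console.log('1-FF MTBF:', mtbfSeconds(0.05 * tClk).toExponential(2), 's');

// 2 FFs: the second flop gives roughly one extra full clock period to resolve.
console.log('2-FF MTBF:', mtbfSeconds(1.05 * tClk).toExponential(2), 's');
```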

Hardware list for wardriving (pen testing) with an RPi 2 and a laptop

So basically I've been researching this online all day, and I only seem to come across specific setups that people have built for their own needs rather than a generic list of the hardware required.
What I want to do, first using my Raspberry Pi 2 running Raspbian and second using a laptop running Kali, is penetration testing along with some extras.
What I am looking for is a list of the hardware I need (other than the RPi 2 in the first case and the laptop in the second) that will enable me to sniff out Wi-Fi signals and attempt to get onto the network. I believe the general name for this is wardriving.
I know that I need a portable power supply for the RPi 2 and a screen of some sort (I want a small screen from which I can see the RPi GUI desktop, not just a terminal), so any suggestions or examples of those would be appreciated.
Where I get confused is the Wi-Fi antenna I need. From what I understand, it needs to be one that can monitor as well as connect to Wi-Fi, but I don't really know of any examples, or what the actual difference is between it and a normal USB Wi-Fi stick.
I'm also not sure what else I need beyond that to successfully accomplish my stated goal.
Any further help would be greatly appreciated, and I think it would be beneficial to anyone else who's looking to get started doing the same thing.
Any extra information would be good too. What I mean is, while doing my research I saw some people mentioning radio attachments, GPS attachments, etc., but I'm not really sure whether they're necessary to start with or things that can be added further down the road with experience.
Thanks.
OK, so I seem to have found a good article that answers at least the general part of my question. It can be found here:
http://lewiscomputerhowto.blogspot.ca/2014/06/how-to-hack-wpawpa2-wi-fi-with-kali.html?m=1
It also gives tips on the pen-testing process.

QR code decoding on a uC

I am working on a project that uses QR codes to check in guests at an event. I intended to implement it as a mobile app on Android, but my professor requires a hardware element in the project. So my questions are:
1. Can I decode a QR-code image on a microcontroller with a CMOS camera, and if so, which one is recommended?
2. If not, is it possible to use a CMOS camera with a microcontroller to take the picture and send it to a PC for decoding, and which microcontroller is recommended?
Any other suggestions will be appreciated.
I wouldn't try to decode a QR code with anything less powerful than an ARM.
Ad 1.
Of course you can, but, as I said, I wouldn't try it on anything less powerful than an ARM (unless you're a C ninja and can fit this task into, say, an AVR).
Decoding a QR code itself isn't that hard, and you should be able to write it yourself (or use an existing library).
Ad 2.
You'll need some connectivity to do that. There are many Bluetooth, Ethernet, and WLAN boards around (in my experience, the best choice may be Bluetooth, since you may get away without implementing a network stack).
Useful link.
Decoding QR codes is relatively easy as barcodes go. You can use source code from the ZXing library, running on the server side (it's primarily Java) to do the decoding. Decoding is "fast"; on the original Android (ARM7) devices it would still decode in about 100ms.
But I think your question is about image quality. I am not familiar with the output of CMOS sensors, but for QR codes, you don't need color data and you don't need much resolution (240x240 works for most QR codes). If anything the issue is focus.
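If you do go the "decode on the PC" route, here is a rough sketch of what the decode side could look like using the @zxing/library TypeScript port of ZXing. The helper name and input buffer are assumptions, and the exact RGBLuminanceSource constructor may vary between versions, so treat this only as a starting point:

```ts
import {
  BarcodeFormat,
  BinaryBitmap,
  DecodeHintType,
  HybridBinarizer,
  MultiFormatReader,
  RGBLuminanceSource,
} from '@zxing/library';

// decodeQr is a made-up helper: `gray` is an 8-bit luminance buffer of
// width*height pixels, e.g. a frame the microcontroller pushed to the PC
// over Bluetooth or serial.
function decodeQr(gray: Uint8ClampedArray, width: number, height: number): string {
  const source = new RGBLuminanceSource(gray, width, height);
  const bitmap = new BinaryBitmap(new HybridBinarizer(source));
  const hints = new Map([[DecodeHintType.POSSIBLE_FORMATS, [BarcodeFormat.QR_CODE]]]);
  return new MultiFormatReader().decode(bitmap, hints).getText();
}
```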

Loopback problem on a laptop

We are developing a video chat application in Flex. It works well on desktop systems, but when running on a laptop the sound is looped back and becomes a loud noise. Please help me fix the problem.
I can't speak to your problem from a coding standpoint. But from a sound standpoint, feedback is created when the microphone picks up the sound coming out of the speaker that is amplifying it. So it goes kind of like this:
Speak into a Microphone
Your voice comes out of the speaker
The microphone picks up your voice from the speaker
Your "speaker" voice is now coming out of the "speaker" a second time.
The microphone picks up the speaker noise, etc..
When setting up sound for live bands, it is best to get the microphone as far away from the monitors as possible. Some people even suggest making sure the "vocal" mic is perpendicular to the speaker.
I suspect that there is nothing, code-wise, you can do to solve this problem, if this is in fact your problem.
Try a headset mic on the laptop. Does that solve the problem?

Implementing an IP camera

We have a device that has an analog camera, and a card that samples and digitizes it. This is all done in DirectX. At this point in time, replacing hardware is not an option, but we need to write the code such that we can see this video feed in real time regardless of any hardware or underlying operating-system changes that occur in the future.
Along this line, we've chosen Qt to implement a GUI to view this camera feed. However, if we move to Linux or another embedded platform in the future and change other hardware (including the physical device where the camera/video sampler lives), we will need to change the camera display software as well, and that's going to be a pain because we need to integrate it into our GUI.
What I proposed was migrating to a more abstract model where data is sent over a socket to the GUI and the video is displayed live after being parsed from the socket stream.
First, is this a good idea or a bad idea?
Secondly, how would you implement such a thing? How do the video samplers usually give usable output? How can I push this output over a socket? Once I am on the receiving end parsing the output, how do I know what to do with the output (as in how to get the output to render)? The only thing I can think of would be to write each sample to a file and then to display the contents of the file every time a new sample arrives. This seems like an inefficient solution to me, if it would work at all.
How do you recommend I handle this? Are there any cross-platform libraries available for such a thing?
Thank you.
Edit: I am willing to accept suggestions for something different from what is listed above.
Have you looked at QVision? It is a Qt-based framework for managing video and video processing. You don't need the processing, but I think it will do what you want.
Anything that duplicates the video stream is going to cost you in performance, especially in an embedded space. In most situations for video, I think you're better off trying to use local hardware acceleration to blast the video directly to the screen. With some proper encapsulation, you should be able to use Qt for the GUI surrounding the video, and have a class that is platform specific that you use to control the actual video drawing to the screen (where to draw, and how big, etc.).
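As a rough illustration of that encapsulation idea only (the project itself is C++/Qt, and every name below is invented for the sketch), the boundary could look something like this:

```ts
// Shape of the abstraction only. The GUI code talks to VideoSurface, and each
// platform supplies its own hardware-accelerated implementation behind it.
interface VideoSurface {
  setGeometry(x: number, y: number, width: number, height: number): void;
  presentFrame(pixels: Uint8Array, width: number, height: number): void;
  release(): void;
}

// Hypothetical stub wrapping the existing DirectX path; a Linux build would
// provide a different class behind the same interface.
class DirectXSurface implements VideoSurface {
  setGeometry(x: number, y: number, width: number, height: number): void {
    // position/resize the render target inside the surrounding GUI
  }
  presentFrame(pixels: Uint8Array, width: number, height: number): void {
    // hand the captured sample straight to the accelerated renderer
  }
  release(): void {
    // free device resources
  }
}
```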
Edit:
You may also want to look at the Phonon library. I haven't looked at it much, but it appears to support showing video that may be acquired from a range of different sources.
