How can I make avatar hands with Leap Motion?

I downloaded and imported the 'Avatar Hand Controller' for Leap Motion, and I want to use it as a starting point for my own avatar hand controller. So I downloaded another avatar model and attached the same scripts from 'Avatar Hand Controller' to my avatar's hands and fingers. The hands of the original avatar work fine, but mine don't, as you can see in the picture: the hand itself is recognized, but the arm stretches out like a monster's.
How can I solve this? Please help me.

If you are using a commercial Unity asset, you should ask for support from that developer.
The crux of the problem is that you (or the scripts involved) are applying translation to the hands, which causes this sort of stretching. You often only want to apply rotations from the Leap Motion data, not translation, but it really depends on how you are animating the avatar. For example, you could implement an IK solution to move the avatar's elbows to the Leap Motion elbow position and then use rotations from there -- or you could use IK to set the hand position and not use the Leap arm data at all. It is a fairly involved task.
There is some work to that end here, but I don't know how finished it is.
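If you just want to see the rotation-only idea in code, here is a minimal sketch. It assumes you have a Leap-driven hand rig whose bone transforms you can reference; the component and field names are hypothetical, not part of the Leap SDK.

```csharp
// Minimal sketch (hypothetical component, not part of the Leap SDK):
// drive the avatar's bones with rotation only, leaving positions alone
// so the avatar's own bone lengths are preserved and nothing stretches.
using UnityEngine;

public class RotationOnlyHandDriver : MonoBehaviour
{
    public Transform[] trackedBones; // bones on the Leap-driven hand rig
    public Transform[] avatarBones;  // the matching bones on your avatar

    void LateUpdate()
    {
        for (int i = 0; i < avatarBones.Length && i < trackedBones.Length; i++)
        {
            // Copy rotation but NOT position: applying translation to
            // mid-chain bones is what causes the "monster arm" stretching.
            avatarBones[i].rotation = trackedBones[i].rotation;
        }
    }
}
```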

Related

Creating a navmesh in A-Frame using aframe-inspector-plugin-recast

I'm using Don McCurdy's A-Frame Inspector plugin to try to build a navmesh for some "stairs" that I've constructed essentially by sticking a bunch of box primitives together. You can see a demo at:
http://webvr.decodingsteve.com/stair-nav/
For whatever reason, though, I can't get the navmesh to generate, and the error message it produces is completely opaque. I tried exporting my "stairs" as a single glTF (figuring a single object might work better), but that didn't seem to have any effect either.
So, just in case anyone else runs into a similar problem: it turns out I was making a silly mistake. My scene only had the blocks I was using for stairs in it, with no flat surface (like a plane) underneath to extend the navmesh onto. Adding a plane for the floor solved the issue and allowed the navmesh to be created.

ZXing.Net.Mobile: customising the scannable area in Xamarin.Forms

There is an overlay feature available in the library. The overlay feature is nice, but functionality-wise it is just there to mask the UI.
Is there a way to customize the scannable area, i.e. reduce the area of the screen that is actually scanned? If this can't be done, there is no point in having the overlay, right? The overlay does not really do anything.
In this Android app, which uses the core ZXing library developed in Java by the ZXing team (https://play.google.com/store/apps/details?id=com.google.zxing.client.android), you can see that if the barcode/QR code lies outside the scanning area, it does not get processed. That is what I am looking for. Is this possible?
According to this GitHub issue (github.com/Redth/ZXing.Net.Mobile/issues/87), the author of the library says:
Unfortunately there's not a great short fix that I can think of for this scenario. Yes I could only check a certain region, but what would the region be set to? It makes sense to use the non-gray areas if you're using the default overlay, but if you use a custom overlay, you might not want the same region checked.
Just need some time to implement it (on ALL platforms - which is what takes so much effort).
It has been more than a year since this issue was raised on GitHub, so I don't think it will be implemented anytime soon.
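In the meantime, one possible workaround is to let ZXing decode the full frame and simply discard results that land outside your overlay. Here is a sketch; it relies on ZXing's Result.ResultPoints, and mapping your overlay rectangle into preview-image coordinates is left to you (and will differ per platform).

```csharp
// Workaround sketch: ZXing still decodes the whole frame, but you can
// ignore any result whose ResultPoints fall outside your overlay region.
// The region is in preview-image coordinates; converting your overlay's
// screen rectangle into those coordinates is an assumption left to you.
using ZXing;

public static class ScanRegionFilter
{
    public static bool IsInsideRegion(Result result,
        float left, float top, float right, float bottom)
    {
        if (result == null || result.ResultPoints == null)
            return false;

        foreach (var p in result.ResultPoints)
        {
            // Reject the scan if any detected point lies outside the region.
            if (p.X < left || p.X > right || p.Y < top || p.Y > bottom)
                return false;
        }
        return true;
    }
}
```

You would call this from the scan callback and drop results it rejects. Note this does not reduce the processing cost, which is the part the author says needs per-platform work; it only makes the overlay honest about what gets accepted.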

Vuforia to recognize logos inside a shopping center

A client asked me about using Vuforia to recognize logos on shop windows. Basically, they want to use logos as QR codes.
Is this idea viable? Will it work well? Can you tell me about some alternatives to Vuforia for this?
Recognizing logos is hard.
Basically all image recognition algorithms rely on the same principle: trying to recognize "interest points" of the image. These interest points can for example be blobs or corners; in short, we want to look for places in an image where "things happen", compared to (for example) a large solid area painted in the same color where there is not much information to grab.
This comes down to trying to recognize discriminant "details" of the image.
When applied to logos, this method tends to fail because logos often don't have enough such "details". Take the Nike logo, for example: if corner detection is applied to it, it will only find two corners (the two ends of the swoosh). Blob detection will probably give no result at all. This is an extreme example, as the Nike logo is really simple, but even on more complex logos there will often not be enough details for recognition to work.
As for Vuforia: it works in exactly the same way, and its web interface (the Vuforia Target Manager) is very clear about it: when you upload an image for recognition, if there are not enough details in it, it will either warn you that results may be poor or simply reject the image.
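If you want to sanity-check a logo yourself before uploading it, you can count its corner-type interest points. A rough sketch, assuming the OpenCvSharp binding for OpenCV (the parameter values are illustrative, not tuned):

```csharp
// Rough gauge of how many corner-type interest points a logo offers.
// Assumes the OpenCvSharp binding for OpenCV; thresholds are illustrative.
using System;
using OpenCvSharp;

class LogoFeatureCheck
{
    static void Main(string[] args)
    {
        using (var logo = Cv2.ImRead(args[0], ImreadModes.Grayscale))
        {
            // maxCorners=500, qualityLevel=0.01, minDistance=5,
            // no mask, blockSize=3, Harris detector off, k=0.04
            Point2f[] corners = Cv2.GoodFeaturesToTrack(
                logo, 500, 0.01, 5, null, 3, false, 0.04);

            // A simple mark like the Nike swoosh yields only a handful of
            // corners; a good recognition target yields hundreds.
            Console.WriteLine($"Detected {corners.Length} corner features.");
        }
    }
}
```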
To conclude: you can run some tests (that's still the best way to be sure), but I wouldn't expect great results. It will probably work for detailed logos and fail on simpler ones.
Hope this helps!

What technology is being used for photofunia.com?

Any idea what backend technology might be in use for sites like photofunia.com and loonapix.com, which merge images to create effects? Is it Flash/Flex or OpenGL?
loonapix is doing server-side image processing to create the effect. If you look at the cloud-over-the-ocean one, it looks like they just run a blur (perhaps a Gaussian blur), remove the color through desaturation, colorize it to blue, and then overlay that on a stock image. This is a total guess, but it feels like they might have done this with Ruby on Rails; if so, they probably use this: http://rmagick.rubyforge.org/
photofunia is also server-side. I also noticed that it uses a lot of face recognition to automatically place the face; for that, they may be using OpenCV. Otherwise, it's mostly the same thing as loonapix: image processing and compositing on the server side.
You could use many different image processing libraries to do that (ImageMagick or PIL). I work for a company that makes a .NET imaging SDK that can do it -- Atalasoft.
A few years ago, we posted this sample to show how to use blurs and noise generators to create random clouds. You'd need to do something like that except incorporate a photo into the process.
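To make that guessed pipeline concrete, here is a sketch using Magick.NET, a .NET binding for ImageMagick. The file names and tint values are made up for illustration; nobody outside those sites knows their actual processing.

```csharp
// Sketch of the guessed pipeline: blur, desaturate, tint blue, then
// composite onto a stock image. Uses the Magick.NET ImageMagick binding;
// file names and parameter values here are made up for illustration.
using ImageMagick;

class CloudEffect
{
    static void Main()
    {
        using (var photo = new MagickImage("input.jpg"))
        using (var stock = new MagickImage("ocean.jpg"))
        {
            photo.Blur(0, 8);                              // Gaussian-style blur
            photo.Modulate(new Percentage(100),            // brightness unchanged
                           new Percentage(0),              // drop saturation
                           new Percentage(100));           // hue unchanged
            photo.Colorize(new MagickColor("#3060c0"),     // tint toward blue
                           new Percentage(40));
            stock.Composite(photo, Gravity.Center, CompositeOperator.Over);
            stock.Write("output.jpg");
        }
    }
}
```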

Does the Flex rich text control have any practical value?

Since the control emits bizarre, non-standard HTML, I'm wondering if it has any practical value.
The control emits font tags!
How are others dealing with it? Do you do some sort of RegEx replace on the text?
Or am I missing something?
Does it have a value? Yes. Is it practical? That depends. How much work are you willing to do to get something useful out of it?
I had to use the RTC to create a chat window for a chat app that was built on Jabber. I wound up having to parse every line of every chat message, check its textwidth, GREP out the bogus HTML (TextFormat and Font tags) while leaving the styling tags (bold, italic, etc.) then shift it onto a queue that would scroll upwards as new messages were sent and received. I had to keep an onscreen buffer of 200 of these lines (taking care not to delete partial messages at the end of the queue). I also had to plot where the emoticons — :) ;) :-) and the like — were located, find out their exact locations, and then draw the emoticon images onto a sync-scrolled Canvas that exactly matched the position of the chat output window. All this while keeping the text selectable and letting people copy and paste it, complete with emoticon tokens that reverted to whatever text smiley upon pasting into the input field.
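That "GREP out the bogus HTML" step boils down to a regex replace. Here is an illustrative sketch, shown in C# rather than the ActionScript the original work used; the tag list covers the usual offenders and is an assumption, so adjust it to whatever your control actually emits.

```csharp
// Illustrative cleanup in the spirit of the answer: strip the <font> and
// <textformat> tags the control emits while leaving simple styling tags
// (<b>, <i>, ...) intact. Shown in C# for illustration; the original
// work was ActionScript, and the tag list is an assumption.
using System;
using System.Text.RegularExpressions;

static class RtcHtmlCleaner
{
    static readonly Regex BogusTags = new Regex(
        @"</?(font|textformat)\b[^>]*>", RegexOptions.IgnoreCase);

    public static string Clean(string html) => BogusTags.Replace(html, "");

    static void Main()
    {
        var raw = "<textformat leading=\"2\"><font size=\"10\"><b>hi</b></font></textformat>";
        Console.WriteLine(Clean(raw)); // prints: <b>hi</b>
    }
}
```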
Was this a lot of work? You bet it was. Was the product ultimately useful? I like to think so. It was pretty cool, in fact. And as it was one of the first Flex projects I ever worked on, it taught me a lot.
Do I wish Adobe supported real, non-gimped HTML? Absolutely.
Short answer: Getting something out of the RTC is a bitch, but probably still faster than doing anything similarly useful in Java or C++. YMMV.
