Azure Kinect - Setting World Scale

I'm trying to get body tracking to register on small action figures that are about 12" tall. I've tried other depth sensors like the ZED 2 and D435i, and their skeletal SDKs recognize the toys as "humanoid" and attempt to track the skeleton.
Is it possible to change the world scale or a filtering option so that the Azure Kinect or Kinect v2 does not ignore the toys?

I reached out to Microsoft and this was their response:
" AK Body Tracking has been tuned to process human’s from 7-8 years and up in age. The action doll is being filtered out as too small. They currently don’t expose the tuning parameters. They will considering exposing the tuning parameters but they have nothing to announce at this time. "
Unfortunately, it's a no-go at the moment.

Related

Page speed does not pass even after scoring 90+ on PageSpeed Insights

My webpage scores 90+ on the desktop version, yet its Field Data result shows "does not pass", while the same page on mobile, with a 70+ speed score, is marked as "Passed".
What are the criteria here, and what else is needed to pass the test on the desktop version? Here is the page on which I'm performing the test: Blog Page
Note: this page has been scoring 90+ for about 2 months. Moreover, if anyone can guide me on improving page speed on mobile in WordPress using the DIVI builder, that would be helpful.
Although 6 items show in "Field Data", only three of them actually count towards your Core Web Vitals assessment:
First Input Delay (FID)
Largest Contentful Paint (LCP)
Cumulative Layout Shift (CLS)
You will notice that they are denoted with a blue marker.
On mobile, all three of them pass, despite a lower overall performance score.
However, on desktop your LCP occurs at 3.6 seconds on average, which is not a pass (it needs to be within 2.5 seconds).
That is why you do not pass on desktop but do on mobile.
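To make the pass criteria concrete, here is a minimal sketch (the thresholds are Google's documented Core Web Vitals limits, assessed at the 75th percentile of field data; the function and sample values are just for illustration):

```typescript
// Google's published Core Web Vitals thresholds for a "pass",
// assessed at the 75th percentile of field data.
const THRESHOLDS = {
  LCP: 2500, // Largest Contentful Paint, in milliseconds
  FID: 100,  // First Input Delay, in milliseconds
  CLS: 0.1,  // Cumulative Layout Shift, unitless
};

// All three metrics must be within their thresholds to pass.
function passesCoreWebVitals(p75: { LCP: number; FID: number; CLS: number }): boolean {
  return p75.LCP <= THRESHOLDS.LCP
    && p75.FID <= THRESHOLDS.FID
    && p75.CLS <= THRESHOLDS.CLS;
}

// The desktop page in question: a 3.6 s LCP fails the 2500 ms limit
// even if FID and CLS are fine.
console.log(passesCoreWebVitals({ LCP: 3600, FID: 20, CLS: 0.05 })); // false
```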
At a glance, this appears to be something with your font causing a late swap (sorry, I'm not at a PC to test properly). I could be wrong; as I said, I haven't had a chance to test, so you need to investigate using Dev Tools etc.
Bear in mind that the score you see (95+ on desktop, 75+ on mobile) comes from a synthetic test performed each time you run PageSpeed Insights and has no bearing on your Field Data or Origin Summary.
The data in "Field Data" (and Origin Summary) is real-world data gathered from browsers, so the two can be far apart if you have a problem at a particular screen size (for example) that is not picked up in a synthetic test.
Field Data passes or fails a website based on historical data:
"Over the previous 28-day collection period, field data shows that this page does not pass the Core Web Vitals assessment."
So if you have made recent changes to your website to improve your score, you need to wait at least a month so that Field Data shows results based on the newer data.
https://developers.google.com/speed/docs/insights/v5/about#distribution

Struggling to get CLS under 0.1 on mobile. Can't reproduce it in tests

I'm trying to optimize the overall PageSpeed of this page, but I can't get the CLS under 0.1 on mobile. I really don't know why, as I use critical CSS, page caching and font preloading, and I can't reproduce the behaviour in tests.
https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.birkengold.com%2Frezept%2Fselbstgemachte-zahnpasta
Tested with a simulated Galaxy S5 on 3G Fast.
https://www.webpagetest.org/result/210112_DiK9_256ca61d8f9383a5b927ef5f55644338/
In no scenario do I get anywhere near 0.1 CLS.
Field Data and Origin Summary
Field Data and Origin Summary are real-world data. Therein lies the key difference between these metrics and the synthetic test that PageSpeed Insights runs.
For example, in the real world CLS is measured until page unload, as mentioned in this explanation of CLS from Addy Osmani, who works on Google Chrome.
For this reason your CLS can be high for pages that perform poorly at certain screen sizes (Lighthouse / PSI only tests one mobile screen size by default), or when things like lazy loading perform badly in the real world and cause layout shifts because items load too slowly.
It could also be down to certain browsers, connection speeds, etc.
How can you find the page / root cause that is ruining your Web Vitals?
Let's assume you have a page that does well in the Lighthouse synthetic test but it performs poorly in the real world at certain screen sizes. How can you identify it?
For that you need to gather Real User Metrics (RUM) data.
RUM data is data gathered in the real world as real users use your site and stored on your server for later analysis / problem identification.
There is an easy way to do this yourself, using the Web Vitals Library.
This allows you to gather CLS, FID, LCP, FCP and TTFB data, which is more than enough to identify pages that perform poorly.
You can pipe the data gathered to your own API, or to Google Analytics for analysis.
If you combine the Web Vitals information with User-Agent strings (to get the browser and OS) and browser size information (to get the effective screen size), you can narrow down whether the issue is tied to a certain browser, a certain screen size, or a certain connection speed (slower connections show up as high FCP / LCP figures); see the sketch below.
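As an illustration, here is a minimal sketch of that approach. It assumes a hypothetical /analytics collection endpoint on your own server, and uses the v2-era get* functions of the web-vitals library (newer releases rename them to onCLS, onLCP, etc.):

```typescript
// Gather the five metrics the web-vitals library exposes and ship
// each one to your server together with browser context.
import { getCLS, getFID, getLCP, getFCP, getTTFB, Metric } from "web-vitals";

// "/analytics" is a hypothetical collection endpoint on your own server.
function send(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,   // "CLS" | "FID" | "LCP" | "FCP" | "TTFB"
    value: metric.value,
    // Context that lets you slice the data during analysis:
    userAgent: navigator.userAgent, // browser and OS
    width: window.innerWidth,       // effective screen size
    height: window.innerHeight,
    page: location.pathname,
  });
  // sendBeacon survives page unload, which matters because CLS
  // keeps accumulating until the page is closed.
  if (navigator.sendBeacon) {
    navigator.sendBeacon("/analytics", body);
  } else {
    fetch("/analytics", { method: "POST", body, keepalive: true });
  }
}

getCLS(send);
getFID(send);
getLCP(send);
getFCP(send);
getTTFB(send);
```

Each record then carries enough context to group poor CLS scores by screen size, browser, or page during analysis.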

Is it possible to identify parts of a known image

Imagine you have a big place like a shopping mall, and I have 360-degree pictures of several places inside and outside of it. Is it possible through Cognitive Services / Computer Vision to compare whether a photo taken by users of my app is related to any of these 360-degree pictures, so I can add a description saying what is in the photo?
Microsoft Cognitive Services - Computer Vision currently does not offer this type of functionality. Training or customization is not yet supported. This is a highly requested feature and under review.

Calling the Face and Emotion APIs at the same time

My goal is to take the live camera sample and create an app that uses the Emotion API and the Face API at the same time. Whenever it detects a face, it should report gender, age, emotion, and emotion detection confidence in one string.
I am having trouble with that because all of the functions are async and the frame analysis (the analysis function) runs for each API individually.
Thanks for your help.
I have tried calling the same APIs via the frame analysis classes; try checking How to Analyze Videos in Real-time.
Hello from the Cognitive Services - Face API dev team. We are going to support emotion analysis during detection (in the next release, which should happen before April). It invokes the same interface provided by the Emotion API. So just be a little more patient, and feel free to reach us if you have any further questions.
Best,
Xuan (Sean) Hu.
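For illustration, a minimal sketch of what such a combined call could look like once detection returns emotion attributes. The attribute names follow the Face API detect REST endpoint; FACE_ENDPOINT and FACE_KEY are placeholders for your own resource:

```typescript
// Placeholders: substitute your own Face API region endpoint and key.
const FACE_ENDPOINT = "https://westus.api.cognitive.microsoft.com";
const FACE_KEY = "<your-subscription-key>";

interface DetectedFace {
  faceAttributes: {
    age: number;
    gender: string;
    emotion: Record<string, number>; // e.g. { happiness: 0.98, ... }
  };
}

// One detect call returns age, gender, and emotion together,
// so there is only one async result per frame to coordinate.
async function describeFaces(imageUrl: string): Promise<string[]> {
  const res = await fetch(
    `${FACE_ENDPOINT}/face/v1.0/detect?returnFaceAttributes=age,gender,emotion`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": FACE_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: imageUrl }),
    }
  );
  const faces: DetectedFace[] = await res.json();

  // Build the single string the question asks for, per face,
  // taking the emotion with the highest confidence.
  return faces.map((f) => {
    const [emotion, confidence] = Object.entries(f.faceAttributes.emotion)
      .sort(([, a], [, b]) => b - a)[0];
    return `${f.faceAttributes.gender}, ${f.faceAttributes.age}, ${emotion}, ${confidence}`;
  });
}
```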

WooCommerce API limits and using fewer resources

I was reading the documentation at:
http://woocommerce.github.io/woocommerce-rest-api-docs/
I am trying to figure out the limits for the API for the following methods
$woocommerce->get products/tags
$woocommerce->get products/categories
$woocommerce->post products/tags
$woocommerce->post products/batch
For these methods I want to know how many items I can get or save at once. (For batch save, for example, I want to save 50 at a time; for getting products, I want to get 50 at a time, i.e. per page.)
I am also trying to figure out best practices for using fewer resources on both the consumer and the receiver of the API. Right now in development I have them both on the same machine, and the fan really gets going on my laptop.
The majority of work is done in products/batch. I am sending almost 4k items in batches of 50.
I know of a service that uses WooCommerce and says their API calls are rate-limited by IP to 86,400 calls per day (one per second on average).
That is their own service, so it implies you can go the same or higher for WooCommerce.
Source: https://github.com/Paymium/api-documentation#rate-limiting
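On the page-size question: the WooCommerce REST API follows WordPress REST conventions, where per_page is capped at 100 items per request, and the batch endpoints accept up to 100 objects per request by default. A minimal sketch using the official JavaScript client (@woocommerce/woocommerce-rest-api) for illustration; the store URL and keys are placeholders, and the same per_page / page parameters apply to the PHP client in the question:

```typescript
import WooCommerceRestApi from "@woocommerce/woocommerce-rest-api";

// Placeholder store URL and credentials.
const api = new WooCommerceRestApi({
  url: "https://example.com",
  consumerKey: "ck_...",
  consumerSecret: "cs_...",
  version: "wc/v3",
});

// Fetch products 50 at a time; the API caps per_page at 100.
async function fetchAllProducts(): Promise<object[]> {
  const products: object[] = [];
  for (let page = 1; ; page++) {
    const { data } = await api.get("products", { per_page: 50, page });
    products.push(...data);
    if (data.length < 50) break; // a short page means we reached the end
  }
  return products;
}

// Send ~4k items in chunks of 50, pausing briefly between requests
// to keep CPU and memory flat on both ends.
async function batchCreateProducts(items: object[]): Promise<void> {
  for (let i = 0; i < items.length; i += 50) {
    await api.post("products/batch", { create: items.slice(i, i + 50) });
    await new Promise((r) => setTimeout(r, 500)); // crude throttle
  }
}
```

Chunking with a small delay between requests is gentler on both machines than firing all batches concurrently, which is likely what has your laptop fan spinning.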
