Hi there! I was reading the W3C spec about units of measurement (https://www.w3.org/TR/css-values-3/#reference-pixel) and I didn't understand what a reference pixel is. Can you explain it to me, or point me to another reference or explanation that's easier to understand? Also, I'm not sure I really understand the other things about units and measurements, ha ha. It all seems too hard.
Thank you!
The reference pixel is an attempt to standardize what "pixel" means in web development. The reason this matters is that the physical size of a pixel can vary greatly depending on the pixel density of the display.
For example, old CRT monitors had 72 pixels per inch, whereas an iPhone 7+ has 401 pixels per inch. So a literal measurement of 100px would be 1.39 inches on the CRT monitor and 0.25 inches on the iPhone.
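To make that arithmetic explicit, here is a small Python sketch using the PPI figures quoted above:

# Physical length of a run of hardware pixels on displays of different density.
# PPI values are the ones mentioned above (72 ppi CRT, 401 ppi iPhone 7 Plus).
def physical_inches(pixels: int, ppi: float) -> float:
    """Length in real inches of `pixels` hardware pixels at `ppi` pixels per inch."""
    return pixels / ppi

for name, ppi in [("CRT monitor", 72), ("iPhone 7 Plus", 401)]:
    print(f"100 hardware pixels at {ppi} ppi ({name}): {physical_inches(100, ppi):.2f} in")

# 100 hardware pixels at 72 ppi (CRT monitor): 1.39 in
# 100 hardware pixels at 401 ppi (iPhone 7 Plus): 0.25 in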
This article also has a pretty good explanation that helped me understand it better.
A List Apart, "A Pixel Identity Crisis" by Scott Kellum. January 17, 2012
"The w3c currently defines the reference pixel as the standard for all
pixel-based measurements. Now, instead of every pixel-based
measurement being based on a hardware pixel it is based on an optical
reference unit that might be twice the size of a hardware pixel. This
new pixel should look exactly the same in all viewing situations..."
"When using a phone that you held close, a reference pixel will be
smaller on the screen than a projection you view from a distance. If
the viewer holds their phone up so it is side-by-side with the
projection, the pixel sizes should look identical no matter the
resolution or pixel density the devices have. When implemented
properly, this new standard will provide unprecedented stability
across all designs on all platforms no matter the pixel density or
viewing distance."
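The spec pins the reference pixel to a visual angle: one pixel of a 96 dpi display viewed from arm's length (28 inches). That means its intended physical size grows with viewing distance, which is exactly what the quote describes. A rough sketch of that relationship; the viewing distances below are made up purely for illustration:

import math

# The CSS reference pixel is the visual angle subtended by one pixel of a
# 96 dpi display viewed from arm's length (28 inches) -- about 0.0213 degrees.
REF_PIXEL_ANGLE = 2 * math.atan((1 / 96) / 2 / 28)

def reference_pixel_inches(viewing_distance_in: float) -> float:
    """Physical size (in inches) a reference pixel should have at this viewing distance."""
    return 2 * viewing_distance_in * math.tan(REF_PIXEL_ANGLE / 2)

# Illustrative distances only: a phone held close vs. a projection across the room.
for device, distance in [("phone at 12 in", 12), ("monitor at 28 in", 28), ("projection at 150 in", 150)]:
    print(f"{device}: one reference pixel ~ {reference_pixel_inches(distance):.4f} in")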
In Xamarin.Forms, what unit of measurement is intrinsically associated with the number 100, as in this example:
<Button WidthRequest="100" />
Pixels
Points
Inches
None, the size depends on the runtime platform
I already saw this question:
Xamarin.Forms WidthRequest value meaning
But while the answer explains HOW things work, it doesn't give an exact answer to my question.
For example, I am not sure if the correct answer is "Points" or "Size depends on the runtime platform". To me both are related, because the units you specify in Xamarin.Forms are device-independent units (can we call them points?) and they are translated to pixels by the platform.
In the same way, if you think about how device-independent pixels work on iOS and Android, they are actually tied to the inch:
1 iOS point ~= 1/163 inch
1 Android dp/dip (density-independent pixel) ~= 1/160 inch
So while I understand how things work, I still don't know the correct answer to the question.
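To make the inch relationship above concrete, here is a hedged sketch (in plain Python rather than anything Xamarin-specific) of how a device-independent value such as WidthRequest="100" typically ends up as physical pixels; the scale factor of 3 and the 420 dpi density are just example values, not something the framework reports:

# Rough model of how 100 device-independent units become physical pixels.
# iOS: points multiplied by the screen's scale factor (1x / 2x / 3x).
# Android: dp multiplied by density / 160.
def ios_points_to_pixels(points: float, scale_factor: float) -> float:
    return points * scale_factor

def android_dp_to_pixels(dp: float, density_dpi: float) -> float:
    return dp * density_dpi / 160

print(ios_points_to_pixels(100, scale_factor=3))    # 300 physical px on a 3x iPhone screen
print(android_dp_to_pixels(100, density_dpi=420))   # 262.5 physical px on a 420 dpi Android screen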
According to MDN, the "px" unit can mean 2 completely different things depending on whether it's on a "low-dpi" device or a "high-dpi" device.
For low-dpi devices, the unit px represents the physical reference pixel; other units are defined relative to it.
or
For high-dpi devices, inches (in), centimeters (cm), and millimeters (mm) are the same as their physical counterparts. Therefore, the px unit is defined relative to them (1/96 of 1 inch).
But how exactly do you differentiate one from the other? What is the cut-off for "high dpi"? How can I tell which of these two cases applies on a particular device?
Found the answer on the W3C page:
In the past, CSS required that implementations display absolute units correctly even on computer screens. But as the number of incorrect implementations outnumbered correct ones and the situation didn't seem to improve, CSS abandoned that requirement in 2011. Currently, absolute units must work correctly only on printed output and on high-resolution devices.
CSS doesn't define what “high resolution” means. But as low-end printers nowadays start at 300 dpi and high-end screens are at 200 dpi, the cut-off is probably somewhere in between.
https://www.w3.org/Style/Examples/007/units.en.html
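The practical difference between the two definitions shows up if you compute how long CSS "1in" renders under each model; the 72 ppi and 401 ppi values reuse the earlier examples and are only illustrative:

# Two models for CSS absolute units:
# 1) px-anchored (the "low-dpi" case): 1px = 1 hardware pixel and 1in = 96px,
#    so CSS "1in" is only a real inch if the screen happens to be 96 ppi.
# 2) physical-anchored (the "high-dpi" case): 1in is a real inch and
#    1px is derived from it as 1/96in, regardless of hardware pixels.
def css_inch_in_real_inches(model: str, screen_ppi: float) -> float:
    """Physical length, in real inches, of CSS '1in' under the given model."""
    if model == "px-anchored":
        return 96 / screen_ppi       # 96 hardware pixels, however large they are
    if model == "physical-anchored":
        return 1.0                   # a true inch; px follows as 1/96 of it
    raise ValueError(model)

for ppi in (72, 401):
    print(f"{ppi} ppi screen: px-anchored -> {css_inch_in_real_inches('px-anchored', ppi):.2f} in, "
          f"physical-anchored -> {css_inch_in_real_inches('physical-anchored', ppi):.2f} in")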
Well, I want to ask if the ADXL345 can be used to detect an earthquake occurrence based on its magnitude/intensity level. To give more context, I want to use an accelerometer to create a device that can detect the intensity/magnitude level of an earthquake.
I have absolutely no experience in this field, but it looks useful and fascinating.
Questions are:
Is this device able to detect medium-scale earthquakes?
If yes, has anybody done it and is willing to share their experiences?
If not, is there any guide which explains the algorithms, calculations and mechanical plans?
That sensor is not suitable. It has 13-bit resolution over a ±16 g full range, which gives a sensitivity of roughly 0.004 g (about 4 mg) per LSB. In order to detect an earthquake directly below you, you need approx. a few milli-g (e.g. see here), even less for earthquakes with an epicentre elsewhere.
You want a sensor which is more sensitive by a factor of about 100, and probably with more resolution (a better ADC), too.
(And you should have been able to do this quick google-search analysis yourself ;) )
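To reproduce that quick analysis in code, here is a rough sketch; the "few milli-g" figure is taken from the answer above as an order-of-magnitude threshold, not a precise seismological constant:

# Back-of-the-envelope check of the ADXL345 against the ground acceleration
# a nearby earthquake produces at the surface.
FULL_RANGE_G = 32           # +-16 g span
RESOLUTION_BITS = 13        # full-resolution mode

lsb_g = FULL_RANGE_G / 2 ** RESOLUTION_BITS   # ~0.0039 g (about 4 mg) per count
signal_g = 0.001                              # "a few milli-g"; rough order of magnitude only

print(f"Smallest step the ADXL345 resolves: {lsb_g * 1000:.1f} mg per count")
print(f"Acceleration you want to measure:   ~{signal_g * 1000:.0f} mg")
# The quantisation step is already larger than the signal, so the quake would be
# lost in the bottom bit; to trace its waveform you need a step a couple of
# orders of magnitude below the signal, hence the advice above.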
Using accelerometer readings tells you nothing about the actual magnitude of the quake itself; it tells you the size of the quake at your location. Combining location and amplitude will give you a 'weighted' measurement, but that's still useless without a calibration curve. Without knowing what acceleration, at a certain distance, corresponds to what magnitude, you will be unable to tell what the magnitude is. You can certainly conclude that your measured earthquake has a median amplitude of, say, 2000% of a non-earthquake reading, but you won't be able to turn it into a Richter measurement. To do that you'd need to take some data during earthquakes of known magnitude and then work out how acceleration, distance and magnitude are related for your device. You could alternatively use a scale like the Shindo scale (just Google it).
Using the MapTiler Pro demo. Testing zoom levels 1-21 for a Google Maps export from a TIFF image (a file of about 21 MB covering polygons over 2000 km).
At the moment it's been running for an hour with constant usage at 12% of 12 vcores (about 1.5 of 12), maxed at about 2.5 GHz. No tiles have been exported yet, only the associated HTML files.
Am I too quick to judge performance?
Edit: Progress bar at 0%.
Edit 2: Hour 8, still 0%. Memory usage has increased from 400 MB to 2 GB.
With the options you have set in the software, you are trying to generate about 350 GB of tiles (approx. 10 billion map tiles at zoom level 21) from your 21 MB input file. Is this really what you want to do?
It makes no sense to render a very low-res image (2600 x 2000 pixels) covering a large area (such as South Africa) down to zoom level 21!
The software suggested the default maxzoom of 6 for a reason. If your data are coverage maps or a similar dataset, it makes sense to render them down to maybe zoom level 12 or so, definitely not deeper than 14. For standard input data (aerial photos), the suggested native maxzoom +1 or +2 is the maximum that really makes sense. Deeper zoom levels do not add any visual advantage.
The user can always zoom deeper, because the upper tiles can be over-zoomed on the client side, so you don't really need to generate and save all these images at all...
MapTiler automatically provides you with a Google Maps V3 viewer, which does the client-side over-zooming out of the box.
See a preview here:
http://tileserver.maptiler.com/#weather/gmapsmaptiler.embed
If you are interested in the math behind the map tiles, check:
http://tools.geofabrik.de/calc/#type=geofabrik_standard&bbox=16.44,-34.85,32.82,-22.16
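If you want to estimate the tile count yourself, here is a sketch of the standard Web Mercator tiling math; the bounding box is a rough South Africa extent matching the geofabrik link above:

import math

def tile_x(lon: float, zoom: int) -> int:
    """Column of the Web Mercator tile containing this longitude."""
    return int((lon + 180) / 360 * 2 ** zoom)

def tile_y(lat: float, zoom: int) -> int:
    """Row of the Web Mercator tile containing this latitude (rows grow southward)."""
    return int((1 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2 * 2 ** zoom)

def tile_count(west: float, south: float, east: float, north: float, zoom: int) -> int:
    """Number of tiles needed to cover the bounding box at a single zoom level."""
    cols = tile_x(east, zoom) - tile_x(west, zoom) + 1
    rows = tile_y(south, zoom) - tile_y(north, zoom) + 1
    return cols * rows

bbox = (16.44, -34.85, 32.82, -22.16)   # rough South Africa extent
for z in (6, 12, 21):
    print(f"zoom {z:2d}: ~{tile_count(*bbox, z):,} tiles")
# Zoom level 21 alone comes out in the billions of tiles for a box this size.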
Thanks for providing the report (with http://www.maptiler.com/how-to/submit-report/) to us. Your original email to our support did not contain any technical data at all (not even the data you posted here on Stack Overflow).
Please, before you publicly rant about the performance of a piece of software, double-check that you know what you are doing. MapTiler Pro is a powerful tool, but the user must know what they are doing.
Based on your feedback, we have decided to add an estimated final output size to a future version of the MapTiler software, and to warn the user in the graphical user interface if they choose options which are probably unwanted.
Is there any evidence of a particular sizing unit taking longer to process? For instance, if you were to use rem to size your entire site, would it take longer to calculate/paint the page than if everything were given a specific px value?
Is there any benefit to max-width: 16rem over max-width: 250px?
I'm under the impression that rem takes longer since it has to revert back to the root and calculate while em is like a steady stream of processing, and px would be the fastest because there's nothing to calculate.
Please let me know if anyone has any evidence of which method is faster.
Edit: I started off pretty much dismissing this discussion as, well, polishing the roof of a truck, but I had not considered CSS animations, which are quite heavy processor users, and with CSS not being a graphically optimized process (it is very inefficient), I think there is a slightly stronger warrant for such a question if a website has a large number of CSS animations.
Quote from question:
I'm under the impression that rem takes longer since it has to revert back to the root and calculate while em is like a steady stream of processing, and px would be the fastest because there's nothing to calculate.
No. rem simply takes a factor of the root em value, rather than the parent's em value. (As the root doesn't change, I would hope the CSS engine doesn't need to keep recomputing it and can simply retrieve it from memory.)
rem is the way we should be writing CSS in 2016. It beats the lights out of em once you have more than one or two parent elements affecting the em value; for instance, from the point of view of a developer working out what 1.2em of 1.4em of 1.2em of 14px is, why not just have 1.2 of the 14px root as 1.2rem?
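A quick sketch of the arithmetic in that example, versus the single rem lookup:

# The example above: an element at 1.2em inside ancestors sized 1.4em and
# 1.2em with a 14px root, versus the same element written as 1.2rem.
root_px = 14.0
em_chain = [1.2, 1.4, 1.2]    # outermost ancestor ... the element itself

em_result = root_px
for factor in em_chain:       # each em compounds on its parent's computed size
    em_result *= factor

rem_result = 1.2 * root_px    # rem: one multiplication against the root, at any depth

print(f"1.2em at the end of the chain: {em_result:.2f}px")   # 28.22px
print(f"1.2rem anywhere:               {rem_result:.2f}px")  # 16.80px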
As for px, that is not a straight-to-screen result either: with many modern display devices, a CSS pixel is not a hardware pixel. This may be an interesting topic for you to read.
If you care about the speed of processing rem versus px, then I personally feel you're in effect trying to get better fuel efficiency from your truck by polishing the roof so that air resistance is reduced: your work may have a tiny impact, but there are other, far larger consumers of GPU, CPU, RAM and operating power, and many more of them.
You may also like to read this: How a CSS pixel size is calculated?
And because I want to entertain you, you may like to know that you can now generate full 3D computer game levels developed entirely through CSS. This was made in 2013! I still find it incredible!!
In this game the developer used px throughout. You could perhaps take his code, convert it to em and/or rem, and see from the heaviness of the page whether one approach is indeed notably faster.