In CSS, is there a way to format floating point numbers to a specific number of digits to the right of the decimal point? - css

I am trying to determine whether, using just CSS, there is a way to format floating-point numbers to a specific number of digits to the right of the decimal point.
The reason I want to do this is that I need to use a website that displays a massive amount of continually updating data in tables. (Because the data is continually updating, I can't just export the data into a spreadsheet.)
The values are floating point and range from 0 to 9999. The number of fractional digits varies from 0 to 7. For the most part, I have no use for anything beyond hundredths (2 places to the right of the decimal point). The exception is for values ranging from 0 to 9, but I'm willing to forego that case, if necessary.
This is a tiny example of how the data is currently displayed:
9484.83
133.57643
1344.5432
9.5848274
58.48381
5989.1
1.5847493
1.348
As you can see, it's hard to read the data with that presentation. Ideally, I would like to use a CSS overlay to reformat that data as:
9484.83
133.57
1344.54
9.584
58.48
5989.10
1.584
1.348
If that's not possible, I'm fine with:
9484.83
133.57
1344.54
9.58
58.48
5989.10
1.58
1.34
Using CSS, I can easily enforce a maximum width for the HTML elements displaying the values. I can use em units to try to avoid partially displayed digits (not 100% effective, though, unless I force a monospaced font, which leaves much less data visible in the viewport). But even with such techniques, I still wind up with values displayed as 58.4848.
Can CSS be used to solve this task?

Related

Handle "Division by Zero" in Image Processing (or PRNU estimation)

I have the following equation, which I am trying to implement. The upcoming question is not necessarily about this equation, but more generally about how to deal with division by zero in image processing:
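For reference, the standard maximum-likelihood PRNU estimator from the literature, which appears to be the equation meant here, is:
\hat{K} = \frac{\sum_{i=1}^{d} W_i \, I_i}{\sum_{i=1}^{d} I_i^{2}}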
Here, I is an image, W is the difference between the image and its denoised version (so W expresses the noise in the image), and K is an estimated fingerprint, obtained from d images of the same camera. All calculations are done pixel-wise, so the equation does not involve a matrix multiplication. For more on the idea of estimating digital fingerprints, consult the corresponding literature, such as the general Wikipedia article or scientific papers.
However, my problem arises when an image has a pixel with value zero, e.g. perfect black (let's say we only have one image, k = 1, so the zero does not get compensated by a nonzero pixel value from another image). Then I have a division by zero, which is not defined.
How can I overcome this problem? One option I came up with was adding 1 to all pixels right before I start the calculations. However, this shifts the range of pixel values from [0, 255] to [1, 256], which makes it impossible to keep working with the data type uint8.
Other authors of papers I have read on this topic often do not consider values close to the range borders. For example, they only evaluate the equation for pixel values in [5, 250]. Their reasoning is not the numerical problem; rather, they argue that if an image is totally saturated or totally black, the fingerprint cannot be estimated properly in that area anyway.
But again, my main concern is not how this algorithm performs best, but rather the general question: how do you deal with division by zero in image processing?
One solution is to use subtraction instead of division; however, subtraction is not scale invariant, it is translation invariant.
(e.g. the ratio will always be a normalized value between 0 and 1, and if it exceeds 1 you can invert it; you can get the same normalization with subtraction, but then you need to find the maximum values attained by the variables)
Eventually you will have to deal with division. Dividing a black image by itself is a legitimate case to handle: you can translate the values to some other range, then transform back.
However, 5/8 is not the same as 55/58, so you can only interpret this in a relative way. If you want to know the exact ratios, you are better off sticking with the original interval and handling the problem values as special cases, e.g. if denom == 0, do something with it; if num == 0 and denom == 0, the 0/0 means we have an identity, exactly as if we had 1/1.
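A minimal sketch of that special-case handling in C++, assuming flat 8-bit grayscale buffers (the function name and sentinel value are illustrative, not from any particular library):
#include <cstddef>
#include <cstdint>
#include <vector>

// Pixel-wise ratio num/denom with the zero cases handled explicitly:
// 0/0 is treated as the identity (1.0), x/0 is given a sentinel value
// that the caller can deal with later.
std::vector<double> safeRatio(const std::vector<std::uint8_t>& num,
                              const std::vector<std::uint8_t>& denom,
                              double zeroDenomSentinel = 0.0)
{
    std::vector<double> out(num.size());
    for (std::size_t i = 0; i < num.size(); ++i) {
        if (denom[i] == 0) {
            out[i] = (num[i] == 0) ? 1.0                 // 0/0 -> identity
                                   : zeroDenomSentinel;  // x/0 -> special case
        } else {
            out[i] = static_cast<double>(num[i]) / denom[i];
        }
    }
    return out;
}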
In PRNU and fingerprint estimation, if you check the MATLAB implementation on Jessica Fridrich's webpage, they basically create a mask to get rid of saturated and low-intensity pixels, as you mentioned. Then they convert the image matrix with single(I), which makes it 32-bit floating point, add 1 to the image, and divide.
As for your general question: in image processing, I like to create a mask and add one only to the zero-valued pixels.
img = imread('my gray img');        % grayscale image, uint8
a_mat = rand(size(img));            % example numerator data, double
mask = uint8(img == 0);             % 1 where the image is zero, 0 elsewhere
div = a_mat ./ double(img + mask);  % element-wise division; zero pixels bumped to 1
This will prevent the division-by-zero error. (Not tested, but it should work.)

Is it possible to create an SVG that is precise to 1,000,000,000% zoom?

Split off from: https://stackoverflow.com/questions/31076846/is-it-possible-to-use-javascript-to-draw-an-svg-that-is-precise-to-1-000-000-000
The SVG spec states that SVGs use double-precision floats for all values.
Through testing, it's easy to verify this.
Affinity Designer is a vector graphics program that allows zooming up to 1,000,000,000%, and it too uses double-precision floats for all its calculations.
I would like to know from someone who deeply understands double-precision floats: is it possible to create an SVG that is visually correct at 1,000,000,000% zoom?
Honestly, I'm struggling with getting a grasp on the math of this:
9007199254740992 (2^53, the largest integer a double can represent exactly, according to https://stackoverflow.com/a/1848953/2328064) is larger than 1,000,000,000, so it seems reasonable that if something is 2 or even 2000 units wide, it would still be small when starting at 9007199254740992 and zooming to 1,000,000,000%.
Hypothetical examples as ways to approach the question:
If we created an SVG of a 2D slice of the entire visible universe, how far could we zoom in before floating-point rounding started shifting things by 1 pixel?
If we start with an SVG that is 1024x1024, can we create a 'microscopic' grid that is both visible and visually correct at 1,000,000,000% zoom? (Like, say, we can see 20+ equidistant squares)
Edit:
Based on everything so far, the definitive answer is yes (with some important and interesting caveats for actually viewing this SVG).
In order to get the most precision at high zoom, start at the centre.
The SVG spec is not designed for this level of precision. This is especially true of the spec for SVG viewers.
(Not mentioned below) Typically curves are represented in software as Bézier curves, and standard Bézier curve implementations do not draw mathematically perfect circles.
Of course it is. Floating-point math deals with relative, not absolute, precision. If you created a regular polygon at the origin, with radius 1e-7, then zoomed it to 1e7x size, you would expect to see a regular polygon with the same size and precision as an unzoomed polygon with radius 1.
If you were to create the same regular polygon with vertices centered at (0, 1e9) or so, you'd expect to see some serious error. Doubles that large do not have enough absolute precision to accurately represent a shape that small.
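To put numbers on that (a sketch, not part of the original answer): std::nextafter reports the spacing between adjacent doubles, i.e. the best absolute precision available at a given magnitude.
#include <cmath>
#include <cstdio>

int main() {
    // Spacing between adjacent doubles ("one ulp") near 1.0 and near 1e9.
    double ulp_at_1   = std::nextafter(1.0, 2.0) - 1.0;   // ~2.2e-16
    double ulp_at_1e9 = std::nextafter(1e9, 2e9) - 1e9;   // ~1.2e-7
    std::printf("ulp near 1:   %g\n", ulp_at_1);
    std::printf("ulp near 1e9: %g\n", ulp_at_1e9);
    // A feature of radius 1e-7 placed near coordinate 1e9 is about the same
    // size as one ulp, so its vertices collapse onto a handful of
    // representable values -- hence the serious error described above.
    return 0;
}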
However, there's another way to express "shapes far from the origin" in SVG, using a node transformation. If you were to specify the polygon relative to the origin, but give it a translation of (0,1e9), and zoomed to that point, you'd expect to see the same precision as the origin-centered polygon.
HOWEVER however, all this assumes that the SVG renderer in question is designed to do such things in the most precise possible manner (such as composing the shape and view transformations before applying them to the vertices, rather than applying one at a time). I'm not sure if any of the SVG renderers out there go to such lengths, given the unusualness (some might say, the wrong-headedness) of such a use case.
TL;DR: It is possible to create such an SVG file, but it's impossible to know if a renderer or other tools that merely follow the spec will render/process it correctly.
This is a case of the SVG standard being too vague. Since the renderers, canvases, etc. only have to follow the spec, the realistic answer is: you can create it, but it won't be usable for what you intend to use it for.
Most likely no.
A double has around 53 bits of precision, so when multiplying by a 1e9-percent zoom factor you could accumulate a small amount of error, but there are no guarantees. It may not be enough to push things out of the correct pixel, but I would suggest getting your own solution working and having a look at rasterisation, because that is what you seem to need to know more about.

How to resize an existing point cloud file?

I am trying to enlarge a point cloud data set. Suppose I have a point cloud data set consisting of 100 points and I want to enlarge it, say, 5 times. I am studying a specific structure which is very small, so I want to zoom in and do some computations. I want something like imresize() in Matlab.
Is there any function to do this? What does the resize() function do in PCL? Any idea how I can do it?
Why would you need this? Points are just numbers, regardless of whether their values are 1 or 100, as long as all of them are on the same scale and in the same coordinate system. Their size on the screen is just a visual representation; you can zoom in and out as you wish.
You want them to be a thousandth of their original value (e.g. a millimeters-to-meters change)? Divide them by 1000.
You want them spread out in a 5-times-larger space in that particular coordinate system? Multiply their coordinates by 5. But even so, their visual representation will look exactly the same on the screen. The data remains basically the same; the points are not resized per se, their numeric representation just changes a bit. It is the simplest affine transform, just a single multiplication.
You want to have finer or coarser resolution of your numeric representation? Or have different range? Change your data type accordingly.
That is, if you deal with a single set.
If you deal with different sets, say, recorded with different kinds of sensors and the numeric representations differ a bit (there are angles between the coordinate systems, mm vs cm scale, etc.) you just have to find the transformation from one coordinate system to the other one and apply it to the first one.
Since you want to increase the number of points while preserving shape/structure of the cloud, I think you want to do something like 'upsampling'.
Here is another SO question on this.
The PCL offers a class for bilateral upsampling.
And as always google gives you a lot of hints on this topic.
Besides increasing the allocated memory (what Ziker mentioned; that's not what you want, right?) or zooming in in the visualization, you could just rescale your point cloud.
This can be done by multiplying each point's coordinates by a constant factor or by applying an affine transformation. That way you can, for example, switch from mm to m.
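A minimal sketch of that rescaling using PCL's transform helper, assuming a pcl::PointXYZ cloud (the factor 5 is just the number from the question):
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <Eigen/Geometry>

// Scale every point of `cloud` by a constant factor around the origin,
// e.g. factor = 5.0f to spread the cloud out, or 0.001f for mm -> m.
pcl::PointCloud<pcl::PointXYZ>::Ptr
scaleCloud (const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud, float factor)
{
    Eigen::Affine3f transform = Eigen::Affine3f::Identity ();
    transform.scale (factor);   // uniform scaling

    pcl::PointCloud<pcl::PointXYZ>::Ptr scaled (new pcl::PointCloud<pcl::PointXYZ>);
    pcl::transformPointCloud (*cloud, *scaled, transform);
    return scaled;
}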
If I understand your question correctly: if you have defined your cloud like this
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
then you can in fact call resize:
cloud->points.resize (cloud->width * cloud->height);
Note that resize does nothing more than allocate more memory for the variable, so after resizing the original data remain in the cloud. If you want the resized cloud to be empty, don't forget to call cloud->clear();
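A tiny illustration of that behaviour, continuing from the cloud declared above (the factor 5 is just an example):
cloud->width = 5 * cloud->width;                      // say we want 5x as many points
cloud->points.resize (cloud->width * cloud->height);  // allocates; original points stay at the front
// new elements are default-constructed; for a completely fresh cloud:
// cloud->clear ();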
If you just want to zoom some PCD for visual purposes (i.e. you can't see the shape of the cloud because it's too small), why don't you use PCL Visualization and zoom by scrolling up/down?

How do browsers handle rgb(percentage); for strange numbers

This is related to CSS color codes:
With hex codes we can represent 16,777,216 colors, from #000000 to #FFFFFF.
According to W3C Specs, Valid RGB percentages fit in a range from (0.0% to 100.0%) essentially giving you 1,003,003,001 color combinations. (1001^3)
According to the specs:
Values outside the device gamut should be clipped or mapped into the gamut when the gamut is known: the red, green, and blue values must be changed to fall within the range supported by the device. User agents may perform higher quality mapping of colors from one gamut to another. For a typical CRT monitor, whose device gamut is the same as sRGB, the four rules below are equivalent:
I'm doubtful that browsers can actually render all these values (but if they do, please tell me and ignore the rest of this post).
I'm assuming there's some mapping from rgb(percentage) to hex (but again, I'm not really sure how this works).
Ideally I'd like to find out the function rgb(percentage)->HEX
If I had to guess it would probably be one of these 3.
1) Round to the nearest HEX
2) CEIL to the nearest HEX
3) FLOOR to the nearest HEX
The problem is that I need the mapping to be accurate, and I have no idea where to search.
There's no way my eyes can differentiate color at that level, but maybe there's some clever way to test each of these 3.
It might also be browser dependent. Can this be tested?
EDIT:
From empirical testing, Firefox seems to round.
EDIT:
I'm looking through Firefox's source code right now,
nsColor.h
// A color is a 32 bit unsigned integer with four components: R, G, B
// and A.
typedef PRUint32 nscolor;
It seems Firefox only has room for 256 values (0-255) for each of R, G and B, hinting that rounding might be the answer, but maybe something is being done with the alpha channel.
I think I found the answer for Firefox anyway; thought you might like a follow-up:
Looking through the source code I found a file:
nsCSSParser.cpp
For each rgb() percentage component it does the following:
It takes the percentage component and multiplies it by 255.0f
Stores it in a float
Passes it into a function NSToIntRound
The result of NSToIntRound is stored into an 8-bit integer data type, before it is combined with the other 2 components and an alpha channel
Looking for more detail on NSToIntRound:
nsCoord.h
inline PRInt32 NSToIntRound(float aValue)
{
return NS_lroundf(aValue);
}
NSToIntRound is a wrapper function for NS_lroundf
nsMathUtils.h
inline NS_HIDDEN_(PRInt32) NS_lroundf(float x)
{
return x >= 0.0f ? PRInt32(x + 0.5f) : PRInt32(x - 0.5f);
}
This function is actually very clever, took me a while to decipher (I don't really have a good C++ background).
Assuming x is positive:
It adds 0.5f to x and then casts to an integer.
If the fractional part of x was less than 0.5, adding 0.5 won't change the integer part, and the fractional part is truncated.
Otherwise the integer value is bumped by 1 and the fractional part is truncated.
So each component's percentage is first multiplied by 255.0f,
then rounded and cast into a 32-bit integer,
and then cast again into an 8-bit integer.
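Putting those steps together, a small self-contained sketch of the mapping as described above (using standard integer types instead of Mozilla's PRInt32/PRUint8, and assuming the parser holds the percentage as a value out of 100):
#include <cstdint>
#include <cstdio>

// Same rounding trick as NS_lroundf, shown for completeness.
static std::int32_t roundToInt(float x)
{
    return x >= 0.0f ? static_cast<std::int32_t>(x + 0.5f)
                     : static_cast<std::int32_t>(x - 0.5f);
}

int main() {
    float percentage = 23.456f;                     // e.g. rgb(23.456%, ...)
    float scaled = (percentage / 100.0f) * 255.0f;  // 59.8128, held in a float
    std::uint8_t component = static_cast<std::uint8_t>(roundToInt(scaled));
    unsigned value = static_cast<unsigned>(component);
    std::printf("%.3f%% -> %u (0x%02X)\n", percentage, value, value);
    // prints: 23.456% -> 60 (0x3C)
    return 0;
}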
I agree with most of you that say this appears to be a browser dependent issue, so I will do some further research on other browsers.
Thanks a bunch!
According to W3C Specs, Valid RGB percentages fit in a range from (0.0% to 100.0%) essentially giving you 1,003,003,001 color combinations. (1001^3)
No, more than that, because the precision is not limited to one decimal place. For example, this is valid syntax:
rgb(23.456% 78.90123456% 0%)
The reason for this is that, while 8 bits per component is common (hence hex codes), newer hardware supports 10 or 12 bits per component, and wider-gamut colorspaces need more bits to avoid banding.
This bit-depth agnosticism is also why newer CSS color specifications use a 0 to 1 float range.
Having said which, the CSS Object Model still requires color values to be serialized at 8 bits per component. This is going to change, but the higher-precision replacement is still being discussed in the CSS working group. So for now, browsers don't let you get more than 8 bits per component of precision.
If you are converting a float or percentage form to hex (or to a 0-255 integer), the correct method is rounding. Floor or ceiling will not space the values evenly at the top or bottom of the range.

Problem with Principal Component Analysis

I'm not sure this is the right place but here I go:
I have a database of 300 pictures in high resolution. I want to compute the PCA on this database, and so far here is what I do:
- reshape every image into a single column vector
- create a matrix of all my data (500x300)
- compute the average column and subtract it from my matrix; this gives me X
- compute the correlation C = X'X (300x300)
- find the eigenvectors V and eigenvalues D of C
- the PCA matrix is given by X*V*D^(-1/2), where each column is a principal component
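For concreteness, here is a sketch of those steps in C++ with the Eigen library (an illustration of the procedure as described, not the original code):
#include <Eigen/Dense>

// X is (pixels x images): each column is one vectorised image.
Eigen::MatrixXd pcaComponents (Eigen::MatrixXd X)
{
    // Subtract the average column from every column.
    Eigen::VectorXd mean = X.rowwise().mean();
    X.colwise() -= mean;

    // Small (images x images) correlation matrix C = X' * X.
    Eigen::MatrixXd C = X.transpose() * X;

    // Eigenvectors V and eigenvalues D of the symmetric matrix C.
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es (C);
    Eigen::MatrixXd V = es.eigenvectors();   // one eigenvector per column
    Eigen::VectorXd D = es.eigenvalues();    // ascending order

    // Principal components: X * V * D^(-1/2), one component per column.
    // (In practice, drop columns whose eigenvalue is ~0 before inverting.)
    Eigen::VectorXd invSqrtD = D.cwiseSqrt().cwiseInverse();
    return X * V * invSqrtD.asDiagonal();
}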
This is great and gives me correct components.
Now what I'm doing is doing the same PCA on the same database, except that the images have a lower resolution.
Here are my results, low-res on the left and high-res on the right. As you can see, most of them are similar, but SOME images are not the same (the ones I circled).
Is there any way to explain this? I need my algorithm to produce the same components for both sets, one computed from high-res images and the other from low-res; how can I make this happen?
thanks
It is very possible that the filter you used did a thing or two to some of the components. After all, lower-resolution images don't contain the higher frequencies, which also contribute to which components you get. If the component weights (lambdas) for those images are small, there's also a good possibility of numerical error.
I'm guessing your component images are sorted by weight. If they are, I would try to use a different pre-downsampling filter and see if it gives different results (essentially obtain lower resolution images by different means). It is possible that the components that come out differently have lots of frequency content in the transition band of that filter. It looks like images circled with red are nearly perfect inversions of each other. Filters can cause such things.
If your images are not sorted by weight, I wouldn't be surprised if the ones you circled have very little weight and that could simply be a computational precision error or something of that sort. In any case, we would probably need a little more information about how you downsample, how you sort the images before displaying them. Also, I wouldn't expect all images to be extremely similar because you're essentially getting rid of quite a few frequency components. I'm pretty sure it wouldn't have anything to do with the fact that you're stretching out images into vectors to compute PCA, but try to stretch them out in a different direction (take columns instead of rows or vice versa) and try that. If it changes the result, then perhaps you might want to try to perform PCA somewhat differently, not sure how.

Resources