I am somehow mentally blocked on figuring this out, but is there an easy mathematical way to determine the intrinsic ratio for a given image, let's say e.g. w 580, h 650, to arrive at a ratio like 3/4, 4/3, 5/6, 16/9, etc.? Best regards, Ralf
I am in a refactoring process for a client whose 2D modeling software needs to be rewritten. There is poor old logic for scaling down things that do not fit in the canvas. I was wondering whether anyone can provide a proper mathematical formula to scale down a vector based on canvas size; the most important thing is that the ratio between lines should be kept when scaling down.
A single formula is not required; I can take suggestions using any programming language.
Example image:
In case someone models a 2000mm-wide cover strip, the drawn line should be downscaled to fit in the canvas. In this case, pixels and millimeters are proportional.
I have tried exponential downscaling like this, but it does not take the canvas size into account in any way.
20mm^0.85 ≈ 12.76mm
10mm^0.85 ≈ 7.08mm
5mm^0.85 ≈ 3.92mm
I know this looks like a mathematical question, but it is really a programming problem.
Thank you for your time.
Since you are not specifying any language, I will outline the procedure. It is very easy to implement, for instance, in JavaScript. Let canvas.width and canvas.height be the width and height of the canvas, and object.width and object.height the width and height of the object.
Start by calculating scx = object.width / canvas.width and scy = object.height / canvas.height.
If you only want to downscale (never upscale): if both scx and scy are lower than 1, do nothing (the object fits). In any other case, the largest value max(scx, scy) is your scale factor; divide object.width and object.height by that factor.
If you always want to fit the object to the canvas, then the largest value max(scx, scy) is your scale factor unconditionally; again, divide object.width and object.height by it.
One more piece of advice: you can easily add a margin (padding, really) by using a smaller canvas.width and canvas.height. Say you use 90% of the actual sizes; then you can set the origin point at 5% of the width and height, and you know that no object will come closer than 5% to any canvas edge.
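A minimal Python sketch of the whole procedure (the function and parameter names are mine, and the 30mm strip height in the usage example is made up):

def fit_to_canvas(obj_w, obj_h, canvas_w, canvas_h, downscale_only=True):
    # Scale (obj_w, obj_h) to fit the canvas, preserving the aspect ratio.
    scx = obj_w / canvas_w
    scy = obj_h / canvas_h
    scale = max(scx, scy)
    if downscale_only and scale <= 1:
        return obj_w, obj_h            # already fits, leave it alone
    return obj_w / scale, obj_h / scale

# A 2000mm-wide strip on an 800x600 canvas, using 90% of it as a margin:
print(fit_to_canvas(2000, 30, 0.9 * 800, 0.9 * 600))  # (720.0, 10.8)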
I've looked online for how to get the aspect ratio so I can write proper media queries for a website I'm trying to make, but some of the numbers I get don't make sense.
What I'm trying to do is take two pixel dimensions, say 667 x 325. I put those two numbers into the website below, but the result I'm getting is 667 : 325. I don't think that's correct, or is it?
https://aspectratiocalculator.com/
I've also tried looking for a mathematical formula so I can do these manually, but there are so many out there that don't fit the context of what I'm trying to obtain.
How can I get the aspect ratio of two given pixel dimensions?
Aspect ratio is width / height, e.g. 600 by 800 = 600/800 = 0.75.
The calculator is correct, because the "ratio", in the conventional meaning of the word, is 600:800, or 0.75:1, or just 0.75 (as we programmers use it). 667 : 325 comes back unchanged because 667 and 325 have no common factor, so the ratio is already in lowest terms.
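A small Python sketch that reduces a pixel size to its simplest integer ratio, which also answers the 580 x 650 question above:

from math import gcd

def reduce_ratio(w, h):
    # Divide out the greatest common divisor to get the simplest integer ratio.
    g = gcd(w, h)
    return w // g, h // g

print(reduce_ratio(580, 650))    # (58, 65), i.e. 58:65, about 0.89
print(reduce_ratio(667, 325))    # (667, 325): already coprime, so unchanged
print(reduce_ratio(1920, 1080))  # (16, 9)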
I have the following equation, which I am trying to implement. The upcoming question is not necessarily about this equation but, more generally, about how to deal with divisions by zero in image processing:
Here, I is an image, W is the difference between the image and its denoised version (so W expresses the noise in the image), and K is an estimated fingerprint, gained from d images of the same camera. All calculations are done pixel-wise, so the equation does not involve matrix multiplication. For more on the idea of estimating digital fingerprints, consult the corresponding literature, such as the general Wikipedia article or scientific papers.
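For concreteness, this is presumably the standard maximum-likelihood PRNU estimator from the literature, reconstructed here from the definitions above and evaluated pixel-wise:

K̂ = ( Σ_{k=1..d} W_k · I_k ) / ( Σ_{k=1..d} I_k² )

where I_k is the k-th image and W_k its noise residual.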
However, my problem arises when an image has a pixel with the value zero, e.g. perfect black (let's say we only have one image, d = 1, so the zero does not get overwritten by chance by a non-zero pixel value from the next image). Then I have a division by zero, which is not defined.
How can I overcome this problem? One option I came up with was adding +1 to all pixels right before starting the calculations. However, this shifts the range of pixel values from [0, 255] to [1, 256], which makes it impossible to keep working with the data type uint8.
Other authors of papers I have read on this topic often do not consider values close to the range borders. For example, they only calculate the equation for pixel values in [5, 250]. They justify this not by the numerical problem, but by arguing that if an image is totally saturated or totally black, the fingerprint cannot be estimated properly in that area anyway.
But again, my main concern is not about how this algorithm performs best, but rather in general: How to deal with divisions by 0 in image processing?
One solution is to use subtraction instead of division; however, subtraction is not scale invariant, it is translation invariant.
[E.g. the ratio will always be a normalized value between 0 and 1, and if it exceeds 1 you can invert it; you can get the same normalization with subtraction, but you need to find the maximum values attained by the variables.]
Eventually you will have to deal with division. Dividing a black image by itself is a legitimate case - you can translate the values to some other range and then transform back.
However, 5/8 is not the same as 55/58, so you can take this only in a relative way. If you want to know the exact ratios, you had better stick with the original interval and handle the problem values as special cases: e.g. if denom == 0, do something with it; if num == 0 and denom == 0, then 0/0 means we have an identity - it is exactly as if we had 1/1.
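A numpy sketch of that special-casing (what to do for x/0 is left open above; flagging it with NaN here is my assumption):

import numpy as np

def ratio(num, den):
    # Element-wise num/den with the zero cases handled explicitly.
    num = num.astype(np.float64)
    den = den.astype(np.float64)
    out = np.empty_like(num)
    both_zero = (num == 0) & (den == 0)
    den_zero = (den == 0) & ~both_zero
    ok = den != 0
    out[both_zero] = 1.0     # 0/0: an identity, treat it like 1/1
    out[den_zero] = np.nan   # x/0: flag it for whatever policy you choose
    out[ok] = num[ok] / den[ok]
    return out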
In PRNU and fingerprint estimation, if you check the MATLAB implementation on Jessica Fridrich's webpage, they basically create a mask to get rid of saturated and low-intensity pixels, as you mentioned. Then they convert the image matrix with single(I), which makes the image 32-bit floating point, add 1 to the image, and divide.
To your general question: in image processing, I like to create a mask and add one only to the zero-valued pixels.
img   = imread('my gray img');          % grayscale image, uint8
a_mat = rand(size(img));                % some numerator matrix (double)
mask  = double(img == 0);               % 1 where a pixel is zero, 0 elsewhere
div   = a_mat ./ (double(img) + mask);  % element-wise; zero pixels divide by 1
This prevents the division-by-zero error. (Not tested, but it should work.)
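The same mask trick as a numpy sketch, converting to float first the way the Fridrich code does with single():

import numpy as np

def safe_divide(num, img):
    # Divide num by img element-wise, bumping zero pixels to 1 first.
    den = img.astype(np.float32)
    den = den + (den == 0)   # the mask: 0 -> 1, all other values unchanged
    return num / den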
I want to repeat a background image that is rotated. Trying to make it seamless is destroying my soul.
Starting with something simple, consider an image laid out like bricks. Creating a seamless repeating background image is pretty simple:
(the red area is the crop). You can see this working as expected at http://jsfiddle.net/mPqfB.
Now let's say I want to rotate the image by 45 degrees:
Unfortunately, the same crop no longer works, as you can see on http://jsfiddle.net/mPqfB/1.
I'm trying to figure out how to crop the image correctly so that we have a seamless repeat. There's probably some fairly trivial maths involved to do this but I can't for the life of me figure it out.
[Update]
I'm attempting to follow #oezi's calculations, so to make things easier I have created an image of dimensions 100px x 50px.
Therefore:
Least Common Multiple = 100
Hypotenuse² = 100² + 100² = 20000
Now I'm assuming this means we don't have to create an image of 20000px x 20000px. I'm hoping that #oezi can clarify how he performs his resizing?
If a² + b² = c², i.e. c = square root of (a² + b²),
then can we conclude that our crop should be √20000 ≈ 141px?
Finally, this doesn't actually explain where we take the crop from?
[Update 2]
It does look like this is how the resize should be created. Taking a 141px x 141px crop of the image yielded the correct results - http://jsfiddle.net/EfuV2/
As far as where to crop from, it doesn't actually matter!
If the rotation is exactly 45 degrees, you'll have to find the least common multiple of the width and height of your unrotated pattern.
in your case, that's 15100 (width 100 and height 151)
it would be much better to scale your pattern to width 100 and height 150, so the least common multiple is only 300
Take that number and some math (the Pythagorean theorem): assume your number is the length of the two short sides and calculate the length of the hypotenuse - that's our result (make a square image of that size to get your pattern).
in your case, that's 21355
with resizing, it's ~ 424
Note that this is just typed straight from my head because I can't try it out practically at the moment - but I'm really sure it's correct.
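The calculation as a Python sketch (math.lcm needs Python 3.9+; the fractional results explain the one-pixel rounding differences above):

from math import lcm, sqrt

def rotated_tile_side(w, h):
    # Side of the square crop that tiles seamlessly after a 45-degree rotation.
    square = lcm(w, h)       # the unrotated pattern repeats on this square
    return square * sqrt(2)  # hypotenuse of that square

print(rotated_tile_side(100, 50))   # 141.42... -> the 141px crop
print(rotated_tile_side(100, 151))  # 21354.67... -> ~21355
print(rotated_tile_side(100, 150))  # 424.26... -> ~424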
edit: a fast (and messy) test got me to this:
http://i.imgur.com/rZuu9.jpg
http://jsfiddle.net/mPqfB/2/ (click the image-link first, otherwise jsfiddle doesn't show the image)
Accidentally I made the pattern only 423px in height, and the rotation isn't perfect (I don't have Photoshop here), but it's good enough to prove that my math is correct.
The trick is to crop the pattern at points where the section being cut off matches the section remaining on the opposite side of the crop area (see example cuts in blue). It'll probably take some trial and error to get it right but you should be able to do it easily enough.
I have a gray-scale image and I want to make a function that
closely follows the image
is always greater than the image
is smooth at some given scale.
In other words, I want a smooth function that approximates the maximum of another function over a local region while overestimating that function at all points.
Any ideas?
My first pass at this amounted to picking the "high spots" (by comparing the image to a least-squares fit of a high-order 2-D polynomial) and matching a 2-D polynomial to them and their slopes. As the first fit required more working space than I had address space, I think it's not going to work and I'm going to have to come up with something else...
What I did
My end target was to do a smooth adjustment on an image so that each local region uses the full range of values. The key realization was that an "almost perfect" function would do just fine for me.
The following procedure (which never computes the max function explicitly) is what I ended up with:
Find the local mean and standard deviation at each point using a "blur" like function.
offset the image to get a zero mean. (image -= mean;)
divide each pixel by its stdev. (image /= stdev;)
most of the image should now be in [-1, 1] (oddly enough, most of my test images have better than 99% in that range rather than the ~68% that would be expected)
find the standard deviation of the whole image.
map some span +/- n*sigma to your output range.
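A numpy/scipy sketch of those steps (the window radius and the n in n*sigma are assumptions):

import numpy as np
from scipy.ndimage import uniform_filter

def local_normalize(img, radius=15, n_sigma=2.0):
    # Stretch each local region toward the full output range [0, 1].
    x = img.astype(np.float64)
    size = 2 * radius + 1
    mean = uniform_filter(x, size)                    # local mean ("blur")
    var = uniform_filter(x * x, size) - mean * mean   # local variance
    std = np.sqrt(np.maximum(var, 1e-12))             # avoid /0 in flat areas
    x = (x - mean) / std                              # zero mean, unit stdev
    sigma = x.std()                                   # stdev of the whole image
    out = (x + n_sigma * sigma) / (2 * n_sigma * sigma)  # map +/- n*sigma to [0, 1]
    return np.clip(out, 0.0, 1.0)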
With a little manipulation, that can be converted to find the Max function I was asking about.
Here's something that's easy; I don't know how good it is.
To get smoothness, use your favorite blurring algorithm, e.g. averaging points within radius 5. Space cost is on the order of the image size, and time is the product of the image size and the square of the blurring radius.
Then, for each pixel, take the difference with the original image: find the maximum value of (original[i][j] - blurred[i][j]) and add that value to every pixel in the blurred image. The sum is guaranteed to overapproximate the original image. Time cost is proportional to the size of the image, with constant additional space (if you overwrite the blurred image after computing the max).
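A short sketch of that (box blur via scipy; the radius is the example value above):

import numpy as np
from scipy.ndimage import uniform_filter

def smooth_upper_bound(img, radius=5):
    # Blur, then lift by the worst-case undershoot so the result >= img everywhere.
    img = img.astype(np.float64)
    blurred = uniform_filter(img, size=2 * radius + 1)  # average within radius
    return blurred + (img - blurred).max()              # add the max difference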
To do better (e.g., to minimize the square error under some set of constraints), you'll have to pick some class of smooth curves and do some substantial calculations. You could try quadratic or cubic splines, but in two dimensions splines are not much fun.
My quick and dirty answer would be to start with the original image, and repeat the following process for each pixel until no changes are made:
If an overlarge delta in value between this pixel and its neighbours can be resolved by increasing the value of the pixel, do so.
If an overlarge slope change around this pixel and its neighbours can be resolved by increasing the value of the pixel, do so.
A 1D version would look something like this:
for all x:
    d = img[x-1] - img[x]
    if d > DMAX:
        img[x] += d - DMAX
    d = img[x+1] - img[x]
    if d > DMAX:
        img[x] += d - DMAX
    dleft = img[x-1] - img[x]
    dright = img[x] - img[x+1]
    d = dright - dleft
    if d > SLOPEMAX:
        img[x] += d - SLOPEMAX
Maximum filter the image with an RxR filter, then use an order R-1 B-spline smoothing on the maximum-filtered image. The convex hull properties of the B-spline guarantee that it will be above the original image.
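A rough scipy sketch (R is an assumption; the B-spline smoothing is approximated with repeated box filters, since an n-fold box convolution gives a degree n-1 B-spline kernel):

import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def bspline_smoothed_max(img, R=7):
    # RxR maximum filter, then degree R-1 B-spline smoothing via R box filters.
    out = maximum_filter(img.astype(np.float64), size=R)
    for _ in range(R):   # R successive box filters -> degree R-1 spline kernel
        out = uniform_filter(out, size=R)
    return out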
Can you clarify what you mean by your desire that it be "smooth" at some scale? Also, over how large of a "local region" do you want it to approximate the maximum?
Quick and dirty answer: weighted average of the source image and a windowed maximum.