I am programming a small synthesizer application and I input musical notes by clicking along the length of a bar. Now, the musical scale is logarithmic, and my question is: how do I convert the position of the mouse to the relevant pitch? At the moment I calculate a linear ratio. It works, sort of, but I get a wide range of closely packed low notes, and at the far end it takes off, with just a few pixels translating into multiple octaves.
Basically I want this: if I click at the center of the bar (1/2), the frequency is doubled, and at 1/4 it is doubled again, etc…
I'm being stupid here!
Frequencies of musical notes are indeed logarithmic: the frequency doubles when you go one octave higher and halves when you go one octave lower. The standard A (A4) is 440 Hz.
So you need an exponential mapping to translate location into a frequency. Something like f * 2.0^(x/w), where w is the width of one octave in pixels, f is the frequency at x = 0, and ^ is the power operator.
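For example, a quick sketch in Python (the octave width and base frequency below are made-up values):

    OCTAVE_WIDTH = 200   # pixels per octave (hypothetical)
    BASE_FREQ = 110.0    # frequency at x = 0, here A2 (hypothetical)

    def x_to_freq(x):
        # each OCTAVE_WIDTH pixels to the right doubles the frequency
        return BASE_FREQ * 2.0 ** (x / OCTAVE_WIDTH)

    print(x_to_freq(0))             # 110.0 Hz
    print(x_to_freq(OCTAVE_WIDTH))  # 220.0 Hz, one octave up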
I'm currently working towards a 3D model of this, but I thought I would start with 2D. Basically, I have a grid of longitude and latitude with NO2 concentrations across it. What I want to produce, at least for now, is a total amount of Nitrogen Dioxide between two points. Like so:
[figure: 2DGrid]
Basically, these two points are at different lats and lons, and as I stated, I want to find the amount of something between them. The tricky part for me is that the model data I'm working with is gridded, so I need to be able to account for the amount of something along a line at the lats and lons at which that line cuts through said grid.
Another approach, and maybe a better one for my purposes, could be visualized like this: [figure: 3DGrid]
Ultimately, I'd like to be able to create a program (in any language, honestly) that can find the amount of "something" between two points in a 3D grid. If you would like specifics: the bottom altitude is the surface, and the top grid is the top of the atmosphere. The bottom point is a measurement device looking at the sun at a certain time of day (and therefore having a certain zenith and azimuth angle). I want to find the NO2 between that measurement device and the "top of the atmosphere", which in my grid is just the top altitude level (of which there are 25).
I'm rather new to coding, Stack Exchange, and even the subject matter I'm working with, so the sparse code I've written might create more clutter than simply asking the question and seeing what methods/code you might suggest.
Hopefully my question is beneficial!
Best,
Taylor
To traverse all touched cells, you can use the Amanatides-Woo algorithm. It is suitable for both the 2D and the 3D case.
Implementation clues
To account for the quantity taken from every cell, you can apply some model. For example, calculate the path length inside the cell (as the difference between the enter and exit coordinates) and divide by a normalizing factor to get the cell weight (for example, by CellSize*sqrt(3), the cell diagonal length, for the 3D case).
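A minimal 2D sketch of that traversal in Python (the grid origin at (0, 0) and the cell size are assumptions; the 3D version adds a z axis following the same pattern):

    import math

    def traverse_grid(start, end, cell_size):
        # Amanatides-Woo style traversal: yields (ix, iy, length_inside_cell)
        # for every cell the segment from start to end passes through.
        x0, y0 = start
        x1, y1 = end
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        if length == 0:
            return
        ix, iy = int(x0 // cell_size), int(y0 // cell_size)
        step_x = 1 if dx > 0 else -1
        step_y = 1 if dy > 0 else -1

        def t_cross(p, d, i, step):
            # t in [0, 1] along the segment at which we cross the next border
            if d == 0:
                return math.inf
            border = (i + (step > 0)) * cell_size
            return (border - p) / d

        t_max_x = t_cross(x0, dx, ix, step_x)
        t_max_y = t_cross(y0, dy, iy, step_y)
        t_delta_x = cell_size / abs(dx) if dx else math.inf
        t_delta_y = cell_size / abs(dy) if dy else math.inf
        t = 0.0
        while True:
            t_exit = min(t_max_x, t_max_y, 1.0)
            yield ix, iy, (t_exit - t) * length  # path length inside this cell
            if t_exit >= 1.0:
                break
            t = t_exit
            if t_max_x < t_max_y:
                ix, t_max_x = ix + step_x, t_max_x + t_delta_x
            else:
                iy, t_max_y = iy + step_y, t_max_y + t_delta_y

To integrate NO2, multiply each returned length by the concentration stored in that cell and sum the results.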
I have the following equation, which I try to implement:

    K = sum(W_i * I_i, i = 1..d) / sum(I_i^2, i = 1..d)

The upcoming question is not necessarily about this equation, but more generally about how to deal with divisions by zero in image processing. Here, I is an image, W is the difference between the image and its denoised version (so W expresses the noise in the image), and K is an estimated fingerprint, obtained from d images of the same camera. All calculations are done pixel-wise, so the equation does not involve matrix multiplication. For more on the idea of estimating digital fingerprints, consult the corresponding literature, such as the general Wikipedia article or scientific papers.
However, my problem arises when an image has a pixel with value zero, e.g. perfect black (let's say we only have one image, d = 1, so the zero does not get overwritten by chance by the pixel value of the next image, if that next pixel value is nonzero). Then I have a division by zero, which is not defined.
How can I overcome this problem? One option I came up with was adding +1 to all pixels right before I even start the calculations. However, this shifts the range of pixel values from [0, 255] to [1, 256], which then makes it impossible to work with the data type uint8.
Other authors of papers I have read on this topic often do not consider values close to the range borders; for example, they only evaluate the equation for pixel values in [5, 250]. Their reasoning is not the numerical problem: they argue that if an image is totally saturated or totally black, the fingerprint cannot even be estimated properly in that area.
But again, my main concern is not how this particular algorithm performs best, but rather, in general: how do you deal with divisions by 0 in image processing?
One solution is to use subtraction instead of division; however, subtraction is not scale invariant, it is translation invariant.
[E.g. the ratio will always be a normalized value between 0 and 1, and if it exceeds 1 you can invert it; you can have the same normalization with subtraction, but you need to find the maximum values attained by the variables.]
Eventually you will have to deal with division. Dividing a black image by itself is a proper subject - you can translate the values to some other range and then transform back.
However, 5/8 is not the same as 55/58, so you can take this only in a relative way. If you want to know the exact ratios, you had better stick with the original interval and handle the problem values as special cases: e.g. if denom == 0, do something with it; if num == 0 and denom == 0 (0/0), that means we have an identity - it is exactly as if we had 1/1.
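For illustration, the special-casing could look like this (a sketch using numpy; the 0/0 -> 1 convention is the one just described, and NaN for x/0 is merely a placeholder for "do something with it"):

    import numpy as np

    def safe_ratio(num, den):
        # element-wise num/den with the zero cases handled explicitly
        out = np.empty(num.shape, dtype=float)
        both_zero = (num == 0) & (den == 0)
        den_zero = (den == 0) & ~both_zero
        ok = ~den_zero & ~both_zero
        out[both_zero] = 1.0    # 0/0: treat as identity, like 1/1
        out[den_zero] = np.nan  # x/0: undefined, handle separately
        out[ok] = num[ok] / den[ok]
        return out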
In PRNU and fingerprint estimation, if you check the MATLAB implementation on Jessica Fridrich's webpage, they basically create a mask to get rid of saturated and low-intensity pixels, as you mentioned. Then they convert the image matrix with single(I), which makes the image 32-bit floating point. Add 1 to the image and divide.
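In numpy terms that recipe might look roughly like this (fingerprint_ratio is a hypothetical name; W stands for the noise residual from the question, and the [5, 250] limits echo the question):

    import numpy as np

    def fingerprint_ratio(I_uint8, W, low=5, high=250):
        I = I_uint8.astype(np.float32)   # like MATLAB's single(I): 32-bit float
        ratio = W / (I + 1.0)            # +1 is safe now, no uint8 overflow;
                                         # stand-in for the pixel-wise division
        mask = (I_uint8 >= low) & (I_uint8 <= high)  # drop saturated/near-black pixels
        ratio[~mask] = 0.0
        return ratio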
As for your general question: in image processing, I like to create a mask and add one only to the zero-valued pixels.
img = imread('my gray img');        % grayscale uint8 image
a_mat = rand(size(img));            % example numerator
mask = uint8(img == 0);             % 1 where a pixel is zero, 0 elsewhere
div = a_mat ./ double(img + mask);  % element-wise division, denominator never zero
This will prevent the division-by-zero error. (Not tested, but it should work.)
I'm writing a 2D game, in which I would like to have crate-like objects. These objects would move around, like real crates do. I have a hypothetical idea of how I would like to achieve that:
Basically I'd store the coordinates of the boxes' corners together with their force and velocity vectors, and in every update I'd do the following steps:
1. Apply the forces (gravity, forces from collisions, etc.) accordingly.
2. Modify the velocity vector based on the force.
3. Move every corner of the box, like so: [figure]
4. Repeat step 3 for every corner, so I get the real movement of the box.
My questions are: Is this approach heading in the right direction? Is this theory even correct? If not, what would be the correct way to move a box around based on vectors in a 2D environment?
Just to clarify: I'm only dragging corner "A" in the picture, but I want to repeat the dragging for every other corner, each with its own vectors. By "dragging" I mean the algorithm I just stated.
Keeping each corner's coordinate and speed makes no sense as you would be storing lots of redundant information. Boxes are rigid objects, which means that there are constraints that must be satisfied at any time instant, namely the distance between any two given corners is fixed. This also translates to a constraint that links the velocities of all four corners and so they are not independent values. With rigid bodies the movement of any point is the sum of two independent movements - the linear movement of the centre of mass (CM) and the rotation around a fixed axis - often, but not always, chosen to be the one that goes through the CM. Hence you only need to store the position and the velocity of the crate's CM (which coincides with the geometric centre of the crate) as well as the angle of rotation and the rate of rotation around the CM.
As to the motion, the gravity field is a constant vector field and hence cannot induce rotation in symmetric objects like those rectangular crates. Instead it only produces accelerated vertical motion of the CM. This is also what happens with all external forces - one has to take their vector sum and apply it to the CM. Only external forces whose direction does not go through the CM give torque and so cause rotation. Such forces are any external pushes/pulls or the reaction forces that arise when crates collide with each other or hit the ground / a wall. Computing torque due to external forces is easy, but computing reaction forces can be quite an involved process because of the constrained dynamics that has to be employed. Once the torque has been computed, one has to divide it by the moment of inertia of the crate in order to get the angular acceleration. Often it is more convenient to use another axis and not the one that goes through the CM - Steiner's theorem can be employed in this case in order to compute the moment of inertia around that other axis.
To summarise:
all forces acting on the crate are first added together (as vectors), and the resultant force (divided by the mass of the crate) determines the linear acceleration of the CM;
the torque of all forces is computed and then used to determine the angular acceleration around a given axis.
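A minimal sketch of that state and update in Python (mass, size and timestep are hypothetical; collision and reaction forces are left out):

    import math

    class Crate:
        def __init__(self, x, y, w, h, mass):
            self.x, self.y = x, y        # centre of mass (CM)
            self.vx, self.vy = 0.0, 0.0  # CM velocity
            self.angle = 0.0             # rotation around the CM
            self.omega = 0.0             # angular velocity
            self.mass, self.w, self.h = mass, w, h
            # moment of inertia of a uniform rectangle about its CM
            self.inertia = mass * (w * w + h * h) / 12.0

        def step(self, fx, fy, torque, dt):
            self.vx += fx / self.mass * dt            # resultant force -> CM acceleration
            self.vy += fy / self.mass * dt
            self.omega += torque / self.inertia * dt  # torque -> angular acceleration
            self.x += self.vx * dt
            self.y += self.vy * dt
            self.angle += self.omega * dt

        def corners(self):
            # corner positions follow from the CM pose; they are never stored
            c, s = math.cos(self.angle), math.sin(self.angle)
            for dx, dy in [(-self.w/2, -self.h/2), (self.w/2, -self.h/2),
                           (self.w/2, self.h/2), (-self.w/2, self.h/2)]:
                yield self.x + c*dx - s*dy, self.y + s*dx + c*dy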
See here for some sample problems of rigid body motion and how the physics is actually worked out.
Given your algorithm, if by "velocity vector" you actually mean "the velocity of the CM", then 1 would be correct - all corners move in the same direction (the linear motion of the CM). But 2 would not always be correct - the proper angle of rotation depends on the time over which the torque was applied (e.g. the simulation timestep), and one has to take into account that the lever arm length changes in between as the crate rotates.
I have a 3D closed mesh car object whose surface is made up of triangles. I want to calculate its volume, center of volume and inertia tensor.
Could you help me?
Regards.
George
For volume...
For each triangular facet, look up its corner points. Call 'em P, Q, R.
Compute this quantity (I call it "partial volume")
pv = Px*Qy*Rz + Py*Qz*Rx + Pz*Qx*Ry - Px*Qz*Ry - Py*Qx*Rz - Pz*Qy*Rx
Add these together for all facets and divide by 6.
Important! The P,Q,R for each facet must be arranged clockwise as seen from outside. (Or all counter-clockwise, as long as it's consistent for all facets.)
If the mesh has any quadrilaterals, just temporarily hallucinate a diagonal joining one pair of opposite corners. That makes it into two triangles.
Practical computational improvement: before doing math with P, Q and R, subtract the coordinates of some "center" point C. This can be the center of mass, the midpoint between the min/max x, y and z, or any convenient point inside or near the mesh. This helps minimize truncation errors for more accurate volumes.
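Putting it together, a small sketch (triangles is a hypothetical list of (P, Q, R) corner triples with consistent winding; center is the optional shift just described):

    def mesh_volume(triangles, center=(0.0, 0.0, 0.0)):
        cx, cy, cz = center
        total = 0.0
        for P, Q, R in triangles:
            px, py, pz = P[0] - cx, P[1] - cy, P[2] - cz
            qx, qy, qz = Q[0] - cx, Q[1] - cy, Q[2] - cz
            rx, ry, rz = R[0] - cx, R[1] - cy, R[2] - cz
            # partial volume: the scalar triple product P . (Q x R),
            # six times the signed volume of tetrahedron (C, P, Q, R)
            total += (px*qy*rz + py*qz*rx + pz*qx*ry
                      - px*qz*ry - py*qx*rz - pz*qy*rx)
        return abs(total) / 6.0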
From a numerical point of view, what you are trying to achieve is quite simple and can be reduced to calculating a few quadratures. Wikipedia will provide the needed information about the maths behind it.
If you are looking for an out-of-the-box volume calculation, take a look at this entry.
As for inertia -- shape is not enough, as you also need the distribution of mass.
Well, there isn't much information on the car being provided here - you should be able to break down the car into simpler shapes; the higher the degree of approximation you require, the more simple shapes you can break it into. (This could be difficult if the car is somehow dynamically generated and completely different every time... but I don't see that situation making any sense.)
This should help with finding the inertia tensor of various simpler shapes: http://www.gamedev.net/community/forums/topic.asp?topic_id=57001 . Finding the volumes of things like spheres and cubes is pretty common knowledge, so I won't bother linking that.
I think it was Archimedes who discovered that if you submerge the car in a volume of liquid, the displaced liquid will have the same volume as the car.
I'm not sure how this would help you in this case, though. Having a liquid simulation running in the background and submerging the mesh into it sounds a bit over the top. Although, I think it does work, and therefore qualifies as a (nonetheless rather useless) answer. ;^)
I have an implicit scalar field defined in 2D; for every point in 2D I can have it compute an exact scalar value, but it's a somewhat complex computation.
I would like to draw an iso-line of that surface, say the line of the '0' value. The function itself is continuous, but the '0' iso-line can have multiple continuous pieces, and it is not guaranteed that all of them are connected.
Calculating the value for each pixel is not an option, because that would take too much time - on the order of a few seconds - and this needs to be as close to real time as possible.
What I'm currently using is a recursive subdivision of space, which can be thought of as a kind of quad-tree. I take an initial, very coarse sampling of the space, and if I find a square that contains a transition from positive to negative values, I recursively divide it into 4 smaller squares and check again, stopping at the pixel level. The positive-negative transition is detected by sampling a square at its 4 corners.
This works fairly well, except when it doesn't. The iso-lines that are drawn sometimes get cut, because the transition detection fails for transitions that happen along a small part of an edge and don't cross a corner of a square.
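Roughly what I'm doing now, as a simplified sketch (f is the scalar field; plot_pixel marks a contour pixel):

    def refine(f, x, y, size, plot_pixel):
        # sample the square at its 4 corners
        signs = [f(x, y) > 0, f(x + size, y) > 0,
                 f(x, y + size) > 0, f(x + size, y + size) > 0]
        if all(signs) or not any(signs):
            return            # no corner transition detected -- this is where contours get lost
        if size <= 1:
            plot_pixel(x, y)  # pixel-level square containing a transition
            return
        half = size / 2
        for nx, ny in [(x, y), (x + half, y), (x, y + half), (x + half, y + half)]:
            refine(f, nx, ny, half, plot_pixel)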
Is there a better way to do iso-line drawing in this setting?
I've had a lot of success with the algorithms described here: http://web.archive.org/web/20140718130446/http://members.bellatlantic.net/~vze2vrva/thesis.html, which discuss adaptive contouring (similar to what you describe) and also some other issues with contour plotting in general.
There is no general way to guarantee finding all the contours of a function without looking at every pixel. There could be a very small closed contour around a region only about the size of a pixel where the function is positive, inside a larger region where the function is generally negative. Unless you sample finely enough to place a sample inside that positive region, there is no general way of knowing it is there.
If your function is smooth enough, you may be able to guess where such small closed contours lie, because the modulus of the function gets small in a region surrounding them. The sampling could then be refined in these regions only.
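A rough sketch of that extra check (eps is a hypothetical threshold; squares flagged this way would be subdivided even without a sign change at the corners):

    def suspicious(f, x, y, size, eps):
        # all corners have the same sign, but the modulus is small:
        # a tiny closed contour may be hiding inside this square
        vals = [f(x, y), f(x + size, y), f(x, y + size), f(x + size, y + size)]
        same_sign = all(v > 0 for v in vals) or all(v < 0 for v in vals)
        return same_sign and min(abs(v) for v in vals) < eps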