n coins. Which is fake? [duplicate]

Closed 12 years ago as a duplicate of:
Algorithm to find minimum number of weighings required to find defective ball from a set of n balls
We have n coins. One of them is fake: it is either heavier or lighter than the rest (we don't know which). We have a balance scale with two pans. How can we find the fake coin in p weighings?
Can you give me a hand with writing such a program? I don't need a whole program, just ideas.
Thank you.

This is known as the Balance puzzle. See Marcel Kołodziejczyk's Two-pan balance and generalized counterfeit coin problem for a generalization of this problem.

I remember solving this for n=12 and 13, partly by hand and then with a program at the end. I don't know how I would solve it for a general n... but I know how I'd start: by considering small values of n and doing it by hand.
I suspect there are patterns that can be used recursively here... but you'll find them much easier to discover with pen and paper for small values (n=4 to 7, for example) than by coding.

Put a coin on each side; the real ones will balance each other out, and the fake will tip the scale one way or the other. When the scales aren't balanced, one of the two you just put on is fake, so try each against a known-real coin.
If the coins are objects you're handed, then you should be able to do that in a program quite easily.
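To make that concrete, here is a minimal Python sketch of the pairwise idea above. Note this is not the optimal p-weighing strategy (the classic three-group solutions handle up to (3^p - 3)/2 coins in p weighings, which is where the famous 12-coins-in-3-weighings puzzle comes from); `coins` and `weigh` are hypothetical stand-ins for the physical setup:

def weigh(left, right):
    # Returns -1, 0, or 1 as the left pan is lighter, balanced, or heavier.
    return (sum(left) > sum(right)) - (sum(left) < sum(right))

def find_fake(coins):
    """Pairwise scan: balanced pairs are genuine; an unbalanced pair
    contains the fake, so compare one of its coins against a genuine one."""
    for i in range(0, len(coins) - 1, 2):
        if weigh([coins[i]], [coins[i + 1]]) != 0:
            genuine = coins[i + 2] if i + 2 < len(coins) else coins[i - 1]
            if weigh([coins[i]], [genuine]) != 0:
                return i
            return i + 1
    return len(coins) - 1   # odd coin count: the unpaired last coin is fake

print(find_fake([10, 10, 10, 11, 10]))   # -> 3 (heavier fake)

This takes about n/2 weighings in the worst case, so it's a baseline to beat, not the puzzle's answer.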


Big O Log problem solving

I have a question that comes from an algorithms book I'm reading and I am stumped on how to solve it (it's been a long time since I've done log or exponent math). The problem is as follows:
Suppose we are comparing implementations of insertion sort and merge sort on the same
machine. For inputs of size n, insertion sort runs in 8n^2 steps, while merge sort runs in 64n log n steps. For which values of n does insertion sort beat merge sort?
Log is base 2. I've started out trying to solve for equality, but get stuck around n = 8 log n.
I would like the answer to discuss how to solve this mathematically (brute force with Excel not admissible, sorry ;) ). Any links to a description of log math would also be very helpful for understanding your answer.
Thank you in advance!
http://www.wolframalpha.com/input/?i=solve%288+log%282%2Cn%29%3Dn%2Cn%29
(edited since old link stopped working)
Your best bet is to use Newton's method.
http://en.wikipedia.org/wiki/Newton%27s_method
One technique for solving this would be to simply grab a graphing calculator and graph both functions (see the Wolfram link in another answer). Find the intersection that interests you (in case there are multiple intersections, as there are in your example).
In any case, there isn't a simple closed-form expression for solving n = 8 log₂ n (as far as I know). It may be simpler to rephrase the question as: "Find a zero of f(n) = n - 8 log₂ n". First, find a region containing the zero you're interested in, then keep shrinking that region. For instance, suppose you know your target n is greater than 42 but less than 44. f(42) is less than 0, and f(44) is greater than 0. Try f(43). It's less than 0, so try 43.5. It's still less than 0, so try 43.75. It's greater than 0, so try 43.625. It's greater than 0, so keep going down, and so on. This technique is called bisection - essentially a binary search over a real interval.
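A minimal Python sketch of that shrinking-interval idea, using the bracket [42, 44] found above:

import math

def f(n):
    return n - 8 * math.log2(n)

lo, hi = 42.0, 44.0            # f(42) < 0 < f(44): the zero is bracketed
while hi - lo > 1e-9:
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid               # zero lies to the right of mid
    else:
        hi = mid               # zero lies to the left of mid

print(lo)   # ~43.56: insertion sort wins for integer n from 2 through 43

Newton's method (linked in the other answer) converges faster, using f'(n) = 1 - 8/(n ln 2), but bisection never leaves the bracket, which makes it harder to get wrong.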
Sorry, that's just a variation of "brute force with excel" :-)
Edit:
For the fun of it, I made a spreadsheet that solves this problem with binary search: binary-search.xls. The binary search logic is in the second data column, and I just auto-extended that.

fourier transform to transpose key of a wav file

I want to write an app to transpose the key a wav file plays in (for fun, I know there are apps that already do this)... my main understanding of how this might be accomplished is to
1) chop the audio file into very small blocks (say 1/10 of a second)
2) run an FFT on each block
3) shift the frequency spectrum up or down depending on what key I want
4) use an inverse FFT to return each block to the time domain
5) glue all the blocks together
But now I'm wondering whether the transformed blocks would no longer be continuous when I try to glue them back together. Any ideas on how I should do this to guarantee continuity, or am I just worrying about nothing?
Overlap the time samples for each block by half so that each block after the first consists of the last N/2 samples from the previous block and N/2 new samples. Be sure to apply some window to the samples before the transform.
After shifting the frequency, perform an inverse FFT and use the middle N/2 samples from each block. You'll need to adjust the final gain after the IFFT.
Of course, mixing the time samples with a sine wave and then low pass filtering will provide the same shift in the time domain as well. The frequency of the mixer would be the desired frequency difference.
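For what it's worth, here is a rough numpy sketch of the block recipe described above: 50% overlap, Hann window before the transform, keep the middle N/2 samples of each inverse FFT, and divide out the window over that region (that last step is one reading of "adjust the final gain"). Two caveats: rolling bins shifts every frequency by a constant number of Hz, whereas a true key change scales frequencies by a ratio, and nothing here maintains phase coherence between blocks, which is what a real phase vocoder adds - so expect artifacts. The function name is mine:

import numpy as np

def shift_spectrum(x, bins, n=2048):
    """Crude spectral shift: Hann-windowed blocks at 50% overlap,
    spectrum rolled by `bins` bins, middle n/2 samples kept."""
    window = np.hanning(n)
    mid = slice(n // 4, 3 * n // 4)
    out = np.zeros(len(x))
    for start in range(0, len(x) - n, n // 2):
        spectrum = np.fft.rfft(x[start:start + n] * window)
        shifted = np.zeros_like(spectrum)
        if bins >= 0:
            shifted[bins:] = spectrum[:len(spectrum) - bins]
        else:
            shifted[:bins] = spectrum[-bins:]
        block = np.fft.irfft(shifted)
        # The middle halves of consecutive blocks tile exactly; dividing by
        # the window undoes the analysis taper over that region.
        out[start + n // 4 : start + 3 * n // 4] = block[mid] / window[mid]
    return out

# Hypothetical usage: shift a 440 Hz tone up by 12 bins.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
higher = shift_spectrum(tone, bins=12)  # one bin = sr/n ~ 21.5 Hz, so ~ +258 Hz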
For speech you might want to look at PSOLA - this is a popular algorithm for pitch-shifting and/or time stretching/compression which is a little more sophisticated than the basic overlap-add method, but not much more complex.
If you need to process non-speech samples, e.g. music, then there are several possibilities, however the overlap-add FFT/modify/IFFT approach mentioned in other answers is probably the best bet.
Found this great article on the subject, for anyone trying it in the future!
You may have to find a zero-crossing between the blocks to glue the individual wavs back together. Otherwise you may find that you are getting clicks or pops between the blocks.

How can determine dice sum probabilities? [closed]

Closed 8 years ago.
In trying to solve a particular Project Euler question, I ran into difficulties with a particular mathematical formula. According to this web page (http://www.mathpages.com/home/kmath093.htm), the formula for determining the probability of rolling a sum T on n dice, each with s sides numbered 1 to s, can be given as follows:
[formula image; the link is now dead - see the corrected formula in the answers below]
After I started getting nonsensical answers in my program, I started stepping through, and tried this for some specific values. In particular, I decided to try the formula for a sum T=20, for n=9 dice, each with s=4 sides. As the sum of 9 4-sided dice should give a bell-like curve of results, ranging from 4 to 36, a sum of 20 seems like it should be fairly (relatively speaking) likely. Dropping the values into the formula, I got:
[image of the formula with T=20, n=9, s=4 plugged in; the link is now dead]
Since j runs from 0 to 7, we must sum over all j... but for most of these values the result is 0, because at least one of the binomial coefficients is 0. The only values of j that seem to return non-zero results are 3 and 4. Dropping 3 and 4 into this formula, I got:
[image of the j=3 and j=4 terms; the link is now dead]
Which, when simplified, seemed to go to:
[image of the simplified expression; the link is now dead]
which eventually simplifies down to ~30.75. Now, as a probability, of course, 30.75 is way off...the probability must be between 0 and 1, so something has gone terribly wrong. But I'm not clear what it is.
Could I be misunderstanding the formula? Very possible, though I'm not clear at all on where the breakdown would be occurring. Could it be transcribed wrong on the web page? Also possible, but I've found it difficult to find another version of it online to check against. Could I just be making a silly math error? Also possible... though my program comes up with a similar value, so I think it's more likely that I'm misunderstanding something.
Any hints?
(I would post this on MathOverflow.com, but I don't think it even comes close to being the kind of "postgraduate-level" mathematics that is required to survive there.)
Also: I definitely do not want the answer to the Project Euler question, and I suspect that other people who may stumble across this would feel the same way. I'm just trying to figure out where my math skills are breaking down.
According to MathWorld (formula 9 is the relevant one), the formula from your source is wrong.
The correct formula is supposed to use n choose j, not n choose T. That'll really reduce the size of the values within the summation.
The MathWorld formula uses k instead of j and p instead of T; it reads:

P(p) = (1/s^n) * sum from k=0 to floor((p-n)/s) of (-1)^k * C(n, k) * C(p - s*k - 1, n - 1)
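A quick Python sanity check of that corrected formula against brute-force enumeration, using the question's values (T=20, n=9, s=4); both should agree:

from math import comb
from itertools import product

def p_sum(T, n, s):
    # Corrected formula: sum of (-1)^k * C(n, k) * C(T - s*k - 1, n - 1)
    total = sum((-1) ** k * comb(n, k) * comb(T - s * k - 1, n - 1)
                for k in range((T - n) // s + 1))
    return total / s ** n

def p_sum_brute(T, n, s):
    # Enumerate all s^n equally likely outcomes (fine for small n)
    hits = sum(1 for r in product(range(1, s + 1), repeat=n) if sum(r) == T)
    return hits / s ** n

print(p_sum(20, 9, 4))        # 0.09005... -- a sensible probability
print(p_sum_brute(20, 9, 4))  # same value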
Take a look at the article in Wikipedia - Dice.
The formula there looks similar, but has one difference. I think it will solve your problem.
I'm going to have to show my ignorance here... Isn't 9 choose 20 = 0? More generally, isn't n choose T going to be 0 whenever T > n? Perhaps I'm reading this formula incorrectly (I'm not a math expert), but looking at de Moivre's work, I'm not sure how this formula was derived; it seems slightly off. You might try working up from de Moivre's original math, page 39, in the lemma.

Finding area of straight line with graph (Math question but needed for flot)

Okay, so this is a straight math question and I read up on meta that those need to be written to sound like programming questions. I'll do my best...
So I have graph made in flot that shows the network usage (in bytes/sec) for the user. The data is 4 minutes apart when there is activity, and otherwise set at the start of the usage range (let's say day 1) and the end of the range (day 7). The data is coming from a CGI script I have no control over, so I'm fairly limited in what I can provide the user.
I never took trig or calculus, so I'm pretty much in over my head. What I want is for the user to have the option to click any point on the graph and see their bandwidth usage for that moment. Since the lines between real data points are drawn straight, this can be done by getting the points before and after where the user has clicked and finding the y-interval.
It took me weeks to finally get a helpful math person to explain this to me. Everyone else has insisted on trying to teach me Riemann sum techniques and all sorts of other heavy stuff that is not only confusing to me, but doesn't seem necessary for the problem.
But I also want the user to be able to highlight the graph between two arbitrary points on the x-axis (time) to get the total amount of network usage during that range. I know this would be inaccurate, but I need it to be the right kind of inaccurate, backed by a solid equation.
I thought this was the area under the line, but experiments with much simpler graphs make this seem just far too high. I figured out I could take the distance from y2 - y1 and multiply it by x2 - x1 and then divide by two to get the area of the graph below the line like a triangle, but again, the numbers seemed too high. (Maybe they are just big numbers and I don't get this math stuff at all.)
So what I need, if anyone would be really awesome enough to provide it before this question is closed down for being too pure-math, is either the name of the concept I should be researching or the equation itself. Or the bad news that I do need advanced math to get an accurate result.
Just as a last note: I am not bad at math, I just am not familiar with math beyond 10th grade, so I need some place to start. All the math sites seem to keep it too simple or go way over my pay grade.
If I understood correctly what you're asking (and that is somewhat doubtful), you should find what you seek in these links:
Linear interpolation
(calculating the value of the point in between)
Trapezoidal rule
(calculating the area below the "curve")
*****Edit, so we can get this over :) without much ado:*****
So I have graph made in flot that shows the network usage (in bytes/sec) for the user. The data is 4 minutes apart when there is activity, and otherwise set at the start of the usage range (let's say day 1) and the end of the range (day 7). The data is coming from a CGI script I have no control over, so I'm fairly limited in what I can provide the user.
What is a "flot" ?
Okay, so you have speed on the y-axis [in bytes/sec] and time on the x-axis [in sec], right?
That means that if you're flotting (I'm bored, yes :) speed over time in linear segments, then interpolating at some particular point in time will give you the speed at that particular point in time.
If you wish to calculate how much bandwidth you've spent, you need to determine the area beneath that curve. The area from point "a" to point "b" will give you the bandwidth spent, in [bytes], over that time period.
It took me weeks to finally get a helpful math person to explain this to me. Everyone else has insisted on trying to teach me Riemann sum techniques and all sorts of other heavy stuff that is not only confusing to me, but doesn't seem necessary for the problem.
In the immortal words of Snoopy: "Good grief !"
But I also want the user to be able to highlight the graph between two arbitrary points on the x-axis (time) to get the total amount of network usage during that range. I know this would be inaccurate, but I need it to be the right kind of inaccurate, backed by a solid equation.
It would not be inaccurate.
It would actually be perfectly accurate (well, apart from round-off error in bytes :), since you're using linear interpolation on linear segments.
I thought this was the area under the line, but experiments with much simpler graphs make this seem just far too high. I figured out I could take the distance from y2 - y1 and multiply it by x2 - x1 and then divide by two to get the area of the graph below the line like a triangle, but again, the numbers seemed too high. (Maybe they are just big numbers and I don't get this math stuff at all.)
"like a triangle" --> should be "like a trapezoid"
If you do deltax*(y1+y2)/2 you will get the area, yes (this works only for linear segments). This is the basic principle of the trapezoidal rule. (Note it's y1+y2 - the average of the two heights times the width; y2-y1 would only give you the triangular part on top.)
If you're uncertain about what you're calculating, use dimensional analysis: speed is in bytes/sec, time is in sec, bandwidth is in bytes. Multiplying speed * time = bandwidth, and so on.
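To tie both pieces together, a small Python sketch, where `points` is a hypothetical sorted list of (seconds, bytes/sec) samples like the ones your CGI script provides - linear interpolation for the click, trapezoidal rule for the highlighted range:

import bisect

# Hypothetical samples: sorted (time in seconds, rate in bytes/sec)
points = [(0, 100.0), (240, 350.0), (480, 200.0), (720, 0.0)]
times = [t for t, _ in points]

def rate_at(t):
    """Linear interpolation: instantaneous rate at time t (bytes/sec).
    t should lie within the sampled time range."""
    i = bisect.bisect_right(times, t)
    i = min(max(i, 1), len(points) - 1)   # clamp so both neighbors exist
    (t0, y0), (t1, y1) = points[i - 1], points[i]
    return y0 + (y1 - y0) * (t - t0) / (t1 - t0)

def bytes_between(a, b):
    """Trapezoidal rule: total bytes transferred between times a and b."""
    ts = [a] + [t for t in times if a < t < b] + [b]
    ys = [rate_at(t) for t in ts]
    return sum((ts[i + 1] - ts[i]) * (ys[i] + ys[i + 1]) / 2
               for i in range(len(ts) - 1))

print(rate_at(120))           # 225.0 bytes/sec (halfway up the first segment)
print(bytes_between(0, 480))  # 120000.0 bytes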
What I want is for the user to have the option to click any point on the graph and see their bandwidth usage for that moment. Since the lines between real data points are drawn straight, this can be done by getting the points before and after where the user has clicked and finding the y-interval.
Yes, that's a good way to find that instantaneous value. When you report that value back, it's in the same units as the y-axis, so that means bytes/sec, right?
I don't know how rapidly the rate changes between points, but it's even simpler if you simply pick the closest point and report its value. You simplify your problem without sacrificing too much accuracy.
I thought this was the area under the line, but experiments with much simpler graphs make this seem just far too high. I figured out I could take the distance from y2 - y1 and multiply it by x2 - x1 and then divide by two to get the area of the graph below the line like a triangle, but again, the numbers seemed too high. (Maybe they are just big numbers and I don't get this math stuff at all.)
To calculate the total bytes over a given time interval, you should find the index closest to the starting and ending point and multiply the value of y by the spacing of your x-points and add them all together. That will give you the total # of bytes consumed during that time interval, but there's one more wrinkle you might have forgotten.
You said that the points come in "4 minutes apart", and your y-axis is in bytes/second. Remember that units matter. Your area is the sum of bytes/second times a spacing in minutes. To make the units come out right you have to multiply by 60 seconds/minute to get the final value of bytes that you want.
If that "too high" value is still off, consider units again. It's 1024 bytes per kbyte, and 1024*1024 bytes per MB. Check the units of the values you're checking the calculation against.
UPDATE:
No wonder you're having problems. Your original question CLEARLY stated bytes/sec. Even this question is imprecise and confusing. How did you arrive at "amount of data" at a given time stamp? Are those the total bytes transferred since the last time stamp? If yes, simply add the values between the start and end of the interval you want and convert to the units convenient for you.
The network usage total is not in bytes (kilo-, mega-, whatever) per second. It would be in just straight bytes (or kilo-, or whatever).
For example, 2 megabytes per second over an interval of 10 seconds would be 20 megabytes total. It would not be 20 megabytes per second.
Or do you perhaps want average bytes per second over an interval?
This would be a lot easier for you if you would accept that there is well-established terminology for the concepts that you are having trouble expressing concisely or accurately, and that these mathematical terms have been around far longer than you. Since you've clearly gone through most of the trouble of understanding the concepts, you might as well break down and start calling them by their proper names.
That said:
There are 2 obvious ways to graph bandwidth, and two ways you might be getting the bandwidth data from the server. First, there's the cumulative usage function, which for any time is simply the total amount of data transferred since the start of the measurement. If you plot this function, you get a graph that never decreases (since you can't un-download something). The units of the values of this function will be bytes or kB or something like that.
What users are typically interested in is the instantaneous usage function, which indicates how much bandwidth you are using right now. In mathematical terms, this is the derivative of the cumulative function. This derivative can take on any value from 0 (you aren't downloading) to the rated speed of your network link (indicating that you're pushing as much data as possible through your connection). The units of this function are bytes per second, or something related like Mbps (megabits per second).
You can approximate the instantaneous bandwidth with the average data usage over the past few seconds. This is computed as
(number of bytes transferred)
-----------------------------------------------------------------
(number of seconds that elapsed while transferring those bytes)
Generally speaking, the smaller the time interval, the more accurate the approximation. For simplicity's sake, you usually want to compute this as "number of bytes transferred since last report" divided by "number of seconds since last report".
As an example, if the server is giving you a report every 4 minutes of "total number of bytes transferred today", then it is giving you the cumulative function and you need to approximate the derivative. The instantaneous bandwidth usage rate you can report to users is:
(total transferred as of now) - (total as of 4 minutes ago) bytes
-----------------------------------------------------------
4*60 seconds
If the server is giving you reports of the form "number of bytes transferred since last report", then you can directly report this to users and plot that data relative to time.

On the other hand, if the user (or you) is concerned about a quota on total bytes transferred per day, then you will need to transform the (approximately) instantaneous data you have into the cumulative data. This process, known as computing the integral, is the opposite of computing the derivative, and is in some ways conceptually simpler. If you've kept track of each of the reports from the server and the timestamp, then for each time, the value you plot is the total of all the reports that came in before that time. If you're doing this in realtime, then every time you get a new report, the graph jumps up by the amount in that report.
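In code, the two directions are just a pairwise difference and a running sum; a tiny Python sketch with made-up report values:

# Hypothetical cumulative reports: total bytes as of each 4-minute report
cumulative = [0, 120000, 300000, 300000, 450000]

# Derivative: approximate rate (bytes/sec) over each 4-minute interval
rates = [(b - a) / (4 * 60) for a, b in zip(cumulative, cumulative[1:])]
print(rates)   # [500.0, 750.0, 0.0, 625.0]

# Integral: rebuild cumulative totals from per-interval reports
per_interval = [b - a for a, b in zip(cumulative, cumulative[1:])]
totals, running = [], 0
for delta in per_interval:
    running += delta
    totals.append(running)
print(totals)  # [120000, 300000, 300000, 450000]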
I am not bad at math, ... I just am not familiar with math beyond 10th grade
This is like saying "I'm not bad at programming, I have no trouble with ifs and loops but I never got around to writing more than one function."
I would suggest you enrol in a maths class of some kind. An understanding of matrices and the basics of calculus gives you an appreciation of many things, and can be useful in all sorts of areas. You'll be able to understand more of Wikipedia articles and SO answers - and questions!
If you can't afford that, try to find some lecture videos or something.
Everyone else has insisted on trying to teach me Riemann sum techniques
I can't see why. You don't need them for this - though if you had learned them, I expect you would find it easier to come up with a solution. You see, Riemann sums attempt to give you a "familiar" notion of area. The sort of area you (hopefully) learned years ago.
Getting the area below your usage graph between two points will tell you (approximately) how much was used over that period.
How do you find the area of a floor plan? You break it up into rectangles and triangles, find the area of each, and add them together. You can do the same thing with your graph, basically. Someone has worked out a simple way of doing this called the trapezoidal rule. It's just a matter of choosing how to divide your graph into strips, and in your case this is easy: just use the data points themselves as dividers. (You'll also need to work out the value of the graph at the left and right ends of the region selected by the user, using linear interpolation.)
If there's anything I've said that isn't clear to you (as there may well be), please leave a comment.

Normalizing FFT Data for Human Hearing

The typical FFT for audio looks pretty similar to this, with most of the action happening on the far left side
http://www.flight404.com/blog/images/fft.jpg
He multiplied it by a partial sine wave to get it to the bottom, but the article isn't too specific on this part of it. It also seems like a "good enough" modification of the dataset, rather than one based on some property. I understand that human hearing is better suited to the higher frequencies, thus, most music will have amplified bass and attenuated treble so that both sound to us as being of relatively equal strength.
My question is what modification needs to be done to the FFT to compensate for this standard falloff?
for (int i = 0; i < fft.length; i++) {
    fft[i] = fft[i] * Math.log(i + 1); // does, eh, OK, but the high end
                                       // is still not really "loud" enough
}
EDIT:
http://en.wikipedia.org/wiki/Equal-loudness_contour
I came across this article; I think it might be the direction to head in, but there still might be some property of the FFT that needs to be counteracted.
First, are you sure you want to do this? It makes sense to compensate for some things, like the microphone response not being flat, but not human perception. People are used to hearing sounds with the spectral content that the sounds have in the real world, not along perceptual equal loudness curves. If you play a sound that you've modified in the way you suggest it would sound strange. Maybe some people like the music to have enhanced low frequencies, but this is a matter of taste, not psychophysics.
Or maybe you are compensating for some other reason, for example, taking into account the poorer sensitivity to lower frequencies might enhance a compression algorithm. Is this the idea?
If you do want to normalize by the equal-loudness curves, one should note that most of the curves and equations are in terms of sound pressure level (SPL). SPL is proportional to the log of the square of the waveform amplitude, so when you work with the FFTs, it's probably easiest to work with their squares (the power spectra). (Or, of course, you could compensate in other ways by, say, multiplying by sqrt(log(i+1)) in your equation above -- assuming that the log was an approximation of the inverse equal-loudness curve.)
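If you do go down this road, here is a rough Python sketch using A-weighting, a standard (if crude) approximation to the inverse of the 40-phon equal-loudness contour. The constants are the standard IEC A-weighting curve; per the SPL point above it is applied to the power spectrum, hence the dB/10 exponent. The function names, and the choice of A-weighting rather than the full ISO 226 contours, are mine:

import math

def a_weight_db(f):
    """Standard A-weighting gain in dB at frequency f (Hz)."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 * f2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00   # normalized so 1 kHz -> 0 dB

def weight_power_spectrum(power, freqs):
    """Scale each power-spectrum bin by the A-weighting, as a stand-in for
    the inverse equal-loudness contour. `power` and `freqs` are parallel
    lists: bin powers and their center frequencies in Hz."""
    return [p * 10 ** (a_weight_db(max(f, 1.0)) / 10)
            for p, f in zip(power, freqs)]

for f in (50, 1000, 4000, 15000):
    print(f, round(a_weight_db(f), 1))   # -30.3, 0.0, +1.0, -6.0 dB

As the curve shows, this strongly attenuates the low end and slightly attenuates the very high end, which is the general shape you're after; the absolute-level caveat from the other answer still applies.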
I think the equal loudness contour is exactly the right direction.
However, its shape depends on the absolute pressure level.
In other words the sensitivity curve of our hearing changes with sound pressure.
There is no "correct normalization" if you have no information about absolute levels.
Whether this is a problem depends on what you want to do with the data.
The loudness contour is standardized in ISO 226, but this document is not freely available for download. It should be in a decent university library, though.
Here is another source for loudness contours.
So you are trying to raise the level of the high end frequencies? Sounds like a high pass filter with a minimum multiplier might work, so that you don't attenuate the low frequency signals too much. Pick up a good book on filter design, maybe monkey around with this applet
In the old days of the first samplers (this is before MOTU Boost, people :) it wasn't FFT but simple normalisation (Fairlight or Roland did it first, I think), done on the original or resulting time-domain signal (if you are doing beat slicing, recycle-style); can't you do that? Or only go for the FFT after you compensate, to counteract it?
Seems like a two-phase procedure otherwise; I'd personally leave the FFT as is for this task.
