There is a particular type of problem that I need some help understanding properly.
Let's look at an example.
Suppose we are given an integer n.
We have to find the number of possible pairs (a, b) such that the following conditions are fulfilled:
1 <= a <= b <= n
f(a) < f(b)
where f(x) = sum of the digits of x
Now I understand that instead of enumerating the possible pairs directly, we should count the number of ways to form the two numbers, digit by digit, such that the above conditions are fulfilled. We will start from the ones place and go on from there.
But how do I proceed after that?
How do I determine when to stop?
How do I check that the above conditions are fulfilled at each step?
For example, which digit we choose for the thousands place will depend on the digits already chosen for the hundreds and the ones places.
This is a pretty common type of problem in competitive programming, and I want to learn the proper method for solving it.
Related
I am unable to find the theory behind this. Brute force will not give an answer efficiently, since n can be as large as 10^18, so looping over each number is not a good approach. I searched Google but didn't find any theory behind it. All I want to know is what concept it is based on; working out the combinations for each digit by hand would also be a nightmare. There were programs, but I didn't understand them. So please let me know what theory or concept it is based on. Just the topic will do. Thank you.
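For reference, the standard technique for this family of problems is usually called digit DP: build the number digit by digit, tracking only a small state (the position, a digit-sum-style statistic, and whether the prefix is still "tight" against n). Below is a minimal sketch of that skeleton in Java for a simpler statistic, counting how many numbers in [0, n] have a given digit sum; the pair-counting version layers more state on the same idea. All names are illustrative:

```java
import java.util.Arrays;

public class DigitDpSketch {
    private static int[] digits;   // digits of n, most significant first
    private static long[][] memo;  // memo[pos][sum], cached only for non-tight states

    // Count the numbers in [0, n] whose digit sum is exactly `target`.
    static long count(long n, int target) {
        String s = Long.toString(n);
        digits = new int[s.length()];
        for (int i = 0; i < s.length(); i++) digits[i] = s.charAt(i) - '0';
        memo = new long[s.length()][target + 1];
        for (long[] row : memo) Arrays.fill(row, -1);
        return go(0, 0, true, target);
    }

    // pos: which digit we are choosing; sum: digit sum so far;
    // tight: whether the prefix built so far still equals n's prefix.
    static long go(int pos, int sum, boolean tight, int target) {
        if (sum > target) return 0;                      // sum only grows, so prune
        if (pos == digits.length) return sum == target ? 1 : 0;
        if (!tight && memo[pos][sum] != -1) return memo[pos][sum];
        int limit = tight ? digits[pos] : 9;             // cannot exceed n's digit while tight
        long total = 0;
        for (int d = 0; d <= limit; d++)
            total += go(pos + 1, sum + d, tight && d == limit, target);
        if (!tight) memo[pos][sum] = total;              // cache only n-independent states
        return total;
    }

    public static void main(String[] args) {
        System.out.println(count(100, 9)); // 10: the numbers 9, 18, ..., 90
    }
}
```

The key point is that the memo is only valid once the tight flag is false, because from then on the count no longer depends on n's specific digits; that is what brings the cost down from looping over 10^18 numbers to a few hundred states.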
Problem
I want to find
The first root
The first local minimum/maximum
of a black-box function in a given range.
The function has the following properties:
It's continuous and differentiable.
It's a combination of constant and periodic functions. All the periods are known.
(It would be better if this could be done with weaker assumptions.)
What is the fastest way to get the root and the extremum?
Do I need more assumptions or bounds of the function?
What I've tried
I know I can use a root-finding algorithm. What I don't know is how to find the first root efficiently.
It needs to be fast enough to run within a few milliseconds at a precision of 1.0 over a range of 1.0e+8, which is the problem.
Since the range can be quite large and the result needs to be that precise, I can't brute-force it by checking every possible subrange.
I considered the bisection method, but it's too slow for finding the first root when the function has only one big root in the range, since every subrange would have to be checked.
It's preferable if the solution is in Java, but any similar language is fine.
Background
I want to calculate when an arbitrary celestial object reaches a certain altitude.
It's a configuration-defined virtual object, so I can't assume anything about it.
It's not easy to get either an analytical solution or a simple approximation, because several coordinate systems are involved.
I decided to find a numerical solution for this.
For a general black box function, this can't really be done. Any root finding algorithm on a black box function can't guarantee that it has found all the roots or any particular root, even if the function is continuous and differentiable.
The property of being periodic gives a bit more hope, but you can still have periodic functions with infinitely many roots in a bounded domain. Given that your function relates to celestial objects, this isn't likely to happen. Assuming your periodic functions are sinusoidal, I believe you can get away with checking subranges on the order of one-quarter of the shortest period (out of all the periodic components).
Maybe try Brent's method on subranges of a quarter of the shortest period?
Another approach would be to apply your root finding algorithm iteratively. If your range is (a, b), then apply your algorithm to that range to find a root at say c < b. Then apply your algorithm to the range (a, c) to find a root in that range. Continue until no more roots are found. The last root you found is a good candidate for your minimum root.
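A minimal sketch of that subrange scan, assuming the shortest period is known up front: it walks quarter-period windows looking for a sign change and then narrows the first bracket it finds, with plain bisection substituted for Brent's method just to keep the sketch short. The test function in main is made up:

```java
import java.util.function.DoubleUnaryOperator;

public class FirstRootSketch {
    // Scan [a, b] in steps of a quarter of the shortest period; on the first
    // sign change, bisect that bracket down to `tol`. Returns NaN if no
    // bracket is found (a tangent root that never changes sign can slip through).
    static double firstRoot(DoubleUnaryOperator f, double a, double b,
                            double shortestPeriod, double tol) {
        double step = shortestPeriod / 4.0;
        double x0 = a;
        double y0 = f.applyAsDouble(x0);
        if (y0 == 0.0) return x0;
        while (x0 < b) {
            double x1 = Math.min(x0 + step, b);
            double y1 = f.applyAsDouble(x1);
            if (y1 == 0.0) return x1;
            if (y0 * y1 < 0.0) {
                while (x1 - x0 > tol) {            // plain bisection on the bracket
                    double m = 0.5 * (x0 + x1);
                    double ym = f.applyAsDouble(m);
                    if (ym == 0.0) return m;
                    if (y0 * ym < 0.0) { x1 = m; } else { x0 = m; y0 = ym; }
                }
                return 0.5 * (x0 + x1);
            }
            x0 = x1;
            y0 = y1;
        }
        return Double.NaN;                          // no sign change found in [a, b]
    }

    public static void main(String[] args) {
        // Hypothetical test: sin(x/1000) - 0.5 first crosses zero at 1000*pi/6 ~ 523.6
        double r = firstRoot(x -> Math.sin(x / 1000.0) - 0.5,
                             0.0, 1.0e8, 2000.0 * Math.PI, 1.0);
        System.out.println(r);
    }
}
```

The cost is one evaluation per window plus a logarithmic number of bisection steps for the first bracket, which comfortably fits the millisecond budget unless the shortest period is tiny relative to the range.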
A black-box function over an arbitrary range? You cannot even be sure it is defined everywhere on that range. What kind of solutions are you looking for: natural numbers, integers, real numbers, complex numbers? These are all questions that greatly impact the answer.
So the first thing should be determining what kind of number you accept as the result.
Second is having some kind of protection against limits of the function that would blow up your calculations as the function heads toward plus or minus infinity.
And since we are on the topic of limits: your candidate could edge toward zero and look like a solution without ever touching 0 and actually being one. This depends on your margin of error, i.e. how close something has to be before you consider it good enough.
I think your SIMPLEST-TO-IMPLEMENT bet for real-number solutions (I assume that's what you want) is to take an interval and apply this divide-and-conquer algorithm:
Take the lower border, the upper border, and the middle value (or an approximate middle value if the borders have infinitely many decimals).
Try to evaluate the function at all 3 points, with some kind of protection against infinities.
Remember all 3 value-result pairs in an array.
Remember the current best pair (the one whose result is closest to zero) in a separate variable.
STEP FORWARD: repeat the above on the 1st-to-2nd value range and the 2nd-to-3rd value range.
Take the new pair whose result is closest to a solution.
Clear the old value-result pairs and replace them with the new ones from this iteration, while remembering the overall best value-result pair.
Repeat the above until you reach the precision you want, and watch the memory explode with each iteration: keep in mind that the number of stored values grows exponentially. It can be improved if you, say, take one interval, go as deep as you want, remember the best value-result pair, then delete all the other memory and dig into the next interval; a sketch of that depth-first variant follows below.
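Here is a small sketch of that memory-friendly depth-first variant, in Java since the asker prefers it: it recursively halves the interval and keeps the sampled point whose result is closest to zero. Note that it finds some near-root among the sampled points, not necessarily the first root, and the depth constant is arbitrary:

```java
import java.util.function.DoubleUnaryOperator;

public class NearestToZeroSketch {
    static double bestX;
    static double bestY = Double.POSITIVE_INFINITY;

    // Recursively split [lo, hi], keeping the point where |f| is smallest.
    // Depth-first, so memory stays O(depth) instead of growing exponentially
    // per level as in the breadth-first description above.
    static void search(DoubleUnaryOperator f, double lo, double hi, int depth) {
        double mid = 0.5 * (lo + hi);
        for (double x : new double[] { lo, mid, hi }) {
            double y = f.applyAsDouble(x);
            if (Double.isFinite(y) && Math.abs(y) < bestY) { // guard against infinities
                bestY = Math.abs(y);
                bestX = x;
            }
        }
        if (depth == 0) return;
        search(f, lo, mid, depth - 1);
        search(f, mid, hi, depth - 1);
    }

    public static void main(String[] args) {
        search(x -> x * x - 2.0, 0.0, 2.0, 20);
        System.out.println(bestX + " -> " + bestY); // bestX close to sqrt(2)
    }
}
```

The total work is still exponential in the depth (every subinterval is visited), so this only addresses the memory blow-up, not the running time; for the asker's 1.0e+8 range the period-based scan above is a better fit.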
I want to create a divide-and-conquer algorithm (O(n lg n) runtime) to determine whether there exists a number in an array that occurs k times. A constraint on this problem is that only an equality/inequality comparison method is defined on the objects of the array (i.e. you can't use < or >).
I have tried a number of approaches, including splitting the array into k pieces of (approximately) equal size. The approach is similar to finding the majority item in an array; however, in the majority case, when you split the array you know that one half must contain a majority item if such an item exists. Any pointers or tips to put me in the right direction?
EDIT: To clear things up a little, I am wondering whether the approach for finding the majority item, splitting the array in half and recursing, can be extended to other situations where k may be n/4 or n/5, etc.
Maybe I should have phrased the question using n/k instead.
This is impossible. As a simple example of why, consider an input with a length-n array, all elements distinct, and k = 2. The only way to be sure no element appears twice is to compare every element against every other element, which takes O(n^2) time: until you have performed all possible comparisons, you cannot be sure that some pair you didn't compare isn't actually equal.
I have to divide a class of 50 students writing a dissertation into 10 discussion groups of 5 members each. In theory, there are 1.35363x10^37 possible ways of doing this, which is just the result of 50!/((5!^10)*10!), given that it is already decided that the groups will consist of 5.
However, each group is to be led by a facilitator. This reduces the number of possible combinations considerably, because each facilitator has one field of expertise among 5 possible ones, which should be matched to the topics the students are writing about as much as possible. If there are three facilitators with competence A, three with competence B, two with competence C, one with competence D, and one with competence E, and 15 students are assigned to A, 15 to B, 10 to C, 5 to D, and 5 to E, the number of possible combinations comes down to 252,505.
But both students and facilitators keep advocating for the use of more criteria instead of just focusing on field of expertise: for example, wanting to be in a group of students who know each other, or being in a group with a facilitator who has particular knowledge of a specific research method.
I am trying to illustrate my intuitive reasoning, which tells me that each new criterion increases the complexity/impossibility of the task if the objective is a completely efficient solution. But I can't get my head around expressing this analytically in a satisfactory manner.
Is my reasoning correct that adding criteria would reduce the number of possibilities that can be discarded following the inclusion-exclusion principle, thus making the task more complex by adding possible combinations? I also think that if the criteria are not compatible (for example, if students who know each other are writing about different topics and there aren't enough competent facilitators), certain constraints become infeasible.
You need to distinguish between computational complexity and human complexity. Adding constraints almost automatically increases the human complexity of the problem, in the sense that there is more to wrap your mind around. But it isn't true that the computational complexity necessarily increases; at least sometimes it decreases.
For example, say you have a set of 200 items and you want to determine if there is a subset of them which satisfies some constraint. Depending on the constraint, there might be no feasible way to do it; after all, 2^200 is much too large to brute-force. Now add the constraint that the subset needs to have exactly 3 elements. All of a sudden it is possible to brute-force (just run through all C(200, 3) = 1,313,400 3-element subsets until you either find a solution or determine that none exists). This is enough to show that adding a constraint doesn't always make a problem intrinsically more difficult. In the discrete case a new constraint can cut down the size of the search space in a way that can be exploited. In the continuous case it can reduce the degrees of freedom and thus lower the dimension of the problem. This isn't to say that a constraint always makes things easier; as a rule of thumb, additional constraints tend to make a problem more difficult.
Your actual problem isn't spelled out enough to give concrete advice. One possibility (and one way to handle a proliferation of somewhat extraneous constraints) is to divide the constraints into hard constraints, which need to be satisfied, and soft constraints, which are merely desired but not strictly needed. Turn it into an optimization problem: find the solution which maximizes the number of soft constraints satisfied, subject to the condition that it satisfies the hard constraints. Perhaps you can formulate it as an integer programming problem and, hopefully, find an exact solution. Or, if it is easy to generate solutions that satisfy the hard constraints and easy to mutate one such solution into another (e.g. swap two students who are in different groups), then an evolutionary algorithm would be a reasonable heuristic.
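As a sketch of that local-search route: the snippet below performs random swaps of students between groups and keeps any swap that doesn't lower a soft-constraint score. The softScore function is a made-up placeholder you would implement from your actual criteria, and this toy version only preserves group sizes; a real version would also filter proposed swaps against the hard constraints (facilitator expertise) and add restarts or annealing to escape local optima:

```java
import java.util.Random;

public class GroupSwapSketch {
    static final int STUDENTS = 50, GROUPS = 10, SIZE = 5;

    // Placeholder: count how many soft constraints this assignment satisfies.
    // group[s] = index (0..9) of the group student s belongs to.
    static int softScore(int[] group) {
        return 0; // plug in real preference data here
    }

    static int[] improve(int[] group, int iterations, Random rng) {
        int best = softScore(group);
        for (int i = 0; i < iterations; i++) {
            int a = rng.nextInt(STUDENTS), b = rng.nextInt(STUDENTS);
            if (group[a] == group[b]) continue;
            // swap two students in different groups (keeps all sizes at 5)
            int tmp = group[a]; group[a] = group[b]; group[b] = tmp;
            int score = softScore(group);
            if (score >= best) {
                best = score;                                       // keep the swap
            } else {
                tmp = group[a]; group[a] = group[b]; group[b] = tmp; // undo it
            }
        }
        return group;
    }

    public static void main(String[] args) {
        int[] group = new int[STUDENTS];
        for (int s = 0; s < STUDENTS; s++) group[s] = s / SIZE; // any feasible start
        improve(group, 100_000, new Random(42));
    }
}
```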
I'm learning Python and came across a question that went something like "How long would it take to count to 1,000,000 out loud?" The only parameter it gave was: "you count, on average, 1 digit per second." I did that problem, which wasn't very difficult. Then I started thinking about counting aloud, enunciating each numeral. That parameter seems off to me, and indeed the answer Google gives to the bare question "how long to count to a million" suggests it's off. Given that each number in the sequence takes progressively longer to say (an exponential increase??), there must be a better way.
Any ideas or general guidance would be of assistance. Would sampling various people's "counting rates" at various intervals work? Would programming the number of syllables work? I am really curious, and have looked all over SO and Google for solutions that don't revolve around that seemingly inaccurate average.
Thanks, and sorry if this isn't on topic or in the appropriate place. I'm a long time lurker, but new to posting, so let me know if you need more info or anything. Thanks!
Let us suppose, for the sake of simplicity, that you don't say 1502 as "fifteen hundred and two" but as "one thousand five hundred and two". Then we can break the problem down hierarchically.
And let's ignore for now whether you say "and" or not (though apparently it is said more often than not). For how to pronounce numbers I will use this reference (and British English, because I like it more and it's more consistent): http://forum.wordreference.com/showthread.php?t=15&langid=6
To describe this formally, let t be a function that maps a set of numbers to the time it takes to pronounce every number in that set. Then your question is how to compute t([1..1000000]), and we will use M = t([1..999]) (refined below to account for the "and"s).
Time for each thousands block
To read a large number we start at the left and read the three-digit groups; the leftmost group, of course, may have only one or two digits.
Thus for every number x of thousands you will say "x thousand y", where y runs over all the numbers from 0 to 999 (for y = 0 you say nothing after "x thousand").
Thus the time you spend on the block from 1000x to 1000x + 999 is

t([1000x .. 1000x + 999]) = 1000 * t({1000x}) + M

since "x thousand" is pronounced 1000 times and the y parts run through [1..999] exactly once.
Note that this formula generalizes to the numbers below 1000 (the block x = 0) by simply defining t({0}) = 0.
Now the time to say "x thousand" is, per our hypothesis, equal to the time to say "x" plus the time to say "thousand" (when x > 0), i.e. t({1000x}) = t({x}) + tau(thousand). Thus your answer is:

t([1..10^6]) = sum over x = 0..999 of (1000 * t({1000x}) + M) + t({10^6})
             = 1000 * t([1..999]) + 999000 * tau(thousand) + 1000 * M + t({10^6})

where tau(thousand) is the time it takes to say the word "thousand". This supposes you say 1000 as "one thousand"; you may want to remove 1000 * tau(one) if you would only say "thousand".
However, I stick with the reference:
The numbers 100-199 begin with one hundred... or a hundred...
You can express in exactly the same way the time it takes to count to a billion, using t([1..10^6]) and the formula above, and so on for all the greater powers of 10^3.
Taking into account the "and"
There is a small correction to be made. Let us define M as the time it takes to pronounce the numbers from 1 to 999 when they are preceded by at least one non-zero group, including the initial "and"s.
Our reference (well, the wordreference post I linked) says the following:
What do we say to join the groups?
Normally, we don’t use any joining word.
The exception is the last group.
If the last group after the thousands is 1-99 it is joined with and.
Thus our correction applies only to the numbers between 0 and 999, where there is no non-zero group preceding. With the definition above,

M = t([1..999]) + 99 * tau(and)

since exactly the 99 numbers 1-99 pick up an initial "and" when they follow a thousands group; the block x = 0 therefore contributes t([1..999]) = M - 99 * tau(and) rather than M, so we subtract 99 * tau(and) from the total.
Getting M
Or rather, let's get t([1..999]), since it's more natural, and we know how it is related to M.
Let C = t([1..99]) and X = t([1..9]).
Between 1 and 999 we have all the numbers from [1..99] plus the 9 exact hundreds, for which you don't say "and": 108 numbers in all. And there are 900 numbers (100 to 999) prefixed with a hundreds word.
Thus

t([1..999]) = C + 9 * C + 100 * X + 900 * tau(hundred) + 891 * tau(and)
            = 10 * C + 100 * X + 900 * tau(hundred) + 891 * tau(and)

(the lone C is the bare [1..99]; each of the 9 hundreds blocks says "z hundred" a hundred times, runs through [1..99] once, and adds an "and" for each of the 99 numbers that take one).
C is probably hard to break down, so I'm not going to try.
Final result
The corrected formula is:

t([1..10^6]) = 1000 * t([1..999]) + 999000 * tau(thousand) + 1000 * M - 99 * tau(and) + tau(one) + tau(million)
             = 2000 * t([1..999]) + 999000 * tau(thousand) + 98901 * tau(and) + tau(one) + tau(million)

(using t({10^6}) = tau(one) + tau(million)). And as a function of C and X:

t([1..10^6]) = 20000 * C + 200000 * X + 1800000 * tau(hundred) + 999000 * tau(thousand) + 1880901 * tau(and) + tau(one) + tau(million)
Note that your measurements of tau(word), C, and X need to be very precise if you plan on doing these multiplications and getting the right order of magnitude.
Conclusion: Brits end up saying "and" a whole lot (1,880,901 times on the way to a million). The nice thing about the last formulation is that you can remove all the "and"s if you decide you actually don't want to pronounce them.
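To make the final formula concrete, here is a small sketch that plugs timings into it. Every constant below is a made-up placeholder; you would replace them with stopwatch measurements of your own speech:

```java
public class CountingTimeSketch {
    public static void main(String[] args) {
        // Placeholder measurements, in seconds (replace with real ones):
        double C = 60.0;          // t([1..99]): time to say 1 through 99 aloud
        double X = 4.0;           // t([1..9]): time to say 1 through 9 aloud
        double tauHundred = 0.5;  // time to say "hundred"
        double tauThousand = 0.6; // time to say "thousand"
        double tauAnd = 0.2;      // time to say "and"
        double tauOne = 0.3;      // time to say "one"
        double tauMillion = 0.6;  // time to say "million"

        // The final formula from above, as a function of C and X:
        double total = 20000 * C + 200000 * X + 1800000 * tauHundred
                + 999000 * tauThousand + 1880901 * tauAnd + tauOne + tauMillion;

        System.out.printf("~%.0f seconds = ~%.1f days of nonstop counting%n",
                total, total / 86400.0);
    }
}
```

With these placeholder numbers the estimate comes out around 45 days of nonstop counting, which also shows how far off the flat "1 digit per second" parameter is.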