Prove that log n! = n log n - O(n) [closed] - math

Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 2 months ago.
So I have seen everywhere that log n! = n log n, but I don't understand where the O(n) term comes from.
My textbook just says log(n!) = log n + log(n − 1) + ··· + log(1) = n log n − O(n), but does not explain.

From ln(x-1) < ln(floor(x)) ≤ ln(x), one deduces by integrating from 1 to n+1:
∫[1→n+1] ln(x-1) dx < ∫[1→n+1] ln(floor(x)) dx ≤ ∫[1→n+1] ln(x) dx
The integrand in the middle term is piecewise constant, and one easily sees that the middle integral is nothing but
Σ[1->n] ln(k) = ln(n!)
Finally,
n.ln(n)-n ≤ ln(n!) < (n+1)ln(n+1)-n
and
(n+1)·ln(n+1) - n·ln(n) = n·ln(1+1/n) + ln(n+1) ≤ 1 + ln(n+1) = O(log n), while the lower bound differs from n·ln(n) by exactly n. Both bounds are therefore of the form n·ln(n) - O(n), and hence ln(n!) = n·ln(n) - O(n).
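As a numerical sanity check (a sketch, not part of the original answer), both bounds and the size of the gap can be verified directly for small n:

```python
import math

def log_factorial(n):
    # ln(n!) computed as a direct sum of logarithms
    return sum(math.log(k) for k in range(1, n + 1))

# Verify n*ln(n) - n <= ln(n!) < (n+1)*ln(n+1) - n for a range of n
for n in range(1, 1000):
    lf = log_factorial(n)
    lower = n * math.log(n) - n
    upper = (n + 1) * math.log(n + 1) - n
    assert lower <= lf < upper, n

# The gap n*ln(n) - ln(n!) stays between 0 and n, i.e. it is O(n)
for n in range(2, 1000):
    gap = n * math.log(n) - log_factorial(n)
    assert 0 < gap <= n, n
```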

Related

How do I find this seemingly unknown exponential relation? [closed]

Closed 8 days ago.
This post was edited and submitted for review 8 days ago and failed to reopen the post:
Original close reason(s) were not resolved
We analyzed a program which was supposedly used for cracking some cryptographic algorithms.
During the investigation we determined that the program's input size can be varied over a wide range, and that for an N-bit input the result is also always N bits long. Additionally, we found that the program's running time depends significantly on the input length N, especially when N is greater than 10-15. Our tests also reveal that the running time depends only on the input length, not on the input itself.
During our tests, we recorded the following running times, accurate to one hundredth of a second:
N = 2 - 16.38 seconds
N = 5 - 16.38 seconds
N = 10 - 16.44 seconds
N = 15 - 18.39 seconds
N = 20 - 1 minute 4.22 seconds
We also planned to test the program for N = 25 and N = 30, but in both cases the program didn't finish within half an hour and we terminated it. Finally, we decided not to terminate the N = 30 run, but to wait a bit longer. The result was 18 hours 16 minutes 14.62 seconds. We repeated the test for N = 30 and it gave exactly the same result: more than 18 hours.
Tasks: a) Find the program running times for the following three cases: N = 25, N = 40 and N = 50. b) Explain your result and solution process.
At first I thought of finding a linear relation between N and time taken, t. Obviously that failed.
Then I realized that the t for N = 2 and N = 5 are nearly identical (here they are identical only because they have been rounded to two digits after the decimal), which suggests that the change in t only becomes apparent when N >= 10.
So, I tried to write t as a function of N, since t only depends on the size of the input.
Seeing the exponential growth, my first idea was to write it as t(N) = C·e^N + k, where C and k are constants and e is Euler's number.
That approach does not hold up. Afterwards I thought of trying powers of 2 because it's a computer question but I'm kind of lost.
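One way to test the powers-of-two hunch (a sketch, not an official solution: the model t(N) ≈ t0 + c·2^N and the constants below are estimated from the reported timings) is to take the fixed overhead t0 from the small-N plateau, fit c from the N = 30 run, and check whether the prediction for N = 25 is consistent with "did not finish within half an hour":

```python
# Hypothesis: t(N) ~= t0 + c * 2**N, where t0 is fixed overhead.
# t0 is read off the small-N plateau (~16.38 s); c is estimated
# from the N = 30 measurement (18 h 16 min 14.62 s).
t0 = 16.38
t30 = 18 * 3600 + 16 * 60 + 14.62          # 65774.62 seconds
c = (t30 - t0) / 2 ** 30

def predicted(n):
    return t0 + c * 2 ** n

# N = 25 comes out at a bit over half an hour, matching the report
print(f"N=25: {predicted(25) / 60:.1f} minutes")
print(f"N=40: {predicted(40) / 86400:.1f} days")
print(f"N=50: {predicted(50) / (86400 * 365):.1f} years")
```

The hypothesized doubling per input bit also explains why the N = 30 result was exactly reproducible: the time depends only on N, never on the input bits themselves.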

For loop includes zero [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 1 year ago.
I have fixed my problem by telling my loop to ignore instances of 0. But I don't know why it wants to include them in the first place.
P <- function(n) {
  if (n <= 1) {
    return (1)
  }
  s = 0
  for (i in 1 : n - 1) {
    if (i != 0) {
      s <- s + P(i) * P(n - i)
    }
  }
  return (s)
}
print (P(2))
It seems weird to need that check in my loop, but without it I simply haven't been able to get the program to work. Not until I put in a trace to see what was happening did I discover that i = 0. Did I write something wrong? I'm too new at R to even consider blaming this on the RGui that I'm using to run it, but I'm too old at programming to think anything else.
It is a precedence problem: ":" takes precedence over "-".
Compare:
n<-5
c(1: n -1)
c(1:(n-1))
This is because you forgot a parenthesis, see operator syntax & precedence :
n <- 2
1: n-1
[1] 0 1
1:(n-1)
[1] 1
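For reference, the recurrence the asker appears to be computing (a Python sketch, assuming the intended loop really is i = 1 .. n-1) produces the Catalan numbers. Python's `range(1, n)` is half-open and so can never yield 0, unlike R's `1 : n - 1`, which parses as `(1:n) - 1`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n):
    # Intended recurrence: P(n) = sum over i = 1 .. n-1 of P(i) * P(n - i)
    if n <= 1:
        return 1
    return sum(P(i) * P(n - i) for i in range(1, n))  # range(1, n) excludes n, never hits 0

print([P(n) for n in range(1, 7)])  # [1, 1, 2, 5, 14, 42]
```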

If a non-deterministic Turing machine runs in f(n) space, then why does it run in 2^O(f(n)) time? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 4 years ago.
Assuming that f(n) >= n.
If possible, I'd like a proof in terms of Turing machines. I understand the reason for machines that run on binary, because each "tape cell" is a bit holding either 0 or 1, but in general a Turing machine's tape cell can hold any number of symbols. I'm having trouble seeing why the base is '2' and not some 'b', where 'b' is the number of distinct symbols in the Turing machine's tape alphabet.
The important detail here is that the runtime is 2^O(n) rather than O(2^n). In other words, the runtime is "two raised to the power of something that's O(n)," rather than "something that's on the order of 2^n." That's a subtle distinction, so let's dissect it a bit.
Let's consider the function 4^n. This function is not O(2^n), because 4^n outgrows 2^n in the long run. However, notice that 4^n = 2^(2n), and since 2n = O(n) we can say that 4^n = 2^O(n).
Similarly, take b^n for any base b. If b > 2, then b^n is not O(2^n). However, we do have that
b^n = 2^((lg b)·n) = 2^O(n)
because (lg b)·n = O(n), since lg b is just a constant factor.
It is definitely a bit weird that O(2^n) is not the same as 2^O(n). The idea of using big-O notation in exponents is somewhat odd the first time you see it (for example, n^O(1) means "something bounded by a polynomial"), but you'll get used to it with practice.
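The rewriting b^n = 2^((lg b)·n) can be illustrated numerically (a sketch; the base b = 3 is chosen arbitrarily):

```python
import math

# Any b**n can be rewritten as 2 raised to a linear function of n:
# b**n == 2**(log2(b) * n), and log2(b) * n is O(n).
b = 3
for n in range(1, 30):
    direct = b ** n
    rewritten = 2 ** (math.log2(b) * n)
    assert math.isclose(direct, rewritten, rel_tol=1e-9), (n, direct, rewritten)

# So 3**n is 2^O(n) even though it is not O(2**n): the exponent
# log2(3) * n ~= 1.585 * n is bounded by a constant times n.
print(math.log2(3))  # ~1.585
```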

Why do Clojure vector function results exclude the stop value? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
Forgive me for the newb question and potentially incorrect terminology.
Clojure vector functions produce values that do not include the stop value. For example:
=> (subvec [:peanut :butter :and :jelly] 1 3)
[:butter :and]
=> (range 1 5)
(1 2 3 4)
The doc for range explicitly states this but doesn't give a rationale: "...Returns a lazy seq of nums from start (inclusive) to end (exclusive)...".
In Ruby these operations are inclusive:
(1..5).to_a
=> [1, 2, 3, 4, 5]
[:peanut, :butter, :and, :jelly][1,3]
=> [:butter, :and, :jelly]
Obviously these are very different languages, but I'm wondering if there was some underlying reason, beyond a personal preference by the language designers?
Making the end exclusive allows you to do things like specify (count collection) as the endpoint without going out of bounds. That's about the biggest difference between the two approaches.
It might be that the indexing was chosen in order to be consistent with Java libraries. java.lang.String.substring and java.util.List.subList both have exclusive-end indexes.
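Python makes the same design choice, which illustrates the practical payoff (a sketch, not tied to Clojure internals): with exclusive ends, the collection's length is always a valid endpoint, adjacent slices compose without overlap, and slice length is simply end minus start.

```python
xs = ["peanut", "butter", "and", "jelly"]

# The collection's length is a valid exclusive endpoint -- no bounds error
assert xs[1:len(xs)] == ["butter", "and", "jelly"]

# Adjacent half-open slices partition the list exactly, with no overlap
k = 2
assert xs[0:k] + xs[k:len(xs)] == xs

# And the slice length is simply end - start
assert len(xs[1:3]) == 3 - 1

print("half-open slice checks passed")
```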

How to convert a propositional formula to conjunctive normal form (CNF)? [closed]

Closed 4 months ago.
The community reviewed whether to reopen this question 4 months ago and left it closed:
Original close reason(s) were not resolved
How can I convert this formula to CNF?
(¬((P ∨ ¬Q) ⊃ R)) ⊃ (P ∧ R)
To convert a propositional formula to conjunctive normal form, first rewrite every implication A ⊃ B as ¬A ∨ B, then perform the following two steps:
Push negations into the formula, repeatedly applying De Morgan's laws, until all negations apply only to atoms. You obtain a formula in negation normal form.
¬(p ∨ q) to (¬p) ∧ (¬q)
¬(p ∧ q) to (¬p) ∨ (¬q)
Repeatedly apply the distributive law where a disjunction occurs over a conjunction. Once this is not possible anymore, the formula is in CNF.
p ∨ (q ∧ r) to (p ∨ q) ∧ (p ∨ r)
To obtain a formula in disjunctive normal form, simply apply the distribution of ∧ over ∨ in step 2.
Note about ⊃
The horseshoe symbol (⊃) used in the question is not set inclusion here; it is an alternative notation for logical implication, which is usually written as an arrow (→ or ⇒).
http://en.wikipedia.org/wiki/Conjunctive_normal_form
To convert first-order logic to CNF:
Convert to Negation normal form.
Eliminate implications: convert x → y to ¬ x ∨ y
Move NOTs inwards.
Standardize variables
Skolemize the statement
Drop universal quantifiers
Distribute ANDs over ORs.
(Artificial Intelligence: A Modern Approach [1995], Russell and Norvig)
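The steps above can be sketched as a small transformer over formula trees (a Python sketch, not a library API; formulas are nested tuples, literals are strings with a leading '~' for negation, and the question's formula is read as (¬((P ∨ ¬Q) ⊃ R)) ⊃ (P ∧ R), normalizing the atoms' case to P, Q, R):

```python
# Formulas: an atom is a string; compound forms are tuples
# ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g).

def nnf(f, neg=False):
    """Eliminate implications and push negations down to the atoms."""
    if isinstance(f, str):
        return ('not', f) if neg else f
    op = f[0]
    if op == 'not':
        return nnf(f[1], not neg)
    if op == 'imp':                      # A -> B  ==  ~A v B
        return nnf(('or', ('not', f[1]), f[2]), neg)
    if op == 'and':                      # De Morgan when negated
        return (('or' if neg else 'and'), nnf(f[1], neg), nnf(f[2], neg))
    if op == 'or':
        return (('and' if neg else 'or'), nnf(f[1], neg), nnf(f[2], neg))

def clauses(f):
    """Distribute 'or' over 'and'; returns a set of clauses (frozensets of literals)."""
    if isinstance(f, str):
        return {frozenset([f])}
    if f[0] == 'not':
        return {frozenset(['~' + f[1]])}
    if f[0] == 'and':
        return clauses(f[1]) | clauses(f[2])
    # 'or': cross product of the two operands' clause sets
    return {a | b for a in clauses(f[1]) for b in clauses(f[2])}

def cnf(f):
    # Drop tautological clauses (those containing both X and ~X)
    return {c for c in clauses(nnf(f))
            if not any(('~' + lit) in c for lit in c if not lit.startswith('~'))}

# The question's formula: (~((P v ~Q) -> R)) -> (P ^ R)
formula = ('imp',
           ('not', ('imp', ('or', 'P', ('not', 'Q')), 'R')),
           ('and', 'P', 'R'))
print(cnf(formula))
# Three clauses, in some order: {~P, R}, {P, Q, R}, {Q, R}
```

Applying the hand steps gives the same result: eliminating ⊃ and pushing negations yields ((¬P ∧ Q) ∨ R) ∨ (P ∧ R), and distributing ∨ over ∧ gives (¬P ∨ R) ∧ (P ∨ Q ∨ R) ∧ (Q ∨ R) after discarding the tautology (¬P ∨ R ∨ P).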
