Which is more random? [closed]

Which is more random?
rand()
OR
rand() + rand()
OR
rand() * rand()
Just how can one determine this? I mean, this is really puzzling me! One feels that they may all be equally random, but how can one be absolutely sure?
Anyone?

The concept of being "more random" doesn't really make sense. Your three methods give different distributions of random numbers. I can illustrate this in Matlab. First define a function f that, when called, gives you an array of 10,000 random numbers:
f = @() rand(10000,1);
Now look at the distributions of your three methods.
Your first method, hist(f()), gives a uniform distribution.
Your second method, hist(f() + f()), gives a triangular distribution that is peaked in the centre.
Your third method, hist(f() .* f()), gives a distribution where numbers close to zero are more likely.
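If Matlab isn't to hand, the same comparison can be made in Python; this is just a sketch using numpy (the library choice is mine, not the original answer's), printing summary statistics of the three variants:

import numpy as np

rng = np.random.default_rng(0)        # seeded so the run is reproducible
n = 100_000

a = rng.random(n)                     # rand(): uniform on [0, 1)
b = rng.random(n) + rng.random(n)     # rand() + rand(): triangular, peaked at 1
c = rng.random(n) * rng.random(n)     # rand() * rand(): skewed towards 0

for name, x in [("rand()", a), ("rand()+rand()", b), ("rand()*rand()", c)]:
    print(f"{name:15s} mean={x.mean():.3f} std={x.std():.3f}")

The means differ (roughly 0.5, 1.0 and 0.25), which is exactly the point: the three expressions are not interchangeable sources of "randomness".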

As to the amount of entropy, I would guess they are comparable.
If you need more entropy (randomness) than you currently have, use a cryptographically strong random number generator.
Why they are comparable: if an attacker could guess the next pseudorandom value returned by rand(), it would not be significantly harder for them to guess the next rand() * rand().
Nevertheless, the argument about different distributions is important and valid!
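For illustration, here is a minimal Python sketch of the "cryptographically strong" route, using the standard-library secrets module (one possible choice, not something the original answer specifies):

import secrets

token = secrets.token_hex(16)          # 128 bits of randomness as a hex string
n = secrets.randbelow(1000)            # uniform integer in [0, 1000)
u = secrets.SystemRandom().random()    # uniform float in [0, 1), OS-backed

print(token, n, u)

Unlike rand(), these values are drawn from the operating system's CSPRNG, so an attacker who has seen previous outputs gains no practical advantage in guessing the next one.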

Related

Numerical integration of an unknown function [closed]

For a school project I have to determine a function u(t) of time. I have derived an expression of the following form:
[equation image: https://i.stack.imgur.com/vNrYb.png]
with a, b, c, d constants (not necessarily integers). I have figured out that this problem can only be solved by numerical integration with the initial condition u(0) = u_0, yet I don't know how to do this particular problem.
I have looked at all the numerical integration methods I have learnt so far, but they all seem to apply to polynomials or to functions where you know the function evaluations at specific points.
There are many ways to calculate an approximate value for u(t), some simple but requiring a lot of iterations, others more complex but requiring fewer. Assuming a, b, c, d are real numbers and u_0 = u(0), then for t > 0 one could just split the interval between 0 and t into N sub-intervals and calculate
u_{i+1} = u_i + (du/dt)(t_i) * (t/N), where t_i = i*t/N,
so that u_N ≈ u(t).
If N is not sufficiently large, the result will be inaccurate. Choosing a satisfactory N is more art than science; just printing the results for increasing N should give you an idea of how large N needs to be to obtain the level of accuracy you need. Adding higher-order terms (d^2u/dt^2, etc.) can sometimes improve speed and accuracy.
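A minimal Python sketch of that forward Euler scheme, assuming the derivative depends only on u (as the question's expression suggests); the constants and the right-hand side below are purely hypothetical placeholders:

def euler(dudt, u0, t, N):
    # Forward Euler: u_{i+1} = u_i + dudt(u_i) * (t / N)
    h = t / N
    u = u0
    for _ in range(N):
        u = u + dudt(u) * h
    return u

a, b, c, d = 1.0, -0.5, 0.1, 2.0       # hypothetical constants

def dudt(u):
    # Placeholder right-hand side; substitute the actual expression here
    return a + b * u + c * u**d

for N in (10, 100, 1000, 10000):       # increase N until the result stabilises
    print(N, euler(dudt, u0=1.0, t=2.0, N=N))

Printing the result for increasing N, as the answer suggests, shows when the step size is small enough for the accuracy you need.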
You can't numerically integrate anything unless you have values for all those constants.
I don't know what numerical integration schemes you looked at, but I think Euler's method or Runge-Kutta would both be worth trying.
You don't say which language you want to use. Python would be a fine choice. So would Java. Lots of libraries to help.
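As a sketch of that library route (Python chosen here because it was mentioned above), SciPy's solve_ivp handles this kind of initial-value problem; the right-hand side is again only a hypothetical stand-in for the real expression:

from scipy.integrate import solve_ivp

a, b, c, d = 1.0, -0.5, 0.1, 2.0       # hypothetical constants

def rhs(t, u):
    # Placeholder for the actual du/dt from the question
    return a + b * u + c * u**d

sol = solve_ivp(rhs, t_span=(0.0, 2.0), y0=[1.0], rtol=1e-8)
print(sol.t[-1], sol.y[0, -1])          # approximate u(t) at the end of the interval

solve_ivp uses an adaptive Runge-Kutta method by default, so it typically needs far fewer steps than the fixed-step Euler scheme above.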
Wolfram Alpha has a closed-form solution here. It's a separable, non-linear ODE. You'll need to know hypergeometric functions to evaluate it.

Finding private key x with big integers [closed]

Is it possible to find the private key x in the equation y = g^x mod p, with big integers, if you have p, g, y, and q?
What method can be used to find it, if such a method exists? Note: these are big integers.
This is called the discrete logarithm problem. You seem to be interested in the prime field special case of this problem.
For properly chosen fields with sufficiently large p this is infeasible. I expect this to be reasonably cheap ($100 or so) for a 512-bit p and extremely expensive at 1024-bit p. Going beyond that, it quickly becomes infeasible even for state-level adversaries.
For some fields it is much cheaper. For example, solving DL in binary fields (not prime fields as in your example) has produced quite a few recent papers, such as Discrete logarithm in GF(2^809) with FFS and On the Function Field Sieve and the Impact of Higher Splitting Probabilities: Application to Discrete Logarithms in F_2^1971.
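To make the problem concrete, here is a hedged Python sketch of baby-step giant-step, one generic discrete-log algorithm; it needs roughly sqrt(group order) time and memory, which is exactly why it only works for toy parameters and not for a properly sized p:

import math

def baby_step_giant_step(g, y, p):
    # Solve g^x = y (mod p); feasible only when p is small.
    m = math.isqrt(p) + 1
    table = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j -> j
    g_inv_m = pow(g, -m, p)                       # (g^m)^-1 mod p, Python 3.8+
    gamma = y
    for i in range(m):                            # giant steps
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * g_inv_m) % p
    return None

p, g = 1019, 2          # toy parameters, nothing like real key sizes
x = 347
y = pow(g, x, p)
print(baby_step_giant_step(g, y, p))              # recovers 347

Doubling the bit length of p squares the work for a generic algorithm like this one, which is part of why the cost climbs so steeply with the size of p, as described in the answer above.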

Is there a rule for prime numbers? [closed]

I've passed by this article:
http://gauravtiwari.org/2011/12/11/claim-for-a-prime-number-formula/
and this paper:
http://www.m-hikari.com/ams/ams-2012/ams-73-76-2012/kaddouraAMS73-76-2012.pdf
They say there is a formula that, given n, returns the nth prime number, whereas other articles say that no formula discovered so far does such a thing.
If the formula really exists, then why is the largest known prime number rediscovered from time to time? It would be very simple to use the formula to find a larger one.
I just want to confirm whether such a formula exists or not.
Conceptually it is very simple to test whether a given number n is prime: just check for every smaller number m (larger than 1) whether m divides n without remainder. If such an m exists, n is not a prime number.
Then, to find the k-th prime number, you just iterate this procedure until you have found the k-th number that is prime. So yes, such a formula exists.
But executing the above procedure is very inefficient. So even having this formula (and in real cases you would use more intelligent variants), it can take literally ages before you get an answer. That is why more efficient variants and tricks are used to find large prime numbers.
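A direct Python sketch of that "formula as a procedure" idea; it is correct but deliberately naive (the only "intelligent variant" used is stopping trial division at sqrt(n)), which makes the inefficiency the answer is talking about easy to observe:

def is_prime(n):
    # Trial division: check every m from 2 up to sqrt(n)
    if n < 2:
        return False
    m = 2
    while m * m <= n:
        if n % m == 0:
            return False
        m += 1
    return True

def nth_prime(k):
    # Walk the integers and count primes until the k-th one appears
    count, n = 0, 1
    while count < k:
        n += 1
        if is_prime(n):
            count += 1
    return n

print([nth_prime(k) for k in range(1, 11)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

This "formula" will happily return the millionth prime given enough patience, but it is hopeless for primes of the size that make the news, which is why record primes are found among numbers of special form (e.g. Mersenne numbers) with dedicated tests instead.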

Simulate ARFIMA process with custom initial values [closed]

My question deals with the fracdiff.sim function in R (in the fracdiff package) for which the help document, just like for arima.sim, is not really clear concerning initial values.
I understand that stationary processes do not depend on their initial values as time grows, but my aim is to see, in my simulations, the return of my long-memory process (fitted with arfima) to its mean.
Therefore, I need to input at least the p final values of my in-sample process (and possibly q innovations) if it is ARFIMA(p,d,q). In other words, I would like to set the burn-in period's length to 0 and give starting values instead.
Nevertheless, I am currently not able to do this. I know that fracdiff.sim makes it possible for the user to choose the length of a burn-in period (which leads to the stationary behaviour) and the mean of the simulated process (it is simulated and then shifted to make the means match). There is also a condition: the length of the burn-in period must be >= p+q. I suppose there is something to be done with the innov argument, but I'm really not sure.
This idea is inspired by the arima.sim function, which has a start.innov argument. However, even if my aim were to simulate an ARMA(p,q), I'm not sure of the exact use of this argument (the help is quite poor): must we input only q innovations? Should the p last values of the in-sample process be put with them? In which order?
To sum up, I want to simulate ARFIMA processes starting from a specific value and having a specific mean, in order to see the return to the mean and not only the long-term behaviour. I found beginnings of solutions for arima.sim on the internet, but nobody clearly answered, and if the solution uses start.innov, how can the problem be solved for ARFIMA processes (fracdiff.sim doesn't have a start.innov argument)?
Hoping I have been clear enough,
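No direct answer is recorded here, but to illustrate the general idea of continuing a simulation from fixed starting values rather than relying on a burn-in, here is a hypothetical Python sketch for a plain AR(1) (not an ARFIMA, and not using fracdiff.sim); it only shows the mechanism of watching the return to the mean from a chosen last observed value:

import numpy as np

rng = np.random.default_rng(1)

def continue_ar1(u_last, phi, mu, sigma, n):
    # Continue an AR(1) path x_t = mu + phi*(x_{t-1} - mu) + eps_t
    # from the fixed value u_last, with no burn-in period.
    x = np.empty(n)
    prev = u_last
    for t in range(n):
        prev = mu + phi * (prev - mu) + rng.normal(0.0, sigma)
        x[t] = prev
    return x

path = continue_ar1(u_last=10.0, phi=0.9, mu=0.0, sigma=1.0, n=200)
print(path[:5])     # starts near 10 and drifts back towards the mean 0

For a long-memory ARFIMA process the recursion would also involve the fractional-differencing weights and the q innovations, which is precisely the bookkeeping the question is asking fracdiff.sim to handle.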

How to build a null distribution [closed]

1: I would like to create a synthetic dataset of 14,000 genes (rows) and 250 samples (columns of the matrix).
How can this be done?
2: After this, I would like to infer gene regulation using, for example, mutual-information algorithms. I know how, and in fact I have a network.
3: I would like to know whether the network I obtained is due to chance or not. One common approach is to shuffle samples or genes 1000 times, build 1000 networks, and plot a null distribution to validate the network previously obtained (point 2). This is often called a bootstrap or permutation approach.
Is there another method?
Best,
E.
The sample function in R is the basic way to construct random permutations of existing data. It's not clear exactly what you want; an additional thought is that you might just need to be pointed to the runif function for generating random uniform values. If you had 1000 objects of a particular sort in a vector, obj:
sample( obj ) # returns a permuted sequence
# Same as ...
obj[ sample(length(obj)) ]
Whether that is a "null distribution" is up to you to decide. (And a request for "all" the methods to do any particular task in R will be viewed as excessively demanding. There are often a large number of methods, and even asking for the "best" will increase your chances of getting your question closed.)
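As a concrete sketch of the shuffling idea in point 3 (written in Python rather than R, with a made-up network statistic standing in for the mutual-information step), the loop looks like this:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic expression matrix: genes x samples (300 genes here to keep the example fast)
expr = rng.normal(size=(300, 250))

def network_statistic(m):
    # Stand-in for the real network score, e.g. the number of strong edges;
    # here: gene pairs whose absolute correlation exceeds 0.3
    c = np.corrcoef(m)
    return int((np.abs(np.triu(c, k=1)) > 0.3).sum())

observed = network_statistic(expr)

null = []
for _ in range(1000):
    shuffled = expr.copy()
    for row in shuffled:              # permute samples independently within each gene
        rng.shuffle(row)
    null.append(network_statistic(shuffled))

p_value = (1 + sum(s >= observed for s in null)) / (1 + len(null))
print(observed, p_value)

The 1000 shuffled statistics form the null distribution; if the observed network score sits far in its tail, the network is unlikely to be due to chance alone.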
