Finding private Key x Big integers [closed] - encryption

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
Is it possible to find the private key x in the equation y = g^x mod p (with big integers, of course) if you have p, g, y and q?
What method can be used to find it, if one exists? Note: these are big integers.

This is called the discrete logarithm problem. You seem to be interested in the prime field special case of this problem.
For properly chosen fields with sufficiently large p this is infeasible. I expect this to be reasonably cheap ($100 or so) for a 512-bit p and extremely expensive at 1024-bit p. Beyond that it quickly becomes infeasible even for state-level adversaries.
For some fields it's much cheaper. For example, solving DL in binary fields (not prime fields, as in your example) has produced quite a few recent papers, e.g. Discrete logarithm in GF(2^809) with FFS and On the Function Field Sieve and the Impact of Higher Splitting Probabilities: Application to Discrete Logarithms in F_2^1971.
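For intuition, here is what solving the problem looks like when p is small. This is my own illustration, not part of the answer: a minimal baby-step giant-step sketch in Python (it needs Python 3.8+ for the modular-inverse form of `pow`, and its O(√p) time and memory make it hopeless at cryptographic sizes):

```python
import math

def bsgs(g, y, p):
    """Baby-step giant-step: find x with pow(g, x, p) == y, or None.

    Runs in O(sqrt(p)) time and memory, so it is only feasible
    for small p -- a didactic sketch, not an attack tool."""
    m = math.isqrt(p - 1) + 1
    # Baby steps: table of g^j mod p for j = 0 .. m-1
    table = {pow(g, j, p): j for j in range(m)}
    factor = pow(g, -m, p)          # g^(-m) mod p (Python 3.8+)
    gamma = y
    # Giant steps: check y * g^(-i*m) against the baby-step table
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * factor) % p
    return None

# Small demo: recover x = 57 from y = g^x mod p with p = 101, g = 2
p, g = 101, 2
y = pow(g, 57, p)
x = bsgs(g, y, p)
assert x is not None and pow(g, x, p) == y
```

Real attacks on prime-field DL use subexponential index-calculus methods (such as the number field sieve), which is where the 512-bit/1024-bit cost estimates above come from; generic methods like BSGS are only useful for small groups.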

When asked if two graphs are the same, is the problem P, NP, NP-hard, NP-complete? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I was given a question with two graphs, asking whether the two graphs are the same and whether the problem is P, NP, NP-hard or NP-complete. By looking at the two graphs, they are not the same. However, I don't know what type of problem it is.
First of all, you have to define what you mean by "the same". There are several ways of defining equality, but the most likely one in this context is graph isomorphism, where two graphs are equal if there is an edge-preserving bijection between them.
Next, if the problem is just to decide if your two given graphs are the same or not (and this is how you stated it), the problem is trivially in O(1) (and therefore in P and NP).
If, however, the problem is to decide whether any two graphs are isomorphic, the problem is in NP. It is currently neither known whether it is in P nor whether it is NP-complete.
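As a concrete illustration of the decision problem (my own sketch, not part of the answer): the naive isomorphism check tries all n! vertex bijections, which works for tiny graphs but also shows why the general problem is considered hard.

```python
from itertools import permutations

def isomorphic(n, edges_a, edges_b):
    """Brute-force graph isomorphism for two n-vertex undirected graphs.

    Tries every vertex bijection (n! of them), so this is only
    practical for very small n."""
    ea = {frozenset(e) for e in edges_a}
    eb = {frozenset(e) for e in edges_b}
    if len(ea) != len(eb):          # cheap necessary condition
        return False
    # Look for an edge-preserving bijection of the vertices
    for perm in permutations(range(n)):
        if all(frozenset((perm[u], perm[v])) in eb for u, v in ea):
            return True
    return False

# A triangle and a relabelled triangle are isomorphic...
assert isomorphic(3, [(0, 1), (1, 2), (2, 0)], [(1, 2), (2, 0), (0, 1)])
# ...but a 3-vertex path is not isomorphic to a triangle.
assert not isomorphic(3, [(0, 1), (1, 2)], [(0, 1), (1, 2), (2, 0)])
```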

How to work with numeric probability distribution functions [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I have to calculate the probability distribution function of a random variable that is composed of (sum, division, product, exponentiation, etc.) some other simple random variables. It is pretty complex, so I am more than happy to get a numerical solution.
While I thought this was a very standard thing to do, I was unable to find a framework for it. I'd preferably use R, but any major language will do.
What I would like therefore is a library that allowed me to:
i) create numerical random variables from classic distributions
ii) compose them by simple operations (+,-,*,/, exp,min, max,...)
Of course I could work with vectors and use convolutions and the like, but I wanted something more polished.
I am also aware that it is possible to use simulation to create the variables, then compose them with the operations and finally get the PDF using a histogram, but again, I would prefer a non-simulation approach.
Try the rv package. Note that if X is an exponential random variable with mean 1, then -log(X) has a standard Gumbel distribution.
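The "vectors and convolutions" route the question dismisses as unpolished is nonetheless a workable non-simulation approach. A minimal NumPy sketch (my own illustration, with grid choices that are pure assumptions) for the density of a sum of two variables:

```python
import numpy as np

# Approximate the PDF of Z = X + Y by discretizing each density on a
# grid and convolving numerically. Grid step and range are illustrative.
dx = 0.01
x = np.arange(0, 10, dx)
f_x = np.exp(-x)                  # density of X ~ Exponential(1) on the grid
f_y = np.exp(-x)                  # density of Y ~ Exponential(1)
f_z = np.convolve(f_x, f_y) * dx  # density of Z = X + Y on a grid over [0, 20)

# Sanity check: the sum of two Exp(1) variables is Gamma(2, 1),
# whose density is z * exp(-z).
assert np.allclose(f_z[:len(x)], x * np.exp(-x), atol=0.02)
```

Products and quotients can be handled the same way by first taking logarithms (a product of positive variables becomes a sum), at the cost of one change of variables per operation.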

Is there a rule for prime numbers? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I've passed by this article:
http://gauravtiwari.org/2011/12/11/claim-for-a-prime-number-formula/
and this paper:
http://www.m-hikari.com/ams/ams-2012/ams-73-76-2012/kaddouraAMS73-76-2012.pdf
They say that there is a formula that, when given n, returns the nth prime number, while other articles say that no formula discovered so far does such a thing.
If the formula really exists, then why is the largest known prime number only discovered from time to time? It would be very simple to use the formula to find a larger one.
I just want to know whether such a formula exists or not.
Conceptually it is very simple to test that a given number n is prime: just check for every smaller number m (larger than 1) whether m divides n without remainder. If such an m exists, n is not a prime number.
Then, to find the k-th prime number, you just iterate this procedure until you have found the k-th number that is prime. So yes, such a formula exists.
But executing the above procedure is very inefficient. So even having this formula (and in real cases you would use more intelligent variants), it can take literally ages before you get an answer. That is why more efficient variants and tricks are used to find large prime numbers.
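The procedure described above can be written down directly. This sketch checks divisors only up to √n, a standard shortcut over the plain description:

```python
def is_prime(n):
    """Trial division: n is prime iff no m in 2..sqrt(n) divides it."""
    if n < 2:
        return False
    m = 2
    while m * m <= n:
        if n % m == 0:
            return False
        m += 1
    return True

def nth_prime(k):
    """Return the k-th prime (1-indexed) by iterating the test above."""
    n = 1
    while k > 0:
        n += 1
        if is_prime(n):
            k -= 1
    return n

assert nth_prime(1) == 2
assert nth_prime(10) == 29
```

This "formula" works, but its cost grows quickly with k, which is exactly the point of the answer: record primes are found with far more specialized tests (e.g. Lucas-Lehmer for Mersenne numbers), not by enumerating primes one by one.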

What is the condition under which a Markov chain converges? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'm writing a program which calculates the limit of a Markov chain.
If the Markov matrix diverges, I should transform it into the form dA + (1-d)E, where both A and E are n x n matrices and all of the elements of E are 1/n.
But if I apply that transformation when the input converges, the wrong value comes out.
Is there any easy way to check whether a Markov matrix converges?
I'm not going to go into detail, because it's an entire field unto itself. The general convergence theorem states that any finite Markov chain that is aperiodic and irreducible converges (to its stationary distribution). Irreducibility is simple to check (it's equivalent to connectedness in graphs), and periodicity is also easy to check (both definitions are in the first chapter of the book below, and the convergence theorem is proved in chapter 4).
It's worth noting that a lack of irreducibility can be handled easily in the symmetric case by splitting the state space into "connected components" and considering each one separately, while periodicity can be patched by doing something similar to what you're suggesting: this is called creating the lazy Markov chain. If you want to understand the whole topic a little better (mixing times, for example, will be very helpful in your convergence algorithm), this is an excellent book (available for free):
http://pages.uoregon.edu/dlevin/MARKOV/markovmixing.pdf
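As a concrete sketch of the check (my own illustration, not from the answer): a finite chain converges from every start state iff its transition matrix is primitive, i.e. some power of it is entrywise positive, and by Wielandt's theorem that power need not exceed (n-1)^2 + 1. This folds the irreducibility and aperiodicity tests into one loop:

```python
import numpy as np

def converges(P):
    """Return True iff the finite Markov chain with row-stochastic
    transition matrix P converges from every start state.

    Equivalent to P being primitive: some power of P has all entries
    positive. Wielandt's bound says the exponent is at most
    (n - 1)**2 + 1, so only that many powers need checking."""
    n = P.shape[0]
    Q = (P > 0).astype(int)          # 0/1 sparsity pattern of P
    M = Q.copy()                     # pattern of P^1
    for _ in range((n - 1) ** 2 + 1):
        if np.all(M > 0):
            return True
        M = ((M @ Q) > 0).astype(int)  # pattern of the next power, kept 0/1
    return bool(np.all(M > 0))

P = np.array([[0.0, 1.0], [1.0, 0.0]])   # period-2 chain: oscillates forever
lazy = 0.5 * np.eye(2) + 0.5 * P         # lazy version: aperiodic, converges
assert not converges(P)
assert converges(lazy)
```

The lazy chain in the demo is exactly the kind of patch described above, and the dA + (1-d)E transformation from the question is the same idea taken further: it makes every entry positive in one step, forcing primitivity.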

Which is more random? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Which is more random?
rand()
OR
rand() + rand()
OR
rand() * rand()
Just how can one determine this? I mean, this is really puzzling me! One feels that they may all be equally random, but how can one be absolutely sure?
Anyone?
The concept of being "more random" doesn't really make sense. Your three methods give different distributions of random numbers. I can illustrate this in Matlab. First define a function f that, when called, gives you an array of 10,000 random numbers:
f = @() rand(10000,1);
Now look at the distribution of your three methods.
Your first method, hist(f()), gives a uniform distribution.
Your second method, hist(f() + f()), gives a distribution peaked in the centre (the triangular distribution on [0, 2]).
Your third method, hist(f() .* f()), gives a distribution where numbers close to zero are more likely.
As for the amount of entropy, I would guess they are comparable.
If you need more entropy (randomness) than you currently have, use cryptographically strong random generators.
Why are they comparable? Because if an attacker could guess the next pseudorandom value returned by rand(), it would not be significantly harder for him to guess the next rand()*rand().
Nevertheless argument about different distributions is important and valid!
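A quick empirical version of the answers above, sketched in Python with random.random() standing in for rand() (assuming, as in the Matlab answer, that rand() is uniform on [0, 1)):

```python
import random

# All three constructions are "random", but their distributions differ.
N = 100_000
random.seed(0)  # fixed seed, purely for reproducibility
u = [random.random() for _ in range(N)]                    # uniform on [0, 1)
s = [random.random() + random.random() for _ in range(N)]  # triangular on [0, 2)
p = [random.random() * random.random() for _ in range(N)]  # skewed toward 0

def mean(xs):
    return sum(xs) / len(xs)

# The sample means match the theoretical values 1/2, 1 and 1/4.
assert abs(mean(u) - 0.5) < 0.01
assert abs(mean(s) - 1.0) < 0.01
assert abs(mean(p) - 0.25) < 0.01
```

Plotting histograms of u, s and p reproduces the three shapes described in the Matlab answer: flat, centre-peaked, and piled up near zero.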
