How to understand zk-SNARKs from Buterin's blog?

I am reading about zero-knowledge proofs. In this regard, I have read about zk-SNARKs in the blog post by Vitalik Buterin. I understood flattening, R1CS, and the QAP polynomials from which the proof is built.
However, I am not able to map what exactly happens in the proof using these polynomials. I know that a zero-knowledge proof has a prover, a verifier, and a witness, and that there are prover and verifier functions. In the polynomial example explained in the blog, I don't understand what the prover and the verifier compute. What does the prover compute to prove his legitimacy?
Any help would be appreciated. Thank you.
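In outline, and paraphrasing the QAP section of that blog post (this omits the elliptic-curve machinery that makes the proof succinct and zero-knowledge): the prover's witness is the solution vector $s$ that satisfies the flattened constraint system. From it the prover forms

$$A(x) = \sum_i s_i A_i(x), \qquad B(x) = \sum_i s_i B_i(x), \qquad C(x) = \sum_i s_i C_i(x),$$

where the $A_i, B_i, C_i$ are the QAP polynomials obtained from the R1CS. The vector $s$ satisfies all $n$ constraints exactly when $Z(x) = (x-1)(x-2)\cdots(x-n)$ divides $A(x) \cdot B(x) - C(x)$ with no remainder, so what the prover computes is the quotient

$$H(x) = \frac{A(x) \cdot B(x) - C(x)}{Z(x)},$$

and what the verifier checks is the identity $A(t)\,B(t) - C(t) = H(t)\,Z(t)$ at a secret evaluation point $t$. A prover who does not know a valid witness cannot produce polynomials that pass this check, except with negligible probability.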

Related

A scheme’s running time is: O((1/E)^2*n^2). Is this a fully polynomial-time approximation scheme? Explain why?

I am looking for an answer to this question:
A scheme’s running time is: O((1/E)^2*n^2). Is this a fully polynomial-time approximation scheme? Explain why?
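For reference, since the question appears here without an answer: an approximation scheme is a fully polynomial-time approximation scheme (FPTAS) when its running time is polynomial in both the input size $n$ and $1/\epsilon$ (writing $\epsilon$ for the E above). The given bound,

$$O\!\left(\left(\tfrac{1}{\epsilon}\right)^{2} n^{2}\right),$$

is a polynomial in both $n$ and $1/\epsilon$, so yes, a scheme with this running time is an FPTAS. Contrast a bound like $O(n^{1/\epsilon})$: it is polynomial in $n$ for every fixed $\epsilon$ (so it gives a PTAS) but not polynomial in $1/\epsilon$, and hence not an FPTAS.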

Can it be proven no polynomial algorithm exists for an NP-Complete prob.?

I can't really seem to grasp what it really means to say a problem is NP-Complete. Could anyone help me with the following question?
An NP-complete problem is a problem for which one can prove that an algorithm for solving it in polynomial time does not exist. Is the statement true?
I would want to say this statement isn't true, because can anyone actually prove that such an algorithm doesn't exist for any NP-complete problem? From looking around in various sources, I understand that no polynomial-time algorithm is known for any NP-complete problem, but it also hasn't been proven that none exists.
Any help would be greatly appreciated. Thanks.
It is possible in some situations to prove that no algorithm exists that is better than a certain limit.
For example, the Ω(n log n) lower bound for comparison sorting has been proven (by a decision-tree argument: distinguishing n! possible orderings requires at least log₂(n!) = Θ(n log n) comparisons). No matter how clever we become in the future, we can be sure that no one will ever invent an O(n) comparison sort.
In this case, though, no one has found such a proof for any NP-complete problem. But that doesn't mean it can't be proven.
The statement is more fundamentally wrong: there are problems, provably not solvable in polynomial time, that are much harder than the problems in NP (the time hierarchy theorem guarantees such problems exist). The point of NP-completeness is that a polynomial-time solution existing for any NP-complete problem is equivalent to P = NP (which means, in turn, that proving no such solution exists would prove P ≠ NP).

Lua alternative to optim()

I'm currently looking for a Lua alternative to the R programming language's optim() function. Does anyone know how to deal with this?
http://numlua.luaforge.net/ looks interesting but doesn't seem to have minimization. The most promising lead seems to be a Lua wrapper for GSL, which includes a variety of multidimensional minimization algorithms.
With derivatives:
- BFGS (method="BFGS" in optim) and two conjugate gradient methods (Fletcher-Reeves and Polak-Ribière), which are two of the three options available for method="CG" in optim.
Without derivatives:
- the Nelder-Mead simplex (method="Nelder-Mead", the default in optim).
More specifically, see here for the GSL Shell documentation covering minimization.
I agree with @Zack that you should try to use existing implementations if at all possible, and that you might need a little bit more background knowledge to know which algorithms will be useful for your particular problems ...
R's implementation of optim isn't actually written in R. If you type "optim" with no parentheses at the prompt, it'll dump out the definition of the function, and you can see that after some error checking and argument shuffling it invokes an .Internal routine (coded in C and/or Fortran) to do all the real work.
So your best bet is to find a C library for mathematical optimization -- sorry, I have no recommendations -- and wrap that into Lua. I doubt anyone has written native-Lua code for this, and I would not recommend trying to code it yourself; doing mathematical optimization efficiently is still an active domain of basic research, and the best-so-far algorithms are decidedly nontrivial to implement.

Representing code algebraically

I have a number of small algorithms that I would like to write up in a paper. They are relatively short and concise. However, instead of writing them in pseudo-code (à la Cormen or even Knuth), I would like to write an algebraic representation of them (more linear, and better for LaTeX rendering). However, I cannot find resources on the best notation for this, if any exists: e.g. how do I represent a loop? An if? The addition of a tuple to a list?
Has any of you encountered this problem, and somehow solved it?
Thanks.
EDIT: Thanks, people. I think I did a poor job at phrasing the question. Here goes again, hoping I make it clearer: what is the common notation for talking about loops and if-then clauses in a mathematical notation? For instance, I can use $acc \leftarrow acc \cup \langle i,i+1 \rangle$ to represent the "add" method of a list.
Don't do this. You are deviating from what people expect to see when they read a paper about algorithms. You should follow expected practices; your ideas are more likely to get the attention that they deserve. When in Rome, do as the Romans do.
Formatting code (or pseudocode as it may be) in a LaTeXed paper is very easy. See, for example, Formatting code in LaTeX.
I see if-expressions in mathematical notation fairly often. The usual thing for a loop is a recurrence relation, or equivalently, a function defined recursively.
Here's how the Ackermann function is defined on Wikipedia, for instance:
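(The definition is embedded as an image in the original answer; transcribed, it reads:)

$$A(m, n) = \begin{cases} n + 1 & \text{if } m = 0 \\ A(m - 1,\ 1) & \text{if } m > 0 \text{ and } n = 0 \\ A(m - 1,\ A(m, n - 1)) & \text{if } m > 0 \text{ and } n > 0. \end{cases}$$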
This definition is nice because it feels mathematical in flavor and yet you could clearly type it in almost exactly as written and have an implementation. It is not always possible to achieve that.
Other mathematical notations that correspond to loops include ∑-notation for summation and set-builder notation.
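To make that concrete (an illustrative pairing, not from the original answer): the loop "$s \leftarrow 0$; for $i = 1$ to $n$ do $s \leftarrow s + f(i)$" is exactly the summation on the left, and an if-then-else is a piecewise (cases) definition like the one on the right:

$$s = \sum_{i=1}^{n} f(i), \qquad g(x) = \begin{cases} h(x) & \text{if } p(x) \\ k(x) & \text{otherwise.} \end{cases}$$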
I hope this answers your question! But if your aim is to describe how something is done and have someone understand, I think it is probably a mistake to assume that mathematicians would prefer to see equations. I don't think they're interchangeable tools (despite Turing equivalence). If your algorithm involves mutable data structures, procedural code is probably going to be better than equations for explaining it.
I'd copy Knuth. Few know how to communicate better than him in a computer science setting.
A symbol for general loops does not exist; usually you will use the summation operator. "if" is represented using implications, and to "add a tuple to a list" you would use union.
However, in general, a bit of verbosity is not necessarily a bad thing - sometimes, especially for complex algorithms, it is best to spell them out in plain English, using examples and diagrams. This is doubly true for non-coders.
Think about it: when you read a math text-book on Euclid's algorithm for GCD, or the sieve of Eratosthenes, how is it written? Usually, the algorithm itself is in prose, while the proof of the algorithm is where the mathematical symbols lie.
You might take a look at Haskell. Haskell formats well in latex, has a nice algebraic syntax, and you can even compile a latex file with Haskell in it, provided the code is wrapped in \begin{code} and \end{code}. See here: http://www.haskell.org/haskellwiki/Literate_programming. There are probably literate programming tools for other languages.
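For a flavor of how algebraic Haskell can look on the page, here is Euclid's algorithm, which is just the recurrence $\gcd(a, 0) = a$, $\gcd(a, b) = \gcd(b, a \bmod b)$ written down (a small illustrative sketch, not from the original answer):

-- Euclid's algorithm as the recurrence it encodes
gcd' :: Integer -> Integer -> Integer
gcd' a 0 = a
gcd' a b = gcd' b (a `mod` b)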
Lisp started out as a mathematical notation for a computing model, so that lecturers would have a better tool than Turing machines. By accident, it turned out that it could be implemented in assembly - and thus Lisp, the programming language, was born.
But I don't think this is really what you are looking for, since the computing model that Lisp describes doesn't have loops: recursion is used instead. The syntax derives from algebra, where parentheses denote evaluate-this-and-substitute-the-result. Indeed, Lisp's model of computing is basically substitution - which is essentially what algebra is.
Indeed, most functional languages like Lisp, Haskell and Erlang are derived from mathematics. Haskell grew out of the typed lambda calculus, so, like Lisp, it was born out of pure mathematics. But again, the syntax is probably not what you are used to.
You can certainly explain Lisp and Haskell syntax to mathematicians and they will treat it as a "game". Language constructs like loops, recursion and conditionals can be derived from the rules of the game rather than blindly implemented as in other languages. This leads you into the realm of combinatory logic, another branch of mathematics. Indeed, in combinatory logic even the concept of numbers can be constructed out of the rules of the game rather than being a native part of the language (look up Church numerals).
So have a look at Lisp/Scheme, Erlang and Haskell if you want. Erlang especially has syntax close to what you want:
add(A, B) -> A + B.
But my recommendation is to write in C-like pseudocode. It's sort of the lowest common denominator in programming languages. Has a syntax that is fairly easy to understand and clean. And the function syntax even derives from functions in mathematics. Remember f(x)?
As a plus, mathematicians are used to writing C, statisticians are used to writing C (though generally they prefer R), physicists are used to writing C, programmers are used to at least looking at C (I know a few who've never touched C).
Actually, scratch that. You mention that your target audience is statisticians. Write in R.
Something like this website describes?
APL? The only problem is that few people can read it.

Fibonacci coding

Can anybody suggest a good book/paper/website/background reading about universal codes for integers and especially Fibonacci code (in the sense of http://en.wikipedia.org/wiki/Fibonacci_code)? Thanks!
Edit: Thanks for the answers and the useful links so far! I am sorry if I have not made myself completely clear: I am not asking about code (as in writing a program) to generate or compute Fibonacci numbers, but about a particular code (as in encoding, or compressing, data) that makes use of Fibonacci numbers.
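For readers skimming past: Fibonacci coding writes a positive integer as a sum of non-consecutive Fibonacci numbers (its Zeckendorf representation), emits one bit per Fibonacci number from smallest to largest, and appends a final 1, so every codeword ends in "11" and the code is self-delimiting. A minimal sketch in Haskell (illustrative only, following the convention in the Wikipedia article above):

-- Fibonacci numbers as used by the code: 1, 2, 3, 5, 8, 13, ...
fibs :: [Integer]
fibs = 1 : 2 : zipWith (+) fibs (tail fibs)

-- Bits of the Zeckendorf representation of n >= 1, least significant
-- first, followed by the terminating 1. E.g. encode 4 == [1,0,1,1].
encode :: Integer -> [Int]
encode n = bits (reverse (takeWhile (<= n) fibs)) n ++ [1]
  where
    bits [] _ = []
    bits (f:fs) m          -- greedy, from the largest Fibonacci down
      | f <= m    = bits fs (m - f) ++ [1]
      | otherwise = bits fs m ++ [0]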
One paper found with Google Scholar:
"Data Compression" (D. A. Lelewer and D. S. Hirschberg, ACM Computing Surveys (CSUR), 1987)
I'm not very familiar with the subject, but from a brief look the article seems pretty decent.
I find MIT's online lectures to be a good resource generally. And they address Fibonacci algorithms in some detail: http://www.catonmat.net/blog/mit-introduction-to-algorithms-part-two/
Relevant segments of the video:
[17:49] Algorithms for computing Fibonacci numbers (FBs).
[19:04] Naive recursive algorithm (exponential time) for computing FBs.
[22:45] Bottom-up algorithm for computing FBs.
[24:25] Naive recursive squaring algorithm for FBs (doesn’t work because of floating point rounding errors).
[27:00] Recursive squaring algorithm for FBs.
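For a sense of what that last segment covers: the working recursive-squaring method reads F(n) off the matrix power [[1,1],[1,0]]^n using O(log n) multiplications and exact integer arithmetic, so the floating-point problem from the naive version at [24:25] never arises. A sketch in Haskell (illustrative, not taken from the lecture):

-- 2x2 integer matrix, row-major: (a, b, c, d) = [[a, b], [c, d]]
type M = (Integer, Integer, Integer, Integer)

mmul :: M -> M -> M
mmul (a, b, c, d) (e, f, g, h) =
  (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

-- Matrix power by repeated squaring: O(log n) multiplications.
mpow :: M -> Integer -> M
mpow _ 0 = (1, 0, 0, 1)                        -- identity matrix
mpow m n
  | even n    = let h = mpow m (n `div` 2) in mmul h h
  | otherwise = mmul m (mpow m (n - 1))

-- F(n) is the top-right entry of [[1,1],[1,0]]^n.
fib :: Integer -> Integer
fib n = let (_, b, _, _) = mpow (1, 1, 1, 0) n in b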
David MacKay's Information Theory, Inference, and Learning Algorithms has a chapter on codes. It has a free PDF version; check it out.
