What is the difference between the rem and mod functions in SML?

I'm working on an SML project in which I have to implement the two functions rem and mod for a custom datatype.
I know the definition of the remainder rem:
dividend = divisor * quotient + remainder
What is the definition of mod?
Please explain the difference between them in simple words.

In i mod j, the result has the same sign as j.
In i rem j, the result has the same sign as i.
You can look up the details in the Standard ML Basis Library documentation.
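The two sign conventions correspond to two different roundings of the quotient. A quick Python sketch (the helpers below just illustrate the definitions; Python's own // and % already follow the flooring/mod convention):

```python
def rem(i, j):
    # truncated division: the quotient rounds toward zero,
    # so the remainder keeps the sign of the dividend i
    return i - j * int(i / j)

def mod(i, j):
    # floored division: the quotient rounds toward negative infinity,
    # so the result keeps the sign of the divisor j
    return i - j * (i // j)

print(rem(-7, 3))   # -1  (sign of i)
print(mod(-7, 3))   #  2  (sign of j)
print(rem(7, -3))   #  1
print(mod(7, -3))   # -2
```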

Power with integer exponents in Isabelle

Here is my definition of power for integer exponents following this mailing-list post:
definition
"ipow x n = (if n < 0 then (1 / x) ^ n else x ^ n)"
notation ipow (infixr "^⇩i" 80)
Is there a better way to define it?
Is there an existing theory in Isabelle that already includes it so that I can reuse its results?
Context
I am dealing with complex exponentials, for instance consider this theorem:
after I proved it, I realized I need to work with integer n, not just naturals, and this involves using powers to pull the n out of the exponential.
I don't think something like this exists in the library. However, you have a typo in your definition. I believe you want something like
definition
"ipow x n = (if n < 0 then (1 / x) ^ nat (-n) else x ^ nat n)"
Apart from that, it is fine. You could write inverse x ^ nat (-n), but it should make little difference in practice. I would suggest the name int_power since the corresponding operation with natural exponents is called power.
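The effect of the typo is easy to sanity-check outside Isabelle. A minimal Python sketch of the corrected semantics, using exact rationals (note that Isabelle's nat truncates negative integers to 0, which is exactly what broke the original definition):

```python
from fractions import Fraction

def ipow(x, n):
    # corrected definition: a negative exponent inverts the base and
    # uses the natural-number power of -n (Isabelle's `nat (-n)`)
    x = Fraction(x)
    return (1 / x) ** (-n) if n < 0 else x ** n

print(ipow(2, 3))    # 8
print(ipow(2, -3))   # 1/8
# the typo'd definition computed (1/x) ^ nat n instead, and nat of a
# negative integer is 0, so every negative exponent would have given 1
```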
Personally, I would avoid introducing a new constant like this because in order to actually use it productively, you also need an extensive collection of theorems around it. This means quite a bit of (tedious) work. Do you really need to talk about integers here? I find that one can often get around it in practice (in particular, note that the exponentials in question are periodic anyway).
It may be useful to introduce such a power operator nevertheless; all I'm saying is you should be aware of the trade-off.
Side note: An often overlooked function in Isabelle that is useful when talking about exponentials like this is cis (as in ‘cosine + i · sine‘). cis x is equivalent to ‘exp(ix)’ where x is real.
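The identity behind cis can be checked numerically; a quick sketch with Python's cmath standing in for the complex exponential:

```python
import cmath
import math

# cis x = cos x + i*sin x, which equals exp(i*x) for real x
x = 0.7
cis_x = complex(math.cos(x), math.sin(x))
assert abs(cis_x - cmath.exp(1j * x)) < 1e-12
```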

Is the Sieve of Eratosthenes an example of Dynamic Programming?

I'm a bit confused as to whether the Sieve of Eratosthenes (implemented with an array for all the numbers and a loop marking the composite numbers) is an example of Dynamic Programming? A couple of friends were telling me the way it's implemented is an example of Bottom Up DP, but I'm having trouble seeing it. Exactly what are the subproblems and how would you implement SoE with Top-Down / Recursion? Thanks guys.
Sure, we could think of the Sieve of Eratosthenes as an example of dynamic programming. The subproblems would be all the composite numbers. Skipping over marked numbers is a perfect demonstration of the subproblems overlapping since if they did not overlap we wouldn't be skipping over them :)
One way we could formulate the sieve recursively could be: let f(N) represent the Nth prime and its associated sieve state. Then:
f(1) = (2, [4,6,...])
f(N) = (q, join(Sieve, [q+q, q+q+q, ...]))
       -- a pair of the next number q above p _not_ in Sieve, and the
       -- Sieve with all the multiples of this number q added into it
       -- (we'll place an upper bound on this process, practically)
  where (p, Sieve) = f(N - 1)
        q = next_not_in(p, Sieve)
Let's test:
f(3) =
call f(2) =
call f(1) =
<-- return (2, [4,6...])
<-- return (3, [4,6,8,9...])
<-- return (5, [4,6,8,9,10...])
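The same bottom-up reading can be sketched iteratively, with the shared sieve state playing the role of the DP table (a sketch; `bound` is the practical upper limit mentioned above):

```python
def primes_up_to(bound):
    composite = [False] * (bound + 1)      # the shared "Sieve" state
    primes = []
    for q in range(2, bound + 1):
        if composite[q]:                   # subproblem already solved: skip
            continue
        primes.append(q)                   # q = next_not_in(p, Sieve)
        for multiple in range(q * q, bound + 1, q):
            composite[multiple] = True     # join(Sieve, [q+q, q+q+q, ...])
    return primes

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```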

How to write a formula in C#?

how to write a formula like
v_r(t) = Σ_{n=0}^{N-1} A_r (L_2 − L_1) e^{j(ω_c t − (4π/λ)(R + υt + ((L_1+L_2)/2) cos(θ) sin(ω_r t + 2πn/N)))} · sinc((4π/λ)((L_2−L_1)/2) cos(θ) sin(ω_r t + 2πn/N))
in c#?
You have to convert the formula to something the compiler recognizes: its equivalent using a combination of basic algebra and the Math class, like so:
p = rho*R*T + (B_0*R*T-A_0-((C_0) / (T*T))+((E_0) / (Math.Pow(T, 4))))*rho*rho +
(b*R*T-a-((d) / (T)))*Math.Pow(rho, 3) +
alpha*(a+((d) / (T)))*Math.Pow(rho, 6) +
((c*Math.Pow(rho, 3)) / (T*T))*(1+gamma*rho*rho)*Math.Exp(-gamma*rho*rho);
Example taken from: Converting Math Equations in C#
Well, first you have to figure out what all those symbols mean. I see the sigma which usually indicates sum-of, with ∑_(n=0)^(N-1) probably translating to:
N-1
∑
n=0
This generally means the sum of the following expression where n varies from 0 to N-1. So I gather you'd need a loop there.
The expression to be calculated within that loop consists of a lot of trigonometric-type functions involving π, θ, sin and cos, plus the lesser-known sinc, which is the function sin(x)/x rather than a typo.
The bottom line is that you need to understand the current expression before you can think about converting it to another form like a C# program. Short of knowing where it came from, or a little bit of context, we probably can't help you that much though there's always a possibility that we have a savant/genius here that will recognise that formula off the top of their head.
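As a sketch of just the structure (not the asker's actual model; every name here, A_r, L1, L2, lam and so on, is a placeholder to be filled in), the sigma becomes a loop accumulating complex terms. Shown in Python, since the translation to C#'s Math calls is mechanical:

```python
import math

def v_r(t, N, A_r, L1, L2, lam, omega_c, omega_r, R, v, theta):
    total = 0 + 0j
    for n in range(N):                    # the sigma: n = 0 .. N-1
        phase = omega_r * t + 2 * math.pi * n / N
        arg = (4 * math.pi / lam) * ((L2 - L1) / 2) * math.cos(theta) * math.sin(phase)
        # sinc(x) = sin(x)/x, with sinc(0) = 1
        sinc = math.sin(arg) / arg if arg else 1.0
        exponent = omega_c * t - (4 * math.pi / lam) * (
            R + v * t + ((L1 + L2) / 2) * math.cos(theta) * math.sin(phase))
        # e^{j*exponent} = cos(exponent) + j*sin(exponent)
        total += A_r * (L2 - L1) * complex(math.cos(exponent), math.sin(exponent)) * sinc
    return total
```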

Multiplication using FFT in integer rings

I need to multiply long integers with an arbitrary BASE of the digits using an FFT in integer rings. Operands are always of length n = 2^k for some k, and the convolution vector has 2n components, so I need a 2n-th primitive root of unity.
I'm not particularly concerned with efficiency issues, so I don't want to use Strassen & Schönhage's algorithm - just computing basic convolution, then some carries, and that's nothing else.
Even though it seems simple to many mathematicians, my understanding of algebra is really bad, so I have lots of questions:
What are essential differences or nuances between performing the FFT in integer rings modulo 2^n + 1 (perhaps composite) and in integer FIELDS modulo some prime p?
I ask this because 2 is a (2n)th primitive root of unity in such a ring, because 2^n == -1 (mod 2^n+1). In contrast, integer field would require me to search for such a primitive root.
But maybe there are other nuances which will prevent me from using rings of such a form for the FFT.
If I picked integer rings, what are sufficient conditions for the existence of a 2^n-th root of unity in such a ring?
All other 2^k-th roots of unity of smaller order could be obtained by squaring this root, right?..
What essential restrictions are imposed on the multiplication by the modulo of the ring? Maybe on their length, maybe on the numeric base, maybe even on the numeric types used for multiplication.
I suspect that there may be some loss of information if the coefficients of the convolution are reduced by the modulo operation. Is it true and why?.. What are general conditions that will allow me to avoid this?
Is there any possibility that plain primitive-typed dynamic lists (e.g. of long) will suffice for the FFT vectors, their product and the convolution vector? Or should I convert the coefficients to BigInteger just in case (and what is the "case" when I really should)?
If a general answer to these question takes too long, I would be particularly satisfied by an answer under the following conditions. I've found a table of primitive roots of unity of order up to 2^30 in the field Z_70383776563201:
http://people.cis.ksu.edu/~rhowell/calculator/roots.html
So if I use 2^30th root of unity to multiply numbers of length 2^29, what are the precision/algorithmic/efficiency nuances I should consider?..
Thank you so much in advance!
I am going to award a bounty to the best answer - please consider helping out with some examples.
First, an arithmetic clue about your identity: 70383776563201 = 1 + 65550 * 2^30. And that long number is prime. There's a lot of insight into your modulus on the page How the FFT constants were found.
Here's a fact of group theory you should know. The multiplicative group of integers modulo N is the product of cyclic groups whose orders are determined by the prime factors of N. When N is prime, there's one cycle. The orders of the elements in such a cyclic group, however, are related to the prime factors of N - 1. 70383776563201 - 1 = 2^31 * 3 * 5^2 * 19 * 23, and the divisors of this number give the possible orders of elements.
(1) You don't need a primitive root necessarily, you need one whose order is at least large enough. There are some probabilistic algorithms for finding elements of "high" order. They're used in cryptography for ensuring you have strong parameters for keying materials. For numbers of the form 2^n+1 specifically, they've received a lot of factoring attention and you can go look up the results.
(2) The sufficient (and necessary) condition for an element of order 2^n is illustrated by the example modulus. The condition is that some prime factor p of the modulus has to have the property that 2^n | p - 1.
(3) Loss of information only happens when elements aren't multiplicatively invertible, which isn't the case for the cyclic multiplicative group of a prime modulus. If you work in a modular ring with a composite modulus, some elements are not so invertible.
(4) If you want to use arrays of long, you'll be essentially rewriting your big-integer library.
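Point (2) can be made concrete. A sketch using the small Fermat prime 257 (where 2^8 divides p − 1): raising a random element to (p − 1)/2^k yields an element whose order divides 2^k, and one extra check confirms the order is exactly 2^k.

```python
import random

def root_of_unity_order_2k(p, k):
    # requires: p prime and 2^k | p - 1
    assert (p - 1) % (1 << k) == 0
    while True:
        g = random.randrange(2, p - 1)
        w = pow(g, (p - 1) >> k, p)       # order of w divides 2^k
        if pow(w, 1 << (k - 1), p) != 1:  # order is exactly 2^k
            return w

p = 257                                   # 2^8 | 257 - 1 = 256
w = root_of_unity_order_2k(p, 8)
assert pow(w, 256, p) == 1 and pow(w, 128, p) != 1
```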
Suppose we need to calculate an n-bit integer multiplication where
n = 2^30, m = 2n, p = 2^n + 1.
Now w = 2 and x = [w^0, w^1, ..., w^{m-1}] (mod p).
The issue: each x[i] is itself up to n bits long, so we cannot compute w * a_i in O(1) time.
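To see the whole pipeline at a toy scale, here is a sketch modulo the Fermat prime 257 = 2^8 + 1, where w = 2 is a primitive 16th root of unity (since 2^8 ≡ −1 mod 257). Note the overflow caveat from point (3): the convolution coefficients must stay below the modulus or information is lost.

```python
p = 257                    # Fermat prime 2^8 + 1
n = 16                     # transform length
w = 2                      # primitive 16th root of unity mod 257
w_inv = pow(w, p - 2, p)   # modular inverse of w (Fermat's little theorem)
n_inv = pow(n, p - 2, p)   # modular inverse of n, for the inverse transform

def ntt(a, root):
    """Naive O(n^2) DFT over Z_p; enough to demonstrate the algebra."""
    return [sum(a[j] * pow(root, i * j, p) for j in range(n)) % p
            for i in range(n)]

def multiply(x_digits, y_digits, base=10):
    """Multiply numbers given as little-endian digit lists (short enough
    that convolution coefficients stay below p)."""
    a = x_digits + [0] * (n - len(x_digits))
    b = y_digits + [0] * (n - len(y_digits))
    fa, fb = ntt(a, w), ntt(b, w)
    fc = [u * v % p for u, v in zip(fa, fb)]        # pointwise product
    conv = [c * n_inv % p for c in ntt(fc, w_inv)]  # inverse NTT = convolution
    out, carry = [], 0                              # propagate carries
    for c in conv:
        carry += c
        out.append(carry % base)
        carry //= base
    return out

# 123 * 456 = 56088, digits little-endian
print(multiply([3, 2, 1], [6, 5, 4]))  # starts with [8, 8, 0, 6, 5, 0, ...]
```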

Big Oh notation (how to write a sentence)

I had a test about asymptotic notations and there was a question:
Consider the following:
O(o(f(n))) = o(f(n))
Write in words the meaning of the statement, using conventions from asymptotic notation.
Is the statement true or false? Justify.
I got it wrong (don't exactly remember what I wrote), but I think is something like:
For any function g(n) = o(f(n)), there
is a function h(n) = o(f(n)) so that
h(n) = O(f(n)).
Is it correct?
And for (2), I'm not totally sure. Can someone help me with this one too?
Thanks in advance.
I think they were trying to ask a question about the relationship between Big O and little o asymptotic notation.
A) Big-O bounding of a little-o bounded function reduces to the little-o bound of that function.
B) True. Big O is a less "strict" bound in that it stipulates that there exist an M and an n0 such that f(n) <= M * g(n) for all n >= n0, whereas little o stipulates that for every positive M there is an n0 such that f(n) is upper-bounded by M * g(n) for all n >= n0.
Thus the "an M" of Big O is a weaker requirement than the "all M" of little o, and so O(o(f(n))) is equivalent to o(f(n)).
For the actual math and not my weak ascii, see the wikipedia page
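Read set-theoretically, the claim O(o(f(n))) = o(f(n)) follows directly from the definitions; a proof sketch:

```latex
\begin{align*}
g \in o(f) &\iff \forall \varepsilon > 0.\ \exists n_1.\ \forall n \ge n_1.\ |g(n)| \le \varepsilon\,|f(n)| \\
h \in O(g) &\iff \exists M > 0.\ \exists n_2.\ \forall n \ge n_2.\ |h(n)| \le M\,|g(n)|
\end{align*}
% Fix any \varepsilon > 0 and apply the o-bound with \varepsilon / M: for
% n \ge \max(n_1, n_2),
\[
|h(n)| \;\le\; M\,|g(n)| \;\le\; M \cdot \tfrac{\varepsilon}{M}\,|f(n)| \;=\; \varepsilon\,|f(n)|,
\]
% so h \in o(f), giving O(o(f(n))) \subseteq o(f(n)); the reverse inclusion
% holds because every g \in o(f) satisfies g \in O(g).
```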
Meaning in plain English:
Anything bounded above by a function that grows strictly slower than f(n) itself grows strictly slower than f(n).
Your statement could have been written as: for any function g(n) = o(f(n)), every h(n) = O(g(n)) is also o(f(n)), i.e. O(g(n)) = o(f(n)), hence O(o(f(n))) = o(f(n)).
Yes, the statement is correct.
(Of course, the above assumes all the correct constants, and "grows strictly slower" is used for readability and understanding: it should be "has a strict asymptotic upper bound".)
Sorry if this seems like a bit of an aside, but I think it's a dodgy question (as Alexandre C has alluded to) as it's a pretty big abuse of notation.
The way big-O notation is commonly taught (especially in a computer science class) is as if O(f(n)) is a function. This should set off some alarm bells, as the statements "n = O(n)" and "2n = O(n)" are both true, but "n = 2n" is not. If we want to say "f(n) is big-O of g(n)" we technically shouldn't be saying "f(n) = O(g(n))", rather we should say "f(n) is an element of O(g(n))". The former is just a convenient shorthand.
So back to the actual question, O(o(f(n))) doesn't really mean a whole lot (or at least I've never seen a formal definition of big-O of a set of functions). But I guess the logical way to interpret it would be as per enjay's answer, with g(n) = o(f(n)).
