How to do rational arithmetic in Julia?

I am writing some functions in Julia and want the results to be represented as rational numbers. That is, if a function returns 1/2, 1/3, 13/2571, etc., I want them returned as written and not converted to floats. Say the functions compute some coefficients by some iterative process and I want the coefficient values to be shown as rationals. How can I do that in Julia?

Rationals in Julia can be written as
1//2
These will work with functions, including user-defined ones, as you would expect:
5//7*3//5 # results in 3//7
f(x) = x^2 - 1
f(3//4) # results in -7//16
There's really not much else to it, but see also the manual section. If there's something in particular that's not working for you, post some example code and I'll take a look.
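If the coefficients come from an iterative process, it's enough that the iteration starts from rationals (integer//integer) and uses exact operations, so nothing is ever promoted to a float. A minimal sketch (the partial-sum recurrence here is a made-up example, not from the question):

function coeffs(n)
    out = Rational{Int}[]
    c = 0//1                  # exact rational accumulator
    for k in 1:n
        c += 1//(k*(k+1))     # stays a Rational{Int} throughout
        push!(out, c)
    end
    return out
end

coeffs(3)   # 3-element Vector{Rational{Int64}}: [1//2, 2//3, 3//4]

If a long iteration could overflow Int, start from big(1)//1 to get Rational{BigInt}; and if a value has already become a Float64, rationalize(x) recovers a nearby rational.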

Related

How can I improve the lindep function's applicability in Pari/GP for integral approximations?

While doing certain computations involving the Rogers L-function, the following result was generated by Wolfram Alpha:
[image of Wolfram Alpha's result: an integral evaluating to (2/3)*pi^2 + 4*zeta(3)]
I wanted to verify this result in Pari/GP by means of the lindep function, so I calculated the integral to 20 digits in WA, yielding:
11.3879638800312828875
Then, I used the following code in Pari/GP:
lindep([zeta(2), zeta(3), 11.3879638800312828875])
As pi^2 = 6*zeta(2), one would expect the output to be a vector along the lines of:
[12,12,-3]
because that's the linear dependency suggested by WA's result. However, I got a very elaborate vector from Pari/GP:
[35237276454, -996904369, -4984618961]
I think the first vector should be the "right" output of the Pari code sample.
Questions:
Why is the lindep function in Pari/GP not yielding the output one would expect in this case?
What can I do to make it give the vector that would be more appropriate in this situation?
It comes down to Pari treating your rounded values as exact. Since the input is necessarily rounded, lindep finds an exact integer relation for the rounded values, and that relation need not match the true one.
You can try changing the accuracy of lindep using the second argument; see the example after the quoted documentation below. The manual states that you should choose this to be smaller than the number of correct decimal digits in the input. I believe this should solve the issue.
lindep(v, {flag = 0}) finds a small nontrivial integral linear
combination between components of v. If none can be found return an
empty vector.
If v is a vector with real/complex entries we use a floating point
(variable precision) LLL algorithm. If flag = 0 the accuracy is chosen
internally using a crude heuristic. If flag > 0 the computation is
done with an accuracy of flag decimal digits. To get meaningful
results in the latter case, the parameter flag should be smaller than
the number of correct decimal digits in the input.
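For instance (not verified here, but following the manual's advice; the input above has 20 correct digits, so any flag comfortably below that should do):
lindep([zeta(2), zeta(3), 11.3879638800312828875], 15)
This should recover a short relation proportional to [12, 12, -3].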

Is this what rnorm(x) does if x is a vector, and how could I have found out faster?

I’m looking for R resources, and I started looking at “An Introduction to R” here at r-project.org. I dove in and got stumped immediately.
I think I've figured out what’s going on, and my question is basically
Are there resources to help me figure out something like this more
easily?
The preface of the Introduction to R suggests starting with the introductory session in Appendix A, and right at the start is this code and remark.
x <- rnorm(50)
y <- rnorm(x)
Generate two pseudo-random normal vectors of x- and y-coordinates.
The documentation says the (first and only non-optional) parameter to rnorm is the length of the result vector. So x <- rnorm(50) produces a vector of 50 random values from a normal distribution with mean 0 and standard deviation 1.
So far, so good. But why does rnorm(x) seem to do what y <- rnorm(50) or y <- rnorm(length(x)) would have done? Either of these alternatives seems clearer to me.
My guess as to what happens is this:
The wrapper for rnorm didn’t care what kind of thing x is and just passed to the underlying C function a pointer to the C struct for x as an R object.
R objects represented in C are structs followed by “data”; the data of the C representation of an R vector of reals starts with two integers, the first of which is the vector's length. (The vector elements follow those integers.) I found this out by reading up on R internals here.
If a C function were written to find the value of an R integer from a passed pointer-to-R-object, and it were called with a pointer to an R vector of reals, it would find the vector’s length in the place it would look for the single integer.
In addition to my main question of “How can I figure out something like this more easily?”, I wouldn’t mind knowing whether what I think is going on is correct, and whether rnorm(x) is idiomatic R in this context or more of a sloppy choice. Given that it does something useful, can it be relied upon, or is it just lucky behavior for an expression that isn’t well-defined in R?
I’m used to strongly-typed languages like C or SQL, which have easier-to-follow (for me) semantics and which also have more comprehensive references available, so any references for R that have a programming-language-theory focus or are aimed at people used to strong typing would be good, too.
It is documented behavior. From ?rnorm:
Usage: [...]
rnorm(n, mean = 0, sd = 1)
Arguments:
[...]
n: number of observations. If ‘length(n) > 1’, the length is
taken to be the number required.
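A quick sanity check in the console (a minimal illustration of the documented rule, not from the original answer):
x <- rnorm(50)
length(rnorm(x))              # 50: length(x) > 1, so length(x) draws are made
length(rnorm(c(10, 20, 30)))  # 3, not 10: the values in n are ignored, only its length
So rnorm(x) behaves as documented shorthand for rnorm(length(x)); no pointer-level accident is involved.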

In R, incomplete gamma function with complex input?

Incomplete gamma functions can be calculated in R with pgamma, or with gamma_inc_Q from library(gsl), or with gammainc from library(expint). However, all of these functions take only real input.
I need an implementation of the incomplete gamma function which will take complex input. Specifically, I have an integer for the first argument, and a complex number for the second argument (the limit in the integral).
This function is well-defined for complex inputs (see Wikipedia), and I've been calculating it in Mathematica. It doesn't seem to be built into R though, and I don't see it in any libraries.
So, can anyone suggest a shorter path to doing these calculations, than looking up an algorithm, implementing it in C, and writing an R interface?
(If I do have to implement it myself, here's the only algorithm for complex inputs that I've found: Kostlan & Gokhman 1987)
Here is an implementation, assuming you want the lower incomplete gamma function. I've compared a couple of values with Wolfram and they match.
library(CharFun)
incgamma <- function(s, z) {
  z^s * exp(-z) * hypergeom1F1(z, 1, s+1) / s
}
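The identity behind this sketch is the standard Kummer representation of the lower incomplete gamma function, gamma(s, z) = z^s * exp(-z) * 1F1(1; s+1; z) / s, assuming CharFun's hypergeom1F1(t, a, b) evaluates 1F1(a; b; t); that reading is consistent with the values checked against Wolfram.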
Perhaps the evaluation fails for a large s.
EDIT
Looks like CharFun has been removed from CRAN. You can use IncGamma in HypergeoMat:
> library(HypergeoMat)
> IncGamma(m=50, 2+2i, 5-6i)
[1] 0.3841221+0.3348439i
The result is the same on Wolfram.

Divisibility function in SML

I've been struggling with the basics of functional programming lately. I started writing small functions in SML; so far, so good. However, there is one problem I cannot solve. It's on Project Euler (https://projecteuler.net/problem=5), and it simply asks for the smallest natural number that is divisible by all the numbers from 1 to n (where n is the argument of the function I'm trying to build).
Searching for the solution, I've found that through prime factorization you analyze all the numbers from 1 to 10 and, for each prime, keep the highest power that occurs in any factorization. Then you multiply those prime powers together and you have your result (e.g. for n = 10, that number is 2520).
Can you help me on implementing this to an SML function?
Thank you for your time!
Since coding is not a spectator sport, it wouldn't be helpful for me to give you a complete working program; you'd have no way to learn from it. Instead, I'll show you how to get started, and start breaking down the pieces a bit.
Now, Mark Dickinson is right in his comments above that your proposed approach is neither the simplest nor the most efficient; nonetheless, it's quite workable, and plenty efficient enough to solve the Project Euler problem. (I tried it; the resulting program completed instantly.) So, I'll go with it.
To start with, if we're going to be operating on the prime decompositions of positive integers (that is: the results of factorizing them), we need to figure out how we're going to represent these decompositions. This isn't difficult, but it's very helpful to lay out all the details explicitly, so that when we write the functions that use them, we know exactly what assumptions we can make, what requirements we need to satisfy, and so on. (I can't tell you how many times I've seen code-writing attempts where different parts of the program disagree about what the data should look like, because the exact easiest form for one function to work with was a bit different from the exact easiest form for a different function to work with, and it was all done in an ad hoc way without really planning.)
You seem to have in mind an approach where a prime decomposition is a product of primes to the power of exponents: for example, 12 = 2² × 3¹. The simplest way to represent that in Standard ML is as a list of pairs: [(2,2),(3,1)]. But we should be a bit more precise than this; for example, we don't want 12 to sometimes be [(2,2),(3,1)] and sometimes [(3,1),(2,2)] and sometimes [(3,1),(5,0),(2,2)]. So, we can say something like "The prime decomposition of a positive integer is represented as a list of prime–exponent pairs, with the primes all being positive primes (2,3,5,7,…), the exponents all being positive integers (1,2,3,…), and the primes all being distinct and arranged in increasing order." This ensures a unique, easy-to-work-with representation. (N.B. 1 is represented by the empty list, nil.)
By the way, I should mention — when I tried this out, I found that everything was a little bit simpler if instead of storing exponents explicitly, I just repeated each prime the appropriate number of times, e.g. [2,2,3] for 12 = 2 × 2 × 3. (There was no single big complication with storing exponents explicitly, it just made a lot of little things a bit more finicky.) But the below breakdown is at a high level, and applies equally to either representation.
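For concreteness, here is 12 in both candidate representations (the value names are just for illustration):
val twelvePairs = [(2, 2), (3, 1)]   (* (prime, exponent) pairs *)
val twelveFlat  = [2, 2, 3]          (* each prime repeated exponent-many times *)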
So, the overall algorithm is as follows:
Generate a list of the integers from 1 to 10, or 1 to 20.
This part is optional; you can just write the list by hand, if you want, so as to jump into the meatier part faster. But since your goal is to learn the basics of functional programming, you might as well do this using List.tabulate [documentation].
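For instance, something along these lines (taking n = 20, the full Project Euler case):
val ints = List.tabulate (20, fn i => i + 1)   (* [1, 2, ..., 20]; tabulate passes 0 through n-1 *)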
Use this to generate a list of the prime decompositions of these integers.
Specifically: you'll want to write a factorize or decompose function that takes a positive integer and returns its prime decomposition. You can then use map, a.k.a. List.map [documentation], to apply this function to each element of your list of integers.
Note that this decompose function will need to keep track of the "next" prime as it's factoring the integer. In some languages, you would use a mutable local variable for this; but in Standard ML, the normal approach is to write a recursive helper function with a parameter for this purpose. Specifically, you can write a function helper such that, if n and p are positive integers, p ≥ 2, where n is not divisible by any prime less than p, then helper n p is the prime decomposition of n. Then you just write
local
  fun helper n p = ...
in
  fun decompose n = helper n 2
end
Use this to generate the prime decomposition of the least common multiple of these integers.
To start with, you'll probably want to write a lcmTwoDecompositions function that takes a pair of prime decompositions, and computes the least common multiple (still in prime-decomposition form). (Writing this pairwise function is much, much easier than trying to create a multi-way least-common-multiple function from scratch.)
Using lcmTwoDecompositions, you can then use foldl or foldr, a.k.a. List.foldl or List.foldr [documentation], to create a function that takes a list of zero or more prime decompositions instead of just a pair. This makes use of the fact that the least common multiple of { n1, n2, …, nN } is lcm(n1, lcm(n2, lcm(…, lcm(nN, 1)…))). (This is a variant of what Mark Dickinson mentions above.)
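A minimal sketch of that fold, assuming lcmTwoDecompositions takes a pair (so its type lines up with what List.foldl expects) and using nil as the decomposition of 1:
fun lcmAll decomps = List.foldl lcmTwoDecompositions nil decomps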
Use this to compute the least common multiple of these integers.
This just requires a recompose function that takes a prime decomposition and computes the corresponding integer.
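With the repeated-primes representation mentioned earlier, this last step collapses to a product fold (a sketch; with prime–exponent pairs you'd need a small integer-power helper instead):
fun recompose ps = List.foldl op* 1 ps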

How to make nonsymbolic plot_vector_field in sage?

I have a function f(x,y) whose outcome is random (I take the mean of 20 random numbers that depend on x and y). I see no way to modify this function to make it symbolic.
And when I run
x,y = var('x,y')
d = plot_vector_field((f(x),x), (x,0,1), (y,0,1))
it says it can't cast a symbolic expression to a real or rational number. In fact it stops when I write:
a=matrix(RR,1,N)
a[0]=x
What is the way to change this variable to real numbers in the beginning, compute f(x) and draw a vector field? Or just draw a lot of arrows with slope (f(x),x)?
I can create something sort of like yours, though with no errors; at least, it doesn't do what you want either.
def f(m, n):
    return m*randint(100, 200) - n*randint(100, 200)

var('x,y')
plot_vector_field((f(x, y), f(y, x)), (x, 0, 1), (y, 0, 1))
The reason is that Python functions evaluate immediately; in this case, f(x,y) was 161*x - 114*y, though that will change with each invocation.
My suspicion is that your problem is similar: the Python function is evaluated once and for all. Instead, try lambda functions. They are annoying but very useful in this case.
var('x,y')
plot_vector_field((lambda x,y: f(x,y), lambda x,y: f(y,x)),(x,0,1),(y,0,1))
Wow, now I have to find an excuse to show off this picture, cool stuff. I hope your error ends up being very similar.
