How to make multi-line nested for loops in PARI/GP? - pari-gp

How can I make nested loops in PARI/GP that span multiple lines at each level? I often have to do multiple things inside for loops, and for readability I don't like writing my loops on a single line. For a loop over one variable, I've been doing this:
for(i=1,10,{
printf("%u\n",i);
})
However, for nested loops I've only managed to put line-breaks at one level. This works:
for(i=1, 10, for(j=1, 10, {
printf("%2u\t%2u\n", i, j);
}));
This also works:
for(i=1, 10, {
for(j=1, 10, printf("%2u\t%2u\n", i, j));
});
However, this is what I'd really like to do:
for(i=1, 10, {
for(j=1, 10, {
printf("%2u\t%2u\n", i, j);
});
});
This last example doesn't work; it gives an error:
*** sorry, embedded braces (in parser) is not yet implemented.
... skipping file 'nested_for.gp'
*** at top-level: printf("%2u\t%2u\n",
*** ^--------------------
*** printf: not a t_INT in integer format conversion: i.
*** Break loop: type 'break' to go back to GP
I'm using PARI/GP 2.5.3 on OS X 10.8.3. I write my scripts into a file nested_for.gp and run them from Bash using gp ./nested_for.gp.

Contrary to what C-like syntax leads us to expect, braces don't define a block in
GP. They only allow you to split a sequence of instructions across multiple
consecutive lines. They don't nest; on the other hand, you can nest loops
inside a single { } block:
{
  for (i = 1, 10,
    for (j = 1, 10,
      print (i+j)))
}
Multi-line commands are usually found in user functions, and may look
more natural in such a context:
fun(a, b) =
{
  for (i = 1, a,
    for (j = 1, b,
      print (i+j)));
}
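Applied to the loop from the question, a single pair of braces around the whole nested construct should do (a sketch along the same lines, untested):
{
  for (i = 1, 10,
    for (j = 1, 10,
      printf("%2u\t%2u\n", i, j)));
}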

Nargin function in R (number of function inputs)

Goal
I am trying to create a function in R that replicates the functionality of the MATLAB function of the same name, which returns the number of arguments that were passed to a function.
Example
Consider the function below:
addme <- function(a, b) {
  if (nargin() == 2) {
    c <- a + b
  } else if (nargin() == 1) {
    c <- a + a
  } else {
    c <- 0
  }
  return(c)
}
Once the user runs addme(), I want nargin() to look at how many parameters were passed (2, meaning a and b; only 1, meaning a; or none) and calculate c accordingly.
What I have tried
After spending a lot of time messing around with environments, this is the closest I ever got to a working solution:
nargin <- function() {
  length(as.list(match.call(envir = parent.env(environment()))))
}
The problem with this function is that it always returns 0. I think the reason is that it's looking at its own environment instead of its parent's, in spite of my attempt to throw a parent.env in there.
I know I can use missing() and args() inside addme() to achieve the same functionality, but I'll be needing this quite a few other times throughout my project, so wrapping it in a function is definitely something I should try to do.
Question
How can I get nargin() to return the number of arguments that were passed to its parent function?
You could use
nargin <- function() {
  if (sys.nframe() < 2) stop("must be called from inside a function")
  length(as.list(sys.call(-1))) - 1
}
Basically you just use sys.call(-1) to go up the call stack to the calling function and get its call, then count the number of elements and subtract one for the function name itself.
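For instance, with this nargin() in place, the addme() function from the question behaves as intended (a quick sketch of the expected results):
addme(2, 3)  # sys.call(-1) inside nargin() sees addme(2, 3), so nargin() returns 2; result is 5
addme(2)     # nargin() returns 1; result is a + a = 4
addme()      # nargin() returns 0; result is 0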

Best Practice of using nested constant variable across project in Golang?

In NodeJS, when we want to declare some constant variables and use them across the project, we might write something like:
// const.js
module.exports.mqttQOS = {
AtMostOnce: 0,
AtLeastOnce: 1,
ExactlyOnce: 2,
};
Therefore, we could use it like constant.mqttQOS.AtMostOnce, and have an error thrown when we refer to something undefined such as constant.mqttQOS.ErrorRefering.
In Golang we could only do something like:
var mqttQoS = map[string]byte{
    "AtMostOnce":  0,
    "AtLeastOnce": 1,
    "ExactlyOnce": 2,
}
And use it as: fmt.Println(mqttQoS["AtMostOnce"]) // print: 0
However, fmt.Println(mqttQoS["ErrorRefering"]) will also print 0, because a Go map returns the zero value for a missing key (much like Python's defaultdict()).
Although we could guard against this kind of erroneous lookup with the comma-ok idiom:
var mqttQoS = map[string]byte{
    "AtMostOnce":  0,
    "AtLeastOnce": 1,
    "ExactlyOnce": 2,
}
result, ok := mqttQoS["ErrorRefering"]
if ok {
    fmt.Println("value: ", result)
}
So, back to my question: other than using the ok check to catch bad lookups, is there a better practice for working with nested constant objects in Go?
Updated:
I would like to be able to write mqttQoS.AtMostOnce and get an error when I write mqttQoS.ErrorRefer.
Defining another type is one way, but is it common practice in big projects?
Thanks!
As noted in the comments, using nested const is not common practice in Go.
A constant literal might be the solution to the question.
There are related discussions and proposals on github.com/golang:
https://github.com/golang/go/issues/21130
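As an illustration of the "defining another type" approach mentioned in the update (a sketch only; the package name mqttqos and the identifiers are hypothetical), typed constants turn a reference to an undefined name into a compile-time error instead of a silent zero value:
// qos.go, in a hypothetical package mqttqos
package mqttqos

// QoS is an MQTT quality-of-service level.
type QoS byte

const (
    AtMostOnce  QoS = iota // 0
    AtLeastOnce            // 1
    ExactlyOnce            // 2
)
A caller would then write mqttqos.AtMostOnce, and a typo such as mqttqos.ErrorRefer fails to compile rather than returning 0.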

R - Writing a function to return binary output using if statement

Good day,
I am a beginner and am trying to understand why I am getting the error below.
I am trying to create a function that would return 0 or 1 based on column values in a data set.
LT = function(Lost.time) {
For (i in 1:dim(df)) {
if (df$Lost.time > 0) {
x = 1
}
else {
x = 0
}
return(x)
}
}
Error: no function to return from, jumping to top level In addition: Warning
message: In if (df$Lost.time > 0) { : the condition has length > 1 and only
the first element will be used> } Error: unexpected '}' in "}"
There are a couple of mistakes in the code:
R is case sensitive. Use for instead of For.
If you are looping over the entries in df$Lost.time, the individual elements should be addressed within the loop using df$Lost.time[i]. However, a loop is not necessary for this task.
An else statement should not begin on a new line of the code. The parser cannot know that the if statement is not finished after the first block. If the else is written on the same line as the closing brace, as in } else {, there will be no problem in this sense.
The parameter passed to the function is not suitable. Maybe you could pass df instead of Lost.time, but it may be necessary to rewrite parts of the function.
The use of 1:dim(df) in the for loop should work, but it will trigger a warning message. It is better to use 1:nrow(df).
Those are syntax problems. However, the main issue is probably what has been addressed in the answer by @TimBiegeleisen: in the loop you are checking, for each of the nrow(df) elements of df$Lost.time, whether a specific condition is fulfilled. It therefore does not seem to make sense to have a single binary result as the return value. The purpose of the function should be clarified before it is implemented.
An alternative to this function could be constructed in a one-liner with ifelse.
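For example, assuming the goal is a 0/1 flag for every row of df (a sketch):
df$x <- ifelse(df$Lost.time > 0, 1, 0)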
It is not clear what you actually want to return in your function. return can only be called once, after which it will return a single value and the function will terminate.
If you want to get a vector which will contain 1 or 0 depending on whether a given row in your data frame has Lost.time > 0, then the following one liner should do the trick:
x <- as.numeric(df$Lost.time > 0)
If a loop is used to write the function, indices should be used to address each element.
The version below creates a variable x in the data frame: if the condition is true for a row, x is set to 1, otherwise to 0.
LT = function(df) {
  for (i in 1:nrow(df)) {
    if (as.numeric(df$Lost.time[i]) > 0) {
      df$x[i] <- 1
    } else {
      df$x[i] <- 0
    }
  }
  df  # return the modified data frame; assignments inside the function do not change the caller's copy
}
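A hypothetical call would then be df <- LT(df), after which df$x holds the 0/1 flag for each row.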

Java 8 functional style to iterate with indexes

I have been practicing Java 8 streams and functional style for a while.
Sometimes I try to solve programming puzzles just using streams.
And during this time I found a class of tasks which I don't know how to solve with streams, only with the classical approach.
One example of this kind of tasks is:
Given an array of numbers, find the index of the element at which the running sum of the left part of the array drops below zero.
e.g. for the array [1, 2, 3, -1, 3, -10, 9] the answer will be 5
My first idea was to use IntStream.generate(0, arr.length)... but then I don't know how to accumulate values and be aware of the index at the same time.
So questions are:
Is it possible to somehow accumulate a value over a stream and then exit conditionally?
And what about parallel execution? It does not seem to fit a problem of finding indexes, where we need to be aware of element order.
I doubt your task is well suited for streams. What you are looking for is a typical scan-left operation, which is by nature a sequential operation.
For instance imagine the following elements in the pipeline: [1, 2, -4, 5]. A parallel execution may split it into two subparts, namely [1, 2] and [-4, 5]. Then what would you do with them? You cannot sum them independently, because that yields [3] and [1], and then you lose the fact that 1 + 2 - 4 < 0 was satisfied.
So even if you write a collector that keeps track of the index and the sum, it won't be able to perform well in parallel (I doubt you can even benefit from it), but you can imagine such a collector for sequential use:
public static Collector<Integer, ?, Integer> indexSumLeft(int limit) {
    return Collector.of(
        // state: [0] = index of the last element added, [1] = running sum, [2] = "stop" flag
        () -> new int[]{-1, 0, 0},
        (arr, elem) -> {
            if (arr[2] == 0) {      // keep accumulating until the flag is set
                arr[1] += elem;
                arr[0]++;
            }
            if (arr[1] < limit) {   // sum dropped below the limit: freeze further accumulation
                arr[2] = 1;
            }
        },
        (arr1, arr2) -> { throw new UnsupportedOperationException("Cannot run in parallel"); },
        arr -> arr[0]               // index at which the sum first went below the limit
    );
}
and a simple usage:
int index = IntStream.of(arr).boxed().collect(indexSumLeft(0));
This will still traverse all the elements of the pipeline, so not very efficient.
Also you might consider using Arrays.parallelPrefix if the data-source is an array. Just compute the partial sums over it and then use a stream to find the first index where the sum is below the limit.
int limit = 0;  // below zero, as in the original problem
Arrays.parallelPrefix(arr, Integer::sum);  // arr now holds the prefix sums
int index = IntStream.range(0, arr.length)
                     .filter(i -> arr[i] < limit)
                     .findFirst()
                     .orElse(-1);
Here also all the partial sums are computed (but in parallel).
In short, I would use a simple for-loop.
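For reference, the simple for-loop alluded to above might look like this (a sketch; the method name is just illustrative, and limit is 0 for the original problem):
static int indexWhereSumDropsBelow(int[] arr, int limit) {
    int sum = 0;
    for (int i = 0; i < arr.length; i++) {
        sum += arr[i];
        if (sum < limit) {
            return i;  // first index at which the running sum drops below the limit
        }
    }
    return -1;  // the running sum never drops below the limit
}
For the example array, indexWhereSumDropsBelow(new int[]{1, 2, 3, -1, 3, -10, 9}, 0) returns 5.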
I can propose a solution using my StreamEx library (which provides additional functions to the Stream API), but I would not be very happy with such a solution:
int[] input = {1, 2, 3, -1, 3, -10, 9};
System.out.println(IntStreamEx.of(
IntStreamEx.of(input).scanLeft(Integer::sum)).indexOf(x -> x < 0));
// prints OptionalLong[5]
It uses IntStreamEx.scanLeft operation to compute the array of prefix sums, then searches over this array using IntStreamEx.indexOf operation. While indexOf is short-circuiting, the scanLeft operation will process the whole input and create an intermediate array of the same length as the input which is completely unnecessary when solving the same problem in imperative style.
With the new headTail method in my StreamEx library it's possible to create a lazy solution which works well for very long or infinite streams. First, we can define a new intermediate scanLeft operation:
public static <T> StreamEx<T> scanLeft(StreamEx<T> input, BinaryOperator<T> operator) {
return input.headTail((head, tail) ->
scanLeft(tail.mapFirst(cur -> operator.apply(head, cur)), operator)
.prepend(head));
}
This defines a lazy scanLeft using headTail: it applies the given function to the head and the first element of the tail stream, then prepends the head. Now you can use this scanLeft:
scanLeft(StreamEx.of(1, 2, 3, -1, 3, -10, 9), Integer::sum).indexOf(x -> x < 0);
The same can be applied to the infinite stream (e.g. stream of random numbers):
StreamEx<Integer> ints = IntStreamEx.of(new Random(), -100, 100)
.peek(System.out::println).boxed();
int idx = scanLeft(ints, Integer::sum).indexOf(x -> x < 0);
This will run until the cumulative sum becomes negative and will return the index of the corresponding element.

Is there a way to write code in D similar to this Python expression?

There are articles and presentations about functional-style programming in D (e.g. http://www.drdobbs.com/architecture-and-design/component-programming-in-d/240008321). I have never used D before, but I'm interested in trying it. Is there a way to write code in D similar to this Python expression:
max(x*y for x in range(N) for y in range(x, N) if str(x*y) == str(x*y)[::-1])
Are there D constructs for generators or list (array) comprehensions?
Here's one possible solution, not particularly pretty:
iota(1,N)
    .map!(x =>
        iota(x,N)
            .map!(y => tuple(x,y)))
    .joiner
    .map!(xy => xy[0]*xy[1])
    .filter!(xy => equal(to!string(xy), to!string(xy).retro))
    .reduce!max;
So what this actually does is create a range from 1 to N (exclusive), and map each element to a range of tuples with your x,y values. This gives you a nested range ([[(1,1),(1,2)],[(2,2)]] for N = 3).
We then join this range to get a flat range of tuples ([(1,1),(1,2),(2,2)] for N = 3).
Next we map to x*y (D's map for some reason does not allow unpacked tuples, so we need to use indexing).
Penultimately we filter out non-palindromes, before finally reducing the range to its largest element.
Simple answer, no, D does not have generators or list comprehensions (AFAIK). However, you can create a generator using an InputRange. For that solution, see this related question: What is a "yield return" equivalent in the D programming language?
However, your code isn't using generators, so your code could be translated as:
import std.algorithm : max, reduce, equal;
import std.range : retro;  // retro lives in std.range, not std.algorithm
import std.conv : to;
immutable N = 13;
void main() {
    int[] keep;
    foreach(x; 0 .. N) {
        foreach(y; x .. N) {
            auto val = x*y;
            auto s = to!string(val);
            if (equal(s, s.retro)) // reverse doesn't work on immutable Ranges
                keep ~= val; // don't use ~ if N gets large, use appender instead
        }
    }
    auto best = reduce!max(keep); // 121 (11*11)
}
For me, this is much more readable than your list comprehension because the list comprehension has gotten quite large.
There may be a better solution out there, but this is how I'd implement it. An added bonus is you get to see std.algorithm in all its glory.
However, for this particular piece of code, I wouldn't use the array; instead I would store only the best value, to save on memory. Something like this:
import std.algorithm : equal;
import std.range : retro;  // retro lives in std.range, not std.algorithm
import std.conv : to;
immutable N = 13;
void main() {
    int best = 0;
    foreach(x; 0 .. N) {
        foreach(y; x .. N) {
            auto val = x*y;
            auto s = to!string(val);
            if (equal(s, s.retro) && val > best)
                best = val; // keep only the largest palindromic product
        }
    }
    // best is 121 (11*11) here
}
