Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 1 year ago.
I have fixed my problem by telling my loop to ignore instances of 0. But I don't know why it wants to include them in the first place.
P <- function(n) {
  if (n <= 1) {
    return(1)
  }
  s <- 0
  for (i in 1 : n - 1) {
    if (i != 0) {
      s <- s + P(i) * P(n - i)
    }
  }
  return(s)
}
print(P(2))
It seems weird to include that in my loop, but without it I simply haven't been able to get the program to work. Not until I started putting in a trace to see what was going on did I discover that i = 0. Did I write something wrong? I'm too new at R to even consider blaming this on the RGui that I'm using to run this, but I'm too old at programming to think anything else.
It is a precedence problem: ":" takes precedence over "-".
Compare:
n <- 5
c(1:n - 1)    # 0 1 2 3 4
c(1:(n - 1))  # 1 2 3 4
This is because you forgot a parenthesis; see operator syntax & precedence:
n <- 2
1: n-1
[1] 0 1
1:(n-1)
[1] 1
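With the parentheses in place the workaround in the question is no longer needed. As a sketch (one way to write it, not the only one), the original function could use seq_len(n - 1), which never produces a stray 0:
P <- function(n) {
  if (n <= 1) {
    return(1)
  }
  s <- 0
  for (i in seq_len(n - 1)) {   # seq_len(n - 1) is 1, 2, ..., n - 1
    s <- s + P(i) * P(n - i)
  }
  s
}
print(P(2))
## [1] 1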
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
In Scala we have the concept of an implicit variable or parameter, which can be handy, although sometimes confusing, in many cases. The question is:
Is there something like implicit variables in R?
If there is not, would it be possible to achieve the same behavior as Scala implicit parameters when calling some function in R?
Moved from comments.
If I understand this correctly, an implicit parameter to a function is a function argument which, if not specified when calling the function, defaults to a default value associated with that argument's type, and only one such default can exist for all instances of that type at any one time. However, arguments in R don't have types -- it's all dynamic. One does not write f <- function(int x) ... but just f <- function(x) ... .
I suppose one could have a convention that integerDefault is the default value associated with the integer type:
f <- function(x = integerDefault) x
g <- function(y = integerDefault) y + 1L
integerDefault <- 0L
f()
## [1] 0
g()
## [1] 1
There is nothing that will prevent you from passing a double to f and g, but: (1) if you don't pass anything then you get the default integer, which seems similar to Scala; (2) there can only be one such default at any point, since they all go by the same name, which also seems similar to Scala; and (3) if no value is assigned to integerDefault then the function fails, which is again similar to Scala.
Note that integerDefault will be looked up lexically -- not in the caller.
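A small sketch of that lexical-lookup point (the caller function below is made up for illustration): an integerDefault defined inside the caller is not seen by f; only the one in f's defining environment (here the global environment) is used.
f <- function(x = integerDefault) x
caller <- function() {
  integerDefault <- 99L   # local to caller(); the default of f() does not see this
  f()
}
integerDefault <- 0L
caller()
## [1] 0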
I'm not sure what the desired behavior is. From the first paragraph of the site you link, it seems to be simply a default parameter setting for parameters not provided to the function. This is used in R all the time:
> f <- function(x=10) print(x)
> f()
[1] 10
Is that what you mean?
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I think any programmer who has taken an intro to programming class had to memorize De Morgan's laws.
In case you don't know what they are, here is the gist:
!(A && B) = !A || !B
!(A || B) = !A && !B
I was under the assumption that if I had to memorize it, I would find it applicable in a programming situation. But I haven't had to use it at all.
Is there any reason why I should use it in programming? Does it make the program faster, or does it make conditions easier to read?
Keeping code readable is a big reason, and I'm sure whoever works on your code after you will agree it's an important one. And you'll save a CPU cycle or two if you only invert (!) one value instead of two.
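For instance, here is a tiny sketch of the same range check written both ways (in R, with made-up helper names, purely for illustration); the De Morgan form needs only one negation to read:
outside_a <- function(x, lo, hi) !(x >= lo && x <= hi)   # negated conjunction
outside_b <- function(x, lo, hi) x < lo || x > hi        # De Morgan applied
outside_a(5, 1, 10) == outside_b(5, 1, 10)
## [1] TRUE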
Another reason is to bypass short-circuiting. When many languages see !A || !B, they stop evaluating as soon as !A is true, because it doesn't matter whether !B is true or not; !A alone is enough to make the OR true.
If A and B are functions instead of variables, and you want both to execute, you're gonna have a problem:
if( !save_record() || !save_another_record() ) {
    echo 'Something bad happened';
}
De Morgan's laws let you replace that OR with an AND; both sides of the AND need to be evaluated for the whole condition to come out true:
if( !( save_record() && save_another_record() ) ) {
    echo 'Something bad happened';
}
Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 8 years ago.
I'm trying to create a function that divides min by 60, but when I compile it, I get the error:
Compilation failed,line 8 (14:53:49) PLS-00103: Encountered the symbol "/" when expecting one of the following: (
Here's the code:
create or replace function "HORAS"
(min in NUMBER)
return NUMBER
is
hr NUMBER;
begin
hr:= (min) /(60);
return hr;
end;
MIN is a built-in aggregate/analytic function, so the compiler is expecting it to be followed by arguments, hence the message saying it's expecting (.
Just change the argument name; you also don't really need to define an intermediate variable:
create or replace function HORAS (p_min in NUMBER)
return NUMBER
is
begin
return p_min / 60;
end;
/
select horas(345) from dual;
HORAS(345)
----------
5.75
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
Forgive me for the newb question and potentially incorrect terminology.
Clojure functions such as subvec and range produce results that do not include the end value. For example:
=> (subvec [:peanut :butter :and :jelly] 1 3)
[:butter :and]
=> (range 1 5)
(1 2 3 4)
The doc for range explicitly states this but doesn't give a rationale: "...Returns a lazy seq of nums from start (inclusive) to end (exclusive)...".
In Ruby these operations are inclusive:
(1..5).to_a
=> [1, 2, 3, 4, 5]
[:peanut, :butter, :and, :jelly][1,3]
=> [:butter, :and, :jelly]
Obviously these are very different languages, but I'm wondering if there was some underlying reason, beyond a personal preference by the language designers?
Making the end exclusive allows you to do things like specify (count collection) as the endpoint without going out of bounds. That's about the biggest difference between the two approaches.
It might be that the indexing was chosen in order to be consistent with Java libraries. java.lang.String.substring and java.util.List.subList both have exclusive-end indexes.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I have financial time series data for which I want to calculate returns, maximum drawdown, etc. based on a signal series. My actual time series is a big one; I am giving a toy example here so that I can show what I need. Here 1 is a buy signal and -1 is a sell signal. I initiate and hold the trade position until the opposite signal is received, then reverse the position, and so on. Returns should be calculated for every data point so that an equity curve can be plotted.
data<- rnorm(20,100,3)
signal<- c( 1,1,1,1,1,1,1,-1,-1,-1,-1,-1,1,-1,1,-1,-1,-1,-1,1)
For this purpose quantmod and PerformanceAnalytics come to mind.
Any help appreciated.
I have no idea about the R financial packages (I wish I did). I am guessing that your main problem is knowing when to trade and when not to trade, and that once you have figured that out, your problem is solved.
First you may try a pure R solution. I am a fan of Reduce, so you may try something like this.
# Carry along c(previousSignal, flag); flag is 1 whenever the signal flips.
deltaTrade <- function(state, nextSignal) c(nextSignal, as.integer(state[1] != nextSignal))
states <- Reduce(deltaTrade, signal[-1], init = c(signal[1], 0), accumulate = TRUE)
trade <- sapply(states, "[", 2)
tradePeriods <- which(trade == 1)
If the Reduce approach is too slow, I have recently seen in other SO questions that switching to C++ for an efficient solution is a good way to tackle the problem. You can do that with the Rcpp package, which has apparently become quite popular.
library(Rcpp)
cppFunction("NumericVector selectTrades(NumericVector x) {
  int n = x.length();
  NumericVector out(n);
  double current = x[0];
  for (int i = 0; i < n; ++i) {
    if (x[i] == current) {
      out[i] = 0;        // same signal as before: hold the position
    } else {
      current = x[i];
      out[i] = 1;        // signal flipped: reverse the position
    }
  }
  return out;
}")
trades <- which(selectTrades(signal) == 1)
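Once you know where the positions change, here is a package-free sketch of the returns and equity curve the question asks for. The assumptions are mine, not from the question: data holds prices, and the position taken at time i (signal[i]) is held over the interval from i to i+1.
rets      <- diff(data) / head(data, -1)   # simple per-period returns of the price series
stratRets <- head(signal, -1) * rets       # position times market return
equity    <- cumprod(1 + stratRets)        # equity curve, starting from 1
plot(equity, type = "l")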
Anyway, I hope one of these helps.