Can anyone help me with the formatting? What command should I be using, or what should I learn so I can solve this myself?
> d
[1] 5.5
> cat("##",d[])
## 5.5
[1] 5.5
> print("##",d)
[1] "##"
> print("##",d[])
[1] "##"
> print(d)
[1] 5.5
I am looking to match this sample output:
## [1] 5.5
cat(paste("##", capture.output(print(5.5))))
## [1] 5.5
There are probably better alternatives for your actual problem (which you don't describe); the knitr package comes to mind.
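If the captured output spans multiple lines, the same idea still works; you just prefix each captured line and let cat() add the newlines. A small sketch of that extension (not part of the original answer):
out <- capture.output(print(summary(cars)))  # one string per printed line
cat(paste("##", out), sep = "\n")            # prefix each line, knitr-style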
How can that be?
> mode(daten[1,16])
[1] "numeric"
> mode(weku)
[1] "numeric"
>
> weku
[1] 10.47855
> daten[1,16]
[1] 814995955
> daten[1,16]/weku
[1] 77777557
>
> 814995955/10.47855
[1] 77777551
>
I don't understand this. How can I get the correct calculation?
daten[1,16]/weku is correct.
R does not display all of the decimal values it stores internally. What is printed on the console is controlled by options("digits").
For example, compare print(pi), print(pi, digits=10), and print(pi, digits=22).
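A short sketch of the point (the stored value keeps full double precision; only the printed representation is rounded):
print(pi)               # default is getOption("digits") significant digits (7)
print(pi, digits = 10)  # show more of the stored decimals
print(pi, digits = 22)  # close to the full stored precision
options(digits = 15)    # raise the session default, so weku and
                        # daten[1,16]/weku display more digits too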
I'm trying to use the percent() formatter for a valuebox() in a shiny app I'm designing... and came across some interesting behavior.
Obviously this works:
library(formattable)
a <- 0.2
percent(a)
But I need to catch a potential NA, so I tried this:
ifelse(!is.na(a),percent(a),NA)
which returns the unformatted a (i.e. 0.2 rather than 20.00%)! What's going on? Some additional testing:
> ifelse(!is.na(a),percent(a),2)
[1] 0.2
> percent(a)
[1] 20.00%
> if(1==1) percent(a)
[1] 20.00%
> ifelse(1==1,percent(a),0)
[1] 0.2
> ifelse(1==1,eval(percent(a)),0)
[1] 0.2
> ifelse(1==1,parse(text = percent(a)),0)
expression(0.2)
> ifelse(1==1,eval(parse(text = percent(a))),0)
[1] 0.2
So what's going on?
Full transparency: percent(NA) does return NA, so I'm not stuck, just curious.
Take it outside the ifelse:
percent(ifelse(!is.na(a),a,NA))
[1] 20.00%
Since NA is returned as NA, this also handles the NA case:
percent(ifelse(!is.na(NA),a,NA))
[1] NA
It also works with a vector with NA and numbers:
percent(ifelse(!is.na(c(0.2,NA)),a,NA))
[1] 20.00% NA
This also happens in ifelse with time formats and many other classed objects, so always apply the formatter outside the ifelse.
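The underlying reason, as documented in ?ifelse, is that ifelse() builds its result from the test vector and copies the test's attributes, so the class that percent() attaches (and its print method) is stripped. A small sketch illustrating this, assuming formattable is loaded:
a <- 0.2
class(percent(a))                          # includes "formattable": prints as 20.00%
class(ifelse(!is.na(a), percent(a), NA))   # class is dropped: prints as 0.2
class(percent(ifelse(!is.na(a), a, NA)))   # formatter applied last: class kept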
This is the next question in my series of "strange" questions.
I found the same difference in code execution between the R console and RStudio and couldn't understand the reason for it. It's also connected with the incorrect behaviour of the "track" package in RStudio and R.NET, as I wrote before in Incorrect work of track package in R.NET.
So, let's look at the example from https://search.r-project.org/library/base/html/taskCallback.html
(I modified it slightly so that the sum output prints correctly in RStudio.)
times <- function(total = 3, str = "Task a") {
  ctr <- 0
  function(expr, value, ok, visible) {
    ctr <<- ctr + 1
    cat(str, ctr, "\n")
    if (ctr == total) {
      cat("handler removing itself\n")
    }
    return(ctr < total)
  }
}
# add the callback that will work for
# 4 top-level tasks and then remove itself.
n <- addTaskCallback(times(4))
# now remove it, assuming it is still first in the list.
removeTaskCallback(n)
## Not run:
# There is no point in running this
# as
addTaskCallback(times(4))
print(sum(1:10))
print(sum(1:10))
print(sum(1:10))
print(sum(1:10))
print(sum(1:10))
## End(Not run)
The output in the R console:
>
> # add the callback that will work for
> # 4 top-level tasks and then remove itself.
> n <- addTaskCallback(times(4))
Task a 1
>
> # now remove it, assuming it is still first in the list.
> removeTaskCallback(n)
[1] TRUE
>
> ## Not run:
> # There is no point in running this
> # as
> addTaskCallback(times(4))
1
1
Task a 1
>
> print(sum(1:10))
[1] 55
Task a 2
> print(sum(1:10))
[1] 55
Task a 3
> print(sum(1:10))
[1] 55
Task a 4
handler removing itself
> print(sum(1:10))
[1] 55
> print(sum(1:10))
[1] 55
>
> ## End(Not run)
>
Okay, let's run this in RStudio.
Output:
> source('~/callbackTst.R')
[1] 55
[1] 55
[1] 55
[1] 55
[1] 55
Task a 1
>
The second run gives us this:
> source('~/callbackTst.R')
[1] 55
[1] 55
[1] 55
[1] 55
[1] 55
Task a 2
Task a 1
>
Third:
> source('~/callbackTst.R')
[1] 55
[1] 55
[1] 55
[1] 55
[1] 55
Task a 3
Task a 2
Task a 1
>
and so on.
There is a strange difference between RStudio and the R console and I don't know why. Could anyone help me? Is it a bug, or is it normal behaviour and I'm doing something wrong?
Thank you.
P.S. This post is connected with the correct working of the "track" package, because the "track.start" method contains this piece of code:
assign(".trackingSummaryChanged", FALSE, envir = trackingEnv)
assign(".trackingPid", Sys.getpid(), envir = trackingEnv)
if (!is.element("track.auto.monitor", getTaskCallbackNames()))
    addTaskCallback(track.auto.monitor, name = "track.auto.monitor")
return(invisible(NULL))
which, I think, doesn't work correctly in RStudio and R.NET.
P.P.S. I use R 3.2.2 x64, RStudio 0.99.489 and Windows 10 Pro x64. On RRO the problem also exists under R.NET and RStudio.
addTaskCallback() will add a callback that's executed when R execution returns to the top level. When you're executing code line-by-line, each statement executed will return control to the top level, and callbacks will execute.
When executed within source(), control isn't returned until the call to source() returns, and so the callback is only run once.
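A small way to see this for yourself (the counter and temporary file below are only illustrative): install a counting callback, then compare typing expressions at the prompt with running them through a single source() call.
hits <- 0
id <- addTaskCallback(function(expr, value, ok, visible) {
  hits <<- hits + 1   # count completed top-level tasks
  TRUE                # returning TRUE keeps the callback installed
})

# Typed at the console, each line below is its own top-level task,
# so the callback fires (and hits grows) once per line:
sum(1:10)
sum(1:10)

# Run through source(), the whole file is one top-level task, so the
# callback fires only once, after source() itself returns:
tmp <- tempfile(fileext = ".R")
writeLines(c("print(sum(1:10))", "print(sum(1:10))"), tmp)
source(tmp)

removeTaskCallback(id)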
I have two seemingly identical zoo objects created by the same commands from csv files for different time periods. I try to combine them into one long zoo object but I'm failing with an "indexes overlap" error ('merge', 'c' or 'rbind' all produce variants of the same error text). As far as I can see there are no duplicates and the time periods do not overlap. What am I doing wrong? I am using R version 3.0.1 on Windows 7 64-bit, if that makes a difference.
> colnames(z2)
[1] "Amb" "HWS" "Diff"
> colnames(t.tmp)
[1] "Amb" "HWS" "Diff"
> max(index(z2))
[1] "2012-12-06 02:17:45 GMT"
> min(index(t.tmp))
[1] "2012-12-06 03:43:45 GMT"
> anyDuplicated(c(index(z2),index(t.tmp)))
[1] 0
> c(z2,t.tmp)
Error in rbind.zoo(...) : indexes overlap
>
UPDATE: In trying to make a reproducible case I've concluded this is an implementation error due to the large number of rows I'm dealing with: it fails if the final result is more than 311434 rows long.
> nrow(c(z2,head(t.tmp,n=101958)))
Error in rbind.zoo(...) : indexes overlap
> nrow(c(z2,head(t.tmp,n=101957)))
[1] 311434
# but row 101958 inserts fine on its own, so it's not a data problem.
> nrow(c(z2,tail(head(t.tmp,n=101958),n=2)))
[1] 209479
I'm sorry but I don't have the R scripting skills to produce a zoo object of the critical length; hopefully someone might be able to help me out.
UPDATE 2: Responding to Jason's suggestion: the problem is in the MATCH, but my R skills aren't sufficient to know how to interpret it. Does it mean MATCH finds a duplicate value in x.t whereas anyDuplicated does not?
> x.t <- c(index(z2),index(t.tmp));
> length(x.t)
[1] 520713
> ix <- ORDER (x.t)
> length(ix)
[1] 520713
> x.t <- x.t[ix]
> length(ix)
[1] 520713
> length(x.t)
[1] 520713
> tx <- table(MATCH(x.t,x.t))
> max(tx)
[1] 2
> tx[which(tx==2)]
311371 311373 311378 311383 311384 311386 311389 311392 311400 311401
2 2 2 2 2 2 2 2 2 2
> anyDuplicated(x.t)
[1] 0
After all the testing and head scratching it seems that the problem I'm having is timezone related. Setting the environment to the same time zone as the original data makes it work just fine.
> Sys.setenv(TZ="GMT")
> z3<-rbind(z2,t.tmp)
> nrow(z3)
[1] 520713
Thanks to "how to guard against accidental time zone conversion" for the inspiration to look in that direction.
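For anyone hitting the same thing, a small sketch of the checks that would have pointed at the time zone sooner (assuming the zoo indexes are POSIXct, as in the question):
library(zoo)

# A mismatch here (or an index crossing a DST change) is what makes
# distinct-looking times appear to overlap when the objects are combined:
attr(index(z2), "tzone")
attr(index(t.tmp), "tzone")
Sys.timezone()

# Pinning the session to the data's time zone before combining avoids
# the implicit conversion:
Sys.setenv(TZ = "GMT")
z3 <- rbind(z2, t.tmp)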
I am trying to learn how to use the apply function and I came across this tutorial: http://nsaunders.wordpress.com/2010/08/20/a-brief-introduction-to-apply-in-r/, which seems clear and concise, but I'm running into a problem right away. The very first example they give to demonstrate apply is:
> # create a matrix of 10 rows x 2 columns
> m <- matrix(c(1:10, 11:20), nrow = 10, ncol = 2)
> # mean of the rows
> apply(m, 1, mean)
[1] 6 7 8 9 10 11 12 13 14 15
This seems very basic, but I thought I'd give it a try. Here is my result:
> # create a matrix of 10 rows x 2 columns
> m <- matrix(c(1:10, 11:20), nrow = 10, ncol = 2)
> # mean of the rows
> apply(m, 1, mean)
Error in FUN(newX[, i], ...) : unused argument(s) (newX[, i])
Needless to say, I'm lost on this one...
To provide some more information, I attempted another example from the tutorial and got the correct result. The difference in this case was that the function was defined inline in the apply call:
apply(m, 1:2, function(x) x/2)
[,1] [,2]
[1,] 0.5 5.5
[2,] 1.0 6.0
[3,] 1.5 6.5
[4,] 2.0 7.0
[5,] 2.5 7.5
[6,] 3.0 8.0
[7,] 3.5 8.5
[8,] 4.0 9.0
[9,] 4.5 9.5
[10,] 5.0 10.0
sessionInfo() output is below:
R version 2.15.3 (2013-03-01)
Platform: x86_64-apple-darwin9.8.0/x86_64 (64-bit)
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] tools_2.15.3
And the output of conflicts(details = TRUE):
$.GlobalEnv
[1] "edit" "mean"
$`package:utils`
[1] "edit"
$`package:methods`
[1] "body<-" "kronecker"
$`package:base`
[1] "body<-" "kronecker" "mean"
As others have identified, it's probably because you have a conflict on mean. When you call anything (functions, objects), R goes through the search path until it's found (and if it isn't found R will complain accordingly):
> search()
[1] ".GlobalEnv" "tools:RGUI" "package:stats"
[4] "package:graphics" "package:grDevices" "package:utils"
[7] "package:datasets" "package:methods" "Autoloads"
[10] "package:base"
If you're fairly new to R, note that when you create a function, unless you specify otherwise, it's usually going to live in ".GlobalEnv". R looks there first before going any further, so it's fairly important to name your functions wisely, so as not to conflict with common functions (e.g. mean, plot, summary).
It's probably a good idea to start with a clean session once in a while. It's fairly common in the debugging phase to name variables x or y (names picked for convenience rather than informativeness... we're only human after all), which can be unexpectedly problematic down the line. When you have a workspace that's fairly crowded, the probability of conflicts increases, so (a) pick names carefully and (b) restart without restoring would be my advice.
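A minimal sketch of the fix, assuming the problem really is a user-defined mean masking base::mean in .GlobalEnv (the bad mean below is purely illustrative):
mean <- function() stop("not the real mean")   # illustrative masking object
m <- matrix(c(1:10, 11:20), nrow = 10, ncol = 2)
# apply(m, 1, mean)   # would now fail with "unused argument (newX[, i])"

# Option 1: remove the masking object so lookup falls through to base::mean
rm(mean)
apply(m, 1, mean)

# Option 2: bypass the search path entirely and name the package
apply(m, 1, base::mean)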