Does anybody have an idea how I can fix this? git commit -a -m "message here" works fine for other projects, and previous commits earlier today were all OK.
Now, it throws the error:
Error in [<-(*tmp*, 1, "Date", value = "2016-07-29") :
Indizierung außerhalb der Grenzen
Ausführung angehalten
The error message translates roughly to "index out of bounds; execution halted".
Please let me know if you need any further information.
Edit: @Carsten guessed right! I have a hook running. But I cannot see why it should stop working from one minute to the next... (It still does not work.)
#!C:/R/R-3.2.2/bin/x64/Rscript
# License: CC0 (just be nice and point others to where you got this)
# Author: Robert M Flight <rflight79@gmail.com>, github.com/rmflight
inc <- TRUE # default
# get the environment variable and modify if necessary
tmpEnv <- as.logical(Sys.getenv("inc"))
if (!is.na(tmpEnv)) {
  inc <- tmpEnv
}
# check that there are files that will be committed; don't want to increment the version if there won't be a commit
fileDiff <- system("git diff HEAD --name-only", intern = TRUE)
if ((length(fileDiff) > 0) && inc) {
  currDir <- getwd() # this should be the top level directory of the git repo
  currDCF <- read.dcf("DESCRIPTION")
  currVersion <- currDCF[1, "Version"]
  splitVersion <- strsplit(currVersion, ".", fixed = TRUE)[[1]]
  nVer <- length(splitVersion)
  currEndVersion <- as.integer(splitVersion[nVer])
  newEndVersion <- as.character(currEndVersion + 1)
  splitVersion[nVer] <- newEndVersion
  newVersion <- paste(splitVersion, collapse = ".")
  currDCF[1, "Version"] <- newVersion
  currDCF[1, "Date"] <- strftime(as.POSIXlt(Sys.Date()), "%Y-%m-%d")
  write.dcf(currDCF, "DESCRIPTION")
  system("git add DESCRIPTION")
  cat("Incremented package version and added to commit!\n")
}
Thanks to @Carsten: using print statements I could track down the error in the hook file. In the end it was a silly bug: the Date field had accidentally been deleted from (i.e. was missing in) the DESCRIPTION file.
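For anyone hitting the same error: read.dcf() returns a matrix containing only the fields actually present in DESCRIPTION, so the assignment currDCF[1, "Date"] <- ... fails with a subscript-out-of-bounds error when the Date field is missing. A minimal defensive sketch for the hook (reusing the variable names above; the guard itself is my addition, not part of the original script):
if (!("Date" %in% colnames(currDCF))) {
  # Date field was missing from DESCRIPTION; add the column before assigning to it
  currDCF <- cbind(currDCF, Date = NA_character_)
}
currDCF[1, "Date"] <- format(Sys.Date(), "%Y-%m-%d")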
This is a weird one and I am hoping someone can figure it out. I have written a function that uses googlesheets4 and googledrive. One thing I'm trying to do is move a Google Drive document (spreadsheet) from the base folder to a specified folder. I had this working perfectly yesterday, so I don't know what happened; it just didn't work when I came in this morning.
The weird thing is that if I step through the function, it works fine. It's just when I run the function all at once that I get the error.
I am using a folder ID instead of a name and using drive_find to get the correct folder ID. I am also using a sheet ID instead of a name. The folder already exists and like I said, it was working yesterday.
outFolder <- 'exact_outFolder_name_without_slashes'
createGoogleSheets <- function(
  outFolder
) {
  folder_id <- googledrive::drive_find(n_max = 10, pattern = outFolder)$id
  data <- data.frame(Name = c("Sally", "Sue"), Data = c("data1", "data2"))
  sheet_id <- NA
  nameDate <- NA
  tempData <- data.frame()
  for (i in 1:nrow(data)) {
    nameDate <- data[i, "Name"]
    tempData <- data[i, ]
    googlesheets4::gs4_create(name = nameDate, sheets = list(sheet1 = tempData)
    sheet_id <- googledrive::drive_find(type = "spreadsheet", n_max = 10, pattern = nameDate)$id
    googledrive::drive_mv(file = as_id(sheet_id), path = as_id(folder_id))
  } # end 'for'
} # end 'function'
I don't think this will be a reproducible example. The offending code is within the for loop that is within the function, and it works fine when I run through it step by step. folder_id is defined within the function but outside of the for loop; sheet_id is within the for loop. When I move folder_id into the for loop, it still doesn't work, although I don't know why that would change anything. These are just the things I have tried. I do have the proper authorization for Google Drive and googlesheets4, via:
googledrive::drive_auth()
googlesheets4::gs4_auth(token = drive_token())
<error/rlang_error>
Error in as_parent():
! Parent specified via path is invalid:
x Does not exist.
Backtrace:
global createGoogleSheets(inputFile, outPath, addNames)
googledrive::drive_mv(file = as_id(sheet_id), path = as_id(folder_id))
googledrive:::as_parent(path)
Run rlang::last_trace() to see the full context.
Backtrace:
x
-global createGoogleSheets(inputFile, outPath, addNames)
-googledrive::drive_mv(file = as_id(sheet_id), path = as_id(folder_id))
\-googledrive:::as_parent(path)
\-googledrive:::drive_abort(c(invalid_parent, x = "Does not exist."))
\-cli::cli_abort(message = message, ..., .envir = .envir)
\-rlang::abort(message, ..., call = call, use_cli_format = TRUE)
I have tried changing the folder_id to the exact path of my google drive W:/My Drive... and got the same error. I should mention I have also tried deleting the folder and re-creating it fresh.
Anybody have any ideas?
Thank you in advance for your help!
I can't comment because I don't have the reputation yet, but I believe you're missing a parenthesis in your for-loop.
You need that SECOND parenthesis below:
for (i in 1:nrow(tempData) ) {
...
}
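For reference, here is a hedged sketch of the loop from the question with every call balanced (the line missing its closing parenthesis is the gs4_create() call inside the loop); names are taken from the question:
for (i in 1:nrow(data)) {
  nameDate <- data[i, "Name"]
  tempData <- data[i, ]
  googlesheets4::gs4_create(name = nameDate, sheets = list(sheet1 = tempData))
  sheet_id <- googledrive::drive_find(type = "spreadsheet", n_max = 10, pattern = nameDate)$id
  googledrive::drive_mv(file = googledrive::as_id(sheet_id), path = googledrive::as_id(folder_id))
}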
Every time I open a new session in RStudio, I'm greeted with the error message:
Error: C stack usage 7953936 is too close to the limit
Based on suggestions for similar issues posted here and here, I tried using the ulimit command in terminal, but get the following error.
Isabels-MacBook-Pro ~ % ulimit -s
8176
Isabels-MacBook-Pro ~ % R --slave -e 'Cstack_info()["size"]'
Error: C stack usage 7954496 is too close to the limit
Execution halted
Yet, when I run ulimit on its own, I get:
Isabels-MacBook-Pro ~ % ulimit
unlimited
Just to double-check, I try setting it to unlimited again:
Isabels-MacBook-Pro ~ % ulimit -s unlimited
but then get a new error:
Isabels-MacBook-Pro ~ % R --slave -e 'Cstack_info()["size"]'
Error: evaluation nested too deeply: infinite recursion / options(expressions=)?
Execution halted
I have no clue what this means in this context. Is Cstack_info() the bit getting stuck in infinite recursion? I'd love to get this figured out, as it's getting in the way of installing some necessary packages!
In case it's helpful, here's my session info
R version 4.1.3 (2022-03-10)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Monterey 12.2.1
And contents of .Rprofile
# REMEMBER to restart R after you modify and save this file!
# First, execute the global .Rprofile if it exists. You may configure blogdown
# options there, too, so they apply to any blogdown projects. Feel free to
# ignore this part if it sounds too complicated to you.
if (file.exists("~/.Rprofile")) {
base::sys.source("~/.Rprofile", envir = environment())
}
# Now set options to customize the behavior of blogdown for this project. Below
# are a few sample options; for more options, see
# https://bookdown.org/yihui/blogdown/global-options.html
options(
# to automatically serve the site on RStudio startup, set this option to TRUE
blogdown.serve_site.startup = FALSE,
# to disable knitting Rmd files on save, set this option to FALSE
blogdown.knit.on_save = TRUE,
# build .Rmd to .html (via Pandoc); to build to Markdown, set this option to 'markdown'
blogdown.method = 'html'
)
# fix Hugo version
options(blogdown.hugo.version = "0.82.0")
Here are the contents from /Library/Frameworks/R.framework/Resources/library/base/R/Rprofile
### This is the system Rprofile file. It is always run on startup.
### Additional commands can be placed in site or user Rprofile files
### (see ?Rprofile).
### Copyright (C) 1995-2020 The R Core Team
### Notice that it is a bad idea to use this file as a template for
### personal startup files, since things will be executed twice and in
### the wrong environment (user profiles are run in .GlobalEnv).
.GlobalEnv <- globalenv()
attach(NULL, name = "Autoloads")
.AutoloadEnv <- as.environment(2)
assign(".Autoloaded", NULL, envir = .AutoloadEnv)
T <- TRUE
F <- FALSE
R.version <- structure(R.Version(), class = "simple.list")
version <- R.version # for S compatibility
## for backwards compatibility only
R.version.string <- R.version$version.string
## NOTA BENE: options() for non-base package functionality are in places like
## --------- ../utils/R/zzz.R
options(keep.source = interactive())
options(warn = 0)
# options(repos = c(CRAN="#CRAN#"))
# options(BIOC = "http://www.bioconductor.org")
## setting from an env variable added in 4.0.2
local({to <- as.integer(Sys.getenv("R_DEFAULT_INTERNET_TIMEOUT", 60))
if (is.na(to) || to <= 0) to <- 60L
options(timeout = to)
})
options(encoding = "native.enc")
options(show.error.messages = TRUE)
## keep in sync with PrintDefaults() in ../../main/print.c :
options(scipen = 0)
options(max.print = 99999)# max. #{entries} in internal printMatrix()
options(add.smooth = TRUE)# currently only used in 'plot.lm'
if(isFALSE(as.logical(Sys.getenv("_R_OPTIONS_STRINGS_AS_FACTORS_",
"FALSE")))) {
options(stringsAsFactors = FALSE)
} else {
options(stringsAsFactors = TRUE)
}
if(!interactive() && is.null(getOption("showErrorCalls")))
options(showErrorCalls = TRUE)
local({dp <- Sys.getenv("R_DEFAULT_PACKAGES")
if(identical(dp, "")) ## it fact methods is done first
dp <- c("datasets", "utils", "grDevices", "graphics",
"stats", "methods")
else if(identical(dp, "NULL")) dp <- character(0)
else dp <- strsplit(dp, ",")[[1]]
dp <- sub("[[:blank:]]*([[:alnum:]]+)", "\\1", dp) # strip whitespace
options(defaultPackages = dp)
})
## Expand R_LIBS_* environment variables.
Sys.setenv(R_LIBS_SITE =
.expand_R_libs_env_var(Sys.getenv("R_LIBS_SITE")))
Sys.setenv(R_LIBS_USER =
.expand_R_libs_env_var(Sys.getenv("R_LIBS_USER")))
local({
if(nzchar(tl <- Sys.getenv("R_SESSION_TIME_LIMIT_CPU")))
setSessionTimeLimit(cpu = tl)
if(nzchar(tl <- Sys.getenv("R_SESSION_TIME_LIMIT_ELAPSED")))
setSessionTimeLimit(elapsed = tl)
})
.First.sys <- function()
{
for(pkg in getOption("defaultPackages")) {
res <- require(pkg, quietly = TRUE, warn.conflicts = FALSE,
character.only = TRUE)
if(!res)
warning(gettextf('package %s in options("defaultPackages") was not found', sQuote(pkg)),
call. = FALSE, domain = NA)
}
}
## called at C level in the startup process prior to .First.sys
.OptRequireMethods <- function()
{
pkg <- "methods" # done this way to avoid R CMD check warning
if(pkg %in% getOption("defaultPackages"))
if(!require(pkg, quietly = TRUE, warn.conflicts = FALSE,
character.only = TRUE))
warning('package "methods" in options("defaultPackages") was not found',
call. = FALSE)
}
if(nzchar(Sys.getenv("R_BATCH"))) {
.Last.sys <- function()
{
cat("> proc.time()\n")
print(proc.time())
}
## avoid passing on to spawned R processes
## A system has been reported without Sys.unsetenv, so try this
try(Sys.setenv(R_BATCH=""))
}
local({
if(nzchar(rv <- Sys.getenv("_R_RNG_VERSION_")))
suppressWarnings(RNGversion(rv))
})
.sys.timezone <- NA_character_
.First <- NULL
.Last <- NULL
###-*- R -*- Unix Specific ----
.Library <- file.path(R.home(), "library")
.Library.site <- Sys.getenv("R_LIBS_SITE")
.Library.site <- if(!nzchar(.Library.site)) file.path(R.home(), "site-library") else unlist(strsplit(.Library.site, ":"))
.Library.site <- .Library.site[file.exists(.Library.site)]
invisible(.libPaths(c(unlist(strsplit(Sys.getenv("R_LIBS"), ":")),
unlist(strsplit(Sys.getenv("R_LIBS_USER"), ":")
))))
local({
popath <- Sys.getenv("R_TRANSLATIONS", "")
if(!nzchar(popath)) {
paths <- file.path(.libPaths(), "translations", "DESCRIPTION")
popath <- dirname(paths[file.exists(paths)][1])
}
bindtextdomain("R", popath)
bindtextdomain("R-base", popath)
assign(".popath", popath, .BaseNamespaceEnv)
})
local({
## we distinguish between R_PAPERSIZE as set by the user and by configure
papersize <- Sys.getenv("R_PAPERSIZE_USER")
if(!nchar(papersize)) {
lcpaper <- Sys.getlocale("LC_PAPER") # might be null: OK as nchar is 0
papersize <- if(nchar(lcpaper))
if(length(grep("(_US|_CA)", lcpaper))) "letter" else "a4"
else Sys.getenv("R_PAPERSIZE")
}
options(papersize = papersize,
printcmd = Sys.getenv("R_PRINTCMD"),
dvipscmd = Sys.getenv("DVIPS", "dvips"),
texi2dvi = Sys.getenv("R_TEXI2DVICMD"),
browser = Sys.getenv("R_BROWSER"),
pager = file.path(R.home(), "bin", "pager"),
pdfviewer = Sys.getenv("R_PDFVIEWER"),
useFancyQuotes = TRUE)
})
## non standard settings for the R.app GUI of the macOS port
if(.Platform$GUI == "AQUA") {
## this is set to let RAqua use both X11 device and X11/TclTk
if (Sys.getenv("DISPLAY") == "")
Sys.setenv("DISPLAY" = ":0")
## this is to allow gfortran compiler to work
Sys.setenv("PATH" = paste(Sys.getenv("PATH"),":/usr/local/bin",sep = ""))
}## end "Aqua"
## de-dupe the environment on macOS (bug in Yosemite which affects things like PATH)
if (grepl("^darwin", R.version$os)) local({
## we have to de-dupe one at a time and re-check since the bug affects how
## environment modifications propagate
while(length(dupes <- names(Sys.getenv())[table(names(Sys.getenv())) > 1])) {
env <- dupes[1]
value <- Sys.getenv(env)
Sys.unsetenv(env) ## removes the dupes, good
.Internal(Sys.setenv(env, value)) ## wrapper requires named vector, a pain, hence internal
}
})
local({
tests_startup <- Sys.getenv("R_TESTS")
if(nzchar(tests_startup)) source(tests_startup)
})
Is there anything glaring here that could be causing the issue?
Your user .Rprofile file is loading itself recursively for some reason:
if (file.exists("~/.Rprofile")) {
base::sys.source("~/.Rprofile", envir = environment())
}
From your comments it seems that these lines are inside ~/.Rprofile (~ expands to the user home directory, i.e. /Users/mycomputer in your case, assuming mycomputer is your user name).
Delete these lines (or comment them out); they don’t belong here. In fact, the file looks like a template for a project-specific .Rprofile configuration. It would make sense inside a project directory, but not as the user-wide ~/.Rprofile.
The logic for these files is as follows:
If there is an .Rprofile file in the current directory, R attempts to load that.
Otherwise, if the environment variable R_PROFILE_USER is set to the path of a file, R attempts to load this file.
Otherwise, if the file ~/.Rprofile exists, R attempts to load that.
Now, this implies that ~/.Rprofile is not loaded automatically if a project-specific .Rprofile (i.e. one in the current working directory) exists. This is unfortunate, so many projects add lines similar to the above to their project-specific .Rprofile files to cause the user-wide ~/.Rprofile to be loaded as well. However, the above implementation ignores the R_PROFILE_USER environment variable. A better implementation would therefore look as follows:
rprofile = Sys.getenv('R_PROFILE_USER', '~/.Rprofile')
if (file.exists(rprofile)) {
base::sys.source(rprofile, envir = environment())
}
rm(rprofile)
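If a snippet like this ever ends up inside ~/.Rprofile itself, it will source itself endlessly, which is exactly what produces the C-stack error above. A hedged sketch of a guard (the option name user.rprofile.loaded is just a made-up sentinel, not a real R option):
rprofile <- Sys.getenv("R_PROFILE_USER", "~/.Rprofile")
if (file.exists(rprofile) && !isTRUE(getOption("user.rprofile.loaded"))) {
  options(user.rprofile.loaded = TRUE)  # set before sourcing so a recursive load bails out early
  base::sys.source(rprofile, envir = environment())
}
rm(rprofile)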
Success! Thank you to everyone in the comments. The issue was resolved by deleting /Library/Frameworks/R.framework/Resources/library/base/R/Rprofile and re-installing R and RStudio.
The point of the code is to gather posts from a Facebook page and store them in my_page; however, I am unfamiliar with the code, as it is for a uni project. The problem I have is that it has to be used in a .Rpres presentation created using RStudio, and as such I don't want the output shown but still need to run the code.
This is the code whose output I don't want displayed:
```{r, echo = FALSE}
#install.packages("Rfacebook")
library(Rfacebook)
token <- "Facebook dev auth token goes here"
page_name <- "BuzzFeed"
my_page <- getPage(page_name, token, n = 2,reactions = TRUE,api = "v2.10")
number_required <- 50
dates <- seq(as.Date("2017/07/14"), Sys.Date(), by = "day")
#
n <- length(dates) - 1
df_daily <- list()
for (i in 1:n) {
  cat(as.character(dates[i]), " ")
  try(df_daily[[i]] <- getPage(page_name, token,
                               n = number_required, reactions = TRUE, api = "v2.10",
                               since = dates[i],
                               until = dates[i + 1]))
  cat("\n")
}
```
Your problem is simply that Rfacebook::getPage prints to the console when it runs. That's because it calls cat(), which, like print(), writes straight to the console. Fortunately the package provides a switch to turn that off: all you need to do is add the verbose = FALSE argument to your call and it will stop printing:
getPage(...)
getPage(..., verbose = FALSE)
It's pretty bad practice for a package to call cat or print - they should use message and warning instead - so I have raised an issue with the package maintainer to ask for this to be changed, which you can watch here if you like:
https://github.com/pablobarbera/Rfacebook/issues/145
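More generally, if you hit a function with no verbose switch at all, a hedged fallback (not specific to Rfacebook) is to hide the chunk's text output with the knitr chunk options results = 'hide' and message = FALSE, or to swallow the console output explicitly:
# Fallback sketch: discard anything the call writes via cat()/print(),
# while still keeping the returned value; assumes token and page_name as in the question.
invisible(capture.output(
  my_page <- getPage(page_name, token, n = 2, reactions = TRUE, api = "v2.10")
))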
I want to put some R code plus the associated data file (RData) on GitHub.
So far, everything works okay. But when people clone the repository, I want them to be able to run the code immediately. At the moment this isn't possible, because they would have to change their working directory (setwd) to the directory that the RData file was cloned (i.e. downloaded) to.
Therefore, I thought it might be easier if I changed the R code so that it loads the RData file directly from GitHub. But I cannot get this to work with the following snippet. I think there is perhaps some text/binary issue.
x <- RCurl::getURL("https://github.com/thefactmachine/hex-binning-gis-data/raw/master/popDensity.RData")
y <- load(x)
Any help would be appreciated.
Thanks
This works for me:
githubURL <- "https://github.com/thefactmachine/hex-binning-gis-data/raw/master/popDensity.RData"
load(url(githubURL))
head(df)
# X Y Z
# 1 16602794 -4183983 94.92019
# 2 16602814 -4183983 91.15794
# 3 16602834 -4183983 87.44995
# 4 16602854 -4183983 83.79617
# 5 16602874 -4183983 80.19643
# 6 16602894 -4183983 76.65052
EDIT: In response to the OP's comment.
From the documentation:
Note that the https:// URL scheme is not supported except on Windows.
So you could try this:
download.file(githubURL,"myfile")
load("myfile")
which works for me as well, but this will clutter your working directory. If that doesn't work, try setting method="curl" in the call to download.file(...).
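A related hedged sketch: when fetching a binary .RData with download.file(), mode = "wb" avoids corruption on Windows, and a temp file keeps the working directory clean:
githubURL <- "https://github.com/thefactmachine/hex-binning-gis-data/raw/master/popDensity.RData"
tmp <- tempfile(fileext = ".RData")
download.file(githubURL, tmp, mode = "wb")  # "wb" matters for binary files on Windows
loaded <- load(tmp)                         # load() returns the names of the restored objects
loaded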
I've had trouble with this before as well, and the solution I've found to be the most reliable is to use a tiny modification of source_url from the fantastic devtools package [1]. This works for me (on a Mac).
load_url <- function(url, ..., sha1 = NULL) {
  # based very closely on code for devtools::source_url
  stopifnot(is.character(url), length(url) == 1)
  temp_file <- tempfile()
  on.exit(unlink(temp_file))
  request <- httr::GET(url)
  httr::stop_for_status(request)
  writeBin(httr::content(request, type = "raw"), temp_file)
  file_sha1 <- digest::digest(file = temp_file, algo = "sha1")
  if (is.null(sha1)) {
    message("SHA-1 hash of file is ", file_sha1)
  } else {
    if (nchar(sha1) < 6) {
      stop("Supplied SHA-1 hash is too short (must be at least 6 characters)")
    }
    file_sha1 <- substr(file_sha1, 1, nchar(sha1))
    if (!identical(file_sha1, sha1)) {
      stop("SHA-1 hash of downloaded file (", file_sha1,
           ")\n  does not match expected value (", sha1,
           ")", call. = FALSE)
    }
  }
  load(temp_file, envir = .GlobalEnv)
}
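Usage would then look something like this (a sketch; without a sha1 argument the function simply reports the hash it computed):
githubURL <- "https://github.com/thefactmachine/hex-binning-gis-data/raw/master/popDensity.RData"
load_url(githubURL)  # downloads to a temp file, prints its SHA-1, and loads the objects into .GlobalEnv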
I use a very similar modification to get text files from GitHub using read.table, etc. Note that you need to use the "raw" version of the GitHub URL (which you included in your question).
[1] https://github.com/hadley/devtools
load() takes a filename, so write the downloaded content to a temporary file first:
x <- RCurl::getURL("https://github.com/thefactmachine/hex-binning-gis-data/raw/master/popDensity.RData")
writeLines(x, tmp <- tempfile())
y <- load(tmp)
I would like to get the title of a base function (e.g.: rnorm) in one of my scripts. That is included in the documentation, but I have no idea how to "grab" it.
I mean the line given in the Rd files as \title{}, i.e. the top line of a help page.
Is there any simple way to do this without calling the Rd_db function from tools and parsing all the Rd files, which would be a lot of overhead for such a simple task? Another thing: I tried parse_Rd too, but:
I do not know which Rd file holds my function,
I have no Rd files on my system (just rdb, rdx and rds).
So a function to parse the (offline) documentation would be the best :)
POC demo:
> get.title("rnorm")
[1] "The Normal Distribution"
If you look at the code for help, you can see that the function index.search seems to be what pulls in the location of the help files, and that the default for the associated find.package() argument is NULL. It turns out that there is neither a help page for that function nor is it exported, so I tested the usual suspects for which package it lives in (base, tools, utils) and ended up with utils:
utils:::index.search("+", find.package())
#[1] "/Library/Frameworks/R.framework/Resources/library/base/help/Arithmetic"
So:
ghelp <- utils:::index.search("+", find.package())
gsub("^.+/", "", ghelp)
#[1] "Arithmetic"
ghelp <- utils:::index.search("rnorm", find.package())
gsub("^.+/", "", ghelp)
#[1] "Normal"
What you are asking for is \title{Title}, but here I have shown you how to find the specific Rd file to parse, and it sounds as though you already know how to do that.
EDIT: @Hadley has provided a method for getting all of the help text, once you know the package name, so applying that to the index.search() value above:
target <- gsub("^.+/library/(.+)/help.+$", "\\1", utils:::index.search("rnorm",
find.package()))
doc.txt <- pkg_topic(target, "rnorm") # assuming both of Hadley's functions are here
print(doc.txt[[1]][[1]][1])
#[1] "The Normal Distribution"
It's not completely obvious what you want, but the code below will get the Rd data structure corresponding to the topic you're interested in - you can then manipulate that to extract whatever you want.
There may be simpler ways, but unfortunately very little of the needed code is exported and documented. I really wish there were a base help package.
pkg_topic <- function(package, topic, file = NULL) {
  # Find "file" name given topic name/alias
  if (is.null(file)) {
    topics <- pkg_topics_index(package)
    topic_page <- subset(topics, alias == topic, select = file)$file
    if (length(topic_page) < 1)
      topic_page <- subset(topics, file == topic, select = file)$file
    stopifnot(length(topic_page) >= 1)
    file <- topic_page[1]
  }
  rdb_path <- file.path(system.file("help", package = package), package)
  tools:::fetchRdDB(rdb_path, file)
}
pkg_topics_index <- function(package) {
  help_path <- system.file("help", package = package)
  file_path <- file.path(help_path, "AnIndex")
  if (length(readLines(file_path, n = 1)) < 1) {
    return(NULL)
  }
  topics <- read.table(file_path, sep = "\t",
                       stringsAsFactors = FALSE, comment.char = "", quote = "", header = FALSE)
  names(topics) <- c("alias", "file")
  topics[complete.cases(topics), ]
}