Why can't I use the functions from the prophet package? - r

I'm not using CSV data; will that be a problem?
Every time I run this, it shows could not find function "prophet" or "make_future_dataframe".
This is the data I use:
resp_jakarta <- GET("https://data.covid19.go.id/public/api/prov_detail_DKI_JAKARTA.json")
status_code(resp_jakarta)
cov_jakarta_raw <- content(resp_jakarta, as = "parsed", simplifyVector = TRUE)
cov_jakarta <- cov_jakarta_raw$list_perkembangan
new_cov_jakarta <-
cov_jakarta %>%
select(-contains("DIRAWAT_OR_ISOLASI")) %>%
select(-starts_with("AKUMULASI")) %>%
rename(
kasus_baru = KASUS,
meninggal = MENINGGAL,
sembuh = SEMBUH
) %>%
mutate(
tanggal = as.POSIXct(tanggal / 1000, origin = "1970-01-01"),
tanggal = as.Date(tanggal)
)
#Forecast
install.packages("prophet")
trying URL https://cran.rstudio.com/bin/macosx/contrib/4.0/prophet_0.6.1.tgz
Content type 'application/x-gzip' length 6317112 bytes (6.0 MB)
downloaded 6.0 MB
The downloaded binary packages are in
/var/folders/bl/q861y47s7b7cnym8hzmryv0c0000gn/T//RtmpTKLo8z/downloaded_packages
library(prophet)
This happens when I run library(prophet):
Loading required package: Rcpp
Loading required package: rlang
Error: package or namespace load failed for ‘prophet’ in dyn.load(file, DLLpath = DLLpath, ...):
unable to load shared object '/Library/Frameworks/R.framework/Versions/4.0/Resources/library/prophet/libs/prophet.so':
dlopen(/Library/Frameworks/R.framework/Versions/4.0/Resources/library/prophet/libs/prophet.so, 6): Library not loaded: #rpath/libtbb.dylib
Referenced from: /Library/Frameworks/R.framework/Versions/4.0/Resources/library/prophet/libs/prophet.so
Reason: image not found
date=as.Date(new_cov_jakarta$tanggal)
cases=new_cov_jakarta$kasus_baru
temp_prophet <- data.frame(date,cases)
temp_prophet <- temp_prophet %>% rename(ds = date, y = cases)
#Issues start from here
m <- prophet(temp_prophet)
And then this happens:
Error in prophet(temp_prophet) : could not find function "prophet"
future <- make_future_dataframe(m, periods = 30,freq="day")
Error in make_future_dataframe(m, periods = 30, freq = "day") : could not find function "make_future_dataframe"
tail(future)
forecast <- predict(m, future)

That has already been reported as an issue in prophet, and installing the package from source was suggested as a fix:
install.packages("prophet", type = "source")
Also, double-check that both prophet.so and libtbb.dylib exist on your system.
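A minimal sketch of that fix, assuming packages live in the default library reported by .libPaths() (the exact subdirectories, especially for libtbb.dylib, can vary by platform and package version):

```r
# Reinstall prophet from source so it is compiled and linked locally
install.packages("prophet", type = "source")

# Verify that the shared object the loader complained about exists
prophet_so <- file.path(.libPaths()[1], "prophet", "libs", "prophet.so")
file.exists(prophet_so)

# libtbb.dylib is bundled with RcppParallel; check that it is present too
libtbb <- file.path(.libPaths()[1], "RcppParallel", "lib", "libtbb.dylib")
file.exists(libtbb)
```

If prophet.so exists but libtbb.dylib does not, reinstalling RcppParallel (and rstan, which prophet builds on) from source before reinstalling prophet may resolve the missing-library error.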

Related

Hide package loading message when I render R Markdown

I can't get this dplyr package loading message to go away:
package 'dplyr' successfully unpacked and MD5 sums checked
And here is my current code:
g <- df$Finished
h <- append(g, rep("dummy",519))
i <- data.frame(counts <- table(h))
row.names(i) <- c("In progress", "Completed", "Invited")
colnames(i) <- c("gh", "Count")
i = subset(i,select = -c(gh))
suppressPackageStartupMessages(install.packages("dplyr", repos = "http://cran.us.r-project.org", quiet = TRUE, message=FALSE))
suppressPackageStartupMessages(library(dplyr, quietly = TRUE, warn.conflicts = FALSE, invisible()))
ii<- i %>%
arrange(desc(Count))
u <- ii %>% mutate(Percentage = (ii[,1]/519)*100)
print(u)
It even says "cannot remove prior installation of package 'dplyr'".
It's not a package loading message; it's a package installation message (which is why all the message suppression won't help). You probably shouldn't install the package every time. Try something like:
if (!require("dplyr")) {
install.packages("dplyr", repos = "http://cran.us.r-project.org",
quiet = TRUE)
}
If all else fails you could probably use capture.output() to make sure you had intercepted all output.
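As a sketch of that last resort: capture.output() with type = "message" intercepts messages sent to stderr (which is where package startup messages go), while the default type = "output" intercepts regular printed output:

```r
# Run the install/load once, capturing the messages so they never reach
# the rendered document; invisible() discards the captured text
invisible(capture.output(
  if (!require("dplyr", quietly = TRUE)) {
    install.packages("dplyr", repos = "http://cran.us.r-project.org")
    library(dplyr)
  },
  type = "message"
))
```

In an R Markdown chunk, setting the chunk options message=FALSE and warning=FALSE is usually the simpler route for suppressing loading messages.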

Problem with installing package transformr and animating a plot in R

I have a data frame named df and want to create an animated line chart.
Here are my data frame and the code for animating the plot:
co1<- tibble(age= c(10:14 ), pop=c(10,12,14,16,18), cn= c(10.1,12.1,14.25,16.09,18.3), country ="USA")
co2<- tibble(age= c(10:14 ), pop=c(10.5,12.6,14.5,16.5,18.5), cn= c(10.6,12.5,14.3,16.7,18.6), country ="brazil")
co3<- tibble(age= c(10:14 ), pop=c(10.9,12.9,14.9,16.9,18.9), cn= c(11.9,13.9,15.9,17.9,19.9), country ="niger")
df<- rbind(co1,co2,co3)
df <- pivot_longer(df, cols = c("pop", "cn"))
#plot
ggplot(df, aes(x = age, y = value, group = country, col = name)) + geom_line() +
labs(x = "age", y = "population") + transition_states(country, transition_length = 3, state_length = 0) +
ggtitle("country: {closest_state}") +
theme_bw()
But when I try to run this, I get the following error:
Error in transform_path(all_frames, states[[i]], ease, nframes[i], !!id, : transformr is required to tween paths and lines
Following some other questions, I tried to install the transformr package with both install.packages("transformr") and devtools::install_github("thomasp85/transformr"), but neither worked, and I got this error:
Warning in install.packages :
downloaded length 2292878 != reported length 3644763
Warning in install.packages :
URL 'https://cran.rstudio.com/src/contrib/sf_1.0-1.tar.gz': Timeout of 60 seconds was reached
Error in download.file(url, destfile, method, mode = "wb", ...) :
download from 'https://cran.rstudio.com/src/contrib/sf_1.0-1.tar.gz' failed
Warning in install.packages :
download of package ‘sf’ failed
The error messages indicate that there is an issue with installing the sf package. It looks like the CRAN mirror (https://cran.rstudio.com/) you are using is not responding. This might be a temporary issue.
You can try to select another mirror with chooseCRANmirror() or specify another mirror just to install the sf package:
install.packages('sf', repos='http://cran.us.r-project.org')
If that works try again to run devtools::install_github("thomasp85/transformr").
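Because one of the failures was a 60-second download timeout, it may also help to raise R's download timeout before retrying; a sketch (the 300-second value is an arbitrary choice):

```r
# install.packages() downloads honor options("timeout"); the default of
# 60 seconds can be too short for the large sf source tarball
options(timeout = 300)

# Install sf from an alternative mirror, then retry transformr
install.packages("sf", repos = "http://cran.us.r-project.org")
install.packages("transformr", repos = "http://cran.us.r-project.org")
```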

Using R to process Google Earth Engine data

I want to download daily tmax from NASA for a given lat/lon (https://developers.google.com/earth-engine/datasets/catalog/NASA_NEX-DCP30_ENSEMBLE_STATS)
using the following tutorial https://jesjehle.github.io/earthEngineGrabR/index.html
library(devtools)
install_github('JesJehle/earthEngineGrabR')
library(earthEngineGrabR)
ee_grab_install() # had to install Anaconda before doing this step.
test_data <- ee_grab(data = ee_data_collection(datasetID = "NASA/NEX-DCP30_ENSEMBLE_STATS",
timeStart = "1980-01-01",
timeEnd = '1980-01-02',
bandSelection = 'tasmax'),
targetArea = system.file("data/territories.shp", package = "earthEngineGrabR")
)
Error: With the given product argument no valid data could be requested.
In addition: Warning message:
Error on Earth Engine servers for data product: NASA-NEX-DCP30_ENSEMBLE_STATS_s-mean_t-mean_1980-01-01to2005-12-31
Error in py_call_impl(callable, dots$args, dots$keywords): EEException: Collection.first: Error in map(ID=historical_195001):
Image.select: Pattern 'tasmax' did not match any bands.
I would like to know how to specify the band so that I don't get this error. Also, instead of using a shapefile as the target area, how do I download tmax data for a single lat/lon, 9.55, 78.59?
You might use rgee to accomplish this. Currently, rgee has a function called rgee::ee_extract that works similarly to raster::extract().
library(rgee)
library(sf)
# 1. Load a geometry
y <- st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE) %>%
st_transform(4326)
## Move that geometry from local to earth engine
ee_y <- sf_as_ee(y)
# 2. Load your ImageCollection
x <- ee$ImageCollection("NASA/NEX-DCP30_ENSEMBLE_STATS")$
filterDate("1980-01-01","1980-01-02")$
map(function(img) img$select("tasmax_mean"))
## calculate the nominal scale
scale <- x$first()$projection()$nominalScale()$getInfo()
# 3. Extract values
tasmax_mean_data <- ee_extract(x = x,
y = y,
fun = ee$Reducer$mean(),
scale = scale,
id = "FIPS")
# 4. Merge results with the sf object
ee_nc_tasmax <- merge(y, tasmax_mean_data, by = "FIPS")
plot(ee_nc_tasmax['historical_198001'])
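For the single-point part of the question, ee_extract() also accepts an sf point. A sketch for lat/lon 9.55, 78.59, assuming rgee has already been initialized with ee_Initialize() and reusing the x collection and scale defined above:

```r
library(rgee)
library(sf)

# Build an sf point for the requested location; note that st_point()
# takes coordinates in lon/lat order, so 78.59 comes first
pt <- st_sf(geometry = st_sfc(st_point(c(78.59, 9.55)), crs = 4326))

# Extract tasmax_mean at that point from the same ImageCollection
tasmax_pt <- ee_extract(x = x,
                        y = pt,
                        fun = ee$Reducer$mean(),
                        scale = scale)
tasmax_pt
```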

Error in x[["endpoint"]] : object of type 'closure' is not subsettable in rtweet CRAN package

I am using the rtweet package to download tweets based on a keyword.
The function search_tweets() returned an error after successfully working for more than 2 months.
Now it's showing the error:
# load twitter library - the rtweet library is recommended now over twitteR
library(rtweet)
# plotting and pipes - tidyverse!
library(ggplot2)
library(dplyr)
# text mining library
library(tidytext)
# plotting packages
library(igraph)
library(ggraph)
cat("Initializing.... \n")
cat("Enter a keyword whose analysis you want to check: ")
#a <- readLines("stdin", n = 1) # does not work in RStudio
a <- "#realdonaldtrump"
src_tweets <- search_tweets(q = a, n = 1000,
lang = "en",
include_rts = TRUE)
Error in init_oauth1.0(self$endpoint, self$app, permission = self$params$permission, :
Unauthorized (HTTP 401).

Not able to convert an R data frame to a Spark DataFrame

When I try to convert my local data frame in R to a Spark DataFrame using:
raw.data <- as.DataFrame(sc,raw.data)
I get this error:
17/01/24 08:02:04 WARN RBackendHandler: cannot find matching method class org.apache.spark.sql.api.r.SQLUtils.getJavaSparkContext. Candidates are:
17/01/24 08:02:04 WARN RBackendHandler: getJavaSparkContext(class org.apache.spark.sql.SQLContext)
17/01/24 08:02:04 ERROR RBackendHandler: getJavaSparkContext on org.apache.spark.sql.api.r.SQLUtils failed
Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
The question is similar to
sparkR on AWS: Unable to load native-hadoop library.
You don't need to use sc if you are using the latest version of Spark. I am using the SparkR package, version 2.0.0, in RStudio. Please go through the following code, which connects the R session to a SparkR session:
if (nchar(Sys.getenv("SPARK_HOME")) < 1) {
Sys.setenv(SPARK_HOME = "path-to-spark home/spark-2.0.0-bin-hadoop2.7")
}
library(SparkR)
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R","lib")))
sparkR.session(enableHiveSupport = FALSE,master = "spark://master url:7077", sparkConfig = list(spark.driver.memory = "2g"))
Following is the output of R console:
> data<-as.data.frame(iris)
> class(data)
[1] "data.frame"
> data.df<-as.DataFrame(data)
> class(data.df)
[1] "SparkDataFrame"
attr(,"package")
[1] "SparkR"
Alternatively, use this example code (note that sparkR.init() and sparkRSQL.init() belong to the older, pre-2.0 SparkR API):
library(SparkR)
library(readr)
sc <- sparkR.init(appName = "data")
sqlContext <- sparkRSQL.init(sc)
old_df<-read_csv("/home/mx/data.csv")
old_df<-data.frame(old_df)
new_df <- createDataFrame( sqlContext, old_df)
