Query and request issues with gtrendsR

I have successfully logged in to my Google account with
gconnect(usr, password)
but I get a request error when I try to query the data:
gt.us <- gtrends("USA", geo="US", start_date="2004-01-01", end_date="2004-01-30")
Error: Not enough search volume. Please change your search terms.
In addition: Warning message:
In request_GET(x, url, ...) : Gone (HTTP 410).
Could anyone help me out?

Get gtrendsR 2.0.0:
devtools::install_github('PMassicotte/gtrendsR')
library(gtrendsR)
# no need to log in anymore
# note the syntax change!
gt.us <- gtrends("USA", geo = "US", time = "2004-01-01 2004-01-30")
See Philippe Massicotte's answer to this issue.
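The returned object can be checked directly. A minimal sketch, assuming gtrendsR >= 2.0.0, where the result is a list whose interest_over_time element holds the hits and which has a plot() method:
str(gt.us$interest_over_time)   # date, hits, keyword, geo, ...
plot(gt.us)                     # ggplot2 time series of interest over time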

Related

academictwitteR - Error in make_query(url = endpoint_url, params = params, bearer_token = bearer_token, : something went wrong. Status code: 403

I have just received Academic Twitter developer privileges and am attempting to scrape some tweets. I updated RStudio and regenerated a new bearer token once I got Academic Twitter access, and get_bearer() returns my new bearer token. However, I continue to get the following error:
Error in make_query(url = endpoint_url, params = params, bearer_token = bearer_token, :
something went wrong. Status code: 403
In addition: Warning messages:
1: Recommended to specify a data path in order to mitigate data loss when ingesting large amounts of data.
2: Tweets will not be stored as JSONs or as a .rds file and will only be available in local memory if assigned to an object.
Additionally, I have tried specifying a data path, but I think I am confused as to what that means. I think that's where my issue lies; does the data path mean a specific file path on my computer?
Below is the code I was attempting to use. This code worked previously with my professor's bearer token, which they used just to show the output:
tweets <- get_all_tweets(
  query = "#BlackLivesMatter",
  start_tweets = "2020-01-01T00:00:00Z",
  end_tweets = "2020-01-05T00:00:00Z",
  n = 100
)
Thanks in advance!
Status code 403 means Forbidden. You may want to check the error-codes reference page of the Twitter API here.
Perhaps your bearer token is misspelled?
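On the data-path warnings: data_path is simply a directory on your own machine where academictwitteR writes each page of results as JSON while it fetches, so a crash or rate limit does not lose everything. A minimal sketch using the data_path and bind_tweets arguments of get_all_tweets() ("tweet_data/" is a hypothetical folder; any writable directory works):
tweets <- get_all_tweets(
  query        = "#BlackLivesMatter",
  start_tweets = "2020-01-01T00:00:00Z",
  end_tweets   = "2020-01-05T00:00:00Z",
  n            = 100,
  data_path    = "tweet_data/",   # JSON pages land here as they are fetched
  bind_tweets  = TRUE             # also return a data frame in memory
)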

quantmod - getQuote() - '403 Forbidden'

I found an answer to my own question (see below), but I still need help.
In the same package, quantmod, there is a function called getSymbols.google.
If I use it to get the Microsoft quote, for example, it works all right:
getSymbols.google('MSFT', environment(), src = "google", from = (Sys.Date() - 1))
[1] "MSFT"
But I can't make it work on a currency pair:
getSymbols.google("GBPUSD", environment(), src = "google", from = (Sys.Date() - 1))
Error in download.file(paste(google.URL, "q=", Symbols.name, "&startdate=", :
cannot open URL 'http://finance.google.com/finance/historical?q=GBPUSD&startdate=Nov+02,+2017&enddate=Nov+03,+2017&output=csv'
In addition: Warning message:
In download.file(paste(google.URL, "q=", Symbols.name, "&startdate=", :
cannot open URL 'http://finance.google.com/finance/historical?q=GBPUSD&startdate=Nov+02,+2017&enddate=Nov+03,+2017&output=csv': HTTP status was '400 Bad Request'
Any ideas?
Good morning,
Since the 1st of November I have been having trouble with the getQuote function from Yahoo. It is a function in the quantmod package, which uses the Yahoo API to request the information.
The description of the function is as follows: "Fetch current stock quote(s) from specified source. At present this only handles sourcing quotes from Yahoo Finance, but it will be extended to additional sources over time."
In R, I'm getting the following error: "HTTP status was '403 Forbidden'"
I've looked in my browser, and the request fails on the Yahoo web page as well.
Does anybody know how to solve it, or any alternative to the getQuote() function?
Here is an example from RStudio
getQuote("AAPL")
Error in download.file(paste("https://finance.yahoo.com/d/quotes.csv?s=", :
cannot open URL 'https://finance.yahoo.com/d/quotes.csv?s=AAPL&f=d1t1l1c1p2ohgv'
In addition: Warning message:
In download.file(paste("https://finance.yahoo.com/d/quotes.csv?s=", :
cannot open URL 'https://finance.yahoo.com/d/quotes.csv?s=AAPL&f=d1t1l1c1p2ohgv': HTTP status was '403 Forbidden'
Thanks
It seems that Yahoo has discontinued this service. Is anyone aware of an alternative to Yahoo? (I'd rather not have to web-scrape Yahoo for this.)
rob
I ran into the same problem... it's kludgey, but as a workaround to get the end-of-day value, I have found this to work for now.
Instead of using getQuote() to get the Last price (which no longer seems to work from Yahoo):
underlying <- "AAPL"
quote.last <- getQuote(underlying)$Last
I use getSymbols, which still works; it loads the history into a new object, and I pull out the value I want from that:
Hx <- getSymbols(underlying, from = Sys.Date() - 1)  # lets me avoid retyping the ticker name when I do this across many tickers
quote.last <- as.double(tail(Cl(get(Hx)), 1))        # closing price from the last row of data
rm(list = Hx)                                        # throw away the temporary object holding the quote history
I'm sure there's a more elegant way to do it, but this is what fell out of my brain as a quick workaround that got it done... sadly it doesn't get things like the Bid and Ask that getQuote does.
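If you do this across many tickers, a small wrapper keeps the temporary objects out of your workspace. A minimal sketch, assuming quantmod is loaded; auto.assign = FALSE makes getSymbols return the xts object directly instead of assigning it:
# Sketch: most recent close for each ticker, returned as a named vector.
# Looks back a week so weekends and holidays still yield a last row.
last_close <- function(tickers) {
  sapply(tickers, function(tk) {
    hx <- getSymbols(tk, from = Sys.Date() - 7, auto.assign = FALSE)
    as.double(tail(Cl(hx), 1))
  })
}
last_close(c("AAPL", "MSFT"))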

How can I avoid Twitter API rate limits when returning a user's followers?

I have seen a few posts about this, but I have not been able to use any of the suggested code with the way my program is set up. I am not very good at R, but I need this information for a political science class. Could someone please help me figure out a way to avoid the error message "Warning message: In twInterfaceObj$doAPICall(cmd, params, method, ...) : Rate limit encountered & retry limit reached - returning partial results"? Thank you in advance! The code I am running is below:
# install and load packages
install.packages("twitteR")
install.packages("ROAuth")
install.packages("base64enc")
install.packages("plyr")
library("twitteR")
library("ROAuth")
library("base64enc")
library("plyr")
# establish the connection (do this once; 'cred' must already be an OAuth credential object)
cred$handshake(cainfo = "cacert.pem")
# set up Twitter (argument order: consumer key, consumer secret, access token, access secret)
setup_twitter_oauth('XXXXXXXXXX',  # consumer key
                    'XXXXXXXXXX',  # consumer secret
                    'XXXXXXXXXX',  # access token
                    'XXXXXXX')     # access secret
# store the user
user <- getUser('JackPosobiec')
# return a list of followers
Posobiecfollowers <- user$getFollowers(n = NULL)
Posobiecfollowers
# save the followers in a data set
followers.df <- ldply(Posobiecfollowers, function(t) t$toDataFrame())
write.csv(followers.df, file = "JackPosobiecfollowersfile.csv")
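Two things may help with the rate limit. A sketch: whether getFollowers() forwards retryOnRateLimit through to the API-call layer is an assumption here (the warning's mention of a "retry limit" suggests the mechanism exists), while getCurRateLimitInfo() is a documented twitteR function:
# Ask twitteR to wait out rate-limit windows instead of returning partial results
# (retryOnRateLimit forwarding is an assumption; test on a small account first).
Posobiecfollowers <- user$getFollowers(n = NULL, retryOnRateLimit = 15)
# Inspect the remaining quota before a big pull:
rl <- getCurRateLimitInfo("followers")
rl   # limit, remaining, and reset time for the followers endpoints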

Can getSymbols still work with oanda?

I want to get data for currencies and metals. While trying out packages, many people suggested quantmod, so I used getSymbols as follows:
getSymbols("USD/EUR",src="oanda")
Error in download.file(paste(oanda.URL, from.date, to.date, "exch=", currency.pair[1], :
cannot open URL 'http://www.oanda.com/convert/fxhistory?lang=en&date1=09%2F28%2F13&date=02%2F09%2F15&date_fmt=us&exch=USD&expr2=EUR&margin_fixed=0&SUBMIT=Get+Table&format=CSV&redirected=1'
In addition: Warning message:
In download.file(paste(oanda.URL, from.date, to.date, "exch=", currency.pair[1], :
cannot open: HTTP status was '404 Not Found'
When I use:
getSymbols("USD/EUR", src = "oanda", from = "2015-01-01")
I get the same message.
So can getSymbols still work with Oanda?
Another question: where can I find a list of the symbols that web services such as Yahoo, Oanda, and Google support? I don't actually need stock symbols, just the symbols for futures such as corn and gold, and for currencies.
Oanda changed their URL structure and file format. I fixed this over the weekend. As for the symbol lists, you would need to look on the website of each respective provider to find which symbols they support.
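A minimal sketch of picking up that fix, assuming it is in the development version of quantmod on GitHub (joshuaulrich/quantmod) rather than the current CRAN release; note that Oanda serves only roughly the last 180 days of history:
devtools::install_github("joshuaulrich/quantmod")
library(quantmod)
getSymbols("USD/EUR", src = "oanda", from = Sys.Date() - 30)
head(USDEUR)   # quantmod drops the "/" in the name of the object it assigns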

Using R package BerkeleyEarth

I'm working for the first time with the R package BerkeleyEarth, attempting to use its convenience functions to access the BEST data. Maybe it's just a problem with their servers (a matter I've separately raised with the package's maintainer), but I wanted to know whether it's instead something silly I'm doing.
To reproduce my error:
library(BerkeleyEarth)
downloadBerkeley()
which provides the following error message
trying URL 'http://download.berkeleyearth.org/downloads/TAVG/LATEST%20-%20Non-seasonal%20_%20Quality%20Controlled.zip'
Error in download.file(urls$Url[thisUrl], destfile = file.path(destDir, :
cannot open URL 'http://download.berkeleyearth.org/downloads/TAVG/LATEST%20-%20Non-seasonal%20_%20Quality%20Controlled.zip'
In addition: Warning message:
In download.file(urls$Url[thisUrl], destfile = file.path(destDir, :
InternetOpenUrl failed: 'A connection with the server could not be established'
Has anyone had a better experience using this package?
The error message points to a different URL from the ones listed at http://berkeleyearth.org/data/ for the zip-formatted files. (There is another set of .nc files that appear to be more recent.) I would replace the entries in the BerkeleyUrls data frame with the ones that match your analysis strategy.
This is the current URL that should be in position 1,1:
http://berkeleyearth.lbl.gov/downloads/TAVG/LATEST%20-%20Non-seasonal%20_%20Quality%20Controlled.zip
And this is the one that is in the package dataframe:
> BerkeleyUrls[1,1]
[1] "http://download.berkeleyearth.org/downloads/TAVG/LATEST%20-%20Non-seasonal%20_%20Quality%20Controlled.zip"
I suppose you could try:
BerkeleyUrls[, 1] <- sub( "download\\.berkeleyearth\\.org", "berkeleyearth.lbl.gov", BerkeleyUrls[, 1])
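A quick check of the patch, assuming (as implied above) that downloadBerkeley() reads its URLs from the BerkeleyUrls data frame:
BerkeleyUrls[1, 1]   # should now start with http://berkeleyearth.lbl.gov/
downloadBerkeley()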
