Export data from Google Analytics API with rga in R

Let's assume that the Google API link starts like this:
https://www.googleapis.com/analytics/v3/data/ga?ids=ga%29A5366921&start-date...
Now I need to export data from it, but I get errors for the code below:
ga$getData(29A5366921, batch = TRUE, '2017-12-01', '2017-12-10',
           metrics = "ga:visits", dimensions = "ga:date",
           sort = "", filters = "", segment = "",
           start = 1, max = 1000)
One of the errors (the others are pretty much the same, just for different parameters): unexpected ',' in " start = 1,"
I don't have ',' anywhere between quotes; the commas are only there to separate the parameters. But this is the first time in my life I'm trying to use R, so maybe there is some hidden rule.
I'm also not sure what to type at the beginning of the code: instead of ga$getData, do I need to type 29A5366921$getData?
I'm using the rga library.
Any help is welcome. Thanks in advance.
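The "unexpected ','" messages are parse errors, not API errors: 29A5366921 starts with a digit but continues with letters, so it is not a valid R token, and the parser fails before getData() is ever called. The id should be passed as a quoted string, and the call stays ga$getData, since ga is the rga instance object, not the profile id. A hedged, untested sketch, assuming the profile id from the link is 29A5366921:

```r
library(rga)

# Untested sketch: rga.open() creates (or restores) the `ga` instance that
# provides getData(). The profile id must be a quoted string -- a bare
# 29A5366921 is not valid R syntax, which is what produces the
# "unexpected ','" parse errors.
rga.open(instance = "ga")

ga_data <- ga$getData("ga:29A5366921", batch = TRUE,
                      '2017-12-01', '2017-12-10',   # start and end date
                      metrics = "ga:visits", dimensions = "ga:date",
                      sort = "", filters = "", segment = "",
                      start = 1, max = 1000)
```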

Related

rscopus scopus_search() only returns first author. Need full author list

I am performing a bibliometric analysis, and have chosen to use rscopus to automate my document searches. I performed a test search, and it worked; the documents returned by scopus_search() exactly matched a manual check that I performed. Here's my issue: rscopus returned only information on the first author (and their affiliation) of each article, but I need information on all authors/affiliations for each article pulled for my particular research questions. I've scoured the rscopus documentation, as well as Elsevier's Developer notes for API use, but can't figure this out. Any ideas on what I'm missing?
query1 <- 'TITLE-ABS-KEY ( ( recreation ) AND ( management ) AND ( challenge ) )'
run1 <- scopus_search(query = query1, api_key = apikey, count = 20,
                      view = c('STANDARD', 'COMPLETE'), start = 0, verbose = TRUE,
                      max_count = 20000, http = 'https://api.elsevier.com/content/search/scopus',
                      headers = NULL, wait_time = 0)
I wanted to post an update since I figured out what was going wrong. I was using the university VPN to access the Scopus API, but the IP address associated with that VPN was not within the range of addresses included in my institution's Scopus license. So, I did not have permission to get "COMPLETE" results. I reached out to Elsevier and very quickly got an institution key that I could add to the search. My working search looks as follows...
query1 <- 'TITLE-ABS-KEY ( ( recreation ) AND ( management ) AND ( challenge ) )'
run1 <- scopus_search(query = query1, api_key = apikey, count = 20,
                      view = c('COMPLETE'), start = 0, verbose = TRUE,
                      max_count = 20000, http = 'https://api.elsevier.com/content/search/scopus',
                      headers = inst_token_header(insttoken), wait_time = 0)
Just wanted to reiterate Brenna's comment - I had the same issue using the VPN to access the API (which can be resolved by being on campus). Elsevier were very helpful and provided an institutional token very quickly - problem solved.
Otherwise, the other workaround I found was to use CrossRef data via library(rcrossref).
I used the doi column from the scopusdata returned by my original Scopus search:
library(rcrossref)  # cr_works()
library(purrr)      # map()
library(dplyr)      # bind_rows(), %>%

crossrefdata <- scopusdata$doi %>%   # iterate over the doi column directly
  map(function(doi) {
    cr_works(dois = doi)             # returns CrossRef metadata for each DOI
  }) %>%
  map("data") %>%                    # cr_works() returns a list; keep only the 'data' element
  bind_rows()
You can then manipulate the crossref metadata however you need with full author list.

Set max errors while uploading data from a bucket using R

I'm using bigrquery to upload data from a Google bucket to BigQuery, and I'd like to set the maximum number of errors I will allow (the default is 0).
For that I'm using bq_perform_load:
bq_job <-
  bq_perform_load(bq_table(destination_project, destination_dataset, destination_table_temp),
                  file_name,
                  source_format = "CSV",
                  nskip = "1",
                  create_disposition = "CREATE_IF_NEEDED",
                  write_disposition = "WRITE_TRUNCATE",
                  fields = bq_fields,
                  billing = destination_project)
According to the documentation, it looks like there is the ability to pass additional arguments via ...:
"Additional arguments passed on to the underlying API call. snake_case names are automatically converted to camelCase."
But I'm not sure how to use it. I've tried something like this:
bq_perform_load(bq_table(destination_project, destination_dataset, destination_table_temp),
                file_name,
                source_format = "CSV",
                nskip = "1",
                create_disposition = "CREATE_IF_NEEDED",
                write_disposition = "WRITE_TRUNCATE",
                fields = bq_fields,
                maxBadRecords = "500",
                billing = destination_project)
but it didn't work.
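Since the documentation says snake_case names in ... are automatically converted to camelCase, one thing worth trying is passing the option in snake_case, and as a number rather than a string, to match the API's integer type. A hedged, untested sketch:

```r
library(bigrquery)

# Untested sketch: per the docs, extra arguments in `...` are forwarded to
# the load job's configuration with snake_case converted to camelCase, so
# `max_bad_records` should be sent as `maxBadRecords`.
bq_job <-
  bq_perform_load(bq_table(destination_project, destination_dataset, destination_table_temp),
                  file_name,
                  source_format = "CSV",
                  nskip = 1,
                  create_disposition = "CREATE_IF_NEEDED",
                  write_disposition = "WRITE_TRUNCATE",
                  fields = bq_fields,
                  max_bad_records = 500,  # forwarded as maxBadRecords
                  billing = destination_project)
```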

Keep getting ERROR: with function tabledap

I keep getting a returned "Error:" with my code, but I have no idea why, when the same code works fine with a similar dataset.
I've changed the observation variable, changed the time restrictions, and searched for similar problems online.
library(rerddap)

CalPoly <- info("HABs-CalPoly", url = "http://erddap.sccoos.org/erddap/")
CalPoly_Data <- tabledap(CalPoly,
                         fields = c('Ceratium', 'Cochlodinium', 'Dinophysis_spp', 'Gymnodinium_spp', 'time'),
                         'time>=2008-08-15T00:00:00Z', 'time<=2019-05-26T05:35:00Z')
It should return a data table, but I just keep getting "Error:".
This similar code does work, though, and I have no idea why:
CalCOFI <- info('siocalcofiHydroCasts')
calcofi.df <- tabledap(CalCOFI,
                       fields = c('cst_cnt', 'date', 'year', 'month', 'julian_date', 'julian_day', 'rpt_line', 'rpt_sta', 'cruz_num', 'intchl', 'intc14', 'time'),
                       'time>=1984-01-01T00:00:00Z', 'time<=2014-04-17T05:35:00Z')
Resolved the issue!
I initially set the url correctly in the info() call, but I didn't realize I also had to set the url again in the tabledap() call. I did not realize the default is https://upwell.pfeg.noaa.gov/erddap/.
Five hours later, but at least it is resolved!
The code now works:
CalPoly_Data <- tabledap(CalPoly, fields = c('Temp', 'time'),
                         'time>=2008-08-15T07:00:00Z', 'time<=2019-05-26T07:00:00Z',
                         url = "http://erddap.sccoos.org/erddap/")
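With that fix, the original call with the full field list should presumably work too, once the same base url is passed through to tabledap() explicitly. A hedged, untested sketch:

```r
library(rerddap)

# Untested sketch: info() and tabledap() each take their own `url` argument;
# tabledap() defaults to the NOAA upwell server, so the SCCOOS url has to be
# repeated here.
CalPoly <- info("HABs-CalPoly", url = "http://erddap.sccoos.org/erddap/")
CalPoly_Data <- tabledap(CalPoly,
                         fields = c('Ceratium', 'Cochlodinium',
                                    'Dinophysis_spp', 'Gymnodinium_spp', 'time'),
                         'time>=2008-08-15T00:00:00Z', 'time<=2019-05-26T05:35:00Z',
                         url = "http://erddap.sccoos.org/erddap/")
```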

Using Filters in R google_analytics

I'm trying to do the following:
google_analytics(my_id,
                 date_range = c("2018-08-25", "2019-12-31"),
                 metrics = c("ga:pageviews",
                             "ga:uniquePageviews"),
                 segments = seg_obj,
                 dimensions = c("date",
                                # "ga:channelGrouping",
                                # "ga:deviceCategory",
                                "ga:pagePath"
                                # , "ga:segment"
                 ),
                 anti_sample = TRUE,
                 filters = "ga::pagePath=#x",
                 max = 100000)
But it returns an error saying:
API returned: Invalid value 'ga::pagePath-#x' for filters parameter.
I also tried using two equal signs, and filtersExpression as well, but both resulted in the same error.
I tried to follow another question asked here:
GoogleAnalyticsR api - FilterExpression
Any idea why this is happening and how I can resolve it?
Stupid error: I was using ga::pagePath when it should be ga:pagePath.
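With the single-colon prefix, the same call should go through. A hedged sketch; the "x" fragment is a stand-in for whatever page-path substring is being matched, and =@ (the Core Reporting "contains" operator) is an assumption about what the original =# was meant to express:

```r
library(googleAnalyticsR)

# Untested sketch: dimension filters use a single "ga:" prefix and a Core
# Reporting operator such as == (exact match) or =@ (contains).
ga_data <- google_analytics(my_id,
                            date_range = c("2018-08-25", "2019-12-31"),
                            metrics = c("ga:pageviews", "ga:uniquePageviews"),
                            segments = seg_obj,
                            dimensions = c("date", "ga:pagePath"),
                            anti_sample = TRUE,
                            filters = "ga:pagePath=@x",  # note: ga:, not ga::
                            max = 100000)
```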

The argument query in the GET function in R is showing "%2C" instead of ","

I'm trying to use the GET function, in the R package httr, to get info from a web page like:
http://fake.web.com/fruit?type=APPLE,GREEN&number=2
so I'm using the code:
resp <- GET("http://fake.web.com/fruit", query = list(type = "APPLE,GREEN", number = 2))
But when I check the url of resp I'm getting:
http://fake.web.com/fruit?type=APPLE%2CGREEN&number=2 instead of
http://fake.web.com/fruit?type=APPLE,GREEN&number=2
How can I solve this?
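Note that %2C is just the percent-encoded comma, so most servers treat both URLs identically. If the literal comma is really required, one option worth trying (a hedged, untested sketch) is wrapping the value in I(), which recent httr versions treat as already escaped and leave alone:

```r
library(httr)

# Untested sketch: httr percent-encodes query values by default, which is
# why "," becomes "%2C". Values wrapped in I() are passed through as-is,
# so the comma should survive unescaped.
resp <- GET("http://fake.web.com/fruit",
            query = list(type = I("APPLE,GREEN"), number = 2))
```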