R BigQuery error: "attempt to apply non-function" / "API Data failed to parse"

I am trying to do a simple query with R and bigQueryR (https://github.com/cloudyr/bigQueryR).
I am not sure what is wrong, but I keep getting an error (this might just be my lack of knowledge with R).
It is returning the list of projects and data sets correctly, so I know it is connected.
I followed the guide on querying:
https://rdrr.io/cran/bigQueryR/man/bqr_query.html
bqr_query(projectId, datasetId, query, maxResults = 1000)
This is the command I put in:
result <- bqr_query("bigqyerytestproject2", "TestDataSet1", "SELECT * FROM TestTable3", maxResults = 1000)
and I get the error:
Error : attempt to apply non-function
Warning message:
In q(the_body = body, path_arguments = list(projects = projectId)) :
API Data failed to parse. Returning parsed from JSON content.
Use this to test against your data_parse_function.
But then I checked BigQuery, and the query is going through successfully.
I am only querying a small amount of data before I move a larger data set, but the results are:
[
{
"Name": "Season",
"Date": "2010-06-30",
"ID": "1"
},
{
"Name": "Test",
"Date": "2010-06-30",
"ID": "2"
}
]
Thanks in advance for your help

Depending on your app's needs, you might consider using bigrquery instead of bigQueryR. The difference is explained here.
Beyond that, I'd suggest filing an issue with the developer of the bigQueryR library.
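For comparison, here is a minimal sketch of the same query using bigrquery (assuming default interactive authentication; the project, dataset, and table names are copied from the question, and the n_max argument of bq_table_download() may differ across bigrquery versions):

library(bigrquery)

# Submit the query as a job and wait for it to finish
tb <- bq_project_query(
  "bigqyerytestproject2",
  "SELECT * FROM TestDataSet1.TestTable3"
)

# Download the results into an R data frame
result <- bq_table_download(tb, n_max = 1000)

Note that bq_project_query() runs standard SQL by default, whereas bqr_query() historically defaulted to legacy SQL, so the query text may need minor adjustments.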

I'll describe how I got past that same error in case it's helpful, though I think it might be a slightly different problem as mine was caused by a permissions error with Google Sheets.
To reproduce the error, I can:
Create a Google Sheets spreadsheet
Use the spreadsheet as the source of a BigQuery table
Query the BigQuery table using bqr_query()
At this point an error pops up due to insufficient permissions.
After seeing the error, I grant my app permission to edit the Google Sheet.
From then on, "Error : attempt to apply non-function" appears and I can't get rid of it.
If I don't query the table until after granting my app permission, the error never appears in the first place. So I just had to recreate the Google Sheet and the BigQuery table.
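A different way to avoid the permission problem (an assumption on my part, not something verified against the setup above) is to request the Drive scope before authenticating, so that bigQueryR is allowed to read Sheets-backed tables. A minimal sketch using googleAuthR's scope option:

library(bigQueryR)

# Request the Drive scope alongside BigQuery so that Sheets-backed
# tables can be read (assumed scope list; adjust to your setup)
options(googleAuthR.scopes.selected = c(
  "https://www.googleapis.com/auth/bigquery",
  "https://www.googleapis.com/auth/drive"
))

# Re-authenticate so the new scopes take effect
bqr_auth()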
Hope this helps!

Related

Firestore REST API batchGet returns status 400

I am trying to get data from Firestore in a batch using the batchGet method, but I get the error below:
Error: JSON error response from server: [{"error":{"code":400,"message":"Document name "["projects/MYPROJECT/databases/(default)/documents/MYCOLLECTION/DOCUMENTID"]" lacks "projects" at index 0.","status":"INVALID_ARGUMENT"}}].status: 400
I have searched for existing questions about this and found this one. They are trying to add documents rather than using batchGet, but I followed the solution anyway. Still no luck.
Note: I am integrating Firebase to AppGyver.
I am new to Firebase and still learning. I need all the help I can get. Thank you in advance.
I have found what I did wrong.
In AppGyver you can only set your parameters as text or number, while the documents parameter needs to be an array. So I converted the array to a string using ENCODE_JSON, but that was wrong.
The correct way is to send the documents as an array without converting it. Use the formula:
Just hit "Save" even if there is an error telling you that it is not text.

Dataset from BigQuery to Data Studio

I connected my Firebase project to BigQuery, and then connected BigQuery to Data Studio to create a dynamic dashboard.
The default Data Studio connector does not work; it returns an error. Analyzing the BigQuery logs, I found this:
QUERY:
SELECT t0.device.browser, t0.device.browser_version,
t0.device.web_info.browser, t0.device.web_info.browser_version
FROM my_dataset.my_table AS t0 LIMIT 100;
RETURN:
jobStatus: {
  additionalErrors: [
    0: {
      code: 11
      message: "Duplicate column names in the result are not supported. Found duplicate(s): browser, browser_version"
    }
  ]
  error: {
    code: 11
    message: "Duplicate column names in the result are not supported. Found duplicate(s): browser, browser_version"
  }
  state: "DONE"
}
Since I am not able to change the schema, do I have no way to fix this?
Is there any way to report this to Google?
The duplicated columns are these:
For browser_version:
t0.device.web_info.browser_version
t0.device.browser_version
For browser:
t0.device.browser
t0.device.web_info.browser
They may look different and come from different parent records, but the resulting column names are the same, which is what causes the error.
If you can't change the schema, I recommend using an alias for each column, like:
SELECT
t0.device.browser AS device_browser,
t0.device.browser_version AS device_browser_version,
t0.device.web_info.browser AS web_browser,
t0.device.web_info.browser_version AS web_browser_version
FROM my_dataset.my_table AS t0 LIMIT 100;
I hope this is helpful!

Meteor: Match error: Failed Match.OneOf or Match.Optional validation (websocket)

I have a website that uses Meteor 0.9. I have deployed this website on OpenShift (http://www.truthpecker.com).
The problem I'm experiencing is that when I go to a certain path on my site (/discover), sometimes (though not always) the data needed is not fetched by Meteor. Instead I get the following errors:
On the client side:
WebSocket connection to 'ws://www.truthpecker.com/sockjs/796/3tfowlag/websocket' failed: Error during WebSocket handshake: Unexpected response code: 400
And on the server side:
Exception from sub rD8cj6FGa6bpTDivh Error: Match error: Failed Match.OneOf or Match.Optional validation
at checkSubtree (packages/check/match.js:222)
at check (packages/check/match.js:21)
at _.extend._getFindOptions (packages/mongo-livedata/collection.js:216)
at _.extend.find (packages/mongo-livedata/collection.js:236)
at Meteor.publish.Activities.find.user [as _handler] (app/server/publications.js:41:19)
at maybeAuditArgumentChecks (packages/livedata/livedata_server.js:1492)
at _.extend._runHandler (packages/livedata/livedata_server.js:914)
at _.extend._startSubscription (packages/livedata/livedata_server.js:764)
at _.extend.protocol_handlers.sub (packages/livedata/livedata_server.js:577)
at packages/livedata/livedata_server.js:541
Sanitized and reported to the client as: Match failed [400]
Can anyone help me to eliminate this error and get the site working? I'd be very grateful!
Tony
P.S.: I never got this error using localhost.
EDIT:
The line causing the problem is this one (line 41):
return Activities.find({user: id}, {sort: {timeStamp: -1}, limit:40});
One document in the activities collection looks like this:
{
"user" : "ZJrgYm34rR92zg6z7",
"type" : "editArg",
"debId" : "wtziFDS4bB3CCkNLo",
"argId" : "YAnjh2Pu6QESzHQLH",
"timeStamp" : ISODate("2014-09-12T22:10:29.586Z"),
"_id" : "sEDDreehonp67haDg"
}
When I run the query from line 41 in the mongo shell, I get the following error:
error: { "$err" : "Unsupported projection option: timeStamp", "code" : 13097 }
I don't really understand why that is, though. Can you help me there as well? Thank you.
Make sure that you are passing an integer to skip and limit. Use parseInt() if need be.
You have a document in your collection that does not match your check validation.
The validation in question is in app/server/publications.js:41.
So the attribute is checked with something like Match.Optional(Match.OneOf(xx)), but the document's attribute matches none of the values in Match.OneOf.
You would have to go through the documents of the collection causing this and remove or correct the offending attribute so that it matches your check statement.
Update for your updated question:
You're running Meteor-style queries in the mongo shell. The error you get there is unrelated to the problem in Meteor: in the mongo shell, sorting is done with db.activities.find(..).sort(..), not with activities.find(.., { sort : {..} }) as in Meteor. So that error is a separate issue.
The real issue is most likely that your id is not actually a string. For the sample document you posted, it would need to be "ZJrgYm34rR92zg6z7" for {user: id} to match. You might want to use the debugger to see what it actually is.
I don't think you can use limit in client-side find queries; removing limit from my query solves the problem. If you're looking for pagination, you can roll your own by passing a parameter to your Activities publication, so that the limit is added to the server-side query along with an offset. There is also this pagination package.

RGoogleDocs Token Invalid Error

I have a confidential spreadsheet that I'd like to access via the RGoogleDocs library, but I am receiving an odd error. This same code worked yesterday to fetch the worksheet (it was failing on the sheetsAsMatrix call); now I can't even fetch the worksheet at all. options(error=recover) doesn't tell me anything beyond the invalid-token error message.
auth = getGoogleAuth("email#email.com", "password")
sheets.con = getGoogleDocsConnection(auth, service = "wise")
docs = getDocs(sheets.con)
names(docs)
[1] "Testing Plan 0.1"
[2] "All Events Template 11 1 13"
ts = getWorksheets("All Events Template 11 1 13", sheets.con)
Error in getDocs(con, what = "spreadsheets") :
problems connecting to get the list of documents: Token invalid (401)
As a solution to my own problem, in case anyone else needs a workaround: I ended up using googlecl (https://code.google.com/p/googlecl/) and just executed a system call in R to download the spreadsheet I needed. Finally, I read it into R using the XLConnect library.
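A rough sketch of that workaround (the googlecl flags are from memory and may need adjusting, and the output file name is an assumption):

# Shell out to googlecl to download the spreadsheet as an .xls file
# (flags are approximate; check the googlecl docs for exact syntax)
system('google docs get --title "All Events Template 11 1 13" --format xls')

# Read the downloaded workbook into R
library(XLConnect)
events <- readWorksheetFromFile("All Events Template 11 1 13.xls", sheet = 1)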

Obtaining topic description in the search API

I seem to be having a problem pulling out the text content of the following query without making another call:
http://tinyurl.com/mgsewz2 via the mqlread API
{
"id": "/en/alcatraz_island",
"/common/topic/description": [{}],
"/common/topic/topic_equivalent_webpage": [],
"/common/topic/official_website": null
}
I can't retrieve the following:
description
equivalent webpage (I'm looking for the en wiki page)
but I can obtain the official_website URL.
It looks like I can get them via the search API's output= parameter, but I can't walk through the entire set that I'm looking for without getting a "search request is too large" error.
http://markmail.org/message/hd6pcveta7bhuz25#query:+page:1+mid:u7xegxlhtmhwiqbl+state:results
Thanks!
If you want to download large subsets of Freebase data, your best bet is to use the Freebase RDF dumps. They contain all the properties that you listed above.
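As an illustration, here is a sketch of pulling one topic's triples out of the (gzipped, N-Triples) dump in R. The dump is tens of gigabytes, so it is read in chunks; the file name and the idea of grepping for the /en key are assumptions, since the dump mostly identifies topics by machine id:

con <- gzfile("freebase-rdf-latest.gz", open = "r")
matches <- character(0)

# Stream the dump and keep any lines mentioning the topic key
repeat {
  lines <- readLines(con, n = 500000)
  if (length(lines) == 0) break
  matches <- c(matches, grep("alcatraz_island", lines, value = TRUE, fixed = TRUE))
}
close(con)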
