I am trying to download the "World Gross Domestic Product, current prices" series using the IMFData API. If I use this query:
library(IMFData)
databaseID <- 'IFS'
startdate = '2001-01-01'
enddate = '2016-12-31'
checkquery = TRUE
queryfilter <- list(CL_FREA="", CL_AREA_IFS="W0", CL_INDICATOR_IFS="NGDP_USD")
W0.NGDP.query <- CompactDataMethod(databaseID, queryfilter, startdate, enddate, checkquery)
I get this error:
Error: lexical error: invalid char in json text.
<?xml version="1.0" encoding="u
(right here) ------^
In addition: Warning message:
JSON string contains (illegal) UTF8 byte-order-mark!
How can I fix this? Is there a better way using Quandl, etc.?
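One thing the error output already tells you: the response begins with <?xml version="1.0" ..., i.e. the server answered with XML (usually an error document) where the client expected JSON, in addition to the byte-order-mark warning from jsonlite. A way to investigate is to fetch the raw response yourself, strip any BOM, and parse it manually. The following is a minimal sketch, not a definitive fix; the CompactData URL is my assumption about the IMF SDMX JSON endpoint that IMFData wraps:
library(httr)
library(jsonlite)

# Assumed endpoint: frequency A (annual), area W0, indicator NGDP_USD
url <- paste0("http://dataservices.imf.org/REST/SDMX_JSON.svc/CompactData/",
              "IFS/A.W0.NGDP_USD?startPeriod=2001&endPeriod=2016")
resp <- GET(url)
txt <- content(resp, as = "text", encoding = "UTF-8")

# Drop a leading UTF-8 byte-order mark, if present, before parsing
txt <- sub("^\ufeff", "", txt)
parsed <- fromJSON(txt)
If txt turns out to be XML rather than JSON, its body will contain the server's actual error message, which usually points to the offending part of the query.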
Related
I'm trying to get some tweets with academictwitteR, but the code throws the following error:
tweets_espn <- get_all_tweets(query = "fluminense",
                              user = "ESPNBrasil",
                              start_tweets = "2020-01-01T00: 00: 00Z ",
                              end_tweets = "2020-31-12T00 : 00: 00Z ",
                              n = 10000)
query: fluminense (from:ESPNBrasil)
Error in make_query(url = endpoint_url, params = params, bearer_token = bearer_token, :
  something went wrong. Status code: 403
In addition: Warning messages:
1: Recommended to specify a data path in order to mitigate data loss when ingesting large amounts of data.
2: Tweets will not be stored as JSONs or as a .rds file and will only be available in local memory if assigned to an object.
It seems to me that you can only access the Twitter API via academictwitteR if you have been granted "Academic Research" access in the Twitter developer portal, so I don't think it works with Essential or Elevated access.
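Independent of the access level, note that the timestamps in the call are malformed: they contain embedded spaces, and the end date has day and month swapped. A corrected sketch follows; fixing the format alone will not cure a 403, which points to insufficient access. The data_path argument is optional and only silences the data-loss warning:
library(academictwitteR)

# ISO 8601 timestamps with no embedded spaces; dates are yyyy-mm-dd
tweets_espn <- get_all_tweets(query = "fluminense",
                              user = "ESPNBrasil",
                              start_tweets = "2020-01-01T00:00:00Z",
                              end_tweets = "2020-12-31T00:00:00Z",
                              n = 10000,
                              data_path = "data/")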
Here is my function:
library(RODBC)

getSQL <- function(server = "server name", database = "database name",
                   Uid = " user name", Pwd = "password", Query) {
  conlink <- paste('driver={SQL Server};server=', server, ';database=', database,
                   ';Uid=', Uid, ';Pwd=', Pwd,
                   ';Encrypt=True;TrustServerCertificate=False', sep = "")
  conn <- odbcDriverConnect(conlink)
  dat <- sqlQuery(channel = conn, Query, stringsAsFactors = F)
  odbcCloseAll()
  return(dat)
}
When I call the function using
query.cut = "SELECT [measurename]
,[OrgType]
,[year_session]
,[Star]
,[cutvalue]
,[Date]
,[File]
FROM [database name].[dbo].[DST_Merged_Cutpoint]
ORDER BY [year_session] DESC
"
getSQL(Query=query.cut)
I get this error:
Error in sqlQuery(conn, Query, stringsAsFactors = F) :
  first argument is not an open RODBC channel
In addition: Warning messages:
1: In odbcDriverConnect(conlink) :
  [RODBC] ERROR: state 28000, code 18456, message [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user ' insightm8'.
2: In odbcDriverConnect(conlink) :
  [RODBC] ERROR: state 01S00, code 0, message [Microsoft][ODBC SQL Server Driver]Invalid connection string attribute
3: In odbcDriverConnect(conlink) :
  ODBC connection failed
How can I fix these errors? Thanks in advance.
Take care not to include spaces in the Uid; the login failure shows the user name arriving with a leading space:
Server]Login failed for user ' insightm8'.
Reproducing this on an SQL Server connection creates the same error.
Try using paste0 instead of paste (paste0 takes no sep argument, so drop it):
conlink <- paste0('driver={SQL Server};server=', server, ';database=', database,
                  ';Uid=', Uid, ';Pwd=', Pwd,
                  ';Encrypt=True;TrustServerCertificate=False')
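For completeness, here is a minimal sketch combining both fixes; the trimws() calls are my addition and simply strip stray whitespace from the credentials, since a Uid of " insightm8" fails where "insightm8" succeeds:
library(RODBC)

getSQL <- function(server, database, Uid, Pwd, Query) {
  # Trim accidental leading/trailing whitespace before building the connection string
  conlink <- paste0('driver={SQL Server};server=', trimws(server),
                    ';database=', trimws(database),
                    ';Uid=', trimws(Uid), ';Pwd=', Pwd,
                    ';Encrypt=True;TrustServerCertificate=False')
  conn <- odbcDriverConnect(conlink)
  on.exit(odbcCloseAll())  # close the channel even if the query fails
  sqlQuery(channel = conn, Query, stringsAsFactors = FALSE)
}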
I have no problem querying the MediaWiki API of French Wikipedia for strings without accents:
string <- 'chien'
string <- stringi::stri_enc_toutf8(string, is_unknown_8bit = FALSE, validate = FALSE)
apiQuery <- paste0('https://fr.wikipedia.org/w/api.php?action=query&format=xml&titles=', string)
page <- xml2::read_xml(apiQuery)
{xml_document}
[1] \n \n \n \n \n <page _idx="2736914" pageid="2736914 ...
but I have a problem with strings containing accents:
string <- 'être'
string <- stringi::stri_enc_toutf8(string, is_unknown_8bit = FALSE, validate = FALSE)
apiQuery <- paste0('https://fr.wikipedia.org/w/api.php?action=query&format=xml&titles=', string)
page <- xml2::read_xml(apiQuery)
I receive the following error:
Error in open.connection(x, "rb") : HTTP error 400.
You need to percent-encode the URL:
page <- xml2::read_xml(URLencode(apiQuery))
This changes the "ê" to "%C3%AA".
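A quick check of what the encoding produces:
string <- 'être'
apiQuery <- paste0('https://fr.wikipedia.org/w/api.php?action=query&format=xml&titles=', string)
URLencode(apiQuery)
# [1] "https://fr.wikipedia.org/w/api.php?action=query&format=xml&titles=%C3%AAtre"
page <- xml2::read_xml(URLencode(apiQuery))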
I have this piece of code to download the hex content of a file, given the import parameter FILE_ID. I want to insert a new attachment for a notification, but I don't know how to get started.
METHOD GET_SINGLE_ATTACHMENT_CONTENT.
  " VARIABLES
  DATA: HEXCONT   TYPE TABLE OF SOLIX.
  DATA: DOCDATA   TYPE SOFOLENTI1.
  DATA: LV_LENGTH TYPE I.

  " CHECK TO CONTINUE FUNCTION MODULE
  IF FILE_ID IS INITIAL. "type = SOFOLENTI1-DOC_ID.
    MESSAGE 'Document ID is empty.' TYPE 'E' RAISING DOC_ID_EMPTY.
  ENDIF.

  " GET BINARY CONTENT OF FILE
  CALL FUNCTION 'SO_DOCUMENT_READ_API1'
    EXPORTING
      DOCUMENT_ID   = FILE_ID
    IMPORTING
      DOCUMENT_DATA = DOCDATA
    TABLES
      CONTENTS_HEX  = HEXCONT.
  IF SY-SUBRC <> 0.
    MESSAGE 'Error downloading file.' TYPE 'E' RAISING FILE_DOWNLOAD_ERROR.
  ENDIF.

  " CONVERT TO XSTRING
  LV_LENGTH = DOCDATA-DOC_SIZE.
  CALL FUNCTION 'SCMS_BINARY_TO_XSTRING'
    EXPORTING
      INPUT_LENGTH = LV_LENGTH
    IMPORTING
      BUFFER       = EV_RETURN "type XSTRING
    TABLES
      BINARY_TAB   = HEXCONT.
  IF SY-SUBRC <> 0.
    MESSAGE 'Error downloading file.' TYPE 'E' RAISING FILE_DOWNLOAD_ERROR.
  ENDIF.
ENDMETHOD.
I've read about function modules like 'SO_DOCUMENT_INSERT_API1'; it takes the file information, but I don't see where the file content (preferably hex content) goes. Any idea on how to get started with this?
Maybe you can do something with the approach I used; check the example below.
1. Start by converting your data to a hex table.
2. Insert the document with SO_DOCUMENT_INSERT_API1.
3. Link it to your notification with the function module BINARY_RELATION_CREATE.
" TYPES
TYPES: BEGIN OF TYPE_FILE,
NAME TYPE STRING,
TYPE TYPE STRING,
QMNUM TYPE QMNUM,
USER TYPE CHAR64,
HEXCONT TYPE XSTRING,
END OF TYPE_FILE.
" VARIABLES AND OBJECTS
DATA: LT_RETURN TYPE TABLE OF BAPIRET2.
DATA: LT_HEXCONT TYPE TABLE OF SOLIX.
DATA: LS_DOCDATA TYPE SODOCCHGI1.
DATA: LS_DOCINFO TYPE SOFOLENTI1.
DATA: LS_FILE TYPE TYPE_FILE.
DATA: LV_DOCTYPE TYPE SOODK-OBJTP.
DATA: LV_FOLDER_ID TYPE SOOBJINFI1-OBJECT_ID.
DATA: LV_LENGTH TYPE I.
DATA: OBJ_NOTIF TYPE BORIDENT.
DATA: OBJ_ATTACH TYPE BORIDENT.
DATA: P_QMNUM TYPE QMNUM.
LV_FOLDER_ID = 'FOL40000000000004'.
" CONVERT HEX CONTENT TO BINARY
CALL FUNCTION 'SCMS_XSTRING_TO_BINARY'
EXPORTING
BUFFER = LS_FILE-HEXCONT
IMPORTING
OUTPUT_LENGTH = LV_LENGTH
TABLES
BINARY_TAB = LT_HEXCONT.
" SET VALUES
LS_DOCDATA-OBJ_DESCR = LS_FILE-NAME.
LS_DOCDATA-OBJ_LANGU = 'E'.
LS_DOCDATA-OBJ_NAME = 'MESSAGE'.
LS_DOCDATA-DOC_SIZE = XSTRLEN( LS_FILE-HEXCONT ).
LV_DOCTYPE = LS_FILE-TYPE.
UNPACK LS_FILE-QMNUM TO P_QMNUM.
" CREATE ATTACHMENT
CALL FUNCTION 'SO_DOCUMENT_INSERT_API1'
EXPORTING
FOLDER_ID = LV_FOLDER_ID
DOCUMENT_DATA = LS_DOCDATA
DOCUMENT_TYPE = LV_DOCTYPE
IMPORTING
DOCUMENT_INFO = LS_DOCINFO
TABLES
CONTENTS_HEX = LT_HEXCONT.
" CREATE LINK TO OBJECT (Notification)
OBJ_NOTIF-OBJKEY = P_QMNUM.
OBJ_NOTIF-OBJTYPE = 'BUS2038'.
OBJ_ATTACH-OBJKEY = LS_DOCINFO-DOC_ID.
OBJ_ATTACH-OBJTYPE = 'MESSAGE'.
CALL FUNCTION 'BINARY_RELATION_CREATE'
EXPORTING
OBJ_ROLEA = OBJ_NOTIF
OBJ_ROLEB = OBJ_ATTACH
RELATIONTYPE = 'ATTA'.
" COMMIT
COMMIT WORK.
Is it necessary to open the content as hex?
You probably just want to read/write a file. Try OPEN DATASET:
https://help.sap.com/doc/abapdocu_752_index_htm/7.52/en-US/abapset_dataset.htm
Question
Using the mongolite package in R, how do you query a database for a given date?
Example Data
Consider a test collection with two entries:
library(mongolite)
## create dummy data
df <- data.frame(id = c(1, 2),
                 dte = as.POSIXct(c("2015-01-01", "2015-01-02")))
> df
id dte
1 1 2015-01-01
2 2 2015-01-02
## insert into database
mong <- mongo(collection = "test", db = "test", url = "mongodb://localhost")
mong$insert(df)
Mongo shell query
To find the entries after a given date I would use
db.test.find({"dte" : {"$gt" : new ISODate("2015-01-01")}})
How can I reproduce this query in R using mongolite?
R attempts
So far I have tried
qry <- paste0('{"dte" : {"$gt" : new ISODate("2015-01-01")}}')
mong$find(qry)
Error: Invalid JSON object: {"dte" : {"$gt" : new ISODate("2015-01-01")}}
qry <- paste0('{"dte" : {"$gt" : "2015-01-01"}}')
mong$find(qry)
Imported 0 records. Simplifying into dataframe...
data frame with 0 columns and 0 rows
qry <- paste0('{"dte" : {"gt" : ', as.POSIXct("2015-01-01"), '}}')
mong$find(qry)
Error: Invalid JSON object: {"dte" : {"gt" : 2015-01-01}}
qry <- paste0('{"dte" : {"gt" : new ISODate("', as.POSIXct("2015-01-01"), '")}}')
mong$find(qry)
Error: Invalid JSON object: {"dte" : {"gt" : new ISODate("2015-01-01")}}
@user2754799 has the correct method, but I've made a couple of small changes so that it answers my question. If they want to edit their answer with this solution, I'll accept it.
## note: run options(scipen = 1000) first so that d is not rendered in
## scientific notation when pasted into the query string (see the note at the end)
d <- as.integer(as.POSIXct(strptime("2015-01-01","%Y-%m-%d"))) * 1000
## or more concisely
## d <- as.integer(as.POSIXct("2015-01-01")) * 1000
data <- mong$find(paste0('{"dte":{"$gt": { "$date" : { "$numberLong" : "', d, '" } } } }'))
As this question keeps showing up at the top of my Google results when I forget (again) how to query dates in mongolite and am too lazy to go find the documentation:
the above Mongodb shell query,
db.test.find({"dte" : {"$gt" : new ISODate("2015-01-01")}})
now translates to
mong$find('{"dte":{"$gt":{"$date":"2015-01-01T00:00:00Z"}}}')
Optionally, you can add milliseconds:
mong$find('{"dte":{"$gt":{"$date":"2015-01-01T00:00:00.000Z"}}}')
If you use the wrong datetime format, you get a helpful error message pointing you to the correct one: ISO 8601, i.e. yyyy-mm-ddThh:mm plus a timezone, either "Z" or an offset like "+0500". Of course, this is also documented in the mongolite manual.
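Putting it together with the example data from the question (local mongod assumed), a short usage sketch:
library(mongolite)

mong <- mongo(collection = "test", db = "test", url = "mongodb://localhost")

# Entries strictly after 2015-01-01T00:00:00Z; with the data inserted above,
# only the id = 2 row should come back (modulo the local timezone that
# as.POSIXct applied when the data frame was created)
mong$find('{"dte":{"$gt":{"$date":"2015-01-01T00:00:00Z"}}}')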
Try mattjmorris's answer from GitHub:
library(GetoptLong)
datemillis <- as.integer(as.POSIXct("2015-01-01")) * 1000
data <- data_collection$find(qq('{"createdAt":{"$gt": { "$date" : { "$numberLong" : "#{datemillis}" } } } }'))
reference: https://github.com/jeroenooms/mongolite/issues/5#issuecomment-160996514
Before converting your date by multiplying it by 1000, run options(scipen=1000); without this workaround, the millisecond value for certain dates will be rendered in scientific notation and break the query.
This is explained here: