Oracle bulk copy importing wrong data - oracle11g

I am trying to import data from an Excel sheet into an Oracle table. I am able to extract the correct data, but when I run the following code:
# Create the bulk copy object with the Oracle client connection string
$bulkCopy = New-Object ("Oracle.DataAccess.Client.OracleBulkCopy") $oraClientConnString
$bulkCopy.DestinationTableName = $entity
$bulkCopy.BatchSize = 5000
$bulkCopy.BulkCopyTimeout = 10000
# Write the contents of the data table, then release the connection
$bulkCopy.WriteToServer($dt)
$bulkCopy.Close()
$bulkCopy.Dispose()
The data inserted into the table consists of garbage values, mostly 0s and 10s.
The values read from Excel are stored in a DataTable ($dt).
Any help will be highly appreciated.

Please check the data type of the values in your data table. I have experienced this issue in .NET with the data type Double. When I changed the data type to Decimal, everything was fine.

Related

sqlQuery to append new data to R object based on R object

I have created an R data frame that currently has 691221 rows of data, and I want to continue to add to this without repeating rows or having to recreate the data frame every time. So I just want to append the new data. The original data is in a SQL database that I have to access, and this is my first time ever using the RODBC library.
#this was my initial query to get the first batch of data and create the 691000 df
locs <- sqlQuery(con, 'SELECT * FROM v_AllLocs', rows_at_time = 1)
Now, tomorrow for example, I want to append only the new data that comes in. Is there some command in the RODBC library that can recognize this from an R object and previous command lines? Or, since I have a date/time stamp as one of the columns, I thought I could reference that somehow. I was thinking something like:
lastloc<-max(locs$acq_time_ak)
new<-sqlQuery(con, 'SELECT * FROM v_AllLocs where acq_time_ak'> lastloc , rows_at_time = 1)
locs<-rbind(locs, new)
However, I don't think sqlQuery can recognize the R object in its query string? Or lastloc is a POSIXct and maybe the database can't recognize this? It doesn't work regardless. Also, technically this is really simplistic, because in reality I have subsets of information where individual X may have a time stamp different from individual Y. But maybe just to get started: how can I get the latest data to add to the R object?
Or, regardless of the data within SQL, can I ask for the latest data the SQL db has since a given date? So no matter any attribute within the database, I just know that as of November 16 2021 any new data coming in would be selected. Then for subsequent queries I'd have to change the date or something?
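A minimal sketch of the string-building approach the question is circling around, assuming acq_time_ak is a datetime column and the driver accepts a quoted 'YYYY-MM-DD HH:MM:SS' literal; sqlQuery() only ever sees a character string, so the R value has to be formatted into it:
# Sketch only: build the WHERE clause as a string so the database sees a
# literal timestamp rather than the R object. Adjust the format to your DBMS.
library(RODBC)
lastloc <- max(locs$acq_time_ak)                     # POSIXct from the existing data frame
lastloc_chr <- format(lastloc, "%Y-%m-%d %H:%M:%S")  # convert to plain text
qry <- paste0("SELECT * FROM v_AllLocs WHERE acq_time_ak > '", lastloc_chr, "'")
new <- sqlQuery(con, qry, rows_at_time = 1)
locs <- rbind(locs, new)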

dbWriteTable() writing multiple copies of my data to SQLite

I've been processing data with R in a way that results in a data frame of typically 4860 observations. I write that to a Results table in a SQLite database like this:
library(DBI)
db = dbConnect(RSQLite::SQLite(), dbname=DATAFILE)
dbWriteTable(db, "Results", my_dataframe, append = TRUE)
dbDisconnect(db)
Then I process some more data and later write it to the same table using this same code.
The problem is, every now and again what's written to my SQLite file is some multiple of the 4860 records I expect. Just now it was 19448 (exactly 4x the 4860 records that I can see in RStudio are in my data frame).
This seems like such a random problem. Since I know the data frame contents are correct, I feel the problem must be in my use of dbWriteTable(). Any guidance would be appreciated. Thank you.
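A hedged sketch of one way to narrow this down: wrap the append in a transaction and compare row counts before and after, so a doubled write (for example from the script being run or sourced twice) shows up immediately. The table and object names come from the question; the count check itself is an addition.
# Sketch only: append inside a transaction and report how many rows were added.
# If the count jumps by a multiple of nrow(my_dataframe), the append is being
# executed more than once rather than dbWriteTable() misbehaving.
library(DBI)
library(RSQLite)
db <- dbConnect(RSQLite::SQLite(), dbname = DATAFILE)
before <- dbGetQuery(db, "SELECT COUNT(*) AS n FROM Results")$n
dbWithTransaction(db, {
  dbWriteTable(db, "Results", my_dataframe, append = TRUE)
})
after <- dbGetQuery(db, "SELECT COUNT(*) AS n FROM Results")$n
message("Rows appended: ", after - before)  # expect nrow(my_dataframe)
dbDisconnect(db)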

Error Appending Data to Existing Empty Table on BigQuery Using R

I created an empty table from the BigQuery GUI with the schema for table_name. Later I'm trying to append data to the existing empty table from R using the bigrquery package.
I have tried the code below:
upload_job <- insert_upload_job(project = "project_id",
                                dataset = "dataset_id",
                                table = "table_name",
                                values = values_table,
                                write_disposition = "WRITE_APPEND")
wait_for(upload_job)
But it throws the following error:
Provided Schema does not match Table. Field alpha has changed mode from REQUIRED to NULLABLE [invalid]
My table doesn't have any NULL or NA values in the mentioned column, and the data types in the schema match the data types of values_table exactly.
I tried uploading directly from R without creating the schema first. When I do that, the mode is automatically converted to NULLABLE, which is not what I'm looking for.
I also tried changing write_disposition = "WRITE_TRUNCATE", which also converts the mode to NULLABLE.
I also looked at this and this, which didn't really help me.
Can someone explain what is happening behind the scenes, and what is the best way to upload data without recreating the schema?
Note: there was an obvious typo earlier; wirte_disposition has been edited to write_disposition.
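If it helps, here is a hedged sketch using the newer bq_* interface in bigrquery, which lets you pass an explicit field specification instead of letting the package derive a NULLABLE schema from the data frame. This assumes bq_table_upload() and bq_field() are available in your installed version; the field names and types other than alpha are placeholders.
# Sketch only: supply the schema explicitly, including the REQUIRED mode,
# so the upload is not described with an auto-generated NULLABLE schema.
library(bigrquery)
tbl <- bq_table("project_id", "dataset_id", "table_name")
schema <- bq_fields(list(
  bq_field("alpha", "STRING", mode = "REQUIRED"),  # field named in the error
  bq_field("beta",  "FLOAT",  mode = "NULLABLE")   # placeholder field
))
bq_table_upload(tbl,
                values = values_table,
                fields = schema,
                write_disposition = "WRITE_APPEND")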

Import data into R from MongoDB in JSON format

I want to import data in JSON format from MongoDB into R. I am using the mongolite package to connect MongoDB to R, but when I use mongo$find('{}') the data is stored as a data frame. Please check my R code:
library(mongolite)
mongo <- mongolite::mongo(collection = "Attributes", db = "Test",
                          url = "mongodb://IP:PORT", verbose = TRUE)
df1 <- mongo$find('{}')
df1 is stored as a data frame, but I want the data in JSON format only. Please give your suggestions.
Edit:
The actual JSON structure is converted into a list.
When I load data from MongoDB into R using the mongolite package, the data is stored as a data frame, and if I then convert it to a list the structure changes and a few extra columns are inserted into the list.
Please let me know how to solve this issue.
Thanks
SJB
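A possible sketch, assuming mongolite's iterate()/one() interface: pull each document as a nested list (no flattening) and re-serialise it with jsonlite, rather than letting find() build a data frame. The collection, db, and url values are copied from the question.
# Sketch only: iterate over raw documents to keep the original structure,
# then convert the list of documents back to JSON text.
library(mongolite)
library(jsonlite)
mongo <- mongolite::mongo(collection = "Attributes", db = "Test",
                          url = "mongodb://IP:PORT", verbose = TRUE)
it <- mongo$iterate('{}')            # iterator over documents, no flattening
docs <- list()
while (!is.null(doc <- it$one())) {  # each doc is a nested list
  docs[[length(docs) + 1]] <- doc
}
json <- jsonlite::toJSON(docs, auto_unbox = TRUE, pretty = TRUE)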

Using RODBC sqlQuery in R to import a long string, but R truncates the string; how to get around this?

I'm using the R RODBC library to import the results of a SQL stored procedure and save them in a data frame, then export that data frame using write.table to write it to an XML file (the result from SQL is XML output).
However, R is truncating the string (the XML result imported from SQL).
I've tried to find a function or an option to expand the size/length of the R data frame cell, but didn't find any.
I also tried to use sqlQuery directly in the write.table statement to avoid using a data frame, but that didn't work either; the data imported from SQL is always truncated.
Does anyone have a suggestion or an answer that could help?
Here is my code:
#library & starting the sql connection
library(RODBC)
my_conn<-odbcDriverConnect('Driver={SQL Server};server=sql2014;database=my_conn;trusted_connection=TRUE')
#Create a folder and a path to save my output
x <- "C:/Users/ATM1/Documents/R/CSV/test"
dir.create(x, showWarnings=FALSE)
setwd(x)
Mpath <- getwd()
#importing the data from sql store procedure output
xmlcode1 <- sqlQuery(my_conn, "exec dbo.p_webDefCreate 'SA25'", stringsAsFactors=F, as.is=TRUE)
#writing to a file
write.table(xmlcode1, file=paste0(Mpath,"/SA5b_def.xml"), quote = FALSE, col.names = FALSE, row.names = FALSE)
What I get is plain text that is not the full output.
The code below is how I check the current length of my string:
stri_length(xmlcode1)
[1] 65534
I had a similar issue with our project: the data coming from the database was getting truncated to 257 characters, and I could not really get around it. Eventually I converted the column definition on the database table from varchar(max) to varchar(8000) and got all the characters back. I did not mind changing the table definition.
In your case you could perhaps cast the column in your procedure output to varchar with a defined length, if possible.
M
I am using PostgreSQL but experienced the same truncation issue when importing into R with the RODBC package. I used Michael Kassa's solution with a slight change: setting the data type to text, which can store a string of unlimited length per postgresqltutorial. This worked for me.
The TEXT data type can store a string with unlimited length.
varchar() also worked for me
If you do not specify the n integer for the VARCHAR data type, it behaves like the TEXT datatype. The performance of the VARCHAR (without the size n) and TEXT are the same.
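A hedged sketch of the cast-before-import idea described in the answers above. The column and table names (xml_col, my_table) are placeholders, not names from the question, and a stored procedure would need the cast applied inside its own output instead.
# Sketch only: cast the long column on the SQL side before RODBC pulls it in.
library(RODBC)
my_conn <- odbcDriverConnect('Driver={SQL Server};server=sql2014;database=my_conn;trusted_connection=TRUE')
# SQL Server: cast to varchar(8000); on PostgreSQL, cast to text instead,
# e.g. "SELECT xml_col::text FROM my_table".
xmlcode1 <- sqlQuery(my_conn,
                     "SELECT CAST(xml_col AS varchar(8000)) AS xml_col FROM my_table",
                     stringsAsFactors = FALSE, as.is = TRUE)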