I'm trying to link data entered into a Shiny form to SQLite. The Shiny part works just fine; I've verified it by saving the data to an Excel file. When I try to write the data to SQLite, though, I keep getting the error "near "delete": syntax error". Any ideas what it could possibly be? I've also tried this with just three input variables and it writes to SQLite just fine, but with ~20 variables I keep getting this error.
Your variables should be quoted when sending statements to SQLite. If your variable of interest is generated by an input, you can quote it like this: paste0('"', input$variable, '"')
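For instance, a minimal sketch of building such a quoted INSERT, assuming a DBI/RSQLite connection con; the table name and inputs below are invented:

# Minimal sketch; `con`, the table name, and the input names are hypothetical.
library(DBI)
vals <- c(input$name, input$dept, input$comment)
quoted <- paste0('"', vals, '"', collapse = ", ")
sql <- paste0("INSERT INTO responses VALUES (", quoted, ");")
dbExecute(con, sql)

A parameterized statement, e.g. dbExecute(con, "INSERT INTO responses VALUES (?, ?, ?)", params = as.list(vals)), sidesteps the quoting problem entirely.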
First-time poster here. I'm using R to 'automate' some Teradata (TD) SQL scripts. The ones that return data are working great, but I have an UPDATE SQL statement that only produces a message in TD like 'xxx rows updated'. I'm using the RODBC package in R for my connection. When I use sqlQuery to send the UPDATE statement to TD from R, I get nothing back, whether it succeeds or not. I know that only statements returning data normally produce output here, but what I want is to get that message back from TD into R. Then I can continue to 'automate' things based on the message. Is there a way, either by putting something at the end of the SQL code or afterwards in R, to get this 'success' message back?
Package RODBC has odbcGetErrMsg, but it doesn't work on the success message. The only workaround I can think of is to do a count() before the update, then send the update statement, then count() again afterwards to derive the number of rows changed. This may work, but I'd rather get the message itself. I've searched SO and Googled this with no luck. Any ideas on how to get a success message from an UPDATE statement, sent from R to TD, returned back into R?
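A rough sketch of that count() workaround with RODBC; the DSN, table, and WHERE clause below are invented:

# Hypothetical DSN, table, and filter; count the rows the UPDATE will touch,
# run the update, then count what still matches to derive the rows changed.
library(RODBC)
ch <- odbcConnect("teradata_dsn")
before <- sqlQuery(ch, "SELECT COUNT(*) AS n FROM mytable WHERE status = 'old';")$n
sqlQuery(ch, "UPDATE mytable SET status = 'new' WHERE status = 'old';")
after <- sqlQuery(ch, "SELECT COUNT(*) AS n FROM mytable WHERE status = 'old';")$n
message(before - after, " rows updated")
odbcClose(ch)

If switching drivers is an option, the DBI/odbc combination may help here: DBI::dbExecute() is specified to return the number of rows affected by a statement.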
I am currently working on a warehouse management system running on a Raspberry Pi. Scanning a QR code should bring up the matching row of the database.
I read the text/CSV file containing the QR code into the table QR of my database via:
insert into QR values(readfile("C:\...\IDNumberfromQR.csv"));
This works: the ID number appears in the correct table of the database. However, the content of the text file is read in as type BLOB.
If I now run a table comparison via
SELECT * FROM "warehouse management table"
WHERE PulverID = (SELECT code FROM QR);
nothing appears.
However, if I type the ID number into the table QR.code by hand on the computer, instead of having it read in from my file, the row I am looking for appears. So it is obviously a data format problem.
What I already tried:
I have already set both columns to BLOB in the settings; that still did not work. The functions from the SQLiteStudio tutorial, such as import(file, format, table), don't work either.
Does anyone have any idea how I can solve this problem?
Is it possible to read a CSV file as double?
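One thing worth checking: in SQLite a BLOB never compares equal to a TEXT or numeric value, so the comparison may start working if the imported code is cast to text (and any trailing line break from the file stripped) before comparing. A sketch using the names from the question:

SELECT * FROM "warehouse management table"
WHERE PulverID = (
    SELECT TRIM(CAST(code AS TEXT), char(13) || char(10))
    FROM QR
);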
The use case: an Informatica Cloud mapping loads data from SQL Server into a Teradata database. If there are any failures at run time, the mapping writes all the failed rows to a table in the Teradata database. The key column in this error table is HOSTDATA, as far as I can tell. I am trying to decode the HOSTDATA column so that if a similar ETL failure happens in production, it would help identify the root cause much more quickly. By default HOSTDATA is a column of type VARBYTE.
To decode the HOSTDATA column, I converted it to ASCII and to base-16 (hex). Neither was any use.
Then I tried the approach below, from the Teradata forum.
I tried to extract the data from the error table using a BTEQ script: the data is exported into a .err file and loaded back into Teradata using a FastLoad script. FastLoad is unable to load the data because there is no specific delimiter, and the data in the .err file looks like gibberish. A snapshot of the data from the .err file is attached.
My end goal is to interpret the HOSTDATA column in a more human-readable way. Any suggestions in this direction are welcome.
The Error Table Extractor command twbertbl, which is part of the "Teradata Parallel Transporter Base" software, is designed to extract and format HOSTDATA from the error table's VARBYTE column.
Based on the screenshot in your question, I suspect you will need to specify FORMATTED as the record format option for twbertbl (default is DELIMITED).
I'm running into an error trying to use a SQLite prepared statement:
create table RawRecord (?, ?, ?);
Calling sqlite3_prepare16_v2 gives me this error: SQLITE_ERROR: SQLITE_ERROR[1]: near "?": syntax error
I don't run into problems with prepared statements anywhere else (and I have been using SQLite for many years). I have tried to find out whether prepared statements are simply not allowed for CREATE TABLE, but haven't found anyone saying that's the case.
If I build the CREATE string manually and embed my column names, it works. I prefer prepared statements simply because they make things like quoting cleaner, and in this case the column names come from user data, so I don't know in advance what they will be.
I can certainly work around this, but I was hoping to understand why it is an error.
Help?
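The reason is that SQLite parameters can only stand in for literal values, never for identifiers such as table or column names, so a ? in a column-name position is a syntax error in any statement, CREATE TABLE included. A sketch of the manual-quoting workaround in C; quote_ident is a hypothetical helper that applies SQLite's identifier-quoting rules:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Wrap an identifier in double quotes, doubling any embedded double
 * quote, per SQLite's identifier-quoting rules. */
static char *quote_ident(const char *name)
{
    size_t n = strlen(name), j = 0;
    char *out = malloc(2 * n + 3);  /* worst case: all chars doubled + 2 quotes + NUL */
    if (out == NULL) return NULL;
    out[j++] = '"';
    for (size_t i = 0; i < n; i++) {
        if (name[i] == '"')
            out[j++] = '"';         /* escape " by doubling it */
        out[j++] = name[i];
    }
    out[j++] = '"';
    out[j] = '\0';
    return out;
}

int main(void)
{
    /* Hypothetical user-supplied column names. */
    const char *cols[] = { "plain name", "tricky \"name\"" };
    char sql[256];
    strcpy(sql, "CREATE TABLE RawRecord (");
    for (int i = 0; i < 2; i++) {
        char *q = quote_ident(cols[i]);
        if (q == NULL) return 1;
        strcat(sql, q);
        strcat(sql, i == 0 ? ", " : ");");
        free(q);
    }
    /* sql is now safe to hand to sqlite3_prepare_v2 (or convert to
     * UTF-16 first for sqlite3_prepare16_v2). */
    printf("%s\n", sql);
    return 0;
}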
I'm using schema.ini to validate the data types/columns in my CSV file before loading it into SQL. If there is a datatype mismatch in a row, it still imports the row but leaves the mismatched cell blank. Is there a way I can stop the user from importing the CSV file if there are any issues, and/or produce an error report (i.e. which rows have problems)?
The best approach is to check the file for any mismatch, but in the case of a large file this is not feasible.
You might need to load it first, then check the loaded data in the table for mismatches. This is much faster than checking the file (you can use a simple T-SQL script to check for NULLs in the table).
If mismatches are found, the user can then be notified and the table can then be cleared.
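A minimal sketch of that check, assuming the CSV has been bulk-loaded into a staging table first; all names below are invented:

-- Rows where a typed column failed conversion arrive as NULL,
-- so count them before accepting the load.
DECLARE @bad INT;

SELECT @bad = COUNT(*)
FROM dbo.CsvStaging
WHERE Amount IS NULL OR OrderDate IS NULL;

IF @bad > 0
BEGIN
    RAISERROR('CSV import rejected: %d rows failed type validation.', 16, 1, @bad);
    TRUNCATE TABLE dbo.CsvStaging;  -- clear the table so the user can fix and retry
END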
Have a look at the FileHelpers library: http://www.filehelpers.com/
This is a very powerful library for doing all kinds of imports, including CSV, and it also has a pretty neat error-handling part. From its documentation:
Using the Different Error Modes

The FileHelpers library has support for 3 kinds of error handling.

In the standard mode you can catch the exceptions when something fails. This approach is not bad, but you lose some info about the current record and you can't use the records array because it is not assigned.

A more intelligent way is using the ErrorMode.SaveAndContinue of the ErrorManager. Using the engine like this, you have the good records in the records array, and in the ErrorManager you have the records with errors and can do whatever you want with them.

Another option is to ignore the errors and continue, as shown in this example:

engine.ErrorManager.ErrorMode = ErrorMode.IgnoreAndContinue;
records = engine.ReadFile(...)

In the records array you only have the good records.
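For completeness, a small sketch of the SaveAndContinue mode described above; the record class and file name are invented:

using System;
using FileHelpers;

// Hypothetical record layout for the CSV being imported.
[DelimitedRecord(",")]
public class Order
{
    public int Id;
    public string Product;
}

public class Demo
{
    public static void Main()
    {
        var engine = new FileHelperEngine<Order>();
        engine.ErrorManager.ErrorMode = ErrorMode.SaveAndContinue;

        // Good records come back; failed lines are collected in ErrorManager.
        Order[] records = engine.ReadFile("orders.csv");
        Console.WriteLine("{0} good records read", records.Length);

        foreach (ErrorInfo err in engine.ErrorManager.Errors)
            Console.WriteLine("Line {0}: {1}", err.LineNumber, err.ExceptionInfo.Message);
    }
}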