I am building a Shiny application that allows a user to perform CRUD operations on a table stored in an SQLite database. I use DT's input$tableId_rows_selected to get the indices of the rows selected by the user. When an action button (deleteRows) is pressed, I try to delete the rows from the database whose timestamp (the epoch time, stored as the primary key) matches the selection. The following code runs without any error but does not delete the selected rows.
observeEvent(input$deleteRows, {
  if (!is.null(input$responsesTable_rows_selected)) {
    s <- input$responsesTable_rows_selected
    conn <- poolCheckout(pool)
    lapply(length(s), function(i) {
      timestamp <- rvsTL$data[s[i], 8]
      query <- glue::glue_sql("DELETE FROM TonnageListChartering
                               WHERE TonnageListChartering.timestamp = {timestamp}",
                              .con = conn)
      dbExecute(conn, sqlInterpolate(ANSI(), query))
    })
    poolReturn(conn)
    # Show a modal when the button is pressed
    shinyalert("Success!",
               "The selected rows have been deleted. Refresh the table by pressing F5",
               type = "success")
  }
})
pool is a connection handler defined at the global level for connecting to the database:
pool <- pool::dbPool(drv = RSQLite::SQLite(),
                     dbname = "data/compfleet.db")
Why does this not work? And if it did, is there any way of refreshing the datatable output without having to reload the application?
As pointed out by @RomanLustrik, there was definitely something 'funky' going on with the timestamp. I am not well versed in SQLite, but running PRAGMA table_info(TonnageListChartering); revealed this:
0|vesselName||0||0
1|empStatus||0||0
2|openPort||0||0
3|openDate||0||0
4|source||0||0
5|comments||0||0
6|updatedBy||0||0
7|timestamp||0||1
8|VesselDetails||0||0
9|Name||0||0
10|VslType||0||0
11|Cubic||0||0
12|DWT||0||0
13|IceClass||0||0
14|IMO||0||0
15|Built||0||0
16|Owner||0||0
I guess none of the columns has a declared data type, and I am not sure whether that can be added now. Anyway, I changed the query to ensure that the timestamp is quoted:
query <- glue::glue_sql("DELETE FROM TonnageListChartering
                         WHERE TonnageListChartering.timestamp = '{timestamp}'",
                        .con = conn)
This deletes the user-selected rows.
However, when only one row is left, I am unable to delete it. I have no idea why; perhaps it is because of the primary key I defined when creating the table?
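As an aside, a parameterized query side-steps the quoting problem entirely, because the driver binds the value with the proper type. This is only a minimal sketch, assuming the same pool, table, and rvsTL reactive value as above (dbExecute() can be called on the pool object directly, which avoids the manual checkout/return):
# '?' is a placeholder; RSQLite binds the timestamp value itself,
# so no manual quoting is needed
for (i in input$responsesTable_rows_selected) {
  dbExecute(pool,
            "DELETE FROM TonnageListChartering WHERE timestamp = ?",
            params = list(rvsTL$data[i, 8]))
}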
I have the following dataframe:
library(rpostgis)
library(RPostgreSQL)
library(glue)
df <- data.frame(elevation = c(450, 900),
                 id = c(1, 2))
Now I try to upload this to a table in my PostgreSQL/PostGIS database. My connection (dbConnect) works properly for SELECT statements. However, I tried two ways of updating a database table with this dataframe and both failed.
First:
pgInsert(postgis, name = "fields", data.obj = df, overwrite = FALSE,
         partial.match = TRUE, row.names = FALSE, upsert.using = TRUE,
         df.geom = NULL)
2 out of 2 columns of the data frame match database table columns and will be formatted for database insert.
Error: x must be character or SQL
I do not know what the error is trying to tell me, as the values in both the dataframe and the table are set to integer.
Second:
sql <- glue_sql("UPDATE fields SET elevation = {df$elevation} WHERE id = {df$id};",
                .con = postgis)
> sql
<SQL> UPDATE fields SET elevation = 450 WHERE id = 1;
<SQL> UPDATE fields SET elevation = 900 WHERE id = 2;
dbSendStatement(postgis,sql)
<PostgreSQLResult>
In both cases, no data is transferred to the database, and I do not see any error logs in the database.
Any hint on how to solve this problem?
It was a mistake on my side; I got glue_sql wrong. To run every query created by glue_sql against the database, you have to loop over the resulting object, as in the following example:
for (i in seq_along(sql)) {
  dbSendStatement(postgis, sql[i])
}
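An alternative is to let the driver bind the parameters instead of generating one statement per row. A hedged sketch, assuming a DBI backend with parameter support such as RPostgres (rather than the older RPostgreSQL used above); with multi-row params, dbExecute() should run the statement once per parameter set:
library(DBI)
# one UPDATE template; each row of df supplies one ($1, $2) parameter set
dbExecute(postgis,
          "UPDATE fields SET elevation = $1 WHERE id = $2",
          params = list(df$elevation, df$id))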
I have the following function written in R that (I think) is doing a poor job of updating my MongoDB collections.
library(mongolite)
con <- mongolite::mongo(collection = "mongo_collection_1", db = 'mydb', url = 'myurl')
myRdataframe1 <- con$find(query = '{}', fields = '{}')
rm(con)
con <- mongolite::mongo(collection = "mongo_collection_2", db = 'mydb', url = 'myurl')
myRdataframe2 <- con$find(query = '{}', fields = '{}')
rm(con)
... code to update my dataframes (rbind additional rows onto each of them) ...
# write dataframes to database
write.dfs.to.mongodb.collections <- function() {
  collections   <- c("mongo_collection_1", "mongo_collection_2")
  my.dataframes <- c("myRdataframe1", "myRdataframe2")
  # loop over the dataframes, write the collections
  for (i in seq_along(collections)) {
    # connect and add data to this collection
    con <- mongo(collection = collections[i], db = 'mydb', url = 'myurl')
    con$remove('{}')
    con$insert(get(my.dataframes[i]))
    con$count()
    rm(con)
  }
}
write.dfs.to.mongodb.collections()
My dataframes myRdataframe1 and myRdataframe2 are very large dataframes, currently ~100K rows and ~50 columns. Each time my script runs, it:
uses con$find('{}') to pull the mongodb collection into R, saved as a dataframe myRdataframe1
scrapes new data from a data provider that gets appended as new rows to myRdataframe1
uses con$remove() and con$insert to fully remove the data in the mongodb collection, and then re-insert the entire myRdataframe1
This last bullet point is iffy, because I run this R script daily in a cron job, and I don't like that each run entirely wipes the MongoDB collection and re-inserts the R dataframe.
If I remove the con$remove() line, I receive an error that states I have duplicate _id keys. It appears I cannot simply append using con$insert().
Any thoughts on this are greatly appreciated!
When you attempt to insert documents into MongoDB that already exist in the database (as identified by their primary key), you get a duplicate key exception. To work around that, you can simply drop the _id column before the con$insert, using something like this:
df <- get(my.dataframes[i])   # my.dataframes holds names, so fetch the actual dataframe
df$`_id` <- NULL
This way, each newly inserted document automatically gets a new _id assigned.
You can use an upsert, which matches a document against the query: if one is found it is updated, otherwise a new one is inserted.
First, separate the _id from each document:
updateData <- get(my.dataframes[i])
ids <- updateData$`_id`
updateData$`_id` <- NULL
Then call update() with upsert = TRUE; jsonlite::toJSON is an easier way than concatenating strings by hand. For the first row it looks like this (loop over the rows to cover the whole dataframe):
con$update(paste0('{"_id":{"$oid":"', ids[1], '"}}'),
           paste0('{"$set":', jsonlite::toJSON(as.list(updateData[1, ]), auto_unbox = TRUE), '}'), upsert = TRUE)
I have created a table in an SQLite database from R using the following code:
con <- DBI::dbConnect(drv = RSQLite::SQLite(),
                      dbname = "data/compfleet.db")
s <- sprintf("create table %s(%s, primary key(%s))", "PositionList",
             paste(names(FinalTable), collapse = ", "),
             names(FinalTable)[2])
dbGetQuery(con, s)
dbDisconnect(con)
The second column of the table is UID, which is the primary key. I then run a script to update the data in the table. The updated data could contain UIDs that already exist in the table. I don't want these existing records to be updated; I just want the new records (with new UID values) to be appended to the table. The code I am using is:
DBI::dbWriteTable(con, "PositionList", FinalTable, append = TRUE,
                  row.names = FALSE, overwrite = FALSE)
Which returns an error:
Error in result_bind(res@ptr, params) :
UNIQUE constraint failed: PositionList.UID
How can I append only the records with new UID values, leaving the existing records unchanged even if their UIDs appear again when I run my update script?
You can query the existing UIDs (as a one-column data frame) and remove corresponding rows from the table you want to insert.
uid_df <- dbGetQuery(con, "SELECT UID FROM PositionList")
dbWriteTable(con, "PositionList", FinalTable[!(FinalTable$UID %in% uid_df[[1]]), ], ...)
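Another option is to let SQLite skip the conflicting rows itself. A rough sketch, assuming the same con, PositionList, and FinalTable; the staging table name is made up for the example, and the column order matches because both tables come from FinalTable:
# stage the candidate rows in a temporary table, then copy across only those
# whose UID does not already violate the primary key
dbWriteTable(con, "PositionList_staging", FinalTable,
             temporary = TRUE, overwrite = TRUE, row.names = FALSE)
dbExecute(con, "INSERT OR IGNORE INTO PositionList SELECT * FROM PositionList_staging")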
Before inserting, first query the database for the UID. If a record with that UID already exists, do nothing; otherwise insert the new record. The error is shown because a record with a duplicate primary key (UID) is not allowed to exist.
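A hedged sketch of that check-then-insert idea, assuming the same con, PositionList, and FinalTable from the question:
for (i in seq_len(nrow(FinalTable))) {
  # look up the UID first; insert the row only if it is not already present
  found <- dbGetQuery(con, "SELECT 1 FROM PositionList WHERE UID = ? LIMIT 1",
                      params = list(FinalTable$UID[i]))
  if (nrow(found) == 0) {
    dbAppendTable(con, "PositionList", FinalTable[i, , drop = FALSE])
  }
}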
I am trying to get a value from the user and build a query with that dynamic value in an Rmd file, using the following code:
```{r }
returnSQLDS <- function(sqlQuery, rdsFileName) {
  library(RODBC)
  cn <- odbcDriverConnect(connection = "myconnectionstring")
  resultSet <- sqlQuery(cn, sqlQuery)
  odbcClose(cn)
  return(resultSet)
}

h1("Choose your lookup")
inputPanel(
  textInput("whichlookup", "Which lookup items do you wish to view?", "States")
)

renderDataTable(runif(input$whichlookup), (reactive({
  sql <- "SELECT LookupId, Lookup.LookupTableId, ItemId, ItemName, ItemDescription FROM LookupTable INNER JOIN dbo.Lookup ON Lookup.LookupTableId = LookupTable.LookupTableId WHERE Name = '"
  sql <- paste(sql, input$whichlookup, sep = "")
  sql <- paste(sql, "'", sep = "")
  returnSQLDS(sql)  # calls the SQL database with correct connection details
})))
```
It keeps dying on the SQL statement, which leads me to believe I can't set a variable inside the reactive section. Can anyone tell me how I can modify this code so that the user can contribute to the dynamic SQL query?
Note: I will not be using a free-text search in a WHERE clause in the final product. I am just trying to get the mechanics correct so I can add my select list of proper values, but even those will be sent to a stored procedure.
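For what it's worth, here is a hedged sketch of how the reactive part might be restructured, assuming the same connection string, table, and input names as above; it swaps RODBC for DBI/odbc so the user's value can be bound as a parameter instead of pasted into the string:
library(DBI)

lookupData <- reactive({
  req(input$whichlookup)
  # connection string and table/column names are taken from the question
  cn <- dbConnect(odbc::odbc(), .connection_string = "myconnectionstring")
  on.exit(dbDisconnect(cn))
  dbGetQuery(cn,
             "SELECT LookupId, Lookup.LookupTableId, ItemId, ItemName, ItemDescription
                FROM LookupTable
                INNER JOIN dbo.Lookup ON Lookup.LookupTableId = LookupTable.LookupTableId
               WHERE Name = ?",
             params = list(input$whichlookup))
})

renderDataTable(lookupData())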
I have this code to insert a dataframe into a database (SQLite):
from sqlalchemy import Table, schema
from sqlalchemy.orm import sessionmaker

l1 = df.to_dict(orient='records')
meta = schema.MetaData(bind=db, reflect=True)
t = Table(t1, meta, autoload=True)
Session = sessionmaker(bind=db)
session = Session()
db.execute(t.insert(), l1)
session.commit()
session.close()
It fails because the unique id of the table 'is not unique' (df comes from the db and fields have been modified).
In df there is one unique id column, called id, which needs to be incremented.
How do I specify the unique id and make it increment automatically through SQLAlchemy?
Solution found (in case it helps anybody in the same situation):
Just add this code after creating the list; it removes the id from every record, so the database can assign the id itself on insert.
for record in l1:
    del record['id']