I have a DuckDB database that I would like to query on multiple columns. I'm working in R, but I'm not sure how to create a multi-column index (or even a single-column index). Can anyone suggest a reference? I've added SQLite as a tag because I gather the commands could be the same.
Edit:
Based on kukuk1de's recommendation, I'm trying the following:
require(DBI)
require(duckdb)
DBI::dbExecute(con,statement = "CREATE INDEX multi_idx ON (percent prevalence fresh_flow maskProp dropExhale)")
but I get the following error:
Error in .local(conn, statement, ...) :
duckdb_prepare_R: Failed to prepare query CREATE INDEX multi_idx ON (percent prevalence fresh_flow maskProp dropExhale)
Error: Parser Error: syntax error at or near "("
LINE 1: CREATE INDEX multi_idx ON (percent prevalence fresh_flow maskProp...
Try this:
library("DBI")
con = dbConnect(duckdb::duckdb(), dbdir=":memory:", read_only=FALSE)
dbExecute(con, "CREATE TABLE items(item VARCHAR, value DECIMAL(10,2), count INTEGER)")
dbExecute(con, "INSERT INTO items VALUES ('jeans', 20.0, 1), ('hammer', 42.2, 2)")
dbExecute(con, "CREATE INDEX itemcount_idx ON items (item, count);")
Running the last command again will tell you the index already exists.
dbExecute(con, "CREATE INDEX itemcount_idx ON items (item, count);")
Error in duckdb_execute(res) : duckdb_execute_R: Failed to run query
Error: Catalog Error: Index with name "itemcount_idx" already exists!
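Applied to the statement from the question: the parser error comes from the missing table name (and the missing commas between the column names). Assuming the table is called mytable (a placeholder, since the question doesn't name it), the corrected statement would be:
DBI::dbExecute(con, statement = "CREATE INDEX multi_idx ON mytable (percent, prevalence, fresh_flow, maskProp, dropExhale)")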
Related
I'm trying to insert data into a Cloud Spanner table using DBI's dbWriteTable in R; however, it is asking me to supply column names. My understanding is that as long as the data frame contains the same number of columns as the table, this should work. Here's my code and the error I'm facing (leaving out connection details):
Code:
write.to.spanner <- function(table, df){
  dbWriteTable(con, table, df, overwrite=FALSE, append=TRUE, row.names=FALSE)
}
req_df <- data.frame(req_id=123, req_name="test req")
write.to.spanner("dbi_test", req_df)
DDL for spanner table:
CREATE TABLE dbi_test (
req_id INT64,
req_name STRING(20),
) PRIMARY KEY(req_id);
Error:
Warning: Error in .local: execute JDBC update query failed in dbSendUpdate ([Simba][SpannerJDBCDriver](100605) There was an error while executing the DML query : INVALID_ARGUMENT: com.simba.cloudspanner.shaded.com.google.api.gax.rpc.InvalidArgumentException: com.simba.cloudspanner.shaded.io.grpc.StatusRuntimeException: INVALID_ARGUMENT: INSERT must specify a column list [at 1:1]
INSERT INTO dbi_test VALUES(#var1,#var2)
I'm able to insert using dbSendUpdate but would prefer to use dbWriteTable and not write out insert statements.
This works fine:
write.to.spanner <- function(table, df){
  insert_qry <- paste0("INSERT INTO ", table, " (req_id, req_name) VALUES (", df[1,1], ",'", df[1,2], "');")
  dbSendUpdate(con, insert_qry)
}
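A hedged generalization of the above, sketched for any data frame and any number of rows. It assumes dbSendUpdate() from RJDBC as in the answer, and quote_val() is a helper introduced here (adjust its quote escaping to the GoogleSQL dialect if needed):
write.to.spanner <- function(table, df) {
  # Naive value quoting: numbers pass through, strings get single quotes
  # with embedded quotes doubled (adjust for GoogleSQL escaping if needed).
  quote_val <- function(v) {
    if (is.numeric(v)) as.character(v)
    else paste0("'", gsub("'", "''", as.character(v)), "'")
  }
  cols <- paste(names(df), collapse = ", ")
  # One "(v1, v2, ...)" group per row of the data frame.
  rows <- vapply(seq_len(nrow(df)), function(i) {
    paste0("(", paste(vapply(df[i, ], quote_val, character(1)), collapse = ", "), ")")
  }, character(1))
  dbSendUpdate(con, paste0("INSERT INTO ", table, " (", cols, ") VALUES ",
                           paste(rows, collapse = ", "), ";"))
}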
Apologies for the simple question; I am new to using PostgreSQL and psycopg2. I have two columns in a table that I am trying to populate with values from two other tables, based on a WHERE condition. Example code below:
cur = con.cursor()
db_insert = """INSERT INTO "Database"."TABLE_ONE"
("LT_ID_ONE", "LT_ID_TWO")
VALUES(
(SELECT "LT_ID_ONE" FROM "Database"."TABLE_TWO" WHERE "LT_NUM_ONE" =%s),
(SELECT "LT_ID_TWO" FROM "Database"."TABLE_THREE" WHERE "LT_NUM_TWO" =%s)
);"""
insert_values = (df1.iloc[0, 0], df1.iloc[0, 1])
cur.execute(db_insert, insert_values)
When running this command I receive the following error:
psycopg2.errors.NotNullViolation: null value in column "LT_ID_ONE" violates not-null constraint
DETAIL: Failing row contains (null, null).
Any help would be appreciated.
It looks as though my error was in the order of the elements of the insert_values variable. Once I swapped them, it worked, so the SQL code I have is correct.
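For what it's worth, psycopg2 also supports named placeholders (%(name)s with a dict), which bind by key rather than by position, so a swapped tuple can't silently mismatch. A sketch reusing the table and column names from the question, with con and df1 assumed to exist as above:
cur = con.cursor()
# Named placeholders bind by key, not by position, so the argument
# order in the dict no longer matters.
db_insert = """INSERT INTO "Database"."TABLE_ONE"
    ("LT_ID_ONE", "LT_ID_TWO")
    VALUES(
        (SELECT "LT_ID_ONE" FROM "Database"."TABLE_TWO" WHERE "LT_NUM_ONE" = %(num_one)s),
        (SELECT "LT_ID_TWO" FROM "Database"."TABLE_THREE" WHERE "LT_NUM_TWO" = %(num_two)s)
    );"""
cur.execute(db_insert, {"num_one": df1.iloc[0, 0], "num_two": df1.iloc[0, 1]})
con.commit()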
I want to update a PostgreSQL table from a local newData data frame in a loop, wherever the id matches in both tables. However, the text values do not arrive in the database exactly as they appear in newData. Numbers update correctly, but there are two issues with text:
1) I have a column for house_nbr that can hold values like '120-12', but somehow it was calculated and stored as '108' when it should be the text '120-12'.
2) I have a column for street_name that can hold values like 'Main Street', but I received an error that I couldn't resolve.
(Error in { :
task 1 failed - "Failed to prepare query: ERROR: syntax error at or near "Street")
The database column type is char. Something seems to go wrong with special characters in the text, such as the hyphen and the space. Please advise how to keep the text intact when updating a PostgreSQL database. Below is the code I am using. Thanks!
Update <- function(i) {
  con <- dbConnect(RPostgres::Postgres(),
                   user = "xxx",
                   password = "xxx",
                   host = "xxx",
                   dbname = "xxx",
                   port = 5432)
  text <- paste("UPDATE dbTable SET house_nbr=", newData$house_nbr[i],
                ",street_name=", newData$street_name[i],
                "where id=", newData$id[i])
  dbExecute(con, text)
  dbDisconnect(con)
}
foreach(i = 1:length(newData$id), .inorder = FALSE, .packages = "RPostgreSQL") %dopar% {
  Update(i)
}
I connected from R to PostgreSQL, and I am able to write a table using a timestamp as the table name, but I am unable to read the values back.
I used the following code.
library(DBI)
con <- dbConnect(RPostgres::Postgres(),dbname = 'postgres',
host = 'hostname',
port = 5432,
user = 'username',
password = 'pwd')
tm<-paste0('job_status_',Sys.time())
dbWriteTable(con,tm,jbs)
dbGetQuery(con,paste0('select * from ',tm))
When I ran the select command, I got the following syntax error.
Error in result_create(conn#ptr, statement) :
Failed to prepare query: ERROR: syntax error at or near "-"
LINE 1: select * from job_status_2019-03-12 04:33:08
Can anyone help me resolve this issue?
As your table name contains the characters '-' and ':' (and a space), it needs to be wrapped in double quotes (") to be understood as a table name.
dbGetQuery(con,paste0('select * from "',tm, '"'))
By the way, it may be a good idea to avoid unusual characters in table names and limit yourself to letters, digits, and the underscore (_). To achieve that you can use gsub():
tm<-gsub('-|:| ', '_', paste0('job_status_',Sys.time()))
dbWriteTable(con,tm,jbs)
dbGetQuery(con,paste0('select * from ',tm))
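Alternatively, a small sketch letting DBI handle the quoting: dbQuoteIdentifier() wraps any identifier in the quoting appropriate for the backend, so the raw timestamp-based name also works:
dbGetQuery(con, paste0('select * from ', dbQuoteIdentifier(con, tm)))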
I am trying to add a new column to an SQLite table if the column does not already exist.
I tried the following code, but I don't know why it won't execute:
IF NOT EXISTS(SELECT * FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'mytable' AND COLUMN_NAME = 'mynewcolumn')
BEGIN
ALTER TABLE mytable ADD COLUMN mynewcolumn TEXT
END
I get the following exception:
error: near "IF": syntax error
This is the solution I selected:
1) Run PRAGMA table_info:
pragma table_info(MyTable)
This command returns information about every column of the table; each row corresponds to one column. The output has six columns: cid, name, type, notnull, dflt_value, pk.
2) Read all the rows returned by PRAGMA table_info(MyTable) and compare the "name" column against the name of the column whose existence you want to check.
3) If the column exists, do nothing.
4) If the column doesn't exist, add it to the table with:
ALTER TABLE MyTable ADD COLUMN NewColumn TEXT;
This works for me and does the job correctly.
To test whether a column exists, execute PRAGMA table_info and check if the column name appears in the result.
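As a minimal sketch of that check in R with RSQLite (matching the DBI usage elsewhere on this page; the database file, table, and column names are placeholders):
library(DBI)
con <- dbConnect(RSQLite::SQLite(), "mydb.sqlite")
# PRAGMA table_info returns one row per column; the "name" field holds
# the column names.
cols <- dbGetQuery(con, "PRAGMA table_info(mytable)")$name
if (!"mynewcolumn" %in% cols) {
  dbExecute(con, "ALTER TABLE mytable ADD COLUMN mynewcolumn TEXT")
}
dbDisconnect(con)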