R: how to solve an "expired PostgreSQLConnection" error?

I have a file with R code that builds several data frames and then tries to store them in a Postgres database. This usually fails; the code snippet that fails is below.
require ("RPostgreSQL")
drv <- dbDriver("PostgreSQL")
res <- dbConnect (drv, dbname = db,
host = "localhost", port = 5432,
user = "postgres", password = pw)
table_name <- "gemeenten"
print (c ("adding ", table_name))
if (dbExistsTable (con, table_name)) dbRemoveTable (con, table_name) ### Error!
result <- dbWriteTable (con, table_name, gemeenten)
The error I get is:
Error in postgresqlQuickSQL(conn, statement, ...) :
expired PostgreSQLConnection
and the error occurs at the dbExistsTable test. When I call dbListConnections(PostgreSQL()) the number of connections increases by one each time, and a call to dbDisconnect(con) does not decrement this number.
I got this error before when I tried to create the driver from my .Rprofile file, and I could resolve it by removing the drv variable and assigning it again. I have succeeded twice in creating this table, but I am not able to reconstruct why that worked. Does anyone know what I am doing wrong?

One of the things I noticed is that I started to get this error when I began sourcing my trials. While sourcing I made a lot of mistakes, and I noticed in the Postgres status screen that those connections remained open. I now carefully disconnect every connection after use. A tryCatch block is very useful here: use the finally branch to close the connection, unload the driver and remove their variables (a sketch of that pattern follows below). It is not enough to close the connections from the Postgres side; R still thinks they are open and will refuse any new connection attempt once 16 connections are open. dbListConnections(PostgreSQL()) returns a list; dbDisconnect all elements of that list.
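A minimal sketch of that cleanup pattern, reusing the names from the question (db, pw and the gemeenten data frame):
library(RPostgreSQL)

drv <- dbDriver("PostgreSQL")
con <- dbConnect(drv, dbname = db, host = "localhost", port = 5432,
                 user = "postgres", password = pw)
tryCatch({
    if (dbExistsTable(con, "gemeenten")) dbRemoveTable(con, "gemeenten")
    dbWriteTable(con, "gemeenten", gemeenten)
}, finally = {
    dbDisconnect(con)    # close this connection
    dbUnloadDriver(drv)  # unload the driver
    rm(con, drv)         # remove the stale variables
})

# If connections have already piled up, disconnect every element of the list:
lapply(dbListConnections(PostgreSQL()), dbDisconnect)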
This did not work at first. I tried to remove the "RPostgreSQL" package, but that did not work either; I had to delete it manually from the library directory. As I am a newbie in both R and Postgres, I suspect I did something wrong during the install. Anyway: remove and reinstall the package, then restart the Postgres server. After that it worked.
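The reinstall itself is just the standard pair of calls, run from a fresh R session:
remove.packages("RPostgreSQL")
install.packages("RPostgreSQL")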
Somewhat paranoid, I agree, but after having lost a night of sleep I didn't want to take any chances :-) If someone can pinpoint the cause of the problem more precisely, I'll happily accept their answer as the correct one.

Related

RStudio Connect - Intermittent C stack usage errors when downloading from Snowflake database

I am having an intermittent issue when downloading a fairly large dataset from a Snowflake database view. The dataset is about 60 million rows. Here is the code used to connect to Snowflake and download the data:
sf_db <- dbConnect(odbc::odbc(),
                   Driver = "SnowflakeDSIIDriver",
                   AUTHENTICATOR = "SNOWFLAKE_JWT",
                   Server = db_param_snowflake$server,
                   Database = db_param_snowflake$db,
                   PORT = db_param_snowflake$port,
                   Trusted_Connection = "True",
                   uid = db_param_snowflake$uid,
                   db = db_param_snowflake$db,
                   warehouse = db_param_snowflake$warehouse,
                   PRIV_KEY_FILE = db_param_snowflake$priv_key_file,
                   PRIV_KEY_FILE_PWD = db_param_snowflake$priv_key_file_pwd,
                   timeout = 20)
snowflake_query <- 'SELECT "address" FROM ABC_DB.DEV.VW_ADDR_SUBSET'
my_table <- tbl(sf_db, sql(snowflake_query)) %>%
collect() %>%
data.table()
About 50% of the time, this part of the script runs fine. When it fails, the RStudio Connect logs contain messages like this:
2021/05/05 18:17:51.522937848 Error: C stack usage 940309959492 is too close to the limit
2021/05/05 18:17:51.522975132 In addition: Warning messages:
2021/05/05 18:17:51.523077000 Lost warning messages
2021/05/05 18:17:51.523100338
2021/05/05 18:17:51.523227401 *** caught segfault ***
2021/05/05 18:17:51.523230793 address (nil), cause 'memory not mapped'
2021/05/05 18:17:51.523251671 Warning: stack imbalance in 'lazyLoadDBfetch', 113 then 114
To try to get this working consistently, I have tried a process that downloads the rows in batches (roughly as sketched below), and that also fails intermittently, usually after downloading many millions of records. I have also tried connecting with Pool and downloading; that too only works sometimes. I also tried dbGetQuery, with the same inconsistent results.
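The batched variant can be sketched with DBI's dbSendQuery()/dbFetch(); the chunk size below is arbitrary, and sf_db and snowflake_query are the objects from the snippet above:
res <- dbSendQuery(sf_db, snowflake_query)
chunks <- list()
while (!dbHasCompleted(res)) {
    chunks[[length(chunks) + 1]] <- dbFetch(res, n = 1e6)  # one million rows per batch
}
dbClearResult(res)
my_table <- data.table::rbindlist(chunks)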
I have Googled this extensively and found threads related to C stack errors and recursion, but those problems seemed to be consistent (unlike this one, which sometimes works), and I'm not sure what I could do if some recursive process is running as part of this download.
We are running this on a Connect server with 125GB of memory; at the time this script runs there are no other scripts running, and (at least according to the Admin screen that shows CPU and memory usage) this script doesn't use more than 8-10GB before it (sometimes) fails. As for when it succeeds and when it fails, I haven't noticed any pattern: I can run it now and have it fail, then immediately run it again and it works. When it succeeds, it takes about 7-8 minutes; when it fails, it generally fails after anywhere from 3-8 minutes. All packages are the newest versions, and this has always worked inconsistently, so I cannot think of anything to roll back.
Any ideas for troubleshooting, or alternate approach ideas, are welcome. Thank you.

Every time I source my R script it leaks a db connection

I cannot paste the entire script here, but I will explain the situation. If you have ever had leaked DB connections, you will know what I am talking about.
I have an R script file with many functions (around 50) that use DB connections via the DBI and RMySQL packages. I have consolidated all DB access into 4 or 5 functions, and I use on.exit(dbDisconnect(db)) in every single function where dbConnect is used (roughly the pattern sketched below).
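Each of those access functions follows roughly this shape (the function name, connection parameters and query below are placeholders):
library(DBI)
library(RMySQL)

fetch_rows <- function(sql) {
    db <- dbConnect(MySQL(), dbname = "mydb", host = "localhost")
    on.exit(dbDisconnect(db), add = TRUE)  # disconnect even if the query errors
    dbGetQuery(db, sql)
}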
I discovered that just on loading this script using source("dbscripts.R") causes one DB connection to leak. I see this when I run the command
dbListConnections(MySQL())
[[1]]
<MySQLConnection:0,607>
[[2]]
<MySQLConnection:0,608>
[[3]]
<MySQLConnection:0,609>
[[4]]
<MySQLConnection:0,610>
One more DB connection is added to the list every time. This quickly reaches 16 and my script stops working.
The problem is that I am unable to find out which line of code is causing the leak.
I have checked each dbConnect line in the code; all of them are inside functions, and no dbConnect happens outside them in the main code.
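One way to narrow this down (a debugging sketch, not a fix) is to trace dbConnect with base R's trace() so the call stack is printed every time a connection is opened during sourcing:
library(DBI)
library(RMySQL)
trace(dbConnect, tracer = quote(print(sys.calls())))  # show who is calling dbConnect
source("dbscripts.R")
untrace(dbConnect)
dbListConnections(MySQL())  # check how many connections are open now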
So, why is the connection leak occurring?

RPostgreSQL connections are expired as soon as they are initiated with doParallel clusterEvalQ

I'm trying to set up a parallel task where each worker will need to make database queries. I'm trying to set up each worker with a connection as seen in this question, but each time I try, it returns <Expired PostgreSQLConnection:(2781,0)> for however many workers I registered.
Here's my code:
cl <- makeCluster(detectCores())
registerDoParallel(cl)
clusterEvalQ(cl, {
    library(RPostgreSQL)
    drv <- dbDriver("PostgreSQL")
    con <- dbConnect(drv, user = "user", password = "password", dbname = "ISO", host = "localhost")
})
If I try to run my foreach despite the error, it fails with task 1 failed - "expired PostgreSQLConnection"
When I go into the postgres server status it shows all the active sessions that were created.
I don't have any problems interacting with postgres from my main R instance.
If I run
clusterEvalQ(cl, {
    library(RPostgreSQL)
    drv <- dbDriver("PostgreSQL")
    con <- dbConnect(drv, user = "user", password = "password", dbname = "ISO", host = "localhost")
    dbGetQuery(con, "select inet_client_port()")
})
then it will return all the client ports. It doesn't give me the expired notice but if I try to run my foreach command it will fail with the same error.
Edit:
I've tried this on Ubuntu and 2 Windows computers; they all give the same error.
Another edit:
Now 3 Windows computers.
I was able to reproduce your problem locally. I am not entirely sure, but I think the problem is related to the way clusterEvalQ works internally. For example, you say that dbGetQuery(con, "select inet_client_port()")
gave you the client port output. If the query were actually evaluated/executed on the cluster nodes, then you would be unable to see this output (in the same way that you are unable to directly read any other output or print statements that are executed on the external cluster nodes).
Hence, it is my understanding that the evaluation is somehow first performed in the local environment, and the relevant functions and variables are subsequently copied/exported to the individual cluster nodes. This would work for any other type of function or variable, but obviously not for DB connections. If the connections/port mappings are linked to the master R instance, then the connections will not work from the slave instances. You would also get exactly the same error if you tried to use the clusterExport function to export connections that were created on the master instance.
As an alternative, what you can do is create separate connections inside the individual foreach tasks. I have verified with a local database that the following works:
library(doParallel)
nrCores = detectCores()
cl <- makeCluster(nrCores)
registerDoParallel(cl)
clusterEvalQ(cl,library(RPostgreSQL))
clusterEvalQ(cl,library(DBI))
result <- foreach(i = 1:nrCores) %dopar% {
    drv <- dbDriver("PostgreSQL")
    con <- dbConnect(drv, user = "user", password = "password", dbname = "ISO", host = "localhost")
    queryResult <- dbGetQuery(con, "fetch something...")
    dbDisconnect(con)
    return(queryResult)
}
stopCluster(cl)
However, you now have to take into account that you will create and close a new connection in every foreach iteration. This may incur some performance overhead. You can circumvent it by splitting up your queries/data intelligently so that a lot of work gets done within each iteration; ideally, split the work into exactly as many chunks as you have cores available (see the sketch below).
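For example, one way to pre-split the work into one chunk per core might look like the following sketch (the ids vector, table name and query are hypothetical):
ids <- 1:100000
chunks <- split(ids, cut(seq_along(ids), nrCores, labels = FALSE))

result <- foreach(chunk = chunks, .packages = c("DBI", "RPostgreSQL")) %dopar% {
    drv <- dbDriver("PostgreSQL")
    con <- dbConnect(drv, user = "user", password = "password", dbname = "ISO", host = "localhost")
    queryResult <- dbGetQuery(con, sprintf("SELECT * FROM mytable WHERE id IN (%s)",
                                           paste(chunk, collapse = ",")))
    dbDisconnect(con)
    queryResult
}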

Failed R instances + RODBC

Quick question: I am running multiple R instances in parallel in batch mode using an RODBC connection, and at random one (or more) of my instances fails. If I go back and run the instances one by one, all of them succeed. There is no error in the log, and I am trying to deduce where exactly the issue comes from. My main hypotheses are that I am hitting a memory limit and the instance fails, or (more probably) that some kind of timeout is happening with the RODBC connection. Any suggestions?
Thanks,
Jim
It is not clear why no error shows; perhaps you could try options(error = recover).
I used to get the following error when using multiple database connections:
Error in mysqlExecStatement(conn, statement, ...) :
RS-DBI driver: (connection with pending rows, close resultSet before continuing)
I avoid this error by issuing the following line to close any open connections before issuing a new query:
lapply(dbListConnections(MySQL()), dbDisconnect)
I took this code from the R help list.
update: one of my collaborators has created a suite of functions to facilitate database interactions, including db.con, db.open, db.close, and db.query that could be used like:
## load functions
source("https://raw.github.com/PecanProject/pecan/master/db/R/utils.R")
## example
params <- list(dbname = "mydb", username = "myname", password = "!##?$")
con <- db.open(params)
mydata <- db.query("select * from mytable;")
db.close(con)

How do I unlock a SQLite database?

When I enter this query:
sqlite> DELETE FROM mails WHERE (id = 71);
SQLite returns this error:
SQL error: database is locked
How do I unlock the database so this query will work?
On Windows you can try this program http://www.nirsoft.net/utils/opened_files_view.html to find out which process is handling the db file. Close that program to unlock the database.
On Linux and macOS you can do something similar; for example, if your locked file is development.db:
$ fuser development.db
This command will show what process is locking the file:
> development.db: 5430
Just kill the process...
kill -9 5430
...And your database will be unlocked.
I caused my sqlite db to become locked by crashing an app during a write. Here is how I fixed it:
echo ".dump" | sqlite old.db | sqlite new.db
Taken from: http://random.kakaopor.hu/how-to-repair-an-sqlite-database
The SQLite wiki DatabaseIsLocked page offers an explanation of this error message. It states, in part, that the source of contention is internal (to the process emitting the error). What this page doesn't explain is how SQLite decides that something in your process holds a lock and what conditions could lead to a false positive.
This error code occurs when you try to do two incompatible things with a database at the same time from the same database connection.
Changes related to file locking were introduced in v3; they may be useful for future readers and can be found here: File Locking And Concurrency In SQLite Version 3
If you want to remove a "database is locked" error then follow these steps:
Copy your database file to some other location.
Replace the database with the copied database. This will dereference all processes which were accessing your database file.
Deleting the -journal file sounds like a terrible idea. It's there to allow sqlite to roll back the database to a consistent state after a crash. If you delete it while the database is in an inconsistent state, then you're left with a corrupted database. Citing a page from the sqlite site:
If a crash or power loss does occur and a hot journal is left on the disk, it is essential that the original database file and the hot journal remain on disk with their original names until the database file is opened by another SQLite process and rolled back. [...]
We suspect that a common failure mode for SQLite recovery happens like this: A power failure occurs. After power is restored, a well-meaning user or system administrator begins looking around on the disk for damage. They see their database file named "important.data". This file is perhaps familiar to them. But after the crash, there is also a hot journal named "important.data-journal". The user then deletes the hot journal, thinking that they are helping to cleanup the system. We know of no way to prevent this other than user education.
The rollback is supposed to happen automatically the next time the database is opened, but it will fail if the process can't lock the database. As others have said, one possible reason for this is that another process currently has it open. Another possibility is a stale NFS lock, if the database is on an NFS volume. In that case, a workaround is to replace the database file with a fresh copy that isn't locked on the NFS server (mv database.db original.db; cp original.db database.db). Note that the sqlite FAQ recommends caution regarding concurrent access to databases on NFS volumes, because of buggy implementations of NFS file locking.
I can't explain why deleting a -journal file would let you lock a database that you couldn't before. Is that reproducible?
By the way, the presence of a -journal file doesn't necessarily mean that there was a crash or that there are changes to be rolled back. Sqlite has a few different journal modes, and in PERSIST or TRUNCATE modes it leaves the -journal file in place always, and changes the contents to indicate whether or not there are partial transactions to roll back.
The SQLite db files are just files, so the first step would be to make sure the file isn't read-only. The other thing to do is to make sure you don't have some sort of GUI SQLite DB viewer with the DB open. You could have the DB open in another shell, or your code may have the DB open. Typically you would see this if a different thread, or an application such as SQLite Database Browser, has the DB open for writing.
My lock was caused by the system crashing and not by a hanging process. To resolve this, I simply renamed the file then copied it back to its original name and location.
Using a Linux shell that would be:
mv mydata.db temp.db
cp temp.db mydata.db
If a process has a lock on an SQLite DB and crashes, the DB stays locked permanently. That's the problem. It's not that some other process has a lock.
I had this problem just now, using an SQLite database on a remote server, stored on an NFS mount. SQLite was unable to obtain a lock after the remote shell session I used had crashed while the database was open.
The recipes for recovery suggested above did not work for me (including the idea to first move and then copy the database back). But after copying it to a non-NFS system, the database became usable and no data appears to have been lost.
Some functions, like INDEXing, can take a very long time, and they lock the whole database while they run. In instances like that, the journal file might not even be used!
So the best/only way to check whether your database is locked because a process is ACTIVELY writing to it (and thus you should leave it alone until it has completed its operation) is to md5 (or md5sum on some systems) the file twice.
If you get a different checksum, the database is being written to, and you really really REALLY don't want to kill -9 that process, because you can easily end up with a corrupt table/database if you do.
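From R, that double-checksum test can be sketched with base R's tools::md5sum() (the file name below is a placeholder):
library(tools)
before <- md5sum("mydata.db")
Sys.sleep(10)                 # wait a little while
after <- md5sum("mydata.db")
if (identical(before, after)) {
    message("no writes detected in the interval; the lock may be stale")
} else {
    message("the database is being actively written to; leave it alone")
}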
I'll reiterate, because it's important - the solution is NOT to find the locking program and kill it - it's to find if the database has a write lock for a good reason, and go from there. Sometimes the correct solution is just a coffee break.
The only way to create this locked-but-not-being-written-to situation is if your program runs BEGIN EXCLUSIVE, because it wanted to do some table alterations or something, then for whatever reason never sends an END afterwards, and the process never terminates. All three conditions being met is highly unlikely in any properly-written code, and as such 99 times out of 100 when someone wants to kill -9 their locking process, the locking process is actually locking your database for a good reason. Programmers don't typically add the BEGIN EXCLUSIVE condition unless they really need to, because it prevents concurrency and increases user complaints. SQLite itself only adds it when it really needs to (like when indexing).
Finally, the 'locked' status does not exist INSIDE the file, as several answers have stated; it resides in the operating system's kernel. The process which ran BEGIN EXCLUSIVE has asked the OS to place a lock on the file. Even if your exclusive process has crashed, the OS can figure out whether it should maintain the file lock or not. It is not possible to end up with a database which is locked but with no process actively locking it!
When it comes to seeing which process is locking the file, it's typically better to use lsof rather than fuser (this is a good demonstration of why: https://unix.stackexchange.com/questions/94316/fuser-vs-lsof-to-check-files-in-use). Alternatively if you have DTrace (OSX) you can use iosnoop on the file.
I added "Pooling=true" to connection string and it worked.
This error can be thrown if the file is in a remote folder, like a shared folder. I changed the database to a local directory and it worked perfectly.
I found the documentation of the various states of locking in SQLite to be very helpful. Michael, if you can perform reads but can't perform writes to the database, that means that a process has gotten a RESERVED lock on your database but hasn't executed the write yet. If you're using SQLite3, there's a new lock called PENDING where no more processes are allowed to connect but existing connections can still perform reads, so if this is the issue you should look at that instead.
I had this problem in an app which accesses SQLite from two connections: one read-only and a second for reading and writing. It looked like the read-only connection was blocking writes from the second connection. In the end, it turned out that prepared statements have to be finalized or, at least, reset IMMEDIATELY after use. Until a prepared statement is released, it keeps the database blocked for writing.
DON'T FORGET TO CALL:
sqlite3_reset(xxx);
or
sqlite3_finalize(xxx);
I just had something similar happen to me - my web application was able to read from the database, but could not perform any inserts or updates. A reboot of Apache solved the issue at least temporarily.
It'd be nice, however, to be able to track down the root cause.
The lsof command on my Linux environment helped me figure out that a process was hanging on to the file, keeping it open.
I killed the process and the problem was solved.
This link solves the problem: When Sqlite gives : Database locked error
It solved my problem and may be useful to you.
You can also use BEGIN TRANSACTION and END TRANSACTION to avoid locking the database in the future.
It must be an internal problem of the database...
For me it manifested after trying to browse the database with "SQLite Manager"...
So, if you can't find another process connected to the database and you just can't fix it,
try this radical solution:
Export your tables (you can use "SQLite Manager" on Firefox).
If the migration altered your database schema, delete the last failed migration.
Rename your "database.sqlite" file.
Execute "rake db:migrate" to make a new working database.
Give the database the right permissions for importing the tables.
Import your backed-up tables.
Write the new migration.
Execute it with "rake db:migrate".
In my experience, this error is caused by having multiple connections open, e.g.:
1 or more sqlitebrowser (GUI)
1 or more electron thread
rails thread
I am not sure about the details of how SQLite3 handles multiple threads/requests, but when I closed the sqlitebrowser and the electron thread, rails ran well and didn't block any more.
I ran into this same problem on Mac OS X 10.5.7 running Python scripts from a terminal session. Even though I had stopped the scripts and the terminal window was sitting at the command prompt, it would give this error the next time it ran. The solution was to close the terminal window and then open it up again. Doesn't make sense to me, but it worked.
I just had the same error.
After 5 minutes of googling I found that I hadn't closed one shell which was using the db.
Just close it and try again ;)
I had the same problem. Apparently the rollback function seems to overwrite the db file with the journal, which is the same as the db file but without the most recent change. I've implemented this in my code below and it's been working fine since, whereas before my code would just get stuck in a loop as the database stayed locked.
Hope this helps
My Python code (Python 2):
import time
import sqlite3 as sqlite

##############
#### Defs ####
##############
def conn_exec( connection , cursor , cmd_str ):
    done = False
    try_count = 0.0
    while not done:
        try:
            cursor.execute( cmd_str )
            done = True
        except sqlite.IntegrityError:
            # Ignore this error because it means the item already exists in the database
            done = True
        except Exception, error:
            if try_count % 60.0 == 0.0:  # print error every minute
                print "\t", "Error executing command", cmd_str
                print "Message:", error
            if try_count % 120.0 == 0.0:  # if waited for 2 minutes, roll back
                print "Forcing Unlock"
                connection.rollback()
            time.sleep(0.05)
            try_count += 0.05

def conn_comit( connection ):
    done = False
    try_count = 0.0
    while not done:
        try:
            connection.commit()
            done = True
        except sqlite.IntegrityError:
            # Ignore this error because it means the item already exists in the database
            done = True
        except Exception, error:
            if try_count % 60.0 == 0.0:  # print error every minute
                print "\t", "Error committing changes"
                print "Message:", error
            if try_count % 120.0 == 0.0:  # if waited for 2 minutes, roll back
                print "Forcing Unlock"
                connection.rollback()
            time.sleep(0.05)
            try_count += 0.05

##################
#### Run Code ####
##################
connection = sqlite.connect( db_path )
cursor = connection.cursor()
# Create tables if the database does not exist
conn_exec( connection , cursor , '''CREATE TABLE IF NOT EXISTS fix (path TEXT PRIMARY KEY);''')
conn_exec( connection , cursor , '''CREATE TABLE IF NOT EXISTS tx (path TEXT PRIMARY KEY);''')
conn_exec( connection , cursor , '''CREATE TABLE IF NOT EXISTS completed (fix DATE, tx DATE);''')
conn_comit( connection )
One common reason for getting this exception is when you are trying to do a write operation while still holding resources for a read operation. For example, if you SELECT from a table, and then try to UPDATE something you've selected without closing your ResultSet first.
I was having "database is locked" errors in a multi-threaded application as well, which appears to be the SQLITE_BUSY result code, and I solved it with setting sqlite3_busy_timeout to something suitably long like 30000.
(On a side-note, how odd that on a 7 year old question nobody found this out already! SQLite really is a peculiar and amazing project...)
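The same idea is available from R if you happen to be connecting through DBI/RSQLite (a sketch; the file name is a placeholder):
library(DBI)
library(RSQLite)

con <- dbConnect(RSQLite::SQLite(), "mydata.db")
dbGetQuery(con, "PRAGMA busy_timeout = 30000")  # wait up to 30 s for a lock instead of failing immediately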
Before going down the reboot option, it is worthwhile to see if you can find the user of the sqlite database.
On Linux, one can employ fuser to this end:
$ fuser database.db
$ fuser database.db-journal
In my case I got the following response:
philip 3556 4700 0 10:24 pts/3 00:00:01 /usr/bin/python manage.py shell
Which showed that I had another Python program with pid 3556 (manage.py) using the database.
An old question with a lot of answers. Here are the steps I recently followed after reading the answers above; in my case the problem was due to cifs resource sharing. This case has not been reported previously, so I hope it helps someone.
Check no connections are left open in your java code.
Check no other processes are using your SQLite db file with lsof.
Check the user owner of your running jvm process has r/w permissions over the file.
Try to force the lock mode on the connection opening with
final SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(false);
config.setLockingMode(LockingMode.NORMAL);
connection = DriverManager.getConnection(url, config.toProperties());
If you're using your SQLite db file over an NFS shared folder, check this point of the SQLite FAQ, and review your mounting configuration options to make sure you're avoiding locks, as described here:
//myserver /mymount cifs username=*****,password=*****,iocharset=utf8,sec=ntlm,file,nolock,file_mode=0700,dir_mode=0700,uid=0500,gid=0500 0 0
I got this error in a scenario a little different from the ones described here.
The SQLite database sat on an NFS filesystem shared by 3 servers. On 2 of the servers I was able to run queries on the database successfully; on the third one, though, I was getting the "database is locked" message.
The thing with this 3rd machine was that it had no space left on /var. Every time I tried to run a query on ANY SQLite database located on this filesystem I got the "database is locked" message, and also this error in the logs:
Aug 8 10:33:38 server01 kernel: lockd: cannot monitor 172.22.84.87
And this one also:
Aug 8 10:33:38 server01 rpc.statd[7430]: Failed to insert: writing /var/lib/nfs/statd/sm/other.server.name.com: No space left on device
Aug 8 10:33:38 server01 rpc.statd[7430]: STAT_FAIL to server01 for SM_MON of 172.22.84.87
After the space situation was handled, everything went back to normal.
If you're trying to unlock the Chrome database to view it with SQLite, then just shut down Chrome.
Windows
%userprofile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Web Data
or
%userprofile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Chrome Web Data
Mac
~/Library/Application Support/Google/Chrome/Default/Web Data
From your previous comments you said a -journal file was present.
This could mean that you have opened an (EXCLUSIVE?) transaction and have not yet committed the data. Did your program or some other process leave the -journal behind?
Restarting the sqlite process will look at the journal file, clean up any uncommitted actions and remove the -journal file.
As Seun Osewa has said, sometimes a zombie process will sit in the terminal with a lock acquired, even if you don't think it possible. Your script runs, crashes, and you go back to the prompt, but there's a zombie process spawned somewhere by a library call, and that process has the lock.
Closing the terminal you were in (on OSX) might work. Rebooting will work. You could look for "python" processes (for example) that are not doing anything, and kill them.
