I'm experiencing a disk I/O error during a query on an SQLite database. I have several large (95 GB) databases with the same schema, and am trying to run the same query on all of them. Two run fine, returning ~28,000,000 results; one hits a 'disk I/O error', both when run via SQLAlchemy and from the command line.
If I limit the query to return ~10,000,000 results, I don't get the error, but I need the full output, as these are photons from a large physics simulation.
The full error message in SQLAlchemy is:
Traceback (most recent call last):
File "/Users/swm1n12/anaconda/lib/python3.5/site-packages/sqlalchemy/engine/result.py", line 1120, in fetchall
l = self.process_rows(self._fetchall_impl())
File "/Users/swm1n12/anaconda/lib/python3.5/site-packages/sqlalchemy/engine/result.py", line 1071, in _fetchall_impl
return self.cursor.fetchall()
sqlite3.OperationalError: disk I/O error
And for sqlite3 on the command line:
Error: disk I/O error
Can anyone tell me how I'd be able to figure out what's going wrong?
(I'm aware that for databases of this size, SQLite is not a great choice; they turned out to be bigger than I expected, and ~academic publishing pressures~ mean that switching to a different database engine and rerunning the code is not practical.)
In the sqlite3 command-line shell, you can use .log stderr to see the OS errors.
But "disk I/O error" means that there is an error on your disk. Throw it away, and restore the database from the backup.
Related
I have a system writing data to an SQLite file. I had everything operational under CentOS 8. After upgrading the system to Rocky Linux 9 I see this error when running a commit command: DBD::SQLite::db commit failed: disk I/O error
I have checked file permissions, disk space, SMART readings, everything disk related that I can think of but without success.
Has anyone encountered this error before? What could I try to fix it?
The problem turned out to be a missing Perl module (LWP::https) that was preventing DBD::SQLite from getting the data it wanted. Apparently, DBD::SQLite reports "disk I/O error" in that case.
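For anyone hitting the same message, the disk-side checklist in the question (permissions, free space) is easy to script before digging into the driver itself. A rough Python sketch, with /var/lib/myapp/data.db standing in for the real SQLite file:
import os
import shutil
import sqlite3

DB_PATH = "/var/lib/myapp/data.db"   # placeholder path, adjust to the real file

# SQLite needs write access to the file *and* to its directory
# (the directory is where the -journal / -wal files are created on commit).
print("file writable:     ", os.access(DB_PATH, os.W_OK))
print("directory writable:", os.access(os.path.dirname(DB_PATH), os.W_OK))

# Free space on the filesystem holding the database.
usage = shutil.disk_usage(os.path.dirname(DB_PATH))
print("free space (MB):   ", usage.free // (1024 * 1024))

# Finally, let SQLite itself check the file structure.
con = sqlite3.connect(DB_PATH)
print("quick_check:       ", con.execute("PRAGMA quick_check;").fetchone()[0])
con.close()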
I am running owncloudcmd to sync files from a local* path to an ownCloud/Nextcloud server, all running Debian 8. However it fails with the error:
[5] csync_statedb_query sqlite3_compile error: disk I/O error - on query PRAGMA quick_check;
[6] csync_statedb_load ERR: sqlite3 integrity check failed - bail out: disk I/O error.
#### ERROR during csync_update : "CSync failed to load the journal file. The journal file is corrupted."
I am not very familiar with csync or SQLite, so I am a bit in the dark here; although I can find talk of this issue through googling, I can't find a fix. The data in this case can be dumped and I can start over, so I'm happy to flush any database or anything else. I've tried removing the created csync and journal files, assuming one of them was corrupted, but it doesn't seem to change anything.
I have also read about changing PRAGMA settings to skip the check (or ignore the error), but I can't see how to do that either.
Is anyone able to show me how to clear out the corruption?
*The local path is a mounted AWS S3 bucket, but I think this is irrelevant because it works fine on other systems.
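Since the data can be thrown away, one generic way to deal with a corrupted client-side SQLite file is to copy whatever is still readable into a fresh database and discard the rest. A rough Python sketch, assuming the csync journal is an ordinary SQLite file at the placeholder path below; newer versions of the sqlite3 shell also offer a .recover command that does a more thorough job of the same thing.
import sqlite3

OLD = "/home/user/.csync/journal.db"          # placeholder: the corrupted journal
NEW = "/home/user/.csync/journal.rebuilt.db"  # fresh file to rebuild into

src = sqlite3.connect(OLD)
# isolation_level=None lets the explicit BEGIN/COMMIT emitted by iterdump() run as-is.
dst = sqlite3.connect(NEW, isolation_level=None)

# iterdump() walks the readable parts of the old file and emits SQL statements;
# it raises once it hits an unreadable page, so everything before that point is kept.
try:
    for statement in src.iterdump():
        dst.execute(statement)
except sqlite3.DatabaseError as exc:
    print("dump stopped early:", exc)

dst.commit()   # keep whatever made it across
src.close()
dst.close()
If the rebuilt file looks sane, swap it in for the old journal and rerun owncloudcmd.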
I am using a client database that was restored successfully on my local system and is working fine, but the problem occurs when I print any report from within that database.
At that point I get the traceback below in the terminal.
Traceback (most recent call last):
File "/home/best/workspace/dynaweld/web/addons/web/http.py", line 285, in dispatch
r = method(self, **self.params)
File "/home/best/workspace/dynaweld/web/addons/web/controllers/main.py", line 1769, in index
cookies={'fileToken': int(token)})
File "/home/best/workspace/dynaweld/web/addons/web/http.py", line 332, in make_response
response.set_cookie(k, v)
File "/usr/local/lib/python2.7/dist-packages/Werkzeug-0.10.4-py2.7.egg/werkzeug/wrappers.py", line 1008, in set_cookie
self.charset))
File "/usr/local/lib/python2.7/dist-packages/Werkzeug-0.10.4-py2.7.egg/werkzeug/http.py", line 920, in dump_cookie
value = to_bytes(value, charset)
File "/usr/local/lib/python2.7/dist-packages/Werkzeug-0.10.4-py2.7.egg/werkzeug/_compat.py", line 106, in to_bytes
raise TypeError('Expected bytes')
TypeError: Expected bytes
I have tried the following to resolve the traceback above, but have not yet succeeded:
1. Removed unwanted data from my local client database, including all the data of the mail.message object.
2. Removed all the unnecessary databases from my system, keeping only 2-3 databases for my OpenERP server.
3. Cleaned my PC of unwanted files and other material not relevant to my database.
4. Checked my free disk space; there is enough space to restore the database file.
Can anyone help me fix this issue?
This is because cookie values must be byte or plain text strings, not arbitrary objects; Werkzeug's to_bytes() raises TypeError('Expected bytes') for anything else, and the traceback shows an int being passed (cookies={'fileToken': int(token)}). Convert the value before setting the cookie, something like:
set_cookie(k, bytes(v))
(on Python 2, bytes is just str), or at least send your variable as a string/bytes.
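To make this concrete, here is a small sketch at the point where the cookie is built, assuming Werkzeug is installed; the fileToken name comes from the traceback above, and the token value is made up:
# Python 2.7, matching the traceback above (also runs on Python 3).
from werkzeug.wrappers import Response

token = 12345                        # illustrative stand-in for int(token)

resp = Response("ok")
# Passing the int directly is what triggers TypeError('Expected bytes')
# inside werkzeug.http.dump_cookie, so stringify it first:
resp.set_cookie('fileToken', str(token))

print(resp.headers.get('Set-Cookie'))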
I have fixed this by installing an older version of werkzeug, 0.6.2
I have roughly 14 million records that I am attempting to export from a Teradata table to file using a fast export connection object.
There is no size limit for fast export files on our Linux system, and there is 1.2 TB of available space in the target directory.
The session fails, and gives the following errors:
READER_2_1_1 FEXP_87011 Process [16022] exited with status [12]
SDKS_38200 Partition-level [SOURCE_TABLE_NAME]: Plug-in #305400 failed in deinit()
I googled the error message, and found this post:
Here
I followed the recommendations in the post to delete the .out file in the temp directory, delete the partially-written files in the target directory, drop the error table, and delete the log file. This did not fix the issue, and the session still fails with the same error messages.
Try using the TPT Export plug-in instead. You can also try executing this FastExport via bteq scripts directly in your Unix environment.
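If neither utility is an option, a plain client-side export can at least confirm the data is readable end to end. A rough sketch using the teradatasql Python driver; the host, credentials, table name, output file, and chunk size are all placeholders, and for 14 million records this will be far slower than FastExport or TPT:
import csv
import teradatasql

# Placeholder connection details and table name.
with teradatasql.connect(host="tdhost", user="user", password="password") as con:
    with con.cursor() as cur:
        cur.execute("SELECT * FROM SOURCE_TABLE_NAME")
        with open("export.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow([col[0] for col in cur.description])  # header row
            while True:
                rows = cur.fetchmany(50000)   # stream in chunks instead of fetchall()
                if not rows:
                    break
                writer.writerows(rows)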
When trying to commit to my SVN repository, I got the following error:
Working copy 'Z:\prace-pj\projects\other\CopyRT' locked.
So I ran the clean-up command, after which the commit succeeded, but at the end of the response message there was the following error:
Error bumping revisions post-commit (details follow):
disk I/O error, executing statement 'RELEASE s11'
Now when I try to e.g. update the working copy, it says that it is still locked. When I clean up and try to update again, I get an error like this:
disk I/O error, executing statement 'RELEASE s2'
sqlite: disk I/O error
What should I do to fix this?
For others' reference: I just had this same error and found that one of my log files was taking up all my disk space (so nothing could be written to the drive because there was no free space).
Run the following to make sure you have enough disk space:
df -h
Then I just needed to run:
svn cleanup
This resolved the error for me.
Have you tried:
svn unlock --force path/to/workingcopy
? It seems it can also be pointed at a URL if the problem is in the repository itself. I've only used the unlock operation via the TortoiseSVN GUI before, but I assume it just wraps the svn command anyway.
Hope that helps.