SVN - SQLite - disk I/O error

When trying to commit to my SVN repository, I got the following error:
Working copy 'Z:\prace-pj\projects\other\CopyRT' locked.
So I ran the cleanup command and then the commit succeeded, but at the end of the output there was the following error:
Error bumping revisions post-commit (details follow):
disk I/O error, executing statement 'RELEASE s11'
Now when I try to e.g. update the working copy, it says that it is still locked. When I clean up and try to update again, I get an error like this:
disk I/O error, executing statement 'RELEASE s2'
sqlite: disk I/O error
What should I do to fix this?

For others' reference: I just had this same error and found that one of my log files was taking up all my disk space, so SQLite could not write to the disk because there was no free space left.
Run the following to make sure you have enough disk space:
df -h
Then I just needed to run:
svn cleanup
This resolved the error for me.
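If, as in my case, the disk turns out to be full, a quick way to find what is eating the space before retrying the cleanup (a sketch; the log directory is just an example):
# Show free space per filesystem
df -h
# List the largest files under a suspect directory
du -ah /var/log | sort -rh | head -n 20
# After freeing space, retry the cleanup in the working copy
svn cleanup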

Have you tried:
svn unlock --force path/to/workingcopy
? It seems it can also be pointed at a URL if the problem is in the repository itself. I've only used an unlock operation via the TortoiseSVN GUI before, but I assume it just wraps the svn command anyway.
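For example (the path and URL below are placeholders):
# Break a lock recorded in the working copy
svn unlock --force path/to/workingcopy/locked-file.txt
# Or break the lock directly in the repository by pointing at its URL
svn unlock --force https://svn.example.com/repo/trunk/locked-file.txt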
Hope that helps.

Related

DBD::SQLite::db commit failed: disk I/O error

I have a system writing data to an SQLite file. I had everything operational under CentOS 8. After upgrading the system to Rocky Linux 9, I see this error when running a commit: DBD::SQLite::db commit failed: disk I/O error
I have checked file permissions, disk space, SMART readings, everything disk related that I can think of but without success.
Has anyone encountered this error before? What could I try to fix it?
The problem turned out to be a missing Perl module (LWP::https) that was causing DBD::SQLite not to get the data it wanted. Apparently, DBD::SQLite reports a disk I/O error in that case.
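A quick way to verify whether a Perl module is available (a sketch; shown here with LWP::Protocol::https, the usual package name for LWP's HTTPS support, so adjust it to whichever module turns out to be missing on your system):
# Exits with an error if the module cannot be loaded
perl -MLWP::Protocol::https -e 'print "module loads\n"'
# Install from the distribution repositories (package name may vary) or from CPAN
sudo dnf install perl-LWP-Protocol-https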

Pintos - UserProg: all tests fail at is_kernel_vaddr()

I am doing the Pintos project on the side to learn more about operating systems. I had tons of devops trouble at first with it not running well on an Ubuntu 18.04 droplet. I am now running it on the VirtualBox image that UCCS tells students to download for Pintos.
I finished project 1 and started to map out my solution to project 2. Following the instructions to create a file I ran
pintos-mkdisk filesys.dsk --filesys-size=2
pintos -- -f -q
but I am getting the error
Kernel PANIC at ../../threads/vaddr.h:87 in vtop(): assertion
`is_kernel_vaddr (vaddr)' failed.
I then tried running make check (all the tests). They are all failing for the same reason.
Am I missing something? Is there something I need to implement to fix this? I reread the instructions and didn't see anything.
Would appreciate help!
Thanks
I had a similar problem. My code for Project 1 ran fine, but I could not format the filesystem for Project 2.
The failure for me came from the following call chain:
thread_init() -> ... -> thread_schedule_tail() -> process_activate() -> pagedir_activate() -> vtop()
The problem is that init_page_dir is still NULL when pagedir_activate() is called. init_page_dir should have been initialized in paging_init() but this is called after thread_init().
The root cause was that my scheduler was being called too early, i.e. before the call to thread_start(). In my case, I had added a call to thread_yield() at the end of every lock_release(), which makes sense from a priority donation standpoint. Unfortunately, locks are used before the scheduler is ready! To fix this, I added a threading_started flag that makes thread_block() and thread_yield() bail out in their first line if thread_start() has not yet been called.
Good luck!
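If you want to confirm a call chain like this yourself, the stock Pintos tooling can attach GDB to the kernel (a sketch, assuming the standard pintos and pintos-gdb wrappers from the Pintos distribution; run from the build directory):
# Terminal 1: start the kernel and have the simulator wait for a debugger
pintos --gdb -- -f -q
# Terminal 2: attach GDB using the provided wrapper and macros
pintos-gdb kernel.o
# Inside gdb:
#   debugpintos        (connect to the waiting simulator)
#   break vtop
#   continue
#   backtrace          (shows how vtop() was reached)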

csync/sqlite error when running ownCloud command

I am running owncloudcmd to sync files from a local* path to an ownCloud/Nextcloud server, all running Debian 8. However, it fails with the error:
[5] csync_statedb_query sqlite3_compile error: disk I/O error - on query PRAGMA quick_check;
[6] csync_statedb_load ERR: sqlite3 integrity check failed - bail out: disk I/O error.
#### ERROR during csync_update : "CSync failed to load the journal file. The journal file is corrupted."
I am not very familiar with csync or sqlite, so I am a bit in the dark, and although I can find talk of this issue through googling, I can't find a fix. The data in this case can be dumped to start over, so I'm happy to flush any database or anything else. I've tried removing the created csync and journal files, assuming one of them was corrupted, but it doesn't seem to change anything.
I have read talk about changing PRAGMA settings to ignore the error (or check) but I can't see how this is implemented either.
Is anyone able to show me how to clear out the corruption?
*The local path is a mount of an AWS S3 bucket, but I think this is irrelevant because it works fine on other systems.
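For reference, the journal can be inspected and reset directly with the sqlite3 CLI (a sketch; the journal file name varies by client version, .csync_journal.db is just an example, and the path is a placeholder):
# Check the journal database itself
sqlite3 /path/to/sync/dir/.csync_journal.db "PRAGMA integrity_check;"
# If it reports corruption and the data can be re-synced anyway,
# remove the journal (and any -wal/-shm sidecar files) and let owncloudcmd rebuild it
rm /path/to/sync/dir/.csync_journal.db*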

Error importing SVN repository to git using svn2git

I get the following error when importing an SVN repository to git with svn2git:
fatal: EOF in data (285 bytes remaining)
Does anyone know what this error means?
This is caused by a segmentation fault: there is a branch/tag in your repository that is causing it to dump core.
To get the core files you will need to enable core dumps:
Uncomment (or add) this line in /etc/security/limits.conf:
* soft core unlimited
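Depending on the shell, you may also need to raise the limit in the session you run svn2git from (a quick sketch):
# Allow core files of unlimited size in the current shell
ulimit -c unlimited
# Confirm the new limit
ulimit -c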
Run svn2git; it may take up to 2 hours to hit the segmentation fault. Install gdb:
yum install gdb
Analyse the core:
gdb svn2git/svn-all-fast-export core.NNNN
Get a back trace, type:
bt
You should see the branch/tag which caused problems in the back trace. Exclude the branch from processing by updating your ruleset:
match /branches/broken_branch_name
end match
See issue opened with owner of svn2git here:
https://github.com/svn-all-fast-export/svn2git/issues/26
Or, even easier, run pstack <pid of svn2git> to see where it is stuck, then press Ctrl + C, add the dud branch to your rule set, and start svn2git again.
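A sketch of that pstack approach (the PID shown is a placeholder):
# Find the PID of the running converter
pgrep -f svn-all-fast-export
# Dump its current stack to see which branch/tag it is stuck on
pstack 12345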

RHadoop Stream Job Fail with Apache Oozie

I'm really just looking to pick the community's brain for some leads in figuring out what is going on with the issue I'm having.
I'm writing an MR job with RHadoop (rmr2, v3.0.0) and things are great -- I/O with HDFS, mapping, reducing. No problems. Life is great.
I'm trying to schedule the job with Apache Oozie, and am running into some issues:
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
I've read the rmr2 debugging guide, but nothing is really getting to the stderr because the job fails before anything even gets scheduled.
In my head, everything points to a difference in environments. However, Oozie is running the job as the same user that I'm able to run everything with via the CLI, and all of the R environment variables (fetched with Sys.getenv()) are the same, except that there is some additional classpath stuff set by Oozie.
I can post more of the OS or Hadoop versions and config details, but sleuthing some version-specific bugs seems like a bit of a red herring as everything runs fine at the command line.
Anybody have any thoughts what might be some helpful next steps in hunting this beast down?
UPDATE:
I overwrote the system function in the base package to log the user, the host name of the node, and the command being executed before the internal call to system. So before any system call is actually executed, I get something like the following in the stderr:
user#host.name
/usr/bin/hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming-2.2.0.2.0.6.0-102.jar ...
When run with Oozie, the command printed to stderr fails with an exit status of 1. When I run the command myself as user#host.name, it runs successfully. So essentially the EXACT same command with the SAME user on the SAME node fails with Oozie, but runs successfully from the CLI.
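One way to rule out OS-level environment differences (beyond the R variables from Sys.getenv()) is to capture and diff the full environment in both contexts; a sketch, with file paths as placeholders:
# In the CLI session on the node
env | sort > /tmp/cli.env
# Inside the Oozie launcher (e.g. from the same overridden system() hook)
env | sort > /tmp/oozie.env
# Then compare the two
diff /tmp/cli.env /tmp/oozie.env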
