Database Corruption - Disk Image Is Malformed - Unraid - Plex

I am not sure where a question like this really falls, as it involves an Unraid Linux server running a Plex Media Server container, which uses SQLite (I'm looking for troubleshooting at the root level). I have posted in both the Unraid and Plex forums with no luck.
My Plex container has been failing time and time again on Unraid, resulting in me doing integrity checks, rebuilds, dumps, imports, and a complete wipe and restart (completely removing the old directory and starting over). At best I get it up for a few minutes before the container fails again. The errors I am receiving have changed, but as of the last situation (a complete wipe and reinstall of a new container) I am getting the following error in the output log:
Error: Unable to set up server:
sqlite3_statement_backend::loadOne:database disk image is malformed
(N4soci10soci_errorE)
I decided to copy the database onto my Windows machine and poke around to get a better understanding of its structure. Upon viewing a table called media_items I get the same error.
Clearly one of what I assume to be the main tables is corrupt. The question I have, then, is: what, if anything, can I do to try to fix this or learn about the cause? I would have thought a completely new database would fix my issue, unless it's pure coincidence that two back-to-back databases became corrupted before I could even touch them, with no connection. Could it be one of my media files? Could it be Unraid? Could it be my hard drive?
For context, if you're unfamiliar with Plex: once the container is up, it scans my media library and populates it with data such as metadata, posters, watch state, ratings, etc. I get through the full automated build, and within 30 minutes it falls apart before I can even customize my library.
Below are the bash lines I used in several scenarios throughout troubleshooting. They may be useful to someone somewhere.
Integrity Check:
./Plex\ SQLite "$plexDB" "PRAGMA integrity_check"
Recover (salvage readable data into SQL):
./Plex\ SQLite "$plexDB" ".output recover.out" ".recover"
Dump:
./Plex\ SQLite "$plexDB" ".output dump.sql" ".dump"
Import:
./Plex\ SQLite "$plexDB" ".read dump.sql"
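If the integrity check fails but .dump or .recover still produces usable SQL, one approach is to rebuild a brand-new database from that SQL and swap it in place of the corrupt one. A minimal sketch, with example file names (keep the corrupt original around as a backup):
# Rebuild a fresh database from the recovered SQL, then swap it in
./Plex\ SQLite new.db ".read recover.out"
mv "$plexDB" "${plexDB}.corrupt"
mv new.db "$plexDB"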

After hours, days, and eventually a week of all kinds of troubleshooting, including resetting the Docker image (plus the other steps mentioned in the post), it was suggested in another forum that I run a memtest. I put memtest on a bootable USB and was immediately able to conclude that one RAM stick was bad. After removing that stick I have had zero issues and everything is completely fine... Bizarre.
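For reference, writing a memtest image to a bootable USB stick can be as simple as the following sketch (the image file name and /dev/sdX are placeholders; double-check the device with lsblk first, since dd overwrites whatever it is pointed at):
lsblk
sudo dd if=memtest.img of=/dev/sdX bs=4M status=progress conv=fsync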

Related

R and RStudio: Docker vs Binder

My problem is that I can't use RStudio at my workplace, as IT does not support it. I want to use the R and RStudio installed on my personal laptop from my company laptop (using a modern browser, which is behind a firewall). I am thinking of two options:
Should I build a Docker image for R and RStudio (I see base images are already available)? I am mostly interested in base R and the dplyr, haven, xporter, and reticulate packages.
Or should I use Binder? I am not a technical person and my programming skills are very limited; can anyone suggest a way?
What exactly is the difference between using the Docker option vs. Binder?
I know I can use RStudio online and get my work done, but with the new paid account I am running out of project hours, and it is sometimes very slow. Thanks in advance.
Here are some examples beyond the modern RStudio MyBinder example:
https://github.com/fomightez/pythonista_skewedf
https://github.com/fomightez/r_phylogenetics_worshop
https://github.com/fomightez/chapter7/tree/master/binder
The modern RStudio MyBinder example has been set up as a template on GitHub, so you can use it as a starting point for your own repository.
The first one is for a special use of a package not on conda, and I started that one from square one.
The other two were converted from content by others to aid in making them Binder-ready.
You essentially list everything you need from conda in the environment.yml along with the appropriate channels. If you need special stuff not on conda, you need the other configuration files included there.
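For instance, a minimal environment.yml for base R plus some of the packages mentioned in the question might look like the sketch below (written here via a shell heredoc; the package names assume the conda-forge channel and are untested):
cat > environment.yml <<'EOF'
channels:
  - conda-forge
dependencies:
  - r-base
  - r-dplyr
  - r-haven
  - r-reticulate
EOF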
Getting everything working can take some iterations of adding things, letting the image build, and testing that your libraries are available, though your situation doesn't sound overly complex.
The binder launch badges you see are just images where you modify the URL to point the MyBinder federation site at your repository. Look at the URL and you should see the pattern where you put rstudio at the end of the URL pointing at your repo. The form at the MyBinder.org site can help with this; however, it is most often easier to just adapt a working launch badge's code copied from elsewhere. The form isn't set up at this time to make URLs for launching into RStudio.
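For example, a launch URL for RStudio typically follows this pattern, where <user> and <repo> are placeholders for your own repository:
https://mybinder.org/v2/gh/<user>/<repo>/HEAD?urlpath=rstudio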
Download anything useful you create in a running session. The sessions time out after 10 minutes of inactivity, although RStudio usually keeps them active.
Lack of persistence and limited memory, storage, and compute power can be drawbacks. The inherent reproducibility and portability are advantages.
MyBinder.org doesn't work with private repos. If you have code you don't want to share, you can upload it to the temporary session while using the repo only to specify the environment. You could host a private BinderHub that does allow the use of private git repositories; however, that is probably overkill for your use case and would exceed your ability level at this time.
GitHub isn't the only place to host repositories that can be pointed at the MyBinder system. If you go to the MyBinder.org page and click where it says 'GitHub' on the left side of the top line of the form, you'll see a list of the sources where you can host a repository and have the system build an image and launch a container from it.
Building the image from a source repository takes some minutes the first time. Once the image is built on the service, though, launch typically takes less than 30 seconds. Each time you make a change to the source repo, a rebuild is necessary. Some changes don't make the new build as long as the initial one, since some optimizing is done to rebuild only what is necessary after a change. Keep in mind there are several members of the federation around the world, and if your traffic gets sent to a member where the built image isn't yet available, it will be built from scratch there first.
The Holepunch project is out there to offer some help for users working in the R ecosystem; however, with the R-Conda system now integrated into MyBinder, it is pretty much as easy to do it the way I described. Last I knew, the Holepunch route generates a Dockerfile that isn't as easy to troubleshoot as the current R-Conda route. Dockerfiles are essentially a last-ditch configuration file for MyBinder: the other configuration files are much easier and don't require knowing Dockerfile syntax. MyBinder aims to offer the ability to take advantage of Docker's containers with a specified environment without users needing to know anything about Docker.
There is a Binder help category at the Jupyter Discourse Forum where you can post to get help. Other posts already there may help you troubleshoot.
Notice of a common pitfall
Most of the configuration files for making a repository Binder-ready are simply text and can be edited right in the GitHub browser interface, without needing git or even cloning the repo locally.
Last I knew, there are two exceptions to this. The postBuild and start configuration files carry permission settings that allow them to run as scripts, and editing them via the GitHub browser interface alters those settings so that they no longer work. (This was my experience when I last tried; your mileage may vary, or things may have changed by now.) To edit those, you need git available on a local system: pull the repo (or a working copy of the file from some other source), edit it on your machine, add it to your repo, and push it back up.
(If this is a problem, you can post in the Jupyter Discourse Forum Binder help category, and you and I could coordinate: I fork and edit those files in your repo to your specifications and then make a pull request so you can update your source from the fork.)
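A rough sketch of that local-edit workflow, with placeholder repository and file names:
git clone https://github.com/<user>/<repo>.git
cd <repo>
# edit postBuild / start locally, then make sure the execute bits are set
chmod +x postBuild start
git add postBuild start
git commit -m "Update Binder build scripts"
git push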
If you are using Jupyter notebooks extensively, then it may make sense to use Binder.
But if you simply want to use R and RStudio, then all you need is Docker. A good resource is
https://github.com/rocker-org/rocker
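For instance, a single docker run is usually enough to get RStudio Server going locally (the port and password below are examples; see the rocker documentation for details):
# browse to http://localhost:8787 afterwards and log in as user "rstudio"
docker run --rm -p 8787:8787 -e PASSWORD=yourpassword rocker/rstudio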

Failed to read Firefox OS indexeddb for backup completely

I finally need to replace my current smartphone, which runs Firefox OS, due to some problems with the speaker.
I would like to back up not only the contacts but also the messages (SMS).
I found this great tutorial and the work of laenion http://digitalimagecorp.de/flatpress/?x=cat:8? and was able to retrieve the SQLite database of the messages.
However, laenion's Exporter http://digitalimagecorp.de/software/firefox-os-data-exporter/ffosexporter.html? only exports the last message of each thread, and I want a full backup.
I then started reading the file with his recommended tool, the Firefox Storage Manager. It turns out that I can read out 2351 entries and then the conversion stops. However, if I delete some rows and restart, it loads additional entries, again up to 2351.
So I am wondering: is there a value in about:config that extends the number of keys to be converted? Or am I doing something else wrong? Why is the Storage Manager not reading in the complete database?
Thanks for any hint on how to solve this. Unfortunately, I was not able to open the database in readable form with any other program.

Automatically log changes to system files and allow revert

I'm trying to learn about the guts of Unix right now, mostly through experimentation. When I was first starting, I found myself looking through forum posts, copying and pasting bash code. When I broke something, I often had to do a fresh install because I couldn't remember exactly what I had changed where. Now, the simple solution is to keep a log of all the system files I've changed, along with original copies of the default files, so I can revert if necessary. It would be great if there were a command-line tool that did this for me automatically. It would be even better if I could step back through changes. Basically, I'm looking to version control my entire OS.
Does anything like this exist? I would also accept alternative strategies for spelunking through Unix without causing permanent damage, if you think I'm going about this wrong.
I'm using Debian, if it matters.
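For what it's worth, the core of what the question describes can be sketched with plain git on a config directory; tools such as etckeeper automate exactly this for /etc. A rough illustration, not a complete solution (some.conf is a placeholder):
cd /etc
sudo git init
sudo git add -A
sudo git commit -m "Baseline: default config files"
# later, see what changed or roll a single file back:
sudo git diff
sudo git checkout -- some.conf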

Is it possible to have the entire contents of a class that tripped an error included in the stacktrace?

A lot of time can pass between the moment a stack trace is generated and the moment it is thoroughly investigated. During that time, a lot can happen to the file in question, sometimes obscuring the original error. The error might even have been fixed in the meantime (overlapping bugs).
Is it possible to get stack traces that show the offending file as it was at the time of the error?
Not elegantly, and you normally don't want the user browsing through code that's throwing unexpected exceptions anyway (that's an open door to an attacker).
Usually, what happens in a dev shop is that the user reports an error, a stack trace, and the build it occurred on. As a tester, you can grab that build from your archives (you ARE keeping an archive of all supported releases somewhere handy, RIGHT?), install it, run it, and try to reproduce the error, working with the user to get additional info as necessary. I've seen very few bugs that couldn't be reproduced EVENTUALLY, even if it required running the program against a backup of the user's production database to do it.
As a developer, you can download that build's source code from your version control repository (you ARE using version control, RIGHT?), and examine the lines in the stack trace to try to discover the problem by inspection, and/or build and run it to reproduce the error. Then you go back to the latest source version, build, and run the same steps (a UI automation system can help here); if you don't get the error, someone else has already found and fixed it. If you still get the error, you also have an updated stack trace with lines that match the current build, allowing you to set your breakpoints and step through.
What KeithS said, plus there are ways to capture more helpful state information at the time of the Exception using the Exception.Data property. See http://blog.abodit.com/2010/03/using-exception-data-to-add-additional-information-to-an-exception/
While KeithS's answer is pretty much correct, it can be easier and more elegant than you think. If you can collect a dump file (instead of just a stack trace), you can use a symbol server and source server in combination with your debugger to automatically pull the correct-version code from source control.
For example: if you enable PDB output and source-server integration in MSBuild and upload the resulting PDBs to a symbol server, Visual Studio can automatically load the correct source from a TFS or SourceSafe repository based on the information in a minidump.

Problem with SQLite3::SQLException: SQL logic error or missing database

I get a
SQLite3::SQLException: SQL logic error or missing database
error when doing insert, update, and delete operations on tables from the browser (that is, the create, update, and destroy actions fail, but the show action is fine); the same operations in the console are OK. I googled this problem and found that most of the solutions were to remove duplication in the fixtures, so I removed all the test data from the fixtures and restarted the server, and it failed again :(
Any advice is appreciated.
It turned out that I forgot to use "sudo script/server" to get write permission to the database :)
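A quick way to check for this class of problem is to look at the permissions on the database file and its directory; SQLite needs write access to the directory too, for its journal files. The path below is the Rails default and may differ:
ls -l db/development.sqlite3
ls -ld db/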
I don't mean to resurrect the dead, but I just encountered this problem, and the popular answers I found did not apply.
My problem turned out to be the SQLite Manager add-on for Firefox. I used the Sysinternals "handle" program to determine that a) Firefox/SQLiteManager had (I assume) an open transaction, and b) every time I used the add-on to connect to the database, it did not destroy the previous connection, which was no longer accessible.
I exited Firefox, and my code ran fine. I loaded Firefox and SQLite Manager again but did not begin a transaction; again, my code ran fine. (My code was Python, not RoR.)
I would recommend that this answer, and the original question, be tagged [sqlite3]. It's definitely not specific to RoR.
This might not be the right place for my observation, but:
I spent some hours tracking down a problem with two C++ threads connected to one database.
For some silly reason, I was executing a COMMIT from one thread that was meant to be executed on the other.
The commit itself worked fine, but the other thread immediately had autocommit set back to true.
