PgAdmin III not responding when trying to restore a database - postgresql-9.1

I'm trying to restore a database (via a backup file), I'm on PgAdmin III (Postgresql 9.1).
After choosing the backup file, a window indicates that pg_restore.exe is running, but then pgAdmin stops responding. It has been stuck for a few hours now (it is not a RAM shortage issue).
It might be due to the backup file size (500 MB), but I restored a database from a 300 MB backup file a few days ago and that went smoothly.
By the way, the backup file (created via pg_dump) is in the "tar" format.
Please let me know if anything comes to mind or if you need any more information. I appreciate any help or pointers anyone has. Thanks in advance.

I had the same problem and solved it after following a web tutorial.
My backup file was 78 MB; I generated it again using:
Format: Custom
Ratio:
Encoding: UTF8
Rolename: postgres
Then I tried the restore again and it worked fine.
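If pgAdmin keeps hanging, it can also help to run pg_restore from the command line instead, where progress and errors are visible directly. A minimal sketch, assuming a hypothetical database name and backup path (substitute your own; the restore line itself is commented out here):

```shell
# Hypothetical names -- adjust to your setup.
DB="mydb"
BACKUP="/path/to/mydb.backup"   # tar- or custom-format dump from pg_dump

# The actual restore command (run it on your server):
#   pg_restore --verbose --clean --username=postgres --dbname="$DB" "$BACKUP"
# --verbose prints each object as it is restored, so you can see the
# process is alive even when a large restore takes hours.
CMD="pg_restore --verbose --clean --username=postgres --dbname=$DB $BACKUP"
echo "$CMD"
```

Running the same restore from a terminal also rules out the pgAdmin UI itself as the bottleneck.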

On OS X I had similar issues. After selecting the .backup file and clicking restore in pgAdmin I got the spinning beachball. Everything however was working in the background, the UI was just hanging.
One way you can check that everything is working is by looking at Activity Monitor on OS X or Resource Monitor on Windows. If you look at the 'Disk' tab you should see activity from postgres, and the value in the 'Bytes Read' column should be slowly going up. I was restoring a ~15 GB backup, and the counter in the 'Bytes Read' column slowly counted up and up. When it got to 15 GB the restore finished without errors and the pgAdmin UI became responsive again.
(BTW, my 15 GB restore took about 1.5 hours on a Mac with an i7 and 8 GB of RAM.)

Database Corruption - Disk Image Is Malformed - Unraid - Plex [migrated]

This question was migrated from Stack Overflow because it can be answered on Super User.
I am not sure where a question like this really belongs: it concerns an Unraid Linux server running a Plex Media Server container, which uses SQLite (I'm looking for troubleshooting at the root level). I have posted in both the Unraid and Plex forums with no luck.
My Plex container has been failing time and time again on Unraid, leading me through integrity checks, rebuilds, dumps, imports, and finally a complete wipe and restart (removing the old directory entirely and starting over). At best it stays up for a few minutes before the container fails again. The errors I am receiving have changed, but after the last attempt (a complete wipe and reinstall of a new container) I am getting the following error in the output log:
Error: Unable to set up server:
sqlite3_statement_backend::loadOne:database disk image is malformed
(N4soci10soci_errorE)
I decided to copy the database onto my windows machine and poke around the database to get a better understanding of the structure. Upon viewing a table called media_items I get the same error.
Clearly one of what I assume to be the main tables is corrupt. My question, then, is what, if anything, can I do to fix this or learn about the cause? I would have thought a completely new database would fix my issue, unless it's pure coincidence that two back-to-back databases became corrupted before I could even touch them. Could it be one of my media files? Could it be Unraid? Could it be my hard drive?
For context, if you're unfamiliar with Plex: once the container is up, it scans my media library and populates it with data such as metadata, posters, watch state, ratings, etc. I get through the full automated build, and within 30 minutes it falls apart before I can even customize my library.
Below are the shell commands I used in several scenarios during troubleshooting. They may be useful to someone somewhere.
Integrity Check:
./Plex\ SQLite "$plexDB" "PRAGMA integrity_check"
Recover From Backup:
./Plex\ SQLite "$plexDB" ".output recover.out" ".recover"
Dump:
./Plex\ SQLite "$plexDB" ".output dump.sql" ".dump"
Import:
./Plex\ SQLite "$plexDB" ".read dump.sql"
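For convenience, the commands above can be chained into one recovery pass. A minimal sketch, assuming the Plex SQLite binary lives in the current directory and the usual Plex library database filename (both may differ on your system); run it with the Plex server stopped:

```shell
# Hypothetical paths -- adjust for your container.
PLEX_SQLITE="./Plex SQLite"
plexDB="com.plexapp.plugins.library.db"

# 1. Check for corruption:
#      "$PLEX_SQLITE" "$plexDB" "PRAGMA integrity_check"
# 2. Dump whatever is still readable:
#      "$PLEX_SQLITE" "$plexDB" ".output dump.sql" ".dump"
# 3. Import the dump into a fresh database file:
#      "$PLEX_SQLITE" new.db ".read dump.sql"
# 4. Swap new.db in place of the corrupt file, keeping the original
#    in case the dump lost rows.
DUMP_SQL=".output dump.sql"
echo "$DUMP_SQL"
```

Rebuilding into a fresh file (step 3) rather than repairing in place avoids carrying over damaged pages from the original database.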
After a week of all kinds of troubleshooting, including resetting the Docker image (plus the other steps mentioned in the post), it was suggested in another forum that I run a memtest. I put memtest on a bootable USB and was immediately able to conclude that one RAM stick was bad. After removing that stick I have zero issues and everything is completely fine... Bizarre.

My XAMPP (MySQL) is shutting down unexpectedly

I am getting a strange issue with the MySQL server every 2 days.
I am working on Moodle plugin development, so I installed Moodle in my XAMPP htdocs folder as usual.
Two days ago I was writing my plugin and it was working fine; after the weekend I opened my system and now MySQL is not running. This has happened three or four times.
I am attaching the error I am getting:
[XAMPP error screenshot]
I even moved to a machine with a higher configuration, thinking the other machine might be running out of memory.
So far I have tried backing up the data, but the backup never includes my Moodle database.
I know I should take backups from time to time, but the real problem is why this error keeps appearing every 2-3 days.
I tried running XAMPP as admin.
I tried changing the port and various other solutions, but if I copy from the backup folder it wipes out my Moodle database and I need to start from scratch.
Please don't mark this as a duplicate; I have tried almost every solution, but nothing works for me.
I even tried this one (Error: MySQL shutdown unexpectedly XAMPP).
Update: here is the actual error, before doing anything, which comes up every 2 days: [original error screenshot]
Because mysql.db is a system table used for authentication, MariaDB aborts if it can't be opened.
Start the database with skip-grant-tables added to the configuration file.
After restarting, run the SQL REPAIR TABLE:
REPAIR TABLE mysql.db EXTENDED
After this completes, shut down, remove skip-grant-tables from the configuration file, and start MariaDB again.

What could be preventing SQLite WAL files from being cleared?

I've had three cases of WAL files growing to massive sizes of ~3 GB on two different machines with the same hardware and software setup. The database file itself is ~7 GB. Under normal runtime, WAL is either non-existent or a few kilobytes.
I know it's normal for WAL to grow as long as there's constantly a connection open, but even closing the application doesn't get rid of the WAL file or the shared memory file. Even restarting the machine, without ever starting my application, and attempting to open the database with DB Browser for SQLite doesn't help. It takes ages to launch (I don't know how long exactly, but definitely over 5 minutes) and then WAL and shared memory files remain, even after closing the browser.
Bizarrely, here's what did help. I zipped up the three files to investigate locally, but then on a hunch I deleted the originals and extracted the files back again. Launching the app took seconds and the extra files were gone just like that. The same thing happened on my laptop with the unzipped files. I ran the integrity check on my machine and the database seems to be fine.
Here's the rough description of the app, if it means anything.
EF Core 2.1.14 (it uses SQLite 3.28.0)
.Net Framework 4.7.1
except for some rare short-lived overlap with read-only access, only one thread accesses the database
entire connection (DbContext) gets closed roughly every 2 seconds max
shared memory is used with PRAGMA mmap_size = 268435456 (256 MiB)
synchronous mode is NORMAL
the OS is Windows 7
Any ideas what might be causing this... quirk?
I temporarily switched to rollback journal mode, which helpfully reported errors instead of failing silently.
The error I was getting was SQLITE_IOERR_WRITE. After more digging, it turned out the underlying Windows error was 665: hitting an NTFS file system limitation. An answer here then led me to the actual cause: extreme file fragmentation of the database.
Copying the file reduces fragmentation, which is why the bizarre fix I mentioned (copying the file) temporarily worked. The actual fix was to schedule defragmentation using Sysinternals' Contig.
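A sketch of that maintenance routine, assuming the sqlite3 CLI and Sysinternals Contig are available and a placeholder database filename (the actual invocations are commented out, since Contig is Windows-only):

```shell
# Hypothetical database path -- adjust to your application.
DB="app.db"

# Flush the WAL back into the main database file and truncate it:
CHECKPOINT="PRAGMA wal_checkpoint(TRUNCATE);"
#   sqlite3 "$DB" "$CHECKPOINT"
# Report how many fragments the file is in, then defragment in place
# (Sysinternals Contig):
#   contig -a "$DB"
#   contig "$DB"
echo "$CHECKPOINT"
```

Checkpointing first keeps the WAL small so the defragmented main file stays the authoritative copy.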

Failed to read Firefox OS indexeddb for backup completely

I finally need to replace my current Firefox OS smartphone because of problems with the speaker.
I would like to back up not only the contacts but also the messages (SMS).
I found this great tutorial and the work of laenion http://digitalimagecorp.de/flatpress/?x=cat:8? and was able to retrieve the SQLite database of the messages.
However, laenion's Exporter http://digitalimagecorp.de/software/firefox-os-data-exporter/ffosexporter.html? only exports the last message from each thread, and I want a full backup.
So I started reading the file with his recommended tool, the Firefox Storage Manager. What happens is that I can read out 2351 entries and then the conversion stops. However, if I delete some rows and restart, it loads additional entries, again up to 2351.
Is there a value in about:config that extends the number of keys converted? Or is there anything else I am doing wrong? Why is the Storage Manager not reading the complete database?
Thanks for any hint on how to solve this. Unfortunately I was not able to open the database in readable form with any other program.

Memory problem when using XCode4

I updated to Xcode 4 a few days ago; Xcode 4 is really nicer than Xcode 3. But I hit a memory issue with Xcode 4: total active memory kept growing while it was running, from 500 MB to 2.4 GB, while the process's own memory stayed around 200 MB. It's strange.
After I closed Xcode, total active memory didn't go down for a while; it stayed at 2.4 GB for about 10 minutes.
Has anyone else met this issue? Thanks for any info!
== Updates ==
Upgraded to Xcode 4.0.2; it still has the memory issue.
I have the same problem. At times Xcode 4 starts to index your project (you can see the "Indexing" message in the status bar at the top of the window). During that it can use up to 2.8 GB (!) of memory.
As soon as it happens I stop using my laptop and start making tea :)
If the swap exceeds 500 MB I restart my computer. I have 4 GB of memory installed in my MacBook 5,2 and there is no way to increase it :(
I don't know exactly what "indexing" actually means. I suppose it is connected with Code Sense in some way, but when I tried to disable code completion (Preferences -> Text Editing -> Editing), it didn't help.
I hope Apple fixes this in the next release. If not, the only way out is to upgrade my computer. Or use Xcode 3.2.
I'm having this same issue. Currently I'm using the following workaround:
I keep Activity Monitor open on a second screen, and whenever Xcode reaches 1 GB I restart it; it then works smoothly once again.
I know it's far from a perfect workaround, and I'm looking forward to a better one.
I have Xcode 4.0.1 and OS X 10.6.7.
I found a solution!
I wanted to clean my /Library/Cache. I accidentally deleted part of my /Library :-) so I decided to do a full system restore using the OS X DVD and my current (20-minute-old) Time Machine backup. I did the restore and... it fixed the problem! A Time Machine restore cleans all caches (it should be enough to delete only the contents of /Library/Cache and {HomeDirectory}/Library/Cache). Good luck!
