I'm currently in the process of transferring what should have been about 2.7TB of data to a 5TB backup disk. Everything looks like it has been going smoothly, except for one thing: my three 1TB source disks have apparently transferred 3.7TB of data so far, and it's still going...
That doesn't add up. All three sources and the destination are Mac OS Extended (one of the sources is non-journaled, but the finished destination folder for that disk still shows the same amount of data as its source).
Does anyone know a potential cause of this, or what could be going on? Even if the sources were full to the brim, how am I ending up with almost a whole TB of extra data when copying between the same filesystem types?
This last source disk is sitting at about 300GB of 899GB transferred, so there is still another ~600GB to move, which will push the eventual total above 4TB from 3 x 1TB source disks. I'm so confused...
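In case it helps, a quick way to double-check the sizes from Terminal (a sketch; the volume names below are placeholders for my actual disks):

# on-disk usage of a source volume vs. its finished copy on the backup disk
du -sh /Volumes/Source1 /Volumes/Backup/Source1
# item counts, to see whether the copy has the same number of entries as the source
find /Volumes/Source1 | wc -l
find /Volumes/Backup/Source1 | wc -l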
This seems to have been caused by rsync copying the targets of symlinks as full files. Adding --no-links (or --no-l) alongside -a solves the issue.
ref: https://serverfault.com/a/233682
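For reference, a minimal sketch of the kind of command described above (the paths are placeholders; adjust to your own volumes):

# archive mode, but skip symlinks entirely so their targets are not copied as full files
rsync -av --no-links /Volumes/Source1/ /Volumes/Backup/Source1/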
I am not sure where a question like this really falls, as it concerns an Unraid Linux server running a Plex Media Server container, which uses SQLite (I'm looking for troubleshooting at the root level). I have posted in both the Unraid and Plex forums with no luck.
My Plex container has been failing time and time again on Unraid, resulting in me doing integrity checks, rebuilds, dumps, imports, and a complete wipe and restart (completely removing the old directory and starting over). At best I get it up for a few minutes before the container fails again. The errors I am receiving have changed, but as of the last attempt (a complete wipe and reinstall of a new container) I am getting the following error in the output log:
Error: Unable to set up server:
sqlite3_statement_backend::loadOne:database disk image is malformed
(N4soci10soci_errorE)
I decided to copy the database onto my Windows machine and poke around to get a better understanding of the structure. Upon viewing a table called media_items, I get the same error.
Clearly one of what I assume to be the main tables is corrupt. The question, then, is what (if anything) can I do to fix this or learn about the cause? I would have thought a completely new database would fix my issue, unless it's pure coincidence that two back-to-back databases became corrupted before I could even touch them. Could it be one of my media files? Could it be Unraid? Could it be my hard drive?
For context, if you're unfamiliar with Plex: once the container is up, it scans my media library and populates the database with metadata, posters, watch state, ratings, etc. I get through the full automated build, and within 30 minutes it falls apart before I can even customize my library.
Below are the shell commands I used in several scenarios throughout troubleshooting. They may be useful to someone somewhere.
Integrity Check:
./Plex\ SQLite "$plexDB" "PRAGMA integrity_check"
Recover (salvage data from the damaged database):
./Plex\ SQLite "$plexDB" ".output recover.out" ".recover"
Dump:
./Plex\ SQLite "$plexDB" ".output dump.sql" ".dump"
Import:
./Plex\ SQLite "$plexDB" ".read dump.sql"
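For completeness, this is roughly the sequence for rebuilding from a dump (a sketch; $plexDB is the same variable as above, pointing at the library database):

# move the damaged database aside so the .read creates a fresh file
mv "$plexDB" "$plexDB.corrupt"
# import the dump into the new database, then re-check it
./Plex\ SQLite "$plexDB" ".read dump.sql"
./Plex\ SQLite "$plexDB" "PRAGMA integrity_check"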
After hours, days, and ultimately a week of all kinds of troubleshooting, including resetting the Docker image (plus the other steps mentioned in the post), it was suggested in another forum that I run a memtest. I put memtest on a bootable USB and was immediately able to conclude that one RAM stick was bad. After removing that stick I have zero issues and everything is completely fine... Bizarre.
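For anyone wanting to run the same test: a minimal sketch of writing a memtest image to a USB stick from a Linux machine (assumes the image was downloaded as memtest.img and the stick is /dev/sdX; double-check the device name with lsblk first, because dd will overwrite whatever it is pointed at):

# write the bootable image to the USB stick and flush it to disk
sudo dd if=memtest.img of=/dev/sdX bs=4M status=progress
sync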
I'm not sure whether this is a server or a programming problem. I searched for a similar problem, but couldn't find anything like this.
I have a server running Debian Buster, serving sites on Apache2.
This week, one of my sites turned very slow, taking more than 25 seconds to render a page that usually took between 2 and 4 seconds.
At first, I checked the PHP program, but it finishes processing everything in less than 1 second, occasionally 2 seconds.
As I have to place some menus depending on the size of the page, I save everything in a PHP variable and then decide whether to add extra menus.
In the end, I "echo" the variable to the browser.
That said, after checking a lot, I found that:
- When I open a page, it takes no time to process the HTML in PHP, but after it is written to the browser, the browser sits "waiting for www.mydomainname.tld" for 20+ seconds.
- Running top, I see 2 or 3 Apache processes at 100% CPU on the server during that time.
- One of my CSS files was missing on the server. After replacing it, one of the Apache processes at 100% disappeared (it probably ran and closed).
- Another CSS file is present on the server, but when that file is referenced in the HTML page, the same 100%-CPU problem appears. If I disable it in the HTML, everything runs as quickly as expected, rendering in the browser in less than 4 seconds.
I know this is not easily reproducible, and for now I have disabled that second CSS file so my site keeps running, but it would be great if anyone could give me an idea of where I should look for a solution.
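In case it helps with diagnosis, this is roughly how the individual resources can be timed outside the browser (a sketch; the domain and CSS path are placeholders):

# time the HTML page itself and the suspect CSS file separately
curl -o /dev/null -s -w 'total: %{time_total}s\n' http://www.mydomainname.tld/
curl -o /dev/null -s -w 'total: %{time_total}s\n' http://www.mydomainname.tld/css/suspect.css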
I even checked the hard disks; the SMART attributes seem OK. I haven't stopped the server to check the disks offline yet.
I have a copy of this server in a VirtualBox VM running the same system, and locally it is amazingly fast.
My worry is whether there is a problem with my server that I should get maintenance for.
Just to add: the server is an AMD octa-core with 32GB of RAM and two 3TB disks in a RAID1 set, so the server specs are not the culprit (I think).
I ran a badblocks check on the RAID array and found no errors.
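A sketch of that check, for reference (badblocks runs a non-destructive read-only test by default; /dev/md0 is a placeholder for your actual RAID device):

# read the whole array, showing progress and reporting any bad blocks
badblocks -sv /dev/md0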
Then I asked the support staff to replace the older of the two disks (by manufacturing date), and surprisingly, since I had no proof of damage to that disk, they accepted.
After rebuilding the RAID set, the slowness disappeared somehow.
So this is not a good answer, but it may serve as a hint for anyone having this problem too.
I am quite unsure how moving files/directories technically works in a client-and-NAS scenario; perhaps someone can enlighten me or tell me whether this is normal OS behavior.
I have a NAS (Synology DiskStation) on a Gigabit LAN with some large directories (in the range of ~10GB) that I want to move somewhere else on the same NAS (even on the same hard disk).
The problem is that if I move a directory from, let's say,
//diskstation:/dir_foo/dir_1/src_1
to
//diskstation:/dir_foo/dir_2/
via my Windows 7 desktop PC in Explorer (I even tried it in Finder on a MacBook), this can take up to 10 minutes or so, and I really wonder why that is.
To me it seems as if all the data is first transferred over the LAN to my client PC and then copied back to the NAS!?
Shouldn't Explorer or the NAS notice that this is a local file operation, so that the data doesn't have to travel through my LAN and the move would be much quicker?
How can I check whether the file move is really executed over the LAN? If I wanted to do this kind of operation externally via VPN, it would be pretty much unusable...
Is this normal behavior?
It's hard to give a firm answer, because it depends. What access protocol are you using, and what operation are you performing? Is it a drag-drop in your GUI?
Your NAS does what it's told. It almost certainly implements some sort of internal rename function, which means you don't need to copy data in order to 'move' it.
If you do this from the command line, using 'move' or 'mv' (depending on DOS/Unix), do you have the same problem? I'm prepared to bet you don't, because you're telling the NAS to rename, and it will, and it'll be fine.
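For example, from the MacBook you could mount the share and try the rename directly (a sketch; the mount point and paths are assumptions based on the question):

# within the same mounted share, mv issues a rename that the NAS performs itself
mv /Volumes/dir_foo/dir_1/src_1 /Volumes/dir_foo/dir_2/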
Move it on the NAS itself instead of through your desktop file explorer.
If you use Windows Explorer to move the files, your OS will first download each file from the source directory to the client PC and then upload it to the target directory. This is because you're using Samba (SMB) shares.
If you want to move files quickly within your NAS, the best way is to use PuTTY or WinSCP, which work over SSH, SFTP, etc.
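For instance, with SSH enabled on the DiskStation, something like this runs the move entirely on the NAS (a sketch; the username, volume name, and paths are assumptions based on the question):

# the rename happens locally on the NAS, so no file data crosses the LAN
ssh admin@diskstation 'mv /volume1/dir_foo/dir_1/src_1 /volume1/dir_foo/dir_2/'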
I'm trying to restore a database from a backup file. I'm using pgAdmin III (PostgreSQL 9.1).
After choosing the backup file, a window indicates that pg_restore.exe is running, and then pgAdmin stops responding. It has been a few hours (it is not a RAM shortage issue).
It might be due to the backup file size (500 MB), but I already restored a database from a 300 MB backup file a few days ago, and that went smoothly.
By the way, the backup file (created via pg_dump) is in the "tar" format.
Please let me know if anything comes to mind or if you need any more information. I appreciate any help or pointers anyone has. Thanks in advance.
I had the same problem and solved it by following a web tutorial.
My backup file had been generated at 78 MB; I generated it again using:
Format: Custom
Ratio:
Encoding: UTF8
Role name: postgres
I tried to restore again, and then it worked fine.
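A minimal command-line equivalent, in case the GUI options are unclear (a sketch; the database names, user, and file names are placeholders):

# regenerate the backup in custom format
pg_dump -U postgres -Fc -f mydb.backup mydb
# restore it into an existing, empty database
pg_restore -U postgres -d mydb_restored mydb.backup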
On OS X I had similar issues. After selecting the .backup file and clicking restore in pgAdmin, I got the spinning beachball. Everything, however, was working in the background; the UI was just hanging.
One way you can check that everything is working is by looking at Activity Monitor on OS X or Resource Monitor on Windows. If you look at the 'Disk' tab you should see activity from postgres and you should see the value in the 'Bytes Read' column slowly going up. I was restoring a ~15G backup and the counter in the 'Bytes Read' column slowly counted up and up. When it got to 15G it finished without errors and the pgAdmin UI became active again.
(BTW, my 15G restore took about 1.5 hours on a Mac with an i7 and 8G of RAM)
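Alternatively, running the restore from a terminal avoids the hanging UI entirely and prints progress as it goes (a sketch; the connection details and file path are placeholders):

# --verbose lists each item as it is restored, so you can see it is not stuck
pg_restore --verbose -U postgres -d mydb /path/to/backup.tar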
I am running some basic data manipulation on a MacBook Air (4GB memory, 120GB HD with 8GB available). My input file is about 40 MB, and I don't write anything to disk until the end of the process. However, in the middle of the process, my Mac says there's no memory left to run. I checked the hard drive and found there's only about 500MB of free space left.
So here are my questions:
How is it possible that R filled up my disk so quickly? My understanding is that R stores everything in memory (unless I explicitly write something out to disk).
If R does write temporary files to disk, how can I find those files to delete them?
Thanks a lot.
Update 1: error message I got:
Force Quit Applications: Your Mac OS X startup disk has no more space available for
application memory
Update 2: I checked tempdir() and it shows "var/folders/k_xxxxxxx/T//Rtmpdp9GCo". But I can't locate this directory in Finder.
Update 3: After running unlink(tempdir(), recursive=TRUE) in R and restarting my computer, I got my disk space back. I would still like to know whether R writes to my hard drive, so I can avoid similar situations in the future.
Update 4: My main object is about 1GB. I use Activity Monitor to track the process, and while memory usage is about 2GB, disk activity is extremely high: data read 14GB, data written 44GB. I have no idea what R is writing.
R writes to a temporary per-session directory which it also cleans up at exit.
It follows convention and respects TMP and related environment variables.
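For example, on OS X you can see how much space those per-session directories are using from Terminal (a sketch; $TMPDIR is the per-user temporary location under /var/folders where R creates its Rtmp* directories):

# show the temporary directory and the size of any R session directories inside it
echo "$TMPDIR"
du -sh "$TMPDIR"/Rtmp*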
What makes you think that disk space has anything to do with this? R needs all objects held in memory, not off disk (by default; there are add-on packages that allow a subset of operations on on-disk stored files too big to fit into RAM).
One of the steps in the "process" caused R to request a chunk of RAM from the OS so it could continue. The OS could not comply, and thus R terminated the "process" you were running with the error message you failed to give us. [Hint: it would help if you showed the actual error, not your paraphrasing of it. Some inkling of the code you were running would also help. 40MB on disk sounds like a reasonably large file; how many rows/columns, etc.? How big is the object within R (object.size())?]