I've had three cases of the WAL file growing to a massive ~3 GB on two different machines with the same hardware and software setup. The database file itself is ~7 GB. Under normal operation, the WAL file is either non-existent or a few kilobytes.
I know it's normal for the WAL to grow as long as a connection is constantly open, but even closing the application doesn't get rid of the WAL file or the shared-memory file. Even restarting the machine, without ever starting my application, and attempting to open the database with DB Browser for SQLite doesn't help. It takes ages to launch (I don't know how long exactly, but definitely over 5 minutes), and afterwards the WAL and shared-memory files remain, even after closing the browser.
Bizarrely, here's what did help. I zipped up the three files to investigate locally, but then on a hunch I deleted the originals and extracted the files back again. Launching the app took seconds and the extra files were gone just like that. The same thing happened on my laptop with the unzipped files. I ran the integrity check on my machine and the database seems to be fine.
Here's a rough description of the app, in case it means anything (a sketch of the connection setup follows the list).
EF Core 2.1.14 (it uses SQLite 3.28.0)
.NET Framework 4.7.1
except for some rare, short-lived overlap with read-only access, only one thread accesses the database
the entire connection (DbContext) gets closed roughly every 2 seconds at most
memory-mapped I/O is enabled with PRAGMA mmap_size = 268435456 (256 MiB)
synchronous mode is NORMAL
the OS is Windows 7
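For reference, a minimal sketch of roughly what that setup looks like at the Microsoft.Data.Sqlite level (the provider underneath EF Core); the file name and the helper itself are made up for illustration:

    using Microsoft.Data.Sqlite;

    static class Db
    {
        // Hypothetical helper mirroring the setup described above: a short-lived
        // connection that applies the PRAGMAs each time it is opened.
        public static SqliteConnection Open()
        {
            var connection = new SqliteConnection("Data Source=app.db"); // placeholder file name
            connection.Open();

            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText =
                    "PRAGMA journal_mode=WAL;" +
                    "PRAGMA mmap_size=268435456;" +   // 256 MiB of memory-mapped I/O
                    "PRAGMA synchronous=NORMAL;";
                cmd.ExecuteNonQuery();
            }

            return connection; // the caller disposes it within ~2 seconds
        }
    }

In the real app this all happens through EF Core, of course; the sketch just shows the listed settings in one place.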
Any ideas what might be causing this... quirk?
I temporarily switched to rollback-journal mode (instead of WAL), which helpfully reported errors instead of failing silently.
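(In case it's useful to anyone, the switch was just a pragma issued through EF Core's raw SQL API; the context type name here is made up.)

    using Microsoft.EntityFrameworkCore; // for ExecuteSqlCommand

    using (var db = new AppDbContext()) // hypothetical DbContext subclass
    {
        // Flip the database from WAL back to the default rollback-journal mode,
        // which is what surfaced the write errors described below.
        db.Database.ExecuteSqlCommand("PRAGMA journal_mode=DELETE;");
    }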
The error I was getting was SQLITE_IOERR_WRITE. After more digging, it turned out the root Windows error behind it all was 665 ("The requested operation could not be completed due to a file system limitation"), i.e. hitting an NTFS limitation. An answer here then led me to the actual cause: extreme file fragmentation of the database.
Copying the file reduces fragmentation, which is why the bizarre zip-and-extract fix I mentioned worked temporarily. The actual fix was to schedule regular defragmentation of the database file using Sysinternals' Contig.
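For reference, the scheduled step can be as simple as running contig.exe against the database file. Below is a minimal sketch of such a job, assuming contig.exe is on the PATH; the database path is a placeholder:

    using System.Diagnostics;

    class DefragDbJob
    {
        static void Main()
        {
            // Sysinternals Contig defragments the given file in place; -v is verbose output.
            var psi = new ProcessStartInfo("contig.exe", "-v \"C:\\data\\app.db\"")
            {
                UseShellExecute = false
            };
            using (var contig = Process.Start(psi))
            {
                contig.WaitForExit();
            }
        }
    }

The same thing can be done with a one-line Task Scheduler action that runs the command directly; the point is simply to defragment the database file on a regular schedule.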
I'm getting the following error while developing on my local machine (Windows 10):
FATAL ERROR: Evacuation Allocation failed - process out of memory
This does not happen on my DigitalOcean server. I have no problems with my app on it, but on my development PC, I keep having this error non-stop. I can't develop my app without disruption.
I tried the following things:
1) Deleted the Temp folder on my computer. It had over 3 million files and took me an entire day to clean up.
2) Increased the virtual memory on my computer from 2 GB to 3 GB (the PC has 8 GB of RAM).
I looked through Stack Overflow for similar questions, but none of the suggestions worked for me. I do have a pretty large app with around 8 dependencies/packages, but I don't understand why all of a sudden it would slow down and "short circuit" so frequently. The project itself takes a good 5 minutes to load after typing the meteor command in the Windows command prompt.
Does anyone know how to fix this problem? I have a sense it's a Node problem, but I'm not sure how to begin fixing it with Meteor on top of it all.
If you are passing a string to console.log(), it may contain too much data. Remove the console.log() call and see if the error goes away; that worked in my case. If you just want to confirm that the string contains data, pass a substring of it or string.length instead.
I'm trying to restore a database from a backup file. I'm using pgAdmin III (PostgreSQL 9.1).
After choosing the backup file, a window indicates that pg_restore.exe is running, and then pgAdmin stops responding; it has been a few hours now (it is not a RAM shortage issue).
It might be due to the backup file size (500 MB), but I already restored a database from a 300 MB backup file a few days ago, and that went smoothly.
By the way, the backup file (created via pg_dump) is in the "tar" format.
Please let me know if anything comes to mind or if you need any more information. I appreciate any help or pointers anyone has. Thanks in advance.
I had the same problem and solved it by following this web site's tutorial.
The file generated for my backup was 78 MB in size; I generated it again using:
Format: Custom
Ratio:
Encoding: UTF8
Rolename: postgres
I tried the restore again and then it worked fine.
On OS X I had similar issues. After selecting the .backup file and clicking Restore in pgAdmin, I got the spinning beachball. Everything, however, was working in the background; the UI was just hanging.
One way you can check that everything is working is by looking at Activity Monitor on OS X or Resource Monitor on Windows. If you look at the 'Disk' tab you should see activity from postgres and you should see the value in the 'Bytes Read' column slowly going up. I was restoring a ~15G backup and the counter in the 'Bytes Read' column slowly counted up and up. When it got to 15G it finished without errors and the pgAdmin UI became active again.
(BTW, my 15G restore took about 1.5 hours on a Mac with an i7 and 8G of RAM)
So I have a series of ASP.NET web apps, each assigned its own app pool.
This results in several instances of w3wp.exe residing in memory.
I've been trying to figure out why a couple of them steadily increase their use of RAM over the course of a day.
I found a suggestion that the "Debug Diagnostics Tool" might be of use.
I downloaded it, installed it, and attempted to use it to create a full dump of the process.
For some reason it failed.
However, afterwards I noticed that the memory used (private bytes) had dropped from nearly 600 MB down to ~90 MB.
Did the Debug Diagnostics Tool cause the app to restart (or the app pool to recycle), or did some form of garbage collection get invoked that caused the app to release a bunch of memory?
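For what it's worth, here's the kind of check I could add to tell those two apart; a rough sketch only, with a made-up handler name (it would still need the usual registration in web.config). If the process ID or start time changes after the memory drop, the pool recycled; if they stay the same, the memory was released inside the live process.

    using System;
    using System.Diagnostics;
    using System.Web;

    // Hypothetical diagnostic endpoint for the affected app.
    public class ProcessInfoHandler : IHttpHandler
    {
        public bool IsReusable => true;

        public void ProcessRequest(HttpContext context)
        {
            var process = Process.GetCurrentProcess();
            context.Response.ContentType = "text/plain";
            context.Response.Write(
                "PID: " + process.Id + Environment.NewLine +
                "Started: " + process.StartTime.ToString("o") + Environment.NewLine +
                "Private bytes: " + process.PrivateMemorySize64 + Environment.NewLine +
                "Managed heap: " + GC.GetTotalMemory(false));
        }
    }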
I am running some basic data manipulation in R on a MacBook Air (4 GB of memory, 120 GB HD with 8 GB available). My input file is about 40 MB, and I don't write anything to disk until the end of the process. However, in the middle of the process, my Mac says there's no memory left to run. I checked the hard drive and found there was only about 500 MB left.
So here are my questions:
How is it possible that R filled up my disk so quickly? My understanding is that R stores everything in memory (unless I explicitly write something out to disk).
If R does write temporary files to the disk, how can I find these files to delete them?
Thanks a lot.
Update 1: error message I got:
Force Quit Applications: Your Mac OS X startup disk has no more space available for
application memory
Update 2: I checked tempdir() and it shows "var/folders/k_xxxxxxx/T//Rtmpdp9GCo", but I can't locate this directory from Finder.
Update 3: After running unlink(tempdir(), recursive=TRUE) in R and restarting my computer, I got my disk space back. I would still like to know whether R writes to my hard drive, so I can avoid similar situations in the future.
Update 4: My main object is about 1 GB. I used Activity Monitor to track the process, and while memory usage is about 2 GB, disk activity is extremely high: data read 14 GB, data written 44 GB. I have no idea what R is writing.
R writes to a temporary per-session directory which it also cleans up at exit.
It follows convention and respects TMP and related environment variables.
What makes you think that disk space has anything to do with this? R needs all objects held in memory, not on disk (by default; there are add-on packages that allow a subset of operations on files stored on disk that are too big to fit into RAM).
One of the steps in the "process" is causing R to request a chunk of RAM from the OS so it can continue. The OS could not comply, and thus R terminated the "process" you were running, with the error message you failed to give us. [Hint: it would help if you showed the actual error, not your paraphrasing of it. Some inkling of the code you were running would also help. 40 MB on disk sounds like a reasonably large file; how many rows/columns, etc.? How big is the object within R (object.size())?]
I have created a memory dump of an ASP.NET process on a server using the following command: .dump /ma mydump.dmp. I am trying to identify a memory leak.
I want to look at the dump file in more detail on my local development PC. I read somewhere that it is advisable to debug on the same machine as the one where you create the dump file. However, I have also read that some developers do analyse the dump file on their local development PCs. What is the best approach?
I notice that when I create a dump file using the command above, the w3wp process memory increases by about 1.5 times. Why is this? I suppose this should be avoided on a live server.
Analyzing on the same machine can save you from SOS loading issues later on. Unless you are familiar with WinDbg and SOS, you will otherwise find it confusing and frustrating.
If you have to use another machine for analysis, make sure you carefully read this blog post, http://blogs.msdn.com/b/dougste/archive/2009/02/18/failed-to-load-data-access-dll-0x80004005-or-what-is-mscordacwks-dll.aspx, as it shows you how to copy the necessary files from the source machine (where the dump is captured) to the target machine (the one where you launch WinDbg).
For your second question: because you use WinDbg to attach to the process directly and use the .dump command to capture the dump, the target process is unfortunately modified (it's not easy to explain in a few words). The recommended way is to use ADPlus.exe or DebugDiag; even ProcDump from Sysinternals is better. Those tools are designed for dump capture and have minimal impact on the target process.
For a memory leak from unmanaged libraries, you should use the memory-leak rule in DebugDiag. For a managed memory leak, you can simply capture hang dumps when memory usage is high.
I am no expert on WinDbg, but I once had to analyse a dump file from my ASP.NET site to find a StackOverflowException.
While I got a dump file of my live site (I had no choice, since that was what was failing), I originally tried to analyse that dump file on my local dev PC but ran into problems when trying to load the CLR data from it. The reason was that the exact version of the .NET Framework differed between my dev PC and the server: both were .NET 4, but I imagine my dev PC had some cumulative updates installed that the server did not. The SOS module simply refused to load because of this discrepancy. I actually wrote a blog post about my findings.
So, to answer part of your question, it may be that you have no choice but to run WinDbg on your server; at least that way you can be sure the dump file will match your environment.
It is not necessary to debug on the actual machine unless the problem is difficult to reproduce on your development machine.
As long as you have the PDBs with the private symbols and the correct version of .NET installed, the symbols should be resolved and the call stacks displayed correctly.
In terms of looking at memory leaks, you should enable the GFlags user-mode stack trace and take memory dumps at two intervals so you can compare memory usage before and after the action that provokes the leak. Remember to disable GFlags afterwards!
You could also run DebugDiag on the server, which has automated memory-pressure analysis scripts that will work with .NET leaks.