MS Access database size huge with no data? - ms-access-2010

I have an MS Access database with 35 linked tables, a few queries, and another 35 reports.
The database has no physical tables in it, as all of the data for its tables comes from the linked back-end MS Access database. The problem is that the front-end database is now over 1 GB in size. How and why does this happen, and how can I fix it?

First, please try HansUp's suggestion.
But, if that doesn't shrink it as much as you expect, try this:
Make a file called decompile.bat. In it, put the command:
"C:\Program Files (x86)\Microsoft Office\Office14\MSACCESS.EXE" "C:\Your\Path\To\YourFrontEnd.accdb" /nostartup/decompile
Edit the paths to suit. Put this batch file in the same directory as your front-end.
To use:
Run decompile.bat by double-clicking on it in Windows Explorer
In Access:
Hit Alt-F11 to go to the Visual Basic Editor
Click Debug, then Compile
Save, then exit the Visual Basic Editor
In the main Access window, click Database Tools, Compact and Repair Database
When finished, exit Access
You should notice that your front-end is dramatically smaller.

This is a very late addition, but I wonder why no one has pointed at the cause of the problem! Microsoft says the cause is:
If you do not release a recordset's memory each time that you loop through the recordset code, DAO may recompile, using more memory and increasing the size of the database.
And the solution:
Use the Close method of the Recordset object to explicitly close the recordset's memory when you no longer need the recordset. If the database has increased in size because you did not use the Close method of the Recordset object, you can reduce the size of the database by running the Compact and Repair utility (on the Tools menu).
https://learn.microsoft.com/en-us/office/troubleshoot/access/prevent-database-bloat-dao
I used the "Compact and Repair" tool from the "Database Tools" ribbon once and it reduced the size of an almost empty database from 2.0 GB (the maximum size) down to just 3.41 MB!
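For anyone wondering what the quoted guidance looks like in code, here is a minimal VBA/DAO sketch of the close-the-recordset pattern Microsoft describes (the table name SomeTable is just a placeholder):

    Dim db As DAO.Database
    Dim rs As DAO.Recordset

    Set db = CurrentDb()
    Set rs = db.OpenRecordset("SELECT * FROM SomeTable", dbOpenSnapshot) ' SomeTable is hypothetical

    Do Until rs.EOF
        ' ... work with the current record here ...
        rs.MoveNext
    Loop

    rs.Close              ' explicitly release the recordset's memory
    Set rs = Nothing
    Set db = Nothing

Leaving out the rs.Close and Set rs = Nothing lines each time code like this runs is exactly the habit the article warns against.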

I just ran across this. We use Access databases to store project data. For the most part they stay under a megabyte. Then a client reported that storage costs were going up on a server because some databases were growing to tens and hundreds of megabytes. So I searched my own files and found a database at a gigabyte!
I'm using Office 2007, and I'm pretty sure the option is still there in 365.
Solution:
From the Office button in Access (upper left), go to Manage -> Back Up Database. For me, it copied that 1 GB database to a 720 KB database. And, of course, it opened fine and all the data was there. I found that Compact and Repair did not come anywhere close to the results of the backup.
I don't know why Microsoft lets this happen; I could not find a reasonable explanation for it.

Related

Sqlite query reports SQLITE_CORRUPT

I have an embedded system running an RTOS, programmed in C.
I am using SQLite to maintain a file (let's call it sqlLiteFile.db) on a file system residing on NAND flash. The SQLite version is 3.8.5.
Earlier, I was creating a new database for this file every time the system came up, so it was a volatile file. I had no issues at that time.
However, I have now made sqlLiteFile.db persistent, so every time the system reboots, it opens the same file and starts writing. This works fine for some time and survives a few reboots, but after a while the SQLite query starts reporting the SQLITE_CORRUPT error. The write operation still works fine; it is the query that starts reporting the error. I can see the write operation succeeding using a debugger. Also, the size of the file on the file system keeps increasing.
When I download the file and open it in a SQLite browser, I cannot open it anymore. When I use another tool to convert sqlLiteFile.db to sqlLiteFile.txt, I can see an error at the bottom: /**** ERROR: (11) database disk image is malformed *****/
Any suggestions on how to prevent this corruption would be helpful.
Edit:
I further tried doing clean shutdowns that close the database using sqlite3_close() prior to rebooting. This time the database survived a little longer through reboots, but it got corrupted again eventually. So it seems it is about more than just closing the database before exiting the application. Perhaps the size?
Update:
The system reboots (and the re-opening/closing of the SQLite database) don't cause the corruption; it happens after the database size reaches a certain amount (~55 KB).
It did seem that fsync() was doing something to the SQLite database: taking out the fsync() call stopped the corruption. Also, I was opening and reading the database for download at the same time the SQLite database was being written. Both of these factors together, or fsync() alone, were causing the file system corruption. I still need to figure out a better way to perform fsync(), but now I know exactly what was causing the corruption.
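For reference, a minimal C sketch of the clean-shutdown path mentioned in the edit above, assuming a single open connection handle (the function name and the logging are hypothetical):

    #include <sqlite3.h>
    #include <stdio.h>

    /* Close the database cleanly before reboot: finalize any outstanding
     * statements first, otherwise sqlite3_close() returns SQLITE_BUSY. */
    static int shutdown_db(sqlite3 *db)
    {
        sqlite3_stmt *stmt;
        int rc;

        /* finalize every prepared statement still attached to the connection */
        while ((stmt = sqlite3_next_stmt(db, NULL)) != NULL) {
            sqlite3_finalize(stmt);
        }

        rc = sqlite3_close(db);
        if (rc != SQLITE_OK) {
            printf("sqlite3_close failed: %s\n", sqlite3_errmsg(db));
        }
        return rc;
    }

This only addresses the clean-close part of the problem; as the update notes, the corruption here ultimately came from the fsync()/concurrent-read interaction rather than from unclosed handles.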

Out of memory error on access

I have an out of memory error in Access. My DB is approximately 20 MB and holds approximately 100,000 rows across different tables.
It started this afternoon: whenever I go into the VBA editor, I can't edit anything, because it deletes the text I just typed and pops up "Out of memory".
If I try to use an OnUpdate event on a drop-down list, it shows the same error and does nothing. I can't even set a breakpoint in my code, because it never gets into the code.
I tried compacting it and separating the back end from the front end, but nothing works; same error. I'm on Windows XP SP3.
Strange errors like this are sometimes caused by a corrupt form in your database. I would recommend trying to decompile the database file.
You can get more information about the /decompile switch from the following:
How to Decompile a Database
Decompile Your Microsoft Access Database
I would make a backup copy of the database, then do a decompile, and then a compact. Then open up the database and open your VBA Editor and Compile your code. Then test it.
The /decompile switch has fixed many strange problems with Microsoft Access databases for me in the past.

"sqlite3.OperationalError: database or disk is full" on Lustre

I have this error in my application log:
sqlite3.OperationalError: database or disk is full
As plenty of disk space is available and my SQLite database does not appear to be corrupted (integrity_check did not report any error), why is this happening and how can I debug it?
I am using the Lustre filesystem (with flock set), and until now, it worked perfectly.
Versions are:
Python 2.6.6
SQLite 3.3.6
It's probably too late for the original poster, but I just had this problem and couldn't find an answer so I'll document my findings in the hope that it will help others:
As it turns out, an SQLite database actually can get full even if there's plenty of disk space, because it has a limit for the number of pages in a database file:
http://www.sqlite.org/pragma.html#pragma_max_page_count
In my case the value was 1073741823, which meant that, in combination with a page size of 1024 bytes, the database maxed out at about 1 TB and returned the "database or disk is full" error.
The good news is that you can raise the limit; for example, double it by issuing PRAGMA max_page_count = 2147483646;.
The limit doesn't seem to be saved in the database file, though, so you have to run it in your application every time you open the database.
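Since the question is about Python's sqlite3 module, here is a minimal sketch of applying that pragma on every connection; the path and the helper name are just placeholders:

    import sqlite3

    def open_db(path):
        """Open the database and raise the page-count ceiling.

        The pragma is not persisted in the file, so it has to be
        issued again on every new connection.
        """
        conn = sqlite3.connect(path)
        conn.execute("PRAGMA max_page_count = 2147483646")
        return conn

    conn = open_db("data.db")  # "data.db" is a placeholder path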
By default, SQLite uses the /tmp directory for temporary files (not memory). If /tmp is too small, you will get "disk full" errors. In that case, change the temporary directory, like this: export TMPDIR=<big file system>.
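If you would rather set this from inside the Python process, a minimal sketch (assuming your SQLite build consults the TMPDIR environment variable, as it does on most Unix-like systems, and that the variable is set before the first temporary file is created; the paths are placeholders):

    import os
    import sqlite3

    # Point SQLite's temporary files at a larger file system.
    os.environ["TMPDIR"] = "/big/file/system/tmp"

    conn = sqlite3.connect("mydb.sqlite")  # placeholder database path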
I had the same problem too.
If your host or PC's storage is actually full, delete some files from your system and the problem will be gone.

SQLite3 Data rescue on Error: Database disk image is malformed

Background
I have a database that has been corrupted, and I want to save as much of the data as possible.
I have tried to dump the data to SQL with numerous tools, without success.
Always the same error message:
Error: database disk image is malformed
I'm pretty sure this did happen due to a power failure.
Approach?
Now, the database is in fact just a file, and I'm wondering if it's possible to treat it as such and try to save as much data as possible.
I'm guessing that when the DB is opened by a tool or program, it first checks its headers.
In my case I get the error message right away. I'm assuming that the headers are corrupt or mismatched, and because of that no tool will try to read the payload.
The documentation at http://www.sqlite.org/fileformat2.html explains the header offsets.
Questions: Is this a reasonable approach? Is it possible to repair, modify, or exchange the headers on the corrupted DB? And how do I do it?
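For reference, the file format document linked above specifies that a healthy database file begins with the 16-byte header string "SQLite format 3" followed by a NUL byte, so a first sanity check is easy to script. A minimal Python sketch, with the file name as a placeholder:

    with open("corrupted.db", "rb") as f:
        header = f.read(16)

    # A healthy database file starts with this magic string (see fileformat2.html).
    print(header == b"SQLite format 3\x00")

If that check fails, even the first bytes are damaged; if it passes, the problem lies further into the header fields or the data pages.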
Despite several replies in multiple threads on SO to the contrary, SQLite databases can be recovered from corruption!
I have requested an update from the SQLite team in their FAQ (http://www.sqlite.org/faq.html#q20), but in the meantime, here are a couple of options.
The FAQs state:
"...If SQLITE_SECURE_DELETE is not used and VACUUM has not been run, then some of the deleted content might still be in the database file, in areas marked for reuse. But, again, there exist no procedures or tools that we know of to help you recover that data."
and:
"...Depending how badly your database is corrupted, you may be able to recover some of the data by using the CLI to dump the schema and contents to a file and then recreate. Unfortunately, once humpty-dumpty falls off the wall, it is generally not possible to put him back together again."
There are in fact at least two excellent tools that can recover data from whole SQLite databases and individual records, and they can help in cases of hardware failure, software errors, or human problems. The result will not be 100% pristine, but the situation is not hopeless.
PhotoRec is open source and multi-platform. While historically it was used for images and PDFs, it now supports SQLite recovery (http://www.cgsecurity.org/wiki/File_Formats_Recovered_By_PhotoRec), along with 220+ other binary file types. If a database (or an entire directory) is deleted, PhotoRec can often restore the db file to a sane-enough state to be opened and exported. Pre-compiled versions of the app are freely available for Windows, Mac and Linux.
In addition, the commercial product Epilog by CCL Forensics can do very advanced record recovery, including retrieving data from the write-ahead log (WAL) transaction files. It is a few hundred dollars, but it can do fairly amazing forensic reconstruction on SQLite data (both native binary db files as well as raw disk images).
Both the above have saved my hide several times, so passing this along for others who may have lost hope in deleted/corrupted SQLite databases (as well as genuine forensics for popular use cases, like mobile phones, browsers, address books, etc.).
Once you've regenerated/exported data, it's always a good idea to verify your backup schemes and definitely run a pragma integrity_check periodically, along with vacuuming.
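For completeness, both the dump-and-recreate workflow mentioned in the FAQ excerpt above and the periodic checks are one-liners with the sqlite3 command-line shell (database and file names are placeholders, and on a badly damaged file the dump may stop partway through):

    sqlite3 corrupt.db ".dump" > dump.sql      # dump schema and contents to SQL text
    sqlite3 rebuilt.db < dump.sql              # recreate a fresh database from the dump
    sqlite3 mydb.db "PRAGMA integrity_check;"  # periodic corruption check
    sqlite3 mydb.db "VACUUM;"                  # periodic vacuum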
I have requested that the official FAQ be updated to at least mention that one can google "sqlite recovery" or something if it's verboten to mention other projects/products by name.
Cheers.

concurrent reading and writing image files (asp.net, but applies to most web languages)

I have a .jpg file which represents the current image from a webcam. Users will be downloading this file at an interval of once per second. Because there could be dozens of users reading it, this could be dozens of times a second (which is normal for any web server).
The problem is that this image is also updated once a second by a 3rd-party application which "spiders" my local network's webcam portal image. This is so we can build our webcams into our current administration panel.
The problem I am already finding is that ASP.NET sometimes gets an error that it cannot access the file because it is open for writing by the bot. Likewise, the bot sometimes cannot access it because IIS is feeding it to a user.
The bot uses io.streamwriter to save the data to the file, and my script uses Response.WriteFile to send the file to the client. (I need to use an actual ASP.NET page with a JPG content type that serves the file, to make sure only users with an active session can view the JPG.)
My question is: what are the best practices for this? I know why it's happening, but what is the best resolution? Would storing the image as a BLOB in a database be smarter, since databases are designed for concurrent reading and writing already? Is there an easier way of doing this with a file that I have not thought of yet?
Thanks in advance,
Anthony Greco
Using a BLOB will work if the readers use the SNAPSHOT isolation model (SQL Server 2005 and up). See Download and Upload images from SQL Server via ASP.Net MVC for how to stream an image from a BLOB, and see Understanding Row Versioning-Based Isolation Levels for a lecture on SNAPSHOT.
But using a BLOB may be overkill; you could get away with something much simpler. For instance, if you only have one ASP.NET process, then you could have a global volatile variable for the current file name. The writer writes the JPG into a new file and then updates the global 'current' file name with an Interlocked.CompareExchange operation (it has to be a compare-exchange because a newer writer might actually finish faster, outrun a previous writer, and you want to preserve the latest update). There are still some issues left to solve (finding out the file name at startup, cleaning up old files, etc.), but they are all fairly easy to solve.
If you have a farm of servers, or multiple ASP.NET processes serving the site, then things could get complicated. I would still use a rotating file name and a trial-and-error approach (try to respond with the newest file, and fall back to the previous one if a conflict is detected).
You could get the bot to write the data to a different filename and then do a delete and rename to the filename being served by ASP.NET. This should reduce the file lock time down to the time needed for a delete and rename to occur. To clarify (a rough sketch follows the steps below):
ASP.NET serves the image from "webcam.jpg"
the bot writes the image data to "temp.jpg"
when the last image byte is written, the bot deletes "webcam.jpg" and renames "temp.jpg" to "webcam.jpg"
ASP.NET should check that "webcam.jpg" exists; if not, wait 10 ms (or a suitably small increment) and check again.
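A rough C# sketch of the writer side of that sequence (the class name and file paths are placeholders, and no retry or error handling is shown):

    using System.IO;

    class WebcamWriter
    {
        public static void PublishFrame(byte[] jpegBytes)
        {
            // 1. Write the new frame to a temporary file.
            File.WriteAllBytes("temp.jpg", jpegBytes);

            // 2. Delete the currently served file, then rename the new one
            //    into place; the lock window is only the delete + rename.
            if (File.Exists("webcam.jpg"))
                File.Delete("webcam.jpg");
            File.Move("temp.jpg", "webcam.jpg");
        }
    }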
