Porting SQLite to STM32: issues with memory allocation

I am trying to port SQLite to the uC/OS RTOS running on an STM324xG_EVAL board. I am using the Micrium file system to provide a RAM-based file system for SQLite. I have tried different build configurations and used the sqlite3_config API to define memory regions such as the heap, scratch, and page-cache buffers. I am able to initialize (sqlite3_initialize) and open (sqlite3_open) the database, but when I try to create tables (sqlite3_exec) I get errors such as "journal file not found" and "out of memory". What could be the issue?
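(For reference, a minimal sketch of the kind of sqlite3_config setup involved; the buffer sizes and the function name are illustrative only, not the actual values used, and SQLITE_CONFIG_HEAP requires SQLite to be built with SQLITE_ENABLE_MEMSYS5 or SQLITE_ENABLE_MEMSYS3.)

#include "sqlite3.h"

/* Illustrative static buffers; real sizes depend on the board's RAM budget. */
static char heap_buf[256 * 1024];   /* private heap handed to SQLite          */
static char page_buf[10 * 1024];    /* page cache: 10 pages of 1 KiB each     */

int db_memory_init(void)            /* hypothetical helper, called before open */
{
    int rc;

    /* Give SQLite its own heap instead of the RTOS allocator. */
    rc = sqlite3_config(SQLITE_CONFIG_HEAP, heap_buf, sizeof(heap_buf), 64);
    if (rc != SQLITE_OK) return rc;

    /* Pre-allocated page cache: buffer, page size in bytes, number of pages. */
    rc = sqlite3_config(SQLITE_CONFIG_PAGECACHE, page_buf, 1024, 10);
    if (rc != SQLITE_OK) return rc;

    return sqlite3_initialize();
}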
Thanks,
Shijo Thomas

Related

I'm getting OutOfMemoryExceptions, but my trace file is much smaller than my available memory

As the title says, I've got tons of free memory, but I keep getting OutOfMemoryExceptions when processing traces and calling properties on Data Sources. Why is this happening?
The ETL file format is designed to be very space-efficient, and it also supports optional compression. Because of this, taking the data from an .etl file and transforming it into our more useful structures can often require significantly more memory than the original size of the file. However, there are two steps you can take to make OutOfMemoryExceptions less likely:
Don't use data sources you don't need. Even if none of the properties on a data source are called by your code, simply turning it on by calling its Use method will result in the data source processing events and preparing data for consumption.
Ensure your program is running as a 64-bit process. The default Visual Studio C# project settings are to compile your program targeting AnyCPU, but to prefer running it as a 32-bit process. Unchecking the "Prefer 32-bit" option in your project's Build properties or switching your project's build configuration to x64 will cause your program to run as a 64-bit process.
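For example, the relevant MSBuild properties in the project file look roughly like this (property names as MSBuild uses them; the exact project layout varies):

<PropertyGroup>
  <!-- Either target x64 outright... -->
  <PlatformTarget>x64</PlatformTarget>
  <!-- ...or keep AnyCPU but stop preferring a 32-bit process. -->
  <Prefer32Bit>false</Prefer32Bit>
</PropertyGroup>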

Difference between Jstack and gcore when generating Core Dumps?

We all know that core dumps are an essential diagnostic tool for analysing processes on Unix. I know that both jstack and gcore are used for generating javacore files or core dumps, but my doubt is that gcore is mainly used for processes while jstack is used for threads.
From an operating-system perspective, processes and threads, though interrelated (a process is made up of threads), are quite different from each other with respect to memory, speed, and execution. So is it that gcore diagnoses the process and jstack analyses the threads in that process?
gcore acts at the OS level and gives you a dump of the native process as it is currently running. From a Java point of view, it is not really understandable.
jstack gets you the stack traces at the VM level (the Java stacks) of all the threads your application has. From that you can find out what Java code is actually being executed at that point.
In practice, gcore is almost never used for Java diagnostics (too low level, native code...). Only a really strange issue with a native library, or something like that, might need this kind of tool.
There is also jmap, which can generate an hprof file containing the heap data from your VM. A tool like Memory Analyzer Tool (MAT) can open the hprof file, and you can drill down into what was going on (on the memory side).
If your VM crashes because of an OutOfMemoryError, you can also set a parameter to get the hprof file when the event occurs. It helps you understand why (too many users, a DB query that fetches too much data...).
The last thing is that you can add a debug option when you start your VM, so that you can connect to it and debug the running process. It can help if you have some strange issue that you are not able to reproduce in your local environment.
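Typical invocations of the tools mentioned above look roughly like this (PIDs, paths, and the debug port are placeholders):

# thread stacks of a running VM
jstack <pid> > threads.txt
# heap dump (hprof) for MAT
jmap -dump:format=b,file=heap.hprof <pid>
# OS-level native core of the process
gcore <pid>

# produce an hprof automatically when an OutOfMemoryError is thrown
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps ... MainClass

# start the VM with remote debugging (JDWP) enabled on port 8000
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 ... MainClass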

OutOfMemory error with IBM JDK 1.7

I am using IBM JDK 1.7 (to support TLS ciphers) for a Struts-based application deployed with embedded Tomcat.
We are running into memory leaks (OOM) that have generated almost 30 GB of dumps. This has become a routine event.
We have tried increasing the heap memory by including
wrapper.java.additional.1="-XX:MaxPermSize=256m -Xss2048k"
in wrapper.conf, but this didn't help much.
Try using Memory Analyzer; you can follow the instructions here to download and install it:
https://www.ibm.com/developerworks/java/jdk/tools/memoryanalyzer/
It should provide an overview of your heap usage.
I'd recommend starting with the dominator tree view to see which objects are responsible for keeping data alive on the heap. You can also run various reports which analyse the heap for you.
You should have core files (.dmp) and heap dumps (.phd). The core files will be large, but they may be faster to access and will also contain all the values of primitive fields in objects and of strings, whereas the .phd files contain only object sizes and the connections between them. It may be easier to relate what you are seeing back to your code if you start with the core file.

WinDBG - Analyse dump file on local PC

I have created a memory dump of an ASP.NET process on a server using the following command: .dump /ma mydump.dmp. I am trying to identify a memory leak.
I want to look at the dump file in more detail on my local development PC. I read somewhere that it is advisable to debug on the same machine as the one where the dump file was created. However, I have also read that some developers do analyse dump files on their local development PCs. What is the best approach?
I notice that when I create a dump file using the command above, the W3WP process memory increases by about 1.5 times. Why is this? I suppose this should be avoided on a live server.
Analyzing on the same machine can save you from SOS loading issues afterwards. Unless you are familiar with WinDbg and SOS, you will otherwise find it confusing and frustrating.
If you have to use another machine for analysis, make sure you carefully read this blog post, http://blogs.msdn.com/b/dougste/archive/2009/02/18/failed-to-load-data-access-dll-0x80004005-or-what-is-mscordacwks-dll.aspx , as it shows you how to copy the necessary files from the source machine (where the dump is captured) to the target machine (the one where you launch WinDbg).
For your second question: because you attach WinDbg to the process directly and use the .dump command to capture the dump, the target process is unfortunately modified. This is not easy to explain in a few words, but the recommended way is to use ADPlus.exe or Debug Diag; even procdump from Sysinternals is better. Those tools are designed for dump capture and they have minimal impact on the target process.
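Typical capture commands with those tools look roughly like this (the PID and output paths are placeholders):

procdump -ma <pid> c:\dumps\w3wp_full.dmp
adplus -hang -p <pid> -o c:\dumps

The -ma switch tells procdump to write a full memory dump; ADPlus in -hang mode attaches, captures a dump, and detaches without terminating the process.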
For memory leaks from unmanaged libraries, you should use the memory leak rule of Debug Diag. For managed memory leaks, you can simply capture hang dumps when memory usage is high.
I am no expert on WinDbg, but I once had to analyse a dump file from my ASP.NET site to find a StackOverflowException.
I had a dump file of my live site (I had no choice, since that was what was failing), and I originally tried to analyse it on my local dev PC, but I ran into problems when trying to load the CLR data from it. The reason was that the exact version of the .NET Framework differed between my dev PC and the server; both were .NET 4, but I imagine my dev PC had some cumulative updates installed that the server did not. The SOS module simply refused to load because of this discrepancy. I actually wrote a blog post about my findings.
So, to answer part of your question: it may be that you have no choice but to run WinDbg on your server; at least then you can be sure that the dump file will match your environment.
It is not necessary to debug on the actual machine unless the problem is difficult to reproduce on your development machine.
So long as you have the PDBs with private symbols and the correct version of .NET installed, the symbols should be resolved and the call stacks displayed correctly.
In terms of looking at memory leaks, you should enable the GFlags user-mode stack trace database and take memory dumps at two intervals, so you can compare the memory usage before and after the action that provokes the memory leak. Remember to disable GFlags afterwards!
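A rough outline of that workflow, using UMDH as one way to diff the two snapshots (symbols must be available via _NT_SYMBOL_PATH, +ust only affects processes started after it is set, and the PID is a placeholder):

gflags /i w3wp.exe +ust
umdh -p:<pid> -f:before.log
(reproduce the action that leaks, then take the second snapshot)
umdh -p:<pid> -f:after.log
umdh before.log after.log > diff.txt
gflags /i w3wp.exe -ust

The diff shows which allocation call stacks grew between the two snapshots.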
You could also run DebugDiag on the server, which has automated memory-pressure analysis scripts that will work with .NET leaks.

"Database is locked" error in SQLite over a Mac network

I have created a simple database using SQLite (actually PySQLite). It works fine when I'm querying or writing to the database from the local machine (i.e. the program and database file are on the Windows machine's drive). However, when I copy the database file to my network drive (a Time Capsule), Windows machines, although they can see the files and have full read/write access to the drive, give me "SQL Error: database is locked" even when performing a simple SELECT!
Queries work fine over the network from Macs.
There is no fancy multi-access going on; only one machine has the database open. It seems like some weird Mac networking issue. It happens both in the Python program and in the sqlite3 command line. I am using SQLite 3.6.14.2.
Has anybody seen this problem? Is there any way of fixing it? I don't really want to get heavy with MySQL, because this is a simple single-user program, but I'd like to use it from multiple machines.
Thanks.
I don't know if it can be done on a Mac, but on Debian I have to mount the Samba directory with the nobrl option.
From mount.cifs(8):
nobrl
Do not send byte range lock requests to the server. This is
necessary for certain applications that break with cifs
style mandatory byte range locks (and most cifs servers do
not yet support requesting advisory byte range locks).
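(For example, a one-off CIFS mount with that option would look roughly like this; the server path, mount point, and credentials are placeholders:)

sudo mount -t cifs //timecapsule/Files /mnt/tc -o username=guest,nobrl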
Read the sqlite FAQ: http://www.sqlite.org/faq.html#q5
"People who have a lot of experience
with Windows tell me that file locking
of network files is very buggy and is
not dependable. If what they say is
true, sharing an SQLite database
between two or more Windows machines
might cause unexpected problems."
So it may not work reliably from Windows; the FAQ doesn't say anything about the Mac.
Possibly it fails to lock the file over the network; I think you are using the SMB protocol, so the bugginess comes with the package. If you would like to use SQLite over the network, see SQLite Network for alternatives.
I've had a similar problem and I solved it by installing a newer SQLite version. Since Python 2.6 the problem has disappeared too, because it uses a newer SQLite DLL.
Thank you Carlos. CherryTree depends on SQLite, and for some reason it recently stopped working with my Samba-mounted SQLite database file, complaining about a locked database. Adding "nobrl" to my Ubuntu fstab entry solved the problem.
//192.168.3.122/Files /mnt/Files cifs username=public,password=asdf,rw,noperm,nobrl 0 0

Resources