ASP.NET Memory Leak - OracleCommand Object

I have a memory leak and I am having a really hard time trying to figure out where the problem is. The ASP.NET process is rising to 1 GB every now and then. I have followed the instructions on this page (http://humblecoder.co.uk/tag/windbg) and the !gcroot command returns the following (last x lines). I have looked at all my OracleConnections and OracleCommands, and they appear to be closed and disposed correctly:
6523dfd4 282 28200 System.Data.SqlClient.SqlParameter
0e90d850 548 28496 System.IO.MemoryStream
67b71a0c 1461 29220 System.Transactions.SafeIUnknown
7a5ee588 1924 30784 System.Collections.Specialized.ListDictionary+NodeKeyValueCollection
648c91f4 665 31920 System.Configuration.ConfigurationValues
7a5e5d04 1342 32208 System.Threading.Semaphore
652410f8 670 34840 System.Data.ProviderBase.DbConnectionPool+PoolWaitHandles
6613228c 1319 36932 System.Web.Security.FileSecurityDescriptorWrapper
66106948 2449 39184 System.Web.UI.AttributeCollection
0e8ff780 2021 40420 Microsoft.Win32.SafeHandles.SafeLsaPolicyHandle
01e34730 336 43008 Oracle.DataAccess.Client.OracleDataReader
648c9434 2218 44360 System.Configuration.ConfigurationValue
7a5ea0e4 1918 46032 System.Collections.Specialized.ListDictionary+NodeKeyValueCollection+NodeKeyValueEnumerator
7a5eaaa8 3088 49408 System.Collections.Specialized.NameObjectCollectionBase+NameObjectEntry
652435c4 1138 59176 System.Data.SqlClient.SqlBuffer
0e912c9c 2491 59784 System.Collections.ArrayList
0e9132c0 1236 69216 System.Collections.Hashtable
6614bf64 45 69660 System.Web.Caching.ExpiresEntry[]
0e8ff7d8 4042 80840 Microsoft.Win32.SafeHandles.SafeLsaMemoryHandle
66105ff4 5434 86944 System.Web.UI.StateBag
01e364c8 5686 90976 Oracle.DataAccess.Client.OpoSqlValTimeoutCtx
0e912e08 1007 91556 System.Int32[]
7a5ee300 3942 94608 System.Collections.Specialized.ListDictionary+NodeEnumerator
01e35ef8 7918 95016 Oracle.DataAccess.Client.OpoSqlRefCtx
01e353bc 6043 96688 Oracle.DataAccess.Client.MetaData
0e8f83e8 5017 100340 Microsoft.Win32.SafeHandles.SafeLocalAllocHandle
7a5ef738 6284 125680 System.Collections.Specialized.HybridDictionary
7a5ef7f4 5143 144004 System.Collections.Specialized.ListDictionary
661060d0 10908 174528 System.Web.UI.StateItem
0e91189c 533 184492 System.Char[]
6610d15c 2426 203784 System.Web.UI.WebControls.TableCell
01e362ec 7918 221704 Oracle.DataAccess.Client.OracleXmlQueryProperties
7a5ef8b4 11231 224620 System.Collections.Specialized.ListDictionary+DictionaryNode
65242390 1814 232192 System.Data.SqlClient._SqlMetaData
0e8f832c 12124 242480 Microsoft.Win32.SafeHandles.SafeTokenHandle
01e36444 7918 253376 Oracle.DataAccess.Client.OracleXmlSaveProperties
0e8f7ca8 13394 267880 Microsoft.Win32.SafeHandles.SafeWaitHandle
0e9133bc 1255 267912 System.Collections.Hashtable+bucket[]
0e8f7a98 12048 289152 System.Threading.ManualResetEvent
0e8e443c 7886 385508 System.Object[]
01e34b60 6456 387360 Oracle.DataAccess.Client.OpoConRefCtx
01e33860 6432 668928 Oracle.DataAccess.Client.OracleConnection
01e34f9c 6439 824192 Oracle.DataAccess.Client.OpoConCtx
01e34038 7918 1171864 Oracle.DataAccess.Client.OracleCommand
000dfbe0 70 5839608 Free
0e9136dc 2622 17492932 System.Byte[]
0e910c6c 56049 19472876 System.String
Total 283875 objects

If memory usage drops to 200 MB after a while, your memory is being collected, but you might still have a memory-misuse issue.
This dump doesn't show a lot, but if it was taken when the process was at 1 GB as you said, you can still use it:
1) Use !gcroot on several objects to see how they are attached to the memory tree (I would check the DB usage; you seem to have a large number of Oracle connections (6432) and a lot of other DB objects floating around), like this:
!dumpheap -mt <MT>   (the MT is the left-most number in the listing)
The objects will be listed with their memory addresses.
!gcroot <address>
An object stack will be shown, displaying how the object is attached to the memory tree. The WinDbg guide you linked walks through a sample of this process.
2) Check the performance counters (Start -> Run -> perfmon) and add these counters:
- .NET CLR Memory -> # Bytes in all Heaps
- Process -> Private Bytes
Calculate the difference between them; this is the memory consumed by unmanaged resources (such as DB client objects).
Check this in a low-memory and a high-memory scenario, and you will see whether the memory consumption is mostly managed memory (all heaps) or unmanaged.
3) If the memory is unmanaged, it is still likely to be held by managed objects, as the main application is managed, so making sure you free unmanaged resources once you are done with them is key: close DbConnections, dispose DbCommands, close file handles, release COM objects, etc.; see the sketch below.
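For point 3, here is a minimal C# sketch (not your actual data-access code; the class name, connection string and query are placeholders) of wrapping the ODP.NET objects in using blocks so that their unmanaged handles are released deterministically:

using System.Collections.Generic;
using Oracle.DataAccess.Client;

public static class DataAccessSketch
{
    public static List<string> LoadValues(string connectionString)
    {
        var results = new List<string>();

        // "using" guarantees Dispose() runs even when an exception is thrown,
        // which returns the connection to the pool and releases the unmanaged
        // OCI handles that otherwise linger (the OpoConCtx / OpoSqlRefCtx
        // entries visible in the heap dump).
        using (var con = new OracleConnection(connectionString))
        using (var cmd = con.CreateCommand())
        {
            cmd.CommandText = "SELECT some_column FROM some_table"; // placeholder query
            con.Open();

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    results.Add(reader.GetString(0));
                }
            }
        }

        return results;
    }
}

If a reader, command or connection escapes a using/finally block on any code path (for example when an exception is thrown before Close()), the counts of OracleConnection and OracleCommand instances keep growing, much like in your dump.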
Hope this helps,
Amit.

Related

Stress-ng stress memory with specific percentage

I am trying to stress a ubuntu container's memory. Typing free in my command terminal provides the following result:
free -m
total used free shared buff/cache available
Mem: 7958 585 6246 401 1126 6743
Swap: 2048 0 2048
I want to stress exactly 10% of the total available memory. Per stress-ng manual:
-m N, --vm N
start N workers continuously calling mmap(2)/munmap(2) and writing to the allocated
memory. Note that this can cause systems to trip the kernel OOM killer on Linux
systems if not enough physical memory and swap is not available.
--vm-bytes N
mmap N bytes per vm worker, the default is 256MB. One can specify the size as % of
total available memory or in units of Bytes, KBytes, MBytes and GBytes using the
suffix b, k, m or g.
Now, on my target container I run two memory stressors to occupy 10% of my memory:
stress-ng -vm 2 --vm-bytes 10% -t 10
However, the memory usage on the container never reaches 10%, no matter how many times I run it. I tried different timeout values, with no result. The closest it gets is 8.9%; it never reaches 10%. I inspect memory usage on my container this way:
docker stats --no-stream kind_sinoussi
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
c3fc7a103929 kind_sinoussi 199.01% 638.4MiB / 7.772GiB 8.02% 1.45kB / 0B 0B / 0B 7
In an attempt to understand this behaviour, I tried running the same command with an explicit size instead. In my case, I opted for 800 MB, since 7958m * 0.1 = 795.8 ≈ 800m.
stress-ng -vm 2 --vm-bytes 800m -t 15
And, I get 10%!
docker stats --no-stream kind_sinoussi
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
c3fc7a103929 kind_sinoussi 198.51% 815.2MiB / 7.772GiB 10.24% 1.45kB / 0B 0B / 0B 7
Can someone explain why this is happening?
Another question, is it possible for stress-ng to stress memory usage to 100%?
stress-ng --vm-bytes 10% will use sysconf(_SC_AVPHYS_PAGES) to determine the available memory. This sysconf() system call will return the number of pages that the application can use without hindering any other process. So this is approximately what the free command is returning for the free memory statistic.
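To see what that amounts to on a given machine, here is a short C sketch of (approximately) the same calculation; it only assumes the standard sysconf() interface mentioned above:

/* Roughly how a "--vm-bytes 10%" request is sized: 10% of the currently
 * available physical pages, not 10% of total RAM. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long pages = sysconf(_SC_AVPHYS_PAGES); /* pages available right now */
    long page_size = sysconf(_SC_PAGESIZE); /* bytes per page */
    long avail_mb = pages * page_size / (1024 * 1024);
    printf("available: %ld MB, 10%%: %ld MB\n", avail_mb, avail_mb / 10);
    return 0;
}

On the container above, 10% of the roughly 6.2-6.7 GB that free reports as free/available is about 620-670 MB, which lines up with the ~638 MiB that docker stats showed far better than the 800 MB you get by taking 10% of total memory.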
Note that stress-ng allocates the memory with mmap, so at run time the mmap'd pages may not necessarily be physically backed at the moment you check how much real memory is being used.
It may be worth trying to also use the --vm-populate option; this will try and ensure the pages are physically populated on the mmap'd memory that stress-ng is exercising. Also try --vm-madvise willneed to use the madvise() system call to hint that the pages will be required fairly soon.
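For example, a sketch combining both suggestions (assuming your stress-ng build supports these options):

stress-ng --vm 2 --vm-bytes 10% --vm-populate --vm-madvise willneed -t 15

With the pages populated and madvised up front, the MEM % reported by docker stats should track the requested size more closely.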

Recover an overwritten encrypted LUKS device

Yesterday I was very tired, and a friend of mine was being kind of annoying. I had to put a Linux ISO on a USB drive.
I had temporarily stored all of my data on one device, a 5 TB drive. This device was/is encrypted using:
cryptsetup luksFormat ...
I mounted this device to /a and did a dd of a Linux ISO onto the 5 TB drive. The second I hit Enter I realized my mistake, pressed Ctrl-C, and pulled the plug on the drive.
After this I had a complete mental breakdown and I started hyperventilating.
Anyhow, I have read that even if data is overwritten there is still a chance of reading the data below the overwritten part.
I assume this is far more complicated than simply running testdisk (maybe testdisk can do this, I have no clue).
The entire process from yesterday can be seen below:
:/a/1master-targz/iso# ls
debian-8.1.0-amd64-xfce-CD-1.iso FreeBSD-11.0-RELEASE-amd64-disc1.iso
linuxmint-18.1-cinnamon-32bit.iso nonpaelubuntu.iso
:/a/1master-targz/iso# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 4.6T 0 disk
└─sda 254:0 0 4.6T 0 crypt /a
sdb 8:16 1 28.7G 0 disk
├─sdb1 8:17 1 648M 0 part
└─sdb2 8:18 1 416K 0 part
:/a/1master-targz/iso# dd if=linuxmint-18.1-
linuxmint-18.1-cinnamon-32bit.iso linuxmint-18.1-mate-32bit.iso
linuxmint-18.1-mate-64bit.iso
:/a/1master-targz/iso# dd if=linuxmint-18.1-cinnamon-32bit.iso 5~^C
:/a/1master-targz/iso# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 4.6T 0 disk
└─sda 254:0 0 4.6T 0 crypt /a
sdb 8:16 1 28.7G 0 disk
├─sdb1 8:17 1 648M 0 part
└─sdb2 8:18 1 416K 0 part
:/a/1master-targz/iso# dd if=linuxmint-18.1-cinnamon-32bit.iso of=/dev/sda bs=512k
10+1 records in
10+1 records out
5685920 bytes (5.7 MB, 5.4 MiB) copied, 0.81171 s, 7.0 MB/s
:/a/1master-targz/iso#
:/a/1master-targz/iso# ^C
:/a/1master-targz/iso# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 1 28.7G 0 disk
├─sdb1 8:17 1 648M 0 part
└─sdb2 8:18 1 416K 0 part
sdc 8:32 0 4.6T 0 disk
└─sdc1 8:33 0 1.6G 0 part
mmcblk0 179:0 0 29.8G 0 disk
├─mmcblk0p1 179:1 0 41.8M 0 part /boot
└─mmcblk0p2 179:2 0 29.8G 0 part /
:/a/1master-targz/iso#
#somewhere here I got a panic attack
TestDisk 7.0, Data Recovery Utility, April 2015
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org
Disk /dev/sdc - 5000 GB / 4657 GiB - CHS 4769307 64 32
Partition Start End Size in sectors
>P ext4 0 63 31 4769306 63 30 9767538688 [Ghost] # doubt this is right
fdisk -l /dev/sdb
Disk /dev/sdb: 4.6 TiB, 5000981077504 bytes, 9767541167 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x69e72c6a
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 0 3289087 3289088 1.6G 17 Hidden HPFS/NTFS
# head /dev/mapper/sda
head: error reading '/dev/mapper/sda': Input/output error
Summary
What do I have:
1: An overwritten part of an encrypted LUKS device.
2: I still have the mapper from when cryptsetup opened the device (I have not touched my computer since then).
My hope:
- There may be a way of tricking my OS into seeing my encrypted drive as /dev/sda, connecting /dev/mapper/sda to this device, and then remounting the device while ignoring the destroyed part of the drive? (I really have no clue.)
- I know which image I used to overwrite the data; maybe this helps in reading the data it destroyed.
- Any other ideas are very welcome.
Clean up the question and remove the story.
"I have read that even if data is overwritten there is till a chance of reading the data below the overwritten part."
You will not be able to recover the overwritten data with your computer and drive as-is. To attempt to recover overwritten data, you need to be able to see the drive read head's actual voltage, so that the remnant of the overwritten value can be detected as a slight difference in the voltage. That means opening the hard drive and using specialized equipment. In other words: no.
You have learned the hard way to keep continual backups; it seems the "hard way" is the only way that works, and it often takes multiple lessons, so make this one count.

My R has memory leaks?

I'm using R 2.15.3 on Ubuntu 12.04 (precise) 64-bit.
If I run R in valgrind:
R -d "valgrind" --vanilla
I then exit the program using q() and I get the following report:
==7167== HEAP SUMMARY:
==7167== in use at exit: 28,239,464 bytes in 12,512 blocks
==7167== total heap usage: 28,780 allocs, 16,268 frees, 46,316,337 bytes allocated
==7167==
==7167== LEAK SUMMARY:
==7167== definitely lost: 120 bytes in 2 blocks
==7167== indirectly lost: 480 bytes in 20 blocks
==7167== possibly lost: 0 bytes in 0 blocks
==7167== still reachable: 28,238,864 bytes in 12,490 blocks
==7167== suppressed: 0 bytes in 0 blocks
==7167== Rerun with --leak-check=full to see details of leaked memory
==7167==
==7167== For counts of detected and suppressed errors, rerun with: -v
==7167== Use --track-origins=yes to see where uninitialised values come from
==7167== ERROR SUMMARY: 385 errors from 5 contexts (suppressed: 2 from 2)
Lately R has been crashing quite often, especially when I call C++ functions through Rcpp.
Could this be the reason?
Thanks!
You may be misreading the valgrind output. Most likely, there is no (obvious) leak here as R is pretty well studied as a system. Yet R is a dynamically typed language which has of course done allocations. "Definitely lost: 120 bytes" is essentially measurement error -- see the valgrind docs.
If you want to see a leak, create one, e.g., with a file like this:
library(Rcpp)
cppFunction('int leak(int N) { double *ptr = (double*) malloc(N * sizeof(double)); return 0; }')
leak(10000)
which reserves memory, even explicitly out of R's reach, and then exits. Here we get:
$ R -d "valgrind" -f /tmp/leak.R
[...]
R> leak(10000)
[1] 0
R>
==4479==
==4479== HEAP SUMMARY:
==4479== in use at exit: 35,612,126 bytes in 15,998 blocks
==4479== total heap usage: 47,607 allocs, 31,609 frees, 176,941,927 bytes allocated
==4479==
==4479== LEAK SUMMARY:
==4479== definitely lost: 120 bytes in 2 blocks
==4479== indirectly lost: 480 bytes in 20 blocks
==4479== possibly lost: 0 bytes in 0 blocks
==4479== still reachable: 35,611,526 bytes in 15,976 blocks
==4479== suppressed: 0 bytes in 0 blocks
==4479== Rerun with --leak-check=full to see details of leaked memory
==4479==
==4479== For counts of detected and suppressed errors, rerun with: -v
==4479== Use --track-origins=yes to see where uninitialised values come from
==4479== ERROR SUMMARY: 31 errors from 10 contexts (suppressed: 2 from 2)
$
Now there is a bit more of a leak (though it is still not as easily readable as one would hope). If you add the suggested flags, it will eventually point to the malloc() call we made.
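For reference, that invocation with the flags valgrind suggests (using the same /tmp/leak.R file as above) would be:

R -d "valgrind --leak-check=full --track-origins=yes" -f /tmp/leak.R

The --leak-check=full report then prints a backtrace for each "definitely lost" block, which is where the malloc() call shows up.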
Also, I have a worked example of an actual leak in an earlier version of a CRAN package in one of my 'Intro to HPC with R' slide sets. If and when there is a leak, this helps. When there is none, it is harder to see through the noise.
So in short, if your code crashes, it is probably your code's fault. Trying a minimal reproducible example is the (good) standard advice.

X Error of failed request: BadAlloc (insufficient resources for operation)

I noticed this question has been asked many times in the past and surfing the web I found many pages about it. However, it seems like the proposed solutions rarely work and, in my case, the problem does not refer to a program that I wrote. So I'll give it another try here.
I recently installed Linux Mint 14 on my laptop. After the OS was spick and span, I started to install the software I need, among it netgen (a mesh-generation program). I tried both ways: download + unpack + compile + install, and Synaptic. Either way, this is the output I get when I try to execute the program:
X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Serial number of failed request: 490
Current serial number in output stream: 491
As I said, I surfed the web, and apparently this is thought to be linked to some problem in the X server configuration. And here starts the mess. Someone says I should modify /etc/X11/xorg.conf, adding the lines
Option "Videoram" "65536"
Option "Cachelines" "1980"
Under the section "Device." Unfortunately, I have no such file, as apparently in recent distros, the X configuration file has been moved to /usr/share/X11/xorg.conf.d/* and it's now split into different files. The one about the monitor and graphics should be called 10-monitor.conf...which I don't have. I tried to create one, following the instruction at this link, and then add those lines, but nothing happened. To be fair, I'm not 100% sure I generated the file correctly since I am not sure how to detect the driver for my graphics card.
I don't know how much and which information people would need to have an idea of how to fix this problem. Here's what I see might be useful.
The output of 'lspci | grep VGA' is
01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee
ATI RS880M [Mobility Radeon HD 4200 Series]
My current /usr/share/X11/xorg.conf.d/10-monitor.conf is the following
Section "Monitor"
Identifier "Monitor0"
Modeline "1920x1080_60.00" 172.80 1920 2040 2248 2576 1080 1081 1084 1118 -HSync +Vsync
EndSection
Section "Device"
Identifier "LVSD"
Driver "fglrx" #Choose the driver used for this monitor
EndSection
Section "Screen"
Identifier "Screen0"
Device "LVDS"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Depth 24
Modes "1920x1080_60.00" "1366x768"
EndSubSection
EndSection
Under the section "Device." Unfortunately, I have no such file
Try creating your own xorg.conf file; placing it in this location will override your X settings after restarting X, or simply after restarting the computer.
mkdir -p /etc/X11/xorg.conf.d/
cp /etc/X11/xorg.conf.d/xorg.conf /etc/X11/xorg.conf.d/xorg.conf.bk # in case it exists
cp /usr/share/X11/xorg.conf.d/10-monitor.conf /etc/X11/xorg.conf.d/xorg.conf
The content of /etc/X11/xorg.conf.d/xorg.conf would then look like this (with your options added):
Section "Monitor"
Identifier "Monitor0"
Modeline "1920x1080_60.00" 172.80 1920 2040 2248 2576 1080 1081 1084 1118 -HSync +Vsync
EndSection
Section "Device"
Identifier "LVSD"
Driver "fglrx" #Choose the driver used for this monitor
Option "Videoram" "65536"
Option "Cachelines" "1980"
EndSection
Section "Screen"
Identifier "Screen0"
Device "LVDS"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Depth 24
Modes "1920x1080_60.00" "1366x768"
EndSubSection
EndSection
This could also be related to the driver you're using; there are other common drivers available, such as
Driver "fbdev"
Driver "vesa"
Driver "fglrx"
The fbdev driver supports all hardware where a framebuffer driver is available.
The vesa driver supports most VESA-compatible video cards. There are some known exceptions, and those should be listed here.
fglrx is a X.org(7x) driver for ATI (Mobility(TM)) RADEON® and (Mobility(TM)) FireGL(TM) based video cards. The driver provides hardware acceleration for 3D graphics and video playback. It includes support for dual displays, TV Output and as of version 8.21.7 also OpenGL 2.0 (GLSL).
Depending on which driver you choose, certain options, functionality, and compatibility will or will not be available; you could change the driver and test with the options you said should work.
Finally, you have hundreds of options here to play with X11.
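For example, a minimal Device section switching to the generic vesa driver (just a sketch to test with, not a tuned configuration for the Radeon HD 4200) would be:

Section "Device"
    Identifier "LVDS"
    Driver     "vesa"
EndSection

Restart X (or reboot) after each change so that the new driver and options are actually picked up.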
When I had written a lot of text in my program, I got a similar error. When I reduced the amount of text, the error disappeared. I think you should also reduce how much you are showing, or reorganise how things are laid out on screen.
I hope I could help you despite my broken English ;)

Memory Usage/Allocation in R

I'm running R to handle a file which is about 1Gb in size, filtering it into several smaller files, and then trying to print them out. I'm getting errors like this at different points throughout the process:
Error: cannot allocate vector of size 79.4 Mb
A vector of this size should be a non-issue, given how much memory I should be working with. My machine has 24 GB of memory, and the overwhelming majority of that is still free, even while the R environment with these large objects in it is up and running and I'm seeing the error above.
free -m
total used free shared buffers cached
Mem: 24213 2134 22079 0 55 965
-/+ buffers/cache: 1113 23100
Swap: 32705 0 32705
here is R's response to gc():
used (Mb) gc trigger (Mb) max used (Mb)
Ncells 673097 18.0 1073225 28.7 956062 25.6
Vcells 182223974 1390.3 195242849 1489.6 182848399 1395.1
I'm working in Ubuntu 12.04.1 LTS
here are some specs from the machine I'm using:
i7-3930K 3.20 GHz Hexa-core (6 Core)12MB Cache
ASUS P9X79 s2011 DDR3-1333MHZ,1600 upto 64GB
32GB DDR3 ( 8x4GB Module )
128GB SSD drive
Asus nVidia 2GB GTX660 TI-DC2O-2GD5 GeForce GTX 660 Ti i
this is the object I'm attempting to write to file:
dim(plant)
[1] 10409404 13
The 'plant' object is of class "data.frame". Here is one of the lines of code that prompts the error:
write.table(plant, "file.txt", quote=F, sep="\t", row.names=T, col.names=F)
Any help on solving this issue would be much appreciated.
Try with the memory.limit() function!
memory.limit(2000)
