How can I change the image memory on OpenStack?

I'm playing around with the OpenStack API. Is there a way to change the minimum memory required by an image? I initially created an image with 4GB RAM, but now I need to launch that image with only 2GB RAM. Since Linux supports changing the amount of available RAM (after a reboot), I'm assuming there must be a way to reduce the minimum RAM required by an OpenStack image.
Note: When I try to launch the image with a 2GB flavor, I get the following message:
Error: Unable to launch instance: Instance type's memory is too small
for requested image. (HTTP 400)

Yes, you can; there is an API for that.
# First list all images to note down the UUID of the image you want to change
glance index
# Check how to use the image update function
glance help image-update
# Or, if you don't know which function to use, just run
glance help
# The following command will change minimum RAM to 2000MB and minimum disk to 1GB
glance image-update <uuid or name of image> --min-ram 2000 --min-disk 1
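On newer OpenStack releases the same change can be made with the unified client from python-openstackclient; assuming that client is installed, the equivalent would be roughly:
# list images and their UUIDs
openstack image list
# set minimum RAM (in MB) and minimum disk (in GB) on the image
openstack image set --min-ram 2048 --min-disk 1 <uuid or name of image>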

Related

Using WinDbg to find a memory leak issue in an ASP.NET application

Problem Background
For the past few months we have been seeing an issue in my online ASP.NET application. The application works fine, but once or twice a day it suddenly crashes in different modules on the live server. There is no apparent issue in the code itself, and I cannot reproduce the problem on a local server.
After some research I found that the memory of the process that runs my application on IIS on the live server increases continuously, and when it reaches a certain level the application begins to crash.
Temporary solution:
Whenever we find this issue we restart the application in IIS,
which ends the process and starts a new one; after that the application works again.
Some days we need to restart the application two or three times, sometimes more.
Problem that I found: a memory leak.
What I tried after some research:
I created a dump file of my application's process from Task Manager when the application crashed.
Tool used: WinDbg.
I opened the dump in WinDbg for analysis and ran the following commands:
.loadby sos clr
!dumpheap -stat
This shows tons of references of various data types.
This is where I am stuck; I have shared the output in the image section.
Questions:
1. Am I going in the right direction to find the memory leak?
2. If my path is right, what's my next step?
3. Is WinDbg a good tool for finding this kind of issue?
Dump file link for detailed review; I took this dump when the server stopped responding.
I created a dump file of my application's process from Task Manager when the application crashed
That's not a good choice, because
you don't have much time to do so. You can only do that as long as the crash dialog is displayed. If you're too late, the application is gone.
in that state, you'll have difficulties debugging it. Instead of the original exception, it will show a breakpoint, which is used by the OS to show the dialog and collect diagnostic data.
Use WER local dumps to automatically create a crash dump when it crashes instead of doing it manually. It's much more reliable and gives you the original exception. See How do I take a good crash dump for .NET
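For reference, WER local dumps can be enabled with a few registry values. A minimal sketch, assuming the dumps should go to C:\CrashDumps (DumpType 2 requests full dumps; run from an elevated prompt):
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpFolder /t REG_EXPAND_SZ /d "C:\CrashDumps" /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpCount /t REG_DWORD /d 5 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpType /t REG_DWORD /d 2 /f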
Am I going in the right direction to find the memory leak?
Sorry, you're on the wrong track already.
Starting with !dumpheap -stat is not a good idea. Usually one would start at the lowest level, which is !address -summary. It will give you an indication of whether it's a managed memory leak or a native memory leak. If it's a managed leak, you can continue with !dumpheap -stat.
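In WinDbg terms, the triage would look roughly like this (the parentheses are annotations, not part of the commands):
!address -summary    (where does the memory go: native Heap, <unknown> - often .NET, Image, Stack?)
.loadby sos clr      (if it looks managed, load the SOS extension that ships with the CLR)
!dumpheap -stat      (then get per-type statistics of the managed heap)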
If my path is right, what's my next step?
Even if it's not the right path, it's a good idea that you learn how to figure out that you're on the wrong path. So, how do I know?
Looking at your output of !dumpheap -stat, you can see
[...]
111716 12391360 System.String.
This tells you there are ~110,000 different strings, using 12 MB of memory. It also tells you that everything else takes less than 12 MB. Look at the other sizes and you'll find that .NET is not the reason for your OutOfMemoryException: all types together use less than 50 MB.
If there were a managed leak, you would look for the paths through which such objects are still referenced, which make the garbage collector think they cannot be freed. The command for that is !gcroot.
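The pattern would be roughly the following, where the address placeholder is one taken from the !dumpheap output (annotations in parentheses again):
!dumpheap -type System.String    (lists instances of a type together with their addresses)
!gcroot <address>                (shows the reference chain that keeps one instance alive)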
Is WinDbg a good tool for finding this kind of issue?
It is possible, but WinDbg is not the best tool. Use a memory profiler instead; that's a dedicated tool for memory leaks and typically has much better usability. Unfortunately, you'll need to decide whether you need a managed memory profiler, a native memory profiler, or both.
I once wrote how to use WinDbg to track down .NET OutOfMemoryException. You'll find a chart there which gives you ideas on how to proceed in different situations.
In your dump I see 2 TB of <unknown> memory, which could be .NET, but needn't be. Still, these 2 TB are likely the cause of the OOM, because the rest is less than 350 MB in size.
Since clr is in the list of loaded modules, we can check !dumpheap -stat as you did. But there are not many objects using memory.
!eeheap -gc shows that there are 8 heaps, corresponding to the 8 processors of your machine, for parallel garbage collection. The largest individual heap is 45 MB, the total is 249 MB. This roughly matches the sum of !dumpheap. Conclusion: .NET is not the culprit.
Let's check the special cases:
Presence of MSXML
Bitmaps
Calls to HeapAlloc() which are so large that they are directly forwarded to VirtualAlloc().
Direct calls to VirtualAlloc()
MSXML is not present: lm m msxml* does not produce output.
There are no Bitmaps: !dumpheap -stat -type Bitmap
Heap allocations larger than 512 kB: !heap -stat. Here's a truncated part of the output:
0:000> !heap -stat
_HEAP 0000018720bd0000
Segments 00000006
Reserved bytes 0000000001fca000
Committed bytes 0000000001bb3000
VirtAllocBlocks 00000002
VirtAlloc bytes 00000312cdc4b110
_HEAP 0000018bb0fe0000
Segments 00000005
Reserved bytes 0000000000f0b000
Committed bytes 0000000000999000
VirtAllocBlocks 00000001
VirtAlloc bytes 0000018bb0fe0110
As you can see, there are 3 blocks that went to VirtualAlloc. The size is somewhat unrealistic:
0:000> ? 00000312cdc4b110
Evaluate expression: 3379296514320 = 00000312`cdc4b110
0:000> ? 0000018bb0fe0110
Evaluate expression: 1699481518352 = 0000018b`b0fe0110
That would be a total of 3.3 TB + 1.7 TB = 5 TB, not 2 TB. Now, it may happen that this is a bug of !address, but 4 TB is not a common overflow point.
With !heap -a 0000018720bd0000 you can see the 2 virtual allocs:
Virtual Alloc List: 18720bd0110
0000018bac70c000: 00960000 [commited 961000, unused 1000] - busy (b), tail fill
0000018bad07b000: 00960000 [commited 961000, unused 1000] - busy (b), tail fill
And with !heap -a 0000018bb0fe0000 you can see the third one:
Virtual Alloc List: 18bb0fe0110
0000018bb1043000: 00400000 [commited 401000, unused 1000] - busy (b), tail fill
These are all relatively small blocks: roughly 9.8 MB for the first two and 4.2 MB for the third.
For the last part, direct calls to VirtualAlloc(), you need to go back to the level of !address. With !address -f:VAR -c:".echo %1 %3" you can see the address and size of every <unknown> region. You'll find a lot of entries there: many of small size, some which could be the .NET heaps, a few 2GB ones, and one really large allocation.
The 2GB ones:
0x18722070000 0x2d11000
0x18724d81000 0x7d2ef000
0x187a2070000 0x2ff4000
0x187a5064000 0x7d00c000
0x18822070000 0x2dfe000
0x18824e6e000 0x7d202000
0x188a2070000 0x2c81000
0x188a4cf1000 0x7d37f000
0x18922070000 0x2d13000
0x18924d83000 0x7d2ed000
0x189a2070000 0x2f5a000
0x189a4fca000 0x7d0a6000
0x18a22070000 0x2c97000
0x18a24d07000 0x7d369000
0x18aa2070000 0x2d0c000
0x18aa4d7c000 0x7d2f4000
It is likely that these are the .NET heaps (committed part + reserved part).
The large one:
0x7df600f57000 0x1ffec56a000
The information about it:
0:000> !address 0x7df600f57000
Usage: <unknown>
Base Address: 00007df6`00f57000
End Address: 00007ff5`ed4c1000
Region Size: 000001ff`ec56a000 ( 2.000 TB)
State: 00002000 MEM_RESERVE
Protect: <info not present at the target>
Type: 00040000 MEM_MAPPED
Allocation Base: 00007df5`ff340000
Allocation Protect: 00000001 PAGE_NOACCESS
It looks like a 2TB memory mapped file which is unused (and therefore reserved).
I don't know what your application is doing. This is really where I need to stop the analysis. I hope this was helpful and you can draw your conclusions and fix the issue.

Out of memory with large data in Codename One

My Codename One application downloads around 16,000 records of data (approx. 10 fields in each record).
On my Android phone (OS 6.0, 2GB RAM) it's able to load 8,000 to 9,000 records, but then it shows an out-of-memory error.
From the trace, it looks like it ran out of the heap memory allocated to the app.
Any suggestion on the ideal way to handle that large an amount of data, please?
Here is the log file
The amount of RAM on the phone doesn't mean much. The OS takes about half and then divides the rest among the various apps running in parallel. You would typically have much less; see What is the maximum amount of RAM an app can use?
You need to review your code and check what is eating up memory. 16k records of 1KB each would be 16MB, which probably shouldn't crash an app, so the question is where the memory is taken. I would suggest reading the performance section of the developer guide to figure out memory usage.
This might not apply to your situation, but would it be possible to only download x number of records at a time? Then, when the user takes some action (scrolls, hits next page, etc.), it loads the next batch. Codename One has a great endless scroller implementation. See here for an example - https://www.codenameone.com/blog/property-cross-revisited.html

Script to automatically grow LVM partition CentOS

I'm looking for a script to check the size of a particular LVM volume on CentOS 6.5 and when it reaches a certain threshold, have it automatically extend the partition and online re-size the file system.
I have this particular machine monitored, and could do it manually, but I saw a script once to do just this.
I have plenty of disk space on the physical volumes but, since it's easier to expand when needed than reduce later, I'd rather expand my logical partitions only when they start to fill up. There are several logical volumes on this machine, but only one that regularly grows.
Any tips are appreciated, and if the overall best thing to do is just to expand the volume manually when the time comes, that advice is welcome as well!
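For what it's worth, here is a minimal sketch of such a script. The volume path, mount point, threshold, and step size are hypothetical placeholders, and it assumes lvextend's -r (--resizefs) option, which grows the filesystem online right after extending the LV:
#!/bin/bash
# Grow an LVM logical volume when its filesystem crosses a usage threshold.
LV=/dev/vg_main/lv_data   # logical volume to grow (placeholder)
MOUNT=/data               # its mount point (placeholder)
THRESHOLD=85              # percent used that triggers a grow
STEP=+5G                  # how much to add each time

used=$(df -P "$MOUNT" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
if [ "$used" -ge "$THRESHOLD" ]; then
    # extend the LV and resize the filesystem online in one step
    lvextend -r -L "$STEP" "$LV"
fi
Run it from cron (say, every 15 minutes); it only acts once the threshold is crossed.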

Virtuoso 7 crashes while bulk loading

I am trying to create a local SPARQL endpoint for Freebase for running some local experiments. While using Virtuoso 7, I regularly see the server getting killed by the OOM killer. I have followed all the required steps as mentioned here. I have also made the required changes to my virtuoso.ini file as mentioned in RDF Performance Tuning.
My system configuration is:
8 CPUs @ 2.9 GHz
16 GB RAM
I also have enough disk space.
Regarding data dumps, I have split the freebase data dump (23GB gzipped, approx 250 GB uncompressed) into 10 smaller gzipped files containing 200,000,000 triples each.
Following are the changes I made to virtuoso.ini
NumberOfBuffers = 1360000
MaxDirtyBuffers = 1000000
MaxCheckpointRemap = 340000 # (1/4th of NumberOfBuffers)
Along with this I have set vm.swappiness = 10, as mentioned in the RDF Performance Tuning guide above.
Am I missing something obvious?
P.S.:
I did try virtuoso-opensource-6.1 too, but it appeared to be too slow.
One interesting observation was that during the bulk-loading process, virtuoso-6.1's memory consumption rose very slowly, but that might be because the indexing itself was too slow.
Another observation was that virtuoso-6.1 occupies almost negligible memory at start-up (on the order of 500MB), whereas virtuoso-7 starts with approx. 6500 MB and grows quickly.
Any help in this regard would be highly appreciated.
The number of buffers you are using is a little too high. Do not forget that some memory is also consumed by the OS and other processes.
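As a rough illustration, you could size the buffers as if the box had only 8 GB free, leaving the rest to the OS and the other processes; these numbers follow the usual sizing table in the stock virtuoso.ini comments and are a starting point, not a measured recommendation:
NumberOfBuffers = 680000
MaxDirtyBuffers = 500000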
Which exact version do you use (development or stable branch)?
Do you use disk striping?
I loaded Freebase into Virtuoso 7 too, but I used smaller files: circa 260 gzipped files, 10 million triples each (circa 100 MB per file). A commit is executed after every file load.
Maybe it would be easier for you to use images with Freebase already loaded into Virtuoso.

How to Save an Image of a Large Flex Component (EX: 25000px by 3000px @ 72dpi)

My application displays a large custom tree-like structure to the user that can eventually grow to massive proportions, like the dimensions listed in the question. I allow users to export the image with the following line of code, tied to a button click event:
var image:ImageSnapshot = ImageSnapshot.captureImage(this, 72, new PNGEncoder(), false);
I've managed to export images close to the dimensions listed, but around there it starts to throw the error message below after spinning for close to 15 seconds:
Error: Error #1000: The system is out of memory.
at flash.utils::ByteArray/writeBytes()
at mx.graphics::ImageSnapshot$/mergePixelRows()[E:\dev\4.x\frameworks\projects\framework\src\mx\graphics\ImageSnapshot.as:511]
at mx.graphics::ImageSnapshot$/captureAll()[E:\dev\4.x\frameworks\projects\framework\src\mx\graphics\ImageSnapshot.as:482]
at mx.graphics::ImageSnapshot$/captureImage()[E:\dev\4.x\frameworks\projects\framework\src\mx\graphics\ImageSnapshot.as:318]
at vertical/saveChart()[C:\devel\workspace\vertical\src\CustomObject.mxml:501]
at vertical/__saveImageBtn_click()[C:\devel\workspace\vertical\src\CustomObject.mxml:574]
Is the Flash Player plugin for my browser running out of memory? I noticed in my task manager that it got up to about 1.2GB of memory usage (I have 4GB on my system). If that is the case, is it possible to limit the memory usage for a given function like the ImageSnapshot.captureImage() call above?
Is there maybe a way to generate the component into 2 or 4 ImageSnapshot objects and piece them together afterward?
Any advice would be greatly appreciated.
I believe the latest Flash Player 11 has a new feature to solve this issue:
"Enhanced high resolution bitmap support — BitmapData objects are no longer limited to a maximum resolution of 16 megapixels (16,777,215 pixels), and maximum bitmap width/height is no longer limited to 8,191 pixels, enabling the development of apps that utilize very large bitmaps." from this PDF
If you are using BitmapData, it makes a difference which Flash Player you are targeting:
Version vs. maximum bitmap size
Flash Player 9 and below: 2880 x 2880 px
Flash Player 10: 4096 x 4096 px
Flash Player 11: unlimited
I don't know exactly what you are trying to do with this huge capture, but I would recommend using tiles: break it down into chunks of relatively small bitmaps and create them separately, so you never have to hold that huge amount of data in memory at once.
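A minimal sketch of the tiling idea, where target stands for the component being captured (this in the question's code); each 2048px-wide strip stays under the older players' bitmap limits, and the strips would still have to be encoded and stitched together separately:
import flash.display.BitmapData;
import flash.geom.Matrix;

var tileWidth:int = 2048;
var tiles:Array = [];
for (var x:int = 0; x < target.width; x += tileWidth) {
    var w:int = int(Math.min(tileWidth, target.width - x));
    var bd:BitmapData = new BitmapData(w, int(target.height), true, 0x00000000);
    var m:Matrix = new Matrix();
    m.translate(-x, 0);   // shift the component left so this strip lands at (0,0)
    bd.draw(target, m);   // rasterize only this strip
    tiles.push(bd);       // encode each strip (e.g. with PNGEncoder), then stitch the results
}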
Anyway, it would be nice to know whether it is possible to encode an image that big at all without Error #1000 out-of-memory errors.
