Razr i: Using external memory as RAM - Motorola

So I've read a bit about using your external memory as the actual flash memory on Android, but I'm not sure how to do that on the Razr i. I'm not that familiar with hacking and the technology involved, so I need your help. How can I tell whether the device has this capability?
I wouldn't use it so much for storing data. I'm considering buying a 32GB microSD mainly for speeding things up and using more apps at the same time. Is it worth it? Can I use it as RAM?

No, you can't use storage memory as RAM. The only way to optimize your memory usage is to disable or stop cached and running processes/services on the device.


Are Google Cloud Disks OK to use with SQLite?

Google Cloud disks are network disks that behave like local disks. SQLite expects a local disk so that locking and transactions work correctly.
A. Is it safe to use Google Cloud disks for SQLite?
B. Do they support the right locking mechanisms? How is this done over the network?
C. How do disk IOPS and throughput relate to SQLite performance? If I have a 1GB SQLite file with queries that take 40ms to complete locally, how many IOPS would this use? Which disk performance tier should I choose (standard, balanced, SSD)?
Thanks.
Related
https://cloud.google.com/compute/docs/disks#pdspecs
Persistent disks are durable network storage devices that your instances can access like physical disks
https://www.sqlite.org/draft/useovernet.html
the SQLite library is not tested in across-a-network scenarios, nor is that reasonably possible. Hence, use of a remote database is done at the user's risk.
Yeah, the article you referenced essentially stipulates that since the reads and writes are "simplified" at the OS level, they can be unpredictable, resulting in "lost in translation" issues when going from local to network to remote.
They also point out that it may very well work totally fine in testing, and perhaps in production for a time, but there are known side effects which are hard to detect and mitigate against -- so it's a slight gamble.
Again, the implementation they are describing is not Google Cloud Disk, but simply a generic remote networked arrangement.
My point is more that Google Cloud Disk may be "virtual" rather than purely network-attached storage... to my mind that is where to look, and evaluate it from there.
Check out this thread for some additional insight into the issues: https://serverfault.com/questions/823532/sqlite-on-google-cloud-persistent-disk
Additionally, I was looking around and found this thread, where one poster suggests using SQLite as a read-only asset and deploying updates through a far more controlled process (a sketch of that pattern follows the link):
https://news.ycombinator.com/item?id=26441125
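To illustrate that read-only pattern, here is a minimal sketch using the SQLite C API (the file name and query are placeholders, not from the linked thread). Opening with SQLITE_OPEN_READONLY means no locks are escalated and no journal or WAL file is ever written, which sidesteps most of the network-disk concerns:

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db = NULL;
        /* Strictly read-only: no lock upgrades, no journal/WAL writes. */
        int rc = sqlite3_open_v2("assets.db", &db, SQLITE_OPEN_READONLY, NULL);
        if (rc != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            sqlite3_close(db);
            return 1;
        }

        sqlite3_stmt *stmt = NULL;
        rc = sqlite3_prepare_v2(db, "SELECT count(*) FROM sqlite_master;",
                                -1, &stmt, NULL);
        if (rc == SQLITE_OK && sqlite3_step(stmt) == SQLITE_ROW)
            printf("schema objects: %d\n", sqlite3_column_int(stmt, 0));
        sqlite3_finalize(stmt);

        sqlite3_close(db);
        return 0;
    }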
The persistent disk acts like a normal disk in your VM and is only accessible to one VM at a time, so it's safe to use; you won't lose any data.
For the performance part, you just have to test it for your specific workload. If you have plenty of spare RAM, and your database is read-heavy and seldom written, the whole database will be cached by the OS (Linux) disk cache, so it will be crazy fast, even on HDD storage.
But if you are low on spare RAM, then the database won't be in the OS cache, and writes are always synced to disk, which causes lots of I/O operations.
In that case, use the highest-performing disk you can / are willing to afford.
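To make "just test it" concrete, here is a minimal sketch of such a test using the SQLite C API (the file name and iteration count are arbitrary). Each autocommit INSERT forces a sync to disk, so the per-commit time roughly reflects the disk's write-sync latency -- the number worth comparing across the standard, balanced, and SSD disk types:

    #include <stdio.h>
    #include <time.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db;
        if (sqlite3_open("bench.db", &db) != SQLITE_OK) return 1;
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(v INTEGER);",
                     NULL, NULL, NULL);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 100; i++) {
            /* Each statement is its own transaction, so every one of
               them is synced to disk: the worst case for a network disk. */
            sqlite3_exec(db, "INSERT INTO t VALUES (1);", NULL, NULL, NULL);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("100 synced inserts: %.1f ms (%.2f ms/commit)\n",
               ms, ms / 100.0);

        sqlite3_close(db);
        return 0;
    }

Build with gcc bench.c -lsqlite3 on the VM and run it once per disk type; repeat with your real schema and write pattern before deciding.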

Can I tell if an application has a memory leak based only on its memory consumption?

I was told that on one of our environments an ASP.NET application consumes up to 64GB of RAM. I don't know how long it takes to consume that much, and I have not tried to monitor this app with any kind of tool yet, but I suspect a memory leak. My colleague said that maybe it is not, and that it's possible the GC decides not to collect because the machine still has plenty of RAM left.
From what I understand, it's not possible to use that much RAM without some extensive caching built in, and I have not seen any in this application's source code. I know the GC can decide to grow Generation 0 when it sees that it needs more space, but in order to consume 64GB, that memory must be held by either Gen2 or the LOH, right? This is a Business Intelligence app, and it does store some data in Session between postbacks so that it does not hit the data warehouse every time, but 64GB of RAM consumed still seems suspicious to me.

How to execute large code in less RAM?

I have a doubt: in all microcontrollers, the flash memory is much larger than the RAM (for example, the ATmega16 has 16 KB of flash but only 1 KB of RAM).
So how exactly is the code executed? Does the CPU execute directly from the flash itself? If yes, then what is the use of that small RAM?
The flash memory is for storing the programs that you want to execute. They change seldom, so flash memory is appropriate.
The RAM is for the memory required during execution of the program: stack (local variables), heap (malloc), etc.
AVRs use a Harvard architecture that strictly separates program and data memory.
Unlike a PC, which first loads a program into RAM and executes it from there, the code is executed directly from program memory, and only runtime data is stored in RAM.
Be aware that declaring a variable const does not necessarily place it in flash. Whether or not it would be best off in flash, the compiler does not do this automatically.
For an example, check out the following link for avr-gcc:
http://www.nongnu.org/avr-libc/user-manual/pgmspace.html
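As a small sketch of what that page describes (assuming avr-gcc with avr-libc; the table contents here are made up), data can be kept in flash with PROGMEM and read back through the pgm_read_* helpers, so it never occupies the scarce SRAM:

    #include <stdint.h>
    #include <avr/pgmspace.h>

    /* The table lives in flash (program memory), not in SRAM. */
    static const uint8_t sine_table[8] PROGMEM = {
        0, 49, 90, 117, 127, 117, 90, 49
    };

    uint8_t sine_lookup(uint8_t i) {
        /* A plain array access would read from an SRAM address; flash
           needs an explicit pgm_read_byte() on the table's address. */
        return pgm_read_byte(&sine_table[i & 7]);
    }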

OpenCL shared memory optimisation

I am solving a 2d Laplace equation using OpenCL.
The global memory access version runs faster than the one using shared memory.
The algorithm used for shared memory is the same as that in the OpenCL Game of Life code:
https://www.olcf.ornl.gov/tutorials/opencl-game-of-life/
If anyone has faced the same problem, please help. If anyone wants to see the kernel, I can post it.
If your global-memory version really runs faster than your local-memory version (assuming both are equally optimized for the memory space they use), maybe this paper could answer your question.
Here's a summary of what it says:
Use of local memory in a kernel adds another constraint on the number of concurrent workgroups that can run on the same compute unit.
Thus, in certain cases, it may be more efficient to remove this constraint and live with the higher latency of global memory accesses. More wavefronts (warps in NVIDIA parlance; each workgroup is divided into wavefronts/warps) running on the same compute unit allow your GPU to hide latency better: if one is waiting for a memory access to complete, another can compute in the meantime.
In the end, each kernel instance will take more wall-clock time, but your GPU will be completely busy because it is running more of them concurrently.
No, it doesn't. It only says that, ALL OTHER THINGS BEING EQUAL, an access from local memory is faster than an access from global memory. It seems to me that the global accesses in your kernel are being coalesced, which yields better performance.
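For reference, here is a minimal sketch of the two variants for one Jacobi step of the 2D Laplace equation. This is not the poster's kernel; the row-major layout, the 16x16 tile, and the assumption that the grid dimensions are multiples of the tile size are all mine. The local-memory version only wins if the data reuse outweighs the extra barrier and the reduced occupancy discussed above:

    #define TILE 16  /* work-group size must be TILE x TILE */

    /* Global-memory version: each work-item reads its 4 neighbours directly.
       Neighbouring work-items read adjacent addresses, so rows coalesce well. */
    __kernel void jacobi_global(__global const float *in, __global float *out,
                                int w, int h)
    {
        int x = get_global_id(0), y = get_global_id(1);
        if (x > 0 && x < w - 1 && y > 0 && y < h - 1)
            out[y * w + x] = 0.25f * (in[y * w + x - 1] + in[y * w + x + 1] +
                                      in[(y - 1) * w + x] + in[(y + 1) * w + x]);
    }

    /* Local-memory version: stage a (TILE+2)^2 tile including a halo,
       synchronize, then compute from the tile. Assumes w and h are
       multiples of TILE so every work-item maps to a grid cell. */
    __kernel void jacobi_local(__global const float *in, __global float *out,
                               int w, int h)
    {
        __local float tile[TILE + 2][TILE + 2];
        int x = get_global_id(0), y = get_global_id(1);
        int lx = get_local_id(0) + 1, ly = get_local_id(1) + 1;

        tile[ly][lx] = in[y * w + x];               /* centre cell */
        if (lx == 1    && x > 0)     tile[ly][0]        = in[y * w + x - 1];
        if (lx == TILE && x < w - 1) tile[ly][TILE + 1] = in[y * w + x + 1];
        if (ly == 1    && y > 0)     tile[0][lx]        = in[(y - 1) * w + x];
        if (ly == TILE && y < h - 1) tile[TILE + 1][lx] = in[(y + 1) * w + x];
        barrier(CLK_LOCAL_MEM_FENCE);

        if (x > 0 && x < w - 1 && y > 0 && y < h - 1)
            out[y * w + x] = 0.25f * (tile[ly][lx - 1] + tile[ly][lx + 1] +
                                      tile[ly - 1][lx] + tile[ly + 1][lx]);
    }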
Using shared memory (memory shared with the CPU) isn't always going to be faster. On a modern graphics card, it would only be faster when the GPU and CPU are both performing operations on the same data and need to share information with each other, since memory wouldn't have to be copied from the card to the system and vice versa.
However, if your program is running entirely on the GPU, it could very well execute faster by running in local memory (GDDR5) exclusively, since the GPU's memory will not only likely be much faster than your system's, but there will also be no latency from reading memory over the PCI-E lane.
Think of the graphics card's memory as a kind of "L3 cache" and your system's memory as a resource shared by the entire system, which you only use when multiple devices need to share information (or if your cache is full). I'm not a CUDA or OpenCL programmer; I've never even written Hello World in these frameworks. I've only read a few white papers -- it's just common sense (or maybe my Computer Science degree is useful after all).

Throttle an application on the basis of per-disk usage or CPU usage

Can anyone recommend a way in which I can throttle an application based on the current disk usage or even CPU usage?
The application I am writing scans files on the hard disk and will be pretty hard disk intensive in itself.
Can anyone recommend a way in which I can throttle down my application (or even pause it, for that matter) when disk usage is high (i.e., the user is running a very HDD- or CPU-intensive app)? Basically, my application shouldn't hamper the user's productivity. I know this is a pretty big research topic in itself, but I at least need some cues on how to approach it.
Help in any form is highly appreciated. :)
Thanks.
Samrat.
Vista added I/O prioritization to Windows, so if you're targeting that platform you can just let the OS take care of it; a sketch follows below.
For other operating systems, maybe measure the I/O latency, and if it is over some predefined threshold, sleep your disk scanner for a bit?
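A minimal sketch of the Vista-era approach (the Win32 calls are real; the scan loop is a placeholder). Entering background processing mode tells the scheduler to deprioritize the process's I/O, memory, and CPU so the user's foreground work is barely affected:

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Ask Vista+ to lower this process's I/O, memory and CPU
           priority for the duration of the scan. */
        if (!SetPriorityClass(GetCurrentProcess(),
                              PROCESS_MODE_BACKGROUND_BEGIN)) {
            fprintf(stderr, "background mode unavailable (pre-Vista?): %lu\n",
                    GetLastError());
        }

        /* ... scan files here; a short Sleep() between batches also
           gives the OS room to schedule the user's work ... */

        /* Restore normal priority once the scan is done. */
        SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END);
        return 0;
    }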
Take a look at this ("How can I programmatically limit my program’s CPU usage to below 70%?") and this ("Win32 Thread scheduling#The Larry Osterman answer")
