Change VGA shared memory on a laptop - Intel

I have tried to change this in the BIOS settings but cannot find the option.
My laptop is an HP 430.

From the screenshot I see that you use a 32-bit version of Windows 7. Note that 32-bit versions of Windows can use at most about 3GB of RAM in most cases. Subtract from that the amount of memory allocated to the integrated graphics card and you get the total amount of RAM usable by the system. If you want your computer to be able to use 4GB of RAM or more, I recommend installing a 64-bit version of Windows.
Regarding changing the memory allocated to Intel HD Graphics: unfortunately this is rarely possible on laptops, but I recommend you check your BIOS thoroughly, as this article suggests. You can also try contacting your laptop's manufacturer and asking whether this is possible.
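If you want to verify how much physical RAM Windows actually sees after the integrated graphics takes its share, here is a minimal Python sketch of mine (an illustration, not from the original answer) that calls the Windows GlobalMemoryStatusEx API via the standard library:

    import ctypes
    from ctypes import wintypes

    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [
            ("dwLength", wintypes.DWORD),
            ("dwMemoryLoad", wintypes.DWORD),
            ("ullTotalPhys", ctypes.c_uint64),
            ("ullAvailPhys", ctypes.c_uint64),
            ("ullTotalPageFile", ctypes.c_uint64),
            ("ullAvailPageFile", ctypes.c_uint64),
            ("ullTotalVirtual", ctypes.c_uint64),
            ("ullAvailVirtual", ctypes.c_uint64),
            ("ullAvailExtendedVirtual", ctypes.c_uint64),
        ]

    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

    # ullTotalPhys is the RAM the OS can see: on 32-bit Windows this is
    # already reduced by the address-space limit and the iGPU allocation.
    print(f"Usable physical RAM: {status.ullTotalPhys / 2**30:.2f} GiB")
    print(f"Available right now: {status.ullAvailPhys / 2**30:.2f} GiB")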

Related

Stop OpenCL support for a GPU

I have two GPUs installed on my machine. I am working with a library that uses OpenCL acceleration, supports only one GPU, and is not configurable: I cannot tell it which one to use. For some reason the library chooses the GPU that I do not want.
How can I delete/stop/deactivate this GPU from being supported as an OpenCL device?
I want to do this so that only one supported GPU remains and the library is forced to use it.
Note: any option that involves changing or editing the library is not available to me.
P.S. I am on Windows 10 with an Intel processor, an Intel GPU, and an NVIDIA GPU.
On Windows, the OpenCL ICD loader uses registry entries to find all of the installed OpenCL platforms.
Solution: using RegEdit you can back up and then remove the entry for the GPU you do not want to use. The registry location is HKEY_LOCAL_MACHINE\SOFTWARE\Khronos\OpenCL\Vendors.
Reference: https://www.khronos.org/registry/cl/extensions/khr/cl_khr_icd.txt
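Before removing anything, it can help to list what is currently registered (and to export the key as a backup via RegEdit's File > Export). A small illustrative Python sketch using the standard-library winreg module:

    import winreg

    # Where the OpenCL ICD loader looks up installed platforms.
    VENDORS = r"SOFTWARE\Khronos\OpenCL\Vendors"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, VENDORS) as key:
        index = 0
        while True:
            try:
                # The value name is the path of a vendor ICD DLL;
                # the DWORD data is 0 for an enabled entry.
                name, data, _ = winreg.EnumValue(key, index)
                print(f"{name} = {data}")
                index += 1
            except OSError:
                break  # no more values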

Use OpenCL on AMD APU but use discrete GPU for the X server

Is it possible to enable OpenCL on an A10-7800 without using it for the X server? I have a Linux box that I use for GPGPU programming. A discrete GeForce 740 card is used both for the X server and for running the OpenCL and CUDA programs I develop. I would also like the option of running OpenCL code on the APU's integrated GPU cores.
Everything I've read so far implies that if I want to use the APU for OpenCL, I have to install Catalyst, and, AFAIK, that means using it for the X server. Is this true? Would there be an advantage to using the APU for my X server and using the GeForce solely for GPGPU code?
I had a similar goal, so I built a system with an AMD APU (4 CPU cores + 6 GPU cores) and an NVIDIA discrete graphics board. Sorry to say, it wasn't easy to make it work: I asked a question on Ask Ubuntu, got no answers, experimented a lot with the hardware and software setup, and finally posted my own answer to my question.
I'll describe my setup again here - who knows what might happen to my self-answered question on Ask Ubuntu?
First, I had to enable the integrated graphics hardware via a BIOS flag. This flag is called IGFX Multi-Monitor on my motherboard (ASUS A88X-PRO).
The second step was to find the right mix of low-level graphics driver and high-level OpenCL implementation. The low-level driver for AMD processors is called AMD Catalyst and its module is named fglrx. I didn't install this driver from the Ubuntu software center - instead I used version 15.302, downloaded directly from the AMD site. I had to install a significant number of prerequisites for this driver. The most important finding was that I had to skip running the aticonfig command after the fglrx installation - that command configures the X server to use this driver for graphics output, and I didn't want that.
Then I installed the AMD SDK version 3.0 (release 130.136; earlier releases didn't work with my fglrx), which is AMD's OpenCL implementation. The clinfo command now reports both CPUs and GPUs with the correct number of cores.
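If you prefer to check this programmatically rather than with clinfo, here is a short illustrative sketch using the third-party pyopencl package (an assumption of mine, not part of the original setup); it also shows how to build a context on the AMD platform explicitly so the NVIDIA card stays free for graphics:

    import pyopencl as cl  # third-party: pip install pyopencl

    # List every platform and its devices, roughly what clinfo shows.
    for plat in cl.get_platforms():
        print(f"{plat.name} ({plat.vendor})")
        for dev in plat.get_devices():
            print(f"  {dev.name}: {dev.max_compute_units} compute units")

    # Build a context on the AMD platform explicitly; the exact
    # platform name string may vary between driver releases.
    amd = next(p for p in cl.get_platforms() if "AMD" in p.name)
    ctx = cl.Context(devices=amd.get_devices(device_type=cl.device_type.GPU))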
So I have a hybrid AMD processor supported by OpenCL, with all graphics output handled by a discrete NVIDIA graphics card.
Good luck!
I maintain a Linux server (openSUSE, but the distribution shouldn't matter) containing both an NVIDIA and a discrete AMD GPU. It's headless, so technically I don't know whether the X server will create additional problems, but I don't think so. You can always configure xorg.conf to use exactly the driver you want. Or, for that matter, install Catalyst but delete the X server driver file itself, which is not the same file that OpenCL needs.
There is one problem with a mixed-vendor system that I noticed, however: AMD's OpenCL driver (ICD) will go spelunking for a libGL.so library, I guess in order to do OpenCL/OpenGL interop. If it finds any of the NVIDIA-supplied libGL.so files, it gets confused and hangs - at least on my machine. I "solved" this by deleting all libGL.so files (I don't need them on a headless compute server), but that might not be an acceptable solution for you. Maybe you can arrange things so that the AMD-supplied libGL.so takes precedence, possibly by installing the AMD driver last.

Will R take advantage of 64GB memory on Mac Pro running OSX?

I've been using R 3.1.2 on an early-2014 13" MacBook Air with 8GB of RAM and a 1.7GHz Intel Core i7, running OS X Mavericks.
Recently, I've started to work with substantially larger data frames (2+ million rows and 500+ columns) and I am running into performance issues. In Activity Monitor, I'm seeing virtual memory sizes of 64GB, 32GB paging files, etc. and the "memory pressure" indicator is red.
Can I "throw more hardware" at this problem? Since the MacBook Air tops out at 8GB of physical memory, I was thinking about buying a Mac Pro with 64GB of memory. Before I spend $5K+, I wanted to ask whether there are any inherent limitations in R other than the ones I've read about here: R Memory Limits, or whether anyone who has a Mac Pro has experienced issues running R/RStudio on it. I've searched Google and haven't come up with anything specific about running R on a Mac Pro.
Note that I realize I'll still be using 1 CPU core unless I rewrite my code. I'm just trying to solve the memory problem first.
Several thoughts:
1) It's a lot more cost-effective to use a cloud service like https://www.dominodatalab.com (not affiliated). Amazon AWS would also work; the benefit of Domino is that it takes the work out of managing the environment so you can focus on the data science.
2) You might want to redesign your processing pipeline so that not all your data needs to be loaded in memory at the same time (soon you will find you need 128GB, and then what?). Read up on memory mapping, using databases, and separating your pipeline into several steps that can be executed independently of each other (googling brought up http://user2007.org/program/presentations/adler.pdf); see the sketch below. Running out of memory is a common problem when working with real-life datasets, and throwing more hardware at the problem is not always your best option (though sometimes it really can't be avoided).
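To make the chunking idea concrete, here is a small illustrative sketch in Python with pandas (the asker is on R, where packages such as ff, covered in the talk linked above, play a similar role); the file name and the value column are placeholders:

    import pandas as pd

    # Stream a large CSV in fixed-size chunks instead of loading it
    # whole; "huge.csv" and the "value" column are placeholders.
    total = 0.0
    rows = 0
    for chunk in pd.read_csv("huge.csv", chunksize=100_000):
        total += chunk["value"].sum()
        rows += len(chunk)

    print(f"mean of 'value' over {rows} rows: {total / rows:.4f}")

Because each chunk is reduced immediately, peak memory stays at roughly one chunk no matter how large the file grows.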

OSX resource allocation?

I am in the process of converting all .avi files to .mp4 (for compatibility with my PS3 and iPad).
I noticed today that both of the applications I've used so far (MPEG Streamclip & HandBrake) only use around 30% of my CPU and only about 300MB of my available RAM (4GB installed). Why is this? Is there any way to speed these conversions up by allowing the applications to use more of the available resources?
I am currently running Mavericks on an MBP with a 2.4GHz i5, 4GB RAM, a 256MB NVIDIA GeForce GT 330M, and a hybrid drive, so I don't know where the bottleneck, if any, would be.
Thanks!
Stack Overflow is a place to discuss coding, programming, and software development.
Head over to http://www.superuser.com/ and ask there. They're the people you want to talk to about this.
Good luck!

32-bit OS on 64-bit architecture

I am running 32-bit Ubuntu on a 64-bit x86 processor (Intel). I know that the word size is 64 bits in this case, but I am a little confused about the 32-bit OS.
When I calculate memory bandwidth, shall I assume that the full 64-line data bus is used and that it will exhibit the same performance as under a 64-bit OS? In other words, I want to better understand the relation between the OS word width and the architecture width.
For instance, a 64-bit operand can be read in a single shot over a 64-bit wide memory bus. Does this require a 64-bit OS? With a 32-bit OS, will it take two reads (32 bits each) to read the 64-bit operand?
Thanks!
You shouldn't worry about this.
A 32-bit OS differs from a 64-bit OS only in memory addressing: with 64 bits you can address more memory.
Loading data from memory is independent of the OS - it depends on the processor architecture.
A better processor may load 128 or 256 bits in one memory access.
This is the shortest explanation, and it should hold for 99.9% of programs running on this OS.
The remaining 0.1% is reserved for programs that don't care about memory alignment when accessing data, but even that problem is mostly absorbed by the processor cache.
Summarizing: you shouldn't worry, as long as your OS has enough memory to run all your programs.
Applications compiled for a 32-bit OS don't even know that the bus is 64 bits wide; they will always use the first 32 bits only.
Because the processor is NOT running in 64-bit mode, your 64-bit data will be stored in two distinct registers (say EAX:EDX), read as two distinct parts of the same number, using the first 32 bits of the bus.
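As a side note, you can see the bitness of the running process directly; a tiny illustrative Python sketch:

    import platform
    import struct

    # A pointer is 4 bytes in a 32-bit process and 8 bytes in a 64-bit
    # one. On a 32-bit OS this always prints 32 bits, no matter how
    # wide the CPU's data bus is: that is exactly the distinction the
    # answers above describe.
    print(f"pointer size: {struct.calcsize('P') * 8} bits")
    print(f"platform reports: {platform.architecture()[0]}")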
