OS X resource allocation? - cpu-usage

I am in the process of converting all .avi files to .mp4 (for compatibility with my PS3 and iPad).
I noticed today that both of the applications I've used so far (MPEG Streamclip & Handbrake) only use around 30% of my CPU and only about 300MB of my available RAM (4GB installed). Why is this? Is there any way to speed these conversions up by somehow allowing the applications to use more of the available resources?
I am currently running Mavericks on an MBP with a 2.4GHz i5, 4GB RAM, a 256MB NVIDIA GeForce GT330M and a hybrid drive, so I don't know where the bottlenecks, if any, would be.
Thanks!

Stack Overflow is a place to discuss coding, programming and software development.
Head over to http://www.superuser.com/, and ask there. They're the people you want to talk to about this.
Good luck!

Related

How to develop an OpenCL application targeting specifically Intel CoffeeLake-H GT2 (UHD Graphics 630) without this device?

I've been tasked to develop an OpenCL application for a specific platform, Intel CoffeeLake-H GT2 (UHD Graphics 630). There are two problems for me:
Even having some OpenCL programming experience (not that much, though), I wouldn't know where to begin. I have no experience targeting specific hardware.
The device itself has to be emulated or something, because I don't have it at hand.
Of course, I tried googling for information today, but couldn't find anything that could really help me. I guess it's just because of my lack of experience. So, I'm stumped right now and asking for help.
Any help would be appreciated. Thanks in advance.
Small note: I'm working on this project under Ubuntu 18.04.
I'm not aware of any emulated environment, and anyway, ultimately nothing replaces access to the target hardware. I see a few workarounds:
Target a similar-enough device. Intel GPUs haven't changed that drastically, so especially if you have an older/lower-spec one around, whatever you end up with should run better on the newer GPU. You can also work with a GPU from another vendor if you have at least sporadic access to a system with an Intel GPU; you don't want to go for too long at a time without testing on your target hardware. (It's generally a good idea to test OpenCL code against different implementations while developing anyway, as it's easy to accidentally rely on implementation-defined or undefined behaviour otherwise; see the enumeration sketch after this list.)
Rent a relevant physical device. Places exist that allow you to rent laptops or desktop PCs for a short time period.
Remote access to a target device. Presumably whoever posed the requirement actually has such devices. Ask for remote access to one of them, via the magic of the internet. (RDP, VNC, SSH)
Rent similar hardware in a data centre. There are bare metal hosting companies that rent out physical servers built from commodity hardware. Find one that offers servers with a close enough match to the system you're targeting and rent one there.
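Whichever route you take, it helps to see which OpenCL platforms and devices a machine actually exposes, so you can confirm what your stand-in device reports and compare its limits against the UHD Graphics 630 later. This is a minimal sketch using pyopencl purely for brevity (the C API equivalents are clGetPlatformIDs / clGetDeviceIDs / clGetDeviceInfo); the exact names and numbers printed will of course depend on your driver.

    # Enumerate every OpenCL platform/device visible on this machine (pyopencl sketch).
    import pyopencl as cl

    for platform in cl.get_platforms():
        print(f"Platform: {platform.name} ({platform.vendor}, {platform.version})")
        for device in platform.get_devices():
            kind = cl.device_type.to_string(device.type)
            print(f"  Device: {device.name} [{kind}]")
            print(f"    Compute units : {device.max_compute_units}")
            print(f"    Global memory : {device.global_mem_size // (1024 ** 2)} MiB")
            print(f"    OpenCL version: {device.version}")

On the actual target you would expect Intel's GPU runtime (the NEO / intel-compute-runtime driver on Ubuntu) to report the UHD Graphics 630 here; on whatever machine you develop against, this at least tells you which limits (compute units, memory, work-group sizes) you are really coding to.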
As for the skill gap, well, you'll either have to bridge that one yourself by following enough documentation, tutorials, etc. or by finding (hiring…) someone who will give you some degree of hand-holding through the project.

Will R take advantage of 64GB memory on Mac Pro running OSX?

I've been using R 3.1.2 on an early-2014 13" MacBook Air with 8GB RAM and a 1.7GHz Intel Core i7, running OS X Mavericks.
Recently, I've started to work with substantially larger data frames (2+ million rows and 500+ columns) and I am running into performance issues. In Activity Monitor, I'm seeing virtual memory sizes of 64GB, 32GB paging files, etc. and the "memory pressure" indicator is red.
Can I just throw more hardware at this problem? Since the MacBook Air tops out at 8GB of physical memory, I was thinking about buying a Mac Pro with 64GB of memory. Before I spend the $5K+, I wanted to ask if there are any inherent limitations in R other than the ones that I've read about here: R Memory Limits, or if anyone who has a Mac Pro has experienced any issues running R/RStudio on it. I've searched using Google and haven't come up with anything specific about running R on a Mac Pro.
Note that I realize I'll still be using 1 CPU core unless I rewrite my code. I'm just trying to solve the memory problem first.
Several thoughts:
1) It's a lot more cost-effective to use a cloud service like https://www.dominodatalab.com (not affiliated). Amazon AWS would also work; the benefit of Domino is that it takes the work out of managing the environment so you can focus on the data science.
2) You might want to redesign your processing pipeline so that not all your data needs to be loaded in memory at the same time (soon you will find you need 128 GB, and then what?). Read up on memory mapping, using databases, and separating your pipeline into several steps that can be executed independently of each other (googling brought up http://user2007.org/program/presentations/adler.pdf). Running out of memory is a common problem when working with real-life datasets, and throwing more hardware at the problem is not always your best option (though sometimes it really can't be avoided).
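To make the second point concrete, the pattern is: keep the full table on disk, map or stream it, and hold only one block plus the running result in RAM. A minimal sketch of the idea, shown in Python/NumPy for illustration (in R the analogous tools would be packages like ff or bigmemory, or simply reading the file in chunks); the file name and shape below are made up:

    import numpy as np

    n_rows, n_cols = 2_000_000, 500
    # Assume the frame has been dumped to a raw binary file of float64 values;
    # np.memmap maps it into the address space without loading it all into RAM.
    data = np.memmap("frame.bin", dtype=np.float64, mode="r", shape=(n_rows, n_cols))

    chunk = 100_000
    col_sums = np.zeros(n_cols)
    for start in range(0, n_rows, chunk):
        block = np.asarray(data[start:start + chunk])  # only this slice is read from disk
        col_sums += block.sum(axis=0)                  # reduce, then let the block go

Anything you can phrase as "scan once, keep a small summary" works this way and never needs more RAM than one chunk, no matter how large the file grows.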

Do Chromebooks offer adequate offline programmability?

Do Chromebooks offer adequate programming capabilities offline?
I can never guarantee my WiFi access.
I know I can access local files, and Chrome OS is Linux-based, but what does this mean for programming offline?
Also, I am returning to obtain my MSc in IT. Would this be a good purchase for such a cause? I am focusing on web development (HTML, JavaScript, Rails).
I want to know specifically if a Chromebook (I have my eyes on the Acer C720) can get the work done. True, I'll probably rarely ever be offline, but I want to know if I'll be able to both edit code and then run it to troubleshoot.
My main points: editing and running code on a Chromebook. Also, could I work around the drawback by running Windows or Linux (i.e., Ubuntu, Mint)? Thanks for any advice.
I use an Acer C720 Chromebook (2GB RAM, 16GB SSD) as my Meteor (JavaScript, HTML, CSS, MongoDB) development machine. The specs may sound poor, but in reality - thanks to the fantastic Haswell chip - the laptop is great.
I have Xubuntu installed instead of ChromeOS... so maybe that is not a real answer to your question.
It's a fantastic little machine - long battery life, and it boots in a few seconds. I tried Bodhi Linux first but found Xubuntu better for my needs.
I expanded the storage with a tiny UltraFit 64GB USB 3.0 flash key that can stay plugged in. Amazing device.
I use an HDMI monitor when doing longer coding sessions.
Device cost me $150 on eBay and around $25 for the USB key.
I use the free http://komodoide.com/komodo-edit/ as my editor.
If you feel like taking the plunge and converting from ChromeOS to Xubuntu, these two links may help:
BIOS changes: https://blogs.fsfe.org/the_unconventional/2014/09/19/c720-coreboot/
Xubuntu distribution: https://www.distroshare.com/distros/get/14/
Good luck and enjoy!

System build research

I'm in the research phase of my next computer build. I have the idea in my head of running a hypervisor as the base of the system, but I would want to be able to take a shot at programming OpenCL with one of the OSes installed on the hypervisor... and maybe some gaming. Would I have enough access to the GPU to be able to achieve this effectively, or am I better off installing an OS that I will do development (and gaming) from and then just virtualizing any systems on top of that?
What are your recommendations for a hypervisor: VMware, Microsoft, or other?
Side note: I recently graduated with a BS in CS; massively parallel processing seems like a good thing to learn, though I won't be doing any 'real'/major development work. Also, I'm aware that CUDA is more mature in its development, but I'm sticking with OpenCL for a few reasons, so please don't try to persuade me otherwise.
Thanks for your input!
dave k.
What's your focus: virtualisation or OpenCL?
Hak5 did a nice walkthrough of the Debian-based virtualisation environment Proxmox, but I don't know whether it gives virtual hosts hardware access or supports OpenCL virtualisation.

How does BitLocker affect performance? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I'm an ASP.NET / C# developer. I use VS2010 all the time. I am thinking of enabling BitLocker on my laptop to protect the contents, but I am concerned about performance degradation. Developers who use IDEs like Visual Studio are working on lots and lots of files at once. More than the usual office worker, I would think.
So I was curious if there are other developers out there who develop with BitLocker enabled. How has the performance been? Is it noticeable? If so, is it bad?
My laptop is a 2.53GHz Core 2 Duo with 4GB RAM and an Intel X25-M G2 SSD. It's pretty snappy but I want it to stay that way. If I hear some bad stories about BitLocker, I'll keep doing what I am doing now, which is keeping stuff RAR'ed with a password when I am not actively working on it, and then SDeleting it when I am done (but it's such a pain).
2015 Update: I've been using Visual Studio 2015 on my Surface Pro 3 when I travel, which has BitLocker enabled by default. It feels pretty much like my desktop, which is an i7-2600k @ 4.6 GHz. I think on modern hardware with a good SSD, you won't notice!
2021 Update: I have been enabling BitLocker on all my computers and it flies now. No worries. Get an NVMe SSD and don't look back.
With my T7300 2.0GHz and Kingston V100 64GB SSD, the results (BitLocker off → on) are:
Sequential read: 243 MB/s → 140 MB/s
Sequential write: 74.5 MB/s → 51 MB/s
Random read: 176 MB/s → 100 MB/s
Random write and the 4KB speeds are almost identical.
Clearly the processor is the bottleneck in this case. In real-life usage, however, boot time is about the same, and a cold launch of Opera 11.5 with 79 tabs remained the same 4 seconds with all tabs loaded from cache.
A small build in VS2010 took 2 seconds in both situations. A larger build took 2 seconds vs. 5 from before. These are ballpark figures, because I'm timing with the second hand on my watch.
I guess it all depends on the combination of processor, RAM, and SSD vs. HDD. In my case the processor has no hardware AES support, so compilation is a worst-case scenario, needing cycles for both assembly and crypto.
A newer system with Sandy Bridge would probably make better use of a BitLocker-enabled SSD in a development environment.
Personally, I'm keeping BitLocker enabled despite the performance hit because I travel often. It took less than an hour to toggle BitLocker on/off, so maybe you could just turn it on when you are traveling and then disable it afterwards.
Thinkpad X61, Windows 7 SP1
Some practical tests...
Dell Latitude E7440
Intel Core i7-4600U
16.0 GB RAM
Windows 8.1 Professional
LiteOn IT LMT-256M6M MSATA 256GB
This test is using a system partition. Results for a non-system partition are a bit better.
Score decrease:
Read: 5%
Write: 16%
(Benchmark screenshots, without and with BitLocker, not reproduced here.)
So you can see that even with a very strong configuration and a modern SSD, benchmarks show a small performance degradation. I don't know how it affects typical work, especially with Visual Studio.
Having used a laptop with BitLocker enabled for almost 2 years now with more or less similar specs (although without the SSD, unfortunately), I can say that it really isn't that bad, or even noticeable. Although I have not used this particular machine without BitLocker enabled, it really does not feel sluggish at all when compared to my desktop machine (dual core, 16 GB, dual Raptor disks, no BitLocker). Building large projects might take a bit longer, but not enough to notice.
To back this up with more non-scientific "proof": many of my co-workers used their machines intensively without BitLocker before I joined the company (it became mandatory to use it around the time I joined, even though I am pretty sure the two events are totally unrelated), and they have not experienced noticeable performance degradation either.
For me personally, having an "always on" solution like BitLocker beats manual steps for encryption, hands down. BitLocker To Go (new in Windows 7) for USB devices, on the other hand, is simply too annoying to work with, since you cannot easily exchange information with non-Windows-7 machines. Therefore I use TrueCrypt for removable media.
I am talking here from a theoretical point of view; I have not tried BitLocker.
BitLocker uses AES encryption with a 128-bit key. On a Core 2 machine clocked at 2.53 GHz, encryption speed should be about 110 MB/s using one core. The two cores could process about 220 MB/s, assuming perfect data transfer and core synchronization with no overhead, and that nothing else requires the CPU at the same time (that's one hell of an assumption, actually). The X25-M G2 is announced at 250 MB/s read bandwidth (that's what the specs say), so, in "ideal" conditions, BitLocker necessarily involves a bit of a slowdown.
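Where the 110 MB/s comes from, as a back-of-the-envelope check: software AES-128 on a CPU of that generation (no AES-NI instructions) costs very roughly 20-25 CPU cycles per byte; the exact cycles-per-byte figure below is my assumption, not a measurement.

    clock_hz = 2.53e9        # Core 2 Duo, per core
    cycles_per_byte = 23     # assumed cost of software AES-128 on that CPU generation

    mb_per_s_one_core = clock_hz / cycles_per_byte / 1e6
    print(f"~{mb_per_s_one_core:.0f} MB/s per core, ~{2 * mb_per_s_one_core:.0f} MB/s with both cores")
    # ~110 MB/s per core, ~220 MB/s with both cores -- just below the X25-M G2's ~250 MB/s read spec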
However, read bandwidth is not that important. It matters when you copy huge files, which is not something you do very often. In everyday work, access time is much more important: as a developer, you create, write, read and delete many files, but they are all small (most of them are much smaller than one megabyte). This is what makes SSDs "snappy". Encryption does not impact access time. So my guess is that any performance degradation will be negligible(*).
(*) Here I assume that Microsoft's developers did their job properly.
The difference is substantial for many applications. If you are currently constrained by storage throughput, particularly when reading data, BitLocker will slow you down.
It would be useful to compare with other software-based whole-disk or whole-partition encryption like TrueCrypt (which has an advantage if you dual-boot with Linux, since it works for both Windows and Linux).
A much better option is to use hardware encryption, which is available in many SSDs as well as in Hitachi 7200 RPM HDDs. The performance difference between encrypted and unencrypted is undetectable, and the encryption is invisible to operating systems. If you have a decent laptop, you can use the built-in security functions to generate and store the key, which your password unlocks from the encrypted key storage of the laptop.
I used to use the PGP disk encryption product on a laptop (and ran NTFS compressed on top of that!). It didn't seem to have much effect if the amount of disk to be read was small; and most software sources aren't huge by disk standards.
You have lots of RAM and pretty fast processors. I spent most of my time thinking, typing or debugging. I wouldn't worry very much about it.
My current work machine came with BitLocker and, being an upgrade from the prior model, it only seemed faster to me. What I have found, however, is that BitLocker is more bulletproof than TrueCrypt when it comes to accurately laying down the data. I do a lot of work in SAS, which constantly writes backup copies to disk as it moves along and shoots a variety of output types to disk at the end. SAS works fine writing output from multithreaded processes back to BitLocker and doesn't seem to know it's there.
This has not been the case for me with TrueCrypt. I'm not sure what happens or how, but I found that processes got out of sync when working with source/output data in a TrueCrypt container, which is what I installed on my second work computer since it had no BitLocker. The constant backups were going to an SSD while the TrueCrypt results were on a regular HDD; maybe that speed difference helped trip it up. Whatever the cause, I had to quit using TrueCrypt on that second computer because it made my SAS results out of sync with respect to processing order, and it screwed up some of my processes and data. Scary stuff in my world.
I work with people who have successfully used TrueCrypt on the exact same computer, but they weren't using a disk-intensive app like SAS.
BitLocker To Go, the encryption BitLocker applies to thumb drives, does slow things down quite a bit when it comes to read/write times. It's not too hard to use as long as you remember the password for the thumb drive and are willing to wait for it to format/initialize the drive, but in my experience it made access to the flash drive about 4 times as slow. I don't know why it would slow down a thumb drive and not a disk, but that's how it was for me and my coworker.
Based on my success with BitLocker at work, I bought Windows Pro for my home computer to get BitLocker, and I plan to encrypt some directories with it for things like financials.
