CUDA IPC Memcpy + MPI fails in Theano, works in pycuda

For learning purposes, I wrote a small C Python module that is supposed to perform a CUDA IPC memcpy to transfer data between processes. For testing, I wrote two equivalent programs: one using theano's CudaNdarray and the other using pycuda. The problem is that, even though the test programs are nearly identical, the pycuda version works while the theano version does not. It doesn't crash: it just produces incorrect results.
Below is the relevant function in the C module. Here is what it does: every process has two buffers, a source and a destination. Calling _sillycopy(source, dest, n) copies n elements from each process's source buffer to the neighboring process's dest buffer. So, with two processes, 0 and 1, process 0 ends up with process 1's source buffer and process 1 ends up with process 0's source buffer.
Note that to transfer the cudaIpcMemHandle_t values between processes, I use MPI (this is a small part of a larger project which uses MPI). _sillycopy is called by another function, "sillycopy", which is exposed to Python via the standard Python C API.
void _sillycopy(float *source, float* dest, int n, MPI_Comm comm) {
  int localRank;
  int localSize;
  MPI_Comm_rank(comm, &localRank);
  MPI_Comm_size(comm, &localSize);
  // Figure out which process is to the "left".
  // m() performs a mod and treats negative numbers appropriately.
  int neighbor = m(localRank - 1, localSize);
  // Create a memory handle for *source and do a wasteful Allgather
  // to distribute it to the other processes
  // (could just use an MPI_Sendrecv, but irrelevant right now).
  cudaIpcMemHandle_t *memHandles = new cudaIpcMemHandle_t[localSize];
  cudaIpcGetMemHandle(memHandles + localRank, source);
  MPI_Allgather(
      memHandles + localRank, sizeof(cudaIpcMemHandle_t), MPI_BYTE,
      memHandles, sizeof(cudaIpcMemHandle_t), MPI_BYTE,
      comm);
  // Open the neighbor's mem handle so we can do a cudaMemcpy.
  float *sourcePtr;
  cudaIpcOpenMemHandle((void**)&sourcePtr, memHandles[neighbor], cudaIpcMemLazyEnablePeerAccess);
  // Copy!
  cudaMemcpy(dest, sourcePtr, n * sizeof(float), cudaMemcpyDefault);
  cudaIpcCloseMemHandle(sourcePtr);
  delete [] memHandles;
}
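As an aside, none of the CUDA calls above check their return values, so a failed IPC open or copy would go unnoticed. A minimal sketch of an error-checking wrapper (the CHECK_CUDA macro is hypothetical, not part of the original module) that would surface such failures:
#include <cstdio>
#include <cuda_runtime.h>
// Hypothetical helper: run a CUDA runtime call and print the error string
// if it did not return cudaSuccess.
#define CHECK_CUDA(call)                                          \
    do {                                                          \
        cudaError_t err_ = (call);                                \
        if (err_ != cudaSuccess) {                                \
            fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,    \
                    cudaGetErrorString(err_));                    \
        }                                                         \
    } while (0)
// Example usage inside _sillycopy:
//   CHECK_CUDA(cudaIpcGetMemHandle(memHandles + localRank, source));
//   CHECK_CUDA(cudaIpcOpenMemHandle((void**)&sourcePtr, memHandles[neighbor],
//                                   cudaIpcMemLazyEnablePeerAccess));
//   CHECK_CUDA(cudaMemcpy(dest, sourcePtr, n * sizeof(float), cudaMemcpyDefault));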
Now here is the pycuda example. For reference, calling int() on a_gpu and b_gpu returns the address of the underlying buffer on the device.
import sillymodule # sillycopy lives in here
import simplempi as mpi
import pycuda.driver as drv
import numpy as np
import atexit
import time
mpi.init()
drv.init()
# Make sure each process uses a different GPU
dev = drv.Device(mpi.rank())
ctx = dev.make_context()
atexit.register(ctx.pop)
shape = (2**26,)
# allocate host memory
a = np.ones(shape, np.float32)
b = np.zeros(shape, np.float32)
# allocate device memory
a_gpu = drv.mem_alloc(a.nbytes)
b_gpu = drv.mem_alloc(b.nbytes)
# copy host to device
drv.memcpy_htod(a_gpu, a)
drv.memcpy_htod(b_gpu, b)
# A few more host buffers
a_p = np.zeros(shape, np.float32)
b_p = np.zeros(shape, np.float32)
# Sanity check: this should fill a_p with 1's
drv.memcpy_dtoh(a_p, a_gpu)
# Verify that
print(a_p[0:10])
sillymodule.sillycopy(
    int(a_gpu),
    int(b_gpu),
    shape[0])
# After this, b_p should be all 1's
drv.memcpy_dtoh(b_p, b_gpu)
print(b_p[0:10])
And now the theano version of the above code. Rather than using int() to get the buffer's address, a CudaNdarray exposes it via the gpudata attribute.
import os
import simplempi as mpi
mpi.init()
# selects one gpu per process
os.environ['THEANO_FLAGS'] = "device=gpu{}".format(mpi.rank())
import theano.sandbox.cuda as cuda
import time
import numpy as np
import time
import sillymodule
shape = (2 ** 24, )
# Allocate host data
a = np.ones(shape, np.float32)
b = np.zeros(shape, np.float32)
# Allocate device data
a_gpu = cuda.CudaNdarray.zeros(shape)
b_gpu = cuda.CudaNdarray.zeros(shape)
# Copy from host to device
a_gpu[:] = a[:]
b_gpu[:] = b[:]
# Should print 1's as a sanity check
print(np.asarray(a_gpu[0:10]))
sillymodule.sillycopy(
    a_gpu.gpudata,
    b_gpu.gpudata,
    shape[0])
# Should print 1's
print(np.asarray(b_gpu[0:10]))
Again, the pycuda code works perfectly and the theano version runs, but gives the wrong result. To be precise, at the end of the theano code, b_gpu is filled with garbage: neither 1's nor 0's, just random numbers, as though it were copying from the wrong place in memory.
My original theory about why this fails had to do with CUDA contexts. I wondered whether theano might be doing something with contexts that caused the CUDA calls made in sillycopy to run under a different CUDA context than the one used to create the gpu arrays. I don't think this is the case because:
I spent a lot of time digging through theano's code and saw no funny business being played with contexts.
I would expect such a problem to result in a hard crash rather than an incorrect result, and a crash is not what happens.
A secondary thought is whether this has to do with the fact that theano spawns several threads, even when using the cuda backend, which can be verified by running "ps huH p ". I don't know how threads might affect anything, but I have run out of obvious things to consider.
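One way to probe the context theory directly (a diagnostic sketch, not part of the original module): inside _sillycopy, query the incoming pointer with cudaPointerGetAttributes and log what the runtime thinks it is. On older CUDA versions the call returns an error for a pointer CUDA does not recognise, and otherwise reports the owning device:
// Diagnostic sketch to drop into _sillycopy (needs <cstdio>):
cudaPointerAttributes attr;
cudaError_t err = cudaPointerGetAttributes(&attr, source);
if (err != cudaSuccess) {
    fprintf(stderr, "rank %d: source %p not recognised by CUDA: %s\n",
            localRank, (void*)source, cudaGetErrorString(err));
} else {
    fprintf(stderr, "rank %d: source %p lives on device %d (devicePointer %p)\n",
            localRank, (void*)source, attr.device, attr.devicePointer);
}
If the reported device differs between the pycuda and theano runs, that would point back at context or device selection after all.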
Any thoughts on this would be greatly appreciated!
For reference: the processes are launched in the normal OpenMPI way:
mpirun --np 2 python test_pycuda.py

Related

Pyaudio - test computer speakers?

I am trying to put together some code in Python 3.6 to help test the computer hardware that passes through my hands as an IT tech.
I'd like to have a script that plays a simple sine wave tone on the left speaker, then the right and then the both speakers together.
I have found a potentially helpful script over at Pyaudio How to get sound on only one speaker but some of the code to actually run it is missing - chiefly the code for making sin wave tones. I have looked around online and have tried reverse-engineering this back into the code on that page but the maths is a little high-level for me! Sorry.
Thanks,
Will
Update:
I think I have found a partial (albeit long-winded) solution with 'sounddevice' for Python 3:
#!/usr/bin/env python3
import argparse
import logging
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("filename", help="audio file to be played back")
parser.add_argument("-d", "--device", type=int, help="device ID")
args = parser.parse_args()
try:
    import sounddevice as sd
    import soundfile as sf
    data, fs = sf.read(args.filename, dtype='float32')
    sd.play(data, fs, device=args.device, blocking=True, mapping=[1])
    status = sd.get_status()
    if status:
        logging.warning(str(status))
except BaseException as e:
    # This avoids printing the traceback, especially if Ctrl-C is used.
    raise SystemExit(str(e))
The main chunk of code is repeated twice more, first with "mapping=[1]" changed to "mapping=[2]" to test the right speaker, and finally with the mapping argument removed altogether to test both speakers.
I found this over at https://python-sounddevice.readthedocs.io/en/0.2.1/examples.html.
Of course, if anyone knows a quicker and more graceful way of getting this done, please share!
You could generate the sine tone directly in Python instead of loading it from a file. I've written some tutorials about creating simple sine tones:
https://nbviewer.jupyter.org/github/mgeier/python-audio/blob/master/simple-signals.ipynb
http://nbviewer.jupyter.org/github/spatialaudio/communication-acoustics-exercises/blob/master/intro.ipynb
Those tutorials use NumPy, because it makes manipulating the audio buffers very easy. But you can of course also do it in pure Python, if you prefer.
Here's an example:
#!/usr/bin/env python3
import math
import sounddevice as sd
sd.default.device = None
sd.default.samplerate = samplerate = 48000
duration = 1.5
volume = 0.3
frequency = 440
# fade time in seconds:
fade_in = 0.01
fade_out = 0.3
buffer = memoryview(bytearray(int(duration * samplerate) * 4)).cast('f')
for i in range(len(buffer)):
    buffer[i] = volume * math.cos(2 * math.pi * frequency * i / samplerate)
fade_in_samples = int(fade_in * samplerate)
for i in range(fade_in_samples):
    buffer[i] *= i / fade_in_samples
fade_out_samples = int(fade_out * samplerate)
for i in range(fade_out_samples):
    buffer[-(i + 1)] *= i / fade_out_samples
for mapping in ([1], [2], [1, 2]):
    sd.play(buffer, blocking=True, mapping=mapping)
    sd.sleep(500)
Note that this code uses 32-bit floating point numbers (4 bytes each), which is why we reserve 4 times as many bytes in the bytearray as the required number of samples.
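Since the tutorials above use NumPy anyway, here is roughly the same tone written with NumPy instead of a raw bytearray (a sketch; it reuses the same sample rate, volume and fade idea as the pure-Python version):
import numpy as np
import sounddevice as sd

samplerate = 48000
duration = 1.5   # seconds
volume = 0.3
frequency = 440  # Hz

# Sine tone as float32 samples.
t = np.arange(int(duration * samplerate)) / samplerate
tone = (volume * np.sin(2 * np.pi * frequency * t)).astype(np.float32)

# Linear fade-in and fade-out to avoid clicks.
fade_in = int(0.01 * samplerate)
fade_out = int(0.3 * samplerate)
tone[:fade_in] *= np.linspace(0, 1, fade_in)
tone[-fade_out:] *= np.linspace(1, 0, fade_out)

# Left speaker, right speaker, then both.
for mapping in ([1], [2], [1, 2]):
    sd.play(tone, samplerate, blocking=True, mapping=mapping)
    sd.sleep(500)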

using dask for scraping via requests

I like the simplicity of dask and would love to use it for scraping a local supermarket. My multiprocessing.cpu_count() is 4, but this code only achieves a 2x speedup. Why?
from bs4 import BeautifulSoup
import dask, requests, time
import pandas as pd
base_url = 'https://www.lider.cl/supermercado/category/Despensa/?No={}&isNavRequest=Yes&Nrpp=40&page={}'
def scrape(id):
    page = id + 1
    start = 40 * page
    bs = BeautifulSoup(requests.get(base_url.format(start, page)).text, 'lxml')
    prods = bs.find_all('span', attrs={'class': 'product-description js-ellipsis'})
    prods = [prod.text for prod in prods]
    brands = [b.text for b in bs.find_all('span', attrs={'class': 'product-name'})]
    sdf = pd.DataFrame({'product': prods, 'brand': brands})
    return sdf
data = [dask.delayed(scrape)(id) for id in range(10)]
df = dask.delayed(pd.concat)(data)
df = df.compute()
Firstly, a 2x speedup - hurray!
You will want to start by reading http://dask.pydata.org/en/latest/setup/single-machine.html
In short, the following three things may be important here:
you only have one network connection, and all the data has to come through it, so that may be a bottleneck
by default, you are using threads to parallelise, but the python GIL limits concurrent execution (see the link above)
the concat operation is happening in a single task, so this cannot be parallelised, and with some data types may be a substantial part of the total time. You are also drawing all the final data into your client's process with the .compute().
There are meaningful differences between multiprocessing and multithreading. See my answer here for a brief commentary on the differences. In your case that results in only getting a 2x speedup instead of, say, a 10x-50x+ speedup.
Basically, your problem doesn't scale as well by adding more cores as it would by adding threads (since it's I/O bound, not processor bound).
Configure Dask to run in multithreaded mode instead of multiprocessing mode. I'm not sure how to do this in dask, but this documentation may help; see the sketch below.
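For what it's worth, here is a minimal sketch of selecting the scheduler explicitly (assuming a reasonably recent dask; 'threads' and 'processes' are the built-in scheduler names, and data/pd refer to the objects from the question above):
import dask
import pandas as pd

# Per-call: choose the scheduler when computing ...
df = dask.delayed(pd.concat)(data)
result = df.compute(scheduler='threads')      # thread pool: fine for I/O-bound scraping
# result = df.compute(scheduler='processes')  # process pool: sidesteps the GIL for CPU-bound work

# ... or set it globally for everything that follows.
dask.config.set(scheduler='threads')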

Theano with opencl GPU

I have configured theano as follows:
[idf#localhost python]$ more ~idf/.theanorc
[global]
device = opencl0:0
floatX = float32
[lib]
cnmem=100
[idf#localhost python]$
I also needed to
[idf#localhost python]$ export MKL_THREADING_LAYER=GNU
although interestingly enough, if I install openblas and add
[blas]
ldflags = -lopenblas
to the .theanorc file, I no longer need to:
export MKL_THREADING_LAYER=GNU
Using a program I found on the internet which I modified slightly to use gpuarray, I am attempting to use theano with an Intel GPU through opencl:
import os
import shutil
from theano import function, config, shared, gpuarray
import theano.tensor as T
import numpy
import time
vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000
rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in xrange(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
When I run the program, it seems as if it recognizes the GPU, but at the end the message "used the cpu" is printed.
[idf#localhost python]$ python theanoexam1.py
Mapped name None to device opencl0:0: Intel(R) HD Graphics 5500 BroadWell U-Processor GT2
[GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float32, vector)>), HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 1.231896 seconds
Result is [ 1.23178029 1.61879337 1.52278054 ..., 2.20771813 2.29967737
1.62323284]
Used the cpu
[idf#localhost python]$
I am skeptical of the message "used the cpu": 1.231896 seconds seems fast for an Intel i3 with four cores.
Is there an extra configuration that is needed to use opencl with theano? Or did this program indeed show that theano is configured to use the GPU through opencl?
Firstly, thank you for your post.
I am running on Ubuntu 16.04, with Conda, and I have manually installed libgpuarray - all of which is well documented on the web.
I used the same test program you did (thank you for providing it).
So here are my settings
export MKL_THREADING_LAYER=GNU
The file ~/.theanorc looks like this
[global]
device = opencl0:0
floatX = float32
[lib]
cnmem=100
When I run the code
python test.py
I see the output
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1 [2]
param: 4, val: 0
Mapped name None to device opencl0:0: Ellesmere
[GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float32, vector)>), HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.282664 seconds
Result is [1.2317803 1.6187935 1.5227805 ... 2.207718 2.2996776 1.6232328]
Used the gpu
I cannot figure out how to use the 2nd GPU (also OpenCL), but I am happy for the moment that at least I have 1 GPU running.
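An untested guess, going only by the device string printed above (opencl<platform>:<device>): the second GPU might be selectable by bumping the device index in .theanorc, e.g.
[global]
device = opencl0:1
floatX = float32
(or opencl1:0 if the second card sits on a different OpenCL platform).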

XC8 builds font tables from top ROM

I wrote a barebones program template in XC8 (1.37) that I use to develop and test new GLCD functions for the 18F family. Programming is done via a PICkit3. Since I need to quickly reprogram the code several times, it is really important that programming is as fast as possible.
Typically, the code size is around 2K and it takes less than 10 sec to program.
Everything is fine until I must use a font table, defined as:
const char font8[] = {....
Now, with just $400 bytes added, the compiler places the table at the end of ROM, and programming the 64K memory takes more than 1 minute.
Is there any way to avoid this?
I tried to manually limit the memory range in the MPLABX options, but this is annoying and a little unsafe (sometimes part of the code is truncated).
A while back I had to write some code for emissions testing, where I needed to copy data between extreme ends of RAM. To do that I needed to specify the exact memory addresses. You can do the same here with the C extension __at() construct: http://ww1.microchip.com/downloads/en/DeviceDoc/50002053F.pdf#page=27
int scanMode __at(0x200);
const char keys[] __at(123) = { 'r', 's', 'u', 'd' };
int modify(int x) __at(0x1000) {
    return x * 2 + 3;
}
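Applied to the question's font table, the same construct might look like this (a sketch; 0x0800 is just an example address and has to land in program memory that the rest of the code does not occupy, otherwise the linker will report an overlap):
// Hypothetical placement: pin the font table low in program memory so the
// programmer never has to write the top of the 64K range.
const char font8[] __at(0x0800) = {
    /* ... font data ... */
};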

Callback from "multiprocessing" with CFFI segfaults after ~100 iterations

A PyPy callback that works perfectly (in an infinite loop) when implemented (straightforwardly) as a method of a Python object segfaults after approximately 100 iterations when I move the Python object into a separate multiprocessing process.
In the main code I have:
import multiprocessing as mp

class Task(object):
    def __init__(self, com, lib):
        self.com = com  # communication queue
        self.lib = lib  # ffi library
        self.proc = mp.Process(target=self.spawn, args=(self.com,))
        self.register_callback()
    def spawn(self, com):
        print('%s spawned.' % self.name)
        # loop (keeping 'self' alive) until BREAK:
        while True:
            cmd = com.get()
            if cmd == self.BREAK:
                break
        print("%s stopped." % self.name)
    # @ffi.callback("int(void*, Data*)")  # old cffi (ABI mode)
    def callback(self, data):
        # <work on data>
        return 1
    def register_callback(self):
        s = ffi.new_handle(self)
        self.lib.register_callback(s, self.callback)  # C-call
The idea is that multiple tasks should serve an equal number of callbacks concurrently. I have no clue what may cause the segfault, especially since it runs fine for the first ~100 iterations or so. Help much appreciated!
Solution
The handle 's' is garbage collected when returning from 'register_callback()'. Making the handle an attribute of 'self' and passing that attribute keeps it alive.
Standard CPython (cffi 1.6.0) segfaulted at the first iteration (i.e. gc was immediate) and provided a crucial, informative error message. PyPy, on the other hand, segfaulted after approximately 100 iterations without providing a message... Both run fine now.
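In code, the fix amounts to something like this (a sketch of the corrected register_callback; the attribute name _handle is arbitrary):
def register_callback(self):
    # Keep a reference on the instance so the handle outlives this call;
    # otherwise it is garbage collected and the C side is left holding
    # a dangling pointer.
    self._handle = ffi.new_handle(self)
    self.lib.register_callback(self._handle, self.callback)  # C-call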
