How does one build an OpenCL program with multiple .cl sources?

I am in the process of converting my GPU application, which has many CUDA kernels, to OpenCL. I have searched online with no success for how to build an app using multiple .cl files. I will need to run one kernel, then have the outputs of the first go into the second, and so on.
Any help would be appreciated.

You read the .cl files into strings at C++ runtime. With multiple files, you have several choices:
You can just concatenate the strings of multiple .cl files (in the correct order, of course); see the sketch after this list.
You can move parts of the GPU code out into .hcl header files and, in the main .cl source file, write #include "header_file.hcl". OpenCL has a preprocessor that resolves the #include when the program is built, i.e. at C++ runtime, before the actual OpenCL compilation.
If your .cl files really are separate entities without any cross-referencing, you can compile them as separate program objects.
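For the first two options, a minimal C++ sketch of reading several .cl files and building them as one program might look like the following (the readSource and buildFromSources helpers are hypothetical names, and error handling is omitted). Note that clCreateProgramWithSource itself accepts an array of strings, so the concatenation can also be left to the runtime:

#include <CL/cl.h>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Slurp one .cl file into a string at C++ runtime.
std::string readSource(const std::string& path) {
    std::ifstream in(path.c_str());
    std::ostringstream ss;
    ss << in.rdbuf();                    // read the whole file
    return ss.str();
}

// Build one program object from several .cl files. Since
// clCreateProgramWithSource takes an array of strings, passing the
// sources separately is equivalent to concatenating them in order.
cl_program buildFromSources(cl_context ctx, cl_device_id dev,
                            const std::vector<std::string>& files) {
    std::vector<std::string> texts;
    for (size_t i = 0; i < files.size(); ++i)
        texts.push_back(readSource(files[i]));

    std::vector<const char*> srcs;
    std::vector<size_t> lens;
    for (size_t i = 0; i < texts.size(); ++i) {
        srcs.push_back(texts[i].c_str());
        lens.push_back(texts[i].size());
    }

    cl_int err;
    cl_program prog = clCreateProgramWithSource(
        ctx, (cl_uint)srcs.size(), &srcs[0], &lens[0], &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);  // error checks omitted
    return prog;
}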

It's similar to single-kernel code. Look at the example below:
create input and output buffers for kernel-1
setKernelArgs and enqueueWriteData for kernel-1
enqueueNDRange and enqueueReadData the output-1 to the host (which will be the input to kernel-2)
create input and output buffers for kernel-2
use the previously read output-1 in enqueueWriteData for kernel-2
enqueueNDRange and enqueueReadData the output-2 to the host (which will be the input to kernel-3)
And so on...
This is a simple way to achieve your goal, written with OpenCL 1.2 in mind; there may be an easier way in later versions. The sequence is sketched below.
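A minimal sketch of that sequence in the OpenCL 1.2 C API, assuming ctx, queue, kernel1, and kernel2 have already been created elsewhere (the runChain name is hypothetical, and error checks are omitted):

#include <CL/cl.h>
#include <vector>

void runChain(cl_context ctx, cl_command_queue queue,
              cl_kernel kernel1, cl_kernel kernel2,
              std::vector<float>& input, std::vector<float>& result) {
    cl_int err;
    size_t n = input.size();
    size_t bytes = n * sizeof(float);
    std::vector<float> out1(n);

    // Kernel-1: create buffers, write input, run, read output-1 back.
    cl_mem in1  = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  bytes, NULL, &err);
    cl_mem buf1 = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, &err);
    clEnqueueWriteBuffer(queue, in1, CL_TRUE, 0, bytes, &input[0], 0, NULL, NULL);
    clSetKernelArg(kernel1, 0, sizeof(cl_mem), &in1);
    clSetKernelArg(kernel1, 1, sizeof(cl_mem), &buf1);
    clEnqueueNDRangeKernel(queue, kernel1, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf1, CL_TRUE, 0, bytes, &out1[0], 0, NULL, NULL);

    // Kernel-2: write output-1 as its input, run, read output-2 back.
    cl_mem in2  = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  bytes, NULL, &err);
    cl_mem buf2 = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, &err);
    clEnqueueWriteBuffer(queue, in2, CL_TRUE, 0, bytes, &out1[0], 0, NULL, NULL);
    clSetKernelArg(kernel2, 0, sizeof(cl_mem), &in2);
    clSetKernelArg(kernel2, 1, sizeof(cl_mem), &buf2);
    clEnqueueNDRangeKernel(queue, kernel2, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf2, CL_TRUE, 0, bytes, &result[0], 0, NULL, NULL);

    clReleaseMemObject(in1); clReleaseMemObject(buf1);
    clReleaseMemObject(in2); clReleaseMemObject(buf2);
}

The round trip through the host is only there to mirror the steps above; since both kernels run on the same device, buf1 could just as well be passed directly as kernel2's input argument, skipping the read and write in the middle.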

Related

Can I run output of Python code/text file in Julia?

I wrote a python script that generates a tree and outputs some variable creation and function calls in Julia syntax to a text file (I am testing the correctness of some Julia tree algorithms in phylogenetics).
I was wondering if there is a way to "run" the text file in a Julia Jupyter notebook?
It gets tedious to manually copy the file and run it as I am generating many files.
You can run include("treealgos.jl") in a Jupyter cell to run the entire file there. It's equivalent to copy-pasting the file contents into that cell, and all the variables and functions defined in the file become available in the notebook after that.
Note that this is very different from using or importing a module, which requires a module name and comes with extra features like namespacing and exports. An include, in contrast, is a more basic and simpler feature, similar to #include in the C language: it just brings the included code into wherever the include statement happens to be.

Streaming data in Julia

Currently, is there a good way to read data in Julia in a streaming fashion?
For example, let's say I have a CSV file that is too big to fit in memory. Are there currently built in functions or a library that facilitates working with this?
I'm aware of the prototype DataStream functionality in DataFrames, but that's not currently exposed via a public API.
The eachline function turns an IO source into an iterator of lines. That should allow you to read the file one line at a time. From there, the readcsv and readdlm functions can parse each line if you wrap it in an IOBuffer.
open("file.csv") do f            # the do-block form closes the file when done
    for ln in eachline(f)
        data = readcsv(IOBuffer(ln))   # parse a single line of CSV
        # do something with this data
    end
end
It's still fairly do-it-yourself, but there aren't that many steps, so it's not too bad.

Run Saxon XQuery over a batch of XML files and produce one output file for each input file

How do I run XQuery using Saxon HE 9.5 on a directory of files using the built-in command line? I want to take one file as input and produce one file as output.
This sounds very obvious, but I can't figure it out without using Saxon extensions that are only available in PE and EE.
I can read in the files in a directory using fn:collection() or using input parameters. But then I can only produce one output file.
To keep things simple, let's say I have a directory "input" with my files 01.xml, 02.xml, ... 99.xml. Then I have an "output" directory where I want to produce the files with the same names -- 01.xml, 02.xml, ... 99.xml.
Any ideas?
My real data set is large enough (tens of thousands of files) that I don't want to fire up the JVM once per file, so writing a shell script that calls the Saxon command line for each file is out of the question.
If there are no built-in command-line options, I may just write my own quick Java class.
The capability to produce multiple output files from a single query is not present in the XQuery language (only in XSLT), and the capability to process a batch of input files is not present in Saxon's XQuery command line (only in the XSLT command line).
You could call a single-document query repeatedly from Ant, XProc, or xmlsh (or of course from Java), or you could write the code in XSLT instead.

QFile: chop a file into parts

I am making a Qt application (4.7). Is there a way to split a file easily with QFile so that if I have a file x, I can split it equally into n parts fileX1, fileX2, ... fileXn?
As far as I know there is no built-in QFile method to split an existing file.
Depending on your use case, you can easily read the file into a QByteArray, split that into n parts, and save those back to disk. (If you want an example of how to do that, see the sketch below.)
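A minimal sketch of that read-split-write idea for Qt 4.7 (the splitFile name and the .partN naming scheme are just illustrative; it assumes n > 0 and keeps error handling short):

#include <QFile>
#include <QString>
#include <QByteArray>

// Split the file at 'path' into n roughly equal parts named
// path.part1 ... path.partN.
bool splitFile(const QString& path, int n) {
    QFile in(path);
    if (!in.open(QIODevice::ReadOnly))
        return false;
    QByteArray data = in.readAll();            // whole file in memory
    int partSize = (data.size() + n - 1) / n;  // round up so nothing is lost
    for (int i = 0; i < n; ++i) {
        QFile out(QString("%1.part%2").arg(path).arg(i + 1));
        if (!out.open(QIODevice::WriteOnly))
            return false;
        // mid() clamps at the end of the array, so the last part may be shorter.
        out.write(data.mid(i * partSize, partSize));
    }
    return true;
}

Since readAll() pulls the entire file into memory, files larger than available RAM would instead need a loop that read()s and writes one part-sized chunk at a time.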
There used to be an option to configure Qt to build it with "large file support". Just Google for "qt large file support" (without the quotes) to see many references to this.
But I can't find any mention of this in the Qt 4.7 Installation guide.
However, the option -no-largefile is mentioned in the page Platform and Compiler Notes - X11.

Is there a way to read and write in-memory files in R?

I am trying to use R to analyze large DNA sequence files (fastq files, several gigabytes each), but the standard R interface to these files (ShortRead) has to read the entire file at once. This doesn't fit in memory, so it causes an error. Is there any way that I can read a few (thousand) lines at a time, stuff them into an in-memory file, and then use ShortRead to read from that in-memory file?
I'm looking for something like Perl's IO::Scalar, for R.
I don’t know much about R, but have you had a look at the mmap package?
It looks like ShortRead is soon to add a "FastqStreamer" class that does what I want.
Well, I don't know about readFastq accepting something other than a file...
But if it can, for other functions, you can use the R function pipe() to open a Unix connection, and then you can do this with a combination of the Unix commands head and tail and some pipes.
For example, to get lines 91 to 100, you use this:
head -n 100 file.txt | tail -n 10
So you can just read the file in chunks.
If you have to, you can always use these Unix utilities to create a temporary file and then read that in with ShortRead. It's a pain, but if it can only take a file, at least it works.
Incidentally, the answer to how to do an in-memory file in R generally (like Perl's IO::Scalar) is the textConnection function. Sadly, though, the ShortRead package cannot handle textConnection objects as inputs. So while the idea I expressed in the question, reading a file in small chunks into in-memory files that are then parsed bit by bit, is certainly possible for many applications, it does not work for my particular application, because ShortRead does not like textConnections. The solution is the FastqStreamer class described above.
