What is the best way to handle huge arrays in C++? - bigdata

I am trying to simulate some plasma phenomena, which means simulating the dynamics of a huge number of particles. The usual approach is to cluster particles into "macroparticles", so that instead of looping over N particles the program loops over some n < N, assuming the dynamics of the N/n particles within each macroparticle are identical.
However, even with this approach I sometimes need to work with on the order of 1e22 macroparticles. I need to store the positions of those particles in an array of the form
std::vector<double> pos(Npart);
where Npart is part of the input. Since there is no built-in C++ integer type that can hold a number as large as 1e22 for Npart, I am wondering what the usual strategies for such problems are. Maybe defining many arrays, each with fewer particles? What is good practice here?
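A minimal sketch of one common strategy, assuming the data does fit in (possibly distributed) memory: index with 64-bit integers and split the storage into fixed-size chunks, so no single allocation or 32-bit size limit is hit. (Note that 1e22 doubles is about 8×10^22 bytes, far beyond any machine, so in practice you would also reduce the macroparticle count or distribute chunks across MPI ranks; the class name here is illustrative, not from any library.)

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: particle positions split into fixed-size chunks, indexed by a
// 64-bit integer, so the total count is limited only by available memory.
class ChunkedPositions {
public:
    explicit ChunkedPositions(std::uint64_t npart, std::size_t chunk = 1u << 20)
        : chunk_(chunk), n_(npart) {
        const std::uint64_t nchunks = (npart + chunk - 1) / chunk;
        chunks_.resize(static_cast<std::size_t>(nchunks));
        for (std::uint64_t c = 0; c < nchunks; ++c) {
            const std::uint64_t begin = c * chunk;
            const std::uint64_t len =
                (begin + chunk <= npart) ? chunk : npart - begin;
            chunks_[static_cast<std::size_t>(c)]
                .resize(static_cast<std::size_t>(len));
        }
    }
    double& operator[](std::uint64_t i) {
        return chunks_[static_cast<std::size_t>(i / chunk_)]
                      [static_cast<std::size_t>(i % chunk_)];
    }
    std::uint64_t size() const { return n_; }
private:
    std::size_t chunk_;
    std::uint64_t n_;
    std::vector<std::vector<double>> chunks_;
};
```

For a cluster, the same chunking idea becomes domain decomposition: each MPI rank owns a contiguous subset of the particles and only ever allocates its own share.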

Related

Is MPI_BYTE better than a derived data type in this case?

I'm going to transfer the following structure in memory from one MPI process to another with an MPI_Recv/MPI_Send pair.
n1 doubles
n1 ints
n2 doubles
n2 ints
...
nX doubles
nX ints
where X, and n1, n2, etc. are positive integers. This communication is repeated many times, and each time X and n1, n2, ... can be different from the previous time. I wonder if I should create a derived MPI datatype here, or simply use MPI_BYTE. On this site I've seen people discouraging the use of MPI_BYTE because it makes the code less readable (in their opinion, at least) and because it would make it impossible to run the code correctly on heterogeneous systems. Sounds reasonable.
Usually, derived data types are defined once and used many times. In my case, if I use a derived data type it must be defined again and again, since the data structure isn't the same each time. Won't this affect the performance significantly?
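For reference, what an MPI_BYTE transfer actually sends is a manually packed byte buffer; here is a hedged sketch of the pack/unpack step in plain C++ (the MPI_Send/MPI_Recv calls themselves are omitted, and the `Block` type and function names are illustrative). This is only valid when both ranks use identical binary representations, which is exactly the heterogeneity caveat from the question; the counts n1, n2, ... are assumed positive, as stated, and must be communicated separately.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

struct Block { std::vector<double> d; std::vector<int> i; };

// Pack the "n1 doubles, n1 ints, n2 doubles, n2 ints, ..." layout into one
// contiguous byte buffer, suitable for a single MPI_Send(..., MPI_BYTE, ...).
std::vector<unsigned char> pack(const std::vector<Block>& blocks) {
    std::size_t bytes = 0;
    for (const Block& b : blocks)
        bytes += b.d.size() * sizeof(double) + b.i.size() * sizeof(int);
    std::vector<unsigned char> buf(bytes);
    std::size_t off = 0;
    for (const Block& b : blocks) {
        std::memcpy(buf.data() + off, b.d.data(), b.d.size() * sizeof(double));
        off += b.d.size() * sizeof(double);
        std::memcpy(buf.data() + off, b.i.data(), b.i.size() * sizeof(int));
        off += b.i.size() * sizeof(int);
    }
    return buf;
}

// Unpack on the receiving side, given the counts n1, n2, ...
std::vector<Block> unpack(const std::vector<unsigned char>& buf,
                          const std::vector<std::size_t>& counts) {
    std::vector<Block> blocks;
    std::size_t off = 0;
    for (std::size_t n : counts) {
        Block b;
        b.d.resize(n);
        b.i.resize(n);
        std::memcpy(b.d.data(), buf.data() + off, n * sizeof(double));
        off += n * sizeof(double);
        std::memcpy(b.i.data(), buf.data() + off, n * sizeof(int));
        off += n * sizeof(int);
        blocks.push_back(std::move(b));
    }
    return blocks;
}
```

The per-message cost of this approach is a memcpy pass, which is usually cheaper than constructing and committing a fresh derived datatype per message; profiling on your actual message sizes is the only way to be sure.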

How would I normalize a float array to the range [0.0, 1.0] in parallel?

I want to design a kernel in which I can pass an array of floats and have them all come out with the maximum being 1.0 and the minimum being 0.0. Theoretically, each element would be mapped to something like (x-min)/(max-min). How can I parallelize this?
A simple solution would be to split the problem into 2 kernels:
Reduction kernel
Divide your array into chunks of N * M elements each, where N is the number of work-items per group, and M is the number of array elements processed by each work-item.
Each work-item computes the min() and max() of its M items.
Within the workgroup, perform a parallel reduction of min and max across the N work-items, giving you the min/max for each chunk.
With those values obtained, one of the work-items in the group can use atomics to update the global min/max values. Since you are using floats, you will need the well-known workaround (an atomic compare-and-swap on the float's bit pattern reinterpreted as an integer) for the lack of atomic min/max/CAS operations on floats.
Application
After your first kernel has completed, you know that the global min and max values must be correct. You can compute your scale factor and normalisation offset, and then kick off as many work items as your array has elements, to multiply/add each array element to adjust it.
Tweak your values for N and M to find an optimum for a given OpenCL implementation and hardware combination. (Note that M = 1 may be the optimum, i.e. launching straight into the parallel reduction.)
Having to synchronise between the two kernels is not ideal but I don't really see a way around that. If you have multiple independent arrays to process, you can hide the synchronisation overhead by submitting them all in parallel.
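The two-kernel structure above can be sketched in portable C++ with threads standing in for workgroups (OpenCL kernels aren't runnable here); the CAS loops mirror the float-atomic workaround the answer mentions, and the `join` is the inter-kernel synchronisation point. All names are illustrative, and the sketch assumes a non-empty input.

```cpp
#include <algorithm>
#include <atomic>
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// CAS loops standing in for the OpenCL float-atomic workaround
// (there done via atomic_cmpxchg on the float's bits as an int).
void atomic_minf(std::atomic<float>& t, float v) {
    float cur = t.load();
    while (v < cur && !t.compare_exchange_weak(cur, v)) {}
}
void atomic_maxf(std::atomic<float>& t, float v) {
    float cur = t.load();
    while (v > cur && !t.compare_exchange_weak(cur, v)) {}
}

// Phase 1: per-chunk min/max reduction, folded into global atomics.
// Phase 2: rescale every element to [0, 1].
void normalize(std::vector<float>& a, std::size_t chunks = 4) {
    std::atomic<float> lo(a[0]), hi(a[0]);
    const std::size_t per = (a.size() + chunks - 1) / chunks;
    std::vector<std::thread> pool;
    for (std::size_t c = 0; c < chunks; ++c)
        pool.emplace_back([&, c] {
            std::size_t b = c * per, e = std::min(a.size(), b + per);
            if (b >= e) return;
            float mn = a[b], mx = a[b];
            for (std::size_t i = b; i < e; ++i) {
                mn = std::min(mn, a[i]);
                mx = std::max(mx, a[i]);
            }
            atomic_minf(lo, mn);   // one atomic update per chunk,
            atomic_maxf(hi, mx);   // as in the reduction kernel
        });
    for (auto& t : pool) t.join();  // the inter-kernel synchronisation
    const float off = lo.load();
    const float scale = 1.0f / (hi.load() - off);
    for (float& x : a) x = (x - off) * scale;  // second "kernel"
}
```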

Divisibility function in SML

I've been struggling with the basics of functional programming lately. I started writing small functions in SML, and so far so good. However, there is one problem I cannot solve. It's from Project Euler (https://projecteuler.net/problem=5), and it simply asks for the smallest natural number that is divisible by all the numbers from 1 to n (where n is the argument of the function I'm trying to build).
Searching for the solution, I've found that through prime factorization you analyze all the numbers from 1 to n, and then keep, for each prime, the highest power that occurs in any of the factorizations. Then you multiply these together and you have your result (e.g. for n = 10, that number is 2520).
Can you help me on implementing this to an SML function?
Thank you for your time!
Since coding is not a spectator sport, it wouldn't be helpful for me to give you a complete working program; you'd have no way to learn from it. Instead, I'll show you how to get started, and start breaking down the pieces a bit.
Now, Mark Dickinson is right in his comments above that your proposed approach is neither the simplest nor the most efficient; nonetheless, it's quite workable, and plenty efficient enough to solve the Project Euler problem. (I tried it; the resulting program completed instantly.) So, I'll go with it.
To start with, if we're going to be operating on the prime decompositions of positive integers (that is: the results of factorizing them), we need to figure out how we're going to represent these decompositions. This isn't difficult, but it's very helpful to lay out all the details explicitly, so that when we write the functions that use them, we know exactly what assumptions we can make, what requirements we need to satisfy, and so on. (I can't tell you how many times I've seen code-writing attempts where different parts of the program disagree about what the data should look like, because the exact easiest form for one function to work with was a bit different from the exact easiest form for a different function to work with, and it was all done in an ad hoc way without really planning.)
You seem to have in mind an approach where a prime decomposition is a product of primes raised to exponents: for example, 12 = 2² × 3¹. The simplest way to represent that in Standard ML is as a list of pairs: [(2,2),(3,1)]. But we should be a bit more precise than this; for example, we don't want 12 to sometimes be [(2,2),(3,1)] and sometimes [(3,1),(2,2)] and sometimes [(3,1),(5,0),(2,2)]. So, we can say something like "The prime decomposition of a positive integer is represented as a list of prime–exponent pairs, with the primes all being positive primes (2, 3, 5, 7, …), the exponents all being positive integers (1, 2, 3, …), and the primes all being distinct and arranged in increasing order." This ensures a unique, easy-to-work-with representation. (N.B. 1 is represented by the empty list, nil.)
By the way, I should mention — when I tried this out, I found that everything was a little bit simpler if instead of storing exponents explicitly, I just repeated each prime the appropriate number of times, e.g. [2,2,3] for 12 = 2 × 2 × 3. (There was no single big complication with storing exponents explicitly, it just made a lot of little things a bit more finicky.) But the below breakdown is at a high level, and applies equally to either representation.
So, the overall algorithm is as follows:
Generate a list of the integers from 1 to 10, or 1 to 20.
This part is optional; you can just write the list by hand, if you want, so as to jump into the meatier part faster. But since your goal is to learn the basics of functional programming, you might as well do this using List.tabulate [documentation].
Use this to generate a list of the prime decompositions of these integers.
Specifically: you'll want to write a factorize or decompose function that takes a positive integer and returns its prime decomposition. You can then use map, a.k.a. List.map [documentation], to apply this function to each element of your list of integers.
Note that this decompose function will need to keep track of the "next" prime as it's factoring the integer. In some languages, you would use a mutable local variable for this; but in Standard ML, the normal approach is to write a recursive helper function with a parameter for this purpose. Specifically, you can write a function helper such that, if n and p are positive integers, p ≥ 2, where n is not divisible by any prime less than p, then helper n p is the prime decomposition of n. Then you just write
local
fun helper n p = ...
in
fun decompose n = helper n 2
end
Use this to generate the prime decomposition of the least common multiple of these integers.
To start with, you'll probably want to write a lcmTwoDecompositions function that takes a pair of prime decompositions, and computes the least common multiple (still in prime-decomposition form). (Writing this pairwise function is much, much easier than trying to create a multi-way least-common-multiple function from scratch.)
Using lcmTwoDecompositions, you can then use foldl or foldr, a.k.a. List.foldl or List.foldr [documentation], to create a function that takes a list of zero or more prime decompositions instead of just a pair. This makes use of the fact that the least common multiple of { n1, n2, …, nN } is lcm(n1, lcm(n2, lcm(…, lcm(nN, 1)…))). (This is a variant of what Mark Dickinson mentions above.)
Use this to compute the least common multiple of these integers.
This just requires a recompose function that takes a prime decomposition and computes the corresponding integer.
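Since the answer above deliberately stops short of giving SML code, here is the same decompose → fold-lcm → recompose pipeline sketched in C++ instead, using the repeated-primes representation the answer found simpler (e.g. {2,2,3} for 12). It illustrates the algorithm without handing over the SML exercise; all function names are illustrative.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Prime decomposition as a sorted list of primes with repetition,
// e.g. decompose(12) == {2, 2, 3}. Trial division is plenty here.
std::vector<std::uint64_t> decompose(std::uint64_t n) {
    std::vector<std::uint64_t> out;
    for (std::uint64_t p = 2; p * p <= n; ++p)
        while (n % p == 0) { out.push_back(p); n /= p; }
    if (n > 1) out.push_back(n);
    return out;
}

// lcm of two decompositions: merge the sorted lists, keeping the higher
// multiplicity of each prime (the lcmTwoDecompositions step).
std::vector<std::uint64_t> lcm2(const std::vector<std::uint64_t>& a,
                                const std::vector<std::uint64_t>& b) {
    std::vector<std::uint64_t> out;
    std::size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        if (a[i] == b[j]) { out.push_back(a[i]); ++i; ++j; }
        else if (a[i] < b[j]) out.push_back(a[i++]);
        else out.push_back(b[j++]);
    }
    while (i < a.size()) out.push_back(a[i++]);
    while (j < b.size()) out.push_back(b[j++]);
    return out;
}

// Multiply a decomposition back into an integer (the recompose step).
std::uint64_t recompose(const std::vector<std::uint64_t>& d) {
    std::uint64_t r = 1;
    for (std::uint64_t p : d) r *= p;
    return r;
}

// Fold lcm2 over the decompositions of 1..n (the foldl in the answer;
// the decomposition of 1 is the empty list).
std::uint64_t smallestMultiple(std::uint64_t n) {
    std::vector<std::uint64_t> acc;
    for (std::uint64_t k = 2; k <= n; ++k) acc = lcm2(acc, decompose(k));
    return recompose(acc);
}
```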

maximum number of cycles in a directed graph with vertices = |V| and edges = |E|

Respected Sir,
I am working with a specific graph structure representing 2-player normal-form games (game theory). I know that I can compute all strongly connected components of the directed graph in O(V+E) via Tarjan's algorithm, but I was wondering what the complexity of computing all of the simple cycles of a strongly connected component is, and whether there is a known upper bound on the number of such simple cycles given the number of vertices in the strongly connected component?
I am looking for any literature/algorithms related to both of these problems. THANK YOU!
In your case the number of possible simple 2k-cycles is (n choose k) * (m choose k). If n, m, and k are not small, this grows exponentially.
Enumerating the cycles is not feasible. I doubt that it is possible to count them for an arbitrary graph in reasonable time. Even with dynamic programming techniques this takes exponential time and space (but with a lower exponent than without those techniques).
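To get a feel for how fast that bound blows up, here is a small exact binomial-coefficient helper (the specific n, m, k values in the test are just illustrative sizes, not taken from the question):

```cpp
#include <cassert>
#include <cstdint>

// Exact n-choose-k via the multiplicative formula. Each intermediate
// r * (n - k + i) is divisible by i, so the division stays exact.
// Fits in 64 bits for the moderate sizes used here.
std::uint64_t choose(std::uint64_t n, std::uint64_t k) {
    if (k > n) return 0;
    std::uint64_t r = 1;
    for (std::uint64_t i = 1; i <= k; ++i)
        r = r * (n - k + i) / i;
    return r;
}
```

Even at n = m = 20 and k = 10, (n choose k) * (m choose k) already exceeds 3 × 10^10 candidate 2k-cycles, which is why enumeration is hopeless for non-trivial games.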

Converting Real and Imaginary FFT output to Frequency and Amplitude

I'm designing a real time Audio Analyser to be embedded on a FPGA chip. The finished system will read in a live audio stream and output frequency and amplitude pairs for the X most prevalent frequencies.
I've managed to implement the FFT so far, but its current output is just the real and imaginary parts for each window. What I want to know is: how do I convert this into frequency and amplitude pairs?
I've been doing some reading on the FFT, and I see how the output can be turned into a magnitude and phase relationship, but I need a format that someone without knowledge of complex mathematics could read!
Thanks
Thanks for these quick responses!
The output from the FFT I'm getting at the moment is a continuous stream of real and imaginary pairs. I'm not sure whether to break these up into packets of the same size as my input packets (64 values), and treat them as an array, or deal with them individually.
The sample rate, I have no problem with. As I configured the FFT myself, I know that it's running off the global clock of 50MHz. As for the Array Index (if the output is an array of course...), I have no idea.
If we say that the output is a series of One-Dimensional arrays of 64 complex values:
1) How do I find the array index [i]?
2) Will each array return a single frequency part, or a number of them?
Thank you so much for all your help! I'd be lost without it.
Well, the bad news is, there's no way around needing to understand complex numbers. The good news is, just because they're called complex numbers doesn't mean they're, y'know, complicated. So first, check out the wikipedia page, and for an audio application I'd say, read down to about section 3.2, maybe skipping the section on square roots: http://en.wikipedia.org/wiki/Complex_number
What that's telling you is that if you have a complex number, a + bi, you can picture it as living in the x,y plane at location (a,b). To get the magnitude and phase, all you have to do is find two quantities:
The distance from the origin of the plane, which is the magnitude, and
The angle from the x-axis, which is the phase.
The magnitude is simple enough: sqrt(a^2 + b^2).
The phase is equally simple: atan2(b,a).
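In code, those two formulas are one line each (a C++ sketch; the function names are just illustrative):

```cpp
#include <cassert>
#include <cmath>

// Magnitude and phase of a complex value a + b*i, as described above.
double magnitude(double a, double b) { return std::hypot(a, b); }  // sqrt(a^2 + b^2)
double phase(double a, double b)     { return std::atan2(b, a); }  // angle in radians
```

std::hypot is preferable to writing sqrt(a*a + b*b) by hand because it avoids overflow/underflow for extreme inputs.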
The FFT result will give you an array of complex values. Twice the magnitude (the square root of the sum of the squared real and imaginary components) of each array element is an amplitude. Or take the log magnitude if you want a dB scale. The array index gives you the center of the frequency bin with that amplitude. You need to know the sample rate and the FFT length to get the frequency of each array element or bin.
f[i] = i * sampleRate / fftLength
for the first half of the array (the other half is just duplicate information in the form of complex conjugates for real audio input).
The frequency of each FFT result bin may be different from any actual spectral frequencies present in the audio signal, due to windowing or so-called spectral leakage. Look up frequency estimation methods for the details.
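Putting the pieces together, here is a hedged end-to-end sketch in C++: a naive DFT stands in for the FPGA FFT output, the strongest bin in the first half of the array is found, and its index is mapped to a frequency with f[i] = i * sampleRate / N. The amplitude uses the twice-the-magnitude rule above with a 1/N scaling; that scaling convention depends on how your particular FFT is normalised, so check it against your core. All names here are illustrative.

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <cstddef>
#include <utility>
#include <vector>

// Naive O(n^2) DFT: a stand-in for the FFT's real/imaginary output stream.
std::vector<std::complex<double>> dft(const std::vector<double>& x) {
    const std::size_t n = x.size();
    const double pi = std::acos(-1.0);
    std::vector<std::complex<double>> X(n);
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t t = 0; t < n; ++t)
            X[k] += x[t] * std::polar(1.0, -2.0 * pi * k * t / n);
    return X;
}

// Return (frequency, amplitude) of the strongest bin in the first half
// of the spectrum (the second half is the conjugate mirror for real input).
std::pair<double, double> strongest(const std::vector<double>& x,
                                    double sampleRate) {
    auto X = dft(x);
    const std::size_t n = x.size();
    std::size_t best = 0;
    for (std::size_t k = 1; k < n / 2; ++k)
        if (std::abs(X[k]) > std::abs(X[best])) best = k;
    // amplitude of a real sinusoid: twice the bin magnitude, scaled by 1/N
    return { best * sampleRate / n, 2.0 * std::abs(X[best]) / n };
}
```

With 64 samples at 50 MHz-derived sample rates the same index arithmetic applies; each packet of 64 complex values is one spectrum, and bin i of that packet covers frequencies centred on i * sampleRate / 64.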
