Wednesday, April 02, 2008

QuickMP 0.8.0 Released

I just released the first version of QuickMP, which is a very small (one header file) piece of C++ code to ease the burden of shared-memory programming. It's ideal for applications where you perform the same operations repeatedly on lots of data.

The basic idea is that you convert your main C++ for loop from something like this:
// Normal for loop uses only 1 thread.
for (int i = 0; i < 1000000; ++i)
{
    processMyData(i);
}
...to something like this:
// Parallel for loop automatically uses 1 thread per processor.
QMP_PARALLEL_FOR(i, 0, 1000000)
    processMyData(i);
QMP_END_PARALLEL_FOR

3 comments:

Anonymous said...

Hello. This post is likeable, and your blog is very interesting, congratulations :-). I will add in my blogroll =). If possible gives a last there on my blog, it is about the Notebook, I hope you enjoy. The address is http://notebooks-brasil.blogspot.com. A hug.

Joseph said...

I love the way these spam links can have so many words and yet be completely devoid of content. The obscure references to things that never happened. Ingratiating comments that provide no evidence of being based in reality. Very funny.

Well, Tyler, I also think "this post is likeable." However, it seems like cortex simulations need to be massively parallel but not necessarily independent. Still, it would work great for ray-tracing or just generalized number crunching. Very cool.

Tyler Streeter said...

Joseph-

I think there are many parts of a cortex simulation that are massively parallel *and* independent. You could imagine two main update phases: communication and processing. The communication phase would pass new information among various cortical areas. The processing phase would perform local computations in each cortical region based on the latest incoming information from other regions. Both phases use lots of independent systems that can easily be parallelized.

My motivation for making QuickMP wasn't only my brain-inspired AI research... I wanted it to be useful for all kinds of problems that exhibit data parallelism. Generalized number crunching, as you said. However, it is also an excellent tool for my research. I have begun to use it to parallelize several pieces of my research code with great results.

Fwiw, I realize that the whole multi-threading approach to parallelism is very coarse and not ideal... what we really need is fine-grained parallelism. But threads are the best we have right now.