Mike has provided a copy of his slides and his code examples.
NVIDIA and other graphics card makers have put comparatively inexpensive, massively parallel computing within reach of common desktop (and laptop) computers in the form of the Graphics Processing Unit (GPU). GPUs can reach astonishing computational throughput in massively parallel applications, with figures of around 3 teraflops being bandied about for current hardware. But the key here is the parallelization, which means that perhaps tens of thousands of threads are executing exactly the same instruction at any given time.
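As a rough illustration of that model (a minimal sketch, not drawn from Mike's slides or code examples, and assuming NVIDIA's CUDA toolkit), the program below adds two vectors by handing each of about a million threads exactly the same few instructions to run on its own element:

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Every thread executes this same kernel; only its computed index differs. */
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  /* which element is mine? */
        if (i < n)
            c[i] = a[i] + b[i];                         /* one addition per thread */
    }

    int main(void)
    {
        const int n = 1 << 20;                          /* about a million elements */
        size_t bytes = n * sizeof(float);

        /* Host-side arrays. */
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

        /* Device-side arrays, plus copies of the input data onto the GPU. */
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        /* Launch thousands of threads at once: 4096 blocks of 256 threads each. */
        vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

        /* Copy the result back; this also waits for the kernel to finish. */
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);                   /* expect 3.0 */

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Notice that the familiar sequential loop has disappeared: its body has become the kernel, and the hardware schedules the threads that run it in lockstep groups.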
This parallel model brings with it a completely different computational view. Just as Alice's Looking-Glass world did not make sense to her at first yet was still internally consistent, the world of massively parallel heterogeneous programming will most likely not make sense at first to programmers whose entire experience has been in sequential programming - but it, too, is internally consistent.
Member Mike Elliott's presentation will give a brief overview of the basic computational model of many GPUs. In addition, sample programs will be shown that emphasize the different way of thinking required to make use of this new gift of massive parallelism.
Mike is a programmer with the Boeing Company in Long Beach, where he creates instrument simulations, web pages, and various tools used by the C-17 program.