Mar 6, 2012

Parallel Programming By Bill Hollins

What is parallel programming and why should you care? Put simply, it's a method of programming in which multiple pieces of code are executed at once. While this method of programming is more complex than sequential (regular) programming, it can be much more powerful, especially with today's technology.
Parallelism may already be familiar to you in its most common form, multithreading. A multithreaded application runs multiple parts of its code at once, each in its own thread.
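To make the idea concrete, here is a minimal sketch of multithreading using Python's standard library: two threads each handle part of the work at the same time. The function name and the sample strings are just illustrative.

```python
import threading

def count_words(name, text, results):
    # Each thread records its own result under its own key.
    results[name] = len(text.split())

results = {}
threads = [
    threading.Thread(target=count_words, args=("a", "one two three", results)),
    threading.Thread(target=count_words, args=("b", "four five", results)),
]
for t in threads:
    t.start()   # both threads now run concurrently
for t in threads:
    t.join()    # wait for both to finish before reading results
print(results)  # {'a': 3, 'b': 2}
```

Note that each thread writes to a different key, so no lock is needed here; threads that share mutable state generally need one.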
The reason parallelism is so important is that as programs have become more powerful and resource intensive, hardware has had to catch up. However, it's not possible to simply build ever-faster processors, and the clock speeds of modern processors have more or less stabilized. Instead, processors have become more advanced through new instructions and, more importantly, multiple processor cores. A modern desktop processor generally has between 2 and 8 independent cores, each of which runs code separately. This is why parallelism matters: a sequential program can use only one core at a time, while a parallel program can use all of them and consequently run more quickly.
Another factor in parallelism is the modern GPU. Although the most common use of graphics cards is rendering graphics in video games, they have also seen more general-purpose use. Since a GPU is essentially a large number of slow, specialized processors built for number crunching, data-intensive applications can run extremely quickly on one.
This is how high performance applications, such as Photoshop and video games, have been able to get more powerful over the years. These programs have been optimized to run in parallel, so they can take advantage of the parallel processing power that modern computers have.
In many cases, writing parallel code does not require any special programming languages. Since modern operating systems have multithreading libraries built in, you can simply use those. However, for some solutions, multithreading might not be the ideal option, such as when you need the code to be more flexible in where it runs.
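As an example of using what the platform already provides, Python's standard library wraps the operating system's threads behind a simple pool interface, so no special language or third-party package is required. The `work` function here is a hypothetical stand-in for a real task.

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # Placeholder task; in practice this might be I/O or a computation.
    return x.upper()

# The executor manages OS threads for you and hands out work items.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, ["parallel", "programming"]))
print(results)  # ['PARALLEL', 'PROGRAMMING']
```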
For example, consider a program that runs on multiple computers in order to increase processing power. Since an operating system generally manages only a single computer, a multithreaded application won't span multiple machines. Specialized software such as MPI (the Message Passing Interface) is needed to achieve this; OpenMP, by contrast, targets shared-memory multithreading within a single machine. This also requires rewriting the application code in a more explicitly parallel way so that the calculations can be distributed across multiple computers at once.
A more common use of specialized parallelism libraries is when calculations are done on a GPU. Operating systems don't directly control the GPU, leaving that to the graphics driver, so again specialized tools are needed. In this case, two libraries called CUDA and OpenCL are used. When the application code is written to accommodate them, these software packages execute a large number of slow-running threads on the GPU's individual processing units. For computational problems that can be parallelized in this manner, such as cryptography, the GPU provides a manyfold increase in processing power over the CPU, which makes rewriting the application code for parallelism very worthwhile.
While specialized parallelism is not used much in mainstream programming and likely won't be in the future, multithreading has become of great importance, so to be a good programmer, it's important to gain at least a cursory understanding of it.
Thanks for reading my article. There's lots more great information written by me about programming languages at http://programminglanguagehelp.com
Article Source: http://EzineArticles.com/?expert=Bill_Hollins

Article Source: http://EzineArticles.com/6876675
