OpenMP
OpenMP is an API (application programming interface) that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran on most processor architectures and operating systems, including Linux, Unix, AIX, Solaris, Mac OS X, and Microsoft Windows platforms. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
OpenMP is managed by the non-profit technology consortium OpenMP Architecture Review Board (OpenMP ARB), jointly defined by a group of major computer hardware and software vendors, including AMD, IBM, Intel, Cray, HP, Fujitsu, NVIDIA, NEC, Microsoft, Texas Instruments, Oracle Corporation, and others.
OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.
An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and MPI (Message Passing Interface), or more transparently through the use of OpenMP extensions for non-shared-memory systems.
Introduction
OpenMP is an implementation of multithreading, a method of parallelization whereby a master "thread" (a series of instructions executed consecutively) "forks" a specified number of slave "threads" and a task is divided among them. The threads then run concurrently, with the runtime environment allocating threads to different processors.
The section of code that is meant to run in parallel is marked accordingly, with a compiler directive that will cause the threads to form before the section is executed. Each thread has an "id" attached to it which can be obtained using the function omp_get_thread_num. The thread id is an integer, and the master thread has an id of 0. After the execution of the parallelized code, the threads "join" back into the master thread, which continues onward to the end of the program.
By default, each thread executes the parallelized section of code independently. "Work-sharing constructs" can be used to divide a task among the threads so that each thread executes its allocated part of the code. Both task parallelism and data parallelism can be achieved using OpenMP in this way.
The runtime environment allocates threads to processors depending on usage, machine load and other factors. The number of threads can be assigned by the runtime environment based on environment variables, or in code using functions. The OpenMP functions are included in a header file labelled "omp.h" in C/C++.
History
The OpenMP Architecture Review Board (ARB) published its first API specification, OpenMP for Fortran 1.0, in October 1997. In October of the following year it released the C/C++ standard. 2000 saw version 2.0 of the Fortran specification, with version 2.0 of the C/C++ specification released in 2002. Version 2.5 is a combined C/C++/Fortran specification, released in 2005.
Version 3.0 was released in May 2008. Among the new features in 3.0 is the concept of tasks and the task construct. These new features are summarized in Appendix F of the OpenMP 3.0 specification.
Version 3.1 of the OpenMP specification was released on July 9, 2011.
The core elements
The core elements of OpenMP are the constructs for thread creation, workload distribution (work sharing), data-environment management, thread synchronization, user-level runtime routines and environment variables.
In C/C++, OpenMP uses #pragmas. The OpenMP-specific pragmas are listed below.
Thread creation
omp parallel: used to fork additional threads to carry out the work enclosed in the construct in parallel. The original process will be denoted as the master thread, with thread ID 0.
Example (C program): Display "Hello, world" using multiple threads.
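The example listing itself is not reproduced in this copy; a minimal reconstruction of such a program (names and formatting are illustrative, not the original listing) is:

#include <stdio.h>

int main(void)
{
    #pragma omp parallel
    {
        printf("Hello, world.\n");   /* executed once by every thread in the team */
    }
    return 0;
}

Compiled with an OpenMP-enabled compiler (e.g. gcc -fopenmp), the printf is executed once by each thread that the parallel construct forks.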
Output on a computer with 2 cores and 2 threads:
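The original output listing is also missing from this copy; with two threads, each executes the parallel region once, so one would expect:

Hello, world.
Hello, world.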
Work-sharing constructs
Used to specify how to assign independent work to one or all of the threads.
- omp for or omp do: used to split up loop iterations among the threads; also called loop constructs.
- sections: assigning consecutive but independent code blocks to different threads.
- single: specifying a code block that is executed by only one thread; a barrier is implied at the end.
- master: similar to single, but the code block will be executed by the master thread only, and no barrier is implied at the end.
Example: initialize the value of a large array in parallel, using each thread to do a portion of the work:
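The original listing is missing from this copy; a sketch of such a program (the array name, size, and initialization formula are illustrative) is:

int main(void)
{
    int i;
    int a[100000];
    const int n = 100000;

    /* the loop iterations are divided among the threads */
    #pragma omp parallel for
    for (i = 0; i < n; i++)
        a[i] = 2 * i;

    return 0;
}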
OpenMP clauses
Since OpenMP is a shared memory programming model, most variables in OpenMP code are visible to all threads by default. But sometimes private variables are necessary to avoid race conditions, and there is a need to pass values between the sequential part and the parallel region (the code block executed in parallel), so data-environment management is introduced as data sharing attribute clauses, appended to the OpenMP directive. The different types of clauses are:
Data sharing attribute clauses
- shared: the data within a parallel region is shared, which means visible and accessible by all threads simultaneously. By default, all variables in the work sharing region are shared except the loop iteration counter.
- private: the data within a parallel region is private to each thread, which means each thread will have a local copy and use it as a temporary variable. A private variable is not initialized and the value is not maintained for use outside the parallel region. By default, the loop iteration counters in the OpenMP loop constructs are private.
- default: allows the programmer to state that the default data scoping within a parallel region will be either shared, or none for C/C++, or shared, firstprivate, private, or none for Fortran. The none option forces the programmer to declare each variable in the parallel region using the data sharing attribute clauses.
- firstprivate: like private, except that each copy is initialized to the original value.
- lastprivate: like private, except that the original value is updated after the construct.
- reduction: a safe way of joining work from all threads after the construct.
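A minimal sketch (not from the original article) showing several of these clauses on one loop; the variable names are illustrative:

#include <stdio.h>

int main(void)
{
    int i;
    int n = 8;          /* shared by default */
    int last = -1;      /* receives the value from the sequentially last iteration */
    int init = 42;      /* every thread's private copy starts from this value */

    #pragma omp parallel for shared(n) firstprivate(init) lastprivate(last)
    for (i = 0; i < n; i++) {
        init += i;      /* operates on a private copy initialized to 42 */
        last = i;       /* after the loop, last == n - 1 */
    }

    printf("last = %d\n", last);   /* prints 7 */
    return 0;
}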
Synchronization clauses
- critical: the enclosed code block will be executed by only one thread at a time, and not simultaneously executed by multiple threads. It is often used to protect shared data from race conditions.
- atomic: the memory update (write, or read-modify-write) in the next instruction will be performed atomically. It does not make the entire statement atomic; only the memory update is atomic. A compiler might use special hardware instructions for better performance than when using critical.
- ordered: the structured block is executed in the order in which iterations would be executed in a sequential loop.
- barrier: each thread waits until all of the other threads of a team have reached this point. A work-sharing construct has an implicit barrier synchronization at the end.
- nowait: specifies that threads completing assigned work can proceed without waiting for all threads in the team to finish. In the absence of this clause, threads encounter a barrier synchronization at the end of the work sharing construct.
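A short sketch (illustrative, not from the original article) contrasting atomic and critical:

#include <stdio.h>

int main(void)
{
    int i;
    int counter = 0;
    double max_seen = 0.0;

    #pragma omp parallel for
    for (i = 0; i < 1000; i++) {
        /* only the single memory update below is made atomic */
        #pragma omp atomic
        counter++;

        /* a whole block protected so that one thread at a time executes it */
        #pragma omp critical
        {
            if (i * 0.5 > max_seen)
                max_seen = i * 0.5;
        }
    }

    printf("counter = %d, max_seen = %f\n", counter, max_seen);
    return 0;
}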
Scheduling clauses
- schedule(type, chunk): This is useful if the work sharing construct is a do-loop or for-loop. The iteration(s) in the work sharing construct are assigned to threads according to the scheduling method defined by this clause. The three types of scheduling are:
- static: Here, all the threads are allocated iterations before they execute the loop iterations. The iterations are divided among threads equally by default. However, specifying an integer for the parameter "chunk" will allocate "chunk" number of contiguous iterations to a particular thread.
- dynamic: Here, some of the iterations are allocated to a smaller number of threads. Once a particular thread finishes its allocated iteration, it returns to get another one from the iterations that are left. The parameter "chunk" defines the number of contiguous iterations that are allocated to a thread at a time.
- guided: A large chunk of contiguous iterations are allocated to each thread dynamically (as above). The chunk size decreases exponentially with each successive allocation to a minimum size specified in the parameter "chunk".
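For illustration (a sketch, not from the original article; the helper function work is hypothetical), a loop with uneven iteration cost using dynamic scheduling and a chunk size of 100:

#include <stdio.h>

/* hypothetical helper standing in for an iteration whose cost grows with i */
static double work(int i)
{
    double s = 0.0;
    int k;
    for (k = 0; k < i; k++)
        s += k * 0.5;
    return s;
}

int main(void)
{
    int i;
    const int n = 10000;
    double total = 0.0;

    /* iterations are handed out in chunks of 100 as threads become free */
    #pragma omp parallel for schedule(dynamic, 100) reduction(+:total)
    for (i = 0; i < n; i++)
        total += work(i);

    printf("total = %f\n", total);
    return 0;
}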
IF control
- if: This will cause the threads to parallelize the task only if a condition is met. Otherwise the code block executes serially.
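A sketch of the if clause (the threshold and problem size are illustrative, not from the original article):

#include <stdio.h>

int main(void)
{
    int i;
    const int n = 50000;
    long sum = 0;

    /* the region runs in parallel only when the problem is large enough */
    #pragma omp parallel for reduction(+:sum) if(n > 10000)
    for (i = 0; i < n; i++)
        sum += i;

    printf("sum = %ld\n", sum);
    return 0;
}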
Initialization
- firstprivate: the data is private to each thread, but initialized using the value of the variable with the same name from the master thread.
- lastprivate: the data is private to each thread. The value of this private data will be copied to a global variable with the same name outside the parallel region if the current iteration is the last iteration in the parallelized loop. A variable can be both firstprivate and lastprivate.
- threadprivate: the data is global, but it is private in each parallel region during run time. The difference between threadprivate and private is the global scope associated with threadprivate and the value preserved across parallel regions.
Data copying
- copyin: similar to firstprivate for private variables; threadprivate variables are not initialized unless copyin is used to pass the value from the corresponding global variable. No copyout is needed because the value of a threadprivate variable is maintained throughout the execution of the whole program.
- copyprivate: used with single to support the copying of data values from private objects on one thread (the single thread) to the corresponding objects on other threads in the team.
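A sketch (not from the original article) of threadprivate data initialized with copyin; the variable counter is illustrative:

#include <stdio.h>
#include <omp.h>

int counter = 0;                 /* one instance per thread, preserved across parallel regions */
#pragma omp threadprivate(counter)

int main(void)
{
    counter = 100;               /* set in the master thread only */

    /* copyin broadcasts the master thread's value to every thread's copy */
    #pragma omp parallel copyin(counter)
    {
        counter += omp_get_thread_num();
        printf("thread %d: counter = %d\n", omp_get_thread_num(), counter);
    }
    return 0;
}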
Reduction
- reduction(operator | intrinsic : list): each listed variable has a local copy in each thread, and the values of the local copies are summarized (reduced) into a global shared variable after the construct. This is very useful when a particular operation (specified by "operator") is applied iteratively to a variable, so that its value at a particular iteration depends on its value at a previous iteration. The steps that lead up to the operational increment are parallelized, while the threads combine their copies without racing on the shared variable. This would be required, for example, in parallelizing the numerical integration of functions and the solution of differential equations.
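As a concrete illustration of the numerical-integration case (a sketch, not taken from the original article), approximating pi as the integral of 4/(1+x^2) over [0,1] with the midpoint rule and a "+" reduction:

#include <stdio.h>

int main(void)
{
    int i;
    const int n = 1000000;
    const double h = 1.0 / n;
    double sum = 0.0;

    /* each thread accumulates into its own copy of sum; the copies are
       combined with "+" when the construct ends */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < n; i++) {
        double x = (i + 0.5) * h;
        sum += 4.0 / (1.0 + x * x);
    }

    printf("pi is approximately %.10f\n", sum * h);
    return 0;
}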
Others
- flush: the value of this variable is restored from the register to memory so that it can be used outside of a parallel part.
- master: executed only by the master thread (the thread which forked off all the others during the execution of the OpenMP directive). No implicit barrier; the other team members (threads) are not required to reach it.
User-level runtime routines
Used to modify/check the number of threads, detect whether the execution context is in a parallel region, determine how many processors are in the current system, set/unset locks, use timing functions, etc.
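A sketch (not from the original article) using a few of these routines:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    double t0;

    omp_set_num_threads(4);          /* request four threads for subsequent parallel regions */
    t0 = omp_get_wtime();            /* wall-clock timing routine */

    #pragma omp parallel
    {
        #pragma omp master
        printf("running with %d threads on %d processors\n",
               omp_get_num_threads(), omp_get_num_procs());
    }

    printf("elapsed: %f seconds\n", omp_get_wtime() - t0);
    return 0;
}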
Environment variables
A method to alter the execution features of OpenMP applications. Used to control loop iteration scheduling, the default number of threads, etc. For example, OMP_NUM_THREADS is used to specify the number of threads for an application.
Sample programs
In this section, some sample programs are provided to illustrate the concepts explained above.
Hello World
This is a basic program that exercises the parallel, private and barrier directives, as well as the functions omp_get_thread_num and omp_get_num_threads (not to be confused).
C
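The program listing is missing from this copy; a reconstruction along the lines described above (the exact original listing may differ) is:

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int th_id, nthreads;

    #pragma omp parallel private(th_id)
    {
        th_id = omp_get_thread_num();
        printf("Hello World from thread %d\n", th_id);

        #pragma omp barrier              /* wait until every thread has printed */

        if (th_id == 0) {
            nthreads = omp_get_num_threads();
            printf("There are %d threads\n", nthreads);
        }
    }
    return EXIT_SUCCESS;
}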
This program can be compiled using gcc-4.4 with the flag -fopenmp.
C++
This program can be compiled using GCC: gcc -Wall -fopenmp test.cpp -lstdc++
NOTE: The STL types (e.g. iostreams) are not thread-safe (as of 2011-09-20). Therefore, for instance, "cout" calls must be executed in critical areas or by only one thread (e.g. the master thread).
Fortran 77
Free form Fortran 90
Clauses in work-sharing constructs (in C/C++)
The application of some OpenMP clauses is illustrated in the simple examples in this section. The piece of code below (first sketch) updates the elements of an array "b" by performing a simple operation on the elements of an array "a". The parallelization is done by the OpenMP directive "#pragma omp". The scheduling of tasks is dynamic. Notice how the iteration counters "j" and "k" have to be made private, whereas the primary iteration counter "i" is private by default. The task of running through "i" is divided among multiple threads, and each thread creates its own versions of "j" and "k" in its execution stack, thus doing the full task allocated to it and updating the allocated part of the array "b" at the same time as the other threads.
The next piece of code (second sketch) is a common usage of the "reduction" clause to calculate reduced sums. Here, we add up all the elements of an array "a" with an "i"-dependent weight using a for-loop, which we parallelize using OpenMP directives and the reduction clause. The scheduling is kept static.
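The original listings are missing from this copy; the sketches below reconstruct them from the description above (array sizes and the inner computation are illustrative).

First sketch: dynamic scheduling with private inner counters.

#define N 100
#define M 10

int main(void)
{
    float a[N][M], b[N][M];
    int i, j, k;

    for (i = 0; i < N; i++)                /* fill "a" with some data */
        for (j = 0; j < M; j++)
            a[i][j] = i + j;

    /* "j" and "k" must be private; "i", the parallelized loop counter, is private by default */
    #pragma omp parallel for private(j, k) schedule(dynamic)
    for (i = 0; i < N; i++) {
        for (j = 0; j < M; j++) {
            b[i][j] = 0.0f;
            for (k = 0; k <= j; k++)
                b[i][j] += a[i][k];        /* simple operation on elements of "a" */
        }
    }
    return 0;
}

Second sketch: a reduced sum with static scheduling.

#include <stdio.h>
#define N 1000

int main(void)
{
    double a[N], sum = 0.0;
    int i;

    for (i = 0; i < N; i++)
        a[i] = 1.0;

    /* each thread accumulates a partial sum; the copies are combined at the end */
    #pragma omp parallel for reduction(+:sum) schedule(static)
    for (i = 0; i < N; i++)
        sum += i * a[i];                   /* "i"-dependent weight */

    printf("sum = %f\n", sum);
    return 0;
}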
An equivalent, less elegant, implementation of the above code is to create a local sum variable for each thread ("loc_sum"), and make a protected update of the global variable "sum" at the end of the process, through the directive "critical". Note that this protection is crucial, as explained elsewhere.
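A sketch of that variant (again reconstructed from the description, not the original listing):

#include <stdio.h>
#define N 1000

int main(void)
{
    double a[N], sum = 0.0;
    int i;

    for (i = 0; i < N; i++)
        a[i] = 1.0;

    #pragma omp parallel private(i)
    {
        double loc_sum = 0.0;              /* per-thread partial sum */

        #pragma omp for
        for (i = 0; i < N; i++)
            loc_sum += i * a[i];

        /* protected update of the shared variable; without it the update would race */
        #pragma omp critical
        sum += loc_sum;
    }

    printf("sum = %f\n", sum);
    return 0;
}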
Implementations
OpenMP has been implemented in many commercial compilers. For instance, Visual C++ 2005, 2008 and 2010 support it (in their Professional, Team System, Premium and Ultimate editions), as well as Intel Parallel Studio for various processors. Sun Studio compilers and tools support the latest OpenMP specifications with productivity enhancements for the Solaris OS (UltraSPARC and x86/x64) and Linux platforms. The Fortran, C and C++ compilers from The Portland Group also support OpenMP 2.5. GCC has also supported OpenMP since version 4.2.
A few compilers have early implementations of OpenMP 3.0, including:
- GCC 4.3.1
- Nanos compiler
- Intel Fortran and C/C++ versions 11.0 and 11.1 Compilers, Intel C/C++ and Fortran Composer XE 2011 and Intel Parallel Studio.
- IBM XL C/C++ Compiler
Sun Studio 12 update 1 has a full implementation of OpenMP 3.0.
Pros and cons
Pros
- Simple: need not deal with message passing as MPI does.
- Data layout and decomposition is handled automatically by directives.
- Incremental parallelism: can work on one portion of the program at a time; no dramatic change to code is needed.
- Unified code for both serial and parallel applications: OpenMP constructs are treated as comments when sequential compilers are used.
- Original (serial) code statements need not, in general, be modified when parallelized with OpenMP. This reduces the chance of inadvertently introducing bugs.
- Both coarse-grained and fine-grained parallelism are possible.
Cons
- Risk of introducing difficult-to-debug synchronization bugs and race conditions.
- Currently only runs efficiently on shared-memory multiprocessor platforms (see however Intel's Cluster OpenMP and other distributed shared memory platforms).
- Requires a compiler that supports OpenMP.
- Scalability is limited by memory architecture.
- No support for compare-and-swap.
- Reliable error handling is missing.
- Lacks fine-grained mechanisms to control thread-processor mapping.
- Can't be used on GPUs.
- High chance of accidentally writing false sharing code.
- Multithreaded executables often incur longer startup times, so they can actually run much slower than if compiled single-threaded; there needs to be a benefit to being multithreaded.
- Often multithreading is used when there is no benefit, yet the downsides still exist.
Performance expectations
One might expect to get an N-times speedup when running a program parallelized using OpenMP on an N-processor platform. However, this is seldom the case, for the following reasons:
- A large portion of the program may not be parallelized by OpenMP, which means that the theoretical upper limit of speedup is limited according to Amdahl's law.
- N processors in an SMP may have N times the computation power, but the memory bandwidth usually does not scale up N times. Quite often, the original memory path is shared by multiple processors, and performance degradation may be observed when they compete for the shared memory bandwidth.
- Many other common problems affecting the final speedup in parallel computing also apply to OpenMP, like load balancing and synchronization overhead.
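As a worked illustration of the first point (not from the original article): Amdahl's law gives the maximum speedup as S = 1 / ((1 - P) + P/N), where P is the fraction of the program that can be parallelized. With P = 0.95 on N = 8 processors, S = 1 / (0.05 + 0.95/8) ≈ 5.9, well short of the ideal factor of 8.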
To analyze the performance problems of OpenMP-based applications, Periscope, an online performance analysis toolkit, has been extended. Periscope can even pinpoint the performance problems of task-based OpenMP 3.0 applications. Several studies on the performance analysis of OpenMP applications have been carried out by Shajulin et al.
Thread affinity
Some vendors recommend setting the processor affinity on OpenMP threads to associate them with particular processor cores.
This minimizes thread migration and context-switching cost among cores. It also improves the data locality and reduces the cache-coherency traffic among the cores (or processors).
Benchmarks
There are some public domain OpenMP benchmarks for users to try.
- NAS parallel benchmark
- OpenMP validation suite
- OpenMP source code repository
- EPCC OpenMP Microbenchmarks
Learning resources online
- Tutorial on llnl.gov
- Reference/tutorial page on nersc.gov
See also
- Cilk and Intel Cilk Plus
- Message Passing Interface
- Concurrency (computer science)
- Parallel computing
- Parallel programming model
- POSIX Threads
- Unified Parallel C
- X10 (programming language)
- Parallel Virtual Machine
- Bulk synchronous parallel
- Grand Central Dispatch - comparable technology for C, C++, and Objective-C by Apple
- Partitioned global address space
- GPGPU
- CUDA - NVIDIA
- AMD FireStream
- OpenCL - standard supported by both NVIDIA and AMD/ATI
Further reading
- Quinn Michael J, Parallel Programming in C with MPI and OpenMP McGraw-Hill Inc. 2004. ISBN 0-07-058201-7
- R. Chandra, R. Menon, L. Dagum, D. Kohr, D. Maydan, J. McDonald, Parallel Programming in OpenMP. Morgan Kaufmann, 2000. ISBN 1-55860-671-8
- R. Eigenmann (Editor), M. Voss (Editor), OpenMP Shared Memory Parallel Programming: International Workshop on OpenMP Applications and Tools, WOMPAT 2001, West Lafayette, IN, USA, July 30–31, 2001. (Lecture Notes in Computer Science). Springer 2001. ISBN 3-540-42346-X
- B. Chapman, G. Jost, R. van der Pas, D.J. Kuck (foreword), Using OpenMP: Portable Shared Memory Parallel Programming. The MIT Press (October 31, 2007). ISBN 0-262-53302-2
- Parallel Processing via MPI & OpenMP, M. Firuziaan, O. Nommensen. Linux Enterprise, 10/2002
- MSDN Magazine article on OpenMP
- SC08 OpenMP Tutorial (PDF) - Hands-On Introduction to OpenMP, Mattson and Meadows, from SC08 (Austin)
- Comparing programmability of Open MP and pthreads
- OpenMP 3.0 Summary Card (PDF)
- Parallel Programming in Fortran 95 using OpenMP (PDF)
External links
- The official site for OpenMP includes the latest OpenMP specifications, links to resources, and a lively set of forums where questions about OpenMP can be asked and are answered by the experts and implementors.
- GOMP is GCC's OpenMP implementation, part of GCC
- IBM Octopiler with OpenMP support
- Blaise Barney, Lawrence Livermore National Laboratory site on OpenMP
- ompca, an application in the REDLIB project for interactive symbolic model-checking of C/C++ programs with OpenMP directives