By Kato Mivule
Parallel programming involves the simultaneous execution of multiple processes or threads, while sequential programming involves the ordered execution of processes one after another.
In other words, with sequential programming, processes run one after another in succession, while in parallel computing multiple processes execute at the same time. Sequential computation is modeled after problems with a chronological sequence of events.
In such cases the program executes a process that waits for user input; another process then handles the result of that input, creating a series of cascading events. In parallel programming, by contrast, processes execute concurrently, yet their sub-processes or threads may need to communicate and exchange signals during execution, so programmers must put measures in place to coordinate such transactions.
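As a minimal illustration of the difference, here is a sketch in C with POSIX threads (an assumed choice; the sources cited here do not prescribe a language or API). The `worker` function and its labels are hypothetical: launched on two threads, its output can interleave in any order, while the same calls made one after another always run in order.

```c
#include <pthread.h>
#include <stdio.h>

/* Hypothetical worker: prints a label to show interleaved execution. */
void *worker(void *arg) {
    const char *label = (const char *)arg;
    for (int i = 0; i < 3; i++) {
        printf("%s: step %d\n", label, i);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;

    /* Parallel: both threads may run at the same time, so their
       output can interleave in any order. */
    pthread_create(&t1, NULL, worker, "thread A");
    pthread_create(&t2, NULL, worker, "thread B");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Sequential: the same work done one call after another,
       always in the same order. */
    worker("first");
    worker("second");
    return 0;
}
```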
It is in this type of environment that we have to think about CPU utilization, since parallel computation is meant for multiprocessor environments. To avoid impasses, Michael Suess, in his article “Mutual Exclusion with Locks – an Introduction”, suggests a number of ways to solve the problem of stalemates by implementing mutual exclusion, which prevents multiple concurrently running threads from working on the same data at the same time.
In the simplest terms, mutual exclusion is achieved by placing a lock on the critical region, allowing in only one thread at a time; when that thread is done, it unlocks the critical region, giving another thread access. Stalemates are thus done away with, and Michael Suess mentions a number of methods for dealing with such impasses:
- Mutex: One thread at a time is allowed into the critical section; other requesting threads are put to sleep until the thread in the critical section exits, making room for another thread (see the first sketch after this list).
- Spinlocks: Threads are not put to sleep but keep spinning (busy-waiting) until the thread in the critical section exits, giving room to another thread.
- Recursive Locks: An internal counter keeps track of a thread's locks and unlocks, so a thread can re-acquire a lock it already holds without deadlocking itself (a sketch follows below).
- Timed Locks: A timer is used when requesting the critical section; should it still be held by another thread, the requesting thread will do something else rather than be put to sleep or keep spinning, working as a time saver: do something else while you wait (see the timed-lock sketch below).
- Hierarchical Locks: Threads with the same memory locality acquire locks consecutively.
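To make the mutex idea concrete, here is a minimal sketch using POSIX threads (an assumption; Suess's article is not tied to a specific API, and the `increment`/`counter` names are hypothetical). Two threads update a shared counter, and `pthread_mutex_lock`/`pthread_mutex_unlock` ensure only one of them is in the critical section at a time. A spinlock version would follow the same pattern with `pthread_spin_lock`/`pthread_spin_unlock`, except that a waiting thread busy-waits instead of sleeping.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared data: the critical region both threads want to update. */
static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);   /* one thread at a time */
        counter++;                           /* the critical section */
        pthread_mutex_unlock(&counter_lock); /* let a waiting thread in */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* With the lock, the result is always 200000; without it,
       interleaved updates could lose increments. */
    printf("counter = %ld\n", counter);
    return 0;
}
```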
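Recursive locks can be sketched with the same API by giving the mutex the `PTHREAD_MUTEX_RECURSIVE` attribute; the `step` function below is a hypothetical example of a thread re-acquiring a lock it already holds, which a plain mutex would turn into a self-deadlock.

```c
#include <pthread.h>

static pthread_mutex_t rlock;

/* Hypothetical function that re-acquires a lock it already holds;
   the recursive mutex's internal counter goes up on each lock and
   down on each unlock, releasing the mutex only when it hits zero. */
void step(int depth) {
    pthread_mutex_lock(&rlock);
    if (depth > 0)
        step(depth - 1); /* same thread locks again: no self-deadlock */
    pthread_mutex_unlock(&rlock);
}

int main(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&rlock, &attr);
    step(3);
    pthread_mutexattr_destroy(&attr);
    pthread_mutex_destroy(&rlock);
    return 0;
}
```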
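Timed locks map onto `pthread_mutex_timedlock`, which takes an absolute deadline and returns `ETIMEDOUT` if the lock could not be acquired in time, so the thread can go do other work instead, as the bullet above describes. A rough sketch, assuming a POSIX system that provides this call (`timed_worker` is a hypothetical name):

```c
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical worker: try to enter the critical section, but give
   up after one second and do other work instead of sleeping or
   spinning. */
void timed_worker(void) {
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 1; /* absolute deadline: now + 1 second */

    int rc = pthread_mutex_timedlock(&lock, &deadline);
    if (rc == 0) {
        /* acquired within the deadline: do the critical work */
        pthread_mutex_unlock(&lock);
    } else if (rc == ETIMEDOUT) {
        /* still held by another thread: do something else instead */
        printf("lock busy, doing other work in the meantime\n");
    }
}

int main(void) {
    timed_worker(); /* uncontended here, so it acquires immediately */
    return 0;
}
```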
These methods of controlling thread communication and execution are a critical difference between sequential and parallel programming. The coordination they require adds overhead, but the benefits of parallel computation outweigh it, since multiple threads can execute concurrently.
 “Parallel computing – Wikipedia, the free encyclopedia.” [Online]. Available: http://en.wikipedia.org/wiki/Parallel_computing. [Accessed: 29-Sep-2010].
 “sequence (programming) — Britannica Online Encyclopedia.” [Online]. Available: http://www.britannica.com/EBchecked/topic/1086517/sequence. [Accessed: 29-Sep-2010].
 B. Harvey and M. Wright, Simply Scheme: Introducing Computer Science, 2nd ed., MIT Press, 1999, ISBN 0262082810, 9780262082815.
 M. O. Tokhi, M. A. Hossain, and M. H. Shaheed, Parallel Computing for Real-Time Signal Processing and Control, Advanced Textbooks in Control and Signal Processing, Springer, 2003, ISBN 1852335998, 9781852335991.
 “Mutual Exclusion with Locks – an Introduction » Thinking Parallel.” [Online]. Available: http://www.thinkingparallel.com/2006/09/09/mutual-exclusion-with-locks-an-introduction/. [Accessed: 29-Sep-2010].