(This is a continuation of my previous post, Why is parallel programming difficult?.)
Because we’re not looking for it, we often don’t notice the potential parallelisms in our programs. I’m not talking about obvious parallel problems like matrix multiplication and graphics rendering pipelines. I mean everyday problems. For example, we’re always weaving independent patterns together to construct our procedures, like this:
    pattern_a( A, B )
        C1 <- calc_1( A )
        C2 <- calc_2( A, B, C1 )
        return calc_3( C1, C2 )

    pattern_b( X, Y )
        Z <- calc_4( X )
        return calc_5( X, X, Z, Y )

    procedure_that_uses_both_patterns( N, M )
        C1 <- calc_1( N )
        Z <- calc_4( M )
        C2 <- calc_2( N, Z, C1 )
        return calc_5( M, M, Z, calc_3( C1, C2 ) )
We usually don’t think much about the fact that the first two lines of the combined procedure, C1 <- calc_1( N ) and Z <- calc_4( M ), can be swapped or performed in parallel: neither one depends on the other’s result.
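Here’s a minimal sketch of that idea in Python. The calc_* bodies are made-up placeholders standing in for the pseudocode’s steps; the point is only that the two independent first lines can be submitted to a thread pool together, while the dependent steps must wait for their results.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the calc_* steps in the pseudocode above.
def calc_1(a): return a + 1
def calc_2(a, b, c): return a + b + c
def calc_3(c1, c2): return c1 * c2
def calc_4(x): return x * 2
def calc_5(p, q, z, w): return p + q + z + w

def procedure_that_uses_both_patterns(n, m):
    # calc_1(n) and calc_4(m) share no data, so they can run concurrently.
    with ThreadPoolExecutor(max_workers=2) as pool:
        future_c1 = pool.submit(calc_1, n)
        future_z = pool.submit(calc_4, m)
        c1 = future_c1.result()
        z = future_z.result()
    # calc_2 depends on both results, so the parallelism ends here.
    c2 = calc_2(n, z, c1)
    return calc_5(m, m, z, calc_3(c1, c2))
```

Nothing in the pseudocode’s notation tells us this interleaving is legal; we have to discover the data dependencies by reading the code.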
And why should we? There’s no annotation in most programming languages to capture the parallelism. The compiler doesn’t want to know.
And what about common algorithms like quicksort? It’s usually taught and implemented as linear, but it’s really a bifurcating parallel algorithm: after each partition, the two halves can be sorted independently. The merge-sort algorithm involves a massive parallel split at the beginning followed by a hierarchical parallel reduction for the merge. Even bubble-sort is a great parallel algorithm. In its parallel form (odd-even transposition sort), it’s a linear-time sort, provided you have a very long chain of processors, each of which can communicate with its two neighbors.
Linearism is often not inherent. There are parallelisms in many common algorithms. But we don’t notice them much because our programming languages encourage linear thought, and give us no way to capture or annotate the parallelism.