Tiny Optimization on Potentials

Speed

This morning I was looking for a little something in the Potentials code, and came across the following:

  x = MAX( MAX(_values[r+1][c], _values[r+1][c+1]),
           MAX(_values[r][c], _values[r][c+1]) );

And while it wasn't bad code, MAX is a macro: each argument gets evaluated for the comparison, and then the winning argument gets evaluated again. So while the inner MAX expansions weren't doing much extra work on their own, nesting them inside the outer MAX means each of those inner comparisons can get evaluated more than once - more work than necessary.
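
For reference, this is the idiom I mean. The project's actual MAX definition isn't shown here, but assuming the common one, the nested call expands so that a whole inner comparison gets run a second time whenever it wins the outer comparison:

  #define MAX(a, b)  ((a) > (b) ? (a) : (b))

  /* MAX(MAX(p, q), r) expands (roughly) to:                              */
  /*   ((((p) > (q) ? (p) : (q)) > (r)) ? ((p) > (q) ? (p) : (q)) : (r))  */
  /* ...so the inner (p) > (q) comparison runs again if its value wins.   */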

If we switch from that to the C library function fmax(), declared in <math.h>:

  x = fmax(fmax(_values[r+1][c], _values[r+1][c+1]),
           fmax(_values[r][c], _values[r][c+1]));

then each of the inner calls is evaluated exactly once, and the outer call is handed plain double values rather than re-expanded macro arguments. That will make things a little faster - without compromising the readability of the code.
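
To make the difference concrete, here's a tiny standalone sketch - not from the Potentials code - that counts how many times the arguments actually get evaluated, assuming that common MAX definition; lookup() is just a hypothetical stand-in with a side effect so the count is visible:

  #include <math.h>
  #include <stdio.h>

  #define MAX(a, b)  ((a) > (b) ? (a) : (b))   /* assumed definition */

  static int calls = 0;

  /* hypothetical stand-in for a table lookup, with a side effect so we can count */
  static double lookup(double v) {
      calls++;
      return v;
  }

  int main(void) {
      calls = 0;
      double m = MAX(MAX(lookup(1.0), lookup(2.0)),
                     MAX(lookup(3.0), lookup(4.0)));
      printf("MAX : %g in %d evaluations\n", m, calls);   /* 9 evaluations, not 4 */

      calls = 0;
      double f = fmax(fmax(lookup(1.0), lookup(2.0)),
                      fmax(lookup(3.0), lookup(4.0)));
      printf("fmax: %g in %d evaluations\n", f, calls);   /* exactly 4 evaluations */
      return 0;
  }

With the macro, each inner comparison evaluates both arguments plus the winner again, and then the outer comparison re-evaluates one whole inner expansion - nine calls for four values. With fmax(), each value is computed once and passed as a double.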

For the traditional test case, the original code reported:

2019-06-19 ...-0500 Potentials[...] [ResultsView -drawRect:] - plot drawn in 18.533 msec

and with the fmax, the results were:

2019-06-19 ...-0500 Potentials[...] [ResultsView -drawRect:] - plot drawn in 16.382 msec

which was a repeatable difference. Now this isn't huge, but it's more than a 10% drop in the execution time, so it's not trivial, and more to the point - the redundancy was just unnecessary. Good enough.