Recently I had a conversation with Scott Meyers about performance issues. One of the topics that came up was how our performance intuition often sucks eggs. There are so many factors that change, yet our basic assumptions remain the same. For example, there are still mainstream game programmers who try to avoid floating-point math because their performance intuition says "floating-point math is slow." This performance intuition is usually backed up with past evidence that indicated floating-point math was indeed slow. However, this intuition fails to take into account that the floating-point math was slow on:
a particular platform
with a particular compiler or virtual machine
using a particular algorithm
on a particular operating system
or using a particular library.
Any one of these things could change our performance intuition about floating-point math, but if we haven't measured the change, our intuition is unlikely to shift. As performance engineers we all understand the need to measure, but as Scott pointed out, who really has the time to measure each potential performance issue on each platform/compiler/VM/OS/library?
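Measuring is cheaper than it sounds, though. As a minimal sketch (the workload and function names are my own invention, not from the conversation above), a Python micro-benchmark that tests the "floating-point math is slow" assumption on the machine actually in front of you might look like:

```python
import timeit

# Hypothetical micro-benchmark: compare integer vs. floating-point
# arithmetic on *this* platform and interpreter, instead of trusting
# intuition carried over from some other platform.

def int_sum(n=100_000):
    total = 0
    for i in range(n):
        total += i * 3
    return total

def float_sum(n=100_000):
    total = 0.0
    for i in range(n):
        total += i * 3.0
    return total

if __name__ == "__main__":
    # Take the minimum of several repeats to reduce scheduling noise.
    int_time = min(timeit.repeat(int_sum, number=10, repeat=5))
    float_time = min(timeit.repeat(float_sum, number=10, repeat=5))
    print(f"int:   {int_time:.4f}s")
    print(f"float: {float_time:.4f}s")
    # The ratio depends entirely on the platform, compiler/VM, and
    # workload -- which is exactly the point: measure, don't assume.
```

The result will differ across interpreters, hardware, and workloads, so a number obtained this way only answers the question for the configuration it was run on.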
There’s no easy answer to this problem. Part of my role at Microsoft is to help answer questions about performance on various platforms/compilers/libraries, but even that level of education only goes so far. As I like to say in my presentations, it’s up to all of us to change the world. When you make a new discovery about performance, educate your team. Get the word out. We can only make true progress collectively.