Occasionally, when designing how software should work, the question comes up: the existing user interface or use case is bad, and we think we can improve it. Should we? A change for the better is still a change – does the benefit outweigh the pain of re-learning? Or, more pessimistically: the users are already familiar with the bad technology, so why bother improving it?

I’ve heard computer game controls discussed this way. The usual recommendation is: don’t invent your own player control system unless yours is vastly superior – players are already so familiar with the existing system (and its flaws) that the pain of learning a new one will probably outweigh the pain of living with the flaws they already know.

In the realm of computer games, I suppose this makes sense. The player control system is a fairly complex thing – navigating a 3D space with a 2D control (the mouse) – and so many games have been produced that the problem is pretty well hashed out.

But how many of us build software that generic? How many of us work in such a well-known problem domain? If you’re a developer in that situation, then you can rely on what’s been done before. More often than not, though, I think people put too much faith in what’s already been done.