A good stress test is to run a simple operation many times. Occasionally this turns up things like memory leaks, file handle leaks, temporary file leaks, and so on.
If you have a simple web server for static content that works fine for a few requests and seems fast serving a thousand, try running a million requests. Or a billion. Not necessarily from many concurrent clients, although that would be a good test too: just sequential requests.
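A minimal sketch of this kind of repetition test, as a portable shell function (the function name and output format are my own; the idea is just "run the same command N times and stop at the first failure"):

```shell
#!/bin/sh
# Run a command many times sequentially, stopping on the first failure.
# Usage: run_many COUNT COMMAND [ARGS...]
run_many() {
    n="$1"
    shift
    i=0
    while [ "$i" -lt "$n" ]; do
        # Any non-zero exit aborts the test and reports the iteration.
        "$@" || { echo "iteration $i failed" >&2; return 1; }
        i=$((i + 1))
    done
    echo "completed $n runs of: $*"
}
```

For the web server case you might run something like `run_many 1000000 curl -sf -o /dev/null http://localhost:8000/index.html` (URL and port are placeholders) and watch the server's memory and file-descriptor counts as it goes.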
I found a limitation in ab that way.
@liw I found memory leaks in git-annex this way
(sadly Haskell's State monad has a laziness-related memory leak, so it cannot be used in production. mathematicians.)
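A minimal sketch of the kind of leak meant here, assuming GHC and the standard `Control.Monad.State.Strict` from transformers/mtl (the leak is in unforced state thunks, which even the "strict" variant does not prevent with plain `modify`):

```haskell
import Control.Monad.State.Strict (execState, modify, modify')

-- Leaks: each step wraps the state in another (+ i) thunk,
-- so the heap grows linearly with n before anything is evaluated.
leaky :: Int -> Int
leaky n = execState (mapM_ (\i -> modify (+ i)) [1 .. n]) 0

-- Fine: modify' forces the new state at each step, constant heap.
strict :: Int -> Int
strict n = execState (mapM_ (\i -> modify' (+ i)) [1 .. n]) 0

main :: IO ()
main = print (strict 1000000)
```

Running the `leaky` version with a large enough n is exactly the sort of bug that only a "do it a million times" test finds; a hundred iterations look fine.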
Extremely true for storage systems. Writing a file works fine. Writing a hundred files works fine. Filling up the storage with files, and suddenly something goes wrong at 80% or 98%...