Tracking the performance of an application matters. Too often, developers try to tune and optimize an application based on their instincts about where the performance is bad, and those instincts are frequently wrong.

Remy L's company included performance tracking blocks in their code, enabled by a debug flag. According to the performance stats, their program performed incredibly well. There were rarely ever any long-running methods.

Unfortunately, these wonderful statistics didn't jibe with the experience of the users, who would often wait thirty seconds to a minute for the application to respond after clicking a button.

And, as one final oddity, the performance monitor sometimes reported that methods took a negative amount of time to execute, making this program fast enough to break the laws of time and space.

Why was there such a difference?

#ifdef _DEBUG
    unsigned long TimeToExecute = GetTickCount() - lasttickcount;
    AnsiString StatString;
    if(TimeToExecute > 20000)
         TimeToExecute = -TimeToExecute;

    StatString.sprintf("Refresh Display required %d ms to complete.", TimeToExecute);
    if( EventLog )
        EventLog->LogEvent(EVENT_FORM_NAME, EVENT_AGENT, StatString);


GetTickCount in this environment returns the number of milliseconds since the computer booted. And if TimeToExecute is greater than 20,000 milliseconds, we set it equal to negative itself. Since TimeToExecute is an unsigned long, negating it wraps around to an enormous positive value, which the %d format specifier then reinterprets as a signed (and negative) number.

You'll be shocked to learn that this particular block was copy/pasted everywhere in the code. Maybe, in one of those copies, it actually made sense (somehow, for some reason), but the future copy/pastes just replicated that weird if block elsewhere.
