Networks are complex beasts, and as they grow, they only get more complicated. Diagnosing and understanding problems on a network rapidly gets hard, which is why there's a whole market of network analysis tools. “Fortunately” for the world, IniTech ships one of those tools.

Leonore works on IniTech’s protocol analyzer. As you might imagine, a protocol analyzer gathers a lot of data. In the case of IniTech’s product, the lowest level of data acquisition is frequently sampled voltage measurements over time. And it’s a lot of samples: depending on the protocol in question, it might need samples at intervals on the order of nanoseconds.

In Leonore’s case, those raw voltage samples are the “primary data”. Now, there are all sorts of cool things you can do with that primary data, but those computations get expensive. If your goal is to provide realtime updates to the UI, you can’t afford most of those computations in the update loop; they have to happen outside of it.

But you can do some of them. Things like level crossings and timing information can be computed quickly enough for the UI. These values are the “secondary data”.
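
To make “quickly enough” concrete, here’s a hedged sketch (entirely invented, not IniTech’s code) of the kind of computation that qualifies as secondary data: counting threshold crossings is a single linear pass over the sample buffer, cheap enough to run inside a UI update loop.

  // Hedged illustration, not IniTech's code: counting level crossings
  // is one linear pass over the buffer of voltage samples.
  #include <cstddef>
  #include <vector>

  std::size_t CountLevelCrossings(const std::vector<double>& samples,
                                  double threshold)
  {
    std::size_t crossings = 0;
    for (std::size_t i = 1; i < samples.size(); ++i)
    {
      const bool wasAbove = samples[i - 1] > threshold;
      const bool isAbove  = samples[i] > threshold;
      if (wasAbove != isAbove)  // the signal crossed the threshold
        ++crossings;
    }
    return crossings;
  }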

As data is collected, a number of other sections of the application need to be notified: the UI and the various high-level analysis components. Architecturally, Leonore’s team took an event-driven approach to this. As data is collected, a DataUpdatedEvent fires. The DataUpdatedEvent fires twice: once for the “primary data” and once for the “secondary data”. These two events always happen in lockstep, and they happen so closely together that every other module in the application can safely treat them as simultaneous. No component ever cares about just one of them; every consumer wants to see both the primary and the secondary data.
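
In outline, the contract looks something like this hedged sketch. Every name in it is invented, and the real code raises COM connection-point events rather than invoking callbacks directly, but the shape is the point: each collection cycle ends with exactly two firings, back to back, in a fixed order.

  // A hedged sketch of the collection module's contract; all names are
  // invented, and the real implementation fires COM events instead of
  // calling listeners directly.
  #include <functional>
  #include <utility>
  #include <vector>

  enum EventKind { Primary, Secondary };  // stand-in for the real tag enum

  struct Payload { std::vector<double> values; };

  class DataCollector
  {
  public:
    using Listener = std::function<void(EventKind, const Payload&)>;

    void Subscribe(Listener l) { m_listeners.push_back(std::move(l)); }

    // Every collection cycle ends with exactly two DataUpdatedEvent
    // firings, back to back, always in this order.
    void OnCollectionCycle(const Payload& primary)
    {
      const Payload secondary = ComputeSecondaryData(primary);
      Fire(Primary, primary);      // event one: raw voltage samples
      Fire(Secondary, secondary);  // event two: derived values
    }

  private:
    static Payload ComputeSecondaryData(const Payload& primary)
    {
      return primary;  // placeholder; the real code derives crossings, timing, etc.
    }

    void Fire(EventKind kind, const Payload& data)
    {
      for (auto& listener : m_listeners)
        listener(kind, data);
    }

    std::vector<Listener> m_listeners;
  };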

So, to review: the data collection module outputs a pair of data updated events, one containing primary data and one containing secondary data. It can never do anything else, and everything else in the application could treat those two events as a single event.

Which raises a question about this C++/COM enum, used to tag the different events:

  enum DataUpdatedEventType 
  {
    [helpstring("Unknown data type.")] UnknownDataType = 0, 
    [helpstring("Primary data.")] PrimaryData = 1,
    [helpstring("Secondary data.")] SecondaryData = 2,
  };

As stated, the distinction between primary and secondary events is unnecessary. In fact, sending two events makes all the consuming code more complicated, because in many cases consumers can’t start working until they’ve received the secondary data, and thus have to cache the primary data until the next event arrives.
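
Concretely, every consumer ends up with some variation of the following sketch. The class and method names are invented (it reuses the Payload stand-in from the earlier sketch, and the DataUpdatedEventType enum as the MIDL compiler would emit it), but the caching dance is forced by the two-event design: half the handler exists just to stash the primary payload until its twin shows up.

  // Hedged sketch of a typical consumer; names are invented, but the
  // caching is unavoidable given the two-event contract.
  #include <optional>

  class AnalysisModule
  {
  public:
    void OnDataUpdated(DataUpdatedEventType type, const Payload& data)
    {
      switch (type)
      {
      case PrimaryData:
        m_pendingPrimary = data;  // can't start yet; cache and wait
        break;
      case SecondaryData:
        if (m_pendingPrimary)
          ProcessBoth(*m_pendingPrimary, data);  // finally, both halves
        m_pendingPrimary.reset();
        break;
      // Note: no case for UnknownDataType. Nothing produces it, and
      // nothing is prepared to receive it.
      }
    }

  private:
    void ProcessBoth(const Payload& primary, const Payload& secondary)
    {
      // real analysis would go here
    }

    std::optional<Payload> m_pendingPrimary;
  };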

But that’s minor. The UnknownDataType is never used. It can never be used. There is no case in which the data collection module will ever output it, and no reason why it would ever need to. None of the consumers are prepared to handle it (note the missing case in the sketch above); sending an UnknownDataType would almost certainly cause a crash in most configurations.

So why is it there? I’ll let Leonore explain:

The only answer I can give is this: When this was written, half of us didn’t know what we were doing most of the time, and most of us didn’t know what we were doing half of the time. So now there’s an enum in the code base that has never been used and, I would submit, CAN never be used. Or maybe I ought to say SHOULD never be used. I would just delete it, but I’ve never quite been able to bring myself to do so.
