Tuesday, July 21, 2015

Error and Complexity

The learning curve model that I developed based on observations of how fast tasks are performed has yielded another valuable insight. The difference between the expected performance of a task and its actual performance over time will tend to spike and then fall off gradually, which is likely to confound typically linear (and often overly optimistic) approaches to scheduling its completion. Furthermore, the timing and size of the spike will vary with the complexity of the task, which may not even be explicitly factored into expectations. The math shows that the timing and size of this spike in what could be considered "error" are theoretically predictable.

In "Units of Completion," I suggested that the complexity of a task could be assessed in terms of a number of units that are simultaneously performed during the task, and which define how closely we can measure its completion. I've taken this another step, by using the concept of a unit to identify the highest meaningful efficiency that could be used to establish ideal expectations.

For example, if my task is to edit a page with 500 words, then the highest completion I could reliably measure is 499 words (500 minus one), which is 499/500 or 99.8% of the total. That fraction is also the highest meaningful efficiency, which translates into an expectation of editing 499 words in the best-case time. With average editing ability, I would have an efficiency of 50% instead of 99.8%, so by the end of the best-case time I would have edited only 50% of the total, or 250 words. If I'm responsible for meeting a schedule based on 99.8% efficiency, then at the end of the best-case time I would be behind by 249 words (499 minus 250), or 49.8% of the total; that is my error at that time, and it would take nine times the best-case time for it to fall to zero. A manager tracking my progress up to the best-case time would see an even worse picture, because my error would reach a peak of 67% when I was at just 40% of the best-case time. Ideally, of course, the manager should plan for the actual time needed to achieve zero error, and not worry about what happens until then.
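For anyone who wants to check these figures, here is a minimal sketch of the arithmetic in Python. It assumes a particular form for the completion curve: the fraction complete after x multiples of the best-case time is 1 - (1 - e)^x, where e is the efficiency. That form is an assumption on my part, but it reproduces every figure quoted in this post, and the names in the code are only for illustration.

    from math import log

    def completion(eff, x):
        # Assumed curve: fraction complete after x multiples of the
        # best-case time, for a worker with efficiency eff.
        return 1.0 - (1.0 - eff) ** x

    e_ideal = 499 / 500    # highest meaningful efficiency for a 500-word page (99.8%)
    e_actual = 0.5         # average editing ability

    # Error at the best-case time (x = 1): expected minus actual completion.
    print(completion(e_ideal, 1) - completion(e_actual, 1))    # ~0.498 -> 49.8%

    # Multiples of the best-case time for the actual curve to reach the 99.8% target.
    print(log(1 - e_ideal) / log(1 - e_actual))                # ~9.0

    # Where the gap between the two curves is widest, and how wide it gets.
    a, b = 1 - e_actual, 1 - e_ideal
    x_peak = log(log(b) / log(a)) / log(a / b)
    print(x_peak)                                              # ~0.40 of the best-case time
    print(completion(e_ideal, x_peak) - completion(e_actual, x_peak))   # ~0.67 -> 67%

The rest of the numbers in this post come from the same comparison, with different efficiencies substituted in.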

Editing ten pages instead of one could be treated as a single task, and the same fractions would simply apply to the larger number of words, with the minimum allowable error at the end now being ten words instead of one. If, however, that error were still held to one word, then the highest completion (and the highest meaningful efficiency) would increase to 99.98% (4999 of 5000 words), with significant side effects that might together be considered a major degradation in performance. For one, the manager would now need to allow more than 12 times the new best-case time (which accounts for all ten pages) for me to reach zero error. My maximum error would increase to nearly 74%, occurring at 32% of the best-case time; and at the best-case time itself, my error would be slightly higher than before, at 50.0%.
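Under the same assumed curve, the ten-page figures follow by swapping in the higher target:

    from math import log

    # Same assumed completion curve as above: 1 - (1 - e)**x.
    e_ideal = 4999 / 5000   # highest meaningful efficiency when only one word of error is allowed
    e_actual = 0.5

    print(log(1 - e_ideal) / log(1 - e_actual))     # ~12.3 multiples of the best-case time to zero error

    a, b = 1 - e_actual, 1 - e_ideal
    x_peak = log(log(b) / log(a)) / log(a / b)      # point where the error peaks
    print(x_peak)                                   # ~0.32 of the best-case time
    print((1 - b ** x_peak) - (1 - a ** x_peak))    # ~0.74 -> nearly 74% peak error
    print(e_ideal - e_actual)                       # ~0.4998 -> 50.0% error at the best-case time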

My actual efficiency in editing pages is higher than the average, more like 70% than 50%, which would reduce the maximum error, make it occur earlier, and shorten the time needed to reach zero error in each case. There would still appear to be a degrading effect on performance as the complexity increased (for example, the maximum error would grow from 54% to 62%), and that could still prompt an unnecessary "red flag" from a manager who was watching too closely and didn't expect it.
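With the same caveat about the assumed curve, the 70% case works out as follows for both targets; the peaks come earlier and are smaller, but the gap between the one-page and ten-page cases is still there:

    from math import log

    # Same assumed completion curve as above: 1 - (1 - e)**x.
    e_actual = 0.7
    for e_ideal in (499 / 500, 4999 / 5000):
        a, b = 1 - e_actual, 1 - e_ideal
        x_peak = log(log(b) / log(a)) / log(a / b)    # when the error peaks
        peak = (1 - b ** x_peak) - (1 - a ** x_peak)  # how large the peak is
        finish = log(b) / log(a)                      # multiples of best-case time to zero error
        print(round(x_peak, 2), round(peak, 2), round(finish, 1))
    # one page:  0.33  0.54  5.2
    # ten pages: 0.27  0.62  7.1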

The real world is certainly messier than this theoretical discussion might imply. As I described in "Units of Completion," a lot depends on whether the task you think you're evaluating is one of these idealized pure tasks, a parallel combination of pure tasks, or a sequence of pure tasks. Since my analysis is based on actual observations (as are the other models I've developed), the behaviors I've identified are potentially observable in real situations, and are therefore subject to test. They suggest a reasonable set of explanations for what may be unresolved or even unrecognized issues in real applications, which is why I've brought them up.

One such issue, which I alluded to and can foresee, is an increase in waste: wasted time, wasted effort, and wasted physical resources. For example, a coordinated "task" such as a major industrial or government project might be terminated because of commitment to unrealistic planning goals that could not be met, and the waste of discontinuing it would be added to the lost opportunity to meet the needs it was intended to address. Spikes in what I've called "error" might lead to wasting resources on correcting problems that don't exist, which rings true as a consequence of too much complexity. If more realistic schedules are impractical, either because they demand resources that aren't available or because of competition with others who do not acknowledge their necessity, then the gains of previous effort should be preserved as much as possible until a new and more effective task -- or set of tasks -- can be devised. If preservation isn't possible, waste seems inevitable, and the ultimate objectives of the task are too important to abandon, then cooperation (rather than competition) may be needed among the multiple entities that can together address the impediments to success.


