Flaw in Hoff's analysis of Y2K errors
Hoff,
Since the thread through which we had been interacting has fallen off the main list, I decided to start a new thread with the essence of my most recent answer to that older thread.
Your analysis estimates the error rate prior to rollover (that is, for each of the 24 months of 1998 and 1999) as 1.13%, and the error rate at rollover as 1.05%, and concludes from these apparently comparable rates that rollover should not be expected to raise the magnitude of errors.
I accept your calculation of these respective error rates from your stated assumptions, but not your conclusion. These error rates are not to the same scale: the 1.13% figure is for an entire month, whereas the 1.05% figure is for a two week period. So I believe these figures actually suggest a doubling in error rate at rollover.
Moreover, since the rollover is assumed to be a two week period surrounding January 1, 2000, the last week of 12/99 would experience both of the above error rates simultaneously, nearly tripling the error rate prior to rollover.
-- David L (bumpkin@dnet.net), September 25, 1999
David,
First, cutovers to new systems tend to cluster around month-end/month-start; or more precisely, end/start of fiscal periods.
This is especially true for financial systems. The point is to get a "clean" cutover after fiscal close, instead of converting balances, then also having to apply transactions processed following close in the old system.
Since many other systems also tend to have connections to financial systems, this end/start-of-period cutover generally holds for other system implementations as well.
Second, I simplified the spread of implementations to a constant rate, to provide another "buffer" and level of conservatism.
No doubt, some are racing the clock on system implementations; but I think they are truly at the far end of the bell curve. For example, SAP implementations seemed to peak somewhere in the period from mid-1998 to mid-1999. There is no question that demand for consultants dropped substantially around January-February of 1999; by July-August, the bench at virtually every consulting company was overflowing.
For remediated systems, my guess is the peak will be found somewhat later; probably in the period between July-September of this year.
In any case, while some may go down to the wire, the peak for system implementations no doubt has already occurred.
The other factor is the "system freeze" that is now in effect virtually across the board. That is, the normal level of maintenance/enhancement activity on systems, which added to the "noise" level previously (for example, the MCI-Lucent episode), is dropping to a very low level, a level I expect to be virtually zero at the date-rollover itself.
-- Hoffmeister (hoff_meister@my-deja.com), September 25, 1999.
Hoff: Thanks for addressing remediated systems here as well as those firms that decided to engage in re-engineering. Even though many firms didn't choose to engage in re-engineering, I've seen many that included upgrades and future expansion as well. My experiences were confirmed in Yardeni's T-100 audio seminar in the form of Patrick D'Acre's hour-long presentation. This presentation consisted of question/answer sessions with representatives from MULTIPLE firms. It also explained the Cap Gemini reported results. The thread in which this was discussed died out quickly on this forum.
David:
If you don't mind, I'm going to combine some issues from other threads into this one. I've followed your discussions with Hoff on the threads before this one, but didn't want to interject other topics at that time.
First, the Gartner results (which I assume were your guidelines in clustering failures around the first two weeks of January, 2000) HAVE been modified recently as information has come in from the field regarding embedded systems. Embedded systems were the whole reason behind their clustering of failures in those first two weeks. The bottom line on that seems to be (in general, but not all-inclusively) that had they done NOTHING AT ALL, things would have processed normally, save for seeing incorrect dates on indicators. It's similar in nature to the SSA fiasco recently discussed: things will process normally, but the date on the report (the letter, in the case of SSA) will reflect 1900 rather than 2000.
Failures of the type that SYSMAN fears (formats and files being changed and non-remediated programs failing due to this) have already been experienced and fixed when these programs were moved back into production.
There are some who argue that many of us are out of work because remediation has been deferred to a FOF (fix-on-failure) status. Where, exactly, is the evidence for this? I've worked on COUNTLESS projects which INCLUDED remediation in the past 10 years. I've seen some forego remediation on some applications simply to address more important issues, but the main impetus in ANY firm is to keep the firm moving along and making money. NO company will make money if its systems fail. WE (as contractors) are called in to fix the systems that show evidence of possible failure, or are called in to DETERMINE if systems show evidence of possible failure. That there are so many of us unemployed at this time, or working on projects that will be implemented after rollover, speaks VOLUMES for the Y2k work that has already been done. Did they announce to the public that they were compliant? NO! This argument always tickles me. As contractors, we're called in whenever a company needs more help. A Y2k project is no different from any other. It's kind of like the obituaries. How many folks put ads in the papers stating that they're alive and well?
-- Anita (spoonera@msn.com), September 25, 1999.
Anita, you sure paint a rosy picture. The trouble is, I don't believe you. Not one little bit. You may be fooling some people, but not me.
-- Not Fooled (notfooled@noway.com), September 25, 1999.
Not: Anita is painting her rosy picture using pigments made of extensive hands-on experience. Your attitude seems to be painted out of hot air. Got anything substantive backing you?
-- Flint (flintc@mindspring.com), September 25, 1999.
Hoff,
When other folks on this forum disagreed with the assumptions you used, you responded that while assumptions could be argued, you did use highly reputable sources for those assumptions, and therefore you would place greater weight on criticism of your calculations. I saw your perspective, so I took the time to carefully go through and absorb your calculations, and found a substantial problem. Now it seems the roles have reversed. Instead of explaining why I am incorrect, you are handwaving about your own assumptions. In short, you are engaging in the same tactic that you criticized.
Nonetheless, I appreciate the effort you put into your calculations, since the exercise of going through them and understanding what the result really means has tended to confirm the intuition that I and a lot of others have had about the magnitude of errors occurring at rollover.
-- David L (bumpkin@dnet.net), September 25, 1999.
David L: Take the time to address Yardeni's T-100 audio conference, and consider the presentation made by Howard Rubin (who seems to be used by many on this forum to back up their pessimistic views). Therein he states that 56% of the largest firms expect to be 100% compliant by end of year and that 94% expect to be 76% compliant by end of year. Does this spell failure? Not at all, as he further explains in Patrick D'Acre's segment of that audio seminar.
Consider also the statistics used by Mr. Milne in the thread "More on Flint's 'Rousing Success.'" He states [with my comments in brackets]:
It is sheer lunacy to come to the conclusion that we are in the throes of a "Rousing success." [But it's NOT lunacy to conclude that we are in the throes of a "Numbing failure"?]
Half not getting even their mission critical systems done, [using Cap Gemini and Howard Rubin's figures, explained already above] 25% not dealing with their embedded systems [75% DEALING with their embedded systems], 18% more than a month behind schedule and many relying solely upon 'fix on failure'. [Gee...82% on schedule or ahead of schedule, and many more NOT relying solely upon "fix on failure"]
I didn't have to wave my hands or even do any research to come up with this. It's right there sticking out at me on this forum....100% - 18% = 82%, etc. Which is bigger? Which indicates lunacy when used to back up assertions?
I suspect you're getting my point, David. I, personally, appreciate the time you spent engaging in debating the points that you did with Hoff.
-- Anita (spoonera@msn.com), September 25, 1999.
David, my original post on this subject made the point about implementations clustering around month/fiscal period closings. It is not "Hand-Waving" to recognize the effect that system freezes have on the overall error rate of systems; that is, in fact, the very reason the freezes are implemented.
As well, the fact that new implementations for Y2k have already peaked, and are not at a "constant" level running through the date rollover, is again hardly deniable.
I appreciate the effort you put into at least reading and understanding the argument; it has allowed the expansion of some of the points. It is somewhat disappointing that you now resort to accusations of "Hand-Waving", instead of continuing with the realities of the situation. But again, thanks for the effort to this point.
-- Hoffmeister (hoff_meister@my-deja.com), September 25, 1999.
Now wait a minute, Hoff. The assumptions you gave included that the rollover consists of a two-week period surrounding Jan 1, 2000. You derived the 1.13% figure and the 1.05% figure, and claimed these were comparable. My only point is that, based on the parameters that you defined, these figures do not imply, as you claim, that the error rate in the last week of 1999 will be comparable to that experienced prior to that time, but rather that the former will be almost triple the latter. Maybe a ratio of nearly 1:3 satisfies your definition of "comparable," but it doesn't satisfy mine.
I am disturbed that you placed considerable emphasis on no one's being able to find a flaw in your calculations, but that your response to a flaw being discovered is that the calculations shouldn't be taken seriously because they were based on conservative assumptions. This falls shy of my definition of intellectual integrity.
I am not suggesting that you or anyone else change your view of Y2K based on any of the above. It seems only prudent to place greater weight on one's own observations than on theoreticians' estimates, and I completely respect your and others' right to an opinion different from mine. Moreover, even though function points and other metrics seem to be in fashion, I remain skeptical of their value.
-- David L (bumpkin@dnet.net), September 25, 1999.
"Your attitude seems to be painted out of hot air. Got anything substantive backing you?" -- Flint (flintc@mindspring.com), September 25, 1999.
Dear oh dear... pot, kettle, black...
-- Andy (2000EOD@prodigy.net), September 26, 1999.
Thank you David L.
-- Will (sibola@hotmail.com), September 26, 1999.
David: Now wait a minute, Hoff. The assumptions you gave included that the rollover consists of a two-week period surrounding Jan 1, 2000. You derived the 1.13% figure and the 1.05% figure, and claimed these were comparable. My only point is that, based on the parameters that you defined, these figures do not imply, as you claim, that the error rate in the last week of 1999 will be comparable to that experienced prior to that time, but rather that the former will be almost triple the latter. Maybe a ratio of nearly 1:3 satisfies your definition of "comparable," but it doesn't satisfy mine.

And as I said, David, my initial post included the base assumption that system implementations are clustered around Month-Ends.
Granted, if you consider the Jan 1, 2000 rollover "just another Month-End", then you have a point.
However, dealing in reality, where system freezes are already in place, is another matter.
I used a constant rate of implementation, because I had and have no valid measure to determine the shape and peak rate of implementations. So using a constant rate provided the most conservative estimate.
But to use the fact that I utilized a constant rate to extend that rate through the date-rollover itself is absurd.
I am disturbed that you placed considerable emphasis on no one's being able to find a flaw in your calculations, but that your response to a flaw being discovered is that the calculations shouldn't be taken seriously because they were based on conservative assumptions. This falls shy of my definition of intellectual integrity.
To my knowledge, I never challenged anyone to find flaws in my calculations. In fact, I'd be surprised if there were not some errors there.
Again, the only "flaw" you seem to have found is that I apparently needed to spell out the fact that implementations would not occur at the same rate thru the actual rollover. Thank you.
-- Hoffmeister (hoff_meister@my-deja.com), September 26, 1999.
And as I said, David, my initial post included the base assumption that system implementations are clustered around Month-Ends.
Granted, if you consider the Jan 1, 2000 rollover "just another Month-End", then you have a point.
However, dealing in reality, where system freezes are already in place, is another matter.
I used a constant rate of implementation, because I had and have no valid measure to determine the shape and peak rate of implementations. So using a constant rate provided the most conservative estimate.
But to use the fact that I utilized a constant rate to extend that rate through the date-rollover itself is absurd.
Hoff, you are confusing the concept of date-rollover, i.e., 1/1/00, with the concept of the rollover period, which you've assumed to surround date-rollover by one week in each direction. The "Gartner spike" begins at the start of the rollover period, not at date-rollover.
Are you suggesting that instead of system implementations continuing up until the end of 1999, they will stop one week prior to the end of 1999 in deference to the Gartner Group?
-- David L (bumpkin@dnet.net), September 26, 1999.
Are you suggesting that instead of system implementations continuing up until the end of 1999, they will stop one week prior to the end of 1999 in deference to the Gartner Group?

No, David, I'm suggesting that system implementations have already declined from previous levels, and will continue to decline going into the rollover. The evidence of this is overwhelming, and no one, not Ed Yourdon, not anyone, denies this. They may interpret the cause differently, but not the actual facts. Not in deference to the Gartner Group, but in deference to potential Y2k problems.
Why do you think system freezes have been implemented to begin with?
-- Hoffmeister (hoff_meister@my-deja.com), September 26, 1999.
No, David, I'm suggesting that system implementations have already declined from previous levels, and will continue to decline going into the rollover. The evidence of this is overwhelming, and no one, not Ed Yourdon, not anyone, denies this. They may interpret the cause differently, but not the actual facts. Not in deference to the Gartner Group, but in deference to potential Y2k problems.
Why do you think system freezes have been implemented to begin with?
Hoff, in an attempt to put a damper on this merry-go-round, I will attempt to carefully state what aspect of your analysis I disagree with. I had been assuming, perhaps wrongly, that my finding your writing style easy to understand implied the converse.
For system replacement work (primarily), your analysis calculates a function point error figure of 1.13% for each of the 12 months of 1999, including the month of 12/99. Also calculated is a function point error figure of 1.05% for the rollover period of two weeks, centered at 1/1/00.
Since the 1.05% rate is for two weeks, it is equivalent to a monthly rate of 2.10%. So from the beginning of 12/99 up to but not including the last week of 12/99, there is replacement work but not the rollover period, so the rate of 1.13% applies. In the last week of 12/99, there is both replacement work and the rollover period, so the rate for that week is 1.13% + 2.10% = 3.23%, which I view as not comparable to 1.13%.
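To put the arithmetic in one place, here is a minimal sketch using only the figures quoted above; the variable names and the two-halves-per-month scaling are mine, not Hoff's:

    # Minimal sketch using only the rates quoted above; a month is treated
    # as two two-week halves, per the discussion.
    replacement_monthly = 1.13      # % of function points per month (Hoff's figure)
    rollover_two_weeks = 1.05       # % of function points over the two-week rollover window

    rollover_monthly_equiv = rollover_two_weeks * 2                     # ~2.10% on a monthly scale
    last_week_of_1999 = replacement_monthly + rollover_monthly_equiv    # the two overlap that week

    print(round(rollover_monthly_equiv, 2))  # 2.1  -- roughly double the pre-rollover rate
    print(round(last_week_of_1999, 2))       # 3.23 -- nearly triple the pre-rollover rate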
So my one and only point is that your analysis omitted these necessary steps, and therefore you erroneously concluded that based on the stated assumptions, the overall rate for the last week of 12/99 was comparable to the overall rate for each of the preceding weeks of that month and that year.
I freely acknowledge that your stated assumptions might have been unrealistically conservative. Again, all I am saying is that given the assumptions you did use, irrespective of their merit, what I have given above is in fact the correct conclusion.
If you elect to ignore this, I'll interpret that as your having no criticism of my conclusion. If you elect to accept it explicitly, fine. If you elect to dispute it, fine. But if you continue to answer questions I did not ask, I see no point in going through the pretense of a discussion.
-- David L (bumpkin@dnet.net), September 26, 1999.
David, perhaps I should be clearer. Unless I'm misreading, you make two basic points:
1) You feel the error rate calculated for system replacements should be spread throughout a month-long period. However, as I stated in the original post, implementations cluster around the month-end period, and are not spread uniformly throughout the month. That is precisely why I made the point in the first place, and is why the error rate is comparable to the Gartner Group's two-week rollover period.
2) Because I used a constant, uniform distribution of implementations for the 24-month period, you feel the error rate should be in addition to the rollover error rate, because the implementations would occur at the same level through rollover. Again, this was not my intent in using the uniform distribution. I do not expect implementations to occur at rollover at the same rate as in the past; indeed, they are not occurring now at the same rate as in the past. My intent in using the uniform rate was to add a level of conservatism.
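To illustrate point 1, here is a toy sketch; the month-end weighting is made up purely for illustration, not drawn from any actual implementation data:

    # Toy sketch of the two readings of the implementation assumption.
    # The 70% month-end weighting is made up for illustration only.
    monthly_replacement_rate = 1.13   # % of function points per month

    # Reading 1: implementations spread evenly across a four-week month.
    uniform_last_week = monthly_replacement_rate / 4        # ~0.28% in the final week

    # Reading 2: implementations cluster at month-end, so most of the monthly
    # figure already falls within a month-end window.
    clustered_last_week = monthly_replacement_rate * 0.70   # ~0.79% in the final week

    print(round(uniform_last_week, 2), round(clustered_last_week, 2))
    # Under clustering, the 1.13% is concentrated in a window comparable to the
    # two-week rollover period, which is why it is compared directly to the
    # 1.05% figure rather than added on top of it through the rollover.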
-- Hoffmeister (hoff_meister@my-deja.com), September 26, 1999.
OK, let's see if I get it (finally). The assumption of system replacements taking place during the 24 months of 1998 and 1999 was meant to be worded to exclude the last week of 12/99, which comprises the first half of the rollover period. So the 1.13% figure applies up until, but excluding, the last week of 12/99. But this correction does not seem to alter the fact that the rollover period is generating errors at 2.10% of function points per month, versus 1.13% per month for the period prior to rollover.
Question: what suggests that errors in a replacement system would not continue to surface after the system was installed, and why should this not be included?
Another question: given the problems that can surface during installation, might not the effort to actually get the system up and running continue for several days or possibly even weeks after the scheduled production date? This would suggest that concentrating planned installations at the end of a month (for example) might cause the effort to spill over into the next week or two.
Finally, in distributing the unremediated or missed Y2K errors, there's a point in the analysis where you take 25% of 4.2%. I was wondering whether you might have meant to take 25% of the 5.25%. I noticed this when I was initially reading your calculations, but hadn't bothered to mention it because it would have negligible effect compared to the other areas of concern.
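For what it's worth, a quick check of the two readings, assuming those are the figures in question:

    # Quick check of the two readings, using the figures quoted above.
    print(0.25 * 4.2)    # 1.05   -- 25% of 4.2%
    print(0.25 * 5.25)   # 1.3125 -- 25% of 5.25%
    # A difference of roughly a quarter of a percentage point either way,
    # which is why I called its effect negligible here.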
Thanks.
-- David L (bumpkin@dnet.net), September 26, 1999.