Testing Software for Y2K Compliance
For the first time in software history, a major catastrophe may be building toward a global computer meltdown. This potential disaster is known as the Year Two Thousand Problem, or simply Y2K. The problem originated when software engineers, most likely conserving scarce computer resources, used only two digits to represent years (such as 98 for the Year 1998). When the year changes from 1999 to 2000, some computer software will register the new year as 00, which could be interpreted as 1900, 2000, or just 00. As a result, applications with Y2K bugs might malfunction by miscalculating data or incorrectly affecting logic control. Obviously, this could produce any number of problems, from erroneous output to severe breakdown of critical systems. Hopefully, these Y2K problems will cause only minor inconveniences or harmless issues, yet it is impossible to be certain without at least testing these software applications. That is why there is a very strong need to test software for Y2K compliance, meaning that no Y2K bugs have been detected or that all Y2K bugs found have been fixed. In other words, Y2K compliance means a software application will continue to process date information accurately after the change to the Year 2000.
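To make the ambiguity concrete, here is a minimal Python sketch (an illustration, not from the original paper) of how an age calculation that stores two-digit years breaks at the 1999-to-2000 rollover:

```python
# Hypothetical example: the same age calculation with two-digit
# versus four-digit year fields, before and after the rollover.

def age_two_digit(birth_yy, current_yy):
    """Age computed the pre-Y2K way, with two-digit years."""
    return current_yy - birth_yy

def age_four_digit(birth_year, current_year):
    """Age computed correctly, with four-digit years."""
    return current_year - birth_year

# Someone born in 1960, checked in 1999: both methods agree.
print(age_two_digit(60, 99))       # 39
print(age_four_digit(1960, 1999))  # 39

# The same person checked in 2000: "00" minus "60" goes negative.
print(age_two_digit(60, 0))        # -60, a classic Y2K miscalculation
print(age_four_digit(1960, 2000))  # 40
```

The two-digit version is not wrong in 1999; it only fails once the century digits it silently assumed stop being true, which is why such bugs stayed hidden for decades.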
Keep in mind that there are two main goals in software testing. First, testing should “achieve adequate quality (debug testing),” whereby “the objective is to probe the software for defects so that these can be removed.” Second, testing should “assess existing quality (operational testing),” whereby “the objective is to gain confidence that the software is reliable” (Frankl 586). This means that software applications do not necessarily have to be completely free of Y2K bugs, because they might work satisfactorily enough with some defects. However, knowing what Y2K defects exist and what risks they pose is the only way to validate whether applications will be acceptable during the Year 2000. Unfortunately, Y2K testing is a very long and difficult process.
There are several steps that must be followed to ensure a system is Y2K compliant. They can be generalized into three phases: preparation and evaluation, implementation, and re-evaluation and deployment. The first phase, preparation and evaluation, establishes and analyzes the risks of each software application; then the requirements, code, and/or executables are tested for Y2K faults introduced by the developers. The second phase is implementation, when the developers redesign and re-implement the parts of the code that have Y2K faults. The final phase is re-evaluation and deployment, when the code is re-tested for Y2K bugs to ensure that no new defects have been introduced. Once the risks have been re-evaluated as acceptable, the software application is released to the customers (Sandler).
One example of a more detailed and functional process is as follows. First, software developers and testers must “convince management” to take the Y2K problem seriously (Schultz 64). With all the Y2K hype from the news media and customers inquiring about Y2K compliance, it should be relatively easy to persuade management to supply funding and resources to ensure the quality of their products. Second, all the software applications of each product must be examined and classified according to their level of operational importance, even though doing so may be subjective (Schultz 64). For example, a set of recommended categories, in descending order of importance, is listed below.
- Critical Applications: Applications for which going off-line or malfunctioning would be unacceptable.
- High Applications: Applications that are required to perform unique and important tasks.
- Medium Applications: Applications that can be replaced.
- Low Applications: Applications predicted to be phased out or replaced by Year 2000.
Next, schedule testing so that the most important applications are tested first, in decreasing order of importance. This gives the most critical applications the most resources and time to be thoroughly tested and fixed, especially since low-priority applications might never get an opportunity to be tested for lack of time or resources. The following step is to “express how programs use dates as a numeric coefficient” (that is, divide the number of date data types and logical constructs by the total number of data types and logical constructs in each program). This makes it possible to “highlight date-related data structures and their frequency in specific programs” or functions, and the data also helps schedule the order of testing more accurately. Next, examine the code found in the previous step for date errors or nonstandard date formats. This data will help “create a test library to data sets” used, during dynamic testing, to verify the existence of Y2K bugs in the code (Schultz 65). Finally, develop and execute tests according to the library created, recording the results as you go. It is suggested to develop or purchase automated tools to execute these steps more quickly and thoroughly (Schultz 67).
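The “numeric coefficient” step above might be sketched as follows. This is a hypothetical illustration: the name-matching heuristic and the sample declarations are assumptions, not part of the cited process.

```python
import re

# Hypothetical sketch of the "numeric coefficient": the fraction of a
# program's declarations that appear to be date-related. Real tools
# would also count logical constructs, not just declarations.
DATE_PATTERN = re.compile(r'date|year|month|day|\byy\b|\bmm\b|\bdd\b',
                          re.IGNORECASE)

def date_coefficient(declarations):
    """Ratio of date-related declarations to all declarations."""
    if not declarations:
        return 0.0
    date_like = [d for d in declarations if DATE_PATTERN.search(d)]
    return len(date_like) / len(declarations)

# Assumed sample fields from a billing program.
decls = ["int order_id", "char exp_date[6]", "int yy", "float price"]
print(date_coefficient(decls))  # 0.5 -> half the fields are date-related
```

A program scoring high on this coefficient would be scheduled for earlier and more thorough testing than one that barely touches dates.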
Additionally, there are several procedures for fixing Y2K bugs once they have been found. First, each program requiring a fix must be assessed for design plans with respect to all of its possible internal and external interfaces (Schultz 66). The next step is to estimate the cost of each plan, so that management can better understand which solution is best to implement (Schultz 68). Finally, software engineers implement the planned changes necessary for Y2K compliance. Fortunately, Y2K bugs are much easier to fix than typical maintenance defects, because they usually do not require a complete redesign and re-implementation of an entire application or function. Of course, it is still necessary to re-evaluate and monitor the results of these changes (Schultz 71).
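One reason Y2K fixes are often cheap is that a repair can be localized. A widely used repair technique of the era (not named in the sources above, so treat this as an illustrative aside) is “windowing”: interpreting two-digit years relative to a pivot instead of widening every stored field.

```python
# Windowing sketch: two-digit years are expanded relative to a pivot.
# The pivot value of 50 (a 1950-2049 window) is an illustrative
# choice; real projects picked a pivot suited to their data.

PIVOT = 50

def expand_year(yy):
    """Expand a two-digit year using a fixed 1950-2049 window."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(expand_year(98))  # 1998
print(expand_year(2))   # 2002
```

The trade-off is that windowing does not remove the ambiguity, it only moves it: this window breaks again for dates at or beyond 2050, so the fix must be documented and eventually revisited.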
There are several types of Y2K tests that can be performed on software applications. The first is searching for date functions, or Y2K “sensitivities” testing: identifying probable locations in the code and/or requirements where Y2K bugs are likely to exist (Fredrickson). Of course, this method implies that the code and/or requirements are available; otherwise, black box testing would be the only alternative. This procedure is usually implemented with automated tools written or purchased specifically for the task, though software engineers must still examine and verify the tools' results. Another type of Y2K testing is regression testing, which ensures that all the changed code operates as well as it did before the changes (Fredrickson). A comprehensive regression test involves completely re-executing all the tests done before the Y2K changes were implemented (this is called baseline testing) and then comparing those results with the same tests run after the changes. This ensures that the functionality of the software has not decreased. A third type is validation testing, which ensures that all the changed code will operate during the Y2K transition and afterwards (Fredrickson). This is done in a “time-advanced environment,” such as by incrementing the date and time of the operating system and/or hardware. Unfortunately, there are no universal “critical dates that programs must be tested with,” because each program is different (Sandler). However, it is recommended to test, at the very least, whether the software will continue to work as the internal date changes to January 1, 2000. It should also be tested with dates in the next century, such as March 15, 2001, especially against past data from this century, such as from the Year 1998. It could be concluded that the most difficult part of testing is the need to re-test every application at least several times.
As a result, testing for Y2K compliance is very costly in personnel, resources, and time, all of which may be in short supply.
Automated tools are very common in today’s testing environments, especially when testing for Y2K defects. These tools are programs designed to automatically find, and sometimes fix, Y2K bugs. They do this by examining requirements and code, and/or by executing programs with predefined inputs and comparing the results against expected outputs. These inputs, including the requirements and code, are almost always supplied by software engineers who are familiar with the application under test and with what needs to be tested; thus some manual work is still necessary when dealing with automated tools. The most common types of automated tools include “system date simulators, code analyzers, pre-written add-on date functions, language and operating system upgrades, and database converters” (Sandler). These tools are extremely valuable because they are faster and more thorough than people at examining, executing, and sometimes fixing programs (Beizer 449); thus, automated tools are more cost effective than manual testing by software engineers. However, “too much automation does not allow for learning and customization.” It has been “found that the accuracy and completeness of the identification task indicates the productivity and quality for each of the subsequent tasks. So, in addition to a range of automation, there is also a range of accuracy and completeness for the automation” (Sandler). This is because of the limited artificial intelligence programmed into these generalized tools (Beizer 449). Therefore, software engineers should manually verify and question the results of any automated tool, or implement their own tools tailored to each application being tested. Thus, automated tools are not exceptionally good at finding new and unique defects. Fortunately, Y2K bugs are usually simpler to find and easier to fix than most bugs, so automated tools are well suited to testing for Y2K compliance.
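To show why a code analyzer is fast but still needs human review, here is a minimal sketch of one. The patterns are illustrative heuristics invented for this example, not the rule set of any real product, and a scan like this will produce both false positives and false negatives, which is exactly why engineers must verify its output.

```python
import re

# Minimal sketch of a Y2K "code analyzer": flag source lines that
# appear to store, format, or compute with two-digit years.
SUSPECT_PATTERNS = [
    re.compile(r'\b(yy|yr)\b', re.IGNORECASE),   # two-digit year variables
    re.compile(r'(?<!\d)19\d\d(?!\d)'),          # hard-coded 19xx literals
    re.compile(r'%0?2d.*year', re.IGNORECASE),   # two-digit year formatting
]

def scan(lines):
    """Yield (line_number, line) for each suspicious source line."""
    for num, line in enumerate(lines, start=1):
        if any(p.search(line) for p in SUSPECT_PATTERNS):
            yield num, line.strip()

# Assumed C-like source under audit.
source = [
    'int yy = now.tm_year;',
    'printf("19%02d", yy);',
    'total = price * qty;',
]
for num, line in scan(source):
    print(num, line)
```

Here the tool flags the first two lines and ignores the third, but it would equally flag a harmless variable that happens to be named `yy`; separating real defects from noise remains the engineer’s job.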
However, it is still better for software engineers to take responsibility for Y2K testing, and not the tools, to help ensure a more complete understanding of the problem and implementation of the most appropriate fixes.
Most companies tend to use internal resources to handle Y2K testing. And because many companies underestimate the effort, time, and resources needed for Y2K testing, a large number of other scheduled projects are being delayed and will not finish on time. Yet testing software for Y2K compliance may not be sufficient to find and resolve all the Y2K faults an application might have, even though such testing is the last line of defense for most companies. However, most software applications already contain many known defects and are still being used, so Y2K bugs may not prevent applications from being used in the future. Currently, there are thousands of software applications with known defects; as a result, people have learned to deal with them and use them effectively. In the Year 2000, software applications will probably have more bugs, and people will probably continue to deal with them. Except for critical, financial, and life-dependent applications, companies and people will simply have to struggle with Y2K bugs until those systems are upgraded or phased out. Luckily, testing for Y2K compliance greatly decreases the risk of software applications not working in the Year 2000, even though it does not guarantee perfect performance.
Alternatively, there are risks in fixing software that is not Y2K compliant. The Y2K Aftermath (Y2KA) “refers to problems caused by bugs in the software that is supposed to fix the Y2K problem.” When defects are repaired, new bugs are often created, and these new bugs may go undetected for years “after the repaired software goes into operation” (Goth 26). Furthermore, if a company’s suppliers and customers are not Y2K compliant, then the company can still suffer. Thus, a company needs to test, or at least question, its suppliers and customers to encourage their “Y2K preparedness.” This enables companies to analyze how they will be adversely impacted and to prepare for it (Goth 26). As a result, testing for Y2K compliance should not be only an in-house activity, but rather a comprehensive process involving every company associated with a product.
by Phil for Humanity
NOTE: This paper was first published in the Fall of 1999.
- Beizer, Boris. “Software Testing Techniques.” Second Edition. International Thomson Computer Press, U.S.A., 1990.
- Frankl, Phyllis G., Richard Hamlet, Bev Littlewood, and Lorenzo Strigini. “Evaluating Testing Methods by Delivered Reliability.” IEEE Transactions on Software Engineering. Volume 24, Number 8, August 1998.
- Fredrickson, Janet. “Y2K Testing Basics and Beyond.” http://www.mitre.org/research/y2k/docs/BASICS.html. July 1998.
- Goth, Greg. “Concern Rising About Y2KA (the Y2K Aftermath).” Computer: Innovative Technology for Computer Professionals. Volume 32, Number 1, January 1999.
- Sandler, Robert J. “The Year 2000 FAQ: Frequently Asked Questions about the Year 2000 Computer Crisis.” http://www.y2k.com/y2kfaq.htm. Version 2.3, May 1998.
- Schultz, James E. “Managing a Y2K Project - Starting Now.” IEEE Software. May/June 1998.