Graduate Student: 張君儒 Chang, Jun-Ru
Thesis Title: 具高效率及成本效益軟體測試方法之設計與分析 (Design and Analysis of High-Efficient and Cost-Effective Software Testing Methods)
Advisor: 黃慶育 Huang, Chin-Yu
Committee Members: 王豐堅, 蘇銓清, 陳朝欽, 張隆紋
Degree: Doctor
Department: College of Electrical Engineering and Computer Science - Institute of Information Systems and Applications (電機資訊學院 - 資訊系統與應用研究所)
Year of Publication: 2011
Graduation Academic Year: 99 (ROC calendar)
Language: English
Number of Pages: 107
Keywords (Chinese): 測試個案排序、帶有權重事件流程圖、可測試性、PIE分析、修正條件/覆蓋涵蓋率、結構覆蓋率
Keywords (English): Test Case Prioritization, Weight-based Event-flow Graph, Testability, PIE Analysis, Modified Condition/Decision Coverage, Structural Coverage
The rapid development of technology provides high performance and reliability for hardware systems; building on this foundation, software engineers can focus on making their software more convenient and highly reliable. To reach this goal, the testing stage of the software development life cycle (SDLC) usually takes more time and effort because of the growing complexity of software. In general, users prefer convenient and visually appealing software, so command-mode applications have gradually been replaced by GUI-based applications in recent years. In today's society, we frequently use all kinds of GUI applications on desktop computers, notebooks, and mobile phones. However, testing the correctness of a GUI-based application is more complex than testing a conventional code-based application. In addition to testing the underlying code of a GUI application, the large space of possible event combinations in GUI input sequences also requires creating numerous test cases to ensure the adequacy of GUI testing.
It is noted that running all GUI test cases and then fixing all of the discovered bugs can be time-consuming and may delay project completion. Hence, it is important to move the test cases that uncover the most faults toward the front of the testing process, so that these faults are detected as early as possible. Test-case prioritization has been proposed and used in recent years because it can improve the rate of fault detection during the testing phase; however, few studies have addressed the prioritization of GUI test cases. To solve this problem, we propose a weight-based event-flow graph that assigns weights to otherwise unweighted GUI test cases and ranks them by their weight scores. The weight scores can either be ranked once from high to low or be reordered using dynamically adjusted scores.
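To make the weight-based ranking concrete, the following minimal sketch prioritizes GUI test cases by the summed weights of their events; the event names, weight values, and the decay rule used for dynamic adjustment are illustrative assumptions rather than the exact scheme defined in the dissertation.

```python
# Minimal sketch of weight-based GUI test-case prioritization.
# Event names, weights, and the dynamic-adjustment rule are illustrative
# assumptions, not the exact scheme proposed in the dissertation.

event_weights = {           # weight assigned to each GUI event
    "open_file": 3.0,
    "edit_text": 2.0,
    "save_file": 3.0,
    "close_app": 1.0,
}

test_cases = {              # each test case is a sequence of GUI events
    "T1": ["open_file", "edit_text", "save_file"],
    "T2": ["open_file", "close_app"],
    "T3": ["edit_text", "edit_text", "save_file", "close_app"],
}

def score(events, weights):
    """Weight score of a test case = sum of the weights of its events."""
    return sum(weights.get(e, 0.0) for e in events)

# Static ranking: order test cases by their weight score, high to low.
static_order = sorted(test_cases,
                      key=lambda t: score(test_cases[t], event_weights),
                      reverse=True)

# Dynamic ranking: after a test case is selected, reduce the weight of the
# events it already covered, then re-score the remaining test cases.
def dynamic_order(cases, weights, decay=0.5):
    weights = dict(weights)
    remaining, order = dict(cases), []
    while remaining:
        best = max(remaining, key=lambda t: score(remaining[t], weights))
        order.append(best)
        for e in remaining.pop(best):
            weights[e] *= decay     # covered events contribute less afterwards
    return order

print("static :", static_order)
print("dynamic:", dynamic_order(test_cases, event_weights))
```

The static ordering corresponds to ranking the weight scores from high to low, while the dynamic ordering re-scores the remaining test cases after each selection.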
On the other hand, building software that can be tested efficiently has become an important topic, in addition to enhancing and developing new testing methods. In the past, a dynamic technique for estimating program testability, called propagation, infection, and execution (PIE) analysis, was proposed, but it requires considerable computational overhead to estimate the testability of software components. In this dissertation, we propose an Extended PIE (EPIE) method that accelerates conventional PIE analysis by generating group testability as a substitute for statement testability. The proposed method can be systematically separated into three steps: breaking a program into blocks, dividing the blocks into groups, and marking target statements. These three steps effectively decrease the number of analyzed statements, while the calculated testability values remain acceptable.
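As a rough illustration of the three steps, the sketch below groups the statements of a toy program and marks one target statement per group; the block boundaries, group size, and representative-selection rule are simplified assumptions made for this example, not the dissertation's exact definitions.

```python
# Simplified sketch of the EPIE idea: analyze one marked statement per group
# instead of every statement.  The block boundaries, group size, and choice of
# representative below are illustrative assumptions only.

statements = list(range(1, 13))          # statement numbers 1..12 of a toy program
block_starts = {1, 5, 9}                 # assumed basic-block entry points
group_size = 2                           # assumed maximum statements per group

# Step 1: break the program into blocks at the assumed block boundaries.
blocks, current = [], []
for s in statements:
    if s in block_starts and current:
        blocks.append(current)
        current = []
    current.append(s)
blocks.append(current)

# Step 2: divide each block into groups of at most `group_size` statements.
groups = [block[i:i + group_size]
          for block in blocks
          for i in range(0, len(block), group_size)]

# Step 3: mark one target statement per group (here: the last statement), so
# only len(groups) statements need PIE analysis instead of len(statements).
targets = [group[-1] for group in groups]

print("blocks :", blocks)       # [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
print("groups :", groups)
print("targets:", targets)      # group testability is estimated at these points
```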
Finally, Modified Condition/Decision Coverage (MC/DC) is a structural coverage measure that can be used to assess the adequacy and quality of the requirements-based testing (RBT) process. NASA has proposed a method for selecting the test cases needed to satisfy this criterion. However, the test cases selected by NASA's method may not satisfy the original definition of the MC/DC criterion; in addition, the method is complex and can take a long time to obtain the needed test cases. In this dissertation, we propose a classification-based algorithm to select the needed test cases. First, test cases are classified according to the outcome value of the decision expression and the value of the target condition. After all test cases have been classified, MC/DC pairs can be found quickly, conveniently, and effectively. Moreover, if some required test cases are missing (not yet found), the proposed classification-based method can also suggest to developers what kinds of test cases should be generated. Finally, experiments on real programs are performed to evaluate the performance and effectiveness of the proposed classification-based algorithm.
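The sketch below illustrates the classification idea on a toy decision, (A and B) or C: test cases are first grouped by the decision outcome and the value of the target condition, and MC/DC pairs are then searched only across the relevant groups. The decision, the data structures, and the pairing rule (unique-cause MC/DC, where only the target condition changes between the two test cases) are assumptions made for this example rather than the algorithm's exact implementation.

```python
# Illustrative sketch of classification-based MC/DC pair finding for the toy
# decision (A and B) or C.  Data structures and the decision itself are
# assumptions for this example, not the dissertation's exact algorithm.
from itertools import product

CONDITIONS = ["A", "B", "C"]

def decision(tc):
    """Outcome of the decision expression for one test case."""
    return (tc["A"] and tc["B"]) or tc["C"]

# Exhaustive test cases over the three conditions (the 8 truth-table rows).
test_cases = [dict(zip(CONDITIONS, bits)) for bits in product([False, True], repeat=3)]

def mcdc_pairs(cases, target):
    """Find pairs where only `target` changes and the decision outcome flips."""
    # Classification step: bucket test cases by (outcome, value of target).
    buckets = {}
    for tc in cases:
        buckets.setdefault((decision(tc), tc[target]), []).append(tc)
    pairs = []
    # Pairing step: a valid pair takes one case from an outcome-True bucket and
    # one from the outcome-False bucket with the opposite target value, with
    # all other conditions held equal (unique-cause MC/DC).
    for t1 in buckets.get((True, True), []) + buckets.get((True, False), []):
        for t2 in buckets.get((False, not t1[target]), []):
            if all(t1[c] == t2[c] for c in CONDITIONS if c != target):
                pairs.append((t1, t2))
    return pairs

for cond in CONDITIONS:
    for t1, t2 in mcdc_pairs(test_cases, cond):
        print(f"{cond}: {t1}  <->  {t2}")
```

If no pair exists for some condition, the empty bucket indicates which combination of condition values is missing, which is how a classification of this kind can suggest the test cases that still need to be generated.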
Owing to the rapid advance of modern technology, software engineers can develop more convenient and highly reliable applications on top of high-performance, high-reliability hardware systems. To produce highly reliable programs, the testing stage of the software development life cycle must invest more time and manpower in searching for faults hidden in the program, especially for large and complex software. In general, users prefer software that is convenient and visually appealing, and in recent years command-mode programs have gradually been replaced by GUI-based software. In today's society, whether we use a desktop computer, a notebook, or a mobile phone, the software we run is almost always graphical. Testing GUI software is therefore necessary for improving software quality, but it is more complex than testing conventional programs, and many different inputs must be generated to confirm the program's correctness; consequently, a large amount of space is consumed during testing to store the intermediate data that are produced.
It is worth noting that executing the test cases and fixing the faults they reveal can take considerable time and may delay the project schedule. It is therefore an important research problem to execute first, during the testing phase, the test cases that are likely to reveal the most faults, so that most faults can be found and fixed as early as possible. In recent years, researchers have proposed test-case prioritization methods to address this problem, but very few studies have prioritized the test cases of GUI-based software. To solve this problem, we propose a weight-based event-flow graph for ordering the test cases used to test GUI software. The test cases can be ranked from high to low according to their total weights, or reordered dynamically by adjusting the weight totals.
On the other hand, besides developing new software testing techniques, how to write programs that are easy to test is also an important issue. In the past, researchers proposed a method, called PIE analysis, for evaluating software testability, but PIE analysis requires a large amount of computation time to estimate the testability of software components. In this dissertation we therefore propose a method that computes group testability instead of the testability of every individual statement, thereby reducing the computation time. The proposed method consists of three steps: dividing the program into blocks, dividing each block into groups, and marking the statements whose testability will be computed. After these three steps, the number of statements whose testability must be computed is reduced effectively, and the estimated testability values remain acceptable.
During software testing, program coverage is usually measured and used as a criterion for deciding when to stop testing, and Modified Condition/Decision Coverage (MC/DC) is a high-level coverage criterion. NASA has proposed a procedure for applying this criterion to select test cases; however, the test cases selected by that procedure may not satisfy the original definition of MC/DC. In addition, executing NASA's procedure takes a large amount of time, and the method is too complex to implement. This dissertation therefore proposes a classification-based algorithm that selects test cases efficiently and guarantees that the selected test cases satisfy the MC/DC criterion. When classifying the test cases, the value of the decision expression and the value of each condition are used as the classification keys. Because the test cases have already been classified in the preceding step, pairs of test cases that satisfy the MC/DC definition can then be found quickly and effectively. Moreover, if some test cases required to satisfy MC/DC are missing, we also suggest to developers which test cases should be generated to satisfy the criterion. In the experimental part, we run experiments on real programs to observe the efficiency of our algorithm, and we also apply our method to test-suite reduction and test-case prioritization.