| Field | Value |
|---|---|
| Graduate Student | 蔡宗翰 Tzung-Han Tsai |
| Thesis Title | 運用延伸的PIE技術於軟體可測試性分析 (Software Testability Analysis Using Extended PIE Techniques) |
| Advisor | 黃慶育 Chih-Yu Huang |
| Oral Examination Committee | |
| Degree | 碩士 Master |
| Department | 電機資訊學院 (College of Electrical Engineering and Computer Science) - 資訊工程學系 Computer Science |
| Year of Publication | 2007 |
| Graduating Academic Year | 95 |
| Language | English |
| Number of Pages | 40 |
| Chinese Keywords | 軟體可測試度 (software testability) |
| Foreign-Language Keywords | Software testability |
In software engineering, data obtained during the testing phase can help developers predict software reliability more accurately. However, as software complexity grows, the testing phase incurs an ever greater burden. To reduce the cost of testing, beyond developing new testing techniques, how to build software that can be tested effectively has become a new issue, and research on software testability of many kinds has emerged. PIE testability analysis is one of the many methods proposed for assessing software testability; it consists of three steps: propagation, infection, and execution. Earlier studies have shown that PIE testability analysis can effectively support the testing phase. However, as programs grow, PIE analysis requires a great deal of computation and time to assess the testability of software components. In this thesis, we propose using group testability in place of location testability to accelerate PIE analysis. Our method consists of three steps: partitioning the program into blocks, combining blocks into groups, and marking targets. Finally, we compare the results with those of traditional PIE testability analysis. We also developed a tool named ePAT to help mark the targets to be analyzed. Our experimental results indicate that the amount of code to be analyzed can be effectively reduced, while the computed testability values remain within an acceptable range. (Translated from the Chinese abstract.)
In software engineering, data gathered during the testing phase can help developers predict software reliability more precisely. However, the testing phase demands ever more effort as software complexity grows. Besides enhancing and developing new testing methods, how to build software that can be tested efficiently has therefore become an important topic, and a wide variety of research on software testability has emerged. In the past, a dynamic technique for estimating program testability, called propagation, infection, and execution (PIE) analysis, was proposed. Previous studies show that PIE analysis can complement software testing. However, the technique incurs considerable computational overhead in estimating the testability of software components. In this thesis, we propose a method (EPIE) to accelerate traditional PIE analysis by generating group testability as a substitute for location testability. The technique can be separated into three steps: breaking a program into blocks, dividing blocks into groups, and marking target statements. We also developed a tool called ePAT (extended PIE Analysis Tool) to help identify the locations to be analyzed. The experimental results show that the number of analyzed locations can be effectively decreased, while the estimated testability values remain acceptable and useful.
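The PIE model scores a program location by three probabilities under random inputs: execution (the location is reached), infection (a syntactic mutation at the location changes the data state), and propagation (a perturbed data state at the location changes the program's output). The following minimal Python sketch estimates all three for a single location of a toy program; the toy program, probe names, and perturbation scheme are illustrative assumptions, not the thesis's EPIE method or the ePAT tool:

```python
import random

random.seed(0)  # fix randomness so the estimates are reproducible

def run(x, mutate=False, perturb=False):
    """Toy program under analysis. Returns (executed_L, state_after_L, output).

    mutate:  swaps the computation at location L (infection probe).
    perturb: injects a random value into the data state at L (propagation probe).
    """
    executed = False
    y = 0
    if x > 0:
        executed = True
        y = (x + 3) if mutate else (x * 2)   # location L
        if perturb:
            y = random.randint(-100, 100)    # perturbed data state at L
    return executed, (y if executed else None), y % 7

inputs = [random.randint(-50, 50) for _ in range(2000)]

# Execution: fraction of random inputs that reach location L.
exec_hits = [x for x in inputs if run(x)[0]]
p_exec = len(exec_hits) / len(inputs)

# Infection: among executions of L, how often the mutant alters the state at L.
p_inf = sum(run(x)[1] != run(x, mutate=True)[1] for x in exec_hits) / len(exec_hits)

# Propagation: among executions of L, how often a perturbed state alters the output.
p_prop = sum(run(x)[2] != run(x, perturb=True)[2] for x in exec_hits) / len(exec_hits)

# A fault at L is hard to reveal when any of the three probabilities is low;
# their product bounds how likely a random test is to expose it.
testability = p_exec * p_inf * p_prop
```

The thesis's contribution is to avoid running such probes at every individual location: EPIE groups locations into blocks and groups and estimates a group testability instead, cutting the number of probe runs.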