
Author: Hsu, Yen-Ching (徐煙清)
Thesis Title: Performance Assessment of Applying Severity-Weighted Greedy Algorithm to Test Case Prioritization
Advisor: Huang, Chin-Yu (黃慶育)
Committee Members: 蘇銓清, 林振緯
Degree: Master
Department: College of Electrical Engineering and Computer Science - Institute of Information Systems and Applications
Year of Publication: 2011
Graduation Academic Year: 99 (ROC calendar)
Language: English
Pages: 48
Chinese Keywords: test case prioritization, code coverage, search algorithms
Foreign Keywords: APFD, APFDc
    Regression testing is a widely used software testing technique. Its three most common applications are regression test selection, test suite minimization, and test case prioritization; this thesis focuses on the test case prioritization problem. Several techniques exist for test case prioritization, and two of the most widely used are the Greedy Algorithm (GA) and the Additional Greedy Algorithm (AGA). Both techniques, however, share a drawback: they do not consider fault severity while prioritizing test cases. We therefore propose the Enhanced Additional Greedy Algorithm (EAGA), a modification of AGA for test case prioritization. We designed an experiment with eight subject programs to investigate the effects of the different techniques under different criteria and fault severities. The results show that when severity was taken into account, EAGA outperformed the other techniques in terms of "units-of-fault-severity-detected-per-unit-test-cost". When severity was not considered, EAGA performed as well as AGA in terms of "fault-detection-of-test-suite" and "decision-detection-of-test-suite" on large programs, although AGA excelled on small programs. In summary, when severity is taken into account, EAGA outperforms AGA and GA on the test case prioritization problem.
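    As background for the abstract above, the Additional Greedy Algorithm it builds on repeatedly picks the test case that covers the most not-yet-covered program units, resetting once everything is covered. The sketch below is a minimal illustration, not the thesis's actual implementation: the `additional_greedy` function and its `weights` parameter are hypothetical names, and weighting each coverage unit by a severity value is only one plausible way a severity-weighted variant such as EAGA could enter the ranking.

    ```python
    def additional_greedy(coverage, weights=None):
        """Order test ids so each pick maximizes newly covered (weighted) units.

        coverage: dict mapping test id -> set of covered units (e.g. statements)
        weights:  optional dict mapping unit -> severity weight (default 1.0)
        """
        weights = weights or {}
        all_units = set().union(*coverage.values())
        remaining = list(coverage)           # test ids not yet scheduled
        uncovered = set(all_units)
        order = []
        while remaining:
            # gain = summed severity weight of units this test would newly cover
            def gain(tid):
                return sum(weights.get(u, 1.0) for u in coverage[tid] & uncovered)
            best = max(remaining, key=gain)  # ties broken by list order
            if gain(best) == 0:
                if uncovered == all_units:   # no test covers anything: stop
                    order.extend(remaining)
                    break
                uncovered = set(all_units)   # AGA-style reset after full coverage
                continue
            order.append(best)
            uncovered -= coverage[best]
            remaining.remove(best)
        return order
    ```

    With unit weights of 1.0 this reduces to plain AGA ordering; supplying a higher weight for a unit associated with severe faults pulls the tests covering it toward the front of the schedule.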


    Abstract in Chinese
    Abstract
    Acknowledgement
    List of Tables
    List of Figures
    Chapter 1 Introduction
    Chapter 2 Test Case Prioritization
    Chapter 3 Enhanced AGA Technique
        3.1 Motivated Examples
            3.1.1 Example 1: GA Technique
            3.1.2 Example 2: AGA Technique
            3.1.3 Observation
        3.2 EAGA Algorithm
        3.3 Proposed EAGA
    Chapter 4 Experimental Result and Discussion
        4.1 Experiment Environment
        4.2 Criteria & Metrics
        4.3 Performance Evaluation
            4.3.1 Case 1: Performance on APDC
            4.3.2 Case 2: Performance on APFD
            4.3.3 Case 3: Performance on APFDc
            4.3.4 Threats to Validity
    Chapter 5 Improvement to Validity
    Chapter 6 Conclusion and Future Work
    References


    Full-text availability: not authorized for public release (campus network)
    Full-text availability: not authorized for public release (off-campus network)
