Title page for etd-0829110-122654
Title
可協助除錯的自動評估系統
A Debugging Supported Automated Assessment System for Novice Programming
Department
Year, semester
Language
Degree
Number of pages
69
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2010-07-26
Date of Submission
2010-08-29
Keywords
fault localization, automated debugging, concolic debugging, concolic testing, automated assessment
Statistics
This thesis/dissertation has been browsed 5848 times and downloaded 4 times.
Abstract (Chinese)
It is very difficult for novices, who lack sufficient prior knowledge, to discover the latent faults in their programs on their own. To help them, we must first be able to assess the correctness of a program effectively, and then, when a defect is found, provide hints sufficient to support their debugging.
For correctness assessment, we use the concolic testing algorithm to test novices' programs. Once a defect is found in a student's submission, we invoke an automated debugging mechanism to help the novice diagnose the code. Inspired by concolic testing, we propose a new algorithm, concolic debugging, which locates the likely position of a defect starting from a single failed test.
We improve the assessment style of the automated assessment system: instead of relying only on predefined test cases, the system can assess programs with automatically generated test inputs. In our experiments, more than 86.67% of the sample programs could be assessed this way; checking a program against its application domain and specification, however, still relies on manually designed test cases.
For fault localization, our concolic debugging approach needs only a single failed test to begin localizing the fault. Unlike spectrum-based fault localization, it does not require many passing and failing test cases; and because it needs no passing cases, it avoids the inaccuracy that spectrum-based methods suffer when the spectra of nominally passing runs actually cover the fault. In our experiments, concolic debugging performed more stably and accurately than spectrum-based fault localization methods.
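For reference, the spectrum-based formulas this thesis compares against (e.g. Tarantula and Ochiai, both surveyed in Chapter 2) rank a statement s by counting the passing and failing runs that execute it, which is why they need a sizable test suite. In standard notation, with failed(s) and passed(s) the numbers of failing and passing runs that execute s, and totalfailed and totalpassed the suite-wide totals:

\[
\mathrm{susp}_{\mathrm{Tarantula}}(s) =
  \frac{\mathrm{failed}(s)/\mathrm{totalfailed}}
       {\mathrm{failed}(s)/\mathrm{totalfailed} + \mathrm{passed}(s)/\mathrm{totalpassed}}
\qquad
\mathrm{susp}_{\mathrm{Ochiai}}(s) =
  \frac{\mathrm{failed}(s)}
       {\sqrt{\mathrm{totalfailed}\cdot\bigl(\mathrm{failed}(s)+\mathrm{passed}(s)\bigr)}}
\]

With a single failing run and no passing runs these counts cannot discriminate among statements, whereas concolic debugging starts from exactly that situation.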
Abstract
Novice programmers find it difficult to debug on their own because they lack prior knowledge. To help them, we first need to be able to check the correctness of a novice's program; then, whenever an error is found, we can provide suggestions to assist them in debugging.
We use the concolic testing algorithm to generate test inputs automatically. Test-input generation in concolic testing is directed by negating path conditions, and the inputs are produced by solving the resulting path constraints. Concolic testing therefore lets us explore as many branches as possible.
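A minimal sketch of this generate-negate-solve loop, written in Python with the Z3 solver rather than the thesis's actual CREST/Yices toolchain (the toy program, the symbolic trace, and the next_input helper are all illustrative assumptions):

    # Minimal concolic-testing sketch (illustration only; the thesis's
    # system uses CREST with the Yices solver, not this Python/Z3 toy).
    from z3 import Int, Solver, Not, sat

    SCORE = Int("score")  # symbolic counterpart of the concrete input

    def program(score, trace):
        """Toy program under test; mirrors each concrete branch symbolically."""
        if score >= 60:
            trace.append(SCORE >= 60)        # record the branch taken
            return "pass"
        trace.append(Not(SCORE >= 60))
        return "fail"

    def next_input(trace, flip_at):
        """Keep the path-condition prefix, negate branch flip_at, solve."""
        solver = Solver()
        solver.add(*trace[:flip_at])         # constraints before the flip
        solver.add(Not(trace[flip_at]))      # negated branch condition
        if solver.check() == sat:
            return solver.model()[SCORE].as_long()
        return None                          # flipped path is infeasible

    trace = []
    program(0, trace)                        # concrete run takes the "fail" path
    print(next_input(trace, len(trace) - 1)) # prints some score >= 60

Each round of concolic testing repeats this step on the newly generated input, so the unexplored sides of branches are covered one by one.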
Once an error is found, we try to locate it for the novice programmer. We propose a new method called concolic debugging, whose idea comes from concolic testing. The concolic debugging algorithm starts from a given failed test and tries to locate the faulty block by negating and backtracking the path conditions of that failed test.
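A hedged sketch of that backtracking idea (again Python with Z3; the buggy grade function, its oracle, and the locate helper are illustrative assumptions, not the thesis's implementation):

    # Hedged sketch of the concolic-debugging idea (illustration only,
    # not the thesis's implementation): backtrack over the failed path
    # condition, flip one branch at a time, and re-run the program.
    from z3 import Int, Solver, Not, sat

    SCORE = Int("score")

    def grade(score, trace):
        """Buggy program under test; the intent is to pass for score >= 60."""
        if score > 60:                       # BUG: should be score >= 60
            trace.append(SCORE > 60)
            return "pass"
        trace.append(Not(SCORE > 60))
        return "fail"

    def expected(score):
        """Test oracle for the intended behaviour."""
        return "pass" if score >= 60 else "fail"

    def locate(failing_input):
        """Return the index of the branch whose flip makes the run pass."""
        trace = []
        grade(failing_input, trace)
        for i in reversed(range(len(trace))):
            solver = Solver()
            solver.add(*trace[:i])           # keep the prefix of the failed path
            solver.add(Not(trace[i]))        # negate the branch under suspicion
            if solver.check() != sat:
                continue                     # flipped path infeasible: backtrack
            flipped = solver.model()[SCORE].as_long()
            if grade(flipped, []) == expected(flipped):
                return i                     # flipping branch i removes the failure
        return None

    print(locate(60))                        # reports branch 0: the `score > 60` test

If flipping a branch turns the failing run into a passing one, the block guarded by that branch is reported as the suspect fault location; otherwise the search backtracks to an earlier branch of the failed path.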
We use concolic testing to improve the assessment style of the automated assessment system: 86.67% of our sample programs were successfully assessed by the concolic testing algorithm on the new system. We also found that our concolic debugging is more stable and accurate at fault localization than spectrum-based fault localization.
Table of Contents
Acknowledgements I
Abstract (Chinese) II
Abstract (English) III
Table of Contents IV
List of Figures VIII
List of Tables X
1 Introduction 1
1.1 Research Background 1
1.2 Research Motivation 2
1.3 Problem Description and Research Objectives 3
1.4 Thesis Organization 4
2 Literature Review 5
2.1 Novices' Debugging Ability 5
2.1.1 Causes and Types of Errors 5
2.1.2 Novices' Debugging Strategies 8
2.2 Automated Assessment Systems 10
2.3 Concolic Testing 12
2.3.1 Software Testing 12
2.3.2 Concolic Testing 13
2.4 Fault Localization 17
2.4.1 Fault Localization Methods 17
2.4.2 Spectrum-Based Fault Localization 19
2.4.3 Tarantula 21
2.4.4 χDebug 21
2.4.5 Statistical Bug Isolation (SBI) 22
2.4.6 Jaccard 22
2.4.7 Ochiai 22
2.4.8 Summary 23
3 System Architecture and Implementation 24
3.1 System Architecture 24
3.2 Automated Assessment Phase: Concolic Testing 25
3.2.1 Test-Input Generation Tool: CREST 25
3.2.2 Setting Up Symbolic Variables 26
3.2.3 Code Instrumentation and Compilation 26
3.2.4 Concolic Execution 29
3.2.5 Generating Test Cases 29
3.3 Fault Localization Phase: Concolic Debugging 30
3.3.1 Obtaining the Constraints of the Failed Path 32
3.3.2 Negating and Backtracking the Path Conditions 33
4 Experiments and Evaluation 35
4.1 Experimental Assumptions and Limitations 35
4.2 Experimental Samples 36
4.3 Evaluation Methods 36
4.3.1 Effectiveness of Automated Assessment 37
4.3.2 Effectiveness of Fault Localization 37
4.4 Experimental Procedure 39
4.5 Results and Discussion 40
4.5.1 Experiment 1: Effectiveness of Automated Assessment 40
4.5.2 Experiment 1: Discussion 40
4.5.3 Experiment 2: Effectiveness of Fault Localization 41
4.5.4 Experiment 2: Discussion 43
5 Conclusions and Suggestions 46
5.1 Conclusions 46
5.2 Suggestions for Future Research 47
6 References 48
7 Appendix 53
References
1. 程尚文, A Study of Using Template-Based Teaching in an Introductory Programming Course. 2009, National Sun Yat-sen University, Department of Information Management: Kaohsiung. (in Chinese)
2. Mow, I., Issues and Difficulties in Teaching Novice Computer Programming. Innovative Techniques in Instruction Technology, E-learning, E-assessment, and Education, 2008: p. 199-204.
3. Butler, M. and M. Morgan, Learning challenges faced by novice programming students studying high level and low feedback concepts. ASCILITE 2007 Singapore, 2007. 99: p. 107.
4. Lahtinen, E., K. Ala-Mutka, and H.M. Järvinen. A study of the difficulties of novice programmers. in Proceedings of the 10th annual SIGCSE conference on Innovation and technology in computer science education. 2005: ACM.
5. Ahmadzadeh, M., D. Elliman, and C. Higgins. An analysis of patterns of debugging among novice computer science students. in Proceedings of the 10th annual SIGCSE conference on Innovation and technology in computer science education. 2005. Monte de Caparica, Portugal: ACM.
6. Fitzgerald, S., et al., Debugging: finding, fixing and flailing, a multi-institutional study of novice debuggers. Computer Science Education, 2008. 18(2): p. 93-116.
7. Murphy, L., et al. Debugging: the good, the bad, and the quirky--a qualitative analysis of novices' strategies. in Proceedings of the 39th SIGCSE technical symposium on Computer science education. 2008: ACM.
8. Rodrigo, M., et al. Affective and behavioral predictors of novice programmer achievement. in Proceedings of the 14th annual ACM SIGCSE conference on Innovation and technology in computer science education. 2009: ACM.
9. Lee, G.C. and J.C. Wu, Debug It: A debugging practicing system. Computers & Education, 1999. 32(2): p. 165-179.
10. Douce, C., D. Livingstone, and J. Orwell, Automatic test-based assessment of programming: A review. Journal on Educational Resources in Computing (JERIC), 2005. 5(3): p. 4.
11. Jackson, D. and M. Usher, Grading student programs using ASSYST. ACM SIGCSE Bulletin, 1997. 29(1): p. 335-339.
12. Leal, J.P. and F. Silva, Mooshak: a Web-based multi-site programming contest system. Software Focus, 2003. 33(6): p. 567-581.
13. Punitha, T., et al. Mooshak: A Valuable Repository of Codes. 2008.
14. Thompson, S., An Exploratory Study of Novice Programming Experiences and Errors. 2004, Citeseer.
15. Ebrahimi, A., D. Kopec, and C. Schweikert, Taxonomy of Novice Programming Error Patterns with Plan, Web, and Object Solutions.
16. Hristova, M., et al., Identifying and correcting Java programming errors for introductory computer science students. ACM SIGCSE Bulletin, 2003. 35(1): p. 156.
17. Spohrer, J. and E. Soloway, Novice mistakes: Are the folk wisdoms correct? Communications of the ACM, 1986. 29(7): p. 624-632.
18. Someren, M., What's wrong? Understanding beginners' problems with Prolog. Instructional Science, 1990. 19(4): p. 257-282.
19. Soloway, E., Learning to program = learning to construct mechanisms and explanations. 1986.
20. Ko, A. and B. Myers, A framework and methodology for studying the causes of software errors in programming systems. Journal of Visual Languages & Computing, 2005. 16(1-2): p. 41-84.
21. McCauley, R., et al., Debugging: a review of the literature from an educational perspective. Computer Science Education, 2008. 18(2): p. 67-92.
22. Ducasse, M. and A.M. Emde. A review of automated debugging systems: knowledge, strategies and techniques. in Proceedings of the 10th international conference on Software engineering. 1988. Singapore: IEEE Computer Society Press, Los Alamitos, CA, USA.
23. Ahoniemi, T., E. Lahtinen, and T. Reinikainen, Improving pedagogical feedback and objective grading. ACM SIGCSE Bulletin, 2008. 40(1): p. 72-76.
24. 林盟憲, A Programming Learning System for Individual Practice. 2008, National Sun Yat-sen University, Department of Information Management: Kaohsiung. (in Chinese)
25. Osterweil, L., Strategic directions in software quality. ACM Computing Surveys (CSUR), 1996. 28(4): p. 750.
26. 鄭炳強, Software Engineering: A Practice-Driven Approach. 2007, Taipei: 智勝文化. (in Chinese)
27. Naik, K. and P. Tripathy, Software Testing and Quality Assurance: Theory and Practice. 2008, Hoboken, New Jersey: John Wiley & Sons, Inc.
28. Jhala, R. and R. Majumdar, Software model checking. ACM Computing Surveys (CSUR), 2009. 41(4): p. 1-54.
29. Clarke, E., D. Kroening, and F. Lerda, A tool for checking ANSI-C programs. Lecture Notes in Computer Science, 2004: p. 168-176.
30. Sen, K. Concolic testing. 2007: ACM.
31. Kim, Y., M. Kim, and N. Dang, Scalable Distributed Concolic Testing: a Case Study on a Flash Storage Platform.
32. GNU Coreutils. Available from: http://www.gnu.org/software/coreutils/.
33. SGLIB. Available from: http://xref-tech.com/sglib/main.html.
34. Cadar, C., D. Dunbar, and D. Engler. Klee: Unassisted and automatic generation of high-coverage tests for complex systems programs. 2008.
35. Cadar, C. and D. Engler, Execution generated test cases: How to make systems code crash itself. Model Checking Software, 2005: p. 2-23.
36. Sen, K. and G. Agha. CUTE and jCUTE: Concolic unit testing and explicit path model-checking tools. 2006: Springer.
37. Sen, K., D. Marinov, and G. Agha, CUTE: A concolic unit testing engine for C. ACM SIGSOFT Software Engineering Notes, 2005. 30(5): p. 272.
38. Godefroid, P., N. Klarlund, and K. Sen. DART: Directed automated random testing. 2005: ACM.
39. Collofello, J.S. and S.N. Woodfield, Evaluating the effectiveness of reliability-assurance techniques. Journal of Systems and Software, 1989. 9(3): p. 191-195.
40. Vessey, I., Expertise in debugging computer programs: A process analysis. International Journal of Man-Machine Studies, 1985. 23(5): p. 459-494.
41. Pan, H. and E. Spafford, Heuristics for automatic localization of software faults. World Wide Web, 1992.
42. Agrawal, H., et al., Fault localization using execution slices and dataflow tests. Proceedings of IEEE Software Reliability Engineering, 1995: p. 143-151.
43. Renieres, M. and S. Reiss. Fault localization with nearest neighbor queries. 2003.
44. Cleve, H. and A. Zeller. Locating causes of program failures. 2005.
45. Harrold, M., et al. An empirical investigation of program spectra. 1998: ACM.
46. Wong, W.E., V. Debroy, and B. Choi, A family of code coverage-based heuristics for effective fault localization. Journal of Systems and Software, 2009.
47. Zeller, A., Isolating cause-effect chains from computer programs. ACM SIGSOFT Software Engineering Notes, 2002. 27(6): p. 10.
48. Jones, J. and M. Harrold. Empirical evaluation of the tarantula automatic fault-localization technique. 2005: ACM.
49. Vayani, R., Improving Automatic Software Fault Localization, in Computer Science. 2007, Delft University of Technology: Delft.
50. Jones, J., M. Harrold, and J. Stasko. Visualization of test information to assist fault localization. 2002: ACM.
51. Dallmeier, V., C. Lindig, and A. Zeller, Lightweight defect localization for Java. ECOOP 2005-Object-Oriented Programming, 2005: p. 528-550.
52. Wong, W., et al. Effective fault localization using code coverage. 2007.
53. Abreu, R., et al. Automatic software fault localization using generic program invariants. 2008: ACM.
54. Janssen, T., R. Abreu, and A. van Gemund. Zoltar: a spectrum-based fault localization tool. 2009: ACM.
55. Yu, Y., J. Jones, and M. Harrold. An empirical study of the effects of test-suite reduction on fault localization. 2008: ACM.
56. Liblit, B., et al. Scalable statistical bug isolation. 2005: ACM.
57. Abreu, R., P. Zoeteweij, and A. van Gemund. An evaluation of similarity coefficients for software fault localization. 2006.
58. Burnim, J. and K. Sen. Heuristics for scalable dynamic test generation. 2008: IEEE Computer Society.
59. Necula, G., et al. CIL: Intermediate language and tools for analysis and transformation of C programs. 2002: Springer.
60. Dutertre, B. and L. De Moura, The Yices SMT solver. 2006.
61. CREST. Available from: http://code.google.com/p/crest.
62. Abreu, R., P. Zoeteweij, and A. Van Gemund. On the accuracy of spectrum-based fault localization. 2007.
63. Jiang, B., et al. How well do test case prioritization techniques support statistical fault localization. 2009: IEEE.
64. Hao, D., et al., Test input reduction for result inspection to facilitate fault localization. Automated Software Engineering, 2010. 17(1): p. 5-31.
65. Wang, T. and A. Roychoudhury. Automated path generation for software fault localization. 2005: ACM.
66. Guo, L., A. Roychoudhury, and T. Wang. Accurately choosing execution runs for software fault localization. 2006: Springer.
67. Groce, A., et al., Error explanation with distance metrics. International Journal on Software Tools for Technology Transfer (STTT), 2006. 8(3): p. 229-247.
68. Ferrante, J., K. Ottenstein, and J. Warren, The program dependence graph and its use in optimization. ACM Transactions on Programming Languages and Systems (TOPLAS), 1987. 9(3): p. 349.
69. Coppit, D. and J. Lian. yagg: an easy-to-use generator for structured test inputs. 2005: ACM.
70. Daniel, B., et al. Automated testing of refactoring engines. 2007: ACM.
71. Godefroid, P., A. Kiezun, and M. Levin, Grammar-based whitebox fuzzing. ACM SIGPLAN Notices, 2008. 43(6): p. 206-215.
72. Dimitrov, M. and H. Zhou. Anomaly-based bug prediction, isolation, and validation: an automated approach for software debugging. 2009: ACM.
Fulltext
This electronic fulltext is licensed only for individual, non-profit searching, reading, and printing for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: available on campus, permanently unavailable off campus (restricted)
Available:
Campus: available
Off-campus: not available

Printed copies
Information on the public availability of printed copies is relatively complete from academic year 102 (2013) onward. To check the availability of printed copies from academic year 101 (2012) or earlier, please contact the printed-thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
