
Author: Zeng, Kai Yi (曾開一)
Title: A Study on Parallel AdaBoost with Cascade SVM-based Component Classifier (基於Cascade SVM之平行化AdaBoost分類器之研究)
Advisor: Hu, Tien Chung (胡殿中)
Committee members: Leu, Lii Yuh (呂理裕); Chao, I Feng (趙一峰)
Degree: Master
Department: Department of Mathematics, College of Science
Year of publication: 2015
Graduation academic year: 103 (ROC calendar)
Language: Chinese
Pages: 30
Keywords: AdaBoost, support vector machine
    Abstract (Chinese original, translated): This thesis uses two algorithms to examine the effect of dynamically adjusting the C value on data classification, evaluating both accuracy and computation time. Previous research has shown that: (1) ensemble algorithms improve a model's classification accuracy; (2) splitting data into batches for distributed parallel computation effectively reduces computation time; and (3) different settings of the C value yield different classifier models. Two case-study algorithms were run on the MNIST database of handwritten digits. Case study (1) used the algorithm AdaBoostCascadeSVM.PL; dynamically adjusting the C value reduced computation time by roughly 22–30%, and with C set to 25 and 50 the classification accuracy was close to that obtained without dynamic adjustment. Case study (2) used the algorithm AdaBoostCascadeRVM.PL; the RVM at its core has such high time complexity on large datasets that its computation time becomes prohibitive.
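    A note for context, not part of the record itself: the C value studied above is the penalty parameter of the soft-margin SVM (the formulation listed in the table of contents, §3.2.2). The standard primal objective, for reference:

        % Standard soft-margin SVM primal; C weights the slack (training-error) penalty.
        % Larger C fits the training data more tightly; smaller C allows a wider margin.
        \min_{w,\,b,\,\xi}\ \frac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{n}\xi_i
        \quad\text{subject to}\quad y_i\bigl(w^\top x_i + b\bigr) \ge 1-\xi_i,
        \qquad \xi_i \ge 0,\quad i=1,\dots,n.

    Adjusting C thus trades margin width against training error, which is why different C settings yield different classifier models (point (3) above).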


    Abstract (English): In this thesis, we use two algorithms, AdaBoostCascadeSVM.PL and AdaBoostCascadeRVM.PL, to verify the effect of classification with a dynamically adjusted C value, and we observe the resulting accuracy and computation time. In AdaBoostCascadeSVM.PL, classification with a dynamically adjusted C value saves roughly 22–30% of the computation time and achieves similar accuracy when the C value equals 25 or 50. On the other hand, the complexity of AdaBoostCascadeRVM.PL is too high to obtain a classifier efficiently.
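    As a rough illustration only, not the thesis's AdaBoostCascadeSVM.PL (whose parallel and cascade machinery is not reproduced here), the following sketch shows the general pattern of boosting SVM component classifiers at the C values the abstract compares, using scikit-learn and its bundled digits dataset as a small stand-in for MNIST; all names and hyperparameters besides C = 25 and 50 are assumptions made for a self-contained example:

        # Illustrative sketch only: serial AdaBoost over soft-margin SVM base
        # classifiers at fixed C values. NOT the thesis's parallel algorithm;
        # dataset, kernel, and estimator count are assumptions.
        from sklearn.datasets import load_digits          # small stand-in for MNIST
        from sklearn.ensemble import AdaBoostClassifier   # `estimator=` needs scikit-learn >= 1.2
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = load_digits(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        for C in (25, 50):  # the C values the abstract singles out
            base = SVC(kernel="rbf", C=C, probability=True)  # soft-margin SVM component
            clf = AdaBoostClassifier(estimator=base, n_estimators=10, random_state=0)
            clf.fit(X_tr, y_tr)
            print(f"C={C}: test accuracy = {clf.score(X_te, y_te):.3f}")

    Per the title, the thesis's actual component classifiers are Cascade SVMs, which accelerate training by fitting SVMs on data partitions in parallel and merging their support vectors; the serial SVC above stands in for that step only to keep the sketch short.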

    Table of Contents
    1. Introduction .... 1
        1.1 Preface .... 1
        1.2 Research Motivation and Objectives .... 1
        1.3 Thesis Organization .... 2
    2. Literature Review .... 3
        2.1 Ensemble Learning .... 3
        2.2 The AdaBoost (Adaptive Boosting) Algorithm .... 4
        2.3 The Support Vector Machine (SVM) Algorithm .... 5
    3. Case-Study Tools .... 6
        3.1 AdaBoost.PL .... 6
        3.2 The SVM and Cascade SVM Algorithms .... 9
            3.2.1 Hard-Margin Support Vector Machine .... 9
            3.2.2 Soft-Margin Support Vector Machine .... 12
            3.2.3 Parallelized SVM (Cascade SVM) .... 13
        3.3 Relevance Vector Machine (RVM) .... 15
        3.4 Case-Study Framework .... 18
            3.4.1 Procedure for Case Study (1) .... 19
            3.4.2 Procedure for Case Study (2) .... 21
            3.4.3 Case-Study Data .... 22
    4. Case-Study Results .... 24
    5. Discussion .... 27
        5.1 Summary .... 27
        5.2 Discussion of Results .... 27
        5.3 Limitations and Suggestions .... 28


    Full-text availability: not authorized for public release (on-campus and off-campus networks).
