| Field | Value |
| --- | --- |
| Graduate Student | Chu, Chuan-Min (祝傳旻) |
| Thesis Title | Adopting Misclassification Detection and Outlier Modification to Fault Correction in Deep Learning System (運用誤分類偵測與離群值修正於深度學習系統之錯誤修復) |
| Advisor | Huang, Chin-Yu (黃慶育) |
| Oral Examination Committee | Sue, Chuan-Ching (蘇銓清); Lin, Jenn-Wei (林振緯); Lin, Chi-Yi (林其誼) |
| Degree | Master |
| Department | College of Electrical Engineering and Computer Science - Department of Computer Science |
| Publication Year | 2022 |
| Graduation Academic Year | 110 |
| Language | English |
| Pages | 67 |
| Keywords | Deep learning system; Fault correction in deep learning systems; Misclassification detection; Outlier detection; Outlier modification |
Over the past few decades, researchers in software engineering (SE) have focused on testing, analyzing, repairing, and generating programs automatically and effectively. Today, combining neural networks with traditional software engineering techniques has major potential to improve software quality and productivity. Regarding the development of neural networks, deep learning (DL) and convolutional neural networks (CNNs) have been widely adopted in software applications for making decisions or providing suggestions. For example, DL-based program synthesis is an emerging and exciting field, as it can automatically enhance the productivity of programmers. Since recent program synthesis approaches leverage DL techniques to learn from the test cases or the I/O examples of a program, the DL modules in these approaches also face the overfitting problem. Overfitting is multifaceted, and one of its facets is misclassification between classes. In this study, we focus on detecting misclassification in a DL model and correctly modifying its output inference, thereby alleviating the overfitting problem and improving the reliability of DL systems.
Moreover, considering life-critical DL-based applications, there is a need to measure and improve the reliability and robustness of DL systems. However, since DL systems differ from traditional software in many ways, traditional software testing methods and criteria are not suitable for testing them. With the rapid development of testing methods designed to construct more robust DL systems, researchers have proposed several testing criteria based on neuron activation values to monitor the behavior of a DL model on various inputs. Existing testing methods for DL systems leverage synthesized or mutated data to retrain the DL models; nevertheless, they cannot immediately correct the wrong decisions made by DL systems. Therefore, we propose a novel fault-correction framework, called Outlier Modification for DL Systems (OMDLS), for alleviating potential misclassification issues in DL systems. We first propose an outlier detection strategy to distinguish outliers from normal input data, so that inferences that are already correct are not modified. We also propose a misclassification detection approach to determine which labels in the dataset are likely to be misclassified by the DL model. Finally, we propose a modification strategy that corrects these outliers by examining the relationships between the inference made by the DL model and the misclassification pairs. Our experimental results on four public datasets of different scales and numbers of labels show that modifying the outliers based on the misclassification pairs can improve accuracy by up to 2.12%, without retraining the model and while correcting the inference immediately.
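The abstract does not spell out the concrete detection and modification rules, so the following is only a minimal sketch of the general idea under two assumptions of our own: that "outliers" can be approximated by low-confidence inferences (top-1 probability below a threshold), and that a "misclassification pair" is a frequently confused (predicted, true) label pair mined from held-out data. The function names, the threshold, and the top-1/top-2 flipping rule are all illustrative, not the thesis's actual algorithm.

```python
from collections import Counter

def find_misclassification_pairs(probs, labels, top_k=3):
    """Mine (predicted, true) label pairs from held-out predictions and
    keep the most frequent confusions as 'misclassification pairs'."""
    counts = Counter()
    for p, t in zip(probs, labels):
        pred = max(range(len(p)), key=p.__getitem__)  # top-1 label
        if pred != t:
            counts[(pred, t)] += 1
    return {pair for pair, _ in counts.most_common(top_k)}

def modify_outliers(probs, pairs, threshold=0.6):
    """Treat a low-confidence inference as an outlier (an assumption of
    this sketch) and flip its top-1 label to its top-2 label when the two
    form a known misclassification pair; no retraining is involved."""
    corrected = []
    for p in probs:
        order = sorted(range(len(p)), key=p.__getitem__, reverse=True)
        top1, top2 = order[0], order[1]
        if p[top1] < threshold and (top1, top2) in pairs:
            corrected.append(top2)
        else:
            corrected.append(top1)
    return corrected
```

Because the correction is a post-hoc lookup on the model's output distribution, it can be applied at inference time, which matches the framework's goal of modifying wrong decisions immediately rather than through retraining.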