Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/60890

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 于天立 | |
| dc.contributor.author | Chung-Yu Shao | en |
| dc.contributor.author | 邵中昱 | zh_TW |
| dc.date.accessioned | 2021-06-16T10:34:59Z | - |
| dc.date.available | 2013-08-17 | |
| dc.date.copyright | 2013-08-17 | |
| dc.date.issued | 2013 | |
| dc.date.submitted | 2013-08-14 | |
| dc.identifier.citation | Bibliography
[1] H. Bai, D. OuYang, X. Li, L. He, and H. Yu. MAX-MIN ant system on GPU with CUDA. In Innovative Computing, Information and Control (ICICIC), 2009 Fourth International Conference on, pages 801–804. IEEE, 2009.
[2] S. Baluja. Population-based incremental learning: A method for integrating genetic search based function optimization and competitive learning. Technical report, Carnegie Mellon University, Pittsburgh, PA, 1994.
[3] E. Cantú-Paz. Efficient and accurate parallel genetic algorithms. Kluwer Academic Publishers, Boston, MA, 2000.
[4] J. Dean and S. Ghemawat. MapReduce: simplified data processing on large clusters. Communications of the ACM, 51(1):107–113, 2008.
[5] K. Fan, T. Yu, and J. Lee. Interaction detection by NFE estimation: A practical view of building blocks. In Proceedings of the 13th annual conference companion on Genetic and evolutionary computation, pages 71–72. ACM, 2011.
[6] D. E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA, 1989.
[7] D. E. Goldberg. The design of innovation: Lessons from and for competent genetic algorithms. Kluwer Academic Publishers, Boston, MA, 2002.
[8] D. E. Goldberg and S. Voessner. Optimizing global-local search hybrids. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-1999), pages 220–228, 1999.
[9] C. Grosan, A. Abraham, and H. Ishibuchi. Hybrid evolutionary algorithms. Springer Publishing Company, Incorporated, 2007.
[10] S. Harding and W. Banzhaf. Distributed genetic programming on GPUs using CUDA. In Workshop on Parallel Architectures and Bioinspired Algorithms, Raleigh, USA, 2009.
[11] G. R. Harik, F. G. Lobo, and K. Sastry. Linkage Learning via Probabilistic Modeling in the Extended Compact Genetic Algorithm (ECGA). Springer, 2006.
[12] J. H. Holland. Adaptation in natural and artificial systems. Ann Arbor, MI: University of Michigan Press, 1975.
[13] P. Krömer, V. Snášel, J. Platos, and A. Abraham. Many-threaded implementation of differential evolution for the CUDA platform. Proceedings of the 13th annual conference on Genetic and evolutionary computation, pages 1595–1602, 2011.
[14] P. Larrañaga and J. A. Lozano, editors. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Kluwer Academic Publishers, Boston, MA, 2002.
[15] F. Lobo, K. Sastry, and G. Harik. Extended compact genetic algorithm in C++: Version 1.1. IlliGAL Report No. 2006012, University of Illinois at Urbana-Champaign, 2006.
[16] A. Mendiburu, J. Miguel-Alonso, and J. A. Lozano. Implementation and performance evaluation of a parallelization of estimation of Bayesian network algorithms. Parallel Processing Letters, 16(1):133–148, 2006.
[17] H. Mühlenbein and G. Paas. From recombination of genes to the estimation of distributions I. Binary parameters. In Parallel Problem Solving from Nature, pages 178–187. Springer-Verlag, 1996.
[18] A. Munawar, M. Wahib, M. Munetomo, and K. Akama. Hybrid of genetic algorithm and local search to solve MAX-SAT problem using NVIDIA CUDA framework. Genetic Programming and Evolvable Machines, 10(4):391–415, 2009.
[19] A. Munawar, M. Wahib, M. Munetomo, and K. Akama. Theoretical and empirical analysis of a GPU based parallel Bayesian optimization algorithm. In Parallel and Distributed Computing, Applications and Technologies, 2009 International Conference on, pages 457–462. IEEE, 2009.
[20] L. Mussi, F. Daolio, and S. Cagnoni. Evaluation of parallel particle swarm optimization algorithms within the CUDA™ architecture. Information Sciences, 181(20):4642–4657, 2011.
[21] J. Nickolls, I. Buck, M. Garland, and K. Skadron. Scalable parallel programming with CUDA. Queue, 6(2):40–53, 2008.
[22] NVIDIA. CUDA C Best Practices Guide. NVIDIA, Santa Clara, CA, 2012.
[23] NVIDIA. CUDA C Programming Guide 4.2. NVIDIA, Santa Clara, CA, 2012.
[24] J. Očenášek and J. Schwarz. The parallel Bayesian optimization algorithm. The State of the Art in Computational Intelligence, pages 61–67, 2000.
[25] J. Očenášek and J. Schwarz. The distributed Bayesian optimization algorithm for combinatorial optimization. In K. C. Giannakoglou, D. T. Tsahalis, J. Périaux, K. D. Papailiou, and T. Fogarty, editors, Evolutionary Methods for Design Optimization and Control with Applications to Industrial Problems, pages 115–120, Athens, Greece, 2001. International Center for Numerical Methods in Engineering (CIMNE).
[26] J. Očenášek, J. Schwarz, and M. Pelikan. Design of multithreaded estimation of distribution algorithms. In E. Cantú-Paz, J. A. Foster, K. Deb, D. Davis, R. Roy, U.-M. O'Reilly, H.-G. Beyer, R. Standish, G. Kendall, S. Wilson, M. Harman, J. Wegener, D. Dasgupta, M. A. Potter, A. C. Schultz, K. Dowsland, N. Jonoska, and J. Miller, editors, Genetic and Evolutionary Computation – GECCO-2003, volume 2724 of LNCS, pages 1247–1258, Chicago, 12-16 July 2003. Springer-Verlag.
[27] M. Pelikan, D. E. Goldberg, and E. Cantú-Paz. BOA: The Bayesian optimization algorithm. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-1999), I:525–532, 1999.
[28] K. Sastry. Evaluation-relaxation schemes for genetic and evolutionary algorithms. Master's thesis, University of Illinois at Urbana-Champaign, Urbana, IL, 2002.
[29] K. Sastry, M. Pelikan, and D. E. Goldberg. Efficiency enhancement of genetic algorithms via building-block-wise fitness estimation. Proceedings of the IEEE Conference on Evolutionary Computation, pages 720–727, 2004.
[30] D. Thierens. Scalability problems of simple genetic algorithms. Evolutionary Computation, 7(4):331–352, 1999.
[31] A. Verma, X. Llorà, S. Venkataraman, D. E. Goldberg, and R. H. Campbell. Scaling eCGA model building via data-intensive computing. Urbana, 51:61801, 2010.
[32] P. Vidal and E. Alba. Cellular genetic algorithm on graphic processing units. Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), pages 223–232, 2010.
[33] T.-L. Yu, K. Sastry, D. E. Goldberg, and M. Pelikan. Population sizing for entropy-based model building in discrete estimation of distribution algorithms. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2007), pages 601–608, 2007. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/60890 | - |
| dc.description.abstract | 為了提供電動遊戲所需要的即時、高畫質的3D立體繪圖,圖形處理器在過去二十年進步成擁有強大的運算能力的處理器。自從輝達(NVIDIA)釋出統一計算架構(CUDA)之後,圖形處理器也成為可以在更廣泛的用途上提供平行計算並促進與CPU協同運算的裝置。圖形處理器已經在各種領域平行化了大量的可規模化的應用程式。因為演化式計算具有平行的本質,平行化一直是一種直覺上可以增進效率的方式。然而將分佈估計演算法(EDA)運用在圖形處理器上的研究並不多。在這篇論文裡我們提出了兩種在CUDA上能夠加速擴展的緊湊型基因遺傳演算法(ECGA)的模型建立的實作方式。第一個實作方式與原本的ECGA在演算法上完全一致。第二種實作修改了模型建立的演算法,透過犧牲模型建立的精準度,獲得了更高的加速。在實驗中,第一個實作相對於基準的實作在一個長度為550並且子問題長度為5的陷阱問題上加速了大約374倍。第二個實作方式在相同的問題上加速了大約531倍。這兩種實作法在一張Tesla C2050的圖形顯示卡上可以規模化到長度為9800的陷阱問題。 | zh_TW |
| dc.description.abstract | Due to the demand for real-time, high-definition 3D graphics in video games, the graphics processing unit (GPU) has developed tremendous computational power over the past two decades. Since NVIDIA released the compute unified device architecture (CUDA), the GPU has become a general parallel computing device that facilitates heterogeneous computing between the CPU and the GPU. GPUs have enabled many scalable parallel programs across a wide range of fields, and because evolutionary computation is inherently parallel, parallelization is a straightforward way to enhance its efficiency. However, parallelizing model building for EDAs on the GPU has rarely been studied. In this thesis, we propose two CUDA implementations that speed up model building in the extended compact genetic algorithm (ECGA). The first implementation is algorithmically identical to the original ECGA. Aiming at a greater speed boost, the second implementation modifies model building: it slightly decreases model accuracy in exchange for more speedup. Empirically, the first implementation achieves a speedup of roughly 374 over the baseline on a 550-bit trap problem with order 5, and the second implementation achieves a speedup of roughly 531 over the baseline on the same problem. Finally, both implementations scale up to a 9,800-bit trap problem with order 5 on a single Tesla C2050 GPU card. | en |
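The abstracts quote speedups on concatenated trap problems of order 5, where the chromosome is split into consecutive 5-bit blocks and each block is scored by a fully deceptive trap function. For reference only, below is a minimal CUDA sketch of evaluating such a fitness function with one thread per individual; the kernel name, the byte-per-bit encoding, and all identifiers are illustrative assumptions, not code from the thesis.

```cuda
// Illustrative sketch (hypothetical, not from the thesis): order-5 trap
// fitness evaluated on the GPU, one thread per individual, genes stored
// as one byte per bit (0 or 1).
#include <cuda_runtime.h>

__device__ int trap5(const unsigned char *block) {
    int u = 0;                        // number of ones in the 5-bit block
    for (int i = 0; i < 5; ++i) u += block[i];
    return (u == 5) ? 5 : 4 - u;      // deceptive: optimum at 11111, slope toward 00000
}

__global__ void evalTrapPopulation(const unsigned char *pop, int *fitness,
                                   int numIndividuals, int chromLength) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= numIndividuals) return;
    const unsigned char *chrom = pop + (size_t)idx * chromLength;
    int f = 0;
    for (int start = 0; start + 5 <= chromLength; start += 5)
        f += trap5(chrom + start);    // sum over concatenated 5-bit subproblems
    fitness[idx] = f;
}

// Example launch for the 550-bit problem mentioned in the abstract:
// evalTrapPopulation<<<(n + 255) / 256, 256>>>(dPop, dFitness, n, 550);
```

With this encoding, a 550-bit chromosome contains 110 subproblems, so the maximum attainable fitness is 550.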
| dc.description.provenance | Made available in DSpace on 2021-06-16T10:34:59Z (GMT). No. of bitstreams: 1 ntu-102-R00921046-1.pdf: 3056921 bytes, checksum: 4f733148320b5a6b0101f43c6a6ace7c (MD5) Previous issue date: 2013 | en |
| dc.description.tableofcontents | Contents
口試委員會審定書
Acknowledgments
致謝
Abstract
中文摘要
1 Introduction
2 GPU and CUDA
2.1 Background
2.2 CUDA Programming Model
2.3 Design Constraints
3 Simple GAs, EDAs and ECGA
3.1 Simple Genetic Algorithms
3.2 Estimation of Distribution Algorithms
3.3 Extended Compact Genetic Algorithms
4 Related Work
5 CUDA-based ECGA
5.1 A Table-look-up Method to Speed Up Counting Distribution
5.2 gECGA: CUDA-based Implementation
5.2.1 Cache and model structure
5.2.2 Memory space allocation
5.2.3 Tasks allocation
5.2.4 Update cache and model
5.3 GM Search: The Modified Model-searching Algorithm
6 Experiments
6.1 Hardware Specification
6.2 General Experiment Setting
6.3 Speedups
6.4 Scalability
6.5 Experiments of GM Search
7 Conclusion
Bibliography | |
| dc.language.iso | en | |
| dc.subject | 模型建造 | zh_TW |
| dc.subject | 統一計算架構 | zh_TW |
| dc.subject | 圖形處理器 | zh_TW |
| dc.subject | 分佈估計演算法 | zh_TW |
| dc.subject | 緊湊型基因遺傳演算法 | zh_TW |
| dc.subject | 效率增進 | zh_TW |
| dc.subject | Efficiency Enhancement | en |
| dc.subject | Estimation of Distribution Algorithms | en |
| dc.subject | ECGA | en |
| dc.subject | Model Building | en |
| dc.subject | CUDA | en |
| dc.subject | GPU | en |
| dc.title | 在CUDA平台上加速ECGA的模型建造 | zh_TW |
| dc.title | Speeding Up Model Building for ECGA on CUDA Platform | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 101-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 張時中,陳穎平 | |
| dc.subject.keyword | 統一計算架構,圖形處理器,分佈估計演算法,緊湊型基因遺傳演算法,模型建造,效率增進 | zh_TW |
| dc.subject.keyword | CUDA,GPU,Estimation of Distribution Algorithms,ECGA,Model Building,Efficiency Enhancement | en |
| dc.relation.page | 41 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2013-08-14 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
| Appears in Collections: | Department of Electrical Engineering (電機工程學系) | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-102-1.pdf (restricted, not publicly available) | 2.99 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.