Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92560
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 林澤 | zh_TW
dc.contributor.advisor | Che Lin | en
dc.contributor.author | 江軍 | zh_TW
dc.contributor.author | Jun Jiang | en
dc.date.accessioned | 2024-04-16T16:12:48Z | -
dc.date.available | 2024-04-17 | -
dc.date.copyright | 2024-04-16 | -
dc.date.issued | 2024 | -
dc.date.submitted | 2024-04-11 | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92560 | -
dc.description.abstract | 疾病是人類的主要死因,其中又以癌症占最大比例。這表明,通過深入的癌症研究,我們能夠對全球死亡率的降低作出實質性的貢獻。癌症的預後預測是一個重要議題。早期階段的高風險患者經過治療後可以達到很高的存活率。透過數據科學與深度學習的幫助,我們能從患者身上得到不同性質資料的隱藏資訊。高通量 (High-throughput) 技術就是其中的一種,它可以產生出大量組學數據供我們分析。預後預測中使用到的基因組學數據便是其中一種,然而它具備樣本少、特徵多兩種特性,在這種情況下,模型難以捕捉到重要訊息。為了解決這個狀況,我們使用先前開發的系統生物學特徵選擇器 (Systems Biology Feature Selector) 進行降維,它會從 RNA 測序 (RNA-Seq) 數據中選擇與癌症預後高度相關的生物標誌物 (biomarker),並且引入了基因相互作用網路 (Gene Interaction Networks) 來增加額外的信息。為了充分使用這兩種資料,我們選用圖神經網路 (Graph Neural Network; GNN) 作為我們的基底模型。將篩選出來的預後生物標誌物視為節點,基因相互作用網路中獲得的基因之間的潛在關係視為邊。考慮到不同的基因組合可能含有隱藏的資訊,我們將預後生物標誌物構成的圖拆分為更小的子圖,並利用獨立的圖卷積層萃取子圖的資訊,此子圖集的圖卷積宛如卷積神經網路 (Convolutional Neural Network; CNN) 中的感受視野 (receptive field),試圖抓到大圖中的圖案 (pattern)。此外,我們也納入了臨床數據,設計了子圖級圖雙模態神經網路,它不但能抓取不同基因子集包含的隱藏關係,也能從 RNA 測序與臨床數據中提取有用資訊。實驗結果顯示出良好的性能,特別是在精確召回曲線下面積 (Area Under the Precision-Recall Curve; AUPRC) 中,與先前和基線模型相比平均提高了 9.16% 和 9.8%,這間接證明了基因子集合擁有豐富資訊。我們希望此研究能為癌症研究盡一份心力。 | zh_TW
dc.description.abstract | Diseases are a leading cause of human mortality, with cancer comprising a significant proportion. Conducting robust research on cancer can therefore contribute to a global reduction in mortality rates. Prognostic prediction for cancer is a pressing issue, as high-risk patients in the early stages of the disease can achieve high survival rates through timely intervention. Leveraging data science and deep learning enables us to uncover hidden insights from diverse patient data. High-throughput technologies generate extensive omics datasets for analysis. However, the genomic data used in prognosis prediction offers few samples yet high-dimensional features, making it challenging for models to capture crucial information under such extreme conditions. To address this challenge, we employ a previously developed systems biology feature selector for dimensionality reduction. This selector identifies biologically relevant biomarkers highly correlated with cancer prognosis from RNA sequencing data. Additionally, we incorporate gene interaction networks to provide supplementary information. To fully harness these two types of data, we adopt a graph neural network as our foundational model. We treat the selected prognostic biomarkers as nodes and the potential relationships between genes obtained from the gene interaction network as edges. Considering that different gene combinations may contain hidden information, we decompose the graph formed by the prognostic biomarkers into smaller subgraphs. Using independent graph convolutional layers, we extract separate feature embeddings for each gene subset from its subgraph. This subgraph-level graph convolution mimics the receptive field in convolutional neural networks, aiming to capture patterns in the larger graph. Furthermore, we integrate clinical data and design a subgraph-level graph bimodal neural network. This network not only captures hidden relationships within different gene subsets but also extracts valuable information from both RNA sequencing and clinical data. Our experimental results demonstrate promising performance, particularly for the AUPRC, with average improvements of 9.16% and 9.8% over our previous approach and the baseline models, respectively. This indirectly verifies the substantial information richness within the gene subsets. With this study, we aim to drive meaningful progress in the field of cancer research. | en
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-04-16T16:12:47Z. No. of bitstreams: 0 | en
dc.description.provenance | Made available in DSpace on 2024-04-16T16:12:48Z (GMT). No. of bitstreams: 0 | en
dc.description.tableofcontents |
口試委員審定書
誌謝
摘要
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
Chapter 2 Related Work
2.1 Multimodal Representation
2.2 Graph Neural Networks
2.3 Subgraph Representation
Chapter 3 Materials and Methods
3.1 TCGA Project
3.2 Data Selection and Summarization
3.3 Systems Biology Feature Selection
3.4 Neural Networks
3.4.1 Deep Neural Network
3.4.2 Convolution Neural Network
3.4.3 Graph Convolutional Network
3.4.4 Bimodal Neural Network
3.5 Building Patient Graphs
3.6 Subgraph-level Convolution Neural Network
3.7 Subgraph-level Graph Bimodal Neural Network
3.7.1 Genomic Feature Extractors
3.7.2 Clinical Feature Extractors
3.7.3 Classifier
3.8 Evaluation Metrics
3.8.1 Receiver Operating Characteristic (ROC) Curve
3.8.2 Precision-Recall Curve (PRC)
3.8.3 Youden Index
3.9 Survival Analysis
3.9.1 Kaplan-Meier (KM) Analysis
3.9.2 Concordance Index (c-index)
Chapter 4 Experiment Results
4.1 Experiment Setup
4.1.1 Experiment Settings
4.1.2 Hardware and Software Specifications
4.2 Overall Result
4.3 Survival Analysis
Chapter 5 Discussion
5.1 Comparison with or without Graph Models
5.2 Comparison with Different Aggregation Methods
5.3 Comparison among Biomarkers, AIC-selected Prognostic Biomarkers, and All Biomarkers
5.3.1 Comparison with Different Sets of Prognostic Markers
5.3.2 Comparison of the Number of AIC-selected Markers
5.4 Model Interpretability
Chapter 6 Conclusions
Bibliography
| -
dc.language.iso | en | -
dc.title | 整合核糖核酸測序與臨床數據與使用具有子圖表式的圖模型預測癌症預後 | zh_TW
dc.title | Predicting Cancer Prognosis Using Graph-based Model with Subgraph Representation by Integrating RNA-Sequencing and Clinical Data | en
dc.type | Thesis | -
dc.date.schoolyear | 112-2 | -
dc.description.degree | 碩士 | -
dc.contributor.oralexamcommittee | 陳倩瑜;黃韻如;劉子毓 | zh_TW
dc.contributor.oralexamcommittee | Chien-Yu Chen;Yun-Ju Huang;Tzu-Yu Liu | en
dc.subject.keyword | 深度學習,生物資訊學,特徵選取,癌症預後預測,圖神經網路,子圖,多模態學習 | zh_TW
dc.subject.keyword | deep learning,bioinformatics,feature selection,cancer prognosis prediction,graph neural network,subgraph,multimodal learning | en
dc.relation.page | 82 | -
dc.identifier.doi | 10.6342/NTU202400840 | -
dc.rights.note | 同意授權(限校園內公開) | -
dc.date.accepted | 2024-04-11 | -
dc.contributor.author-college | 電機資訊學院 | -
dc.contributor.author-dept | 電信工程學研究所 | -
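
The abstract in this record describes a subgraph-level graph bimodal neural network: prognostic biomarkers selected from RNA-Seq data become graph nodes, gene-interaction edges connect them, the graph is split into subgraphs each handled by its own graph convolutional layer, and the pooled genomic embeddings are fused with clinical features for prognosis classification. The thesis itself is not reproduced here, so the following is only a minimal PyTorch-style sketch of that idea; the dense GCN layer, mean pooling, layer sizes, fusion by concatenation, and all names (SubgraphBimodalNet, DenseGCNLayer, the toy adjacencies and dimensions) are illustrative assumptions, not the author's actual implementation.

```python
import torch
import torch.nn as nn

class DenseGCNLayer(nn.Module):
    """One graph-convolution step on a dense normalized adjacency: relu(A_hat @ X @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x: (batch, nodes, in_dim); a_hat: (nodes, nodes) normalized adjacency
        return torch.relu(self.linear(a_hat @ x))

class SubgraphBimodalNet(nn.Module):
    """Independent GCN per gene subgraph plus an MLP for clinical features, fused before the classifier."""
    def __init__(self, subgraph_adjs, gene_feat_dim=1, clin_dim=10, hidden=16):
        super().__init__()
        self.adjs = subgraph_adjs  # list of (n_i, n_i) normalized adjacency matrices
        self.gcns = nn.ModuleList(
            [DenseGCNLayer(gene_feat_dim, hidden) for _ in subgraph_adjs]
        )
        self.clin_mlp = nn.Sequential(nn.Linear(clin_dim, hidden), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(len(subgraph_adjs) * hidden + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # one prognosis logit (e.g. survival beyond a cutoff)
        )

    def forward(self, gene_x_list, clin_x):
        # gene_x_list[i]: (batch, n_i, gene_feat_dim) expression values for subgraph i
        embeddings = []
        for gcn, a_hat, x in zip(self.gcns, self.adjs, gene_x_list):
            h = gcn(x, a_hat)                  # (batch, n_i, hidden)
            embeddings.append(h.mean(dim=1))   # mean-pool the nodes -> (batch, hidden)
        genomic = torch.cat(embeddings, dim=-1)
        clinical = self.clin_mlp(clin_x)
        return self.classifier(torch.cat([genomic, clinical], dim=-1))

# Toy usage: two gene subgraphs with 4 and 3 biomarkers, 10 clinical features, batch of 2.
adjs = [torch.eye(4), torch.eye(3)]  # placeholders for adjacencies derived from a gene network
model = SubgraphBimodalNet(adjs, gene_feat_dim=1, clin_dim=10)
logit = model([torch.randn(2, 4, 1), torch.randn(2, 3, 1)], torch.randn(2, 10))
print(logit.shape)  # torch.Size([2, 1])
```

Read against the table of contents above, each per-subgraph GCN would correspond to a genomic feature extractor (Section 3.7.1) and the clinical MLP to a clinical feature extractor (Section 3.7.2), but the actual architecture and hyperparameters are specified only in the thesis PDF.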
Appears in collections: 電信工程學研究所

Files in this item:
File | Size | Format
ntu-112-2.pdf (access restricted to NTU campus IP; use the VPN service from off campus) | 8.45 MB | Adobe PDF