NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89698
Full metadata record (DC field, value, and language)
dc.contributor.advisor (zh_TW): 林澤
dc.contributor.advisor (en): Che Lin
dc.contributor.author (zh_TW): 吳柏潤
dc.contributor.author (en): Bo-Run Wu
dc.date.accessioned: 2023-09-15T16:18:52Z
dc.date.available: 2023-09-16
dc.date.copyright: 2023-09-15
dc.date.issued: 2022
dc.date.submitted: 2002-01-01
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89698
dc.description.abstract (zh_TW): Cancer is one of the leading causes of death worldwide. With accurate cancer prognosis prediction, high-risk patients can be identified and given the most suitable treatment to improve their survival. For today's large and heterogeneous medical data, deep learning outperforms statistical and traditional machine-learning algorithms in many respects and allows us to work with such data. Moreover, with multi-task learning and multimodal learning, a model can learn knowledge shared across different cancers and use it to provide accurate prognosis predictions. As a case study, we implemented a multi-task bimodal neural network for cancer prognosis prediction that integrates RNA sequencing (RNA-Seq) and clinical data, using three datasets (breast, lung, and colorectal cancer) obtained from The Cancer Genome Atlas (TCGA) project. We also built a reusable Python code base covering data download and processing from the TCGA project database, RNA-Seq pre-processing, and a development framework for deep learning models. Experimental results confirm that the proposed multi-task bimodal neural network substantially improves the concordance index (c-index) and the area under the precision-recall curve (AUPRC) by 26% and 41%, respectively, taking a first step in this research direction. We believe that, through deep learning, this line of research can uncover latent relationships among different cancers and lay a further foundation for precision medicine.
dc.description.abstract (en): Cancer is one of the leading causes of death worldwide. With accurate cancer prognosis predictions, high-risk patients can be screened out for proper treatments to increase their chance of survival. In this study, we integrate medical data from multiple cancer types and use multi-task learning to exploit the knowledge shared among them. As a case study, we implemented a multi-task bimodal neural network, which handles both RNA-Seq and clinical data, for cancer prognosis predictions on three datasets (breast, lung, and colon cancer) obtained from the TCGA project. Moreover, we developed a reusable Python code base covering data retrieval from the TCGA project database, data pre-processing, and development pipelines for deep learning models. Experimental results showed significant improvements of up to 26% and 41% in the c-index and AUPRC, respectively. Our research marks an initial step toward employing multi-task learning for prognosis prediction across different cancer types. (An illustrative sketch of this bimodal multi-task architecture is given after the metadata record below.)
dc.description.provenance (en): Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-09-15T16:18:52Z. No. of bitstreams: 0
dc.description.provenance (en): Made available in DSpace on 2023-09-15T16:18:52Z (GMT). No. of bitstreams: 0
dc.description.tableofcontents:
Acknowledgements i
Abstract (Chinese) ii
Abstract iii
Contents iv
List of Figures vii
List of Tables ix
Abbreviation xi
Chapter 1 Introduction 1
Chapter 2 Data Pre-processing 5
2.1 TCGA Project 5
2.2 GDC mRNA Quantification Analysis Pipeline 6
2.2.1 RNA-Seq Alignment Workflow 7
2.2.2 mRNA Expression Workflow 7
2.3 Data Selection and Summarization 9
2.4 Systems Biology Feature Selection 13
2.4.1 StepMiner and Analysis of Variance (ANOVA) 15
2.4.2 Gene Interaction Network (GIN) and Ranking 16
Chapter 3 Materials and Methods 18
3.1 Deep Learning Model Basics 18
3.1.1 Deep Neural Network 18
3.1.2 Batch Normalization 20
3.2 Bimodal Neural Network 21
3.2.1 Feature Extractor 22
3.2.2 Classifier 25
3.3 Multi-Task Learning 27
3.3.1 Problem Formulation 28
3.3.2 Different Learning Paradigms 28
3.3.3 Hard Parameter Sharing 30
3.3.4 Feature Transformation Approach 32
3.3.5 Optimization 33
3.4 Evaluation Metrics and Methods 34
3.4.1 Receiver Operating Characteristic (ROC) Curve 34
3.4.2 Precision-Recall Curve (PRC) 34
3.4.3 Youden Index 35
3.5 Survival Analyses 36
3.5.1 C-index 36
3.5.2 Kaplan-Meier (KM) and Nelson-Aalen (NA) Analysis 36
3.5.3 Log-Rank Test 38
Chapter 4 Experiments 39
4.1 Experiment Settings 39
4.2 Single-Task Learning 40
4.3 Multi-Task Learning 43
4.4 Survival Analysis 45
Chapter 5 Discussion
5.1 Different Modalities in Single-Task Learning 49
5.2 Different Normalizations for RNA-Seq Data in Single-Task Learning 51
5.3 Multi-Task Learning versus Single-Task Learning 55
5.4 Ablation Studies in Multi-Task Learning 56
5.4.1 Without Ordered RNA-Seq Data 56
5.4.2 Unique RNA-Seq Feature Extractor Without Parameter Sharing 57
5.4.3 Without Task Descriptor 59
5.4.4 Without Weighted Random Data Sampler 60
5.5 Relationships Between Different Cancers in Multi-Task Learning 61
5.6 Bimodal Neural Network Explanation 63
5.6.1 Feature Importance 63
5.6.2 Feature Embeddings 65
5.7 Future Works 66
5.7.1 More Datasets and Modalities 66
5.7.2 Feature Extraction for RNA-Seq Data 67
5.7.3 Definition of Task 68
Chapter 6 Conclusion 70
Bibliography 72
dc.language.iso: en
dc.title (zh_TW): 整合核糖核酸測序與臨床資料並應用多任務學習方法於多癌症預後預測
dc.title (en): Multi-Cancer Prognosis Prediction with Multi-Task Learning Integrating RNA Sequencing and Clinical Data
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: Master's (碩士)
dc.contributor.oralexamcommittee (zh_TW): 李祈均
dc.contributor.oralexamcommittee (en): Wei-Chung Wang; Mone-Wei Lin; Shao-Hua Sun; Chi-Chun Lee
dc.subject.keyword (zh_TW): 多任務學習, 多模態學習, 深度學習, 生物資訊學, 特徵選取
dc.subject.keyword (en): multi-task learning, multimodal learning, deep learning, bioinformatics, feature selection
dc.relation.page: 87
dc.identifier.doi: 10.6342/NTU202203345
dc.rights.note: Authorization granted (restricted to on-campus access)
dc.date.accepted: 2022-09-26
dc.contributor.author-college: 電機資訊學院
dc.contributor.author-dept: 電信工程學研究所
dc.date.embargo-lift: 2027-09-02
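
The record itself contains no code, but the abstract describes a concrete architecture: a bimodal network that fuses an RNA-Seq branch with a clinical branch and shares parameters across cancer types through multi-task learning. Below is a minimal PyTorch-style sketch of that idea, not the thesis implementation; the layer widths, the task names (BRCA, LUAD, COAD), the binary prognosis heads, and the dummy training step are all illustrative assumptions.

import torch
import torch.nn as nn

class BimodalMultiTaskNet(nn.Module):
    """Illustrative sketch: a shared RNA-Seq encoder and a clinical encoder,
    with one prognosis head per cancer type (hard parameter sharing)."""

    def __init__(self, n_genes, n_clinical, tasks=("BRCA", "LUAD", "COAD")):
        super().__init__()
        # RNA-Seq feature extractor shared by all cancer types (tasks)
        self.rna_encoder = nn.Sequential(
            nn.Linear(n_genes, 256), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
        )
        # Clinical feature extractor
        self.clin_encoder = nn.Sequential(nn.Linear(n_clinical, 32), nn.ReLU())
        # Task-specific classifiers on top of the fused representation
        self.heads = nn.ModuleDict({t: nn.Linear(64 + 32, 1) for t in tasks})

    def forward(self, rna, clinical, task):
        # Concatenate the two modality embeddings, then apply the head
        # belonging to the requested cancer type.
        fused = torch.cat([self.rna_encoder(rna), self.clin_encoder(clinical)], dim=1)
        return self.heads[task](fused)  # prognosis logit for the given cancer type

# Dummy forward/backward pass with random tensors standing in for TCGA data.
model = BimodalMultiTaskNet(n_genes=5000, n_clinical=10)
rna, clin = torch.randn(8, 5000), torch.randn(8, 10)
labels = torch.randint(0, 2, (8,)).float()   # e.g. binary survival labels (assumed)
logits = model(rna, clin, task="BRCA").squeeze(1)
loss = nn.BCEWithLogitsLoss()(logits, labels)
loss.backward()

Per the table of contents above, the actual model also involves a task descriptor and a weighted random data sampler; those components are omitted from this sketch for brevity.
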
Appears in Collections: 電信工程學研究所

Files in This Item:
File | Size | Format
ntu-110-2.pdf (currently not authorized for public access) | 1.27 MB | Adobe PDF