Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88270

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 王鈺強 | zh_TW |
| dc.contributor.advisor | Yu-Chiang Frank Wang | en |
| dc.contributor.author | 楊福恩 | zh_TW |
| dc.contributor.author | Fu-En Yang | en |
| dc.date.accessioned | 2023-08-09T16:18:07Z | - |
| dc.date.available | 2023-11-09 | - |
| dc.date.copyright | 2023-08-09 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-07-24 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88270 | - |
| dc.description.abstract | 深度學習的進步得益於大規模且精細蒐集的數據資料集。然而,這些數據集通常基於一個假設,即訓練和測試資料是共享相同的分佈。但在實際的應用場景,特別是在計算機視覺領域中,這樣的假設往往很難成立,在圖像領域分佈或是語義類別通常有所差異。由於這些資料分佈的不同,對特定分佈進行訓練的深度神經網絡在不同的資料分佈數據上往往表現不佳。在本論文中,我們的目標是透過遷移學習,以實現在不同的圖像領域分佈或語義類別之間進行知識的遷移。在本論文中,我們首先解決圖像風格的知識轉移問題。我們提出了一個特徵解耦框架,實現跨多個圖像領域和多樣化的風格轉移。接著,我們研究語義類別的知識轉移,透過利用類別內觀察到的差異來完成零樣本圖像識別這一具有挑戰性的任務。為了讓訓練模型能更好地處理落在源域分佈之外的數據,我們提出了一種用於領域泛化的對抗性教師-學生表示學習框架。最後,我們轉向分佈式學習的場景,用以達成在特定應用場景,例如醫療上的隱私保護要求。為了解決這個問題,我們設計了一種針對特定數據領域的提示生成框架,來允許高效並且個性化的聯邦學習。通過實驗的分析與結果,本論文中提出的方法的有效性得以驗證。 | zh_TW |
| dc.description.abstract | Recent progress in deep learning owes much to large-scale, curated datasets. However, these datasets typically rest on the assumption that training and test data share the same distribution. This assumption rarely holds in real-world scenarios, particularly in computer vision, where discrepancies in data domains or semantic categories are common. Because of these distribution gaps, deep neural networks trained on one distribution often struggle when deployed in a different domain. In this thesis, we aim to advance transfer learning to enable the transfer of knowledge across distinct data domains or semantic classes. Specifically, we first address knowledge transfer for image styles, proposing a feature disentanglement framework that facilitates multi-domain and multi-modal style transfer. Next, we examine knowledge transfer for semantic categories, focusing on the challenging task of zero-shot image recognition by leveraging intra-class variations. To enable the trained model to handle data that falls outside the source distribution, we propose an Adversarial Teacher-Student Representation Learning framework for domain generalization. Finally, we turn to a decentralized learning paradigm that accommodates the privacy-preserving requirements of applications such as healthcare, and devise a client-specific prompt generation framework for efficient, personalized federated learning. Comprehensive analyses and experimental results confirm the effectiveness of the methods presented in this thesis. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-09T16:18:07Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-08-09T16:18:07Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Abstract i
List of Figures vii
List of Tables xiii
1 Knowledge Transfer for Image Styles 1
1.1 Introduction 1
1.2 Related Works 4
1.3 Multi-domain and Multi-modal Representation Disentangler 8
1.3.1 Notation and Model Overview 8
1.3.2 Representation Disentangler 9
1.3.3 Multi-domain and Multi-modal GAN 10
1.3.4 Full Objectives 12
1.3.5 Comparisons to Recent Models 15
1.4 Experiments 17
1.4.1 Implementation Details 17
1.4.2 Datasets 18
1.4.3 Multi-domain and Multi-modal Image Translation and Manipulation 20
1.4.4 Unsupervised Domain Adaptation 27
1.5 Conclusions 28
2 Knowledge Transfer for Semantic Categories 29
2.1 Introduction 29
2.2 Related Works 33
2.2.1 Cross-Modal Embedding 33
2.2.2 Data Generation 34
2.3 Proposed Method 36
2.3.1 Problem Formulation and Algorithm Overview 36
2.3.2 Cross-Modal Consistency GAN for Data Hallucination 38
2.3.3 (Generalized) Zero-Shot Recognition 43
2.4 Experiments 45
2.4.1 Implementation Details 45
2.4.2 Datasets and Evaluation Metrics 46
2.4.3 Evaluation and Comparisons 47
2.4.4 Analysis of Cross-Modal Consistency GAN 52
2.4.5 Parameter Analysis 56
2.5 Conclusion 57
3 Knowledge Transfer for Unseen Domains 59
3.1 Introduction 60
3.2 Related Works 62
3.3 Proposed Method 64
3.3.1 Problem Formulation and Model Overview 64
3.3.2 Teacher-Student Domain Generalized Representation Learning 66
3.3.3 Adversarial Novel Domain Augmentation 67
3.4 Experiments 70
3.4.1 Datasets and Evaluation Protocol 70
3.4.2 Implementation Details 71
3.4.3 Quantitative Evaluation 71
3.4.4 Analysis of Our Method 74
3.4.5 Generalization from A Single Source Domain 78
3.5 Conclusion 79
4 Knowledge Transfer for Decentralized Domains 81
4.1 Introduction 82
4.2 Related Works 85
4.3 Proposed Method 88
4.3.1 Problem Formulation 88
4.3.2 Efficient Model Personalization in FL via Client-Specific Prompt Generation 89
4.3.3 pFedPG Training and Inference 93
4.4 Experiments 94
4.4.1 Datasets and Experimental Setup 94
4.4.2 Quantitative Evaluation 96
4.4.3 Analysis of Our pFedPG 99
4.5 Conclusion 101
5 Conclusion 103
Reference 105 | - |
| dc.language.iso | en | - |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 聯邦學習 | zh_TW |
| dc.subject | 領域泛化 | zh_TW |
| dc.subject | 零樣本學習 | zh_TW |
| dc.subject | 風格轉換 | zh_TW |
| dc.subject | 遷移學習 | zh_TW |
| dc.subject | 電腦視覺 | zh_TW |
| dc.subject | Computer Vision | en |
| dc.subject | Deep Learning | en |
| dc.subject | Federated Learning | en |
| dc.subject | Domain Generalization | en |
| dc.subject | Style Transfer | en |
| dc.subject | Zero-Shot Learning | en |
| dc.subject | Transfer Learning | en |
| dc.title | 知識遷移於視覺理解 | zh_TW |
| dc.title | Visual Understanding with Knowledge Transfer | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-2 | - |
| dc.description.degree | Doctoral | - |
| dc.contributor.oralexamcommittee | 廖弘源;莊永裕;陳祝嵩;陳煥宗;孫民;邱維辰;孫紹華 | zh_TW |
| dc.contributor.oralexamcommittee | Hong-Yuan Mark Liao;Yung-Yu Chuang;Chu-Song Chen;Hwann-Tzong Chen;Min Sun;Wei-Chen Chiu;Shao-Hua Sun | en |
| dc.subject.keyword | 深度學習,電腦視覺,遷移學習,風格轉換,零樣本學習,領域泛化,聯邦學習, | zh_TW |
| dc.subject.keyword | Deep Learning,Computer Vision,Transfer Learning,Style Transfer,Zero-Shot Learning,Domain Generalization,Federated Learning, | en |
| dc.relation.page | 129 | - |
| dc.identifier.doi | 10.6342/NTU202301924 | - |
| dc.rights.note | Authorization granted (publicly available worldwide) | - |
| dc.date.accepted | 2023-07-25 | - |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
| dc.contributor.author-dept | Graduate Institute of Communication Engineering | - |
| Appears in Collections: | Graduate Institute of Communication Engineering | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-111-2.pdf | 22.12 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.