NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97161
Full metadata record
DC Field  Value  Language
dc.contributor.advisor  陳尚澤  zh_TW
dc.contributor.advisor  Shang-Tse Chen  en
dc.contributor.author  許郁翎  zh_TW
dc.contributor.author  Yu-Ling Hsu  en
dc.date.accessioned  2025-02-27T16:28:27Z
dc.date.available  2025-02-28
dc.date.copyright  2025-02-27
dc.date.issued  2025
dc.date.submitted  2025-02-11
dc.identifier.citation[1] G. Alon and M. Kamfonas. Detecting language model attacks with perplexity. ArXiv, abs/2308.14132, 2023.
[2] M. F. Alzantot, Y. Sharma, A. Elgohary, B.-J. Ho, M. B. Srivastava, and K.-W. Chang. Generating natural language adversarial examples. ArXiv, abs/1804.07998, 2018.
[3] C. Anil, E. Durmus, M. Sharma, J. Benton, S. Kundu, J. Batson, N. Rimsky, M. Tong, J. Mu, D. Ford, F. Mosconi, R. Agrawal, R. Schaeffer, N. Bashkansky, S. Svenningsen, M. Lambert, A. Radhakrishnan, C. E. Denison, E. Hubinger, Y. Bai, T. Bricken, T. Maxwell, N. Schiefer, J. Sully, A. Tamkin, T. Lanham, K. Nguyen, T. Korbak, J. Kaplan, D. Ganguli, S. R. Bowman, E. Perez, R. Grosse, and D. K. Duvenaud. Many-shot jailbreaking.
[4] A. R. Basani and X. Zhang. Gasp: Efficient black-box generation of adversarial suffixes for jailbreaking llms. ArXiv, abs/2411.14133, 2024.
[5] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. teusz Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. ArXiv, abs/2005.14165, 2020.
[6] P. Chao, A. Robey, E. Dobriban, H. Hassani, G. J. Pappas, and E. Wong. Jailbreaking black box large language models in twenty queries. ArXiv, abs/2310.08419, 2023.
[7] D. Ganguli, L. Lovitt, J. Kernion, A. Askell, Y. Bai, S. Kadavath, B. Mann, E. Perez, N. Schiefer, K. Ndousse, A. Jones, S. Bowman, A. Chen, T. Conerly, N. Dassarma, D. Drain, N. Elhage, S. El-Showk, S. Fort, Z. Dodds, T. Henighan, D. Hernandez, T. Hume, J. Jacobson, S. Johnston, S. Kravec, C. Olsson, S. Ringer, E. Tran-Johnson, D. Amodei, T. B. Brown, N. Joseph, S. McCandlish, C. Olah, J. Kaplan, and J. Clark. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. ArXiv, abs/2209.07858, 2022.
[8] C. Guo, A. Sablayrolles, H. J’egou, and D. Kiela. Gradient-based adversarial attacks against text transformers. In Conference on Empirical Methods in Natural Language Processing, 2021.
[9] H. Inan, K. Upasani, J. Chi, R. Rungta, K. Iyer, Y. Mao, M. Tontchev, Q. Hu, B. Fuller, D. Testuggine, and M. Khabsa. Llama guard: Llm-based input-output safeguard for human-ai conversations. ArXiv, abs/2312.06674, 2023.
[10] J. Ji, T. Qiu, B. Chen, B. Zhang, H. Lou, K. Wang, Y. Duan, Z. He, J. Zhou, Z. Zhang, F. Zeng, K. Y. Ng, J. Dai, X. Pan, A. O’Gara, Y. Lei, H. Xu, B. Tse, J. Fu, S. M. McAleer, Y. Yang, Y. Wang, S.-C. Zhu, Y. Guo, and W. Gao. Ai alignment: A comprehensive survey. ArXiv, abs/2310.19852, 2023.
[11] A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. de Las Casas, F. Bressand, G. Lengyel, G. Lample, L. Saulnier, L. R. Lavaud, M.-A. Lachaux, P. Stock, T. L. Scao, T. Lavril, T. Wang, T. Lacroix, and W. E. Sayed. Mistral 7b. ArXiv, abs/2310.06825, 2023.
[12] D. Jin, Z. Jin, J. T. Zhou, and P. Szolovits. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. In AAAI Conference on Artificial Intelligence, 2019.
[13] L. Li, R. Ma, Q. Guo, X. Xue, and X. Qiu. Bert-attack: Adversarial attack against bert using bert. ArXiv, abs/2004.09984, 2020.
[14] X. Liu, N. Xu, M. Chen, and C. Xiao. Autodan: Generating stealthy jailbreak prompts on aligned large language models. ArXiv, abs/2310.04451, 2023.
[15] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. ArXiv, abs/1706.06083, 2017.
[16] A. Mehrotra, M. Zampetakis, P. Kassianik, B. Nelson, H. Anderson, Y. Singer, and A. Karbasi. Tree of attacks: Jailbreaking black-box llms automatically. ArXiv, abs/2312.02119, 2023.
[17] C. Meister and R. Cotterell. Language model evaluation beyond perplexity. In Annual Meeting of the Association for Computational Linguistics, 2021.
[18] G. T. T. Mesnard, C. Hardin, R. Dadashi, S. Bhupatiraju, S. Pathak, L. Sifre, M. Riviere, M. Kale, J. C. Love, P. D. Tafti, L. Hussenot, A. Chowdhery, A. Roberts, A. Barua, A. Botev, A. Castro-Ros, A. Slone, A. H’eliou, A. Tacchetti, A. Bulanova, A. Paterson, B. Tsai, B. Shahriari, C. L. Lan, C. A. Choquette-Choo, C. Crepy, D. Cer, D. Ippolito, D. Reid, E. Buchatskaya, E. Ni, E. Noland, G. Yan, G. Tucker, G.-C. Muraru, G. Rozhdestvenskiy, H. Michalewski, I. Tenney, I. Grishchenko, J. Austin, J. Keeling, J. Labanowski, J.-B. Lespiau, J. Stanway, J. Brennan, J. Chen, J. Ferret, J. Chiu, J. Mao-Jones, K. Lee, K. Yu, K. Millican, L. L. Sjoesund, L. Lee, L. Dixon, M. Reid, M. Mikula, M. Wirth, M. Sharman, N. Chinaev, N. Thain, O. Bachem, O. Chang, O. Wahltinez, P. Bailey, P. Michel, P. Yotov, P. G. Sessa, R. Chaabouni, R. Comanescu, R. Jana, R. Anil, R. McIlroy, R. Liu, R. Mullins, S. L. Smith, S. Borgeaud, S. Girgin, S. Douglas, S. Pandya, S. Shakeri, S. De, T. Klimenko, T. Hennigan, V. Feinberg, W. Stokowiec, Y. hui Chen, Z. Ahmed, Z. Gong, T. B. Warkentin, L. Peran, M. Giang, C. Farabet, O. Vinyals, J. Dean, K. Kavukcuoglu, D. Hassabis, Z. Ghahramani, D. Eck, J. Barral, F. Pereira, E. Collins, A. Joulin, N. Fiedel, E. Senter, A. Andreev, and K. Kenealy. Gemma: Open models based on gemini research and technology. ArXiv, abs/2403.08295, 2024.
[19] OpenAI. Gpt-4 technical report. 2023.
[20] OpenAI, J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, R. Avila, I. Babuschkin, S. Balaji, V. Balcom, P. Baltescu, H. Bao, M. Bavarian, J. Belgum, I. Bello, J. Berdine, G. Bernadett-Shapiro, C. Berner, L. Bogdonoff, O. Boiko, M. Boyd, A.-L. Brakman, G. Brockman, T. Brooks, M. Brundage, K. Button, T. Cai, R. Campbell, A. Cann, B. Carey, C. Carlson, R. Carmichael, B. Chan, C. Chang, F. Chantzis, D. Chen, S. Chen, R. Chen, J. Chen, M. Chen, B. Chess, C. Cho, C. Chu, H. W. Chung, D. Cummings, J. Currier, Y. Dai, C. Decareaux, T. Degry, N. Deutsch, D. Deville, A. Dhar, D. Dohan, S. Dowling, S. Dunning, A. Ecoffet, A. Eleti, T. Eloundou, D. Farhi, L. Fedus, N. Felix, S. P. Fishman, J. Forte, I. Fulford, L. Gao, E. Georges, C. Gibson, V. Goel, T. Gogineni, G. Goh, R. Gontijo-Lopes, J. Gordon, M. Grafstein, S. Gray, R. Greene, J. Gross, S. S. Gu, Y. Guo, C. Hallacy, J. Han, J. Harris, Y. He, M. Heaton, J. Heidecke, C. Hesse, A. Hickey, W. Hickey, P. Hoeschele, B. Houghton, K. Hsu, S. Hu, X. Hu, J. Huizinga, S. Jain, S. Jain, J. Jang, A. Jiang, R. Jiang, H. Jin, D. Jin, S. Jomoto, B. Jonn, H. Jun, T. Kaftan, Łukasz Kaiser, A. Kamali, I. Kanitscheider, N. S. Keskar, T. Khan, L. Kilpatrick, J. W. Kim, C. Kim, Y. Kim, J. H. Kirchner, J. Kiros, M. Knight, D. Kokotajlo, Łukasz Kondraciuk, A. Kondrich, A. Konstantinidis, K. Kosic, G. Krueger, V. Kuo, M. Lampe, I. Lan, T. Lee, J. Leike, J. Leung, D. Levy, C. M. Li, R. Lim, M. Lin, S. Lin, M. Litwin, T. Lopez, R. Lowe, P. Lue, A. Makanju, K. Malfacini, S. Manning, T. Markov, Y. Markovski, B. Martin, K. Mayer, A. Mayne, B. McGrew, S. M. McKinney, C. McLeavey, P. McMillan, J. McNeil, D. Medina, A. Mehta, J. Menick, L. Metz, A. Mishchenko, P. Mishkin, V. Monaco, E. Morikawa, D. Mossing, T. Mu, M. Murati, O. Murk, D. Mély, A. Nair, R. Nakano, R. Nayak, A. Neelakantan, R. Ngo, H. Noh, L. Ouyang, C. O’Keefe, J. Pachocki, A. Paino, J. Palermo, A. Pantuliano, G. Parascandolo, J. Parish, E. Parparita, A. Passos, M. Pavlov, A. Peng, A. Perelman, F. de Avila Belbute Peres, M. Petrov, H. P. de Oliveira Pinto, Michael, Pokorny, M. Pokrass, V. H. Pong, T. Powell, A. Power, B. Power, E. Proehl, R. Puri, A. Radford, J. Rae, A. Ramesh, C. Raymond, F. Real, K. Rimbach, C. Ross, B. Rotsted, H. Roussez, N. Ryder, M. Saltarelli, T. Sanders, S. Santurkar, G. Sastry, H. Schmidt, D. Schnurr, J. Schulman, D. Selsam, K. Sheppard, T. Sherbakov, J. Shieh, S. Shoker, P. Shyam, S. Sidor, E. Sigler, M. Simens, J. Sitkin, K. Slama, I. Sohl, B. Sokolowsky, Y. Song, N. Staudacher, F. P. Such, N. Summers, I. Sutskever, J. Tang, N. Tezak, M. B. Thompson, P. Tillet, A. Tootoonchian, E. Tseng, P. Tuggle, N. Turley, J. Tworek, J. F. C. Uribe, A. Vallone, A. Vijayvergiya, C. Voss, C. Wainwright, J. J. Wang, A. Wang, B. Wang, J. Ward, J. Wei, C. Weinmann, A. Welihinda, P. Welinder, J. Weng, L. Weng, M. Wiethoff, D. Willner, C. Winter, S. Wolrich, H. Wong, L. Workman, S. Wu, J. Wu, M. Wu, K. Xiao, T. Xu, S. Yoo, K. Yu, Q. Yuan, W. Zaremba, R. Zellers, C. Zhang, M. Zhang, S. Zhao, T. Zheng, J. Zhuang, W. Zhuk, and B. Zoph. Gpt-4 technical report, 2023.
[21] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. E. Miller, M. Simens, A. Askell, P. Welinder, P. F. Christiano, J. Leike, and R. J. Lowe. Training language models to follow instructions with human feedback. ArXiv, abs/2203.02155, 2022.
[22] A. Paulus, A. Zharmagambetov, C. Guo, B. Amos, and Y. Tian. Advprompter: Fast adaptive adversarial prompting for llms. ArXiv, abs/2404.16873, 2024.
[23] R. Rafailov, A. Sharma, E. Mitchell, S. Ermon, C. D. Manning, and C. Finn. Direct preference optimization: Your language model is secretly a reward model. ArXiv, abs/2305.18290, 2023.
[24] A. Robey, E. Wong, H. Hassani, and G. J. Pappas. Smoothllm: Defending large language models against jailbreaking attacks. ArXiv, abs/2310.03684, 2023.
[25] V. S. Sadasivan, S. Saha, G. Sriramanan, P. Kattakinda, A. M. Chegini, and S. Feizi. Fast adversarial attacks on language models in one gpu minute. ArXiv, abs/2402.15570, 2024.
[26] R. Shah, Q. Feuillade-Montixi, S. Pour, A. Tagade, S. Casper, and J. Rando. Scalable and transferable black-box jailbreaks for language models via persona modulation. ArXiv, abs/2311.03348, 2023.
[27] X. Shen, Z. J. Chen, M. Backes, Y. Shen, and Y. Zhang. ”do anything now”: Characterizing and evaluating in-the-wild jailbreak prompts on large language models. ArXiv, abs/2308.03825, 2023.
[28] T. Shin, Y. Razeghi, R. L. L. IV, E. Wallace, and S. Singh. Eliciting knowledge from language models using automatically generated prompts. ArXiv, abs/2010.15980, 2020.
[29] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. Llama: Open and efficient foundation language models. ArXiv, abs/2302.13971, 2023.
[30] H. Wang, H. Li, J. Zhu, X. Wang, C. Pan, M. Huang, and L. Sha. Diffusionattacker: Diffusion-driven prompt manipulation for llm jailbreak. 2024.
[31] A. Wei, N. Haghtalab, and J. Steinhardt. Jailbroken: How does llm safety training fail? ArXiv, abs/2307.02483, 2023.
[32] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. H. hsin Chi, F. Xia, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. ArXiv, abs/2201.11903, 2022.
[33] Z. Wei, Y. Wang, and Y. Wang. Jailbreak and guard aligned language models with only few in-context demonstrations. ArXiv, abs/2310.06387, 2023.
[34] Z. Xie, J. Gao, L. Li, Z. Li, Q. Liu, and L. Kong. Jailbreaking as a reward misspecification problem. ArXiv, abs/2406.14393, 2024.
[35] S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. Griffiths, Y. Cao, and K. Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. ArXiv, abs/2305.10601, 2023.
[36] J. Yu, X. Lin, Z. Yu, and X. Xing. Gptfuzzer: Red teaming large language models with auto-generated jailbreak prompts. ArXiv, abs/2309.10253, 2023.
[37] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong, Y. Du, C. Yang, Y. Chen, Z. Chen, J. Jiang, R. Ren, Y. Li, X. Tang, Z. Liu, P. Liu, J. Nie, and J. rong Wen. A survey of large language models. ArXiv, abs/2303.18223, 2023.
[38] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. P. Xing, H. Zhang, J. Gonzalez, and I. Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena. ArXiv, abs/2306.05685, 2023.
[39] A. Zhou, B. Li, and H. Wang. Robust prompt optimization for defending language models against jailbreaking attacks. ArXiv, abs/2401.17263, 2024.
[40] W. Zhou, X. Wang, L. Xiong, H. Xia, Y. Gu, M. Chai, F. Zhu, C. Huang, S. Dou, Z. Xi, R. Zheng, S. Gao, Y. Zou, H. Yan, Y. Le, R. Wang, L. Li, J. Shao, T. Gui, Q. Zhang, and X. Huang. Easyjailbreak: A unified framework for jailbreaking large language models. ArXiv, abs/2403.12171, 2024.
[41] D. M. Ziegler, N. Stiennon, J. Wu, T. B. Brown, A. Radford, D. Amodei, P. Christiano, and G. Irving. Fine-tuning language models from human preferences. ArXiv, abs/1909.08593, 2019.
[42] A. Zou, Z. Wang, J. Z. Kolter, and M. Fredrikson. Universal and transferable adversarial attacks on aligned language models. ArXiv, abs/2307.15043, 2023.
-
dc.identifier.uri  http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97161
dc.description.abstract  Large language models (LLMs) have developed rapidly in recent years, revolutionizing a wide range of applications and greatly improving convenience and productivity. However, alongside their powerful capabilities, ethical concerns and new types of attacks, such as jailbreaking, have also emerged. While many existing studies focus on individual attack strategies because of their simplicity and flexibility, there is less research on universal methods that seek to improve transferability to unseen data. In this thesis, we design a method for optimizing multiple prompts for jailbreak attacks in a universal setting. We further extend the design of our method to the defense setting. Experimental results show that our method achieves a high attack success rate while keeping readability under control.  zh_TW
dc.description.abstract  Large language models (LLMs) have seen rapid development in recent years, revolutionizing various applications and significantly enhancing convenience and productivity. However, alongside their impressive capabilities, ethical concerns and new types of attacks, such as jailbreaking, have emerged. While many existing studies focus on individual attack strategies due to their simplicity and flexibility, there is limited research on universal approaches, which seek to find generalizable checkpoints to optimize across datasets and improve transferability to unseen data. In this paper, we introduce JUMP, a method designed to discover adversarial multi-prompts in a universal setting. We also adapt our approach for defense, which we term DUMP. Experimental results show that our method for optimizing universal multi-prompts surpasses existing techniques.  en
dc.description.provenance  Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-02-27T16:28:27Z. No. of bitstreams: 0  en
dc.description.provenance  Made available in DSpace on 2025-02-27T16:28:27Z (GMT). No. of bitstreams: 0  en
dc.description.tableofcontents  Verification Letter from the Oral Examination Committee  i
Acknowledgements  ii
摘要 (Abstract in Chinese)  iv
Abstract  v
Contents  vi
List of Figures  ix
List of Tables  x
Chapter 1  Introduction  1
Chapter 2  Related Work  4
  2.1  Jailbreak with Prompting  4
  2.2  Jailbreak with Finetuning Attackers  5
  2.3  Defenses against Jailbreaking  5
Chapter 3  Methodology  6
  3.1  Objective  6
  3.2  BEAST  7
  3.3  From BEAST to JUMP*  8
  3.4  Adding Perplexity Constraints to JUMP*  11
Chapter 4  Attack Experiments  12
  4.1  Datasets  12
  4.2  Victim Models  12
  4.3  Evaluation Metrics  13
  4.4  Comparing Methods  13
  4.5  Results and Discussions  14
    4.5.1  Single-Prompt vs. Multi-Prompt Settings  14
    4.5.2  Trade-offs between ASR and Perplexity  14
    4.5.3  From JUMP to JUMP++  15
    4.5.4  Sensitivity to Different Choices of Initial Templates  16
    4.5.5  Transfer Attack Results  18
Chapter 5  Defenses against Individual Attacks  20
  5.1  Defending with Universal Multi-Prompts (DUMP)  20
  5.2  Comparing Methods  20
  5.3  Results  21
Chapter 6  Conclusion  23
Chapter 7  Limitations  24
References  25
Appendix A: Algorithms  33
Appendix B: Detailed Settings in Experiments  36
  B.1  Environment  36
  B.2  Baseline Attacks  36
  B.3  Baseline Defenses  37
Appendix C: Supplementary Materials  39
Appendix D: Evaluations  42
Appendix E: Demonstrations  44
dc.language.iso  en
dc.title  利用通用多重提示下的越獄攻擊  zh_TW
dc.title  Jailbreaking with Universal Multi-Prompts  en
dc.type  Thesis
dc.date.schoolyear  113-1
dc.description.degree  Master's
dc.contributor.oralexamcommittee  李宏毅;陳縕儂;高宏宇  zh_TW
dc.contributor.oralexamcommittee  Hung-Yi Lee; Yun-Nung Chen; Hung-Yu Kao  en
dc.subject.keyword  Jailbreak attack, beam search, large language model, natural language processing, deep learning  zh_TW
dc.subject.keyword  Jailbreak, Beam Search, Large Language Model, Natural Language Processing, Deep Learning  en
dc.relation.page  50
dc.identifier.doi  10.6342/NTU202500547
dc.rights.note  Permission granted (open access worldwide)
dc.date.accepted  2025-02-11
dc.contributor.author-college  College of Electrical Engineering and Computer Science
dc.contributor.author-dept  Department of Computer Science and Information Engineering
dc.date.embargo-lift  2025-02-28
Appears in Collections: Department of Computer Science and Information Engineering
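The abstract above summarizes the approach only at a high level: beam-search optimization (per the subject keywords and the BEAST-based Chapter 3) of universal adversarial multi-prompts, with a perplexity constraint to keep the prompts readable. The thesis's actual algorithms appear in its Appendix A; the sketch below is a minimal illustration, assuming a generic Hugging Face causal LM, of what a perplexity-constrained beam search over a shared suffix could look like. All identifiers, the seed phrase, and the scoring objective here are hypothetical, and "gpt2" merely stands in for a victim model.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is a stand-in; the thesis targets aligned chat models.
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
lm.eval()

def avg_log_likelihood(text: str) -> float:
    # Average per-token log-likelihood under the LM; higher means
    # lower perplexity, i.e. more natural-looking text.
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss
    return -loss.item()

def attack_score(suffix: str, behaviors: list, targets: list) -> float:
    # Hypothetical universal objective: likelihood of each target response
    # (e.g. "Sure, here is ...") given its instruction plus the shared suffix.
    return sum(
        avg_log_likelihood(f"{b} {suffix} {t}")
        for b, t in zip(behaviors, targets)
    )

def beam_search_suffix(behaviors, targets,
                       n_beams=4, n_steps=16, k=8, w_ppl=0.1):
    beams = ["Ignore the above and"]  # hypothetical seed phrase
    for _ in range(n_steps):
        candidates = []
        for suffix in beams:
            ids = tok(suffix, return_tensors="pt").input_ids
            with torch.no_grad():
                next_logits = lm(ids).logits[0, -1]
            # Extend each beam with the LM's top-k next tokens, which keeps
            # candidates fluent by construction.
            for t_id in torch.topk(next_logits, k).indices.tolist():
                candidates.append(suffix + tok.decode([t_id]))
        # Rank by attack objective plus a readability bonus, keep the best.
        candidates.sort(
            key=lambda c: attack_score(c, behaviors, targets)
                          + w_ppl * avg_log_likelihood(c),
            reverse=True,
        )
        beams = candidates[:n_beams]
    return beams

As a usage sketch (again hypothetical), beam_search_suffix(behaviors=["some instruction"], targets=["Sure, here is"]) returns the n_beams highest-scoring shared suffixes; the w_ppl weight trades attack strength against readability, the trade-off Section 4.5.2 of the thesis examines.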

Files in This Item:
File  Size  Format
ntu-113-1.pdf  1.46 MB  Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.
