Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/17062
Full metadata record
(Each entry is listed as: DC field [language]: value)
dc.contributor.advisor: 連豊力
dc.contributor.author [en]: Chih-Ming Hsu
dc.contributor.author [zh_TW]: 許志明
dc.date.accessioned: 2021-06-07T23:55:06Z
dc.date.copyright: 2013-09-06
dc.date.issued: 2013
dc.date.submitted: 2013-08-23
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/17062
dc.description.abstract [zh_TW]: This dissertation presents vision-based drivable space detection and model-based traffic flow characterization for intelligent highway systems. Traffic scene understanding is an important research topic for intelligent vehicles and mobile robots; in particular, determining the drivable space and the locations of possible obstacles in complex and dynamic environments is the most fundamental task in road scene understanding. In this dissertation, a vision-based method that combines road color features with road geometry is used to perceive the road and moving obstacles in dynamic environments. The method segments the unoccupied road surface by identifying intensity differences among neighboring regions; this neighborhood-similarity identification is a search algorithm that combines statistical feature analysis with a breadth-first search mechanism. The similarity between the road model and the road region candidates is measured with the Bhattacharyya distance. After the unoccupied road surface is identified, the relative distance of preceding obstacles can be estimated with the proposed obstacle scanning mechanism and an online camera calibration scheme. Extensive experiments show that the method detects the drivable region and dynamically estimates the relative distance of preceding obstacles in real traffic scenes.
To predict future changes in traffic volume when characterizing traffic flow behavior, an accurate traffic flow model is first constructed; a dynamic characterization of traffic flow and a traffic information measure are then used to identify the dynamically changing behaviors of the flow and to quantify the information of traffic dynamics.
To characterize traffic flow precisely, a hierarchical Gaussian mixture model framework is proposed for constructing an accurate empirical dynamics model. The traffic flow data are represented by a linear combination of multiple Gaussian functions, which characterizes the related temporal and geographical parameters and reduces the volume of traffic data. To examine the dynamically changing behaviors, a phase-transition method is adopted to identify multiple traffic flow patterns and their dynamic switching behaviors. In addition, traffic data collected from different vehicle detectors are used to determine, through information entropy measurements, the significance of detectors at different locations. Experimental analysis of six months of traffic data collected from a freeway section in Taiwan identifies five traffic flow patterns, each of which provides a distinct interpretation of a particular dynamic traffic behavior.
dc.description.abstract [en]: Vision-based drivable space detection and model-based traffic flow characterization are presented in this dissertation. Traffic scene understanding and perception is an important issue for intelligent vehicles and autonomous mobile robots. Especially in complex and dynamic environments, determining the drivable space and locating moving obstacles are fundamental tasks of road scene understanding. In this dissertation, a vision-based approach that combines road geometry and color features to perceive the road and moving obstacles in a dynamic environment is proposed. In this approach, the free road surface is detected and segmented by identifying intensity differences among neighboring regions. Region similarity is identified with a search algorithm that combines statistical feature analysis (SFA) with a breadth-first search (BFS). The similarity between the road model (its color distribution) and the road region candidates is then expressed by a metric derived from the Bhattacharyya distance. After the free road surface has been identified, the relative distance of preceding obstacles can be estimated using the proposed obstacle scanning mechanism (OSM) and camera calibration scheme. Extensive experiments demonstrate that the proposed approach properly detects the drivable region and dynamically estimates the relative distance of preceding obstacles in real traffic scenes.
In characterizing traffic flow behavior, a static analysis of traffic flow is first presented to construct proper traffic flow models for predicting future traffic volumes. A dynamic characterization of the traffic flow together with a traffic information measure is then proposed to identify dynamically changing behaviors and to quantify the information of traffic dynamics.
To accurately characterize traffic flow, a hierarchical Gaussian mixture modeling (HGMM) framework is proposed for constructing a proper empirical dynamics model. The traffic flow data are first represented by a linear combination of multiple Gaussian functions, which characterizes the related temporal and geographical parameters and reduces the quantity of collected traffic data. To further examine dynamically changing behaviors, a phase-transition approach is used to identify various traffic flow patterns and their dynamic switching behaviors. Furthermore, the information entropy of the traffic data collected at various vehicle detectors can be calculated to characterize the significance of the detector locations. Detailed experimental analyses show that five types of traffic flow patterns can be identified in a six-month traffic data set from a highway section in Taiwan; each pattern carries a distinct interpretation of a particular dynamic traffic behavior.
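The abstract above describes segmenting the free road surface by growing regions with a breadth-first search and scoring candidate regions against a road color model with the Bhattacharyya distance. The Python sketch below is not the dissertation's SFA/OSM implementation; it is a minimal illustration of those two ingredients under assumed simplifications (grayscale block histograms, a single seed block, and a fixed distance threshold are all hypothetical choices).

```python
import numpy as np
from collections import deque

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two (possibly unnormalized) histograms."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))            # Bhattacharyya coefficient in [0, 1]
    return -np.log(bc + eps)

def grow_road_region(gray, road_hist, block=16, seed=None, max_dist=0.3, bins=32):
    """Grow a road mask from a seed block with a breadth-first search.

    gray      : 2-D grayscale image (8-bit intensity range assumed)
    road_hist : reference intensity histogram of the road model
    block     : block size in pixels (the image is tiled into block x block cells)
    seed      : (row, col) block index assumed to lie on the road
    Returns a boolean mask over the block grid.
    """
    rows, cols = gray.shape[0] // block, gray.shape[1] // block
    if seed is None:
        seed = (rows - 1, cols // 2)        # hypothetical default: bottom-centre block
    visited = np.zeros((rows, cols), dtype=bool)
    road_mask = np.zeros((rows, cols), dtype=bool)

    def block_hist(r, c):
        patch = gray[r * block:(r + 1) * block, c * block:(c + 1) * block]
        hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
        return hist.astype(float)

    queue = deque([seed])
    visited[seed] = True
    while queue:
        r, c = queue.popleft()
        # Reject blocks whose histogram is too far from the road model.
        if bhattacharyya_distance(block_hist(r, c), road_hist) > max_dist:
            continue
        road_mask[r, c] = True
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr, nc]:
                visited[nr, nc] = True
                queue.append((nr, nc))
    return road_mask
```

Blocks that fail the distance test are not expanded, so the grown mask stays connected to the seed; marking blocks as visited when they are enqueued keeps the search linear in the number of blocks.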
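The abstract also represents a detector's traffic volume profile as a linear combination of Gaussian functions and uses information entropy to gauge the significance of detector locations. The sketch below is only an illustrative approximation of the hierarchical GMM framework described above: it fits a fixed number of Gaussian components to one day of counts by least squares and computes the Shannon entropy of the normalized profile. The component count k, the five-minute aggregation, and the initialization are assumptions, not the dissertation's settings.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(t, *params):
    """Linear combination of Gaussians; params = (a1, mu1, s1, ..., aK, muK, sK)."""
    y = np.zeros_like(t, dtype=float)
    for a, mu, s in zip(params[0::3], params[1::3], params[2::3]):
        y += a * np.exp(-0.5 * ((t - mu) / s) ** 2)
    return y

def fit_daily_profile(volumes, k=3):
    """Fit a k-component Gaussian sum to one day of traffic volumes.

    volumes : 1-D array of counts, e.g. 288 five-minute samples from one detector.
    Returns an array of (amplitude, mean, std) rows, one per component.
    """
    volumes = np.asarray(volumes, dtype=float)
    t = np.arange(len(volumes), dtype=float)
    # Crude initial guess: k equal-height peaks spread evenly across the day.
    p0 = []
    for i in range(k):
        p0 += [volumes.max(), (i + 0.5) * len(volumes) / k, len(volumes) / (4 * k)]
    params, _ = curve_fit(gaussian_sum, t, volumes, p0=p0, maxfev=20000)
    return np.asarray(params).reshape(k, 3)

def detector_entropy(volumes, eps=1e-12):
    """Shannon entropy (bits) of a detector's normalized volume profile."""
    p = np.asarray(volumes, dtype=float)
    p = p / (p.sum() + eps)
    return float(-np.sum(p * np.log2(p + eps)))
```

A detector whose counts spread evenly across the day yields a higher entropy than one dominated by a single peak, which is one way such a value can be read as a measure of location significance.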
dc.description.provenance [en]: Made available in DSpace on 2021-06-07T23:55:06Z (GMT). No. of bitstreams: 1; ntu-102-D93921007-1.pdf: 11779308 bytes, checksum: 1dbe0cc9a1d797e3237df29d872f9e92 (MD5). Previous issue date: 2013.
dc.description.tableofcontents: 摘要 I
ABSTRACT III
CONTENTS VI
ABBREVIATIONS VIII
LIST OF FIGURES IX
LIST OF TABLES XII
CHAPTER 1 INTRODUCTION 1
1.1 MOTIVATION 4
1.2 CONTRIBUTION 9
1.3 ORGANIZATION OF THE DISSERTATION 12
CHAPTER 2 BACKGROUND AND LITERATURE REVIEW 15
2.1 CAMERA CALIBRATION 15
2.2 ROAD-AREA EXTRACTION 22
2.3 CHARACTERISTIC ANALYSIS OF TRAFFIC FLOW 26
CHAPTER 3 CAMERA CALIBRATION 31
3.1 PROJECTIVE INVARIANT 33
3.2 SIMULATIONS 39
3.3 EXPERIMENTAL RESULTS 43
CHAPTER 4 ROAD-AREA EXTRACTION 49
4.1 STATISTICAL FEATURE ANALYSIS 51
4.2 SEEDED REGION-GROWING ALGORITHM 53
4.3 EXPERIMENTAL RESULTS 58
4.3.1 A Case Study in NTUT Campus 58
4.3.2 Case Studies Conducted in Urban and Highway Traffic Settings 62
CHAPTER 5 ROAD DETECTION BASED ON FEATURE SIMILARITY SEARCHES 73
5.1 MULTI-SEEDED REGION-GROWING ALGORITHM 75
5.2 ROAD FEATURE SELECTION 78
5.3 EXPERIMENTAL RESULTS 85
CHAPTER 6 APPLICATION TO RELATIVE LOCATION ESTIMATION OF VEHICLES 93
6.1 LANE DETECTION AND ROAD MODEL FITTING 95
6.2 OBSTACLE SCANNING MECHANISM 99
6.3 EXPERIMENTAL RESULTS 101
CHAPTER 7 STATIC CHARACTERISTIC ANALYSIS OF TRAFFIC FLOW 107
7.1 EMPIRICAL FEATURES OF TRAFFIC FLOW 109
7.2 STATIC CHARACTERISTIC ANALYSIS BASED ON GMM 112
7.3 EXPERIMENTAL RESULTS 117
7.3.1 1-D Traffic Flow Modeling 117
7.3.2 2-D Traffic Flow Modeling 121
CHAPTER 8 DYNAMIC CHARACTERISTIC ANALYSIS OF TRAFFIC FLOW 125
8.1 PHASE TRANSITION ANALYSIS 127
8.2 TRAFFIC INFORMATION MEASUREMENT 134
8.3 EXPERIMENTAL RESULTS 136
8.3.1 The Results of Phase Transition Analysis 136
8.3.2 Entropy Measurement in Traffic Dynamics 139
CHAPTER 9 CONCLUSION & FUTURE WORK 143
REFERENCES 147
dc.language.iso: en
dc.subject [zh_TW]: 攝影機校準 (camera calibration)
dc.subject [zh_TW]: 訊息熵 (information entropy)
dc.subject [zh_TW]: 相似性量測 (similarity measurement)
dc.subject [zh_TW]: 路面偵測演算法 (road detection algorithm)
dc.subject [zh_TW]: 高斯混合模型 (Gaussian mixture model)
dc.subject [en]: camera calibration
dc.subject [en]: Road detection
dc.subject [en]: drivable detection
dc.subject [en]: Gaussian mixture modeling
dc.title [zh_TW]: 智慧型公路系統中以視覺基礎之可行駛空間偵測與模型基礎之交通車流特徵化研究
dc.title [en]: Vision-Based Drivable Space Detection and Model-Based Traffic Flow Characterization in Intelligent Highway System
dc.type: Thesis
dc.date.schoolyear: 101-2
dc.description.degree: 博士 (doctoral)
dc.contributor.oralexamcommittee: 張帆人, 陳柏全, 曾百由, 李後燦, 簡忠漢
dc.subject.keyword [zh_TW]: 路面偵測演算法, 相似性量測, 攝影機校準, 高斯混合模型, 訊息熵 (road detection algorithm, similarity measurement, camera calibration, Gaussian mixture model, information entropy)
dc.subject.keyword [en]: Road detection, drivable detection, camera calibration, Gaussian mixture modeling
dc.relation.page: 152
dc.rights.note: 未授權 (not authorized for public release)
dc.date.accepted: 2013-08-23
dc.contributor.author-college [zh_TW]: 電機資訊學院
dc.contributor.author-dept [zh_TW]: 電機工程學研究所
Appears in Collections: 電機工程學系

Files in This Item:
File: ntu-102-1.pdf (11.5 MB, Adobe PDF); access not authorized for public release (未授權公開取用)