Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/53879
Title: | 基於影像透視擴增實境之第一人稱視角人物動畫編輯系統 (First-person view animation editing utilizing video see-through augmented reality) |
Author: | Liang-Chen Wu 吳亮辰 |
Advisor: | 歐陽明 |
Keywords: | augmented reality, video see-through, animation editing, figure model, chroma keying, head mounted display, hand tracking |
Publication Year: | 2015 |
Degree: | Master |
Abstract: | In traditional 3D animation, we usually edit 3D objects in three-dimensional space on a screen, relying on a mouse and keyboard both to edit a model's motion and to observe the model. This process can be improved. With recent advances in gesture recognition, operations on virtual information are no longer confined to the mouse and keyboard: recognized gestures can be mapped to operations that used to be difficult in motion editing, so all users have to do is adjust the model in front of them with their own hands. Another difficulty is observing the 3D model. We address it with first-person-view augmented reality combined with head tracking: using an external camera for head tracking, observing the results of interaction from different angles and positions becomes easy without complicated operations, because the system automatically and accurately maps every real-world head movement.
In our system, the first-person view lets users easily inspect a model's pose from different angles, avoiding blind spots caused by physical occlusion. For video see-through, we use a dual-camera module as a pair of eyes to capture the visual scene as the background, which gives developers more flexibility to overlay virtual information and perform image processing. Because a camera capturing the external real world helps us implement many-to-one editing (multiple users editing the same model), we chose an augmented-reality presentation. Augmented reality also lets us place real props in the scene as references while editing or moving the model, helping us achieve better placement.
Finally, we provide a video see-through AR figure animation editing system that integrates a VR head-mounted display, a dual-camera module, and a hand-tracking device, making animation editing more realistic and engaging. |
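The keyword list mentions chroma keying as part of the video see-through pipeline. As a rough illustration of the idea (not the thesis's actual implementation; the key color, tolerance, and camera interface here are assumptions), a per-pixel chroma key replaces background-colored camera pixels with virtual content:

```python
import numpy as np

def chroma_key(frame: np.ndarray, virtual: np.ndarray,
               key_color=(0, 255, 0), tol=60) -> np.ndarray:
    """Composite `virtual` over `frame` where `frame` is near key_color.

    frame, virtual: HxWx3 uint8 RGB images of identical shape.
    Pixels within `tol` per channel of the key color are treated as
    background and replaced by the virtual layer.
    """
    diff = np.abs(frame.astype(np.int16) - np.array(key_color, dtype=np.int16))
    mask = np.all(diff <= tol, axis=-1)   # True where the frame shows the key color
    out = frame.copy()
    out[mask] = virtual[mask]
    return out

# Tiny demo: a 2x2 camera frame in which two pixels are (near-)pure green screen.
cam = np.array([[[0, 255, 0], [10, 20, 30]],
                [[200, 100, 50], [5, 250, 5]]], dtype=np.uint8)
overlay = np.full((2, 2, 3), 128, dtype=np.uint8)  # stand-in virtual content
result = chroma_key(cam, overlay)
```

In a real video see-through system the mask would instead come from depth or segmentation of the dual-camera feed, but the compositing step is the same.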
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/53879 |
Full-Text Authorization: | Authorized (fee-based) |
Appears in Collections: | Graduate Institute of Networking and Multimedia |
Files in This Item:
File | Size | Format
---|---|---
ntu-104-1.pdf (currently not authorized for public access) | 1.06 MB | Adobe PDF
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.