NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Graduate Institute of Electronics Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/31969
Title: Fast Algorithm and Hardware Architecture for Stereo Image Synthesis Systems (立體影像合成系統:演算法及架構設計之研究)
Author: Wan-Yu Chen (陳菀瑜)
Advisor: Liang-Gee Chen (陳良基)
Keywords: 3D TV, Stereo Image Synthesis, Stereo Image
Publication Year: 2006
Degree: Master
Abstract: Stereoscopic displays have officially entered mass production. The major international LCD manufacturers (Sharp, Philips, NEC) are already investing in the research and design of 3D LCDs, and the next digital generation will expand from 2D to 3D. Under this trend, the demand for stereoscopic (stereo 3D) video content has surged; only if display devices and content provision advance together can this new generation of display systems gain a complete market and, at the same time, lead a major revolution in the world's video systems.
Conventional stereo imaging captures the left- and right-eye images simultaneously with a dual-lens camera. A dual-lens camera needs not only two sets of optics but also two image processing systems, so its cost approaches twice that of a single-lens camera. Moreover, its two lenses must be separated by the average interocular distance (about 6.5 cm), making the camera bulkier than a conventional single-lens camera and less suitable for small devices such as mobile phones and PDAs.
Unlike conventional stereo content capture, this thesis proposes a low-cost single-lens stereo image synthesis system. A single lens captures multi-focus images from which a frame with three-dimensional depth information is generated (depth map estimation from multi-focus images); depth-image-based rendering (DIBR) then produces the two-eye views, which, shown on a 3D stereoscopic display, yield an excellent viewing experience. Applied to a conventional digital camera, the system generates stereo images purely by digital signal processing, without any modification of the optics.
In this thesis, we analyze the proposed stereo image synthesis system at the algorithm, architecture, and system levels. At the algorithm level, exploiting the properties of binocular vision and camera optics, we developed an efficient multi-focus depth map estimation algorithm and an efficient depth-image-based rendering algorithm for the single-lens stereo synthesis system. The former runs more than 2000 times faster than previous algorithms, with only a 1% drop in objective quality and a match rate still as high as 95%. The latter saves 96% of the computation of previous methods while scoring about 6 dB higher in objective PSNR. These techniques substantially improve the efficiency of stereo image synthesis.
At the architecture level, we propose the first efficient hardware architecture in the literature for the core of the system, the depth-image-based rendering core. Compared with conventional rendering architectures, it completely eliminates the depth buffer, saves 33% of the color buffer, and needs only 1/21 of the vertical processing units and 1/11 of the horizontal processing units. The proposed parallelism analysis provides the ideal trade-off between system bandwidth and hardware cost, and the proposed algorithm saves 91% of the system bandwidth along with the corresponding redundant computation. At an operating frequency of 80 MHz, the prototype depth-image-based rendering chip processes 25 SDTV (720x576) frames per second in real time for each of the left and right channels, with a disparity vector range of [0, +63]. The chip was fabricated through CIC in a TSMC 0.18 um 1P6M process; the die measures about 2.03x2.03 mm2, with 162K logic gates and only 10.8K bits of memory. It is the first prototype depth-image-based rendering core chip in the literature applicable to single-lens stereo image synthesis systems and advanced three-dimensional digital television systems.
In addition, at the system integration level, we built a combined hardware/software platform for the complete single-lens stereo video system, integrating a single-lens video capture device, the single-lens stereo image synthesis algorithms, and a stereoscopic video playback device. The depth-image-based rendering core is accelerated on an FPGA, while the remaining parts run in software. Besides demonstrating stereo images in real time and giving users a stereoscopic viewing experience, the platform verifies the feasibility of the algorithms and hardware architectures proposed in this thesis.
Stereo images give viewers a sense of depth by presenting a separate image to each eye simultaneously. They convey vivid information about scene structure and can be used for 3D-TV, telepresence, and immersive communication. As 3D video capture devices and 3D-TV displays mature, stereo content will grow in importance in the near future. Under this trend, stereo image synthesis is drawing more and more attention: it is the essential part of a 3D image system and the main focus of this thesis. Building a stereo image synthesis system, however, requires overcoming many design challenges, such as poor synthesis quality, high computational complexity, and hardware architecture implementation.
In this thesis, a novel stereo image synthesis system based on a single-lens auto-focus camera is proposed to overcome the overheads of the traditional two-lens stereo camera. First, we propose a novel object-based depth estimation algorithm that estimates a depth map from multi-focus images. Second, we adopt depth-image-based rendering with an edge-dependent Gaussian filter and interpolation to render high-quality left and right images. Finally, we propose a new depth background mode to enrich the background depth.
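As a rough illustration of the first stage, a per-pixel depth index can be recovered from a multi-focus stack by maximizing a local focus measure. The Laplacian-based measure and window size below are illustrative assumptions, a common generic choice rather than the thesis's object-based method:

```python
import numpy as np

def depth_from_focus(stack, window=5):
    """Pick, per pixel, the focus-stack slice with the strongest local
    contrast. `stack` is (n_slices, H, W); returns an (H, W) index map.
    The sum-of-squared-Laplacian focus measure is an illustrative
    stand-in for the thesis's object-based depth-from-focus."""
    n, h, w = stack.shape
    sharpness = np.empty((n, h, w))
    k = window // 2
    for i, img in enumerate(stack):
        # discrete Laplacian as a simple sharpness (focus) measure
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        lap2 = lap ** 2
        # box-filter the squared response over a local window
        acc = np.zeros_like(lap2)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                acc += np.roll(np.roll(lap2, dy, 0), dx, 1)
        sharpness[i] = acc
    return np.argmax(sharpness, axis=0)
```

The slice index that maximizes the focus measure serves as a coarse depth label; the thesis additionally operates on objects rather than raw pixels to resist noise.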
The proposed single-camera stereo image synthesis system, with its efficient object-based depth-from-focus and depth-image-based rendering core design, is discussed at the algorithm, architecture, and system levels. At the algorithm level, we first focus on the first stage, object-based depth from focus (OBDFF).
We propose a new object-based DFF that improves the noise resistance of the depth map and its ability to discriminate multiple objects. Moreover, the proposed algorithm requires fewer than ten multi-focus images to accomplish stereo image generation. Simulation results show that the proposed object-based depth-from-focus method reaches a match rate of more than 95% against the ground-truth depth map. Furthermore, the novel color-segmentation-based depth interpolation reduces the run-time of depth interpolation in depth from focus by 99.6%.
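Segmentation-based depth interpolation can be pictured as propagating one depth value per color segment across pixels whose depth is unknown. The median rule below is an assumption for illustration, not the thesis's exact interpolation:

```python
import numpy as np

def fill_depth_by_segment(sparse_depth, labels):
    """Densify a sparse depth map using a segmentation label map:
    every pixel in a segment receives the median of that segment's
    known depths. `sparse_depth` marks unknown pixels with NaN;
    `labels` is an integer segment map of the same shape. The median
    rule is an illustrative assumption."""
    dense = sparse_depth.copy()
    for seg in np.unique(labels):
        mask = labels == seg
        known = sparse_depth[mask]
        known = known[~np.isnan(known)]
        if known.size:                 # propagate the segment's depth
            dense[mask] = np.median(known)
    return dense
```

Because each segment is filled with a single statistic of its known samples, the per-pixel interpolation cost collapses to one pass over the image, which is in the spirit of the large run-time reduction reported above.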
The second stage is efficient depth-image-based rendering (DIBR) with an edge-dependent Gaussian filter and interpolation. The proposed DIBR algorithm outperforms previous work by 6 to 7 dB in PSNR while requiring only 3.7 percent of the instruction cycles.
At the architecture level, a hardwired DIBR core architecture is proposed as a cost-effective solution for DIBR implementation, not only in the stereo image synthesis system but also in advanced three-dimensional television systems. Based on the proposed edge-dependent rendering algorithm, a fully pipelined DIBR hardware accelerator is designed to support real-time rendering. The accelerator is optimized in three steps. First, we analyze the effect of fixed-point operation and choose the wordlength that preserves stereo image quality. Second, a three-parallel edge-dependent Gaussian filter architecture is proposed to solve the critical memory-bandwidth problem. Finally, we optimize the hardware cost: the proposed folded edge-dependent Gaussian filter architecture needs only 1/21 of the vertical PEs and 1/11 of the horizontal PEs. Furthermore, the proposed check mode eliminates the entire Z-buffer during 3D image warping, and the global and local disparity separation scheme reduces the on-chip SRAM to 66.7 percent of a direct implementation. The prototype chip meets the real-time requirement at an operating frequency of 80 MHz, rendering 25 SDTV frames per second (fps) for the left and right channels simultaneously. Simulation results also show that the hardware cost is small compared with the conventional rendering architecture.
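Z-buffer-free warping can be pictured with the classic occlusion-compatible scan order: pixels are visited so that, whenever two source pixels collide on the same target, the nearer one (larger disparity) is written last and simply overwrites, so no depth comparison is needed. The per-scanline warper below is a minimal sketch under an assumed disparity convention, not the chip's actual check-mode logic:

```python
import numpy as np

def warp_view(row, disparity, direction):
    """Warp one scanline to a virtual view without a Z-buffer.
    direction=+1 shifts pixels right (assumed left-eye view),
    direction=-1 shifts left; `disparity` holds non-negative integer
    shifts, larger meaning nearer. Scanning against the shift
    direction guarantees nearer pixels are written last. Unfilled
    targets stay NaN (holes, to be filled by interpolation)."""
    w = row.shape[0]
    out = np.full(w, np.nan)
    xs = range(w - 1, -1, -1) if direction > 0 else range(w)
    for x in xs:
        t = x + direction * disparity[x]
        if 0 <= t < w:
            out[t] = row[x]      # later (nearer) writes win
    return out
```

The ordering argument is the whole trick: for a rightward shift, if x1 < x2 land on the same target then x1 must carry the larger disparity, so a right-to-left scan writes it last. The remaining NaN holes correspond to disoccluded regions that the edge-dependent filtering and interpolation stages are there to suppress.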
Moreover, at the system integration level, a prototype real-time stereo image synthesis system has been successfully built. It integrates a single camera, an FPGA accelerator, software optimization, and an autostereoscopic 3D LCD. The demonstration results prove that the proposed algorithm and architecture of the DIBR core indeed improve the performance of the stereo image synthesis system.
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/31969
Full-text license: authorized for a fee
Appears in collections: Graduate Institute of Electronics Engineering

Files in this item:
ntu-95-1.pdf (Adobe PDF, 1.63 MB), restricted access


Unless otherwise indicated, all items in the repository are protected by copyright, with all rights reserved.

© NTU Library All Rights Reserved