Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83607

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳宏銘 | zh_TW |
| dc.contributor.advisor | Homer H. Chen | en |
| dc.contributor.author | 羅翊展 | zh_TW |
| dc.contributor.author | I-Chan Lo | en |
| dc.date.accessioned | 2023-03-19T21:11:46Z | - |
| dc.date.available | 2023-11-10 | - |
| dc.date.copyright | 2023-09-15 | - |
| dc.date.issued | 2022 | - |
| dc.date.submitted | 2002-01-01 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83607 | - |
| dc.description.abstract | Virtual reality (VR) and augmented reality (AR) are among the major trends of future technology. They bridge the virtual and real worlds and allow users to explore virtual spaces in the most natural way. To deliver a truly immersive experience, we need an imaging technique that records fine-grained, full-view multimedia, known as the 360° light field. This dissertation investigates a novel and efficient way to generate a 360° light field with a complete panoramic field of view and high angular resolution, capturing every light ray of the real world. Starting from 360° panoramic photography, we study how to overcome fisheye distortion, stitch dual-fisheye images efficiently, and acquire a 360° light field with a single compact dual-fisheye camera, generating a high-quality 360° light field through a deep network architecture.
The dissertation consists of three parts. The first part focuses on the calibration of a back-to-back dual-fisheye camera. We develop a geometric calibration model based on concentric trajectories and, to address color discrepancy, a photometric correction model for intensity and color compensation that provides efficient and accurate local color transfer. The second part describes the 360° image and video stitching methods. Specifically, we develop a mesh deformation model and an adaptive seam for video stitching to reduce geometric distortion and ensure optimal stitching. The third part describes how to generate a 360° light field with the dual-fisheye camera. Multiple 360° raw images are fed into a deep network to generate the 360° light field. The network has three components: a convolutional network that enforces a spatiotemporal consistency constraint on the subviews of the generated 360° light field, an equirectangular matching cost that increases the accuracy of disparity estimation, and a 360° light field resampling network. This technique enables a single compact fisheye camera to produce a high-quality 360° light field. | zh_TW |
| dc.description.abstract | Virtual reality (VR) and augmented reality (AR) bridge the virtual and real worlds and call for an imaging technique that captures the full field of view in fine detail, known as the 360° light field. This dissertation presents an efficient approach to generating 360° images, videos, and light fields with a single compact dual-fisheye camera. The first part of the dissertation focuses on the calibration of the back-to-back dual-fisheye camera, for which we develop a geometric calibration model based on concentric trajectories.
The second part of the dissertation describes our solution for image and video stitching for dual-fisheye cameras. Specifically, we develop a photometric correction model for intensity and color compensation to provide efficient and accurate local color transfer, and a mesh deformation model along with an adaptive seam carving method for image stitching to reduce geometric distortion and ensure optimal spatiotemporal alignment. The stitching algorithm and the compensation algorithm run efficiently on 1920×960 images.
The third part of the dissertation describes an efficient pipeline for light field acquisition using a dual-fisheye camera. The proposed pipeline generates a light field from a sequence of 360° images captured by the dual-fisheye camera. It has three main components: a convolutional neural network (CNN) that enforces a spatiotemporal consistency constraint on the subviews of the 360° light field, an equirectangular matching cost that aims at increasing the accuracy of disparity estimation, and a light field resampling subnet that produces the 360° light field based on the disparity information. We demonstrate the effectiveness, robustness, and quality of the proposed pipeline using real data obtained from a commercially available dual-fisheye camera. | en |
| dc.description.provenance | Made available in DSpace on 2023-03-19T21:11:46Z (GMT). No. of bitstreams: 1 U0001-0707202214204900.pdf: 44400906 bytes, checksum: aa61485853cadb33feaca2fddbff5b5c (MD5) Previous issue date: 2022 | en |
| dc.description.tableofcontents | Abstract
List of Figures
List of Tables
1 Introduction
1.1 Overview
1.2 Contributions
2 Review
2.1 Camera calibration
2.2 Color compensation
2.3 Image and video stitching
2.4 Equirectangular projection
2.5 Disparity estimation
2.6 View synthesis
2.7 Panoramic light field
3 Dual fisheye camera calibration
3.1 Geometric calibration based on concentric trajectories
3.2 Photometric compensation
3.3 Experiments
3.3.1 Evaluation of camera calibration
3.3.2 Evaluation of photometric compensation
3.4 Discussion
4 Image stitching for dual-fisheye cameras
5 Generating 360° image
5.1 Overview
5.2 Feature matching
5.3 Seam blending
5.4 Experiments
5.4.1 Evaluation of image stitching
5.5 Discussion
6 Video stitching for dual-fisheye cameras
7 Generating 360° video
7.1 Overview
7.2 Experiments
7.2.1 Evaluation of video stitching
7.3 Discussion
8 Generating 360° light field
8.1 Overview
8.2 360° representation of light field
8.3 Proposed Solution
8.3.1 Camera Geometry
8.3.2 Network Architecture
8.3.3 Disparity Estimation
8.3.4 360° Light Field Resampling
8.3.5 Loss Function
8.3.6 Training and Inference
8.4 Camera Calibration
8.5 Experiments
8.5.1 Camera Calibration
8.5.2 Evaluation of Disparity Map
8.5.3 Evaluation of Light Field
8.6 Discussion
9 Conclusion
Reference | - |
| dc.language.iso | en | - |
| dc.subject | 縫合線計算 | zh_TW |
| dc.subject | 影像轉換 | zh_TW |
| dc.subject | 視差 | zh_TW |
| dc.subject | 影片縫合 | zh_TW |
| dc.subject | 色彩轉換 | zh_TW |
| dc.subject | 影像校正 | zh_TW |
| dc.subject | 魚眼相機 | zh_TW |
| dc.subject | 深度偵測 | zh_TW |
| dc.subject | 360°光場 | zh_TW |
| dc.subject | 捲積神經網路 | zh_TW |
| dc.subject | 光場生成 | zh_TW |
| dc.subject | 影像生成 | zh_TW |
| dc.subject | 360°影片 | zh_TW |
| dc.subject | convolutional neural network | en |
| dc.subject | fisheye lens camera | en |
| dc.subject | image rectification | en |
| dc.subject | color transfer | en |
| dc.subject | video stitching | en |
| dc.subject | parallax | en |
| dc.subject | image warping | en |
| dc.subject | seam carving | en |
| dc.subject | 360° video | en |
| dc.subject | view synthesis | en |
| dc.subject | 360° light field | en |
| dc.subject | light field generation | en |
| dc.subject | depth estimation | en |
| dc.title | 以雙魚眼相機產生360度影像與光場 | zh_TW |
| dc.title | Generating 360° Image and Light Field with a Dual-Fisheye Camera | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 110-2 | - |
| dc.description.degree | Doctoral | - |
| dc.contributor.author-orcid | 0000-0002-2640-2433 | - |
| dc.contributor.oralexamcommittee | 吳忠幟;吳家麟;莊永裕;廖偉智;何業勤;鄭文皇;王鈺強;黃朝宗 | zh_TW |
| dc.contributor.oralexamcommittee | ;;;;;;; | en |
| dc.subject.keyword | 魚眼相機,影像校正,色彩轉換,影片縫合,視差,影像轉換,縫合線計算,360°影片,影像生成,360°光場,光場生成,深度偵測,捲積神經網路, | zh_TW |
| dc.subject.keyword | fisheye lens camera,image rectification,color transfer,video stitching,parallax,image warping,seam carving,360° video,view synthesis,360° light field,light field generation,depth estimation,convolutional neural network, | en |
| dc.relation.page | 120 | - |
| dc.identifier.doi | 10.6342/NTU202201328 | - |
| dc.rights.note | Not authorized | - |
| dc.date.accepted | 2022-08-25 | - |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
| dc.contributor.author-dept | Graduate Institute of Communication Engineering | - |
Appears in Collections: Graduate Institute of Communication Engineering
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-110-2.pdf (restricted access, not publicly available) | 43.36 MB | Adobe PDF |
Items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.