Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Yuichi TAGUCHI, Keita TAKAHASHI, Takeshi NAEMURA, "Design and Implementation of a Real-Time Video-Based Rendering System Using a Network Camera Array" in IEICE TRANSACTIONS on Information, vol. E92-D, no. 7, pp. 1442-1452, July 2009, doi: 10.1587/transinf.E92.D.1442.
Abstract: We present a real-time video-based rendering system using a network camera array. Our system consists of 64 commodity network cameras that are connected to a single PC through a gigabit Ethernet. To render a high-quality novel view, our system estimates a view-dependent per-pixel depth map in real time by using a layered representation. The rendering algorithm is fully implemented on the GPU, which allows our system to efficiently perform capturing and rendering processes as a pipeline by using the CPU and GPU independently. Using QVGA input video resolution, our system renders a free-viewpoint video at up to 30 frames per second, depending on the output video resolution and the number of depth layers. Experimental results show high-quality images synthesized from various scenes.
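The layered, view-dependent depth estimation summarized in the abstract is commonly realized as a plane sweep: every camera image is reprojected onto each candidate depth layer as seen from the virtual viewpoint, color agreement across the cameras is measured per pixel, and each output pixel keeps the blended color of its most consistent layer. The Python/NumPy sketch below illustrates only that general idea; it is not the authors' GPU implementation, and the reprojection helper warp_to_layer is a hypothetical placeholder.

import numpy as np

def estimate_depth_and_render(camera_images, warp_to_layer, num_layers):
    # camera_images: list of HxWx3 float arrays, one per camera in the array.
    # warp_to_layer(image, layer): hypothetical helper that reprojects a camera
    # image onto depth layer `layer` as seen from the virtual viewpoint.
    h, w, _ = camera_images[0].shape
    best_cost = np.full((h, w), np.inf)   # lowest photo-consistency cost so far
    rendered = np.zeros((h, w, 3))        # novel view assembled per pixel
    for layer in range(num_layers):
        # Reproject every camera image onto this depth layer.
        warped = np.stack([warp_to_layer(img, layer) for img in camera_images])
        mean = warped.mean(axis=0)
        # Photo-consistency cost: color variance across the cameras.
        cost = ((warped - mean) ** 2).sum(axis=(0, 3))
        # Keep, per pixel, the blended color of the most consistent layer.
        better = cost < best_cost
        best_cost[better] = cost[better]
        rendered[better] = mean[better]
    return rendered

Per the abstract, the actual system runs this kind of per-pixel selection on the GPU while the CPU handles capture from the network cameras, so the two stages overlap as a pipeline; the sketch above shows only the per-frame computation in sequential form.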
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E92.D.1442/_p
@ARTICLE{e92-d_7_1442,
author={Yuichi TAGUCHI and Keita TAKAHASHI and Takeshi NAEMURA},
journal={IEICE TRANSACTIONS on Information},
title={Design and Implementation of a Real-Time Video-Based Rendering System Using a Network Camera Array},
year={2009},
volume={E92-D},
number={7},
pages={1442-1452},
abstract={We present a real-time video-based rendering system using a network camera array. Our system consists of 64 commodity network cameras that are connected to a single PC through a gigabit Ethernet. To render a high-quality novel view, our system estimates a view-dependent per-pixel depth map in real time by using a layered representation. The rendering algorithm is fully implemented on the GPU, which allows our system to efficiently perform capturing and rendering processes as a pipeline by using the CPU and GPU independently. Using QVGA input video resolution, our system renders a free-viewpoint video at up to 30 frames per second, depending on the output video resolution and the number of depth layers. Experimental results show high-quality images synthesized from various scenes.},
keywords={},
doi={10.1587/transinf.E92.D.1442},
ISSN={1745-1361},
month={July},}
TY - JOUR
TI - Design and Implementation of a Real-Time Video-Based Rendering System Using a Network Camera Array
T2 - IEICE TRANSACTIONS on Information
SP - 1442
EP - 1452
AU - Yuichi TAGUCHI
AU - Keita TAKAHASHI
AU - Takeshi NAEMURA
PY - 2009
DO - 10.1587/transinf.E92.D.1442
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E92-D
IS - 7
JA - IEICE TRANSACTIONS on Information
Y1 - July 2009
AB - We present a real-time video-based rendering system using a network camera array. Our system consists of 64 commodity network cameras that are connected to a single PC through a gigabit Ethernet. To render a high-quality novel view, our system estimates a view-dependent per-pixel depth map in real time by using a layered representation. The rendering algorithm is fully implemented on the GPU, which allows our system to efficiently perform capturing and rendering processes as a pipeline by using the CPU and GPU independently. Using QVGA input video resolution, our system renders a free-viewpoint video at up to 30 frames per second, depending on the output video resolution and the number of depth layers. Experimental results show high-quality images synthesized from various scenes.
ER -