start-ver=1.4 cd-journal=joma no-vol= cd-vols= no-issue= article-no= start-page=95 end-page=100 dt-received= dt-revised= dt-accepted= dt-pub-year=2003 dt-pub=20038 dt-online= en-article= kn-article= en-subject= kn-subject= en-title= kn-title=Color blending based on viewpoint and surface normal for generating images from any viewpoint using multiple cameras en-subtitle= kn-subtitle= en-abstract= kn-abstract=
A color blending method for generating high-quality images of human motion is presented. The 3D (three-dimensional) human shape is reconstructed by volume intersection and expressed as a set of voxels. Because each voxel is observed as a different color by different cameras, a single voxel color must be assigned appropriately from the several observed colors. We present a color blending method that calculates the voxel color as a linear combination of the colors observed by multiple cameras. The weights in the linear combination are calculated from both the viewpoint and the surface normal. Because the surface normal is taken into account, images with clear texture can be generated. Moreover, since the viewpoint is also taken into account, high-quality images free of unnatural warping can be generated. To examine the effectiveness of the algorithm, a traditional dance motion was captured and new images were generated from arbitrary viewpoints. Compared with existing methods, image quality at the boundaries was confirmed to improve.
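The blending described above can be sketched as follows. The abstract does not give the exact weighting function, so the particular choice here (a product of a surface-normal alignment term and a virtual-viewpoint alignment term, and all function and parameter names) is an illustrative assumption, not the paper's formula:

```python
import numpy as np

def blend_voxel_color(colors, cam_dirs, view_dir, normal):
    """Blend per-camera voxel colors into one color for a virtual view.

    colors   : (N, 3) RGB colors of the voxel as observed by N cameras
    cam_dirs : (N, 3) unit vectors from the voxel toward each camera
    view_dir : (3,)   unit vector from the voxel toward the virtual viewpoint
    normal   : (3,)   unit surface normal at the voxel
    """
    colors = np.asarray(colors, dtype=float)
    cam_dirs = np.asarray(cam_dirs, dtype=float)
    # Normal term: cameras facing the surface head-on see the sharpest texture.
    w_normal = np.clip(cam_dirs @ normal, 0.0, None)
    # Viewpoint term: cameras near the virtual viewing direction
    # reduce unnatural warping in the generated image.
    w_view = np.clip(cam_dirs @ view_dir, 0.0, None)
    w = w_normal * w_view
    if w.sum() == 0.0:              # voxel not usefully observed by any camera
        w = np.ones(len(colors))
    w = w / w.sum()                 # weights of the linear combination
    return w @ colors               # blended voxel color
```

For example, a camera looking straight along the surface normal from the virtual viewpoint's side receives all the weight, while a camera behind the surface is clipped to zero.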
en-copyright= kn-copyright= en-aut-name=MukaigawaYasuhiro en-aut-sei=Mukaigawa en-aut-mei=Yasuhiro kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=1 ORCID= en-aut-name=GendaDaisuke en-aut-sei=Genda en-aut-mei=Daisuke kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=2 ORCID= en-aut-name=YamaneRyo en-aut-sei=Yamane en-aut-mei=Ryo kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=3 ORCID= en-aut-name=ShakunagaTakeshi en-aut-sei=Shakunaga en-aut-mei=Takeshi kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=4 ORCID= affil-num=1 en-affil= kn-affil=University of Tsukuba affil-num=2 en-affil= kn-affil=Okayama University affil-num=3 en-affil= kn-affil=Okayama University affil-num=4 en-affil= kn-affil=Okayama University en-keyword=cameras kn-keyword=cameras en-keyword=colour graphics kn-keyword=colour graphics en-keyword=computer animation kn-keyword=computer animation en-keyword=image colour analysis kn-keyword=image colour analysis en-keyword=image motion analysis kn-keyword=image motion analysis END
start-ver=1.4 cd-journal=joma no-vol= cd-vols= no-issue= article-no= start-page=118 end-page=125 dt-received= dt-revised= dt-accepted= dt-pub-year=2005 dt-pub=20056 dt-online= en-article= kn-article= en-subject= kn-subject= en-title= kn-title=Coordination of appearance and motion data for virtual view generation of traditional dances en-subtitle= kn-subtitle= en-abstract= kn-abstract=A novel method is proposed for virtual view generation of traditional dances. In the proposed framework, a traditional dance is captured separately for appearance registration and motion registration. By coordinating the appearance and motion data, we can easily control virtual camera motion within a dancer-centered coordinate system. For this purpose, a coordination problem must be solved between the appearance and motion data, since the two are captured separately and the dancer moves freely in the room. The present paper presents a practical algorithm for solving this problem.
A set of algorithms is also provided for appearance registration, motion registration, and virtual view generation from the archived data. In the appearance registration, a 3D human shape is recovered at each time instant from a set of input images after suppressing their backgrounds. By combining the recovered 3D shape with the set of images for each time instant, we can compose archived dance data. In the motion registration, stereoscopic tracking is accomplished for color markers placed on the dancer. Virtual view generation is formulated as color blending among multiple views, and a novel, efficient algorithm is proposed for composing a natural virtual view from a set of images. In the proposed method, the weights of the linear combination are calculated from both an assumed viewpoint and the surface normal.
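The shape-recovery step described above (the volume intersection also named in the first record) can be sketched as a silhouette test over a voxel grid. The paper's camera model and voxel resolution are not given in this abstract, so the orthographic-style `project` callback and all names below are illustrative assumptions; only the carving principle, keeping points that fall inside every camera's foreground silhouette, follows the text:

```python
import numpy as np

def volume_intersection(silhouettes, project, grid_points):
    """Keep voxel centers whose projection lands inside every
    camera's foreground silhouette (background already suppressed).

    silhouettes : list of (H, W) boolean masks, True = foreground
    project     : project(cam_index, points) -> (M, 2) integer (x, y) pixels
    grid_points : (M, 3) candidate voxel centers
    """
    grid_points = np.asarray(grid_points)
    inside = np.ones(len(grid_points), dtype=bool)
    for i, sil in enumerate(silhouettes):
        uv = project(i, grid_points)
        h, w = sil.shape
        # Points projecting outside the image cannot be foreground here.
        valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(grid_points), dtype=bool)
        hit[valid] = sil[uv[valid, 1], uv[valid, 0]]
        inside &= hit               # intersection over all views
    return grid_points[inside]
```

Each camera carves away the voxels it sees as background, so the surviving set is the visual hull of the dancer from the available views.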
en-copyright= kn-copyright= en-aut-name=KamonYuji en-aut-sei=Kamon en-aut-mei=Yuji kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=1 ORCID= en-aut-name=YamaneRyo en-aut-sei=Yamane en-aut-mei=Ryo kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=2 ORCID= en-aut-name=MukaigawaYasuhiro en-aut-sei=Mukaigawa en-aut-mei=Yasuhiro kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=3 ORCID= en-aut-name=ShakunagaTakeshi en-aut-sei=Shakunaga en-aut-mei=Takeshi kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=4 ORCID= affil-num=1 en-affil= kn-affil=Sharp Corporation affil-num=2 en-affil= kn-affil=Okayama University affil-num=3 en-affil= kn-affil=Osaka University affil-num=4 en-affil= kn-affil=Okayama University en-keyword=humanities kn-keyword=humanities en-keyword=image colour analysis kn-keyword=image colour analysis en-keyword=image motion analysis kn-keyword=image motion analysis en-keyword=image registration kn-keyword=image registration en-keyword=stereo image processing kn-keyword=stereo image processing en-keyword=tracking kn-keyword=tracking en-keyword=virtual reality kn-keyword=virtual reality END