start-ver=1.4 cd-journal=joma no-vol= cd-vols= no-issue= article-no= start-page=94 end-page=99 dt-received= dt-revised= dt-accepted= dt-pub-year=1998 dt-pub=19984 dt-online= en-article= kn-article= en-subject= kn-subject= en-title= kn-title=Integration of eigentemplate and structure matching for automatic facial feature detection en-subtitle= kn-subtitle= en-abstract= kn-abstract=

An algorithm is proposed for detecting facial features in a facial image. The algorithm consists of bottom-up and top-down interpretation processes, which work with a feature matching module and a structure matching module. Experimental results show that the proposed algorithm can detect no fewer than five features in 99.3% of frontal views and can work even when the face orientation is unknown.
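The eigentemplate side of the matching can be illustrated with a minimal sketch (the function names and the toy data are illustrative assumptions, not taken from the paper): a candidate patch is projected onto an orthonormal eigentemplate basis, and the reconstruction residual serves as the matching score, with a small residual indicating that the patch resembles the feature.

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def eigentemplate_score(patch, mean, basis):
    """Squared reconstruction residual of `patch` against an
    eigentemplate given by a mean vector and orthonormal basis vectors.
    Lower is a better match."""
    centered = [p - m for p, m in zip(patch, mean)]
    coeffs = [dot(b, centered) for b in basis]
    recon = [sum(c * b[i] for c, b in zip(coeffs, basis))
             for i in range(len(centered))]
    return sum((x - r) ** 2 for x, r in zip(centered, recon))

# Toy example: a 4-pixel "patch" and a one-dimensional eigenspace.
mean = [0.0, 0.0, 0.0, 0.0]
basis = [[0.5, 0.5, 0.5, 0.5]]                 # one orthonormal eigentemplate
print(eigentemplate_score([1, 1, 1, 1], mean, basis))    # lies in the space -> 0.0
print(eigentemplate_score([1, -1, 1, -1], mean, basis))  # orthogonal to it -> 4.0
```

In a detector, this score would be evaluated over candidate locations and combined with the structure matching constraints described above.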

en-copyright= kn-copyright= en-aut-name=ShakunagaTakeshi en-aut-sei=Shakunaga en-aut-mei=Takeshi kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=1 ORCID= en-aut-name=OgawaKeisuke en-aut-sei=Ogawa en-aut-mei=Keisuke kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=2 ORCID= en-aut-name=OkiShohei en-aut-sei=Oki en-aut-mei=Shohei kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=3 ORCID= affil-num=1 en-affil= kn-affil=Okayama University affil-num=2 en-affil= kn-affil=Okayama University affil-num=3 en-affil= kn-affil=Okayama University en-keyword=eigenvalues and eigenfunctions kn-keyword=eigenvalues and eigenfunctions en-keyword=face recognition kn-keyword=face recognition en-keyword=feature extraction kn-keyword=feature extraction en-keyword=image matching kn-keyword=image matching END start-ver=1.4 cd-journal=joma no-vol= cd-vols= no-issue= article-no= start-page=115 end-page=124 dt-received= dt-revised= dt-accepted= dt-pub-year=1999 dt-pub=199910 dt-online= en-article= kn-article= en-subject= kn-subject= en-title= kn-title=Photometric image-based rendering for virtual lighting image synthesis en-subtitle= kn-subtitle= en-abstract= kn-abstract=

A concept named Photometric Image-Based Rendering (PIBR) is introduced for seamless augmented reality. PIBR is defined as image-based rendering that covers appearance changes caused by changes in lighting conditions, while Geometric Image-Based Rendering (GIBR) is defined as image-based rendering that covers appearance changes caused by viewpoint changes. PIBR can be applied to image synthesis to keep photometric consistency between virtual objects and real scenes under arbitrary lighting conditions. We analyze conventional IBR algorithms and formalize PIBR within the whole IBR framework. A specific algorithm is also presented for realizing PIBR. Photometric linearization provides a controllable framework for PIBR, which consists of four processes: (1) separation of environmental illumination effects, (2) estimation of lighting directions, (3) separation of specular reflections and cast shadows, and (4) linearization of self-shadows. After the photometric linearization of input images, we can synthesize realistic images that include not only diffuse reflections but also self-shadows, cast shadows, and specular reflections. Experimental results show that realistic images can be successfully synthesized while keeping photometric consistency.
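The linear image model underlying photometric linearization can be sketched as follows (an illustrative assumption about the standard Lambertian model, not the paper's implementation): for a Lambertian surface under a distant point source, the image is a linear combination of three basis images, so once input images are linearized, a new lighting condition is synthesized simply by choosing the combination coefficients.

```python
def synthesize(basis_images, coeffs):
    """Pixelwise linear combination of basis images.

    Each basis image is a flat list of pixel intensities; `coeffs`
    holds one weight per basis image (e.g. derived from a desired
    light direction)."""
    n = len(basis_images[0])
    return [sum(c * img[i] for c, img in zip(coeffs, basis_images))
            for i in range(n)]

# Toy example: three 2-pixel basis images combined with weights (2, 3, 1).
print(synthesize([[1, 0], [0, 1], [1, 1]], [2, 3, 1]))  # -> [3, 4]
```

Note that this pure linear model holds only for diffuse reflection; handling specular reflections, cast shadows, and self-shadows requires the separation and linearization steps enumerated in the abstract.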

en-copyright= kn-copyright= en-aut-name=MukaigawaYasuhiro en-aut-sei=Mukaigawa en-aut-mei=Yasuhiro kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=1 ORCID= en-aut-name=MihashiSadahiko en-aut-sei=Mihashi en-aut-mei=Sadahiko kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=2 ORCID= en-aut-name=ShakunagaTakeshi en-aut-sei=Shakunaga en-aut-mei=Takeshi kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=3 ORCID= affil-num=1 en-affil= kn-affil=Okayama University affil-num=2 en-affil= kn-affil=Okayama University affil-num=3 en-affil= kn-affil=Okayama University en-keyword=augmented reality kn-keyword=augmented reality en-keyword=lighting kn-keyword=lighting en-keyword=realistic images kn-keyword=realistic images en-keyword= rendering (computer graphics) kn-keyword= rendering (computer graphics) END start-ver=1.4 cd-journal=joma no-vol=1 cd-vols= no-issue= article-no= start-page= end-page= dt-received= dt-revised= dt-accepted= dt-pub-year=2001 dt-pub=20010101 dt-online= en-article= kn-article= en-subject= kn-subject= en-title= kn-title=Decomposed eigenface for face recognition under various lighting conditions en-subtitle= kn-subtitle= en-abstract= kn-abstract=

Face recognition under various lighting conditions is discussed for cases in which too few images are available for registration. This paper proposes decomposition of an eigenface into two orthogonal eigenspaces for realizing robust face recognition under such conditions. The decomposed eigenfaces, consisting of two eigenspaces, are constructed for each person even if only one image is available. A universal eigenspace called the canonical space (CS) plays an important role in creating the eigenspaces by way of decomposition, where CS is constructed a priori by principal component analysis (PCA) over face images of many people under many lighting conditions. In the registration stage, an input face image is decomposed into a projection image in CS and the residual of the projection. Then two eigenspaces are created independently in CS and in its orthogonal complement CS⊥. Some refinements of the two eigenspaces are also discussed. By combining the two eigenspaces, we can easily realize face identification that is robust to illumination change, even when too few images are registered. Through experiments, we show the effectiveness of the decomposed eigenfaces as compared with conventional methods.
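The core decomposition step can be sketched in a few lines (a minimal sketch assuming an orthonormal CS basis; the toy basis below is illustrative): the input face is split into its projection onto the canonical space and the residual, which by construction lies in the orthogonal complement CS⊥.

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def decompose(face, cs_basis):
    """Split `face` (a flat pixel vector) into its projection onto the
    canonical space, spanned by the orthonormal vectors in `cs_basis`,
    and the residual lying in the orthogonal complement CS-perp."""
    coeffs = [dot(b, face) for b in cs_basis]
    proj = [sum(c * b[i] for c, b in zip(coeffs, cs_basis))
            for i in range(len(face))]
    resid = [f - p for f, p in zip(face, proj)]
    return proj, resid

# Toy example: a 4-pixel "face" and a one-dimensional canonical space.
proj, resid = decompose([2, 3, 0, 0], [[1, 0, 0, 0]])
print(proj, resid)  # -> [2, 0, 0, 0] [0, 3, 0, 0]
```

The two parts are then used to build separate per-person eigenspaces in CS and CS⊥, which are combined at identification time.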

en-copyright= kn-copyright= en-aut-name=ShakunagaTakeshi en-aut-sei=Shakunaga en-aut-mei=Takeshi kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=1 ORCID= en-aut-name=ShigenariKazuma en-aut-sei=Shigenari en-aut-mei=Kazuma kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=2 ORCID= affil-num=1 en-affil= kn-affil=Okayama University affil-num=2 en-affil= kn-affil=Okayama University en-keyword=eigenvalues and eigenfunctions kn-keyword=eigenvalues and eigenfunctions en-keyword=face recognition kn-keyword=face recognition en-keyword=image registration kn-keyword=image registration en-keyword=principal component analysis kn-keyword=principal component analysis END start-ver=1.4 cd-journal=joma no-vol=1 cd-vols= no-issue= article-no= start-page=648 end-page=651 dt-received= dt-revised= dt-accepted= dt-pub-year=2002 dt-pub=20020811 dt-online= en-article= kn-article= en-subject= kn-subject= en-title= kn-title=Natural image correction by iterative projections to eigenspace constructed in normalized image space en-subtitle= kn-subtitle= en-abstract= kn-abstract=

Image correction is discussed for realizing both effective object recognition and realistic image-based rendering. Three image normalizations are compared in relation to linear subspaces and eigenspaces, and we conclude that normalization by the L1-norm, which normalizes the total sum of intensities, is the best for our purposes. Based on noise analysis in the normalized image space (NIS), an image correction algorithm is constructed, which iteratively projects an image onto an eigenspace in NIS while correcting it. Experimental results show that the proposed method works well for natural images that include various kinds of noise, such as shadows, reflections, and occlusions. The proposed method provides a feasible solution to object recognition based on the illumination cone. The technique can also be extended to face detection of unknown persons and registration/recognition using eigenfaces.
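The iterative correction loop can be sketched as follows (the function names, outlier rule, and threshold are illustrative assumptions, not the paper's exact formulation): the image is L1-normalized, projected onto the eigenspace, and pixels that deviate strongly from the projection, which likely correspond to shadows, reflections, or occlusions, are replaced by their projected values before the next iteration.

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def l1_normalize(img):
    """Scale a (non-negative) pixel vector so its intensities sum to 1,
    placing it in the normalized image space (NIS)."""
    s = sum(img)
    return [p / s for p in img]

def correct(img, basis, iters=5, thresh=0.5):
    """Iteratively project `img` onto the eigenspace spanned by the
    orthonormal vectors in `basis`, replacing outlier pixels with their
    projected values, then re-normalizing."""
    img = l1_normalize(img)
    for _ in range(iters):
        coeffs = [dot(b, img) for b in basis]
        proj = [sum(c * b[i] for c, b in zip(coeffs, basis))
                for i in range(len(img))]
        # keep a pixel only if it is within `thresh` (relative) of its
        # projection; otherwise treat it as noise and use the projection
        img = [p if abs(p - q) <= thresh * abs(q) else q
               for p, q in zip(img, proj)]
        img = l1_normalize(img)
    return img

# Toy example: a 4-pixel image with one occluded (overbright) pixel.
out = correct([1, 1, 1, 5], [[0.5, 0.5, 0.5, 0.5]])
print(out)  # the outlier pixel is pulled toward the eigenspace
```

On this toy input the spread between the brightest and darkest pixels shrinks from 0.5 to under 0.1 within a few iterations while the L1-norm stays fixed at 1.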

en-copyright= kn-copyright= en-aut-name=ShakunagaTakeshi en-aut-sei=Shakunaga en-aut-mei=Takeshi kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=1 ORCID= en-aut-name=SakaueFumihiko en-aut-sei=Sakaue en-aut-mei=Fumihiko kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=2 ORCID= affil-num=1 en-affil= kn-affil=Okayama University affil-num=2 en-affil= kn-affil=Okayama University en-keyword=eigenvalues and eigenfunctions kn-keyword=eigenvalues and eigenfunctions en-keyword=face recognition kn-keyword=face recognition en-keyword=iterative methods kn-keyword=iterative methods en-keyword=natural scenes kn-keyword=natural scenes en-keyword=object recognition kn-keyword=object recognition en-keyword= rendering (computer graphics) kn-keyword= rendering (computer graphics) END start-ver=1.4 cd-journal=joma no-vol= cd-vols= no-issue= article-no= start-page=95 end-page=100 dt-received= dt-revised= dt-accepted= dt-pub-year=2003 dt-pub=20038 dt-online= en-article= kn-article= en-subject= kn-subject= en-title= kn-title=Color blending based on viewpoint and surface normal for generating images from any viewpoint using multiple cameras en-subtitle= kn-subtitle= en-abstract= kn-abstract=

A color blending method for generating high-quality images of human motion is presented. The 3D (three-dimensional) human shape is reconstructed by volume intersection and expressed as a set of voxels. Since each voxel is observed as a different color by different cameras, a voxel color must be assigned appropriately from these several observations. We present a color blending method that calculates each voxel color as a linear combination of the colors observed by multiple cameras. The weightings in the linear combination are calculated from both the viewpoint and the surface normal. Because the surface normal is taken into account, images with clear texture can be generated. Moreover, since the viewpoint is also taken into account, high-quality images free of unnatural warping can be generated. To examine the effectiveness of the algorithm, a traditional dance motion was captured and new images were generated from arbitrary viewpoints. Compared to existing methods, improved quality at the boundaries was confirmed.
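The blending step can be sketched as below (the specific weighting formula is an assumption for illustration, not the paper's exact one): each camera's observed color is weighted by how well its viewing direction agrees with both the virtual viewpoint and the surface normal, and the weighted colors are averaged.

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def blend_color(colors, cam_dirs, view_dir, normal):
    """Blend per-camera RGB colors for one voxel.

    `cam_dirs`, `view_dir`, and `normal` are unit direction vectors.
    Each camera's weight combines its agreement with the virtual
    viewpoint and with the surface normal."""
    weights = []
    for d in cam_dirs:
        w_view = max(0.0, dot(d, view_dir))   # agreement with viewpoint
        w_norm = max(0.0, dot(d, normal))     # agreement with surface normal
        weights.append(w_view * w_norm)
    total = sum(weights) or 1.0               # avoid division by zero
    return [sum(w * c[i] for w, c in zip(weights, colors)) / total
            for i in range(3)]

# Toy example: the camera aligned with both the viewpoint and the normal
# dominates; the perpendicular camera contributes nothing.
print(blend_color([[255, 0, 0], [0, 255, 0]],
                  [(0, 0, 1), (1, 0, 0)],
                  (0, 0, 1), (0, 0, 1)))  # -> [255.0, 0.0, 0.0]
```

Using the product of the two agreement terms is one plausible design choice; what matters in the method described above is that both the viewpoint and the surface normal influence the weights.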

en-copyright= kn-copyright= en-aut-name=MukaigawaYasuhiro en-aut-sei=Mukaigawa en-aut-mei=Yasuhiro kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=1 ORCID= en-aut-name=GendaDaisuke en-aut-sei=Genda en-aut-mei=Daisuke kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=2 ORCID= en-aut-name=YamaneRyo en-aut-sei=Yamane en-aut-mei=Ryo kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=3 ORCID= en-aut-name=ShakunagaTakeshi en-aut-sei=Shakunaga en-aut-mei=Takeshi kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=4 ORCID= affil-num=1 en-affil= kn-affil=University of Tsukuba affil-num=2 en-affil= kn-affil=Okayama University affil-num=3 en-affil= kn-affil=Okayama University affil-num=4 en-affil= kn-affil=Okayama University en-keyword=cameras kn-keyword=cameras en-keyword=colour graphics kn-keyword=colour graphics en-keyword=computer animation kn-keyword=computer animation en-keyword=image colour analysis kn-keyword=image colour analysis en-keyword=image motion analysis kn-keyword=image motion analysis END start-ver=1.4 cd-journal=joma no-vol= cd-vols= no-issue= article-no= start-page=241 end-page=247 dt-received= dt-revised= dt-accepted= dt-pub-year=2004 dt-pub=20045 dt-online= en-article= kn-article= en-subject= kn-subject= en-title= kn-title=Robust face recognition by combining projection-based image correction and decomposed eigenface en-subtitle= kn-subtitle= en-abstract= kn-abstract=This work presents a robust face recognition method, which can work even when an insufficient number of images are registered for each person. The method is composed of image correction and image decomposition, both of which are specified in the normalized image space (NIS). The image correction [(F. Sakaue and T. Shakunaga, 2004), (T. Shakunaga and F. Sakaue, 2002)] is realized by iterative projections of an image to an eigenspace in NIS. It works well for natural images having various kinds of noise, including shadows, reflections, and occlusions. 
We have proposed decomposition of an eigenface into two orthogonal eigenspaces [T. Shakunaga and K. Shigenari, 2001], and have shown that the decomposition is effective for realizing robust face recognition under various lighting conditions. This work shows that the decomposed eigenface method can be refined by projection-based image correction. en-copyright= kn-copyright= en-aut-name=ShakunagaTakeshi en-aut-sei=Shakunaga en-aut-mei=Takeshi kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=1 ORCID= en-aut-name=SakaueFumihiko en-aut-sei=Sakaue en-aut-mei=Fumihiko kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=2 ORCID= en-aut-name=ShigenariKazuma en-aut-sei=Shigenari en-aut-mei=Kazuma kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=3 ORCID= affil-num=1 en-affil= kn-affil=Department of Information Technology, Faculty of Engineering Okayama University affil-num=2 en-affil= kn-affil=Department of Information Technology, Faculty of Engineering Okayama University affil-num=3 en-affil= kn-affil=Department of Information Technology, Faculty of Engineering Okayama University en-keyword=eigenvalues and eigenfunctions kn-keyword=eigenvalues and eigenfunctions en-keyword=face recognition kn-keyword=face recognition en-keyword=object recognition kn-keyword=object recognition END start-ver=1.4 cd-journal=joma no-vol= cd-vols= no-issue= article-no= start-page=118 end-page=125 dt-received= dt-revised= dt-accepted= dt-pub-year=2005 dt-pub=20056 dt-online= en-article= kn-article= en-subject= kn-subject= en-title= kn-title=Coordination of appearance and motion data for virtual view generation of traditional dances en-subtitle= kn-subtitle= en-abstract= kn-abstract=

A novel method is proposed for virtual view generation of traditional dances. In the proposed framework, a traditional dance is captured separately for appearance registration and motion registration. By coordinating the appearance and motion data, we can easily control virtual camera motion within a dancer-centered coordinate system. For this purpose, a coordination problem must be solved between the appearance and motion data, since they are captured separately and the dancer moves freely in the room. The present paper shows a practical algorithm to solve it. A set of algorithms is also provided for appearance and motion registration, and for virtual view generation from archived data. In appearance registration, a 3D human shape is recovered at each time instant from a set of input images after suppressing their backgrounds. By combining the recovered 3D shape and the set of images for each time instant, we can compose archived dance data. In motion registration, stereoscopic tracking is performed on color markers placed on the dancer. Virtual view generation is formalized as color blending among multiple views, and a novel, efficient algorithm is proposed for composing a natural virtual view from a set of images. In the proposed method, the weightings of the linear combination are calculated from both an assumed viewpoint and a surface normal.

en-copyright= kn-copyright= en-aut-name=KamonYuji en-aut-sei=Kamon en-aut-mei=Yuji kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=1 ORCID= en-aut-name=YamaneRyo en-aut-sei=Yamane en-aut-mei=Ryo kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=2 ORCID= en-aut-name=MukaigawaYasuhiro en-aut-sei=Mukaigawa en-aut-mei=Yasuhiro kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=3 ORCID= en-aut-name=ShakunagaTakeshi en-aut-sei=Shakunaga en-aut-mei=Takeshi kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=4 ORCID= affil-num=1 en-affil= kn-affil=Sharp Corporation affil-num=2 en-affil= kn-affil=Okayama University affil-num=3 en-affil= kn-affil=Osaka University affil-num=4 en-affil= kn-affil=Okayama University en-keyword=humanities kn-keyword=humanities en-keyword=image colour analysis kn-keyword=image colour analysis en-keyword=image motion analysis kn-keyword=image motion analysis en-keyword=image registration kn-keyword=image registration en-keyword=stereo image processing kn-keyword=stereo image processing en-keyword=tracking kn-keyword=tracking en-keyword=virtual reality kn-keyword=virtual reality END start-ver=1.4 cd-journal=joma no-vol=3 cd-vols= no-issue= article-no= start-page=1155 end-page=1160 dt-received= dt-revised= dt-accepted= dt-pub-year=2006 dt-pub=20068 dt-online= en-article= kn-article= en-subject= kn-subject= en-title= kn-title=A Real-life Test of Face Recognition System for Dialogue Interface Robot in Ubiquitous Environments en-subtitle= kn-subtitle= en-abstract= kn-abstract=

This paper discusses a face recognition system for a dialogue interface robot that really works in ubiquitous environments and reports experimental results of a real-life test in such an environment. While the central module of the face recognition system is based on the decomposed eigenface method, the system also includes a dedicated face detection module and a face registration module. Since face recognition must work on images captured by a camera mounted on the interface robot, all the methods are tuned for the interface robot. The face detection and recognition modules accomplish robust detection and recognition when one of the registered users is talking to the robot. Some interesting results are reported with careful analysis of an extensive real-life experiment.

en-copyright= kn-copyright= en-aut-name=SakaueFumihiko en-aut-sei=Sakaue en-aut-mei=Fumihiko kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=1 ORCID= en-aut-name=KobayashiMakoto en-aut-sei=Kobayashi en-aut-mei=Makoto kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=2 ORCID= en-aut-name=MigitaTsuyoshi en-aut-sei=Migita en-aut-mei=Tsuyoshi kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=3 ORCID= en-aut-name=ShakunagaTakeshi en-aut-sei=Shakunaga en-aut-mei=Takeshi kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=4 ORCID= en-aut-name=SatakeJunji en-aut-sei=Satake en-aut-mei=Junji kn-aut-name= kn-aut-sei= kn-aut-mei= aut-affil-num=5 ORCID= affil-num=1 en-affil= kn-affil=Nagoya Institute of Technology affil-num=2 en-affil= kn-affil=Okayama University affil-num=3 en-affil= kn-affil=Okayama University affil-num=4 en-affil= kn-affil=Okayama University affil-num=5 en-affil= kn-affil=National Institute of Information and Communications Technology END