Virtual Experiences in Reality

Virtual Experiences in Reality is a project that aims to build a workflow for evaluating and exploiting both the quantitative basis of realism in visual experiences and the quality of visual experience created by sensory experience.

Project Overview

In international video standards (ITU-R BT.2020, BT.2100, etc.), physical parameters such as resolution, luminance, color gamut, and frame rate have been standardized, based on human visual characteristics, with the goal of reproducing on screen a sense of reality close to actual reality. On the other hand, the quality of the visual experience, which is determined by how humans "feel" the image, remains difficult to quantify: movies, for example, provide a highly immersive, cinematic experience[1] despite their low frame rate of 24 fps. This project aims to develop new technology for understanding the quality of visual experience by drawing on existing vision science and psychophysics.

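To illustrate the gap this project targets, the minimal sketch below contrasts the physical parameters that the standards quantify with experiential qualities for which no standardized metric yet exists. It is a hypothetical Python example; the parameter values are summarized from ITU-R BT.2020/BT.2100 for illustration only and are not part of the project's workflow.

    # A minimal sketch contrasting standardized physical video parameters
    # (quantified by ITU-R BT.2020 / BT.2100) with experiential qualities
    # that currently lack standardized metrics. Values are illustrative.

    # Physical parameters: measurable and standardized.
    physical_parameters = {
        "resolution": ["3840x2160", "7680x4320"],         # UHDTV (BT.2020)
        "frame_rate_hz": [24, 25, 30, 50, 60, 100, 120],  # up to 120 fps
        "peak_luminance_cd_m2": 10000,                     # PQ nominal peak (BT.2100)
        "color_gamut": "BT.2020 wide-gamut primaries",
        "bit_depth": [10, 12],
    }

    # Experiential qualities: determined by how viewers "feel" the image;
    # no agreed-upon quantitative scale exists for these today.
    experiential_qualities = [
        "sense of reality / presence",
        "immersion",
        "cinematic feel (e.g. the 24 fps 'film look')",
    ]

    if __name__ == "__main__":
        print("Quantified by standards:")
        for name, value in physical_parameters.items():
            print(f"  {name}: {value}")
        print("Not yet quantifiable:")
        for quality in experiential_qualities:
            print(f"  {quality}")
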
In this project, we actively draw on existing vision science and psychophysical knowledge, such as findings on luminance response, viewing angle, gaze guidance, and temporal characteristics (HDR, high frame rate, etc.), as well as on HMD-based immersive and interactive video. Just as movies once achieved a high quality of visual experience within the 24 fps format, Logoscope aims to build a workflow for next-generation video that can evaluate and exploit both the quality of visual experience created by the sensory experiential value arising in today's diverse technological environment and the high sense of reality that humans feel anew.