Thesis and Independent Study Topics
My research interests involve
creating unencumbered, interactive, immersive displays
and applications for these displays.
These topics can be pursued as independent study at the undergraduate
or graduate level, and can form the basis of an M.S. thesis.
Students can use CompCore facilities if needed.
You'll want some background in one or more of the areas below.
William dawt Thibault ayat csueastbay dawt edu
- Multi-projector cluster-based displays.
These topics would support and extend the CompCore Immersive Display, a
cluster-driven multi-projector display using camera-based calibration.
The use of a camera automates the process of aligning the projectors,
allowing for quick and casual alignment of projectors,
and warping images to create a single, large, high-resolution display.
For an immersive display, the projectors use walls, floors, and ceilings
to surround the user.
This technology could soon be used to create "flash displays," where
cell phones with both cameras and projectors are quickly calibrated
and combined to watch movies at HD or better resolution.
We have recently expanded on the work in
[Yuen and Thibault, 2008] to create the CompCore Immersive Display in VBT217.
See http://www.compcore.csueastbay.edu for some videos and pictures.
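As a sketch of one step in camera-based calibration: given four or more correspondences between projector pixels and camera pixels, the homography relating the two can be fit with the standard Direct Linear Transform. A minimal NumPy version (function names are illustrative, not part of the existing dcmapper code):

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: fit a 3x3 homography H with H(src) ~ dst.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]
```

In the real pipeline the correspondences come from camera images of known projected patterns, and the recovered warp is applied to each projector's output.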
- Fully immersive calibration - Extend the dcmapper projector calibration project to
support multiple camera views to create a full 360-degree environment.
The current system uses a single wide-angle camera for calibration.
This is essentially a computer vision problem, along the lines of panoramic stitching.
The transformations between the camera poses in each image can be found using regions
of overlap. In this case, the correspondences between the images are given,
since we are imaging known projected patterns (structured light).
The camera poses can then be found using numerical minimization.
Investigate the use of calibrated and uncalibrated cameras, along with fisheye
and perspective projections. A set of computers, each with a webcam
and a video projector, could be targeted as a platform, similar in capabilities
to the smartphone/camera/projector units expected Real Soon Now.
[panoramic stitching with structured light]
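Because the structured-light patterns give correspondences directly, the relative orientation between two camera views that share a center of projection (the panoramic case) can be estimated in closed form before any numerical minimization. A sketch using Kabsch/Procrustes alignment of corresponding ray directions (illustrative, not existing project code):

```python
import numpy as np

def relative_rotation(rays_a, rays_b):
    """Find the rotation R minimizing ||R @ a_i - b_i|| over corresponding rays.

    rays_a, rays_b: (N, 3) unit direction vectors of the same scene features
    seen from two camera orientations sharing a center (panoramic case).
    """
    # Cross-covariance of the two ray sets.
    M = rays_b.T @ rays_a
    U, _, Vt = np.linalg.svd(M)
    # Correct a possible reflection so the result is a proper rotation.
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

With noisy data the same estimate serves as an initial guess for a nonlinear refinement of all camera poses at once.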
- Photometric Calibration - Camera images of the projected images can be used
to compensate for overlap between projectors, intra-projector and inter-projector
brightness variation, and even colored patterns on display surfaces.
[bimber smart projectors]
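One common ingredient is a blend mask that feathers each projector's contribution toward its image edges and rescales overlapping weights to sum to one. A simplified sketch, assuming each projector's weight image has already been warped into shared screen coordinates (the full photometric model would also handle brightness and color):

```python
import numpy as np

def edge_feather(h, w):
    """Weight for one projector's image: distance to the nearest edge."""
    row = np.minimum(np.arange(h), np.arange(h)[::-1])[:, None]
    col = np.minimum(np.arange(w), np.arange(w)[::-1])[None, :]
    return (np.minimum(row, col) + 1).astype(float)

def normalize_overlap(weights):
    """Rescale per-projector weight images (in shared screen coordinates)
    so the weights sum to 1 wherever at least one projector covers."""
    total = np.sum(weights, axis=0)
    safe = np.where(total > 0, total, 1.0)  # avoid division by zero
    return [w / safe for w in weights]
```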
- Scattering compensation - When projecting on concave display surfaces such as
domes, light reflects from the screen onto itself, reducing contrast.
Using the GPU and information about how the display surfaces interact,
each frame can be modified in real time to compensate for these effects.
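If the screen's self-scattering is measured as a matrix S, where S[i, j] is the fraction of pixel j's projected light that lands on pixel i after a bounce, the frame to project can be found by solving a linear system. A small dense sketch (a real-time version would run on the GPU with a sparse or low-rank S; this is only illustrative):

```python
import numpy as np

def compensate_scatter(S, desired):
    """Solve (I + S) x = desired for the frame x to project.

    Observed light = direct term + scattered term = (I + S) @ x, so
    projecting the solution x makes the observed image match `desired`.
    Intensities are clipped to the projector's displayable range [0, 1].
    """
    n = len(desired)
    x = np.linalg.solve(np.eye(n) + S, desired)
    return np.clip(x, 0.0, 1.0)
```

When the exact solution leaves [0, 1], the clip introduces error; practical systems trade off compensation accuracy against overall brightness.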
- 3d Application Framework with Equalizer integration - Equalizer is powerful,
but it is its own framework, so adopting a simple authoring environment like
a Python-based game engine is not straightforward. [Ogre for Equalizer]
- 3d audio rendering -
Speaker arrays can inexpensively support groups of listeners,
and are compatible with unencumbered virtual and augmented realities. Sounds can be
tied to 3d objects and rendered into an array of speakers at known positions
to account for the position, distance, orientation, etc. of the 3d object relative
to the listener.
Sounds could be network streams, files, or procedurally generated.
Effects of room geometry can be modeled with high fidelity using GPGPU techniques.
The project could investigate the use of
Open Sound Control (OSC) for cluster-driven virtual environments.
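As a simple illustration of rendering a 3d sound into a speaker array, each speaker can receive a gain based on how well its direction matches the source direction, plus a distance falloff. This is a toy panning scheme for illustration, not VBAP or another published method:

```python
import numpy as np

def speaker_gains(source, speakers, listener=np.zeros(3)):
    """Per-speaker gains for one 3d source position (illustrative sketch).

    Direction: positive cosine between the listener->source and
    listener->speaker directions, normalized for constant power.
    Distance: simple 1/r attenuation, clamped below 1 unit.
    """
    to_src = np.asarray(source, float) - listener
    dist = np.linalg.norm(to_src)
    s_dir = to_src / dist
    cosines = []
    for spk in speakers:
        d = np.asarray(spk, float) - listener
        d /= np.linalg.norm(d)
        cosines.append(max(0.0, float(s_dir @ d)))  # speakers behind get 0
    g = np.array(cosines)
    norm = np.linalg.norm(g)
    if norm > 0:
        g /= norm
    return g / max(dist, 1.0)
```

Per-frame gain updates like these could be distributed to render nodes over OSC in a cluster-driven environment.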
- Point-Based Rendering of LIDAR data -
LIDAR data is becoming available online in large quantities.
This project aims to render large point sets using point-based rendering
techniques. The raw 3D point clouds are usually preprocessed to create
point primitives with additional surface properties such as normals.
The data can be registered with satellite imagery to obtain color information.
Rendering and preprocessing of extremely large point clouds can use out-of-core
techniques. Such large problems can benefit from parallel approaches using
cloud, GPGPU, and cluster computing technologies. A prototype
framework for rendering large numbers of points from LIDAR data could run
in the Equalizer framework and support laptops as well as clusters without
recompiling. Continuous Level-of-Detail (CLOD) techniques can preprocess
large point clouds for efficient rendering.
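As a sketch of one LOD preprocessing step, a point cloud can be collapsed to one centroid per voxel, with coarser cells producing coarser detail levels:

```python
import numpy as np

def voxel_downsample(points, cell):
    """Collapse a point cloud to one centroid per occupied voxel.

    points: (N, 3) array; cell: voxel edge length. Running this with a
    range of cell sizes yields a simple discrete LOD hierarchy.
    """
    keys = np.floor(points / cell).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    # Accumulate per-voxel sums and counts, then average.
    sums = np.zeros((len(uniq), points.shape[1]))
    counts = np.zeros(len(uniq))
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]
```

An out-of-core version would apply the same grouping chunk by chunk, merging per-voxel sums across chunks before averaging.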
- Video-based rendering
- Multiperspective rendering
- Head and gesture tracking