Towards a Transparent Camera With Eye Tracking Capabilities

January 2024 - April 2024

A photo showcasing a concrete hallway with visible image defects caused by image reconstruction.
A photo showcasing an outside scene with visible image defects caused by image reconstruction.
An image of the Transparent Camera showcasing an acrylic pane in a stand with two cameras pointing into it.

Project Description

The Transparent Camera Project was a research project I undertook for my ECE5960 Computational Photography class.  

The goal of this project was to create a camera that appeared to be nothing more than a pane of transparent acrylic.  An individual could look through the acrylic and never realize it was a camera at all.  At the same time, the pane was designed to capture photos of whatever a viewer would see if they looked through it, without any visible camera component in the pane itself.  This was accomplished by mounting a camera on the side of the acrylic pane and reconstructing the image with a neural network.
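To make the idea concrete, the sketch below shows how such an image-to-image reconstruction step might look in PyTorch.  This is only a rough illustration under assumptions of mine: the network name (ReconstructionNet), the encoder-decoder layer sizes, the 128x128 image resolution, and the pixel-wise MSE loss are all hypothetical and are not the architecture or training setup described in the paper.

    # Hypothetical sketch of raw-to-scene image reconstruction; not the
    # architecture from the paper.  Assumes paired (raw, ground-truth) images.
    import torch
    import torch.nn as nn

    class ReconstructionNet(nn.Module):
        """Toy convolutional encoder-decoder that maps a raw side-view capture
        of the acrylic pane to an estimate of the scene behind it."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, raw):
            return self.decoder(self.encoder(raw))

    # One training step on a (raw, ground-truth) pair, minimizing pixel-wise error.
    model = ReconstructionNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    raw = torch.rand(1, 3, 128, 128)           # raw capture from the side-mounted camera
    ground_truth = torch.rand(1, 3, 128, 128)  # reference photo from a normal camera

    prediction = model(raw)
    loss = loss_fn(prediction, ground_truth)
    loss.backward()
    optimizer.step()

In practice the training data would be many such (raw capture, normal-camera photo) pairs of the same scenes; the random tensors above simply stand in for one pair.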

The long-term goal of this project was to use the transparent camera for eye tracking, for example in car windshields and VR/AR headsets, where the technology would be especially helpful for techniques such as foveated rendering.

Example Images

Some example images taken with the transparent camera can be seen below.  For those not familiar with the notation, the raw input is what the transparent camera sees without any post-processing.  The reconstruction is the image the neural network produces from that raw input.  The ground truth is what the image is supposed to look like; you can think of it as the photo a normal camera would take of the same scene.

Furthermore, the data gathered for the transparent camera was split into two subsets:  the training set and the testing set.  The neural network has previously trained on the images in the training set and is therefore usually better at reconstructing them.  The network has never seen the images in the testing set, so its reconstructions there tend to be worse.  This partitioning lets us measure how well the trained network generalizes:  the higher the quality of the reconstructed images in the testing set, the more generalizable the model is.  A small sketch of this evaluation idea follows.
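The sketch below illustrates the split and a generalization check on the held-out set.  The function names (split_dataset, psnr, average_test_psnr), the 80/20 split ratio, and the use of PSNR as the quality metric are illustrative assumptions of mine, not details taken from the paper.

    # Hypothetical sketch of the train/test split and a generalization check.
    import math
    import random

    import numpy as np

    def split_dataset(image_pairs, train_fraction=0.8, seed=0):
        """Shuffle (raw, ground_truth) pairs and hold out a testing subset
        the network never trains on."""
        pairs = list(image_pairs)
        random.Random(seed).shuffle(pairs)
        cutoff = int(len(pairs) * train_fraction)
        return pairs[:cutoff], pairs[cutoff:]

    def psnr(reconstruction, ground_truth, max_value=1.0):
        """Peak signal-to-noise ratio between two images in [0, 1];
        higher values indicate a closer reconstruction."""
        mse = np.mean((reconstruction - ground_truth) ** 2)
        return float("inf") if mse == 0 else 10 * math.log10(max_value ** 2 / mse)

    def average_test_psnr(test_pairs, reconstruct):
        """Average reconstruction quality on the held-out testing set.
        `reconstruct` stands in for a forward pass of the trained network."""
        return np.mean([psnr(reconstruct(raw), truth) for raw, truth in test_pairs])

A higher average score on the testing set than chance, and one not far below the training-set score, would indicate the model generalizes rather than memorizing its training images.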

An image showcasing yellowish-white lines from an acrylic pane photographed from its side.

Raw Image from the Training Set

A photo showcasing a concrete hallway with visible image defects caused by image reconstruction.

Reconstructed Image from the Training Set

A photo showcasing a concrete hallway.

Ground Truth from the Training Set

An image showcasing white lines from an acrylic pane photographed from its side.

Raw Image from the Testing Set

A photo showcasing an outside scene with visible image defects caused by image reconstruction.

Reconstructed Image from the Testing Set

A photo showcasing an outside scene.

Ground Truth from the Testing Set

Paper

Below is a copy of the final paper for the Transparent Camera Project.

Transparent Camera Project - Paper.pdf

Code Repository

A link to the project's GitHub code repository can be found here.