year:
2017/2018
with:
L. Marzolf
T. Yuan
tools:
Python
oF
Arduino
for:
IDEO

SC1

SC1 is a camera that learns a person's nuanced preferences for self-portraits and makes them accessible as a camera function. It combines a person's muscle memory for how to hold the camera with a convolutional neural network to identify the right moment for taking a picture.

Picture showing two rendered cameras, one from the front and one from the back.

For every “good” selfie there are at least a hundred “bad” tries. To most people these “bad” tries look exactly like the “good” one; only the person who took them can tell the difference. It is the combination of countless minuscule nuances, visible only to that person, that makes the “good” picture special.

Two pictures taken with SC1, illustrating the detection of subtle expression details, indicated by a red number value in the bottom-right corner of each picture.

It’s close to impossible to describe these nuances in words, yet they are ingrained in people’s muscle memory: a correlation between the way the camera is held and the facial expression. The challenge is to capture the right moment. SC1 is a camera application that makes something this personal as accessible as detecting the “best” exposure for a picture.

Image comparing a repetitive picture-taking process with the shorter process of a teaching-enabled SC1.

While the software is a fully functional application, the physical device is only for illustrative purposes.

The camera’s front plate acts as a rotating mode switch, displaying the current mode through emojis in a round cut-out. The lens sits in the center of the front plate. On the side of the device is a shutter button with a built-in vibration motor, and a one-way mirror is attached to the bottom.

Image illustrating the camera with callouts for specific parts

Training the camera is part of its physical interface: first capturing images from all angles with a neutral expression, then capturing one’s preferred expression. The camera focuses only on a person’s facial expression by extracting the face from background and lighting, which reduces the training to about 10 images per expression for first results.

left: capturing neutral expression
right: capturing preferred expression
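
The project’s actual model isn’t documented here, but the scoring idea can be sketched roughly: extract the face from each frame, embed it, and check whether the embedding sits closer to the averaged “preferred” training images than to the “neutral” ones. The face_recognition library and the relative-distance score below are illustrative assumptions, not SC1’s actual pipeline.

```python
# Rough sketch of the training/scoring idea. face_recognition and the
# relative-distance score are assumptions for illustration only.
import numpy as np
import face_recognition


def face_embedding(image_path):
    """Detect the first face in an image and return its 128-d embedding, or None."""
    image = face_recognition.load_image_file(image_path)
    encodings = face_recognition.face_encodings(image)
    return encodings[0] if encodings else None


def mean_embedding(paths):
    """Average the embeddings of a training set, skipping images with no detected face."""
    embeddings = [face_embedding(p) for p in paths]
    embeddings = [e for e in embeddings if e is not None]
    return np.mean(embeddings, axis=0)


def score(frame_path, neutral, preferred):
    """Return a 0..1 value: how much closer the current frame is to 'preferred' than to 'neutral'."""
    emb = face_embedding(frame_path)
    if emb is None:
        return 0.0
    d_pref = np.linalg.norm(emb - preferred)
    d_neut = np.linalg.norm(emb - neutral)
    return float(d_neut / (d_neut + d_pref + 1e-9))


# neutral = mean_embedding(neutral_image_paths)      # ~10 pictures, neutral expression
# preferred = mean_embedding(preferred_image_paths)  # ~10 pictures, preferred expression
# print(score("current_frame.jpg", neutral, preferred))
```

Because the embedding only describes the face itself, roughly ten images per expression are enough for first results, as described above.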

This training can also be done with previously taken pictures. The picture below shows the app I built to test the training.

Picture showing a hand holding a phone that displays an image-swiping interface to train SC1 with previously taken images.

Once trained, the camera uses a range of haptic feedback to indicate how close the current frame is to the user’s personal ideal. Taking the final picture remains in the hands of the user, embracing the fact that self-perception continuously evolves. Haptics are illustrated with a light in the right picture, due to the lack of haptics on the web :)
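
As an illustration of this feedback loop (not SC1’s actual firmware), a score between 0 and 1 could be mapped to vibration intensity and sent over serial to the Arduino driving the shutter button’s motor. The port name, baud rate, and one-byte protocol below are assumptions for the sketch.

```python
# Illustrative sketch: map the 0..1 score to a vibration intensity byte and
# send it to an Arduino driving the vibration motor. Port and protocol are
# assumptions, not SC1's firmware.
import serial

arduino = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.1)


def send_haptic_feedback(score, threshold=0.5):
    """Stay silent below the threshold, then ramp intensity up as the frame
    approaches the trained ideal (score -> 1.0)."""
    if score < threshold:
        intensity = 0
    else:
        intensity = int(255 * (score - threshold) / (1.0 - threshold))
    arduino.write(bytes([intensity]))  # one byte: PWM duty cycle for the motor
```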

During user testing I realized that this function lets people share their personal sense of beauty. It allows others to capture portraits of the person who trained the camera, according to that person’s ideal.

Picture showing the view of a third person through SC1's one-way mirror, taking a picture of a woman.

Below you can see prototypes and pictures taken with them after training.

The red numeric value in each portrait indicates how close it is to the ideal of the person who trained the camera. It was only used during testing.

Picture showing multiple photos lying on a table. The photos themselves are selfies of multiple people, taken with SC1.
Two pictures side by side, showing two functional prototypes of SC1.

Working with haptic feedback was necessary since any visual feedback would potentially interfere with people’s gaze and head angle. The challenge was to design haptics that are subtle, yet intuitive to differentiate, to communicate the different states of the system. In 2017, without access to Apple’s Core Haptics API, this required working with alternative haptic DSP ...

Picture of an iPhone 7 Plus with haptic actuators attached to the back.
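
The concrete patterns SC1 used aren’t documented here; as one way such states could be kept distinguishable, here is a sketch that separates them by rhythm rather than strength. The state names and pulse values are placeholders, not the project’s actual design.

```python
# Hypothetical pulse patterns (on/off durations in ms) that distinguish system
# states by rhythm instead of intensity. Values are placeholders for illustration.
import time

PATTERNS = {
    "searching": [(40, 400)],               # slow single tick: no face / far from ideal
    "approaching": [(60, 200), (60, 200)],  # quicker double tick: getting closer
    "ideal": [(150, 50)],                   # long, dense buzz: take the picture now
}


def play_pattern(state, motor_on, motor_off):
    """Drive the motor through one cycle of a state's pattern.
    motor_on / motor_off are callables that toggle the vibration motor."""
    for on_ms, off_ms in PATTERNS[state]:
        motor_on()
        time.sleep(on_ms / 1000)
        motor_off()
        time.sleep(off_ms / 1000)
```

Differentiating by rhythm keeps the individual pulses subtle while still making the states easy to tell apart without looking at the screen.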