SC1 is a camera that can learn a person's nuanced preferences for self-portraits and make them accessible as a camera function. It combines a person's muscle memory for how to hold the camera with a convolutional neural network to identify the right moment for taking a picture.
For every “good” selfie there are at least 100 “bad” tries. To most people, these “bad” tries look exactly like the “good” one; only the person who took them can tell the difference. It’s the combination of countless minuscule nuances, visible only to this person, that makes the “good” picture special.
It’s close to impossible to describe these nuances in words, yet they are ingrained in people’s muscle memory: a correlation between the way the camera is held and facial expressions. Capturing the right moment is the challenge. SC1 is a camera application that makes something this personal as accessible as detecting the “best” exposure for a picture.
While the software is a fully functional application, the physical device is only for illustrative purposes.
The camera’s front plate acts as a rotating mode switch, displaying the current mode through emojis in a round cut-out. The lens sits in the front plate’s center. On the side of the device is a shutter button with a built-in vibration motor, and attached to the bottom is a one-way mirror.
Training the camera is part of its physical interface: the user first captures images from all angles with a neutral expression, then captures their preferred expression. The camera focuses only on a person’s facial expressions by extracting the face from the background and lighting, reducing the training set to about 10 images per expression for first results.
This training can also be done with previously taken pictures. The picture below shows the app I built to test the training.
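The training described above can be sketched as a small pipeline: crop the face out of each frame, embed it, and average the embeddings of the two small training sets into reference vectors. This is a minimal illustration, not SC1's actual model: the face box is assumed to come from a detector, and the `embed` function is a stand-in for the convolutional network.

```python
import numpy as np

def crop_face(image, box):
    """Extract just the face region so background and lighting do not
    influence training. The (x, y, w, h) box is assumed to come from a
    face detector."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def embed(face):
    """Stand-in for the CNN feature extractor: a normalised pixel vector.
    The real model would output a learned embedding instead."""
    v = face.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def train(neutral_faces, preferred_faces):
    """Average the ~10 embeddings per expression into two reference vectors."""
    neutral = np.mean([embed(f) for f in neutral_faces], axis=0)
    preferred = np.mean([embed(f) for f in preferred_faces], axis=0)
    return neutral, preferred

def closeness(face, neutral, preferred):
    """Score in [0, 1]: how much closer the current frame is to the
    preferred expression than to the neutral one."""
    e = embed(face)
    sim_p = float(np.dot(e, preferred))
    sim_n = float(np.dot(e, neutral))
    return sim_p / (sim_p + sim_n + 1e-8)
```

Averaging over only a handful of images works here because the face crop removes most of the variation that is not about expression.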
Once trained, the camera uses a range of haptic feedback to indicate how close the current frame is to the user’s personal ideal. Taking the final picture remains in the hands of the user, embracing the fact that self-perception is continuously evolving. Haptics are illustrated with a light in the right picture, due to the lack of haptics on the web :)
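One way to turn a continuous closeness score into distinct haptic states is to divide it into bands and add a little hysteresis, so the feedback does not flicker when the score hovers around a boundary. The band names and thresholds below are illustrative, not SC1's actual values.

```python
def haptic_state(score, previous=None, margin=0.05):
    """Map a closeness score in [0, 1] to a named haptic band.

    A frame stays in its previous band until the score clearly leaves it
    (by more than `margin`), which keeps the feedback stable. Thresholds
    are hypothetical.
    """
    bands = {"far": (0.0, 0.5), "close": (0.5, 0.8), "ideal": (0.8, 1.001)}
    # Hysteresis: widen the previous band before checking the others.
    if previous in bands:
        lo, hi = bands[previous]
        if lo - margin <= score < hi + margin:
            return previous
    for name, (lo, hi) in bands.items():
        if lo <= score < hi:
            return name
    return "ideal"
```

Without the hysteresis, a score oscillating around 0.5 would make the vibration pattern switch back and forth on every frame.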
During user testing, I realized that this function allows people to share their personal sense of beauty. It allows others to capture portraits of the person who trained the camera, according to that person’s ideal.
Below you can see prototypes and pictures taken with them after training.
The red numeric value in each portrait indicates how close it is to the ideal of the person who trained the camera. It was only used during testing.
Working with haptic feedback was necessary, since any visual feedback would potentially interfere with people’s gaze and head angle. The challenge was to design haptics that are subtle yet intuitive to differentiate, to communicate the system’s different states. In 2017, without access to Apple’s Core Haptics API, this required working with alternative haptic dsp ...
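One common pre-Core-Haptics approach is to schedule timed vibration pulses yourself, encoded as an alternating off/on timing list. As a sketch of such a pattern generator (an illustrative design, not SC1's actual haptics): pulses could arrive faster as the score rises, like a Geiger counter, which is easy to feel without looking at the screen.

```python
def pulse_pattern(score, base_gap_ms=600, min_gap_ms=80, pulse_ms=30, count=4):
    """Generate an alternating [off, on, off, on, ...] timing list in ms.

    A higher closeness score shrinks the gap between pulses, so the
    rhythm speeds up as the frame approaches the trained ideal. All
    timing values are hypothetical.
    """
    gap = int(base_gap_ms - score * (base_gap_ms - min_gap_ms))
    pattern = []
    for _ in range(count):
        pattern += [gap, pulse_ms]
    return pattern
```

Encoding rhythm rather than intensity also sidesteps hardware differences: most vibration motors of that era could only be switched on and off.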