YaYa Shang

UX Designer ♡ San Francisco



ArcSoft Depthcam

My Role

This project ran from June 2015 to its launch in June 2016. I worked on it as the Sr. UX Designer at Intel.

I collaborated with ArcSoft to define and build Depthcam, an application built on Intel RealSense technology that allows users to capture and edit photos with depth information.

I worked with the ArcSoft team to iterate on standard camera features and functionality, and provided additional wireframes for depth-related features. I also worked closely with an Intel visual designer to review iconography and conduct user testing for visual understanding.

I led usability studies at different phases of implementation to evolve the design and make sure it stayed on target.

Download the app
(requires a device with an R200 camera)

Mode Behavior

During early testing, a major issue we encountered was that people had a hard time understanding what depth photography meant (we even debated what to call it: depth? 3D?). We envisioned depth photography as the ultimate way to capture images: people would never need to change modes, and every photo would carry depth metadata. Due to technical limitations, however, we had to separate a "depth mode" from the "standard mode". To help people visually connect the mode icon to mode changes in the camera, we tested various animations and placements for the mode button.

Depth Enhanced Filters

A depth photo is essentially a collection of pixels that can be mapped to a point cloud (data points with x, y, and z coordinates in space) as captured from a single viewpoint. This makes it possible to select and change pixels that share similar coordinates. The easiest way to think about it is like Layers in Photoshop.
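To make the Layers analogy concrete, here is a minimal sketch in Python (my illustration, not code from the actual app; the NumPy binning and the layer count are assumptions) of how per-pixel depth values can be grouped into layer-like masks:

import numpy as np

def depth_layers(depth_map, num_layers=3):
    # Split a 2D array of per-pixel z values into boolean masks,
    # nearest layer first. The evenly spaced bin edges are
    # illustrative placeholders, not the app's actual logic.
    edges = np.linspace(depth_map.min(), depth_map.max(), num_layers + 1)
    idx = np.digitize(depth_map, edges[1:-1])  # layer index 0..num_layers-1
    return [idx == i for i in range(num_layers)]

# A filter (blur, desaturation, etc.) can then be applied to just one
# mask, much like editing a single layer in Photoshop.
toy = np.array([[0.5, 0.6],
                [2.0, 3.0]])
near, mid, far = depth_layers(toy)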

To explain how we wanted to apply this capability to filters, I created wireframes documenting how a user would interact with the photo and what the corresponding effects should look like.

Depth Aware Stickers

This feature allows users to add objects to their photos realistically. Both the placed objects and the background photo contain depth data, so the user can easily create a photo montage that looks visually accurate while requiring minimal resizing and lighting correction.
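As a rough illustration of why so little manual resizing is needed, here is a sketch (my own, assuming simple pinhole-camera scaling; not the app's actual logic) of how a sticker's scale can be derived from depth:

def sticker_scale(sticker_depth_m, placement_depth_m):
    # Apparent size falls off roughly with 1/distance, so the scale
    # factor is simply the ratio of the two depths.
    return sticker_depth_m / placement_depth_m

# e.g. a sticker captured at 1 m, placed 2 m into the scene, is drawn
# at half size; occlusion can likewise be decided by comparing the
# sticker's depth against the background's per-pixel depth.
print(sticker_scale(1.0, 2.0))  # 0.5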

Usability Testing

My test plan consisted of two internal tests, at the pre-alpha and alpha builds.

Pre-alpha focused more on concept testing and visual understanding.
Alpha was geared more toward usability.

We had time between Alpha release and Beta release to iterate on design changes and improve interactions.

Two months later, an external test was conducted on the beta release.


