Crowd as AI Proxy
ARAD is a concept company that would leverage crowd-sorted images of celebrities in order to generate branded augmented reality advertising. This company would work with celebrity estates to manage celebrity licensing and contracts.
This project was an exercise in matchmaking, an AI design process proposed by Elizabeth Churchill and Sara Bly.
Our overall goals for this process were to:
Create a system concept for AI
Use crowd-sourced data as a proxy for AI
Apply the matchmaking AI design technique
UX Designer • Facilitator
Techniques + Tools
Wireframing • Sketch • System Diagramming • Value Flows
Student teams design a system that uses a crowd of people as a proxy for an AI computing system. Each team must identify an opportunity area and then design a service, one that motivates a crowd to produce valuable data for the system. This project has three distinct stages:
Explore space: Teams consider many possible opportunities for services that could benefit from a crowd as an AI proxy.
Model preferred future: Teams develop and refine a value flow model that describes how all of the stakeholders within their ecology gain value. At this stage, it is critical to understand how the crowd is motivated to produce the required data in a time-sensitive way.
Refine interaction: Teams develop a set of wireframes that show the transactional flows for crowd participants who generate the data. This should include scenarios of use that describe a typical interaction with the system.
Creating new artificial intelligence products and services seems like a limitless work space. This limitlessness can be overwhelming. To begin this concept planning and appropriately scope this project, we researched potential crowd uses in other AI projects like VizWiz.
Then, using the matchmaking method developed by Sara Bly and Elizabeth Churchill, we narrowed these concepts down to identity-related concerns.
The best concept to emerge from this process centered on deepfake-style AI-generated videos. Though this technology is not yet fully feasible, the future privacy concerns are massive. We wanted to target this area at the intersection of likeness, ownership, ethics, and AI.
2.1 Use Cases
After researching deepfake technologies, we built an understanding of their use and detection. We then created use cases and scenarios where human identification would be valuable and would yield more accurate results than an AI alone.
These ranged from identity verification and forensic analysis to propaganda generation and athlete training monitoring.
Use Case 1: ARAD will collect user classifications of visual data through interfaces placed within ad-supported games.
Instead of ads, users will be offered the opportunity to do a series of quick classifications of images in exchange for reduced wait time until they can return to playing their game. By situating the labeling task within a tap-happy environment, we hope to incentivize user participation with the promise of quicker gratification and continued entertainment.
Users will be incentivized to produce high-quality classifications by being penalized for bad ones. If they tap through too quickly, or their results deviate significantly from the consensus of past classifications, users will be slowed down and have to wait through additional ad time.
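The penalty rule above can be sketched as a small quality-control check. This is an illustrative sketch only; the function names, thresholds, and data shapes are assumptions, not part of the original design.

```python
from collections import Counter

MIN_SECONDS_PER_TASK = 1.0   # assumed: faster taps look like spam
MAX_DEVIATION_RATE = 0.4     # assumed: allowed share of off-consensus answers


def consensus_label(past_answers):
    """Most common past classification for an image (None if no history)."""
    if not past_answers:
        return None
    return Counter(past_answers).most_common(1)[0][0]


def penalty_seconds(user_answers, history, base_penalty=15):
    """Extra ad time owed for a batch of answers.

    user_answers maps image_id -> (label, seconds_spent);
    history maps image_id -> list of past crowd labels.
    """
    deviations = 0
    scored = 0
    for image_id, (label, seconds) in user_answers.items():
        if seconds < MIN_SECONDS_PER_TASK:
            return base_penalty  # tapping through too quickly
        consensus = consensus_label(history.get(image_id, []))
        if consensus is not None:
            scored += 1
            if label != consensus:
                deviations += 1
    if scored and deviations / scored > MAX_DEVIATION_RATE:
        return base_penalty  # too far from past consensus
    return 0
```

A careful user who matches consensus pays no penalty; a spammer or outlier waits through extra ad time.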
Use Case 2: ARAD’S classification system is built around a gameplay experience.
Users are players of Celebrity Cold Case; as they progress through the episodic role-playing game, solving gossip mysteries and hypothetical crimes, they will complete investigation tasks that result in the labeling of image data for celebrity identity licensing. For instance, a task might be presented to a player in the form of a multiple-choice question: "Einstein was enraged to find out that he had been misrepresented in the tabloids as a homewrecker. Which photo is from when he found out?" From a set of images, users would select the photo that they deem to most represent negative emotions. These results would be added to the image's metadata file, describing it as depicting Albert Einstein in a state of distress. Through tasks like these, presented as gameplay, users would create a robust data set of labels about various still images that could then be used to build the augmented-reality likenesses that the brands have contracted for their advertisements.
These tasks could also be replicated in ads as described above, with the addition of information about the game. Now, the classification tasks are also advertisements for the classification platform, and through successful marketing, we would be able to bring more users into the classification space.
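The labeling flow described above, where many players' answers to one task are folded into an image's metadata, could be aggregated with a simple majority vote. The thresholds and metadata fields below are illustrative assumptions, not details from the original concept.

```python
from collections import Counter

MIN_VOTES = 5        # assumed: minimum answers before a label is trusted
MIN_AGREEMENT = 0.6  # assumed: required share of the vote


def aggregate_task(votes, subject, emotion):
    """Fold player answers for one task into a metadata record.

    votes is a list of image ids chosen by players. Returns
    (winning_image_id, metadata_dict) when consensus is strong
    enough, else None.
    """
    if len(votes) < MIN_VOTES:
        return None
    image_id, count = Counter(votes).most_common(1)[0]
    if count / len(votes) < MIN_AGREEMENT:
        return None  # players disagreed too much to trust the label
    return image_id, {"subject": subject, "emotion": emotion}
```

For the Einstein example, once enough players converge on one photo, that photo's metadata gains the "distress" label.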
2.3 Value Flow
To fully explore the potential of this technology and its application, we identified a wide range of values that this technology would provide. We then mapped these values to our use cases and considered the relationship between value and user motivation.
Value Flow V1
This process informed our value flows. In early attempts (below), we were interested in a scenario narrowly focused on identity verification. In this scenario, crowd workers would analyze images and video of celebrities, politicians, influencers, or anyone seeking identity protection. We imagined this kind of audio-visual security operating as a verification system more robust than, but functionally similar to, Twitter's blue checkmark. This verification builds personal brand and follower trust, both of which would ensure the continued need for our system and secure a channel for authenticated user data.
Value Flow V2
In the second iteration of the value flow, we pivoted to focus on the crowd. We returned to our potential use cases and values to guide us toward a crowd-first value model.
The initial verification value was dependent on social buy-in, a highly volatile gamble. Instead, we focused on a specific kind of celebrity: ones who are no longer active. Whether deceased, grown up, or retired, there are many celebrities whose likenesses are solely managed by estate executors. These likenesses are sometimes used by major brands to instill a sense of nostalgia and to tap into new markets. Here, we identified a specific need that ARAD could solve. In this next iteration, we mapped what content production would look like for advertisers, brands, or media companies.
Value Flow V3 — Final
In our final concept, we propose a narrowed scope for our celebrity identity licensing business. We know the technology may not always be able to produce a high-quality deepfake, so we looked at areas where a less-than-perfect likeness would matter less. We centered on public personas who are no longer with us: whether dead or aged out of fame, figures like Prince, Freddie Mercury, Elvis, Judy Garland, and Hannah Montana-era Miley Cyrus hold great value for advertisers. ARAD would work with the estates of these celebrities to create likenesses for augmented reality advertisements on Snapchat, Instagram, and elsewhere, providing brands with nostalgia value, generating revenue for estates, and carving out a market for our own endeavor.
These wireframes depict how a user would experience a crowd-work classification task that looks like an in-game advertisement. Users would have the option to download the full game, which would have complete storylines for the user to explore and help in media classification.
3.2 System Diagram
This diagram depicts how ARAD envisions its system working. Setting aside the value exchange, it expresses how mutual trust between celebrity estates and growing brands is mediated by ARAD's services.
ARAD would use the crowd as a proxy until enough training data is gathered for a Bayesian classification algorithm.
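A minimal sketch of that hand-off, assuming a simple naive Bayes classifier over discrete image features: once the crowd has produced enough labeled examples, a model trained on those labels could take over routine classifications. The class and feature names here are illustrative assumptions, not part of the original design.

```python
import math
from collections import defaultdict


class NaiveBayes:
    """Tiny naive Bayes classifier over discrete feature dicts."""

    def __init__(self):
        self.class_counts = defaultdict(int)
        self.feature_counts = defaultdict(lambda: defaultdict(int))

    def train(self, examples):
        """examples: list of (feature_dict, crowd_label) pairs."""
        for features, label in examples:
            self.class_counts[label] += 1
            for name, value in features.items():
                self.feature_counts[label][(name, value)] += 1

    def predict(self, features):
        """Return the label with the highest posterior log-probability."""
        total = sum(self.class_counts.values())
        best, best_score = None, float("-inf")
        for label, count in self.class_counts.items():
            # log prior + log likelihoods with add-one smoothing
            score = math.log(count / total)
            for name, value in features.items():
                seen = self.feature_counts[label][(name, value)]
                score += math.log((seen + 1) / (count + 2))
            if score > best_score:
                best, best_score = label, score
        return best
```

Once the classifier's predictions reliably match crowd consensus, crowd tasks could shift to only the images the model is uncertain about.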
We've learned a lot going through this design project. We benefited greatly from starting early with a specific domain, deepfakes, which helped us begin understanding the existing technology, identifying where a crowd could augment its capabilities, and matchmaking our concepts to various activities. Looking at the work of our peers, it's clear that having a jumping-off point is critical to constant forward motion, even if our end result is somewhat far from where we started. With each critique and iteration of our concept, we narrowed our scope. We started off with big expectations of what we might be able to accomplish given the current state of technology and increasingly convincing deepfakes. While we haven't seen the broader implications of what this form might do, it's an interesting area to explore, and we have been extremely grateful to work together as a team through this iterative design process.
Through the ad interface to the left, users would label image data such as lip shape for vowels, emotion, and face angle in order to build better AR likenesses.
AI Ethics Research
As a graduate research assistant, I have the pleasure of working with Dr. Molly Steenson. This research explores design's role as an ethical question in the conversation around artificial intelligence.
My research this semester has focused on analyzing toolkits, statements, and frameworks published by industry leaders. Together with another research student, I've been conducting a qualitative comparison using sentiment analysis and word sorting.