Artefact’s staff reflects on AI’s potential impact on individuals and society by answering questions prompted by the Tarot Cards of Tech. Each section contains videos that explore a tarot card and provide our perspectives and provocations.

The Tarot Cards of Tech was created to help innovators think deeply about scenarios around scale, disruption, usage, equity, and access. With the recent developments and democratization of AI, we revisited these cards to imagine a better tech future that accounts for unintended consequences and the values we hold as a society.

Cultural implications for youth

Jeff Turkelson, Senior Strategy Director

Transcript: I love that this card starts to get at some of the less quantifiable, but still really important, facets of life. So, when it comes to something like self-driving cars, which generative AI is actually helping to enable, of course people think about how AI can replace the professional driver, or how AI is generally coming for all of our jobs.

But there are so many other interesting implications. For example, if you no longer need to drive your car, would you ever need to get a license to drive? And if we do away with needing a license to drive, then what does that mean for that moment in time when you turn 16 years old and get newfound independence with your driver’s license? If that disappears, it could really change what it means to become a teenager and a young adult. So what other events or rituals would AI disrupt for young adults as they grow older?

Value- and vision-led design

Piyali Sircar, Lead Researcher

Transcript: This invitation to think about the impact of incorporating gen AI into our products is really an opportunity to think about design differently. We should be asking ourselves, “What is our vision for the futures we could build?” And once we define those, the next question is, “Does gen AI have a role to play in enabling these futures?” Because the answer may be “no,” and that should be okay if we’re truly invested in our vision. And if the answer is “yes,” then we need to try to anticipate the cultural implications of introducing gen AI into our domain space. For example: “How will this shift the way people spend time? How will it change the way they interact with one another? What do they care about? What does this product say about society as a whole?” Just a few questions to think about.

Introducing positive friction

Chad Hall, Senior Design Director

Transcript: The ‘Big Bad Wolf’ card reminds me to consider not only which AI product features are vulnerable to manipulation, but also who the bad actors might be. A bad actor could be a user, but it could also be us, our teams, or even future teams. So, for example, while your product might not misuse data now, a future feature could exploit it.

A recent example that comes to mind is two students who added facial recognition software to AI glasses with a built-in camera. They were able to easily dox just about anyone they came across in their daily lives.

I think product teams need to introduce just enough positive friction into their workflows to pause and consider impacts. Generative AI is only going to ask for more access to our personal data to help with more complex tasks. So the reality is, if nobody takes the time to ask these questions, they are never going to get asked.

Minimizing harm in AI

Neeti Sanyal, VP Creative

Transcript: I think it’s important to ask whether AI itself could be a bad actor. Even when you’re not trying to produce misinformation with generative AI, in some ways it is inherently doing that. I am concerned about the potential for generative AI to cause harm in fields with low tolerance for risk, such as health care or finance. An example that comes to mind is a conversational bot that gives the wrong mental health advice to someone who is experiencing a moment of crisis.

One exciting way that companies are addressing this is by building a tech stack that uses both generative and traditional AI. It’s the combination of these techniques that helps minimize the chance of hallucinations and can create outputs that are much more predictable.

If we are thoughtful in how the AI is constructed in the first place, we can help prevent AI from being the bad actor.