How AI Reflects Our Broken Humanity
A reflection of our humanity in our technology, disability advocacy, equitable design, and my favorite keyboard
AI has taken the tech industry and media by storm in just the past few months. From inequity baked into the products to AI ethicists asking the FTC to pause OpenAI, issues are beginning to bubble up.
What worries me is that many of the foundational design principles of AI highlight aspects of our humanity that are problematic. How we are building, deploying, and interacting with these products needs to be fundamentally re-examined to avoid dire societal and ecological consequences down the line.
Our obsession with control & servitude
AI products are built to serve whenever, wherever, and whatever we desire — all designed around the paradigm of command and response.
But where have we seen this model before?
From colonial slavery to modern-day indentured laborers, our capitalist societies depend on workers taking orders with no qualms (and very little pay). Plainly, we suck at creating healthy, sustainable, symbiotic work models; just look at most companies' toxic top-down corporate structures today. (Love referring back to this clip of Anthony Bourdain calling out indentured servitude)
And the way we anthropomorphize these digital products recreates the societal hierarchies and biases we claim to be breaking away from. From AI assistants defaulting to women's voices and personas to recreations of the "butler" archetype, our "innovation" becomes a reenactment of past colonial structures.
If we want to shift our societies toward a new future (one that is more equitable for all), we must break away from these inequitable historical ways of working and move toward a collaborative and symbiotic ecosystem with our technologies and communities.
Our obsession with extraction
Much of the marketing around AI today is based on lowering output costs to pennies by eliminating human labor, but the allure of decreased overhead masks its impact on our future.
Roughly 6% of the global workforce is unemployed, and nearly half the world lives in poverty. AI is projected to displace another 300 million jobs around the world.
Don’t we still have a collective responsibility to employ and financially empower those around us? If our technology isn’t solving joblessness and financial insecurity, what is it solving? What do we expect our displaced communities to do?
Additionally, there are environmental concerns with running AI at mass scale. One report estimates that training a single AI system emits over 250,000 pounds of carbon dioxide; another estimates that the AI industry produces carbon dioxide emissions at levels comparable to those of the aviation industry.
The foundation of these platforms should be rooted in sustainable architecture for both societal and ecological balance. We must also set guardrails against cancerous hypergrowth that leaves us unable to undo the harms we perpetuate. Our future is not an experiment.
Our obsession with deception
A common litmus test for generative AI is often “Can you tell if this is AI-generated?”
Ironically, these platforms depend entirely on troves of human-generated content to “deceive” us. So how can they ever be said to deceive other humans when they are ultimately built from the labor of others?
The scraped data informing these generative AI platforms was harvested without the creators’ consent. These platforms are now facing copyright infringement lawsuits for monetizing the work of others without permission or payment.
While we should clearly label generative AI content for ethical reasons, it can also become a new vertical of content that lives side by side with human-created content. When we set aside this desire to deceive one another, we can open ourselves to a reality in which both types of content exist ethically, consensually, and symbiotically.
When so much of our community’s data is the underpinning of these platforms, perhaps it should become a common good: accessible, modifiable, and removable by all.
What are your thoughts about these new technologies?
What I’m up to
Recently gave a talk at Creative South on equitable design processes and rest. Loved meeting amazing designers from all around the country!
For our community
For QTBIPOC: Tue, Apr 11 - UX Nights: Design Justice
For queer Asians: Thu, Apr 13 - Yellow Glitter Sparkles Support Group
In NYC: May 21 - 22 - Asian Creative Festival: AAPI-centered talk series, Marketplace, and Film Screenings
Something to watch
Moved by this recent NYC Creative Mornings talk by disability rights advocate and writer Vanessa Kelly. A beautiful narrative about her life’s journey coming to, and embracing, her deafness.
Something to read
Like many other designers, I looked up to Don Norman, author of The Design of Everyday Things, as I was coming up in my design career. (We had even integrated his book into the initial curriculum for our QTBIPOC Design UX Bootcamp, though no longer.) Over the past few years, he has shown his true colors and struggled to adapt to a changing world, especially its diverse audiences and designers.
Highly recommend reading this thought-provoking piece in Fast Company, “The Problem With Don Norman.”
Something to try
I recently bought and have fallen in love with my Keychron mechanical keyboard. It makes typing more comfortable, and the “clicky” sounds have been so satisfying.
P.S. I got the brown switches (not too loud, not too little resistance)
P.P.S. Combine with a fun typing game to improve your typing!
As always, thanks for reading!
P.S. If you enjoyed this, share or sign up here: mindfulmoments.substack.com
Anything else? You can always hit "reply" to email me directly. 💌
Have a beautiful day!
Follow: Twitter | LinkedIn | Instagram | YouTube | Facebook | TikTok
Collaborate: Stranger Creative | StevenWakabayashi.com
Support: QTBIPOC Design
Listen: Yellow Glitter Podcast on Apple Podcast | Spotify | Google Podcast