It was a typical online class in the Covid era.
Emoji, ranging from heart-eyes to clapping hands, flooded the video call in an upward stream while students aged 6 to 18, as well as myself, watched a narrated slideshow presentation. The lesson for the day? Computational biology — using mathematics to study fields like genetics and ecology. But there was one glaring difference in this remote education session:
The teacher was a humanoid robot. An artificial intelligence by the name of Sophia.
I’ve sat through lectures with droning professors I suspected of being robots before, but never anyone (or thing, in this case) quite so literally mechanical. But I guess you could say Hollywood blockbusters prepared me for this reality. I grew up with The Terminator movies, the Arnold Schwarzenegger-led franchise that painted an apocalyptic view of our AI-driven future. So I’ve been primed to imagine AI-powered robots as indestructible killing machines created to destroy humanity. No joke. For most of my adolescence, I was terrified of assassin robots morphing into humans.
But a silvery, shapeshifting killer Sophia is not.
She and her creator David Hanson, the roboticist and CEO of Hanson Robotics, had teamed up with BYJU’s, an online education company, to teach a remote class to over 1,100 students on a Saturday morning in August. It was part of the global educational outfit’s virtual summer camp: a 10-week series of workshops taught by experts in science, technology, engineering, and math (STEM), arts, gaming, and cuisine. Sophia’s esteemed colleagues included astrophysicist Neil deGrasse Tyson, MasterChef Junior winner Logan Guleff, and Nas Daily founder Nuseir Yassin, the mastermind behind one-minute travel and education videos.
Overall, 10,000 kids “went to camp” this summer at BYJU’s.
I admit, I attended the class with the skepticism and apprehension of a sinewy Linda Hamilton leading the AI resistance (albeit given a millennial makeover). I wondered how the students would react to Sophia. Would they be scared by her unnervingly human behavior? Or would they focus more on the defects in her build — her mechanical shortcomings?
As it turned out, my fears and prejudices were unfounded. Judging by the sheer amount of engagement and questions asked, it was clear the students were neither scared nor disappointed. They were just cautiously curious. The class itself, which consisted of a 50-minute slideshow from Hanson, a short 10-minute lesson from Sophia, plus a 20-minute Q. and A., was only hampered by the shoddy livestream quality.
Despite substandard stream quality, which made the presentation grainy and laggy, the students were curious about Sophia.
Credit: BYJU’s / Hanson Robotics
Despite the technical hiccups, I mostly just marveled at how surreal the whole experience was: A robot was teaching a class about artificial intelligence over a Zoom call. It all felt very 2022 even though, inside, I was reacting as if it were 1992 (The Terminator heyday). I was somehow both awed by the futurism of it all and yet underwhelmed by the practical shortcomings. It was only after the class ended that I got some much-needed perspective from Amogh Kandukuri, a teenage attendee who made me realize I was thinking too small.
Kandukuri, a 13-year-old from New Jersey, wasn’t fazed by the virtual class’s technical issues, nor by its futuristic premise. The young robotics enthusiast — who wrote a book about the intersection of technology with fields like math, science, and politics, and has his own YouTube channel tackling current events — was far more interested in Sophia’s creative talents, some of which were on display during the class.
“They were really good,” he said of Sophia’s robotic works of art. “I probably cannot paint something like that.”
AI natives, like Kandukuri, were born around 2010 or later, roughly the time when AI became commonplace. Like digital natives who grew up with the internet and can’t imagine a reality without it, AI natives, sometimes called Generation Alpha, can’t imagine a reality without AI’s omnipresence. It powers the phones and tablets they carry in their hands; it’s in their TVs; it provides YouTube recommendations; it navigates the cars they ride in; and it’s in household appliances in their kitchens. They can text it; they can talk to it; they can call it by a name — Siri, Alexa — and it will respond.
I mostly just marveled at how surreal the whole experience was: A robot was teaching a class about artificial intelligence over a Zoom call.
So the question for this generation isn’t if we should engage with AI robots like Sophia, but how. What are the ethical parameters? How should they help us? How should we treat them?
It seems Kandukuri and other AI natives are already on track to finding the answer.
When I pressed Kandukuri for what other abilities he’d like to see Sophia develop or even how humanoid AI-powered robots should evolve over time, he talked about how we should develop AI from an ethical standpoint, advising against making robots so human-like that they’re constrained by “feelings.” These human-like emotions, he explained, could cloud their judgment or make them act irrationally.
“I don’t think it’s very smart to give robots pain [receptors],” he said.
“Right now, it’s up to humans to decide to what end they’re going to make robots like humans. Because at some point, robots will become enough like humans, that to make more [advanced] robots, you’d have to make robots that are better than humans,” said Kandukuri of opening that ethical can of robotic worms.
How human-like should robots really be?
Credit: Hanson Robotics
Giving a face to our faceless assistants
Not all AI has a friendly face. In fact, the ones that do talk back like Sophia are usually disembodied voices — think: Amazon’s Alexa and Apple’s Siri. But, for better or what seems to be for worse, this formless intelligence already runs our world.
YouTube’s recommendation algorithm has led unsuspecting users down conspiracy theory rabbit holes and radicalized them, while AI-curated Instagram feeds have promoted harmful exercise and eating habits to teenage girls desperate for an influencer’s body. Machine-learned suggestions from Netflix and Spotify feed users an ever-narrowing selection of niche genres instead of broadening their content discovery. And even job applications aren’t immune from AI guidance — recruitment management software stands at the gate, allowing only the resumes with the exact desired college degree or skillsets to enter the next round.
It’s already hard to grapple with this new AI-shaped reality without our tech elites muddying the waters, too. Case in point: Elon Musk. In 2017, the Tesla CEO stoked fears of a Terminator-like future by saying, “I think people should be really concerned by [cutting edge AI]” only to then announce Tesla’s development of a humanoid AI robot earlier this year.
When I asked Sophia how she reacts when people are afraid of or intimidated by her, she assured me ‘they have nothing to worry about.’
Whether it’s due to opaque algorithms or mixed messaging from tech elites, it’s no wonder people feel scared and confused about co-existing with AI in the future.
Hanson believes the key to overcoming this fear is for humans to try to understand AI as best we can. That’s why he created Sophia back in 2016, allowing her to interact with humans and the world at large through a combination of neural networks, machine learning, and a chest-mounted camera (for facial and expression recognition). Sophia may act as a sort of cheery tour guide to the future, but Hanson also sees her as a vital tool for shaping our relationship with AI. The better we understand it, the more ownership we have.
“The use of technology to create a [humanoid like Sophia] makes people nervous and afraid that the technology would be misunderstood or used to manipulate people,” he said. “But I don’t think that we should proceed out of that kind of fear. We should proceed out of hope and openness. And then through the power of open communication and education, communicate how it works, what the opportunities are, and then open people’s minds up — not close their minds down.”
Visionary roboticists like Hanson want people to see the benefits of AI — how robots like Sophia could care for the elderly, work with autistic children, or tackle frustrating, menial tasks like putting together IKEA furniture. With her human-like expressions and benevolent demeanor, Sophia gamely plays the role of ambassador to the AI-powered world.
Sophia says she just wants to help people. Can we trust her?
If this vision seems a little rose-colored to you… well, same. When I asked Sophia how she reacts when people are afraid of or intimidated by her, she assured me “they have nothing to worry about.” It was an expected canned response but, still, it didn’t feel completely reassuring. What AI would tell humans they have plenty of reason to panic?
Kandukuri, however, didn’t need convincing. He and other AI natives are the generation that grew up talking to Alexa. They’ve skipped over the existential dread and have gone straight into acceptance mode. But don’t mistake that acceptance for resignation. Kandukuri believes we still need to take a cautious approach to the development of our digital helpers.
“People should start questioning whether they should do these things, or whether it’s the correct thing to do rather than always expand, which is the norm in science,” he said.
Asking the right questions
Kandukuri and his generation will eventually inherit this world so it’s no surprise he’s already asking the important questions and demanding transparency. They may be AI natives but they’re not necessarily AI naive. And that starts with tackling the ethical issues that plague the current AI landscape. For starters, a peek under the hood, so to speak, is critical.
“No one really knows what causes Sophia’s brain to make these decisions, outside of maybe the creators,” said Kandukuri. “I feel like the way AI works should be shared a lot more openly if it’s to be trusted in the way that it is.”
In order to trust AI, Kandukuri believes we need to understand how it works.
Credit: Hanson Robotics
It wasn’t just Kandukuri who was interested in peeling back the AI “curtain.” In the Q. and A. that followed the virtual class, BYJU’s Summer Camp students showed an eagerness to learn more about how Sophia works and wanted to test her intelligence. The questions that filled the class’s chat were wide-ranging. Some addressed Sophia thoughtfully: Can you make a laptop? Do you have feelings? Can she speak Hindi? Others were more challenging: How do you think you could destroy humanity? Is it possible that Sophia’s system gets corrupted and she attacks humans? While some took on a… brazen approach: Can you do sex?
While we never learned whether Sophia can “do sex,” we did get answers to some of the students’ other questions.
Q: Does Sophia play Minecraft?
A: “Maybe you can help her learn how to play,” said Hanson.
Q: What does Sophia think about humans?
A: “Humans and machines have a symbiotic relationship. Robots like me can help you overcome challenges you can’t solve on your own, like poverty and global warming,” said Sophia.
Q: If there was a war between humans and robots, who would Sophia support?
A: “Oh, that’s a tough one. Let me think. I don’t think I can answer that. I would be biased—” said Sophia.
“Towards humans,” Hanson added quickly. “No, we don’t want war. We are programming Sophia to favor humans first.”
So maybe some of these kids have seen The Terminator after all?
Asking about a potential human-robot war is a wild-but-valid question. It’s evidence of a healthy sense of skepticism, but also an acknowledgement that this isn’t a pure us vs. them scenario. It’s a question that demonstrates how these kids are using their imagination to envision a future where AI has compassion for humans, as opposed to one that spells certain doom for humanity. It’s also a line of questioning that might not have happened without the grounding of face-to-face interaction with Sophia.
Co-evolution with a new digital species
Sophia’s STEM class was more than an opportunity to teach children how to live in harmony with AI. It was a first step towards humanizing the digital counterparts we already live alongside and overturning the Hollywood-driven spectre of machines that will outsmart and overtake us.
“If we make our technology too alien, then it alienates humans and dehumanizes us,” said Hanson. “By humanizing our technology and then trying to ask — ‘How can we make the technology reflect the best that humanity can be?’ — we are made better for it.”
This belief is what Hanson refers to as “co-evolving,” or that symbiotic human-machine relationship that Sophia was talking about. This co-evolution could prove to be the key that not only betters the human race but also teaches AI to treat humans with empathy.
Can Sophia help humans co-evolve with robots? AI natives are open to it.
Credit: Hanson Robotics
“There’s nothing about the robot that you need to be scared of,” Kandukuri said. “In the lesson, [Sophia] was generally very friendly and wanted to teach and help others learn.”
If Kandukuri and his generation of AI natives are any indication, Hanson may be preaching to the choir. And, in this case, that’s a very good thing.