Fittingly, the CEO of a startup that hopes to take over the AI world appears on screen wearing what looks like a robotic limb.
“I’ve got this piece of metal in me now,” Dmitry Shapiro says of the high-tech brace. “It’s kind of cool, like a cyborg arm.” He ruptured his biceps tendon while helping his family evacuate during floods in San Diego, which also sounds kind of cool. Then Shapiro, 54, admits the cause: picking up a box in the garage. As he and his wife discovered while researching the injury afterwards, a biceps tendon rupture is most commonly suffered by … middle-aged men picking up boxes.
The problem and the solution were both duller – and more widespread and more helpful, respectively – than they seemed at first blush. Which makes the not-so-cyborg arm an even more fitting visual for Shapiro’s startup, and for where the AI industry is going next.
Shapiro, a tech veteran who previously ran machine learning teams at Google, co-founded his startup YouAI and launched its product MindStudio last year. MindStudio lets managers build apps on top of any or all of the major AI services, like OpenAI’s GPT-4 Turbo or Google’s Gemini, with fresh data sprinkled in from ordinary non-AI databases or documents. These AI apps can be assembled in minutes, visually, like a flow chart, without the user touching a line of code — sitting on top of the AI models the way Windows sat on top of DOS, Shapiro notes. (Left unsaid: putting Windows atop DOS helped make Microsoft the most valuable company in the world by the end of the 1990s.)
Less than a year in, MindStudio can boast more than 18,000 user-created AI apps (putting it ahead of Dante AI, a similar brew-your-own chatbot service with 6,000 user-created apps). That growth has come entirely from word of mouth. Again, the 18,000 figure sounds awesome and a little terrifying — like a flurry of drones — until Shapiro offers an example that drives home the mundanity of those apps. A friend of his exited the tech world, bought a pool-cleaning business, and used MindStudio to build a “pool-cleaning copilot” in a matter of minutes. Now his employees test the pH of pools and the app tells them what to do with the result.
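What does a micro-app like that actually amount to under the hood? MindStudio hides the plumbing behind its visual flow chart, but here is a loose, entirely hypothetical sketch of the pool example as plain code, assuming OpenAI’s standard Python client and its GPT-4 Turbo model; the prompt wording and function name are invented for illustration and are not MindStudio’s actual implementation.

```python
# Hypothetical sketch of what a tiny "pool-cleaning copilot" boils down to:
# a fixed instruction, one piece of field data (a pH reading), and one LLM call.
# Not MindStudio's implementation; model choice and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def pool_copilot(ph_reading: float) -> str:
    """Ask the model what a technician should do about a given pH reading."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a pool-maintenance assistant. Given a pH reading, "
                    "tell a field technician what chemical adjustment to make, "
                    "in two sentences or fewer."
                ),
            },
            {"role": "user", "content": f"The pool's pH reading is {ph_reading}."},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(pool_copilot(7.9))  # prints the model's suggested adjustment
```

The point is how little there is to it: a canned instruction, one piece of field data, and one model call, the kind of wrapper a non-programmer can now assemble visually in minutes.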
“The things you can do with this now would have been seen as science fiction last year,” Shapiro says, suggesting he needs to read more interesting science fiction. Still, his point stands. As it comes into focus, this technology looks less like the Terminator and more like giving robot arms to people lifting boxes.
Jobs apocalypse postponed
AI is getting smaller and more boring, in short — and that’s a good thing for workers, bosses, and the economy at large.
When ChatGPT launched in November 2022, it looked as cool and terrifying as a cyborg arm. To some, it seemed like a Terminator that would not stop until it had eliminated all our jobs. It’s felt like the rise of the machines out there ever since, especially once large language models merged with AI art and video tools, creating so-called multimodal AI like Gemini.
But in 2024, our infatuation and our fear are both wearing off. We’re savvier about the ways in which these machines give themselves injuries. They hallucinate facts and figures, they may be getting dumber and lazier, they’re prone to strange meltdowns, and letting them roam the internet without supervision can cause major legal headaches for corporations — as we saw in a first-of-its-kind case in February, when Air Canada was found liable for fare reductions its AI chatbot had made up out of thin air.
“Now a lot of companies are like, ‘wait a minute, we’re going to be held legally responsible for the bullshit this AI is spitting out,'” says Christopher Noessel, an author and design principal with a focus on applied AI at IBM. That, for the companies’ own sake, is the correct response, Noessel adds: “I’m really glad to have that fear out in the world.”
Not to mention AI art, which may bring its own legal headaches while becoming both boringly normal (on social media, at least) and shocking in all the wrong ways. Most recently, Google hit pause on Gemini’s image generator after it produced images depicting people of color as Nazi soldiers, reportedly the result of an overcorrection meant to counteract the racial stereotypes often seen in generative AI.
“Generative AI is here to stay, but it won’t live up to the kind of hype we saw last year,” says Gary Marcus, a cognitive scientist and futurist who wrote about the potential for AI hallucination problems as early as 2001. “People will find uses for it, but reliability will remain an issue for a long time, and so you will see a lot of big companies be reluctant to use it broadly.”
Think small
So, what can the corporate world count on AI to do? Simple: it can sweat the small stuff. Keep it in the cage of a small app with defined parameters, and AI really can be the tool for that thing you’d rather not do yourself. Mundane tasks, data crunching, and minor training can be automated in ways that may save just a few employee-hours every week, but that adds up to big efficiencies.
A 2024 report by IT firm Cognizant — one with a much more sober outlook than the flurry of reports from analysts in the AI hype year of 2023 — suggests AI applications could inject $1 trillion a year into the U.S. economy by 2032. Cognizant cautions that the number depends entirely on the “very human decisions” made by managers. Disasters are possible if those managers downsize based on the promise of AI replacing jobs. There is no one-size-fits-all solution. “Answering the many questions raised by generative AI will require time, experimentation and intellectual honesty,” the report concludes.
One thing that is becoming clearer at the managerial level: the full-on jobs apocalypse we feared isn’t happening just yet, in large part because of the need to stand by the machine and check its output. Platforms are becoming more realistic about the need to monitor and tweak AI apps in case of self-injury. MindStudio, for example, has a prominent tab in its app-building process that deals with testing and bug fixing.
These small apps are augmentations, not replacements. You’re unlikely to lay off your HR department just because AI can help them create training modules; there are plenty of good business and legal reasons to keep humans running Human Resources. Or, to take the example of the pool-cleaning company, you’re hardly going to replace in-the-field technicians with robots. You’re just making them smarter. Asked about job categories that AI might wipe out in the near future, Gary Marcus offers just one: “Voiceover actors are in trouble.”
White-collar workers can breathe a little easier, then — and their work lives might become more interesting if AI is taking over the boring stuff. “Across the board, what we find are people taking mundane things that humans have been doing and automating those and making those more intelligent, so that humans can get their time back,” Shapiro says. “A lot of people are doing tasks that they don’t need to be doing that are better automated for everyone’s sake. We need to take people and move them up. The good news is that retraining” — with AI involved, that is — “becomes easier and easier. It’s the rising tide that lifts all ships.”
AI-augmented humans: What could possibly go wrong?
Of course, anyone living in our climate-changed world has reason to be wary of any metaphors based on rising tides. What unexpected consequences might result from all this AI augmentation? Might these thousands upon thousands of small, boring AI apps have a negative effect on the workplace, or the world, in the long run?
Well, one key question is what happens if we forget how to do all the small, boring tasks that AI takes over. This is an old fear, going back at least to E.M. Forster’s famous 1909 story “The Machine Stops,” in which human civilization collapses when the vast Machine that has been taught to run everything breaks down. Here in the 21st century, we’re more likely to recall the helpless, baby-like humans from the Pixar movie WALL-E: same concept.
You may not think such an outlandish concept applies to your workplace, but just think of the institutional knowledge that gets lost with every departing employee. We’re forgetting how to do key work tasks all the time — and with small AI doing the drudge work, we may not even feel like it matters.
Of course, over-reliance on technology is a problem we struggle with even without AI. Take the Post Office scandal now roiling the UK. Starting in 1999, Horizon, a very dull piece of accounting software made by Japanese tech firm Fujitsu, falsely reported revenue shortfalls at local branch offices. More than a thousand sub-postmasters were prosecuted for theft. Many went bankrupt trying to make up the difference out of their own pockets. Lives were ruined because the machine was trusted implicitly over the word of humans.
But a product that’s inherently untrustworthy to begin with? That’s a different story. With AI now widely known to spit out hallucinations, and with bug-fixing front and center even in user-friendly platforms like MindStudio, AI’s problems might turn white-collar workers into mechanics who are constantly tinkering in their garages. That is, they’ll be more wary and more aware of how their company’s tech infrastructure works than they were in the pre-AI era, not lulled into complacency.
Still, a focus on tweaking AI apps could lead us into “a dark future of being a babysitter for machines,” says IBM AI expert Noessel. “That’s not a future I want for my kids.” A similar dark future is suggested in Blood in the Machine by Brian Merchant, a book that connects Silicon Valley’s AI obsession to the Luddite struggle against mill owners in the 19th century. Those old-school entrepreneurs used weaving technology to replace workers, only to discover they still had to hire people to stop the machines from making mistakes.
As for the smarter, more creative tasks that all these boring AI bots are going to free us up to do? Well, to use the terms popularized by psychologist Daniel Kahneman in his 2011 bestseller Thinking, Fast and Slow, what we’re talking about is more workers transitioning from system 1 thinking (fast, automatic responses) to system 2 thinking (slow, logical, plodding and, not to put too fine a point on it, really hard for many of us). But there may be a significant number of workers who can’t make that transition.
“What do you do with system 1 people if all the jobs are going to be system 2 heavy because of AI?” says Noessel, who recommends that we “pad the system” with simple rote tasks for humans that will help provide a “brain break” from all that slow, deep thinking.
It may not come with a piece of metal to help our brains heal, but squeezing more dull, routine checks into our MindStudio app flows may actually be the mental equivalent of a cyborg arm. The future is getting boring, and that’s a good thing.