
Does “Fake It Till You Make It” Work in AI?

This post was featured in our Cognilytica Newsletter, with additional details.

Even though artificial intelligence (AI) and cognitive technologies have been around even longer than the electronic computer itself, AI seems stuck in a perpetual trap in which fantasy never quite meets reality. People’s hopes and visions for what AI can be soon collide with the reality of just how hard it is to create any sort of artificial intelligence in machine form. At some point these contrasting perspectives of AI clash, and the end result is the decline of interest and investment known as the AI winters, something we’ve often written and spoken about.

Try as we might to resist the temptation to promise too much of what AI technology can do, it seems inevitable that companies, investors, enterprises, and others end up pinning too much on AI capabilities. To this point, three big pieces of news came out in the past weeks that highlight growing frustration on the part of media and enterprises with what some companies are promising their AI systems can do. In all of these cases, companies are promising one thing when it comes to AI capabilities and either delivering less than what was promised or, even worse, using humans to pretend to be computers so as to deliver on the vendor overpromises.

In startup circles, the idea of selling the big vision but delivering things differently (or not at all) is known as “fake it till you make it”. Many recent startup successes jumpstarted their companies by doing something other than the way they sold it. For example, Reddit famously used many fake accounts to post on the site, pretending that organic activity was happening there. Zappos pretended it had inventory by taking photos of shoes at Foot Locker and simply reselling those shoes. Airbnb reposted its listings on Craigslist to grow its network. Microsoft sold IBM an operating system it didn’t yet have, then rushed out to buy one from another company. In these cases, Fake it Till You Make It (FTYMI) seemed to work, and the fake parts were quickly replaced by real parts. But can FTYMI work in AI? Or is FTYMI part of the problem? Is too much fakery coming at the expense of solving the hard parts of AI? And if those fake parts aren’t replaced by real intelligent counterparts, is this all a farce? Will this lead us to the next AI winter?

AI Fakery in Natural Language Processing

A disturbing article published last week in The Guardian discussed how some notable technology vendors are using “pseudo-AI” to deliver on the promises of their so-called AI-enabled products. The article cites several examples of technology vendors using humans posing as machines to power their offerings: Edison Software used humans to look through personal email to provide its “smart replies” feature; the startups X.AI and Clara used humans, sourced through Amazon’s Mechanical Turk, to provide their supposedly AI-enabled smart scheduling features; and Expensify used humans to read receipts and invoices, a far cry from the completely automated solutions these companies professed. Even Facebook was using humans to power replies for its “M” Messenger virtual assistant.

It’s not clear how much of the current solutions from X.AI, Expensify, Edison, and others are now completely machine automated, and we doubt you will get a straight answer if you ask them directly. Yet these disclosures are highly troubling from a few perspectives. First, it’s troubling that these vendors don’t disclose upfront that your voicemails, emails, receipts, and calendars might be viewed by other people instead of machines. Second, they don’t acknowledge that the technology they are using to solve these problems is not up to the task of handling them completely. Even if the AI system can handle 80% or more of requests, the fact that humans are still needed to handle the drudgery of coordinating calendar invites, in an area as narrow as calendar scheduling, shows how far much of the AI promise is from reality.
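To make concrete what these “pseudo-AI” products imply, here is a minimal sketch of confidence-gated routing with a human fallback, written in Python. All of the names and the threshold here (handle_request, model_predict, human_review, CONFIDENCE_THRESHOLD) are hypothetical illustrations of ours, not any vendor’s actual code: the model answers the easy majority of requests, and anything below the threshold is quietly handed to a person.

```python
from dataclasses import dataclass

# Hypothetical threshold; a real system would tune this on held-out data.
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class Prediction:
    reply: str
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0

def model_predict(request: str) -> Prediction:
    """Stand-in for a real NLP model; returns a canned, low-confidence reply."""
    return Prediction(reply="Scheduled for Tuesday at 3pm.", confidence=0.55)

def human_review(request: str) -> str:
    """Stand-in for the undisclosed human-in-the-loop queue."""
    return f"[handled by a human] {request}"

def handle_request(request: str) -> str:
    """Route to the model when it is confident; otherwise fall back to a person."""
    prediction = model_predict(request)
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return prediction.reply
    return human_review(request)

if __name__ == "__main__":
    # The low-confidence stub above falls through to the human branch.
    print(handle_request("Can we move our call to next week?"))
```

The point of the sketch is the fallback branch: as long as it exists and fires, the product is not 100% machine automated, no matter how the marketing reads.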

In our opinion, these vendors need to dispense with the fakery. Stop faking it. If you need humans, disclose it. Identify which parts of your system are completely, 100% machine automated and which parts are not. Show the community how you’re working to address the parts that still need humans in the loop. And for goodness’ sake, don’t violate any privacy policies or cause security nightmares. Anything less does a disservice to your customers, your company, and the AI industry.

AI Fakery in Artificial General Intelligence

In another media piece released last week, CNBC did an exposé on the Sophia animatronic robot designed and developed by Hanson Robotics. In the video piece, CNBC discloses that it was not able to ask the questions it wanted and instead was presented with a scripted set of questions and answers. Sophia’s creators explain that the answers are pre-scripted and that much of what we see of Sophia’s media appearances are pre-coordinated events in which the Sophia device is used more as a performance prop than as an illustration of the current state of the art of AI. We’ve written and spoken numerous times about how we see Sophia as more harmful than helpful, leading people astray as to what the current capabilities of AI are. Some argue that Sophia is meant to be an exemplar of what an actually intelligent robot could be in the future, not an illustration of what is possible today. We suppose this is sort of like those concept vehicles you see at car shows that have wild designs but would never be possible in real life. We don’t buy it.

Enterprises that are buying and implementing this technology don’t want demonstration AI that doesn’t work in reality. Also last week, we saw news that a Swedish bank dumped its virtual assistant / chatbot because it couldn’t deliver on expectations. Is this what we want from our AI systems? Do we want to overpromise to make the sale, just to have customers cancel it and blame AI when it doesn’t work out? We can’t see how this is helpful for anyone.

Artificial General Intelligence (AGI) is hard. Very hard. Perhaps near impossible. AGI is like the space-race quest to colonize other planets and stars: maybe we will never really be able to live on Mars, but it’s the quest to get there that produces all sorts of other advancements we find valuable here on Earth. But why pretend we’re already heading to Mars? Why pretend that our software systems have virtual neocortex brains when they don’t? Why pretend our robots can actually understand what we are saying when they don’t? Why pre-program scripts and force humans to act like machines when questions are asked, instead of actually trying to solve the big problems in natural language processing and intelligence?

Is FTYMI Just Temporary Gap Filling or a Sign of Real Problems?

Why? Because it’s easier to fake. It’s harder to make. Yet we’ve been doing this now for over six decades. It’s time to stop faking it. We know what people want from AI. We’ve all seen the movies. We’ve played with the toy robots. We’ve read the fiction and the thought pieces. We know what fake robots are like. We know what fake intelligence is like. The last thing we need is companies selling the sizzle but delivering the fake. Eventually the mirage fails, because humans are humans.

There are two ways to look at this fakery. One is to chalk it up to companies trying to solve the hard problems of AI while facing the short-term realities of technology capability. Solving real problems takes real time and costs real money. Investors are willing to fund AI companies, but they aren’t willing to wait months, years, or even decades for these companies to really solve these problems. So what are well-funded AI companies to do? Should they hold back on selling anything until the almost-impossible is made possible, or should they fill the gaps with humans masquerading as machines? It’s clear which side of the fence technology companies fall on: FTYMI.

However, the other way to look at this is the harder truth: AI technology is simply not capable of delivering on the promises these companies are selling. If that is the case, then these temporary stopgaps aren’t stopgaps at all; they’re permanent. If computers can’t be made smart enough to process text in emails, handle receipt images, converse about schedules, or process messages, then no amount of pretending will get us there faster. This is the harsh reality. We need to solve these problems. Enterprises are spending their money and time buying into these solutions. They clearly want AI solutions that can solve these problems. They won’t put up with the fakery for long, and if enterprises stop putting up with fakery, there won’t be much opportunity for makery. That’s the biggest travesty: these fake solutions suck the energy, time, and money from the space and can deliver a major setback to AI adoption and growth in the long term.
