Prime time television viewers have undoubtedly noticed the slew of recent commercials featuring IBM’s Watson computing platform in conversation with celebrities such as Bob Dylan, Carrie Fisher, Serena Williams, and Stephen King. These ads showcase continuing advances in Watson’s speech capabilities and intelligence applied to various disciplines, which were initially exhibited in Watson’s championship performance on the Jeopardy! game show back in 2011.
The public and much of the press tend to think of such computing capabilities as “artificial intelligence” (a.k.a. AI), although that term can bring with it connotations of technology run amok, à la HAL 9000 in the film 2001: A Space Odyssey, The Terminator’s Skynet, and many other popular depictions. Outside the realm of fiction, technology business leader Elon Musk has tweeted that AI is “potentially more dangerous than nukes,” and physicist Stephen Hawking warned that the “development of full artificial intelligence could spell the end of the human race.”
IBM generally avoids describing Watson as artificial intelligence, instead using the term “cognitive computing.” The difference between the two terms (which we’ll get to in a moment) became most evident at IBM’s first-ever Watson Analyst Day, held on May 23, 2016, at the company’s brand-new Watson Health building in the Kendall Square area of Cambridge, Massachusetts, a major hub of the biotechnology industry. The facility is now headquarters of IBM’s division devoted to analytics and other computing services for health care.
Clearly, IBM is investing heavily in health care as a market in which there is considerable room to optimize both patient outcomes and financial efficiencies. The former point was demonstrated in an area called the Watson Health Immersion Room, featuring a panoramic display wall and a presentation about a doctor and her cancer patient. It showed how Watson could help the doctor evaluate treatments such as chemotherapy and surgery options, factoring in the specific type of cancer along with the patient’s age, gender, and other demographic information. Watson analyzed its extensive database of medical records and journal articles to present the doctor with the merits of various treatment combinations, ranked by metrics such as effectiveness, strength of supporting research evidence, speed of treatment, and convenience for the patient. Notably, cost was not one of the factors displayed in the demo, although we believe a real-world implementation would need to show the doctor the relative costs of treatments under either service-based (traditional) or outcome-based health insurance.
Now, for the difference between artificial intelligence and cognitive computing: an artificial intelligence system would have told the doctor which course of action to take based on its analysis, whereas a cognitive computing system provides information to help the doctor decide. In many cases, a doctor will indeed make the same choice that Watson suggests, but the exceptions are where human experience and judgment become most important.
This distinction between artificial intelligence and cognitive computing, and Watson’s role in helping humans make decisions, was emphasized repeatedly in presentations throughout the day. As Rob High, IBM’s CTO for Watson, put it, “What it’s really about is involvement of a human in the loop,” and he described Watson as “augmented intelligence” rather than artificial intelligence.
Similarly, Michael Karasick, VP of Cognitive Computing for IBM Research, said the objective of Watson analytics was to do the “heavy lifting” for people. Along those lines, Steve Tolle, Chief Strategy Officer & President of iConnect Network Services at Merge Healthcare (acquired by IBM in 2015), described how radiologists are highly susceptible to “burnout” due to the daunting numbers of images they must examine every day, looking for abnormalities as small as several pixels on a five-megapixel monitor. Tolle then described a Watson application under development—code-named Avicenna—that could help them identify the most relevant images among numerous CT and MRI scans.
In addition to specific applications, Andrea Morgan-Vandome, IBM’s Vice President of Watson Offering Management, said Watson content and data will be available as “insights-as-a-service” as well as “platforms-as-a-service,” the latter being lower-level developer tools. Analysts at the event were also briefed in advance of IBM’s announcement (now public) that it would make Watson APIs available through the Twilio developer marketplace, in addition to IBM’s Bluemix platform, where Watson services have already been available. Twilio will offer IBM Watson Message Sentiment and Message Insight services to analyze consumer SMS text messages, and IBM will later offer Watson’s speech-to-text services as an API on Twilio.
Twilio has more than one million registered developers, and we expect the collaboration will lead to some interesting and perhaps unusual applications of Watson.
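To make the Twilio integration more concrete, here is a minimal sketch of how a developer might consume such a service. Twilio marketplace add-ons deliver their results to an application in the webhook request’s `AddOns` JSON parameter; however, the add-on key (`ibm_watson_message_sentiment`) and the result fields shown below are illustrative assumptions on our part, not IBM’s published schema.

```python
import json


def extract_sentiment(add_ons_json):
    """Pull a sentiment label out of a Twilio webhook's AddOns payload.

    The add-on key and result field names below are hypothetical;
    consult the marketplace listing for the actual schema.
    """
    payload = json.loads(add_ons_json)
    if payload.get("status") != "successful":
        return None  # add-on failed or produced no result
    result = payload.get("results", {}).get("ibm_watson_message_sentiment", {})
    return result.get("result", {}).get("sentiment")


# Sample payload, fabricated purely for illustration.
sample = json.dumps({
    "status": "successful",
    "results": {
        "ibm_watson_message_sentiment": {
            "result": {"sentiment": "positive", "confidence": 0.91}
        }
    },
})

print(extract_sentiment(sample))  # prints "positive"
```

A real handler would receive this payload alongside the SMS body in the webhook POST and could, for example, route angry messages to a human agent while letting routine ones proceed automatically.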
And speaking of unusual applications, earlier this year an independent group calling itself the Watson 2016 Foundation (not affiliated with IBM) called for Watson to run for President of the United States. (We note that doing so would bring a whole new meaning to the term “political platform.”) In response to the group’s suggestion that Watson participate in a political debate, an IBM spokesperson told Newsweek, “IBM’s Watson is not running for president, though we’re humbled by the suggestion. Today, Watson is focused on other important work like helping doctors improve healthcare and teachers improve education—so we will have to decline your kind offer to debate.”
Given the significance of U.S. Presidential elections, however, we at VDC have a slightly different suggestion that might be even more intriguing: that Watson moderate one of the Presidential debates. If the cognitive computing platform isn’t up to the task today, IBM has plenty of time to prepare it for 2020. Check out VDC's 2017 IoT & Embedded Technology Predictions report to learn more.