Applications for Watson’s DeepQA technology

This post points out a few sources that discuss Watson's potential, amid all the hubbub now out on the web about Watson, intelligence, and the future.

Natural language and unstructured sources of information

Now that the suspense about the Jeopardy! grand challenge is over, there is plenty of discussion about what the result portends.  The short (4-5 minute) promotional video from IBM called "Watson after Jeopardy!" is interesting in that it highlights how much useful information currently exists in "unstructured" form (i.e., in newspapers, documents, books, interviews, and the like, as opposed to being formatted to supply exactly the fields an algorithm requires).  It is the ability to pick up information from such sources, coupled with the many interacting algorithms Watson uses to generate hypotheses, gather evidence, and evaluate how that evidence bears on the confidence one ought to have in a hypothesis, that makes it a new kind of tool.  And that is how Watson's value is presented here:  as a tool for humans to use, albeit one that both appears and actually functions more like a human assistant than anything that currently exists.  That sounds right.

Here is the URL for the short IBM promotional video:

http://www-943.ibm.com/innovation/us/watson/what-is-watson/watson-after-jeopardy.html
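To make the hypothesis-and-evidence idea a bit more concrete, here is a minimal sketch in Python of a DeepQA-style loop: generate candidate answers, score scraps of textual evidence for each, and combine the scores into a confidence.  Everything here (the lexical-overlap scorer, the simple averaging, the tiny corpus) is my own illustrative assumption, not a description of IBM's actual design, which uses a great many learned scoring components.

# Toy sketch of a DeepQA-style pipeline: generate candidate answers
# (hypotheses), collect supporting evidence for each, and combine the
# evidence scores into a per-hypothesis confidence. All names and
# scoring rules are illustrative assumptions, not Watson's internals.

from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    answer: str
    evidence_scores: list[float] = field(default_factory=list)

    def confidence(self) -> float:
        # Plain averaging stands in for Watson's learned combination
        # of hundreds of feature scores.
        if not self.evidence_scores:
            return 0.0
        return sum(self.evidence_scores) / len(self.evidence_scores)

def generate_hypotheses(question: str, corpus: dict[str, list[str]]) -> list[Hypothesis]:
    # Treat every corpus entry as a candidate answer; real candidate
    # generation would use the question for passage search,
    # named-entity extraction, and so on.
    return [Hypothesis(answer) for answer in corpus]

def score_evidence(question: str, passage: str) -> float:
    # Crude lexical-overlap scorer: the fraction of question words
    # that also appear in the supporting passage.
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / max(len(q_words), 1)

def answer(question: str, corpus: dict[str, list[str]]) -> list[Hypothesis]:
    hypotheses = generate_hypotheses(question, corpus)
    for h in hypotheses:
        for passage in corpus[h.answer]:  # passages mentioning the candidate
            h.evidence_scores.append(score_evidence(question, passage))
    return sorted(hypotheses, key=lambda h: h.confidence(), reverse=True)

if __name__ == "__main__":
    # Tiny "unstructured" corpus: free-text passages keyed by the
    # entity they mention.
    corpus = {
        "Toronto": ["Toronto is the largest city in Canada."],
        "Chicago": ["Chicago has two airports named for World War II heroes."],
    }
    question = "Which US city has two airports named for World War II heroes?"
    for h in answer(question, corpus):
        print(f"{h.answer}: confidence {h.confidence():.2f}")

The point of the toy is the shape of the computation, not the scoring: candidate answers compete, and confidence comes from accumulated evidence rather than from a single lookup in a structured database.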

TED panel discussion "Watson's next job" (Baker, Holley, Chase, Ferrucci)

There was a panel discussion on the (whimsically put) topic of "Watson's next job?  Where will Watson find work?", among other things.  More seriously, some panelists talked about development work on applications already lined up for this technology in the field of clinical medicine.  The talk is archived on TED, here.  It runs a little over 33 minutes; I mention the length because there are so many videos about IBM and Watson on the web (more than one on TED alone) that this one is hard to find using search terms, so knowing it is about 33 minutes long may help you verify that you've got the right one:

http://www.ted.com/webcast/archive/event/ibmwatson

The URL should lead you to an archived video of a panel discussion in which author Stephen Baker, as moderator, discusses the applications of Watson's technology with the following panelists: IBM Watson Principal Investigator Dr. David Ferrucci; IBM Fellow and CTO of IBM's SOA Center for Excellence Kerrie Holley; and Columbia University Professor of Clinical Medicine Dr. Herbert Chase.

I had wondered if they would mention what seemed to me an obvious next step: asking questions.  That is, if Watson or a Watson-like assistant determines that some additional information would help it raise the confidence level of a competing hypothesis, or distinguish between several hypotheses, can it ask the human it is assisting to collect a piece of information not available in the material it already has, or to run a particular test on a patient?  It seems this is something they are thinking of in applying Watson's technology to clinical medicine.  Formulating the queries it would like answered in order to provide a better response is a pretty significant step, I think.  Someone commented that Watson only knows how to answer questions, whereas humans can ask questions.  Strictly speaking, I suppose it is not proper to say that a machine can ask a question.  Although, once it has the voice technology to enunciate a question, it is hard to remind oneself that what Watson is doing is merely formulating a question, not asking it.

It seems to me that Watson's technology might be especially well suited to identifying the information that would be most helpful to have.  (I hedge with "might be" here because I don't know exactly how Watson works.)  I see no downside to having a computer assistant suggest the questions it thinks ought to be asked, along with an analysis of which hypotheses the answers would bear upon.
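For what it's worth, there is a standard way to formalize "which question would be most helpful": rank candidate tests by their expected information gain, i.e., by how much the answer is expected to shrink the uncertainty over the competing hypotheses.  I don't know whether Watson does anything like this; the Python sketch below is just the textbook value-of-information calculation, with made-up diagnoses, tests, and probabilities.

# Hedged sketch of ranking follow-up questions by expected entropy
# reduction over competing hypotheses. A standard value-of-information
# idea, not a claim about Watson; all numbers below are invented.

import math

def entropy(probs: list[float]) -> float:
    return -sum(p * math.log2(p) for p in probs if p > 0)

def normalize(weights: list[float]) -> list[float]:
    total = sum(weights)
    return [w / total for w in weights]

def expected_entropy_after(prior: list[float],
                           likelihoods: list[list[float]]) -> float:
    # likelihoods[outcome][hypothesis] = P(test outcome | hypothesis)
    expected = 0.0
    for outcome_lks in likelihoods:
        # Probability of this outcome under the current beliefs.
        p_outcome = sum(p * lk for p, lk in zip(prior, outcome_lks))
        if p_outcome == 0:
            continue
        # Bayesian update, then weight the residual uncertainty.
        posterior = normalize([p * lk for p, lk in zip(prior, outcome_lks)])
        expected += p_outcome * entropy(posterior)
    return expected

if __name__ == "__main__":
    # Three competing diagnoses with current confidences.
    prior = [0.5, 0.3, 0.2]
    # Two hypothetical tests, each with positive/negative outcome rows.
    tests = {
        "blood panel":  [[0.9, 0.2, 0.1], [0.1, 0.8, 0.9]],
        "imaging scan": [[0.6, 0.5, 0.4], [0.4, 0.5, 0.6]],
    }
    print(f"current entropy: {entropy(prior):.3f} bits")
    for name, lks in tests.items():
        gain = entropy(prior) - expected_entropy_after(prior, lks)
        print(f"{name}: expected information gain {gain:.3f} bits")

Run as written, the toy reports that the hypothetical blood panel buys far more expected information than the hypothetical imaging scan, which is exactly the kind of suggestion one would want such an assistant to surface for the clinician to act on.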

Another thing that caught my attention was the mention of using images as well as text.  That has been a particularly challenging problem for AI, so I wonder how far they'll get with it.  Or, rather, mimicking human perception has been a particularly challenging task; it is not clear to me whether reading images from medical tests would present the same problems, though.  It wouldn't surprise me if it turns out that the only way to do this well is to ask the experts who know how to read certain kinds of images (X-rays, MRIs, etc.) for their heuristics, and incorporate them into the algorithms.  But it could turn out just the opposite: machines might be trained to read some images (such as astronomical or crystallographic images) much better than any human can.

So, for several reasons, I expect to be watching what happens with these projects in the years to come.