The information seeker in 2015.
While Watson was victorious by a significant margin over the two best Jeopardy! human experts EVER, the real winners are the information seekers of the future – the 21st Century Library customers. PBS reporter Hari Sreenivasan posted the following coverage of the epic human vs. computer smackdown.
Ken Jennings and Brad Rutter — the least-likely Jeopardy! underdogs ever — found themselves no match Wednesday night for Watson – IBM’s Frankenstein of trivia.
Below his Final Jeopardy! answer about novelist Bram Stoker, Ken Jennings scribbled a quip that seems a perfect blend of wit and doom – a mix of his eloquent nerdiness and his humble recognition that he represented what may be the finger in the dike, trying to hold back the inevitable superiority of artificial intelligence.
With every question, viewers were shown Watson’s top three candidate answers, along with its estimated probability that each was correct. It also appeared that Watson rang in based on that probability level: the higher the probability, the faster the response. Watson never missed an answer to which it assigned better than a 90% probability of being correct, and most of the questions it answered correctly and first were high probability – 75% or higher. On other questions, the probability for all three candidates was well below 25%. The final result was that Watson more than tripled the amount of money won by either human super contestant. (All of Watson’s winnings will be donated to charity; 50% of Jennings’ and Rutter’s winnings will go to charities of their choice.)
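The on-screen behavior described above – rank the candidates, ring in only when confidence clears a threshold, and ring in faster the more confident you are – can be sketched in a few lines. This is a hypothetical illustration only; the function names, threshold, and delay formula are my assumptions, not IBM’s actual DeepQA logic:

```python
# Hypothetical sketch of confidence-based buzzing, as it appeared on screen.
# Each candidate answer carries a confidence score between 0 and 1.

def rank_candidates(candidates):
    """Sort (answer, confidence) pairs, highest confidence first."""
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)

def buzz_decision(candidates, threshold=0.5):
    """Return (should_buzz, best_answer, delay) for one clue.

    A lower delay means a faster buzz; the delay shrinks as
    confidence in the top candidate grows (an assumed rule).
    """
    best_answer, confidence = rank_candidates(candidates)[0]
    if confidence < threshold:
        return False, best_answer, None
    delay = 1.0 - confidence  # more confident -> quicker to ring in
    return True, best_answer, delay

# Example: three candidates, as viewers saw them displayed.
candidates = [("Bram Stoker", 0.97), ("Mary Shelley", 0.02), ("Oscar Wilde", 0.01)]
should_buzz, answer, delay = buzz_decision(candidates)
```

Under this toy rule, a 97% candidate triggers an almost immediate buzz, while a clue where every candidate sits below the threshold produces no buzz at all – consistent with the games, where Watson stayed silent on its low-probability clues.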
It was an impressive and intense demonstration of the capabilities of humans and computers. Not only did Watson have the disadvantage of not being privy to the other contestants’ answers – sometimes repeating the same wrong answer another contestant had just given – it also had the handicap of having to select THE “correct” answer itself, without benefit of “reason” or wisdom, a uniquely human trait.
That was NOT really what the computer architects intended. Watson was designed to deliver a list of potential answers, each with a probability of accuracy, from which a human would select the most appropriate answer for their purposes. Place that potential into the context of a reference interview (for want of a better term for the questioning situation), and you have the ideal solution: give the customer a choice among the most probable answers. When did ANY customer get better service than that from a reference librarian?
Nobody will deny that the human mind cannot be replicated, but it was obvious that purely analytical processing is Watson’s forte. IBM and other agencies are already brainstorming what they could do with Watson. Gov lessons from Jeopardy’s Watson computer challenge quotes Dave McQueeney, VP of Software at IBM, as saying: “Many of our customers, especially the government customers, have enormous data sources and they feel there’s tremendous insight available in those data sources if only they had the tools to process them.”
Unfortunately, there are those who are simply focused on the paranoid “computer overlord” aspect they can criticize, or on the mechanics of the Jeopardy! game. John C. Dvorak, who is obviously in denial and sadly missing the point of this whole demonstration (Watson Is Creaming the Humans. I Cry Foul.), decided the game was rigged in Watson’s favor because it could “buzz in faster”.
Where has Watson been hiding?
It’s interesting to me that this project is only now becoming a reality. The project was started about four years ago, but has technology really changed that much in the last ten years? The machines may be smaller, but why did it take this long to make this happen? Seriously, how much have processing power and programming changed since 2000?
Was something new invented that I don’t know about? Couldn’t Watson have shown up five years ago or even before that?
Google can already answer most of these Jeopardy! questions, just not as fast. Type a Jeopardy! question into the Google search bar and you’ll have the answer within the first five or six hits. Of course, it usually takes a little reading, and you may have to visit a few Web sites, but the answer is there.
I didn’t realize Google had the capability to process a complete phrase and isolate potential answers, and I’m certain it does not have the capability to rank each with a probability of accuracy. Not to mention that Google, like other search engines, finds resources – more information – rather than providing answers. Naysayers regarding Watson and the potential for the future of information technology should really overcome their computer prejudices and recognize its potential for the future of information seekers.
I view Watson as a potential solution to information overload, a problem that is getting worse rather than better. Dr. Bruce A. Johnson, St. Louis Adult Education Examiner, has asked, “When does information become too much for students?” His observation is that,
Is it possible that students reach a point where they have been given too much information to process? …
When students are reviewing the course materials and resources they are selectively reading the information and processing it from their individual perspective, which is influenced by their background, learning style, belief system, and prior academic experience. … For class assignments students search for resources and often utilize a library to find articles. When presented with a list of possible resources to choose from students must make a choice based upon what they perceive to be the most relevant information. These ongoing approaches to thinking are often selective in nature and not always directed towards a specific learning goal or objective. Students will first choose what information they will accept based upon their perceived needs.
What if a Watson computer could significantly narrow the information retrieved from a search and rank those selections by the probability that each is the most appropriate and accurate match? Wouldn’t that drastically reduce the information overload? The volume of information is not likely to decrease significantly any time in the future, so having a computer sort through relevant information, select the most appropriate, and even recommend statistically which is best – isn’t that a good thing?
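The narrow-and-rank idea can be sketched with a toy relevance score. This is a minimal illustration under stated assumptions – a simple word-overlap cosine score standing in for the far richer evidence scoring a real question-answering system would use, with the scores normalized so they read like the on-screen probabilities viewers saw:

```python
import math
from collections import Counter

def score(query, document):
    """Toy relevance score: cosine similarity of word-count vectors."""
    q, d = Counter(query.lower().split()), Counter(document.lower().split())
    dot = sum(q[w] * d[w] for w in set(q) & set(d))
    norm = (math.sqrt(sum(c * c for c in q.values()))
            * math.sqrt(sum(c * c for c in d.values())))
    return dot / norm if norm else 0.0

def rank_results(query, documents, top_n=3):
    """Return the top_n documents, with scores normalized to sum to 1
    so each reads as a rough 'probability of being the best match'."""
    scored = sorted(((score(query, doc), doc) for doc in documents), reverse=True)[:top_n]
    total = sum(s for s, _ in scored) or 1.0
    return [(doc, s / total) for s, doc in scored]

# A customer's question, and a few retrieved resources to narrow down.
results = rank_results("author of Dracula", [
    "Bram Stoker wrote the novel Dracula",
    "Mary Shelley wrote Frankenstein",
    "Dracula is a vampire novel by Bram Stoker, the author",
])
```

Instead of handing the customer a raw hit list, the system returns a short, probability-ranked slate of candidates – exactly the reference-interview scenario described earlier, with the human still making the final choice.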
While the future appears bleak for “reference librarian” functions in light of Watson computers, doesn’t it make sense to embrace the change and use it to the benefit of the library customer? (With the same spirit Bunny did with EMERAC.) Isn’t that what we’re all about – change and progress in library services? Or are we about protecting our jobs and preserving the past elite status of librarianship?
Watson is absolutely THE technology to watch for the next five years, to see which research university is the first to acquire one. After that …