There is a fascinating article in the New York Times today – Aiming to Learn as We Do, a Machine Teaches Itself.
NELL is an effort to create a computer that can learn semantics, understanding the meaning of language, rather than grammar or other structural aspects of language. Meaning requires interpreting words and phrases in their specific context of use and against a large body of background knowledge.
Unlike previous efforts that often tried to program all that knowledge and interpretation into a program, the computer engineers behind NELL have programmed the machine to learn more like a human, “cumulatively, over a long term.”
To do that, NELL learns facts about the world, but more importantly it places those facts into categories and creates relations between members of categories. By relying on categorization and relation-building through self-learning, NELL comes much closer to human learning.
With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: “Anger is an emotion.” “Bliss is an emotion.” And about a dozen more.
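The seeding step can be pictured with a toy sketch in Python. None of these names or structures come from NELL's actual code; this is just the idea of a knowledge base where each category begins with a handful of trusted examples:

```python
# Toy sketch (not NELL's real implementation): each category is seeded
# with a small set of examples the researchers assert to be true.
seed_kb = {
    "emotion": {"anger", "bliss", "joy", "fear", "sadness"},
    "mountain": {"Pikes Peak", "Mount Everest", "Kilimanjaro"},
    "building_part": {"stairs", "roof", "door"},
}

def is_known(category, phrase):
    """Check whether a phrase is already a trusted member of a category."""
    return phrase in seed_kb.get(category, set())
```

Everything NELL later learns is bootstrapped outward from seeds like these.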
Then NELL gets to work. Its tools include programs that extract and classify text phrases from the Web, programs that look for patterns and correlations, and programs that learn rules. For example, when the computer system reads the phrase “Pikes Peak,” it studies the structure — two words, each beginning with a capital letter, and the last word is Peak. That structure alone might make it probable that Pikes Peak is a mountain. But NELL also reads in several ways. It will mine for text phrases that surround Pikes Peak and similar noun phrases repeatedly. For example, “I climbed XXX.”
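The pattern-mining idea can be illustrated with a deliberately simplified sketch. Here a single hand-written regular expression stands in for the contextual cues NELL learns automatically, extracting whatever noun phrase fills the "I climbed ___" slot (the pattern and function names are mine, not NELL's):

```python
import re

# Hypothetical sketch of context-pattern mining: collect the phrases
# that fill the slot in "I climbed ___". A run of capitalized words
# (e.g. "Pikes Peak") is preferred over a single lowercase word.
PATTERN = re.compile(r"I climbed ((?:[A-Z]\w+\s?)+|\w+)")

def extract_candidates(texts):
    """Return candidate category members found in the given texts."""
    candidates = []
    for text in texts:
        for match in PATTERN.finditer(text):
            candidates.append(match.group(1).strip())
    return candidates
```

Candidates mined this way are only *evidence* for category membership; NELL accumulates many such cues before committing a fact to its knowledge base.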
NELL, Dr. Mitchell explains, is designed to be able to grapple with words in different contexts, by deploying a hierarchy of rules to resolve ambiguity. This kind of nuanced judgment tends to flummox computers. “But as it turns out, a system like this works much better if you force it to learn many things, hundreds at once,” he said.
For example, the text-phrase structure “I climbed XXX” very often occurs with a mountain. But when NELL reads, “I climbed stairs,” it has previously learned with great certainty that “stairs” belongs to the category “building part.” “It self-corrects when it has more information, as it learns more,” Dr. Mitchell explained.
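That self-correction step can be sketched as a simple contest between the category a pattern suggests and any category the system already believes with high confidence. The function and the confidence numbers below are illustrative assumptions, not NELL's actual algorithm:

```python
# Toy sketch of disambiguation: a context pattern proposes a category,
# but a previously learned high-confidence assignment can override it.
def classify(phrase, pattern_category, kb_confidence):
    """kb_confidence maps (phrase, category) pairs to confidences in [0, 1].
    The pattern's suggestion wins unless some stored category for the
    phrase is believed more strongly than the pattern's baseline."""
    best_cat, best_conf = pattern_category, 0.5  # baseline for pattern evidence
    for (p, cat), conf in kb_confidence.items():
        if p == phrase and conf > best_conf:
            best_cat, best_conf = cat, conf
    return best_cat

kb_confidence = {("stairs", "building_part"): 0.95}
classify("Pikes Peak", "mountain", kb_confidence)  # no stronger prior: "mountain"
classify("stairs", "mountain", kb_confidence)      # prior wins: "building_part"
```

The point of the sketch is the direction of override: "I climbed stairs" matches the mountain pattern, but the high-confidence prior that "stairs" is a building part beats the pattern's suggestion.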
But the Carnegie Mellon researchers found that NELL did well with certain categories and relations, and less well with others. Over the summer, the researchers helped NELL along in the categories where accuracy was lower, correcting errors that were obvious (at least to us!), such as "Internet cookies" not really belonging in the baked-goods category.
Though the article and the project homepage don’t describe it as such, what is interesting to me is that the computer engineers are combining cognitive linguistics with scaffolded learning. With cognitive linguistics, concepts and categories matter, and language is learned through use, not through some innate faculty. The scaffolding, or helping an individual learn through supports that facilitate learning, happens in three ways: (1) the initial seeding of each category, (2) the correction of blatant errors to help NELL get back on track, and (3) the Web, which itself is “a rich trove of text to assemble structured ontologies — formal descriptions of concepts and relationships.”
Fascinating stuff. And if you want to see what NELL has learned, you can check out the NELL knowledge base.