I don’t usually chime in on these episode-specific discussions, but I figured I may as well throw my hat in the ring this time around, considering human-robot interaction is actually my area of study. With that in mind, I apologize in advance for what will likely be an incredibly geeky & uninteresting ramble for many >.>
I just wanted to give my two cents on the issue of whether or not natural language processing (NLP) is a necessary component of AI. Audra felt that it wasn’t a critical element, but from a certain point of view </oldben>, it actually is.
NLP is incredibly, ridiculously tricky to pull off. That’s why, in any complete form, it doesn’t actually exist yet. Even the most advanced speech recognition systems only work effectively when a) they’re designed to interpret only a very limited lexicon and syntax, or b) they’re designed to work only within an extremely narrow context of operation. An example of what I mean by that:
Your cell phone most likely has some speech processing capability, and it probably works equally well whether you’re sitting on your couch or standing on top of a mountain. The range of things that you can actually do by voice, though, is quite limited. On the other end of the spectrum, you have desktop speech interaction software like Dragon NaturallySpeaking (which is how I interact with the computer). It has a significantly more thorough grasp of grammar (in the linguistic sense of the word, i.e. sentence structure, not punctuation usage), so it’s much more lenient in terms of the command structures it can understand. This is offset, though, by the fact that it’s only useful within the context of daily computer usage. Commands related to household tasks or chatter about the weather would be completely lost on it.
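To make that trade-off concrete, here’s a minimal Python sketch of the “phone-style” end of the spectrum. (Everything here is invented for illustration; real systems are vastly more sophisticated, but the shape of the limitation is the same.)

```python
import re

# A toy "phone-style" recognizer: it only accepts a tiny, rigid grammar.
# Anything outside these patterns is rejected outright, which is exactly
# what keeps the problem tractable. (All commands here are made up.)
COMMANDS = [
    (re.compile(r"^call (?P<name>\w+)$"), "dial_contact"),
    (re.compile(r"^set (an )?alarm for (?P<time>\d{1,2}(:\d{2})? ?(am|pm))$"), "set_alarm"),
    (re.compile(r"^play (?P<song>.+)$"), "play_media"),
]

def interpret(utterance: str):
    """Map a transcribed utterance onto a known command, or give up."""
    text = utterance.lower().strip()
    for pattern, action in COMMANDS:
        match = pattern.match(text)
        if match:
            return action, match.groupdict()
    return None, {}  # outside the lexicon/syntax -> the system just shrugs

print(interpret("Call Audra"))             # ('dial_contact', {'name': 'audra'})
print(interpret("Set an alarm for 7 am"))  # ('set_alarm', {'time': '7 am'})
print(interpret("Is it going to rain?"))   # (None, {})
```

The rigidity is the point: by refusing anything outside its tiny grammar, the system never has to figure out what you meant, only whether you said one of the few things it knows.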
The real problem behind these limitations is that natural language is more than just the sum of the literal meanings of the individual uttered words. To understand natural language, the listener needs to share with the speaker a common knowledge of the speech’s context. Current speech recognition systems simply have no way of establishing this common ground. We humans do it without even thinking, because our brains are constantly preattentively processing visual, auditory, and other sensory information to understand where we are and what’s going on around us. To give an AI true NLP, then, it needs this same human-like degree of contextual awareness.
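Here’s an equally toy sketch of that grounding problem (the scenario and names are, again, invented). The literal words of “turn it up” are identical in both calls below; only the listener that shares context with the speaker can resolve them:

```python
from dataclasses import dataclass
from typing import Optional

# Toy illustration of common ground: the same words mean different
# things (or nothing at all) depending on shared context.
@dataclass
class Context:
    location: str = "unknown"
    salient_object: Optional[str] = None  # what "it" currently refers to

def hear(utterance: str, ctx: Context) -> str:
    """Resolve a deliberately ambiguous command against shared context."""
    if utterance.lower() == "turn it up":
        if ctx.salient_object is None:
            return "??? (no common ground: what is 'it'?)"
        return f"raising the volume of {ctx.salient_object} in the {ctx.location}"
    return "unrecognized"

# Without shared context, the utterance is undecidable:
print(hear("turn it up", Context()))

# With it (we were just talking about the TV in the living room), it's trivial:
print(hear("turn it up", Context(location="living room", salient_object="the TV")))
```

Of course, a real system can’t just be handed a Context object; it has to build one continuously from its own sensory input, and that’s precisely the part nobody knows how to do in the general case.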
So, when you think about it in a bottom-up way like that rather than top-down, you can see how intertwined AI and NLP really are. To understand natural, human language, a system has to first understand the world around it, and such understanding is a core tenet of almost any definition of AI. In many ways, once a system has that contextual awareness, NLP is just a phenomenon that emerges naturally from basic speech recognition. It may therefore be possible to have NLP without AI (that is, a system that can understand natural language but not reason or be sentient), but it is extremely improbable that we’ll ever have AI without NLP.