Two Goals in AI
This is just my opinion, and I haven't read anything to support it, but I think most academic fields can be separated this way too. The two goals are, roughly, theoretical and practical. For AI, the practical goal is to make computers smarter, while the theoretical goal is to use computers to better understand humans.
Take natural language processing as an example (since I'm in the class). Humans clearly do a lot to understand language; our brains do many things that no computer can do yet. That doesn't mean people haven't been trying, though: projects like the knowledge base Cyc, the Structure Mapping Engine for analogies, and ACT-R, a general cognitive architecture, try to get at how humans do all this stuff we take for granted. Despite all this, it's still very difficult to get computers to understand even a news article. The Learning Reader is an attempt in that direction.
On the other hand, there are purely statistical, stochastic methods of understanding language. For example, we might not know exactly what a fourth-grade student would understand, short of creating a model of their entire brain. We could, however, guess by looking at how long the sentences are, how many syllables each word has, and how long the article as a whole is. Notice that this has nothing to do with the meaning, or semantics, of the article, only with some surface properties. Clearly humans don't do this, but a machine that can tell what grade level a piece of text is written at would still be considered "intelligent".
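As a minimal sketch of that surface-statistics idea, here is the classic Flesch-Kincaid grade-level formula (my choice of formula, not something named in this post), which uses only sentence length and syllable counts. The syllable counter is a crude vowel-group heuristic, not a real pronunciation model:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels.
    # Real syllable counting would need a pronunciation dictionary.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def grade_level(text: str) -> float:
    # Flesch-Kincaid grade level: pure surface statistics, no semantics.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

simple = "The cat sat on the mat."
complex_ = ("Statistical readability formulas approximate comprehension "
            "difficulty using superficial textual properties.")
print(grade_level(simple) < grade_level(complex_))
```

The point of the example is exactly the one made above: the function never looks at what the text means, yet its output correlates well enough with difficulty to be useful.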
If this divide is seen as two different directions one can take when studying or doing research in computer science, I'm not sure which side I'm on. I lean slightly towards using computers to understand humans, or more broadly, to understand nature in general. I enjoy simulations of nature more than I enjoy trying to find a proxy for whatever aspect of human reasoning we're trying to mimic.
That said, I won't complain if I get a job in either camp.