Nowadays it is much quicker to search for answers on the Web. A search on the subject of Artificial Intelligence (AI) will be flooded with answers, with diverse viewpoints but little consensus.
Rather than repeating what others say, I’ll illustrate AI with a little example: simple arithmetic.
Once a child learns enough arithmetic to understand that 1/2 = 0.5, he or she will try the next: 1/3. Converting this into decimal gives: 0.333… The child will soon realize that this is a never-ending job. Most likely you can’t trick him/her by saying, “maybe it will terminate if you go far enough.” The child can see through that and offer an explanation as to why it won’t terminate. That’s math discovery for the child — and that’s intelligence, the natural kind.
Any good programmer can code a short program to do the same thing: converting unit fractions to decimals. A naive loop will print 1/3 as 0.333… forever! Of course, a smart programmer can put some “intelligence” into the program, making it “realize” that the digit(s) repeat, thereby printing a message and terminating. That’s intelligence, the artificial kind.
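Here is a minimal sketch of what that “smart” program might look like (the function name and output format are my own choices, not anything specified above). The key observation it encodes: in long division of 1/n, once a remainder repeats, the digits must cycle forever, so the program can stop and report the repeating part instead of looping endlessly.

```python
def unit_fraction_to_decimal(n):
    """Convert 1/n to a decimal string, marking any repeating digits
    in parentheses, e.g. 1/3 -> "0.(3)", 1/6 -> "0.1(6)"."""
    digits = []
    seen = {}       # remainder -> position where it first appeared
    remainder = 1
    while remainder != 0:
        if remainder in seen:
            # A repeated remainder means the digits from that point
            # onward repeat forever -- terminate instead of looping.
            start = seen[remainder]
            return ("0." + "".join(digits[:start])
                    + "(" + "".join(digits[start:]) + ")")
        seen[remainder] = len(digits)
        remainder *= 10
        digits.append(str(remainder // n))
        remainder %= n
    return "0." + "".join(digits)   # terminated: finite decimal

print(unit_fraction_to_decimal(2))  # 0.5
print(unit_fraction_to_decimal(3))  # 0.(3)
```

The program “sees through” the infinite loop the same way the child does: not by running far enough, but by recognizing that a repeated state guarantees repetition forever.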
The central debate for AI is this: can human intelligence always be coded into software, i.e. put into machines? Both the yes-camp and the no-camp have followers. For a balanced view, have a read on Wikipedia.
Our brains and our machines both compute. They have different makeups, use different algorithms, and so on — but are they different in smartness?