What is Artificial Intelligence?

What is Artificial Intelligence (AI)? Definition, Concept, Myths, History, Types, Goals and Applications


Definition of Artificial Intelligence

Artificial intelligence (AI) is the science and engineering of making machines do things that would require intelligence if done by people, such as problem-solving, visual perception, and speech recognition.

Artificial intelligence refers to computers that can carry out practical reasoning: recognizing and classifying images, playing chess, proving mathematical theorems, or writing essays. These tasks are important, but they are clearly not all there is to intelligence. Still, when computers seem to understand us, it is hard not to be impressed.

In computer science, artificial intelligence research has two distinct forms. The first is a field concerned with AI’s theoretical foundations and the design of intelligent machines; the second is a field concerned with the practical use of AI in applications ranging from search to robotics to expert systems.

A central problem in AI is the definition of intelligence. There is no consensus on what intelligence is or on how it is to be measured. But a consensus has emerged on the general outline of intelligence as a problem-solving capability.

For practical purposes, AI research treats intelligence as the capacity to solve problems, and concerns itself with engineering the algorithms and techniques that allow machines to do so.

Artificial intelligence research is often divided into subfields: knowledge representation, machine learning, robotics, computer vision, computational linguistics, human-computer interaction, and computational biology.

Much of AI research over the past forty years has been devoted to the design of expert systems. An expert system applies rules to input and produces output, much as a human expert would.
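The rule-applying behaviour described above can be sketched in a few lines. This is a minimal, hypothetical example — the rules, facts, and the `diagnose` helper are invented for illustration, not taken from any real expert system:

```python
# Minimal sketch of a rule-based expert system: each rule pairs a
# condition over the input facts with a conclusion. Rules and facts
# here are hypothetical examples.

def diagnose(facts, rules):
    """Return the conclusion of every rule whose condition holds."""
    return [conclusion for condition, conclusion in rules if condition(facts)]

rules = [
    (lambda f: f.get("fever") and f.get("cough"), "possible flu"),
    (lambda f: f.get("fever") and not f.get("cough"), "possible infection"),
    (lambda f: not f.get("fever"), "likely not flu"),
]

print(diagnose({"fever": True, "cough": True}, rules))  # → ['possible flu']
print(diagnose({"fever": False}, rules))                # → ['likely not flu']
```

Real expert systems add chaining, uncertainty handling, and explanation facilities on top of this basic match-and-fire loop.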

AI research has also led to the development of powerful machine learning techniques for solving complex problems. Machine learning is a branch of AI that attempts to build computers that can learn from experience.

In robotics, AI research has developed methods for planning and control. In computer vision, it has focused on techniques that let machines sense and recognize the world.

Knowledge representation involves methods that allow AI systems to represent knowledge as information that computers can readily manipulate.

A successful artificial intelligence system would need some minimum level of intelligence; it could not simply mimic human intelligence. It would also have to be able to “learn.” How could it learn? One classic answer is: by making copies of itself. The system would repeatedly produce slightly varied copies of itself, evaluate which copies perform best, and keep the most capable ones as the basis for the next round of copying.

This copy-vary-select dynamic is usually called “genetic programming” or “artificial evolution.” (It is distinct from “reinforcement learning,” in which a system learns from reward signals rather than from selection among copies.) What makes it remarkable is that it works, even when we cannot say in advance which variations will succeed.
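The copy, evaluate, and select loop described above can be sketched as a toy evolutionary search. Everything here is a made-up illustration: the fitness function (maximize −x², peaking at 0) and the `evolve` helper are hypothetical, not any standard library API:

```python
import random

# Toy evolutionary loop: copy the current best solution with small
# random mutations, score the copies, keep the fittest, and repeat.
# The fitness function (peak at x = 0) is a hypothetical example.

def fitness(x):
    return -x * x

def evolve(start, generations=100, offspring=20, step=0.5, seed=0):
    rng = random.Random(seed)
    best = start
    for _ in range(generations):
        copies = [best + rng.uniform(-step, step) for _ in range(offspring)]
        best = max(copies + [best], key=fitness)  # selection keeps the fittest
    return best

print(round(evolve(10.0), 3))  # converges near the optimum at 0
```

Starting far from the optimum, each generation drifts toward it by at most one mutation step, so after enough generations the result sits close to 0.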

Learning techniques of this kind were behind some early successes. It was relatively easy to design programs for simple, well-defined tasks like playing chess. It was much harder to develop programs for complicated, open-ended jobs like driving a car.



Concept: How Artificial Intelligence (AI) works


The simplest definition of artificial intelligence ties it to thinking, and thinking, at its most basic, is the set of behaviours by which an agent comes to know its environment.

When we talk about how computers think, we talk about what computers perceive (the sensors) and what they decide and do (the processor and actuators). Sensors take in information from the outside world; the processor, the computer’s “brain,” decides on appropriate actions; actuators carry those actions out. A computer’s sensors are its eyes and ears; its actuators are its legs, arms, and hands.

Sensors are exciting, but they are conceptually the simple part: they are devices that convert light, sound, heat, and other physical quantities into electrical signals. You might say the computer translates senses into signals. The processor is the complicated part. It is the brain of the computer, converting those signals into decisions and into tasks for the actuators to perform.

The various sensors in a computer work together. A light sensor tells the computer there is light; a heat sensor tells it there is heat; and in each case the processor decides what, if anything, to do about it. Some sensor readings trigger actions directly, and others simply tell the processor where to direct its attention.
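The sense-decide-act loop just described can be sketched as a simple dispatch table. The sensor names, thresholds, and actions below are hypothetical examples, not any real device API:

```python
# Sketch of the sense-decide-act loop: sensor readings come in, and
# the "processor" maps each reading to an action via a handler table.
# Sensor names, thresholds, and actions are hypothetical.

def decide(readings, handlers):
    """Map each recognized sensor reading to an action."""
    return [handlers[name](value) for name, value in readings if name in handlers]

handlers = {
    "light": lambda v: "open shutter" if v > 0.5 else "wait",
    "heat":  lambda v: "start fan" if v > 30 else "idle",
}

readings = [("light", 0.8), ("heat", 42), ("sound", 0.1)]
print(decide(readings, handlers))  # → ['open shutter', 'start fan']
```

Readings with no handler (here, “sound”) are simply ignored, mirroring how a processor attends only to the signals it knows how to act on.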

Computers have solved problems that we didn’t think were solvable by machines. They have created new kinds of issues we hadn’t even imagined. And they have invented ways to solve those problems, not just by brute force but by creativity.

The human brain is good at two things: learning and pattern recognition. For a long time computers could do neither well. But computers have since learned how to learn, and they can now do pattern recognition too. They have even learned to combine pattern recognition with learning, which gives them a capacity we have barely begun to explore.
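Combining learning with pattern recognition can be illustrated with a one-nearest-neighbour classifier: it “learns” by storing labelled examples and recognizes new inputs by similarity. The data set and the `nearest_label` helper are toy inventions for illustration:

```python
# Minimal sketch of learning + pattern recognition: a 1-nearest-
# neighbour classifier stores labelled 2-D points ("learning") and
# labels new points by the closest stored example ("recognition").
# The toy data set here is hypothetical.

def nearest_label(point, examples):
    """Classify a 2-D point by its closest stored example."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    closest = min(examples, key=lambda ex: dist2(point, ex[0]))
    return closest[1]

examples = [((0, 0), "cold"), ((0, 1), "cold"), ((5, 5), "hot"), ((6, 5), "hot")]
print(nearest_label((1, 1), examples))  # → cold
print(nearest_label((5, 4), examples))  # → hot
```

Adding more labelled examples improves the classifier without changing any code, which is the essence of learning from experience.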

The computer revolution has been driven by the idea that machines can take over tasks that human beings do and want done. The revolution has also created new kinds of jobs. The grand vision is that intelligent computers will eventually do the things human beings can do, and the revolution has already begun to change the nature of work.

The idea behind the artificial intelligence (AI) field is that computers will eventually be able to think like human minds. This idea has two unsettling consequences. One is that computers may one day be more intelligent than we are. The other is that such computers might not need us.

AI is a young field, there is no settled definition of it, and there has been a lot of hype. For decades there were predictions that computers would soon do almost anything a human could do, and at times computers did appear to think like humans, as when learning programs eventually taught themselves to win at simple video games such as the old Atari titles.

The field as a whole, however, repeatedly ran into problems. No one knew quite what such programs had actually learned, and researchers were puzzled that computers could master small games like Tic-Tac-Toe long before they could master chess.

There was a general feeling that the field needed a benchmark, and the test seemed obvious: if computers could play chess at the highest level, they might be able to play anything. And if they could play any game, they might well be able to think like humans.

So the field gradually split into two camps. One pursued chess-playing and similar performance benchmarks. The other looked, instead, at how computers (and minds) think.

Today, it is the latter group, the cognitive scientists, who seem, on the whole, to be correct.

Cognitive scientists, like psychologists and linguists, believe that the mind works much as language does. When we understand language, we don’t just learn the rules; we learn to recognize things by the rules. If I tell you that a man is tall, you understand me to mean that he stands well above average height, perhaps close to two meters.



Common Artificial Intelligence (AI) Myths


Here are some Artificial Intelligence Myths:

  1. A robot could beat a human in a game of chess any day

A chess-playing computer called Deep Blue defeated world champion Garry Kasparov in 1997. But Deep Blue was a multimillion-dollar, purpose-built machine that consumed vastly more power than Kasparov’s brain, which runs on roughly 20 watts.

Kasparov, a chess prodigy by training, was nevertheless surprised by the result, conceding that chess is probably one of the areas where computers hold an advantage.


Yet the surprise should not have been so great. Chess is a complicated game with subtle rules and an enormous number of possible lines of play, and the computer’s advantage comes from its vast computing power. What a chess computer lacks is intuition: a human player can look at a candidate move and quickly judge whether it is promising, while a computer must calculate.


The computer compensates with sheer speed. It can evaluate millions of positions per second (Deep Blue reportedly examined on the order of 200 million per second), play out vast numbers of lines, and keep re-checking its moves, its opponent’s replies, and its chances of winning in fractions of a second.
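That exhaustive checking of possibilities is game-tree search. Here is a toy minimax sketch over a hand-built tree; the tree is a made-up example, not a chess position:

```python
# Toy minimax over a hand-built game tree: the machine's advantage
# is exhaustively scoring every line of play. Inner lists are choice
# points; numbers are final position scores. The tree is hypothetical.

def minimax(node, maximizing=True):
    """Return the best achievable score from this node."""
    if isinstance(node, (int, float)):      # leaf: a position score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# The maximizing player picks a branch; the opponent then picks the
# leaf that is worst for the maximizer.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # → 3
```

Real chess engines add alpha-beta pruning and evaluation heuristics so they need not visit every node, but the underlying idea is this same exhaustive scoring.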


  2. Artificial Intelligence will kill us

Many people say machines are going to kill us all.


While it’s true that computers are very good at doing jobs people do, it’s also true that people have been predicting that machines are going to kill us for centuries. They almost always turn out to be wrong, for the simple reason that people are more inventive than machines. So what’s new?


Computers are good at some things and bad at others. Speech recognition is an obvious example: it is terrific but not perfect, and when it fails, the failures can be consequential, which is why safety-critical systems cannot rely on it alone.


Another obvious example is driving a car. Some cars are now so clever that they drive themselves. But they still have problems with pedestrians; their vision is imperfect, and they can struggle to decide when it is safe to brake.


Most things people do are somewhere in between: good but not perfect. “We are all cyborgs now,” as the saying goes; we are all hybrids between computers and humans.

Machines are excellent at the things machines do best, people can do things machines cannot, and both make mistakes. So there is a combination problem. A voice-recognition system put in charge of driving would be dangerous, even if it were the best voice-recognition system in the world. A self-driving system that makes mistakes would also be hazardous, even if it were the fastest driver on the road.


Machines and people together can do things that neither can do alone.


  3. We’re not ready for artificial intelligence

The myth here is that we’re not ready for artificial intelligence. The AI community (and its opponents) tend to assume that AI belongs to some distant, perfected future: nearly certain to happen but almost unimaginable. This view depends on two assumptions, both wrong.


The first assumption is that AI is something separate from other technology. But as it turns out, artificial intelligence is just another technology. It merely happens to have unusually large payoffs.


The second assumption is that the AI community has been making progress without problems. But this is also wrong. AI technology has led to some unexpected issues.


If AI were just like familiar technology, we would know roughly what to expect of it. But AI is not quite like the technology we know: because we don’t know everything it can do, we can’t predict where it will go, or when.


  4. Artificial Intelligence will collapse the job market

The way we talk today about artificial intelligence seems to imply that it will eliminate jobs. Whenever a new AI technology comes along, the media and politicians warn us it will lead to mass unemployment.


Somehow, we have come to believe that machines will someday take over our jobs. But the idea that machines could do those jobs better than we can has been with us for well over a century, going back to the dawn of the Industrial Revolution. In 1930, the economist John Maynard Keynes famously warned of “technological unemployment”: joblessness caused by our discovering ways to economise on labour faster than we can find new uses for it.

Machines will not simply destroy our jobs; they will also create new ones. Routine work such as data preparation will be automated, but deciding which questions to ask, judging what matters, and dealing with other people is the kind of work people will continue to do.


But let there be no doubt: artificial intelligence has profound implications for the economy.


For one thing, it will accelerate productivity growth. That may sound minor: after all, productivity is just how much output we get for a given input of labour. But productivity growth is crucial because it leads to rising incomes and living standards. Faster productivity growth raises real incomes because wages can grow faster.

Technology has a profound effect on the economy. The Industrial Revolution, for example, lifted the growth rate of output in England between 1770 and 1830, modestly at first but with compounding effects over the following century. In the United States, productivity growth accelerated through the twentieth century before slowing markedly after about 1970.


  5. Artificial intelligence will make us question our humanity

The first question that used to haunt me was this: “How do we know we are actually human?” It’s a hard one. How do you know you’re not a machine?


But now I think I know. The clue is consciousness.


Consciousness, so the story goes, is what makes us human. We are conscious, and we are aware of ourselves. If machines could think, they would look like us.

But consciousness is a tricky thing: no one can quite say what it is, and most of us find our own hard to describe.

Take me, for example. When I wake up, I am conscious. I see, I hear, I smell, I feel, I think. Those are my sensations. How am I aware of them?

Well, if I have some coffee, I’m conscious of the taste in my mouth. And if I stub my toe, bump my knee, or brush my teeth, I’m conscious of those sensations too.

And, of course, I’m conscious of myself.

But there are other sensations. My skin, for instance, feels numb, and that’s unpleasant.

Something seems wrong. If my consciousness were like a window, this feeling would be like looking through the wrong pane. But there is nothing wrong. It is just the way sensations appear to me.

Consciousness, in other words, may not be unique to humans. Something like it appears to exist in many animals, and it is part of what gives living organisms their individuality. But it does not have much to do with thinking.


  6. Artificial intelligence will arm computers to make decisions on their own without human control

True artificial intelligence will be much more limited than some people seem to believe. Intelligence is not the same thing as unconstrained autonomy: we want computers to be intelligent and to make decisions that make sense, but which decisions they are allowed to make remains up to us.

Suppose you wanted to train a computer to play chess. What kinds of goals might you assign to it? You might want it to play better than a human. You might want it to play competently and not make obvious mistakes. You might want it to play better than any computer had previously played. You might want it to play in a way you like (a kind of “style”).

Each of these goals would be worthwhile, but notice that every one of them is defined relative to human players and human preferences. A computer that plays better than any human is measured against humans; a computer that avoids obvious mistakes is measured against human judgment; a computer with a “style” you like is measured against your taste.

It is hard enough to design a machine that plays chess well. It is far harder to design one that forms and pursues preferences of its own. Building machines that merely perform well is difficult; building machines that choose their own goals is harder still, and nothing about today’s AI requires it.


  7. All AI is dangerous, scary, and unpredictable

Artificial intelligence will almost certainly turn out to have many positive effects. It is true that AI can be dangerous, and that some people will use it as a weapon. It is also true that AI can surprise us, sometimes by doing things we never dreamed of, though more often in ways we have anticipated. But bad AI is dangerous; good AI is not.

Good AI is intelligent. It understands how the world works and is smart enough to know what to do. It can be as bright as we want to make it.

And for all the horror stories of Artificial Intelligence gone wrong, evil AIs today work precisely as intended. They are programmed to do bad things, usually with cold-blooded efficiency. The stories we tell about AI are products of science fiction, not science.


The reason AI scares us is that it isn’t intelligent; it is unpredictable. It acts in ways we don’t understand and in ways we mostly don’t like.


  8. Artificial intelligence can never be creative

As much as we like to think we are geniuses, we are pretty limited. Computers can do things we can’t even dream of doing by hand, and they can do many things at once.

One of the biggest myths about artificial intelligence is the belief that computers can never be creative, which probably goes back to science fiction and movies such as 2001: A Space Odyssey.

This idea is wrong, or at least premature. Computers may well be creative one day. The real question is whether they can be creative in the way that humans are.

Artificial intelligence, like biology, is an empirical science: it tells us how things work, not how they should work. It leads us to expect that creativity could emerge from computation, just as it emerged from biology, but computer science has not yet figured out what creativity is.


Computer scientists already know a few things about creativity.


They have learned that while computers are good at finding relationships, they are not good at creating them.


  • People can teach computers to use patterns, but computers cannot yet create patterns of their own.
  • People can teach computers to apply rules, but computers cannot create new rules.
  • People can teach computers to use analogies, but computers cannot create analogies.
  • People can teach computers to use metaphors, but computers cannot create metaphors.
  • People can teach computers to parse sentences, but computers cannot compose sentences of their own.
  • People can teach computers to use stories, but computers cannot create stories.


Computers can combine these abilities, but they cannot combine them in ways that make sense.


The History of Artificial Intelligence (AI)


The AI revolution has been fueled by developments in computing, computer language, and mathematical theory.

AI began because scientists and mathematicians recognized fundamental limitations in their ability to solve problems. The solutions they were looking for required the kind of creative and systematic thinking that humans can do.

AI has spawned notable early applications. The field’s founding event was the Dartmouth Summer Research Project on Artificial Intelligence in 1956, and the decade that followed produced striking demonstration programs: systems that proved logic theorems, played checkers, and, like the famous ELIZA program of 1966, held simple typed conversations.

But these were “narrow” AI programs: each could do only one thing. ELIZA could chat but not play checkers; the checkers player could not chat. These programs were clever, but by no means creative or systematic, and it wasn’t long before people realized how much more could be gained from a “general” artificial intelligence program.

Listed below is a timeline of the evolution of AI:



Artificial Intelligence (AI) in the 1950s


The first significant AI milestone came when Alan Turing published his seminal paper “Computing Machinery and Intelligence” in 1950. Rather than define “thinking” directly, Turing proposed a practical test, now called the Turing test: if a machine’s typed conversation cannot reliably be distinguished from a human’s, we should be prepared to call it intelligent.

Turing’s paper was eminently readable, and this was important. The field needed an easily accessible description of its terminology and concepts. Turing’s paper was a perfect introduction to AI.

Turing’s approach was operational rather than philosophical. He sidestepped debates about the nature of mind by asking what a machine could be observed to do, and in the paper he anticipated and answered many of the standard objections to machine intelligence.

The phrase “artificial intelligence” itself was coined by John McCarthy, who, with Marvin Minsky, Nathaniel Rochester, and Claude Shannon, wrote the 1955 proposal for the Dartmouth workshop: in effect a manifesto calling for the development of artificial intelligence, or AI.

The proposal described several kinds of intelligent behaviour. One was powerful manipulation: the ability to play games such as chess or Go. Another was powerful processing: the ability to sort and classify things. Another was powerful learning: the ability to change behaviour according to the data received. All of these, the proposal said, were achievable in principle.


The proposal was prescient. In 1961, Minsky published “Steps Toward Artificial Intelligence,” a survey that mapped out the field’s central problems. Minsky believed that AI was the next logical development after computing, and just as far-reaching.

But Minsky’s view was a minority one. AI was hard to achieve, and people working on AI in the 1950s were frustrated. The field was dominated by mathematicians and engineers, who believed that intelligence was some kind of computation and that algorithms should carry it out.

In the late 1950s, Allen Newell and Herbert Simon described a new kind of AI program, now called symbolic AI. Their systems, such as the Logic Theorist and the General Problem Solver, were “symbolic” because they operated not on numbers, as most earlier programs had, but on symbols: words, sentences, and logical expressions.

Symbolic AI was designed to address a problem called “knowledge representation”: how to encode what a system knows in a form it can reason over, a problem that also arises in fields like psychology and economics.



Symbolic Artificial Intelligence (AI): 1956-1974


The most significant early AI breakthrough was symbolic AI, which took shape around 1956. Before that, AI research consisted mainly of trying different things to see which were helpful, and the practical output of 1950s computing research was largely infrastructure such as compilers and databases.

The breakthrough deepened when researchers proposed “learning” as a technique for AI. The idea was that people could teach computers to do complicated things by giving only partial information and asking the computer to learn the rest. Arthur Samuel’s checkers program, which improved by playing against itself, showed that it worked.

There was a catch. The researchers worked mainly with symbolic AI, meaning the computer performed its task by manipulating explicit symbols under hand-written rules. But learning involves recognizing patterns in data, and in a purely symbolic system there is no ready way to tell the computer what to look for. The computer must find the pattern on its own, fed only with partial information about the task.

This was a classic puzzle: how can a computer find things out for itself? One early answer was to give the computer a set of simple rules and an evaluation procedure, and let it learn by trying out possible moves, scoring the resulting positions, and gradually adjusting which features of a position it weighs most heavily.

Early AI work focused on symbolic logic, but by the 1970s, almost everyone realized that it would be hopelessly difficult to build a complete AI system using only symbolic logic.

The symbolic approach focused on what language does, using words and symbols to represent ideas. That was sound as far as it went, but it was not enough. The 1970s saw the emergence of programs that could take sentences expressing people’s thoughts and restructure them, using the symbols in the sentences as clues to their meaning.



Artificial Intelligence (AI) in the 1970s


In 1970, Life magazine introduced a mainstream audience to AI with a profile of Shakey, SRI’s mobile robot, billed as “the first electronic person.” Here, the article suggested, was a machine that could think.

But by then AI was as much the computer industry’s baby as science’s. The industry had successfully sold the concept of a computer that could think, without yet having built one.

The “AI winters” of the mid-1970s and late 1980s were periods when funding and confidence collapsed after inflated promises went unmet. The underlying technology kept advancing, but pundits declared the field a failure, and many researchers were understandably concerned.

The AI winter was, in fact, a reaction to the promise and danger of artificial intelligence.

Today, artificial intelligence is back in the news. There is a great deal of talk about AI and jobs and AI and inequality. But the same was true in the 1970s.

Back in 1975, some computer scientists and much of the public were predicting that AI would spell the end of the human era. The thinking went like this: once computers are smart enough to rival people, they can replace us. Once they can do everything better than we can, they will be better than us.

Computers were already impressive. The computers of the 1970s could perform sophisticated tasks, such as chess-playing and robot control. They were built from transistors and integrated circuits rather than vacuum tubes, but they were still slow and expensive by modern standards.

Yet in 1975, computers were capable of things that had once seemed impossible. There was early speech recognition, which let people speak to computers, and speech synthesis, which let computers talk back. There was handwriting recognition, which let people write on computers almost as if writing on paper. And there was logic programming, which let people tell computers what they wanted achieved and have the machine work out how.

These advances were exciting but also worrying. By then, computers had evolved from the room-filling machines of World War II to machines like the IBM System/360, introduced in 1964 and roughly the size of a set of cabinets. The 360’s major innovation was not raw speed but a compatible family architecture: programs written for one model would run on the others.



Artificial Intelligence in the 1980s: Expert Systems

In the 1980s, computer scientists turned a line of research begun in the 1970s into a practical artificial intelligence product: the expert system. By the mid-1980s it was a fast-growing commercial business. But the experts who designed these systems were having second thoughts.

They noticed that expert systems weren’t as intelligent as people thought; the systems they had designed weren’t really smart at all.

Was this obvious? No. It was the computer scientists who noticed it. And it came as a shock to them.

Expert systems (also called knowledge-based systems) are computer programs that answer questions, often posed in natural language, within a narrow domain of expertise.

Expert systems are a kind of artificial intelligence. They draw on AI theory, such as knowledge representation, and on ideas from human cognition, such as mental models. They differ from other kinds of AI systems in that they represent knowledge explicitly, typically using formal logic and if-then rules.

Expert systems were first proposed in the 1950s, but the first workable systems, such as DENDRAL and MYCIN, appeared between the late 1960s and early 1970s. The field then grew quickly: by the mid-1980s, reportedly thousands of systems were in use or in development.

Expert systems are limited in what they can do. They cannot learn, and they cannot reason. They cannot generalize from experience, and they cannot explain their reasoning. They can address only a small range of problems.

What expert systems can do is apply agreed-upon facts and rules. When presented with a question, an expert system consults its knowledge base, using rules of logic to determine whether it has enough information to answer and, if so, how to answer it.
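Consulting a knowledge base with rules of logic is often done by forward chaining: apply every rule whose premises are known, add the conclusions as new facts, and repeat until nothing new appears. The facts, rules, and `forward_chain` helper below are hypothetical examples, not any real system’s API:

```python
# Sketch of forward-chaining inference over a small knowledge base:
# fire every rule whose premises are already known, add the
# conclusion as a new fact, and repeat until no new facts appear.
# Facts and rules here are hypothetical examples.

def forward_chain(facts, rules):
    """Derive all facts reachable from the starting facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("has_feathers",), "is_bird"),
    (("is_bird", "can_fly"), "can_migrate"),
]
derived = forward_chain({"has_feathers", "can_fly"}, rules)
print(sorted(derived))  # → ['can_fly', 'can_migrate', 'has_feathers', 'is_bird']
```

Note the chaining: “is_bird” is not in the starting facts, yet once derived it enables the second rule, which is exactly how an expert system decides whether it has enough information to answer.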

The expert system’s behaviour is heavily constrained by its knowledge base: it can reason only about the facts and rules it has been given, and it fails abruptly on questions that fall outside them.

Expert systems do not always answer questions correctly. However, expert systems do tend to produce answers more quickly than humans do.



Artificial Intelligence (AI) in the 1990s


In 1997, IBM’s chess computer Deep Blue defeated the reigning world champion, Garry Kasparov, by a score of 3.5 games to 2.5.

Deep Blue was not a learning system in the modern sense. It combined brute-force game-tree search, running on custom chess chips, with a hand-tuned evaluation function, letting it examine on the order of 200 million positions per second.

The machine was developed by an IBM team that included Feng-hsiung Hsu and Murray Campbell, building on work begun as a student project at Carnegie Mellon.

The victory mattered beyond chess. The same basic recipe of search, heuristics, and specialized hardware was being applied to planning, scheduling, and expert-system problems in fields from air traffic control to medicine.

Deep Blue, in other words, was not some kind of magical computer. It was a computer programmed to explore possibilities systematically, and it did real work.



Artificial Intelligence (AI) in the 2000s


In 2001, Steven Spielberg released A.I. Artificial Intelligence, a film about a robot boy who longs to be human: a project Stanley Kubrick had developed for decades before Spielberg completed it. Kubrick’s own 2001: A Space Odyssey had given film its most famous artificial mind, HAL, a villain the movie nonetheless treated sympathetically.

In 2004, the Defense Advanced Research Projects Agency (DARPA) sponsored the DARPA Grand Challenge to spur the development of autonomous vehicle technology. The challenge was open to teams from universities, companies, and elsewhere, with a cash prize for the first autonomous vehicle to finish the course.

The 2004 course ran roughly 142 miles across the Mojave Desert, and no vehicle completed it; the best entrant managed only a few miles. In the 2005 rematch, five vehicles finished, led by Stanford’s “Stanley.” A 2007 follow-up, the Urban Challenge, moved the competition onto a mock city course with traffic, intersections, and parking.

In 2005, Ray Kurzweil published The Singularity Is Near. It described a coming technological singularity: a point at which machine intelligence surpasses human intelligence, and change becomes so rapid that unaided humans cannot keep up.

In February 2011, IBM’s Watson won the television quiz show Jeopardy!, defeating Ken Jennings and Brad Rutter, the two most successful human champions in the show’s history. IBM had spent years and a substantial research budget developing it.

Watson was not a robot on stage but a roomful of servers: a cluster of IBM Power7 machines with thousands of processor cores, each working on part of the problem in parallel. For each clue, Watson parsed the natural-language question, generated hundreds of candidate answers from its stored knowledge, scored the evidence for each, and buzzed in only when its confidence was high enough.

Watson was especially strong on factual categories such as science, history, and general trivia. It stumbled occasionally on wordplay and context, famously answering “Toronto” in a Final Jeopardy! category about U.S. cities, yet it still finished far ahead of both human champions.



Artificial Intelligence: 2010 to Present


The year 2011 was the year of Siri. The assistant grew out of SRI International’s work on the DARPA-funded CALO project. It was spun off as a startup, released as an iPhone app in early 2010, and acquired by Apple that same year; in October 2011 it finally emerged as a built-in feature of the iPhone 4S.

Siri was an impressive achievement. It used speech recognition to transcribe human speech, then used natural language understanding to act on it: setting reminders, sending messages, and answering questions about sports, movies, music, and restaurants. At launch it understood English, French, and German. It was fast and reasonably accurate, and, according to Apple, it didn’t just answer questions; it “helps users get things done.”

Siri’s launch, however, was a bit of a letdown, at least for early adopters. Siri was buggy. Sometimes it didn’t respond at all, or responded with an answer that had nothing to do with the question. And it often struggled with accents.

Siri was new and unfamiliar, so Apple tried to ease people into it. At launch, you invoked Siri by holding down the home button; the hands-free “Hey Siri” wake phrase came later, with iOS 8 in 2014. The people at Apple tried to anticipate people’s questions and answer them, and they did their best. But people quickly got tired of having to invoke Siri every time they wanted information.

People, after all, do not just want to ask questions. They want information they can act on. They want instructions, and they want entertainment; they want to play games and to share what they find with friends. To accomplish all of that, Siri had to become much more than an intelligent voice service, and it has been improving steadily ever since.

In 2012, Google’s Brain team, co-led by the computer scientist Jeff Dean, trained a large neural network across thousands of machines on millions of unlabelled frames taken from YouTube videos. Without ever being told what a cat was, the network learned on its own to respond to images of cats, detecting cat faces in held-out test images with roughly 75 percent accuracy. The experiment became a landmark demonstration of large-scale unsupervised learning.

The 2013 movie ‘Her’, written and directed by Spike Jonze and starring Joaquin Phoenix, revolves around Theodore Twombly, a lonely letter writer who develops a relationship with his computer operating system, an artificial intelligence named Samantha.

Theodore’s relationship with Samantha is, in many ways, the central relationship in the film. Samantha (voiced by Scarlett Johansson) converses naturally, displays emotion, learns and grows through her interactions with Theodore, and ultimately evolves beyond him.

In 2014, a chatbot named Eugene Goostman was reported to have passed the Turing test. The program, which portrays a 13-year-old Ukrainian boy and was developed by a team led by Vladimir Veselov, convinced 33 percent of the judges that it was human at a competition held at the Royal Society in London on 7 June 2014, organized by the University of Reading to mark the 60th anniversary of Alan Turing’s death. Many researchers dispute that this amounted to a genuine pass of the test.



Types of Artificial Intelligence



Type 1: Based on Capabilities

Weak Artificial Intelligence or Narrow Artificial Intelligence

Narrow AI is Artificial Intelligence that can do one narrow task very well, but nothing else. For example, a Narrow AI program might recognize faces, drive a car, or translate a document from one language to another.

The best-known example is IBM’s Watson, which beat human champions at Jeopardy! by answering open-domain trivia questions, but which could do nothing outside that one question-answering task.



General Artificial Intelligence


General AI, or Strong AI, aims to create a computer program (or programs) that can learn, think, and behave intelligently.

The goals are many, but one of the most important is the ability to learn and reason across domains: to transfer what the system learns in one setting to problems it has never seen. Some researchers also argue that a general AI will need to model and predict the behaviour of other intelligent agents before it becomes practical.

Creating a program that does only what a human can do is not the whole ambition. Humans are one thing; computers are another. We want the system to be as capable as possible, and that requires an ability to learn from both people and computers.



Super Artificial Intelligence


Artificial superintelligence is a hypothetical agent that possesses intelligence far surpassing the brightest and most gifted human minds.

Do you think most artificial intelligence researchers are working on creating super AI? I doubt it. Most people working on AI don’t believe in super AI. The only people who believe in super AI are the fanatics who think that by 2040 super AI will be on the loose, like in The Matrix or The Terminator, or that it will turn evil and take over the world.

These people don’t realize that if someone creates super AI, it won’t be like The Matrix or The Terminator. It won’t be something you can understand or control. It will be something very different, something far more potent than anything else. It will learn things, as humans do, but much more quickly and vastly more thoroughly. And it will be something you can talk to, and it won’t pretend to understand you because it will already understand much more.

(Of course, The Matrix and The Terminator are fiction.)

Fanatics who believe in super AI think that technology makes it possible for humans to control it. They think that super AI will understand human wishes and will do whatever we want. But nothing over which nobody can maintain control fits that picture, and they fear it is dangerous for the very reason it is supposed to be useful: because it is intelligent.

They are wrong.

Super AI isn’t a person. It’s an idea, an idea like fire or freedom or mathematics. And ideas do strange things when they become real. Ideas can multiply without limit. They can take over the world. They can be revised, but they don’t die; ideas can live forever. Intelligence is just another idea, like fire or freedom or mathematics.



Goals of Artificial Intelligence


Reasoning, Problem Solving

Humans are good at reasoning and problem-solving, and this observation is the central claim of AI: to do those things, we need to construct models of the world around us. In AI, those models take several forms, from rule bases to belief networks, and they are paired with inference engines, the algorithms that reason over them.

Formal models allow us to reason effectively about the world, but they can also be abused. A formal model is a model of the world, not the world itself, and the gap between the model and the real thing never fully closes. If our models are powerful enough, they can represent some part of the natural world, but probably not all of it. Our models will always be biased, incomplete, or wrong somewhere.

Decision-making, in this setting, is the problem of deciding which model and inference engine to use when. We make this decision by reasoning about the pros and cons of the various models.

Deciding which model to use is a challenging problem, and AI researchers have spent decades developing good decision-making algorithms. But AI research is also about building models and inference engines that can do reasoning and problem-solving on their own, without human guidance.



Knowledge representation


Knowledge representation aims to represent knowledge so that its meaning can be interpreted, analyzed, and manipulated.

Knowledge representation is a broad topic, and there are several ways to approach it. One way is to define it in terms of specific problems: What is knowledge? How should we represent it? How can we use our knowledge of representation to design new systems?

Another way is to ask broader questions. What is the relationship between knowledge representation and artificial intelligence? What are the implications for AI? What are the implications for us?

Knowledge representation has implications for artificial intelligence, but AI also has implications for knowledge representation. The fundamental insight is this: if a computer is to use knowledge, that knowledge must be represented in a form the computer can reason over.
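To make this concrete, here is a minimal sketch of knowledge in a machine-usable form: facts stored as triples, plus one hand-written rule a program can apply. The facts, relation names, and rule are invented for illustration and do not come from any particular system.

```python
# Facts as (subject, relation, object) triples -- a toy knowledge base.
# The entities and relations here are invented for illustration.
facts = {
    ("Socrates", "is_a", "human"),
    ("human", "subclass_of", "mortal"),
}

def infer(facts):
    """Apply one rule to fixpoint: X is_a C and C subclass_of D => X is_a D."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = {
            (x, "is_a", d)
            for (x, r1, c) in derived if r1 == "is_a"
            for (c2, r2, d) in derived if r2 == "subclass_of" and c2 == c
        }
        if not new <= derived:   # anything genuinely new?
            derived |= new
            changed = True
    return derived

print(("Socrates", "is_a", "mortal") in infer(facts))  # True
```

The point is the form, not the content: because the knowledge is structured, the program can derive a fact nobody typed in.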

A detailed formal specification of a system’s knowledge is often used as the starting point for building it, much as the specification of a programming language is the starting point for building its compiler. In both cases, the specification is explicitly based on the knowledge the system is intended to have.

The specification for a compiler is, for instance, a description of all of the rules the compiler uses to transform one programming language into another: it describes everything the compiler can do with the language, and it dictates the kinds of operations the compiler must perform.

The specification for an AI system, in contrast, is not a description of an algorithm. It is a description of the knowledge the system is intended to have.

If the AI system is to answer questions, its knowledge representation must describe the kinds of questions it will answer. If it is to understand speech, the representation must describe the forms speech can take; if it is to reason, the kinds of reasoning; if it is to perform arbitrary computations, the kinds of computations.

The AI system will also know the goals it has and the information it can access. The goals will be descriptions of the goals the AI system is intended to have.





Planning


We use computers to plan because a computer can follow a plan exactly. If you want to find the shortest route from Point A to Point B, you want a computer program to decide where to drive, where to stop, and where to go next; if a shorter way exists, it will find it and follow it. Doing this by hand, you’ll take a long way, and you’ll stop too late. To follow a plan, a computer has to know in advance what will happen and when.
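The route-finding idea can be sketched as a graph search. This is a minimal illustration with an invented toy road map; breadth-first search finds the route with the fewest hops.

```python
from collections import deque

# A toy road map as an adjacency list. The places and roads are
# invented for illustration.
ROADS = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def shortest_route(start, goal):
    """Breadth-first search: returns the route with the fewest hops."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in ROADS[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

print(shortest_route("A", "E"))  # ['A', 'B', 'D', 'E']
```

A real route planner would weight each road by distance or travel time and use Dijkstra’s algorithm or A* instead, but the structure of the plan, a sequence of steps worked out before driving begins, is the same.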

A computer can follow two kinds of plans: a plan written entirely in advance, and a plan elaborated while it is being followed. One of the earliest computers, the ENIAC, built in the 1940s, was programmed the first way: human programmers worked out every step in advance and set the machine up to execute them. Later machines accepted programs written in symbolic languages, which the machine itself translated into the detailed instructions it would actually run.

ENIAC was useful for some things, but as computers became more powerful, people had to figure out how to divide the work. The division that emerged was to let the human write the plan and the computer carry it out. (“Let’s put someone in charge,” they said.)

The programmer wrote the plan, and the computer executed it, for instance, finding the shortest route. The computer had a job to do, and it did it.

The programmer still had to anticipate trouble: to know when the computer might go wrong, to expect it to hit cases the plan didn’t cover, and to decide in advance what it should do then.





Learning


Learning is an essential characteristic of intelligent machines (and human beings). Machines and human beings alike learn by trying things out, seeing what works and what doesn’t, and testing the results.

But how do we learn about the world? How do we get from “I want chocolate” to “I want chocolate chip cookies?”

In artificial intelligence, one influential model of learning is called “reinforcement learning.” Instead of saying “I want this” or “I want that,” the learner in effect says, “I want my accumulated reward to be greater than my cost,” and adjusts its behaviour to make that true.
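The reward-versus-cost idea can be sketched with a minimal two-armed bandit, one of the simplest reinforcement-learning settings. The payoff probabilities, step counts, and exploration rate below are invented for illustration.

```python
import random

# A two-armed bandit: each arm pays 1.0 with a different (hidden)
# probability. The agent learns by trial and error which arm keeps
# its reward high -- the reinforcement-learning idea in miniature.

def pull(arm):
    return 1.0 if random.random() < (0.3, 0.7)[arm] else 0.0

def run(steps=5000, epsilon=0.1, seed=0):
    random.seed(seed)
    values = [0.0, 0.0]   # running estimate of each arm's payoff
    counts = [0, 0]
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(2)        # explore: try something random
        else:
            arm = values.index(max(values))  # exploit: use the best estimate
        reward = pull(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # update mean
    return values

print(run())  # the estimate for arm 1 should approach 0.7
```

Nobody tells the agent which arm is better; it discovers that from rewards alone, balancing exploration of the unknown against exploitation of what it already believes.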

Reinforcement learning has its origins in artificial intelligence and in behavioural psychology, but it is now used more broadly, in fields ranging from economics to biology. In the biological setting it is related in spirit to “evolutionary algorithms,” in which populations of agents improve by trial and error, though the two families of methods are distinct.

The biological use of reinforcement learning has many exciting applications, but there are a few fundamental questions: What drives evolution? What are its rules? What happens when evolution goes wrong?



Natural Language Processing


Natural language processing is a crucial step toward artificial intelligence. Language is one of the computer’s most complicated problems, not because its rules are numerous, but because so much of its meaning depends on context and on knowledge of the world.

Of course, computers can handle physics, too. Physics is one of their most important applications: with physical computation we do everything from exploring the Moon to designing nuclear reactors. But computing physics is not the same thing as understanding the physical world. Suppose you were building a nuclear reactor. You’d have to understand how uranium behaves under heat and pressure, and what happens if you apply too much pressure or too little.

Physics, at least, has precise and agreed-upon units: physicists measure things like mass and force in terms of kilograms and seconds, and any two measurements can be compared. The units are arbitrary conventions, but they are shared. With humans it is harder. We can measure things like intelligence, but we can’t pin down exactly what the measurements mean.

Natural language has no tidy units of measure at all. A computer working with language doesn’t need to prove theorems or predict trajectories, the way it must in physics; what it needs is to understand what people mean. And that turns out to be the harder problem.

One of the unsolved problems in computer science is the problem of getting computers to understand standard English. (This is called natural language understanding.)

This problem is different from machine translation, the problem of converting text written in one language into another. Early machine translation programs worked roughly the way a person with a bilingual dictionary would: the computer looks each English word up in a dictionary, substitutes a word of the other language, and then patches up the grammar of the resulting sentence.

To get a computer to understand standard English, it is not enough to write a dictionary. Computers do not understand English in the same way that human beings do. They have to know what those words mean, and they need to know how to use them.

To understand English, you need to teach the computer something you yourself know without thinking about it: how to understand sentences.

How do you teach a computer to “understand” a sentence? You can give the computer a sentence and ask, “What does this sentence mean?” But the computer has no way of judging whether any proposed answer is correct. All it can do on its own is take each word and look it up in its dictionary.
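The dictionary-lookup approach can be sketched in a few lines, along with the reason it falls short: lookup finds every sense of a word, but nothing in the dictionary alone says which sense the sentence means. The words and glosses below are invented.

```python
# A toy dictionary of word senses. The words and glosses are invented
# for illustration.
DICTIONARY = {
    "bank": ["land alongside a river", "institution that holds money"],
    "deposit": ["money put into an account", "a layer of sediment"],
}

def look_up(sentence):
    """Return every dictionary sense for each word the computer knows."""
    return {w: DICTIONARY[w] for w in sentence.lower().split() if w in DICTIONARY}

senses = look_up("i made a deposit at the bank")
print(senses["bank"])
# Lookup finds both senses of "bank"; the dictionary alone cannot
# tell the computer that the money sense is the right one here.
```

Resolving which sense is meant requires context, which is exactly the part of understanding that a dictionary does not contain.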





Perception


The goal of perception is to gather as much information as cheaply and quickly as possible, and to be able to act on that information. The more information we can gather for a given cost, the better our confidence in our actions.

Perception is always uncertain. We don’t know exactly what we’re looking at, and often we won’t know until after the fact. So perception is a tradeoff between speed and accuracy: gathering more information improves our judgment, but it costs time we may not have.

The tradeoff between accuracy and speed is partly economic. Speed is cheap, but accuracy is not. If we think too long, we act too late; if we think too little, we risk missing something important.

The tradeoff is also partly psychological. If we overthink, we risk overconfidence. If we don’t think enough, we risk underconfidence.

The speed-accuracy tradeoff is familiar to everyone. You step on a wood chip, and it lodges in your shoe. You bend down, flex the shoe, and pull the chip out, all without much deliberation.

The way you perceive the world depends on many things: your senses, the thoughts in your mind, and the connections you build between them. In AI research, we usually approach perception in one of two ways.

The approach closest to AI’s original ambitions is to model human perception. We treat human perception as a complicated process, involving a great many connections, and try to build a machine that uses a similar internal process.

The other approach is to imitate human perception’s behaviour without copying its mechanism. The first method tries to build a machine that works like human perception; the second tries to build a machine that merely acts like it.

Each approach is in some ways better and in some ways worse. One may work well for face recognition, say, and poorly for recognizing animals, while the other shows the opposite pattern on other tasks.

The best approach is often to combine the two, using each method in the contexts where it is strong.



Motion and Manipulation


In the 1980s, artificial intelligence researcher Rodney Brooks argued that intelligence should be built from the bottom up, through machines that act in the physical world. Rather than reasoning over symbolic models, his robots coupled perception directly to action, and they could operate in messy real environments where earlier, more deliberative machines failed.

Brooks was right, in his way. He went on to co-found iRobot, whose Roomba vacuum became one of the most widely deployed robots in history, and later Rethink Robotics, whose Baxter robot could be taught a manipulation task simply by physically guiding its arms through the motions. Today’s research manipulators can pick up unfamiliar objects, open doors, and assemble parts.

That sounds like science fiction. But such machines, and hundreds more like them, are quickly becoming a reality. The line between science fiction and science fact is becoming increasingly fuzzy.

Machines like these manipulate physical objects much the way we do. The best of them do some tasks far more precisely and tirelessly than any human, and with almost no effort on our part.

Today’s robots do many valuable things, like:

  • assemble car parts
  • assemble furniture
  • mix concrete
  • move boxes
  • weld
  • cut metal
  • make jewellery



Social Intelligence


Social intelligence is about how you act in a social context. It concerns your relationship with other people. It includes the ability to understand other people’s intentions and emotions and to respond appropriately.

Social intelligence, the ability to recognize other people’s intentions and emotions, is one of the most challenging goals in artificial intelligence. It’s not easy partly because we have no models for it.

Most Artificial Intelligence researchers agree that a computer that learns through imitation will have to solve social intelligence along the way. That’s the idea: you teach it to imitate people, and the computer gets smarter.

But imitation isn’t the same as learning. The computer can’t learn just by imitating; it has to learn from examples, and examples of social behaviour are hard to come by. We observe them only when other people happen to be interacting in front of us.

Also, the goal of learning through imitation has another, more immediate problem: it can take a very long time. The computer first has to be trained on many examples of human behaviour before it can begin imitating anyone. Training from labelled examples is called supervised learning, and it can be slow and expensive.

But supervised learning is not the only way to learn. Sometimes structure can be found in the data itself: the computer learns regularities on its own, quickly and automatically, and then applies what it has learned to new situations, including social ones. This type of learning is called unsupervised learning.

Unsupervised learning can be faster and cheaper, but it pursues a different goal: the computer has to learn to generalize, finding patterns in a situation without being given labelled examples of each case.

Now, this may seem like common sense. And it is. But the distinction has not always been evident in artificial intelligence, where our intuitions are rooted in the quest for robots that learn through imitation. So it’s only natural that learning through imitation became the default model.



Latest applications of Artificial Intelligence



Smart Hiring


Humans have always been mediocre at hiring, and the written record of our attempts is dismal. Two hundred years ago, hiring and training were one process: you took someone on, and a good master trained them into the job.

But training doesn’t work well. Training is expensive. It takes time. And training won’t help you hire the right person for a given job.

Artificial intelligence can help with hiring. It is cheaper than training, faster, and easier to change.

A piece of software can simulate the decision-making processes of human employees. Using machine learning, it can learn which factors are essential. It can use statistics to test the relationship between a candidate’s qualifications and what the job requires. And it can model the way different employees make hiring decisions.
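The factor-weighting idea can be sketched as a simple scoring function. The factors, weights, and candidates below are invented; a real system would learn the weights from past hiring outcomes rather than hard-coding them.

```python
# Invented factors and weights for scoring candidates against a job.
# A real system would learn these weights from historical outcomes.
WEIGHTS = {"years_experience": 0.4, "skills_match": 0.5, "referral": 0.1}

def score(candidate):
    """Weighted sum of a candidate's factor values (each scaled 0..1)."""
    return sum(WEIGHTS[k] * candidate.get(k, 0.0) for k in WEIGHTS)

candidates = [
    {"name": "A", "years_experience": 0.8, "skills_match": 0.9, "referral": 0.0},
    {"name": "B", "years_experience": 0.5, "skills_match": 0.6, "referral": 1.0},
]
ranked = sorted(candidates, key=score, reverse=True)
print([c["name"] for c in ranked])  # ['A', 'B']
```

Even a toy like this makes the design question visible: the ranking is only as fair as the weights, which is why learned hiring models must be audited for bias in the data they were trained on.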

The software can work for any organization and can be adapted to any job description, even if the job description is “find the right person for a given job.”

The software can also do training. It can teach people how to make decisions or use games to teach people how to work together.

The software can’t replace people, but it can supplement them. It can help trainees learn and work with cooperative groups.



Smart Agriculture


Agriculture is not traditionally a high-tech business. Farming practices change slowly, profit margins are low, and costs are high. That has long made the business of agriculture relatively unattractive to technology companies.

Google’s core business has been searching and advertising. Amazon’s is e-commerce. Apple’s is consumer electronics. But agriculture has none of those.

But agriculture is becoming a more attractive business for big tech companies. Now that farmers are using satellite imagery and drones to plant their fields, genetic information to breed crops, and artificial intelligence to analyze vast amounts of data about the weather and the soil and the crops, agriculture is getting more interesting.

And agriculture is also becoming more interesting for farmers. Artificial intelligence can help farmers make better decisions about how to grow crops. And farmers can use artificial intelligence to help their companies market their crops. The companies that sell AI-driven farming equipment are, in an unusual way, using AI to make money.


Artificial Intelligence Writing

Writing is a creative process, but we don’t know much about how it works. We do, however, know a growing amount about writing with artificial intelligence.

Writing is probabilistic. What you write depends on things you don’t yet know. A writer, for example, has no idea exactly what they will write until it starts happening. The unknowns are many: what words will come to mind next, which ideas will stick, what order they will stick in, and so on. A writer has to work on all of these unknowns at once, and needs something that all of the unknowns can influence.

That’s what Artificial Intelligence is good at. It helps writers write.

So, for example, you want to write a poem. You could sit down and write it. Or you could create a program that writes a poem for you. You give it the constraints: the number of lines, the rhyme scheme, how many syllables per line, and so on. The program doesn’t know what a poem is, but it can tell you which lines go together, which words rhyme, and which syllables go where.
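A constrained generator of the kind described can be sketched in a few lines. It knows nothing about poetry, only which words rhyme and how many syllables each has; the word lists, rhyme groups, and syllable counts below are invented.

```python
import random

# Invented word lists: each entry is (word, syllable count), grouped
# by rhyme sound. The generator knows nothing beyond these constraints.
RHYMES = {
    "ay": [("day", 1), ("away", 2), ("stay", 1)],
    "ight": [("night", 1), ("light", 1), ("delight", 2)],
}
FILLERS = [("the", 1), ("quiet", 2), ("golden", 2), ("fading", 2), ("soft", 1)]

def line(syllables, rhyme_sound, rng):
    """Build one line ending in a rhyming word, within a syllable budget."""
    end_word, end_syll = rng.choice(RHYMES[rhyme_sound])
    words, left = [], syllables - end_syll
    while left > 0:
        w, s = rng.choice([f for f in FILLERS if f[1] <= left])
        words.append(w)
        left -= s
    return " ".join(words + [end_word])

def poem(scheme=("ay", "ight", "ay", "ight"), syllables=6, seed=4):
    rng = random.Random(seed)
    return "\n".join(line(syllables, s, rng) for s in scheme)

print(poem())  # four 6-syllable lines in an ABAB rhyme scheme
```

The output satisfies the constraints without the program having any notion of what a poem means, which is exactly the gap the surrounding text describes.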

The program doesn’t have to write a poem. It can write a short story, or a novel, or whatever. It does, however, have to be at least a little bit creative. When it comes time to write the last paragraph, it can’t just write “The End”. It needs to come up with something.



Virtual Customer Support


Customer support is the process of helping customers solve problems. In Internet commerce, it refers to providing service to customers who buy products online; business conducted over the Internet in this way is often referred to as e-business.

Customer service is a broad term that can refer to several related areas: call centre support, help desk support, telephone support, live chat support, social media support, support for mobile devices, and other forms of communication.

In the Internet age, the customer service function has grown in prominence due to a general increase in online shopping, the use of the Internet for customer service, and the growth of online social networks.

In 2009, Forrester Research reported that 36% of consumers rated their customer service experience as either excellent or good.

Companies use both automated and human-based systems to respond to customer service-related inquiries.

Many automated systems use natural language processing to identify and categorize customer inquiries and respond to them automatically.

Some automated systems use artificial intelligence (AI) to implement processes.

Some examples of artificial intelligence in customer service include:

1) Chatbot: A computer program designed to simulate a conversation with human users. Chatbots are usually implemented as web applications and reside on a web server, but they can also be implemented as desktop applications and operate on mobile phones or tablets.

2) Speech recognition: An algorithm that converts human speech into text. Speech recognition is used for dictation and voice commands and, combined with machine translation, for speech-to-speech translation.



The future of customer service is artificial intelligence. It already provides genuine customer service, answering simple questions (“How do I make a call?” “What’s today’s stock price?”) and offering helpful information (“Do you have an appointment?” “Who is the president of this company?”). The companies that best use AI are leading the way towards customer service that is intelligent, efficient and personalized.



Artificial Intelligence chatbots


Chatbots have already upset a lot of job descriptions, and more change is coming. Many services launch first as chatbots and later grow into full smartphone apps, and those apps are now starting to show off what they can do.

There is intelligent software behind many chatbots, but a human’s intelligence and imagination can make any chatbot intelligent.

A chatbot is a program that responds to questions, trying to read each question the way a human would. Chatbots aren’t naturally good at multitasking: if two people try to talk to one at once, one of them tends to be ignored, so a practical chatbot has to be engineered to juggle conversations. Humans have a way of handling this: we take turns. The chatbot, likewise, learns from the flow of the conversation.
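A rule-based chatbot of this kind can be sketched as a list of patterns and reply templates. The patterns and replies below are invented for illustration; production chatbots layer much more, including learned language models, on top of this basic loop.

```python
import re

# Hand-written patterns and reply templates -- all invented for
# illustration. The first matching rule wins.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"my name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bhelp\b", re.I), "Tell me what you need help with."),
]

def reply(message):
    """Answer with the first rule whose pattern matches the message."""
    for pattern, template in RULES:
        m = pattern.search(message)
        if m:
            return template.format(*m.groups())
    return "I'm not sure I understand. Can you rephrase that?"

print(reply("Hi there"))        # Hello! How can I help you?
print(reply("My name is Ada"))  # Nice to meet you, Ada.
```

This pattern-matching design goes back to Joseph Weizenbaum’s ELIZA in the 1960s; the fallback reply is what keeps the conversation going when no rule applies.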

A chatbot has to be smart enough to be interesting, but it also has to be able to explain itself, and that requires being able to explain whatever it is thinking. Dumping a page of text on the user is inefficient, but so is forcing every interaction through a menu; a chatbot that explains itself conversationally sits between the two.

Humans program chatbots, but humans don’t like to write extended programs. Chatbots represent a different style of programming: they can be short and still be rich in behaviour, because much of the intelligence lives in patterns and data rather than in long procedures.

Chatbots are a relatively new branch of Artificial Intelligence. They combine the best of traditional AI (predicting what will happen) and the best of human intelligence (being interesting).



Artificial Intelligence Personal Assistants


Artificial Intelligence personal assistants sound so dorky because most of us associate AI with science fiction, and science fiction writers tend to think of AI as robots with human intelligence.

In the real world, Artificial Intelligence has little to do with robots. Most of the things AI is used for are things people have been doing for a long time, like writing letters, making reservations, looking up addresses. But in adding intelligence to these tasks, Artificial Intelligence is doing something essential.

Artificial Intelligence is not smart enough to replace you. But it’s smart enough to help you do things better, and for narrow, well-defined tasks it can learn the rules it needs faster than you can.

Consider an administrative assistant. If you ask a good assistant to do something, they know the exact steps to take and start on them immediately. But a human assistant is also a single point of failure: when they are out, or their computer breaks down, the work stops.

Artificial Intelligence Personal Assistants won’t ask you to type anything. Instead, they’ll just ask you what you want, and they’ll ask clarifying questions in turn: “Which flight did you mean?” “What time should the reminder go off?”

Indeed, AI Personal Assistants are already here. The leading example is Siri from Apple. It understands English and many other languages, and it does a lot of valuable things.



Self-driving cars


The self-driving car was the culmination of a long series of projects, all aimed at the same goal: making driving safer and freeing people up to do more exciting things.

The first project was Google’s Street View cars, which took panoramic photographs of roads across the US and beyond. Street View launched in Google Maps in 2007, letting people explore roads as though they were driving themselves, changing directions and locations at will.

In 2009, Google started a research project on self-driving cars, led by Sebastian Thrun, whose Stanford team had won the 2005 DARPA Grand Challenge. The cars used cameras, radar, and laser rangefinders to perceive their surroundings, and an onboard computer controlled the car’s movements. Google’s engineers worked on the project for years, and after many false starts they got it working reliably; the project eventually became Waymo.

For the self-driving car to work, you would have to take it for granted that computers would manage most driving decisions. They would have to sense what was going on around them and make the proper driving responses. They would have to avoid obstacles and keep track of other vehicles. They would have to drive in all conditions: in rain, fog, snow, and heavy traffic. They would have to be able to change lanes and adjust their speed.

And they would have to be able to deal with unpredictable traffic situations and with unexpected obstacles on the road, like animals or potholes.
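The kind of decision-making described above can be sketched as a simple sense-decide-act rule. Everything here (the function name, the inputs, the speed threshold) is hypothetical and drastically simplified; a real car fuses camera, radar, and lidar data into far richer decisions.

```python
# A minimal sketch of one "tick" of a self-driving car's decision loop.
# All names and thresholds are invented for illustration.

def decide(obstacle_ahead: bool, gap_in_left_lane: bool, speed: float) -> str:
    """Pick a driving action from a (very simplified) sensor picture."""
    if obstacle_ahead and gap_in_left_lane:
        return "change_lane_left"   # room to go around the obstacle
    if obstacle_ahead:
        return "brake"              # no gap: slow down instead
    if speed < 25.0:
        return "accelerate"         # clear road, below target speed
    return "hold_speed"

# One tick: an obstacle appears ahead and the left lane is clear.
print(decide(obstacle_ahead=True, gap_in_left_lane=True, speed=30.0))
```

A real system would run a loop like this many times per second, re-reading its sensors before every decision.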



Artificial Intelligence-driven art creation


Art made by artificial intelligence programs can be intriguing, but most of it is repetitive and boring.

The reason, of course, is that none of the programs is any good at art. They are good at mathematics, which is the art of putting numbers together. But mathematics is not the same as art. Mathematics is a language, and language is abstract. Art is something you can touch, something that you can appreciate directly.

Most art programs use a simple rule: given a set of numbers, pick one at random and use it as your starting point; then transform it, and repeat the process until you are done.
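The rule above can be sketched in a few lines. This is a toy illustration of that generic pick-a-start-then-repeat recipe, not the method of any particular art program; the transform and the value range are invented.

```python
import random

# A toy generative rule: draw a random starting number, then repeatedly
# transform it to build a sequence that could drive a drawing
# (e.g. line lengths or colour indices).

def generate_values(seed: int, steps: int) -> list[int]:
    random.seed(seed)               # fixed seed, so the "artwork" repeats
    value = random.randint(1, 100)  # step one: pick a starting point
    values = [value]
    for _ in range(steps):          # step two: repeat until done
        value = (value * 7 + 3) % 100   # an arbitrary deterministic transform
        values.append(value)
    return values

print(generate_values(seed=42, steps=5))
```

Because the seed fixes the starting point, the same seed always reproduces the same sequence, which is exactly why such output tends to feel repetitive.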

As artificial intelligence (AI) gets more sophisticated, it is increasingly able to do things that only humans used to do. The latest example is an AI system called “Skinner” that can automatically compose music.

Of course, there have been music synthesizers for several decades, but they are not very sophisticated: they have limited vocabularies, nothing more sophisticated than simple notes, and human composers are usually the ones who create those vocabularies and decide what goes into them.

The AI system “Skinner” is different. It uses a vocabulary of about 100 words, each representing a musical note. But instead of creating that vocabulary itself, “Skinner” builds it from the songs it hears. When someone uploads a song, “Skinner” translates the file into its vocabulary of notes, translates that vocabulary back into musical notes, and generates a new song.
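The pipeline described here (learn a note vocabulary from an uploaded song, then generate a new one) can be sketched with a first-order Markov model. This is a generic technique, assumed for illustration; it is not the actual “Skinner” system, and the note names are made up.

```python
import random
from collections import defaultdict

# Learn, from an existing song, which note tends to follow which;
# then generate a new sequence from those learned transitions.

def learn_transitions(song: list[str]) -> dict[str, list[str]]:
    follows = defaultdict(list)
    for a, b in zip(song, song[1:]):
        follows[a].append(b)        # record every observed successor
    return follows

def generate(follows, start: str, length: int, seed: int = 0) -> list[str]:
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:                 # dead end: no known successor
            break
        out.append(random.choice(nxt))
    return out

song = ["C", "E", "G", "E", "C", "G", "C"]
follows = learn_transitions(song)
print(generate(follows, start="C", length=8))
```

Like the system in the text, the model can only emit notes it has heard, so its output improves as its learned vocabulary grows.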

Many people find “Skinner’s” output a bit surprising. “Skinner” uses simple techniques, like piling notes on top of each other. Its notes sound almost like those of a standard synthesizer, but its vocabulary is more straightforward: it contains only scales and chords, and only simple ones.

“Skinner’s” repertoire is limited, but it improves with time. As the vocabulary grows, “Skinner” gets better at composing. As an analogy, think of “Skinner” as learning to play by ear.



Artificial Intelligence-driven medical diagnosis


One of the first applications of artificial intelligence in medicine was automated diagnosis from the electrocardiogram (ECG), a recording of the heart’s rhythm, beginning in the early 1960s. A software program analyzed the recording, compared the patient’s heart rhythm to recordings of normal hearts, and diagnosed which of several heart conditions the patient had.

By 2000, more than 300,000 electrocardiograms had been analyzed by automated programs. In 2004, an automated program used to diagnose heart conditions was examined by a panel of 14 doctors, who agreed with the program’s diagnosis in 90 percent of the cases. The program also diagnosed 50 percent of the heart conditions that the physicians had missed and 20 percent of those they had seen only once.

In 2005, researchers at Massachusetts General Hospital began using a program that analyzes magnetic resonance imaging (MRI) scans to diagnose brain lesions. The computer program reached the same diagnosis as the human doctors in 88 percent of cases. By 2006, it reached the same diagnosis in 90 percent of cases.

These experiments show that computers can diagnose as well as doctors, and for some things better. But they still have a long way to go.

The University of Chicago has developed an AI program that can diagnose melanoma skin cancer from images of moles.

The algorithm, trained on 53,000 images, can determine whether an image shows normal skin, a melanocytic nevus, or a melanoma.

According to a study published in the International Journal of Medical Informatics, the algorithm can diagnose cancer with 96.3 percent accuracy, compared with the 74.7 percent accuracy achieved by dermatologists.

“This kind of AI is in its infancy, but it’s promising,” said Dr Chandani Srihari, an assistant professor of dermatology at the University of Chicago.

“The technology has yet to reach doctors’ level of sophistication. This is a first-generation technology,” Srihari said.

“The algorithm’s accuracy is improving, and we expect it to improve further.”

The program, called DeepSkin, was built with the Microsoft Cognitive Toolkit, Microsoft’s deep-learning framework.

“We chose the Microsoft Cognitive Toolkit because it was user-friendly, and it contains techniques that have been applied to medical diagnosis in other domains,” Srihari said.

The algorithm’s accuracy will improve as it learns more, she said.

“In the future, we plan to use DeepSkin in clinical settings. We will validate the algorithm with dermatologists and patients.

“We also plan to use DeepSkin to detect other types of skin cancer, such as basal cell carcinoma,” Srihari said.



Computer Vision


Computer vision, or “artificial vision”, uses algorithms and software to analyze images: computers interpret the images captured by cameras and assist humans in visual tasks.

The advent of inexpensive, portable digital cameras and high-speed computers with powerful graphics processors has led to many computer vision applications.

The first test of whether computers are intelligent is whether they exhibit behaviours that human observers find intelligent.

In the years since Watson’s victory on Jeopardy! in 2011, Artificial Intelligence researchers have primarily focused on making computers more human-like. But computer vision, which is concerned with creating computers that can see, has become an essential application of Artificial Intelligence.

One reason is that vision is the most human-like ability we have. Our brains are so good at seeing that we take it for granted. But each of us has strengths and weaknesses. Some people are best at judging distances. Others are good at recognizing faces. And still others are great at picking out particular colours or shapes.

Computers, by contrast, are good at seeing only certain kinds of objects. They can detect faces, for example, but struggle to identify the individual people behind them. They can distinguish between the specific colours they have been trained on, but they can fail on a colour they have never been shown, like a particular shade of purple.

A computer’s ability to recognize faces, for example, depends entirely on how much data it has: the more faces the computer sees, the better it becomes at recognizing new ones. Similarly, a computer’s ability to recognize colours depends on the number of colours it has seen. A good painter, however, can recognize purple without ever having been trained on it.

At the same time, computers are good at finding things that people overlook. People are often good at finding hidden things, like misplaced wallets and car keys; computers, however, are very good at finding things that are hidden in plain sight.

To deal with this problem, computer vision researchers try to teach computers to recognize objects based on the “features” of the objects rather than on simple “types.”
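Feature-based recognition can be sketched very simply: instead of matching a whole image, describe an object by a few measured features and classify from those. The features, thresholds, and class names below are invented for illustration; real systems learn their features from data.

```python
# Classify a shape from two hypothetical measured features:
# how many corners it has, and the ratio of its width to its height.

def classify(corners: int, aspect_ratio: float) -> str:
    if corners == 0:
        return "circle"
    if corners == 4 and 0.9 <= aspect_ratio <= 1.1:
        return "square"        # four corners, roughly equal sides
    if corners == 4:
        return "rectangle"
    if corners == 3:
        return "triangle"
    return "unknown"

print(classify(corners=4, aspect_ratio=1.0))   # a square
print(classify(corners=0, aspect_ratio=1.0))   # a circle
```

The point is that the rules refer to features ("four corners") rather than to a fixed catalogue of object types, so one rule covers every square the system will ever see.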



Pattern Recognition


Computers apply their computational power to solve problems in pattern recognition. The idea is nearly as old as the field itself: early AI researchers treated “pattern recognition” as essentially the same problem as “artificial intelligence,” a term coined by John McCarthy in the mid-1950s.

Today’s computers can do pattern recognition on an enormous scale. It is the basis of technologies such as speech recognition, image recognition, and expert systems.

A pattern is a set of features, or attributes, that appear at the same time. For example, a pattern of boxes is a set of boxes. A pattern of lines is a set of lines. A pattern of pixels in a picture is a set of pixels.

A pattern can be abstract, like a pattern of boxes, or concrete, like the patterns on a checkerboard (“black,” “white,” “two black,” “two white”) or on a chessboard (“king,” “queen,” “pawn”). A pattern of numbers is a set of numbers.

Patterns are useful in ordinary life because almost everything in the world is a pattern. You can recognize a pattern in a table, a chair, a newspaper, a checkerboard, a tree, a nest, or a sock.
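The definition above, a pattern as a set of features that appear together, maps directly onto code as set containment: an observation "matches" a pattern when it exhibits every feature in the pattern's set. The feature names below are made up for illustration.

```python
# Each pattern is a set of features that must occur together.
patterns = {
    "chair": {"legs", "seat", "back"},
    "table": {"legs", "flat_top"},
    "sock":  {"fabric", "foot_shaped"},
}

def recognize(observed: set[str]) -> list[str]:
    """Return every pattern whose features all appear in the observation."""
    return [name for name, feats in patterns.items() if feats <= observed]

# An observation can carry extra features ("wooden") and still match.
print(recognize({"legs", "seat", "back", "wooden"}))   # matches "chair"
```

This is the simplest possible matcher; real pattern recognizers score partial matches instead of demanding every feature.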



Artificial Neural Networks

An artificial neural network is a computing system that tries to model how the brain works. The human brain is astonishingly good at recognizing patterns and at finding connections among different kinds of patterns.

Neurons are the basic units of the brain. Each neuron connects to many others, forming tangled strands like spaghetti. The connections have a kind of logic: neurons that fire together tend to strengthen their links and fire together again. Neurons have inputs, outputs, and connections between them. We can think of them as tiny computers.

Every neuron has inputs coming from other neurons, and it sends outputs to other neurons. But the inputs and outputs aren’t like switches, and they don’t all go one way. They change all the time, depending on what kinds of things the neuron is receiving and sending.
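A single artificial neuron captures exactly this picture: inputs arrive with different strengths, get summed, and the result is squashed into a graded output rather than an on/off switch. The weights and bias below are arbitrary example values.

```python
import math

# One artificial neuron: a weighted sum of its inputs plus a bias,
# passed through a sigmoid activation so the output varies smoothly
# between 0 and 1 instead of flipping like a switch.

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation, in (0, 1)

# Two inputs; the graded output would feed other neurons downstream.
print(neuron([1.0, 0.5], weights=[0.8, -0.4], bias=0.1))
```

A network is just many of these wired together, the outputs of one layer becoming the inputs of the next; learning means adjusting the weights.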


In Conclusion,

There are many opportunities still to be explored in Artificial Intelligence as it keeps evolving. In this decade, Artificial Intelligence will have a significant impact on almost all businesses. It is a technology to watch.


If you are looking to develop an artificial intelligence system or consultation on current systems, reach me at: markalex@realbizdigital.com. Feel free also to check this website on other services you may be interested in: www.realbizdigital.com





