Understanding cognitive computing versus artificial intelligence, deep learning and machine learning
Cognitive computing brings with it a promise of genuine, human-to-machine interaction. When machines become cognitive, they can understand requests, connect data points and draw conclusions. They can reason, observe and plan. Consider:
- Leaving for a business trip tomorrow? Your cognitive device will automatically offer weather reports and travel alerts for your destination city.
- Planning a large birthday celebration? Your cognitive device will help with invitations, make reservations and remind you to pick up the cake.
- Planning a direct marketing campaign? Your cognitive assistant can automatically segment your customers into groups for targeted messaging and increased response rates.
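That last example is, at heart, a clustering problem. One common technique for it is k-means, which groups similar records together. The sketch below is a minimal, pure-Python illustration with hypothetical customer data (annual spend, visits per month); it is not any particular vendor's implementation, and the deterministic "first k points" initialization is a naive simplification chosen for clarity.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    # Naive but deterministic initialization: use the first k points.
    centroids = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Index of the centroid with the smallest squared distance to p.
            i = min(range(k),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2
                                + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical customers: (annual spend, store visits per month)
customers = [(200, 2), (220, 3), (250, 2), (1200, 8), (1100, 9), (1300, 7)]
centroids, clusters = kmeans(customers, k=2)
```

On this toy data the algorithm separates the low-spend and high-spend customers into two groups, each of which could then receive its own targeted message.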
Clearly, we’re not talking about robotic butlers. This isn’t a Hollywood movie. But we are at a new level of cognition that has grown out of the artificial intelligence field to be truly useful in our lives.
We get it, though. You might be confused about how all these topics – artificial intelligence, machine learning, deep learning and cognitive computing – relate. You’re not alone. And we want to help.
“Cognitive computing is an outgrowth of artificial intelligence.”
Consider this article a quick primer on cognitive computing. We’ll explore the basic components of artificial intelligence and describe how various technologies have combined to help machines become more cognitive.
The history of artificial intelligence
Cognitive computing is an outgrowth of artificial intelligence (AI), which originally set out to make computers more useful and more capable of independent reasoning.
But where did AI come from? Well, it didn’t leap from single-player chess games straight into self-driving cars. The field has a long history rooted in military science and statistics, with contributions from philosophy, psychology, math and cognitive science.
Most historians trace the birth of AI to a Dartmouth research project in 1956 that explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and increased the focus on training computers to mimic human reasoning.
For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Google, Amazon or Microsoft tackled similar projects.
This work paved the way for the automation and formal reasoning that we see in computers today.
As a whole, artificial intelligence contains many sub-fields, including:
- Machine learning automates analytical model building. It uses methods from neural networks, statistics, operations research and physics to find hidden insights in data without being explicitly programmed where to look or what to conclude.
- A neural network is a kind of machine learning inspired by the workings of the human brain. It’s a computing system made up of interconnected units (like neurons) that processes information by responding to external inputs, relaying information between each unit. The process requires multiple passes at the data to find connections and derive meaning from undefined data.
“Advancements brought artificial intelligence closer to its original goal of creating intelligent machines.”
- Deep learning uses huge neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns in large amounts of data. Common applications include image and speech recognition.
- Computer vision relies on pattern recognition and deep learning to recognize what’s in a picture or video. When machines can process, analyze and understand images, they can capture images or videos in real time and interpret their surroundings.
- Natural language processing is the ability of computers to analyze, understand and generate human language, including speech. The next stage of NLP is natural language interaction, which allows humans to communicate with computers using normal, everyday language to perform tasks.
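To make the neural-network idea above concrete, here is a minimal sketch of a single artificial unit, a perceptron, learning the logical AND function. The data, learning rate, and number of passes are illustrative choices, not part of any production system; real neural networks connect thousands or millions of such units.

```python
def train_perceptron(samples, passes=10, lr=0.1):
    """One artificial unit: weighted inputs, a threshold, and a simple
    rule that nudges the weights after every wrong answer."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(passes):              # multiple passes at the data
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out           # 0 if correct, +1/-1 if wrong
            w[0] += lr * err * x[0]      # strengthen or weaken each connection
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Four labelled examples of logical AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

After a few passes at the four examples, the unit's weights settle on values that classify every input correctly, which is the "find connections and derive meaning" step in miniature.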
How big data plus artificial intelligence produced cognitive computing
Remember the big data hoopla a few years ago? Advancements in computer processing and data storage made it possible to ingest and analyze more data than ever before.
Around the same time, we started producing more and more data by connecting more devices and machines to the internet and streaming large amounts of data from those devices.
With more language and image inputs into our devices, computer speech and image recognition improved. Likewise, machine learning had much more information to learn from.
All of these advancements brought artificial intelligence closer to its original goal of creating intelligent machines, which we now call cognitive computing.
Where are we today with cognitive computing?
“Cognitive computing is the holy grail of artificial intelligence.”
With cognitive computing, you can ask a machine questions – out loud – and get answers about sales, inventory, customer retention, fraud detection and much more. The computer can also discover information that you never thought to ask.
It will offer a narrative summary of your data and suggest other ways to analyze it. It will also share information related to previous questions from you or anyone else who asked similar questions. You’ll get the answers on a screen or just conversationally.
How will this play out in the real world? In healthcare, treatment effectiveness can be more quickly determined. In retail, add-on items can be more quickly suggested. In finance, fraud can be prevented instead of just detected. And so much more.
In each of these examples, the machine understands what information is needed, looks at relationships between all the variables, formulates an answer – and automatically communicates it to you with options for follow-up queries.
We have decades of artificial-intelligence research to thank for where we are today. And we have decades of intelligent human-to-machine interactions to come.
What technologies contribute to cognitive computing?
The sub-fields of artificial intelligence include machine learning, deep learning and natural language processing. When these technologies were combined with big data, we moved into the cognitive computing era.
This article first appeared on SAS Insights and was republished with permission.