What is artificial intelligence?

Modern AI is a system that can perceive its environment and take actions to maximize the chance of achieving its goals – and that can interpret and analyze data in such a way that it learns and adapts as it goes.

Artificial Intelligence Overview

An early definition of artificial intelligence (AI) came from one of its founding fathers, Marvin Minsky, who described it as “the science of making machines do things that would require intelligence if done by men.” While the core of that definition holds true today, modern computer scientists go a bit further and define AI as a system that is able to perceive its environment and take actions to maximize the chance of successfully achieving its goals – and, furthermore, as a system that can interpret and analyze data in such a way that it learns and adapts as it goes.

History of AI

From the Greek myth of Pygmalion to the Victorian tale of Frankenstein, humans have long been fascinated by the idea of creating a man-made being that could think and act like a person. With the rise of computers, we realized that the vision of artificial intelligence would emerge not in the form of self-contained, independent entities – but as a set of tools and connected technologies that could augment and adapt to human needs.

The term artificial intelligence was coined in 1956, at a scientific conference at Dartmouth College in Hanover, New Hampshire. Since then, AI and data management have developed in an extremely interdependent fashion. To perform meaningfully robust analyses, AI requires large volumes of Big Data; and to process large volumes of data digitally, a system requires AI. As such, the history of AI has developed alongside the rise in computing power and database technologies.

Today, business systems that could once only handle a few gigabytes of data can now manage terabytes and can use AI to process outcomes and insights in real time. And unlike a man-made creation lurching down to the village, AI technologies are agile and responsive – designed to improve and augment their human partners, not replace them.

Types of AI

AI is one of the fastest-growing areas of technological development. Yet today, even the most complex AI models are only making use of “artificial narrow intelligence,” which is the most basic of the three types of AI. The other two are still the stuff of science fiction and, at the moment, are not being used in any practical way. That said, at the rate computer science has advanced in the past 50 years, it’s difficult to say where the future of AI will take us.

The three main types of AI

Artificial narrow intelligence (ANI)

ANI is the kind of AI that exists today and is also known as “weak” AI. While the tasks narrow AI can perform may be driven by highly complex algorithms and neural networks, they are nonetheless singular and goal-oriented. Facial recognition, internet searches, and self-driving cars are all examples of narrow AI. It is categorized as weak not because it lacks scope and power, but because it is still a long way from having the human components we ascribe to true intelligence. The philosopher John Searle describes narrow AI as “useful for testing a hypothesis about minds,” while cautioning that such systems “would not actually be minds.”

Artificial general intelligence (AGI)

AGI would be able to successfully perform any intellectual task that a human can. Like narrow AI systems, AGI systems could learn from experience and spot and predict patterns – but they would have the capacity to take it a step further. AGI could extrapolate that knowledge across a wide range of tasks and situations that are not addressed by previously acquired data or existing algorithms.

The Summit supercomputer is one of only a few machines in the world that demonstrate the raw computational power AGI would require. It can perform 200 quadrillion calculations per second – a workload that would take a human, at one calculation per second, over six billion years. For AGI models to be meaningfully feasible, they wouldn’t necessarily need that much power, but they would require computational capacities that currently exist only at supercomputer levels.

Artificial superintelligence (ASI)

ASI systems are, in theory, fully self-aware. Beyond simply mimicking or understanding human behavior, they would grasp it at a fundamental level.

Empowered with these human traits – and further augmented with processing and analytical power that far exceeds our own – ASI can seem to herald a dystopian, sci-fi future in which humans become increasingly obsolete.

It is unlikely that anyone living today will ever see such a world. That said, AI is advancing at such a rate that it is important to consider ethical guidelines and stewardship in anticipation of an artificial intelligence that could surpass us in almost every measurable way. As Stephen Hawking advised, “Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

Benefits of AI

Just a couple of decades ago, the use of AI in business operations was at an “early adopter” stage and its potential was still somewhat theoretical. Since then, AI technologies and applications have advanced and steadily added value to businesses. And as AI technologies improve and drive the next wave of innovation, so do our understanding of their potential and the creativity with which they are applied. Today, businesses derive an ever-growing range of measurable benefits from AI-powered systems, including the five below:

  1. Business-wide resilience: Long before computers existed, companies knew the value of gathering and understanding data about their business, market, and customers. As data sets grew larger and more complex, the ability to analyze that data accurately and in a timely way became increasingly challenging. AI-powered solutions bring the ability to not only manage Big Data, but to take actionable insights from it. With AI, complex processes can be automated, resources can be more efficiently used, and disruptions (and opportunities) can be better predicted and adapted for.
  2. Better customer service: AI allows businesses to personalize service offerings and interact with their customers in real time. As consumers move through the modern sales funnel from “lead” to “conversion,” they generate complex and diverse data sets. AI gives business systems the power to leverage this data and to better serve and engage with their customers.
  3. Confident decision-making: Good business leaders always strive to make decisions that are prompt and informed. The more crucial the decision, the greater the likelihood that it will have myriad and complex components and interdependencies. AI helps to augment the wisdom and experience of humans, with advanced data analysis and actionable insights that support confident, real-time decision-making.
  4. Relevant products and services: Many traditional R&D models were backward-looking. The analysis of performance and customer feedback data often didn’t occur until well after a product or service had entered the market. Nor were systems in place that could quickly spot potential gaps and opportunities in the market. With AI-powered systems, companies can look at a wide variety of data sets, simultaneously and in real time. This allows them to modify existing products and introduce new ones, based upon the most relevant and up-to-date market and customer data.
  5. Engaged workforces: A recent Gallup poll shows that companies whose employees report a high level of engagement are, on average, 21% more profitable. AI technologies in the workplace can reduce the burden of mundane tasks and allow employees to focus on more fulfilling work. HR technologies that use AI can also help to detect when employees are anxious, tired, or bored. By personalizing wellness recommendations and helping to prioritize tasks, AI can support employees and help them restore a healthy work-life balance.

AI technologies

In order to be useful, AI must be applicable – its true value is only realized when it delivers actionable insights. If we think of AI in terms of a human brain, then AI technologies are like the hands, the eyes, and the movements of the body – everything that allows the brain’s ideas to be executed. The following are some of the most widely used and rapidly advancing AI technologies.

Machine learning

Machine learning – along with all its components – is a subset of AI. In machine learning, algorithms are applied across different learning methods and analysis techniques that allow the system to learn and improve from experience automatically, without being explicitly programmed. For businesses, machine learning can be applied to any problem or goal that requires predictive outcomes arrived at from complex data analysis.

What is the difference between AI and machine learning? Machine learning is a component of AI and cannot exist without it, so the question is less whether they are different than how they are different. AI processes data to make decisions and predictions. Machine learning algorithms allow AI not only to process that data, but to use it to learn and get smarter, without needing any additional programming.
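
To make this concrete, here is a minimal sketch in Python using the open-source scikit-learn library (an illustrative choice – this article doesn’t prescribe any particular toolkit). The task, data values, and feature names are invented for the example; the point is that the decision rule is learned from the data, not hand-coded by a programmer.

```python
# A minimal machine learning sketch: the model infers its own rule
# from example data rather than following hand-written logic.
from sklearn.tree import DecisionTreeClassifier

# Toy training data (invented for this sketch):
# [monthly_visits, avg_order_value] -> churned (1) or retained (0)
X_train = [[2, 10.0], [1, 5.0], [30, 80.0], [25, 60.0], [3, 12.0], [28, 90.0]]
y_train = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)  # the "learning" step - no rules are hand-coded

# The fitted model can now predict outcomes for customers it has never seen.
print(model.predict([[4, 15.0], [27, 75.0]]))  # expected output: [1 0]
```

Note that nothing in the code states where the boundary between “churned” and “retained” lies – the model infers it from the six examples, which is precisely the learning-without-additional-programming described above.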

Natural language processing (NLP)

NLP allows machines to recognize and understand written language, voice commands, or both. This includes the ability to translate human language into a form that the algorithm can understand. Natural language generation (NLG) is a subset of NLP that allows the machine to convert digital language into natural human language. In more sophisticated applications, NLP can use context to infer attitude, mood, and other subjective qualities to most accurately interpret meaning. Practical applications of NLP include chatbots and digital voice assistants such as Siri and Alexa.
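
As a small, concrete illustration of the attitude and mood inference described above, the sketch below uses the VADER sentiment analyzer from the open-source NLTK library (an illustrative choice – commercial assistants such as Siri and Alexa rely on proprietary systems). It scores short texts as positive or negative.

```python
# Sentiment scoring with NLTK's VADER analyzer - a simple, widely
# available example of NLP inferring attitude from raw text.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

sia = SentimentIntensityAnalyzer()
for text in ["I absolutely love this product!",
             "The delivery was late and the support was useless."]:
    scores = sia.polarity_scores(text)  # neg/neu/pos plus a compound score
    print(f"{scores['compound']:+.2f}  {text}")
```

A positive compound score indicates positive sentiment and a negative score the opposite – a toy version of the context and mood inference that production NLP systems perform at scale.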

Computer vision

Computer vision is the method by which computers understand and “see” digital images and videos – as opposed to just recognizing or categorizing them. Computer vision applications use sensors and learning algorithms to extract complex, contextual information that can then be used to automate or inform other processes. Computer vision can also extrapolate from what it sees for predictive purposes – anticipating, for example, the path of an object that is partially hidden or momentarily out of view. Self-driving cars are a good example of computer vision in use.
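
The sketch below shows the most basic form of this idea, using the open-source OpenCV library (an illustrative choice, not one named in this article): locating faces in an image with a pretrained detector, producing coordinates that a downstream process could act on. The image path is a placeholder.

```python
# Face detection with OpenCV's bundled Haar-cascade model - a minimal
# computer vision step whose output can drive other processes.
import cv2

image = cv2.imread("photo.jpg")  # placeholder path - use any local image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each detection is an (x, y, width, height) box in pixel coordinates.
for (x, y, w, h) in faces:
    print(f"face at ({x}, {y}), size {w}x{h}")
```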

Robotics

Robotics is nothing new; robots have been used for years, especially in manufacturing. Without the application of AI, however, automation must be accomplished through manual programming and calibration. If weaknesses or inefficiencies exist in those workflows, they can only be spotted after the fact – or after something breaks down. The human operator often has no way of knowing what led to a problem, or what adaptations could be made to achieve better efficiency and productivity. When AI is brought into the mix – typically via IoT sensors – it brings with it the capacity to greatly expand the scope, volume, and type of robotic tasks performed. Examples of robotics in industry include order-picking robots for use in large warehouses and agricultural robots that can be programmed to pick or service crops at optimum times.
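
As a simplified illustration of the kind of early-warning signal this enables, the sketch below (with invented sensor data, window size, and alert threshold) flags a machine whose latest vibration reading drifts sharply away from its recent average – exactly the sort of developing problem a manually calibrated line would only reveal after a breakdown.

```python
# A toy anomaly check over streaming sensor readings - illustrative only;
# the window size and alert threshold are invented for this sketch.
from statistics import mean, stdev

def drift_alert(readings, window=20, threshold=3.0):
    """Flag the latest reading if it lies more than `threshold` standard
    deviations from the mean of the previous `window` readings."""
    if len(readings) <= window:
        return False  # not enough history yet
    history = readings[-window - 1:-1]
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(readings[-1] - mu) > threshold * sigma

# Simulated vibration data: steady operation, then a sudden spike.
vibration = [1.0 + 0.01 * (i % 5) for i in range(40)] + [2.5]
print(drift_alert(vibration))  # True - worth a maintenance check
```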

Enterprise AI in action

Every year, more and more companies are realizing the benefits and competitive advantages that AI solutions can bring to their operations. Some sectors, such as healthcare and banking, hold particularly large and vulnerable data sets; for them, the usefulness of AI was obvious from its earliest iterations. But today, the scope and accessibility of modern AI mean it has relevant applications across almost all business models. The following are just a few examples.

  • AI in healthcare Medical data sets are some of the largest, most complex – and most sensitive – in the world. A major focus of AI in healthcare is to leverage that data to find relationships between diagnosis and treatment protocols, and patient outcomes. Additionally, hospitals are turning to AI solutions to support other operational areas and initiatives – including workforce satisfaction and optimization, patient satisfaction, and cost saving, to name just a few.
  • AI in banking Banks and financial institutions have a heightened need for security, compliance, and transactional speed and, as such, were some of the earliest adopters of AI technologies. Features such as AI bots, digital payment advisers, and biometric fraud detection mechanisms all contribute to improved efficiency and customer service, as well as reductions in risk and fraud.
  • AI in manufacturing When devices and machines are connected to send and receive data via a central system, they comprise an IoT network. AI not only processes that information, but uses it to predict opportunities and disruptions, and to automate the best tasks and workflows in response. In smart factories, this extends to on-demand manufacturing protocols for 3D printers, and virtual inventories.
  • AI in retail The pandemic had a massive impact on shopping habits, driving a significant rise in online shopping compared with the previous year. This has contributed to an extremely competitive and fast-changing climate for retailers. Online shoppers are engaging across a wide range of touchpoints and generating larger and more complex sets of unstructured data than ever before. To best understand and leverage this data, retailers are looking to AI solutions that can process and analyze disparate data sets to provide useful insights and real-time interactions with their customers.

AI ethics and challenges

In 1948, the computer science pioneer Alan Turing said, “A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.” Although the processing speed and analytical power of a modern AI-driven computer would have seemed unbelievable to Turing, he nonetheless would likely have understood the ethical quandaries presented by that level of power. As AI gets better at understanding us and mimicking us, it increasingly seems human. And as we generate increasing amounts of personal data via digital channels, we need – more and more – to be able to trust the AI applications that underpin so many of our daily activities. Below are a few examples of ethical challenges that business leaders need to be aware of and monitor.

  • Ethical use of customer data In the 2020s, the bulk of the information we share and gather, as businesses or individuals, flows through digitally connected channels. At the start of 2020, there were over 3.5 billion smartphones in the world, all sharing enormous amounts of data – from their GPS locations to their users’ personal details and preferences, via social media and search behaviors. As businesses gain wider access to their customers’ personal information, it becomes extremely important to set up benchmarks and continually evolving protocols to protect privacy and minimize risk.
  • AI bias Bias can creep into an AI system through human bias in the programming of its algorithms or through systemic prejudices that are propagated via faulty assumptions in the machine learning process. In the first instance, it is relatively easy to see how this might occur; in the second, it can be much harder to spot and avoid. A well-known example of AI bias occurred within the U.S. healthcare system, where AI applications were being used to allocate standards of care. The algorithm learned that certain demographic groups were less able to pay for care, then extrapolated from this to mistakenly conclude that those groups were less entitled to care. After discovering this mistake, computer scientists at UC Berkeley worked with the developers to modify the algorithmic variables, reducing bias by 84%.
  • AI transparency and explainable AI Transparency in AI is the ability to determine how and why an algorithm arrived at a particular conclusion or decision. AI and machine learning algorithms that inform outcomes – and the outcomes themselves – are often so complex as to be beyond human understanding. Algorithms like this are known as “black box” models. For businesses, it is important to ensure that data models are fair, unbiased, and open to explanation and external scrutiny – especially in areas such as aviation or medicine, where human lives are at stake. It is therefore vital that the humans who use this data take data governance initiatives extremely seriously. (A simple illustration of one explainability technique follows this list.)
  • Deepfakes and fake news Deepfake is a portmanteau of “deep learning” and “fake.” It is a technique that uses AI and machine learning to – typically – superimpose one person’s face onto another’s body in a video, with such accuracy as to be indistinguishable from the original. In its innocent form, it can produce amazing visual effects, such as the 30-year de-aging of Robert De Niro and Joe Pesci in the film The Irishman. Unfortunately, its more common use is to create believable fake news stories or to put celebrities into graphic or compromising videos in which they never originally appeared.
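
As promised above, here is a small sketch of one common explainability technique: permutation importance, using the open-source scikit-learn library (an illustrative choice; many other explainability methods exist). It asks a fitted model which input features actually drive its predictions by shuffling each feature and measuring how much accuracy drops.

```python
# Permutation importance: shuffle each input feature and measure how much
# the model's accuracy suffers - a simple, model-agnostic explainability probe.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Features whose shuffling hurts accuracy most are the ones the model
# actually relies on - a first step toward opening the "black box".
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]:<30} {result.importances_mean[i]:.3f}")
```

This does not make a complex model fully transparent, but it gives humans a concrete, testable answer to the question “what is this decision based on?” – the heart of the explainability requirement described above.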