Saturday, April 22, 2023

Demystifying the Concept of Explainable AI

 



Introduction:

Due to its capacity to offer intelligent solutions and automate numerous operations, artificial intelligence (AI) has recently attracted a lot of interest. However, because typical AI systems are opaque, it might be difficult to grasp how judgements are reached, raising possible ethical, legal, and societal issues. Making AI systems transparent, responsible, and intelligible is a potential strategy for resolving these problems. This technique is known as explainable AI (XAI).


Explainable AI: What is it?

An emerging area of research called explainable AI seeks to make AI systems comprehensible and transparent. It entails creating algorithms and models that can shed light on the processes involved in decision-making and the variables that affect them. With the use of XAI, users may recognise biases, comprehend the reasoning behind an AI's decision-making process, and fix mistakes.



Why is explainable AI crucial?

AI that is understandable is crucial for a number of reasons. First, it can contribute to increasing confidence and trust in AI systems, especially in critical fields like national security, finance, and healthcare. The second benefit is that it might help in detecting and eliminating biases that can exist in the data used to train AI systems. Third, it can help with adhering to ethical norms and legal obligations, particularly when decisions have an effect on people's lives.



How does explainable AI work?

To create explainable AI, transparency and interpretability are built into the design of AI systems from the start. This entails using algorithms and modelling techniques that can reveal how decisions are formed and what influences them. Simple XAI techniques such as decision trees and rule-based systems are used alongside more sophisticated approaches such as neural network visualisations and counterfactual explanations.
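One way to make this concrete is a rule-based system, the simplest of the techniques mentioned above. The sketch below uses a hypothetical loan scenario with purely illustrative thresholds, and shows how each prediction can be returned together with the rule that produced it:

```python
# A minimal sketch of rule-based explainability: every decision is
# returned together with the rule that produced it, so a user can see
# exactly why the decision was made. Thresholds are illustrative only.

def explain_loan_decision(income, debt_ratio):
    """Return (decision, explanation) for a toy loan-approval model."""
    if debt_ratio > 0.5:
        return "deny", f"debt_ratio={debt_ratio} exceeds the 0.5 limit"
    if income < 30000:
        return "deny", f"income={income} is below the 30000 minimum"
    return "approve", "debt_ratio and income are within accepted ranges"

decision, why = explain_loan_decision(income=45000, debt_ratio=0.3)
print(decision)  # approve
print(why)
```

More complex models need more sophisticated tools, but the goal is the same: every output should come with a human-readable account of how it was reached.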

Examples of explainable AI in use: Explainable AI is already being applied across a number of sectors. In healthcare, XAI supports clinical decision-making, drug discovery, and medical diagnostics. In finance, it is used to enhance fraud detection, risk analysis, and investment decisions. In national security, it helps detect threats, prevent cyberattacks, and analyse intelligence data.

An effective strategy for overcoming the opaqueness of conventional AI systems is explainable AI. It can make it possible for people to comprehend the reasoning underlying AI decision-making processes, recognise biases, and fix mistakes. Building trust and confidence in AI systems, adhering to legal and ethical criteria, and making sure that AI system judgements are just and transparent all depend on XAI.



Conclusion

The goal of explainable AI is to increase the transparency and comprehension of AI systems for humans. In applications where the results of AI judgements might have major effects on human lives, XAI is particularly crucial. Explainable AI may be achieved in a number of ways, including model-based explanations, rule-based explanations, feature-based explanations, and example-based explanations. XAI can assist in enhancing the accuracy, dependability, and trustworthiness of AI systems by making them more transparent and intelligible.



Thursday, April 20, 2023

Deep Learning Models: A Brief Overview


Deep learning is a branch of machine learning that uses artificial neural networks to learn from large amounts of data and perform complex tasks such as image recognition, natural language processing, speech synthesis, and more. Deep learning models are composed of multiple layers of neurons that process information and pass it to the next layer. The more layers a model has, the deeper it is and the more capable it is of learning abstract and high-level features from the data. There are many types of deep learning models, each with its own advantages and disadvantages. In this blog, we will introduce some of the most common and popular ones and explain when and how to use them.

Supervised Models

Supervised models are trained with labelled data, meaning that each input has a corresponding output or target value. Supervised models can be used for tasks such as classification and regression, where the goal is to predict a category or a numerical value for a given input.



Classic Neural Networks (Multilayer Perceptrons)

Classic neural networks, also known as multilayer perceptrons (MLPs), are the simplest and most basic type of deep learning model. They consist of an input layer, one or more hidden layers, and an output layer. Each layer is fully connected to the next one, meaning that every neuron in one layer receives input from every neuron in the previous layer. Each neuron applies a nonlinear activation function to its input and produces an output. Classic neural networks can be used for tabular data formatted in rows and columns (such as CSV files), and for classification and regression problems where a set of real values is given as the input. They offer a high level of flexibility and can be applied to different types of data.
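The forward pass described above can be sketched in a few lines. This is a minimal illustration with hand-picked weights; in practice the weights are learned by backpropagation:

```python
import numpy as np

# A minimal sketch of a multilayer perceptron forward pass with one
# hidden layer and a ReLU activation. Weights are fixed, illustrative
# values; training would adjust them via backpropagation.

def relu(x):
    return np.maximum(0, x)

def mlp_forward(x, W1, b1, W2, b2):
    hidden = relu(x @ W1 + b1)   # input layer -> hidden layer
    return hidden @ W2 + b2      # hidden layer -> output layer

x = np.array([1.0, 2.0])                  # one input with two features
W1 = np.array([[0.5, -0.2], [0.1, 0.4]])  # 2 inputs -> 2 hidden units
b1 = np.zeros(2)
W2 = np.array([[1.0], [-1.0]])            # 2 hidden units -> 1 output
b2 = np.zeros(1)

print(mlp_forward(x, W1, b1, W2, b2))
```

Stacking more hidden layers of this same pattern is what makes the network "deeper".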



Convolutional Neural Networks (CNNs)

Convolutional neural networks (CNNs) are a more advanced and powerful type of deep learning model designed for image data. They can also handle other types of data that have a spatial structure, such as audio, video, or text. CNNs use convolutional layers instead of fully connected layers to extract features from the input. A convolutional layer applies a set of filters or kernels to the input, producing feature maps that capture local patterns in the data. CNNs also use pooling layers to reduce the size and complexity of the feature maps, as well as activation functions and fully connected layers at the end. CNNs are very effective for image classification problems, where the goal is to assign a label to an image based on its content. They can also be used for other tasks such as object detection, face recognition, image segmentation, and image generation.
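The core convolution operation can be sketched directly. This minimal example slides one hand-crafted 2x2 kernel over a tiny image with no padding or stride (strictly a cross-correlation, as in most deep learning libraries); a real CNN learns many such filters:

```python
import numpy as np

# A minimal sketch of the core CNN operation: sliding a small filter
# (kernel) over an image to produce a feature map of local responses.

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1   # "valid" output height
    ow = image.shape[1] - kw + 1   # "valid" output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]], dtype=float)
edge_kernel = np.array([[1, -1],
                        [1, -1]], dtype=float)  # responds to horizontal intensity change

print(conv2d(image, edge_kernel))
```

Each entry of the output feature map measures how strongly the local patch matches the filter's pattern.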



Recurrent Neural Networks (RNNs)

Recurrent neural networks (RNNs) are a type of deep learning model specialized for sequential data, such as text, speech, or time series. RNNs have a recurrent structure that allows them to process each element of a sequence in relation to the previous ones. They have a hidden state that stores information from previous inputs and updates it with each new input. RNNs can also have multiple layers and different architectures, such as bidirectional RNNs or long short-term memory (LSTM) networks. RNNs are widely used for natural language processing tasks, such as text classification, sentiment analysis, machine translation, text summarization, question answering, and more. They can also be used for speech recognition, speech synthesis, music generation, and anomaly detection.
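The recurrent update described above can be sketched with a scalar hidden state. The weights are illustrative constants, not learned values:

```python
import numpy as np

# A minimal sketch of a vanilla RNN cell: the hidden state h carries
# information from earlier steps, and each new input x updates it.
# A scalar state is used for simplicity; real RNNs use vectors.

def rnn_step(h, x, W_h, W_x):
    return np.tanh(W_h * h + W_x * x)

W_h, W_x = 0.5, 1.0
h = 0.0                        # initial hidden state
sequence = [1.0, 0.5, -1.0]
for x in sequence:
    h = rnn_step(h, x, W_h, W_x)

print(h)  # final hidden state summarizes the whole sequence
```

Because each step feeds the previous state back in, the final value depends on the entire sequence, not just the last input.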


Unsupervised Models

Unsupervised models are trained with unlabelled data, meaning that there is no output or target value for each input. Unsupervised models can be used for tasks such as clustering and association rule learning, where the goal is to discover patterns or relationships in the data without any prior knowledge.



Self-Organizing Maps (SOMs)

Self-organizing maps (SOMs) are a type of unsupervised model that uses neural networks to map high-dimensional data onto a low-dimensional grid. Each node in the grid represents a prototype or a cluster centre that is similar to some inputs in the data. The nodes are arranged in such a way that neighbouring nodes are more similar than distant ones. SOMs can be used for data visualization, dimensionality reduction, clustering, and anomaly detection.
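A single SOM training step can be sketched as follows, assuming a 1-D grid of prototype vectors and illustrative learning-rate and neighbourhood values:

```python
import numpy as np

# A minimal sketch of one self-organizing map update: find the
# best-matching unit (BMU) for an input, then pull the BMU and its
# grid neighbours toward that input. lr and radius are illustrative.

def som_step(prototypes, x, lr=0.5, radius=1):
    dists = np.linalg.norm(prototypes - x, axis=1)
    bmu = int(np.argmin(dists))          # index of best-matching unit
    for i in range(len(prototypes)):
        if abs(i - bmu) <= radius:       # within the BMU's grid neighbourhood
            prototypes[i] += lr * (x - prototypes[i])
    return bmu

prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
bmu = som_step(prototypes, np.array([0.9, 1.1]))
print(bmu)          # 1: the middle prototype was closest
print(prototypes)   # the BMU and its neighbours moved toward the input
```

Repeating this step over many inputs, while shrinking the learning rate and radius, is what arranges similar prototypes next to each other on the grid.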




Boltzmann Machines

Boltzmann machines are a type of unsupervised model that uses stochastic neural networks to model the probability distribution of the data. A Boltzmann machine consists of a network of binary units that can be either visible or hidden. The units are connected by symmetric weights and have biases. The state of each unit is determined by a stochastic function that depends on the energy of the network. The energy of the network is defined as:

E = −( Σ_{i<j} w_ij s_i s_j + Σ_i θ_i s_i )

Where:

w_ij is the connection strength between unit j and unit i,

s_i is the state, s_i ∈ {0, 1}, of unit i, and

θ_i is the bias of unit i in the global energy function (−θ_i is the activation threshold for the unit).

Boltzmann machines can be used for generative modelling, feature extraction, and dimensionality reduction.
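The energy function above can be computed directly. The weights, biases, and state below are illustrative values for a three-unit network:

```python
# A minimal sketch of the Boltzmann machine energy function,
# E = -(sum_{i<j} w_ij * s_i * s_j + sum_i theta_i * s_i),
# for a tiny network of three binary units with symmetric weights.

def energy(s, w, theta):
    n = len(s)
    pair_term = sum(w[i][j] * s[i] * s[j]
                    for i in range(n) for j in range(i + 1, n))
    bias_term = sum(theta[i] * s[i] for i in range(n))
    return -(pair_term + bias_term)

w = [[0.0, 1.0, -0.5],
     [1.0, 0.0, 2.0],
     [-0.5, 2.0, 0.0]]    # symmetric connection strengths
theta = [0.1, -0.2, 0.3]  # unit biases
s = [1, 0, 1]             # one global state of the binary units

print(energy(s, w, theta))
```

Training a Boltzmann machine amounts to adjusting w and theta so that low-energy (high-probability) states correspond to patterns seen in the data.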



Autoencoders

Autoencoders are a type of unsupervised model that uses neural networks to learn a compressed representation of the data. An autoencoder consists of two parts: an encoder and a decoder. The encoder takes the input data and transforms it into a lower-dimensional latent space. The decoder takes the latent representation and reconstructs the original input data. The goal of an autoencoder is to minimize the reconstruction error, which is the difference between the input and the output. Autoencoders can be used for data compression, denoising, anomaly detection, and generative modelling.
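The encode/decode/reconstruction-error loop can be sketched with a hand-picked linear projection; a real autoencoder would learn nonlinear weights precisely by minimising this error:

```python
import numpy as np

# A minimal sketch of the autoencoder idea: an encoder maps the input
# to a lower-dimensional code, a decoder reconstructs it, and quality
# is measured by the reconstruction error. The projection W here is
# hand-picked for illustration, not learned.

def encode(x, W):
    return x @ W       # 3-D input -> 2-D latent code

def decode(z, W):
    return z @ W.T     # 2-D code -> 3-D reconstruction

W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])   # keeps the first two features, drops the third

x = np.array([0.5, -0.3, 0.0])
z = encode(x, W)
x_hat = decode(z, W)
reconstruction_error = float(np.sum((x - x_hat) ** 2))
print(z, x_hat, reconstruction_error)
```

Here the dropped third feature happens to be zero, so reconstruction is perfect; for inputs with information in that feature, the error would be nonzero, and training would search for the projection that loses the least.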

 

Conclusion

In this blog, we have introduced some of the most common and popular deep learning models and explained when and how to use them. We have also seen that deep learning models can be classified into supervised and unsupervised models, depending on whether they use labelled or unlabelled data. We hope that this blog has given you a brief overview of deep learning models and inspired you to learn more about them.




Everything You Need to Know About Computer Vision



Introduction:

Computer vision is a field of computer science that focuses on replicating parts of the complexity of the human vision system and enabling computers to identify, process, and analyse visual data from the world around us. It is a powerful and compelling type of artificial intelligence that has numerous applications in various industries, including healthcare, automotive, retail, and more. In this blog post, we will explore the basics of computer vision, including what it is, how it works, and its applications.

I. What is Computer Vision?

Definition of computer vision: Computer vision is a field of artificial intelligence that focuses on enabling computers to derive information from digital images, videos, and other inputs.

Brief history of computer vision: The history of computer vision dates back to the 1950s, when early experiments were conducted to distinguish between typed and handwritten text using computers.

Importance of computer vision in AI: Because it allows computers to interpret and comprehend the visual environment, computer vision is an essential branch of artificial intelligence (AI). Machines can reliably recognise and classify items in digital photos and videos, and then react to what they "see". Many AI applications rely on computer vision, such as self-driving cars, face recognition, medical image analysis, and surveillance systems. Computer vision functions similarly to human vision, except that computers can interpret visual data considerably quicker, and for some tasks more accurately, than humans. Using deep learning and neural networks, computers can be trained to analyse enormous datasets of images and to uncover characteristics and patterns that can then be applied to new images.

II. How Does Computer Vision Work?

Overview of the computer vision process: The computer vision process consists of three basic stages: acquiring an image or video from a camera, processing the image, and analysing it to extract useful information. In the first stage, a camera captures the image or video. In the second stage, image processing, a series of procedures including image enhancement, segmentation, and feature extraction prepares the picture for analysis. In the third stage, the processed image is analysed to extract pertinent information, such as object detection, tracking, and categorisation. To understand visual input at the pixel level, machine learning techniques such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are utilised.
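The three stages can be sketched on a toy example. Here a tiny synthetic image stands in for the camera, and simple thresholding stands in for the learned models a real system would use:

```python
import numpy as np

# A minimal sketch of the three-stage pipeline: (1) acquire pixels,
# (2) process by thresholding to segment foreground from background,
# (3) analyse by measuring the segmented region. Real systems replace
# stage 2 with learned models such as CNNs.

# Stage 1: acquisition (a synthetic 4x4 grayscale image, values 0-255)
image = np.array([[ 10,  12, 200, 210],
                  [ 11,  13, 205, 220],
                  [  9,  14, 198, 215],
                  [ 10,  12, 202, 212]])

# Stage 2: processing - segment bright foreground pixels
mask = image > 128

# Stage 3: analysis - extract a useful quantity from the segmentation
foreground_fraction = mask.mean()
print(foreground_fraction)  # half of the pixels are foreground
```

However simple, this mirrors the structure of real pipelines: raw pixels in, a processed intermediate representation, and a compact, decision-ready quantity out.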

Techniques used in computer vision: Techniques used in computer vision include feature detection, which involves computing abstractions of image information and making local decisions at every image point about whether a specific structure, such as a point, edge, or object, is present. Other techniques include machine vision, which provides imaging-based automatic inspection and analysis for applications such as process control and robot guidance. Machine learning and deep learning also play a central role: modern computer vision systems increasingly learn features directly from data rather than relying on hand-crafted techniques.

III. Applications of Computer Vision

Healthcare: Medical image computing (MIC) develops computational and mathematical methods for solving problems pertaining to medical images and their use in biomedical research and clinical care. The main goal of MIC is to extract clinically relevant information or knowledge from medical images. While closely related to the field of medical imaging, MIC focuses on the computational analysis of the images, not their acquisition. The methods can be grouped into several broad categories: image segmentation, image registration, image-based physiological modelling, and others.



Retail:
 
Companies like Trax Retail employ this technology to assist merchants in streamlining their in-store procedures and enhancing the consumer experience.

Virtual mirrors, which employ computer vision, face identification, and face tracking technologies to analyse visual patterns and convey digital information, are another example of how computer vision is used in retail.

Virtual mirrors are used to show marketing and promotional content, teach customers about products, and improve the in-store experience.

In the retail sector, machine vision is also utilised for imaging-based automatic inspection and analysis for uses including automatic inspection, process control, and robot navigation. Another method used in computer vision is feature detection, which identifies certain structures in retail photos like points, edges, or objects.


Automotive: In self-driving cars, computer vision is one of the most important and useful technologies. In fact, the camera is arguably the only sensor you cannot ditch in a self-driving car.

The article "Introduction to Computer Vision for Self-Driving Cars" describes how computer vision works for basic applications using "traditional" techniques, and identifies three major perception problems to solve with computer vision: lane line detection, obstacle and road sign/light detection, and steering angle computation. These are addressed, respectively, with traditional computer vision, machine learning, and deep learning.




IV. Benefits of Computer Vision

Increased efficiency and accuracy: By automating image-based inspection and analysis, computer vision can perform tasks faster and more accurately than humans.

Feature detection is one technique used in computer vision to identify specific structures in images such as points, edges, or objects.

Another technique is machine vision, which provides imaging-based automatic inspection and analysis for applications such as process control and robot guidance.

Improved decision-making: By analysing and comprehending digital pictures, computer vision can generate numerical or symbolic information that may be used to make decisions.

V. Future of Computer Vision

Advancements in computer vision technology: The future of computer vision is promising, with advancements in technology leading to new and improved applications. Computer vision technology is expected to become more accurate and efficient, with the ability to process larger amounts of data in real time. Meta AI, an artificial intelligence laboratory that belongs to Meta Platforms Inc. (formerly Facebook, Inc.), is conducting research on computer vision to extract information about the environment from digital images and videos. Machine perception, a related field, includes methods for interpreting data in a manner similar to the way humans use their senses to relate to the world around them; this can help computers take in sensory input in a human-like way and support more accurate decision-making. The emerging technology of machine vision is also expected to have significant potential in applications such as biometrics, robotics, and autonomous vehicles. Overall, advancements in computer vision technology are expected to lead to new and improved applications across a wide range of industries.



Potential impact on various industries:
 
The future of computer vision is expected to have a significant impact on various industries. Meta AI, an artificial intelligence laboratory that belongs to Meta Platforms Inc., is conducting research on computer vision to improve augmented and artificial reality technologies. Computer vision technology can be used in a wide range of industries, including healthcare, retail, and manufacturing. In healthcare, computer vision can be used for medical image computing, extracting clinically relevant information or knowledge from medical images. In retail, computer vision can be used to optimise in-store operations and improve the customer experience. In manufacturing, machine vision can provide imaging-based automatic inspection and analysis for applications such as process control and robot guidance. The book "The Industries of the Future" explores how advances in robotics and the life sciences will change the world. Overall, the potential impact of computer vision on various industries is significant, and the technology is expected to continue to advance and improve in the future.

Ethical considerations: As computer vision technology continues to advance, there are ethical considerations that need to be addressed. The ethics of technology is a sub-field of ethics that addresses questions specific to the technology age, including the use of technologies such as computer vision; the ethics of artificial intelligence is the branch specific to artificially intelligent systems. As computer vision becomes more advanced, there are concerns about privacy, bias, and the potential misuse of the technology. For example, facial recognition technology has been criticised for its potential use in surveillance and the violation of privacy rights. It is therefore important for developers and users of computer vision technology to consider the ethical implications of its use and to ensure that it is used in a responsible and ethical manner.


Conclusion:

Computer vision is a rapidly growing field with numerous applications and benefits. As technology continues to advance, we can expect to see even more exciting developments in the field of computer vision. From healthcare to retail to security, computer vision has the potential to revolutionize the way we live and work. As with any technology, it is important to consider the ethical implications and ensure that it is used in a responsible and beneficial way.



Wednesday, April 19, 2023

What is machine learning, exactly?



Machine learning is a fast-expanding discipline of computer science that focuses on creating algorithms and models that can learn from data and improve their performance over time. This blog article will go through the fundamentals of machine learning and its applications.

Machine learning is an artificial intelligence (AI) discipline that uses statistical and mathematical approaches to allow computer systems to learn from data without being explicitly programmed. In other words, rather than depending on human intervention, machine learning algorithms can automatically improve their performance by learning from examples.


Machine learning classifications

Machine learning is classified into three types: supervised learning, unsupervised learning, and reinforcement learning.

Supervised learning entails training a model on labelled data with the desired output known ahead of time. This training data is then used by the model to generate predictions on fresh, previously unknown data.

Unsupervised learning entails training a model on unlabeled data with no prior knowledge of the desired outcome. After that, the model attempts to discover patterns or correlations in the data.

Reinforcement learning is the process of training a model by trial and error, with the model receiving feedback in the form of rewards or penalties based on its performance.
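Supervised learning, the first of the three types, can be sketched with a deliberately tiny model: a nearest-centroid classifier fitted on labelled 1-D data. The data and labels below are illustrative:

```python
# A minimal sketch of supervised learning: fit a nearest-centroid
# model on labelled data, then predict labels for fresh, previously
# unseen inputs. Training means computing one centroid per class.

def fit_centroids(xs, labels):
    centroids = {}
    for label in set(labels):
        points = [x for x, l in zip(xs, labels) if l == label]
        centroids[label] = sum(points) / len(points)
    return centroids

def predict(x, centroids):
    # assign the label whose centroid is closest to x
    return min(centroids, key=lambda label: abs(x - centroids[label]))

xs = [1.0, 1.2, 0.8, 5.0, 5.5, 4.5]                     # training inputs
labels = ["low", "low", "low", "high", "high", "high"]  # known outputs

centroids = fit_centroids(xs, labels)
print(predict(1.1, centroids))  # low
print(predict(4.9, centroids))  # high
```

An unsupervised method given the same xs without labels would instead have to discover the two groupings itself, and reinforcement learning would learn from reward signals rather than from labelled pairs at all.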



Machine Learning Applications




Machine learning has several applications in a variety of areas, including healthcare, finance, retail, and others. Among the most frequent machine learning applications are:

Image and speech recognition: machine learning methods may be used to detect and identify objects in images and speech.

Natural language processing: machine learning algorithms may be used to analyse and comprehend human language, allowing chatbots and virtual assistants to offer more accurate and tailored replies.

Fraud detection: machine learning algorithms may be used to detect fraudulent transactions or activities by evaluating patterns in data.

Recommendation systems: machine learning algorithms can be used to propose items or services to customers based on their past behaviour and preferences.


Problems and Concerns




While there are several advantages to machine learning, there are also questions about its possible hazards and limits. One of the most significant issues of machine learning is the possibility of bias in algorithms, which might perpetuate existing societal disparities and penalize specific groups.


Furthermore, there are questions regarding machine learning's ethical implications, particularly in domains such as healthcare and finance. It is critical for academics and practitioners to think about these challenges and work towards ethical and responsible machine learning practices.


Conclusion


Machine learning is a useful technique for addressing complicated issues and increasing computer system efficiency. Machine learning has the potential to change sectors ranging from healthcare to retail by allowing computers to learn from data.

However, there are concerns regarding machine learning's potential hazards and limits, notably in areas such as prejudice and ethical considerations. To guarantee that the advantages of this technology are realized without causing damage, academics and practitioners must work together to establish ethical and responsible machine learning practices.




Tuesday, April 18, 2023

What are the ethical implications of artificial intelligence?





AI is becoming increasingly widespread in our culture, with applications ranging from self-driving vehicles to personalised treatment. However, as AI becomes more common, severe ethical questions about its impact on society arise. This blog post will examine the ethical problems surrounding artificial intelligence.

The set of norms and ideals that guide the development and use of AI systems is known as "ethics in AI." It requires considering the potential impact of AI on society and ensuring that its use is consistent with key human values such as privacy, fairness, and accountability.


What is the significance of AI ethics?

Artificial intelligence has the potential to significantly improve human lives, but it also offers risks and challenges. As AI systems advance, they may make decisions that have far-reaching effects for individuals and society as a whole.

As a result, it is vital to assess the ethical implications of AI research and use to ensure that they are consistent with our values and serve the greater good.


Important AI Ethical Considerations

Bias: Artificial intelligence systems can reflect and reinforce societal biases and discrimination. AI systems must be carefully developed and tested to minimise bias and promote fairness and equity.

Privacy: Because AI systems usually rely on large amounts of personal data, privacy and data protection concerns have arisen. It is vital that AI systems are transparent about how they collect and use data and that users have control over their personal data.

Accountability: As AI systems become increasingly self-sufficient, it becomes more difficult to assign responsibility for their behaviour. It is vital that AI systems be designed to be open and accountable, and that people have recourse if AI systems cause them harm.

Human judgement: While AI systems can be highly efficient and precise, they lack human contextual knowledge and moral reasoning. It is vital to ensure that AI systems are designed to augment rather than replace human decision-making.
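The bias consideration above can be made measurable. One simple, widely used check is demographic parity, which compares a model's positive-decision rate across groups. The decisions below are hypothetical:

```python
# A minimal sketch of a demographic-parity check: compare the rate of
# positive decisions (e.g. approvals) a model gives to two groups.
# The decision lists are hypothetical, illustrative data.

def positive_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 0, 1]  # 1 = approved, 0 = denied
group_b = [1, 0, 0, 0, 1, 0]

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(gap)  # a large gap signals possible bias worth investigating
```

A nonzero gap does not prove discrimination on its own, but tracking such metrics is one concrete way to act on the fairness and accountability principles discussed above.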


The Ethics of Artificial Intelligence in the Future

The ethical challenges surrounding AI research and implementation will grow increasingly challenging as technology improves. Researchers, policymakers, and business leaders must work together to design ethical frameworks for AI that ensure its development and application are in line with our values and serve the greater good.


Conclusion

The ethical implications of AI are enormous, raising major worries about the impact of AI on society. As AI becomes more prevalent, it is vital to recognise the potential risks and difficulties it may cause, as well as to ensure that its development and application reflect our values and serve the greater good.

By supporting fairness, openness, and accountability in AI research and use, we may embrace AI's promise to improve our lives while avoiding potential damage.




Monday, April 17, 2023

Generative Artificial Intelligence


 


Generative artificial intelligence (generative AI, or GenAI) is a type of AI system that can produce text, images, or other kinds of media in response to prompts. Generative AI systems use generative models, such as large language models, to produce data based on the training data set that was used to develop them.

Notable generative AI systems include Bard, a chatbot developed by Google using the LaMDA model, and ChatGPT, a chatbot developed by OpenAI utilising the GPT-3 and GPT-4 large language models. Artificial intelligence art systems such as Stable Diffusion, Midjourney, and DALL-E are further examples of generative AI models.

Numerous sectors, including software development, marketing, and fashion, might benefit from generative AI. Early in the 2020s, there was a significant increase in investment in generative AI, with multiple smaller startups in addition to well-known corporations like Microsoft, Google, and Baidu creating generative AI models.


Modalities

[Image: Théâtre d'Opéra Spatial, a detailed oil painting of figures in a futuristic opera scene, generated by Midjourney]

A generative AI system is constructed by applying unsupervised or self-supervised machine learning to a data set. The capabilities of a generative AI system depend on the modality or type of the data set used.

 

Text: Generative AI systems trained on words or word tokens include GPT-3, LaMDA, LLaMA, BLOOM, GPT-4, and others (see List of large language models). They are capable of natural language processing, machine translation, and natural language generation and can be used as foundation models for other tasks. Data sets include BookCorpus, Wikipedia, and others (see List of text corpora).

Code: In addition to natural language text, large language models can be trained on programming language text, allowing them to generate source code for new computer programs. Examples include OpenAI Codex.

Images: Generative AI systems trained on sets of images with text captions include Imagen, DALL-E, Midjourney, Stable Diffusion, and others (see Artificial intelligence art, Generative art, Synthetic media). They are commonly used for text-to-image generation and neural style transfer. Datasets include LAION-5B and others (see Datasets in computer vision).

Molecules: Generative AI systems can be trained on sequences of amino acids or molecular representations such as SMILES representing DNA or proteins. These systems, such as AlphaFold, are used for protein structure prediction and drug discovery. Datasets include various biological datasets.

Music: Generative AI systems such as MusicLM can be trained on the audio waveforms of recorded music along with text annotations, in order to generate new musical samples based on text descriptions such as "a calming violin melody backed by a distorted guitar riff".

Video: Generative AI trained on annotated video can generate temporally-coherent video clips. Examples include Gen1 by RunwayML and Make-A-Video by Meta Platforms.

Multimodal: A generative AI system can be built from multiple generative models, or one model trained on multiple types of data. For example, one version of OpenAI's GPT-4 accepts both text and image inputs.


Computational creativity

The multidisciplinary field of computational creativity, also referred to as artificial creativity, mechanical creativity, creative computing, or creative computation, is at the nexus of artificial intelligence, cognitive psychology, philosophy, and the arts (for example, computational art as part of computational culture).

Computational creativity aims to model, simulate, or replicate creativity in order to accomplish one or more of several goals: to develop a computer programme or system capable of creativity at the level of a human; to develop a computational perspective on human creativity and thereby a better understanding of it; and to develop software that can enhance human creativity without necessarily being creative itself. The discipline of computational creativity addresses both the theoretical and the practical problems of the study of creativity: theoretical work on the nature and proper definition of creativity is carried out concurrently with the implementation of systems that demonstrate creativity, with each strand of work informing the other.

Computational creativity in application is often referred to as media synthesis.


The problems of theory

What lies at the core of creativity is a matter of theoretical debate. In particular, it must be clarified under what conditions a model qualifies as "creative", since true originality entails breaking rules or rejecting convention. This is a variation of Ada Lovelace's objection to machine intelligence, which contemporary theorists such as Teresa Amabile have restated: how can a machine's behaviour ever be described as creative if it can only carry out the tasks that were programmed into it?

 

A crucial argument in favour of computational creativity is that not all computer theorists accept the premise that computers can only perform the tasks they were designed to perform.

 Defining creativity in computational terms

Because no single perspective or definition seems to offer a complete picture of creativity, the AI researchers Newell, Shaw, and Simon made the combination of novelty and usefulness the cornerstone of a multi-pronged view of creativity, one that uses the following four criteria to categorize a given answer or solution as creative:

The answer is novel and useful (either for the individual or for society).

The answer demands that we reject ideas we had previously accepted.

The answer results from intense motivation and persistence.

The answer comes from clarifying a problem that was originally vague.
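
The four criteria above can be expressed as a simple checklist. The sketch below is purely illustrative: the predicate names are invented, the judgements would in practice come from human raters, and treating the criteria conjunctively (all four must hold) is an assumption, not something Newell, Shaw, and Simon specify.

```python
# A hypothetical encoding of the four creativity criteria as a checklist.
# Each judgement is a yes/no rating supplied externally (e.g. by a human rater).
from dataclasses import dataclass

@dataclass
class Judgement:
    novel_and_useful: bool         # criterion 1: new and of value
    rejects_prior_ideas: bool      # criterion 2: overturns previously accepted ideas
    motivated_and_persistent: bool # criterion 3: result of intense motivation and persistence
    clarifies_vague_problem: bool  # criterion 4: sharpens an originally vague problem

def is_creative(j: Judgement) -> bool:
    """Assumption: a solution counts as creative only when all four criteria hold."""
    return all([j.novel_and_useful, j.rejects_prior_ideas,
                j.motivated_and_persistent, j.clarifies_vague_problem])

print(is_creative(Judgement(True, True, True, True)))   # True
print(is_creative(Judgement(True, False, True, True)))  # False
```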

Whereas the above reflects a top-down approach to computational creativity, an alternative thread has developed among bottom-up computational psychologists involved in artificial neural network research. During the late 1980s and early 1990s, for example, such generative neural systems were driven by genetic algorithms. Experiments involving recurrent nets were successful in hybridizing simple musical melodies and predicting listener expectations.
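
The genetic-algorithm-driven melody experiments mentioned above can be illustrated with a toy crossover-and-mutation step. This is a minimal sketch under invented assumptions: the melodies, pitch encoding, and mutation rate are made up for illustration and are not taken from the original experiments.

```python
import random

random.seed(0)  # for reproducibility of this illustration

# Melodies encoded as lists of MIDI pitch numbers (invented examples).
parent_a = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale
parent_b = [60, 63, 65, 66, 67, 70, 72, 75]  # C blues-flavoured line

def crossover(a, b):
    """Single-point crossover: splice a prefix of one melody onto a suffix of the other."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(melody, rate=0.1):
    """Occasionally nudge a note up or down a semitone."""
    return [p + random.choice([-1, 1]) if random.random() < rate else p
            for p in melody]

child = mutate(crossover(parent_a, parent_b))
print(child)
```

A full generative system would repeat this step over a population, selecting offspring by a fitness function (for example, a trained network's prediction of listener expectations).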



Sunday, April 16, 2023

Symbolic Artificial Intelligence


 


Artificial intelligence research methodologies that are based on high-level symbolic (human-readable) representations of problems, logic, and search are collectively referred to as symbolic artificial intelligence. Symbolic AI developed applications like knowledge-based systems (in particular, expert systems), symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. It used tools like logic programming, production rules, semantic nets, and frames. Seminal concepts in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the benefits and drawbacks of formal knowledge and reasoning systems emerged as a result of the symbolic AI paradigm.
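
To make one of these tools concrete, here is a minimal forward-chaining sketch of production rules. The facts and rules are invented for illustration; real production systems (such as those underlying expert systems) add conflict resolution, variables, and pattern matching on top of this basic loop.

```python
# Minimal forward chaining over if-then production rules.
# Each rule is a pair: (set of premise facts, conclusion fact).
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known,
    until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_feathers", "can_fly"}, rules))
# derives "is_bird", then "can_migrate"
```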

From the mid-1950s through the mid-1990s, symbolic AI dominated AI research. Researchers in the 1960s and 1970s believed the ultimate objective of their field was to develop a machine with artificial general intelligence using symbolic methods. Early triumphs such as the Logic Theorist and Samuel's checkers-playing programme led to inflated hopes and promises, which were followed by the First AI Winter when funding dried up. A second boom (1969–1986) came with the emergence of expert systems, their promise of capturing corporate expertise, and enthusiastic corporate adoption.

This period of prosperity, and early successes such as XCON at DEC, was ultimately followed by disillusionment. Large knowledge bases proved difficult to maintain, and the systems were brittle when confronted with problems outside their domain of expertise. Then came the Second AI Winter (1988–2011). AI researchers subsequently concentrated on fundamental problems of handling uncertainty and of learning. Uncertainty was addressed with formal methods such as hidden Markov models, Bayesian reasoning, and statistical relational learning. Symbolic machine learning tackled the knowledge-acquisition problem with contributions including version space learning, Valiant's PAC learning, Quinlan's ID3 decision-tree learning, case-based learning, and inductive logic programming for learning relations.

A subsymbolic technique known as neural networks was pursued from the beginning and made a significant comeback in 2012. Early examples include Rosenblatt's work on perceptron learning, the backpropagation work of Rumelhart, Hinton, and Williams, and LeCun et al.'s 1989 work on convolutional neural networks. However, until around 2012 neural networks were not considered successful; the neural-network approach was widely regarded as hopeless in the AI world until Big Data became the norm, because the systems simply weren't as effective as other approaches.




Foundational ideas

The symbolic approach was succinctly expressed in the "physical symbol system hypothesis" proposed by Newell and Simon in 1976:

A physical symbol system has the necessary and sufficient means for general intelligent action.

A second maxim was then adopted by practitioners employing knowledge-based approaches, emphasising that high performance in a specific domain requires both general and very domain-specific knowledge, as in the saying "In the knowledge lies the power." Doug Lenat and Ed Feigenbaum called this the Knowledge Principle:

(1) The Knowledge Principle: a programme must have extensive knowledge of the domain in which it operates if it is to perform a complex task successfully.

(2) A plausible extension of this principle, the Breadth Hypothesis: intelligent action in unexpected situations requires two additional abilities, namely falling back on increasingly general knowledge and drawing analogies to specific but distant knowledge.

Finally, as deep learning has gained popularity, symbolic AI and deep learning have come to be seen as complementary. AI researchers frequently draw parallels with Kahneman's research on human reasoning and decision-making, reflected in his book Thinking, Fast and Slow: his System 1 and System 2 would, in principle, be modelled by deep learning and symbolic reasoning, respectively. On this view, deep learning is better suited to fast pattern recognition in perceptual applications with noisy data, whereas symbolic reasoning is better suited to deliberate reasoning, planning, and explanation.



A brief history

The following is a brief history of symbolic AI up to the present. Dates and titles are adapted, for clarity, from Henry Kautz's 2020 AAAI Robert S. Engelmore Memorial Lecture and the longer Wikipedia article on the history of AI.

Irrational euphoria during the first AI summer, 1948–1966

Early experiments in AI had success primarily in three areas: knowledge representation, heuristic search, and artificial neural networks. These successes raised expectations. This section recaps Kautz's account of the earliest history of AI.

Approaches inspired by human or animal cognition or behaviour

Cybernetic approaches attempted to replicate the feedback loops between animals and their environments. A robotic turtle, with sensors, motors for driving and steering, and seven vacuum tubes for control, based on a pre-programmed neural net, was built as early as 1948. This work can be seen as an early precursor to later work in neural networks, reinforcement learning, and situated robotics.

An important early symbolic AI program was the Logic Theorist, written by Allen Newell, Herbert Simon, and Cliff Shaw in 1955–56; it was able to prove 38 elementary theorems from Whitehead and Russell's Principia Mathematica. Newell, Simon, and Shaw later generalized this work to create a domain-independent problem solver, GPS (General Problem Solver). GPS solved problems represented with formal operators via state-space search using means-ends analysis.
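
The idea behind means-ends analysis can be sketched in a few lines: to achieve a goal fact, pick an operator that produces it, recursively achieve the operator's preconditions, then apply it. The sketch below is a toy in the spirit of GPS, not GPS itself; the door-and-key domain and operator names are invented for illustration.

```python
# A toy state-space planner using means-ends-style subgoaling.
# Operators have preconditions ("needs"), an add-list, and a delete-list.
operators = {
    "pick_up_key":  {"needs": {"at_door"},            "adds": {"has_key"},   "dels": set()},
    "unlock_door":  {"needs": {"has_key", "at_door"}, "adds": {"door_open"}, "dels": set()},
    "walk_through": {"needs": {"door_open"},          "adds": {"inside"},    "dels": {"at_door"}},
}

def achieve(fact, state, depth=5):
    """To achieve a fact, pick an operator that adds it, recursively achieve
    its preconditions, then apply it. Returns (plan, resulting_state),
    or None if the fact is unreachable within the depth bound."""
    if fact in state:
        return [], set(state)
    if depth == 0:
        return None
    for name, op in operators.items():
        if fact in op["adds"]:
            plan, s = [], set(state)
            for pre in op["needs"]:
                result = achieve(pre, s, depth - 1)
                if result is None:
                    break
                subplan, s = result
                plan += subplan
            else:
                if op["needs"] <= s:  # re-check: a subplan may have deleted a precondition
                    return plan + [name], (s - op["dels"]) | op["adds"]
    return None

plan, _ = achieve("inside", {"at_door"})
print(plan)  # ['pick_up_key', 'unlock_door', 'walk_through']
```

Note the greedy, depth-bounded search: unlike full GPS, this sketch does not backtrack over operator choices or order subgoals by difficulty, which is enough for the toy domain but not for harder problems.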

During the 1960s, symbolic approaches achieved great success at simulating intelligent behaviour in structured environments such as game-playing, symbolic mathematics, and theorem-proving. AI research in the 1960s was centred at Carnegie Mellon University, Stanford, and MIT, and (later) the University of Edinburgh. Each institution developed its own style of research. Earlier approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.

The study of human problem-solving abilities by Herbert Simon and Allen Newell, and their attempts to formalise those abilities, laid the groundwork for the fields of artificial intelligence, cognitive science, operations research, and management science. Their research team used the findings of psychological experiments to build software that simulated human problem-solving methods. This line of work, begun at Carnegie Mellon University, culminated in the creation of the Soar architecture in the mid-1980s.



