Symbolic Artificial Intelligence
Artificial intelligence research methodologies that
are based on high-level symbolic (human-readable) representations of problems,
logic, and search are collectively referred to as symbolic artificial intelligence.
Symbolic AI developed applications like knowledge-based systems (in particular,
expert systems), symbolic mathematics, automated theorem provers, ontologies,
the semantic web, and automated planning and scheduling systems. It used tools
like logic programming, production rules, semantic nets, and frames. Seminal
concepts in search, symbolic programming languages, agents, multi-agent
systems, the semantic web, and the benefits and drawbacks of formal knowledge
and reasoning systems emerged as a result of the symbolic AI paradigm.
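One of the tools listed above, production rules, can be illustrated with a short sketch. This is a minimal, hypothetical forward-chaining engine (the rule names and facts are invented for illustration, not taken from any historical system): rules fire whenever their antecedent facts are all present, adding their consequent, until no new facts can be derived.

```python
# Minimal sketch of a forward-chaining production-rule system.
# Each rule pairs a list of antecedent facts with a consequent fact.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose antecedents hold until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and set(antecedents) <= facts:
                facts.add(consequent)
                changed = True
    return facts

# Invented toy rules for illustration only.
rules = [
    (["has_feathers"], "is_bird"),
    (["is_bird", "can_fly"], "nests_in_trees"),
]
print(forward_chain(["has_feathers", "can_fly"], rules))
```

Real production systems of the era (e.g. OPS5) added conflict-resolution strategies and efficient matching; this sketch only shows the fire-until-quiescence control loop.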
From the middle of the 1950s through the middle of the
1990s, symbolic AI dominated AI research. Researchers in the 1960s and 1970s
believed that the ultimate objective of their field was to develop a computer
with artificial general intelligence using symbolic methods. Early triumphs
like Samuel's checkers-playing program and the Logic Theorist raised inflated
hopes and promises, which were followed by the First AI Winter when funding dwindled.
With the emergence of expert systems, their promise of capturing corporate
expertise, and an enthusiastic corporate acceptance (1969–1986), there was a
second boom.
This period of prosperity and early achievements,
like XCON at DEC, was ultimately followed by disillusionment: large knowledge
bases proved costly to maintain, and the systems were brittle when faced with
problems outside their domain of expertise. Then came a
second AI Winter (1988–2011). AI researchers then concentrated on finding
solutions to fundamental issues with handling uncertainty and learning. Formal
techniques like hidden Markov models, Bayesian reasoning, and statistical
relational learning were used to deal with uncertainty. Symbolic machine
learning tackled the knowledge acquisition problem with contributions from
version spaces, Valiant's PAC learning, Quinlan's ID3 decision-tree learning,
case-based learning, and inductive logic programming for learning relations.
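Of the contributions above, Quinlan's ID3 is easy to illustrate: at each node of the growing tree it splits on the attribute with the highest information gain. Below is a sketch of just that attribute-selection step; the tiny dataset and attribute names are invented for illustration and are not from Quinlan's original experiments.

```python
# Sketch of ID3's attribute-selection step: pick the attribute whose
# split yields the largest expected reduction in label entropy.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Expected entropy reduction from splitting the rows on `attr`."""
    base = entropy(labels)
    remainder = 0.0
    for value in set(r[attr] for r in rows):
        subset = [lab for r, lab in zip(rows, labels) if r[attr] == value]
        remainder += len(subset) / len(labels) * entropy(subset)
    return base - remainder

# Invented toy data: "outlook" perfectly separates the labels,
# while "windy" carries no information about them.
rows = [
    {"outlook": "sunny", "windy": True},
    {"outlook": "sunny", "windy": False},
    {"outlook": "rain", "windy": True},
    {"outlook": "rain", "windy": False},
]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, "outlook"))  # 1.0
print(information_gain(rows, labels, "windy"))    # 0.0
```

ID3 applies this selection recursively, removing the chosen attribute and partitioning the rows by its values until each leaf is pure.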
A subsymbolic technique known as neural networks was
pursued from the beginning and would make a significant comeback in 2012. Early
examples include Rosenblatt's work on perceptron learning, the backpropagation
work of Rumelhart, Hinton, and Williams, and LeCun et al.'s 1989 work on
convolutional neural networks. However, neural networks were not considered
successful until about 2012: until Big Data became the norm, the general
consensus in the AI community was that the so-called neural-network approach
was hopeless, since the systems simply were not as effective as other approaches.
Foundational ideas
The symbolic approach was succinctly expressed in the
"physical symbol system hypothesis" proposed by Newell and Simon in
1976:
A physical symbol system has the necessary and sufficient means for
general intelligent action.
Practitioners employing knowledge-based approaches
later adopted a second maxim, "In the knowledge lies the power,"
to emphasise that achieving high performance in a particular domain
requires both general and highly domain-specific knowledge. Doug Lenat
and Ed Feigenbaum dubbed this the Knowledge Principle:
(1) The Knowledge Principle: a program must have
extensive knowledge of the environment it operates in if it is to successfully
complete a complex task.
(2) The Breadth Hypothesis, a reasonable extension
of this idea, states that two additional abilities are essential for
intelligent action in unexpected situations: falling back on ever-more-general
knowledge, and drawing analogies to specific but distant knowledge.
Finally, as deep learning has gained popularity, AI
researchers have come to view the symbolic approach as complementary to deep
learning, frequently drawing parallels to Kahneman's research on human
reasoning and decision making, reflected in his book Thinking, Fast and Slow:
the so-called "AI systems 1 and 2" would theoretically be modelled
by deep learning and symbolic reasoning, respectively. On this view, symbolic
reasoning is more suited for deliberate reasoning, planning, and explanation,
whereas deep learning is better suited for rapid pattern recognition in
perceptual applications with noisy data.
A succinct narrative
The following is a brief history of symbolic AI up to
the present. Time periods and titles are drawn from Henry Kautz's 2020 AAAI
Robert S. Engelmore Memorial Lecture and the longer Wikipedia article on the
history of AI, with dates and titles altered slightly for clarity.
Irrational euphoria during the first AI summer,
1948–1966
Early attempts at AI had success primarily in three
areas: knowledge representation, heuristic search, and artificial neural
networks; these successes raised expectations. This section recaps Kautz's
history of the earliest AI.
Approaches inspired by human or animal
cognition or behaviour
Cybernetic approaches attempted to replicate the
feedback loops between animals and their environments. A robotic turtle, with
sensors, motors for driving and steering, and seven vacuum tubes for control,
based on a pre-programmed neural net, was built as early as 1948. This work can
be seen as an early precursor to later work in neural networks, reinforcement
learning, and situated robotics.
An important early symbolic AI program was the Logic
Theorist, written by Allen Newell, Herbert Simon and Cliff Shaw in 1955–56, as
it was able to prove 38 elementary theorems from Whitehead and Russell's
Principia Mathematica. Newell, Simon, and Shaw later generalized this work to
create a domain-independent problem solver, GPS (General Problem Solver). GPS
solved problems represented with formal operators via state-space search using
means-ends analysis.
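The means-ends analysis loop described above can be sketched briefly. This is a hedged, simplified reconstruction, not GPS's actual encoding: the operator names, preconditions, and effects below are invented toy examples, and the sketch omits GPS's difference tables and any protection against cyclic subgoals.

```python
# Sketch of means-ends analysis: to satisfy an unmet goal condition,
# find an operator whose effects add it, recursively achieve that
# operator's preconditions, then apply it.

def achieve(state, goals, operators, plan):
    """Try to make every goal true in `state`, appending operators to `plan`."""
    for g in goals:
        if g in state:
            continue
        for name, preconds, adds in operators:
            # Try any operator whose add-list supplies the missing goal.
            if g in adds and achieve(state, preconds, operators, plan):
                state |= adds
                plan.append(name)
                break
        else:
            return False  # no operator chain can achieve g
    return True

def means_ends(initial, goals, operators):
    """Return a plan (operator names in order) or None. No cycle detection."""
    state, plan = set(initial), []
    return plan if achieve(state, set(goals), operators, plan) else None

# Invented toy operators: (name, preconditions, effects).
operators = [
    ("drive_to_airport", {"at_home", "have_car"}, {"at_airport"}),
    ("fly", {"at_airport", "have_ticket"}, {"at_destination"}),
]
print(means_ends({"at_home", "have_car", "have_ticket"},
                 {"at_destination"}, operators))
```

The key idea, visible in the recursion, is that the difference between the current state and the goal selects the operator, and the operator's unmet preconditions become subgoals.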
During the 1960s, symbolic approaches achieved great
success at simulating intelligent behaviour in structured environments such as
game-playing, symbolic mathematics, and theorem-proving. In the 1960s, AI
research was centered at Carnegie Mellon University, Stanford, MIT, and
(later) the University of Edinburgh. Each one developed its own
style of research. Earlier approaches based on cybernetics or artificial neural
networks were abandoned or pushed into the background.
The study of human problem-solving abilities, and
attempts to formalise them, by Herbert Simon and Allen Newell laid the
groundwork for the fields of artificial intelligence, cognitive science,
operations research, and management science. Their research team used the
findings of psychological experiments to develop programs that simulated
human problem-solving methods. This line of work, begun at Carnegie Mellon
University, culminated in the creation of the Soar architecture in the middle
of the 1980s.



