ARTIFICIAL INTELLIGENCE FOR GLAUCOMA SPECIALISTS
Artificial intelligence (AI) is the simulation or replication of human intelligence processes by machines, especially computer systems. These processes are cognitive functions seen in humans and animals, such as learning (the acquisition of information and of rules for using that information), reasoning (using rules to reach approximate or definite conclusions) and self-correction.
Simply put, AI is intelligence shown by machines, as opposed to the “natural intelligence” shown by humans and animals. The aim of AI is for machines and computers to reach a level of intelligence at which they can perform almost any human function.
AI was founded on the premise that human intelligence “can be so precisely
described that a machine can be made to simulate it.”
[Image: a scene from "The Terminator" movie franchise]
The field of AI was founded in 1956, but its early decades included long periods during which little significant progress was made (known as “AI winters”). Prior to that, in 1936, the British mathematician Alan Turing presented his paper “On Computable Numbers, with an Application to the Entscheidungsproblem” to the London Mathematical Society. According to the Church-Turing Hypothesis, symbols as simple as 0 and 1 could simulate any conceivable act of mathematical deduction; in other words, digital computers can simulate any process of formal reasoning. Turing later proposed that if a human could not distinguish the responses of a machine from those of a human, the machine could be considered “intelligent”. This was a historic milestone in the establishment of AI.
"The Imitation Game", a hard-hitting movie on Alan Turing |
A typical AI perceives its environment and takes actions that maximize its chance of successfully and efficiently achieving the goals set for it. Such AIs depend on the use of “algorithms”. An algorithm is a set of unambiguous instructions that a computer can comprehend and execute. Algorithms can be “simple” or “complex”, the latter being built on top of simpler ones. AIs can themselves be “weak”, performing straightforward tasks like retrieving information or acquiring images, or “strong”, capable of any and all cognitive functions that a human may have and, in essence, no different from a real human mind.
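To make the idea concrete, here is a minimal sketch in Python of a “simple” algorithm and a “complex” one built on top of it. The function names and the 21 mmHg cut-off are hypothetical, chosen purely for illustration, not clinical guidance:

```python
# A "simple" algorithm: unambiguous steps a computer can execute.
# The 21 mmHg cut-off is a hypothetical illustration, not clinical guidance.
def flag_elevated_iop(iop_readings_mmhg, threshold=21.0):
    """Return the readings that exceed the threshold."""
    flagged = []
    for reading in iop_readings_mmhg:
        if reading > threshold:          # one unambiguous instruction
            flagged.append(reading)
    return flagged

# A "complex" algorithm can be built on top of simpler ones:
def needs_review(iop_readings_mmhg):
    """Refer for review if any reading is flagged by the simpler algorithm."""
    return len(flag_elevated_iop(iop_readings_mmhg)) > 0

print(flag_elevated_iop([14.0, 23.5, 18.0, 26.0]))  # [23.5, 26.0]
print(needs_review([14.0, 16.0]))                   # False
```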
A machine can be presented with vast amounts of data (“Big data”). However, machines have to work intelligently, using only the useful data out of what is presented to them. Many algorithms are capable of extrapolating from data: they take in the available data and, by parsing, evaluating and comparing different pieces of it, come up with results. Parsing is the analysis of a string or text into logical syntactic components. Algorithms allow the machine to learn from its operations. Algorithms can enhance themselves by learning new heuristics (strategies or rules of thumb that have worked well in the past) or can themselves write other algorithms. Simply put, an algorithm can respond automatically by following the results of past experiences available to it (the rule of thumb), or, as the machine runs, it can work on sets of data that are new to it and reach conclusions from scratch. Every new experience is used to improve performance.
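The sketch below shows both ideas from this paragraph in miniature: parsing raw text into components, and a heuristic (rule of thumb) that adjusts itself with each new experience. The data format, class name and all numbers are invented for illustration:

```python
# A minimal sketch (hypothetical format and numbers) of two ideas above:
# parsing raw text into components, and refining a heuristic with experience.

def parse_readings(raw_line):
    """Parse a comma-separated string such as "14, 23.5, 18" into floats."""
    return [float(token) for token in raw_line.split(",")]

class ThresholdHeuristic:
    """A rule of thumb that nudges its cut-off after each new experience."""
    def __init__(self, threshold=20.0, step=0.5):
        self.threshold = threshold
        self.step = step

    def predict(self, reading):
        return reading > self.threshold

    def learn(self, reading, true_label):
        # If the rule of thumb was wrong, adjust it slightly.
        if self.predict(reading) != true_label:
            self.threshold += self.step if not true_label else -self.step

heuristic = ThresholdHeuristic()
for reading, label in [(22.0, True), (19.0, False), (18.5, True)]:
    heuristic.learn(reading, label)
print(round(heuristic.threshold, 1))  # the cut-off has shifted with experience
```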
“Machine Learning” is a subfield of AI that “gives computers the ability to learn without being explicitly programmed”. In machine learning, the computer is designed to optimize a performance criterion using data from past experience. “Data mining” algorithms look for characteristic patterns in the information available to them. Machine learning does the same thing but improves upon this ability: the program modifies its behavior based on its learning experience.
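As a minimal illustration of “optimizing a performance criterion using data from past experience”, the sketch below fits a straight line to made-up data by gradient descent on the mean squared error. Nothing about the relationship (roughly y = 2x) is programmed in explicitly; it is learned from the data:

```python
# Fit y = w*x + b to toy data by gradient descent on the mean squared error.
# The data points and learning rate are made up for illustration.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]   # roughly y = 2x

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    # Gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w        # step downhill on the performance criterion
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to 2 and 0: learned, not hand-coded
```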
[Image: relationship of AI and other subfields]
When the machine has to be trained, data is fed into it; this is called the “training set”. This is the initial baseline data against which subsequently presented data sets are compared. If the machine has to be trained to, say, identify certain images, it is trained to look for certain markers or properties called “labels”. The machine can be fed a label to identify an image: for example, a picture of the optic nerve head is labeled “ONH” and fed to the machine, so that the machine can identify the structure by looking at the label, i.e. “ONH”. On the other hand, if labels are not available, the machine can be provided with some structural landmarks that identify an optic nerve head; when presented with an image, the machine assesses it for those landmarks and concludes whether or not the image is indeed of the optic nerve head.
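A minimal sketch of learning from a labeled training set is given below, using a simple 1-nearest-neighbor rule: a new example receives the label of the training example it most resembles. The two image “features” and every number are invented for illustration:

```python
# A toy labeled training set: (features, label) pairs. The two features
# (imagine, say, cup-to-disc ratio and rim brightness) are hypothetical.
import math

training_set = [
    ((0.6, 0.8), "ONH"),
    ((0.7, 0.9), "ONH"),
    ((0.1, 0.2), "not ONH"),
    ((0.2, 0.1), "not ONH"),
]

def classify(features):
    """Label a new example with the label of its nearest training example."""
    nearest = min(training_set, key=lambda item: math.dist(item[0], features))
    return nearest[1]

print(classify((0.65, 0.85)))  # "ONH": close to the labeled ONH examples
print(classify((0.15, 0.15)))  # "not ONH"
```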
Artificial Neural Networks (ANNs, or simply “neural nets”) function like machine learning algorithms with biological models applied to them. An ANN is a software construct that seeks to imitate the behavior of the human brain through layers of artificial neurons, which are digital constructs with weighted inputs, activation functions and outputs. We shall revisit this concept in a subsequent post on “Tools in AI”.
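As a small taste of what that means in code, here is a minimal sketch of a single artificial neuron and a layer of two of them. All weights, biases and inputs are arbitrary illustrative numbers:

```python
# One artificial neuron: weighted inputs plus a bias, passed through an
# activation function to produce an output. All numbers are illustrative.
import math

def sigmoid(z):
    """A common activation function, squashing any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through the activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# A "layer" is just several neurons looking at the same inputs:
inputs = [0.5, 0.8]
layer = [([0.4, -0.6], 0.1), ([1.2, 0.7], -0.3)]  # (weights, bias) per neuron
outputs = [neuron(inputs, w, b) for w, b in layer]
print([round(o, 3) for o in outputs])
```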
AI systems are already available or in development for the detection of various ophthalmic conditions such as diabetic retinopathy, age-related macular degeneration and glaucoma. In subsequent posts, THE GLOG will look into the tools used in AI and the application of AI in glaucoma.