Tuesday, October 2, 2018

TOOLS IN ARTIFICIAL INTELLIGENCE



Computer systems are far from perfect. Glitches, loss of data, insufficient speed, hacking and other shortcomings continue to afflict them. Tools based on Artificial Intelligence (AI) are therefore being developed to overcome these problems. Some of these AI tools are as follows:
  1. Search and optimization
  2. Logic
  3. Probabilistic methods for uncertain reasoning
  4. Classifiers and statistical learning methods
  5. Artificial Neural Networks (ANN)

(1) Search & optimization: One way to tackle problems affecting computerized systems and AI is to perform an intelligent search of possible solutions. A “search algorithm” is any algorithm that solves a search problem: it retrieves information stored within some data structure, or calculated in the search space of the problem domain. Search engines such as Google and Yahoo, which are essentially searchable data banks, are familiar examples. In computer lingo, the “search space” (or “state space”) is the set of all places to search, i.e. the space containing all feasible solutions. Search algorithms operate on data structures such as linked lists, arrays and search trees. They are used for a number of AI tasks, including “pathfinding”. Pathfinding (or pathing) is the plotting, by a computer application, of the shortest route between two points.
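
As a minimal sketch of pathfinding (the grid, function name and obstacle encoding here are invented purely for illustration), a breadth-first search can plot the shortest route between two points:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search: plots the shortest route between two
    points on a grid, where 0 is open space and 1 is an obstacle."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in visited:
                visited.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 0)))  # routes around the obstacles
```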

When the search space grows to astronomical proportions because of vast amounts of data, the search becomes too slow or never completes (an information explosion). “Heuristics” (rules of thumb) are therefore used to prioritize choices in favor of those more likely to reach the goal, and to reach it in fewer steps. Heuristics can also entirely eliminate choices that are unlikely to lead to the goal (“pruning the search tree”). In short, heuristics shrink the search space and supply a program with a “best guess” for the path that leads to the solution.
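
Building on the grid sketch above, a heuristic-guided search such as A* can be illustrated as follows (again only a sketch; the Manhattan distance serves as the “best guess” of the remaining distance, and all names are illustrative):

```python
import heapq

def manhattan(a, b):
    # The heuristic: a "best guess" of the remaining distance to the goal.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(grid, start, goal):
    """A* search: always expands the path whose cost so far plus the
    heuristic estimate is smallest, so unpromising paths wait in the
    queue or are never expanded at all."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(manhattan(start, goal), 0, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, path = heapq.heappop(frontier)
        if path[-1] == goal:
            return path
        r, c = path[-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and cost + 1 < best_cost.get(nxt, float("inf"))):
                best_cost[nxt] = cost + 1
                heapq.heappush(frontier, (cost + 1 + manhattan(nxt, goal),
                                          cost + 1, path + [nxt]))
    return None
```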

For example, a search for the word “fundus” may turn up 8,250,000 results, including those for the fundus of the eye, the fundus of the uterus, the fundus of the gall bladder and so on. A search engine may also suggest refining the search, which prevents it from slowing down or failing to trace the required item. Similarly, when we use a GPS application to reach a destination, the program finds the best route, eliminating unnecessary turns and traffic jams from the search.


In “optimization”, the search starts with some form of guess, which is then refined incrementally until no more refinements can be made. Forms of optimization include hill climbing, simulated annealing, beam search and random optimization.
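
A minimal hill-climbing sketch (the scoring function, step size and iteration count are arbitrary choices for illustration): start with a guess and keep any incremental refinement that improves the score:

```python
import random

def hill_climb(score, guess, step=0.1, iterations=1000):
    """Refine the guess incrementally; a neighboring candidate is
    kept only if it scores better than the current guess."""
    for _ in range(iterations):
        candidate = guess + random.uniform(-step, step)
        if score(candidate) > score(guess):
            guess = candidate  # keep the refinement
    return guess

# Example: climb to the peak of an upside-down parabola (maximum at x = 3).
best = hill_climb(lambda x: -(x - 3) ** 2, guess=0.0)
print(round(best, 2))  # approximately 3.0
```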


Swarm intelligence (SI) is also used for optimization. It is the collective behavior of decentralized, self-organized systems, whether natural or artificial. SI systems consist typically of a population of simple agents or boids interacting locally with one another and with their environment. The agents in a SI system follow very simple rules. However, when these agents interact with each other, it leads to the emergence of “intelligent” global (or collective) behavior, unknown to each individual agent. Examples of SI include: ant-colonies, bird-flocking, animal-herding, bacterial-growth, fish-schooling and microbial-intelligence.
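
As an illustration of SI used for optimization, here is a minimal one-dimensional particle swarm sketch (the coefficients and test function are arbitrary choices, not from the text): each agent follows simple local rules, yet the swarm collectively converges on the best solution:

```python
import random

def pso(f, n_particles=20, steps=100, lo=-10.0, hi=10.0):
    """Minimal particle swarm optimization in one dimension.
    Each particle is pulled toward its own best position and toward
    the swarm's best position; no particle knows the answer alone."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    p_best = pos[:]               # each particle's best position so far
    g_best = min(pos, key=f)      # the swarm's best position so far
    for _ in range(steps):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.5 * vel[i]
                      + 1.5 * r1 * (p_best[i] - pos[i])
                      + 1.5 * r2 * (g_best - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(p_best[i]):
                p_best[i] = pos[i]
            if f(pos[i]) < f(g_best):
                g_best = pos[i]
    return g_best

# Example: minimize (x - 2)^2; the swarm converges near x = 2.
print(round(pso(lambda x: (x - 2) ** 2), 2))
```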


(2) Logic: Logic is used for knowledge representation and problem solving. Different forms of logic are utilized in AI research.

Propositional logic involves truth functions such as “or” and “not”. First-order logic adds quantifiers and predicates, and can express facts about objects, their properties and their relations with each other.
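
As a small illustration (the proposition names are invented), a statement built from truth functions such as “or” and “not” can be evaluated mechanically over its full truth table:

```python
import itertools

# Proposition: "it is raining OR it is NOT cloudy" (names are illustrative).
statement = lambda raining, cloudy: raining or not cloudy

# Enumerate every assignment of truth values.
for raining, cloudy in itertools.product([True, False], repeat=2):
    print(f"raining={raining!s:5}  cloudy={cloudy!s:5}  ->  "
          f"{statement(raining, cloudy)}")
```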

Another concept is fuzzy sets (a.k.a. uncertain sets), which are somewhat like ordinary sets except that their elements have degrees of membership (i.e. an element can belong to the set partially rather than absolutely). Fuzzy set theory assigns a “degree of truth” (between 0 and 1) to vague statements that are too linguistically imprecise to be completely true or false (fuzzy = vague, indistinct). Fuzzy logic is successfully used in control systems, where it allows experts to contribute vague rules that can be numerically refined within the system. However, fuzzy logic does not scale well in knowledge bases, and many AI researchers question the validity of chaining fuzzy-logic inferences. By contrast, qualitative symbolic logic is brittle and scores poorly in the presence of noise or other uncertainty, and it is also difficult for logical systems to function in the presence of contradictory rules.
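
A minimal sketch of a fuzzy membership function (the thresholds are arbitrary, chosen only for illustration): instead of a person being “tall” or “not tall”, each height is assigned a degree of membership between 0 and 1:

```python
def tall_membership(height_cm):
    """Degree to which a height belongs to the fuzzy set 'tall'.
    Below 160 cm -> 0.0, above 190 cm -> 1.0, linear in between.
    (The thresholds are arbitrary, chosen only to illustrate the idea.)"""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

for h in (155, 170, 182, 195):
    print(h, "->", round(tall_membership(h), 2))
# 155 -> 0.0, 170 -> 0.33, 182 -> 0.73, 195 -> 1.0
```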

(3) Probabilistic methods for uncertain reasoning: A number of problems in AI (involving reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. Bayesian networks are a very general tool that can be used for a large number of such problems: reasoning (using the Bayesian inference algorithm), learning (using the expectation-maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks).

A Bayesian network (also called a Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model) is a probabilistic graphical model (a type of statistical model) that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are probabilistic because they are built from probability distributions and use the laws of probability for prediction, anomaly detection, reasoning, diagnostics, decision making under uncertainty and time-series prediction.

Bayesian networks can be used to build models from data and/or expert opinion.

A Bayesian network is a graph made up of nodes and directed links between them; this arrangement of nodes and links is regarded as the network's “structural specification”. Each node represents a variable, such as height, age or gender. Links are added between nodes to indicate that one node directly influences the other.
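
As a minimal illustration (using the classic rain/sprinkler/wet-grass example; all probabilities here are made up), the joint distribution over a small DAG can be enumerated to answer a diagnostic query:

```python
import itertools

# Conditional probability tables for the DAG
# Rain -> Sprinkler and (Rain, Sprinkler) -> GrassWet.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},   # given rain
               False: {True: 0.40, False: 0.60}}  # given no rain
P_wet = {(True, True): 0.99, (True, False): 0.80,
         (False, True): 0.90, (False, False): 0.01}  # given (rain, sprinkler)

def joint(rain, sprinkler, wet):
    """P(rain, sprinkler, wet) via the chain rule over the DAG."""
    p = P_rain[rain] * P_sprinkler[rain][sprinkler]
    p_wet = P_wet[(rain, sprinkler)]
    return p * (p_wet if wet else 1 - p_wet)

# Diagnostic query: probability it rained, given that the grass is wet.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True)
          for r, s in itertools.product((True, False), repeat=2))
print(round(num / den, 3))  # P(rain | grass wet)
```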

(4) Classifiers and statistical learning methods: A classifier is an algorithm that maps input data to a specific category: a function that uses pattern matching to determine the closest match. Classifiers can be tuned according to “examples”, known as observations or patterns. For example, when you receive multiple emails, some of them go to the junk folder because they have been classified as unwanted, based on your preferences in opening emails. A classifier can be trained using statistical and machine-learning approaches. The most widely used machine-learning algorithm is the “decision tree”. Other classifiers include the neural network, the k-nearest neighbor algorithm, kernel methods such as the support vector machine (SVM), the Gaussian mixture model and the naive Bayesian classifier. Model-based classifiers perform well if the assumed model is an extremely good fit for the actual data; if no matching model is available, and if accuracy is the sole concern, discriminative classifiers (especially SVM) tend to be more accurate.
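
Since the decision tree is named above as the most widely used algorithm, here is a minimal sketch using it, assuming the scikit-learn library is available (the toy email features, labels and test messages are invented for illustration):

```python
from sklearn.tree import DecisionTreeClassifier

# Toy training "examples" (observations): [number of links, ALL-CAPS words]
# and whether the user treated the email as junk (1) or not (0).
X = [[0, 0], [1, 0], [8, 5], [10, 7], [2, 1], [9, 6]]
y = [0, 0, 1, 1, 0, 1]

clf = DecisionTreeClassifier().fit(X, y)

# Classify two unseen emails.
print(clf.predict([[1, 1], [12, 9]]))  # expected: [0 1]
```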

[Figure: Decision Tree]

(5) Artificial Neural Networks (ANN): An ANN is defined as “a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs”. Neural networks (or neural nets) were inspired by the architecture of neurons in the human brain: an ANN is composed of multiple nodes that imitate biological neurons, connected by links through which they interact with each other. ANNs have made it possible for computers “to think and understand the world in the way humans do, while retaining the innate advantages they hold over us, such as speed, accuracy and lack of bias”. Fed with data, an ANN is able to make statements, decisions or predictions with a degree of certainty.
A simple “neuron” N accepts inputs from multiple other neurons, each of which, when activated (or “fired”), casts a weighted “vote” for or against whether neuron N should itself activate (in other words, whether the information should be propagated further). Each link is associated with a “weight”, and learning requires an algorithm to adjust these weights based on the training data: by training the network, it can be decided whether the information should be propagated further. The net forms “concepts” that are distributed among a subnetwork of shared neurons that tend to fire together.
There are two types of ANN:
(a) Feedforward ANN: Here the information flow is unidirectional; a unit sends information to another unit from which it does not receive any information, so there are no feedback loops. Feedforward networks are used in pattern generation, recognition and classification, and have fixed inputs and outputs.
(b) Feedback ANN: Here feedback loops are allowed. Such networks are used in content-addressable memories.
If the network generates a “good or desired” output, there is no need to adjust the weights. However, if the network generates a “poor or undesired” output (an error), the system alters the weights in order to improve subsequent results, as the sketch below illustrates.
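
A minimal single-neuron sketch of this voting-and-weight-adjustment idea (a perceptron trained to behave like a logical OR gate; the learning rate and epoch count are arbitrary):

```python
import random

def train_neuron(examples, epochs=20, lr=0.1):
    """A single artificial neuron: inputs cast weighted 'votes', and
    the weights are altered whenever the output is in error."""
    n = len(examples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            vote = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if vote > 0 else 0      # the neuron "fires" or not
            error = target - output
            if error:  # poor output: adjust weights to improve results
                weights = [w + lr * error * x
                           for w, x in zip(weights, inputs)]
                bias += lr * error
    return weights, bias

# Train on the truth table of a logical OR gate, then check the outputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_neuron(data)
for inputs, _ in data:
    fires = sum(wi * x for wi, x in zip(w, inputs)) + b > 0
    print(inputs, 1 if fires else 0)
```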

[Figure: Feedforward ANN]

[Figure: Feedback ANN]

Machine Learning in ANNs:
ANNs are capable of learning, and they need to be trained. There are several learning strategies:
A. Supervised learning: Here, labeled data is used to train the algorithm; the training examples are marked data in which both the input and the correct output are known. The inputs fed to the algorithm are known as “features”. The actual outputs are then compared with the expected correct outputs to find errors, and the model can be modified accordingly.
B. Unsupervised learning: This is required when there is no example data set with known answers, for example when searching for a hidden pattern. Unlabeled data is used to train the algorithm, and the purpose is to explore the data and find some structure within it (see the sketch after this list).
C. Reinforcement learning: This strategy is built on observation: the ANN makes a decision by observing its environment. If the observation is negative, the network adjusts its weights so that it can reach the required decision the next time.
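
As a sketch of unsupervised learning, assuming scikit-learn is available (the data points are invented), k-means clustering discovers structure in unlabeled data:

```python
from sklearn.cluster import KMeans

# Unlabeled data: no known answers, just points to explore for structure.
X = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one hidden group
     [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]]   # another hidden group

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_)  # the two hidden groups, found without any labels
```
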
Convolutional Neural Network (CNN or ConvNet): This is a class of deep, feedforward ANN, usually applied to the analysis of visual imagery. CNNs are also known as Shift Invariant or Space Invariant Artificial Neural Networks (SIANN), based on their shared-weights architecture and translation-invariance characteristics.
CNNs were inspired by biological processes, in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing. Unlike a plain ANN, which receives simple information (vectors), the input to a CNN is a multi-channel image.

A CNN consists of an input and an output layer, as well as multiple hidden layers. The hidden layers of a CNN typically consist of convolutional layers, pooling layers, fully connected layers and normalization layers.
Convolutional layers apply a convolution operation to the input and pass the result to the next layer; the convolution mimics the response of an individual neuron to visual stimuli.
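
A minimal sketch of such a layer stack, assuming TensorFlow/Keras is installed (the layer sizes, image dimensions and class count are arbitrary choices):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    # Input: a 64x64 image with 3 channels (e.g., RGB).
    layers.Conv2D(16, (3, 3), activation="relu",
                  input_shape=(64, 64, 3)),      # convolutional layer
    layers.MaxPooling2D((2, 2)),                 # pooling layer
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),         # fully connected layer
    layers.Dense(10, activation="softmax"),      # e.g., 10 object classes
])
model.summary()
```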

[Figure: Architecture of a CNN]
The figure above shows a CNN arriving at the best response when identifying an image, in this case a dog.

Deep Learning: Also known as deep structured learning or hierarchical learning, this is the part of AI concerned with mimicking the learning approach that humans use to gain certain types of knowledge.

Traditional machine-learning algorithms are linear, while deep-learning algorithms are stacked in a hierarchy of increasing complexity and abstraction. Information passes through multiple hidden layers, which is why the process is called “deep”. In deep learning, the program builds the “feature set” by itself, without supervision. Unsupervised learning, as seen in deep learning, is faster and more accurate.
