September 9, 2015

AI

Artificial intelligence (AI) plays a critical role in today's computational applications. Some examples of how AI is used include credit card fraud detection, military applications, medical diagnostics, robot control, scientific discovery, graphics, games, speech recognition, and music. AI does not necessarily produce stand-alone applications; rather, it is added to an existing environment to make the application smarter and more responsive. However, the AI portion of an application can be a rather large part of the system. For example, an AI application that many are familiar with is IBM's Deep Blue chess computer.

John McCarthy is widely known as one of the founders of AI, and he actually coined the phrase "Artificial Intelligence." McCarthy also developed the Lisp programming language, which is regarded as one of the premier programming languages for AI. Another notable scientist and AI pioneer, Allen Newell, attended a seminar in 1954 describing a computer program that could learn to recognize letters and patterns; it was then that Newell realized that systems could be created that contain intelligence and have the ability to adapt. Back in the 1970s, Newell predicted that we would be developing smart cars, roads, homes, and appliances. As computing power becomes more inexpensive and compact, AI applications will spread into all industry and government applications.

The general problem of simulating or creating intelligence can be broken down into sub-problems consisting of particular traits, such as deduction, reasoning, problem solving, and learning. Over the past half-century of AI research, a large number of tools have been developed to solve these problems. Search and optimization have become some of the most popular methods for this type of problem solving. One of the most successful approaches to optimization takes its inspiration from evolution. This method of computation begins with a population of organisms or genomes, then allows them to mutate and recombine, selecting only the fittest to survive each iteration. Methods in this family include swarm intelligence algorithms and genetic algorithms.
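To illustrate the evolutionary idea described above, here is a minimal genetic algorithm sketch in Python. It is not taken from any particular application mentioned in this post; the bit-string fitness function, population size, and mutation rate are all illustrative assumptions.

```python
import random

# Toy genetic algorithm: evolve bit strings toward the all-ones string.
# The fitness function, population size, and rates below are illustrative choices.
GENE_LENGTH = 20
POP_SIZE = 30
MUTATION_RATE = 0.01
GENERATIONS = 100

def fitness(genome):
    # Count of 1-bits; the all-ones string is the optimum.
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENE_LENGTH)]

def crossover(a, b):
    # Single-point recombination of two parent genomes.
    point = random.randrange(1, GENE_LENGTH)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: keep the fittest half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        # Recombination and mutation refill the population.
        children = [
            mutate(crossover(random.choice(survivors), random.choice(survivors)))
            for _ in range(POP_SIZE - len(survivors))
        ]
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("Best genome:", best, "fitness:", fitness(best))
```

The same select-recombine-mutate loop carries over to harder problems by swapping in a different genome encoding and fitness function.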

I have a great interest in AI and its applications for problem solving. The kinds of problems that fascinate me involve the use of cellular automata, Wang tiling, support vector machines, genetic algorithms, Hopfield networks, steering, fractals, and finite state machines.

The following is a slide presentation I gave detailing the theory of a Hopfield network:

[Slides: Hopfield_Model_web]

It is interesting to note that the Hopfield network is very similar to models of magnetic materials in statistical mechanics. This analogy becomes useful when the network is generalized to incorporate stochastic units, governed by a parameter analogous to temperature. As an example, a simple description of a magnetic material consists of atomic magnets, or spins, arranged on a lattice. The term spin comes from quantum mechanics and represents the magnetic moment. A spin can point in any direction; however, for certain purposes, especially for spin-1/2 atoms, only two directions are possible for the spin's orientation: up or down.
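To make the spin analogy concrete, here is a minimal Hopfield network sketch in Python using +1/-1 "spin" units, Hebbian weights, and asynchronous updates, with an Ising-like energy function. The stored pattern and the number of update steps are illustrative assumptions, not material from the slides.

```python
import numpy as np

# Minimal Hopfield network with binary "spin" units (+1 / -1), echoing the
# Ising-model analogy: each unit behaves like an up/down spin on a lattice.

def train(patterns):
    # Hebbian learning: W_ij proportional to the correlation of units i and j.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / patterns.shape[0]

def energy(W, s):
    # Ising-like energy: E = -1/2 * s^T W s; recall drives this downhill.
    return -0.5 * s @ W @ s

def recall(W, state, steps=100):
    s = state.copy()
    for _ in range(steps):
        i = np.random.randint(len(s))       # asynchronous update of one random unit
        s[i] = 1 if W[i] @ s >= 0 else -1   # the unit aligns with its local field
    return s

if __name__ == "__main__":
    stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])   # one stored pattern (assumed)
    W = train(stored)
    noisy = stored[0].copy()
    noisy[0] *= -1                                       # flip one "spin"
    print("Recovered:", recall(W, noisy))
    print("Energy of stored pattern:", energy(W, stored[0]))
```

In this deterministic version each unit simply follows the sign of its local field; the stochastic generalization mentioned above would instead flip units with a probability controlled by a temperature parameter.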