Santei–Voet–Shortliffe network
The Santei–Voet–Shortliffe network, commonly abbreviated SVS net, is the core robotic-intelligence architecture that enables the near-sentient performance of machines produced by Nanite Systems and other companies. Developed by the Consumer Products Division using ideas from the Tactical Planning Algorithms Group at the ADRG, the SVS net design was completed in 1985.
Development
SVS nets represent a surprising evolutionary leap from fuzzy-logic expert systems and the then-recent breakthrough of backpropagation-trained multilayer perceptrons. Santei and Voet hypothesized that neuron clusters in the human brain achieve their remarkable consciousness and reasoning properties not merely through overwhelming quantity (a sentiment echoed throughout the rest of the machine learning research community at the time, which had largely abandoned connectionism by 1970). This was partly validated by early work with backpropagation models, which found that networks of many layers eventually lost any meaningful error signal, with corrections becoming nearly uniform across nodes during the final portions of the backward pass.
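The signal-loss effect described here closely parallels what modern machine learning calls the vanishing gradient problem. The following NumPy sketch is purely illustrative and uses contemporary notation rather than anything from the SVS design; the depth, width, sigmoid activation, and weight scale are arbitrary assumptions.

# Illustration only: how the backpropagated correction decays in a deep
# sigmoid network. Not SVS code; all parameters are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
depth, width = 20, 32

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random weights for a deep, fully connected sigmoid network.
weights = [rng.normal(0.0, 1.0 / np.sqrt(width), size=(width, width))
           for _ in range(depth)]

# Forward pass on one random input, keeping every activation for backprop.
activations = [rng.normal(size=width)]
for W in weights:
    activations.append(sigmoid(W @ activations[-1]))

# Backward pass, starting from an arbitrary unit error at the output.
delta = np.ones(width)
norms = []
for l in range(depth - 1, -1, -1):
    out = activations[l + 1]                            # output of layer l
    delta = weights[l].T @ (delta * out * (1.0 - out))  # error at layer l's input
    norms.append(np.linalg.norm(delta))

# The correction signal shrinks as it travels backward, so the earliest
# layers receive almost nothing useful to learn from.
for i, g in enumerate(norms, start=1):
    print(f"error norm after {i:2d} backward layers: {g:.2e}")

Running this prints error norms that fall by many orders of magnitude between the output and the earliest layers, which is the kind of washed-out, uninformative correction the passage describes.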
Their solution to this problem was to introduce designs that have since been paralleled in mainstream AI research by the ensemble methods of conventional deep learning, on the premise that the algorithm could then switch between, or even blend, multiple regression and classification techniques depending on the circumstances at hand. While most ensemble models use a coordinating method that largely reduces to a mixture of experts, Santei and Voet instead employed genetic programming, allowing random evolution to develop strategies that maximize efficacy in problem domains of unknowable complexity. Santei strongly believed that it was fallacious to pursue biological-style reasoning without attempting to reach a solution by analogous means. These developments remained trade secrets for decades afterward, until their independent discovery by European researchers in 2007.
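For contrast, the mixture-of-experts coordination that most conventional ensembles reduce to can be summarized in a few lines. The linear experts and softmax gate below are hypothetical stand-ins chosen for brevity, not components of any SVS net.

# Illustration only: a toy mixture-of-experts blend. The experts and gating
# function are hypothetical linear stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_experts, dim = 3, 4

# Each "expert" is a simple linear model; real ensembles would mix
# heterogeneous regression and classification techniques.
experts = [rng.normal(size=dim) for _ in range(n_experts)]

# The gate assigns input-dependent weights to the experts (softmax over scores).
gate_weights = rng.normal(size=(n_experts, dim))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    g = softmax(gate_weights @ x)              # how much to trust each expert here
    outputs = np.array([w @ x for w in experts])
    return g @ outputs                         # blended prediction

x = rng.normal(size=dim)
print("blended prediction:", predict(x))

The design choice the passage highlights is that the SVS approach replaces this fixed gating rule with coordination strategies evolved by genetic programming.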
Exploring a function space as vast and unbounded as that of genetic programming, however, presented its own challenges, especially in an era before generalized representer theorems had been developed. To guide the learning process, April Voet proposed one of the first known examples of an adversarial training system, in which networks were trained against a variety of sample tasks and abstract decision-making problems. The approach proved particularly successful across sensory and data-interpretation problems, from question answering to computer vision. Elements of the system not easily trained in this fashion, including memory recall and creativity, were instead trained on examples drawn from narratives.
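A heavily simplified sketch of adversarial task selection in the spirit attributed to Voet is given below. The linear "solver", the synthetic tasks, and the worst-case selection rule are all assumptions made for illustration; nothing here reproduces the actual training system.

# Illustration only: an adversary repeatedly picks the sample task the solver
# currently handles worst, and the solver takes a gradient step on that task.
import numpy as np

rng = np.random.default_rng(2)
dim, n_tasks, steps, lr = 5, 8, 200, 0.1

# Each sample task is a target linear map the solver must imitate.
tasks = [rng.normal(size=dim) for _ in range(n_tasks)]
solver = np.zeros(dim)

def task_error(w, target):
    # Mean squared error of the solver against one task over random probes.
    xs = rng.normal(size=(32, dim))
    return float(np.mean((xs @ w - xs @ target) ** 2))

for step in range(steps):
    # Adversary: choose the task on which the solver is currently weakest.
    errors = [task_error(solver, t) for t in tasks]
    hardest = tasks[int(np.argmax(errors))]

    # Solver: one gradient step toward the hardest task.
    xs = rng.normal(size=(32, dim))
    residual = xs @ solver - xs @ hardest
    solver -= lr * (xs.T @ residual) / len(xs)

print("final worst-case error:", max(task_error(solver, t) for t in tasks))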