Encortex brain emulation
Under the HEART Act and subsequent amendments, there are two permitted applications for ongoing artificial representation of the human brain: §5 permits destructive mind uploading in the course of saving a patient's life, and §6 provides for temporary (less than one month) duplication of a human consciousness for research and development purposes, subject to strict licensing controls. Consequently, the range of technologies available for human brain emulation is extremely limited.

Brain emulation technologies can be broadly categorized into two types: mechanistic and denotational. The earliest serious attempts to simulate the human mind were heavily denotational, seeking to model abstracted mental architectures rather than meticulously recreating the anatomy and behavior of the human brain. Mechanistic approaches, which strive to reproduce human brain functions as accurately and extensively as possible, were not seriously explored until the beginning of the 21st century, owing to their enormously greater processing requirements.

Following the success of artificial intelligence research at MIT, Stanford, CMU, UC Berkeley, and other major American universities, the ADRG began researching denotational brain abstraction in the mid-1970s. These efforts ultimately led to the creation of the SVSnet, which retains many of the human mind's distinctive properties, including language acquisition, spontaneous creativity, and abstract reasoning. However, it does not meet the definition of a human brain emulator, as its information organization, emotion, and decision-making are based on classical knowledge representation techniques and Bayesian statistics. As a result, SVSnets bear little resemblance to the human brain.

Liang et al. 1993


This path of inquiry into human brain emulation stalled in 1993, when Terrence Liang of the Flatirons Computer Company and his University of Boulder research team published a landmark study in Science identifying these deviations in SVSnet architecture. While alternatives to the SVSnet that strive to model human processes accurately still exist today, they are scarce and fall short of genuine denotational equivalence. Due to their poorly-studied architectures, such systems are classified separately, under Type 5b of the Elysium AI rating system.


architecture            year of introduction  inventor                      fidelity
Generative SVSnet (5a)  1988                  Santei and Voet, NS Research  46%
Encortex 1.0 (5c)       2013                  Annika Voet, NS Research      97.3%
Genera                  1989                  Edward Turin, Symbolics       43%
QSTAR-2                 1986                  Valentin Munsch, Veradyne     37%
Deckard                 1991                  Andrei Kazar, IBM Research    49%
XQ                      1983                  Robert Taylor, Xerox PARC     4%
Examples of Type 5b AI systems. The SVSnet (Type 5a) and Encortex (Type 5c) are included for comparison.


The Liang–Hill fidelity score introduced in that study is still used today as a measure of how closely a given cortex architecture mimics the behavior, organization, and function of the human brain. For mechanistic models, it is calculated as the correlation between paired time series of simulated and real brain changes, recorded while a human and his or her matching simulation solve identical problems incorporating a range of tasks: short- and long-term memory, pattern recognition, emotional reasoning, metacognition, and language skills. Typically this involves translating a poem written in an unfamiliar language into a familiar one with the aid of a dictionary and grammar book. In the original study, at a time when no mechanistic models were yet available, Liang et al. used a set of non-reversible transformations that reduce SVSnet and fMRI data to a comparable format, along with a separate set of tools for converting denotational models into SVSnets. These methods are collectively known as the Colorado Springs algorithms.
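The correlation step described above can be sketched in a few lines. This is a purely illustrative model, assuming per-channel Pearson correlation averaged across channels; the function and parameter names are hypothetical, and the actual Colorado Springs algorithms are not reproduced here.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length time series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def fidelity_score(real_channels, simulated_channels):
    """Mean per-channel correlation between paired recordings, as a percentage.

    real_channels / simulated_channels: lists of time series, one per measured
    signal (e.g. a regional activation trace), aligned in time and recorded
    while the subject and the simulation solve the same task battery.
    """
    scores = [pearson(r, s) for r, s in zip(real_channels, simulated_channels)]
    return 100.0 * sum(scores) / len(scores)
```

A perfectly matching simulation would score 100% under this toy formulation; real scores fall short because the two recordings diverge over the course of the task battery.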

Mechanistic models


Developments in non-invasive electron microscopy (NEM), functional magnetic resonance imaging (fMRI), and, most importantly, nucleotide sequencing in the late 1980s and early 1990s allowed research into the human mind to proceed further. The Liang paper concluded that organic sentience depends so extensively on physical interactions between molecules that no reductionistic rendition could reasonably be achieved within the next several decades. Subsequent research has therefore emphasized more exact duplication of human brain function, with the particular objective of facilitating mind transfer technology for medical applications. These modern systems are called mechanistic emulators, and mostly fall under the Type 5c classification.

In 2013, Annika Voet's team at the Neurological Research Group completed the first version of the Encortex mechanistic emulator, achieving a human fidelity of over 95% for the first time. This was a substantial improvement over the previous frontrunner in Type 5c technologies, the NEC CyberBrain, which averaged 86.9% over dozens of trials after its introduction in 2008. The CyberBrain was composed almost entirely of SIMD processors specialized for molecular dynamics simulations of retrotransposon docking, but suffered interconnect bottlenecks in non-local memory access. This in turn hindered the accurate dispersal of global phenomena such as the spread of endocrine molecules and neurotransmitters, causing simulation quality to degrade over time. The Encortex architecture instead favors KVSnet geometry (similar to that used by modern-day static SVSnets), comprising a smaller number of more fully-featured cores that denotationalize the movement of RNA molecules, exploiting recent research in mapping transposon activity in the human brain. This greatly reduces traffic congestion during cache access and, as an additional benefit, removes the need for a sequenced genome.

The current best performance of Encortex 3.6, 97.4%, may not appear to be much of an improvement over earlier versions. This in fact reveals a limitation of the Liang–Hill fidelity score: the remaining 2.6% can be attributed to the alterations necessary for the synthetic mind to interface with the chassis's sensors and effectors. It is generally thought that fidelity scores are asymptotically bound to a maximum of 98.2%, with the remaining 1.8% achievable only by a completely human body, which runs contrary to the goals of mechanistic emulation, since fully organic mind transfer has been possible since 2005. Comparative tests with human brains in full prosthetic bodies have supported this hypothesis, and Encortex modules have achieved up to 99.7% consistency with such subjects.

Regulatory challenges


Although the HEART Act as originally passed included provisions for Type 5c systems initialized with genuine human brain data, these regulations were written with the expectation that mechanistic emulators would never achieve fidelities high enough for the subject to be classified as a true human. As acceptance of full prosthetic bodies improved, interest in the closely related technology of mind uploading was renewed in the mid-2010s, and Nanite Systems began to lobby Congress for an amendment to the HEART Act that would allow uploaded individuals to retain their rights and citizenship. The company successfully argued that uploading was a choice, and that the government should support such personal liberties.

One significant barrier remained: the general public's disinclination to trust the security of such artificial intelligence systems. The HEART Act's text acknowledges this, identifying the potential mutability of a robot's memories as the primary reason such machines are not permitted citizenship. The new §5b was added to the HEART Act effective January 1, 2018, providing a route to emancipation for §5a convertees, albeit in a state of disenfranchisement often compared to that of convicted felons. Vital to this victory for transhuman rights was the inclusion of anti-tampering requirements. Typical anti-tampering systems, such as the Encortex ATS, use a non-distributed blockchain that accumulates compressed delta vectors at a regular interval. As long as this transaction history can be reconstructed within certain tolerances, it is assured that no misinformation has been planted, other than that to which any human would be subject (i.e., being lied to).
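The append-and-reconstruct scheme described above can be sketched as a simple hash chain. This is a minimal illustration only, assuming SHA-256 linking and zlib compression; the names are hypothetical and this is not the Encortex ATS format.

```python
import hashlib
import zlib

GENESIS = b"\x00" * 32  # placeholder hash before the first entry

def append_delta(chain, delta_vector: bytes):
    """Append one compressed memory-delta vector, linked to the previous entry by hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = zlib.compress(delta_vector)
    entry_hash = hashlib.sha256(prev_hash + payload).digest()
    chain.append({"payload": payload, "hash": entry_hash})

def verify(chain) -> bool:
    """Reconstruct the transaction history; any altered or planted entry breaks a link."""
    prev_hash = GENESIS
    for entry in chain:
        expected = hashlib.sha256(prev_hash + entry["payload"]).digest()
        if expected != entry["hash"]:
            return False
        prev_hash = expected
    return True
```

Because each entry's hash covers its predecessor, rewriting any single memory delta invalidates every subsequent link, which is what allows an auditor to certify that no misinformation was planted after the fact.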