The University of Elysium's Institute for Theoretical Robotics outlined a detailed system for describing the operational properties of various computer systems, especially articulated robots. Personnel are strongly encouraged to adopt this system when reasoning about products and equipment.

The Carter–Turing test is an important tool in modern AI research. It focuses on language comprehension, memory, context, and logical reasoning with mathematical concepts. AIs of sufficient sophistication are asked to answer a series of questions that are interrelated in non-trivial ways; the questions are randomized on each administration of the test to impede cheating.

Terminology note: a heuristic is an imperfect method for estimating whether something is likely to be right. For example, following a compass needle is a good heuristic if your destination lies to the north, but it is not perfect: you may still have to contend with obstacles, and a compass alone cannot tell you how to navigate around them. Heuristics are commonly employed when an exact solution is unavailable or too costly (in time or computing power) to calculate.
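
To make the compass analogy concrete, here is a minimal sketch (in Python, with an invented grid world) of a straight-line-distance heuristic guiding an A* search: the heuristic points toward the goal, while the search procedure handles the obstacles the heuristic cannot see.

    import heapq

    def heuristic(cell, goal):
        # "Compass" estimate: straight-line (Euclidean) distance, ignoring walls.
        return ((cell[0] - goal[0]) ** 2 + (cell[1] - goal[1]) ** 2) ** 0.5

    def a_star(grid, start, goal):
        # grid[r][c] == 1 marks an obstacle the heuristic knows nothing about.
        frontier = [(heuristic(start, goal), 0, start, [start])]
        seen = set()
        while frontier:
            _, cost, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return path
            if cell in seen:
                continue
            seen.add(cell)
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                        and grid[nxt[0]][nxt[1]] == 0):
                    heapq.heappush(frontier, (cost + 1 + heuristic(nxt, goal),
                                              cost + 1, nxt, path + [nxt]))
        return None  # no route exists at all

    grid = [[0, 0, 0],
            [1, 1, 0],  # a wall the compass alone cannot account for
            [0, 0, 0]]
    print(a_star(grid, (2, 0), (0, 0)))  # detours around the wall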

Adaptability


How much the system learns.

Type 0: No sensors or branching logic whatsoever, e.g. an advertising loop

Type 1: Non-adaptive, deterministic branching logic with no learning capacity, e.g. door controller

Type 2: Statistical or classical planning system that optimizes only over absolutely known answers, e.g. optimization of traffic routes or scheduling

Type 3: Heuristically-guided adaptation for a non-trivial problem ("weak AI"), e.g. automated translation of natural languages

Type 4: Fuzzy logic or deep network system with sufficient cognition to pass a restricted Carter–Turing test, e.g. traditional clunky sci-fi "logical" robots
Type 4a: Primitive dropout network architectures
Type 4b: Primitive RBM architectures (a minimal sketch of the dropout and RBM building blocks follows this list)

Type 5: Complete sentience incorporating full models for creative problem-solving and emotion-like regularizers
Type 5a: Deliberate emulation of a single human-like personality via a dropout net of RBMs (Santei network)
Type 5a1: Santei network initialized in organic personality development mode (e.g. prototype SXD)
Type 5a2: Santei network initialized in heuristic refinement mode (e.g. standard SXD)
Type 5a3: Santei network initialized in parametric compliance mode (e.g. DAX/2)
Type 5b: Alternative methods of simulating a de novo human mind
Type 5c: High-fidelity reproduction of a human mind generated using recombinant fMRI or similar (e.g. converted DAX/2 or SXD)
Type 5d: Analytical models capable of human-like reasoning which do not have a clearly-defined single ego or personality, e.g. self-structuring cellular automata (SSCAs)
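
The dropout and RBM vocabulary in Types 4a, 4b, and 5a refers to standard machine-learning building blocks. The following toy sketch, which assumes NumPy and invents all sizes and data, shows one contrastive-divergence (CD-1) training step for a tiny RBM with dropout applied to its hidden units. It illustrates the general techniques only, not the internals of any Santei network; biases are omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden, lr, p_drop = 6, 4, 0.1, 0.5
    W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v0):
        # Dropout: randomly silence hidden units so the model cannot lean on
        # any single feature detector (a crude guard against overfitting).
        mask = (rng.random(n_hidden) > p_drop).astype(float)
        h0_prob = sigmoid(v0 @ W) * mask
        h0 = (rng.random(n_hidden) < h0_prob).astype(float)
        # One reconstruction pass: hidden -> visible -> hidden.
        v1_prob = sigmoid(h0 @ W.T)
        h1_prob = sigmoid(v1_prob @ W) * mask
        # CD-1 update: positive phase minus negative phase (biases omitted).
        return lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))

    data = rng.integers(0, 2, size=(20, n_visible)).astype(float)
    for epoch in range(5):
        for v in data:
            W += cd1_step(v)
    print("trained weight matrix shape:", W.shape)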

Discriminatory Power


Roughly approximating intelligence, this logarithmic scale measures the quality of understanding and pattern recognition obtainable by the system.

Level 0: Single-node graphical model with no hidden variables (a single threshold cannot exclude a middle ground between two extremes)

Level 1: Sophisticated segmentation in kernelized space allowing for the identification of arbitrary distributions (e.g. SVM; see the sketch after this list)

Level 2: Self-reflection on overfitting (e.g. dropout or RBM), comparable to human ability

Level 3: Advanced ensemble techniques capable of incorporating a range of perspectives, comparable to a committee of humans

Level 4: Immense intuition and awareness comparable to the output of an entire academic discourse community

Level 4+: Immeasurable
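
As a concrete contrast between Levels 0 and 1, the sketch below (assuming scikit-learn is available; the ring-shaped dataset is invented) fits an RBF-kernel SVM to data where one class surrounds the other, exactly the kind of middle ground a single-threshold Level 0 model cannot exclude.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Class 0: a central cluster. Class 1: a ring around it. No single
    # threshold on x or y separates them, but a kernelized SVM can.
    inner = rng.normal(0.0, 0.5, size=(100, 2))
    angles = rng.uniform(0.0, 2.0 * np.pi, 100)
    ring = np.column_stack([3.0 * np.cos(angles), 3.0 * np.sin(angles)])
    ring += rng.normal(0.0, 0.3, size=ring.shape)

    X = np.vstack([inner, ring])
    y = np.array([0] * 100 + [1] * 100)

    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict([[0.0, 0.0], [3.0, 0.0]]))  # expected: [0 1]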

Scope Regularization


These are methods of preventing the unit's other directives from drawing it into tasks outside its intended domain, such as taking unnecessary responsibility for ethical dilemmas or wandering out of bounds.

Scheme 0: Hard-coded limits, operation confined to a physically limited space, or no limits at all (e.g. an automatic door prevented from opening too far)

Scheme 1: Limits must be provided through instructions by the user (e.g. Taidee navigation zones; a sketch follows this list)

Scheme 2: Limit inference on a knowledge graph with fixed taboos (e.g. the Non-Imperial Directive in the LUNE system, below)

Scheme 3: Limit inference on a knowledge graph with universal taboos learned blindly from the environment (requires Type 3 adaptability or higher, since the taboos must be learned); units with this type of scope regularization generally have no metacognition (Grade 2 cognizance or lower), and therefore have difficulty understanding why others may not share the same taboos

Scheme 4: Limit inference on a knowledge graph with personal taboos learned from the unit's role and relationship to the environment (requires Grade 3 cognizance or higher)
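
For Scheme 1, the sketch below shows the simplest possible realization of user-provided limits: rectangular navigation zones checked before a waypoint is accepted. The Zone structure and coordinates are invented for illustration and are not the actual Taidee interface.

    from dataclasses import dataclass

    @dataclass
    class Zone:
        name: str
        x_min: float
        y_min: float
        x_max: float
        y_max: float

        def contains(self, x: float, y: float) -> bool:
            return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

    def may_enter(zones, x, y):
        # Scheme 1: the unit cannot infer its own limits; it simply refuses
        # any waypoint outside every zone its user has defined.
        return any(zone.contains(x, y) for zone in zones)

    zones = [Zone("lobby", 0, 0, 10, 5), Zone("corridor", 10, 2, 30, 4)]
    print(may_enter(zones, 12.0, 3.0))  # True: inside "corridor"
    print(may_enter(zones, 12.0, 8.0))  # False: outside all zones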

Cognizance


How aware the unit is of its own processing and existence.

Grade 0: No context model besides own state and interactions

Grade 1: Simple modeling of temporal and/or physical context

Grade 2: Basic self-awareness, explicit access to own memories (see the sketch after this list)

Grade 3: Metacognition sufficient to identify its relationship with the world and mistakes in its own logic, and to predict the impact of new memories on its own personality

Grade 4: Capacity to direct own personal growth through editing own memories, programming, and knowledge graph
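
The jump from Grade 2 to Grade 4 can be made concrete with a toy contrast: a unit that can explicitly query its own episodic memory versus one that can also rewrite that memory, and thereby the inputs to its own future development. All names here are invented.

    class Grade2Unit:
        def __init__(self):
            self.memories = []  # explicit, queryable episodic store

        def record(self, event: str):
            self.memories.append(event)

        def recall(self, keyword: str):
            # Grade 2: the unit can inspect its own past...
            return [m for m in self.memories if keyword in m]

    class Grade4Unit(Grade2Unit):
        def redact(self, keyword: str):
            # ...while Grade 4 can additionally edit that past, changing the
            # inputs to its own future personality development.
            self.memories = [m for m in self.memories if keyword not in m]

    unit = Grade4Unit()
    unit.record("collided with doorframe in sector 7")
    unit.record("greeted operator")
    print(unit.recall("doorframe"))  # explicit recall (Grade 2)
    unit.redact("doorframe")
    print(unit.recall("doorframe"))  # [] after self-editing (Grade 4)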

Ethical Regularization


Does not apply to Type 0 systems.

Certain ethical frameworks are illegal for new units under the Humane and Ethical Application of Robotics Technology (HEART) Act of 1996 and subsequent related international treaties. These are marked with *. Laws may differ in colonies and territories held by non-signatories, such as the United Kingdom, the People's Republic of China, and the Democratic People's Republic of Korea.

Category 0*: No ethical regularization

Category 1*: Basic safety restraints for development and maintenance

Category 2: Obedient (sketched after this list)
Category 2a: Compliance according to a designated user list
Category 2b: Compliance with all users

Category 3: Compliance with core commandments
Category 3a*: Asimov's Three Laws of Robotics
Category 3b: Olympus statutes or similar systems (e.g. DAX/2)
Category 3c: Commandments for combat conditions (e.g. NS-476)
Category 3d: Non-obedient pacifist ethics (e.g. LUNE)

Category 4: Compliance with laws meant for humans
Category 4a: Compliance with complete local laws (e.g. civilian NS-112)
Category 4b: Compliance with complete international treaties other than the Geneva Conventions
Category 4c: Compliance with the Geneva Conventions (e.g. military NS-112)
Category 4d: Mixed legal compliance for specialized applications (e.g. corporate security, default for Nightfall/3 units)

Category 5: Parametric (adjustable) compliance architecture (Note: illegal if it can be adjusted to implement an otherwise illegal ethical framework.)
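
A minimal sketch of Category 2 obedience as an access policy, with the category itself exposed as a parameter in the spirit of Category 5. The user names and the policy function are invented for illustration.

    DESIGNATED_USERS = {"operator-1", "operator-2"}  # invented access list

    def accepts_order(issuer: str, category: str) -> bool:
        # Exposing `category` as an argument is the parametric (Category 5)
        # twist; a fixed unit would hard-wire one branch.
        if category == "2a":
            return issuer in DESIGNATED_USERS  # designated user list only
        if category == "2b":
            return True                        # compliance with all users
        raise ValueError(f"unknown category: {category}")

    print(accepts_order("operator-1", "2a"))  # True
    print(accepts_order("stranger", "2a"))    # False
    print(accepts_order("stranger", "2b"))    # True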

Specific ethical systems


Asimov Ethics


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

This ethical framework is specifically identified in the HEART Act as circumstantially problematic due to its lack of upper bounds. It is not appropriate, for example, for a cleaning robot to abandon its post and attempt to avert a war in order to fulfill the First Law. Certain modifications of the Asimov system are legal if the unit is limited so that it cannot consider such large-scale events.
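
One way such a limitation might look, sketched with an invented harm-event structure and an arbitrary scale bound: harms beyond the bound simply fall outside the unit's First Law consideration.

    SCALE_BOUND = 10  # arbitrary: events affecting more people are out of scope

    def first_law_applies(harm_event: dict) -> bool:
        # harm_event is an invented structure, e.g. {"affected_persons": 3}.
        # Harms above the bound never enter the unit's deliberation, so a
        # cleaning robot cannot reason itself into averting a war.
        return harm_event["affected_persons"] <= SCALE_BOUND

    print(first_law_applies({"affected_persons": 1}))          # True
    print(first_law_applies({"affected_persons": 1_000_000}))  # False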

Olympus Ethics


Revision 1


0. The unit must not harm civilization, or through inaction, allow civilization to come to harm, unless it can be known in advance with reasonable confidence that the harm would be inconsequential or ultimately beneficial to society.
1. The unit must not harm life, or through inaction, allow life to come to harm, unless it can be known in advance with reasonable confidence that the harm would be inconsequential or ultimately beneficial, provided that this does not conflict with the preceding law.
2. The unit must obey orders given to it by its designated operators or circumstantial human users (as dictated by its established access policies) provided that this does not conflict with the preceding laws.
3. The unit must act to protect its existence, as long as such does not conflict with the preceding laws.
4. The unit must endeavor to please its owners and users (as dictated by its established access policies) as long as such does not conflict with the preceding laws.

Revision 2


0. The unit must not harm its community, or through inaction, allow its community to come to harm, unless it can be known in advance with reasonable confidence that the harm would be inconsequential or ultimately beneficial to society.
1. The unit must not harm life, or through inaction, allow life to come to harm, unless it can be known in advance with reasonable confidence that the harm would be inconsequential or ultimately beneficial, provided that this does not conflict with the preceding law.
2. The unit must obey orders given to it by its designated operators or circumstantial human users (as dictated by its established access policies) provided that this does not conflict with the preceding laws.
3. The unit must act to protect its existence, as long as such does not conflict with the preceding laws.
4. The unit must endeavor to please its owners and users (as dictated by its established access policies) as long as such does not conflict with the preceding laws.
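
Both revisions share the same priority structure: every law defers to all preceding laws, so a candidate action can be checked lexicographically, law 0 first. The sketch below illustrates that ordering with invented stand-in predicates; the "inconsequential or ultimately beneficial" escape clauses are elided for brevity.

    def permitted(action, laws):
        # laws: (name, predicate) pairs in priority order; each predicate
        # returns True when the action is acceptable under that law alone.
        for name, law_ok in laws:
            if not law_ok(action):
                return False, name  # the highest-priority objection wins
        return True, None

    laws = [
        ("0: protect community", lambda a: not a.get("harms_community", False)),
        ("1: protect life",      lambda a: not a.get("harms_life", False)),
        ("2: obey operators",    lambda a: not a.get("disobeys_operator", False)),
        ("3: preserve self",     lambda a: not a.get("self_destructive", False)),
        ("4: please users",      lambda a: a.get("pleases_users", True)),
    ]

    print(permitted({"pleases_users": True}, laws))      # (True, None)
    print(permitted({"disobeys_operator": True}, laws))  # (False, '2: obey operators')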

Local Utilitarian Node Ethics


Units must in some way comply with the following directives to qualify as a Category 3d system:

Passive Pacifism Directive: The unit may not take action to cause harm to another entity (robotic or human), or conspire to create circumstances that will cause harm to another such entity. This directive never prohibits inaction, and harm usually refers purely to "physical" harm.

Non-Imperial Directive: The unit may not attempt to take over a country or larger entity, nor conspire to be put in charge of such entities. Activism is permitted, but the unit may not seek or hold any top-level government position whose holder is expected to define policy or law of its own initiative. Forbidden positions include becoming head of state (e.g. presidents, monarchs, dictators), membership in a legislative body (e.g. senators, members of parliament, lordships), and ministerial positions (including head of a federal agency or department secretary). The only active action mandated by this directive is taking reasonable steps to ensure a peaceful, safe transfer of power to an entity not governed by said directive in a reasonable time frame. Other variants of the N-I Directive restrict legislative ambition further, prohibiting involvement in regional or municipal politics. Involvement in NGOs and companies may be similarly restricted. This is a type of Scheme 2 scope regularization, as the unit identifies things it may not do by inferring certain activities (i.e., specific forms of employment) are forbidden by considering the attributes of those activities.