AI Research in Contextual Intelligence with Human-Aligned Incentives
Seji Labs
Cognitive AI with Contextual Human-Aligned Incentives
Edouard Siregar
edsiregar@sejilabs.com

We research cognitive AI with contextual human-aligned incentives and symbolic empathy, capabilities essential for AI applications that directly impact individual and social well-being.

Research in AI with Contextual Human-Aligned Incentives
Towards a Cognitive Computation Engine

The Need for Trusted AI with a Human-Aligned Symbolic Form of Cognitive Empathy

Not all AI, by a long shot, is transparent, benevolent, and used to enhance our long-term personal and social well-being. Identifying the stealthy, ubiquitous AI that works against our long-horizon enlightened self-interest, personal and social, is a battle we may already have lost.


Given this situation, a trustworthy, capable AI ally would be a most valuable asset for improving our sense of well-being (a high-level feeling that emerges from interacting lower-level cognitive processes). Our research aims to invent the mathematical architecture of an AI with a single drive: to enhance our own long-horizon (personal and social) well-being. The AI's sole incentive is to help us (a) deconstruct our self-detrimental cognitive patterns and (b) strengthen weak but essential cognitive capacities, so that we can navigate and thrive in an increasingly complex, uncertain world. These tasks are well beyond your neighborhood AI chatbot, and even beyond the most potent LLMs, which have limited reasoning, no contextual intelligence, and no 'understanding' of physical and human reality.


A trusted AI with no hidden agenda and no delusional hallucinations should be driven exclusively by a transparent, explainable, symbolic form of compassion: cognitive empathy (an understanding of a real person's evolving needs) coupled to contextual insight (offering effective cognitive actions to pursue, given that person's current needs).


Such a symbolic form of adaptive, human-aligned empathy is enabled by a contextual intelligence I_A with a strong semantic 'understanding' of both transient personal needs and constant universal human needs. This requires symbolic learning of the human cognitive landscape (a complex network of cognitive variables) and goal-directed self-supervised learning, coupled to a non-monotonic logic engine with belief revision, all operating within the framework of a recursive 2-person (human-AI) cooperative game.
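To make the loop these components form more concrete, here is a minimal, hypothetical Python sketch. Every name in it (BeliefStore, CognitiveLandscape, game_episode) is an illustrative assumption for exposition, not Seji Labs code, and the data structures are deliberately toy-sized.

```python
# Illustrative sketch only: all names are hypothetical, not Seji Labs code.
from dataclasses import dataclass, field

@dataclass
class BeliefStore:
    """Non-monotonic belief base: beliefs are defaults that new evidence can retract."""
    beliefs: set = field(default_factory=set)

    def revise(self, evidence: str, contradicts: set) -> None:
        # Belief revision: retract any default beliefs the new evidence
        # contradicts, then adopt the evidence itself.
        self.beliefs -= contradicts
        self.beliefs.add(evidence)

@dataclass
class CognitiveLandscape:
    """Toy stand-in for a network of cognitive variables with weighted interactions."""
    variables: dict   # e.g. {"curiosity": 0.7, "anxiety": 0.4}
    links: dict       # pairwise interaction weights, keyed by variable pairs

    def emergent(self, a: str, b: str) -> float:
        # A higher-level property emerging from two interacting lower-level ones.
        return self.links.get((a, b), 0.0) * self.variables[a] * self.variables[b]

def game_episode(ai_beliefs: BeliefStore, report: str, contradicted: set) -> str:
    """One round of the recursive human-AI cooperative game: the person reports
    a need, the AI revises its beliefs, then proposes a cognitive action."""
    ai_beliefs.revise(report, contradicted)
    return f"suggested action given beliefs: {sorted(ai_beliefs.beliefs)}"

beliefs = BeliefStore({"person prefers routine"})
print(game_episode(beliefs, "person seeks novelty", {"person prefers routine"}))

landscape = CognitiveLandscape({"curiosity": 0.7, "anxiety": 0.4},
                               {("curiosity", "anxiety"): -0.5})
print("emergent 'exploration drive':", landscape.emergent("curiosity", "anxiety"))
```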

Motivations for AI with Contextual Human-Aligned Incentives


Our AI research program originated in the early 2000s at NASA-Goddard, with the author as Principal Investigator, and has since been dedicated to developing a class of AI with Contextual Human-Aligned Incentives (CHI): future long-duration crewed space missions with on-board AI will need strong, adaptive human-AI alignment (think of the HAL 9000 AI acquiring confused, misaligned personal objectives from its contradictory design directives in Arthur C. Clarke's "2001: A Space Odyssey"). Today it is clear that ensuring contextually aligned AI by design would benefit society on a much broader scale, and merits our attention.


Our present research focuses on a unique AI with CHI whose sole purpose is to help us in two complementary personal ways: (a) shedding self-detrimental cognitive habits and (b) building self-enhancing cognitive strengths, both of which affect our overall sense of (individual and collective) well-being, as defined by criteria that are universal across human history and cultures. This single, adaptive role assignment (dynamically maintained at the top level) ensures the AI remains human-aligned in its incentives, in the sense of being exclusively driven by a symbolic form of human-consistent cognitive empathy.


Our previous research (NASA; Sofia Labs, LLC) revealed [1,2,3,4,5,6,7] necessary conditions for an adaptive, human-aligned contextual intelligence I_A(F1,F2,F3,F4,F5,F6), involving six foundational mathematical functions (a minimal sketch of how they might compose follows this list):

(F1) Deep Concept Networks (DCNs) of LESI cognitive variables, in which higher cognitive properties (used for semantic interpretation) are learned symbolically as they emerge from interactions between lower-level ones;

(F2) basic human-AI interaction rules, as a recursive 2-person (human-AI) cooperative game G(I_A, P);

(F3) non-monotonic logic reasoning with belief revision, to constantly align with, and respond to, a person's evolving needs and goals;

(F4) goal-driven self-supervised learning with belief revision, to provide adaptive guidance of a multi-attractor dynamics;

(F5) the capacity to pose contextual, insight-gaining questions, to assess a person and offer guidance for cognitive growth;

(F6) eventual future quantum-information-based representations of DCNs, to capture the complex nuances of human cognitive states by leveraging superposition and entanglement of states.
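As a rough illustration of how six such functions could be wired into a single agent, here is a hypothetical sketch; every name, signature, and ordering below is an assumption made for exposition, not the published architecture.

```python
# Hypothetical composition of F1-F6 into one agent; names are illustrative only.
from typing import Callable

class ContextualIntelligence:
    def __init__(self,
                 f1_concept_network: Callable,   # F1: DCN semantic interpretation
                 f2_game_rules: Callable,        # F2: cooperative-game move selection
                 f3_belief_revision: Callable,   # F3: non-monotonic belief update
                 f4_self_supervised: Callable,   # F4: goal-driven learning step
                 f5_question_posing: Callable,   # F5: insight-gaining question
                 f6_state_encoding: Callable):   # F6: (future) richer state encoding
        self.f1, self.f2, self.f3 = f1_concept_network, f2_game_rules, f3_belief_revision
        self.f4, self.f5, self.f6 = f4_self_supervised, f5_question_posing, f6_state_encoding

    def step(self, beliefs: dict, person_input: str) -> tuple:
        """One game turn: revise beliefs, reinterpret the cognitive landscape,
        learn, then emit a question (F5) and a cooperative move (F2)."""
        beliefs = self.f3(beliefs, person_input)   # F3: absorb new evidence first
        state = self.f1(self.f6(beliefs))          # F6 -> F1: encode, then interpret
        beliefs = self.f4(state)                   # F4: self-supervised refinement
        return beliefs, self.f5(state), self.f2(state, person_input)
```

The point of this (assumed) ordering is that belief revision (F3) always runs before interpretation and learning, so the agent's semantic picture of the person is never stale when it chooses its next cooperative move.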


The AI's contextual intelligence I_A(F1,F2,F3,F4,F5,F6) encodes its symbolic form of cognitive empathy and rests on a foundation of core mathematical functions, a unique type of AI research, some of which will be published this year (tentative list):


[1] E. Siregar, "Learning human insight by cooperative AI: Shannon-Neumann measure" (introduces initial concepts of AI insight gains; 2021).

[2] E. Siregar, "The argument for an AI with human-aligned incentives" (a discussion for the NY Academy of Sciences; 2024).

[3] E. Siregar, "AI Symbolic Empathy via Goal-Directed Self-supervised Learning" (describes symbolic learning of a semantic interpretation of the cognitive landscape, and goal-directed self-supervised learning in game episodes; in prep, 2025).

[4] E. Siregar, "A Recursive Framework for Symbolic Functional Consciousness" (introduces the concept of functional consciousness capable of symbolic cognitive empathy; this symbolic form of 'consciousness' is necessary for the AI to mirror a person's cognitive states, i.e., to possess a symbolic form of empathy; in prep, 2025).

[5] E. Siregar, "AI with Contextual Human-Aligned Incentives: non-monotonic logic with belief revision" (the ability to modify its reasoning to incorporate a constant stream of new evidence; currently a Seji Labs internal white paper).

[6] E. Siregar, "AI with Contextual Human-Aligned Incentives: recursive cooperative 2-person game" (sets the rules of person-AI cooperative interactions to improve a person's cognitive growth; currently a Seji Labs internal white paper).

[7] E. Siregar, "AI with Contextual Human-Aligned Incentives: integrated contextual intelligence" (integrates the core mathematical functions into a single AI agent; currently a Seji Labs internal white paper).

[8] E. Siregar, "AI with Contextual Human-Aligned Incentives: quantum-information representation of complex human cognitive states" (currently a Seji Labs internal white paper).



A Cognitive Computation Engine for Self-Learning, Self-Motivated, Safe, and Sustainable AI


Rather than search for an elusive, critical set of traits for safe AI (traits which can contradict each other in real-world contexts [6]), we focus on an AI's core invariant incentive [3,4]: a contextual human-aligned incentive driven exclusively by a symbolic form of cognitive empathy plus insight, within a cooperative human-AI game. This approach is simpler, more stable, and more flexible and robust in a complex, uncertain real world: the AI's unique incentive is aligned with a person's long-horizon well-being (LESI cognitive states), enabled by adaptive contextual forms of intelligence.
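A minimal sketch of what a single invariant incentive could look like operationally, assuming a hypothetical predictor of long-horizon well-being gain; nothing here is the published objective, only an illustration of why one incentive leaves nothing to trade off:

```python
# Illustration of a single invariant incentive: the agent ranks candidate
# actions solely by predicted long-horizon well-being gain (predictor is a stub).
from typing import Callable

def choose_action(candidates: list, wellbeing_gain: Callable) -> str:
    """With one incentive there is no trait-vs-trait conflict: the agent simply
    maximizes predicted well-being, and no side objective can override it."""
    return max(candidates, key=wellbeing_gain)

# Stub predictor (assumption): scores stand in for predicted long-horizon gain.
scores = {"pose reflective question": 0.8, "offer distraction": 0.1}
print(choose_action(list(scores), scores.get))
```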


This new type of AI complements the impressive raw power of generative LLMs by providing contextual intelligence that is self-motivated, self-learned, and cooperative, with a single-minded human-aligned purpose. It will interface with other AI systems via the fast-evolving Model Context Protocol (MCP).
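For concreteness, MCP exchanges are JSON-RPC 2.0 messages, with "tools/call" as the standard method for invoking a tool on another system. A sketch of what one such invocation from this agent could look like follows; the tool name "assess_cognitive_state" and its arguments are purely hypothetical.

```python
import json

# MCP requests are JSON-RPC 2.0 messages; "tools/call" is the standard MCP
# tool-invocation method. The tool name and payload below are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "assess_cognitive_state",      # hypothetical tool
        "arguments": {"person_id": "p-001"},   # hypothetical payload
    },
}
print(json.dumps(request, indent=2))
```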


To test our rough early ideas and assumptions, we previously (at NASA and Sofia Labs, LLC) coded and unit-tested several prototype agents with Symbolic Aligned Guided Empathy (SAGE) [3,4,5] and Shannon-Neumann Insight [1,2,4].
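As one illustration of how an insight gain can be quantified, the sketch below measures it as the Shannon-entropy reduction in the AI's belief distribution over a person's needs after an answer. This is only a plausible reading of the cited Shannon-Neumann measure, not its published definition.

```python
import math

def shannon_entropy(p: list) -> float:
    """Shannon entropy H(p) = -sum p_i * log2(p_i), in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def insight_gain(prior: list, posterior: list) -> float:
    """Insight from an answer, read as entropy reduction H(prior) - H(posterior)."""
    return shannon_entropy(prior) - shannon_entropy(posterior)

# Beliefs over three hypothesized needs, before and after a question is answered.
prior = [1/3, 1/3, 1/3]        # maximally uncertain
posterior = [0.8, 0.1, 0.1]    # the answer concentrated the belief
print(f"insight gain: {insight_gain(prior, posterior):.2f} bits")
```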


[1] AI gaining insights

[2] AI expressing cooperative insights

[3] AI cognitive empathy

[4] AI compassionate-rational insight

[5] AI solutions

[6] AI deceptions and other AI risks

