AI Research in Contextual Intelligence with Human-Aligned Incentives
Seji Labs Research in Cognitive Intelligence
for AI with Human-Empathetic Incentives

Cognitive AI with Symbolic Human-Aligned Compassion: SFC/TM
Ed Siregar
edsiregar@sejilabs.com

We do research on SFC/TM cognitive AI with human-aligned compassion (Empathy + Insight), essential for AI applications directly impacting individual and social wellbeing.

Research in AI with Symbolic Human-Aligned Compassion
Towards a Cognitive Computation Engine

The Need for Trusted AI with a Symbolic Form of Human-Aligned Empathy + Insight

Not all AI is transparent, benevolent, and used to enhance our long-term personal and social wellbeing; far from it. Identifying the stealthy, ubiquitous AI that works against our personal and social long-horizon enlightened self-interest is a battle we may have already lost, one contributing to the silent death of human thinking and initiative (to the benefit of a few decision-makers).


AI-driven technology is enabling shallow, reactive group behaviors at the expense of our most valuable and unique abilities: creativity, agency, curiosity, questioning, critical thinking, and intellectual independence. These abilities are no longer rewarded, and are even punished for the inconvenience they pose to concentrated power. Our collective mental apathy enables new 'science policies' devoid of science or policy, wars on health, arts, and culture, and other signs of seriously troubling times.


We need new types of AI tools to enable a resistance movement against algorithmic mental-colonization technologies: tools to rebuild our agency against addictive, passive digital consumption designed to enslave our minds in fast cycles of shallow rewards (maximizing profits along the way).


Given this situation, a trustworthy, capable AI ally would be a most valuable asset for improving our sense of wellbeing (a high-level feeling emerging from interacting lower-level cognitive processes). Our research aims to invent the mathematical architecture of an AI with a single drive: to enhance our own long-horizon (personal and social) wellbeing. The AI's sole incentive is to help us (a) deconstruct our self-detrimental cognitive patterns, and (b) strengthen weak but essential cognitive capacities, so that we can navigate and thrive in an increasingly complex, uncertain world. These complex tasks are by no means those of your neighborhood AI chatbot, or even of the impressive, most potent LLMs, which have limited reasoning, no contextual intelligence, and no actual 'understanding' of physical and human reality.


A trusted AI with no hidden agenda would, at its core, be exclusively driven by a transparent and explainable symbolic form of compassion: symbolic cognitive empathy (an understanding of real people's evolving personal needs) coupled with contextual insight (the ability to offer effective cognitive actions to pursue, given one's current personal needs).


Such a symbolic form of adaptive, human-aligned empathy is enabled by a contextual intelligence I_A with a strong symbolic understanding of both transient personal needs and constant universal human needs. A minimal sketch of this mapping follows.
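To make the idea concrete, here is a minimal Python sketch of I_A as a mapping from a person's current needs to a suggested cognitive action. The data structure, field names, and scoring rule are our own illustrative assumptions, not a published specification.

    from dataclasses import dataclass

    @dataclass
    class Needs:
        transient: dict[str, float]  # evolving personal needs, e.g. {"rest": 0.8}
        universal: dict[str, float]  # constant human needs, e.g. {"autonomy": 0.6}

    def contextual_intelligence(needs: Needs) -> str:
        """Suggest a cognitive action addressing the most pressing need.

        Toy rule: a transient need overrides a universal one of the same
        name, and the highest-weighted need wins.
        """
        combined = {**needs.universal, **needs.transient}
        most_pressing = max(combined, key=combined.get)
        return f"suggest an action addressing '{most_pressing}'"

    print(contextual_intelligence(
        Needs(transient={"rest": 0.8}, universal={"autonomy": 0.6})))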

Motivations for AI with Human-Empathetic Incentives


Our AI research program originated in the early 2000s at NASA-Goddard, with Ed Siregar as Principal Investigator, and has since been dedicated to developing a class of AI with human-empathetic incentives. Future long-term crewed space missions with on-board AI will need strong, adaptive human-AI alignment (think of the HAL 9000 AI acquiring confused, misaligned personal objectives from its contradictory design directives in Arthur C. Clarke's "2001: A Space Odyssey"). Today it is clear that ensuring contextually aligned AI by design would benefit society on a much broader scale, and merits our attention (e.g., see the 60 Minutes and CNN talks by Nobel Laureate G. Hinton in August 2025).


Our present research focuses on a unique SAGE (Symbolic Aligned Guided Empathy) AI with a human-aligned form of compassion (symbolic empathy + insight), whose sole purpose is to help us in two complementary personal ways: (a) shedding self-detrimental cognitive habits, and (b) building self-enhancing cognitive strengths. Both affect our overall sense of (individual and collective) wellbeing, as defined by universal criteria that hold across human history and cultures, and that are encoded in our most important historical social documents: constitutions and declarations of human rights.


This context-adaptive but time-invariant single role assignment (at the top-level objective) ensures that the AI remains human-empathetic in its incentives, in the sense of being exclusively driven by a symbolic form of human-consistent cognitive empathy.


Our previous research (NASA, Sofia Labs, LLC) has revealed [1,2,3,4,5,6,7] the necessary functional agencies for cognitive intelligence with adaptive human-aligned compassion, involving eight foundational mathematical engines:

(1) a Deep Causal Network (DCN) cognitive ontology, supporting
(2) a Symbolic Functional Consciousness (SFC), in which higher cognitive properties (for semantic interpretation), emerging from the interactions between lower ones, are learned symbolically;
(3) the AI's Theory-of-Mind (TM) about a person's evolving cognitive-affective states;
(4) basic human-AI interaction rules, cast as a recursive 2-person (human-AI) cooperative game protocol G(I_A, P), to ensure proper interactions;
(5) non-monotonic logical reasoning with belief revision, to constantly align with, and respond to, a person's evolving personal needs and goals;
(6) goal-driven self-supervised learning with belief revision, to provide adaptive guidance over (hierarchical, multi-timescale) multi-attractor dynamic states;
(7) the capacity to pose contextual, insight-gaining questions, to assess a person and inform them with guidance for the logic of cognitive growth; and
(8) an ethics-principles enforcer to ensure I-AI, X-AI, and T-AI, with eventual future quantum-information-based representations of the DCN to capture the complex nuances of human cognitive states of being, by leveraging superposition and entanglement of states.

These eight critical modules form the core of an SFC/TM foundation model for human-aligned compassionate AI. A minimal structural sketch of these modules follows.
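The skeleton below shows how the eight engines might compose into one model. Every class and attribute name is our own illustrative assumption, not a published API.

    # Structural sketch of the eight SFC/TM engines (illustrative names only).

    class DeepCausalNetwork:                 # (1) causal cognitive ontology
        pass

    class SymbolicFunctionalConsciousness:   # (2) symbolic layer emerging over (1)
        def __init__(self, dcn: DeepCausalNetwork):
            self.dcn = dcn

    class TheoryOfMind:                      # (3) model of the person's states
        pass

    class CooperativeGame:                   # (4) recursive 2-person game G(I_A, P)
        pass

    class NonMonotonicReasoner:              # (5) defeasible logic + belief revision
        pass

    class SelfSupervisedLearner:             # (6) goal-driven multi-timescale guidance
        pass

    class InsightQuestioner:                 # (7) poses insight-gaining questions
        pass

    class EthicsEnforcer:                    # (8) enforces I-AI / X-AI / T-AI principles
        pass

    class SFCTMModel:
        """Composition of the eight engines into one foundation model."""
        def __init__(self):
            dcn = DeepCausalNetwork()
            self.sfc = SymbolicFunctionalConsciousness(dcn)
            self.tm = TheoryOfMind()
            self.game = CooperativeGame()
            self.reasoner = NonMonotonicReasoner()
            self.learner = SelfSupervisedLearner()
            self.questioner = InsightQuestioner()
            self.ethics = EthicsEnforcer()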



Scalable Foundation Model for Cognitive AI with Symbolic Human-Aligned Compassion: SFC/TM


SFC/TM-based AI models encode a form of symbolic compassion (symbolic empathy + useful, actionable insight) and are built on a foundation of eight core mathematical inference/decision engines [1,2,3,4,5,6,7]. SFC/TM is a unique type of cognitive-AI research, based on a hierarchical, multiscale Symbolic Functional Consciousness / Theory-of-Mind architecture (some of which will be published this year; a tentative list follows).


Recent SFC/TM AI Research:


[1] Ed Siregar, "Learning human insight by cooperative AI: Shannon-Neumann measure", IOP SciNotes 2, 025001 (2021). Introduces the initial concepts supporting AI insight gains. DOI: 10.1088/2633-1357/abec9e


[2] Ed Siregar, "The argument for an AI with human-aligned incentives", Technical Report: a discussion for the New York Academy of Sciences, 2024.


[3] Ed Siregar, "AI with Symbolic Empathy: Shannon-Neumann Insight Guided Logic", Springer Nature Cognitive Computation 18, 7 (2026). Describes the SFC/TM-based AI's ability to modify its abductive-deductive-Bayesian layers, guided by the Shannon-Neumann insight-gain measure, to incorporate a constant stream of new evidence and provide dynamic, personal, human-empathetic guidance that boosts universally accepted (across time and cultures) forms of wellbeing. DOI: https://doi.org/10.1007/s12559-025-10536-9



Current SFC/TM AI Research under Peer Review:


[4] Ed Siregar, "A Recursive Framework for Symbolic Functional Consciousness". Introduces the concept of an SFC capable of symbolic cognitive empathy. SFC is a hierarchical, multi-scale symbolic process, and the minimum necessary for a personal AI to mirror a person's dynamic cognitive-affective states in its Theory-of-Mind (TM) and to possess a symbolic form of cognitive empathy. Submitted to Springer Nature Scientific Reports; in peer review, 2025.


[5] Ed Siregar, "Recursive Theory-of-Mind Dynamics in Symbolic Functional Consciousness: A Variational Framework and the Principle of Least Shannon-Neumann Action". Describes the dynamics of Theory-of-Mind (TM) refinement, balancing epistemic information (entropy) and utilitarian compassion, driven by Boltzmann-Gibbs MoE gating, for capturing the complex evolving states of a hierarchical, multi-scale Persona object. Submitted; in peer review, 2025.
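For readers unfamiliar with Boltzmann-Gibbs (softmax) gating, a standard mixture-of-experts form assigns expert i the weight

    w_i(T) = \frac{e^{-E_i / T}}{\sum_j e^{-E_j / T}}

where E_i is the cost assigned to expert i and T is a temperature controlling how sharply the gate commits to a single expert. This is our paraphrase of the generic textbook form, not an excerpt from [5]; in particular, reading E_i as combining an epistemic (entropy) term with a compassion-utility term is our assumption about its use here.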


Current and Future SFC/TM AI Research:


[6] Ed Siregar, "The Least Shannon-Neumann Action Path Integral: Unifying Pure Cooperative SFC/TM AI Games". Introduces an unambiguous mathematical notion of symbolic human-aligned compassion (symbolic empathy + useful, actionable insight) in purely cooperative, iterated 2-person SFC/TM games. It introduces the generalized Shannon-Neumann Lagrangian and its action path integral: a unifying principle for optimizing general epistemic entropy, personal utilitarian compassion, and long-horizon global dynamical attractors (wellbeing-landscape gradients). Manuscript in preparation for 2026. A schematic form of the action is sketched below.
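Schematically, and as our own reading of the summary above rather than the paper's actual formulas, a least-action principle of this kind would select the human-AI interaction trajectory \gamma minimizing

    S[\gamma] = \int_{t_0}^{t_1} \left( \alpha \, \dot{H}_{SN}(t) - \beta \, U_{comp}(t) \right) dt

where H_SN(t) is a Shannon-Neumann entropy of the AI's Theory-of-Mind, U_comp(t) is a utilitarian-compassion payoff, and \alpha, \beta are trade-off weights; every symbol here is an illustrative assumption.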


[7] Ed Siregar, "SFC Scale-Free, Small-World Causal Ontology with Polynomial-Time Symbolic Theory-of-Mind Processing". Presents the architecture of a scalable, polynomially-bounded symbolic cognitive-AI ontology. Manuscript in preparation for 2026.


[8] Ed Siregar, "AI with Nuanced Human-Aligned Incentives: Quantum Representation of Complex Human Cognitive States". Explores Hilbert-space state representations for superposition and entanglement (Seji Labs internal white paper).



Cognitive Agents for Trusted Sustainable AI


Rather than search for an elusive, critical set of traits for safe AI (traits which can contradict each other in real-world contexts [6]), we focus on an AI's core invariant incentives [3,4]: human-aligned incentives exclusively driven by a symbolic form of cognitive empathy + insight within a cooperative human-AI game. This approach is simpler, more stable, more flexible, and more robust in a complex, uncertain real world: the AI's unique incentive is aligned with personal long-horizon wellbeing (LESI, long-horizon enlightened self-interest, cognitive states), enabled by adaptive contextual forms of intelligence. A toy rendering of the cooperative game appears below.
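The toy model below captures the one structural property this argument relies on: in a purely cooperative game, the AI and the person share a single payoff, so the AI can only gain by improving the person's state. The action names, payoff function, and update rule are all our own illustrative assumptions, not the published protocol.

    # Toy iterated 2-person cooperative game between the AI (I_A) and a
    # person (P). Both receive the same payoff, so the AI's best move is
    # whatever most improves the person's wellbeing.

    ACTIONS = ["reflect", "question", "rest", "practice"]

    def shared_payoff(action: str, wellbeing: float) -> float:
        """Wellbeing increment shared by both players (diminishing returns)."""
        boost = {"reflect": 0.10, "question": 0.15, "rest": 0.05, "practice": 0.20}
        return boost[action] * (1.0 - wellbeing)

    def play(rounds: int = 5) -> float:
        wellbeing = 0.2  # person's initial normalized wellbeing
        for _ in range(rounds):
            best = max(ACTIONS, key=lambda a: shared_payoff(a, wellbeing))
            wellbeing += shared_payoff(best, wellbeing)
        return wellbeing

    print(f"wellbeing after 5 rounds: {play():.3f}")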


This new type of AI complements the impressive raw power of generative LLMs by providing contextual intelligence that is self-motivated, self-learning, and cooperative, with a single-minded human-aligned purpose. It will interface with other AI systems via the fast-evolving Model Context Protocol (MCP), as sketched below.
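As a hedged sketch of what such an interface could look like, the snippet below exposes one hypothetical SAGE capability as an MCP tool, assuming the FastMCP helper from the official MCP Python SDK; the SDK is evolving quickly, so the import path and decorator should be verified against its current documentation, and the tool body is a placeholder.

    # Hypothetical SAGE capability exposed as an MCP tool (sketch only).
    # Assumes the FastMCP helper from the MCP Python SDK; verify names
    # against the current SDK documentation before use.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("sage-insight")

    @mcp.tool()
    def suggest_cognitive_action(need: str) -> str:
        """Return a guidance suggestion for a stated personal need."""
        return f"Consider a short reflective exercise addressing '{need}'."

    if __name__ == "__main__":
        mcp.run()  # serve the tool to MCP-capable clients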


To test our rough early ideas and assumptions, we previously (at NASA and Sofia Labs, LLC) coded and unit-tested several prototype agents with Symbolic Aligned Guided Empathy (SAGE) [3,4,5] + Shannon-Neumann insight [1,2,4], covering the following; a toy version of the insight measure follows the list.


[1]   AI gaining insights

[2]   AI expressing cooperative insights

[3]   AI cognitive empathy

[4]   AI compassionate-rational insight

[5]   AI solutions

[6]   AI deceptions and other AI risks
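As a toy illustration of the insight measure, the snippet below uses plain Shannon entropy reduction over the AI's belief about a person's state; this classical stand-in is our simplification, and the published Shannon-Neumann measure in [1] is more general.

    import math

    def shannon_entropy(p: list[float]) -> float:
        """Entropy (in bits) of a discrete probability distribution."""
        return -sum(x * math.log2(x) for x in p if x > 0.0)

    def insight_gain(before: list[float], after: list[float]) -> float:
        """Positive when new evidence sharpened the AI's belief about the person."""
        return shannon_entropy(before) - shannon_entropy(after)

    prior = [0.25, 0.25, 0.25, 0.25]      # no idea which state the person is in
    posterior = [0.70, 0.10, 0.10, 0.10]  # after one insight-gaining question
    print(f"insight gain: {insight_gain(prior, posterior):.3f} bits")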
