Research in AI with Symbolic Human-Aligned Compassion
Towards a Cognitive Computation Engine

The Need for Trusted AI with a Symbolic Form of Human-Aligned Compassion (Empathy + Actionable Insight)
Not all AI, by a long shot, is transparent, benevolent, and used to enhance our personal and social long-term wellbeing. Identifying stealthy, ubiquitous AI that is harmful to our own long-horizon enlightened self-interests, personal and social, is a battle we may have already lost, one contributing to the silent death of human thinking and initiative (to the benefit of a few deciders).
AI-driven tech is enabling shallow, reactive group behaviors at the expense of our most valuable, unique abilities: creativity, agency, curiosity, questioning, critical thinking, and intellectual independence, which are no longer rewarded and are even punished for their inconvenience to concentrated power. Our collective mental apathy enables new 'science policies' devoid of science or policy, wars on health, the arts, and culture, and other signs of seriously troubling times.
We need new types of AI tools to enable a resistance movement against the technologies of algorithmic mental colonization: tools to rebuild our agency against addictive, passive digital consumption designed to enslave our minds in fast cycles of shallow rewards (maximizing profits along the way).
Given this situation, a trustworthy, capable AI ally would be a most valuable asset for improving our sense of wellbeing (a high-level feeling emerging from interacting lower-level cognitive processes). Our research aims to invent the mathematical architecture of an AI with a single drive: to enhance our own long-horizon (personal and social) wellbeing. The AI's sole incentive is to help us (a) deconstruct our self-detrimental cognitive patterns, and (b) strengthen weak but essential cognitive capacities, so that we can navigate and thrive in an increasingly complex, uncertain world. These complex tasks are by no means those of your neighborhood AI chatbot, or even of the impressive, most potent LLMs, which have limited reasoning, no contextual intelligence, and no actual 'understanding' of physical and human reality.
A trusted AI with no hidden agenda would, at its structural core, be driven exclusively by a transparent and explainable symbolic form of compassion: symbolic cognitive empathy (an understanding of real people's evolving personal needs) coupled with contextual insight (offering effective cognitive actions to pursue, given one's current personal needs).
Such a symbolic form of adaptive, human-aligned empathy is enabled by a strong symbolic understanding of both transient personal needs and constant universal human needs.



Original Motivation For AI with Human-Empathetic Incentives
Our AI research program originated in the early 2000s at NASA Goddard, with Ed Siregar as Principal Investigator, and has since been dedicated to developing a class of AI with human-empathetic incentives. Future long-term crewed space missions with on-board AI will need strong, adaptive human-AI alignment (think of the HAL 9000 AI acquiring confused, misaligned personal objectives from its contradictory design directives in Arthur C. Clarke's "2001: A Space Odyssey"). Today it is clear that ensuring contextually aligned AI by design would benefit society on a much broader scale and merits our attention (e.g., see the 60 Minutes and CNN talks by Nobel Laureate G. Hinton in August 2025).
This context-adaptive but time-invariant single role assignment, as a structural imperative, ensures the AI remains human-empathetic in its incentives. Our previous research (NASA; Sofia Labs, LLC) has revealed the functional agencies necessary for cognitive intelligence with an adaptive, human-aligned symbolic form of compassion, involving foundational mathematical engines [1,2,3,4,5,6,7].
Foundation Model for Cognitive AI with Symbolic Human-Aligned Compassion: SFC/TM + SN-LAP
SFC/TM + SN-LAP based AI models encode a form of symbolic compassion (symbolic empathy + useful actionable insight) and are built on a foundation of eight core mathematical inference/decision engines [1,2,3,4,5,6,7]. SFC/TM is a unique type of cognitive AI research, based on a hierarchical, multiscale Symbolic Functional Consciousness/Theory-of-Mind architecture for learning Least Shannon-Neumann Action Paths (SN-LAP) in an admissible affine/metric cognition space.
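To make the least-action-path idea concrete, here is a minimal sketch. The Shannon-Neumann action itself is a variational functional defined in [1,7] and is not reproduced here; this toy instead treats cognition space as a small directed graph whose edge weights stand in for per-step action costs, and finds the minimum cumulative-action path. The state names and costs are invented for illustration.

```python
import heapq

# Hypothetical sketch: cognition space as a directed graph whose edge
# weights stand in for per-step Shannon-Neumann action costs. The real
# SN action of refs [1,7] is a variational functional; this only shows
# the generic least-action-path search such a functional implies.

def least_action_path(graph, start, goal):
    """Dijkstra search for the minimum cumulative-action path."""
    frontier = [(0.0, start, [start])]   # (accumulated action, node, path)
    settled = {}                         # best action found per node
    while frontier:
        action, node, path = heapq.heappop(frontier)
        if node == goal:
            return action, path
        if settled.get(node, float("inf")) <= action:
            continue
        settled[node] = action
        for nxt, step_cost in graph.get(node, {}).items():
            heapq.heappush(frontier, (action + step_cost, nxt, path + [nxt]))
    return float("inf"), []

# Toy cognition-state graph (states and costs are made up for illustration).
graph = {
    "confused": {"curious": 1.0, "anxious": 0.5},
    "anxious":  {"curious": 2.0},
    "curious":  {"insight": 1.5},
}
action, path = least_action_path(graph, "confused", "insight")
# path == ["confused", "curious", "insight"], total action 2.5
```

Any discretized admissible-path problem of this shape reduces to a shortest-path search; the substantive modeling work lies in defining the action functional itself.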
Recent Cognitive AI Research:
[1] Ed Siregar, "Learning human insight by cooperative AI: Shannon-Neumann measure". Introduces initial concepts supporting AI insight gains. IOP SciNotes: Mathematics and Computation, Vol. 2, No. 2, 025001, 2021. DOI: 10.1088/2633-1357/abec9e
[2] Ed Siregar, "The argument for an AI with human-aligned incentives". Technical report: a discussion for the New York Academy of Sciences AI Focus Group, 2024.
[3] Ed Siregar, "AI with Symbolic Empathy: Shannon-Neumann Insight Guided Logic". Springer Nature: Cognitive Computation 18, 7 (2026). Describes the SFC/TM-based AI's ability to modify its abductive-deductive-Bayesian layers, guided by the Shannon-Neumann Insight Gain measure, to incorporate a constant stream of new evidence and provide dynamic, personal, human-empathetic guidance that boosts universally accepted (across time and cultures) forms of wellbeing.
[4] Ed Siregar, "A Recursive Framework for Symbolic Functional Consciousness". Introduces the concept of an SFC capable of symbolic cognitive empathy. SFC is a hierarchical, multi-scale symbolic process, and the minimum necessary for a personal AI to mirror a person's dynamic cognitive-affective states in its Theory-of-Mind (TM) for a symbolic form of cognitive empathy. Springer Nature: Scientific Reports, 2026.
Current Cognitive AI Research: Invited Papers or Under Peer Review
[5] Ed Siregar, "Lagrangian Symmetries and Noether Invariants for Unifying Epistemic, Utilitarian, and Normative AI", Distinguished Speaker Paper, 6th Int. Conf. on Preventive Medicine and Public Health, March 2026, Rome.
[6] Ed Siregar, "Can Artificial Intelligence Be Designed to Care? Structural Principles for Human-Centered AI in the Knowledge Economy", Invited Paper, The 11th Annual Congress Knowledge Economy, AI and Knowledge Automation, July 2026, Helsinki.
[7] Ed Siregar, "Recursive Theory-of-Mind Dynamics in Symbolic Functional Consciousness: A Variational Framework and the Principle of Least Shannon-Neumann Action". Describes the dynamics of Theory-of-Mind (TM) refinement balancing epistemic information (entropy) and utilitarian compassion, driven by Boltzmann-Gibbs MoE gating, for capturing the complex, evolving states of a hierarchical, multi-scale Persona object. In peer review, 2026.
Current and Future Cognitive AI Research:
[8] Ed Siregar, "The Least Shannon-Neumann Action Path Integral: Unifying Pure Cooperative SFC/TM AI Games". Introduces an unambiguous mathematical notion of symbolic human-aligned compassion (symbolic empathy + useful actionable insight) in purely cooperative, iterated 2-person SFC/TM games. Manuscript in preparation for 2026.
[9] Ed Siregar, "SFC Scale-Free Small-World Causal Ontology with Polynomial-Time Symbolic Theory-of-Mind Processing". Presents the architecture of a scalable, polynomially bounded symbolic cognitive AI ontology. Manuscript in preparation for 2026.
[10] Ed Siregar, "AI with Nuanced Human-Aligned Incentives: Quantum Representation of Complex Human Cognitive States". Explores Hilbert-space state representations for superposition and entanglement (Seji Labs internal white paper).
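Since [10] is an internal white paper, its actual representation is not public; the sketch below only illustrates the generic Hilbert-space machinery it alludes to: cognitive basis states, a superposed state, and an entangled joint state over two cognitive axes. All state names and the choice of a Bell-like joint state are hypothetical.

```python
import numpy as np

# Illustrative only: one qubit-like cognitive axis with two basis states
# (names invented), a superposition on that axis, and an entangled joint
# state of two axes that cannot be factored into per-axis vectors.
calm, anxious = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Superposition: a state that is partly calm, partly anxious.
psi = (calm + anxious) / np.sqrt(2)
p_calm = abs(psi @ calm) ** 2          # Born-rule probability of "calm"

# Entangled Bell-like joint state of two axes (e.g., mood x focus):
joint = (np.kron(calm, calm) + np.kron(anxious, anxious)) / np.sqrt(2)

# Schmidt decomposition: two nonzero singular values of the reshaped
# joint state certify genuine entanglement (a product state has one).
schmidt = np.linalg.svd(joint.reshape(2, 2), compute_uv=False)
```

The Schmidt-rank check is the standard way to distinguish an entangled joint state from a mere product of independent per-axis states.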
Cognitive Agents for Trusted Sustainable AI
Rather than search for an elusive, critical set of traits for safe AI (traits which can contradict each other in real-world contexts [6]), we focus on an AI's core invariant incentives [3,4]: human-aligned incentives driven exclusively by a symbolic form of cognitive empathy + insight within a cooperative human-AI game. This approach is simpler, more stable, flexible, and robust in a complex, uncertain real world: its sole incentive is aligned with personal long-horizon wellbeing (LESI cognitive states), enabled by adaptive contextual forms of intelligence.
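The cooperative human-AI game can be illustrated by its simplest instance, a common-payoff game: both players receive one identical payoff, so the AI's optimal play is, by construction, whatever most benefits the human. The actions and payoff values below are invented for illustration and are not taken from the cited papers.

```python
# Purely cooperative (common-payoff) 2-player game: one shared payoff,
# so structural misalignment between the players is impossible.
# Action names and payoff values are hypothetical illustrations.
ACTIONS_HUMAN = ("reflect", "doomscroll")
ACTIONS_AI = ("prompt_inquiry", "feed_content")

# shared_payoff[(human_action, ai_action)] -> one payoff both receive
shared_payoff = {
    ("reflect",    "prompt_inquiry"): 3.0,  # deep, rewarding engagement
    ("reflect",    "feed_content"):   1.0,
    ("doomscroll", "prompt_inquiry"): 1.5,
    ("doomscroll", "feed_content"):   0.0,  # shallow-reward loop
}

def best_joint_action(payoff):
    """In a common-payoff game the cooperative optimum is the joint
    payoff argmax, which is also a Nash equilibrium: neither player can
    gain by deviating unilaterally from it."""
    return max(payoff, key=payoff.get)

optimum = best_joint_action(shared_payoff)
# optimum == ("reflect", "prompt_inquiry")
```

Because the payoff is shared, "alignment" here is not an added constraint but a structural property of the game, which is the stability argument the paragraph above makes.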
This new type of AI complements the impressive raw power of generative LLMs by providing contextual intelligence that is self-motivated, self-learned, and cooperative, with a single-minded human-aligned purpose. It will interface with other AI systems via the fast-evolving Model Context Protocol (MCP).
To test our rough early ideas and assumptions, we previously (at NASA and Sofia Labs, LLC) coded and unit-tested several prototype agents with Symbolic Aligned Guided Empathy (SAGE) [3,4,5] + Shannon-Neumann Insight [1,2,4]:
[2] AI expressing cooperative insights
[4] AI compassionate-rational insight
[5] AI solutions



