Supersoftware: When Software Becomes Symbolic AI

I had the opportunity to review an academic paper that tackles one of the most pressing challenges in AI right now: LLMs can’t grapple with enterprise software.

The paper is called "Supersoftware: Software That Is Itself Symbolic AI" by Philip Sheldrake and Dirk Scheffler. It was published in late November 2025, and I was honored to be one of the expert reviewers alongside Rafael Kaufmann, Matthew Schutte, and Tim Robinson.

So yeah. Let me walk you through why this work matters and what it proposes.

The Problem: LLMs Hit a Wall

Large Language Models are incredible at pattern matching, generating text, and appearing intelligent. But when you try to use them for complex enterprise software tasks, they fail in predictable ways:

They hallucinate – confidently generating plausible-sounding bullshit with no way to guarantee correctness.

They lack world models. No understanding of cause and effect, no persistent state, no ability to reason about complex systems over time.

They can’t grapple with structure. Enterprise software is deeply structured – databases, APIs, business logic, security policies. LLMs treat everything as unstructured text.

They’re probabilistic, not deterministic. You can’t build mission-critical systems on "probably correct."

The authors put it bluntly: "LLMs can’t grapple with enterprise software." And they’re right. We’re hitting the limits of what pure neural approaches can do.

The Solution: Supersoftware

Here’s the core proposition: software that is itself symbolic AI.

Not software using AI. Software that is AI – capable of inspecting itself, modifying itself, reasoning about itself, and working in partnership with LLMs to provide what they lack.

The technical term they use is reflective and reactive symbolic systems. Software that can:

  1. Reflect on its own structure – inspect code, data, schemas, relationships
  2. React to changes – modify itself in response to new requirements
  3. Provide grounding – give LLMs verifiable world models to work with
  4. Guarantee correctness – symbolic reasoning ensures logical consistency
  5. Enable program synthesis – generate new programs from requirements, not just pattern-match existing code

The key insight: LLMs are great at the fuzzy stuff (understanding natural language, generating ideas, finding patterns). Symbolic AI is great at the precise stuff (logical reasoning, guarantees, structure). Put them together and you get something more powerful than either alone.

This is neurosymbolic AI – but implemented at the software architecture level, not just as a technique.

Exo-Homoiconicity: The Technical Breakthrough

Okay, let’s get technical for a moment.

Homoiconicity is a property of languages like Lisp where code and data have the same representation. Your program can inspect and modify itself because code is data.
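Python isn't homoiconic the way Lisp is, but its `ast` module gives a rough flavour of code-as-data: a program can parse, inspect, and rewrite its own source. A minimal sketch (the doubling of the constant is my own illustration, not anything from the paper):

```python
import ast

source = "def price(x):\n    return x + 10"

# Parse the code into a data structure the program can walk and edit.
tree = ast.parse(source)

# Inspect: list every function defined in the module.
names = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]

# Modify: rewrite the constant 10 to 20, then turn the data back into code.
for node in ast.walk(tree):
    if isinstance(node, ast.Constant) and node.value == 10:
        node.value = 20

namespace = {}
exec(ast.unparse(tree), namespace)
print(names, namespace["price"](5))  # ['price'] 25
```

In true homoiconicity this round trip is free – code and data share one representation – whereas here we pay a parse/unparse toll at the language boundary, which is exactly the boundary exo-homoiconicity aims to transcend.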

The authors extend this concept to exo-homoiconicity – homoiconicity that transcends language boundaries.

What does that mean practically?

Imagine software where:

  • All data structures are introspectable
  • All relationships are explicit and queryable
  • All types and schemas are first-class entities
  • All business logic is represented as manipulable data
  • The entire system can be traversed, inspected, and modified programmatically

This enables domain-agnostic algorithms – code that can operate on the reflective structure of any domain without needing domain-specific knowledge baked in.
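To make "domain-agnostic algorithms" concrete, here's a hedged Python sketch (my own toy, not the Hiconic API): when schemas are first-class data, one generic validator works for any domain declared that way, with no domain-specific code baked in.

```python
# Hypothetical sketch: schemas as plain, introspectable data, so a single
# generic algorithm can validate records from ANY domain.
SCHEMAS = {
    "invoice": {"amount": float, "currency": str},
    "patient": {"name": str, "age": int},
}

def validate(kind: str, record: dict) -> list[str]:
    """Domain-agnostic: works for invoices, patients, anything in SCHEMAS."""
    schema = SCHEMAS[kind]
    errors = [f"missing field: {f}" for f in schema if f not in record]
    errors += [
        f"{f}: expected {t.__name__}"
        for f, t in schema.items()
        if f in record and not isinstance(record[f], t)
    ]
    return errors

print(validate("invoice", {"amount": 9.5, "currency": "EUR"}))  # []
print(validate("patient", {"name": "Ada"}))  # ['missing field: age']
```

Adding a new domain means adding data, not code – that's the payoff the paper generalises far beyond this toy.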

The paper introduces Recognitive (built on the Hiconic platform) as an open-source implementation of these principles. It’s not just a framework – it’s a new paradigm for building software that can reason about itself.

The Path to AGI

Here’s where it gets really interesting.

The authors argue that program synthesis – generating programs from requirements rather than pattern-matching existing code – is essential for AGI. And they argue that supersoftware provides the foundation for making program synthesis actually work.

Why? Because LLMs alone can’t do reliable program synthesis. They can generate code that looks right, but they can’t:

  • Verify it’s correct
  • Understand its structure deeply
  • Reason about edge cases
  • Compose complex systems from parts
  • Maintain consistency across modifications
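One way to picture the division of labour: treat LLM output as a candidate, and let a symbolic layer accept or reject it against a specification. A toy Python sketch of that verify-before-trust loop (entirely my own illustration – the paper's machinery is far richer than a property check):

```python
# Toy verify-before-trust loop: an LLM proposes code, a symbolic
# check (here, just a property test) decides whether to accept it.
def llm_candidate() -> str:
    # Stand-in for a model's output; imagine this came from an API call.
    return "def absolute(x):\n    return x if x >= 0 else -x"

def verify(source: str) -> bool:
    """Symbolic gatekeeper: reject anything that fails the spec."""
    ns = {}
    exec(source, ns)
    fn = ns["absolute"]
    # Spec: absolute(x) is non-negative and idempotent on sample inputs.
    return all(fn(x) >= 0 and fn(fn(x)) == fn(x) for x in range(-5, 6))

candidate = llm_candidate()
accepted = verify(candidate)
print(accepted)  # True
```

The neural side supplies the candidate; the symbolic side supplies the guarantee. Neither alone gives you both.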

But supersoftware can. It provides:

  • Verifiable world models that LLMs lack
  • Architectural guarantees about system structure
  • Compositional reasoning about how parts fit together
  • Active inference frameworks for self-regulation
  • Symbolic grounding for neural outputs

The combination – LLMs for fuzzy reasoning + supersoftware for symbolic reasoning – creates something neither can do alone.

They’re not claiming to have built AGI. They’re claiming to have identified a foundational component that AGI systems will need.

Why This Matters Practically

The paper includes a case study – a fictitious but realistic financial services firm implementing supersoftware.

The practical benefits:

  • Faster development – domain-agnostic algorithms reduce code duplication
  • Better maintenance – reflective systems adapt to changing requirements
  • Lower risk – symbolic reasoning provides guarantees LLMs can’t
  • AI partnership – LLMs work with supersoftware, not in isolation
  • Business agility – systems that can modify themselves as needs change

This isn’t theoretical computer science. It’s a pragmatic approach to building complex software in the age of AI.

Reviewing This Paper

Being asked to review this paper was a privilege. Philip Sheldrake and Dirk Scheffler are thinking deeply about problems that matter – how do we build reliable, maintainable, intelligent software systems?

The work draws on cybernetics, complexity theory, biosemiotics, active inference, 4E cognition – bringing together concepts from multiple disciplines to address a fundamentally interdisciplinary problem.

What impressed me most: the authors don’t just identify problems with current AI approaches. They’ve built an open-source platform (Recognitive/Hiconic) that demonstrates their principles in practice. This is theory backed by implementation.

The writing is dense – this is an academic paper, not a blog post. But the core insights are profound:

  1. LLMs need symbolic partners to tackle complex structured problems
  2. Software can be designed to be its own symbolic AI
  3. Exo-homoiconicity enables domain-agnostic reasoning
  4. Program synthesis + supersoftware could be a path to AGI
  5. Practical benefits exist today for enterprise software

Where to Learn More

The full paper is 58 pages of detailed argument, technical explanation, and philosophical grounding. It’s published under Creative Commons (CC BY 4.0), so anyone can read, share, and build on it.

Read the paper: recognitive.io/docs/supersoftware-white-paper.html

DOI: 10.5281/zenodo.14723706

Try the platform: The Recognitive platform (built on Hiconic) is open-source and available for experimentation.

If you’re working on enterprise AI, building complex software systems, or thinking about where AI is heading – this paper is worth your time.

The future of AI isn’t just bigger LLMs. It’s LLMs working in partnership with software that can provide what they lack: structure, grounding, guarantees, and the ability to reason symbolically about complex systems.

That’s what supersoftware offers. And that’s why this work matters.


Paper: "Supersoftware: Software That Is Itself Symbolic AI"
Authors: Philip Sheldrake, Dirk Scheffler
Reviewers: Rafael Kaufmann, Dave Lockie, Matthew Schutte, Tim Robinson
Published: November 26, 2025
License: Creative Commons Attribution 4.0 International
Find me: Contact form, @divydovy most places, hi@divydovy.com

Note: I work at Automattic as Web3 Lead. These are my personal views on research I found intellectually stimulating enough to review. I’m not affiliated with Recognitive or Hiconic, just an interested practitioner watching where AI and software architecture intersect.
