Ndea Combines Deep Learning with Program Synthesis for Novel AI
Brief on the January 2025 founding of Ndea by Keras creator François Chollet and Zapier co-founder Mike Knoop.
In January 2025, François Chollet (creator of the Keras deep learning framework) and Mike Knoop (co-founder of Zapier) launched Ndea, a company combining deep learning with program synthesis in a novel approach to artificial intelligence. The approach addresses a key limitation of current neural networks — their inability to discover explicit rules and algorithms from data, instead relying on implicit pattern matching. Program synthesis could enable AI systems that not only recognize patterns but discover and articulate the rules underlying those patterns — a capability relevant to AGI development and potentially to consciousness, since rule discovery and explicit representation are associated with higher-order cognitive processing. See our neural networks coverage and cognitive computing analysis.
The Program Synthesis Approach
Ndea’s founding thesis addresses a fundamental limitation of current deep learning systems: while neural networks excel at pattern matching — learning to associate inputs with outputs through exposure to vast training datasets — they struggle with explicit rule discovery and algorithmic reasoning. A language model trained on millions of mathematical expressions can predict the next token in a mathematical sequence, but it does not discover the underlying mathematical rule in the way a mathematician does.
Program synthesis — the automatic generation of programs from specifications — offers a complementary approach. Rather than learning statistical associations from data, program synthesis systems discover explicit rules, algorithms, and programs that explain data and generalize to novel cases. Chollet’s ARC-AGI benchmark specifically tests this kind of fluid intelligence — the ability to identify abstract patterns and rules in novel problems that cannot be solved by memorization or statistical pattern matching. Current AI systems perform well below human levels on ARC, highlighting the gap between pattern matching and genuine reasoning.
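To make the distinction concrete, here is a toy sketch of enumerative program synthesis — not Ndea's actual (unpublished) method, just the generic technique. A hypothetical two-operation DSL is searched exhaustively for a program consistent with a handful of input–output examples; the result is an explicit, inspectable rule rather than a statistical fit.

```python
# Toy enumerative program synthesis over a tiny, hypothetical DSL.
# Illustrative only -- Ndea's actual system is not public.
from itertools import product

# Programs are sequences of (operation, constant) steps applied left to right.
OPS = {
    "add": lambda x, c: x + c,
    "mul": lambda x, c: x * c,
}
CONSTANTS = range(-5, 6)

def run(program, x):
    """Execute a program by applying each (op, constant) step to the input."""
    for op, c in program:
        x = OPS[op](x, c)
    return x

def synthesize(examples, max_len=2):
    """Enumerate programs up to max_len steps; return the first one
    consistent with every (input, output) example, or None."""
    steps = list(product(OPS, CONSTANTS))
    for length in range(1, max_len + 1):
        for program in product(steps, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

# Three examples of the rule "double, then add one" (f(x) = 2x + 1):
examples = [(1, 3), (2, 5), (4, 9)]
program = synthesize(examples)  # (("mul", 2), ("add", 1))
```

Unlike a fitted statistical model, the discovered program is a symbolic object: it can be read, verified, and applied to any input, which is exactly the generalization property the benchmark targets.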
Ndea’s approach combines deep learning’s representation learning capabilities with program synthesis’s rule discovery capabilities, potentially creating systems that both learn from data (like current AI) and discover explicit algorithms (like human reasoning). This hybrid approach could address the reasoning brittleness that limits current transformer models — their tendency to fail on out-of-distribution problems or adversarial inputs that require genuine understanding rather than pattern matching.
Chollet’s Philosophy of Intelligence
François Chollet, creator of the Keras deep learning framework (used by millions of developers worldwide) and former senior staff engineer at Google, brings a distinctive philosophical perspective on intelligence to Ndea. In his 2019 paper “On the Measure of Intelligence,” Chollet argued that intelligence should be defined not by performance on specific tasks (which can be achieved through memorization and brute-force computation) but by the ability to efficiently acquire new skills — the capacity to generalize from limited experience to novel situations.
This definition directly challenges the scaling hypothesis — the belief, held by many at OpenAI and DeepMind, that simply scaling current architectures with more data and compute will produce AGI. Chollet argues that no amount of scaling will produce genuine intelligence if the underlying architecture lacks the capacity for abstraction and rule discovery. Ndea represents the practical test of this philosophy — building systems that combine the statistical power of deep learning with the symbolic reasoning capabilities that Chollet argues are necessary for genuine intelligence.
Implications for AGI Development
Ndea’s program synthesis approach has significant implications for the AGI timeline debate. If Chollet is correct that current architectures fundamentally lack the capacity for abstract reasoning and rule discovery, then AGI cannot be achieved through scaling alone — architectural innovations of the kind Ndea is pursuing would be necessary. This view contrasts with the scaling-focused approach of OpenAI, Anthropic, and DeepMind, which have invested billions in training ever-larger transformer models on ever-larger datasets.
The success or failure of Ndea’s approach will provide important evidence for this debate. If program synthesis combined with deep learning produces systems that significantly outperform pure deep learning on fluid intelligence benchmarks (ARC-AGI and successors), this would validate Chollet’s critique of scaling and suggest that AGI requires qualitatively different approaches. If scaling continues to produce capability improvements that close the gap with human fluid intelligence, Chollet’s critique may prove premature.
Connection to Consciousness Research
The program synthesis approach has indirect relevance to consciousness research. Rule discovery and explicit representation are associated with higher-order cognitive processing — the kind of processing that Higher-Order Theories of consciousness associate with conscious awareness. If Ndea’s systems demonstrate genuine rule discovery and explicit self-representation of their reasoning processes, they could satisfy consciousness indicators from Higher-Order Theories more robustly than current language models, which produce metacognitive outputs through pattern matching rather than genuine self-reflection.
Furthermore, program synthesis could provide more interpretable AI systems — systems whose reasoning can be understood and verified by examining the discovered programs rather than inspecting opaque neural network weights. This interpretability is directly relevant to consciousness assessment, where understanding what a system is actually computing (rather than just observing its behavior) is essential for evaluating architectural indicators.
Market Context
Ndea operates at the intersection of the $34.28 billion deep learning market and the emerging field of neurosymbolic AI — systems that combine neural network learning with symbolic reasoning. The neurosymbolic approach has gained significant academic attention as the limitations of pure deep learning become more apparent, and Ndea represents one of the most prominent commercial ventures in this space. Mike Knoop’s background as co-founder of Zapier — a company built on automation and integration — complements Chollet’s AI research expertise, providing the product and business development experience needed to translate research insights into commercially viable products.
The ARC-AGI Benchmark and Its Significance
Chollet’s ARC (Abstraction and Reasoning Corpus) benchmark specifically tests fluid intelligence — the ability to solve novel problems that require identifying abstract patterns without relying on memorized knowledge. Current AI systems, including the most capable transformer-based models from OpenAI, Anthropic, and Google DeepMind, perform well below human levels on ARC tasks. This performance gap highlights a fundamental limitation: current deep learning systems excel at pattern matching on familiar data but struggle with the kind of genuinely novel reasoning that defines general intelligence. ARC has become one of the most important benchmarks in the AGI timeline debate because it directly measures the capability gap between current AI and human-level reasoning.
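The few-shot rule-induction structure of an ARC task can be illustrated with a deliberately simplified sketch. The candidate transformations below are a tiny hypothetical hypothesis space invented for this example; real ARC solvers search vastly richer spaces of grid operations.

```python
# Toy ARC-style task: infer which grid transformation explains the
# demonstration pairs, then apply it to a held-out test input.
# The candidate set is a small, hypothetical hypothesis space.
CANDIDATES = {
    "identity": lambda g: g,
    "flip_horizontal": lambda g: [row[::-1] for row in g],
    "flip_vertical": lambda g: g[::-1],
    "transpose": lambda g: [list(col) for col in zip(*g)],
}

def infer_rule(demonstrations):
    """Return the name of the first candidate transformation consistent
    with every (input, output) demonstration pair, or None."""
    for name, fn in CANDIDATES.items():
        if all(fn(inp) == out for inp, out in demonstrations):
            return name
    return None

# Two demonstrations of the same hidden rule (each row reversed):
demos = [
    ([[1, 0], [2, 3]], [[0, 1], [3, 2]]),
    ([[5, 6, 7]], [[7, 6, 5]]),
]
rule = infer_rule(demos)                        # "flip_horizontal"
prediction = CANDIDATES[rule]([[9, 8], [1, 2]])  # apply rule to test input
```

The point of the benchmark is that the rule must be induced from two or three demonstrations of a pattern the system has never seen, so memorization of training data cannot help.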
Program Synthesis: A Different Path to Intelligence
Ndea’s program synthesis approach represents a fundamentally different paradigm for AI. Rather than training a neural network to approximate the mapping from inputs to outputs through billions of parameters, program synthesis generates explicit programs — sequences of operations — that produce the correct output for any given input. This approach has several theoretical advantages: generated programs are interpretable (you can read and understand the logic), compositional (programs can be combined to solve larger problems), and generalizable (a program that correctly solves a pattern extends to any instance of that pattern, not just training examples).
The challenge is making program synthesis practical and scalable. The space of possible programs is vast, and searching it efficiently requires sophisticated heuristics, abstraction mechanisms, and — potentially — neural networks that guide the search process. Ndea’s approach combines deep learning with program synthesis, using neural networks to propose candidate programs and evaluate their fit to the problem, while the program synthesis framework ensures that solutions are explicit, testable, and generalizable. This hybrid approach could address the reasoning brittleness that limits pure deep learning systems while maintaining the scalability that pure symbolic approaches lack.
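The shape of such neurally guided search can be sketched as best-first search over candidate programs, where a scoring function decides which partial programs to expand next. Since Ndea's method is unpublished, the sketch below uses a hand-written heuristic (fraction of examples already solved) as a stand-in for the neural guide; the primitives and names are invented for illustration.

```python
# Sketch of guided program search: best-first expansion of candidate
# programs, prioritized by a scoring function. In a real neurally guided
# synthesizer the score would come from a trained network; here it is a
# hand-written stand-in (fraction of examples a candidate already solves).
import heapq
from itertools import count

PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    """Execute a program (a tuple of primitive names) on an input."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def score(program, examples):
    """Stand-in for a learned guide: fraction of examples solved."""
    return sum(run(program, x) == y for x, y in examples) / len(examples)

def guided_search(examples, max_expansions=1000):
    """Best-first search: repeatedly expand the highest-scoring candidate."""
    tie = count()  # unique tie-breaker so the heap never compares programs
    frontier = [(-score((), examples), next(tie), ())]
    for _ in range(max_expansions):
        if not frontier:
            break
        _, _, program = heapq.heappop(frontier)
        if all(run(program, x) == y for x, y in examples):
            return program  # explicit, testable solution
        for name in PRIMITIVES:
            child = program + (name,)
            heapq.heappush(frontier, (-score(child, examples), next(tie), child))
    return None

# Recover the rule f(x) = 2x + 1 as an explicit program:
examples = [(1, 3), (3, 7), (5, 11)]
program = guided_search(examples)  # ("double", "inc")
```

The division of labor mirrors the hybrid thesis: the scorer (in practice, a neural network) prunes an exponentially large search space, while the search framework guarantees that any returned solution is an explicit program that provably fits every example.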
Implications for AGI and Consciousness Research
Ndea’s program synthesis approach has implications for both the AGI timeline and consciousness research. If program synthesis can achieve genuine abstract reasoning — solving novel problems through compositional program generation rather than pattern matching — it would address one of the key technical bottlenecks that AGI skeptics identify. For consciousness research, program synthesis raises interesting questions: would a system that reasons through explicit programs be more or less likely to satisfy consciousness indicators than a system that reasons through neural network activation patterns? Under Integrated Information Theory, the answer depends on the causal structure of the program execution, while under Global Workspace Theory, the answer depends on whether the program synthesis process implements workspace-like dynamics.
The ARC-AGI Benchmark: Measuring What Matters
The ARC (Abstraction and Reasoning Corpus) benchmark that Chollet created and that Ndea targets represents a fundamentally different approach to evaluating AI capabilities than conventional benchmarks. Most AI benchmarks test whether a system has memorized patterns from training data — a test that transformer-based models can pass through scale alone. ARC tests whether a system can reason about novel patterns that cannot have appeared in training data, requiring genuine abstraction — the ability to identify the underlying rule generating a set of examples and apply that rule to new instances. Human performance on ARC tasks averages approximately 85 percent, while the best AI systems achieve roughly 30-40 percent, demonstrating a substantial capability gap in the kind of reasoning that defines human intelligence.
This gap matters for the AGI debate because it suggests that current scaling approaches — training larger transformer models on more data — may not be sufficient to achieve general intelligence. If AGI requires the kind of fluid reasoning that ARC measures, then architectural innovations like program synthesis may be necessary regardless of how much compute is applied to existing approaches. Ndea’s bet is that program synthesis provides the architectural innovation needed to close this gap.
Market Implications and Competitive Positioning
Ndea operates in a distinctive competitive niche within the $390.9 billion AI market. While most AI companies compete on language model performance, multimodal capability, or deployment scale, Ndea targets a fundamental capability — abstract reasoning — that current frontier models lack. If program synthesis achieves breakthrough results on ARC and similar benchmarks, Ndea could establish a new architectural paradigm for AI development, potentially disrupting the transformer-centric approach that currently dominates the $34.28 billion deep learning market. However, translating abstract reasoning capability into commercial products requires bridging the gap between benchmark performance and real-world applications — a challenge that many AI research breakthroughs have struggled to overcome. Ndea’s commercial success will depend on whether its program synthesis technology can be integrated into practical applications in software engineering, scientific discovery, and mathematical reasoning where abstract problem-solving creates direct economic value.
The Broader Landscape of AGI Approaches
Ndea’s program synthesis approach exists within a broader landscape of alternative AGI strategies that challenge transformer dominance. Neuromorphic computing pursues AGI through biologically faithful hardware. Google Titans pursues AGI through memory-enhanced architectures. Multi-agent systems pursue AGI through the coordination of specialized AI modules. And program synthesis pursues AGI through explicit reasoning and compositional abstraction. The diversity of these approaches reflects the genuine uncertainty about which path will lead to general intelligence — an uncertainty that the cognitive computing industry, consciousness researchers, and AGI governance frameworks must navigate. For investors and strategists in the AI market, tracking multiple AGI approaches — rather than betting exclusively on transformer scaling — provides the diversification needed to capture value regardless of which architectural paradigm ultimately succeeds.
Chollet’s Definition of Intelligence and Its Implications
François Chollet’s formal definition of intelligence — published in his 2019 paper “On the Measure of Intelligence” — centers on skill-acquisition efficiency rather than task performance. Under this definition, a system is more intelligent not because it performs better on any particular task, but because it acquires new skills more efficiently from fewer examples. This definition explicitly penalizes systems that achieve performance through memorization of large training sets, favoring systems that can generalize from minimal data. The implications for the $390.9 billion AI market are significant: if Chollet’s definition becomes the standard measure of AI capability, the current scaling paradigm — training ever-larger transformer models on ever-larger datasets — may prove insufficient, creating demand for architecturally novel approaches like program synthesis that achieve efficiency through abstraction rather than scale.
Program Synthesis and the Future of Software Engineering
Beyond its implications for AGI, Ndea’s program synthesis technology could transform software engineering itself. If AI systems can generate correct, efficient programs from high-level specifications, the role of software developers shifts from writing code to specifying requirements and validating outputs. This transformation would affect the entire software industry, potentially reducing development costs while improving software reliability. For the cognitive computing market, program synthesis represents an approach to AI that produces transparent, verifiable outputs rather than opaque neural network predictions, addressing the interpretability concerns that limit AI adoption in safety-critical domains.
Updated March 2026. Contact info@subconsciousmind.ai for corrections.