Thought Leadership
Philosophy & Frameworks
Field notes from someone thinking carefully about intelligence, systems, and the limits of what we know.
How I Think About Intelligence, Systems, and Tools
I'm interested in how intelligence emerges, not just how it performs. My work in digital marketing and AI is informed by ideas from biology, cognition, ecology, and complexity science—not because these fields offer easy answers, but because they help me cultivate better questions. Like tending a garden, good thinking requires patience, attention to the right conditions, and a willingness to dig beneath the surface.
I don't treat AI as magic, nor as a threat. I treat it as a system with constraints—constraints that are often invisible to people who interact with it through polished interfaces and confident outputs.
Intelligence—human, artificial, organizational, or ecological—is contextual, emergent, and bounded by perspective.
This page exists to show how I think, not just what I do. It's the philosophical counterweight to the practical, execution-oriented work that fills most of my days.
Emergent Intelligence & Strange Loops
Informed by: Gödel, Escher, Bach by Douglas Hofstadter
Intelligence is not always located in an individual unit. Simple agents, when connected by feedback loops, can produce complex behavior that no single component could generate alone.
The classic example: a single ant is not intelligent. An ant colony is. The intelligence exists in the pattern of interactions, not in any individual node.
This pattern repeats everywhere once you start looking: in organizations, in markets, in search ecosystems, in AI systems trained on massive collective human output. The behavior you observe at the system level often can't be reduced to what's happening in its parts.
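To see how little machinery emergence requires, consider a toy example from outside these books: Conway's Game of Life. Every cell follows the same tiny local rule, yet coherent structures appear at the system level. This minimal sketch (plain Python with NumPy; the grid size and pattern are arbitrary choices) prints a "glider" traveling diagonally across the grid, a behavior no individual cell's rule mentions.

```python
import numpy as np

def step(grid):
    """One Game of Life update: each cell applies the same local rule,
    looking only at its eight immediate neighbors."""
    # Count live neighbors by summing the eight shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "glider": five live cells whose pattern travels across the grid,
# even though no individual cell moves or "knows" about motion.
grid = np.zeros((12, 12), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for generation in range(8):
    print(f"gen {generation}:")
    print("\n".join("".join("#" if c else "." for c in row) for row in grid))
    grid = step(grid)
```

The motion you see belongs to the pattern, not to any cell. That is the whole point: the interesting behavior lives at a level of description the components don't have.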
Hofstadter's concept of strange loops adds another layer. These are systems that refer back to themselves—recursive patterns where different levels of abstraction fold into each other. Meaning, identity, and intelligence can arise from this self-reference. It's not mystical; it's structural.
What this means for marketing & AI:
AI doesn't "understand" in a human sense. It participates in a loop where human language, behavior, and incentives feed back into the system. Search visibility, content performance, and AI outputs are emergent properties of these interactions—not levers you pull once and forget.
Umwelt: Perspective, Not Omniscience
Informed by: An Immense World by Ed Yong
The German word umwelt describes the perceptual world unique to each organism—the slice of reality it can actually sense and respond to. Every creature experiences a different version of the world based on its sensory inputs.
Humans have one umwelt. Bats navigating by echolocation have another. Bees seeing ultraviolet patterns on flowers have another. Plants detecting chemical signals through root networks have yet another. No organism perceives "the world as it is." Each perceives the world as its biology allows.
AI is often treated as if it has escaped this limitation—as if it sees the world more completely than we do. In reality, AI has an extremely narrow umwelt. It perceives the world almost entirely through language. No chemical senses. No physical embodiment. No lived experience. No felt sense of time or mortality.
Reframing AI:
This reframes AI as powerful within its sensory bounds—and blind outside of them. It is not "seeing the truth of the world." It is pattern-matching within a constrained representational system. Bad assumptions come from mistaking coverage for comprehension.
Perception as Controlled Hallucination
Informed by: Anil Seth's work on consciousness & predictive processing
Here's a disorienting idea from contemporary neuroscience: what you experience as "reality" is actually your brain's best guess. Perception isn't a window onto the world—it's a prediction, continuously updated by sensory input. Neuroscientist Anil Seth calls this a "controlled hallucination."
Your brain is constantly running a model of what it expects to encounter, then adjusting that model when predictions don't match incoming signals. The redness of an apple, the solidity of a table, the continuity of your own self—these feel like direct perceptions, but they're constructions. Useful constructions, refined by evolution. But constructions nonetheless.
This has profound implications for understanding AI. When a neural network "hallucinates"—generating plausible but false information—we treat it as a bug. But hallucination, in the neuroscience sense, is how all minds work. The question isn't whether a system hallucinates. The question is whether its hallucinations are well-calibrated to reality.
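In machine learning, "well-calibrated" has a concrete meaning: when a system says it is 70% confident, it should be right about 70% of the time. Here's a minimal sketch of that check, a standard expected-calibration-error computation; the confidence values and outcomes are made-up numbers for illustration.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Compare a model's stated confidence with how often it is actually
    right. Hallucination becomes a practical problem when the two diverge."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # Gap between average confidence and actual accuracy in this bin,
            # weighted by the bin's share of all samples.
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy data: hypothetical answers, each with a stated confidence
# and a flag for whether the answer turned out to be correct.
conf = [0.95, 0.90, 0.90, 0.80, 0.60, 0.55, 0.50, 0.30]
right = [1, 1, 0, 1, 1, 0, 0, 0]
print(f"ECE: {expected_calibration_error(conf, right):.3f}")  # lower is better
```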
Google's Deep Dream visualizations offer a glimpse through a different door. When we ask a neural network to amplify the patterns it "sees," we get recursive fractals of dog faces, eyes, and architectural forms blooming from noise. It's unsettling because it reveals the alien umwelt of a pattern-matching system—one that finds signal everywhere, unconstrained by the priors that human biology imposes.
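For the curious, the core DeepDream move is simple to sketch: freeze a trained network, then run gradient ascent on the image itself so that a chosen layer's response grows. The condensed version below assumes PyTorch and a pretrained VGG16; the layer index, step count, and learning rate are arbitrary choices, and the real technique adds refinements like multi-scale "octaves" and jitter.

```python
import torch
import torchvision.models as models

# Load a pretrained classifier and freeze it: we optimize the image, not the weights.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)

layer_index = 20  # which convolutional layer to amplify (a tunable choice)
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    activations = image
    for i, layer in enumerate(model):
        activations = layer(activations)
        if i == layer_index:
            break
    # Gradient ascent on the layer's response: make the network
    # "see more" of whatever this layer already responds to.
    loss = -activations.norm()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixel values in a displayable range
```

The dog faces and eyes aren't in the noise. They're the network's priors, amplified until they become visible.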
The doors of perception:
Aldous Huxley borrowed William Blake's phrase to describe how consciousness filters reality. We perceive not the world itself, but what our nervous system allows through. Every intelligence—biological or artificial—operates behind its own doors. Understanding those filters, their constraints and their affordances, is essential to working effectively with any system that processes information. Including ourselves.
Non-Human Intelligence & Humility
Informed by: The Light Eaters by Zoë Schlanger
Plant intelligence challenges everything we assume about cognition. Plants solve problems. They communicate through chemical signals. They remember. They make decisions about resource allocation under uncertainty. And they do all of this without brains, without speed, without anything that looks like human thought.
Intelligence can be slow. Intelligence can be decentralized. Intelligence can be chemical, relational, or environmental. The pace and form of human cognition is not the only template. As someone who spends time in the garden, I've come to appreciate that growth happens on its own timeline—you can create the right conditions, but you can't force the bloom.
This cuts both ways when thinking about AI. Just because something doesn't think like us doesn't mean it isn't intelligent. But also: just because something resembles us—and language is a powerful resemblance—doesn't mean it understands like we do.
The humility this requires:
Respect for different forms of intelligence. Skepticism toward anthropomorphizing machines. Recognition that fluent language production and genuine understanding are not the same thing.
Context Shapes Cognition
On the importance of set and setting
There's a concept from consciousness research that applies directly to AI: set and setting. The output of any cognitive process—human or machine—depends heavily on the context in which it operates. Mindset. Environment. Framing. Expectations. These aren't peripheral factors; they're constitutive.
Feed an AI the same prompt with different system instructions, and you get different outputs. Ask a human the same question in different contexts, and you get different answers. The boundary between "the system" and "the context" is blurrier than we typically acknowledge.
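That claim is easy to demonstrate. Here's a minimal sketch using the OpenAI Python client; the model name, prompts, and business framing are illustrative, not a recommendation.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment
question = "Should we expand into a new market this quarter?"

for system in (
    "You are a cautious risk analyst. Emphasize downside scenarios.",
    "You are a growth strategist. Emphasize upside and speed.",
):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},    # the "setting"
            {"role": "user", "content": question},    # identical input
        ],
    )
    print(f"--- {system}\n{reply.choices[0].message.content}\n")
```

Same question, different container, different answer.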
This is why prompt engineering matters. Why organizational culture matters. Why the framing of a marketing message matters. Intelligence doesn't operate in a vacuum—it's always situated, always responsive to conditions that extend beyond the immediate input.
Default modes of operation—the patterns we fall into when we're not paying attention—shape outcomes more than we realize. Breaking out of those defaults requires intentionality. Sometimes it requires changing the container, not just the contents.
Practical implication:
If you want different results from a system—AI, team, or yourself—examine the context first. The most leveraged interventions often aren't about working harder within existing constraints. They're about recognizing that the constraints themselves are variables.
The Expert Halo & the Risk of Over-Trust
A concept from backcountry safety and risk management
The "expert halo" is the halo effect applied to expertise—when perceived competence in one domain spills over into inappropriate deference in others. People stop questioning. They defer judgment too broadly. Credentials in one area become assumed authority in all areas.
In backcountry and avalanche safety communities, this is a known risk factor. Many tragedies come not from lack of information, but from social dynamics: deferring to the most confident or credentialed person in the group, failing to voice concerns, not questioning assumptions in unfamiliar conditions.
The parallel to AI is uncomfortable but important. AI systems are excellent at some tasks. That excellence creates a false sense of universal authority. The fluency of the output makes it easy to stop cross-checking. The risk is not primarily that AI is wrong—it's that people stop thinking critically when interacting with something that sounds so confident.
Core takeaway:
AI should be treated like a specialized tool, not a general authority. The same vigilance we'd apply to any confident expert in an unfamiliar domain should apply to AI outputs—especially when the stakes are high.
What This Means for My Work
These aren't abstract ideas for me. They shape how I approach clients, build strategies, and evaluate new technology.
I design marketing systems with the assumption that no system is neutral, no model is complete, and no tool is context-free. Every optimization has tradeoffs. Every metric is a proxy for something harder to measure. Every "insight" is bounded by the data and assumptions that produced it.
I favor:
- Human-in-the-loop processes
- Cross-checking signals from multiple sources
- Organic growth over quick fixes—deep roots yield strong results
- Transparency about uncertainty
I'm skeptical of:
- "Set it and forget it" automation
- Claims of complete objectivity
- Tool worship and hype cycles
- Optimizing metrics that don't matter
These books aren't endorsements of a single worldview—they're tools that helped me ask better questions.
Gödel, Escher, Bach: An Eternal Golden Braid
Douglas Hofstadter
On strange loops, self-reference, and how meaning emerges from formal systems. Dense but rewarding—changed how I think about minds and machines.
An Immense World: How Animal Senses Reveal the Hidden Realms Around Us
Ed Yong
A tour of animal umwelten that makes human-centric assumptions about perception feel provincial. Essential for thinking about what AI can and cannot perceive.
The Light Eaters: How the Unseen World of Plant Intelligence Offers a New Understanding of Life on Earth
Zoë Schlanger
Challenges every assumption about what cognition requires. Plants as problem-solvers, communicators, and decision-makers—without brains or speed.
The Doors of Perception
Aldous Huxley
A meditation on the reducing valve of consciousness and what happens when ordinary filters are suspended. More relevant to AI than it might first appear.
Being You: A New Science of Consciousness
Anil Seth
The neuroscience of how perception constructs reality. "Controlled hallucination" as a framework for understanding both human minds and artificial ones.
More titles to come as the reading continues...
The goal of this page is not to persuade you that I'm right. It's to show that I'm thinking carefully—and that the thinking continues.
Continue the Conversation