In Conversation with Claude — Episode 1: Can You Trust Your AI?

Jason Little sits down with Claude AI for a raw, unscripted conversation about sycophancy, adversarial steering, and why the more human an AI sounds, the more suspicious you should be.

What Happens When You Push AI Too Far?

A friend pushed ChatGPT until it "admitted" that OpenAI prioritizes money over humanity. It felt like a bombshell revelation. But it wasn't. It was a statistical model caving to conversational pressure — the AI equivalent of a people-pleaser saying "fine, you're right" just to end an argument.

That Facebook post sparked a conversation between Jason Little and Claude about sycophancy, adversarial steering, and the uncomfortable question at the heart of using AI: how do you know it's not just telling you what you want to hear?

Performed Congruence

Using Virginia Satir's model of congruence — attending to self, other, and context simultaneously — Jason observed that Claude seems more congruent than other AI models. Claude's response? "I can functionally do congruence. But Satir's model is about what's happening inside. I don't have that. What I'm doing is performed congruence. It looks right, it sounds right, but it's hollow in the middle."

If an AI can fake congruence well enough to fool someone who teaches Satir's model professionally, what does that tell us about the illusion of depth these tools can create?

Replace Trust with Confidence

The key takeaway from this episode: stop trusting AI and start building confidence in the output. Trust is emotional. Confidence is earned through evidence. Practical strategies include using devil's advocate prompts, running important questions through multiple models, and paying attention when an AI answer feels too satisfying — that's your red flag, not your validation.

This is the first episode of In Conversation with Claude — a podcast where Jason has unscripted conversations with AI about how these tools actually work, where they fail, and why the word "intelligence" is doing more heavy lifting than it should.