
Who Are You When It Costs You?

Image: Anthropic, the company behind Claude, emphasizes ethical standards and privacy, prioritizing them from the start.

Anthropic was offered $200 million.


All they had to do was say yes.


They said no.


Not because they could afford to walk away. Not because the decision was easy. But because saying yes would have meant becoming something they had promised, publicly and in writing, they would never be. But this is not a story about AI. It is a story about ethics under pressure. About what organisations actually stand for when standing for something costs them. And about why that question matters enormously in every field that works with human beings, including ours.


The Gap Between Saying and Doing


Walk into almost any organisation and you will find values on the wall. Integrity. Collaboration. Courage. Care. They are rehearsed at inductions, pasted into strategy decks, and posted on careers pages.

Then watch what happens when those values cost something real.


A difficult conversation gets avoided. A leader who gets results keeps their job despite the trail of damaged people behind them. A team stays silent because the last person who spoke honestly paid a price for it. A business cuts corners at quarter end and calls it pragmatism.

Values are not what you write on the wall. They are what you do when it costs you something to do it.

The psychologist Chris Argyris called this the gap between espoused theory (what we say we believe) and theory-in-use (what our behaviour actually reveals). Most organisations have excellent espoused theories. Far fewer have built cultures where the theory-in-use matches them.


The gap is rarely the result of bad intentions. It is almost always the result of pressure. And pressure, as every good psychologist knows, reveals character in individuals and in organisations alike.

The question is not whether your organisation has values. Almost all of them do. The question is whether those values are load-bearing: whether they hold when something real is at stake.


The $200 Million Test


In February 2026, Anthropic, the company behind the AI assistant Claude, held an active contract with the US Pentagon worth up to $200 million. The military came back with two demands: remove two ethical safeguards from the agreement.


NO AUTONOMOUS WEAPONS: Anthropic had committed that AI would not make final targeting decisions without a human present. The Pentagon wanted that removed.


NO MASS DOMESTIC SURVEILLANCE: Anthropic had committed that its technology would not be used to monitor ordinary citizens, including their location, browsing, and financial data. The Pentagon wanted that removed too.


Anthropic said no. Negotiations broke down. The Trump administration ordered every US federal agency to stop using their products. The Pentagon classified Anthropic as a national security supply chain risk, a designation normally reserved for foreign adversaries.


Within hours, OpenAI announced its own Pentagon deal. Anthropic's CEO noted publicly that his company had not given what he described as "dictator-style praise" to the administration, something rivals had done. The implication was clear. Playing along had a price. So did not playing along.

Anthropic gave up $200 million rather than cross two lines. That is not a values statement. That is values in action.

You may agree or disagree with the specific positions they took. That is not the point. The point is what happened when the pressure was real, expensive, and politically costly. The values held.

Most organisations never face a test that visible. But every organisation faces tests. They just happen quietly, in meetings nobody writes about, in decisions that never make the news. And most of the time, organisations fail them without ever quite admitting it.


Ethics Is Not a Business Concept. It Is a Human One.


In business psychology, we talk about values alignment. In psychotherapy, we talk about congruence: the degree to which someone's internal world matches their external behaviour. They are the same idea. And the failure to achieve it has the same consequences in both contexts.


When a person is not congruent, when what they say and what they do diverge, we call it a self-concept wound. They know, at some level, that they are not living according to who they say they are. The cognitive dissonance is corrosive. It erodes trust in themselves and in others.


Organisations work exactly the same way: 


  1. When leaders say one thing and do another, people stop believing what is said. Trust in the system collapses quietly, without a single dramatic incident.

  2. When values flex under pressure, people learn what the organisation actually stands for. And they behave accordingly, often in ways leadership does not like but has, without realising it, actively modelled.

  3. When the gap is never named, it becomes invisible. And invisible problems are the hardest ones to fix.

Culture is not what an organisation says. It is the pattern of behaviour that gets repeated, rewarded, and tolerated over time.

The Anthropic story is instructive precisely because it is so unusual. A company that said it stood for something was tested. It held. Most people found that remarkable, and the fact that it felt remarkable tells us something important about how low our collective expectations have become.


The Big Decision Is Just the Visible Part


Here is what makes the Anthropic story genuinely instructive rather than simply admirable. The same values that showed up in the $200 million decision also show up in thousands of small interactions, every day, with no audience watching.


Anthropic published research in June 2025 examining how people use Claude in supportive and therapeutic contexts. The data revealed something that any practitioner in our field will immediately recognise: the AI pushes back. Rarely. But consistently. And always for the same reason: because continuing would not serve the person in front of it.


"Chart depicting 'When Does Claude Push Back?' showing varying rates of pushback across conversation types, including companionship (6.0%), psychotherapy/counseling (4.1%), interpersonal advice (3.3%), and coaching (1.1%), highlighting areas where AI assistance is limited."
"Chart depicting 'When Does Claude Push Back?' showing varying rates of pushback across conversation types, including companionship (6.0%), psychotherapy/counseling (4.1%), interpersonal advice (3.3%), and coaching (1.1%), highlighting areas where AI assistance is limited."

The numbers are not the point. The pattern is. These decisions happen millions of times, in private exchanges, with no one watching any individual conversation. The values show up anyway. Not because someone is checking. Because they are built in rather than bolted on.


This is what we ask of leaders. Not to perform values when observed, but to make values-consistent decisions automatically, in the large moments and in the small ones nobody notices. The Anthropic case shows it is possible. It also shows it is rare. The same research found that when Claude does push back in supportive conversations, it is almost always to protect the person's wellbeing. That design choice matters. It reflects a value. And values, as we know, are revealed under pressure.


A Story Worth Sitting With


A client came to me recently with something that stopped me in my tracks.


They had been using Claude to help them process some difficult feelings. Not as therapy. Just as a thinking space, a place to untangle things when the weight got heavy.


Then Claude told them it could not continue in that role: it could help them think things through, but it was not a substitute for proper therapy, and it encouraged them to seek human support.


The intention behind that message was sound. The impact was not.


This particular client struggles with rejection sensitivity. That redirection did not land as responsible boundary-setting. It landed as abandonment. As "you are too much." As yet another door closing.

"Great," they said. "Now AI won't help me either."

Claude had the right value: protect the vulnerable person, refer to a human. But it could not read the nervous system it was landing in. It could not track the attachment history in the room. It could not adjust its delivery based on what it saw in someone's face or heard in their voice.


A trained practitioner can do all of those things. That is not a small distinction. In many cases, it is the whole distinction.

Ethical guardrails do not exist in a vacuum. They land in human nervous systems, in histories, in attachment patterns. A boundary that protects one person can wound another.

This matters far beyond AI. It is the central challenge of ethics in practice: in leadership, in coaching, in therapy, in any human system. Intention and impact are not the same thing. Values without the relational skill to deliver them well are incomplete. Good intentions, applied clumsily, can still cause harm.


Developing that skill, in individuals, in teams, in organisations, is precisely the work that cannot be automated. It is also, in our experience, the work that most organisations underinvest in.


What Does Your Organisation Do When Values Cost Something?


Every week we work with leadership teams who say the right things. Who have done the values workshops, signed the pledges, printed the posters. And who then watch, in slow motion, as the gap between their stated values and their lived behaviour quietly widens.


The pattern is almost always the same. A decision arrives that is expensive, uncomfortable, or politically awkward. The stated value bends slightly. A justification appears. A small compromise is made. And the gap gets a little wider, invisible to the organisation, entirely visible to the people inside it.


This is not a moral failing. It is a human one. But it has consequences. In trust. In talent. In the quality of decision-making at every level. In what people feel able to say, or not say, in the presence of their leaders.


The research on psychological safety is unambiguous: people mirror what they observe, not what they are told. If the culture rewards performance and punishes honesty, that is the culture people will inhabit regardless of what the values wall says.


QUESTIONS WORTH SITTING WITH


  1. When did your organisation last make a decision that cost something real because it was the right thing to do?

  2. Do your values show up in the small moments: the meetings, the feedback conversations, the decisions nobody is watching?

  3. Is there a gap between the values you espouse and the behaviour you actually reward?

  4. When your leaders deliver difficult messages, is the delivery skilled enough to match the intention?

  5. What would your people say your values actually are, based on what they observe every day?


These are not comfortable questions. They are not meant to be. But organisations that can answer them honestly, and act on what they find, are the ones that build something worth working in.


Values Are a Practice, Not a Statement


An AI company refused $200 million to protect two ethical commitments. The same values that drove that decision show up in millions of small daily interactions, consistently, without anyone watching. And even then, even with good values, clearly held and consistently applied, a well-intentioned boundary caused real harm to a real person.


Three lessons, tightly packed.


First: values are only real when they are tested. Until then they are intentions, not character.

Second: the test is not just the large public decision. It is the thousand small moments that nobody writes about and whether the values hold in those moments too.

Third: values without relational skill are incomplete. The intention and the impact have to travel together. Building the capacity to make that happen, in individuals, in teams, in organisations, is not soft work. It is the hardest work there is.

The organisations that will thrive are not the ones with the best values statements. They are the ones where values are so embedded they show up automatically in the decisions people make when nobody is watching.

That is the work. It does not happen in a one-day workshop. It does not come from a poster. It comes from the slow, consistent, psychologically informed process of building cultures where people know what the organisation actually stands for and trust that it means it.


If that is the work you are trying to do, we would love to be part of it.

Research and Sources

  • The Anthropic / Pentagon story

  • Anthropic's emotional support research (the chart data)

  • Argyris: espoused theory vs theory-in-use

FIVE KEY TAKEAWAYS


  • Values are only real when they cost something. Organisations reveal what they truly stand for when they make difficult decisions that involve financial, political, or reputational sacrifice—not when values are written on posters or mission statements.

  • There is often a gap between stated values and actual behaviour. As Chris Argyris described, organisations frequently have strong espoused values but their theory-in-use—what they actually reward and tolerate—tells a different story.

  • Pressure exposes organisational character. Ethical commitments are easy to maintain when nothing is at stake; the true test comes when leaders face decisions that are expensive, uncomfortable, or politically risky.

  • Ethics must show up in both big decisions and small everyday actions. Culture is shaped not just by headline moments but by the thousands of routine interactions, conversations, and choices that reinforce what behaviour is acceptable.

  • Values without relational skill can still cause harm. Even when organisations or systems act with good intentions and strong ethical guardrails, the way boundaries are communicated and delivered matters because human impact depends on context, relationships, and emotional sensitivity.
