The Competition Trap
Why the dynamic that ruins a workplace is now running the AI race
I have worked at several different tech companies over the years, from small startups to massive corporations. Some were great experiences and others were absolutely miserable. For a long time I couldn’t put my finger on what made the miserable ones so miserable. It wasn’t the work itself, or the pay, or the colleagues. Eventually I realized that the ingredient that always spoiled the soup was unhealthy competitiveness.
At one company, there was an inner circle that hoarded all the information. New employees, women especially, were treated as outsiders from day one. The most dominant team members made the less dominant ones do the work they didn’t want to do, and the system protected this behaviour in all its forms. When we missed a deadline because my boss hadn’t delivered his share of a task, I was told it was my fault for not being assertive enough with him. The problem was reframed as my personal failure rather than a basic failure of collaboration or his personal responsibility. I remember feeling like I was losing my mind. I kept asking myself if I was crazy or if this was just wrong, and the answer was both. It was wrong, but in that system calling it out made me the problem.
At another company, favouritism and unfair treatment were constant and everyone was always on edge. Team members and managers would switch to their native language in meetings, right in front of people who did not speak it, just to hide information. The common working language was English, but power meant being able to exclude people from conversations happening in real time. Everyone developed a habit of withholding information and working defensively instead of being open and collaborative. I watched generous coworkers become guarded and calculating. The environment kept everyone in fight-or-flight mode.
When the game is zero-sum, people start playing that way. Throwing someone under the bus or building alliances to exclude others can become a survival tactic. I spent years trying to understand why intelligent people with good intentions would act this way, and the only answer I found was fear. When your identity depends on being the smartest person in the room, collaboration can feel like a threat. It takes trust, humility, and the willingness to learn from others, but in competitive environments, those things start looking like weaknesses.
The companies where I was happiest worked differently. Success was always a collective effort and information was shared freely because helping a teammate was helping yourself. These companies didn’t have better people, but they had better incentives.
I thought I had left the toxic dynamics behind when I joined these healthier companies. But when I started paying attention to how the world’s most powerful nations are handling the development of AI, I felt like I was watching a performance I had already seen. The same gaslighting and same zero-sum thinking, now on a global stage. The US and China are locked in what people call an AI arms race. But what are they racing toward, and what do they fear losing?
The stakes, at least according to politicians and tech leaders, are enormous. They believe whoever leads in AI will dominate the global economy and military for decades. These fears drive massive investment, with billions going into chips and research. Both sides hesitate to impose safety regulations because any delay could let the other pull ahead. Nobody seems to know what the finish line looks like, but the race is driven by the fear of not winning.
The situation resembles the Prisoner’s Dilemma, a framework from game theory that explains why cooperation can fail even when it would help everyone. Two parties each make a choice: cooperate with each other or pursue their own advantage. If both cooperate, they both get a good outcome. If both defect, they both get a bad one. If one defects while the other cooperates, the defector wins big and the cooperator gets the worst outcome.
Each side knows that defection gives a better individual outcome no matter what the other does. But if both act on it, they both lose. The sad part is that they can also see that cooperation would have helped them both, but they can’t trust that the other party will cooperate. This is a structure that punishes cooperation and rewards defection even when everyone would be better off striving for the good outcome.
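That logic can be made concrete with a small sketch. The payoff numbers below are illustrative, not taken from any real analysis; any values with the same ordering produce the same result.

```python
# Illustrative Prisoner's Dilemma payoffs (higher is better for that player).
# The numbers are hypothetical; only their ordering matters.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # both restrain themselves: good outcome for both
    ("cooperate", "defect"):    (0, 5),  # the cooperator gets the worst outcome
    ("defect",    "cooperate"): (5, 0),  # the defector wins big
    ("defect",    "defect"):    (1, 1),  # mutual defection: bad for both
}

def best_response(their_move):
    """Return the move that maximizes my payoff, given the other side's move."""
    return max(["cooperate", "defect"],
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defection is the best reply no matter what the other side does...
print(best_response("cooperate"))  # -> defect
print(best_response("defect"))     # -> defect

# ...yet mutual defection (1, 1) leaves both worse off than mutual cooperation (3, 3).
```

This is exactly the trap: defection is individually rational against either move, so both sides land on the outcome neither of them wants.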
Both countries are chasing AGI, but they talk about it so differently that each side has convinced itself the other must be lying. American leaders obsess over superintelligence and existential breakthroughs. Chinese policy documents bury AGI research under endless talk of optimizing factories, agriculture, and infrastructure. So the US assumes China is hiding its real weapon-building agenda, while China assumes the US is so fixated on sci-fi fantasies about digital gods that it is missing the actual economic war.
I recognize this dynamic. It’s the same confusion I felt when people would switch languages in meetings. I had no idea what they were saying, but the exclusion felt like proof they were hiding something. I’d get trapped in my own head, coming up with stories about their intentions that would make me respond to them differently. My defensiveness would then confirm their suspicions. Soon we were caught in a spiral of mutual distrust that had nothing to do with what anyone actually wanted or said.
That’s what’s happening between nations right now. Both are building the same technology and both are terrified of falling behind, but instead of talking about their actual goals, they’re trapped in stories they’re telling themselves about each other. The US tried to slow things down by blocking China’s access to advanced chips, but it didn’t work. The restrictions just made both sides more paranoid and more determined to win at any cost.
There is one critical difference between the Prisoner’s Dilemma and the AI race. In the classic dilemma scenario, mutual defection leads to a bad outcome for both, but they survive to play another round. With the AI arms race, there may be no next round. If both sides move forward without safety, we could lose control of what we create, and that’s the story with no happy ending.
In the tech world, a catastrophic failure is just a post-mortem and an archived Slack channel. The project gets canceled, the survivors find new jobs, and life goes on. But in a global arms race there’s no exit strategy. We are all employees of the same system, and it is the only one we have.
The best teams I knew escaped this cycle because someone in power chose to share what they could have kept. They changed the incentives and made openness safe, allowing everyone to breathe again.
We are not doomed by the technology, we are just stuck in a bad meeting. The real work of the next decade is finding the courage to walk out, take a breath, and remember what it feels like to build something together. The problem is that everyone is still sitting at the table, convinced that standing up first means getting fired. Someone will have to stand up first.

If the competing parties decide to cooperate, and that means making their fears known, the dilemma can be resolved. As long as intentions stay under wraps, suspicion and fear remain, and we stay stuck in a toxic competition.