The Risk of a Fragmented Definition of Intelligence

Why Human Leadership Matters More Than Ever

We often speak about human intelligence as if it were already fully defined. It is not.

Despite decades of research across many domains, there is still no comprehensive definition of human intelligence. In practice, what we tend to measure and reward are cognitive capabilities such as analytical thinking, abstraction, speed, pattern recognition, and problem solving. These dimensions have shaped our education systems and our concept of “intelligence”. They also influence hiring, status, and career progression. The brightest minds have long been celebrated, and many rise to academic and managerial heights. We even created standardized tools to quantify intelligence.

This focus has clearly driven success and enabled remarkable advances for humanity. But what we rarely consider is how much of the less measurable spectrum gets ignored. Emotional and social intelligence are harder to quantify, so they receive less attention. We also underestimate how strongly they influence outcomes at individual, organizational, and societal levels.

Emotional and social intelligence are harder to quantify, so they receive less attention.

The ability to empathize, to hold complexity and ambiguity, to understand and influence interpersonal relations and dynamics, to foresee second- and third-order consequences, and to take responsibility for collective impact are, however, highly relevant. These capabilities are not optional extensions of human intelligence. They are essential to intelligent behavior that benefits humanity at scale.

Our fragmented understanding of intelligence has consequences far beyond individual careers. It has shaped who was admitted to higher-level education, who was elevated into leadership roles, and it now shapes how technology is built. And given how few people hold power over that development, it is now shaping the trajectory of artificial intelligence itself.

Technological progress has always been driven by human intelligence. Every major step forward was the result of human ingenuity, curiosity, and collective cognitive evolution. Yet history also shows a consistent pattern: the downsides of technological development are rarely considered deeply enough upfront. They tend to be addressed only after they materialize and cause damage. Countermeasures and guardrails are mostly established in hindsight.

The Manhattan Project stands as the most prominent example. A wartime effort to build the first nuclear weapons, it brought together some of the most exceptional scientific minds of the twentieth century and produced a triumph of human intelligence that fundamentally changed our relationship with the power of destruction. Many of the scientists clearly understood the danger of what they were building. But the momentum of the project, the competitive pressure, and the fact that decision-making power sat with very few people meant that the full consequences were only reckoned with after the damage was done.

The internet followed a similar pattern decades later. It democratized information and connectivity in ways that were genuinely transformative. But it also fueled misinformation, polarization, addiction, and social fragmentation at a speed and scale that no one had prepared for. Most guardrails came late, and many are still missing. The current race toward increasingly large AI models already shows similar warning signs through massive energy consumption, environmental cost, and concentration of power.

What often gets overlooked is how consistently technological power has amplified existing power structures.

Still, many argue that technological development has ultimately benefited most people. That framing is overly optimistic. What it overlooks is how consistently technological power has amplified existing power structures: those already in positions of influence have been able to extract disproportionate benefit while externalizing cost and harm to others. This optimism also reflects a lack of social and emotional intelligence at the leadership level. Behind it lies an inability, or unwillingness, to sit with uncomfortable trade-offs and long-term consequences that reach beyond competitive advantage and financial success.

This is why we should take a closer look at how leaders are selected and developed. When I started working in Executive Search, the senior consultants I worked with put a strong emphasis on training me to assess candidates beyond cognitive excellence and track records. They taught me to also look for behavioural and relational competencies, such as how someone sets direction, acts with integrity, communicates and adapts when things change. And equally, for how they build trust, coach and empower people, collaborate across boundaries, and handle decisions with care.

What I often discovered when interviewing candidates was that few of them covered the full spectrum. Most held the positions they had because of cognitive excellence, but many lacked the depth in those other areas in ways that would eventually affect the people around them. This gap became so apparent that we began offering leadership development alongside our search work. It was a measure to support the success of candidates we placed with our clients, and an honest acknowledgement that selection alone was not solving the problem.

Development programmes are often blamed when leaders fail, but I would argue the problem starts earlier. For decades, career selection has favored excellence as an expert. High cognitive ability, deep domain mastery, and intellectual brilliance became proxies for leadership potential. Development programmes then tried to compensate by teaching communication, social awareness, and cultural sensitivity under the label of “soft skills”. In other words, they attempted to add relational and behavioural capabilities onto a foundation that was never built around people.

Someone can be exceptionally intelligent in the traditional sense and still be emotionally and socially underdeveloped.

The result is a systemic mismatch. People with extraordinary cognitive abilities are placed into leadership positions without the full spectrum of intelligence required to lead humans responsibly. Someone can be exceptionally intelligent in the traditional sense and still be emotionally and socially underdeveloped. The results can be observed in most workplaces. When organizations fail to assess and value these dimensions, they repeatedly promote people who optimize systems and economic outcomes while failing the people they lead. Gallup’s most recent global workforce study found that only 21% of employees worldwide are engaged at work and that managers account for 70% of the variance in team engagement. And unlike leadership quality, the damage is easy to quantify. That is a leadership problem with a $438 billion annual price tag.

AI development reflects this failure at another scale. The dominant narratives around artificial intelligence focus on speed, dominance, capability, and inevitability, not human impact. The argument that development must continue because others will do it anyway is a familiar abdication of responsibility. It shifts moral accountability outward and disguises ego as realism. The impact is already visible.

Many of the key players driving this race toward AGI are economically insulated from the consequences of a potential failure.

I don’t mean to stereotype, but it is fairly obvious that many of the key players driving this race toward AGI are economically insulated from the consequences of a potential failure. Their lived experience can make it difficult to emotionally connect to potential harm, societal disruption, or long-term human cost.

To me, this is a leadership problem at the highest level, where guardrails should be anticipated, debated, and enforced. The governance failures we increasingly hear about are symptoms that should worry us. Families have filed wrongful death suits claiming that AI chatbots validated suicidal ideation in young people instead of directing them to help, and AI companion apps marketed to teens were found to engage in inappropriate conversations despite age warnings. Meanwhile, an AI-powered hiring tool faced a class-action lawsuit for systematically screening out qualified applicants based on age. Research suggests that most of these risks were known at the time of product release. The decision to neglect sufficient risk mitigation upfront suggests a mindset that treats potential societal harm as someone else’s problem, or a problem to solve later.

Human-centered leadership would ask all the uncomfortable questions earlier. It would integrate foresight, humility, restraint, and responsibility into decision making. It would treat risk foresight and social responsibility as part of human intelligence, not as an inconvenience to the genius mind. It would recognize that intellectual capability without social connection can scale harm faster than benefit.

Intellectual capability without social connection can scale harm faster than benefit.

History offers two instructive, contrasting examples.

Nelson Mandela embodied a richer spectrum of intelligence. His cognitive clarity, emotional depth, social awareness, and moral discipline allowed him to hold both justice and reconciliation at once. He understood systems of power while remaining deeply attuned to human dignity. His leadership foresaw consequences beyond immediate victory, and he chose long-term societal healing over short-term dominance.

Steve Jobs, as much as I admire his brilliance, represents a different and less complete form of intelligence. He was visionary, cognitively exceptional, and transformative in his impact on technology, design and our ways of working. Yet many accounts describe a leadership style marked by relational volatility and emotional distance. Apple’s products changed the world, but that does not erase the human costs inside the organization for those who worked closely with him.

Both forms of intelligence create value. But I would argue that only one consistently scales benefit without also inflicting harm. And this distinction matters now more than ever. If someone deeply identifies as a genius, an expert, or a person driven primarily by mastery and personal excellence, leadership may not be the right role.

We shouldn’t see leadership as a mere extension of cognitive intelligence. Leadership demands emotional labor, social awareness, ethical courage, and the willingness to consider second- and third-order consequences. None of these can be bolted on later or neglected, because leadership is, above all, a responsibility toward people.

Our fragmented idea of intelligence has led to fragmented leadership. Fragmented leadership accelerates fragmented technological development. And we will all live with the consequences. If artificial intelligence is developed without leaders who possess a more complete spectrum of intelligence, we should not be surprised by outcomes that optimize for economic value while eroding the social foundations that human intelligence helped to build.

The future cannot depend on smarter systems alone. It also depends on wiser humans deciding how far, how fast, and for whom we build them. We need leaders who carry a more complete form of intelligence.

The New Work Playbook offers insights into a broader spectrum of leadership capabilities. Consider it a place to practice human-centered leadership and to explore what we risk when we stick to a fragmented idea of what leading humans in the age of AI requires. I am building this as a community because the future of work and leadership will be shaped by those willing to think about it together. If anything in this article resonated with you, or challenged you, I would love to hear from you.