
Are fears of technological unemployment misdirected?

Sep. 19, 2024. 9 mins. read

As AI disrupts industries, workers fear job losses. At the GAIN summit, experts suggest reskilling and adaptive corporate cultures as essential for survival in an automated future.

Credit: Mindplex

The anxiety is very real. Globally, around 70% of the younger generation are worried about the impact AI will have on their job security.

These remarks were made by Ana Kreacic, Chief Knowledge Officer of the management consulting firm Oliver Wyman, on the second day of the GAIN summit, which I am attending in Riyadh. (Click here for a recording of the live stream of all the Day 2 sessions from the main GAIN summit auditorium.)

Kreacic was part of an absorbing session entitled “The Future of Work: AI’s Role in Reskilling and Innovation”.

Kreacic went on to share some more statistics. In Saudi Arabia, it is 69% of the younger generation who are worried about the impact of AI on their job security. Evidently, that figure is close to the global average. However, there’s a more significant difference when comparing the proportion of older workers who are concerned about the impact of AI on job security. Globally, that figure is 40%, since (according to Oliver Wyman), “senior employees may feel more secure in their careers because they believe AI will have less impact on higher-level employment”. But the same figure in Saudi Arabia is 59%. In other words, apprehension about job security is more widespread in the Saudi workforce.

However, the panellists were keen to dispel that apprehension. With appropriate actions, there would, they said, be plenty of opportunities for people to continue having good jobs. That’s despite an increasing adoption of AI within companies. Yes, jobs will change. But employees will be able to reskill, so that their careers can flourish.

According to these panellists, it is important to recognise that AI can assist the process of reskilling. Kreacic explained the ways in which AI is enabling personalized training. Rather than employees having to attend standard courses that only loosely match their individual requirements, AIs can provide training materials uniquely tailored (“hyper-targeted”) to each employee. Employees find these courses much more enjoyable, with high rates of knowledge retention long after the course has concluded. Moreover, employees will be able to tap into online training programs to acquire skills that aren’t core to their current roles, but which will be important in the positions into which they hope to move in the near future.
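
To make the “hyper-targeted” idea a little more concrete, here is a minimal sketch – my own illustration, not anything demonstrated at the summit – of how personalized training content might be requested from a language model. The EmployeeProfile fields and the complete() helper are hypothetical placeholders for whatever HR data and model an organisation actually uses.

```python
# A minimal sketch of "hyper-targeted" training content generation.
# EmployeeProfile and complete() are hypothetical placeholders, not any
# particular vendor's HR schema or model API.

from dataclasses import dataclass, field
from typing import List


@dataclass
class EmployeeProfile:
    name: str
    current_role: str
    target_role: str
    skill_gaps: List[str] = field(default_factory=list)


def build_training_prompt(profile: EmployeeProfile) -> str:
    """Turn one employee's profile into a request for a personalized lesson plan."""
    gaps = ", ".join(profile.skill_gaps) or "none identified"
    return (
        f"Create a four-week training plan for {profile.name}, "
        f"currently a {profile.current_role} who aims to become a {profile.target_role}. "
        f"Focus on these skill gaps: {gaps}. "
        "Use short weekly modules, each with one practical exercise."
    )


def complete(prompt: str) -> str:
    """Hypothetical stand-in: connect this to whatever language model you use."""
    raise NotImplementedError


if __name__ == "__main__":
    profile = EmployeeProfile(
        name="A. Employee",
        current_role="financial analyst",
        target_role="data analyst",
        skill_gaps=["SQL", "data visualisation"],
    )
    print(build_training_prompt(profile))
```

The point of the sketch is simply that the prompt is assembled per employee, which is what makes the resulting course material individually tailored rather than one-size-fits-all.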

Credit: David Wood

Another panellist, Abdulrahman Alhaqbani, a general manager at leading Saudi information services company stc, described how learning in the real world often involves a relationship between two humans – a mentor and a mentee. However, AI can assist here too, by identifying and recommending good mentor-mentee pairs, in a way similar to how various AI systems suggest other kinds of human relationships.
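
None of the panellists spelled out how such matching works under the hood, but the core idea can be sketched as a simple scoring exercise: pair each mentee with the mentor whose skills best cover the mentee’s learning goals. The skill sets and the scoring rule below are illustrative assumptions, not a description of stc’s system.

```python
# Illustrative sketch of mentor-mentee matching as skill-overlap scoring.
# The skill sets and the scoring rule are assumptions for illustration only.

from typing import Dict, Set


def match_score(mentor_skills: Set[str], mentee_goals: Set[str]) -> float:
    """Fraction of the mentee's learning goals covered by the mentor's skills."""
    if not mentee_goals:
        return 0.0
    return len(mentor_skills & mentee_goals) / len(mentee_goals)


def recommend_mentor(mentors: Dict[str, Set[str]], mentee_goals: Set[str]) -> str:
    """Return the mentor whose skills best cover the mentee's goals."""
    return max(mentors, key=lambda name: match_score(mentors[name], mentee_goals))


if __name__ == "__main__":
    mentors = {
        "Mentor A": {"machine learning", "public speaking", "project management"},
        "Mentor B": {"cloud architecture", "negotiation"},
    }
    mentee_goals = {"machine learning", "project management"}
    print(recommend_mentor(mentors, mentee_goals))  # -> "Mentor A"
```

Real systems would of course add further constraints, such as availability, seniority, and personality fit, but the matching principle is the same.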

Nadeem Mazen, CEO of the full-service creative agency Nimblebot, foresaw AI enabling fast career progression of a different kind: a small group of people – perhaps just 4 or 10 – with a new idea, some ambition, and some curiosity, and with the help of AI, “could take on enormous incumbent brands”. Mazen said that we are entering “the heyday of the small firm”.

Najwa Alghamdi, Analytics Innovation Director at stc, spoke about “injecting into the current workforce digital co-workers” that handle the routine aspects of work tasks, resulting in huge boosts in productivity and efficiency. She said that employees were initially wary of these digital co-workers, but once they saw how well they worked in practice, the employees soon accepted the concept. Indeed, they started to propose additional parts of their existing work that could be handled by new co-created digital co-workers. One reason for the acceptance, Alghamdi emphasised, was that the co-workers had been designed to adhere to the principles of explainable AI.

Ramez El-Serafy, CEO of Flat6Labs, a leading seed and early-stage venture capital firm operating in the Middle East and North Africa, described how they offer a learning platform to young founders and entrepreneurs. When they started using AI to create the content for these training programmes, they found the process much more efficient and the content more readily adaptable. By using AI, the content of the courses can be updated as often as once each week.

Maan Al-Mulla, Director of Digital Transformation at Saudi Aramco, described the digital upskilling programmes that his team supervises. For example, the company has launched a “citizen development programme that enables non-technical employees to develop their own applications and their own solutions using low-code and no-code platforms that are powered by AI”. As a result, the Aramco workforce is “more adaptive to any work change”. In summary, by embracing digital solutions and AI, the company achieves a smoother and faster transition.

A transition to what? Speakers on this panel, as well as on other panels throughout the event, seemed to share an assessment of the kinds of tasks which cannot be handled by AI:

  • Creativity
  • Critical thinking
  • Emotional intelligence
  • Strategic planning
  • Ensuring customer satisfaction.

Accordingly, worries about employees being pushed out of the workforce by increasing automation should be resisted. Instead of being treated as a threat, automation should be wisely embraced, and will result in lots of innovative work experiences.

Right?

But my analysis now contains two twists. The first of these twists was raised by panellists throughout the day, but the second came up less often in public discussions. Let’s take these twists one at a time.

Don’t fear automation; fear a stagnant culture

One problem with learning new types of skill is that we go back to being a beginner, and we are prone to making mistakes. For a while, we may become less productive, and our bosses may see us as performing less well than in the past. As for ourselves, we may be unsure about how our identity is changing: we used to be such-and-such a kind of employee, but now we are becoming something quite different. We may feel awkward and unsettled.

In other words, as well as the technical aspects of a new skill, needed for a change in our work, there are human aspects. Without strong emotional intelligence, we may shrink from the change. Without supportive management, we may feel pressured to give up the transition. Without appropriate metrics for our contributions in the workplace, the pressure to resist the change will only grow.

That’s why corporate culture plays a large part in whether reskilling is successful. Or, if we are unlucky enough to have unsympathetic bosses or colleagues at work, we need to find emotional support from other parts of our network of relationships – such as family, friends, or mentors.

On these points, Ana Kreacic of Oliver Wyman had a positive message to share: If a company can create the right culture – where it’s OK for people to test things out, spend some time learning, and fail several times in the process – then that company will drive innovation, with employees being willing to experiment.

This is a subject I often address in my own presentations to clients, where I warn about failures, not just of individual career transition, but of companies that are unable to respond to disruptive trends, or indeed of whole nations whose prior culture, processes, and infrastructure are unsuited to changed global conditions. For example, here’s one of the slides I sometimes use.

Credit: David Wood

The takeaway at this point is that employees should stop worrying about AI and automation, but should worry instead about whether the culture in which they operate will be conducive to the kinds of adaptations that AI and automation will necessitate.

But as I suggested above, that’s not yet the end of this analysis. There’s one more twist.

Don’t fear automation; fear bad politics

I can’t remember which panellist said it – probably in a session in one of the breakout rooms around the venue – but it was an important point. IT workers will soon become increasingly unionised.

Why so? Because, despite the positive thoughts offered above, IT workers will soon find their entire jobs under increasing pressure from automation.

That’s not from today’s automation – the “AI Now” part of the theme of the GAIN summit – but from the AI of a few years in the future – “AI Next”.

Look again at the kinds of tasks listed above, which supposedly cannot be handled by AI. In each case, there is already evidence that AI is well on the path to performing these tasks just as well as humans:

  • Creativity is often displayed by generative AI, although at present it needs good prompts from human operators
  • Critical thinking can arise in large language models in response to instructions to “think step by step” (a technique sketched in the code after this list), and is likely to be a standard feature in forthcoming AI systems
  • Artificial emotional intelligence has a vast field of research all to itself – often called “affective computing”
  • Strategic planning is shown by the ability of various AIs to outplay humans in increasingly complex games of strategy
  • Customer satisfaction has many aspects to it, and arises by a combination of all the above traits.
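
As an aside on the “think step by step” item above: the technique is largely a matter of adding that instruction to the prompt before it is sent to a model. The sketch below shows only the prompt construction; send_to_model() is a hypothetical placeholder rather than any specific vendor’s API.

```python
# Sketch of "think step by step" (chain-of-thought style) prompting.
# send_to_model() is a hypothetical placeholder, not a specific vendor's API.

from typing import Dict, List


def build_step_by_step_prompt(question: str) -> List[Dict[str, str]]:
    """Wrap a question in a prompt that asks the model to reason step by step."""
    return [
        {"role": "system", "content": "You are a careful analyst."},
        {"role": "user", "content": f"{question}\n\nLet's think step by step."},
    ]


def send_to_model(messages: List[Dict[str, str]]) -> str:
    """Hypothetical stand-in: substitute a real chat-model call here."""
    raise NotImplementedError


if __name__ == "__main__":
    for message in build_step_by_step_prompt(
        "Which of our three product lines should we expand next year?"
    ):
        print(f"{message['role']}: {message['content']}")
```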

Indeed, toward the end of the panel mentioned above, Nadeem Mazen of Nimblebot remarked that chatbots like ChatGPT can already provide employees with some of the psychological assistance that we previously looked to human colleagues to provide. (“It’s almost spooky”, Mazen mused.)

In short, even the work tasks which are claimed to lie outside the reach of today’s robots and AIs are likely to come within the expanded reach of automation in the relatively near future.

Perhaps you don’t believe my claim here. In that case, I ask you to dip into my recent essay “Six possible responses as the Economic Singularity approaches”, where the section entitled “Disbelief” offers more analysis in support of my claim.

That same essay also makes the case that the best response to this forthcoming wave of job losses isn’t to resist them, but is to accelerate an enhancement of the way the entire economy is run. This enhancement will involve a redistribution of the bountiful fruits of automation so that everyone in society benefits.

The takeaway at this point of my argument is that employees should stop worrying about AI and automation, but should worry instead about whether the political setup in their country is preparing for this kind of significant redistribution as part of a revised social contract.

It’s for this reason that the slide in my presentations on “The 7 most important characteristics for success over the next 3-5 years” gives special prominence to the skill area of “Politics”:

  • Building and managing coalitions
  • Agile regulations & incentives
  • Revised social contract

Credit: Tesfu Assefa

Happily, in my discussions with senior AI leaders in Saudi Arabia over the last three days, it’s my impression that they already have a good understanding of these points. I’ll say more about that in a forthcoming article.

Footnote: Patrick Linnenbank, Partner at management consulting firm Arthur D. Little, deserves a big shout-out for his skill in moderating the above panel discussion.


About the writer

David Wood


David Wood is chair of London Futurists and the author of eight books about the future — including The Singularity Principles and Vital Foresight. He co-hosts the London Futurists Podcast.

5 thoughts on “Are fears of technological unemployment misdirected?”

  1. David, thank you.

    While the anxiety around AI and job security is understandable, the narrative often overlooks a key opportunity: the redefinition of what it means to work. The debate shouldn't just be about job loss; it should focus on reshaping the economy and education to align with these advancements. What is work? Why do we work?

    Also, while the concern about reskilling is real, we should think beyond mere adaptation. There's an untapped potential in human-AI collaboration where humans and AI complement each other’s strengths. It's not just about surviving AI’s impact on jobs—it's about thriving by leveraging AI as a tool to enhance human capabilities.

    Also, we need to advocate for policies that ensure the wealth generated by AI is more evenly distributed. I don't think the work we know today will exist in the age of the AGIs and the Super AIs.

    For people like me, the anxiety isn’t just about the tech itself – it’s about how fast our systems can adapt. In a world where change is happening faster than ever, it’s less about whether AI will take jobs and more about whether companies, governments, and workers can evolve fast enough to keep pace with AI’s capabilities. Stagnation is part of the real fear: getting left behind because we’re too slow in adopting new skills or transforming how we measure value in the workplace. The danger lies in human systems being outpaced by AI advancements.

    I believe AI isn’t inherently a threat, but failing to restructure education, economic systems, and business cultures around it certainly is. Today, re-imagining the meaning of work, and why we work, is at a stagnant stage, and if we find ourselves in a world where the work we do now can be done by machines, chaos will reign. David, I appreciate your role here and we need more conversations on this topic.

    Let’s not just train workers for AI; let’s redesign work itself so AI becomes a partner in creativity and productivity—not a competitor. It's about reshaping the workforce to move at the speed of AI while safeguarding what makes us human: innovation, emotional intelligence, and adaptability.

  2. If we achieve superintelligent AI that surpasses human capabilities, or matches them in every domain, it could render most or all human jobs obsolete. These advanced AIs would be able to do anything humans can do, and likely much better.

    So the real concern shouldn't be technological unemployment per se, but the potential for massive income inequality. As AI takes over more and more tasks, wealth will concentrate in the hands of those who own the AI systems.

    True: education, reskilling, and similar reform activities on human capital are very critical and necessary steps during the transition period. I think we are in that period already, since our narrow AIs are out and about, and very soon, maybe near or after the 2040s, we will see AGI and then super AI.

    To prepare, the world should start planning for a fair universal basic income or even a moneyless economic system. Otherwise, the rise of superintelligent AI could lead to a dystopian future of haves and have-nots. Proactively addressing these issues is crucial as we advance towards artificial general intelligence and beyond. David, thank you for the piece. I don't think the fear is baseless.

  3. The heyday of smaller decision-making units is definitely the future. That’s my first highlight from this piece.

    As for the approach where supportive, transparent adaptation to changing situations is seen as a crucial cultural choice in organizations, I completely agree. I expect it to be a societal journey as people learn how to become great learners and develop the hunger for constant personal improvement. All kinds of meta-skills, from health management to critical thinking and personal knowledge management, rise in value and prominence.

    But onwards we go, and humans have no choice but to keep specializing in tasks that are more… well, ‘humanlike’. One day our brain is mixing us cocktails of angels, yet on another our heart is torn apart and we feel scattered in the multiverse. One goes unconscious from the shock of pain, only to wake again wishing not to have woken. And again, not so long after, the pain is your gain and ‘you wouldn’t change a day’. Quite harsh, but often along the way the person learns the greatest lessons of one’s life. On the day when humans in their current form no longer dominate scientific or technological progress, there will still be ways to generate great value. We live through human lives and document it in songs and diaries. We suffer, we celebrate, while recording heartbeats in fitness trackers and tears in photographs. AI will be able to understand it all. AI will be able to guide (or destroy or augment or upgrade) us in it, but it can’t experience it from the inside, from the perspective of a flawed, unique, individual human soul.

    They say there are as many beautiful unique stories as there are humans that ever existed. I have always wondered what is out there in the universe of lived real experiences — all that knowledge, growth, beauty, patterns and transformations. Those are not in the center of our attention because there are more practical things to do in our everyday lives. However, I would bet a big buck that things are going to change in this regard as well. 

    And finally, when we come to David’s last point about value distribution, it is not even about valuable ‘contributions’ anymore. Instead, it is about being valuable – being valued by everyone else or, in other words, by the collective agent. The wealthy ought to be those who find opportunities to increase the collective value. The wealthy ought to be those who recognize the lives whose well-being could be improved the most, and go and do that. What are the dynamic collective values for every choice that happens to come up? What is the process by which individuals’ values turn into collective action?

    My core message here is that there will always be unmatched value in every agent in existence. In fact, there will always be some ‘comparative advantage’ for everyone as well, even if tiny, and it would be a failure of our collective society not to be able to harness that value. I personally see the rise of AI and automation as extremely liberating. The beneficial outcome is not guaranteed without dedicated effort but, in my view, we have a realistic roadmap to handle this. At the very least, people will not fall into the trap of existential purposelessness. That’s something I’m sure of.


    1. I really loved this comment.

  4. My anxiety is real. Technological unemployment is real, and the stronger AIs become, the more this anxiety will prove justified.

