
A Disruption in the International AI Landscape?

Sep. 12, 2024. 8 mins. read.

David Wood explores Saudi Arabia's ambitious bid to become a global AI leader at the GAIN Summit in Riyadh. Can Saudi's vision disrupt the dominance of the USA and China?

In the years ahead, which countries will have the biggest impact on the development and deployment of AI?

The two most common answers to that question are the USA and China. Sometimes people put in a word for the EU or India – or the UK, Canada, Israel, or Korea.

Well, in one possible future scenario, another country may power forward to join this group of the biggest AI influencers – and, in the process, might disrupt the international AI landscape.

The country in question is the one where I’ve been for the last 72 hours, and where I will stay for two more days, namely, Saudi Arabia.

I’m attending the GAIN summit in Riyadh. The ‘GAI’ of “GAIN” stands for Global AI. The ‘N’ has a triple meaning: Now, Next, Never.

Credit: David Wood

To quote from the event website:

  • AI Now: How are the AI leaders of the world today deploying, scaling, and leveraging the technology?
  • AI Next: What does the future of AI look like and how will it impact people, businesses and government organizations?
  • AI Never: How do we ensure that the future we design is one that we want to live in and not a dystopian sci-fi?

The ‘A’ in GAIN could plausibly also stand for “ambition”, as the organizers have high ambitions. To quote again from the event website:

The Global AI Summit is the leading platform for advancing the global discussion on AI, where visionary experts, academics, corporates, and policymakers converge from every part of the world to shape the future of artificial intelligence for the good of humanity.

Of course, it’s one thing for a country to express big ambitions to play a leading role in the future of AI. It’s quite another thing to make significant steps toward that ambition. Therefore, as I approached the event, I didn’t know what to think.

Indeed, it was my first visit to Saudi Arabia. I found myself reminded of my first visit to China, way back in October 2002. On that occasion, I was representing Symbian, at what was described as the first UK-China CEO Forum. I recently came across a photo of that event – where my hair was a brighter shade of red than in more recent times!

Credit: David Wood

In both cases – my first visit to China, and my first visit to Saudi Arabia – I was unsure what to expect. It turned out that Shanghai was a bustling metropolis, with gleaming shops and a lively entrepreneurial spirit. The Chinese people I met were dressed nothing like the Chairman Mao suits that I had remembered reading about in my schooldays, and were impressively knowledgeable about technology and business. That visit was to be the first of many I would make in the following years, as Chinese companies steadily became more powerful players on the world stage.

That was 2002. What about my experience in the last 24 hours, in 2024, in Riyadh?

Part of the answer lies in numbers:

  • Over 450 speakers, spread over multiple parallel tracks
  • The speakers represented more than 100 different countries
  • Over 32,000 attendees expected during the three days.

These numbers all represent significant steps up from the corresponding figures at the two previous editions of this summit, held in 2020 and 2022.

The speakers include a host of prominent leaders from business and technology worldwide. Some examples:

  • Julie Sweet, the Chair and CEO of Accenture
  • Cristiano Amon, the President and CEO of Qualcomm
  • Marc Raibert, the Founder of Boston Dynamics
  • Martin Kon, the President and COO of Cohere
  • Brian Behlendorf, the Chief AI Strategist of the Linux Foundation
  • Nick Studer, the President and CEO of Oliver Wyman Group
  • Matthew Kropp, the CTO and Managing Director of Boston Consulting Group
  • Alan Qi, the President of Huawei Cloud
  • Yuwon Kim, the CEO of Naver Cloud
  • Caroline Yap, the Global Managing Director of Google Cloud.

Multiple segments of society in Saudi Arabia were well represented too – including an impressive number of adept, articulate women leaders, who had some fascinating pieces of advice.

With so many speakers, it was perhaps inevitable that some speeches fell flat – especially several of the ones about the governance of AI, where the conversations seemed to be going round in circles, with little appreciation of what I see as the risks of catastrophic harm if next-generation AI is mishandled. However, the technical talks were generally compelling.

I particularly liked the talks by Andrew Feldman, Co-founder and CEO of Cerebras Systems, and Jonathan Ross, Founder and CEO of Groq. These two companies each position themselves as disruptors of the GPU market, and, hence, as potentially overtaking Nvidia. Instead of GPUs, or the TPUs developed by Google, they have created LPUs (Language Processing Units) in the case of Groq, and wafer-scale AI chips in the case of Cerebras. Both companies claim notable improvements in speed over previous AI chip configurations. I heard the phrase “like ChatGPT but insanely fast”.

Both Cerebras and Groq emphasized close partnerships with Saudi Arabia. Andrew Feldman of Cerebras described a special collaboration with KAUST (King Abdullah University of Science and Technology). And Jonathan Ross of Groq appeared on stage alongside Tareq Amin, the CEO of Aramco Digital. Ross gave three reasons for his company's strong investment in the country:

  • The abundance of energy resources in the country
  • The good business environment, that encourages and supports this kind of partnership
  • The geographical location, close to three different continents, so that the resulting high-performance AI cluster could serve the needs of up to four billion people.

It was while listening to these two talks that the Saudi ambition to become a global leader in AI started to become more credible in my mind.

I had already seen the strong enthusiasm in large numbers of Saudi delegates at the event. They were avidly leaning forward in their seats, to capture as much as possible of the advice being provided on the various stages. It seems that the country is aware of the need to transition away from reliance on the oil industry, and instead to actively participate in shaping the global AI marketplace.

There were many other talks and panels which left me with new ideas to consider. For example, I wished that Marc Raibert, the Founder of Boston Dynamics, could have had more time to develop his fascinating ideas further. He made the case that true intelligence involves an interactive combination of cognitive intelligence (“what’s going on in our heads”) and athletic intelligence (“what’s going on in our bodies”). That explanation formed the backdrop for the progress made by Boston Dynamics over the years, with robots such as Spot (commercially significant “today”), Stretch (“tomorrow”), and Atlas (“future”). In addition to his role at Boston Dynamics, Raibert is also the Founder and Executive Director of the AI Institute, whose website proudly reports that “The AI Institute aims to solve the most important and fundamental problems in robotics and AI”. As I said, I wish he had had more time to continue talking about that work.

Earlier in the day, I watched a fascinating six-way round-table discussion on the subject “Hallucinations and Confabulations: when chatbots go rogue”, with speakers from Kearney, Mozn, Saudi Aramco, Vectara, KAUST, and INF, who each had long careers as experts in various aspects of AI. The discussion went on for 90 minutes, but I would have been happy for it to continue longer, as it had lots of good-spirited clashes of ideas about the strengths and weaknesses of large language models, and possible approaches to add fact-checking components into the AI systems of the near future. One of the speakers, Amr Awadallah of Vectara, boldly predicted that AGI would exist by 2028. Part of his reasoning was his argument that ongoing improvements in RAG (Retrieval Augmented Generation) were steadily reducing the prevalence of hallucinations in the content being generated.
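
For readers unfamiliar with the technique: the core idea of RAG is to retrieve relevant source passages first, and then constrain the model to answer from those passages, so that its claims are grounded in retrievable text rather than invented. The sketch below is purely illustrative, assuming a toy corpus, a naive keyword-overlap retriever, and a hypothetical prompt format; it is not how Vectara or the other panellists actually build their systems.

```python
# A minimal, purely illustrative sketch of Retrieval Augmented Generation (RAG).
# The corpus, the keyword-overlap scorer, and the prompt wording are hypothetical
# simplifications; a real system would use embedding-based vector search and pass
# the prompt to an actual language model.

from collections import Counter

CORPUS = [
    "Cerebras Systems builds wafer-scale AI chips.",
    "Groq has developed the LPU, a Language Processing Unit.",
    "The GAIN summit was held in Riyadh in September 2024.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of word tokens shared by query and document."""
    return sum((Counter(query.lower().split()) & Counter(doc.lower().split())).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages that best match the query."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the eventual answer in retrieved passages, to reduce hallucination."""
    context = "\n".join(f"- {passage}" for passage in retrieve(query))
    return (
        "Answer using ONLY the context below. If the context is insufficient, "
        "say so rather than guessing.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_prompt("What kind of chips does Cerebras build?"))
```

The grounding step is the part that matters: the more reliably the retrieved passages cover the question, the less room the model has to confabulate, which is the trend Awadallah was pointing to.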

That kind of awareness of potential dramatic improvements in AI capability by 2028 was, sadly (to my mind), missing from what many of the speakers in other sessions were assuming. These other speakers were focused, in effect, on the “Now” of AI, and didn’t foresee many real changes for “AI Next” any time soon. Frankly, if they keep thinking that way, they’re likely to be disrupted themselves. Anyway, this issue is something I hope will feature again in the sessions on days two and three of this year’s GAIN. I look forward to these days with great interest.

I’ll end at the beginning. The day started with an artistic performance, symbolizing the sequential creation of ANI (Artificial Narrow Intelligence), AGI (Artificial General Intelligence), and then ASI (Artificial Super Intelligence). The narrator offered a positive vision of a beneficial relationship between humanity and superintelligence: “There will be no more confusion, and a golden age of progress will flourish, where men and machines, united by an unprecedented alliance, will walk together toward a destiny of glory and happiness”.

Having come to life, the ASI spoke to a young boy, who was a representative of humanity, saying “I am your new avatar, and I will become your best friend”.

Credit: David Wood

In response, the boy started singing what was said to be his favourite song. The music was increasingly stirring and the singing increasingly dramatic. Given the location, Riyadh, I could hardly believe what I was hearing:

Imagine there’s no heaven
It’s easy if you try
No hell below us
Above us, only sky
Imagine all the people
Livin’ for today
Ah

Imagine no possessions
I wonder if you can
No need for greed or hunger
A brotherhood of man

Imagine all the people
Sharing all the world

You may say I’m a dreamer
But I’m not the only one
I hope someday you’ll join us
And the world will live as one

The words and music of John Lennon’s “Imagine” have graced numerous stages over the decades, but somehow, for me, this was particularly evocative.


About the Writer

David Wood


David Wood is chair of London Futurists and the author of eight books about the future — including The Singularity Principles and Vital Foresight. He co-hosts the London Futurists Podcast.

10 thoughts on “A Disruption in the International AI Landscape?”

  1. This is an excellent recap of the GAIN summit. David, I want to ask you one critical question. You explained that the possibility of true AGI in 2028 is not far-fetched, and that if one fails to assume this, one might be disrupted. Okay, technically speaking the right answer is that we don't know, and therefore expecting this won't harm anyone. However, if we decide to dive a little deeper, the 2028 assumption is at best naively put, and at worst hype prostitution for investment money.

    Okay, let's ignore the current challenges on the software side; how about the hardware? It always amuses me when people completely ignore hardware, which is equally if not more important in the development of AGI or ASI.

    Right now the hardware field doesn't show any hint that we will have powerful enough processing units (RAM and GPUs) that can host AGI. Quantum computing is decades away, and it is just an infant now. How can we say, as respected professionals, that we might see AGI by 2028? That is only 3 years, 3 months, and a couple of weeks away.

    Yes, I know Moore's Law has been in action for the last four decades, but we are now in a stalemate decade. Hardware is not doubling.

    I know you know this, but I want to share the following for other people (to show them that Moore's Law has held true in the hardware field). In the 1990s, the cost of storing a terabyte of data was a little above 7 million dollars; today, it is a little above 14 dollars. Storing a megabyte of data cost a little above 9,000 dollars in the 1950s, but today it costs 0.00002 dollars. Similarly, the processing side has also witnessed exponential change: cost has declined and performance has skyrocketed. The cost of computation today is less than one hundred-millionth of what it was in the 1970s, so do the math. In terms of performance, gigaflop operations per second in the 1990s were on average in the hundreds, while today they are in the billions.
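
    To make these figures concrete, here is a rough back-of-the-envelope calculation using only the approximate numbers quoted above (they are my own estimates, so treat the output as illustrative, not as authoritative data):

```python
# Back-of-the-envelope check of the cost-decline figures quoted above.
# All inputs are the rough estimates from this comment, not authoritative data.
import math

def decline_factor(old_cost: float, new_cost: float) -> float:
    """How many times cheaper the new cost is compared with the old one."""
    return old_cost / new_cost

def implied_halvings(factor: float) -> float:
    """Number of price halvings (Moore's-Law-style steps) implied by the factor."""
    return math.log2(factor)

# Storage: ~$7,000,000 per TB in the 1990s vs ~$14 per TB today.
tb = decline_factor(7_000_000, 14)
print(f"Terabyte storage: ~{tb:,.0f}x cheaper (~{implied_halvings(tb):.0f} price halvings)")

# Storage: ~$9,000 per MB in the 1950s vs ~$0.00002 per MB today.
mb = decline_factor(9_000, 0.00002)
print(f"Megabyte storage: ~{mb:,.0f}x cheaper (~{implied_halvings(mb):.0f} price halvings)")
```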

    Yet do you know what it takes for true AGI? This needs to be in the trillions, if not hundreds of billions. There is a huge gap which the hardware side is not covering.

    1. Hi Aaron,

      Scenarios in which AGI is reached in 2028 (or even sooner) generally presume that we already have enough hardware at our disposal.

      These scenarios assert that getting to AGI is more of a software issue than a hardware issue.

      After all, the human brain has many of the aspects of general intelligence, but only consumes around 20 watts.

      As a former software engineer myself, I am well aware of occasions where improving the software had a much more significant impact on performance than waiting for the hardware to improve.

      So, attaining AGI would, in this view, depend on us making better use of hardware that already exists.


      On the subject of Moore's Law: it has actually held true, not for four decades, but for *six* decades. The first data point on Gordon Moore's famous 1965 graph, which had five data points, was 1959, when the first silicon integrated circuit had exactly one (that is, 2^0) transistor.

      Focussing only on the way that CPUs have packed in more transistors would omit the improvements that came from CPUs being displaced by GPUs and TPUs. Looking ahead, the prospects for yet another chip architecture, which Groq calls the LPU, are very interesting too.


      Three final notes:

      1. People have been forecasting the imminent end of Moore's Law for at least twenty years. That forecast has a bad track record!
      2. I agree with you that quantum computing is unlikely to make a significant difference to AI on current trajectories. However, there are scenarios in which improved AI enables significant improvements in the design and operation of quantum computing. Might that happen by 2028? I wouldn't rule that out.
      3. Bear in mind that it's not just individual chips that matter - it's the way that they're linked together, especially during the training of new models. Improvements in chip networking architecture are at least as important for progress with AI as improvements in individual chips.
      1. David, thank you for your feedback. I will leave our argument on the Hardware vs. Software issue as it is, because this appears to be a choice between two different schools of thought. You are in the Software camp while I am in the Hardware camp. This is a moot point for both of us, and it will likely continue to be so for the foreseeable future. Hardware is the bottleneck for AGI or Super AI, which is why all the major companies (Google, Microsoft, Meta, OpenAI, and Amazon) have started working on their own AI processors; they have all launched hardware ventures. In short, they know that the current hardware won't serve them well in their quest to unlock AGI. The most likely scenario is that the top ten supercomputers might be able to run true AGI, but I even doubt the sustainability of that. However, as you stated, there is another group that believes they can achieve this regardless of the hardware problem, so I will not challenge this difference any more than I have.

        Now, I am more concerned about your other positions, which I think are more pronounced now than ever. You have been writing about ethical concerns and how the world is unprepared for AGIs in terms of policies, applications, control mechanisms, inclusivity, etc. David, 2028 is too soon, and please don't stop writing on these issues. We need you and others to awaken the world. I appreciate what you have been doing and am thankful. I see what you see now; while this technology might be at our doorstep, the majority of us are still not considering it a lifetime possibility, giving it little attention if we do pay any at all.

        1. I would take 'before 2030' pretty seriously, and there is no slowing down. But yeah, indeed, regarding the development of social systems and decision-making structures we are light-years behind schedule. I second what you said. Thanks guys, keep the conversation going, spread awareness, and onboard people to contribute.

  2. A beautiful recap; thank you, David.

    However, I am surprised to see that you completely forgot to mention Japan as one of the leading countries in AI development. Furthermore, Japan is among the countries with the most significant impact on the development and deployment of AI. Assuming that Canada, Israel, Australia, Korea, and India will have more impact than Japan in this landscape is a mistake. Japan is on par with the USA and China, if not leading! Additionally, Russia is a major player in this field, often overlooked by the West.

    Saudi Arabia has all the money, so if they are true to their ambitions, they could quickly become a significant player. However, they utterly lack the expertise, culture, and institutions, along with the individuals necessary to dominate the AI race. I do not expect them to change the game in this decade or the coming one (maybe in the 2040s). The only shortcut I see here is that maybe they could seduce talent, because money talks. I would not be surprised to see leading AI researchers from Japan, the USA, Russia, and China moving to the Saudi camp. If they do that, and if somehow they manage to create a favorable culture and living environment for such individuals, perhaps they will join the big league by the mid-2030s. Time will tell.

    1. Hi Naonobu,

      You're right, there was a case for me to include Japan in that list too. And Singapore.

      You may be interested to read the analysis in the Tortoise Global AI Index, https://www.tortoisemedia.com/intelligence/global-ai/#rankings

      (That index places Japan as #12 worldwide. And Singapore at #3.)

      The Tortoise team considered over 100 separate measurements for each country, in compiling their ranking index.

      Saudi Arabia is placed at #31 on that list. But, unexpectedly, for the category of "Government Strategy", they are #1.

      1. Hi David, thank you for sharing the Tortoise Media report. This is my first time hearing about it. I took a look at it, and it doesn't make sense in some aspects; at times, it seems to present arbitrary scoring.

        For example, it ranks China as the second-leading country in total score but then places China as the 20th nation regarding Talent. In the same breath, it ranks China as second in Research. On the other hand, it labels India as the second-leading nation in Talent but then ranks it as 20th in Research. Talent is what drives research, and such a disparity between these two sub-indexes' rankings is a paradox. If talent is not what drives the research, then the whole situation is just absurd. So in this aspect, the report is ridiculously fallacious. If talent here refers to potential, or to every call-center dude, then it is even more absurd.

        The development index is another joke. For example, Japan is ranked as the 12th nation overall but then 6th in development, while Canada is 5th overall but 11th in development. It defines development as "focusing on the development of fundamental platforms and algorithms upon which innovative artificial intelligence projects rely." Such disparity between parallel indexes and the lack of clear quantification prevents me from taking the report as a serious study.

        Regardless, I really appreciate you sharing this on top of your article.

        1. Tortoise Media carefully explain their methodology in this 15-page PDF (as linked from the page I mentioned earlier).

          Once you read that, you'll appreciate that your criticisms of it are completely unsound.

          1. Again, thank you, David, but I still remain skeptical about the accuracy and quantification of the indexes. I read the guidelines, and they say the following.

            Implementation Talent: Artificial intelligence is implemented by people. This refers to the everyday practitioners of artificial intelligence who are employed by the public and private sector to apply technology to specific problems. Capacity, therefore, is based substantially on the personnel able to deploy, manage and implement technology systems. The geographical concentration of AI specialists, their movements, and the changing supply and demand for them across the field is the focus of the ‘Talent’ sub-pillar. The purpose of measuring talent is to define the level of capacity offered by human capital within a given nation.

            Innovation Research: Research and researchers generate new ideas in artificial intelligence. Capacity as a result of research is substantially based upon the level of activity amongst research communities and the extent to which they share and propagate ideas. Measuring the level of research includes an assessment of the number of papers, and the citations, their impact according to the Computer Science H-Index, as well as attendance at conferences and contributions to IEEE journals, among others. The ‘Research’ sub-pillar is indicative of the advances in capability that contribute to capacity through new innovations.

            First, while Talent is defined clearly, Research has some quirks: citation counts are not good data. Famous researchers can be cited a million times more, so a few celebrity-like researchers can outweigh thousands of researchers in terms of citations. Second, as I mentioned in my other comment, the difference between the Talent score and the Research score shouldn't be that high. From the guidelines' definitions, these two things are clearly tied together.

            Third, how can you quantify Research by 'attendance' numbers? Countries that have high populations or rich people can easily have high attendance. I am just highlighting the problems of quantification and qualification in their methodology.

            Furthermore, while their definition of 'Development' is clear and looks very important for measuring the AI capacity of nations, it has only a loose relationship with the Total Score for some countries.

            Investment Government strategy: Government strategies - often a collection of publications outlining approaches to digital transformation, innovation and artificial intelligence - detail commitments to invest and align interests in research communities. The content, presentation and apparent urgency of these national strategies is the focus of the ‘Government strategy’ sub-pillar, with indicators measuring AI spending and targets.

            As you can see, this is loosely defined, and it should have focused only on the actual money the governments spent. Governments can talk a lot about AI and host tons of conferences brimming with attendees, but that doesn't do any good unless real money is spent on real AI projects; Saudi Arabia, for example, is doing all of this for propaganda and image-building. Here I am also trying to scrutinize this index's qualification and quantification. Based on the definition, good policies might also get you high scores.

            Development is the only thing that matters. When it comes to country commitments and policies, even the African nations have fancy commitment policies, but either they are paper tigers or all the money is embezzled.

            Innovation Development: Innovation is demonstrated by the development of new techniques and advancements – especially in the field of artificial intelligence. The ‘Development’ sub-pillar focuses on collaboration on open source artificial intelligence platforms, the ISO Artificial Intelligence Committee status, and several indicators describing the level of patentable innovation.

            This (Development) should have a higher weight than at least Government Strategy.

            Anyway, David, pardon my lawyering up for Japan.

            1. Nao buddy, you might hate it with all your gut, but Saudi indeed might be in a better place than Japan. Maybe today they are not there yet, but pretty soon, just like David said (the 2030 vision), there is reliable evidence that they might join the top ten nations. By the way, it is not just Saudi: other Gulf nations like Qatar are on the move too. The Gulf nations are investing almost half a trillion dollars, they are building talent, and they are also attracting talent. Just because Japan used to be a tech marvel doesn't mean this will stay constant. "Wake up and compete" - that should be what you tell your beloved Japan. For me, their massive uninhabited deserts and their geographic location, where solar power can easily be harnessed, are another potential advantage; for these people, building massive cloud farms is as easy as ABC. Don't underestimate them.

