Soon the UAE will dethrone OpenAI

UAE’s Technology Innovation Institute (TII) yesterday released the Falcon 180B, a scaled-up version of the Falcon 40B. According to the official blog post, it is the largest open-source language model, containing a whopping 180 billion parameters.

According to TII, the Falcon 180B was trained on 3.5 trillion tokens using 4,096 GPUs simultaneously on Amazon SageMaker, for a total of ~7,000,000 GPU hours. To put that in perspective, Falcon 180B is 2.5 times larger than Llama 2 and required four times more compute to train. How UAE’s TII acquired such substantial computing power is certainly interesting.
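The comparison above can be sanity-checked with a quick back-of-the-envelope calculation. The Falcon figures are from TII's announcement; the Llama 2 figure of ~1.72 million GPU hours for the 70B model is the number reported in Meta's Llama 2 paper, assumed here as the basis for the "four times" claim.

```python
# Rough check of the article's size and compute comparisons.
# Falcon 180B: 180B parameters, ~7,000,000 GPU hours (per TII).
# Llama 2 70B: 70B parameters, ~1,720,000 GPU hours (reported by Meta).

falcon_params, falcon_gpu_hours = 180e9, 7_000_000
llama2_params, llama2_gpu_hours = 70e9, 1_720_000

size_ratio = falcon_params / llama2_params        # ≈ 2.6x larger
compute_ratio = falcon_gpu_hours / llama2_gpu_hours  # ≈ 4.1x more compute

print(f"size ratio:    {size_ratio:.1f}x")
print(f"compute ratio: {compute_ratio:.1f}x")
```

The ratios come out at roughly 2.6x and 4.1x, matching the "2.5 times larger" and "four times more compute" figures in the text.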

UAE has oil money

As an oil-rich nation, the UAE has ample financial resources. Hydrocarbons play a significant role in the UAE’s economy, with 30% of the UAE’s GDP directly dependent on the oil and gas industry and 13% based on exports, according to a report.

The UAE is channelling money earned from oil into AI projects. Six years ago, it launched the National Strategy for Artificial Intelligence 2031, which aims for AI to make a significant contribution to the economy, targeting up to 13.6% of GDP by 2030.

In 2020, the UAE government established the ATRC (Advanced Technology Research Council) to promote scientific research and innovation in AI. A few months later, ATRC established TII, the organisation behind today’s Falcon 180B. There is no doubt that the UAE is enthusiastic about investing in AI initiatives. In June, when OpenAI CEO Sam Altman visited Abu Dhabi, he praised the country’s foresight in recognising the potential of AI, noting that the city has been “talking about AI since before it was cool”.

While the world is scrambling to get NVIDIA GPUs, the UAE secured access to thousands of NVIDIA chips, which it used to build the Falcon model in May. Furthermore, the report states that the UAE wants to control its own computing power and talent rather than relying on the Chinese or the Americans. No doubt it has the capital, energy resources and talent to do so.

Similarly, Saudi Arabia has acquired no fewer than 3,000 H100 chips, processors that cost around $40,000 each. The acquisition was facilitated by King Abdullah University of Science and Technology (KAUST), a public research institution. A little number crunching makes it clear that the Saudis have invested a whopping $120 million to secure this impressive stock of GPUs.
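The $120 million figure follows directly from the reported numbers, which are themselves estimates rather than official prices:

```python
# Rough cost of Saudi Arabia's reported H100 purchase:
# 3,000 chips at a reported ~$40,000 apiece.
num_h100 = 3_000
unit_price_usd = 40_000

total_usd = num_h100 * unit_price_usd
print(f"${total_usd:,}")  # → $120,000,000
```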

This is why both AMD and NVIDIA raised eyebrows when the US moved to restrict exports of AI chips to Middle Eastern nations. The world’s major economies are currently engaged in an LLM race, one that has taken on Cold War overtones, with the US going all out to prevent its domestic AI chip manufacturers from supplying its competitors.

Not only this, UAE’s G42 recently launched Jais, an Arabic-language AI model with 13 billion parameters. Jais was built on supercomputers manufactured by Silicon Valley-based Cerebras Systems under a $100 million contract with G42. With NVIDIA’s chips in short supply, the UAE was smart enough to find alternatives.

Furthermore, in 2021, G42, which is backed by the UAE’s sovereign wealth fund Mubadala, raised $800 million from US tech investment firm Silver Lake.

What about OpenAI?

Coming to OpenAI, the company’s progress is largely dependent on the multi-billion dollar investment it received from Microsoft earlier in the year. However, recent developments suggest it is burning through that investment. Recently, Altman posted on X that the company is not coming out with GPT-5 or GPT-4.5 in the near future, and asked people to calm down.

According to a report by The Information, OpenAI’s losses roughly doubled last year to about $540 million as it developed ChatGPT and GPT-4. Reportedly, it cost the company more than $4.6 million to train GPT-3, which has 175 billion parameters. GPT-4 is rumoured to have about 1.76 trillion parameters, which would put the cost of building the model at roughly $46.3 million, assuming a linear increase in cost per parameter. Again, this is a simplified estimate, and the actual cost may vary based on various aspects including research and development costs, talent, hardware improvements and more.
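The extrapolation above can be sketched in a few lines. All the inputs are reported or rumoured figures, not official ones, and the linear cost-per-parameter assumption is the article's simplification, not how training costs actually scale:

```python
# Back-of-the-envelope GPT-4 training-cost estimate, assuming cost
# scales linearly with parameter count (the article's simplification).
GPT3_PARAMS = 175e9       # 175 billion parameters
GPT3_COST_USD = 4.6e6     # reported ~$4.6M training cost
GPT4_PARAMS = 1.76e12     # rumoured 1.76 trillion parameters

cost_per_param = GPT3_COST_USD / GPT3_PARAMS
gpt4_cost_usd = cost_per_param * GPT4_PARAMS

print(f"Estimated GPT-4 training cost: ${gpt4_cost_usd / 1e6:.1f}M")
# → roughly $46.3M under this naive linear assumption
```

In reality, cost per parameter is not constant: it depends on token count, hardware efficiency and architecture, so this figure is an illustration of the article's reasoning rather than a real estimate.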

This may explain why OpenAI has avoided releasing GPT-4’s multimodal capabilities to the public or disclosing its parameter count, details the team seems to be deliberately withholding to avoid unwanted attention. Who knows, maybe OpenAI fooled us all and we never actually got GPT-4.

Altman previously suggested that OpenAI could seek to raise $100 billion over the next few years to achieve its goal of developing AGI. Perhaps OpenAI should also attract some oil money, or even expand into the Middle East. Interestingly, Microsoft is already considering doing so.

Currently, OpenAI is courting enterprises to stay in business. It has announced its inaugural developer conference for November 6, 2023, in San Francisco, where developers from around the world are expected to bring new ideas and tools for ChatGPT and the API.
