OpenAI said to be considering developing its own AI chips

OpenAI, one of the best-funded AI startups in the business, is exploring making its own AI chips.

Discussions of AI chip strategies within the company have been ongoing since at least last year, according to Reuters, as the shortage of chips to train AI models worsens. OpenAI is reportedly weighing a number of ways to advance its chip ambitions, including acquiring an AI chip maker or mounting an effort to design chips in-house.

OpenAI CEO Sam Altman has made the acquisition of more AI chips a top priority for the company, Reuters reports.

Currently, OpenAI, like most of its rivals, relies on GPU-based hardware to develop models such as ChatGPT, GPT-4 and DALL-E 3. GPUs’ ability to perform many computations in parallel makes them well suited to training today’s most capable AI.

But the generative AI boom — a windfall for GPU makers like Nvidia — has massively strained the GPU supply chain. Microsoft is facing a shortage of the server hardware needed to run AI so severe that it might lead to service disruptions, the company warned in a summer earnings report. And Nvidia’s best-performing AI chips are reportedly sold out until 2024.

GPUs are also essential for running and serving OpenAI’s models; the company relies on clusters of GPUs in the cloud to perform customers’ workloads. But they come at a sky-high cost.

An analysis from Bernstein analyst Stacy Rasgon found that, if ChatGPT queries grew to a tenth the scale of Google Search, it’d require roughly $48.1 billion worth of GPUs initially and about $16 billion worth of chips a year to keep operational.

OpenAI wouldn’t be the first to pursue creating its own AI chips.

Google has a processor, the TPU (short for “tensor processing unit”), to train large generative AI systems like PaLM-2 and Imagen. Amazon offers proprietary chips to AWS customers both for training (Trainium) and inferencing (Inferentia). And Microsoft, reportedly, is working with AMD to develop an in-house AI chip called Athena, which OpenAI is said to be testing.

Certainly, OpenAI is in a strong position to invest heavily in R&D. The company, which has raised over $11 billion in venture capital, is nearing $1 billion in annual revenue. And it’s considering a share sale that could see its secondary-market valuation soar to $90 billion, according to a recent Wall Street Journal report.

But hardware is an unforgiving business — particularly AI chips.

Last year, AI chipmaker Graphcore, which reportedly had its valuation slashed by $1 billion after a deal with Microsoft fell through, said that it was planning job cuts due to the “extremely challenging” macroeconomic environment. (The situation grew more dire over the past few months as Graphcore reported falling revenue and increased losses.) Meanwhile, Habana Labs, the Intel-owned AI chip firm, laid off an estimated 10% of its workforce. And Meta’s custom AI chip efforts have been beset with issues, leading the company to scrap some of its experimental hardware.

Even if OpenAI commits to bringing a custom chip to market, such an effort could take years and cost hundreds of millions of dollars annually. It remains to be seen whether the startup’s investors, one of which is Microsoft, have the appetite for such a risky bet.