The carbon footprint of training a large language model (LLM) with one billion parameters varies with hardware, training duration, datacenter efficiency, and the carbon intensity of the local grid, but published estimates suggest the emissions can be substantial. For comparison, training a much larger model such as GPT-3 has been estimated to emit approximately 552 metric tons of CO2[6].
In addition to the operational emissions from running the training itself, the embodied emissions from manufacturing the hardware can account for roughly 24-35% of an LLM's total carbon footprint[5]. Taking both into account, the carbon footprint of training a 1-billion-parameter model could reach several hundred tons of CO2, with one estimate placing it at roughly 502 tons[5].
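To make these figures concrete, the usual accounting multiplies GPU count, per-GPU power draw, training time, datacenter overhead (PUE), and grid carbon intensity to get operational emissions, then scales that up by an assumed embodied-hardware share. The Python sketch below is a hypothetical back-of-the-envelope calculation; every input (256 GPUs, 0.4 kW per GPU, two weeks of training, PUE of 1.2, 0.4 kg CO2/kWh, 30% embodied share) is an illustrative assumption, not a figure taken from the cited sources.

```python
# Back-of-the-envelope estimate of LLM training emissions.
# All numeric inputs below are illustrative assumptions, not measured values.

def operational_emissions_tco2(gpu_count, gpu_power_kw, hours, pue, grid_kgco2_per_kwh):
    """Operational CO2 (metric tons) = energy drawn x datacenter overhead x grid intensity."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kgco2_per_kwh / 1000.0  # kg -> metric tons

def total_emissions_tco2(operational_tco2, embodied_share=0.30):
    """Scale operational emissions up by an assumed embodied-hardware share of the total."""
    return operational_tco2 / (1.0 - embodied_share)

if __name__ == "__main__":
    # Hypothetical 1B-parameter run: 256 GPUs at 0.4 kW each for two weeks,
    # PUE of 1.2, grid intensity of 0.4 kg CO2 per kWh.
    op = operational_emissions_tco2(gpu_count=256, gpu_power_kw=0.4,
                                    hours=14 * 24, pue=1.2,
                                    grid_kgco2_per_kwh=0.4)
    print(f"Operational emissions: {op:.1f} t CO2")
    print(f"Total with 30% embodied share: {total_emissions_tco2(op):.1f} t CO2")
```

Under these assumptions the run comes out to roughly 17 tons operational and about 24 tons total; the hundreds-of-tons figures cited above correspond to far larger clusters, longer training runs, or more carbon-intensive grids.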