Abstract
As language models gain prominence for their generative capabilities, their growing carbon footprint must be critically addressed in the context of the climate crisis. This paper aims to increase transparency by comparing emissions from training – particularly of small language models – and inference. We therefore survey existing benchmark data and investigate two representative models, TinyLlama and nanoGPT, evaluating their energy consumption during training and their task performance. We also reflect on how the specificity of use cases, model architecture, and hardware choices influence efficiency and sustainability. Our findings indicate that existing benchmarks and publications rarely report energy consumption, creating a significant information gap and underscoring the need for a harmonized evaluation framework that integrates standardized sustainability criteria. Nevertheless, small language models show potential for selected application scenarios where resource efficiency is key. To address the challenges of fair and sustainable AI, we emphasize the importance of ongoing documentation efforts and encourage model developers and providers to communicate energy usage data more openly. Transparent reporting supports responsible model selection and helps align AI development with climate-conscious technology practices.
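The training-versus-inference comparison at the heart of the abstract can be illustrated with a minimal back-of-the-envelope sketch. The figures below (energy per training run, energy per inference request, grid carbon intensity) are purely hypothetical placeholders, not values from the paper; the point is only the shape of the calculation: converting measured energy into CO2-equivalent emissions and finding the request count at which cumulative inference energy matches the one-off training cost.

```python
def co2e_kg(energy_kwh: float, grid_intensity_kg_per_kwh: float = 0.4) -> float:
    """Convert electricity use (kWh) into CO2-equivalent emissions (kg).

    0.4 kg CO2e/kWh is an illustrative grid carbon intensity,
    not a value reported in the paper.
    """
    return energy_kwh * grid_intensity_kg_per_kwh


# Hypothetical figures for a small language model:
training_kwh = 300.0    # one-off energy cost of a training run
inference_kwh = 0.0005  # energy per generated response

training_emissions = co2e_kg(training_kwh)

# Number of inference requests at which cumulative inference energy
# equals the one-off training energy:
break_even_requests = training_kwh / inference_kwh

print(f"Training emissions: {training_emissions:.1f} kg CO2e")
print(f"Break-even after {break_even_requests:,.0f} inference requests")
```

In practice, such numbers would come from on-device measurement (e.g. power-meter or software-based energy tracking during training and serving), which is exactly the kind of data the paper argues benchmarks should report.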
| Original language | English |
|---|---|
| Article number | 108670 |
| Journal | Resources, Conservation and Recycling |
| Volume | 226 |
| Early online date | 31 Oct 2025 |
| DOIs | |
| Publication status | Published - Feb 2026 |
Keywords
- Carbon footprint
- Energy efficiency
- Generative AI
- Small language models
- Sustainability
ASJC Scopus subject areas
- Waste Management and Disposal
- Economics and Econometrics
Fields of Expertise
- Sustainable Systems