National statistical systems generate the statistics that underpin policy, economic analysis, and public trust. Yet, despite decades of investment in statistical capacity, two persistent challenges, data accessibility and interpretability, continue to limit the impact of official statistics. The rise of large language models (LLMs) and GenAI applications such as ChatGPT and Gemini appeared to offer a solution by enabling users to retrieve statistics through natural language. However, testing shows that while these applications excel at synthesizing text, they perform poorly at delivering official statistics: they frequently return dangerously “reasonable” but incorrect figures. This paper introduces StatGPT, an initiative by the IMF Statistics Department that uses LLMs not to generate statistics but to generate structured queries against the APIs of official statistical agencies. StatGPT ensures that users receive the exact published figures, every time, while still benefiting from natural language interaction. The paper examines the limitations of off-the-shelf GenAI applications, explains how StatGPT overcomes them, and proposes a roadmap for making official statistics AI-ready through open data access, enriched metadata standards, and strengthened data governance. By aligning technological innovation with statistical rigor, StatGPT represents a critical step toward a future in which official statistics remain authoritative, trusted, and universally accessible in an AI-driven world.
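The query-generation pattern described above can be sketched as follows. This is a minimal illustration of the general approach, not StatGPT's actual implementation: the endpoint URL, dataset and indicator codes, and function names are all hypothetical assumptions. The key point is that the model's output stops at a structured query; the figures themselves come verbatim from the agency's API response.

```python
from urllib.parse import urlencode

# Hypothetical structured query an LLM might emit for a question like
# "What was Japan's CPI inflation in 2023?" (codes are illustrative).
structured_query = {
    "dataset": "CPI",        # dataset code (assumed)
    "country": "JP",         # ISO country code
    "indicator": "PCPIPCH",  # indicator code (assumed)
    "start": "2023",
    "end": "2023",
}

def build_api_url(query: dict,
                  base: str = "https://example-agency.org/api/v1") -> str:
    """Translate the LLM-produced structured query into a REST call
    against an official statistics API. The model never generates a
    number; the response carries the exact published values."""
    path = f"{base}/{query['dataset']}/{query['country']}.{query['indicator']}"
    params = urlencode({"startPeriod": query["start"],
                        "endPeriod": query["end"]})
    return f"{path}?{params}"

print(build_api_url(structured_query))
```

Because the natural-language step produces only a machine-readable query, every figure shown to the user is traceable to a specific API call against the publishing agency, preserving the "exact published figures, every time" guarantee.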