Scaling laws became a foundational principle of AI development after researchers, including those at OpenAI and Google DeepMind, popularized them, emphasizing that progress depends not just on making models larger but also on training them on sufficient data.
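One way to see this trade-off is the parametric loss form fitted in DeepMind's Chinchilla work (Hoffmann et al., 2022); the expression below is an illustrative sketch of that form rather than a reproduction of their exact fitted constants:

L(N, D) ≈ E + A / N^α + B / D^β

Here N is the number of model parameters, D the number of training tokens, and E, A, B, α, β are constants fitted to training runs. Under a fixed compute budget (commonly approximated as C ≈ 6ND), minimizing this loss suggests growing parameters and tokens roughly in proportion, which is the basis for the often-quoted heuristic of training on around 20 tokens per parameter.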