Despite ever-increasing GPU resources, capability gains in the current generation of LLMs are falling short of expectations, with reports of failed training runs and diminishing returns emerging from major labs. Compounding this, the scarcity of high-quality training data has become a major bottleneck for AI development.