Foundation models (FMs) are systems trained on large-scale multimodal datasets to integrate vision, language, and other modalities. This integration enables contextual reasoning, compositional understanding, and strong generalization. In this study, we investigate whether increased GPU resources improve research outcomes in FM development, including the quality of pre-trained models and acceptance rates at prestigious AI/ML venues. While substantial resources have been rapidly allocated to FM research since 2022, it remains unclear whether greater computational power directly translates into higher-quality models and increased academic recognition. By focusing specifically on GPU availability and usage, we seek to clarify the practical impact of computational investment and to anticipate its implications for the AI research community.

In our study, we first collected 6,517 papers related to foundation model research from the OpenReview API and the ACL ARR platform. We then parsed their PDFs to extract available GPU, human-resource, funding, and dataset information. For the survey study, we recruited 229 first-author foundation model researchers, representing 326 papers in total. Participants provided self-reported responses regarding computing resources when such information was not documented in their publications.
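To make the collection step concrete, the sketch below shows one way such a pipeline could be assembled: paging through the public OpenReview REST API for submission metadata and scanning extracted PDF text for resource-related mentions. This is not the authors' released pipeline; the invitation string, keyword patterns, and function names are illustrative assumptions.

```python
"""Illustrative sketch of paper collection and resource-mention scanning.

Assumptions (not from the paper): the OpenReview v1 REST endpoint for notes,
a hypothetical venue invitation id, and a simple keyword-based scan of
already-extracted PDF text.
"""
import re
import requests

OPENREVIEW_NOTES_URL = "https://api.openreview.net/notes"


def fetch_submissions(invitation: str, page_size: int = 1000) -> list[dict]:
    """Page through OpenReview notes matching a given invitation string."""
    notes, offset = [], 0
    while True:
        resp = requests.get(
            OPENREVIEW_NOTES_URL,
            params={"invitation": invitation, "limit": page_size, "offset": offset},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("notes", [])
        if not batch:
            break
        notes.extend(batch)
        offset += len(batch)
    return notes


# Hypothetical keyword patterns for resource disclosures in paper text.
RESOURCE_PATTERNS = {
    "gpu": re.compile(r"\b(?:A100|H100|V100|RTX\s?\d{4}|GPUs?)\b", re.IGNORECASE),
    "funding": re.compile(r"\b(?:grant|funded by|funding)\b", re.IGNORECASE),
    "dataset": re.compile(r"\b(?:datasets?|corpus|corpora)\b", re.IGNORECASE),
}


def scan_resources(pdf_text: str) -> dict[str, list[str]]:
    """Return keyword matches per resource category from extracted PDF text."""
    return {name: pat.findall(pdf_text) for name, pat in RESOURCE_PATTERNS.items()}


if __name__ == "__main__":
    # Hypothetical invitation id; real venues define their own invitation strings.
    subs = fetch_submissions("ICLR.cc/2024/Conference/-/Blind_Submission", page_size=100)
    print(f"Fetched {len(subs)} submissions")
```

In practice, PDF text would first be extracted with a tool such as pdfminer.six or GROBID before applying the keyword scan, and matches would be manually verified, since raw pattern hits can include incidental mentions rather than actual resource disclosures.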