Foundation Models and Their Transformative Impact on Machine Learning

Srashti Farkya
Priyanka Khabiya

Abstract

Artificial Intelligence (AI) has advanced remarkably in recent years, driven largely by large-scale machine learning (ML) models trained on abundant and diverse datasets. Whereas conventional models are built for a single task, foundation models are adaptable and can be applied across domains such as natural language processing, computer vision, and even the generation of programming code. Models such as BERT, GPT-4, Claude, and Gemini are built on the transformer architecture and pretrained with self-supervised objectives. This approach lets them learn the structure of the data and the relationships among its components while requiring very little labeled information. Once pretrained on a large dataset, such a model can be fine-tuned or guided to perform new tasks with only a small amount of additional data. This paper examines the concept, architecture, and applications of foundation models. It describes their advantages, such as adaptability, efficiency, and versatility, and also discusses the main challenges, including potential bias in training data, ethical concerns, heavy computational requirements, and privacy risks. The objective is to explain how foundation models are transforming ML and what issues should be kept in mind when they are adopted or deployed.
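
As a concrete illustration of the adaptation step the abstract describes, the sketch below fine-tunes a pretrained BERT checkpoint on a tiny labeled set using the Hugging Face transformers library. This example is not from the paper: the checkpoint name, example texts, labels, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's method): adapting a
# pretrained foundation model to a downstream classification task
# with a small labeled dataset.
import torch
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"  # assumed checkpoint; any BERT-style model works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A tiny labeled set stands in for the "little more data" the abstract mentions.
texts = ["the movie was great", "the movie was terrible"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)  # small learning rate: pretrained weights need only slight adjustment

model.train()
for _ in range(3):  # a few gradient steps; real fine-tuning uses more data and epochs
    optimizer.zero_grad()
    out = model(**batch, labels=labels)  # returns the classification loss
    out.loss.backward()
    optimizer.step()
```

Prompting, the other adaptation route the abstract mentions, works without updating any weights: the task is described in the model's input, which is why a single pretrained model can serve many tasks.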


How to Cite

Farkya, S., & Khabiya, P. (2025). Foundation models and their transformative impact on machine learning. Journal of Global Research in Electronics and Communications (JGREC), 1(5), 7-14. https://doi.org/10.5281/zenodo.15614115
