How predictable are human lives? A recent study conducted by researchers at the Technical University of Denmark and Northeastern University suggests that an artificial intelligence (AI) transformer model can predict major human life events, including death.
“Our framework allows researchers to identify new potential mechanisms that impact life outcomes and associated possibilities for personalized interventions,” wrote lead author Sune Lehmann, a professor at Technical University of Denmark, along with co-authors Germans Savcisens, Tina Eliassi-Rad, Lars Kai Hansen, Laust Hvas Mortensen, Lau Lilleholt, Anna Rogers, and Ingo Zettler.
The researchers report that their proof-of-concept AI model shows a high degree of accuracy in its predictions. In machine learning, the two key components that determine a model's accuracy are its algorithm and the dataset it is trained on.
In machine learning, a model's quality depends on the depth and breadth of its training data. To put this in context, ChatGPT (Chat Generative Pretrained Transformer), the large language model (LLM) chatbot developed by OpenAI, was trained on massive amounts of data. According to a 2020 OpenAI preprint, GPT-3 has 175 billion parameters and was trained on vast quantities of internet text, including 570 gigabytes drawn from Common Crawl, an open repository of web crawl data collected during 2016-2019, along with the WebText2 dataset, English-language Wikipedia, and two internet-based books datasets called Books1 and Books2.
To train their AI transformer model, the researchers used a massive dataset containing individual-level records from the work and health databases of six million Danish residents, spanning decades. The data include not only detailed information on life events but also day-to-day resolution on education, work, working hours, income, and health.
“We can observe how individual lives evolve in the space of diverse types of events (information about a heart attack is mixed with salary increases or information about moving from an urban to a rural area),” they wrote.
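One way to picture this mixing of diverse event types is to treat each person's records as a single chronological token sequence, analogous to the words of a sentence. The sketch below is purely illustrative: the event names, vocabulary, and record layout are invented for this example and are not the study's actual encoding.

```python
# Hypothetical sketch: flattening individual-level records into one
# chronological token sequence, so health, work, and residence events
# are "mixed" the way words mix in a sentence. All event names here
# are invented for illustration.
events = [
    {"year": 2010, "type": "JOB_START", "detail": "teacher"},
    {"year": 2014, "type": "DIAGNOSIS", "detail": "asthma"},
    {"year": 2016, "type": "MOVED", "detail": "rural"},
]

def to_tokens(events):
    """Sort records by time and emit discrete tokens for each one."""
    tokens = []
    for e in sorted(events, key=lambda e: e["year"]):
        tokens += [f"YEAR_{e['year']}", e["type"], e["detail"].upper()]
    return tokens

print(to_tokens(events))
# → ['YEAR_2010', 'JOB_START', 'TEACHER', 'YEAR_2014', 'DIAGNOSIS',
#    'ASTHMA', 'YEAR_2016', 'MOVED', 'RURAL']
```

A sequence in this form can then be fed to a sequence model exactly as a language model consumes text.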
The AI deep learning model used, called “life2vec,” is based on a transformer architecture. Transformer models were introduced at the 31st Conference on Neural Information Processing Systems in 2017 in the paper “Attention Is All You Need” by Google researchers Ashish Vaswani, Illia Polosukhin, Jakob Uszkoreit, Noam Shazeer, Niki Parmar, Llion Jones, and Lukasz Kaiser, along with Aidan Gomez at the University of Toronto. Transformer models are widely used in natural language processing (NLP), computer vision, speech recognition, and other applications.
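At the heart of the transformer architecture is scaled dot-product attention, which lets every position in a sequence weigh every other position by similarity. A minimal NumPy sketch of that single operation (omitting the multi-head projections, layer normalization, and feed-forward layers of a full transformer):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; the output is a
    similarity-weighted mix of the value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax rows
    return weights @ V

# Toy example: self-attention over 3 positions with 4-dim embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # → (3, 4)
```

Using the same matrix for queries, keys, and values, as above, is what makes this *self*-attention: the sequence attends to itself.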
For this current study, the researchers created life2vec using a design based on the BERT model, which is short for Bidirectional Encoder Representations from Transformers. BERT is an open-source AI transformer that was released in 2018 by Google for natural language processing.
“Our models allow us to predict diverse outcomes ranging from early mortality to personality nuances, outperforming state-of-the-art models by a wide margin,” reported the scientists.
With their research prototype validated, it is clear that AI can predict human life events when given a powerful transformer architecture and immense training data, which may yield valuable insights for health research and the social sciences in the future.