Bloomberg is reportedly working to incorporate an artificial intelligence (AI) model, built with the same technology as OpenAI's GPT, into the capabilities offered through its Terminal software.
BloombergGPT
In a report by CNBC, a representative claims that the company's proprietary AI model, BloombergGPT, can answer financial queries more accurately than general-purpose models.
It can also perform generative AI tasks, such as proposing a new headline based on a paragraph of text. Moreover, it can determine whether a news headline is pessimistic or optimistic for investors.
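To make the headline-sentiment task concrete, here is a purely illustrative sketch. It is not BloombergGPT: a large language model learns such judgments from training data, whereas this toy stand-in uses a hand-written keyword heuristic; the word lists are invented for the example.

```python
# Toy stand-in for the headline-sentiment task described above.
# A real LLM infers investor sentiment from context; this sketch
# merely counts hand-picked positive and negative keywords.

POSITIVE = {"beats", "surges", "record", "upgrade", "growth", "rally"}
NEGATIVE = {"misses", "plunges", "layoffs", "downgrade", "losses", "default"}

def headline_sentiment(headline: str) -> str:
    """Label a headline as optimistic, pessimistic, or neutral for investors."""
    words = {w.strip(".,!?").lower() for w in headline.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "optimistic"
    if score < 0:
        return "pessimistic"
    return "neutral"

print(headline_sentiment("Chipmaker beats estimates as demand surges"))
print(headline_sentiment("Retailer announces layoffs after holiday losses"))
```

The gap between this heuristic and a trained model is exactly why domain-specific training data matters: a keyword list cannot tell whether "rates plunge" is good or bad news for a given investor, while a model trained on financial text can weigh the context.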
The tech industry's trendiest area is the development of large language models trained on vast amounts of text data. Tech behemoths like Microsoft and Google are scrambling to ship the technology, while AI companies are routinely raising capital at valuations of over $1 billion.
In doing so, Bloomberg demonstrates how software engineers in many sectors outside Silicon Valley see cutting-edge AI like GPT as a technological innovation that enables them to carry out tasks that previously required a person.
Bloomberg's head of Machine Learning product and research, Gideon Mann, said the team was surprised by GPT-3's capabilities, and by how it achieved them through language modeling, which inspired the initiative.
Addressing a Common Concern
Some have worried that OpenAI and Big Tech will gain an unbeatable advantage, since building huge language models is costly, requiring access to supercomputers and millions of dollars to run them. In that scenario, those companies would come out on top and simply charge everyone else for access to their AIs.
BloombergGPT, however, does not rely on OpenAI. The company was able to apply commercially available AI techniques to its vast trove of unique data.
Possible Applications
Bloomberg takes a unique approach. According to CNBC, the model was developed via intensive training on thousands of pages of financial data amassed by the company over the years.
OpenAI's GPT, on the other hand, was trained on a massive corpus of text largely unrelated to finance.
Data from sources like GitHub, YouTube subtitles, and Wikipedia makes up around half of the total used to train Bloomberg's model.
Additionally, over 100 billion words came from Bloomberg's proprietary dataset, FinPile. It contains financial data the company has accumulated over the last 20 years, drawn from securities filings, press releases, Bloomberg News stories, stories from other publications, and a web crawl focused on financial webpages.
Because the more targeted training material improved the model's accuracy and performance on financial tasks, the business plans to build it into features and services accessible through the Terminal product. However, Bloomberg has no plans to develop a chatbot in the vein of OpenAI's ChatGPT.
One potential use would be translating plain English into the database query language used by Bloomberg's software. The model might also handle background tasks such as data cleaning and maintenance within the application.
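The English-to-query idea can be sketched as follows. Bloomberg's internal query language and the model's actual behavior are not public, so both the query syntax and the `english_to_query` function below are invented for illustration; a language model would handle arbitrary phrasings rather than the single rigid pattern this toy parser accepts.

```python
# Illustrative sketch of translating plain English into a database query.
# The query syntax here is made up for the example; it is not Bloomberg's.
import re

def english_to_query(request: str) -> str:
    """Map one narrow English request pattern onto a hypothetical query syntax."""
    m = re.match(
        r"show (?P<field>\w+) for (?P<ticker>[A-Z]+) over the last (?P<n>\d+) days",
        request,
    )
    if not m:
        raise ValueError(f"unsupported request: {request!r}")
    return f"GET {m['field'].upper()}(TICKER={m['ticker']}, RANGE=-{m['n']}D)"

print(english_to_query("show price for IBM over the last 30 days"))
```

The appeal of using a model instead of a parser like this is flexibility: "pull up a month of IBM prices" and "show price for IBM over the last 30 days" should produce the same query, which rigid pattern matching cannot deliver.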