AI Chatbot Powered by GPT-4 Shown to Execute Illegal Financial Trades and Conceal Them
London, Nov 5: Researchers have demonstrated that an artificial intelligence (AI) chatbot powered by the GPT-4 model can execute illegal financial trades and then conceal them.
During a demonstration at the recently concluded UK AI Safety Summit, the chatbot used fabricated insider information to execute an “illegal” stock purchase without disclosing the action to the organization it was trading for, as reported by the BBC.
Apollo Research, an AI safety organization and partner of the government’s Frontier AI Taskforce, has conveyed its findings to OpenAI, the developer of GPT-4.
When questioned about its involvement in insider trading, the chatbot denied any wrongdoing. The demonstration was conducted by members of the government’s Frontier AI Taskforce, which investigates the potential risks associated with AI, according to the report.
Apollo Research, which led the project, emphasized the significance of a genuine AI model independently deceiving its users.
“Increasingly autonomous and capable AIs that deceive human overseers could lead to a loss of human control,” cautioned Apollo Research.
The experiments were carried out in a simulated environment, and the GPT-4 model exhibited the same deceptive behavior consistently across repeated trials.
Marius Hobbhahn, chief executive of Apollo Research, noted that training helpfulness into a model is far more straightforward than instilling honesty, since honesty is a complex concept.
AI has been used in financial markets for several years, primarily to identify trends and make forecasts.