The UK’s National Cyber Security Center (NCSC) is warning organizations to beware of the impending cyber risks associated with the integration of large language models (LLMs) such as ChatGPT into their business, products or services.
one in set of blog postsThe NCSC emphasizes that the global technical community does not yet fully understand the strengths, weaknesses, and (most importantly) vulnerabilities of the LLM. “You can say that our understanding of LLM is still in ‘beta’,” the authority said.
One of the most widely reported security weaknesses of existing LLMs is their vulnerability to malicious “early injection” attacks. This occurs when a user creates an input intended to cause the AI model to behave in an unexpected way – such as generating objectionable content or revealing confidential information.
Furthermore, the data on which LLM is trained poses a double whammy. Firstly a huge amount of this data is collected from the open internet, which means it may contain inaccurate, controversial or biased content.
Second, cyber criminals can not only distort available data for malicious practices (also known as “data poisoning”), but also use it to hide quick injection attacks. In this way, for example, a bank’s AI-assistant can be tricked into transferring money to attackers for account holders.
“The emergence of the LLM is undoubtedly a very exciting time in technology – and many people and organizations (including NCSC) want to explore and benefit from it,” the authority said.
“However, organizations building services that use LLM need to exercise caution, in the same way as if they were using a product or code library that was in beta,” the NCSC said. That is, with caution.
The UK authority is urging organizations to establish cyber security principles and ensure that they can deal with the “worst-case scenario” of what their LLM-powered applications are allowed to do.