Facebook Twitter Instagram
    News Trek
    • Home
    • Latest News
    • World
    • U.S
    • Politics
    • Technology
    • Sports
    Facebook Twitter Instagram
    News Trek
    Home»Technology»LLMs prone to data poisoning and prompt injection risks, UK authority warns
    Technology

    LLMs prone to data poisoning and prompt injection risks, UK authority warns

    August 31, 2023No Comments2 Mins Read
    Facebook Twitter Pinterest LinkedIn Tumblr Email
    Share
    Facebook Reddit Twitter LinkedIn Pinterest

    The UK’s National Cyber ​​Security Center (NCSC) is warning organizations to beware of the impending cyber risks associated with the integration of large language models (LLMs) such as ChatGPT into their business, products or services.

    one in set of blog postsThe NCSC emphasizes that the global technical community does not yet fully understand the strengths, weaknesses, and (most importantly) vulnerabilities of the LLM. “You can say that our understanding of LLM is still in ‘beta’,” the authority said.

    One of the most widely reported security weaknesses of existing LLMs is their vulnerability to malicious “early injection” attacks. This occurs when a user creates an input intended to cause the AI ​​model to behave in an unexpected way – such as generating objectionable content or revealing confidential information.

    Furthermore, the data on which LLM is trained poses a double whammy. Firstly a huge amount of this data is collected from the open internet, which means it may contain inaccurate, controversial or biased content.

    Second, cyber criminals can not only distort available data for malicious practices (also known as “data poisoning”), but also use it to hide quick injection attacks. In this way, for example, a bank’s AI-assistant can be tricked into transferring money to attackers for account holders.

    “The emergence of the LLM is undoubtedly a very exciting time in technology – and many people and organizations (including NCSC) want to explore and benefit from it,” the authority said.

    “However, organizations building services that use LLM need to exercise caution, in the same way as if they were using a product or code library that was in beta,” the NCSC said. That is, with caution.

    The UK authority is urging organizations to establish cyber security principles and ensure that they can deal with the “worst-case scenario” of what their LLM-powered applications are allowed to do.

    Share. Facebook Reddit Twitter Pinterest WhatsApp LinkedIn Tumblr Email

    Related Posts

    Large Hadron Collider needed a new database system to sustain its petabyte-hungry experiments

    September 19, 2023

    Modder uses CAD software and 3D printer to create custom cooling ducts

    September 19, 2023

    NASA releases photo of a baby star that will grow up to be like our Sun

    September 19, 2023

    Leave A Reply Cancel Reply

    Watch News Max Now
    YouTube video
    Facebook Twitter Instagram Pinterest
    © 2023 Pro Times News. All Rights Reserved

    Type above and press Enter to search. Press Esc to cancel.

    Ad Blocker Enabled!

    Ad Blocker Enabled!
    Our website detected adblocker. Please disable it to continue Reading. Thank You😊