Google warns: for the first time, hackers used AI to find and exploit a security flaw


According to Google’s analysis, the exploit script bore unmistakable hallmarks of AI-generated code: an abundance of educational annotations, a hallucinated severity score, and the clean, textbook-style structure characteristic of large language model output. Based on these indicators, Google’s Threat Intelligence Group (GTIG) said it had high confidence that an AI model was used both to identify the flaw and to build the exploit, though it added that the tool was most likely not Google’s own Gemini.


