News

At RSAC, a security researcher explains how bad actors can push LLMs off track by deliberately feeding them false inputs, causing generative AI apps to return wrong answers.