At a computer security conference in Arlington, Virginia, last October, a few dozen AI researchers took part in a first-of-its-kind exercise in “red teaming,” or stress-testing, a cutting-edge language model and other artificial intelligence systems. Over the course of two days, the teams identified 139 novel ways to get the systems to misbehave, including by generating misinformation or leaking personal …