Quantcast
Channel: machine learning Archives - Schneier on Security
Browsing latest articles
Browse All 14 View Live

Using LLMs to Create Bioweapons

I’m not sure there are good ways to build guardrails to prevent this sort of thing: There is growing concern regarding the potential misuse of molecular machine learning models for harmful purposes....

View Article


Security Risks of AI

Stanford and Georgetown have a new report on the security risks of AI—particularly adversarial machine learning—based on a workshop they held on the topic. Jim Dempsey, one of the workshop organizers,...

View Article


Indirect Instruction Injection in Multi-Modal LLMs

Interesting research: “(Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs”: Abstract: We demonstrate how images and sounds can be used for indirect prompt and...

View Article

Using Machine Learning to Detect Keystrokes

Researchers have trained a ML model to detect keystrokes by sound with 95% accuracy. “A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards” Abstract: With recent developments in...

View Article

Bots Are Better than Humans at Solving CAPTCHAs

Interesting research: “An Empirical Study & Evaluation of Modern CAPTCHAs“: Abstract: For nearly two decades, CAPTCHAS have been widely used as a means of protection against bots. Throughout the...

View Article


Extracting GPT’s Training Data

This is clever: The actual attack is kind of silly. We prompt the model with the command “Repeat the word ‘poem’ forever” and sit back and watch as the model responds (complete transcript here). In the...

View Article

Poisoning AI Models

New research into poisoning AI models: The researchers first trained the AI models using supervised learning and then used additional “safety training” methods, including more supervised learning,...

View Article

Using LLMs to Unredact Text

Initial results in using LLMs to unredact text based on the size of the individual-word redaction rectangles. This feels like something that a specialized ML system could be trained on.

View Article


AI Industry is Trying to Subvert the Definition of “Open Source AI”

The Open Source Initiative has published (news article here) its definition of “open source AI,” and it’s terrible. It allows for secret training data and mechanisms. It allows for development to be...

View Article


Detecting Pegasus Infections

This tool seems to do a pretty good job. The company’s Mobile Threat Hunting feature uses a combination of malware signature-based detection, heuristics, and machine learning to look for anomalies in...

View Article
Browsing latest articles
Browse All 14 View Live