Talk

Virtual

Endless runner: Agentic pipelines for the OS supply chain

The models powering your AI applications are only as trustworthy as the packages they depend on. In 2024, attackers proved they don't need to touch your source code to compromise your software: they just need to poison the distribution.


This talk walks through three high-impact supply chain attacks targeting AI/ML Python packages: the Ultralytics YOLO cryptominer injected via a compromised GitHub Actions pipeline, the aiocpa "trust and betray" attack that kept GitHub spotless while poisoning PyPI releases, and the 2022 PyTorch dependency confusion attack that silently exfiltrated SSH keys from thousands of developers. All three share a common pattern: clean source, compromised distribution.

It examines the blind spot these attacks reveal in shift-left security (the source-to-distribution gap) and the defenses that help close it:

• Provenance attestations and transparency logs with Sigstore
• SLSA build integrity levels
• Rebuilding packages from source in hardened infrastructure

It closes with practical takeaways on what protects against these attacks, what does not, and the low-hanging fruit for teams running AI/ML workloads in production.


Register for PlatformCon 2026