Leaders Watch

Yoshua Bengio

Researcher · Deep learning pioneer; scientific director, Mila

Tier 1 focus

A co-founder of modern deep learning who has moved into explicit AI safety and governance work, including institutional experiments that bridge research and policy.

We follow how a core architect of the field narrates obligation as capabilities scale—not as conversion, but as updated responsibility.

Key themes

Drawn from profile tags and timeline entries—useful for spotting which vocabulary recurs as years advance.

Timeline

Chronological, expandable rows. Key moments are highlighted. Each row should tie to a primary URL or first-party document—we add new beats as sourcing catches up.

2023

2024

2025

How their thinking has evolved

Editorial synthesis—not a biography. Grounded in the evidence linked in the timeline.

Bengio’s row shows one of the clearest transitions from foundational machine-learning prestige to explicit safety institution-building. What makes it analytically strong is not fame but structure: repeated movement from research authority into named governance initiatives, with dated artefacts and public commitments.

The profile should continue to emphasize that institutional arc (statements, policy convenings, and organisational experiments) while keeping every step tied to primary documents. That is how this timeline remains evidentiary rather than reputational.

Related on sAIfe Hands

Library essays, AI Safety Map entries, Spotlight briefings, and TED talks referenced from this profile.

AI Risk Statement cluster: Primary document · Why 2023 mattered · Signatories