• Home
  • People
    Current Members
    Lab Alumni
  • Research
    Overview
    Highlights
    Methods & Tools
  • Publications
  • News
  • Resources
  • Join Us

Journal Club & Teaching

An opponent striatal circuit for distributional reinforcement learning

Abstract

Machine learning research has achieved large performance gains on a wide range of tasks by expanding the learning target from mean rewards to entire probability distributions of rewards—an approach known as distributional reinforcement learning (RL). The mesolimbic dopamine system is thought to underlie RL in the mammalian brain by updating a representation of mean value in the striatum, but little is known about whether, where and how neurons in this circuit encode information about higher-order moments of reward distributions. Here, to fill this gap, we used high-density probes (Neuropixels) to record striatal activity from mice performing a classical conditioning task in which reward mean, reward variance and stimulus identity were independently manipulated. In contrast to traditional RL accounts, we found robust evidence for abstract encoding of variance in the striatum. Chronic ablation of dopamine inputs disorganized these distributional representations in the striatum without interfering with mean value coding. Two-photon calcium imaging and optogenetics revealed that the two major classes of striatal medium spiny neurons—D1 and D2—contributed to this code by preferentially encoding the right and left tails of the reward distribution, respectively. We synthesize these findings into a new model of the striatum and mesolimbic dopamine that harnesses the opponency between D1 and D2 medium spiny neurons to reap the computational benefits of distributional RL.
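The core idea described in the abstract (learning a population of value estimates that tile a reward distribution, with "optimistic" and "pessimistic" units mirroring D1 and D2 neurons) can be illustrated with a small simulation. The sketch below is not the paper's model; it is a minimal expectile-style learning rule in which each unit applies asymmetric learning rates to positive versus negative prediction errors, so that high-`tau` units converge toward the upper tail of the reward distribution and low-`tau` units toward the lower tail. All function and variable names are illustrative.

```python
import random

def simulate_expectile_units(rewards, taus, alpha=0.02, n_steps=20000, seed=0):
    """Each unit i keeps a value estimate V_i updated with asymmetric
    learning rates: positive prediction errors are scaled by tau_i,
    negative ones by (1 - tau_i). High-tau units end up "optimistic"
    (upper tail), low-tau units "pessimistic" (lower tail)."""
    rng = random.Random(seed)
    values = [0.0 for _ in taus]
    for _ in range(n_steps):
        r = rng.choice(rewards)            # sample a reward outcome
        for i, tau in enumerate(taus):
            delta = r - values[i]          # prediction error
            rate = alpha * (tau if delta > 0 else (1 - tau))
            values[i] += rate * delta
    return values

# Bimodal reward distribution: half the trials deliver 1 unit, half 9.
rewards = [1.0, 9.0]
taus = [0.1, 0.5, 0.9]
v = simulate_expectile_units(rewards, taus)
# Low-tau units settle below the mean, the tau = 0.5 unit near the
# mean (5), and high-tau units above it, so the population as a whole
# carries information about reward variance, not just its mean.
```

A single unit with `tau = 0.5` recovers classical mean-value RL; the variance information only emerges at the population level, which is one way to read the paper's claim that D1 and D2 populations jointly encode the distribution's tails.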


Adam S. Lowet, Qiao Zheng, Melissa Meng, Sara Matias, Jan Drugowitsch & Naoshige Uchida. An opponent striatal circuit for distributional reinforcement learning. Nature, February 2025. [LINK]


Speaker: Qiyue Zhang

Time: 9:00 am, May 12, 2025

Location: CIBR A622





  • Jingfeng Zhou Lab
  • Chinese Institute for Brain Research, Beijing
  • Bldg 3, 9 Yike Rd, ZGC Life Sci Park, Changping, Beijing 102206

2021–2025 © Zhou Lab - Chinese Institute for Brain Research, Beijing - 京ICP备18029179号 ❀