Beyond neural scaling laws: beating power law scaling via data pruning.
B Sorscher*, R Geirhos*, S Shekhar, S Ganguli, AS Morcos.
NeurIPS 2022 (Outstanding Paper Award)
Neural representational geometry underlies few-shot concept learning.
B Sorscher, S Ganguli, H Sompolinsky.
PNAS 2022
A unified theory for the computational and mechanistic origins of grid cells.
B Sorscher*, G Mel*, SA Ocko, L Giocomo, S Ganguli.
Neuron 2022
A theory of learning with constrained weight-distribution.
W Zhong, B Sorscher, DD Lee, H Sompolinsky.
NeurIPS 2022
Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks.
A Nayebi, A Attinger, M Campbell, K Hardcastle, I Low, CS Mallory, G Mel, B Sorscher, AH Williams, S Ganguli, L Giocomo, D Yamins.
NeurIPS spotlight 2021
A unified theory for the origin of grid cells through the lens of pattern formation.
B Sorscher*, G Mel*, S Ganguli, SA Ocko.
NeurIPS spotlight 2019
A theory of weight distribution-constrained learning.
W Zhong*, B Sorscher*, DD Lee, H Sompolinsky. APS March Meeting 2022.
Understanding the emergence of hexagonal grid cells in networks trained to path integrate.
G Mel*, B Sorscher*, SA Ocko, S Ganguli. COSYNE 2020.
A theory of learning with a constrained weight distribution.
B Sorscher*, W Zhong*, DD Lee, H Sompolinsky. COSYNE 2020.
Harvard University A.B., Physics and Mathematics, 2018
Harvard University A.M., Physics, 2018
Stanford University Ph.D. candidate, Applied Physics, expected 2023
US mail: Ben Sorscher, 290 Jane Stanford Way, Stanford, CA 94305
Email: bsorsch@gmail.com