6 Comments
redbert:

This got me reading about DQN and the Atari experiments and 'their' approach to generalization... the rabbit hole gets sooo much deeper with the idea of how its structural learning is conceptually similar to ours (?) if that makes sense... I'm not a neuro bro but the parallels between AI and neural systems are wild to "see"...

Michael Halassa:

It makes perfect sense. I think that was the goal of the NHB paper this post is highlighting: to what degree are the representations in AI architectures informative of our own cognitive strategies? How exactly the brain implements these things is a much harder problem, but the algorithmic parallels are striking.

redbert:

bro my neurons are just as confused as I am 😎

Michael Halassa:

My guess is they’re less confused 😄

Sam H:

This is very thought-provoking, but I'm not sure if I lumped or split my way to those thoughts.

Huge implications for therapy - of which I've had a lot - and the lenses currently applied to clients...

Michael Halassa:

I'm glad you liked it. Whether you lumped or split your way through probably makes no difference, so long as you had fun and found something in it that was useful! Cheers.
