Hi Lucas, thank you for the kind words. I haven’t read the DreamCoder paper yet, but I did watch a talk by Tenenbaum covering it. I definitely believe we may not have found “all the right” priors yet, and that program synthesis is an exciting direction for tackling generalization issues in Deep Learning (as highlighted by the ARC challenge). But there is also the question of how much we want to bake into the system. Ultimately, I enjoy the beauty of fully end-to-end systems :) Just two pointers: I really enjoyed two recent Machine Learning Street Talk podcast episodes, which shaped my way of thinking:
1. Chris Szegedy: on formal mathematics via Transformer-style models (https://youtu.be/ehNGGYFO6ms)
2. Francois Chollet: on the generalization limitations of Deep Learning (https://youtu.be/J0p_thJJnoo)
Have a great day, Rob