Hi Robert, thanks for the piece. I was wondering if you've had a chance to look at the DreamCoder paper (https://arxiv.org/abs/2006.08381). It seems to me that future models could be heading towards a more hybrid architecture, where one part does interpolation (neural nets) and another does discrete search (like the program synthesis approach in that paper). Do you see those approaches having an interesting impact on how we think about generalization, or do you feel the real insights will come from papers that, as you put it, try hacks on the Deep Learning toolbox? I ask because an older article of yours on Chollet's "On the Measure of Intelligence" was actually my first encounter with that paper, and it led to a lot of study on generalization over the last year. That's it! Cheers, and thanks again for the piece! :)