4 Comments
Stefania

Such a well-written and interesting read. The mathematics and principles of Generative AI are among the most captivating concepts I have encountered. It has become a tool used by many people every day without a clear knowledge of the engine behind it. Recently, the narrative has shifted toward hostility to Generative AI, but as your work points out, when these models are used correctly, with a clear understanding of the domain we are interpolating data from, they become powerful systems through which we can learn representations of our data. This is where the conversation between domain experts and model experts becomes crucial!

Piyush

Wonderful article. Thank you!

Lucas

This is excellent. Your “speciation window” is a clean name for something I’ve been circling from the RL side: once “progress” is defined through an authored measurement channel, you create a surface that optimizers learn to climb, and the behavior we actually care about often lives in a transient regime before collapse onto the proxy. I’ve been writing about this as the non-neutrality of scalarization, and about benchmarks as governance-by-metric; your diffusion phase picture makes the structural risk unusually legible.

Olexandr Isayev

Diffusion is all you need 😎