Listen. I'm no expert, but 99% of the models currently active in the world were trained on the general output of most of humanity... and even then they had trouble producing anything usable.
What happens when AI output is the only output... for all of time? Do the models swap from being pure GPT-style LLMs to working as LRMs, or some combination of models (an ensemble, if you will)? Or do these models no longer need public data, and instead need private data (I'm looking at you, OpenAI, with your "GitHub replacement", just wanna steal more stuff) in order to push models further? It's clear that at some point the degradation compounds: long-context degradation sets in, and LLMs can no longer output the actual best next token, because the inference data matches their training data (because they made the training data?), and then it's just overfitting and underfitting all the way down.
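That degradation loop has actually been studied under the name "model collapse." Here's a toy sketch of the idea (my own illustration, nothing from a real training pipeline): pretend a "model" is just a Gaussian fitted to its data, train each generation only on samples drawn from the previous generation's model, and watch the diversity of the distribution shrink.

```python
import random
import statistics

# Toy illustration of "model collapse": the "model" here is a Gaussian
# (mean + stddev) fitted to finite data. Each generation trains ONLY on
# samples produced by the previous generation's model. Finite-sample
# fitting loses a little diversity on average each round, and the loss
# compounds, so later generations drift toward a much narrower
# distribution. All names and parameters are made up for illustration.

random.seed(0)

mu, sigma = 0.0, 1.0          # generation 0: the "human" data distribution
history = [sigma]             # track diversity (stddev) across generations

for generation in range(300):
    # sample a small, finite training set from the current model
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    # "train" the next model: refit mean and stddev to those samples
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    history.append(sigma)

print(f"stddev at gen 0: {history[0]:.4f}, at gen 300: {history[-1]:.2e}")
```

The small per-generation training set (10 samples) exaggerates the effect so it shows up quickly; with bigger samples the same shrinkage still happens, just over more generations. It's a cartoon, but it's the shape of the worry: once models feed mostly on their own output, the distribution narrows instead of staying as rich as the original human data.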
I hope that LLM usage allows more people to create, build, and imagine, but I also hope there will be the die-hards. The Never-GPTers, writing code from their brains and by hand, and selling it on Etsy for a premium. There has to be some way to make original code non-replicated or non-fungible (blockchain, here's your shot!). I imagine there must be a way to protect or encrypt your code so it's visible only to an actual human being, so it can't be scanned or fed into training data just to make another model slightly better.
I like the idea of turning coding into a commodity, but I don't like the idea of completely removing humans from the loop. Not only is it economically unviable for the rest of us, it sort of breaks down the whole system, doesn't it? If humans aren't doing intelligence work, they won't need tools to make them more productive, faster, more connected... which means fewer frameworks, fewer new languages, more agentic code. Which means fewer humans working, fewer humans making money... fewer things being invested in and built, less innovation, etc., etc.
At what point is there no economy? Who's going to need agentic AI if the whole reason we write software is to speed up manual tasks? Who's going to consume scientific research that isn't funded, because an AI does it? At what point does the recursive loop close, when AI companies don't have any customers left to buy or use their product, so they go out of business, and we have no new companies, no entry-level employees to replace them, etc., etc.?
It all just doesn't seem to make sense. What's the economic principle or structure here? Who pays for what, and who spends what?
I’d really like to know. Thanks for reading my rant. I’ll catch you on the flip side 🙂
– IAmHaxk