Model Evaluation and Threat Research is an AI research charity that looks into the threat of AI agents! That sounds a bit like an AI doomsday cult, and they take funding from the AI doomsday cult organisat…
I suspect that the kind of people who would “know how to use it” don’t use it right now since it has not yet reached “useful if you know how to use it” status.
Software work is dominated by the fat-tailed distribution of the time it takes to figure out and fix a bug, not by typing code. LLMs, much like any other way of pasting in code without having any clue what it does, give that distribution a longer, fatter tail, hence their detrimental effect on productivity.
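To make the fat-tail point concrete, here is a minimal simulation sketch (not from the original comment; all distributions and parameter values are illustrative assumptions). It models per-task time as typing time plus debugging time and compares what happens when typing gets faster versus when the debugging-time tail gets fatter:

import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # simulated tasks

# Typing time: thin-tailed, roughly predictable (hours).
typing = rng.normal(loc=1.0, scale=0.2, size=N).clip(min=0.1)

# Debugging time: fat-tailed. Smaller Pareto shape = fatter tail.
debug_moderate = rng.pareto(3.0, size=N) + 1.0   # you understand the code
debug_fat      = rng.pareto(1.5, size=N) + 1.0   # pasted code you don't understand

baseline    = typing + debug_moderate        # status quo
fast_typing = 0.5 * typing + debug_moderate  # typing twice as fast
fat_tail    = typing + debug_fat             # same typing, fatter debug tail

for name, t in [("baseline", baseline),
                ("fast typing", fast_typing),
                ("fatter debug tail", fat_tail)]:
    print(f"{name:18s} mean={t.mean():6.2f}h  p99={np.quantile(t, 0.99):7.2f}h")

Under these assumptions, halving typing time barely moves the mean or the 99th percentile, while fattening the debugging tail blows both up, which is the commenter's productivity argument in numbers.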
“Useful if you know how to use it” does not sound worth destroying the environment over.
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
https://www.forbes.com/sites/cindygordon/2024/02/25/ai-is-accelerating-the-loss-of-our-scarcest-natural-resource-water/