Yes, but that doesn't mean it's more efficient, which is what this whole debate is about.
Let’s pretend we’re not talking about AI, but tuna fishing. OpenTuna is sending hundreds of ships to the ocean to go fishing. It’s extremely expensive, but it gets results.
If another fish distributor shows up out of nowhere selling tuna for 1/10 the price, it would be amazing. But if you found out that they could sell them cheap because they were stealing the fish from OpenTuna warehouses, you wouldn’t argue that the secret to catching fish going forward is theft and stop building boats.
So what happens when OpenTuna runs out of fish to steal and there are no more boats?
Information doesn't stop being created. AI models need to be constantly retrained and updated with new information. One of the biggest complaints about GPT-3.5 was its 2021 knowledge cutoff.
Let's pretend you're building a legal analysis AI tool that scrapes the web for information on local, state, and federal law in the US. If your model's training data was from January 2008 and was never updated, then as far as it knew, gay marriage wouldn't be legal in the US, the ACA wouldn't exist, Super PACs would be illegal, the Consumer Financial Protection Bureau wouldn't exist, zoning ordinances in pretty much every city would be out of date, and openly carrying a handgun in Texas would get you jail time.
It would essentially be a useless tool, and copying that old training data wouldn’t make a better product no matter how cheap it was to do.
So once the tuna runs out and we run out of boats?
Maybe we then stop destroying the tuna population?
Or, to bring this back to the point: the environment will be better off once the AI bubble collapses.