This is like asking what your probability is of being run over by a car while sitting in your living room in your high-rise apartment…
I actually remember a 2015 study from Toretto et al. showing that this is really more plausible than you might think. Other than that, this is a great piece. I particularly appreciated it as one of the better breakdowns of what people mean by “ChatGPT is just a giant table of numbers” for someone without a technical background in the area.
A computer can never be held accountable
Therefore a computer must never make a management decision
So if it turns out, as people like Penrose assert, that the brain has a certain quantum je-ne-sais-quoi, then all bets for representing the totality of even the simplest neural state with conventional computing hardware are off.
No, that’s not what Penrose asserts. His whole thing has been to say that quantum mechanics itself needs to be changed: that it is wrong in a way that matters for understanding brains.
Cosigned by the author; I also include my two cents expounding on the cheque-checker ML.
The most consequential failure mode — that both the text (…) and the numeric (…) converge on the same value that happens to be wrong (…) — is vanishingly unlikely. Even if that does happen, it’s still not the end of the world.
I think it’s extremely important that this is a kind of error even a human operator could conceivably make. It’s not some inexplicable machine error; likely the scribbles were just exceedingly illegible on that one cheque. We’re not introducing a completely new, dangerous failure mode.
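The “vanishingly unlikely” claim is easy to sanity-check with a toy model. This sketch assumes (my numbers, not the article’s) that the words field and the digits field are read independently, each misread with probability 2%, and a misread lands uniformly on one of 50 plausible wrong amounts:

```python
import random

def simulate(trials=1_000_000, p_err=0.02, k_wrong=50, seed=0):
    """Toy model of two independent cheque readers (words field and
    digits field). Each misreads with probability p_err; a misread
    picks one of k_wrong plausible wrong amounts uniformly. Count how
    often BOTH readers agree on the SAME wrong amount (the dangerous
    converge-on-a-wrong-value case)."""
    rng = random.Random(seed)
    true_amount = 0  # label the correct reading as 0
    dangerous = 0
    for _ in range(trials):
        a = true_amount if rng.random() > p_err else rng.randrange(1, k_wrong + 1)
        b = true_amount if rng.random() > p_err else rng.randrange(1, k_wrong + 1)
        if a == b != true_amount:
            dangerous += 1
    return dangerous / trials

# Analytically: p_err**2 / k_wrong = 0.02**2 / 50 = 8e-6, i.e. both
# readers must misread AND happen to pick the same wrong value.
print(f"observed dangerous-agreement rate: {simulate():.2e}")
```

Even with these fairly pessimistic error rates, the dangerous case lands in the one-in-a-hundred-thousand range, orders of magnitude rarer than either individual misread.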
Compare that to, for example, using an LLM in lieu of a person in customer service. The failure mode there is that the system can manufacture things whole cloth and tell you to do something stupid and/or dangerous, like telling you to put glue on pizza. No human operator would ever do that, and even if one did, that would be a straight-up prosecutable crime with a clear person responsible. In the earlier analogy, it would be a human operator knowingly inputting fraudulent information from a cheque. But then again, there would be a human signature on the transaction and a person responsible.
So not only is a gigantic LLM matrix a terrible heuristic for most tasks (e.g. “how to solve my customer’s problem”), it introduces failure modes that are outlandish, essentially impossible for a human (or a specialised ML system), and leave no chain of responsibility. It’s a real stinky ball of bull.
indeed. the recent Air Canada matter underscores this.
the computational cost of operating over a matrix is always going to be convex relative to its size
This makes no sense - “convex” doesn’t mean fast-growing. For instance a constant function is convex.
you will be pleased to know that the original text said “superlinear”; i just couldn’t remember if the lower bound of multiplying a sufficiently sparse matrix was actually lower than O(n²) (because you could conceivably skip over big chunks of it) and didn’t feel like going and digging that fact out. i briefly felt “superlinear” was too clunky though and switched it to “convex” and that is when you saw it.
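On the “skip over big chunks of it” point: if the nonzero entries are stored in a compressed row layout, a matrix–vector product costs O(nnz) rather than O(n²), so the bound really can drop below quadratic for sparse enough matrices. A minimal plain-Python sketch (illustrative only, not how real sparse libraries are implemented):

```python
def dense_matvec(A, x):
    """O(n*m): touches every entry, zeros included."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

def sparse_matvec(rows, x, n_rows):
    """O(nnz): rows maps row index -> list of (col, value) pairs for
    the nonzero entries only, so zero entries are skipped entirely."""
    y = [0.0] * n_rows
    for i, entries in rows.items():
        y[i] = sum(v * x[j] for j, v in entries)
    return y

# A 4x4 matrix with only 3 nonzeros:
A = [[0, 0, 2, 0],
     [0, 0, 0, 0],
     [5, 0, 0, 1],
     [0, 0, 0, 0]]
rows = {0: [(2, 2.0)], 2: [(0, 5.0), (3, 1.0)]}
x = [1.0, 2.0, 3.0, 4.0]
print(dense_matvec(A, x))         # 16 multiplies
print(sparse_matvec(rows, x, 4))  # same result, 3 multiplies
```

“Superlinear in nnz” is probably the safest phrasing: the dense cost is Θ(n²) per matvec, while the sparse cost scales with the number of nonzeros, which can be far smaller.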
handwritten cheques—an archaic system for transferring money that I want to underscore I believe is nevertheless perfectly ordinary and fine—and will no doubt be with us until money itself is somehow abolished.
Well, in the US. As a consumer in Europe (at least the countries I’ve lived in), getting a cheque book from your bank is in most cases impossible, and I’d say that’s probably for the better.
Still, fun article.
Surprised to see Natwest will still send you a chequebook, though it takes a phone call.
Huh. To be honest, I wouldn’t know what to use it for. Most shops here don’t accept them, and if someone paid me with one, I’d probably have to contact my bank, since they don’t have any automated facilities for cashing them.
They just seem like an artifact from a bygone era.