“the computational cost of operating over a matrix is always going to be convex relative to its size”
This makes no sense: “convex” doesn’t mean fast-growing. For instance, a constant function is convex.
You will be pleased to know that the original text said “superlinear”; I just couldn’t remember whether the lower bound for multiplying a sufficiently sparse matrix was actually below O(n²) (since you could conceivably skip over big chunks of it) and didn’t feel like digging that fact out. I briefly felt “superlinear” was too clunky, so I switched it to “convex”, and that is when you saw it.
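For what it's worth, the skipping intuition does pan out: with a compressed sparse row (CSR) layout, a matrix-vector multiply touches only the stored nonzeros, so the cost scales with nnz (plus the output length) rather than n². Here's a minimal hand-rolled sketch in Python; the function and variable names are illustrative, not anything from the original post.

    # Minimal CSR (compressed sparse row) matrix-vector multiply.
    # Work is O(nnz + n_rows): only stored nonzeros are touched,
    # so the zero "chunks" of the matrix are skipped entirely.
    def csr_matvec(data, indices, indptr, x):
        """y = A @ x for A stored in CSR form.

        data:    nonzero values, laid out row by row
        indices: column index of each entry in `data`
        indptr:  indptr[i]:indptr[i+1] slices out row i
        """
        n_rows = len(indptr) - 1
        y = [0.0] * n_rows
        for i in range(n_rows):
            for k in range(indptr[i], indptr[i + 1]):
                y[i] += data[k] * x[indices[k]]
        return y

    # A 4x4 matrix with only 4 nonzeros: the loops above do 4
    # multiply-adds instead of 16.
    #     [[5, 0, 0, 0],
    #      [0, 8, 0, 0],
    #      [0, 0, 3, 0],
    #      [0, 6, 0, 0]]
    data    = [5.0, 8.0, 3.0, 6.0]
    indices = [0, 1, 2, 1]
    indptr  = [0, 1, 2, 3, 4]
    x = [1.0, 2.0, 3.0, 4.0]
    print(csr_matvec(data, indices, indptr, x))  # [5.0, 16.0, 9.0, 12.0]

(Sparse-times-sparse matrix multiplication is the murkier case, but output-sensitive algorithms there also run in time proportional to the nonzero multiplications performed, which beats O(n²) when the matrices are sparse enough.)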