“in all fairness, everything is an algorithm”
While we’re here, can I get an explanation of that one too? I think I’m having trouble separating the concept of an algorithm from the concept of causality: an algorithm is a set of steps that turns one piece of data into another, and the world is more or less deterministic at the scale of humans. Just with the caveat that neither a sufficiently complex algorithm nor any chaotic system can be predicted analytically.
I think I might understand it better with some examples of things that might look like algorithms but aren’t.
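To pin down my caveat about chaotic systems, here’s a minimal sketch (using the standard logistic map, my choice of example): a fully deterministic one-line rule whose long-run behavior still can’t be forecast, because nearby starting points diverge exponentially.

```python
def logistic(x, r=4.0):
    # One fully deterministic line, yet chaotic at r = 4.
    return r * x * (1.0 - x)

a, b = 0.300000, 0.300001  # two almost-identical starting points
for step in range(1, 31):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step}: {a:.6f} vs {b:.6f}")
```

By step 30 or so the two trajectories share nothing, even though nothing random ever happened.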
speaking of 4th dimensional processing, https://en.wikipedia.org/wiki/Holonomic_brain_theory is pretty interesting imo
An algorithm is:
A finite set of unambiguous instructions that, given some set of initial conditions, can be performed in a prescribed sequence to achieve a certain goal and that has a recognizable set of end conditions.
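To anchor that definition before we start squinting, here’s the textbook case that satisfies every clause (a minimal Python sketch; the choice of example is mine): Euclid’s algorithm for the greatest common divisor.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: finite, unambiguous steps with a clear end."""
    # Initial conditions: two non-negative integers, not both zero.
    while b != 0:
        a, b = b, a % b  # prescribed sequence: replace (a, b) with (b, a mod b)
    return a             # recognizable end condition: the remainder hits zero

print(gcd(48, 18))  # -> 6
```

Every term in the definition maps onto something concrete there, which is exactly what stops being true once the squinting starts.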
For the sake of argument, let’s be real generous with the terms “unambiguous”, “sequence”, “goal”, and “recognizable” and say everything is an algorithm if you squint hard enough. It’s still not the be-all-end-all take it’s treated as.
When you create an abstraction, you remove context from a group of things in order to focus on their shared behavior(s). By removing that context, you’re also removing the ability to describe and focus on their non-shared behavior(s). So picking and choosing which behaviors to focus on is neither an arbitrary nor an objective decision.
If you want to look at everything as an algorithm, you’re losing a ton of context and detail about how the world works. This is a useful tool for us to handle complexity and help our minds tackle giant problems. But people don’t treat it as a tool to focus attention. They treat it as a secret key to unlocking the world’s essence, which is just not valid for most things.
Thanks for the help, but I think I’m still having some trouble understanding what that all means exactly. Could you elaborate on an example where thinking of something as an algorithm results in a clearly and demonstrably worse understanding of it?
Algorithmic thinking is often bad at examining aspects of evolution. Take the fact that crabs, turtles, and trees are all convergent forms, each of which has evolved multiple times through different paths. What is the unambiguous instruction set for evolving a crab? What initial conditions does it require? Can we really call the “instruction set” for evolving crabs “prescribed”? Prescribed by whom?

There’s a really common mental pattern in evolutionary thinking where we want to sort variations into meaningful and not-meaningful buckets: this aspect of this variation was advantageous, whereas that one is just a fluke, and so on. That’s much closer to algorithmic thinking than to the reality, where variation is truly random and the only thing that makes the process produce coherent results is relative environmental stability over a really long period of time.
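To make the stability point concrete, here’s a toy sketch (my own illustration, nothing more): three independent lineages of pure random mutation, filtered only by a fixed “environment” (an unchanging target string standing in for a stable niche), end up at the same form. No prescribed instruction set anywhere, just noise plus a target that doesn’t move.

```python
import random

def evolve(target, alphabet="ab", steps=5000, seed=None):
    """Toy hill-climb: random one-letter mutations, keeping a mutant only
    if it scores no worse. No prescribed path; coherence comes entirely
    from the fixed target (the stand-in for a stable environment)."""
    rng = random.Random(seed)
    current = "".join(rng.choice(alphabet) for _ in target)
    score = lambda s: sum(c == t for c, t in zip(s, target))
    for _ in range(steps):
        i = rng.randrange(len(current))
        mutant = current[:i] + rng.choice(alphabet) + current[i + 1:]
        if score(mutant) >= score(current):  # selection, not instruction
            current = mutant
    return current

# Three independent lineages, three different random paths, one shared form,
# purely because the "environment" (the target) never moved.
for seed in (1, 2, 3):
    print(evolve("abbaabba", seed=seed))
```

Change the target partway through and the “convergence” evaporates, which is the part the algorithmic framing tends to gloss over.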
I would also guess that algorithmic thinking fails to capture many aspects of ecological systems, but I’ve thought less about that. It’s not that these subjects can’t gain anything from being looked at through an algorithmic lens; some really simple mathematical models of population growth are scarily accurate, actually. But insisting on only seeing them algorithmically won’t bring you closer to the essence of these systems either.
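For reference, the kind of simple model I mean is logistic growth (my pick of example, the same equation family as the chaos sketch earlier, just in its tame regime): the population grows in proportion to both its current size and the remaining room in the environment.

```python
def logistic_step(n, r=0.5, k=1000.0):
    """One step of discrete logistic growth: n' = n + r*n*(1 - n/k).
    r is the growth rate and k the carrying capacity (illustrative values)."""
    return n + r * n * (1.0 - n / k)

pop = 10.0
for _ in range(25):
    pop = logistic_step(pop)
print(round(pop))  # settles near the carrying capacity, k = 1000
```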