By Ed Finn
We now spend an awful lot of time thinking about what algorithms know about us: the ads we see online, the deep archive of our search history, the automated photo-tagging of our families. We spend far less time asking what algorithms want. In some ways, it's a ridiculous question, at least for now: Humans create computational systems to complete certain tasks or solve particular problems, so any kind of intention or agency would have to be built in, right?
This would be an acceptable answer if algorithms didn't surprise us so often. But surprise us they do, from mundane yet hilarious autocorrect and transcription fails to more troubling complex behaviors, like the cascade of bad decisions by high-frequency trading algorithms that helped trigger the 2010 "Flash Crash." There's an interesting philosophical question lurking in there: Where does the surprise come from, exactly? Do complex systems sometimes behave in ways that are objectively, statistically surprising? Or is "surprise" a human invention, another storytelling crutch for mammals whose brains were never well-suited to the rational evaluation of complex cultural systems?
Read the full article at Future Tense…
Learn more about the Future Tense event on The Tyranny of Algorithms…