Thursday, April 6, 2017

"What BlackRock's Robots Don't Know Can Hurt Them"

This is one of the points Ms. Kaminska was making when she decided to dive into the commenting mosh-pit on the post we highlighted in last Saturday's "FT Alphaville: That Time Izabella's Comments Were Better Than The Post (and the post was pretty good)". She gave as good as she got:
...Then, in the cheap seats, the comments veered hard into artificial intelligence and algorithms, and Holy Hannah, hang on for the ride. As I quoted in another context...

...Here's an example of Ms. Kaminska jumping into the scrum:
...An AI can never be all knowing. At best it will operate like a quantum leap ziggy machine quoting probabilities all the time. Which means it can still be gamed or duped by another AI, or by the underlying data.
In some ways this is regressive. Informed trading used to be about certain information....
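The comment's point, that a system which only "quotes probabilities" learned from data can be steered by whoever controls that data, can be sketched in a toy example (all names, signals, and numbers here are hypothetical, not anything BlackRock actually runs):

```python
# Toy illustration: a model that estimates probabilities purely from
# historical frequencies can be "duped by the underlying data".
from collections import Counter

def estimate_up_probability(observations):
    """Estimate P(price up | signal fired) by simple frequency counting."""
    counts = Counter(observations)
    total = counts["up"] + counts["down"]
    return counts["up"] / total if total else 0.5

# Honest history: the signal precedes an up-move 70% of the time.
history = ["up"] * 70 + ["down"] * 30
print(estimate_up_probability(history))   # 0.7

# An adversary (another algo, or spoofed prints on the tape) floods the
# record with fake down-moves; the model dutifully revises downward.
poisoned = history + ["down"] * 100
print(estimate_up_probability(poisoned))  # 0.35
```

Nothing in the model is broken; it is faithfully reporting the probabilities implied by data it has no way to authenticate, which is exactly the opening another AI can exploit.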
From Forbes:
There is nothing surprising about BlackRock’s decision to begin replacing human portfolio managers with artificial intelligences, at least not to anyone who has been following the financial industry. And unless you are one of the dozens of human portfolio managers whose jobs are being replaced by AIs, there is nothing wrong with it either — as long as BlackRock remembers what happened to Knight Capital.

In 2012, Knight Capital almost went bankrupt after its new automated trading algorithm racked up $440 million in losses from bad trades in just 45 minutes.

The Knight Capital story is a cautionary tale about what can happen when companies defer too much of their decision making to automated systems, and a textbook example of what cognitive psychologists call automation bias.
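One common answer to the problem of deferring too much to an automated system is to wrap it in hard limits that humans set and the algorithm cannot override. A minimal sketch of such a pre-trade risk gate (the class name and dollar limits are hypothetical, chosen only for illustration):

```python
# Hypothetical hard limits set by humans, not by the trading strategy.
MAX_ORDER_VALUE = 1_000_000       # per-order cap, in dollars
MAX_GROSS_EXPOSURE = 10_000_000   # running total across approved orders

class RiskGate:
    """Approves orders only while they stay inside fixed hard limits."""

    def __init__(self):
        self.gross_exposure = 0.0

    def approve(self, quantity, price):
        """Return True and book the exposure if the order passes; else False."""
        order_value = abs(quantity * price)
        if order_value > MAX_ORDER_VALUE:
            return False
        if self.gross_exposure + order_value > MAX_GROSS_EXPOSURE:
            return False
        self.gross_exposure += order_value
        return True

gate = RiskGate()
print(gate.approve(1_000, 50.0))    # True: a $50,000 order, within limits
print(gate.approve(100_000, 50.0))  # False: a $5,000,000 order exceeds the cap
```

The design point is that the gate sits outside the strategy: however confident the automated system is, a runaway sequence of orders hits a ceiling that no amount of model output can raise.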

Automation bias refers to our tendency to stop questioning the results generated by such automated systems once we begin to rely on them.

It is a big problem in the airline industry, where increased cockpit automation has made pilots dangerously dependent on such technology. Study after study in flight simulators has shown that even experienced pilots will often disregard important information when automated systems fail to alert them to it or, worse still, make dangerous mistakes when those same systems provide them with erroneous data. Automation bias has been cited as a major factor in several real-world crashes, too, including the loss of Air France 447 in 2009.

The problem is that most people, while initially skeptical of automated systems, come to view them as infallible after they get used to working with them. And that is as true for executives at investment firms as it is for airline pilots.

BlackRock’s robots won’t cost anyone their lives, but the company’s increased reliance on AI fund managers does pose new risks for investors, as the Knight Capital story illustrates.

Of course, so does not using such systems....MORE