It’s a world of powerful algorithms, where we’re increasingly trusting decisions that have been made by software. But will there be unintended consequences? Last week Popular Mechanics shared some startling research: “Left to their own devices, pricing algorithms resort to collusion.”
This month a team of four researchers in Europe unveiled a new study on their experiments with AI-powered pricing algorithms in a carefully controlled environment “to demonstrate that even relatively simple algorithms systematically learn to play sophisticated collusive strategies.”
“Most worrying is that they learn to collude by trial and error, with no prior knowledge of the environment in which they operate, without communicating with one another, and without being specifically designed or instructed to collude,” the researchers concluded.
Pricing algorithms can learn to collude, new paper demonstrates. Surprising finding: they don't coordinate on the joint profit maximizing price. There is still some hope for human price-fixers. https://t.co/5IgwZ1DhTX @emilioc_
— Bastiaan Overvest (@BOvervest) February 14, 2019
The algorithms were trained using reinforcement learning — that learn-from-experience technique that’s proven so effective for besting human players in games like chess and Go. The researchers let two algorithms set competing prices, over and over again, and the algorithms appear to have spontaneously learned not to engage in damaging price wars, instead settling on (and consistently charging) a price above what would’ve been competitive. The researchers call the results “a distinctive sign of genuine collusion, and it would be difficult to explain otherwise.”
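The flavor of that experiment can be sketched in a few dozen lines. The following is a rough illustration, not the researchers’ actual setup (their paper used a logit demand model, a finer price grid, and many more training rounds): two Q-learning agents repeatedly set prices in a simple duopoly, each one conditioning only on the rival’s last price. The price grid, cost, and winner-take-all demand rule here are all illustrative assumptions.

```python
import random

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]  # illustrative discrete price grid
COST = 1.0                          # marginal cost (assumption)
ALPHA, GAMMA = 0.1, 0.95            # learning rate, discount factor
ROUNDS = 50_000

def demand(p_own, p_rival):
    """Cheapest seller takes the whole market; a tie splits it."""
    if p_own < p_rival:
        return 1.0
    if p_own == p_rival:
        return 0.5
    return 0.0

# One Q-table per firm: state = rival's previous price, action = own price.
q = [{(s, a): 0.0 for s in PRICES for a in PRICES} for _ in range(2)]

rng = random.Random(0)
last = [rng.choice(PRICES), rng.choice(PRICES)]
for t in range(ROUNDS):
    # Explore a lot early on, then mostly exploit the learned strategy.
    eps = max(0.02, 1.0 - t / (0.8 * ROUNDS))
    actions = []
    for i in range(2):
        state = last[1 - i]
        if rng.random() < eps:
            actions.append(rng.choice(PRICES))
        else:
            actions.append(max(PRICES, key=lambda p: q[i][(state, p)]))
    for i in range(2):
        # Profit this round is the only feedback signal either agent gets.
        reward = (actions[i] - COST) * demand(actions[i], actions[1 - i])
        state, next_state = last[1 - i], actions[1 - i]
        best_next = max(q[i][(next_state, p)] for p in PRICES)
        q[i][(state, actions[i])] += ALPHA * (
            reward + GAMMA * best_next - q[i][(state, actions[i])]
        )
    last = actions

print("prices after learning:", last)  # watch whether they settle above cost
```

Note that neither agent is told anything about the other, or about collusion; each sees only its own profit. Whether a toy run like this settles at a supracompetitive price depends on the parameters, which is exactly the kind of question the researchers’ much larger experiments were designed to answer.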
And it’s not particularly easy to eliminate the behavior, since “the propensity to collude is stubborn — substantial collusion continues to prevail even when the active firms are three or four in number.”
Their results were summarized in a larger paper released in December. It argues that even in more complex scenarios, “our algorithms achieve convergence in a matter of minutes, which suggests that we are still far from having exhausted their learning capacity.” And their research also suggests another worrying possibility: that algorithms “may actually be better than humans at colluding tacitly.”
“What is most worrying is that the algorithms leave no trace of concerted action — they learn to collude purely by trial and error, with no prior knowledge of the environment in which they operate, without communicating with one another, and without being specifically designed or instructed to collude,” the researchers write. “From the antitrust standpoint, the concern is that these autonomous pricing algorithms may independently discover that if they are to make the highest possible profit, they should avoid price wars. That is, they may learn to collude even if they have not been specifically instructed to do so, and even if they do not communicate with one another.
“This is a problem,” the researchers conclude.
The end results may be beneficial to sellers, but they won’t benefit consumers, since “good performance” means the highest possible prices. While acknowledging that more research is needed, based on these results the researchers are now warning that this kind of AI-powered tacit collusion “may become more prevalent” — requiring a stronger response from antitrust enforcers.
When Algorithms Eat the World
It’s an issue that will soon be all around us. Even by 2015, one study had estimated that a third of the prices for Amazon’s top sellers were already being set by an algorithm. After noting that some algorithms change prices hundreds of times a day, those researchers warned that “the impact of algorithmic pricing on marketplaces and customers is not yet understood,” adding “especially in heterogeneous markets that include competing algorithmic and non-algorithmic sellers.”
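To see why rapid repricing gets complicated when rivals run it too, consider a hypothetical rule-based repricer (a common pattern on marketplaces, though the exact rules sellers use vary): undercut the lowest competing price by a small margin, but never fall below a floor. The function name, margin, and floors below are all made up for illustration.

```python
def reprice(competitor_prices, floor, undercut=0.01):
    """Hypothetical marketplace repricing rule: undercut the lowest
    rival price by a small margin, never dropping below a floor."""
    lowest = min(competitor_prices)
    return max(floor, round(lowest - undercut, 2))

# Two sellers running the same rule chase each other's prices downward,
# a penny at a time, until their floors finally bind.
a, b = 30.00, 28.50
for _ in range(1000):
    a = reprice([b], floor=20.00)
    b = reprice([a], floor=22.00)
print(a, b)  # -> 21.99 22.0
```

With these floors the race stops just below the higher floor; but the same feedback loop, pointed upward by a different rule, is what makes the interaction of competing algorithms so hard to predict.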
The researchers point out that the laws in both Europe and the U.S. only bar explicit, intentional collusion — and not this intuited, no-communication collusion, “on the grounds that it is unlikely to occur among human agents and that, even if it did occur, it would be next to impossible to detect.”
But is that same leniency still appropriate in a world of automated algorithms? Their paper acknowledges that “Though no real-world evidence of autonomous algorithmic collusion has been produced so far, antitrust agencies are actively debating the problem.”
The world’s only antitrust case involving an algorithm spanned four months at the end of 2013, when an online poster retailer conspired with other sellers on Amazon Marketplace to fix the prices of certain posters, according to The New Yorker. “We will not tolerate anticompetitive conduct, whether it occurs in a smoke-filled room or over the Internet using complex pricing algorithms,” Assistant Attorney General Bill Baer told the magazine.
In that case, the algorithm used “wasn’t an impediment to prosecution,” according to The New Yorker, “because the seller had otherwise demonstrated a will to collude with other parties and then coded the algorithm to carry out the agreement.”
Tricks of the Human Enforcers
So where are we now? In 2017 Maureen Ohlhausen, then the acting chairman of America’s Federal Trade Commission, gave a speech where she warned her audience that “some of the concerns about algorithms are a bit alarmist… An algorithm is a tool, and like any other tool, it can be put to either useful purposes or nefarious ends.” Transparency may have good or bad effects for consumers, but “there is nothing inherently suspect about using computer algorithms to look carefully at the world around you before participating in markets.”
Towards the end of her speech she acknowledged that “In theory, these systems can allow competitors to communicate with each other in ways that may be difficult for enforcers to detect,” warning about the possibility of “using algorithms essentially to fly under the radar, so their unlawful agreements can escape detection by the enforcement agencies.”
But at the end of the day, Ohlhausen pointed out, the real question is simply whether enforcement agencies can still recognize and respond to collusion.
For example, in 1993 eight airlines were charged with violating the Sherman Antitrust Act (passed in 1890) by “agreeing to fix prices by increasing fares, eliminating discounted fares, and setting fare restrictions.” Ohlhausen noted that “Both the enforcers and the court had little trouble understanding the legal implications of the airlines’ conduct. This is because the type of technology used to communicate with competitors is wholly irrelevant to the legal analysis.
“Whether it is phone calls, text messages, algorithms or Morse code, the underlying legal rule is the same — agreements to set prices among competitors are always unlawful,” she said.
She also pointed out that existing laws would also handle the situation where collusion was being contracted out to a third-party algorithm. “Is it okay for a guy named Bob to collect confidential price strategy information from all the participants in a market, and then tell everybody how they should price? If it isn’t okay for a guy named Bob to do it, then it probably isn’t okay for an algorithm to do it either.”
In the end, tricky humans may simply turn the algorithms against each other. Last May, Reuters reported that EU regulators “may set up their own algorithms to find companies that use software to fix prices with peers or squeeze out their rivals,” citing comments from European Competition Commissioner Margrethe Vestager.
“It is a hypothesis that not all algorithms will have been to law school,” she said. “So maybe there are a few out there who may get the idea that they should collude with another algorithm who hasn’t been to law school either.”
The European Commission had already found that two-thirds of retailers were using algorithms to at least track their competitors’ prices, and Vestager had commissioned a further study, threatening bigger fines for any companies ultimately found to have used algorithms to collude.
So at least we humans, as Reuters points out, are already on the alert.
The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: MADE, Prevalent, Real, Bit.