Cathy O’Neil: Founder of ORCAA and Author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

Big Data, artificial intelligence (AI) and machine learning are becoming some of the hottest topics in finance. In early May, a packed room of CFA charterholders gathered to hear a presentation by Weapons of Math Destruction author Cathy O’Neil, followed by a panel discussion on the topic.

O’Neil began her discussion with Google’s autocomplete predictive search, which can occasionally surface unreliable or conspiracy-laden results. She said that Google shouldn’t be able to have it both ways, making money off users’ trust while claiming that bogus search results aren’t its fault.

According to O’Neil, AI is not a model for truth. Artificial intelligence technology could be characterized as a series of opinions fed into an algorithm. The authors behind the algorithm will tell you to “trust the math”, but should we be that trusting when companies are incentivized not by truth but by profit?

AI amounts to making a prediction. There are two parts to an artificial intelligence prediction: the historical data, which contains a possible pattern, and the algorithm’s definition of success (such as a quant generating profit at a certain volatility level). Even mundane things such as determining what to cook for dinner could be characterized as an algorithm.
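
As a rough illustration of those two ingredients, the sketch below shows how the answer an algorithm returns depends entirely on the definition of success the modeler chooses. The data, candidate weights and volatility target are all invented assumptions, not anything from the presentation.

```python
import numpy as np

# A minimal sketch of the two ingredients described above, using made-up data:
# (1) historical data that may contain a pattern, and
# (2) the modeler's definition of success (here: profit at or below a volatility target).

rng = np.random.default_rng(0)
history = rng.normal(0.0005, 0.01, size=(1000, 3))  # hypothetical daily returns for 3 assets

def success(weights, returns, target_vol=0.01):
    """Profit counts only if the portfolio stays within the chosen volatility limit."""
    portfolio = returns @ weights
    return portfolio.mean() if portfolio.std() <= target_vol else -np.inf

# Try a few candidate weightings; the "best" one depends entirely on how success was defined.
candidates = [np.array(w, dtype=float) / sum(w) for w in [(1, 1, 1), (2, 1, 1), (1, 2, 1), (1, 1, 2)]]
best = max(candidates, key=lambda w: success(w, history))
print("weights chosen under this definition of success:", best)
```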

Our lives are increasingly touched by algorithms, in areas such as banking and credit, policing, jobs and even matchmaking. Sometimes the algorithms are incredibly helpful, but sometimes they can cause a great deal of harm. When a company runs an algorithm on you, you should expect that it will optimize the result for the company’s definition of success, not necessarily for what is best for you. O’Neil said that many of today’s algorithms can be characterized as WMDs (Widespread, Mysterious and Destructive). And algorithms do make mistakes, but those mistakes aren’t typically publicized because the algorithms are usually secret intellectual property.

O’Neil told a story about a teacher who was fired because her students received poor test scores. This happened even though the administration didn’t have access to the actual score, which was generated in a black box that no one outside the vendor firm could inspect. When access to the scores was finally provided, they looked essentially like random numbers, with little predictive power from year to year. Some teachers have sued for wrongful termination and won their cases.
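
To make concrete what “essentially random numbers” means here, the sketch below simulates scores that are drawn independently each year; the near-zero year-over-year correlation is the signature of a score with no predictive power. The data is invented for illustration, not the actual teacher scores.

```python
import numpy as np

# Hypothetical illustration: simulate 200 teachers whose scores in consecutive years
# are drawn independently. Near-zero correlation is what "random numbers with little
# predictive power from year to year" looks like in practice.
rng = np.random.default_rng(1)
year1 = rng.uniform(0, 100, size=200)  # simulated scores, year 1
year2 = rng.uniform(0, 100, size=200)  # same teachers, year 2, drawn independently
print("year-over-year correlation:", np.corrcoef(year1, year2)[0, 1])  # close to 0
```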

Another example O’Neil gave of an algorithm causing harm was the case of Kyle Behm, who didn’t get a Kroger job because of a personality test result. The test produced a “red light” outcome, and Kyle was not offered an interview. He complained about the process to his father, an attorney, who determined that the test violated the Americans with Disabilities Act, as it is unlawful for a company to require a health exam as part of a job screening.

One of the main problems with algorithms today is that they tend to look for initial conditions that led to success in the past. Amazon developed a hiring algorithm (which ultimately wasn’t used) that aimed to determine which characteristics of past hires led to success on the job. The algorithm proxied job success with metrics such as salary raises, promotions and tenure of more than four years. Upon scanning the data, the algorithm found that initial conditions such as being named “Jared” and using the word “execute” more frequently on resumes tended to lead to success. Unfortunately, it also turned out that male candidates used the word “execute” more frequently than women did, so some of the characteristics the algorithm was searching for were effectively proxies for gender.
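
To make the proxy problem concrete, the sketch below uses entirely invented numbers (the word-usage rates and base rates are assumptions, not figures from the Amazon project) to show how a feature that never mentions gender can still encode it.

```python
import numpy as np

# Hypothetical illustration of a proxy variable: a resume feature that is never labeled
# "gender" can still stand in for it if the two are correlated in the training data.
rng = np.random.default_rng(2)
n = 10_000
is_male = rng.random(n) < 0.5
# Invented assumption, echoing the anecdote: men use the word "execute" more often.
uses_execute = rng.random(n) < np.where(is_male, 0.40, 0.10)

# A screen that scores applicants purely on that word still favors men on average,
# because the feature is correlated with gender.
score = uses_execute.astype(float)
print("mean score, men:  ", score[is_male].mean())
print("mean score, women:", score[~is_male].mean())
print("corr(uses_execute, is_male):", np.corrcoef(uses_execute, is_male)[0, 1])
```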

Couldn’t there be a market-based solution to all the defects inherent in algorithmic decision-making? According to O’Neil, expecting companies to self-police their own algorithms is unrealistic, because algorithms that maximize profits without any constraint on fairness are more profitable than algorithms with fairness constraints. This dilemma can be seen at Facebook, which enjoys higher engagement and a more lucrative advertising business when its users are arguing about fake news and conspiracies. Most companies facing demanding shareholders would be reluctant to accept lower profitability in order to ensure fair algorithms. Because of this issue and the others outlined above, O’Neil believes that regulation is needed.

Currently, the legality and ethics of employers sourcing alternative data such as health information to make hiring decisions are murky. “What’s stopping Walmart from buying data to see who is sick or healthy [in order to make decisions on employment]?” O’Neil asked.

O’Neil laid out three principles for responsible algorithm usage:

1) First, do no harm

2) Give users the ability to understand scores and decisions

3) Create an FDA-like organization tasked with assessing and approving algorithms of high importance

L to R: Metin Akyol, Ph.D., CFA (Zacks Investment Management), Kevin Franklin (BlackRock), Sam Shapiro (Goldman Sachs Asset Management), Cathy O’Neil (ORCAA)

During the panel discussion, speakers talked about how machine learning and AI are used in their portfolio management processes, particularly for parsing large data sets. They noted that it is more challenging to hold risk models to the same standard as trading models: risk cannot be directly measured, whereas the success of a trading model can easily be evaluated by the P&L it generates. Are machine learning and Big Data a flash in the pan, or are they here to stay? CFA Institute believes it’s the latter and has added the topics to the 2019 CFA curriculum.