Cassie Kozyrkov, who served as Google's chief decision scientist and helped pioneer the field of decision intelligence, is going solo to work on projects that help business leaders navigate the tricky waters of artificial intelligence.
As AI becomes more powerful and more prevalent across industries, Kozyrkov will launch her first LinkedIn course, publish a book and give keynotes on how to make informed decisions. Her goal, she said, is to give leaders the tools to think about how they deploy AI and to help the public hold AI decision makers accountable for choices that affect millions of people.
She spent 10 years at Google, five of them as its chief decision scientist. Among her responsibilities, she guided company leaders to make informed and responsible decisions regarding AI.
“I’ve always believed that Google’s heart is in the right place,” Kozyrkov said. But it’s a big company, and outsiders sometimes equate her personal opinions with Google’s stance on a topic. In her new role, she said, she no longer has to worry about how her advocacy affects the company she represents.
AI is going through a period of massive growth, which has raised concerns about the future for some. Top minds in the AI space have recently warned that it could end humanity as we know it. To many, this moment looks like an inflection point in the world of technology. According to Kozyrkov, there need to be leaders who are educated in decision-making and who can be held accountable by customers.
Raised in South Africa, Kozyrkov graduated from the University of Chicago with a degree in economics. She also holds a master’s degree in mathematical statistics from North Carolina State University and partially completed a PhD in psychology and neuroscience at Duke University. Before working at Google, she spent 10 years as an independent data science consultant.
During Kozyrkov’s tenure as chief decision scientist, which began in 2018, Google’s AI division grew significantly. CEO Sundar Pichai unveiled Duplex, an add-on to Google Assistant that can make phone calls on the user’s behalf, aimed at helping with scheduling appointments, restaurant reservations and other engagements. Google has made leaps and bounds in creating videos from text, images and prompts, and is developing robots that can write their own code. The company also released Bard, a competitor to ChatGPT. Many of Google’s developments have raised ethical questions from employees and academics, not unlike what’s happening at other AI companies. Google did not respond to requests for comment.
Kozyrkov won’t comment on the decisions she made at Google because of her nondisclosure agreement, but it’s not hard to think of areas where the company faces tough choices when it comes to AI. When creating Bard, Google had to decide whether to use copyrighted information to train the AI model; a lawsuit filed against Google in July alleged that it did. Google also had to decide when to release the technology in order to remain competitive with ChatGPT without tarnishing its reputation. The chatbot drew unwanted attention soon after Google published a demo video in which Bard gave a wrong answer.
Kozyrkov’s work revolves around the idea that individuals can make choices that affect many people and that those at the top are not necessarily educated in the practice of decision-making. “It’s easy to think of technology as autonomous,” she said. “But behind that technology are people who make very subjective decisions, with or without the skills to impact millions of lives.”
How best to make decisions is something humans have long struggled with, and the practice is constantly evolving. Benjamin Franklin’s pro/con list is nearly three centuries old, but there are more advanced ways to answer important questions, Kozyrkov said. While she targets business leaders, her methods can also be applied to other important life decisions, such as where to go to college or whether to start a family.
Decision makers should ask themselves: What would it take to change my mind? They should also use data, but before looking at it, they should set criteria for what they will do based on what the data says. This helps decision makers avoid confirmation bias, or using data to confirm an opinion they already hold. According to Kozyrkov, documenting the process of arriving at an important decision, with the information available at the time, is also useful for evaluating the decision’s quality after the fact.
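For readers who think in code, the pre-commitment idea above can be sketched programmatically: fix the decision rule before any data is seen, then apply it and log the outcome. This is only an illustrative sketch; the class, criterion, and threshold below are hypothetical examples, not anything from Kozyrkov's own material.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PreCommittedDecision:
    """A decision whose criterion is fixed *before* the data is examined."""
    question: str
    criterion: str      # the metric we agreed to judge by
    threshold: float    # the cutoff we committed to in advance
    log: list = field(default_factory=list)

    def decide(self, observed_value: float) -> str:
        """Apply the pre-set rule to the observed data and document the outcome."""
        action = "proceed" if observed_value >= self.threshold else "hold off"
        self.log.append(
            f"{date.today()}: {self.criterion} = {observed_value} "
            f"(threshold {self.threshold}) -> {action}"
        )
        return action

# Hypothetical usage: the rule was fixed before the trial data arrived,
# so the data cannot be cherry-picked to confirm a preferred answer.
decision = PreCommittedDecision(
    question="Should we launch the new feature?",
    criterion="trial conversion rate (%)",
    threshold=5.0,
)
print(decision.decide(6.2))  # prints "proceed"
```

The log doubles as the documentation Kozyrkov recommends: a record of what was decided, by what rule, and on what evidence, available for review once outcomes are known.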