Summary
- Anna Makanju, OpenAI’s VP of global affairs, claims that reasoning models like o1 can make AI less biased by identifying biases in their own answers and adhering more closely to rules against producing harmful responses
- Internal testing found that o1 is less likely than non-reasoning models to produce toxic, biased, or discriminatory answers
- However, on some measures of a bias test, o1 performed worse than GPT-4o, showing more explicit discrimination on race and age
- While reasoning models show potential to reduce AI bias, they also have drawbacks, including higher cost and slower responses