Machine-learning models can fail when they try to make predictions for individuals who were underrepresented in the datasets they were trained on.
For example, a model that predicts the best treatment option for someone with a chronic disease might be trained using a dataset that contains mostly male patients. That model might make incorrect predictions for female patients when deployed in a hospital.
To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is promising, it often requires removing large amounts of data, hurting the model's overall performance.
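As a minimal sketch of that naive balancing step, assuming a pandas dataframe with a `group` column marking subgroup membership (both the dataframe layout and the column name are illustrative, not from the paper):

```python
import pandas as pd

def balance_by_downsampling(df: pd.DataFrame, group_col: str = "group",
                            seed: int = 0) -> pd.DataFrame:
    """Downsample every subgroup to the size of the smallest one.

    Simple to implement, but when subgroups are very unbalanced it
    discards a large share of the data, which is exactly what hurts
    the model's overall performance.
    """
    min_size = df[group_col].value_counts().min()
    return (
        df.groupby(group_col, group_keys=False)
          .apply(lambda g: g.sample(n=min_size, random_state=seed))
          .reset_index(drop=True)
    )
```

If one subgroup dominates the dataset, this approach can throw away most of the training data, which is the drawback the MIT technique is designed to avoid.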
MIT researchers developed a new technique that identifies and removes the specific points in a training dataset that contribute most to a model's failures on minority subgroups. By removing far fewer datapoints than other approaches, this technique maintains the overall accuracy of the model while improving its performance for underrepresented groups.
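The article does not spell out how those harmful points are scored; as a rough sketch of the general idea only, one could rank training points by an estimated influence on minority-subgroup error and drop just the top-ranked few. The influence scores and the commented-out `estimate_group_influence` helper below are hypothetical stand-ins, not the authors' method:

```python
import numpy as np

def debias_by_removal(influence: np.ndarray, k: int) -> np.ndarray:
    """Return indices of training points to keep after dropping the k
    points estimated to be most harmful to the minority subgroup.

    influence[i] is a score for how much training point i increases
    error on the minority subgroup (higher = more harmful). Computing
    such scores, e.g. with data-attribution methods, is the crux of
    the approach and is taken as given here.
    """
    worst = np.argsort(influence)[-k:]  # k most harmful points
    return np.setdiff1d(np.arange(influence.size), worst)

# Hypothetical usage; estimate_group_influence is a stand-in, not a real API:
# influence = estimate_group_influence(model, train_set, minority_group)
# keep_idx = debias_by_removal(influence, k=500)
```

The design choice this illustrates: instead of rebalancing the dataset wholesale, only a small number of targeted points are removed, so most of the training data, and hence most of the overall accuracy, is preserved.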
In addition, the technique can identify hidden sources of bias in a training dataset that lacks labels. Unlabeled data are far more common than labeled data in many applications.
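The article does not detail how the unlabeled case works. One common way to surface hidden underperforming subgroups, sketched below purely as an assumption-laden illustration rather than the paper's procedure, is to cluster examples in a model's embedding space and look for clusters with unusually high error rates (using scikit-learn's KMeans; the embeddings and error vector are taken as given):

```python
import numpy as np
from sklearn.cluster import KMeans

def find_worst_cluster(embeddings: np.ndarray, errors: np.ndarray,
                       n_clusters: int = 10) -> int:
    """Cluster examples in embedding space and return the cluster with
    the highest error rate: a candidate hidden (unlabeled) subgroup.

    embeddings has shape (n, d); errors is a 0/1 array marking which
    examples the model got wrong. No subgroup labels are needed.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(embeddings)
    rates = np.array([errors[labels == c].mean() for c in range(n_clusters)])
    return int(np.argmax(rates))
```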
This approach could also be combined with other techniques to improve the fairness of machine-learning models deployed in high-stakes situations. For example, it might someday help ensure that underrepresented patients aren't misdiagnosed due to a biased AI model.
"Many other algorithms that attempt to resolve this problem assume each datapoint matters as much as every other datapoint. In this paper, we are showing that assumption is not true. There are particular points in our dataset that are adding to this bias, and we can discover those data points, remove them, and get better performance,” states Kimia Hamidieh, an electrical engineering and computer system science (EECS) graduate trainee at MIT and author of a paper on this technique.
She wrote the paper with co-lead authors Saachi Jain PhD '24 and fellow EECS graduate student Kristian Georgiev.