Forced out of Mountain View a year ago, the computer engineer has created an institute to study bias in the use of artificial intelligence
Timnit Gebru, named by Fortune in 2021 one of the 50 greatest leaders in the world (Kimberly White / Getty Images for TechCrunch)
Timnit Gebru, a computer engineer born in 1984, was at the center of a controversy in December 2020 over her sudden exit from Google in Mountain View, where she worked as technical co-lead of the Ethical Artificial Intelligence Team, the study group that deals with the ethics of AI. She also co-authored a landmark 2018 study that revealed racial and gender bias in facial recognition software.
A year ago, in a paper she was asked to withdraw before publication, Gebru highlighted risks and biases in large language models, findings that Google contested. Gebru threatened to resign, and Google immediately terminated its working relationship with her. Following the researcher's departure, amid controversy over her unfair dismissal, two other engineers left Mountain View.
Now Gebru is looking to change the industry from the outside, as founder of the Dair Institute, the Distributed Artificial Intelligence Research Institute, which will work with artificial intelligence researchers around the world, with a particular focus on Africa and African immigrants in the United States, to evaluate the effects of a technology that is increasingly present in our lives.
In 2021, Gebru was named by Fortune one of the world's 50 greatest leaders. She is very attentive to ethical issues in general, such as the wealth accumulated by the big tech companies, and she also co-founded the Black in AI group to support the work and leadership of the Black community in the field.
“After I got fired by Google, I knew I was going to be blacklisted by a whole group of large technology companies,” Timnit Gebru said in an interview with the Associated Press. And when she decided to create Dair, “the first thing that came to my mind was that I wanted it to be distributed. I have seen how people in certain places simply cannot influence the decisions of technology companies and the course that the development of AI is taking”. The goal, therefore, is “to involve the communities that are usually on the sidelines [of the process] so that they can benefit from it. When there are instances where [an AI] shouldn’t be built, we can say, ‘Well, it shouldn’t be built.’” Which is the exact opposite of the “technological solutionism” perspective.
One of Dair’s first projects will be to use satellite imagery to study geographic apartheid in South Africa with local researchers.
Johannesburg Apartheid Museum (Google Maps)
“When I was at Google, I spent a lot of time trying to change people’s attitudes. For example, they would organize a workshop with all men – 15 male colleagues – and I would send them an email saying, ‘You can’t do this.’ Now I spend my energy thinking about what I want to do, what I want to build, and how to support the people who are already on the right side.” And according to Gebru there are many people like that; they just don’t have a position of power.
“What happened to me at Google,” the researcher recalls, “had to do with a study we had done on large language models, a type of technology that Google Search uses to rank the queries and the question-and-answer boxes we see, as well as machine translation, autocorrect, things like that. What we were seeing was a rush to implement ever larger language models, with more data and more powerful computers, and we wanted to alert people to the potential negative consequences.” Ultimately, Gebru continues, the study would not have received all that attention had it not been for the controversy. “I wish I hadn’t been fired, of course,” she adds.
The researcher is not optimistic about the future of artificial intelligence software; quite the opposite. “What is most depressing to me is that even the applications whose harms people are most aware of are increasing instead of decreasing. We’ve been talking about facial recognition and surveillance for a long time now. There are positive cases: some cities and administrations have banned the use of facial recognition by the police, for example. But then governments use all these technologies whose risks we are only beginning to sense. First in wars, then to keep refugees – products of those wars – at bay. In Mexico there are so many such automated systems, on a scale never seen before. And they are using them primarily to keep people away.”
