Monday, May 2, 2022

Another Firing Among Google’s A.I. Brain Trust, and More Discord

The researchers are considered a key to the company’s future. But they have had a hard time shaking infighting and controversy over a variety of issues.

Less than two years after Google dismissed two researchers who criticized the biases built into artificial intelligence systems, the company has fired a researcher who questioned a paper it published on the abilities of a specialized type of artificial intelligence used in making computer chips.

The researcher, Satrajit Chatterjee, led a team of scientists in challenging the celebrated research paper, which appeared last year in the scientific journal Nature and said computers were able to design certain parts of a computer chip faster and better than human beings.

Dr. Chatterjee, 43, was fired in March, shortly after Google told his team that it would not publish a paper that rebutted some of the claims made in Nature, said four people familiar with the situation who were not permitted to speak openly on the matter. Google confirmed in a written statement that Dr. Chatterjee had been “terminated with cause.”

Google declined to elaborate about Dr. Chatterjee’s dismissal, but it offered a full-throated defense of the research he criticized and of its unwillingness to publish his assessment.

“We thoroughly vetted the original Nature paper and stand by the peer-reviewed results,” Zoubin Ghahramani, a vice president at Google Research, said in a written statement. “We also rigorously investigated the technical claims of a subsequent submission, and it did not meet our standards for publication.”

Dr. Chatterjee’s dismissal was the latest example of discord in and around Google Brain, an A.I. research group considered to be a key to the company’s future. After spending billions of dollars to hire top researchers and create new kinds of computer automation, Google has struggled with a wide variety of complaints about how it builds, uses and portrays those technologies.

Tension among Google’s A.I. researchers reflects much larger struggles across the tech industry, which faces myriad questions over new A.I. technologies and the thorny social issues that have entangled these technologies and the people who build them.

The recent dispute also follows a familiar pattern of dismissals and dueling claims of wrongdoing among Google’s A.I. researchers, a growing concern for a company that has bet its future on infusing artificial intelligence into everything it does. Sundar Pichai, the chief executive of Google’s parent company, Alphabet, has compared A.I. to the arrival of electricity or fire, calling it one of humankind’s most important endeavors.

Google Brain started as a side project more than a decade ago when a group of researchers built a system that learned to recognize cats in YouTube videos. Google executives were so taken with the prospect that machines could learn skills on their own, they rapidly expanded the lab, establishing a foundation for remaking the company with this new artificial intelligence. The research group became a symbol of the company’s grandest ambitions.

But even as Google has promoted the technology’s potential, it has encountered resistance from employees about its application. In 2018, Google employees protested a contract with the Department of Defense, concerned that the company’s A.I. could end up killing people. Google eventually pulled out of the project.

In December 2020, Google fired one of the leaders of its Ethical A.I. team, Timnit Gebru, after she criticized the company’s approach to minority hiring and pushed to publish a research paper that pointed out flaws in a new type of A.I. system for learning languages.

Before she was fired, Dr. Gebru was seeking permission to publish a research paper about how A.I.-based language systems, including technology built by Google, may end up using the biased and hateful language they learn from text in books and on websites. Dr. Gebru said she had grown exasperated over Google’s response to such complaints, including its refusal to publish the paper.

A few months later, the company fired the other head of the team, Margaret Mitchell, who publicly denounced Google’s handling of the situation with Dr. Gebru. The company said Dr. Mitchell had violated its code of conduct.

The paper in Nature, published last June, promoted a technology called reinforcement learning, which the paper said could improve the design of computer chips. The technology was hailed as a breakthrough for artificial intelligence and a vast improvement to existing approaches to chip design. Google said it used this technique to develop its own chips for artificial intelligence computing.

Google had been working on applying the machine learning technique to chip design for years, and it published a similar paper a year earlier. Around that time, Google asked Dr. Chatterjee, who has a doctorate in computer science from the University of California, Berkeley, and had worked as a research scientist at Intel, to see if the approach could be sold or licensed to a chip design company, the people familiar with the matter said.

But Dr. Chatterjee expressed reservations in an internal email about some of the paper’s claims and questioned whether the technology had been rigorously tested, three of the people said.

While the debate about that research continued, Google pitched another paper to Nature. For the submission, Google made some adjustments to the earlier paper and removed the names of two authors, who had worked closely with Dr. Chatterjee and had also expressed concerns about the paper’s main claims, the people said.

When the newer paper was published, some Google researchers were surprised. They believed that it had not followed a publishing approval process that Jeff Dean, the company’s senior vice president who oversees most of its A.I. efforts, said was necessary in the aftermath of Dr. Gebru’s firing, the people said.

Google, along with Anna Goldie, one of the paper’s two lead authors, who wrote it with a fellow computer scientist, Azalia Mirhoseini, said the changes from the earlier paper did not require the full approval process. Google allowed Dr. Chatterjee and a handful of internal and external researchers to work on a paper that challenged some of its claims.

The team submitted the rebuttal paper to a so-called resolution committee for publication approval. Months later, the paper was rejected.

The researchers who worked on the rebuttal paper said they wanted to escalate the issue to Mr. Pichai and Alphabet’s board of directors. They argued that Google’s decision to not publish the rebuttal violated its own A.I. principles, including upholding high standards of scientific excellence. Soon after, Dr. Chatterjee was informed that he was no longer an employee, the people said.

Ms. Goldie said that Dr. Chatterjee had asked to manage their project in 2019 and that they had declined. When he later criticized it, she said, he could not substantiate his complaints and ignored the evidence they presented in response.

“Sat Chatterjee has waged a campaign of misinformation against me and Azalia for over two years now,” Ms. Goldie said in a written statement.

She said the work had been peer-reviewed by Nature, one of the most prestigious scientific publications. And she added that Google had used their methods to build new chips and that these chips were currently used in Google’s computer data centers.

Laurie M. Burgess, Dr. Chatterjee’s lawyer, said it was disappointing that “certain authors of the Nature paper are trying to shut down scientific discussion by defaming and attacking Dr. Chatterjee for simply seeking scientific transparency.” Ms. Burgess also questioned the leadership of Dr. Dean, who was one of 20 co-authors of the Nature paper.

“Jeff Dean’s actions to repress the release of all relevant experimental data, not just data that supports his favored hypothesis, should be deeply troubling both to the scientific community and the broader community that consumes Google services and products,” Ms. Burgess said.

Dr. Dean did not respond to a request for comment.

After the rebuttal paper was shared with academics and other experts outside Google, the controversy spread throughout the global community of researchers who specialize in chip design.

The chip maker Nvidia says it has used methods for chip design that are similar to Google’s, but some experts are unsure what Google’s research means for the larger tech industry.

“If this is really working well, it would be a really great thing,” said Jens Lienig, a professor at the Dresden University of Technology in Germany, referring to the A.I. technology described in Google’s paper. “But it is not clear if it is working.”

