😮 AI’s Big Boss Could Have Voiced Concerns Sooner!

When you hear that the godfather of artificial intelligence, Geoffrey Hinton, is leaving Google and expressing regret about his life’s work, it’s significant news indeed. 🤖

Hinton, who made substantial contributions to AI research in the 1970s with his work on neural networks, shared his concerns with several news outlets this week, stating that big tech companies were moving too quickly in deploying AI to the public. A major part of the problem is that AI is achieving human-like capabilities much faster than experts had anticipated. “That’s scary,” he told the New York Times. 😱

Hinton’s concerns are valid, but they would have been more impactful had they been voiced a few years earlier, when other researchers, who couldn’t rely on impending retirement, were raising similar concerns. 🕰️

Interestingly, Hinton sought to clarify via a tweet how the New York Times characterized his motivations, worried that the article implied he left Google to criticize it. “Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google,” he said. “Google has acted very responsibly.” 👍

While Hinton’s reputation in the field might have shielded him from repercussions, this situation underlines a persistent issue in AI research: Large tech companies have such a stronghold on AI research that many of their scientists are hesitant to voice their concerns for fear of jeopardizing their careers. 🤐

One can understand why. Meredith Whittaker, a former research manager at Google, spent a considerable amount on legal fees in 2018 after she helped organize the walkout of 20,000 Google employees over the company’s contracts with the US Department of Defense. “It’s really, really scary to go up against Google,” she said. Whittaker, who is now the CEO of encrypted messaging app Signal, eventually resigned from the tech giant, issuing a public warning about the company’s direction. 😨📱

Two years later, Google AI researchers Timnit Gebru and Margaret Mitchell were fired from the tech giant after they released a research paper that highlighted the risks of large language models, the technology currently at the center of concerns over chatbots and generative AI. They pointed to issues like racial and gender bias, opacity, and environmental cost. 📝🔍

Whittaker is irked that Hinton is now the subject of glowing tributes for his contributions to AI when others took greater risks to stand up for what they believed while still employed at Google. “People with much less power and more marginalized positions were taking real personal risks to name the issues with AI and of corporations controlling AI,” she says. 💔👥

Why didn’t Hinton speak up earlier? The scientist declined to respond to questions. But it seems he has been worried about AI for some time, including when his colleagues were advocating for a more cautious approach to the technology. A 2015 New Yorker article describes him talking to another AI researcher at a conference about how politicians could use AI to terrorize people. When asked why he was still conducting the research, Hinton replied: “I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet.” This statement echoes J. Robert Oppenheimer’s famous description of the “technically sweet” appeal of working on the atomic bomb. 💣🍬

Hinton states that Google has acted “very responsibly” in its deployment of AI. But that’s only partly true. Yes, the company did shut down its facial recognition business over concerns of misuse, and it did keep its powerful language model LaMDA under wraps for two years to make it safer and less biased. Google also restricted the capabilities of Bard, its competitor to ChatGPT. 🕵️‍♂️👍

However, being responsible also entails being transparent and accountable. Google’s history of suppressing internal concerns about its technology does not inspire confidence. 😒🔒

Hopefully, Hinton’s departure and warnings will inspire other researchers at large tech companies to voice their concerns. 🗣️👀

Tech conglomerates have attracted some of the brightest minds in academia with high salaries, generous benefits, and the massive computing power used to train and experiment on increasingly powerful AI models. 💰💻

Yet there are signs that some researchers are contemplating being more vocal. “I often think about when I would quit [AI startup] Anthropic or leave AI entirely,” tweeted Catherine Olsson, a staff member at AI safety company Anthropic, on Monday in response to Hinton’s comments. “I can already tell this move will influence me.” 🤔💬

Many AI researchers seem to have a fatalistic acceptance that little can be done to stem the tide of generative AI, now that it has been unleashed on the world. As Anthropic co-founder Jared Kaplan told me in an interview published Tuesday, “the cat is out of the bag.” 🐱🌍

But if today’s researchers are willing to speak up now, when it matters, and not just before they retire, we are all likely to benefit. 📣💡
