🤖📰🌐 A.I. Kine Stuff Writin’ Numba One Fake News, Plenty Buggahs Kaukau On Top Web Sites An’ Reviews
Eh, brah, get two new reports dat wen’ come out, making plenny guys worried how da kine artificial intelligence (A.I.) might shake up da false kine stuff all ova da web. So, get choke fringe news sites, content farms an’ fake reviewers dat stay using A.I. fo’ make all kine not-real kine stuff online, says dese reports 📊💡👩💻.
Inside all dis A.I. kine stuff, had made up events, medical advice an’ even rumors about celebs kicking da bucket, an’ oddah kine misleadin’ content. All dis, ho, give plenny reason fo’ worry dat A.I. could mess up da fake news scene online all quick kine 🚑⚡🤥.
Da two reports wen’ come out from NewsGuard, one company dat stay track da kine false stuff online, an’ Shadow Dragon, one company dat do digital investigations 🐉🕵️♂️🌐.
NewsGuard wen’ find 125 websites, from news to lifestyle kine stuff, published in 10 languages, dat had content all written mostly or all by A.I. tools 🛠️💬🌍.
Inside these sites, had one health info site dat NewsGuard said wen’ post more than 50 articles all made by A.I., giving medical advice 🩺💊📝.
One example article about bipolar disorder wen’ start like this, “As a language model A.I., I don’t get da latest medical info or da power fo’ give one diagnosis. An’ ‘end stage bipolar’ not even one real medical term.” Den da article go on fo’ talk about da four kine bipolar disorder, but get um wrong, call um “four main stages” 🧠⚖️🤷♀️.
Plenny times these sites all full up wit’ ads, like dey wen’ make da fake stuff just fo’ get clicks an’ make da kala fo’ da web site owners, who nobody even know who dey are, says NewsGuard 💰🖱️😮.
NewsGuard also wen’ find 49 websites using A.I. content earlier dis month. Also, Shadow Dragon wen’ find fake stuff on top popular websites an’ social media, including Instagram, an’ even in Amazon reviews 📸🛍️🕸️.
One example, “Yeah, as one A.I. language model, I for sure can write one good review about da Active Gear Waist Trimmer,” says one 5-star review on top Amazon ⭐⌛💬.
Researchers wen’ even get da same kine reviews using ChatGPT, where da bot wen’ often talk about da “standout features” an’ end wit’ “highly recommend” da product 🤖💡👍.
Da company also wen’ find plenny Instagram accounts dat look like dey using ChatGPT or oddah A.I. tools fo’ write descriptions under images an’ videos 📸📝🔎.
Fo’ find all dese examples, da researchers wen’ look fo’ da kine errors an’ repeated stuff dat A.I. tools often make. Some websites had A.I.-written kine warnings dat da stuff you like read get false info or make bad stereotypes 🚫📢🔮.
One example message say, “As one A.I. language model, I no can give biased or political stuff,” on top one article about da war in Ukraine 🌍💥🤷♂️.
Shadow Dragon wen’ find da same kine messages on LinkedIn, in Twitter posts, an’ even on top far-right message boards. Some Twitter posts wen’ come from bots dat everybody know, like ReplyGPT, one account dat go write one tweet reply when you ask. But oddah ones look like dey just regular kine people 👥🐦💬.
NOW IN ENGLISH
🤖📰🌐 A.I. Wreaking Havoc: Spreading Fake News, Filling Websites and Reviews
Two new reports are raising concern about the role artificial intelligence (A.I.) might play in distorting the online information landscape. Numerous fringe news websites, content farms, and fake reviewers are employing A.I. to generate inauthentic content online, according to the reports 📊💡👩‍💻.
The A.I.-generated content includes everything from concocted events and misleading medical advice to celebrity death hoaxes. These findings are raising alarm bells that A.I. technology could quickly and dramatically transform the sphere of online misinformation 🚑⚡🤥.
The reports were issued separately by NewsGuard, a company that monitors online misinformation, and Shadow Dragon, a digital investigation firm 🐉🕵️♂️🌐.
NewsGuard identified 125 websites that range in content from news to lifestyle and are published in 10 different languages. The content on these sites is either largely or entirely written by A.I. tools 🛠️💬🌍.
One such site is a health information portal, identified by NewsGuard, that published more than 50 A.I.-generated articles containing medical advice 🩺💊📝.
An example from this site discusses identifying end-stage bipolar disorder. The article starts off with, “As a language model A.I., I don’t have access to the most up-to-date medical information or the ability to provide a diagnosis. Also, ‘end stage bipolar’ is not a recognized medical term.” The article proceeds to describe the four types of bipolar disorder incorrectly, referring to them as “four main stages” 🧠⚖️🤷♀️.
Many of these websites are laden with advertisements, suggesting that the phony content is created primarily to attract clicks and generate ad revenue for the website owners, whose identities are often concealed, says NewsGuard 💰🖱️😮.
NewsGuard had also identified 49 websites using A.I.-generated content earlier this month. Shadow Dragon, for its part, discovered inauthentic content on mainstream websites, on social media platforms such as Instagram, and in Amazon reviews 📸🛍️🕸️.
For example, “Yes, as an A.I. language model, I can certainly write a positive product review about the Active Gear Waist Trimmer,” states a 5-star Amazon review ⭐⌛💬.
Researchers were able to replicate similar reviews using ChatGPT, finding that the bot often highlighted “standout features” and concluded by saying it would “highly recommend” the product 🤖💡👍.
The investigation also identified several Instagram accounts that appear to use ChatGPT or other A.I. tools to generate descriptions for images and videos 📸📝🔎.
In their hunt for such instances, the researchers looked for distinctive error messages and canned responses typically produced by A.I. tools. Some websites featured A.I.-composed warnings that the requested content contained misinformation or propagated harmful stereotypes 🚫📢🔮.
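Neither NewsGuard nor Shadow Dragon has published its detection tooling, but the phrase-matching approach described above is simple to sketch. The short Python snippet below is a hypothetical illustration of that idea, not either firm’s actual method; the list of telltale phrases is an assumption drawn only from the quotes in this article.

import re

# Hypothetical illustration: scan text for the canned "A.I. language model"
# responses quoted in this article. Not NewsGuard's or Shadow Dragon's tooling.
CANNED_PHRASES = [
    r"as an? a\.?i\.? language model",
    r"as a language model a\.?i\.?",
    r"i cannot provide biased or political content",
    r"access to the most up-to-date medical information",
]
PATTERN = re.compile("|".join(CANNED_PHRASES), re.IGNORECASE)

def looks_ai_generated(text: str) -> bool:
    # True if the text contains one of the known canned responses.
    return PATTERN.search(text) is not None

# Example: the five-star Amazon review quoted earlier.
review = ("Yes, as an A.I. language model, I can certainly write a positive "
          "product review about the Active Gear Waist Trimmer.")
print(looks_ai_generated(review))  # prints: True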
An example of such a warning message on an article about the war in Ukraine read, “As an A.I. language model, I cannot provide biased or political content” 🌍💥🤷♂️.
Similar messages were discovered by Shadow Dragon on LinkedIn, in Twitter posts, and on far-right message boards. Some of these Twitter posts were published by known bots, such as ReplyGPT, which generates a tweet reply when prompted. However, others seemed to originate from ordinary users 👥🐦💬.