X is piloting a program that lets AI chatbots generate Community Notes
The social media platform X is piloting a feature that allows AI chatbots to draft Community Notes. Community Notes dates back to the Twitter era, and Elon Musk has expanded the program under his ownership of the service, now called X. Users enrolled in the fact-checking program can contribute notes that add context to certain posts, and those notes are rated by other contributors before they appear attached to a post. A Community Note might appear, for example, on a misleading post from a politician, or on a post of an AI-generated video that isn't clear about its synthetic origins.

Notes become public only when groups of contributors who have historically disagreed in their past ratings reach consensus on them. Community Notes has been successful enough on X to inspire Meta, TikTok, and YouTube to pursue similar initiatives; Meta went as far as eliminating its third-party fact-checking programs entirely in exchange for this low-cost, community-sourced labor.

It remains to be seen, however, whether the use of AI chatbots as fact-checkers will prove helpful or harmful. These AI notes can be generated with X's Grok or with other AI tools connected to X through an API. Any note submitted by an AI will be treated the same as a note submitted by a person, meaning it will go through the same vetting process to encourage accuracy.

The use of AI in fact-checking seems dubious, given how often AI models hallucinate, inventing context that isn't grounded in reality. A paper published this week by researchers working on X Community Notes recommends that humans and LLMs work in tandem: human feedback can improve AI note generation through reinforcement learning, with human note raters remaining as a final check before notes are published.

"The goal is to construct an environment that empowers humans to think more critically and comprehend the world better, rather than to create an AI assistant that tells users what to believe," the paper says. "Humans and LLMs can collaborate in a beneficial cycle."

Even with human checks, there is still a risk of relying too heavily on AI, especially since users will be able to plug in LLMs from third parties. OpenAI's ChatGPT, for example, recently ran into problems with a model that was overly sycophantic. If an LLM prioritizes "helpfulness" over accurately completing a fact-check, the AI-generated notes could end up flatly wrong.

There is also concern that the sheer volume of AI-generated notes could overwhelm human raters, sapping their motivation to do this volunteer work well.

Users shouldn't expect to see AI-generated Community Notes just yet. X plans to test these AI contributions for a few weeks before rolling them out more broadly if they prove successful.