The Business & Technology Network
Helping Business Interpret and Use Technology
Grok now writes Community Notes on X

Tags: media video
DATE POSTED: July 2, 2025

According to ADWEEK, X is piloting a program that allows AI chatbots to generate Community Notes, a feature expanded under Elon Musk’s ownership to add context to posts. Under the pilot, AI-generated notes are treated identically to human-submitted notes and must pass the same vetting process for accuracy.

Community Notes, which originated in the Twitter era, lets users enrolled in a dedicated fact-checking program attach contextual comments to posts. A contribution becomes publicly visible only after reaching consensus among groups of raters whose past ratings have historically diverged.
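The cross-viewpoint consensus requirement can be illustrated with a toy sketch. X's actual scoring uses a published matrix-factorization "bridging" algorithm; the simplified version below only captures the core idea that a note needs support from raters who usually disagree with one another. All names and thresholds here are hypothetical.

```python
# Toy illustration of bridging-based consensus (not X's real algorithm):
# a note is shown only if every viewpoint cluster of raters
# independently rates it helpful.

def note_is_shown(ratings, min_support=0.5):
    """ratings: list of (rater_cluster, is_helpful) pairs.

    Returns True only when at least two clusters rated the note and
    each cluster's share of 'helpful' votes meets min_support.
    """
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    if len(by_cluster) < 2:
        return False  # no cross-viewpoint consensus is possible
    return all(
        sum(votes) / len(votes) >= min_support
        for votes in by_cluster.values()
    )

# A note endorsed across both clusters passes:
print(note_is_shown([("left", True), ("right", True), ("right", True)]))
# One endorsed by only a single cluster does not:
print(note_is_shown([("left", True), ("left", True), ("right", False)]))
```

The key design point, mirrored in the real system, is that raw vote counts are not enough: agreement must bridge groups that normally disagree.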

For example, a note might clarify that an AI-generated video lacks explicit disclosure of its synthetic origin or provide additional context to a misleading political post. The success of Community Notes on X has influenced other platforms, including Meta, TikTok, and YouTube, to explore similar community-sourced content moderation strategies. Meta notably discontinued its third-party fact-checking programs in favor of this model.

The AI notes can be generated using X’s proprietary Grok AI or through other AI tools integrated with X via an API. Despite the potential for efficiency, concerns exist regarding the reliability of AI in fact-checking due to the propensity of artificial intelligence models to “hallucinate,” or generate information not grounded in reality. A paper published by researchers working on X Community Notes recommends a collaborative approach between humans and large language models (LLMs).


This research suggests that human feedback can refine AI note generation through reinforcement learning, with human note raters serving as a final verification step before notes are published. The paper states, “The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that empowers humans to think more critically and understand the world better.” It further emphasizes, “LLMs and humans can work together in a virtuous loop.”

Even with human oversight, the reliance on AI carries risks, particularly since users will have the option to plug in third-party LLMs. An incident involving OpenAI’s ChatGPT, in which a model update made it overly sycophantic, illustrates the danger: if an LLM prioritizes “helpfulness” over factual accuracy during a fact-check, the resulting AI-generated notes could be incorrect. There is also concern that a flood of AI-generated notes could overwhelm human raters, sapping their motivation to do this voluntary work well.

X plans to test the AI contributions for several weeks before a broader rollout, contingent on their performance during the pilot, so users should not expect widespread availability of AI-generated Community Notes immediately.

