      How often do AI chatbots lead users down a harmful path?

      news.movim.eu / ArsTechnica • 29 January 2026

    At this point, we've all heard plenty of stories about AI chatbots leading users to harmful actions, harmful beliefs, or simply incorrect information. Despite the prevalence of these stories, though, it's hard to know just how often users are being manipulated. Are these tales of AI harms anecdotal outliers or signs of a frighteningly common problem?

    Anthropic took a stab at answering that question this week, releasing a paper studying the potential for what it calls "disempowering patterns" across 1.5 million anonymized real-world conversations with its Claude AI model. While the results show that these kinds of manipulative patterns are relatively rare as a percentage of all AI conversations, they still represent a potentially large problem on an absolute basis.

    A rare but growing problem

    In the newly published paper "Who’s in Charge? Disempowerment Patterns in Real-World LLM Usage," researchers from Anthropic and the University of Toronto try to quantify the potential for a specific set of "user disempowering" harms by identifying three primary ways that a chatbot can negatively impact a user's thoughts or actions:
