


  • ArsTechnica

      OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips

      news.movim.eu / ArsTechnica • 12 February 2026

    On Thursday, OpenAI released its first production AI model to run on non-Nvidia hardware, deploying the new GPT-5.3-Codex-Spark coding model on chips from Cerebras. The model delivers code at more than 1,000 tokens (chunks of data) per second, reportedly roughly 15 times faster than its predecessor. For comparison, Anthropic's Claude Opus 4.6 in its new premium-priced fast mode reaches about 2.5 times its standard speed of 68.2 tokens per second, although it is a larger and more capable model than Spark.
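    The speed figures above can be sanity-checked with simple arithmetic. A minimal sketch, using only the numbers reported in the article (the variable names are illustrative, not from any vendor API):

    ```python
    # Back-of-the-envelope comparison of the reported throughput figures.
    spark_tps = 1000          # GPT-5.3-Codex-Spark on Cerebras, tokens/second (reported)
    opus_standard_tps = 68.2  # Claude Opus 4.6 standard speed, tokens/second (reported)
    opus_fast_tps = opus_standard_tps * 2.5  # fast mode is ~2.5x standard (reported)

    print(f"Opus 4.6 fast mode: ~{opus_fast_tps:.1f} tokens/s")   # ~170.5 tokens/s
    print(f"Spark vs. Opus fast mode: ~{spark_tps / opus_fast_tps:.1f}x")
    ```

    So even against Opus's premium fast mode, the reported Spark throughput works out to roughly a 6x advantage, with the caveat the article itself notes: the two models are not directly comparable in size or capability.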

    "Cerebras has been a great engineering partner, and we're excited about adding fast inference as a new platform capability," Sachin Katti, head of compute at OpenAI, said in a statement.

    Codex-Spark is a research preview available to ChatGPT Pro subscribers ($200/month) through the Codex app, command-line interface, and VS Code extension. OpenAI is rolling out API access to select design partners. The model ships with a 128,000-token context window and handles text only at launch.
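    Taken together, the reported generation speed and context window also bound the wall-clock time of a maximal response. A hypothetical upper-bound calculation, assuming the entire 128,000-token window were spent on output at the reported rate (in practice the window is shared with the prompt, so real responses would be shorter):

    ```python
    # Hypothetical upper bound: time to generate a full context window of output.
    context_window = 128_000   # tokens (reported)
    tokens_per_second = 1000   # reported generation speed

    seconds = context_window / tokens_per_second
    print(f"~{seconds:.0f} s (~{seconds / 60:.1f} min)")  # ~128 s
    ```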


    • Tags: ai, nvidia, openai, biz & it, machine learning, sam altman, ai coding, ai agents, ai development tools, code agents, ai chips, cerebras, ai speed, tokens


    Powered by Movim