    AI start-up Cerebras unveils WSE-3, the world's largest semiconductor
    Cerebras is experiencing high demand for its new chip

    By Pratyaksh Srivastava
Mar 14, 2024, 06:37 pm

    What's the story

    US-based Cerebras Systems has introduced its third-generation AI chip, the Wafer Scale Engine 3 (WSE-3). It is touted as the world's largest semiconductor.

The WSE-3 is designed to train AI models by adjusting their neural weights, or parameters.

The new chip doubles the performance of its predecessor, the WSE-2, released in 2021, while maintaining the same power draw and price.

    Details

    WSE-3: Doubling performance and shrinking transistors

The WSE-3 chip, the size of a 12-inch wafer, doubles peak performance over its predecessor, from 62.5 petaFLOPS to 125 petaFLOPS.

Its transistors shrink from a seven-nanometer to a five-nanometer process, raising the transistor count from 2.6 trillion in the WSE-2 to four trillion in the WSE-3.

On-chip SRAM grows slightly from 40GB to 44GB, and the number of compute cores rises from 850,000 to 900,000.
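
As a back-of-envelope check, the generational ratios can be computed directly from the figures quoted above. This is a minimal sketch using only the article's numbers; the derived per-core throughput is our own arithmetic, not a Cerebras specification.

```python
# Generational comparison using only the figures quoted in the article.
wse2 = {"petaFLOPS": 62.5, "transistors (T)": 2.6, "SRAM (GB)": 40, "cores": 850_000}
wse3 = {"petaFLOPS": 125.0, "transistors (T)": 4.0, "SRAM (GB)": 44, "cores": 900_000}

for key in wse2:
    print(f"{key}: {wse2[key]} -> {wse3[key]} ({wse3[key] / wse2[key]:.2f}x)")

# Derived (our arithmetic, not an official spec): per-core throughput
# nearly doubles, since FLOPS double while the core count grows only ~6%.
per_core_wse2 = 62.5e15 / 850_000   # ~73.5 GFLOPS per core
per_core_wse3 = 125e15 / 900_000    # ~139 GFLOPS per core
print(f"per-core: {per_core_wse2 / 1e9:.1f} -> {per_core_wse3 / 1e9:.1f} GFLOPS")
```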

    In comparison

    WSE-3 outperforms NVIDIA's H100 GPU

    The WSE-3 chip is 57 times larger than NVIDIA's H100 GPU, with 52 times more cores and 800 times more on-chip memory.

    It also boasts 7,000 times more memory bandwidth and over 3,700 times more fabric bandwidth.

    According to Cerebras co-founder and CEO Andrew Feldman, these factors underpin the chip's superior performance.

    The CEO further stated that the WSE-3 can handle a theoretical large language model (LLM) of 24 trillion parameters on a single machine.
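
For a sense of scale, the multiples quoted above can be inverted to back out implied H100-class figures. The sketch below is derived arithmetic from the article's numbers, not official NVIDIA or Cerebras specifications, and the 2-bytes-per-parameter figure (FP16/BF16) is our assumption.

```python
# Implied figures backed out of the multiples quoted in the article.
# Derived arithmetic only -- not official NVIDIA or Cerebras specs.
wse3_cores = 900_000
wse3_sram_gb = 44

h100_cores_implied = wse3_cores / 52         # ~17,300 cores
h100_onchip_mb = wse3_sram_gb * 1024 / 800   # ~56 MB of on-chip memory

# Scale of the 24-trillion-parameter claim, assuming 2 bytes per
# parameter (FP16/BF16 -- our assumption): weights alone are ~48 TB.
weights_tb = 24e12 * 2 / 1e12

print(f"implied H100 cores:   {h100_cores_implied:,.0f}")
print(f"implied H100 on-chip: {h100_onchip_mb:.0f} MB")
print(f"24T params at FP16:   {weights_tb:.0f} TB of weights")
```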

Ease of use

    WSE-3: Easier programming and faster training times

    Feldman argued that the WSE-3 is easier to program than a GPU, requiring significantly fewer lines of code.

    He also compared training times by cluster size, stating that a cluster of 2,048 CS-3s could train Meta's 70-billion-parameter Llama 2 large language model 30 times faster than Meta's AI training cluster.

This efficiency gives enterprises access to the same compute power as hyperscalers, with far shorter training times.

    Partnership with Qualcomm

    Cerebras partners with Qualcomm to reduce inference costs

Cerebras has partnered with chip giant Qualcomm to use the latter's AI 100 processor for inference in generative AI.

    The partnership aims to reduce the cost of making predictions on live traffic, which scales with the parameter count.

Four techniques are applied to decrease inference costs: sparsity, speculative decoding, conversion of the output into the MX6 format, and network architecture search.

    These approaches increase the number of tokens processed per dollar spent by an order of magnitude.
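
Speculative decoding, one of the four techniques named above, is a published method in which a small draft model cheaply proposes several tokens that the large target model then verifies. The sketch below is a minimal, generic illustration with toy stand-in models; it is not Cerebras's or Qualcomm's implementation, and a real system would score all drafted positions in a single target-model pass rather than one call per token.

```python
import random

# Toy vocabulary and stand-in "models". In practice the draft model is a
# small, fast network and the target model is the large, expensive one;
# both are hypothetical placeholders here.
VOCAB = ["the", "chip", "is", "fast", "."]

def draft_model(context):
    # Cheap draft model: a uniform distribution over the vocabulary.
    return {t: 1.0 / len(VOCAB) for t in VOCAB}

def target_model(context):
    # Expensive target model: strongly prefers one continuation.
    preferred = "fast" if context and context[-1] == "is" else "the"
    probs = {t: 0.1 for t in VOCAB}
    probs[preferred] = 0.6
    total = sum(probs.values())
    return {t: p / total for t, p in probs.items()}

def sample(dist):
    r, acc = random.random(), 0.0
    for tok, p in dist.items():
        acc += p
        if r <= acc:
            return tok
    return tok  # numerical fallback

def speculative_step(context, k=4):
    """Draft k tokens cheaply, then verify them against the target model.

    Each drafted token is accepted with probability min(1, p_target/p_draft);
    on the first rejection, resample from the residual distribution and stop.
    """
    drafted, ctx = [], list(context)
    for _ in range(k):
        tok = sample(draft_model(ctx))
        drafted.append(tok)
        ctx.append(tok)

    accepted, ctx = [], list(context)
    for tok in drafted:
        p_t = target_model(ctx)[tok]
        p_d = draft_model(ctx)[tok]
        if random.random() < min(1.0, p_t / p_d):
            accepted.append(tok)  # token passes verification
            ctx.append(tok)
        else:
            # Rejected: resample from the residual max(0, target - draft),
            # which preserves the target model's output distribution.
            resid = {t: max(0.0, target_model(ctx)[t] - draft_model(ctx)[t])
                     for t in VOCAB}
            z = sum(resid.values()) or 1.0
            accepted.append(sample({t: p / z for t, p in resid.items()}))
            break
    return accepted

print(speculative_step(["the", "chip", "is"]))
```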

    Demand

    Cerebras's high demand and future inference market focus

    Cerebras is experiencing high demand for its new chip, with a significant backlog of orders across enterprise, government, and international clouds.

Feldman also highlighted a future focus on the inference market as it moves from data centers to edge devices.

He believes that easy inference will increasingly go to the edge, where Qualcomm has a real advantage.

This shift could change the dynamics of the AI arms race in favor of energy-constrained devices such as mobile phones.
