DreamBig Semiconductor Announces Partnership with Samsung Foundry to Launch Chiplets for World Leading MARS Chiplet Platform on 4nm FinFET Process Technology Featuring 3D HBM Integration
DreamBig’s open MARS Chiplet Platform with world-leading Chiplet Hub™ for scale-up and Networking IO Chiplets for scale-out enables customers to compose the most advanced AI solutions with UCIe/BoW-compliant chiplets leveraging Samsung Foundry SF4X process technology
DreamBig empowers customers to forge the future of AI, datacenter, edge, storage, and automotive solutions: customers develop application-specific processor/accelerator chiplets and compose products by adding them to the MARS Chiplet Platform, with its world-leading Chiplet Hub™ for scale-up and Networking IO Chiplets for scale-out.
DreamBig is the original pioneer of 3D HBM stacking on the Chiplet Hub™, improving performance and efficiency for memory-dominated systems such as AI inference and training. The 3D-integrated HBM stacked on the Chiplet Hub™ is part of a composable memory hierarchy spanning SRAM, HBM, chiplet-connected DDR, chiplet-connected CXL memory expansion, and chiplet-connected PCIe SSD storage. This memory hierarchy is accelerated by a highly differentiated, fully associative, hardware-managed Final Level Cache (from FLC Technology Group) integrated into the Chiplet Hub™.

DreamBig is partnering with Samsung Foundry to bring the innovative Chiplet Hub™ and Networking IO Chiplets to market for the MARS Chiplet Platform. The collaboration leverages Samsung Foundry’s broad, proven expertise and technology leadership, including the state-of-the-art SF4X FinFET process, a robust ecosystem partnership for 3D chip-on-wafer stacking advanced packaging, and HBM memory synergy.
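As a rough illustration of the idea behind a fully associative, hardware-managed final-level cache fronting slower memory tiers, the Python sketch below models a small page-granular cache backed by a slower tier. The class names, page size, capacity, and LRU policy are illustrative assumptions chosen for exposition, not DreamBig's or FLC Technology Group's actual design.

```python
from collections import OrderedDict

# Minimal sketch of a fully associative, hardware-managed final-level cache
# fronting a slower memory tier. Names, sizes, and the LRU policy are
# illustrative assumptions, not the actual DreamBig/FLC design.

PAGE = 4096  # illustrative cache-line/page granularity

class FinalLevelCache:
    def __init__(self, capacity_pages, backing_tier):
        self.capacity = capacity_pages
        self.backing = backing_tier          # e.g. a DDR/CXL/SSD tier
        self.entries = OrderedDict()         # fully associative: any page can occupy any slot

    def read(self, addr):
        page = addr // PAGE
        if page in self.entries:             # hit in the fast tier (e.g. SRAM/3D HBM)
            self.entries.move_to_end(page)   # refresh LRU position
            return self.entries[page]
        data = self.backing.read_page(page)  # miss: fetch from the slower tier
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False) # evict the least recently used page
        self.entries[page] = data
        return data

class BackingTier:
    def read_page(self, page):
        return bytes(PAGE)                   # placeholder for a DDR/CXL/SSD access

flc = FinalLevelCache(capacity_pages=4, backing_tier=BackingTier())
flc.read(0x0000)   # miss, filled from the backing tier
flc.read(0x0100)   # hit: same page as the previous access
```

The property the sketch captures is full associativity: any page from the backing tier can occupy any cache slot, so hit rate is limited by capacity and replacement policy rather than by set conflicts.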
“DreamBig is disrupting the industry with the leading open chiplet solution for AI. The differentiated technology integrated in the Chiplet Hub™ serves the most demanding AI, datacenter, edge, storage, and automotive use cases. The Chiplet Hub™ base die implemented with advanced capabilities of Samsung Foundry SF4X process and 3D manufacturing provides the best performance and latency while achieving low power required for 3D integration of HBM that has eluded the industry,” stated Sohail Syed, Co-founder and CEO of DreamBig. “DreamBig MARS Chiplet Platform combining Chiplet Hub™ with 3D HBM, Networking IO Chiplets, customer AI processor/accelerator chiplets, and leveraging Silicon Box Advanced Panel Level Packaging enables unparalleled scale-up and scale-out solutions so customers can achieve the highest levels of performance and energy efficiency at lowest cost and fastest time-to-market.”
“Next-generation AI applications and chiplet-based system designs are converging, and DreamBig’s MARS Chiplet Platform is well-positioned to deliver chiplet-based AI solutions with 3D HBM integration,” said Mijung Noh, vice president and the head of Foundry Business Development 1 Team at Samsung Electronics. “We are thrilled to partner with DreamBig on our SF4X process technology with a full array of design enablement for chiplet-based architectures. This successful collaboration leverages Samsung’s solution with optimized silicon technology, memory, and advanced packaging.”
About DreamBig:
Founded in 2019, DreamBig is developing a cutting-edge chiplet platform that drives the next wave of affordable, scalable, and modular semiconductor solutions for the AI era and beyond. The company specializes in applications spanning Large Language Models (LLMs), Generative AI, Data Centers, Edge computing, and Automotive. DreamBig is renowned for providing the most advanced Chiplet Hub™, facilitating the scaling of processor, accelerator, and networking chiplets.
DreamBig closes $75M Series B Funding Round
DreamBig Semiconductor Inc., a pioneer in high-performance accelerator platforms utilizing its industry-leading Chiplet Hub™ with 3D HBM, has raised $75M in a Series B equity funding round.
This round was co-led by the Samsung Catalyst Fund and the Sutardja Family. New investors include Samsung, Hanwha, Event Horizon, and Raptor, alongside continuing contributions from existing stakeholders including the Sutardja Family, UMC Capital, BRV, Ignite Innovation Fund, and Grandfull Fund, among others. These funds will bolster the development and commercialization of products built on DreamBig’s Chiplet Hub™ and Platform Chiplets.
DreamBig Announces World-Leading 800G AI-SuperNIC Chip (Mercury) with Fully HW-Offloaded RoCE v2 + UEC RDMA Engine
Mercury delivers 800Gbps of bandwidth connectivity for any AI chipset with industry-leading throughput of 800Mpps while minimizing power consumption, latency, and area. It features a hardened RDMA engine with programmable congestion control for the RoCE v2 and UEC standards, making it an ideal solution for AI and HPC applications. When integrated as a chiplet-compatible component with the previously announced DreamBig MARS Chiplet Platform, the solution can scale to 12.8Tbps of RDMA throughput.
San Jose, Calif., January 6, 2025 /Newswire/ — DreamBig Semiconductor, Inc. today proudly unveils the Mercury AI-SuperNIC, which enables scale-out of next-generation AI platforms by seamlessly connecting GPUs with unparalleled efficiency.
The DreamBig Mercury chip features a hardware-accelerated RDMA engine that supports the existing RoCE (RDMA over Converged Ethernet) v2 and new UEC (Ultra Ethernet Consortium) standards, delivering best-in-class bandwidth (800Gbps) and throughput (800Mpps) with the lowest power, ultra-low latency, and smallest area.
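As a back-of-the-envelope check on how those two headline figures relate (simple arithmetic, not a vendor specification, and ignoring Ethernet preamble and inter-frame gap overhead), the packet rate and the line rate meet at an average packet size of 125 bytes:

```python
# Relation between line rate and packet rate (back-of-the-envelope only;
# ignores Ethernet preamble/IFG overhead; not a vendor specification).
line_rate_bps = 800e9        # 800 Gbps
packet_rate_pps = 800e6      # 800 Mpps

bits_per_packet = line_rate_bps / packet_rate_pps   # 1000 bits per packet
print(bits_per_packet / 8)   # -> 125.0 bytes: the average packet size at which
                             # 800 Mpps is exactly enough to fill 800 Gbps
```

In other words, under these assumptions the link is bandwidth-limited rather than packet-rate-limited for average packet sizes of roughly 125 bytes and up.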
Mercury is designed with fully programmable congestion control to adapt to any data center and provides the following critical functions for AI applications:
Multi-pathing and packet spraying
Out-of-order packet placement with in-order message delivery (illustrated in the sketch after this list)
Programmable congestion control for RoCE v2 and UEC algorithms
Advanced packet trimming and telemetry congestion notifications
Support for selective retransmission
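To make the out-of-order placement item concrete, here is a minimal conceptual sketch: payloads sprayed across multiple paths arrive in any order and are written directly into the destination buffer at their message offset, and the message is handed to the application only once every byte has landed. The class, field names, and completion tracking are illustrative assumptions, not Mercury's actual data path.

```python
# Conceptual sketch of out-of-order packet placement with in-order message
# delivery. Names and the completion-tracking scheme are illustrative only.

class MessageReassembly:
    def __init__(self, message_len):
        self.buffer = bytearray(message_len)   # application destination buffer
        self.received = 0                      # bytes landed so far
        self.message_len = message_len         # (assumes no duplicate or overlapping packets)

    def place_packet(self, offset, payload):
        # Packets may arrive on any path, in any order (multi-pathing / spraying);
        # each is placed directly at its message offset with no reorder buffer.
        self.buffer[offset:offset + len(payload)] = payload
        self.received += len(payload)

    def deliverable(self):
        # The complete message is delivered to the application in order,
        # regardless of the order in which its packets arrived.
        return self.received == self.message_len

msg = MessageReassembly(message_len=12)
msg.place_packet(8, b"wxyz")                   # last packet arrives first
msg.place_packet(0, b"abcd")
msg.place_packet(4, b"efgh")
assert msg.deliverable() and bytes(msg.buffer) == b"abcdefghwxyz"
```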
Mercury provides UEC-compliant software drivers enabling GPU-to-GPU communication with exceptional throughput and minimal latency.
“This year marks the 25th year that the team we have assembled at DreamBig has been developing RDMA together, from InfiniBand to iWARP to RoCE, and now UEC. All that experience has gone into re-architecting RDMA from the ground up for the AI era,” said Steve Majors, SVP of Engineering at DreamBig Semiconductor. “With the efficiency of full HW RDMA offload, innovative layering to adapt to evolving Ethernet transport modes and congestion control methods, and unparalleled performance scaling from 800Gbps to 12.8Tbps, DreamBig is setting the bar for the next several generations of AI networking.”
As a groundbreaking monolithic chip, Mercury redefines 800Gbps discrete networking for TPUs/GPUs, combining unmatched performance, power efficiency, and cost-effectiveness to meet the demands of next-generation AI platforms. As a chiplet-compatible component, Mercury provides unparalleled scaling of up to 12.8Tbps of integrated networking for AI SuperChips.
To learn more, come visit the DreamBig technology solution demo January 7-10 at CES 2025 – The Most Powerful Tech Event in the World – in the Venetian Bellini Suite #XX, Las Vegas, Nevada.
About DreamBig Semiconductor:
DreamBig, founded in 2019, is developing a disruptive, world-leading chiplet platform that enables customers to develop the next generation of responsible, affordable, scalable, and composable semiconductor chiplet solutions for the AI revolution and digital world. The company specializes in high-performance applications for the Large Language Model (LLM), Generative AI, Datacenter, Edge, and Automotive markets.
DreamBig provides the industry’s most advanced Chiplet Hub™ to scale up processor, accelerator, and networking chiplets.
DreamBig World Leading “MARS” Open Chiplet Platform Unveiled at CES 2024
DreamBig World Leading “MARS” Open Chiplet Platform Enables Scaling of Next Generation Large Language Model (LLM), Generative AI, and Automotive Semiconductor Solutions
DreamBig Semiconductor, Inc. today unveiled “MARS”, a world-leading platform that enables a new generation of semiconductor solutions using open-standard chiplets for the mass market. This disruptive platform will democratize silicon by enabling startups and companies of any size to scale up and scale out LLM, Generative AI, Automotive, Datacenter, and Edge solutions with optimized performance and energy efficiency.
The DreamBig “MARS” Chiplet Platform lets customers focus their investment on the areas of silicon where they can differentiate for competitive advantage, and bring solutions to market faster at lower cost by leveraging the rest of the open-standard chiplets available in the platform. This is particularly critical for the fast-moving AI training and inference market, where the best performance and energy efficiency are achieved when the solution is application specific.
“DreamBig is disrupting the industry by providing the most advanced open chiplet platform for customers to innovate never-before-possible solutions, combining their specialized hardware chiplets with infrastructure that scales up and out while maintaining affordable and efficient modular product development,” said Sohail Syed, CEO of DreamBig Semiconductor.
The DreamBig “MARS” Chiplet Platform solves the two biggest technical challenges facing HW developers of AI servers and accelerators – scaling up compute and scaling out networking. The Chiplet Hub is the most advanced 3D memory-first architecture in the industry, with direct access to both SRAM and DRAM tiers by all compute, accelerator, and networking chiplets for data movement, data caching, or data processing. Chiplet Hubs can be tiled in a package to scale up at the highest performance and energy efficiency. RDMA Ethernet Networking Chiplets provide unparalleled scale-out performance and energy efficiency between devices and systems, with independent selection of data-path bandwidth and control-path packet processing rate.
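The scale-up-by-tiling idea can be sketched in a few lines. The per-hub figures below are placeholders chosen purely for illustration, not DreamBig specifications, and the model assumes near-linear scaling across tiles:

```python
# Rough scale-up model for tiling multiple Chiplet Hubs in one package.
# Per-hub numbers are placeholders, not DreamBig specifications.
PER_HUB_HBM_BW_GBPS = 1000   # assumed 3D HBM bandwidth per hub tile
PER_HUB_SRAM_MB = 256        # assumed on-hub SRAM capacity

def package_totals(num_hubs):
    """Aggregate memory resources, assuming near-linear scaling across tiles."""
    return {
        "hbm_bw_GBps": num_hubs * PER_HUB_HBM_BW_GBPS,
        "sram_MB": num_hubs * PER_HUB_SRAM_MB,
    }

print(package_totals(4))     # e.g. a four-tile package
```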
“Customers can now focus on designing the most innovative AI compute and accelerator technology chiplets optimized for their applications and use the most advanced DreamBig Chiplet Platform to scale-up and scale-out to achieve maximum performance and energy efficiency,” said Steve Majors, SVP of Engineering at DreamBig Semiconductor. “By establishing leadership with 3D HBM backed by multiple memory tiers under HW control in Silicon Box advanced packaging that provides highest performance at lowest cost without the yield and availability issues plaguing the industry, the barriers to scale are eliminated.”
The Platform Chiplet Hub and Networking Chiplets offer the following differentiated features:
Open standard interfaces and architecture agnostic support for CPU, AI, Accelerator, IO, and Memory Chiplets that customers can compose in a package
Secure boot and management of chiplets as a unified system-in-package similar to a platform motherboard of chips
Memory-First Architecture with direct access from all chiplets to cache/memory tiers, including low-latency SRAM/3D HBM stacked on Chiplet Hubs and high-capacity DDR/CXL/SSD on chiplets
FLC Technology Group fully associative HW acceleration for cache/memory tiers
HW DMA and RDMA for direct placement of data to any memory tier from any local or remote source
Algorithmic TCAM HW acceleration for Match/Action when scaled out to cloud (see the sketch after this list)
Virtual PCIe/CXL switch for flexible root port or endpoint resource allocation
Optimized for Silicon Box advanced Panel Level Packaging to achieve the best performance/power/cost – an alternative to CoWoS for the AI mass market
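The Match/Action item above refers to the ternary-lookup abstraction that a TCAM accelerates: a key is compared against prioritized value/mask rules, where masked-out bits are wildcards, and the first matching rule's action is applied. The rules, key format, and actions in this sketch are illustrative assumptions, not DreamBig's rule format.

```python
# Minimal sketch of ternary Match/Action lookup (the abstraction a TCAM
# accelerates in hardware). Rules, priorities, and actions are illustrative.

# Each rule: (value, mask, action). A key matches a rule when
# key & mask == value & mask; bits with mask 0 are "don't care".
RULES = [
    (0x0A000001, 0xFFFFFFFF, "send_to_host"),   # exact match on 10.0.0.1
    (0x0A000000, 0xFFFFFF00, "forward_port_2"), # prefix match on 10.0.0.0/24
    (0x00000000, 0x00000000, "drop"),           # catch-all default
]

def match_action(key):
    for value, mask, action in RULES:           # first (highest-priority) hit wins
        if key & mask == value & mask:
            return action
    return "drop"

assert match_action(0x0A000001) == "send_to_host"
assert match_action(0x0A000042) == "forward_port_2"
assert match_action(0x0B000001) == "drop"
```

A hardware TCAM (or an algorithmic equivalent) evaluates all rules in parallel; the Python loop is only a functional model of the lookup semantics.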
Customers are currently working with DreamBig on next generation devices for the following use cases:
AI Servers and Accelerators
High-end Datacenter and Low-end Edge Servers
Petabyte Storage Servers
DPUs and DPU Smart Switches
Automotive ADAS, Infotainment, and Zonal Processors
“We are very proud of what DreamBig has achieved establishing leadership in driving a key pillar of the market for high performance, energy conscious, and highly scalable AI solutions to serve the world,” stated Sehat Sutardja and Weili Dai, Co-founders and Chairman/Chairwoman of DreamBig. “The company has raised the technology bar to lead the semiconductor industry by delivering the next generation of open chiplet solutions such as Large Language Model (LLM), Generative AI, Datacenter, and Automotive solutions for the global mass market.”
To learn more, come see the DreamBig technology solution demo January 9-12 at CES 2024 – The Most Powerful Tech Event in the World – in The Venetian Expo, Bellini 2003 Meeting Room.
About DreamBig Semiconductor
DreamBig, founded in 2019, is developing a disruptive, world-leading Chiplet Platform that enables customers to bring to market the next generation of high-performance, energy-conscious, affordable, scalable, and composable semiconductor chiplet solutions for the AI revolution and digital world. The company specializes in high-performance applications for the Large Language Model (LLM), Generative AI, Datacenter, Edge, and Automotive markets.
DreamBig provides the industry’s most advanced Chiplet Hub™ to scale up compute/accelerator chiplets, and the most advanced Networking Chiplets to scale out.
DreamBig Semiconductor Participated in SmartNICs SUMMIT 2023
We are pleased to announce that we participated in SmartNICs Summit 2023 and presented our MARS SmartNIC LAN & RDMA Device Model. We demonstrated IPsec, Match/Action, MAC Filtering, Checksum Verification, and RSS offloads for the MARS LAN Device Model.
Our Deimos Chiplet Hub leverages Arm Flexible Access for Startups
Our CEO and President was quoted by Arm:
“Arm Flexible Access for Startups was a game-changer for us. It enabled us to innovate on top of Arm’s world-class IP, access its broad ecosystem in a cost-efficient way and prototype our industry-leading Deimos Chiplet Hub for next-generation data center solutions.”