Answer-first summary
Synopsys announced a major set of EDA innovations at SNUG/Synopsys 2024 focused on three strategic areas: 1) AI-driven design and verification with DSO.ai and VSO.ai to automate and optimize design/verification flows; 2) NVIDIA-accelerated computing integrations to dramatically speed compute‑intensive EDA workloads using GPUs; and 3) system-level design advancements that shorten hardware/software co‑validation and enable complex SoC and multi‑die architectures. These announcements deliver faster time‑to‑silicon, larger design space exploration, and earlier system validation for modern heterogeneous chips.
What Synopsys announced (high level)
- DSO.ai: an AI-native Design Space Optimization capability embedded in design flows to explore architecture and implementation tradeoffs automatically, prioritize Pareto-optimal solutions, and reduce manual iteration.
- VSO.ai: an AI-driven Verification Space Optimization solution that accelerates verification planning, test selection, and coverage closure using machine learning to focus compute where it matters most.
- NVIDIA-accelerated computing: expanded support and validated integrations that leverage NVIDIA GPUs (CUDA, Tensor Cores) and accelerated software stacks to speed ML training, statistical analysis, pre- and post-layout timing-signoff runs, and simulation‑heavy flows.
- System-level advances: enhanced virtual prototyping, improved HW/SW co‑simulation, support for chiplet/multi-die partitioning, and tighter integration between architecture exploration and physical design.
Why it matters (key benefits)
- Faster convergence to optimal PPA (power, performance, area) points through automated, ML-guided exploration. Teams can evaluate many more architecture/physical options in the same calendar time.
- Reduced verification cycle time and compute costs by focusing regression, emulation, and simulation resources on tests most likely to find gaps and bugs.
- Order-of-magnitude speedups for workloads that map well to GPUs: the ML model training used by DSO.ai/VSO.ai, large-scale simulation, Monte Carlo analysis, and some optimization kernels.
- Earlier and higher‑fidelity system validation that reduces late-stage surprises and shortens software bring‑up time.
DSO.ai — AI-driven design space optimization (details)
DSO.ai brings automated, model-driven search to architecture and physical implementation. Instead of manual parameter sweeps, DSO.ai uses a combination of surrogate modeling, active learning, and constrained optimization to identify Pareto‑optimal design points across PPA, timing, thermal, and cost objectives.
Key capabilities:
- Surrogate models that predict outcomes of expensive runs with confidence estimates.
- Active sampling strategies that propose the next set of runs to maximize information gain.
- Constraint-aware optimization that respects design rules and signoff criteria.
Typical outcomes include fewer costly full-tool runs, earlier identification of tradeoffs, and practical recommendations that integrate with standard Synopsys flows.
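The loop described above — a cheap surrogate predicting expensive tool runs, an acquisition rule that balances predicted quality against uncertainty, and Pareto filtering of the results — can be illustrated with a minimal sketch. Everything here is a hypothetical toy (the `evaluate_design` cost model, the nearest-neighbor surrogate, the acquisition weighting); it is not Synopsys's DSO.ai implementation, only the general surrogate-guided search pattern.

```python
# Toy surrogate-guided design space exploration (illustrative only;
# not the DSO.ai algorithm). Objectives: minimize (power, delay).
import numpy as np

rng = np.random.default_rng(0)

def evaluate_design(x):
    """Stand-in for an expensive tool run: returns (power, delay)."""
    power = x[0] ** 2 + 0.1 * x[1]
    delay = (1 - x[0]) ** 2 + 0.2 * (1 - x[1])
    return np.array([power, delay])

def pareto_front(points):
    """Indices of non-dominated points when minimizing both objectives."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Seed with a few expensive runs, then actively sample new points.
X = rng.uniform(0, 1, size=(5, 2))
Y = np.array([evaluate_design(x) for x in X])

for _ in range(20):
    candidates = rng.uniform(0, 1, size=(200, 2))
    # Nearest-neighbor surrogate: predict from the closest evaluated
    # point; distance doubles as a crude uncertainty estimate.
    d = np.linalg.norm(candidates[:, None, :] - X[None, :, :], axis=2)
    pred = Y[d.argmin(axis=1)]
    uncertainty = d.min(axis=1)
    # Acquisition: prefer low predicted cost plus an exploration bonus.
    score = pred.sum(axis=1) - 0.5 * uncertainty
    pick = candidates[score.argmin()]
    X = np.vstack([X, pick])
    Y = np.vstack([Y, evaluate_design(pick)])

front = pareto_front(Y)
print(f"{len(X)} runs, {len(front)} Pareto-optimal design points")
```

The key economic point survives even in this toy: only 25 "expensive" evaluations are spent, and the acquisition rule concentrates them where the surrogate is either optimistic or uncertain, rather than sweeping the space uniformly.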
VSO.ai — AI-driven verification optimization (details)
VSO.ai focuses verification effort where it yields the best risk reduction. By ingesting coverage results, simulation/emulation metadata, and historical bug data, VSO.ai ranks tests and partitions verification tasks to close coverage gaps faster and reduce redundant regression runs.
Key capabilities:
- Test prioritization to run the most effective regressions first.
- Intelligent test generation candidates informed by ML-derived weak spots.
- Resource-aware scheduling for emulation and simulation farms.
VSO.ai helps teams shift from exhaustive, brute-force verification to targeted, statistical approaches that preserve quality while lowering runtime and compute cost.
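Test prioritization of the kind described above can be reduced to a scoring problem: estimate each regression's expected payoff (bug-finding likelihood plus new coverage) per unit of compute, and run the highest-scoring tests first. The sketch below uses a made-up linear score and invented test names/numbers; it is not VSO.ai's model, just the ranking idea.

```python
# Hypothetical regression-prioritization sketch (not the VSO.ai model).
from dataclasses import dataclass

@dataclass
class Test:
    name: str
    historical_fail_rate: float   # fraction of past runs that exposed a bug
    new_coverage_estimate: float  # predicted still-uncovered coverage points
    runtime_minutes: float

def priority(t: Test) -> float:
    """Expected value per compute minute (assumed scoring rule)."""
    return (t.historical_fail_rate + t.new_coverage_estimate) / t.runtime_minutes

regressions = [
    Test("smoke_basic", 0.01, 0.5, 2),
    Test("cache_stress", 0.15, 4.0, 60),
    Test("pcie_random", 0.08, 6.0, 30),
]
ranked = sorted(regressions, key=priority, reverse=True)
print([t.name for t in ranked])
# → ['smoke_basic', 'pcie_random', 'cache_stress']
```

Note that the long `cache_stress` run ranks last despite its high absolute payoff, because the score is normalized by runtime — exactly the "focus compute where it matters most" behavior the section describes.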
NVIDIA-accelerated computing — where GPUs matter
Synopsys extended validated GPU acceleration across ML-driven EDA components and compute‑heavy flows. NVIDIA GPUs accelerate:
- ML model training and inference used by DSO.ai and VSO.ai (shorter model iteration cycles).
- Monte Carlo variability and statistical timing engines that are amenable to parallelization.
- High‑throughput simulation kernels and some parts of physical optimization that benefit from massive data parallelism.
Integrations use NVIDIA CUDA, optimized libraries, and, where applicable, NVIDIA's AI primitives and Tensor Cores to maximize throughput. The practical result is a multi‑fold reduction in elapsed time for data‑parallel workloads, enabling more iterative exploration and faster time to decision.
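Why Monte Carlo variability analysis maps so well to GPUs: each sample is an independent perturbation of the same arithmetic, so the whole run is one large data-parallel array operation. The NumPy sketch below shows the pattern on a CPU with invented stage delays and variation numbers; on a GPU the identical vectorized structure scales to far larger sample counts.

```python
# Vectorized Monte Carlo sketch of path-delay variability.
# Stage delays and the 5% sigma are assumed illustrative values.
import numpy as np

rng = np.random.default_rng(42)

NOMINAL_DELAYS = np.array([0.12, 0.30, 0.08, 0.25])  # ns per stage (assumed)
SIGMA = 0.05  # assumed 5% per-stage process variation

N = 1_000_000
# Each row perturbs every stage independently; path delay is the row sum.
samples = NOMINAL_DELAYS * (1 + SIGMA * rng.standard_normal((N, 4)))
path_delay = samples.sum(axis=1)

p99 = np.percentile(path_delay, 99)
print(f"mean={path_delay.mean():.4f} ns, 99th percentile={p99:.4f} ns")
```

The million samples are produced by a handful of array operations with no per-sample Python loop — the same embarrassingly parallel shape that CUDA kernels and tensor-core-backed libraries exploit.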
System-level design advances — co‑design and validation
Synopsys emphasized tighter flows for system‑level work: virtual prototyping enhancements, faster platform‑level simulation, and better support for heterogeneous SoCs and chiplet architectures. Highlights:
- Improved virtual platform fidelity and faster software bring‑up, reducing the time to run real application workloads on models.
- Enhanced integration between architecture exploration (DSO.ai) and downstream RTL-to‑GDS flows so decisions carry through to physical implementation.
- Support for multi‑die partitioning, enabling designers to optimize across chip boundaries and model interposer/channel effects earlier.
These advances shorten the HW/SW integration window and reduce risk for complex, heterogeneous systems.
Adoption and practical considerations
- Early access and pilot programs are available; production adoption depends on integrating AI-driven heuristics into established signoff and verification policies.
- GPU acceleration requires validated hardware and software stacks (NVIDIA GPUs, drivers, and optimized runtimes) and may be most beneficial for teams with heavy ML or parallel simulation workloads.
- DSO.ai and VSO.ai augment, not replace, existing Synopsys tools — they are built to integrate into standard flows and produce actionable artifacts for downstream tools.
What's next and how to learn more
Synopsys continues to refine AI models, expand GPU-enabled kernels, and grow ecosystem integrations with cloud and on-prem compute partners. To evaluate these innovations for your flow: request a demo, engage in an early access program, or attend follow‑up Synopsys technical deep dives and SNUG sessions for hands‑on examples and benchmarks.