
The Design, Automation & Test in Europe (DATE) Conference in Verona in April showed an EDA research community moving with real momentum into the AI era. The strongest signal from the conference was that AI is no longer a separate topic sitting beside chip design. It’s now shaping the workloads, architectures, design tools, verification flows, and security questions that will define the next phase of semiconductor development.
The conference was upbeat because the direction is clear and the opportunity is substantial. Heterogeneous compute, RISC-V, chiplets, AI accelerators, agentic EDA, structured specifications, and AI-assisted verification are all advancing at the same time. The challenge is significant: these systems must be designed, verified, secured, and trusted.
However, DATE 2026 showed that the research community is already developing the methods, tools, and flows needed to address that challenge. For Europe, the opportunity is not simply to catch up with existing EDA capability, but to help lead the next wave of AI-enabled, verification-aware, and trustworthy semiconductor design.
This also re-frames the European sovereignty discussion. There are three distinct parts: sovereignty in processor design, sovereignty in EDA tools, and sovereignty in next-generation AI+EDA capability. Processor design is being opened up by RISC-V, chiplets and design-enablement platforms.
EDA-tool sovereignty is more challenging, because advanced-node signoff depends on mature commercial tools, process design kits (PDKs), verification IP, and foundry-qualified flows. The strongest near-term opportunity is therefore AI+EDA capability: building the methods, benchmarks, structured specifications, secure deployment models, and verification-aware AI flows that will define the next generation of design automation.
Conference context and program messaging
DATE 2026 provided a useful view of where semiconductor research is moving as AI, EDA, advanced architectures, verification, and security begin to converge. DATE is not the Design and Verification Conference (DVCon), with its practitioner focus on verification methodology and commercial tool use. It is not the Design Automation Conference (DAC), where the exhibition floor is often as important as the technical program. DATE is research-led, with the papers, focus sessions, tutorials, keynotes, and European project sessions forming the center of gravity.
That research-led character matters. It makes DATE a good indicator of topics that are still forming before they become mature tool flows or standard industry practice. The commercial ecosystem was clearly present with Cadence, Synopsys, Qualcomm, Arm, Infineon, Micron, STMicroelectronics, Tenstorrent, Axelera AI, Real Intent, and others represented in the sponsor list. However, the tone was less product marketing and more ecosystem development.
A key takeaway was that AI is now present as a workload, a design objective, a design-assistance technology, a verification challenge, and a security risk. The individual sessions differed in emphasis, but the common thread was the same: the next phase of EDA will be shaped by the interaction between AI, heterogeneous architectures, verification, security, and trust.
DATE 2026 included 325 regular papers and 91 extended abstracts across the D, A, T, and E research tracks, giving 416 accepted research-track outputs. The program offered 41 main technical sessions, three Best Paper Award candidate sessions, two late-breaking-result sessions, five keynotes, 10 focus sessions, five workshops, four special-day sessions, and four embedded tutorials.
The geographical distribution was also significant. DATE is European in location and culture, but the research paper base reflects the global semiconductor research map. Counting country-affiliated appearances across technical papers, China (including Hong Kong and Taiwan) accounted for 247 appearances (44.7%); Europe plus the U.K. for 133 (24.1%); the U.S. for 94 (17.0%); and the rest of the world for 79 (14.2%).
Using a broad classification, roughly 27% of the technical country-affiliated appearances had some AI connection. Most of this was hardware-for-AI: accelerators, compute-in-memory, large language model (LLM) inference, edge AI, photonic AI, and memory systems. AI applied directly to verification, test generation, fuzzing, coverage, and security validation was closer to 2.7% of the technical program. This shows that AI-for-verification is currently a specialist part of the larger AI-related research activity.
AI as workload, tool, and risk
The opening keynote from Luc Van de Hove of IMEC set out one of the central pressures: AI models are evolving faster than semiconductor hardware development, creating bottlenecks that require new compute architectures and semiconductor platforms. In this framing, AI is a key demand changing the hardware stack.
At DATE, AI appeared in at least four roles. First, AI is the workload driving accelerators, compute-in-memory structures, chiplets, photonics, and energy-efficient platforms. Focus session FS02, “Architecting Intelligence: Next-Gen Acceleration for Generative AI,” and TS36, “Next-Generation Memory Systems for AI Acceleration,” were good examples. Second, AI is becoming a design tool, with LLMs, agents, and machine-learning-driven optimization applied to routing, placement, high-level synthesis (HLS), analog sizing, and lithography simulation.
Third, AI is changing the research process itself, as raised in the keynote from Rolf Drechsler from the University of Bremen in Germany. Fourth, AI is becoming a security and trust problem, since AI-guided verification tools can introduce risks such as adversarial manipulation, biased test generation, or hallucinated security guidance.
The AI-for-EDA message was therefore not simply that AI will automate design. AI can accelerate parts of the design and verification flow, while also creating systems and flows that are harder to verify, explain, secure, and certify.
Future platforms are heterogeneous
A repeated architectural message was that general-purpose compute is no longer sufficient for many target workloads. The program included strong content on AI accelerators, chiplets, 3D integrated circuits (3DIC), RISC-V vector extensions, photonic accelerators, quantum and high-performance computing (HPC) coupling, FPGAs, HLS, open chiplet ecosystems, and domain-specific processors.
RISC-V appeared prominently as an instruction set architecture (ISA), especially where openness, customization, and verification interact. It appeared in open-source cores such as Rocket, BOOM, XiangShan, and Snitch; in vector-extension verification; in processor fuzzing; in cryptographic accelerators; in SoC security; and in lightweight wearable systems. This is consistent with the broader RISC-V opportunity: the open ISA makes architectural experimentation easier but also increases the verification responsibility for each implementation and extension.
The Cornell University keynote by Zhiru Zhang on accelerator design and programming described a familiar problem. Performance and efficiency increasingly come from specialized accelerators, but there is a widening gap between how accelerators are designed and how they are programmed. That gap is an EDA problem because the design flow needs to connect architecture, programmability, verification, performance estimation, and software maintenance.
Quantum was also treated as a systems topic rather than as isolated physics. Nvidia’s Bettina Heim described NVQLink, coupling GPU real-time processing with quantum processors at sub-microsecond latency for error correction and control. A focus session covered MLIR, QIR, and intermediate representations for quantum-classical compilation. The point for EDA is that quantum-classical systems create problems in compilation, control, architecture, timing, and verification. These are recognizable EDA problems, even if the devices are different.
Verification and security become first-class constraints
The third major theme was the convergence of verification, security, and open ecosystems. DATE treated verification and security as part of the same scalability problem. As systems become heterogeneous, AI-driven, and assembled from chiplets and third-party IP, functional correctness, security validation, explainability, and certification overlap.
The verification panel (session FS06), “Who Is Best Suited to Do Verification?”, framed rising re-spin rates and verification cost as a central industry problem. The hardware security focus session argued that heterogeneous SoCs, CPUs, and accelerators create attack surfaces too large for manual analysis alone. The AI-for-verification thread included coverage-driven test generation, reinforcement-learning-guided concolic (concrete + symbolic) testing, processor fuzzing, SystemVerilog Assertion (SVA) generation, and agentic security assistants.
This work is still emerging. However, the direction is clear: verification needs more automation, and that automation needs to be tool-grounded, measurable, and traceable. A generated test, assertion, or security recommendation is useful only if it connects to coverage, formal results, simulation results, reviewable traces, or other engineering evidence.
AI for RTL and verification
A specialist but important cluster was AI applied to register-transfer level (RTL) design. This included LLM-generated Verilog, closed-loop RTL repair, multi-agent design flows, HLS-to-RTL pathways, and benchmark contamination. The volume was small, roughly 2-3% of the technical program, but the technical direction was important.
The field has moved beyond asking an LLM to write Verilog. The more credible flows put verification in the loop: generate RTL, run checks, estimate correctness, repair errors, and preserve equivalence. VeriBToT (session TS07.1) combined self-decoupling and self-verification for modular Verilog generation.
EstCoder (TS22.9) used a collaborative agent flow with a functional-estimation agent scoring generated RTL before accepting or correcting it, reporting up to 9% improvement in RTL correctness. LiveVerilogEval (TS29.1) addressed benchmark contamination and found that LLM performance degraded significantly on dynamically generated benchmarks, suggesting that static benchmarks may have overstated current capability.
The sponsor-hosted executive session on EDA agentic AI provided a useful industrial view. Agentic AI is moving from demonstrations toward production flows with RTL checking and fixing, specification-to-testbench construction, and synthesis-to-GDSII flows identified as near-term use cases. The hard constraints are determinism, traceability, IP protection, tool integration, and signoff confidence.
The AI-for-verification work showed the same pattern. The best examples were closed-loop and tool-grounded, not generic prompt-based test generation. ChatTest (TS22.7) used a multi-agent LLM framework with a structured Verification Description Language (VDL), retrieval-augmented generation, and a coverage-feedback loop. It reported 1.46 times higher toggle coverage, 2.28 times higher line coverage, and a 24.23% improvement in functional coverage across 20 complex RTL designs. CoverAssert (TS40.10) used functional coverage feedback to guide LLM generation of SVAs.
Processor fuzzing gave another important example. SimFuzz (TS40.6) applied similarity-guided block-level mutation to RISC-V processors Rocket, BOOM, and XiangShan, finding 17 bugs, including 14 previously unknown issues and seven CVE-assigned bugs affecting decode and memory units.
This connects to GhostWrite (CVE-2024-44067), a RISC-V vector-extension implementation bug in T-Head XuanTie processors that allowed unprivileged code to write arbitrary physical memory. GhostWrite was not a side channel. It was a direct architectural flaw, and the mitigation required disabling the vector extension. This is a strong argument for structure-aware, security-directed processor verification.
AI-generated SVAs also appeared in several forms. PALM (TS07.6) investigated LLM assistance for valid SVAs in security verification, while CoverAssert (TS40.10) and AutoAssert (TS02.5) extended coverage-driven, LLM-assisted assertion generation with formal verification feedback. This seems to be the right near-term role for AI in formal verification: assistant and accelerator, not replacement for formal reasoning.
Agentic AI and structured specifications
The most visible emerging pattern in AI+EDA was the movement from single-shot prompting to multi-agent, tool-grounded, feedback-driven workflows. The focus session (FS07) “From Concept to Silicon: End-to-End Agentic AI for Smarter Chip Design” made this explicit across HLS, physical design, testing, and security verification.
The Nexus paper presented by PrimisAI (session SD01.1) framed the engineering problem clearly. EDA workflows need reliability and traceability, and weak coordination and unstructured communication are bottlenecks for multi-agent deployment. Nexus reported 100% accuracy on RTL generation tasks in VerilogEval-Human and nearly 30% average power savings on Verilog-to-routing (VTR) timing-optimization benchmarks.
AgenticTCAD (TS41.6) applied a natural-language-driven multi-agent system to TCAD device optimization, achieving IRDS-2024 specifications for a 2-nm nanosheet FET within 4.2 hours, compared with 7.1 days for human experts.
The key point is that agentic AI wraps the LLM in an engineering process. The flow is to decompose the task, call EDA tools, inspect reports, measure quality, repair errors, and iterate. That is much more credible for EDA than single-shot generation.
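The loop described above can be sketched in a few lines. This is an illustrative toy, not any vendor's flow: `generate_rtl`, `run_lint`, and `repair` are hypothetical stand-ins for an LLM call, an EDA tool invocation, and a repair agent, and the "lint" check is deliberately trivial. The point of the sketch is the shape of the process: tool feedback drives iteration, and every step leaves a reviewable trace.

```python
# Minimal sketch of an agentic generate-check-repair loop.
# All function names here are hypothetical stand-ins, not a real API.

def run_lint(rtl: str) -> list[str]:
    """Toy 'EDA tool': flags RTL that lacks a closing endmodule."""
    return [] if "endmodule" in rtl else ["missing endmodule"]

def generate_rtl(spec: str) -> str:
    """Stand-in for an LLM call; deliberately emits incomplete RTL."""
    return f"module {spec}(input clk);"  # no endmodule -> lint error

def repair(rtl: str, errors: list[str]) -> str:
    """Stand-in repair agent: fixes the one defect the toy lint finds."""
    return rtl + "\nendmodule" if "missing endmodule" in errors else rtl

def agentic_flow(spec: str, max_iters: int = 3) -> tuple[str, list[str]]:
    """Decompose, call the tool, inspect the report, repair, iterate."""
    rtl = generate_rtl(spec)
    trace = []                          # reviewable trace of every step
    for i in range(max_iters):
        errors = run_lint(rtl)
        trace.append(f"iter {i}: {len(errors)} error(s)")
        if not errors:                  # tool-grounded exit condition
            break
        rtl = repair(rtl, errors)
    return rtl, trace

rtl, trace = agentic_flow("counter")
print(trace)  # ['iter 0: 1 error(s)', 'iter 1: 0 error(s)']
```

The design choice that matters is that acceptance is decided by a tool report, not by the generator's own confidence, which is exactly the determinism and traceability constraint the executive session identified.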
Two structured-language examples were also notable. The first was the Universal Specification Format (USF, session TS24.3), a formal specification language with unambiguous syntax and semantics, able to generate formal properties and behavioral simulation models.
The second was Verification Description Language (VDL), introduced in ChatTest (TS22.7), which captures I/O pins, timing, functional coverage targets, stimulus sequences, checkpoints, and boundary conditions in YAML format. These are early signs that AI-assisted EDA may require better intermediate representations, not only better models.
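A structured specification of this kind can be machine-checked in a way prose cannot. The sketch below shows a hypothetical VDL-style spec as a Python dictionary; the field names are loosely modeled on the categories the article lists (I/O pins, timing, coverage targets, stimulus, checkpoints) and do not reflect ChatTest's actual schema.

```python
# Hypothetical VDL-style spec; field names are illustrative only.
spec = {
    "module": "fifo",
    "io_pins": [{"name": "clk", "dir": "in", "width": 1},
                {"name": "data_out", "dir": "out", "width": 8}],
    "timing": {"clock_period_ns": 10},
    "functional_coverage": ["fifo_full", "fifo_empty"],
    "stimulus": ["reset", "push_until_full", "pop_until_empty"],
    "checkpoints": ["no_overflow_after_full"],
}

REQUIRED = {"module", "io_pins", "functional_coverage", "stimulus"}

def validate(spec: dict) -> list[str]:
    """Report missing required sections -- the kind of mechanical
    completeness check an unstructured prose spec cannot support."""
    return sorted(REQUIRED - spec.keys())

print(validate(spec))  # [] -> all required sections present
```

Even this trivial validator illustrates the argument: once the specification is an intermediate representation rather than free text, agents can check it, query it, and derive testbenches from it deterministically.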
European sovereignty and the next EDA wave
European semiconductor sovereignty was an undercurrent throughout DATE 2026, but it needs to be framed carefully. Semiconductor sovereignty is not about becoming completely self-sufficient; it is about reducing dangerous dependencies on other geographic regions. There are three separate questions: sovereignty in processor design, sovereignty in EDA tools, and sovereignty in next-generation AI+EDA capability.
For processor design, the RISC-V activity, open chiplet ecosystems, and European design-enablement platforms such as the cloud-based makeChip point in a useful direction. However, first-time-right silicon still depends heavily on commercial EDA tools, qualified PDKs, verified sign-off flows, and high-quality verification IP. A realistic sovereignty strategy means sovereign design competence and secure access to the best tools, not an assumption that open-source-only flows can replace the commercial stack.
For EDA-tool sovereignty, open-source EDA is strategically valuable for education, research, reproducibility, open PDKs, and lowering barriers for small and medium-sized enterprises (SMEs) and universities. However, advanced-node commercial EDA represents decades of investment in algorithms, foundry relationships, sign-off maturity, and customer regression infrastructure.
The keynote by Luca Benini of the University of Bologna in Italy on democratizing silicon made the positive case for broader access, but open-source EDA is a supplemental and educational platform, not a near-term substitute for advanced-node sign-off.
The more compelling opportunity is next-generation AI+EDA. DATE 2026 showed that this area is still being defined. Agentic workflows, AI-assisted verification, coverage-driven test generation, formal and SVA support, open benchmarks, trustworthy AI, structured specification languages, and secure on-premise model deployment are all areas where research depth and engineering discipline matter.
Europe has strong universities, safety-critical application domains, active RISC-V and open-source hardware communities, and the policy framework of the EU Chips Act. That combination is well suited to shaping the next EDA wave.
The strongest form of European sovereignty is not isolation. It is capability: the ability to design, verify, secure, and understand the systems Europe depends on. DATE 2026 showed that the future of EDA will require new compute architectures, better verification methods, more automation, structured specifications, stronger security methods, and a clear understanding of where AI helps and where it introduces new risks. These are exactly the problems that a research-led, ecosystem-focused community should be able to address.
DATE 2026 was therefore not just an EDA conference about AI in chip design. It was a useful indication that the next phase of EDA will be defined by the interaction between AI, heterogeneous architectures, verification, security, and trust. The next step is to turn these research directions into reliable engineering flows.
Simon Davidmann is an EDA industry pioneer and serial technology entrepreneur with over 40 years of experience in simulation and verification. His career has been instrumental in shaping the foundational languages and methodologies used in modern chip design, particularly those now critical for AI/ML hardware. Davidmann was the co-creator of Superlog, which became SystemVerilog. After selling Imperas to Synopsys in 2023 and serving as Synopsys VP for processor modeling and simulation, he is now an AI + EDA researcher at the University of Southampton, UK.
Editor’s Note
DATE 2026 was held on 20-22 April 2026 in Verona, Italy. The conference program is available at https://www.date-conference.com/programme. Specific session labels are noted in parentheses in the article.
Related Content
- AI features in EDA tools: Facts and fiction
- EDA’s big three compare AI notes with TSMC
- What is the EDA problem worth solving with AI?
- DAC 2025: Towards Multi-Agent Systems In EDA
- How AI-based EDA will enable, not replace the engineer
The post The next EDA wave: Lessons from DATE 2026 appeared first on EDN.