
Against a tsunami of AI slop
Sigasi, your first line of defense
A recent GPTZero investigation should make every engineer pause. They scanned 4,841 accepted NeurIPS 2025 papers and reported 100+ confirmed hallucinated citations spread across 50+ papers, issues that slipped past “3+ reviewers” per paper.
The NeurIPS wake-up call: when “looks right” ships anyway
That’s not just an academic gotcha. It’s a symptom of a bigger shift: review systems, human ones especially, are getting overwhelmed by AI volume. GPTZero points to a “submission tsunami” fueled by generative AI and publication pressure, noting that NeurIPS submissions rose by more than 220% between 2020 and 2025.
If this can happen in one of the most prestigious AI conferences in the world, it’s worth asking:
What does the “AI slop” problem look like in chip design?
In hardware, “AI slop” isn’t fake citations — it’s “vibe RTL”
In research papers, the failure mode is “vibe citing”: citations that look plausible but don’t exist. In RTL, the equivalent failure mode is “vibe RTL”: code that looks plausible, sometimes even compiles, but is wrong in ways that only show up later—during integration, simulation bring-up, or (worst) in the lab.
And RTL is uniquely sensitive to this, because:
- The “cost of wrong” is huge (time, respins, missed windows).
- Debug is inherently unpredictable.
- Many issues aren’t syntax errors; they’re semantic: mismatched interfaces, wrong widths, wrong resets, wrong CDC assumptions, and so on (see the sketch below).
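A minimal sketch of what that looks like, assuming a hypothetical packet counter (all module and signal names are invented): the code below compiles cleanly yet carries two of these failure modes.

```systemverilog
module pkt_counter (
  input  logic       clk_a,      // producer clock domain
  input  logic       clk_b,      // consumer clock domain
  input  logic       rst_n,
  input  logic       pkt_valid,  // asserted in the clk_a domain
  output logic [7:0] pkt_count   // read in the clk_b domain
);
  logic [15:0] count_q;

  // Counts packets in the clk_a domain.
  always_ff @(posedge clk_a or negedge rst_n) begin
    if (!rst_n)         count_q <= '0;
    else if (pkt_valid) count_q <= count_q + 1'b1;
  end

  // Bug 1: a 16-bit value assigned to an 8-bit port: legal, silently truncated.
  // Bug 2: a multi-bit value sampled straight into another clock domain,
  //        with no synchronizer, no gray coding, no handshake.
  always_ff @(posedge clk_b) begin
    pkt_count <= count_q;
  end
endmodule
```

Nothing here is a syntax error, and a short simulation may even look fine, which is exactly why this class of bug surfaces so late.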
So the question becomes: how do you safely use probabilistic code generation in a deterministic engineering domain?
The right mental model: AI is a firehose — you need a filter, not a ban
Banning AI rarely sticks. People will use Copilot, ChatGPT, and internal LLMs anyway. A more realistic strategy: let AI generate drafts, and put a deterministic wall in front of production RTL.
That wall should do three things:
- Catch errors early (while context is still fresh)
- Standardize quality (rules a team can agree on)
- Keep feedback reproducible (so a change is a change, not a vibe)
This is exactly where Sigasi fits.
Sigasi Visual HDL: turning probabilistic drafts into deterministic RTL
Sigasi Visual HDL positions itself as a solution for creating, integrating, and validating HDL specifications inside VS Code, where engineers already work. And it’s explicitly designed to help teams “shift left” by validating earlier in the flow than simulation or synthesis.
Here are the practical “defense wall” mechanisms.
1) Real-time semantic checks while you type (not later)
AI slop thrives on delayed feedback. If it takes 30 minutes to learn a snippet is wrong, people keep stacking wrongness. Sigasi runs syntax validation + semantic checks (“linting”) at type-time, and many issues can be fixed with Quick Fixes. That’s the first layer of the wall: fast deterministic feedback against AI-generated (or copied) RTL before it pollutes the project.
Even the VS Code Marketplace listing frames this explicitly: you can import existing code, “whether from legacy projects, HLS- or AI-generated”, and check it in your project context.
Why this matters for AI usage:
- You get immediate “this doesn’t type-check / this doesn’t bind / this doesn’t map” feedback (see the sketch below).
- Your project context matters (configs, libraries, preprocessors), not just a snippet in isolation.
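To make that concrete, here is a hedged sketch of the kind of pasted draft where immediate binding and type feedback pays off; the fifo stub and every name in it are hypothetical, not from any real library.

```systemverilog
// Minimal FIFO stub so the port names below are concrete.
module fifo #(parameter int DEPTH = 8) (
  input  logic        clk,
  input  logic        rst_n,
  input  logic [31:0] din,
  output logic [31:0] dout
);
  // ...implementation elided...
endmodule

module top (
  input logic clk,
  input logic rst_n
);
  logic [31:0] data_bus;

  fifo #(.DEPTH(16)) u_fifo (
    .clk  (clk),
    .rstn (rst_n),    // doesn't bind: the FIFO's port is named rst_n
    .din  (data_bus),
    .dout (fifo_out)  // undeclared: becomes a 1-bit implicit net,
                      // silently truncating the 32-bit dout
  );
endmodule
```

An analyzer with full project context can flag both marked lines as you type; a snippet checker that never elaborates u_fifo against its declaration cannot.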
2) Team rules that turn “style” into enforceable guardrails
When AI writes code, it tends to produce “locally plausible” output that easily violates team conventions and project rules. That creates review churn and integration friction. Sigasi includes a large catalog of linting rules and checks (spanning VHDL + SystemVerilog/UVM) and supports Quick Fixes for many problems. This lets teams define what “acceptable RTL” means before code review, not during.
What the Sigasi defense wall does here (with an example after this list):
- Your ruleset becomes the schema AI output must conform to.
- Review becomes about architecture and correctness—not policing avoidable violations.
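For instance, the following hypothetical snippet (the decoder module is invented for illustration) compiles cleanly yet trips two rules found in most RTL lint catalogs.

```systemverilog
module decoder (
  input  logic [1:0] sel,
  output logic [3:0] onehot
);
  // Two violations a shared ruleset catches deterministically:
  // - a general-purpose always block where always_comb would state the intent
  // - an incomplete case with no default: sel == 2'b11 leaves onehot
  //   unassigned, inferring a latch in what was meant to be combinational logic
  always @(sel) begin
    case (sel)
      2'b00: onehot = 4'b0001;
      2'b01: onehot = 4'b0010;
      2'b10: onehot = 4'b0100;
    endcase
  end
endmodule
```

The fix (always_comb plus a default arm) is mechanical, which is exactly why it belongs in an enforced ruleset rather than in a reviewer’s comment queue.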
3) Mixed-language reality checks (where “vibe RTL” gets dangerous)
One of the highest-risk areas for AI-generated HDL is language boundaries—VHDL ↔ SystemVerilog, wrappers, naming conventions, case sensitivity, keyword collisions, etc. Sigasi Visual HDL is explicitly a VHDL and SystemVerilog analysis engine in VS Code. In practice, that means it can help you keep cross-language integration sane—exactly where small “looks fine” mistakes become late-stage failures.
This is critical for FPGA → ASIC teams: many teams prototype in one language and evolve the system while pulling in IP (and verification) from another.
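As a hedged illustration (both the filter_core entity and the wrapper below are invented for this sketch), consider a VHDL block instantiated from SystemVerilog:

```systemverilog
// The VHDL side, shown as a comment to keep this sketch in one language:
//
//   entity filter_core is
//     port (
//       Clk      : in  std_logic;
//       Data_In  : in  std_logic_vector(7 downto 0);
//       Data_Out : out std_logic_vector(0 to 7)  -- ascending range
//     );
//   end entity;

// A plausible-looking SystemVerilog wrapper around it:
module filter_wrap (
  input  logic       clk,
  input  logic [7:0] din,
  output logic [7:0] dout
);
  // Cross-language traps that "look fine" in isolation:
  // - VHDL identifiers are case-insensitive, SystemVerilog's are not;
  //   whether .data_in binds to Data_In depends on the tool's mapping rules
  // - Data_Out has an ascending (0 to 7) range; wiring it straight to a
  //   descending [7:0] vector is a classic recipe for reversed bit order
  filter_core u_core (
    .clk      (clk),
    .data_in  (din),
    .data_out (dout)
  );
endmodule
```

Neither issue is visible from the SystemVerilog file alone; you need analysis that sees both sides of the boundary in one project context.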
4) AI can be inside the workflow — but it shouldn’t be the judge
Sigasi doesn’t pretend AI is going away. The key is the division of labor:
- AI helps generate ideas and drafts
- Deterministic checks decide what’s acceptable to merge
That’s the difference between “AI as author” and “AI as assistant under guardrails”.
5) Bring the wall into CI so it’s not “just one engineer’s editor”
If the defense only exists on one person’s machine, AI slop will still land in main. Sigasi supports automation hooks in VS Code and can integrate with task runners to streamline repetitive design-flow actions. And there’s a Sigasi CLI (in the Enterprise Edition) intended to bring the same checks into terminal and CI workflows.
The defense wall works at different levels:
- The editor catches issues at creation time.
- CI catches issues at merge time.
- Both run the same checks.

A practical playbook: “AI allowed, but only through the wall”
If you want a simple policy that works in real teams:
- AI may generate RTL drafts (snippets, wrappers, boilerplate)
- Everything must pass Sigasi checks + team rules before review
- CI enforces the same rules (so quality is not optional)
- Review focuses on architecture + correctness, not lint and style debt
This is how you turn AI from a chaos multiplier into a throughput multiplier.
The tsunami isn’t the point — your defenses are
GPTZero’s NeurIPS findings are a warning about modern review pipelines under strain. Chip design teams are facing the same structural pressure—more output, more speed, more reuse, more AI assistance.
The best response isn’t panic or prohibition. It’s tooling that makes quality continuous, deterministic, and team-scalable.
That’s what Sigasi Visual HDL is built for: a professional HDL workflow where AI can help you write faster, without letting “AI slop” quietly ship into silicon.
Our support engineers, hands-on people with extensive FPGA and ASIC design and verification experience, are happy to show how “shift-left” is done in our “AI in Chip Design” webinar on Wednesday, 24th of March. Two sessions are available: 9:00AM CET / 1:30PM IST / 5:00PM JST, and 9:00AM PST / 12:00PM EST / 6:00PM CET. Book your seat now and learn how to safely introduce AI-generated code into your design. Select your preferred time from the dropdown on the form below and register now.
2026-01-28, last modified on 2026-01-28