Classic automation in Functional verification

There are various areas of functional verification that require automation beyond the scope of the standard tool chains bundled with simulators. For some of these areas there is a lack of standard third-party tools, or there is resistance to adopting external tools for legacy reasons. Whatever the reasons, verification engineers often roll up their sleeves and create in-house solutions for these problems.

Verification engineers mostly rely on Perl, Python, Tcl and shell scripting for their automation. Some venture into C, C++ and Java, but they are rare. Let’s not forget they have a full-time verification job to take care of as well.

Let’s look at a few popular problems that often see in-house automation. All of these in-house automations can be broadly classified into three categories.

Data mining and aggregation

Regression management

Most companies have some proprietary solution for this.

This problem involves launching regressions on compute farms, periodically monitoring to find the status of the runs, and finally publishing the status at the end of the regression.
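The launch-monitor-publish loop above can be sketched in Python. This is a minimal illustration, not a real farm interface: test names, the status values and the summary format are all invented for the example, and the actual farm submission command (LSF, SGE, etc.) is left as a comment.

```python
from dataclasses import dataclass

@dataclass
class RegressionRun:
    test: str
    seed: int
    status: str = "QUEUED"   # QUEUED -> RUNNING -> PASS/FAIL

def launch(tests):
    """Launch each (test, seed) pair on the compute farm.

    The real submit call (e.g. subprocess.run(["bsub", ...]) on an LSF
    farm) is omitted; here we only record the runs to be tracked.
    """
    return [RegressionRun(test, seed) for test, seed in tests]

def publish(runs):
    """Summarize pass/fail counts at the end of the regression."""
    summary = {}
    for r in runs:
        summary[r.status] = summary.get(r.status, 0) + 1
    return summary
```

A periodic monitor would poll the farm, update each run's `status`, and call `publish` once everything has finished.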

Verification plan management

All three big EDA vendors bundle some form of solution with their simulators. These are quite popular as well. But sometimes, for tighter integration with the in-house regression management system, verification engineers build custom solutions in this space.

These typically manifest as verification plans, maintained either as text or as data structures that serve as input to the regression management systems.

These help in maintaining the tests, their variations, descriptions and tracking information. From this, the total number of tests, their variations, seed allocation and the tests still to be written can all be figured out.
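As a small sketch, a verification plan kept as a plain data structure can feed the regression launcher directly and answer the tracking questions above. The field names and entries here are illustrative, not a standard format.

```python
# A verification plan as a data structure (entries and keys are invented
# for illustration); this doubles as input to a regression launcher.
VPLAN = [
    {"test": "basic_rd_wr",  "desc": "Back-to-back register accesses", "seeds": 5,  "written": True},
    {"test": "error_inject", "desc": "Protocol error injection",       "seeds": 20, "written": False},
]

def plan_stats(plan):
    """Total regression runs (tests x seeds) and tests still to be written."""
    total_runs = sum(entry["seeds"] for entry in plan)
    todo = [entry["test"] for entry in plan if not entry["written"]]
    return total_runs, todo
```

In practice such plans often live in YAML or a spreadsheet export, but the idea is the same: one source of truth for both tracking and launching.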

Bug statistics

Bug management can be a third-party or an in-house tool. As this is a critical tool for quality management, companies often invest in building their own custom tool to suit their product lines and flows. Of course, this is out of reach of verification engineers and falls into the typical software engineering category.

These bug management systems provide a browser-based interface to access bug information and statistics. Along with that, they expose web service interfaces such as SOAP.

To extract various bug statistics, utilities are frequently created using the SOAP interface. Eventually these also get integrated with the verification plan and regression management systems to get a clear idea of current status. Such integration helps, for example, in distinguishing regression failures that already have bugs open from those where debugging is still in progress.
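The integration step can be sketched as a simple join. Assume the open bugs have already been fetched over the bug tracker's SOAP interface (e.g. with a SOAP client library) into a signature-to-bug-ID map; the data shapes here are assumptions for illustration.

```python
def classify_failures(failures, open_bugs):
    """Split regression failures into 'bug already filed' vs 'debug in progress'.

    failures:  {test_name: failure_signature} from the regression system
    open_bugs: {failure_signature: bug_id} fetched via the bug tracker's
               web service interface (shape is illustrative)
    """
    filed, in_debug = {}, []
    for test, signature in failures.items():
        if signature in open_bugs:
            filed[test] = open_bugs[signature]
        else:
            in_debug.append(test)
    return filed, in_debug
```

The matching key in real flows is usually a normalized failure signature (first error message, assertion name, etc.) rather than the raw log line.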

All these types of automation bring clarity and transparency.

Utilities for these require a good understanding of Linux commands, Make, file I/O and regular expressions.

Code generation

A lot of the code written in high-level verification languages and verification methodologies is boilerplate. It has a lot of repetition. Along with that, there is some code that needs to be parameterized but cannot be expressed with language-provided constructs alone.

Register abstraction layer (RAL)

RTL register block generation is common practice, and many companies had custom flows for it well before verification methodologies came up with RAL.

Naturally, the same register information was leveraged to generate the RAL code for verification. Of course, verification purists may not like the fact that design code and verification code are generated from the same source information.

UVM RAL code, functional coverage for registers and some basic register read/write tests can be automatically generated.
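A tiny sketch of the idea: turn a register description into a UVM RAL class. The register spec format here is invented for the example, and a real generator would emit reset values, access policies per field, coverage and tests from the same source.

```python
# Illustrative register description: (field name, lsb, width) per field.
REG = {"name": "ctrl", "fields": [("enable", 0, 1), ("mode", 1, 2)]}

def gen_ral_class(reg):
    """Emit a minimal uvm_reg subclass for one register description."""
    lines = [f"class {reg['name']}_reg extends uvm_reg;"]
    for fname, _, _ in reg["fields"]:
        lines.append(f"  rand uvm_reg_field {fname};")
    lines.append("  virtual function void build();")
    for fname, lsb, width in reg["fields"]:
        lines.append(f'    {fname} = uvm_reg_field::type_id::create("{fname}");')
        # configure(parent, size, lsb_pos, access, volatile, reset,
        #           has_reset, is_rand, individually_accessible)
        lines.append(f'    {fname}.configure(this, {width}, {lsb}, "RW", 0, 0, 1, 1, 0);')
    lines.append("  endfunction")
    lines.append("endclass")
    return "\n".join(lines)
```

Production generators typically read the register spec from IP-XACT, XML or a spreadsheet rather than an inline dict, but the emit loop looks much the same.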

Basic UVM environment

This is typically done in one of two ways.

The first, simple approach is to generate code to get started with the initial environment development. Here, placeholder classes for the complete UVM environment are generated. Once generated, users add their actual code inside the generated skeleton. Automation is limited to a one-time generation at the beginning.

The second approach is less complete but more practical. Here a partial or base environment is generated. The base environment includes the regular stuff: generating SystemVerilog interfaces, hooking them up to the right agents, instantiating agents, connecting TLM ports between them, etc. These base environments are then extended, and the additional functionality that is not easy to automate is added by hand.

Assertions and functional coverage generation

Assertions and functional coverage for regular, highly parameterized RTL designs are also generated automatically to keep up with design changes. Examples of such designs include anything with multiple ports: switches, hubs, network-on-chip (NoC), etc.
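For a multi-port design, the generator is often just a loop over the port count emitting the same property per port. The signal and property names below are invented for the sketch; the point is that regenerating for a new port count is free.

```python
def gen_port_assertions(num_ports):
    """Emit one grant-implies-request assertion per port.

    Signal names (clk, req, gnt) are illustrative; a real generator would
    take them from the design's interface description.
    """
    out = []
    for p in range(num_ports):
        out.append(
            f"assert_no_grant_wo_req_{p}: assert property "
            f"(@(posedge clk) gnt[{p}] |-> req[{p}]);"
        )
    return "\n".join(out)
```

The same loop structure works for per-port covergroups, so assertions and coverage stay in lock-step with the design parameter.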

Knowingly or unknowingly, the concepts of design patterns are used in code generation.

High-level specification model based functional coverage generation is another approach that can help you capture the intent in executable form. It’s a first baby step towards specification-to-functional-coverage generation.

Linting

Yes, the checklist is one of the important mechanisms to ensure everything is completed. Checklists are tied to important milestones. Some checklist items need to be repeated at multiple milestones. Some of these items can be very specific to an organization and hence require automation in the form of linting to repeat them reliably.

The term linting is used here in a broad sense. It can be code linting or linting anything, basically performing some checks.

Some examples are:

  • Enforcing organization-specific coding guidelines or identifying potential gotchas in verification code (fork usage, for instance)
  • Identifying usage of invalid, unsupported or deprecated command line arguments in test command lines
  • Identifying TODOs in the code. A TODO can be represented in different ways; capture all of them and check if any are present
  • Checking that there are no compile-time or run-time warnings
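Two of the checks above can be sketched in a few lines. The TODO spellings and the deprecated plusargs below are invented placeholders; a real lint would load them from an organization-specific configuration.

```python
import re

# Common spellings of "unfinished work" markers (illustrative list).
TODO_RE = re.compile(r"\b(TODO|FIXME|XXX)\b")
# Deprecated test command-line arguments (illustrative list).
DEPRECATED_ARGS = {"+old_timeout", "+legacy_mode"}

def lint_line(lineno, line):
    """Return (line number, message) tuples for every issue on one line."""
    issues = []
    if TODO_RE.search(line):
        issues.append((lineno, "TODO marker found"))
    for arg in DEPRECATED_ARGS:
        if arg in line:
            issues.append((lineno, f"deprecated argument {arg}"))
    return issues
```

Wrapping this in a loop over files (and wiring the exit code into Make or the regression flow) turns a checklist item into a check that runs at every milestone for free.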

Utilities for these are driven by file I/O and regular expressions.

Beyond these, many temporary tasks requiring analysis of large quantities of data, such as CDC reports, reset connectivity or coverage reports, also see partial automation to ease the task and reduce the chance of something being missed.
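Such report-mining scripts usually boil down to one regular expression and a threshold. The report line format assumed below (block name followed by a percentage) is made up; real coverage report formats vary per tool.

```python
import re

# Assumed report line format: "<block_name>  <score>%" (illustrative).
LINE_RE = re.compile(r"^(\w+)\s+(\d+\.\d+)%")

def low_coverage(report_text, threshold=90.0):
    """Return (block, score) pairs below the threshold from a text report."""
    hits = []
    for line in report_text.splitlines():
        m = LINE_RE.match(line.strip())
        if m and float(m.group(2)) < threshold:
            hits.append((m.group(1), float(m.group(2))))
    return hits
```

Even a throwaway script like this beats eyeballing a thousand-line report, which is exactly the point of these partial automations.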

Getting a grip on basic scripting, regular expressions and data structures can help improve your productivity and your team’s.

This might be a good starting point to improve your programming skills: 600 free online courses on programming
