Category: Automation in Functional verification

  • Register verification: Why are we still missing the issues?

    The RISC-V open virtual platform simulator quotes: “Silicon without software is just sand.” Well, it’s true, isn’t it?

    For many design IPs, the programming interface to the software is a set of registers. The registers contained in a design IP can be broadly divided into three categories.

    • Information registers: registers that provide static information about the design IP. Examples include vendor ID, revision information and capabilities of the design
    • Control registers: registers that allow controlling the behavior or features of the design IP. Examples include enable/disable controls, thresholds and timeout values
    • Status registers: registers that report various events. Examples include interrupt status, error status, link operational status and faults

    The software uses the information registers for the initial discovery of the design IP. It then programs a subset of the control registers in a specific order to make the design IP ready for operation. During operation, the status registers let the software figure out whether the design IP is performing as expected or needs attention.

    Register verification

    From a verification standpoint, registers need to be looked at from two points of view:

    • Micro-architecture point of view
    • Requirement specification point of view

    The micro-architecture point of view focuses on the correctness of the register structure's implementation, which includes items such as the following:

    • Is each register bit property implemented as specified: read-only, read-write, write-to-clear, read-to-clear or write-1-to-clear
    • Is the entire register address space accessible for both reads and writes
    • If any byte enables are used, are they working correctly
    • Are all possible read and write sequences operational
    • Does protected register behavior work as expected

    The requirements point of view focuses on the correctness of the functionality provided by the registers, which includes items such as the following:

    • Does the power-on reset value match the value defined by the specification
    • For all control registers, do the programmed values have the desired effect
    • When the events corresponding to the different status registers take place, are they reflected correctly
    • Are status register values that need to be retained through reset cycling actually retained
    • Are registers that need to be restored to proper values through power cycling restored correctly

    Micro-architecture implementation correctness is typically verified by a set of generated tests. Typically a single automation addresses RTL register generation, UVM register abstraction layer (RAL) generation and the associated test generation. These automations also tend to generate register documentation, which can serve as a programming guide for both the verification and software engineering teams.
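    As an illustration, such generated tests often wrap UVM's built-in register sequences. The following is a minimal sketch, assuming a register model handle regmodel (a uvm_reg_block) created and mapped by the environment; the task wrapper is purely illustrative:

        import uvm_pkg::*;

        // Run UVM's built-in register checks against a generated register model.
        task automatic run_builtin_reg_checks(uvm_reg_block regmodel);
          uvm_reg_hw_reset_seq reset_seq;  // checks power-on reset values
          uvm_reg_bit_bash_seq bash_seq;   // bashes each bit and checks its access policy

          reset_seq = uvm_reg_hw_reset_seq::type_id::create("reset_seq");
          reset_seq.model = regmodel;
          reset_seq.start(null);           // built-in register sequences run on a null sequencer

          bash_seq = uvm_reg_bit_bash_seq::type_id::create("bash_seq");
          bash_seq.model = regmodel;
          bash_seq.start(null);
        endtask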

    Functional correctness of the registers is the more challenging area. The information category of registers is typically covered by initial-value checking tests. Control and status register functional correctness is spread across various tests. Although some tests explicitly verify register functionality, many tests that cover it are not really verifying only that: they focus on higher-level operational correctness and, in doing so, exercise the control and status registers. Hence they verify them indirectly.

    In spite of all this verification effort, register blocks still end up with issues that are found late in the verification cycle or during silicon bring-up.

    Why are we still missing the issues?

    Register functionality evolves throughout project execution. Typical changes take the form of additions, relocations, extensions, updates to definitions of existing registers and compaction of the register space by removing some registers.

    Automation associated with register generation eases the process of making these changes. At the same time, layers of automation can make reviews difficult, or give a false sense of security that all changes are verified by the automatically generated tests. The key point is that automation is only as good as the high-level register specification provided as input. If there are mistakes in the input, automation can sometimes mask them.

    Since register verification is spread across automated tests, register-specific tests and other feature tests, it is difficult to pinpoint what gets verified and to what extent.

    What can we do about it?

    First is the traditional form of review. This can help catch many of the issues. But considering the total number of registers and their dynamic nature, it is difficult to do this review thoroughly and repeatedly.

    We need to aid the review process by opening up the verification to questioning by designers and architects. This can be done effectively when high-level data about the register verification is available.

    We have built a register analytics app that can provide various insights about the register verification done in your simulations.

    One of the app's capabilities helped catch issues with dynamically reprogrammed registers. A subset of the control registers can be programmed multiple times during operation. As the register specification kept changing, transition coverage on a specific register that was expected to be dynamically reprogrammed was never added.

    Our register analytics app provided data on which registers were dynamically reprogrammed, how many times they were reprogrammed and which unique value transitions were seen, all available as a spreadsheet. One could quickly filter the columns for registers that were never dynamically reprogrammed. This enabled questioning why certain registers were not dynamically reprogrammed, which caught dynamically reprogrammable registers that were never actually reprogrammed. Once they were reprogrammed, some of them even led to the discovery of additional issues.

    We have many more micro-architecture stimulus coverage analytics apps that can quickly provide useful insights about your stimulus. The data is available per test as well as aggregated across the complete regression. Information from third-party tools can additionally provide some level of redundancy to the verification effort, catching any hidden issues in the automations already in use.

    If you are busy, we also offer services to set up our analytics flow covering both functional and statistical coverage, run your test suites and share the insights that can help you catch critical bugs and improve your verification quality.

  • SystemVerilog: Transition coverage of different object types using cross

    Tudor Timisescu, also known in the verification community as the Verification Gentleman, posted this question on Twitter.

    His question was: can we create transition coverage using a cross between two different types of objects? He named it heterogeneous cross.

    His requirement has a very useful application in CPU verification: covering transitions between different instructions. For RISC-V (and basically all other ISAs), different instructions have different formats, so you end up with cases where such heterogeneous transitions occur.

    So let's jump into understanding the question further. It is not easy to grasp on first impression, so we will do a bit of a deep dive into the question, then look at one proposed solution and at a scalable automation using a code generation approach.

    Question:

    Can we do heterogeneous cross coverage in SystemVerilog?

    Partial screenshot of the question on Twitter.

    Tudor clarifies the question in his own words.

    Heterogeneous cross coverage is a cross between two different object types.

    Let me clarify by what I mean with heterogeneous. First, I’m trying to model some more involved form of transition coverage. I imagine the best way to do this is using cross coverage between the current operation and the previous operation.

    Assuming you only have one category of operations, O, each with a set of properties P0, P1, P2, … it’s pretty easy to write this transition coverage. Let the prime (′) denote the previous operation. The cross would be between the values of P0, P1, P2, … and P0′, P1′, P2′, …

    If you have two categories of operations, Oa and Ob, each with different sets of properties: Pa0, Pa1, …, Pam for Oa and Pb0, Pb1, …, Pbn (with m and n possibly different), the cross gets a bit more involved.

    If the current operation is of type Oa and the previous is of type Oa, then you want to cover like in the case where all operations are the same (i.e. Pa0, Pa1, …, Pa0′, Pa1′). This also goes for when both are of type Ob.

    If the current operation is of type Oa and the previous is of type Ob, then what you want to cross is something like Pa0, Pa1, Pa2, …, Pb0′, Pb1′, Pb2’, … The complementary case with the operation types switched is analogous to this one.

    I don’t see any way of writing this in #SystemVerilog without having 4 distinct covergroups (one for each type transition).

    Imagine you add a third operation type, Oc, and suddenly there are 9 covergroups you need to write.

    The more you add, the more code you need and it’s all pretty much boilerplate.

    The only thing that the test bench writer needs to provide is the definition of the cross of all properties of each operation. Since it’s not possible to define covergroup items (coverpoints and crosses) in such a way that they can be reused inside multiple covergroup definitions, the only solution I see is using macros.

    Code generation would be a more robust solution, but that might be more difficult to set up.

    Solution code snippet:

    He was kind enough to provide a solution for it as well. So what was he looking for? He was looking for an easier and more scalable way to solve it.

    Following are the two different data types that we want to cross.
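    The original snippet is not reproduced here; a minimal sketch of two such types, following the Oa/Ob notation above (all names illustrative), could look like this:

        // Two operation categories with different property sets.
        typedef struct {
          bit [1:0] pa0;
          bit       pa1;
        } op_a_t;  // category Oa with properties Pa0, Pa1

        typedef struct {
          bit [2:0] pb0;
          bit       pb1;
          bit       pb2;
        } op_b_t;  // category Ob with properties Pb0, Pb1, Pb2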

    When you create all 4 possible combinations of transition crosses, it would look as follows:
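    Again as a hedged sketch rather than Tudor's exact code: with two types you end up writing one covergroup per (previous, current) type pair, four in total. Two of them are shown below; the Oa -> Ob and Ob -> Ob cases are analogous:

        // Previous operation Oa, current operation Oa.
        covergroup cg_a_to_a with function sample(op_a_t prev, op_a_t curr);
          cp_pa0_prev: coverpoint prev.pa0;
          cp_pa1_prev: coverpoint prev.pa1;
          cp_pa0:      coverpoint curr.pa0;
          cp_pa1:      coverpoint curr.pa1;
          tr: cross cp_pa0_prev, cp_pa1_prev, cp_pa0, cp_pa1;
        endgroup

        // Previous operation Ob, current operation Oa.
        covergroup cg_b_to_a with function sample(op_b_t prev, op_a_t curr);
          cp_pb0_prev: coverpoint prev.pb0;
          cp_pb1_prev: coverpoint prev.pb1;
          cp_pa0:      coverpoint curr.pa0;
          cp_pa1:      coverpoint curr.pa1;
          tr: cross cp_pb0_prev, cp_pb1_prev, cp_pa0, cp_pa1;
        endgroup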

    I thought we could follow the precedent of the scientific community and refer to the heterogeneous cross as the “Tudor cross”, for formulating the problem and defining the solution.

    Real life use cases for Tudor cross

    Okay, before we invest our valuable time in understanding the automation: are there any real-life use cases?

    Tudor was facing this problem on a real project related to critical pieces of security. For confidentiality reasons he could not provide any more details about it. He was kind enough to share another example where this type of problem would be faced again and hence the solution would be useful.

    In Tudor’s own words: an example from the top of my head (completely unrelated to the one I was working on) where this might be useful is if you have to cover transitions of different instructions. For RISC-V (and basically all other ISAs), different instructions have different formats, so you end up with cases where you get such heterogeneous transitions.

    The same CPU will be executing all of those instructions and you can get into situations that the previous instruction busted something that will cause the current instruction to lock up, which is why you want to at least test all transitions.

    One step even further is if you also add the state of the CPU to the cross. Different parts of the state are relevant to different instructions. It could be that transition a -> b is fine in state Sa0, but is buggy in state Sa1.


  • SystemVerilog : Cross coverage between two different covergroups

    Question:

    Does SystemVerilog support cross coverage between two different covergroups?

    This was one of the questions raised on Verification Academy.

    Following is the code snippet provided by the author to clarify the question.
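    The snippet itself is not reproduced here, but based on the discussion below it was along these lines (a reconstruction sketch; signal names are illustrative):

        module q_example;
          bit [1:0] sig_a, sig_b, sig_c;

          covergroup ab1;
            a1: coverpoint sig_a;
            b1: coverpoint sig_b;
            a1b1: cross a1, b1;
          endgroup

          covergroup ab1c1;
            c1: coverpoint sig_c;
            a1b1c1: cross ab1.a1b1, c1;  // attempt to reuse a cross from another covergroup
          endgroup
        endmodule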

    Answer:

    SystemVerilog's covergroup does not support cross coverage between two different covergroups, as clarified by Dave.

    No, the above code will not compile. The cross a1b1 from covergroup ab1 is used in a different covergroup, ab1c1, where it is referenced to create the cross a1b1c1. The referencing is done in an object-oriented way (ab1.a1b1), but SystemVerilog covergroups are not object oriented. The lack of this support manifests as an inability to reuse a cross across covergroups.

    One of the key reasons for not supporting reuse of a cross across covergroups is: what if the sampling events of the covergroups are different?

    But what if they are the same, or the difference does not matter in a specific case of reuse? Why can't the cross be reused then?

    Before we get into that, the real question is: are there sufficient real-life use cases for this reuse?


  • Specification to Functional coverage generation

    Introduction

    (Note: as this is a long article, you can download it in PDF format along with the USB Power Delivery case study. Don’t worry, we don’t ask for an email address.)

    While we were presenting our whitebox functional and statistical coverage generation solution, one of the engineers asked: can it take standard specifications as input and generate the functional coverage from them?

    Figure 1: Specification to functional coverage magic possible?

    I replied “No”. It cannot.

    But then, after the presentation, I questioned myself: why not?

    No, we are still not brave enough to parse the standard specifications with natural language processing (NLP), extract the requirements and generate the functional coverage from them. But we have taken a first step in this direction. It's a baby step; some of you might laugh at it.

    We call it high-level specification model based functional coverage generation. It has some remarkable advantages. Once again, I felt this is “the” way to write functional coverage from now on.

    The idea is very simple, and I am sure some of you are already doing it: capture the specification in the form of data structures; define a bunch of APIs to filter, transform, query and traverse those data structures; and combine these executable specifications with our Python APIs for SystemVerilog functional coverage generation. Voila, a poor man's specification to functional coverage generation is ready.

    Yes, you need to learn a scripting language (Python in this case) and re-implement some of the specification information in it. That's because SystemVerilog by itself does not have the necessary firepower to get it all done. Scared? Turned off? No problem, nothing much is lost: please stop reading here and save your time.

    Adventurers and explorers surviving this hard blow, please hop on. I am sure you will fall in love with at least one thing during this ride.

    How is this approach different?

    How is this approach different from manually writing the coverage model? This is a very important question, raised by Faisal Haque.

    There are multiple advantages, which we will discuss later in the article. In my view, the single biggest advantage is making the coverage intent executable by truly connecting the high-level model of the specifications to the functional coverage. No, we are not talking about just putting specification section numbers in the coverage plan; we are talking about really capturing the specification and using it for the generation of functional coverage.

    Let me set the expectations right: this approach will not figure out your intent. The idea is to capture and preserve the human thought process behind functional coverage creation in executable form, so that it can easily be repeated when things change. That's all. It's a start, and a first step toward specification to functional coverage generation.

    Typically, functional coverage is implemented as a set of discrete, independent items. In this type of implementation, the intent and its connection to the specifications are weak to non-existent. Most of the intent gets left behind either in the Excel plan where it was written or in the form of comments in the code, neither of which can execute.

    Making intent executable

    Why is capturing intent in executable form important?

    We respect and value human intelligence. Why? Is it only for emotional reasons? No. Making human intelligence executable is the first step toward artificial intelligence.

    The ability to translate a requirements specification into a coverage plan is highly dependent on the experience and the depth of specification understanding of the engineer at the moment of writing it. If it's not captured in the coverage plan, it's lost. Even the engineer who wrote the functional coverage plan may find it difficult to remember six months later why exactly a certain cross was defined.

    This can become a real challenge during the evolution and maintenance of the functional coverage plan as the requirements specifications evolve. The engineer doing incremental updates may not have the luxury of time that the original author had. Unless the intent is executable, the quality of the functional coverage will degrade over time.

    Now, if you are building this design IP for only one chip and throwing it away afterwards, this functional coverage quality degradation may not be such a big concern.

    Let's understand this a little further with an example. USB Power Delivery supports multiple specification revisions. Say we want to cover all transmitted packets for revision x.

    In the manual approach, we would discretely list the protocol data units valid for revision x.

    For this listing you scan the specification, identify them and list them. The only way to identify them in code as belonging to revision x is through the covergroup name or a comment in the code.

    In the new approach, you can operate on all the protocol data units supported by revision x as a unit, through APIs. This is much more meaningful to readers and makes your intent executable. As we called out, the idea is to make coverage intent executable in order to make it adaptable. Let's contrast both approaches with another example.

    For example, let’s say you want to cover two items:

    • All packets transmitted by a device supporting revision 2.0
    • Intermediate reset while packets are transmitted by a device supporting revision 2.0

    If you were to write discrete coverage, you would sample the packet type and list all the valid packet types of revision 2.0 as bins. Since bins are not reusable in SystemVerilog, you would copy and paste them across these two covergroups.
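    A hedged sketch of what that duplication looks like, assuming an enumerated pkt_type and suitable sampling signals in the bench (the packet list shown is a partial, illustrative one):

        covergroup cg_rev20_tx_pkts @(posedge pkt_done);
          cp_pkt: coverpoint pkt_type {
            bins rev20_pkts[] = {GOODCRC, ACCEPT, REJECT, PS_RDY};  // listed once...
          }
        endgroup

        covergroup cg_rev20_tx_pkts_reset @(posedge pkt_done);
          cp_pkt: coverpoint pkt_type {
            bins rev20_pkts[] = {GOODCRC, ACCEPT, REJECT, PS_RDY};  // ...and copy-pasted here
          }
          cp_rst: coverpoint intermediate_reset_seen;
          pkt_x_rst: cross cp_pkt, cp_rst;
        endgroup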

    Now imagine you missed a packet type during the initial specification scan, or an errata containing one more packet type came out later: you would need to go back and add this new type in two different places.

    But with the new approach, as soon as you update the specification data structure with the new type, you are done. All queries requesting revision x automatically get the updated information, and hence all functional coverage targeted at revision x is automatically updated.

    Initially it may be easy to spot the two places where the change is required. But when you have hundreds of covergroups, it becomes difficult to reflect incremental changes in all the discrete covergroups. It is even more difficult when a new engineer has to do the update without sufficient background on the initial implementation.

    In the USB Power Delivery case study you will see how to put this concept into action.

    Benefits

    What are the benefits of this approach?

    With high-level specification model based functional coverage, the abstraction of the thought process of writing coverage moves up, freeing brain bandwidth to identify more items. This additional bandwidth can significantly improve the quality of the functional coverage plan and hence the overall quality of functional verification.

    Benefits of high-level model based functional coverage generation:

    • Intent gets captured in executable form, making the functional coverage easy to maintain, update and review
    • Executable intent makes your coverage truly traceable to the specification; it is much better than just including specification section numbers, which leads to more overhead than benefit
    • It is easy to map the coverage from a single specification model to different components' points of view (e.g. USB device or host, PCIe root complex or endpoint, USB Power Delivery source or sink)
    • It is easy to define and control the quality of coverage through the level of detail required for each feature (e.g. cover any category, cover all categories, or cover all items in each category)
    • It is easy to support and maintain multiple versions of the specifications
    • The view of the implemented coverage can be switched dynamically based on parameters to ease analysis (e.g. per speed, per revision or for a specific mode)

    Architecture

    How to go about building high-level specification model based functional coverage?

    First let's understand the major components. Following is the block diagram of the high-level specification model based functional coverage generation. We will briefly describe the role and functionality of each of these blocks. The diagram only shows the basic building blocks.

    Later we will look at case studies where we see these blocks in action, making their explanations clearer. The case studies will also guide you in implementing these blocks for your own project.

    Figure 2: Block diagram of high-level specification model based functional coverage generation

    Executable coverage plan

    The executable coverage plan is the block that actually hosts all the functional coverage items. It is the coverage plan and its implementation together.

    It implements the functional coverage items by connecting the high-level specification model, the source of information and the SV coverage APIs. The APIs utilized, the specification information accessed and the relations between the various items preserve the intent in executable form.

    The user still specifies the intent of what to cover.

    It won't read your mind, but you will be able to express your thoughts at a higher level of abstraction, much closer to the specifications, in a highly programmable environment that is much more powerful than SystemVerilog alone.

    High-level specification modeling

    This block is a combination of a set of data structures and APIs.

    The data structures capture high-level information from the specifications. They can capture the properties of different operations, state transition tables representing state machines, information about timers (when they start, stop and time out) or graphs capturing various forms of sequences. The idea is to capture the specification information that is required for the definition and implementation of the functional coverage. Choose the right form of data structure that fits the purpose; it will vary from domain to domain.

    The APIs, on the other hand, process the data structures to generate different views of the information. They can do filtering, combinations and permutations, or simply ease access to the information by hiding the complexity of the data structures. Some level of reuse is possible for these APIs across domains.

    Using this set of data structures and APIs, we are now ready to translate the coverage plan into implementation.

    Information source

    The specification data structures may define the structure of an operation, but to cover it we need to know how to identify the completion of the operation, the type of the completed operation and the current values of its properties.

    The information source provides the abstraction that binds the specification information to either the test bench or the design RTL to extract the actual values of these specification structures. This abstraction provides the flexibility to easily switch the source of the coverage information.

    Bottom line: it stores information about the sources that are either sampled for information or provide the triggers that decide when to sample.

    SystemVerilog Coverage API in Python

    Why do we need these APIs? Why can't we just write the coverage directly in SystemVerilog?

    That's because the SystemVerilog covergroup has some limitations that prevent easy reuse.

    Limitations of SystemVerilog Covergroup

    The SystemVerilog covergroup construct has some limitations that prevent its effective reuse. Key limitations include the following:

    • The covergroup construct is not completely object oriented: it does not support inheritance. You cannot write a covergroup in a base class and add, update or modify its behavior through a derived class (see the sketch after this list). This capability is important when you want to share common functional coverage models across multiple configurations of a DUT verified in different test benches, and to share common functional coverage knowledge
    • Without the right bin definitions, coverpoints don't do a very useful job, yet the bins of a coverpoint cannot be reused across multiple coverpoints, whether in the same covergroup or in different covergroups
    • Key configurations are defined as crosses. In some cases you would like to see different scenarios take place in all key configurations, but there is no clean way to reuse crosses across covergroups
    • Transition bins of coverpoints are only hit when the defined sequence completes on successive sampling events. There is no [!:$]-style support where a transition completing at any point is acceptable, which makes transition bins difficult to implement for relaxed sequences
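    A minimal sketch of the first limitation (all names illustrative):

        class base_cov;
          bit [1:0] mode;
          covergroup cg;
            cp_mode: coverpoint mode;
          endgroup
          function new();
            cg = new();
          endfunction
        endclass

        class derived_cov extends base_cov;
          // No language construct lets this class add a coverpoint to, or
          // modify the bins of, the inherited cg; the whole covergroup
          // would have to be re-declared here to change anything.
        endclass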

    Coverage API Layering

    At VerifSudha, we have implemented a Python layer that makes the SystemVerilog covergroup construct object oriented and addresses all of the above limitations, making the coverage writing process more productive. The power of the Python language itself also opens up a lot more configurability and programmability.

    On this reusable coverage foundation we have also built many reusable high-level coverage models that make coverage writing easier and faster. The great part is that you can build a library of high-level coverage models based on the best-known verification practices of your organization.

    These APIs allow highly programmable and configurable SystemVerilog functional coverage code generation.

    The fundamental idea behind all these APIs is very simple.

    Figure 3: SV Coverage API layering

    We have implemented these APIs as multiple layers in Python.

    The bottom-most layer consists of basic Python wrappers through which you can generate functional coverage, with support for object orientation. This provides the foundation for building easy-to-reuse and customizable high-level functional coverage models, and is sufficient for the current case study.

    The RTL elements coverage models cover standard RTL logic elements, from simple expressions, CDC and interrupts to apps for standard RTL elements such as FIFOs, arbiters, register interfaces, low power logic, clocks and sidebands.

    The generic functionality coverage models are structured around standard high-level logic structures. For example: did an interrupt trigger while it was masked, for all possible interrupts, before aggregation? Sometimes this type of coverage is not visible in code coverage. Some of these models are also based on typical bugs found in different standard logic structures.

    At the highest level are domain-specific coverage models. For example, many high-speed serial IOs solve some common problems, especially at the physical and link layers. These coverage models attempt to model those common features.

    All these coverage models are easy to extend and customize because they are built on an object-oriented paradigm. That is the only reason they are useful: if they were not easy to extend and customize, they would be almost useless.

    Implementation

    • The backbone of these APIs is a data structure in which SystemVerilog covergroups are modeled as a list of dictionaries. Each covergroup is a dictionary made up of a list of coverpoint dictionaries and a list of cross dictionaries; each coverpoint and cross dictionary contains a list of bin dictionaries
    • These data structures are combined with a simple template design pattern to generate the final coverage code
    • A layer of APIs on top of these data structures adds features and addresses the limitations of the SystemVerilog covergroup
    • A set of APIs generates reusable bin types. For example, if you want to divide an address range into N equal parts, you can do it through these APIs by just providing the start address, the end address and the number of ranges (see the sketch after this list)
    • There are also object types representing generic coverage models; by defining the required properties of these object types, covergroups can be generated
    • Python context managers ease the covergroup modeling for the user
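    As an example of the bin-generation APIs mentioned above, the emitted SystemVerilog for an address space divided into four equal ranges might look like the following (a hedged sketch; names and values are illustrative):

        covergroup cg_addr @(posedge clk);
          cp_addr: coverpoint addr {
            bins range_0 = {['h000 : 'h0FF]};  // 1 KB space split into
            bins range_1 = {['h100 : 'h1FF]};  // four equal ranges
            bins range_2 = {['h200 : 'h2FF]};
            bins range_3 = {['h300 : 'h3FF]};
          }
        endgroup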

    Any user-defined SystemVerilog code can co-exist with these APIs. This enables an easy mix of generated and manually written code where the APIs fall short.

    Figure 4: What to expect from APIs

    Structure of user interface

    All the APIs essentially work on objects. Global attributes can be thought of as applying to the entire covergroup; for example, bins specified at the global level apply to all the coverpoints of the covergroup. Not only the information required for coverage generation but also description and tracking information can be stored in the corresponding object.

    This additional information can be back-annotated to the simulator-generated coverage results, helping you easily correlate your high-level Python descriptions to the final coverage results from regressions.

    The APIs also support mind map and Excel file generation to make it easy to visualize the coverage plan for reviews.

    Figure 5: Structure of user interface for objects

    Source information

    Covergroups require two things: what to sample and when to sample.

    This is the block where you capture the sources of information for what to sample and when to sample. It is based on a very simple concept, like Verilog macros: all the coverage implementation uses these macros, which abstracts the coverage from statically binding to the source of the information.

    Later these macros can be initialized with the appropriate source information.

    Snippet 1: Specifying source information
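    The snippet itself is an image in the original; a minimal sketch of the idea, with entirely hypothetical names, would be:

        // All generated coverage code uses only these macros, so the
        // binding to a concrete source can be changed in one place.
        `define COV_RD_WR   tb_top.reg_mon.tr.rd_wr   // what to sample: operation
        `define COV_ADDR    tb_top.reg_mon.tr.addr    // what to sample: address
        `define COV_SAMPLE  tb_top.reg_mon.tr_done    // when to sample: trigger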

    This flexibility allows the information source to come from either the RTL or the test bench, and to easily switch between them based on need.

    The following code snippets showcase how the covergroup implementation for a simple read/write and address can be done using either the RTL design or test bench transactions.

    Snippet 2: Coverage generated using testbench transaction
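    The original snippet is an image; a hedged sketch of it, assuming a transaction class named reg_rd_wr_tr, would be:

        covergroup cg_reg_access with function sample(reg_rd_wr_tr tr);
          cp_rd_wr: coverpoint tr.rd_wr;  // read vs. write operation
          cp_addr:  coverpoint tr.addr;   // register address
        endgroup

        // called from the monitor on every new transaction:
        // cg_reg_access.sample(reg_rd_wr_tr_obj);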

    The coverpoints in snippet 2 sample the register read/write transaction object (reg_rd_wr_tr_obj). Sampling is called on every new transaction.

    Snippet 3: Coverage generated using DUT signals
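    Again a hedged sketch of the snippet, with illustrative RTL signal names:

        covergroup cg_reg_access_rtl @(posedge clk iff (rd_en || wr_en));
          cp_rd_wr: coverpoint {rd_en, wr_en} {
            bins rd = {2'b10};
            bins wr = {2'b01};
          }
          cp_addr: coverpoint addr;  // register address bus
        endgroup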

    The coverpoints in snippet 3 sample the RTL signals to extract the read/write operation and the address. Sampling is called on every clock, qualified by the appropriate signals.

    Summary:

    Functional coverage is one of the last lines of defense for verification quality. Being able to do a good job repeatedly, and to do it productively, has a significant impact on the quality of your verification.

    Initially it may seem like a lot of work: you need to learn a scripting language and different modeling techniques. But it pays off not only in the current project but throughout the lifetime of your product, by easing maintenance and allowing you to deliver higher quality consistently.

    Download a case study of how this is applied to USB Power delivery protocol layer coverage.

  • Classic automation in Functional verification

    There are various areas of functional verification that require automation beyond the scope of the tool chains bundled with standard simulators. For some of these areas there are no standard third-party tools, or there is resistance to adopting external tools for legacy reasons. Whatever the reason, verification engineers often roll up their sleeves and create in-house solutions for these problems.

    Verification engineers mostly rely on Perl, Python, Tcl and shell scripting for these automations. Some venture into C, C++ or Java, but they are rare. Let's not forget they have a full-time verification job to take care of as well.

    Let's look at a few popular problems that often see in-house automation. All of these in-house automations can be broadly classified into three categories.

    Data mining and aggregation

    Regression management

    Most companies have some proprietary solution for this.

    This problem involves launching regressions on compute farms, periodically monitoring the status of the runs and finally publishing the results at the end of the regression.

    Verification plan management

    All big three EDA vendors bundle some form of solution with their simulators, and they are quite popular. But sometimes, for tighter integration with the in-house regression management system, verification engineers build custom solutions in this space.

    These typically manifest as verification plans maintained either as text or as data structures that serve as input to the regression management system.

    These help in maintaining the tests, their variations, descriptions and tracking information. Using this, the total number of tests, their variations, seed allocation and the tests still to be written can all be figured out.

    Bug statistics

    Bug management can be a third-party or in-house tool. As this is a critical tool for quality management, companies often invest in building their own custom tool to suit their product lines and flows. Of course, this is out of reach of verification engineers and falls in the typical software engineering category.

    These bug management systems provide a browser-based interface to access bug information and statistics. Along with that, they expose web service interfaces such as SOAP.

    Utilities are frequently created on top of the SOAP interface to extract various bug statistics. Eventually these also get integrated with the verification plan and regression management systems to give a clear idea of current status. Integration helps, for example, in distinguishing regression failures that already have bugs open from those where debugging is still in progress.

    All these types of automation bring clarity and transparency.

    Utilities for these require a good understanding of Linux commands, Make, file I/O and regular expressions.

    Code generation

    A lot of the code written in high-level verification languages and methodologies is boilerplate, with plenty of repetition. Along with that, some code needs to be parameterized in ways that cannot be expressed with language-provided constructs alone.

    Register abstraction layer (RAL)

    RTL register block generation is common practice, and many companies had a custom flow for it well before the verification methodologies came up with RAL.

    Naturally, the same register information was leveraged for RAL code generation on the verification side. Of course, verification purists may not like the fact that design code and verification code are generated from the same source information.

    UVM RAL code, functional coverage for registers and some basic register read/write tests can all be automatically generated.
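    For a flavor of what such generated RAL code looks like, here is a minimal sketch of a generated register class (register and field names are illustrative):

        class ctrl_reg extends uvm_reg;
          `uvm_object_utils(ctrl_reg)
          rand uvm_reg_field enable;
          rand uvm_reg_field timeout;

          function new(string name = "ctrl_reg");
            super.new(name, 32, UVM_NO_COVERAGE);
          endfunction

          virtual function void build();
            enable = uvm_reg_field::type_id::create("enable");
            // parent, size, lsb, access, volatile, reset, has_reset, is_rand, individually_accessible
            enable.configure(this, 1, 0, "RW", 0, 1'b0, 1, 1, 0);
            timeout = uvm_reg_field::type_id::create("timeout");
            timeout.configure(this, 8, 8, "RW", 0, 8'h10, 1, 1, 0);
          endfunction
        endclass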

    Basic UVM environment

    This can typically be done in two ways.

    The first, simple approach is to generate code to get started with the initial environment development. All the placeholder classes for a complete UVM environment are generated; once generated, users add their actual code inside the generated code. Automation is limited to one-time generation at the beginning.

    The second approach is less complete but more practical. A partial or base environment is generated, including the regular stuff: SystemVerilog interface generation, hooking interfaces to the right agents, instantiating agents and connecting TLM ports between them. These base environments are then extended, and functionality that is not easy to automate is added.

    Assertions and functional coverage generation

    Assertions and functional coverage for regular, highly parameterized RTL designs are also automatically generated to keep up with design changes. Examples of such designs include anything with multiple ports: switches, hubs or a network on chip (NOC).
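    As a small illustration of why generation pays off here, a per-port check on an N-port design is just a loop for a generator, for example (a sketch with illustrative names):

        // One assertion instantiated per port of an N-port switch:
        // a grant must never be asserted without a matching request.
        generate
          for (genvar p = 0; p < NUM_PORTS; p++) begin : g_port_chk
            a_grant_implies_req: assert property (
              @(posedge clk) disable iff (!rst_n)
              grant[p] |-> req[p]
            );
          end
        endgenerate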

    Knowingly or unknowingly, the concepts of design patterns are used in code generation.

    High-level specification model based functional coverage generation is another approach that can help you capture the intent in executable form. It's a first baby step toward specification to functional coverage generation.

    Linting

    Yes, checklists are one of the important mechanisms to ensure everything is completed. Checklists are tied to important milestones, and some checklist items need to be repeated at multiple milestones. Some of these items are very specific to an organization and hence require automation in the form of linting to repeat them.

    Linting is used here in a broad sense. It can be code linting or linting of anything else; basically, running some checks.

    Some examples are:

    • Enforcing organization-specific coding guidelines or identifying potential gotchas in verification code (fork usage, for instance)
    • Identifying the use of invalid, unsupported or deprecated command line arguments in test command lines
    • Identifying TODOs in the code. TODO can be represented in different ways; capture all of them and check whether they are present
    • Ensuring there are no compile-time or run-time warnings

    Utilities for these are file I/O and regular expression driven.

    Beyond these, a lot of temporary tasks requiring analysis of large quantities of data, such as CDC reports, reset connectivity or coverage reports, also see partial automation to ease the task and reduce the possibility of something being missed.

    Getting a grip on basic scripting, regular expressions and data structures can help you improve your own productivity and that of your team.

    This might be a good starting point to improve your programming skills: 600 free online courses on programming