Author: admin

  • Functional coverage for bug fixes

    What? We don’t even have any functional coverage and you want us to write functional coverage for bug fixes?

    Ah! Nice try, but sorry, we don’t have time for that. Maybe next project, when we have some extra cycles, we will certainly revisit it.

    We are very close to meeting our code coverage targets. So I guess we are good here. We don’t have time for wishy-washy functional coverage.

    Fair enough. We hope you are aware of the limitations of code coverage.

    Oh, yeah, we are aware of those. Anything new?

    Let’s see.

    How about bugs? Are you finding new bugs even after almost closing your code coverage goals?

    An awkward pause for a few moments, some throat clearing. Yes, we do.

    Well, we can consider the design a gold standard until we discover bugs in it. We can all assume code coverage is good enough and RTL quality is great. But the moment we discover a bug even after code coverage closure, that assumption breaks down. We can no longer hide behind it.

    There is never a single cockroach

    Let’s presume the emulation or SOC verification team reports a bug that ideally should have been caught at unit verification. A bug found in a very specific configuration and a very specific scenario. Should the unit verification team just recreate that scenario in that specific configuration and call it done?

    We can, but first I would like to quote my boss from the early days of my career. Every time we reported that we had discovered and fixed a bug, he would say, “there is never a single cockroach”. Think back: have you ever seen only one? There are always more if you look around. What does this mean in this context?

    Think about it. We can consider RTL with 100% code coverage innocent until a bug is discovered in it. Once the bug is discovered, it’s guilty and has to face trial: a thorough investigation must be performed.

    Even if you did not create a comprehensive functional coverage plan as part of initial verification planning, now is the time to rethink it. No, there is no need to rush to create one now; that’s not going to help much. The resources and time at this point are better spent verifying. We don’t want to send them on a systematic hunt for new weak areas with a comprehensive functional coverage plan.

    Bugs are already hinting at where the design is weak. Now the question is: how do we use them to reduce further risk?

    How to verify bug fixes?

    Verifying bug fixes is very similar to filling potholes.

    Pothole filling is not effective if you just pour asphalt into the hole and move on; it’s not going to hold up. The right way is to first cut out the loose areas around the pothole, clear any debris inside it, then pour the asphalt and roll it nice and flat. Do this and there is a far greater chance the repair will hold.

    Similarly, while verifying bug fixes, consider widening and deepening the verification scope a bit. Remember, there could be more bugs hiding around or behind the current one. It’s often said that roughly one in every five bug fixes introduces a new bug.

    Functional coverage for bug fixes

    Functional coverage is important now, as this bug escaped the radar of code coverage. I hope you are convinced. Even if you have avoided functional coverage so far, now is the time to bring it into play. Set a clear scope for the bug verification using functional coverage around the area where the bug was discovered. Remember, the cost of a bug increases as time passes.
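    As a sketch of what that scope could look like (the bug scenario and all names here are hypothetical), a small covergroup can be built around the bug area, covering the failing case together with its neighbours rather than just replaying the one failure:

    ```systemverilog
    // Hypothetical scenario: a data drop was reported when the response
    // FIFO goes full in one specific burst-length configuration.
    // Widen the scope: cover all burst lengths crossed with FIFO
    // occupancy, not just the single failing combination.
    class bug_fix_coverage;

      bit [3:0] burst_len;   // sampled configuration
      bit       fifo_afull;  // tapped design state: almost-full
      bit       fifo_full;   // tapped design state: full

      covergroup cg_bug_area;
        cp_burst : coverpoint burst_len {
          bins single  = {1};
          bins short_b = {[2:4]};
          bins long_b  = {[5:15]};
        }
        cp_full : coverpoint {fifo_afull, fifo_full} {
          bins normal    = {2'b00};
          bins near_full = {2'b10};
          bins full      = {2'b11};
        }
        // The failing scenario is one cross bin; its neighbours
        // get covered along with it.
        x_bug : cross cp_burst, cp_full;
      endgroup

      function new();
        cg_bug_area = new();
      endfunction

      function void sample(bit [3:0] blen, bit afull, bit full);
        burst_len  = blen;
        fifo_afull = afull;
        fifo_full  = full;
        cg_bug_area.sample();
      endfunction
    endclass
    ```

    Closing every bin of the cross gives confidence that the cockroaches hiding next to the reported one have been flushed out too.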

    Sounds good. But we are really resource and time constrained. That isn’t going away even if we agree.

    If you really want to do it right, our tool curiosity can help. It can help you generate white box functional coverage 3x–5x faster. The generated code requires no integration and compiles out of the box, so you get coverage results faster. Faster results leave more time for verification, and more time for verifying bugs results in better RTL quality.

    What do you say?

  • Black box functional coverage model – Architecture

    There are two types of functional coverage: black box and white box. This blog will focus on black box functional coverage architecture.

    To cater to the functional coverage model architecture requirements, the following architecture is proposed. It’s verification methodology independent: it can be used in verification environments built with or without a verification methodology. Please note this is one of the ways, not the only way.

    The idea is very simple. We will primarily address two questions:

    • How to organize your functional coverage?
    • How to integrate it with test bench?

    A good architecture exhibits strong cohesion and low coupling. We applied the same principles to the test bench architecture; now let’s apply them to the functional coverage model architecture as well. This translates to related functional coverage staying together and having very low coupling with the test bench. Such a model would then satisfy the functional coverage model architecture requirements.
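    A minimal sketch of these two principles (all names hypothetical): all related coverage lives in one class (strong cohesion), and the test bench touches it through a single sample() call (low coupling), with no methodology base classes required:

    ```systemverilog
    class pkt_tx;                 // transaction observed by the monitor
      bit [1:0] pkt_type;
      bit [7:0] len;
    endclass

    class pkt_coverage_model;
      pkt_tx tr;                  // transaction under sampling

      covergroup cg_pkt;
        cp_type : coverpoint tr.pkt_type;
        cp_len  : coverpoint tr.len {
          bins min_len = {0};
          bins typical = {[1:254]};
          bins max_len = {255};
        }
      endgroup

      function new();
        cg_pkt = new();
      endfunction

      // Single integration point with the test bench
      function void sample(pkt_tx t);
        tr = t;
        cg_pkt.sample();
      endfunction
    endclass
    ```

    Whether the monitor is built on a methodology or on plain SystemVerilog, its only obligation is to call cov.sample(tr) on every observed transaction.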
    (more…)

  • Functional coverage model – Architecture requirements

    The functional coverage model architecture must meet the following three primary objectives:

    • Portability
    • Maintainability
    • Debuggability

    Portability

    Functional coverage is one of the reusable parts of the test bench. It can see both horizontal and vertical reuse. Horizontal reuse could come in the form of the same design IP used with different parameterizations to meet different use cases. Vertical reuse could come in the form of reusing a subset of the unit level functional coverage at the sub-system or SOC level.
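    As a sketch of horizontal reuse (the IP and its names are hypothetical), the coverage class can take the same parameter as the design IP, so the bins follow each configuration instead of being hard-coded:

    ```systemverilog
    class dma_coverage #(int NUM_CH = 4);
      int ch_id;   // active DMA channel observed by the test bench

      covergroup cg_ch;
        cp_ch : coverpoint ch_id {
          bins channel[] = {[0:NUM_CH-1]};  // one bin per configured channel
        }
      endgroup

      function new();
        cg_ch = new();
      endfunction

      function void sample(int id);
        ch_id = id;
        cg_ch.sample();
      endfunction
    endclass

    // The same class covers a 4-channel and an 8-channel variant
    // without modification:
    // dma_coverage #(4) cov_small = new();
    // dma_coverage #(8) cov_large = new();
    ```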

    Maintainability

    Functional coverage continues to live as long as the IP lives. Being one of the reusable parts adds to its shelf life as well. This means it has to be maintained for a long time.

    Easy-to-maintain code should be easy to understand, enhance, debug and fix. Easy-to-understand code is organized with a theme. After the initial effort, once the reader recognizes the theme, they see it consistently reflected in all parts of the code. This makes the maintenance process easier.

    The functional coverage model is no exception to this rule. It should follow the same principles.

    Debuggability

    Functional coverage implementation is just the first part of the story. The climax of the functional coverage story is its closure, meeting the verification objectives successfully. This requires quite a bit of debugging and analysis. Let’s understand some of the debugging challenges of functional coverage closure.

    The first step of coverage closure involves classifying the functional coverage results generated from regressions into false positives, false negatives and coverage that will never be hit. Among these, the false positive case is the most dangerous: coverage reported as hit even though the intended scenario never actually occurred. Review and redundancy are among the most effective guards against false positive coverage.

    The bulk of these issues are caused by incorrect coverage tapping points in the test bench, incorrect sampling event selection and incorrect bin definitions.

    Functional coverage model architecture should lend itself well to these debug challenges.

  • Functional coverage types – Black box vs. White box

    Functional coverage is one of the key metrics for measuring functional verification progress and closure. It complements code coverage by addressing its limitations. Functional coverage is one of the key factors contributing to the quality of functional verification.

    Code coverage holes are typically closed first, followed by functional coverage. So remember, functional coverage is one of the last gates to catch uncovered areas. Once the design is signed off from a functional coverage point of view, it’s one step closer to tapeout from a functional verification point of view.

    Once functional coverage is signed off, bugs hiding in uncovered areas will most likely only get discovered in silicon validation. The cost of a bug increases significantly the later it is caught in the ASIC verification cycle. This emphasizes the importance of functional coverage. Please note that functional coverage does not directly catch bugs; it illuminates the various design areas to increase the probability of bugs being found. To make the best use of it, we need to understand the different types of functional coverage.

    Functional coverage is the implementation of the coverage plan created in the planning phase. The coverage plan is part of the verification plan. It refers primarily to two sources of information for its verification requirements: the requirements specification and the micro-architectural specification of the implementation. Functional coverage should address both.

    There are two types of functional coverage, black box and white box, created to address these two sources respectively.

    Let’s look at them in more detail.

    Yin-Yang of functional coverage

    Black box functional coverage

    Functional coverage addressing the requirements specification is referred to as black box functional coverage. It is agnostic to the specific implementation of the requirements and does not depend on the micro-architectural implementation.

    Typically it’s extracted with the help of various test bench components. It represents coverage in terms of the design’s final application usage.

    Let’s understand this better with a simple example. Application usage in the processor world would mean instructions. One area for functional coverage would be to cover all the instructions in all their possible programming modes: register, direct memory, indirect memory, etc. As another example, from the peripheral interconnect world, one coverage item could be to cover all the types of packets exchanged, with various legal and illegal values for all the packet fields.
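    The processor example can be sketched as a covergroup (the instruction set here is hypothetical and purely for illustration). Note that it is derived entirely from the requirements specification, with no reference to RTL internals:

    ```systemverilog
    // Hypothetical mini-ISA; in practice these enums would live in a
    // shared package generated from the specification.
    typedef enum {ADD, SUB, MUL, LOAD, STORE} opcode_e;
    typedef enum {REG, DIRECT_MEM, INDIRECT_MEM} addr_mode_e;

    class isa_coverage;
      opcode_e    opcode;
      addr_mode_e mode;

      covergroup cg_isa;
        cp_op   : coverpoint opcode;
        cp_mode : coverpoint mode;
        // Every instruction in every programming mode
        x_op_mode : cross cp_op, cp_mode;
      endgroup

      function new();
        cg_isa = new();
      endfunction

      function void sample(opcode_e op, addr_mode_e m);
        opcode = op;
        mode   = m;
        cg_isa.sample();
      endfunction
    endclass
    ```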

    One of the best ways to write black box functional coverage is to generate it from the specifications. This preserves the intent of the coverage items and allows the functional coverage to automatically evolve with the specification. Find out how you can take the first step from specification to functional coverage.

    White box functional coverage

    Functional coverage covering the micro-architectural implementation is referred to as white box functional coverage.

    White box verification and its functional coverage is one of the under-focused areas, due to the reliance on standard code coverage to take care of it. Verification engineers typically leave this space to be addressed by the design engineers. Design engineers do try to take care of it by adding assertions on assumptions and test points to see if the scenarios of interest to the implementation are covered.

    But for design engineers this is additional work among many other tasks, so it ends up not getting the focus it deserves. This can sometimes lead to very basic issues in this area being discovered very late in the game.

    White box functional coverage depends on the specific design implementation. Typically, it taps into internal design signals to extract the coverage. This tapping can be at the design’s interface level or deep inside the design.

    Let’s understand this with a simple example. One white box coverage item in the processor world could be the instruction pipeline: covering all possible instruction combinations in flight in the pipeline. Note that this will not be addressed by code coverage.

    In the peripheral interconnect world it could be the FIFOs in the data path: covering different utilization levels, including the full condition, and transitions from empty to full and back to empty. Covering errors injected at certain internal RTL states. Covering the number of clocks for which an interface experienced a stall. Covering all possible request combinations active at a critical arbiter interface. These are just a few cases. A simple LUT access coverage could have helped prevent the famous Pentium FDIV bug.
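    The FIFO case can be sketched like this (signal names hypothetical): the coverage taps the RTL occupancy counter, reduces it to a state, and covers both the utilization levels and the empty-to-full-to-empty transition. Sampling only on state changes keeps the transition bins meaningful:

    ```systemverilog
    class fifo_wb_coverage;
      typedef enum {EMPTY, PARTIAL, FULL} state_e;
      state_e st;

      covergroup cg_fifo;
        cp_state : coverpoint st {
          bins states[] = {EMPTY, PARTIAL, FULL};
          // Consecutive samples are real transitions because we
          // sample only when the state changes.
          bins fill_drain = (EMPTY => PARTIAL => FULL => PARTIAL => EMPTY);
        }
      endgroup

      function new();
        st = EMPTY;
        cg_fifo = new();
      endfunction

      // level: tapped from the RTL occupancy counter
      function void sample(int unsigned level, int unsigned depth);
        state_e nxt = (level == 0)     ? EMPTY :
                      (level == depth) ? FULL  : PARTIAL;
        if (nxt != st) begin
          st = nxt;
          cg_fifo.sample();
        end
      endfunction
    endclass
    ```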

    The white box coverage writing effort depends on the complexity and size of the design. It can easily be reduced by up to 3x by generating the coverage instead of writing it. Generation can happen as part of plug-ins in RTL or using a framework like curiosity.

    White box and black box functional coverage can have some overlapping areas: there will be a bit of white box coverage in the black box coverage and vice versa. The right balance of both provides the desired risk reduction and helps achieve high functional verification quality.


  • Code coverage – Pain points

    Code coverage is composed of line coverage, expression coverage, toggle coverage, FSM coverage and conditional coverage. Code coverage is one of the oldest and most reliable forms of coverage metrics. It’s a workhorse for verification engineers. Code coverage is one of the key metrics used in verification closure.

    There are two reasons for its popularity: first, it’s automatically generated; second, it’s comprehensive.

    Code coverage is automatically generated by standard simulator tools. It just requires enabling a few additional options when compiling the code and running the test cases. A complete and comprehensive coverage report with almost no effort from the user: that’s what makes it so attractive.

    While all this is true, there are sides of code coverage that are not so attractive. There are some pain points. In this blog we will look into three of them.

    Pain point #1: Code coverage is not useful in early stages of verification cycles

    For code coverage to become useful, the RTL code needs to reach a certain level of maturity. That’s why it ends up being looked at later in the verification cycle. What are the downsides of this?

    Code coverage effectiveness in project execution

    While the comprehensive nature of code coverage is one of its advantages, it’s also a deterrent. Due to this comprehensiveness, code coverage requires a good bit of time and effort to analyze and to identify actionable items for verification to drive it to closure.
    (more…)

  • Cost of bug – Learnings for verification engineer

    Everyone related to any form of ASIC design verification has to internalize one fact: the “cost of a bug” increases exponentially as it advances through the ASIC verification phases. There is a big cost difference between a bug found pre-silicon versus post-silicon. Let’s understand what those phases are.

    ASIC verification takes place in various phases planned as part of the verification strategy. Typically these are unit verification, system/SOC verification, FPGA prototyping and/or emulation, and post-silicon validation. Bugs can be found in any of these phases.

    Leaving aside the complex cost calculations for these phases, let’s just look at the time and effort needed to debug bugs found at different stages. This will automatically bring out the cost element and why it’s important to find as many bugs as possible, as early as possible. Let’s first identify what is required for debug.
    (more…)

  • 5 myths about testbench quality

    The testbench is the primary vehicle for verifying the functional verification requirements. The quality of the testbench affects the quality of verification. Yet it’s often ignored due to various myths.

    This blog will look into 5 such myths about testbench quality, share a different perspective on them, and show how they affect projects.
    (more…)

  • Verification strategy

    We often hear about verification strategy. The word strategy may make it sound business-like and may alienate engineers from it. That’s how many projects end up without a clear definition of verification strategy, which leads to a bad start.

    The dictionary meaning of strategy is “a plan of action designed to achieve a long-term or overall aim”. If you look at this word’s usage over time, it has only grown. Strategy is becoming increasingly important to all areas of work, not just business problems.
    (more…)

  • Testbench logging

    Testbench logging is one of the under-focused areas; it does not receive the level of attention it deserves. What testbench architects fail to realize is that poor logging has a direct impact on debug efficiency. Logging should be designed with the same intensity as the testbench architecture. Most of what is discussed below is obvious, yet still not widely adopted. Logging is one of the most painful and yet easiest to fix problems in functional verification.

    Logging refers to all the messages displayed by the testbench during test execution. It includes error messages and debug messages. Error messages are printed when a check or assertion fails. Debug messages are printed by the different verification components of the testbench during their operation.
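    A small sketch of the difference good messages make (the checker and its names are hypothetical): an error message should carry enough context to start debugging from the log alone, stating what was checked, the expected versus actual values, and where and when it happened:

    ```systemverilog
    module scoreboard_demo;
      function automatic void check_data(
          input bit [31:0] expected,
          input bit [31:0] actual,
          input int        pkt_id);
        if (actual !== expected) begin
          // Bad:  $error("data mismatch");
          // Good: self-contained, grep-friendly, with full context
          $error("[SCB] pkt_id=%0d data mismatch: expected=0x%08h actual=0x%08h @%0t",
                 pkt_id, expected, actual, $time);
        end else begin
          // Debug message confirming forward progress
          $display("[SCB][DEBUG] pkt_id=%0d data matched (0x%08h) @%0t",
                   pkt_id, actual, $time);
        end
      endfunction
    endmodule
    ```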
    (more…)

  • Testbench debug logging tips

    Testbench logging is the front end of debug. These logging guidelines are designed to make debug effective, not only for a few select geniuses on the team but for everyone. Debug affects the entire team and is carried out throughout the product’s life cycle. Logging is one of the most important contributors to making debug efficient and simple.

    Some of the following guidelines for easing debug may require additional code beyond the normal testbench or test functionality. But this additional effort pays off by saving valuable engineer time during debug throughout the product’s life cycle. Let the machines do a bit of extra work to reduce the human effort. What do you say?
    (more…)