Functional verification quality – Poor development phase

Poor quality in the development phase results in a poor quality test bench and tests. After poor planning, poor quality in the development phase has the second biggest impact on functional verification quality.

One of the early symptoms of a poor quality test bench is schedule slips in writing tests and getting them to a passing state. Poorly architected test benches make the test writer's job difficult: they do not provide adequate hooks and abstractions, forcing test writers to write a lot of additional test-specific code. Coupled with this, test bring-up delays caused by struggling to debug test bench code, and the test bench bugs found along the way, are early symptoms.
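To illustrate the kind of hook a well-architected test bench can provide, here is a minimal SystemVerilog sketch. The interface, signal names, and the reg_bfm class are hypothetical stand-ins for whatever register-access abstraction a real environment would offer; the point is only that tests call an API instead of wiggling pins.

```systemverilog
// Hypothetical sketch: a small bus functional model (BFM) that exposes
// read/write tasks so test writers drive transactions through an API
// instead of re-implementing pin-level sequencing in every test.
interface reg_if (input logic clk);
  logic [31:0] addr, wdata, rdata;
  logic        wr_en, rd_en;
endinterface

class reg_bfm;
  virtual reg_if vif;   // handle supplied by the environment

  function new(virtual reg_if vif);
    this.vif = vif;
  endfunction

  // Abstraction hook: tests call write()/read() rather than touching pins.
  task write(input bit [31:0] addr, input bit [31:0] data);
    @(posedge vif.clk);
    vif.addr  <= addr;
    vif.wdata <= data;
    vif.wr_en <= 1'b1;
    @(posedge vif.clk);
    vif.wr_en <= 1'b0;
  endtask

  task read(input bit [31:0] addr, output bit [31:0] data);
    @(posedge vif.clk);
    vif.addr  <= addr;
    vif.rd_en <= 1'b1;
    @(posedge vif.clk);
    vif.rd_en <= 1'b0;
    data = vif.rdata;   // simplified timing for the sketch
  endtask
endclass
```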

A feature update to the test bench, especially in the middle of development, that shows longer than average bring-up delays and causes significant regression instability, with a large number of previously passing tests failing frequently, is also a clear sign of bad architecture and implementation around that feature area.

Poor test bench architecture is more damaging than poor quality in its implementation. However, if poor quality implementation continues for a long time across the board, even a good test bench architecture will eventually succumb.

Poor quality test bench and tests

The test bench is the vehicle for functional verification. Tests are the drivers of that vehicle. Poor quality in the test bench and tests results in meaningless detours on the journey toward successful closure of functional verification.

Poor quality development results in:

  • Difficulty in meeting verification objectives (stimulus and checks)
    • Longer bring-up time for every new feature, mostly spent on test bench issues
    • More test bench issues found than the design bugs the test bench is intended to catch
    • Lack of test bench controllability preventing scenario creation
    • Checks firing falsely and important checks being missed, leading to loss of trust
    • Internal randomizations not controllable by the tests, leading to difficulty and delay in achieving desired scenarios (see the randomization sketch after this list)
    • Worst of all, the test bench getting in the way of achieving verification objectives and requiring workarounds to keep it happy
    • Internal randomization in components uncontrollable from tests, leading to problems reproducing scenarios
  • Usability problems
    • Longer test writing time due to the lack of reusable APIs at the right abstraction level
    • Bulky test benches taking performance hits in both CPU time and memory consumption, leading to longer simulation runs and crashes
    • Lack of the right abstractions for hooks, leading to additional effort for scoreboard and functional coverage integration
    • Bus functional models (BFMs) lacking features, making the test bench code more complex and bulky
    • Requiring a lot of configuration, making the test bench difficult to use and delaying bring-up
    • Lack of parameterization making multi-instance usage a challenge
  • Debuggability problems
    • Longer debug cycles, as verification engineers are clueless about the issues. This also wastes designers' bandwidth in debug while isolating issues between the test bench and the design
  • Maintainability problems
    • Unintended cross-functional dependencies due to poor functional division, where changes in one part of the code cause instability in many other areas of the test bench
    • Too many flavors of the test bench, leading to chaos and maintenance hazards
    • Poorly thought-out reuse leading to bloated code that is not usable. Code for usability first, reusability later
    • Ambiguous parts of the implementation not quarantined or localized
    • Problems handled with a battalion of if-else or case statements instead of a unifying theme (see the polymorphism sketch after this list)
    • Irresponsible thread usage
    • Standard coding practice violations
      • No comments, not even headers describing the class or file
      • Class, property, and method naming conventions not followed
      • Using advanced language and verification methodology features for their own sake
      • Incorrect usage of high-level verification language (HVL) and verification methodology features
      • Hard-coded numbers and hierarchical references (see the configuration sketch after this list)
      • Lengthy files with more than 1,000 lines of code
    • Copy-and-paste of code that could have been built as a reusable class or subroutine
    • Forced usage of legacy reusable code libraries that have lost their relevance and usefulness
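
To make the controllability point concrete, here is a minimal SystemVerilog sketch with hypothetical names. A randomization decision exposed as a rand field with a soft default constraint stays controllable and reproducible from the test, unlike a $urandom call buried inside the component.

```systemverilog
// Hypothetical sketch: an inter-packet gap exposed as a rand knob with a
// soft default, instead of an internal $urandom_range() call that tests
// can neither constrain nor reliably reproduce.
class delay_gen;
  rand int unsigned gap;                         // visible to the test
  constraint gap_default_c { soft gap inside {[1:10]}; }
endclass

module tb;
  initial begin
    delay_gen dg = new();
    // A directed test overrides the default without editing the component;
    // the soft constraint yields to the test's hard inline constraint.
    if (!dg.randomize() with { gap == 0; })
      $error("randomization failed");
    $display("back-to-back traffic, gap=%0d", dg.gap);
  end
endmodule
```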
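Similarly, for the if-else battalion point, here is a small hypothetical sketch of one common unifying theme: a virtual base class with polymorphic checkers, so adding a new transaction kind does not grow a case statement at every call site.

```systemverilog
// Hypothetical sketch: variation handled through a virtual base class and
// polymorphism rather than if-else/case ladders scattered across the code.
virtual class txn_checker;
  pure virtual function void check(int unsigned observed);
endclass

class read_checker extends txn_checker;
  virtual function void check(int unsigned observed);
    if (observed == 0) $error("read returned no data");
  endfunction
endclass

class write_checker extends txn_checker;
  virtual function void check(int unsigned observed);
    if (observed != 1) $error("write was not acknowledged");
  endfunction
endclass

module tb;
  initial begin
    read_checker rc = new();
    txn_checker  c;
    c = rc;        // the kind is chosen once, at construction
    c.check(4);    // virtual dispatch replaces a case statement here
  end
endmodule
```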
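And for the hard-coded hierarchical referencing point, a hypothetical UVM-flavored configuration sketch: the driver obtains its virtual interface through uvm_config_db instead of reaching into the test bench hierarchy with an absolute path, so the code survives hierarchy changes and multi-instance use. All names here are invented for illustration.

```systemverilog
// Hypothetical sketch: the virtual interface is supplied via uvm_config_db,
// avoiding an absolute hierarchical reference hard-coded in the driver.
import uvm_pkg::*;
`include "uvm_macros.svh"

interface bus_if (input logic clk);
  logic [7:0] data;
  logic       valid;
endinterface

class bus_driver extends uvm_component;
  `uvm_component_utils(bus_driver)
  virtual bus_if vif;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Portable: the environment decides which interface instance this
    // driver gets, so the same class works for any instance count.
    if (!uvm_config_db#(virtual bus_if)::get(this, "", "vif", vif))
      `uvm_fatal("NOVIF", "no bus_if handle was set for this driver")
    // Anti-pattern it replaces (hypothetical path):
    //   vif = tb_top.dut_wrap.u_bus_0.bus_if_inst;
  endfunction
endclass
```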

Nature of Bugs

Types of bugs due to poor quality in the development phase are:

  • Operational bugs – basic features may work fine, but as the next level is pursued, verification ends up catching more and more test bench issues or hitting test bench limitations
  • Regression instability due to test bench issues caused by poor test bench architecture and implementation. Fixes lead to significant failures in other passing tests, and developers become scared of changing the test bench code
  • Bugs in the bus functional models, either in the stimulus generated or in checks firing for false reasons
  • Illegal stimulus generation due to improper constraints (see the sketch below)
  • Various false timeouts due to lack of synchronization between verification components
  • Incorrect DUT programming sequences
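
As a concrete example of the improper-constraint bug class, here is a minimal hypothetical sketch: without the length constraint, randomization is free to pick zero-length or oversized packets and drive illegal stimulus into the DUT. The names and the 1 to 1500 byte legal range are assumptions for illustration only.

```systemverilog
// Hypothetical sketch: protocol legality encoded in constraints so that
// randomization cannot generate illegal stimulus (e.g. a zero-length or
// oversized packet) in the first place.
class pkt_txn;
  rand bit [15:0] length;
  rand bit [7:0]  payload[];

  // Assumed legal range of 1 to 1500 bytes; omitting this constraint is
  // exactly the kind of hole that produces illegal stimulus.
  constraint legal_length_c { length inside {[1:1500]}; }
  constraint payload_size_c { payload.size() == length; }
endclass

module tb;
  initial begin
    pkt_txn t = new();
    if (!t.randomize()) $error("randomization failed");
    $display("length=%0d payload=%0d bytes", t.length, t.payload.size());
  end
endmodule
```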
