Testbench quality improvement: Refactoring the testbench


The first step in refactoring the testbench code is to identify the code that requires refactoring.

Poor code in the testbench is typically bloated and shows instability, limitations, or partial implementation; such code should be identified. Identification can be based on the areas flagged during debug, previous bug history, and feedback from verification engineers.

Let’s look at some of the testbench components that can have a major impact on quality, and how to address them.

Poor quality bus functional models (BFMs)

Bus functional models are the pillars of the testbench. The quality of the BFMs used can have a significant impact on overall quality. Poor quality BFMs can hide real bugs, raise false failures, and destabilize regressions.

If your in-house bus functional model has frequent quality issues, then rather than refactoring it, it may be a good time to consider replacing it, provided it is based on a standard specification. If it is not, some of the quality overhaul guidelines described for the testbench can be applied to the bus functional model as well.

If you aren’t using third party BFMs for the standard off-the-shelf interfaces, consider the possibility of licensing them. Verification teams are usually short on resources, and there may not be enough engineers to develop bus functional models in-house. The result can be a poor quality, incomplete implementation that is difficult to maintain and keep updated with the latest specification revisions.

Buying third party BFMs also enables compatibility checks and offers additional protection, because the specification has been independently interpreted by a different team: the design and the BFM are unlikely to make the same misreading of the standard.

Design-dependent checkers

Testbench components that are highly dependent on the DUT can do more damage than the benefit they provide. Scoreboards, performance checkers, monitors, arbitration logic checkers, finite state machine (FSM) checkers, and latency checkers can become very closely tied to the design by depending on its internal signals and details of its functionality. This causes failures whenever the design changes or the design specification changes. These failures are often false failures, and resolving them turns into an exercise of updating the verification components by staring at waveforms or RTL code, completely defeating their original purpose and yielding no additional verification benefit. They become dead weight. Be bold and get rid of them.
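To make the distinction concrete, here is a minimal sketch contrasting the two styles. The hierarchy path, the signal names (clk, req, gnt), and the MAX_LATENCY parameter are illustrative assumptions, not taken from any specific design:

    // Fragile pattern (shown only as a comment): probing DUT internals
    // means any RTL restructuring or FSM re-encoding breaks the checker.
    //   if (tb_top.dut.u_ctrl.state == ST_GRANT) ...

    // Robust pattern: the same intent expressed against external
    // interface signals only, so internal design changes cannot
    // cause false failures.
    module grant_latency_checker #(parameter int MAX_LATENCY = 16)
      (input logic clk, req, gnt);

      // req must be followed by gnt within MAX_LATENCY cycles.
      property p_grant_latency;
        @(posedge clk) $rose(req) |-> ##[1:MAX_LATENCY] gnt;
      endproperty

      a_grant_latency: assert property (p_grant_latency)
        else $error("gnt did not follow req within %0d cycles", MAX_LATENCY);
    endmodule

    // The checker attaches without touching the DUT source, e.g.:
    //   bind dut_top grant_latency_checker #(.MAX_LATENCY(16))
    //     u_lat_chk (.clk(clk), .req(req), .gnt(gnt));

Because the checker sees only the ports, a change in the internal arbitration or FSM encoding can never produce a false failure here.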

A scoreboard can also get tied too closely to design behavior, for example by relying on the DMA memory read order of the design to extract reference data. That order can change depending on the caching algorithm implemented within the design, leading to scoreboard failures when the algorithm changes, even though the scoreboard was never verifying the order of the data fetch. A scoreboard should not depend on details it is not verifying. It should always find other means to gather the necessary data without becoming dependent on DUT behavior that is not relevant to the checks it implements.
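One way to break exactly this kind of coupling, sketched below with assumed 32-bit addr/data fields, is to key the expected data by address in an associative array instead of holding it in a fetch-ordered queue, so the comparison makes no assumption about the order in which the design reads memory:

    // Order-independent scoreboard sketch. Expected data is keyed by
    // address, so the check passes regardless of the order in which the
    // DUT's caching/DMA logic fetches it.
    class dma_scoreboard;
      bit [31:0] expected[bit [31:0]]; // reference data keyed by address

      function void add_expected(bit [31:0] addr, bit [31:0] data);
        expected[addr] = data;
      endfunction

      // Called by the monitor for every observed read completion.
      function void check_read(bit [31:0] addr, bit [31:0] data);
        if (!expected.exists(addr))
          $error("unexpected read at addr 0x%0h", addr);
        else if (expected[addr] !== data)
          $error("data mismatch at 0x%0h: expected 0x%0h, got 0x%0h",
                 addr, expected[addr], data);
        else
          expected.delete(addr); // consume the match
      endfunction

      // End-of-test check: everything expected must have been observed.
      function void report();
        if (expected.size() != 0)
          $error("%0d expected reads never observed", expected.size());
      endfunction
    endclass

The end-of-test report still catches reads that never happened, which is the only thing the order-dependent comparison was legitimately guaranteeing anyway.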

Over-designed verification components

Over-designed verification components can also lead to quality problems. Many verification components are over-designed on the promise of reuse. Most of them never get reused, yet they pay the price of the additional complexity anyway. That additional complexity means additional problems, time, and resources. Reuse is not free. Invest wisely.

Many constrained random verification environment components promise reuse from unit level to system level. Often this is not feasible, as the nature of verification at these two levels of abstraction is quite different. Check whether these components are adding value, or whether other components already cover the same ground. If they really are required, consider refactoring them.

Consider trimming the over-designed verification components.

Confessions: Yes, my mistake

Yes, mistakes happen. If you talk to your developers, at least the good ones will admit it: “Yes, I have made a mistake. This component is screwed up and I want to fix it.”

It’s just that they never got out of firefighting long enough to refactor and restore those components. Talk to them and consider listing such components.

Also, sometimes the requirements are not clear at the start of development, and clarity comes only as development proceeds. It’s practically impossible to get every verification component right the first time. List the ones that developers strongly feel should be refactored because they are hard to maintain. Hard-to-maintain verification components are early signs of forthcoming quality hazards. Fix them before they become sore points.


3 Comments

  1. Design-dependent checkers was a useful topic. I went through similar issues to the ones captured here. To avoid them in future, I am coupling only the monitors to the interface; all the remaining components work at a higher abstraction level. The communication between the monitors and the other components happens through events, messages, etc.

    Let me know if you have something better.

    1. Praveen: Monitors relying on the external physical interfaces, with all the remaining components working at higher levels of abstraction, is the right approach.

      1. Also take care not to build dependency on DUT behaviour that is not being verified. Both internal signals and internal behavioural details of the DUT that are not the focus of verification should be avoided, or at least minimized, in verification components. A minimal sketch of this structure follows the thread.
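As a UVM-flavored illustration of the structure described in this thread, here is a minimal sketch. The interface, class, and signal names (bus_if, bus_txn, valid/addr/data) are illustrative assumptions, and the usual uvm_config_db plumbing for the virtual interface is omitted:

    // Only the monitor touches the physical interface; everything
    // downstream receives abstract transactions via an analysis port.
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    interface bus_if (input logic clk);
      logic        valid;
      logic [31:0] addr, data;
    endinterface

    class bus_txn extends uvm_sequence_item;
      bit [31:0] addr;
      bit [31:0] data;
      `uvm_object_utils(bus_txn)
      function new(string name = "bus_txn");
        super.new(name);
      endfunction
    endclass

    class bus_monitor extends uvm_monitor;
      `uvm_component_utils(bus_monitor)
      virtual bus_if vif;               // the only physical coupling
      uvm_analysis_port #(bus_txn) ap;  // abstract path to subscribers

      function new(string name, uvm_component parent);
        super.new(name, parent);
        ap = new("ap", this);
      endfunction

      task run_phase(uvm_phase phase);
        forever begin
          bus_txn txn = bus_txn::type_id::create("txn");
          @(posedge vif.clk iff vif.valid);
          txn.addr = vif.addr;
          txn.data = vif.data;
          ap.write(txn); // scoreboards/coverage subscribe here, never to vif
        end
      endtask
    endclass

Scoreboards and coverage collectors then subscribe to bus_txn through analysis exports, so a change in the physical signaling touches only the monitor.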
