A traceable test plan provides accountability for every test in it: yet to be written, passing or failing. A test added to the test plan is guaranteed to be executed. A traceable verification plan is a verification plan that can be trusted. It is an anchor for functional verification quality.
Right now the concept is called out in the context of the test plan, the one among the three plans making up the verification plan where traceability matters most. Ideally it should be implemented for all three plans.
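As a rough illustration of the idea (the plan items, test names and results below are made up), traceability can be as simple as a machine-checkable mapping between plan items, tests and regression results:

```python
# Minimal sketch of a traceable test plan: every plan item maps to a test,
# and every test maps back to a plan item and a regression result.
# All names below are hypothetical placeholders.

test_plan = {
    "TP-001": {"feature": "reset handling", "test": "test_soft_reset"},
    "TP-002": {"feature": "fifo overflow", "test": "test_fifo_overflow"},
    "TP-003": {"feature": "error injection", "test": None},  # yet to be written
}

regression_results = {
    "test_soft_reset": "PASS",
    "test_fifo_overflow": "FAIL",
}

for item, entry in test_plan.items():
    test = entry["test"]
    status = regression_results.get(test, "NOT RUN") if test else "NOT WRITTEN"
    print(f"{item:8} {entry['feature']:18} -> {test or '-':20} {status}")

# Tests running in regression but missing from the plan break traceability too.
planned = {e["test"] for e in test_plan.values() if e["test"]}
orphans = set(regression_results) - planned
assert not orphans, f"Tests not traceable to the plan: {orphans}"
```

The moment every plan item resolves to a test and a result, and every test resolves back to a plan item, the plan can be trusted as the single source of status.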
Why do we need a traceable verification plan?
The three phases of functional verification are: planning phase -> development phase <-> regression phase. Requirements specifications are not really written with functional verification in mind.
Reviewing verification plans can be a very challenging process. The first challenge is the very presence of a verification plan. If one is present, the next is matching it with the latest status. In many cases verification plans are created initially but are not kept up to date.
Three possible scenarios, the good, the bad and the ugly, based on the status of the verification plan are captured below.
The verification plan itself does not exist. Yes, this can happen in certain cases; the verification has just evolved. This is the ugly
Verification plans were created initially but never updated, and the current state of execution is far different from what is present in the plans. But it's agreed that the plans are not up to date. This is the good
Verification plans are present but their update status is unclear. The initial verification plans were updated at times and are thought to be completely up to date, yet they cannot be correlated 1:1 to tests and regression results. This is the bad
Verification plan – good, bad & ugly
The ugly is just plain ugly. It's raw and hence it's clean to fix.
One of the early symptoms of poor quality test benches is schedule slips in writing tests and getting them to a passing state. Poorly architected test benches make the test writer's job difficult. They do not provide adequate hooks and abstractions. This forces test writers to write a lot of additional test specific code. Coupled with this, test bring up delays caused by struggling to debug test bench code, and the test bench bugs themselves, are early symptoms.
A feature update to the test bench, especially during the middle of development, that shows a longer than average bring up delay and causes significant regression instability, with a large number of existing passing tests failing frequently, is also a clear sign of potentially bad architecture and implementation around that feature area.
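To make the hooks-and-abstractions point concrete, here is a minimal sketch in Python (the class and method names are invented, and a real test bench would be in an HVL): a base test exposing well-defined hooks lets a test writer override only what is specific to the test, instead of re-coding bring-up in every test.

```python
# Hypothetical sketch: a base test providing hooks so that test writers
# override only what is test specific, instead of duplicating bring-up code.

class BaseTest:
    def configure_dut(self):      # common bring-up lives in one place
        print("programming default registers, releasing reset")

    def stimulus(self):           # hook: each test overrides its stimulus
        raise NotImplementedError

    def run(self):
        self.configure_dut()
        self.stimulus()

class FifoOverflowTest(BaseTest):
    def stimulus(self):           # only the test-specific part is written
        print("pushing entries until the FIFO overflows")

FifoOverflowTest().run()
```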
Poor quality in the planning phase leads to poor verification plans and test bench architecture. This is a seed. The seed grows into a tree as the project progresses and yields bitter fruits later. A poor planning phase has a big impact on functional verification quality.
The bulk of the planning activities, about 60-80%, completes during the planning phase of the project and the remainder is completed during the development phase. This provides an opportunity to recover from some of the mistakes of the planning phase during the development phase.
If quality problems are found with the planning phase deliverables during the development phase, it's best to fix them at that point rather than ignoring them or putting them off to later. It's a well known fact that the cost of fixing mistakes increases exponentially as the project progresses.
Functional verification quality has a big impact on the quality of the design.
Quality and functional verification are very closely related. The whole objective of functional verification is to build a high quality design. In order to do that, all aspects of functional verification itself should be of high quality.
As a part of ZVM: Verification methodology for quality, we have already seen that the science, art and religion of verification need to be applied together to achieve high quality verification, which in turn increases the chances of achieving a high quality design. This is all good, but what if, for some reason, the process was not followed and now we have quality issues with the functional verification. What to do now?
Here is a series of articles to help restore the quality of functional verification. There can be varied levels of quality issues, and each needs a varied application of these generic principles to restore quality. These are generic guidelines to help detect the causes, along with a series of steps to address them. A firm commitment to invest time and resources is required to restore quality. Quality always comes with its own price tag.
A test failure is an indication that something is not behaving as per the developer's understanding. All programs work perfectly fine in the developer's head but fail to function in the real world prior to verification.
Debugging is the process of finding the difference between the world inside the developer's head and the real world. The developer's world is perfect. It's a world free of exceptions. The developer has full control over time and events. The developer loves it.
Contrast it with the real world, filled with exceptions and offering no control over events or time. Things fall apart. That's why debugging is so difficult for developers. Note that it's not that the developer has intentionally created this nice world. It takes time for a developer to build an understanding of the real world. Programs cannot wait for full understanding, and hence they will fail in the real world.
The first mental block for developers is that they find it difficult to believe their program is not working. Even when it is failing in the real world, they look at it in terms of how it would work in the world inside their head. Developers find it hard to imagine what could cause it to not work. The first step for a developer is to unlearn looking at it from how it can work and start looking at it from how it can fail.
Developers can live in their world during development, but they need to come back to the real world for debugging. After every debug they should keep refining the world inside their head with the new findings.
The way we need special goggles to see 3D movies clearly, verification goggles are required to see the verification world correctly.
Functional verification is about two things: stimulus generation and response checking. The generated stimulus is applied to the DUT. The DUT's response to the stimulus is checked against expectations. Every aspect of verification thus has to be viewed through these two angles.
Imagine it as a pair of goggles with one lens as stimulus generation and the other lens as response checking. Wear your verification goggles all through your verification activity and they will guide you in the right direction.
Verification goggle
Stimulus generation and response checking have to be done at multiple levels of abstraction.
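As a toy model of the two lenses (the "DUT" below is just a stand-in function, and all names are invented): every generated stimulus is applied to the DUT, and the DUT's response is checked against an independently computed expectation.

```python
import random

# Toy model of the two lenses: stimulus generation and response checking.
# The "DUT" here is just a function standing in for a real design.

def dut_add(a, b):
    return (a + b) & 0xFF          # 8-bit adder: the response wraps around

def reference_add(a, b):
    return (a + b) % 256           # independently computed expectation

for _ in range(10):
    a, b = random.randint(0, 255), random.randint(0, 255)  # stimulus lens
    actual, expected = dut_add(a, b), reference_add(a, b)
    assert actual == expected, f"mismatch for {a}+{b}: {actual} != {expected}"  # checking lens
print("all responses matched expectations")
```

The same loop repeats at higher abstractions: transactions instead of numbers, and a reference model instead of a one-line expectation.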
This is a perception that especially the new engineers coming in have. Totally agree, SV and UVM have made a big impact on the productivity of functional verification. But they are just enablers. They are a means to an end, not the end itself. At times the enablers get glorified so much that the final destination itself is forgotten.
This happens due to a lack of visibility into the bigger picture. A clear bigger picture of functional verification makes it possible to develop the right sense of proportion for each part. Functional verification is much more than verification methodology and HVL.
The following presentation is a quick tour, especially helpful for new entrants to the area of functional verification.
If verification methodology and HVL were the superpowers of functional verification, then all the projects done using them should have been successful, right? Unfortunately, many verification projects, even when using SV and UVM (and other verification methodologies as well), have failed to meet their objectives.
UVM and SV are powerful tools. But one needs to understand the bigger picture and the fundamentals well to extract the maximum performance from HVLs and verification methodologies.
Please note this is not a campaign against HVLs and verification methodologies. It's an attempt to put them in the right perspective, to provide them the place they deserve in the bigger picture of functional verification.
One simplified view of the big picture of functional verification follows.
Functional verification – a bigger picture
Functional verification is made up of four major activities: planning, development, regression and management. The first three, planning, development and regression, are phases of the verification project. These phases do not end completely, but the major focus shifts from phase to phase during the course of the project. Functional verification starts off with the planning phase. From there on, every milestone is executed as a combination of the development phase and the regression phase. The fourth activity is managing the three phases for high quality and productivity.
The planning phase is mainly about putting together the verification plan, consisting of the test plan, checks plan and coverage plan; using the verification plan to define the test bench architecture; and using both of these to build the detailed task lists and milestones to prepare for development.
The development phase is about the execution of the task lists created during the planning phase. It consists of building the test bench and bus functional models, writing tests, coding functional coverage and getting the sanity tests passing. This is where the HVL and verification methodologies play a dominant role. But note that this is just one of the three major activities.
The regression phase is the climax of verification. It is mainly about getting all the tests and test variants passing, filing bugs and validating the fixes, and achieving the desired passing rate and convergence on coverage.
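As a small illustration (the nightly numbers below are made up), the regression phase converges on exactly two rollups, the passing rate and the coverage:

```python
# Hypothetical sketch: regression status rolled up into the two numbers the
# regression phase converges on - test passing rate and coverage.

nightly_runs = [
    {"passed": 120, "failed": 60, "coverage": 0.62},
    {"passed": 150, "failed": 30, "coverage": 0.78},
    {"passed": 172, "failed": 8,  "coverage": 0.91},
]

for night, run in enumerate(nightly_runs, start=1):
    total = run["passed"] + run["failed"]
    pass_rate = run["passed"] / total * 100
    print(f"night {night}: pass rate {pass_rate:5.1f}%  coverage {run['coverage']:.0%}")
```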
Managing each of these phases has its own challenges. There are six parameters identified for ensuring quality and productivity: clarity, people, metrics, tracking, review and closure. Each of these six parameters manifests differently in each of the phases.
Verification team composition is one of the important aspects of the successful closure of verification projects. Verification methodology has focused extensively on the engineering problem alone. This has distanced verification methodology from reality. Thus at times we see verification environments using all aspects of the methodology and yet failing to deliver the desired results.
It's time the best-known practices of verification project management also got into the verification methodology. This will make verification methodology a holistic approach that helps achieve functional verification quality. One of the key aspects of verification project management is the formation of verification teams.
It's no secret that successful verification requires a team of experts and engineers to come together. For this to happen effectively, at a bare minimum the various roles, and guidelines for the responsibilities of those roles, have to be laid out.
Here is an attempt at the key roles required for the successful completion of a verification project. Note that based on the complexity of the project and on availability, each role can be played by a different individual or a single person could play multiple roles. The bottom line is that all these roles have to be played for efficient execution and successful closure of verification.
The requirements for verification could be industry standard specifications, custom specifications for innovative proprietary designs, or a mix of both.
Along with the requirements specifications, there are specifications of the design implementation. This is the user's point of view of the design usage, in terms of possible modes of operation, the various interfaces of the design, programmable registers, clock and reset requirements etc.
The "specification expert" needs to have a thorough understanding of all these specifications. He plays a key role in the planning phase of verification.
He is responsible for:
Maintaining the latest specifications
Keeping track of the multiple supported revisions of the specifications, if applicable. This includes changes to the design implementation specification as well, as it evolves during the course of implementation
If multiple specifications that need to work together are implemented, keeping track of which versions of the specifications are compatible with each other
Should be able to answer any questions regarding the specifications. If he cannot answer them himself, take them up with the right forum and get the answers
Should be able to spot ambiguities in the specifications. If the answers to some of these ambiguities cannot be given immediately, make sure to provide a reasonable direction to move ahead. Track those, take them up with the right forum and get the answers
Should have a good understanding of the whys and motivations behind the major features of the specification. This is key to answering some of the non-explicit aspects, resolving ambiguities and prioritizing
Should participate in and contribute to the standards bodies for the specification's creation. Also keeps track of ongoing useful discussions and important roadmaps
Should have a good understanding of the use cases for the design as well
Today's verification world offers an array of tools and techniques to achieve results. To quote a few of them: portable stimulus, directed verification, constrained random verification, assertion driven formal verification, accelerated simulation and emulation technologies.
A "verification expert" is one who understands the fundamentals of verification well. He is aware, experienced, and understands the verification requirements and the technologies available.
He also needs to have a good understanding of the requirements specification as well as the micro-architecture of the implementation.
He is responsible for:
Selecting the right combination of verification technologies for the project, and suggesting newer verification technologies for evaluation and adoption
Delivering the verification plan, made up of an execution ready test plan, coverage plan and checks/assertion plan. These may not be fully complete in terms of all leaf level cases, but the sections and major features need to be covered. Some of the missing cases can only be determined during the course of execution
Delivering the architectures of the various test bench environments and components. These should not be just laundry lists suggested by the verification methodology; rather, they should call out the abstract APIs, the data structures to be used, and the threads and their pseudo algorithms (see the sketch after this list)
Setting up the sign off criteria for verification closure
He is also a technical lead who provides guidance to the team on the implementation of the various verification components
Breaking up the implementation into various tasks and working with the verification manager on the task assignments. He should make recommendations and request the right engineers for the implementation
He is also responsible for ensuring the quality of the verification done, the maintainability of the code, and writing down any technical documentation
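As an example of the expected level of detail (the BFM name, API and transaction structure here are invented, not from any methodology library), an architecture deliverable could pin down a bus functional model as an abstract API plus the data structure it operates on; a Python sketch:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Invented example of what "calling out abstract APIs and data structures"
# can look like for a bus functional model, independent of any HVL.

@dataclass
class Packet:                      # the data structure the API operates on
    addr: int
    data: bytes

class BusBfm(ABC):
    @abstractmethod
    def send(self, pkt: Packet) -> None:
        """Drive one packet onto the bus."""

    @abstractmethod
    def expect(self, pkt: Packet) -> None:
        """Check that the next observed packet matches the expectation."""

class ConsoleBfm(BusBfm):          # trivial concrete implementation
    def send(self, pkt: Packet) -> None:
        print(f"send   addr=0x{pkt.addr:04x} data={pkt.data.hex()}")

    def expect(self, pkt: Packet) -> None:
        print(f"expect addr=0x{pkt.addr:04x} data={pkt.data.hex()}")

bfm = ConsoleBfm()
bfm.send(Packet(0x1000, b"\xde\xad"))
bfm.expect(Packet(0x1000, b"\xde\xad"))
```

Spelling the API out at this level lets implementation proceed in parallel and keeps test writers coding against a stable contract.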
Verification manager
The rise in complexity of projects has also led to larger verification team sizes. Larger teams have resulted in project executions challenged by contradictory demands.
The "verification manager" is the single point of contact responsible for delivering quality results. Clarity leads to results, hence he is primarily responsible for ensuring clarity amid the chaos of execution. Standard management skills are also important, but I am not going to describe those here. This is more specific to verification project management: managing the planning phase, development phase and regression phase of a verification project.
He has a good understanding of the business requirements of the project at hand. He will have to balance budgets, business requirements, engineering resource optimization, and technical and resource constraints. He is responsible for periodic evaluations to determine the right priority of feature execution
Clarity is life. Ambiguity is death. Clarity drives the best results. He needs to ensure the entire team remains clear on the results to be achieved and the goals to be met. The team has to march as a single entity towards an aligned goal
Creates and drives the creation of the task lists. Detailed task lists are as important as the verification plan itself. Detailed task lists improve schedule forecast accuracy. Use of the right task management system is highly undervalued in many projects
Although development tasks are relatively easy to plan, tasks such as regression cleanup and certain debugs can be unbounded in nature. These need to be smartly tracked and closed. Ideas on managing this will come in another post.
Creates the weekly or monthly plans aligned to the overall project milestones and allocates tasks from the task tracking system to engineers for implementation
Makes the critical calls about when to pause and resume development to handle dynamic situations such as code releases and code freezes
Sets up and drives the process definition and deployment for productive and efficient project execution
Defines the different metrics to be used during the life cycle of the project. Uses the right metrics at the right points to achieve the desired results
Periodically publishes the verification status. Clearly puts the status in the right perspective for upper management and the various engineering teams to enable them to act on it. Goes beyond just the numbers, making sense of them to drive further execution
Ensures that task allocations to engineers are grouped so that engineers can execute optimally, rather than doing random allocation to save his own time and effort. Works to opportunistically align individual engineers' goals to project goals to create win-win situations
Dynamically creates and merges various sub teams for execution efficiency
Smaller teams with a focused area of execution and a theme execute better than chaotic, spread out, random individual assignments. Chaotic random assignments can ease the leader's or manager's job of task assignment. The simplest random task assignment algorithm is: as and when a task comes in, just assign it to whichever engineer has some bandwidth at that point. Don't blame it on agile; agile is not random task assignment. Random task assignment is a morale killer. It should be strictly avoided wherever possible
However, there are going to be certain crunch times where this type of assignment may be necessary, but making it the default practice is a reflection of poor project planning
Rising issue (bug) counts require attention from the verification manager to ensure they stay within limits. Any cross team prioritization needs to be handled by him
Selects the right projects for the adoption of new verification technologies and spreads the learnings of one project to other projects
Verification engineers
"Verification engineers" are the primary workhorses who translate the plans into results. Verification teams larger than 6 members should be strictly organized into smaller sub teams.
Verification engineers will participate broadly in three major areas of activity:
Test bench development – building the reusable test bench components, bus functional models and functional coverage code
Tests – writing tests on the reusable environment, executing tests with different parameters and configurations, and debugging the failures between the test bench and the design under test (DUT)
Coverage and regression management – running different regressions, failure bucketing (see the sketch below), failure assignment and tracking to closure; generating the various coverage metrics; publishing and looping in the appropriate team members to get inputs for convergence
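As an illustration of failure bucketing (the log lines and signature rule are made up), failures can be grouped by a normalized error signature so that one underlying bug is debugged once rather than once per failing test:

```python
import re
from collections import defaultdict

# Made-up regression failure logs, bucketed by a normalized error signature
# so each unique failure cause is debugged once and assigned to one owner.

failures = [
    ("test_wr_burst", "UVM_ERROR @ 1200ns: scoreboard mismatch addr=0x40"),
    ("test_rd_burst", "UVM_ERROR @ 3400ns: scoreboard mismatch addr=0x80"),
    ("test_reset",    "UVM_FATAL @ 100ns: timeout waiting for reset done"),
]

buckets = defaultdict(list)
for test, log in failures:
    # Strip run-specific detail (times, addresses) to form the signature.
    signature = re.sub(r"@ \d+ns|0x[0-9a-f]+", "*", log)
    buckets[signature].append(test)

for signature, tests in buckets.items():
    print(f"{len(tests)} test(s): {signature}\n    {', '.join(tests)}")
```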
He is responsible for:
A verification engineer should have clarity on the fundamentals of verification concepts and test benches
Verification engineers should be intimately familiar with the simulators, version control, operating system, code collaboration tools, task tracking systems, issue (bug) tracking systems and any other internal custom tools being used
Verification engineers' exposure to the basics of software engineering, especially data structures, can go a long way in helping build good verification environments
Verification engineers should religiously follow the practice of raising issues (bugs) through the issue tracking system. They are responsible for following up on the issues raised till they are closed
Verification engineers will face the challenge of getting closure on the issues filed. They should periodically publish the open issues, prioritize them, and use either team meetings or quick one to one calls to drive them to closure
Verification engineers will own running various types of regressions periodically, either through automation or in a semi manual way
Verification engineers will keep the regression status and the various coverage metrics updated. This is key to the convergence and closure of the verification activity
In the last phase of verification, where coverage convergence needs to be achieved, verification engineers should work closely with the design engineers wherever desired. The verification engineer needs to do the right homework to ensure such interactions are meaningful and effective
Verification engineers should help recreate any failures not seen in the verification environment but showing up in emulation or silicon bring up. These are opportunities to question the verification plan and the test benches
Verification engineers are responsible for periodically updating the verification plan with the various newly discovered cases. All the major categories will have been identified by the verification lead, but each sub-section or test case itself will have holes. These holes will be discovered as a part of execution. It's extremely important to look at the bigger picture and see whether it's just a specific hole or there is a missing theme behind the holes. The verification plan should be updated appropriately
Verification engineers' familiarity with various automation and scripting languages goes a long way in improving verification productivity. In fact, every verification team needs at least one automation expert
Automation expert
Verification is not a one-size-fits-all type of activity. Each verification project has its own unique challenges. This requires one to continuously evolve and adapt to keep efficiency high. Due to this uniqueness and dynamic nature, off the shelf tools may not meet all your requirements. Every verification team is forced to do something specific to meet its own requirements.
“Automation expert” is your own in-house EDA/productivity tool vendor.
The automation engineer on board should be a part time automation engineer. He should also be part of the verification activity to understand the problems better.
Automation problems proven, after prototyping, to have great potential can be handed off to dedicated software teams to be built in a scalable and right way. The prototyping aspect is best handled by the on board automation engineer, who sits with the verification team and understands the requirements well.
He is responsible for:
Evaluating and choosing the right tools and tool versions to be used during the course of the project
Efficient utilization of the compute grid resources and tool licenses
Efficient compile, simulation and regression flows
Regression failure triaging and failure tracking infrastructure
Test plan, coverage plan and assertion plan management flows
Code generation automation for repeated boilerplate code. For instance, with RAL (register abstraction layer), many companies have their own flow: from a single source of register information, the RTL code, the RAL model for the test benches and the coverage information are all generated (see the sketch after this list)
Keeps track of the new features introduced by the various EDA tool vendors. Tries them out, irons out issues and helps the team adopt them
Of course, it goes without saying that he should be well versed in scripting, such as Perl, Python and shell scripting. More than the scripting itself, it's about being able to figure out the ideas and translate them quickly into usable prototypes
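As a much reduced sketch of that single-source register flow (the register description and the emitted code below are invented), one description can drive the generation of the boilerplate:

```python
# Reduced, invented sketch of the single-source register flow: one register
# description drives generation of boilerplate (here, a SystemVerilog-style
# register model stub emitted as text).

registers = [
    {"name": "CTRL",   "offset": 0x00, "fields": ["enable", "mode"]},
    {"name": "STATUS", "offset": 0x04, "fields": ["busy", "error"]},
]

def emit_reg_model(regs):
    lines = []
    for reg in regs:
        lines.append(f"class {reg['name'].lower()}_reg extends uvm_reg;")
        for field in reg["fields"]:
            lines.append(f"  rand uvm_reg_field {field};")
        lines.append(f"endclass // offset 0x{reg['offset']:02x}\n")
    return "\n".join(lines)

print(emit_reg_model(registers))
```

The same source would feed the RTL and coverage generators, keeping all three views consistent by construction.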
At times, it may be very easy to staff all these roles. The key to success is ensuring each of them works together towards achieving the project objectives and does not get isolated in his own universe.