Changes in the visa policies of the USA and increased demand in the UK have created a new global crisis for knowledge-based industries in general, and the semiconductor industry is no exception.
Every crisis comes packaged with an opportunity inside it.
This could be an opportunity for verification service companies to build their business further.
I wanted to take a shot at how the Indian verification services industry could rise to this occasion and help the global semiconductor community continue its journey unaffected by political churn in different nations.
Customers of India’s verification service companies
Multinational corporations already have design centers in different parts of the world, including India, and they can easily scale their Indian centers. They rely on service companies to meet increased staffing requirements on short notice, to pursue growth opportunities where future demand is not guaranteed, or to access specialized skills.
Chip startups and mid-sized fabless companies from the West that cannot set up their own design centers in India. Currently, many of these companies are innovating in software supported by hardware for the IoT and data center space. Many of them build a few unique design IPs and combine them with off-the-shelf design IPs (often more than 60% of the total) to create an SoC. Typically, they get the integration done and show a proof of concept in a real system using FPGA- or emulator-based setups. In this phase they are not worried about comprehensive verification of the SoC or of the newer design IPs; the goal is to demonstrate the promised final value in an incubated setup as soon as possible. But as the proof of concept gains traction, there is a great rush to get things out of the door to production. This is a worrying problem: until now, designers could babysit the verification of their designs part time, but they cannot continue to do that. Full-fledged verification teams are needed to verify the designs and scale them to production grade.
As we all know, the verification team is one of the largest parts of the overall ASIC design team. Verification has to be done at all three levels: unique design IP, sub-system, and SoC. At this point these companies start looking at the possibility of outsourcing verification work to different parts of the world.
Challenges for India’s verification service industry
A successful verification team, one that can deliver high-quality results, is a mix of verification engineers with different experience levels and slightly different related skill sets.
For senior talent, verification service companies have to compete not only with companies in their own space, but also with product companies that have their own design centers in India.
Design centers of deep-pocketed multinational product companies can afford to pay their employees lavishly. They take care of them and set whole new standards for easing their employees’ lives so that they can focus on work.
This is one of the big barriers for service companies trying to attract talent from the design centers of product companies in India. Not only is it difficult to attract talent, it is also a challenge to retain their own engineers. Junior engineers trained and groomed by a service company will eventually migrate to bigger multinational setups once they gain about four to eight years of valuable industry experience. All that can be said about this talent drain is that it is good karma; it will come back.
No doubt this is overall a very good development for engineers, and it keeps the whole ecosystem competitive. The net result is that many verification service companies become bottom heavy: they have a large supply of junior engineers but are starved on the technical leadership front. Another factor encouraging this bottom heaviness is the readily available large pool of trained junior engineers.
Verification training institutes
There has been a steep rise in the number of training institutes in recent years. Although Indian engineering colleges have had HDLs in their syllabus for more than a decade, this still does not make students industry ready.
Indian engineering colleges are slowly recognizing this gap and are working with external training institutes to help their students.
Many of these training institutes, led by industry veterans, understand the industry’s needs but look to academia to set up their overall process. This is another challenge. When verification training is given to fresh engineering graduates, they go through the whole of SystemVerilog and UVM, which is probably condensed from 3000+ pages of documentation. How can we realistically expect them to understand everything? Is everything equally important? How many newly trained students get the opportunity to develop a test bench from scratch? Why not tune the training further and make them productive in the role they are most likely to play?
Challenge for bottom-heavy verification service companies
Technical leadership roles, typically played by engineers with 6-12 years of experience, are starved. The gap is filled either by some smart engineers with 3-4 years of experience or by part-time attention from senior leaders with more than 15 years of experience.
The efficiency of a middle manager or technical lead goes down when he has to manage more than six directly reporting engineers who need regular hand-holding. Why? Do the math yourself: one hour per engineer every day, spent explaining work, reviewing it, or solving issues, easily takes away six hours. Add to that the lead’s own hands-on work, meetings, and interactions with upper management. What’s the net effect? Either the lead burns out or verification quality gets compromised.
So, how can we prevent verification quality from being compromised?
Anything important enough to require protection has to have clearly demarcated entry and exit points, and those points have to be guarded.
For verification, that protection can come from coverage: a well-defined coverage plan is the entry point, and meeting the coverage goals is the exit point.
Automatically generated code coverage, even with its limitations, is useful at the design IP level. But at the sub-system and SoC levels, code coverage is not of much use. The toggle coverage part of code coverage can serve as a basic metric for integration coverage, but it cannot tell you whether a toggle happened in the various modes or configurations. Key transitions and important concurrent relationships cannot be confirmed with toggle coverage alone. That is where functional coverage comes in.
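To make the contrast concrete, here is a minimal sketch, in Python rather than SystemVerilog for readability, of why toggle coverage can report “done” while functional coverage still shows holes. The class names, modes, and values are hypothetical, purely for illustration:

```python
class ToggleCoverage:
    """Records only whether a signal has been seen at 0 and at 1."""
    def __init__(self):
        self.seen = set()

    def sample(self, value, mode=None):
        self.seen.add(1 if value else 0)  # the mode is ignored entirely

    def covered(self):
        return self.seen == {0, 1}


class FunctionalCoverage:
    """Records (mode, value) bins, i.e. a cross of mode and signal value."""
    def __init__(self, modes):
        self.bins = {(m, v) for m in modes for v in (0, 1)}
        self.hit = set()

    def sample(self, value, mode):
        self.hit.add((mode, 1 if value else 0))

    def holes(self):
        return self.bins - self.hit


toggle = ToggleCoverage()
func = FunctionalCoverage(modes=["x4", "x8"])

# The stimulus only ever exercises mode "x4"
for v in (0, 1, 0):
    toggle.sample(v, mode="x4")
    func.sample(v, mode="x4")

print(toggle.covered())      # True: toggle coverage claims we are done
print(sorted(func.holes()))  # [('x8', 0), ('x8', 1)]: mode "x8" was never exercised
```

The toggle metric is satisfied after the signal flips once, but the functional model still reports the unexercised mode as a hole, which is exactly the information a verification lead needs.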
Considering the complexity of today’s ASICs, even directed tests are not simple any more. Very often a junior engineer will run a test in one configuration or mode and, when it passes, assume it is all done and wait for instructions on what to do next. Sometimes, because the overloaded verification lead has no bandwidth to supply the next item, valuable engineering hours get wasted.
When you have clearly defined, comprehensive coverage goals attached to tests, especially semi-random ones, it is very clear to verification engineers which goals need to be met before a test can be called done.
Some may even question whether we really need this:
“We have an expert and experienced in-house verification team executing our derivative chip project. They understand what needs to be verified and are highly diligent in making sure everything is taken care of. Being in-house, they are in constant touch with architects and designers about any changes taking place.” Agreed: risk is reduced when your expert, experienced verification team has been with the same design IP/SoC for a long time. They can probably get away without functional coverage planning or implementation.
The same is not true when teams are new and spread out geographically, especially when functional verification is outsourced to a separate service company. There will be confusion about the verification scope and a constant struggle to keep up with specification changes, because information flow is limited.
This problem multiplies when the service company is bottom heavy. There will be leaks in the information passed from client to service provider. The verification lead, being the primary point of contact, has to absorb that information and translate it into a form the junior team can understand. In the process, some of it will be missed or misinterpreted by the junior verification engineers. The problem multiplies further because the verification lead will not have the bandwidth to review every line of test bench code produced, so such misunderstandings can hide potential bugs.
But when the verification lead keeps an eye on the coverage plan and makes sure it stays updated, the exit point is guarded. All the teams, even those spread out remotely, have a very clear idea of where they are and how much more they need to do.
Bottom line: a well-defined coverage model ensures verification quality irrespective of who writes the tests and how they write them, because it forces them to meet the attached functional coverage goals. This provides a great safety net. Even if test bench code quality gets compromised in the heat of execution, verification quality will not be.
This is very important for chip startups. If the ASIC survives its first landing in the field, they will have one more chance to set things right. But if verification quality is compromised and the chip fails in the field, the future of the startup is at great risk.
Why is it not done this way?
Functional coverage implementation effort vs. the value derived from it: the return on investment is not apparent. Not immediately.
That is because it takes additional effort to write a comprehensive coverage plan, keep it updated, and implement it. Coverage plan writing and implementation does not, by itself, show up as directly visible verification progress in the form of bugs caught.
How to go about it?
The simplest process for verification service companies to follow is this:
- Define a comprehensive functional coverage plan, taking in both the requirements specification and the micro-architecture specification
- Accept the fact that even experts will identify only 60-80% of the coverage plan items during initial planning. The plan has to stay alive and evolve throughout the project; assign this as a primary responsibility of the verification lead
- Any specification changes should be reflected in the coverage plan
- Any reviews that result in new or updated verification items should be reflected in the coverage plan
- Any emails or corridor discussions that point to missing items should be reflected in the coverage plan
- As the specifications are absorbed and understanding evolves, the updates should be reflected in the coverage plan
- Any bugs found that lead to the detection of possible additional scenarios should be reflected in the coverage plan
- Connect the following in a single view to ensure nothing falls through the cracks during any transformation:
  - Planned coverage items
  - Management information regarding status, ownership, and priorities
  - Coverage results annotated to the planned items
  - Comments made about holes during coverage results analysis
- Attach the coverage items to tests and make sure to set targets. This lets the verification engineers implementing tests or test bench updates know clearly when they are done
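The “single view” above can be sketched as a small data model. This is a hypothetical Python illustration, not a real tool; the schema, item names, and owners are all made up. It ties planned coverage items to ownership, targets, annotated results, and hole-analysis comments, and flags plan items with no results annotated to them:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CoverageItem:
    """One row of a single-view coverage plan (hypothetical schema)."""
    name: str
    owner: str
    priority: str = "medium"
    target_pct: float = 100.0           # goal attached to the item
    result_pct: Optional[float] = None  # annotated from coverage results
    comments: List[str] = field(default_factory=list)  # hole-analysis notes

    def is_done(self) -> bool:
        return self.result_pct is not None and self.result_pct >= self.target_pct

plan = [
    CoverageItem("dma_burst_modes", owner="asha", priority="high"),
    CoverageItem("pcie_link_speeds", owner="ravi", target_pct=90.0),
]

# Annotate results exported from a (hypothetical) coverage database
results = {"dma_burst_modes": 72.5}
for item in plan:
    if item.name in results:
        item.result_pct = results[item.name]
    else:
        item.comments.append("no results annotated; item may have fallen through the cracks")

for item in plan:
    status = "done" if item.is_done() else "open"
    print(f"{item.name:18} {item.owner:6} {status:5} {item.comments}")
```

With everything in one place, an item is never “done” merely because a test passed; it is done only when its annotated result meets the target attached to it, and items with no results at all surface immediately for review.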
The bottom line in the battle for verification quality: the verification lead should make sure that the coverage fort is never lost. As long as that fort is held, there is always a chance to turn things around with additional engineering reinforcements coming in to help win the battle.
In summary, an executable coverage plan enables a top-to-bottom sign-off criterion between:
- Clients and service providers, at the big-picture level
- The verification lead and verification engineers, at the grass-roots level
For more than the last two years we have been focusing on how to improve functional verification quality, and we have created a framework that makes writing comprehensive functional coverage much easier and enables specification-to-functional-coverage generation. Our services include:
- Creating comprehensive coverage plans
- Generating functional coverage from a high-level specification of the design you are verifying, or of just the new features or enhancements being verified
- Setting up the flow to ease the tracking of deliverables from multiple verification engineers using coverage metrics
- Automatic generation of delta verification progress reports for weekly status, enabling you to invest time in doing verification rather than writing reports
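To illustrate what a delta progress report means (this is only an illustration of the idea, not a description of any framework’s internals; the item names and percentages are made up), compare two weekly coverage snapshots and report only what changed:

```python
# Hypothetical weekly snapshots of coverage results: item -> percent covered
last_week = {"dma_burst_modes": 55.0, "pcie_link_speeds": 80.0}
this_week = {"dma_burst_modes": 72.5, "pcie_link_speeds": 80.0, "axi_resp_errors": 10.0}

def delta_report(old, new):
    """Report only the coverage items that changed since the last snapshot."""
    lines = []
    for name, pct in sorted(new.items()):
        if name not in old:
            lines.append(f"NEW  {name}: {pct:.1f}%")
        elif pct != old[name]:
            lines.append(f"MOVE {name}: {old[name]:.1f}% -> {pct:.1f}%")
    return lines

for line in delta_report(last_week, this_week):
    print(line)
```

A report like this keeps the weekly status focused on movement, so unchanged items take no one’s time and newly added or progressing items stand out.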
We are sure that our services will enable even bottom-heavy teams to function more efficiently, help reduce the load on verification leads, and produce higher-quality verification results with reduced uncertainty, leading to higher satisfaction and repeat business from your clients.