Wednesday, December 16, 2009

inFact coverage convergence tool !!!

To start with inFact's features: inFact is a graph-based tool which generates random values and traverses various predefined paths using a graph algorithm. It uses a technology complementary to a constraint solver, i.e. a graph algorithm. The tool is fed graph rules, which it processes to generate all possible combinations of the rules. No redundant values or combinations are generated, and unique values are generated across multiple parallel runs on a server farm. We need to provide the tool with all possible combinations. The tool can work with any language, say Verilog, VHDL, Vera, NTB, Specman, SystemVerilog & C++. The inFact tool communicates with the simulator through a PLI call. Existing constrained-random code needs to be represented as graph rules, and the inFact tool will generate the random values.

Wednesday, October 7, 2009

Joy of doing ASIC verification!!!

Verification is often treated as the step-child of design. A decade back, verification was considered a less critical task than design by some companies, and freshers were often pushed into verification. It's not surprising, then, that many verification engineers wanted to become designers. But now verification is a more lucrative career option than design, and many experienced people hold on to verification without moving to design. It is generally estimated that 70% of the ASIC design cycle is spent on functional verification. The ratio of verification engineers to design engineers is approximately 3:1. Job switching is easier for verification engineers than for designers, provided they have the right skill set. Advancements happen in verification at a much faster rate than in design.

Earlier, the verification job was looked down upon, as it is the design that gets taped out and moves into mass production, not the test bench. But verification requires a lot more effort and skill. For example, to test a 100-line state machine we need to develop a testbench with at least 500 lines of code and draft a test plan which covers all possible scenarios. VIP development companies earn their revenue from their testbenches, which are licensed and shipped as products.

A reasonably experienced person will know that building a reusable system-level verification environment and verifying the design without any post-silicon bugs is more difficult than adding glue logic to the design.

Do you still believe verification is a less critical task and requires less expertise than design?

Sunday, September 27, 2009

Scoreboard architecture !!!

How would you implement the following requirements when designing a scoreboard?

1) Scoreboard should predict a DUT transformation.
2) Scoreboard should be able to handle packet drops.

Just extend the VMM data stream scoreboard (vmm_sb_ds) and implement a few virtual methods like transform(), quick_compare() and compare(). Use the expect_with_losses() method for requirement (2). Requirement (1) can be implemented easily with the transform() method.

VMM has much more robust features to offer; it is a good idea to check out the features available in the VMM scoreboard before coding your own. I believe it would save a lot of time.
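As a rough sketch of the above (assuming the VMM 1.1 vmm_sb_ds API from memory; the transform() signature and the expect_with_losses() call should be checked against your VMM release, and my_dut_model() is a hypothetical helper standing in for your reference model):

```systemverilog
// Hedged sketch built on VMM's vmm_sb_ds data stream scoreboard.
class my_scoreboard extends vmm_sb_ds;

   function new();
      super.new("My DUT scoreboard");
   endfunction

   // Requirement (1): predict the DUT transformation.
   // One input packet may produce one (or more) expected output packets.
   virtual function void transform(input  vmm_data in_pkt,
                                   output vmm_data out_pkts[]);
      out_pkts    = new [1];
      out_pkts[0] = my_dut_model(in_pkt); // assumed reference model, not part of VMM
   endfunction

endclass

// Requirement (2): when the output monitor observes a packet, check it
// with expect_with_losses() instead of expect_in_order(), so that
// expected packets legitimately dropped by the DUT are tolerated:
//
//    sb.expect_with_losses(observed_pkt);
```

The exact monitor integration (callbacks or the vmm_sb_ds integration macros) varies with the VMM version, so treat this as a starting point rather than drop-in code.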

Sunday, September 20, 2009

How do you identify a good functional verification engineer?

Evaluation based on product success:

The answer looks straightforward: at the end of an emulation effort, chip tape-out and chip production, if there are no functional bugs and the design works as expected, then obviously the person who verified the design is a good verification engineer.

The above statement has a rider: the above result can be produced under three circumstances.

1) Good designer, bad verification engineer & very low bug rate.
2) Bad designer, good verification engineer & very high bug rate.
3) Re-used design which is silicon proven & no bugs.

If your chip taped out successfully without any functional issues under scenario 2, then you have identified a good verification engineer.

Evaluation based on process success:

You wrote a verification environment and found a lot of design bugs; now you need to verify a design enhancement which requires changes in your earlier verification environment. The effort required to make those changes depends on the reusability of the code you wrote earlier. If you are able to add the enhancement within a short span of time with few code changes, you are on track to be identified as a good verification engineer.

Evaluation when the design success is not immediately visible:

This type of scenario is seen in VIP development, where the product is verified internally and then released for customer use.

The only way to identify a good verification engineer under this scenario is “customer bug rate over a fixed time, say 12 months” vs. “internal bug rate during development”. If there are no customer bugs on the feature for a long period of time, you have identified a good verification engineer.

Evaluation based on knowledge of language /methodology /protocol:

This is the method most people use to evaluate a verification engineer while hiring into a new organization. If the person has good coding experience from projects, he will have a very good command of verification languages and will be well versed in the intricacies of the language. Asking a person to write code for a given scenario tests both his knowledge of the language and his problem-solving skills. Testing a person's knowledge of a protocol helps us know how well he has understood the protocol and used that knowledge to verify the design.

Evaluation based on moving with technology:

This is a very important aspect in today's industry. Verification methodologies and tool improvements arrive at a very fast rate from the EDA vendors, helping verification engineers reduce the time spent on verification. A good verification engineer will definitely keep himself updated on new developments in functional verification, which is a good sign when identifying one.

I have also, in the past, come across some senseless interviewers evaluating verification hires on their knowledge of digital electronics and CMOS. Does a verification engineer use digital design or CMOS knowledge to architect or write his test bench?

Saturday, September 12, 2009

Verification effectiveness!!!

“Functional verification takes 70% of the chip design cycle.” Writing test plans, writing a reusable verification environment, writing assertions for the design, debugging RTL failures, attaining code coverage and functional coverage goals, and gate-level simulation and debug are some of the common activities a functional verification engineer goes through in a project life cycle before tape-out. The work of a verification engineer increases exponentially if the design under test has more bugs, which involves a lot of RTL debug effort. The metric on which a verification engineer is evaluated is “how many bugs were hit during functional verification” vs. “bugs hit during emulation/post-silicon validation”. Even a single post-silicon functional bug indicates ineffectiveness in the functional verification.

If you are a verification engineer and you feel that you have a hard time meeting your schedules and work long hours in the office (say more than 8 hours) to meet the deadlines, the following are some effective ways I have used to meet deadlines without compromising on work quality or working hours.

1) Micro-schedule your tasks with effort estimates and get approval on the timeline from your manager.

2) Whenever the scope of the work increases or decreases, re-work the effort estimates and keep your manager updated.

3) Prioritize your tasks and complete them one after the other.

4) Whenever you write a test bench, make sure it is reusable. This can minimize your work at a later point in time.

5) Always try to reuse the module-level verification environment at the system level. The maximum effort you then need to put in is the integration effort from module level to system level.

6) Always write a random verification environment to test your design; most bugs are easily captured by a random environment. Write a directed test case only if it is absolutely required to hit a functional coverage / code coverage hole.

7) Always move with the market; try learning and using new technology that will reduce the overall verification effort. For example, adopting a proven methodology like VMM or OVM might be tough initially, but in the longer run it will reduce your verification time.

8) Always file a bug when you hit a design issue. This is very important because it is the only metric on which a verification engineer gets evaluated, and it lets top-level management know the effectiveness of the verification and the schedule slips due to design issues.

9) Always keep your manager updated with the status of your tasks so that he is in a position to evaluate your bandwidth for future tasks.

10) Never compromise on testing a DUT feature due to lack of test bench support; this may lead to emulation/post-silicon bugs.

11) Always keep the code coverage analysis towards the end of the project, after your functional coverage goals are met.

12) When you are stuck with a problem, do not work on it continuously; this will increase your stress level and you will end up spending more time circling around the issue. Take a break and come back with a fresh mind.

13) Try to understand only the code base relevant to the enhancement or modification; spending time understanding the overall code base has diminishing returns.

Whether a functional verification engineer gets rewarded for his verification efforts is really a question mark, and largely depends on the company you work for.

Sunday, August 9, 2009

Six reasons why you should use SystemVerilog for verification !!!

1) SystemVerilog is an IEEE standard supported by multiple vendors; your code is portable across simulators. You are not tied to a single vendor, which is the case if you are using an HVL like Vera/NTB/Specman.

2) Free, open-source standard verification methodologies like VMM & OVM are available for use with SystemVerilog.

3) Simulation speed will improve if you are an HVL user using Vera/Specman for verification.

4) Most VIP vendors support SystemVerilog, so building a verification environment for an SOC will not be an issue.

5) SystemVerilog interoperability layers are available; you can reuse your Vera/NTB code from SystemVerilog, and a similar arrangement is available for Specman users too.

6) SystemVerilog supports most of the constructs found in the HVLs (Vera/NTB/Specman), so migration to SystemVerilog will not be an issue for HVL users.

Sunday, July 19, 2009

Learning curve for a verification engineer !!!

For a verification engineer, which of the following work environments gives the maximum learning opportunity?

1) IP verification
2) SOC verification
3) Verification IP development
4) Verification consultancy

I will try to evaluate each of these work environments.

In my view, the skills you acquire doing your day-to-day activities at your workplace should match the requirements of the industry and should be portable across companies. For example, assume you are doing assembly-level verification for a processor design using that company's internal tools; the methodology and tool knowledge is limited to that particular company and is not portable between companies, so the skill you acquired is not marketable, and hence the learning curve is minimal in this case.

We can find ourselves in the above scenario in a verification consultancy work environment, where we have very little control over the nature of the job, and implementation flexibility is also minimal. One good thing about this environment is that you get your hands dirty on different types of projects and rarely find yourself stuck with the same project. On a comparative scale, a verification consultancy environment gives a verification engineer a moderate learning curve.

Verification IP development requirements and processes are different from RTL development and verification. In this kind of environment we get a very good learning curve on new verification methodologies, and we can improve our knowledge of different languages, as a VIP is developed in a single language but controlled through different languages like Vera/NTB/Verilog/SystemVerilog/C. You can gain good protocol knowledge by developing the VIP and stay updated on developments in the protocol. One drawback of the VIP development environment is that although the initial learning curve is steep, after a few years most of your work will be just bug fixes and occasional VIP enhancements. On a comparative scale, VIP development gives a verification engineer a moderate learning curve.

The SOC verification work environment is different: as verification is done on proven design IP, the finer points of a protocol are generally ignored. One good thing about SOC verification is that we end up working on different types of protocols and interfaces. On a comparative scale, SOC verification gives a verification engineer a good learning curve, provided he works on different interfaces every project.

In the case of an IP verification work environment, your learning curve on the protocol will be good. The test plan and implementation will touch the finer points of protocol verification. On a comparative scale, IP verification gives a verification engineer a good learning curve, provided his company has migrated to SystemVerilog or HVL-based verification.

The best scenario is to have at least a few years of experience in each of these work environments, so that you gain good exposure to verification methodologies, verification tools, languages & different protocols.

Thursday, April 30, 2009

VMM Planner

So what is VMM Planner? VMM Planner is a tool which can automatically annotate functional coverage and code coverage from regression runs and present the data in XLS or XML format. It associates the test plan with the test results automatically. The planner can be used for managing the verification effort of any project. The basic requirement for using VMM Planner is a complete test plan mapped to a functional coverage model. Once we have the plan as an XML or HVP document, we can automatically annotate the test results using HVP commands. Some user-provided metrics, like bug count and test pass/fail count, can be supplied to the planner tool using the userdata command. We can use VMM Planner to report verification status to top-level management.

Friday, March 20, 2009

Coverage convergence technology (CCT)

CCT automates the process of going back and forth between coverage goals and determining which constraints to modify, then modifying the constraints to achieve the functional coverage goal. CCT also has a provision to automatically generate functional coverage groups from the Vera/NTB/SystemVerilog code, based on the constraints specified.

The automatically generated functional coverage code can be used as a starting point for writing the functional coverage model and can be integrated with the DV environment. CCT also allows parallel test runs, with each run targeting different coverage points without any overlap between them. Parallel test runs without overlapping random values are achieved by providing the tool with a bias file, which the tool generates from the functional coverage database.

Sunday, March 1, 2009

Streaming operators --- SystemVerilog

In Vera/NTB we have the vera_pack & vera_unpack methods to pack a class object into a bit stream or unpack a bit stream into a class object. SystemVerilog does not have pack or unpack methods; their replacement is the streaming operators {<<} and {>>}, which provide the same functionality. We frequently see the usage of pack and unpack methods while extending RVM/VMM classes like rvm_data/vmm_data.

Following is an example of the usage of the streaming operators:

byte stream[$]; // byte stream

class Packet;
rand int header;
rand int len;
rand byte payload[];
int crc;
constraint G { len > 1; payload.size == len; }
function void post_randomize(); crc = payload.sum; endfunction
endclass

task send(); // Create a random packet and transmit it
byte q[$];
Packet p = new;
void'(p.randomize());
q = {<< byte{p.header, p.len, p.payload, p.crc}}; // pack
stream = {stream, q}; // append to stream
endtask

task receive(); // Receive a packet, unpack it, and remove it
byte q[$];
Packet p = new;
{<< byte{ p.header, p.len, p.payload with [0 +: p.len], p.crc }} = stream; // unpack
stream = stream[ $bits(p) / 8 : $ ]; // remove packet
endtask

Thursday, February 19, 2009

Automated coverage closure

The Nusym tool does automated coverage closure. It automatically directs the random constraints to target coverage points: it automates the process of going between coverage goals and determining which constraints to modify, then modifying the constraints to achieve the coverage goal. When we hit a bug in a random scenario we generally replay the sequence to fix the bug, but with Nusym's verification tool the same bug is reproduced just at the point of the bug; the long sequence of random transactions need not be reproduced to hit it. The tool supports both Vera and SystemVerilog for automatic coverage closure.

Sunday, February 15, 2009

Randomization of scalar variables -- SystemVerilog

In Vera/NTB, to randomize a set of variables we need to have the variables in a class and add constraints / in-line constraints to randomize them. Moreover, the variables have to be of type rand or randc. Assume we have a requirement to randomize a set of variables outside a class with a set of constraints. We have the option of using random(), urandom() or urandom_range() and randomizing the variables separately, but when we use these random methods we cannot randomize one variable based on another.

SystemVerilog has an option to randomize scalar variables outside a class, with constraints:


integer a, b, c;
void'( std::randomize(a, b, c) with { a == b; b > 0; c == 10; } );

The above construct generates random values for a, b and c, with the constraints provided in-line. The variables a, b and c are outside any class scope.

Saturday, February 14, 2009

Coverage grading

What is coverage grading ?

Coverage grading is an option used to rank test cases based on the number of functional coverage points hit by each individual test case. The grading option can be used to analyze and remove redundant test cases which target the same functionality. This helps optimize the regression run and save simulation time.

In a random test scenario it also helps in identifying the random seeds which provide maximum coverage. It is a good idea to go for functional coverage grading when the verification environment and test cases are frozen and suitable functional coverage numbers are achieved. If your test bench is constantly in development and changing, using the same seed that gave you good coverage before may not do so again, since the randomization may have been affected by the changes in the source code.

The command to generate functional coverage grading in VCS is:

urg -dir *.vdb -grade -metric group

Sunday, February 8, 2009

Functional coverage

Code coverage metrics such as line coverage, FSM coverage, expression coverage, block coverage, toggle coverage and branch coverage are extracted automatically by the code coverage tool; they give us a picture of which sections of the RTL have been executed. Root-cause analysis can be done on the code coverage holes, and suitable test cases can be added to cover the RTL functionality. Code coverage has the drawback of not identifying missing features in the RTL: there is no automatic way of getting the correlation between the functionality to be tested and its implementation, and a lot of manual effort has to be put in to get this correlation.

Functional coverage is the determination of how much functionality of the design has been exercised by the verification environment. Functional coverage is user-defined coverage which maps every functionality to be tested (defined in the test plan) to a coverage point. Whenever the functionality to be tested is hit in simulation, the functional coverage point is automatically updated. A functional coverage report can be generated which gives us a summary of how many coverage points were hit. Functional coverage metrics can be used as a feedback path to measure the progress of the verification effort.

Adding functional coverage to a verification environment involves three steps.

1. Identifying the functional coverage and cross coverage points.
( directly maps to your test plan )
2. Implementing the functional coverage monitors.
3. Running simulations to collect functional coverage, and analyzing the results.

The methodology of identifying functional coverage points and cross coverage points can be explained with a simple USB 2.0 bulk transfer example.

12 different axes have been identified for a simple USB 2.0 bulk transfer. The total number of basic coverage points for this functional coverage group is 34. Now we need to get the cross coverage points. So what is a cross coverage point? Cross coverage is the set of valid combinations of the identified axes; for example, one cross coverage point can be

HS(speed) --> IN(direction) -->ACK(response) -->SMALL(length) --> No (data toggle error)

--> No (crc error) -->No ( pid error) --> SMALL (No of tokens per uframe)

--> No ( token error) --> ODD (payload) --> OFF (ping token)

We need to identify all the cross coverage points for this functional coverage group. Each cross coverage point is a test scenario. The way to find all the cross coverage points is to simply cross all the axes, which gives you all possible combinations, and then eliminate the invalid combinations. Eliminating an invalid cross can be done using the ignore construct or the bad-state construct in Vera/NTB.

Example: finding the cross coverage points for Axis 1 and Axis 2 alone (in the actual scenario all the axes should be taken into account). The possible cross coverage points are:

HS --> IN, HS --> OUT, FS --> IN, FS --> OUT

4 cross coverage points have been identified by crossing Axis 1 and Axis 2.

Now the identified functional coverage / cross coverage points need to be implemented in Vera/NTB/SV as a coverage group and integrated into the verification environment.
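To make the implementation step concrete, here is a minimal SystemVerilog sketch covering just Axis 1 (speed) and Axis 2 (direction) from the example above; the type names and enum values are illustrative assumptions, not taken from any real USB VIP:

```systemverilog
typedef enum { FS, HS }  speed_e;  // Axis 1: full speed / high speed
typedef enum { IN, OUT } dir_e;    // Axis 2: transfer direction

class bulk_xfer;
   rand speed_e speed;
   rand dir_e   direction;

   covergroup bulk_cg;
      cp_speed    : coverpoint speed;     // 2 bins (FS, HS)
      cp_dir      : coverpoint direction; // 2 bins (IN, OUT)
      // Crossing the two axes yields the 4 cross points listed above.
      x_speed_dir : cross cp_speed, cp_dir;
   endgroup

   function new();
      bulk_cg = new; // an embedded covergroup is instantiated in new()
   endfunction
endclass
```

After each randomized transfer, call obj.bulk_cg.sample() to update the coverage. An invalid combination would be pruned inside the cross body with ignore_bins, e.g. `ignore_bins bad = binsof(cp_speed) intersect {FS} && binsof(cp_dir) intersect {OUT};`, which is the SystemVerilog equivalent of the Vera/NTB ignore construct mentioned above.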

Saturday, February 7, 2009

Fine-grain process control -- SystemVerilog

In Vera/NTB the user has very limited control over the threads/processes spawned by the fork..join construct. We have constructs like wait_child() to wait for all the threads to complete and terminate() to kill all spawned threads. What was missing in Vera/NTB was fine-grain process control, which allows the user to selectively suspend, resume, wait on and kill the spawned threads.

SystemVerilog has a built-in process class which can be used for fine-grain process control. This process class is a good addition in SystemVerilog and provides the fine-grain process control that was not available in Vera/NTB. The prototype of the process class is as follows:

class process;
typedef enum { FINISHED, RUNNING, WAITING, SUSPENDED, KILLED } state;
static function process self();
function state status();
task kill();
task await();
task suspend();
task resume();
endclass

Objects of type process are created internally when processes are spawned. Users cannot create objects of type process; attempts to call new shall not create a new process, and instead result in an error. The process class cannot be extended. Attempts to extend it shall result in a compilation error.

The self() function returns a handle to the current process, that is, a handle to the process making the call. The status() function returns the process status, as defined by the state enumeration:

  • FINISHED Process terminated normally.

  • RUNNING Process is currently running (not in a blocking statement).

  • WAITING Process is waiting in a blocking statement.

  • SUSPENDED Process is stopped awaiting a resume.

  • KILLED Process was forcibly killed (via kill or disable).

The await() task allows one process to wait for the completion of another process. It shall be an error to call this task on the current process, i.e., a process cannot wait for its own completion.

The suspend() task allows a process to suspend either its own execution or that of another process. If the process to be suspended is not blocked waiting on some other condition, such as an event, wait expression, or a delay then the process shall be suspended at some unspecified time in the current time step.

The resume() task restarts a previously suspended process.

The kill() task terminates the given process and all its sub-processes, that is, processes spawned using fork statements by the process being killed.

Usage example for the process class

task do_n_way( int N );
process job[1:N];
for ( int j = 1; j <= N; j++ )
fork
automatic int k = j;
begin job[k] = process::self(); ... ; end
join_none
for ( int j = 1; j <= N; j++ ) // wait for all processes to start
wait( job[j] != null );
job[1].await(); // wait for the first process to finish
for ( int k = 1; k <= N; k++ ) begin
if ( job[k].status() != process::FINISHED )
job[k].kill(); // kill any process still running
end
endtask

Monday, January 26, 2009

SystemVerilog !!!

SystemVerilog is now being used widely across the industry for any new code development. With the introduction of standard verification methodologies like VMM from Synopsys, AVM from Mentor, URM from Cadence and OVM from Cadence and Mentor, there is a wide range of verification methodologies to choose from. Companies are now thinking in terms of verification reuse. All the verification methodologies tell us how to write reusable code, but the real bottleneck is the legacy code written during the time when no standard verification methodology like RVM, OVM & VMM existed. The solution to this issue is to re-write the code in a more reusable way.

Using SystemVerilog for newer development in an SOC is also challenging. Luckily, some simulators have an interop mode where we can access HVL code from SystemVerilog and vice versa. This solves the issue of integrating and using legacy HVL code with SystemVerilog. Interop solves the integration issues, but we still have to use two languages (one for enhancements in the legacy HVL code, and SystemVerilog for new development). The solution to this issue is to migrate the Vera/NTB code to SystemVerilog. Most of the features used in Vera/NTB are available in SystemVerilog. Syntax migration from one language to the other is the fastest way to port your code, and it takes only a fraction of your development time. Industry-standard conversion tools are available to convert your Vera/NTB code to SystemVerilog, or you can develop your own Perl script for migration. Open-source Perl scripts are also available on the internet to convert code from Vera/NTB to SystemVerilog.