Wednesday, December 16, 2009
inFact coverage convergence tool!!!
Wednesday, October 7, 2009
Joy of doing ASIC verification!!!
Earlier, the verification job was looked down upon, since the design is what gets taped out and moves into mass production, not the testbench. But verification requires a lot more effort and skill; for example, to test a 100-line state machine we need to develop a testbench with at least 500 lines of code and draft a test plan that covers all the possible scenarios. VIP development companies earn their revenue from their testbenches, which are licensed and shipped as products.
A reasonably experienced person will know that building a reusable system-level verification environment and verifying the design without any post-silicon bugs is more difficult than adding glue logic to the design.
Do you still believe verification is a less critical task and requires less expertise than design?
Sunday, September 27, 2009
Scoreboard architecture!!!
Consider a scoreboard with two requirements:
1) The scoreboard should be able to handle packet transformations done by the DUT.
2) The scoreboard should be able to handle packet drops.
Just extend the VMM data stream scoreboard and implement a few virtual methods like transform(), quick_compare() and compare(). Use the expect_with_losses() method for requirement (2). Requirement (1) can be implemented easily with the transform() method, as in the sketch below.
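A minimal sketch of such an extension, assuming a vmm_sb_ds-style API; the exact method signatures, the my_packet class and the header manipulation are assumptions for illustration, so check them against your VMM release.

class my_scoreboard extends vmm_sb_ds;

   function new(string name = "my_scoreboard");
      super.new(name);
   endfunction

   // Requirement (1): model the DUT's packet transformation so that the
   // expected stream matches what the DUT actually emits.
   virtual function void transform(input vmm_data in_pkt,
                                   output vmm_data out_pkts[]);
      my_packet pkt; // hypothetical packet class derived from vmm_data
      $cast(pkt, in_pkt.copy());
      pkt.header ^= 32'hFFFF0000; // illustrative transformation only
      out_pkts = new[1];
      out_pkts[0] = pkt;
   endfunction

endclass

For requirement (2), the output-side checking uses expect_with_losses() instead of an in-order expect, so genuinely dropped packets are tolerated while corrupted packets still flag a miscompare.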
Sunday, September 20, 2009
How do you identify a good functional verification engineer?
The answer looks straightforward: evaluate at the end of the emulation effort, chip tape-out and chip production. If there are no functional bugs and the design works as expected, then obviously the person who verified the design is a good verification engineer.
The above statement has a rider; the same result can be produced under three circumstances:
1) Good designer, bad verification engineer & very low bug rate.
2) Bad designer, good verification engineer & very high bug rate.
3) Re-used design which is silicon proven & no bugs.
If your chip taped out successfully without any functional issues under scenario 2, then you have identified a good verification engineer.
Evaluation based on process success:
You wrote a verification environment and found a lot of design bugs; now you need to verify a design enhancement that requires changes to your earlier verification environment. The effort required to make those changes depends on the reusability of the code you wrote earlier. If you are able to add the enhancement within a short span of time with few code changes, you are on track to be identified as a good verification engineer.
Evaluation when the design success is not immediately visible:
This type of scenario is seen in VIP development, where the development is verified internally and the product is released for customer use.
The only way to identify a good verification engineer in this scenario is to compare the “customer bug rate over a fixed period, say 12 months” versus the “internal bug rate during development”. If there are no customer bugs on the feature for a long period of time, you have identified a good verification engineer.
Evaluation based on knowledge of language /methodology /protocol:
This is the method most people use to evaluate a verification engineer when hiring into a new organization. If the person has good coding experience from projects, he will have a very good command of verification languages and will be well versed in the intricacies of the language. Asking a person to write code for a given scenario tests his knowledge of the language as well as his problem-solving skills. Testing a person's knowledge of a protocol helps us know how well he has understood the protocol and used that knowledge in verifying the design.
Evaluation based on moving with technology:
This is a very important aspect in today's industry. Verification methodology and tool improvements arrive at a very fast rate from the EDA vendors, helping verification engineers reduce the time spent on verification. A good verification engineer will definitely keep himself updated on these developments, which is a good sign when identifying one.
In the past I have also come across some senseless interviewers evaluating verification hires on their knowledge of digital electronics and CMOS. Does a verification engineer use digital design or CMOS knowledge for architecting or writing his testbench?
Saturday, September 12, 2009
Verification effectiveness!!!
If you are a verification engineer who has a hard time meeting schedules and works long hours in the office (say, more than 8 hours a day) to meet deadlines, the following are some effective practices I have used to meet deadlines without compromising on work quality or stretching working hours.
1) Micro-schedule your tasks with effort estimates and get approval on the timeline from your manager.
2) Whenever the scope of the work increases or decreases, re-estimate the effort and keep your manager updated.
3) Prioritize your tasks and complete them one after the other.
4) Whenever you write a testbench, make sure it is reusable. This can minimize your work at a later point in time.
5) Always try to reuse the module-level verification environment at the system level; the main effort required is then the integration effort from module level to system level.
6) Always write a random verification environment to test your design; most bugs are easily captured by a random environment. Write a directed test case only if it is absolutely required to hit a functional coverage / code coverage hole.
7) Always move with the market; try learning and using new technology that reduces the overall verification effort. For example, adopting a proven methodology like VMM or OVM might be tough initially, but in the long run it will reduce your verification time.
8) Always file a bug when you hit a design issue. This is very important because it is the main metric on which a verification engineer gets evaluated. It also lets top-level management see the effectiveness of verification and the schedule slips caused by design issues.
9) Always keep your manager updated on the status of your tasks so that he is in a position to evaluate your bandwidth for future tasks.
10) Never compromise on testing a DUT feature due to lack of testbench support; this may lead to emulation/post-silicon bugs.
11) Always keep the code coverage analysis towards the end of the project, after your functional coverage goals are met.
12) When you are stuck with a problem, do not work on it continuously; this increases stress and you will end up spending more time circling the areas around the issue. Take a break from the issue and come back with a fresh mind.
13) Try to understand the code base relevant to the enhancement or modification; spending time understanding the overall code base has diminishing returns.
Whether a functional verification engineer gets rewarded for his verification efforts is really a question mark, and largely depends on the company you work for.
Sunday, August 9, 2009
Six reasons why you should use SystemVerilog for verification!!!
2) Free open-source standard verification methodologies like VMM & OVM are available and can be used with SystemVerilog.
3) Simulation speed will improve if you are an HVL user using VERA/Specman for verification.
4) Most VIP vendors support SystemVerilog, so building a verification environment for an SOC will not be an issue.
5) SystemVerilog interoperability layers are available, so you can re-use your VERA/NTB code from SystemVerilog; a similar arrangement is available for Specman users too.
6) SystemVerilog supports most of the constructs supported by the HVLs (VERA/NTB/Specman), so migration to SystemVerilog will not be an issue for HVL users.
Sunday, July 19, 2009
Learning curve for a verification engineer !!!
A verification engineer typically works in one of the following work environments:
1) IP verification
2) SOC verification
3) Verification IP development
4) Verification consultancy
I will try to evaluate each of these work environments.
In my view, the skills you acquire in your day-to-day activities at your workplace should match the requirements of the industry and should be portable across companies. For example, assume you are doing assembly-level verification for a processor design using that company's internal tools: the methodology and tool knowledge are limited to that particular company and the skill is not portable between companies, so the skill you acquired is not marketable. Hence the learning curve is minimal in this case.
We can find ourselves in the above scenario in the verification consultancy work environment, where we have very little control over the nature of the job and implementation flexibility is also minimal. One good thing about this work environment is that you get your hands dirty on different types of projects and rarely find yourself stuck with the same project. On a comparative scale, the verification consultancy work environment gives a verification engineer a moderate learning curve.
Verification IP development requirements and processes are different from RTL development and verification. In this kind of work environment we have a very good learning curve on new verification methodology, and we can improve our knowledge of different languages, as a VIP is developed in a single language but controlled through different languages like VERA/NTB/Verilog/SystemVerilog/C. You can gain good protocol knowledge by developing the VIP and stay updated on developments in the protocol. One drawback of the VIP development work environment is that while the initial learning curve is steep, after a few years most of your work will be just bug fixes and occasional VIP enhancements. On a comparative scale, the VIP development work environment gives a verification engineer a moderate learning curve.
The SOC verification work environment is different: as the verification is done on proven design IP, finer points of the protocol are generally ignored. One good thing in SOC verification is that we end up working on different types of protocols and interfaces. On a comparative scale, the SOC verification work environment gives a verification engineer a good learning curve, provided he works on different interfaces every project.
In the case of the IP verification work environment, your learning curve on the protocol will be good. The test plan and implementation will touch the finer points of protocol verification. On a comparative scale, the IP verification work environment gives a verification engineer a good learning curve, provided his company has migrated to SystemVerilog or HVL-based verification.
The best scenario is to have at least a few years of experience in each type of work environment, so that you gain good experience in verification methodology, verification tools, languages and different protocols.
Thursday, April 30, 2009
VMM Planner
Friday, March 20, 2009
Coverage convergence technology (CCT)
The automatically generated functional coverage code can be used as the starting point for writing the functional coverage model and can be integrated with the DV environment. CCT also allows parallel test runs, with each run targeting different coverage points without any overlap between them. Parallel test runs without overlapping random values are achieved by providing the tool with a bias file, which the tool generates from the functional coverage database.
Sunday, March 1, 2009
Streaming operators --- SystemVerilog
Following is an example of the usage of streaming operators.
byte stream[$]; // byte stream

class Packet;
   rand int header;
   rand int len;
   rand byte payload[];
   int crc;

   constraint G { len > 1; payload.size == len; }

   function void post_randomize();
      crc = payload.sum; // CRC modeled here as a simple byte sum
   endfunction
endclass

...

send: begin // Create a random packet and transmit it
   byte q[$];
   Packet p = new;
   void'(p.randomize());
   q = {<< byte{p.header, p.len, p.payload, p.crc}}; // pack fields into a byte queue
   stream = {stream, q}; // append to the stream
end

...

receive: begin // Receive a packet, unpack it, and remove it
   byte q[$];
   Packet p = new;
   {<< byte{ p.header, p.len, p.payload with [0 +: p.len], p.crc }} = stream; // unpack
   stream = stream[ $bits(p) / 8 : $ ]; // drop the consumed bytes from the stream
end
Thursday, February 19, 2009
Automated coverage closure
Sunday, February 15, 2009
Randomization of scalar variables -- SystemVerilog
SystemVerilog has an option for randomizing scalar variables outside a class, with constraints.
Example
integer a, b, c;
void'( std::randomize(a, b, c) with { a == b; b > 0; c == 10; } );
The above construct generates random values for a, b and c; the constraint is provided inline. The variables a, b and c are outside any class scope.
Saturday, February 14, 2009
Coverage grading
Coverage grading is an option used to rank test cases based on the number of functional coverage points hit by each individual test case. The grading option can be used to analyze and remove redundant test cases that target the same functionality. This helps optimize the regression run and save simulation time.
In a random test run scenario, it also helps identify the random seeds that provide maximum coverage. It is a good idea to go for functional coverage grading when the verification environment and test cases are frozen and suitable functional coverage numbers have been achieved. If your testbench is constantly in development and changing, using the same seed that gave you good coverage before may not do so again, since the randomization may have been affected by the changes in the source code.
The command to generate functional coverage grading in VCS is:
urg -dir <coverage_db>.vdb -grade
Sunday, February 8, 2009
Functional coverage
Adding functional coverage to a verification environment involves three steps:
1. Identifying the functional coverage points and cross coverage points (this directly maps to your test plan).
2. Implementing the functional coverage monitors.
3. Running simulations to collect the functional coverage, followed by functional coverage analysis.
The methodology for identifying functional coverage points and cross coverage points can be explained with a simple USB 2.0 bulk transfer example.
Twelve different axes have been identified for a simple USB 2.0 bulk transfer, giving a total of 34 basic coverage points for this functional coverage group. Now we need to derive the cross coverage points. So what is a cross coverage point? A cross coverage point is a valid combination of the identified axes; for example, one cross coverage point can be
HS (speed) --> IN (direction) --> ACK (response) --> SMALL (length) --> No (data toggle error)
--> No (CRC error) --> No (PID error) --> SMALL (no. of tokens per uframe)
--> No (token error) --> ODD (payload) --> OFF (ping token)
We need to identify all the cross coverage points for this functional coverage group; each cross coverage point is a test scenario. The way to find all the cross coverage points is to simply cross all the axes, which gives you all the possible combinations, and then eliminate the invalid combinations. Eliminating an invalid cross can be done using the ignore construct or the bad state construct in VERA/NTB.
Example: finding the cross coverage points for Axis 1 (speed) and Axis 2 (direction) alone (in the actual scenario all the axes should be taken into account). The possible cross coverage points are:
HS --> IN, HS --> OUT, FS --> IN, FS --> OUT
Four cross coverage points have been identified by crossing Axis 1 and Axis 2.
Now the identified functional coverage / cross coverage points need to be implemented in VERA/NTB/SystemVerilog as a coverage group and integrated into the verification environment, as sketched below.
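A minimal SystemVerilog sketch of such a coverage group for the speed and direction axes alone; all names are illustrative, and the commented ignore_bins shows where an invalid cross would be eliminated (the SystemVerilog equivalent of the VERA/NTB ignore construct).

class usb_bulk_cov;
   typedef enum {HS, FS} speed_e;
   typedef enum {IN, OUT} dir_e;

   speed_e speed;
   dir_e   direction;

   covergroup bulk_cg;
      cp_speed : coverpoint speed;
      cp_dir   : coverpoint direction;
      // Crossing the two axes yields HS-IN, HS-OUT, FS-IN, FS-OUT.
      // An invalid combination would be removed like this:
      //   ignore_bins bad = binsof(cp_speed) intersect {FS} &&
      //                     binsof(cp_dir)   intersect {OUT};
      speed_x_dir : cross cp_speed, cp_dir;
   endgroup

   function new();
      bulk_cg = new;
   endfunction

   // Call from the monitor on every observed bulk transfer.
   function void sample_xfer(speed_e s, dir_e d);
      speed     = s;
      direction = d;
      bulk_cg.sample();
   endfunction
endclass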
Saturday, February 7, 2009
Fine-grain process control -- SystemVerilog
SystemVerilog has a built-in process class that can be used for fine-grain process control. This class is a good addition to SystemVerilog and provides fine-grain process control that was not available in VERA/NTB. The prototype of the process class is as follows.
class process;
   enum state { FINISHED, RUNNING, WAITING, SUSPENDED, KILLED };
   static function process self();
   function state status();
   task kill();
   task await();
   task suspend();
   task resume();
endclass
Objects of type process are created internally when processes are spawned. Users cannot create objects of type process; attempts to call new shall not create a new process, and instead result in an error. The process class cannot be extended. Attempts to extend it shall result in a compilation error.
The self() function returns a handle to the current process, that is, a handle to the process making the call. The status() function returns the process status, as defined by the state enumeration:
- FINISHED Process terminated normally.
- RUNNING Process is currently running (not in a blocking statement).
- WAITING Process is waiting in a blocking statement.
- SUSPENDED Process is stopped awaiting a resume.
- KILLED Process was forcibly killed (via kill or disable).
The await() task allows one process to wait for the completion of another process. It shall be an error to call this task on the current process, i.e., a process cannot wait for its own completion.
The suspend() task allows a process to suspend either its own execution or that of another process. If the process to be suspended is not blocked waiting on some other condition, such as an event, wait expression, or a delay, then the process shall be suspended at some unspecified time in the current time step.
The resume() task restarts a previously suspended process.
The kill() task terminates the given process and all its sub-processes, that is, processes spawned using fork statements by the process being killed.
Usage example for the process class:
task do_n_way( int N );
   process job[1:N];

   // spawn N processes; each stores a handle to itself
   for ( int j = 1; j <= N; j++ )
      fork
         automatic int k = j; // capture a per-iteration copy of the loop index
         begin job[k] = process::self(); ... ; end
      join_none

   for ( int j = 1; j <= N; j++ ) // wait for all processes to start
      wait( job[j] != null );

   job[1].await(); // wait for the first process to finish

   for ( int k = 1; k <= N; k++ ) begin
      if ( job[k].status != process::FINISHED )
         job[k].kill();
   end
endtask
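The example above exercises await() and kill(); for completeness, here is a minimal sketch of suspend() and resume() (the delays and messages are illustrative only).

module suspend_demo;
   initial begin
      process p;
      fork
         begin
            p = process::self();
            forever #10 $display("[%0t] worker tick", $time);
         end
      join_none
      wait ( p != null );   // make sure the worker has started
      #25 p.suspend();      // freeze the worker; p.status() is now SUSPENDED
      #30 p.resume();       // let it run again
      #25 p.kill();         // terminate the worker
      $display("[%0t] done", $time);
   end
endmodule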