Saturday, December 3, 2011

Using vmm_opts to configure the test bench !!!

Traditionally, plusargs are used in verification environments to configure attributes of the environment from the command line: the timeout value of the test environment, the number of packets to be generated by the generator, the error limit at which the test exits simulation, and so on. A more compact and robust configuration mechanism in VMM is vmm_opts. The vmm_opts class has methods like get_object_bit() and get_object_int() to receive runtime values in the environment, and different values can be set for different instances from the command line. For example, you can use the same configuration attribute to configure the number of packets for both TX and RX transactions, and hierarchically set different values for TX and RX from the command line. Configuring global values like the test bench timeout does not require hierarchical access. The values can also be overridden from the test case using the set_int() and set_bit() methods in place of a command-line override.
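A minimal sketch of the usage, assuming typical VMM 1.2 signatures and command-line syntax (the argument order and the hierarchical pattern syntax can differ between VMM releases, so verify against your release documentation):

// Inside a transactor/environment class derived from vmm_object:
bit is_set;
int unsigned num_pkts;
// Pick up the per-instance value, falling back to a default of 10
num_pkts = vmm_opts::get_object_int(is_set, this, "num_pkts", 10, "Number of packets to generate");

// Command line, global value for every instance reading "num_pkts":
//   +vmm_opts+num_pkts=25
// Command line, hierarchical override for just the TX generator (pattern syntax is an assumption):
//   +vmm_opts+num_pkts=50@%*:tx_gen

// Override from a test case instead of the command line
vmm_opts::set_int("%*:tx_gen:num_pkts", 50);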

Configurability of the test bench from the command line is a must-have feature, especially to abstract the test bench complexity from its consumers: in this case the RTL designers, the test case development team, or the release management team.

Saturday, November 12, 2011

Register Automation using VMM/UVM RAL

VMM RAL has been around for a long time and is a very powerful feature for verifying your hardware registers. It provides the user with features like name-based register access, register mirroring, backdoor access, functional coverage, and predefined tests. The same set of VMM RAL features is available in UVM as well. One feature that impressed me was the automatic mirror update, which updates the RAL mirror when a register changes through the backdoor. This feature is handy when you do your register reads and writes through an embedded processor instead of the regular frontdoor access and you want to synchronize your test bench based on the value of the register mirrors.
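As a rough illustration of the name-based access and mirroring, here is a sketch against a hypothetical RAL model (register names are made up, and the argument lists follow typical vmm_ral usage, so double-check them against your RAL reference):

vmm_rw::status_e status;
bit [63:0]       rdata;

// Frontdoor write/read by register name through the RAL model
ral_model.CTRL.write(status, 64'h1);
ral_model.CTRL.read(status, rdata);

// Backdoor access: no bus cycles consumed, mirror still kept in sync
ral_model.STATUS.peek(status, rdata);

// Check the DUT register content against the mirrored value
ral_model.STATUS.mirror(status, vmm_ral::VERIFY);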

Accellera has come up with standards like IP-XACT and SystemRDL to define your registers; this gives users vendor independence, and most generator tools are converging on these standards. The ideal way to automate your registers is to define them using these standards and use generator tools from vendors to auto-generate the RTL, firmware code, documentation, and SystemVerilog RAL classes.

A quick survey of the tools supporting these standards indicated that there are many players in this space. One aspect that did not sink in for me was why the entire register solution can't be packaged with the simulator itself, so that the user does not have to make an additional investment in another tool. Maybe it is on the product roadmap for the simulators.

With VMM RAL being adopted in UVM, the user base for RAL is definitely going to increase.

Saturday, November 5, 2011

Downside of being the “guinea pig” for adopting the latest buzz words in verification !!!

Adopting the latest verification methods and trends has been my hobby since 2002, and I have continued it to date. Adoption ranges from a new feature in a tool or methodology, or a new construct in a language, to a new method of verification that could potentially improve productivity, find more bugs, and make the test bench easier to maintain. The upside of this hobby is that you keep yourself updated with the new trends in the market and become an early bird in adopting them. The downside never crossed my mind until one of my friends offered an insight: adopting every marketing buzzword can become fatal.

His words of wisdom made me rewind my thoughts on all the trends I have tried to adopt over the years. A few common patterns I observed in each of these adoptions are as follows:

1) New code development is only 10% of the effort; the rest is making enhancements to the current environment. Tool vendors do not take this equation into account and end up with gaps in their feature offerings.

2) Being the first one to adopt a new verification trend, be prepared to face tool bugs, sometimes even showstopper bugs.

3) Not everyone keeps themselves updated with the latest trends; some are more than happy to implement things with outdated technology. These people find it difficult to adapt to changes, and we have to carry them along with us.


Some features or tools I adopted early became highly popular over the years, and some features I adopted early don't even exist anymore.

Saturday, October 8, 2011

Gate level simulation !!!

One of the time-consuming tasks in the functional verification life cycle is gate level simulation. Gate level simulation can be broadly broken down into:

1) Gate level simulation after synthesis to check equivalence with the RTL.

2) Gate level simulation with SDF on the post routed netlist.

A common question asked by many folks is why step 2 is required. The netlist before P&R is compared with the netlist after P&R using formal tools for logic equivalence. Timing after P&R is checked with STA for all possible corners and configurations. So an SDF simulation on the post-routed netlist looks redundant.

Now assume a designer has mistakenly placed a timing exception, like a false path or a multicycle path; this condition goes undetected in STA. Dynamic functional gate level simulation with SDF on the post-routed netlist is a counter-check for STA and catches cases where the designer has made a mistake in placing a timing exception.
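For reference, back-annotating the post-route SDF onto the gate level netlist is typically done with the standard $sdf_annotate system task; the file name and instance path below are placeholders:

// Annotate post-route timing (maximum delays) onto the DUT instance
initial begin
   $sdf_annotate("chip_postroute.sdf", tb_top.dut, , "sdf_annotate.log", "MAXIMUM");
end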

Wednesday, October 5, 2011

Code coverage !!!

Coverage closure, be it functional coverage or code coverage, is one of the time-consuming tasks in the functional verification life cycle. For functional coverage, automatic coverage closure tools are available in the market. When it comes to code coverage, measuring and closing coverage is a tedious, manual process. A code coverage number (line, condition, FSM, toggle) of 95% is unacceptable in many places; the remaining 5% of coverage holes needs to be completely analyzed and closed, and more perfection requires more time and resources. Generally, code coverage analysis is done by verification engineers who have relatively little knowledge of the design, which impacts the overall efficiency of the task. Why is code coverage important, especially when we have functional coverage? Missing test cases that slipped through the functional coverage plan are easily caught by code coverage.

It is a good idea not to include the register package test in code coverage, as this test would give a false pass on register coverage. We generally expect the registers to be covered by functional tests; by including the register package test we would end up covering the registers by just doing a read and a write to all the bits, which is not the intent.

Wednesday, September 14, 2011

What makes a verification team great !!!

Move with the industry trend

Verification keeps moving at a very fast pace; great verification teams move with the advancements quickly and implement them in their projects to reap the benefits. Ordinary teams stay with the old ways of doing things.

Knowledge sharing

The most important aspect of becoming a great verification team is sharing the knowledge each member gains with the entire team, improving the overall efficiency of the team.

Take the entire team with you

The most important aspect of managing and staging advanced verification techniques in a project is the ability of the members to take the entire team with them. Ordinary teams follow the decision of the most powerful person in the team; great teams take collaborative decisions.

Risk taking

To implement or adopt new verification trends, the team should be able to take calculated risks. If that risk-taking ability is not there, it points to an ordinary team.

Expect the unexpected

One of the attributes of a great verification team is to expect unexpected results while adopting a new trend and to successfully overcome them. Ordinary verification teams back out when unexpected behavior shows up.

Saturday, August 20, 2011

VMM Channel methods grab, ungrab, lock, unlock !!!

Assume two different threads feeding a single channel with sequences of transactions. If the transactions are just put into the channel without using the channel's grab(), the sequences from the two threads will get interleaved, producing unexpected results. The grab() method is used to request exclusive access to the channel; once the grab is active, no other thread can put an object into the channel. Once the channel has been loaded with the sequence of transaction objects, ungrab() should be used to release the channel for other threads. The is_grabbed() function can be used to find out whether the channel is grabbed.

The lock() method can be used to lock the channel's producer (put) or consumer (get) side, and unlock() removes the lock. The status of the lock can be obtained using the is_locked() method. These methods are useful for controlling a channel from a different location, say a different block.
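A rough sketch of the intent (the grab()/ungrab() grabber argument and how grabbed puts are identified differ across VMM releases, so treat the calls below as illustrative only):

// Thread A: reserve the channel so its sequence stays contiguous
chan.grab(grabber);            // 'grabber' is a vmm_data handle identifying this thread
chan.put(tr0);
chan.put(tr1);
chan.ungrab(grabber);

// From another block: freeze the consumer side of the channel
chan.lock(vmm_channel::SINK);  // get() on the channel now blocks
// ... reconfigure the downstream transactor, then release
chan.unlock(vmm_channel::SINK);
if (chan.is_locked(vmm_channel::SINK))
   `vmm_note(log, "channel consumer side is still locked");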

Monday, July 11, 2011

Using a verification methodology to solve a verification problem !!!

A few problems I always come across with verification methodology users are limited knowledge of what the methodology can offer and a lack of effort on their part to keep themselves updated on the new features of the methodology. When presented with a verification problem to be solved using a verification methodology, they come up with a solution based on their limited knowledge of it, and the solution is not exactly the best one the methodology can offer. The user manual or reference guide of a methodology can only educate you on the features it offers; ultimately, deciding which features to use under which circumstances is left entirely to the users of the methodology. This is where I find many people making mistakes, ranging from simple non-compliance to major architectural mistakes. Once such mistakes are made, the code eventually becomes excess baggage that needs to be carried and maintained for the rest of the project's life span.

Saturday, June 18, 2011

vmm_broadcast and vmm_scheduler !!!

Channels are point-to-point data transfer mechanisms. If multiple consumers have to extract the same transaction descriptors from a channel, then vmm_broadcast should be used. vmm_broadcast broadcasts transactions from one source channel to multiple output channels; a copy of each transaction from the source channel is forwarded to every output channel. Assume you have multiple interfaces with different signal-level protocols transmitting the same transaction at the same time; vmm_broadcast can be used in this scenario. A unified generator is connected to the source channel of the vmm_broadcast, and the output channels of the broadcaster are connected to the drivers of the interfaces.
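A possible hookup, sketched assuming the usual vmm_broadcast constructor and new_output() method (the channel type and instance names are hypothetical):

// atm_trans_channel assumed to be created with the `vmm_channel(atm_trans) macro
atm_trans_channel gen_chan  = new("gen_chan",  "src");
atm_trans_channel drv0_chan = new("drv0_chan", "out0");
atm_trans_channel drv1_chan = new("drv1_chan", "out1");

vmm_broadcast bcast = new("bcast", "0", gen_chan);
bcast.new_output(drv0_chan);   // each driver channel receives a copy of every transaction
bcast.new_output(drv1_chan);
bcast.start_xactor();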


vmm_scheduler directs transactions from different input channels to a single output channel based on a scheduling algorithm. The default scheduling algorithm is round robin; by adjusting the constraints you can also get random scheduling. If you need custom scheduling, you can implement it in the vmm_scheduler_election class. The scheduler is useful when you need to schedule transactions based on some timing information.
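The corresponding scheduler hookup might look like this (again assuming the usual constructor and new_source() method; the randomized_sched handle for installing a custom election is taken from the VMM reference, so verify it for your release):

// Two block-level generators feeding one driver channel through a scheduler
vmm_scheduler sched = new("sched", "0", drv_chan);
int src0 = sched.new_source(gen0_chan);
int src1 = sched.new_source(gen1_chan);
sched.start_xactor();

// Custom arbitration: extend vmm_scheduler_election and install it
// sched.randomized_sched = my_election;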


Sunday, May 8, 2011

Verification environment architecture !!!


Architecting a verification environment with dynamically changing requirements is a real challenge. When the entire requirement is known upfront, you can fit your problem statement perfectly into the methodology. But if you get requirements in bits and pieces, you make architectural decisions based on the known requirements, and when more requirements trickle in you find that the decisions you made earlier were not right. The options you then have are to rewrite the code and correct your mistake, or to patch up the code and deviate from the methodology recommendations, resulting in less reusability. This is a typical problem when the overall picture of the problem statement is not understood by the architect. Another problem comes from the attitude of "let us get the basic stuff working first, then incrementally fit the requirements into the basic architecture." A typical project flow when requirements are not known upfront looks like this (just for humor !!!).



Saturday, April 2, 2011

Channel record & playback in VMM !!!

Channel record and playback is a useful feature that can be used to reproduce an issue hit in the top-level environment in a different, block-level environment. As you know, randomization changes when the SystemVerilog source files change, even for the same seed, so the same scenario cannot be reproduced with the same seed in a different environment. We would otherwise need to spend time running random regressions with different seeds at block level to reproduce an issue that happened at the top level with a particular sequence. The alternative way to reproduce the issue at block level is to record the transactions at the top level using channel record, and then play the transactions back through channel playback in the block-level environment.

// Record transaction in top level

gen.out_chan.record("Record_transaction");

// Play back at Block level

gen.out_chan.playback(status,"Record_transaction",tr);

if (!status)

   `vmm_error(log, "playback failed");

Sunday, March 20, 2011

First look at UVM Methodology !!!

I have been an RVM/VMM user for many years, following the advancements that come in VMM with every release, and I try to use the new features whenever I get a chance. Recently I had a chance to take a first look at the UVM methodology. I was fully aware that UVM is based on the OVM methodology, with the register package derived from VMM RAL. So how easy or difficult is it for a person with a VMM background, and no OVM knowledge, to pick up UVM? From my experience, I felt this can be done very quickly, in a matter of a few days. As I started reading about the UVM methodology and exploring its features, I saw a lot of similarities between UVM and VMM. People who have had a chance to use VMM 1.2 can make the switch even faster. I went through all the basic features UVM has to offer, and it looks very interesting. Then I decided I should explore the UVM features in detail.

Sunday, February 20, 2011

Error injection in VMM environments !!!


There are different approaches to injecting errors in VMM, and everyone picks the way that is comfortable for them. But when finalizing an approach, it is good to know the advantages and disadvantages in terms of test bench reuse and code organization.

Error injection in transaction class attributes

Design a transaction class with virtual fields specifying the error types and control the physical fields based on the virtual error types.

This error injection code can be placed in the basic transaction class, or it can live in a separate class extending the basic transaction class. I prefer the error injection class to be a separate class extending the basic transaction class, as the code becomes better organized and you don't end up with one big monolithic transaction class.

Most transaction-based error injection should be done using this approach, and sequences involving error injection can be generated easily with it.
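A small sketch of what this could look like (the class and field names are hypothetical, and the vmm_data boilerplate is omitted):

// Basic transaction class with the physical fields
class eth_frame extends vmm_data;
   rand bit [7:0] payload[];
   bit [31:0]     crc;                 // physical field, derived from the payload

   function void post_randomize();
      crc = compute_crc(payload);      // compute_crc() assumed to exist in the environment
   endfunction
endclass

// Error-injecting extension: a virtual error knob drives the physical field
class eth_frame_err extends eth_frame;
   typedef enum {NO_ERR, CRC_ERR} err_kind_e;
   rand err_kind_e err_kind;

   function void post_randomize();
      super.post_randomize();
      if (err_kind == CRC_ERR)
         crc = ~crc;                   // corrupt the good CRC when an error is requested
   endfunction
endclass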

Error injection through callbacks registered with the driver

Error injection at the signal-level protocol, controlling the driver's attributes, should be done through callbacks registered with the driver. Transaction class error injection should not be handled through driver callbacks.
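The callback route would look roughly like this (class, method, and attribute names are hypothetical; the driver is assumed to invoke the callback with the `vmm_callback macro at the right point in its signal-level protocol):

// Callback facade published by the driver
virtual class eth_drv_callbacks extends vmm_xactor_callbacks;
   virtual task pre_frame_tx(eth_driver drv, eth_frame fr);
   endtask
endclass

// Error-injecting callback: shorten the preamble driven on the wire
class eth_drv_err_cb extends eth_drv_callbacks;
   virtual task pre_frame_tx(eth_driver drv, eth_frame fr);
      drv.preamble_len = 3;            // hypothetical driver attribute controlling the protocol
   endtask
endclass

// In the test: register the callback with the driver instance
eth_drv_err_cb err_cb = new();
env.drv.append_callback(err_cb);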


Monday, January 3, 2011

Formal verification !!!

Recently I had a chance to find out what formal verification is all about. My first impression of formal verification: it uses white-box techniques to verify the design, in contrast with the black-box approach used in constrained random verification. If your assertions or property definitions are accurate, a formal tool can hit bugs faster than regular simulation effort. Ramping up on a formal tool takes a little time for people who are new to the formal verification world, and debugging failures requires some ramp-up as well, since we need to debug failures without timing information by tracing schematics. Once you get through the initial hiccups, you will definitely enjoy doing formal verification. A formal tool takes SystemVerilog or PSL assertions and tries to prove that they hold under all possible stimuli, or produces a counterexample that violates them.
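For example, a simple SystemVerilog assertion of the kind a formal tool would try to prove exhaustively (the signal names are made up):

// Every request must be granted within 1 to 4 cycles, unless reset is asserted
property req_gets_gnt;
   @(posedge clk) disable iff (!rst_n)
   req |-> ##[1:4] gnt;
endproperty
assert_req_gnt: assert property (req_gets_gnt);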