Consider a simple constraint: a == b + c;
Now if you constrain any two of the three values a, b, and c, the third value gets generated automatically due to the bi-directional nature of the constraint.
Now modify the constraint to: void(a) == b + c;
The value of a gets generated first, and b and c are then generated based on it; generating a based on b and c is no longer possible. The same functionality can be achieved using a solve-before constraint, as follows:
a == b + c;
solve a before b;
solve a before c;
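To make this concrete, here is a minimal sketch of a class using the solve-before form; the class name and 8-bit widths are assumptions for illustration.
class abc_c;
  rand bit [7:0] a, b, c;
  constraint order_c {
    a == b + c;
    solve a before b;
    solve a before c;
  }
endclass
After a call to randomize(), the solver picks a first and then distributes b and c to satisfy a == b + c.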
Functions can be used in constraints for reusability between constraints, but an important point to note is that a constraint becomes uni-directional when a function is used in it: the function's arguments are solved before the variable constrained by the function's return value.
function integer sum (integer b, integer c);
  sum = b + c;
endfunction
constraint sum_constraint { a == sum(b, c); }
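A minimal self-contained sketch of the function-based form; the class name and 8-bit widths are assumptions for illustration.
class sum_c;
  rand bit [7:0] a, b, c;
  // b and c are solved first; a is then set from the function result
  function bit [7:0] sum(bit [7:0] x, bit [7:0] y);
    return x + y;
  endfunction
  constraint sum_constraint { a == sum(b, c); }
endclass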
Saturday, December 4, 2010
Friday, November 26, 2010
Randomization of a floating point or real variable !!!
What are the application areas of floating point numbers?
1) Floating point numbers are used in PLL configuration, where fractional values are required.
2) Processor, image processing, and graphics applications mostly work on floating point numbers.
Vera/NTB does not even have a real or floating point type, let alone support randomizing one. The workaround is to write your own floating point class, randomize the class, and use it. SystemVerilog has a real type which is used to represent a floating point number, but a real variable cannot be randomized; the LRM does not support randomization of real data types. Some time back, when I was discussing this with one of my friends, he told me the SystemVerilog committee was working on this; I am not sure how true that information is.
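A minimal sketch of the workaround: split the value into randomizable integer parts and convert to real after randomization. The ranges and the thousandths scaling are assumptions for illustration.
class rand_real;
  rand int unsigned int_part;
  rand int unsigned frac_part;   // fraction in thousandths
  constraint c_range {
    int_part < 100;
    frac_part < 1000;
  }
  // convert the randomized fixed-point parts to a real value
  function real value();
    return int_part + frac_part / 1000.0;
  endfunction
endclass
After rr.randomize(), rr.value() returns a pseudo-random real in the range [0, 100).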
Sunday, November 7, 2010
Test end condition using vmm_consensus !!!
VMM provides the vmm_consensus class as a voting mechanism, used to determine when the test can be terminated. Before the introduction of the vmm_consensus class, the end-of-test condition was determined by some ad-hoc condition like a scoreboard-empty check or a timeout; the end condition was tied to a specific environment and was not reusable across environments.
With the introduction of the vmm_consensus class, various elements such as channels, notifications, and transactors play the role of voters, and all voters have to agree for a consensus; even if one voter opposes, there is no consensus. vmm_consensus has a wait_for_consensus() method, which is called from vmm_env's wait_for_end() method and blocks until all voters consent.
How to use vmm_consensus to determine when the test should end? (A minimal sketch follows the list below.)
1) Call the vmm_consensus::wait_for_consensus() method from the vmm_env::wait_for_end() method.
2) Register voters in vmm_env::build() using the end_vote.register_*() methods, where end_vote is the instance of vmm_consensus defined in vmm.sv.
3) Each sub-environment instance has a single vote; it can consent or oppose.
4) There are options such as consensus_force_thru(), which can be used by a particular sub-env or VMM component to force the consensus through even though other components oppose the decision.
5) As usual, there are methods that can be used to monitor the status of the consensus: which components oppose and which consent.
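A minimal sketch of steps 1 and 2, assuming a VMM 1.1-style environment; the driver and channel names are illustrative, not part of VMM.
class my_env extends vmm_env;
  my_driver        drv;          // hypothetical transactor
  my_trans_channel gen_to_drv;   // hypothetical channel
  virtual function void build();
    super.build();
    // register voters on the end-of-test consensus
    this.end_vote.register_xactor(drv);
    this.end_vote.register_channel(gen_to_drv);
  endfunction
  virtual task wait_for_end();
    super.wait_for_end();
    this.end_vote.wait_for_consensus();  // block until all voters consent
  endtask
endclass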
Saturday, October 9, 2010
Scoreboard for checking interrupts !!!
Architecting a good interrupt monitor and interrupt scoreboard is essential to hit bugs and close coverage on the interrupt logic of the design. The architecture is very simple: keep a shadow register for your interrupt status register and predict the interrupt by writing into this register. The prediction should be done in the transactor, based on the transaction class attributes for a regular transaction, an error condition, etc. The interrupt scoreboard should reside in the passive interrupt monitor, which reads the shadow register as well as the actual hardware register and the mask, compares the expected and actual interrupts, and flags an error on a mismatch. The monitor should have options to clear an interrupt as soon as it is observed, or to accumulate interrupts and clear them when required. At the end of simulation, the shadow register and the real interrupt status register need to be compared to check that all the expected interrupts have arrived, flagging appropriate error messages otherwise.
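A minimal sketch of the shadow-register idea; the 32-bit width, the mask, and the method names are assumptions, not a fixed API.
class interrupt_scoreboard;
  bit [31:0] shadow_status;   // predicted interrupt status
  bit [31:0] irq_mask;        // currently enabled interrupts
  // called from the transactor when a transaction is predicted to raise an interrupt
  function void predict(bit [31:0] irq_bits);
    shadow_status |= irq_bits;
  endfunction
  // called from the passive monitor with the value read from the real status register
  function void compare(bit [31:0] hw_status);
    if ((hw_status & irq_mask) !== (shadow_status & irq_mask))
      $error("Interrupt mismatch: hw=%h expected=%h",
             hw_status & irq_mask, shadow_status & irq_mask);
  endfunction
  // clear interrupts immediately, or accumulate and clear later
  function void clear(bit [31:0] irq_bits);
    shadow_status &= ~irq_bits;
  endfunction
endclass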
Saturday, September 11, 2010
Interfacing a CRV environment with procedural environments !!!
One of the interesting challenges in verification is building a constrained random verification environment on top of existing procedural code. The challenges are unique to each environment; the simplest solution is wrapping the procedural code in a transactor connected to a channel, which in turn interfaces with the constrained random generators. Transaction object attributes are mapped to different functionality in the procedural code.
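A minimal sketch of such a wrapper, assuming a VMM-style channel and a legacy procedural task legacy_do_op(); all names here are illustrative.
class legacy_wrapper_xactor extends vmm_xactor;
  my_trans_channel in_chan;   // transactions from the CRV generator
  function new(string inst, my_trans_channel in_chan);
    super.new("legacy_wrapper_xactor", inst);
    this.in_chan = in_chan;
  endfunction
  virtual task main();
    my_trans tr;
    super.main();
    forever begin
      in_chan.get(tr);   // blocking get from the generator
      // map transaction attributes onto the procedural API
      legacy_do_op(tr.kind, tr.addr, tr.data);
    end
  endtask
endclass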
Sunday, August 22, 2010
Tips to improve constraint solver performance !!!
Performance issues are very hard to debug, especially when it comes to finding the root cause. If the constraint solving time is at unacceptable limits, the first area to debug is the bidirectional nature of the constraints. Quickly review the constraints that do not require bidirectional functionality and convert them from bidirectional to unidirectional using void termination or solve-before; also reduce the number of constraints solved at a single point in time to improve performance.
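One way to reduce the number of variables solved together is to split a single randomize() call into stages; a sketch under assumed names follows, using rand_mode() to freeze a variable and an in-line randomize(c) to solve it separately.
module tb;
  class cfg_c;
    rand bit [7:0] a, b, c;
    constraint c_ab { a < b; }
  endclass
  initial begin
    cfg_c cfg = new();
    cfg.c.rand_mode(0);       // freeze c; only a and b are solved together
    void'(cfg.randomize());
    cfg.c.rand_mode(1);
    void'(cfg.randomize(c));  // now solve c alone, with a and b as state
  end
endmodule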
Saturday, July 3, 2010
Structures and unions in SystemVerilog !!!
VERA/NTB users migrating to SystemVerilog have a tendency not to use the structure and union constructs when architecting or implementing a SystemVerilog verification environment, the reason being that these constructs are not available in VERA/NTB. struct is a pretty useful construct to group fields which are logically related to each other; it helps to organize your code. The union construct is very similar to a structure, but only one of the fields is valid at a given point in time. Using packed structs and packed unions organizes fields in memory without gaps, which in turn results in faster memory access and faster simulation.
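A short sketch of the packed struct and packed union constructs; the field names and widths are illustrative.
module packed_example;
  typedef struct packed {
    bit [7:0]  addr;
    bit [15:0] data;
    bit        parity;
  } bus_word_t;               // 25 bits, laid out without gaps
  typedef union packed {
    bit [24:0] raw;           // must match the struct width (25 bits)
    bus_word_t fields;
  } bus_word_u;
  bus_word_u w;
  initial begin
    w.raw = '0;
    w.fields.addr = 8'hA5;    // view the same bits through the struct
  end
endmodule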
Thursday, May 20, 2010
Factory replacement in the scenario generator using ‘.using’ gotcha !!!
When you want to do a factory replacement of a transaction class in your scenario, you assign the factory to .using:
scenario.using = transaction_factory;
The gotcha in this factory replacement is that you need to implement the allocate() and copy() methods in the extended class for the factory replacement to work.
Many times I have seen people, myself included, spend time debugging their code when using “.using” for factory replacement because they are not aware of this gotcha. Maybe the RVM/VMM documentation should highlight this requirement so that users can easily understand it.
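A sketch of the overrides needed in the extended class; the class names and the added field are illustrative, and the signatures follow vmm_data's virtual methods.
class my_err_trans extends my_trans;
  rand bit inject_error;   // example of a field added in the extension
  virtual function vmm_data allocate();
    my_err_trans tr = new();
    return tr;
  endfunction
  virtual function vmm_data copy(vmm_data to = null);
    my_err_trans cpy;
    if (to == null) cpy = new();
    else if (!$cast(cpy, to)) return null;   // wrong destination type
    super.copy_data(cpy);                    // copy base-class fields
    cpy.inject_error = this.inject_error;    // copy the added fields
    return cpy;
  endfunction
endclass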
Saturday, May 8, 2010
Atomic generator using allocate() !!!
The RVM/VMM atomic and scenario generators randomize a blueprint object, which is assigned with the extended class (factory); a copy of the randomized blueprint is pushed into the channel. Most RVM/VMM users use this approach in their custom generators. Is there a different way of implementing your atomic generator without using a copy() method? The answer is yes; the following method can be used to generate atomic transactions without using copy().
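The original code snippet has not survived here, so the following is a sketch reconstructing the idea: allocate a fresh instance from the factory on each iteration and randomize that instance directly, so no copy() is needed. The class and channel names are illustrative.
class my_atomic_gen extends vmm_xactor;
  my_trans         randomized_obj;   // blueprint / factory handle
  my_trans_channel out_chan;
  virtual task main();
    my_trans tr;
    super.main();
    forever begin
      $cast(tr, randomized_obj.allocate());   // fresh instance via the factory
      if (!tr.randomize())
        `vmm_error(log, "transaction randomization failed");
      out_chan.put(tr);                       // push the new instance
    end
  endtask
endclass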
For the above code to work, you need to implement the allocate() method in your extended class. The extended class is assigned to the factory before the start_xactor() method is called.
Sunday, May 2, 2010
ASIC verification tasks verification engineers dislike the most !!!
I was having a friendly discussion with a few of my fellow verification engineers about the verification tasks they dislike the most. The conclusion from the discussion was that they disliked verification tasks which are laborious, require manual effort, and offer very little learning opportunity. The tasks they disliked are as follows.
1) Language migration without the help of a commercial language conversion tool.
2) Code coverage analysis and identification of code coverage exclusions.
3) “X” tracing while debugging gate level simulation.
4) Release management (tagging the release code, triggering regressions, and debugging regression results; it becomes messy when more than 25 people are involved and release management is handled by a single person).
5) Working on outdated technology that is away from where the verification market is moving.
Sunday, April 25, 2010
Top 6 verification misconceptions !!!
1) This is a silicon-proven IP, no need to do rigorous testing and coverage --- Hold on, are you sure all the feature crosses were validated by the IP vendor in silicon?
2) RTL is what gets taped out and turned into a product, so there is no need to review the verification environment; a test plan review is sufficient --- 70% of project time is spent on verification, out of which a considerable amount is spent developing the test bench. A re-usable, well-architected test bench helps you achieve your verification objectives in less time and eases maintenance of your code in the long run.
3) This is working legacy code, keep it --- Legacy code with defects in its architecture is difficult to maintain, and doing an enhancement on it is a nightmare. Consider re-coding sections of the legacy code when you have to enhance it to support a new feature.
4) Start the implementation and study the protocol as you go; we want you to be productive from day 1 --- A big no!! You cannot architect an environment without understanding the protocol completely, and you will always end up in a situation where you were not informed about a requirement.
5) A good verification engineer is one who hits a lot of bugs --- Hold on, you can have a bad designer who makes many mistakes, making an ordinary verification engineer look exceptional.
6) No need to have an architecture document for a verification environment --- Reverse engineering code to recover the verification environment architecture is not an easy job, and it is a pain to understand an environment that way. Always document the architecture of your verification environment, and if possible document a class diagram.
Saturday, April 17, 2010
Checker for complex constraints spanning multiple transactions !!!
It is hard to debug constraint failures spanning multiple transactions, say a sequence of ten to fifteen transactions where the constraints on each atomic transaction depend on the others. Manual debug of such failures is time consuming. We can use a procedural checker to check the randomization of the scenario, which is an array of transaction objects. In RVM/VMM the procedural checker can be placed in the post_scenario_gen callback of the RVM/VMM scenario generator. post_scenario_gen has a drop bit which can be set from the callback to stop the transactions from being pushed into the channel. The checker is active at all times and flags an error if there is an error in the randomization across transactions.
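A sketch of such a checker, assuming the callback and scenario classes produced by the `vmm_scenario_gen macro for a hypothetical my_trans class with an addr field; treat the exact names and signatures as assumptions.
class seq_checker_cb extends my_trans_scenario_gen_callbacks;
  virtual task post_scenario_gen(my_trans_scenario_gen gen,
                                 my_trans_scenario     scenario,
                                 ref bit               dropped);
    // procedural check across the generated transactions
    for (int i = 1; i < scenario.length; i++) begin
      if (scenario.items[i].addr <= scenario.items[i-1].addr) begin
        `vmm_error(gen.log, "cross-transaction constraint violated");
        dropped = 1;   // keep the bad scenario out of the channel
      end
    end
  endtask
endclass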
Wednesday, April 14, 2010
SystemVerilog 2009 new features !!!
Following are some of the new features of the SystemVerilog 2009 standard that caught my attention.
1) `begin_keywords / `end_keywords
These directives can be wrapped around any section of code to maintain backward compatibility with Verilog or SystemVerilog 2005. They are very useful if you are migrating your testbench or design to SystemVerilog: you might come across SystemVerilog keywords used as identifiers in your Verilog design, which can cause the compile to fail. With SystemVerilog 2009, just wrap the code with `begin_keywords and `end_keywords to get past the error without modifying your code.
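A short sketch: legacy Verilog code that uses a SystemVerilog keyword as an identifier, compiled under the older keyword set.
`begin_keywords "1364-2005"
module legacy_block;
  reg do;   // 'do' is a SystemVerilog keyword but a legal Verilog-2005 name
endmodule
`end_keywords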
2) let construct: a substitute for macros?
package example_package;
  // let binds a named, scoped expression (unlike a text macro)
  let expand_operation(a, b) = (!a & b);
endpackage

module test ( /* ... */ );
  import example_package::*;
  always @( /* ... */ ) begin
    assert (expand_operation(read, write));   // equivalent to assert (!read & write)
  end
endmodule
3) Pure constraints
virtual class example;
  pure constraint valid;
endclass
This allows you to declare a pure constraint in an abstract class: just the declaration, without an implementation. The implementation of the constraint is provided in the extended class with the same constraint name.
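A short sketch of the matching implementation in a subclass; the rand variable and range are illustrative.
class example_impl extends example;
  rand int x;
  constraint valid { x inside {[1:10]}; }   // implements the pure constraint
endclass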
Thursday, March 25, 2010
Closing the “quality gap” in functional verification !!!
Recently I came across a white paper on a new EDA tool, Certitude, which is used to close the “quality gap” in functional verification using mutation techniques. The tool introduces mutations in the RTL code, subjects the mutated RTL to the verification team's test bench, and checks whether the verification environment is able to detect, activate, and propagate the mutations. If the verification environment catches the mutations, then verification is probably complete and there is reasonable certainty that no bugs are left. If not, there could be serious weaknesses in the test bench, and it needs to be reworked. Put simply, if the verification testbench cannot detect the introduced mutations (or bugs), chances are that other bugs have also been left out.
Another way to see the situation is that after introducing mutations we have two versions of the RTL code, one original and one mutated (i.e., with bugs), and the verification environment is passing both. Clearly there is a weakness in the verification environment which needs to be fixed.
The tool works with all the major simulators available in the market and with all the common languages: C, SystemC, SystemVerilog, Specman e, and Vera.
Sunday, March 21, 2010
VMM 1.2 tutorial !!!
I was looking for a nice tutorial on the new features of VMM 1.2 and came across this two-hour video tutorial on VMM 1.2 features on the VMM Central web site (http://www.vmmcentral.org). The nicely composed video covers all aspects of the VMM 1.2 release, from implicit phasing and analysis ports to transport and the factory. Definitely useful for VMM 1.1 users migrating to VMM 1.2.
Wednesday, March 17, 2010
Another verification methodology UVM !!!
Recently, through one of my friends, I came to know about the development of a new verification methodology, UVM. UVM (Universal Verification Methodology) is being standardized by the Accellera Technical Subcommittee (TSC) and claims to solve the SystemVerilog cross-methodology interoperability problem. The methodology is supported by all three major EDA vendors: Synopsys, Cadence, and Mentor. The advantage of switching to this methodology (when it is available) is portability across different vendors. The base code for this methodology will come from OVM version 2.0.3. The base classes for the UVM methodology are expected by Q1 2010.
It is still not clear how backward compatible this methodology will be with OVM. To me, backward compatibility with VMM is out of the question, as the base code for this methodology comes from OVM.
Saturday, February 20, 2010
Functional coverage in RVM/VMM !!!
The conventional way of collecting functional coverage for a transaction class in RVM/VMM is through an RVM/VMM callback, and it generally reports functional coverage on the random transactions produced by the generator. A coverage point hit or miss is then purely dependent on the generated random values; there is no real check of whether that scenario was actually hit in the RTL. In other words, if you disconnect the RTL completely from the verification environment and run the test, the functional coverage points will still be hit.
The ideal way is to have a passive monitor hooked up to the RTL collect the functional coverage. The idea is to sample values from the RTL through the passive monitor, unpack the bytes into a transaction class object, and collect functional coverage from that object. With such an arrangement we make sure a functional coverage point is hit only when the scenario is actually hit in the RTL. I know this approach requires more effort, but we can be absolutely sure the coverage hit happens only if the scenario happens in the RTL. Moreover, your functional coverage code is isolated from the rest of the verification environment code. For VIP development, this is the right approach: group your functional coverage model with a passive monitor.
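A minimal sketch of such a coverage collector fed by a passive monitor; the transaction fields kind and addr are illustrative.
class coverage_collector;
  my_trans tr;
  covergroup trans_cg;
    cp_kind : coverpoint tr.kind;
    cp_addr : coverpoint tr.addr { bins lo = {[0:127]}; bins hi = {[128:255]}; }
    cross cp_kind, cp_addr;
  endgroup
  function new();
    trans_cg = new();
  endfunction
  // called by the monitor after it unpacks the observed bytes into a transaction
  function void sample(my_trans observed);
    tr = observed;
    trans_cg.sample();
  endfunction
endclass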
Some drawbacks of this approach: the latest coverage convergence technology might not work well with it, and you might not be able to tune or bias your constraint solver to achieve functional coverage convergence automatically. In particular, when you collect functional coverage from RTL nodes, you need to update your test bench whenever a change in the RTL code affects the way functional coverage is collected.
One more reason for choosing this approach: if you have more than one way of verifying the design and you want the functional coverage monitor to collect coverage irrespective of how the stimulus is generated, this approach works perfectly well.
Friday, January 1, 2010
RVM/VMM Scenario generator !!!
RVM/VMM ships with some pretty useful built-in components and applications. The RVM/VMM atomic generator is probably one of the most powerful, yet it is pretty basic: it can definitely help you generate a flow of random items, but it was not intended for generating sequences. A sequence (scenario) is a set of items that have some sort of correlation between them. For example, consider a set of six transactions where transaction 4 depends on a previous transaction, say transaction 2. The atomic generator cannot generate this kind of sequence; RVM/VMM addresses the need for smart scenarios with the RVM/VMM scenario generator.
Since we cannot anticipate future enhancements to the verification environment, it is better to build in the flexibility for generating scenarios; selecting the scenario generator is the right step.
Deploying a scenario generator in the verification environment is slightly more complex than deploying an atomic generator, but it gives complete controllability over the individual transactions in a sequence. Deploying a scenario generator will help you in the longer run, and you can also randomize between scenarios.
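A sketch of a correlated scenario, assuming the scenario class produced by the `vmm_scenario_gen macro for a hypothetical my_trans with an addr field; the base-class members used here (scenario_kind, length, items) follow VMM's conventions and should be treated as assumptions.
class dependent_scenario extends my_trans_scenario;
  int unsigned my_kind;
  function new();
    my_kind = define_scenario("dependent_scenario", 6);   // name, max length
  endfunction
  constraint correlation {
    if (scenario_kind == my_kind) {
      length == 6;
      items[3].addr == items[1].addr;   // transaction 4 correlated with transaction 2
    }
  }
endclass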