SystemVerilog Primer for VHDL Engineers. For example, the design model (i.e. the DUT) can be mapped into a hardware accelerator and run much faster during verification, while the testbench continues to run in simulation on a workstation.
In this section of the Verification Academy, we focus on building verification acceleration skills.

Coverage. Coverage is a simulation metric we use to measure verification progress and completeness.

UVM Verification Primer, John Aynsley. UVM is a methodology for functional verification using SystemVerilog, complete with a supporting library of SystemVerilog code. The letters UVM stand for the Universal Verification Methodology.

Design & Verification Languages. Verification languages are the foundation of the very dynamic electronics industry. Industry continually demands improvements in the process of bringing differentiated products to market. These verification language courses provide in-depth knowledge of key design and verification languages so that you can identify and deploy them in your upcoming projects.

Formal-Based Techniques. This topic area focuses on formal-based techniques, ranging from formal property checking to clock-domain crossing (CDC) verification. Assertion-based verification (as it relates to formal property checking) is also covered in this topic area.

FPGA Verification. The definition of what FPGA really means has changed dramatically over the last two decades. Whether blazing the trail or on the trailing edge of Moore's Law, this is an exciting time to be an FPGA designer. New opportunities bring new challenges for the FPGA market. As devices grow and become more complex, resembling complete systems, the task of verifying such a system becomes daunting. In this section you will find timely, unbiased information from subject-matter experts that will help you navigate this ever-changing landscape.

Planning, Measurement, and Analysis. This topic area focuses on the early stages of a verification project.
Topics include considerations for analyzing and evolving your verification capabilities, verification planning, and the introduction of metrics into a flow to measure success.

Simulation-Based Techniques. This topic area focuses on simulation-based techniques, ranging from stimulus generation and coverage modeling to correctness checking. Building a contemporary testbench using UVM is also covered in this topic area.

UVM - Universal Verification Methodology. Welcome to the most complete UVM online resource collection, whether you are downloading the kit(s), joining the discussion forums, or taking online or in-person training. The UVM Academy courses provide a great overview of the introductory and advanced methodology concepts, including videos that walk you through some useful code examples.

UVM Verification Primer. True to the spirit of UVM, this tutorial was created by taking an existing tutorial on OVM and replacing the letter O with the letter U. Please let us know if you find any inconsistencies! The letters UVM stand for the Universal Verification Methodology. UVM was created by Accellera based on the OVM (Open Verification Methodology) version 2.1.1. The roots of these methodologies lie in the application of the IEEE 1800 SystemVerilog language. The hardware or system to be verified would typically be described using Verilog, SystemVerilog, VHDL or SystemC at any appropriate abstraction level. This could be behavioral, register transfer level, or gate level. UVM is explicitly simulation-oriented, but it can also be used alongside assertion-based verification, hardware acceleration, or emulation. UVM test benches are more than traditional HDL test benches, which might wiggle a few pins on the design-under-test (DUT) and rely on the designer to inspect a waveform diagram to verify correct operation. UVM test benches are complete verification environments composed of reusable verification components, and they are used as part of an overarching methodology of constrained random, coverage-driven verification.
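To make the idea of reusable verification components concrete, here is a minimal sketch of a UVM sequence item and test. All names (`my_item`, `my_test`, `data`) are hypothetical, and a real environment would add an agent, driver, monitor, and scoreboard around this skeleton.

```systemverilog
// Minimal UVM testbench skeleton (illustrative names, not a complete environment)
import uvm_pkg::*;
`include "uvm_macros.svh"

// A transaction: the unit of stimulus in a constrained random testbench
class my_item extends uvm_sequence_item;
  `uvm_object_utils(my_item)
  rand bit [7:0] data;
  function new(string name = "my_item"); super.new(name); endfunction
endclass

// A test: top-level component that generates and reports stimulus
class my_test extends uvm_test;
  `uvm_component_utils(my_test)
  function new(string name, uvm_component parent); super.new(name, parent); endfunction
  task run_phase(uvm_phase phase);
    my_item item = my_item::type_id::create("item");
    phase.raise_objection(this);          // keep simulation alive
    assert(item.randomize());             // constrained random generation
    `uvm_info("TEST", $sformatf("Randomized data = %0d", item.data), UVM_LOW)
    phase.drop_objection(this);           // allow simulation to end
  endtask
endclass
```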
If you are already familiar with these topics, you can jump straight to the next tutorial. A traditional Verilog or VHDL test bench might contain processes to read raw vectors or commands from a file, use those to change the values of the wires connected to the DUT over time, and perhaps collect output from the DUT and dump it to another file. This is fine as far as it goes, but it does not scale up to support the reliable verification of very complex systems. Verification starts from a specification of the design's intended behavior. From this is derived a verification plan, broken down feature-by-feature, and agreed in advance by all those with a specific interest in creating a working product. This verification plan is the basis for the whole verification process: verification is only complete when every item on the plan has been tested to an acceptable level, where the meaning of acceptable is itself agreed and recorded in the plan. Functional checking must be automated if the process is to scale well, as must the collection of verification metrics such as the coverage of features in the verification plan and the number of bugs found by each test. Along with the verification plan, automated checking and functional coverage collection and analysis are cornerstones of any good verification methodology, and are explicitly addressed by SystemVerilog and UVM. Checkers and a functional coverage model, linked back to the verification plan, take engineering time to create but result in much improved quality of verification. One way to address the scaling problem is to use constrained random stimulus. The use of random stimulus brings two very significant benefits. Firstly, random stimulus is great for uncovering unexpected bugs, because given enough time and resources it can allow the entire state space of the design to be explored, free from the selective biases of a human test writer. Secondly, random stimulus allows compute resources to be maximally utilised through parallel compute farms and overnight runs.
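The automated checking described above is typically implemented as a scoreboard that compares DUT outputs against a reference model, replacing manual waveform inspection. The sketch below assumes a hypothetical ALU-style transaction; the class names (`alu_txn`, `alu_scoreboard`) and the golden model are illustrative only.

```systemverilog
// Sketch of an automated checker (scoreboard) under assumed transaction names
import uvm_pkg::*;
`include "uvm_macros.svh"

class alu_txn extends uvm_sequence_item;
  `uvm_object_utils(alu_txn)
  rand bit [7:0] a, b;
  bit [8:0] result;   // captured from the DUT by a monitor
  function new(string name = "alu_txn"); super.new(name); endfunction
endclass

class alu_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(alu_scoreboard)
  uvm_analysis_imp #(alu_txn, alu_scoreboard) analysis_export;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    analysis_export = new("analysis_export", this);
  endfunction

  // Called automatically for every transaction the monitor observes
  function void write(alu_txn t);
    bit [8:0] expected = t.a + t.b;   // placeholder golden reference model
    if (t.result !== expected)
      `uvm_error("SCB", $sformatf("a=%0d b=%0d: expected %0d, got %0d",
                                  t.a, t.b, expected, t.result))
  endfunction
endclass
```

Because the check runs on every transaction, any random test automatically becomes self-checking once a monitor publishes transactions to the scoreboard's analysis export.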
Of course, pure random stimulus would be nonsensical, so adding constraints to make random stimulus legal is an important part of the verification process, and is explicitly supported by SystemVerilog and UVM. Directed tests alone will typically achieve much less than 100% functional coverage: the state space of a typical design is so vast that random stimulus alone is not enough to explore all the key use cases, yet directed or highly constrained tests can be too narrow to give good overall coverage. Constrained random stimulus is a compromise between the two extremes, but effective usage comes down to making a series of good engineering judgements. The solution is to use the priorities set in the verification plan to direct verification resources to the key areas. Nothing is gained by throwing more and more random stimulus at a design to take functional coverage to ever higher levels unless the design-under-test is being checked automatically for functional correctness. Checkers can be implemented using SystemVerilog assertions or using regular procedural code. Assertions can be embedded within the design-under-test, placed on the external interfaces, or made part of the verification environment. UVM provides mechanisms and guidelines for building checkers into the verification environment and for logging reports. SystemVerilog offers two separate mechanisms for functional coverage collection: property-based coverage (cover directives) and sample-based coverage (covergroups). Both can be used in a UVM verification environment. The specification and execution of the coverage model is intimately tied to the verification plan, and many simulation tools are able to annotate coverage information onto the verification plan document, facilitating tight management control. Without shaping, random stimulus alone may be insufficient to exercise many of the deeper states of the design-under-test.
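The two coverage mechanisms mentioned above can be sketched side by side. The signal and bin names here (`len`, `is_err`, `req`, `gnt`) are invented for illustration; a real coverage model would be derived from the verification plan.

```systemverilog
// Sample-based coverage: a covergroup sampled once per observed transaction
covergroup pkt_cg with function sample(bit [7:0] len, bit is_err);
  len_cp : coverpoint len {
    bins small = {[0:15]};
    bins large = {[16:255]};
  }
  err_cp : coverpoint is_err;
  len_x_err : cross len_cp, err_cp;  // length vs. error status combinations
endgroup

// Property-based coverage: a cover directive on an interface protocol
module bus_cover (input logic clk, req, gnt);
  // Record how often a grant follows a request within 1 to 3 cycles
  cover property (@(posedge clk) req |-> ##[1:3] gnt);
endmodule
```

Covergroups suit data-oriented questions (which packet lengths were seen?), while cover properties suit temporal ones (did this handshake sequence ever occur?).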
Constrained random stimulus is still random, but the statistical distribution of the vectors is shaped to ensure that interesting cases are reached. SystemVerilog has dedicated language features for expressing constraints, and UVM goes further by providing mechanisms that allow constraints to be written as part of a test rather than embedded within dedicated verification components. This and other features of UVM facilitate the creation of reusable verification components. With many simulation tools, the verification plan will include references to the corresponding coverage statements, and as simulation runs, coverage data is back-annotated from the simulator onto the verification plan, feature-by-feature. This provides direct feedback on the effectiveness of any given test. Holes in the coverage goals can be plugged by writing further tests. The verification plan itself is not part of UVM proper, but it is a vital element in the verification process. UVM provides guidance on how to collect coverage data in a reusable manner. With constrained random testing, the role of the tests shifts slightly. Although a constrained random test may be written with specific coverage goals in mind, it is not assumed before the fact that any particular test will actually test one feature rather than another. The constrained random test is run, and the coverage model is used to measure empirically which features the test did in fact exercise. Tests can be graded after the fact using the coverage data, and the most effective tests, that is, those that achieve the highest coverage in the fewest number of cycles, can be used to form the basis of a regression test set. Random stimulus then enables compute resources to be fully utilized in the pursuit of hitting coverage goals.
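Layering a test-specific constraint on top of a reusable item, as described above, can be sketched as follows. The class names (`bus_item`, `corner_item`) and the specific constraints are assumptions for illustration.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Reusable item shipped with a verification component
class bus_item extends uvm_sequence_item;
  `uvm_object_utils(bus_item)
  rand bit [31:0] addr;
  rand bit [7:0]  len;
  // Baseline legality constraint: word-aligned address, bounded length
  constraint legal_c { len inside {[1:64]}; addr[1:0] == 2'b00; }
  function new(string name = "bus_item"); super.new(name); endfunction
endclass

// A test shapes the distribution without editing the component's source
class corner_item extends bus_item;
  `uvm_object_utils(corner_item)
  // Weight the random lengths toward the boundary values 1 and 64
  constraint corner_c { len dist { 1 := 4, 64 := 4, [2:63] :/ 2 }; }
  function new(string name = "corner_item"); super.new(name); endfunction
endclass
```

Combined with a UVM factory override such as `bus_item::type_id::set_type_override(corner_item::get_type())`, a test can substitute the shaped item everywhere the environment creates a `bus_item`.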
The total number of man-hours dedicated to verification will not necessarily decrease, but verification quality will be dramatically improved, and the verification process will become far more transparent and predictable, both to the verification team itself and to outside observers. Automated coverage collection gives accurate feedback on the progress of the verification effort, and the emphasis on verification planning ensures that resources are focussed on achieving agreed goals. Verification reuse is enabled by having a modular verification environment in which each component has clearly defined responsibilities, by allowing flexibility in the way components are configured and used, by having a mechanism that allows imported components to be customized to the application at hand, and by having well-defined coding guidelines to ensure consistency. Low-level driver and monitor components can be reused across multiple designs-under-test. The whole verification environment can be reused by multiple tests and configured top-down by those tests. Finally, test scenarios can be reused from application to application. This degree of reuse is enabled by making UVM verification components configurable in a very flexible way without modification to their source code. This flexibility is built into the UVM class library.
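The top-down configuration described above is commonly done through `uvm_config_db`. In this sketch, the component paths (`env.agent1`, `env.scoreboard`) and the `enable_check` field are hypothetical; `is_active` is the conventional field a `uvm_agent` reads to decide between active and passive operation.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// A test configures the environment without touching component source code
class quiet_test extends uvm_test;
  `uvm_component_utils(quiet_test)
  function new(string name, uvm_component parent); super.new(name, parent); endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Make one agent passive for this test only (monitor but do not drive)
    uvm_config_db#(uvm_active_passive_enum)::set(
      this, "env.agent1", "is_active", UVM_PASSIVE);
    // Disable a hypothetical scoreboard check via a user-defined field
    uvm_config_db#(bit)::set(this, "env.scoreboard", "enable_check", 0);
  endfunction
endclass
```

Because components look up their settings at build time, the same environment source serves many tests, each configured differently from the top.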