
Mode Support for SimVision “Stop Simulation” Button


Prior to Incisive Enterprise Simulator (IES) 12.1, clicking the SimVision "Stop Simulation" button would stop the simulation wherever it happened to be: in an HDL context or, if Specman was present in the simulation, in a Specman context. To give you finer control over exactly where the simulation pauses, the "Stop in Specman only" functionality has been introduced.

As of IES 12.1, whenever Specman is present in the simulation, the SimVision "Stop Simulation" button provides a drop-down menu that lets you choose between two switchable modes: "Stop Simulation" (stop wherever the simulation is now) and "Stop in Specman" (see the following figure).

 

 

SimVision will immediately perform the "Stop" operation in the mode you select, and keep that mode persistent. It will also indicate the selected mode in the button icon and tooltip (see the following figure).

 

Note the "e" over the button. The "e" indicates that you selected "Stop in Specman" mode. (Had you chosen "Stop Simulation," no indicator would appear over the button.)

When you click the "Stop" button while SimVision operates in "Stop in Specman" mode, the simulation will stop only when it is in Specman, not before.

To toggle between the Stop Simulation modes, just make your new choice in the drop-down menu.

Note that if you run the simulation without Specman, or invoke standalone Specman, the drop-down menu is absent and the "Stop" button works in "Stop Simulation" mode (see the following figure).

 

 Alex Chudnovsky


New Specman Coverage Engine - Extensions Under Subtypes


This is the first in a series of three blog posts presenting some powerful enhancements added in Specman 12.2 to ease the modeling of a multi-instance coverage environment. In this post we focus on the first enhancement; the other two will be described in the following coverage blogs.

Starting with Specman 12.2, you can define coverage options per subtype. When coverage is collected per instance, Specman checks the subtypes of each instance and applies only the relevant subtype options. We will demonstrate the power of this for both "per unit instance" and "per subtype instance" coverage.

  • I. Utilizing when extensions for modeling per subtype instances

One intuitive use of extensions of covergroups under "when" subtypes relates to covergroups which are collected per subtype using the per_instance item option:

type packet_size_t: [SMALL, LARGE];

struct packet{
   size: packet_size_t;
   length: uint(bits:4);
   event sent;
   cover sent is{
      item size using per_instance;
      item length;
   };
};

 

The results of the above covergroup are collected separately for each value of the size item, so in practice this covergroup is collected per "size" subtype.

Let's assume that SMALL packets can only have length < 8, and LARGE packets can only have length >= 8. In pre-12.2 releases, the following code was needed to filter out the irrelevant values of each subtype:

 

extend packet{
   cover sent(size==SMALL) is also{
      item length using also ignore=(length >= 8);
   };
   cover sent(size==LARGE) is also{
      item length using also ignore=(length < 8);
   };
};

 

This code can now be replaced with a native "when" subtype extension:

extend packet{
   when SMALL packet{
      cover sent is also{
         item length using also ignore=(length >= 8);
      };
   };
   when LARGE packet{
      cover sent is also{
         item length using also ignore=(length < 8);
      };
   };
};

 

  • II. Utilizing when extensions for modeling per unit instances

Extension of covergroups under "when" subtypes can also be used to model the different instances of a covergroup that are collected per-unit instance, according to the exact subtype of the containing instance.

Let's see a code example that illustrates the power of this capability.  In this code we model a packet generator unit that generates packets of different sizes. The packet generator unit has a field which describes the maximal size of a packet that a packet_generator instance can generate:

 

type packet_size_t: [SMALL, MEDIUM, LARGE, HUGE];

struct packet{
   size: packet_size_t;
};

unit packet_generator{
   max_packet_size: packet_size_t;
   event packet_generated;
   cur_packet: packet;
   generate_packet() is{
      gen cur_packet keeping {it.size.as_a(int) <= max_packet_size.as_a(int)};
      emit packet_generated;
   };
};

extend sys{
   packet_gen1: packet_generator is instance;
   keep packet_gen1.max_packet_size == LARGE;
   packet_gen2: packet_generator is instance;
   keep packet_gen2.max_packet_size == MEDIUM;
   packet_gen3: packet_generator is instance;
   keep packet_gen3.max_packet_size == HUGE;
};

Oh, right, there's that coverage thing we need to define in order to check that each valid packet size was generated in each instance of the packet_generator:

extend packet_generator{
   cover packet_generated using per_unit_instance is{
      item p_size: packet_size_t = cur_packet.size;
   };
};

OK, so the above code enables the coverage collection of p_size separately for each instance of packet_generator. Let's generate 100 packets in each packet generator instance. Surely we'll get 100% coverage?

Well, we won't. When launching Incisive Metric Center (IMC), not all of the coverage instances are fully covered. For example, the grade of the instance under sys.packet_gen1 is only 75%:

 

The reason is the constraint that prevents the generation of HUGE-size packets in instance sys.packet_gen1: no matter how many packets are generated in that instance, the HUGE bucket (bin) will never be covered.
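To see where the 75% comes from, here is a quick back-of-the-envelope sketch (Python, not e; it just counts reachable buckets under each instance's keep constraint, and is not actual IMC output):

```python
# Why a per-instance grade stalls below 100% when a generation
# constraint makes some coverage buckets unreachable.
SIZES = ["SMALL", "MEDIUM", "LARGE", "HUGE"]  # packet_size_t values

def reachable_buckets(max_packet_size):
    """Sizes an instance can generate under 'size <= max_packet_size'."""
    return SIZES[:SIZES.index(max_packet_size) + 1]

def best_possible_grade(max_packet_size):
    """Fraction of the four p_size buckets that can ever be hit."""
    return len(reachable_buckets(max_packet_size)) / len(SIZES)

# sys.packet_gen1 keeps max_packet_size == LARGE, so its HUGE bucket
# can never be covered and its grade tops out at 3/4:
print(best_possible_grade("LARGE"))   # 0.75
print(best_possible_grade("MEDIUM"))  # 0.5
print(best_possible_grade("HUGE"))    # 1.0
```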

We need to refine the valid buckets according to the generatable packet sizes in each instance. We can use instance-specific covergroup extensions for that:

 

extend packet_generator{
   cover packet_generated(e_path==sys.packet_gen1) is also{
      item p_size using also ignore = p_size.as_a(int) > packet_size_t'LARGE.as_a(int);
   };
   cover packet_generated(e_path==sys.packet_gen2) is also{
      item p_size using also ignore = p_size.as_a(int) > packet_size_t'MEDIUM.as_a(int);
   };
   cover packet_generated(e_path==sys.packet_gen3) is also{
      item p_size using also ignore = p_size.as_a(int) > packet_size_t'HUGE.as_a(int);
   };
};

Now we can achieve 100% grade for each instance:

 

  

But in 12.2 we can use the following subtype extensions instead:

extend packet_generator{
   when SMALL'max_packet_size packet_generator{
      cover packet_generated is also{
         item p_size using also ignore = p_size.as_a(int) > packet_size_t'SMALL.as_a(int);
      };
   };
   when MEDIUM'max_packet_size packet_generator{
      cover packet_generated is also{
         item p_size using also ignore = p_size.as_a(int) > packet_size_t'MEDIUM.as_a(int);
      };
   };
   when LARGE'max_packet_size packet_generator{
      cover packet_generated is also{
         item p_size using also ignore = p_size.as_a(int) > packet_size_t'LARGE.as_a(int);
      };
   };
   when HUGE'max_packet_size packet_generator{
      cover packet_generated is also{ // in case of a future "extend packet_size_t: [GIGANTIC]"
         item p_size using also ignore = p_size.as_a(int) > packet_size_t'HUGE.as_a(int);
      };
   };
};

At first glance, the latter solution doesn't look more efficient than the former one: it uses 4 extensions of the covergroup instead of the 3 needed before. But what would happen if, instead of only 3 packet_generator instances, we had 100? With the first solution, we would need to extend each one of the 100 covergroup instances individually.

With the "when subtype extension" solution, the 4 extensions above satisfy the requirement for any number of instances.

Even more important, the "when subtype extension" solution is reusable, since it doesn't use the full paths of the covergroup instances. This makes it much more suitable for verification IP and for module-level verification environments that are later integrated into system-level verification.

But before you run off and start extending your covergroups under subtypes, I'd like to mention another newly supported option in the e language that is even better suited for the exact scenario described above: the "instance_ignore" item option.

  • What is this "instance_ignore" option?
  • Why is it better suited for the above scenario?
  • For which scenarios is the "extension under when subtypes" approach better suited?

Answers to all of the above questions (and more) can be found in the next Specman coverage blog, "Using Instance-Based Coverage Options for Coverage Parameterization."

Team Specman 

 

Introducing UVM Multi-Language Open Architecture


The new UVM Multi-Language (ML) Open Architecture (OA) posted to the new UVMWorld is the result of a collaboration between Cadence and AMD. It uniquely integrates e, SystemVerilog, SystemC, C/C++, and other languages into a cohesive verification hierarchy and runs on multiple simulators. Moreover, the new solution is open for additional collaboration and technology enhancement.

Since Cadence introduced ML verification four years ago, the need for it has never been greater. Complex SoCs are verified with a combination of industry-standard languages and frameworks including IEEE 1647 (e), 1800 (SystemVerilog), 1666 (SystemC), and Accellera UVM, as well as C/C++, VMM, Perl, and others. The previous ML solution enabled the standard connections but had some limitations. Among them were a focus on "quick-stitch" integration, which allowed for data communication but required significant additional coding to synchronize it, and the fact that the solution was built primarily for the Incisive Enterprise Simulator.

In this UVM ML OA video, Bryan Sniderman, Verification Architect at AMD, introduces the requirements that drove the development, the limitations of existing solutions, and the features you can expect. In the video, Bryan describes how the new solution enables hierarchical integration of the frameworks, seamless phasing, and seamless configuration, and how it runs on multiple simulators.

You can also learn more about the solution in our webinar, “Introducing UVM Multi-Language Open Architecture,” archived on Cadence.com. If you are at DAC, stop by the Cadence Theater on Wednesday at 4:30pm to hear Mike Stellfox present the solution and take part in our Q/A that will follow.  Of course, you can also stop by the booth to learn more as well or send an email to support_uvm_ml@cadence.com if you have any questions.  Finally, note that you will need to register in the Accellera Forums to download the UVM ML OA and that registration is open to all.

Think of the UVM ML OA as a new beginning.  As you read through and watch the background materials, you’ll probably see a mix of exciting new features and opportunities to further improve the solution.  We welcome that input.  The solution you see here represents a solid foundation, but there is more that we can do and we are happy to expand the collaboration to bring in those new ideas.

 

=Adam Sherer on behalf of the AMD and Cadence collaboration

 

 

How Can You Continue Learning About Advanced Verification at Your Desk?


How much time do you spend "playing" and "learning" before you try a new EDA tool, feature, or flow?
Do you really take a training class and sift through the documentation or books about the subject before you start project work? Or are you the type who has the knack of figuring things out on your own by taking a deep dive, head first?

Learning is an iterative and repetitive process.  Human beings spend most of their lives learning through a structured learning program in their school years, then an expensive and elective college adventure, leading to years of learning during their professional lives.  The big challenge that I have faced with learning is how to find the right learning vehicle that helps me discover what I didn't already know in a short period of time.   If you struggle with this aspect, you should look at Cadence Rapid Adoption Kits (or, RAKs).

Rapid Adoption Kits from Cadence help engineers learn foundational aspects of Cadence tools and design and verification methodologies using a "DIY" approach.   Don't get me wrong, instructor-led, structured training programs work beautifully if you can invest the time and money. But there is always demand for learning something simply and quickly in some corner of the world. 

Today, we have made available eight RAKs focused to help our users learn various aspects of digital IP and SoC functional verification methodologies and tools.

The RAKs provide an introduction to state-of-the-art verification solutions, including the Universal Verification Methodology (UVM), based on the industry-standard UVM Reference Flow donated by Cadence, for digital and mixed-signal verification using Incisive Enterprise Simulator and the Metric-Driven Verification (MDV) methodology. For implementation, the flow uses Incisive Enterprise Manager, as well as SoC verification techniques such as I/O connectivity checking using Incisive Enterprise Verifier and optimization of simulation performance for large SoCs.

The examples referenced in the Rapid Adoption Kit exercises are based on the Cadence SoC Verification Kit.  You can view presentations, app notes, videos and/or download the package that also contains lab exercises, relevant scripts and instructions.

Download your RAK today at http://support.cadence.com/raks.

Happy Learning!

Umer Yousafzai

The Art of Modeling in e


Verification is the art of modeling complex relationships and behaviors. Effective model creation requires that the verification engineer be driven by a curiosity to explore a design's functionality, anticipate how it ought to work, and understand what should be considered an error. The model must be focused and expressed as clearly as possible, as it transitions from a natural language to a machine-understandable artificial programming language. Ideally, the process should be aided by the modeling language itself.

In this article, we'll highlight such a modeling process, one that describes the structure and rules of the popular Sudoku puzzle. A Sudoku puzzle is a three-dimensional problem (lines, columns, and boxes), accompanied by a set of rules that define its full solution space.

Defining the data structure is the first step in our modeling process. The playing field consists of a box of exactly N by N fields, where N may be an arbitrary square number.

The game is played over lines, columns, and boxes, each of which contains a set of symbols. Each line, column, and box holds N elements, so we need N distinct symbols shared across the playing field. There are N lines, N columns, and N boxes.

The actual rules are that each line, column, and box contains every symbol exactly once. Duplications and omissions are not allowed.

First, we want to create a configurable list of symbols. In e, we do this by creating a list and constraining that list properly:

symbols_l: list of uint(bits: 32);

keep SYMBOLS_L is for each in symbols_l { it == index + 1; };

To represent a set of N lines with N elements, we declare a two-dimensional matrix and ensure that this matrix has lines with one of each of the defined elements:

matrix_lin: list of list of uint(bits: 32);

keep MATRIX_LINES_C is for each in matrix_lin { it.is_a_permutation( symbols_l ); };

Now we do the same thing for the columns:

matrix_col: list of list of uint(bits: 32);

keep MATRIX_COLUMNS_C is for each in matrix_col { it.is_a_permutation( symbols_l ); };

And we'll do the same thing for the boxes.

matrix_box: list of list of uint(bits: 32);

keep MATRIX_BOX_C is for each in matrix_box { it.is_a_permutation( symbols_l ); };

Now we constrain the first dimension of each matrix to ensure that we are generating the right number of lines, columns, and boxes:

keep MATRIX_SIZES_C is all of {
   matrix_lin.size() == symbols_l.size();
   matrix_col.size() == symbols_l.size();
   matrix_box.size() == symbols_l.size();
};

The only thing now left to do is to connect the three different fields together:

keep CONNECT_LINE_COLUMN_C is
   for each (line) using index (i_y) in matrix_lin {
      for each (x) using index (i_x) in line {
         matrix_lin[i_y][i_x] == matrix_col[i_x][i_y];
      };
   };

Connecting the boxes with the lines and columns requires some thinking. We already described and constrained all of the boxes; however, mapping the boxes to columns and lines requires some arithmetic. We must first determine the strides needed to identify the box boundaries within the line coordinates. This is done by calculating the square root of N, which we will call n_sqrt. In terms of mapping this to the line coordinates, this means that we will have a new box every n_sqrt elements:

n_sqrt: uint(bits: 32);

keep FIELD_SIZE_C is symbols_l.size() == n_sqrt*n_sqrt;

Let's assume N := 9 and n_sqrt := 3. The resulting line-to-box mapping is shown below (each range pairs element-wise, e.g. line[0..2]==box[0][0..2] means line[0]==box[0][0], line[1]==box[0][1], line[2]==box[0][2]):

Line 0:  line[0..2]==box[0][0..2]   line[3..5]==box[1][0..2]   line[6..8]==box[2][0..2]
Line 1:  line[0..2]==box[0][3..5]   line[3..5]==box[1][3..5]   line[6..8]==box[2][3..5]
Line 2:  line[0..2]==box[0][6..8]   line[3..5]==box[1][6..8]   line[6..8]==box[2][6..8]
Line 3:  line[0..2]==box[3][0..2]   line[3..5]==box[4][0..2]   line[6..8]==box[5][0..2]
Line 4:  line[0..2]==box[3][3..5]   line[3..5]==box[4][3..5]   line[6..8]==box[5][3..5]
Line 5:  line[0..2]==box[3][6..8]   line[3..5]==box[4][6..8]   line[6..8]==box[5][6..8]
Line 6:  line[0..2]==box[6][0..2]   line[3..5]==box[7][0..2]   line[6..8]==box[8][0..2]
Line 7:  line[0..2]==box[6][3..5]   line[3..5]==box[7][3..5]   line[6..8]==box[8][3..5]
Line 8:  line[0..2]==box[6][6..8]   line[3..5]==box[7][6..8]   line[6..8]==box[8][6..8]

 

 

This reveals the pattern:

line[i_x] == box[((i_y/3)%3)*3 + i_x/3][(i_y%3)*3 + i_x%3]

The generalized mapping constraint would hence be:

keep CONNECT_LINE_BOX_C is
   for each (line) using index (i_y) in matrix_lin {
      for each (x) using index (i_x) in line {
         matrix_lin[i_y][i_x] == matrix_box
               [((i_y/n_sqrt)%n_sqrt)*n_sqrt + i_x/n_sqrt]   // box index
               [(i_y%n_sqrt)*n_sqrt + i_x%n_sqrt];           // position within the box
      };
   };
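As a cross-check of the arithmetic, the whole rule set can be mimicked in a few lines of Python (a sketch, not e; the shifted-row grid construction is our own illustrative choice): build a known valid 9x9 solution, derive the columns and boxes with the same index expressions used in the constraints above, and confirm that every line, column, and box is a permutation of the symbols.

```python
# Validate the Sudoku rules on a known solved grid, using the same
# line->box index arithmetic as the CONNECT_LINE_BOX_C constraint.
n_sqrt = 3
N = n_sqrt * n_sqrt
symbols = set(range(1, N + 1))  # symbols_l: values 1..N

# A classic valid Sudoku solution built by shifting a base row.
matrix_lin = [[(i * n_sqrt + i // n_sqrt + j) % N + 1 for j in range(N)]
              for i in range(N)]

# CONNECT_LINE_COLUMN_C: matrix_col is the transpose of matrix_lin.
matrix_col = [[matrix_lin[i_x][i_y] for i_x in range(N)] for i_y in range(N)]

# CONNECT_LINE_BOX_C: route each cell into its box.
matrix_box = [[0] * N for _ in range(N)]
for i_y in range(N):
    for i_x in range(N):
        box = ((i_y // n_sqrt) % n_sqrt) * n_sqrt + i_x // n_sqrt
        pos = (i_y % n_sqrt) * n_sqrt + i_x % n_sqrt
        matrix_box[box][pos] = matrix_lin[i_y][i_x]

# is_a_permutation(symbols_l) for every line, column, and box:
for matrix in (matrix_lin, matrix_col, matrix_box):
    assert all(set(group) == symbols for group in matrix)
```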

As you can see, e lets you capture data structures and rules for complex scenarios in a concise way. Layering constraints is the key to creating stimulus, and as a verification engineer you will spend a good deal of project time doing exactly this.

The above code is legal, valid e code. However, because the constraints are quite complex, you should run the generation linter after loading your e code and before generating your environment, to avoid possible ICFS errors. That, however, is an exercise for a different blog article.

 

Feel free to comment on the code and the process above. Perhaps you'll find a different, flexible way to describe the Sudoku game.

Daniel Bayer

 

Fujitsu Gets 3x Faster Regression with Incisive Simulator and Enterprise Manager


Verification regression consumes expensive compute resources and precious project time, so any speed-up has both a technical and business impact. As announced July 17, Fujitsu was able to improve both the compute resource and project time by using Cadence Incisive products and working closely with Cadence field resources to deploy them.  Results:  1.5x faster per test, 3x faster regression overall, and 30x storage reduction.  Wow.

The first step was to optimize each test.  Fujitsu upgraded to the Incisive 12.1 release (shipped June 2012) and applied a feature called "zlib".  This feature compresses the simulatable snapshot. The smaller executable is written faster, occupies less disk space, and loads faster.  Together with the performance improvements available out-of-the-box with the 12.1 release, each test was able to run 1.5x faster, on average.  The Cadence team expects further gains when Fujitsu moves to the latest 13.1 release.

The next step was to apply a technology called incremental elaboration.  The technology allows one or more elaborated "objects" to be created and then linked prior to simulation.  For an individual engineer, the technology means you can link the few blocks you change to the much larger subsystem or system without re-elaborating the unchanged code.  

Fujitsu employed the technology in a slightly different use model.  In regression, there is a matrix of tests and DUT configurations.  In Fujitsu's case, there were 190 tests but only 24 unique Standard Delay Format (SDF) test scenarios.  Before the incremental elaboration was applied, each test scenario was compiled and elaborated with each DUT configuration, resulting in 190 separate elaborations.  When the incremental elaboration was applied, the 24 SDF primary elaborations were linked to the appropriate DUT.  The resulting reduction in compute time for elaboration and storage for the snapshots was combined with the individual test improvements to yield 3x total regression time speed-up and 30x less disk storage.

The final step was automating this process. The single-elaboration approach is slower but straightforward because each configuration is created anew. Manually integrating the incremental "primaries" is challenging when there are 190 unique tests.

Fujitsu automated this process by applying the Incisive Enterprise Manager (IEM) in several ways.  First, the IEM test runner was able to automatically build the appropriate incremental primaries and link them for each test run.  Second, IEM was able to detect whether an individual test passed or failed, eliminating the "eye-ball" check of the log file or waveforms.  Finally, IEM was able to aggregate the results back to a verification plan (vplan) to show overall project-level progress for the entire regression.

What does the future hold? Newer releases of Incisive Enterprise Simulator add more out-of-box performance improvements, black-boxing features, and the ability to link multiple primaries, which will make regression faster and automation more important. Keeping pace, the Incisive Enterprise Manager adds new analysis features to better automate the overall process.

Call us.  We can do the same for you.

=Adam Sherer, Incisive Product Management Director 

  

New Specman Coverage Engine (Part II) - Using Instance-based Coverage Options for Coverage Parameterization


In the last coverage blog, we showed how the extensions of covergroups under when subtypes can help us write a reusable per-instance coverage.

We described a test case where a packet generator unit can create packets of different sizes. The packet generator unit has a field that describes the maximum size of any packet that can be generated by the packet_generator instance:

type packet_size_t: [SMALL, MEDIUM, LARGE, HUGE];

unit packet_generator{
   max_packet_size: packet_size_t;
   event packet_generated;
   cur_packet: packet;
   generate_packet() is{
      gen cur_packet keeping {it.size.as_a(int) <= max_packet_size.as_a(int)};
      emit packet_generated;
   };
};

 

We defined a covergroup that is collected per each instance of packet_generator, to ensure that each packet generator creates packets of all relevant sizes:

 

extend packet_generator{
   cover packet_generated using per_unit_instance is{
      item p_size: packet_size_t = cur_packet.size;
   };
};

 

Then we refined the group's instances according to their actual subtypes, so that irrelevant packet sizes are ignored. This solution included setting a different fixed ignore condition for each subtype:

 

extend packet_generator{
   when SMALL'max_packet_size packet_generator{
      cover packet_generated is also{
         item p_size using also ignore = p_size.as_a(int) > packet_size_t'SMALL.as_a(int);
      };
   };
   when MEDIUM'max_packet_size packet_generator{
      cover packet_generated is also{
         item p_size using also ignore = p_size.as_a(int) > packet_size_t'MEDIUM.as_a(int);
      };
   };
   // ... Same for the other max_packet_size values
};

 

However, if we take a close look at the extensions under the subtypes, we can identify a uniform pattern for all the extensions:

 

item p_size using also ignore = p_size.as_a(int) > <max_packet_size field value of this subtype>

 

This pattern indicates that defining ignored values in a parameterized manner (that is, ignore all size values that are bigger than the value of the max_packet_size field of the instance) is more suitable here.

And as of Specman version 12.2, we have the appropriate syntax for doing exactly that:

 

extend packet_generator{
   cover packet_generated is also{
      item p_size using also instance_ignore = p_size.as_a(int) > inst.max_packet_size.as_a(int);
   };
};

 

The above code illustrates two new concepts: first, the use of the instance_ignore item option instead of the ignore option; and second, the use of a special field named "inst" in the instance_ignore expression.

Parameterized Instance-Based Coverage Options

In previous versions, Specman had four coverage options that defined what would be included in the coverage model:

  • no_collect group option - excludes groups / covergroup instances from the model.
  • no_collect item option - excludes items from the model.
  • ignore / illegal item options - exclude specific bucket (bin) values from the model.

 

In Specman 12.2, we added instance-based versions of these four coverage options:

  • instance_no_collect group option - for selectively refining which instances of the covergroup are disabled.
  • instance_no_collect item option - for selectively refining from which group instances the item is excluded.
  • instance_ignore / instance_illegal item options - for selectively refining which of the item's buckets are ignored or illegal under each coverage instance.

 

When using these instance-based options, you can use a special field, named "inst", to reference the relevant unit instance of each coverage instance and get the values of that instance's configuration fields. Specman assigns the "inst" field the relevant unit instance and then computes the expression separately for each coverage instance.

As the above description indicates, the four instance-based options can be used to apply different behaviors to different instances of the same covergroup. If there is a need to apply a common behavior to all instances of the covergroup, the original "type-based" options are more appropriate. For example, use the no_collect item option, not the instance_no_collect option, to remove base items of a cross item from the model.

Team Specman

That Cowbell Must be Registered – Introducing the UVM SystemVerilog Register Layer Basics Video Series


In May of 2012 we launched the initial cowbell YouTube video series on the basics of UVM for SystemVerilog IEEE 1800 and e IEEE 1647.

This was followed by a video series on debugging with SimVision.

Then, we struck a different kind of cowbell by releasing a MOOC on Functional Verification on Udacity.

Now it is definitely time for more cowbells.

One aspect that was not covered in the UVM Basics series was the register layer. This new video series gives an overview of the concepts, components, and applications of the UVM register layer.


 

The new video series is broken up into twelve clips:

  1. Introduction
  2. Testbench Integration
  3. Adapter
  4. Predictor & Auto Predict
  5. Register Model & Generation
  6. IP-XACT
  7. Register Model Classes
  8. Register API & Sequences
  9. Access Policies
  10. Frontdoor & Backdoor
  11. Predefined Sequences
  12. Demonstration

Go ahead and register your cowbells!

Axel Scherer

Incisive Product Expert Team
Twitter, @axelscherer


New Specman Coverage Engine (Part III)—Use of Extension Under "when" vs. Using Instance-Based Options


In both previous coverage blog posts (Part I and Part II), we showed two solutions for refining instance-based coverage in a reusable way, and we demonstrated a case where using the instance_ignore option is more suitable than the extension under when solution.

Now, let us modify the requirement a little, by adding a new item to the covergroup:

extend packet_generator{
   cover packet_generated is also{
      item p_length: uint(bits:4) = cur_packet.length;
   };
};

 

The length of the packet depends on the value of the size field according to the following constraints:

extend packet{
   length: uint(bits:4);
   keep size == SMALL => length in [0..2];
   keep size == MEDIUM => length in [3..6];
   keep size == LARGE => length in [7..10];
   keep size == HUGE => length in [11..15];
};

 

So again, for each packet_generator, some of the higher length values might be irrelevant due to the max_packet_size constraint.

We can set the ignored values using either of the following techniques:

  • Using the instance_ignore option:

cover packet_generated is also{
   item p_length using also instance_ignore =
            (((inst.max_packet_size == SMALL) and (p_length > 2)) or
             ((inst.max_packet_size == MEDIUM) and (p_length > 6)) or
             ((inst.max_packet_size == LARGE) and (p_length > 10)));
};

 

  • Or by extending the covergroup under subtypes:

when SMALL'max_packet_size packet_generator{
   cover packet_generated is also{
      item p_length using also ignore = (p_length > 2);
   };
};

when MEDIUM'max_packet_size packet_generator{
   cover packet_generated is also{
      item p_length using also ignore = (p_length > 6);
   };
};

when LARGE'max_packet_size packet_generator{
   cover packet_generated is also{
      item p_length using also ignore = (p_length > 10);
   };
};
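The two refinements really are equivalent: they ignore exactly the same p_length buckets for every subtype. A quick exhaustive check (Python, not e; the thresholds come from the length constraints above, with HUGE modeled as threshold 15 since nothing is ignored for it):

```python
# Compare the single instance_ignore expression with the per-subtype
# ignore thresholds; they must agree for every (subtype, length) pair.
SIZES = ["SMALL", "MEDIUM", "LARGE", "HUGE"]

def ignored_by_instance_ignore(max_packet_size, p_length):
    return ((max_packet_size == "SMALL" and p_length > 2) or
            (max_packet_size == "MEDIUM" and p_length > 6) or
            (max_packet_size == "LARGE" and p_length > 10))

# Thresholds from the when-subtype extensions; HUGE has no extension,
# i.e. nothing is ignored, which threshold 15 encodes for a 4-bit length.
THRESHOLD = {"SMALL": 2, "MEDIUM": 6, "LARGE": 10, "HUGE": 15}

def ignored_by_when_subtype(max_packet_size, p_length):
    return p_length > THRESHOLD[max_packet_size]

for size in SIZES:
    for p_length in range(16):  # p_length is uint(bits: 4)
        assert (ignored_by_instance_ignore(size, p_length)
                == ignored_by_when_subtype(size, p_length))
```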

 

Here we recommend using the extension under when subtype code (the second bulleted option above), since the ignore expressions it evaluates are much simpler than the instance_ignore expression.

In some cases, only one of the solutions can be used:

  • A different setting of one of the other coverage options (for example weight) for each instance can only be achieved by extending the covergroup under when.

For example, if we want to have a larger weight for packet generators that can generate any size of packet, we need to add the following code:

when HUGE'max_packet_size packet_generator {
   cover packet_generated using also weight=2;
};

 

  • On the other hand, when collecting a covergroup under instances of a unit that is not the definition type of the covergroup (using the per_unit_instance=<other_type> group option), extending under the when subtype cannot be applied. In these cases, only the use of the instance-based options is possible.

For example, suppose that instead of defining the covergroup under the packet_generator unit, we would have defined it under the packet struct (but still collect it per instances of packet_generator):

extend packet {
     cover packet_generated using per_unit_instance=packet_generator is {
        item p_size: packet_size_t = cur_packet.size;
     };
};

 

Now the covergroup can only be extended under the packet type, but we'd like to control the ignored values of its items according to a configuration field of the packet_generator unit. So extension under when will not help us here.

But since instance-based options have a reference to the collection unit type (packet_generator) instance via the inst field, they can be used in the same manner that they are used when the covergroup is collected per instances of its declaration unit type. 
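For instance, continuing the example above, an instance_ignore expression declared under packet can still consult the generator's configuration through inst. The following is only a sketch; the p_size item and max_packet_size field are assumed from the earlier definitions:

```e
extend packet {
   cover packet_generated is also {
      // 'inst' refers to the packet_generator instance under which the
      // group is collected, even though the group is declared under packet
      item p_size using also instance_ignore =
             (((inst.max_packet_size == SMALL)  and (p_size != SMALL)) or
              ((inst.max_packet_size == MEDIUM) and (p_size in [LARGE, HUGE])) or
              ((inst.max_packet_size == LARGE)  and (p_size == HUGE)));
   };
};
```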

Erez Bashi 

Configurable Specman Messaging Webinar Archive Available Now


Configurable Specman Messaging for Improved Productivity

Webinar Archive Available Now!

Hello Specmaniacs:


Ever wondered how to switch on all messages, or how to switch all of them off? Or get confused by the output from the "show message" command?

You're not alone. Many users and even Cadence R&D engineers have struggled with this. The main reason for the confusion is that messages are controlled by loggers, and loggers could be anywhere (apart from the sys logger). In 12.2 we have introduced a new infrastructure to configure messages, which is not based on loggers but on the unit hierarchy of your testbench.

If you missed the Configurable Messages Webinar delivered in July, here's another opportunity for you to view the archived webinar. Hannes Froehlich, a Solution Architect in the Cadence Functional Verification R&D team, presents how you can now control your messages based on the location in the verification hierarchy (unit tree) from which the messages were emitted. This has many benefits over the existing logger-based message infrastructure. In the webinar we highlight how the new infrastructure can be used, and how it fixes the issues we had with loggers.

So don't delay and view the archived webinar to:

  • Get a basic introduction to the new message configuration system in Specman/e
  • Understand how messages can now be configured based on the component hierarchy
  • Find out about the new command switches and options to configure messages
  • Learn about the new procedural message configuration APIs in Specman/e

View Now: http://www.cadence.com/cadence/events/pages/archive.aspx 

Team Specman

e Macro Debugging

When creating a testbench using the MDV methodology, you want to write intelligent code whose behavior can be easily modified.

Using e macros can greatly improve your productivity by raising the level of abstraction at which these testbenches are created and used. With e macros, you can reduce the amount of code and simplify usage of code that needs to be used in several places in the testbench.

e macros are powerful code generators whose key benefit is their ability to extend the e language.

What is called "macro" in some other languages might be merely text replacement, such as replacing all occurrences of some text "A" with the text "B".

Macros in e can do this too, but they are capable of far more sophisticated things. These usages might be more complicated to debug, so Specman allows us to debug the generated code, instead of the macro definition code itself.

In this document, we are going to explore ways to debug macros in various stages of the simulation.

Let's consider the following test case:

 

Here we have a macro (define as) which simply creates a client object and adds it to a list of clients. (Note: The parentheses and quotation marks that enclose <x'string> prevent the preprocessor from considering all the parameters that come after the <x'string> declaration in the program code as part of the string.)
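The original example only appears as a screenshot, so the following is purely a hypothetical sketch of what such a define as macro could look like. The client struct, its fields, and the add_client macro name are all assumptions, not the original code:

```e
<'
// Hypothetical sketch -- not the original screenshot code
struct client {
    name : string;
    num  : uint;
};

define <create_client'action> "create_client (<x'string>) num <x'num>" as {
    var c: client;
    gen c keeping {
        it.name == (<x'string>);   -- parentheses/quotes keep the string intact
        it.num  == <x'num>;
    };
    add_client(c);                 -- the second macro, which adds to the list
};
'>
```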

The addition of client to the list is done via another macro: 

 

And the macro call is made here:

Parsing time errors

The Specman parser gives clear error messages for syntax or parser issues at parse time. For example, as seen below, we assign <x'name> (instead of <x'num>) to ‘it.num', but we do not have any such argument in the match expression.

This results in the following error:

The error here is pretty straightforward to fix. So let us correct the macro code (change line #14 to "it.num  == <x'num>") and re-run it.

Macro expansion errors (at load time)

There can be cases where the macro parses correctly but encounters issues after it is expanded at load time. In such cases, the code is still not loaded.

We can use the "trace reparse" command to debug such issues. Let us again look at our example. The macro is modified a bit, as shown below:

 

Note: The code "it.name==<3>" at lines #9 and #13 parses well, but will fail to load with an error that doesn't give much information (the message merely says that Specman expects a string for "i"). So let us use "trace reparse" (before the load phase). This re-parses the code and gives us the following helpful message, from which we can understand the root cause of the error.

 

 

Run-time errors

Run-time errors will occur on the expansion code that was created by the macro. In some cases, you might not get errors per se, but might see unintended or incorrect functionality. This is just like any other bug in your testbench, but the actual code is hidden under the macro definition.

Let's look how this reflects in our example. Once we are done loading and start to run the test, we get the following output:

For some reason, we generate NULL clients all the time. Is that expected behavior? Ummm... it doesn't seem so. So let's check what went wrong.

Like any other e code, we will use the source debugger to find the root cause:

1.       Put a breakpoint on the macro call and then step into the macro call itself, so that it breaks when the macro is called.

  

As seen above, a breakpoint is applied at line #37.

 

2.       Run the simulation again after adding the breakpoint. It automatically opens the source browser at the breakpoint. If you step into the macro, the debugger will take you to the macro definition code:

 

3.       Click on the macro debug mode button to select expansion mode. This will expand the macro code to the real code Specman is running. Now you can keep clicking the ‘step into' button to see the flow of execution.

 

 

We can set a ‘watch' on x and x1 to see how they take their values. After setting the ‘watch', run a few more steps and the Watch window should show the following values.

 

 

This shows that the client ‘x' (NOT x1) was generated. Since x1 is empty, it keeps adding empty items to the list. This clarifies what was causing the list of NULL clients.

Problem solved!

To summarize, macros in e are a very powerful tool. You need to know how to use them, and especially how to debug them. Having the correct tools makes this task much easier and intuitive, and prevents the frustration of debugging code you cannot even see.

Happy Debugging!

Mahesh Soni

Avi Farjoun

Generic Dynamic Run-Time Operations with e Reflection, Part 1


Untyped Values and Value Holders

The reflection API in e not only allows you to perform static queries about your code, but it also allows you to perform dynamic operations on your environment at run time. For instance, you can use reflection to examine or modify the value of a field, or even invoke a method, in a generic way. This means that if the specific field or method name is unknown a priori, but you have the reflection representation of the field or the method at hand, the reflection API provides you with the capability to perform the needed operation.

While this is a very strong and helpful capability, it should be used with care, to avoid unexpected results or even crashes. In this series of blogs, I will describe how to use some of these capabilities, as well as some tricky points which require caution in use.

In this first blog of the series, let's look at two important concepts with which you should be familiar: untyped values and value holders.

Untyped is a predefined pseudo-type in e, serving as a place-holder for any value of any type, which may be a scalar, a struct, a list, or any other valid e type. To assign a value to a variable of type untyped, you use the predefined pseudo-method unsafe(). For example (assuming my_packet is a struct field of type packet):

var a1: untyped = 5.unsafe();

var a2: untyped = my_packet.unsafe();

In this example, we assigned the numeric value 5 into untyped variable a1, and the struct value into untyped variable a2. We also use unsafe() to assign an untyped value back to a variable or a field of the original type, for example:

my_packet = a2.unsafe();

However, it is important to remember that the untyped variable itself does not know the actual type of the value assigned to it via unsafe(). Therefore, when you convert a value to untyped, it is your responsibility to later convert it to the correct original type. Thus, you need to avoid mistakes like this:

my_packet = a1.unsafe();  // This is bad code!

Here we take the value of the untyped variable a1 and try to assign it to my_packet. However, the value assigned previously to a1 was a scalar, not an instance of packet. So, this operation is illegal. The code would compile fine, but at run time it would most likely crash.

In simple cases, to avoid such mistakes you just need to be careful. In more complex cases, you can use a value holder. A value holder is a special object in e, of the pre-defined type rf_value_holder, and it allows you to keep a value of any given type along with its type information. So, as opposed to untyped, here the original type of the value is known. There are several reflection methods that operate on value holders. To create a value holder, we use the create_holder() method of rf_type, for example:

var vh1: rf_value_holder = rf_manager.get_type_by_name("int").create_holder(5.unsafe());

Here we created a value holder that keeps the value 5 of the type int. Note that since the create_holder() method itself can get a value of any type, it treats it as untyped; that's why we had to use unsafe() here. But as long as you call it on the correct rf_type (in this case, the one that represents the int type), it is fine.

Later we can query the type of the value kept in the holder, using the get_type() method:

print vh1.get_type();

or the actual value, using the get_value() method:

var x: int;
if vh1.get_type() == rf_manager.get_type_by_name("int") then {
     x = vh1.get_value().unsafe();
};

An important tip:

In general, conversions from any type to untyped and vice versa must only be done using unsafe(). It is a common mistake to use the explicit casting operator as_a(). Doing so leads to unexpected results (and consequently to confusion) and must be avoided. For example, the following causes an unexpected result.

var x: int(bits: 64) = 5;
var a: untyped = x.unsafe();
x = a.as_a(int(bits: 64));

In upcoming Specman releases (starting with 13.2), using as_a() with untyped is going to be completely disallowed through a deprecation process.

In the next blog of this series we will look at some actual dynamic usages of the reflection API, based on untyped values and value holders, as well as some helpful tips.

 

Yuri Tsoglin

e Language team, Specman R&D 

Coverage Unreachability UNR App - Rapid Adoption Kit


The Cadence Incisive Enterprise Verifier (IEV) team recently developed a self-help training kit - a Rapid Adoption Kit (RAK) - to help users gain practical experience applying IEV's Coverage Unreachability (UNR) App. The RAK also helps users see the benefits of the different approaches: the UNR flow with and without initialization. The "Coverage Unreachability UNR App" RAK is now available on Cadence Online Support.

 

 

Given an existing simulation environment, assertions are automatically generated from the code coverage holes, and formal analysis is used to detect any unreachables. Unreachable code coverage is detected with each approach, and results are compared between runs using IMC to locate and view the unreachables. You will also learn how to set up the simulation to collect code coverage and dump a minimal reset waveform for initializing the UNR proof.

 

The key objective is to familiarize the user with the flow, by running:

1. Simulation to generate the coverage database (and optional waveform for formal analysis initialization)

2. Formal analysis on the simulation code coverage to detect the unreachables and generate an unreachables database with two setups: basic uninitialized and initialized

3. IMC to merge the generated unreachables database into the original simulation database and load the merged database to view and accept the unreachables

 

 


http://support.cadence.com/raks -> SOC and IP level Functional Verification

Rapid Adoption Kit Name             | Overview | Application Note(s) | RAK Database
Coverage Unreachability (UNR) App   | View     | Lab Instructions    | Download (2.6 MB)

We are also covering the following technologies through our RAKs at this moment:

  • Synthesis, Test and Verification flow
  • Encounter Digital Implementation (EDI) System and Sign-off Flow
  • Virtuoso Custom IC and Sign-off Flow
  • Silicon-Package-Board Design
  • Verification IP
  • SOC and IP level Functional Verification
  • System level verification and validation with Palladium XP

Please keep visiting http://support.cadence.com/raks to download your copy of RAK.

We will continue to provide self-help content on Cadence Online Support, your 24/7 partner for getting help in resolving issues related to Cadence software or learning Cadence tools and technologies. If you are signed up for e-mail notifications, you've likely noticed new solutions, Application Notes (Technical Papers), Videos, Manuals, etc.

Note: To access the above docs, click a link and use your Cadence credentials to log on to the Cadence Online Support website: http://support.cadence.com.

 

Happy Learning!

Sumeet Aggarwal


Covering Edges (Part I) – Cool Automation


With random generation, most fields tend to be quite well covered. But if a field's type has a wide value space -- for example, a 32-bit address -- then most likely not every one of the values up to 0xffffffff will be generated. As verification engineers, we know that bugs tend to hide at the edges. That is -- what will happen if the transfer is sent to the last address, 0xffffffff? The verification environment's challenge is guaranteeing that these edge cases will be covered.

Making sure that edge cases are generated is easily achieved with "select edges". For example:

extend transfer {
    // For ~half of the transfers, the address will be
    // 0 or 0xffffffff
    keep soft address == select {
        50 : edges;
        50 : others;
    };
};

 

This "select edges" is an old feature. What I want to show here is a small utility that answers the question "should I now go and define this select edges constraint on all fields?" -- which seems to be a very exhausting task...

For this, I suggest using e reflection to locate fields of interest -- for example, all fields whose range is larger than 0xffffff.

This method searches for fields defined in a given package whose range is larger than the given parameter, num_of_vals, and collects them:

find_wide_fields(package_name: string, num_of_vals: uint): list of rf_field is {
    var t: rf_type;
    for each rf_struct in rf_manager.get_user_types() {
        for each (f) in it.get_declared_fields() {
            // Do not add constraints to fields that
            //    - are not generate-able
            //    - were defined in a package other than what was requested
            if f.is_ungenerated() or
               f.get_declaration_module().get_package().get_name() != package_name {
                continue;
            };

            t = f.get_type();
            if t is a rf_numeric (nu) {
                if ipow(2, nu.get_size_in_bits()) > num_of_vals {
                    result.add(f);
                };
            };
        }; // for each field
    };
};

             

Once you have the list of fields of interest, you can do many things with it. For example, write into a file code similar to the "select edges" code shown above:

write_code(s : rf_struct, fs : list of rf_field) is {
    var my_file := files.open("cover_edges.e", "rw", "big fields");
    files.write(my_file, append("extend ", s.get_name(), " {"));

    for each in fs {
        files.write(my_file,
                    append("    // Field defined in ",
                           it.get_declaration_module().get_name(),
                           " @line ",
                           it.get_declaration_source_line_num()));
        files.write(my_file,
                    append("    // Field type is ",
                           it.get_type().get_name()));
        files.write(my_file,
                    append("    keep soft ", it.get_name(),
                           " == select {"));
        files.write(my_file, "        50 : edges;");
        files.write(my_file, "        50 : others;");
        files.write(my_file, "    };");
    };
    files.write(my_file, "};");
    files.close(my_file);
};

 

You could copy and modify the code above, using reflection to find fields by many more criteria -- e.g. all fields that have "address" in their names, all fields of specific types, anything your imagination might come up with...
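As a sketch of one such variation -- collecting all generate-able fields whose name contains "address" (the addr_fields list name is an assumption):

```e
var addr_fields: list of rf_field;
for each rf_struct in rf_manager.get_user_types() {
    for each (f) in it.get_declared_fields() {
        // '~' performs e string matching against the "/.../" pattern
        if not f.is_ungenerated() and f.get_name() ~ "/address/" {
            addr_fields.add(f);
        };
    };
};
```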

If you have any questions, or, even better, any suggestions for cool extensions of this example, please do share.

 

Efrat Shneydor 

Test Your Units Before Your Units Test You — Testing Your Testbench


Bugs are a part of life in any complex software development project. This is no different in the testbench development world.

Most bugs get discovered eventually. The question is: At which stage of the game are they discovered, and at what price?

Let's explore the option of testing parts of your testbench early on. At the lowest level, you can leverage unit testing -- an approach that has been successfully adopted in the general software development world. It consists of isolated, independent tests that each target a very small piece of code in order to test a specific behavior. Often these tests are applied directly to methods.

The next question is: What does it take to adapt unit testing to the testbench development effort? 

Fortunately, we are in luck. You can learn about unit testing for testbench development at two upcoming venues:

  • On December 12, 2013, in our webinar "Testing the Testbench" (register for this webinar)

  • In session 9.3, where Doug Gibson of Hewlett-Packard will present an industrial application of this approach.

 

Happy (unit) testing,

Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer


Practical Guide to the UVM for $15 - Virginia, There is a Santa!


Wondering what to get the verification engineer on your list?  You know, the one with the zealous love of SystemVerilog and UVM? It's the Practical Guide to Adopting the UVM, Second Edition for only $15!

The Practical Guide to the UVM is the most popular source of knowledge for the UVM.  The second edition, available since the beginning of 2013, has sold over 3500 copies. Authored by Kathleen Meade and Sharon Rosenberg, the book provides novice to expert knowledge on testbench methodology and how to apply UVM to solve verification problems.

To get your deeply discounted version, visit our self-publishing company, LuLu.com.  You can search for the book there or follow this direct link.

Once you get it, be sure to get the examples.  Kathleen posted them on the UVMWorld forums at Accellera.org.  The downloads are free!

So grab your copy while it's at this new low price.  Come mid-January, the price will pop back to $60.

Wow, this sounds sooooo much like a late-night commercial.  :-)

 

Happy  Holidays,

Your Cadence UVM team 

Generic Dynamic Run-Time Operations with e Reflection, Part 2


Field access and method invocations

In the previous blog, we explained what untyped variables and value holders are in e, and how to assign values to them and retrieve values from them. In this blog and the next, we will see how they can be used in conjunction with the reflection API to perform operations at run time.

Normally, when you declare fields in your e structs and units, you then procedurally assign values to those fields at some points and retrieve their values at others. When you declare a method, you call it with certain parameters and retrieve its return value for later use. All of this is fine when you deal with a specific field or method, and that is what you need most of the time.

But what if you want to perform some generic operation? For example, you may want--given anye object (of any struct or unit type, which is unknown upfront)--to go over all its numeric fields and print their values. Or, you may want to traverse the whole unit tree, and on every unit whose type has a specific method (given by name), call that method and print its result.

The Reflection API allows us to perform tasks like that in a fairly easy manner. Here are some reflection methods which are helpful for those tasks. Given an instance object, the following two methods allow you to get the reflection representation of the struct or unit type of the object.

  • rf_manager.get_struct_of_instance(instance: base_struct): rf_struct

This method returns the struct type of the object, disregarding when subtypes.

  • rf_manager.get_exact_subtype_of_instance(instance: base_struct): rf_struct

This method returns the most specific type, including when subtypes, of the object.

For example, for a red packet instance, get_struct_of_instance() will return the reflection representation of type packet, and get_exact_subtype_of_instance() will return the representation of type red packet.
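As a small sketch (assuming the packet struct with a color when-determinant mentioned above):

```e
var p: packet;
gen p keeping { it.color == red };

-- Represents the type 'packet', disregarding when subtypes
var base_t : rf_struct = rf_manager.get_struct_of_instance(p);

-- Represents the most specific subtype, 'red packet'
var exact_t: rf_struct = rf_manager.get_exact_subtype_of_instance(p);
```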

The following methods of rf_field allow, given an instance object of some struct, to set or get the value of the specific field of that object.

  • rf_field.set_value(instance: base_struct, value: rf_value_holder);
  • rf_field.set_value_unsafe(instance: base_struct, value: unsafe);
  • rf_field.get_value(instance: base_struct): rf_value_holder;
  • rf_field.get_value_unsafe(instance: base_struct): unsafe;

The set_value methods take the value passed as a parameter and assign it to the given field of the specified object. The get_value methods retrieve the value of the given field of the specified object and return it. There is a safe and an unsafe version of each method. The safe version uses a value holder, which already contains the type information for the value (as was explained in the previous blog), performs additional checks, and throws a run-time error in case of an inconsistency (for example, if the field does not belong to the struct type of the given instance). The unsafe version (the one with the _unsafe suffix) does not use a value holder and does not perform such checks; in case of an inconsistency, its behavior is undefined and might even cause a crash. Thus, you need to use it with care. However, the unsafe version is more efficient, and I recommend using it when possible.

Similar to the above rf_field methods, the following methods of rf_method, given an instance object of some struct, allow you to invoke a specific method of that object or to start a TCM.

  • rf_method.invoke(instance: base_struct, params: list of rf_value_holder): rf_value_holder;
  • rf_method.invoke_unsafe(instance: base_struct, params: list of unsafe): unsafe;
  • rf_method.start_tcm(instance: base_struct, params: list of rf_value_holder);
  • rf_method.start_tcm_unsafe(instance: base_struct, params: list of unsafe);

The invoke methods call the given method on the specified object and return the value returned from that method. If the given method has parameters, they should be passed as a list in the second parameter; the list size must exactly match the number of parameters the method expects to get. Similarly, the start_tcm methods start the given TCM on the specified object. As with the rf_field methods above, the difference between the safe and unsafe versions of these methods is that the safe one uses value holders and performs additional run-time checks, while the unsafe version is more efficient.

The following short example demonstrates the usage of the above methods. The following method gets an object of an unknown type (declared as any_struct) and a method name. It goes over all fields of the object whose type is int, and calls the method by the given name, passing the field value as parameter. For simplicity, we assume it is known that the method by the given name indeed exists and has one parameter of type int.

extend sys {
    print_int_fields(obj: any_struct, meth_name: string) is {
        // Keep the reflection representation of the int type itself
        var int_type: rf_type = rf_manager.get_type_by_name("int");
        // Keep the struct type of the object
        var s: rf_struct = rf_manager.get_exact_subtype_of_instance(obj);
        // Keep the method which is to be called
        var m: rf_method = s.get_method(meth_name);
        // Go over fields of the struct
        for each (f) in s.get_fields() do {
            // Is this field of type 'int' ?
            if f.get_type() == int_type then {
                // Retrieve the field value ...
                var value: untyped = f.get_value_unsafe(obj);
                // ... and pass it to the method
                compute m.invoke_unsafe(obj, {value});
            };
        };
    };
};

 

In the next blog in the series, we will discuss some additional relevant reflection methods, give several tips, and look at some more interesting examples.

 

Yuri Tsoglin

e Language team, Specman R&D 

ADI Success Verifying SoC Reset Using X-Propagation Technology - Video


Analog Devices Inc. succeeded in both speeding up simulation and improving debug productivity for verifying SoC reset.  In November 2013 at CDNLive India they presented a paper detailing the new technology they applied to reset verification and the eight bugs they found during the project.  We were able to catch up with Sri Ranganayakulu just after his presentation and captured this video explaining the key points of his paper.

Sri had an established process for verifying reset on his SoC.  The challenge he faced is one faced by many teams -- reset verification executed at gate level.  Why gate level?  It goes back to the IEEE 1364 Verilog Language Reference Manual (LRM).  At reset, the logic values in a design can be either 0 or 1, so a special state "X" was defined to capture this uncertainty.  The LRM defined how the logic gates in Verilog could resolve these X values to known values of 0 and 1 as they occur in the hardware.  Unfortunately, the LRM defined a different resolution of X values for RTL.  As a result, companies like ADI simulated at gate level to match the hardware definition. But with larger SoCs, the execution of those simulations became too long.  In addition, SoCs now have power-aware circuits that mimic reset functionality when they come out of power shutdown, increasing the number of reset simulations that have to occur.  A change was needed.

Incisive Enterprise Simulator provides the ability to override the RTL behavior to mimic the gate behavior, resulting in up to 5X faster reset simulation. That's the attraction of "X-prop" simulation. But that is not verification.  Verification requires the ability to plan and measure the reset sequences and to debug when issues are found.  Sri focused on the debug aspects of X-prop verification with debug tools in SimVision, distinguishing X values that are real reset errors from those X values that were artificially propagated in RTL. As a result, Sri found eight bugs in two projects in a shorter time than his previous approach.

In the Incisive 13.2 release, Cadence further improved this technology. The new release extends the language support for X-propagation and adds the ability to separate X values coming from power-down domains from the other two types in the previous paragraph.  In addition, the Superlinting Verification App in Incisive Enterprise Verifier now generates assertions that monitor for X values in simulation.  Since assertions also automatically create coverage, you now have an automated path to connect your reset verification to metric-driven verification (MDV) and your verification plan.

X-propagation in simulation is necessary to achieve performance for reset simulation. However, to get productivity for your reset verification, you need the automation from debug, verification apps, and enterprise planning and management.

Regards,

Adam Sherer 

Covering Edges (part II)—“Inverse Normal” Distribution

$
0
0

In the previous post, we used "select edges" to generate edge values for fields. But in many cases, what you really want to generate is not the exact edge, but values "near the edges". For example, for a field of type uint (bits: 24), generate many items whose values are in 0..4, and many in 0xfffff0..0xffffff. To achieve this, you can use what we might call an "inverse normal distribution", giving more weight to the edges.

Selecting "inverse normal" can be done by selecting a normal distribution around each edge:

extend transfer {
    keep soft address == select {
        10 : normal(2, 4);
        10 : normal(0xFFFFFD, 4);
        90 : others;
    };
};

 

 

Efrat Shneydor 

Cadence and AMD Add New UVM Multi-Language Features


The UVM Multi-Language Open Architecture open-source library was recently updated with new features.  The hallmarks of this solution continue to be the ability to integrate verification components of multiple languages and methodologies at the testbench level, expanding beyond simple connectivity at the more limited data level, and the multi-vendor support.

Interestingly, multi-language is a bit of a misnomer – the critical part of the name is Open Architecture.  For sure, this industry has verification IP written in multiple standard languages – SystemVerilog, SystemC, and e – but that isn’t the whole story.  If language defined the verification component, then AVM, VMM, OVM, and UVM verification components would all interoperate without any modification or glue code, because each one is written in the same language – SystemVerilog.  However, the code needed to be organized into libraries with generally accepted methodologies to create verification components that could be easily reused.  As a result, companies have created many well-verified components that need a lot of additional code to integrate into a coherent verification environment.  By coherent we mean an environment with organized phases, configuration, and control despite the different libraries.  When we add components from other languages, it's easy to see that simple data connections between the languages are quite necessary, but insufficient, to enable verification reuse.

The new UVM ML-OA 1.3 builds on the foundation established in June with the initial download posted on UVMWorld.  The important new feature is multi-language configuration.  With this new feature, users can configure integers, strings, and object values using the hierarchical paths established when the environment is constructed.  Wildcards are permitted but the interpretation is the responsibility of each integrated framework.   The release includes three new demos to help you become familiar with the new capability.  In addition, there are several ease-of-use enhancements aimed at making it easier to set up a multi-language environment and support for g++ 4.1 and 4.4.  The release notes and documentation in the 1.3 tarball have more details on the new features and how to use them.

UVM ML-OA goes beyond inter-language communication to provide the integration that allows verification components to work together in a coherent testbench.  The download is open source and known to run on all major simulators.

Cadence is also working with its partners to develop a portable UVM-SC adapter that will enable running SystemC verification environments with UVM-ML-OA using the SystemC support built into the simulator.  Cadence will test the adapter with the Incisive platform, and its partners will test it with the Mentor and Synopsys simulators.

So if you haven’t yet, come join the 2500 others who have downloaded UVM ML throughout its history and your verification reuse will be more productive.

 

=Adam Sherer, Incisive Product Manager
