Channel: Verification

Incisive Verification: Top 10 Things I Learned While Browsing Cadence Online Support Recently

There is always demand, in most corners of the world today, for learning and troubleshooting things simply and quickly. Most users of any product or tool want access to a self-service knowledge base so that they can troubleshoot issues on their own. They would rather not sit through a long, paid training class; they are the type who figure things out on their own by diving in head first.

In this quarterly blog, I will share what the teams across the Cadence Incisive verification platform have developed and shared on Cadence Online Support, http://support.cadence.com, in the last month of 2013 and the first month of 2014 to help verification and design engineers become comfortable and well versed with Cadence verification tools, technologies, and solutions.

Rapid Adoption Kits (RAKs) from Cadence help engineers learn foundational aspects of Cadence tools and design and verification methodologies using a "do-it-yourself" approach. Application notes (app notes), tutorials, and videos also aid in developing a deep understanding of the subject at hand.

Download your copies from http://support.cadence.com now and check them out for yourself. Please note that you will need Cadence customer credentials to log on to the Cadence Online Support http://support.cadence.com website.

1.     Reuse UVC for Acceleration - RAK

There are thousands of legacy UVCs, stable and reliable, developed over the last 15 years. It is ideal to reuse these environments when starting acceleration verification, rather than creating the whole verification environment from scratch.

This RAK provides a short overview of the process required for taking a UVC implemented in e and using it to verify a DUT running on an acceleration machine, such as Palladium. It describes the steps required to adapt the UVC to achieve the desired goal of acceleration verification: executing tests significantly faster than with RTL simulation.

Rapid Adoption Kit: UVM e : Reuse UVC for Acceleration - overview, application note(s), and RAK database download (0.4 MB)

 

2.     Acceleration Performance Boost - RAK

When employing acceleration verification, speed is a crucial aspect. Verification engineers strive for the highest possible performance while maintaining verification capabilities.

This RAK provides suggestions for advanced techniques for maximizing the performance of verification acceleration. It discusses the various interfaces between the simulator and the acceleration machine, and their effect on performance.

Rapid Adoption Kit: UVM e : Acceleration Performance Boost - overview, application note, and RAK database download (0.4 MB)

 

3.     Introduction to CPF Low-Power Simulation - RAK 

This RAK illustrates Incisive Enterprise Simulator support for the CPF power intent language. The RAK provides instructions on invoking a CPF simulation in Incisive Enterprise Simulator, and also provides an overview of SimVision debug capabilities and Tcl debug extensions. It also comes with a hands-on lab to examine CPF behavior in simulation.  

 

Rapid Adoption Kit: Introduction to CPF Low-Power Simulation - overview and RAK database download (1.7 MB)

 

4.     Introduction to IEEE-1801 / UPF Low-Power Simulation  - RAK

This RAK illustrates Incisive Enterprise Simulator support for the IEEE 1801 / UPF power-intent language. In addition to an overview of Incisive Enterprise Simulator features, SimVision and Tcl debug features, a lab is provided to give you an opportunity to try these out.

 

Rapid Adoption Kit: Introduction to IEEE-1801 / UPF Low-Power Simulation - overview and RAK database download (2.3 MB)

 

5.     Specman Simulator Interface Synchronization Debug Cookbook - App Note

The Specman Simulator Interface Synchronization Debug Cookbook is a guiding document for any engineer who wants to learn about Specman-simulator interface synchronization. It is a comprehensive document that includes a flowchart for mapping a problem and taking the correct steps to resolve it, plus a detailed section for each possible problem and its solution. The cookbook also enables power users to debug these kinds of issues independently.

6.     Loading Commands at Runtime for Verilog Tests - App Note

This app note on Loading Commands at Runtime for Verilog Tests illustrates how to convert directed Verilog tests into command files to enable a single compile flow, and shows the ability to use the save and restore feature of Incisive Enterprise Simulator.

The flow described in this note focuses on support for Verilog (IEEE 1364). This app note shows you different approaches to optimizing the execution and runtime of directed Verilog tests. It illustrates how to remove redundancy and how to run only the portions of a test that are of interest. The suggestions in this app note can be adapted to your particular setup. An example testcase is included.

7.     Incisive Enterprise Specman Elite Testbench Tutorial - Tutorial

The Incisive Enterprise Specman Elite Testbench Tutorial is also available online, so you can take advantage of this self-help tutorial.

The goal of the Specman tutorial is to give you first-hand experience in how the Specman system effectively addresses functional verification challenges. The tutorial uses the Specman system to create a verification environment for a simple CPU design.

8.      How to Detect Glitches in Simulation Using IES  - Video

The video "How to Detect Glitches in Simulation Using IES" discusses the common causes of glitches in gate-level simulation, along with techniques to detect and analyze glitches during simulation with Incisive Enterprise Simulator.

9.      Delay Modes Selection, and Their Impact in Netlist Simulation - Video

The video "Delay Modes Selection, and Their Impact in Netlist Simulation" discusses different delay modes in which netlist simulation can be done. It demonstrates different methods to select a delay mode and the impact of a selected delay mode on timings in simulation. 

10.  What's New in 13.2 Debug Analyzer and SimVision - Videos

Short demo videos are now available on the latest/greatest features of our 13.2 debug solutions.  You may want to review them yourself just as a refresher on the latest features of both SimVision and Incisive Debug Analyzer.

Both of these videos will be linked to in the "What's New in Debug" screen that is launched at SimVision/Debug Analyzer startup or accessible through the help menus.  

We will continue to provide self-help content on Cadence Online Support, your 24/7 partner for getting help in resolving issues related to Cadence software or learning Cadence tools and technologies. If you are signed up for e-mail notifications, you've likely noticed new solutions, app notes (technical papers), videos, manuals, etc.

Happy Learning!

Sumeet Aggarwal


e Language Editing with Emacs


Specman and e have been around for a while, and some clever people have developed a nice syntax highlighting package for Emacs. What does this package do? Well, have a look yourself:

 

Editing in Emacs with the Specman mode 

And

 

Editing in Emacs without the Specman mode 

As you can see, the Specman mode gives you syntax highlighting and automatic indentation. It detects comments and shows them in a different font or color if you like, adds end-comments (for example, after "};" you get a comment telling you which struct/unit was edited), inserts a newline after a semicolon, and more...

The Specman mode for Emacs used to be available here (www.specman-mode.com), but unfortunately this site is no longer actively maintained. If you do need a more recent version (e.g., if you want to run with Emacs 24.x or later), please download it from the related Cadence forum post.

Once you've downloaded and unzipped it, you need to set up Emacs or XEmacs to load the mode when you start the editor. The mechanics are slightly different in Emacs and XEmacs. For Emacs, edit the file <HOME>/.emacs and add the following:


;; indicate where the package is stored
(add-to-list 'load-path "~/xemacs/")
;; load the package
(load "specman-mode")
;; setup files ending in .e or .ecom to open in specman-mode
(add-to-list 'auto-mode-alist '("\\.e\\'" . specman-mode))
(add-to-list 'auto-mode-alist '("\\.ecom\\'" . specman-mode))

 Happy coding,

-Hannes Froehlich

Incisive vManager at DVCon - Come See It!


Have you heard the news? A new version of vManager was announced this week, right in time for DVCon. vManager has been completely re-architected as a database-driven environment, scaling to multiple users and supporting gigascale designs. And, with ever-growing verification requirements, there is now a need for highly coordinated verification teams. With 100x more scalability and 2x greater verification productivity, now is the time to learn about the best verification management solution in the industry, vManager - the best just got better!

The Incisive vManager solution is showcased on cadence.com, and there is a dedicated launch page you can visit for datasheets, whitepapers, videos, and more.  The direct link to that page is here - http://www.cadence.com/cadence/newsroom/features/Pages/vmanager.aspx?CMP=vManager_bb

And, for those of you going to DVCon this year (March 3 to March 6), you can see a live demonstration and speak to the experts about your verification challenges during the Exhibition hours. The DVCon Expo Hours are listed below:

     - Monday:  5:00 to 7:00PM
     - Tuesday:  2:30 to 6:00PM
     - Wednesday:  2:30 to 6:00PM

You can also sign up for a Metric Driven Verification (MDV) Tutorial on Thursday, which runs from 8:30 to Noon.  The abstract for the tutorial is located at the DVCon website (direct link here - http://dvcon.org/content/event-details?id=163-6-T ).  To get into the tutorial, you will need to register on the DVCon website.  A direct link to the DVCon registration options is here - http://dvcon.org/content/rates

The MDV Team at Cadence hopes to see you at DVCon 2014!

John Brennan
MDV Product Management Director

 

 

Resetting Your UVM SystemVerilog Environment in the Middle of a Test — Introducing the UVM Reset Package

In general, reset will be applied at different times within a test.

 

1.   Reset at the beginning of a test

In a typical UVM test you might start out by applying a reset, and then go on to configure your device, and subsequently, start traffic. The associated UVM environment, in particular its components, do not have to do anything special to support this type of test - Life is Good!

 

2.   Reset in the middle of a test

Now, let's change things and apply reset again, later on in the test, in order to determine that the device can transition in and out of the reset condition properly. In this case, your verification environment needs to contain additional infrastructure to support this type of test. Otherwise, for example, your test might produce invalid errors.

 

Reset-Aware Components

UVM components such as scoreboards, sequencers, drivers, monitors, and collectors need to handle an arbitrary occurrence of reset in a robust manner. This means you need to implement ways to gracefully terminate ongoing activity once reset is asserted, and to restart activity properly after reset drops. In other words, you need a reset-aware UVM component implementation.

 

Reset Package

Cadence provides an approach for this that works with the standard UVM library and leverages the UVM run_phase. In the testbench you add a reset monitor that notifies a reset handler, which in turn calls the reset-aware components so that they terminate and restart activity when needed (as shown below). The key part of the package is the utility library used to implement the reset handler.
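The notification flow described above is essentially an observer pattern: a monitor detects reset, a handler fans the event out, and each registered component stops or restarts itself. Here is a rough, language-neutral sketch of that flow (the class and method names are made up for illustration; they are not the UVM reset package's actual API):

```python
# Illustrative sketch of the reset notification flow (names are
# hypothetical, not the actual UVM reset package API).

class ResetAwareComponent:
    """A component that can gracefully stop and later restart activity."""
    def __init__(self, name):
        self.name = name
        self.active = False

    def terminate_activity(self):
        self.active = False   # e.g., kill sequences, flush scoreboards

    def restart_activity(self):
        self.active = True    # e.g., restart sequences, re-arm monitors

class ResetHandler:
    """Notified by the reset monitor; fans out to registered components."""
    def __init__(self):
        self.components = []

    def register(self, component):
        self.components.append(component)

    def reset_asserted(self):
        for c in self.components:
            c.terminate_activity()

    def reset_deasserted(self):
        for c in self.components:
            c.restart_activity()

# The reset monitor would call these when it observes reset changing
# on the interface.
handler = ResetHandler()
driver = ResetAwareComponent("driver")
scoreboard = ResetAwareComponent("scoreboard")
handler.register(driver)
handler.register(scoreboard)

handler.reset_deasserted()            # traffic running
assert driver.active and scoreboard.active
handler.reset_asserted()              # mid-test reset: everything stops
assert not driver.active and not scoreboard.active
```

In the actual package, this handler logic lives in the utility library and hooks into the standard UVM run_phase, as described above.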

 

 

The UVM reset package includes examples and documentation that show how this works in detail and how to use it. Cadence has contributed the reset package to Accellera's UVM world community so you can go ahead and check it out, and use it.

 

http://forums.accellera.org/files/file/111-cadence-reset-example-and-package/

 

Real-World Usage

Courtney Schmitt of Analog Devices has adopted this package and will present her experience at DVCon 2014 in San Jose at the poster session (and the associated paper) on Tuesday, March 4, 2014.

1P.7    Resetting Anytime with the Cadence UVM Reset Package

 

Reset away!

 

Axel Scherer

 

 

New Incisive Verification App and Papers at DVCon by Marvell and TI


If you're an avid reader of Cadence press releases (and what self-respecting verification engineer isn't?), you will have noticed in our Incisive 13.2 platform announcement  back on January 13th that Incisive Formal technology, with our new Trident cooperating multi-core engine, took top billing. But you would have needed to be very diligent to have followed the link in the press release to the Top 10 Ways to Automate Verification document that explained some other aspects of the Incisive 13.2 Platform.  There, weighing in at number 6, was a short description of our latest verification app, for register map validation. Verification apps apply combinations of formal, simulation and metric-driven technologies to mainstream verification problems. This approach puts the focus on the verification problem to be solved, rather than the attributes of the technology used to solve it. The Incisive verification apps approach is defined by the following principles:

  • Supplement a well-documented methodology with dedicated tool capabilities focused on a high-value solution to a specific verification problem
  • Use the appropriate combination of formal, simulation, and metric-driven technologies, aimed at solving the given problem with the highest efficiency
  • Provide significant automation for creating the properties necessary to solve the given problem, reducing the need for deep formal expertise
  • Provide customized debug capabilities specific to the given problem, saving considerable time and effort


Verification App for Register Map Validation
The new Register Map Validation app generates properties automatically from an IP-XACT register specification. You can exhaustively check a multitude of common register use cases, such as value after reset, register access policies (RW, RO, WO), and write-read sequences with front-door and back-door access. All of these sequences are shown in clear, easy-to-use debug views. Correct register map access and the absence of corruption are difficult and time-consuming to check sufficiently in simulation.
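To make those register use cases concrete, here is a toy Python model of the kinds of checks involved (value after reset, RW/RO/WO access policies, write-read sequences). This is purely illustrative; the app itself generates formal properties from the IP-XACT specification rather than running procedural checks like these.

```python
# Toy register model, for illustration only. The register names,
# policies, and WO read-back behavior are assumptions made for this
# sketch, not anything the app prescribes.

class Register:
    def __init__(self, name, reset_value, policy):
        self.name = name
        self.reset_value = reset_value
        self.policy = policy        # "RW", "RO", or "WO"
        self.value = reset_value

    def reset(self):
        self.value = self.reset_value

    def write(self, data):
        if self.policy in ("RW", "WO"):
            self.value = data       # writes to RO registers are ignored

    def read(self):
        return 0 if self.policy == "WO" else self.value

def check_register(reg):
    reg.reset()
    if reg.policy != "WO":
        assert reg.read() == reg.reset_value   # value after reset
    reg.write(0xA5)
    if reg.policy == "RW":
        assert reg.read() == 0xA5              # write-read sequence
    elif reg.policy == "RO":
        assert reg.read() == reg.reset_value   # write had no effect

for reg in (Register("ctrl", 0x00, "RW"),
            Register("status", 0x01, "RO"),
            Register("cmd", 0x00, "WO")):
    check_register(reg)
```

A simulation-based checker like this samples a handful of values per register; the formal properties the app generates prove the same behaviors exhaustively.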

The result is a reduction of verification set-up times and, combined with the Trident engine we mentioned before, huge reduction in execution times, reducing register map validation from weeks to days or even hours. But don't take my word for it - come to DVCon next week and hear Abdul Elaydi of Marvell, who will be presenting "Leveraging Formal to Verify SoC Register Map", and Rajesh Kedia of TI, who will be presenting "Accelerated, High-Quality SoC Memory Map Verification Using Formal Techniques", both on Wednesday, March 5.

Pete Hardee

Randomizing Error Locations in a 2D Array


A design team at a customer of mine started out with Specman for the first time, having dabbled with a bit of SystemVerilog. I can't reveal any details of their design, but suffice to say they had a fun and not-so-simple challenge for me, the outcome of which I can share. Unlike some customers (and EDA vendors) who think it's a good test for a solver to do sudoku or the N-Queens puzzle (see this TeamSpecman blog post http://www.cadence.com/Community/blogs/fv/archive/2011/08/18/if-only-gauss-had-intelligen-in-1850.aspx), this team wanted to know whether IntelliGen could solve a tough real-world problem...

The data handled by their DUT comes in as a 2D array of data bytes, which has been processed by a front-end block. The data in the array can contain multiple errors, some of which will have been marked as "known errors" by the front-end. Other "unknown" errors may also be present, but provided that the total number of errors is less than the number of FEC bytes, all the errors can and must be repaired by the DUT. If too many errors are present, it is not even possible to detect the errors, so the testbench must generate the errors carefully to avoid meaningless stimulus. It also needs to differentiate between marked and unmarked errors so that the DUT's corrections can be tested and coverage performed based on the number of each type of error.

This puzzle is rather more complex than the N-Queens one: multiple errors are permitted on any single column or row of the array, and each error has three possible states: none, marked, and unmarked. There is an arithmetic relationship between the error kinds: twice as many marked errors as unmarked errors can be corrected. Furthermore, unlike N-Queens, a test writer may wish to add further constraints, such as clustering all the errors into one row, fixing the exact number of errors, or allowing only one kind of error.

First we define an enumerated type to model the error kind:

By modelling the 2D array twice, once as complete rows and once as complete columns, we can apply constraints to a row or column individually, as well as to the entire array. We only look at whether to inject an error, not what the erroneous data should be (this would be the second stage). I've only shown the row-based model here, but the column-based one is identical bar the naming.

The row_s represents one row from the 2D array, with each element of "col" representing one column along that row. The constraints on num_known and num_unmarked limit how many errors will be present. These are later connected to the column-based model in the parent struct.

The effective_errors field and its constraints model the relationship between the known and unmarked errors, whereby twice as many known errors than unmarked errors can be corrected.

Next we define the parent struct which links the row and column models to form a complete description of the problem. Here "cols" and "rows" are the two sub-models, and the other fields provide the top-down constraint linkage.

The intent is that the basic dimensions are set within the base environment, and the remaining controls are used for test writing.

Next, we look at the constraints which connect the row and column models together. The first things to do are to set the dimensions of the arrays based on the packet dimensions, and to cross-link the row and column models. These are structural aspects that cannot be changed. The rest of the constraints tie together the number of errors in each row, column, and the entire array. By using bi-directional constraints, we are allowing the test writer to put a constraint on any aspect. 
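To make the intent of the model concrete, here is a rough Python analogue of error-map generation using simple rejection sampling. Everything here is an assumption for illustration: the names, the generation weights, and the simplified cost rule (a marked error costs one FEC byte, an unmarked error costs two, reflecting that twice as many marked errors as unmarked can be corrected). IntelliGen, by contrast, solves the constraints directly and bi-directionally.

```python
import random

# Error states for each byte position (names are illustrative)
NONE, MARKED, UNMARKED = 0, 1, 2

def gen_error_map(rows, cols, fec_bytes, rng=random):
    """Rejection-sample a rows x cols error map that stays correctable.

    Assumed simplified rule: a marked error (known location) costs one
    FEC byte, an unmarked error costs two.
    """
    while True:
        grid = [[rng.choices((NONE, MARKED, UNMARKED),
                             weights=(94, 4, 2))[0]
                 for _ in range(cols)]
                for _ in range(rows)]
        marked = sum(cell == MARKED for row in grid for cell in row)
        unmarked = sum(cell == UNMARKED for row in grid for cell in row)
        if marked + 2 * unmarked <= fec_bytes:   # correctable: keep it
            return grid, marked, unmarked

grid, marked, unmarked = gen_error_map(rows=8, cols=16, fec_bytes=16)
assert marked + 2 * unmarked <= 16
```

Note the difference from the e model: the rejection loop throws away illegal maps after the fact, whereas bi-directional constraints let a test writer constrain any aspect (a row, a column, the totals) and still generate legal maps directly.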

And that's it! With just that small amount of information IntelliGen can generate meaningful distributions of errors in a controlled way. Test writers can further refine the generated error maps with simple constraints that are actually quite readable:

 

Notice another little trick here: the use of a named constraint: "packet_mostly_correctable". This allows a test writer to later extend the error_map_s and disable or replace this constraint by name; far easier than figuring out the "reset_soft()" semantics and a whole lot more readable.

Note that for best results, this problem should be run using Specman 13.10 or later due to various improvements in the IntelliGen solver.

Steve Hobbs

Cadence Design Systems 

Applying Software-Driven Development Techniques to Testbench Development


Over the past couple of years there has been growing interest in applying a software development technique called unit testing to the hardware development flow. One reason is that unit tests allow you to validate your testbench in isolation, enabling very fast and thorough tests. Some customers have developed their own frameworks to accomplish this. In the Incisive 13.2 release, Cadence has introduced this functionality with Specman/e. Because functions can be isolated and tested without running any simulation, customers save long simulation times, improve testbench quality, and can potentially reduce debug time for testbench code by more than 30%.

What are unit tests?

-          Simple and directed tests

-          Checks that a feature does something as expected (e.g. a parity calculation method returns the correct parity for a handful of values)

-          Created by the developer who writes the source code

 

What are unit tests used for?

-          To check that the implementation of a small piece of code (e.g. a single method/function) is correct

Some customers may think that unit testing is an extra overhead, and there is little automation for creating test cases. So, why should they develop and use unit tests?

-          Customers can check the implementation of a method that normally takes many simulation cycles in zero simulation time (e.g., a complex checking method that requires streams of input, output, and control data). In unit testing, you provide all the input/output and control data, call the method, and check that it calculates the correct result

-          If other teams reuse code (and this happens a lot with verification code), they can check if the core testbench functions still work, or if something is broken
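The parity example mentioned above can be made concrete. The shape of a unit test is the same in any framework, including the e-unit package; here is a minimal Python sketch (the parity function and test values are hypothetical):

```python
import unittest

def parity(value: int) -> int:
    """Return 1 if the number of set bits in value is odd, else 0."""
    return bin(value).count("1") % 2

class ParityTests(unittest.TestCase):
    """Directed checks of a single method: no simulation required."""
    def test_known_values(self):
        self.assertEqual(parity(0b0000), 0)
        self.assertEqual(parity(0b0001), 1)
        self.assertEqual(parity(0b0011), 0)
        self.assertEqual(parity(0b0111), 1)

# Run programmatically; a real flow would use a test runner.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ParityTests))
```

Each test is a simple, directed check that runs in milliseconds, which is exactly what makes unit testing attractive compared with re-running a full simulation to exercise one method.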

If you want more details, have a look at the archived webinar Testing the Testbench on the e-unit package, and/or read the AgileSOC blog post Finally... A Reason For Me to Try Specman

Other references worth reviewing include articles and a unit test framework utility from AgileSOC

Happy testing,


-Hannes Froehlich

Incisive Simulation and Verification: Top 10 New Things I Learned While Browsing Cadence Online Support in Q1 2014


In my first blog of this quarterly series, I focused on how Rapid Adoption Kits (RAKs), developed by Cadence engineers, are enabling our users to be productive and proficient with Cadence products and technologies.  

In this second quarterly blog, let me explain the "Once Resolved, Reused Forever" internal process for documenting knowledge on http://support.cadence.com/. It ensures that we are not solving a problem that has already been solved, and that we benefit from the collective experience of the organization and our customers. Yes, you heard it right. Our customers can also help enrich our knowledge database to help others. And, in fact, we encourage our users to use "Troubleshooting -> Submit Solution" from http://support.cadence.com/ Home Page menu.

Our knowledge team reviews and publishes, and circles back with the contributor.

The "Reuse-Improve-Create" knowledge process is an integral part of Cadence Technical Support. The process ensures that we have the most effective knowledge transfer among our own engineers and customers.

In Q1 2014, the teams across the Cadence Incisive Verification Platform developed the following collateral to support verification and design engineers in becoming well versed with Cadence verification tools, technologies, and solutions.

Download your copies from http://support.cadence.com/ now and check them out for yourself. Please note that you will need your Cadence customer credentials to log on to the Cadence Online Support http://support.cadence.com/ website.

 


1. You can generate separate runtime profiler reports with the basic or advanced profiler using the simulation profile options.

Article # 20236067: How to dump separate runtime profiler reports with basic or advanced profiler?

 

2. Sometimes there are situations where glitches on signals result in repeated re-triggering of the blocks which reference them. A Verilog always block executes when an object in the event control expression for the block has changed value from when the block was last executed. This default wait mechanism is sensitive to zero-width glitches on the variables that the block is waiting on. These glitches are usually created when a writer process assigns a default value to the variable and then overwrites it with a different value. Even if the writer changes the variable back to the old value, the zero-time glitch is sufficient to wake up the waiting block. When such glitches are combined with a combinational loop in the design, they can stall the simulation.

Article # 11594558: Using the -delay_trigger switch to avoid repeated retriggering of always blocks due to glitches

 

3. In low-power simulation, switchable domains in the design can be powered off, putting the domain into an unknown (X) state. If proper attention is not given to the design state before and after switching the domains, X-propagation issues can arise. Proper retention is needed to save state before the domain is powered off so that it can be successfully written back into the sequential elements when the domain is switched on.

Article 20187515: State Retention: Examples to demonstrate usage of state retention in Low Power Simulation

 

4. In SoC designs many flip-flops exist without set/reset because non-resetable flip-flops are smaller and, therefore, save silicon cost. However, if such flip-flops are not initialized, then they become a potential source of X-propagation issues during simulation.

The requirements can be to correct the design to include set/reset for such flip-flops, or to initialize such flip-flops to avoid X-propagation in simulation. Please read the following article to get the answers.

Article #20187534: How to find Non-resetable Flip Flops in design and initialize them to avoid X-propagation in simulation

 

5. Many teams have C models of the algorithms or systems that they are developing. Integrating these C models into the SystemVerilog verification environment reuses these models. It allows fast stimulus generation and checking. It also allows the internal state of the C model to be checked if a mismatch occurs.

A set of guidelines and an example that demonstrates how SystemC models (or C/C++ models wrapped in SystemC) can be integrated within a SystemVerilog or UVM verification environment are available as a Rapid Adoption Kit (RAK). This RAK also demonstrates how models can be integrated for checking as part of a scoreboard and how they can be integrated to drive the simulation for firmware or software testing.

Rapid Adoption Kit: Integrating C, C++, SystemC Models with SystemVerilog and UVM - overview, application note(s), and RAK database download (0.8 MB)

 

6. The vManager server setup includes environment variables, topology and terminology, and various operations. The document describes how to migrate quickly to a new build or version. It also helps users collect information and debug problems encountered while starting and accessing the SQL database.

vManager Server Setup - Introduction, Topology, Operations and Debug Techniques

 

7. Here are five videos covering common features and flows in Incisive Metrics Center (IMC).

1.      IMC Basic (Video)

2.      IMC Detailed Analysis (Video)

3.      IMC Reporting (Video)

4.      IMC Refinement (Video)

5.      IMC Refinement Resilient - IES 12.2 vs. 13.2 (Video)

 

8. Expression coverage is an important code coverage metric, widely deployed to measure verification progress and coverage closure, yet it is not well understood. It has a wide variety of modes and switches to tune performance and optimize runtime for coverage closure. The number of testcases required for good expression coverage closure can also be optimized using this metric.

The Understanding Incisive Expression Coverage tutorial enables users to deploy expression coverage more effectively and achieve faster coverage closure with optimal performance.

This is an overview of Incisive expression coverage technology and methodology that provides a basic understanding of the subject with opportunities for trade-offs that can be used while deploying this form of coverage. Doing so can help achieve faster coverage closure if the right settings are selected.

The tutorial includes a two-part video series explaining various Incisive expression coverage scoring modes, how to deploy expression coverage more effectively, and how to achieve faster coverage closure with optimal performance.

  

9. The application note "Verification Management DRM Setup and Configuration" describes how to configure the Distributed Resource Management (DRM) setting with the Incisive Enterprise Manager (EMGR) and Incisive vManager (VMGR) products. The document provides basic steps towards integrating EMGR with one of the predefined DRMs. In addition, it also details how to debug common configuration issues.

 

10. The verification of a design may require multiple technologies, languages, and products to work together. When the design itself includes Verilog, SystemVerilog, VHDL, and SystemC, it is a mixed-language environment. Specman can work with such designs.

In the app note "Controlling Specman Agents and Adapters in a Mixed-HDL Environment", Avi Farjoun, Cadence Staff Support Applications Engineer, discusses how to successfully integrate all those languages into a single verification environment.


Let me end with two bonus collateral references, one to a critical article and one to a verification IP application note:

11. Article # 20227162: How to infer implicit Isolation on loads in VHDL generate block?

12. The application note, Integrating Cadence USB 2.0 Verification IP over DpDm Interface, explains how to create, configure, and instantiate Cadence USB 2.0 Verification IP in the testbench. It focuses on the question "Why do we need a Dp/Dm translator?"

 

We will continue to provide self-help content on Cadence Online Support, your 24/7 partner for getting help in resolving issues related to Cadence software or learning Cadence tools and technologies. If you are signed up for e-mail notifications, you'll see the new solutions, app notes, videos, manuals, etc. as we post them.

Happy Verifying!

Sumeet Aggarwal 


e and SystemVerilog: The Ultimate Race

$
0
0

For years we've watched the e and SystemVerilog race via countless presentations, articles, and blogs. Each language is applied to SoC verification yet the differences are well documented so any comparison is subject to recoding from one language to the other. This makes a direct performance comparison difficult to measure. Until now.  

On April 21, 2014, SystemVerilog and e toed the line for the first direct SoC race. They were set up in a long-pole test to allow each language sufficient run time to establish the test as credible. As you can see below, e started very strong and got out to an early lead. A moment later, SystemVerilog surged to the lead, as shown in the next screen capture. This exchange continued relentlessly from Hopkinton deep into the Newton Hills. As the race continued it was just obvious: e and SystemVerilog are joined at the hip and this is a multi-language race. The only way to finish this race was to have the two languages work together. Turning onto Boylston St. and charging across the line, e and SystemVerilog finished the SoC (Sherilog on Course) together.

As much fun as this blog was to write, it was more important to take part in this race.  I am proud to have helped make a statement along with nearly 32,000 others in my 9th Boston Marathon.  I was also able to raise more than $5000 with my patient-partner Linda as I ran for the Boston Children's Hospital.

=Adam Sherilog 

PS: If you are interested, you can donate to our Children's Hospital Team through the end of May. 

sync and wait Actions vs. Temporal Struct and Unit Members

$
0
0
Using sync on a temporal expression (TE) does not guarantee that execution will continue every time the TE seems to succeed. In this example, the sync action will miss every second change of my_event:

    tcm0()@any is {
        wait;
        while TRUE {
            sync change (my_event)@any;
            message (LOW, "tcm0: change (my_event)@any occurred");
            wait cycle;
        };
    };

The explanation for this behavior is that Specman evaluates the temporal expression in the sync (or wait) action only when the action is reached. In this case, when "sync change (my_event)@any;" is reached for the first time, the value of my_event is saved, and the TE succeeds only at the next my_event change at the sampling event.

If you expect each change to start a new TE evaluation, an event struct member should be used. Events and expects are struct members and are therefore evaluated as long as the struct has not been quit. In the example below, the execution will continue after each my_event change:

    event e is change(my_event)@any;
    tcm1()@any is {
        wait;
        while TRUE {
            sync @e;
            message (LOW, "tcm1: event e occurred");
            wait cycle;
        };
    };

But using sync on an event still does not guarantee that the execution will always continue whenever the TE seems to succeed. In the following TCM, although e will occur each cycle, the execution will continue only after every third occurrence:
 
    tcm2()@any is {
        wait;
        while TRUE {
            sync @e;
            message (LOW, "tcm2: event e occurred");
            wait [3]*cycle;
        };
    };

Using the on struct/unit member ensures execution upon each e occurrence:

    on e {
        message (LOW, "on: change (my_event)@any occurred");
    };

To summarize:
  • sync and wait are actions, and as such they are activated when the action is reached.
  • They should be used to suspend the execution until the TE they are associated with is successful.
  • The TE evaluation is started when the action is reached.

When we want to guarantee the execution of some code whenever a TE succeeds, it is recommended to define an event and an on struct member, which remain active throughout the lifetime of the object, from its creation until either quit() is called or the run ends.
 
Maya Bar
Specman support 

Updates from the UVM Multi-Language (ML) Front

$
0
0

An updated version of the UVM-ML Open Architecture library is now available on the Accellera uploads page (you need to log in to download any of the contributions).

The main updates of version 1.4 are:

  • UVM-SV library upgrade: This release includes UVM-1.1d, enabled for work in context of UVM-ML, replacing the previous UVM-1.1c version
  • Portable UVM-SC adapter added: Enabling usage of UVM-ML with vendor-specific implementations of SystemC
  • Multi-language sequence layering methodology and examples added: Demonstrating best-known practices for instantiating a verification component in a multi-language environment and driving sequences and sequence items across the language boundary
  • Performance improvements in the backplane and the SystemC adapters
  • The examples directory structure was simplified: All the examples are now directly under the "examples" directory, grouped by topics

We also found that several users struggled to install and set up the UVM-ML library, so we recorded a short video on how best to do it. If you see strange messages or paths, check out this video and make sure your setup is correct.

One more thing—the Accellera Multi-Language Verification Work Group (MLV-WG) has collected a thorough set of requirements, and has started working on defining the ML standard. The UVM-ML OA library is very well aligned with these requirements.

Happy coding,
Hannes Froehlich

Implementing User-Defined Register Access Policies with vr_ad and IPXACT

$
0
0

The vr_ad register and memory package for Specman is used in pretty much every verification environment. In most cases today, the register specification is captured in an IPXACT description, and the register e-file can be automatically generated from it.

The vr_ad package comes with a variety of pre-defined register access policies, which cover the typical register usage.

However, many users need special access policies that are not supported by vr_ad by default. Implementing these custom policies can be done in two ways:

1.       Define the specific behavior for each register through extensions and customization of the register itself

2.       Define a new access policy

To illustrate the two solutions, we use an example register that holds four fields. One field is reserved; the other three (f0, f1, and f2) have custom access policies:

f0: SELF_CLEAR - field clears itself one clock cycle after a WRITE access

f1: WRITE_CLEARS - field clears itself on the 2nd (or later) write access

f2: READ_CLEARS - field clears itself on the 2nd (or later) read access

The next section illustrates the pros and cons of the two implementations.

Defining the specific behavior for each register through extensions and customization of the register itself

Typically, one can give full write-read access to the register field (if no access is given, the conversion script ipxact2vrad.pl treats the field as Write-Read). One can then extend the register for specific behavior using, for instance, the predefined method post_access().

IPXACT description of register reg2: 

Corresponding e-code:

In post_access(), we use the set_field() method to change the value of a specific field.

Note:

The set_field() method itself calls update(), which again triggers post_access(). This would result in an endless loop and/or wrong behavior. To overcome this, Cadence enhanced the vr_ad package: the default behavior of set_field() remains, but it can be overridden by a third parameter (perform_post_access, with default value TRUE). When set to FALSE, the call to update() is suppressed. In our case, this is required.

This enhancement will be available from Incisive 13.10-s020 and 13.20-s006.

When using an older version, the user has to implement a locking mechanism to avoid this bad behavior. A code snippet is below:
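The app note's snippet is shown as an image on Cadence Online Support; as a stand-in, here is a hypothetical sketch of such a locking mechanism (the register subtype, field name, and flag name are assumptions):

```e
extend MY_REG2 vr_ad_reg {
    -- Hypothetical guard flag to break the recursion
    !in_post_access : bool;

    post_access(direction : vr_ad_rw_t) is also {
        if direction == WRITE and not in_post_access {
            in_post_access = TRUE;
            -- SELF_CLEAR behavior for field f0: set_field() calls
            -- update(), which would re-enter post_access(); the guard
            -- flag suppresses the re-entry.
            set_field("f0", 0);
            in_post_access = FALSE;
        };
    };
};
```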

The extension file has to be imported after the register definitions.

Define a new access policy

The vr_ad package provides specific hooks to implement user-defined behavior. To define a new access policy, the user has to first create a new enum value for the policy:
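As a hypothetical sketch (the exact enum type name varies between vr_ad versions, so treat it as an assumption to be checked against your installation), the extension could look like:

```e
-- Extend the vr_ad access-policy enum with the three new policies
-- (enum type name assumed)
extend vr_ad_reg_access_t : [SELF_CLEAR, WRITE_CLEARS, READ_CLEARS];
```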

The IPXACT file needs to make use of the vendorExtensions (access_policy attribute) to define the new access policy.

Note: The vendorExtensions will not override an access policy specified in the IPXACT, so make sure no regular policy is specified.

Note: The IPXACT standard supports the definition of user-defined access policies. However, this description is not yet supported by the ipxact2vrad.pl conversion script.

Using the new access enum values, the corresponding subtype of the vr_ad_reg_field_info struct can be customized to the desired behavior:

The extension file has to be imported prior to the register definitions.

Summary

The vr_ad package provides many hooks to implement custom access policies.

While the register extension makes the flow independent of the IPXACT file and the vendorExtensions, it can require quite a bit of coding: the policy has to be implemented for each register subtype.

Using vendor extensions in the IPXACT description makes the flow more natural and probably requires less code, but relies on vendorExtensions.

 

Hans  Zander

Incisive Simulation and Verification: Top 10 New Things I Learned While Browsing Cadence Online Support, 2Q 2014

$
0
0

Cadence Online Support, http://support.cadence.com, provides access to support resources including an extensive knowledge base, access to software updates for Cadence products, and the ability to interact with Cadence Customer Support.

In the June release of Cadence Online Support, many new features and functionalities were added to help users filter and narrow their search results, to provide a feedback opportunity via the Foresee Survey, and to provide additional browser support. The site now supports IE 7.x-10.x, Mozilla Firefox, Google Chrome, and Safari browsers.

Testing for the next release, with more new features and enhancements, continues as I write this blog. While enjoying the newly released features, as a verification engineer I was also interested in finding good knowledge documents shared on the site, and whether I could find them easily. Indeed, I easily found many very helpful application notes, RAKs, videos, and articles released in the last quarter, along with site release updates.

You can download your copies from http://support.cadence.com now and check them out for yourself. Please note that you will need Cadence customer credentials to log on to the Cadence Online Support website.

 

1. Using utrace to Debug SystemVerilog Randomization Problems: Constrained-random verification environments can exhibit the following problems:

  • Expected values are not reached
  • Certain values are chosen less frequently or more frequently than expected
  • The constraint solver calls take too long to finish
  • The solver is unable to find a solution for a given constraint set
  • The solver is unable to find a solution with certain starting conditions including variables that are not random or handles and arrays that are not initialized
  • The solver runs out of memory

The app note "Using utrace to Debug SystemVerilog Randomization Problems" describes the utrace debugger feature of the SystemVerilog constraint solver, which can be used to identify and debug problems in a SystemVerilog-constrained randomization environment.

2. Troubleshooting Article 20257945: "How to compile multiple libraries with a single invocation of ncvlog/ncvhdl or irun"

In many environments, it can be useful to compile HDL source code into more than one library. Often this is done by means of multiple invocations of the ncvlog or ncvhdl binary. Read this article to learn how to do it with a single invocation instead.

3. SDF Annotation with Minimum, Typical, and Maximum Delays: This short 8-minute video describes how IOPATH and INTERCONNECT delays are annotated, how MIN, TYP, and MAX delays are implemented, and how annotation is analyzed in the waveform.

You can also find the related Troubleshooting Article 20264486: "How to annotate maximum delay for one instance and minimum delay for other instance."

4. UVM-ML Library Installation and Setup: This video provides a step-by-step walkthrough of downloading and installing the UVM-ML OA library from the Accellera forum page, followed by details on where to find the documentation and how to run one of the examples delivered with the library.

5. Troubleshooting Article 20257843: "Understanding SystemVerilog Random Stability"

Random stability can be defined as the resistance of random results to code changes. A key requirement of verification is the ability to recreate the exact conditions that exposed a problem so that the fix can be verified. When random stimulus is used in a test environment, the same random numbers must be generated each time the same test is run, in order to allow the debug-fix-debug cycle. This should hold true even when a change (fix) is added to the code of either the DUT or the testbench. This concept is called random stability.

This article describes the concept of random stability and how to achieve it in any verification environment involving SystemVerilog constrained randomization.
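As a hedged illustration of the idea (the class and seed below are hypothetical, not from the article): SystemVerilog seeds each object's random number generator from its parent's RNG at construction time, so explicitly re-seeding an object can insulate its random stream from unrelated code changes:

```systemverilog
class packet;
  rand bit [7:0] addr;
  rand bit [7:0] data;
endclass

module top;
  initial begin
    packet p = new();
    // Hypothetical recipe: give the object a seed derived from a stable
    // property (here a constant), so that adding unrelated randomize()
    // calls elsewhere does not perturb this object's random stream.
    p.srandom(32'd1234);
    void'(p.randomize());
  end
endmodule
```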

6. Troubleshooting Article 20266731: "How to get switching information (ON/OFF) about power domains during simulation?"

Users often require switching information about all or selected power domains in low-power simulation: the simulation time at which a power domain is switched ON or OFF. This information is quite useful when creating complex system-level testcases for power simulations. This article describes a couple of ways to achieve this in the tool.

7. All you ever wanted to know about the vManager Server Methodology and Server Setup is now available on Cadence Online Support. 

The server setup section describes the operations that can be performed on the servers, while the server methodology section answers questions on how to use the servers.

a. vManager Server Methodology: This collateral describes Postgres and vManager server states, operation summary, server statuses, recommended server topology, data storage detail, software install and migration, current remote capabilities, and PG DB maintenance.  

b. vManager Server Set-Up: This collateral describes the vManager server setup in detail to equip users with a better understanding of the setup: environment variables, topology and terminology, and various operations. It also enables users to collect information and debug problems encountered while starting and accessing the SQL database.

8. Troubleshooting Article 20250009: "How to turn off concurrent assertions while leaving the inline assertions on?"

Verification engineers sometimes need to disable concurrent assertions (for example, around reset) while keeping inline assertions active. This article describes how to achieve this with the Cadence Incisive Enterprise Simulator.

9. Troubleshooting Article 20257827: "How to reduce time to create vcd for power analysis?"

Conventionally, a directed test is created for power analysis. To activate most of the logic at the same time, a switching activity file is extracted in one of the following formats: vcd, tcf, saf, or fsdb.

The Cadence Encounter Power System (EPS) reads the activity file for power estimation. It is cumbersome to cover all scenarios in a single test to attain maximum activity, and creating and executing such a test takes time. The SHM-to-VCD conversion for a lengthy test also takes a long time before power analysis can be started. This article therefore describes how to reduce the time needed for power analysis of large, complex SoCs.

10. Troubleshooting Article 20253114: "Isolation not placed due to driver filtering <D> in CPF-based low-power simulation"

This article explains the various reasons why implicit isolation may not be inserted due to driver filtering during CPF-based low-power simulation, and how to debug and fix the issue. It also points to another useful troubleshooting article for the warning ncelab: *W,PRTNCON (Isolation Cell not placed on port).


Hmmm... let me leave you with a bonus of two more troubleshooting tips. I will just provide the title and link without describing them; please go and explore them for yourself.

11. Troubleshooting Article 20266723: "Reinvoke fails when run in an LSF environment"

12. Troubleshooting Article 20267817: "How to extract a list of probed signals in either an interactive or post-process flow?"

Happy Learning!

Sumeet Aggarwal


Advanced Profiling for SystemVerilog, UVM, RTL, GLS, and More

$
0
0

A profiler helps identify the components or code streams that consume the most time or memory during simulation. Over the years, profiling has been geared more toward RTL and GLS than verification. Today, with the increasing number of performance bottlenecks found in SystemVerilog, UVM, and general verification environments, profiling requirements have changed for the design and verification environment. Critical information not provided by most traditional profilers today includes:

  • Only flat function names associated with CPU/memory; no instance- or object-based association
  • No call graph
  • No abstract level association with standard methodologies like UVM (UVM phases)
  • No dynamic data type information for memory

The Incisive advanced profiler (IPROF) addresses most of these gaps and can be used for detailed performance analysis of all kinds of design and verification environments, including mixed-language ones. The key features are:

  • A GUI-based utility for post-simulation profile analysis
  • Instance-based association of HDL instances and class objects
    • Helps to narrow down the scope of the problem
  • Call-graph information
    • Knowing which functions call a bottleneck function (and which functions it calls) provides complete context and helps in debug and optimization
  • Memory information associated with dynamic data types
    • Reports the memory consumption of user processes and dynamic data types in a hierarchical manner, and tracks the memory allocation and de-allocation of every user process and dynamic data type.

Below are some videos detailing key IPROF features.

1. Introduction to IPROF

This video introduces IPROF with a demo that shows the basic features of the profiler and the approach to finding performance bottlenecks in any design and verification environment. A simple Verilog design and verification environment is used to demonstrate the profiler features. One of the key features covered in this demonstration is instance-based profiling, a key differentiator from traditional profiling, which was only module- or type-based. The demonstration also highlights the categories view, which shows a category-wise breakup of the time in different domains such as:

  • HDL block
  • Assertion
  • Randomization
  • Callgraph

 View the video here: http://support.cadence.com/wps/mypoc/cos?uri=deeplinkmin:VideoViewer;src=wp;q=Video/Functional_Verification/vdn/iProf/iProf_Intro.htm

 

2. IPROF Callgraph Feature

This video demonstrates one of the key features of IPROF: the callgraph. The callgraph category shows the time consumed in a SystemVerilog class-based verification environment, including the time taken in individual class methods along with the contribution of their callers and callees in the simulation. The demonstration describes how to traverse call chains in a complex class-based verification environment to locate performance bottlenecks.

Watch the video here:

http://support.cadence.com/wps/mypoc/cos?uri=deeplinkmin:VideoViewer;src=wp;q=Video/Functional_Verification/vdn/iProf/iProf_Callgraph.htm

 

3. IPROF with PLI, VPI, and DPI

This video demonstrates how to profile the simulator interfaces to third-party applications via PLI/VPI/DPI. An in-house application is used to demonstrate how to figure out the time spent in user C code, standard interface routines like vpi_get_value and vpi_put_value, and third-party system tasks. The demonstration also explains how to identify the instance under which the third-party application is called, and the callers of the DPI calls in the callgraph.

Watch the video here:

http://support.cadence.com/wps/mypoc/cos?uri=deeplinkmin:VideoViewer;src=wp;q=Video/Functional_Verification/vdn/iProf/iProf_Sim_Interfaces.htm

 

4. Incisive Memory Profiler

This video is an introduction to the Incisive memory profiler. The demonstration explains how to use the memory profiler dashboard to perform first-level memory analysis, which is helpful for understanding memory consumption in four key areas:

  • Static memory
  • Memory consumed in user testbench
  • Dynamic libraries
  • Internal tool memory consumption

Watch the video here:

http://support.cadence.com/wps/mypoc/cos?uri=deeplinkmin:VideoViewer;src=wp;q=Video/Functional_Verification/vdn/iProf/iProf_Memory.htm

Thanks,

Chinmay Banerjee

Expectations Versus Reality — How I Misjudged the Apple MacBook Pro Retina Display

$
0
0

In technology, simple concepts can have huge implications, and sometimes what you might dismiss as a minor feature turns into a major improvement. For example, let me tell you about my experience with the Apple MacBook Pro Retina display, how it improved my satisfaction with and usability of the laptop, and relate it to how a simple concept in the Cadence Incisive Debug Analyzer (IDA) can add a productivity boost to your debugging task.

For several years my wife had been using a 13" Apple MacBook Pro, and was very happy with it -- she'd better have been because I gave it to her as a gift!  ; )

Then it came time to upgrade. At first, I recommended that she get a MacBook Air, particularly for its significant weight advantage, and because my wife does not do any heavy-duty computing. However, the MacBook Air default configuration only includes 4GB of RAM. We did not want to custom order more RAM and have to wait for delivery.

The MacBook Pro includes 8GB of RAM by default, was in stock at the store, and uses a much better Intel processor (2.6GHz dual-core Intel Core i5) than the MacBook Air. Additionally, with an advanced Intel memory architecture and a PCIe interface to the SSD drive, its performance overwhelms the MacBook Air's, which never hurts, even when you primarily use word processing, email, and web browsing.

Therefore, I concluded that a new 13" MacBook Pro with the Retina display would be a better choice.

When we got the new laptop home and I installed some software, it was the first time I really saw the ultra high-resolution display. Before that point, I considered the high pixel density as just a feature that was nice to have and comes for free, because what I was really interested in was the default RAM configuration.

In other words I thought the high-resolution Retina display was not a big deal, just another goodie. However, I was wrong and I publicly admit that I grossly misjudged this feature.

After about 10 minutes of using the laptop it was clear to me: The Retina display is a killer feature and purely amazing. 

Many of us have had high-density displays on our smartphones for quite some time now. But a high-density display on a laptop is a whole different ballgame that I now consider essential due to its usefulness--the fonts and the graphics just look gorgeous.

 

Since most of you are probably not reading this using a MacBook Pro, a simple side-by-side screen capture of a standard laptop screen and Retina display will not properly show the resolution difference. But these photos will.

I have been using computers for three decades now, and display pixel density has improved only marginally during that time. I must have gotten so used to the status quo that I could not even imagine it could be much better--that screen resolution could become so dense that you can no longer see pixels. In other words, that a display would look the way it ought to. The Retina display's improvement in pixel density is not incremental--it is a step function that was long overdue.

For me, it was a great experience to be positively surprised by a seemingly simple technological evolution. When you get a chance, check out a high-density display on a new laptop--any laptop, not just an Apple--play with it for a while, and you will get hooked like I am. Caution: you might regret this experience, as it can be pretty painful to work on a standard-resolution display afterwards.

Apple isn't alone in developing technological improvements in products that may seem straightforward at first, but actually deliver major productivity enhancements.

As I work in the Cadence business unit responsible for developing Incisive verification products, I am privileged to get my hands on these tools and test them long before they hit the market.

In October of 2012, Cadence released the Incisive Debug Analyzer (IDA), which I used and tested a while ago. The concept and the implementation of IDA sounded very interesting and useful, and it includes the ability to:

  • Step forward or backward through the HLV or HDL source code
  • Click directly on a source line or variable to jump forward or backward through time to the point when the line was executed, or when a variable value changed.
  • Use integrated, interactive log file analysis with smart filtering to go directly to the point of interest in either the source code or the waveform database

But there is always the question of whether a design concept applied to a test model will be successful and useful in an industrial setting.

 

 

 

I was pleasantly surprised that the usefulness of IDA was much better than I expected. And, for several of our customers, this product has made a huge difference in their debug productivity.  

For example, ST Microelectronics has deployed IDA on real-life projects, and they were able to get even more productivity out of IDA than our marketing collateral suggested! In fact, they were able to reduce their time spent debugging by up to 50%. This is huge, because debugging is very resource intensive and can be a major drag on a project. 

I love it when I get positively surprised like this. For all you technology innovators out there: Keep them coming and make my day!

 

Axel Scherer

 


Objection Mechanism Synchronization Between SystemVerilog and e Active Verification Components

$
0
0

Suppose you have two verification components, each driving its own portion of the DUT (for example, two protocols driving a DUT, one implemented in e and the other in SystemVerilog).

In this case, you would have two separate sequence-driven, end-of-test mechanisms - one for each framework.

An issue arises when one of the frameworks drops its last TEST_DONE objection: that framework begins its simulation finalization phase while the other is still running. The simulation then comes to an end, causing the other, still-working framework to stop abruptly.

The solution is to create a dependency between the objection mechanisms of both frameworks, using multi-language UVM features. A method to apply this solution is demonstrated below.

In the following method, the e verification component has two TLM put ports (one for raising the objection and one for dropping it), and the SystemVerilog verification environment has two TLM put implementations. For simplicity, all code additions are made in the topmost hierarchies of these verification components. Let's call the framework that has the TLM output ports the "objection client framework," and the one that has the TLM implementations the "objection server framework."

NOTE: Ensure you have the multi-language UVM open architecture package downloaded and installed.

1.       Connect the two frameworks with two TLM ports (analysis or put).

e code:

a)      Extend the e env to instantiate these two TLM ports:
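The original code is shown as a screenshot; a hypothetical sketch (unit, port, and item names are assumptions) might look like:

```e
extend counter_env_u {
    -- Two output ports toward the SV objection server framework
    raise_objection_port : out interface_port of tlm_put of my_item
                                                              is instance;
    drop_objection_port  : out interface_port of tlm_put of my_item
                                                              is instance;
};
```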

 SV code:

a)      Extend the SV env to declare two put implementation methods and make sure to register them: 

 b) Set the type override by type to replace instances of the old 'env' with the new 'extended env' in the build phase of uvm_test_top:
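The original code is a screenshot; the sketch below is a hypothetical rendering of steps a) and b) using analysis imps (all class, type, and port names are assumptions). It also previews the write() bodies from step 2:

```systemverilog
`uvm_analysis_imp_decl(_raise)
`uvm_analysis_imp_decl(_drop)

class counter_env_ext extends counter_env;
  `uvm_component_utils(counter_env_ext)

  // Implementations targeted by the e side's two TLM ports
  uvm_analysis_imp_raise #(my_item, counter_env_ext) raise_imp;
  uvm_analysis_imp_drop  #(my_item, counter_env_ext) drop_imp;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    raise_imp = new("raise_imp", this);
    drop_imp  = new("drop_imp", this);
  endfunction

  // Step 2: raise/drop TEST_DONE on each incoming token
  function void write_raise(my_item t);
    uvm_test_done.raise_objection(this);
  endfunction

  function void write_drop(my_item t);
    uvm_test_done.drop_objection(this);
  endfunction
endclass

// Step b), in the build phase of uvm_test_top:
//   set_type_override_by_type(counter_env::get_type(),
//                             counter_env_ext::get_type());
```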

 

 c)      Finally, connect the ports in the e framework to the implementations in the SV framework (in the SystemVerilog testbench top hierarchy). Note that you need to supply the UVM path as a string for the e and SV TLM ports, since the factory override takes effect only at runtime:

  

2.       Implement the TLM write()/put() methods (for analysis/put TLM ports) to drop and raise TEST_DONE objections.

SV code: (note that this code should be where the TLM port is declared - in class counter_env from step 1):

 

3.       At the beginning of the simulation, raise a TEST_DONE objection manually in the objection client framework.

 e code: (note that this code should be in the same unit as in step 1 - counter_env_u):
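The original snippet is a screenshot; a hypothetical sketch (unit name assumed) of raising the initial objection:

```e
extend counter_env_u {
    run() is also {
        -- Step 3: raised once at the start of the run, so this framework
        -- always holds one more objection than the objection server
        raise_objection(TEST_DONE);
    };
};
```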

  

4.       When the objection client framework starts a sequence branch (there may be more than one), raise a TEST_DONE objection in the objection server framework by using one of the TLM ports.

When the objection client's sequence branch is done, drop the TEST_DONE objection in the objection server framework by using the second TLM port. Do this by extending the relevant sequence's pre_body() and post_body().

e code:
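A sketch of such a sequence extension is shown below. The sequence name, the sampling event, and the p_env pointer (assumed to be set by the env when the sequences are created) are illustrative assumptions:

```e
<'
extend counter_sequence {
   !p_env : counter_env_u;

   pre_body() @sys.any is first {
      p_env.raise_obj_port$.write(1);  -- raise TEST_DONE on the SV side
   };

   post_body() @sys.any is also {
      p_env.drop_obj_port$.write(1);   -- drop it when this branch is done
   };
};
'>
```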

 The objection client framework raises and drops one objection in the objection server framework for each of its independent sequence branches. Also, the objection client always has one more objection raised than the objection server (the objection raised in step 3). Therefore:

  • If the objection server framework has dropped all of its native objections, it will still have as many objections raised as the objection client has unfinished sequence branches, and the simulation will not end until the objection client finishes its sequences.
  • If the objection client finished first, the simulation will not end until the objection server drops all of its native objections; then the simulation will end.

By applying this solution, we force the simulation not to finish even though one of the frameworks has declared that it is "done." Instead, the simulation runs until the longer-running framework is also done.

 

Appendix 1:

Applying the UVM_ML_OA Features If Previously Unused in Your Multi-Language Environment

 

If you did not previously use the UVM_ML_OA features in your multi-language environment, apply them to the environment as follows:

1.       Download and untar the UVM ML OA package from the UVM World website. You will need to register if you are not already registered.

2.       Set the environment variables and source the setup.csh script as explained in README_INSTALLATION.txt, which is located in the ml/ folder in the untarred folder.

3.       In the file that contains your top module (the module that instantiates the DUT and includes your UVM SV VC and the uvm_pkg package), do the following:

a.       Import the uvm_ml package:

b.       In an initial block, create a string array, where each string points to the top-level entity of each framework in your environment. (In the example below, it is only the e test file.)

c.      Replace the run_test() statement with uvm_ml_run_test(). Provide uvm_ml_run_test() with the string array from the previous step and with the SV test name. (In the example below, the SV test is also the top entity in the UVM SV framework.)
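Put together, steps a through c might look like this in the top module (the file name top.e, the test name, and the "framework:entity" string convention are illustrative assumptions):

```systemverilog
module topmodule;
  import uvm_pkg::*;
  import uvm_ml::*;        // step a: the multi-language package

  // DUT instance and interfaces would go here

  initial begin
    // step b: one top-level entity per non-SV framework
    string tops[] = '{"e:top.e"};
    // step c: uvm_ml_run_test() replaces run_test()
    uvm_ml_run_test(tops, "SV:my_test");
  end
endmodule
```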

For more information on UVM ML OA constructs, syntax, and other features, go to the UVM ML OA documentation in <uvm_ml_install_dir>/ml/README.html.

 

Yuval Gilad

Specman Support 

My First Internet of Things Device: Moving from a Manual to an Automated Process—Debug Analyzer vs. Simple Logging


The Internet of Things (IoT) has been a buzzword for quite some time now. However, thus far it has not seen wide adoption or market penetration in the home; this, at least, has been my observation. And, in my circle of friends, hardly anyone has adopted any home IoT devices.

Some have flirted with the idea of buying devices like the Nest advanced thermostat, now owned by Google. However, they have not pulled the trigger and actually bought any.

Although I typically tend to be on the early adopter side of the bell curve when it comes to technology adoption (and I believe IoT will be big), I did not have a compelling reason to get into the game with devices for my home, at least until now.

However, when I sat down and connected the dots, I realized that I have a perfect application for an IoT device.

Many parents out there have experienced similar phases in the "going to bed" habits of their kids. My youngest son is in the phase of: can't go to bed without the light on.

He just traded with his older brother, who no longer has this problem. However, the light my little one "demands" cannot merely be a nightlight. It has to be a fairly bright light to satisfy him.

Obviously, I do not want to have the light on all night for two reasons:

  • It is not good for his sleep
  • It is a waste of energy (even though I use CFLs)

Hence, the typical nighttime drill is this:

  1. Potty time (use the toilet for you non-US folks out there)
  2. Teeth brushing
  3. Storybook reading
  4. Waiting until he is in a deep sleep and returning to shut his light off

This routine works pretty well, but sometimes turning the light off wakes him up. There ought to be a better way, and there is.

The other night I was too tired to walk over to turn off his light. But I still did it.

However, I thought that this was too stupid a method to be using in 2014; I needed to automate this process. So I searched around and found a smartphone-controllable LED light bulb for his room, along with the associated controller hub! Specifically, I got the TCP Connected smart lighting system.

The experience was amazing. The hub setup was trivial and the app is very user friendly. You can witness my first test in the video below.

Beyond the buzz and general interest in the space, IoT in general, and home automation in particular, is not just marketing hype; there are serious dollars behind it. One recent example is the $90M cash infusion into a company called Savant. Furthermore, Apple announced an API for this space, called HomeKit, at their developer conference in June.

It seems that home automation with IoT devices is about to take off.

However, one of the challenges to IoT home automation adoption is that old habits are hard to break. Even I, typically an early adopter of technology, sometimes get stuck in old and inefficient ways of performing a task. To this day I still tend to use vi when editing code on Linux; it is my default mode of operation.

And, while I am fully aware of the advantages of editing e or SystemVerilog code in an IDE such as Eclipse, particularly when it is extended for the use of HVLs with DVT, it still takes a special effort to move away from such tried and trusted approaches in order to gain additional automation and productivity.

Many design and verification engineers follow similar habits. For example, when debugging code they spike it with lots of print statements and then peruse the resulting log file.

There is nothing wrong with this approach in and of itself--it is a classic and trusted method that gives the developer the information he or she wants, while being productive.

However, since code is continually getting more complex (HDLs mixed with HVLs, and so on), one quickly gets caught up in what can appear to be an infinite iterative loop. For example, because of log message A, the developer now needs additional information, such as the value of a variable B, and so on. Consequently, the code has to be edited and re-edited, and the simulation has to run again and again.

With a small verification environment, such iterations can be fairly quick. However, at a complex sub-system level, such iterations might take several minutes, or even hours, which can add up very quickly to a lot of frustrating wait time. 

Besides frustration and wasted time, debugging iterations like this can also reduce productivity in other ways. Debugging is a very complex and intellectually demanding task. Any interruption or wait time will slow the debug progress. The person debugging has certain thoughts and assumptions he or she uses in determining the cause of a failure. If it takes a long time to get answers about these assumptions, then debug productivity is adversely affected. In other words, the human idea caching is reduced.

It is exactly for this reason that Cadence introduced Incisive Debug Analyzer.

With Incisive Debug Analyzer, large portions of the productivity problems inherent in iterated debugging are addressed. Many of the debug iteration loops are cut out of the process altogether. One still needs to annotate the code with debug messages. But those messages become smart log messages.

A smart log message is an advanced log message that can come from multiple sources, be it HVL such as e [IEEE 1647] or SystemVerilog [IEEE 1800], HDL, C, C++ or even assertions.

A powerful feature of Incisive Debug Analyzer smart logging is that it allows you to change the verbosity level of log messages without having to re-run the simulation. Incisive Debug Analyzer contains numerous other features that let you interact with log messages to hone in on the root cause of a bug more quickly. Smart logs are also synced up with the waveform database, providing a consistent view of the current simulation time.

 

In addition, Incisive Debug Analyzer enables effective interactive debugging. For example, assume you are stepping through a simulation and you halt using a breakpoint. If you now advance the simulation accidentally, or if you halted because of a wrong assumption, you might have to start the simulation all over again.

With Incisive Debug Analyzer, however, you can move both forward and backward through simulation time, reducing the number of simulation runs. You can do this because the HVL and HDL code is not being simulated. Instead, recorded values in the Incisive Debug Analyzer database are being stepped through. Consequently, the execution through time is orders of magnitude faster than in a live interactive simulation.

These are just some of the ways Incisive Debug Analyzer can help your debug process. For a full description, check out this link.

Bottom Line: Incisive Debug Analyzer can increase your debug productivity by automating a classic and manual debug process.

 

Long live efficiency!

 

Axel Scherer

Incisive Product Expert Team

Twitter, @axelscherer 

Troubleshooting Incisive Errors/Warnings—nchelp/ncbrowse and Cadence Online Support


I joined Cadence in July 2000 and was immediately put on a three-month training program to learn and understand the simulator tools. There were formal training sessions, and I had a mentor whom I could ask all my queries. But most of the time, I was on my own, as "learning by doing" was the motto of my mentor. Today, after completing 14 years at Cadence, I can tell you that it works great, especially in cases where the tool is also designed with great utilities that help you learn faster.

nchelp

As I moved on in my job, I faced a time crunch in going through product manuals, LRMs, etc., to learn the basics. Since time was short, I decided to write designs and start debugging myself to learn faster. In the process, I soon figured out a great self-help utility called nchelp, the native help for Incisive simulation error and warning messages.

The nchelp utility gives you detailed information about an error or warning message that you may get during the various phases of your Incisive simulation run.

Here is the nchelp usage syntax:

nchelp <tool name> <error/warning code>

Let us take the following warning message as an example:

ncelab:*W,SDFNEP: Failed Attempt to annotate to non-existent path (COND (B===0) (IOPATH A Y)) of instance test.i1 of module xo2 <./a.sdf, line 20>.

Where,

ncelab is the name of the tool which generated this warning.

W indicates the severity of the message (other levels of severity are Note (N), Error (E), and Fatal (F)), and

SDFNEP indicates the error or warning code. In this message, the tool name (ncelab) is followed by the severity, and then by the code.

To get extended help for this warning, run the following command at your UNIX prompt:

% nchelp ncelab SDFNEP

ncelab/SDFNEP =

This path, including the condition, if any, does not exist in the instance being annotated. The requested annotation will not occur. In order to perform the annotation, the information in the SDF file must be updated to correctly match what is in the HDL description.

Now, if you combine the warning message,

ncelab:*W,SDFNEP: Failed Attempt to annotate to non-existent path (COND (B===0) (IOPATH A Y)) of instance test.i1 of module xo2 <./a.sdf, line 20>.

that gives me information on the code, line number, etc., with the elaborated description through nchelp, I now know that I need to check for a syntax mismatch for (COND (B===0) (IOPATH A Y)) between my HDL and SDF descriptions.

Similarly, there are thousands of such error and warning messages that can be debugged using nchelp.

For more information, I can refer to the Using the Incisive Simulator Utilities book, available under the latest INCISIV release documentation on Cadence Online Support (http://support.cadence.com), or look through the CDNSHelp utility.

ncbrowse

Soon, I discovered another great utility, this one in a GUI incarnation, called the NCBrowse Logfile Message Browser.

ncbrowse is a two-window GUI that allows you to interactively view and analyze:

  • Log file messages produced by Cadence tools, such as the HDL analysis and lint tool (HAL)
  • Logs produced by other Cadence simulator tools, such as ncvlog (the Verilog compiler), ncvhdl (the VHDL compiler), and ncelab (the Incisive elaborator).

ncbrowse displays log file messages in a message window, and the corresponding Verilog source code that produced the messages in a source file window.

 

For more information, see the Using the Incisive Simulator Utilities book, available under the latest INCISIV release documentation on Cadence Online Support (http://support.cadence.com), or look through the CDNSHelp utility.

Troubleshooting

And what a bonus when I started finding useful information, debugging tips, and learning collateral on the Cadence Online Support homepage, (http://support.cadence.com), which is the 24/7 partner for Cadence customers and employees. The information available on the support site not only helped me in resolving issues related to Cadence software, but also helped me in understanding Cadence tools and technologies better. You can find interesting articles, quick videos, training material, application notes, etc. on the support site that can be used as a quick reference.

To quote an example: after searching completely through nchelp for information on the SDFNET warning, I wanted additional tips or scenarios. I then searched on http://support.cadence.com and found a good article with the details that I needed, which also provided information on SDFNEP, a warning similar to SDFNET (SDFNET or SDFNEP messages, causes and cures).

I also remember a time when my simulation failed/crashed due to an internal error, and it required some deep diving and interactive learning to understand the cause of the failure. I found good debugging tips in the book Debugging Fatal Internal Errors, available on http://support.cadence.com. After reading through this book, I was able to narrow down my issue, and I also provided relevant inputs to the development team to fix it.

So, to summarize, I always use these great self-help utilities, in the following order, whenever I need to troubleshoot any Incisive error or warning.

  1. Use nchelp or ncbrowse to find detailed information on an error or warning message.
  2. Search Cadence Online Support (http://support.cadence.com) for any additional information.
  3. Contact an expert or submit a case via http://support.cadence.com -> Cases -> Create Case. This will report your case to the Cadence Technical Support team.

Happy Troubleshooting!

Sumeet Aggarwal

Transferring e "when" Subtypes to UVM SV via TLM Ports—UVM-ML OA Package


The UVM-ML OA (Universal Verification Methodology - Multi-Language - Open Architecture) package features the ability to transfer objects from one verification framework to another via multi-language TLM ports. Check out Appendix A if you are a first-time user of UVM-ML OA.

This feature makes many things possible, such as:  

  • Sequence layering where one framework generates the sequence item and the other drives it to the DUT bus
  • Sequence layering where one framework invokes sequences in the other so that both item generation and DUT bus driving is done from a single framework
  • Monitoring the DUT using a different framework and still obtaining a scoreboard in a single framework
  • And more...

An issue arises when the object that we want to send via the TLM port is an e "when" subtype, since other frameworks do not have such type determinants. This will likely cause a type mismatch between the two frameworks, typically expressed as unpacking fewer or more bits than were packed.

The recommended solution is to use the Incisive mltypemap utility, which automatically maps the specified data type to its equivalent representation in the target framework.

However, mltypemap currently does not support e "when" subtypes in terms of creating a separate individual type in the target framework for each "when" subtype. Instead, it creates one type that contains all of the fields from all "when" subtypes, including their determinants.

Therefore, after using mltypemap, you should use the determinants to determine which "when" subtype was received by the TLM port and extract only the relevant portion of the received object.

Example:

1. Suppose we want to send an e struct called "packet" from e to SV via a TLM put port. "packet" has two "when" subtypes: SINGLE and DOUBLE. The "when" subtype determines how many data fields the packet has (in this case, one or two). Note that there is no need to mark the fields as "physical"; mltypemap will automatically define them as physical unless told otherwise.

 The packet definition is as follows: (file name: packet.e)
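The original post showed the definition as an image; a definition consistent with the description might be the following. The field and type names are assumptions, except the determinant field t, whose name is implied by the DOUBLE't notation used later in this article:

```e
<'
type packet_kind_t : [SINGLE, DOUBLE];

struct packet {
   t      : packet_kind_t;   -- the "when" determinant
   data_0 : uint;            -- present in every packet
   when DOUBLE't packet {
      data_1 : uint;         -- the second data field, DOUBLE only
   };
};
'>
```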

2. Since the packet is sent to SV, an equivalent representation in SV must be defined for it. Therefore, an mltypemap TCL input file needs to be created that will:

  1. Configure the generated code to be UVM ML OA compatible
  2. Provide the source type
  3. Provide the target type name
  4. Provide the target framework 
  5. Optional - you can decide which fields will not be mapped by using config_type  ... -skip_field

This is how the TCL file maptype_to_sv.tcl should look:

 This TCL input file should be used with the mltypemap utility together with the e source file:

3. Three files were generated as a result of this command: packet_ser.e, packet_ser.sv, and packet.svh. Be sure to include these three files in the source file list. The packet_ser files determine which fields to serialize/deserialize (you may have chosen to omit some fields from serialization by using the TCL config_type command with the -skip_field option). packet.svh includes the new type's definition in SV (file name: packet.svh):
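The exact code mltypemap emits will differ, but the flattened class might look roughly like this sketch, in which the field DOUBLE__t__data_1 follows the naming convention described in the note below:

```systemverilog
typedef enum { SINGLE, DOUBLE } packet_kind_t;

class packet extends uvm_object;
  packet_kind_t t;                 // the e determinant field
  int unsigned  data_0;            // base-struct field
  int unsigned  DOUBLE__t__data_1; // field from the DOUBLE't subtype

  `uvm_object_utils_begin(packet)
    `uvm_field_enum(packet_kind_t, t, UVM_ALL_ON)
    `uvm_field_int(data_0, UVM_ALL_ON)
    `uvm_field_int(DOUBLE__t__data_1, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "packet");
    super.new(name);
  endfunction
endclass
```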

 

 Note that the "when" subtypes are incorporated into the field names. For example, the field "data_1" in the e "when" subtype DOUBLE't packet is represented here as DOUBLE__t__data_1. We will use this information when we fetch the data in the implementation of our TLM put imp.

4.      Now suppose we want to send different "when" subtypes of the same type through one TLM port. We would have to define the TLM port in both frameworks.

Output port put_port in the e side (File name:  producer.e ): 
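A sketch of the producer unit with its output put port (the unit name and the driving TCM are illustrative):

```e
<'
unit producer_u {
   put_port : out interface_port of tlm_put of packet is instance;

   drive_packets() @sys.any is {
      for i from 1 to 4 {
         var p : packet;
         gen p;
         put_port$.put(p);  -- blocking put toward the SV consumer
      };
   };
};
'>
```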

 And in SV, we define the input port, put imp (file name: consumer.sv):
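The consumer might declare the put imp like this (a sketch; the method bodies are the subject of step 5):

```systemverilog
class consumer extends uvm_component;
  `uvm_component_utils(consumer)

  // Implementation terminal of the TLM1 put interface
  uvm_put_imp #(packet, consumer) put_imp;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    put_imp = new("put_imp", this);
  endfunction

  // put(), try_put(), and can_put() are defined in step 5
endclass
```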

 The TLM put imp must be registered with the backplane, and must also be connected to the e TLM put port. Suppose our hierarchy in SV is uvm_test_top.sv_env.cons.put_imp , and in e it is sys.env.prod.put_port. Then the registration and connection will be as follows (done in uvm_test_top, file name: test.sv):
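Assuming the UVM-ML OA TLM1 registration call and its string-based connect API (both are assumptions about the package's API; check its README for the exact syntax), the test might do this:

```systemverilog
class test extends uvm_test;
  `uvm_component_utils(test)

  sv_env_t sv_env;  // assumed env class containing the consumer "cons"

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    sv_env = sv_env_t::type_id::create("sv_env", this);
  endfunction

  function void connect_phase(uvm_phase phase);
    // Register the imp with the ML backplane, then bind the e port to it
    uvm_ml::ml_tlm1 #(packet)::register(sv_env.cons.put_imp);
    uvm_ml::connect("sys.env.prod.put_port",
                    "uvm_test_top.sv_env.cons.put_imp");
  endfunction
endclass
```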

 

5. In SV, the can_put and try_put functions and the put task of the TLM put imp will need to be defined in order to determine whether we have received a SINGLE't packet or a DOUBLE't packet (file name: consumer.sv):
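A sketch of those methods: the determinant field t tells us which "when" subtype arrived, so we read only the fields that are meaningful for that subtype. The handle_packet helper and the data_0 field name are illustrative; DOUBLE__t__data_1 follows the mapping shown in step 3.

```systemverilog
// Methods of the consumer class from step 4
function void handle_packet(packet p);
  case (p.t)
    SINGLE: `uvm_info("CONS",
              $sformatf("SINGLE packet: data_0=%0d", p.data_0), UVM_LOW)
    DOUBLE: `uvm_info("CONS",
              $sformatf("DOUBLE packet: data_0=%0d, data_1=%0d",
                        p.data_0, p.DOUBLE__t__data_1), UVM_LOW)
  endcase
endfunction

task put(packet p);
  handle_packet(p);
endtask

function bit try_put(packet p);
  handle_packet(p);
  return 1;
endfunction

function bit can_put();
  return 1;  // this consumer is always ready
endfunction
```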

 The above example shows how to send a struct that represents a data item, but the proposed solution can also be applied to other functionally directed struct types, such as sequences.

Suppose we would like to send some sequences from a sequence library in e to be started in SV. All we need to do is map the e sequence library to SV using the mltypemap utility, and then in the SV code, determine which sequence was received from e, extract its relevant fields, and start the equivalent SV sequence with the fields that were extracted from the received sequence struct.

This solution enables users to use TLM ports to send e "when" subtypes to the target framework. Using TLM ports has many advantages over previous solutions, such as independence from a stub file, and the ability to connect the same port to different languages by changing only the UVM path.

 

Appendix A: Applying the UVM_ML_OA Features If Previously Unused in Your Multi-Language Environment

If you did not previously use the UVM_ML_OA features in your multi-language environment, apply them to the environment as follows:

  1. Download and untar the UVM ML OA package from the UVM World website. You will need to register if you are not already registered.
  2. Set the environment variables and source the setup.csh script (as explained in README_INSTALLATION.txt) which is located in the ml/ folder in the untarred folder.
  3. In the file that contains your top module (the module that instantiates the DUT and includes your UVM SV VC and the uvm_pkg package), do the following:

a.  Import the uvm_ml package: 

 

 b.  In an initial block, create a string array, where each string points to the top-level entity of each framework in your environment. (In the example below, it is only the e test file.)

 c.  Replace the run_test() statement with uvm_ml_run_test(). Provide uvm_ml_run_test() with the string array from the previous step and with the SV test name. (In the example below, the SV test is also the top entity in the UVM SV framework.)

 


For more information on UVM ML OA constructs, syntax, and other features, go to the UVM ML OA documentation in <uvm_ml_install_dir>/ml/README.html

sync and wait Actions vs. Temporal Struct and Unit Members

Using sync on a temporal expression (TE) does not guarantee that the execution will continue whenever the TE seems to succeed. In this example, the sync action will miss every second change of my_event:

    tcm0()@any is {
        wait;
        while TRUE {
            sync change (my_event)@any;
            message (LOW, "tcm0: change (my_event)@any occurred");
            wait cycle;
        };
    };

The explanation for this behavior is that Specman evaluates the temporal expression in the sync (or wait) action only when the action is reached. In this case, when "sync change (my_event)@any;" is reached for the first time, the value of my_event is saved, and the TE succeeds only at the next my_event change at the sampling event.

If you expect each change to start a new TE evaluation, an event struct member should be used. Events and expects are struct members, and are therefore evaluated as long as the struct has not quit. In the example below, the execution will continue after each my_event change:

    event e is change(my_event)@any;
    tcm1()@any is {
        wait;
        while TRUE {
            sync @e;
            message (LOW, "tcm1: event e occurred");
            wait cycle;
        };
    };

But using sync on an event still does not guarantee that the execution will always continue whenever the TE seems to succeed. In the following TCM, although e will occur each cycle, the execution will continue only after every third occurrence:
 
    tcm2()@any is {
        wait;
        while TRUE {
            sync @e;
            message (LOW, "tcm2: event e occurred");
            wait [3]*cycle;
        };
    };

Using the on struct/unit member ensures execution upon each occurrence of e:

    on e {
        message (LOW, "on: change (my_event)@any occurred");
    };

To summarize:
  • sync and wait are actions, and as such they are activated when the action is reached.
  • They should be used to suspend the execution until the TE they are associated with is successful.
  • The TE evaluation is started when the action is reached.

When we want to guarantee the execution of some code whenever a TE succeeds, it is recommended to define an event and an on struct member, which remain active throughout the lifetime of the object, from its creation until either quit() is called or the run ends.
 
Maya Bar
Specman support 