
Techniques to Boost Incisive Simulation Performance


Functional verification is the biggest challenge in delivering more complex electronic devices on increasingly aggressive schedules. Every technique for functional verification relies on a fast simulation engine for execution, so performance is of prime importance to all users.

Simulation performance can't be reduced to a single number or a single optimization, because each environment is unique in terms of the methodology deployed, the languages involved, the size of the design, and the verification environment.

Hence, the Cadence Incisive performance team has developed a handbook covering the major aspects of simulator performance. It walks users through a set of steps to better understand, and then address, their performance needs.

In many situations, localizing a performance bottleneck and resolving it takes a considerable amount of time. The application notes in the handbook help speed up this process by giving you the information you need to improve performance. In cases where you need more performance than the notes can deliver, they will guide you in creating a well-articulated, well-defined performance requirement, making it easier for Cadence to optimize Incisive for speed.

Here is a series of application notes focused on the techniques and technologies that help you improve performance with the Incisive Simulator.

Topic and Link -- Brief Description

Incisive Performance Analysis Checklist -- A flow-based checklist for analyzing performance with the Incisive Simulator.
Top Focus Areas to maximize your simulation performance -- Detailed analysis of the top causes of performance bottlenecks.
Maximize Incisive Performance with Assertions -- Assertion-related guidelines and commands to help with Incisive performance analysis.
Maximize Incisive Performance with Coverage -- Coverage-related guidelines and commands to help with Incisive performance analysis.
Analyzing Incisive Profiler for Performance -- Understanding profiler entries for action-oriented performance analysis.
Maximize Incisive Performance with GLS -- Describes command options, delays, and timing checks that can affect gate-level simulation performance.
Incisive Debug Memory Consumption -- Command options, utilities, and steps to debug system memory consumption.
Maximizing Productivity with Multi-Snapshot Incremental Elaboration (MSIE) Example -- Describes a technology that allows a large, invariant portion of the environment to be pre-elaborated and then shared across many varying tests.
Analyze UVM Environment Performance using Iprof -- Describes the use model of the Incisive Advanced Profiler (Iprof) and how to use its call-graph reports to debug performance bottlenecks in UVM-based verification environments.
Maximize Incisive Performance with SystemVerilog Randomization -- Understanding testbench structure, Tcl commands, and profiler analysis for Incisive performance with SystemVerilog.
Specman Performance Handbook -- Performance-aware coding for e testbenches; advanced command options and performance tips for working with Specman.


NOTE - To access the documents listed in the table, click a link and use your Cadence credentials to log on to the Cadence Online Support website: http://support.cadence.com/

The Cadence Online Support website, http://support.cadence.com, is your 24/7 partner for getting help and resolving issues related to Cadence software. If you are signed up for e-mail notifications, you have likely noticed new solutions, application notes (technical papers), videos, manuals, and more.

To help us improve, send us your feedback by adding a comment below or by using the feedback window at the top of each document view on http://support.cadence.com.

Let us know whether these documents helped you improve the performance of your environment -- and if so, by how much.

Sumeet Aggarwal


New Product: ARM ACE Assertion-Based Verification IP (ABVIP) Available Now


Preface: on Tuesday, December 11, we are giving a free webinar on "ACE Assertion-Based Dynamic, Formal, and Metric-Driven Verification Techniques with ABVIP".  Register today: http://goo.gl/rmBhh

As anyone who has worked with ARM's AMBA 4 AXI™ Coherency Extensions -- a/k/a the "ACE™" protocol -- knows, there are a ton of different configuration options and operational scenarios available to the designer.  Of course, this flexibility and power presents a significant verification challenge.  Hence, building on the success of our ACE Universal Verification Component (UVC) Verification IP product, we are excited to announce the immediate availability of the complementary Assertion-Based Verification IP (ABVIP) for ACE.  Written in standard IEEE SystemVerilog Assertions (SVA), this new ACE ABVIP simultaneously supports simulation-centric ABV, pure formal analysis, and mixed formal and simulation verification flows.

In this 3-minute video, R&D Product Expert Joerg Muller outlines the main capabilities of this new product -- how it offers specific configuration, runtime performance, and context-sensitive workflow advantages in the SimVision debug environment vs. competitive offerings:

If the video doesn't play, click here.


In a nutshell, this new product marries all the next-generation ABVIP capabilities we introduced earlier this year with Cadence's deep knowledge of the ACE protocol and its many configuration options.

This product is available immediately - please contact your Cadence representative for more details, or ask us more about it via the "Contact" button at the upper RHS of this page.


Team Verify

On Twitter: http://twitter.com/teamverify, @teamverify

And now you can "Like" us on Facebook too, where we post more frequent updates on formal and ABV technology and methodology developments:
http://www.facebook.com/pages/Team-Verify/298008410248534

 

Reference Links

CDNLive Silicon Valley 2012: Mirit Fromovich on automating ARM "ACE" verification



If the video fails to play, click here.

Cadence ACE VIP Accelerates Development of Multi-Processor Mobile Devices 

How to Verify ARM ACE Coherent Interconnects with UVM verification IP

Richard Goering's Industry Insights: ARM ACE Verification IP: Verifying Hardware Cache Coherency

July 2012 Product Update: New Assertion-Based Verification IP (ABVIP) Available Now 

Cadence's Verification IP Catalog

 

 

Avoid Overly Long Expressions in Specman e Code


When you write your e code, a good practice is to avoid expressions that are "overly long" even though they are completely legal. While there is no hard definition of what constitutes an overly long expression, such long expressions can lead to human errors and parser processing errors.

Very long expressions are hard to read and understand. This also makes them error prone, as an accidental syntax error in the middle of such an expression is hard to notice.

Furthermore, such an expression can lead to undesirable results from the Specman parser. It can take a long time to parse, and in some cases (especially if you use a 32-bit platform) the parser can eventually run out of memory and crash. This is all the more likely to happen if the expression contains a real syntax error (which in a shorter expression would normally lead to a syntax error message). Thus, by avoiding expressions that are too long, you benefit twice:

1.  You are less likely to introduce accidental syntax errors in the first place.

2.  You help the Specman parser to detect such errors faster in the event that they do occur.

On top of that, it is a good habit to use parentheses in long expressions where appropriate. This not only makes the code more readable, it can actually, in certain cases, make parsing faster.

One last recommendation is to break a long expression into several smaller ones. To illustrate the above recommendations, let's take a look at the following code:

print x == 0 or x == 1 or x == 2 or x == 3 or x == 4 or x == 5 or x == 6 or x == 7 or x == 8 or x == 9 or x == 10 or x == 11 or x == 12 or x == 13 or x == 14 or x == 15 or x == 16 or x == 17 or x == 18 or x == 19 or;

This expression contains a syntax error -- it has an extra "or" at the end. Because the expression is very long, the parser starts to perform a very long calculation, instead of immediately reporting the error. (In this case, the long calculation is avoided when the error is fixed.) Also, the long calculation might eventually lead to a crash.

If you add parentheses as follows, the expression becomes more readable, and you will likely not make the above syntax error in the first place:

print (x == 0) or (x == 1) or (x == 2) or (x == 3) or (x == 4) or (x == 5) or (x == 6) or (x == 7) or (x == 8) or (x == 9) or (x == 10) or (x == 11) or (x == 12) or (x == 13) or (x == 14) or (x == 15) or (x == 16) or (x == 17) or (x == 18) or (x == 19);

You can also break the expression into two smaller ones:

var tmp1 := (x == 0) or (x == 1) or (x == 2) or (x == 3) or (x == 4) or (x == 5) or (x == 6) or (x == 7) or (x == 8) or (x == 9);

var tmp2 := (x == 10) or (x == 11) or (x == 12) or (x == 13) or (x == 14) or (x == 15) or (x == 16) or (x == 17) or (x == 18) or (x == 19);

print tmp1 or tmp2;

In order to improve usability, and ease the pain of this limitation, Specman will automatically issue a warning message, where possible, when parsing an expression actually starts taking too long. This warning will not refer to an exact syntax error, even if there is one; it will, however, refer to the source line in which the problematic expression resides. You will then be able to stop parsing by pressing Control/C, and then examine your code and correct it if needed. This warning will be added in the coming HF releases of Specman, starting from 10.2.

Yuri Tsoglin

e Language team, Specman R&D

Specman: Determining a Good Value for optimal_process_size


Specman's Automatic GC Settings mechanism is aimed at eliminating the need for users to control the parameters which determine each Garbage Collection's behavior.

Setting config mem -automatic_gc_settings=STANDARD tells Specman to calculate all the parameters, to ensure that Specman's memory management system works in an optimal way.

The only parameter that is left for the user to play with is the -optimal_process_size (aka OPS). The importance of this parameter is that many of the other automatically calculated parameters are its derivatives. To set this parameter optimally, one should ask the following question:

WHAT SIZE MEMORY IMAGE DO I WANT MY PROCESS TO HAVE?

Let's say, for instance, that you have 20 GB free on your machine, and you have 2 simulations running in parallel. The optimal process size for each simulation would be 10GB, so you just assign OPS a value of 10GB.
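For reference, applying that choice is just a matter of the memory config command already mentioned above. A minimal sketch (the 10G figure is simply the number from this example; check your release's documentation for the exact value format it accepts, e.g. a byte count vs. a G/M suffix):

config mem -automatic_gc_settings=STANDARD
config mem -optimal_process_size=10G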

Now, what if you don't know? Or don't care?

Specman then sets this value itself, based on the amount of available RAM on the machine on which the Specman process will run. Note - this can be quite a big number. It might make Specman run fast (it will hardly ever perform GCs), but it consumes a lot of memory. The reasoning is that if the user does not care about process size, Specman will use the maximum available memory in order to create a smooth run.

Realistically though, most users do care about their process size and want to limit it on one hand, while giving it enough room to avoid memory issues on the other. So, is there a good way to calculate an efficient OPS, one that ensures the machine uses only as many resources as needed?

Let's start by saying it is not possible to know exactly how memory settings will affect a program run, if not already measured on EXACTLY the same run. Nevertheless, if we have such knowledge on similar runs, it may give us some hints. We can collect run information and analyze it in a way that can help us understand how efficiently the environment runs and if we need to take some actions to achieve fewer OOM (Out Of Memory) failures, better performance, more effective memory utilization, etc. However, when dealing with batch tests, choosing a specific test "as representative" for purposes of measurement can be as difficult as choosing specific memory settings, so the information must be collected on a representative group of runs, or on all the batch runs.

So what exactly do we need to search for in our log file, in order to calculate the optimal size for our environment? Let's first introduce three concepts:

1)  "Static" (Live) Specman heap - this is the basic size of Specman dynamic memory that mostly belongs to persistent objects and generally remains stable during the simulation

2) "Static" non-Specman heap - Same as 1) but for all the rest of the players in the process

3) Memory requirement for the process during copy GC - while a copy GC is operating, we are most likely to hit the peak of memory consumption, because Specman might double its memory to perform the GC. This peak can be estimated as:

 (2 X Maximum SN live heap) + (Garbage) + (Maximum non-SN heap)

Collecting the Relevant Data

Now, in order to sample these values, we need to collect the relevant memory and garbage collection information. To do so, we set the following configuration parameters in our pilot simulation (a usage sketch follows the list):

  • print_process_size=TRUE - Prints the entire process size in three stages for every GC: Before, At peak time, and After
  • show_mem_raw=TRUE - Prints Specman memory consumption, including top consumers
  • print_debug_msgs=TRUE - Prints messages, including the exact phases of GC
  • setenv SPECMAN_MEMORY_ACCOUNTING - Gives us information about Specman's Dynamic allocation.
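Putting these together, a pilot-run setup might look roughly like the following sketch. The environment variable is set in the shell before launching the run; the other three settings are assumed here to live under the memory config category -- verify the exact category and option names against your Specman release.

setenv SPECMAN_MEMORY_ACCOUNTING

config mem -print_process_size=TRUE -show_mem_raw=TRUE -print_debug_msgs=TRUE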

Finding a value for Specman Live heap

Analyzing the result log, let's estimate the static Specman heap first. There are three printouts that can help us determine this value:

1.       The last line in the copy (or disk-based) GC printout, which is the new process size "after GC":

"Done - new size is nnn bytes"

For example:

MEMORY_DEBUG: process size after GC:

MEMORY_DEBUG:   VSIZE = 1990940, RSS = 1792688

Done - new size is 1804478256 bytes.

2.       "Total size of reachable data" line in show mem "Process sizes" table

For example:

Total allocated size of numerics:       12176 +

Total allocated size of structs:       34568K

Total size of reachable data:           1719M +

Total size in free blocks:               343K +

Total size of unreachable data:          375M

Heap size:                              2096M

 

3.       Last line in OTF GC printout: "Done - total size of reachable data is nnn..."

For example:             

MEMORY_DEBUG: process size at the peak memory usage:

MEMORY_DEBUG:   VSIZE = 3653716, RSS = 3514240

Done - total size of reachable data is 1,096,707,344 bytes (plus 2,417,146,640 free).

There will be several instances of these printouts (one for each GC that occurred in the simulation), and we need to choose the highest value that was printed. Printout no. 1 above is the most accurate, so we should take the value from it, but the others should also be considered (if show mem is used and OTF GC is encountered).

Finding a value for Static non-SN heap

To estimate static non-SN heap, you need to take VSIZE after copy (or disk based) GC and subtract from it the "Done - new size is nnn bytes" value from the line below it (values obtained from OTF GC prints are not good).

MEMORY_DEBUG: process size after GC:

MEMORY_DEBUG:   VSIZE = 1990940, RSS = 1792688

Done - new size is 1804478256 bytes.

In this case: 1990940 KB (about 1944 MB) - 1804478256 bytes (about 1721 MB), which comes to roughly 223 MB.

So the maximum VSIZE after copy GC, which is supposed to be "static SN heap" + "static non-SN heap", is an estimate of the minimum requirement for the environment.

Finding a value for Dynamic allocation (Garbage)

There is one more thing to be estimated -- the amount of memory used for dynamic allocations collected during GC. It will depend on the environment and how fast it allocates "transient" objects. If it happens fast, you need to have a large buffer so that GC is not triggered too often; if there are few dynamic allocations, it can be relatively small. In most cases, it's the same order of magnitude as static SN heap.

Example Calculation

Let's look at an example of how you could come up with some numbers for OPS on a typical simulation run.

As per the above example:

"process size after GC" : VSIZE = 1990940 (~1944M)

"Done - new size is"        :  1804478256 bytes (~1721M)

Static non-SN heap         = 1944 - 1721 = 223M

Live SN heap                      = 1721M

 

Recommended Optimal Process Size = (Static non-SN heap) + 2 X (Live SN heap) + Dynamic allocation

Or

*OPS= (Static non-SN heap) + ~3X (Live SN heap)

                OPS= (223) + (3 * 1721) = 5386M

*Since we estimated Dynamic allocation to be of the same magnitude as Live SN heap, we used 3 times the value of Live SN heap. In most cases the dynamic allocation value would be lower so we can round down the result we got.  For instance, in the above example

OPS = 5386M =~ 5G.

Notes:

  • The recommended OPS will not be effective if we notice disk-based GC occurrences in the pilot simulation. In that case we should use a higher value than the OPS we calculated, in order to avoid the disk-based GC, and then re-calculate the OPS in the same manner once we have successfully avoided it.
  • If you know your environment does not use a lot of dynamic allocations (in such cases you will see that the difference between the values of Live SN heap before and after GC is small), you can change the above formula to something closer to

OPS= (Static non-SN heap) + 2X (Live SN heap)

And round it up. This way you won't encounter a situation where you have a simulation which consumes a lot of memory, but performs no GCs.

  • If you try setting the OPS to a very low value, Specman will automatically adjust it. If you set the -notify_gc_settings option to TRUE, Specman will notify you whenever it sets the optimal_process_size to a value other than the one you specified. In that case, you will see a message such as:

auto_gc_settings: Application too big, setting optimal_process_size to

760803328 which is sn_uintptr2ep(760803328) 

Summary:

Calculating the optimal OPS is a bit tricky. You want it to limit Specman usage while giving it enough space not to encounter memory issues. The above example calculation gives you no more than a recommendation, which is based on the previous run. To get a better, more realistic value, you should run several pilot simulations, and perform the above analysis on the average of those runs.

* From Specman 12.10s4 onward, this calculation is done automatically when you apply the config memory -print_process_size setting.

Avi Farjoun

Muffadal Laila 

2013 CES: Top 4 Trends Benefiting EDA


While a variety of EDA customer segments are growing, consumer electronics continues to drive the lion's share of EDA industry revenues.  Hence, many events at last week's annual Consumer Electronics Show (CES) in Las Vegas can be extrapolated as leading indicators for the EDA business.  While I couldn't personally attend CES this year, like last year my two trusted agents (specifically, Unified Communications (UC) expert David Danto of Dimension Data, and Joseph Hupcey Jr., video & communications systems architect and father of yours truly) were on the ground to field-check the myriad reports streaming in from legacy and new media.  Thus, allow me to highlight the following trends from CES 2013 that I suggest will have a big impact on EDA this year.

1 - TV's ongoing evolution: clearly the most visible product category at CES was the new crop of "UltraHD" a/k/a "4K" resolution TVs.  That's 3840 x 2160 pixels, or twice the horizontal and vertical resolution of the 1080p HDTV format, with four times as many pixels overall.  Concurrently, a lot of very pretty, very large screen OLEDs were given center stage in many booths, suggesting that after over 10 years of CES previews this vibrant, richly color-saturated display technology is finally ready for prime time.  My agents report that 4K screens are noticeably better than today's HD - it's not quite the same dramatic leap as from SD to HD - but the difference is visible enough to tempt people to upgrade if the price is right.  And thus the key question(s) revolve around volume production availability and pricing, i.e. when will the cameras, DVRs, streaming support boxes and services, and the TV sets themselves be available at the price points consumers expect?

As it turns out, many professional and even some prosumer cameras already support 4K today.  Quite a few productions are shot in 4K and after the final edit are down-sampled to standard HD (don't ask me why, but somehow 4K video down-sampled to 1080p looks richer than natively shot HD).  There are also a handful of professional-grade theater projectors that support 4K.  However, the good news from EDA's and our customers' perspective is that this is pretty much where the equipment support ends.  The entire video data flow after the editor is up for grabs - DVRs, routers, and any other apps you can think of for TV need to be re-created to support consumer UltraHD.  Given the bandwidth required to shuffle 4K frames around, clearly hardware-assisted verification products will continue to enjoy robust demand.  With the ongoing growth of apps on the TV platform, I further assert that hardware/software design and verification solutions will also see ongoing growth.  Last but not least, low-power design and verification requirements - whether from regulatory bodies or end customers themselves - will continue to be a factor in this new generation of equipment.

Bottom-line: I agree that UltraHD will inspire demand for new TVs and supporting equipment, which means many more SoCs and peripheral ICs will need to be designed and shipped.

2 - "Born Mobile": this tag line was the theme of Qualcomm's opening keynote presentation, and indeed could be applied to over half of CES where smart, mobile devices of all forms - and a plethora of supporting accessories - took up a large chunk of the exhibit hall acreage.  I see EDA being well positioned to benefit in several major categories: low power (self-explanatory), advanced node tool chain support, and design and verification IP.

At the risk of stating the obvious, the demand for increasing performance and functionality is clearly unabated, and hence the investments being made in 14nm and below are money well spent by our industry.  Another trend expertly observed in this EETimes interview of Broadcom's co-founder and CTO Henry Samueli is that almost everything on the show floor had embedded WiFi connectivity.  Beyond the opportunities for network infrastructure equipment growth, I believe this significant step toward the "Internet of Things" heralds opportunities in design and verification IP - not just for WiFi and other radio IP, but IP to enable the rapid smartening-up of previously unconnected, dumb devices like refrigerators.

3 - Born Mobile, Automotive Style: CES 2013 devoted a massive area to in-car entertainment and supporting accessories.  Such was its scale that my agents were barely able to scratch the surface of this pavilion, but they came away impressed at how this category has visibly grown year-over-year in size and scope.  It used to be all about glitzy car stereos, speakers of all shapes and sizes, and amusing arrays of blinking lights to decorate the audio installation.  Today, the offerings are all about outfitting the passenger cabin like a home entertainment center, where you can customize the standard platform with apps like any other self-respecting modern device.  The obvious point: in addition to the growth in electronics being used under-the-hood, the demand for multiple mobile entertainment centers in the driveway is good news for semiconductor growth.

4 - Standards-Based IP Enabling Clever Innovation:  Perhaps a better case in point for anticipating growth in standards-based IP and low power design & verification is the eminently practical StickNFind Bluetooth Stickers.  Simply affix their special sticker to something you often lose (car keys, TV remote control, phone, luggage), and when it goes missing you can hunt it down using their iOS or Android smartphone app to follow a radar-like display to sweep for the lost item.  Clearly products like this are enabled by the availability of high quality, standards-based design and verification IP; and in turn we can expect clever new applications like this to drive growth.

If you went to CES this year – or not -- please share your observations in the comments below, or offline.

Until next CES, may your throughput be high and your power consumption be low.

Joe Hupcey III

On Twitter: https://twitter.com/jhupcey @jhupcey

P.S. Speaking of trade shows, in the verification space the annual DVCon's clear focus on functional verification technology and methodology has made it a growing, high-value technical and trade forum.  Hence, my colleagues and fellow bloggers will be there in force February 25-28 at the DoubleTree Hotel in San Jose, CA!  In particular, I welcome you to join me at the Wednesday lunch panel, "Expert Panel: Best Practices in Verification Planning", and the Thursday tutorial entitled "Fast Track Your UVM Debug Productivity with Simulation and Acceleration" (includes coffee & lunch).  Register today!

 

Reference Links and/or Other Interesting CES 2013 reports

David Danto of Dimension Data's report on CES 2013: A View From The Road Volume 7, Number 1 -2013: International CES

SemiWiki: Battling SoCs: QCOM vs NVIDIA vs Samsung

EETimes DesignNews: CES Slideshow: The Next Big (or Little) Things

 

 

Specman: An Assumed Generation Issue and its Real Root Cause


Random generation is always a complex task, and differences in results are usually very hard to debug. Besides, generation misbehavior always rings many bells in R&D :-)

A customer reported a random stability issue, explaining that the generator (IntelliGen) generated different values with the same seed. One simulation was started from vManager, the other in a Unix shell, and they ran in different run modes (compiled vs. interpreted).

Looking into the (quite complex) environment, it turned out that the beginning of the simulation was identical, but as time advanced the results started to differ. I assume some of you have experienced similar behavior in the past.

A first look revealed that complex list manipulations were performed in many levels of nested method calls. Each list -- a list of units -- was manipulated by several list methods (sort, add, unique, etc.). The results were printed out to the screen where, after a while, the lists started to differ.

So an idea came to mind: The problem is probably not a generation issue, as the static generation was identical in all cases; rather it is a runtime issue, most likely caused by list manipulation. But how could the way the simulation was launched or the run-mode be responsible for the differences?

With no alternative, we proceeded to debug through the source code step by step, and examined the list after every manipulation. The complexity and deep nesting of the code (there were even recursive methods that touched those lists), resulted in about two days of painstaking analysis without finding a difference. Then, we hit pay dirt -- we came across the construct where the lists began to differ. Below is the sample code:
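(The sample code from the original post is not reproduced in this text. The following one-line reconstruction, with made-up names, captures the problematic construct -- a list of units sorted with "it" itself as the sort key:)

var sorted_units := my_units.sort(it);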

So where is the problem? The code sorted the list of units, but did not specify a particular field as the sort key -- the argument (it) referred to the unit itself.

Looking up the sort() method in the Incisive/Specman documentation, we found that the Note in its description seemed to suggest a possible clue.

A scan of the log files showed that the vManager run started a garbage collection at some point before the sorting action, and that the plain Specman simulation logs did not show this garbage collection. This difference in behavior was the result of different memory settings between different simulation runs.

The bottom line: garbage collection can change the physical memory addresses of the units in the list, so a sort based on these addresses can produce different results before and after such a memory operation.

Summary:

  •  The root cause for the problem was that the sorting statement contained a bad argument (the result of a copy and paste error) -- the construct was taken from code where it worked perfectly fine with a list of strings.
  •  The problem was not generation-related, even if it looked that way in the first place.
  •  More importantly, we uncovered usage of a problematic construct: The user based the sort on the physical address. This should be avoided even if garbage collection is not performed (and all the more so when garbage collection is performed).
  •  Uncovering such an issue is a perfect task for the Specman Linter. Cadence intends to enhance the linter's capabilities in that direction.

Hans Zander

Improve Debug Productivity - SimVision Video Series on YouTube


Most verification customers report that they spend over 50% of their verification effort in debug. If that describes you, check out these latest SimVision debug videos: in less than an hour of viewing, you will quickly see how SimVision can make you much more productive.

Take the time to browse through these videos.  Everyone will benefit, whether you are a new user looking for a debug solution or an experienced SimVision user looking for the new and enhanced debug functionality in our latest 12.2 release.

Cadence Debug Verification Expert, Corey Goss, has recently uploaded 13 SimVision videos on YouTube that you can view. The series focuses on a number of key debug features that support various debug flows (RTL, Testbench, SystemC/C/C++ Debug, etc.) and a common debug environment (HDL and Testbench).

You can view the entire playlist of Debug videos from the link below:

http://www.youtube.com/playlist?list=PLYdInKVfi0KYzCjnkgRgDXFJcKyQRz6eM

Let's Debug with SimVision

Kishore Karnane

Team Debug

DVCon 2013 for the Specmaniac


At the upcoming DVCon (in San Jose, CA February 25-28), Cadence will cover all aspects of our verification technologies and methodologies (full list of Cadence-sponsored events is here).  Of course, Team Specman cannot resist drawing your attention to the many activities that will feature Specman and e language-related content, or be of general relevance to Specmaniacs.  Hence, if you are going to the conference, please consider printing out the following "DVCon 2013 Guide for the Specmaniac".

* Specman-centric posters at the poster session on Tuesday from 10:30-11:30am

1P.21   "Taming the Beast: A Smart Generation of Design Attributes (Parameters) for Verification Closure using Specman", presented by Meirav Nitzan of Xilinx, Inc., with co-authors Yael Kinderman and Efrat Gavish of Cadence R&D.

1P.25   "Maximize Vertical Reuse, Building Module to System Verification Environments with UVM e", presented by Horace Chan of PMC-Sierra, Inc., with co-authors Brian Vandegriend and  Deepali Joshi also of PMC-Sierra, Inc., and Corey Goss, a Solutions Architect in Cadence R&D.

The best part about the poster session is you can easily interact with the authors - asking them questions on the fly in a way that would be awkward if they were presenting the paper in a lecture format.

* The Cadence booth at the free expo on Tuesday & Wednesday Feb. 26-27, 3:30- 6:30pm each day

As always, Specman technology is directly or indirectly a cornerstone of the various demos -- UVM, Verification IP, metric-driven verification & Enterprise Manager updates, ESL & TLM updates, etc.  This year we will be showcasing new automated debug technology - the Incisive Debug Analyzer - that works great with e/Specman testbenches. Even better: R&D leader Nadav Chazan will be present to walk through the tool with you and answer your questions.  Of course, at a relatively small show like DVCon there is often the opportunity to digress from the primary demo(s) and discuss Specman technology specifically - Nadav and other members of Team Specman will be happy to give you the highlights of the new capabilities released in Specman 12.2 and more.

* Thursday morning Feb. 28 tutorial (8:30am-Noon), "Fast Track Your UVM Debug Productivity with Simulation and Acceleration"

In this comprehensive tutorial, Specman R&D's Nadav Chazan along with hardware assisted verification expert Devinder Gill will show how you can reduce debug turnaround time of class-based, software-like environments (i.e. like an e/AOP testbench).  Specifically, they will show how to leverage low latency interactive debug techniques to improve debug efficiency, where the user has a much broader range of capabilities at their disposal.  This includes interactive features such as forward and backward source code single-stepping, searching for arbitrary values and types, and automated go-to-cause analysis. Come prepared to take plenty of notes because Nadav and Devinder will walk through many detailed examples.

* Bonus: A free lunch on "Best Practices in Verification Planning" Wednesday Feb. 27!

On the Wednesday of DVCon Cadence is hosting an expert panel on "Best Practices in Verification Planning".  Panel moderator, R&D Fellow Mike Stellfox (yes - *that* Mike Stellfox who's been with the team since Verisity days), will kick off this important discussion on how creating and executing effective verification plans can be a challenging mix of art and science that can go sideways despite the best efforts of engineers and managers.  Note that this won't be confined to RTL verification planning only -- the panel also includes experts on analog-mixed signal verification and formal analysis.

 

Panel discussion at DVCon 2012

We look forward to seeing you in-person soon!

Team Specman

 

Reference Links

The official DVCon website

Comprehensive list of Cadence-sponsored events & papers

Images from last year's show to give you an idea of what it's like, in case you have never been to a DVCon before.

DVCon 2012 video playlist 

60 second highlights video from DVCon 2012

On Twitter: http://twitter.com/teamspecman, @teamspecman

And on Facebook: http://www.facebook.com/teamspecman


DVCon 2013 for Formal and ABV Users


At the upcoming DVCon (in San Jose, CA February 25-28), Cadence will cover all aspects of our verification technologies and methodologies (full list of Cadence-sponsored events is here).  However, Team Verify would like to alert users of Cadence Incisive formal and multi-engine tools, apps, and assertion-based verification (ABV) to the following papers and posters focused on this domain.

* Session 2, Tuesday Feb. 26, 9-10:30am features two papers:

Paper 2.1, "Overcoming AXI Asynchronous Bridge Verification Challenges with AXI Assertion-Based Verification IP (ABVIP) and Formal Datapath Scoreboards".  Speaker: Chris Komar of Cadence; Authors: Bochra Elmeray - ST-Ericsson and Joerg Mueller of Cadence

Paper 2.3, "How to Succeed Against Increasing Pressure - Automated Techniques for Unburdening Verification Engineers".  Speaker: James S. Pascoe - STMicroelectronics; Authors: James S. Pascoe - STMicroelectronics, Steve Hobbs - Cadence, Pierre Kuhn - STMicroelectronics.  (Note: while it's not clear from the title, this paper covers the "Coverage Unreachability" app running on Incisive Enterprise Verifier (IEV) - more on this "app" below.) 

* Session 3, Tuesday Feb. 26, 9-10:30am (Unfortunately a conflict with paper 2.1 - flip a coin?)

Paper 3.1, "How to Kill 4 Birds with 1 Stone: In a Highly Configurable Design Using Formal to Validate Legal Configurations, Find Design Bugs, and Improve Testbench and Software Specifications"
Speaker: Saurabh Shrivastava - Xilinx, Inc.; Authors: Saurabh Shrivastava, Kavita Dangi, Mukesh Sharma - Xilinx, Inc, Darrow Chu - Cadence Design Systems, Inc.

* Poster session on Tuesday from 10:30-11:30am

1P.6, "A Reusable, Scalable Formal App for Verifying any Configuration of 3D IC Connectivity"  Speaker: Daniel Han - Xilinx, Inc., Authors: Daniel Han, Walter Sze, Benjamin Ting - Xilinx, Inc., Darrow Chu - Cadence Design Systems, Inc.

(Ed. Note.: the best part about the poster session is you can easily interact with the authors - asking them questions on the fly in a way that would be awkward if they were presenting the paper in a lecture format.)

* The Cadence booth at the free expo on Tuesday & Wednesday Feb. 26-27, 3:30- 6:30pm each day

Among the other demos available, Team Verify experts will be on hand to show you our Coverage Unreachability app, one of a number of free apps available to users of IFV and IEV.  [Ed. Note.: What do we mean by the term "app" in this context?  Verification apps in general put the focus on "problems vs. EDA technology" such that a verification app is a well-documented tool capability or methodology focused on a specific, high-value problem.  In this instance - with IFV or IEV as the platform -- the given problem is more efficiently solved using formal-based methods and/or a combination of formal, simulation, and metric-driven techniques than simulation-based methods alone.  Finally, the barrier to creating the necessary properties and/or the need for ABV expertise is significantly reduced through either automated property generation built-in to the tool(s) or pre-packaged properties (provided).]

* Bonus: A free lunch on "Best Practices in Verification Planning" Wednesday Feb. 27!

On the Wednesday of DVCon Cadence is hosting an expert panel on "Best Practices in Verification Planning".  Panel moderator and R&D Fellow Mike Stellfox will kick off this important discussion on how creating and executing effective verification plans can be a challenging mix of art and science that can go sideways despite the best efforts of engineers and managers.  Note that this won't be confined to RTL verification planning only -- the panel also includes experts on analog-mixed signal verification and formal analysis.  Specifically, the CEO of long-time Cadence partner Oski Technology, Vigyan Singhal, will be on the panel to share how advanced planning can greatly improve the efficiency and effectiveness of formal analysis and ABV.  (Recall that at the last DAC Vigyan's team successfully verified a sight-unseen DUT from NVIDIA in 72 hours.  The key to their success was resisting the enormous temptation to jump in and start running IEV, and instead taking a whole evening to thoroughly understand the design and scope out the most critical areas for analysis.)

We look forward to seeing you in-person soon!

Joe Hupcey III
for Team Verify

On Twitter: http://twitter.com/teamverify, @teamverify

And on Facebook too:  www.facebook.com/cdnsteamverify

 

Reference Links

The official DVCon site

Comprehensive list of Cadence-sponsored events & papers

Images from last year's conference to give you an idea of what it's like, in case you have never been to a DVCon before.

DVCon 2012 video playlist: http://www.youtube.com/playlist?list=PL66DB89BCDB6E841A

60 second highlights video from DVCon 2012: http://youtu.be/qEzIUX9VvOc

 

Using the 'restore -append_logs' Feature


As described in the Specman Advanced Options app note, Specman Elite supports dynamic load and reseeding. This allows the user to run the simulation up to a certain point (often until right after reset) and save the simulation. The user can then restore the simulation and run many different tests, either by changing the random seed (reseeding) or by loading additional e files which change the test, e.g., by adding constraints (dynamic load).

But many customers who use this new methodology have come across a problem. If a DUT error occurs in one of the new runs, and there is a need to debug the failure, usually the first step is to check the various log files. However, with this methodology we only have log files from the restore point and later; anything written to the log file from the original run until the save is lost. So we don't actually have the full log file, and this can make debugging more difficult.

To avoid this problem and be able to see the full log file, the user must first save the simulation together with the log files (do not worry about the size, the file is compressed). Then, when restoring the simulation, the user must add a switch that tells Specman to append the current log files to the previously saved ones.

To support this capability, the following switches were added (a brief usage sketch follows the list):

  • Specman:
    • The save command will have an additional switch: -with_logs
    • The restore command will have an additional switch: -append_logs
  • Ncsim:
    • The save command will have an additional switch: -snwithlogs
    • The restart command will have an additional switch: -snlogappend
  • Irun:
    • The command line will have an additional switch: -snlogappend
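For a standalone Specman session, the flow would look roughly like the sketch below ("my_state" is a hypothetical file name; check the save and restore command documentation for the exact argument forms):

save my_state -with_logs

restore my_state -append_logs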

So, how do you use this feature? We will show you, using the basic xor example which we shortened to 2 operations. If we run using the command:

irun xor.v -snload xor_verify.e -exit

the Specman log file will look like this:

Starting the test ...

Running the test ...

Running should now be initiated from the simulator side

  it = operation-@7: operation   of unit: sys

        ----------------------------------------------  @xor_verify

0       %a:                             0

1       %b:                             0

2       !result_from_dut:               0

  p_out$ = 0

  (it.a ^ it.b) = 0

  sys.time = 150

  it = operation-@8: operation   of unit: sys

        ----------------------------------------------  @xor_verify

0       %a:                             1

1       %b:                             1

2       !result_from_dut:               0

  p_out$ = 0

  (it.a ^ it.b) = 0

  sys.time = 350

Calling stop_run() from at line 45 in @xor_verify.

Last specman tick - stop_run() was called

Normal stop - stop_run() is completed

Checking the test ...

Checking is complete - 0 DUT errors, 0 DUT warnings.

Now let's run the example with save and restore. First we'll do the save:

irun xor.v -snload xor_verify.e -tcl

ncsim> run 200ns

ncsim> save foo -snwithlogs

ncsim> exit

Now, if we run using the command:

irun -r foo -exit

the Specman log will contain:

Restored Specman state INCA_libs/worklib/foo/v/savedir/sn_save.esv

  it = operation-@8: operation   of unit: sys

        ----------------------------------------------  @xor_verify

0       %a:                             1

1       %b:                             1

2       !result_from_dut:               0

  p_out$ = 0

  (it.a ^ it.b) = 0

  sys.time = 350

Calling stop_run() from at line 45 in @xor_verify.

Last specman tick - stop_run() was called

Normal stop - stop_run() is completed

Checking the test ...

Checking is complete - 0 DUT errors, 0 DUT warnings.

However, if we run using the command:

irun -r foo -snlogappend -exit

The Specman log will contain:

Starting the test ...

Running the test ...

Running should now be initiated from the simulator side

  it = operation-@7: operation   of unit: sys

        ----------------------------------------------  @xor_verify

0       %a:                             0

1       %b:                             0

2       !result_from_dut:               0

  p_out$ = 0

  (it.a ^ it.b) = 0

  sys.time = 150

Restored Specman state INCA_libs/worklib/foo/v/savedir/sn_save.esv

  it = operation-@8: operation   of unit: sys

        ----------------------------------------------  @xor_verify

0       %a:                             1

1       %b:                             1

2       !result_from_dut:               0

  p_out$ = 0

  (it.a ^ it.b) = 0

  sys.time = 350

Calling stop_run() from at line 45 in @xor_verify.

Last specman tick - stop_run() was called

Normal stop - stop_run() is completed

Checking the test ...

Checking is complete - 0 DUT errors, 0 DUT warnings.

 

We see that in this last run the new log output was appended to the log file from the run in which the save command was issued.

It is important to note that only Specman log files are affected by this switch; irun and ncsim log files are not affected.

Avraham Bloch

Specman R&D

P.S. Reminder: To discuss this feature, Specman in general, and the new Incisive Debug Analyzer, R&D’s Nadav Chazan will be at DVCon Feb. 26-28, 2013.  Ask for him by name in the booth, or sign-up for his tutorial on Thursday February 28 on “Fast Track Your UVM Debug Productivity with Simulation and Acceleration”.  More info: http://dvcon.org/2013_event_details?id=144-5-T

IBM and Cadence Collaboration Improves Verification Productivity


Technology leaders like IBM continuously seek opportunities to improve productivity because they recognize that verification is a significant part of the overall SoC development cycle. Through collaboration, IBM and Cadence identify, refine, and deploy verification technologies and methodologies to improve the productivity of IBM’s project teams. 

Tom Cole, verification manager for IBM’s Cores group, and I took a few minutes to reflect on verification productivity and discuss what the future holds.

Tom, can you describe the types of products your teams verify? 

Our groups develop IP cores for IBM internal and external customer SoC projects.  Among these are Ethernet, DDR, PCIe and HSS communications cores and memories. Our projects tend to be on the leading edge of performance and standards.

What are some of the verification challenges your teams face? 

Our verification challenges fall into three major categories – mixed-signal, debug, and product-level productivity.  All of our cores include PHYs, which makes mixed-signal intrinsic to their functionality, but we all know that transistor-level mixed-signal simulation is too slow for methodologies like OVM and UVM.  OVM and UVM increase productivity because they reduce the test-writing effort, but they create another challenge in debugging the enormous amount of data they produce.  A part of that data set - coverage - is a critical metric for us because it enables us to measure our verification progress. But it also leads to a capacity challenge due to the enormous data volume.

How are IBM and Cadence collaborating to address these challenges?

Several innovative projects are underway with Cadence to address these verification challenges.  For example we have applied the metric driven verification methodology as documented in Nancy Pratt's video summary. Another project that has been running for more than a year models analog circuits with digital mixed-signal models, and shows an order of magnitude performance improvement in preliminary results.  As a result, we were able to use the same models in our pre-silicon verification and in our post-silicon wafer test harness.  As industry leaders, we also share knowledge derived from our collaboration through technical papers.  One example is the SystemVerilog coding for performance paper delivered at DVCon 2012 and the constraint optimization paper we will deliver at DVCon 2013. 

What’s next for verification productivity?

Given the complexity of verification, there are several opportunities to improve productivity.  For example, a promising approach uses formal checks at the designer level to reduce the time to integrate the testbench and blocks of the design for verification.  We are currently collaborating to place these static checks in our code for reuse throughout the verification cycle.  This may catch unintended instabilities introduced by ECO design changes earlier in the verification process and further improve our overall verification productivity.

If you have questions for Tom or me, please post your comment and we’ll do our best to answer you quickly!

=Adam Sherer, Cadence

It’s Coming: Udacity CS348 Functional Hardware Verification Course Launches on March 12, 2013


On October 18, 2012 Google, NVIDIA, Microsoft, Autodesk, Cadence and Wolfram announced their collaboration with Udacity. Working with Udacity, each of the companies listed above is developing new massive open online courses (MOOCs).

The Cadence contribution is CS348 Functional Hardware Verification.

You can enroll in this course by clicking on "Add to my Courses" on this page.

https://www.udacity.com/course/cs348

Today, we are happy to announce that our course will launch on Tuesday, March 12, 2013. The full course consists of 9 units and will include industry cameos from several distinguished engineers from different companies around the world. These engineers provide additional perspective to the topics of the particular units in the course.

To give you a little taste of the course, we are releasing the first clip of the first unit today.

As you watch the video, you will notice this is going to be different.

One key aspect that is not shown in the first clip is the high level of student engagement and interaction. Besides micro-lectures, the course will contain lots of interactive quizzes and many online coding exercises to ensure the concepts are well understood and can be put into practice immediately.

We will preview some of the interactive capabilities in the next weeks.

This is the list of units:

  1. Introduction to Hardware Verification
  2. Basic stimulus modeling and generation
  3. Interfacing to the Hardware Model
  4. Monitoring and Functional Coverage
  5. Checking
  6. Aspect Oriented Programming
  7. Reuse Methodology
  8. Debugging
  9. Conclusion and Exam

The course will be completely self-paced, which means you can take it at your own pace and leisure.

Finally, the course will close with a final exam and Udacity certificate to show your performance.

Get ready to verify and check for course news on Facebook and Twitter!

Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer

Planning to Go to DVCon 2013 Next Week? If So, Don't Miss the Debug Tutorial Feb. 28th!


TUTORIAL: Fast Track Your UVM Debug Productivity with Simulation and Acceleration

Session: 5T on Thursday, Feb. 28th from 8:30AM - 12:00PM

For more details on the debug tutorial, click here

This debug tutorial will highlight how customers can reduce their debug turnaround time by employing the most efficient debug tools available. Class-based, software-oriented environments are best debugged using interactive debug techniques, where the user has a much broader range of tools at their disposal. Traditional post-process debug techniques can be valuable; however, limitations such as performance and the lack of interactive features such as source-level stepping make debugging difficult. To be more efficient in post-process debug, these restrictions must be removed. The debug techniques presented in this tutorial will provide design and verification engineers with the latest techniques and tools available in the debug space, including a novel solution that combines the best features of both interactive and post-process debug.

Novel debug methodologies that will be discussed in this tutorial will allow users to:

  • Explore their test environment for static and dynamic information
  • Step forward or backward through the simulation, or jump to a specific point in the simulation
  • Investigate possible reasons why the simulation has reached a particular state through advanced go-to-cause features
  • Filter all messages coming from any platform (HVL and HDL code) and explore the cause of the messages

Additional topics to be covered in the tutorial are:

  • Advantages of interactive debug over traditional post-process debug
  • Preparing UVM environments for hardware acceleration
  • Advanced post-process debug techniques improving debug productivity by 40 - 50%
  • Unique advantages of class-based aware debug technologies 
  • Application of debug techniques to both simulation and acceleration engines, including assertions and coverage driven verification 

We will also explain how new data access to coverage and assertions in acceleration extend these methods to catch bugs unique to system verification. In short, we will provide design and verification engineers with the latest techniques and tools available in the debug space, including solutions combining the best features of both interactive and post process debug using both simulation and acceleration engines. 

So, if Debug is a big bottleneck in your overall verification effort, do not miss this debug tutorial on Thursday, February 28th at 8:30AM in the Donner Ballroom.

Looking forward to seeing you all at DVCon!

Kishore Karnane

 

JBYOB (Just Bring Your Own Browser): Interactive Labs on Udacity CS348 Functional Hardware Verification – No Installation Required


On February 19, we announced the launch date for our Udacity MOOC course, CS348 Functional Hardware Verification, which will launch exactly one week from now, on March 12, 2013.

When we communicated the launch date, we also released the first clip of the first unit.

Now we want to give you a glimpse of one of the coolest features of this course: interactive labs that execute in the web browser.

The best way to describe it is to give you a short demo that shows you how this works, even before you can try it out yourself.

The first time I saw this it totally blew my mind. No installation, no setup of labs, everything is fully sandboxed!

Just enroll here, and when the course is live log into the course on a web browser, and you are ready to go. It's simply amazing!

Get ready to code!

Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer

DVCon 2013: Functional Verification Is EDA’s “Killer App”


With another year of record attendance, DVCon has again proven that a functional verification-focused mix of trade show and technical conference is what customers need to get their jobs done.  Here are some of the highlights I took away from this informative event:

DVCon 2013 was a one stop shop for panels, papers, posters,
live demos, and tutorials on functional verification

* Great panels on Verification Planning and Drastically Improving D&V

Two panels at the conference provided valuable food for thought in their own ways.  First, in regard to the Cadence lunch panel on "Best Practices in Verification Planning", EDA industry observer Peggy Aycinena wrote:

Sometimes magic happens at panel discussions at technical conferences, and that was the case mid-day on Wednesday at DVCon in San Jose this week, where the conversation was lively, entertaining and informative on the pedestrian, albeit foundational, topic of "Best Practices in Verification Planning."  Ironically, the hour-long conversation did not appear to be planned at all, but to be organic and spontaneous ...

Granted, I'm biased - but I have to agree wholeheartedly.  The panelists were generous in sharing their experiences with the mixture of art and science required by verification project planning, and I urge you to review either Peggy's account of the panel or Richard Goering's in-depth Industry Insights report.

Later that day "panel magic" happened again at the Industry Leaders panel on "The Road to 1M Design Starts".  To everyone's delight, the panelists embraced the spirit of brainstorming how design and verification can be made significantly (think 20x, even 100x) more efficient.  Sound impossible?  One panelist gamely recalled that not many years ago there was a "software crisis", when the best software managers could expect a net of 10 tested lines of code per day per engineer.  Fast forward to the present, and teenagers with a lot of imagination but limited programming experience are creating money-making apps on incredibly complex mobile platforms thanks to very well thought-out development tools and libraries.  The panel challenged the audience to consider the lessons of such anecdotes in increasing abstraction and automation for EDA tool providers and their customers alike.

Richard Goering covers this panel in depth here in his Industry Insights blog.

* Apps as the new EDA paradigm

At last year's DVCon one of my product teams ("Team Verify") introduced the idea of formal apps in our tutorial.  (In a nutshell, a formal app enables usage of powerful formal engines "under the hood" by an engineer who has never used formal before, to solve specific problems.)  At the time we were the only ones promoting this concept and offering the underlying product support.  What a difference a year makes -- not only have our immediate competitors adopted this approach, but the "app" term is now being applied to formal, multi-engine, and pure dynamic simulation offerings and everything in between.  Of course, it's hard to be surprised by this given the EDA-related appeal is obvious: because apps are focused on specific, painful problems -- i.e. they are customer-centric by definition and in practice -- they are a clear win for both end users and vendors.

* The e/Specman Surge

After years of having waves of Specman-related abstracts rejected seemingly out of hand, this year attendees finally got to see what Specmaniacs have been eager to share with the verification community.  One look at the posters by Meirav Nitzan of Xilinx (1P.21, Taming the Beast: A Smart Generation of Design Attributes (Parameters) for Verification Closure using Specman) and Horace Chan of PMC Sierra (1P.25, Maximize Vertical Reuse, Building Module to System Verification Environments with UVM e) makes it obvious that e and Specman usage are thriving and remain at the forefront of verification innovation.

Until next DVCon, may your power consumption be low and your throughput be high.

Joe Hupcey III

On Twitter: @jhupcey, http://twitter.com/jhupcey

Reference Links

DVCon 2013 Proceedings, http://dvcon.org/

DVCon 2013 YouTube playlist of speaker and panelist video interviews:
http://www.youtube.com/playlist?list=PLYdInKVfi0Kantj1U3H8pk9NkxFykT0rG

Richard Goering Industry Insights report: DVCon 2013 Expert Panel: How to Succeed with Verification Planning
http://www.cadence.com/Community/blogs/ii/archive/2013/03/05/dvcon-2013-expert-panel-how-to-succeed-with-verification-planning.aspx

Richard Goering Industry Insights report: DVCon 2013 Panel: 1 Million IC Design Starts - How Can We Get There?
http://www.cadence.com/Community/blogs/ii/archive/2013/03/01/dvcon-2013-panel-1-million-ic-design-starts-how-can-we-get-there.aspx

Peggy Aycinena, EDA Café: DVCon 2013: Best Practices in Verification Planning
http://www10.edacafe.com/blogs/whatwouldjoedo/2013/02/28/dvcon-2013-best-practices-in-verification-planning/

 


Launch Time – Udacity CS348 Functional Hardware Verification Hits the Web Today, March 12, 2013


Coinciding with the first day of CDNLive! Silicon Valley, our Udacity MOOC course on Functional Hardware Verification will go live today! Developing this course has been a very rewarding experience and we are happy this day has finally come.

Last week we gave you a sneak preview of the interactivity featured in the course. However, as you all know there is nothing like trying something by yourself to really get it.

So now it is your turn. Go ahead - enroll and check it out.

To give you more motivation to enroll, we are providing you another glimpse of the course. This time the clip is from unit 2, where we model packets for a data router.

Let's verify!

Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer

Specman: Getting Source Information on Macros


When you write a define-as or define-as-computed macro, you sometimes need the replacement code to contain or depend on source information about the specific macro call, including the source module and the source line number.

For example, a macro may need to print source information, or it may need to create different code when used in one module than it needs to create when used in other modules.

You can achieve this as follows.

Define-as macro

In a define-as macro, you can use the following two special kinds of replacement terms inside the replacement block:

<current_line_num>

This is replaced with the decimal numeric representation of the source line number in which the macro is called.

<current_module_name>

This is replaced with the name of the module in which the macro is called.

For example, the following macro sets the given left-hand-side expression (such as a field or a variable) to the given value, and prints an informational message reporting the change.

<'

define <my'action> "modify_and_print <field'exp> <value'exp>" as {

    <field'exp> = <value'exp>;

    out("*** The value of <field'exp> was changed to ", <value'exp>,

        " at line <current_line_num> in @<current_module_name>");

};

'>

Assume the following code is then written in a module called my_module.e:

<'

extend sys {

    !x: int;

    run() is also {

        modify_and_print x 10;

    };

};

'>

This code will assign the value 10 to the field x, and will print the following output:

*** The value of x was changed to 10 at line 5 in @my_module

 

Define-as-computed macro

In a define-as-computed macro, the following two pre-defined routines can be used to query the current source line number and the current module.

get_current_line_num(): int

This routine returns the source line number in which the macro is called.

get_current_module(): rf_module

This routine returns the reflection representation of the module in which the macro is called.

To get the module name string, similarly to <current_module_name> in define-as macros, use the get_name() method of rf_module.

For example, the following macro adds a new field of type int to the struct in the context of which it is called. The field name is constructed from the current module name and line number. However, if the module name has the "_xxx" suffix, no field is added.

<'

define <my_field'struct_member> "special_field" as computed {

    var m_name: string = get_current_module().get_name();

    if m_name !~ "/_xxx$/" then {

        result = append("f_", m_name, "_",

                get_current_line_num(), ": int");

    };

};

'>

The following code, if written in a module called some_module.e, adds two integer fields to sys: f_some_module_3 and f_some_module_4:

<'

extend sys {

    special_field;

    special_field;

};

'>

Note however that if the same code is written in a module called some_module_xxx.e, nothing is done.

Yuri Tsoglin

e Language team, Specman R&D

Incisive Debug Analyzer is a Finalist for EETimes and EDN ACE Software Product of the Year


Great news.... Incisive Debug Analyzer (IDA) is one of five finalists for the EETimes/EDN Annual Creativity in Electronics (ACE) Awards in the Software Product of the Year category. In addition to IDA, Lip-Bu Tan and Cadence are also finalists for ACE Executive of the Year and Company of the Year, respectively.

Check out the Press Release.

The awards program honors the people and companies behind the technologies and products that are changing the world of electronics. Winners will be announced April 23 during Design West in San Jose.

Companies today spend more than 50% of their verification effort in debug because bugs are hard to find at both the HDL and the testbench level.  This has created a critical market need for sophisticated debug solutions that can find bugs quickly, thereby lowering design costs and speeding time to market.

In 2012, Cadence met this market need with the introduction of the Incisive Debug Analyzer (IDA) - a new and unique multi-language, "interactive" post-process debug solution that can help customers find bugs in minutes instead of hours.  As the only debug tool in the market to deliver comprehensive and innovative debug functionality in a single, integrated, and synchronized debug environment, the new IDA can cut customer debug time by 40-50% and by more than 2X on really complex bugs.

About IDA

IDA provides sophisticated debug solutions to address RTL, testbench and SoC verification debug needs. Additionally, IDA provides an interactive debug flow in a post-process debug environment. This means that customers have all the functionality of an interactive debug flow while debugging in a post-process mode. Since users have access to all the data files, they only need to run the simulation once -- a significant time saver in debug.

IDA has several unique debug capabilities which are all tightly integrated and synchronized in a single, multi-pane debug window. Here are just a few:

  • Playback Debugger: Unique functionality which allows customers to either step or jump through time to any source code line or variable change, both forward and backward in time.
  • Cause Analysis: Intuitive, flow-oriented debug environment which presents suggestions about where to look, in order to debug.
  • SmartLog: Integrated message window that shows logfile messages from HDL, testbench, C/C++/SystemC/Assertions, etc.

Learn more about Incisive Debug Analyzer.

Cadence is taking the lead in debug, and there are loads of new features, improved ease of use, and better performance planned for Incisive debug solutions in 2013 and beyond.  Contact your Cadence representative for more information, in-depth demos/presentations, or hands-on technical workshops.

Happy Debugging!

Kishore Karnane

 

 

 

Develop for Debugability – Part 1


Debugging is the most time-critical activity of any verification engineer. Finding a bug is very often a combination of having a good hunch, experience, and the quality of testbench code that you need to analyze. Since having a good hunch and experience is something everyone needs to acquire for themselves, I am going to focus on potential code optimizations that help reduce debug time.

Encapsulate your Aspects

As in any other object-oriented language, modeling should be a planned rather than an ad-hoc process. However, as a verification engineer you are heavily reliant on others to help you in the planning process to develop your testbench. As such, you will quite often be forced to do ad-hoc programming to model a new requirement, or rewrite already existing code to meet a slight change in an already existing requirement. The UVM-e guidelines already provide a very solid basis; however even within those guidelines, your scoreboard can be very prone to becoming a dumpster for anything that you have to do on an ad-hoc basis.

You might be fine with just using your scoreboard for modeling all the RTL-to-testbench output checking. However, your testbench might have to handle more complex input-to-output transformations to provide the testbench output. This is where using the scoreboard as a dumpster for anything you can think of is a bad idea, and you should think about using a dedicated reference model to provide a well encapsulated input-to-output transformation or even an input predictor, based on the output you received.

As an e user you are in luck, because it is very easy to perform ad-hoc programming in e and avoid "the dumpster." In a series of steps, I am going to guide you through how to integrate your reference model into your block-level monitor unit.

1. Declare your scoreboard

<'

// This is just a place-holder for your scoreboard

unit my_scbd_u like uvm_base_unit {

   // Place-holder method for input-to-output transformation

   transform_received_to_expected( src_tr: src_prot_tr_s ): target_prot_tr_s is empty;

};

'>

2. Instantiate your scoreboard in the block-level monitor

<'

unit my_block_monitor_u like uvm_base_unit {

    // the scoreboard instance

    scbd: my_scbd_u is instance;

};

extend my_scbd_u {

    // reference, do not generate

    !p_block_mon: my_block_monitor_u;

   

    connect_pointers() is also {

        p_block_mon = get_enclosing_unit( my_block_monitor_u );

    };

};

'>

3. Create your reference model aspect unit and instantiate it in the monitor

<'

unit my_model_aspect_a_u like uvm_base_unit {

    // reference to your monitor unit

    !p_block_mon:   my_block_monitor_u;

    // All your fields, events, methods etc go in here

    // ...

   

    my_transformation_algorithm( src_tr: src_prot_tr_s ): target_prot_tr_s is {

        // algorithm that models the transformation from input to output

    };

   

    connect_pointers() is also {

        p_block_mon = get_enclosing_unit( my_block_monitor_u );

    };

   

    // Add a name for your model

    short_name(): string is also {

        result = "ASPECT_A";

    };

};

extend my_block_monitor_u {

    model_aspect_a: my_model_aspect_a_u is instance;

};

'>

4. Integrate the first model aspect into the scoreboard

<'

extend my_scbd_u {

    // Reference the aspect

    !p_model_aspect_a: my_model_aspect_a_u;

    

    // Add the transformation hook

    transform_received_to_expected( src_tr: src_prot_tr_s ): target_prot_tr_s is only {

        result = p_model_aspect_a.my_transformation_algorithm( src_tr );

    };

   

    connect_pointers() is also {

        p_model_aspect_a = p_block_mon.model_aspect_a;

    };

};

'>

5. Add more aspects to the verification environment

<'

extend my_scbd_u {

    // Reference another aspect

    !p_model_aspect_b: my_model_aspect_b_u;

   

    // Add the transformation hook for another model aspect

    transform_received_to_expected( src_tr: src_prot_tr_s ): target_prot_tr_s is also {

        // This could be an example of a filter that alters or removes
        // transactions, applied to the expected transaction that aspect A
        // computed above (passed in via result)

        result = p_model_aspect_b.apply_filters( result );

    };

    connect_pointers() is also {

        p_model_aspect_b = p_block_mon.model_aspect_b;

    };

};

'>
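
Note that step 5 assumes a second aspect unit has already been declared and instantiated in the block-level monitor, mirroring step 3. Here is a minimal sketch of what that could look like; the apply_filters() signature and body are placeholders of my own rather than part of the original flow, so adapt them to your environment:

<'
// Minimal sketch of the second aspect, mirroring aspect A from step 3.
// The apply_filters() signature here is an assumption.
unit my_model_aspect_b_u like uvm_base_unit {
    // reference to your monitor unit
    !p_block_mon: my_block_monitor_u;

    // example: filter or alter an already-computed expected transaction
    apply_filters( tr: target_prot_tr_s ): target_prot_tr_s is {
        // filtering / reordering algorithm goes here
        result = tr;
    };

    connect_pointers() is also {
        p_block_mon = get_enclosing_unit( my_block_monitor_u );
    };

    // Add a name for your model
    short_name(): string is also {
        result = "ASPECT_B";
    };
};

extend my_block_monitor_u {
    model_aspect_b: my_model_aspect_b_u is instance;
};
'>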

In this flow, steps 1 through 4 have to be done only once; step 5 simply extends your transformation hook method with any additional algorithms you need to add to your scoreboard.

 

By following these steps as a guideline, you can quickly add reference model aspects to your scoreboard without creating a dumpster, and that will help tremendously when you debug issues. Don't forget to extend the short_name() method in your units for your messaging!

 

Daniel Bayer

 

Develop For Debugability – Part II

Looking at Coding Styles for Debug

In this blog post we are going to discuss three different cases where coding style can make debugging easier:

1. Declarative vs. Sequential Coding

2. Method Call Depth

3. Pre-Calculating if-else Conditions

Declarative vs. Sequential Coding

When modeling your testbench you will need to write code that describes time-consuming or complex steps of some intended behavior. This will be considered sequential code.

You will also most likely need code that keeps the current state of your object accessible to other objects, as well as tracking these object states over time. This will be considered declarative code.

While developing your testbench you will find yourself asking quite often: "Should I make this declarative or sequential?" As a general guideline, reducing the number of sequential lines of code and keeping as much information as possible in the declarative code will help you to debug a lot quicker.

Specman has a superb data-browser and step-debugger and it is important to keep this in mind. When debugging code you usually want to see where the testbench and the RTL go out of sync. The common approach would be to fire up your failing simulation and set the breakpoints:

Specman> break on error

Specman> break on gen err

This will make your simulation stop on an error and open up the debugger, highlighting the precise line where the error occurred. In the ideal case, you can already tell from the error description what went wrong. That, however, is rather rare, so you should get acquainted with the current state of your testbench and gather all the declarative information you can through Specman's data browser. This gives you an understanding of where the simulation has headed, and if you tracked enough information in your declarative code, chances are good that you will understand the error. If you are fortunate, you can resolve the error right away, or at least have an informed conversation with the person who might have to fix the scenario.

However, there are still quite a few cases where you need to rerun the simulation. Here you will have to dive into the remaining sequential code and do a step-by-step debugging session. Step-debugging is very tedious and will get more cumbersome the more sequential code you have to examine. It gets even more cumbersome if you are relying on a lot of temporary variables inside your methods.
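
To make this concrete, here is a minimal sketch of keeping state in declarative fields rather than in local variables inside methods. The unit, struct, and field names are hypothetical; the point is that everything declared this way is visible in the data browser the moment the simulation stops, and the remaining sequential code to step through stays short:

<'
// Minimal sketch (hypothetical names): state lives in declarative fields
// of the unit rather than in local variables, so it is visible in the
// data browser whenever the simulation stops on an error.
struct my_tr_s {
    addr: uint;
    data: list of byte;
};

unit my_link_monitor_u like uvm_base_unit {
    // declarative state -- browsable at any breakpoint
    !last_rx_tr : my_tr_s;   // last transaction observed
    !rx_count   : uint;      // number of transactions seen so far

    // sequential code stays short; it only updates the declarative state
    collect_tr(tr: my_tr_s) is {
        last_rx_tr = tr;
        rx_count   = rx_count + 1;
    };
};
'>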

Method Call Depth

While developing sequential code you will be defining and implementing a bunch of methods. Crafting methods is usually a straightforward process. In verification you need to think about whether you need a time-consuming method (TCM) or a timeless method. As a general rule, if you are modeling event-based behavior or checks, then you need a TCM -- otherwise try to stick with a regular, timeless method.
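
For illustration, here is a minimal sketch showing both flavors side by side, continuing the hypothetical monitor unit from the previous example. The clk_rise event is an assumption; in a real environment it would be bound to the DUT clock, for example via a simple port or an HDL signal:

<'
// Minimal sketch: a timeless method for pure computation and a TCM for
// event-based collection (clk_rise binding assumed elsewhere).
extend my_link_monitor_u {
    event clk_rise;

    // timeless method -- consumes no simulation time
    compute_parity(data: list of byte): bit is {
        for each (b) in data {
            result = result ^ b[0:0];
        };
    };

    // time-consuming method (TCM) -- sampled on clk_rise
    watch_bus() @clk_rise is {
        while TRUE {
            wait cycle;
            // sample the bus and call collect_tr() here
        };
    };

    run() is also {
        start watch_bus();
    };
};
'>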

Mostly you will be developing methods for:

  • Interface and Virtual Sequences
  • Interface Monitors (not block-level monitors)
  • Bus-Functional Models (BFMs)
  • Reference Models
  • Scoreboards

Due to this separation of functionality, prescribed by UVM-e, there is already a blueprint for how to integrate these methods with each other.

One issue that can arise when developing methods is the temptation to create methods for reusability, and hence to encapsulate even trivial steps in a method and call it, instead of writing those steps out. The problem with debugging code that relies heavily on method calls is that you constantly step into new methods and lose the scope of the caller.

The opposite problem is implementing mega-monolithic methods. These are just as hard to debug and understand, since they usually carry the context of more than one specific modeled aspect.

Pre-Calculating if-else Conditions

Conditional branching constructs are at the heart of code execution. Generally there is nothing wrong with simply writing a condition directly into your if-else actions. However, complex Boolean expressions should be evaluated before the if action itself. By creating a temporary variable and assigning the Boolean evaluation to it, you gain two advantages (see the sketch after this list):

  • Create a meaningful variable name
  • Break on the if-execution with the condition already evaluated
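
Here is a minimal sketch of the pattern; the method and parameter names are hypothetical:

<'
// Minimal sketch (hypothetical names): the complex condition is evaluated
// into a well-named variable before the if action, so a breakpoint on the
// if line already shows its value.
extend my_link_monitor_u {
    check_lengths(exp_len: uint, act_len: uint, is_last: bool) is {
        var length_mismatch: bool =
            (exp_len != act_len) and not (is_last and act_len == 0);
        if length_mismatch then {
            dut_error("Expected length ", exp_len, " but got ", act_len);
        };
    };
};
'>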

To read part 1 of this blog series, click here.

Daniel Bayer
