Scoreboards and discards

A common difficulty in ASIC verification is how to scoreboard packet discards. Malformed packets, tail drops (or RED), and the impossibility of staying cycle-accurate with the internal state of the ASIC make it virtually impossible for a scoreboard to predict the outcome in the presence of saturated queues, error injection, and other DUT-internal conditions: will the packet make it through, or will it get dropped? Trying to predict the output response is a game you lose before it even starts. Instead, try to realize the truth: there is no spoon… er… the scoreboard does not have to predict the outcome; it only has to do the accounting.

For all types of discards except RED, the packet generator is the VC that knows what will happen to the packet. When the packet generator inserts a bogus TCP checksum, it knows the packet will be discarded for that reason. So instead of relying on the scoreboard to parse the packet and re-discover what the packet generator already knew, the scoreboard simply has to look the packet up in a table to know how to account for it. And you guessed it: that lookup table is filled out by the packet generator.
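
To make this concrete, here is a minimal SystemVerilog sketch of the generator side. All of the names (pkt_accounting_pkg, disposition_e, pkt_table, pkt_hash) are made up for illustration; any stable digest of the packet contents works as the key.

    // Shared packet table: written by the packet generator, read by the
    // scoreboard. The generator records the expected disposition the
    // moment it creates the packet, e.g. DISCARD_BAD_CSUM when it
    // deliberately corrupts the TCP checksum.
    package pkt_accounting_pkg;

      typedef enum { FWD, DISCARD_BAD_CSUM, DISCARD_BAD_TTL } disposition_e;

      typedef struct {
        disposition_e disposition; // what the generator expects to happen
        int unsigned  dest_port;   // where a forwarded packet should appear
      } pkt_info_t;

      // Keyed by a hash of the packet contents.
      pkt_info_t pkt_table [bit [31:0]];

      // Simple rotate-and-XOR digest of the packet bytes.
      function automatic bit [31:0] pkt_hash(byte payload[]);
        pkt_hash = '0;
        foreach (payload[i])
          pkt_hash = {pkt_hash[30:0], pkt_hash[31]} ^ payload[i];
      endfunction

      // Called by the packet generator for every packet it creates.
      function automatic void record_packet(byte payload[], pkt_info_t info);
        pkt_table[pkt_hash(payload)] = info;
      endfunction

    endpackage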

The mechanics of filling out the packet table are as simple as hashing on a property of the packet, or on the entire packet, and storing or retrieving any information you need about it: its final destination, how it should be routed, whether it should be segmented, whether it is expected to be discarded and why, etc. All the scoreboard has to do is look the packet up in the table, retrieve the associated information, and increment a counter. When the simulation ends, simply check that all the packets have been accounted for by comparing the scoreboard counts with the packet generator counts and the DUT’s internal counters. If the DUT says it discarded 10 packets because of RED, the scoreboard knows it will be off by 10 when it tries to balance the counts.
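
Continuing the sketch above, the scoreboard side reduces to a lookup, a counter, and an end-of-test balance. The red_drops argument stands in for a value read from the DUT’s internal drop counters; the register-read mechanism is assumed, not shown.

    // Scoreboard side: no prediction, only accounting.
    module scoreboard_sketch;
      import pkt_accounting_pkg::*;

      int unsigned observed_count;

      // Called for every packet seen on a DUT output port.
      function automatic void on_output_packet(byte payload[]);
        bit [31:0] key = pkt_hash(payload);
        if (!pkt_table.exists(key))
          $error("output packet not in the table, hash %h", key);
        else if (pkt_table[key].disposition != FWD)
          $error("packet that should have been discarded came out");
        else
          observed_count++;
      endfunction

      // End of simulation: every generated packet must be an observed
      // packet, an expected discard, or one of the DUT's reported drops.
      function automatic void check_balance(int unsigned generated,
                                            int unsigned red_drops);
        int unsigned expected_discards = 0;
        foreach (pkt_table[key])
          if (pkt_table[key].disposition != FWD) expected_discards++;
        if (observed_count + expected_discards + red_drops != generated)
          $error("accounting is off: %0d observed + %0d expected discards + %0d RED drops != %0d generated",
                 observed_count, expected_discards, red_drops, generated);
      endfunction
    endmodule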

Don’t ask the scoreboard to predict outcomes: it’s too hard, and the scoreboard is not the right place for it.


4 Responses to Scoreboards and discards

  1. Gopi says:

    What are the different ways to predict outcomes?
    A reference model, a user-defined task based on configs and interface scenarios… What else?

  2. Martin d'Anjou says:

    I am no longer a fan of reference models, because prediction based on random input stimulus is too hard and does not give me the coverage I want to see. Also, reference models for entire chips tend to be large, bulky, and non-portable.

    When I generate random packets, I have to 1) predict where they go, and 2) run them through a transfer function. I don’t like doing 1), predicting where the packets go, because it means I rely on randomness to get my coverage. Hardcoding directed packets is a waste of time. So instead, I decide where the packet will go before it is generated, and I generate it with the destination decision as one of the inputs to the constraint solver (see the sketch at the end of this comment), as opposed to having a random destination fall out as a consequence of solving the constraints.

    Knowing the destination before the packet is generated means I don’t have to “predict” its destination. I only need to apply the same data transformation as the DUT with a transfer function.
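
    In code, the idea looks something like this minimal sketch (the class and all the names are made up for illustration):

        class packet;
          rand bit [3:0]  dest_port;
          rand bit [15:0] length;
          constraint legal_len { length inside {[64:1518]}; }
        endclass

        module gen_sketch;
          initial begin
            packet p = new();
            // Walk the destinations deterministically so coverage does
            // not depend on the solver happening to visit every port.
            for (int planned_dest = 0; planned_dest < 16; planned_dest++) begin
              // The destination is an input to the solver, not an outcome:
              if (!p.randomize() with { dest_port == planned_dest; })
                $error("randomize failed");
            end
          end
        endmodule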

  3. David says:

    Hi Martin,
    I’m always a fan of a technique that saves verification time. Reusing knowledge from the packet generator is useful, as it saves all that reference model development (whether the reference model is a distinct entity or just built into the scoreboard).

    It does have a reuse impact though. When you move your verification IP up a level, say to the full digital top-level of your chip, the packet generator might be real RTL or C code running on a CPU. In that case, you don’t have the information from the packet generator, and you can’t reuse your verification IP.

    This is a problem I’ve hit time and time again when trying to do SoC integration verification. I get verification IP delivered from the various block-level teams, and I can’t use any of it to build even a simple correctness model for the system.

    I wondered if you had any thoughts on the overall cost trade-offs here.

    Cheers
    David

  4. Martin d'Anjou says:

    I don’t have a cost trade-off analysis of developing block-level components that are reusable at the top level vs. writing the same functional models for two different usage contexts: block level and top level. I am slowly reading Software Assessments, Benchmarks, and Best Practices by Capers Jones; maybe there is an answer in there!

    I have done both approaches though: bottom-up (develop the block level first) and top-down (develop the top level first). In my experience both are viable and both are hard. It takes some amount of planning, and someone has to lead the verification environment architecture when integration and partitioning problems come up.
