Sunday, August 2, 2015

Dataflow Modeling

A dataflow model specifies the functionality of the entity without explicitly specifying its structure. This functionality shows the flow of information through the entity, which is expressed primarily using concurrent signal assignment statements and block statements.

Concurrent Signal Assignment Statement

Dataflow modeling describes the behavior of an entity by using the concurrent signal assignment statement.

Eg.,

Figure 6: An or gate.

entity OR2 is
port (signal A, B: in BIT; signal Z: out BIT);
end OR2;
architecture OR2 of OR2 is
begin
Z <= A or B after 9 ns;
end OR2;

The architecture body contains a single concurrent signal assignment statement that represents the dataflow of the 2-input or gate. The semantic interpretation of this statement is that whenever there is an event (a change of value) on either signal A or B (A and B are signals in the expression for Z), the expression on the right is evaluated and its value is scheduled to appear on signal Z after a delay of 9 ns. The signals in the expression, A and B, form the "sensitivity list" for the signal assignment statement.

The input and output ports have their object class, signal, explicitly specified in the entity declaration. Even if it were not specified, the ports would still be signals, since signal is the default and the only object class allowed for ports.
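For instance, the same entity could be declared without the signal keyword; the following is a sketch of the equivalent declaration, not a new design:

entity OR2 is
port (A, B: in BIT; Z: out BIT); -- ports are signals by default
end OR2;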

The architecture name and the entity name are the same. This is not a problem, since architecture bodies are secondary units while entity declarations are primary units, and the language allows a secondary unit to have the same name as its primary unit.

An architecture body can contain any number of concurrent signal assignment statements, and the ordering of these statements is not important. Concurrent signal assignment statements are executed whenever events occur on signals that are used in their expressions.

Eg.,

Figure 7: External view of a 1-bit full-adder.

entity FULL_ADDER is
port (A, B, CIN: in BIT; SUM, COUT: out BIT);
end FULL_ADDER;

architecture FULL_ADDER of FULL_ADDER is
begin
SUM <= (A xor B) xor CIN after 15 ns;
COUT <= (A and B) or (B and CIN) or (CIN and A) after 10 ns;
end FULL_ADDER;

Two signal assignment statements are used to represent the dataflow of the FULL_ADDER entity. Whenever an event occurs on signal A, B, or CIN, the expressions of both statements are evaluated; the new value of SUM is scheduled to appear after 15 ns, while the new value of COUT is scheduled to appear after 10 ns. The after clause models the delay of the logic represented by the expression.

Contrast this with the statements that appear inside a process statement. Statements within a process are executed sequentially while statements in an architecture body are all concurrent statements and are order independent.

A process statement is itself a concurrent statement; if an architecture body contained both concurrent signal assignment statements and process statements, the order of these statements would not matter either.
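As an illustrative sketch (the entity name FRAGMENT3 and the signals are hypothetical), an architecture body may freely mix a concurrent signal assignment with a process statement; swapping the order of the two concurrent statements below would not change the model:

entity FRAGMENT3 is
port (A, B: in BIT; Z: out BIT);
end FRAGMENT3;

architecture MIXED of FRAGMENT3 is
signal C: BIT;
begin
Z <= A or C; -- concurrent signal assignment statement

process (A, B) -- a process statement is itself a concurrent statement
begin
C <= A and B;
end process;
end MIXED;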

Concurrent versus Sequential Signal Assignment

Concurrent Signal Assignment Statement

The signal assignment statements that appear outside of a process are called concurrent signal assignment statements. They are event triggered, i.e., they are executed whenever there is an event on a signal that appears in their expressions.

architecture CON_SIG_ASG of FRAGMENT2 is
-- A, B and Z are signals.
begin
-- Following are concurrent signal assignment statements:
A <= B;
Z <= A;
end CON_SIG_ASG;

In architecture CON_SIG_ASG, the two statements are concurrent signal assignment statements. When an event occurs on signal B, say at time T, signal A gets the value of B after a delta delay, that is, at time T+Δ. When simulation time advances to T+Δ, signal A gets its new value, and this event on A (assuming there is a change of value on signal A) triggers the second signal assignment statement, which causes the new value of A to be assigned to Z after another delta delay, that is, at time T+2Δ.

Sequential Signal Assignment Statement

The signal assignment statements that appear within the body of a process statement are called sequential signal assignment statements. They are not event triggered and are executed in sequence relative to the other sequential statements that appear within the process.

architecture SEQ_SIG_ASG of FRAGMENT1 is
-- A, B and Z are signals.
begin
process (B)
begin
-- Following are sequential signal assignment statements:
A <= B;
Z <= A;
end process;
end SEQ_SIG_ASG;

In architecture SEQ_SIG_ASG, the two signal assignments are sequential signal assignments. Whenever signal B has an event, say at time T, the first signal assignment statement is executed and then the second, both in zero time. However, signal A is scheduled to get the new value of B only at time T+Δ (the delta delay is implicit), and Z is scheduled to be assigned the old value of A (not the value of B), also at time T+Δ.

For every concurrent signal assignment statement, there is an equivalent process statement with the same semantic meaning. The concurrent signal assignment statement:

CLEAR <= RESET or PRESET after 15 ns;

-- RESET and PRESET are signals.
-- This statement is equivalent to the following process statement:

process
begin
CLEAR <= RESET or PRESET after 15 ns;
wait on RESET, PRESET;
end process;
An identical signal assignment statement (now a sequential signal assignment) appears in the body of the process statement, along with a wait statement whose sensitivity list comprises the signals used in the expression of the concurrent signal assignment statement.

Delta Delay

In a signal assignment statement, if no delay is specified or a delay of 0 ns is specified, a delta delay is assumed. A delta delay is an infinitesimally small amount of time; it does not represent any real time and does not cause the simulation time to advance. The delta delay mechanism provides a way of ordering events on signals that occur at the same simulation time.
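For example (a small sketch; A, Z1 and Z2 are assumed to be signals of type BIT), both of the following assignments schedule their targets after a delta delay:

Z1 <= A; -- no delay specified: a delta delay is assumed
Z2 <= A after 0 ns; -- zero delay: also treated as a delta delay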

Consider the circuit shown in Fig.8,


Figure 8: Three inverting buffers in series.

entity FAST_INVERTER is
port (A: in BIT; Z: out BIT);
end FAST_INVERTER;

architecture DELTA_DELAY of FAST_INVERTER is
signal B, C: BIT;
begin -- Following statements are order independent :
Z <= not C; -- signal assignment #1
C <= not B; -- signal assignment #2
B <= not A; -- signal assignment #3
end DELTA_DELAY;

The three signal assignments in the FAST_INVERTER entity use delta delays. When an event occurs on signal A, say at 20 ns, the third signal assignment is triggered, which causes signal B to get the inverted value of A at 20 ns+1Δ. When time advances to 20 ns+1Δ, signal B changes. This triggers the second signal assignment, causing signal C to get the inverted value of B after another delta delay, that is, at 20 ns+2Δ. When simulation time advances to 20 ns+2Δ, the first signal assignment is triggered, causing Z to get a new value at time 20 ns+3Δ. Even though the real simulation time stayed at 20 ns, Z was updated with the correct value through a sequence of delta-delayed events. This sequence of waveforms is shown in Fig.9.



Figure 9: Delta delays in concurrent signal assignment statements.

A typical VHDL simulator maintains the events that are yet to occur in an event queue. The events in this queue are ordered not only by the real simulation time but also by the number of delta delays.
Fig.10 shows a snapshot of an event queue in a VHDL simulator during simulation. Each event has associated with it a list of signal-value pairs that are to be scheduled. For example, the value '0' is to be assigned to signal Z when simulation time advances to 10 ns+2Δ.



Figure 10: An event queue in a VHDL simulator.

Eg., the dataflow model for the RS latch shown in Fig.11.

Figure 11: An RS latch.

entity RS_LATCH is
port (R, S: in BIT := '1'; Q: buffer BIT := '1';
QBAR: buffer BIT := '0');
end RS_LATCH;
architecture DELTA of RS_LATCH is
begin
QBAR <= R nand Q;
Q <= S nand QBAR;
end DELTA;

At the start of simulation, both R and S have the value '1', and Q and QBAR are at '1' and '0', respectively. Assume signal R changes from '1' to '0' at 5 ns. Fig.12 shows the sequence of events that occurs as a result. After two delta delays, the circuit stabilizes, with the final values of Q and QBAR being '0' and '1', respectively.


Figure 12: Sequence of events in the RS latch.

Multiple Drivers

Each concurrent signal assignment statement creates a driver for the signal being assigned. If more than one concurrent signal assignment statement assigns to the same signal, that signal has more than one driver, and a mechanism is needed to compute the effective value of the signal.

Eg.,

Figure 13: Two drivers driving signal Z.

entity TWO_DR_EXAMPLE is
port (A, B, C: in BIT; Z: out BIT);
end TWO_DR_EXAMPLE;

architecture TWO_DR_EX_BEH of TWO_DR_EXAMPLE is
begin
Z <= A and B after 10 ns;
Z <= not C after 5 ns;
end; -- Effective value for signal Z has to be
-- determined: not a legal VHDL model.

Here, there are two gates driving the output signal Z. The value of Z is determined by using a user-defined resolution function that considers the values of both the drivers for Z and determines the effective value.

architecture NO_ENTITY of DUMMY is
begin
Z <= '1' after 2 ns, '0' after 5 ns, '1' after 10 ns;
Z <= '0' after 4 ns, '1' after 5 ns, '0' after 20 ns;
Z <= '1' after 10 ns, '0' after 20 ns;
end NO_ENTITY;

In this case, there are three drivers for signal Z. Each driver has a sequence of transactions where each transaction defines the value to appear on the signal and the time at which it is to appear.

The resolution function resolves the value for the signal Z from the current value of each of its drivers as shown in Fig. 14.

The value of each driver is an input to the resolution function and based on the computation performed within the resolution function, the value returned by this function becomes the resolved value for the signal.

The resolution function is user-written and may perform any computation. It is not restricted to performing a wired-and or a wired-or operation; it could, for example, be used to count the number of events on a signal.


 Figure 14: Resolving signal drivers.

A signal with more than one driver must have a resolution function associated with it; otherwise, it is an error. Such a signal is called a resolved signal. A resolution function is associated with a signal by specifying its name in the signal declaration.

Eg.,  
signal BUSY: WIRED_OR BIT;

is one way of associating the resolution function, WIRED_OR, with the signal BUSY. No arguments need be specified, since by default, the arguments for the function are the current values of all the drivers for that signal.

The TWO_DR_EXAMPLE entity is, therefore, incorrect; a resolution function must be specified for port Z such as

port (A, B, C: in BIT; Z: out WIRED_OR BIT);

Alternatively, a resolved subtype can be declared by including the name of the resolution function in a subtype declaration; signals can then be declared to be of that subtype.

Eg.,
subtype RESOLVED_BIT is WIRED_OR BIT;

signal BUSY: RESOLVED_BIT;

The resolved signal Z in the TWO_DR_EXAMPLE entity can now be specified as

port (A, B, C: in BIT; Z: out RESOLVED_BIT);

The semantics of when a resolution function is invoked are as follows. Whenever an event occurs on a resolved signal, the resolution function associated with that signal is called with the values of all its drivers. The return value from the resolution function becomes the value for the resolved signal.

In the example of architecture NO_ENTITY, the resolution function is invoked at time 2 ns with the driver values '1', '0', and '0' (drivers 2 and 3 have '0' because that is assumed to be the initial value of Z). The WIRED_OR function is performed, and the resulting resolved value of '1' is assigned to Z at 2 ns. Signal Z is scheduled to have another event at 4 ns, at which time the driver values '1', '0', and '0' are passed to the resolution function, which returns the value '1' for signal Z. At time 5 ns, the driver values '0', '1', and '0' are passed to the resolution function, which returns the value '1'. At 10 ns, the driver values '1', '1', and '1' are passed to the resolution function, which again returns '1'. Finally, at time 20 ns, the driver values '1', '0', and '0' are passed to the resolution function to determine the effective value for signal Z, which is '1'.

The resolution function has only one input parameter, which is a one dimensional unconstrained array. The input parameter type and the return type are the same type as the signal.

The function typically computes a value from the various driver values, each element of the input array corresponding to one of the driver values. It should be noted that the identity of the driver is lost in the input array, i.e., there is no way of knowing which driver is associated with which element of the input array.

Eg., a WIRED_OR function that can be used as a resolution function.

function WIRED_OR (INPUTS: BIT_VECTOR) return BIT is
begin
for J in INPUTS'RANGE loop
if INPUTS(J) = '1' then
return '1';
end if;
end loop;
return '0';
end WIRED_OR;

A function is recognized as a resolution function if it is associated with a signal in the signal declaration.

The predefined attribute RANGE of an array object returns the index range of the specified array object.

Eg., if there are four drivers when the WIRED_OR function is called, INPUTS'RANGE returns the range "0 to 3".

Drivers are also created for signals that are assigned within a process. The one difference is that irrespective of how many times a signal is assigned a value inside a process, there is only one driver for that signal in that process. Therefore, each process will create at most one driver for a signal.

If a signal is assigned a value using multiple concurrent signal assignment statements (which can appear only outside a process), an equal number of drivers is created for that signal.
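A small sketch of this rule (the entity DRIVER_EX is hypothetical, and the WIRED_OR resolution function from the earlier example is assumed to be visible): the process below contributes only one driver for Z regardless of how many assignments to Z it contains, while the concurrent assignment contributes a second driver, so Z must be a resolved signal.

entity DRIVER_EX is
port (X, Y: in BIT; Z: out WIRED_OR BIT); -- Z is resolved: it will have two drivers
end DRIVER_EX;

architecture DRIVERS of DRIVER_EX is
begin
process (X, Y) -- this process creates exactly one driver for Z
begin
Z <= X;
Z <= X and Y; -- same driver as the assignment above
end process;

Z <= not Y; -- a second driver for Z
end DRIVERS;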

Conditional Signal Assignment Statement

The conditional signal assignment statement selects different values for the target signal based on the specified, possibly different, conditions (similar to an if statement).

A typical syntax for this statement is

target-signal <= [ waveform-elements when condition else ]
[ waveform-elements when condition else ]
. . .
waveform-elements;

Whenever an event occurs on a signal used either in any of the waveform expressions or in any of the conditions, the conditional signal assignment statement is executed by evaluating the conditions one at a time. For the first true condition found, the corresponding value (or values) of the waveform is scheduled to be assigned to the target signal.

Eg.,
Z <= IN0 after 10 ns when S0 = '0' and S1 = '0' else
IN1 after 10 ns when S0 = '1' and S1 = '0' else
IN2 after 10 ns when S0 = '0' and S1 = '1' else
IN3 after 10 ns;

Here, the statement is executed any time an event occurs on signals IN0, IN1, IN2, IN3, S0, or S1. The first condition (S0='0' and S1='0') is checked; if false, the second condition (S0='1' and S1='0') is checked; if false, the third condition is checked; and so on. Assuming S0='0' and S1='1', then the value of IN2 is scheduled to be assigned to signal Z after 10 ns.

For a given conditional signal assignment statement, there is an equivalent process statement that has the same semantic meaning, which has exactly one if statement and one wait statement within it.

The sensitivity list of the wait statement is the union of the signals in all the waveform expressions and the signals referenced in all the conditions.

The equivalent process statement for the conditional signal assignment example above is

process
begin
if S0 = '0' and S1 = '0' then
Z <= IN0 after 10 ns;
elsif S0 = '1' and S1 = '0' then
Z <= IN1 after 10 ns;
elsif S0 = '0' and S1 = '1' then
Z <= IN2 after 10 ns;
else
Z <= IN3 after 10 ns;
end if;
wait on IN0, IN1, IN2, IN3, S0, S1;
end process;

Selected Signal Assignment Statement

The selected signal assignment statement selects different values for a target signal based on the value of a select expression (similar to a case statement).

A typical syntax for this statement is

with expression select -- This is the select expression.
target-signal <= waveform-elements when choices,
waveform-elements when choices,
waveform-elements when choices;

Whenever an event occurs on a signal in the select expression or on any signal used in any of the waveform expressions, the statement is executed. Based on the value of the select expression that matches the choice value specified, the value (or values) of the corresponding waveform is scheduled to be assigned to the target signal.

Note: the choices are not evaluated in sequence. All possible values of the select expression must be covered, and each value may be covered by at most one choice.

Values not covered explicitly may be covered by an "others" choice, which covers all such values. The choices may be a logical "or" of several values or may be specified as a range of values.
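For example, a small hypothetical fragment (the type STATE and the signals are assumed) showing an "others" choice and a choice formed as a logical "or" of values:

type STATE is (IDLE, LOAD, RUN, HALT);
signal CURRENT_STATE: STATE;
signal BUSY: BIT;
. . .
with CURRENT_STATE select
BUSY <= '1' when LOAD | RUN, -- a choice may "or" several values together
'0' when others; -- covers the remaining values IDLE and HALT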

Eg.,
type OP is (ADD, SUB, MUL, DIV);
signal OP_CODE: OP;
. . .
with OP_CODE select
Z <= A + B after ADD_PROP_DLY when ADD,
A - B after SUB_PROP_DLY when SUB,
A * B after MUL_PROP_DLY when MUL,
A / B after DIV_PROP_DLY when DIV;

Whenever an event occurs on signals OP_CODE, A, or B, the statement is executed. Assuming the value of the select expression, OP_CODE, is SUB, the expression "A - B" is computed and its value is scheduled to be assigned to signal Z after SUB_PROP_DLY time.

For every selected signal assignment statement, there is also an equivalent process statement with the same semantics. In the equivalent process statement, there is one case statement that uses the select expression to branch. The sensitivity list of the wait statement comprises all the signals in the select expression and in the waveform expressions.

The equivalent process statement for the previous example is

process
begin
case OP_CODE is
when ADD => Z <= A + B after ADD_PROP_DLY;
when SUB => Z <= A - B after SUB_PROP_DLY;
when MUL => Z <= A * B after MUL_PROP_DLY;
when DIV => Z <= A / B after DIV_PROP_DLY;
end case;
wait on OP_CODE, A, B;
end process;

Block Statement

A block statement is a concurrent statement that can be used to
1. disable signal drivers by using guards,
2. limit signal scope, and
3. represent a portion of a design.

A block statement itself has no execution semantics but provides additional semantics for statements that appear within it.

The syntax of a block statement is

block-label: block [ ( guard-expression ) ]
[ block-header]
[ block-declarations]
begin
concurrent-statements
end block [ block-label ];

The block-header, if present, describes the interface of the block statement to its environment. Any declarations appearing within the block are visible only within the block, i.e., between block . . . end block. Any number of concurrent statements, possibly none, can appear within a block. Block statements can be nested, since a block statement is itself a concurrent statement. The block label at the beginning of the block statement is required; the label at the end of the block statement is optional and, if present, must be the same as the label used at the beginning.
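As a small sketch of use 2 above (limiting signal scope; the block and signal names are hypothetical, and A, B, C, Z are assumed to be signals visible in the enclosing architecture body), a block without a guard expression simply makes its declarations local:

B0: block
signal INT: BIT; -- INT is visible only inside block B0
begin
INT <= A and B;
Z <= INT or C;
end block B0;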

If a guard-expression appears in a block statement, there is an implicit signal called GUARD of type BOOLEAN declared within the block. The value of the GUARD signal is always updated to reflect the value of the guard expression. The guard expression must be of type BOOLEAN.

Signal assignment statements appearing within the block statement can use this GUARD signal to enable or disable their drivers.

Eg., a gated inverter.

B1: block (STROBE = '1')
begin
Z <= guarded not A;
end block B1;

The signal GUARD that is implicitly declared within block B1 has the value of the expression (STROBE = '1'). The keyword guarded can optionally be used with signal assignment statements within a block statement. This keyword implies that only when the value of the GUARD signal is true (i.e., the guard expression evaluates to true) is the value of the expression "not A" assigned to the target signal Z.

If GUARD is false, events on A do not affect the value of signal Z; that is, the driver to Z for this signal assignment statement is disabled, and signal Z retains its previous value. The block statement is therefore very useful in modeling hardware elements that trigger on certain events, such as flip-flops and clocked logic.

The only concurrent statements whose semantics are affected by the enclosing block statement are the guarded assignments, i.e., the signal assignment statements that use the guarded option.

The modified semantics are as follows: whenever an event occurs (an event is a change of value) on any signal used in the expression of a guarded assignment or on any signal used in the guard expression, the guard expression is evaluated.

If the value is true, the signal assignment statement is executed and the target signal is scheduled to get a new value. If the value of the guard expression is false, the value of the target signal is unchanged.

Every guarded assignment has an equivalent process statement with identical semantics.

Eg.,
BG: block (guard-expression)
signal SIG: BIT;
begin
SIG <= guarded waveform-elements;
end block BG;
-- The equivalent process statement for the guarded assignment is:
BG: block (guard-expression)
signal SIG: BIT;
begin
process
begin
if GUARD then
SIG <= waveform-elements;
end if;
wait on signals-in-waveform-elements, GUARD;
end process;
end block BG;

The signal GUARD, even though implicitly declared, can be used explicitly within the block statement.

Eg.,
B2: block ((CLEAR = '0') and (PRESET = '1'))
begin
Q <= '1' when ( not GUARD ) else '0' ;
end block B2;

Here, the signal assignment in the block statement is not a guarded assignment, and hence, the driver to signal Q is never disabled. However, the value of Q is determined by the value of the GUARD signal because of its explicit use in the signal assignment statement.

The value of the GUARD signal corresponds to the value of the guard expression "(CLEAR = '0') and (PRESET = '1')". This signal assignment statement is executed any time an event occurs on either of the signals CLEAR or PRESET.

It is also possible to explicitly declare a signal called GUARD, define an expression for it, and then use it within a guarded assignment.

Eg.,
B3: block
signal GUARD: BOOLEAN;
begin
GUARD <= (CLEAR = '0') and (PRESET = '1');
Q <= guarded DIN;
end block B3;

Eg., a 4 * 1 multiplexer modeled using block statements (only the data inputs selected by S = "00" and S = "01" are shown).

use WORK.RF_PACK.WIRED_OR;
entity MUX is
port (DIN: in BIT_VECTOR(0 to 3); S: in BIT_VECTOR(0 to 1);
Z: out WIRED_OR BIT);
end MUX;
architecture BLOCK_EX of MUX is
constant MUX_DELAY: TIME := 5 ns;
begin
B1: block (S = "00")
begin
Z <= guarded DIN(0) after MUX_DELAY;
end block B1;
B2: block (S = "01")
begin
Z <= guarded DIN(1) after MUX_DELAY;
end block B2;
end BLOCK_EX;

Note: a resolution function is needed for signal Z since it has more than one driver. This function, WIRED_OR, is assumed to exist in a package RF_PACK that resides in library WORK.
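A sketch of what the package RF_PACK referred to by the use clause might contain (the package itself is not shown in the original; the function body is the WIRED_OR function given earlier):

package RF_PACK is
function WIRED_OR (INPUTS: BIT_VECTOR) return BIT;
end RF_PACK;

package body RF_PACK is
function WIRED_OR (INPUTS: BIT_VECTOR) return BIT is
begin
for J in INPUTS'RANGE loop
if INPUTS(J) = '1' then
return '1';
end if;
end loop;
return '0';
end WIRED_OR;
end RF_PACK;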

Eg., a rising-edge triggered D-type flip-flop.

entity D_FLIP_FLOP is
port (D, CLK: in BIT; Q, QBAR: out BIT);
end D_FLIP_FLOP;

architecture DFF of D_FLIP_FLOP is
begin
L1: block (CLK = '1' and (not CLK'STABLE))
signal TEMP: BIT;
begin
TEMP <= guarded D;
Q<= TEMP;
QBAR <= not TEMP;
end block L1;
end DFF;

The guard expression uses the predefined attribute STABLE. CLK'STABLE is an implicit signal of type BOOLEAN that is true as long as signal CLK has had no event in the current delta. This guard expression therefore implies a rising clock edge.

The signal TEMP declared in block L1 has its scope restricted to within the block. Of the three signal assignments, only the first is a guarded assignment and, hence, controlled by the guard expression. The other two signal assignments are not controlled by the guard expression and are triggered purely by events occurring on signals in their expressions.

When a rising clock edge appears on the signal CLK, say at time T, the value of D is assigned to signal TEMP after a delta delay, that is, at time T+Δ. If the value of TEMP is different from its previous value, the assignments to Q and QBAR are triggered, causing these signals to get new values after another delta delay, that is, at time T+2Δ.
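As an aside (a sketch, not part of the original model), the same rising-edge condition is commonly written inside an ordinary process using the predefined attribute EVENT, which is true in the simulation cycle in which CLK changes:

process (CLK)
begin
if CLK'EVENT and CLK = '1' then -- equivalent to (not CLK'STABLE) and CLK = '1'
Q <= D;
QBAR <= not D;
end if;
end process;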

Concurrent Assertion Statement

A concurrent assertion statement has exactly the same syntax as a sequential assertion statement. An assertion statement is concurrent or sequential by virtue of where it appears in the model: if it appears inside a process, it is a sequential assertion statement and is executed sequentially with respect to the other statements in the process; if it appears outside a process, it is a concurrent assertion statement.

The semantics of a concurrent assertion statement are that whenever an event occurs on a signal in the boolean expression of the assertion statement, the statement is executed.

Eg., a concurrent assertion statement used in an SR flip-flop model to check that the input signals R and S are never simultaneously '0'.
entity SR is
port (S, R: in BIT; Q, NOTQ: out BIT);
end SR;
architecture SR_ASSERT of SR is
begin
assert (not(S = '0' and R = '0'))
report "Not valid inputs: R and S are both low"
severity ERROR;
-- Rest of model for SR flip-flop here.
end SR_ASSERT;


Any time an event occurs on either of the signals S or R, the assertion statement is executed and the boolean expression is checked. If it is false, the report message is printed and the severity level is reported to the simulator for appropriate action.
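Following the pattern used for the other concurrent statements, a sketch of the equivalent process statement for this concurrent assertion (the wait statement is sensitive to the signals in the boolean expression):

process
begin
assert (not(S = '0' and R = '0'))
report "Not valid inputs: R and S are both low"
severity ERROR;
wait on S, R;
end process;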
