Synthesis from HDL

General comments and notes on some specific tools


Copyright (C) 2007 Fernando G. Tinetti

Modelling from HDL (no matter which one)
In all cases, both the specification itself and the module (or similar language construct) acting as the testbench are written in the HDL itself. This testbench includes the verification patterns, usually at the I/O level.
As an "added value", the specification undergoes a syntax check showing that there are no ambiguities (as in any other programming language).
The logical verification/simulation of the values and signals involved in the specification is assumed to take place at this stage.
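As a minimal Verilog sketch of this idea (the module and signal names are only illustrative, not taken from any particular tool or tutorial), both the device and the testbench that applies the I/O-level patterns are written in the same HDL:

// Design under test: a 2-to-1 multiplexer.
module mux2 (input wire a, b, sel, output wire y);
  assign y = sel ? b : a;
endmodule

// Testbench written in the same HDL, applying I/O-level patterns.
module mux2_tb;
  reg  a, b, sel;
  wire y;
  mux2 dut (.a(a), .b(b), .sel(sel), .y(y));
  initial begin
    a = 0; b = 1; sel = 0;         // expect y = a = 0
    #10 sel = 1;                   // expect y = b = 1
    #10 $display("y = %b", y);     // report the observed output
    #10 $finish;
  end
endmodule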

Synthesis from HDL (no matter which one)
1) Tools coupled with Cadence (or similar):
Synopsys
BuildGates
Synplify
Synopsys and Synplify come from third-party companies, which means they must be bought separately and "coupled". This also raises extra complications because of the Cadence operating licenses. BuildGates is not free of this kind of problem either, but at least the vendor of one tool cannot blame the other for bad behavior. Besides, Europractice seems to include BuildGates, and there are now tutorials for these tools.

2) Other tools, free to use:
Alliance
Electric
They are far simpler and, in some ways, more practical (with less setup time). However, the possible "synthesis destinations" are not known, since there is no information about libraries and/or tool functionality provided for these tools by the Europractice manufacturers.

3) Other proprietary (not necessarily free) tools:
From FPGA companies
From Cadence "competing products"
In both cases there is presumably a design cycle incorporating HDLs in one of the initial stages (or in the initial stage). FPGA companies usually provide a tool or set of tools for hardware/software development (or development kits) with which it is possible to specify, simulate, and even implement a design on a specific FPGA or FPGA family. In the case of Cadence competitors, the same problem arises as with the free tools: it is unknown how to reach the target technology/manufacturing process.

Learning an HDL
There is, in part, a common base coming from procedural programming languages, plus the essential knowledge about the technologies involved (ASICs and/or FPGAs). Everything seems to indicate that both kinds of knowledge are necessary in order to use HDLs effectively, at least for the simulation tools that all these languages have. As many authors explicitly recognize, HDLs were created, and are still primarily used, as specification languages and hardware simulation tools. These tasks by themselves reduce design/development times and make the related tasks simpler, significantly increasing productivity. However, for many years now, and as part of the automation of hardware development, synthesis tools have been incorporated for specifications written in HDLs.
HDLs are not necessarily procedural languages like C or Pascal; many authors indicate that they are closer to Ada or any other concurrent language. In fact, all these assertions are partly, but not completely, true. That is: HDLs made a great effort (well, it is not clear whether it was an effort or simply a de facto characteristic of HDLs) to assimilate constructs, syntax, and semantics from procedural languages. Ada, in turn, incorporated some structures for concurrent processing via tasks, but the Ada rendezvous is too strict with respect to what may happen, and in fact does happen, in hardware.
Actually, one of the more serious "problems" (if it can be called that) of Ada, when compared with any HDL, may be precisely that Ada incorporates concurrency. The "concurrency ideas" in every programming language tend to specify/"capture" tasks that "could" take place in parallel, but not necessarily simultaneously. This is the basic idea of concurrency in operating systems and in many smaller-scale "embedded" systems. In a way, it is difficult not to think of a processor capable of "multitasking" the tasks defined in Ada. HDLs, on the other hand, tend to assume simultaneity because of the kind of "processing" that takes place at the hardware level, and a high, explicit (and even complex) effort must be made to specify "multitasking" with the underlying idea that many processes share a processing element or processor.
More specifically, the behavior of the signals in a circuit is very difficult to "capture" (or express, or specify) in a programming language in general and in Ada in particular. That is, in fact, the reason for having defined, and for still using, specific languages (HDLs) instead of procedural, concurrent, object-oriented, functional, etc. languages.
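As a minimal Verilog sketch of this point (assuming nothing beyond standard Verilog; all names are illustrative), the following fragment describes hardware that is all "active" at the same time; none of these statements runs "before" the others, something that is awkward to mimic in a procedural or even a concurrent language:

module concurrent_example (input wire clk, input wire a, b,
                           output wire sel_out, output reg q);
  wire ab;
  // Both continuous assignments are permanently "active":
  // whenever a, b or q changes, ab and sel_out change as well.
  assign ab      = a & b;
  assign sel_out = ab ^ q;
  // This always block describes a flip-flop working
  // "at the same time" as the assignments above.
  always @(posedge clk)
    q <= ab;
endmodule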
Apart from comparisons with other "traditional" programming languages, learning a programming language is always a task one can take on. This also applies to hardware description languages, which partly have the complexities inherent in programming languages, plus the complexities of hardware specification, plus the complexities arising from this "mixture", in some way, of underlying ideas. In some way, it is necessary to make the von Neumann model maintained by procedural languages coexist (and, in fact, be specified and used) with the underlying idea of the signal processing taking place in hardware circuits (and, besides, this can be thought of and expressed at different abstraction levels, but we will not go deeper into this issue at present). This learning task seems more complicated if a "step by step" strategy is taken for understanding every structure, syntax, and semantic element of the language. All this to justify what is more or less evident: it is advisable to follow tutorials.
Available tutorials can be classified as "language-oriented" and "tool-oriented". As expected, the language-oriented ones try to show all the possibilities of the language regarding specification and simulation in a condensed and delimited way (the two goals do not seem compatible). They have the advantage of using/exemplifying with free tools and of having contents usable or replicable with different "compilers" and simulators of any HDL. The tool-oriented ones are harder to defend in terms of generality (as expected), but have some advantages:
      a) They are much more specific: they do not intend to show all the characteristics/potential of the language, but how things are done with the tool(s) described. In general, they take a simple example, specify it in the HDL, and proceed through the development cycle.
      b) They describe tools, or complex tool sets, that in a way have the same complexities as the language itself. It is better to know the tools in detail given that, when developing in a production context, a well-delimited tool set will be used rather than just any tool (for many reasons, beyond the scope of this explanation), and it is useful to know those tools as soon as possible. More explicitly: there are not many tutorials describing in detail each of the tools that will finally be used, but there are indeed many tutorials describing the language. If something in the language is not understood, it is possible to turn to the reference manuals, the books on the language (there are many of them, by well-known authors), or multiple tutorials. The same is not true for the tools to be used in a production environment.
      c) Tools are far more specific than the language, from many points of view. One of the most important is the integration of HDLs with the rest of the hardware development cycle. This integration is almost essential in order to reach synthesis from HDL.
      d) As has been well known and accepted since synthesis tools were introduced, not everything that can be specified in an HDL can be synthesized, and what can or cannot be synthesized depends on the synthesis tools. So it would be quite difficult to learn an HDL "independently" of synthesis first and only later learn its limits (there are many of them); see the sketch right after this list.
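As a minimal, hypothetical Verilog sketch of that difference (the exact rules depend on each synthesis tool), the first fragment below is typical simulation-only code, while the second describes hardware that synthesis tools generally accept:

// Simulation-only constructs: explicit delays, $display and this
// stimulus-style initial block are generally not synthesizable.
module sim_only;
  reg a;
  initial begin
    a = 0;
    #10 a = 1;              // explicit time delay
    $display("a = %b", a);  // console output, simulation only
  end
endmodule

// Synthesizable description: a simple registered AND gate.
module and_reg (input wire clk, input wire x, y, output reg z);
  always @(posedge clk)
    z <= x & y;
endmodule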

Cadence-related tutorials (or those which seem to be)
Synthesis Rapid Adoption Kit (or Synthesis RAK), 182 pp.
Verilog Datapath Extension Reference, 106 pp.
Synthesis Place-and-Route (SP&R) Flow Guide, 104 pp.
The RAK seems to be the most suitable one to begin with, given that it starts with Verilog and from there seems to go straight to synthesis (well, "straight" is a manner of speaking). To start, we would not recommend the Verilog Datapath Extension Reference because of two things: "Extension" and "Reference". For Cadence, these Verilog extensions explain the name Verilog-DP and make the language more suitable for synthesis integrated with its environment, but they are not a good starting point without knowing the language in detail. Once Verilog is known and something simple has been synthesized, this document seems very useful; in addition, being a reference manual makes it unsuitable for learning. Everything indicates that SP&R is not the first thing that comes up when thinking about development, so it would be advisable not to start with the SP&R guide (the last of the tutorials mentioned).

About Synthesis RAK
Basically, everything revolves around the bg and bgx commands/tools. We can say that bg is the original, or simpler, tool and bgx is the more advanced one. Presumably, bgx adds extra functionality specifically related to synthesis from HDL. The tutorial has four modules (a rather confusing term): the first two modules work with bg and the other two with bgx. Command list (at the OS command line):
cp -rf your_install_path/doc/syntut/RAK/RAK.tar.gz my_tutorial/.
cd my_tutorial
gzip -dc RAK.tar.gz | tar -xvf -
INSTALL_LIB
Running INSTALL_LIB may require Perl. The bg_shell command starts the console (non-graphical) interface. For the graphical interface, the command would be
bg_shell -gui
but at least the first part will be done from the command line, and that is probably better, since each step is explained in enough detail. After running bg_shell, the bg console prompt appears and work continues with commands typed at that console. These commands are shown in this page preceded by the > sign (they are commands of the TCL scripting language):
> source ../tcl/setup.tcl
> source $tcl_dir/set_globals.tcl
> source $tcl_dir/read_lib.tcl
> report_library
> source $tcl_dir/read_rtl.tcl
> do_build_generic
> check_netlist
All the "source" commands correspond to the "scripts" that are to be interpreted within bg. Actually, each command within source are bg commands. Both the report_library as well as the check_netlist commands show information, they are not necessary to synthesize or to go ahead with synthesis, but to know some details of bg as regards the session or the work being carried out. Within the tcl script read_lib.tcl it can be found the libraries to be used in the synthesis. Libraries are in .tlf format (timing library format) or .alf (ambit library format) and both formats are "compatible" with the .lib format of Synopsis. However, it should be explained that if we have a .lib from the manufacturer, it is possible to use the syn2tlf command coming with Cadence which converts a (.lib) format into the other (.tlf). The read_rtl.tcl script has a succession of bg read_verilog commands which are those incorporating a verilog description to the project to be synthesized. If the specification is made in vhdl the bg command to be used will be read_vhdl. These commands not only incorporate HDL specifications to the project to be synthesized but also check the specification syntax.
From this stage on comes what is called "test synthesis", which is roughly logic added to the design itself and used to check the correct behavior of the design from the manufacturing perspective. In principle, bg has its own way of constructing this logic without using more standard methodologies such as ATPG (Automatic Test Pattern Generation), boundary scan/JTAG, or BIST. This whole DFT (Design-For-Test) part corresponds to what bg calls Setting Test Synthesis Assertions. In the tutorial everything is done via scripts (with many different commands and their corresponding options), but it is not clear whether all these assertions must be thought out from scratch, are provided by the manufacturers, or something similar.

About Verilog - QII - Logic Analyzer
QII allows both logic and timing simulation. Logic simulation does not take any particular hardware into account (as might be expected), but the physical (timing) simulation does. In any case, the hardware (FPGA or CPLD) must be specified at the beginning of the project definition, even though it is then not taken into account in the logic simulation. It is, however, obviously necessary for the timing simulation and for programming the device itself.
Communication through the parallel port between the logic analyzer and the FPGA development board works correctly. This means that the FPGA can be programmed from the logic analyzer, and with it, it is possible to measure what happens on the FPGA development board after routing the pins. As almost every HDL book points out, not only can the device be specified, but the same HDL is usually used to specify the device that checks it, generating the inputs and analyzing the outputs if necessary. This normally corresponds to the logic simulation. In QII, everything is conceived as a direct function of the device to be implemented, where simulation does not take place through "another" device but by using the QII simulation tools. The problem is that these do not seem very complete or useful, especially because there are not many alternatives when it comes to generating input signals or analyzing output signals.
Input signals for the device to be simulated can be generated in two ways:
1) Periodic signals, of the clock signal type, with the standard parameters: frequency, "duty" (%) and phase.
2) "Manual" signals that can be input value by value in a textual manner or using the "signal editor" of QII.
Unfortunately, both ways of entering inputs seem to be interactive (at least, up to now, there is no information about any other alternative), which automatically "reduces" the possibility of a relatively exhaustive simulation, or one with variations over hundreds of input signal values. It is evident that beyond a few tens of values, interactive data entry takes too much time and increases the probability of entering wrong values, decreasing productivity. A priori, there seem to be other alternatives:
      1) Testing on the FPGA itself. The FPGA is programmed from QII with the device via the logic analyzer. The idea is that the same logic analyzer can generate the device inputs (analyzer outputs) and capture the device outputs (analyzer inputs). In its favor: the device is used/checked/tested as it will work in a production environment. Against it: signal generation hardware and software is necessary in the analyzer, and associating/checking input signal/s <---> output signal/s is relatively difficult.
      2) Generation of multiple periodic signals. In this case, the idea is to use what can actually be changed parametrically in the QII simulation signal generator. That means the input signals must be thought of in terms of
cycle - duty - phase
in order to generate, for example, all combinations of input patterns. Although the signals will not be random (as input signals in production are), an exhaustive simulation of the input possibilities is possible, assuming that exhaustive checking is feasible at all (remember that with n input signals there are 2^n possible values, that is to say, the number grows exponentially). A sketch of this idea is given after this list.
In any case, this possibility applies to the logic simulation as well as to the timing simulation, so it is not necessary to load anything into the FPGA. On the negative side, it is not easy to associate a particular input with its corresponding output.
      3) Programming the FPGA with the device and its test device. It is possible, for example, to leave only one input signal, the clock. In fact, a periodic signal (such as a clock) is, at present, the only thing known to be generatable from some logic analyzers. In this case, the advantage is that it is possible to do the logic simulation, the timing simulation and also, depending on the FPGA capacity, to program all of this so it can be checked on the FPGA itself. One of the problems is that the "device" and its testing are "restricted" to something synchronous, tied to a clock cycle (it would not be possible to make a "strictly combinational" design, like many of the first examples in Verilog tutorials). It is not clear whether this restriction is a real problem, or whether the devices built (for FPGAs in production environments) always include a clock signal from which all tasks/state changes/events/etc. take place.
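As a minimal, hypothetical Verilog sketch of the idea behind alternatives 2) and 3): a small counter driven by the single clock input enumerates all 2^n input combinations of the device under test, so the same code can be used for logic/timing simulation and, if it fits, programmed into the FPGA together with the device (the module name my_device is only a placeholder, not a real design):

module exhaustive_tb #(parameter N = 4) (input wire clk);
  reg  [N-1:0] pattern = 0;   // current input combination
  wire [N-1:0] dut_out;

  // Device under test; "my_device" is only a placeholder name.
  my_device dut (.clk(clk), .in(pattern), .out(dut_out));

  // On every clock edge, move to the next of the 2^N combinations.
  always @(posedge clk)
    pattern <= pattern + 1'b1;
endmodule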

It seems convenient to devote another web page to something like a tutorial showing the steps from the HDL specification of a circuit to the measurement of that circuit on an FPGA using a logic analyzer. That web page will also be more specific, with a complete example of the full development "cascade" starting with a specific HDL and ending on a specific FPGA and logic analyzer.

Report suggestions/errors/etc.: ftinetti @ gmail . com (remove whitespaces), with subject "HDL Synthesis"