Avionics Industry’s Growing Need for TLM

May 31, 2023

Avionic systems are increasingly employing FPGAs and SoC FPGAs with high-speed interfaces such as PCIe and Ethernet to deliver greater performance and reliable connectivity. However, if the underlying FPGA design needs to demonstrate development assurance based on DO-254 / ED-80, verification becomes very challenging. Janusz Kitel explains the problems and says transaction-level modeling is the answer.

The ubiquity and standardization of PCIe, Ethernet and other high-speed serial interfaces – plus their availability within FPGAs and SoC FPGAs as embedded hard IP – have made them very popular in military avionics and aerospace, including in support of safety-critical functionality. In addition, FPGA vendors have made available some excellent development tools for device configuration and integration.

The above are all of considerable benefit to design engineers. However, the increasing use of multiple high-speed serial interfaces in safety-critical applications comes at a price: for certification purposes it must be proven that the devices function as intended and with high reliability. Unfortunately, that is difficult to do, for three reasons:

  • Physical (in-hardware) testing of the FPGA with its high-speed interfaces on the target circuit board can produce non-deterministic responses.
  • There is a lack of FPGA input controllability and output visibility.
  • The avionics industry struggles to adopt appropriate verification techniques and methodologies as quickly as (say) the commercial sector, which leads to significant project delays and costs.

Board-Level Testing

Simulation is an important activity in the verification process. However, the Design Assurance Guidance for Airborne Electronics Hardware (RTCA DO-254 / ED-80) regards simulation as analysis only, since simulation uses models and the simulated environment is always ideal.

This fact becomes obvious when we consider something like an FPGA with an embedded PCIe block (see figure 1). Internally, the FPGA fabric (where the functions designed into the programmable logic reside) communicates with the PCIe block via an AXI bus.

In many situations, simplified BFMs (bus functional models) can be used for simulation purposes. Alternatively, the entire PCIe block is bypassed and only the AXI interface is exercised during simulation.

RTCA DO-254 / ED-80 (section 6.3.1) guidance states that real hardware must be tested in its intended operational environment. The standard test approach is to conduct board-level testing in the laboratory with the use of specialized equipment such as test vector generators, logic analyzers and oscilloscopes.

For today’s level of integration and complexity, board-level testing does not allow all FPGA-level requirements to be verified. This is caused in part by limited access to I/O pins, and in part by the physical characteristics of the high-speed interfaces: differential signaling, encoded information, and strict impedance matching.

For these reasons, RTCA DO-254 / ED-80 guidance allows augmenting board-level testing with results obtained from tests on hardware items (components) in isolation.

Hardware Test Equipment

Again, specialized test equipment is needed to ensure the DUT is tested with the target frequencies (clocks). All test vectors must be applied at speed to the DUT and its responses must be captured and saved for further analysis or comparison against expected results.

As for where those expected results might come from, it makes sense for these to be the results obtained through simulation, which can be used to verify almost all of the DUT’s functional requirements.

In cases where test vectors change the I/O pins relatively slowly, and the FPGA design is controlled by a single clock, analyzing the device response at the bit level is quite simple. However, when the FPGA design includes multiple asynchronous clock domains, supports several high-speed serial interfaces, and most likely contains an embedded processor core, variable delays in the real hardware (along with clock frequency and phase deviations) can produce non-deterministic responses.

The analysis of such non-deterministic results is very complicated. Firstly, it is very difficult to differentiate device behavior that is still within spec from truly unexpected behavior. Secondly, it is impossible to automate the process of comparing verification results against expected results.

In most cases, the non-deterministic device responses are merely delayed or reordered and can be considered within spec. Accordingly, much time can be spent proving and documenting discrepancies that are in fact valid.

Transaction-Level Modeling

A solution to the problem is to verify at a higher level of abstraction using TLM (transaction-level modeling), a standardized methodology that is very popular in the commercial ASIC industry. Essentially, if part of the design’s job is to send a packet of data that should arrive intact and within a specified timeframe, then that is pretty much all that matters.

A transaction is a single conceptual transfer of high-level data or a control instruction, and is defined by a begin time, an end time, and attributes (relevant information associated with the transaction). Figures 3a and 3b show, respectively, the analysis of delayed and reordered transactions for a PCIe interface, using TLM.
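As a minimal sketch of that definition (written in C++ with illustrative names, not taken from any particular TLM library or tool), a transaction might be represented as a plain record:

    #include <cstdint>
    #include <string>
    #include <vector>

    // Minimal sketch of a transaction: a begin time, an end time, and the
    // attributes that carry the high-level information. All names here are
    // illustrative assumptions, not any specific library's API.
    struct Transaction {
        uint64_t begin_ns;             // when the transfer started
        uint64_t end_ns;               // when the transfer completed
        std::string kind;              // e.g. "MemWr" or "MemRd"
        uint64_t address;              // attribute: target address
        std::vector<uint8_t> payload;  // attribute: the data carried
    };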

At the transaction level – whether we are considering PCIe, Ethernet or even lower-speed serial interfaces – the implementation details can be hidden for verification purposes, even when multiple buses and asynchronous clocks are used.
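To see why this helps with the delayed and reordered responses described earlier, consider a hedged sketch (building on the illustrative Transaction record above) of a transaction-level comparison: observed transactions are matched against expected ones by content, so reordering and in-spec delays no longer register as failures.

    #include <algorithm>

    // Match by content only; absolute timing and arrival order are ignored.
    bool matches(const Transaction& a, const Transaction& b) {
        return a.kind == b.kind && a.address == b.address &&
               a.payload == b.payload;
    }

    // True if every observed transaction pairs with exactly one expected
    // transaction and completed within an in-spec duration bound.
    bool allExpectedObserved(std::vector<Transaction> expected,  // by value: consumed
                             const std::vector<Transaction>& observed,
                             uint64_t max_duration_ns) {
        for (const auto& obs : observed) {
            auto it = std::find_if(expected.begin(), expected.end(),
                [&](const Transaction& exp) {
                    return matches(exp, obs) &&
                           (obs.end_ns - obs.begin_ns) <= max_duration_ns;
                });
            if (it == expected.end()) return false;  // truly unexpected behavior
            expected.erase(it);                      // each expectation used once
        }
        return expected.empty();                     // nothing left unmatched
    }

A check of this kind is straightforward to automate, which is exactly what bit-level comparison of non-deterministic responses could not offer.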

However, let’s not forget that safety-critical projects must also be tested with invalid data and under out-of-range scenarios. In many cases, designers are unable to predict the design’s response, so the behavior is investigated during the verification phase to determine whether or not it is acceptable. Again, TLM makes the analysis much easier.

Implementing TLM

Using TLM, the testbench works with messages, but the design is still verified with bit-level signals. In the simulation world, the use of BFMs (mentioned earlier) for modeling interfaces is very popular, but they are not synthesizable and cannot be reused in the real hardware. What is needed is a new, synthesizable element called a transactor. A transactor connects transaction-level interfaces to pin-level interfaces and translates each high-level message into bit-level (pin) wiggles.
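Conceptually, the translation a transactor performs looks like the sketch below. A real transactor is written in synthesizable HDL; this C++ fragment (reusing the includes from the earlier sketch, with assumed SPI-like pin names) only illustrates the message-to-wiggles expansion.

    // Conceptual sketch of the transactor's job: expand a high-level message
    // into the per-clock pin values ("wiggles") that drive the DUT.
    struct PinState { bool sclk; bool mosi; bool cs_n; };

    std::vector<PinState> toPinWiggles(const std::vector<uint8_t>& message) {
        std::vector<PinState> wiggles;
        wiggles.push_back({false, false, true});        // idle, chip-select high
        for (uint8_t byte : message) {
            for (int bit = 7; bit >= 0; --bit) {        // MSB-first serialization
                bool level = (byte >> bit) & 1;
                wiggles.push_back({false, level, false}); // set data, clock low
                wiggles.push_back({true,  level, false}); // rising edge latches bit
            }
        }
        wiggles.push_back({false, false, true});        // return to idle
        return wiggles;
    }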

Another important aspect of TLM is the use of an untimed testbench, a.k.a. a transactional testbench (see figure 4). It focuses on functionality (messages) rather than implementation (signals), and the test scenarios are implemented by sending request messages and waiting for the corresponding responses.
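A hedged sketch of what such a scenario looks like follows; the sendRequest hook is a hypothetical placeholder for whatever message transport the environment provides (a BFM in simulation, a transactor in hardware).

    // Hypothetical transport hook: in simulation this would drive a BFM; in
    // hardware, the transactor. Stubbed here only so the sketch is complete.
    Transaction sendRequest(const Transaction& req) {
        return req;  // placeholder; a real implementation returns the DUT's response
    }

    // An untimed test scenario: only messages - no clocks, no pin wiggles.
    bool testMemoryWriteRead() {
        Transaction wr{0, 0, "MemWr", 0x1000, {0xDE, 0xAD, 0xBE, 0xEF}};
        sendRequest(wr);                    // request: write four bytes

        Transaction rd{0, 0, "MemRd", 0x1000, {}};
        Transaction rsp = sendRequest(rd);  // request: read them back

        return rsp.payload == wr.payload;   // pass if the data arrived intact
    }

Note that nothing in the scenario refers to clocks, resets, or individual pins, which is why the same scenario can drive either a simulation or the real device.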

A great advantage here is that a transactional testbench can consist of subprograms written in any HDL or even a programming language like C. See figure 5.

A transactional testbench is much easier to maintain and analyze, which makes it valuable from a DO-254 perspective. It also simplifies the verification of multiple high-speed serial interfaces (as well as low-speed ones), making the overall verification more robust.

User Defined Transactions

It must be noted that with TLM the whole design can be verified using transactions. However, while BFMs are available for standard interfaces like SPI, I2C, ARINC 429, and PCIe, the DUT’s remaining pins must still be verified. To do this they should be organized into GPIO (general-purpose input/output) interfaces supporting user-defined messages; a sketch of such a message follows.
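As a hedged illustration (assuming the leftover pins have been grouped into a single logical GPIO interface), a user-defined message might look like this:

    // Illustrative user-defined message for pins grouped into a GPIO
    // interface: either drive levels onto selected outputs or sample inputs.
    struct GpioMessage {
        enum class Op { Drive, Sample } op;  // what the message asks for
        uint32_t mask;   // which pins in the group the message touches
        uint32_t value;  // levels to drive (ignored for Sample requests)
    };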

User-defined transactions are also needed when verifying a device containing embedded blocks, as presented in figure 1. In such cases, to reuse the simulation testbench in hardware testing, the AXI BFM and the PCIe transactor must support the same messages. See figure 6.

Summary

For complex designs that can exhibit non-deterministic behavior, TLM overcomes the limitations of bit-level verification, and is a best-practice methodology from the commercial ASIC industry. At the same time, because it focuses more on functionality than on implementation, the verification process becomes clearer, more robust, and easier to maintain. What’s not to like?

Also, everything discussed in this article has been, and is being, done within the avionics industry. Aldec’s solution for bit-level verification – which, as mentioned, is fine for less complex designs with single clocks – is called the DO-254/ED-80 CTS (compliance tool set). Launched in 2008, the CTS features at-speed testing in the target device, reuses the simulation testbench for hardware testing, and integrates with third-party RTL simulation, synthesis, and place-and-route tools.

Most recently, the CTS has been (and continues to be) used by a Europe-based avionics company for transaction-based verification. This is saving the company a great deal of time: before switching to TLM, considerable time was being spent investigating discrepancies between RTL simulations and in-hardware results – all because of the non-deterministic behavior of the device. For more information about this particular case study, see Industry’s First Use of TLM for the At-Speed Verification of a PCIe-Based Avionics Design Requiring DO-254 Compliance.

www.aldec.com

www.militaryembedded.com