Multi-Processor System-on-Chip 1. Liliana Andrade



      Figure 2.5. Autoware automated driving system functions (CNX 2019)


      Figure 2.6. Application domains and partitions on the MPPA3 processor

      Table 2.1. Cyber-security requirements by application area

Requirement                             Defense   Avionics   Automotive
Hardware root of trust                     ✓          ✓          ✓
Physical attack protection                 ✓          ✓
Software and firmware authentication       ✓          ✓          ✓
Boot firmware confidentiality              ✓          ✓
Application code confidentiality           ✓          ✓          ✓
Event data record integrity                ✓          ✓

      2.3.1. Global architecture

      The MPPA3 processor architecture (Figure 2.7) applies the defining principles of many-core architectures: processing elements (SCs on a GPGPU) are regrouped with a multi-banked local memory and a slice of the memory hierarchy into compute units (SMs on a GPGPU), which share a global interconnect and access to external memory. The distinguishing features of the MPPA many-core architecture compared to the GPGPU architecture are the integration of fully software-programmable cores for the processing elements, and the provision of an RDMA engine in each compute unit.

      The structuring of the MPPA3 architecture into a collection of compute units, each comparable to an embedded multi-core processor, is the main feature that enables the consolidation of application partitions operating at different levels of functional safety and cyber-security on a single processor. This feature requires the provision of global interconnects that support partition isolation. From experience with previous MPPA processors, it became apparent that chip-global interconnects implemented as networks-on-chip (NoCs) may be specialized for two different purposes: the generalization of busses, and the integration of macro-networks (Table 2.2).

      Figure 2.7. Overview of the MPPA3 processor

      Table 2.2. Types of network-on-chip interconnects

Generalized busses                    | Integrated macro-network
Connectionless                        | Connection-oriented
Address-based transactions            | Stream-based transactions
Flit-level flow control               | End-to-end flow control
Implicit packet routing               | Explicit packet routing
Inside coherent address space         | Across address spaces (RDMA)
Coherency protocol messages           | Message multicasting
Reliable communication                | Packet loss or reordering
QoS by priority and aging             | QoS by traffic shaping
Coordination with the DDR controller  | Termination of macro-networks

      Accordingly, the MPPA3 processor is fitted with two global interconnects, respectively identified as “RDMA NoC” and “AXI Fabric” (Figure 2.8). The RDMA NoC is a wormhole switching network-on-chip, designed to terminate two 100 Gbps Ethernet controllers, and to carry the remote DMA operations found in supercomputer interconnects or communication libraries such as SHMEM (Hascoët et al. 2017). The AXI Fabric is a crossbar of busses with round-robin arbiters, which connects
