DDR3 PHY IP Core User Guide
IPUG96 Version 2.1, October 2016
Table of Contents
Chapter 1. Introduction .......................................................................................................................... 5
Quick Facts ........................................................................................................................................................... 5
Features ................................................................................................................................................................ 6
Chapter 2. Functional Description ........................................................................................................ 7
Overview ............................................................................................................................................................... 7
Initialization Module...................................................................................................................................... 8
Write Leveling .............................................................................................................................................. 8
Read Training (Only for ECP5 Device) ........................................................................................................ 8
Selecting READ_PULSE_TAP Value (Only for LatticeECP3 Device) .................................................................. 9
Data Path Logic.......................................................................................................................................... 10
Write Data Path.......................................................................................................................................... 10
Read Data Path.......................................................................................................................................... 10
DDR3 I/O Logic .......................................................................................................................................... 10
Signal Descriptions ............................................................................................................................................. 10
Using the DFI ...................................................................................................................................................... 13
Initialization Control.................................................................................................................................... 13
Command and Address ............................................................................................................................. 14
Write Data Interface ................................................................................................................................... 15
Read Data Interface ................................................................................................................................... 15
Mode Register Programming ..................................................................................................................... 16
Chapter 3. Parameter Settings ............................................................................................................ 18
Type Tab ............................................................................................................................................................. 20
Select Memory ........................................................................................................................................... 20
RefClock (Only for ECP5 DDR3 IP) ........................................................................................................... 20
Clock (for ECP3) MemClock (for ECP5) .................................................................................................... 20
Memory Type ............................................................................................................................................. 21
Memory Data Bus Size .............................................................................................................................. 21
Configuration.............................................................................................................................................. 21
DIMM0 Type or Chip Select Width............................................................................................................. 21
Address Mirror............................................................................................................................................ 21
Clock Width ................................................................................................................................................ 21
CKE Width.................................................................................................................................................. 21
2T Mode ..................................................................................................................................................... 21
Write Leveling ............................................................................................................................................ 21
Controller Reset to Memory ....................................................................................................................... 21
Setting Tab.......................................................................................................................................................... 22
Row Size .................................................................................................................................................... 22
Column Size............................................................................................................................................... 22
Burst Length............................................................................................................................................... 22
CAS Latency .............................................................................................................................................. 22
Burst Type.................................................................................................................................................. 22
Write Recovery........................................................................................................................................... 23
DLL Control for PD..................................................................................................................................... 23
ODI Control ................................................................................................................................................ 23
RTT_Nom................................................................................................................................................... 23
Additive Latency......................................................................................................................................... 23
CAS Write Latency..................................................................................................................................... 23
RTT_WR .................................................................................................................................................... 23
Pin Selection Tab ................................................................................................................................................ 24
Manually Adjust.......................................................................................................................................... 24
Pin Side...................................................................................................................................................... 24
clk_in/PLL Locations .................................................................................................................................. 24
clk_in pin .................................................................................................................................................... 24
IPUG96_2.1, October 2016
2
DDR3 PHY IP Core User Guide
Table of Contents
PLL Used ................................................................................................................................................... 25
DDR3 SDRAM Memory Clock Pin Location............................................................................................... 25
DQS Locations ........................................................................................................................................... 25
Design Tools Options and Info Tab..................................................................................................................... 25
Support Synplify ......................................................................................................................................... 26
Support Precision....................................................................................................................................... 26
Support ModelSim...................................................................................................................................... 26
Support ALDEC.......................................................................................................................................... 26
Memory I/F Pins ......................................................................................................................................... 26
User I/F Pins .............................................................................................................................................. 27
Chapter 4. IP Core Generation and Evaluation
for LatticeECP3 DDR3 PHY.................................................................................................................. 28
Getting Started .................................................................................................................................................... 28
IPexpress-Created Files and Top Level Directory Structure............................................................................... 30
DDR3 PHY IP File Structure ...................................................................................................................... 32
Hardware Evaluation........................................................................................................................................... 34
Enabling Hardware Evaluation in Diamond................................................................................................ 34
Updating/Regenerating the IP Core .................................................................................................................... 34
Chapter 5. IP Core Generation and Evaluation for ECP5 DDR3 PHY............................................... 36
Getting Started .................................................................................................................................................... 36
Created Files and Top Level Directory Structure ................................................................................................ 39
DDR3 PHY IP File Structure ...................................................................................................................... 41
Simulation Files for IP Evaluation .............................................................................................................. 43
Hardware Evaluation........................................................................................................................................... 45
Enabling Hardware Evaluation in Diamond................................................................................................ 45
Regenerating/Recreating the IP Core ................................................................................................................. 45
Regenerating an IP Core in Clarity Designer Tool ..................................................................................... 45
Recreating an IP Core in Clarity Designer Tool ......................................................................................... 46
Chapter 6. Application Support........................................................................................................... 47
Understanding Preferences ................................................................................................................................ 47
FREQUENCY Preferences ........................................................................................................................ 47
MAXDELAY NET ....................................................................................................................................... 47
MULTICYCLE / BLOCK PATH................................................................................................................... 47
IOBUF ........................................................................................................................................................ 47
LOCATE..................................................................................................................................................... 47
Handling DDR3 PHY IP Preferences in User Designs........................................................................................ 47
Reset Handling.................................................................................................................................................... 48
Dummy Logic in IP Core Evaluation ................................................................................................................... 48
Top-level Wrapper File Only for Evaluation Implementation...................................................................... 48
Top-level Wrapper file for All Simulation Cases and Implementation in a User Design............................. 49
RDIMM Module Support...................................................................................................................................... 49
Netlist Simulation ................................................................................................................................................ 49
Chapter 7. Core Verification ................................................................................................................ 50
Chapter 8. Support Resources ............................................................................................................ 51
Lattice Technical Support.................................................................................................................................... 51
E-mail Support ........................................................................................................................................... 51
Local Support ............................................................................................................................................. 51
Internet ....................................................................................................................................................... 51
References.......................................................................................................................................................... 51
Revision History .................................................................................................................................................. 51
Appendix A. Resource Utilization ....................................................................................................... 52
ECP5 Devices ..................................................................................................................................................... 52
Ordering Part Number................................................................................................................................ 52
LatticeECP3 FPGAs............................................................................................................................................ 53
Ordering Information ........................................................................................................................................... 53
Appendix B. Lattice Devices Versus................................................................................................... 54
Appendix C. DDR3 PHY IP Matrix........................................................................................................ 54
Appendix A. LatticeECP3 DDR3 PHY IP Locate Constraints............................................................ 55
Chapter 1:
Introduction
The Double Data Rate 3 (DDR3) Physical Interface (PHY) IP core is a general purpose IP core that provides connectivity between a DDR3 Memory Controller (MC) and DDR3 memory devices compliant with the JESD79-3 specification. This DDR3 PHY IP core provides the industry-standard DDR PHY Interface (DFI) bus at the local side to
interface with the memory controller. The DFI protocol defines the signals, signal relationships, and timing parameters required to transfer control information and data to and from the DDR3 devices over the DFI bus.
The DDR3 PHY IP core minimizes the effort required to integrate any available DDR3 memory controller with the
Lattice FPGA’s DDR3 primitives and thereby enables the user to implement only the logical portion of the memory
controller in the user design. The DDR3 PHY IP core contains all the logic required for memory device initialization,
write leveling, read data capture and read data de-skew that are dependent on Lattice FPGA DDR I/O primitives.
Quick Facts
Table 1-1 gives quick facts about the DDR3 PHY IP core for ECP5™.
Table 1-1. DDR3 PHY IP Core Quick Facts for ECP5 (Notes 1, 2)

Core Requirements
  FPGA Families Supported: ECP5
  Configurations: x8, x16, x24, x32, x40, x48, x56, x64, x72 (each with 2 chip selects)
  Minimal Device Needed (Note 1):
    x8, x16 ................ LFE5UM-85F-8MG285C
    x24, x32 ............... LFE5UM-85F-8MG381C
    x40 .................... LFE5UM-85F-8BG554C
    x48, x56, x64, x72 ..... LFE5UM-85F-8BG756C

Resource Utilization (Targeted Device: LFE5UM-85F-8BG756C)
  Data Path Width:  8     16    24    32    40    48    56    64    72
  LUTs:             940   1070  1040  1140  1260  1360  1380  1440  1550
  sysMEM EBRs:      0 for all configurations
  Registers:        740   970   1000  1180  1360  1510  1690  1850  2020

Design Tool Support (Lattice Implementation)
  Synthesis:  Lattice Diamond® 3.3; Synopsys® Synplify Pro® for Lattice I-2014.03L-SP1
  Simulation: Aldec® Active-HDL™ 9.3 Lattice Edition; Mentor Graphics® ModelSim® 6.6

1. Device configuration x8 is considered. For x4 or x16 configurations, the minimal device may be different.
2. The LFE5U and LFE5UM devices have the same Resource Utilization values.
Table 1-2 gives quick facts about the DDR3 PHY IP core for LatticeECP3™.
Table 1-2. DDR3 PHY IP Core Quick Facts for LatticeECP3

Core Requirements
  FPGA Families Supported: LatticeECP3
  Configurations: x8, x16, x24, x32, x40, x48, x56, x64, x72 (each with 2 chip selects)
  Minimal Device Needed (Note 1):
    x8, x16 ................ LFE3-17EA-6FN256C
    x24, x32, x40 .......... LFE3-17EA-6FN484C
    x48 .................... LFE3-35EA-6FN484C
    x56 .................... LFE3-35EA-6FN672C
    x64 .................... LFE3-70EA-6FN672C
    x72 .................... LFE3-70EA-6FN1156C

Resource Utilization (Targeted Device: LFE3-150EA-8FN1156C)
  Data Path Width:  8     16    24    32    40    48    56    64    72
  LUTs:             929   1060  1180  1320  1320  1430  1520  1620  1730
  sysMEM EBRs:      0 for all configurations
  Registers:        820   1130  1430  1740  1560  1740  1920  2090  2300

Design Tool Support (Lattice Implementation)
  Synthesis:  Lattice Diamond 3.3; Synopsys Synplify Pro for Lattice I-2014.03L-SP1
  Simulation: Aldec Active-HDL 9.3 Lattice Edition; Mentor Graphics ModelSim 6.6

1. Device configuration x8 is considered. For x4 or x16 configurations, the minimal device may be different.
Features
The DDR3 PHY IP core supports the following features:
• Interfaces to any DDR3 memory controller (MC) through the DDR PHY Interface (DFI) industry specification
• Interfaces to industry standard DDR3 SDRAM components and modules compliant with JESD79-3 specification
• Support for all ECP5 devices (LFE5U/LFE5UM) and all LatticeECP3 “EA” devices
• High-performance DDR3 operation up to 400 MHz/800 Mbps
• Supports memory data path widths of 8, 16, 24, 32, 40, 48, 56, 64 and 72 bits
• Supports x4, x8, and x16 device configurations
• Supports one unbuffered DDR3 DIMM or DDR3 RDIMM module with up to two ranks per DIMM
• Supports on-board memory (up to two chip selects)
• Programmable burst lengths of 8 (fixed), chopped 4 or 8 (on-the-fly), or chopped 4 (fixed)
• Supports automatic DDR3 SDRAM initialization with user mode register programming
• Supports write leveling for each DQS group. Option to switch off write leveling for on-board memory applications.
• Supports all valid DDR3 commands
• Supports dynamic On-Die Termination (ODT) controls
• I/O primitives manage read skews (read leveling equivalent)
• Option for controlling memory reset outside the IP core
• 1:1 frequency ratio interface between MC and DFI, 1:2 ratio between DFI and PHY
Chapter 2:
Functional Description
Overview
The DDR3 PHY IP core consists of the following sub-modules: initialization module, write leveling module, write
data path, read data path, address/cmd control module and I/O logic module. This section briefly describes the
operation of each of these modules. Figure 2-1 provides a high-level block diagram illustrating these main functional blocks and the technology used to implement the DDR3 PHY IP core functions.
Figure 2-1. DDR3 PHY IP Block Diagram

[Block diagram: the DFI interface signals from the memory controller feed the Initialization, Write Leveling, Addr/Cmd Control, Write Data Path, and Read Data Path (capture and de-skew) blocks of the DDR3 PHY IP core, which connect through the DDR3 I/O Logic to the memory interface signals (em_ddr_clk/em_ddr_clk_n, em_ddr_cke, em_ddr_cs_n, em_ddr_ras_n, em_ddr_cas_n, em_ddr_we_n, em_ddr_addr, em_ddr_ba, em_ddr_dq, em_ddr_dqs/em_ddr_dqs_n, em_ddr_dm, em_ddr_odt, em_ddr_reset_n). A separate Clock Synchronization Module, built from the sysCLOCK PLL and DDRDLL primitives, generates the non-DFI clocks sclk, eclk, and sclk2x from the clk_in reference (100 MHz) and connects to the core through the CSM interface signals.]
Along with the DDR3 PHY IP core, a separate module called the Clock Synchronization Module (CSM) is provided
which generates all the clock signals, such as system clock (sclk), edge clock (eclk) and high-speed system clock
(sclk2x) for the DDR3 PHY IP core. The CSM logic ensures that the domain crossing margin between eclk and sclk
stays the same for the IDDR and ODDR buses that produce 2:1 gearing. Without proper synchronization, the bit
order on different elements can fall out of sync, scrambling the entire data bus. Clock synchronization ensures
that all DDR components start from exactly the same edge clock cycle.
The DDR3 PHY IP core works in a 1:1 frequency ratio between the MC and DFI. Inside the DDR3 PHY IP core, the
initialization module, write leveling module, address/cmd control module, write data logic and read data capture
and de-skew logic operate using the sclk. These functional modules are implemented as soft logic in the FPGA fabric. This implies that the DFI of the DDR3 PHY IP core follows the 1:1 frequency ratio with the MC.
The DDR3 PHY IP core implements a 1:2 frequency ratio between the functional modules and the DDR I/O primitives. These I/O primitives are the hard logic of the FPGA and they use all the clocks (sclk, eclk and sclk2x) to
implement a 1:2 gearing ratio between the functional block and the PHY memory interface. All transfers from the
sclk to eclk domains and vice-versa happen within the DDR I/O primitives.
In a typical case, if the memory controller operates with a 200 MHz system clock (sclk), the functional modules of
the DDR3 PHY IP core also operate with the same 200 MHz sclk, while the DDR I/O logic of the IP core works primarily with the 400 MHz edge clock (eclk).
The combination of this operating clock ratio and the double data rate transfer leads to a user side data bus in the
DFI that is four times the width of the memory side data bus. For example, a 32-bit memory side data width
requires a 128-bit read data bus and a 128-bit write data bus at the user side interface.
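As an illustration of this 4x relationship, the user-side bus width for a given memory width can be sketched with Verilog localparams; the parameter names below are hypothetical, not the core's own:

```verilog
// Hypothetical names for illustration only; not the core's actual parameters.
module dfi_width_sketch;
  localparam MEM_DATA_WIDTH = 32;  // memory-side DQ bus width
  localparam GEAR_RATIO     = 2;   // 1:2 gearing between DFI and PHY
  localparam DDR_RATE       = 2;   // two data beats per memory clock cycle
  // User-side DFI read and write data buses are 2 x 2 = 4 times the memory width:
  localparam DFI_DATA_WIDTH = MEM_DATA_WIDTH * GEAR_RATIO * DDR_RATE;  // 128
endmodule
```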
Initialization Module
The Initialization Module performs the DDR3 memory initialization sequence as defined by JEDEC protocol. After
power-on or after a normal reset of the DDR3 PHY IP core, memory must be initialized before sending any command to the IP core. It is the user’s responsibility to assert the dfi_init_start input to the DDR3 PHY IP core to start
the memory initialization sequence. The completion of initialization is indicated by the dfi_init_complete output provided by this block.
Since the DDR3 PHY IP core does not use the dfi_data_byte_disable or dfi_freq_ratio DFI signals, the memory
controller asserts dfi_init_start only to trigger the memory initialization process; this signal is not used to change
the frequency ratio.
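A minimal controller-side sketch of this handshake is shown below; apart from dfi_init_start and dfi_init_complete, the module and signal names are assumptions for illustration:

```verilog
// Illustrative sketch only; not taken from the DDR3 PHY IP core RTL.
module init_handshake_sketch (
  input  wire sclk,               // system clock
  input  wire rst_n,              // active-low reset (name assumed)
  input  wire start_init,         // user request to initialize memory (assumed)
  input  wire dfi_init_complete,  // from the DDR3 PHY IP core
  output reg  dfi_init_start      // to the DDR3 PHY IP core
);
  always @(posedge sclk or negedge rst_n) begin
    if (!rst_n)
      dfi_init_start <= 1'b0;
    else if (start_init && !dfi_init_complete)
      dfi_init_start <= 1'b1;     // hold high to run the initialization sequence
    else if (dfi_init_complete)
      dfi_init_start <= 1'b0;     // de-assert once the PHY reports completion
  end
endmodule
```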
Write Leveling
The write leveling block adjusts the DQS-to-CLK relationship for each memory device, using the write level mode of
the DDR3 SDRAM when the fly-by wiring is implemented. Write leveling is always done immediately after a memory initialization sequence if write leveling is not disabled through the GUI. When the dfi_init_complete signal is
asserted after the initialization process it also indicates the completion of write leveling. Along with the assertion of
dfi_init_complete, the signal wl_err is also asserted if the write leveling process is not successful.
The main purpose of write leveling is to provide better signal integrity by using fly-by topology for the address, command, control and clock signals, and then de-skewing the DQS signal delays against those signals at the DDR3
DRAM side. Since DDR3 memory modules have adopted the fly-by topology, write leveling must be enabled for DIMM-based applications. For on-board memory applications, the GUI provides the write leveling function as a user
option. When it is enabled, the PCB for the on-board memory application must be routed using the fly-by topology; otherwise, write leveling failures may occur because the DQS-to-CLK edge relationship is not guaranteed at the beginning of write level training. For this reason, the write leveling option must be disabled if the PCB does not use
fly-by routing.
The write leveling scheme of the DDR3 PHY IP core follows all the steps stipulated in the JEDEC specification. For
more details on write leveling, refer to the JEDEC specification JESD79-3.
Read Training (Only for ECP5 Device)
For every read operation, the DDR3 I/O primitives of the ECP5 device must be initialized at the appropriate time to
identify the incoming DQS preamble. Upon proper detection of the preamble, the primitive DQSBUFI extracts a
clean dqs signal out of the incoming dqs signal from the memory and generates the DATAVALID output signal that
indicates the correct timing window of the valid read data.
The DDR3 PHY IP generates an internal pulse signal, READ[3:0], to the primitive DQSBUFI that is used for the
above-mentioned operation. In addition to the READ[3:0] input, another input signal READCLKSEL[2:0] and an
output signal, BURSTDET, of the DQSBUFI block are provided to the PHY IP to accomplish the READ signal positioning.
Due to the DQS round trip delay that includes PCB routing and I/O pad delays, proper positioning of the READ signal with respect to the incoming preamble is crucial for successful read operations. The ECP5 DQSBUFI block supports a dynamic READ signal positioning function called read training that enables the PHY IP to position the
READ signal within an appropriate timing window by progressively shifting the READ signal and monitoring the
positioning result.
This read training is performed as part of the memory initialization process after the write leveling operation is complete. During the read training, the PHY IP generates the READ[3:0] pulse, positions this signal using READCLKSEL[2:0] and monitors the BURSTDET output of DQSBUFI for the result of the current position. The READ signal
is set high before the read preamble starts. When the READ pulse is properly positioned, the preamble is detected
correctly and the BURSTDET will go high. This will guarantee that the generated DATAVALID signal is indicating
the correct read valid time window.
The READ signal is generated in the system clock (SCLK) domain and stays asserted for the total burst length of
the read operation.
A minimum burst length of four on the memory bus is used in the read training process. The PHY IP accepts a
READ signal position only when BURSTDET asserts without a single failure across multiple trials.
If any trial fails, the PHY IP shifts the READ signal position and tries again until it detects no BURSTDET failure.
The PHY IP stores the delay value of the successful position of the READ signal for each DQS group. It uses these
delay values during a normal read operation to correctly detect the preamble first, followed by the generation of
DATAVALID signal.
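The search described above can be summarized with the following behavioral sketch. It is a simplified illustration of the algorithm, not the core's implementation; the helper tasks issue_burst_read and store_delay, the burstdet sampling, and the trial count of eight are all assumptions:

```verilog
// Simplified behavioral illustration of read training; not the core's RTL.
// burstdet mirrors the BURSTDET output of DQSBUFI for one DQS group.
reg [2:0] readclksel;   // drives READCLKSEL[2:0] of DQSBUFI
integer   trial, fails;
reg       trained;
initial begin : read_training_sketch
  trained    = 1'b0;
  readclksel = 3'd0;
  while (!trained) begin
    fails = 0;
    for (trial = 0; trial < 8; trial = trial + 1) begin
      issue_burst_read();                 // BL4 read on the memory bus
      if (!burstdet) fails = fails + 1;   // preamble missed at this position
    end
    if (fails == 0) begin
      store_delay(readclksel);            // keep this position for the DQS group
      trained = 1'b1;
    end else begin
      readclksel = readclksel + 3'd1;     // shift the READ position and retry
    end
  end
end
```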
Selecting READ_PULSE_TAP Value (Only for LatticeECP3 Device)
For every read operation, the DDR3 I/O primitives must be initialized at the appropriate time to identify the incoming DQS preamble in order to generate the data valid signal. For this purpose the PHY IP internally generates a
signal called dqs_read in such a way that this signal’s trailing edge is positioned within the incoming DQS preamble
window.
Due to PCB routing delays, DIMM module routing delays and routing delays within the FPGA, the incoming DQS
signal’s delay varies from board to board. To compensate for this variability in DQS delay, the PHY IP shifts the
internal signal dqs_read in such a way to position it within the preamble time.
Each shift (step) moves the dqs_read signal by one half period of the eclk (1.25 ns for 400 MHz memory clock).
A port, read_pulse_tap, is provided in the Core top level file ddr3_sdram_mem_top_wrapper.v for the user to load
the shift count for each DQS group. Each DQS group is assigned a 3-bit shift count value in this port, starting with
LSB 3 bits for DQS_0. This count can be any value from 0 to 7.
For the core to work properly on the board, it is recommended that the dqs_read signal be shifted by two steps for
UDIMMs, by four steps for RDIMMs or by one step for on-board memory. Since the Eval simulation environment is
provided without the PCB and FPGA internal routing delays, the recommended values for Eval simulation are: zero
steps for UDIMMs, two steps for RDIMMs or zero steps for on-board memory.
A parameter READ_PULSE_TAP in ddr_p_eval\testbench\tests\ecp3\tb_config_params.v is made available to the
user as an example. This parameter may be loaded to the port read_pulse_tap with appropriate values for simulation and synthesis.
In almost all cases the recommended value is good enough for stable read operations on the board and it is highly
unlikely that the user has to change this value. If there are frequent read errors on the board, the user should try
adjusting the shift count value loaded to the port read_pulse_tap.
Should there be a need to change the READ_PULSE_TAP value, it is suggested that the user start by changing
the value of the DQS7 group first and then move to the adjacent group, if required.
Note: The DDR3 PHY IP may fail to generate, or may improperly generate, the read_data_valid signal if the parameter
READ_PULSE_TAP is not loaded to the read_pulse_tap input port or if its values are not correct.
Data Path Logic
The Data Path Logic (DPL) block interfaces with the DDR3 I/O modules and is responsible for generating the read
data and read data valid signals during read operations. This block implements all the logic needed to ensure that
data written to and read from the memory is transferred to and from the local user interface in a deterministic and
coherent manner.
Write Data Path
The write data path block interfaces with the DDR3 I/O modules and is responsible for loading the write data along
with write data control signals to the DDR3 I/O primitives during write operations. This block implements all the
logic needed to ensure that data written to the memory is transferred from the DFI in a deterministic and coherent
manner.
Read Data Path
The read data path block interfaces with the DDR3 I/O modules and is responsible for extracting the read data and
read data valid signals during read operations. This block implements all the logic needed to ensure that the data
read from the memory is transferred to the DFI in a deterministic and coherent manner. In addition, this block has
the logic to deskew the read data delays between different data lanes.
DDR3 I/O Logic
The DDR3 I/O logic block provides the physical interface to the memory device. This block consists mainly of the
LatticeECP3 or ECP5 device DDR3 I/O primitives supporting compliance to DDR3 electrical and timing requirements. These primitives implement all the interface signals required for memory access and convert the single data
rate (SDR) DFI data to double data rate DDR3 data for write operations. In read mode, they perform the DDR3-to-SDR conversion.
Signal Descriptions
Table 2-1 describes the user interface and memory interface signals at the top level.
Table 2-1. DDR3 PHY IP Core Top-Level I/O List

Port Name | Active State | I/O | Description
clk_in | N/A | Input | Reference clock to the PLL of the CSM block.

Clock Synchronization Module (CSM) Interface Signals
sclk | N/A | Input | System clock used by the PHY IP core. This clock can be used for the DDR3 memory controller.
eclk | N/A | Input | Edge clock used by the DDR3 PHY IP core. Usually twice the frequency of sclk.
sclk2x | N/A | Input | High-speed system clock used by the IP core. Usually twice the frequency of sclk. (Only for LatticeECP3.)
wl_rst_datapath | High | Input | Signal from the IP core to the CSM module requesting a reset to the DDR primitive after the write leveling process is done. If multiple PHY IP cores are implemented in a design, feed the wl_rst_datapath signals from all PHY IP cores through an AND gate and connect the output of the AND gate to the wl_rst_datapath input of the CSM module. (Only for LatticeECP3.)
dqsbufd_rst | High | Output | Signal from the CSM module to the IP core to reset the DDR primitive. (Only for LatticeECP3.)
clocking_good | High | Input | Signal from the CSM module indicating a stable clock condition.
dqsdel | High | Input | Master DQSDLL delay control line from CSM to the slave DLL delay in the IP core. (Only for LatticeECP3.)
uddcntln | Low | Output | DQSDLL update request from the IP core to the CSM logic. (Only for LatticeECP3.)
dll_update | High | Output | DQSDLL update request from the IP core to the CSM logic. Remains asserted until update_done is set high. (Only for ECP5.)
dqsbuf_pause | High | Input | Pause signal from CSM to the DDR3 I/O logic. (Only for ECP5.)
ddr_rst | High | Input | Reset signal from CSM to the DDR3 I/O logic. (Only for ECP5.)
ddrdel | High | Input | Master DQSDLL delay control line from CSM to the slave DLL delay in the IP core. (Only for ECP5.)
ddrdel_br | High | Input | Eclk bridge DQSDLL delay control line from CSM to the slave DLL delay in the IP core. (Only for ECP5.)
update_done | High | Input | Signal to indicate the DQSDLL update is completed. dll_update is de-asserted once this signal is sampled high. (Only for ECP5.)

Non-DFI Interface Signals
mem_rst_n | Low | Input | Asynchronous reset signal from the user to reset only the memory device. This signal does not reset the DDR3 PHY IP core’s functional modules. Refer to the Reset Handling section for more details.
read_pulse_tap[3*(`DQS_WIDTH)-1:0] | N/A | Input | Read pulse tap. Carries the value, from 0 to 7, by which the IP core’s internal read pulse signal, dqs_read, is to be shifted for proper read_data_valid signal generation. Three bits are allocated for each DQS. Refer to the Netlist Simulation section for more details. (Only for LatticeECP3 DDR3 IP.)
phy_init_act | High | Output | Signal to indicate that the memory initialization process is active (in progress).
wl_act | High | Output | Signal to indicate that the memory write leveling process is active (in progress).
wl_err | High | Output | Write leveling error. Indicates failure in write leveling. The IP core will not work properly if there is a write leveling error. This signal should be checked when the init_done signal is asserted at the end of the initialization procedure.
rt_err | High | Output | Read training error. Indicates failure in the read training process. The PHY IP will not work properly if there is a read training error. This signal should be checked when init_done is asserted. (Only for ECP5 DDR3 IP.)

DFI Interface Signals
dfi_reset_n | Low | Input | Asynchronous reset. By default, when asserted, this signal resets the entire IP core and also the DDR3 memory. Refer to the Reset Handling section for more details.
dfi_address | N/A | Input | DFI address bus. This signal defines the address information that is intended for the DRAM devices for all control commands. The IP core preserves the bit ordering of the dfi_address signals when reflecting this data to the DRAM devices.
dfi_bank | N/A | Input | DFI bank bus. This signal defines the bank information that is intended for the DRAM devices for all control commands. The IP core preserves the bit ordering of the dfi_bank signals when reflecting this data to the DRAM devices.
dfi_cas_n | Low | Input | DFI column address strobe input. This signal defines the CAS information that is intended for the DRAM devices for all control commands.
dfi_cke[CKE_WIDTH-1:0] | High | Input | DFI clock enable input. This signal defines the CKE information that is intended for the DRAM devices for all control commands.
dfi_cs_n[CS_WIDTH-1:0] | Low | Input | DFI chip select input. This signal defines the chip select information that is intended for the DRAM devices for all control commands.
dfi_odt[CS_WIDTH-1:0] | High | Input | DFI on-die termination control input. This signal defines the ODT information that is intended for the DRAM devices for all control commands.
dfi_ras_n | Low | Input | DFI row address strobe input. This signal defines the RAS information that is intended for the DRAM devices for all control commands.
dfi_we_n | Low | Input | DFI write enable input. This signal defines the WEN information that is intended for the DRAM devices for all control commands.
dfi_wrdata[DSIZE-1:0] | N/A | Input | Write data bus. Refer to the Write Data Interface section for more information.
dfi_wrdata_en | High | Input | Write data and data mask enable input. Refer to the Write Data Interface section for more information.
dfi_wrdata_mask[(DSIZE/8)-1:0] | High | Input | Write data byte mask input. Refer to the Write Data Interface section for more information.
dfi_rddata[DSIZE-1:0] | N/A | Output | Read data bus output. Refer to the Read Data Interface section for more information.
dfi_rddata_valid | High | Output | Read data valid indicator. Refer to the Read Data Interface section for more information.
dfi_init_complete | High | Output | This output signal is asserted for one clock period after the core completes memory initialization and write leveling. When sampled high, the input signal dfi_init_start must be immediately deasserted at the same edge of the sampling clock. Refer to the Initialization Control section for more details.
dfi_init_start | High | Input | Initialization start request input to the IP core. dfi_init_start should be asserted to initiate memory initialization either right after the power-on reset or before sending the first user command to the IP core. Since the DDR3 PHY IP core provides no support for dfi_data_byte_disable or dfi_freq_ratio, this input signal is provided to the MC only to trigger a memory initialization process. Refer to the Initialization Control section for more details.

DDR3 SDRAM Memory Interface
em_ddr_reset_n | Low | Output | Asynchronous reset signal from the controller to the memory device. Asserted by the controller for the duration of power-on reset or active rst_n or active mem_rst_n. Refer to the Reset Handling section for more details.
em_ddr_clk[CLKO_WIDTH-1:0] | N/A | Output | Up to 400 MHz memory clock generated by the core. Lattice software automatically generates an additional complementary port (em_ddr_clk_n) for each clock output port.
em_ddr_cke[CKE_WIDTH-1:0] | High | Output | Memory clock enable generated by the core.
em_ddr_addr[ROW_WIDTH-1:0] | N/A | Output | Memory address bus; multiplexed row and column address information to the memory.
em_ddr_ba[2:0] | N/A | Output | Memory bank address.
em_ddr_data[DATA_WIDTH-1:0] | N/A | In/Out | Memory bi-directional data bus.
em_ddr_dm[(DATA_WIDTH/8)-1:0] | High | Output | DDR3 memory write data mask.
em_ddr_dqs[DQS_WIDTH-1:0] | N/A | In/Out | Memory bi-directional data strobe. Lattice software automatically generates an additional complementary port (em_ddr_dqs_n) for each dqs port.
em_ddr_dqs_n[DQS_WIDTH-1:0] | N/A | In/Out | Memory complementary bi-directional data strobe.
em_ddr_cs_n[`CS_WIDTH_BB-1:0] | Low | Output | Memory chip select.
em_ddr_cas_n | Low | Output | Memory column address strobe.
em_ddr_ras_n | Low | Output | Memory row address strobe.
em_ddr_we_n | Low | Output | Memory write enable.
em_ddr_odt[`CS_WIDTH_BB-1:0] | High | Output | Memory on-die termination control.
Using the DFI
The DFI specification includes a list of signals required to drive the memory address, command, and control signals to the DFI bus. These signals are intended to be passed to the DRAM devices in a manner that maintains the
timing relationship of these signals on the DFI.
The DFI is subdivided into the following interface groups:
• Control Interface
• Write Data Interface
• Read Data Interface
• Update Interface (optional)
• Status Interface (optional)
• Training Interface (optional)
• Low Power Control Interface (optional)
The DDR3 PHY IP core provides support for the Control Interface, Write Data Interface and Read Data Interface.
The other optional interfaces are not supported.
The Control Interface is a reflection of the DRAM control interface including address, bank, chip select, row strobe,
column strobe, write enable, clock enable and ODT control, as applicable for the memory technology. The Write
Data Interface and Read Data Interface are used to send valid write data as well as to receive valid read data
across the DFI.
Initialization Control
DDR3 memory devices must be initialized before the memory controller accesses the devices. The DDR3 PHY IP
core starts the memory initialization sequence when the dfi_init_start signal is asserted by the memory controller.
Once asserted, the dfi_init_start signal needs to be held high until the initialization process is completed. The output signal dfi_init_complete is asserted high by the core for only one clock cycle, indicating that the core has
completed the initialization sequence and is now ready to access the memory. The dfi_init_start signal must be
deasserted as soon as dfi_init_complete is sampled high at the rising edge of sclk. If dfi_init_start is left high at the
next rising edge of sclk, the core sees this as another request for initialization and starts the initialization process
again. Memory initialization is required only once, immediately after the system reset. As part of the initialization
process the core performs write leveling for all the available DQS lanes and stores the write level delay values for
each of those lanes. The core ensures a minimum gap of 500 µs between em_ddr_reset_n deassertion and the
subsequent em_ddr_cke assertion. It is the user’s responsibility to ensure minimum reset duration of 200 µs as
required by the JEDEC specification.
Figure 2-2 shows the timing diagram of the initialization control signals.
Figure 2-2. Memory Initialization Control Timing (waveforms of sclk, dfi_init_start and dfi_init_complete)
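The handshake rule above can be sketched as a small Python model. This is illustrative only; the function name and the sampled-value lists are not part of the IP. It encodes the requirement that dfi_init_start stays high until dfi_init_complete is sampled high, then deasserts so the core does not see a second initialization request.

```python
# Sketch of the controller-side rule for the initialization handshake:
# hold dfi_init_start high until dfi_init_complete is sampled high, then
# deassert it at that same sampling edge.

def drive_init_start(init_complete_samples):
    """Given dfi_init_complete sampled at successive sclk edges, return the
    dfi_init_start value the controller drives at each edge."""
    start = 1          # asserted after power-on reset to request init
    trace = []
    for complete in init_complete_samples:
        trace.append(start)
        if complete:   # one-cycle pulse from the core: init is done
            start = 0  # must deassert at this same sampling edge
    return trace

# dfi_init_complete pulses high for exactly one cycle (cycle 3 here).
print(drive_init_start([0, 0, 0, 1, 0, 0]))  # [1, 1, 1, 1, 0, 0]
```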
Command and Address
The DFI control signals dfi_address, dfi_bank, dfi_cas_n, dfi_cke, dfi_cs_n, dfi_reset_n, dfi_odt, dfi_ras_n and
dfi_we_n correlate to the DRAM control signals.
These control signals are expected to be driven to the memory devices. The timing relationship of the control signals at the DFI bus is maintained at the PHY-DRAM boundary, meaning that all delays are consistent across all
signals.
The DDR3 PHY IP core supports all the DDR3 memory commands. Refer to the DDR3 SDRAM Command
Description and Operation table of the JESD79-3, DDR3 SDRAM Standard for more details about DDR3 memory
commands.
Figure 2-3 shows the timing diagram for the Active command and Write/Read command when Additive Latency is
selected as 0. The gap between the Active and Write/Read commands is derived from the tRCD value of the memory device. Since the tRCD value is expressed in terms of memory clocks, the corresponding System Clock count at
the DFI bus is calculated as (tRCD + 1) / 2. In this calculation, (tRCD + 1) is used to round up the memory clock to
sclk conversion.
Figure 2-4 shows the timing diagram for the Active command and Write/Read command when Additive latency is
selected as 1 or 2.
On the memory side, the gap between the Active command and the Write/Read command will be 0, 1 or 2 memory
clocks more than the tRCD value. This extra delay is due to the combined effect of the 1:2 gearing in the DDR3 PHY
IP core and the write/read latency value, odd or even.
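Under the 1:2 gearing described above, the ACT-to-WR/RD gap in sclks can be checked with a couple of lines of Python (the function name is illustrative):

```python
# Memory-clock-to-sclk conversion for the ACT-to-WR/RD gap.
# (tRCD + 1) // 2 rounds an odd memory-clock count up under 1:2 gearing.

def act_to_rw_gap_sclk(t_rcd_memclk):
    return (t_rcd_memclk + 1) // 2

# e.g. tRCD = 6 memory clocks -> 3 sclks; tRCD = 7 -> 4 sclks
print(act_to_rw_gap_sclk(6), act_to_rw_gap_sclk(7))  # 3 4
```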
Figure 2-3. Active to Write/Read Command Timing for AL=0 (dfi_ras_n, dfi_cas_n and dfi_we_n, with a (tRCD+1)/2 sclk gap between the ACT and WR/RD commands)
Figure 2-4. Active to Write/Read Command Timing for AL=1 and AL=2
Write Data Interface
The write transaction interface of the DFI includes the write data (dfi_wrdata), write data mask (dfi_wrdata_mask),
and write data enable (dfi_wrdata_en) signals as well as the tphy_wrlat and tphy_wrdata delay timing parameters.
In the DDR3 PHY IP core, the parameter tphy_wrlat has a constant value which is the write latency in terms of the
system clock (sclk). The tphy_wrlat value is calculated using the equation tphy_wrlat = (wr_lat + 1) / 2, where wr_lat is the
write latency in terms of the memory clock. (wr_lat + 1) is used to round up the memory clock to sclk conversion.
The parameter tphy_wrdata is always 0, therefore dfi_wrdata is valid from the time dfi_wrdata_en is asserted.
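The tphy_wrlat formula above can be sanity-checked with a few lines of Python (the helper name is illustrative):

```python
def tphy_wrlat_sclk(wr_lat_memclk):
    # (wr_lat + 1) // 2 rounds the memory-clock write latency up to sclks
    return (wr_lat_memclk + 1) // 2

# e.g. CWL = 5, AL = 0 -> write latency of 5 memory clocks -> 3 sclks
print(tphy_wrlat_sclk(5))  # 3
```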
For a typical write operation, the memory controller asserts the dfi_wrdata_en signal tphy_wrlat cycles after the
assertion of the corresponding write command on the DFI, and for the number of cycles required to complete the
write data transfer sent on the DFI control interface. For contiguous write commands, the dfi_wrdata_en signal is to
be asserted tphy_wrlat cycles after the first write command of the stream and is to remain asserted for the entire
length of the data stream.
The associated write data (dfi_wrdata) and data masking (dfi_wrdata_mask) are sent along with the assertion of
the dfi_wrdata_en signal on the DFI.
The write data timing on the DFI is shown in Figure 2-5. Refer to the evaluation simulation waveform for the DFI bus
signal timing for different types of write operations (single, back-to-back, BC4 fixed, BL8 fixed and on-the-fly).
Figure 2-5. DFI Bus Write Timing (for CL = 5, AL = 0 and CWL = 5)
Read Data Interface
The read transaction portion of the DFI is defined by the read data enable (dfi_rddata_en), read data (dfi_rddata)
bus and the valid (dfi_rddata_valid) signals as well as the trddata_en and tphy_rdlat timing parameters.
Since Lattice FPGAs support a preamble detect feature that automatically identifies read data valid timing, the signal dfi_rddata_en is not required for the DDR3 PHY IP core. The timing parameter trddata_en is also not required.
The read command is accepted by the core when the DFI command input signals indicate a read command.
The DDR3 PHY IP core uses a total of nine sclks as core latency for the read command transmission, read data
extraction and read data de-skewing. To calculate the tphy_rdlat value, the memory device’s read latency, in terms
of sclk, is added to this core latency. For a memory read latency (RL) of six memory clocks, the corresponding
tphy_rdlat is 12 sclks, which is 9 + ((RL+1)/2). In this calculation, (RL+1) is used to round up the memory clock to
sclk conversion.
The read data will be returned, along with the signal dfi_rddata_valid asserted, after tphy_rdlat cycles from the time
the read command is asserted.
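The same calculation in Python (the constant and function names are illustrative; the fixed nine-sclk core latency is the value stated above):

```python
CORE_READ_LATENCY_SCLK = 9  # fixed core latency stated in the text

def tphy_rdlat_sclk(rl_memclk):
    # core latency plus the rounded-up memory-clock-to-sclk read latency
    return CORE_READ_LATENCY_SCLK + (rl_memclk + 1) // 2

print(tphy_rdlat_sclk(6))  # 12, matching the RL = 6 example above
```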
The read data timing on the DFI is shown in Figure 2-6. Refer to the evaluation simulation waveform for the DFI bus
signal timing for the different types of read operations (single, back-to-back, BC4 fixed, BL8 fixed and on-the-fly).
Figure 2-6. DFI Bus Read Timing (dfi_rddata and dfi_rddata_valid follow the read command on dfi_cs_n, dfi_ras_n, dfi_cas_n and dfi_we_n by tphy_rdlat sclks)
Mode Register Programming
The DDR3 SDRAM memory devices are programmed using the mode registers MR0, MR1, MR2 and MR3. The
bank address bus (dfi_bank) is used to choose one of the mode registers, while the programming data is delivered
through the address bus (dfi_address). The memory data bus cannot be used for the mode register programming.
The initialization process uses the mode register initial values selected in the PHY IP GUI. If these mode registers
are not re-programmed by the user logic using the LMR command, they will remain in the same configuration as
programmed during the initialization process. Table 2-2 shows the list of available parameters and their initial default values from the GUI if they are not changed by the user.
Table 2-2. Initialization Default Values for Mode Register Settings

Type | Register | Value | Description | Local Address | GUI Setting
MR0 | Burst Length | 2’b00 | Fixed 8 | addr[1:0] | Yes
MR0 | Burst Type | 1’b0 | Sequential | addr[3] | Yes
MR0 | CAS Latency | 3’b000 | CL = 5 | addr[6:4], addr[2] | Yes
MR0 | Test Mode | 1’b0 | Normal | addr[7] | No
MR0 | DLL Reset | 1’b1 | DLL Reset = Yes | addr[8] | No
MR0 | WR Recovery | 3’b010 | 6 | addr[11:9] | Yes
MR0 | DLL Control for precharge PD | 1’b1 | Fast | addr[12] | Yes
MR0 | All Others | 0 | | addr[ROW_WIDTH-1:13] | No
MR1 | DLL Enable | 1’b0 | DLL Enable | addr[0] | No
MR1 | ODI Control | 2’b00 | RZQ/6 | addr[5], addr[1] | Yes
MR1 | RTT_nom | 3’b001 | RZQ/4 | addr[9], addr[6], addr[2] | Yes
MR1 | Additive Latency | 2’b00 | Disabled | addr[4:3] | Yes
MR1 | Write Level Enable | 1’b0 | Disabled | addr[7] | No
MR1 | TDQS Enable | 1’b0 | Disabled | addr[11] | No
MR1 | Qoff | 1’b0 | Enable | addr[12] | No
MR1 | All Others | 0 | | addr[ROW_WIDTH-1:13] | No
MR2 | CAS Write Latency | 3’b000 | 5 | addr[5:3] | Yes
MR2 | Rtt_WR | 2’b01 | RZQ/4 | addr[10:9] | Yes
MR2 | All Others | 0 | | | No
MR3 | All | 0 | | addr[ROW_WIDTH-1:0] | No
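As an illustration of how the Table 2-2 fields map onto dfi_address during an MR0 load (dfi_bank selects the register), the following Python sketch packs the default values. The helper and its argument defaults are illustrative, not part of the IP; the CAS latency bits use the JESD79-3 MR0 encoding for CL = 5 ({addr[6:4], addr[2]} = 001, 0), which is an assumption rather than a value taken from the table.

```python
# Sketch: assembling an MR0 programming word on dfi_address from the
# Table 2-2 fields. Field positions follow the Local Address column; the
# defaults are the GUI initial values. CL bit encoding per JESD79-3
# (assumption, see lead-in).

def mr0_word(burst_len=0b00, burst_type=0, cas_latency_bits=0b0010,
             wr_recovery=0b010, dll_reset=1, pd_mode=1):
    addr = 0
    addr |= burst_len & 0b11                         # addr[1:0]  burst length
    addr |= (cas_latency_bits & 0b1) << 2            # addr[2]    CL low bit
    addr |= (burst_type & 0b1) << 3                  # addr[3]    burst type
    addr |= ((cas_latency_bits >> 1) & 0b111) << 4   # addr[6:4]  CL high bits
    addr |= (dll_reset & 0b1) << 8                   # addr[8]    DLL reset
    addr |= (wr_recovery & 0b111) << 9               # addr[11:9] write recovery
    addr |= (pd_mode & 0b1) << 12                    # addr[12]   DLL control for PD
    return addr

print(hex(mr0_word()))  # 0x1510
```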
Chapter 3:
Parameter Settings
The DDR3 PHY IP core Configuration GUI, in the IPexpress (for LatticeECP3) or Clarity Designer (for ECP5) tool, is used to create IP and architectural modules in the Lattice Diamond software. Refer to IP Core Generation and Evaluation for
LatticeECP3 DDR3 PHY or IP Core Generation and Evaluation for ECP5 DDR3 PHY for a description of how to
generate the IP core.
Table 3-1 provides a list of user-configurable parameters for the DDR3 PHY IP core. The parameter settings are
specified using the DDR3 PHY IP core Configuration GUI in IPexpress. The numerous DDR3 PHY IP parameter
options are partitioned across multiple GUI tabs as shown in this chapter.
Table 3-1. IP Core Parameters

Parameter | Range/Options | Default

Type Tab
Device Information
Select Memory | Micron DDR3 1Gb-25E / Micron DDR3 2Gb-25E / Micron DDR3 4Gb-25E / Custom | Micron DDR3 1Gb-25E
Clock | 400 / 333 / 300 MHz (for -8, -8L or -9 device); 333 / 300 MHz (for -7 or -7L device); 300 MHz (for -6 or -6L device) | 400 (for -8, -8L or -9 device); 333 (for -7 or -7L device); 300 (for -6 or -6L device)
Memory Type | Unbuffered DIMM / On-board Memory / Registered DIMM | Unbuffered DIMM
Memory Data Bus Size | 8 / 16 / 24 / 32 / 40 / 48 / 56 / 64 / 72 | 32
Configuration | x4 / x8 / x16 | x8
Memory Configuration
DIMM Type (or Chip Select Width) | Single Rank / Double Rank (or 1 / 2) | Single Rank (or 1)
Address Mirror | Enable / Disable | Disabled
Clock Width | 1 / 2 / 4 | 1
CKE Width | 1 / 2 | 1
2T Mode | Unselected / Selected | Unselected
Write Leveling | Unselected / Selected | Selected
Controller Reset to Memory | Unselected / Selected | Selected

Setting Tab
Address
Row Size | 12 - 16 | 14
Column Size | 10 - 12 | 10
Mode Register Initial Setting
Burst Length | Fixed 4, On the fly, Fixed 8 | Fixed 8
CAS Latency | 5, 6, 7, 8 | 5
Burst Type | Sequential / Interleave | Sequential
Write Recovery | 5, 6, 7, 8, 10, 12 | 6
DLL Control for PD | Slow Exit / Fast Exit | Fast Exit
ODI Control | RZQ/6, RZQ/7 | RZQ/6
RTT_Nom (ohm) | Disabled, RZQ/4, RZQ/2, RZQ/6, RZQ/12, RZQ/8 | RZQ/4
Additive Latency | 0, CL-1, CL-2 | 0
CAS Write Latency | 5 / 6 | 5
RTT_WR | Off, RZQ/4, RZQ/2 | RZQ/4

Pin Selection Tab (Only for LatticeECP3)
Manually Adjust | Unselected / Selected | Unselected
Pin Side
Left side | Unselected / Selected | Selected
Right side | Unselected / Selected | Unselected
clk_in / PLL Locations¹
clk_in pin | Refer Locate constraints¹ | U6¹
PLL used | Refer Locate constraints¹ | PLL_R61C5¹
DDR3 SDRAM Memory Clock Pin Location
em_ddr_clk | (Bank 1² / Bank 2 / Bank 3) or (Bank 0² / Bank 6 / Bank 7) | Bank 6
DQS Locations
DQS_0 | Refer Locate constraints¹ | L10¹
DQS_1 | Refer Locate constraints¹ | M10¹
DQS_2 | Refer Locate constraints¹ | T9¹
DQS_3 | Refer Locate constraints¹ | W6¹
DQS_4 | Refer Locate constraints¹ | N/A¹
DQS_5 | Refer Locate constraints¹ | N/A¹
DQS_6 | Refer Locate constraints¹ | N/A¹
DQS_7 | Refer Locate constraints¹ | N/A¹
DQS_8 | Refer Locate constraints¹ | N/A¹

Design Tools Option and Info Tab
Design Tools Option
Support Synplify | Unselected / Selected | Selected
Support ModelSim | Unselected / Selected | Selected
Support ALDEC | Unselected / Selected | Selected
Memory I/F Pins
Number of BiDi Pins | Pin count for selected configuration | Pin count for selected configuration
Number of Output Pins | Pin count for selected configuration | Pin count for selected configuration
User I/F Pins
Number of Input Pins | Pin count for selected configuration | Pin count for selected configuration
Number of Output Pins | Pin count for selected configuration | Pin count for selected configuration

1. The default values for the Pin Selection tab are target device-dependent. Default values provided in the table are for a LatticeECP3-150EA 1156-ball fpBGA device. Refer to Appendix C, LatticeECP3 DDR3 PHY IP Locate Constraints for further details.
2. The Bank 0 or Bank 1 option is available only for 333 MHz and 300 MHz speeds.
Type Tab
The Type tab allows the user to select a DDR3 PHY IP core configuration for the target memory device as well as
the core functional features. These parameters are considered to be static parameters since the values for these
parameters can only be set in the GUI. The DDR3 PHY IP core must be regenerated to change the value of any of
these parameters. Figure 3-1 shows the contents of the Type tab.
Figure 3-1. DDR3 PHY IP Core Type Options
The Type tab supports the following parameters:
Select Memory
The Micron DDR3 1Gb-25E is provided as the default DDR3 memory DIMM. The evaluation package comes with
the memory model of this DIMM. The other option, Custom, provides a way to select timing and configuration settings for any other DIMM or on-board memory design.
RefClock (Only for ECP5 DDR3 IP)
Reference input clock to the PLL which generates the system clock (SCLK) and memory clock (em_ddr_clk).
The ECP3 DDR3 PHY IP can only work with a reference input clock to the PLL that is one fourth of the memory clock
selected in the next field, Clock, in this Type tab.
Clock (for ECP3) MemClock (for ECP5)
This parameter specifies the frequency of the memory clock to the DIMM or on-board memory. The allowed range
is from 300 MHz to 400 MHz. The default value is linked to the speed grade of the Lattice device selected. For example, the default memory clock for ECP5 -8 devices is 400 MHz. The corresponding value for ECP5 -7 devices is 333
MHz, and for ECP5 -6 devices it is 300 MHz.
In addition to the default value, the -8 device also has two more clock frequency options (333 MHz and 300 MHz) and
the -7 device has one more frequency option (300 MHz).
Memory Type
This option is used to select the DDR3 memory type: Unbuffered DIMM module (UDIMM or SODIMM) or Registered DIMM module. Users can also choose “On-board Memory” for designs that implement on-board devices
instead of DIMMs.
Memory Data Bus Size
This option allows the user to select the data bus width of the DDR3 memory to which the DDR3 PHY IP core is
connected. If the memory module has a wider data bus than required, only the required data width has to be
selected.
Configuration
This option is used to select the device configuration of the DIMM or on-board memory. The DDR3 PHY IP core
supports device configurations x4, x8, and x16.
DIMM0 Type or Chip Select Width
When Unbuffered DIMM or Registered DIMM is selected as the Memory Type, this option allows the user to select
the number (Single/Dual) of ranks available in the selected DIMM.
When On-board Memory is selected as the Memory Type, this option allows the user to select the number of chip
selects required for the on-board memory.
Address Mirror
This option allows the user to select an address mirroring scheme for rank1 if a Dual DIMM module is used. This
option is not available for on-board memory.
Clock Width
This field shows the number of clocks with which the DDR3 PHY IP core drives the memory. The IP core provides
one differential clock per rank/chip select, as default. Users can select up to two differential clocks per rank/chip
select.
CKE Width
This field shows the number of Clock Enable (CKE) signals with which the PHY IP drives the memory. The IP core
provides one CKE signal per Rank/Chip select, as default.
2T Mode
This option allows the user to enable or disable the 2T timing for command signals when Unbuffered DIMM or
Onboard Memory is selected. This option is not available for Registered DIMM modules.
Write Leveling
This option allows the user to enable or disable the write leveling operation of the DDR3 PHY IP core. This option
to enable/disable write leveling is available only when the Memory Type is selected as On-board Memory. For
unbuffered DIMM or registered DIMM, write leveling is always enabled.
Refer to Initialization Module section for more information.
Controller Reset to Memory
When this option is enabled, the asynchronous reset input signal, rst_n, to the DDR3 PHY IP core resets both the
core and the memory devices. If this option is disabled (unchecked), the rst_n input of the core resets only the core,
not the memory device. Refer to the Reset Handling section for more information.
Setting Tab
The Setting tab enables the user to select various configuration options for the target memory device/module.
Parameters under the group, Mode Register Initial Setting, are dynamic parameters. Initialization values are set
from the GUI. These values are dynamically changeable using LOAD_MR commands. Refer to the JESD79-3,
DDR3 SDRAM Standard, for the allowed values.
Figure 3-2 shows the contents of the Setting tab.
Figure 3-2. DDR3 PHY IP Core Setting Options
The Setting tab supports the following parameters:
Row Size
This option indicates the default row address size used in the selected memory configuration. If the option “Custom” is selected in the Select Memory field of the Type tab, the user can choose a value other than the default
value.
Column Size
This option indicates the default column address size used in the selected memory configuration. If the option
“Custom” is selected in the Select Memory field of the Type tab, the user can choose a value other than the default
value.
Burst Length
This option sets the Burst Length value in Mode Register 0 during initialization. This value remains until the user
writes a different value to Mode Register 0.
CAS Latency
This option sets the CAS Latency value in Mode Register 0 during initialization. This value remains until the user
writes a different value to Mode Register 0.
Burst Type
This option sets the Burst Type value in Mode Register 0 during initialization. This value remains until the user
writes a different value to Mode Register 0.
Write Recovery
This option sets the Write Recovery value in Mode Register 0 during initialization. It is set in terms of the memory
clock. This value remains until the user writes a different value to Mode Register 0.
DLL Control for PD
This option sets the DLL Control for Precharge PD value in Mode Register 0 during initialization. This value
remains until the user writes a different value to Mode Register 0.
ODI Control
This option sets the Output Driver Impedance Control value in Mode Register 1 during initialization. This value
remains until the user writes a different value to Mode Register 1.
RTT_Nom
This option sets the nominal termination, Rtt_Nom, value in Mode Register 1 during initialization. This value
remains until the user writes a different value to Mode Register 1.
Additive Latency
This option sets the Additive Latency, AL, value in Mode Register 1 during initialization. This value remains until the
user writes a different value to Mode Register 1.
CAS Write Latency
This option sets the CAS Write Latency, CWL, value in Mode Register 2 during initialization. This value remains
until the user writes a different value to Mode Register 2.
RTT_WR
This option sets the Dynamic ODT termination, Rtt_WR, value in Mode Register 2 during initialization. This value
remains until the user writes a different value to Mode Register 2.
Pin Selection Tab
The Pin Selection tab enables users to assign device pin locations for reference input clock and DQS memory
strobe signals. For each DQS location selected through this tab, the Lattice software automatically assigns pin
locations for the associated DQ and DM signals. Figure 3-3 shows the contents of the Pin Selection tab. Refer to
Appendix C: “DDR3 PHY IP Locate Constraints” for additional information.
Figure 3-3. DDR3 PHY IP Core Pin Selection Options (Only for LatticeECP3 Device)
Manually Adjust
The pin locations displayed in this tab are the default pin locations when the user selects the device LFE3-150EA8FN1156C in the IPexpress GUI.
Users can specify alternate pin locations specific to their application and hardware implementation by selecting the
Manually Adjust checkbox.
Pin Side
In LatticeECP3-EA devices, only the left or right side I/O banks can be used for DDR3 Data (DQ), Data Strobe
(DQS) and Data Mask (DM) signals. The top and bottom I/O banks cannot be used for these signals.
This parameter allows the user to select the device side (left or right) for locating these DDR3 signals.
clk_in/PLL Locations
This parameter supports two options: clk_in pin and PLL used.
clk_in pin
In LatticeECP3-EA devices, there is a dedicated clock input pad for each PLL. This option provides, through a pull-down menu, a list of legal clock input pins allowed for the DDR3 PHY IP core on the selected side. Refer to Appendix C: “DDR3 PHY IP Locate Constraints” for additional clock input pin options.
PLL Used
The content of this box specifies the location of the PLL that is connected to the selected clock input pin as specified by the clk_in pin option. This is a read-only field. To use a different PLL, the user must choose the appropriate
clock input pin via the clk_in pin parameter.
DDR3 SDRAM Memory Clock Pin Location
em_ddr_clk
This option, through a pull-down menu, shows the valid I/O banks available for locating the memory clock. For the
400 MHz memory clock operation, only the left or right side I/O banks are capable of working at that clock speed.
For a 333 MHz or 300 MHz memory clock speed, the top side I/O banks can also be used. The pull-down menu
lists the available I/O banks based on the memory clock speed selected in the Type tab.
Note that the memory clock signals use one full DQS group or a partial DDR group (a DDR group without a DQS pin). When the memory clock signals are located on the left or right side and occupy a full DQS group, the number of DQS groups available for locating the DQS/DQ signals on that side is reduced by one. The DDR3 PHY IP core GUI always checks whether the selected data width can be implemented using the available DQS groups. If it cannot, the GUI displays an error message when the IP core is generated.
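The feasibility rule the GUI applies can be restated in a few lines. The following Python sketch is ours, for illustration only; actual per-side DQS group counts depend on the device and package:

```python
def dqs_groups_needed(data_width):
    # One DQS group serves 8 DQ bits.
    return (data_width + 7) // 8

def width_fits(data_width, side_dqs_groups, clk_uses_full_group):
    # A memory clock placed on a full DQS group on this side
    # removes that group from the DQS/DQ pool.
    available = side_dqs_groups - (1 if clk_uses_full_group else 0)
    return dqs_groups_needed(data_width) <= available

print(width_fits(32, 5, True))   # 4 groups needed, 4 available -> True
print(width_fits(40, 5, True))   # 5 groups needed, 4 available -> False
```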
DQS Locations
This option allows the user to assign pins for each DQS signal of the selected configuration. All available pin locations for each DQS signal on the selected side are provided in a pull-down menu.
For each DQS pin, selected from the pull-down menu, the Lattice software will automatically assign pin locations
for the associated DQ and DM signals.
Users should verify that no two DQS signals are assigned to the same pin.
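A duplicate-assignment check like the one recommended above can be expressed in a few lines of Python (illustrative only; the GUI does not expose such an API, and the pin names are made up):

```python
from collections import Counter

def duplicated_pins(dqs_assignments):
    """Return pins that more than one DQS signal is mapped to.

    dqs_assignments: dict mapping DQS signal name -> pin location.
    """
    counts = Counter(dqs_assignments.values())
    return sorted(pin for pin, n in counts.items() if n > 1)

# Hypothetical pin names for illustration:
print(duplicated_pins({"DQS0": "K4", "DQS1": "K4", "DQS2": "M5"}))  # ['K4']
```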
Note 1: Since there is no I/O bank restriction on address, command, and control signal pin selection, the user is
expected to provide pin locations for these signals directly in the preference (.lpf) file.
Note 2: For designs with a memory clock speed of 400 MHz, the memory clock pads (em_ddr_clk and
em_ddr_clk_n) should be located in the same side where the DQS pads are located. For designs with slower memory clock speeds (333 MHz and below), the memory clock pads can be placed either in the top I/O bank or in the
same side where the DQS pads are located.
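Address, command, and control pin locations are supplied directly in the preference (.lpf) file, as Note 1 above describes. A minimal fragment could look like the following; the signal names and SITE values here are illustrative placeholders, not recommendations (see Appendix C for the locate constraint style):

```
LOCATE COMP "em_ddr_addr[0]" SITE "C14" ;
LOCATE COMP "em_ddr_addr[1]" SITE "D14" ;
LOCATE COMP "em_ddr_ba[0]"   SITE "E15" ;
LOCATE COMP "em_ddr_cs[0]"   SITE "F15" ;
```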
Design Tools Options and Info Tab
The Design Tools Options and Info tab enables the user to select the simulation and synthesis tools to be used for
generating their design. This tab also provides information about the pinout resources used for the selected configuration. Figure 3-4 shows the contents of the Design Tools Options and Info tab.
Figure 3-4. DDR3 PHY IP Core Design Tools Options and Info Options in the IPexpress Tool
The Design Tools Options and Info tab supports the following parameters:
Support Synplify
If selected, IPexpress generates evaluation scripts and other associated files required to synthesize the top-level design using the Synplify synthesis tool.
Support Precision
If selected, IPexpress generates evaluation scripts and other associated files required to synthesize the top-level design using the Precision RTL synthesis tool.
Support ModelSim
If selected, IPexpress generates evaluation scripts and other associated files required to simulate the top-level design using the ModelSim simulator.
Support ALDEC
If selected, IPexpress generates evaluation scripts and other associated files required to simulate the top-level design using the Active-HDL simulator.
Memory I/F Pins
This section displays the following information:
Number of BiDi Pins
This is a notification of the number of bidirectional pins used in the memory-side interface for the selected configuration. Bidirectional pins are used for the Data (DQ) and Data Strobe (DQS) signals only.
Number of Output Pins
This is a notification of the number of output-only pins used in the memory-side interface for the selected configuration. Output-only pins are used for the DDR3 address, command, control, clock and reset signals.
User I/F Pins
This section displays the following information:
Number of Input Pins
This is a notification of the number of input-only pins used in the user-side interface for the selected configuration. Input-only pins are used for the user-side write data, address, command and control signals. The write data width is four times the memory-side data width.
Number of Output Pins
This is a notification of the number of output-only pins used in the user-side interface for the selected configuration. Output-only pins are used for the user-side read data and status signals. The read data width is four times the memory-side data width.
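The 4x width relationship above follows from the core's 1:2 clock gearing combined with double data rate: every user-side clock cycle carries four memory-side data beats. A small illustrative sketch (ours, not part of the IP):

```python
def user_side_widths(mem_dq_width):
    # DDR (2 beats per memory clock) x 2:1 clock gearing
    # = 4 memory-side beats per user-side sclk cycle.
    write_data_width = 4 * mem_dq_width
    read_data_width = 4 * mem_dq_width
    return write_data_width, read_data_width

print(user_side_widths(32))  # (128, 128)
```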
Chapter 4:
IP Core Generation and Evaluation
for LatticeECP3 DDR3 PHY
This chapter provides information on how to generate the DDR3 PHY IP core using the Diamond software IPexpress tool, and how to include the core in a top-level design.
For flow guidelines and known issues on this IP core, see the Lattice DDR3 PHY IP readme.htm document. This
file is available once the core is installed in Diamond. The document provides information on creating an evaluation
version of the core for use in Diamond and for simulation.
Getting Started
The DDR3 PHY IP core is available for download from the Lattice IP server using the IPexpress tool. The IP files
are installed, using the IPexpress tool, in any user-specified directory. After the IP core has been installed, it will be
available in the IPexpress GUI dialog box shown in Figure 4-1.
The IPexpress tool GUI dialog box for the DDR3 PHY IP core is shown in Figure 4-1. To generate a specific IP core
configuration, the user specifies:
• Project Path – Path to the directory where the generated IP files will be loaded.
• File Name – “username” designation given to the generated IP core and corresponding folders and files.
(Caution: ddr3 and ddr3_sdram_core are Lattice-reserved names. The user should not use any of these names
as a file name.)
• Module Output – Verilog or VHDL
• Device Family – Device family to which the IP core is to be targeted
• Part Name – Specific targeted device within the selected device family
Figure 4-1. IPexpress Dialog Box
Note that if the IPexpress tool is called from within an existing project, Project Path, Design Entry, Device Family
and Part Name default to the specified project parameters. Refer to the IPexpress online help for further information.
IPUG96_2.1, October 2016
28
DDR3 PHY IP Core User Guide
IP Core Generation and Evaluation for LatticeECP3 DDR3 PHY
To create a custom configuration, the user clicks the Customize button in the IPexpress tool dialog box to display
the DDR3 PHY IP core Configuration GUI, as shown in Figure 4-2. From this dialog box, the user can select the IP
parameter options specific to their application. Refer to the Parameter Settings section for more information on the
DDR3 PHY IP core parameter settings.
Figure 4-2. Configuration GUI
IPexpress-Created Files and Top Level Directory Structure
When the user clicks the Generate button in the IP Configuration dialog box, the IP core and supporting files are
generated in the specified “Project Path” directory. The directory structure of the generated files is shown in
Figure 4-3.
Figure 4-3. LatticeECP3 DDR3 Core Directory Structure
Understanding the core structure is an important step in designing a system that uses the core. The files generated for simulation and synthesis are summarized in Table 4-1, which lists the key files and directories created by the IPexpress tool and describes how they are used. The IPexpress tool creates several files that are used throughout the design cycle. The names of most of the created files are customized with the user's module name specified in IPexpress.
Table 4-1. File List

Source Files
• <username>.lpc – This file contains the IPexpress tool options used to recreate or modify the IP core in IPexpress.
• <username>.ipx – The IPX file holds references to all of the elements of an IP core or module after it is generated from IPexpress (Diamond version only). The file is used to bring in the appropriate files during design implementation and analysis. It is also used to re-load parameter settings into the IP/module generation GUI when an IP/module is regenerated.
• ..\params\ddr3_sdram_mem_params.v (simulation) – This file provides the user options of the IP core for the simulation models.
• <username>_beh.v (simulation) – This is the obfuscated core simulation model.
• ..\src\rtl\top\ecp3\ddr3_sdram_phy_top_wrapper.v (or .vhd) (simulation and synthesis) – This is the top-level file for simulation and synthesis for a user design (.v file if Verilog is selected, .vhd file if VHDL is selected). This file has black-box instantiations of the core and I/O modules and a source instantiation of the clock synchronization module.
• ..\impl\ddr3_sdram_phy_top_wrapper.v (or .vhd) (synthesis) – This is the top-level file for evaluation synthesis only (.v file if Verilog is selected, .vhd file if VHDL is selected). This file has black-box instantiations of the core and I/O modules and a source instantiation of the clock synchronization module.
• <username>.ngo (synthesis) – This file provides the synthesized IP core for the selected configuration.

Model Files
• ..\models\ecp3\ddr3_clks.v, ddr3_pll.v, jitter_filter.v, clk_stop.v, clk_phase.v and pll_control.v (simulation) – These are the source files of the Clock Synchronization Module (CSM). The CSM block provides the clocks and timing required for DDR3 operation, including the system clock (sclk) for the core and the edge clock (eclk) and faster system clock (sclk2x) for the I/O logic.
• ..\models\mem\ddr3.v (simulation) – DIMM simulation model.
• ..\models\mem\ddr3_<DIMM_Type>_<mem_data_width>.v (simulation) – Memory configuration module (DIMM_Type: dimm for UDIMM and on-board memory, rdimm for RDIMM; mem_data_width: 8/16/24/32/40/48/56/64/72).
• ..\models\mem\ddr3_parameters.vh (simulation) – Memory model parameter file.

Evaluation Test Bench Files
• ..\testbench\top\ecp3\test_phy_ctrl.v (simulation) – This is the evaluation test bench file.
• ..\tests\ecp3\cmd_gen.v (simulation) – This is the command generator for the evaluation test bench.
• ..\tests\ecp3\tb_config_params.v (simulation) – This file contains the test bench configuration parameters.
• ..\tests\ecp3\testcase.v (simulation) – This is the evaluation test case file.

Evaluation Simulation Script Files
• ..\sim\aldec\<username>_eval.do (simulation) – This file contains the Active-HDL simulation script.
• ..\sim\aldec\<username>_gatesim_<synthesis_tool>.do (simulation) – This is the Active-HDL script for netlist simulation. It is generated only if the selected device package has enough I/Os for all the user-side and memory-side signals. (<synthesis_tool>: Precision or Synplify)
• ..\sim\modelsim\<username>_eval.do (simulation) – This file contains the ModelSim simulation script.
• ..\sim\modelsim\<username>_gatesim_<synthesis_tool>.do (simulation) – This is the ModelSim script for netlist simulation. It is generated only if the selected device package has enough I/Os for all the user-side and memory-side signals. (<synthesis_tool>: Precision or Synplify)

Evaluation Implementation Script Files
• ..\impl\synplify\<username>_eval.ldf (synthesis) – This is the Diamond project file for the Synplify flow.
• ..\impl\precision\<username>_eval.ldf (synthesis) – This is the Diamond project file for the Precision RTL flow.
• ..\impl\synplify\<username>_eval.lpf (synthesis) – This is the PAR preference file for the Synplify flow.
• ..\impl\precision\<username>_eval.lpf (synthesis) – This is the PAR preference file for the Precision RTL flow.
• ..\impl\synplify\post_route_trace.prf (synthesis) – This is the post-route trace preference file for the Synplify flow.
• ..\impl\precision\post_route_trace.prf (synthesis) – This is the post-route trace preference file for the Precision RTL flow.
DDR3 PHY IP File Structure
The DDR3 PHY IP core consists of the following blocks:
• Top-level wrapper (RTL)
• An obfuscated behavioral model of the DDR3 PHY IP core for simulation and an encrypted netlist for synthesis
• Clock Synchronous Module (RTL files for simulation and Verilog flow synthesis and a netlist file for VHDL flow
synthesis)
All of these blocks are required to implement the IP core on the target FPGA. Figure 4-4 depicts the interconnection among the blocks.
Figure 4-4. File Structure of the DDR3 PHY IP Core
Top-level Wrapper
The IP core and the CSM block are instantiated in the top-level wrapper. When a system design is made with the
DDR3 PHY IP core, this wrapper must be instantiated. If needed, the CSM block may be moved out of the wrapper
and instantiated separately. The wrapper is fully configured as per the generated parameter file.
Clock Synchronization Module
The DDR3 PHY IP core uses a clock synchronization module that generates the system clock (sclk) for the core
and the edge clock (eclk) and the high-speed system clock (sclk2x) for the I/O modules. This CSM module operates with a dedicated PLL which works on a reference clock input and generates the SCLK, ECLK and SCLK2x
outputs. For easy regeneration of the PLL for different reference clock frequencies, the PLL module ddr3_pll.v is
placed outside the CSM module in the directory ..\ddr_p_eval\models\ecp3. In addition to clock generation, this
CSM block performs a synchronization process after every reset to lock a pre-defined phase relationship between these clocks. The clock synchronization block uses a DQSDLL to extract a PVT-compensated 90-degree delay count for the I/O block, which appropriately shifts the DQS signal during write and read operations.
The sclk clock output from the CSM block which drives the IP core is also made available to the external user logic.
If a system that implements the DDR3 memory controller requires a clock generator external to the IP core, then
the CSM block incorporated inside the IP core’s top-level wrapper can be shifted out of the wrapper. Connections
between the top-level wrapper and the clock generator are fully RTL based, and therefore, it is possible to modify
the structure and connection of the core for the clock distribution to meet system needs.
This module is provided as RTL source for all cases of simulation and for Verilog flow synthesis. For VHDL flow
synthesis, this module is available as a netlist.
Simulation Files for IP Core Evaluation
Once a DDR3 PHY IP core is generated, it contains a complete set of test bench files to simulate a few example
core activities for evaluation. The simulation environment for the DDR3 PHY IP core is shown in Figure 4-5. This
structure can be reused by system designers to accelerate their system validation.
Figure 4-5. Simulation Structure for DDR3 PHY IP Core Evaluation
Test Bench Top
The test bench top includes the IP core under test, memory model, stimulus generator and monitor blocks. It is
parameterized by the IP core parameter file.
Obfuscated PHY IP Simulation Model
The obfuscated top-level wrapper simulation model for the core includes all the PHY modules. This obfuscated
simulation model must be included in the simulation.
Command Generator and Checker
The command generator generates stimuli for the IP core. The core initialization and command generation activities are predefined in the provided test case module. It is possible to customize the test case module to see the
desired activities of the IP core.
Test Bench Configuration Parameter
The test bench configuration parameter provides the parameters for the test bench files. These parameters are derived from the core parameter file and do not need to be configured separately. Users who need a special memory configuration, however, may modify this parameter set to support the desired configuration.
Memory Model
The DDR3 PHY IP core test bench uses a memory simulation model provided by one of the most popular memory
vendors. If a different memory model is required, it can be used by simply replacing the instantiation of the model in the memory configuration modules located in the same folder.
Memory Model Parameter
This memory parameter file comes with the memory simulation model. It contains the parameters that the memory
simulation model needs. It is not necessary for users to change any of these parameters.
Evaluation Script File
ModelSim and Active-HDL simulation macro script files are included for instant evaluation of the IP core. All
required files for simulation are included in the macro script. This simulation script can be used as a starting point
of a user simulation project.
The evaluation test bench files are provided to show the DFI bus signal timing for typical memory access commands from the memory controller. These evaluation simulation files are made available only for the following
memory settings: CL=5, AL=0, CWL=5. For any other memory setting values, the user is advised to refer to this
user’s guide and develop the corresponding simulation environment.
Note on Shortening Simulation Run Time
The DDR3 PHY IP core implements many timers to comply with JEDEC specifications. Because of these timers, functional simulation takes a long time at various stages. To reduce the simulation run time, the IP core includes an option for lowering the timer counts, particularly on those timers used for wait periods. This option can be enabled by adding the define SIM in the simulation script. Note that the reduced timer values are valid for simulation only and must not be included in the synthesis script.
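One way to add the define in a ModelSim flow is on the vlog compile line in the generated .do script; the fragment below is illustrative, with the compile list abbreviated:

```
# In <username>_eval.do, extend the compile step with +define+SIM so the
# IP core shortens its JEDEC wait timers (simulation only):
vlog +define+SIM <design and test bench compile list>
```

Active-HDL's compiler accepts a similar +define+ option; leave the define out of any synthesis script.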
Hardware Evaluation
The DDR3 PHY IP core supports Lattice’s IP hardware evaluation capability, which makes it possible to create versions of the IP core that operate in hardware for a limited period of time (approximately four hours) without requiring
the purchase of an IP license. It may also be used to evaluate the core in hardware in user-defined designs.
Enabling Hardware Evaluation in Diamond
Choose Project > Active Strategy > Translate Design Settings. The hardware evaluation capability may be
enabled/disabled in the Strategy dialog box. It is enabled by default.
Updating/Regenerating the IP Core
By regenerating an IP core with the IPexpress tool, the user can regenerate a core with the same configuration or
modify any of its settings including: device type, design entry method, and any of the options specific to the IP core.
Regenerating can be done to modify an existing IP core or to create a new but similar one.
To regenerate an IP core in Diamond:
1. In IPexpress, click the Regenerate button.
2. In the Regenerate view of IPexpress, choose the IPX source file of the module or IP you wish to regenerate.
3. IPexpress shows the current settings for the module or IP core in the Source box. Make your new settings in the
Target box.
4. If you want to generate a new set of files in a new location, set the new location in the IPX Target File box. The
base of the file name will be the base of all the new file names. The IPX Target File must end with an .ipx extension.
5. Click Regenerate. The module’s dialog box opens showing the current option settings.
6. In the dialog box, choose the desired options. To get information about the options, click Help. Also, check the
About tab in IPexpress for links to technical notes and user guides. IP may come with additional information. As
the options change, the schematic diagram of the module changes to show the I/O and the device resources
the module will need.
7. To import the module into your project, if it’s not already there, select Import IPX to Diamond Project (not
available in stand-alone mode).
8. Click Generate.
9. Check the Generate Log tab to check for warnings and error messages.
10. Click Close.
The IPexpress package file (.ipx) supported by Diamond holds references to all of the elements of the generated IP
core required to support simulation, synthesis and implementation. The IP core may be included in a user's design
by importing the .ipx file to the associated Diamond project. To change the option settings of a module or IP core
that is already in a design project, double-click the module’s .ipx file in the File List view. This opens IPexpress and
the module’s dialog box showing the current option settings. Then go to step 6 above.
Chapter 5:
IP Core Generation and Evaluation
for ECP5 DDR3 PHY
This chapter provides information on how to generate the DDR3 PHY core using the Lattice Diamond design software Clarity Designer tool, and how to include the core in a top-level design.
For flow guidelines and known issues on this IP core, see the Lattice DDR3 PHY IP ReadMe document. This file is available once the core is installed in Diamond. The document provides information on creating an evaluation version of the core for use in Diamond and for simulation.
Getting Started
The DDR3 PHY IP core is available for download from the Lattice IP Server using the Clarity Designer tool. IP files
are automatically installed using ispUPDATE technology in any customer-specified directory. After the IP core has
been installed, invoke the Clarity Designer which opens the Clarity Designer tool dialog box shown in Figure 5-1.
• Create new Clarity design – Choose to create a new Clarity Design project directory in which the DDR3
SDRAM IP core will be generated.
• Design Location – Clarity Design project directory Path.
• Design Name – Clarity Design project name.
• HDL Output – Hardware Description Language Output Format (Verilog or VHDL).
• Open Clarity design – Open an existing Clarity Design project.
• Design File – Name of existing Clarity Design project file with .sbx extension.
Figure 5-1. Clarity Designer Tool Dialog Box
When the Create button is clicked, the Clarity Designer IP Catalog window opens as shown in Figure 5-2. You can generate a DDR3 PHY IP configuration by double-clicking the IP name in the Catalog tab.
Figure 5-2. Clarity Designer IP Catalog Window
In the ddr3 sdram phy dialog box shown in Figure 5-3, specify the following:
• Instance Name – The instance module name of the DDR3 PHY IP core. This instance name is also referred to as the username at various places in this user guide.
Figure 5-3. IP Generation Dialog Box
Note that if the Clarity Designer tool is called from within an existing project, Design Location, Device Family and
Part Name default to the specified project parameters. Refer to the Clarity Designer tool online help for further
information.
To create a custom configuration, click the Customize button in the IP Generation dialog box shown in Figure 5-3 to display the IP core Configuration GUI, as shown in Figure 5-4. From this dialog box, you can select the IP parameter options specific to your application. Refer to Parameter Settings for more information on the DDR3 PHY IP parameter settings.
Figure 5-4. IP Configuration GUI
Created Files and Top Level Directory Structure
When the user clicks the Generate button in the IP Configuration dialog box, the IP core and supporting files are
generated in the specified “Project Path” directory. An example of the directory structure of the generated files is
shown in Figure 5-5.
Figure 5-5. ECP5 DDR3 Core Directory Structure
Understanding the core structure is an important step in designing a system that uses the core. The files generated for simulation and synthesis are summarized in Table 5-1, which lists the key files and directories created by the Clarity Designer tool and describes how they are used. The tool creates several files that are used throughout the design cycle. The names of most of the created files are customized with the instance name specified in the Clarity Designer tool.
Table 5-1. File List

Source Files
• <username>.lpc – This file contains the Clarity Designer tool options used to recreate the core.
• ..\params\ddr3_sdram_mem_params_<username>.v (simulation) – This file includes all the selected and derived parameters of the generated IP.
• <username>_beh.v (simulation) – This is the obfuscated core simulation model.
• ..\rtl\top\ecp5\ddr3_sdram_mem_top_wrapper_<username>.v (or .vhd) (simulation and synthesis) – This is the top-level file for simulation and synthesis (.v file if Verilog is selected, .vhd file if VHDL is selected). This file has black-box instantiations of the core and I/O modules and a source instantiation of the clock synchronization module. Refer to the DUMMY LOGIC removal section for more details.
• <username>.ngo (synthesis) – This file provides the synthesized IP core.
• ddr_clks_<username>.ngo (synthesis) – This file provides the synthesized CSM module.
• ddr3_sdram_phy_top_<username>.ngo (synthesis) – This file provides the synthesized IP evaluation top-level module.
• pmi_pll_*.ngo (synthesis) – This file provides the synthesized PLL module.

Model Files
• ..\models\ecp5\ddr_clk_src.v and ..\models\ecp5\pmi_pll_fp.v (simulation) – These are the source files of the clock synchronization logic. A PLL and a DQSDLL are used to generate the system clock (SCLK) for the core and the edge clock (ECLK) for the I/O logic.
• ..\models\mem\ddr3_<mem_data_width>.v and ..\models\mem\ddr3_<DIMM_Type>_<mem_data_width>.v (simulation) – DIMM simulation models (DIMM_Type: dimm for UDIMM, rdimm for RDIMM; mem_data_width: 8/16/24/32/40/48/56/64/72).
• ..\models\mem\ddr3_parameters.vh (simulation) – Memory model parameter file.

Evaluation Test Bench Files
• ..\testbench\top\ecp5\test_phy_ctrl.v (simulation) – This is the evaluation test bench top-level file.
• ..\tests\ecp5\cmd_gen_phy.v (simulation) – This is the command generator for the evaluation test bench.
• ..\tests\ecp5\tb_config_params.v (simulation) – This file contains the test bench configuration parameters.
• ..\tests\ecp5\testcase.v (simulation) – This is the evaluation test case file.

Evaluation Simulation Script Files
• ..\sim\aldec\<username>_eval.do (simulation) – This is the Aldec (Active-HDL) simulation script.
• ..\sim\aldec\<username>_gatesim_<synthesis_tool>.do (simulation) – This is the Aldec script for netlist simulation. It is generated only if the selected device package has enough I/Os for all the user-side and memory-side signals. (<synthesis_tool>: precision or synplify)
• ..\sim\modelsim\<username>_eval.do (simulation) – This is the ModelSim simulation script.
• ..\sim\modelsim\<username>_gatesim_<synthesis_tool>.do (simulation) – This is the ModelSim script for netlist simulation. It is generated only if the selected device package has enough I/Os for all the user-side and memory-side signals. (<synthesis_tool>: precision or synplify)

Evaluation Implementation Script Files
• ..\impl\synplify\<username>_eval.ldf (synthesis) – This is the Diamond project file for the Synplify flow.
• ..\impl\lse\<username>_eval.ldf (synthesis) – This is the Diamond project file for the LSE flow.
• ..\impl\synplify\<username>_eval.lpf (synthesis) – This is the PAR preference file for the Synplify flow.
• ..\impl\lse\<username>_eval.lpf (synthesis) – This is the PAR preference file for the LSE flow.
• ..\impl\synplify\post_route_trace.prf (synthesis) – This is the post-route trace preference file for the Synplify flow.
• ..\impl\lse\post_route_trace.prf (synthesis) – This is the post-route trace preference file for the LSE flow.
DDR3 PHY IP File Structure
The DDR3 PHY IP core consists of the following blocks:
• Top-level wrapper (RTL)
• An obfuscated behavioral model of the DDR3 PHY IP core for simulation and an encrypted netlist for synthesis
• Clock Synchronous Module (RTL files for simulation and Verilog flow synthesis and a netlist file for VHDL flow
synthesis)
All of these blocks are required to implement the IP core on the target FPGA. Figure 5-6 depicts the interconnection among the blocks.
Figure 5-6. File Structure of the DDR3 PHY IP Core
Top-level Wrapper
The IP core and the CSM block are instantiated in the top-level wrapper. When a system design is made with the
DDR3 PHY IP core, this wrapper must be instantiated. If needed, the CSM block may be moved out of the wrapper
and instantiated separately. The wrapper is fully configured as per the generated parameter file.
Clock Synchronization Module
The DDR3 PHY IP core uses a clock synchronization module that generates the system clock (sclk) for the core
and the edge clock (eclk) and the high-speed system clock (sclk2x) for the I/O modules. This CSM module operates with a dedicated PLL which works on a reference clock input and generates the SCLK, ECLK and SCLK2x
outputs. For easy regeneration of the PLL for different reference clock frequencies, the PLL module pmi_pll_fp.v is placed outside the CSM module in the ..\ddr_p_eval\models directory. In addition to clock generation, this
CSM block performs a synchronization process, after every reset, to lock a pre-defined phase relationship between
these clocks. This clock synchronization block uses a DQSDLL to extract a PVT-compensated 90 degree delay
count to the I/O block that appropriately shifts the DQS signal during write and read operations.
The sclk clock output from the CSM block which drives the IP core is also made available to the external user logic.
If a system that implements the DDR3 memory controller requires a clock generator external to the IP core, then
the CSM block incorporated inside the IP core’s top-level wrapper can be shifted out of the wrapper. Connections
between the top-level wrapper and the clock generator are fully RTL based, and therefore, it is possible to modify
the structure and connection of the core for the clock distribution to meet system needs.
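A conceptual fragment of that arrangement is sketched below. The module and port names are assumptions, not the generated ones: the CSM is instantiated beside the wrapper, and its clock outputs feed both the wrapper and any user logic that runs on sclk.

```verilog
// Hypothetical sketch: the CSM instantiated outside the wrapper so that
// its sclk output can also clock user logic. Names are assumed.
module system_top (
    input  ref_clk,   // reference clock to the CSM's dedicated PLL
    input  rst_n
    // ... DDR3/DFI ports omitted
);

    wire sclk, eclk, sclk2x;

    ddr3_clks u_csm (          // clock synchronization module (CSM)
        .clk_in (ref_clk),
        .rst_n  (rst_n),
        .sclk   (sclk),        // system clock: IP core and user logic
        .eclk   (eclk),        // edge clock: I/O modules
        .sclk2x (sclk2x)       // high-speed clock: I/O modules
    );

    ddr3_sdram_phy_top_wrapper u_phy (   // wrapper with the CSM moved out
        .sclk   (sclk),
        .eclk   (eclk),
        .sclk2x (sclk2x)
        // ... remaining ports
    );

endmodule
```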
IPUG96_2.1, October 2016
DDR3 PHY IP Core User Guide
IP Core Generation and Evaluation for ECP5 DDR3 PHY
This module is provided as RTL source for all cases of simulation and for Verilog flow synthesis. For VHDL flow
synthesis, this module is available as a netlist.
Simulation Files for IP Core Evaluation
Once a DDR3 PHY IP core is generated, it contains a complete set of test bench files to simulate a few example
core activities for evaluation. The simulation environment for the DDR3 PHY IP core is shown in Figure 5-7. This
structure can be reused by system designers to accelerate their system validation.
Figure 5-7. Simulation Structure for DDR3 PHY IP Core Evaluation
[Figure 5-7 shows the test bench top, parameterized by the core parameter file and the TB configuration parameter: it contains the top-level wrapper (DDR3 PHY and CSM), a command generator and checker, and a memory model configured by the memory model parameter.]
Test Bench Top
The test bench top includes the IP core under test, memory model, stimulus generator and monitor blocks. It is
parameterized by the IP core parameter file.
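Structurally, the test bench top can be pictured with a skeleton like the one below. All names here are placeholders for the generated files, which are parameterized by the IP core parameter file.

```verilog
`timescale 1ns / 1ps

// Structural sketch of the evaluation test bench; names are placeholders.
module tb_top;

    reg clk_in = 1'b0;
    reg rst_n  = 1'b0;

    always #5 clk_in = ~clk_in;    // free-running reference clock
    initial #200 rst_n = 1'b1;     // release reset after 200 ns

    // Device under test: generated top-level wrapper (PHY core + CSM)
    ddr3_sdram_phy_top_wrapper u_dut ();

    // A DDR3 memory simulation model attached to the memory-side pads
    // and a command generator/checker driving the DFI interface would
    // be instantiated and wired here.

endmodule
```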
Obfuscated PHY IP Simulation Model
The obfuscated top-level wrapper simulation model for the core includes all the PHY modules. This obfuscated
simulation model must be included in the simulation.
Command Generator and Checker
The command generator generates stimuli for the IP core. The core initialization and command generation activities are predefined in the provided test case module. It is possible to customize the test case module to see the
desired activities of the IP core.
Test Bench Configuration Parameter
The test bench configuration parameter file provides the parameters for the test bench files. These parameters are derived from the core parameter file and do not need to be configured separately. Users who need a special memory configuration, however, may modify this parameter set to support the desired configuration.
Memory Model
The DDR3 PHY IP core test bench uses a memory simulation model provided by one of the most popular memory
vendors. If a different memory model is required, it can be used by simply replacing the instantiation of the model
from the memory configuration modules located in the same folder.
Memory Model Parameter
This memory parameter file comes with the memory simulation model. It contains the parameters that the memory
simulation model needs. It is not necessary for users to change any of these parameters.
Evaluation Script File
ModelSim and Active-HDL simulation macro script files are included for instant evaluation of the IP core. All
required files for simulation are included in the macro script. This simulation script can be used as a starting point
of a user simulation project.
The evaluation test bench files are provided to show the DFI bus signal timing for typical memory access commands from the memory controller. These evaluation simulation files are made available only for the following
memory settings: CL=5, AL=0, CWL=5. For any other memory setting values, the user is advised to refer to this
user’s guide and develop the corresponding simulation environment.
Note on Shortening Simulation Run Time
The DDR3 PHY IP core implements many timers to comply with JEDEC specifications. Due to these timers, the functional simulation takes a long time at various stages. To reduce the simulation run time, the IP core includes an option for lowering the timer counts, particularly on those timers used for waiting periods. This option can be enabled by adding a define, SIM, in the simulation script. Note that the reduced timer values are valid for simulation only and must not be included in the synthesis script.
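Conceptually, the effect of the SIM define is like the following. This is an illustration only, not the IP's actual source; the parameter name and both values are invented.

```verilog
// Illustration only: how a SIM define typically shortens wait timers.
// TINIT_CYCLES and its values are invented for this example.
module wait_timer_example;

`ifdef SIM
    localparam integer TINIT_CYCLES = 16;      // shortened, simulation only
`else
    localparam integer TINIT_CYCLES = 80_000;  // e.g. 200 us at a 400 MHz sclk
`endif

endmodule
```

With ModelSim, for example, the define can be passed on the compile command line as `vlog +define+SIM`; it must not appear in the synthesis script.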
Hardware Evaluation
The DDR3 PHY IP core supports Lattice’s IP hardware evaluation capability, which makes it possible to create versions of the IP core that operate in hardware for a limited period of time (approximately four hours) without requiring
the purchase of an IP license. It may also be used to evaluate the core in hardware in user-defined designs.
Enabling Hardware Evaluation in Diamond
Choose Project > Active Strategy > Translate Design Settings. The hardware evaluation capability may be
enabled/disabled in the Strategy dialog box. It is enabled by default.
Regenerating/Recreating the IP Core
By regenerating an IP core with the Clarity Designer tool, you can modify any of the options specific to an existing
IP instance. By recreating an IP core with Clarity Designer tool, you can create (and modify if needed) a new IP
instance with an existing LPC/IPX configuration file.
Regenerating an IP Core in Clarity Designer Tool
To regenerate an IP core in Clarity Designer:
1. In the Clarity Designer Builder window, right-click on the existing IP instance and choose Config.
2. In the dialog box, choose the desired options.
For more information about the options, click Help. You may also click the About tab in the Clarity Designer window for links to technical notes and user guides. The IP may come with additional information. As the options
change, the schematic diagram of the module changes to show the I/O and the device resources the module
will need.
3. Click Configure.
Recreating an IP Core in Clarity Designer Tool
To recreate an IP core in Clarity Designer:
1. In the Clarity Designer Catalog window, click the Import IP tab at the bottom.
2. In the Import IP tab, choose the existing IPX/LPC source file of the module or IP to regenerate.
3. Specify the instance name in Target Instance. Note that this instance name should not be the same as any of
the existing IP instances in the current Clarity Designer project.
4. Click Import. The module's dialog box opens showing the option settings.
5. In the dialog box, choose the desired options.
For more information about the options, click Help. You may also click the About tab in the Clarity Designer window for links to technical notes and user guides. The IP may come with additional information. As the options
change, the schematic diagram of the module changes to show the I/O and the device resources the module
will need.
6. Click Configure.
Chapter 6:
Application Support
This chapter provides supporting information on using the DDR3 PHY IP core in complete designs.
Understanding Preferences
The generated preference file contains many preferences, which fall mainly into the following categories:
FREQUENCY Preferences
Each clock domain in the DDR3 PHY IP core is defined by a FREQUENCY preference.
MAXDELAY NET
The MAXDELAY NET preference constrains the maximum delay of the listed net so that it falls within the allowed limit. Since this preference is likely to be over-constrained, the post-route trace preference file should be used to validate the timing results.
MULTICYCLE / BLOCK PATH
The MULTICYCLE preference is applied to a path that is covered by the FREQUENCY constraint, but is allowed to
be relaxed from its FREQUENCY constraint. The FREQUENCY constraint is relaxed in multiples of the clock period.
The BLOCK preference is applied to a path that is not relevant for the timing analysis.
IOBUF
The IOBUF preference assigns the required I/O types and attributes to the DDR3 I/O pads.
LOCATE
Only the em_ddr_dqs pads and the PLL input clock pad are located in the provided preference file per user selection. Note that not all I/O pads can be associated with a DQS (em_ddr_dqs) pad in a bank. Since there is a strict
DQ-to-DQS association rule in each Lattice FPGA device, it is strongly recommended to validate the DQ-to-DQS
associations of the selected pinouts using the implementation software before the PCB routing task is started. The
DQ-to-DQS pad associations for a target FPGA device can be found in the data sheet or pinout table of the target
device.
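For illustration, entries of each category described above might look like the following in a preference (.lpf) file. The net names, cell paths, pad names, frequency, delay values and site are all placeholders, not those of a generated core.

```text
# Placeholders throughout -- names, values and sites are illustrative only.
FREQUENCY NET "sclk_c" 200.000000 MHz ;
MAXDELAY NET "read_pulse_net*" 1.500000 ns ;
MULTICYCLE FROM CELL "*/u_read_fifo*" TO CELL "*/u_sync*" 2 X ;
BLOCK NET "static_config_net" ;
IOBUF PORT "em_ddr_dqs_0" IO_TYPE=SSTL15D_I ;
LOCATE COMP "em_ddr_dqs_0" SITE "E7" ;
```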
For more details on DDR3 pinout guidelines, refer to:
• TN1265, ECP5 High-Speed I/O Interface
• TN1180, LatticeECP3 High-Speed I/O Interface
Handling DDR3 PHY IP Preferences in User Designs
• The generated preference file uses the hierarchical paths for nets and cells. These paths are good for the evaluation environment provided by the IP package. When the DDR3 PHY IP core is integrated into the user design,
all the hierarchical paths in the preference file should be updated to match the user’s integrated environment. In most cases, appending a wildcard designation (such as “*/”) at the beginning of the path name is sufficient.
• The hierarchy structure and name of an internal net used in a preference is subject to change when there are
changes in the design or when a different version of a synthesis tool is used. It is the user’s responsibility to track
these changes and update them in the preference file. The updated net and path names can be found in the map
report file (.mrp) or through Floorplan View and Physical View in Diamond.
• If a preference has an incorrect path or name it is dropped by the Place and Route tool and the dropped preferences are listed in the static timing report (.twr file). It is important to check for such dropped preferences in the
static timing report.
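For example, a preference written for the evaluation hierarchy could be generalized with a leading wildcard after integration. The net name below is hypothetical, for illustration only.

```text
# Hypothetical net name, for illustration only.
# Evaluation environment:
MAXDELAY NET "u_ddr3_phy_top/u_clocks/eclk_c" 2.000000 ns ;
# After integrating the core one level deeper in a user design:
MAXDELAY NET "*/u_clocks/eclk_c" 2.000000 ns ;
```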
Reset Handling
The DDR3 PHY IP core provides two reset input ports at the local side. The dfi_reset_n signal by default resets
both the IP core and the memory device. Usually this dfi_reset_n is expected to include power_on_reset as well as
the system reset and is implemented through the global reset net (GSR) by default. Another reset input signal,
mem_rst_n, is available to reset only the memory device, not the IP core. In addition to routing this reset to the
memory, the IP core ensures that the memory reset signal em_ddr_reset_n is asserted for at least 100 ns, as required by the JEDEC specification, even if the input reset signal mem_rst_n is asserted for less than 100 ns.
The minimum required reset assertion time for mem_rst_n is one system clock.
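The stretching behavior can be pictured with a sketch like the one below. This is conceptual only; the IP core implements this internally, and the module name, signal names and counter width are invented. With a 400 MHz sclk, 40 cycles cover the 100 ns minimum.

```verilog
// Conceptual sketch, not the IP's source: guarantee that the memory
// reset output stays low for at least MIN_CYCLES sclk cycles even when
// mem_rst_n is asserted for as little as one sclk cycle.
module mem_reset_stretch #(
    parameter integer MIN_CYCLES = 40   // 100 ns at a 400 MHz sclk
) (
    input      sclk,
    input      mem_rst_n,        // may be asserted for < 100 ns
    output reg em_ddr_reset_n    // held low for >= MIN_CYCLES
);

    reg [7:0] cnt = 8'd0;

    always @(posedge sclk) begin
        if (!mem_rst_n) begin
            cnt            <= MIN_CYCLES;   // (re)load on assertion
            em_ddr_reset_n <= 1'b0;
        end else if (cnt != 8'd0) begin
            cnt            <= cnt - 8'd1;   // keep reset low while counting
            em_ddr_reset_n <= 1'b0;
        end else begin
            em_ddr_reset_n <= 1'b1;         // release after the minimum time
        end
    end

endmodule
```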
The DDR3 PHY IP core, through the GUI option “Controller Reset to Memory”, allows the user to prevent both the dfi_reset_n and mem_rst_n inputs from resetting the memory (see Controller Reset to Memory for further information). When this option is disabled (unchecked), the memory side output port em_ddr_reset_n is removed from the
IP core’s output ports. In this disabled mode, dfi_reset_n resets only the IP core. It is the user's responsibility to
implement a memory reset logic outside the IP core and also to add a port for the memory reset. In addition, the
user memory reset signal generated outside the IP core must be fed to the mem_rst_n input of the IP core to let the
core know the memory reset assertion. This will enable the IP core to set the memory interface signals at the
appropriate state as specified for the reset time.
There may be many applications which need to handle the memory reset outside the IP core. For example, disabling the memory reset from the core can be useful when multiple host controllers need to be connected to and
share a DDR3 memory.
Dummy Logic in IP Core Evaluation
When a DDR3 PHY IP core is generated, Clarity Designer assigns all the signals from both the DDR3 and DFI
interfaces to the I/O pads. The number of the DDR3 PHY IP core’s DFI signals for read and write data buses
together is normally close to eight times that of the DDR3 memory interface. The IP core cannot be
generated if the selected device does not have enough I/O pad resources. To facilitate the core evaluation with
smaller package devices, Clarity Designer inserts dummy logic to decrease the I/O pad count by reducing the DFI
side read_data and write_data bus sizes. With the dummy logic, a core can be successfully generated and evaluated even with smaller pad counts. The PAR process can be completed without a resource violation so that one
can evaluate the performance and utilization of the IP core. However, the synthesized netlist will not function correctly because of the inserted dummy logic. The core with dummy logic, therefore, must be used only for evaluation.
Top-level Wrapper File Only for Evaluation Implementation
For evaluation implementation using the Verilog core, a separate top-level wrapper file,
ddr3_sdram_phy_top_wrapper.v is provided in the directory ..\ddr_p_eval\\impl. This wrapper file may
have a reduced number of local side data buses for the reason mentioned in the previous paragraph. The evaluation par project file _eval.syn in the directory ..\ddr_p_eval\\impl\synplicity (or
..\ddr_p_eval\\impl\precision) points to this wrapper file for running the evaluation implementation.
For VHDL flow, the top-level wrapper file ..\ddr_p_eval\\impl\ddr3_sdram_phy_top_wrapper.vhd is provided for evaluation implementation.
Note that this top-level wrapper file is not used for evaluation simulation.
Top-level Wrapper file for All Simulation Cases and Implementation in a User Design
In real applications, since back-end user logic design is attached to the core, most of the user side interface signals
are connected within the FPGA fabric and will not be connected to the pads of the FPGA fabric. There is a main top
level wrapper file, ddr3_sdram_phy_top_wrapper.v (.vhd), in the directory
..\ddr_p_eval\\src\rtl\top\. This wrapper is generated with full local side data bus and is
meant for simulation as well as for the final integration with user's logic for synthesis. The user's par project file
should point to this top-level wrapper file while implementing the IP core in the user’s application.
RDIMM Module Support
The DDR3 PHY IP core is designed to work with the default settings of the RDIMM module’s SSTE32882 registering clock driver. The IP core does not support programming the clock driver’s control word registers.
A Note on Chip Select Signal Handling when a Single Rank RDIMM Module is Used: In order to set the
RDIMM’s clock driver in normal mode, the PHY IP provides two bits for the chip select signal em_ddr_cs_n and
always drives em_ddr_cs_n[1] high. The user is advised to connect both chip select bits to the corresponding chip
select input pins of the RDIMM module. Leaving the chip select bit 1 input of the RDIMM module open will lead to
incorrect operation of the RDIMM module.
Netlist Simulation
The IP GUI automatically generates the netlist simulation scripts, ddr3phy_gatesim_synplify.do and/or
ddr3phy_gatesim_precision.do file in the ..\ddr_p_eval\\impl\sim\ directory only when
there are enough pads in the selected target device to connect all the user-side signals of DDR3 PHY IP core.
The generated simulation scripts are to perform the netlist simulation of the standalone DDR3 PHY IP in the core
evaluation mode. Note that the generated scripts do not include the SDF back-annotation because of the large
routing delays of the core’s local signals to the I/O pads.
When there are not enough I/O pads available in the selected target device to connect all the user side signals of
DDR3 PHY IP, the IP GUI will not generate a netlist simulation script because the evaluation test bench cannot
access all ports that are required to verify the functions of the implemented core.
The back-annotated netlist simulation of the DDR3 PHY IP with the SDF file (timing simulation), therefore, works successfully only when a complete user design is attached to the IP core; the user design properly terminates the core’s local signals, giving the user test bench environment full functional access.
Chapter 7:
Core Verification
The functionality of the DDR3 PHY IP core has been verified via simulation and hardware testing in a variety of
environments, including:
• Simulation environment verifying proper DDR3 functionality when testing with the industry standard Denali
MMAV (Memory Modeler - Advanced Verification) verification IP
• Hardware validation of the IP implemented on Lattice FPGA evaluation boards. Specific testing has included:
— Verifying proper DDR3 protocol functionality using Lattice DDR3 Memory Controller
— Verifying DDR3 electrical compliance using Lattice DDR3 Memory Controller.
• In-house interoperability testing with multiple DIMM modules
Chapter 8:
Support Resources
This chapter contains information about Lattice Technical Support, additional references, and document revision
history.
Lattice Technical Support
Submit a technical support case via www.latticesemi.com/techsupport.
References
• TN1265, ECP5 High-Speed I/O Interface
• TN1180, LatticeECP3 High-Speed I/O Interface
Revision History

Date           Document Version   IP Core Version   Change Summary
October 2016   2.1                2.1               Updated Write Leveling description in the Initialization Module section.
                                                    Updated Type Tab section. Added reference to the Initialization Module section in the description of write leveling.
                                                    Added support for ECP5-5G (LFE5UM5G) devices.
                                                    Updated Table 5-1, File List. Revised Model Files mem_data_width.
                                                    Updated Lattice Technical Support section.
October 2014   2.0                2.0               Added support for ECP5.
March 2012     01.1               1.1               Updated document with new corporate logo.
                                                    Added support for Dual Rank memory.
                                                    Added restricted netlist simulation capability.
                                                    Updated Appendix B with reference to SSN guidelines for DQS pin placement.
                                                    Added support for LatticeECP3-17EA-328 device.
                                                    Added support for LatticeECP3 device speed grades: -6L, -7L, -8L and -9.
                                                    Updated GUI screen shots.
                                                    Added 2T support.
                                                    Removed references to ispLEVER design software.
May 2011       01.0               1.0               Initial release.
Appendix A:
Resource Utilization
This appendix gives resource utilization information for Lattice FPGAs using the DDR3 PHY IP core. The IP configurations shown in this appendix were generated using the IPexpress software tool. IPexpress is the Lattice IP
configuration utility, and is included as a standard feature of the Diamond design tools. Details regarding the usage
of IPexpress can be found in the IPexpress and Diamond help systems. For more information on the Diamond
design tools, visit the Lattice web site at: www.latticesemi.com/software.
ECP5 Devices
Table A-1. Performance and Resource Utilization¹

Parameters                 Slices   LUTs   Registers   I/O²   fMAX³
Data Bus Width: 8 (x8)     688      942    736         42     400 MHz (800 Mbps)
Data Bus Width: 16 (x8)    809      1066   969         53     400 MHz (800 Mbps)
Data Bus Width: 24 (x8)    838      1039   1003        64     400 MHz (800 Mbps)
Data Bus Width: 32 (x8)    970      1140   1181        75     400 MHz (800 Mbps)
Data Bus Width: 40 (x8)    1094     1262   1355        86     400 MHz (800 Mbps)
Data Bus Width: 48 (x8)    1212     1358   1509        97     400 MHz (800 Mbps)
Data Bus Width: 56 (x8)    1284     1375   1687        108    400 MHz (800 Mbps)
Data Bus Width: 64 (x8)    1383     1434   1851        119    400 MHz (800 Mbps)
Data Bus Width: 72 (x8)    1518     1550   2021        130    333 MHz (666 Mbps)

1. Performance and utilization data are generated targeting an LFE5U/LFE5UM-85F-8BG756C device using Lattice Diamond 3.3 design software with an LFE5U/LFE5UM control pack. Performance may vary when using a different software version or targeting a different device density or speed grade within the ECP5 family.
2. Numbers shown in the I/O column represent the number of primary I/Os at the DDR3 memory interface. User interface (local side) I/Os are not included.
3. The DDR3 IP core can operate at 400 MHz (800 Mbps DDR3) in the fastest speed grade (-8) when the data width is 64 bits or less and one chip select is used.
Ordering Part Number
The Ordering Part Number (OPN) for the DDR3 PHY IP on ECP5 devices is DDR3-PHYP-E5-U or DDR3-PHYP-E5-UT.
LatticeECP3 FPGAs
Table A-2. Performance and Resource Utilization¹, ²

Parameter                  Slices   LUTs   Registers   I/O    fMAX³
Data Bus Width: 8 (x8)     702      929    822         42     400 MHz (800 Mbps)
Data Bus Width: 16 (x8)    871      1056   1127        53     400 MHz (800 Mbps)
Data Bus Width: 24 (x8)    1049     1181   1429        64     400 MHz (800 Mbps)
Data Bus Width: 32 (x8)    1223     1322   1739        75     400 MHz (800 Mbps)
Data Bus Width: 40 (x8)    1148     1322   1560        86     400 MHz (800 Mbps)
Data Bus Width: 48 (x8)    1263     1431   1742        97     400 MHz (800 Mbps)
Data Bus Width: 56 (x8)    1368     1517   1916        108    400 MHz (800 Mbps)
Data Bus Width: 64 (x8)    1472     1623   2091        119    400 MHz (800 Mbps)
Data Bus Width: 72 (x8)    1615     1728   2303        130    333 MHz (666 Mbps)

1. Performance and utilization data are generated targeting an LFE3-150EA-8FN1156C device using Lattice Diamond 1.4 software. Performance may vary when using a different software version or targeting a different device density or speed grade within the LatticeECP3 family.
2. LatticeECP3 “EA” silicon support only.
3. The DDR3 IP core can operate at 400 MHz (800 Mbps DDR3) in the fastest speed grades (-8, -8L or -9) when the data width is 64 bits or less and one chip select is used.
Ordering Information
The Ordering Part Number (OPN) for the DDR3 PHY IP on LatticeECP3-EA devices is DDR3-PHYP-E3-U.
Appendix B:
Lattice Devices Versus
DDR3 PHY IP Matrix
The maximum DDR3 bus data width supported in a device depends on the number of DQS groups available in that
device. The available number of DQS groups in the left or right side varies with each LatticeECP3 or ECP5 device
density and package.
While all the DQS groups fully support the DDR3 electrical and protocol specifications, users are advised to consider the Simultaneous Switching Noise (SSN) guidelines for the proper placement of the DQS pins.
These guidelines are driven by the following factors:
• Properly terminated interface
• SSN optimized PCB layout
• SSN considered I/O pad assignment
• Use of pseudo power pads
Technical notes TN1180, LatticeECP3 High-Speed I/O Interface and TN1265, ECP5 High-Speed I/O Interface, provide detailed information on the SSN-considered I/O pad assignment and the use of pseudo power pads. These
technical notes include a Recommended DQS Group Allocation table for each LatticeECP3 and ECP5 device and
package. These tables can be used as a baseline. You are advised to derive the best DQS placement for higher or
lower data widths depending on the level of adherence to all the factors of the Simultaneous Switching Noise (SSN)
guidelines.
Appendix C:
LatticeECP3 DDR3 PHY IP
Locate Constraints
The DDR3 PHY IP core has a few critical macro-like blocks that require specific placement locations. This is
achieved by adding a number of “LOCATE” constraints in the preference file for these blocks. There are two groups
of locate constraints applied in the preference file.
• One group consists of a list of locate constraints for the read_pulse_delay logic. Each of these locate constraints
corresponds to a particular DQS pin.
• One group consists of a list of locate constraints for the clock synchronization logic. Each clk_in pin has one
group of these locate preferences.
As per the DQS pins and clk_in pin selected through the Pin Selection tab of the IPexpress GUI, the IP generation
process automatically adds the corresponding locate constraints into the preference file (refer to the Pin Selection
Tab section).
If the user decides to change any of the DQS pins or the clk_in pin, the IP core will need to be regenerated after
selecting the new pins in the GUI. The new preference file will contain the new locate preferences.
The DQS and/or clk_in location change will require only a new set of locate preferences with no change in IP core
functionality. Alternatively, the user may regenerate the IP core in a different project directory and copy only these
locate preferences from the new preference file into the preference file in the current working directory.
As mentioned previously, for the selected clock input pin, the IP core generation process automatically adds the
corresponding locate constraints into the preference file. This clock input pin is the dedicated PLL clock input pin for
a particular PLL. In the Pin Selection tab of the DDR3 PHY IP core GUI, only one clock input pin is shown for each of the left and right sides of the selected device. The user has the option to select an alternative clock input pin per side
which is not shown in the GUI. This second clock input pin is a dedicated clock input of another PLL in the same
side.
To use this additional clock input pin, the user must manually edit the generated preference file by replacing the
locations of a few locate constraints. The following tables show the locations for each of those available second
clock input pins. Note that there are no additional clock input pins available in LatticeECP3-17 devices.
Table C-1. Left Side Second Clock Input Locations (LatticeECP3-150 and LatticeECP3-95)

              LatticeECP3-150            LatticeECP3-95
Comp.         FPBGA1156    FPBGA672      FPBGA1156    FPBGA672     FPBGA484
CLKI          Y9           U4            Y9           U4           T3
PLL           R79C5        R79C5         R61C5        R61C5        R61C5
sync          LECLKSYNC1   LECLKSYNC1    LECLKSYNC1   LECLKSYNC1   LECLKSYNC1
clk_phase0    R78C5D       R78C5D        R60C5D       R60C5D       R60C5D
clk_phase1a   R60C2D       R60C2D        R42C2D       R42C2D       R42C2D
clk_phase1b   R60C2D       R60C2D        R42C2D       R42C2D       R42C2D
clk_stop      R60C2D       R60C2D        R42C2D       R42C2D       R42C2D
Table C-2. Left Side Second Clock Input Locations (LatticeECP3-70 and LatticeECP3-35)

              LatticeECP3-70                           LatticeECP3-35
Comp.         FPBGA1156    FPBGA672     FPBGA484      FPBGA672     FPBGA484     FTBGA256
CLKI          Y9           U4           T3            U4           T3           P2
PLL           R61C5        R61C5        R61C5         R53C5        R53C5        R53C5
sync          LECLKSYNC1   LECLKSYNC1   LECLKSYNC1    LECLKSYNC1   LECLKSYNC1   LECLKSYNC1
clk_phase0    R60C5D       R60C5D       R60C5D        R52C5D       R52C5D       R52C5D
clk_phase1a   R42C2D       R42C2D       R42C2D        R34C2D       R34C2D       R34C2D
clk_phase1b   R42C2D       R42C2D       R42C2D        R34C2D       R34C2D       R34C2D
clk_stop      R42C2D       R42C2D       R42C2D        R34C2D       R34C2D       R34C2D
Table C-3. Right Side Second Clock Input Locations (LatticeECP3-150 and LatticeECP3-95)

              LatticeECP3-150            LatticeECP3-95
Comp.         FPBGA1156    FPBGA672      FPBGA1156    FPBGA672     FPBGA484
CLKI          Y28          V20           Y28          V20          R17
PLL           R79C178      R79C178       R61C142      R61C142      R61C142
sync          RECLKSYNC1   RECLKSYNC1    RECLKSYNC1   RECLKSYNC1   RECLKSYNC1
clk_phase0    R78C178D     R78C178D      R60C142D     R60C142D     R60C142D
clk_phase1a   R60C181D     R60C181D      R42C145D     R42C145D     R42C145D
clk_phase1b   R60C181D     R60C181D      R42C145D     R42C145D     R42C145D
clk_stop      R60C181D     R60C181D      R42C145D     R42C145D     R42C145D
Table C-4. Right Side Second Clock Input Locations (LatticeECP3-70 and LatticeECP3-35)

              LatticeECP3-70                           LatticeECP3-35
Comp.         FPBGA1156    FPBGA672     FPBGA484      FPBGA672     FPBGA484     FTBGA256
CLKI          Y28          V20          R17           V20          R17          T15
PLL           R61C142      R61C142      R61C142       R53C70       R53C70       R53C70
sync          RECLKSYNC1   RECLKSYNC1   RECLKSYNC1    RECLKSYNC1   RECLKSYNC1   RECLKSYNC1
clk_phase0    R60C142D     R60C142D     R60C142D      R52C70D      R52C70D      R52C70D
clk_phase1a   R42C145D     R42C145D     R42C145D      R34C73D      R34C73D      R34C73D
clk_phase1b   R42C145D     R42C145D     R42C145D      R34C73D      R34C73D      R34C73D
clk_stop      R42C145D     R42C145D     R42C145D      R34C73D      R34C73D      R34C73D