Deadlocks
The specification defines two possible deadlock conditions that can occur because the ISA and LPC (Low Pin-Count) buses do not support transaction retry. If an ISA (LPC) Master initiates a transaction that requires a response, the bus cannot handle a new request until the current transaction has completed. This type of protocol is extremely simple from an ordering perspective: because every transaction must complete before the next one begins, no ordering rules are required. The downside, of course, is that all other devices are stalled while they wait for the current transaction to complete. Delayed transactions (supported by the PCI bus) and split transactions (supported by PCI-X and HyperTransport) allow new transactions to be handled while a response to a previous transaction is pending. The price is complex ordering rules to ensure that transactions complete in the intended order.
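As a rough illustration (a hypothetical model, not anything defined by the specification), the contrast can be sketched as follows: a single-outstanding-transaction bus needs no ordering logic at all, while a split-transaction bus must consult an ordering rule before letting any in-flight transaction complete. The class and transaction names here are illustrative.

```python
class SingleOutstandingBus:
    """ISA/LPC-style: one transaction at a time, so ordering is trivial."""
    def __init__(self):
        self.outstanding = None

    def request(self, txn):
        if self.outstanding is not None:
            # Every other device simply stalls here until complete() is called.
            raise RuntimeError("bus busy: current transaction not yet complete")
        self.outstanding = txn

    def complete(self):
        txn, self.outstanding = self.outstanding, None
        return txn


class SplitTransactionBus:
    """PCI-X/HyperTransport-style: many transactions in flight; an ordering
    rule decides whether a younger transaction may complete ahead of an
    older one still pending."""
    def __init__(self, may_pass):
        self.in_flight = []        # oldest first
        self.may_pass = may_pass   # predicate: may_pass(younger, older)

    def request(self, txn):
        self.in_flight.append(txn)  # accepted even while others are pending

    def complete(self, txn):
        older = self.in_flight[: self.in_flight.index(txn)]
        if not all(self.may_pass(txn, o) for o in older):
            raise RuntimeError(f"{txn} held back by an ordering rule")
        self.in_flight.remove(txn)
        return txn


# Example rule from the Producer/Consumer model: nothing passes a posted write.
bus = SplitTransactionBus(may_pass=lambda younger, older: older != "posted_write")
bus.request("posted_write")
bus.request("read_response")
try:
    bus.complete("read_response")
except RuntimeError as e:
    print(e)   # read_response held back by an ordering rule
```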
Deadlock Scenario 1
Consider the following sequence of events as they relate to the limitations of the ISA/LPC bus as discussed above and to the PCI-based Producer/Consumer transaction ordering model.
- An ISA/LPC Master initiates a transaction that requires a response from the Host-to-HT Bridge (e.g., a memory read from main memory).
- The CPU initiates a write operation targeting a device on the ISA/LPC bus, and the Host Bridge issues this write as a posted operation.
- The posted write reaches the HT-to-PCI bridge, where it is sent across the PCI bus to the south bridge.
- The south bridge cannot accept the write targeting the ISA bus because the ISA/LPC bus is waiting for the outstanding response. So, the south bridge issues a retry.
- The read response reaches the HT/PCI bridge. However, the Producer/Consumer model requires that all previously-posted writes headed to the PCI bus complete before a read response can be forwarded. The read response is now stuck behind a posted write that, in turn, cannot complete until the read response is delivered. Result: Deadlock!
The recommended solution to this problem is to require that all requests targeting the ISA/LPC bus be non-posted operations. This eliminates the problem because non-posted operations, unlike posted writes, can be forwarded to the PCI bus in any order relative to the read response.
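To make Scenario 1's circular dependency concrete, here is a minimal sketch (a hypothetical model, not from the specification) that expresses it as a wait-for graph; a deadlock exists exactly when the graph contains a cycle, and issuing the write as non-posted removes the edge that closes the cycle. The `has_cycle` helper and edge names are illustrative.

```python
def has_cycle(edges):
    """Detect a cycle in a directed wait-for graph via depth-first search."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for e in edges for n in e}

    def visit(n):
        color[n] = GRAY
        for m in graph.get(n, ()):
            if color[m] == GRAY or (color[m] == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(color))


# Each edge reads "X cannot complete until Y completes".
posted_case = [
    ("posted_write", "isa_read_resp"),   # south bridge retries the write until
                                         # the ISA bus gets its read response
    ("isa_read_resp", "posted_write"),   # ordering rule: the response may not
                                         # pass the earlier posted write
]
nonposted_case = [
    ("nonposted_write", "isa_read_resp"),  # the retry still happens, but there
    # is no edge back: the response need not wait for a non-posted request.
]

print(has_cycle(posted_case))     # True  -> deadlock
print(has_cycle(nonposted_case))  # False -> forward progress
```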
Deadlock Scenario 2
Once again, because the ISA or LPC bus is unable to accept any requests while it waits for a response to its own request, a deadlock can occur. This deadlock arises when the downstream non-posted request channel fills up while awaiting a response to an ISA DMA request. The sequence of events is as follows:
- A DMA request is issued by an ISA/LPC device to main memory.
- Downstream requests targeting the ISA bus are initiated but stack up because the south bridge is not accepting them; it is waiting on a response to the previously issued DMA request. Consequently, it is possible for the downstream non-posted request channel to fill.
- A peer-to-peer operation is initiated to a device on the same chain; this request sits in the upstream non-posted request queue ahead of the ISA/LPC request (from step 1). The peer-to-peer transaction is sent to the Host, which attempts to reflect it downstream to the target device. However, because the downstream request channel is full, the upstream non-posted peer request stalls, as does the request from the ISA bus behind it. This prevents the ISA/LPC bridge from making forward progress, as the sketch following this list illustrates.
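Here is a minimal sketch (a hypothetical model; names and queue depths are illustrative) of Scenario 2's head-of-line blocking: the upstream non-posted channel is a strict FIFO, so once the peer request at its head stalls against the full downstream channel, the ISA DMA request behind it can never reach the host, and the response the south bridge is waiting on is never produced.

```python
from collections import deque

DOWNSTREAM_DEPTH = 4
downstream = deque(["cpu_req"] * DOWNSTREAM_DEPTH)  # non-posted channel, already full
upstream = deque(["peer_req", "isa_dma_req"])       # FIFO: peer request at the head


def host_step():
    """Host services the upstream channel strictly in order."""
    head = upstream[0]
    if head == "peer_req":
        if len(downstream) >= DOWNSTREAM_DEPTH:
            return "stall"                  # nowhere to reflect the peer request
        downstream.append(upstream.popleft())
        return "reflected peer request"
    if head == "isa_dma_req":
        upstream.popleft()
        return "memory read done; response frees the south bridge"


# The downstream channel never drains (the south bridge will not accept
# anything until it sees the DMA response), so every step stalls: deadlock.
for _ in range(3):
    print(host_step())                      # stall, stall, stall
```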
The solution to this deadlock is for the host to limit the number of requests it makes to the ISA/LPC bus to a known number (typically one) that the bridge can always accept. Because the host cannot limit peer requests without eventually blocking the upstream non-posted channel (and causing another deadlock), no peer requests to the ISA/LPC bus are allowed. Peer requests to devices below the ISA/LPC bridge on the chain (including other devices in the same node as the ISA/LPC bridge) cannot be performed without deadlock unless the ISA/LPC bridge sinks the above-mentioned known number of requests without blocking requests forwarded down the chain. This can be implemented with a buffer (or set of buffers) reserved for requests targeting the bridge, separate from the buffering for other requests.
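A minimal sketch of that reserved-buffer idea follows (hypothetical; the class, names, and depths are illustrative, not from the specification). Requests targeting the bridge itself are sunk into a dedicated slot sized to the agreed limit, so accepting them never depends on the shared forwarding buffers draining.

```python
from collections import deque, namedtuple

Request = namedtuple("Request", ["target", "payload"])


class IsaLpcBridge:
    """Bridge with a dedicated buffer for requests that target the ISA/LPC
    bus, separate from the buffering used to forward requests to other
    devices farther down the chain."""
    MAX_TARGETED = 1   # the "known number" the host agrees never to exceed

    def __init__(self, forward_depth=4):
        self.targeted = deque()   # reserved slot(s) for bridge-targeted requests
        self.forward = deque()    # shared buffers for chain-forwarded requests
        self.forward_depth = forward_depth

    def accept(self, req):
        if req.target == "isa_lpc":
            # Sinking this request never depends on the forwarding buffers
            # draining, so chain traffic is never blocked behind it.
            if len(self.targeted) >= self.MAX_TARGETED:
                raise RuntimeError("host exceeded its agreed request limit")
            self.targeted.append(req)
        else:
            if len(self.forward) >= self.forward_depth:
                raise RuntimeError("forwarding buffers full (normal flow control)")
            self.forward.append(req)


bridge = IsaLpcBridge()
bridge.accept(Request("isa_lpc", "io_read"))       # sunk into the reserved buffer
bridge.accept(Request("downstream_dev", "write"))  # forwarded down the chain
```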