So what you are saying is that since PCI-E 1.0 won't slow down a PCI-E 3.0 card, running it in a PCI-E 2.0 system won't slow it down either.
He tested using:
Code:
PCIe 1.1 x16 (2.5 GT/s per lane, ~250 MB/s)  = ~4 GB/s      Crysis FPS: 59.15
PCIe 3.0 x1  (8 GT/s per lane, ~985 MB/s)    = ~0.985 GB/s              55.98
PCIe 3.0 x16 (8 GT/s per lane, ~985 MB/s)    = ~15.75 GB/s              60.67
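For anyone wondering where those GB/s figures come from, here's a rough sketch of the maths (my own illustration, not from the tester's post): GT/s times the encoding efficiency, divided by 8 bits per byte, times the lane count. PCIe 1.x/2.0 use 8b/10b encoding (80% efficient), PCIe 3.0 uses 128b/130b (~98.5%).

```python
def pcie_bandwidth_gb_s(gt_per_s, lanes, encoding_efficiency):
    """Per-direction link bandwidth in GB/s.

    GT/s * encoding efficiency gives usable Gb/s per lane;
    divide by 8 for GB/s, multiply by lane count.
    """
    return gt_per_s * encoding_efficiency / 8 * lanes

print(pcie_bandwidth_gb_s(2.5, 16, 8 / 10))     # PCIe 1.1 x16 -> 4.0 GB/s
print(pcie_bandwidth_gb_s(8.0, 1, 128 / 130))   # PCIe 3.0 x1  -> ~0.985 GB/s
print(pcie_bandwidth_gb_s(8.0, 16, 128 / 130))  # PCIe 3.0 x16 -> ~15.75 GB/s
```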
Unigine follows the same trend, though not to the same extent.
You can clearly see that running the card in a PCIe 3.0 x1 slot (a scenario that can actually happen with CrossFireX or SLI, where multiple cards share lanes) gives you 92.3% of the performance instead of the 100% baseline you would expect.
Likewise, PCIe 1.1 x16 gets you 97.5% of the performance, which is close to margin of error, but moving to a full-speed slot is still effectively a free overclock on any given card.
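Those percentages are just the measured FPS divided by the PCIe 3.0 x16 result; a quick check (my own illustration, using the FPS numbers quoted above):

```python
# Crysis FPS results quoted from the test, normalised to the PCIe 3.0 x16 run.
fps = {
    "PCIe 1.1 x16": 59.15,
    "PCIe 3.0 x1": 55.98,
    "PCIe 3.0 x16": 60.67,
}
baseline = fps["PCIe 3.0 x16"]
for slot, value in fps.items():
    print(f"{slot}: {value / baseline:.1%}")
# PCIe 3.0 x1 works out to ~92.3% and PCIe 1.1 x16 to ~97.5% of the baseline.
```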
Assuming these results are accurate, this difference would almost be like downgrading the 7970 to a 7950 in many situations.
I don't think you realise: a lot of people go for the new i7 CPUs, slap in 32GB or 64GB of RAM, and can justify the extra cost for the extra performance, yet most of those same people brush off the importance of losing 7% of their FPS just because their card is in the wrong slot or running at the wrong speed.
Based on these 3 results it is hard to say exactly where bandwidth stops playing a role, and he wasn't testing at surround resolutions either, which makes it even harder to judge.
I for one am not going to post here anymore. You have made it perfectly clear you are not going to run the benchmarks that would show the loss or gain in performance, and I don't have the hardware to run them myself, so I guess we will never know.