Manufacturers are tuning their chips to post the best possible scores on benchmarks while ignoring real-world scenarios, a new report alleges.

[Image: GLBenchmark results]

Samsung has tweaked the Exynos 5 Octa to outperform the competition on benchmarks by running a boosted frequency in certain benchmark applications such as AnTuTu, Linpack, Benchmark Pi and GLBenchmark 2.5.1, according to a report first published on AnandTech. The PowerVR SGX544MP3 GPU in the Samsung Galaxy S4 ran at 533 MHz instead of its nominal 480 MHz during those benchmarks.

The same behavior was not observed under GLBenchmark 2.7.0 or Epic Citadel. The screenshot below shows the GPU holding 480 MHz during a run of Epic Citadel.

[Screenshot: GPU frequency during Epic Citadel]


The screenshot below shows the GPU at 532 MHz during a run of AnTuTu.

[Screenshot: GPU frequency during AnTuTu]

Although the underlying architecture of GLBenchmark 2.7.0 is the same as that of GLBenchmark 2.5.1 (confirmed by the benchmark's maker), the two did not produce consistent results. The change in frequency is therefore not workload-dependent but rather an intentional boost designed to produce desirable results in specific benchmarks.

The GPU is only half of the story, as the CPU exhibits the same phenomenon. During a run of GLBenchmark 2.5.1, the high-performance mode was triggered and all four CPU cores ran at 1.2 GHz, even on the menu screen. Meanwhile, the high-performance mode was never activated during a run of GLBenchmark 2.7.0.

[Screenshot: CPU frequencies during benchmark runs]

Digging deeper into the Dynamic Voltage and Frequency Scaling (DVFS) source code revealed that hard-coded profiles are used for AnTuTu, Linpack, Benchmark Pi and GLBenchmark 2.5.1 to inflate performance numbers.
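
The report does not reproduce the code itself, but the described behavior amounts to an application whitelist gating a higher GPU clock. The sketch below, in plain C, illustrates that idea only: the function names and package names are hypothetical, and while the 480/533 MHz figures come from the report, this is not Samsung's actual implementation.

```c
/*
 * Minimal sketch of an app-detection boost profile.
 * NOT Samsung's actual DVFS code: names and lookup mechanism are
 * assumptions made purely for illustration.
 */
#include <stdio.h>
#include <string.h>

#define GPU_FREQ_NOMINAL_KHZ 480000  /* normal peak clock reported for the SGX544MP3 */
#define GPU_FREQ_BOOST_KHZ   533000  /* clock observed only in whitelisted benchmarks */

/* Hypothetical whitelist of package names that trigger the boost profile. */
static const char *boost_whitelist[] = {
    "com.antutu.ABenchMark",          /* AnTuTu            (assumed package name) */
    "com.greenecomputing.linpack",    /* Linpack           (assumed package name) */
    "gr.androiddev.BenchmarkPi",      /* Benchmark Pi      (assumed package name) */
    "com.glbenchmark.glbenchmark25",  /* GLBenchmark 2.5.1 (assumed package name) */
};

/* Pick the GPU frequency cap based on which app is in the foreground,
 * rather than on how heavy the workload actually is. */
static unsigned int gpu_max_freq_khz(const char *foreground_pkg)
{
    size_t i;

    for (i = 0; i < sizeof(boost_whitelist) / sizeof(boost_whitelist[0]); i++) {
        if (strcmp(foreground_pkg, boost_whitelist[i]) == 0)
            return GPU_FREQ_BOOST_KHZ;   /* benchmark detected: unlock 533 MHz */
    }
    return GPU_FREQ_NOMINAL_KHZ;         /* everything else stays at 480 MHz */
}

int main(void)
{
    printf("Benchmark cap: %u kHz\n", gpu_max_freq_khz("com.antutu.ABenchMark"));
    printf("Game cap:      %u kHz\n", gpu_max_freq_khz("com.example.game"));
    return 0;
}
```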

Benchmark optimization is nothing new; Nvidia and AMD have been doing the same in the PC sector for a long time. Using the same DVFS principle, both vendors limit power consumption under power viruses (OCCT and FurMark) while allowing boosted clocks in game benchmarks to increase performance. Now, mobile GPU makers are following in the footsteps of Nvidia and AMD and have started optimizing their GPUs specifically to achieve good numbers in benchmarks.
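
For contrast, here is a minimal sketch of workload-driven DVFS under an assumed power budget: the clock follows measured power draw rather than the name of the running application. The frequencies and power figures are illustrative only, not any vendor's actual boost algorithm.

```c
/*
 * Sketch of workload-driven DVFS with a simple power budget.
 * Illustrative values only; no vendor's real algorithm is reproduced here.
 */
#include <stdio.h>

#define FREQ_BASE_MHZ   480   /* guaranteed base clock (illustrative) */
#define FREQ_BOOST_MHZ  533   /* opportunistic boost clock (illustrative) */
#define POWER_BUDGET_W  4.0   /* assumed sustainable power limit */

/* Choose the next clock from the measured power draw at the current clock. */
static int next_gpu_freq_mhz(int current_mhz, double measured_power_w)
{
    if (measured_power_w > POWER_BUDGET_W)
        return FREQ_BASE_MHZ;        /* power-virus territory: throttle to stay in budget */
    if (measured_power_w < 0.8 * POWER_BUDGET_W && current_mhz < FREQ_BOOST_MHZ)
        return FREQ_BOOST_MHZ;       /* headroom available: boost for real performance */
    return current_mhz;              /* otherwise hold the current clock */
}

int main(void)
{
    /* A FurMark-like load blows past the budget; a typical game does not. */
    printf("Power-virus load: %d MHz\n", next_gpu_freq_mhz(FREQ_BOOST_MHZ, 5.2));
    printf("Game load:        %d MHz\n", next_gpu_freq_mhz(FREQ_BASE_MHZ, 2.9));
    return 0;
}
```

The difference matters: the app-detection approach only helps when a known benchmark is recognized, while a power-driven approach benefits any sufficiently light workload, games included.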

This is, in a way, a positive sign, as it shows that the general public actually cares about performance numbers. One takeaway from this trend is that mobile consumers are getting wiser and more educated when it comes to comparing GPU performance.

However, higher benchmark numbers do not always translate into higher performance in games. In the Exynos 5 Octa's case, although the GPU runs at 533 MHz in GLBenchmark 2.5.1, it peaks at 480 MHz in day-to-day usage, roughly 10 percent lower. End users who judge by the benchmark alone are thus left with the misconception that the GPU is faster than it really is.

As a result, optimization should not be done merely on a per-benchmark basis; that is, it should not follow the "detect a benchmark, then boost the clock" approach used here. Optimization should instead start with an understanding of the underlying low-level behavior of a particular workload, and then tune the GPU to achieve the best performance per watt.

Hard-coded profiles that inflate benchmark numbers may be useful for showcasing a chip's performance capability, but they are never useful for end users. In the end, user experience is what matters most.

Source: AnandTech