13.8.12

Graphics improvements give Mountain Lion that speedy feeling | Ars Technica




"Believe it or not, Mountain Lion seems to run faster just sitting on your desk."
Has Mountain Lion been feeling faster for you compared to Lion on the same machine? It's probably not just you: Mountain Lion appears to include improved graphics drivers and low-level graphics subsystem improvements. According to our testing, these improvements result in an approximate performance increase of up to 10 percent. Those improvements can make your current hardware feel faster despite the fact that your CPU can't magically crunch numbers any faster. The changes also lay the foundation for Apple to update OS X's OpenGL support in a more timely manner, which could potentially lead to better graphics performance in the future.

Since OS X relies heavily on GPU acceleration and graphics performance for its UI, all these improvements work in concert to give an impression of faster operation. Hundreds of small lags in animations when you drag a window, scroll a page, click buttons, and other UI operations have been reduced by these improvements. Taken together, they add up to an overall improved user experience.

 

While the test results indicate not much improvement…

Ars Tribunus Militum
"Still, we ran a few benchmarks before and after upgrading to Mountain Lion to see if we could find any differences."

This is a problematic strategy for any OS; it's especially problematic for Apple.
The reason it's problematic is that Apple, even more than other companies (MS, Google), cares about the user experience, not benchmarks, and is continually tweaking their code to optimize for user experience.
This is not just happy talk. As CPUs have become faster, a different set of tradeoffs makes sense than did, say, fifteen years ago.

To take one obvious example - it is very jarring to have apps that are USUALLY very fast, but which every so often run grindingly slowly. This in turn means that it makes sense to use algorithms which are, say, 10% slower ON AVERAGE than the fastest known algorithm, but which also have a very narrow range of performance outcomes, whereas the fastest known algorithm may usually be fast, but every so often behaves abysmally.
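A toy version of this tradeoff can be put in numbers. The latency figures below are invented for illustration: a "steady" algorithm that is roughly 10% slower on average, versus a "fast but spiky" one with a single pathological stall.

```python
import statistics

# Hypothetical per-operation latencies (ms), 100 operations each.
# "steady" is ~10% slower on average but has a narrow spread;
# "fast_but_spiky" is usually quicker but occasionally stalls badly.
steady = [11.0] * 100
fast_but_spiky = [9.0] * 99 + [500.0]  # one pathological case

print(statistics.mean(steady))          # 11.0
print(statistics.mean(fast_but_spiky))  # 13.91
print(max(steady))                      # 11.0  -- worst case barely differs
print(max(fast_but_spiky))              # 500.0 -- the jarring stall a user notices
```

A throughput benchmark averaging over the whole run would call these two nearly a wash; a user sitting in front of the machine only remembers the 500 ms hiccup.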

We see a second version of this idea in the changes made to the Mountain Lion VM. The biggest problem with Lion's VM was that the pool of clean pages was not large enough. This meant that during a large memory allocation, most of the time spent waiting went to writing out dirty pages, NOT to reading in new pages.
Mountain Lion changes this by cleaning pages more aggressively in the background.
To a benchmark this looks like a step backwards --- over any given stretch of time, MORE data is written out than in Lion, and there's a minor overall slowdown of the machine because of it. But to a user it does indeed feel like a great improvement. The user does not notice a 0.5% (or whatever) constant slowdown of the machine, but DOES notice that the machine has fewer places where it noticeably halts --- the places where, with Lion, it was writing out a mass of dirty pages.

One could compare ARC as against Garbage Collection as the same sort of tradeoff, though in this particular case I think the honest truth is that ARC is a better match to the way Objective C is actually used than is GC; it's just a pleasant side benefit that it avoids the pauses of GC.
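The "no pauses" half of that tradeoff is easy to demonstrate. CPython happens to use reference counting itself, so it serves as a stand-in for the ARC side: a (non-cyclic) object is released the instant its last reference disappears, at a predictable point in the code, rather than whenever a collector gets around to it.

```python
# Illustrating deterministic, refcount-driven release (ARC-style),
# using CPython's own reference counting. Buffer is an invented class.
freed = []

class Buffer:
    def __del__(self):
        freed.append("buffer")   # runs the moment the refcount hits zero

b = Buffer()
alias = b
del b              # one reference remains; nothing is freed yet
assert freed == []
del alias          # last reference gone: __del__ runs right here, no pause
assert freed == ["buffer"]
```

Under a tracing GC, that release would instead happen at some later, unpredictable collection point, which is where the pauses come from.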

There is a second important change in Mountain Lion, but I am less familiar with it, so I may have some details wrong. As I understand it, compositing in Lion and earlier, going back to Quartz Extreme, operated on a window-by-window basis. Each window constructed its bits mostly on the CPU, then the GPU stepped in to copy the appropriate region of each window to the screen (applying clipping, transparency, and perhaps scaling). With Mountain Lion this has been extended via Core Animation (in the sense that a set of patterns and design guidelines is being developed) to apply to PARTS of windows. I expect the idea here is
(a) to allow fills, strokes and so on to occur on the GPU,
(b) to allow separate panes in a window to interact in richer ways (eg within-window transparency),
(c) to allow the GPU to construct a window from a bunch of separate, independently updated panes, in the same way that the screen is constructed from independent windows.

However, as I said, I'm not at all familiar with the details of this, and would appreciate whatever other commenters have to say.
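Point (c) above can at least be pictured with a toy compositor. Everything here (the Pane class, the character "framebuffer") is invented for illustration; the point is only that each pane carries its own independently updated contents, and the compositor assembles the window from them the way the screen is assembled from windows.

```python
# Toy compositor: a window built from independently updated panes.
class Pane:
    def __init__(self, x, y, pixels):
        # pixels: dict mapping pane-local (px, py) -> a character "pixel"
        self.x, self.y, self.pixels = x, y, pixels
        self.dirty = True        # an app would redraw only dirty panes

def composite(panes, width, height):
    """Blend every pane into the window's backing store at its own offset."""
    fb = [[" "] * width for _ in range(height)]
    for pane in panes:
        for (px, py), ch in pane.pixels.items():
            fb[pane.y + py][pane.x + px] = ch
        pane.dirty = False
    return ["".join(row) for row in fb]

panes = [Pane(0, 0, {(0, 0): "A"}), Pane(2, 1, {(0, 0): "B"})]
print(composite(panes, 4, 2))    # ['A   ', '  B ']
```

On real hardware the blend step is where the GPU earns its keep: each pane's bits can live in their own texture, so updating one pane does not force the CPU to re-render the rest of the window.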

Finally, once again WRT benchmarks, it is worth noting in any benchmark whether you are comparing apples with apples. This is especially important as security becomes ever more sophisticated. The relevance to Mountain Lion, and to the future of OSX, is that Apple is making a determined push to move ever more potentially dangerous code into separate, jailed, processes, which communicate with the main process via XPC. It is not unlikely that, for example, PDF parsing and rendering will move to such a separate process. This sort of fencing off has the potential to make certain operations say 10% slower. A fool will note this and mock, but an intelligent review will point out the tradeoff that is being made.
(Compare file system benchmarks against Linux say seven or eight years ago. It was common for Linux fans to mock that OSX was slow on certain operations, without noting that the reason for the slowness was that OSX was being much more careful about preventing file system corruption, both through careful ordering of writes and through forcing flushes.)
We'll never be rid of the ignorant teenage mind, primarily interested in using benchmarks to shore up his fragile ego. But we can do something to ensure his idiotic comments appear on other blogs by supplementing all benchmark articles with background information explaining the tradeoffs being made by each system.