Measuring computer performance has gotten weird

posted by Jeff | Monday, February 13, 2023, 9:23 PM | comments: 0

I am gonna review the new laptop eventually, but for now let me say that I continue to be surprised at its performance, not just with software development tools, but also with video manipulation. Even before I pulled the trigger, though, doing the math about what constitutes "better" performance was weird.

I built that PC on my desk about three and a half years ago. The GPU, an RTX 2070, is still more than respectable in terms of its gaming capability. It maintains high frame rates in Halo, Forza and LEGO Star Wars, the only games I regularly play. It did more than OK with Planet Coaster and Planet Zoo back in the day, too. The current generation of GPUs costs about three times as much for performance that normal people like me wouldn't notice. Where I leverage that power is in video editing applications like Premiere Pro and DaVinci Resolve. The CPU itself, with a bucket of RAM and solid-state drives, doesn't seem to matter as much, though the plumbing between them does.

Up until a year and a half ago, Macs used the same hardware as PCs, so comparing capability was straightforward. What Apple did consistently well even then was manage power consumption and heat, and by extension, battery life. This was possible in part because they own both the operating system and the hardware design. But then they went the route of ARM-based system-on-a-chip (SoC) processors. One chip replaced a lot of chips, and that has a lot of advantages, chiefly low power consumption and shorter pathways between all the things.

Benchmarks don't really measure the "right" things anymore, and they lack context. They used to measure clock speed and the number of processing cores, but if the SoC design gains efficiency, those numbers don't mean the same thing. How well software uses the GPU in tandem with the CPU matters too. And if battery life matters to you, as it generally should on a laptop, you have to take that into account as well. Battery life is a performance metric.
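One way to make that concrete is to score work per watt rather than raw benchmark numbers alone. Here's a minimal sketch of that idea; the function is mine and every number below is a made-up placeholder, not real benchmark data for any machine:

```python
# Sketch: fold battery life into a performance comparison by dividing
# a benchmark score by average power draw (watt-hours / hours = watts).
# All figures are hypothetical placeholders, not measured results.

def score_per_watt(benchmark_score: float, battery_wh: float,
                   runtime_hours: float) -> float:
    """Benchmark score per watt of average power draw on battery."""
    avg_watts = battery_wh / runtime_hours
    return benchmark_score / avg_watts

# Two imaginary laptops with the same raw score but different battery life:
thirsty = score_per_watt(10000, battery_wh=99.6, runtime_hours=4)
efficient = score_per_watt(10000, battery_wh=99.6, runtime_hours=12)

print(efficient > thirsty)  # the longer-running machine wins per watt
```

Same benchmark score, very different answer once efficiency counts, which is roughly the comparison the raw numbers hide.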

So with the wildly different architecture, I mostly had to lean on specific use cases, how they "feel" to the user, and what people were reporting. Video editing was already not even close, and I can confirm that. Trying to edit video that you're color grading and/or applying noise reduction to, on my PC, is not ideal. Scrubbing across the timeline skips a lot, and if I do it too fast, it's not usable. There are workarounds for this, namely creating smaller proxy files for editing, but that's an extra step. On the M2 MacBook Pro, I can apply many layers of adjustments to the video and it scrubs without a hitch. I've never seen it that smooth, even in the old standard-definition days. The typical software development load is also ridiculously fast, using the same .NET tools I would use in Windows. App build times are a third of what they are on my desktop. (Mind you, I'm also starting to realize that my "thin and light" laptops have not been ideal for dev work.)
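For what it's worth, the proxy workaround is usually a one-liner outside the editor. A hypothetical ffmpeg invocation might look like this; the filenames and target resolution are placeholders, not anything from my actual workflow:

```shell
# Make a low-resolution ProRes Proxy copy for smooth timeline scrubbing.
# clip.mov / clip_proxy.mov are placeholder filenames.
# scale=1280:-2 downscales while keeping the aspect ratio (and even dimensions);
# the prores_ks encoder with -profile:v 0 produces the ProRes Proxy variant.
ffmpeg -i clip.mov -vf scale=1280:-2 -c:v prores_ks -profile:v 0 -c:a copy clip_proxy.mov
```

The editor then cuts against the lightweight proxy and relinks to the full-resolution original for the final render, which is exactly the extra step the M2 lets you skip.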

I realized this problem with benchmarking when reading a recent article about how to spec out a computer for video editing. As the author pointed out, you can buy a $600 Mac Mini now and you're already in a pretty good starting place. If you buy the $1,300 version, you have a crazy good solution that might well outlast the software running on it. Until now, hardware performance had never outpaced the demands of video editing software.

I've also come to realize the "but you can't expand or replace parts" thing is just not important. I haven't added RAM to a computer in over a decade. The only thing I've added to my desktop in three and a half years is more storage, which I could just as easily have made external. And even if I wanted to replace the CPU or memory, I'd have to replace both, plus the motherboard, because the standards and clock speeds have changed. The whole modular thing doesn't matter anymore, and we can probably thank cell phones for that.

There are PC parts that could blow away the current crop of Macs, but to get there, you need to buy CPUs that cost over $500 and GPUs that cost about a grand, and that's before you buy something to put them in. That setup will also use two or three times the power. Remember, Apple originally went from PowerPC to Intel to catch up, but this change is something we really haven't seen in my lifetime. It's possible that Windows on ARM could make a similar leap, but it will probably require Microsoft to figure it out for the PC makers, much as they did with the Surface line.

The new Mac laptops are still awfully expensive, but given what they deliver, it's kind of justifiable. The MacBook Airs and Mac Minis are extraordinary on the performance-cost curve. Budget creators can now afford the kind of power that used to be reserved for people with big budgets.

