Reviewing photo-editing software is complicated.
In an ideal world you’d be able to select a set of parameters for comparing programs, measure their performance in each area and then calculate a total score. But this approach has serious flaws:
What do you do with software that’s designed to do only one thing but is spectacularly good at it? If you judge it by the same parameters as software that offers ten times the features, it can never achieve more than a 1/10 score – and that tells you nothing about its worth.
So here’s an example. ON1 Photo RAW 2019 clearly does more than Alien Skin Exposure X4. It has image layering tools, a set of dedicated portrait enhancement tools, and panorama, HDR and focus stacking options entirely missing in Exposure X4. It’s cheaper, too.
But Exposure X4 does not set out to do all of these things. It’s designed for a more specific set of needs – the recreation of atmospheric analog film and darkroom looks – and it does this so brilliantly, and with so little fuss, that it would be foolish not to give it five stars. It’s extremely good at what it sets out to do, just as ON1 Photo RAW 2019 is.
You choose software according to what you want to do with it. I can’t mark Exposure X4 down for not supporting image layers when that’s a feature Alien Skin doesn’t attempt to provide.
I can’t rate software purely on the basis of its features – unless it promises something that it just doesn’t deliver.
So wait, every software review is five stars now?
Not quite, but I can see how it might look that way. The reason for so many high ratings in software reviews on Life after Photoshop is that I only feature software on this site that I already rate highly. There are lots of programs I don’t like that I simply don’t talk about (please don’t make me list them). I don’t see the point. The whole idea behind Life after Photoshop is to explore software and processes that are better than Photoshop, not cut-price Photoshop or Lightroom clones and wannabes that are inferior and worse to use.
So when I review a program it’s because I already like it. Sorry about that, but I don’t see any way round it right now. That doesn’t mean these reviews aren’t critical, though. Even the best software has flaws, and where I find them you can be sure I’ll highlight them – but I might still think a program deserves five stars despite those flaws.
For example, the three or four key plug-ins in the DxO Nik Collection are so good the collection deserves a five-star rating for these alone, even though the remaining plug-ins are really rather weak and dated and probably not worth using at all.
I look at this another way. The star rating might not be a very good differentiator for which program you should buy, but a five-star review is still a chance to explain why I rate the software so highly and to identify exactly the kind of user likely to get the most from it. This is important, because photo-editing tools are aimed at a very disparate range of photographers with very different needs and outlooks.
We don’t all want the same thing
For my style of photography I rarely combine images in layers from one year to the next. My style is all about enhancing single images, so I don’t insist that photo-editing software must have layers for me to like it. Other photographers may approach their work differently, and layers may be an essential part of what they do.
One of my criticisms of Photoshop is that it doesn’t show you what your images COULD look like. It offers no inspiration, only tools. This is why I place a high value on single-click creative presets. The more there are, the more I like it, as long as the quality is maintained. I know other photographers who don’t need or want inspiration; they know the look they’re after and how to achieve it, and don’t need fresh ideas.
I’m also aware that many photographers look for value above all else. That’s not me at all, I’m afraid. My feeling is that you very quickly forget how much you paid for a thing, but you never forget whether you like it or not. I’ve regretted too many cheap deals and second-rate purchases to want to keep on making that mistake. However, if you can only feel happy with a purchase if you think it was a great deal, then I respect that too. It’s just not me.
Why more features shouldn’t automatically boost a rating
As a reviewer, it’s very easy to paint yourself into a ratings corner. You really like a product, you give it a top rating, and then a few months later the publisher adds something new and great – but you’ve left yourself nowhere to go. You can’t give it more than five stars, and yet the fact that it’s new and better seems to demand a higher rating still.
Sometimes it’s even more complicated. Sometimes a publisher adds features which are OK but not brilliant, and this dilutes the quality of the software in a way that’s obvious straight away. Yes, the program demonstrably does more, so technically its score should go up. But if the new features aren’t very well implemented, you inevitably feel the software has lost something. It does more, yet the overall satisfaction it delivers has fallen.
Another way of looking at this is that when a publisher adds a big new capability to its software, it’s suddenly competing with a new set of programs, and may now be out of its league.
A good example of both these issues is DxO Optics Pro and PhotoLab. I have endless admiration for DxO and the quality of its software, but its recent history illustrates the points above perfectly.
Let’s start with DxO Optics Pro 11. Its raw conversions were simply superb, its optical corrections the best in the business and its PRIME Denoise and ClearView tools were the icing on the cake. It didn’t offer image cataloguing tools or local adjustments, but it never set out to. As what it set out to be – the best raw image converter and optical correction tool on the market – it was peerless. What can you do but give a program like this the maximum rating?
Then DxO brought out PhotoLab, and suddenly this superb raw processing and lens correction engine gained local adjustment tools which could leverage all that lovely raw data. At a stroke, DxO had made a powerful image editor, not just a raw converter. It did more, and it did it brilliantly, but since the previous version already had five stars there was nowhere else to go – PhotoLab could only get the same five-star rating.
But PhotoLab 2 has thrown things out of kilter. For some reason, DxO has released a major version update with comparatively few major changes, and the main one – a new, intelligent search feature – feels only half-finished. DxO is promising free updates soon, but even though this is a new, extra feature that theoretically adds to the software’s value, in practice it feels as if it has slightly dragged the software down.
Before, PhotoLab wasn’t really competing with serious photo cataloguing tools so there was no reason to mark it down for not doing that. Now that it is, though, it’s clear that it falls short of what its rivals can do. The combination of a comparatively thin major version upgrade and a somewhat weak initial implementation of its major new feature has – in my opinion – done more harm than good.
It’s like DxO raised the bar with PhotoLab 2 and then failed to clear it. It’s very difficult to give the software the same five-star rating as before under these circumstances, wouldn’t you say?
Luminar faces the same potential issue with Luminar 3, due in December 2018 (this is being written in late November). On the one hand, its new Libraries feature will obviously extend its capabilities and on that basis alone you could argue the program is provably better. On the other hand, if the new Libraries feature proves weak, unreliable or in any way disappointing, Luminar will inevitably feel ‘diluted’ in quality. Tricky, isn’t it?
Context is crucial
Expectations change, rival products improve, new ideas and technologies come along that turn old processes on their head. So just because a program got a five-star rating last year, it doesn’t mean it still deserves it this year.
This is a worse problem for hardware reviews than it is for software – and I’ve spent many years writing both. People who bought a Sony A6000 (for example) in 2014 when it was getting five-star ratings are incensed when it gets three or four stars in 2018. Some rather literal-minded consumers think that ratings should be absolute and permanent – yet clearly, a product has to be rated according to its peers at the time. In 2014 the A6000 was great; now it’s not.
So I might give Capture One Pro 11 five stars now, but if Adobe relaunches Lightroom with five times the speed (if only!), the same raw conversion quality as Capture One, a more effective set of local adjustment tools AND Photoshop fully integrated, I won’t just be obliged to give this new Lightroom five stars (it would surely deserve them!); I’ll also have to go back to Capture One Pro with a far more critical eye and rethink my rating.
I wouldn’t go so far as to change the original review – that’s just too confusing for readers and for software publishers – but in any future comparisons, buying guides and reviews, Capture One Pro’s five-star status would certainly need to be revoked.
Closing thoughts
Sorry about the length of this post, but it feels like there was a lot that needed saying. It would be great if reviews and ratings could be scientifically standardised, equalised and calculated, but that simply can’t be done.
Instead, the best that any reviewer can hope to offer is a flavour of the response a product evokes, and as clear an explanation as possible of why.
Inevitably, reviews are partly subjective, based on opinion as well as fact. But surely it’s better to know what someone else thinks about a product, even if they’re not exactly like you, than to rely on a bullet list of ‘benefits’ or a craftily contrived comparison table on a publisher’s website?
• See all Life after Photoshop reviews