If you believe Terushi Shimizu, or rather the way the press has framed his remarks, then by 2024 camera phones will have better image quality than your trusty DSLR or mirrorless digital camera. He backs this up with advances in sensor technology and computational photography. He has a point.
However, as a digital camera enthusiast, I must strongly disagree with this point of view. The message could be read to mean that we are no longer bound by physics in the pursuit of the best image quality.
The thing is: the bigger your camera sensor, the more photons it can capture. But big sensors require big lenses, which in turn make the camera big and heavy. I'm simplifying, of course, but that's physics. For camera makers it is therefore always a question of trade-offs: do you want better image quality, or do you want a smaller, lighter camera?
Camera phones, or rather the cameras in smartphones, have changed this equation a bit, thanks in part to their capacity for pre- and post-processing. For example, smaller sensors are not very good at capturing dynamic range and typically produce a lot more noise (a good overview is here), but there is a fix for that: take several photos of the same scene at different exposures and then recombine them to improve the overall picture quality. This is called HDR, or high dynamic range, and it is something camera phones have been doing since the mid-2010s.
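The multi-exposure idea can be sketched in a few lines of Python. This is a toy, single-channel version of exposure fusion, with illustrative numbers of my own choosing; real phone pipelines also align frames, weight by contrast and saturation, and blend across scales. The core trick is the same: weight each pixel of each exposure by how well-exposed it is, then take the weighted average.

```python
import numpy as np

def fuse_exposures(exposures, sigma=0.2):
    """Blend a stack of differently exposed images (values in [0, 1])
    by weighting each pixel by how close it is to mid-gray, i.e. how
    well-exposed it is. Crushed shadows and clipped highlights get
    low weight, so detail is pulled from whichever frame kept it."""
    stack = np.stack(exposures)                      # (N, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)    # normalize per pixel
    return (weights * stack).sum(axis=0)

# Toy scene: a dark and a bright exposure of the same gradient.
scene = np.linspace(0.0, 1.0, 5)
under = np.clip(scene * 0.5, 0, 1)      # underexposed: shadows crushed
over = np.clip(scene * 2.0, 0, 1)       # overexposed: highlights clipped
fused = fuse_exposures([under[None, :], over[None, :]])
```

In the fused result, the brightest part of the scene is no longer clipped to pure white, because the underexposed frame still held detail there.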
Another area where camera phones lag is subject isolation: keeping a person in focus while the background is blurred, also known as a shallow depth of field. To create this effect, you need the right combination of sensor size, lens, and distance to the subject. Without getting too technical, you need a lens that captures a lot of light (a fast lens, shot with the aperture wide open) and, depending on the effect, a sensor larger than a typical smartphone sensor.
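One way to see why this is hard for phones optically: for a distant background, the size of the blur disc scales with the physical aperture diameter, which is the focal length divided by the f-number. A rough comparison in Python, where the phone focal length of 6.8mm is an assumed, typical value for a main camera:

```python
def entrance_pupil_mm(focal_mm, f_number):
    """Physical aperture diameter. For a distant background, the blur
    disc scales with this diameter, so a larger pupil means stronger
    optical background blur."""
    return focal_mm / f_number

# Assumed typical phone main camera vs. a 25mm f/1.2 lens on a camera body.
phone = entrance_pupil_mm(6.8, 1.8)    # roughly 3.8 mm
m43 = entrance_pupil_mm(25.0, 1.2)     # roughly 20.8 mm
```

Despite the phone's impressively bright f-number, its tiny focal length gives it an aperture several times smaller, which is why phones synthesize the blur computationally instead.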
But here too, camera phones have steadily approximated the effect through a combination of computational photography, artificial intelligence, and several sensors used at once.
Olympus EM1 MkIII with 25mm M. Zuiko Pro lens at f/1.2
Samsung S21 Ultra smartphone with computational photography
With this trajectory, it's clear that small sensors and computational photography are here to stay, and I would wager that in certain circumstances we could probably create images indistinguishable from what a mirrorless digital camera can produce.
But now comes the contrarian view, my view. I'd like to make two points that I think are overlooked:
1. Creative control
Obviously, the more intelligence we put in our smartphones, the less we control the outcome. In other words, my eyes can and will disagree with the smartphone's computational logic. Just because I point the camera at a subject does not mean I actually want that subject isolated. And this is exactly what brings me to my second point:
2. The best tool for the job
A camera body is still the best tool for the job. I could never envision myself taking meaningful pictures with a smartphone, simply because I don't have the necessary controls or a good way to compose the scene. For example, in bright light I need a viewfinder; otherwise I'm just guessing at what I'm photographing. Likewise, I need to control the aperture, the ISO, and the shutter speed in a way that doesn't distract me from composing the shot. I would argue that there isn't a single smartphone camera system that lets me do all that, and there probably never will be, because smartphones are not cameras. They are fine for the occasional tourist or group shot, but they will never replace a real camera.
Hence, cameras will stay around for a long time to come. But at this point, my post turns into both a rant and a sales pitch for the micro-four-thirds format. Obviously, I'm not a professional photographer, and in a sense I grapple with some of the same issues as smartphone cameras when I shoot micro four thirds.

A quick technical excursion: the micro-four-thirds sensor is much smaller than a full-frame sensor, with a crop factor of two. As a result, my lens needs only half the focal length of its full-frame sibling: my 12mm lens gives me the same field of view as a 24mm lens on a full-frame body. Likewise, when I shoot wide open, I get only about half the depth-of-field effect I would see on a full-frame body (at the same distance to the subject and the same f-stop). My micro-four-thirds system therefore sits between a camera phone and a full-frame digital camera, and because of the smaller sensor I get all the downsides: more noise, less dynamic range, and less pronounced shallow depth of field.
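The crop-factor arithmetic above can be written down directly. A small sketch, using the standard approximation of a factor of two for micro four thirds:

```python
CROP_M43 = 2.0  # micro four thirds vs. full frame (standard approximation)

def ff_equivalent(focal_mm, f_number, crop=CROP_M43):
    """Full-frame equivalents: matching the field of view multiplies the
    focal length by the crop factor, and matching the depth of field
    (at the same distance and framing) multiplies the f-number too."""
    return focal_mm * crop, f_number * crop

# The 12mm end of a 12-100mm f/4 zoom on micro four thirds behaves,
# in framing and depth of field, like a 24mm lens at f/8 on full frame.
focal_eq, f_eq = ff_equivalent(12, 4.0)
```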
So why accept these sacrifices? The answer is physics: weight and size. My lenses are smaller than those of full-frame systems and still produce (in good light) exceptional pictures.
But here is the thing: I believe that what smartphones have been doing so successfully will come to mirrorless digital cameras as well, and this time with much better creative controls, because a camera is a tool built for exactly that kind of control. In other words, while pocket cameras are no longer being produced, I could very well envision OM System or Panasonic (the two micro-four-thirds banner holders) creating new cameras, based on the micro-four-thirds standard, with a stronger focus on computational photography.
There is already plenty of evidence that OM System (formerly Olympus) is heading in that direction with its latest OM-1, which offers, among other things, HDR modes and a high-res shot mode to bridge the gap to its larger-sensor competition (source). There is hope that OM System or Panasonic will make further strides in this area and develop effects similar to those on smartphone cameras, for example good denoising and some creative control over subject isolation.
But absent that, I've started to apply some of these techniques myself. Even today I shoot HDR with my small-sensor camera (taking multiple exposures), and I've started to leverage AI denoising tools with great success.
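Tools like Topaz use learned denoisers, but the simplest computational trick, the frame averaging behind smartphone night modes, is easy to sketch: averaging N noisy frames of a static scene cuts the noise by roughly the square root of N. A small numpy illustration with synthetic noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat gray "scene" plus 16 noisy captures of it.
clean = np.full((64, 64), 0.5)
frames = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(16)]

# Averaging the aligned frames: independent noise cancels,
# the static scene content does not.
stacked = np.mean(frames, axis=0)

noise_single = np.std(frames[0] - clean)   # roughly 0.1
noise_stacked = np.std(stacked - clean)    # roughly 0.1 / sqrt(16)
```

This only works for static subjects and aligned frames, which is exactly why phones pair it with alignment and motion rejection; learned denoisers like those in Topaz go further by exploiting what natural images look like.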
Olympus EM1 MkIII, 12mm (12-100mm M. Zuiko Pro lens), ISO 6400, f/4, 1/20 sec, with denoising (Topaz)
Olympus EM1 MkIII, 12mm (12-100mm M. Zuiko Pro lens), ISO 6400, f/4, 1/20 sec, without denoising
I'll conclude by suggesting that there is a bright future for small-sensor systems, precisely because of all the technology that can be brought over from smartphones to micro four thirds.