
Will smartphone cameras be better than digital mirrorless cameras?

 


If you believe Terushi Shimizu, or rather the way the press is framing his statement, then camera phones will have better image quality in 2024 than your trusty DSLR or mirrorless digital camera. He backs this up with advancements in sensor technology and computational photography. He has a point.

 

However, as a digital camera enthusiast myself, I must strongly disagree with this point of view. The message could be read as saying that we are no longer bound by physics when it comes to getting the best image quality.

 

The thing is this: the bigger your camera sensor, the more photons it can capture. But big sensors require big lenses, which in turn make the camera big and heavy. I'm simplifying, of course, but that's physics. For camera makers it is therefore always a question of trade-offs: do you want better image quality, or do you want a smaller and lighter camera?

Camera phones, or cameras in smartphones, have changed this equation a bit, thanks in part to their ability to do pre- and post-processing. For example, small sensors are not very good at capturing dynamic range and typically produce a lot more noise (a good overview is here), but there is a fix for that: take several photos of the same scene at different exposures and then recombine them to improve the overall picture quality. This is called HDR, or high dynamic range, and it is something camera phones have essentially been doing since the mid-2010s.
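Just to make the idea concrete, here is a minimal sketch of that exposure-fusion step using OpenCV in Python. The file names are placeholders, and real camera phones do the alignment, merging, and tone mapping in dedicated hardware with far more sophistication – this only shows the basic principle.

```python
import cv2

# Hypothetical bracketed exposures of the same scene: under-, normally, and over-exposed.
files = ["scene_-2ev.jpg", "scene_0ev.jpg", "scene_+2ev.jpg"]
images = [cv2.imread(f) for f in files]

# Align the frames first; handheld shots never line up perfectly.
cv2.createAlignMTB().process(images, images)

# Mertens exposure fusion blends the best-exposed parts of each frame,
# without needing the exact exposure times.
fused = cv2.createMergeMertens().process(images)

# The fused result is a float image in [0, 1]; scale back to 8-bit to save it.
cv2.imwrite("scene_hdr.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```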

Another aspect where camera phones are lacking is their ability to isolate subjects, in other words, to keep a person in focus while the background is blurred. This is called a shallow depth of field. To create this effect, you need the right combination of sensor size, lens, and distance from the subject. Without getting too technical: you need a lens that can capture a lot of light – also called a fast lens, or shooting with the aperture wide open – and, depending on the effect, a sensor larger than a typical smartphone sensor.
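To see why sensor size matters so much here, a rough back-of-the-envelope sketch using the standard thin-lens depth-of-field formulas helps. The focal lengths, f-stop, and circle-of-confusion values below are illustrative numbers I picked for a full-frame camera and a phone-sized sensor, not measurements:

```python
def depth_of_field(focal_mm, f_number, subject_m, coc_mm):
    """Approximate near/far limits of acceptable sharpness (thin-lens model)."""
    s = subject_m * 1000.0                                   # subject distance in mm
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = s * (hyperfocal - focal_mm) / (hyperfocal + s - 2 * focal_mm)
    far = (s * (hyperfocal - focal_mm) / (hyperfocal - s)
           if s < hyperfocal else float("inf"))
    return near / 1000.0, far / 1000.0                       # back to metres

# A fast full-frame portrait setup vs. a typical phone main camera, both
# focused at 2 m. The circle of confusion scales with sensor size
# (roughly 0.03 mm for full frame, around 0.004 mm for a phone sensor).
print(depth_of_field(50, 1.8, 2.0, 0.030))   # full frame: only ~0.17 m in focus
print(depth_of_field(7, 1.8, 2.0, 0.004))    # phone: well over a metre in focus
```

The short focal length and tiny sensor give the phone a much deeper zone of sharpness, which is why phone images tend to look sharp front to back even with the aperture wide open.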

But here too, camera phones have steadily approximated this effect through a combination of computational photography, artificial intelligence, and the use of several sensors at once.
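As a crude illustration of how such a "portrait mode" can be faked in software, here is a sketch that blends a blurred copy of the frame back in wherever a depth or segmentation mask marks background. Real phones derive that mask from multiple lenses or machine-learned depth estimation; the file names are placeholders.

```python
import cv2
import numpy as np

# Placeholder inputs: the photo plus a mask where white (255) marks the
# subject and black marks the background.
image = cv2.imread("portrait.jpg")
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE)

# Heavily blur the whole frame to mimic a fast lens shot wide open.
blurred = cv2.GaussianBlur(image, (51, 51), 0)

# Feather the mask so the subject's edges blend smoothly, then keep the
# subject sharp and swap in the blurred pixels everywhere else.
alpha = cv2.GaussianBlur(mask, (21, 21), 0).astype(np.float32) / 255.0
alpha = alpha[:, :, None]                     # broadcast over the colour channels
result = (alpha * image + (1.0 - alpha) * blurred).astype(np.uint8)

cv2.imwrite("portrait_fake_bokeh.jpg", result)
```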


Olympus EM1 MkIII with 25mm M. Zuiko Pro lens at f/1.2

Samsung S21 Ultra smartphone with computational photography



With this trajectory, it's clear that small sensors and computational photography are here to stay, and I would wager that in certain circumstances we could probably create images that are indistinguishable from what a mirrorless digital camera produces.

 

But now comes the contrarian view – my view. I'd like to make two points that I think are overlooked:

 

1.     Creative control

 

Obviously, the more intelligence we have in our smartphones, the less we control the outcome. In other words, my eyes can and will disagree with the smartphone's computational logic. Just because I point the camera at a subject does not mean I really want that subject to be isolated. And this brings me to my second point:

 

2.     The best tool for the job

 

A camera body is still the best tool for the job – I could never envision myself trying to take any meaningful pictures with a smartphone, simply because I don't have the necessary controls in place or a way to compose the scene. For example, in bright light I need a viewfinder, otherwise I'm just guessing at what I'm taking a picture of. Likewise, I need to control the aperture, the ISO, and the shutter speed in a way that doesn't distract me from composing the shot. I would argue that there isn't a single smartphone camera system that lets me do all that, and there probably never will be, because smartphones are not cameras. They are fine for the occasional tourist shot or a group shot – but they will never replace a real camera.

 

Hence, cameras will stay around for a long time to come. But at this point, my post turns into both a rant and a sales pitch for the micro-four-thirds format. Obviously, I'm not a professional photographer, and in a sense I grapple with some of the same issues that smartphone cameras have when I shoot with micro-four-thirds.

A quick technical excursion: the micro-four-thirds sensor is much smaller than a full-frame sensor (a crop factor of 2), so a lens needs only half the focal length of its full-frame sibling to give the same view – my 12mm lens frames like a 24mm lens on a full-frame body. Likewise, when I shoot wide open, I get only about half the depth-of-field effect I would see on a full-frame body (at the same distance to the subject and the same f-stop). This basically means my micro-four-thirds system sits between a camera phone and a full-frame digital camera. And because of the smaller sensor, I get all the downsides: more noise, less dynamic range, and not-so-great shallow depth of field. The conversion math is simple, as the sketch below shows.
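For anyone who wants to do the conversion themselves, a tiny sketch of the equivalence math (the micro-four-thirds crop factor is 2.0 relative to full frame; the lens values are just the examples from this post):

```python
CROP_FACTOR = 2.0  # micro-four-thirds sensor diagonal vs. full frame

def full_frame_equivalent(focal_mm, f_number, crop=CROP_FACTOR):
    """Field-of-view and depth-of-field equivalents on a full-frame body."""
    return focal_mm * crop, f_number * crop

# My 12mm lens frames like a 24mm on full frame, and at f/4 it renders
# depth of field roughly the way f/8 would there.
print(full_frame_equivalent(12, 4.0))   # -> (24.0, 8.0)

# The 25mm f/1.2 from the sample image above behaves like a 50mm f/2.4.
print(full_frame_equivalent(25, 1.2))   # -> (50.0, 2.4)
```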

 

So why would I sacrifice these things? The answer is physics: weight and size. My lenses are smaller than the ones on full-frame systems and yet (in good light) create exceptional pictures.

 

But here is the thing: I believe that what smartphones have been doing so successfully will come to digital mirrorless cameras as well, but this time with much better creative controls, because the camera is a tool that actually lets you control the result. In other words, while pocket cameras are no longer being produced, I could very well envision OM Systems or Panasonic (the two micro-four-thirds banner holders) creating new cameras, based on the micro-four-thirds standard, with a much stronger focus on computational photography.

 

There is already plenty of evidence that OM Systems (formerly known as Olympus) is heading in that direction with its latest OM-1, which offers, among other things, HDR modes and a high-res shot mode to bridge the gap to its larger-sensor competition (source). There is hope that OM Systems or Panasonic will make more strides in this area and develop effects similar to those we have on smartphone cameras, for example good denoising and some creative control over subject isolation.

 

But absent that, I've started to utilize the same techniques that we have on full-frame systems: even today I shoot HDR with my small-sensor camera (taking multiple exposures), and I've started to leverage AI denoising tools with great success.
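For the denoising part, here is a minimal sketch of the idea with a classical non-local-means filter from OpenCV – a stand-in for AI tools like Topaz, which work very differently and considerably better; the file name and strength values are just assumptions to experiment with:

```python
import cv2

# Placeholder input: a high-ISO shot straight out of the camera.
noisy = cv2.imread("em1_iso6400.jpg")

# Non-local means averages similar patches across the image to suppress
# noise while keeping edges; higher h values smooth more aggressively.
clean = cv2.fastNlMeansDenoisingColored(noisy, None, h=6, hColor=6,
                                        templateWindowSize=7, searchWindowSize=21)

cv2.imwrite("em1_iso6400_denoised.jpg", clean)
```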



Olympus EM1 MkIII, 12mm (12-100mm M. Zuiko Pro lens), ISO 6400, f/4, 1/20 sec, with denoising (Topaz)

Olympus EM1 MkIII, 12mm (12-100mm M. Zuiko Pro lens), ISO 6400, f/4, 1/20 sec, without denoising


 

I'll conclude by suggesting that there is a bright future for small-sensor systems, precisely because of all the technology that can be brought over from smartphones into the micro-four-thirds system.

 

 
