24MP and Beyond: Why Fuji Caught Up Just in Time and Will Have to Embrace the Mega Pixel War

Photography isn’t just about print anymore. When I read articles or forum posts arguing against increases in megapixels, as I have recently about the Fuji X-Pro2, it makes me seriously question the background of the person making the claim and whether they understand the direction technology is heading.

That said, I feel it’s important to clarify my technology background. I remember CGA, EGA, VGA, SVGA, XGA, and so on. In the ’80s, you were lucky to see more than ANSI art on your computer, but when the ’90s came along, 320×200 and 640×480 digital pictures and video began to become common. I could get more technical, but this isn’t the place. In the ’90s I also worked for a groundbreaking multimedia development company that was the first to do many things, including 360° real estate photography on the web. I was their problem solver, and our biggest problems generally involved providing a quality viewing experience at every resolution, since digital images would be too small or too big on desktops that could be 640×480 (mainly older users), 800×600 (very common), 1024×768 (ideal at the time but not common), 1280×1024 (mainly professionals), or 1600×1200 (only professionals). This was a huge issue that was only partially addressed by vector graphics and the megapixel war. When you’re working with bitmap images, you always want to start with files at least as large as your final output resolution.
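That rule of thumb reduces to a trivial check. This is a sketch of my own, not tied to any particular imaging library:

```python
# Sketch of the rule above: a bitmap should be at least as large as its
# final display size; otherwise it must be upscaled, which costs quality.
def needs_upscaling(src_w, src_h, dst_w, dst_h):
    """True if a src_w x src_h image cannot cover dst_w x dst_h at 100%."""
    return src_w < dst_w or src_h < dst_h

# A 640x480 image covers a 640x480 desktop, but not an 800x600 one.
print(needs_upscaling(640, 480, 640, 480))  # False
print(needs_upscaling(640, 480, 800, 600))  # True
```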

(Example of poor operating system scaling on 3200×1800 13.3 inch laptop screen)

Yet we still don’t live in a resolution-independent world, because scaling bitmap files up in size is imperfect and the promise of vector graphics hasn’t paid off. This is easiest to see on a 4K (3840×2160) monitor when loading an older application or operating system. It matters here because the transition to 8K (7680×4320), which is beginning this year, will have the same effect on photography, especially at resolutions below 24MP.
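To illustrate why upscaled bitmaps look soft or blocky, here is a minimal nearest-neighbour upscaler in plain Python. It's a toy example of my own; real scalers use smarter interpolation, but the underlying limit is the same:

```python
# Nearest-neighbour upscaling just duplicates existing pixels: the image
# gets bigger, but no new detail is created, so edges turn into blocks.
def upscale_nearest(image, factor):
    """Upscale a 2-D list of pixel values by an integer factor."""
    out = []
    for row in image:
        stretched = [px for px in row for _ in range(factor)]
        out.extend(list(stretched) for _ in range(factor))
    return out

tiny = [[0, 255],
        [255, 0]]
for row in upscale_nearest(tiny, 2):
    print(row)
# Each original pixel becomes a flat 2x2 block -- magnified, not sharpened.
```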

The mass production of high-pixel-density displays in various sizes has matured high-pixel-density manufacturing and dramatically driven down the cost of large, high-pixel-density panels. We might have seen faster adoption of 4K if integrated graphics had been able to drive it and a new HDCP standard had arrived sooner so there was more 4K content, but now that these issues are settled, 8K is rapidly approaching. 2016 is being marketed as the year of 4K, but 2017 could very easily be the year of 8K now that the price of 4K has fallen below $500.

(Sprite based game on Xbox 360)

Beyond traditional display technology, the Oculus Rift will begin shipping March 28th, followed by the HTC Vive, Microsoft HoloLens, PlayStation VR, and Razer OSVR. There are also phone-based solutions like Gear VR and Google Cardboard, with more on the way. The panels in these devices are largely 1080x(varies), but the worlds they let you access are gigapixel in size, and if someone were inclined to display a photo in one of these worlds, you would run into issues similar to the ones video games had with sprite-based graphics.

(Getting close to a sprite wall or object)

In today’s world of rapidly advancing pixel density and virtual reality, consumers and professionals should demand that every camera maker produce the highest-megapixel camera it can while maintaining very good image quality around ISO 3200-6400. Technologically speaking, today’s APS-C cameras should all be at least 24MP, while full-frame cameras should all be at least 50MP. I believe that, ideally, the megapixel war will end again when a photo taken in the real world can be reproduced in a virtual world at 1:1 size. To put it another way: if I take a photo in real life and display it in a virtual room, then when I approach the photo on the virtual wall it should be as if I approached the subject in reality, with zero pixels apparent.
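For a sense of scale, the arithmetic behind the display side of this argument is straightforward (the figures below are mine, rounded):

```python
# Pixel counts of common display standards.
DISPLAYS = {"1080p": (1920, 1080), "4K": (3840, 2160), "8K": (7680, 4320)}

def megapixels(width, height):
    return width * height / 1_000_000

for name, (w, h) in DISPLAYS.items():
    print(f"{name}: {megapixels(w, h):.1f} MP")
# 1080p: 2.1 MP, 4K: 8.3 MP, 8K: 33.2 MP -- so a photo well under 24MP
# cannot fill an 8K screen without noticeable upscaling.
```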


This entry was posted in Fuji X-Pro2 and Technology guides.
  • AnthonyH

    Huh? Not every lens can support his suggested 50mp. Moreover, human vision is not like a still camera. He doesn’t even address issues of diffraction. Waste of time to read.

    • Those topics are outside the scope of this article, which simply covers how images are presented. The examples I give demonstrate what happens when you don’t feed a computer enough pixels. Digital photos are bitmaps: if they have to be scaled up even minimally they become blurry, as in the OS example, and if they are viewed at an appropriate distance in VR and then approached, they become pixelated, as in the Doom example. 8K will require 24MP+, and VR will require gigapixel levels.

      It is hard to demo VR worlds at the moment because everything has moved to rendered 3D worlds, so that pixelation does not occur when you approach an object; but a photo taken by a digital camera will always remain a bitmap. There has been no progress in turning photos into vectors that I am aware of, because they are too complex.

    • sickheadache

      Ur Right..That is why I would never place a DX piece of crap on my FX D810. Or on my a7rll…Sigma Art and Otus would be perfect for those cameras.

  • dclivejazz

    I can’t believe how much it costs to upgrade the video card on my Mac Pro to drive two HD monitors, much less one that’s 4K. It makes me seriously consider building my own Windows PC for my next workhorse desktop. Keeping it up to date would be less expensive, although constantly applying those pesky Windows “critical updates” would be a nightmare…

    Meanwhile, I’m really curious to see how well Fuji’s 24 mp sensor works with my 16mm lens. That thing has killer resolution on the 16 mp sensor already.

    • Apple video cards cost more because they have to carry special firmware. You can find resources on the internet that will let you run less expensive hacked cards on your Mac, but generally speaking, Windows PCs are probably going to be cheaper.

      The 5K iMac is a pretty decent value, and I think we will see a 6K or 8K Cinema Display refresh come alongside a new Mac Pro, but Apple doesn’t share details of its road map. I purchased a fully upgraded Mac Pro years ago and find it has plenty of power for my needs, and I recently went through the process of prepping it to run Windows for the Oculus Rift I preordered.

  • What’s the adoption curve for 4K displays? Is the market currently saturated with 4K displays and content?

    • That’s beyond the scope of the article, but 4K was already building momentum in 2014, and being able to buy a $399 4K display http://amzn.to/1Pj4ucc is a sign that 4K is quickly becoming mainstream. Display technology has a tick-tock cycle just like processors.

      • Politics_Nerd

        Tick tock does not mean what you seem to think it means.

        • Yes it does…there are incremental improvements followed by more revolutionary changes.

          • Politics_Nerd

            It has a specific meaning. It means a new technology product is introduced (the tick), and then that same technology product is later refined by a later revision (the tock). The closest thing I can think of in photography is the single-digit Nikon series. D3/D3s, D4/D4s, D5/D5s, etc. But it specifically refers to Intel’s strategy of new chip technologies (tick) that they then revise (tock). Using it outside that context misplaces the concept.

          • The meaning can be analogized to help people understand how technology is rolled out. Almost all tech companies make big initial investments in advancing their technology, followed by smaller investments that provide incremental improvement. This model helps finance R&D and creates an upgrade cycle: iPhone 4/4S/5/5S/6/6S, etc. TV makers do this across model lines too, though their tick/tock can be longer or tied to other shifts, like CRT/LCD. It’s simply a model.

            Intel has played with its tick/tock cycle before, but the idea of tick/tock helps entry-level tech people understand how to plan for a variety of things later in their careers. Most people know vaguely what tick/tock means, but they do not make multimillion- or billion-dollar purchasing decisions based on Intel following its road map, which has broken down in recent years. I’m not going to get into a big tick/tock debate with you, but people in tech view all of tech through tick/tock, not just Intel’s road map.

  • J.L. Williams

    “…it makes me seriously question the background of the person making the claim and whether they understand the direction technology is heading.”

    I likewise question YOUR background — video gaming and photography aren’t as analogous as you seem to want to think — and whether you understand biology. 4K, 8K, whatever… we’re still viewing those pixels with a human eyeball that typically resolves only 3 minutes of arc. That means that to take advantage of 4K (and eventually 8K) displays, you have to view them from a closer and closer distance. Likewise, the resolution of VR headsets is irrelevant to photo viewing because people want to be able to see the entire photo at once, and the eye’s angle of view is limited.

    Meanwhile, although “photography isn’t just about print anymore,” the fact is that print is still the most demanding realm of photo usage. 36-inch-wide inkjet printers are commonplace in such fields as production of retail store point-of-sale displays, and customers are likely to view these displays from a much closer distance than anyone watches a video monitor. These printers need to be driven at a minimum of 240 pixels per inch to take full advantage of their resolution — so a typical 36×60-inch display banner is consuming 8,640 by 14,400 pixels, which makes 8K look pretty trivial. That’s 124.4 megapixels… and yet, the high-end photographers who produce these images do so without the need for a 124-megapixel camera, commercial examples of which don’t exist yet. The reason they can get away with this is simply that human vision isn’t very demanding when it comes to pixel-level detail — as long as the image has enough micro-contrast, the brain will perceive it as “sharp.” If you’ve ever had the opportunity to examine an Ansel Adams print up close, you’ve seen this illusion at work: the picture looks superbly detailed, but the actual image structure isn’t very sharp by today’s standards.

    So it’s all very well to say that “consumers and professionals should demand that every camera maker produce the highest megapixel camera they can.” But the only way to “demand” something from a business is to be willing to pay for it. How much are we willing to pay for arbitrarily high pixel densities that the eye can’t see and the brain doesn’t care about?

    • The only way to demonstrate how a bitmap file will behave in a 3D world at the moment is with old sprite-based video games. Granted, there will be scaling technology built into the back end, but the rule still applies that you can only add so much detail through geometry, which means pictures will get fuzzy or pixelated-looking. This article is more about how photos will be displayed in the coming years than about photography alone. As for displays, a 24MP photo will just about fill an 8K display, but a 12MP photo will have to be displayed at roughly 200% to fill the screen, regardless of screen size. To be clear, I agree with the argument that 4K and 8K really only benefit large screens, but that hasn’t stopped manufacturers from making tablet-size 8K screens. Sizing non-vector graphics for these screens is going to be difficult, as can be seen in my Steam example. My 13.3-inch 3200×1800 screen is amazingly sharp, but if an app isn’t designed correctly for 250% scaling, it looks fuzzy and off-putting.

      Why are you going back to the print argument I’m rejecting? Those same displays will be recreated in virtual worlds, and that will require gigapixels to accomplish. It’s much easier to create false detail in print than in a group of pixels. What we have today is acceptable for print and the web, but it won’t be for gigapixel-rich environments. Did you ever work in a time when 640×480 was the maximum resolution a monitor could achieve? Or 800×600, with Photoshop taking hours to do things that are instant now? I ask because a lot of people working in digital photography today have never really needed more pixels to accomplish a project, but in the past there were rarely enough. I have dealt with the growing pains of the tech industry many times over my career, and this is just one more on the horizon; it will hopefully lead to resolution independence, but that’s getting off topic. The rules of print don’t map exactly onto the rules of displaying an image on a pixel-based display. Even among high-pixel-density displays you see a lot of variation between Apple Retina, Lenovo HiDPI, etc. I personally prefer not to edit on HiDPI because of this, but there are growing pains to come, and I am pointing them out because megapixels are going to become important again soon.

      I never said we shouldn’t pay, and my demands are realistic. I said that today 24MP should be standard for APS-C, and largely it is; but for full-frame, 50MP should be standard, and sadly it is not, because many have settled on 24MP as a standard resolution. This is unfortunate and not at all related to cost: you can find 50MP full-frame cameras at very reasonable prices if you’re a serious photographer, and you can also find outrageously expensive 24MP cameras. The sensor really isn’t driving up the cost much. Beyond that, I simply said to scale the megapixels as fast as possible while maintaining good ISO 3200-6400 performance; I did not fabricate a timeline for this to happen or say everyone should jump on the medium-format bandwagon. I know there will be technical barriers to overcome, but manufacturers need to get started.

      Hollywood is currently looking into how it can make VR movies, and that will drive the industry in an interesting direction, since cameras and video cameras have started to merge. Right now, filming for VR takes 4-16 1080p or 4K cameras, the quality varies greatly, and there is no interaction. I’m sure people have built bigger rigs, but all of this will be compressed down into single-camera solutions in the coming years. Your ideals of viewing an image need to change with the way consumers decide they want to consume it.

    • Politics_Nerd

      55″ 8k desktop displays will soon be as common as commercial carpet tiles! /solved /sarcasm 😉

      • Just like 0.3MP JPEGs…

        • Politics_Nerd

          Strawman argument is very convincing, to you at least.

  • Ritvar Krum

    I agree with this article 100%… 8K is a few years away, and if you do not shoot with a 36MP camera now, your photos will not be viewable at full resolution on 8K in just a few years… I guess you will have to redo them (all your photos) then, eh? Nowadays SD cards and RAM for your computer are so dirt cheap that there is no excuse anymore to be scared of 36MP and above.

  • Politics_Nerd

    8K will be out for five years before it has deep market penetration. There are so many 1080p and 1440p displays out there that have to age out before being replaced. Just because you can get a 4K display under $500 doesn’t mean people are going to rush out and replace their working hardware. Personally, I shoot action in low light, so the MP wars do nothing for me. I’d rather have a 20MP image with low noise and frozen action than a 36+MP one that is full of noise and blur. My 24MP D750 does very well, but I may be going for a D500 (the D5 remains a distant dream, but I’d rather have the inevitable D5s anyway) depending on how sample images prove out its low-light capabilities. The 50MP Canons are a ridiculous joke unless you are in studio or daylight and have the camera on a tripod with the mirror up, using the timer for a vibration-free exposure.

    • DxOMark is a very flawed benchmark that beginners use. This article is about preparing for the future. There aren’t a lot of 0.3MP-2MP JPEGs from digital cameras in use today for a reason. Digital photography has not had to take digital presentation into consideration since its inception, because it outpaced display technologies, but that is going to change pretty rapidly, and people should be informed and prepared for the change.

      • Politics_Nerd

        Okay, if you say so. Meanwhile in actual reality…

    • CHD

      ‘You also ignore the fact that even the best lenses can’t resolve past about 30mp anyway.’
      Where do you get this information?? You’re wrong.