Apple today released the first developer beta of iOS 13.2. The new beta is the first to bring Apple’s new Deep Fusion camera technology to the iPhone 11 and iPhone 11 Pro.
What is Deep Fusion?
As Apple notes:
“Deep Fusion is a new image processing system enabled by the Neural Engine of A13 Bionic. Deep Fusion uses advanced machine learning to do pixel-by-pixel processing of photos, optimizing for texture, details and noise in every part of the photo.”
Here is The Verge’s explanation of how the feature works:
- By the time you press the shutter button, the camera has already grabbed three frames at a fast shutter speed to freeze motion in the shot. When you press the shutter, it takes three additional shots and then one longer-exposure shot to capture detail.
- Those three regular shots and the long-exposure shot are merged into what Apple calls a “synthetic long.” This is a major difference from Smart HDR.
- Deep Fusion picks the short-exposure image with the most detail and merges it with the synthetic long exposure. Unlike Smart HDR, Deep Fusion only merges these two frames, not more. These two images are also processed for noise differently than Smart HDR, in a way that’s better for Deep Fusion.
- The images are run through four detail processing steps, pixel by pixel, each tailored to increasing amounts of detail: the sky and walls are in the lowest band, while skin, hair, fabrics, and so on are at the highest level. This generates a series of weightings for how to blend the two images, taking detail from one and tone, color, and luminance from the other.
- The final image is generated (see the sketch after this list for a simplified version of the pipeline).
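To make the steps above concrete, here is a minimal Swift sketch of that capture-and-blend flow. It is purely illustrative: the Frame type, the averaging merge, and the single contrast-based detail measure are all simplifying assumptions of ours, since Apple’s actual processing runs on the Neural Engine across four detail bands and is not exposed as a public API.

```swift
import Foundation

// Illustrative only: a frame modeled as a flat array of luminance values (0.0–1.0).
struct Frame {
    var pixels: [Double]
}

// Crude sharpness metric: mean absolute difference between neighboring pixels.
// A real pipeline would use a far more sophisticated detail analysis.
func sharpness(of frame: Frame) -> Double {
    guard frame.pixels.count > 1 else { return 0 }
    var total = 0.0
    for i in 1..<frame.pixels.count {
        total += abs(frame.pixels[i] - frame.pixels[i - 1])
    }
    return total / Double(frame.pixels.count - 1)
}

// Merge the three regular shots and the long exposure into a "synthetic long."
// Per-pixel averaging here is an assumption; Apple does not document the merge.
func syntheticLong(regulars: [Frame], longExposure: Frame) -> Frame {
    let all = regulars + [longExposure]
    let count = Double(all.count)
    let pixels = (0..<longExposure.pixels.count).map { i -> Double in
        all.reduce(0.0) { $0 + $1.pixels[i] } / count
    }
    return Frame(pixels: pixels)
}

// Blend the sharpest short frame with the synthetic long, weighting toward the
// short frame where local detail is high (hair, fabric) and toward the long
// frame in smooth regions (sky, walls), as the list above describes.
func deepFusionStyleBlend(shorts: [Frame], regulars: [Frame], longExposure: Frame) -> Frame {
    guard let bestShort = shorts.max(by: { sharpness(of: $0) < sharpness(of: $1) }) else {
        return longExposure
    }
    let synthetic = syntheticLong(regulars: regulars, longExposure: longExposure)
    let pixels = (0..<bestShort.pixels.count).map { i -> Double in
        // Local contrast stands in for Apple's four detail bands.
        let prev = i > 0 ? bestShort.pixels[i - 1] : bestShort.pixels[i]
        let detail = min(abs(bestShort.pixels[i] - prev) * 4.0, 1.0)
        // Detail from the short frame; tone and luminance from the synthetic long.
        return detail * bestShort.pixels[i] + (1.0 - detail) * synthetic.pixels[i]
    }
    return Frame(pixels: pixels)
}

// Toy example with 4-pixel frames:
let shorts = [Frame(pixels: [0.1, 0.8, 0.2, 0.9]), Frame(pixels: [0.4, 0.5, 0.4, 0.5])]
let regulars = [Frame(pixels: [0.3, 0.6, 0.3, 0.7]),
                Frame(pixels: [0.2, 0.7, 0.3, 0.8]),
                Frame(pixels: [0.3, 0.6, 0.4, 0.7])]
let long = Frame(pixels: [0.35, 0.55, 0.35, 0.65])
let result = deepFusionStyleBlend(shorts: shorts, regulars: regulars, longExposure: long)
print(result.pixels)
```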
Apple has also released watchOS 6.1 beta 2 and tvOS 13.2 beta 1.
Head over to the Settings app and update your device if you are enrolled in the beta program.