Now that the iPhone SE is widely available and making its way into more users’ hands, we’re learning more about the device’s technical details. The developers behind the Halide camera app have published a deep dive into the iPhone SE’s camera, shedding new light on its single-lens system.
A recent teardown from iFixit confirmed that the iPhone SE’s camera hardware is interchangeable with the iPhone 8’s, but Halide’s breakdown focuses on the camera software. Specifically, it points out that the iPhone SE is the “first iPhone that can generate a portrait effect using nothing but a single, 2D image.”
The iPhone XR also generated the portrait effect with a single-lens camera, but it used focus pixels, “tiny pairs of ‘eyes’ designed to help with focus.” The iPhone SE doesn’t use focus pixels because of its three-year-old camera sensor:
The new iPhone SE can’t use focus pixels, because its older sensor doesn’t have enough coverage. Instead, it generates depth entirely through machine learning. It’s easy to test this yourself: take a picture of another picture.
The depth data generated by the iPhone SE is exposed to developers, and Halide uses it to offer a portrait effect for pets and objects with the iPhone SE camera, even though Apple limits the SE’s built-in Portrait mode to humans only. Still, the Halide developers caution that machine-learned depth has its limits:
At the end of the day, neural networks feel magical, but they’re bound by the same limitations as human intelligence. In some scenarios, a single image just isn’t enough. A machine learning model might come up with a plausible depth map, but that doesn’t mean it reflects reality.
That’s fine if you’re just looking to create a cool portrait effect with zero effort! If your goal is to accurately capture a scene, for maximum editing latitude, this is where you want a second vantage point (dual camera system) or other sensors (LIDAR).
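Halide’s post doesn’t include sample code, but the depth data it describes is the kind that third-party apps can request through Apple’s AVFoundation framework. Below is a minimal, illustrative Swift sketch of enabling depth delivery on a capture session and reading the depth map from the resulting photo; the class name and setup details are our own, not Halide’s.

```swift
import AVFoundation

// Minimal sketch: opting in to depth data delivery and reading the
// per-pixel depth map from a captured photo. Session setup is
// illustrative; error handling is omitted for brevity.
final class DepthCapture: NSObject, AVCapturePhotoCaptureDelegate {
    let session = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()

    func configure() throws {
        session.beginConfiguration()
        session.sessionPreset = .photo

        // On the iPhone SE this is the single wide-angle camera.
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: .back) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }

        // Depth delivery must be enabled on the output before capturing.
        if photoOutput.isDepthDataDeliverySupported {
            photoOutput.isDepthDataDeliveryEnabled = true
        }
        session.commitConfiguration()
    }

    func capture() {
        let settings = AVCapturePhotoSettings()
        settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // depthDataMap is a CVPixelBuffer of depth (or disparity) values
        // that an app like Halide can turn into a portrait-style blur.
        if let depthData = photo.depthData {
            print("Got depth map:", depthData.depthDataMap)
        }
    }
}
```

On dual-camera iPhones the same API returns depth derived from two vantage points; on the iPhone SE, per Halide, the map behind it is generated entirely by machine learning.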
We recommend reading the full iPhone SE deep dive from the Halide developers here.