Deep Fusion, which Apple has described as "computational photography mad science", will arrive with the iOS 13.2 update. Apple announced the feature at the iPhone 11 event, calling it an important step forward for mobile photography, and is making it available first to participants in the iOS 13.2 beta program.
What is Deep Fusion?
Many smartphone models already use artificial intelligence for camera optimization: detecting the scene and adjusting settings automatically has become standard. Apple's approach differs in that it applies artificial intelligence to the photo output itself.
Deep Fusion runs on the Neural Engine of the A13 Bionic chipset. Before you even press the shutter button, the rear cameras capture four short and four standard exposures within milliseconds. When you press the shutter button, the camera takes one additional long exposure.
The Neural Engine analyzes all nine images within about one second, scanning every pixel to select the best ones. Around 24 million pixels are processed in total, and four neural networks combine two 12MP images into a single 12MP output. Meanwhile, the user can continue taking pictures; Deep Fusion holds the captured data and renders the result as soon as the A13 chip is idle.
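The per-pixel selection described above can be sketched as a simple multi-frame fusion: weight each of the nine exposures, pixel by pixel, by how much local detail it carries, then blend. This is a minimal illustrative sketch, not Apple's actual algorithm; the frame counts (4 short + 4 standard + 1 long) come from the article, while the detail metric and weighting scheme are assumptions for demonstration.

```python
# Illustrative sketch of detail-weighted multi-frame fusion, loosely in the
# spirit of Deep Fusion. NOT Apple's algorithm: the local-detail metric and
# the per-pixel weighting are simplified assumptions for demonstration.
import numpy as np

def local_detail(frame: np.ndarray) -> np.ndarray:
    """Per-pixel detail score: absolute difference from a 3x3 box blur."""
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")
    blur = sum(
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return np.abs(frame - blur)

def fuse(frames: list) -> np.ndarray:
    """Blend frames, weighting each one per pixel by its local detail."""
    stack = np.stack(frames)                        # shape (9, H, W)
    weights = np.stack([local_detail(f) for f in frames]) + 1e-6
    weights /= weights.sum(axis=0, keepdims=True)   # normalize per pixel
    return (weights * stack).sum(axis=0)

# Nine synthetic grayscale exposures: 4 short, 4 standard, 1 long.
rng = np.random.default_rng(0)
scene = rng.random((16, 16))
frames = [scene * 0.5 + rng.normal(0, 0.05, scene.shape) for _ in range(4)]
frames += [scene + rng.normal(0, 0.02, scene.shape) for _ in range(4)]
frames += [np.clip(scene * 1.5, 0.0, 1.0)]
result = fuse(frames)
print(result.shape)
```

In a real pipeline the fusion would also handle alignment between frames and favor the long exposure in dark regions; this sketch only shows the core idea of picking the best-detailed data per pixel across a burst.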
Deep Fusion works with the wide-angle sensor on iPhone 11 models, and with both the wide-angle and telephoto sensors on the iPhone 11 Pro and iPhone 11 Pro Max. The feature, which is exclusive to the A13 Bionic chipset, will arrive with the iOS 13.2 update; Apple is reported to be testing it first during the iOS 13.2 beta process.