Apple acquired Montreal-based Vrvana, a mixed reality headset company with hand-tracking technologies, back in 2017. One of the inventors listed on the patent is Vrvana’s Vincent Chapdelaine-Couture, a computer vision scientist who, according to his LinkedIn profile, didn’t relocate to California after the acquisition. His expertise is in omnistereo capture methods, (un)structured light reconstruction, and motion tracking; he is currently working to achieve real-time SLAM-based positional tracking in a mixed reality context.
The second inventor listed on the patent is Selim BenHimane, a Senior Engineering Manager – Computer Vision & Machine Learning, who works on 3D computer vision in the field of AR and VR.
Today the US Patent & Trademark Office published a patent application from Apple titled “Object Detection Using Multiple Three Dimensional Scans.” It generally relates to detecting and tracking real-world physical objects depicted in images, and in particular to systems, methods, and devices for detecting and tracking such objects based on prior scans and creating a 3D model.
Apple’s patent FIG. 2 below is a block diagram of a mobile device (iPhone) displaying a computer-generated reality (CGR) environment with AR content of the physical object of FIG. 1, captured by the iPhone’s rear camera and/or scanner. Apple doesn’t mention landscape orientation, even though the patent figure clearly depicts it.
Apple’s patent FIG. 3 above is a block diagram depicting a first scan (camera movement in blue) of the example physical object in the center; FIG. 4 is a block diagram depicting a second scan (camera movement in pink) of the same object.
Apple’s patent FIG. 5 above is a block diagram illustrating the differences between the paths of the image sensor during the first scan of FIG. 3 and the second scan of FIG. 4.
Apple’s patent FIG. 9 above is a block diagram illustrating exemplary components of a device used to generate 3D models of physical objects and detect those physical objects.
The phrase “physical object” associated with the ochre magnet in FIG. 2 generally refers to any type of item or combination of items in the real world including, but not limited to: building blocks, a toy, a statue, furniture, a door, a building, a picture, a painting, a sculpture, a light fixture, a sign, a table, a floor, a wall, a desk, a body of water, a human face, a human hand, human hair, another human body part, an entire human body, an animal or other living organism, clothing, a sheet of paper, a magazine, a book, a vehicle, a machine or other man-made object, and any other natural or man-made item or group of items present in the real world that can be identified and modeled.
The further you go down the rabbit hole in this patent, the more confusing it becomes. It’s really hard to figure out what is being described until you step back and recognize what it generally applies to: ARKit/ARKit 2.
The video below, titled “ARKit 2 Tutorial: Create an AR Shopping Experience – Scan & Detect Real 3D Objects,” explains ARKit 2 well with respect to scanning and detecting 3D objects, components that are highlighted in Apple’s patent FIG. 9 above.
The video also shows how, in ARKit 2, a scanned object can have an information bubble added above it, which is highlighted in patent FIG. 2 above.
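For readers who want to see how this looks in practice, here is a minimal Swift sketch of ARKit 2’s object-detection flow: a previously scanned object is loaded as an `ARReferenceObject`, world tracking is told to detect it, and a simple text “bubble” is floated above it when recognized. The asset-catalog group name “AR Resources” is an assumption, and the label here is plain 3D text rather than the styled bubble shown in the video.

```swift
import ARKit
import SceneKit

// A sketch of ARKit 2 object detection. It assumes reference objects were
// previously scanned (e.g. with ARObjectScanningConfiguration) and saved
// into an asset catalog group named "AR Resources" (a hypothetical name).
class ObjectDetectionViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        // Load the scanned reference objects and ask world tracking to detect them.
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionObjects =
            ARReferenceObject.referenceObjects(inGroupNamed: "AR Resources", bundle: nil) ?? []
        sceneView.session.run(configuration)
    }

    // Called when ARKit recognizes one of the scanned objects in the camera feed.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let objectAnchor = anchor as? ARObjectAnchor else { return }

        // Float an "information bubble" (simple 3D text) above the detected object.
        let text = SCNText(string: objectAnchor.referenceObject.name ?? "Detected",
                           extrusionDepth: 0.5)
        let textNode = SCNNode(geometry: text)
        textNode.scale = SCNVector3(0.002, 0.002, 0.002)
        // Position the label just above the top of the object's bounding extent.
        textNode.position.y = objectAnchor.referenceObject.extent.y + 0.02
        node.addChildNode(textNode)
    }
}
```

The key pieces here map directly onto the patent’s components: the saved reference object corresponds to the prior scans, and `ARObjectAnchor` is what the session delivers once the physical object has been detected in the live camera feed.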
Seen through the lens of ARKit 2, Apple’s complex patent application published today becomes easier to understand: the invention, in part, describes the same thing.
Apple’s patent application 20200020118, published today by the U.S. Patent Office, was filed back in Q3 2019, with earlier work rolled into this patent going back to mid-2018, when ARKit 2 was introduced at WWDC 2018.
You would have to be an Apple developer deeply experienced in ARKit 2 to recognize whether anything new is being described in the patent filing, or whether it merely covers the original foundation of the first and second generations of ARKit.