How to put an ARReferenceImage into Core Data?

Problem Description

I have built an app with ARKit 2.0, but I don't know how to store the reference images in Core Data, because every time I launch the app it creates the reference images again.

var arIngredients = Set<ARReferenceImage>()
let arImage = ARReferenceImage((image?.cgImage)!, orientation: CGImagePropertyOrientation.up, physicalWidth: 0.5)

self.arIngredients.insert(arImage)

configuration.trackingImages = self.arIngredients

Tags: swift4, arkit

Solution


Assuming I have interpreted your needs correctly, you can do something like the following:

First we will create an Entity in Core Data, which in this example will be called ARCustomImage and will have two attributes:

  1. imageData (Binary Data) - where we will store our snapshot image.
  2. name (String) - a unique ID for our Target Image.
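
For reference, a hand-written NSManagedObject subclass matching this entity might look roughly like the following sketch (only needed if the entity's Codegen is set to Manual/None; with Class Definition codegen Xcode generates the equivalent for you):

```swift
import CoreData

//A Sketch Of The NSManagedObject Subclass For The ARCustomImage Entity
@objc(ARCustomImage)
class ARCustomImage: NSManagedObject {

    //The PNG Data Of Our Snapshot
    @NSManaged var imageData: Data?

    //The Unique Target Name Of Our ARReferenceImage
    @NSManaged var name: String?
}
```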

(screenshot: the ARCustomImage entity with its two attributes in the Core Data model editor)

Make sure that you set the imageData attribute's properties like so:

(screenshot: the imageData attribute settings in the Data Model inspector)

Let's say then that we want to save an ARSCNView snapshot to Core Data. We can do something like this:

/// Saves The Data Needed To Store Our ARReferenceImage
@IBAction func saveReferenceImage(){
    
    //1. Take A SnapShot Of The ARSCNView
    let screenShot = self.augmentedRealityView.snapshot()
    
    //2. Save To CoreData
    if let arImageEntry = NSEntityDescription.entity(forEntityName: ENTITY_NAME, in: databaseContext),
        let newCustomARImage = NSManagedObject(entity: arImageEntry, insertInto: databaseContext) as? ARCustomImage{
        
        //a. Set The Target Name Of Our ARReferenceImage
        newCustomARImage.name = UUID().uuidString
        
        //b. Store Our Image To The Stack
        newCustomARImage.imageData = UIImagePNGRepresentation(screenShot)
        
        do {
            try databaseContext.save()
            
            //c. Load The Custom Images Again So We Can Start Our Configuration With Custom Images
            loadExistingReferenceImages()
            
        } catch {
            print("Error Saving Context: \(error)")
        }
        
    }else{
        print("Error Creating The ARCustomImage Entity")
    }
}

We can then check to see if we have any saved data and then convert them to a Set of ARReferenceImage like so:

/// Loads All Our ARReferenceImages From CoreData
func loadExistingReferenceImages(){
    
    var trackingImages = [ARReferenceImage]()
    
    //1. Generate A Fetch Request For Our Custom ARImages
    let request = NSFetchRequest<NSFetchRequestResult>(entityName: ENTITY_NAME)
    request.returnsObjectsAsFaults = false
    
    do {
        
        //2. Get Our Saved Data
        let savedData = try databaseContext.fetch(request) as! [ARCustomImage]
        
        //3. Loop Through Them & Create Our ARReferenceImage
        for customImage in savedData {
            
            //b. Get The Target Name & Image Back From Our Data & Convert To A CGImage
            if let targetName = customImage.name,
                let image = UIImage(data: customImage.imageData!),
                let arCGImage = image.cgImage {
                
                //c. Get The Width Of Our Image
                let widthInCM: CGFloat = CGFloat(arCGImage.width) / CGFloat(47)
                let widthInMetres: CGFloat = widthInCM * 0.01
                
                //d. Create An ARReferenceImage
                let arReferenceImage = ARReferenceImage(arCGImage,
                                                        orientation: cgImagePropertyOrientation(image.imageOrientation),
                                                        physicalWidth: widthInMetres)
                
                arReferenceImage.name = targetName
                
                //e. Add It To Our Tracking Images Array
                trackingImages.append(arReferenceImage)
                
                print("""
                    ARReferenceGenerated
                    Name = \(arReferenceImage.name!)
                    Width = \(arReferenceImage.physicalSize.width)
                    """)
                
            }
        }
        
    } catch { print("Error Loading Data: \(error)") }
    
    //4. Set Our ARSession Configuration Detection Images Using The Images From CoreData
    if !trackingImages.isEmpty{
        print("Loading An Existing Total Of \(trackingImages.count) ARReference Images")
        configuration.detectionImages = Set(trackingImages)
        augmentedRealitySession.run(configuration, options:  [.resetTracking, .removeExistingAnchors ])
    }else{
        //Run A Standard Session As No Detection Images Exist
        print("No ARReference Images Have Been Saved To CoreData")
        augmentedRealitySession.run(configuration, options:  [.resetTracking, .removeExistingAnchors])
    }
 
}
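
As a side note, the 47 in step c is an assumption on my part: a rough pixels-per-centimetre figure for the capture device's screen, used to convert the snapshot's pixel width into the physical width (in metres) that ARReferenceImage expects. In other words:

```swift
import CoreGraphics

//A Sketch Of The Width Conversion Used Above
//(the 47 pixels-per-cm figure is an assumed, device-specific value)
let pixelWidth: CGFloat = 940                 //Width Of The Snapshot In Pixels
let widthInCM: CGFloat = pixelWidth / 47      //Convert Pixels To Centimetres
let widthInMetres: CGFloat = widthInCM * 0.01 //ARKit Expects Metres
```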

Rather than trying to explain each section, I have commented all the code, so it should be fairly self-explanatory (here it is in full), assuming of course you have set up Core Data via the default template:

import CoreData
import ARKit
import UIKit

extension ViewController{
    
    //------------------------------------------------
    //MARK: Get CGImagePropertyOrientation From UIImage
    //------------------------------------------------
    
    /// Converts A UIImageOrientation To A CGImagePropertyOrientation
    ///
    /// - Parameter orientation: UIImageOrientation
    /// - Returns: CGImagePropertyOrientation
    func cgImagePropertyOrientation(_ orientation: UIImageOrientation) -> CGImagePropertyOrientation {
        switch orientation {
        case .up:
            return .up
        case .upMirrored:
            return .upMirrored
        case .down:
            return .down
        case .downMirrored:
            return .downMirrored
        case .leftMirrored:
            return .leftMirrored
        case .right:
            return .right
        case .rightMirrored:
            return .rightMirrored
        case .left:
            return .left
        }
    }
    
}

//--------------------------
// MARK: - ARSCNViewDelegate
//--------------------------

extension ViewController: ARSCNViewDelegate{
    
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        
        //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
        guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
        
        //2. Get The Targets Name
        let name = currentImageAnchor.referenceImage.name!
        
        //3. Get The Targets Width & Height
        let width = currentImageAnchor.referenceImage.physicalSize.width
        let height = currentImageAnchor.referenceImage.physicalSize.height
        
        //4. Log The Reference Images Information
        print("""
            Image Name = \(name)
            Image Width = \(width)
            Image Height = \(height)
            """)
        
        //5. Create A Plane Geometry To Cover The ARImageAnchor
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.opacity = 0.25
        planeNode.geometry = planeGeometry
        
        //6. Rotate The PlaneNode To Horizontal
        planeNode.eulerAngles.x = -.pi/2
        
        //7. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode)
        
        //8. Create An SCNBox
        let boxNode = SCNNode()
        let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
        
        //9. Create A Different Colour For Each Face
        let faceColours = [UIColor.red, UIColor.green, UIColor.blue, UIColor.cyan, UIColor.yellow, UIColor.gray]
        var faceMaterials = [SCNMaterial]()
        
        //10. Apply It To Each Face
        for face in 0 ..< 6{
            let material = SCNMaterial()
            material.diffuse.contents = faceColours[face]
            faceMaterials.append(material)
        }
        boxGeometry.materials = faceMaterials
        boxNode.geometry = boxGeometry
        
        //11. Set The Boxes Position To Be Placed On The Plane (node.x + box.height)
        boxNode.position = SCNVector3(0 , 0.05, 0)
        
        //12. Add The Box To The Node
        node.addChildNode(boxNode)
    }
}

class ViewController: UIViewController {
    
    //1. Create A Reference To Our ARSCNView In Our Storyboard Which Displays The Camera Feed
    @IBOutlet weak var augmentedRealityView: ARSCNView!
    @IBOutlet weak var takeSnapshotButton: UIButton!
    
    //2. Create Our ARWorld Tracking Configuration
    let configuration = ARWorldTrackingConfiguration()
    
    //3. Create Our Session
    let augmentedRealitySession = ARSession()
    
    //4. Create An Array To Store Our Reference Images
    var customReferenceImages = [ARReferenceImage]()
    
    //5. Set Up Our CoreData Variable
    let ENTITY_NAME = "ARCustomImage"
    var databaseContext: NSManagedObjectContext!
    
    //-----------------------
    // MARK: - View LifeCycle
    //-----------------------
    
    override func viewDidLoad() {
        
        super.viewDidLoad()
        
        //1. Get Reference To Our AppDelegate And NSManagedObjectContext
        let appDelegate = UIApplication.shared.delegate as! AppDelegate
        databaseContext = appDelegate.persistentContainer.viewContext
        
        //2. Setup The ARSession
        setupARSession()
        
    }
    
    override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() }
    
    //-----------------
    //MARK: - ARSession
    //-----------------
    
    /// Sets Up The ARSession
    func setupARSession(){
        
        //1. Set The AR Session
        augmentedRealityView.session = augmentedRealitySession
        augmentedRealityView.delegate = self
        
        //2. Load Our Existing Data If It Exists & Set Our Configuration
        loadExistingReferenceImages()
    }
    
    //-------------------------------------------
    // MARK: - Loading & Saving ARReferenceImages
    //--------------------------------------------
    
    /// Loads All Our ARReferenceImages From CoreData
    func loadExistingReferenceImages(){
        
        var trackingImages = [ARReferenceImage]()
        
        //1. Generate A Fetch Request For Our Custom ARImages
        let request = NSFetchRequest<NSFetchRequestResult>(entityName: ENTITY_NAME)
        request.returnsObjectsAsFaults = false
        
        do {
            
            //2. Get Our Saved Data
            let savedData = try databaseContext.fetch(request) as! [ARCustomImage]
            
            //3. Loop Through Them & Create Our ARReferenceImage
            for customImage in savedData {
                
                //b. Get The Target Name & Image Back From Our Data & Convert To A CGImage
                if let targetName = customImage.name,
                    let image = UIImage(data: customImage.imageData!),
                    let arCGImage = image.cgImage {
                    
                    //c. Get The Width Of Our Image
                    let widthInCM: CGFloat = CGFloat(arCGImage.width) / CGFloat(47)
                    let widthInMetres: CGFloat = widthInCM * 0.01
                    
                    //d. Create An ARReferenceImage
                    let arReferenceImage = ARReferenceImage(arCGImage,
                                                            orientation: cgImagePropertyOrientation(image.imageOrientation),
                                                            physicalWidth: widthInMetres)
                    
                    arReferenceImage.name = targetName
                    
                    //e. Add It To Our Tracking Images Array
                    trackingImages.append(arReferenceImage)
                    
                    print("""
                        ARReferenceGenerated
                        Name = \(arReferenceImage.name!)
                        Width = \(arReferenceImage.physicalSize.width)
                        """)
                    
                }
            }
            
        } catch { print("Error Loading Data: \(error)") }
        
        //4. Set Our ARSession Configuration Detection Images Using The Images From CoreData
        if !trackingImages.isEmpty{
            print("Loading An Existing Total Of \(trackingImages.count) ARReference Images")
            configuration.detectionImages = Set(trackingImages)
            augmentedRealitySession.run(configuration, options:  [.resetTracking, .removeExistingAnchors ])
        }else{
            //Run A Standard Session As No Detection Images Exist
            print("No ARReference Images Have Been Saved To CoreData")
            augmentedRealitySession.run(configuration, options:  [.resetTracking, .removeExistingAnchors])
        }
        
        
    }
    
    /// Saves The Data Needed To Store Our ARReferenceImage
    @IBAction func saveReferenceImage(){
        
        //1. Take A SnapShot Of The ARSCNView
        let screenShot = self.augmentedRealityView.snapshot()
        
        //2. Save To CoreData
        if let arImageEntry = NSEntityDescription.entity(forEntityName: ENTITY_NAME, in: databaseContext),
            let newCustomARImage = NSManagedObject(entity: arImageEntry, insertInto: databaseContext) as? ARCustomImage{
            
            //a. Set The Target Name Of Our ARReferenceImage
            newCustomARImage.name = UUID().uuidString
            
            //b. Store Our Image To The Stack
            newCustomARImage.imageData = UIImagePNGRepresentation(screenShot)
            
            do {
                try databaseContext.save()
                
                //c. Load The Custom Images Again So We Can Start Our Configuration With Custom Images
                loadExistingReferenceImages()
                
            } catch {
                print("Error Saving Context: \(error)")
            }
            
        }else{
            print("Error Creating The ARCustomImage Entity")
        }
    }
}

Please note that this is just a very crude working example, and isn't optimised in any shape or form.

Having said this, it should definitely point you in the right direction...
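
One last caveat: the code above uses Swift 4 naming. If you are building with Swift 4.2 or later, two of the APIs used here have been renamed, so you would write:

```swift
//Swift 4.2+ Renames For The APIs Used Above:

//1. UIImagePNGRepresentation(screenShot) Becomes An Instance Method:
let pngData = screenShot.pngData()

//2. UIImageOrientation Becomes UIImage.Orientation, So The Helper Signature Becomes:
//func cgImagePropertyOrientation(_ orientation: UIImage.Orientation) -> CGImagePropertyOrientation
```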

