augmented-reality
Is there a good tutorial for implementing an augmented reality iPhone application? [closed]
I doubt exactly such a thing exists, but what you need to do is look at the location and camera frameworks for the iPhone and go from there. Basically, you will create a UIImagePickerController (the camera class) and overlay information on the view via a custom .cameraOverlayView (a property of UIImagePickerController as of iOS 3.0). … Read more
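The overlay approach described above can be sketched as follows. This is a minimal, illustrative example (the class and label are hypothetical, and it must run on a real device with a camera):

```swift
import UIKit

// Sketch: present the camera with a custom overlay view on top of the feed.
final class CameraOverlayViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func presentCameraWithOverlay() {
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }

        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.showsCameraControls = false   // hide the default shutter UI

        // Any UIView works as the overlay; draw augmented content here.
        let overlay = UIView(frame: view.bounds)
        overlay.backgroundColor = .clear
        let label = UILabel(frame: CGRect(x: 20, y: 60, width: 280, height: 40))
        label.text = "Augmented info goes here"
        label.textColor = .white
        overlay.addSubview(label)

        picker.cameraOverlayView = overlay
        present(picker, animated: true)
    }
}
```

For a true AR feel you would update the overlay's subviews from location/heading callbacks (Core Location) rather than leaving them static.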
ARKit – Get current position of ARCamera in a scene
Set yourself as the ARSession.delegate. Then you can implement session(_:didUpdate:), which will give you an ARFrame for every frame processed in your session. The frame has a camera property that holds the camera's transform, rotation, and position. func session(_ session: ARSession, didUpdate frame: ARFrame) { // Do something with the new transform let … Read more
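A fuller sketch of that delegate approach might look like this (the CameraTracker class name is illustrative; any NSObject can be the delegate):

```swift
import ARKit

// Sketch: receive the camera transform for every processed frame.
final class CameraTracker: NSObject, ARSessionDelegate {

    func start(_ session: ARSession) {
        session.delegate = self
        session.run(ARWorldTrackingConfiguration())
    }

    // Called once per processed frame.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let transform = frame.camera.transform          // 4x4 world transform
        // Translation lives in the last column of the matrix.
        let position = SIMD3<Float>(transform.columns.3.x,
                                    transform.columns.3.y,
                                    transform.columns.3.z)
        let rotation = frame.camera.eulerAngles         // pitch, yaw, roll
        print("camera at \(position), euler \(rotation)")
    }
}
```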
What’s the difference between using ARAnchor to insert a node and directly insert a node?
Update: As of iOS 11.3 (aka “ARKit 1.5”), there is a difference between adding an ARAnchor to the session (and then associating SceneKit content with it through ARSCNViewDelegate callbacks) and just placing content in SceneKit space. When you add an anchor to the session, you’re telling ARKit that a certain point in world space is … Read more
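The distinction can be made concrete with a short sketch; the names here are illustrative, and the geometry is arbitrary:

```swift
import ARKit
import SceneKit

final class ContentPlacer: NSObject, ARSCNViewDelegate {

    func place(in sceneView: ARSCNView, at transform: simd_float4x4) {
        // Approach 1: add an ARAnchor. ARKit keeps refining this world-space
        // point, and you attach SceneKit content in the delegate callback below.
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)

        // Approach 2: place a node directly in SceneKit space. It renders at
        // the same spot, but ARKit does not track or correct it as its
        // understanding of the world improves.
        let node = SCNNode(geometry: SCNSphere(radius: 0.02))
        node.simdTransform = transform
        sceneView.scene.rootNode.addChildNode(node)
    }

    // ARSCNViewDelegate: pairs SceneKit content with the anchor from approach 1.
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        return SCNNode(geometry: SCNSphere(radius: 0.02))
    }
}
```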
How to begin with augmented reality? [closed]
Despite being a popular buzzword, augmented reality can be built from a handful of distinct algorithms, each of which can be learned separately. Usually it covers: planar object detection (of a marker or a previously trained object) using SURF/SIFT/FAST descriptors and RANSAC for homography matrix calculation; storing trained objects in a DB (KD-trees); camera position estimation; augmenting the 3D model with custom … Read more
Face filter implementation like MSQRD/SnapChat [closed]
I would recommend going with Core Image and CIDetector. https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_detect_faces/ci_detect_faces.html It has been available since iOS 5 and has great documentation. Example of creating a face detector: CIContext *context = [CIContext contextWithOptions:nil]; // 1 NSDictionary *opts = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh }; // 2 CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:opts]; // 3 opts = … Read more
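The Objective-C excerpt above translates naturally to Swift; this sketch returns the detected face features, which a filter app would then use to position overlays (the function name and feature options are illustrative):

```swift
import CoreImage

// Sketch: build a CIDetector and return any faces found in the image.
func detectFaces(in image: CIImage) -> [CIFaceFeature] {
    let context = CIContext(options: nil)                                     // 1
    let options: [String: Any] = [CIDetectorAccuracy: CIDetectorAccuracyHigh] // 2
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: context,
                              options: options)                               // 3

    // Also ask for smile / eye-blink classification, useful for filters.
    let featureOptions: [String: Any] = [CIDetectorSmile: true,
                                         CIDetectorEyeBlink: true]
    return detector?.features(in: image, options: featureOptions)
        .compactMap { $0 as? CIFaceFeature } ?? []
}
```

Each CIFaceFeature exposes bounds plus eye and mouth positions, which is what you would anchor masks and stickers to.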
Computing camera pose with homography matrix based on 4 coplanar points
If you have your homography, you can calculate the camera pose with something like this: void cameraPoseFromHomography(const Mat& H, Mat& pose) { pose = Mat::eye(3, 4, CV_32FC1); // 3×4 matrix, the camera pose float norm1 = (float)norm(H.col(0)); float norm2 = (float)norm(H.col(1)); float tnorm = (norm1 + norm2) / 2.0f; // Normalization value Mat p1 = … Read more
How to detect vertical planes in ARKit?
Edit: This is now supported as of ARKit 1.5 (iOS 11.3); simply use .vertical. I have kept the previous post below for historical purposes. TL;DR: Vertical plane detection is not (yet) a feature that exists in ARKit. The naming of .horizontal suggests that this feature could be being worked on and might be added in the future. … Read more
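On ARKit 1.5+, opting in to vertical planes is a one-line configuration change; a minimal sketch (the function name is illustrative):

```swift
import ARKit

// Sketch: a world-tracking configuration that detects walls as well as floors.
func makeConfiguration() -> ARWorldTrackingConfiguration {
    let config = ARWorldTrackingConfiguration()
    config.planeDetection = [.horizontal, .vertical]  // .vertical requires iOS 11.3+
    return config
}

// Detected walls then arrive through the usual delegate callbacks as
// ARPlaneAnchor instances whose alignment == .vertical.
```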
cvc-complex-type.2.4.a: Invalid content was found starting with element 'base-extension'. One of '{layoutlib}' is expected
Upgrade your com.android.tools.build:gradle in your build.gradle file to the latest version, e.g. classpath 'com.android.tools.build:gradle:7.0.4', and upgrade your gradle-wrapper.properties to the latest version, e.g. #Tue Apr 12 23:39:17 AEST 2022 distributionBase=GRADLE_USER_HOME distributionUrl=https\://services.gradle.org/distributions/gradle-7.6-bin.zip distributionPath=wrapper/dists zipStorePath=wrapper/dists zipStoreBase=GRADLE_USER_HOME Optionally, you can upgrade both through the Project Structure UI