14 December 2021, by Kevin Pahlke
Augmented reality – a brief overview
What is augmented reality?
It is useful to begin by clarifying what we mean by augmented reality (AR), since AR is often equated with virtual reality (VR). The two technologies are related, but by no means identical. AR retains the user’s view of the real world while augmenting it with virtual elements. VR, in contrast, completely replaces the real world with an artificial environment.
The first steps towards AR were taken back in the mid-1970s. Since then, AR has continued to evolve and can now be used in almost any industry. It can be found in medicine, trades, industry and business, among others. AR owes this penetration to its flexibility, as this technology allows for the development of both mobile and web applications and applications for AR glasses.
Augmented reality can help support or simplify complex workflows, routines and processes. For example, mechanics and engineers can use AR to break down complex machines into individual parts, view each required work step individually and project these steps onto the real object in order to perform them. Other areas of application include navigation, interior design, architecture and product visualisation, to name but a few.
But how does AR work?
In short, the process requires both hardware and software components. The hardware components must include some sensor technologies, such as camera and GPS systems, in order to capture images of the real-world environment. This information is then enriched with object data, so the real world is displayed together with the virtual world. The user can interact with real or virtual objects via interfaces such as touchscreens, microphones and headphones. Objects are displayed in 3D, which allows the user to view them from all sides. Users view displays and interact with objects in real time.
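The interplay of these components can be pictured as a simple frame loop. The following Python sketch is purely conceptual; all function and field names are illustrative stand-ins, not the API of any real AR framework such as ARCore or ARKit:

```python
from dataclasses import dataclass

# Conceptual AR frame loop: sensor input is captured, the device pose
# is estimated, and virtual objects are composited onto the real-world
# image for display. All names here are illustrative.

@dataclass
class Pose:
    x: float  # device position in world coordinates (metres)
    y: float
    z: float

def capture_frame(t: int) -> dict:
    """Stand-in for the camera and GPS hardware: returns a fake reading."""
    return {"image": f"frame-{t}", "gps": (51.0 + t * 1e-5, 7.0)}

def estimate_pose(frame: dict) -> Pose:
    """Stand-in for tracking software: derive a pose from sensor data."""
    lat, lon = frame["gps"]
    return Pose(x=lat, y=0.0, z=lon)

def composite(frame: dict, pose: Pose, virtual_objects: list) -> str:
    """Overlay virtual objects on the camera image for display."""
    overlays = ", ".join(f"{name}@({pose.x:.5f},{pose.z:.1f})"
                         for name in virtual_objects)
    return f"{frame['image']} + [{overlays}]"

def run_loop(num_frames: int) -> list:
    rendered = []
    for t in range(num_frames):
        frame = capture_frame(t)     # hardware: camera / GPS sensors
        pose = estimate_pose(frame)  # software: tracking
        rendered.append(composite(frame, pose, ["chair"]))  # real-time display
    return rendered

print(run_loop(2))
```

In a real application, each of these three stand-in functions is provided by the SDK and runs many times per second to keep the display in real time.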
How is it possible to link virtual objects to objects in the real world?
The way real and virtual objects are linked depends strongly on technical capabilities and the field of application and is achieved with the help of different tracking methods. These methods serve as triggers to launch the AR or position virtual objects in the real world.
Tracking methods can be divided into the following groups:
- Tracking based on a specific image or object
- Analysis of the real environment
- Recognition of human bodies
- Other image recognition methods
- Tracking based on the position of transmitters
The following methods reflect some of these groups:
1. With image targets, a specific image serves as the trigger. The camera registers the image and launches the augmented reality app. In addition, the image serves as an anchor onto which the virtual content is positioned.
2. Markerless tracking works without additional reference objects. The AR experience can be launched directly from a smartphone or AR glasses.
3. A world map captures the current environment of an augmented reality application. A point in space is specified via detected planes, feature points or the geometry captured by a depth sensor. The digital content is then positioned relative to this point.
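The image-target method (method 1 above) can be illustrated with a toy matcher. Real SDKs use robust feature matching; this sketch simply compares raw grayscale pixels against a stored trigger image, and every name and threshold in it is an illustrative assumption:

```python
# Toy image-target tracking: the camera frame is compared against a
# stored trigger image. A match both launches the AR content and
# anchors it at the target's location. Illustration only.

TARGET = [
    [0, 255, 0],
    [255, 0, 255],
    [0, 255, 0],
]

def mean_abs_diff(frame, target):
    total = sum(abs(f - t)
                for fr, tr in zip(frame, target)
                for f, t in zip(fr, tr))
    return total / (len(target) * len(target[0]))

def detect_target(frame, target=TARGET, threshold=10.0):
    """Return an anchor (row, col of the target centre) if the frame matches."""
    if mean_abs_diff(frame, target) <= threshold:
        return (len(target) // 2, len(target[0]) // 2)  # place content here
    return None  # no trigger: the AR content is not launched

noisy_match = [[2, 250, 4], [253, 1, 255], [0, 251, 3]]
print(detect_target(noisy_match))      # close enough -> (1, 1)
print(detect_target([[128] * 3] * 3))  # no match -> None
```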
Implementation of AR using the example of ARCore
Using a mobile device’s camera, scenes of a real object are captured and augmented with virtual content. This virtual content is created within the application. Many different software development kits (SDKs) are available for the implementation of AR applications. Different SDKs work better depending on the type of application and the required functionalities.
Three basic concepts are defined in ARCore: motion tracking, environmental understanding and lighting estimation.
Motion tracking enables the mobile device to determine its position relative to the real environment and its movement in three-dimensional space. For this purpose, Google developed concurrent odometry and mapping (COM), which combines visual feature-based tracking with the device’s inertial sensors. The camera image is used to identify visually distinct features, from which changes in the camera’s position and orientation relative to its starting pose are calculated. In addition, the device’s acceleration and rotation rate are measured by the built-in inertial measurement unit (IMU), which provides a second, independent estimate of the camera’s position and orientation relative to objects in the real world. If the device has a depth sensor, depth values can be added to the features to capture the surroundings more accurately.
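The fusion of the two estimates can be illustrated with a toy complementary filter along a single axis. This is a deliberately simplified sketch of the idea, not ARCore’s actual COM algorithm; the function names and the blending weight are illustrative assumptions:

```python
# Toy sensor fusion: a visual feature-based position estimate is
# blended with a position propagated from IMU acceleration readings.
# One axis only; real tracking works on full 6-DoF poses.

def integrate_imu(position, velocity, acceleration, dt):
    """Propagate position from the IMU's acceleration measurement."""
    velocity = velocity + acceleration * dt
    position = position + velocity * dt
    return position, velocity

def fuse(visual_position, imu_position, visual_weight=0.8):
    """Blend camera-based and IMU-based estimates (complementary filter)."""
    return visual_weight * visual_position + (1 - visual_weight) * imu_position

# One tracking step along a single axis (metres, seconds):
pos, vel = 0.0, 1.0  # start at the origin, moving at 1 m/s
imu_pos, vel = integrate_imu(pos, vel, acceleration=0.0, dt=0.1)
visual_pos = 0.12    # slightly different estimate from the camera image
print(fuse(visual_pos, imu_pos))
```

The weight reflects how much each source is trusted; visual tracking is accurate but can lose features, while the IMU is always available but drifts over time.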
Environmental understanding refers to the way the device registers the size and position of surfaces. Individual points are aggregated into point clouds, creating a virtual image of the environment. The more information (feature points) available, the more accurate the virtual image becomes.
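A very simplified sketch of this idea: feature points are collected into a point cloud, and a horizontal surface (a floor or table top) is inferred where many points share roughly the same height. Real SDKs fit planes far more robustly; the resolution and threshold below are illustrative assumptions:

```python
from collections import Counter

# Toy environmental understanding: bucket the heights of 3D feature
# points and report a horizontal plane where enough points agree.

def detect_horizontal_plane(points, resolution=0.05, min_points=4):
    """Return the estimated height of the dominant horizontal plane.

    points: list of (x, y, z) feature points, y = height in metres.
    Heights are bucketed to `resolution`; a plane is reported only if
    at least `min_points` feature points support it.
    """
    buckets = Counter(round(y / resolution) for _, y, _ in points)
    bucket, support = buckets.most_common(1)[0]
    if support < min_points:
        return None  # not enough feature points yet: keep scanning
    return bucket * resolution

cloud = [(0.1, 0.74, 0.2), (0.3, 0.76, 0.1), (0.2, 0.75, 0.4),
         (0.5, 0.74, 0.3), (0.9, 1.30, 0.2)]  # four points near a table top
print(detect_horizontal_plane(cloud))
```

Note how the `min_points` check mirrors the statement above: with too few feature points, no reliable surface can be reported, and the estimate improves as more points arrive.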
Taking the lighting conditions into account can increase the realism of the superimposed objects. Refraction, shadows and the direction of the light source are all considered based on the viewing angle. Anchors are used to track the position of a virtual object over an extended period of time. New environmental information is constantly added, and the object’s position is updated continuously. The anchor ensures that the virtual object remains stable in space, even when the camera moves.
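The anchoring principle can be sketched in a few lines: the virtual object keeps a fixed position in world coordinates, and only its position relative to the camera is recomputed each frame. The translation-only camera below is an illustrative simplification of a full six-degrees-of-freedom pose:

```python
# Toy anchor: the object is fixed in world space; its camera-relative
# position is recomputed from the current camera pose every frame.

def world_to_camera(point_world, camera_position):
    """Express a world-space point in camera-relative coordinates."""
    return tuple(p - c for p, c in zip(point_world, camera_position))

anchor = (1.0, 0.0, 2.0)  # virtual object fixed in the world (metres)

# The camera moves, but the anchor's world position never changes:
for camera_position in [(0.0, 0.0, 0.0), (0.5, 0.0, 0.5), (1.0, 0.0, 1.0)]:
    relative = world_to_camera(anchor, camera_position)
    print(relative)  # where to render the object relative to the camera
```

Because only the camera-relative coordinates change, the object appears to stay put in the room while the user walks around it.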
Augmented reality has enormous potential, and we cannot currently predict the full impact of this technology. As existing technologies continue to improve and new ones rapidly develop, AR will become an integral part of many different industries and play a major role in the lives of individual people.
Would you like to learn more about exciting topics from the world of adesso? Then check out our latest blog posts.