Augmented Reality concepts and AR vs VR | Future of AR (2018)
Augmented reality is a technology that allows virtual objects to be placed in the real world in real time, enriching the information we have about the world around us. Imagine looking outside and seeing today’s weather forecast appear before you on the window itself. AR will layer detailed information over what we see around us while still allowing us to navigate through the real environment.

Virtual Reality (VR) seeks to recreate reality in an immersive environment. To accomplish this, users are typically separated from the real world by headsets, often referred to as head-mounted displays (HMDs), which completely block out the user’s surroundings and isolate them from the outside world.
Hardware

A leading example of an AR device is the Microsoft HoloLens. Looking like a sleek pair of futuristic sunglasses, with built-in stereoscopic 3D visuals and three-dimensional sound, the HoloLens is designed to let users bring Windows 10 apps into their world by rendering 2D and 3D holograms in the user’s environment. The HoloLens can turn a blank wall into a resizable monitor; fill a wall or desk with Microsoft Word, email, and other apps; and even create a cute little 3D puppy that sits on the floor wagging its computer-generated tail.
The Meta 2 operates much like the HoloLens, enabling users to manipulate 3D objects with their hands as if they were actually holding them. The HoloLens does this with an internal sensor array, whereas the Meta 2 is tethered and tracks user manipulation using external, mounted depth sensors.



The technical process behind AR

Creating an Augmented Reality experience is a multi-stage process that includes:

·        Development of the 3D models
·        Image tracking
·        Depth sensing
Using AR, you can see 3D models overlaid on the world in front of you. There are two ways to approach creating your first AR experience: create your own 3D content, or start with pre-built models from a 3D library.

3D artists use one of several software programs to create 3D models; the most popular of these include SketchUp, Revit, Cinema 4D, Maya, 3ds Max, and Blender. A 3D model starts as a rough 2D sketch of what the final model will look like. The sketch typically goes through an approval process that refines the idea. Once it has been approved, the modelling begins.

·        A simple box or basic shape takes relatively little time to model.
·        A 3D character that moves and talks requires additional stages of work, as the character’s bones and facial muscles must be rigged for realistic animation.
·        When the 3D model is complete, it must be rendered into a form compatible with an augmented reality experience.
·        The top AR software solutions can accept a variety of file types, including COLLADA, Wavefront OBJ, and STL, as well as 2D image formats and other 3D software integrations. But the bigger the file (determined by the polygon count), the more rendering work is necessary before deploying in AR. Polygons are the flat, straight-edged faces that collectively make up a 3D model’s surface.
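As a rough illustration of what the polygon count means in practice, here is a minimal Python sketch that tallies vertex and face records in a Wavefront OBJ file; each "f" record defines one polygon. The file name model.obj is hypothetical.

```python
def count_obj_polygons(path):
    """Count vertex ('v') and face ('f') records in a Wavefront OBJ file.

    Each 'f' line defines one polygon (triangle, quad, or n-gon); the
    total is a rough proxy for how heavy the model will be in AR.
    """
    vertices = faces = 0
    with open(path) as obj_file:
        for line in obj_file:
            if line.startswith("v "):
                vertices += 1
            elif line.startswith("f "):
                faces += 1
    return vertices, faces

# "model.obj" is a hypothetical file used for illustration.
v, p = count_obj_polygons("model.obj")
print(f"{v} vertices, {p} polygons")
```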

To integrate 3D models, AR needs an engine, and this engine software can be provided through an augmented reality SDK. The AR SDK includes a rendering engine for the 3D model, as well as tracking detection and depth sensing to position it in the ‘real world’.
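As a simplified sketch of what that positioning step amounts to (not any particular SDK’s API): each frame, the rendering engine combines the tracked camera pose with the model’s anchor pose to decide where to draw the model. With poses as 4x4 matrices, for example:

```python
import numpy as np

def model_view_matrix(camera_pose, anchor_pose):
    """World-to-camera transform for a model attached to an anchor.

    Both poses are 4x4 world-space matrices; a real AR SDK supplies
    them every frame from its tracking system (values here are made up).
    """
    return np.linalg.inv(camera_pose) @ anchor_pose

camera = np.eye(4)                   # camera at the world origin
anchor = np.eye(4)
anchor[:3, 3] = [0.0, 0.0, -1.0]     # model anchored 1 m in front of the camera
print(model_view_matrix(camera, anchor))
```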
Motion Tracking

Motion tracking gives you real-time information about the 3D motion of a device. Motion sensors allow your smartphone to understand its position and orientation relative to your environment.
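To make this concrete, here is a deliberately naive Python sketch of dead reckoning from accelerometer samples. Real SDKs fuse camera and IMU data instead, because raw integration drifts within seconds; the sample values below are made up.

```python
import numpy as np

def integrate_motion(accel_samples, dt):
    """Dead-reckon position from accelerometer readings (simplified).

    accel_samples: iterable of 3-vectors, device acceleration in m/s^2
    with gravity already removed. dt: sample interval in seconds.
    """
    velocity = np.zeros(3)
    position = np.zeros(3)
    for a in accel_samples:
        velocity += np.asarray(a, dtype=float) * dt  # v = v + a*dt
        position += velocity * dt                    # p = p + v*dt
    return position

# 100 samples of 0.1 m/s^2 forward acceleration over one second.
print(integrate_motion([[0.1, 0.0, 0.0]] * 100, dt=0.01))
```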
Depth Perception

Depth sensors can detect how far away or close objects are in relation to your device.
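A small sketch of what a depth reading buys you: given camera intrinsics, a single depth pixel can be back-projected into a 3D point in front of the device. The intrinsic values below are illustrative, not from any specific sensor.

```python
def depth_pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a depth-sensor pixel into a 3D camera-space point.

    (u, v): pixel coordinates; depth_m: measured distance in metres;
    fx, fy, cx, cy: pinhole camera intrinsics.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A pixel near the image centre, 1.5 m away, with made-up intrinsics.
print(depth_pixel_to_point(320, 240, 1.5, fx=525.0, fy=525.0, cx=319.5, cy=239.5))
```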
An AR SDK learns and tracks the environment in real time, creating and updating its view of the environment many times per second. The most common ways to achieve this tricky feat are Simultaneous Localization And Mapping (SLAM) tracking, depth tracking, and pattern tracking.
SLAM tracking uses complex algorithms combined with sensor data to build a 3D map of an environment and to calculate the device’s position within that environment. SLAM performs well outdoors because it relies on algorithms and successive camera frames, rather than infrared depth hardware, to generate a consistent map.
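Here is a minimal sketch of the frame-to-frame step at the heart of visual SLAM, written with OpenCV: match features between two successive grayscale camera frames and recover the camera’s relative motion. The intrinsic matrix K and the frames are assumed inputs; a full SLAM system adds mapping, loop closure, and much more.

```python
import cv2
import numpy as np

def relative_pose(frame_prev, frame_next, K):
    """Estimate camera motion between two grayscale frames.

    This is the step a SLAM tracker repeats many times per second.
    K: 3x3 camera intrinsic matrix. Returns rotation R and a
    unit-scale translation direction t.
    """
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(frame_prev, None)
    kp2, des2 = orb.detectAndCompute(frame_next, None)
    # Match features across the two frames and keep the strongest matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # The essential matrix encodes the camera motion; decompose it to R, t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```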
Depth tracking also uses a variety of sensors, many of which use infrared light to generate a map of the environment. This sensor data is combined with data from other sources, such as an accelerometer or GPS, to update the device’s view of the world. Where infrared light is used to measure distance and size within a space, note that it is not suited for outdoor use: infrared works best indoors or in low-light conditions.
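As an illustration of that combination step, here is a toy complementary filter blending an IMU-derived position (fast but drifting) with a GPS fix (slow but absolute). The blending rule and values are illustrative, not any SDK’s actual algorithm.

```python
def complementary_filter(gps_pos, imu_pos, alpha=0.98):
    """Blend two position estimates into one.

    alpha close to 1 trusts the responsive IMU estimate short-term,
    while the GPS term continually pulls the result back toward an
    absolute fix, limiting drift.
    """
    return [alpha * i + (1 - alpha) * g for i, g in zip(imu_pos, gps_pos)]

# IMU integration says 2.3 m travelled; GPS says 2.0 m.
print(complementary_filter(gps_pos=[2.0, 0.0, 0.0], imu_pos=[2.3, 0.1, 0.0]))
```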
Pattern tracking: here, infrared lasers are used to project a pattern of dots, which bounce back to the sensor to give the device an idea of the contours of the surrounding area. The device then keeps track of distance and location in space by constantly recalculating the shape and size of the pattern. If the pattern gets smaller, for instance, it knows the object has moved farther away. Similarly, it tracks the spacing between the individual dots of the pattern to get a sense of an object’s shape, or to determine whether it is being moved or rotated.
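Under a pinhole camera model, the apparent spacing of the projected dots shrinks in inverse proportion to distance, which is exactly the “pattern gets smaller” cue described above. A small illustrative calculation follows; all values are made up, not from a real sensor.

```python
def distance_from_dot_spacing(f_px, spacing_m, apparent_spacing_px):
    """Recover distance from how tightly the projected dots appear packed.

    Pinhole model: apparent_px = f_px * spacing_m / Z, so
    Z = f_px * spacing_m / apparent_px.
    f_px: focal length in pixels; spacing_m: true dot spacing in metres.
    """
    return f_px * spacing_m / apparent_spacing_px

# Dots 1 cm apart appearing 5 px apart with a 600 px focal length -> 1.2 m.
print(distance_from_dot_spacing(f_px=600.0, spacing_m=0.01, apparent_spacing_px=5.0))
```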

