Behind The Screen
Ar & Vr

The misperception of 3D perception: Debunking notions from cost to capabilities

August 6, 2022

3D perception is already playing a pivotal role in shaping how our cities are built, how our cars are made and how we engage with the physical world. Smart cities are using insights from busy intersections to inform urban planning, manufacturers are using 3D sensors to automate logistics, and venues are tracking density limits to improve safety. 

But for many people and organizations, this technology is largely misunderstood, and as a result, disregarded — capping 3D perception’s full potential. The misconception that it is overly expensive, with limited usability and “good enough” alternatives, has prompted companies to opt out of implementing 3D technology.

As more and more companies invest in technology to improve efficiency and the bottom line, debunking these misperceptions is the key to unlocking the market’s most powerful tool for insights and automation.

Myth: This is car tech

Most people do not realize how long 3D sensors have been around. The first light-based ranging experiments date back to the 1930s, when the aerospace industry used 3D sensing to study the atmosphere and measure the height of clouds. Radar-based 3D sensing was also used heavily during World War II and was a major factor in the United Kingdom's survival of the Battle of Britain in 1940. Decades later, during the 1971 Apollo 15 mission, 3D sensors were used to map the surface of the moon.

It wasn't until the 2000s that 3D technology reached the automotive industry. Today, most 3D sensors are designed and developed for automotive use, thanks to the involvement of original equipment manufacturers (OEMs) and the vast investment the sector poured into the technology during the 'Great AV Hype' of 2016-2020.


Unfortunately, this meant that advancements in 3D sensors became specialized for one market and the devices became less suitable for other applications. Now that the automated vehicle market is cooling down and the industry is becoming aware that the road to Level 5 autonomy is much longer and tougher than anticipated, the 3D sensor industry is looking to other markets such as retail and security to bridge the gap.

Myth: It’s too expensive

A reputation for high cost precedes 3D perception technology and is usually the first deterrent. In fact, the technology has become increasingly affordable, yet many organizations never explore its possibilities because of this preconceived notion. 3D sensor prices have fallen steadily over the past decade thanks to rapid adoption across industries; even smartphones now ship with this hardware.

The manufacturing of 3D sensors used to be a highly manual process in which each laser and receiver had to be placed by hand by a skilled operator. In the last couple of years, though, thanks to the involvement of large automotive suppliers including Continental, Bosch and Valeo, the process has become automated. Furthermore, the technology has evolved to include mass-manufactured micro-opto-electromechanical systems (MOEMS) and hybrid solid-state devices with vertical-cavity surface-emitting lasers (VCSELs), which do not require the manual work of previous iterations.

On the perception side, the required computing power is increasing as sensors are able to generate more data, but Moore’s law is ensuring that the CPUs are keeping up. Plus, companies like AMD, Intel and NVIDIA are working hard to produce computers that are smaller, more powerful and cheaper with every iteration.


Myth: 2D cameras are good enough

Often considered the cheaper alternative, 2D cameras have been the go-to for most organizations seeking insights on safety and security, the customer journey, and productivity. While they are great for surveillance and retrospective viewing, they are missing an entire dimension of insight: depth. 3D sensors capture that missing dimension, and an accurate understanding of proximity is valuable for improving safety and efficiency across many areas of business operations.
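
As a rough illustration of why native range data matters, here is a minimal NumPy sketch that flags objects inside a safety zone directly from a point cloud; the coordinates and the 1.5 m safety radius are hypothetical, and real deployments would consume live sensor streams rather than a hand-written array:

```python
import numpy as np

# Hypothetical point cloud from a 3D sensor: N x 3 array of (x, y, z)
# coordinates in metres, with the sensor at the origin.
points = np.array([
    [0.5, 2.0, 0.1],
    [1.2, 4.5, 0.3],
    [-0.8, 1.1, 0.2],
])

# Range to each point is a direct Euclidean norm -- measured, not
# inferred, unlike distance estimates derived from a 2D image.
ranges = np.linalg.norm(points, axis=1)

# Flag anything inside a 1.5 m safety radius.
too_close = points[ranges < 1.5]
print(f"nearest object: {ranges.min():.2f} m, "
      f"{len(too_close)} point(s) inside safety zone")
```

A 2D camera would need calibration assumptions or learned depth estimation to produce the same number; here it falls out of the data directly.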

On the other hand, traditional 2D sensors, such as cameras, are passive sensors that require little power and have been around for decades, meaning there are many skilled system integrators available. They also feature color information, an element that most modern 3D active sensors do not provide.

3D sensors such as LiDAR and radar are not meant to replace 2D sensors; they are meant to augment them. In many ways, traditional 2D sensors and 3D sensors are each other's perfect complement. 3D sensors provide range information, work in light-devoid environments, and have been developed far more extensively than their 2D counterparts to counter bad weather; some 3D perception software on the market today uses deep learning and weather-filtering AI to deliver accurate insights even in snow and rain. 2D sensors' strength comes from the aforementioned color information, and they often offer far higher resolution, with 4K and 8K readily available. Additionally, 2D computer vision tools are mature and widely taught around the world.
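
The weather-filtering idea can be sketched with a classic statistical outlier removal pass, since rain and snow tend to show up as sparse, isolated returns; this is a toy NumPy stand-in under assumed parameters (`k`, `std_ratio`), not any vendor's actual pipeline, which would typically use deep learning and spatial indexing:

```python
import numpy as np

def filter_outliers(points, k=4, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbours is
    unusually large -- isolated returns like raindrops, not structure."""
    # Brute-force pairwise distances (fine for toy clouds; production
    # code would use a k-d tree instead).
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    dists.sort(axis=1)
    mean_knn = dists[:, 1:k + 1].mean(axis=1)  # index 0 is self-distance
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]

rng = np.random.default_rng(0)
wall = rng.normal([0.0, 5.0, 1.0], 0.05, size=(50, 3))  # dense structure
rain = rng.uniform(-3.0, 3.0, size=(5, 3))              # sparse clutter
cloud = np.vstack([wall, rain])
clean = filter_outliers(cloud)
```

The dense "wall" points survive because their neighbours are close; the scattered "rain" points have no nearby neighbours and get culled.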

Many 3D sensor manufacturers are now integrating 2D sensors into their units, providing built-in 2D/3D calibration and, in turn, allowing each data point to carry both a 3D location and color information: the best of both worlds.
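
Colorizing a point cloud from a calibrated 2D camera boils down to a pinhole projection. The sketch below uses made-up intrinsics (focal length `f`, principal point `cx`, `cy`) and a dummy 4x4 image to show the idea; a real calibrated unit would also handle lens distortion and points outside the image:

```python
import numpy as np

# Hypothetical calibrated pair: 3D points in the camera frame (metres)
# and a tiny 4x4 RGB image standing in for the 2D sensor.
points = np.array([[0.0, 0.0, 2.0], [0.2, -0.1, 4.0]])
image = np.arange(4 * 4 * 3).reshape(4, 4, 3)  # dummy colour data

# Illustrative intrinsics: focal length (pixels) and principal point.
f, cx, cy = 2.0, 2.0, 2.0

# Pinhole projection: u = f*x/z + cx, v = f*y/z + cy.
z = points[:, 2]
u = (f * points[:, 0] / z + cx).astype(int)
v = (f * points[:, 1] / z + cy).astype(int)

# Attach the colour under each projected pixel to its 3D point,
# giving rows of (x, y, z, r, g, b).
colored = np.hstack([points, image[v, u]])
```

Each output row is exactly the fused data point the paragraph describes: a 3D location plus the color the 2D sensor saw there.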


So what’s next?

Data is the most important factor now for organizations and governments looking to make major operational changes. From justifying spending to optimizing effectiveness, the more comprehensive a dataset is, the better these decisions will be. Arguably, information from 3D sensors will provide the most useful insights for shaping the world of tomorrow. 

As we look to build smarter cities, safer infrastructure and more efficient businesses, understanding 3D perception — and the role it plays in public and private settings — will become crucial. Beyond debunking misperceptions, it is important to continuously challenge what we know the technology to be capable of, and find new ways to improve and implement the right technology to solve long-standing problems. 

Jerone Floor is Vice President of Products & Solutions at Seoul Robotics.

© 2022 behindthescreen.uk - All rights reserved.