One of the most practical ways the growing convergence of AI and IoT (AIoT) will impact everyday life in the near term is in how we interact with the world: unlocking doors, entering an office building, checking in at an airport, and supporting elder and in-patient care.
Integrating AI with 3D sensing
IoT devices are getting smarter through the integration of AI that extracts insights from data. At the same time, when an IoT device must act immediately, the user experience depends on how well the AI algorithm performs, which in turn depends on training with large amounts of quality data.
Unlike facial recognition, with its vast set of available data, most other applications lack the data pool required to produce effective AI results.
So how can this be improved? By combining highly efficient edge AI processing with 3D sensing, companies can create endpoint systems that interact intelligently and unobtrusively with people.
Methods to use the latest tech
In this webinar, the presenters discuss market trends and the technology demands of real-world deployments, taking a deep dive into smart building systems that combine multiple 3D sensing modalities with image sensors to deliver vastly improved outcomes across a variety of use cases.
The webinar will present straightforward methods for building these next-generation systems using reference designs that employ the latest RGB-IR image sensors, 3D sensing, and edge AI vision systems-on-chip (SoCs).
Highlights of the webinar
Join to learn how to take full advantage of:
- Structured-light sensing for high-resolution 3D biometric identification
- 4K RGB-IR CMOS image sensor technology for camera-based smart building systems
- Occupancy sensing using imaging or 3D sensing
- The combination of edge AI vision processing and 3D sensing for the real world
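To give a concrete sense of the occupancy-sensing use case above, here is a minimal sketch of depth-based occupancy detection: a 3D sensor's depth frame is compared against a stored background model, and a room is flagged as occupied when enough pixels are significantly closer than the empty-room baseline. The function name, thresholds, and frame sizes are illustrative assumptions, not part of any vendor's reference design.

```python
import numpy as np

def detect_occupancy(depth_frame, background, min_diff_mm=300, min_pixels=500):
    """Flag occupancy when enough pixels are closer than the background model.

    depth_frame, background: 2D arrays of per-pixel distances in millimetres.
    min_diff_mm: a pixel counts as foreground if it is at least this much
                 closer than the stored empty-room background.
    min_pixels: number of foreground pixels that constitutes an occupant.
    """
    # Depth value 0 typically means "no return" from the sensor; ignore those.
    valid = (depth_frame > 0) & (background > 0)
    foreground = valid & ((background - depth_frame) > min_diff_mm)
    return int(foreground.sum()) >= min_pixels

# Example: empty-room baseline at 3 m, with a person-sized blob at 1.5 m.
background = np.full((240, 320), 3000, dtype=np.int32)
frame = background.copy()
frame[100:180, 140:200] = 1500  # 80 x 60 = 4800 foreground pixels

print(detect_occupancy(frame, background))       # True
print(detect_occupancy(background, background))  # False
```

A real smart-building system would refine this with a learned background model and edge AI inference to distinguish people from moved furniture, but the core signal is the same: depth deltas against a baseline.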