The Archetype platform is organized around three core concepts that work together to power Physical AI applications. These components define how data enters the system, how Newton interprets it, and how insights flow back to your applications. Understanding these concepts will help you configure lenses, manage data, and build end-to-end real-world AI workflows.

Core Components

Lenses

A Lens is a software interface that lets developers configure and interact with Newton. Lenses define how sensor data flows into and out of Newton and specify how Newton should interpret that data for a given use case. Each lens includes:
  • Instructions – Define the lens’s purpose and behavior.
  • Focus – Directs interpretation toward specific objects, events, or concepts.
  • Temporal parameters – Determine how data is analyzed over time.
Lenses use three types of data streams:
  • Input streams – Receive raw sensor data.
  • Output streams – Deliver structured insights.
  • Control streams – Adjust lens behavior dynamically.
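Putting these parts together, a lens definition can be pictured as a single configuration object. The sketch below is illustrative only: the field names (`instructions`, `focus`, `temporal`, the stream lists) are assumptions about the shape of such a payload, not the actual Lens API schema.

```python
# Illustrative sketch of a lens definition. All field names here
# (instructions, focus, temporal, input/output/control stream lists)
# are hypothetical -- consult the Lens API reference for the real schema.

def build_lens_config(instructions, focus, window_seconds,
                      input_streams, output_streams, control_streams=()):
    """Assemble a lens configuration payload from its three core parts."""
    return {
        "instructions": instructions,          # the lens's purpose and behavior
        "focus": list(focus),                  # objects/events/concepts to track
        "temporal": {"window_seconds": window_seconds},  # how data is analyzed over time
        "streams": {
            "input": list(input_streams),      # raw sensor data in
            "output": list(output_streams),    # structured insights out
            "control": list(control_streams),  # runtime behavior adjustments
        },
    }

config = build_lens_config(
    instructions="Count forklifts entering the loading dock",
    focus=["forklift", "dock-door"],
    window_seconds=30,
    input_streams=["rtsp-camera-01"],
    output_streams=["sse-dock-events"],
)
```

The point of the shape, whatever the real schema looks like, is that instructions, focus, and temporal parameters describe *how* to interpret data, while the three stream lists describe *where* data comes from and goes.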
Lenses are the core mechanism that turns physical-world sensor data into actionable intelligence.

Files

Files offer a centralized way to upload and manage data on the Archetype platform. The Files API supports multiple formats, including images (JPEG, PNG), videos (MP4), CSVs, and JSON files, up to 255MB. Uploaded files are securely stored within your organization, making them accessible to your team while remaining private to others. Each file is assigned a unique file ID that you can reference throughout the platform, whether you're passing a video to a lens for analysis or supplying data for fine-tuning and summarization workflows.

Data Streams

Data Streams enable real-time data flow between your sensors, applications, and lenses.
  • Input streams bring sensor data into a lens using connectors such as video readers, CSV readers, RTSP cameras, or Kafka sources.
  • Output streams deliver processed insights to your applications through Server-Sent Events, REST APIs, or WebSockets.
  • Control streams let you adjust lens parameters in real time while it continues processing data.
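For output streams delivered over Server-Sent Events, a client just parses the standard SSE wire format. The framing below (`data:` lines, blank-line event separators) comes from the SSE specification; the JSON payload shape in the sample is a made-up example, since each lens defines its own output.

```python
import json

def parse_sse_events(raw: str):
    """Parse a Server-Sent Events stream body into a list of JSON payloads.

    Follows standard SSE framing: events are separated by blank lines, and
    each 'data:' line carries (part of) the payload. The JSON fields in the
    sample below are hypothetical, not a documented lens output schema.
    """
    events = []
    for block in raw.strip().split("\n\n"):
        data_lines = [line[5:].lstrip() for line in block.splitlines()
                      if line.startswith("data:")]
        if data_lines:
            events.append(json.loads("\n".join(data_lines)))
    return events

sample = (
    'data: {"event": "forklift_entered", "t": 12.4}\n'
    '\n'
    'data: {"event": "forklift_exited", "t": 47.9}\n'
)
events = parse_sse_events(sample)  # two decoded event payloads
```

In practice you would read the stream incrementally (e.g. with an SSE client library) rather than buffering the whole body, but the framing is the same.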
Together, these streams form the end-to-end pipeline for ingesting, interpreting, and delivering Physical AI insights. As a developer, you can start building on the Archetype platform using the Lens APIs, File APIs, and the provided data connectors to wire input, control, and output event streams to a lens.
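As a concrete end-to-end flow: validate a file against the Files API limits described above (supported formats, 255MB cap), upload it, then reference its file ID in a lens input stream. The sketch below only *plans* the requests locally; the paths `/v1/files` and `/v1/lenses`, the payload fields, and the `file_abc123` ID are placeholders, not the real Archetype routes or schema.

```python
import os

MAX_BYTES = 255 * 1024 * 1024  # 255MB Files API upload limit
SUPPORTED = {".jpeg", ".jpg", ".png", ".mp4", ".csv", ".json"}

def validate_upload(path: str, size_bytes: int) -> None:
    """Reject files the Files API would not accept (format or size)."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in SUPPORTED:
        raise ValueError(f"unsupported format: {ext}")
    if size_bytes > MAX_BYTES:
        raise ValueError(f"file exceeds 255MB limit: {size_bytes} bytes")

def plan_pipeline(video_path: str, size_bytes: int, file_id: str):
    """Describe the upload-then-analyze flow as (method, path, payload) steps.

    The endpoint paths and payload fields are hypothetical placeholders;
    a real client would send these requests over HTTPS with authentication.
    """
    validate_upload(video_path, size_bytes)
    return [
        ("POST", "/v1/files", {"filename": video_path}),
        ("POST", "/v1/lenses", {
            "instructions": "Summarize activity in this clip",
            "streams": {"input": [{"type": "video_reader", "file_id": file_id}]},
        }),
    ]

steps = plan_pipeline("dock_cam.mp4", 80 * 1024 * 1024, "file_abc123")
```

Splitting "plan the requests" from "send them" keeps the format/size checks testable without a network, and mirrors the flow in the text: file in, file ID referenced by a lens input stream, insights out.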