The Archetype platform is designed to help you build Physical AI systems that perceive, reason, and act in the real world. It is powered by a proprietary foundation model, Newton, which fuses multimodal sensor data (optical, radar, acoustic, motion, temperature, and more) to enable real-time perception, reasoning, and decision-making across machines, environments, and people.

About Newton

Newton is Archetype AI’s foundation model designed specifically for understanding the physical world through sensor data, a fundamentally different problem than the one most AI models are built for. While large language models are trained predominantly on text (roughly 80%) with images and video as secondary modalities, Newton inverts this ratio: it is trained on approximately 50% physical sensor data (vibration, temperature, electrical current, motion), with video and text as supporting inputs. The architectural difference matters too. Standard multimodal models treat each input type separately, processing text in one pathway and images in another. Newton instead fuses hundreds of sensor modalities together and learns to reason across them simultaneously, because in the physical world, understanding a machine failure or a safety incident requires correlating vibration patterns, power draw, temperature, and visual context rather than analyzing each in isolation. The result is a model that can interpret raw sensor streams and respond to natural language queries about physical systems, so someone can ask “what’s causing this motor to overheat?” instead of building a custom ML pipeline for that specific question.
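As a rough illustration of that workflow, the sketch below sends one natural-language question together with references to several sensor streams in a single request. The endpoint, payload fields, and stream identifiers are hypothetical placeholders, not the actual Archetype API; the point is simply that one query spans modalities that would otherwise each need their own pipeline.

```python
import requests

# Hypothetical example only: the URL, payload shape, and stream IDs below are
# illustrative placeholders, not the real Archetype/Newton API.
API_URL = "https://api.example.com/newton/query"

payload = {
    # One question spans several sensor modalities at once.
    "prompt": "What is causing this motor to overheat?",
    "streams": [
        {"id": "line-3/motor-7/vibration", "type": "vibration"},
        {"id": "line-3/motor-7/current",   "type": "electrical_current"},
        {"id": "line-3/motor-7/temp",      "type": "temperature"},
        {"id": "cam-12/line-3",            "type": "video"},
    ],
    "window_minutes": 30,  # how much recent data to reason over (assumed parameter)
}

response = requests.post(API_URL, json=payload, timeout=30)
response.raise_for_status()
print(response.json())  # e.g. a natural-language explanation plus supporting signals
```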

How Newton Processes Physical Data

Newton combines two core components to bridge raw sensor data and human understanding (a sketch of how they fit together follows the list):
  1. Physical World Model: Ingests time-series sensor data (motion, power, electrical signals) and video streams capturing machine and human behavior. This layer can process many types of physical sensors without requiring model retraining, enabling capabilities like trajectory prediction, multivariate analysis, anomaly detection, and classification.
  2. Semantic World Model: Translates between physical data and natural language, allowing you to interact with sensor streams conversationally—like talking with a human expert. You can provide scene descriptions and custom prompts to query presence, activity, and intent.
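To make the division of labor concrete, the sketch below models what each layer consumes: raw time-series windows and video references for the Physical World Model, and a scene description plus a custom prompt for the Semantic World Model. The class names and fields are hypothetical, intended only to illustrate the split described above, not actual SDK types.

```python
from dataclasses import dataclass, field

# Hypothetical data shapes for illustration; not the actual Archetype SDK types.

@dataclass
class SensorWindow:
    """A slice of time-series data for the Physical World Model."""
    sensor_id: str
    kind: str                 # e.g. "motion", "power", "electrical"
    timestamps: list[float]   # seconds since epoch
    values: list[float]

@dataclass
class PhysicalWorldInput:
    """Raw signals: time-series windows plus video capturing machine and human behavior."""
    windows: list[SensorWindow]
    video_uris: list[str] = field(default_factory=list)

@dataclass
class SemanticWorldInput:
    """Natural-language side: scene context and the question being asked."""
    scene_description: str
    prompt: str

# The physical layer handles detection and prediction over the signals,
# while the semantic layer frames the question and interprets the answer.
physical = PhysicalWorldInput(
    windows=[SensorWindow("dock-1-imu", "motion", [0.0, 0.1, 0.2], [0.01, 0.02, 0.15])],
    video_uris=["rtsp://dock-cam-1/stream"],
)
semantic = SemanticWorldInput(
    scene_description="Loading dock with two forklifts and one pedestrian door.",
    prompt="Is anyone currently standing in the path of an active forklift?",
)
```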

Platform Capabilities

With the Archetype platform you can:
  • Access Newton through standard APIs: Tap into Newton, the multimodal foundation model that fuses physical measurements and sensor data into unified understanding. Build Physical AI use cases via Python bindings or REST APIs with just a few lines of code (see the sketch after this list).
  • Connect any sensor or data source: Easily integrate data from diverse sensors, devices, and systems. This includes real-time or pre-recorded data, enabling analysis, testing, and simulation within a single environment.
  • Accelerate development with the Console: Use the Developer Console for testing, evaluation, and performance tuning. Configure Lenses with custom instructions and data to solve various Physical AI use cases, and export configurations to use in your projects.
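As a rough sketch of that path from Console to code, the example below loads an exported Lens configuration from disk and submits it together with a pre-recorded data file to a REST endpoint. The endpoint path, file names, and payload fields are assumptions for illustration, not the documented Archetype API.

```python
import json
import requests

# Hypothetical example: endpoint, file names, and payload fields are placeholders,
# not the documented Archetype API.
API_URL = "https://api.example.com/v1/lenses/run"
API_KEY = "YOUR_API_KEY"

# A Lens configuration exported from the Developer Console (assumed JSON format).
with open("forklift_safety_lens.json") as f:
    lens_config = json.load(f)

# Pre-recorded sensor data to analyze offline (assumed CSV export).
files = {"recording": open("dock_sensors_recording.csv", "rb")}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    data={"lens": json.dumps(lens_config)},
    files=files,
    timeout=60,
)
response.raise_for_status()
print(response.json())  # e.g. detections, classifications, or anomaly flags
```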
The platform allows customers to create and deploy systems that monitor, optimize, and automate complex physical operations. Early enterprise customers, including NTT DATA, Kajima, and the City of Bellevue, have already deployed systems to increase efficiency, reduce downtime, and improve safety in environments ranging from warehouses and construction sites to city streets. Overall, the Archetype platform is designed to accelerate the development and deployment of the Newton foundation model into real-world systems, enabling faster iteration and scaling of Physical AI applications.

Get Started