Tecton launches low latency streaming pipelines for machine learning
Tecton is the only feature store that orchestrates streaming pipelines for machine learning (ML) at sub-second freshness while providing native support for temporal aggregations and backfills, extending the use of ML to real-time use cases such as fraud detection, product recommendations, and pricing
SAN FRANCISCO, Aug 10, 2021 (GLOBE NEWSWIRE) – Tecton, the enterprise feature store company, today announced that it has added low latency streaming pipelines to its feature store so organizations can quickly and reliably build real-time ML models.
“Organizations are increasingly deploying real-time ML to support new customer-centric applications and automate business processes,” said Kevin Stumpf, co-founder and CTO of Tecton. “Adding low latency streaming pipelines to the Tecton feature store allows our customers to build real-time ML applications faster and with more accurate predictions.”
Real-time ML means that predictions are generated online, at low latency, using real-time data from an organization; any update to the data sources is reflected immediately in the model predictions. Real-time ML is valuable for any use case sensitive to the freshness of predictions, such as fraud detection, product recommendations, and pricing.
For example, fraud detection models should generate predictions based not only on what a user was doing yesterday, but also on what they have been doing in the last few seconds. Likewise, real-time pricing models need to incorporate supply and demand for a product right now, not just a few hours ago.
Data is the hardest part of building real-time ML models. It requires operational data pipelines capable of processing features with sub-second freshness and serving features at millisecond latency, all while delivering production-grade SLAs. Building these data pipelines is very difficult without the proper tools and can add weeks or months to the deployment time of ML projects.
With Tecton, data teams can build and deploy features using streaming data sources like Kafka or Kinesis within hours. Users only need to provide the logic for transforming data using powerful Tecton primitives, and Tecton executes this logic in fully managed operational data pipelines that can process and serve features in real time. Tecton also processes historical data to create training data sets and backfills that are consistent with the online data and eliminate training/serving skew. Time-window aggregations – by far the most common type of feature used in real-time ML applications – are supported out of the box with an optimized implementation.
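To make the idea of a time-window aggregation concrete, here is a minimal sketch of maintaining a trailing-window count and sum over a stream of events, the kind of feature a fraud model might consume. The class and feature names are hypothetical illustrations, not Tecton's actual API:

```python
from collections import deque


class SlidingWindowAggregator:
    """Maintains count and sum of events inside a trailing time window.

    A generic illustration of the time-window aggregations described
    above; the names here are hypothetical, not Tecton primitives.
    """

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()  # (timestamp, amount) pairs, oldest first
        self.total = 0.0

    def add(self, timestamp, amount):
        # Ingest one stream event, then age out anything too old.
        self.events.append((timestamp, amount))
        self.total += amount
        self._evict(timestamp)

    def _evict(self, now):
        # Drop events that have fallen outside the trailing window.
        while self.events and self.events[0][0] <= now - self.window:
            _, amount = self.events.popleft()
            self.total -= amount

    def features(self, now):
        # Serve fresh feature values as of `now`.
        self._evict(now)
        return {"txn_count_window": len(self.events),
                "txn_sum_window": self.total}


# Example: transaction features over a 60-second window
agg = SlidingWindowAggregator(window_seconds=60)
agg.add(timestamp=0, amount=25.0)
agg.add(timestamp=30, amount=40.0)
print(agg.features(now=45))  # both events still inside the window
print(agg.features(now=70))  # the first event has aged out
```

A production system would run this logic per entity key (e.g. per user), partitioned across a stream processor, and pair it with a batch job over historical data so training-time feature values match what was served online.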
Data teams that already use real-time ML can now build and deploy models faster, increase prediction accuracy, and reduce the load on engineering teams. Data teams new to streaming can create a new class of real-time ML applications that require ultra-fresh feature values. Tecton simplifies the most difficult step in the transition to real-time ML: building and operating continuous ML pipelines.
Tecton’s mission is to make world-class ML accessible to all businesses. Tecton empowers data scientists to transform raw data into production-ready features, the predictive signals that power ML models. The founders created the Uber Michelangelo ML platform, and the team has extensive experience building data systems for industry leaders like Google, Facebook, Airbnb, and Uber. Tecton is the primary contributor and committer of Feast, the leading open source feature store. Tecton is backed by Andreessen Horowitz and Sequoia and is headquartered in San Francisco with an office in New York. For more information, visit https://www.tecton.ai or follow @tectonAI.
Contact for media and analysts: