Ray is a distributed execution framework that makes it easy to scale your applications and to leverage state-of-the-art machine learning libraries.
Tech tags:
Related shared content:
- project (2024-06-14): Pinterest's adoption of Ray for its ML infrastructure began in 2023 and involved working through challenges such as Kubernetes integration, resource utilization, and security. The resulting Ray platform supports scalable, efficient machine learning workloads and has significantly improved last-mile data processing, batch inference, and recommender-system model training. By focusing on distributed processing, cost management, and developer velocity, Pinterest improved the scalability and operational efficiency of its machine learning applications (see the batch inference sketch after this list).
- project (2023-09-12): Pinterest sped up machine learning dataset iteration by adopting Ray for distributed last-mile data processing, removing bottlenecks in dataset handling for its recommender systems. Workflows that were previously slow on Apache Spark and Airflow now rely on Ray's parallelization, cutting training time substantially. Ray's CPU/GPU resource management and streaming execution increased throughput and reduced cost, improving ML engineer velocity and overall efficiency in managing large-scale data (see the dataset iteration sketch after this list).
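For context on the batch inference pattern mentioned in the first item, here is a minimal Ray Data sketch. The input path, column names, and model are hypothetical placeholders, and `map_batches` parameters such as `concurrency` and `num_gpus` depend on your Ray version and cluster resources.

```python
# Minimal Ray Data batch inference sketch (hypothetical paths, columns, and model).
import numpy as np
import ray

ray.init()  # connects to an existing cluster, or starts a local one


class Predictor:
    """Stateful UDF: the model is loaded once per actor, not once per batch."""

    def __init__(self):
        # Placeholder standing in for loading a real model checkpoint onto a GPU.
        self.model = lambda features: features.sum(axis=1)

    def __call__(self, batch: dict) -> dict:
        batch["prediction"] = self.model(np.stack(batch["features"]))
        return batch


ds = ray.data.read_parquet("s3://example-bucket/inference-inputs/")  # hypothetical path
predictions = ds.map_batches(
    Predictor,
    batch_size=1024,   # tune for model/GPU memory
    concurrency=4,     # number of Predictor actors (Ray >= 2.9 argument name)
    num_gpus=1,        # reserve one GPU per actor; assumes GPU nodes are available
)
predictions.write_parquet("s3://example-bucket/inference-outputs/")  # hypothetical path
```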
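The second item centers on Ray Data's streaming execution and mixed CPU/GPU scheduling for last-mile preprocessing that feeds a trainer. A minimal sketch of that pattern follows; the dataset path, schema, and preprocessing logic are hypothetical, and in practice the consuming loop would typically be a GPU training job (often wrapped in Ray Train).

```python
# Streaming last-mile preprocessing feeding a training loop (hypothetical schema).
import numpy as np
import ray


def preprocess(batch: dict) -> dict:
    # CPU-side feature transform; runs as parallel Ray tasks while data streams through.
    batch["features"] = np.log1p(batch["features"].astype("float32"))
    return batch


ds = ray.data.read_parquet("s3://example-bucket/training-data/")  # hypothetical path
ds = ds.map_batches(preprocess, batch_size=4096)  # CPU preprocessing stage

# Ray Data executes the pipeline in a streaming fashion: preprocessed batches are
# produced on CPU workers while the consumer iterates over them, so the full
# dataset never has to fit in memory at once.
for batch in ds.iter_batches(batch_size=1024, batch_format="numpy"):
    features = batch["features"]
    # ...forward/backward pass on the GPU would go here...
```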
In production with: