Amazon Simple Storage Service
Tech tags:
Related shared contents:
-
project2025-11-20
The article discusses how Care Access, a healthcare organization, utilized Amazon Bedrock's prompt caching feature to significantly reduce data processing costs by 86% and improve processing speed by 66%. By caching static medical record content while varying analysis questions, Care Access optimized their operations to handle large volumes of medical records efficiently while maintaining compliance with healthcare regulations. The implementation details, including the architecture and security measures, are also highlighted, showcasing the transformative impact of this technology on their health screening program.
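The caching pattern described (static record first, varying question after) can be sketched against the Bedrock Converse API, which marks the cacheable prefix with a `cachePoint` content block. This is a minimal sketch; the helper name and prompt text are illustrative, and the payload would be passed to `boto3.client("bedrock-runtime").converse(...)` on a model that supports prompt caching.

```python
def build_messages(record_text: str, question: str) -> list:
    """Build a Converse-API message list with a prompt-cache checkpoint."""
    return [
        {
            "role": "user",
            "content": [
                # Static content first: identical bytes across requests,
                # so this prefix is eligible for prompt caching.
                {"text": f"Medical record:\n{record_text}"},
                # Cache checkpoint: everything before this block is cached.
                {"cachePoint": {"type": "default"}},
                # Variable content: a different analysis question per request.
                {"text": question},
            ],
        }
    ]

msgs = build_messages("Patient A: ...", "Does the record mention hypertension?")
print(msgs[0]["content"][1])  # {'cachePoint': {'type': 'default'}}
```

Because only the text after the checkpoint changes between calls, the cached record prefix is billed and processed at the reduced cache-read rate on subsequent questions.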
-
project2025-11-13
The article discusses Yelp's transformation of its data infrastructure through the adoption of a streaming lakehouse architecture on AWS. This modernization aimed to address challenges related to data processing latency, operational complexity, and compliance with regulations like GDPR. By migrating from self-managed Apache Kafka to Amazon MSK and implementing Apache Paimon for storage, Yelp achieved significant improvements, reducing analytics data latencies from 18 hours to minutes and cutting storage costs by over 80%. The article outlines the architectural shifts and technologies involved in this transformation.
-
spike2025-03-07
-
project2025-02-18
A classic AWS Glue job pipeline that feeds Amazon Bedrock Knowledge Bases for a RAG use case.
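The transform step of such a pipeline can be sketched as a pure function that maps each cleaned document to the S3 objects a Knowledge Base data source ingests: the document itself plus a `.metadata.json` sidecar carrying filterable attributes. Names here (`to_kb_objects`, the key layout, the sample attributes) are illustrative; a real Glue job would write these objects to the data-source S3 prefix and then trigger an ingestion job.

```python
import json

def to_kb_objects(doc_id: str, text: str, attrs: dict) -> dict:
    """Map one cleaned document to a {s3_key: body} dict in the shape a
    Bedrock Knowledge Base data source expects: the document plus a
    `<name>.metadata.json` sidecar with metadataAttributes."""
    key = f"kb-input/{doc_id}.txt"
    return {
        key: text,
        f"{key}.metadata.json": json.dumps({"metadataAttributes": attrs}),
    }

objs = to_kb_objects("doc-1", "S3 is an object store.", {"source": "wiki"})
```

After the objects land in S3, the sync into the vector store is kicked off with the `bedrock-agent` API's `start_ingestion_job`.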
-
tech12024-12-04
S3 Table buckets handle the Iceberg compaction and catalog maintenance tasks for you.
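Reading and writing those managed Iceberg tables from Spark is mostly configuration. A sketch of the spark-shell wiring, per the S3 Tables integration pattern; the bucket ARN is a placeholder and the package versions are assumptions to pin down for your runtime:

```shell
# Illustrative spark-shell launch pointing an Iceberg catalog at an
# S3 table bucket (ARN and versions are placeholders/assumptions).
spark-shell \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1,software.amazon.s3tables:s3-tables-catalog-for-iceberg-runtime:0.1.3 \
  --conf spark.sql.catalog.s3tablesbucket=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.s3tablesbucket.catalog-impl=software.amazon.s3tables.iceberg.S3TablesCatalog \
  --conf spark.sql.catalog.s3tablesbucket.warehouse=arn:aws:s3tables:us-east-1:111122223333:bucket/my-table-bucket \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
```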
-
project2024-12-05
Twitch has leveraged Views in their Data Lake to enhance data agility, minimize downtime, and streamline development workflows. By utilizing Views as interfaces to underlying data tables, they've enabled seamless schema modifications, such as column renames and VARCHAR resizing, without necessitating data reprocessing. This approach has facilitated rapid responses to data quality issues and supported efficient ETL processes, contributing to a scalable and adaptable data infrastructure.
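The "view as interface" idea is easy to demonstrate in miniature. The sketch below uses SQLite only for portability (Twitch's lake uses warehouse views, and all table/column names here are made up): consumers query the view, and a physical column rename is absorbed by redefining the view, with no data reprocessing and no change for readers.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events_v1 (usr TEXT, ts INTEGER)")
conn.execute("INSERT INTO events_v1 VALUES ('alice', 100)")
# Stable interface exposed to consumers.
conn.execute("CREATE VIEW events AS SELECT usr AS user_id, ts FROM events_v1")

# Later: the physical column is renamed; only the view definition changes.
conn.execute("ALTER TABLE events_v1 RENAME COLUMN usr TO user_name")
conn.execute("DROP VIEW events")
conn.execute("CREATE VIEW events AS SELECT user_name AS user_id, ts FROM events_v1")

rows = conn.execute("SELECT user_id, ts FROM events").fetchall()
print(rows)  # [('alice', 100)]
```

The same indirection covers VARCHAR resizing and table swaps: repoint the view at the fixed table and downstream ETL keeps running unchanged.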
-
project2024-11-22
Improving data processing efficiency by adopting Apache Iceberg's base-2 object storage file layout for S3.
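The idea can be sketched as follows: hash each data file name and render the hash as a binary (base-2) string prefix, so S3 can split request load at clean bit boundaries of the key space. This is only an illustration of the technique; Iceberg's `ObjectStoreLocationProvider` uses its own hash function and bit count.

```python
import zlib

def base2_prefix(file_name: str, bits: int = 20) -> str:
    """Illustrative base-2 entropy prefix: hash the file name and keep
    the low `bits` bits as a binary string (assumption: CRC32 stands in
    for Iceberg's actual hash)."""
    h = zlib.crc32(file_name.encode()) & ((1 << bits) - 1)
    return format(h, f"0{bits}b")

# Hypothetical resulting object key: the random-looking binary prefix
# spreads files across many S3 key-range partitions.
key = f"s3://bucket/warehouse/data/{base2_prefix('00001-file.parquet')}/00001-file.parquet"
```

Because every prefix character is 0 or 1, S3's automatic key-range partitioning can split at any bit depth, which is the point of base-2 over hex prefixes.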
-
product2024-12-12
Build a process that automatically assembles complete data lineage by merging the partial lineage graphs generated by dbt.
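The merge step can be sketched as a union of edge maps: each dbt project exposes partial lineage as a child-to-parents map (the shape of `parent_map` in dbt's `manifest.json`), and merging the maps yields one graph spanning projects. Function name and node IDs below are illustrative, not dbt APIs.

```python
from collections import defaultdict

def merge_lineage(*parent_maps: dict) -> dict:
    """Union several child -> [parents] maps into one lineage graph,
    de-duplicating parents that appear in multiple partial graphs."""
    merged = defaultdict(set)
    for pm in parent_maps:
        for child, parents in pm.items():
            merged[child].update(parents)
    return {child: sorted(ps) for child, ps in merged.items()}

core = {"model.core.orders": ["source.core.raw_orders"]}
marts = {"model.marts.revenue": ["model.core.orders"],
         "model.core.orders": ["source.core.raw_orders"]}
full = merge_lineage(core, marts)
```

Using a set per child makes the merge idempotent, so re-running it after each dbt build is safe.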
-
project2022-06-22
-
project2024-11-11
To find user profiles to remove, an AWS Lambda function queries Aurora, DynamoDB, and Athena, then records the matching data locations in a DynamoDB table dedicated to GDPR requests.
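The scan can be sketched with the three data stores stubbed as callables (a real Lambda would query Aurora, DynamoDB, and Athena through their drivers/SDKs). All names here are illustrative; the output items mirror what would be written to the GDPR-requests DynamoDB table.

```python
def find_profile_locations(user_id: str, stores: dict) -> list:
    """Collect every location where a user's profile data lives.
    `stores` maps a store name to a callable returning locations."""
    items = []
    for store_name, query in stores.items():
        for location in query(user_id):
            items.append({
                "user_id": user_id,    # partition key in the GDPR table
                "store": store_name,
                "location": location,  # e.g. table/key or S3 path
            })
    return items

# Stub queries standing in for the real Aurora/DynamoDB/Athena lookups.
stores = {
    "aurora": lambda uid: [f"users.profiles WHERE id='{uid}'"],
    "dynamodb": lambda uid: [f"Profiles/pk={uid}"],
    "athena": lambda uid: [f"s3://lake/profiles/user={uid}/"],
}
items = find_profile_locations("u-123", stores)
```

Writing the locations to a dedicated DynamoDB table decouples discovery from deletion, so the actual erasure workers can process each location independently.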
-