7/5/2023

Amazon day one!

Amazon Prime Video has dumped its AWS distributed serverless architecture and moved to what it describes as a "monolith" for its video quality analysis team, in a move that it said has cut its cloud infrastructure costs 90%.

The shift saw the team swap an eclectic array of distributed microservices handling video/audio stream analysis processes for an architecture with all components running inside a single Amazon ECS task instead.

(Whether this constitutes a "monolith" as it is described in a Prime Video engineering blog that has triggered huge attention, or is instead now one large microservice, is an open question; either way, it has saved the team a lot of money, following the approach Adrian Cockcroft describes as "optimizing serverless applications by also building services using containers to solve for lower startup latency, long running compute jobs, and predictable high traffic.")

Prime Video blasts both barrels at AWS serverless.

Senior software development engineer Marcin Kolny said on Prime's technology blog that tooling built to assess every video stream and check for quality issues had initially been spun up as a "distributed system using serverless components", but that this architecture "caused us to hit a hard scaling limit at around 5% of the expected load" and the "cost of all the building blocks was too high to accept the solution at a large scale."

The initial setup had seen the Prime Video team analysing frames and audio buffers using machine-learning algorithms, with AWS Step Functions used as the primary process orchestration mechanism to coordinate the execution of several serverless Lambda functions. All audio/video data was stored in AWS S3 buckets and an AWS SNS topic was used to deliver analysis results, but the cost of passing data around racked up fast.

As Kolny's blog spells out, the initial microservices-based tools built for video stream defect detection had hit all kinds of issues: "The main scaling bottleneck in the architecture was the orchestration management that was implemented using AWS Step Functions. Our service performed multiple state transitions for every second of the stream, so we quickly reached account limits. Besides that, AWS Step Functions charges users per state transition.

"The second cost problem we discovered was about the way we were passing video frames (images) around different components. To reduce computationally expensive video conversion jobs, we built a microservice that splits videos into frames and temporarily uploads images to an S3 bucket. Defect detectors (where each of them also runs as a separate microservice) then download images and process them concurrently using AWS Lambda. However, the high number of Tier-1 calls to the S3 bucket was expensive."

Strikingly, in one discussion about this decision on Twitter, a purported senior product engineer at Amazon piped up to tell the world that "We don't use serverless in-house for production loads and no company at sufficient scale should. Pretty sure the docs even say that." (Ladies, gentlemen, non-binary readers: we can't see that in the docs...)

As April King joked on May 4, 2023: "please don't use and spend lots of money on our services for production loads at sufficient scale" — AWS docs, somewhere, maybe.
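The two cost drivers Kolny describes — per-state-transition orchestration charges and per-frame S3 requests — scale multiplicatively with stream count and frame rate, which is why the bill ballooned long before the system hit its expected load. A minimal back-of-the-envelope sketch of that dynamic (every rate and price below is an illustrative assumption, not actual AWS pricing or Prime Video's numbers):

```python
# Back-of-the-envelope sketch of why per-transition and per-request
# pricing dominates at stream scale. All rates and prices here are
# illustrative placeholders, not real AWS pricing.

def monthly_cost(streams, frames_per_second,
                 transitions_per_frame=5,        # assumed orchestration steps per frame
                 price_per_transition=0.000025,  # assumed $ per state transition
                 s3_calls_per_frame=2,           # assumed: one upload, one download per frame
                 price_per_s3_call=0.0000054):   # assumed $ per Tier-1 request
    seconds_per_month = 30 * 24 * 3600
    frames = streams * frames_per_second * seconds_per_month
    orchestration = frames * transitions_per_frame * price_per_transition
    frame_passing = frames * s3_calls_per_frame * price_per_s3_call
    return orchestration, frame_passing

orch, s3 = monthly_cost(streams=1000, frames_per_second=1)
print(f"orchestration: ${orch:,.0f}/month, frame passing: ${s3:,.0f}/month")
```

Even at one analysed frame per second, costs grow linearly in frames processed, and every extra orchestration step or S3 round-trip multiplies the total — which is the arithmetic the single-ECS-task rewrite sidesteps by keeping frames in memory.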