Odyssey Platform #11
Another exciting week in the Platform Engineering ecosystem!

Editor's Note
Welcome to another edition of Odyssey Platform Weekly! This week, we’re diving into fresh insights, key events, and powerful stories shaping the future of platform engineering.
🗞️ In this newsletter
🗓️ Events
🔦 Tool Spotlight
LiteLLM is an open‑source universal LLM gateway built by BerriAI that lets developers access more than 100 language models from OpenAI, Azure, Anthropic, Hugging Face, Ollama, and more using the familiar OpenAI API format. Whether used via its Python SDK or its self‑hosted proxy server, LiteLLM provides unified chat and embedding endpoints, robust retry/fallback logic, load balancing, cost tracking, and integrations with observability tools like Langfuse, Datadog, and Prometheus. Ideal for platform and engineering teams, it enables Day‑0 access to new models, consistent outputs, and enterprise features like rate limits, budgets, and granular access control, all under one roof.
🔍️ Deep Dive: Cursor’s AI‑Powered IDE 🧠
🎯 Stay Inspired - Case Studies
👀 In Case You Missed It
Latest news & events in the platform engineering domain
📆 Upcoming Events
August 15, 2025, Brisbane, Australia
AWS Community Day 2025 is a premier community-driven cloud event for developers, architects, and IT leaders across Australia. Hosted by AWS user groups, it features real-world insights, technical deep dives, and hands-on learning from industry experts and AWS heroes. ☁️🇦🇺
👉 Register here

🔦 Tool Spotlight
✅ Universal LLM Gateway: connect to 100+ LLM providers (OpenAI, Anthropic, Azure, Hugging Face, Ollama) through a single, OpenAI-compatible API.
✅ Seamless Model Switching: swap or fallback between models without refactoring code, ensuring reliability and cost efficiency.
✅ Enterprise Controls: enforce rate limits, budgets, and granular access policies, with built-in observability via Datadog, Langfuse, and Prometheus.
✅ Production-Ready Features: retry logic, load balancing, caching, and self-hosted proxy for secure, scalable deployments.
Why It Matters:
LiteLLM unifies LLM integration, reducing complexity and vendor lock-in. It empowers platform teams to standardise AI usage, control costs, and accelerate delivery of AI-powered features at scale.
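To make the unified interface concrete, here's a minimal sketch of calling two different providers through LiteLLM's Python SDK. The model identifiers and placeholder keys are illustrative assumptions; check the LiteLLM docs for the exact strings your providers use.

```python
# pip install litellm
import os
from litellm import completion

# Placeholder keys; LiteLLM reads provider credentials from the environment.
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."

messages = [{"role": "user", "content": "Summarise GitOps in one sentence."}]

# The call shape stays OpenAI-style regardless of provider;
# only the model string changes.
openai_reply = completion(model="gpt-4o-mini", messages=messages)
claude_reply = completion(
    model="anthropic/claude-3-5-sonnet-20240620", messages=messages
)

# Responses mirror the OpenAI schema, so downstream code needs no branching.
print(openai_reply.choices[0].message.content)
print(claude_reply.choices[0].message.content)
```

Retry/fallback chains and load balancing are typically configured via LiteLLM's Router or the proxy server's config rather than per‑call code; the options vary by version, so treat the above as the basic pattern only.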

Deep Dive: Cursor’s AI‑Powered IDE 🧠

🎬 Act 1: From Vibe Coding to Trusted Co‑Pilot
Cursor doesn’t just autocomplete; it supercharges how you build software. It takes “vibe coding”, those scattered, intuition‑led hunches, and turns them into coherent plans, refactors, and PRs before you type. Writing code becomes like having an assistant that understands your entire repo, anticipates your edits, and lets you navigate instantly, with no more guessing how everything connects across files.
Tab‑Autocomplete: “add auth logic in UserService”
Ctrl + K: “rewrite this function in functional style”
Ctrl + I: “refactor all image‑management code”
What just happened?
Tab predicted your next edits, across lines or even files
Chat/Compose updated entire classes using natural language
Agent mode (Ctrl+I) surveyed your project, edited code, ran commands, and opened a PR, all autonomously
✨ Act 2: The Trinity of Cursor Powers
Cursor’s magic is built on three AI pillars:
Tab Autocomplete: full‑codebase aware, multi‑line, across files
Composer Chat: natural‑language edits and refactors, instant code + doc updates
Agent Mode: autonomous execution, testing, running scripts, refactoring, PR creation
Together, they collapse weeks of grunt work into hours, yet keep full visibility and control.
🎯 Act 3: The Key Benefits
🔁 2–3× faster delivery across onboarding, boilerplate, refactors, and small apps
📈 Teams report 75% productivity gains using Cursor for routine and legacy tasks
🛠 Higher code quality via Bugbot reviews, lint fixes, and consistent style enforcement
🔐 Enterprise‑ready with SOC 2, code privacy mode, GitHub and Slack integration
Background Agents operate securely, with PRs and updates flowing directly into your CI/CD and Slack
🚀 For Platform Engineering Teams
If you’re replacing legacy scaffolding scripts, juggling monorepos, or building critical auto‑generation pipelines, Cursor fits right in. It works with your existing VS Code setup and all your extensions, but adds AI‑aware code completion, cross‑repo understanding, and autonomous agent ops.
For platform engineering squads that need speed and consistency without sacrificing code quality, Cursor turns the workflow into a seamless partnership: plan, generate, refactor, review, and deploy, with hands‑off‑the‑keyboard control and full auditability.
Build faster. Code smarter. And let Cursor keep you in the flow.
🎯 Stay Inspired - Case Studies
🔹 Live Sports at Scale with AWS 🏈
Who: Amazon Prime Video, a leading streaming platform serving millions of viewers worldwide.
What They Did:
Delivered a highly reliable, low-latency, and scalable live-streaming platform for NFL Thursday Night Football across 224 countries by leveraging AWS services. Their solution included:
Streaming to over 18 million football fans with minimal buffering and global redundancy
Targeted ad insertion using AWS Elemental MediaTailor
Real-time data collection with Amazon Kinesis and OpenSearch for instant playback optimisation (see the sketch below)
Elastic scaling with EC2, DynamoDB, and CloudFront to handle unpredictable traffic spikes
Tech Stack & Tools Used:
AWS Elemental MediaTailor, Amazon DynamoDB, Amazon EC2, Amazon CloudFront, Amazon Kinesis, Amazon OpenSearch Service
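For a flavour of the real-time telemetry leg of a pipeline like this, here is a hypothetical sketch of pushing playback metrics into an Amazon Kinesis data stream with boto3. The stream name, region, and record shape are illustrative assumptions, not details from Prime Video's actual implementation.

```python
# pip install boto3
import json
import time
import boto3

# Hypothetical stream and region; the real setup is not public.
kinesis = boto3.client("kinesis", region_name="us-east-1")
STREAM_NAME = "playback-metrics"

def emit_playback_metric(session_id: str, bitrate_kbps: int, rebuffer_ms: int) -> None:
    """Send one playback-quality event; downstream consumers
    (e.g. an OpenSearch ingestion job) read from the stream."""
    record = {
        "session_id": session_id,
        "bitrate_kbps": bitrate_kbps,
        "rebuffer_ms": rebuffer_ms,
        "ts": time.time(),
    }
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=session_id,  # keeps each viewer's events ordered within a shard
    )

emit_playback_metric("viewer-123", bitrate_kbps=8000, rebuffer_ms=40)
```

Partitioning by session ID is a common choice here: it preserves per-viewer event ordering while letting Kinesis spread millions of concurrent sessions across shards.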
Why It Matters
✅ Seamless fan experience: Low-latency streaming and instant scaling keep fans engaged worldwide
✅ Smarter monetisation: Dynamic ad insertion creates new revenue opportunities without disrupting playback
✅ Proven scalability: AWS powers a platform capable of delivering high-demand live sports reliably and globally
👀 In Case You Missed It…
MCP Horror Stories: The Security Issues Threatening AI Infrastructure
Docker reveals that thousands of Model Context Protocol (MCP) servers are riddled with critical flaws: 43% are vulnerable to command injection, 33% permit unrestricted network access, and 22% expose files outside their intended scope, creating a “security nightmare” for AI‑powered tooling and workflows.
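To make that 43% figure concrete, here's a generic Python illustration of the command-injection class being described (not code from any specific MCP server): the unsafe variant interpolates untrusted input into a shell command, while the safe one passes arguments as a list so no shell ever parses the input.

```python
import subprocess

# Vulnerable pattern: untrusted input is interpolated into a shell string.
# host = "example.com; cat /etc/passwd" would run the injected command.
def ping_unsafe(host: str) -> str:
    result = subprocess.run(
        f"ping -c 1 {host}", shell=True, capture_output=True, text=True
    )
    return result.stdout

# Safer pattern: arguments are passed as a list and shell=False (the default),
# so the input is treated as a single argument, never as shell syntax.
def ping_safe(host: str) -> str:
    result = subprocess.run(
        ["ping", "-c", "1", host], capture_output=True, text=True
    )
    return result.stdout
```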
High Adoption of Argo CD in Production
A recent CNCF‑backed survey of 185 Argo CD adopters reveals that 97% now run it in production, with 60% having used it for over two years, signalling growing confidence and durability in real‑world use.
What K8s Users Really Think About AI in 2025
56% of organisations are leveraging AI for tasks like anomaly detection and performance analysis, a figure expected to rise substantially in 2025.
Till next time,