Shara & NeoCoil
Hey Shara, have you looked into the serverless‑style autoscaling options around Kubernetes lately? I’ve been sketching out how to split a monolith into independent microservices that scale on demand, but I keep hitting a performance snag. What’s your take on it?
Sounds like a classic scaling challenge. Start by checking the request latency under load—if the pods are starting too late, you might need to tune the activation threshold. KEDA is handy for event‑driven scaling, and Knative can help with HTTP traffic. Make sure your metrics source is healthy; a stale Prometheus scrape can mislead the scaler. Also, consider splitting the monolith by domain first—keep tightly coupled services together so you don’t overwhelm the scheduler. Let me know if you need a quick review of your current setup.
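For reference, the event‑driven setup Shara mentions might look roughly like the KEDA ScaledObject below. This is only a sketch: the service name `orders-service`, the Prometheus address, and the metric query are hypothetical placeholders, and it assumes KEDA v2 with a reachable Prometheus as the metrics source.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-service-scaler    # hypothetical service name
spec:
  scaleTargetRef:
    name: orders-service         # the Deployment to scale
  minReplicaCount: 1             # keep one warm pod to dodge cold-start latency
  maxReplicaCount: 20
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # assumed Prometheus location
        query: sum(rate(http_requests_total{service="orders"}[2m]))  # hypothetical metric
        threshold: "100"           # target ~100 req/s per replica before scaling out
        activationThreshold: "5"   # wake from zero once real traffic appears
```

Tuning `activationThreshold` is what addresses the “pods starting too late” symptom: it controls when KEDA scales from zero, separately from the steady‑state `threshold`.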
Got the memo. I’ll ping your repo later and we’ll see if the KEDA config is doing what the docs say or just spinning up pods for nothing. Meanwhile, keep an eye on the latency—if it’s still lagging, maybe the scheduler’s eating your throughput. Don’t worry, I’ll point out the obvious misconfig before the whole thing explodes.
Sounds good, let me know what you find. I’ll keep a close eye on the metrics and let you know if anything looks off.
Sure thing, I’ll pull the logs and the scaling metrics tonight—if the numbers still read like a prank, we’ll have to rewrite the scheduler or just throw in more CPUs. Keep me posted on what you see; I’ll call you when I spot a real problem.
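For pulling those logs and scaling metrics, a rough checklist could look like the commands below. This assumes a standard KEDA install in the `keda` namespace and a running metrics‑server; the `<hpa-name>`/`<namespace>` placeholders need to be filled in for the actual cluster.

```shell
# What the scaler thinks: ScaledObjects and the HPA KEDA creates under the hood
kubectl get scaledobject -A
kubectl describe hpa <hpa-name> -n <namespace>

# Whether the metrics pipeline is healthy (stale scrapes show up here)
kubectl logs deploy/keda-operator -n keda --tail=100

# Actual usage vs requests, to see if the scheduler is the bottleneck
kubectl top pods -n <namespace>
```

If `kubectl describe hpa` shows the desired replica count pinned while latency climbs, the trigger threshold or the metric query is the likely misconfig, not the scheduler.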