AI has advanced considerably in recent years, with systems achieving human-level performance on numerous tasks. However, the main hurdle lies not only in training these models but in deploying them efficiently in real-world applications. This is where AI inference becomes crucial, emerging as a central focus for researchers and tech leaders alike.