About Concentrate AI

Concentrate AI provides a unified, OpenAI-compatible API to access, route, and manage LLMs. We save our customers time, cut costs, and reduce risk by managing their high-volume AI traffic and logs. We're seed-stage, well-funded, and in stealth. This role is fully remote.

The Role

Build, run, and debug LLM applications on top of Concentrate. You'll work on production workloads (customer-facing and internal), stress-test the platform, and share feedback and learnings with the product team.

Key Areas of Responsibility

- Build LLM-powered services, workflows, and applications using Concentrate
- Integrate with real production applications and environments
- Design reliable interactions with LLMs (OpenAI, Anthropic, Gemini, OSS), including prompts, tool calling, evals, and guardrails
- Debug failures across models, APIs, and infrastructure
- Write code, scripts, and small tools to test system behavior at scale
- Reproduce bugs, isolate root causes, and clearly document findings
- Drive improvements to APIs, defaults, and overall developer experience

What We're Looking For

- Hands-on experience integrating LLMs via APIs in production
- Always building tools, scripts, side projects, or experiments to test ideas and learn fast
- Strong Python; TypeScript is a plus
- Comfortable with REST APIs, async systems, and production debugging
- Curious about model behavior under load, failure, and edge cases
- Familiarity with tool calling, agents, multi-step workflows, or MCPs
- Strong code quality and technical judgment
- Comfortable operating in early-stage environments with speed and ambiguity
- Clear async communication (Slack, written updates, concise docs)
- Fluent written and verbal English, required for documentation, incident response, and technical and business presentations
- Bonus: active open-source contributor