We are a small team of engineers and researchers building the AI decision engine that regulated enterprises will trust. Rooca is not another chatbot company. We are defining a new category, and we are hiring the people who will define it with us.
Own the adversarial reasoning layer. You will design the prompting, tool-use, and evidence-grounding architecture that makes our Prosecutor, Defender, and Judge agents produce defensible verdicts. Background in LLM evaluation, agentic systems, or applied ML research expected.
Run reliability engineering for a platform that runs reliability engineering. You will build internal chaos and replay infrastructure, harden our Helm-chart distribution, and ensure Rooca deployments meet the operational expectations of DORA-regulated customers. Prior SRE experience at a regulated enterprise is a plus.
Sell Rooca to the largest regulated operators across North America and Europe. You will own the full sales cycle, from CISO conversations through DORA-compliant procurement, partnering closely with our founders. 5+ years selling infrastructure software to financial services or critical infrastructure required.
We are always interested in exceptional engineers, researchers, and operators who want to build the reasoning layer for regulated infrastructure. Reach out and tell us what you would want to build.
Contact us →