Services

Four shapes. Scope is written. Scope is kept.

We write fixed scope and we deliver it. When something has to change mid-engagement, we put it in writing before we start the work. No scope creep, no surprise invoices, no status-yellow lying.

01

Ideation Sprint

Half-day workshop · 1-week turnaround

A focused workshop when the AI ambition is real but the scope isn't yet.

We don't run brainstorms. An Ideation Sprint is a discovery exercise — your stakeholders, two Remāngu engineers, your real operational pain on the table. We leave with one scoped brief, or an honest 'don't build this yet' if that's where the evidence points.

In scope

  • Half-day workshop with two Remāngu engineers
  • Surface three candidate AI projects
  • Rank by impact × feasibility × reversibility
  • Scoped brief for the strongest candidate — what, why, how long
  • Directional architecture sketch
  • Written risk and assumption log

Out of scope

  • Code
  • Data pipelines
  • Vendor selection beyond AWS/foundation models
  • Formal procurement documentation

What you get

  • The brief — PDF + editable doc, yours to own
  • A short memo on risks, open questions, and next-step cost estimate
  • A 30-minute debrief call one week later, free

How it runs

  1. Kickoff call to align on who should be in the room
  2. Pre-read: your background context, shared 48h ahead
  3. Workshop day(s)
  4. Brief delivered within 5 business days
  5. Free 30-min follow-up one week later

Best for

Teams that know AI is coming but can't decide between three candidate projects — and don't want to waste a quarter running them all in parallel.

02

Prototype

2–4 weeks · fixed fee

A working system on your AWS account. Not a sandbox demo, not a slide deck.

Eligible engagements may qualify for AWS co-investment — we handle the qualification and paperwork.

A Prototype is a real system, running on your infrastructure, on real or production-synthetic data. It ships with evals from day one — precision, recall, latency, cost — and observability wired in before any AI call is made. You can hand it to a stakeholder and demo live.
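As an illustrative sketch only — not Remāngu's actual harness — "evals from day one" can be as small as a loop over a held-out labeled set that reports precision, recall, latency, and cost per call. The `call_model` function here is a hypothetical stand-in for the real AI call:

```python
# Minimal eval-harness sketch: score a model against a small held-out
# labeled set, tracking precision, recall, latency, and cost.
# All names and numbers are illustrative.
import time

def call_model(text: str) -> str:
    """Hypothetical classifier; replace with the real model call."""
    return "positive" if "good" in text else "negative"

def run_evals(examples, cost_per_call=0.002):
    tp = fp = fn = 0
    latencies = []
    for text, label in examples:
        start = time.perf_counter()
        pred = call_model(text)
        latencies.append(time.perf_counter() - start)
        if pred == "positive" and label == "positive":
            tp += 1
        elif pred == "positive" and label == "negative":
            fp += 1
        elif pred == "negative" and label == "positive":
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "precision": precision,
        "recall": recall,
        "p95_latency_s": sorted(latencies)[int(0.95 * (len(latencies) - 1))],
        "total_cost_usd": cost_per_call * len(examples),
    }

held_out = [("good service", "positive"), ("bad service", "negative"),
            ("good value", "positive"), ("terrible", "negative")]
report = run_evals(held_out)
```

The same report can run in CI, which is what makes the suite a regression gate rather than a one-off benchmark.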

In scope

  • Scoped AI system built end-to-end
  • Deployed to your AWS account
  • Eval suite on a held-out dataset (20–100 labeled examples)
  • Observability: logs, traces, cost per call
  • Infrastructure-as-code (CDK or Terraform, your choice)
  • Handover session with your team

Out of scope

  • Full auth/RBAC (unless core to the demo)
  • Multi-region or DR
  • Production-grade SLA and on-call
  • Ongoing model retraining

What you get

  • Working system in your AWS account
  • Source code, IaC, and README
  • Eval suite runnable in CI
  • Technical write-up — what we built, what we saw
  • Live demo and handover call

How it runs

  1. Kickoff within 5 business days of sign-off
  2. Weekly written progress notes
  3. Midpoint demo at week 2
  4. Final handover demo at end of week 4
  5. 30-day bug-fix window post-handover

Best for

Teams with a scoped AI problem and access to real data, who need evidence the approach works before committing production engineering.

03

Production Build

6–16 weeks · milestone-based

We take a prototype to production. Hardened, observable, yours to run.

Eligible engagements may qualify for AWS co-investment.

A Production Build brings the hardening that a prototype deliberately skips. Auth, RBAC, audit logging, cost guardrails, drift monitoring, a 500+ example eval suite running in CI, model fallbacks, circuit breakers, runbooks, on-call docs. Milestone-based with explicit go/no-go gates — we don't burn budget chasing things you don't need.
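For a sense of what "model fallbacks and circuit breakers" means mechanically, here is a generic sketch — thresholds, names, and structure are illustrative assumptions, not the shipped design. After a run of consecutive primary-model failures, the breaker opens, the primary is skipped for a cooldown period, and traffic falls through to the fallback:

```python
# Sketch of a model-fallback chain guarded by a simple circuit breaker.
# All thresholds and function names are illustrative.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, cooldown_s=30.0):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None

    def is_open(self):
        if self.opened_at is None:
            return False
        if time.monotonic() - self.opened_at >= self.cooldown_s:
            # Cooldown elapsed: close the breaker, try the primary again.
            self.opened_at = None
            self.failures = 0
            return False
        return True

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic()

    def record_success(self):
        self.failures = 0

def call_with_fallback(prompt, primary, fallback, breaker):
    """Try the primary model unless its breaker is open; else fall back."""
    if not breaker.is_open():
        try:
            result = primary(prompt)
            breaker.record_success()
            return result
        except Exception:
            breaker.record_failure()
    return fallback(prompt)
```

The point of the pattern is that a degraded upstream model fails fast and cheap instead of stacking timeouts in front of every user request.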

In scope

  • Full auth, RBAC, SSO integration
  • Audit logging + compliance artifacts (SOC2/HIPAA/FCA as needed)
  • Cost guardrails per-user / per-tenant / per-feature
  • Model fallbacks and circuit breakers
  • Drift monitoring + alerting
  • Production eval suite in CI
  • Runbooks and on-call documentation
  • Handover and pair-coding with your team

Out of scope

  • Ongoing 24/7 on-call (that's Managed Operations)
  • Net-new product discovery (that's an Ideation Sprint)

What you get

  • Production system with SLA committed
  • Full IaC, CI/CD, and observability stack
  • Eval suite + dashboards
  • Compliance evidence package
  • Trained internal team

How it runs

  1. Written statement of work
  2. Milestone-based invoicing, with go/no-go at each milestone
  3. Weekly written status notes + bi-weekly demos
  4. Soft launch → hardening → full launch
  5. 30-day support tail post-launch

Best for

Teams shipping an AI feature that real customers will depend on — where regulatory, security, or cost discipline is non-optional.

04

Managed Operations

Ongoing · monthly fee

We run the system so your team doesn't have to.

Once a system is in production, someone has to carry the pager, track the bills, catch drift, and retrain models on schedule. Managed Operations folds the production system into Revolgy's long-running CloudOps practice — the same practice that has managed AWS for enterprise customers for a decade.

In scope

  • 24/7 monitoring and incident response
  • Model retraining cadence (weekly/monthly/quarterly)
  • Cost reviews and optimisation
  • Quarterly architecture + roadmap reviews
  • Security patching, dependency upkeep
  • SLA with named response times

Out of scope

  • Net-new feature work (that's another Production Build)
  • Work outside the agreed system boundary

What you get

  • Monthly operations report
  • Quarterly business review with your leadership
  • On-demand deep-dives when something changes

How it runs

  1. Takeover audit in month one
  2. Runbook refresh and on-call integration
  3. Then steady-state: monitor, retrain, review, repeat

Best for

Teams that want the AI system they just launched to stay healthy without hiring a specialist SRE team to watch it.

Not sure which shape fits?

Describe your problem in the intake. We’ll reply with a recommended engagement shape and a written scope within 48 hours.

Start a project