About

FieldworkIQ is an open project, built with the crisis-mapping community.

FieldworkIQ started as an internal verification toolkit for an election-observation deployment. It's now an open-source project run by a small team, with a public roadmap, public benchmarks, and a public repo. Apache 2.0 — fork it, run it, contribute back.

Why FieldworkIQ exists

Volunteers spend the first ten minutes of every report on translation and sorting.

Working with verification teams in Kenya, Uganda, and the Philippines, we kept seeing the same pattern: a volunteer opens a report, spends ten minutes translating it, looking up the place name, and cross-referencing other reports, and then thirty seconds making the actual judgement call.

FieldworkIQ does the first ten minutes so the volunteer can spend their day on the thirty seconds. The verifier still reads every report. The verifier still decides what gets published. The map is still your map. We just removed the parts a careful piece of software can do.

Everything we build sits on top of the platforms your team already runs. Ushahidi today. KoboToolbox and ODK Central next. If you've already invested in tooling, FieldworkIQ doesn't ask you to swap it out.
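
As a rough sketch of what that layering could look like in code, here is one hypothetical shape for a source adapter. The `SourceAdapter` interface, its methods, and the `NormalizedReport` type are illustrative assumptions for this page, not the shipped adapter SDK.

```python
# Illustrative only: class names, methods, and fields are assumptions for
# explanation, not the actual FieldworkIQ adapter SDK.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class NormalizedReport:
    source_id: str                       # ID inside the source platform (e.g. an Ushahidi post ID)
    text: str                            # raw report text, before translation
    language: Optional[str] = None       # detected or declared language code
    location_hint: Optional[str] = None  # free-text place name, if any


class SourceAdapter(ABC):
    """One adapter per platform: Ushahidi today, KoboToolbox and ODK Central next."""

    @abstractmethod
    def fetch_new_reports(self, since: str) -> list[NormalizedReport]:
        """Pull reports created after `since` and normalize them."""

    @abstractmethod
    def push_status(self, source_id: str, status: str) -> None:
        """Write the verifier's decision back to the source platform."""
```

The point of the shape: everything downstream (translation, redaction, routing) works on the normalized report, so supporting a new platform means writing one adapter, not reworking your pipeline.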

What's true today

v1.0 ships against Ushahidi. Two pilots in the field. Everything else is on the roadmap, and labeled as such.

v1.0 · shipping · Live

In production today

  • Ushahidi V3 connector, OAuth2, end-to-end
  • Multilingual translation (Swahili-English shipped)
  • PII redaction, sensitivity holds, sanctions screening
  • Verifier dashboard with held-case routing
  • Reporter-reputation scoring (off by default)
  • Reproducible Uchaguzi-2022 benchmark
  • Docker Compose self-host
Pilots · 2 live

Real deployments

  • Mukurinji-2026 county elections
    Concluded · 1,847 reports processed, 68% verifier time saved on logistics categories
  • Hivos Uchaguzi 2027 prep
    Currently testing · Meru County election-day setup, expected go-live August 2027
  • Two further pilots in scoping
    Humanitarian response · floods · East Africa region
v1.1 · next · Q1 2027

On the roadmap

  • KoboToolbox connector
  • Coordinated-inauthentic-behavior detection
  • Mobile verifier app (read-only first)
  • French + Portuguese translation models
  • Quarterly bias-audit report
Where we draw lines

Six commitments we won't quietly walk back under deadline pressure.

Crisis-mapping runs in vulnerable contexts. The cost of a single mishandled report can be physical safety, deportation, retaliation, or stigma — for the reporter or the people they named. These commitments are architectural, not policy statements. The system enforces them by default.

  • Reporter safety

    Personal information never reaches the public map.

PII redaction is a workflow-level guardrail, not a post-hoc filter. A public post that still carries a phone number, name, or ID is blocked from publishing — every time (see the sketch after this list).

  • Sensitive content

    No auto-publishing of critical-severity flags. Ever.

    Graphic violence, sexual violence, child involvement, suicide content — these never bypass human review. Auto-approve is opt-in, low-stakes only, and bounded by the deployment's own threshold.

  • Verification claims

    Nothing is called "verified" without corroboration.

    A single source means "reported," not "verified." We don't inflate confidence to please a dashboard. The verification status reflects the evidence in hand.

  • Operator authority

    Verifier overrides are sacred, never penalized.

    Operators can override any classification, reject any drafted post, escalate any case. We record dissents in the audit trail; we never collect metrics that punish high override rates.

  • Data boundaries

    Reporter data never leaves your deployment.

    No selling, no sharing across tenants, no implicit federation. Reporter identity is stored as a hash, encrypted at rest, with per-tenant key derivation. Right-to-erasure honored within 7 days; 24 hours for safety-critical cases.

  • Accountable framing

    We don't claim our classifications are "the truth."

    They are interpretations grounded in evidence. The verifier approves. The audit trail makes the chain inspectable. The map shows what your team published, not what a model decided.
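
To make "architectural, not policy" concrete, here is a minimal Python sketch of how a publish gate could enforce the reporter-safety, sensitive-content, verification-claims, and data-boundaries commitments above. Every class, field, and function in it is a hypothetical stand-in for illustration, not FieldworkIQ's actual code.

```python
# Hypothetical sketch: every name below is an illustrative stand-in,
# not FieldworkIQ's actual implementation.
import hashlib
import hmac
from dataclasses import dataclass, field

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


@dataclass
class Report:
    text: str
    severity: str                     # e.g. "low", "medium", "critical"
    corroborating_sources: int = 1    # independent sources seen so far
    pii_spans: list[str] = field(default_factory=list)  # spans a detector flagged


def verification_status(report: Report) -> str:
    # Verification claims: a single source means "reported", not "verified".
    return "verified" if report.corroborating_sources >= 2 else "reported"


def can_auto_publish(report: Report, auto_approve_enabled: bool) -> bool:
    # Reporter safety: a post that still carries PII is blocked, every time.
    if report.pii_spans:
        return False
    # Sensitive content: critical-severity flags never bypass human review.
    if report.severity == "critical":
        return False
    # Auto-approve is opt-in and low-stakes only; the default is a human queue.
    return auto_approve_enabled


def derive_tenant_key(master_key: bytes, tenant_id: str) -> bytes:
    # Data boundaries: per-tenant key derivation (HKDF), so one tenant's
    # key never unlocks another tenant's data.
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=tenant_id.encode(),
    ).derive(master_key)


def reporter_hash(reporter_id: str, tenant_key: bytes) -> str:
    # Reporter identity is stored only as a keyed hash, never in the clear.
    return hmac.new(tenant_key, reporter_id.encode(), hashlib.sha256).hexdigest()
```

The design point is where the checks live: on the only path to publishing, not in a policy document someone reads later.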

The full list of ten commitments lives in the repo's ETHICS.md, along with the decision log of how they've evolved. We review it annually with external advisors and publicly disclose any material shortfall.

Get started

Three ways in, depending on what you're trying to find out.

For program officers

Talk to us about a pilot.

You run a deployment, you've read this far, you want to see whether FieldworkIQ fits. A 30-minute call covers your channels, your taxonomy, your verifier team, and whether a two-week trial makes sense.

hello@fieldworkiq.org
For verifier teams

See the dashboard yourself.

The interactive demo runs in your browser — no signup, no setup. Walk through five real cases, including a clear publish, a held judgement call, and a low-source logistics report, and feel the rhythm of the verifier's workflow.

Open the demo
For developers

Run it on your machine.

The repo has Docker Compose, a worked Ushahidi adapter, the eval harness, and the adapter SDK. Apache 2.0, no telemetry. Request dataset access, run docker compose up, and evaluate locally.

github.com/fieldworkiq
What a pilot looks like

Three weeks. Your deployment, your taxonomy, your verifiers — paired with us.

A pilot is the cheapest way for both sides to find out whether FieldworkIQ helps your team. No hosting fees during the pilot, no commitment past week three. Most teams have decided one way or the other by the end of week two.

1 · Discovery call
Week 0 · 30 min

We walk through your existing workflow. You show us your taxonomy, channels, and verifier load. We sketch what FieldworkIQ would do for you. Either side can stop here.

2 · Setup & training
Week 1 · 1 day shared

FieldworkIQ is deployed against your Ushahidi instance (managed or self-hosted, your call). Your verifiers get a one-hour walkthrough. We import your taxonomy and tag rules.

3 · Live trial
Weeks 2–3 · your normal load

Real reports flow through. Verifiers use FieldworkIQ for their actual queue. We monitor with you — a daily check-in on what's working, weekly metrics on time saved and held-case patterns.

4 · Decision
End of week 3

Continue (your team takes over, self-hosted or managed); pause (we hand over the data, you keep what was learned); or stop (no obligation either way).

Reach us

Where the project lives.

Working in election observation or crisis mapping?

We run a small mailing list: release notes, pilot write-ups, and benchmark updates. Roughly monthly. No marketing — just what shipped and what we learned.