FieldworkIQ talks to Ushahidi over the V3 REST API.
One OAuth2 service account, scoped to three permissions: read messages, write posts, read the deployment. No webhook gymnastics, no schema changes, no Ushahidi fork. Self-hosted via Docker, or run as a managed service.
- Translate · classify · geocode
- Corroboration lookup
- Sensitivity check
- Drafted Post for review
Create a service account in your Ushahidi admin.
In Ushahidi's admin UI, create a new OAuth2 Client with the client_credentials grant and three scopes: api, posts, messages. Save the client_id and client_secret.
FieldworkIQ authenticates as this service account, not as a human operator. Every action it performs is auditable as "FieldworkIQ" rather than attributable to whoever last rotated the credential.
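Authentication is a standard client_credentials exchange. A minimal sketch of building the token request, assuming the deployment exposes the usual /oauth/token endpoint (the base URL and credentials here are placeholders):

```python
import urllib.parse

def token_request(base_url: str, client_id: str, client_secret: str):
    """Build the OAuth2 client_credentials token request for the
    FieldworkIQ service account. The /oauth/token path and the scope
    string follow the setup above; verify both against your deployment."""
    url = base_url.rstrip("/") + "/oauth/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "api posts messages",  # the three scopes, nothing else
    })
    return url, body  # POST body, application/x-www-form-urlencoded

url, body = token_request("https://deploy.example.org", "fieldworkiq", "s3cret")
```

The returned access token is attached as a Bearer header on every subsequent call.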
Poll for pending messages, with a high-water mark.
Every 30 seconds (configurable), FieldworkIQ fetches messages newer than the last-seen created timestamp. The high-water mark is persisted, so restarts don't replay the queue. If a page comes back full (its length equals the request limit), FieldworkIQ drains the backlog immediately rather than waiting for the next tick.
Each message is checked against an idempotency index on (deployment_id, source_record_id). Duplicates are dropped silently. New messages are converted to FieldworkIQ's internal RawReport shape and enqueued for the verification workflow.
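The polling loop above can be sketched with the I/O injected, so the high-water-mark and dedup logic run without a live deployment (fetch_page and the message shape are stand-ins, not FieldworkIQ's actual internals):

```python
from dataclasses import dataclass, field

@dataclass
class Poller:
    """Sketch of the tick: advance a persisted high-water mark, drop
    duplicates via an idempotency index, and keep fetching while pages
    come back full."""
    deployment_id: str
    fetch_page: callable            # (since, limit) -> list of message dicts
    limit: int = 50
    high_water: str = ""            # last-seen created timestamp (persisted)
    seen: set = field(default_factory=set)

    def tick(self) -> list:
        fresh = []
        while True:
            page = self.fetch_page(self.high_water, self.limit)
            for msg in page:
                key = (self.deployment_id, msg["id"])
                if key in self.seen:        # duplicate: drop silently
                    continue
                self.seen.add(key)
                fresh.append(msg)
                self.high_water = max(self.high_water, msg["created"])
            if len(page) < self.limit:      # partial page: queue is drained
                return fresh                # full page: loop again immediately
```

In the real service, fresh messages would then be converted to RawReport and enqueued for verification.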
On approval, write the Post and mark the message processed.
When the verifier approves a case, FieldworkIQ writes a canonical Post via POST /api/v3/posts. The body is the standard Ushahidi V3 envelope: title, content, values keyed by form attribute, status published, geolocation, tags. FieldworkIQ stamps additional_data.fieldworkiq_run_id for traceability.
Immediately after, FieldworkIQ marks the source message processed via PUT /api/v3/messages/{id}. The pair is logged in FieldworkIQ's outbound_writes table for idempotency — see below.
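The write itself is a single POST with the envelope described above. A sketch of assembling it (the case shape and attribute keys are illustrative; check exact field names against your deployment's form schema):

```python
def build_post(case: dict, run_id: str) -> dict:
    """Assemble a V3 post envelope from an approved case. Top-level
    keys follow the prose above; `values` is keyed by form attribute,
    and geolocation typically travels inside `values` under the form's
    location attribute."""
    return {
        "title": case["title"],
        "content": case["content"],
        "status": "published",
        "values": dict(case["values"]),      # form-attribute keyed, incl. location
        "tags": case.get("tags", []),
        "additional_data": {
            "fieldworkiq_run_id": run_id,    # traceability stamp
        },
    }

envelope = build_post(
    {"title": "Clinic flooded", "content": "Report text",
     "values": {"severity": ["high"]}},
    run_id="run-0042",
)
```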
No duplicate Posts on retry.
Two safeguards, either of which is sufficient on its own. Together they handle the failure mode where the write succeeded but the response was lost.
1. Pre-write check. Before writing, FieldworkIQ queries Ushahidi for any existing Post with additional_data.fieldworkiq_run_id matching the current run. If found, it returns that Post and skips the write.
2. Write log. A Postgres outbound_writes table records (deployment_id, source_message_id, post_id, written_at, run_id) and is checked before every POST.
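The write-log safeguard is just a keyed lookup before every POST. An offline sketch, with the table stood in by a dict and the HTTP call injected (write_once and do_post are illustrative names, not FieldworkIQ's API):

```python
class WriteLog:
    """In-memory stand-in for the outbound_writes table: consult it
    before every POST so a retried run reuses the earlier post_id."""
    def __init__(self):
        self._rows = {}   # (deployment_id, source_message_id) -> post_id

    def existing_post(self, deployment_id, source_message_id):
        return self._rows.get((deployment_id, source_message_id))

    def record(self, deployment_id, source_message_id, post_id):
        self._rows[(deployment_id, source_message_id)] = post_id

def write_once(log, deployment_id, message_id, do_post):
    """do_post() performs the actual POST and returns the new post_id;
    it is injected here so the guard is testable without a network."""
    post_id = log.existing_post(deployment_id, message_id)
    if post_id is not None:
        return post_id            # safeguard fired: skip the duplicate write
    post_id = do_post()
    log.record(deployment_id, message_id, post_id)
    return post_id
```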
On 4xx responses, FieldworkIQ surfaces the validation error to the verifier with the source case still in the queue. On 5xx, exponential backoff up to three tries before paging ops.
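The 5xx path can be sketched as a small retry wrapper (ServerError stands in for a 5xx response, and the one-second base delay is an assumption; the prose only fixes the retry count at three):

```python
class ServerError(Exception):
    """Stand-in for a 5xx response from Ushahidi."""

def with_retries(call, tries=3, sleep=lambda seconds: None):
    """Run `call` with exponential backoff on server errors: up to
    `tries` attempts, doubling the wait each time, then re-raise so
    the failure pages ops. `sleep` is injected for testability;
    production code would pass time.sleep."""
    for attempt in range(tries):
        try:
            return call()
        except ServerError:
            if attempt == tries - 1:
                raise
            sleep(2 ** attempt)  # 1s after the first failure, 2s after the second
```

4xx responses are deliberately not retried here: they go back to the verifier, as above.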
Three permissions. Nothing else.
FieldworkIQ requests exactly the scopes it needs to do its job. The token is rejected for any other operation. If your Ushahidi admin wants to audit what the service account can touch, here it is.
Read deployment metadata
Forms, categories, tags, geolocation hints. FieldworkIQ caches the schema on startup and refreshes every hour.
Process inbound messages
Poll pending messages. Mark processed or archived after review. Cannot create or delete messages.
Write verified posts
Create new Posts on verifier approval. Cannot edit or delete Posts originated by other clients or operators.
FieldworkIQ cannot create users, change roles, modify the data providers Ushahidi pulls from, delete posts, or alter form schemas. If any of those are needed, your Ushahidi admin does them directly in the admin UI.
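In configuration terms, the grant is three scopes and nothing else. A hedged sketch of what that looks like (key names here are illustrative, not FieldworkIQ's actual config schema; consult config.example.yml in the repo):

```yaml
# Illustrative only; see config.example.yml for the real keys.
ushahidi:
  base_url: https://deploy.example.org
  client_id: fieldworkiq
  client_secret: ${USHAHIDI_CLIENT_SECRET}
  scopes: [api, posts, messages]   # read deployment, process messages, write posts
```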
Every case runs through PII redaction and a sanctions screen against OFAC, EU, and UN lists before it reaches a verifier. A positive sanctions hit freezes the outbound write until your team's escalation procedure is followed. Every step is logged in the audit trail your donor reports build from.
Run it yourself, or let us run it.
FieldworkIQ ships as Docker Compose. A single host runs the workflow, queue, Postgres, and Redis. Point it at any Ushahidi deployment with a service-account credential.
Docker Compose, your infrastructure.
- One docker compose up on any Linux host (2 vCPU, 4 GB RAM minimum).
- Data never leaves your VPC. Reports, traces, verifier actions all local.
- BYO model keys (OpenAI, Anthropic, or self-hosted via vllm).
- Apache 2.0 license. No usage telemetry.
$ git clone https://github.com/fieldworkiq/fieldworkiq
$ cd fieldworkiq
$ cp config.example.yml config.yml
$ docker compose up -d
# dashboard at https://localhost:8443
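For orientation, the single-host compose file has roughly this shape (service names and image versions are illustrative; the repository's docker-compose.yml is authoritative):

```yaml
# Illustrative shape only; see the repo's docker-compose.yml.
services:
  app:                       # workflow engine + verifier dashboard
    image: fieldworkiq/fieldworkiq
    ports: ["8443:8443"]     # dashboard, per the quickstart above
    depends_on: [postgres, redis]
  worker:                    # queue consumer running the verification workflow
    image: fieldworkiq/fieldworkiq
    command: worker
    depends_on: [postgres, redis]
  postgres:
    image: postgres:16
  redis:
    image: redis:7
```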
We run the workflow, you keep your data.
- Hosted on EU or Africa region, your choice.
- Per-deployment isolation. Single-tenant database, scoped credentials.
- Sub-processor list disclosed; DPA available before any deployment.
- For pilots only. Long-term we expect most users to self-host.
FieldworkIQ is Apache 2.0. Read it, fork it, run it on your own iron.
The workflow engine, the Ushahidi adapter, the verifier dashboard, the benchmark harness — all in one repository. Pull requests welcome. If your organization needs FieldworkIQ to read a platform other than Ushahidi (KoboToolbox, ODK Central), the adapter contract is documented in the repo.
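For a feel of what a new adapter involves, the contract boils down to reading raw records past a high-water mark and acknowledging them after the verified Post is written. A Python sketch (method names here are illustrative; the adapter contract documented in the repo is authoritative):

```python
from typing import Iterable, Protocol, runtime_checkable

@runtime_checkable
class SourceAdapter(Protocol):
    """What a source-platform adapter (Ushahidi, KoboToolbox, ODK
    Central) has to provide. Names are illustrative, not the repo's
    actual interface."""

    def fetch_new(self, since: str) -> Iterable[dict]:
        """Return raw records created after the high-water mark."""
        ...

    def mark_processed(self, record_id: str) -> None:
        """Acknowledge a record once its verified Post is written."""
        ...
```

Any class providing these two methods can slot into the polling and write paths described above.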