Connector · Ushahidi V3 Shipping

FieldworkIQ talks to Ushahidi over the V3 REST API.

One OAuth2 service account, scoped to three permissions: read messages, write posts, read the deployment. No webhook gymnastics, no schema changes, no Ushahidi fork. Self-hosted via Docker, or run as a managed service.

Data flow
Ushahidi · inbound
GET /api/v3/messages
Pending messages. Polled every 30s with a high-water mark.
FieldworkIQ workflow
verification_triage_v1
  • Translate · classify · geocode
  • Corroboration lookup
  • Sensitivity check
  • Drafted Post for review
Ushahidi · outbound
POST /api/v3/posts
On verifier approval. Idempotent. Marks the source message processed.
All operations against the standard Ushahidi V3 API. No platform modifications required.
Workflow orchestrated by Smith — budget caps, guardrails, tool-authorization, per-case audit trace.
01
Set up the OAuth2 client

Create a service account in your Ushahidi admin.

In Ushahidi's admin UI, create a new OAuth2 Client with the client_credentials grant and three scopes: api, posts, messages. Save the client_id and client_secret.

FieldworkIQ authenticates as this service account, not as a human operator. Every action it performs is auditable as "FieldworkIQ" rather than attributable to whoever last rotated the credential.
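The credential exchange above is a plain OAuth2 client_credentials grant. A minimal sketch, assuming Ushahidi's standard `/oauth/token` endpoint; `DEPLOYMENT` and the credentials are placeholders:

```python
# Sketch only: the endpoint path and field names assume the standard
# OAuth2 client_credentials flow; DEPLOYMENT is a placeholder URL.
import json
import urllib.request

DEPLOYMENT = "https://deployment.example.org"

def build_token_request(client_id: str, client_secret: str) -> tuple[str, dict]:
    """Assemble the client_credentials grant request for the service account."""
    payload = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "api posts messages",  # exactly the three scopes from step 01
    }
    return f"{DEPLOYMENT}/oauth/token", payload

def fetch_token(client_id: str, client_secret: str) -> str:
    """Exchange the client credentials for a bearer token."""
    url, payload = build_token_request(client_id, client_secret)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

The token, not the secret, is what every subsequent API call carries; rotating the secret invalidates nothing in flight.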

02
Inbound · read messages

Poll for pending messages, with a high-water mark.

Every 30 seconds (configurable), FieldworkIQ fetches messages newer than the last-seen created timestamp. The high-water mark is persisted; restarts don't replay the queue. If a page returns the limit, FieldworkIQ drains immediately rather than waiting for the next tick.

Each message is checked against an idempotency index on (deployment_id, source_record_id). Duplicates are dropped silently. New messages are converted to FieldworkIQ's internal RawReport shape and enqueued for the verification workflow.
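The poll-dedupe-drain loop can be sketched in a few lines. `fetch_page` stands in for GET /api/v3/messages, and the in-memory `seen_ids` set stands in for the persisted idempotency index:

```python
# Sketch only: fetch_page and seen_ids are hypothetical stand-ins for the
# Ushahidi messages endpoint and the persisted idempotency index.
def poll_once(fetch_page, high_water_mark, seen_ids, page_limit=200):
    """One tick: fetch past the mark, dedupe, drain while pages come back full."""
    new_reports = []
    while True:
        page = fetch_page(after=high_water_mark, limit=page_limit)
        for msg in page:
            # advance the high-water mark so restarts don't replay the queue
            high_water_mark = max(high_water_mark, msg["created"])
            key = (msg["deployment_id"], msg["id"])
            if key in seen_ids:      # duplicate: drop silently
                continue
            seen_ids.add(key)
            new_reports.append(msg)  # becomes a RawReport and is enqueued
        if len(page) < page_limit:   # short page: queue drained, wait for next tick
            return new_reports, high_water_mark
```

The dedupe set makes the boundary case harmless: a message whose timestamp equals the mark may be fetched twice but is enqueued once.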

03
Outbound · write posts

On approval, write the Post and mark the message processed.

When the verifier approves a case, FieldworkIQ writes a canonical Post via POST /api/v3/posts. The body is the standard Ushahidi V3 envelope: title, content, values keyed by form attribute, status published, geolocation, tags. FieldworkIQ stamps additional_data.fieldworkiq_run_id for traceability.

Immediately after, FieldworkIQ marks the source message processed via PUT /api/v3/messages/{id}. The pair is logged in FieldworkIQ's outbound_writes table for idempotency — see below.
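The outbound pair can be sketched as two pure request builders. Field names follow the envelope described above; the exact attribute keys depend on your deployment's form schema, and the PUT body is an assumption about how "processed" is expressed:

```python
# Sketch only: case, run_id, and the "processed" status body are
# hypothetical; values must be keyed by your deployment's form attributes.
def build_post_body(case: dict, run_id: str) -> dict:
    """Assemble the canonical Post envelope for POST /api/v3/posts."""
    return {
        "title": case["title"],
        "content": case["summary"],
        "status": "published",
        "values": case["values"],  # keyed by form attribute, incl. geolocation
        "tags": case["tags"],
        "additional_data": {"fieldworkiq_run_id": run_id},  # traceability stamp
    }

def mark_processed_request(deployment: str, message_id: int) -> tuple[str, str, dict]:
    """The follow-up write: PUT /api/v3/messages/{id} flips the source message."""
    url = f"{deployment}/api/v3/messages/{message_id}"
    return "PUT", url, {"status": "processed"}
```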

04
Idempotency · dual-write safety

No duplicate Posts on retry.

Two safeguards, either of which is sufficient on its own. Together they handle the failure mode where the write succeeded but the response was lost.

  1. Pre-write check. Before writing, FieldworkIQ queries Ushahidi for any existing Post with additional_data.fieldworkiq_run_id matching the current run. If found, return that Post and skip.
  2. Write log. A Postgres outbound_writes table records (deployment_id, source_message_id, post_id, written_at, run_id). Checked before every POST.

On 4xx responses, FieldworkIQ surfaces the validation error to the verifier with the source case still in the queue. On 5xx, exponential backoff up to three tries before paging ops.
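The retry policy above fits in one function. `do_write` and `page_ops` are hypothetical stand-ins for the POST /api/v3/posts call and the ops pager:

```python
# Sketch only: do_write and page_ops stand in for the real HTTP call
# and the ops pager; sleep is injectable so the policy is testable.
class ValidationError(Exception):
    """4xx: surfaced to the verifier, case stays in the queue."""

def write_with_retry(do_write, page_ops, max_tries=3, base_delay=1.0,
                     sleep=lambda s: None):
    for attempt in range(max_tries):
        status, body = do_write()
        if status < 400:
            return body                      # write succeeded
        if status < 500:
            raise ValidationError(f"{status}: {body}")
        sleep(base_delay * 2 ** attempt)     # 5xx: exponential backoff
    page_ops(f"outbound write failed after {max_tries} tries")
    return None
```

Note the asymmetry: 4xx is treated as a fact about the payload and never retried, while 5xx is treated as transient and retried before paging.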

Scopes

Three permissions. Nothing else.

FieldworkIQ requests exactly the scopes it needs to do its job. The token is rejected for any other operation. If your Ushahidi admin wants to audit what the service account can touch, here it is.

api read

Read deployment metadata

Forms, categories, tags, geolocation hints. FieldworkIQ caches the schema on startup and refreshes every hour.

GET /api/v3/forms · /tags · /categories
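The cache-on-startup, refresh-hourly behavior is a plain TTL cache. A minimal sketch; `fetch_schema` is a hypothetical stand-in for the three GET calls above:

```python
# Sketch only: fetch_schema stands in for GET /api/v3/forms, /tags,
# /categories; the clock is injectable so expiry is testable.
import time

class SchemaCache:
    TTL = 3600  # refresh every hour

    def __init__(self, fetch_schema, clock=time.monotonic):
        self._fetch = fetch_schema
        self._clock = clock
        self._schema = None
        self._fetched_at = -float("inf")  # forces a fetch on first use

    def get(self):
        """Return the cached schema, refetching once the TTL has elapsed."""
        if self._clock() - self._fetched_at >= self.TTL:
            self._schema = self._fetch()
            self._fetched_at = self._clock()
        return self._schema
```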
messages read · update status

Process inbound messages

Poll pending messages. Mark processed or archived after review. Cannot create or delete messages.

GET /api/v3/messages · PUT .../{id}
posts create

Write verified posts

Create new Posts on verifier approval. Cannot edit or delete Posts originated by other clients or operators.

POST /api/v3/posts
What's explicitly excluded
admin · users · roles · data_providers · post_delete · forms_write

FieldworkIQ cannot create users, change roles, modify the data providers Ushahidi pulls from, delete posts, or alter form schemas. If any of those are needed, your Ushahidi admin does them directly in the admin UI.

Compliance built in
OFAC SDN · EU consolidated · UN sanctions · PII redaction · Audit trail

Every case runs through PII redaction and a sanctions screen against OFAC, EU, and UN lists before it reaches a verifier. A positive sanctions hit freezes the outbound write until your team's escalation procedure is followed. Every step is logged in the audit trail your donor reports build from.

Deployment

Run it yourself, or let us run it.

FieldworkIQ ships as Docker Compose. A single host runs the workflow, queue, Postgres, and Redis. Point it at any Ushahidi deployment with a service-account credential.

Self-hosted
Open source

Docker Compose, your infrastructure.

  • One docker compose up on any Linux host (2 vCPU, 4GB RAM minimum).
  • Data never leaves your VPC. Reports, traces, verifier actions all local.
  • BYO model keys (OpenAI, Anthropic, or self-hosted via vLLM).
  • Apache 2.0 license. No usage telemetry.
$ git clone https://github.com/fieldworkiq/fieldworkiq
$ cd fieldworkiq
$ cp config.example.yml config.yml
$ docker compose up -d
# dashboard at https://localhost:8443
Managed
Pilot-only

We run the workflow, you keep your data.

  • Hosted on EU or Africa region, your choice.
  • Per-deployment isolation. Single-tenant database, scoped credentials.
  • Sub-processor list disclosed; DPA available before any deployment.
  • For pilots only. Long-term we expect most users to self-host.
Open source

FieldworkIQ is Apache 2.0. Read it, fork it, run it on your own iron.

The workflow engine, the Ushahidi adapter, the verifier dashboard, the benchmark harness — all in one repository. Pull requests welcome. If your organization needs FieldworkIQ to read from a platform other than Ushahidi (KoboToolbox, ODK Central), the adapter contract is documented in the repo.