RDToolkit: Data Architecture and APIs

The Rapid Diagnostics Toolkit is architected to provide three tiers of data capture and reporting for different stakeholders. 

  • Point of Care data about individual test sessions

  • Program RDT Usage data about a wider set of ongoing tests being performed, and their results, to give programs insight into how RDTs are being utilized

  • Detailed RDT Capture Session forensics metadata about individual capture sessions, used to improve tools and try new approaches

Point of Care Data

The Rapid Diagnostics Toolkit is intended to be used alongside digital health systems like CommCare, OpenSRP, or DHIS2, which have existing Android applications and will be integrating RDT data as part of Point of Care service delivery (health visits, disease management campaigns, etc.).

Since these apps already have their own complex, structured data models, the Rapid Diagnostics Toolkit is designed to be as simple and portable as possible, to ensure that it can be integrated with existing workflows - a realistic scenario piloted during User Acceptance Testing. As such, this data model provides the primary, scalar data associated with a test, with the intent that this data be available to whatever backend data system is being used for program delivery:

  • Test Type

  • Timing Data (Start time, expiration time, resolution time, capture time)

  • Cropped Cassette Image 

  • User entered test result

  • Automated classifier test result 
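For illustration, this scalar model can be sketched as a simple data structure. The class and field names below are assumptions chosen to mirror the list above; they are not the Support Library’s actual domain classes.

    import java.util.Date

    // Illustrative sketch only: names mirror the list above, not the
    // Support Library's published domain classes.
    data class PointOfCareRecord(
        val testType: String,                  // identifier for the RDT profile used
        val timeStarted: Date,                 // when the test session began
        val timeExpired: Date,                 // when the result is no longer valid to read
        val timeResolved: Date,                // when the test was ready to interpret
        val timeCaptured: Date?,               // when the cassette image was captured, if any
        val croppedCassetteImageUri: String?,  // reference to the cropped cassette image
        val userEnteredResults: Map<String, String>,  // user-entered test result(s)
        val classifierResults: Map<String, String>?   // automated classifier result(s), if run
    )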

This interoperability is implemented via inter-process communication (IPC) over Android’s Intent layer and is exposed through a published “Support Library” that manages the data IPC, providing implementing systems with a simple, two-line integration path to access all of the data in the Primary Domain Model and incorporate the diagnostic outcomes and workflow data as best suits their applications.
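The sketch below illustrates the underlying Intent-based IPC from an implementing Android app. The action string and extra keys are placeholders rather than the Support Library’s published contract, which wraps this exchange into roughly two lines of integration code.

    import android.app.Activity
    import android.content.Intent

    // Placeholder action and extra names; the published Support Library wraps this
    // Intent exchange so implementing apps do not handle raw extras directly.
    class VisitActivity : Activity() {

        private val rdtRequestCode = 1

        fun launchRdtCaptureSession() {
            val intent = Intent("org.rdtoolkit.action.CAPTURE").apply {
                putExtra("rdt_config_test_profile", "example_test_profile")
            }
            startActivityForResult(intent, rdtRequestCode)
        }

        override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
            super.onActivityResult(requestCode, resultCode, data)
            if (requestCode == rdtRequestCode && resultCode == RESULT_OK && data != null) {
                // Scalar results from the Primary Domain Model arrive as Intent extras.
                val userResult = data.getStringExtra("rdt_result_user")
                val classifierResult = data.getStringExtra("rdt_result_classifier")
                val croppedImageUri = data.getStringExtra("rdt_result_image")
                // ...incorporate into the host app's own structured record
            }
        }
    }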

Program RDT Usage Data

Another use case for the Rapid Diagnostics Toolkit is supporting organizations that want to incorporate the Toolkit into managing broader efforts to apply RDTs. These organizations are interested in identifying patterns in how tests are applied: how many tests of each type are occurring, whether tests are being applied effectively, what aggregate results they produce, whether users are successfully utilizing classifiers, and so on.

These non-identifiable data models are sent from the Rapid Diagnostics Toolkit through structured HTTP APIs, rather than over on-device IPC. Implementing organizations can either implement these API patterns in their own backends, or use a separate service to manage this data.
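As a rough illustration of that pattern, a submission might look like the following; the endpoint path and payload are hypothetical, not a documented API.

    import java.net.HttpURLConnection
    import java.net.URL

    // Hypothetical endpoint and payload: illustrates posting a non-identifiable
    // usage record to whichever backend an implementing organization operates.
    fun submitUsageRecord(baseUrl: String, sessionJson: String) {
        val connection = URL("$baseUrl/api/rdt/sessions").openConnection() as HttpURLConnection
        try {
            connection.requestMethod = "POST"
            connection.doOutput = true
            connection.setRequestProperty("Content-Type", "application/json")
            connection.outputStream.use { it.write(sessionJson.toByteArray(Charsets.UTF_8)) }
            check(connection.responseCode in 200..299) {
                "Usage record submission failed with HTTP ${connection.responseCode}"
            }
        } finally {
            connection.disconnect()
        }
    }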

This separation allows the Rapid Diagnostics Toolkit to capture significantly more complex datasets without encumbering ease of adoption into Android apps. It also allows physically very large data to be captured and submitted on a separate channel from operational data, so it can trickle in slowly when possible without blocking data submissions in health apps until it is complete.

Captured Session Record models provide the following in addition to the Point of Care Data (a rough sketch follows the list):

  • Session configuration details 

  • Full, uncropped images 

  • Metrics for the capture session


    • Number of image capture attempts

    • Whether the user viewed the RDT instructions

    • Whether the user viewed the interpretation job aid

    • App info: Android Device Model / OS Version / App Version
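Building on the PointOfCareRecord sketch above, a Captured Session Record might be modeled roughly as follows; field names are illustrative rather than the Toolkit’s actual wire format.

    // Illustrative Captured Session Record: fields mirror the list above,
    // not the Toolkit's actual wire format.
    data class CapturedSessionRecord(
        val pointOfCareData: PointOfCareRecord,         // scalar data from the primary model
        val sessionConfiguration: Map<String, String>,  // session configuration details
        val rawImageUris: List<String>,                 // full, uncropped images
        val captureAttempts: Int,                       // number of image capture attempts
        val viewedInstructions: Boolean,                // whether RDT instructions were viewed
        val viewedInterpretationJobAid: Boolean,        // whether the job aid was viewed
        val deviceModel: String,                        // e.g. android.os.Build.MODEL
        val osVersion: String,                          // e.g. android.os.Build.VERSION.RELEASE
        val appVersion: String                          // implementing app / toolkit version
    )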

Detailed RDT Capture Session Data

The last data stakeholder expected for CloudWorks is organizations that want highly detailed metadata about RDT Result Capture sessions in order to improve the associated services or software components. This data is also communicated through HTTP APIs, but only when configured per session, since the resulting datasets can be extremely large or complex.

When configured, this forensics data is submitted to an HTTP API in a documented “event log” data model, which encodes time-stamped events that occurred during a session capture along with additional metadata about the capture. The data format is flexible, and which events are logged depends on the user’s path through the app. If a classifier is used, for example, the full metadata of the classifier is provided as a JSON element keyed to the associated image. If configured, this channel will provide every image capture attempt, not just the user’s final attempt.
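A minimal sketch of what one entry in such an event log might look like; the event types and parameter keys are placeholders, not the documented schema.

    // Placeholder event types and parameter keys; the documented "event log"
    // schema defines the actual contract.
    data class CaptureSessionEvent(
        val sessionId: String,       // ties the event to a specific capture session
        val timestamp: Long,         // epoch milliseconds when the event occurred
        val eventType: String,       // e.g. "instructions_viewed", "image_capture_attempt"
        val parameters: Map<String, String> = emptyMap()  // event-specific metadata
    )

    // Example: recording a classifier run, with its output keyed to an image attempt.
    val classifierEvent = CaptureSessionEvent(
        sessionId = "session-123",
        timestamp = System.currentTimeMillis(),
        eventType = "classifier_result",
        parameters = mapOf(
            "image_id" to "attempt-2",
            "classifier_output" to """{"control_line": true, "test_line": false}"""
        )
    )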