How to recommend ayodee to your clients
The research on digital self-monitoring for substance use is persuasive. The implementation gap, between knowing it works and successfully integrating it into clinical practice in a way clients actually engage with, is where most of the practical challenge lies.
This is not primarily a technology problem. Clients who adopt a self-monitoring tool and use it consistently for two or three weeks almost universally find it useful. The barrier is the two or three weeks. Getting to consistent early use requires the right introduction, the right framing, and an honest understanding of why clients disengage.
Why the standard introduction fails
The most common way clinicians introduce digital tools is as homework: "Between now and next time, I'd like you to try this app and track what you're using." This approach has several problems.
It positions the tool as something the clinician wants, not something the client might find useful. It creates a compliance dynamic (the client either does the homework or doesn't) rather than an autonomy-supporting one. It doesn't address the implicit message the client may receive: that the clinician has decided monitoring is the right next step, regardless of where the client is.
For precontemplative clients in particular, being assigned a monitoring task can feel like a pre-emptive conclusion about their use, as if the clinician has already decided there's a problem worth tracking. This is particularly counterproductive in MI contexts where the explicit goal is to support the client's own evaluation process rather than guide them toward a predetermined outcome.
The framing that works
The most effective framing treats self-monitoring as an information-gathering exercise in the client's own interest, not as a behaviour change intervention or a clinical task.
A useful structure:
Lead with the client's stated curiosity, not the clinical rationale. If a client has expressed any uncertainty about their use ("I'm not sure how much I actually drink," "I know it's more than it used to be," "I've noticed I'm drinking more on weekends"), that expressed uncertainty is the entry point: "You mentioned you're not sure how much you're actually drinking. Would it be useful to actually find out?"
Separate monitoring from commitment. Explicitly: "You don't need to have decided anything about your drinking to do this. You're just collecting data." This is accurate, and it's important. Many clients who would benefit from monitoring resist it because they interpret it as an implicit commitment to change. The separation is not a clinical sleight of hand; it's true, and the evidence supports it: monitoring without a change goal still changes behaviour.
Give the data ownership to the client. "This is for your information, not mine." If the tool supports optional reporting to you, introduce that as exactly that: optional, at the client's discretion. The monitoring is not about accountability to you; it's about the client understanding something about themselves they currently can't see clearly.
Normalise what they'll find. Most clients will find their actual consumption is different from their estimate: often higher, and frequently more patterned than they expected. Normalising this in advance removes the shame dimension: "Most people find their use is a bit different from what they thought. It's not a test; it's just interesting to see what the data actually shows."
Handling resistance
Resistance to monitoring usually takes one of a few forms, each with a useful response.
"I already know how much I drink." Most people don't, with any accuracy, and the most useful response is curious rather than challenging: "That might be right; this would just confirm it. Would it be useful to have that as actual data rather than an estimate?" Inviting confirmation rather than correction removes the implicit challenge.
"I'm worried about privacy." This is a legitimate concern and deserves a direct response, not reassurance. Be specific about what the tool does and doesn't store. An app that requires no name, email, or identifying information provides a qualitatively different level of privacy protection than one with a conventional account, and being able to explain that distinction specifically is more reassuring than a general "it's private."
"I tried an app before and didn't use it." The question here is what got in the way. Often it's the wrong type of app: a sobriety tracker presented to someone who wasn't thinking about sobriety, or a complex tool that created too much friction. "This one takes about 90 seconds a day. Let's look at it together now so you know what you're doing."
Silence or passive agreement. A client who says "sure, I'll try it" without any apparent engagement is worth gently pressing: "Does this actually sound useful to you, or am I pushing something you're not interested in?" MI-consistent checking of genuine buy-in at the point of introduction prevents the dynamic where the homework is nominally accepted but never completed.
Making it practical in session
The single most effective thing you can do to increase completion rates is to walk through the setup in session. A client who has registered, logged their first entry, and seen what the interface looks like before they leave is vastly more likely to continue than one who has a recommendation to try something at home.
This doesn't require much time. A three-to-five-minute in-session walkthrough, particularly of the first entry, addresses the practical unfamiliarity that is the most common barrier to initial use. If you're providing an access code, hand it over in session and watch the client register.
The QR code approach is particularly useful here: a printed card the client can scan immediately creates a seamless entry point without requiring them to search for or download anything at home.
Integrating monitoring data into subsequent sessions
Once a client is monitoring consistently, the question is how to use that data in sessions without it becoming the entire focus or creating a report-card dynamic.
A useful approach is to let the client lead: "You've been tracking for a couple of weeks. What have you noticed?" This gives ownership of the data back to the client and surfaces what they found meaningful, rather than what you might have expected to find.
The data is most useful as a prompt for collaborative exploration, not as clinical evidence to be interpreted for the client. A client who notices that their use is consistently higher on Thursdays is in a much better position to explore what Thursday represents than one who is told by their clinician that the data shows elevated Thursday consumption.
The validated assessment scores (if the tool includes instruments like the AUDIT or DASS-21) can be reviewed together as population-context information: "Your AUDIT score has moved from 14 to 9 over the past six weeks. That's a meaningful shift. How does that sit with you?" This returns the significance of the data to the client rather than rendering a clinical verdict.
The goal throughout is that the monitoring remains something the client is doing for themselves, with you as a collaborator in making sense of what they find, not something they're doing for you.
ayodee provides printable QR code cards for clinicians to hand to clients in session. Clients scan, register anonymously, and can complete their first entry immediately. Bulk access codes available for practices and services.
Want to see your own patterns?
ayodee is a 90-second daily diary for your substance use, mood, and sleep. Anonymous, no email required. Free to start.
Try ayodee free