Redcare: Using hypothesis-driven analytics to rethink how a pharmacy app earns attention.
Nobody had defined what each placement was for. That was the whole problem.
Redcare Pharmacy is one of Europe's largest online pharmacies, operating across six markets with a consumer app serving millions of users. The brief when MVST came in was to improve the in-app campaign and engagement strategy. Before accepting that scope at face value, I wanted to understand what was actually there.
What the audit surfaced
The app had a full set of promotional placements: home screen banners and sliders, quick entry categories, coupons, brand teasers, inbox messages, push notifications, and NPS prompts. Nobody had done a rigorous assessment of whether they were working together as a system.
In Germany alone, up to ten banners were being displayed per session, with three slots permanently occupied. Functional flows like e-prescription entry, loyalty program access, and welcome discounts were living inside promotional banner slots, crowding out actual promotional inventory. Sponsored campaigns were rotated mainly to hit impression commitments. There were no fatigue rules, no exclusion logic, and no journey-based thinking: a user who had already converted on a promotion would still receive a push about it the next day.
The individual placements weren't broken in isolation. The system was broken because nobody had defined what each placement was for, what it was not for, and what should happen when they competed with each other. The app was communicating at volume, not at relevance.
In a pharmacy context, that distinction matters more than in most categories. A user who encounters three promotional surfaces while trying to find their prescription flow has a fundamentally different experience of the brand than one who sees a well-timed, relevant offer. Trust is the product. Every irrelevant notification is a small withdrawal from something that took years to build.
The strategy
I ran a full audit of every placement type and documented each one the same way: its purpose, its non-purpose, its current problems, its targeting logic, its KPIs and decision thresholds, its governance model, and its quick wins. Writing down the non-purpose was the most important part. Sliders are for time-bound promotions, seasonal health messages, and sponsored brand campaigns. They are not for loyalty entry, e-prescription flows, or first-purchase welcome offers. Quick entries are for category discovery. They are not for coupons or functional flows. This sounds obvious. It had not been done.
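To make the documentation format concrete, here is a minimal sketch of one audit record as a data structure. The field names mirror the categories above; the slider's purpose and non-purpose values come from this write-up, while the schema itself and the remaining example values are illustrative, not the actual Confluence template.

```python
from dataclasses import dataclass, field

@dataclass
class PlacementAudit:
    """One record per placement type, mirroring the audit categories."""
    name: str
    purpose: list[str]                # what this placement is for
    non_purpose: list[str]            # what it is explicitly NOT for
    current_problems: list[str]
    targeting_logic: str
    kpi_thresholds: dict[str, float]  # KPI name -> decision threshold
    governance: str                   # who decides what runs here
    quick_wins: list[str] = field(default_factory=list)

slider = PlacementAudit(
    name="home slider",
    purpose=["time-bound promotions", "seasonal health messages",
             "sponsored brand campaigns"],
    non_purpose=["loyalty entry", "e-prescription flows",
                 "first-purchase welcome offers"],
    current_problems=["functional flows occupying promotional slots"],
    targeting_logic="segment and season based",  # illustrative value
    kpi_thresholds={"ctr_good": 0.025, "ctr_act_below": 0.012},
    governance="CRM owns rotation; other teams request slots",  # illustrative
    quick_wins=["move loyalty entry to a dedicated surface"],
)
```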
From there, the fixes followed a consistent logic across placements. Capping visible banners at three to five per session and replacing ad hoc manual rotation with a priority scoring system: time sensitivity, user segment relevance, contractual obligation, and a fatigue penalty that lowered a campaign's priority as repeated exposure stopped converting. Moving functional flows out of promotional slots and into dedicated surfaces. Adding journey-based exclusion rules so users who had already converted stopped seeing the same placements. Capping sponsored inventory at one visible banner at a time, with impression limits per user per week.
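As a sketch of that logic, not the production implementation: the four scoring inputs below are the ones named above, while the specific weights, the exponential fatigue decay, and the cap of five visible slots are illustrative assumptions.

```python
import math

def campaign_priority(
    time_sensitivity: float,   # 0..1, rises as the campaign window closes
    segment_relevance: float,  # 0..1, fit between campaign and user segment
    contractual: bool,         # sponsored campaign with impression commitments
    impressions_seen: int,     # prior exposures for this user
    converted: bool,           # journey state: already converted on this promo?
) -> float:
    """Score one campaign for one user; higher score = earlier slot."""
    if converted:
        return 0.0  # exclusion rule: converted users stop seeing the placement
    base = (0.4 * time_sensitivity
            + 0.4 * segment_relevance
            + (0.2 if contractual else 0.0))
    fatigue = math.exp(-0.5 * impressions_seen)  # penalty grows with exposure
    return base * fatigue

def fill_session_slots(scores: dict[str, float], cap: int = 5) -> list[str]:
    """Cap visible banners per session, highest priority first."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, score in ranked[:cap] if score > 0]
```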
Each placement type also got structured hypotheses with explicit risk statements and pre-defined KPI thresholds. For banner CTR: above 2.5% was good, below 1.2% was a signal to act. The thresholds were starting benchmarks rather than settled targets, but defining them upfront forced the team to agree on what "working" meant before measuring anything.
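Expressed as code, the decision rule is small. The two thresholds are the ones stated above; the middle "watch" band and the action labels are assumptions about how a team might phrase them.

```python
def banner_ctr_verdict(ctr: float) -> str:
    """Map a measured banner CTR to the pre-agreed decision."""
    if ctr >= 0.025:
        return "good: keep running"
    if ctr < 0.012:
        return "act: re-target, refresh creative, or retire"
    return "watch: within tolerance, review next cycle"
```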
Two decisions in particular shaped the implementation. The first was timing the analysis to feed the Leanplum-to-CleverTap migration happening in parallel. Rather than carry the existing problems into a new system, the audit created the opportunity to configure CleverTap around a better placement strategy from day one. The second was scoping the governance rollout to Germany first. A six-market strategy would have been easier to produce and harder to implement.
What I'd do differently
The KPI thresholds were marked as starting points because not all placements had clean historical data at the time of the audit. In hindsight, I'd instrument the placements earlier in the engagement so the analysis could draw on real performance data rather than benchmarks that needed validating after delivery.
I'd also push earlier and harder for a unified view of the aggregate communication load per user. The individual placement audits were strong. What was harder to produce was a single picture of what a given user experienced across all surfaces in one session, because the data lived in different systems. That cross-surface view would have made the governance recommendations more concrete and testable from day one.
The outcome
Overall app engagement increased by 8.6% after the optimized placement hierarchy went live. Conversion across promotional surfaces lifted by 3.4% within three months. The process improvements saved the team an average of nine hours per person per week across the six markets.
The full Confluence documentation became the foundation for a stakeholder presentation delivered across CRM, Category Management, Product, and Tech: six market-specific teams, each with their own campaign calendars and local dependencies. Getting a strategy that any of them could act on required the audit to be rigorous enough that the recommendations weren't abstract.
