The Next Question Was "What Specifically Works?"
Beatport 110 proved that the structure works.
But what those 110 buyers responded to was still unknown. The track structure? The title? The genre tags? Algorithmic chance? To make the result reproducible, the variables had to be isolated.
"Apply AB testing to music" — in my work, this is standard methodology. Why I had not applied it to music until now is, in itself, the strange thing.
Design — 4 Axes, 20 Patterns
I decomposed the variables into 4 axes, which combined into 20 patterns. Every pattern was capped at 1 minute, since platform retention measurement is most accurate at that length. All patterns included visuals; comparison against text-only posts was deferred to a later phase.
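To make the pattern grid concrete, here is a minimal sketch of how 20 patterns can be enumerated from 4 axes. The axis names and level values are hypothetical stand-ins, since the post does not name the real ones, and the sampling step is an assumption: it treats the 20 patterns as a reproducible subset of a larger full factorial.

```python
import itertools
import random

# Hypothetical axes and levels -- placeholders, since the post
# does not name the real 4 axes or their values.
AXES = {
    "structure": ["drop_first", "build_first"],
    "title": ["descriptive", "abstract"],
    "genre_tags": ["techno", "melodic_techno", "house"],
    "visual": ["waveform", "loop_clip"],
}

# Full factorial over the axes: 2 * 2 * 3 * 2 = 24 combinations.
full_factorial = [
    dict(zip(AXES, combo)) for combo in itertools.product(*AXES.values())
]

# The experiment used 20 patterns; one way to get there is a
# seeded sample of the full factorial (an assumption here, not
# the author's stated method).
random.seed(23)
patterns = random.sample(full_factorial, k=20)

for i, p in enumerate(patterns, start=1):
    print(f"pattern_{i:02d}: {p}")
```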
Execution — TikTok and YouTube Shorts in Parallel
The same 20 patterns were posted in parallel to TikTok and YouTube Shorts. Differences between the two platforms' algorithms were treated as an observed variable rather than controlled.
| Item | TikTok | YouTube Shorts |
|---|---|---|
| Posts | 20 patterns | 20 patterns (identical) |
| Length | 60 sec fixed | 60 sec fixed |
| Visuals | Per-pattern, same source material | Per-pattern, same source material |
| Metrics | Full-view rate, save rate, share rate | Retention rate, CTR, impressions |
| Window | 72h post-publish | 72h post-publish |
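The two platforms report different metrics, so some normalization is needed before patterns can be compared on one scale. Below is a minimal sketch, assuming each metric is z-scored across the patterns and averaged into a single composite score per pattern; the metric names mirror the table, and the numbers are placeholders, not the real results.

```python
from statistics import mean, stdev

def rank_patterns(metrics_by_pattern):
    """Rank patterns by their average z-score across all metrics.

    metrics_by_pattern: {pattern_id: {metric_name: value}}
    Z-scoring puts rates and raw counts (e.g. impressions) on a
    common scale so one composite number can order the patterns.
    """
    names = list(next(iter(metrics_by_pattern.values())))
    stats = {}
    for n in names:
        vals = [m[n] for m in metrics_by_pattern.values()]
        stats[n] = (mean(vals), stdev(vals) or 1.0)  # guard zero spread
    composite = {
        pid: mean((m[n] - stats[n][0]) / stats[n][1] for n in names)
        for pid, m in metrics_by_pattern.items()
    }
    return sorted(composite.items(), key=lambda kv: kv[1], reverse=True)

# Placeholder TikTok numbers for three patterns -- not the real data.
tiktok_72h = {
    "pattern_01": {"full_view_rate": 0.31, "save_rate": 0.012, "share_rate": 0.004},
    "pattern_02": {"full_view_rate": 0.24, "save_rate": 0.020, "share_rate": 0.006},
    "pattern_03": {"full_view_rate": 0.45, "save_rate": 0.031, "share_rate": 0.011},
}
print(rank_patterns(tiktok_72h))
```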
A Decisive Dataset Arrived
72 hours later, the data was in.
Calling it "decisive" is not an exaggeration. Among the 20 patterns, one produced numbers incomparable to the others. And the variable combination that pattern represented — was not the one I had expected to win.
The full data breakdown will be published in the next post.
Continues in #023