
Nerd Alert: Binomial Confidence Intervals (Zeropark example)


Active member
...as my 6yo niece says any time I enter the room, Nerd Alert.

We pay a ton of attention to affLIFT follow-alongs, and this feature was inspired by you all right here in this forum, especially @Luke. Given how much guesswork we see going on, we figured it was time to incorporate statistical significance into our tracker's reports; specifically, when deciding what to cut, boost, or run, and when.

We are curious how many of you, if any, currently export report data from your tracker in order to make informed boost/cut/run decisions based on confidence intervals.

In super-affiliate land it's quite common (not sure if you've seen any of the incredibly complex boost/cut/run spreadsheets out there), so we decided to do a metric ton of math to create inline calculations of traffic source token (e.g., pub, widget, etc.) "confidence" reports.

We are working on a free tool to allow people to calculate these using whatever tracker they use. Unless you already use Kintura, go to the calculator linked below, enter Denominator: [number of visits] and Numerator: [number of conversions], and click Compute. The resulting Interval around Proportion will tell you, with 95% confidence, a lower and upper range of how future traffic will convert:
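For anyone who wants to reproduce the calculation without the tool, here's a quick sketch in Python. It uses the Wilson score interval, a common choice for binomial proportions; the calculator's exact method isn't stated, so treat this as an approximation of what it does:

```python
import math

def wilson_interval(conversions, visits, z=1.96):
    """95% Wilson score interval for a binomial conversion rate.

    conversions -- the Numerator (number of conversions)
    visits      -- the Denominator (number of visits)
    z           -- z-score for the confidence level; 1.96 = 95%
    """
    if visits == 0:
        return (0.0, 1.0)  # no data yet: any rate is possible
    p = conversions / visits
    denom = 1 + z ** 2 / visits
    center = (p + z ** 2 / (2 * visits)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / visits + z ** 2 / (4 * visits ** 2)
    )
    return (max(0.0, center - margin), min(1.0, center + margin))

# 3 conversions out of 500 visits:
lo, hi = wilson_interval(3, 500)
print(f"95% CI for the conversion rate: {lo:.3%} to {hi:.3%}")
```

Note how asymmetric the interval is at low conversion counts: with only 3 conversions, the upper bound sits several times higher than the observed rate, which is exactly why cutting early is risky.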


Below is a real example with traffic from Zeropark tracked in our tracker, Kintura. In the Advisor column, the vertical grey bar represents the minimum viable conversion rate for this keyword to avoid being cut, based on a desired minimum ROI of 40% on this campaign. The dark blue error bars indicate the probable conversion rate on future traffic (using 95% confidence). As you can see, when the upper error bar falls below the minimum viable conversion rate, it's a cut. (Please keep in mind this campaign is in a very early testing phase...and also Zeropark is awesome.)
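The cut rule described above can be sketched in code. This is my own reconstruction, not Kintura's actual algorithm: the minimum viable conversion rate is derived from cost per visit, payout, and target ROI, and a Wilson score interval (an assumed method) stands in for the error bars:

```python
import math

def wilson_interval(conversions, visits, z=1.96):
    """95% Wilson score interval for a binomial conversion rate."""
    if visits == 0:
        return (0.0, 1.0)
    p = conversions / visits
    denom = 1 + z ** 2 / visits
    center = (p + z ** 2 / (2 * visits)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / visits + z ** 2 / (4 * visits ** 2)
    )
    return (max(0.0, center - margin), min(1.0, center + margin))

def min_viable_cvr(cost_per_visit, payout, target_roi=0.40):
    # ROI = (revenue - cost) / cost, so revenue must be (1 + ROI) * cost:
    # cvr * payout >= (1 + target_roi) * cost_per_visit
    return cost_per_visit * (1 + target_roi) / payout

def advise(conversions, visits, cost_per_visit, payout, target_roi=0.40):
    """Cut only when even the optimistic upper error bar misses the bar."""
    lo, hi = wilson_interval(conversions, visits)
    bar = min_viable_cvr(cost_per_visit, payout, target_roi)
    if hi < bar:
        return "cut"    # best plausible CVR still can't hit the target ROI
    if lo > bar:
        return "boost"  # worst plausible CVR already clears it
    return "run"        # interval straddles the bar: gather more data

# Hypothetical numbers: $0.002/visit, $1.00 payout, 0 conversions in 2,000 visits
print(advise(0, 2000, cost_per_visit=0.002, payout=1.00))  # -> cut
```

The key point is that "run" is a real verdict: a zero-conversion keyword with little traffic often still straddles the bar, so the rule waits instead of cutting prematurely.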

What makes this example great is that it shows you two instances where you might otherwise have cut the keyword, but there wasn't enough data to make a truly informed decision based on statistical significance. I'm going to stop blabbing and wait for questions if you have them.



Staff member
This is awesome. This type of optimization within a tracker is just what I am talking about. With the APIs available on good traffic sources (like Zeropark), you could automatically pause targets that need to get cut.

Anyway, I am not really a math nerd which is why I need tools like this :) I'll start testing Kintura on my next campaign 👍


Active member
@Luke that is what we're planning next, but we are waiting for feedback from some of our highest volume users. There are a couple of services that will WL/BL for you, but that's an algorithm we want to tweak with lots of real-world, human feedback. As it exists now, we're using standard algorithms used in advertising (and even medicine).

Also agree with you about Zeropark quality. I realize they're owned by our competitor but one of our Advisor algorithms alerts you to "Toxic" pubs (it has a neat skull icon) and we've yet to see one show up in Zeropark traffic. Kudos to @Zeropark (y)


Well-known member
That looks really great! Just wondering how many visits/conversions do you need to reach 95% confidence?


Active member
I'm currently using Kintura tracker, and checked this new feature earlier this morning. I'm totally amazed how this advisor turns out. Good job Kintura!
Awesome! Expect more improvements today because we're tweaking it quite a bit.

That looks really great! Just wondering how many visits/conversions do you need to reach 95% confidence?
The 95% is fixed, but the range varies widely based on exactly what you're asking about: the number of Trials.

Using the science meme below, you can see that C reports 50, which makes C look better than B, but his error bars tell a different story. The error bars on C indicate 95% confidence that, given more trials, the result would fall somewhere between 1 and 100, meaning he can't be trusted. In this case, just like you say, C needs more data before his reported value can be taken seriously.
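To put a rough number on "how many visits": the interval narrows roughly with the square root of the trial count, so the answer depends on the conversion rate you expect and how tight you need the interval to be. Here's a back-of-the-envelope sketch using the normal approximation (my own illustration, not the tracker's formula):

```python
import math

def visits_for_margin(expected_cvr, margin, z=1.96):
    """Approximate visits needed so a 95% interval spans about
    +/- `margin` around `expected_cvr` (normal approximation)."""
    return math.ceil(z ** 2 * expected_cvr * (1 - expected_cvr) / margin ** 2)

# To pin an expected 1% CVR down to roughly +/- 0.25 percentage points:
print(visits_for_margin(0.01, 0.0025))  # on the order of 6,000 visits
```

So there's no single magic visit count: halving the margin you want quadruples the visits you need, which is why low-CVR pop traffic takes so much volume to judge.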