Optimizely Intelligence Cloud: How to Use the Stats Engine to A/B Test Smarter and Faster

The Optimizely Stats Engine and A/B testing

If you're looking to run an experimentation program that helps your business test and learn, chances are you're using Optimizely Intelligence Cloud, or at least evaluating it. Optimizely is one of the most powerful tools in the game, but like any tool, it's easy to use incorrectly if you don't know how to drive it.

What makes Optimizely so powerful? At the core of its feature set lies the most informed and intuitive statistics engine available in any third-party tool, allowing you to focus more on getting important tests live – without needing to worry that you’re misinterpreting your results.

Much like a blind trial of a new medication, an A/B test simply shows different variations of your website to different users in order to compare how well each treatment performs.

Statistics help us draw conclusions about a treatment's long-term impact without having to observe it over the long term.

Most A/B testing tools rely on one of two types of statistical inference: Frequentist or Bayesian stats. Each school has its pros and cons. Frequentist statistics require the sample size to be fixed before an experiment begins, while Bayesian statistics mainly care about making good directional decisions rather than specifying any single figure for impact, to name two examples. Optimizely's superpower is that it's the only tool on the market today to take the best of both approaches.
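
To make the Frequentist "sample size fixed in advance" point concrete, here is a rough back-of-the-envelope sketch of a classic fixed-horizon sample-size calculation. This is the textbook two-proportion approximation, not anything Optimizely-specific, and the 3% baseline and 10% lift are purely illustrative:

    // Rough fixed-horizon sample-size estimate for a classic Frequentist A/B test.
    // Purely illustrative: a two-sided z-test on conversion rates at 95%
    // significance (z = 1.96) with 80% power (z = 0.84).
    function sampleSizePerVariation(baselineRate, relativeLift) {
      var zAlpha = 1.96; // two-sided 5% false-positive rate
      var zBeta = 0.84;  // 80% power
      var p1 = baselineRate;
      var p2 = baselineRate * (1 + relativeLift);
      var pBar = (p1 + p2) / 2;
      var delta = Math.abs(p2 - p1);
      var n = 2 * pBar * (1 - pBar) * Math.pow(zAlpha + zBeta, 2) / Math.pow(delta, 2);
      return Math.ceil(n);
    }

    // Example: a 3% baseline conversion rate and a hoped-for 10% relative lift
    console.log(sampleSizePerVariation(0.03, 0.10)); // roughly 53,000 visitors per variation

The exact number isn't the point. The point is that in a purely Frequentist setup you have to commit to a figure like this before you start, and stopping early undermines the result.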

The end result? Optimizely lets its users run experiments faster, with more confidence, and with less hassle.

In order to take full advantage of that, though, it’s important to understand what’s happening behind the scenes. Here are 5 insights and strategies that will get you using Optimizely’s capabilities like a pro.

Tip #1: Understand that not all metrics are treated equally

In most testing tools, an often-overlooked issue is that the more metrics you add and the more variations your test includes, the more likely you are to see false positive results purely by chance (in statistics, this is known as the “multiple comparisons problem”). To keep its results trustworthy, Optimizely applies a series of constraints and corrections that keep the odds of that happening as low as possible.
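
For a feel of why those corrections matter, here is a quick sketch (assuming independent metrics, which real metrics rarely are) of how the chance of at least one false “winner” grows as uncorrected metrics pile up:

    // Chance of at least one false positive across several independent metrics,
    // each checked at a 10% false-positive rate with no correction applied.
    function familyWiseErrorRate(numMetrics, alpha) {
      return 1 - Math.pow(1 - alpha, numMetrics);
    }

    [1, 3, 5, 10].forEach(function (m) {
      var pct = Math.round(familyWiseErrorRate(m, 0.10) * 100);
      console.log(m + ' metric(s): ' + pct + '% chance of at least one false "winner"');
    });
    // 1 metric: 10%, 3 metrics: 27%, 5 metrics: 41%, 10 metrics: 65%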

Those constraints and corrections have two consequences when you go to set up experiments in Optimizely. First, the metric you choose as your Primary Metric will reach statistical significance the fastest, all else being equal. Second, the more metrics you add to an experiment, the longer your lower-ranked metrics will take to reach significance.

When planning an experiment, make sure you know which metric will be your True North in your decision-making process, and make that your Primary Metric. Then keep the rest of your metrics list lean by removing anything superfluous or tangential.

Tip #2: Build your own custom attributes

Optimizely is great at giving you several interesting and helpful ways to segment your experiment results. For example, you can examine whether certain treatments perform better on desktop vs. mobile, or observe differences across traffic sources. As your experimentation program matures, though, you’ll quickly wish for new segments. These may be specific to your use case, like segments for one-time vs. subscription purchases, or as general as “new vs. returning visitors” (which, frankly, we still can’t figure out why Optimizely doesn’t provide out of the box).

The good news is that via Optimizely’s Project Javascript field, engineers familiar with Optimizely can build any number of interesting custom attributes that visitors can be assigned to and segmented by. At Cro Metrics, we’ve built a number of stock modules (like “new vs. returning visitors”) that we install for all of our clients via their Project Javascript. Leveraging this ability is a key differentiator between mature teams who have the right technical resources to help them execute, and teams who struggle to realize the full potential of experimentation.
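
As a rough sketch of what such a module can look like, the snippet below sets a “new vs. returning” custom attribute from Project Javascript using Optimizely Web's user API. It assumes an attribute with the API name returning_visitor has already been created in the Optimizely dashboard, and the cookie name cro_returning is just an illustrative placeholder:

    // Minimal sketch: tag new vs. returning visitors as a custom attribute.
    // Assumes an attribute with API name "returning_visitor" already exists in
    // the Optimizely project; the cookie name below is an arbitrary example.
    var COOKIE_NAME = 'cro_returning';
    var isReturning = document.cookie.indexOf(COOKIE_NAME + '=1') !== -1;

    // Mark this browser so future visits count as returning.
    document.cookie = COOKIE_NAME + '=1; path=/; max-age=' + 60 * 60 * 24 * 365;

    // Pass the attribute to Optimizely so results can be segmented by it.
    window.optimizely = window.optimizely || [];
    window.optimizely.push({
      type: 'user',
      attributes: {
        returning_visitor: isReturning ? 'returning' : 'new'
      }
    });

A production module would also handle session boundaries and cookie consent; this is only meant to show the shape of the approach.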

Tip #3: Explore Optimizely’s Stats Accelerator

One often-overhyped testing tool feature is the ability to use “multi-armed bandits”, a type of machine learning algorithm that dynamically changes where your traffic is allocated over the course of an experiment, to send as many visitors to the “winning” variation as possible. The issue with multi-armed bandits is that their results aren’t reliable indicators of long-term performance, so the use cases for these types of experiments are limited to time-sensitive scenarios like sales promotions.
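
For the curious, here is the simplest textbook version of the idea: an epsilon-greedy bandit that mostly routes traffic to the current leader while still exploring a little. This is a generic illustration, not Optimizely's implementation:

    // Generic epsilon-greedy bandit: mostly send traffic to the current best
    // variation, but keep exploring with probability epsilon.
    function chooseVariation(stats, epsilon) {
      if (Math.random() < epsilon) {
        // Explore: pick any variation at random.
        return Math.floor(Math.random() * stats.length);
      }
      // Exploit: pick the variation with the best observed conversion rate.
      var best = 0;
      for (var i = 1; i < stats.length; i++) {
        var rate = stats[i].conversions / Math.max(stats[i].visitors, 1);
        var bestRate = stats[best].conversions / Math.max(stats[best].visitors, 1);
        if (rate > bestRate) best = i;
      }
      return best;
    }

    // Example: two variations, 10% exploration
    var armStats = [
      { visitors: 500, conversions: 25 }, // control: 5.0%
      { visitors: 480, conversions: 31 }  // variation: ~6.5%
    ];
    console.log(chooseVariation(armStats, 0.10)); // usually 1, occasionally 0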

However, Optimizely offers a different flavor of the bandit algorithm, available to users on higher-tier plans: Stats Accelerator (now found as the “Accelerate Learnings” option within Bandits). In this setup, instead of dynamically allocating traffic to the best-performing variation, Optimizely allocates traffic across variations in order to reach statistical significance faster. This way, you learn sooner while preserving the replicability of traditional A/B test results.

Tip #4: Add emojis to your metric names

At first glance, this tip may sound silly, even inane. However, a big part of making sure you’re reading experiment results correctly starts with making sure your audience can understand what each metric is measuring.

Even on our best-designed tests, metric names can get confusing (wait, does that metric fire when the order is confirmed, or when the user hits the thank-you page?), or an experiment has so many metrics that scrolling through the results page leads to information overload.

Adding emojis to your metric names (targets, green checkmarks, or even the big money bag) can make for far more scannable results pages.

Trust us: it makes reading results that much easier.

Tip #5: Rethink your statistical significance threshold

Results are deemed conclusive in the context of an Optimizely experiment when they’ve reached statistical significance. Statistical significance is a complicated mathematical term, but in essence it’s the likelihood that a difference you’re seeing reflects a real difference between two populations, rather than pure chance.

Optimizely’s reported statistical significance levels are “always valid” thanks to a mathematical concept called sequential testing. This makes them more trustworthy than the figures from other testing tools, which are vulnerable to all manner of “peeking” problems if you read them too soon.

It’s worth considering what level of statistical significance you deem important to your testing program. While 95% is the convention in the scientific community, we’re testing website changes, not vaccines. Another common choice in the experimental world: 90%.  But are you willing to accept a little more uncertainty in order to run experiments faster and test more ideas? Could you be using 85% or even 80% statistical significance? Being intentional about your risk-reward balance can pay exponential dividends over time, so think this through carefully.
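
To put rough numbers on that trade-off, the sketch below uses the same kind of fixed-horizon approximation as the earlier sample-size sketch to compare how much traffic a test needs at different significance thresholds. It's an approximation for intuition only, not how the sequential Stats Engine actually computes things:

    // Approximate relative traffic needed at different significance thresholds,
    // using a standard fixed-horizon approximation. For intuition only; the
    // sequential Stats Engine does not work exactly this way.
    var thresholds = [
      { significance: 0.95, zAlpha: 1.96 },
      { significance: 0.90, zAlpha: 1.645 },
      { significance: 0.85, zAlpha: 1.44 },
      { significance: 0.80, zAlpha: 1.28 }
    ];
    var zBeta = 0.84; // 80% power
    var baselineFactor = Math.pow(1.96 + zBeta, 2); // traffic factor for a 95% test

    thresholds.forEach(function (t) {
      // Required sample size scales with (zAlpha + zBeta)^2 in this approximation.
      var relative = Math.pow(t.zAlpha + zBeta, 2) / baselineFactor;
      console.log(t.significance * 100 + '% significance: ~' +
        Math.round(relative * 100) + '% of the traffic a 95% test needs');
    });
    // 95%: 100%, 90%: ~79%, 85%: ~66%, 80%: ~57%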

Read more about Optimizely Intelligence Cloud

These five quick tips and insights will go a long way as you use Optimizely. As with any tool, it boils down to making sure you understand what’s happening behind the scenes, so you can be confident you’re using the tool correctly. With these insights in hand, you can get the trustworthy results you’re looking for, when you need them.

