
Enhancing engagement with personalized online sports content | Amazon Web Services Blog


Pulselive has created essential experiences for sports fans, such as the official Cricket World Cup website and the English Premier League iOS and Android apps.

One of the key metrics customers use to evaluate the company is fan engagement with digital content such as videos. However, until recently, the videos displayed to each fan were based on a publicly ordered list and were not personalized.

Sports organizations are trying to understand who their fans are and what they want. The abundant digital behavioral data that can be collected for each fan shows who those fans are and how they engage with content. As the available data has grown and machine learning (ML) has matured, Pulselive's customers have asked it to provide personalized content recommendations.

This article describes Pulselive's experience adding Amazon Personalize to its platform as a new recommendation engine, and how it increased video consumption by 20%.

Implementation of Amazon Personalize

Before starting, Pulselive faced two main constraints: the team had no data scientists, and it needed a solution that required little machine learning experience yet delivered measurable results. We considered tools such as Amazon SageMaker (which still has a significant learning curve), Amazon Personalize, and hiring external consultants (which would have been costly).

In the end, we chose Amazon Personalize for the following reasons:

  1. The barriers to entry, both technical and financial, were low.
  2. We could quickly run A/B tests to demonstrate the value of a recommendation engine.
  3. We could build a simple proof of concept (PoC) with minimal impact on the existing site.
  4. We were more interested in the impact and improved results than in a detailed understanding of what happens inside Amazon Personalize.

Like any other business, we had to avoid any negative impact on daily operations, yet we still needed to be confident the solution suited our environment. So we started with an A/B test on a PoC that could be spun up and running within a few days.


Working with the Amazon Prototyping team, we narrowed the options for the first integration down to a minimal change to the website that could easily be A/B tested. After reviewing where video lists appear, we determined that re-ranking the "next videos" list was the fastest way to deliver personalized content. This prototype provided a new API that uses an AWS Lambda function and Amazon API Gateway to intercept requests for more videos and re-rank them using the Amazon Personalize GetPersonalizedRanking API.
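The re-ranking step can be sketched as follows. This is a minimal illustration, not Pulselive's actual code: the request shape is an assumption, and the Personalize call is injected as a parameter so the logic can run without AWS credentials. In a deployed Lambda, the callback would wrap `boto3.client("personalize-runtime").get_personalized_ranking`.

```python
def rerank_videos(user_id, video_ids, get_ranking):
    """Re-rank a list of video IDs for one user.

    get_ranking(user_id, video_ids) stands in for the Amazon Personalize
    GetPersonalizedRanking call, whose response contains a
    'personalizedRanking' list of {'itemId': ..., 'score': ...} dicts.
    """
    response = get_ranking(user_id, video_ids)
    ranked = [item["itemId"] for item in response["personalizedRanking"]]
    # Keep any videos Personalize did not return, in their original order,
    # so the list never silently loses items.
    return ranked + [v for v in video_ids if v not in ranked]


# In the real handler the callback would wrap the live client, e.g.:
# personalize = boto3.client("personalize-runtime")
# get_ranking = lambda uid, ids: personalize.get_personalized_ranking(
#     campaignArn=CAMPAIGN_ARN, userId=uid, inputList=ids)
```

Injecting the client call this way also makes the re-ranking logic straightforward to unit test.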

For the experiment to be considered a success, we needed to show a statistically significant improvement in either the total number of video plays or the playback completion rate. To make this possible, we had to test over a long enough period to cover both days with multiple sporting events and quiet days with no matches. By testing across varied usage patterns, we wanted to rule out behavior that depended on the time of day or on whether a match had recently been played. We set a period of two weeks for the initial data collection. All users were part of the experiment and were randomly assigned to either the control group or the test group. To keep the experiment as simple as possible, all videos were included. The following figure shows the architecture of this solution.

First, we needed to build an Amazon Personalize solution as the starting point of the experiment. Amazon Personalize requires an interactions dataset of users and items in order to define a solution and create a campaign that recommends videos to users. To meet these requirements, we created a CSV file containing a timestamp, user ID, and video ID for every video view over a period of several weeks. Uploading the interaction history to Amazon Personalize was straightforward, and we could immediately test recommendations in the AWS Management Console. To train the model, we used a dataset of 30,000 recent interactions.
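Building that interactions file is plain data wrangling. Here is a minimal sketch, assuming play events are available as (user, video, timestamp) tuples; the column names match the default fields of an Amazon Personalize Interactions dataset schema.

```python
import csv
import io


def interactions_csv(events):
    """Serialize (user_id, video_id, unix_timestamp) tuples into the CSV
    layout used for an Amazon Personalize Interactions dataset."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["USER_ID", "ITEM_ID", "TIMESTAMP"])
    for user_id, item_id, timestamp in events:
        # Personalize expects timestamps in Unix epoch seconds.
        writer.writerow([user_id, item_id, int(timestamp)])
    return buf.getvalue()
```

The resulting file is uploaded to Amazon S3 and imported into the dataset group via a dataset import job.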

To compare the total-plays and playback-completion-rate metrics, we built a second API that records all video interactions in Amazon DynamoDB. This second API also solved the problem of communicating new interactions to Amazon Personalize via the PutEvents API, allowing us to keep the machine learning model up to date.
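Forwarding a new interaction to Personalize goes through the PutEvents API. A sketch of building one event entry follows; the event type name `video_play` is an assumption, and the actual send (via boto3's `personalize-events` client) is shown only in a comment because it requires a live event tracker.

```python
import json
import time


def build_play_event(video_id, event_type="video_play", sent_at=None):
    """Build one entry for the eventList parameter of PutEvents.

    'properties' must be a JSON-encoded string; itemId identifies the
    video that was played."""
    return {
        "eventType": event_type,
        "sentAt": sent_at if sent_at is not None else int(time.time()),
        "properties": json.dumps({"itemId": video_id}),
    }


# Sending, with an event tracker already created for the dataset group:
# events = boto3.client("personalize-events")
# events.put_events(trackingId=TRACKING_ID, userId=user_id,
#                   sessionId=session_id,
#                   eventList=[build_play_event("video-123")])
```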

In this experiment, we tracked every video play by every user, along with what prompted each play. Plays were triggered by direct links (from social media, for example), links from other parts of the website, and links from video lists. Each time a user browsed a video page, they saw either the current video list or the newly re-ranked list, depending on whether they belonged to the control group or the test group. The experiment started with 5% of all users. Since this approach surfaced no problems (no clear decline in video consumption and no increase in API errors), we increased the test group to 50% and collected data with the remaining users as the control group.
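Group assignment needs to stay stable as the rollout percentage grows. One common approach, sketched here as an assumption about how such a split could work, is to hash the user ID into a fixed bucket so that a user already in the test group stays there when the fraction is raised from 5% to 50%.

```python
import hashlib


def assign_group(user_id, test_fraction):
    """Deterministically assign a user to 'test' or 'control'.

    Hashing gives each user a stable bucket, so raising test_fraction
    only moves control users into the test group, never the reverse.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10000
    return "test" if bucket < test_fraction * 10000 else "control"
```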

Learning from the experiment

After two weeks of A/B testing, we pulled the collected data out of DynamoDB and compared several KPIs across the two variations. For this first experiment, we decided to use a few simple KPIs; other organizations' KPIs may differ.

The first KPI was the number of video plays per session per user. Our initial hypothesis was that re-ranking the video list would not make a meaningful difference. However, we measured a 20% increase in views per user. The following graph summarizes the number of video views for each group.
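The views-per-user comparison reduces to simple aggregation over the logged plays. A minimal sketch, assuming each stored interaction record yields a (user_id, group) pair per play:

```python
from collections import defaultdict


def avg_plays_per_user(plays):
    """plays: iterable of (user_id, group) pairs, one per video play.
    Returns {group: average plays per user} for comparing control vs test."""
    counts = defaultdict(int)  # plays per user
    group_of = {}              # user -> experiment group
    for user_id, group in plays:
        counts[user_id] += 1
        group_of[user_id] = group
    totals = defaultdict(lambda: [0, 0])  # group -> [total plays, users]
    for user_id, n in counts.items():
        g = group_of[user_id]
        totals[g][0] += n
        totals[g][1] += 1
    return {g: plays / users for g, (plays, users) in totals.items()}
```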

In addition to measuring total views, we wanted to check whether users watched videos to the end. We tracked this by sending an event each time a user played through 25% of a video. We found that the average playback completion rate did not differ much between videos recommended by Amazon Personalize and videos from the original playlist. Given the increase in views, we concluded that users presented with a personalized list of recommended videos spent more time watching overall.

We also tracked the position of each video in the user's "Recommended Videos" bar and which items users selected. This let us compare the rankings of the personalized list and the publicly ordered list. We found no major difference between the two variations, which suggests that users tend to select videos visible on screen rather than scrolling through the entire list.

After analyzing the results of the experiment, we recommended enabling Amazon Personalize as the default method for ranking videos going forward.

Lessons learned

Along the way, we learned the following lessons, which may be useful when you implement your own solution.

  1. Collect user-item interaction history data. We used about 30,000 interactions.
  2. Focus on recent history. The immediate instinct is to gather as much historical data as possible, but recent interactions are more valuable than older ones. If your historical interaction dataset is very large, you can filter out older interactions to reduce dataset size and training time.
  3. Make sure you can give every user a consistent, unique ID, whether through an SSO solution or by generating session IDs.
  4. Find a place on your site or application where you can run an A/B test, either re-ranking an existing list or displaying a list of recommended items.
  5. Update your API to call Amazon Personalize and retrieve the new list of items.
  6. Deploy the A/B test and gradually increase the percentage of users taking part in the experiment.
  7. Instrument and measure so you can understand the results of the experiment.

Conclusion and future steps

This was our first step into the world of machine learning, taken with Amazon Personalize. The whole process of integrating a trained model into our workflow turned out to be surprisingly simple. We also spent far more time identifying the right KPIs and capturing the data needed to prove the experiment's usefulness than we did implementing Amazon Personalize.

In the future, we plan to build on this work in the following ways.

  1. Integrate Amazon Personalize more widely across our workflow by giving development teams the opportunity to use it wherever a list of content is served.
  2. Expand the use case to include recommendations, not just re-ranking. This will let us surface older items that each user is likely to enjoy.
  3. Experiment with how often to retrain the model. Inserting new interactions into the model in real time is a great way to keep it fresh, but daily retraining is needed to get the most out of the model.
  4. Explore how Amazon Personalize can help all our customers improve fan engagement by recommending the most relevant content in every form.
  5. Use recommendation filters to extend the range of parameters available on each request. We are also looking at adding options soon, such as filtering to include videos of a fan's favorite players.

Mark Wood is the Product Solutions Director at Pulselive. Mark has been at Pulselive for more than six years, serving as both Technical Director and Software Engineer. Before joining Pulselive, he was a Senior Engineer at Roke and a developer at Querix. Mark holds a degree in Mathematics and Computer Science from the University of Southampton.