You eat the Cheetos. Then you have a conversation about Han Solo, so he runs to the living room and plays The Empire Strikes Back for you. The next time you go to their house, as soon as you walk through the door he hands you a cookie and turns on Return of the Jedi. His prediction about what you might want to eat or watch is based on the last time you came over, and it's probably spot on. Oh, and also, you're probably going to want to go to their house more often with this kind of treatment. They know what you like. (Unless he recommends The Last Jedi or Solo, in which case you'll just go to the Zuckerbergs' next time because those movies stink.)

      Let's say that in place of Cheetos, you wanted carrot sticks, and in place of Star Wars, you watched The Office reruns. The next time you showed up, li'l bro would offer broccoli and Parks and Recreation. The concept works no matter your preferences.

      These examples help explain YouTube's goals:

       Predict what the viewer will watch.

       Maximize the viewer's long‐term engagement and satisfaction.

      How they do it is broken into two parts: Gathering and Using Data, and Algorithms with an “S.”

      YouTube collects 80 billion data points from user behavior every single day. It gathers data in two key areas in order to achieve the AI's goals. The first area it observes is user behavior. The AI determines things about a video based on the behavior of the person whose eyes are on the screen and whose fingers are doing the clicking. “Satisfaction signals” train the AI on what to suggest or not. There is a very specific list of these signals:

       Which videos a user watches

       Which videos they skip

       Time they spend watching

       Likes and dislikes

       “Not interested” feedback

       Surveys after watching a video

       Whether they come back to rewatch or finish something unwatched

       If they save and come back to watch later

      All of these signals feed the Satisfaction Feedback Loop. The loop is built from the feedback the algorithm gets about your specific behavior: it “loops” the types of videos you like back through its suggestions. This is how it personalizes each user's experience.
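      To make the idea concrete, here is a toy sketch of a satisfaction feedback loop in Python. It is an illustration only, not YouTube's actual system; the signal names, weights, and topic labels are all assumptions invented for the example.

from collections import defaultdict

# Toy satisfaction feedback loop (illustration only, not YouTube's real system).
# Each signal nudges a per-topic score up or down; recommendations are then
# drawn from the topics with the highest accumulated scores.
SIGNAL_WEIGHTS = {
    "watched": 1.0,
    "finished": 2.0,
    "rewatched": 2.5,
    "liked": 3.0,
    "saved_for_later": 1.5,
    "skipped": -1.0,
    "disliked": -3.0,
    "not_interested": -4.0,
}

def update_profile(profile, topic, signal):
    """Fold one viewer signal into the per-topic satisfaction profile."""
    profile[topic] += SIGNAL_WEIGHTS.get(signal, 0.0)

def recommend(profile, candidates, k=3):
    """Rank candidate videos by the viewer's accumulated topic scores."""
    return sorted(candidates, key=lambda v: profile[v["topic"]], reverse=True)[:k]

profile = defaultdict(float)
for topic, signal in [("star_wars", "watched"), ("star_wars", "liked"),
                      ("sitcoms", "watched"), ("cooking", "not_interested")]:
    update_profile(profile, topic, signal)

candidates = [
    {"title": "Return of the Jedi breakdown", "topic": "star_wars"},
    {"title": "Parks and Recreation bloopers", "topic": "sitcoms"},
    {"title": "30-minute dinners", "topic": "cooking"},
]
print(recommend(profile, candidates))  # Star Wars first, cooking last

      The real system runs this kind of update continuously, across billions of viewers and signals, which is why the suggestions feel more personal the more you watch.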

      Gathering Metadata

      To really get down to the details, here's exactly how the AI gathers data. Observing metadata starts with the thumbnail. The YouTube AI draws on Google's suite of AI products, including a service called Cloud Vision (CV). CV uses optical character recognition (OCR) and image recognition to determine lots of things about a video based on what it finds in the thumbnail. It takes data points from each image in the thumbnail, matches them against billions of data points already in the system to recognize what the images show, and feeds that information back into the algorithm. For example, a thumbnail with a close‐up of world‐renowned physicist Stephen Hawking's face is recognized as such in CV, so that video can be “grouped” in the suggested feed along with every other video on YouTube tagged under the Stephen Hawking topic. This is how your videos get discovered and watched.

Snapshot: a thumbnail with its data points highlighted
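      As a rough sketch of what this kind of thumbnail analysis looks like from the outside, here is how you might call Google's Cloud Vision API from Python. The book does not show YouTube's internal pipeline, so treat this as an illustration; the file name is a placeholder, and the google-cloud-vision client library is assumed to be installed and authenticated.

from google.cloud import vision

# Assumes the google-cloud-vision package and Google Cloud credentials are set up.
client = vision.ImageAnnotatorClient()

with open("thumbnail.jpg", "rb") as f:  # placeholder file name
    image = vision.Image(content=f.read())

# Image recognition: what (or who) appears in the thumbnail.
labels = client.label_detection(image=image).label_annotations
for label in labels:
    print(label.description, round(label.score, 2))

# OCR: any text overlaid on the thumbnail.
texts = client.text_detection(image=image).text_annotations
if texts:
    print("Thumbnail text:", texts[0].description.strip())

# Web detection: web entities can surface well-known public figures,
# e.g. a close-up of Stephen Hawking's face.
entities = client.web_detection(image=image).web_detection.web_entities
for entity in entities[:5]:
    print(entity.description, round(entity.score, 2))

      In YouTube's case, the equivalent output feeds straight back into the recommendation system so the video can be grouped with related content.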

      Video Intelligence

      Closed Captioning

      The AI does the same thing with the language of the video. YouTube now has an auto‐caption feature, and the AI reads through the words of the captions to gather data as well. Going through the video frames using shot lists shows what is being “said” visually, while listening to the audio adds even more feedback through what is actually being verbalized. Everything goes into the system.
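      Again as an outside illustration rather than YouTube's internal tooling, Google's Video Intelligence API exposes both pieces described here: shot-change detection for the visual side and speech transcription for the spoken side. A minimal sketch, assuming the google-cloud-videointelligence client is installed and the video sits at a placeholder Cloud Storage path:

from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Ask for shot boundaries (the visual "shot list") and a speech transcript.
features = [
    videointelligence.Feature.SHOT_CHANGE_DETECTION,
    videointelligence.Feature.SPEECH_TRANSCRIPTION,
]
speech_config = videointelligence.SpeechTranscriptionConfig(language_code="en-US")
context = videointelligence.VideoContext(speech_transcription_config=speech_config)

operation = client.annotate_video(
    request={
        "features": features,
        "input_uri": "gs://my-bucket/my-video.mp4",  # placeholder path
        "video_context": context,
    }
)
result = operation.result(timeout=600).annotation_results[0]

# What is shown, scene by scene.
for shot in result.shot_annotations:
    print("Shot:", shot.start_time_offset, "->", shot.end_time_offset)

# What is actually said out loud.
for transcription in result.speech_transcriptions:
    for alternative in transcription.alternatives:
        print("Transcript:", alternative.transcript)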

      Natural Language

      The AI is also listening for actual sentence structure and breaking it down into a sentence diagram. This extracts the meaning of what is being said. It can differentiate language so it can group it categorically, but not just on the surface. For example, two different creators might both talk about Stephen Hawking in their videos, but one video might be biographical or scientific while the other might be humorous or entertaining. Even though both videos are talking about the same person, they are categorically different enough that the AI would categorize them differently and group them with different recommended content because of the language being used.
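      Google's Cloud Natural Language API is the public-facing counterpart of this kind of analysis: entity analysis picks out who or what the words are about, and content classification separates, say, a science explainer from a comedy sketch. A minimal sketch, assuming the google-cloud-language client and using made-up transcript text:

from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

# Made-up transcript snippet standing in for a video's captions.
transcript = (
    "Stephen Hawking spent decades studying black holes and the origin of the "
    "universe, reshaping how physicists think about space and time."
)

document = language_v1.Document(
    content=transcript,
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# Entity analysis: who or what the video is about, and how central it is.
for entity in client.analyze_entities(document=document).entities:
    print(entity.name, round(entity.salience, 2))

# Content classification: what kind of video this is (science vs. comedy, etc.).
for category in client.classify_text(document=document).categories:
    print(category.name, round(category.confidence, 2))

      Run on a biographical video and a comedy sketch about the same person, you would expect different category paths, which is the distinction described above.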

      Video Title and Description

      Did you know that YouTube has more than one algorithm?
