22.9.2023

The Importance of Recommender Systems: A Key Technology of the World Wide Web

This expert article traces Recommender Systems from early methods to deep learning and transformers: a key technology for personalized online experiences that is now coming under scrutiny for transparency and ethical considerations.

Fatih Gedikli

Guest Author & AI Expert

Artificial Intelligence

Recommender Systems are applications commonly integrated into web stores to support sales. They provide users with personalized recommendations for products and services that match their interests and that they have not yet purchased. Recommender Systems are not limited to selling products but also play a role in other areas. For example, they are used in areas such as streaming platforms, news portals, social media, travel planning, music recommendations, job placement, and many others.

Understanding the role of recommendations on YouTube is enough to realize the importance of Recommender Systems. It is difficult to imagine this platform, one of the most popular websites in the world next to Google, without its familiar recommendations on the homepage or in the right sidebar. As a key technology, YouTube's recommendations play a crucial role in keeping users on the platform, improving the user experience, facilitating the discovery of new content, and enabling personalized advertising.

Recommender Systems are therefore among the most significant technologies of the World Wide Web and have been used since the early days of the Internet.

The Origins of Collaborative Filtering

In the 1990s, significant progress was made in the field of Collaborative Filtering, especially in recommending news, music albums, and movies. The principle of Collaborative Filtering is to identify similarities between users and generate recommendations based on these similarities. The goal is to suggest products or content to users that have been positively rated by like-minded individuals.
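To make the principle concrete, here is a minimal, self-contained Python sketch of user-based Collaborative Filtering. The toy ratings, similarity measure, and neighborhood size are purely illustrative and not taken from any of the historical systems discussed below.

    # Minimal user-based Collaborative Filtering sketch (toy data, illustrative only).
    from math import sqrt

    # Toy ratings: user -> {item: rating on a 1-5 scale}
    ratings = {
        "alice": {"Matrix": 5, "Titanic": 1, "Inception": 4},
        "bob":   {"Matrix": 4, "Titanic": 2, "Avatar": 5},
        "carol": {"Titanic": 5, "Avatar": 4},
    }

    def similarity(u, v):
        """Cosine similarity over the items both users have rated."""
        common = set(ratings[u]) & set(ratings[v])
        if not common:
            return 0.0
        dot = sum(ratings[u][i] * ratings[v][i] for i in common)
        norm_u = sqrt(sum(ratings[u][i] ** 2 for i in common))
        norm_v = sqrt(sum(ratings[v][i] ** 2 for i in common))
        return dot / (norm_u * norm_v)

    def recommend(user, k=2):
        """Suggest items liked by the k most similar users and not yet rated by `user`."""
        neighbors = sorted((u for u in ratings if u != user),
                           key=lambda u: similarity(user, u), reverse=True)[:k]
        seen, scores = set(ratings[user]), {}
        for n in neighbors:
            for item, r in ratings[n].items():
                if item not in seen:
                    scores[item] = scores.get(item, 0.0) + similarity(user, n) * r
        return sorted(scores, key=scores.get, reverse=True)

    print(recommend("alice"))  # -> ['Avatar']

The historical systems below differed in how they measured similarity and aggregated neighbor opinions, but the core loop was the same: find like-minded users, then borrow their ratings.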

Here are some key milestones and their significance:

The Tapestry System (Goldberg et al., 1992), developed at Xerox Palo Alto Research Center, was one of the first systems to introduce the idea and terminology of collaborative filtering. It was originally designed as an e-mail system to filter messages from mailing lists and send only relevant information to users. This early application of Collaborative Filtering laid the foundation for the later development of Recommender Systems.

The GroupLens project (Resnick et al., 1994) at the University of Minnesota contributed to the further development of Collaborative Filtering. Its modified xrn news reader enabled filtering of Usenet messages, recommending relevant articles to users based on the interests and preferences of other users. The project focused on incorporating social interactions and used community feedback to improve personalized recommendations.

The Ringo System (Shardanand and Maes, 1995), developed at the Massachusetts Institute of Technology (MIT), was one of the first systems to use collaborative filtering to recommend music albums and artists. It allowed users to specify their musical preferences and receive matching recommendations based on the preferences of other users. The Ringo system helped extend Collaborative Filtering into the realm of music recommendations.

The Bellcore Video Recommender System (Hill et al., 1995) introduced Collaborative Filtering to the movie recommendation domain. It allowed users to indicate their preferences for movies and generated recommendations based on the ratings and preferences of other users. The system laid the foundation for the application of Collaborative Filtering in the field of movie ratings and recommendations.

These systems demonstrated how user feedback and social interactions can be used to generate personalized recommendations. These insights have had a significant impact on the understanding and development of Recommender Systems and are still relevant today.

The evolution of Recommender Systems has proceeded in stages over the years. In March 1996, the first Collaborative Filtering Workshop was held in Berkeley, where the community first gathered. At the end of that meeting, there was consensus that all the approaches addressed a larger common problem: Recommender Systems.

In March 1997, a special issue of Communications of the ACM was published on this topic, edited by Resnick and Varian. This publication highlighted the growing importance of Recommender Systems in research and industry.

The mid-1990s also saw the first successful spin-offs from academia. A group from MIT founded Agents, Inc., which was later renamed Firefly Networks, and in 1996 the GroupLens group founded Net Perceptions.

In 1999, Amazon introduced its own Recommendation Engine, which displayed customer recommendations based on the behavior of other customers. This approach, known as "Customers who bought this item also bought...," was a milestone in personalized recommendations in e-commerce.

Amazon's recommendation engine since 1999.
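Amazon's production system was and is far more elaborate, but the intuition behind "Customers who bought this item also bought..." can be sketched as simple co-purchase counting. The baskets and item names below are made up for illustration.

    # Illustrative co-purchase counting behind "Customers who bought X also bought ...".
    from collections import defaultdict
    from itertools import combinations

    # Toy purchase baskets (one set of items per order).
    baskets = [
        {"camera", "sd_card", "tripod"},
        {"camera", "sd_card"},
        {"camera", "bag"},
        {"sd_card", "bag"},
    ]

    # Count how often two items appear in the same basket.
    co_counts = defaultdict(lambda: defaultdict(int))
    for basket in baskets:
        for a, b in combinations(sorted(basket), 2):
            co_counts[a][b] += 1
            co_counts[b][a] += 1

    def also_bought(item, n=3):
        """Items most frequently bought together with `item`."""
        return sorted(co_counts[item], key=co_counts[item].get, reverse=True)[:n]

    print(also_bought("camera"))  # -> ['sd_card', 'tripod', 'bag']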

The 2000s and the Netflix Prize 

In the years 2000-2005, after the dot-com bubble burst, many of the companies of the time disappeared, but the technology of Recommender Systems remained and evolved. The industry faced new challenges: it became clear that accurate predictions alone were not enough. Recommendations needed to be meaningful and scalable without slowing down the existing website.

In 2006, Netflix announced the Netflix Prize, endowed with one million US dollars. The goal was to improve the prediction accuracy of the Netflix algorithm by 10%. The Netflix Prize created synergies and attracted new players from disciplines such as information retrieval and data mining, who increasingly turned to the field of Recommender Systems.

Screenshot of the Netflix Prize.
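The prize measured prediction accuracy with the root mean squared error (RMSE) between predicted and withheld ratings; a team won by producing an RMSE at least 10% lower than that of Netflix's own Cinematch system. The toy numbers below only illustrate how the metric is computed.

    # RMSE, the metric used to judge Netflix Prize submissions (toy numbers).
    from math import sqrt

    actual    = [4, 3, 5, 2, 4]            # ratings withheld for evaluation
    predicted = [3.8, 3.4, 4.5, 2.6, 3.9]  # a model's predictions for the same ratings

    rmse = sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
    print(f"RMSE = {rmse:.4f}")  # lower is better; the prize required beating Cinematch by 10%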

In 2007, the first ACM Recommender Systems Conference was held, bringing together 120 participants. This underscored the growing importance and scientific standing of the Recommender Systems research field. The ACM RecSys conference remains the most important venue in this research area.

Transformers are Eating Recommender Systems

In recent years, more and more papers have been published that use Deep Learning (artificial neural networks with many layers) for Recommender Systems. As a branch of machine learning, deep learning enables the automatic extraction of relevant features and patterns from the various user signals that are critical for personalized recommendations. Alluding to Marc Andreessen's well-known quote "Software is eating the world", NVIDIA CEO Jensen Huang said, "Software is eating the world, but AI is going to eat software". By AI, he was referring to deep learning, which has gradually gained ground over all other machine learning methods since the ImageNet competition in 2012. This groundbreaking development was made possible in large part by the power of graphics processing units (GPUs).

And today, in 2023, it can be said without a doubt that Transformer models clearly dominate among Deep Learning models in application areas such as natural language processing (NLP).

A key element of the transformer architecture (Vaswani et al., 2017) is the attention mechanism, which allows the model to focus on and selectively weight relevant information. This is particularly useful for Recommender Systems, as user interactions, item characteristics, and other contextual information must be considered to generate accurate recommendations.
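As a rough illustration of how attention weights relevant information, here is a compact NumPy sketch of the scaled dot-product attention from Vaswani et al. (2017), applied to a toy sequence of item embeddings. The dimensions and random values are arbitrary.

    # Scaled dot-product attention (Vaswani et al., 2017) over a toy item sequence.
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        """Each output row is a mix of the rows of V, weighted by query-key similarity."""
        scores = Q @ K.T / np.sqrt(Q.shape[-1])   # relevance of every key to every query
        weights = softmax(scores, axis=-1)        # attention weights sum to 1 per query
        return weights @ V, weights

    rng = np.random.default_rng(0)
    items = rng.normal(size=(5, 8))               # 5 interactions, each an 8-dim embedding
    output, weights = attention(items, items, items)  # self-attention over the sequence
    print(weights.round(2))                       # how strongly each item attends to the others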

By using transformers, Recommender Systems can model complex relationships between users and items. The architecture captures subtle nuances in the data and enables a more accurate evaluation of the relevance of recommendations. It also enables effective use of sequential data, such as the order of items a user has viewed, which is important for music playlists, for example.
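For sequential recommendation, the same mechanism is typically combined with a causal mask so that each position can only attend to earlier interactions; the representation of the last position is then scored against the item catalog to predict the next item. The following is a minimal sketch of that masking idea, not a complete model such as SASRec or BERT4Rec.

    # Causal masking: each position only attends to earlier items in the interaction sequence.
    import numpy as np

    rng = np.random.default_rng(1)
    seq_len, d = 5, 8
    x = rng.normal(size=(seq_len, d))        # items a user viewed, in chronological order

    scores = x @ x.T / np.sqrt(d)
    future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores[future] = -np.inf                 # forbid looking at items that come later

    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    context = weights @ x                    # order-aware representation of each position

    # In a full model, context[-1] would be scored against all catalog items
    # to rank candidates for the next interaction (e.g. the next track in a playlist).
    print(context[-1].shape)                 # (8,)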

In addition, the transformer architecture offers flexibility in integrating different features and information. It enables the combination of structured and unstructured data, such as user profiles, item attributes, text descriptions, or image information. This opens up new possibilities for personalized recommendations and enables more comprehensive coverage of user interests.
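In practice, one common (and here deliberately simplified) pattern is to embed each signal separately and concatenate the results into a single input representation that a transformer or feed-forward network then consumes. All names and dimensions below are hypothetical.

    # Illustrative fusion of structured and unstructured signals into one input vector.
    import numpy as np

    rng = np.random.default_rng(2)

    user_profile = np.array([0.31, 1.0, 0.0])        # structured: normalized age, gender one-hot
    item_attrs   = np.array([1.0, 0.0, 0.0, 19.99])  # structured: category one-hot, price
    text_embed   = rng.normal(size=16)               # unstructured: embedding of the description
    image_embed  = rng.normal(size=16)               # unstructured: embedding of the product image

    # Concatenate all signals; a downstream model consumes this joint representation.
    features = np.concatenate([user_profile, item_attrs, text_embed, image_embed])
    print(features.shape)  # (39,)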

Overall, the transformer architecture plays a significant role in the further development of Recommender Systems. It enables improved relationship modeling, more accurate prediction of user preferences, and efficient processing of large data sets. Integrating this architecture into Recommender Systems helps deliver personalized and relevant recommendations to users.

Regulation and Legislation

Technologies such as Recommender Systems, which play a central role in our everyday lives and sometimes decide what information we see on social media and what we don't, must first be understood and then regulated by governments. 

Although Germany and Europe often lag behind when it comes to algorithms and models, they are frequently at the forefront of legislation and regulation of AI applications. For example, the European Union has created a new regulatory body, the European Centre for Algorithmic Transparency (ECAT), to provide insights into social media and search engine algorithms. ECAT will study the workings of "black box" algorithms, analyze potential risks, and evaluate measures to combat illegal content, human rights violations, and harm to democratic processes or user health. The agency will also study the long-term social impact of algorithms and propose measures to improve their accountability and transparency. ECAT also aims to act as a knowledge and best-practice hub for researchers from different fields. This initiative is part of the EU's efforts to regulate AI and can have far-reaching positive effects beyond the Union's borders.

References

  • Goldberg, Nichols, Oki, Terry (1992). Using Collaborative Filtering to Weave an Information Tapestry. Communications of the ACM 35 (12), pp. 61–70.
  • Resnick, Iacovou, Suchak, Bergstrom, Riedl (1994). GroupLens: An open architecture for collaborative filtering of netnews. Proceedings of the conference on computer-supported cooperative work (CSCW’94), pp. 175–186.
  • Shardanand and Maes (1995). Social information filtering: algorithms for automating "word of mouth". Proceedings of the conference on human factors in computing systems (SIGCHI), pp. 210–217.
  • Hill, Stead, Rosenstein, Furnas (1995). Recommending and evaluating choices in a virtual community of use. Proceedings of the conference on human factors in computing systems (SIGCHI), pp. 194–201.
  • Resnick and Varian (1997). Recommender Systems. Introduction to Special Section of Communications of the ACM 40 (3), pp. 56–58.
  • Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, Polosukhin (2017). Attention is all you need. Proceedings of the conference on neural information processing systems (NIPS'17), pp. 6000–6010.
  • EDPB resolves dispute on transfers by Meta and creates task force on Chat GPT. https://edpb.europa.eu/news/news/2023/edpb-resolves-dispute-transfers-meta-and-creates-task-force-chat-gpt_en (13.04.2023)
