{"id":431,"date":"2020-05-31T22:57:59","date_gmt":"2020-05-31T22:57:59","guid":{"rendered":"http:\/\/www.alpha-quantum.com\/blog\/?p=431"},"modified":"2020-05-31T22:58:09","modified_gmt":"2020-05-31T22:58:09","slug":"content-based-recommender-system-with-python","status":"publish","type":"post","link":"https:\/\/www.alpha-quantum.com\/blog\/content-based-recommendation-engine\/content-based-recommender-system-with-python\/","title":{"rendered":"Content-based Recommender System with Python"},"content":{"rendered":"<p>Recommender systems are methods that predict users\u2019 interests and make meaningful recommendations to them for different items, such as songs to play on Spotify, movies to watch on Netflix, news to read about your favourite newspaper website or \u00a0products to purchase on Amazon.<\/p>\n<p>Recommender systems can be distinguished primarily by the type of information that they use. Content-based recommenders rely on attributes of users and\/or items, whereas collaborative filtering uses information on the interaction between users and items, expressed in the so-called user-item interaction matrix.<\/p>\n<p>Recommender systems are generally divided into 3 main approaches: <strong>content-based, collaborative filtering<\/strong>, and <strong>hybrid recommendation systems <\/strong>(see Fig. 1).<\/p>\n<p id=\"dnqmYlU\"><img loading=\"lazy\" width=\"882\" height=\"502\" class=\"alignnone size-full wp-image-432 \" src=\"http:\/\/www.alpha-quantum.com\/blog\/wp-content\/uploads\/2020\/05\/img_5ed4354677fb7.png\" alt=\"\" \/><\/p>\n<p>Figure 1: Types of recommender systems<\/p>\n<h1>What are content-based recommender systems?<\/h1>\n<p>Content-based recommender systems generate recommendations by relying on attributes of items and\/or users. User attributes can include age, sex, job type and other personal information. Item attributes on the other hand, are descriptive information that distinguishes individual items from each other. 
In the case of movies, these could include the title, cast, description and genre.<\/p>\n<p>Because it relies on features of users and items, content-based recommendation resembles a traditional machine learning problem more closely than collaborative filtering does. A content-based method uses item-based or user-based features to predict a user\u2019s action for a given item. The action can be a specific rating, a purchase, a like or dislike, a decision to view a movie and so on.<\/p>\n<p>One of the advantages of content-based recommendation is user independence \u2013 unlike collaborative filtering, it does not require information about other users in order to make recommendations to a given user. This makes the content-based approach easier to scale. Another benefit is that the recommendations are more transparent, as the recommender can explain them in terms of the features used.<\/p>\n<p>The content-based approach also has its drawbacks. One is over-specialization \u2013 if the user is only interested in specific categories, the recommender will have difficulty recommending items outside this scope, leaving the user in their current circle of items and interests. 
Content-based approaches also often require domain knowledge to produce relevant item and user features.<\/p>\n<p>We will now build an implementation of a content-based recommender in Python, using the <a href=\"https:\/\/grouplens.org\/datasets\/movielens\/\">MovieLens dataset<\/a>.<\/p>\n<h1>Content-based recommender system for recommendation of movies<\/h1>\n<p>Our recommender system will be able to recommend movies to us, based on movie plots and on a combination of features such as the top actors, director, keywords, producer and screenplay writers of the movies.<\/p>\n<p>First, we import the required libraries:<\/p>\n<pre class=\"lang:default decode:true \">import pandas as pd\r\n\r\nimport ast\r\n\r\nfrom sklearn.feature_extraction.text import CountVectorizer\r\n\r\nfrom sklearn.feature_extraction.text import TfidfVectorizer\r\n\r\nfrom sklearn.metrics.pairwise import cosine_similarity\r\n\r\nimport seaborn as sns\r\n\r\nimport numpy as np\r\n\r\nimport matplotlib.pyplot as plt<\/pre>\n<p>&nbsp;<\/p>\n<p>Next, we import data from <a href=\"https:\/\/www.kaggle.com\/rounakbanik\/the-movies-dataset\">https:\/\/www.kaggle.com\/rounakbanik\/the-movies-dataset<\/a> and <a href=\"https:\/\/grouplens.org\/datasets\/movielens\/latest\/\">https:\/\/grouplens.org\/datasets\/movielens\/latest\/<\/a>:<\/p>\n<pre class=\"lang:default decode:true\">df_data = pd.read_csv('movies_metadata.csv', low_memory=False)<\/pre>\n<p>One of the pre-processing steps for our recommender involves removing movies which have a low number of votes:<\/p>\n<pre class=\"lang:default decode:true\">df_data = df_data[df_data['vote_count'].notna()]\r\n\r\nplt.figure(figsize=(20,5))\r\n\r\nsns.distplot(df_data['vote_count'])\r\n\r\nplt.title(\"Histogram of vote counts\")<\/pre>\n<p># <em>determine the minimum number of votes that a movie must have to be included<\/em><\/p>\n<pre class=\"lang:default decode:true\">min_votes = np.percentile(df_data['vote_count'].values, 85)<\/pre>\n<p># <em>exclude movies that do not have the minimum number of votes<\/em><\/p>\n<pre class=\"lang:default decode:true\">df = df_data.copy(deep=True).loc[df_data['vote_count'] &gt; min_votes]<\/pre>\n<h2>Content-based recommender that recommends movies based on similarity of movie plots<\/h2>\n<p>Our first content-based recommender will aim to recommend movies whose plots are similar to that of a selected movie.<\/p>\n<p>We will use the &#8220;overview&#8221; feature from our dataset:<\/p>\n<pre class=\"lang:default decode:true\"># removing rows with missing overview\r\n\r\ndf = df[df['overview'].notna()]\r\n\r\ndf.reset_index(inplace=True)\r\n\r\n\r\n# processing of overviews\r\n\r\ndef process_text(text):\r\n\r\n    # replace multiple spaces with one\r\n\r\n    text = ' '.join(text.split())\r\n\r\n    # lowercase\r\n\r\n    text = text.lower()\r\n\r\n    return text\r\n\r\ndf['overview'] = df.apply(lambda x: process_text(x.overview), axis=1)<\/pre>\n<p>To compare movie plots, we first need to compute a numerical representation of them. There are various approaches we could use, from bag of words and word embeddings to TF-IDF; we will select the latter.<\/p>\n<h3>TF-IDF approach<\/h3>\n<p>The TF-IDF of a word in a document that is part of a larger corpus of documents combines two values. One is the term frequency (TF), which measures how frequently the word occurs in the document.<\/p>\n<p>However, some words, such as \u201cthe\u201d and \u201cis\u201d, occur frequently in all documents, and we want to downscale the importance of such words. 
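<\/p>
<p>To make this concrete, here is a toy sketch (the corpus below is invented for illustration, and scikit-learn\u2019s smoothed IDF differs slightly from the textbook formula) showing that a word occurring in every document ends up with a lower TF-IDF weight than a rarer word:<\/p>

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny invented corpus: "robot" appears in every plot, "dinosaur" in only one.
corpus = [
    "a robot fights a dinosaur",
    "a robot falls in love",
    "a robot saves the city",
]

tfidf = TfidfVectorizer()
matrix = tfidf.fit_transform(corpus)
vocab = tfidf.vocabulary_

# In the first plot, the ubiquitous word "robot" receives a lower
# weight than the rare word "dinosaur".
robot_weight = matrix[0, vocab["robot"]]
dinosaur_weight = matrix[0, vocab["dinosaur"]]
print(robot_weight < dinosaur_weight)  # prints: True
```
<p>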
This is accomplished by multiplying TF with the inverse document frequency (IDF).<\/p>\n<p>This ensures that a word is considered important for a document only if it is frequent in that document but comparatively rare in the rest of the corpus.<\/p>\n<p>To build the TF-IDF representation of the movie plots we will use the TfidfVectorizer from scikit-learn. We first fit the TfidfVectorizer on the movie plot descriptions and then transform them into their TF-IDF numerical representation:<\/p>\n<pre class=\"lang:default decode:true \">tf_idf = TfidfVectorizer(stop_words='english')\r\n\r\ntf_idf_matrix = tf_idf.fit_transform(df['overview'])<\/pre>\n<p>&nbsp;<\/p>\n<p>Now that we have a numerical vector representing each movie plot, we can compute the similarity of movies by calculating their pair-wise cosine similarities and storing them in a cosine similarity matrix:<\/p>\n<pre class=\"lang:default decode:true \"># calculating cosine similarity between movies\r\n\r\ncosine_similarity_matrix = cosine_similarity(tf_idf_matrix, tf_idf_matrix)<\/pre>\n<p>With the cosine similarity matrix computed, we can define the function &#8220;recommendations&#8221; that will return the top recommendations for a given movie.<\/p>\n<p>The function first determines the index of the input movie, retrieves the similarities of all movies with the selected movie, sorts them and returns the titles of the movies most similar to the selected one.<\/p>\n<pre class=\"lang:default decode:true \"># function that returns the index of the movie from its title\r\n\r\ndef index_from_title(df, title):\r\n\r\n    return df[df['original_title'] == title].index.values[0]\r\n\r\n\r\n# function that returns the title of the movie from its index\r\n\r\ndef title_from_index(df, index):\r\n\r\n    return df[df.index == index].original_title.values[0]\r\n\r\n\r\n# generating recommendations for given title\r\n\r\ndef recommendations(original_title, df, cosine_similarity_matrix, number_of_recommendations):\r\n\r\n    index = index_from_title(df, original_title)\r\n\r\n    similarity_scores = list(enumerate(cosine_similarity_matrix[index]))\r\n\r\n    similarity_scores_sorted = sorted(similarity_scores, key=lambda x: x[1], reverse=True)\r\n\r\n    recommendations_indices = [t[0] for t in similarity_scores_sorted[1:(number_of_recommendations + 1)]]\r\n\r\n    return df['original_title'].iloc[recommendations_indices]<\/pre>\n<p>&nbsp;<\/p>\n<p>We can now produce recommendations for a given film, e.g. \u2018Batman\u2019:<\/p>\n<pre class=\"lang:default decode:true\">recommendations('Batman', df, cosine_similarity_matrix, 10)<\/pre>\n<p><em>3693\u00a0\u00a0\u00a0 Batman Beyond: Return of the Joker<\/em><\/p>\n<p><em>5962\u00a0\u00a0\u00a0 The Dark Knight Rises<\/em><\/p>\n<p><em>7379\u00a0\u00a0\u00a0 Batman vs Dracula<\/em><\/p>\n<p><em>5476\u00a0\u00a0\u00a0 Batman: Under the Red Hood<\/em><\/p>\n<p><em>6654\u00a0\u00a0\u00a0 Batman: Mystery of the Batwoman<\/em><\/p>\n<p><em>3911\u00a0\u00a0\u00a0 Batman Begins<\/em><\/p>\n<p><em>6334\u00a0\u00a0\u00a0 Batman: The Dark Knight Returns, Part 1<\/em><\/p>\n<p><em>770\u00a0\u00a0\u00a0\u00a0 Batman &amp; Robin<\/em><\/p>\n<p><em>4725\u00a0\u00a0\u00a0 The Dark Knight<\/em><\/p>\n<p><em>709\u00a0\u00a0\u00a0\u00a0 Batman Returns<\/em><\/p>\n<p>&nbsp;<\/p>\n<h2>Content-based recommender based on keywords, actors, screenplay, director, producer and genres features<\/h2>\n<p>The recommender based on the overview is of limited quality, as it considers only the movie plot.<\/p>\n<p>We will now explore a different recommender, which gives more weight to other metadata (keywords, actors, director, producer, genres and screenplay authors) when recommending movies.<\/p>\n<p>To use the additional metadata, we first need to extract it from the separate files keywords.csv and credits.csv and merge it with the main pandas dataframe:<\/p>\n<pre class=\"lang:default decode:true \">df_keywords = pd.read_csv('keywords.csv')\r\n\r\ndf_credits = pd.read_csv('credits.csv')\r\n\r\n# Some ids have irregular format, so we will remove them\r\n\r\ndf_cb = df_data.copy(deep=True)[df_data.id.apply(lambda x: x.isnumeric())]\r\n\r\ndf_cb['id'] = df_cb['id'].astype(int)\r\n\r\ndf_keywords['id'] = df_keywords['id'].astype(int)\r\n\r\ndf_credits['id'] = df_credits['id'].astype(int)\r\n\r\n# Merging keywords, credits of movies with main data set\r\n\r\ndf_movies_data = pd.merge(df_cb, df_keywords, on='id')\r\n\r\ndf_movies_data = pd.merge(df_movies_data, df_credits, on='id')<\/pre>\n<p>&nbsp;<\/p>\n<p>Again, we keep only the movies with the highest vote counts, using code similar to the above, and store the filtered dataframe as df_movies.<\/p>\n<p>We next create a new feature for each movie, consisting of its top 4 actors. We also concatenate and lowercase each actor\u2019s name and surname. We want, for example, the 
Tom of Tom Hanks to be distinct from the Tom in Tom Selleck, so we replace the full names with tomhanks and tomselleck, respectively:<\/p>\n<pre class=\"lang:default decode:true \">max_number_of_actors = 4\r\n\r\ndef return_actors(cast):\r\n\r\n    actors = []\r\n\r\n    count = 0\r\n\r\n    for row in ast.literal_eval(cast):\r\n\r\n        if count &lt; max_number_of_actors:\r\n\r\n            actors.append(row['name'].lower().replace(\" \", \"\"))\r\n\r\n        else:\r\n\r\n            break\r\n\r\n        count += 1\r\n\r\n    return ' '.join(actors)\r\n\r\n\r\ndf_movies['actors'] = df_movies.apply(lambda x: return_actors(x.cast), axis=1)<\/pre>\n<p>We will now create similar features for the directors, screenplay writers and producers of the movies. The helper collects every person matching a given job type:<\/p>\n<pre class=\"lang:default decode:true \">def return_producer_screenplay_director(crew, crew_type):\r\n\r\n    persons = []\r\n\r\n    for row in ast.literal_eval(crew):\r\n\r\n        if row['job'].lower() == crew_type:\r\n\r\n            persons.append(row['name'].lower().replace(\" \", \"\"))\r\n\r\n    return ' '.join(persons)\r\n\r\n\r\ndf_movies['director'] = df_movies.apply(lambda x: return_producer_screenplay_director(x.crew, 'director'), axis=1)\r\n\r\ndf_movies['screenplay'] = df_movies.apply(lambda x: return_producer_screenplay_director(x.crew, 'screenplay'), axis=1)\r\n\r\ndf_movies['producer'] = df_movies.apply(lambda x: return_producer_screenplay_director(x.crew, 'producer'), axis=1)<\/pre>\n<p>After generating the individual metadata features, we merge them into a single feature, with the ability to weight each component individually. 
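<\/p>
<p>The merging upweights a feature simply by repeating its tokens before vectorization. As a quick sketch of why repetition shifts the similarity (the feature strings below are invented for illustration), repeating the genre token makes a shared genre dominate the cosine similarity:<\/p>

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy feature strings (invented): movie A shares its genre with B
# and its director with C.
def similarities(w_genres):
    movie_a = ' '.join(["comedy"] * w_genres) + " johnlasseter"
    movie_b = "comedy stevenspielberg"
    movie_c = "drama johnlasseter"
    vect = CountVectorizer()
    counts = vect.fit_transform([movie_a, movie_b, movie_c])
    sims = cosine_similarity(counts)
    return sims[0, 1], sims[0, 2]  # A vs B, A vs C

sim_b1, sim_c1 = similarities(1)  # equal weights: A is equally close to B and C
sim_b5, sim_c5 = similarities(5)  # genre repeated 5x: the shared genre dominates
print(sim_b1, sim_c1, sim_b5, sim_c5)
```
<p>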
This allows us to build highly flexible recommenders, as we will see later on.<\/p>\n<pre class=\"lang:default decode:true \"># relative importance of different features\r\n\r\nw_genres = 2\r\n\r\nw_keywords = 3\r\n\r\nw_actors = 3\r\n\r\nw_director = 1\r\n\r\nw_producer = 1\r\n\r\nw_screenplay = 1\r\n\r\n# function for merging features\r\n\r\ndef concatenate_features(df_row):\r\n\r\n    genres = []\r\n\r\n    for genre in ast.literal_eval(df_row['genres']):\r\n\r\n        genres.append(genre['name'].lower())\r\n\r\n    genres = ' '.join(genres)\r\n\r\n    keywords = []\r\n\r\n    for keyword in ast.literal_eval(df_row['keywords']):\r\n\r\n        keywords.append(keyword['name'])\r\n\r\n    keywords = ' '.join(keywords)\r\n\r\n    return ' '.join([genres] * w_genres) + ' ' + ' '.join([keywords] * w_keywords) + ' ' + ' '.join([df_row['actors']] * w_actors) + ' ' + ' '.join([df_row['director']] * w_director) + ' ' + ' '.join([df_row['producer']] * w_producer) + ' ' + ' '.join([df_row['screenplay']] * w_screenplay)<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"lang:default decode:true \">df_movies['features'] = df_movies.apply(concatenate_features, axis=1)\r\n\r\n# pre-processing text of features\r\n\r\ndef process_text(text):\r\n\r\n    # replace multiple spaces with one\r\n\r\n    text = ' '.join(text.split())\r\n\r\n    # lowercase\r\n\r\n    text = text.lower()\r\n\r\n    return text\r\n\r\ndf_movies['features'] = df_movies.apply(lambda x: process_text(x.features), axis=1)<\/pre>\n<p>&nbsp;<\/p>\n<p>After generating the merged feature, we again need to vectorize it. We will not use TF-IDF here, as it reduces the importance of words that occur in many documents, and in our case that 
also includes actors, directors, screenplay writers and producers.<\/p>\n<p>We will therefore use CountVectorizer for this purpose.<\/p>\n<pre class=\"lang:default decode:true \">vect = CountVectorizer(stop_words='english')\r\n\r\nvect_matrix = vect.fit_transform(df_movies['features'])\r\n\r\ncosine_similarity_matrix_count_based = cosine_similarity(vect_matrix, vect_matrix)<\/pre>\n<p>Example recommendations:<\/p>\n<pre class=\"lang:default decode:true\">recommendations('Toy Story', df_movies, cosine_similarity_matrix_count_based, 10)<\/pre>\n<p><em>4252\u00a0\u00a0\u00a0 Toy Story 3<\/em><\/p>\n<p><em>5823\u00a0\u00a0\u00a0 Toy Story That Time Forgot<\/em><\/p>\n<p><em>785\u00a0\u00a0\u00a0\u00a0 Small Soldiers<\/em><\/p>\n<p><em>5702\u00a0\u00a0\u00a0 Hawaiian Vacation<\/em><\/p>\n<p><em>1358\u00a0\u00a0\u00a0 Toy Story 2<\/em><\/p>\n<p><em>273\u00a0\u00a0\u00a0\u00a0 Pinocchio<\/em><\/p>\n<p><em>1680\u00a0\u00a0\u00a0 The Transformers: The Movie<\/em><\/p>\n<p><em>833\u00a0\u00a0\u00a0\u00a0 Child&#8217;s Play<\/em><\/p>\n<p><em>966\u00a0\u00a0\u00a0\u00a0 Toys<\/em><\/p>\n<p><em>4836\u00a0\u00a0\u00a0 Ted<\/em><\/p>\n<p>Using relative weights to control the importance of different metadata allows us to quickly build a new recommender focused on other aspects of the movies.<\/p>\n<p>For example, we can 
increase the weight of the director feature to recommend movies that were very likely directed by the same director as the input movie:<\/p>\n<pre class=\"lang:default decode:true \">w_director = 100\r\n\r\ndf_movies['features'] = df_movies.apply(concatenate_features, axis=1)\r\n\r\nvect = CountVectorizer(stop_words='english')\r\n\r\nvect_matrix = vect.fit_transform(df_movies['features'])\r\n\r\ncosine_similarity_matrix_count_based = cosine_similarity(vect_matrix, vect_matrix)\r\n\r\nrecommendations('Toy Story', df_movies, cosine_similarity_matrix_count_based, 8)<\/pre>\n<p><em>4837\u00a0\u00a0\u00a0 Tin Toy<\/em><\/p>\n<p><em>4860\u00a0\u00a0\u00a0 Knick Knack<\/em><\/p>\n<p><em>3182\u00a0\u00a0\u00a0 Luxo Jr.<\/em><\/p>\n<p><em>1358\u00a0\u00a0\u00a0 Toy Story 2<\/em><\/p>\n<p><em>1012\u00a0\u00a0\u00a0 A Bug&#8217;s Life<\/em><\/p>\n<p><em>4532\u00a0\u00a0\u00a0 Cars 2<\/em><\/p>\n<p><em>5423\u00a0\u00a0\u00a0 Mater and the Ghostlight<\/em><\/p>\n<p><em>3268\u00a0\u00a0\u00a0 Cars<\/em><\/p>\n<p>A quick check shows that all of the recommended movies were directed by the director of Toy Story \u2013 John Lasseter.<\/p>\n<h1>Conclusion<\/h1>\n<p>In this article, we have introduced several content-based recommender systems in Python, using the MovieLens dataset.<\/p>\n<p>Recommender systems utilize large amounts of data about our interactions with items and try to find patterns showing which items are most popular with users similar to us, or 
find items that are most similar to those we have purchased in the past.<\/p>\n<p>Besides the content-based method used in this article, recommenders often use the collaborative filtering approach or a combination of both, known as hybrid methods, which aim to combine the two main approaches in a way that minimizes the drawbacks of each. Hybrid recommenders are the most common type found in online platforms today.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Recommender systems are methods that predict users\u2019 interests and make meaningful recommendations to them for different items, such as songs to play&#8230;<\/p>\n","protected":false},"author":1,"featured_media":435,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[46],"tags":[],"_links":{"self":[{"href":"https:\/\/www.alpha-quantum.com\/blog\/wp-json\/wp\/v2\/posts\/431"}],"collection":[{"href":"https:\/\/www.alpha-quantum.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.alpha-quantum.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.alpha-quantum.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.alpha-quantum.com\/blog\/wp-json\/wp\/v2\/comments?post=431"}],"version-history":[{"count":2,"href":"https:\/\/www.alpha-quantum.com\/blog\/wp-json\/wp\/v2\/posts\/431\/revisions"}],"predecessor-version":[{"id":434,"href":"https:\/\/www.alpha-quantum.com\/blog\/wp-json\/wp\/v2\/posts\/431\/revisions\/434"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.alpha-quantum.com\/blog\/wp-json\/wp\/v2\/media\/435"}],"wp:attachment":[{"href":"https:\/\/www.alpha-quantum.com\/blog\/wp-json\/wp\/v2\/media?parent=431"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.alpha-quantum.com\/blog\/wp-json\/wp\/v2\/categories?post=43
1"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.alpha-quantum.com\/blog\/wp-json\/wp\/v2\/tags?post=431"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}<br />