(Table rows continued from the previous page)
…             0.8592   0.7953   0.7554
ParallelSGD   0.8409   0.7449   0.8852
ALSWR         0.9545   0.9053   1.0354
80   50   Y   0.859    0.7907   0.7544
80   50   N   0.8597   0.7995   0.7544

       Result of MCRS Item-Based CF

Configuration        Dataset
#asp.   Sub-asp      Yelp     TripAdvisor   Amazon
10      Y            0.864    0.8245        0.811
10      N            0.8643   0.8252        0.8117
50      Y            0.8641   0.8254        0.8118
50      N            0.8648   0.826         0.8124

       Top 10 Main Aspects Extracted

Dataset       Aspects
Yelp          place, food, service, restaurant, price, menu, staff, drink, and lunch
TripAdvisor   hotel, room, staff, location, service, breakfast, restaurant, bathroom, price, view
Amazon        game, graphic, story, character, player, price, gameplay, controller, level, and music

      The best overall results are obtained when multi-criteria user-to-user CF is used as the recommendation algorithm [5].

      3.4.2 User Preference Learning in Multi-Criteria Recommendation Using Stacked Autoencoders by Tallapally et al.

      Here, the authors propose a stacked autoencoder, a deep neural network (DNN) approach, to make use of multi-criteria ratings. They implement a model configured to learn the relationship between each user's criteria ratings and the overall rating. Experimental results on real-world datasets, namely the Yahoo! Movies and TripAdvisor datasets, show that this approach outperforms both single-criteria systems and existing multi-criteria approaches on different performance metrics [4].
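      Since the exact network configuration is not reproduced here, the sketch below only illustrates the general idea: a stacked autoencoder encodes a user's criteria ratings and predicts the overall rating from the learned code. It is written in Keras purely for illustration; the layer sizes, activations, and names (criteria_ratings, overall_rating, NUM_CRITERIA) are assumptions, not the configuration used by Tallapally et al.

# Minimal sketch (assumption): a stacked autoencoder over the criteria
# ratings of a user-item pair, with an extra head that predicts the
# overall rating from the encoded representation.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CRITERIA = 5  # e.g., number of TripAdvisor sub-ratings (assumed)

criteria_in = keras.Input(shape=(NUM_CRITERIA,), name="criteria_ratings")

# Encoder: progressively smaller hidden layers (the "stacked" part).
h = layers.Dense(16, activation="sigmoid")(criteria_in)
code = layers.Dense(8, activation="sigmoid")(h)

# Decoder: reconstructs the criteria ratings from the code.
h_dec = layers.Dense(16, activation="sigmoid")(code)
criteria_out = layers.Dense(NUM_CRITERIA, name="reconstruction")(h_dec)

# Extra head: predicts the overall rating from the learned code,
# modeling the relation between criteria ratings and overall rating.
overall_out = layers.Dense(1, name="overall_rating")(code)

model = keras.Model(criteria_in, [criteria_out, overall_out])
model.compile(optimizer="adam",
              loss={"reconstruction": "mse", "overall_rating": "mse"})

# Training on toy data (placeholders for the real rating matrices).
X = np.random.uniform(1, 5, size=(100, NUM_CRITERIA))
y = X.mean(axis=1, keepdims=True)
model.fit(X, {"reconstruction": X, "overall_rating": y},
          epochs=5, batch_size=16, verbose=0)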

      A look at their performance evaluation and result analysis makes clear how efficient this model can be.

       3.4.2.1 Dataset and Evaluation Metrics

      In this paper, two real-world datasets from the tourism and movie domains are used to evaluate performance. To obtain a workable data subset from TripAdvisor (TA), they retain only users who reviewed at least five hotels and hotels that were reviewed by at least five users.
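      As an illustration of this filtering rule, the sketch below keeps only users and hotels with at least five reviews, using pandas; the DataFrame and column names (ratings, user_id, hotel_id) are hypothetical and not taken from the paper.

import pandas as pd

def filter_min_reviews(ratings: pd.DataFrame, min_count: int = 5) -> pd.DataFrame:
    # 'ratings' is assumed to have one row per review, with hypothetical
    # 'user_id' and 'hotel_id' columns.
    while True:
        user_counts = ratings["user_id"].value_counts()
        hotel_counts = ratings["hotel_id"].value_counts()
        keep = (ratings["user_id"].map(user_counts) >= min_count) & \
               (ratings["hotel_id"].map(hotel_counts) >= min_count)
        if keep.all():
            return ratings
        # Dropping rows can push other users/hotels below the threshold,
        # so repeat until both conditions hold simultaneously.
        ratings = ratings[keep]

# Usage (hypothetical): ta_subset = filter_min_reviews(ta_reviews)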

      The resulting subset contains more than 19,000 rating instances from more than 3,100 users on around 3,500 hotels, with a high sparsity of 99.8272%. In addition, the Yahoo! Movies (YM) data subsets are generated as shown in Tables 3.3 to 3.5. To analyze the performance of this method, they use the Mean Absolute Error (MAE), which is known for its simplicity, accuracy, and popularity [4].
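      For reference, MAE is the average absolute difference between the predicted and the actual overall ratings; a minimal version is sketched below.

def mae(predicted, actual):
    # Mean Absolute Error between predicted and actual overall ratings.
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Example: mae([4.2, 3.1], [4.0, 3.5]) is approximately 0.3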

       Result = YM 10-10

Technique   MAE      GIMAE    GPIMAE   F1
MF [10]     0.8478   0.7461   0.6765   0.5998

       Result = YM 20-20

Technique   MAE      GIMAE    GPIMAE   F1
MF [10]     0.7397
