Exploring eWOM in Online Consumer Reviews: Experience Versus Search Goods


By Jinsoo Kim, Jaejin Lee and Matt Ragas

WJMCR 32 (May 2011)

Introduction | Purpose | Literature Review | Research Questions | Methods | Findings | Summary and Discussion | Limitations and Suggestions for Future Research

Abstract

The purpose of this exploratory study is to provide a description of eWOM that allows a better understanding of this new communication phenomenon through content analysis. The study analyzes 828 online consumer reviews based on product characteristics (experience vs. search goods) and website characteristics (specialized vs. general sites) from various angles, such as the consumer ratings for a product and its product sales ranking. Findings reveal that product and website characteristics are closely associated with the quality and quantity of content, review characteristics, preference, and consumer ratings. Managerial implications and suggestions for future studies are discussed.

Introduction

In an attempt to make an informed decision, people will often refer to the opinions of others to help make up their mind; this is even truer when making a purchase decision as a consumer. Consumers do this by seeking out information from various sources, including advertising, publicity, salespeople, peers and news reports. While there are numerous information sources available, consumers are likely to gather third-party opinions when making decisions.1 These third-parties are considered non-marketer-dominated sources and include critiques, product reviews, peers and word of mouth (WOM) referrals. Such sources are not supposed to have a personal stake in consumers’ purchases, and, as such, are perceived as more credible and less biased.2 Among those sources, WOM is widely available and considered to be a critical component of marketing since consumers often seek out WOM opinions before they purchase books, movie tickets, technology products, cars, or choose restaurants. Consequently, WOM is generating increased attention among communication professionals and the policymakers tasked with regulating this area.

The Rise of WOM in the Internet Era

Word of mouth is generally defined as interpersonal communication with a verbal exchange of positive and negative information about products and services.3 Research has generally shown that WOM is one of the most influential elements of the marketing mix. Looking beyond marketing to the broader mass mediated and human communication landscape, Katz and Lazarsfeld5 found that the influence of interpersonal communication (i.e. WOM) was twice as important as personal selling and seven times more important than print advertising. Similarly, the hierarchy of information sources perspective introduced by Arndt and May6 maintains that interpersonal sources (i.e. WOM) have a greater communicative influence on decision making than mass media, such as ads and news.

In recent years, as the Web has emerged as a leading mass communication platform, WOM has naturally blossomed on the Internet on thousands of specialized and general interest websites, and, in so doing, has become arguably even more important in affecting consumer decision making7. Today, many organizations view electronic word of mouth, known as eWOM, as a powerful marketing force and opportunity because consumers increasingly both post and read online consumer reviews as part of the purchase decision process. According to a study by Forrester Research, more than half of European consumers refer to other consumers’ online reviews when they make decisions8.

Purpose of the study

Although eWOM has received substantial coverage by the trade and popular press for its potential as a marketing communication tool, until recent years, relatively few studies on this topic have been published in scholarly journals. The growth of eWOM, as a Web-based phenomenon, lacks a large body of empirically-derived work that provides a solid foundation for further research.9 Also, the majority of research in this area has focused on user motivations or the effect aspect of eWOM10 (i.e. the readers’ perception of the eWOM content). However, one aspect of eWOM communication that has been largely overlooked is the other side of eWOM – the writers’ perspective; in other words, what is in the messages of online consumer-to-consumer interactions.

The purpose of the current study is to provide a description of eWOM that will allow for a better understanding of this new communication phenomenon by conducting a content analysis, in the hope of gaining new insights and inspiration for future studies. This exploratory study analyzes the actual content of existing online consumer reviews, as the most common form of eWOM, based on product characteristics (experience goods vs. search goods) and website characteristics (specialized sites vs. general sites) from various angles (length of review, number of reasons to support the review, preference toward a product, review characteristics, and consumer ratings). Park and Chung’s study11, which analyzed eWOM information in South Korea, has been one of the few efforts to provide a piece of the eWOM puzzle from a content analytic perspective. Most prior research into eWOM has been either experiment- or survey-based. There is value in applying a diverse range of methodologies to a phenomenon, thereby gaining a multi-sided view of it.12

To the authors’ best knowledge, the current investigation represents one of the first attempts to provide a descriptive account of eWOM from the writers’ point of view by conducting a content analysis of the characteristics of online consumer reviews for two different types of products (experience and search goods) on two different types of websites (specialized and general interest websites). We hope that by approaching the eWOM phenomenon from a content-analytic perspective we will advance knowledge of the online review aspect of WOM and help inspire future research in this area.

Literature Review

WOM
To place the emergence of eWOM research in its proper historical context, it is necessary to briefly review research into interpersonal communication and WOM. Since Whyte13 first coined the term WOM 50 years ago, this concept has subsequently been defined by several researchers. These various definitions of WOM all share a common ground: WOM is an exchange of information by verbal means in an informal, person-to-person manner.

Katz and Lazarsfeld14 conducted the initial empirical research into the relationship between mass mediated information sources, interpersonal communication, and public attitude and behavior change. They determined how individuals obtained information and how they weighed this information in making decisions15. Based on this research, they introduced a two-step flow model of communication. This model suggested that opinion leaders mediate the relationship between the mass media and the public, and that opinion leaders generally have a greater and more direct influence on individuals than mass mediated information. WOM is one such incarnation of interpersonal communication and opinion leadership.

Figure 1: The Two-step Flow Model (Katz and Lazarsfeld, 1955)


More than 25 years after the publication of Lazarsfeld and Katz’s pioneering research, marketing scholars Arndt and May16 hypothesized a dominance hierarchy of information sources, which maintains the existence of a direct hierarchy of influence among different types of sources. According to Arndt and May, the use of WOM (interpersonal sources) depends on the level of brand experience (direct prior experience), whereas the use of advertising (mass media) depends on the level of WOM information. Based on a logical process of deduction, interpersonal sources rank lower in perceived usefulness than direct prior experience, but rank higher than mass media. Therefore, direct prior experience (brand experience) tends to dominate interpersonal sources (WOM), and interpersonal sources tend to dominate mass media (advertising). The authors supported this notion by comparing and contrasting the structural and operational characteristics of these three different source levels, including attribution of biases, opportunity for feedback, control of feedback, relevance of content, completeness, validity, and accuracy. 

Figure 2: The Hierarchy of Information Sources (Arndt and May, 1981)

Although Arndt’s and May’s hierarchy was originally developed for durable consumer goods, Faber and O’Guinn17 conducted exploratory research to apply it to movie-going decision making. Faber and O’Guinn’s investigation found that, in most cases, people were influenced by multiple sources that provided conflicting information regarding a new movie. In this situation, consumers must resolve the contradictions between sources and reach their own decisions. Faber and O’Guinn came to the same conclusion as Chaffee18 on the notion that people learn source credibility by using and comparing different information sources through repeated experiences. Over time, people gradually perceive some sources as more credible than others.

Given this background, Faber and O’Guinn assessed movie-goers’ perceptions of different sources’ potential influence on movie selection to test Arndt and May’s hierarchy. The authors investigated eight different sources in order to determine their frequency of consultation, perceived credibility, importance and usefulness: one direct prior experience (preview), four mass media sources (critics’ reviews, television ads, radio ads, magazines), and three interpersonal sources (comments from friends, comments from a spouse/date, comments from someone known by the respondent and considered to be a movie expert). They found that interpersonal sources were generally more influential than mass media sources in consumer movie selection. A related study by Assael and Kamins19 investigated the motivation for WOM, and found evidence that supports Arndt’s and May’s argument20. The Assael and Kamins study found that people seek out WOM because they find it to be reliable. These scholars also found that people perceive WOM to be a time saver, and a way to lower the risk of purchasing.

eWOM

One of the unique characteristics of the Internet is its high level of interactivity. The emergence and prevalence of the Internet make it possible for consumers to interactively share their thoughts and experiences about products, brands, issues and public figures with other people more easily than ever. Schindler and Bickart21 claim that there are a number of ways in which eWOM messages are communicated through the Internet and they can be divided into seven categories. First, a “posted review” is a type of eWOM that appears on online merchant and commercial websites that specialize in posting consumer opinions. The “posted review” is the object the current study analyzes because it is currently considered the most common form of eWOM. Second, a “mailbag” is a type of eWOM that includes consumer and reader comments and feedback posted on the websites of consumer products’ manufacturers, service providers, magazines and news organizations. Third, “discussion forums” include bulletin boards and Usenet groups. Fourth, “electronic mailing lists” email the members of an email list with consumer opinions. Fifth, “personal emails” are messages sent by one individual to others. Sixth, “chat rooms” are places where real-time conversations between groups of people over the Internet take place. Finally, “instant messaging” includes one-on-one real-time conversations over the Internet.
Goldsmith and Horowitz22 noted that eWOM is an important aspect of e-commerce. According to these researchers, eWOM affects the sales of products and services because consumers tend to actively give and seek opinions online in the same manner that opinions are traded offline. According to Hung and Li23, eWOM could be even more influential than traditional WOM because it provides explicit information, tailored solutions, interactivity and empathetic listening directly to consumers. Several recent studies24 confirm that eWOM could be more powerful in communication than traditional WOM, due to its distinct characteristics and the impressive technological development of the Internet.

WOM vs. eWOM

There are several generally agreed-upon differences between traditional WOM and eWOM.
First, with eWOM, consumers are no longer constrained by time, place, or acquaintances either in transmitting or receiving information as they are with WOM. That is, traditional WOM is typically conducted face-to-face, whereas eWOM is Web-based communication that overcomes most of the physical barriers that inhibit traditional communication.

Second, the amount of information and the number of sources that consumers can access online is greater than what is available offline25. Offline, WOM is limited to those sources a consumer can readily contact. Through the Web, consumers have access to a large and diverse set of opinions about products and services posted by individuals who have used the product or are knowledgeable about the service, yet consumers need not have a prior relationship with those individuals to take advantage of the information being offered26.

Third, eWOM enhances the cost effectiveness of acquiring information. It saves time, effort, and money in finding the appropriate information, compared to searching offline in a more traditional way27.

All these traits give eWOM a far greater reach than most traditional ways of gathering information.

Search Goods vs. Experience Goods (Product Characteristics)

According to Peterson et al.,28 the suitability of the Internet for consumer marketing heavily depends on the characteristics of the products and services being marketed. Therefore, it is essential to consider product characteristics and to incorporate a product classification29.

One of the most common ways to classify products is either as experience goods or search goods30. Experience goods are defined as goods for which the quality is uncertain prior to consumption31. It is difficult for people to judge the pre-consumption quality of such goods, so they commonly gather relevant information from expert sources in order to construct evaluations and make decisions that will reduce the risk and uncertainty involved with a purchase32. Entertainment items like Broadway shows, plays, theater productions, recorded music, and movies are probably the best representatives of experience goods. On the other hand, search goods are dominated by product attributes for which full information can be acquired prior to consumption.33 In this case, potential buyers can determine product attributes (e.g. price, feature, and function) before the purchase. This includes goods such as electronic equipment, furniture, cars, and foods.34 In short, search goods can be evaluated by external information obtained prior to consumption whereas experience goods need to be personally experienced to determine the quality.35

Length (Quantity) & Number of Reasons (Quality) of Consumer Review

In WOM, it is generally difficult for consumers to determine a source’s credibility. Therefore, message quality (number of reasons to support a review) and quantity (length of consumer review) seem to play critical roles for consumers when determining whether or not they will adopt a WOM message. Furthermore, preference toward a product in an online consumer review is considered a critical factor in eWOM.

Yoon36 suggests that a ‘sufficient length of review’ is required for qualified communication in eWOM. In WOM studies, length is considered an important variable when measuring the quantity of WOM. Length is also found to be related to the perceived information distinctiveness37, the effect of WOM38, and the interaction among consumers, reviewers, and WOM.39

Yoon40 also proposed a ‘number of reasons to support reviews’ as one of the key factors to improve the quality of online opinions. The quality and quantity of information in online consumer reviews enhance interactivity, further increasing the power of persuasion.41 Thus, the following initial research question is posed:

Research Questions

RQ1a. Is there a difference in the length of consumer review and the number of reasons to support a review based on product characteristics (experience vs. search goods) and preference toward that product?

Product News vs. Personal Experience vs. Advice Giving (Review Characteristics)

Richins and Root-Shaffer’s study42 was an attempt to classify the characteristics of WOM into three categories: product news, personal experience, and advice giving. First, product news gives product information to consumers. This category includes more function and feature-oriented information than the other two. Second, personal experience contains personal information, such as reasons to buy and experiences with the product. Finally, advice giving is a positive or negative personal opinion about a product or service that is intended to affect other people’s decision-making processes. Due to the nature of product characteristics, this study anticipates that there is more projective information, such as advice giving and personal experience, in consumer reviews for experience goods than for search goods.

RQ1b. Is there a difference in online review characteristics (product news vs. personal experience vs. advice giving) based on product characteristics?

The current study examines the relationship between product characteristics and consumer ratings on products or services, as well as whether there are significant relationships among product characteristics, consumer ratings, and products’ sales rank.

RQ1c. Is there a difference in consumer ratings based on product characteristics?

RQ2. Is there a difference in consumer ratings based on product characteristics and products’ sales performance?

Specialized vs. General Websites (Website Characteristics)

In addition to exploring product characteristics, this study also expands the scope of its research to the characteristics of the websites at which the reviews appear. Among the various types of websites on the Internet, specialized and general (portal) review sites are the two where consumer reviews on products/services are most easily found. Auction sites are another type of website where many consumer reviews can be found; however, the data are often removed upon completion of a purchase, and not archived.43 Because of this, those data were not included in the study. Specialized review sites are sites that are designed for professional critics and consumers with more advanced knowledge and higher interest in a certain topic (e.g. CNET.com), whereas general web portals or online shopping mall sites (e.g. Amazon.com) target more general users. Because of these differences in the nature of the websites’ characteristics, differences in consumer reviews may be found. Thus, the following research questions are posed:

RQ3a. Is there a difference in the length of reviews and the number of reasons to support reviews, based on the website characteristics (specialized vs. general)?

RQ3b. Is there a difference in review characteristics based on website characteristics?

RQ3c. Is there a difference in consumer ratings based on website characteristics?

RQ3d. Is there a difference in preference toward a product based on website characteristics?

Methods

Sampling Procedure

This investigation adopts and expands upon the three-step sampling procedure developed by Yang and Fang44 to collect qualified online consumer reviews.

In order to compare consumer reviews on experience goods and search goods from different sites, the first step is to determine products that fit in with the purpose of this study. For experience goods, movies were chosen because movies are considered to be typical examples of experience goods whose quality is uncertain prior to consumption.45 Also, movies are one of the most common objects of consumer review sites on the Internet that provide information on the positivity or negativity of a product/service.46 For search goods, consumer reviews on Global Positioning System (GPS) products were chosen to be analyzed because they are one of the fastest growing markets among technology products, which are representative of search goods. The convenience of product categorization and data collection was also considered: GPS devices have a very simple category line-up, unlike many other technology products that have complicated subcategories (e.g. digital cameras).

The second step is to find appropriate websites that provide consumer reviews with post-use experiences. By using multiple search sites, such as Google, Yahoo, and MSN, the researchers reviewed the most notable US-based websites that offer online consumer reviews on movies and GPS products. Eight different websites were found to be relevant to the current study: Buzzillions.com; Consumerreports.org; Epinion.com; CNET.com; Amazon.com; Metacritic.com; Rottentomato.com; and movies.MSN.com.

The third step is to find qualified consumer review sites. In order to pursue the purpose of the study, two different online consumer review sites were identified for each of the representative experience good and search good. One is a specialized review site for professional critics and consumers, and the other is a general web portal / online shopping mall site, which encourages general consumers to write their opinions about a product in a specified area. After an intensive review of all sites, four websites fully met the requirements for this study. For the experience goods (movies), Metacritic.com47 was selected as a specialized online movie review site, and movies.MSN.com48 was selected as a general movie review site in the web portal category. For the search goods (GPS products), CNET.com49 was chosen as a specialized online IT product review site, and Amazon.com50 was chosen as a leading online general shopping mall site.

The final step of the sampling process is to choose specific movies and GPS products and their consumer reviews. To examine any significant relationship among customer ratings, product characteristics and sales ranking, four different brands/products were selected and divided into two groups: an upper-top-ranking brand group and a lower-top-ranking brand group. In the case of movies, the top 1 and 2 movies (Spiderman 3 and Shrek the Third) from the 2007 box office performances represent the upper-top-ranking group, and the top 9 and 10 (I am Legend and The Simpsons) represent the lower-top-ranking group based on the source The-Numbers.com.51 Also, based on the 2007 GPS sales report by NPD Group (GPSmagazine.com),52 the top 1 and 2 GPS products (Garmin C330 & TomTom ONE 3rd Ed.) were selected as the upper-top-ranking brand group and the top 9 and 10 (TomTom GO910 & TomTom GO510) as the lower-top-ranking brand group.

Data Collection

Since the upper- and lower-ranking brand groups were determined by full year 2007 sales performance, consumer reviews written during the same time period (from January 1, 2007 through December 31, 2007) were collected and examined from August 24 to September 10, 2008. A total of 4,236 consumer reviews from the upper-top-ranking brand group and the lower-top-ranking group were found on the four selected consumer review sites (movies: Metacritic.com and movies.MSN.com; GPS: CNET.com and Amazon.com), and as shown in Table 1, a total of 421 reviews were randomly sampled for the movies, and 407 reviews were randomly sampled for GPS products. Every Nth consumer review was chosen to get an approximately similar number for each slot based on product characteristics and website characteristics. Therefore, a total of 828 consumer reviews were sampled and analyzed.
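
For readers interested in replicating the every-Nth selection step described above, a minimal sketch is shown below. The function name, pool size, and target sample size are this sketch’s own illustrative assumptions, not the authors’ actual procedure.

```python
# Minimal sketch of the every-Nth (systematic) selection step described above.
# The review pool, target sample size, and helper name are illustrative
# assumptions, not the authors' actual code or data.

def systematic_sample(reviews, target_n):
    """Select approximately target_n reviews by taking every Nth item."""
    if target_n <= 0 or not reviews:
        return []
    step = max(1, len(reviews) // target_n)   # sampling interval N
    return reviews[::step][:target_n]

# Example: reduce one product/website slot of 1,059 hypothetical reviews to ~100.
pool = [f"review_{i}" for i in range(1059)]
sample = systematic_sample(pool, 100)
print(len(sample))  # number of reviews retained for coding
```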

Table 1: Number of Reviews by Product Characteristics and Website Characteristics

Measurement and Coding

This study analyzed the content of the consumer reviews collected from the four different online consumer review sites (CNET.com, Amazon.com, Metacritic.com, and movies.MSN.com), which were coded based on a coding sheet developed with seven separate categories.

The first category is product characteristics, which delineates whether the sample review is about experience goods (movies) or search goods (GPS products). The second category is website characteristics, distinguishing specialized online review sites from general web portal/online shopping mall sites. It is intuitive that online consumer reviews are influenced not only by the product characteristics, but also by the characteristics of the website.53 The third category is the length of consumer reviews on products or brands. For the current study, the length is measured by counting the number of words in each consumer review. The fourth category is the number of reasons to support reviews, which was used as a criterion to measure the quality of consumer reviews.54 The fifth category is preference toward a product/service, such as positive, negative, or neutral opinions in the reviews. For example, “Excellent” and “Love it” are counted as positive; “Unreliable,” “Boring,” and “This is bad” as negative. Reviews containing both responses are counted as neutral. The sixth category is review characteristics, broken down into three subcategories: product news, personal experiences, and advice giving.55 According to Richins and Root-Shaffer, product news is, for example, in the case of GPS products, a comment about advances in technology and features. In the case of a movie, it may be a comment about the cast, director, plot, or storyline.56 Personal experience is noted when reviewers make statements related to their personal experience, such as “I bought it for my daughter and she loves it.” Advice giving includes comments in which reviewers give advice about products such as “I strongly recommend this GPS” or “Don’t waste your money on this movie.” The seventh category is consumer ratings on a product/service found along with each review examined. Five is the highest rating possible and zero is the lowest rating possible.
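
To make the coding scheme concrete, the sketch below shows one way the seven categories could be recorded for each sampled review. The field names and example values are this sketch’s assumptions rather than the authors’ actual coding sheet.

```python
# A sketch of how the seven coding categories described above could be
# represented for each sampled review. Field names and example values are
# assumptions for illustration, not the authors' actual coding sheet.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CodedReview:
    product_type: str            # category 1: "experience" (movie) or "search" (GPS)
    website_type: str            # category 2: "specialized" or "general"
    length_words: int            # category 3: number of words in the review
    num_reasons: int             # category 4: reasons given to support the review
    preference: str              # category 5: "positive", "negative", or "neutral"
    review_characteristics: List[str] = field(default_factory=list)
                                 # category 6: any of "product news",
                                 # "personal experience", "advice giving"
    consumer_rating: float = 0.0 # category 7: 0 (lowest) to 5 (highest)

# Example coded record for a hypothetical GPS review posted on a general site.
example = CodedReview("search", "general", 124, 3, "positive",
                      ["product news", "personal experience"], 4.5)
```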

Most websites contain reviews from both professional critics and consumers; however, because this study is about consumer reviews, only consumer reviews were coded, and reviews by professional critics were not included for analysis.

Intercoder Reliability

For the coding procedure, a standardized coding sheet was developed. Two graduate students in mass communications were selected as coders. They were trained in a series of intensive sessions to clarify the coding instructions, operational definitions, and the category schemes. Some 20 consumer reviews sampled from outside of the study’s sampling frame were examined and coded for training purposes. Results were discussed and disagreements were analyzed and reexamined. This pre-coding procedure is extremely important because careful training of coders is an integral task in any content analysis and typically results in a more reliable analysis.57

Intercoder reliability was calculated from a randomly selected sub-sample of the analyzed content using Holsti’s formula. Reliability at about 90 percent or above is considered meaningful.58 Final reliability was 90.2 percent.
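
The paper cites Holsti’s formula without reproducing it; for reference, its standard form is shown below, where M is the number of coding decisions on which the two coders agree and N1 and N2 are the total numbers of coding decisions made by each coder. A final figure of 90.2 percent therefore indicates that the coders agreed on roughly nine out of every ten decisions.

```latex
% Holsti's intercoder reliability coefficient (standard form)
\[
  \text{Reliability} = \frac{2M}{N_1 + N_2}
\]
% M          : number of coding decisions on which both coders agree
% N_1, N_2   : total coding decisions made by coder 1 and coder 2, respectively
```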

Findings

This study analyzed the content of existing consumer reviews online by product characteristics (experience goods and search goods) and website characteristics (specialized and general) from various angles (length of review, number of reasons to support the review, preference toward a product, review characteristics, and consumer ratings). A total of 828 consumer reviews were sampled and analyzed for this study, as discussed in the method section earlier.

By Product Characteristics

RQ1a. Is there a difference in the length of consumer review and the number of reasons to support a review based on product characteristics and preference toward the product?

The first research question was posed to investigate the difference in the length of review (quantity of content) and the number of reasons to support the review (quality of content) based on product characteristics and preference toward the product. A two-way ANOVA test was conducted twice for each dependent variable to answer the question. Results are illustrated in Table 2.

Table 2: Descriptive Statistics for Number of Reasons and Length by Product Characteristics and Preference

Table 3: ANOVA Tests of Between-Subjects Effects (Number of Reasons)

Table 4: ANOVA Tests of Between-Subjects Effects (Lengths)

Graph 1: Number of Reasons by Product Characteristics and Preference

In terms of the number of reasons to support a review (quality of content), a statistically significant main effect was found for both product characteristics and product preference. An interaction effect between product characteristics and product preference was also detected. When people are positive about a product, they give more reasons in online reviews for search goods (GPS, Mean = 3.33) than for experience goods (movies, Mean = 2.01). When consumers are negative, however, there is no statistical difference between the two products (GPS, Mean = 2.38; movies, Mean = 2.37). This suggests that, when online reviewers criticize a GPS product (search goods), they become relatively less logical and offer fewer reasons to support their arguments.

In terms of the length of reviews (quantity of content), a significant main effect was found for both product characteristics and product preference; however, an interaction effect between product characteristics and product preference was not detected (p > .05).
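
As a methodological aside, the sketch below illustrates how a 2 (product type) x 3 (preference) between-subjects ANOVA of this kind could be run on the coded data. The file name and column names are illustrative assumptions, not the authors’ actual data or code.

```python
# Sketch of a two-way ANOVA on the coded reviews using statsmodels.
# "coded_reviews.csv" and its column names are assumed for illustration.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("coded_reviews.csv")   # columns: product_type, preference,
                                        #          num_reasons, length_words

# Main effects of product type and preference, plus their interaction,
# on the number of reasons (quality of content).
model = smf.ols("num_reasons ~ C(product_type) * C(preference)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Refitting the same model with length_words as the dependent variable
# would examine quantity of content in the same way.
```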

RQ1b. Is there a difference in online review characteristics based on product characteristics?

Table 5: Relationship between Product Characteristics and Online Review Characteristics

A statistically significant difference was found in terms of the review characteristics based on product characteristics. According to the results, GPS product reviews (search goods) contained more product news as information, whereas movie reviews (experience goods) contained more personal experience and advice giving. This result supports Park and Chung’s59 finding that consumer reviews on experience goods tend to include more subjective information about personal experiences whereas consumer reviews on search goods tend to include more objective information on a product’s features and facts.
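
The paper does not name the test behind this comparison; a chi-square test of independence on the product-type-by-review-characteristic cross-tabulation is one conventional approach, sketched below with made-up counts rather than the study’s data.

```python
# Chi-square test of independence for review characteristics by product type.
# The counts below are invented for illustration only.

import numpy as np
from scipy.stats import chi2_contingency

# Rows: product type (GPS, movies); columns: product news, personal
# experience, advice giving.
table = np.array([[220, 110, 77],
                  [105, 190, 126]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```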

RQ1c. Is there a difference in consumer ratings based on product characteristics?

Graph 2: Consumer Ratings by Product Characteristics

The graph for both GPS products and movies illustrates nearly a U-shape if the extremes on the left with 0 and .5 ratings are excluded. This indicates that ratings of the reviews on both GPS products and the movies tended to show bipolarity to the right and the left, with a low count in the middle. From the result, it seems plausible that people who are willing to log on and write a review about a certain product/service tend to hold strong opinions, either negative or positive, rather than staying in the middle grey area.

RQ2. Is there a difference in consumer ratings based on product characteristics and products’ sales rank?

Table 6: ANOVA Tests of Between-Subjects Effects (Consumer Ratings)

Graph 3: Consumer Ratings by Product Characteristics and Ranking

In terms of the consumer ratings, a statistically significant main effect was detected for both product characteristics and ranking. An interaction effect between product characteristics and ranking was also detected; however, it is not reasonable to make an inference from this result and generalize the relationship between consumer rating and sales ranking since this study analyzed only four products out of the hundreds of products available in each category. Also, many different variables other than WOM can affect the sales of a certain product (e.g. advertising, promotion, news media attention, etc.). So, the results of this research question should remain open and inconclusive, pending future study.

By Website Characteristics

RQ3a. Is there a difference in the length of consumer review and the number of reasons to support that review based on website characteristics? 

Table 7: Descriptive Statistics for Number of Reasons and Length by Website Characteristics

In terms of the number of reasons, reviews on specialized sites (Metacritic.com and CNET.com, Mean = 2.91) tend to have more reasons than reviews on general sites (movies.MSN.com and Amazon.com, Mean = 2.40). The results indicate that reviewers on specialized sites tend to use more reasons (higher quality) than reviewers on general sites. However, the difference in the length of the reviews between the site types was not significant (p > .05).
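
The test behind this comparison is likewise not named; an independent-samples (Welch’s) t-test on the coded data is one plausible way to compare the number of reasons across site types, as sketched below with the same assumed file and column names used earlier.

```python
# Sketch of a Welch's t-test comparing number of reasons by website type.
# "coded_reviews.csv" and its column names are assumed for illustration.

import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("coded_reviews.csv")
specialized = df.loc[df["website_type"] == "specialized", "num_reasons"]
general = df.loc[df["website_type"] == "general", "num_reasons"]

t, p = ttest_ind(specialized, general, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```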

RQ3b. Is there a difference in online review characteristics based on website characteristics?

Table 8: Relationship between Website Characteristics and Online Review Characteristics

None of the three online review characteristics were significantly different based on website characteristics.

RQ3c. Is there a difference in consumer ratings based on website characteristics?

Table 9: Consumer Ratings by Website Characteristics

According to the result, reviewers on the general sites tend to give higher ratings (Mean = 3.56) than reviewers on the specialized sites (Mean = 3.13). The result implies that reviewers on the general sites tend to be more generous, whereas reviewers on the specialized sites tend to be more critical.

RQ3d. Is there a difference in preference toward a product based on website characteristics?

Table 10: Product Preferences by Website Characteristics

The results show that both specialized sites and general sites contain more positive reviews than negative, and general sites tend to be even more positive than specialized sites. This result supports the findings of RQ3c, as well as Park and Chung’s60 findings that consumer reviews on general shopping malls tend to be more generous and positive (60.3%) than the reviews on specialized sites (50.4%).

Summary and Discussion

The results of this study point to several implications that should be carefully considered. 

First, this study found that product characteristics (search vs. experience goods) were closely associated with various aspects of online reviews, including the number of reasons (quality of content), length of reviews (quantity of content), review characteristics (product news, personal experience, and advice giving), and consumer ratings. This study’s findings imply that marketers need to take more care of, and pay close attention to, eWOM, especially for search goods, since it is assumed that consumers are more sensitive to reviews on search goods. Also, according to this study, search goods (GPS) reviewers tend to be more logical, including more reasons and product news information in their reviews, when they are positive about the product; however, when they criticize search goods, they use relatively fewer reasons to support their arguments, and can be less logical. This may be an important finding because, according to prior research, negative reviews with more subjective and emotional opinion (less logical) can be more powerful and effective than logical positive opinions.61

Second, the study finds that the ratings that online reviewers give to a certain product tend to be bipolarized. This means that there is a great possibility that reviewers on the web are either ‘very happy’ or ‘very angry’ consumers, rather than ‘neutral’ ones. While strong customer service has always been important for successful marketers, this finding provides yet another reminder of why strong customer service has taken on even greater importance in an interconnected world increasingly dominated by eWOM. By responding more swiftly to negative online consumer reviews, and by providing new opportunities to give feedback, marketers may be able to limit the number of negative reviews they ultimately receive.

Third, the study finds a relationship between consumer rating and sales ranking. In contrast to search goods (GPS), experience goods (movies) had little or no association between sales ranking and consumer ratings. This could mean that consumers pay less attention to consumer reviews when they purchase experience goods than they do when purchasing search goods. In other words, consumer ratings could have less influence on consumers’ decision making when purchasing experience goods. In fact, in the movie industry, there has long been a belief that reviews and box office performance are not closely correlated.62 Another possible explanation is that the motivations behind using review sites for experience goods (movies) and search goods (GPS) are different. For example, it could be assumed that the motivation for people using the movie review sites is generated after they consume a movie as an experience good and want to share opinions and feelings with others. For a GPS product, by contrast, it could be assumed that people use the review sites to seek out information before they make any purchasing decisions. This explanation reflects well the characteristics and definitions of the two product categories: movies as experience goods, and GPS products as search goods.

Fourth, the study finds that reviews on specialized sites (Metacritic.com and CNET.com) tend to have more reasons and grounds for argument (quality of content) than reviews on general sites (movies.MSN.com and Amazon.com). The study also finds that reviews on specialized sites are generally more critical and relatively more negative than reviews on general sites. The result supports the implication that there are more reviewers with professional knowledge and critical minds on specialized sites than on general sites. Considering the possibility that these reviewers could act as opinion leaders, marketers may need to pay closer attention to reviewers on specialized sites, which strengthens the need to manage eWOM.

Limitations and Suggestions for Future Research

This study analyzed the content of existing online consumer reviews on four different websites by product characteristics and other important factors. While this study makes several unique contributions to understanding the relationship between companies and consumers, and even among consumers themselves, it only scratches the surface of the potential future research that could explore many other inquiries in this area. As with any study, there are several limitations that need to be noted.

First, in this study, movies and GPS products were chosen as the products to represent experience goods and search goods, respectively, in order to investigate any difference in consumer reviews. There could be some controversy, however, over the ability of these two products (movies and GPS) to represent their categories and over the generalizability of the results. Therefore, replications are needed through future research to enhance validity.

Second, products were classified into an upper-top-ranking (top 1 and 2 brands) and a lower-top-ranking (top 9 and 10 brands) group; however, in the movie industry, several thousand movies are released every year. Therefore, the top 9 and 10 movies are still considered highly ranked, very successful movies. Thus, it is open to question whether this study’s sample shows accurate results. Accordingly, future research should find and compare samples that show more distinct differences in attributes.

Third, for collecting data in terms of movies, this study overlooked some intermediate variables (e.g. genre) which can affect results. There is a possibility that differences in genre influence the online consumer reviews of movies. Therefore, future studies should consider other variables and enhance the results of the research based on those variables.

Fourth, to collect online consumer reviews for both experience goods and search goods, the current study selected only two websites for each product category and divided them based on their characteristics. Future research should draw on a larger sample of websites or other sources of eWOM, e.g., online forums, chat rooms, and blogs.

Fifth, even though the products, websites, and reviews were appropriately sampled for this exploratory study, the nature of a content analysis of online consumer reviews could limit this study’s findings. For example, self-selected samples, which are not necessarily statistically representative, can increase the possibility of biased results.

Finally, to measure the reviews, the current study adopted only variables such as the length of review, number of reasons to support the review, preference toward a product, review characteristics, and consumer ratings. For future research, additional variables (e.g. reply comments, ranks on reviews, interactivity, etc.) might be added to examine any significant relationship between online consumer reviews and other variables. Over the past few years, researchers have begun to explore eWOM in the context of blogs and Twitter.63 Content analyses replicating and extending aspects of the current study design into these two new settings, as well as emerging platforms such as the mobile Web, would help complement this growing line of inquiry.

Jinsoo Kim and Jaejin Lee are Ph.D. students in the Department of Advertising at the University of Florida, Gainesville. Matt Ragas is an assistant professor in the College of Communication at DePaul University.
