Abstract
Introduction
In the Web 2.0 era, major e-commerce websites have introduced user-generated product questions and answers (Q&As) to satisfy consumers’ information needs and improve customer experience (Deng et al., 2020; S. Gao et al., 2021), such as “Questions and Answers” on Amazon, “Ask Everyone” on Taobao, and “Jingdong Q&A” on Jingdong. User-generated product Q&As are interactive information through which customers share product knowledge. Both potential customers and consumers who have already purchased can ask questions, and the system invites purchasers to share their knowledge, which enables interaction between questioners and answerers (Chen et al., 2019; S. Gao et al., 2019). Since both questions and answers are given by consumers, consumers have better acceptance of and trust in user-generated product Q&As (Deng et al., 2020).
However, the excessive quantity of consumer questions and answers may lead to the problem of information overload (Kulkarni et al., 2019). Thus, to help consumers filter the most helpful Q&As, many e-commerce platforms have introduced a helpful evaluation mechanism that allows consumers to vote on whether Q&As are helpful or not (W. Zhang, Lam, et al., 2020). Nevertheless, helpfulness is a comprehensive indicator when evaluating the value of Q&As, and it does not reflect the characteristics that consumers use to judge the helpfulness of product Q&As. If managers can grasp the influencing factors of consumers’ evaluation of helpfulness, they will not only be able to optimize the Q&A system to guide consumers to write more valuable Q&As but also provide a basis for predicting the helpfulness of newly generated product Q&As. Therefore, the following questions are raised: what are the factors in consumers’ judging the helpfulness of user-generated product Q&As? How do different factors affect users’ perceived helpfulness of product Q&As?
In academia, a large number of studies related to user-generated product Q&As have sought to investigate answer generation methods (Feng et al., 2021; Yu et al., 2018a). Previous studies have found that answers can be mined from online reviews (Chen et al., 2019; Deng et al., 2020) and product attributes (S. Gao et al., 2021) to respond to consumers’ questions. Meanwhile, traditional user-generated Q&A studies have focused on the factors of knowledge behavior from the perspective of answer quality (C. H. Chou et al., 2015; Fang, 2014) and the social capital of answers (Guan et al., 2018; Yan et al., 2019) in virtual Q&A communities. In addition, a growing body of studies has concentrated on the helpfulness of user-generated information on e-commerce platforms from the perspective of online reviews (Yang et al., 2021; Zhou & Guo, 2017). Previous studies have indicated that the characteristics of review content (Y. C. Chou et al., 2022; Yang et al., 2020) and reviewers (Filieri et al., 2019) significantly affect review helpfulness. Online reviews are mainly one-way expressions of consumers’ emotional attitudes (Korfiatis et al., 2012), while user-generated product Q&As are two-way communication with a knowledge-sharing orientation (Guan et al., 2018; Yan et al., 2019). Although user-generated product Q&As include both questions and answers, previous studies of user-generated Q&As ignored the characteristics of questions and Q&As. However, consumers never understand answers in isolation and always evaluate questions and answers together (Yu et al., 2018a). Therefore, it is important to understand the helpfulness of user-generated product Q&As by considering the characteristics of questions and Q&As.
The present study attempts to explore how the characteristics of questions and Q&As affect consumers’ evaluation of user-generated product Q&As from the perspective of interactive communication. Specifically, questions reflect consumers’ knowledge needs, which directly determine whether consumers will read the Q&As (Salehan & Kim, 2016; Yu et al., 2018b). Consumers’ attitudes may be affected by whether an answer completely responds to a question. Thus, the topic consistency of answers and questions may affect the perceived value of Q&As. In addition, consumers’ questions originate from product attributes and usage perceptions. The impact of user-generated content about objective information on consumers’ perceived helpfulness may obviously differ from that of information about consumers’ subjective attitudes (Huang et al., 2013; Luan et al., 2016). Hence, this study also considers the question type as a factor. Therefore, the present study investigates the following two questions: (1) How does the topic consistency of questions and answers affect the helpfulness of user-generated product Q&As? (2) What roles does the question type play in altering consumers’ evaluation of user-generated product Q&As?
The present empirical study contributes to the existing literature in two ways. First, unlike previous studies investigating the helpfulness of user-generated content focusing on unidirectionally expressed information such as online reviews or answers, the present study explores the helpfulness of user-generated product Q&As from the perspective of interactive communication by combining questions and answers. Second, this study examines the impact of Q&A topic consistency and the question type on the helpfulness of user-generated product Q&As.
Therefore, the present study not only provides consumers with insights into how to identify helpful user-generated product Q&As but also helps e-commerce managers optimize Q&A systems. With the emergence of Web 3.0, the findings of this study become even more valuable. A key characteristic of Web 3.0 is the shift from platform-only revenue to joint revenue for both platforms and users (Sharafi Farzad et al., 2019). This study can help users generate more helpful Q&As, allowing both users and e-commerce platforms to obtain more value and thus better serving consumers.
Literature Review
User-Generated Q&As
With the popularity of Q&A systems on e-commerce platforms, user-generated product Q&As have attracted the attention of many scholars. Previous studies have mainly focused on answer generation methods. Generating answers to questions about products based on large-scale online reviews is a common method (Chen et al., 2019; Deng et al., 2020; Zhao et al., 2019), and product attributes are regarded as supplementary information (S. Gao et al., 2019; S. Gao et al., 2021). In addition, answer ranking in user-generated product Q&As is an important factor for improving user satisfaction. W. Zhang, Deng, and Lam (2020) ranked the answers of user-generated product Q&As based on multiple semantic relationships between answers and online reviews.
Previous studies on user-generated Q&As have mainly focused on the influencing factors of user knowledge behavior in virtual Q&A communities. From the perspective of answer content characteristics, C. H. Chou et al. (2015) proposed that knowledge behavior is positively influenced by the characteristics of knowledge quality. Fang (2014) proposed that in addition to traditional cognitive stimuli, emotional stimuli and arousal in response messages help to promote knowledge behavior. From the perspective of responder characteristics, variables such as social capital (Yan et al., 2019), reciprocity norms (Guan et al., 2018), and reputation (Meng et al., 2021) have also been recognized as important determinants of knowledge behavior.
However, current studies describe traditional user-generated Q&As and user-generated product Q&As mainly from the perspective of one-way expression, such as the characteristics of answers and answerers. Few studies have analyzed the influencing factors of the helpfulness of user-generated product Q&As from an interactive communication perspective by considering the characteristics of questions and Q&A pairs.
Therefore, the present study attempts to explore how the characteristics of questions and Q&As affect consumers’ evaluation of user-generated product Q&As from the perspective of interactive communication. Specifically, the characteristics of questions mainly include the question type. According to Luan et al. (2016) and Huang et al. (2013), the questions in user-generated product Q&As can be classified as attribute questions and experience questions. Attribute questions focus on inquiries about product features, such as a product’s attributes and functions (Luan et al., 2016). Experience questions mainly ask consumers how they feel about using the product (Luan et al., 2016). Meanwhile, the characteristics of Q&As mainly refer to the topic consistency of questions and answers. Q&A topic consistency refers to whether the topic contained in the answer matches that in the question (Yang et al., 2021). In addition, previous studies have shown that the valence, consistency, professionalism, knowledge stickiness, and product type of user-generated content are all key influences on the helpfulness of user-generated content in e-commerce websites (Y. C. Chou et al., 2022; Lee et al., 2021; Thomas et al., 2019; Yang et al., 2021). Therefore, this study also considers the effects of answer professionalism, answer knowledge stickiness, answer opinion consistency, answer valence, and product type on the helpfulness of user-generated product Q&As. Specifically, answer professionalism refers to the expertise of the information contained in an answer (Thomas et al., 2019). Answer knowledge stickiness refers to the difficulty of answer knowledge transfer (Von Hippel, 1994). Answer opinion consistency refers to the degree of consistency of the opinions embedded in different answers to the same question (Thomas et al., 2019). Answer valence refers to the overall evaluation of the answers in user-generated product Q&As provided by previous individuals (Lee et al., 2021).
Helpfulness of User-Generated Content on E-Commerce Websites
Previous studies on the helpfulness of user-generated content on e-commerce websites have focused on online reviews. From the perspective of review content characteristics, variables such as review readability (Y. C. Chou et al., 2022), review length (Li & Huang, 2020), and the review linguistic style (Yang et al., 2021) have been recognized as important determinants of the helpfulness of online reviews. Meanwhile, the influence of emotional factors in online reviews has also been considered, with Y. C. Chou et al. (2022) suggesting that low-arousal emotions in online reviews can significantly and positively influence their helpfulness. In addition, the text similarity of review titles and review content positively and significantly affects the helpfulness of online reviews (Yang et al., 2020). The ranking of online reviews (J. N. Wang et al., 2020; Zhou & Guo, 2017) and the characteristics of reviewers (Filieri et al., 2019) have also been regarded as significant influencing factors of the helpfulness of online reviews. Furthermore, previous studies have shown that the product type is an important influence on the usefulness of user-generated content on e-commerce websites (Filieri et al., 2019; Mudambi & Schuff, 2010; Ren & Hong, 2019). Nelson (1970) divided products into search products and experience products based on consumers’ difficulty in obtaining information before and after purchase. Specifically, search products are products that consumers can evaluate based on product attributes before purchase, such as mobile phones, computers, and other digital products. Experience products are products that require consumers’ personal experience to evaluate, such as food and cosmetics.
Different from the one-way expression mode of online reviews, user-generated product Q&As are a two-way communication mode for consumers to communicate product information. Moreover, the characteristics of product Q&As contain richer information characteristics, such as the question type and the correlation between the question and answer. Thus, this study attempts to investigate the influence of the characteristics of questions and Q&A pairs on the helpfulness of user-generated product Q&As.
Elaboration Likelihood Model
The elaboration likelihood model (ELM) was proposed by psychologists Petty et al. (1981) to explain how information is treated by individuals (Petty & Cacioppo, 1986). It suggests that changes in consumer attitudes are influenced by the central route and the peripheral route (Petty & Cacioppo, 1986). On the one hand, the central route indicates that the complicated indications embedded in information, such as information readability, have a significant influence on individuals’ attitudes. On the other hand, the peripheral route indicates that simple indications, such as source credibility, have an important effect on individuals’ attitudes (Bhattacherjee & Sanford, 2006).
The ELM is widely used to explore the influencing factors of the helpfulness of online reviews (Y. C. Chou et al., 2022; Yang et al., 2021). Previous studies have generally concluded that indicators related to information quality affect consumer perceptions through the central route, while peer evaluations of information occur through the peripheral route (Y. C. Chou et al., 2022; Yang et al., 2021). The present study adopts the ELM for the following reasons: first, online reviews are mainly unidirectional expressions of consumers’ emotional attitudes (Korfiatis et al., 2012). However, user-generated product Q&As are knowledge-sharing-oriented interactive information among consumers (Y. Zhang et al., 2019). User-generated product Q&As include questions and answers, and the richer information content in user-generated product Q&As requires more refined analysis and understanding among consumers (Y. Zhang et al., 2019). Second, user-generated product Q&As contain information about other consumers’ evaluations of questions and answers, and this comprehensive social information can provide more peripheral cues for users’ decision-making. Therefore, the research idea of this study is consistent with the ELM. Thus, the key influencing factors of the helpfulness of user-generated product Q&As are explored based on the ELM. In this study, the variables related to user-generated product Q&As are categorized into the central route and the peripheral route. Meanwhile, this study argues that different types of products and questions may influence consumers’ perceptions, further moderating the relationship between central and peripheral cues related to user-generated product Q&As and the helpfulness of user-generated product Q&As.
Research Model and Hypotheses
Model Construction
To answer the questions above, this study adopts the ELM as its theoretical foundation to derive hypotheses. As shown in Figure 1, the variables related to user-generated product Q&As are divided into the central route and peripheral route; at the same time, the moderating effects of the product type and question type are also considered.

Figure 1. The conceptual framework.
Research Hypotheses
The Central Route
Answer Professionalism
Professional information can help consumers fully understand product details (Nicolaou et al., 2013). Thomas et al. (2019) suggested that higher information professionalism helps users form more positive perceptions. X. Gao et al. (2021) found that information professionalism significantly and positively influenced individuals’ perceived helpfulness. Similarly, the professional and personalized information embedded in questions and answers can help individuals improve the perceived helpfulness of Q&A pairs. Therefore, the following hypothesis is proposed:
Hypothesis 1. Answer professionalism will positively affect the helpfulness of user-generated product Q&As.
Q&A Topic Consistency
The topic consistency between a question and an answer indicates whether the answer responds to the multiple topics contained in the question. Yang et al. (2021) found that a high level of topic consistency between hotel reviews and manager responses could weaken the negative effect of pessimistic sentiment on the helpfulness of online reviews. Similarly, higher Q&A topic consistency helps to alleviate the problem of information overload in user-generated product Q&As (Luo et al., 2013) and can increase individuals’ satisfaction with Q&A pairs. Thus, we propose the following hypothesis:
Hypothesis 2. Q&A topic consistency will positively influence the helpfulness of user-generated product Q&As.
Answer Knowledge Stickiness
It is easier for individuals to understand answers when the answer knowledge stickiness is lower. Korfiatis et al. (2012) proposed that the comprehensibility of information is positively related to the helpfulness of reviews. Y. C. Chou et al. (2022) suggested that information that is easy to read and understand can enhance consumers’ perceived helpfulness. Therefore, the following hypothesis is proposed:
Hypothesis 3. Answer knowledge stickiness will negatively affect the helpfulness of user-generated product Q&As.
The Peripheral Route
Answer Opinion Consistency
C. H. Chou et al. (2015) and Thomas et al. (2019) argued that consumers are more likely to perceive current information as credible and helpful when it is consistent with the opinions proposed by other consumers. Similarly, individuals are more likely to improve the perceived helpfulness of the Q&A pair when the opinions of different answers are highly consistent. Thus, the following hypothesis is proposed:
Hypothesis 4. Answer opinion consistency will positively influence the helpfulness of user-generated product Q&As.
Answer Valence
Luo et al. (2014) and Lee et al. (2021) found that information valence positively and significantly influences the perceived helpfulness and credibility of information. Individuals are more likely to develop positive perceptions of an answer if it is consistently given high valence by previous users. Hence, we propose the following hypothesis:
Hypothesis 5. Answer valence will positively affect the helpfulness of user-generated product Q&As.
Moderating Variables
Product Type
Previous studies have shown that the helpfulness of user-generated content in e-commerce is moderated by the product type (Filieri et al., 2019; Mudambi & Schuff, 2010; Ren & Hong, 2019; F. Wang et al., 2015). Similarly, user-generated product Q&As for search products typically involve more specialized field vocabulary than those for experience products. It is easier to understand the attribute characteristics of products when the answer knowledge stickiness is lower. In addition, compared to search products, the answers to user-generated product Q&As for experience products are often given based on the experience of users. The positive effect of answer opinion consistency on the helpfulness of user-generated product Q&As will be weakened based on the objective fact that different individuals may have different experiences of using the same product. Therefore, the moderation hypotheses are proposed as follows:
Hypothesis 6a. The negative effect of answer knowledge stickiness on the helpfulness of user-generated product Q&As will be stronger for search products than for experience products.
Hypothesis 6b. The positive effect of answer opinion consistency on the helpfulness of user-generated product Q&As will be weaker for experience products than for search products.
Question Type
Attribute questions are mainly inquiries about the characteristics of products, such as their advantages and functions (Luan et al., 2016). Even if the topics of the question and answer are inconsistent, the answer may provide other attribute information that consumers are interested in. Thus, for attribute questions, user-generated product Q&As are still likely to be perceived as helpful even when Q&A topic consistency is lower. Experience questions are mainly inquiries about consumers’ feelings about using a product, and such feelings are characterized by strong subjectivity and large linguistic differences between individuals (Luan et al., 2016). The smaller the difference in linguistic habits among individuals, the lower the answer knowledge stickiness and the higher the evaluation of Q&As. Therefore, the following moderation hypotheses are proposed:
Hypothesis 7a. The positive effect of the Q&A topic consistency of attribute questions on the helpfulness of user-generated product Q&As will be weaker than that of experience questions.
Hypothesis 7b. The negative effect of answer knowledge stickiness of experience questions on the helpfulness of user-generated product Q&As will be stronger than that of attribute questions.
Research Methodology
Data Collection
The Amazon website has a Q&A section and provides consumers with voting information for each Q&A pair. Therefore, the empirical data of this study were collected from Amazon, which has rich question and answer data. The data collection consisted of four steps. In step 1, based on consumers’ difficulty in obtaining information before and after purchasing products, this study classified products on Amazon into experience products and search products. In step 2, this study randomly selected experience products and search products with relatively large amounts of Q&A data. The experience products were mainly selected from popular food products, such as popcorn, and highly reputable cosmetics, such as lipstick. The search products were mainly selected from popular digital products, such as iPads and Bluetooth headsets. In step 3, 4,881 questions and the corresponding 10,573 answers posted from December 31, 2017, to September 9, 2021, were collected through web scraping. In step 4, since some questions in the collected data had no corresponding answers to support the empirical analysis, the 67 questions with zero answers were excluded. Thus, 4,814 Q&A pairs were finally used for the empirical analysis. Each Q&A pair includes the following data: the number of helpful votes for the Q&A pair, the question content, the answer content, the number of answers, the number of helpful votes for answers, and the total number of votes for answers.
Measurement
Dependent Variable
The Amazon website provides a helpful voting function to filter the most popular Q&A pairs for consumers. Previous research adopted the total number of helpful votes obtained by information to measure helpfulness (Y. C. Chou et al., 2022; Yang et al., 2020). Similarly, we use the total number of helpful votes received by Q&A pairs to measure the dependent variable of this study.
Explanatory Variables
Central Route Variables
(1) Answer professionalism. Answer professionalism is measured as the mean number of feature words contained in each answer (X. Gao et al., 2021). Based on Xiong et al. (2021), this study used the TextRank algorithm to extract answer feature words. The TextRank algorithm does not require corpus training, and the feature words of a sentence can be extracted based only on the information of the sentence itself. First, this study extracted word stems: the different forms of words were transformed into general forms by removing plural endings and normalizing tense. Second, the present study removed prepositions, articles, and other words that do not play a significant role in distinguishing the content of the text. Third, this study constructed the edges between two words using the co-occurrence relationship between words and then constructed a keyword graph. Fourth, the importance score of each word node was computed iteratively, as shown in Equation 1:

WS(V_i) = (1 − d) + d × Σ_{V_j ∈ In(V_i)} [w_ji / Σ_{V_k ∈ Out(V_j)} w_jk] × WS(V_j)     (1)

where WS(V_i) denotes the TextRank score of word node V_i, d is the damping factor, In(V_i) is the set of nodes linking to V_i, Out(V_j) is the set of nodes that V_j links to, and w_ji is the co-occurrence weight of the edge between V_j and V_i. The top-ranked words were retained as the feature words of each answer.
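To make the extraction procedure concrete, the co-occurrence graph construction and iterative scoring described above can be sketched in Python using only the standard library. The function name, window size, iteration count, and toy token list below are illustrative assumptions, not details from the original study:

```python
from collections import defaultdict

def textrank_keywords(tokens, window=2, d=0.85, iters=50, top_k=5):
    """Score words with TextRank over a co-occurrence graph.

    tokens: a pre-processed (stemmed, stopword-free) word list.
    """
    # Build an undirected co-occurrence graph: an edge links two words
    # that appear within `window` positions of each other.
    weights = defaultdict(lambda: defaultdict(float))
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            v = tokens[j]
            if v != w:
                weights[w][v] += 1.0
                weights[v][w] += 1.0
    # Iterate the TextRank update (Equation 1) for a fixed number of rounds.
    score = {w: 1.0 for w in weights}
    for _ in range(iters):
        new = {}
        for w in weights:
            rank = sum(wt / sum(weights[v].values()) * score[v]
                       for v, wt in weights[w].items())
            new[w] = (1 - d) + d * rank
        score = new
    return sorted(score, key=score.get, reverse=True)[:top_k]

# Toy answer tokens; 'battery' is the best-connected node and ranks first.
kw = textrank_keywords("battery life long battery charge fast charge".split(),
                       top_k=3)
print(kw)
```

In practice the token list would come from the stemming and stopword-removal steps described above, and the number of returned feature words per answer would feed the professionalism measure.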
(2) Answer knowledge stickiness. Text readability can be used to measure the ease of comprehension of text content (Y. C. Chou et al., 2022). Answer knowledge stickiness is measured as the inverse of answer text readability, where answer text readability is measured by the Flesch Reading Ease index (Y. C. Chou et al., 2022).
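The stickiness measure above can be illustrated with a minimal Python sketch. The vowel-group syllable counter is a crude stand-in for the syllable counts used by standard Flesch implementations, and the inverse-readability step is shown only for texts with a positive score:

```python
import re

def count_syllables(word):
    # Crude heuristic: count maximal groups of vowels; at least one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def knowledge_stickiness(text):
    # Sketch of the paper's inverse-readability measure; note that real
    # Flesch scores can be zero or negative, which a production measure
    # would need to handle explicitly.
    return 1.0 / flesch_reading_ease(text)

easy = "The cat sat. It was fun."
hard = "Comprehensive multidimensional evaluations necessitate deliberation."
assert flesch_reading_ease(easy) > flesch_reading_ease(hard)
```

Short, monosyllabic sentences score high on readability and therefore low on stickiness, matching the intuition that easily understood answers transfer knowledge more readily.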
(3) Q&A topic consistency. Since question and answer texts are relatively short, we adopted the biterm topic model (BTM), which is designed for short texts, to extract the topic sets of questions and answers (Jia et al., 2022). Furthermore, based on Yang et al. (2021), Q&A topic consistency is calculated with the Jaccard similarity coefficient. First, we constructed the topic vectors of questions and answers. Specifically, we matched the question topic set with the answer topic set one by one: if a topic in the answer topic set appeared in the question topic set, the corresponding element in the answer topic vector was recorded as 1 and 0 otherwise. Second, this study calculated Q&A topic consistency based on the Jaccard similarity coefficient, as shown in Equation 2:

Consistency(Q, A) = |T_Q ∩ T_A| / |T_Q ∪ T_A|     (2)

where T_Q and T_A denote the topic sets of the question and the answer, respectively, |T_Q ∩ T_A| is the number of topics they share, and |T_Q ∪ T_A| is the number of distinct topics across both.
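The calculation in Equation 2 reduces to a set intersection over a set union; a minimal sketch with hypothetical topic labels is:

```python
def topic_consistency(question_topics, answer_topics):
    """Jaccard similarity between a question's and an answer's topic sets."""
    q, a = set(question_topics), set(answer_topics)
    if not q | a:
        return 0.0  # assumption: no topics on either side means no consistency
    return len(q & a) / len(q | a)

# Toy topic sets as BTM might extract them (labels are illustrative).
q_topics = {"battery", "charging"}
a_topics = {"battery", "charging", "warranty"}
print(topic_consistency(q_topics, a_topics))  # 2 shared / 3 distinct
```

An answer covering every question topic and nothing else scores 1; an answer on entirely different topics scores 0.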
Peripheral Route Variables
(1) Answer valence. Information valence is often expressed in terms of numerical ratings or star ratings (Lee et al., 2021). However, the Amazon website does not have a numerical rating or star rating mechanism for the Q&A section. Thus, this study uses the ratio of the number of helpful votes given to an answer to the total number of votes on the answer to measure answer valence.
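This ratio can be computed directly; the zero-vote fallback below is an assumption, since the paper does not state how answers without any votes are handled:

```python
def answer_valence(helpful_votes, total_votes):
    """Share of an answer's votes that are 'helpful' (valence in [0, 1])."""
    if total_votes == 0:
        # Assumption: answers with no votes receive a valence of zero.
        return 0.0
    return helpful_votes / total_votes

assert answer_valence(3, 4) == 0.75
```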
(2) Answer opinion consistency. Following Yang et al. (2020), answer opinion consistency is calculated as follows. Since answers differ in length and many phrases and sentences are related to each other, the order of words and the syntax need to be fully considered when converting the answer text into vectors. Thus, Doc2vec was used to convert the answers into vectors (Kim et al., 2019). Furthermore, the cosine similarity between answer text vectors was used to calculate answer opinion consistency, as shown in Equation 3:

cos(v_i, v_j) = (v_i · v_j) / (‖v_i‖ ‖v_j‖)     (3)

where v_i and v_j are the Doc2vec vectors of two answers to the same question, v_i · v_j denotes their dot product, and ‖·‖ denotes the Euclidean norm.
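Equation 3 can be sketched as follows. Because the paper does not specify how pairwise similarities are aggregated when a question has more than two answers, the mean over all answer pairs used below is an assumption, and the two-dimensional vectors are stand-ins for real Doc2vec embeddings:

```python
import math
from itertools import combinations

def cosine(u, v):
    """Cosine similarity between two dense vectors (Equation 3)."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = (math.sqrt(sum(x * x for x in u))
            * math.sqrt(sum(y * y for y in v)))
    return dot / norm if norm else 0.0

def opinion_consistency(answer_vectors):
    """Assumed aggregation: mean pairwise cosine across a question's answers."""
    pairs = list(combinations(answer_vectors, 2))
    if not pairs:
        return 1.0  # a single answer is trivially self-consistent
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

# Stand-ins for Doc2vec embeddings of three answers (illustrative values).
vecs = [[1.0, 0.0], [0.8, 0.2], [0.9, 0.1]]
print(round(opinion_consistency(vecs), 3))
```

Vectors pointing in similar directions yield a consistency near 1, signaling that the answers express largely agreeing opinions.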
Moderating Variables
The product type and question type are defined as dummy variables: search products take the value of 0, and experience products take the value of 1; attribute questions take the value of 0, and experience questions take the value of 1. Furthermore, two graduate students were invited to study the classification standards for attribute-based and experience-based reviews in Luan et al. (2016) and Huang et al. (2013) and then independently classify the questions. The Cohen’s kappa coefficient of the question classification is 0.948, indicating a high level of agreement between the two coders.
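Cohen's kappa corrects the two coders' raw agreement rate for the agreement expected by chance; a minimal sketch with hypothetical labels ("attr"/"exp") is:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters' categorical labels of the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: the share of items the raters label identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: expected overlap given each rater's label frequencies.
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical classifications of six questions by two coders.
coder1 = ["attr", "attr", "exp", "exp", "attr", "exp"]
coder2 = ["attr", "attr", "exp", "attr", "attr", "exp"]
print(round(cohens_kappa(coder1, coder2), 3))
```

A kappa of 0.948, as reported above, is conventionally interpreted as almost perfect agreement.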
Model Specification
The dependent variable in this study is noncontinuous count data, and only 4.113% of the Q&A pairs received helpful votes. The variance of the dependent variable substantially exceeds its mean, indicating that the data are overdispersed. Therefore, negative binomial regression, which accommodates overdispersion in count data, is adopted to estimate the model.
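The overdispersion check that motivates the negative binomial specification amounts to comparing the sample mean and variance of the vote counts; the zero-heavy toy counts below are illustrative, not the study's data:

```python
import statistics

def check_overdispersion(counts):
    """Compare the sample mean and (population) variance of a count outcome.

    Variance well above the mean signals overdispersion, in which case
    negative binomial regression is preferable to Poisson regression.
    """
    mean = statistics.mean(counts)
    var = statistics.pvariance(counts)
    return mean, var, var > mean

# Hypothetical zero-heavy helpful-vote counts, mimicking the outcome's shape.
votes = [0] * 95 + [1, 2, 3, 8, 15]
mean, var, overdispersed = check_overdispersion(votes)
print(mean, round(var, 2), overdispersed)
```

Under a Poisson model the variance would equal the mean, so a variance several times larger, as in this toy sample, justifies the negative binomial choice.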
Data Analysis and Results
Descriptive Statistical Analysis
The descriptive statistics of the sample and their correlations are shown in Table 1. In addition, the variance inflation factors (VIFs) of the model are examined to diagnose multicollinearity. The maximum VIF value is 1.56, and the minimum value is 1.02, indicating that multicollinearity is not a serious concern.
Table 1. Descriptive Statistics.
Hypothesis Testing
The estimation results of the proposed model are presented in Table 2. Model 1 is a negative binomial regression model that contains only the central route variables. Model 2 measures the effect of the peripheral route variables. Model 3 contains the independent variables and moderating variables. Model 4 and Model 5 add the interactions.
Table 2. Negative Binomial Regression Results for the Helpfulness of User-Generated Product Q&As.
In Table 2, the results of Model 1 indicate that the coefficient of answer professionalism on the helpfulness of product Q&As is positive and significant and that the coefficient of Q&A topic consistency is also positive and significant, while the coefficient of answer knowledge stickiness is negative and significant. Thus, Hypotheses 1, 2, and 3 are supported.
The results of Model 2 indicate that the hypothesized path from answer opinion consistency to the helpfulness of product Q&As is positive and significant and that the path from answer valence is also positive and significant. Thus, Hypotheses 4 and 5 are supported.
As shown in Model 4, the coefficient for the interaction between the product type and answer knowledge stickiness is negative and significant, and the coefficient for the interaction between the product type and answer opinion consistency is also significant. As the interaction plots in Figure 2 illustrate, the negative effect of answer knowledge stickiness on the helpfulness of user-generated product Q&As is stronger for search products than for experience products, and the positive effect of answer opinion consistency is weaker for experience products than for search products. Thus, Hypotheses 6a and 6b are supported.

Figure 2. Interaction plots: (a) interaction of answer knowledge stickiness and the product type, (b) interaction of answer opinion consistency and the product type, (c) interaction of Q&A topic consistency and the question type, and (d) interaction of answer knowledge stickiness and the question type.
The results of Model 5 in Table 2 indicate that the coefficient of the interaction between the question type and Q&A topic consistency is negative and significant, and the coefficient of the interaction between the question type and answer knowledge stickiness is also significant. As the interaction plots in Figure 2 illustrate, the positive effect of Q&A topic consistency on the helpfulness of user-generated product Q&As is weaker for attribute questions than for experience questions, and the negative effect of answer knowledge stickiness is stronger for experience questions than for attribute questions. Thus, Hypotheses 7a and 7b are supported.
Robustness Tests
We retested the robustness of our findings by using zero-inflated Poisson regression as an alternative model specification. The signs and significance of the key coefficients are consistent with those of the negative binomial models, indicating that our findings are robust.
Table 3. Robustness Check Results for Alternative Model Specifications.
Discussion and Conclusion
The present study examined the factors affecting the helpfulness of user-generated product Q&As from the perspective of interactive communication. Based on the ELM, an estimation model was developed, and empirical testing was performed using data collected from Amazon to represent the effects of answer and question cues on the helpfulness of user-generated product Q&As. The negative binomial regression results show a few substantial findings. The present study found that the relationship between answer professionalism, Q&A topic consistency, answer opinion consistency, answer valence, and the helpfulness of user-generated product Q&As is positive. Meanwhile, answer knowledge stickiness negatively affects the helpfulness of user-generated product Q&As.
In addition, the results of our study show that the product type and question type significantly moderate the impacts of answer and question cues on the helpfulness of user-generated product Q&As. The negative effect of answer knowledge stickiness on the helpfulness of user-generated product Q&As is stronger for search products than for experience products. Meanwhile, the positive effect of answer opinion consistency is stronger for search products than for experience products. Regarding the question type, the negative effect of answer knowledge stickiness on the helpfulness of user-generated product Q&As is stronger for experience questions than for attribute questions, and the positive influence of Q&A topic consistency is stronger for experience questions than for attribute questions.
Theoretical Contributions
The present study offers several theoretical implications. First, existing studies on the helpfulness of user-generated content on e-commerce websites have mainly focused on the one-way communication mode of online reviews (Y. C. Chou et al., 2022; Yang et al., 2020), whereas user-generated product Q&As have obvious two-way communication characteristics. The present study therefore contributes to understanding the influencing factors of the helpfulness of user-generated product Q&As from the perspective of interactive communication. Our study validates the ELM in the context of user-generated product Q&As and offers a theoretical reference for future research on user-generated product Q&As.
Second, previous related research on user-generated Q&As mainly investigated the factors that affect individuals’ knowledge behavior from the perspective of answers and answerers (C. H. Chou et al., 2015), while the characteristics of questions and Q&A matching were ignored. This study not only incorporates answer characteristics such as answer professionalism and answer opinion consistency but also considers the characteristics of Q&A topic consistency and the question type. Our results can help to explain the process through which consumers understand user-generated product Q&As and enrich the construction of online Q&A characteristics.
Practical Implications
This study has several practical implications. First, our study indicates that the effects of answer knowledge stickiness and answer opinion consistency on the helpfulness of user-generated product Q&As are moderated by the product type. The implications for e-commerce website managers are straightforward: they should develop differentiated ranking functions for user-generated product Q&As for different types of products. Compared with experience products, the ranking of user-generated product Q&As for search products needs to be based more on indicators such as answer knowledge stickiness and answer opinion consistency as references. In addition to the traditional ranking based on the posting time or the number of helpful votes for answers, answer knowledge stickiness or answer opinion consistency can be used as the optimization goal to develop the ranking functions for user-generated product Q&As for search products.
Second, the present study finds that the impact of Q&A topic consistency on the helpfulness of user-generated product Q&As is moderated by the question type. A significant implication for managers of e-commerce websites is that the function of classifying questions should be developed to help consumers identify attribute questions and experience questions. In addition, attribute questions and experience questions should be displayed differently by adding tags. Meanwhile, e-commerce platform managers can develop the function of extracting and displaying question and answer topic words to help consumers judge the topic consistency of Q&As.
Third, the present study indicates that the impact of answer knowledge stickiness on the helpfulness of user-generated product Q&As is negative. Managers should develop a keyword recommendation function to guide consumers to provide more professional and helpful answers. Managers can extract high-frequency words from product detail pages and online reviews based on deep learning methods. Then, these high-frequency words can be recommended to consumers who are invited to answer questions, which can help to enhance answer professionalism and reduce answer knowledge stickiness.
Limitations and Future Research Directions
This study also has some limitations. First, our study focused on the text characteristics of questions and answers, while the sentiment of Q&A pairs was ignored. Future studies are encouraged to consider sentiment factors, including question sentiment, answer sentiment, and their combined characteristics. Second, the helpfulness of user-generated product Q&As may be affected by the professionalism, age, and rank of the questioner and answerer, and these factors should be comprehensively considered in future research. Third, consumers’ questions are commonly related to product details and online reviews. Hence, the intrinsic relationship between such prerequisite information and the helpfulness of user-generated product Q&As should be explored in depth in future research. Finally, previous studies have shown that the length of information has a significant impact on its helpfulness (Li & Huang, 2020; Yang et al., 2021). Thus, the impact of the length of questions and answers on the helpfulness of user-generated product Q&As is an interesting topic for future investigation.
