Date | Speaker | Topic | Faculty Host
4/11/2026
CBB 310
9:00 AM - 4:30 PM
|
Doctoral Students
Various Institutions
|
The 43rd Annual UH Marketing Doctoral Symposium
Doctoral Student Research Presentations
|
|
4/10/2026
CBB 310
3:45 PM - 6:00 PM
|
Sandy Jap
Emory University
|
The 43rd Annual UH Marketing Doctoral Symposium
Welcome and Keynote Address
|
|
4/3/2026
MH 365A
11:00 AM - 12:30 PM
|
Ankit Sisodia
Purdue University
|
Economic Value of Visual Design: Evidence from the UK Automobile Market
We examine whether and how the visual design of products has economically meaningful effects on demand and competition, focusing on the automobile market. Visual design has been challenging to characterize empirically in a manner that is human-interpretable yet sufficiently rich. We leverage a recently developed method to obtain low-dimensional disentangled representations of visual design, using data from the UK automobile market from 2008 to 2017. The resulting visual characteristics are human-interpretable, orthogonal, and able to reconstruct the original design of every product (make-model), thus spanning the range of products in the market. We incorporate these visual characteristics into a structural model of demand and supply based on the classic BLP framework. We quantify how economically meaningful quantities such as elasticities differ between the model with visual characteristics and the standard baseline without them, and find a substantial economic impact. We also evaluate a counterfactual in which we remove a focal product's closest neighbor in functional (or visual) characteristics and compare how the equilibrium quantities (prices, market shares) change. We find that removing the closest "visual neighbor" has an effect similar to removing the closest "functional neighbor," highlighting the economically meaningful role of visual characteristics. Additionally, we quantify the value of a specific visual redesign by examining a vehicle facelift that altered front-end design while leaving functional characteristics unchanged, and find that the redesign generated meaningful gains in market share and profit.
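
To give a concrete sense of how visual characteristics could enter a demand system, the sketch below uses a plain multinomial logit rather than the full BLP random-coefficients model described in the abstract; all data, dimensions, and coefficient values are made-up placeholders, not the authors' estimates.

```python
# Minimal sketch (not the authors' estimator): a plain multinomial-logit demand in
# which mean utility depends on price, functional characteristics, and low-dimensional
# visual characteristics. Everything below is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(0)
J = 50                                      # products (make-models) in one market
price = rng.uniform(15, 60, J)              # prices (illustrative units)
x_func = rng.normal(size=(J, 3))            # functional characteristics (e.g., size, hp, mpg)
x_vis = rng.normal(size=(J, 4))             # disentangled visual characteristics

alpha = -0.08                               # price coefficient (hypothetical)
beta_func = np.array([0.5, 0.3, 0.2])       # tastes for functional characteristics
beta_vis = np.array([0.4, -0.1, 0.2, 0.3])  # tastes for visual characteristics

# Mean utility without and with visual characteristics
delta_base = alpha * price + x_func @ beta_func
delta_vis = delta_base + x_vis @ beta_vis

def logit_shares(delta):
    """Market shares with an outside good whose utility is normalized to 0."""
    expd = np.exp(delta)
    return expd / (1.0 + expd.sum())

def own_price_elasticity(delta, price, alpha):
    """Own-price elasticities implied by the simple logit model."""
    s = logit_shares(delta)
    return alpha * price * (1.0 - s)

print("mean own-price elasticity, no visual chars:",
      own_price_elasticity(delta_base, price, alpha).mean().round(3))
print("mean own-price elasticity, with visual chars:",
      own_price_elasticity(delta_vis, price, alpha).mean().round(3))
```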
|
|
3/13/2026
MH 365A
11:00 AM - 12:30 PM
|
Tong Wang
Yale University
|
Why it Works: Can LLM Hypotheses Improve AI Generated Marketing Content?
Generative AI models are increasingly used to produce marketing content. Since off-the-shelf models are misaligned with desired marketing outcomes, they are fine-tuned using content experiments that identify what content correlates with higher engagement. Yet optimizing only for what works risks overfitting, reward hacking, and poor generalization, yielding content that succeeds in-sample but fails in new contexts or drifts toward clickbait. We propose a principled knowledge-alignment framework that moves beyond merely what works to why it works. In our approach, an LLM iteratively generates hypotheses about mechanisms (e.g., emotional language, narrative framing) to explain observed performance differences on a small set of data (abduction), then validates them on held-out data (induction). The optimized set of validated hypotheses forms an interpretable, domain-specific knowledge base that regularizes fine-tuning via Direct Preference Optimization (DPO), constraining the model toward generalizable principles. Our LLM-based approach extends the tradition of theory-guided machine learning to domains where relevant knowledge is tacit and therefore hard to encode explicitly in models. Using a dataset of over 23,000 A/B-tested news headlines across 4,500+ articles, we show that our knowledge-guided framework outperforms supervised fine-tuning, DPO, and multi-dimensional DPO in improving engagement (click-through), while avoiding clickbait and maintaining lexical diversity.
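
As a rough illustration of the abduction-induction loop the abstract describes, the sketch below hard-codes one round of hypothesis generation and held-out validation; the data, the keyword-based encoding of a "hypothesis," and the validation threshold are all hypothetical stand-ins, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's method): propose a candidate mechanism
# from a small training split (abduction) and keep it only if it separates high- from
# low-CTR headlines on held-out A/B data (induction).
from dataclasses import dataclass

@dataclass
class Headline:
    text: str
    ctr: float  # observed click-through rate from the A/B test

train = [Headline("How one nurse's story changed a hospital", 0.041),
         Headline("Hospital releases quarterly report", 0.012)]
heldout = [Headline("A teacher's story moved an entire town", 0.038),
           Headline("Town council publishes meeting minutes", 0.010)]

def propose_hypotheses(examples):
    # Abduction step: in the real framework an LLM reads winning vs. losing headlines
    # and articulates mechanisms; here one candidate is hard-coded for illustration.
    return [{"name": "narrative framing", "marker": "story"}]

def supported(hypothesis, examples, margin=0.005):
    # Induction step: a deliberately crude check of whether the mechanism predicts
    # higher click-through on held-out data.
    with_m = [h.ctr for h in examples if hypothesis["marker"] in h.text.lower()]
    without = [h.ctr for h in examples if hypothesis["marker"] not in h.text.lower()]
    if not with_m or not without:
        return False
    return sum(with_m) / len(with_m) - sum(without) / len(without) > margin

knowledge_base = [h for h in propose_hypotheses(train) if supported(h, heldout)]
print(knowledge_base)  # validated hypotheses would then guide DPO preference pairs
```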
|
|
3/6/2026
MH 365A
11:00 AM - 12:30 PM
|
Eugina Leung
Tulane University
|
Preference Filtering: When Consumers Share Narrow Preferences with Algorithms
Algorithmic personalization is pivotal to the digital economy worldwide. To generate personalized recommendations, companies frequently elicit consumer preferences by asking consumers to select categories of interest (e.g., video genres). This research examines whether preference-elicitation questions effectively capture consumer preferences and, if not, why this occurs and how it can be mitigated. Six pre-registered studies (along with five supplemental studies and a pilot study; total N = 6,767) spanning eight product domains (e.g., videos, news, wine) reveal that consumers share less diverse preferences with algorithms than they actually possess. Instead, they focus on their core preferences while omitting tangential ones—a phenomenon termed preference filtering. It is driven by the belief that sharing diverse preferences with an algorithm makes it prone to misclassification. Redesigning the preference-elicitation task to attenuate the perceived risk of misclassification can encourage consumers to share more diverse preferences with algorithmic recommenders, which deters the formation of "filter bubbles". Paradoxically, contrary to the lay belief in misclassification, two studies on a custom-built video-streaming website show that more diverse recommendations enhance consumers' evaluations of the algorithmic recommendation service. These findings offer valuable insights for companies that rely on algorithms to engage consumers.
|
|
2/27/2026
MH 365A
11:00 AM - 12:30 PM
|
Grant Donnelly
The Ohio State University
|
I'd Like Anything But Anchovies: Rejecting Unappealing Options Reduces Difficulty in Decisions for Joint Consumption
Consumers often solicit preferences from each other when deciding what to consume together. Prior work has shown that consumers are often hesitant to express a preference for joint consumption decisions, but expressing a preference for an appealing option can ease the decision-making process. We extend this work by evaluating the effectiveness of preference communication that rejects an unappealing option from a choice set. Despite only eliminating a single (and unappealing) option, such preference communication reduces decision difficulty for joint consumption because rejecting an unappealing option increases the perception of preference similarity with a consumption partner. As such, our effect is not observed when an unappealing option is rejected for reasons other than personal preference or when making decisions for individual consumption. Further, rejecting an unappealing option is a stronger signal of preference similarity in less established relationships. Together, this research contributes to the literature on decision making for joint consumption, interpersonal inference-making, and preference communication, and offers managerial insights for firms and individuals wishing to increase the effectiveness of decision-making for shared consumption.
|
|
2/20/2026
MH 365A
11:00 AM - 12:30 PM
|
Alice Wang
University of Iowa
|
Privileged and Picky: How a Sense of Disadvantage or Advantage Influences Consumer Pickiness Through Psychological Entitlement
Growing inequality continues to shape consumers' lives, widening the gap between the advantaged and the disadvantaged. This research examines how perceived disadvantage versus advantage influences consumer pickiness, defined as the latitude of acceptance around idiosyncratic ideal points. Across eight studies—including an analysis of consumer panel data, a field study at a local food pantry, and six preregistered experiments—we find that a sense of disadvantage leads consumers to be less picky, whereas a sense of advantage leads consumers to be more picky. These effects are driven by differences in psychological entitlement: disadvantage reduces entitlement, while advantage increases it, which in turn affects pickiness. Importantly, these differences emerge even in the absence of resource or external constraints, highlighting entitlement as a key psychological mechanism. We further find that the effects are moderated by social dominance orientation, such that the impact of disadvantage versus advantage on entitlement and pickiness is attenuated among individuals who do not endorse existing inequalities.
|
|
2/6/2026
MH 365A
11:00 AM - 12:30 PM
|
Ishita Chakraborty
University of Wisconsin–Madison
|
From Reviews to Responses: Bridging Pre- and Post-Purchase Consumers through AI-Enhanced QA with RAG
Question Answering (QA) on customer-facing platforms (e-commerce, travel, education, and brand websites) often suffers from delayed, low-quality responses and limited user participation. While customer reviews are abundant, their unstructured nature limits their direct use in answering specific questions, and Large Language Models (LLMs) alone lack product-specific details and subjective insights. To address these challenges, we propose and evaluate a novel QA framework that integrates LLMs with reviews through Retrieval-Augmented Generation (RAG). Our framework incorporates three components: (1) RAG for dynamically retrieving relevant reviews at inference time, (2) a question–review type matching module that enhances topical alignment, and (3) an answerability classifier that determines whether a reliable answer can be generated. Using a dataset of 500 Amazon questions, 2,000+ human responses, and 14,000 review sentences, we systematically evaluate different model variants using both lexical similarity metrics (ROUGE-L) and human judgments. Our full model improves lexical similarity scores by 50% over baseline LLM answers, matches or exceeds 72% of human responses, and approaches the best human answers in clarity, relevance, and informativeness. Human evaluations further show that our full model performs particularly well on subjective questions, a gain that lexical similarity metrics fail to capture. Overall, our findings show how LLMs and reviews can be combined to build scalable QA systems, while also revealing the limits of lexical similarity metrics and highlighting the importance of human-centered evaluation.
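
For intuition about the retrieval and answerability components, the sketch below retrieves review sentences with TF-IDF similarity, applies a crude answerability check, and stubs out the generation step; the real system uses an LLM generator, a question-review type matcher, and a trained answerability classifier, and the data, threshold, and generate_answer stub here are hypothetical.

```python
# Minimal sketch (assumptions, not the authors' system): review retrieval plus a
# stand-in answerability gate for review-grounded question answering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

review_sentences = [
    "The zipper broke after two weeks of daily use.",
    "Fits a 15-inch laptop with room to spare.",
    "The straps are comfortable even on long walks.",
]
question = "Will this backpack hold a 15 inch laptop?"

vectorizer = TfidfVectorizer()
review_vecs = vectorizer.fit_transform(review_sentences)
question_vec = vectorizer.transform([question])
scores = cosine_similarity(question_vec, review_vecs).ravel()

top_k = scores.argsort()[::-1][:2]        # retrieve the best-matching reviews
answerable = scores[top_k[0]] > 0.2       # stand-in for the answerability classifier

def generate_answer(question, evidence):
    # Placeholder for the LLM generation step conditioned on retrieved reviews.
    return f"Based on reviews ({'; '.join(evidence)}): likely yes."

if answerable:
    print(generate_answer(question, [review_sentences[i] for i in top_k]))
else:
    print("No reliable answer can be generated from the available reviews.")
```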
|
Yanyan Li
|
1/30/2026
MH 365A
11:00 AM - 12:30 PM
|
Oded Netzer
Columbia University
|
Learning When to Quit in Sales Conversations
Salespeople frequently face the dynamic screening decision of whether to persist in a conversation or abandon it to pursue the next lead. Yet little is known about how these decisions are made, whether they are efficient, or how to improve them. We study these decisions in the context of high-volume outbound sales, where leads are ample but time is scarce and failure is common. We formalize the dynamic screening decision as an optimal stopping problem and develop a generative language model-based sequential decision agent — a stopping agent — that learns whether and when to quit conversations by imitating a retrospectively inferred optimal stopping policy. Our approach handles high-dimensional textual states, scales to large language models, and works with both open-source and proprietary language models. When applied to calls from a large European telecommunications firm, our stopping agent reduces the time spent on failed calls by 54% while preserving nearly all sales; reallocating the time saved increases expected sales by up to 37%. Examining the linguistic cues that drive salespeople's quitting decisions, we find that they tend to overweight a few salient expressions of consumer disinterest and mispredict call failure risk, suggesting cognitive bounds on their ability to make real-time conversational decisions. Our findings highlight the potential of artificial intelligence algorithms to correct cognitively bounded human decisions and improve salesforce efficiency.
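
To make the imitation setup concrete, the sketch below labels each turn of a completed call with a retrospectively "optimal" action under a deliberately simple rule (quit on calls that ended without a sale, continue on successful ones) and fits a text classifier on partial transcripts; the paper's labeling rule, state representation, and language-model policy are more sophisticated, and all data and model choices here are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's method): imitation labels for a
# "stopping agent" built from completed call transcripts and their outcomes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

calls = [
    (["Hello, is this a good time?", "I'm not interested, stop calling."], False),
    (["Hello, is this a good time?", "Sure, tell me about the fiber plan."], True),
]

states, actions = [], []
for turns, sale in calls:
    for t in range(1, len(turns) + 1):
        states.append(" ".join(turns[:t]))              # conversation so far = textual state
        actions.append("continue" if sale else "quit")  # retrospectively optimal action

policy = make_pipeline(TfidfVectorizer(), LogisticRegression())
policy.fit(states, actions)

# At deployment, the agent scores the live transcript after each customer turn.
print(policy.predict(["Hello, is this a good time? I'm not interested, stop calling."]))
```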
|
Martin
|
11/14/2025
MH 365A
11:00 AM - 12:30 PM
|
Sungsik Park
University of South Carolina
|
Fake Carting: Manipulation of Consumer Observational Learning
Observational Learning (OL) is the process by which individuals learn by observing the actions of others. Online platforms increasingly provide statistics that serve as OL information, such as backer counts in crowdfunding, viewership on streaming platforms, and claimed rates in flash sales. While this firm-provided information can facilitate consumer social learning, it is vulnerable to manipulation. This paper identifies a previously undocumented deceptive tactic on Amazon's Lightning Deals, where sellers artificially inflate the Deal-Claimed Rate (DCR), the real-time percentage of inventory claimed, to mislead consumers. We term this practice 'fake carting.' Analyzing 2.07 million Lightning Deals, we estimate that fake carting occurs in about 1% of all deals, but is heavily concentrated among high-DCR cases. When consumers observe a DCR of 80% or higher, there is a 36.5% chance it is manipulated. Holding product, price, and seller constant, we find that fake carting increases the sales effect of Lightning Deals by 23.9%, suggesting consumers are misled into overestimating product value and making suboptimal decisions. Survey evidence shows that consumers place high trust in DCR and remain largely unaware of manipulation. Our findings highlight a significant and underexplored deceptive practice in online markets: the manipulation of OL information.
|
Seshadri Tirunillai
|
11/7/2025
MH 365A
11:00 AM - 12:30 PM
|
Liu Liu
University of Colorado Boulder
|
Building Persuasive Stories with Emotion Sequences
What types of stories are most persuasive? In this paper, we introduce a new template for categorizing story types based on the specific emotional dynamics of text, or "emotion sequences"—for example, whether a story begins fearful and ends with sadness, or vice versa. We present this as a new way to capture distinct narrative progressions that is tractable even in short-form media, and then apply this method to analyze the persuasiveness of different story types in online fundraising. Using transformer-based emotion classification tools, we measure the two-part emotion sequences of 14,000 medical fundraising pitches from GoFundMe.org and show that, among other findings, medical fundraising pitches that begin with a sad tone and end on a caring tone are significantly more likely to succeed. We then develop a simple new approach for testing the generalizability of these observational findings by using crowd-sourced, LLM-assisted rewrites to introduce particular emotion sequences to a sample of 40 randomly selected fundraisers. We show that human-only rewrites generally fail due to skill deficits (and LLM-only rewrites can introduce salient informational changes from the originals), but demonstrate that crowd-sourced, LLM-assisted rewriting offers an effective method for testing the out-of-sample application of research results by everyday online users. With this, we establish that pitches rewritten to feature our focal emotion sequences see a significant boost in perceived persuasiveness, even for some sequences associated with lower success in observational data, while placebo rewrites produce null effects. Furthermore, we show that increased identification with the protagonist of the fundraiser is the primary mechanism driving the observed effects.
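
To illustrate what a two-part emotion sequence measurement could look like, the sketch below classifies the opening and closing halves of a pitch with an off-the-shelf transformer emotion classifier; the model name is one publicly available example, and the paper's exact tools, label set, and text-splitting rules may differ.

```python
# Minimal sketch (assumptions, not the authors' pipeline): extract a (opening, closing)
# emotion pair from a short fundraising pitch.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="j-hartmann/emotion-english-distilroberta-base")

def emotion_sequence(pitch: str) -> tuple[str, str]:
    """Return the (opening emotion, closing emotion) pair for a short pitch."""
    sentences = [s.strip() for s in pitch.split(".") if s.strip()]
    half = max(1, len(sentences) // 2)
    opening = ". ".join(sentences[:half])
    closing = ". ".join(sentences[half:]) or opening
    labels = [classifier(part)[0]["label"] for part in (opening, closing)]
    return labels[0], labels[1]

pitch = ("My mother was diagnosed with a rare illness and we are scared. "
         "Any support would mean the world to our family, and we are grateful "
         "for every kind word.")
print(emotion_sequence(pitch))  # prints the two-part emotion sequence
```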
|
Bowen Luo
|