
Academic Events


[Seminar Notice] Prof. Ke Te (柯特), The Chinese University of Hong Kong: Designing Detection Algorithms for AI-Generated Content: Consumer Inference, Creator Incentives, and Platform Strategy

  • Published: 2025-11-05


2025, Issue 81 (No. 1122 overall)

Talk title: Designing Detection Algorithms for AI-Generated Content: Consumer Inference, Creator Incentives, and Platform Strategy

Speaker: Prof. Ke Te (柯特), The Chinese University of Hong Kong

Host: Prof. Guan Xu (关旭), Chair of the Department of Supply Chain Management and Systems Engineering

Time: Friday, November 14, 2025, 10:00-11:30

Venue: Room 219, School of Management Building

About the speaker:

Ke Te (柯特) is Professor and Chair of the Department of Marketing at the CUHK Business School, The Chinese University of Hong Kong, and Professor (by courtesy) in the Department of Decisions, Operations and Technology. He received his Ph.D. in Operations Research and master's degrees in Statistics and Economics from the University of California, Berkeley, and bachelor's degrees in Physics and Statistics from Peking University. His research spans quantitative marketing models, microeconomic theory, and industrial organization, with recent work focusing on consumer search, online advertising, and platforms, as well as the economics of privacy, data, and algorithms. Before joining CUHK, he was an assistant professor at the MIT Sloan School of Management and the Operations Research Center. He currently serves as an Associate Editor for Journal of Marketing Research, Management Science, Marketing Science, and Quantitative Marketing and Economics. In 2024, his research on the digital economy was funded by the Excellent Young Scientists Fund of the National Natural Science Foundation of China, and he took part as an invited expert in the symposium on the business administration discipline's development strategy and 15th Five-Year Plan.

Abstract:

Generative AI has transformed content creation, enhancing efficiency and scalability across media platforms. However, it also introduces substantial risks, particularly the spread of misinformation that can undermine consumer trust and platform credibility. To address this, platforms deploy detection algorithms to distinguish AI-generated from human-created content, but these systems face inherent trade-offs: aggressive detection lowers false negatives (failing to detect AI-generated content) but raises false positives (misclassifying human-created content), discouraging truthful creators, while conservative detection protects creators but weakens the informational value of labels, eroding consumer trust. We develop a model in which a platform sets the detection threshold, consumers infer credibility from labels when deciding whether to engage, and creators choose whether to adopt AI and how much effort to exert to create content. A central insight is that the equilibrium structure shifts across regimes as the threshold changes: at low thresholds, consumers trust human labels and partially engage with AI-labeled content, disciplining AI misuse and boosting engagement, while at high thresholds this inference breaks down, AI adoption rises, and both trust and engagement collapse. The platform's optimal detection strategy therefore balances these forces, choosing a threshold that preserves label credibility while aligning creator incentives with consumer trust. Our analysis shows how detection policy shapes content creation, consumer inference, and overall welfare in two-sided content markets.
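The threshold trade-off described in the abstract can be made concrete with a small numerical sketch. The snippet below is purely illustrative and is not the model presented in the talk: the Gaussian score distributions, the threshold grid, and the `error_rates` helper are hypothetical assumptions chosen only to show how moving a single detection threshold trades false negatives against false positives.

```python
# Illustrative sketch (not the talk's model): how one detection threshold
# trades off false negatives against false positives. All score
# distributions and parameters are hypothetical assumptions.
import random

random.seed(0)

# Hypothetical detector scores: higher means "looks more AI-generated".
# Assume AI-generated content tends to score higher than human-created content.
ai_scores = [random.gauss(0.7, 0.15) for _ in range(10_000)]
human_scores = [random.gauss(0.4, 0.15) for _ in range(10_000)]

def error_rates(threshold):
    """Content scoring above `threshold` is labeled AI-generated."""
    # False negative: AI-generated content that escapes detection.
    fnr = sum(s <= threshold for s in ai_scores) / len(ai_scores)
    # False positive: human-created content misclassified as AI-generated.
    fpr = sum(s > threshold for s in human_scores) / len(human_scores)
    return fnr, fpr

print(f"{'threshold':>9}  {'false neg':>9}  {'false pos':>9}")
for t in [0.3, 0.4, 0.5, 0.6, 0.7]:
    fnr, fpr = error_rates(t)
    print(f"{t:9.2f}  {fnr:9.3f}  {fpr:9.3f}")

# A lower (more aggressive) threshold catches more AI content (low false
# negatives) but flags more truthful creators (high false positives); a
# higher (more conservative) threshold does the opposite. This is the
# tension the talk's model formalizes.
```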
