TY - GEN
T1 - Exploring the Potential of an AI-Based Twitch Moderation and Toxicity Detection Bot
AU - Huth, Julian
AU - Eichhorn, Christian
AU - Plecher, David A.
AU - Pirker, Johanna
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025/8/19
Y1 - 2025/8/19
N2 - Toxicity and hate speech remain constant challenges in various online communities, including live-streaming platforms such as Twitch. In this paper, we explore a lightweight, Python-based chat bot that integrates Twitch's IRC interface with the Google Perspective API to assess the toxicity of live chat messages in real time. The tool is designed for streamers seeking a minimal-effort moderation assistant and includes a graphical interface for monitoring per-channel toxicity metrics. To evaluate the system's effectiveness, we conducted an offline analysis using a toxic comment dataset. Our results show that the API can broadly distinguish toxic from non-toxic messages, achieving a mean toxicity score of 0.73 for labeled toxic content and 0.11 for benign content. However, limitations such as domain mismatch, API rate restrictions, and lack of contextual understanding constrain the real-world applicability of the integration. We discuss practical implications, ethical concerns, and paths for future work, including domain-specific dataset collection, threshold optimization, and user-centered evaluation. The findings highlight both the opportunities and limitations of integrating third-party AI services into real-time moderation workflows on platforms like Twitch.
AB - Toxicity and hate speech remain constant challenges in various online communities, including live-streaming platforms such as Twitch. In this paper, we explore a lightweight, Python-based chat bot that integrates Twitch's IRC interface with the Google Perspective API to assess the toxicity of live chat messages in real time. The tool is designed for streamers seeking a minimal-effort moderation assistant and includes a graphical interface for monitoring per-channel toxicity metrics. To evaluate the system's effectiveness, we conducted an offline analysis using a toxic comment dataset. Our results show that the API can broadly distinguish toxic from non-toxic messages, achieving a mean toxicity score of 0.73 for labeled toxic content and 0.11 for benign content. However, limitations such as domain mismatch, API rate restrictions, and lack of contextual understanding constrain the real-world applicability of the integration. We discuss practical implications, ethical concerns, and paths for future work, including domain-specific dataset collection, threshold optimization, and user-centered evaluation. The findings highlight both the opportunities and limitations of integrating third-party AI services into real-time moderation workflows on platforms like Twitch.
UR - https://www.scopus.com/pages/publications/105015498903
U2 - 10.1109/CoG64752.2025.11114410
DO - 10.1109/CoG64752.2025.11114410
M3 - Conference paper
AN - SCOPUS:105015498903
T3 - IEEE Conference on Computational Intelligence and Games, CIG
BT - Proceedings of the IEEE 2025 Conference on Games, CoG 2025
PB - IEEE Computer Society
T2 - 2025 IEEE Conference on Games, CoG 2025
Y2 - 26 August 2025 through 29 August 2025
ER -