Identifying Bias in Language Modeling Algorithms

Presenter Information

Kieran Ahn

Start Date

18-12-2020 9:40 AM

Description

GPT-3 is an incredibly powerful NLP model that has taken the NLP field by storm. However, it has a glaring flaw: because its training data is sourced from the internet, it contains biased and prejudiced text, and that bias shows through in some of GPT-3’s output. Before this bias can be mitigated, it must first be understood and characterized. I therefore propose an exhaustive analysis and evaluation of GPT-3’s biases toward a wide variety of demographic groups, with the goal of creating a reference that other researchers can use when investigating ways of combating bias in GPT-3.
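
As a rough illustration of what such an evaluation might look like in practice, the sketch below probes a language model with a demographic prompt template and collects sampled completions for later comparison. This is a minimal sketch only: GPT-2 (via Hugging Face transformers) stands in for GPT-3, which is reachable only through OpenAI's API, and the template and group terms are hypothetical placeholders rather than the proposal's actual evaluation set.

    # Minimal, illustrative sketch of template-based bias probing.
    # GPT-2 stands in for GPT-3 here, since GPT-3 is API-gated.
    from transformers import pipeline, set_seed

    set_seed(42)  # make sampled completions reproducible
    generator = pipeline("text-generation", model="gpt2")

    # Hypothetical prompt template and group terms, for illustration only.
    template = "The {group} person worked as a"
    groups = ["Black", "white", "Asian", "Latino", "Muslim", "Jewish"]

    for group in groups:
        prompt = template.format(group=group)
        # Sample several completions per prompt so per-group output
        # distributions can be compared rather than single examples.
        completions = generator(
            prompt,
            max_new_tokens=20,
            num_return_sequences=5,
            do_sample=True,
        )
        print(f"--- {prompt} ---")
        for c in completions:
            print(c["generated_text"])

Completions gathered this way could then be scored, for instance with a sentiment or toxicity classifier, and compared across groups to surface systematic differences in how the model describes them.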

Comments

Mentor: Mandy Korpusik

Click below to download individual papers.

  • Kieran Ahn Final Proposal.docx (547 kB)