• 0 Posts
  • 4 Comments
Joined 2 years ago
Cake day: September 5th, 2023



  • No, it’s more of a technical discussion. Many people might assume that to avoid toxicity, you just train a model on “good” non-toxic data and then apply toxicity-removal techniques to address whatever emergent toxicity the model still spits out. This paper is saying the authors found it more effective to deliberately train the model on a small percentage of “bad” toxic data, then apply those same toxicity-removal techniques. Counterintuitively, that actually produced less total toxicity. It’s an interesting result. A wild guess on my part, but I’m thinking training the model with toxic content “sharpened” the toxicity when it was generated, making it easier for those removal tools to identify it.
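  To make the recipe concrete, here’s a minimal sketch of the two steps described above: deliberately mixing a small fraction of toxic documents into the training set, then applying a post-hoc removal pass on generations. All names here (`build_training_mix`, `suppress_toxic`, the scorer) are hypothetical stand-ins, not the paper’s actual code; the detox step in the paper may be something more sophisticated (e.g. activation steering) rather than simple output filtering.

  ```python
  import random

  def build_training_mix(clean_docs, toxic_docs, toxic_fraction=0.1, seed=0):
      """Sample a training set where roughly `toxic_fraction` of the final
      mix is drawn from the toxic pool (the deliberate 'bad data' step)."""
      rng = random.Random(seed)
      # Solve n_toxic / (n_clean + n_toxic) ≈ toxic_fraction for n_toxic.
      n_toxic = int(len(clean_docs) * toxic_fraction / (1 - toxic_fraction))
      n_toxic = min(n_toxic, len(toxic_docs))
      mix = clean_docs + rng.sample(toxic_docs, n_toxic)
      rng.shuffle(mix)
      return mix

  def suppress_toxic(outputs, toxicity_score, threshold=0.5):
      """Post-hoc removal step: drop generations the scorer flags as toxic.
      `toxicity_score` is a placeholder for whatever detox tool is used."""
      return [o for o in outputs if toxicity_score(o) < threshold]
  ```

  The hypothesis in the comment maps onto this sketch as: the toxic slice in `build_training_mix` makes toxicity more cleanly separable in the model, so the `toxicity_score` step catches more of it.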