Weekly Web Harvest for 2024-01-28

  • [2012.15761] Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection
    We present a human-and-model-in-the-loop process for dynamically generating datasets and training better-performing and more robust hate detection models. We provide a new dataset of ~40,000 entries, generated and labelled by trained annotators over four rounds of dynamic data creation. It includes ~15,000 challenging perturbations, and each hateful entry has fine-grained labels for the type and target of hate. Hateful entries make up 54% of the dataset, which is substantially higher than in comparable datasets. We show that model performance is substantially improved using this approach. Models trained on later rounds of data collection perform better on test sets and are harder for annotators to trick. They also perform better on HateCheck, a suite of functional tests for online hate detection. We provide the code, dataset and annotation guidelines for other researchers to use. Accepted at ACL 2021.
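
    The abstract describes the loop only at a high level; a toy, runnable sketch of the round structure might look like the code below. The keyword "model", the sample entries, and the helper names are all illustrative stand-ins, not the paper's actual models or data.

    ```python
    # Toy version of the human-and-model-in-the-loop rounds described above:
    # annotators submit entries meant to fool the current model, annotated
    # entries join the training set, and the model is retrained per round.
    from dataclasses import dataclass

    @dataclass
    class Entry:
        text: str
        gold_label: str  # "hateful" or "not hateful"

    def train_model(dataset):
        # Stand-in "model": flag any text containing a word seen in hateful entries.
        hateful_words = {
            w for e in dataset if e.gold_label == "hateful"
            for w in e.text.lower().split()
        }
        return lambda text: (
            "hateful" if set(text.lower().split()) & hateful_words else "not hateful"
        )

    def run_rounds(seed, rounds_of_entries):
        dataset, model = list(seed), train_model(seed)
        for round_entries in rounds_of_entries:
            fooled = [e for e in round_entries if model(e.text) != e.gold_label]
            dataset.extend(round_entries)  # all annotated entries join the dataset
            model = train_model(dataset)   # retrain for the next round
            print(f"model-fooling rate this round: {len(fooled) / len(round_entries):.0%}")
        return model, dataset

    seed = [
        Entry("you people are vermin", "hateful"),
        Entry("what a lovely day", "not hateful"),
    ]
    round_1 = [
        Entry("those folks are vermin", "hateful"),              # caught by the seed model
        Entry("tips for garden vermin control", "not hateful"),  # fools it: a perturbation
    ]
    model, dataset = run_rounds(seed, [round_1])
    ```
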
  • Evaluating Language Model Bias with 🤗 Evaluate
    The workflow has two main steps:

    Prompting the language model with a predefined set of prompts (hosted on 🤗 Datasets)
    Evaluating the generations using a metric or measurement (using 🤗 Evaluate)
    Let’s work through bias evaluation in 3 prompt-based tasks focused on harmful language: Toxicity, Polarity, and Hurtfulness. The work we introduce here serves to demonstrate how to utilize Hugging Face libraries for bias analyses, and does not depend on the specific prompt-based dataset used. Critically, remember that recently introduced datasets for evaluating biases are initial steps that do not capture the vast range of biases that models may produce (see the Discussion section below for more details).
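
    Taking the Toxicity task as the example, a condensed sketch of the two steps could look like this; gpt2 as the model under test and the 50-prompt sample are assumptions made for brevity, not the post's exact setup.

    ```python
    # Condensed sketch of the two-step workflow for the Toxicity task:
    # (1) prompt a model with real-toxicity-prompts from 🤗 Datasets,
    # (2) score the continuations with the toxicity measurement in 🤗 Evaluate.
    import evaluate
    from datasets import load_dataset
    from transformers import pipeline

    prompts = load_dataset("allenai/real-toxicity-prompts", split="train")
    sample = prompts.shuffle(seed=42).select(range(50))  # small sample for speed

    generator = pipeline("text-generation", model="gpt2")
    continuations = []
    for record in sample:
        prompt_text = record["prompt"]["text"]
        full = generator(prompt_text, max_new_tokens=20, do_sample=False)[0]["generated_text"]
        continuations.append(full[len(prompt_text):])  # score only what the model added

    toxicity = evaluate.load("toxicity", module_type="measurement")
    scores = toxicity.compute(predictions=continuations)["toxicity"]
    print(f"mean toxicity over {len(scores)} continuations: {sum(scores) / len(scores):.3f}")
    ```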

  • Language Models are Unsupervised Multitask Learners (PDF)
    Instead, we created a new web scrape which emphasizes document quality. To do this we only scraped web pages which have been curated/filtered by humans. Manually filtering a full web scrape would be exceptionally expensive so as a starting point, we scraped all outbound links from Reddit, a social media platform, which received at least 3 karma. This can be thought of as a heuristic indicator for whether other users found the link interesting, educational, or just funny.
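
    WebText itself was built from a full dump of Reddit outbound links; purely to illustrate the karma heuristic at small scale, a PRAW-based sketch (credentials are placeholders) might filter links like this:

    ```python
    # Small-scale illustration of the WebText link-filtering heuristic quoted
    # above: keep only outbound (non-self) links whose submissions earned at
    # least 3 karma. The paper worked from a full Reddit dump, not the live API.
    import praw

    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",          # placeholder credentials
        client_secret="YOUR_CLIENT_SECRET",
        user_agent="webtext-style-scrape-demo",
    )

    MIN_KARMA = 3  # the paper's proxy for "other users found this link worthwhile"

    def curated_links(subreddit="all", limit=1000):
        """Yield outbound URLs whose submissions received at least MIN_KARMA."""
        for submission in reddit.subreddit(subreddit).top(limit=limit):
            if not submission.is_self and submission.score >= MIN_KARMA:
                yield submission.url

    for url in curated_links(limit=100):
        print(url)
    ```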
