Toward a disease-sniffing device that rivals a dog's nose
Numerous studies have shown that trained dogs can detect many kinds of disease, including lung, breast, ovarian, bladder, and prostate cancers, and possibly Covid-19, simply through smell.
The Successor Representation, Gamma-Models, and Infinite-Horizon Prediction
Standard single-step models have a horizon of one. This post describes a method for training predictive dynamics models in continuous state spaces with an infinite, probabilistic horizon.
Reinforcement learning algorithms are frequently categorized by whether they predict future states at any point in their decision-making process. Those that do are called model-based, and those that do not are dubbed model-free. This classification is so common that we mostly take it for granted these days; I am guilty of using it myself. However, this distinction is not as clear-cut as it may initially seem.
In this post, I will talk about an alternative view that emphasizes the mechanism of prediction instead of the content of prediction. This shift in focus brings into relief a space between model-based and model-free methods that contains exciting directions for reinforcement learning. The first half of this post describes some of the classic tools in this space, including generalized value functions and the successor representation. The latter half is based on our recent paper about infinite-horizon predictive models, for which code is available here.
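As a concrete illustration of the successor representation (a standard textbook construction, not code from the post itself): in a tabular MDP with a fixed policy, the successor representation M is the discounted expected future state occupancy, M = Σ_t γ^t P^t = (I − γP)^(-1), where P is the policy's state-transition matrix. A minimal NumPy sketch with an invented 4-state ring MDP:

```python
import numpy as np

# Transition matrix P under a fixed policy for a toy 4-state ring MDP:
# from each state, move clockwise with prob 0.9, stay with prob 0.1.
P = np.array([
    [0.1, 0.9, 0.0, 0.0],
    [0.0, 0.1, 0.9, 0.0],
    [0.0, 0.0, 0.1, 0.9],
    [0.9, 0.0, 0.0, 0.1],
])
gamma = 0.9

# Successor representation: discounted expected future occupancy,
# M = sum_t gamma^t P^t = (I - gamma * P)^(-1).
M = np.linalg.inv(np.eye(4) - gamma * P)

# Any value function is then a linear readout of the reward vector:
r = np.array([0.0, 0.0, 0.0, 1.0])  # reward only in state 3
v = M @ r
print(v)
```

The appeal is that M is learned independently of reward: changing r re-specifies the task, and the value function follows from a single matrix-vector product.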
Does GPT-2 know your phone number? Most likely not.
Yet, OpenAI’s GPT-2 language model does know how to reach a certain Peter W— (name redacted for privacy). When prompted with a short snippet of Internet text, the model accurately generates Peter’s contact information, including his work address, email, phone, and fax:
In our recent paper, we evaluate how large language models memorize and regurgitate such rare snippets of their training data. We focus on GPT-2 and find that at least 0.1% of its text generations (a very conservative estimate) contain long verbatim strings that are “copy-pasted” from a document in its training set.
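To make the notion of "copy-pasted" concrete, here is a hypothetical sketch (not the paper's actual evaluation pipeline, and with an arbitrary threshold) that flags a generation when it shares a long verbatim token sequence with any training document, using a simple k-gram set:

```python
def kgrams(tokens, k):
    """All contiguous k-token windows of a token list."""
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def is_memorized(generation, training_docs, k=8):
    """Flag `generation` if any k consecutive tokens also appear
    verbatim in some training document. k=8 is an illustrative
    threshold chosen here, not the paper's."""
    index = set()
    for doc in training_docs:
        index |= kgrams(doc.split(), k)
    return any(g in index for g in kgrams(generation.split(), k))

docs = ["the quick brown fox jumps over the lazy dog near the river bank"]
copied = "he said the quick brown fox jumps over the lazy dog again"
novel = "language models sometimes generate entirely new sentences"
print(is_memorized(copied, docs), is_memorized(novel, docs))
```

A real evaluation would operate on subword tokens over a far larger corpus and use scalable data structures (suffix arrays, Bloom filters), but the flagging criterion is the same in spirit: a sufficiently long exact match against the training set.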
Such memorization would be an obvious issue for language models that are trained on private data, e.g., on users’ emails, as the model might inadvertently output a user’s sensitive conversations. Yet, even for models that are trained on public data from the Web (e.g., GPT-2, GPT-3, T5, RoBERTa, TuringNLG), memorization of training data raises multiple challenging regulatory questions, ranging from misuse of personally identifiable information to copyright infringement.
Deep reinforcement learning has made significant progress in the last few years, with success stories in robotic control, game playing, and science problems. While RL methods present a general paradigm where an agent learns from its own interaction with an environment, this requirement for "active" data collection is also a major hindrance in the application of RL methods to real-world problems, since active data collection is often expensive and potentially unsafe. An alternative "data-driven" paradigm of RL, referred to as offline RL (or batch RL), has recently regained popularity as a viable path towards effective real-world RL. As shown in the figure below, offline RL requires learning skills solely from previously collected datasets, without any active environment interaction. It provides a way to utilize previously collected datasets from a variety of sources, including human demonstrations, prior experiments, domain-specific solutions, and even data from different but related problems, to build complex decision-making engines.
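As one concrete (and deliberately simplified) instance of learning from a static dataset, tabular fitted Q-iteration repeatedly regresses Q-values toward Bellman targets computed only from logged transitions; the environment is never queried. A toy sketch, with the MDP and dataset invented for illustration:

```python
import numpy as np

def fitted_q_iteration(dataset, n_states, n_actions, gamma=0.95, iters=200):
    """Tabular fitted Q-iteration over a fixed list of logged
    (s, a, r, s') transitions -- no environment interaction."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        target = Q.copy()
        for s, a, r, s_next in dataset:
            # Bellman backup using only logged data.
            target[s, a] = r + gamma * Q[s_next].max()
        Q = target
    return Q

# Logged transitions from a 2-state toy MDP: action 1 in state 0
# leads to state 1, where action 0 earns reward 1 and stays.
dataset = [
    (0, 0, 0.0, 0),
    (0, 1, 0.0, 1),
    (1, 0, 1.0, 1),
    (1, 1, 0.0, 0),
]
Q = fitted_q_iteration(dataset, n_states=2, n_actions=2)
policy = Q.argmax(axis=1)
print(policy)  # greedy action per state
```

Practical offline RL methods add function approximation and, crucially, mechanisms to avoid overestimating the value of actions the dataset never covers; this sketch omits both to keep the data-driven loop visible.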
The Top NLP Mistake Made by Data Scientists
When the conclusions seem "off," is it bad data science or faulty NLP? By Dan Maxwell. In the age of big data, high-quality results from natural…
Rosette 1.17.0 Release: Hebrew Name Translation, French Semantic Similarity, Robust Address Matching
Recent Rosette® Cloud and Enterprise releases (1.17.0, 1.16.1) bring expanded language coverage to name translation and semantic similarity, and ease of use to the address matching capability within Rosette Name…
Faster Annotation with Rosette Adaptation Studio
What are the top three barriers to better machine learning models? Annotating data, annotating data, and annotating data. Okay, so it's not that simple, but producing quality training data to…
A Day in the Life of… Building a New Entity Extraction Model
What does it take to build a real production-ready model for entity extraction in one language? Here's a peek through the eyes of our linguistic data engineer. Swedish model building…