Locally Typical Sampling

Today’s probabilistic language generators fall short when it comes to producing coherent and fluent text despite the fact that the underlying models perform well under standard metrics (e.g., perplexity). This discrepancy has puzzled the language generation community for the last few years. In this work, we posit that the abstraction of natural language generation as a discrete stochastic process—which allows for an information-theoretic analysis—can provide new insights into the behavior of probabilistic language generators, for example, why high-probability texts can be dull or repetitive. Humans use language as a means of communicating information, aiming to do so in a simultaneously efficient and error-minimizing manner; in fact, psycholinguistics research suggests humans choose each word in a string with this subconscious goal in mind. We formally define the set of strings that meet this criterion: Those for which each word has an information content close to the expected information content, namely, the conditional entropy of our model. We then propose a simple and efficient procedure for enforcing this criterion when generating from probabilistic models, which we call locally typical sampling. Automatic and human evaluations show that, in comparison to nucleus and top-k sampling, locally typical sampling offers competitive performance (in both abstractive summarization and story generation) in terms of quality while consistently reducing degenerate repetitions.


Introduction
Modern probabilistic models have repeatedly demonstrated their prowess at modeling natural language, placing high probability on held-out corpora from many different domains (Brown et al., 2020; Hoffmann et al., 2022; Chowdhery et al., 2022). Yet when used as text generators, their performance is far from perfect. One of the largest determinants of the generated text's quality is the choice of decoding strategy, i.e., the decision rule used to extract strings from a model. Perhaps surprisingly, for many language generation tasks, decoding strategies that aim to find the highest-probability strings produce text that is undesirable (Holtzman et al., 2020; See et al., 2019; Eikema and Aziz, 2020; Zhang et al., 2021; DeLucia et al., 2021). For instance, Stahlberg and Byrne (2019) report that in their neural machine translation experiments, the highest-probability string is usually the empty string. On the other hand, stochastic strategies, which take random samples from the model, often lead to text with better qualitative properties (Fan et al., 2018; Holtzman et al., 2020; Basu et al., 2021). However, stochastic strategies still have a host of other problems, while not entirely dispensing with those seen in maximization-based approaches.
At first glance, it is unintuitive that high-probability strings are often neither desirable nor human-like. Due to this pathology, a number of works have concluded that there must be faults in the training objective or architecture of the probabilistic models behind language generators (Welleck et al., 2020; Guan et al., 2020; Li et al., 2020, inter alia). Yet this conclusion is at odds with these models' performance in terms of other metrics. The fact that modern models can place high probability on held-out text suggests that they provide good estimates (in at least some aspects) of the probability distribution underlying human language. We posit that looking at language generation through an information-theoretic lens may shed light on this paradox.
Communication via natural language can intuitively be cast in information-theoretic terms. Indeed, there is a long history of studying language through the lens of information theory (Shannon, 1948, 1951; Hale, 2001; Piantadosi et al., 2011; Pimentel et al., 2020, inter alia). In this paradigm, linguistic strings are messages used to convey information, and their information content can be quantified as a function of their probability of being uttered, which is often driven by context. Assuming that humans use language in order to transmit information in an efficient yet robust manner (Zaslavsky et al., 2018; Gibson et al., 2019), the subset of strings typically used by humans should encode information at some (perhaps near-optimal) rate. In fact, prior works studying the uniform information density hypothesis (Levy and Jaeger, 2007; Mahowald et al., 2013) have empirically observed this property in humans' use of natural language.
These insights lead us to re-think what it means to be a probabilistic language generator. First, we contend that language generators, in some cases, can be thought of as discrete stochastic processes. This, in turn, allows us to cleanly define typicality (and the typical set) for these processes. We argue, however, that due to discrepancies between the model behind these generators and the true distribution over natural language strings, directly sampling from the typical set is not a good idea. Indeed, for language generators that do not use an end-of-string (EOS) state, this is exactly what is done by ancestral sampling, a decoding strategy not known for providing high-quality text. Inspired by research on human sentence processing, we then define the more restrictive notion of local typicality, and argue that if we want text generated from a model to be "human-like," we should perhaps enforce this information-theoretic criterion in generations ourselves. To this end, we develop a new algorithm, which we call locally typical sampling. Concretely, we hypothesize that for text to be perceived as natural, each word should have an information content close to its expected information content given prior context. (Information rate may be defined with respect to time, as is the case with spoken language, or with respect to a specific linguistic unit, such as a word, as is the case with text.) When sampling from probabilistic language generators, we should limit our options to strings that adhere to this property. In experiments on abstractive summarization and story generation, we observe that, compared to nucleus and top-k sampling: (i) locally typical sampling reduces the number of degenerate repetitions, giving a REP value (Welleck et al., 2020) on par with human text, and (ii) text generated using locally typical sampling is generally closer in quality to that of human text.
Two Views of Language Modeling

In this work, we discuss language models in an information-theoretic light. Our first step towards this goal is to re-frame their presentation. Concretely, we put forth that there are actually two lenses through which we can view language modeling productively. Under the traditional lens, we can think of a language model as a distribution over full strings: a language model constitutes the distribution of a single string-valued random variable. Under an alternative lens, we can think of a language model as a discrete stochastic process: a collection of indexed random variables. We compare and contrast these views formally, and then show how to use the language process view to derive a new sampling algorithm in §5.

A Single String-Valued Random Variable
We codify the traditional view of language modeling in the following definition. Let V be an alphabet, i.e., a non-empty, finite set.
Definition 2.1 (Language Model). A language model p is a probability distribution over all strings y ∈ V*. Under this view, we can think of a language model as describing a single V*-valued random variable.
Under Definition 2.1, it is common to express a language model in the following factorized form:

    p(y) = ∏_{t=1}^{|y|} p(y_t | y_<t),    (1)

where we define y_<t ≜ y_0, …, y_{t−1} with the padding y_0 = BOS, a distinguished beginning-of-sequence symbol. Through the chain rule of probability, we can always factorize a model as in Eq. (1). The process which produces such a factorization is called local normalization. However, with local normalization, we encounter a subtlety: one has to define each conditional probability p(y_t | y_<t) not over V, but rather over the augmented set V̄ ≜ V ∪ {EOS}, i.e., where we have added the distinguished end-of-string symbol EOS. Why? Because without EOS, it would be impossible to normalize the language model, i.e., have it sum to 1.

(Notes: Here we use the term language model to refer to any (valid) probability distribution over natural language strings; we subsequently specify the necessary conditions for validity, and this distribution may also be conditioned on an input. The Kleene closure V* of a set V is the set of all finite-length strings over V, i.e., V* = ∪_{n=0}^∞ V^n. An implementation of typical sampling can be found in Hugging Face's Transformers library (Wolf et al., 2020).)
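To make the factorized view concrete, here is a minimal sketch of Eq. (1) using a hypothetical, history-independent conditional distribution over V̄ = {a, b, EOS} (the probabilities are invented for illustration):

```python
import itertools

# A toy locally normalized model over V̄ = V ∪ {EOS}, with V = {"a", "b"}.
# The conditionals are hypothetical and history-independent, purely for illustration.
COND = {"a": 0.5, "b": 0.3, "EOS": 0.2}

def string_prob(y):
    """p(y) per Eq. (1): the product of conditionals, ending with EOS."""
    p = 1.0
    for tok in y:
        p *= COND[tok]
    return p * COND["EOS"]

# Because each conditional distribution sums to 1 over V̄ (and EOS has constant
# positive probability), the mass over finite strings approaches 1 (tightness):
mass = sum(string_prob(y) for L in range(15) for y in itertools.product("ab", repeat=L))
assert abs(mass - 1.0) < 0.05
```

With history-dependent conditionals, `COND` would become a function of y_<t; the chain-rule product itself is unchanged.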

A Discrete Stochastic Process
Interestingly, the factorization in Eq. (1) suggests that we might view language models not as a single string-valued random variable, but rather as a collection of random variables {Y_t}_{t=1}^∞, i.e., as a discrete stochastic process. Under this view, we arrive at the following definition of what we term a language process, to distinguish it from the definition of a language model given above.
Definition 2.2 (Language Process). A language process is a discrete stochastic process Y = {Y_t}_{t=1}^∞ where each Y_t is V̄-valued. The process is described by a distribution p, and we denote its conditional distribution as p(Y_t = y_t | Y_<t = y_<t) for t > 0. In slight abuse of notation, but out of convention, we take Y_t for t ≤ 0 to be BOS, i.e., conditioning p on just BOS signifies the initial distribution of the process.
Definition 2.2 is very generic. In words, it just says that a language process is any discrete process where we sample a new word given the previously sampled words. The first question that naturally comes to mind is when the definitions of a language model and a language process coincide. As it turns out, there is a simple answer.
Definition 2.3 (Tightness). Let Y = {Y_t}_{t=1}^∞ be a language process over alphabet V̄ with distribution p. The process Y is tight (Booth and Thompson, 1973) if and only if

    ∑_{y ∈ V*} p(y) = 1.

In words, tightness says that a language process must not leak probability mass to infinite strings. Because a language model must be a (valid) probability distribution, it must also be tight.
Proposition 2.4. Let Y = {Y_t}_{t=1}^∞ be a language process over alphabet V̄ with distribution p, and let p_t denote the probability of sampling EOS at exactly step t given that EOS has not yet been sampled. Then Y is tight if and only if p_t = 1 for some 0 < t < ∞ or ∑_{t=1}^∞ p_t → ∞.

Proof. Note that p_t is the probability of sampling EOS at exactly step t given that the history of the string is of length (t − 1).
• Case 1: Suppose p_t = 1 for some t < ∞. Then Y is clearly tight, as no probability mass is leaked to strings beyond length t, where t < ∞.
• Case 2: Now suppose p_t < 1 for all t. In this case, we have that the probability of all infinite-length strings is given by ∏_{t=1}^∞ (1 − p_t). However, by a standard result (see, e.g., Knopp, 1954, Ch. 12), we have that ∏_{t=1}^∞ (1 − p_t) = 0 if and only if ∑_{t=1}^∞ p_t → ∞. Both cases together complete the proof. □

We can now see that language processes are strictly more general than language models: Eq. (1) shows us that any language model can be written as a language process, but Proposition 2.4 shows the converse is not necessarily true. Indeed, Proposition 2.4 allows us to easily construct a simple language process (example given below) that cannot be converted to a language model, which motivates the formalism.
Example 2.5. Consider a language process Y where, at every step, p(EOS | y_<t) = 1/2^{t+1}. Note that we keep the convention that Y_t = BOS for t ≤ 0, and thus p_0 = 0. We have ∑_{t=1}^∞ p_t = 1/2 < ∞, so, by Proposition 2.4, Y is not tight and therefore not a language model. Computing the infinite product ∏_{t=1}^∞ (1 − p_t) shows that Y leaks ≈ .58 of its probability mass to infinite strings.
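The claimed leakage can be checked numerically; a short sketch (truncating the infinite product at 200 factors, which is ample for this geometric sequence):

```python
import math

# p_t = p(EOS at step t) = 1 / 2**(t+1), as in the example above.
p_eos = [1 / 2 ** (t + 1) for t in range(1, 200)]

# The sum of p_t converges to 1/2 < ∞, so by Proposition 2.4 Y is not tight.
assert abs(sum(p_eos) - 0.5) < 1e-12

# Mass leaked to infinite strings: the infinite product ∏ (1 - p_t),
# truncated here at 200 factors (later factors are ≈ 1 and change nothing visible).
leaked = math.prod(1 - p for p in p_eos)
assert round(leaked, 2) == 0.58
```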
Life after EOS? Proposition 2.4 further hints at the more intuitive difference between language models and language processes: what happens after EOS? In the traditional definition of a language model (Definition 2.1), life ends at EOS. That is, any string with symbols after EOS would not be a valid sample from a language model because such strings are not in the model's support. On the other hand, a language process offers a more chipper view: once we hit EOS, we can just generate another symbol. A language process is better thought of as an infinite babbler than as a distribution over any sort of strings. At some level, this is indeed the implicit view adopted by some when language modeling, as many language models do not have EOS in the traditional sense. For the rest of this paper, we will also take this view, and consider language processes for which we can continue generating after sampling an EOS symbol.

Other Useful Properties
Next, we discuss some other properties of language processes that are important for understanding the theoretical results presented in §3.
Definition 2.6 (Markov Property). A language process Y = {Y_t}_{t=1}^∞ is Markov if there exists a k such that, for all t, p(y_t | y_<t) = p(y_t | y_{t−k}, …, y_{t−1}), where k ≥ 0 is the Markov order. We again take Y_t for t ≤ 0 to be BOS, indicating our initial distribution.
Many language processes are explicitly defined to be Markov, e.g., ones based on n-gram language models. However, many language processes based on recurrent neural networks are, in principle, non-Markov. Yet, despite being capable of learning non-Markov distributions, researchers have found that recurrent neural language models tend to learn Markov distributions. For instance, Khandelwal et al. (2018) show that a recurrent neural language model's memory is empirically bounded at roughly 200 words. Thus, we can still generally assume this property when working with language processes parameterized by such models.

Definition 2.7 (Stationarity). A language process Y = {Y_t}_{t=1}^∞ is stationary if its joint distributions are invariant under shifts in time, i.e., p(Y_1 = y_1, …, Y_t = y_t) = p(Y_{1+n} = y_1, …, Y_{t+n} = y_t) for n ≥ 0. We again take Y_t for t ≤ 0 to be BOS, indicating our initial distribution.
While not theoretically Markovian, human language is generally considered stationary, i.e., the probability distribution over the next word should not depend on absolute position, but rather on the history.

Definition 2.8 (Ergodicity). A language process Y = {Y_t}_{t=1}^∞ is ergodic if its statistical properties (e.g., ensemble averages) can be deduced from a single, sufficiently long, random sample of the process.
The above definition is intentionally informal because ergodicity is a complex topic that would take time to treat thoroughly; see McMillan (1953) and Breiman (1957) for a more rigorous discussion. One of the important implications of ergodicity for language processes, however, is rather straightforward: if our language process is over alphabet V with distribution p and is ergodic, then for every symbol y ∈ V and for every history y_<t ∈ V*, there must exist an extension y_<t′ = y_<t, y_t, …, y_{t′−1} such that p(Y_t′ = y | Y_<t′ = y_<t′) > 0. In plain terms, this just says that we can always reach every word in our alphabet via some path, no matter where we currently are. In our context, ergodicity also relates to the problem with EOS. If we convert a language model into a language process (as discussed in §2.1) and make the EOS state absorbing, this language process must be non-ergodic, as once it encounters EOS, no other state is reachable.

Estimating a Language Model from Data
Language models are typically estimated from language data. (We thus do not consider the Markovian property of most models a limitation on their ability to model natural language in practice.) The standard method for estimating the parameters of p is via maximization of the log-likelihood of a training corpus S:

    θ* = argmax_θ ∑_{y ∈ S} log p(y; θ),
where θ are the model p's parameters. The above is equivalent to minimizing the cross-entropy loss between p and the empirical distribution. Note that we assume all y ∈ S end in the special EOS token.
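As a minimal illustration (with a hypothetical unigram model and toy corpus), maximizing log-likelihood is the same as minimizing the corpus-level negative log-likelihood, and the MLE cannot be improved by perturbing it:

```python
import math
from collections import Counter

# A tiny toy corpus; every string ends in the special EOS token.
corpus = [["a", "b", "EOS"], ["a", "EOS"], ["b", "a", "EOS"]]

# MLE for a (hypothetical) unigram model: relative frequencies.
counts = Counter(tok for y in corpus for tok in y)
total = sum(counts.values())
p_mle = {tok: c / total for tok, c in counts.items()}

def neg_log_lik(p):
    """Negative log-likelihood of the corpus, i.e., the cross-entropy loss (up to scale)."""
    return -sum(math.log(p[tok]) for y in corpus for tok in y)

# Shifting mass away from the MLE (while keeping a valid distribution)
# can only increase the negative log-likelihood:
p_alt = dict(p_mle)
p_alt["a"] -= 0.01
p_alt["b"] += 0.01
assert neg_log_lik(p_mle) < neg_log_lik(p_alt)
```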
Information-Theoretic Properties of Language Processes

The view of language modeling as a discrete stochastic process naturally lends itself to an analysis through the lens of information theory. Indeed, much of information theory is concerned with the study of discrete stochastic processes (see, e.g., Cover and Thomas, 2012, Ch. 4). In this section, we review standard information-theoretic definitions in §3.1 and build on these to introduce our own notion of local typicality in §3.2.

Typicality
An important definition in the study of stochastic processes is entropy rate, which generalizes the notion of entropy from a random variable to a stochastic process.
Definition 3.1 (Entropy Rate). Let Y = {Y_t}_{t=1}^∞ be a stationary, ergodic discrete stochastic process over alphabet V̄ with distribution p. The entropy rate of Y is defined as

    H(Y) ≜ lim_{T→∞} (1/T) H(Y_1, …, Y_T).

The entropy rate is useful in that it tells us, in the limit, how spread out, i.e., entropic, the distribution is. Another interpretation is that it quantifies the complexity of Y. In the case of an i.i.d. process, the entropy rate and the entropy coincide, making the entropy rate a true generalization of the entropy. Using the entropy rate, we can define the notion of the typical set.
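A small simulation illustrates both claims for an i.i.d. process (the two-symbol distribution is invented for illustration; the seed is arbitrary):

```python
import math
import random

random.seed(0)
p = {"H": 0.6, "T": 0.4}  # an i.i.d. process over a two-symbol alphabet

# Per-step entropy; for an i.i.d. process this is also the entropy rate H(Y).
H = -sum(q * math.log2(q) for q in p.values())

# By ergodicity, the average surprisal of one long sample converges to H(Y):
sample = random.choices(list(p), weights=list(p.values()), k=100_000)
rate_est = sum(-math.log2(p[y]) for y in sample) / len(sample)
assert abs(rate_est - H) < 0.01
```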
Definition 3.2 (Typical Set). Let Y = {Y_t}_{t=1}^∞ be a stationary, ergodic discrete stochastic process where each Y_t follows distribution p and takes on values in a finite support Y. For 1 ≤ T < ∞, the (T, ε)-typical set of Y is the set of all sequences of length exactly T with average per-symbol negative log-probability close to H(Y), i.e.,

    T_ε^(T) ≜ { y = y_1 ⋯ y_T : | −(1/T) log p(y_1, …, y_T) − H(Y) | < ε }.
In informal terms, the typical set is the set of all samples that we would expect when sampling from p. To give the reader intuition about typicality, we now turn to a classical example.

Example 3.3. Consider an i.i.d. stochastic process Y = {Y_t}_{t=1}^∞ where Y_t is defined as the outcome of flipping a biased coin: we have p(HEADS) = .6 and p(TAILS) = .4. If we flip 100 coins, the most likely outcome is the sequence of 100 heads. However, this would be a surprising outcome to most people, who would intuitively expect the sequence to consist of roughly 60% heads and 40% tails. Indeed, even for relatively large ε, the sequence of 100 heads is not in the (T, ε)-typical set: its average per-symbol probability is .6, whereas 2^{−H(Y_t)} ≈ 0.51.
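The arithmetic of this example can be checked directly:

```python
import math

p_heads, p_tails = 0.6, 0.4
H = -(p_heads * math.log2(p_heads) + p_tails * math.log2(p_tails))  # ≈ 0.971 bits

# Average per-symbol surprisal of the all-heads sequence vs. a 60/40 sequence:
surprisal_all_heads = -math.log2(p_heads)                         # ≈ 0.737
surprisal_60_40 = -(60 * math.log2(p_heads) + 40 * math.log2(p_tails)) / 100

eps = 0.1
assert abs(surprisal_60_40 - H) < eps       # 60 heads / 40 tails: typical
assert abs(surprisal_all_heads - H) >= eps  # 100 heads: not typical
assert abs(2 ** -H - 0.51) < 0.005          # per-symbol probability 2^(-H) ≈ 0.51
```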
The above example demonstrates that the typical set often does not contain the most likely sequence.Additionally, the typical set is interesting because, as T → ∞, it contains nearly all the probability mass; we formalize this property in a proposition.
Proposition 3.4 (Asymptotic Equipartition Property). Let Y = {Y_t}_{t=1}^∞ be a stationary, ergodic discrete stochastic process where each Y_t follows distribution p and takes on values in a finite support Y. For every ε > 0 and for sufficiently large T, the following conditions hold:

    i) p(T_ε^(T)) > 1 − ε;
    ii) (1 − ε) 2^{T (H(Y) − ε)} ≤ |T_ε^(T)| ≤ 2^{T (H(Y) + ε)}.

In words, as we take T → ∞, the probability mass covered by the typical set is nearly 1 and the number of elements in it is nearly 2^{T·H(Y)}.
What's wrong with the typical set? Let Y be a stationary, ergodic language process. By the conditions of Definition 3.2, we know that Y has a typical set. We have motivated the typical set, intuitively, as the subset of strings that are usual or typical among all strings. Under this intuition, it makes sense that, when using Y as a language generator, this is the set from which we would like to select a string. A relatively straightforward corollary of Proposition 3.4 is that ancestral sampling should pull from just this set. To see this, we can turn to i) in Proposition 3.4: since ancestral sampling provides an i.i.d. sample from Y, the probability of getting an element not in T_ε^(T) as T → ∞ is at most ε, i.e., practically never. However, there is the confound that our models are not perfect representations of the true distribution behind the "human" natural language process. Perhaps for this reason (and the reasons discussed in §4), ancestral sampling is not known to result in samples that humans judge to be high quality in the task of language generation; rather, it often leads to text that humans perceive as incoherent (Holtzman et al., 2020). Furthermore, the typical set's definition relies on Y being a stationary and ergodic language process. As we saw in §2.2, however, a language model that we convert into a language process will be non-ergodic by definition (at least if we keep EOS as an absorbing state). Thus, while the typical set is a natural starting point, it does not actually get us to our end goal of defining a set of strings that humans would find typical. To remedy this problem, we introduce the new concept of local typicality.

Local Typicality
A core contribution of this work is to define a more restrictive notion of typicality, termed here local typicality, which we subsequently motivate as useful in the context of describing the set of strings humans typically produce.
Definition 3.5 (Locally Typical Set). Let Y = {Y_t}_{t=1}^∞ be a discrete stochastic process over finite support Y. The (T, ε)-locally typical set of Y is the set of all sequences of length exactly T such that

    L_ε^(T) ≜ { y = y_1 ⋯ y_T : for all 1 ≤ t ≤ T, | −log p(y_t | y_<t) − H(Y_t | Y_<t = y_<t) | < ε }.

In comparison to the typical set, the locally typical set further restricts the set of samples to those for which each individual symbol y_t has probability near the local conditional entropy, i.e., the entropy of the distribution p(· | y_<t). In general, there is no strong theoretical relationship between the typical set and the locally typical set. However, in the case of an i.i.d. stochastic process, we can prove that the latter constitutes a subset of the former.
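A minimal sketch of the membership test implied by this definition, using a hypothetical two-symbol Markov model (the conditionals are invented purely to illustrate):

```python
import math

# A hypothetical first-order conditional model p(y_t | y_{t-1}).
COND = {"x": {"x": 0.7, "y": 0.3},
        "y": {"x": 0.5, "y": 0.5}}

def local_entropy(prev):
    return -sum(q * math.log2(q) for q in COND[prev].values())

def in_locally_typical_set(seq, eps, first):
    """Check |surprisal - local conditional entropy| < eps at every step."""
    prev = first
    for y in seq:
        if abs(-math.log2(COND[prev][y]) - local_entropy(prev)) >= eps:
            return False
        prev = y
    return True

# After "y" the conditional is uniform (every surprisal equals the entropy, 1 bit),
# so a run of "y"s is locally typical even for tiny eps:
assert in_locally_typical_set("yyy", eps=0.01, first="y")
# After "x", the surprisal of "x" (≈ 0.515) is far from H ≈ 0.881, so it is excluded:
assert not in_locally_typical_set("x", eps=0.1, first="x")
```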
A natural question to ask at this point is why the definition of local typicality is useful in the context of a language process. Our argument, presented in the following section, is cognitive in nature.

Local Typicality in Natural Language
To motivate our definition of local typicality in the context of natural language, we must first look at language through an information-theoretic lens. We will consider two distributions in this section: p, the distribution from which a speaker of the language is assumed to generate strings, and p̂, our language process that approximates p, albeit perhaps not perfectly. In this setting, we view a natural language string y as a means of communicating some information, where each word y_t is a symbol via which we construct our message. The information content of y is then defined as its negative log-probability under a specified distribution: −log p(y). Following the chain rule of probability, this quantity can be decomposed over words, i.e., the information content of a word is its negative log-probability given prior context: −log p(y_t | y_<t).

Properties of Human Communication
Given the above definitions, we can now ask a question at the heart of this work: what are the information-theoretic characteristics of natural language typically produced by humans? In other words, what do strings sampled from p look like from the perspective of p̂, our trained language process? Research in psycholinguistics suggests that a core component of what makes text human-like is its per-unit information content.
To motivate this conclusion, we first consider a language user's objective. When using natural language, humans aim to transmit information efficiently while also minimizing the risk of miscommunication (Zipf, 1949). In order to achieve this goal, speakers avoid producing words with either very high or very low information content (Fenk and Fenk, 1980; Aylett and Turk, 2004; Levy and Jaeger, 2007; Mahowald et al., 2013, inter alia), a behavior in line with theories of efficient and robust communication. Indeed, cross-linguistic research has shown that languages trade off information content and speech rate, perhaps aiming at a specific (optimal) information rate (Coupé et al., 2019; Pimentel et al., 2021). Further, not using words in a context where they have very high or low information content avoids characteristics that appear to negatively impact traditional grammaticality judgments: an ideal natural language string would not compensate for an unusually low-probability first half (e.g., one containing a syntactic error) with an unusually high-probability second half (e.g., one built from especially frequent words) (Schütze, 2016; Lau et al., 2017).

An Information-Theoretic Formalization
The definition of local typicality presented in §3.2 can be viewed as an embodiment of the characteristics of human language just described. One logical interpretation of these behaviors is that, at every time step, natural-sounding language should have per-symbol information content close to the expected (average) per-symbol information content. We formalize this relationship in the following hypothesis.
Hypothesis 4.1. Samples y = y_0 ⋯ y_T from a human language process with distribution p tend to belong to the process's locally typical set L_ε^(T) for large enough T and some ε > 0. In words, this means that we should expect every word in natural-sounding sentences to have information content close to the expected information content under p, i.e., the conditional entropy given prior context.
We verify this relationship empirically using data from human language processes. In Fig. 1, we show the distribution of the difference between the information content of y_t and the expected information content of Y_t, i.e., −log p̂(y_t | y_<t) − H(Y_t | Y_<t = y_<t), according to the model on human-generated text. The peaked nature of the distributions in Fig. 1 reveals that human language indeed tends to have per-word information content quite close to a specific value; the centering of these distributions around ≈ 0 suggests that this value is H(Y_t | Y_<t = y_<t). Notably, Meister et al. (2022) show the same is not true for text generated by models according to a number of different popular decoding schemes, which instead produce strings with much higher probability, i.e., with lower information content.
In an ideal situation, such a property of natural language would be reflected in p̂, in which case sampling from the typical set should be sufficient to ensure human-like language. However, our models are by no means perfect. The failure to capture the property of human language expounded in Hypothesis 4.1 may come from a number of possible modeling deficiencies, e.g., a poor ability to capture the tails of these distributions. We hypothesize that, when using language models to generate text, enforcing this local-typicality criterion explicitly may serve as a patch for this shortcoming.

Sampling from a Language Process
In this section, we describe how to sample from a language process parameterized by the distribution p, or in more commonly-used terminology, how to decode from p. There are many different algorithms one could employ to sample from p. The most intuitive strategy is ancestral sampling, which works as follows: sample y_t ∼ p(· | y_<t) for each history y_<t successively until some chosen criterion is met, e.g., the EOS symbol is sampled or a maximum length is reached. Note that in the case of the former criterion, this procedure is equivalent to sampling entire strings according to the distribution p. Perhaps the most popular set of techniques for sampling falls under a paradigm we call truncated sampling, where the vocabulary at a time step is truncated to a core subset of words.
For instance, Fan et al. (2018) propose limiting the sampling space to the top-k most likely words in each decoding step, and Holtzman et al. (2020) consider the smallest nucleus (i.e., subset) of words whose cumulative probability mass exceeds a chosen threshold η.
In this paper, we give a general treatment of truncated sampling and then discuss our variant. Given a context-dependent constraint subset C(y_<t) ⊆ V̄ of the vocabulary, we define the truncated distribution as

    π(y_t | y_<t) ≜ p(y_t | y_<t) / Z(y_<t) if y_t ∈ C(y_<t), and 0 otherwise,

where the normalizer is defined as Z(y_<t) ≜ ∑_{y ∈ C(y_<t)} p(y | y_<t).
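A minimal sketch of this truncation-and-renormalization step, with a hypothetical next-word distribution:

```python
# Truncated sampling in general: renormalize p over C(y_<t), zero elsewhere.
def truncate(dist, allowed):
    Z = sum(p for y, p in dist.items() if y in allowed)  # the normalizer Z(y_<t)
    return {y: (p / Z if y in allowed else 0.0) for y, p in dist.items()}

p = {"the": 0.5, "a": 0.3, "cat": 0.15, "EOS": 0.05}  # hypothetical next-word dist.
q = truncate(p, {"the", "a"})
assert abs(q["the"] - 0.625) < 1e-12 and q["cat"] == 0.0
assert abs(sum(q.values()) - 1.0) < 1e-12
```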
Algorithm 5.2 (Nucleus Sampling). In nucleus sampling, we choose a threshold parameter η and define the truncation set C(y_<t) as the solution to the following subset minimization problem:

    minimize_{C(y_<t) ∈ P(V̄)} |C(y_<t)|   s.t.   ∑_{y ∈ C(y_<t)} p(y | y_<t) ≥ η,

where P is the power set operator.
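The minimization is solved greedily by taking words in order of decreasing probability; a sketch with a hypothetical next-word distribution:

```python
def nucleus_set(dist, eta):
    """Smallest subset whose cumulative probability mass is at least eta:
    solved greedily by taking words in order of decreasing probability."""
    chosen, mass = set(), 0.0
    for y in sorted(dist, key=dist.get, reverse=True):
        chosen.add(y)
        mass += dist[y]
        if mass >= eta:
            break
    return chosen

p = {"the": 0.5, "a": 0.3, "cat": 0.15, "EOS": 0.05}  # hypothetical next-word dist.
assert nucleus_set(p, eta=0.5) == {"the"}
assert nucleus_set(p, eta=0.9) == {"the", "a", "cat"}
```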

Shortcomings of Existing Algorithms
To motivate sampling based on the locally typical set, we must first better understand the shortcomings of current decoding strategies. While strings generated using stochastic strategies may have lower probability according to p, they often outperform those decoded using maximization-based strategies in terms of qualitative metrics.
A number of recent works have tried to offer explanations for this phenomenon. Some have attributed it to a diversity-quality trade-off (Zhang et al., 2021; Basu et al., 2021), while others blame shortcomings of model architectures or training strategies (Welleck et al., 2020; Li et al., 2020). Our analysis from §4 inspires an alternative explanation, motivated by information theory and psycholinguistics, for why models that perform so well (in terms of metrics such as perplexity) can still exhibit such undesirable behavior when used to generate text. First, the connection between probability and information content may explain why high-probability text is often dull or generic (Holtzman et al., 2020; Eikema and Aziz, 2020); its low information content likely makes for boring, i.e., uninformative, text. This connection also offers a potential explanation for the rather strange behavior that, when a string has a repetitive loop, language models often assign increasingly higher probability to the repeated substring (Holtzman et al., 2020): the substring conveys less and less information after each occurrence.
A further implication of this framing is the equivalence between decoding strings from a probabilistic language generator and sampling messages from the natural language communication channel. If we wish to sample solely from the subset of messages that a human would typically construct, i.e., that are human-like, then we should begin by narrowing down this subset to those messages that meet at least some of the same criteria as human-generated messages. In this work, we have identified the criterion that such messages tend to be in the locally typical set. This observation motivates a new decoding strategy in which our information-theoretic criterion is explicitly enforced.

Locally Typical Sampling
We now introduce our novel sampling algorithm, which we call locally typical sampling.

Algorithm 5.3 (Locally Typical Sampling). Locally typical sampling is a truncated sampling scheme where the truncation set C(y_<t) is the solution to the following subset optimization problem:

    minimize_{C(y_<t) ∈ P(V̄)} ∑_{y ∈ C(y_<t)} | H(Y_t | Y_<t = y_<t) + log p(y | y_<t) |   s.t.   ∑_{y ∈ C(y_<t)} p(y | y_<t) ≥ τ.

In words, Algorithm 5.3 limits the sampling distribution to only those words with negative log-probability within a certain absolute range of the conditional entropy (expected information content) of the model at that time step. In the spirit of nucleus sampling, this range is determined by a hyperparameter τ, the amount of probability mass from the original distribution that we wish to consider.
Interestingly, Algorithm 5.3 does not imply that high-probability words should not be chosen. Indeed, in the situation where the conditional entropy is low, i.e., when the model places most of the probability mass on a small subset of words, it is likely the case that only high-probability words fall into the locally typical set.
Computational Complexity. From a practical perspective, locally typical sampling can be implemented with the same efficiency as nucleus or top-k sampling. First, we compute the conditional entropy, which is an O(|V|) operation. Second, we sort words by their absolute distance from H(p(· | Y_<t = y_<t)), which can be done in O(|V| log |V|) time with standard sorting algorithms. Finally, we greedily take words from this list until their cumulative probability exceeds the threshold τ, which again takes O(|V|) time. Thus, creating our altered distribution has time complexity O(|V| log |V|).

Relationship to Other Decoding Strategies. Notably, we already see motivation for this criterion in the performance of several well-known decoding strategies. For example, beam search is the predominant decoding strategy for machine translation models (Wu et al., 2016; Edunov et al., 2018; Ng et al., 2019; Meister et al., 2020b), a setting in which beam search (incidentally) often already enforces this criterion.
Yet, when used in more open-ended tasks, where the entropy of the language model is higher, beam search can lead to low-quality text (Li et al., 2016; Holtzman et al., 2020; Welleck et al., 2020; Meister et al., 2022). Locally typical sampling is also closely related to nucleus sampling. When the probability distribution over the vocabulary has low conditional entropy, i.e., when there are only a few reasonable choices for the next word according to our model, nucleus and locally typical sampling will have the same truncation set. Locally typical sampling and Mirostat (Basu et al., 2021) likewise have similar decision rules for truncation. Mirostat decodes strings such that they have a perplexity (or, equivalently, a per-word information content) close to a target value. In contrast to Mirostat, however, locally typical sampling does not require a specific target information content to be defined. Rather, locally typical sampling defines this quantity as the conditional entropy, choosing it dynamically (per word) and making it less sensitive to hyperparameter choice. Finally, locally typical sampling is also related to the strategy of Braverman et al. (2020), a look-ahead decoding algorithm that generates text with an entropy rate similar to that of human-generated text. Our strategy's motivation is similar, namely to match the tendencies in information content exhibited by human-generated text, albeit without requiring the computational overhead of a look-ahead strategy.
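Algorithm 5.3, via the greedy procedure described under Computational Complexity above, can be sketched as follows (the distributions are invented for illustration):

```python
import math

def locally_typical_set(dist, tau):
    """Greedy truncation for locally typical sampling: rank words by the absolute
    difference between their surprisal and the conditional entropy, then take
    words until their cumulative probability reaches tau."""
    H = -sum(p * math.log2(p) for p in dist.values() if p > 0)
    chosen, mass = set(), 0.0
    for y in sorted(dist, key=lambda y: abs(-math.log2(dist[y]) - H)):
        chosen.add(y)
        mass += dist[y]
        if mass >= tau:
            break
    return chosen

# Low-entropy case: the mass sits on one word, which alone satisfies tau = 0.2.
peaked = {"the": 0.9, "a": 0.05, "cat": 0.04, "dog": 0.01}
assert locally_typical_set(peaked, tau=0.2) == {"the"}

# Uniform case: all surprisals equal the entropy, so words are taken until tau.
flat = {w: 0.25 for w in ("w1", "w2", "w3", "w4")}
assert locally_typical_set(flat, tau=0.95) == {"w1", "w2", "w3", "w4"}
```

The peaked example illustrates the point above: when conditional entropy is low, only high-probability words fall into the truncation set.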

Experiments
In this section, we explore the efficacy of our decoding strategy on two natural language generation tasks: abstractive summarization and story generation. We assess performance with respect to several other stochastic decoding strategies: nucleus sampling, top-k sampling, temperature sampling, beam search, and Mirostat. Our evaluation includes both automatic metrics and human ratings.

Setup
Models and Data. We use the Hugging Face framework (Wolf et al., 2020) for reproducibility, employing their implementations of nucleus, top-k, temperature sampling and beam search. We rely on the implementation of Mirostat provided by its authors. For story generation, we finetune the medium and large versions of GPT-2 (Radford et al., 2019) from checkpoints made available by OpenAI on the WRITINGPROMPTS dataset (Fan et al., 2018). We use the medium checkpoint finetuned on WIKITEXT-103 (Merity et al., 2017) to produce the data used in Fig. 1. For abstractive summarization, we use BART (Lewis et al., 2020).

Hyperparameters. In a preliminary hyperparameter sweep using MAUVE (Pillutla et al., 2021), we found k = {30, 40}, η = {0.9, 0.95}, and τ = 3.0 to be the best-performing hyperparameters for top-k sampling, nucleus sampling, and Mirostat, respectively. For locally typical sampling, we found τ = 0.2 and τ = 0.95 to provide the best results for story generation and abstractive summarization, respectively. For the other hyperparameters (i.e., for beam search and temperature sampling), we employed standard values from the literature. We use these values in our human evaluations and in the computation of automatic metrics.
Automatic Quality Metrics. As automatic quality metrics, we evaluate the generated text's perplexity under both the model used to generate the text (PPL(g)) and an independent, i.e., not finetuned, LM (PPL(i)), namely GPT-2 large (Radford et al., 2019). Several prior works have shown that neither low nor high perplexity (Zhang et al., 2021; Nadeem et al., 2020; Pillutla et al., 2021) is a direct indicator of text quality. Rather, human-like text often has perplexity within a certain range. Consequently, we report the difference in this metric from the reference text as well. We additionally evaluate using MAUVE (Pillutla et al., 2021) with the reference text.

(As we are interested in getting as close an estimate of p as possible with our models, all fine-tuning is done without label smoothing. Note that label smoothing may also artificially inflate conditional entropy estimates, as it pushes the learned distribution towards the most entropic distribution: the uniform distribution (Pereyra et al., 2017). For all MAUVE computations, we use the default settings given by the authors, albeit we employ different LMs in our parameter sweep vs. reported results (standard GPT-2 vs. GPT-2 large, respectively) to reduce bias in the final results. Notably, MAUVE presents similar performance when used with these two pretrained LMs (Pimentel et al., 2022).)
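As a concrete reading of the PPL metrics, perplexity is the exponentiated average negative log-probability per token, and what is reported is its difference from the value measured on the reference text. A minimal sketch (function names are ours):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-probability per token
    (natural log). Illustrative helper; name is ours."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def ppl_delta(generated_logprobs, reference_logprobs):
    """Absolute difference from the reference text's perplexity. Per the
    discussion above, closeness to human-like perplexity, rather than low
    perplexity itself, is what tracks quality."""
    return abs(perplexity(generated_logprobs) - perplexity(reference_logprobs))
```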
Automatic Diversity Metrics. We also evaluate locally typical sampling using automatic diversity metrics. We compute REP (Welleck et al., 2020), Zipf's coefficient, and n-gram diversity. For REP, we use the average of REP/ℓ scores, as defined in Eq. 9 of Welleck et al. (2020), for ℓ ∈ {16, 32, 128}. We define n-gram diversity D as the average fraction of unique vs. total n-grams for n ∈ {1, 2, 3, 4} in a string.

Table 2: Automatic quality and diversity metrics, as described in §6.1, along with human ratings on the CNN/DAILYMAIL dataset. Human ratings are averaged across criteria to form a single metric. Bolded values are the best results among decoding strategies, where for perplexity (PPL) and Zipf's coefficient, we take this to be the delta from measurements on human text (numbers in purple). Numbers in blue are standard error estimates.

Human Evaluations. Raters are first presented with the different rating criteria, along with descriptions of the type of text that meets these criteria at different levels of the scale. Raters are additionally provided several examples of stories/summarizations that both meet and fail to meet the rating criteria. They are then presented with the respective prompt/news article and the corresponding stories/summaries generated by different decoders and by the reference, in random order. For abstractive summarization, we ask annotators to score fluency and relevance, while for story generation, annotators score fluency, coherence, and interestingness, each using a scale from 1 to 5. We choose these criteria following recommendations from van der Lee et al. (2019).
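The n-gram diversity metric D defined above can be computed directly; the sketch below is our own minimal implementation (the function name is ours).

```python
def ngram_diversity(tokens, max_n=4):
    """Average fraction of unique vs. total n-grams for n in {1, ..., max_n},
    as defined for the D metric above. Illustrative implementation."""
    fractions = []
    for n in range(1, max_n + 1):
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        if grams:  # skip n-gram orders longer than the string
            fractions.append(len(set(grams)) / len(grams))
    return sum(fractions) / len(fractions)
```

A fully repetitive string drives D toward zero, while a string with no repeated n-grams yields D = 1.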
For each story/summarization and each of the criteria, we take the median score across raters as the final score. Workers are paid $1.50 per HIT for the abstractive summarization task and $2 per HIT for the story generation task, for which entries were longer; these rates translate to >$15/hour. We use an attention check in each HIT, and responses where the attention check has been failed are thrown out.

Results
Quantitative Performance. Tab. 1 and 2 show the results of our different evaluation metrics. Human scores are averaged across the qualitative metrics to give an aggregate score; the value in parentheses is the standard error of the estimate. We show full breakdowns of score distributions in Tab. 5. We see that, in general, locally typical sampling performs on par with or better than other sampling techniques, producing text with human quality ratings closest to that of the reference among the stochastic decoding strategies. Interestingly, beam search still outperforms locally typical sampling in abstractive summarization, albeit by a small margin. This could perhaps be attributed to the deterministic nature of beam search, which suggests that an interesting direction for future research may be a deterministic version of locally typical sampling, e.g., one in which the highest-probability word within the truncated set is always chosen. Importantly, all the strategies we explore are quite close to human-level performance, in some cases even surpassing human references in terms of ratings. At this level, it is perhaps only reasonable to expect that the differentiation between the top strategies is small. Accordingly, we also consider how robust locally typical sampling is to hyperparameter choice. Fig. 2 shows REP measurements for different values of the hyperparameters k, η, and τ for top-k, nucleus, and locally typical sampling, respectively. Interestingly, REP appears to be far less sensitive to τ than to k and η. While many values of k and η appear to lead to degenerate repetitions in story generation, most values of τ lead to text with a REP value on par with human text, demonstrating that an advantage of our technique is its robustness to hyperparameter choice. See Fig. 6 in the appendix for a larger exploration of how other quality metrics vary as a function of τ.

Table 3: Example summaries generated by each decoding strategy for an article from the CNN/DAILYMAIL dataset.

Prompt (CNN): The attorney for a suburban New York cardiologist charged in what authorities say was a failed scheme to have another physician hurt or killed is calling the allegations against his client "completely unsubstantiated." Appearing Saturday morning on CNN's "New Day," Randy Zelin defended his client, Dr. Anthony Moschetto, who faces criminal solicitation, conspiracy, burglary, arson, criminal prescription sale and weapons charges in connection to what prosecutors called a plot to take out a rival doctor on Long Island. "None of anything in this case has any evidentiary value," Zelin told CNN's Christi Paul...

Reference: A lawyer for Dr. Anthony Moschetto says the charges against him are baseless. Moschetto, 54, was arrested for selling drugs and weapons, prosecutors say. Authorities allege Moschetto hired accomplices to burn down the practice of former associate.

Beam, k = 5: Dr. Anthony Moschetto faces criminal solicitation, conspiracy, burglary, arson and weapons charges. "None of anything in this case has any evidentiary value," his attorney says.

Nucleus, η = 0.95: Dr. Anthony Moschetto, 54, pleaded not guilty to charges Wednesday. Two men, identified as James Chmela and James Kalamaras, were named as accomplices.

Top-k, k = 30: Dr. Anthony Moschetto is accused of providing police with weapons and prescription drugs. Authorities say he was part of a conspiracy to harm or kill a rival doctor. His attorney calls the allegations against his client "completely unsubstantiated"

Typical, τ = 0.95: Dr. Anthony Moschetto is charged with crimes including arson, conspiracy, burglary, prescription sale, weapons charges. His attorney says "none of anything in this case has any evidentiary value"
Qualitative Performance. We present some examples of text generated according to each of the decoding strategies in Tab. 3 and 4. For both of the tasks, we choose the example with ID 1 in the respective test set and provide examples from each of the decoding strategies, employing the hyperparameter values that lead to the best human scores in Tab. 1 and 2. For the summarization task, we see that locally typical sampling provides a comprehensive and coherent summary of the article, quite similar to that of beam search. In comparison, the text produced by temperature sampling is not necessarily coherent; text from nucleus sampling and top-k sampling misses some of the important information in the article, e.g., the charges of burglary and arson. While the qualitative performance in story generation is much more subjective, locally typical sampling arguably provides the most fluent story amongst all the decoding strategies. Other stories lack coherence and, even within the first few sentences, we see repeated phrases and words. Together, these results suggest that locally typical sampling may indeed produce more desirable text.

Conclusion
In this work, we analyze decoding from probabilistic language generators in an information-theoretic framework. We equate a language model to a discrete stochastic process, and use the known properties of such processes to quantitatively describe the samples we should expect. Motivated by results in psycholinguistics, we hypothesize that, with the goal of communicating efficiently and robustly, humans produce text whose per-word information content is within a close range of the expected information content of a word given prior context. Current language models may fall short in capturing this property, which is a possible explanation for why the corresponding language processes often do not lead to human-like text. Yet, this observation provides a simple new criterion for decoding more human-like text from probabilistic language generators: constraining the sampling space to words that meet this criterion. In experiments on two language generation tasks, we find that our strategy, called locally typical sampling, leads to text of comparable or better quality than other stochastic decoding strategies according to human ratings. Further, when compared to these other decoding strategies, several quantitative properties of typically-sampled text more closely align with those of human text.

Table 4: Example stories generated by each decoding strategy for a prompt from the WRITINGPROMPTS dataset.

Prompt: A kid doodling in a math class accidentally creates the world's first functional magic circle in centuries.

Reference: It was dark and Levi was pretty sure he was lying on his back. There was firelight flickering off of what was left of a ceiling. He could hear something but it was muffled. He was almost positive it was screaming. When he tried to move he felt an excruciating pain in his left side that caused him to cry out...

Nucleus, η = 0.95: "Why do you draw a circle?" He asked as he handed the piece of paper over to her. She rolled her eyes and looked down at the picture, "Because I like math!" "Then what?" "I like to draw the circles." She ran a hand through her curls and looked away. "See, I have three circles." "Hmph, I didn't think I saw that in your class." A younger student brought up the ceiling and murmured...

Temperature: It was teachers duty to ensure that our kingdom is pure and pure and successful, however it does not mean we do not try to be as rebellious as any other kingdom. Most of our teachers are suitable for rulership, being knowledgeable in sadistic rec thunking of time stopping magic circle created by several apprentices...

Mirostat, τ = 3: ...ready to go home, when a little kid starts to sketch a circle. "No. Just stop. It's not working." "But the circle turns to smoke and then to a flame. It's not working, I know it is!" Another kid, who was also drawing a circle with his phone, starts to scream. The teacher, looking at the circle, lost his patience and started to yell...

Typical: As soon as he got the clock on his desk he jumped up. The noise it was making had changed his day forever. His hair, normally unkempt, now glistened and he had to have more hair because of it. "Yes!" He had declared triumphantly as he watched his father fumble around in the backpack in search of a little compass he always kept with him. He took it from the desk and quickly walked out the door to see what had happened. He wasn't the first child in the world to draw something...
sations about the relationship between information theory and sampling. We would also like to thank Ehud Reiter, who served as our TACL action editor, and the anonymous reviewers for their insightful feedback during the review process. Further, we are grateful to Eleanor Chodroff, Clément Guerner and Lucas Torroba Hennigen for their feedback on the manuscript of this work.

Ethical Concerns
In order to complete our human evaluation, we used a crowdsourcing platform.For each task, we made sure that the crowdworkers would be paid (at minimum) a wage of $15 per hour.
Another ethical consideration worth discussing concerns the use of language models for text generation.Text generated by these models may contain malicious content, either by design of the user or as a byproduct of the training data/algorithm.While we hope the results of our work will not be misused, they may nonetheless provide insights for those employing these models with ill-intent as to how machine-generated text can be made more "human-like," and thus more convincing.

A Human Evaluation Setup
We use the Amazon Mechanical Turk framework for collecting human ratings of text. We use solely MTurk Master Workers in order to maximize the quality of our ratings. For story generation and abstractive summarization, each Human Intelligence Task (HIT) consists of either a single prompt from which a story should be generated or a single news article to be summarized. The raters are first presented with the different rating criteria, along with descriptions of the type of text that meets these criteria at different levels of the scale. These definitions can be seen in Figs. 3 and 4. Raters are additionally provided several examples of stories/summarizations meeting/failing to meet the rating criteria. They are then presented with the respective prompt/news article and the corresponding stories/summaries generated by different decoders and by the reference, in random order. We use an attention check in each HIT; responses where the attention check has been failed are thrown out. For each of the rating criteria, the rater assigns a score from 1 to 5. For each story/summarization and each of the criteria, we take the median score across raters as the final respective score. Statistics for these scores can be seen in Tab. 5. Workers are awarded $1.50 per HIT for the abstractive summarization task and $2 per HIT for the story generation task, for which entries were longer. These rates translate to >$15/hour.
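The aggregation procedure (drop raters who fail the attention check, then take per-item medians) can be sketched as follows; the data layout is hypothetical.

```python
import statistics

def aggregate_ratings(ratings):
    """Median score across raters per (item, criterion) pair, after dropping
    raters who failed the attention check. The rater -> record mapping used
    here is an assumed layout for illustration."""
    scores = {}
    for rater in ratings.values():
        if not rater["passed_check"]:
            continue  # failed attention check: response thrown out
        for key, score in rater["scores"].items():
            scores.setdefault(key, []).append(score)
    return {key: statistics.median(vals) for key, vals in scores.items()}
```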

Figure 3: The evaluation interface shown to workers for the story generation task. Each HIT presents a narrative and asks the worker to rate how interesting, fluent, and coherent the story is; the instructions state that HITs failing attention checks or showing unusually low agreement with other annotators will be rejected.
Definitions
Below you will find multiple prompts and stories (narratives) generated from those prompts. Please rate the stories according to their interestingness, fluency and coherence, following the given definitions and examples. We will reject your HIT if you input obviously wrong answers. The 5-point scale for each definition should be used as a guideline. The definitions are displayed when hovering over each radio button for convenience.

Interesting: The story is fun to read. It feels creative, original, dynamic, and/or vivid. The opposite of this might be something that's obvious, stereotypical/unoriginal, and/or boring. Very interesting: the story has themes, characters, and dialog that make you want to keep reading it, and you might even want to show it to a friend. Somewhat interesting: the story has themes, characters, dialog, and/or a writing style that pique your interest. Mildly interesting: there are moments of interest but the story is not too notable. Not very interesting: you finish the story but can't remember anything unique about it; adequate, but not a fun read. Not at all interesting: you do not even want to finish reading the story; it is boring and/or unoriginal.

Fluent: The story is written in grammatical English, with no obvious grammar mistakes that a person wouldn't make. An incomplete final word or incomplete sentence does not count as a mistake and should not affect fluency. The English sounds natural. Note: do not take off points for spaces around punctuation (e.g., "don 't") or for simpler sentences; simple English is as good as complex English, as long as everything is grammatical. Very fluent: the sentences read as if they were written by a native English speaker, with one or no errors. Somewhat fluent: the sentences read as if they were written by a native English speaker with very few errors; some minor mistakes that a person could have reasonably made. Mildly fluent: the sentences could have been written by a human, but it is not entirely obvious. Not very fluent: many sentences have frequently repeated words and phrases; obvious mistakes. Not at all fluent: the sentences are completely unreadable. If the same sentence is repeated over and over for the entire story, that story is considered not at all fluent.

Coherent: The story feels like one consistent story, not a bunch of jumbled topics. It stays on-topic with a consistent plot and doesn't feel like a series of disconnected sentences. Very coherent: the sentences, taken as a whole, have a clearly identifiable plot. Somewhat coherent: many of the sentences work together for a common plot with common characters; one or two unrelated sentences. Mildly coherent: around half of the sentences work together; the plot is not entirely clear though. Not very coherent: only a few sentences seem to be from the same story; the others are random. Not at all coherent: there is absolutely no identifiable plot; each sentence feels completely disconnected from every other sentence.

Figure 4: The evaluation interface shown to workers for the abstractive summarization task. Each HIT presents the article alongside each summary and asks the worker to rate how fluent the text is and how relevant the summary is to the article; the instructions state that HITs failing attention checks or showing unusually low agreement with other annotators will be rejected.

Definitions
Below you will find a news article and multiple summaries of this article. The news article is repeated for each summary for reference. Please rate the summaries according to their fluency and relevance, following the given definitions and examples. We will reject your HIT if you input obviously wrong answers. The 5-point scale for each definition should be used as a guideline. The definitions are displayed when hovering over each radio button for convenience.

Fluent: The summary is written in grammatical English, with no obvious grammar mistakes that a person wouldn't make. An incomplete final word or incomplete sentence does not count as a mistake and should not affect fluency. The English sounds natural. Note: do not take off points for spaces around punctuation (e.g., "don 't") or for simpler sentences; simple English is as good as complex English, as long as everything is grammatical. Very fluent: the sentences read as if they were written by a native English speaker, with one or no errors. Somewhat fluent: the sentences read as if they were written by a native English speaker with very few errors; some minor mistakes that a person could have reasonably made. Mildly fluent: the sentences could have been written by a human, but it is not entirely obvious, as there are a number of mistakes. Not very fluent: many sentences have frequently repeated words and phrases; obvious mistakes. Not at all fluent: the sentences are completely unreadable. If the same sentence is repeated over and over for the entire summary, that summary is considered not at all fluent.

Relevant: The summary captures all the relevant events, persons, and locations from the article. Very relevant: it is very clear the summary captures the theme, vocabulary, and specific persons and locations from the article. Somewhat relevant: most sentences include relevant points from the article. Mildly relevant: a few sentences include relevant points from the article. Not very relevant: the summary mentions something from the article but contains mostly unrelated text. Not at all relevant: it is as if the summary was written without reading the article.


B Additional Results
Figure 5: Distributions of conditional entropies and information contents per token for three different language generation tasks for human text, i.e., the reference text for each of the respective datasets.

Prompt (CNN, continued): Many of the weapons were found in a hidden room behind a switch-activated bookshelf, according to prosecutors. The investigation began back in December, when undercover officers began buying heroin and oxycodone pills from Moschetto in what was initially a routine investigation into the sale of prescription drugs, officials said. During the course of the undercover operation, however, Moschetto also sold the officers two semiautomatic assault weapons as well as ammunition, prosecutors said. Moschetto allegedly told officers during one buy that he needed dynamite to "blow up a building." He later said he no longer needed the dynamite because a friend was setting fire to the building instead. Kalamaras and Chmela are believed to have taken part in the arson, according to prosecutors. "The fire damaged but did not destroy the office of another cardiologist whose relationship with Dr. Moschetto had soured due to a professional dispute," according to the statement from the district attorney's office. Moschetto allegedly gave an informant and undercover detective blank prescriptions and cash for the assault and killing of the fellow cardiologist, according to prosecutors. He also requested that the rival's wife be assaulted if she happened to be present, authorities said. "He was willing to pay $5,000 to have him beaten and put in a hospital for a few months, and then he said he would pay $20,000 to have him killed," said Assistant District Attorney Anne Donnelly, according to CNN affiliate WCBS.
Reference
A lawyer for Dr. Anthony Moschetto says the charges against him are baseless. Moschetto, 54, was arrested for selling drugs and weapons, prosecutors say. Authorities allege Moschetto hired accomplices to burn down the practice of former associate.

Beam
Dr. Anthony Moschetto faces criminal solicitation, conspiracy, burglary, arson and weapons charges. "None of anything in this case has any evidentiary value," his attorney says.

Temp
Dr. Anthony Moschetto accused of planning to kill doctor, drug dealer. Scheme imploded when a crisis developed in their relationships. Preliminary bond hearing set for August 5.

Nucleus
Dr. Anthony Moschetto, 54, pleaded not guilty to charges Wednesday. Two men, identified as James Chmela and James Kalamaras, were named as accomplices.

Top-k
Dr. Anthony Moschetto is accused of providing police with weapons and prescription drugs. Authorities say he was part of a conspiracy to harm or kill a rival doctor. His attorney calls the allegations against his client "completely unsubstantiated"

Typical
Dr. Anthony Moschetto is charged with crimes including arson, conspiracy, burglary, prescription sale, weapons charges. His attorney says "none of anything in this case has any evidentiary value"

Story Generation (WritingPrompts) Prompt: A kid doodling in a math class accidentally creates the world's first functional magic circle in centuries.
Reference It was dark and Levi was pretty sure he was lying on his back.There was firelight flickering off of what was left of a ceiling.He could hear something but it was muffled.He was almost positive it was screaming.When he tried to move he felt an excruciating pain in his left side that caused him to cry out.His hand moved to it instinctively and found something protruding from the wound.It seemed to be a pipe of some kind.He attempted to pull it out but it was covered in blood making it slick.Levi's ears began ringing and the sounds began to become clearer, it felt quite a bit like some one was driving needles into his eardrums.The sounds he was hearing were definitely screams and not just one person's, a lot of people were screaming or yelling.There was some one close to him that was crying.He looked in the direction of the tears and seen his sister kneeling beside him, her hands covering her face."What happened Laur?"He was shocked at the sound that barely rasped out from between his lips.His sister's hands jerked down and she stared down at Levi with a shocked look on her face."bu...I tho... you weren't breathing!"What started as a whisper ended in yell as she threw her self across her brother and began to sob anew.Levi cried out hoarsely but she didn't hear.She just continued to cling to him like a little girl that had just found her lost doll.He put one of his arms around her and scanned the room as much as he could.It looked like a warzone, like something out of one of the many shooters in his gaming collection."What the hell?"There were dead bodies everywhere, he recognized some of them.There were firefighters and EMT's running around trying to find survivors in the rubble.Most of the screams were coming from survivors.He seemed to be laying on top of the bottom of a desk, and he was pretty sure the pipe sticking out of his side was a actually one of the legs.
Then he spotted it lying about two feet from his right leg, a round section of desk about the size of a softball. On it was a round symbol with geometric shapes glowing with dark red embers and a dull tendril of smoke rising up from it. It all came back to him in rush. He drew that in his notebook. It was second period and his trig teacher had this monotonous voice that could put a crack head to sleep. Laurana caught him doodling and had thrown a pencil at him to scold him silently, which made him jerk as he was connecting the last line on his drawing. Then there was the light and the heat and lastly the dark. Did he do this? What the hell was going on here? A voice brought him out of his little flashback. "Sir. Sir? Sir!" it was a firefighter. "I need you to stay still. You're going to hear a loud noise from behind you. I need you stay calm." "Kay." He just blew up his classroom with a piece of paper and a pencil and this guy wants him to stay calm. "it's going to cause this desk to vibrate a lot and your going to feel some pain but there is nothing that we can do about it. We will work as fast as possible." The sound of the saw kicking on was not near as bad as he had thought, but Levi was no where near ready for when it made contact with the leg of the desk. His cries of agony began to drown out the sound of everything around him until darkness overwhelmed him. Sorry about my grammar it's been a while since I have written anything also, first time poster so take it easy on me; p

Nucleus
"Why do you draw a circle?" He asked as he handed the piece of paper over to her. She rolled her eyes and looked down at the picture, "Because I like math!" "Then what?" "I like to draw the circles." She ran a hand through her curls and looked away.

Temp
It was teachers duty to ensure that our kingdom is pure and pure and successful, however it does not mean we do not try to be as rebellious as any other kingdom.Most of our teachers are suitable for rulership, being knowledgeable in sadistic rec thunking of time stopping magic circle created by several apprentices at the same time.After the first amendment, we decided that whilst having a society that thrives on protecting and serving the majority of people, whilst ensuring that the needs of some are met, the elements of freedom, equality and liberty shall not be infringed upon.Sadly, we ' ve done little to uphold those values since the day our forefathers signed the ancient Papercut act and limited the extent of human freedoms.It is now a day we lay a steady hand on the pages of history, hoping that the rules shall be rewritten and brought to what they once were.The next barrier to seeing the world anew as a black pantheon was a fallen occult student, Mr Pegarchow in Commerce, brought to history via the magic eye how to make a magic circle in the first place.This student made a hoop in the library that can be opened with a simple spiral.If a magic circle is inside this hoop, the magical illusion created is that a circle is made with the appropriate essence of spirit.An image of Pgyarchow being covered by the hoop, with the colours that inhabit it are verily magical.The magician essentially created. . 
.magic.To this day, not a single piece of paper hath survived contact with the magic eye.There were so many experiments looking for what function to draw in, or when it starts, 3rd century of my not-so-distant time.Nothing turned out as expected and everyone quickly passed into subculture that embraced the forbidden art of forgetfulness.However, even through the strongest taboo that could cause, symbols of independent spirit were still made and used as religion source material.I am the last wizard right now though not by choice, which was rather problematic after my mentor ' s death.However, I am different, and a magician, not by birth but via training and knowledge dedcialized from the single book of knowledge stored in my heart, able to share it with everyone, even those who share their mothers blood.
To mhm, Lizonun The kid is ready to spell it then: "The kenning-untergrund, kenning-unterunterhrund, kenning-unteronAAAKE!" The teacher asks: "What do you mean fails?" "Ok, and I forgot why I just copy all my circles to my phone:)" "You can't make circles!That would be impossible.And you forgot the reason for your circle to not work:)" "Yes, I did!I forgot the purpose of my circle and this happened to me too:)" "That might happen, but get out of here immediately, the spell is about to hit the ground!"The kid grabs the teacher by the shirt collar: "So what did you get out of it?I thought magic circles are making people happy?"The teacher looks at him with fury in his eyes: "You should go to the yelling and crying section of the school, before the spell hits the ground.You're going to repeat the'Wizard' spell again.""Thanks, but no, I'll leave."The kid grabs the ground and he leaves the school.Then his circle is gone.The teacher is still trying to figure out what should be done about the circle because he forgot to ask some of his other students.So he goes home to his basement, to his computer and he starts to draw a circle.And then he draws another one.And another one.Soon enough, there are some things that are drawn on his screen: a circle with smoke coming out of it, a circle with fog coming out of it, a circle with fire coming out of it, a circle with flavouring of words coming out of it, a circle with adding words into it and a circle with some mixture of words coming out of it.As the circle is wants to be drawn, he starts to move his finger Mask he was using to draw the circle.When suddenly, his finger don't move, slowly but steadily, like a hand is starting to move against his will.The students are looking at the screen now.They start to scream and yell: "Why is the hand still moving?!!" "Yes,

Typical
As soon as he got the clock on his desk he jumped up. The noise it was making had changed his day forever. His hair, normally unkempt, now glistened and he had to have more hair because of it. "Yes!" He had declared triumphantly as he watched his father fumble around in the backpack in search of a little compass he always kept with him. He took it from the desk and quickly walked out the door to see what had happened. He wasn't the first child in the world to draw something to him and so far, so good. It wasn't as big as a tree, but it did sparkle a little bit and made a few notes in his calculus textbook. As he left the room, the others around him also drew and explained to him their ability. Most kids who knew their way around magic started to teach their spells in a lesson on learning. His teachers tried to tell him about his parents but he only heard his name as they asked if he knew about them. They explained how he would always go with them when he learned something and then asked how he had found the way. His mother said she always did the work when she taught and then how his father used the ingredients he had brought along to create his circle. It wasn't anything like what his mother would use to do her magic but he wasn't interested in her or his magic because it wasn't his favorite type. As he got older he got to watch as the world moved. His family went out on field trips to look at stars and wonder where the other world is. They asked where his mother and father went, and how long they spent on that. It wasn't anything like he expected but they never explained what the field trips were about. As he grew older he became a man with more responsibility and began to notice things like this happen to the rest of the world. As a man who didn't believe in magic he found his circle was missing from the side of the road he always stopped on to check his bearings. His neighbors thought it was weird that his neighbors' circles would get in the way. One day his friends stopped on the
street he was walking down and told him to get to his car and take him to a store so they could find his circle. When they found his circle they gave it back and it had no clue where it came from. After some further digging he discovered the missing part of the circle. A pentagram that appeared out of nothingness on the road and for the longest time, no one had been able to find it. A couple weeks later a truck came out and dumped all of the construction workers that were using the road and blocked it. When his circle had disappeared the news talked about the weird pentagram on the road. This went on for months until his mother noticed he wasn't coming home. When she came home, she looked in his room and he wasn't there. When he wasn't there he never made his presence known to her and never tried to teach his spells to her. They eventually went back to their apartment together. Her father didn't even acknowledge his daughter at first but his wife didn't think to call him or get his attention. The father, seeing the pattern and a good time with his daughter decided to stay. She told her parents to check their calendar. It had stopped being so late that her father thought to take his place to try to sleep off his spell exhaustion. The parents realized something was up and the wife suggested she get her boyfriend and see if she could have a quiet time. It wasn't a normal time to do such things, but when he found her and saw his father had passed he told his girlfriend that they would meet for drinks to get over it. The wife got ready to go, and they drove home. She pulled over in front of their house. A truck pulled up in the driveway, stopped and looked out and began to slowly pull into the driveway.
When she was looking in his eyes, she asked if he knew about her mother's strange powers.When he replied he asked how his father got the circle that they kept around for all of these years.
The man pulled up in his truck and gave his girlfriend the back seat and started the truck. She took him home, they drank, and watched a movie about dragons and aliens and made up a bedtime story for the boy and girl. After dinner the boy slept through the night. He dreamt that the circle had gone missing, that it had disappeared in his yard, and that the car parked next to him was covered in dust and it smelled like his room had been raided by monsters. After his sleep, the mother took him to his room to wake him up. When she entered he didn't react at all. "Are you alright honey?" The mother asked him. The boy responded in the most puzzled tone she had ever heard him.
Table 6: Full sample generations for abstractive summarization and story generation. We use samples from the model fine-tuned from GPT-2 large for story generation.

Figure 1:
The per-token distribution of the deviation (ε) of information content from conditional entropy. Values are computed using the reference (human) text for three different language generation tasks, where probabilities and entropies are computed using probabilistic models trained on the respective task (see §6 for model details). Dotted line and adjacent label indicate median ε, while dashed line and adjacent label indicate mean ε. Per-token distributions of conditional entropies and information contents are shown in App. B for reference.
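The quantity plotted above can be computed directly from a model's next-token distributions. The following is a minimal sketch (function and variable names are our own, not from the paper): for each step, ε is the absolute deviation of the observed token's information content, −log p(y_t | y_<t), from the conditional entropy of the full next-token distribution.

```python
import math

def epsilon_per_token(stepwise_dists, token_ids):
    """For each time step, compute eps = |information content - conditional entropy|.

    stepwise_dists: list of next-token distributions p(. | y_<t), each a list of probabilities.
    token_ids: the index of the token actually observed at each step.
    """
    eps = []
    for dist, y in zip(stepwise_dists, token_ids):
        info = -math.log(dist[y])                                # information content of observed token
        entropy = -sum(p * math.log(p) for p in dist if p > 0)   # conditional entropy of the distribution
        eps.append(abs(info - entropy))
    return eps

# Toy example: two steps over a three-word vocabulary.
dists = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]
tokens = [0, 2]
print([round(e, 3) for e in epsilon_per_token(dists, tokens)])  # -> [0.445, 0.387]
```

In practice the distributions would come from a trained language model's softmax outputs; the median and mean of these per-token ε values are what the figure's dotted and dashed lines mark.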
) and we call $\mathcal{C}(\mathbf{y}_{<t})$ the truncation set. Now we give two examples of truncated samplers.

Algorithm 5.1 (Top-k Sampling). In top-k sampling, the truncation set $\mathcal{C}(\mathbf{y}_{<t})$ is defined as the $k$ highest-probability tokens $y$ according to $p(\cdot \mid \mathbf{y}_{<t})$, i.e., the solution to the following subset maximization problem:

$$\operatorname*{maximize}_{\mathcal{C}(\mathbf{y}_{<t})\,\in\,\mathcal{P}(\mathcal{V})} \; \sum_{y \in \mathcal{C}(\mathbf{y}_{<t})} p(y \mid \mathbf{y}_{<t}) \quad \text{subject to} \quad |\mathcal{C}(\mathbf{y}_{<t})| \leq k \tag{10}$$
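Concretely, the solution to this subset maximization is just the set of indices of the k largest probabilities, after which one samples from the distribution renormalized over that set. A minimal sketch, with our own function names and a plain-Python renormalized draw standing in for a model's sampler:

```python
import random

def top_k_truncation(probs, k):
    """Return the truncation set C(y_<t): indices of the k highest-probability tokens,
    which solves the subset maximization in Eq. (10)."""
    return set(sorted(range(len(probs)), key=lambda y: probs[y], reverse=True)[:k])

def sample_top_k(probs, k):
    """Draw a token from p(. | y_<t) renormalized over the truncation set."""
    C = top_k_truncation(probs, k)
    mass = sum(probs[y] for y in C)       # normalizer: total probability of C(y_<t)
    r = random.uniform(0.0, mass)
    for y in sorted(C):
        r -= probs[y]
        if r <= 0:
            return y
    return max(C)                          # guard against floating-point slack

# Toy next-token distribution over a five-word vocabulary.
probs = [0.5, 0.3, 0.1, 0.07, 0.03]
print(top_k_truncation(probs, 2))  # -> {0, 1}
```

Nucleus (top-p) sampling differs only in the constraint: instead of capping the set's size at k, the truncation set is the smallest set whose total probability mass reaches p.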

k = 30

"Aha. Aah! That makes three." Professor Gosh sighed. "Oh my God, you had to have been there." "Well, I'm not sure if that's the right term, but this circle is going to work." "But I just told you that it wasn't working, didn't you, you little sh**? I'm not kidding! That circle is gonna work!" "So what now?" ...

Temp τ = 1.0

[Residue from the human evaluation interface; annotator qualification checkboxes: "I have read the instructions", "I have read the examples", "I am a native English speaker".]

Table 1:
Quality and diversity metrics, as described in §6.1, along with human ratings on the WRITINGPROMPTS dataset. Human ratings are averaged across criteria to form a single metric. Bolded values are the best results among decoding strategies, where for perplexity (PPL) and Zipf's coefficient, we take this to be the delta from measurements on human text (numbers in purple). Numbers in blue are standard error estimates. Results are from finetuned GPT-2 large; for summarization, the model is finetuned on the CNN/DAILYMAIL dataset (Nallapati et al., 2016). All reported metrics are computed on the respective test sets.

Table 3:
Sample generations for abstractive summarization; examples correspond to ID 1 in the test set. Decoding strategy hyperparameters are chosen based on performance in the human evaluations shown in Tab. 2.

Table 4:
Sample generations for story generation from GPT-2 large finetuned on the WRITINGPROMPTS dataset; examples correspond to ID 1 in the test set. Decoding strategy hyperparameters are chosen based on best performance in the human evaluations shown in Tab. 1. Full stories are shown in App. B.
Prompt (CNN) The attorney for a suburban New York cardiologist charged in what authorities say was a failed scheme to have another physician hurt or killed is calling the allegations against his client "completely unsubstantiated." Appearing Saturday morning on CNN's "New Day," Randy Zelin defended his client, Dr. Anthony Moschetto, who faces criminal solicitation, conspiracy, burglary, arson, criminal prescription sale and weapons charges in connection to what prosecutors called a plot to take out a rival doctor on Long Island. "None of anything in this case has any evidentiary value," Zelin told CNN's Christi Paul. "It doesn't matter what anyone says, he is presumed to be innocent." Moschetto, 54, pleaded not guilty to all charges Wednesday. He was released after posting $2 million bond and surrendering his passport. Zelin said that his next move is to get Dr. Moschetto back to work. "He's got patients to see. This man, while he was in a detention cell, the only thing that he cared about were his patients. And amazingly, his patients were flooding the office with calls, making sure that he was OK," Zelin said. Two other men, identified as James Chmela, 43, and James Kalamaras, 41, were named as accomplices, according to prosecutors. They pleaded not guilty in Nassau County District Court, according to authorities. Both were released on bail. A request for comment from an attorney representing Chmela was not returned. It's unclear whether Kalamaras has retained an attorney. Police officers allegedly discovered approximately 100 weapons at Moschetto's home, including hand grenades, high-capacity magazines and knives.
Figure 6: MAUVE, Zipf's coefficient, (average) probability mass of candidate token pool and (average) candidate token pool size as a function of decoder hyperparameters for nucleus, top-k and locally typical sampling.
"See, I have three circles." "Hmph, I didn't think I saw that in your class." A younger student brought up the ceiling and murmured. "Yeah, sure. It's just, why do we draw the circles?" "Because they're circles." She looked up. "But they don't match up!" She pointed at the nearby wall, "They don't start out like this!" She ran a hand over the map on the wall, "It starts out the same." She turned and pointed to the map marked with candles. "You have fire here." "Well that's nice." He turned away from her, "I don't think the artists in class thought about it that way." She pointed again. "See? Even your circle!" "That's my circle." "Look, Mason." "Hear, daughter." "I don't see a man, and so can't prove it." The two young adults nodded in agreement. Then she began to speak, "I draw the circles because I like to guess about what will happen when I make circles." "Yeah, and that's what we're going to do when you make a circle and turn the paper to our faces." That makes three." Professor Gosh sighed. "Oh my God, you had to have been there." "Well, I'm not sure if that's the right term, but this circle is going to work." "But I just told you that it wasn't working, didn't you, you little sh**? I'm not kidding! That circle is gonna work!" "So what now?"
"Well, that's for you to figure out. First, you've got to figure out a way to communicate with the circle. Maybe the spell isn't right for it, but it's not going to ruin the circle." "Okay, okay, you've told me to figure it out so I'll do a whole lot of work. But how exactly do you communicate a spell to a circle of stones in a basement?" "Simple, really. You put it inside the circle, and it magically works its way around the circle to get to the person you wrote 'I AM' on." "But how? Why am I supposed to be the main caster?" "Because you don't have magic powers to cast spells." "What? That's not even supposed to be possible." "You are special, you know." Professor Gosh looked at me with a little bit of sympathy. "I can't believe you're a wizard, you can do all this stuff when you're a kid." "Well, yeah, that's what they told me, so I guess you're right. But no matter how hard you try, you can't control magic. Don't worry, we'll figure out a way to make you a wizard one of your own." "Okay, I think you understand, I still haven't figured out how to create a spell, but I am so excited about the possibilities!" "Alright, let's do this."

Mirostat

ready to go home, when a little kid starts to sketch a circle. "No. Just stop. It's not working." "But the circle turns to smoke and then to a flame. It's not working, I know it is!" Another kid, who was also drawing a circle with his phone, starts to scream. The teacher, looking at the circle, lost his patience and started to yell at the student: "This circle is not working, you need to come up with some more inventive magic demonstration!"
"I like it. You can draw it with your phone, it's not hard." "No, stop it. Just stop drawing it like that and move on with your day. You look troubled." "No, I do. I can get some ideas for my magic circle. Also it looks cool and I can tell everyone that it's working." "Ok, but you still have to come up with a name for the circle." "I can come up with kenning, kenning-perforant, kenning-untergrund or kenning-unterhrund depending on the layout. Then my circle is named kenningunterhrund, and then I post that on reddit, and I sell it on Etsy!" "Uhh... you spelt it wrong again using your phone, kenning-unterhrund for what's a circle in the middle of a room! You're not getting any creativity from that!" "Still, I'm not that bad. I could come up with a name too. We could see if our circle works on reddit!" The teacher starts to object, but the kid just brushes him off: "open the door and let me in, I'll tell everyone my name for my circle." The door is open and the teacher comes in. "KENNING-UNTER DOESN'T WORK, IT'S BAGS!" The kid answers: "Open the door and let me in. If there's no door, I'll come back tomorrow and make a new one." He sounds confident in his plan. "Ok, kenning-untergrund, kenning-unterhrund or kenning-unteromactually totalitarian need you! Honoring your name will make your circle work better!"