Grawlix Syntax and Understanding Underspecified Words

No, a grawlix is not a creature of the week from Doctor Who. ‘Grawlix’ is another term for ‘symbol swearing’: those odd symbol strings that replace the normal characters in maledicta.

Taken from grammar.com:

“The symbols that work best [for grawlixes] are those that fill up space: @, #, $, %, and &. Hyphens, plus signs, asterisks, and carets (^) leave too much white space within the body of the grawlix for it to look like a single word. Wiktionary recommends @#$%& as the standard grawlix. This uses the five beefiest symbols in the order they appear on an American keyboard. (If you curse with a British accent, try @#£%&.) . . . Because it represents words spoken in anger or excitement, the grawlix should always end with an exclamation mark, even if it’s an interrogative grawlix: @#$%&?! Finally, as a word of caution, you should reserve your use of grawlixes for emails to close friends. Grawlixes are highly inappropriate for professional writing.”
(Bill Schmalz, The Architect’s Guide to Writing: For Design and Construction Professionals. Images, 2014)

So, we find that the standard grawlix is a one-to-many proposition: the single string @#$%&! stands in for all profanity. Boring.

What’s more interesting are partial grawlixes and X-word transformations. Partial grawlixes retain some of the original characters, typically the first and, optionally, the last. The X-word transformation takes the first letter and adds ‘-word’ as a uniform suffix.
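
For concreteness, here is a minimal sketch of the two transformations in Python. The function names and the choice of * as filler are my own:

```python
def x_word(word: str) -> str:
    """X-word transformation: 'bitch' -> 'b-word'."""
    return f"{word[0]}-word"

def partial_grawlix(word: str, keep_last: bool = False) -> str:
    """Partial grawlix: 'bitch' -> 'b****', or 'b***h' with keep_last."""
    if keep_last and len(word) > 2:
        return word[0] + "*" * (len(word) - 2) + word[-1]
    return word[0] + "*" * (len(word) - 1)

print(x_word("bitch"))                           # b-word
print(partial_grawlix("bitch"))                  # b****
print(partial_grawlix("bitch", keep_last=True))  # b***h
```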

Both quasi-censor curse words but still prime the audience to decipher the code. The b-word and b**** supplant bitch. Partial grawlixing and X-wording both force the reader to countenance and comprehend off-color speech, avoiding the circumlocution and obfuscation of euphemisms. It is not entirely clear whether grawlixes have a spoken equivalent. Maybe ‘razzle-frazzle, frecka-secka’, delivered in a low, grumbly voice. Rap music occasionally censors swear words by leaving the initial phone intact and overlaying a record scratch over the rest of the lexical item, or playing the rest of the lexeme in reverse.

X-wording is a sort of linguistic aikido. The speaker only hints at something taboo while nodding to the emotive power typically behind cursing. The speaker is flouting Grice’s fourth maxim, Manner, while more closely adhering to the second maxim, Quality, by being more forthcoming about their emotional state.

Here is Louis CK on the ‘n-word’: [embedded video]

With partial grawlixes, I have seen two basic variations: random special-character strings and uniform special-character strings. Shit can be s#!%, or shit can be s***.

To the diligent reader, a question arises: should the special characters resemble the original orthography? Which shit most resembles shit: s#!% or s***? And how close do we want to get to shit? Surely sh!t is a pointless transformation.
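
One way to operationalize ‘resemblance’ is a lookup table of lookalike symbols, with arbitrary filler as a fallback. A sketch, assuming a LOOKALIKES table of my own, reverse-engineered from the s#!% example, with the ‘beefy five’ from the quote above as fallback:

```python
import random

# LOOKALIKES is my own guess at which special characters visually
# suggest which letters; FILLERS is the "beefy five" from the quote.
LOOKALIKES = {"a": "@", "s": "$", "i": "!", "h": "#", "t": "%"}
FILLERS = "@#$%&"

def resemblant_grawlix(word: str) -> str:
    """Keep the first letter; mask the rest with suggestive symbols."""
    masked = (LOOKALIKES.get(c, random.choice(FILLERS)) for c in word[1:])
    return word[0] + "".join(masked)

print(resemblant_grawlix("shit"))  # s#!%
```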

Now, full grawlixes obfuscate almost entirely. The only priming comes from knowledge of the grawlix convention and the final exclamation point. Compare this to partial grawlixes like s@#$, f@#$ and b@#$%, which are readily parsed as shit, fuck and bitch.

The wiki on typoglycemia provides some folkloric insight into the comprehension of obscured words. In the example data on the page, notice that the words all retain both of their exterior characters. There is a higher rate of comprehension when the interior letters are scrambled compared to simple reversals. That’s strange. I suspect it all comes back to priming, context and frequencies. Take the example jumbled sentence below:

Mroe itnsihgs can be galeend form Mkraov Stnneece Meodls.

Some words are decipherable, but ‘insights’, ‘gleaned’ and the named entity are much harder.
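
The transform behind the example is easy to sketch. A minimal version, ignoring punctuation:

```python
import random

def scramble_interior(word: str) -> str:
    """Shuffle a word's interior letters, keeping the exterior ones fixed."""
    if len(word) <= 3:
        return word  # nothing interior to scramble
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

sentence = "More insights can be gleaned from Markov Sentence Models"
print(" ".join(scramble_interior(w) for w in sentence.split()))
# e.g. 'Mroe ishgtins can be geanled form Mkarov Snetnece Mdoels'
```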

A relevant passage from livescience.com:

We use context to pre-activate the areas of our brains that correspond to what we expect next, she explained. For example, brain scans reveal that if we hear a sound that leads us to strongly suspect another sound is on the way, the brain acts as if we’re already hearing the second sound. Similarly, if we see a certain collection of letters or words, our brains jump to conclusions about what comes next. “We use context to help us perceive,” Kutas said.

More insight can be gleaned from Markov chains and n-grams. From decontextualize.com:

n-grams:

An n-gram is simply a sequence of units drawn from a longer sequence; in the case of text, the unit in question is usually a character or a word. The unit of the n-gram is called its level; the length of the n-gram is called its order.

N-grams are used frequently in natural language processing and are a basic tool of text analysis. Their applications range from programs that correct spelling to creative visualizations to compression algorithms to generative text.

So, an n-gram is a sequence of words of length n. The sentence ‘I love coffee’ is itself a trigram, or 3-gram. Within the sentence we have two bigrams and three 1-grams, for a total of six n-grams. In general that total is n(n+1)/2, where n is the length of the string in question. Now, if you search for all the bigrams in the sentence above, you get ‘I love’ and ‘love coffee’. If you were to posit the missing elements around these n-grams, you might very well say ‘I love (you)’ and ‘(Seattleites) love coffee’. As experts in our respective languages, we have some foundational knowledge of context, frequencies and probabilities.
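
Extraction is a short function. A sketch of word-level n-grams in the spirit of the passage above, along with the n(n+1)/2 count:

```python
def ngrams(tokens, n):
    """All contiguous n-grams of order n from a list of tokens."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "I love coffee".split()

print(ngrams(tokens, 2))  # [('I', 'love'), ('love', 'coffee')]
print(ngrams(tokens, 1))  # [('I',), ('love',), ('coffee',)]

# Counting every order from 1 to len(tokens) gives n(n+1)/2 n-grams:
total = sum(len(ngrams(tokens, n)) for n in range(1, len(tokens) + 1))
print(total)  # 6, i.e. 3*4/2
```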

Markov Chains, from onthelambda.com:

A Markov chain is a system that transitions between states using a random, memoryless process. The transition from one state to another is determined by a single random sample from a (usually discrete) probability distribution. Additionally, the current state wanders aimlessly through the chain according to these random transitions with no regard to its previous states…

…each word is a state, and the transitions are based on the number of times a word appears after another one. For example, “I” always precedes “am” in the text, so the transition from “I” to “am” occurs with certainty (p=1). Following the word “the”, however, “son” occurs twice and “heir” occurs once, so the probabilities of the transitions are .66 and .33, respectively.

Text can be generated using this Markov chain by changing state until an “absorbing state” is reached, where there are no longer any transitions possible.
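
A minimal sketch of that generation process. The toy training text here is my own, chosen only to reproduce the counts in the quoted example (“I” always precedes “am”; “the” is followed by “son” twice and “heir” once):

```python
import random
from collections import defaultdict

text = "I am the son and the heir of the son of a shyness"
words = text.split()

# Record, for each word, every word observed immediately after it.
transitions = defaultdict(list)
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

# Walk the chain until an "absorbing state" (a word with no outgoing
# transitions) is reached, with a safety cap on length.
state = "I"
output = [state]
while state in transitions and len(output) < 50:
    state = random.choice(transitions[state])  # sample proportional to counts
    output.append(state)

print(" ".join(output))
```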

Back to curse words, partial grawlixes and X-words. X-words seem rather straightforward: first letter + ‘word’. But how does X-wording account for morphology? F-word is equivalent to fuck, but what about fuck+er? F-worder does not work. I think one would probably take the initial character and add the morpheme: f-er (‘effer’). How does partial grawlixing handle morphology? F***** does not seem to work. Combinatorics may tell us why.

If we say that the * can be any character, then we have a f***-ton of options for parsing f*****. Since English has 26 letters, our string has 1*26*26*26*26*26 = 11,881,376 options. If we don’t mask the morphology, as in f***er, then we have 1*26*26*26*1*1 = 17,576 options. But really, context and our knowledge of frequencies point us straight at the fucker. And there is more to it still, considering that phones are not equally likely to follow one another.
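
The arithmetic, plus a toy lexicon filter showing how quickly real words thin the field. The word list is a hypothetical stand-in for a real dictionary:

```python
import re

# If * can be any of the 26 letters:
print(26 ** 5)  # f*****: 11,881,376 candidate strings
print(26 ** 3)  # f***er: 17,576 candidate strings

# Restricting to actual words collapses the space far further.
lexicon = ["fucker", "feeder", "fiddle", "fathom", "folder", "frozen"]
print([w for w in lexicon if re.fullmatch(r"f[a-z]{3}er", w)])
# ['fucker', 'feeder', 'folder']; frequency and context do the rest
```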

I am not sure I have explained anything, or even begun to address the rules of derivation for partial grawlixes and X-words. Frankly, I’m a little tired of this s***.
