Coming Good and Breaking Bad: Generating Transformative Character Arcs For Use in Compelling Stories

Tony Veale
School of Computer Science and Informatics
University College Dublin, Belfield D4, Ireland.
Tony.Veale@UCD.ie

Abstract

Stories move us emotionally by physically moving their protagonists, from place to place or from state to state. The most psychologically compelling stories are stories of change, in which characters learn and evolve as they fulfil their dreams or become what they most despise. Character-driven stories must do more than maneuver their protagonists as game pieces on a board; they must move them along arcs that transform their inner qualities. This paper presents the Flux Capacitor, a generator of transformative character arcs that are both intuitive and dramatically interesting. These arcs – which define a conceptual start-point and end-point for a character in a narrative – may be translated into short story pitches or used as inputs to an existing story-generator. A corpus-based means of constructing novel arcs is presented, as are criteria for selecting and filtering arcs for well-formedness, plausibility and interestingness. Characters can thus be computationally modeled as dynamic blends that unfold along a narrative trajectory.

Metamorphosis

As Gregor Samsa awoke one morning from uneasy dreams, he found himself transformed in his bed into a monstrous vermin.

So starts Franz Kafka's novella of transformation, titled Metamorphosis, in which the author explores issues of otherness and guilt by exploiting a character's horrific (if unexplained) change into an insect. Authors from Ovid to Kafka demonstrate the value of transformation – physical, spiritual and metaphorical – as a tool of character development, just as storytellers from Homer to Kubrick demonstrate the value of journeys as support-structures for narratives of becoming and change. Even narratives that are primarily plot-focused or action-centric can often be succinctly summarized by listing key character transformations. Consider Gladiator, an Oscar-winning action film from 2000. The main villain of that piece, Emperor Commodus, summarizes the plot with three successive transformations: "The general who became a slave. The slave who became a gladiator. The gladiator who defied an emperor." Note how the third transformation is implicit, for the gladiator Maximus has transformed himself into a potential leader of Rome itself.

Kafka presents his driving transformation as a fait accompli in the very first line of his story, while in Ovid's Metamorphoses, characters are transformed by Gods into trees or animals with magical immediacy. Most narrative transformations occur gradually, however, with a story charting the course of a character's development from a start-state S to target-state T. In this respect the television drama Breaking Bad offers an exemplary model of the slow-burn transformation. We first meet the show's main character, Walter White, in his guise as a put-upon high-school chemistry teacher. "Chemistry", he tells us, "is the study of change." Though Walter has a brilliant mind, he lives a dull suburban life of quiet desperation, until a diagnosis of lung cancer provides a catalyst to look anew at his life's choices. Walter decides to use his chemistry skills to "cook" and sell the drug Crystal Meth, and recruits former student Jesse as a drug-savvy partner. In 62 episodes, the show charts the slow transformation of Walter from dedicated teacher to ruthless drug baron.
As the show's writer/creator Vince Gilligan put it, "I wanted to turn my lead character from Mr. Chips into Scarface." Walter's progress is neither smooth nor monotonic. He becomes an unstable, dynamic blend of his start and end states. Though he commits unspeakable crimes, he never entirely ceases to be a caring parent, husband or teacher. As viewers we witness a true conceptual integration of his two worlds: Walter brings the qualities of a drug baron to his family relationships, just as he brings the qualities of a husband and father-figure to his illicit business dealings. To fully appreciate this nuanced character transformation, we must understand it as more than a monotonic journey between two states: characters must unfold as evolving blends of the states that they move between, so they can exhibit emergent qualities that arise from no single state.

This paper presents a CC (Computational Creativity) system – the Flux Capacitor – for generating hypothetical character arcs for use in story generation. The Flux Capacitor is not itself a story-generation system, but a stand-alone system that suggests "what-if" arcs that may underpin interesting narratives. Though it is a trivial matter to randomly generate arcs between any two conceptual perspectives – say between teacher and drug-baron, or terrorist and politician – the Flux Capacitor generates arcs that are well-formed, well-motivated, intuitive and of dramatic interest. It does so by using a rich knowledge-representation of our stereotypical perspectives on humans, knowing e.g. what qualities are exhibited by teachers or criminals. It uses corpus analysis both to acquire a stock of valid start- and end-states and to model the most natural direction of change. It further uses a robust model of conceptual blending to understand the emergent qualities that may arise during a transformation.

The Flux Capacitor builds on a body of related work which will be discussed in the next section. The means by which novel transformative arcs are formulated is then presented, followed by a model of property-level blending and proposition-level analogy/disanalogy. The Flux Capacitor does more than generate a list of possible character arcs: it provides to a third-party story-generator a conceptual rationale for each transformation, so a story-teller may properly appreciate the ramifications of a given arc. In effect this rationale is a pitch for a story. Before drawing our final conclusions, we describe how such a pitch can be constructed from a blending analysis.

Related Work and Ideas

What is a hero without a quest? And what is a quest that does not transform its hero in profound ways? The scholar Joseph Campbell has argued that our most steadfast myths persist because they each instantiate, in their own way, a profoundly affecting narrative structure that Campbell calls the monomyth. Campbell (1973) sees the monomyth as a productive schema for the generation of heroic stories that, at their root, follow this core pattern either literally or figuratively: "A hero ventures forth from the world of common day into a region of supernatural wonder: fabulous forces are encountered and a decisive victory is won: the hero comes back from this mysterious adventure with the power to bestow boons on his fellow man." Many ancient tales subconsciously instantiate this schema, while many modern stories – such as George Lucas's Star Wars – are consciously written so as to employ Campbell's monomyth schema as a narrative deep-structure.
A comparable schematic analysis of the heroic quest is provided by Propp's Morphology of the Folk Tale (1968). Like Campbell, Propp identifies an inventory of recurring classes (of character and event) that make up a traditional Russian folk tale, though Propp's analysis can be applied to many different kinds of heroic tale. Transformative elements in Propp's inventory include Receipt of Magical Agent, which newly empowers a hero, Transfiguration, in which a hero is rewarded through change, and Wedding, through which a hero's social status is elevated. Propp also anticipates that a truly transformed hero may not be recognized on returning home (Unrecognized Arrival) and may have to undergo a test of identity (Recognition). The basic morphemes of Propp's model can be used either to analyze or to generate stories, in the latter case by using a variant of Fritz Zwicky's Morphological Analysis (1969). Propp's morphemes have thus been used in the service of automated game design (Fairclough and Cunningham, 2004) as well as creative story generation (Gervás, 2013).

Campbell's monomyth and Propp's morphology can each be subsumed under a more abstract mental structure, the Source-Path-Goal (SPG) schema analyzed by Johnson (1987). Johnson argues that any purposeful action along a path – from going to the shops to undertaking a quest – activates an instance of the SPG schema in the mind. In cinema the SPG is most obviously activated by "road movies", in which (to quote the marketing campaign for Dances With Wolves) a hero goes "in search of America and finds himself". Such movies use the SPG to align the literal with the figurative, so that a hero starts from a state that is both geographic and psychological, and reaches an end-point that is similarly dual-natured. The SPG schema is also evident in comic-book tales in which an everyman is transformed into a superheroic form that permits some driving goal (revenge, justice) to be achieved. Forceville (2006) has additionally used the SPG to uncover the transformative-quest structure of less overtly heroic film genres, such as documentaries and autobiographical films.

Storytelling is itself a purposeful activity with a beginning (Source), middle (Path) and end (Goal), one that typically shapes the events of a narrative into purposeful action on the part of one or more characters. Computer systems that generate stories – as described in e.g. Meehan (1981), Turner (1994), Pérez y Pérez & Sharples (2001), Riedl & Young (2004) and Gervás (2013) – are thus, implicitly, automated instantiators of the Source-Path-Goal schema. This is especially so of story systems, like that of Riedl & Young, that employ an explicitly plan-based approach to generation. These authors use a planner that is anchored in a model of the beliefs and internal states of the story's characters, so as to construct narrative plans that call for believable, well-motivated actions from these characters. The use of a planner also ensures that these actions create the appearance of an intentional SPG path that is viewed as plausible and coherent by the story's audience.

Outside the realm of myths and fairy-tales, the deepest transformations are to the beliefs and internal states of a character, though such profound changes may be reflected in outward appearances too, such as via a change of garb, residence, place of work, or choice of tools. Consider the case of a prostitute who becomes a nun, or the altogether rarer case of a nun who breaks bad in the other direction.
Such transformations are dramatically interesting because they create oppositions at the levels of properties and of propositions. Though frame-level symmetries are present, since each kind of person follows a particular vocation in a particular place of work while wearing a particular kind of clothing, the specific frame-fillers are very different. We can imagine a tabloid headline screaming "Nun burns habit, buys thong" or "Nun flees convent, joins bordello." Analogies and disanalogies between the start- and end-states of a transformation provide fodder for the evolving blends that need to be constructed to ferry a protagonist between these two states in a narrative.

Conceptual blending is a knowledge-hungry process par excellence (see Fauconnier and Turner, 1998, 2002). However, Veale (2012a) presents a computational variant of conceptual blending, called the conceptual mash-up, that is robust and scalable. Propositional knowledge is milked from various Web sources – such as query completions from Web search engines – and, using corpus evidence, this knowledge is mapped to more than one concept. Veale (2012b) also presents a robust method for mining stereotypical properties from Web similes, such as "as chaste as a nun" and "as sleazy as a prostitute". Used here, these representations allow the Flux Capacitor to analyze the blending potential of a transformative arc, and so construct a conceptual rationale as to why a given arc has the potential to underpin an interesting narrative.

Opposites Attract

At its most reductive, a transformative character arc is an unlabeled directed edge S→T that takes a character from a conceptual starting-state S to a conceptual end-point T, where S and T are different lexicalized perspectives on a character (e.g. S=activist and T=terrorist). To be a truly transformative arc, as opposed to an arbitrarily random pairing of S and T states, an arc should induce a dramatic change of qualities. Superficially, this change may be reflected in a reversal of affective polarity from S to T. Thus, if S is viewed as a positive state overall, such as activist, saint or defender, and T is predominantly seen as a negative state, such as terrorist, prostitute or tyrant, then a character will break bad by following this arc. Conversely, if S is most often seen as a negative state, and T is typically seen as a positive state, then a character will come good by following this arc. Naturally, our overall affective view of a concept will be a function of our property-level perception of all its stereotypical qualities. If S typically evokes a preponderance of positive qualities then it will be viewed as a positive state overall. Likewise, if S typically evokes a preponderance of negative qualities then it will be viewed as a negative state overall. A means of mapping from property-level representations to overall +/- affective polarity scores is presented in Veale (2011).

Stories thrive on conflict and surprise, and surprising transformations arise when the pairing of S and T gives rise to a clash of opposing properties. Consider again the case of the prostitute (=S) who becomes a nun (=T). The transformation S→T at the conceptual level implies the property-level oppositions dirty-pure, immoral-moral, promiscuous-chaste and sleazy-respected, affording an opportunity for a truly dramatic Proppian transfiguration.
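This opposition test is easy to make concrete. The following Python fragment is a minimal, hypothetical sketch: the property sets are illustrative stand-ins for the system's simile-mined stereotype knowledge, and NLTK's WordNet interface is assumed as one convenient source of antonym links (the paper commits to WordNet, but not to this toolkit).

```python
# A minimal sketch of the property-level opposition test. The property
# sets are illustrative stand-ins for the system's stereotype KB, and
# NLTK's WordNet interface is one assumed source of antonym links.
from nltk.corpus import wordnet as wn

STEREOTYPE_PROPS = {
    "nun":        {"pure", "moral", "chaste", "respected"},
    "prostitute": {"dirty", "immoral", "promiscuous", "sleazy"},
}

def antonyms(adjective):
    """All WordNet antonyms of an adjective, across all of its senses."""
    return {ant.name()
            for syn in wn.synsets(adjective, pos=wn.ADJ)
            for lemma in syn.lemmas()
            for ant in lemma.antonyms()}

def property_oppositions(s, t):
    """Pairs (p, q) where a property of S is a WordNet antonym of one of T."""
    return [(p, q)
            for p in STEREOTYPE_PROPS[s]
            for q in STEREOTYPE_PROPS[t]
            if q in antonyms(p) or p in antonyms(q)]

print(property_oppositions("prostitute", "nun"))
# e.g. [('immoral', 'moral')] -- exact pairs depend on WordNet's antonym links
```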
Generalizing, we say that a character arc S→T implies a direct opposition at the property-level if S and T each exhibit properties that can produce antonymous pairs. We thus use WordNet (Fellbaum 1998) as a comprehensive source of antonymy relationships (such as pure-dirty), which we apply to any putative arc S→T to determine whether the arc involves a dramatic conflict of properties. This property-level analysis allows the Flux Capacitor to identify nuanced transformations that allow a character to come good while also breaking bad. Consider the arc beggar→king. A character following this arc may come good in many ways, by going from lowly→lordly, poor→lofty, broke→wealthy, impoverished→privileged and ragged→regal. Yet such an arc may induce negative effects too, changing a character from humble→arrogant, humble→haughty and humble→unapproachable. Perhaps a beggar that becomes a king may come to rue his change of station, while a king that becomes a beggar may derive some small comfort from his fall from grace?

Yet S and T need not conflict directly at the property-level to yield an opposition-rich transformation. The clash of properties may be indirect, if S relates to a concept S' in the same way that T relates to T', and if a clash of opposing properties can be observed between S' and T'. For instance, scientists and priests do not directly oppose one another, but a property-level clash can be found in the stereotypical representations of science and religion, since science is stereotypically rational while religion is often seen as irrational. Since scientists practice science while priests practice religion, a character that goes from being a scientist to being a priest will, in a leap of faith, reject rational science and embrace irrational religion instead.

A gifted storyteller can surely make any transformation, no matter how random or illogical, seem interesting. Such is the art of improvisational comedy, after all. However, rather than abdicate its responsibility for making an arc interesting to a subsequent story-telling component, the Flux Capacitor applies its own filtering criteria to find the arcs it considers to have dramatic potential. An arc S→T is generated only if S and T possess opposing qualities, or if S and T are indirectly opposed by virtue of being analogously related to a concept pair S' and T' that do. We now turn to how S and T are found in the first place.

Charging the Capacitor

We often speak of children in terms of what they may one day become, but speak of adults in terms of what they have already become. Some concepts are more naturally thought of as start-states in a transformation, while others are more naturally viewed as end-states. Beyond the clear-cut cases, most concepts sit on a continuum of suitability for use on either side of a transformation. To determine the suitability of a given concept C as either a start-state or an end-state, we can simply look to a large text corpus. The frequency of the 2-gram "C+s become" in a corpus such as the Google n-grams (Brants and Franz, 2006) will indicate how often C is viewed as a start-state, while the frequency of the 2-gram "become C+s" will indicate C's suitability as an end-state. Since the n-gram frequency of "become terrorists" (7180) is roughly six times greater than the frequency of "terrorists become" (1166), terrorist is far more suited to the role of end-point than to start-point.
The Flux Capacitor limits its choice of start-states to any stereotype S for which the Google n-grams contain the bigram "S+s become". Similarly, it limits its choice of end-states to any stereotype T for which Google provides the bigram "become T+s". Within these constraints, the Google n-grams suggest 1,213 person-concepts to use as start-states, and 1,529 to use as their ultimate end-states.

The Google n-grams contain a small number (< 500) of well-established transformations between person-types that can be found via the pattern "S-turned-T". Examples include friend-turned-foe, bodybuilder-turned-actor and actor-turned-politician. Though some turns have dramatic value (like bully-turned-Buddhist), most are well-trodden paths with little to offer a creative system. Nonetheless, the Google n-grams are a valuable source of inspiration for the generation of novel transformations that combine complementary ideas. For the n-grams can tell us whether two ideas have a history of working well together, either in harmony or as part of an antagonistic double-act. Consider the 3-gram pattern "X+s and Y+s", which matches all instances of coordinated bare plurals in the Google n-grams. Examples include "angels and demons", "nuns and prostitutes" and "scientists and priests". While these attested coordinations often bring together opposing concepts, they are concepts drawn from the same domains or semantic fields, and thus seem fitted to each other. So while a transformation linking two such conflicting states may strike one as a surprising turn of events, it will also likely strike one as a fitting turn of events. By mining the Google 3-grams for instances of this pattern that connect a valid start-state to a valid end-state, where these states also exhibit either a direct or indirect conflict of qualities, the Flux Capacitor harvests a large collection of potential state-pairs for its own transformative character arcs. The question of which state can best serve as the start-state, and which should serve as the end-state, is decided afterwards.

Coordinations are a rich source of explicit contrasts between conceptual states, but other n-grams are an even richer source of implicit contrasts. Consider the 3-gram "army of dreamers". The typical member of an army is a soldier, not a dreamer, as borne out by the system's own propositional world-knowledge. This 3-gram thus implies a clash of soldiers and dreamers, which in turn implies the property-level conflicts disciplined-undisciplined and fit-lethargic. Generalizing, we mine all Google 3-grams that match the pattern "X of Y+s" – such as "church of heretics", "army of cowards" and "religion of sinners" – to identify any cases where the stated member (sinner, coward, etc.) contrasts with a known stereotypical member of the group X. A large pool of contrasting concept pairs is mined in this way from the Google n-grams, to be used to form each side of a transformative character arc.

But what trajectory should each transformation follow? Which concept will serve as the start-point S of an arc, and which as its end-point T? We infer the most natural direction for an arc by again looking to corpus data. For a pair of contrasting concepts X and Y, we calculate a score for the arc X→Y as the sum of the n-gram frequencies for "X+s become" and "become Y+s". Likewise, we calculate the score for the arc Y→X as the sum of the n-gram frequencies for "Y+s become" and "become X+s". We then choose the arc/direction with the greatest score.
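A compact sketch of this direction test follows; the ngram_freq() lookup is a hypothetical stand-in for the Google n-gram counts, and the naive "+s" pluralization is likewise an assumption of the sketch.

```python
# A sketch of the arc-direction test: for a contrasting pair (X, Y),
# sum the "X+s become" and "become Y+s" evidence in each direction and
# keep the better-attested orientation. ngram_freq() is hypothetical.

def ngram_freq(ngram):
    """Hypothetical corpus lookup; the two figures below are from the paper."""
    COUNTS = {"terrorists become": 1166, "become terrorists": 7180}
    return COUNTS.get(ngram, 0)

def orient_arc(x, y):
    """Return the contrasting pair (x, y) in its most natural direction."""
    forward  = ngram_freq(x + "s become") + ngram_freq("become " + y + "s")
    backward = ngram_freq(y + "s become") + ngram_freq("become " + x + "s")
    return (x, y) if forward >= backward else (y, x)

print(orient_arc("terrorist", "activist"))   # ('activist', 'terrorist')
```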
Consider, for example, the pair militant and politician, which share, in the world-view of the Flux Capacitor, this implicit contrast: militants launch celebrated rebellions, whilst politicians launch hated wars. Corpus data suggests that politician is more suited to be the end-state of an arc than its start-state, perhaps because politicians must be elected, and election is an obvious goal-state in the SPG schema. In contrast, militant is slightly more comfortable in the role of start-state than end-state, no doubt because militants fight so as to initiate some future change. Thus, the arc militant→politician is favored over its inverse, politician→militant, and so only the former is generated.

Blended States

In character-led stories, key transformations often unfold gradually through a build-up of incremental changes. So as characters follow their trajectory along an arc that takes them ever closer to their final state, they will exhibit more of the qualities we stereotypically associate with the end-point of their arc and fewer of the properties we associate with their starting point. In effect, a changing character becomes a dynamic blend of the starting-point and end-point concepts that define its narrative trajectory. The theory of conceptual integration networks, also known as conceptual blending (see Fauconnier & Turner, 1998, 2002), offers a principle-driven framework for the interpretation of any blend, while Veale (1997) further explores the workings of character blends that gradually unfold during a narrative.

A character blend – a character that moves between two states and thus assumes a mix of the properties and behaviors associated with each – can be modeled computationally at the level of properties and of propositions. To model the former, we explore the space of complex properties that integrate nuances from each of the inputs, while to model the latter we draw on Markman and Gentner's (1993) theory of alignable differences. Consider a proposition-level blend in the shocking case of our nun-turned-prostitute. The alignable differences in this example concern the propositions associated with nuns and with prostitutes that can be aligned by virtue of positing exactly the same relationship for each subject, but with different values for their objects. For instance, nuns work and reside in convents or cloisters, under the supervision of a mother superior, while prostitutes work and reside in bordellos under the supervision of madams and pimps. So as this transformation is effected, convents and cloisters will give way to bordellos, while mother superiors will lose out to pimps and madams, just as wimples and habits will transition into an altogether racier style of dress. It is a simple matter to connect propositions with alignable differences such as these, to produce a structural blend that is part analogy and part disanalogy.

The Flux Capacitor is also sensitive to the reversals of status and power that accompany a given transformation. By attending to the relationships that link a subject A to an object B, and the relationships that reciprocally link B as a subject to A as an object, it learns how to recognize situations where a protagonist's social inter-relationships are dramatically reversed in a blend. Thus, for instance, it observes a fundamental tension between the verbs obey and control, between ruling and being led, and between governing and electing.
In the case of a king-turned-slave, then, it perceives an interesting reversal of power, where a once-mighty king goes from being served by respectful followers to being led by haughty and arrogant rulers, just as he may go from appointing fawning servants to being managed by dominant and exalted masters. The scale of each reversal is emphasized by highlighting the most pointed contrasts between the blended states; thus, it also suggests that our deposed king goes from being served by honorable knights to being led by depraved rulers. While these new rulers need not be depraved, it heightens the dramatic potential of the blend to assume that they are.

At the property-level, we strive to understand how a property A associated with a start-state S, and a property B associated with an end-state T, might yield an emergent property AB that arises from a character's transformation from S into T. Might our nun-turned-prostitute retain a residual sense of piety, even if such piety were to be unjustified or even immoral? The Google 2-grams inform us that the phrase "immoral piety" denotes an attested state (with a Web frequency of at least 49). Since nuns are typically pious and so practice piety, while prostitutes are typically seen as immoral, immoral piety denotes the kind of nuanced state that may arise as one state gives way to the other. The Google n-grams also suggest, in this vein, that a nun-turned-prostitute might be a moral prostitute, a compassionate prostitute, a religious prostitute or, at least, a spiritual prostitute, one that commits pure or virtuous sins despite practicing a sleazy morality and a dirty faith. Likewise, when intellectuals become zealots, attested 2-grams that bridge both states include "inspired rant", "misguided superiority", "uncompromising critique", "extreme logic", "intellectual obsession", "scholarly zeal" and even "educated stupidity". The Google n-grams attest to the validity of a great many complex states that can be surprising and revealing. By seeking out nuanced states that bridge the properties of the conflicting concepts in a character arc, the Flux Capacitor can tap into the vast, collective imagination of readers and writers as exercised for other, past narratives.

Hold The Presses

These blend interpretations serve to advertise the merits of a given character transformation: the richer the blend, in terms of aligned propositions and nuanced properties, the richer the narrative it should yield when turned over to a dedicated story-generation system. In many ways, then, these blend interpretations are the computational version of a Hollywood story pitch, in which a screenwriter sells his or her vision of a story to the studio that will make it. Like a Hollywood studio, which can only afford to make a small number of films per year, a story-generation system will need some narratological basis to judge which story ideas to further refine and which to reject outright. The Flux Capacitor is not a story-generation system, but a creator of high-concept story ideas. Yet to better sell these ideas, it uses natural-language generation techniques to convert its blend analyses into simple pitches.
Consider the following pitch, in which each mapping in the blend for nun→prostitute has been realized as its own sentence:

Nun condemns chastity, wallows in wickedness
Nun criticizes convents, bounces into brothels
Nun chucks crucifixes, gropes for garters
Nun fatigued by fidelity, veers toward vices
Nun hates habits, stockpiles stilettos
Nun mistreated by mother superiors, pulled to pimps
Nun skips out of spectacles, loves latex
Nun vents about veils, crazy for corsets
Nun vents about virginity, seduced by shamelessness
Nun whines about wimples, grabs garters
Nun goes from being managed by abbesses and mother superiors to being controlled by pimps
Nun goes from carrying beads to carrying infections
Does strict chastity struggle with wild promiscuity?
How long can outer purity suppress inner filth?
Nun goes from being unflinchingly faithful to being increasingly unfaithful
Nun goes from living in cloisters and convents to working in brothels and bawdy houses
Can inner morality be transformed into naked sin?
Nun goes from practicing chastity to practicing vices
How long can a superficial respectability suppress pervasive sleaze?
Nun goes from wearing habits and crucifixes to wearing corsets and fishnets
Nun goes from wearing veils and spectacles to wearing latex and stilettos
Nun goes from wearing wimples to wearing hotpants

Note the simple structure of each sentence in the pitch. Wherever possible, a tabloid-headline style is employed, using alliteration – as in condemns chastity, wallows in wickedness – to make each stage of a transformation seem more compelling. Such devices, though simple, embody a strategy that psychologists call the Keats heuristic, for the use of even the most rudimentary rhymes has been empirically shown to heighten the perceived truthfulness of a statement (see McGlone and Tofighbakhsh, 2000). Conversely, character transformations can also be used to craft rhetorical questions and figurative allusions for automated poetry. The Stereotrope system of Veale (2013) thus generates rhetorical questions such as "how does a selfish wolf become a devoted zealot?", "how does a devoted zealot become a selfish bully?" and "how does a mindless zealot become a considerate lover?" to allude to unknown protagonists whose identity must ultimately be determined by the reader.

Transformative Possibilities

The Flux Capacitor uses the corpus-based techniques of the previous sections to construct 63,016 unique character transformation arcs, using a combination of the Google n-grams, a large database of stereotypical properties, and a propositional model of world-knowledge. Each arc links character states that conflict either directly or indirectly, and each gives rise to its own blending interpretation. Some arcs simply demand too much from an audience. Novel character arcs may be provocative, but they should rarely be jarring. Arcs that strain credulity, or require an element of cod science to work at all, are best avoided. While it is not possible to predict every fault line along which a narrative may rupture, it is worth considering the most obvious problem-cases here, as these allow us to draw broad generalizations about the quality of our arcs. The first problem-case concerns gender. Though there exist famous and dramatically successful exceptions to this rule, such as Virginia Woolf's Orlando, characters rarely change their gender during a transformation.
Of the valid start/end states used by the Flux Capacitor, 84 are manually annotated as male, such as pope and hunk, while 72 are annotated as female, such as geisha and nun. All other states are assumed to be compatible with both male and female characters. In all, 9,915 of the 63,016 arcs that are generated involve one or more gender-marked states. Of these, only 7% involve a problematic mix of genders (e.g. pope→mother). Though a creative story-teller might make lemonade from these lemons (e.g. as in the tale of Pope Joan, who passed as a man until made pregnant), the Flux Capacitor simply filters these arcs from its output.

The second problem-case concerns age. Once again, though Hollywood may occasionally find a cod-science reason to reverse time's arrow, characters rarely transform into people younger than themselves. Not wishing to paint a story-teller into a corner, where it must appeal to a dust-blown plot device such as time travel, body swapping or family curses to get out, the Flux Capacitor aims to avoid generating such arcs altogether. So of its valid start/end states, 52 are manually tagged for age to reflect our strong stereotypical expectations. Elders such as grandmother, pensioner and archbishop are assigned a time-point of 60 years, while youths such as student, rookie and newcomer are given a time-point of 18. Younger states, such as baby, toddler, child, kid, preteen and schoolgirl, are assigned lower time-points still, while those states unmarked for age are all assumed to have a default time-point of 30. In all, 7,892 arcs are generated for which one or more states is explicitly marked for age. Now, if our corpus-based approach to determining the trajectory of an arc is valid, we should expect most of these 7,892 arcs to flow in the expected younger→older direction. In fact, 76% of arcs do flow in the right direction. The remaining 24% are not simply discarded, however. Rather, these arcs are inverted, turning e.g. mentor→student into student→mentor.

The ultimate test of a character transformation is the quality of the narrative that can be constructed around it. We cannot evaluate the quality of these narratives until they have been woven by a subsequent story-generation system, acting as a user of the Flux Capacitor's outputs. Nonetheless, the diversity of the Flux Capacitor's outputs – 63,016 well-formed arcs, bridging 1,213 start-states to 1,529 end-states in interesting ways that pair concepts that conflict and which also exhibit corpus-attested affinities – is a reason to be optimistic about the quality of the many as-yet-unwritten stories that may employ these arcs.
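Before moving on, the gender and age filters just described can be summarized in a short sketch; the annotations shown are illustrative samples of the 84 male, 72 female and 52 age-tagged states, with unmarked states defaulting to either gender and a time-point of 30.

```python
# A sketch of the gender and age filters. The annotations are
# illustrative samples; unmarked states default to either gender
# and to a time-point of 30.

GENDER = {"pope": "m", "hunk": "m", "geisha": "f", "nun": "f", "mother": "f"}
TIME_POINT = {"toddler": 3, "student": 18, "rookie": 18,
              "newcomer": 18, "pensioner": 60, "grandmother": 60}

def well_formed_arc(s, t):
    """Drop mixed-gender arcs; invert arcs that run against time's arrow."""
    gs, gt = GENDER.get(s), GENDER.get(t)
    if gs and gt and gs != gt:
        return None                  # e.g. pope -> mother is filtered out
    if TIME_POINT.get(s, 30) > TIME_POINT.get(t, 30):
        s, t = t, s                  # e.g. mentor -> student is inverted
    return (s, t)

print(well_formed_arc("mentor", "student"))  # ('student', 'mentor')
print(well_formed_arc("pope", "nun"))        # None
```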
Back to the Future

Georges Braque, who co-developed Cubism with Pablo Picasso, was less than impressed with the arc of Picasso's career, noting late in life that "Pablo used to be a good painter, but now he's just a genius." If character arcs induce change, such changes are just as likely to remove a desirable quality as to add one. For Braque, to go from noted painter to certified genius was to follow a downward arc, for Picasso was now to be feted more for his politics, his lifestyle and his women than for any of his painterly gifts. Braque's view of Picasso's career is witty because it runs against expectation: to become a genius is often seen as the highest of achievements and not a vulgar booby prize.

As we strive to make the Flux Capacitor generate arcs that seem interesting yet plausible, we must remember that it is not just a transformation per se that can be original, but the manner in which we choose to interpret it, not to mention the way we ultimately use it in a story. Creativity requires more than generative capability, and a generative system is merely generative if it can perform neither deep interpretation nor critical assessment nor insightful filtering of its own outputs. Though the Flux Capacitor is just one part of a story-generation pipeline, it is not a mere generator of character arcs. It operates in a large space of possible transformations, sampling this space carefully to identify those transformations that change a character in dramatically interesting ways into something that is at once both incongruous and fitting.

The property transfers that accompany a transformation may serve as causes or as effects. That is, some property shifts initiate a change while others naturally follow on as consequences of these root causes. Consider the case of a king-turned-slave, in which the Flux Capacitor identifies the following wealth of property conflicts and shifts:

worshipped→contemptible, revered→contemptible, lofty→inferior, lofty→subservient, lofty→submissive, anointed→cursed, powerful→powerless, powerful→contemptible, powerful→frightened, powerful→scared, powerful→inferior, magisterial→powerless, learned→illiterate, learned→uneducated, commanding→cowering, commanding→subservient, commanding→passive, commanding→powerless, commanding→submissive, rich→powerless, rich→malnourished, rich→miserable, merry→miserable, merry→unfortunate, crusading→frightened, august→contemptible, celebrated→contemptible, honored→contemptible, regal→powerless, spoiled→whipped, spoiled→abused, spoiled→overworked, spoiled→exhausted, spoiled→malnourished, spoiled→overburdened, spoiled→exploited, comfortable→miserable, contented→unhappy, contented→miserable, delighted→unhappy, leading→submissive, leading→subservient, ruling→submissive, ruling→subservient, lordly→inferior, pampered→whipped, pampered→abused, pampered→overworked, pampered→exhausted, pampered→malnourished, pampered→overburdened, pampered→exploited, prestigious→inferior, prestigious→subservient, prestigious→submissive, reigning→submissive, reigning→subservient, royal→inferior, royal→subservient, exalted→inferior, exalted→subservient, deified→powerless, beloved→cursed, beloved→miserable, beloved→contemptible, beloved→condemned, magnificent→powerless, magnificent→miserable, magnificent→contemptible, honorable→contemptible, great→powerless, dominant→subservient, dominant→dependent, dominant→inferior, dominant→submissive, mighty→powerless, mighty→low-level, mighty→contemptible, mighty→scared, fortunate→unfortunate, fortunate→cursed, fortunate→unhappy, fortunate→miserable, consecrated→cursed, worthy→miserable, adored→contemptible, happy→unhappy, happy→miserable, happy→unfortunate, venerated→contemptible, grand→powerless

Dramatic changes are very often precipitated by external actions, and some states – expressed as past-participles – are easily imagined as both the primary cause and direct effect of a transformation. Thus, the property cursed may serve as both cause and effect of the dramatic humbling of a king, when perhaps cursed by a witch, demon or other entity, as suggested by attested n-grams (e.g. "cursed by a witch").
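How such cause/effect orderings might be chained is suggested by the speculative sketch below; the transition table is a hypothetical stand-in for the kind of corpus evidence discussed next, in which one property-state is observed to lead to another in the Google 3-grams.

```python
# A speculative sketch of causal chaining over property shifts. The
# transition table is a hypothetical stand-in for cause -> effect
# evidence mined from the Google 3-grams.

CAUSES = {
    "cursed":    ["condemned", "abused"],
    "abused":    ["hungry", "dependent"],
    "defeated":  ["powerless"],
    "powerless": ["oppressed"],
    "oppressed": ["tortured", "miserable", "unhappy"],
}

def causal_chain(seed, max_steps=3):
    """Unroll one plausible cause -> effect ordering from a seed property."""
    chain = [seed]
    while max_steps > 0 and chain[-1] in CAUSES:
        chain.append(CAUSES[chain[-1]][0])  # follow the first listed effect
        max_steps -= 1
    return chain

print(causal_chain("defeated"))
# ['defeated', 'powerless', 'oppressed', 'tortured']
```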
Further n-gram analysis will also suggest that one who is cursed may also be condemned and abused, while one who is abused is more likely to be hungry and dependent. Or perhaps our king is first defeated, since the Google 3-grams suggest that defeat leads one to become powerless, that being powerless leads to being oppressed, and that oppression leads one to being tortured, miserable and unhappy. The next stage of the Flux Capacitor's development will thus focus on imposing a plausible causal ordering on the properties that undergo change in a transformation, to provide more conceptual insight to any story-generation system that exploits its character arcs.

A story-generation system may then use a Proppian or Campbellian analysis to impose narrative structure on any such character arc. For a transformed character effectively undertakes a journey, whether or not this journey takes place entirely within one's mind or social circumstances. By better understanding how the arrow of causality may impose a narrative ordering on the property-changes in a story, a system can better impose the morphology of a folk-tale or a monomyth on any generated character arc. Such a system may ask: which property changes conform to what Propp deemed a Transfiguration, and which can best underpin the role of a Magical Agent in a story? Does a character return, or attempt to return, from the end-state of a transformation, and which actions or events can make such a Return possible? What property changes make a character difficult to Recognize post-facto, and which initial properties of a character continue to shine through?

We do not see the Flux Capacitor as a disinterested sub-contractor in the story-telling process, but as an active collaborator that works hand-in-glove with a full story-generator to help weave surprising yet plausible stories. As it thus evolves from being a simple provider of arcs to being a co-creator of stories in its own right, we expect that its usefulness as a sub-contractor to existing story-generation systems will yield insights into the additional features and functionalities it should eventually provide.

Out of the Mouths of Bots

To showcase the utility of the Flux Capacitor as a sub-contractor in the generation of creative outputs, we use the system as a key generative module in the operation of a creative Twitterbot. Twitterbots, like bots in general, are typically simple generative systems that autonomously perform useful, well-defined (if provocative) services. A Twitterbot is an automated generator of tweets, short micro-blog messages that are distributed via the social media platform Twitter. Most Twitterbots, like most bots, are far from creative, and exploit mere generation to send superficially well-formed texts into the twittersphere, so in most cases the conceit behind a particular Twitterbot is more interesting than the content generated by the bot. Twitter is the ideal midwife for pushing the products of true computational creativity – such as metaphors, jokes, aphorisms and story pitches – into the world. A new Twitterbot named MetaphorIsMyBusiness (handle: @MetaphorMagnet) thus employs the Flux Capacitor to generate a novel, well-formed, creative metaphor or story pitch every hour or so. As such, @MetaphorMagnet's outputs are the product of a complex reasoning process that combines a large knowledge-base of stereotypical norms with real usage data from the Google n-grams.
Though encouraged by the quality of the bot's outputs, we continue to expand its expressive range, to give the Twitterbot its own unique voice and identifiable aesthetic. Outputs such as "What is an accountant but a timid visionary? What is a visionary but a bold accountant?" show how @MetaphorMagnet frames the conceits of the Flux Capacitor as thought-provoking metaphors, to lend the bot a distinctly hard-boiled persona. Ongoing work with the bot aims to further develop this sardonic voice.

There are many practical advantages to packaging creative generation systems as Web services, but there are just as many advantages to packaging these services as Twitterbots. For one, the panoply of mostly random bots on Twitter that make little or no use of world knowledge or of true computational creativity – such as the playfully subversive @metaphorminute bot – provide a competitive baseline against which to evaluate the creativity and value of the insights that are pushed out into the world by theory-driven and knowledge-driven Twitterbots like @MetaphorMagnet. For another, the willingness of human Twitter users to follow such accounts regardless of their provenance, and to favorite or retweet the best outputs from these accounts, provides an empirical framework for estimating (and promoting) the quality of the back-end Web services in each case. Finally, such bots may reap some social value in their own right, as sources of occasional insight, wit or profundity, or even of useful metaphors or story ideas that are subsequently valued, adopted, and re-worked by human speakers.

References

Thorsten Brants and Alex Franz. (2006). Web 1T 5-gram database, Version 1. Linguistic Data Consortium.

Joseph Campbell. (1973). The Hero With A Thousand Faces. Princeton University Press.

Chris Fairclough and Pádraig Cunningham. (2004). AI Structuralist Storytelling in Computer Games. In Proceedings of the International Conference on Computer Games: Artificial Intelligence, Design and Education.

Gilles Fauconnier and Mark Turner. (1998). Conceptual Integration Networks. Cognitive Science, 22(2):133–187.

Gilles Fauconnier and Mark Turner. (2002). The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. New York: Basic Books.

Christiane Fellbaum (ed.). (1998). WordNet: An Electronic Lexical Database. Cambridge, MA: MIT Press.

Charles Forceville. (2006). The source-path-goal schema in the autobiographical journey documentary: McElwee, Van der Keuken, Cole. The New Review of Film and Television Studies, 4(3):241–261.

Pablo Gervás. (2013). Propp's Morphology of the Folk Tale as a Grammar for Generation. In Proceedings of the 2013 Workshop on Computational Models of Narrative, Dagstuhl, Germany.

Mark Johnson. (1987). The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. University of Chicago Press.

Arthur Markman and Dedre Gentner. (1993). Splitting the differences: A structural alignment view of similarity. Journal of Memory and Language, 32(4):517–535.

Matthew McGlone and Jessica Tofighbakhsh. (2000). Birds of a feather flock conjointly (?): rhyme as reason in aphorisms. Psychological Science, 11(5):424–428.

James Meehan. (1981). TALE-SPIN. In Roger Schank and C. K. Riesbeck (eds.), Inside Computer Understanding: Five Programs plus Miniatures. Hillsdale, NJ: Lawrence Erlbaum.

Rafael Pérez y Pérez and Mike Sharples. (2001). MEXICA: A computer model of a cognitive account of creative writing. Journal of Experimental and Theoretical Artificial Intelligence, 13:119–139.

Vladimir Propp. (1968). Morphology of the Folk Tale (second edition). University of Texas Press.
Mark Riedl and Michael Young. (2004). An intent-driven planner for multi-agent story generation. In Proceedings of the 3rd International Joint Conference on Autonomous Agents and Multi-agent Systems, 186–193.

Scott R. Turner. (1994). The Creative Process: A Computer Model of Storytelling. Hillsdale, NJ: Lawrence Erlbaum.

Tony Veale. (1997). Creativity as pastiche: A computational treatment of metaphoric blends, with special reference to cinematic "borrowing". In Proceedings of Mind II: Computational Models of Creative Cognition, Dublin, Ireland.

Tony Veale. (2011). The Agile Cliché: Using Flexible Stereotypes as Building Blocks in the Construction of an Affective Lexicon. In Oltramari, A., Vossen, P., Qin, L. and Hovy, E. (eds.), New Trends of Research in Ontologies and Lexical Resources. Springer: Theory and Applications of Natural Language Processing.

Tony Veale. (2012a). From Conceptual Mash-ups to "Bad-Ass" Blends: A Robust Computational Model of Conceptual Blending. In Proceedings of ICCC 2012, the 3rd International Conference on Computational Creativity. Dublin, Ireland.

Tony Veale. (2012b). Exploding the Creativity Myth: The Computational Foundations of Linguistic Creativity. Bloomsbury.

Tony Veale. (2013). Less Rhyme, More Reason: Knowledge-based Poetry Generation with Feeling, Insight and Wit. In Proceedings of the 4th International Conference on Computational Creativity, Sydney, Australia.

Fritz Zwicky. (1969). Discovery, Invention, Research: Through the Morphological Approach. Toronto: Macmillan.