Real-Time Emotion-Driven Music Engine

Alex Rodríguez Lopez, Antonio Pedro Oliveira, and Amílcar Cardoso
Centre for Informatics and Systems, University of Coimbra, Portugal
lopez@student.dei.uc.pt, apsimoes@student.dei.uc.pt, amilcar@dei.uc.pt

Abstract. Emotion-Driven Music Engine (EDME) is a computer system that intends to produce music expressing a desired emotion. This paper presents a real-time version of EDME, which turns it into a standalone application. A real-time music production engine, governed by a multi-agent system, responds to changes of emotions and selects the most suitable pieces from an existing music base to form song-like structures, through transformations and sequencing of music fragments. The music base is composed of fragments classified in two emotional dimensions: valence and arousal.
The system has a graphic interface that provides a front-end that makes it usable in experimental contexts of different scientific disciplines. Alternatively, it can be used as an autonomous source of music for emotion-aware systems.

1 Introduction

Adequate expression of emotions is a key factor in the efficacy of creative activities [16]. A system capable of producing music expressing a desired emotion can be used to influence the emotional experience of the target audience. Emotion-Driven Music Engine (EDME) was developed with the objective of having such a capability. The high modularity and parameterization of EDME allows it to be customized for different scenarios and integrated into other systems. EDME can be controlled by the user or used in an autonomous way, depending on the origin of the input source (an emotional description). A musician can use our system as a tool to assist the process of composition. Automatic soundtracks can be generated for other systems capable of making an emotional evaluation of the current context (e.g., computer games and interactive media, where the music needs to change quickly to adapt to an ever-changing context). The input can be fed from ambient intelligence systems. Sensing the environment allows the use in installations where music reacts to the public. In a healthcare context, self-report measures or physiological sensors can be used to generate music that reacts to the state of the patient. The next section reviews related work. Section 3 presents our computer system. Section 4 draws some conclusions and highlights directions for further work.

2 Related Work

The developed system is grounded on research made in the areas of computer science and music psychology. Systems that control the emotional impact of musical features usually work through the segmentation, selection, transformation and sequencing of musical pieces. These systems modify emotionally relevant structural and performative aspects of music [4, 11, 22], by using pre-composed musical scores [11] or by making musical compositions [3, 10, 21]. Most of these systems are grounded on empirical data obtained from works of psychology [8, 19]. Scherer and Zentner [18] established parameters of influence for the experienced emotion. Meyer [13] analyzed structural characteristics of music and their relation with emotional meaning in music. Some works have tried to measure emotions expressed by music and to identify the effect of musical features on emotions [8, 19]. From these, relations can be established between emotions and musical features [11].

3 System

EDME works by combining short MIDI segments into a seamless music stream that expresses the emotion given as input. When the input changes, the system reacts and smoothly fades to music expressing the new emotion. There are two stages (Fig. 1). At the off-line stage, pre-composed music is segmented and classified to build a music base (Section 3.1); this makes the system ready for the real-time stage, which deals with selection, transformation, sequencing and synthesis (Section 3.2). The user interface lets the user select in different ways the emotion to be expressed by music. Integration with other systems is possible by using different sources as the input (Section 3.3).

3.1 Off-line stage

Pre-composed MIDI music (composed on purpose, or compiled as needed) is input to a segmentation module.
An adaptation of LBDM [2] is used to attribute weights according to the importance and degree of proximity and change of five features: pitch, rhythm, silence, loudness and instrumentation. Segmentation consists in discovering fragments by looking at the note onsets with the highest weights. The resulting fragments are input to a feature extraction module. These musical features are used by a classification module that grades the fragments in two emotional dimensions: valence and arousal (pleasure and activation). Classification is done with the help of a knowledge base implemented as two regression models that consist of weighted relations between each emotional dimension and music features [14]. The regression models are used to calculate the values of each emotional dimension through a weighted sum of the features obtained by the feature extraction module. The emotionally classified MIDI music is then stored in a music base.

Fig. 1. The system works in two stages.

3.2 Real-Time Stage

Real-time operation is handled by a multi-agent system, where agents with different responsibilities cooperate in simultaneous tasks to achieve the goal of generating music expressing desired emotions. Three agents are used: an input agent, which handles commands between the other agents and the user interface; a sequencer agent, which selects and packs fragments to form songs; and a synthesizer agent, which deals with the selection of sounds to convert the MIDI output from the sequencer agent into audio. In this stage, the sequencer agent has important responsibilities. This agent selects the music fragments whose emotional content is closest to the desired emotion. It uses a pattern-based approach to construct songs with the selected fragments. Each pattern defines a song structure and the harmonic relations between the parts of this structure (e.g., popular song patterns like AABA). Selected fragments are arranged to match the tempo and pitch of a selected musical pattern, through transformations and sequencing. The fragments are scheduled so that they are perceived as one continuous song over each complete pattern. This agent also crossfades between patterns and when there is a change in the emotional input, in order to allow a smooth listening experience.

3.3 Emotional Input

The system can be used under user control with an interface or act autonomously with other input. The input specifies values of valence and arousal.

User Interface. The user interface serves the purpose of letting the user choose in different ways the desired emotion for the generated music. It is possible for the user to directly type the values of valence and arousal the music should have. Another way is through a list of discrete emotions from which the user can choose. It is possible to load several lists of words denoting emotions to fit different uses of the system. For example, Ekman [6] has a list of generally accepted basic emotions. Russell [17] and Mehrabian [12] both have lists which map specific emotions to dimensional values (using 2 or 3 dimensions). Juslin and Laukka [9] propose a specific list for emotions expressed by music.
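To make the valence-arousal pipeline described above concrete, here is a minimal Python sketch of how a discrete emotion label could be resolved to coordinates and how the fragment whose weighted-sum classification lies closest could be selected. The feature names, regression weights and emotion coordinates are hypothetical placeholders, not values from the EDME knowledge base.

```python
# Minimal sketch of EDME-style classification and selection.
# All feature names, weights and coordinates are hypothetical placeholders.

VALENCE_WEIGHTS = {"mode_major": 0.6, "mean_pitch": 0.2, "articulation": 0.2}
AROUSAL_WEIGHTS = {"tempo": 0.7, "loudness": 0.2, "note_density": 0.1}

def classify(features):
    """Map a fragment's normalised features to (valence, arousal) via weighted sums."""
    valence = sum(w * features.get(f, 0.0) for f, w in VALENCE_WEIGHTS.items())
    arousal = sum(w * features.get(f, 0.0) for f, w in AROUSAL_WEIGHTS.items())
    return valence, arousal

# A named-emotion list mapped to valence-arousal coordinates (illustrative only).
EMOTIONS = {"happy": (0.8, 0.6), "calm": (0.5, -0.6), "sad": (-0.7, -0.4), "angry": (-0.6, 0.8)}

def select_fragment(music_base, emotion):
    """Pick the fragment whose classified emotion is closest to the target emotion."""
    tv, ta = EMOTIONS[emotion]
    return min(music_base,
               key=lambda frag: (frag["valence"] - tv) ** 2 + (frag["arousal"] - ta) ** 2)

if __name__ == "__main__":
    fragments = [{"mode_major": 1.0, "tempo": 0.9},
                 {"mode_major": 0.1, "tempo": 0.2, "loudness": 0.3}]
    base = []
    for i, feats in enumerate(fragments):
        v, a = classify(feats)
        base.append({"id": i, "valence": v, "arousal": a})
    print(select_fragment(base, "happy"))  # picks the fragment closest to (0.8, 0.6)
```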
Another way to choose the affective state of the music is through a graphical representation of the valence-arousal affective space, based on FeelTrace [5]: a circular space with the valence dimension on the horizontal axis and the arousal dimension on the vertical axis. The coloring follows that of Plutchik's circumplex model [15].

Other Input. EDME can stand as an autonomous source of music for other systems by taking their output as emotional input. With the growing interest in computational models of emotions and affective systems, and a demand for interfaces and systems that behave in an affective way, it is becoming frequent to adapt systems to show or perceive emotions. EmoTag [7] is an approach to automatically mark up affective information in texts, marking sentences with emotional values. Our system can serve the musical needs of such systems by taking their emotional output as the input for real-time soundtrack generation. Sensors can serve as input too. Francisco et al. [20] present an installation that allows people to experience and influence the emotional behavior of their system. EDME is used in this interactive installation to provide music according to values of valence and arousal.

4 Conclusion

Real-time EDME is a tool that produces music expressing desired emotions and has application in theatre, films, video games and healthcare contexts. Currently, we have applied our system in an affective installation [20]. The real-time usage of the system by professionals of music therapy and the integration of EDME with EmoTag [7] for emotional soundtrack generation are also being analysed. The extension of EDME to an agent-based system increased its scalability, which makes its expansion and integration with external systems easier. Listening tests are needed to assess the fluency of the obtained songs.

Automated Jazz Improvisation

Robert M. Keller1, with contributions by Jon Gillick2, David Morrison3, Kevin Tang4
1 Harvey Mudd College, Claremont, CA, USA
2 Wesleyan University, Middletown, CT, USA
3 University of Illinois, Urbana-Champaign, IL, USA
4 Cornell University, Ithaca, NY, USA
keller@cs.hmc.edu, jrgillick@wesleyan.edu, drmorr0@gmail.com, kt258@cornell.edu

Abstract. I will demonstrate the jazz improvisational capabilities of Impro-Visor, a software tool originally intended to help jazz musicians work out solos prior to improvisation. As the name suggests, this tool provides various forms of advice regarding solo construction over chord changes. However, recent additions enable the tool to improvise entire choruses on its own in real time. To reduce the overhead of creating grammars, and also to produce solos in specific styles, the tool now has a feature that enables it to learn a grammar for improvisation in a style from transcribed performances of solos by others. Samples may be found in reference [4].

Acknowledgment. This research was supported by grant 0753306 from the National Science Foundation and a faculty enhancement grant from the Mellon Foundation.
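To illustrate the general idea of grammar-based improvisation, here is a minimal sketch of sampling a melodic phrase from a small probabilistic grammar. The symbols, productions and probabilities are invented for illustration and are unrelated to the actual grammars Impro-Visor learns from transcriptions.

```python
import random

# Toy probabilistic grammar: each non-terminal maps to weighted expansions.
# Terminals here are abstract note/rest tokens; real learned grammars are richer.
GRAMMAR = {
    "PHRASE": [(["SEG", "SEG"], 0.6), (["SEG", "SEG", "SEG"], 0.4)],
    "SEG":    [(["C4", "E4", "G4"], 0.4), (["D4", "F4", "A4"], 0.3), (["rest", "G4"], 0.3)],
}

def expand(symbol):
    """Recursively expand a symbol; terminals are returned as single tokens."""
    if symbol not in GRAMMAR:
        return [symbol]
    expansions, weights = zip(*[(rhs, w) for rhs, w in GRAMMAR[symbol]])
    chosen = random.choices(expansions, weights=weights, k=1)[0]
    out = []
    for s in chosen:
        out.extend(expand(s))
    return out

print(expand("PHRASE"))  # e.g. ['C4', 'E4', 'G4', 'rest', 'G4']
```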
The Painting Fool Teaching Interface

Simon Colton
Department of Computing, Imperial College, London, UK
www.thepaintingfool.com

The Painting Fool is software that we hope will be taken seriously as a creative painter in its own right - one day. As we are not trained artists, a valid criticism is that we are not equipped to train the software. For this reason, we have developed a teaching interface to The Painting Fool, which enables anyone - including artists and designers - to train the software to generate and paint novel scenes, according to a scheme they specify. In order to specify the nature and rendering of the scene, users must give details on some, or all, of seven screens, some of which employ AI techniques to make the specification process simpler. The screens provide the following functionalities: (i) Images: enables the usage of context free design grammars to generate images. (ii) Annotations: enables the annotation of digital images, via the labelling of user-defined regions. (iii) Segmentations: enables the user to specify the parameters for image segmentation schemes, whereby images are turned into paint regions. (iv) Items: enables the user to hand-draw items for usage in the scenes, and to specify how each exemplar item can be varied for the generation of alternatives. (v) Collections: enables the user to specify a constraint satisfaction problem (CSP) via the manipulation of rectangles. The CSP is abduced from the rectangle shapes, colours and placements, and when solved (either by a constraint solver or an evolutionary process), generates new scenes of rectangles satisfying the user constraints. (vi) Scenes: enables the specification of layers of images, items, segmentations and collections, in addition to substitution schemes. (vii) Pictures: enables the specification of rendering schemes for the layers in scenes. In the demonstration, I will describe the process of training the software via each of the seven screens. I will use two running example picture schemes, namely the PresidENTS series and the Fish Fingers series, exemplars of which are portrayed in figure 1.

Fig. 1. Exemplar pictures from the PresidENTS and the Fish Fingers series of pictures

Generative Music Systems for Live Performance

Andrew R. Brown, Toby Gifford, and Rene Wooller
Queensland University of Technology, Brisbane, Australia
{a.brown, t.gifford, r.wooller}@qut.edu.au

Music improvisation continues to be an intriguing area for computational creativity. In this paper we will outline two software systems designed for live music performance, the LEMu (live electronic music) system and the JamBot (improvisatory accompaniment agent). Both systems undertake an analysis of human-created music, generate complementary new music, are designed for interactive use in live performance, and have been tested in numerous live settings. These systems have some degree of creative autonomy; however, we are especially interested in the creative potential of the systems interacting with human performers. The LEMu software generates transitional material between scores provided in MIDI format. The LEMu software uses an evolutionary approach to generate material that provides an appropriate path between musical targets [1]. This musical morphing process is controlled during performance by an interactive nodal graph that allows the performer to select the morphing source and target as well as transition speed and parameters.
Implementations include the MorphTable [2], where users manipulate blocks on a large surface to control musical morphing transitions. This design suits social interaction and is particularly suited to use by inexperienced users. The JamBot [3] listens to an audio stream and plays along. It consists of rhythmic and harmonic analysis algorithms that build a dynamic model of the music being performed. This model holds multiple probable representations at one time in the Chimera Architecture [4], which can be interpreted in various ways by a generative music algorithm that adds accompaniment in real time. These systems have been designed using a research method we have come to call Generation in Context, which relies on iterations of aesthetic reflection on the generated outcomes to inform the processes of enquiry [5].

Realtime Generation of Harmonic Progressions in Kinetic Engine - Demo

Arne Eigenfeldt1 and Philippe Pasquier2
1 School for the Contemporary Arts, 2 School of Interactive Arts and Technology, Simon Fraser University, Canada
{eigenfel, pasquier}@sfu.ca

Abstract. We present a method for generating harmonic progressions using case-based analysis of existing material that employs a Markov model. Using a unique method for specifying desired harmonic complexity, tension between chord transitions, and a desired bass-line, the user specifies a three-dimensional vector, which the realtime generative algorithm attempts to match during chord sequence generation. The proposed system thus offers a balance between user-requested material and coherence within the database. The presentation will demonstrate the software running in realtime, allowing users to generate harmonic progressions based upon a database of chord progressions drawn from Pat Metheny, Miles Davis, Wayne Shorter, and Antonio Carlos Jobim. The software is written in MaxMSP, and available at the first author's website (www.sfu.ca/~eigenfel).

The Continuator Strikes Back: a Controllable Bebop Improvisation Generator

François Pachet1
1 Sony CSL-Paris, 6, rue Amyot, 75005, Paris, France
pachet@csl.sony.fr

Abstract. The problem of modeling improvisation has received a lot of attention recently, thanks to progress in machine learning, statistical modeling, and the increase in computation power of laptops. The Continuator (Pachet, 2003) was the first real-time interactive system to allow users to create musical dialogs using style learning techniques. The Continuator is based on a modeling of musical sequences using Markov chains, a technique that has been shown to be well adapted to capturing stylistic musical patterns, notably in the pitch domain. The Continuator had great success in free-form improvisational settings, in which users freely explore musical language created on the fly, without additional musical constraints, and was used with jazz musicians as well as with children (Addessi & Pachet, 2005). However, the Continuator, like most systems using Markovian approaches, is difficult, if not impossible, to control. This limitation is intrinsic to the greedy, left-to-right nature of Markovian music generation algorithms. Consequently, it has so far been difficult to use these systems in highly constrained musical contexts. We present here a prototype of a fully controllable improvisation generator, based on a new technique that allows the user to control a Markovian generator.
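The greedy, left-to-right Markovian generation that the abstract contrasts its approach with can be sketched in a few lines. The training sequence and order-1 model below are purely illustrative and are not the Continuator's actual implementation.

```python
import random
from collections import defaultdict

def train_markov(sequence):
    """Build an order-1 Markov model: for each pitch, the pitches that followed it."""
    model = defaultdict(list)
    for a, b in zip(sequence, sequence[1:]):
        model[a].append(b)
    return model

def generate(model, start, length):
    """Greedy left-to-right generation: each note depends only on the previous one."""
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:          # dead end: restart from a random known state
            successors = list(model.keys())
        out.append(random.choice(successors))
    return out

corpus = [60, 62, 64, 62, 60, 67, 65, 64, 62, 60]   # toy MIDI pitch sequence
model = train_markov(corpus)
print(generate(model, start=60, length=8))
```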
We use a combination of combinatorial techniques (constraint satisfaction) with machine-learning techniques (supervised classification as described in Pachet, 2009) in a novel way. We illustrate this new approach with a Bebop improvisation generator. Bebop was chosen as it is a particularly "constrained" style, notably harmonically. Our technique can generate improvisations that satisfy three types of constraints: 1) harmonic constraints derived from the rules of Bebop, 2) "side-slips" as a way to extend the boundaries of Markovian generation by producing locally dissonant but semantically equivalent musical material that smoothly comes back to the authorized tonalities, and 3) non-Markovian constraints deduced from the user's gestures.

Keywords: music interaction, virtuosity, doodling.

Software Engineering Rewards for Brainstorming Online (SEREBRO)

D.F. Grove, N. Jorgenson, S. Sen, R. Gamble
University of Tulsa, 800 S. Tucker Drive, Tulsa OK, USA
{dean-grove, noah-jorgenson, sandip, gamble@utulsa.edu}

Abstract. Our multi-faceted tool called SEREBRO (Software Engineering Rewards for Brainstorming Online) is an embodiment of a novel framework for understanding how creativity in software development can be enhanced through technology and reinforcement. SEREBRO is a creativity support tool, available as a Web application, that provides idea management within a social networking environment to capture, connect, and reward user contributions to team-based, software engineering problem solving tasks. To form an idea network, topics are created that typically correspond to artifacts needed to achieve specific milestones in the software development process. Team members then perform the activities of brainstorming (initiating) ideas, spinning ideas from current ones by agreeing or disagreeing, pruning threads that are non-productive, and finalizing emerging concepts for the next milestone. Each idea type is represented by a corresponding icon and color in the idea network: brainstorm nodes are blue circles, agree nodes are upright green triangles, disagree nodes are upside-down orange triangles, and finalized nodes are yellow pentagons that have tags associated with contributing ideas. SEREBRO can display threads as a series of posts or in a graphical view of the entire tree for easy navigation. Team members also use SEREBRO for scheduling meetings and announcing progress. Special idea nodes can be used to represent meeting minutes. The meeting mode associates a clock with each idea type and allows multiple users to be credited. Rewards are propagated from leaf nodes to parents to correspond to idea support. They are supplemented when a node is tagged by finalization. These rewards are represented as badges. Reputation scores are accumulated by the direct scoring of ideas by team members. A user's post publicly displays both reward types. The current version, SEREBRO 2.0, is supplemented with software project management components that enhance both the idea network and reward scheme. These include uploading files for sharing, version control for changes to the product implementations, a Wiki to document product artifacts, a calendar tool, and a Gantt chart. The website with a video of SEREBRO 1.0, data collections, and a link to SEREBRO 2.0 to view various idea nets, the wiki, uploaded documents, and any resulting prototype development by the teams, as well as publications, including submissions, can be found at http://www.seat.utulsa.edu/serebro.php.
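As a rough illustration of the reward propagation from leaf idea nodes to their parents described above, here is a minimal sketch over a toy idea tree. The node types, point values and decay factor are invented for illustration and are not SEREBRO's actual reward scheme.

```python
# Toy idea network: each node has a type and an optional parent (None = root).
# Node types, point values and the decay factor are illustrative only.
NODES = {
    "n1": {"type": "brainstorm", "parent": None},
    "n2": {"type": "agree",      "parent": "n1"},
    "n3": {"type": "disagree",   "parent": "n1"},
    "n4": {"type": "agree",      "parent": "n2"},
}
LEAF_POINTS = {"agree": 2.0, "disagree": 1.0, "brainstorm": 1.0}
DECAY = 0.5   # fraction of a child's reward passed up to its parent

def propagate_rewards(nodes):
    """Credit leaves by type, then pass a decayed share of each reward up towards the root."""
    children = {nid: [] for nid in nodes}
    for nid, node in nodes.items():
        if node["parent"] is not None:
            children[node["parent"]].append(nid)
    rewards = {nid: 0.0 for nid in nodes}

    def visit(nid):
        own = LEAF_POINTS[nodes[nid]["type"]] if not children[nid] else 0.0
        passed_up = sum(DECAY * visit(c) for c in children[nid])
        rewards[nid] = own + passed_up
        return rewards[nid]

    for nid, node in nodes.items():
        if node["parent"] is None:
            visit(nid)
    return rewards

print(propagate_rewards(NODES))
```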
Guest access to SEREBRO is available by email request to gamble@utulsa.edu.

Piano_prosthesis

Michael Young
Music Department, Goldsmiths, University of London, New Cross, London, UK
m.young@gold.ac.uk

Piano_prosthesis presents a would-be live algorithm, a system able to collaborate creatively with a human partner. In performance, the pianist's improvisation is analysed statistically by continuously measuring the mean and standard deviation of 10 features, including pitch, dynamic, onset separation time and 'sustain-ness' within a rolling time period. Whenever these features constitute a 'novel' point in 10-dimensional feature space (by exceeding an arbitrary distance threshold), this point is entered as a marker. This process continues as the improvisation develops, accruing further marker points (usually around 15 are generated in a 10-minute performance). The system expresses its growing knowledge, represented by these multi-dimensional points, in its own musical output. Every new feature point is mapped to an individual input node of a pre-trained neural network, which in turn drives a stochastic synthesizer programmed with a wide repertoire of piano samples and complex musical behaviours. At any given moment in the performance, the current distance from all existing markers is expressed as a commensurate set of outputs from the neural network, generating a merged set of corresponding musical behaviours of appropriate complexity. The identification of new points, and the choice of association between points and network states, is hidden from the performer and can only be ascertained through listening and conjecture (as may well be the case when improvising with a fellow human player). The system intermittently and covertly devises connections between the human music and its own musical capabilities. As the machine learns and 'communicates', the player is invited to reciprocate. Through this quasi-social endeavour a coherent musical structure may emerge as the performance develops in complexity and intimacy. This is a new system that substitutes on-the-fly network training (previously described in detail [1]) with Euclidean distance measurements, offering considerable advantages in efficiency. There are a number of sister projects for other instruments, with corresponding sound libraries (oboe, flute, cello). Further explanation and several audio examples of full performances are available on the author's website [2].

A Visual Language for Darwin

Penousal Machado and Henrique Nunes
CISUC, Department of Informatics Engineering, University of Coimbra, 3030 Coimbra, Portugal
machado@dei.uc.pt

Abstract. The main motivation for the research that allowed the creation of the works presented here was the development of a system for the evolution of visual languages. When applied to artistic domains, the products of computational creativity systems tend to be individual artworks. In our approach, search takes place at a higher level of abstraction: using a novel evolutionary engine, we explore a space of context-free grammars. Each point of the search space represents a family of shapes following the same production rules. In this exhibit we display instances of the vast set of shapes specified by one of the evolved grammars.
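As a rough illustration of how a context-free grammar can specify a whole family of shapes (the evolved grammars in the exhibit are far richer), here is a sketch that expands a tiny grammar into drawing commands; the rules and command names are hypothetical.

```python
import random

# Hypothetical shape grammar: each production mixes terminal drawing commands
# (circle, square, scale, rotate) with recursive references to FIGURE.
RULES = {
    "FIGURE": [
        ["circle", "scale", "FIGURE"],
        ["square", "rotate", "FIGURE"],
        ["circle"],
    ],
}

def derive(symbol, depth=0, max_depth=5):
    """Expand one symbol into a flat list of drawing commands."""
    if symbol not in RULES:
        return [symbol]                      # terminal command
    if depth >= max_depth:
        return ["circle"]                    # cut off unbounded recursion
    out = []
    for s in random.choice(RULES[symbol]):
        out.extend(derive(s, depth + 1, max_depth))
    return out

# Each run yields a different member of the same shape family, e.g.
# ['square', 'rotate', 'circle', 'scale', 'circle']
print(derive("FIGURE"))
```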
Using Computational Models to Harmonise Melodies

Raymond Whorley, Geraint Wiggins, Christophe Rhodes, and Marcus Pearce
Centre for Cognition, Computation and Culture, Goldsmiths, University of London, New Cross, London SE14 6NW, UK
Wellcome Laboratory of Neurobiology, University College London, London WC1E 6BT, UK
{r.whorley, g.wiggins, c.rhodes}@gold.ac.uk, marcus.pearce@ucl.ac.uk

Abstract. The problem we are attempting to solve by computational means is this: given a soprano part, add alto, tenor and bass such that the whole is pleasing to the ear. This is not easy, as there are many rules of harmony to be followed, which have arisen out of composers' common practice. Rather than providing the computer with rules, however, we wish to investigate the process of learning such rules. The idea is to write a program which allows the computer to learn for itself how to harmonise in a particular style, by creating a model of harmony from a corpus of existing music in that style. In our view, however, present techniques are not sufficiently well developed for models to generate stylistically convincing harmonisations (or even consistently competent harmony) from both a subjective and an analytical point of view. Bearing this in mind, our research is concerned with the development of representational and modelling techniques employed in the construction of statistical models of four-part harmony. Multiple viewpoint systems have been chosen to represent both surface and underlying musical structure, and it is this framework, along with Prediction by Partial Match (PPM), which will be developed during this work. Two versions of the framework have so far been implemented in Lisp. The first is the strictest possible application of multiple viewpoints and PPM, which reduces the four musical sequences (or parts) to a single sequence comprising compound symbols. This means that, given a soprano part, the alto, tenor and bass parts are predicted or generated in a single stage. The second version allows the lower three parts to be predicted or generated in more than one stage; for example, the bass can be generated first, followed by the alto and tenor together in a second stage of generation. We shall be describing and demonstrating our software, which uses machine learning techniques to construct statistical models of four-part harmony from a corpus of fifty hymn-tune harmonisations. In particular, we shall demonstrate how these models can be used to harmonise a given melody; that is, to generate alto, tenor and bass parts given the soprano part. Output files are quickly and easily converted into MIDI files by a program written in Java, and some example MIDI files will be played.

User-Controlling Expressed Emotions in Music with EDME

Alex Rodríguez Lopez, Antonio Pedro Oliveira, and Amílcar Cardoso
Centre for Informatics and Systems, University of Coimbra, Portugal
lopez@student.dei.uc.pt, apsimoes@student.dei.uc.pt, amilcar@dei.uc.pt
http://www.cisuc.uc.pt

Abstract. Emotion-Driven Music Engine software (EDME) expresses user-defined emotions with music and works in two stages. The first stage is done offline and consists in emotionally classifying standard MIDI files in two dimensions: valence and arousal. The second stage works in realtime and uses the classified files to produce musical sequences arranged in song patterns. The first stage starts with the segmentation of MIDI files and proceeds to the extraction of features from the obtained segments.
Classifiers for each emotional dimension use these features to label the segments, which are then stored in a music base. In the second stage, EDME starts by selecting the segments with emotional characteristics closest to the user-defined emotion. The software then uses a pattern-based approach to arrange selected segments into song-like structures. Segments are adapted, through transformations and sequencing, in order to match the tempo and pitch characteristics of given song patterns. Each pattern defines the song structure and the harmonic relations between the parts of each structure. The user interface of the application offers three ways to define emotions: selection of discrete emotions from lists of emotions; graphical selection in a valence-arousal bi-dimensional space; or direct definition of valence-arousal values. While playing, EDME responds to input changes by quickly adapting the music to a new user-defined emotion. The user may also customize the music and pattern base. We intend to explore this possibility by challenging attendees to bring their own MIDI files and experiment with the system. With this, we intend to allow a better understanding of the potential of EDME as a composition aid tool and to get useful insights about further developments.

Swarm Painting Atelier

Paulo Urbano1
1 LabMag, Universidade de Lisboa, Lisboa, Portugal
pub@di.fc.ul.pt

Abstract. The design of coordination mechanisms is considered a vital component for the successful deployment of multi-agent systems in general. The same happens in artificial collective creativity, and in particular in artificial collective paintings, where the coordination model has direct effects on the agents' behavior and on the collective pattern formation process. Coordination, that is, the way agents interact with each other and how their interactions can be controlled, plays an important role in the "aesthetic value" of the resulting paintings, in spite of its subjective nature. Direct or indirect communication, centralized or decentralized control, and local versus global information are important issues regarding coordination. We have created a swarm painting tool to explore the territory of collective pattern formation, looking for aesthetically valuable behaviors and forms of interaction. We adopted a bottom-up methodology for producing collective behavior, as it is more akin to fragmentation, surprise, and non-predictability, as if it were an unconscious collaboration of collective artists, something similar to a swarm "cadavre exquis", but where we have a much more numerous group of participants, which drop paint while they move. They do not know anything about pattern or style; they just have to decide where to move and which color to drop. We are going to show the artistic pieces made by a swarm painting tool composed of collections of decentralized painting agents using just local information, which are coordinated through the mediation of the environment (stigmergy). We will also describe other types of agent coordination based on imitation, where some consensual attributes, like color, orientation, or position, will emerge, creating some order in a potential collective chaos. This consensus can die out, randomly or by interaction factors, and new consensual attributes can win, resulting in heterogeneous paintings with interesting patterns, which would be difficult to achieve if made by human hands.
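A minimal sketch of the kind of decentralized, stigmergic painting agents described above: agents read and deposit paint on a shared grid and use only local information. The grid size, colours, imitation probability and movement rule are invented for illustration, not taken from the Atelier.

```python
import random

WIDTH, HEIGHT, STEPS, N_AGENTS = 40, 40, 2000, 10
canvas = [[None] * WIDTH for _ in range(HEIGHT)]        # shared environment
agents = [{"x": random.randrange(WIDTH), "y": random.randrange(HEIGHT),
           "color": random.choice("RGB")} for _ in range(N_AGENTS)]

for _ in range(STEPS):
    for a in agents:
        # Stigmergy: if the local cell is already painted, sometimes imitate its colour.
        here = canvas[a["y"]][a["x"]]
        if here is not None and random.random() < 0.5:
            a["color"] = here
        canvas[a["y"]][a["x"]] = a["color"]             # drop paint
        # Local random walk (toroidal wrap), no global information.
        a["x"] = (a["x"] + random.choice((-1, 0, 1))) % WIDTH
        a["y"] = (a["y"] + random.choice((-1, 0, 1))) % HEIGHT

for row in canvas[:5]:                                   # peek at part of the result
    print("".join(c or "." for c in row))
```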
We think that our main contribution, besides the creative exploration of new artistic spaces with swarm-art, will be to show the possibility of generating unpredictable and surprising patterns from the interaction of individual behaviors controlled by very simple rules. This interaction between the micro and macro levels in the artistic realm can be the source of new artistic patterns and can also foster imagination and creativity. The Atelier can be reached at: http://www.di.fc.ul.pt/~pub/swarm-atelier.

Coming Together: Composition by Negotiation by Autonomous Multi-Agents

Arne Eigenfeldt, School for the Contemporary Arts, Simon Fraser University, Vancouver, Canada, arne_e@sfu.ca
Philippe Pasquier, School of Interactive Arts and Technology, Simon Fraser University, Surrey, Canada, pasquier@sfu.ca

Abstract. Coming Together is a series of computational creative systems based upon the premise of composition by negotiation - within a controlled musical environment, autonomous multi-agents attempt to converge their data, resulting in a self-organised, dynamic, and musically meaningful performance. All the Coming Together systems involve some aspect of a priori structure around which the negotiation by the agents is centered. In the versions demonstrated, the structure presupposes several discrete movements that together form a complete composition of a predetermined length. Characteristics of each movement - density, time signature, tempo - are generated using a fuzzy-logic method of avoiding similarity between succeeding movements. Two versions of Coming Together are described, used in two different musical compositions. The first, for the composition And One More, involves agents interacting in real time, their output being sent via MIDI to a mechanical percussion instrument. This version has nine different agents performing on eighteen different percussion instruments, and includes a live percussionist whose performance is encoded and considered an additional agent.
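To make the idea of generating successive movement characteristics while avoiding similarity more concrete, here is a minimal sketch that samples candidate movements and keeps the one least similar to its predecessor. The candidate values and the crude dissimilarity measure are illustrative stand-ins, not the fuzzy-logic method used in Coming Together.

```python
import random

# Candidate movement characteristics (values are illustrative).
TEMPI = [60, 84, 108, 132, 160]
DENSITIES = [0.2, 0.4, 0.6, 0.8]
TIME_SIGNATURES = [(3, 4), (4, 4), (5, 4), (7, 8)]

def dissimilarity(a, b):
    """Crude stand-in for a fuzzy similarity measure between two movements."""
    return (abs(a["tempo"] - b["tempo"]) / 100.0
            + abs(a["density"] - b["density"])
            + (0.0 if a["time_sig"] == b["time_sig"] else 1.0))

def next_movement(previous, candidates=20):
    """Sample candidate movements and keep the one least similar to its predecessor."""
    pool = [{"tempo": random.choice(TEMPI), "density": random.choice(DENSITIES),
             "time_sig": random.choice(TIME_SIGNATURES)} for _ in range(candidates)]
    return max(pool, key=lambda m: dissimilarity(m, previous))

movements = [{"tempo": 108, "density": 0.4, "time_sig": (4, 4)}]
for _ in range(3):
    movements.append(next_movement(movements[-1]))
print(movements)
```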
The second version, for the composition More Than Four, involves four agents, whose output is eventually translated into musical notation using MaxScore (www.computermusicnotation.com), for performance by four instrumentalists. Agent interaction is transcribed to disk prior to performance; at the onset of the performance, a curatorial agent selects previous movements from the database, and chooses from those to create a musically unified composition.

Continuous Improvisation and Trading with Impro-Visor

Robert M. Keller
Computer Science Department, Harvey Mudd College, Claremont, CA 91711 USA
keller@cs.hmc.edu

Demonstration. Impro-Visor is a free open-source program designed to help musicians learn to improvise. Its main purpose is to help its user become a better improviser. It can exhibit creativity by improvising continuously on its own in a variety of soloist styles. We demonstrate that, in principle, Impro-Visor can continue creating indefinitely, without repeating the same sequence of musical ideas. We also demonstrate how Impro-Visor can alternate ("trade") phrases with the soloist, again continuously, as well as record what the soloist plays on a MIDI device. Related aspects that can be shown are learning an improvisational style through grammar acquisition and using "roadmaps" as a basis for trading. The figure shows a screen shot of Impro-Visor creating phrases in real time and capturing the soloist's input in real time from a MIDI device.

Acknowledgements. The author thanks the NSF (CNS REU #0753306), Impro-Visor co-developers, and Harvey Mudd College for their generous support.

Exploring Everyday Creative Responses to Social Discrimination with the Mimesis System

D. Fox Harrell†+*, Chong-U Lim*, Sonny Sidhu†, Jia Zhang†, Ayse Gursoy†, Christine Yu+
Comparative Media Studies Program† | Program in Writing and Humanistic Studies+ | Computer Science and Artificial Intelligence Laboratory*
Massachusetts Institute of Technology
{fox.harrell, culim, sidhu, zhangjia, agursoy, czyu}@mit.edu

Introduction

We have created an interactive narrative system called Mimesis, which explores social discrimination phenomena through gaming and social networking. Mimesis places players in control of a mimic octopus in its marine habitat that encounters subtle discrimination from other sea creatures. Relevant to computational creativity, Mimesis explores: 1) collective creativity, by constructing game characters algorithmically from collective musical preferences on a social networking site; and 2) everyday creativity, by modeling the diverse creative ways people respond to covert acts of discrimination.

Figure 1: The player character is customized based on the player's musical preferences on Facebook.

Collective Creativity

Building on previous work [2], Mimesis requests access to information from the player's Facebook profile, using music preferences in the player's social network as a stand-in for qualities of individual and social identity. Mimesis generates corresponding moods for each musical artist. By associating the player character with artists' moods such as oblivious, confused, suspicious, or aggressive, players can impart these qualities onto the player character (see Figure 1). Within gameplay, moods are mapped to strategies of conversationally responding to microaggressions.

Everyday Creativity

The player character encounters other sea creatures who utter sentences like: "Where are you from?"
and "You don't seem like the typical creature around here." This is shown in Figure 2. While such questions may seem benign, they can also covertly imply the theme: "You are an alien in your own land" (such might be encountered by an Asian American in the United States). The player responds by using gestural input such as pinching out for an open/oblivious attitude or pinching in for a closed/aggressive attitude. Each encounter plays out according to a conversational narrative schema based on sociolinguistic studies of narratives of personal experience. Figure 2: The screen shows the player's character (left) in a microinvalidation encounter with an NPC (right). These encounters convey aspects of the experience of microaggressions, which are covert acts of discrimination. Researchers Sue et al. identify "microinvalidations" as communications that exclude, negate, or nullify the experiential reality of others). The "alien in your own land" theme is an example of microinvalidation. Microaggressions have been clinically found to have strong cumulative effects on health and happiness, restrict understandings between groups. [1] We hope the system is an effective tool for increasing awareness of this subtle form of social discrimination. MANUAL_SVM.rdf.bow#103 Functional Representations of Music James McDermott∗ , University College Dublin. April 30, 2012 Music is an interesting domain for the study of computational creativity. Some generative formalisms for musical composition (e.g. Markov chains) achieve plausible music over short time-scales (a few notes) but appear to be "meandering" over longer time-scales. Imposing a sense of teleology or purpose is an important goal in creating valuable music. In the field of evolutionary computation (EC), researchers draw inspiration from Darwinian evolution to address computational problems. EC can be applied to aesthetic and creative domains. Although EC is commonly used to generate music, key open issues remain. Formal measurement of the quality of a piece of music in a computational fitness function is an obvious obstacle. A naive representation for music, such as a list of integer values each corresponding directly to a note, will tend to produce disorganised music. In previous work, Hoover et al. [1, and later] showed that a functional representation could impose organisation and a sense of purpose. In this system, music is represented as a function of time. A fixed piece of pre-existing music is used as a "scaffold": the system then evolves functions, i.e. mappings from the scaffold to new accompanying material. Time-series of numerical "control" variables are also proposed as a means of imposing structure on the music. Fitness is judged interactively. The XG project is partly inspired by this work. It discards the "scaffold", but relies on the time-series of control variables (see Figure 1). Time (beats) Bar Beat x,y,z bar mod + sin2 sin2 beat + + sin2 + x unaryy sin sin2 sin2 z unary+ output sin2 output sin2 unarysin2 unaryoutput Figure 1: Time-series of control variables (left) impose a bar/beat structure and an overall AABA structure. The evolved function (right) maps these variables to numerical outputs, once per time-step. The outputs are interpreted as music. It also differs in its internal representation for the mappings (a simple language of arithmetic functions, with special accumulator functions at the outputs to control volume), and its use of a computational (noninteractive) fitness function. 
Surprisingly good results arise using this representation in combination with a simple fitness function which rewards variety in the output music. Neither the functional representation nor the fitness function alone is capable of producing good results. More details are available in a full paper [2] and online. A longer-term goal of the XG project is to create large-scale musical works as mappings from pre-existing time series arising in nature and human affairs, and from non-musical artforms such as film or still images with a sequential aspect.

MaestroGenesis: Computer-Assisted Musical Accompaniment Generation

Paul A. Szerlip, Amy K. Hoover, and Kenneth O. Stanley
Department of Electrical Engineering and Computer Science, University of Central Florida, Orlando, FL 32816-2362 USA
{paul.szerlip@gmail.com, ahoover@eecs.ucf.edu, kstanley@eecs.ucf.edu}

Abstract. This demonstration presents an implementation of a computer-assisted approach to music generation called functional scaffolding for musical composition (FSMC), whose representation facilitates creative combination, exploration, and transformation of musical ideas and spaces. The approach is demonstrated through a program called MaestroGenesis with a convenient GUI that makes it accessible to even non-musicians. Music in FSMC is represented as a functional relationship between an existing human composition, or scaffold, and a generated accompaniment. This relationship is represented by a type of artificial neural network called a compositional pattern producing network (CPPN). A human user without any musical expertise can then explore how the accompaniment can relate to the scaffold through an interactive evolutionary process akin to animal breeding.

Composing with MaestroGenesis

MaestroGenesis is a program that helps users create complete polyphonic pieces with only the musical expertise necessary to compose a simple, monophonic melody. Users begin creating accompaniments by establishing a scaffold, or melody, that will provide the initial rhythmic and harmonic seed for the accompaniment. The accompaniment is then represented as a functional transformation of this original scaffold through a method called functional scaffolding for musical composition (FSMC) (Hoover et al. 2012). FSMC exploits the structure already present in the human-composed scaffold by computing a function that transforms its structure into the accompaniment. These FSMC accompaniments are then bred as animals might be bred. Once the scaffold is chosen, a population of ten accompaniments is displayed. Each is rated as good or bad by pressing the "thumbs-up" button (figure 1). By rating accompaniments with favorable qualities higher than those without, the next generation of accompaniments tends to possess qualities similar to the well-liked parents. Through interactively evolving these accompaniments, they grow to reflect the personal inclinations of the user.

Figure 1: MaestroGenesis Candidate Accompaniments. Accompaniments in MaestroGenesis are evolved through a process similar to animal breeding. Candidate accompaniments are evolved ten at a time in an iterative process in which each subsequent generation inherits traits from the previous population.

Conclusion

MaestroGenesis is a program that facilitates creativity in music composition through functional scaffolding for musical composition (FSMC) (Hoover et al. 2012). Accompaniments are evolved through a process similar to animal breeding.
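The animal-breeding analogy can be sketched as a generic interactive-evolution loop: the user flags liked candidates, and the next generation is produced by recombining and mutating only those. The genome encoding, mutation scheme and the stand-in for the thumbs-up ratings below are placeholders, not MaestroGenesis's CPPN machinery.

```python
import random

POP_SIZE, GENOME_LEN = 10, 16

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def breed(parents):
    """Uniform crossover of two liked parents plus a little mutation."""
    a, b = random.sample(parents, 2) if len(parents) > 1 else (parents[0], parents[0])
    child = [random.choice(pair) for pair in zip(a, b)]
    return [g + random.gauss(0, 0.1) if random.random() < 0.2 else g for g in child]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(3):                  # in practice the user decides when to stop
    # Stand-in for the user's thumbs-up ratings: here we just pick a random subset.
    liked = random.sample(population, 3)
    population = [breed(liked) for _ in range(POP_SIZE)]
print(len(population), "candidates in the final generation")
```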
The program is available for download at http://maestrogenesis.org.

Acknowledgements. This work was supported in part by the National Science Foundation under grant no. IIS-1002507 and also by a NSF Graduate Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.

CrossBee: Cross-Context Bisociation Explorer

Matjaž Juršič1,2, Bojan Cestnik3,1, Tanja Urbančič4,1, Nada Lavrač1,4
1 Jožef Stefan Institute, Ljubljana, Slovenia
2 International Postgraduate School Jožef Stefan, Ljubljana, Slovenia
3 Temida d.o.o., Ljubljana, Slovenia
4 University of Nova Gorica, Nova Gorica, Slovenia
{matjaz.jursic, bojan.cestnik, tanja.urbancic, nada.lavrac}@ijs.si

CrossBee is an exploration engine for text mining and cross-context link discovery, implemented as a Web application with a user-friendly interface. The system supports the expert in advanced document exploration, including document retrieval, analysis and visualization. It enables document retrieval from public databases like PubMed, as well as by querying the Web, followed by document cleaning and filtering through several filtering criteria. Document analysis includes document presentation in terms of statistical and similarity-based properties, and topic ontology construction through document clustering. A distinguishing feature of CrossBee is its powerful cross-context and cross-domain document exploration facility and bisociative (Koestler 1964) term discovery aimed at finding potential cross-domain linking terms/concepts. Term ranking based on an ensemble heuristic (Juršič et al. 2012) enables the expert to focus on cross-context links with high potential for cross-context link discovery. CrossBee's document visualization and user interface customization additionally support the expert in finding relevant documents and terms through similarity graph visualization, a color-based domain separation scheme and highlighted top-ranked bisociative terms. A typical user scenario starts by inputting two sets of documents of interest and by setting the parameters of the system. The required input is a file with documents from two domains. Each line of the file contains exactly three tab-separated entries: (a) a document identification number, (b) a domain acronym, and (c) the document text. The other options available to the user include specifying the exact preprocessing options, specifying the base heuristics to be used in the ensemble, specifying outlier documents identified by external outlier detection software, defining the already known bisociative terms (b-terms), and others. Next, CrossBee starts a computationally very intensive step in which it prepares all the data needed for the fast subsequent exploration phase. During this step the actual text preprocessing, base heuristics, ensemble, bisociation scores and rankings are computed. This step does not require any user intervention. After this computation, the user is presented with a ranked list of b-term candidates. The list provides the user with some additional information, including the ensemble's individual base heuristics votes and the term's occurrence statistics in both domains. The user then browses through the list and chooses the term(s) he believes to be promising b-terms, i.e. terms for finding meaningful connections between the two domains.
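One simple base heuristic of the kind that could enter such an ensemble is to favour terms that occur with comparable relative frequency in both domains. The scoring and the tiny example corpora below are illustrative stand-ins, not CrossBee's actual heuristics or data.

```python
from collections import Counter

def term_frequencies(documents):
    """Relative term frequencies over a list of already-preprocessed documents."""
    counts = Counter(token for doc in documents for token in doc.lower().split())
    total = sum(counts.values()) or 1
    return {t: c / total for t, c in counts.items()}

def bisociation_scores(domain_a, domain_b):
    """Score terms by balanced presence in both domains (higher = better b-term candidate)."""
    fa, fb = term_frequencies(domain_a), term_frequencies(domain_b)
    return sorted(((min(fa[t], fb[t]), t) for t in fa.keys() & fb.keys()), reverse=True)

a_docs = ["magnesium deficiency and migraine", "stress and magnesium levels"]
b_docs = ["migraine triggers include stress", "calcium channel blockers and migraine"]
for score, term in bisociation_scores(a_docs, b_docs)[:5]:
    print(f"{term}: {score:.3f}")
```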
At this point, the user can start inspecting the actual appearances of the selected term in both domains, using the efficient side-by-side document inspection. In this way, he can verify whether his rationale behind selecting this term holds. CrossBee is available at: http://crossbee.ijs.si/. The system's home page is shown below.

Computer Software for Measuring Creative Search

Kyle E. Jennings
Department of Psychology, University of California, Davis, Davis, CA 95616 USA
kejennings@ucdavis.edu

The creative process can be thought of as the search through a space of possible solutions for one that best satisfies the problem criteria. To better understand this search process, two face-valid creative tasks have been created, both of which track the intermediate configurations that creators explore. These data, called search trajectories, may yield valuable insights into the creative process. This demonstration allows visitors to try both tasks and to see the sorts of data that are produced. The first task is a computerized version of Amabile's (1982) popular collage task, wherein participants make themed collages using colored shapes (see Figure 1). The software allows the shapes to be moved, rotated, flipped, and stacked using an intuitive mouse-based interface. The creator's moves can be characterized according to the extent that the set of shape movements actually performed exceeds the minimal set of movements needed to produce the final collage.

Figure 1: Intermediate screen of the collage task. The white area represents a piece of paper and the gray area is a work area. Initially all of the shapes are stacked in the work area, similar to the triangles in the upper-right corner.

The second task, called the orbital composition task (Jennings 2010; Jennings, Simonton, and Palmer 2011), involves arranging a camera and light that lie in fixed circular orbits around a set of objects. The configuration space has only three dimensions (camera angle, camera zoom, and light angle), but the scene is constructed in a way that permits many interesting and varied images (see Figure 2). While less face-valid than the collage task, the orbital task's simplicity permits more consistent analyses and makes it possible to collect ratings from a standardized subset of the space, thereby providing a sense of the search landscape topology that can help overcome some of the ambiguities inherent in analyzing search trajectories alone (see Jennings 2012).

Figure 2: Final images from the orbital composition task as selected by four different research participants.

ANGELINA: Coevolution in Automated Game Design

Michael Cook and Simon Colton
Computational Creativity Group, Imperial College, London (ccg.doc.ic.ac.uk)

Figure 1: Screenshot from a game about a murdered aid worker from Scotland. The background image is of the Scottish landscape, and a red ribbon image has been selected to represent the aid charity.

ANGELINA

ANGELINA is a co-operative co-evolutionary system for automatically creating simple videogames. It has previously been used to design both simple arcade-style games and two-dimensional platformers. In the past, ANGELINA's efforts have been focused on mechanical aspects of design, such as level creation, rule selection and enemy design.
We are now in the process of expanding ANGELINA's remit to cover other aspects of videogame design, including aesthetic problems such as art direction and the selection and use of external media to evoke emotion or communicate meaning. ANGELINA has produced several new games for this demonstration, exemplifying the new abilities the system now has. Its co-operative co-evolutionary system for platform games is composed of four modules: (i) a level designer that places solid blocks and locked doors to shape the progress of the player; (ii) a layout designer that places and designs the enemies the player faces, as well as the start and end of the level; (iii) a powerup designer that defines what bonus items the player can acquire during gameplay; and (iv) a creative direction module that arranges a set of media resources in the level for the player to discover during gameplay. This latter module is the newest addition to the system, and takes advantage of many new capabilities built into ANGELINA for retrieving content from the web dynamically for use in themed videogames.

Design Task

Inspired by the collage-creation problem described in (Cook and Colton 2011), ANGELINA obtains current affairs articles by accessing the website of the British newspaper The Guardian. It selects a news story, and attempts to design a short platform game whose theme is inspired by the selected news article. Currently, this allows ANGELINA to demonstrate simple abilities such as the appropriate selection of media from a wide variety of sources, and their arrangement in a potentially nonlinear level space.

Figure 2: Media retrieved for a game inspired by an inquiry into a newspaper. Left: an image retrieved using the phrase 'newspapers and magazines'. On the right is Rebekah Brooks, one of the journalists in the investigation.

ANGELINA uses online knowledge sources such as Wikipedia to extract additional information about data retrieved from the news articles. It can, for instance, identify when a country is the subject of a news article, allowing the system to search photography websites such as Flickr for photographs of that country to use as a backdrop to the game. Keyword-based searches can also be augmented with emotional keywords to alter the results they return, based on techniques described in (Cook and Colton 2011). By reading live Twitter search results about a named person in the news article, ANGELINA can use search augmentation appropriate to the opinions it finds to retrieve media that reflect perceived public opinion of a particular topic. Although a simple technique, it is a first step towards the system dealing with opinion and bias through the work it produces.

Games

The games produced are simple platform games, loosely following the design tenets of the Metroidvania subgenre. The player must navigate the level space to reach the exit, but in order to gain access to later level sections, it is necessary to seek out and obtain items that add to the player's capabilities (for example: unlocking doors or changing the player's jumping abilities). As the player explores further they will encounter enemies, as well as images and sound content appropriate to the game's theme. ANGELINA is implemented in Java, but the games the system produces are Flash-based. When ANGELINA has evolved a game design, it modifies an existing ActionScript game template to include the generated design content, as well as incorporating the media downloaded and selected from the internet.
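The keyword-based search augmentation mentioned in the Design Task section can be illustrated with a small sketch that builds emotionally augmented query strings from a news headline. The keyword extraction and emotion lexicon are hypothetical, no real web APIs are called, and this is not ANGELINA's actual retrieval code.

```python
# Hypothetical emotion lexicon used to bias media searches (illustrative only).
EMOTION_TERMS = {
    "sad":   ["mourning", "grey", "rain"],
    "angry": ["protest", "fire", "crowd"],
}

STOPWORDS = {"the", "a", "an", "of", "in", "and", "for", "about", "into"}

def keywords(headline, limit=3):
    """Very rough keyword extraction: longest non-stopword tokens."""
    tokens = [t.strip(".,'\"").lower() for t in headline.split()]
    content = [t for t in tokens if t and t not in STOPWORDS]
    return sorted(content, key=len, reverse=True)[:limit]

def augmented_queries(headline, emotion):
    """Combine article keywords with emotion-biased terms to build search queries."""
    return [f"{kw} {extra}" for kw in keywords(headline) for extra in EMOTION_TERMS[emotion]]

print(augmented_queries("Inquiry opens into newspaper phone hacking scandal", "angry"))
```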
MANUAL_SVM.rdf.bow#109
SynAPP: An online web application for exploring creativity
Alberto Gael Abadín Martínez, Universidade de Vigo (Spain) / AGH-UST (Cracow, Poland); Bipin Indurkhya, AGH University of Science and Technology (Cracow, Poland); Juan Carlos Burguillo Rial, Universidade de Vigo (Spain)

DESCRIPTION
SynAPP is a web application currently hosted at AGH-UST (http://149.156.205.250:15180) designed to stimulate users' creative skills through image association tasks and a rating feedback system. In SynAPP users perform two tasks related to image-image associations:
- Associating two images using a word or short phrase. The two images can be presented simultaneously, left and right, or sequentially with a five-second delay in between. The user is allowed to make only one association per image pair.
- Evaluating associations generated by other users according to two criteria: originality (0, 0.5 or 1 points) and intelligibility (0, 0.5 or 1 points).
The sets of image pairs for these two tasks are mutually disjoint: if a user generates an association for an image pair, then she or he does not evaluate associations generated by other users for that pair, and vice versa. All the responses are recorded with their respective time stamps, and the time taken to perform each association is also recorded. A user can see how her or his associations were rated (with respect to their originality and intelligibility) by other users, and also how this evaluation evolved over time. This information is shown in an intuitive way using tables and graphs.

Users perform three standard tests of creativity before and after using SynAPP:
- Will Shortz & Morgan Worthy's word equation (ditloid) puzzles like "24 = H. in O. D." ("24 = Hours in One Day"). Different equations are used for the before-SynAPP and after-SynAPP tests.
- Guilford's alternative uses task: the user is asked to give as many uses as possible of a common item. Different objects are used for the before-SynAPP and after-SynAPP tests.
- Wallace & Kogan's assessment of creativity: a test similar to Guilford's, but the user is asked to find objects with a common property instead.
The answers given by each user are evaluated by the other users, similarly to the image associations, and statistics on these evaluations are also displayed graphically to the user. We hypothesize that SynAPP helps users to improve their creative, out-of-the-box divergent-thinking abilities, and our goal is to properly evaluate this hypothesis based on the analysis of the data collected from the association tasks and the creativity tests.

APPLICATION WORKFLOW
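To make the rating scheme above concrete, the sketch below aggregates originality and intelligibility ratings for a single association and tracks how the mean evolves over time, as the feedback graphs would. The record format is invented for illustration and is not SynAPP's data model.

```python
from statistics import mean

# Toy sketch of aggregating SynAPP-style ratings for one association:
# originality and intelligibility are each rated 0, 0.5 or 1 by other users.
# The record format (timestamp, originality, intelligibility) is invented.
ratings = [(1, 0.5, 1.0), (2, 1.0, 0.5), (3, 1.0, 1.0)]

overall_originality = mean(r[1] for r in ratings)
overall_intelligibility = mean(r[2] for r in ratings)

# How the mean originality evolved over time, for the user's feedback graphs.
evolution = [(t, mean(x[1] for x in ratings[:i + 1]))
             for i, (t, _orig, _intel) in enumerate(ratings)]
print(overall_originality, overall_intelligibility, evolution)
```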
MANUAL_SVM.rdf.bow#110
Co-creating Game Content using an Adaptive Model of User Taste
Antonios Liapis, Georgios N. Yannakakis, and Julian Togelius, Center for Computer Games Research, IT University of Copenhagen, Rued Langgaards Vej 7, 2300 Copenhagen, Denmark, {anli, yannakakis, juto}@itu.dk

Mixed-initiative procedural content generation can augment and assist human creativity by allowing the algorithm to take care of the mechanisable parts of content creation, such as consistency and playability checking. But it can also enhance human creativity by suggesting new directions and structures, which the designer can choose to adopt or not. The proposed framework generates spaceship hulls and their weapon and thruster topologies in order to match a user's visual taste as well as conform to a number of constraints aimed at playability and game balance. The 2D shapes representing the spaceship hulls are encoded as compositional pattern-producing networks (CPPNs) and evolved in two populations using the feasible-infeasible 2-population approach (FI-2pop). One population contains spaceships which fail ad-hoc constraints pertaining to rendering, physics simulation and game balance, and individuals in this population are optimized towards minimizing their distance to feasibility. The second population contains feasible spaceships, which are optimized according to ten fitness dimensions pertaining to common attributes of visual taste such as symmetry, weight distribution, simplicity and size. These fitness dimensions are aggregated into a weighted sum which is used as the feasible population's fitness function; the weights in this quality approximation are adjusted according to a user's selection among a set of presented spaceships. This adaptive aesthetic model aims to enhance the visual patterns behind the user's selection and minimize the visual patterns of unselected content, thus generating a completely new set of spaceships which more accurately match the user's tastes. A small number of user selections allows the system to recognize the user's preferences, minimizing user fatigue.

The proposed two-step adaptation system, where (1) the user implicitly adjusts their preference model through content selection and (2) the preference model affects the patterns of generated content, should demonstrate the potential of a flexible tool both for personalizing game content to an end-user's visual taste and for inspiring a designer's creative task with content guaranteed to be playable, novel and yet conforming to the intended visual style.

Related Work
A. Liapis, G. N. Yannakakis, and J. Togelius, "Adapting Models of Visual Aesthetics for Personalized Content Creation," IEEE Transactions on Computational Intelligence and AI in Games, Special Issue on Computational Aesthetics in Games, 2012 (to appear).
A. Liapis, G. N. Yannakakis, and J. Togelius, "Optimizing Visual Properties of Game Content Through Neuroevolution," in Artificial Intelligence for Interactive Digital Entertainment Conference, 2011.
A. Liapis, G. N. Yannakakis, and J. Togelius, "Neuroevolutionary Constrained Optimization for Content Creation," in Computational Intelligence and Games (CIG), 2011 IEEE Conference on, 2011, pp. 71-78.

Figure 1: The fitness dimensions used to evaluate spaceships' visual properties and sample spaceships optimized for each fitness dimension. Weapons are displayed in green and thrusters in red.
Figure 2: The graphic user interface for spaceship selection.
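The weighted-sum fitness and its user-driven weight adjustment described above could be sketched roughly as follows. The update rule, learning rate and normalisation are assumptions for illustration only, not the authors' published method.

```python
import random

# Hedged sketch of adapting a weighted-sum aesthetic fitness from a user's
# selection, in the spirit of the adaptive model described above.
NUM_DIMS = 10  # e.g. symmetry, weight distribution, simplicity, size, ...

def fitness(scores, weights):
    """Weighted sum over the fitness dimensions."""
    return sum(s * w for s, w in zip(scores, weights))

def adapt_weights(weights, selected, unselected, lr=0.2):
    """Shift weight towards dimensions that are strong in the selected
    spaceship and weak (on average) in the unselected ones. Assumed rule."""
    mean_unsel = [sum(col) / len(unselected) for col in zip(*unselected)]
    new = [max(0.0, w + lr * (s - m))
           for w, s, m in zip(weights, selected, mean_unsel)]
    total = sum(new) or 1.0
    return [w / total for w in new]              # keep weights normalised

random.seed(0)
weights = [1.0 / NUM_DIMS] * NUM_DIMS
shown = [[random.random() for _ in range(NUM_DIMS)] for _ in range(6)]
choice = 2                                       # the spaceship the user clicked
weights = adapt_weights(weights, shown[choice],
                        shown[:choice] + shown[choice + 1:])
print(round(fitness(shown[choice], weights), 3))
```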
MANUAL_SVM.rdf.bow#119
A Fully Automatic Evolutionary Art
Tatsuo Unemi, Department of Information Systems Science, Soka University, Tangi-machi 1-236, Hachioji, Tokyo 192-8577 Japan, unemi@iss.soka.ac.jp

Figure 1: Sample image.

This is a project of automatic art in which the computer autonomously produces animations of a type of abstract imagery. Figure 1 is a typical frame image of an animation. Custom software, SBArt4 version 3, developed by the author, takes the main role in the work; it is based on a genetic algorithm that uses computational aesthetic measures as its fitness function (Unemi 2012a). The fitness value is a weighted geometric mean of measures including complexity, global contrast factor, distribution of color values, distribution of edge angles, difference of color values between consecutive frame images, and so on.

Figure 2 illustrates the system configuration using two personal computers connected by Ethernet. The left side is for the evolutionary process, and the right side is for rendering and sound synthesis.

Figure 2: System setup.

Starting from a population randomly initialized with mathematical expressions that determine the color value for each pixel in a rectangular area, a never-ending series of abstract animations is continuously displayed on the screen in turn with synchronized sound effects (Unemi 2012b). Each 20-second animation corresponds to an individual of relatively high fitness chosen from the population in the evolutionary process. The evolutionary part uses the Minimal Generation Gap model (Satoh, Ono, and Kobayashi 1997) for generational alternation to guarantee that the time for each computation step is minimal. After 120 steps of generational alternations, the genotypes of the best ten individuals are sent to the player side in turn. To avoid convergence leading to a narrower variation of individuals in the population, the lower-fitness individuals in one fourth of the population are replaced with random genotypes every 600 steps.

Visitors will notice not only the recent progress in the power of computer technology, but may also be given an occasion to think about what artistic creativity is. These technologies are useful not only to build up a system that produces unpredictable, interesting phenomena, but also to provide an occasion for people to reconsider how we should relate to the artifacts around us. We know that nature is complex and often unpredictable, but we, people in modern democratic society, tend to assume that artificial systems should be under our control and that there must be some person who takes responsibility for their effects. The author hopes the visitors will notice that it is difficult to keep some of the complex artifacts under our control, and will learn how we can enjoy them.
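The weighted geometric mean of aesthetic measures mentioned above can be written in a few lines; the concrete measure names, values and weights below are made up for illustration and are not SBArt4's.

```python
import math

# Minimal sketch of a weighted geometric mean of aesthetic measures, as used
# for SBArt4's fitness. The measures and weights here are illustrative only.
def weighted_geometric_mean(measures, weights):
    total_w = sum(weights.values())
    log_sum = sum(w * math.log(max(measures[name], 1e-12))  # guard against log(0)
                  for name, w in weights.items())
    return math.exp(log_sum / total_w)

measures = {"complexity": 0.7, "global_contrast": 0.55,
            "colour_distribution": 0.8, "edge_angle_distribution": 0.6,
            "inter_frame_difference": 0.4}
weights = {name: 1.0 for name in measures}  # equal weights, purely for the example
print(round(weighted_geometric_mean(measures, weights), 3))
```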
MANUAL_SVM.rdf.bow#142
An Artificial Intelligence System to Mediate the Creation of Sound and Light Environments
Claudio Benghi, Northumbria University, Ellison Building, Newcastle upon Tyne, NE1 8ST, England, claudio.benghi@northumbria.ac.uk; Gloria Ronchi, Aether & Hemera, Kingsland Studios, Priory Green, Newcastle upon Tyne, NE6 2DW, England, hemera@aether-hemera.com

Introduction
This demonstration presents the IT elements of an art installation that exhibits intelligent reactive behaviours to participant input, employing Artificial Intelligence (AI) techniques to create unique aesthetic interactions. The audience is invited to speak into a set of microphones; the system captures all the sounds performed and uses them to seed an AI engine for creating a new soundscape in real time, on the basis of a custom music knowledge repository. The composition is played back to the users through surrounding speakers and accompanied by synchronised light events on an array of coloured LEDs. This artwork allows viewers to become active participants in creating multisensory computer-mediated experiences, with the aim of investigating the potential for creative forms of inter-authorship.

Software Application
The installation's software has been built as a custom event manager developed under the .Net framework that can respond to events from the users, timers, and the UI, cascading them through the required algorithms and libraries as a function of the specified interaction settings; this solution allowed swift changes to the behaviour of the artwork in response to the observation of audience interaction patterns.

Figure 1: Scheme of the modular architecture of the system.

Different portions of the data flow have been externalised to custom hardware to reduce the computational load on the controlling computer: a configurable number of real-time converter devices transforms the sounds of the required number of microphones into MIDI messages and channels them to the event manager; a cascade of Arduino devices controls the custom multi-channel lighting controllers; and the sound output stage relies on MIDI standards. A substantial amount of work has been put into the optimisation of the UI console controlling the behaviour of the installation; this turned out to be crucial for the success of the project as it made it possible to use the important feedback gathered during the first implementation of this participatory artwork.

Figure 2: GUI of the controlling system.

The work was first displayed as part of a public event over three weeks and allowed the co-generation of unpredictable soundscapes with varying levels of user appreciation. The evaluation of any public co-creation environment is itself a challenging research area and our future work will investigate and evaluate methodologies to do so; further developments to the AI are also planned to include feedback from past visitors. More information about this project can be found at: http://www.aether-hemera.com/s/aib
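The event-manager architecture described above, in which microphone-derived MIDI events, timers and UI actions are cascaded through registered behaviours, might look roughly like the publish/subscribe sketch below. The event names and handlers are invented; the installation's actual .Net implementation is not shown here.

```python
from collections import defaultdict
from typing import Callable

# Minimal sketch of an event manager that cascades events (user input, timers,
# UI) through registered handlers, loosely in the spirit of the installation's
# architecture described above. Event names and handlers are placeholders.
class EventManager:
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[[dict], None]) -> None:
        self._handlers[event].append(handler)

    def emit(self, event: str, payload: dict) -> None:
        for handler in self._handlers[event]:
            handler(payload)

manager = EventManager()
manager.subscribe("midi_note", lambda p: print("seed soundscape with", p))
manager.subscribe("midi_note", lambda p: print("trigger LED pattern for", p))
manager.emit("midi_note", {"pitch": 60, "velocity": 90})
```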
MANUAL_SVM.rdf.bow#143
Controlling Interactive Music Performance (CIM)
Andrew Brown, Toby Gifford and Bradley Voltz, Queensland Conservatorium of Music, Griffith University, andrew.r.brown@griffith.edu.au, t.gifford@griffith.edu.au, b.voltz@griffith.edu.au

Abstract
Controlling Interactive Music (CIM) is an interactive music system for human-computer duets. Designed as a creativity support system, it explores the metaphor of human-machine symbiosis, where the phenomenological experience of interacting with CIM has both a degree of instrumentality and a sense of partnership. Building on Pachet's (2006) notion of reflexivity, Young's (2009) explorations of conversational interaction protocols, and Whalley's (2012) experiments in networked human-computer music interaction, as well as our own previous work in interactive music systems (Gifford & Brown 2011), CIM applies an activity/relationality/prominence-based model of musical duet interaction. Evaluation of the system from both audience and performer perspectives yielded consensus views that interacting with CIM evokes a sense of agency, stimulates creativity, and is engaging.

Description
The CIM system is an interactive music system for use in human-machine creative partnerships. It is designed to sit at a mid-point of the autonomy spectrum, according to Rowe's instrument paradigm vs. player paradigm continuum. CIM accepts MIDI input from a human performer, and improvises musical accompaniment. CIM's behaviour is directed by our model of duet interaction, which utilises various conversational, contrapuntal and accompaniment metaphors to determine appropriate musical behaviour. An important facet of this duet model is the notion of turn-taking, where the system and the human swap roles as the musical initiator. To facilitate turn-taking, the system includes some mechanisms for detecting musical phrases and their completion. This way the system can change roles at musically appropriate times.

Our early implementation of this system simply listened for periods of silence as a cue that the human performer had finished a phrase. Whilst this method is efficient and robust, it limits duet interaction and leads to a discontinuous musical result. This behaviour, whilst imbuing CIM with a sense of autonomy and independence, detracts from ensemble unity and interrupts musical flow. To address this deficiency, we implemented some enchronic segmentation measures, allowing for inter-part elision. Inter-part elision is where phrase-end in one voice coincides with (or is anticipated by) phrase-start in a second voice. In order to allow for inter-part elision, opportunistic decision making, and other synchronous devices for enhancing musical flow, we have implemented some measures of musical closure as secondary segmentation indicators. Additionally, these measures guide CIM's own output, facilitating the generation of coherent phrase structure.

The evaluation procedure
Our evaluation process involved six expert musicians, including staff and senior students at a university music school and professional musicians from the state orchestra, who performed with the system under various conditions. The setup of MIDI keyboard and computer used for these sessions is shown in Figure 5.

Figure 5: A musician playing with CIM.

Participants first played a notated score (see Figure 6). Next they engaged in free play with the system, giving them an opportunity to explore the behaviour of the system. Finally, they performed a short improvised duet with the system. The interactive sessions were video recorded. Following the interactive session each performer completed a written questionnaire.

Figure 1: A musician interacting with the CIM system.
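As a toy illustration of the early silence-based cue described above, and nothing more, a phrase can be taken as finished once a fixed stretch of silence follows the last note event. The threshold value and event format below are assumptions, not CIM's implementation.

```python
# Toy illustration of silence-based phrase segmentation: a phrase boundary is
# declared wherever the gap between successive note onsets exceeds a fixed
# silence threshold. Threshold and event format are assumptions.
SILENCE_THRESHOLD = 1.5  # seconds of silence that counts as a phrase boundary

def segment_phrases(note_times):
    """Split a sorted list of note-onset times (seconds) into phrases."""
    phrases, current = [], [note_times[0]]
    for t in note_times[1:]:
        if t - current[-1] >= SILENCE_THRESHOLD:
            phrases.append(current)      # enough silence: close the phrase
            current = [t]
        else:
            current.append(t)
    phrases.append(current)
    return phrases

onsets = [0.0, 0.4, 0.9, 1.3, 3.2, 3.6, 4.1, 6.5]
print(segment_phrases(onsets))  # three phrases
```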
MANUAL_SVM.rdf.bow#144
Towards a Flowcharting System for Automated Process Invention
Simon Colton and John Charnley, Computational Creativity Group, Department of Computing, Goldsmiths, University of London, www.doc.gold.ac.uk/ccg

Figure 1: User-defined flowchart for poetry generation.

Flowcharts
Ironically, while automated programming has had a long and varied history in Artificial Intelligence research, automating the creative art of programming has rarely been studied within Computational Creativity research. In many senses, software writing software represents a very exciting potential avenue for research, as it addresses directly issues related to novelty, surprise, innovation at process level and the framing of activities. One reason for the lack of research in this area is the difficulty inherent in getting software to generate code. Therefore, it seems sensible to start investigating how software can innovate at the process level with an approach less than full programming, and we have chosen the classic approach to process design afforded by flowcharts. Our aim is to provide a system simple enough to be used by non-experts to craft generative flowcharts; indeed, simple enough for the software itself to create flowcharts which represent novel, and hopefully interesting, new processes.

We are currently in the fourth iteration of development, having found various difficulties with three previous approaches, ranging from the flexibility and expressiveness of the flowcharts to the mismatching of inputs with outputs, the storage of data between runs, and the ability to handle programmatic constructs such as conditionals and loops. In our current approach, we represent a process as a script, onto which a flowchart can be grafted. We believe this offers the best balance of flexibility, expressiveness and usability, and will pave the way to the automatic generation of scripts in the next development stage. We have so far implemented the natural language processing flowchart nodes required to model aspects of a previous poetry generation approach and a previous concept formation approach.

The Flow System
In Figure 1 we present a screenshot of the system, which is tentatively called Flow. The flowchart shown uses 18 sub-processes which, in overview, do the following: a negative valence adjective is chosen, and used to retrieve tweets from Twitter; these are then filtered to remove various types, and pairs are matched by syllable count and rhyme; finally the lines are split where possible and combined via a template into poems of four stanzas; multiple poems are produced and the one with the most negative overall valency is saved. A stanza from a poem generated using 'malevolent' is given in Figure 2. Note in Figure 1 that the node bordered in red (WordList Categoriser) contains the sub-process currently running, and the node bordered in grey (Twitter) has been clicked by the user, which brings up the parameters for that sub-process in the first black-bordered box and the output from it in the second black-bordered box. We see that the 332nd of 1024 tweets containing the word 'cold' is on view. Note also that the user is able to put a thumb-pin into any node, which indicates that the previous output from that node should be used in the next run, rather than being calculated again.

It is our ambition to build a community of open-source developers and users around the Flow approach, so that the system can mimic the capabilities of existing generative systems in various domains but, more importantly, can invent new processes in those domains. Moreover, we plan to install the system on various servers worldwide, constantly reacting in creative ways to new nodes which are uploaded by developers, and to new flowcharts developed by users with a variety of cultural backgrounds. We hope to show that, in addition to creating at artefact level, software can innovate at process level, test the value of new processes and intelligently frame how they work and what they produce.

Figure 2: A stanza from the poem On Being Malevolent.
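To give a feel for the kind of node chain the Flow flowchart above describes (word choice, retrieval, rhyme pairing, stanza assembly), here is a deliberately crude sketch that runs a linear pipeline of sub-processes. The node bodies are stand-ins written for this illustration, not Flow's own implementations.

```python
# A rough sketch of chaining sub-processes in the spirit of the poetry
# flowchart described above: each node maps one intermediate result to the
# next. All node bodies are crude stand-ins.
def choose_negative_adjective(_):
    return "cold"  # stand-in for a valence-based word choice node

def retrieve_lines(adjective):
    # Stand-in for the Twitter retrieval node: a tiny fixed corpus.
    corpus = ["the night is cold and slow", "my hands are cold I know",
              "a bitter wind will blow", "nowhere left to go"]
    return [line for line in corpus if adjective in line]

def pair_by_rhyme(lines):
    # Crude rhyme matching: pair lines whose last two letters agree.
    pairs, used = [], set()
    for i, a in enumerate(lines):
        for j in range(i + 1, len(lines)):
            if i not in used and j not in used and a[-2:] == lines[j][-2:]:
                pairs.append((a, lines[j]))
                used.update((i, j))
    return pairs

def assemble_stanza(pairs):
    return "\n".join(line for pair in pairs for line in pair)

pipeline = [choose_negative_adjective, retrieve_lines, pair_by_rhyme, assemble_stanza]
result = None
for node in pipeline:          # run the flowchart as a linear chain of nodes
    result = node(result)
print(result)
```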
MANUAL_SVM.rdf.bow#145
A Rogue Dream: Web-Driven Theme Generation for Games
Michael Cook, Computational Creativity Group, Imperial College, London, mtc06@doc.ic.ac.uk

ABSTRACT
A Rogue Dream is an experimental videogame developed in seven days for a roguelike development challenge. It uses techniques from computational creativity papers to attempt to theme a game dynamically using a source noun from the player, including generating images and theme information. The game is part of exploratory research into bridging the gap between generating rules-based content and theme content for videogames.

1. DOWNLOAD
While A Rogue Dream is not available to download directly, its code can be found at: https://github.com/cutgarnetgames/roguedream
Spritely, a tool used in A Rogue Dream, can also be downloaded from: https://github.com/gamesbyangelina/spritely

2. BACKGROUND
Procedural content generation systems mostly focus on generating structural details of a game, or arranging pre-existing contextual information (such as choosing a noun from a list of pre-approved words). This is because the relationship between the mechanics of a game and its theme is hard to define and has not been approached from a computational perspective. For instance, in Super Mario eating a mushroom increases the player's power. We understand that food makes people stronger; therefore a mushroom is contextually appropriate. In order to procedurally replace that with another object, the system must understand the real-world concepts of food, strength, size and change. Most content generation systems for games are designed to understand games, not the real world. How can we overcome that?

3. A ROGUE DREAM
In [1] Tony Veale proposes mining Google Autocomplete using leading phrases such as "why do <keyword>s..." and using the autocompletions as a source of general knowledge or stereotypes. We refer to this as 'cold reading the Internet', and use it extensively in A Rogue Dream. We also employ Spritely, a tool for automatically generating sprite-based artwork by mining the web for images.

Figure 1: A screenshot from A Rogue Dream. The input was 'cow'; enemies were 'red', resulting in a red shoe being the enemy sprite. Abilities included 'mooing' and 'giving milk'.

The game begins by asking the player to complete the sentence "Last night, I dreamt I was a...". The noun used to complete the sentence becomes a parameter for the search systems in A Rogue Dream, such as Spritely and the various text retrieval systems based on Veale's cold reading. These are subject to further filtering; queries matching "why do <keyword>s hate..." are used to label enemies, for example. This work connects to other research currently being conducted by the author on direct code modification for content generation [?]. We hope to combine these two research tracks in order to build technology that can understand and situate abstract game concepts in a real-world context, and provide labels and fiction that describe and illustrate the game world accurately and in a thematically appropriate way.
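A hedged sketch of the 'cold reading' idea described above: build leading phrases around the player's noun and strip the prefix from whatever completions come back. The completions below are mocked for illustration; the real game queries a web autocomplete service and filters the results further.

```python
# Hedged sketch of "cold reading" autocomplete with leading phrases, in the
# spirit of A Rogue Dream. Completions are mocked here; no web service is
# queried, and the helper names are invented for illustration.
def leading_phrases(noun):
    return [f"why do {noun}s ", f"why do {noun}s hate "]

def extract_facts(prefix, completions):
    """Strip the leading phrase so only the suggested continuation remains."""
    return [c[len(prefix):].strip() for c in completions if c.startswith(prefix)]

mock_completions = ["why do cows moo", "why do cows give milk",
                    "why do cows hate red"]
prefix = leading_phrases("cow")[0]
print(extract_facts(prefix, mock_completions))  # ['moo', 'give milk', 'hate red']
```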
MANUAL_SVM.rdf.bow#146
A Puzzling Present: Code Modification for Game Mechanic Design
Michael Cook and Simon Colton, Computational Creativity Group, Imperial College, London, {mtc06,sgc}@doc.ic.ac.uk

Figure 1: A screenshot from A Puzzling Present.

ABSTRACT
A Puzzling Present is an Android and desktop game released in December 2012. The game mechanics (that is, the player's abilities) as well as the level designs were generated using Mechanic Miner, a procedural content generator that is capable of exploring, modifying and executing codebases to create game content. It is the first game developed using direct code modification as a means of procedural mechanic generation.

1. DOWNLOAD
A Puzzling Present is available on Android and for all desktop operating systems, for free, here: http://www.gamesbyangelina.org/downloads/app.html
The source code is also available on gamesbyangelina.org.

2. BACKGROUND
Mechanic Miner was developed as part of PhD research into automating the game design process, through a piece of software called ANGELINA. ANGELINA's ability to develop small games autonomously, including theming the game's content using social and web media, was demonstrated at ICCC 2012 [1]. Mechanic Miner represents a large step forward for ANGELINA, as the system becomes able to inspect and modify code directly, instead of using grammars or other intermediate representations. ANGELINA's research has always aimed to produce playable games for general release. Space Station Invaders was released in early 2012 as a commission for the New Scientist, and a series of newsgames were released to coincide with several conferences in mid-2012. A Puzzling Present was the largest release to date, garnering over 6000 downloads and entering the Android New Game charts in December, as well as receiving coverage on Ars Technica, The New Scientist, and Phys.org.

3. A PUZZLING PRESENT
The game itself contains thirty levels split into three sets of ten. Each set of levels, or world, has a unique power available to the player, such as inverting gravity or becoming bouncy. These powers can be switched on and off, and must be used to complete each level. Each power was discovered by Mechanic Miner by iterative modification of code and simulation of gameplay to test the code modifications. For more information on the system, see [2]. Levels were designed using the same system: mechanics are tested against designed levels to evaluate whether the level is appropriate. This means the system is capable of designing novel levels with mechanics it has never seen before; there is no human intervention to add heuristics or evaluations for specific mechanics. We are currently working on integrating Mechanic Miner into the newsgame generation module of ANGELINA, so that the two systems can work together to collaboratively build larger games. This initial work on code modification has also opened up major questions about the relationship between code and meaning in videogames, which we plan to explore in future work.
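The generate-and-test loop described above (modify the game, simulate play, keep modifications that make levels completable) can be caricatured in a few lines. This is a toy stand-in with invented parameters and physics, not Mechanic Miner's code-modification machinery.

```python
import itertools

# Toy generate-and-test loop: propose modifications to player parameters, then
# "playtest" each by checking whether the modified player can reach the exit.
# Everything here (parameters, physics, threshold) is invented for illustration.
EXIT_HEIGHT = 4.0          # the ledge the player must reach (metres)
GRAVITY = 9.8

def max_jump_height(jump_velocity, gravity_scale):
    """Peak height of a ballistic jump under the modified gravity."""
    g = GRAVITY * gravity_scale
    return jump_velocity ** 2 / (2 * g) if g > 0 else float("inf")

def simulate(candidate):
    """Simulated playtest: can the modified player reach the exit?"""
    return max_jump_height(candidate["jump_velocity"],
                           candidate["gravity_scale"]) >= EXIT_HEIGHT

candidates = [{"jump_velocity": v, "gravity_scale": s}
              for v, s in itertools.product([4.0, 6.0, 8.0], [1.0, 0.5, 0.25])]
working = [c for c in candidates if simulate(c)]
print(working)   # the modifications that make the level completable
```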
MANUAL_SVM.rdf.bow#147
Demonstration: A meta-pianist serial music comproviser
Roger T. Dean, austraLYSIS, Sydney; and MARCS Institute, University of Western Sydney, Australia, roger.dean@uws.edu.au

Computational processes which produce meta-human as well as seemingly-human outputs are of interest. Such outputs may become apparently human as they become familiar. So I write algorithmic interfaces (often in Max/MSP/Jitter) for real-time performative generation of complex musical/visual features, to be part of compositions or improvisations. Here I demonstrate a musical system to generate serial 12-tone rows and their standard transforms, and then to assemble them into melodic sequences, or into two-part meta-pianistic performances. Serial rigour of pitch construction is maintained throughout. This means here that 12-note motives are made, each of which comprises all the pitches within an octave on the piano (an octave comprises a doubling of the frequency of the sound, and notes at the start and end of this sequence are given the same note name: C D E F G A B C, etc.). Then a generative system creates a rigorous set of transforms of the chosen note sequences. But as in serial composition at large, when these are disposed amongst multiple voices, and used to create harmonies (simultaneous notes) as well as melodies (successions of separated notes), the serial chronology is modified. Furthermore, the system allows asynchronous processing of several versions of the original series, or of several different series.

A range of complexity can result, and to enhance this I also made a companion system which uses tonal major-scale melodies in a similar way. Here the original (Prime) version consists only of 12 notes taken from within an octave of the major scale (which includes only 7 rather than 12 pitches), thus permitting some repetitions. Chromatic inversion is used, so that, for example, the scale of C major ascending from C becomes the scale of Ab major descending from C, and major tonality with change of key centre is preserved. The performance patch within the system provides a default stochastic rhythmic, chordal and intensity control process, all of whose features are open to real-time control by the user. The patches are used for generating components of electroacoustic or notated compositions, normally with equal-tempered or alternative tuning systems performed on a physical-synthesis virtual piano (PianoTeq), and also within live solo MultiPiano performances involving acoustic piano and electronics.

The outputs are meta-human in at least two senses. First, as with many computer patches, the physical limitations of playing an instrument do not apply, and Xenakian performance complexities can be realised. Second, no human improviser could achieve this precision of pitch transformation; rather, we have evidence that they tend to take a simplified approach to atonality, usually focusing on controlling intervals of 1, 2, 6, and 11 semitones. The products of these patches are also in use in experiments on the psychology of expectation (a collaboration with Freya Bailes, Marcus Pearce and Geraint Wiggins, UK).
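The standard row transforms mentioned above (prime, retrograde, inversion, retrograde-inversion and transposition) are textbook serial technique; the short sketch below generates a random 12-tone row and prints its transforms. It illustrates the transforms only and is not the Max/MSP patch itself.

```python
import random

# Standard serial transforms of a 12-tone row (pitch classes 0..11).
# Textbook technique for illustration, not the performance patch.
def random_row():
    row = list(range(12))          # each pitch class used exactly once
    random.shuffle(row)
    return row

def transpose(row, n):
    return [(p + n) % 12 for p in row]

def retrograde(row):
    return row[::-1]

def inversion(row):
    # Invert intervals around the first note, so I0 starts on the same pitch.
    return [(2 * row[0] - p) % 12 for p in row]

def retrograde_inversion(row):
    return retrograde(inversion(row))

prime = random_row()
print("P0 :", prime)
print("R0 :", retrograde(prime))
print("I0 :", inversion(prime))
print("RI0:", retrograde_inversion(prime))
print("P5 :", transpose(prime, 5))
```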
MANUAL_SVM.rdf.bow#148
assimilate collaborative narrative construction
Damian Hills, Creativity and Cognition Studio, University of Technology, Sydney, Sydney, Australia, Damian.Hills@uts.edu.au

Abstract
This demonstration presents the 'assimilate collaborative narrative construction' project, which aims for a holistic system design with support for the creative possibilities of collaborative narrative construction.

Introduction
This demonstration presents the 'assimilate collaborative narrative construction' project (Hills 2011), which aims for a holistic system design with support for the creative possibilities of collaborative narrative construction. By incorporating interface mechanics with a flexible model of narrative template representation, the system design emphasises how mental models and intentions are understood by participants, and represents its creative knowledge outcomes based on these metaphorical and conversational exchanges. Using a touch-table interface, participants collaboratively narrate and visualise narrative sequences using online media obtained through a keyword search, or by words obtained from narrative templates. The search results are styled into generative behaviours that visually self-organise while participants make aesthetic choices about the narrative outcomes and their associated behaviours. The playful interface supports collaboration through embedded mechanics that extend gestural actions commonly performed during casual conversations. By embedding metaphorical schemes associated with narrative comprehension, such as pointing, exchanging, enlarging or merging views, gestural action drives the experience and supports the conversational aspects associated with narrative exchange.

System Architecture
The system architecture models the narrative template events to allow a particular narrative perspective, globally or locally within the generated story world. This is done by modeling conversation relationships with the aim of self-organising and negotiating an agreement surrounding several themes. The system extends Conversation Theory (CT) (Pask 1976), a theory of learning and social interaction that outlines a formal method of conversation as a sense-making network. Based on CT entailment meshes with an added fitness metric, this develops a negotiated agreement surrounding several interrelated themes, which leads to eventual narrative coherence.

MANUAL_SVM.rdf.bow#150
Breeding on site
Tatsuo Unemi, Department of Information Systems Science, Soka University, Tangi-machi 1-236, Hachioji, Tokyo 192-8577 Japan, unemi@iss.soka.ac.jp

Figure 1: System setup.

This is a live performance of improvisational production and playback of a type of evolutionary art using a breeding tool, SBArt4 version 3 (Unemi 2010). The performer breeds a variety of individual animations using SBArt4 on a machine in front of him or her, in the manner of interactive evolutionary computation, and sends the genotype of his/her favorite individual to SBArt4Player through a network connection. Figure 1 is a schematic illustration of the system setup. Each individual animation that reaches the remote machine is played back repeatedly with synchronized sound effects until another one arrives. Assisted by a mechanism of automated evolution based on computational aesthetic measures as the fitness function, it is relatively easy to produce interesting animations and sound effects efficiently on site (Unemi 2011).

The player component has the functionality to composite another animation of feathery particles that reacts to the original image rendered by a genotype. Each particle moves guided by a force calculated from the HSB color value under the particle: the brightness is mapped to the strength, the hue value is mapped to the orientation, and the saturation is mapped to the fluctuation. These additional effects provide another impression for viewers.

The performance starts from a simple pattern selected from the randomly generated initial population, and then gradually shifts to complex patterns. The parameters of sound synthesis are fundamentally determined from statistical features of the frame image so that the sound fits the impression of the visuals, but some of them are also subject to real-time tuning. The performer is allowed to adjust several parameters such as scale, tempo, rhythm, noise, and other modulation parameters (Unemi 2012) following his/her preference. Because the breeding process includes spontaneous transformation by mutation and combination, the animations shown in a performance are always different from those on any other occasion. This means each performance happens only once.

Figure 2: Live performance in Rome, December 2011.
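The HSB-to-force mapping described for the particle overlay (brightness to strength, hue to orientation, saturation to fluctuation) could look roughly like the sketch below; the scaling constants are assumptions, not values from SBArt4Player.

```python
import math
import random

# Hedged sketch of mapping an HSB colour sample to a particle force:
# brightness -> magnitude, hue -> direction, saturation -> random jitter.
# The jitter scale is an assumed constant, not taken from SBArt4Player.
def particle_force(hue, saturation, brightness):
    """hue, saturation, brightness in [0, 1]; returns an (fx, fy) vector."""
    angle = hue * 2.0 * math.pi                      # hue sets the orientation
    angle += random.gauss(0.0, saturation * 0.5)     # saturation adds fluctuation
    strength = brightness                            # brightness sets the strength
    return strength * math.cos(angle), strength * math.sin(angle)

print(particle_force(hue=0.33, saturation=0.2, brightness=0.8))
```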