Visualization *of* dance & visualization *via* dance

[Image: dancing_infographics]

[The above image is from the post “Math Dances: Imitating Data Visualization Techniques through Dance” as found on the blog information aesthetics. The video is by Tufts University applicant Amelia Downs. Thank you.]

We just submitted two student applications to ISEA2014 (in addition to my faculty application for “Debauched Kinesthesia” that I sent in December of 2013). The two pieces are “augmented” solos for two undergraduate dancers at UVU, Dixon Bowles and Mindy Houston. Both are wonderfully talented dancers and I’m fortunate to work with them.

The two pieces are of particular interest to me personally because they build on my other academic interest (i.e., the one that I’m paid money for), which is data and data visualization. The first piece is a visualization of the dance’s data. It’s slightly tongue-in-cheek but still neat, especially as it all happens while the dancer is dancing. The second takes a different angle by making the dance itself the visualization of other data (in this case, a poem). The choreography is created to reflect the progression in the poem and then the visuals, which are recorded and looped live, are arranged in such a way as to magnify the structure and development.

Here are the official descriptions that we sent in with the applications. The first piece is Dixon’s.

The Dance and the Meta-Dance: Live Performance and Live Visualization

This proposal is for a live, solo dance performance that is augmented with video and motion tracking. The video and motion data are used for two live visualizations that are projected on each side of the dance. For the first projection or “meta-dance”, the video data are captured with a small web camera and processed in Max/MSP/Jitter using the Cyclops object, which analyzes RGB data to allow for motion tracking in real time. The resulting motion data are then used by the program to “evaluate” the performance on several criteria derived from Margaret H’Doubler’s web of principles of dance composition: Climax, Transition, Balance, Sequence, Repetition, Harmony, Variety and Contrast. These values are standardized and displayed as streaming bar and radar charts as well as time-series plots to highlight periodicity in the performance. For the second “meta-dance” projection, motion data from a Kinect depth camera are used to identify the three-dimensional coordinates of key points on the dancer’s body, such as the head, shoulders, hands, and hips. These coordinates are then drawn on the screen as abstract ribbons or streams that rotate and fade over time, highlighting the temporality and abstractness of dance. Taken together, the live performer’s dance and the two “visual commentaries” or “meta-dances” offer multiple psychological and social perspectives on performative art. They also represent an initial step in the establishment of live visualization of dance-derived data as an art form in itself.
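To make the mechanics of the first projection a bit more concrete, here is a minimal Processing sketch of the general idea rather than the actual Max/MSP/Jitter patch: it derives a single, crude “motion energy” value by frame-differencing a webcam feed and streams it as a scrolling time-series plot. The camera resolution, the normalization, and the plot scale are illustrative assumptions, and the sketch relies on Processing’s video library; in the real patch, the H’Doubler-derived criteria would replace this one value with several scored dimensions, each feeding its own bar, radar, or time-series display.

```java
import processing.video.*;   // Processing's Video library (must be installed)

Capture cam;
int[] prev;                                 // previous frame's pixels
float[] history = new float[300];           // rolling window of motion values

void setup() {
  size(640, 360);
  cam = new Capture(this, 320, 240);        // resolution is an arbitrary choice
  cam.start();
  prev = new int[320 * 240];
}

void captureEvent(Capture c) {
  c.read();
}

void draw() {
  background(0);
  cam.loadPixels();
  if (cam.pixels.length != prev.length) return;   // wait for the first frame

  // Crude "motion energy": mean absolute brightness change between frames.
  float energy = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    energy += abs(brightness(cam.pixels[i]) - brightness(prev[i]));
    prev[i] = cam.pixels[i];
  }
  energy /= cam.pixels.length;

  // Shift the rolling window and append the newest value.
  arrayCopy(history, 1, history, 0, history.length - 1);
  history[history.length - 1] = energy;

  // Stream the window as a time-series polyline across the canvas.
  stroke(255);
  noFill();
  beginShape();
  for (int i = 0; i < history.length; i++) {
    vertex(map(i, 0, history.length - 1, 0, width),
           map(history[i], 0, 64, height, 0));
  }
  endShape();
}
```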

And the second piece is Mindy’s.

The Triple Fool + 2: A Performance for Poetry, Dance, and Data Visualization

This live dance performance is based on the poem “The Triple Fool” by John Donne (1572-1631), in which Donne complains that he is a fool in three ways: (1) for falling in love; (2) for “saying so in whining poetry”; and (3) for grieving again when his verse is put to music. This performance expands on Donne’s lament by expressing it in two other media: first as a dance by a solo performer, which is recorded live with a small web camera and processed in Max/MSP/Jitter, and second as a data visualization of the poem’s text, which is revealed as the dance progresses, showing the relationships between the poem’s structures and ideas. The dance, however, serves a critical, additional function: the performance itself becomes a visualization of the text, as video clips of the performance (all of which are recorded live) are selected by the program and looped on two adjacent screens in such a way that the structure and relationships within the dance (as constructed by the live performer and the projected doubles) mirror those of the poem. That is, the dance is not just a kinesthetic, affective enactment but a visualization of the poem’s textual data. Thus, the dance performance and the data visualization become two additional means for exploring/compounding Donne’s grief, adding two additional “fools” to his original three.
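The core mechanic of the second piece, recording the performance live and looping selected clips beside the performer, can be sketched in the same spirit. The Processing sketch below (again a stand-in for the actual Max/MSP/Jitter patch) buffers incoming camera frames and loops an arbitrarily chosen segment next to the live feed; in the piece itself, the segment selection is driven by the poem’s structure rather than a key press.

```java
import processing.video.*;   // Processing's Video library (must be installed)

Capture cam;
ArrayList<PImage> recorded = new ArrayList<PImage>();   // every live frame (fine for a short sketch)
int loopStart = 0, loopLength = 60, playhead = 0;       // loop segment, chosen arbitrarily

void setup() {
  size(640, 240);
  cam = new Capture(this, 320, 240);
  cam.start();
}

void captureEvent(Capture c) {
  c.read();
  recorded.add(c.get());          // keep a copy of the incoming frame
}

void draw() {
  background(0);
  image(cam, 0, 0);               // live feed on the left

  // Loop the selected segment on the right: the "projected double".
  if (recorded.size() >= loopStart + loopLength) {
    image(recorded.get(loopStart + playhead), 320, 0);
    playhead = (playhead + 1) % loopLength;
  }
}

void keyPressed() {
  // A key press marks a new loop segment; in the piece itself this choice
  // is made by the program to mirror the poem's structure.
  loopStart = max(0, recorded.size() - loopLength);
  playhead = 0;
}
```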

We should find out by the end of March whether the performances are accepted. (And I should find out about mine in about two weeks.) We’re keeping fingers crossed.

Hello World: Jitter

My final project for my independent studies course in Jitter was to revisit a dance piece called “Hello World” that my wife, choreographer Jacque Bell, and I created back in October of 2012 for Repertory Dance Theatre here in Salt Lake City, Utah. (You can see an entry with a still image and links to reviews here, or another with a video of the performance here.) My major goal for this project was to explore the possibilities of Max/MSP/Jitter (with an emphasis on the latter…) for use in future dance and technology pieces, especially Dance Loops, the major project that Jacque, Nichole Ortega, and I are working on for this year and next.

I did two major things for this Jitter project:

  1. Worked with several different visual effects within Jitter (as facilitated by the Vizzie modules); and
  2. Experimented with using a hardware controller – a Korg nanoKONTROL2, in this case – to manipulate video in real time (a rough sketch of this idea appears just after this list).
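Here is a minimal sketch of that second experiment, written in Processing with The MidiBus library rather than in Max (where the actual patch lives): a fader on the controller drives a simple video parameter in real time. The device name and CC number are assumptions that will vary by setup; MidiBus.list() prints what is actually connected.

```java
import themidibus.*;         // The MidiBus library for Processing (must be installed)
import processing.video.*;

MidiBus midi;
Capture cam;
float fader = 0.5;           // 0..1 value driven by the controller

void setup() {
  size(640, 480);
  MidiBus.list();                    // prints available MIDI devices to the console
  midi = new MidiBus(this);
  midi.addInput("nanoKONTROL2");     // device name is an assumption; check the list above
  cam = new Capture(this, 640, 480);
  cam.start();
}

void captureEvent(Capture c) {
  c.read();
}

void controllerChange(int channel, int number, int value) {
  if (number == 0) fader = value / 127.0;   // assume the first fader sends CC 0
}

void draw() {
  background(0);
  tint(255, 255 * fader);    // the physical fader controls the image's opacity
  image(cam, 0, 0);
}
```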

Overall, it was a lot of fun and I think there’s a lot of potential there. I’ll spend the next several months working out the kinks in the patch, as not everything worked reliably, and learning how to use other hardware, such as my Kinects, Novation Launchpads, Akai APC40 and APC20, KMI SoftStep and QuNeo, as well as the projectors, etc. (That’s the nice thing about grant money – you can get some excellent gear!)

The major lesson is that it is much, much, much easier to do a lot of this in Max/MSP/Jitter than it is in Processing, which is what I have been using for the last two or three years. The programming is easier, the performance seems to be much smoother, and the hardware integration is way, way easier. (I find it curious, though, that there are hardly any books written about Max/MSP/Jitter, while there are at least a dozen fabulous books about Processing. Go figure.)

I’ve included a few still shots at the top of this post and a rather lengthy walk-through of the patch (where not much seems to be working right at the moment…) below.

Where’s my beret? I’ve got an art show!

 

As one more surprising development in my artistic life, I created some still pieces based on the dance visualization project that I did at the U of U (see this entry) and submitted them to the annual juried show for U of U art students at Williams Fine Art, a long-standing art gallery in Salt Lake City. Shockingly, I got accepted! (See, I’m the third person on the list.)

In the process, I gave a little theoretical background on the pieces I created. Here’s what I put in my rather lengthy artist statement:

Dance is a challenging medium. It is notoriously ephemeral, as it disappears once the performance is finished. It is temporal, as it is always viewed in a particular order: first the beginning, then the middle, and then the end. And dance is situated, as the viewer typically has a single visual perspective throughout the entire performance.

In a series of experiments called “Danco kaj la universala okulo,” which is Esperanto for “Dance and the universal eye,” alternatives to each of these characteristics were explored. To do so, ten dance performances were recorded with a Microsoft Kinect to get digital video and 3D motion capture data, which were then manipulated in Processing.

The first manipulation, “Danco 1: Preter spaco” (“Dance 1: Beyond Space”), presented point clouds – 3D pixel images – of the dancers. Viewers could change their perspective of the dance at will, even during the live performance: zooming in and out, rotating left and right, or going above or below the dancers.
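A minimal Processing sketch of that idea, using randomly generated points in place of the Kinect point cloud: moving the mouse reorients the cloud and the +/- keys zoom, so the viewer’s perspective is no longer fixed.

```java
// Random points stand in for the captured dancer; the interaction is the point.
ArrayList<PVector> cloud = new ArrayList<PVector>();
float zoom = 1.0;

void setup() {
  size(640, 480, P3D);
  for (int i = 0; i < 5000; i++) {
    cloud.add(new PVector(random(-100, 100), random(-200, 200), random(-50, 50)));
  }
}

void draw() {
  background(0);
  translate(width / 2, height / 2);
  scale(zoom);
  // Mouse position controls the viewing angle, so the single fixed perspective
  // of a seated audience no longer applies.
  rotateY(map(mouseX, 0, width, -PI, PI));
  rotateX(map(mouseY, 0, height, -PI, PI));
  stroke(255);
  for (PVector p : cloud) {
    point(p.x, p.y, p.z);
  }
}

void keyPressed() {
  if (key == '+') zoom *= 1.1;
  if (key == '-') zoom /= 1.1;
}
```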

The second manipulation, “Danco 2: Preter ordo” (“Dance 2: Beyond Order”), which is the basis of this print, was an interactive application that placed frames from all ten dances in random order. However, viewers could click on a frame and all of the frames from that dance would be highlighted and connected in order by a curving line. (The line is a Catmull-Rom spline with a random tension factor.) Viewers could then click a button and the selected frames would reassemble themselves in temporal order. As a note, this piece provided the seed for a recent multimedia and dance performance for Repertory Dance Theatre called “Hello World,” which was created with choreographer Jacque Bell.
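Processing makes the spline part of this nearly free: curveVertex() draws Catmull-Rom curves and curveTightness() sets the tension factor. The sketch below is a hypothetical, stripped-down version of that interaction, with numbered rectangles standing in for the frame thumbnails and a random tension value as in the piece.

```java
// Numbered rectangles stand in for frames of one dance, scattered at random
// and then linked in temporal order by a Catmull-Rom spline.
PVector[] frames = new PVector[12];

void setup() {
  size(800, 600);
  for (int i = 0; i < frames.length; i++) {
    frames[i] = new PVector(random(40, width - 40), random(40, height - 40));
  }
  noLoop();
}

void draw() {
  background(255);

  // curveVertex() draws Catmull-Rom splines; curveTightness() is the tension.
  curveTightness(random(-1, 1));
  noFill();
  stroke(0, 120, 255);
  beginShape();
  curveVertex(frames[0].x, frames[0].y);                  // repeat first control point
  for (PVector f : frames) curveVertex(f.x, f.y);
  curveVertex(frames[frames.length - 1].x, frames[frames.length - 1].y);
  endShape();

  // Draw the frame placeholders, labeled in temporal order.
  stroke(0);
  for (int i = 0; i < frames.length; i++) {
    fill(255);
    rect(frames[i].x - 20, frames[i].y - 15, 40, 30);
    fill(0);
    text(i + 1, frames[i].x - 5, frames[i].y + 5);
  }
}
```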

The final manipulation, “Danco 3: Preter tempo” (“Dance 3: Beyond Time”), derived skeleton views from the pixel data and then accumulated the figures as the dance progressed. In this way, the entire dance was simultaneously present as a unitary whole.
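The accumulation itself amounts to never clearing the canvas. In the hypothetical Processing sketch below, a simple animated stick figure stands in for the Kinect-derived skeletons; because draw() never calls background(), every pose remains on screen and the whole “dance” ends up visible as a single image.

```java
float t = 0;   // animation time

void setup() {
  size(900, 300);
  background(255);
  stroke(0, 40);            // translucent strokes so overlapping figures stay readable
  noFill();
}

void draw() {
  // No background() call here: each new figure is layered onto the previous ones.
  float x = map(t, 0, TWO_PI * 4, 60, width - 60);   // the figure travels across the canvas
  float swing = sin(t) * 25;                          // arms and legs swing over time

  line(x, 100, x, 180);                               // torso
  ellipse(x, 90, 20, 20);                             // head
  line(x, 120, x - swing, 150);                       // arms
  line(x, 120, x + swing, 150);
  line(x, 180, x - swing, 230);                       // legs
  line(x, 180, x + swing, 230);

  t += 0.1;
  if (t > TWO_PI * 4) noLoop();                       // the whole dance is now one image
}
```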

The prizes at the show went to actual artists, which is not surprising (although UVU did give me their own whopper prize a few months ago with the fellowship for Dance Loops). The show, however, was a fabulous experience and – hopefully – the first of many to follow.

[And, yes, I do/did own a beret. However, I don’t have any idea where it is. And I don’t have any black turtlenecks, so I guess I have to pass on the artist image.]