Volumetric Regimes - User contributions [en]
2024-03-28T14:47:36Z
https://volumetricregimes.xyz/index.php?title=Depths_and_densities&diff=498
Depths and densities
2020-10-15T10:47:16Z
<p>FS: /* Depths and Densities: Accidented and Dissonant Spacetimes */</p>
<hr />
<div>== Depths and Densities: Accidented and Dissonant Spacetimes ==<br />
<onlyinclude><br />
* Text: Contribution by Kathryn Yusoff (2020)<br />
* Text: with Helen Pritchard, [[We Have Always Been Geohackers]] (2019).<br />
* Workshop documentation: [[LiDAR on the rocks]] (2019)<br />
* Report: Rocha, J. (2019) [[A_Bugged_Report,_transmediale_Journal| A Bugged Report]], transmediale Journal<br />
* Fiction: Possible Bodies feat. Helen Pritchard (2018). [[Ultrasonic Dreams of Aclinical Renderings]]. Ada: A Journal of Gender, New Media, and Technology, No. 13<br />
</onlyinclude></div>
FS
https://volumetricregimes.xyz/index.php?title=Dis-orientation_and_its_Aftermath&diff=482
Dis-orientation and its Aftermath
2020-05-29T09:54:23Z
<p>FS: </p>
<hr />
<div>{| class="wikitable"<br />
|-<br />
| This text was originally published in: InMaterial, Vol. 2 Núm. 3 (2017): Cuerpos poliédricos y diseño: Miradas sin límites https://www.inmaterialdesign.com/index.php/INM/article/view/29<br />
|}<br />
<br />
'''Abstract:'''<br />
Following the invitation of Sara Ahmed, “to think how queer politics might involve disorientation, without legislating disorientation as a politics”, the Possible Bodies research team inventoried three items related to 3D artifacts, following through the implications of the contemporary renderings of 'dis-orientation' they invoke. Each in their own way, the items relate to a world that is becoming oblique, where inside and outside, up and down are switching places and where new perspectives become available. They speak of the mutual constitution of technology and bodies, of matter and semiotics, of nature and culture, and of how orientation and the subjectivities that emerge from it are managed across the technocolonial matrix of representation in turbo-capitalism. The three items allow for a look at tools that represent, track and model “bodies” through diverse cultural means of abstraction, and eventually to convoke their aftermath in a call for ‘disobedient action-research’.<br />
<br />
'''Keywords:'''<br />
3D, technology, possible bodies, disorientation, inventory<br />
<br />
== The Possible Bodies Inventory: dis-orientation and its aftermath ==<br />
<br />
<blockquote>"We remain physically upright not through the mechanism of the skeleton or even through the nervous regulation of muscular tone, but because we are caught up in a world" (Merleau-Ponty quoted in Ahmed, 2006).</blockquote><br />
<br />
This text is based on three items selected from the Possible Bodies inventory. Possible Bodies is a collaborative project on the very concrete and at the same time complex and fictional entities that "bodies" are, asking what matter-cultural conditions of possibility render them present. These questions become especially pertinent in contact with the technologies, infrastructures and techniques of 3D tracking, modeling and scanning. Intersecting issues of race, gender, class, species, age and ability resurface through these performative as well as representational practices. The research is concerned with genealogies of how bodies and technologies have been mutually constituted. It interrogates corpo-realities and their orientation through parametric interfaces and looks at anatomies that are computationally constrained by the requirements of mesh-modeling. It invites the generation of concepts and experimental renderings, wild combinations and digital and non-digital prototypes for different embodiments.<br />
<br />
The Possible Bodies inquiry operates through a growing inventory of software, manuals, artworks, interfaces, scripts, performances, mathematical concepts, animations and renderings. We settled for inventorying as a method because we want to give an account of the structural formations conditioning the various cultural artifacts that co-compose 3D polygon “bodies” through scanning, tracking and modeling. With the help of the multi-scalar and collective practice of inventorying, we make an attempt to think along the agency of these items, hopefully widening their possibilities rather than pre-designing ways of doing that too easily could crystallize into ways of being. Rather than rarefying the items, as would happen through the practice of collecting, or pinning them down, as in the practice of cartography, or rigidly stabilizing them, as might be a risk through the practice of archiving, inventorying is about continuous updates, and keeping items available.<br />
<br />
Among all of the apparatuses of the Modern Project that persistently operate on present world orderings, naming and account-giving, we chose the inventory with a critical awareness of its etymological origin. It is remarkably colonial and persistently productivist: inventory is linked to invention, and thereby to discovery and acquisition [1]. The culture of inventorying refers us back to the material origins of commercial and industrial capitalism, and connects them with the contemporary database-based cosmology of techno-colonialist turbo-capitalism. But we learned about the potentials embedded in modern apparatuses of designation and occupation, and how they can be put to use once carefully unfolded to allow for active problematisation and situated understanding (Haraway, 1992). In the case of Possible Bodies, it means to keep questioning how artifacts co-habit and co-compose with techno-scientific practices, historically sustained through diverse axes of inequality. We urgently need research practices that go through axes of diversity.<br />
<br />
The temporalities of inventorying are discontinuous, and its modes of existence pragmatic: it is about finding ways to collectively specify and take stock, to prepare for eventual replacement, repair or replenishment. Inventorying is a hands-on practice of readying for further use, not one of account-giving for the sake of legitimation. As an "onto-epistemological" practice (Barad, 2012), it is as much about recognizing what is there (ontological) as it is about trying to understand (epistemological). Additionally, with its roots in the culture of manufacture, inventorying counts on cultural reflection as well as on action. This is how, as a method, it links to what we call 'disobedient action-research': it invokes and invites further remediations that can go from the academic paper to the bug report, from the narrative to the diagrammatic, and from tool mis-use to interface re-design to the dance-floor. It provides us with inscriptions, de-scriptions and re-interpretations of a vocabulary that is developing all along.<br />
<br />
For this text, we followed the invitation of Sara Ahmed, “to think how queer politics might involve disorientation, without legislating disorientation as a politics” (Ahmed, 2006). We inventoried three items, ‘Worldsettings for beginners’, ‘No Ground’ and ‘Loops’, each related to the politics of 'dis-orientation'. In their own way, these artifacts relate to a world that is becoming oblique, where inside and outside, up and down switch places and where new perspectives become available. The items speak of the mutual constitution of technology and bodies, of matter and semiotics, of nature and culture and how orientation is managed in tools across the technological matrix of representation. The three items allow us to look at tools that represent, track and model “bodies” through diverse cultural means of abstraction, and to convoke their aftermath.<br />
<br />
== Item 007: Worldsettings for beginners ==<br />
<br />
Year in which the item emerged culturally or was produced industrially: 1995<br />
Entry of the item into the inventory: March 2017<br />
Author(s) of the item: Blender community<br />
Cluster(s) the item belongs to: Dis-orientation<br />
URL: http://possiblebodies.constantvzw.org/inventory/?007<br />
<br />
[[File:blender.jpg|thumb|none|600px|Screenshot Blender 2.69 (2017)]]<br />
<br />
<blockquote>"If the point of origin changes, the world moves but the body doesn't"[2]</blockquote><br />
<br />
In computer graphics and other geometry-related data processing, calculations are based on Cartesian coordinates, which consist of three dimensional axes: x, y and z. In 3D-modelling, this is also referred to as 'the world'. The point of origin literally figures as the beginning of the local or global computational context that a 3D object functions in.<br />
<br />
Using software manuals as probes into computational realities, we traced the concept of 'world' in Blender, a powerful Free, Libre and Open Source 3D creation suite. We tried to experience its process of 'worlding' by staying on the cusp of 'entering' into the software. Keeping a balance between comprehension and confusion, we used the sense of dis-orientation that shifting understandings of the word 'world' created, to gauge what happens when such a heady term is lifted from colloquial language to be re-normalized and re-naturalized. In the nauseating semiotic context of 3D modeling, the word 'world' starts to function in another, equally real but abstract space. Through the design of interfaces, the development of software, the writing of manuals and the production of instructional videos, this space is inhabited, used, named, projected and carefully built by its day-to-day users.<br />
<br />
In Blender, virtual space is referred to in many ways: the mesh, coordinate system, geometry and finally, the world. In each case, it denotes a constellation of x, y, z vectors that start from a mathematical point of origin, arbitrarily located in relation to a 3D object and automatically starting from X = 0, Y = 0, Z = 0. Wherever this point is placed, all other planes, vertices and faces become relative to it and organize around it; the point performs as an "origin" for subsequent trans-formations.<br />
<br />
In the coordinate system of linear perspective, the vanishing point produces an illusion of horizon and horizontality, meant to be perceived by a monocular spectator that marks the centre of perception and reproduction. Points of origin do not make such claims of visual stability.<br />
<br />
<blockquote>"The origin does not have to be located in the centre of the geometry (e.g. mesh). This means that an object can have its origin located on one end of the mesh or even completely outside the mesh." [3]</blockquote><br />
<br />
In software like Blender, there is not just one world. On the contrary, each object has its own point of origin, defining its own local coordinates. These multiple world-declarations are a practical solution for the problem of locally transforming single objects that are placed in a global coordinate system. It allows you to manipulate rotations and translations on a local level and then outsource the positioning to the software that will calculate them in relation to the global coordinates. The multi-perspectives in Blender are possible because in computational reality, 'bodies' and objects exist in their own regime of truth that is formulated according to a mathematical standard. Following the same processual logic, the concept of 'context' in Blender is a mathematical construct, calculated around the world's origin. Naturalized means of orientation such as verticality and gravity are effects, applied at the moment of rendering.<br />
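<br />
The local-versus-global logic described above can be sketched in a few lines of plain Python with NumPy. This is a schematic illustration under our own assumptions, not Blender's actual code: each object stores its vertices relative to its own point of origin, transforms them locally, and only then is positioned in the global coordinate system.<br />
<br />
```python
import numpy as np

# Schematic sketch (not Blender's API): an object declares its own
# point of origin, and all of its vertices are stored relative to it.

# Local coordinates of a triangle, relative to the object's own origin.
local_vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
])

# The object's point of origin, placed somewhere in the global world.
origin = np.array([5.0, 2.0, -1.0])

# A local rotation of 90 degrees around the z axis, applied before the
# positioning is "outsourced" to the global coordinate system.
theta = np.pi / 2
rotation_z = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

# Global position = the object's origin plus each locally rotated vertex.
global_vertices = local_vertices @ rotation_z.T + origin

print(global_vertices.round(3))
```
<br />
In this sketch, moving `origin` displaces the whole object at once, while the rotation happens entirely within the object's local world before the global placement is calculated.<br />
<br />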
<blockquote>"Blender is a two-handed program. You need both hands to operate it. This is most obvious when navigating in the 3D View. When you navigate, you are changing your view of the world; you are not changing the world." (Fisher, 2014)</blockquote><br />
<br />
The point of origin is where control is literally located. The two-handedness of the representational system indicates a possibility to shift from 'navigation' (vanishing point) into 'creation' (point of origin), using the same coordinate system. The double agency produced by this ability to alternate is only tempered by the fact that it is not possible to take both positions at the same time.<br />
<br />
<blockquote>'Each object has an origin point. The location of this point determines where the object is located in 3D space. When an object is selected, a small circle appears, denoting the origin point. The location of the origin point is important when translating, rotating or scaling an object. See Pivot Points for more.' [4]</blockquote><br />
<br />
The second form of control placed at the origin is the 3D manipulator that handles the rotation, translation, and scaling of the object. In this way, the points of origin function as pivots around which the worlds are moved.<br />
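<br />
As a toy illustration of the origin-as-pivot (again a sketch, not Blender code), the same quarter-turn rotation applied around two different pivot points sends the same vertex to two different places in the world:<br />
<br />
```python
import math

def rotate_around(point, pivot, angle):
    """Rotate a 2D point around a pivot: shift the point into the
    pivot's local frame, rotate it there, then shift it back out
    into global coordinates."""
    px, py = point[0] - pivot[0], point[1] - pivot[1]
    c, s = math.cos(angle), math.sin(angle)
    return (pivot[0] + px * c - py * s,
            pivot[1] + px * s + py * c)

vertex = (2.0, 0.0)
quarter_turn = math.pi / 2

# Same rotation, two pivots: the world moves differently around each.
at_origin = rotate_around(vertex, (0.0, 0.0), quarter_turn)
off_centre = rotate_around(vertex, (1.0, 0.0), quarter_turn)

print(at_origin, off_centre)
```
<br />
The pivot itself never moves; everything else reorganizes around it, which is what makes the placement of the point of origin a form of control.<br />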
<br />
An altogether different cluster of world metaphors is at work in the 'world tab'. Firmly re-orienting the virtual back in the direction of the physical, these settings influence how an object is rendered and made to look 'natural'.<br />
<br />
<blockquote>'The world environment can emit light, ranging from a single solid colour, physical sky model, to arbitrary textures.' [5]</blockquote><br />
<br />
The tab contains settings for adding effects such as mist, stars, and shadows but also 'ambient occlusion'. The Blender manual explains this as a 'trick that is not physically accurate', suggesting that the other settings are. The 'world tab' leaves behind all potential of multiplicity that became available through the computational understanding of 'world'. The world of worlds becomes, there, impossible.<br />
<br />
Why not the world? On the one hand, the transposition of the word 'world' into Blender functions as a way to imagine a radically interconnected multiplicity, and opens up the possibility of political fictions derived from practices such as scaling, displacing, de-centering and/or alternating. On the other hand, through its linkage to a vocabulary of control, its world-view stays close to that of actual world domination. Blender operates with two modes of 'world': one that accepts the otherness of the computational object, somehow awkwardly interfacing with it, and another that is about restoring order, back to the 'real'. The first mode opens up a widening of the possible; the second prefers to stick to the plausible.<br />
<br />
== Item 012: No Ground ==<br />
<br />
Entry of the item into the inventory: 5 March 2017<br />
Year in which the item emerged culturally or was produced industrially: 2008, 2012<br />
Author(s) of the item: mojoDallas, Hito Steyerl<br />
Cluster(s) the item belongs to: Dis-orientation<br />
<br />
[[File:mojoDallas01.jpg|thumb|left|600px|Animation: mojoDallas (2008) https://www.youtube.com/watch?v=ZakpoLqXhyI]]<br />
[[File:mojoDallas02.jpg|thumb|none|600px]]<br />
<br />
<blockquote>"A fall toward objects without reservation, embracing a world of forces and matter, which lacks any original stability and sparks the sudden shock of the open: a freedom that is terrifying, utterly deterritorializing, and always already unknown. Falling means ruin and demise as well as love and abandon, passion and surrender, decline and catastrophe. Falling is corruption as well as liberation, a condition that turns people into things and vice versa. It takes place in an opening we could endure or enjoy, embrace or suffer, or simply accept as reality." (Steyerl, 2012)</blockquote><br />
<br />
This item follows Hito Steyerl in her reflection on disorientation and the condition of falling, and drags it all the way to the analysis of an animation generated from a motion capture file. The motion capture of a person jumping is included in the Carnegie Mellon University Graphics Lab Human Motion Library [6]. Motion capture systems, including the one at Carnegie Mellon, typically do not record information about context, and the orientation of the movement is made relative to an arbitrary point of origin (see item 007: Worldsettings for beginners).<br />
<br />
In the animated example, the position of the figure in relation to the floor is 'wrong': the body seems to float a few centimeters above ground. The software relies on perceptual automatisms and plots a naturalistic shadow, taking the un-grounded position of the figure automatically into account: if there is a body, a shadow must be computed. Automatic naturalisation: technology operates with material diligence. What emerges is not the image of the body, but the body of the image: "The image itself has a body, both expressed by its construction and material composition, and (...) this body may be inanimate, and material." (Steyerl, 2012).<br />
<br />
'No ground' is an attempt to think through issues with situatedness that appear when encountering computed and computational bodies. Does location work at all, if there is no ground? Is displacement a movement, if there is no place? How are surfaces behaving around this no-land's man, and what forces affect them?<br />
The found-on-the-go ethics and “path dependence” that condition computational materialities of bodies worry us. It all appears too imposing, too normative in the humanist sense, too essentialist even. What body compositions share a horizontal base, what entities have the gift of behaving vertically? How do other trajectorialities affect our semiotic-material conditions of possibility, and hence the very politics that bodies happen to co-compose? How can these perceptual automatisms be de-clutched from a long history of domination, of the terrestrial and extraterrestrial wild (Haraway, 1992), now sneaking into virtual spheres?<br />
We suspect a twist in the hierarchy between gravitational forces. It does not lead to collapse but results in a hallucinatory construction of reality, filled with floating bodies. If we want to continue using the notions of 'context' and 'situation' for a cultural analysis of the bodies that populate the pharmacopornographic, military and gamer industries and their imaginations, and to attend to their immediate political implications, we need to reshape our understanding of them. It might be necessary to let go of the need for 'ground' as a defining element for the body's very existence, though this makes us wonder about the agencies at work in these un-grounded embodiments. If the land is for those who work it, then who is working the ground [7]?<br />
<br />
<blockquote>"Disorientation involves failed orientations: bodies inhabit spaces that do not extend their shape, or use objects that do not extend their reach" (Ahmed, 2006, p.160).</blockquote><br />
<br />
The co-constitution of bodies and technologies shatters all dreams of stability; the co-composition of foreground and background crashes all dreams of perspective. When standing just does not happen due to a lack of context or a lack of ground, even if it is a virtual one, the notion of standpoint does not work. Situation, though, deserves a second thought.<br />
<br />
The political landscape of turning people into things and vice-versa recalls the rupture of 'knowing subjects' and 'known objects' that Haraway called for after reading the epistemic use of 'standpoint' in Harding (1986), which asked for a recognition of the 'view from below' of the subjugated: “to see from below is neither easily learned nor unproblematic, even if 'we' 'naturally' inhabit the great underground terrain of subjugated knowledges” (Haraway, 1988, p. 584). The emancipatory romanticism of Harding does not work in these virtual renderings either. The semiotic-material conditions of possibility that unfold from Steyerl’s above description are conditions without point, standing or below.<br />
<br />
What implications would it have to displace our operations, based on unconsolidated matter that in its looseness asks for eventual anchors of interdependence? How could we transmute the notion of situatedness, to understand the semiotic-material conditionings of 3D rendered bodies, that affect us socially and culturally through multiple managerial worldlings?<br />
<br />
The body in this item is neither static nor falling: it is floating. Here we find something in Haraway's notion of 'situatedness' that does not match when we try to mobilize potential vocabularies for the complex forms of worldmaking and its embodiments in the virtual. What can we learn from the conditions of floating brought to us by the virtual transduction of modern perspective, in order to draft an account-giving apparatus of present presences? How can that account-giving be intersectional with regards to the agencies implied, respectful of the dimensionality of time and ageing, and responsible with a political history of groundness?<br />
<br />
Floating is the endurance of falling. It seems that in a computed environment, falling is always in some way a floating. There is no ground to fall towards that limits the time of falling, nor is the trajectory of the fall directed by gravity. The trajectory of a floating or persistently falling body is always already unknown.<br />
<br />
In the dynamic imagination of the animation, the ground does not exist before the movement is generated; it only appears as an afterthought. Everything seems upside down: the foundation of the figure is deduced from, not pre-existing, its movement. Does this mean that there is actually no foundation, or just that it appears in every other loop of movement? Without the ground, the represented body could be understood as becoming smaller, and that would open the question of dimensionality and scalability. But being surface-dependent, it is received as moving backwards and forwards: the modern eye reads one shape that changes places on a territory. Closer, further, higher, lower: the body arranges itself in perspective, but we must attend to the differences inherent in that active positioning. The fact that we are dealing with an animation of a moving body implies that the dimension of time is brought into the conversation. Displacement is temporary, with a huge variation in the gradient of time from momentary to persistent.<br />
<br />
In most cases of virtual embodiment, the absolute tyranny of the conditions of gravity does not operate. In a physical situation (a situation organized around atoms), falling along verticality is a key trajectory of displacement; falling cannot happen horizontally upon or over stable surfaces. For the fleshy experienced, falling counts on gravity as a force. Falling seems to relate to liquidity or weightlessness, and grounding to solidity and settlement of matters. Heaviness, having weight, is a characteristic of being-in-the-world, or more precisely: of being-on-earth, magnetically enforced. Falling depends on gravity, but it is also - as Steyerl explains - a state of being un-fixed, ungrounded, not as a result of groundbreakingness but as an ontological lack of soil, of base. Un-fixed from the ground, or from its representation (Steyerl, 2012).<br />
<br />
Nevertheless, when gravity is computed, it becomes a visual-representational problem, not an absolute one. In the animation, the figure is fixed and sustained by mathematical points of origin but to the spectator from earth, the body seems unfixed from its 'natural soil'. Hence, in a computational space, other 'forced' directions become possible thanks to a flipped order of orientation: the upside-down regime is expanded by others like left-right, North-South and all the diagonal and multivortex combinations of them. This difference in space-time opens up the potential of denaturalized movements.<br />
<br />
Does falling change when the conditions of verticality, movement and gravity change? Does it depend on a specific axis? Is it a motion-based phenomenon, or rather a static one? Is it a rebellion against the force of gravity, since falling here functions in a mathematical rather than in a magnetic paradigm? And if so, 'who' is the agent of that rebellion?<br />
<br />
At minute 01:05, we find a moment where two realities are juxtaposed. For a second, the toe of the figure trespasses the border of its assigned surface, glitching a way out of its position in the world, and bringing with it an idea of a pierceable surface to exist on ... opening up for an eventual common world. <br />
In the example, the 'feet' of the figure do not touch the 'ground'. It reminds us that the position of this figure is the result of computation. It hints at how rebellious computational semiotic-material conditions of possibility are at work. We call them semiotic because they are written, codified, inscribed and formulated (alphanumerically, to start with). We call them material since they imply an ordering, a composition of the world, a structuring of its shapes and behaviors. Both conditions affect the formulation of a 'body' by considering weight, height and distance. They also affect the physicality of computing: the processes that generate it, pulses in electromagnetic circuits, power network use, server load, etc.<br />
<br />
When the computational grid is placed under the feet of the jumping figure, materialities have to be computed, generated and located "back" and "down" into a "world". Only in relation to a fixed point of origin, and after its world has been declared to make it exist, can the surrounding surfaces be settled. Accuracy would depend on how those elements are placed in relation to the positioned body. Accuracy is a relational practice: body and ground are computed separately, each within their own regime of precision. When the rendering of the movement makes them dependent on the placement of the ground, their relative accuracy will appear as strong or weak, and this intensity will define the kind of presence emerging.<br />
<br />
Thinking present presences can not rely on the lie of laying. A thought on agency can neither rely on the ground to fall towards nor on the roots of grass to emerge from. How can we then invoke a politics of floating not on the surface but within, not cornered but around and not over but beyond, in a collective but not a grass-roots movement? Constitutive conditioning of objects and subjects is absolutely relational, and hence we must think of and operate with their consistencies in a radically relational way as well: not as autonomous entities but as interdependent worldlings. Ground and feet, land and movement, verticality and time, situatedness and axes: the more of them we take into account when giving account of the spheres we share, the more degrees of freedom we are going to endow our deterritorialized and reterritorialized lives with.<br />
<br />
The body is a political fiction, one that is alive (Preciado, 2008); but a fiction is not a lie. And so are up, down, outside, base, East and South (Rocha, 2016) and presence. Nevertheless, we must unfold the insights from knowing how those fictions are built to better understand their radical affection on the composition of what we understand as 'living', whether that daily experience is mediated fleshly or virtually.<br />
<br />
== Item 022: Loops ==<br />
<br />
Entry of the item into the inventory: November 2016<br />
Year in which the item emerged culturally or was produced industrially: 2009, 2008, 1971, 1946<br />
Author(s) of the item: Golan Levin, Merce Cunningham, OpenEnded group, Buckminster Fuller<br />
Cluster(s) the item belongs to: Dis-orientation<br />
URL: http://possiblebodies.constantvzw.org/inventory/?022<br />
<br />
‘Loops’ entered the inventory for the first time through an experiment by Golan Levin (2009). Using an imaging technique called isosurfacing, common in medical data-visualisation and in cartography, Levin rendered a motion recording of Merce Cunningham's performance ‘Loops’. The source code of the project is published on his website as golan_loops.zip. The archive contains, amongst C code and several openFrameworks libraries, two motion capture files formatted in the popular Biovision Hierarchy file format: rwrist.bvh.txt and lwrist.bvh.txt. There is no license included in the archives [8].<br />
<br />
Following the standard layout of .bvh, each of the files starts with a detailed skeleton hierarchy in which, in this case, WRIST is declared as ROOT. Cascading down into carpals and phalanges, Rindex is followed by Rmiddle, Rpinky, RRing and finally Rthumb. After the hierarchy section, a MOTION section follows that includes a long row of numbers.<br />
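<br />
The overall layout can be sketched with a minimal reader in Python. The sample data below is invented for illustration and is not taken from rwrist.bvh.txt itself:<br />
<br />
```python
# A simplified sketch of the .bvh layout: a HIERARCHY section declaring
# a ROOT and nested JOINTs (each with OFFSET and CHANNELS lines),
# followed by a MOTION section with a frame count and rows of numbers.
sample_bvh = """HIERARCHY
ROOT WRIST
{
  OFFSET 0.0 0.0 0.0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT Rthumb
  {
    OFFSET 1.0 0.0 0.0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0.5 0.0 0.0
    }
  }
}
MOTION
Frames: 2
Frame Time: 0.0083
0.0 0.0 0.0 0.0 0.0 0.0 10.0 0.0 0.0
0.1 0.0 0.0 5.0 0.0 0.0 12.0 0.0 0.0
"""

def read_bvh(text):
    """Split a BVH file into its joint names and its motion rows."""
    hierarchy, motion = text.split("MOTION")
    # Joint names follow the ROOT and JOINT keywords in the hierarchy.
    joints = [line.split()[1] for line in hierarchy.splitlines()
              if line.strip().startswith(("ROOT", "JOINT"))]
    # Motion rows are the purely numeric lines: one row per frame.
    rows = [list(map(float, line.split()))
            for line in motion.splitlines()
            if line.strip() and not line.strip()[0].isalpha()]
    return joints, rows

joints, frames = read_bvh(sample_bvh)
print(joints)        # skeleton hierarchy, cascading down from ROOT
print(len(frames))   # one row of channel values per recorded frame
```
<br />
Each number in a motion row corresponds, in declaration order, to one of the CHANNELS of the hierarchy, which is how the long rows of numbers encode the recorded movement.<br />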
<br />
Just before he died in 2009, Cunningham released the choreography for ‘Loops’ under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 license. No dance-notations were published, nor has The Merce Cunningham Trust included the piece in the 68 Dance Capsules that provide “an array of assets essential to the study and reconstruction of this iconic artist's choreographic work.” [9]<br />
<br />
From the late nineties onwards, the digital art collective OpenEnded group worked closely with Merce Cunningham. In 2001, they recorded four takes of Cunningham performing ‘Loops’, translating the movement of his hands and fingers into a set of datapoints. The idea was to "Open up Cunningham’s choreography of Loops completely" as a way to test whether the preservation of a performance could count as a form of distribution [10].<br />
<br />
The release of the recorded data consists of four compressed folders. Each of the folders contains a .fbx (Filmbox) file, a proprietary file format for motion recording owned by the software company Autodesk, and two Hierarchical Translation-Rotation files, a less common motion capture storage format. The export file in the first take is called Loops1_export.fbx, and the two motion capture files loops1_all_right.htr and loops1_all_left.htr. Each take is documented on video, once with a hand-held camera and once on a tripod. There is no license included in the archives.<br />
<br />
In 2008, the OpenEnded group wrote custom software to create a screen based work called ‘Loops’. Loops runs in real time, continually drawing from the recorded data. “Unique? — No and yes: no, the underlying code may be duplicated exactly at any time (and not just in theory but in practice, since we’ve released it as open source); yes, in that no playback of the code is ever the same, so that what you glimpse on the screen now you will never see again.” [11] The digital artwork is released under a GPL v.3 license.<br />
<br />
Seeing interpretations of ‘Loops’ made by other digital artists such as Golan Levin, OpenEnded group declared that they did not have any further interest in anyone else interpreting the recordings: “I found the whole thing insulting, if not to us, certainly to Merce.” [12]<br />
<br />
Cunningham developed ‘Loops’ as a performance to be exclusively executed by himself. He continued to dance the piece throughout his life in various forms until arthritis forced him to limit its execution to just his hands and fingers.[13]<br />
<br />
In earlier iterations, Cunningham moved through different body parts and their variations one at a time and in any order: feet, head, trunk, legs, shoulders, fingers. The idea was to explore the maximum number of movement possibilities within the anatomical restrictions of each joint rotation. Stamatia Portanova writes: “Despite the attempt at performing as many simultaneous movements as possible (for example, of hands and feet together), the performance is conceived as a step-by-step actualization of the concept of a binary choice.” (Portanova, 2013)<br />
<br />
A recording of ‘Loops’ performed in 1975 is included in the New York Public Library Digital Collections, but can only be viewed on site. [14]<br />
<br />
Cunningham danced ‘Loops’ for the first time at the Museum of Modern Art in 1971. He situated the performance in front of 'Map (Based on Buckminster Fuller's Dymaxion Airocean World)', a painting by Jasper Johns. Roger Copeland describes ‘Loops’ as follows: “In much the same way that Fuller and Johns flatten out the earth with scrupulous objectivity, Cunningham danced in a rootless way that demonstrated no special preference for any one spot.” Later in the same book he adds: "Consistent with his determination to decentralize the space of performance, Cunningham’s twitching fingers never seemed to point in any one direction or favor any particular part of the world represented by Johns’s map painting immediately behind him." (Copeland, 2004)<br />
<br />
In one of the rare images that circulate of the 1971 performance, we see Cunningham with composer Gordon Mumma in the background. From the photograph it is not possible to tell whether Cunningham is facing the painting while dancing ‘Loops’, or whether the audience saw the painting behind or in front of him.<br />
<br />
Cunningham met Buckminster Fuller in 1948 at Black Mountain College. In an interview with Jeffrey Schnapp, he describes listening to one of Fuller's lectures: “In the beginning you thought, this is absolutely wonderful, but of course it won't work. But then, if you listened, you thought, well maybe it could. He didn't stop, so in the end I always felt like I had a wonderful experience about possibilities, whether they ever came about or not.” [15]<br />
<br />
With The Dymaxion Airocean World Map, Buckminster Fuller wanted to visualize planet earth with greater accuracy. In this way “humans will be better equipped to address challenges as we face our common future aboard Spaceship Earth.” The description of the map on the Buckminster Fuller Institute website is followed by a statement that “the word Dymaxion, Spaceship Earth and the Fuller Projection Map are trademarks of the Buckminster Fuller Institute. All rights reserved.” [16]<br />
<br />
The Dymaxion Airocean Projection divides the surface of the earth into 20 equilateral spherical triangles in order to produce a two-dimensional projection of the globe. Fuller patented the Dymaxion map at the US Patent Office in 1946. [17]<br />
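The figure of 20 equilateral triangles corresponds to the faces of a regular icosahedron, the solid onto which this version of the projection maps the globe. As a cross-check (not part of Fuller's patent), the face count follows from Euler's polyhedron formula V − E + F = 2; a minimal sketch, using the standard golden-ratio coordinates for the icosahedron's vertices:<br />

```python
from itertools import combinations
from math import sqrt, isclose

# The 12 vertices of a regular icosahedron are the cyclic
# permutations of (0, ±1, ±phi), with phi the golden ratio.
phi = (1 + sqrt(5)) / 2
verts = []
for a, b in [(1, phi), (-1, phi), (1, -phi), (-1, -phi)]:
    verts += [(0, a, b), (a, b, 0), (b, 0, a)]

def dist(p, q):
    return sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

# Edges join the vertex pairs at the minimal pairwise distance.
edge_len = min(dist(p, q) for p, q in combinations(verts, 2))
edges = [pq for pq in combinations(verts, 2)
         if isclose(dist(*pq), edge_len)]

V, E = len(verts), len(edges)
F = 2 - V + E           # Euler's formula: V - E + F = 2
print(V, E, F)          # → 12 30 20
```

Counting edges rather than faces keeps the sketch short: Euler's formula recovers the 20 triangular faces without having to enumerate them explicitly.<br />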
<br />
[[File:cunningham.jpg|thumb|none|600px|Merce Cunningham and OpenEnded group, Loops: Take 1 (hand-held) (2001)]]<br />
<br />
[[File:fuller.jpg|thumb|none|600px|Buckminster Fuller, US Patent 2393676, Dymaxion Airocean Projection (1946)]]<br />
<br />
== Aftermath ==<br />
<br />
The inventorying of three items has allowed us to think through cultural artifacts with very different scales, densities, media and durations. The items were selected because they align with a fundamental inquiry into 3D-infused imaginations of the 'body' and their consequences, emerging through a set of questions related to orientation and dis-orientation. Additionally, the items represent the transdisciplinarity of the issues raised by 3D scanning, modeling and tracking, which touch upon performance analysis, math, cartography, law and software studies.<br />
<br />
In item 007: Worldsettings for beginners, we explored the singular way in which the Cartesian coordinate system inhabits the digital by producing worlds in 3D modeling software, including the world of the body itself. In item 012: No Ground, we asked how situatedness can be meaningful when there is no ground to stand on. We wondered which tools we might need to develop in order to organize forms, shapes and ultimately a way of living while floating in virtual disorientation. Finally, in item 022: Loops, we followed the embodiment of a choreographic practice, captured in files and legal documents, all the way up and back, to facing the earth.<br />
<br />
The text evidences some of the ways in which inventorying could work as a research method, specifically when interrogating digital apparatuses and the ethico-political implications that are nested in the most legitimated and capitalized industries of technocolonial totalizing innovation, defining the limits of the fictional construction of fleshy matters: what computes as a body.<br />
<br />
The main engine of Possible Bodies as a collective research project is to problematise the hegemonic pulsations in those technologies that deal with "bodies" in their volumetric dimension. We understand the research as an intersectional practice with a trans-feminist sensibility, attentive to the aesthetics and ethics that shape the (somato)political conditioning of our everyday.<br />
<br />
Evidently, the questions both sharpened and overflowed while we studied the items and tested their limits, fueling Possible Bodies as a project. Inventorying opens up possibilities for an urgent mutation of that complex matrix by diffracting from probabilistic normativity.<br />
<br />
== Notes ==<br />
<br />
[1] http://www.etymonline.com/index.php?term=inventory<br />
<br />
[2] François Zajega, interview, 2017<br />
<br />
[3] https://docs.blender.org/manual/en/dev/editors/3dview/object/editing/transform/control/pivot_point/individual_origins.html<br />
<br />
[4] https://docs.blender.org/manual/en/dev/editors/3dview/object/origin.html<br />
<br />
[5] https://docs.blender.org/manual/en/dev/render/cycles/world.html<br />
<br />
[6] http://mocap.cs.cmu.edu/<br />
<br />
[7] https://vimeo.com/45615376<br />
<br />
[8] http://www.flong.com/storage/code/golan_loops.zip<br />
<br />
[10] This is precisely how the Merce Cunningham Dance Capsules website introduces itself: http://dancecapsules.merce.broadleafclients.com/index.cfm <br />
<br />
[11] Website of the OpenEnded Group http://openendedgroup.com/<br />
<br />
[12] http://openendedgroup.com/writings/drawingTrue.html<br />
<br />
[13] Paul Kaiser (OpenEnded Group) quoted in ScienceLine http://scienceline.org/2012/07/dancing-in-digital-immortality/<br />
<br />
[14] https://digitalcollections.nypl.org/items/2103ccd0-e87e-0131-dc7f-3c075448cc4b<br />
<br />
[15] Merce Cunningham: An Interview on R. Buckminster Fuller and Black Mountain College https://jeffreyschnapp.com/2016/08/31/merce-cunningham-an-interview-on-r-buckminster-fuller-and-black-mountain-college/<br />
<br />
[16] https://www.bfi.org/about-fuller/big-ideas/dymaxion-world/dymaxion-map<br />
<br />
[17] https://www.google.com/patents/US2393676<br />
<br />
== Bibliography ==<br />
<br />
* Ahmed, S., 2006. Queer Phenomenology. Orientations, Objects, Others. Durham: Duke.<br />
* Barad, K., 2012. Matter feels, converses, suffers, desires, yearns and remembers. In Dolphijn, R. and van der Tuin, I., New Materialism: Interviews & Cartographies. Ann Arbor: Open Humanities Press.<br />
* Copeland, R., 2004. Merce Cunningham: The Modernizing of Modern Dance. New York: Routledge.<br />
* Merce Cunningham: An Interview on R. Buckminster Fuller and Black Mountain College https://jeffreyschnapp.com/2016/08/31/merce-cunningham-an-interview-on-r-buckminster-fuller-and-black-mountain-college/ (last visited: March 2017)<br />
* Haraway, D., 1988. Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. Feminist Studies, 14(3), 575-599.<br />
* ___, D., 1992. The promises of monsters: a regenerative politics for inappropriate/d others. In Grossberg, L., Nelson, C. and Treichler, P.A. (eds.), Cultural Studies. New York, 1992, pp. 295–337.<br />
* Fisher, G., 2014. Blender 3D Basics Beginner's Guide. Packt Publishing.<br />
* Harding, S., 1986. The Science Question in Feminism. Ithaca, NY: Cornell University Press https://www.andrew.cmu.edu/course/76-327A/readings/Harding.pdf (last visited: March 2017)<br />
* Portanova, S., 2013. Moving without a Body. Cambridge: MIT Press.<br />
* Preciado, P.B., 2008. Pharmaco-pornographic Politics: Towards a New Gender Ecology. Parallax, vol. 14, no. 1, 105-117 http://urome.miami.edu/media/college-of-arts-and-sciences/content-assets/center-for-humanities/docs/irg-pdfs/Preciado.pdf (last visited: March 2017)<br />
* Rocha, J., 2016. Testing texting South: a political fiction. In Machine Research <br />
* Snelting, F. and Rocha, J., 2017 (forthcoming). MakeHuman. In Braidotti, R. et al (eds.), The Posthuman Glossary. Bloomsbury: Bloomsbury Academic.<br />
* Stengers, I., 2013. Une culture du dépaysement, Hermès, La Revue (n°67), p. 201-201.<br />
* Steyerl, H., 2012. In Free Fall: A Thought Experiment on Vertical Perspective. E-flux journal. http://www.e-flux.com/journal/24/67860/in-free-fall-a-thought-experiment-on-vertical-perspective/ (last visited: March 2017)<br />
* ___, H., 2012. Ripping reality: Blind spots and wrecked data in 3D http://eipcp.net/e/projects/heterolingual/files/hitosteyerl/print (last visited: March 2017)</div>
FS
https://volumetricregimes.xyz/index.php?title=File:MojoDallas02.jpg&diff=472
File:MojoDallas02.jpg
2020-05-29T07:23:02Z
<p>FS: </p>
<hr />
<div></div>
FS
https://volumetricregimes.xyz/index.php?title=File:Fuller.jpg&diff=471
File:Fuller.jpg
2020-05-29T07:22:48Z
<p>FS: </p>
<hr />
<div></div>
FS
https://volumetricregimes.xyz/index.php?title=File:Cunningham.jpg&diff=470
File:Cunningham.jpg
2020-05-29T07:22:36Z
<p>FS: </p>
<hr />
<div></div>
FS
https://volumetricregimes.xyz/index.php?title=File:Blender.jpg&diff=469
File:Blender.jpg
2020-05-29T07:22:25Z
<p>FS: </p>
<hr />
<div></div>
FS
https://volumetricregimes.xyz/index.php?title=File:MojoDallas01.jpg&diff=468
File:MojoDallas01.jpg
2020-05-29T07:22:07Z
<p>FS: </p>
<hr />
<div></div>
FS
https://volumetricregimes.xyz/index.php?title=Volumetric_Regimes&diff=467
Volumetric Regimes
2020-05-29T07:10:17Z
<p>FS: </p>
<hr />
<div>__NOTOC__<br />
<span class="hi">This wiki is an ongoing workspace for a book in the making. ''Volumetric Regimes'' will be published by Open Humanities Press (2021) in the [http://www.data-browser.net/ DATA browser series], edited by Geoff Cox and Joasia Krysa.</span><br />
<br />
== Volumetric Regimes: material cultures of quantified presence ==<br />
<br />
Edited by [https://possiblebodies.constantvzw.org/inventory/ Possible Bodies] (Jara Rocha and Femke Snelting)<br />
<br />
[[File:Book.png|border|500px]]<br />
<br />
'''Volumetric Regimes emerges from [https://possiblebodies.constantvzw.org Possible Bodies], a collaborative project at the intersection of artistic and academic research. The project was initiated in 2016 to explore the very concrete and at the same time complex and fictional entities of so-called "bodies" in the context of 3D computation. Possible Bodies developed alongside an inventory of cases and resulted in texts, workshops, visual essays and performances. This book brings together diverse materials from an ongoing conversation between artists, software developers and theorists working with the techniques and technologies of detecting, tracking, printing, modelling and rendering.'''<br />
<br />
'''The book will be organised in five chapters, each containing differently formatted materials. Each chapter starts with a commissioned contribution which addresses the material-discursive entanglements that emerge from the crossing of quite differentiated worldings. Contributors include Ramon Amaro, Maria Dada, Helen Pritchard and Nicolas Malevé.'''<br />
<br />
&rarr; [[About|More about ''Volumetric Regimes'']]<br />
<br />
== Index ==<br />
<br />
=== [[Introduction]] ===<br />
=== [[somatopologies|somatopologies: On the ongoing rendering of corpo-realities]] ===<br />
{{:somatopologies}}<br />
=== [[clandestine_disorder|Signs of clandestine disorder: The continuous after-math of 3D computationalism]] ===<br />
{{:clandestine_disorder}}<br />
=== [[x,_y,_z|x, y, z: Dimensional axes of power]] ===<br />
{{:x,_y,_z}}<br />
=== [[parametric_unknowns|Parametric Unknowns: Hypercomputation between the probable and the possible]] ===<br />
{{:parametric_unknowns}}<br />
=== [[depths_and_densities|Depths and Densities: Accidented and dissonant spacetimes]] ===<br />
{{:depths_and_densities}}<br />
=== [[Colophon]] ===</div>
FS
https://volumetricregimes.xyz/index.php?title=The_Fragility_of_Life&diff=466
The Fragility of Life
2020-05-22T09:50:45Z
<p>FS: </p>
<hr />
<div><br />
{| class="wikitable"<br />
|-<br />
| A conversation between Femke Snelting, Jara Rocha and Simone Niquille during the Possible Bodies working session <ref>http://www.akademie-solitude.de/en/events/possible-bodies-iteration-i~no3869/</ref> at Schloss Solitude in Stuttgart in May 2017, following a screening of process material of Niquille’s film ''The Fragility of Life''<ref>http://www.fragilityof.life/</ref>. It was first published by Het Nieuwe Instituut online <ref>https://research-development.hetnieuweinstituut.nl/en/fellows/fragility-life</ref><br />
|}<br />
<br />
== The Fragility of Life ==<br />
<br />
[[File:06 imgf0016.png|thumb|none|600px|06 CAESAR database used as training set in the research towards a parametric three-dimensional body model for animation. “Method for providing a threedimensional body model,” Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V., 2015.]]<br />
<br />
Jara Rocha: In the process of developing "Possible Bodies" one of the excursions we made was to the Royal Belgian Institute of Natural Sciences’ 3D reproduction workshop in Brussels, where they were working on reproductions of hominids. Another visitor asked: “How do you know how many hairs a monkey like this should have?” The person working on the 3D reproduction replied, “It is not a monkey.” You could see that he had an empathetic connection to the on-screen model he was working on, being of the same species. I would like to ask you about norms and embedded norms in software. Talking about objective truth and parametric representation and the like in this example you refer to, there is a huge norm that worries me, that of species, of unquestioned humanness. When we talk about bodies, we can push certain limits because of the hegemony of the species. In court, the norm is anthropocentric, but when it comes to representation…<br />
<br />
Femke Snelting: This is the subject of "Kritios They"?<br />
<br />
Simone Niquille: "Kritios They" is a character in "The Fragility of Life", a result of the research project "The Contents". While "The Contents" is based on the assumption that we as humans possess and create content, living in our daily networked space of appearance that is used for or against us, I became interested in the corporeal fragility exposed and created through this data, or that the data itself possesses. In the film, the decimation scene questions this quite bluntly: when does a form stop being human, when do we lose empathy towards the representation? Merely reducing the 3D mesh’s resolution, decreasing its information density, can affect the viewer’s empathy. Suddenly the mesh might no longer be perceived as human, and is revealed as a simple geometric construct: A plain surface onto which any and all interpretation can be projected. The contemporary accelerating frenzy of collecting as much data as possible on one single individual to achieve maximum transparency and construct a ‘fleshed out’ profile is a fragile endeavour. More information does not necessarily lead to a more defined image. In the case of "Kritios They", I was interested in character creation software and the parameters embedded in its interfaces. The parameters come with limitations: an arm can only be this long, skin colour is represented within a specified spectrum, and so on. How were these decisions made and these parameters determined?<br />
<br />
<blockquote>"Looking at design history and the field’s striving to create a standardised body to better cater to the human form, I found similarities of intent and problematics."</blockquote><br />
<br />
[[File:04 bertillon identification system.jpg|thumb|none|600px| Alphonse Bertillon, Anthropometric data sheet and Identification Card, 1896.]]<br />
<br />
[[File:03 henry dreyfuss humanscale.jpg|thumb|none|600px| Humanscale 7b: Seated at Work Selector, Henry Dreyfuss Associates, MIT Press, 1981. collection.cooperhewitt.org/objects/51689299]]<br />
<br />
Anthropometric efforts ranging from Da Vinci’s Vitruvian Man, to Corbusier’s Modulor, to Alphonse Bertillon’s Signaletic Instructions and invention of the mug shot, to Henry Dreyfuss’s Humanscale… What these projects share is an attempt to translate the human body into numbers. Be it for the sake of comparison, efficiency, policing…<br />
<br />
In a "Washington Post" article <ref>https://www.washingtonpost.com/gdpr-consent/?next_url=https%3a%2f%2fwww.washingtonpost.com%2fwp-srv%2fnational%2fdotmil%2farkin020199.htm</ref> from 1999 on newly developed voice mimicking technology, Daniel T. Kuehl, the chairman of the Information Operations department at the National Defense University in Washington (the military's school for information warfare) is quoted as saying: "Once you can take any kind of information and reduce it into ones and zeroes, you can do some pretty interesting things."<br />
<br />
To create the "Kritios They" character I used a program called Fuse<ref>https://www.adobe.com/products/fuse.html#</ref>. It was recently acquired by Adobe and is in the process of being integrated into their Creative Cloud services. It originated as assembly-based 3D modelling research carried out at Stanford University. The Fuse interface segments the body into Frankenstein-like parts to be assembled by the user. However, the seemingly restriction-free Lego-character-design interface is littered with limitations. Not all body parts mix as well as others; some create uncanny folds and seams when assembled. The torso has to be a certain length and the legs positioned in a certain way, and when I try to adapt these elements the automatic rigging process doesn’t work, because the mesh won’t be recognised as a body.<br />
<br />
A lot of these processes and workflows demand content that is very specific to their definition of the human form in order to function. As a result, they don’t account for anything that diverges from that norm, establishing a parametric truth that is biased and discriminatory. This raises the question of what that norm is and how, by whom and for whom it has been defined.<br />
<br />
FS: Could you say something about the notion of ‘parametric truth’ that you used?<br />
<br />
SN: Realising the existence of a built-in anthropometric standard in such software, I started looking at use cases of motion capture and 3D scanning in areas other than entertainment: applications that demand objectivity. I was particularly interested in crime and accident reconstruction animations that are produced as visual evidence or court support material. Traditionally this support material would consist of photographs, diagrams and objects. More recently this sometimes includes forensic animations commissioned by either party. The animations are produced with various software and tools, sometimes including motion capture and/or 3D scanning technologies.<br />
<br />
These animations are created post-fact: a varying amalgam of witness testimonies, crime scene survey data, police and medical reports etc., effectively creating a ‘version of’, rather than an objective illustration. One highly problematic instance was an animation intended as a piece of evidence in the trial of George Zimmerman on the charge of second-degree murder on account of the shooting of Trayvon Martin in 2012. Zimmerman’s defence commissioned an animation to attest that his actions were self-defence. Among the online documentation of the trial is a roughly two-hour long video of Zimmerman’s attorney questioning the animator on his process. Within these two hours of questioning the defence attorney attempts to demonstrate the animation’s objectivity by minutely scrutinising the creation process. It is revealed that a motion capture suit was used to capture the character’s animations, to digitally re-enact Zimmerman and Martin. The animator states that he was the one wearing the motion capture suit, portraying both Zimmerman and Martin. If this weren’t already enough to debunk any claim to objectivity, the attorney asks: “How does the computer know that it is recording a body?” Upon which the animator responds: “You place the 16 sensors on the body and then on screen you see the body move in accordance.”<br />
<br />
<blockquote>"But what is on screen is merely a representation of the data transmitted by 16 sensors, not a body."</blockquote><br />
<br />
A misplaced or wrongly calibrated sensor would yield an entirely different animation. And further, the anthropometric measurements of the two subjects were added in post-production, after the animation data had been recorded from the animator’s re-enactment. In this case the animation was thankfully not admitted as a piece of evidence, but it was nevertheless allowed to be screened during the trial. What makes this different from other court material is that seeing something play out visually, in a medium we are used to consuming, takes root in a different part of your memory than a verbal account and renders one version more visible than others. Even with part of the animation based on data collected at the crime scene, a part of the reproduction will remain approximation and assumption.<br />
<br />
This is visible in the visual choices of the animation, for example. Most parts are modelled with minimal detail (I assume to communicate objectivity). “There were no superfluous aesthetic choices made.” However, some elements receive very selective and intentional detailing. The crime scene’s grassy ground is depicted as a flat plane with an added photographic texture of grass rather than 3D grass produced with particle hair. On the other hand, Zimmerman and Martin’s skin colour is clearly accentuated as well as the hoodie worn by Trayvon Martin, a crucial piece of the defence’s case. The hoodie was instrumentalized as evidence of violent intentions during the trial, where it was claimed that if Martin had not worn the hood up he would not have been perceived as a threat by Zimmerman. To model these elements at varying subjective resolution was a deliberate choice. It could have depicted raw armatures instead of textured figures, for example. The animation was designed to focus on specific elements; shifting that focus would produce differing versions.<br />
<br />
[[File:09 newsdirect.png|thumb|none|600px| 3D animation by Reuter’s owned News Direct “Transform your News with 3D Graphics”, “FBI investigates George Zimmerman for shooting of Florida teen, Trayvon Martin” News Direct, 2012.]]<br />
<br />
FS: This is something that fascinates me, the different levels of detailing that occur in the high-octane world of 3D, where some elements receive an enormous amount of attention and other elements, such as the skeleton or the genitals, almost none.<br />
<br />
SN: Yes, like the 16 sensors representing a body…<br />
<br />
FS: Where do you locate these different levels of resolution?<br />
<br />
SN: Within the CGI [computer-generated imagery] community, modellers are obsessed by creating 3D renders in the highest possible resolution as a technical as well as artistic accomplishment, but also as a form of muscle flexing of computing power. Detail is not merely a question of the render quality, but equally importantly it can be the realism achieved; a tear on a cheek, a thin film of sweat on the skin. On forums you come across discussions on something called subsurface scattering<ref>https://docs.blender.org/manual/ja/dev/render/shader_nodes/shader/sss.html?highlight=subsurface%20scattering</ref>, which is used to simulate blood vessels under the skin to make it look more realistic, to add weight and life to the hollow 3D mesh. However, the discussions tend to focus on pristine young white skin, oblivious to diversity.<br />
<br />
JR: This raises the notion of the 'epistemic object'. The matter you manipulated brings a question to a specific table, but it cannot be on every table: it cannot be on the techies’ table and on the designers’ table at once. However, under certain conditions, with a specific language and political agenda and so on, "The Contents" raises certain issues and serves as a starting point for a conversation, or facilitates an argument for one. This is where I find your work extremely interesting. I consider what you make to be objects around which to formulate a thought, for thinking about specific crossroads. They can as such be considered 'disobedient action-research': epistemic objects in the sense that they make me think, help me wonder about political urgencies, techno-ecological systems and the decisions that went into them.<br />
<br />
SN: That’s specifically what two scenes in the film experiment with: the sleeping shadow and the decimating mug shot. They depend on the viewer’s expectations.<br />
<br />
<blockquote>"The most beautiful reaction to the decimating mug shot scene has been: 'Why does it suddenly look so scary?'"</blockquote><br />
<br />
The viewer has an expectation of the image that is slowly taken away, quite literally, by lowering the resolution. Similarly with the sleeping scene: what appears as a sleeping figure filmed through frosted glass unveils itself by changing the camera angle. The new perspective reveals another reality. What I am trying to figure out now is how the images operate in different spaces. Probably there isn’t one single application, but they can be in "The Fragility of Life" as well as in a music video or an ergonomic simulation, for example, and travel through different media and contexts. I am interested in how the images exist in these different spaces.<br />
<br />
FS: We see that these renderings, not only yours but in general, are very volatile in their ability to transgress applications, on the large scale of movements ranging from Hollywood to medical, to gaming, to military. But it seems that, seeing your work, this transgression can also function on different levels.<br />
<br />
SN: These different industries share software and tools, which are after all developed within their crossroads.<br />
<br />
<blockquote>"Creating images that attempt to transgress levels of application is a way for me to reverse the tangent, and question the tools of production."</blockquote><br />
<br />
Is the image produced differently if the tool is the same or is its application different? If 3D modelling software created by the gaming industry were used to create forensic animations, possibly incarcerating people, what are the parameters under which that software operates? This is a vital question affecting real lives. <br />
<br />
JR: Can you please introduce us to Mr. item #0082a?<br />
<br />
SN: In attempting to find answers to some of the questions on the Fuse character creator software’s parameters I came across a research project initiated by the U.S. Air Force Research Laboratory from the late 1990s and early 2000s called "CAESAR" [Civilian American and European Surface Anthropometry Resource].<br />
<br />
#0082a is a whole body scan mesh from the CAESAR database <ref>http://store.sae.org/caesar/</ref>, presumably the 82nd scanned subject in position a. The "CAESAR" project’s aim was to create a new anthropometric surface database of body measurements for the Air Force’s cockpit and uniform design. The new database was necessary to represent the contemporary U.S. military staff. Previous measurements were outdated, as the U.S. population had grown more diverse since the last measurement standards had been registered. This large-scale project consisted of scanning about 2000 bodies in the United States, Italy and the Netherlands. A dedicated team, outfitted with the first whole body scanner developed specifically for this purpose by a company called Cyberware, travelled to various cities within these countries. This is how I initially found out about the "CAESAR" database, by trying to find information on the Cyberware scanner. <br />
<br />
I found a video somewhere deep within YouTube; it was this very strange and wonderful video of a 3D figure dancing on a NIST [U.S. National Institute of Standards and Technology] logo. The figure looked like an early 3D scan that had been crudely animated. I got in touch with the YouTube user and through a Skype conversation learned about his involvement in the "CAESAR" project through his work at NIST. Because of his own personal fascination with 3D animation he had made the video I initially found by animating one of the "CAESAR" scans, #0082a, with an early version of Poser.<br />
<br />
[[File:Simone1.jpg|thumb|none|600px|Leonard Nimoy is one of the first actors to get scanned and be replicated digitally in Star Trek IV: The Voyage Home. […] Image: Cinefex 29, 02/1987.]]<br />
<br />
Cyberware <ref>https://en.wikipedia.org/wiki/Cyberware</ref> has its origins in the entertainment industry. They scanned Leonard Nimoy, who portrays Spock in the Star Trek series, for the famous dream sequence in the 1986 movie Star Trek IV: The Voyage Home. Nimoy's head scan is among the first 3D scans… The trajectory of the Cyberware company is part of a curious pattern: it originated in Hollywood as a head scanner, advanced to a whole body scanner for the military, and completed the entertainment-military-industrial cycle by returning to the entertainment industry for whole-body scanning applications.<br />
<br />
"CAESAR" is, as far as I know, one of the biggest databases of scanned body meshes and anthropometric data available to this day. I assume that is why it keeps on being used, recycled, for research in need of humanoid 3D meshes. <br />
<br />
While looking into the history of the character creator software Fuse I sifted through 3D mesh segmentation research, which later informed the assembly modelling research at Stanford that became Fuse. #0082 was among 20 "CAESAR" scans used in a database assembled specifically for this segmentation research and thus ultimately played a role in setting the parameters for Fuse. A very limited amount of training data that, in the case of Fuse, ended up becoming a widely distributed commercial software. At least at this point the training data should be reviewed… It felt like a whole ecology of past and future 3D anthropometric standards revealed itself through this one mesh.<br />
<br />
[[File:Fol thefragilityoflife-2.jpeg|thumb|none|600px|]]<br />
<br />
== References ==</div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Common.css&diff=465
MediaWiki:Common.css
2020-05-22T09:48:45Z
<p>FS: </p>
<hr />
<div>/* CSS placed here will be applied to all skins */<br />
<br />
h1 {display: none;}<br />
<br />
h2 {font-weight: bold; font-size: 200% !important;}<br />
<br />
img {margin: 1em 0 1em 0 !important;}<br />
<br />
.gallerytext, .thumbcaption {font-size: 85% !important;}<br />
<br />
.tocnumber { display: none; }<br />
<br />
blockquote {font-weight: bold; font-style: italic;}</div>
FS
https://volumetricregimes.xyz/index.php?title=The_Fragility_of_Life&diff=464
The Fragility of Life
2020-05-22T09:47:53Z
<p>FS: </p>
<hr />
<div><br />
{| class="wikitable"<br />
|-<br />
| A conversation between Femke Snelting, Jara Rocha and Simone Niquille during the Possible Bodies working session <ref>http://www.akademie-solitude.de/en/events/possible-bodies-iteration-i~no3869/</ref> at Schloss Solitude in Stuttgart in May 2017, following a screening of process material of Niquille’s film ''The Fragility of Life''<ref>http://www.fragilityof.life/</ref>. It was first published by Het Nieuwe Instituut online <ref>https://research-development.hetnieuweinstituut.nl/en/fellows/fragility-life</ref><br />
|}<br />
<br />
== The Fragility of Life ==<br />
<br />
[[File:06 imgf0016.png|thumb|none|600px|06 CAESAR database used as training set in the research towards a parametric three-dimensional body model for animation. “Method for providing a threedimensional body model,” Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V., 2015.]]<br />
<br />
Jara Rocha: In the process of developing "Possible Bodies", one of the excursions we made was to the Royal Belgian Institute of Natural Sciences’ 3D reproduction workshop in Brussels, where they were working on reproductions of hominids. Another visitor asked: “How do you know how many hairs a monkey like this should have?” The person working on the 3D reproduction replied, “It is not a monkey.” You could see that he had an empathetic connection to the on-screen model he was working on, being of the same species. I would like to ask you about norms and embedded norms in software. Talking about objective truth, parametric representation and the like, as in this example, there is a huge norm that worries me: that of species, of unquestioned humanness. When we talk about bodies, we can push certain limits because of the hegemony of the species. In court, the norm is anthropocentric, but when it comes to representation…<br />
<br />
Femke Snelting: This is the subject of "Kritios They"?<br />
<br />
Simone Niquille: "Kritios They" is a character in "The Fragility of Life", a result of the research project "The Contents". While "The Contents" is based on the assumption that we as humans possess and create content, living in our daily networked space of appearance that is used for or against us, I became interested in the corporeal fragility exposed and created through this data, or that the data itself possesses. In the film, the decimation scene questions this quite bluntly: when does a form stop being human, when do we lose empathy towards the representation? Merely reducing the 3D mesh’s resolution, decreasing its information density, can affect the viewer’s empathy. Suddenly the mesh might no longer be perceived as human, and is revealed as a simple geometric construct: a plain surface onto which any and all interpretation can be projected. The contemporary accelerating frenzy of collecting as much data as possible on one single individual to achieve maximum transparency and construct a ‘fleshed out’ profile is a fragile endeavour. More information does not necessarily lead to a more defined image. In the case of "Kritios They", I was interested in character creation software and the parameters embedded in its interfaces. The parameters come with limitations: an arm can only be this long, skin colour is represented within a specified spectrum, and so on. How were these decisions made and these parameters determined?<br />
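<br />
The decimation described here can be made concrete with a minimal sketch. This is an illustration of the general technique (vertex-clustering decimation), under the assumption that any resolution-lowering method makes the point; it is not the algorithm actually used in the film:<br />
<br />
```python
# Vertex-clustering decimation: a hedged sketch of how lowering a mesh's
# "information density" works. Illustrative only, not the film's method.

def decimate(vertices, cell=1.0):
    """Merge all vertices that fall into the same grid cell of size `cell`,
    replacing each cluster with its centroid."""
    clusters = {}
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        clusters.setdefault(key, []).append((x, y, z))
    return [tuple(sum(axis) / len(pts) for axis in zip(*pts))
            for pts in clusters.values()]

# A dense strip of 100 vertices collapses to a handful of points;
# the coarser the grid, the less recognisable the original shape.
dense = [(i * 0.1, (i % 7) * 0.1, 0.0) for i in range(100)]
coarse = decimate(dense, cell=1.0)
assert len(coarse) < len(dense)
```
<br />
The `cell` parameter plays the role of the resolution slider: as it grows, fewer vertices survive, and at some point the figure stops reading as a body.<br />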
<br />
<blockquote>"Looking at design history and the field’s striving to create a standardised body to better cater to the human form, I found similarities of intent and problematics."</blockquote><br />
<br />
[[File:04 bertillon identification system.jpg|thumb|none|600px| Alphonse Bertillon, Anthropometric data sheet and Identification Card, 1896.]]<br />
<br />
[[File:03 henry dreyfuss humanscale.jpg|thumb|none|600px| Humanscale 7b: Seated at Work Selector, Henry Dreyfuss Associates, MIT Press, 1981. collection.cooperhewitt.org/objects/51689299]]<br />
<br />
Anthropometric efforts range from Leonardo da Vinci’s Vitruvian Man, to Le Corbusier’s Modulor, to Alphonse Bertillon’s Signaletic Instructions and invention of the mug shot, to Henry Dreyfuss’s Humanscale… What these projects share is an attempt to translate the human body into numbers, be it for the sake of comparison, efficiency, policing…<br />
<br />
In a "Washington Post" article <ref>https://www.washingtonpost.com/gdpr-consent/?next_url=https%3a%2f%2fwww.washingtonpost.com%2fwp-srv%2fnational%2fdotmil%2farkin020199.htm</ref> from 1999 on newly developed voice-mimicking technology, Daniel T. Kuehl, the chairman of the Information Operations department at the National Defense University in Washington (the military's school for information warfare), is quoted as saying: "Once you can take any kind of information and reduce it into ones and zeroes, you can do some pretty interesting things."<br />
<br />
To create the "Kritios They" character I used a program called Fuse<ref>https://www.adobe.com/products/fuse.html#</ref>. It was recently acquired by Adobe and is in the process of being integrated into their Creative Cloud services. It originated as assembly-based 3D modelling research carried out at Stanford University. The Fuse interface segments the body into Frankenstein-like parts to be assembled by the user. However, the seemingly restriction-free Lego-character-design interface is littered with limitations. Not all body parts mix as well as others; some create uncanny folds and seams when assembled. The torso has to be a certain length and the legs positioned in a certain way, and when I try to adapt these elements the automatic rigging process doesn’t work because the mesh won’t be recognised as a body.<br />
<br />
A lot of these processes and workflows demand content that is very specific to their definition of the human form in order to function. As a result, they don’t account for anything that diverges from that norm, establishing a parametric truth that is biased and discriminatory. This raises the question of what that norm is and how, by whom and for whom it has been defined.<br />
<br />
FS: Could you say something about the notion of ‘parametric truth’ that you used?<br />
<br />
SN: Realising the existence of a built-in anthropometric standard in such software, I started looking at use cases of motion capture and 3D scanning in areas other than entertainment: applications that demand objectivity. I was particularly interested in crime and accident reconstruction animations that are produced as visual evidence or as court support material. Traditionally this support material would consist of photographs, diagrams and objects. More recently it sometimes includes forensic animations commissioned by either party. The animations are produced with various software and tools, sometimes including motion capture and/or 3D scanning technologies.<br />
<br />
These animations are created after the fact from a varying amalgam of witness testimonies, crime scene survey data, police and medical reports, etc., effectively creating a ‘version of’ rather than an objective illustration. One highly problematic instance was an animation intended as a piece of evidence in the trial of George Zimmerman on the charge of second-degree murder for the shooting of Trayvon Martin in 2012. Zimmerman’s defence commissioned an animation to attest that his actions were self-defence. Among the online documentation of the trial is a roughly two-hour-long video of Zimmerman’s attorney questioning the animator on his process. Within these two hours of questioning, the defence attorney attempts to demonstrate the animation’s objectivity by minutely scrutinising the creation process. It is revealed that a motion capture suit was used to capture the character’s animations, to digitally re-enact Zimmerman and Martin. The animator states that he was the one wearing the motion capture suit, portraying both Zimmerman and Martin. If this weren’t already enough to debunk any claim to objectivity, the attorney asks: “How does the computer know that it is recording a body?” Upon which the animator responds: “You place the 16 sensors on the body and then on screen you see the body move in accordance.”<br />
<br />
<blockquote>"But what is on screen is merely a representation of the data transmitted by 16 sensors, not a body."</blockquote><br />
<br />
A misplaced or wrongly calibrated sensor would yield an entirely different animation. Furthermore, the anthropometric measurements of the two subjects were added in post-production, after the animation data had been recorded from the animator’s re-enactment. In this case the animation was thankfully not admitted as a piece of evidence, but it was nevertheless allowed to be screened during the trial. The difference with showing video in court is that we see something play out visually, in a medium we are used to consuming. It takes root in a different part of your memory than a verbal account and renders one version more visible than others. Even with part of the animation based on data collected at the crime scene, a part of the reproduction will remain approximation and assumption.<br />
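<br />
The point about the sensors can be sketched in a few lines. In the model below (sensor names, layout and offsets are invented for illustration, not taken from the trial material), a captured pose is nothing but sixteen labelled points, and one miscalibrated sensor silently produces a different animation:<br />
<br />
```python
# A "body", in this representation, is just 16 labelled 3D points.
# Names and positions are illustrative assumptions, not the actual rig.

SENSORS = [
    "head", "neck", "chest", "pelvis",
    "l_shoulder", "l_elbow", "l_wrist", "l_hand",
    "r_shoulder", "r_elbow", "r_wrist", "r_hand",
    "l_knee", "l_ankle", "r_knee", "r_ankle",
]

def capture(calibration_error=None):
    """Return one frame of 'motion capture': a dict of sensor -> (x, y, z).
    `calibration_error` maps a sensor name to an (dx, dy, dz) offset."""
    pose = {name: (0.0, (16 - i) / 10.0, 0.0) for i, name in enumerate(SENSORS)}
    for name, (dx, dy, dz) in (calibration_error or {}).items():
        x, y, z = pose[name]
        pose[name] = (x + dx, y + dy, z + dz)
    return pose

# The same movement, recorded with one misplaced wrist sensor, is no
# longer the same "body" on screen:
assert capture() != capture(calibration_error={"l_wrist": (0.3, 0.0, 0.0)})
```
<br />
Everything else shown in court, a silhouette, a skin tone, a hoodie, is interpolated on top of these sixteen numbers.<br />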
<br />
This is visible in the visual choices of the animation, for example. Most parts are modelled with minimal detail (I assume to communicate objectivity): “There were no superfluous aesthetic choices made.” However, some elements receive very selective and intentional detailing. The crime scene’s grassy ground is depicted as a flat plane with an added photographic texture of grass rather than 3D grass produced with particle hair. On the other hand, Zimmerman and Martin’s skin colour is clearly accentuated, as is the hoodie worn by Trayvon Martin, a crucial piece of the defence’s case. The hoodie was instrumentalised as evidence of violent intentions during the trial, where it was claimed that if Martin had not worn the hood up he would not have been perceived as a threat by Zimmerman. To model these elements at varying, subjective resolution was a deliberate choice. The animation could have depicted raw armatures instead of textured figures, for example. It was designed to focus on specific elements; shifting that focus would produce differing versions.<br />
<br />
[[File:09 newsdirect.png|thumb|none|600px| 3D animation by Reuters-owned News Direct “Transform your News with 3D Graphics”, “FBI investigates George Zimmerman for shooting of Florida teen, Trayvon Martin” News Direct, 2012.]]<br />
<br />
FS: This is something that fascinates me: the different levels of detailing that occur in the high-octane world of 3D, where some elements receive an enormous amount of attention and other elements, such as the skeleton or the genitals, almost none.<br />
<br />
SN: Yes, like the 16 sensors representing a body…<br />
<br />
FS: Where do you locate these different levels of resolution?<br />
<br />
SN: Within the CGI [computer-generated imagery] community, modellers are obsessed with creating 3D renders at the highest possible resolution, as a technical as well as artistic accomplishment, but also as a form of muscle-flexing of computing power. Detail is not merely a question of render quality; equally important is the realism achieved: a tear on a cheek, a thin film of sweat on the skin. On forums you come across discussions of something called subsurface scattering<ref>https://docs.blender.org/manual/ja/dev/render/shader_nodes/shader/sss.html?highlight=subsurface%20scattering</ref>, which is used to simulate blood vessels under the skin to make it look more realistic, to add weight and life to the hollow 3D mesh. However, the discussions tend to focus on pristine young white skin, oblivious to diversity.<br />
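<br />
The intuition behind subsurface scattering can be reduced to a simple model: light entering the skin keeps scattering up to a depth that differs per colour channel, with red travelling deepest. A minimal sketch follows; the per-channel depths are invented illustrative values, not Blender's actual shader parameters:<br />
<br />
```python
import math

# Simplified subsurface scattering: the fraction of light still scattering
# at a given depth decays exponentially, with a per-channel "mean free path".
# These depths are illustrative assumptions, not any renderer's defaults.
SCATTER_DEPTH = {"r": 0.48, "g": 0.29, "b": 0.16}

def transmitted(channel, depth):
    """Fraction of light in `channel` still scattering at `depth`."""
    return math.exp(-depth / SCATTER_DEPTH[channel])

# Red penetrates deepest, which is what gives backlit ears their red glow.
assert transmitted("r", 0.3) > transmitted("g", 0.3) > transmitted("b", 0.3)
```
<br />
Whoever chooses those three depths chooses which skin the shader renders "realistically"; a parameter table tuned against one kind of skin bakes that norm into the tool.<br />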
<br />
JR: This raises the notion of the 'epistemic object'. The matter you manipulated brings a question to a specific table, but it cannot be on every table: it cannot be on the ‘techies’ table and on the designers’ table at once. However, under certain conditions, with a specific language and political agenda and so on, "The Contents" raises certain issues and serves as a starting point for a conversation, or facilitates an argument for one. This is where I find your work extremely interesting. I consider what you make to be objects around which to formulate a thought, for thinking about specific crossroads. They can as such be considered 'disobedient action-research': epistemic objects in the sense that they make me think, help me wonder about political urgencies, techno-ecological systems and the decisions that went into them.<br />
<br />
SN: That’s specifically what two scenes in the film experiment with: the sleeping shadow and the decimating mug shot. They depend on the viewer’s expectations.<br />
<br />
<blockquote>"The most beautiful reaction to the decimating mug shot scene has been: 'Why does it suddenly look so scary?'"</blockquote><br />
<br />
The viewer has an expectation of the image that is slowly taken away, quite literally, by lowering the resolution. Similarly with the sleeping scene: what appears as a sleeping figure filmed through frosted glass unveils itself by changing the camera angle. The new perspective reveals another reality. What I am trying to figure out now is how the images operate in different spaces. Probably there isn’t one single application, but they can be in "The Fragility of Life" as well as in a music video or an ergonomic simulation, for example, and travel through different media and contexts. I am interested in how the images exist in these different spaces.<br />
<br />
FS: We see that these renderings, not only yours but in general, are very volatile in their ability to transgress applications, on the large scale of movements ranging from Hollywood to medical, to gaming, to military. But it seems that, seeing your work, this transgression can also function on different levels.<br />
<br />
SN: These different industries share software and tools, which are after all developed at their crossroads.<br />
<br />
<blockquote>"Creating images that attempt to transgress levels of application is a way for me to reverse the tangent, and question the tools of production."</blockquote><br />
<br />
Is the image produced differently if the tool is the same but its application is different? If 3D modelling software created by the gaming industry were used to create forensic animations, possibly incarcerating people, what are the parameters under which that software operates? This is a vital question affecting real lives. <br />
<br />
JR: Can you please introduce us to Mr. item #0082a?<br />
<br />
SN: In attempting to find answers to some of the questions on the Fuse character creator software’s parameters, I came across a research project from the late 1990s and early 2000s initiated by the U.S. Air Force Research Laboratory, called "CAESAR" [Civilian American and European Surface Anthropometry Resource].<br />
<br />
#0082a is a whole-body scan mesh from the CAESAR database <ref>http://store.sae.org/caesar/</ref>, presumably the 82nd scanned subject in position a. The "CAESAR" project’s aim was to create a new anthropometric surface database of body measurements for the Air Force’s cockpit and uniform design. The new database was necessary to represent the contemporary U.S. military staff; previous measurements were outdated, as the U.S. population had grown more diverse since the last measurement standards had been registered. This large-scale project consisted of scanning about 2000 bodies in the United States, Italy and the Netherlands. A dedicated team, outfitted with the first whole-body scanner developed specifically for this purpose by a company called Cyberware, travelled to various cities in these countries. This is how I initially found out about the "CAESAR" database: by trying to find information on the Cyberware scanner. <br />
<br />
I found a video somewhere deep within YouTube: a very strange and wonderful video of a 3D figure dancing on a NIST [U.S. National Institute of Standards and Technology] logo. The figure looked like an early 3D scan that had been crudely animated. I got in touch with the YouTube user and through a Skype conversation learned about his involvement in the "CAESAR" project through his work at NIST. Because of his own personal fascination with 3D animation, he had made the video I initially found by animating one of the "CAESAR" scans, #0082a, with an early version of Poser.<br />
<br />
[[File:Simone1.jpg|thumb|none|600px|Leonard Nimoy is one of the first actors to get scanned and be replicated digitally in Star Trek IV: The Voyage Home. […] Image: Cinefex 29, 02/1987.]]<br />
<br />
Cyberware <ref>https://en.wikipedia.org/wiki/Cyberware</ref> has its origins in the entertainment industry. They scanned Leonard Nimoy, who portrayed Spock in the Star Trek series, for the famous dream sequence in the 1986 movie Star Trek IV: The Voyage Home. Nimoy's head scan is among the first 3D scans… The trajectory of the Cyberware company is part of a curious pattern: it originated in Hollywood as a head scanner, advanced to a whole-body scanner for the military, and completed the entertainment-military-industrial cycle by returning to the entertainment industry for whole-body scanning applications.<br />
<br />
"CAESAR", as far as I know, is one of the biggest databases of scanned body meshes and anthropometric data available to this day. I assume that is why it keeps on being used — recycled — for research in need of humanoid 3D meshes. <br />
<br />
While looking into the history of the character creator software Fuse, I sifted through the 3D mesh segmentation research that later informed the assembly modelling research at Stanford that became Fuse. #0082 was among 20 "CAESAR" scans used in a database assembled specifically for this segmentation research and thus ultimately played a role in setting the parameters for Fuse. A very limited amount of training data that, in the case of Fuse, ended up in a widely distributed piece of commercial software. At this point, at the very least, the training data should be reviewed… It felt like a whole ecology of past and future 3D anthropometric standards revealed itself through this one mesh.<br />
<br />
[[File:Fol thefragilityoflife-2.jpeg|thumb|none|600px|]]<br />
<br />
== References ==</div>
FS
https://volumetricregimes.xyz/index.php?title=The_Fragility_of_Life&diff=463
The Fragility of Life
2020-05-22T09:47:03Z
<p>FS: /* The Fragility of Life */</p>
<hr />
<div><br />
{| class="wikitable"<br />
|-<br />
| A conversation between Femke Snelting, Jara Rocha and Simone Niquille during the Possible Bodies working session <ref>http://www.akademie-solitude.de/en/events/possible-bodies-iteration-i~no3869/</ref> at Schloss Solitude in Stuttgart in May 2017, following a screening of process material of Niquille’s film ''The Fragility of Life''<ref>http://www.fragilityof.life/</ref>. It was first published online by Het Nieuwe Instituut.<ref>https://research-development.hetnieuweinstituut.nl/en/fellows/fragility-life</ref><br />
|}<br />
<br />
== The Fragility of Life ==<br />
<br />
[[File:06 imgf0016.png|thumb|none|600px|06 CAESAR database used as training set in the research towards a parametric three-dimensional body model for animation. “Method for providing a three-dimensional body model,” Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V., 2015.]]<br />
<br />
Jara Rocha: In the process of developing "Possible Bodies", one of the excursions we made was to the Royal Belgian Institute of Natural Sciences’ 3D reproduction workshop in Brussels, where they were working on reproductions of hominids. Another visitor asked: “How do you know how many hairs a monkey like this should have?” The person working on the 3D reproduction replied, “It is not a monkey.” You could see that he had an empathetic connection to the on-screen model he was working on, being of the same species. I would like to ask you about norms and embedded norms in software. Talking about objective truth, parametric representation and the like, as in this example, there is a huge norm that worries me: that of species, of unquestioned humanness. When we talk about bodies, we can push certain limits because of the hegemony of the species. In court, the norm is anthropocentric, but when it comes to representation…<br />
<br />
Femke Snelting: This is the subject of "Kritios They"?<br />
<br />
Simone Niquille: "Kritios They" is a character in "The Fragility of Life", a result of the research project "The Contents". While "The Contents" is based on the assumption that we as humans possess and create content, living in our daily networked space of appearance that is used for or against us, I became interested in the corporeal fragility exposed and created through this data, or that the data itself possesses. In the film, the decimation scene questions this quite bluntly: when does a form stop being human, when do we lose empathy towards the representation? Merely reducing the 3D mesh’s resolution, decreasing its information density, can affect the viewer’s empathy. Suddenly the mesh might no longer be perceived as human, and is revealed as a simple geometric construct: a plain surface onto which any and all interpretation can be projected. The contemporary accelerating frenzy of collecting as much data as possible on one single individual to achieve maximum transparency and construct a ‘fleshed out’ profile is a fragile endeavour. More information does not necessarily lead to a more defined image. In the case of "Kritios They", I was interested in character creation software and the parameters embedded in its interfaces. The parameters come with limitations: an arm can only be this long, skin colour is represented within a specified spectrum, and so on. How were these decisions made and these parameters determined?<br />
<br />
<blockquote>"Looking at design history and the field’s striving to create a standardised body to better cater to the human form, I found similarities of intent and problematics."</blockquote><br />
<br />
[[File:04 bertillon identification system.jpg|thumb|none|600px| Alphonse Bertillon, Anthropometric data sheet and Identification Card, 1896.]]<br />
<br />
[[File:03 henry dreyfuss humanscale.jpg|thumb|none|600px| Humanscale 7b: Seated at Work Selector, Henry Dreyfuss Associates, MIT Press, 1981. collection.cooperhewitt.org/objects/51689299]]<br />
<br />
Anthropometric efforts range from Leonardo da Vinci’s Vitruvian Man, to Le Corbusier’s Modulor, to Alphonse Bertillon’s Signaletic Instructions and invention of the mug shot, to Henry Dreyfuss’s Humanscale… What these projects share is an attempt to translate the human body into numbers, be it for the sake of comparison, efficiency, policing…<br />
<br />
In a "Washington Post" article <ref>https://www.washingtonpost.com/gdpr-consent/?next_url=https%3a%2f%2fwww.washingtonpost.com%2fwp-srv%2fnational%2fdotmil%2farkin020199.htm</ref> from 1999 on newly developed voice-mimicking technology, Daniel T. Kuehl, the chairman of the Information Operations department at the National Defense University in Washington (the military's school for information warfare), is quoted as saying: "Once you can take any kind of information and reduce it into ones and zeroes, you can do some pretty interesting things."<br />
<br />
To create the "Kritios They" character I used a program called Fuse<ref>https://www.adobe.com/products/fuse.html#</ref>. It was recently acquired by Adobe and is in the process of being integrated into their Creative Cloud services. It originated as assembly-based 3D modelling research carried out at Stanford University. The Fuse interface segments the body into Frankenstein-like parts to be assembled by the user. However, the seemingly restriction-free Lego-character-design interface is littered with limitations. Not all body parts mix as well as others; some create uncanny folds and seams when assembled. The torso has to be a certain length and the legs positioned in a certain way, and when I try to adapt these elements the automatic rigging process doesn’t work because the mesh won’t be recognised as a body.<br />
<br />
A lot of these processes and workflows demand content that is very specific to their definition of the human form in order to function. As a result, they don’t account for anything that diverges from that norm, establishing a parametric truth that is biased and discriminatory. This raises the question of what that norm is and how, by whom and for whom it has been defined.<br />
<br />
FS: Could you say something about the notion of ‘parametric truth’ that you used?<br />
<br />
SN: Realising the existence of a built-in anthropometric standard in such software, I started looking at use cases of motion capture and 3D scanning in areas other than entertainment: applications that demand objectivity. I was particularly interested in crime and accident reconstruction animations that are produced as visual evidence or as court support material. Traditionally this support material would consist of photographs, diagrams and objects. More recently it sometimes includes forensic animations commissioned by either party. The animations are produced with various software and tools, sometimes including motion capture and/or 3D scanning technologies.<br />
<br />
These animations are created after the fact from a varying amalgam of witness testimonies, crime scene survey data, police and medical reports, etc., effectively creating a ‘version of’ rather than an objective illustration. One highly problematic instance was an animation intended as a piece of evidence in the trial of George Zimmerman on the charge of second-degree murder for the shooting of Trayvon Martin in 2012. Zimmerman’s defence commissioned an animation to attest that his actions were self-defence. Among the online documentation of the trial is a roughly two-hour-long video of Zimmerman’s attorney questioning the animator on his process. Within these two hours of questioning, the defence attorney attempts to demonstrate the animation’s objectivity by minutely scrutinising the creation process. It is revealed that a motion capture suit was used to capture the character’s animations, to digitally re-enact Zimmerman and Martin. The animator states that he was the one wearing the motion capture suit, portraying both Zimmerman and Martin. If this weren’t already enough to debunk any claim to objectivity, the attorney asks: “How does the computer know that it is recording a body?” Upon which the animator responds: “You place the 16 sensors on the body and then on screen you see the body move in accordance.”<br />
<br />
<blockquote>"But what is on screen is merely a representation of the data transmitted by 16 sensors, not a body."</blockquote><br />
<br />
A misplaced or wrongly calibrated sensor would yield an entirely different animation. Furthermore, the anthropometric measurements of the two subjects were added in post-production, after the animation data had been recorded from the animator’s re-enactment. In this case the animation was thankfully not admitted as a piece of evidence, but it was nevertheless allowed to be screened during the trial. The difference with showing video in court is that we see something play out visually, in a medium we are used to consuming. It takes root in a different part of your memory than a verbal account and renders one version more visible than others. Even with part of the animation based on data collected at the crime scene, a part of the reproduction will remain approximation and assumption.<br />
<br />
This is visible in the visual choices of the animation, for example. Most parts are modelled with minimal detail (I assume to communicate objectivity): “There were no superfluous aesthetic choices made.” However, some elements receive very selective and intentional detailing. The crime scene’s grassy ground is depicted as a flat plane with an added photographic texture of grass rather than 3D grass produced with particle hair. On the other hand, Zimmerman and Martin’s skin colour is clearly accentuated, as is the hoodie worn by Trayvon Martin, a crucial piece of the defence’s case. The hoodie was instrumentalised as evidence of violent intentions during the trial, where it was claimed that if Martin had not worn the hood up he would not have been perceived as a threat by Zimmerman. To model these elements at varying, subjective resolution was a deliberate choice. The animation could have depicted raw armatures instead of textured figures, for example. It was designed to focus on specific elements; shifting that focus would produce differing versions.<br />
<br />
[[File:09 newsdirect.png|thumb|none|600px| 3D animation by Reuters-owned News Direct “Transform your News with 3D Graphics”, “FBI investigates George Zimmerman for shooting of Florida teen, Trayvon Martin” News Direct, 2012.]]<br />
<br />
FS: This is something that fascinates me: the different levels of detailing that occur in the high-octane world of 3D, where some elements receive an enormous amount of attention and other elements, such as the skeleton or the genitals, almost none.<br />
<br />
SN: Yes, like the 16 sensors representing a body…<br />
<br />
FS: Where do you locate these different levels of resolution?<br />
<br />
SN: Within the CGI [computer-generated imagery] community, modellers are obsessed with creating 3D renders at the highest possible resolution, as a technical as well as artistic accomplishment, but also as a form of muscle-flexing of computing power. Detail is not merely a question of render quality; equally important is the realism achieved: a tear on a cheek, a thin film of sweat on the skin. On forums you come across discussions of something called subsurface scattering<ref>https://docs.blender.org/manual/ja/dev/render/shader_nodes/shader/sss.html?highlight=subsurface%20scattering</ref>, which is used to simulate blood vessels under the skin to make it look more realistic, to add weight and life to the hollow 3D mesh. However, the discussions tend to focus on pristine young white skin, oblivious to diversity.<br />
<br />
JR: This raises the notion of the 'epistemic object'. The matter you manipulated brings a question to a specific table, but it cannot be on every table: it cannot be on the ‘techies’ table and on the designers’ table at once. However, under certain conditions, with a specific language and political agenda and so on, "The Contents" raises certain issues and serves as a starting point for a conversation, or facilitates an argument for one. This is where I find your work extremely interesting. I consider what you make to be objects around which to formulate a thought, for thinking about specific crossroads. They can as such be considered 'disobedient action-research': epistemic objects in the sense that they make me think, help me wonder about political urgencies, techno-ecological systems and the decisions that went into them.<br />
<br />
SN: That’s specifically what two scenes in the film experiment with: the sleeping shadow and the decimating mug shot. They depend on the viewer’s expectations.<br />
<br />
<blockquote>"The most beautiful reaction to the decimating mug shot scene has been: 'Why does it suddenly look so scary?'"</blockquote><br />
<br />
The viewer has an expectation of the image that is slowly taken away, quite literally, by lowering the resolution. Similarly with the sleeping scene: what appears as a sleeping figure filmed through frosted glass unveils itself by changing the camera angle. The new perspective reveals another reality. What I am trying to figure out now is how the images operate in different spaces. Probably there isn’t one single application, but they can be in "The Fragility of Life" as well as in a music video or an ergonomic simulation, for example, and travel through different media and contexts. I am interested in how the images exist in these different spaces.<br />
<br />
FS: We see that these renderings, not only yours but in general, are very volatile in their ability to transgress applications, on the large scale of movements ranging from Hollywood to medical, to gaming, to military. But it seems that, seeing your work, this transgression can also function on different levels.<br />
<br />
SN: These different industries share software and tools, which are after all developed at their crossroads.<br />
<br />
<blockquote>"Creating images that attempt to transgress levels of application is a way for me to reverse the tangent, and question the tools of production."</blockquote><br />
<br />
Is the image produced differently if the tool is the same but its application is different? If 3D modelling software created by the gaming industry were used to create forensic animations, possibly incarcerating people, what are the parameters under which that software operates? This is a vital question affecting real lives. <br />
<br />
JR: Can you please introduce us to Mr. item #0082a?<br />
<br />
SN: While attempting to find answers to some of the questions about the Fuse character creator software's parameters, I came across a research project initiated by the U.S. Air Force Research Laboratory in the late 1990s and early 2000s called "CAESAR" [Civilian American and European Surface Anthropometry Resource].<br />
<br />
#0082a is a whole body scan mesh from the CAESAR database <ref>http://store.sae.org/caesar/</ref>, presumably the 82nd scanned subject in position a. The "CAESAR" project's aim was to create a new anthropometric surface database of body measurements for the Air Force's cockpit and uniform design. The new database was necessary to represent the contemporary U.S. military staff: previous measurements were outdated, as the U.S. population had grown more diverse since the last measurement standards were registered. This large-scale project consisted of scanning about 2000 bodies in the United States, Italy and the Netherlands. A dedicated team, outfitted with the first whole body scanner developed specifically for this purpose by a company called Cyberware, travelled to various cities within these countries. This is how I initially found out about the "CAESAR" database: by trying to find information on the Cyberware scanner.<br />
<br />
I found a video somewhere deep within YouTube: a very strange and wonderful video of a 3D figure dancing on a NIST [U.S. National Institute of Standards and Technology] logo. The figure looked like an early 3D scan that had been crudely animated. I got in touch with the YouTube user and, through a Skype conversation, learned about his involvement in the "CAESAR" project through his work at NIST. Out of a personal fascination with 3D animation, he had made the video I initially found by animating one of the "CAESAR" scans, #0082a, with an early version of Poser.<br />
<br />
[[File:Simone1.jpg|thumb|none|600px|Leonard Nimoy is one of the first actors to get scanned and be replicated digitally in Star Trek IV: The Voyage Home. […] Image: Cinefex 29, 02/1987.]]<br />
<br />
Cyberware <ref>https://en.wikipedia.org/wiki/Cyberware</ref> has its origins in the entertainment industry. They scanned Leonard Nimoy, who portrays Spock in the Star Trek series, for the famous dream sequence in the 1986 movie Star Trek IV: The Voyage Home. Nimoy's head scan is among the first 3D scans… The trajectory of the Cyberware company is part of a curious pattern: it originated in Hollywood as a head scanner, advanced to a whole body scanner for the military, and completed the entertainment-military-industrial cycle by returning to the entertainment industry for whole-body scanning applications.<br />
<br />
"CAESAR" is, as far as I know, one of the biggest databases of scanned body meshes and anthropometric data available to this day. I assume that is why it keeps on being used — recycled — for research in need of humanoid 3D meshes.<br />
<br />
While looking into the history of the character creator software Fuse, I sifted through 3D mesh segmentation research, which later informed the assembly modelling research at Stanford that became Fuse. #0082a was among 20 "CAESAR" scans used in a database assembled specifically for this segmentation research, and thus ultimately played a role in setting the parameters for Fuse: a very limited amount of training data that, in the case of Fuse, ended up shaping a widely distributed commercial software. At least at this point the training data should be reviewed… It felt like a whole ecology of past and future 3D anthropometric standards revealed itself through this one mesh.<br />
<br />
[[File:Fol thefragilityoflife-2.jpeg|thumb|none|600px|]]<br />
<br />
== References ==</div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Common.css&diff=123
MediaWiki:Common.css
2020-03-27T16:24:00Z
<p>FS: </p>
<hr />
<div>/* CSS placed here will be applied to all skins */<br />
<br />
h1 {display: none;}<br />
<br />
h2 {font-weight: bold; font-size: 200% !important;}<br />
<br />
img {margin: 1em 0 1em 0 !important;}</div>
FS
https://volumetricregimes.xyz/index.php?title=Volumetric_Regimes&diff=119
Volumetric Regimes
2020-03-27T16:21:55Z
<p>FS: /* Volumetric Regimes: material cultures of quantified presence */</p>
<hr />
<div>__NOTOC__<br />
<span class="hi">This wiki is an ongoing workspace for a book in the making. ''Volumetric Regimes'' will be published by Open Humanities Press (2021) in the [http://www.data-browser.net/ DATA browser series], edited by Geoff Cox and Joasia Krysa.</span><br />
<br />
== Volumetric Regimes: material cultures of quantified presence ==<br />
<br />
Edited by [https://possiblebodies.constantvzw.org/inventory/ Possible Bodies] (Jara Rocha and Femke Snelting)<br />
<br />
[[File:Book.png|border|500px]]<br />
<br />
'''Volumetric Regimes emerges from [https://possiblebodies.constantvzw.org Possible Bodies], a collaborative project on the intersection between artistic and academic research. The project was initiated in 2016 to explore the very concrete and at the same time complex and fictional entities of so-called "bodies" in the context of 3D computation. Possible Bodies developed alongside an inventory of cases and resulted in texts, workshops, visual essays and performances. This book would bring together diverse materials from an ongoing conversation between artists, software developers and theorists working with the techniques and technologies of detecting, tracking, printing, modelling and rendering.'''<br />
<br />
'''The book will be organised in five chapters, each containing differently formatted materials. Each chapter starts with a commissioned contribution which addresses the material-discursive entanglements that emerge from the crossing of quite differentiated worldlings. Contributors include: Ramon Amaro, Maria Dada, Helen Pritchard, Nicolas Malevé.'''<br />
<br />
&rarr; [[About|More about ''Volumetric Regimes'']]<br />
<br />
== Chapter index ==<br />
<br />
=== [[Introduction]] ===<br />
=== [[somatopologies|somatopologies: On the ongoing rendering of corpo-realities]] ===<br />
=== [[clandestine_disorder|Signs of clandestine disorder: The continuous after-math of 3D computationalism]] ===<br />
=== [[x,_y,_z|x, y, z: Dimensional axes of power]] ===<br />
=== [[parametric_unknowns|Parametric Unknowns: Hypercomputation between the probable and the possible]] ===<br />
=== [[depths_and_densities|Depths and Densities: Accidented and dissonant spacetimes]] ===<br />
=== [[Colophon]] ===</div>
FS
https://volumetricregimes.xyz/index.php?title=X,_y,_z&diff=103
X, y, z
2020-03-27T16:03:16Z
<p>FS: /* x, y, z: Dimensional axes of power */</p>
<hr />
<div>== x, y, z: Dimensional axes of power ==<br />
<br />
* Text: [[Contribution by Helen Pritchard]] (2020)<br />
* Text: Rocha, J. and Snelting, F. (2017) [[The Possible Bodies Inventory: dis-orientation and its aftermath]] in "Cuerpos Poliédricos", Inmaterial Journal vol. 2, num. 3 (Barcelona: Bau College of Design)<br />
* Visual essay: [[Possible Bodies, x, y, z (4 filmstills)]] (2018)<br />
* Text: [[Invasive Imagination and its agential cuts]] (2019)</div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Sidebar&diff=101
MediaWiki:Sidebar
2020-03-27T16:02:11Z
<p>FS: </p>
<hr />
<div><br />
* navigation<br />
** Main_Page|Volumetric Regimes<br />
** about|About this book<br />
** introduction|Introduction<br />
** somatopologies|Somatopologies<br />
** clandestine_disorder|Signs of Clandestine Disorder<br />
** x,_y,_z| x, y, z: Dimensional Axes of Power<br />
** parametric_unknowns|Parametric Unknowns<br />
** depths_and_densities|Depths and Densities<br />
<br />
* SEARCH<br />
* TOOLBOX<br />
* LANGUAGES</div>
FS
https://volumetricregimes.xyz/index.php?title=MakeHuman&diff=100
MakeHuman
2020-03-27T16:00:47Z
<p>FS: /* Bibliography */</p>
<hr />
<div>== MakeHuman ==<br />
<br />
MakeHuman is an Open Source software for modeling 3-dimensional humanoid characters [1]. Including a concrete software object in this glossary means addressing specific entanglements of technology, representation and normativity: a potent triangle that MakeHuman sits in the middle of. But MakeHuman does not only deserve our attention due to the technological power of self-representation that it affords. As an Open Source project, it is shaped by the conditions of interrogation and transformability, guaranteed through its license. Like many other F/LOSS projects, MakeHuman is surrounded by a rich constellation of textual objects, expressed through publicly accessible source code, code-comments, bugtrackers, forums and documentation [2]. This porousness facilitated the shaping of a collective inquiry, activated through experiments, conversations and mediations [3]. In collaboration with architects, dancers, trans*-activists, design students, animators and others, we are turning MakeHuman into a thinking machine, a device to think critically along physical and virtual imaginaries. Software is culture, and hence software-making is world-making. It is a means for relationalities, not a crystallized cultural end [4].<br />
<br />
=== Software: we've got a situation here ===<br />
<br />
MakeHuman is '3D computer graphics middleware designed for the prototyping of photo realistic humanoids' and has gained visibility and popularity over time [5]. It is actively developed by a collective of programmers, algorithms, modelers and academics and used by amateur animators to prototype modeling, by natural history museums for creating exhibition displays, by engineers to test multi-camera systems and by game-developers for sketching bespoke characters [6]. Developers and users evidently work together to define and codify the conditions of presence for virtual bodies in MakeHuman [7]. Since each of the agents in this collective somehow operates under the Modern regime of representation, we find the software full of assumptions about the naturality of perspective-based and linear representations, the essential properties of the species and so forth. Through its curious naming the project evokes the demiurge, dreaming of 'making' 'humans' to resemble his own image; the deviceful naming is a reminder of how the semiotic-material secrets of life's flows are strongly linked to the way software represents or allows bodies to be represented [8]. The modern subject, defined by the freedom to make and decide, is trained to self-construct under the narcissistic fantasy of “correct”, “proper” or “accurate” representations of the self. These virtual bodies matter to us because their persistent representations cause mirror affects and effects on both sides of the screen [9].<br />
MakeHuman is “middleware”, a device in the middle: a composition machine that glues the deliriums of the “quantified self” to those of Hollywood imagery, all of it made operational through scientific anthropomorphic data and the graphic tricks of 3D-hyper-real rendering. From software development to character animation, from scientific proof to surveillance, the practices crossing through MakeHuman produce images, imaginations and imaginaries that are part of a concrete and situated cultural assemblage of hetero-patriarchal positivism and humanism. Found in and fed by mainstream mediated representations, these imaginations generally align with the body stereotypes that belong to advanced capitalism and post-colonialist projections. Virtual bodies only look “normal” because they appear to fit into that complex situation.<br />
<br />
=== Un-taming the whole ===<br />
<br />
The signature feature of the MakeHuman interface is a set of horizontal sliders. For a split second, the surprising proposal to list “gender” as a continuous parameter promises wild combinations. Could it be that MakeHuman is a place for imagining humanoids as subjects in process, as open-ended virtual figures that have not yet materialized? But the uncomfortable and yet familiar presence of physical and cultural properties projected to the same horizontal scale soon shatters that promise. The interface suggests that the technique of simply interpolating parameters labeled 'Gender', 'Age', 'Muscle', 'Weight', 'Height', 'Proportions', 'Caucasian', 'African' and 'Asian' suffices to make any representation of the human body. The unmarked extremities of the parameters are merely a way to outsource normativity to the user, who can only blindly guess the outcomes of the algorithmic calculations launched by handling the sliders. The tool invites a comparison between 'Gender' and 'Weight', for example, or a decision on race and 'Proportions' through a similar gesture. Subtle and less subtle shifts in both textual and visual language hint at the trouble of maintaining the one-dimensionality of this 3D world-view: 'Gender' (not 'Sex') and 'Weight' are labeled as singular but 'Proportions' is plural; 'Age' is not expressed as 'Young' nor 'Old', while race is made finite in its intra-iterations by naming a limited set of options for mixture [10].<br />
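The slider technique described above amounts to a linear interpolation between stored extreme shapes. A minimal, hypothetical sketch in Python (the language MakeHuman is written in), with illustrative names that are not MakeHuman's actual code:

```python
# Hypothetical sketch of slider-driven interpolation between two stored
# extreme shapes. Names and data are illustrative, not MakeHuman's code.

def interpolate(extreme_min, extreme_max, t):
    """Blend two vertex lists linearly; t in [0, 1] is the slider position.
    Because the extremities are unmarked, the user never sees what 0 or 1
    actually stands for."""
    return [(1 - t) * lo + t * hi for lo, hi in zip(extreme_min, extreme_max)]

# Toy one-dimensional "bodies": three vertex coordinates per extreme.
body_a = [0.0, 1.0, 2.0]
body_b = [2.0, 3.0, 4.0]
print(interpolate(body_a, body_b, 0.5))   # [1.0, 2.0, 3.0]: the "average"
```

The midpoint of the slider simply yields the arithmetic mean of the two extremes, which is what the interface presents as the default, "normal" body.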
<br />
Further inspection reveals that even the promise of continuity and separation is based on a trick. The actual math at work reveals an extremely limited topology based on a closed system of interconnected parameters, tightening the space of these bodies through assumptions of what they are supposed to be. This risky structuration is based on reduced humanist categories of “proportionality” and “normality”. Parametric design promises infinite differentiations but renders them into a mere illusion: obviously, not all physical bodies resulting from that combination would look the same, but software can make it happen. The sliders provide a machinic imagination for utilitarianised (supposedly human) compositors, conveniently covering up how they function through a mix of technical and cultural normativities. Aligning what is to be desired with the possible, they evidently mirror the binary systems of the Modern proposal for the world [11]. The point is not to "fix" these problems, quite the contrary. We experimented with replacing default values with random numbers, and other ways to intervene in the inner workings of the tool. But only when we started rewriting the interface could we see it behave differently [12]. By renaming labels, replacing them with questions and more playful descriptions, by adding and distracting sliders, the interface became a space for narrating through the generative process of making possible bodies.<br />
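The closed system of interconnected parameters can be sketched as follows: three "ethnic" values that are always renormalized to sum to one, echoing the `self._defaultValue = 1.0/3` comment quoted in the references. All names here are illustrative assumptions, not MakeHuman's actual API:

```python
# Sketch of a closed, interdependent parameter system like the one the
# text describes. Names are illustrative, not MakeHuman's actual code.

def renormalize(values):
    """Force the three 'ethnic' parameters to always sum to 1.0, as the
    quoted defaultValue = 1.0/3 comment suggests."""
    total = sum(values.values())
    return {k: v / total for k, v in values.items()}

sliders = {"caucasian": 1.0, "african": 1.0, "asian": 1.0}
macro = renormalize(sliders)          # each becomes 1/3: the default "average"

# Moving one slider silently shifts the others: the system is closed.
sliders["african"] = 4.0
macro = renormalize(sliders)          # 'african' dominates, sum is still 1.0
assert abs(sum(macro.values()) - 1.0) < 1e-9
```

No combination of slider positions can escape the complete set: whatever the user does, the parameters are folded back into a whole that always adds up.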
<br />
A second technique of representation at work is that of geometric modeling or polygon meshes. A mesh consolidates an always-complete collection of vertices, edges, planes and faces in order to define the topology of an individualized shape. Each face of a virtual body is a convex polygon; this is common practice in 3D computer graphics and simplifies the complexity of the calculations needed for rendering. Polygon meshes are deeply indebted to the Cartesian perspective by their need for wholeness. It results in a firm separation: first of inside from outside, and second of shape or topology from surface. The particular topology of MakeHuman is informed by a rather awkward sense of chastity [13]. With all its pride in 'anatomical correctness' and high-resolution rendering, it has been decided to place genitals outside the base-body-mesh. The dis-membered body-parts are relegated to a secondary zone of the interface, together with other accessories such as hats and shoes. As a consequence, the additional set of skin-textures included in MakeHuman does not include the genital add-ons so that a change in material makes them stand out, both as a potentiality for otherwise embodied otherness and as evidence of the cultural limitations to represent physical embodiment.<br />
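The "always-complete" quality of such a mesh is checkable: a closed polygon mesh separates inside from outside precisely when every edge is shared by exactly two faces. A minimal sketch of that check, as a generic illustration rather than MakeHuman's own code:

```python
# Minimal sketch: a closed polygon mesh separates inside from outside.
# A standard test for this wholeness is that every edge of the mesh is
# shared by exactly two faces. Generic illustration, not MakeHuman code.
from collections import Counter

def is_watertight(faces):
    """faces: list of tuples of vertex indices, one tuple per polygon."""
    edges = Counter()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edges[tuple(sorted((a, b)))] += 1
    return all(count == 2 for count in edges.values())

# A tetrahedron: four triangular faces over vertices 0..3 form a closed surface.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tetra))           # True: inside and outside are separated

# Remove one face and the mesh is pierced: no firm inside/outside anymore.
print(is_watertight(tetra[:-1]))      # False
```

Breaking the mesh, in this technical sense, is exactly what undoes the firm inside/outside separation the paragraph describes.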
<br />
In MakeHuman, two different technical paradigms (parametric design and mesh-based perspective) are allied together to grow representative bodies that are renormalized within a limited and restricted field of cultivated material conditions, taming the infinite with the tricks of the 'natural' and the 'horizontal'. It is here that we see modern algorithms at work: sustaining the virtual by providing certain projections of the world, scaled up to the size of a powerful presence in an untouchable present.<br />
But what if the problematic understanding of these bodies being somehow human, and at the same time being made by so-called humans, is only one specific actualization emerging from an infinite array of possibilities contained in the virtual? What if we could understand the virtual as a potential generator of differentiated and differentiating possibilities? This might lead us towards mediations for many other political imaginaries [14].<br />
<br />
=== A potential for imaginations ===<br />
<br />
By staging MakeHuman through a performative spectrum, the software turned into a thinking machine, confirming the latent potential of working through software objects. Sharing our lack of reverence for the overwhelming complexities of digital techniques and technologies of 3D imaging, we collectively uncovered its disclosures and played in its cracks [15]. We could see the software iterate between past and present cultural paradigms as well as between humans and non-humans. These virtual bodies co-constructed through the imagination of programmers, algorithms and animators call for otherwise embodied others that suspend the mimicking of “nature” to make room for experiences that are not directly lived, but that deeply shape life [16].<br />
Our persistent attention to MakeHuman being in the middle, situated in-between various digital practices of embodiment, somehow makes collaboration between perspectives possible, and pierces its own utilitarian mesh. Through strategies of “de-familiarization” the potentialities of software open up: breaking the surface is a political gesture that becomes generative, providing a topological dynamic that helps us experience the important presence of impurities in matter-culture continuums [17].<br />
Exploring a piece of software like MakeHuman hints at the possibility of a politics, aesthetics and ethics that is truly generative. To provide us with endless a-modern mestizo, an escape from representational and agential normativities, software CAN and MUST provide the material conditions for wild combinations or un-suspected renders [18].<br />
<br />
<br />
[[file:Makehuman.png|300px|Interface (detail), default settings (MakeHuman version 1.0.2)]]<br />
<br />
<br />
== References ==<br />
<br />
1. http://www.makehuman.org<br />
<br />
2. Free, Libre and Open Source Software (F/LOSS) licenses stipulate that users of the software should have the freedom to run the program for any purpose, to study how the program works, to redistribute copies and to improve the program.<br />
<br />
3. In 2014 the association for art and media Constant organized GenderBlending, a work-session to look at the way 3D-imaging technologies condition social readings and imaginations of gender. The collective inquiry continued with several performative iterations and includes contributions by Rebekka Eisner, Xavier Gorgol, Martino Morandi, Phil Langley and Adva Zakai. http://genderblending.constantvzw.org<br />
<br />
4. The potential of software as a “thinking machine” is that it can activate mechanisms of knowledge production, of not-only-text-based critical theory: 'A cartography is a theoretically based and politically informed reading of the present. Cartographies aim at epistemic and ethical accountability by unveiling the power locations which structure our subject-position'. Rosi Braidotti, The Posthuman (Cambridge: Polity Press, 2013) p. 164.<br />
<br />
5. 'Makehuman is an open source 3D computer graphics software middleware designed for the prototyping of photo realistic humanoids. It is developed by a community of programmers, artists, and academics interested in 3D modeling of characters.' https://en.wikipedia.org/wiki/MakeHuman<br />
<br />
6. Present and past contributors to MakeHuman: http://www.makehuman.org/halloffame.php<br />
<br />
7. https://en.wikipedia.org/wiki/MakeHuman#References_and_Related_Papers<br />
<br />
8. The Artec3 3D-scanner is sold to museums, creative labs, forensic institutions and plastic surgery clinics alike. Their collection of use-cases shows how the market of shapes circulates between bodies, cars and prosthesis http://www.artec3d.com/applications<br />
<br />
9. A code comment in modeling_modifiers_desc.json, a file that defines the modifications operated by the sliders, explains that 'Proportions of the human features, often subjectively referred to as qualities of beauty (min is unusual, center position is average and max is idealistic proportions).' https://bitbucket.org/MakeHuman/makehuman (version 1.0.2)<br />
<br />
10. humanmodifierclass.py, a file that holds the various software-classes to define body shapes, limits the "EthnicModifier(MacroModifier) class" to three racial parameters, together always making up a complete set: '# We assume there to be only 3 ethnic modifiers. self._defaultValue = 1.0/3' https://bitbucket.org/MakeHuman/makehuman (version 1.0.2)<br />
<br />
11. In response to a user suggesting to make the sliders more explicit ('It really does not really make any sense for a character to be anything other then 100% male or female, but than again its more appearance based than actual sex.'), developer Manuel Bastioni responds that it is 'not easy': 'For example, weight = 0.5 is not a fixed value. It depends by the age, the gender, the percentage of muscle and fat, and the height. If you are making an adult giant, 8 ft, fully muscular, your 0.5 weight is X. (...) In other words, it's not linear' http://bugtracker.makehumancommunity.org/issues/489<br />
<br />
12. MakeHuman is developed in Python, a programming language that is relatively accessible for non-technical users and does not require compilation after changes to the program are made.<br />
<br />
13. When the program starts up, a warning message is displayed that 'MakeHuman is a character creation suite. It is designed for making anatomically correct humans. Parts of this program may contain nudity. Do you want to proceed?'<br />
<br />
14. The trans*-working field of all mediations is a profanation of sacred and natural bodies (of virtuality and of flesh). It evidences the fact of them being technological constructions.<br />
<br />
15. Here we refer to Agamben's proposal for “profanation”: 'To profane means to open the possibility of a special form of negligence, which ignores separation or, rather, puts it to a particular use'. Giorgio Agamben, Profanations (New York: Zone Books, 2007) p.73.<br />
<br />
16. 'The ergonomic design of interactive media has left behind the algorithmic “stuff” of computation by burying information processing in the background of perception and embedding it deep within objects' Luciana Parisi, Contagious Architecture: Computation, Aesthetics, and Space (Cambridge MA: MIT Press, 2013)<br />
<br />
17. Breaking and piercing the mesh are gestures that in 'This topological dynamic reverberates with QFT processes (...) in a process of intra-active becoming, of reconfiguring and trans-forming oneself in the self’s multiple and dispersive sense of it-self where the self is intrinsically a nonself.' Karen Barad, Transmaterialities. Trans*/Matter/Realities and Queer Political Imaginings (GLQ: Duke University Press, 2015)<br />
<br />
18. 'Experiments in virtuality -explorations of possible trans*formations- are integral to each and every (ongoing) be(coming).' Karen Barad, Transmaterialities. Trans*/Matter/Realities and Queer Political Imaginings (GLQ: Duke University Press, 2015)<br />
<br />
== Bibliography ==<br />
<br />
* Luciana Parisi, Contagious Architecture: Computation, Aesthetics, and Space (Cambridge MA: MIT Press, 2013)<br />
* Karen Barad, Transmaterialities. Trans*/Matter/Realities and Queer Political Imaginings (GLQ: Duke University Press, 2015)<br />
* Giorgio Agamben, Profanations (New York: Zone Books, 2007)<br />
* Rosi Braidotti, The Posthuman (Cambridge: Polity Press, 2013)<br />
* Matthew Fuller, Softness: interrogability; general intellect; art methodologies in software (Huddersfield University, 2006)<br />
* Andrew Mackenzie, The Performativity of Code: Software and Cultures of Circulation (Theory Culture Society 22, Sage, 2015)</div>
FS
https://volumetricregimes.xyz/index.php?title=MakeHuman&diff=99
MakeHuman
2020-03-27T16:00:27Z
<p>FS: /* References */</p>
<hr />
<div>== MakeHuman ==<br />
<br />
MakeHuman is an Open Source software for modeling 3-dimensional humanoid characters [1]. Including a concrete software object in this glossary means addressing specific entanglements of technology, representation and normativity: a potent triangle that MakeHuman sits in the middle of. But MakeHuman deserves our attention not only for the technological power of self-representation that it affords. As an Open Source project, it is shaped by the conditions of interrogation and transformability, guaranteed through its license. Like many other F/LOSS projects, MakeHuman is surrounded by a rich constellation of textual objects, expressed through publicly accessible source code, code-comments, bugtrackers, forums and documentation [2]. This porousness facilitated the shaping of a collective inquiry, activated through experiments, conversations and mediations [3]. In collaboration with architects, dancers, trans*-activists, design students, animators and others, we are turning MakeHuman into a thinking machine, a device to critically think along physical and virtual imaginaries. Software is culture and hence software-making is world-making. It is a means for relationalities, not a crystallized cultural end [4].<br />
<br />
=== Software: we've got a situation here ===<br />
<br />
MakeHuman is '3D computer graphics middleware designed for the prototyping of photo realistic humanoids' and has gained visibility and popularity over time [5]. It is actively developed by a collective of programmers, algorithms, modelers and academics and used by amateur animators to prototype modeling, by natural history museums for creating exhibition displays, by engineers to test multi-camera systems and by game-developers for sketching bespoke characters [6]. Developers and users evidently work together to define and codify the conditions of presence for virtual bodies in MakeHuman [7]. Since each of the agents in this collective somehow operates under the Modern regime of representation, we find the software full of assumptions about the naturality of perspective-based and linear representations, the essential properties of the species and so forth. Through its curious naming the project evokes the demiurge, dreaming of 'making' 'humans' to resemble his own image; the deviceful naming is a reminder of how the semiotic-material secrets of life's flows are strongly linked to the way software represents or allows bodies to be represented [8]. The modern subject, defined by the freedom to make and decide, is trained to self-construct under the narcissistic fantasy of “correct”, “proper” or “accurate” representations of the self. These virtual bodies matter to us because their persistent representations cause mirror affects and effects on both sides of the screen [9].<br />
MakeHuman is “middleware”, a device in the middle: a composition machine that glues the deliriums of the “quantified self” to those of Hollywood imagery, all of it made operational through scientific anthropomorphic data and the graphic tricks of 3D-hyper-real rendering. From software development to character animation, from scientific proof to surveillance, the practices crossing through MakeHuman produce images, imaginations and imaginaries that are part of a concrete and situated cultural assemblage of hetero-patriarchal positivism and humanism. Found in and fed by mainstream mediated representations, these imaginations generally align with the body stereotypes that belong to advanced capitalism and post-colonialist projections. Virtual bodies only look “normal” because they appear to fit into that complex situation.<br />
<br />
=== Un-taming the whole ===<br />
<br />
The signature feature of the MakeHuman interface is a set of horizontal sliders. For a split second, the surprising proposal to list “gender” as a continuous parameter promises wild combinations. Could it be that MakeHuman is a place for imagining humanoids as subjects in process, as open-ended virtual figures that have not yet materialized? But the uncomfortable and yet familiar presence of physical and cultural properties projected to the same horizontal scale soon shatters that promise. The interface suggests that the technique of simply interpolating parameters labeled 'Gender', 'Age', 'Muscle', 'Weight', 'Height', 'Proportions', 'Caucasian', 'African' and 'Asian' suffices to make any representation of the human body. The unmarked extremities of the parameters are merely a way to outsource normativity to the user, who can only blindly guess the outcomes of the algorithmic calculations launched by handling the sliders. The tool invites a comparison between 'Gender' and 'Weight', for example, or a decision on race and 'Proportions' through a similar gesture. Subtle and less subtle shifts in both textual and visual language hint at the trouble of maintaining the one-dimensionality of this 3D world-view: 'Gender' (not 'Sex') and 'Weight' are labeled as singular but 'Proportions' is plural; 'Age' is not expressed as 'Young' nor 'Old', while race is made finite in its intra-iterations by naming a limited set of options for mixture [10].<br />
<br />
Further inspection reveals that even the promise of continuity and separation is based on a trick. The actual math at work reveals an extremely limited topology based on a closed system of interconnected parameters, tightening the space of these bodies through assumptions of what they are supposed to be. This risky structuration is based on reduced humanist categories of “proportionality” and “normality”. Parametric design promises infinite differentiations but renders them into a mere illusion: obviously, not all physical bodies resulting from that combination would look the same, but software can make it happen. The sliders provide a machinic imagination for utilitarianised (supposedly human) compositors, conveniently covering up how they function through a mix of technical and cultural normativities. Aligning what is to be desired with the possible, they evidently mirror the binary systems of the Modern proposal for the world [11]. The point is not to "fix" these problems, quite the contrary. We experimented with replacing default values with random numbers, and other ways to intervene in the inner workings of the tool. But only when we started rewriting the interface could we see it behave differently [12]. By renaming labels, replacing them with questions and more playful descriptions, by adding and distracting sliders, the interface became a space for narrating through the generative process of making possible bodies.<br />
<br />
A second technique of representation at work is that of geometric modeling or polygon meshes. A mesh consolidates an always-complete collection of vertices, edges, planes and faces in order to define the topology of an individualized shape. Each face of a virtual body is a convex polygon; this is common practice in 3D computer graphics and simplifies the complexity of the calculations needed for rendering. Polygon meshes are deeply indebted to the Cartesian perspective by their need for wholeness. It results in a firm separation: first of inside from outside, and second of shape or topology from surface. The particular topology of MakeHuman is informed by a rather awkward sense of chastity [13]. With all its pride in 'anatomical correctness' and high-resolution rendering, it has been decided to place genitals outside the base-body-mesh. The dis-membered body-parts are relegated to a secondary zone of the interface, together with other accessories such as hats and shoes. As a consequence, the additional set of skin-textures included in MakeHuman does not include the genital add-ons so that a change in material makes them stand out, both as a potentiality for otherwise embodied otherness and as evidence of the cultural limitations to represent physical embodiment.<br />
<br />
In MakeHuman, two different technical paradigms (parametric design and mesh-based perspective) are allied together to grow representative bodies that are renormalized within a limited and restricted field of cultivated material conditions, taming the infinite with the tricks of the 'natural' and the 'horizontal'. It is here that we see modern algorithms at work: sustaining the virtual by providing certain projections of the world, scaled up to the size of a powerful presence in an untouchable present.<br />
But what if the problematic understanding of these bodies being somehow human, and at the same time being made by so-called humans, is only one specific actualization emerging from an infinite array of possibilities contained in the virtual? What if we could understand the virtual as a potential generator of differentiated and differentiating possibilities? This might lead us towards mediations for many other political imaginaries [14].<br />
<br />
=== A potential for imaginations ===<br />
<br />
By staging MakeHuman through a performative spectrum, the software turned into a thinking machine, confirming the latent potential of working through software objects. Sharing our lack of reverence for the overwhelming complexities of digital techniques and technologies of 3D imaging, we collectively uncovered its disclosures and played in its cracks [15]. We could see the software iterate between past and present cultural paradigms as well as between humans and non-humans. These virtual bodies co-constructed through the imagination of programmers, algorithms and animators call for otherwise embodied others that suspend the mimicking of “nature” to make room for experiences that are not directly lived, but that deeply shape life [16].<br />
Our persistent attention to MakeHuman being in the middle, situated in-between various digital practices of embodiment, somehow makes collaboration between perspectives possible, and pierces its own utilitarian mesh. Through strategies of “de-familiarization” the potentialities of software open up: breaking the surface is a political gesture that becomes generative, providing a topological dynamic that helps us experience the important presence of impurities in matter-culture continuums [17].<br />
Exploring a piece of software like MakeHuman hints at the possibility of a politics, aesthetics and ethics that is truly generative. To provide us with endless a-modern mestizo, an escape from representational and agential normativities, software CAN and MUST provide the material conditions for wild combinations or un-suspected renders [18].<br />
<br />
<br />
[[file:Makehuman.png|300px|Interface (detail), default settings (MakeHuman version 1.0.2)]]<br />
<br />
<br />
== References ==<br />
<br />
1. http://www.makehuman.org<br />
<br />
2. Free, Libre and Open Source Software (F/LOSS) licenses stipulate that users of the software should have the freedom to run the program for any purpose, to study how the program works, to redistribute copies and to improve the program.<br />
<br />
3. In 2014 the association for art and media Constant organized GenderBlending, a work-session to look at the way 3D-imaging technologies condition social readings and imaginations of gender. The collective inquiry continued with several performative iterations and includes contributions by Rebekka Eisner, Xavier Gorgol, Martino Morandi, Phil Langley and Adva Zakai. http://genderblending.constantvzw.org<br />
<br />
4. The potential of software as a “thinking machine” is that it can activate mechanisms of knowledge production, of not-only-text-based critical theory: 'A cartography is a theoretically based and politically informed reading of the present. Cartographies aim at epistemic and ethical accountability by unveiling the power locations which structure our subject-position'. Rosi Braidotti, The Posthuman (Cambridge: Polity Press, 2013) p. 164.<br />
<br />
5. 'Makehuman is an open source 3D computer graphics software middleware designed for the prototyping of photo realistic humanoids. It is developed by a community of programmers, artists, and academics interested in 3D modeling of characters.' https://en.wikipedia.org/wiki/MakeHuman<br />
<br />
6. Present and past contributors to MakeHuman: http://www.makehuman.org/halloffame.php<br />
<br />
7. https://en.wikipedia.org/wiki/MakeHuman#References_and_Related_Papers<br />
<br />
8. The Artec3 3D-scanner is sold to museums, creative labs, forensic institutions and plastic surgery clinics alike. Their collection of use-cases shows how the market of shapes circulates between bodies, cars and prosthesis http://www.artec3d.com/applications<br />
<br />
9. A code comment in modeling_modifiers_desc.json, a file that defines the modifications operated by the sliders, explains that 'Proportions of the human features, often subjectively referred to as qualities of beauty (min is unusual, center position is average and max is idealistic proportions).' https://bitbucket.org/MakeHuman/makehuman (version 1.0.2)<br />
<br />
10. humanmodifierclass.py, a file that holds the various software-classes to define body shapes, limits the "EthnicModifier(MacroModifier) class" to three racial parameters, together always making up a complete set: '# We assume there to be only 3 ethnic modifiers. self._defaultValue = 1.0/3' https://bitbucket.org/MakeHuman/makehuman (version 1.0.2)<br />
<br />
11. In response to a user suggesting to make the sliders more explicit ('It really does not really make any sense for a character to be anything other then 100% male or female, but than again its more appearance based than actual sex.'), developer Manuel Bastioni responds that it is 'not easy': 'For example, weight = 0.5 is not a fixed value. It depends by the age, the gender, the percentage of muscle and fat, and the height. If you are making an adult giant, 8 ft, fully muscular, your 0.5 weight is X. (...) In other words, it's not linear' http://bugtracker.makehumancommunity.org/issues/489<br />
<br />
12. MakeHuman is developed in Python, a programming language that is relatively accessible for non-technical users and does not require compilation after changes to the program are made.<br />
<br />
13. When the program starts up, a warning message is displayed that 'MakeHuman is a character creation suite. It is designed for making anatomically correct humans. Parts of this program may contain nudity. Do you want to proceed?'<br />
<br />
14. The trans*-working field of all mediations is a profanation of sacred and natural bodies (of virtuality and of flesh). It evidences the fact of them being technological constructions.<br />
<br />
15. Here we refer to Agamben's proposal for “profanation”: 'To profane means to open the possibility of a special form of negligence, which ignores separation or, rather, puts it to a particular use'. Giorgio Agamben, Profanations (New York: Zone Books, 2007) p.73.<br />
<br />
16. 'The ergonomic design of interactive media has left behind the algorithmic “stuff” of computation by burying information processing in the background of perception and embedding it deep within objects' Luciana Parisi, Contagious Architecture: Computation, Aesthetics, and Space (Cambridge MA: MIT Press, 2013)<br />
<br />
17. Breaking and piercing the mesh are gestures that in 'This topological dynamic reverberates with QFT processes (...) in a process of intra-active becoming, of reconfiguring and trans-forming oneself in the self’s multiple and dispersive sense of it-self where the self is intrinsically a nonself.' Karen Barad, Transmaterialities. Trans*/Matter/Realities and Queer Political Imaginings (GLQ: Duke University Press, 2015)<br />
<br />
18. 'Experiments in virtuality -explorations of possible trans*formations- are integral to each and every (ongoing) be(coming).' Karen Barad, Transmaterialities. Trans*/Matter/Realities and Queer Political Imaginings (GLQ: Duke University Press, 2015)<br />
<br />
=== Bibliography ===<br />
<br />
* Luciana Parisi, Contagious Architecture: Computation, Aesthetics, and Space (Cambridge MA: MIT Press, 2013)<br />
* Karen Barad, Transmaterialities. Trans*/Matter/Realities and Queer Political Imaginings (GLQ: Duke University Press, 2015)<br />
* Giorgio Agamben, Profanations (New York: Zone Books, 2007)<br />
* Rosi Braidotti, The Posthuman (Cambridge: Polity Press, 2013)<br />
* Matthew Fuller, Softness: interrogability; general intellect; art methodologies in software (Huddersfield University, 2006)<br />
* Andrew Mackenzie, The Performativity of Code: Software and Cultures of Circulation (Theory Culture Society 22, Sage, 2015)<br />
<br />
Keywords: OTHERWISE EMBODIED OTHERS, TRANS/POST (HUMAN) GENDER, MONSTER (THE UNHUMAN), BODY COUNTS / BODY AS DATA</div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Common.css&diff=98
MediaWiki:Common.css
2020-03-27T15:59:31Z
<p>FS: </p>
<hr />
<div>/* CSS placed here will be applied to all skins */<br />
<br />
h1 {display: none;}<br />
<br />
h2 {font-weight: bold;}</div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Common.css&diff=97
MediaWiki:Common.css
2020-03-27T15:57:56Z
<p>FS: </p>
<hr />
<div>/* CSS placed here will be applied to all skins */<br />
<br />
h1 {display: none;}<br />
<br />
h2 {font-size: 5em; !important}</div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Common.css&diff=96
MediaWiki:Common.css
2020-03-27T15:57:30Z
<p>FS: </p>
<hr />
<div>/* CSS placed here will be applied to all skins */<br />
<br />
h1 {display: none;}<br />
<br />
h2 {font-size: 2em; !important}</div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Common.css&diff=95
MediaWiki:Common.css
2020-03-27T15:57:04Z
<p>FS: </p>
<hr />
<div>/* CSS placed here will be applied to all skins */<br />
<br />
h1 {display: none;}<br />
<br />
h2 {font-size: 2em;}</div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Common.css&diff=94
MediaWiki:Common.css
2020-03-27T15:56:40Z
<p>FS: </p>
<hr />
<div>/* CSS placed here will be applied to all skins */<br />
<br />
h1 {display: none;}<br />
<br />
h2 {font-size: 220%; color: red;}</div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Common.css&diff=93
MediaWiki:Common.css
2020-03-27T15:56:17Z
<p>FS: </p>
<hr />
<div>/* CSS placed here will be applied to all skins */<br />
<br />
h1 {display: none;}<br />
<br />
h2 {font-size: 220%;}</div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Common.css&diff=92
MediaWiki:Common.css
2020-03-27T15:55:29Z
<p>FS: </p>
<hr />
<div>/* CSS placed here will be applied to all skins */<br />
<br />
h1 {display: none;}<br />
<br />
h2 {font-size: 180%;}</div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Common.css&diff=90
MediaWiki:Common.css
2020-03-27T15:52:51Z
<p>FS: </p>
<hr />
<div>/* CSS placed here will be applied to all skins */<br />
<br />
h1 {display: none;}<br />
<br />
h2 {display: 220%;}</div>
FS
https://volumetricregimes.xyz/index.php?title=Invasive_Imagination_and_its_Agential_Cuts&diff=89
Invasive Imagination and its Agential Cuts
2020-03-27T15:51:21Z
<p>FS: </p>
<hr />
<div>== Invasive imagination and its agential cuts ==<br />
<br />
'''There is a conversation missing on the politics of computer tomography, on what is going on with data captured by MRI, PET and CT scanners, rendered as 3D volumes and then managed, analyzed, visualized and navigated within complex software environments. By aligning medical evidence with computational power, biomedical imaging seems to operate at the forefront of technological advancement while remaining all too attached to modern gestures of cutting, dividing and slicing. Computer tomography actively naturalizes modern regimes such as Euclidean geometry, discretization, anatomy, ocularity and computational efficiency to create powerful political fictions: invasive imaginations and inventions that provoke the technocratic and scientific truth of so-called bodies.'''<br />
<br />
'''This text is a call for trans*feminist<ref>'''We apply the formula trans*feminist in order to convoke all necessary intersectional and intrasectional aspects around that star (*).'''</ref> software prototyping, a persistent affirmation of the possibility for radical experimentation, especially in the hypercomputational context of biomedical imaging.'''<br />
<br />
== 1. Slice ==<br />
<br />
''In which we follow the emergence of a slice and its encounters with Euclidean geometry.''<br />
<br />
The appearance of the ''slice'' in biomedical imaging coincides with the desire to optimize the use of optical microscopes in the 18<sup>th</sup> century. Specimens were cut into thin translucent sections mounted between glass, to maximize their accessible surface area and to slide them more easily under the objective. Microtomography, after “tomos”, which means slice in Greek, seems at first sight conceptually coherent with contemporary volumetric scanning techniques or computer tomography. But where microtomography produces visual access by physically cutting into specimens, computer tomography stays on the outside. In order to affectively and effectively navigate matter, ocularity has been replaced by digital data-visualisation.<br />
<br />
In computer tomography, “slice” stands for a data entity containing the total density values acquired from a cross-section of a volume. MRI, PET or CT scanners rotate around matter conglomerates such as human bodies, crime scenes or rocks to continuously probe their consistency with the help of radiation.<ref name="ftn2">Computer Tomography (CT) uses multiple x-ray exposures; Positron-Emission Tomography (PET) reads from radioactive tracers that a subject has swallowed or was injected with; and Magnetic Resonance Imaging (MRI) uses strong magnets and then measures the difference in speed between activation and de-activation of atoms.</ref> The acquired data is digitally discrete but spatially and temporally ongoing. Only once turned into data can depths and densities be cut into slices, and computationally flattened onto a succession of two-dimensional virtual surfaces that are backprojected to each resemble a contrasted black-and-white X-ray. Based on the digital cross-sections that are mathematically aligned into a stack, a third dimension can now be reverse-engineered. This volumetric operation blends data acquired at different micro-moments into a homogeneous volume. The computational process of translating matter density into numbers, re-constructing these as stacks of two-dimensional slices and then extrapolating additional planes to re-render three-dimensional volumes, is at the basis of most volumetric imaging today.<br />
<br />
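The stacking operation described above can be sketched minimally in Python. This is a hypothetical illustration with stand-in arrays, not the pipeline of any actual scanner software: the slice count and resolution are arbitrary.

```python
import numpy as np

# Each "slice" stands in for a 2D array of density values acquired
# from one cross-section; stacking them along a new axis is the
# reverse-engineering of the third dimension.
slices = [np.random.rand(256, 256) for _ in range(180)]  # stand-in data
volume = np.stack(slices, axis=0)  # shape: (180, 256, 256)

# Once stacked, planes that were never acquired as such can be
# read out of the same data, orthogonal to the original slices:
axial = volume[90, :, :]      # one acquired cross-section
coronal = volume[:, 128, :]   # a plane computed across all slices
sagittal = volume[:, :, 128]  # likewise, orthogonal to both
```

The point of the sketch is that `coronal` and `sagittal` are extrapolations: they only exist because the discrete slices have been blended into one homogeneous volume.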
<br />
Tomography emerged from a long-standing technoscientific exploration fueled by the desire to make the invisible insides of bodies visible. It follows the tradition of anatomic experiments into a “new visual reality” produced by early x-ray imagery.<ref name="ftn3">Lorraine Daston, Peter Galison, “The image of objectivity” in: Representations, No. 40, Special Issue: Seeing Science (Autumn, 1992). p. 106</ref> The slice was a collective invention by many: technologists, tools, users, uses, designers and others knotted the increasing availability of computational capacity to the mathematical theorem of an Austrian mathematician and the standardization of radio-densities.<ref name="ftn4">In 1917, Austrian mathematician Johann Radon introduced the Radon transform, a formula that Sir Godfrey Hounsfield fifty years later would combine with a quantitative scale for radiodensity, the Hounsfield unit (HU), to reverse-calculate images from density projection data in the CT-scanner that he invented.</ref> Demonstrating the human and more-than-human entanglements of technoscientific streams, the slice invoked multiple pre-established paradigms to provoke an unusual sight on and inside the world. Forty years later, most hospitals located in the Global North have MRI and CT scanners operating around the clock.<ref name="ftn5">In 2017, ca. 13,000 CT-scanners in European hospitals performed 80 million scans per year. See: Healthcare resource statistics – technical resources and medical technology. Statistics Explained, Eurostat, 2019 [https://ec.europa.eu/eurostat/statistics-explained/pdfscache/37388.pdf]</ref> In the meantime, the slice became involved in the production of multiple truths, as tomography propagated along the industrial continuum: from human brain imaging to other influential fields of data-extraction such as mining, border-surveillance, mineralogy, large-scale fishing, entomology and archaeology.<ref name="ftn6">See: Possible Bodies, Item 074: The Continuum [https://possiblebodies.constantvzw.org/inventory/?074]</ref> The acceleration produced by the probable jump to the third dimension can hardly be overestimated. This jump is made even more useful because of the alleged “non-invasive” character of tomography: tomography promises visual access without the violence of dissection. Looking at the insides of a specimen, which was traditionally conditioned by its death or ''an-aesthesia'', no longer requires physical intervention.<ref name="ftn7">CT-scanners are not non-invasive at all, since they use x-rays which carry a risk of developmental problems and cancer. This triggered, for example, ‘Image Gently’, a campaign to be more careful with radiation, especially when used on children. [https://www.imagegently.org]</ref> But the persistence of the cross-cut, the fast assumptions that are made about the non-temporality of the slice, the supposed indexical relation they have to matter, the way math is involved in the re-generation of densities and the location of tissues, all of it makes us wonder about the not-non-invasiveness of the imagination at work in the bio(info)technological tale. Looking is somehow always already an operation.<br />
<br />
Slices necessitate powerful software platforms to be visualized, analyzed, rendered and navigated. We call such platforms ‘powerful’ because of their extensive (and expensive) computational capacities, but also because of the ways they embody authority and truth-making. Software works hard to remove any trace of the presence of the scanning apparatus and of the mattered bodies that were once present inside of it. For slices to behave as a single volume that is scanned at a single instant, they need to be normalized and aligned to then neatly fit the three orthogonal planes of X, Y and Z. This automated process of ‘registration’ draws expertise from computer vision, 3D-visualisation and algorithmic data-processing to stack slices in probable ways.<br />
<br />
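One ingredient of such a normalization step can be sketched as follows, assuming (hypothetically) that slices acquired at uneven positions are linearly interpolated onto a uniform grid so that they behave as one volume scanned at a single instant. The positions and grid are stand-ins, not values from any real protocol:

```python
import numpy as np

z_acquired = np.array([0.0, 1.1, 1.9, 3.2, 4.0])  # uneven slice positions
slices = np.random.rand(5, 64, 64)                # stand-in density data
z_uniform = np.linspace(0.0, 4.0, 9)              # the imposed regular grid

# Linear interpolation between neighbouring slices, pixel by pixel,
# fabricates the in-between planes the scanner never recorded:
registered = np.empty((len(z_uniform), 64, 64))
for i, z in enumerate(z_uniform):
    j = np.searchsorted(z_acquired, z, side="right") - 1
    j = int(np.clip(j, 0, len(z_acquired) - 2))
    t = (z - z_acquired[j]) / (z_acquired[j + 1] - z_acquired[j])
    registered[i] = (1 - t) * slices[j] + t * slices[j + 1]
```

The blending weight `t` makes visible that every registered plane is a computed compromise between two acquisition moments.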
From now on, the slices act in line with the rigidity of Euclidean geometry, a mathematical paradigm with its own system of truth, a ''straight'' truth.<ref name="ftn8">Sara Ahmed, Queer Phenomenology: Orientations, Objects, Others. Duke University Press, 2006. p. 70</ref> It relies on a set of axioms or postulates where the X, Y and Z axes are always parallel, and where all corpo-real volumes are located in the cubic reality of their square angles.<ref name="ftn9">Euclidean geometry relies among others on the parallel postulate: ‘if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which the angles are less than two right angles.’ Euclidean Geometry, Wikipedia [https://en.wikipedia.org/wiki/Euclidean_geometry]</ref> For reasons of efficiency, hardware optimization, path dependency and compatibility, Euclidean geometry has become the unquestionable neutral spatial norm in any software used for volumetric rendering, whether for gaming, flight planning or geodata processing. But in the case of biomedical imaging, the X, Y and Z planes also conveniently fit the ‘sagittal’, ‘coronal’ and ‘axial’ planes that were established in anatomical science in the 19th century.<ref name="ftn10">‘Through the dissection and analysis of the body’s organisation, anatomy works to suspend any distinction between surface and depth, interior and exterior, endosoma and exosoma. It ideally makes all organs equally available to instrumental address and calibration, forms of engineering and assemblage with other machine complexes.’ Catherine Waldby, The Visible Human Project: Informatic Bodies and Posthuman Medicine. Routledge, 2000. p. 51</ref> The slices have been made to fit the fiction of medicine as seamlessly as they fit the fiction of computation.<br />
<br />
Extrapolated along probable axes and obediently registered to the Euclidean perspective, the slices are now ready to be rendered as high-res three-dimensional volumes. Two common practices from across the industrial continuum of volumetric imaging are combined for this operation: ray-tracing and image segmentation. Ray-tracing considers each pixel in each slice as the point of intersection with a ray of light, as if it was projected from a simulated eye and then encountered a virtual object. ‘Imaging’ enters the picture only at the moment of rendering, when the ray-tracing algorithm re-inserts the re-assuring presences of both ocularity and a virtual internal sun. Ray-tracing is a form of algorithmic drawing that makes objects appear on the scene by projecting lines that originate from a single vantage point. It means that every time a volume is rendered, ray-tracing performs Dürer’s enlightenment classic, ''Artist drawing a nude with perspective device''.<ref name="ftn11">‘The woman lies comfortably relaxed; the artist sits upright, rigidly constrained by his fixed position. The woman knows that she is seen; the artist is blinded by his viewing apparatus, deluded by his fantasy of objectivity. The draftsman's need to order visually and to distance himself from that which he sees suggests a futile attempt to protect himself from what he would (not) see. Yet the cloth draped between the woman's legs is not protection enough; neither the viewing device nor the screen can delineate or contain his desire. The perspective painter is transfixed in this moment, paralyzed, unable to capture the sight that encloses him. Enclosing us as well, Dürer's work draws our alarm.’ Barbara Freedman, Staging the Gaze: Postmodernism, Psychoanalysis, and Shakespearean Comedy. Cornell University Press, 1991. p. 2</ref> Ray-tracing literally inverses the centralized god-like ‘vision’ of the renaissance artist and turns it into an act of creation.<br />
<br />
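The single vantage point that ray-tracing performs can be sketched as a naive ray-marcher: one ray per image pixel, cast from one simulated eye, accumulating the densities it traverses. This is a toy illustration with a stand-in cube, not the renderer of any actual imaging platform:

```python
import numpy as np

volume = np.zeros((32, 32, 32))
volume[8:24, 8:24, 8:24] = 1.0  # a dense cube as stand-in object

eye = np.array([16.0, 16.0, -40.0])  # the single vantage point
image = np.zeros((32, 32))
for y in range(32):
    for x in range(32):
        # Project a line from the eye through this pixel of the image plane:
        direction = np.array([float(x), float(y), 16.0]) - eye
        direction /= np.linalg.norm(direction)
        # March along the ray, summing every density sample encountered:
        for t in np.arange(0.0, 120.0, 1.0):
            i, j, k = (eye + t * direction).astype(int)
            if 0 <= i < 32 and 0 <= j < 32 and 0 <= k < 32:
                image[y, x] += volume[k, j, i]  # volume indexed as [z, y, x]
```

Every line in the inner loop originates from `eye`: the sketch makes literal the text's claim that the rendered scene is drawn from a single, centralized point of view.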
Image segmentation starts at the boundaries rendered on each slice. A continuous light area surrounded by a darker one suggests the presence of coherent materiality; difference signals a border between inside and outside. With the help of partially automatic edge-detection algorithms, contrasted areas are demarcated and can subsequently be transformed into synthetic surfaces with the help of a computer graphics algorithm such as Marching Cubes. The resulting mesh or polygon models can be rendered as continuous three-dimensional volumes with unambiguous borders.<ref name="ftn12">W.E. Lorensen, Harvey Cline, “Marching cubes: A high resolution 3D surface construction algorithm”. ACM Computer Graphics 21 (1987): pp. 163–169</ref> What is important here is that the doings and happenings of tomography literally ''make'' invisible insides visible.<br />
<br />
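The demarcation of such a contrasted area can be sketched with a simple threshold and a neighbour test; a toy stand-in for the partially automatic edge detection described above, with a synthetic "tissue" region rather than real scan data:

```python
import numpy as np

# A light square region against a darker, noisy surround:
slice_ = np.zeros((64, 64))
slice_[20:44, 20:44] = 1.0
noisy = slice_ + 0.1 * np.random.default_rng(0).standard_normal((64, 64))

# Thresholding decides, per pixel, what counts as inside:
mask = noisy > 0.5

# A masked pixel lies on the border when not all four of its
# neighbours are also inside:
interior = (np.roll(mask, 1, 0) & np.roll(mask, -1, 0) &
            np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
edge = mask & ~interior
```

Stacked across all slices, such binary masks are what a surface-construction algorithm (for instance `skimage.measure.marching_cubes`, in scikit-image) turns into the mesh models with unambiguous borders that the text describes.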
From the very beginning of the tomographic process there has been an entanglement at work between computation and anatomy.<span style="background-color:transparent;"><ref name="ftn13">See Karen Barad, “Getting Real: Technoscientific practices and the materialization of reality.” in: Meeting the Universe Halfway. Duke University Press, 2007 pp. 189-222</ref></span><span style="background-color:transparent;"> </span><span style="background-color:#ffffff;">F</span><span style="background-color:#ffffff;">or a computer scientist, </span><span style="background-color:#ffffff;">segmentation is a set of standard </span><span style="background-color:#ffffff;">techniques </span><span style="background-color:#ffffff;">used in</span><span style="background-color:#ffffff;"> </span><span style="background-color:#ffffff;">the field of C</span><span style="background-color:#ffffff;">omputer </span><span style="background-color:#ffffff;">V</span><span style="background-color:#ffffff;">ision to </span><span style="background-color:#ffffff;">algorithmically </span><span style="background-color:#ffffff;">discern</span><span style="background-color:#ffffff;"> </span><span style="background-color:#ffffff;">useful bits and pieces of images. </span><span style="background-color:#ffffff;">When anatomist use the same term, they refer to the process of cutting off one part of an organism from another. For</span><span style="background-color:#ffffff;"> radiologist</span><span style="background-color:#ffffff;">s,</span><span style="background-color:#ffffff;"> segmentation mean</span><span style="background-color:#ffffff;">s</span><span style="background-color:#ffffff;"> visually discerning anatomical </span><span style="background-color:#ffffff;">parts</span><span style="background-color:#ffffff;">. 
</span><span style="background-color:#ffffff;">In computer tomography, t</span><span style="background-color:transparent;">raditions of math, computation, perspective and anatomy join forces to perform exclusionary boundaries together, </span><span style="background-color:transparent;">identifying</span><span style="background-color:transparent;"> tissue types at the level of single pixels.</span> In the process, invisible insides have become readable and eventually writable for further processing. Cut along all-too-probable sets of gestures, dependent on assumptions of medical truth, indexality and profit, slices have collaborated in the transformation of so-called bodies into stable, clearly demarcated volumes that can be operated upon. The making visible that tomography does, is the result of a series of generative re-renderings that should be considered as operative themselves.<ref name="ftn14"><div style="margin-left:0.598cm;margin-right:0cm;">Aud Sissel Hoel, Frank Lindseth, “Images as Operative Tools” in: The New Everyday: A MediaCommons Project, The Operative Image cluster, 2014</div></ref> Tomography re-presents matter-conglomerates as continuous, stable entities and contributes strongly to the establishment of coherent materiality and humanness-as-individual-oneness. These picturings create powerful political fictions; imaginations and inventions that provoke the technocratic and scientific truth of so-called bodies.The processual quantification of matter under such efficient regimes produces predictable outcomes, oriented by industrial concerns that are aligned with pre-established decisions on what counts as pathology or exploitation. What is at stake here is how probable sights of the no-longer-invisible are being framed. So, what implications would it have to let go of the probable, and to try some other ways of making invisible insides visible? What would be an intersectional operation that disobeys anthropo-euro-andro-capable projections? 
Or: how to otherwise reclaim the worlding of these possible insides?

== 2. Slicer ==
<br />
''In which we meet Slicer, and its collision with trans*feminist urgencies.''<br />
<br />
<br />
Feminist critical analysis of representation has been helpful in formulating a response to the kind of worlds that slices produce. But by persistently asking questions like: who sees, who is seen, and who is allowed to participate in the closed circuit of “seeing”, such modes of critique too easily take the side of the individual subject. Moreover, it is clear that in the context of biomedical informatics, the issue of hegemonic modes of doing is more widely distributed than the problem of the (expert) eye, as will become increasingly clear when we meet our protagonist, the software platform Slicer. This is why we are interested in working through trans*feminist concepts such as entanglement and intra-action as a way to engage with the complicated more-than-oneness that these kinds of techno-ecologies evidently put into practice.<br />
<br />
Slicer, or 3D Slicer, is an Open Source software platform for the analysis and visualization of medical images in research environments.<ref name="ftn15">Slicer documentation, download and forum pages each describe its main purpose in slightly different ways: ‘an open source software platform for medical image informatics, image processing, and three-dimensional visualization’ [https://www.slicer.org/wiki/Main_Page https://www.slicer.org/wiki/Main_Page] ‘Slicer, or 3D Slicer, is a free, open source software package for visualization and image analysis’ [https://github.com/Slicer/Slicer https://github.com/Slicer/Slicer] ‘3D Slicer (“Slicer”) is an open source, extensible software platform for image visualization and analysis. Slicer has a large community of users in medical imaging and surgical navigation, and is also used in fields such as astronomy, paleontology, and 3D printing’ [https://discourse.slicer.org/t/slicer-4-8-summary-highlights-and-changelog/1292 https://discourse.slicer.org/t/slicer-4-8-summary-highlights-and-changelog/1292] ‘a software platform for the analysis (including registration and interactive segmentation) and visualization (including volume rendering) of medical images and for research in image guided therapy.’ [https://slicer.readthedocs.io/en/latest/user_guide/getting_started.html https://slicer.readthedocs.io/en/latest/user_guide/getting_started.html]</ref> The platform is auto-framed by its name, an explicit choice to place the work of cutting or dividing at the center; an unapologetic celebration of the geometric norm of contemporary biomedical imaging. Naming a software “Slicer” imports the cut as a naturalized gesture, justifying it as an obvious need to prepare data for scientific objectivity. Figuring the software as “Slicer” (like butcher, baker, or doctor) turns it into a performative device by which the violence of that cut is delegated to the software itself. By this delegation, the software puts itself at the service of fitting the already-cut slices to multiple paradigms of ''straightness'', to relentlessly re-render them as visually accessible volumes.<ref name="ftn16"><div style="margin-left:0.598cm;margin-right:0cm;">Waldby 2000, p. 34</div></ref> In such an environment, any oblique, deviating, unfinished or ''queer'' cuts become hard to imagine.<br />
<br />
Slicer evolved in the fertile space between scientific research, biomedical imaging and the industry of scanning devices. It sits comfortably in the middle of a booming industry that attempts to seamlessly integrate hardware, software, flesh, bone, radiation, economy and data-processing with the management of it all. In the clinic, such software environments run on expensive patented radiology hardware, sold by global technology companies such as Philips, Siemens and General Electric. In the high-end commercial context of biomedical imaging, Slicer is one of the few platforms that runs independently of specific devices and can be installed on generic laptops. The software is released under an Open Source license which invites different types of users to study, use, distribute and co-develop the project and its related practices. The project is maintained by a community of medical image computing researchers who take care of technical development, documentation, versioning, testing and the publication of a continuous stream of open access papers.<ref name="ftn17"><div style="margin-left:0.598cm;margin-right:0cm;">The Slicer publication database hosted by the Surgical Planning Laboratory currently contains 552 publications. [http://www.spl.harvard.edu/publications/pages/display/?collection=11 http://www.spl.harvard.edu/publications/pages/display/?collection=11]</div></ref><br />
<br />
At several locations in and around Slicer, users are warned that this software is not intended for clinical use.<ref name="ftn18">When launching Slicer, a pop-up appears: ‘This software is not intended for clinical use’ (see figure 6). In the main interface we also find ‘This software has been designed for research purposes only and has not been reviewed or approved by the Food and Drug Administration, or by any other agency.’ In addition, the software license stipulates in capital letters that ‘YOU ACKNOWLEDGE AND AGREE THAT CLINICAL APPLICATIONS ARE NEITHER RECOMMENDED NOR ADVISED’. [https://github.com/Slicer/Slicer/blob/master/License.txt https://github.com/Slicer/Slicer/blob/master/License.txt]</ref> The reason Slicer positions itself so persistently outside the clinic might be a liability issue, but seems most of all a way to assert itself as a prototyping environment in-between diagnostic practice and innovative marketable products.<ref name="ftn19">Slicer positions itself as a prototyping environment in-between diagnostic practice and innovative marketable products, and ‘facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development.’ Fedorov, Andriy et al. “3D Slicer as an image computing platform for the Quantitative Imaging Network.” ''Magnetic resonance imaging'' vol. 30,9 (2012): 1323-41.</ref> The consortium managing Slicer has drawn in millions of dollars in US medical grants every year for more than a decade.
Even so, Slicer’s interface comes across as alarmingly amateurish, bloating the screen with a myriad of options and layers that are only vaguely reminiscent of the subdued sleekness of corresponding commercial packages. The all-over-the-place impression of Slicer’s interface coincides with its coherent mission to be a prototyping environment rather than a finished software platform. As a result, its architecture is skeletal and its substance consists almost entirely of extensions, each developed for very different types of biomedical research. Only some of this research concerns actual software development; most of it is aimed at developing algorithms for automating tasks such as anomaly detection or organ segmentation. The ideologies and hegemony embedded in the components of this (also) collectively-developed software are again confirmed by the recent adoption of a BSD license, which is considered to be the most “business-friendly” Open Source license around. The development of Slicer is interwoven with two almost simultaneous genealogies of acceleration in biomedical informatics. The first is linked to the influential environment of the Artificial Intelligence labs at MIT. In the late nineties, Slicer emerged here as a tool to demonstrate the potential of intervention planning. From the start, the platform connected the arts and manners of Quantitative Imaging to early experiments in robot surgery. This origin story binds the non-clinical environment of Slicer tightly to the invasive gestures of the computer-assisted physician.<ref name="ftn20"><div style="margin-left:0.598cm;margin-right:0cm;">Gering, David T. et al. In: Taylor C., Colchester A. (eds) Medical Image Computing and Computer-Assisted Intervention – Lecture Notes in Computer Science, vol 1679. Springer, Berlin, Heidelberg (1999)</div></ref><br />
<br />
The second, even more spectacular genealogy is Slicer’s shared history with the Visible Human project. In the mid-nineties, when the volume of tomographic data was growing, the US National Library of Medicine felt it necessary to publicly re-confirm the picturings with the visible insides of an actual human body, and to verify that the captured data responded to specifically mattered flesh. While the blurry black and white slices did seem to resemble anatomic structures, how to ensure that the results were actually correct? <br />
<br />
A multi-million dollar project was launched to materially re-enact the computational gesture of tomography onto actual flesh-and-blood bodies. The project started with the acquisition of two 'volunteers': one convicted white middle-aged male murderer, allegedly seeking repentance through donating his body to science, and a white middle-aged female, donated by her husband. Their corpses were first vertically positioned and scanned, before being horizontally stabilized in clear blue liquid, then frozen, and sawn into four pieces.<ref name="ftn21"><div style="margin-left:0.598cm;margin-right:0cm;">‘The term “cut” is a bit of a misnomer, yet it is used to describe the process of grinding away the top surface of a specimen at regular intervals. The term “slice,” also a misnomer, refers to the revealed surface of the specimen to be photographed; the process of grinding the surface away is entirely destructive to the specimen and leaves no usable or preservable “slice” of the cadaver.’ The Visible Human Project, Wikipedia [https://en.wikipedia.org/wiki/Visible_Human_Project https://en.wikipedia.org/wiki/Visible_Human_Project]</div></ref> Each piece was mounted under a camera, and photographed in a zenithal plane before being scraped down by 3 millimeters, to be photographed again. The resulting color photographs were digitized, color-corrected, registered and re-rendered volumetrically in X, Y, Z planes. Both datasets (the MRI data and the digitized photographs) were released semi-publicly. These two datasets, informally renamed “Adam” and “Eve”, still circulate as default reference material in biomedical imaging, among other places in current versions of Slicer.<ref name="ftn22"><div style="color:#000000;">Naming is a strongly politicized representational technique. See also Paul B. Preciado, xxxx for a discussion of the theological-patriarchal regime of the biomedical field.</div></ref> Names affect matter; or better said: naming is always already mattering.<ref name="ftn23">See Ursula K. Le Guin, ‘She Unnames Them’, xxxxx or Possible Bodies, Item 059: Anarcha’s Gland, for an attempt by tech-feminist group Pechblenda to rename anatomy in an attempt to decolonize bodies. [https://possiblebodies.constantvzw.org/inventory/?059 https://possiblebodies.constantvzw.org/inventory/?059]</ref> The mediatized process of the Visible Human project coincided with a big push for accessible imaging software platforms that would offer fly-through 3D anatomical atlases, re-inserting modern regimes at the intersection of computer science, biomedical science and general education.<ref name="ftn24">'The Visible Human Project data sets are designed to serve as a common reference point for the study of human anatomy, as a set of common public domain data for testing medical imaging algorithms, and as a test bed and model for the construction of image libraries that can be accessed through networks.’ Programs and services fiscal year 2000. National Institutes of Health, National Library of Medicine, 2000 [https://www.nlm.nih.gov/ocpl/anreports/fy2000.pdf https://www.nlm.nih.gov/ocpl/anreports/fy2000.pdf]</ref> It produced the need for the development of automatic registration and segmentation algorithms such as those in the Insight Segmentation and Registration Toolkit (ITK), a toolkit that is at the basis of Slicer.<ref name="ftn25"><div style="margin-left:0.598cm;margin-right:0cm;">Insight Segmentation and Registration Toolkit webpage [https://itk.org/Doxygen413/html/index.html https://itk.org/Doxygen413/html/index.html]</div></ref><br />
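The volumetric re-rendering "in X, Y, Z planes" described above can be sketched computationally. The following is a toy illustration only, not code from Slicer or ITK; the function names, dimensions and pixel values are all invented for the example. It shows how serially captured 2D slices are stacked into a 3D grid, from which new planes can then be re-sliced along each axis:

```python
# Toy sketch of tomographic re-slicing (not Slicer/ITK code; all values invented).

def stack_slices(slices):
    """Stack a list of 2D slices (lists of rows) into a volume indexed [z][y][x]."""
    return list(slices)

def axial(volume, z):
    """The plane in which the data was originally captured."""
    return volume[z]

def coronal(volume, y):
    """A re-rendered plane: the same row taken from every captured slice."""
    return [sl[y] for sl in volume]

def sagittal(volume, x):
    """A re-rendered plane: the same column taken from every captured slice."""
    return [[row[x] for row in sl] for sl in volume]

# A 2-slice volume of 2x2 "pixels".
vol = stack_slices([[[1, 2], [3, 4]],
                    [[5, 6], [7, 8]]])

print(axial(vol, 0))     # [[1, 2], [3, 4]]
print(coronal(vol, 0))   # [[1, 2], [5, 6]]
print(sagittal(vol, 1))  # [[2, 4], [6, 8]]
```

The point of the sketch is that the coronal and sagittal views never existed as captured images: they are generative re-renderings computed from the stack, exactly the operative picturings discussed above.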
<br />
Slicer opens a small window onto the complex and hypercomputational world of biomedical imaging and the way software creates the matter-cultural conditions of possibility that render so-called bodies volumetrically present. It tells stories of interlocking regimes of power which discipline the body, its modes and representations in a top-to-bottom mode. It shows how these regimes operate through a distributed and naturalized assumption of efficiency which hegemonically reproduces bodies as singular entities that need to be clear and ready in order to be "healed". But even when we are critical of the way Slicer orders both technological innovation and biovalue as an economy<ref name="ftn26">‘Technics can intensify and multiply force and forms of vitality by ordering it as an economy, a calculable and hierarchical system of value – exist in circulation and distribution, can function in other economies.’ Waldby 2000, p. 33</ref>, its licensing and positioning also create the collective conditions for an affirmative cultural critique of software artifacts. We suspect that a FLOSS environment responsibilizes its community to make sure boundaries do not sit still. Without wanting to suggest that FLOSS itself produces the conditions for non-hegemonic imaginations, its persistent commitment to transformation is key for radical experiments, and for trans*feminist software prototyping.

== 3. Slicing ==
<br />
''Where we introduce the Modern Separation Toolkit, and the aftermath of the cut.''

The act of separation is a key gesture of modernity. The Modern Separation Toolkit (MST) contains persistent and culturally aligned modes of euro-andro-able-anthropocentric representation: taxonomy, anatomy, perspective, individual subjecthood, objectivity and many other material-semiotic moves of division. Separation is active on every level in order to isolate the part from the whole, the one from the other, and to detach the object from the subject. Modern claims of truth work from the assumption that there is a necessary relation between separability, determinacy and sequentiality; between division, knowledge and representation.<ref name="ftn27"><div style="margin-left:0.598cm;margin-right:0cm;">As Rosi Braidotti notes, ‘Modern science is the triumph of the scopic drive as a gesture of epistemological domination and control: to make visible the invisible, to visualise the secrets of nature. Biosciences achieve their aims by making the embodied subject visible and intelligible according to the principles of scientific representation. In turn this implies that the body can be split into a variety of organs, each of which can be analyzed and represented.’ Rosi Braidotti, Nomadic Subjects: Embodiment and Sexual Difference in Contemporary Feminist Theory. Columbia University Press, 2011. p. 196</div></ref> The disciplines of Art Theory, History of Science and Philosophy of Perception each exemplify, with their own means, the particular gestures of separation in which the complexities of a particular world are haunted and caught by modern modes of understanding, naming, transmitting and eventually “apprehending” these worlds. If representing again in tomography is a form of grasping or even of control, it is evident that we need to attend to the power relations that these cutting practices produce, so that we do not allow them to be completely or definitively naturalized, culturally assumed as evident or given.<br />
<br />
The specific mode of separation in contemporary biomedical imaging is the art of computational slicing. Our protagonist Slicer is obviously exposed to and exposing various cuts:<div style="margin-left:0cm;margin-right:0cm;">''The subjectivity cut. ''Subjectivity can be understood as a prerequisite for representation, as it assures the presence of a subject responsible for a particular understanding of the world. But with the emergence of modern subjecthood, of physical and legal persona freed from their environmental attachments and charged with free will and the capacity of judgment, additional representational norms imposed themselves, somehow occupying an in-between space of singular and normative subjectivity.<ref name="ftn28"><div style="margin-left:0.598cm;margin-right:0cm;">Daston 1992</div></ref> In Slicer, the ''subjectivity cut'' is activated by the default choice of volumetric rendering, a two-point perspective where lines of sight come together in a single point, that of the individual viewer. These so-called bodies are reduced to their individual matter constellation, separated from the machinery around them, movable but divorced from their specific rhythms, without attachments or complications and most important of all, with minimal agency. Being and becoming is reduced to the incontestable promise of wholeness-at-the-end-of-the-scanner's-tunnel.</div><div style="margin-left:0cm;margin-right:0cm;">''The regional cut ''refers to the technoscientific phenomena of defining a Region of Interest (ROI), a location of special attention, even if it is as vast as a globe, or an atlas. 
The regional cut supports a focus and a training of the gaze that as a result can habituate itself on a certain area, but only at the expense of not looking at another.<ref name="ftn29"><div style="margin-left:0.598cm;margin-right:0cm;">‘what was not new to nineteenth-century atlases was the dictum “truth to nature”: there is no atlas in any field that does not pique itself on its accuracy, on its fidelity to fact. But in order to decide whether an atlas picture is an accurate rendering of nature, the atlas maker must first decide what nature is. All atlas makers must solve the problem of choice: which objects should be presented as the standard phenomena of the discipline, and from which viewpoint? In the late ninetheenth century, these choices triggered a crisis of anxiety and denial, for they seemed invitations to subjectivity.’ Daston 1992</div></ref> In Slicer, the technical definition and isolation of what is called Region Of Interest operates as a computational upgrading of the decisions behind nineteenth century atlases of anatomy. This interface operation presents the target as a cut. It results in a visual slicing of the virtual volume, which then exposes its invisible insides at its straight incisions.</div><div style="margin-left:0cm;margin-right:0cm;"><span style="background-color:transparent;">''T</span><span style="background-color:transparent;">he demarcation cut ''</span><span style="background-color:transparent;">relates to</span><span style="background-color:transparent;"> the way </span><span style="background-color:transparent;">that </span><span style="background-color:transparent;">the practice of segmentation is present in both historical and contemporary biomedical im</span><span style="background-color:transparent;">aging. 
Segmentation </span><span style="background-color:transparent;">produces</span><span style="background-color:transparent;"> absolute divisions between </span><span style="background-color:transparent;">image areas, </span><span style="background-color:transparent;">organs, </span><span style="background-color:transparent;">shades of gray and </span><span style="background-color:transparent;">bones that </span><span style="background-color:transparent;">obediently </span><span style="background-color:transparent;">follow the anatomical canon. It all works together to give the renderings a sense of mathematical precision and medical evidence. In a nutshell, th</span><span style="background-color:transparent;">e</span><span style="background-color:transparent;"> process allows us to </span><span style="background-color:transparent;">engineer</span><span style="background-color:transparent;"> a </span><span style="background-color:transparent;">non-ambiguous spatial lay-out </span><span style="background-color:transparent;">where each tissue or anatomical structure is </span><span style="background-color:transparent;">identified</span><span style="background-color:transparent;"> by </span><span style="background-color:transparent;">a label and a unique color code, </span><span style="background-color:transparent;">all based on a black and white blur</span><span style="background-color:transparent;">. 
</span>''The demarcation cut'' subsequently cascades into ''the taxonomic cut'' by means of the hierarchical anatomical model that Slicer shares with motion-tracking software.<ref name="ftn30">The model for anatomical data in Slicer resembles the crude cascading hierarchies used in basic motion-tracking software.</ref></div><div style="margin-left:0cm;margin-right:0cm;">''The invasive-non-invasive cut'' emerged when the tomographic paradigm imposed itself over other regimes of “seeing” in the field of biomedical imaging. This crossing concept connects the search for least invasiveness in innovative surgery with the thread of making invisible insides visible in biomedical informatics’ research and practice. Slicer contributes to a dense constellation of techniques and technologies developed to cut bodies visually, but not in the flesh.</div>The last cut in this list is what we learned with Karen Barad to call ''the agential cut''. She unfolds a fundamental notion, that of intra-action, to give an account of the constitutive onto-epistemes in apparatuses of observation. This agential cut is fundamental for a trans*feminist approach to techno-sciences as response-ability.<ref name="ftn31">‘We are responsible for the world within which we live not because it is an arbitrary construction of our choosing, but because it is sedimented out of particular practices that we have a role in shaping.’ and ‘The crucial point is that the apparatus enacts an agential cut – a resolution of the ontological indeterminacy – within the phenomenon, and agential separability – the agentially enacted material condition of exteriority-within-phenomena – provides the condition for the possibility of objectivity. This agential cut also enacts a local causal structure in the marking of the measuring instrument (effect) by the measured object (cause), where “local” means within the phenomenon.’ Barad 2007, p. 390 and p. 175.</ref> ''The agential cut'' calls for a fundamental form of response-ability that is always already entangled in the production of knowledge and its apparatuses. In Slicer, we see the agential cut operating, for example, in the way the Open Source condition invites and expresses a mutual responsibility of users, devices, developers, algorithms, practitioners, researchers, datasets, founders, embodiments and other involved agents. These six cuts identify a number of agencies and their very particular distribution. 
Their power relations are based on aesthetic, economic and scientific paradigms which together define the tension between what is probable in the gesture of slicing, and what might be possible.

== 4. Feature requests ==<br />
<br />
''Where the paradigmatic entanglement is ready to redistribute agencies.''<br />
<br />
<br />
In previous sections we moved from slice to slicer, and then into slicing, encountering multiple entangled trans*feminist urgencies on the way. We discussed the effects of the invention of the slice and the naturalization of its geometric and stratifying paradigms. We interrogated the agencies that altogether compose a complex entanglement such as our protagonist, Slicer. And in the last section, we listed six different cuts, understanding the act of division as a key modern gesture that relates knowledge to (mostly visual) representation. Now it is time to apprehend Slicer's technicity by other means.<ref name="ftn32">Hoel 2014</ref> With trans*feminist techno-sciences we have learned that it is necessary to problematize modern regimes and the impossibilities for life they produce, and that it is possible to do so with what we have at hand. Trans*feminism challenges the ontology of humanity by questioning its separateness from social, economic, material, environmental, aesthetic and historical issues, as well as from situated intersections such as race, gender, class, age, ability and species. It also invites us to test an ongoing ''affirmative ethics''<ref name="ftn33">Rosi Braidotti, "Affirmative Ethics, Posthuman Subjectivity, and Intimate Scholarship: a Conversation with Rosi Braidotti", in: ''Decentering the Researcher in Intimate Scholarship'' (Advances in Research on Teaching, Vol. 31), Emerald Publishing Limited, 2018, pp. 179-188.</ref> in relation to the semiotic-material compositions of what we call "our worldings". It means to put ourselves "at risk" by reconsidering the very notion of “us”, assuming the response-ability of being always already entangled with these techno-ecologies which we co-compose by just “being”-in-the-world.<br />
<br />
Maybe Open Source platforms such as Slicer can be environments to render so-called bodies differently. Even if this software is being developed in the particularly tight hegemony of innovation-driven biomedical research, its F/LOSS licensing conditions invite us to imagine an affirmative critique, in dialogue with the communities that develop the software. Or could the platform itself be rendered differently through disobedient takes on the body? This text ends with a set of “feature requests” that challenge the slicedom of Slicer. It is an attempt at starting a kind of trans*feminist prototyping for an open source software platform for biomedical informatics. To technically widen the tomographic imagination, we could maybe start by:<br />
<br />
* Renaming the software platform to more accurately reflect the operations it performs. Some proposals: Euclidean Anatomix, Forever dissecting, The Slicest, FlashFlesh, A-clinical Suite Pro, Tomographix Toolbox, Final Cut™ ...<br />
<br />
* Introducing multiple and relational perspectives. Computational rendering does not need a single vantage point, nor does it need to mimic the presence of human eyes. Next to the conventional two-way and orthogonal perspective, Slicer could bring multiple-axis and non-Euclidean perspectives to the foreground.<ref name="ftn34">Slicer does offer a second perspective rendering, namely “orthographic perspective” (straight-extreme).</ref><br />
<br />
* De-centering the ocularcentrism of the renderings and re-orienting representations. It is not (necessarily) about replacing vision with touch, or with vibrational, thermic and aural renderings, although they might be less or otherwise burdened by modern issues. We are wondering, first of all, about collective modes of sensing and/or observation, to include multiplied modes of gathering and of processing impressions, and of involving otherwise enabling renderings of data.<br />
<br />
* Breaking the mirage of the interface as a mirror or window on a natural outcome. There must be ways to insist that representation is never complete: in volumetric renderings, nothingness and thereness are happening at the same time. Donna Haraway: "see objectivity not as an epistemological position, but as a precious and fragile and partial achievement".<br />
<br />
* De-individualising the imagery of the oneness of humanness. The platform does not need to technically collapse multiple slices into a discrete, single volumetric object that appears out of nowhere. Hayles says "only if one thinks of the subject as an autonomous self, independent of the environment, is one likely to experience the panic of Norbert Wiener's Cybernetics and Bernard Wolfe's Limbo (...) when the human is seen as part of a distributed system... it is not a question of leaving the body behind but rather of extending embodied awareness in highly specific, local and material ways that would be impossible without electronic prosthesis".<br />
<br />
* Problematising the processual temporality of the volumetric images: can we make sure that we do not forget that these volumes are constructed from takes at different moments, glued into a single object?<br />
<br />
* Implementing Agential Regions of Interest. This is aimed at eventually freeing the slice from the modern project. What would an a-modern slice be, how would it behave? How to un-capture the slice from its modern ghosts?<br />
<br />
* Last but not least, we propose to dedicate some of the funding to the initiation of a non-dependent program that would allow users, experts and other participants in Slicer to study the Computer Vision (sic) techniques that are implemented in this software. The program should not follow the limited spectrum of probable visions of a white-washed medical research imagination.<br />
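Two of these requests — keeping the processual temporality of the volume legible, and rethinking Regions of Interest — have concrete technical counterparts. A minimal NumPy sketch (hypothetical arrays and timestamps for illustration only; Slicer itself would operate on its own volume nodes through its Python console):<br />

```python
import numpy as np

# Hypothetical: three 4x4 slices acquired at different moments.
rng = np.random.default_rng(0)
slices = [rng.integers(0, 255, size=(4, 4)) for _ in range(3)]
acquired_at = [0.0, 1.3, 2.9]  # seconds: the takes are NOT simultaneous

# "Gluing": stacking collapses the takes into one volumetric object...
volume = np.stack(slices, axis=0)  # shape (3, 4, 4)

# ...unless the acquisition times travel with the voxels, so the volume
# stays readable as a sequence of takes rather than a timeless object.
volume_with_time = {"voxels": volume, "slice_times": acquired_at}

# A conventional Region of Interest: an axis-aligned box crop,
# the "straight cut" that a feature request above asks to rethink.
roi = volume[0:2, 1:3, 1:3]  # shape (2, 2, 2)
```

The sketch makes the stakes tangible: the single discrete object only "appears out of nowhere" if the per-slice metadata is discarded along the way.<br />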
<br />
The possible is not about a fantastical widening of the imagination; it is a technical condition that is already happening. This is a fundamental political twist in cultural analysis and critique of what imagination is: it is actually a technical thing. Imagination depends on the devices we collectively use, or that we allow our lives to be used by. The devices we collectively use depend on that imagination. This dependency has always been and will always be ''mutual''. When we assume this condition, then what would response-able imagery entail?<br />
<br />
<br />
''This text was written in the context of Possible Bodies, a collaborative research on the concrete and complex fictional entities that "bodies" are, asking what matter-cultural conditions of possibility render them volumetrically present. The research was supported by an artistic development grant from the Flemish Government, and by Hangar, AZALA, Bidston Observatory. Thank you Antye Guenther, Martino Morandi, Zoumana Meite and Dennis Pohl for valuable feedback. ''[http://possiblebodies.constantvzw.org/ http://possiblebodies.constantvzw.org/]<br />
<br />
= Figures =<br />
<br />
== 1. Slice ==<br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image1.png|none]]</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">Figure 1: ‘We slice the image of the patient like a loaf of bread’. Mayo Foundation for Medical Education, date unknown.</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image3.png|none]]Figure 2: Basic image registration in Slicer v4.10.2 (screenshot)</div><br />
<br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image2.png|none]]</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">Figure 3: Albrecht Dürer, “Artist drawing a nude with perspective device”, 1525.</div><br />
<br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image4.png|none]]</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">Figure 4: “Whole heart segmentation from cardiac CT in 10 minutes”. PerkLab, 2017 (still from Slicer video tutorial)</div><br />
<br />
== 2. Slicer ==<br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image11.png|none]]</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">Figure 5: Slicer logo</div><br />
<br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image8.png|none]]</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">Figure 6: ‘Not for clinical use’, Slicer v4.10.2 (screenshot)</div><br />
<br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image5.png|none]]</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">Figure 7: Torso and Internal Organs of the Visible Human, transverse cut. Voxel-Man, 2000</div><br />
<br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image10.png|none]]</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">Figure 8: Re-rendered torso including medical equipment. Ray-tracing in Slicer v4.10.2 (screenshot)</div><br />
<br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image6.png|none]]Figure 9: An abundance of extensions. Slicer v4.10.2 (screenshot)</div><br />
<br />
== 3. Slicing ==<br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image7.png|none]]</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">Figure 10: ''The regional cut:'' Defining a region of interest enacting a straight cut. Slicer v4.10.2 (screenshot)</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image13.png|none]]</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">Figure 11: ''The demarcation cut:'' The SPL Inner Ear Atlas is based on CT scans visualized with Slicer. Open Anatomy Project, 2018 [https://www.openanatomy.org/atlases/nac/inner-ear-2018-02 https://www.openanatomy.org/atlases/nac/inner-ear-2018-02]</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image14.png|none]]</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">Figure 12: ''The invasive-non-invasive cut:'' In 2015, Susan Potter donated not only her not-so-normal body but also her medical history to the Virtual Human project. “This Woman Volunteered Her Body To Be Sliced Into 27,000 Pieces, To Help Medical Students”. National Geographic, 2017 [https://www.storypick.com/digital-cadaver/ https://www.storypick.com/digital-cadaver/]</div><br />
<br />
== 4. Feature Requests ==<br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">{{clear}}<br />
[[Image:Image12.png|none]]</div><br />
<br />
<div style="margin-left:0cm;margin-right:0cm;">Figure 13: Lynn Randolph, “Immeasurable Results”, illustration included in Donna J. Haraway, Modest_Witness@Second_Millennium. FemaleMan©_Meets_OncoMouse™: Feminism and Technoscience, originally published in 1997.</div><br />
<br />
----<br />
<references/></div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Common.css&diff=88
MediaWiki:Common.css
2020-03-27T15:50:19Z
<p>FS: </p>
<hr />
<div>/* CSS placed here will be applied to all skins */<br />
<br />
h1 {display: none;}<br />
<br />
h2 {font-size: 200%;}</div>
FS
https://volumetricregimes.xyz/index.php?title=MediaWiki:Common.css&diff=87
MediaWiki:Common.css
2020-03-27T15:49:55Z
<p>FS: </p>
<hr />
<div>/* CSS placed here will be applied to all skins */<br />
<br />
h1 {display: none;}<br />
<br />
h2 {font-size: 150%;}</div>
FS
https://volumetricregimes.xyz/index.php?title=Volumetric_Regimes&diff=86
Volumetric Regimes
2020-03-27T15:36:19Z
<p>FS: </p>
<hr />
<div>__NOTOC__<br />
== Volumetric Regimes: material cultures of quantified presence ==<br />
<br />
This wiki is an ongoing workspace for a book in the making. ''Volumetric Regimes'' will be published by Open Humanities Press (2021) in the [http://www.data-browser.net/ DATA browser series], edited by Geoff Cox and Joasia Krysa.<br />
<br />
[[File:Book.png|border|500px]]<br />
<br />
= Volumetric Regimes: material cultures of quantified presence =<br />
Edited by [https://possiblebodies.constantvzw.org/inventory/ Possible Bodies] (Jara Rocha and Femke Snelting)<br />
<br />
Volumetric Regimes emerges from Possible Bodies, a collaborative project on the intersection between artistic and academic research. The project was initiated in 2016 to explore the very concrete and at the same time complex and fictional entities of so-called "bodies" in the context of 3D computation. Possible Bodies developed alongside an [https://possiblebodies.constantvzw.org/ inventory of cases] and resulted in texts, workshops, visual essays and performances. This book will bring together diverse materials from an ongoing conversation between artists, software developers and theorists working with the techniques and technologies of detecting, tracking, printing, modelling and rendering.<br />
<br />
The book will be organised in five chapters, each containing differently formatted materials. Each chapter starts with a commissioned contribution which addresses the material-discursive entanglements that emerge from the crossing of quite differentiated worldings. Contributors include: Ramon Amaro, Maria Dada, Helen Pritchard, Nicolas Malevé.<br />
<br />
[[About|More about ''Volumetric Regimes'']]<br />
<br />
== Chapter index ==<br />
<br />
=== [[Introduction]] ===<br />
<br />
=== [[somatopologies|somatopologies: On the ongoing rendering of corpo-realities]] ===<br />
<br />
=== [[clandestine_disorder|Signs of clandestine disorder: The continuous after-math of 3D computationalism]] ===<br />
<br />
=== [[x,_y,_z|x, y, z: Dimensional axes of power]] ===<br />
<br />
=== [[parametric_unknowns|Parametric Unknowns: Hypercomputation between the probable and the possible]] ===<br />
<br />
=== [[depths_and_densities|Depths and Densities: Accidented and dissonant spacetimes]] ===<br />
<br />
=== [[Colophon]] ===</div>
FS