Tuesday, April 14, 2009
Computation Methods
Wednesday, April 8, 2009
JUST SOME RANTING-
WHAT IS BEING COMPUTED BY WHOM AND HOW? SPUYBROEK VS. BALMOND
Balmond and Spuybroek discuss the elimination of the "random" within their respective approaches to design. Both explain how the "notion of randomness disappears" (Balmond 245; Spuybroek …). They use computation as a design tool: Spuybroek employs a Frei Otto technique with water and wool, whereas Balmond makes use of fractals and the "aperiodic tiling" techniques of the mathematician Ammann. Though both use computation and achieve unexpected results with a complexity that eliminates randomness, they remain fundamentally different.
Perhaps the most interesting distinction between these designers is in their use of computation. "Fractals vs. Abstract Machines": get your tickets now. Both use computation as a form-finding device that eliminates randomness from the design process. However, the process differs between the two. Spuybroek is looking for emergent and unexpected results, where global organizations are affected by dynamic forces. Balmond is looking for complexity and non-repeating results, where global organizations are affected by geometric principles.
It seems to me that the issue at stake has to do with emergence and with intensive and extensive properties. While Spuybroek's "form-finding machines" have to do with intensive forces and material constraints, Balmond's have to do with extensive geometry and mathematical constraints. Spuybroek's work is created from responsive networks or structures that adapt and change in the process of "translating" information (force) into form. Balmond's work is created from rigid geometric relationships that result from "translating" recursive mathematics (equations) into form. Spuybroek's machine looks for optimization and variation, whereas Balmond's calculator looks for variation only; there is nothing to optimize.
Spuybroek's experiments produce radical global variations as the strings bifurcate, deform, and undergo "phase transitions" when different types of forces are applied. Spuybroek's forms emerge as a result of fluctuation and change and are therefore unpredictable. Balmond's forms are a result of fractal geometry that is not reflective of change or fluctuation, but rather of an unchanging seed condition. While Balmond's complex tile pattern never repeats itself, it will never have phase transitions or radical global variations, and it is difficult to call the final product emergent or unpredictable.
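To make the contrast concrete, here is a minimal sketch of force-driven form finding, a hanging-chain relaxation in the spirit of Frei Otto's analogue machines. It is purely illustrative (not Spuybroek's actual setup), and every parameter is an arbitrary assumption:

```python
# Minimal force-driven form finding: a chain of particles relaxes
# under gravity and spring forces. The final curve is never drawn
# directly; it is the residue of the force calculation, as opposed
# to evaluating a closed-form geometric rule. Parameters arbitrary.

N = 21            # particles in the chain
REST = 1.0        # spring rest length
K = 50.0          # spring stiffness
GRAVITY = -0.5    # downward force per particle
DAMPING = 0.9     # velocity damping per step

# start as a straight horizontal chain, ends pinned
pos = [[i * REST, 0.0] for i in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]

for step in range(2000):
    forces = [[0.0, GRAVITY] for _ in range(N)]
    for i in range(N - 1):                      # spring between i and i+1
        dx = pos[i + 1][0] - pos[i][0]
        dy = pos[i + 1][1] - pos[i][1]
        dist = (dx * dx + dy * dy) ** 0.5
        f = K * (dist - REST)                   # Hooke's law
        fx, fy = f * dx / dist, f * dy / dist
        forces[i][0] += fx
        forces[i][1] += fy
        forces[i + 1][0] -= fx
        forces[i + 1][1] -= fy
    for i in range(1, N - 1):                   # ends stay pinned
        vel[i][0] = (vel[i][0] + 0.01 * forces[i][0]) * DAMPING
        vel[i][1] = (vel[i][1] + 0.01 * forces[i][1]) * DAMPING
        pos[i][0] += vel[i][0]
        pos[i][1] += vel[i][1]

print(pos[N // 2])  # the midpoint has sagged into a catenary-like curve
```

Balmond's side of the comparison needs no simulation at all: the tiling is the direct evaluation of a recursive rule, a result rather than a residue.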
Residual effect vs. Seed condition
Machine vs. Computer
Forces vs. Rules
Optimization vs. Variation
Residue vs. Result
Tuesday, March 31, 2009
"The computer with its smart software is killing the designer's authorship"! Sounds like a sentence from a cheap futuristic movie. But on the other hand-all sense of the discussion is laid around this nowadays phenomenon. It seems that today's designer (or architect) plays role of a "guider of the evolution of form"not a generator or composer. Controlling form's development ("designer does not specify every single point of the curve, but only a few key weights which deform the cure in certain ways"M.de Landa) is his initial duty. It can be partially true. But it doesn't frees us as a designers from having artistic flair, from understanding the nature and potential of the material. The computer is just a mean of the achievement of the purpose.
I was also impressed by these words: "The genes guide but do not command the final form. In other words, the genes do not contain a blueprint of the final form" (M. de Landa). A balance would be the perfect solution. A hand and a pencil, together with the computer and its smart software, should be a projection of someone's mind into space.
De Landa
De Landa writes that to realize the potential of the genetic algorithm in design, designers must no longer specify forms but boundaries and constraints. To allow for the evolution of more complex and interesting forms, sets of primary codes/rules must be generated. Designers set constraints but must not foresee any particular result; indeed, the "evolutionary result [is] to be truly surprising". When the algorithm ceases to produce surprises it is no different from a simple CAD program that produces rigid polygons. It is the different intensities that drive the evolutionary process and thus the complexity of forms.
The challenge to designers, therefore, is to know which of these intensities to employ and to what extent. To achieve this, designers need to acquire an understanding of the complexity of material. They are to become, to a certain degree, craftsmen whose craft is often overlooked.
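As a deliberately toy illustration of designing "boundaries and constraints" rather than forms, here is a minimal genetic algorithm in Python. The genome, fitness criterion, and parameters are all my own hypothetical stand-ins, not anything from De Landa:

```python
# Toy genetic algorithm: the "designer" specifies only a fitness
# function (the constraint) and lets variation plus selection
# search for form. Here a genome is a list of bay widths for a
# facade; fitness rewards a target total width and variety.
import random

GENOME_LEN = 8
TARGET_WIDTH = 40.0

def fitness(genome):
    total = sum(genome)
    variety = sum(abs(a - b) for a, b in zip(genome, genome[1:]))
    return -abs(total - TARGET_WIDTH) + 0.5 * variety

def mutate(genome, rate=0.2):
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.uniform(2, 8) for _ in range(GENOME_LEN)]
              for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                   # selection
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

print(max(population, key=fitness))
```

The designer's craft sits entirely in the fitness function; everything else is breeding.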
Monday, March 30, 2009
Delanda Readings
DeLanda
A little confused about the role the architect is playing
Regarding the evolution of the virtual structure: when the number of changes, or of generations, is large enough, the result will be quite unexpected and far different from the original generation. Surely this process will produce some amazing designs, but what role is the architect playing in it? Just a critic who gives comments? Or a final decision-maker who decides what should be kept and what thrown away? If so, what's the difference between an architect and a passer-by?
In my view, the architect should be a god of this process rather than merely the final decision-maker, meaning the architect should be involved in the whole evolutionary process rather than just glancing at the result. The architect should design the first generation with the required function, which defines the correct topological relations among the several parts of the design; the architect also needs to set up the rules of "natural selection," so that every generation is directed to follow the restrictions. There must be other things an architect needs to consider, but so far they are not clear in my head...
Wednesday, March 25, 2009
Delanda
De Landa's essays investigate the genetic algorithm: its place in architecture and its use in the future. Genetic algorithms are now a tool that can generate form by the mating of different forms. The richest algorithms breed the most surprising results; if the results were predictable, the need for this tool would no longer exist. That said, there still needs to be some level of predictability to generate real buildings. Structural elements still need to function in the same ways, holding the loads and stresses of the building, for if columns evolved into decorative elements the structural soundness of the building would be compromised. Therefore, before the breeding of forms begins, a "body plan" must be set: much as mammals share an abstract vertebrate body plan, buildings would have to have their own "body plan" that defines the essence of a building. The designer decides what that plan is comprised of, and in doing so becomes the designer of the egg rather than the designer of the building. Once the egg is decided upon, the genetic algorithm takes over and starts generating offspring; as generations pass, new building forms will evolve.
My question, then, is: in the end, will algorithms be able to produce more efficient buildings than architects can now produce? And if so, will aesthetic design in the future matter less than "aesthetic fitness"?
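One way to read the "body plan" idea in code, as a hypothetical sketch rather than any real system: freeze the topology (columns support slabs, floors stack) and expose only proportions to evolution, with a viability check so no offspring loses its structural soundness:

```python
# Hypothetical "body plan": the topology is frozen (columns carry
# slabs, N floors stack) and only proportions are exposed to
# mutation. A crude structural check rejects any offspring whose
# columns could no longer carry the load. All numbers arbitrary.
import random

BODY_PLAN = {"floors": 5, "columns_per_floor": 4}   # not evolvable

def random_genes():
    return {"column_radius": random.uniform(0.1, 1.0),   # meters
            "bay_span": random.uniform(3.0, 12.0)}       # meters

def is_viable(genes):
    # stand-in for structural analysis: wider spans demand thicker
    # columns (the constant 0.05 is an arbitrary assumption)
    return genes["column_radius"] >= 0.05 * genes["bay_span"]

def mutate(genes):
    child = {k: v * random.uniform(0.8, 1.2) for k, v in genes.items()}
    return child if is_viable(child) else genes   # reject non-buildings

genes = random_genes()
while not is_viable(genes):
    genes = random_genes()
for _ in range(100):
    genes = mutate(genes)
print(BODY_PLAN, genes)
```

Under this reading, no mutation can ever turn a column into ornament; it can only vary within the plan.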
Tuesday, March 24, 2009
DeLanda Readings
Intensive vs Extensive
If we divide a volume of matter into two equal halves, we end up with two volumes, each half the extent of the original. Intensive properties on the other hand are properties such as temperature or pressure, which cannot be so divided.
- Manuel De Landa, Intensive Science and Virtual Philosophy
An extensive quality is one that you can measure, such as length, area, or volume; extensive difference is quantitative. Say you divide a perfect cube into two equal solids: the two halves contain the same amount of volume. An intensive quality, on the other hand, emerges when a material reaches a threshold and changes in kind; for example, water turns into ice as the temperature decreases. This allows us to investigate self-organization and bottom-up approaches. In a bottom-up approach, designers have control at the local scale, but the global scale is the result of interactions of local behavior. The important thing is that the global behavior is more than the sum of its parts; there is a difference in quality that makes new behavior emerge, as when flowing water turns into tidal waves or spider-silk strands create a self-supporting web network. Designers seem to use algorithmic methods to investigate this emergent behavior. However, algorithms, whether scripting, animation, or parametrics, are structured by nested binary choices: if A, do C; if B, do D; if E, do G; and so on. I still wonder whether these binary choices can simulate the complexity of nature. Maybe they can. As Stephen Wolfram observes, a simple rule can create a complex system.
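Wolfram's claim is easy to test in a few lines. An elementary cellular automaton such as Rule 30 is nothing but a lookup table of binary choices over three neighbors, yet the global pattern it prints is famously complex (the construction is standard; only the rendering details here are mine):

```python
# Elementary cellular automaton, Rule 30: each cell's next state is
# a pure binary lookup on (left, self, right), exactly the nested
# "if A do C, if B do D" structure, yet the global pattern is
# complex and aperiodic. Standard construction after Wolfram.

RULE = 30
WIDTH, STEPS = 63, 31

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # single seed cell

for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    # treat (left, self, right) as a 3-bit index into the rule number
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4
                  + row[i] * 2
                  + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

Locally there is nothing here but eight if/then cases; globally, the printed triangle never settles into a repeating pattern.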
Genetic Algorithm
from: Deleuze and the Use of the Genetic Algorithm in Architecture, Manuel de Landa
In this short reading De Landa argues for fundamental changes in architectural design. He gives credit to Deleuze's philosophical work and expands on the subject of the "genesis of form". The example of drawing a round column, which he gives as a three-step procedure (a kind of mixture of Euclidean geometry and Aristotle's categories), seems to be the starting point for the relevance of recursion and aggregation, of using that specific technique to create the evolution of a "larger reproductive community", a population with variable genetic code. The reading feels very relevant and contemporary; while reading, I kept thinking of software like Grasshopper or Houdini, and attaching the meaning of genetic mutation to simple step procedures that can be deployed in a computer environment using chunks of information embedded in an element. The notion of the "body plan" and the idea of hacking into non-architectural resources and fields are interesting, and necessary if we are to avoid the traditional, decayed preference for selecting aesthetic geometry. In the end we see the final product through so many different lenses and scales of criticism that the grounds for judging digitally mutated forms can seem diminutive in themselves; I don't think we have come close to real digital reproduction or even similarity. In themselves these forms hold small bits of the building blocks and techniques needed to unravel new foundations. Personal style and selection do not run parallel with generating a process through a defined sequence of code; that is true, but still under question. Subsequently, we also begin to see overlap with the earlier readings from Wolfram and Rocker's research. These are more or less my general comments on this reading; "The Case of Modeling Software" reintroduces some of the same concepts as I browse through it. Comments in a follow-up blog post.
Thursday, March 19, 2009
Rocker_Complexity_Project
The image: Zaha Hadid. It is a digital-media study of the Thames Gateway as an urban field; the project would be located in London, United Kingdom. This project reveals a similar discussion: a rule set serves as an origin that guides development, and alterations of that development lead to multiple possibilities and patterns of structure. The designers investigate four main building typologies throughout the urban area, leading to a series of evolutions of these standard typologies that are placed on the site and tested. The fusion of these typologies creates possible new structures. To see the animation: http://www.youtube.com/watch?v=IksIyui84wE#
Tuesday, March 17, 2009
Rocker/ Complexity / Image

Tuesday, March 10, 2009
wolfram's idea of complexity is easier to define than that of deleuze. for wolfram, complexity arises from a simple code, repeated many times. in this way, while the local scale is simple, the same code at the global scale becomes complex. deleuze's smooth and striated models are inherently complex, and related to each other in many more ways than just local/global scale.
interior of the interactive water pavilion by NOX
Complexity, cellular automata
Rocker writes that the "extraction of algorithmic process is an act of high level abstraction", an act producing visual complexity (2D or 3D diagrams). The unpredictable and the unknown generated through such algorithmic processes override experience and perception, two very distinct categories. In this sense the cellular automaton is a visual form of simple messages and language abstracted into complex behavior and patterns; that is, within the medium only.
I suppose one difference between Wolfram and Deleuze is the method through which spatial complexity is identified. Wolfram, for instance, uses a procedural sequence, beginning with a few simple rule sets, so the procedural motion is more like a thrust forward, toward the expansion of visual complexity; Deleuze, on the other hand, seems to draw a line between two distinct spaces and jump back and forth to form an inherent relation between them. Both operate in a dynamic environment.
UNStudio project: master plan and train station
Deleuze/Wolfram Complexity, Rocker Essay and Digital Architect Image
Rocker's studio uses cellular automata as a generative tool for architecture. However, her use of cellular automata places emphasis on the output diagram as an architectural form. In the last class, we discussed that Rocker's studio used the algorithm as a way of creating the diagrams, but that diagrams are only one way of representing the computation. I was wondering how computation could be represented if its outcomes vary. Is the problem with Rocker's approach that she is looking at only one outcome and not a variety of outcomes?
The following is an image of a model designed by Greg Lynn for the Kleiburg Block.
Monday, March 9, 2009
Tuesday, March 3, 2009
Relation Between Categories and Prior Analytics
Thursday, February 19, 2009
Wednesday, February 11, 2009
smooth & striated - vs - the fold
the fold text was a little tougher to get through - perhaps it's easier if one is already familiar with leibniz's work? deleuze explains the fold through the idea of a finite body being transformed through an infinite number of folds. the labyrinth he references is therefore not a system of lines, but one entity that is now able to create space. the process of folding is why organisms exist. it creates dimensionality (not in the metric sense, though). so, there is never a scenario of just one fold in matter. there are always multiple folds within folds.
this idea seems impossible to represent architecturally, as eisenman attempted. if one of the essential concepts is this immeasurable folding, there is no way to include so much detail in a built solution. and, the way that I understand the fold is that it isn't a quantifiable thing... it is a force acting on matter. seems like it's not enough to create only a structural iteration of the idea. does that mean that no designer will ever be able to capture it? there are a whole lot of them trying, at the moment.
how does the baroque house fit into all of this? I'm having trouble making that connection.
Tuesday, February 10, 2009
models
Jan 27 blog: Contrasts Between Topology and Geometry
Euclid seems primarily concerned with space that can be fully rationalized and physically measured. His notion of space is somewhat limited in that it only allows for the creation of space in two dimensions. Space is conceived of, and described by, the vocabulary set forth in the definitions, postulates, and common notions. He does not establish a means of describing space that exists outside his language of points, lines, planes, and figures.
Barr defines geometry as the study of mathematical space. He defines topology as a type of geometry but states that it also encompasses other fields. The two differ in that geometric spaces must, by definition, be visually expressive. This is not necessary for a topological space. Topology provides the means of expression for figures that would, in geometry, be otherwise indescribable. It deals with rules of continuity rather than rules of form.
It becomes easy to see the differences between geometric and topological spaces when looking at the example of the geometric and topological pentagon (pg. 12, fig. 14). A pentagon is defined geometrically as a polygon having five sides and five interior angles; this definition, however, says nothing of the number of faces, vertices, and edges. The topological pentagon is not a geometric pentagon because it does not have five interior angles, but it can still be classified as a pentagon because it retains the same number of faces, vertices, and edges.
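The invariant behind "faces, vertices and edges" is Euler's formula, which is worth writing out since it is exactly the property that survives distortion and stretching:

```latex
% Euler's formula: the topological invariant that survives distortion
V - E + F = 2
% the pentagon as a plane figure, counting the outer region as a face:
%   V = 5,  E = 5,  F = 2   =>   5 - 5 + 2 = 2
% a cube, however stretched or dented, so long as it is not torn:
%   V = 8,  E = 12, F = 6   =>   8 - 12 + 6 = 2
```

Geometric measurements (lengths, angles) change under every deformation; the count V - E + F does not.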
Eisenman rejects the traditional ideas of Cartesian geometry and instead favors the idea of the fold rather than the point. He is interested in the way the fold can change and manipulate the existing relationships of horizontal and vertical, figure and ground. He favors the ideas of Deleuze and the objectile, which imply continual variation through the fold. He is interested in its effect on the object/event relationship.
He is also interested in the ways the fold can be applied to Thom's catastrophe theory. He explains that in catastrophe theory, a grain of sand that appears to cause a landslide is really not the cause at all; rather, the cause exists within the conditions of the entire structure. The fold in relation to architecture is similar in that it can serve as the unseen force that explains abrupt changes in form as well as in urban conditions. He sees enormous potential in the fold as a way to reinterpret and reframe what exists, and to connect the old and the new. He sees it both as a potential formal device and as a way to transform architecture and urbanism from static objects into meaningful events.
Wednesday, February 4, 2009
Tuesday, February 3, 2009
Summary Session 2 Conflicts
Deleuze, Rene Thom, Leibniz
Problems of models of meaning
Conflicts between models of meaning.
We covered a lot today, but we'll continue to return many times to these examples of conflicts. Remember that architecture never ceases to make ambitious claims about its use of mathematics and geometry. And our first example points to a conflict between ideality and instrumentality in Vitruvius.
It's what I would call an internal conflict, since the presumed idealization of mathematical and geometrical concepts is followed at the end of the chapter with a declaration of a reparative, instrumental geometry. Symmetry is of course one of the principles.
It is also a principle in Boullée and Durand. Their drawings point to rather different, and I would say conflictual, notions of architecture's use of geometry. For Boullée, geometry, and the example of the sphere, embody Nature's and Man's principles in perfect harmony. Geometry is both thing and symbol. In Durand, it is quite instrumental, and if there is anything significant about symmetry it is that it's an economic principle. In Durand's case, furthermore, we have a "method" that for the first time is analytical, probing, and generative.
First there is a grammatical spreadsheet of possibilities of primitive combinations. In subsequent illustrations of the Précis these develop into various syntactical formations. They become propositions about buildings.
I would say JNL Durand is already protocomputational.
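If Durand is protocomputational, the "grammatical spreadsheet" is, in modern terms, a Cartesian product. A toy sketch in Python, with made-up element names standing in for Durand's primitives:

```python
# Durand's "spreadsheet of primitive combinations" as a Cartesian
# product: enumerate every syntactic formation of a small grammar.
# The element names are my own illustrative stand-ins, not Durand's.
from itertools import product

plans = ["square", "circle", "cross"]
entries = ["portico", "arcade"]
roofs = ["dome", "vault", "flat"]

for i, combo in enumerate(product(plans, entries, roofs), 1):
    print(f"{i:2d}: {' + '.join(combo)}")   # 3 x 2 x 3 = 18 propositions
```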
We also looked at mathematical systems and certain differences, such as those between geometry and topology, and between those and the Cartesian coordinate system, calculus, and catastrophe theory.
We looked at Eisenman's conflict with Euclidean geometry and Cartesian space, and his introduction of the Fold, Leibniz, catastrophe theory, and René Thom, via Deleuze and Greg Lynn. One of the things at stake here is the introduction of an ontology of events rather than an ontology of discrete forms.
Catastrophe theory combines calculus, the continuous variation and rates of change, with topology. Its interest is in quantifying qualitative states of change, which calculus can't do on its own.
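The textbook example is Thom's cusp catastrophe, which shows exactly this: continuous variation of two control parameters producing a discontinuous, qualitative jump (standard form, not tied to any of the projects above):

```latex
% Thom's cusp catastrophe: a smooth potential in one state variable x
% with two control parameters a, b
V(x) = x^4 + a x^2 + b x
% equilibria, found by ordinary calculus:
\frac{dV}{dx} = 4x^3 + 2ax + b = 0
% but the NUMBER of equilibria changes across the bifurcation set
% (the "fold"), where the cubic acquires a double root:
8a^3 + 27b^2 = 0
% crossing this curve in the (a, b) plane makes the system jump from
% one equilibrium to another: a qualitative, topological change
% produced by continuous variation.
```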
OK, so this now becomes a new model, but it does so by showing the insufficiency of previous (Modernist) models. It is a kind of dialectical conflict. For the next blog please just read the two essays by Deleuze and identify how he uses models of meaning.
I want us to thoroughly distinguish between uses of computation for design and computation as a generative system.
In every case we see historically, we have various modifications in geometry being applied to geometry and to the transformation of design. With the algorithm and computation proper we've hit a limit, since we are now no longer dealing with geometry and in fact, I would say, possibly not even with mathematics.
One last conflict. Eisenman, like many others during the 90s, takes the figure of the catastrophe fold and literally transposes it onto the Cartesian grid. Well, what's wrong with that? A lot. But for one, and primarily, the image of the catastrophe fold is a diagram; it's not a thing. Eisenman makes it a thing. A thingy! Punk.
Ok, hope this helps.
Peace out
P
Tuesday, January 27, 2009
Contrasts between topology and geometry – preliminary investigations
One of the major distinctions Barr addresses when defining space is that topology isn't focused on a form or shape so much as on how it is put together. In Cartesian geometry the focus is on the aesthetic appeal of the object, which is defined by a multi-dimensional grid system. This system was passed down through the generations like a golden chalice. Euler's Law, however, radically changed the world's perception of space, placing emphasis on continuity as opposed to object appeal. Barr notes that "a topologist is interested in those properties of a thing, that while they are in a sense geometrical, are the most permanent- the ones that will survive distortion and stretching."
The specific properties of a thing that survive distortion and stretching are its faces, edges, and vertices. Through Euler's Law, topological invariants can be used to bend geometric polyhedra from an aesthetically driven form into a continuous, self-informed object. Through a series of scripted rules, topological geometry is capable of producing outcomes unimagined by Cartesian geometry. Space is no longer thought of as an inside/outside if/then statement; instead it is defined by faces, edges, and vertices.
Peter Eisenman takes it upon himself to outright reject Euclidean and Cartesian geometry for the likes of topology and catastrophe theory. Through the research and analysis of Deleuze and Leibniz, Eisenman has equipped himself with an arsenal of weaponry that allows him to reject Euclidean and Cartesian geometry in favor of a more 'event'-based form. Drawing on Deleuze's idea of continuous variation through the fold, Eisenman proposes using these philosophical ideas as architectural matter. The concept of the fold allows for mediation between previous figure/ground strategies: the fold is neither figure nor ground, yet it is capable of reconstituting both. Eisenman felt the fold would allow for "an opportunity to reassess the entire idea of a static urbanism that deals with objects rather than events."







