Tuesday, April 14, 2009

Computation Methods

The discussion is dominated by the surprising emergence of components - through an intermittent process of computation and spontaneity. In both cases, arriving at a sense of structure seems to be the main interest. Spuybroek/Otto's method of calculation contemplates transformation and freedom; flexibility is a literal, material variable relevant to every operation. The relevance of flexibility is such that the method seems to be reduced to a framework, and subsequent non-procedural events take precedence over the process of determining form. The validity of the methodologies could be extended to site influences and social parameters, to propose one possibility. If so, then Spuybroek/Otto's method seems to come closer in that regard, especially when it recognizes that it is material potential/material intelligence (and not a superimposition of mathematical operations) that sets "the method" in motion, and it leaves plenty of room for future influences.

Wednesday, April 8, 2009

JUST SOME RANTING-

OPTIMIZATION AND VARIATION THROUGH COMPUTATION

WHAT IS BEING COMPUTED BY WHOM AND HOW? SPUYBROEK VS. BALMOND


Balmond and Spuybroek discuss the elimination of the "random" within their respective approaches to design. Both explain how the "notion of randomness disappears" (Balmond 245, Spuybroek…). They are using computation as a design tool: in this case Spuybroek employs a Frei Otto technique with water and wool, whereas Balmond makes use of fractals and the "aperiodic tiling" techniques of the mathematician Ammann. Though both use computation and achieve unexpected results with a complexity that eliminates randomness, they remain fundamentally different.


Perhaps the most interesting distinction between these designers is in their use of computation. "Fractals vs. Abstract Machines" - get your tickets now. Both are using computation as form-finding devices that eliminate randomness from the design process. However, the process is different for the two. Spuybroek is looking for emergent and unexpected results where global organizations are affected by dynamic forces. Balmond is looking for complexity and non-repeating results where global organizations are affected by geometric principles.


It seems to me that the issue at stake has to do with emergence and with intensive and extensive properties. While Spuybroek's "form finding machines" have to do with intensive forces and material constraints, Balmond's have to do with extensive geometry and mathematical constraints. Spuybroek's work is created from responsive networks or structures that adapt and change in the process of "translating" information (force) into form. Balmond's work is created from rigid geometric relationships that are a result of "translating" recursive mathematics (equations) into form. Spuybroek's machine looks for optimization and variation, whereas Balmond's calculator looks for variation and there is nothing to optimize.
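As an aside: one generic way to picture "translating recursive mathematics (equations) into form" is a short recursive subdivision such as the Koch curve. This is only an illustrative Python sketch of recursion producing a geometric figure - it is not Balmond's or Ammann's actual technique, and the function name, depth, and coordinates are arbitrary choices.

```python
import math

def koch(p0, p1, depth):
    """Recursively subdivide a segment into the Koch-curve motif.

    Returns the list of points (excluding the final endpoint) so that
    consecutive calls can be concatenated.
    """
    if depth == 0:
        return [p0]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)            # 1/3 point
    b = (x0 + 2 * dx, y0 + 2 * dy)    # 2/3 point
    # Apex of the bump: the middle third rotated 60 degrees off the segment.
    angle = math.radians(60)
    apex = (a[0] + dx * math.cos(angle) - dy * math.sin(angle),
            a[1] + dx * math.sin(angle) + dy * math.cos(angle))
    points = []
    for s, e in [(p0, a), (a, apex), (apex, b), (b, p1)]:
        points.extend(koch(s, e, depth - 1))
    return points

pts = koch((0.0, 0.0), (1.0, 0.0), depth=3) + [(1.0, 0.0)]
print(f"{len(pts)} points after 3 recursions")  # 4^3 + 1 = 65 points
```

Each level of recursion replaces a segment with four smaller ones; the rule never changes, yet the figure grows more intricate - complexity from repetition rather than from fluctuating forces.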


Spuybroek's experiments produce radical global variations as the strings bifurcate, deform, and undergo "phase transitions" when different types of forces are applied. Spuybroek's forms emerge as a result of fluctuation and change and are therefore unpredictable. Balmond's forms are a result of fractal geometry that is not reflective of change or fluctuation, but rather of an unchanging seed condition. While Balmond's complex tile pattern does not repeat itself, it will never have phase transitions or radical global variations, and it is difficult to call the final product emergent or unpredictable.


Residual effect vs. Seed condition
Machine vs. Computer
Forces vs. Rules
Optimization vs. Variation
Residue vs. Result

Tuesday, March 31, 2009

2009-03-31 On Board


Manuel de Landa reading

"The computer with its smart software is killing the designer's authorship"! Sounds like a sentence from a cheap futuristic movie. But on the other hand-all sense of the discussion is laid around this nowadays phenomenon. It seems that today's designer (or architect) plays role of a "guider of the evolution of form"not a generator or composer. Controlling form's development ("designer does not specify every single point of the curve, but only a few key weights which deform the cure in certain ways"M.de Landa) is his initial duty. It can be partially true. But it doesn't frees us as a designers from having artistic flair, from understanding the nature and potential of the material. The computer is just a mean of the achievement of the purpose.
I was also impressed by words such as: "The genes guide but do not command the final form. In other words, the genes do not contain a blueprint of the final form" (M. de Landa). A balance would be the perfect solution. A hand and a pencil, together with the computer and smart software, should be a projection of someone's mind into space.

De Landa

I suppose we are all somewhat fascinated by the "egg designer" analogy.

De Landa writes that to achieve the potential of the genetic algorithm in design, designers are no longer specifying forms but boundaries and constraints. To allow for the evolution of more complex and interesting forms, sets of primary codes/rules must be generated. Designers set constraints but must not foresee any particular result; indeed, the "evolutionary result [is] to be truly surprising". When the algorithm ceases to produce surprises it is no different from a simple CAD program that produces rigid polygons. It is the different intensities that drive the evolutionary process and thus the complexity of forms.
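As an aside, here is a minimal Python sketch of a generic genetic algorithm - not DeLanda's procedure or any particular design software. The genome, the fitness function, and all parameters below are hypothetical stand-ins; the point is only that the designer writes the constraints (the fitness function and the breeding rules) rather than the resulting form.

```python
import random

# A genome is a list of numeric "weights" (e.g., control parameters for a form).
GENOME_LENGTH = 8
POPULATION_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.1

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder constraint: reward genomes whose values sum close to zero
    # while keeping large internal variation (a stand-in for the structural
    # or material constraints a designer would encode instead of a final form).
    balance = abs(sum(genome))
    variation = max(genome) - min(genome)
    return variation - balance

def crossover(a, b):
    # Single-point crossover: the "mating" of two forms.
    point = random.randint(1, GENOME_LENGTH - 1)
    return a[:point] + b[point:]

def mutate(genome):
    return [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    # Selection: keep the fitter half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POPULATION_SIZE // 2]
    # Breed the next generation from random pairs of parents.
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POPULATION_SIZE)]

best = max(population, key=fitness)
print("best genome after evolution:", [round(g, 2) for g in best])
```

Run twice with the same constraints, it yields different populations; the surprise lives in the search space, not in a form anyone drew.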

The challenge to designers, therefore, is to know which intensities are to be employed and to what extent. To achieve this, designers need to acquire an understanding of the complexity of materials. They are to become, to a certain degree, craftsmen whose craft is often overlooked.

Monday, March 30, 2009

Delanda Readings

Delanda's chief interest is in the complexity of matter and the nature of material. Formal manipulation is purely conceptual, and the problems facing architects include numerous additional factors such as material constraints, structural forces, etc. Delanda sees enormous potential in CAD software not only to develop complex forms but also to ground them in real-life situations, thereby providing a more holistic approach to design. He also challenges the traditional role of materiality in architecture, specifically through the idea of the genetic algorithm. Just as DNA can be restructured to create a variety of organisms, new heterogeneous materials can be restructured and applied universally in any situation. These ideas have the potential to alter the way that buildings are conceived and constructed, as well as the role of the architect in the design process.

DeLanda

Both of DeLanda's articles were helpful to me in terms of relating Deleuze to our modes of thinking about form and architecture.  I was as interested in the "egg designer" idea as Matt was, but the way DeLanda talked about materials also fascinated me.  Rather than talk about the inherent properties of a material, he approached it from a craftsman's point of view, in that materials always vary based on their origins/components.  It started me thinking about what it would mean to be a craftsman today - how does the know-how of materials influence an approach in design?  Particularly a digital approach?

While using the bottom-up approach that Tsakairi talked about, materials are obviously very important and influential in design.  That got me questioning how materiality could relate to genetic algorithms and designing in a virtual environment - where materiality can assemble itself.  DeLanda talked about the benefits of bone-like materials - where tension and compression stresses are dealt with in different ways rather than homogeneously. But if the elements that deal with tension and compression are themselves homogeneously/mechanically produced, then how are we still representing the possible material variation?

A little confused about the role the architect is playing

Regarding the process of virtual evolution mentioned in the article, "the space of possible designs that the algorithm searches needs to be sufficiently rich for the evolutionary results to be truly surprising" is the first point that enlightened me, but it also confuses me a little.
About the evolution of the virtual structure: when the number of changes or generations is large enough, the result will be quite unexpected, and far different from the original generation. Surely that will bring up some amazing designs after the process, but what role is the architect playing in it? Just a critic who gives comments? Or a final decision-maker who decides what should be kept and what thrown away? If so, what's the difference between an architect and a passer-by?
In my view, the architect should be a god of this process instead of merely the final decision-maker, which means the architect should be involved in the whole evolutionary process rather than just glance at the result. The architect should design the first generation with the required functions, which would define the correct topological relations among the several parts of the design; the architect needs to set up the rules of "natural selection", too, so that every generation can be directed to follow the restrictions. There must be something else an architect needs to consider, but so far it is not so clear in my head...

Wednesday, March 25, 2009

Delanda

Delanda's essays investigate the genetic algorithm: its place in architecture and its use in the future.  Genetic algorithms are now a tool that can generate form by the mating of different forms.  The richest algorithms breed the most surprising results.  If the results were predictable, the need for this tool would no longer exist.  That said, there still needs to be some level of predictability to generate real buildings.  Structural elements still need to function in the same ways, holding the loads and stresses of the building.  For if columns evolved into decorative elements, the structural soundness of the building would be compromised.  Therefore, before the breeding of forms begins, a "body plan" must be set: much as mammals have an abstract vertebrate plan, buildings would have to have their own "body plan" that defines the essence of a building.  The designer would decide what that plan is comprised of, and in doing so becomes the designer of the egg rather than the designer of the building.  Once the egg is decided upon, the genetic algorithm takes over and starts generating offspring; as generations pass, new building forms will evolve.

My question, then, is: in the end, will algorithms be able to produce more efficient buildings than architects can now produce?  And if so, will aesthetic design in the future be of less importance compared to "aesthetic fitness"?

Tuesday, March 24, 2009

DeLanda Readings

DeLanda's quest to make a case for modeling software finds its roots in Deleuzian theory. In his text, DeLanda describes Deleuze's attempt to change the dominant philosophy of the genesis of form. However, what struck me as being extremely helpful in seeing the transition from 'the world of obedient rigid polygons' to our current design philosophies was DeLanda's explanation and analysis of embryological development. In embryological development, an egg is initially very simple. Through phase transitions, the relatively simple egg becomes more and more complex. As DeLanda elegantly phrased it, 'the genes guide but do not command the final form.' This same process must be undertaken when using algorithmic tools. One begins with a relatively simple set of instructions [this brings to mind the transition from the simple A-B set that we discussed in class to the complex patterns that would eventually form] and by following those instructions a complexity based on intensive properties begins to emerge. This seemingly banal analogy between the embryological process and architectural development was an eye-opener for me. I guess DeLanda was right when he stated that 'they [architects and engineers] will have to become egg designers.'

Intensive vs Extensive

If we divide a volume of matter into two equal halves, we end up with two volumes, each half the extent of the original. Intensive properties on the other hand are properties such as temperature or pressure, which cannot be so divided.

- Manuel DeLanda, Intensive Science and Virtual Philosophy

Extensive quality is a quality that you can measure, such as length, area, or volume; hence extensive quality is quantitative difference. Say you divide a perfect cube into two equal solids: the two solids contain the same amount of volume. Intensive quality, on the other hand, is when a material reaches a threshold that creates a difference in quality - for example, water turns into ice as the temperature decreases. This allows us to investigate self-organized, bottom-up approaches. In a bottom-up approach, designers have control over the local scale, but the global scale is the result of interactions among local behaviors. The important thing is that the global behavior is more than the sum of its parts; the difference in quality allows new behavior to emerge, such as water flow turning into tidal waves or spider-web strings creating a self-supporting web network. Designers seem to be using algorithmic methods to investigate this emergent behavior. However, algorithms - whether scripting, animation, or parametric - are structured by nested binary choices: for example, if A do C, if B do D, if E do G, and so on. I still wonder if this binary choice can simulate the complexity of nature. Maybe it does. As Stephen Wolfram mentions, a simple rule can create a complex system.
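To make the "simple rule, complex system" point concrete, here is a minimal Python sketch of an elementary (one-dimensional) cellular automaton in the spirit of Wolfram's rules. It is a generic textbook construction rather than code from the reading or the studio; the rule number, grid width, and step count are arbitrary choices.

```python
# Elementary cellular automaton: each cell's next state is a binary choice
# determined by its own state and its two neighbours (8 possible patterns).
RULE = 30   # Wolfram's rule number; try 110 or 90 as well
WIDTH = 64
STEPS = 32

# Decode the rule number into a lookup table: (left, self, right) -> new state.
rule_table = {(l, c, r): (RULE >> (l * 4 + c * 2 + r)) & 1
              for l in (0, 1) for c in (0, 1) for r in (0, 1)}

# Start from a single "on" cell in the middle of the row.
row = [0] * WIDTH
row[WIDTH // 2] = 1

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # Apply the same local rule to every cell (wrapping at the edges).
    row = [rule_table[(row[(i - 1) % WIDTH], row[i], row[(i + 1) % WIDTH])]
           for i in range(WIDTH)]
```

Each cell only ever makes a binary choice based on three neighbours, yet rules such as 30 or 110 print globally intricate, non-repeating patterns.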

Genetic Algorithm

from: Deleuze and the Use of the Genetic Algorithm in Architecture, Manuel de Landa

In this short reading Delanda supports fundamental changes in architectural design. He gives credit to Deleuze's philosophical work and expands on the subject of the "genesis of form". The example of drawing a round column, which he gives as a three-step procedure (a kind of mixture between Euclidean geometry and Aristotle's categories), seems to be the starting point for the relevance of recursion and aggregation of such a form - or rather of that specific technique - to effectively create the evolution of a "larger reproductive community", a population with a variable genetic code. The reading appears very relevant and contemporary; while reading, I found myself thinking of software like Grasshopper or Houdini, and attaching the meaning of genetic mutation to simple step procedures that can be deployed in a computer environment using chunks of information embedded into an element. The notion of the "body plan" and the idea of hacking into non-architectural resources/fields are interesting and necessary to consider in order to avoid the traditional, decayed preference for aesthetic geometry selection. In the end we see the final product through so many different lenses and scales of criticism that I think the grounds for seeing digitally mutated forms can be diminutive in themselves, so I don't think we have come close to anything like digital reproduction or its similarities; in themselves they hold small bits of the building blocks and techniques needed to unravel new foundations. Personal style and selection do not run parallel with generating a process through a defined sequence of code; that is true but still open to question. We also begin to see overlap with the earlier readings of Wolfram and Rocker's research. These are more or less my general comments on this reading; "The Case of Modeling Software" reintroduces some of the same concepts as I browse through it - comments in a follow-up post.

Thursday, March 19, 2009

Rocker_Complexity_Project

Studio Rocker emphasizes exploring the possibilities of architecture by manipulating the idea of codes - how this entity is viewed in its original state and configuration, and how it opens onto multiple possibilities of expression. The code is not seen as an external factor that limits the performance of architecture; it is used as the mediator to transform the basis of the discipline by being materialized or executed. It moves away from our traditional view of codes as restrictions, transcending toward a variety of possible performances and results (such as structure and surface) that are controlled and originated by a set of rules and organizations that determines the after-effect of this configuration. I was personally interested in the idea of patterns and repetition that this computational process provides: one original state of configuration transcending to a more complex and recoded effect given by repetition. Similar to Neumann's UCC (1940s), which reveals the idea of copy and construction that gave rise to unstable patterns, in Rocker's Recursions (2004) it becomes evident that the automaton is revealed in the understanding of rules, the repetition of these operations, and the effect caused by relating one to the other, contributing to a new generation. It is interesting to see how the author treats these lines as generations, because they behave exactly like the idea of generations: an organization that is transformed by the other. The result: a variety of patterns. It ends up as a modification of codes and rules that stimulates a new understanding of architecture through computational mediums. The digital software breaks the boundary of the screen and provides not only a theoretical understanding of its function but also the multiple physical possibilities that it can produce: the architecture.
Deleuze's complexity relates two different kinds of systems or spaces that are not completely independent from each other: the smooth and the striated. The relationship is not considered restricted or controlled; it does not reveal an evident origin or a clear sequence of its development. Wolfram's complexity is more choreographed, and its development through time is controlled by a rule set that changes while maintaining a reference to its previous state. It becomes more of an addition or repetition of the original rule set that transcends toward a more complicated system, different from Deleuze's complexity, which is less systematic. I agree with most of the comments that these two conceptions (Deleuze's and Wolfram's) are similar to each other in creating a constant relationship with changing events (rule sets or spaces); in addition, they are both progressive and developmental. Apart from one being less constrained and manipulated than the other, they evolve toward a different use of space. They both generate a complexity that emerges from their common and constant relationships through time to another dimension that carries information from one stage to the other - one that becomes edited or altered independently of their consistency and origin.
The image: Zaha Hadid. It is a digital study of the Thames Gateway as an urban field; the project would be located in London, United Kingdom. This specific project reveals a similar discussion by having a rule set as an origin that is then developed, an alteration of that development leading to multiple possibilities and patterns of structure. They investigate four main building typologies throughout the urban area, leading to a series of evolutions of these standard typologies that are placed in the site and experimented with. The fusion of these typologies creates new possible structures.
To see the animation: http://www.youtube.com/watch?v=IksIyui84wE#

Tuesday, March 17, 2009

Rocker/ Complexity / Image

Studio Rocker uses cellular automata as a tool.  It generates a code, and they dictate some form of representation upon it.  For me, Melissa says it well when she defines the designer's role as merely a chooser of representation.
---

When Pawel talks about Deleuze's complexity as being relational,  and Wolfram's as being sequential, he got me thinking about what that means in terms of time.  Matt talked about this too, saying that both systems relate to time.  To me it initially seems as if Wolfram's complexity is that of a progressively increasing nature, and Deleuze's complexity already completely exists - it is just waiting to be discovered.  But then that gets me thinking about the setup of both theories.  

Wolfram's complexity emerges from simple rules - and so does Deleuze's.  [smooth = this, striated = this, they interact thusly].  For Deleuze, the complexity emerges from how the two ideas interact, and how they relate.

Although Wolfram presents his diagrams of cellular automata in a linear and progressive format - what would happen if we related them differently?  We talked in class about the Rocker studio having issues with modes of representation.  What if these rules related to each other in a different way - radially, or as numbers, to give random examples?  The type of representation and relation determines in part the level of complexity.  (If you only allowed the rules to progress two steps on a three-square-wide grid, the diagram would be a lot less complex than an 18-step, 50-square-wide grid.)

So it seems that both Wolfram and Deleuze present complexity in terms of relational and representational means.  Starting with a simple definition (rules for squares, or two ideologically different spaces), complexity develops in relating the two spaces, or in relating how the rules operate in space.  The rules in and of themselves have no complexity, just as the definition of a smooth or striated space has inherently no complexity.
---

The image is of the Hydrogen House in Austria by Michael McInturf, Greg Lynn, and Martin Treberspurg.

Tuesday, March 10, 2009

studio rocker experiments with cellular automata to create three dimensional diagrams, which is just another way of expressing the code. there is no more complexity in the architectural solution than there was in the code – it is exactly the same information in a different form. she acknowledges this deficiency though, when saying 'any code's expression is thus always just one of an infinite set of possible realisations'. the role of the designer, for her studio, is to simply decide on the method of representation of the code.

wolfram's idea of complexity is easier to define than that of deleuze. for wolfram, complexity arises from a simple code, repeated many times. in this way, while the local scale is simple, the same code at the global scale becomes complex. deleuze's smooth and striated models are inherently complex, and related to each other in many more ways than just local/global scale.

interior of the interactive water pavilion by NOX

Complexity, cellular automata


Rocker writes that the "extraction of algorithmic process is an act of high level abstraction", an act producing visual complexity (2d or 3d diagrams). The unpredictable and the unknown generated through such an algorithmic process override experience and perception, two very distinct categories. In this sense cellular automata are a visual form of simple messages and language abstracted into complex behavior and patterns - though within the medium only.

I suppose a difference between Wolfram and Deleuze would be the method through which spatial complexity can be identified. Wolfram, for instance, uses a procedural sequence, beginning with a few simple rule sets, so the procedural motion is more like a thrust forward, toward the expansion of visual complexity; Deleuze, on the other hand, seems to draw a line between two distinct spaces and jump back and forth to form an inherent relation between them. Both operate in a dynamic environment.

UNStudio Project of Master plan & Train Station,

Bologna, Italy, 2007


Deleuze/Wolfram Complexity, Rocker Essay and Digital Architect Image

Wolfram’s reading indicates that complexity can be based on a simple rule set. By following the defined rules, the outcome can be visually complex. In this system, complexity is based on a sequence of ‘if then’ statements. However, Deleuze’s theory suggests that there is an inherent complexity constructed of two different yet intertwined entities: the smooth and the striated. There is no if then statement with these, because the smooth and the striated act in relation to each other, while at the same time remaining their own entities. One thing that both of the readings have in common is that the systems exist within time and they are not static. In Wolfram’s text, time is displayed in the process of analyzing the previous steps based on the logic of the system and then implementing them to create the outcome. In the Deleuze reading, time is a property that is present in the smooth, because intensities and forces cannot exist without the element of time.

Rocker's studio uses cellular automata as a generative tool for architecture. However, her use of cellular automata places emphasis on the output diagram as an architectural form. In the last class, we discussed that Rocker's studio used the algorithm as a way of creating the diagrams, but that diagrams were only one way of representing the computation. I was wondering how computation could be represented if its outcomes vary. Is the problem with Rocker's approach that she is only looking at one outcome and not a variety of outcomes?

The following is an image of a model designed by Greg Lynn for the Kleiburg Block.

Monday, March 9, 2009

Tuesday, March 3, 2009

Relation Between Categories and Prior Analytics

Aristotle's Categories consists of methods of describing (categorizing) human perception and understanding. Topics include substance, quantity, relativity, quality, action/affection, opposition, contradiction, priority, simultaneity, movement, and possession. Predicate logic seems to exist in the Categories, specifically in the substance category, to describe the relationship between the individual, man, and animal. It struck me as being similar to the syllogism discussed in last week's class: humans are mortal, Socrates is a human, therefore Socrates is mortal. Prior Analytics builds upon these ideas, developing further modes of logic and reasoning as well as methods for establishing and refuting propositions and investigating problems.
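As a small aside (a rendering added for clarity, not part of the reading), the syllogism can be written in modern predicate-logic notation:

\[
\forall x\,\big(\mathrm{Human}(x) \rightarrow \mathrm{Mortal}(x)\big),\quad \mathrm{Human}(\mathrm{Socrates}) \;\vdash\; \mathrm{Mortal}(\mathrm{Socrates})
\]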

Thursday, February 19, 2009

2009-02-17 On Board

[board photos]

A little blurry, so I just turned them into black and white.






Wednesday, February 11, 2009

smooth & striated - vs - the fold

I pulled the same conclusions as pawel from the smooth/striated text. deleuze describes the attributes of both spatial qualities, but argues that they are able to shift back and forth - striated to smooth and vice versa. so, one quality could not exist without the other, since they are definable by what they do and don't have in common.

the fold text was a little tougher to get through - perhaps it's easier if one is already familiar with leibniz's work? deleuze explains the fold through the idea of a finite body being transformed through an infinite number of folds. the labyrinth he references is therefore not a system of lines, but one entity that is now able to create space. the process of folding is why organisms exist. it creates dimensionality (not in the metric sense, though). so, there is never a scenario of just one fold in matter. there are always multiple folds within folds.

this idea seems impossible to represent architecturally, as eisenman attempted. if one of the essential concepts is this immeasurable folding, there is no way to include so much detail in a built solution. and, the way that I understand the fold is that it isn't a quantifiable thing... it is a force acting on matter. seems like it's not enough to create only a structural iteration of the idea. does that mean that no designer will ever be able to capture it? there are a whole lot of them trying, at the moment.

how does the baroque house fit into all of this? I'm having trouble making that connection.

Tuesday, February 10, 2009

models

Deleuze sets up five types of models to identify two different kinds of spaces occurring in various conditions through the models he describes; however distinct or separate those two spaces are - one being the smooth and the other the striated - they always run into some kind of specific relationship, some kind of non-measurable condition. For one, the smooth and the striated are two non-identical conditions but are fully dependent on each other's evolution; they constantly transform into one another or, as the author calls it, intertwine, and however different the intertwining is, the relationship between elements within each model is a dynamic one. I think he is saying that the smooth space needs the striated in order to become one or the other. It's not necessarily a matter of conflicting or resolving conditions; rather, these conditions will always exist, and there is a play in the exchange of energy - no means of simple invention or mathematical solution, but means of unpredictability. This is how the organic has evolved from scattered nomadic cave life to agglomerated self-organized/self-sustained cities, and it is rarely/never one or the other, since the dividing line between the two spaces is very abstract.

Jan 27 blog: Contrasts Between Topology and Geometry

Euclid’s three categories of geometry are definitions, postulates and common notions. Definitions serve to describe the most basic geometric elements and the figures that can be constructed from them. The postulates describe specific relationships that exist among geometric figures, and the Common Notions are basic statements of equality.

Euclid seems to primarily concern himself with space that can be fully rationalized and physically measured. His notion of space is somewhat limited in that it only allows for the creation of space in two dimensions. Space is conceived of, and described by the vocabulary set forth in the definitions, postulates and common notions. He does not establish a means of describing space that exists outside his language of points, lines, planes and figures.

Barr defines geometry as the study of mathematical space. He defines topology as a type of geometry but states that it also encompasses other fields. The two differ in that geometric spaces must, by definition, be visually expressive. This is not necessary for a topological space. Topology provides the means of expression for figures that would, in geometry, be otherwise indescribable. It deals with rules of continuity rather than rules of form.

It becomes easy to see the differences between geometric and topological spaces when looking at the example of the geometric and topological pentagon (pg. 12, fig. 14). A pentagon is defined geometrically as a polygon having five sides and five interior angles. However, this says nothing of the number of faces, vertices, and edges. The topological pentagon is not a geometric pentagon because it does not have five interior angles, but it can still be classified as a pentagon because it retains the same number of faces, vertices, and edges.

Eisenman rejects the traditional ideas of Cartesian geometry and instead favors the idea of the fold rather than the point. He is interested in the way that the fold can change and manipulate the existing relationships of the horizontal and vertical, and of figure and ground. He favors the ideas of Deleuze and the objectile, which imply continual variation through the fold. He is interested in its effect on the object/event relationship.

He is also interested in the ways that the fold can be applied to Thom's catastrophe theory. He explains that in catastrophe theory, a grain of sand that appears to cause a landslide is really not the cause at all; rather, the cause exists within the conditions of the entire structure. The fold in relation to architecture is similar in that it can serve as the unseen force that explains abrupt changes in form as well as in urban conditions. He sees enormous potential in the fold as a way to reinterpret and reframe what exists as well as to connect the old and the new. He sees it both as a potential formal device and as a way to transform architecture and urbanism from static objects into meaningful events.

Wednesday, February 4, 2009

Tuesday, February 3, 2009

Summary Session 2 Conflicts

Vitruvius, Boulee, Durand, Eisenman
Deleuze, Rene Thom, Leibniz

Problems of models of meaning
Conflicts between models of meaning.

We covered a lot today, but we'll continue to return many times to these examples of conflicts. Remember that architecture never ceases to make ambitious claims about its use of mathematics and geometry. And our first example points to a conflict between ideality and instrumentality in Vitruvius.

It's what I would call an internal conflict, since the presumed idealization of mathematical and geometrical concepts is followed at the end of the chapter with a declaration of a reparative instrumental geometry. Symmetry is of course one of the principles.

It is also a principle in Boullee and Durand. Their drawings point to rather different and, I would say, conflictual notions of architecture's use of geometry. For Boullee, geometry and the example of the sphere embody Nature's and Man's principles in perfect harmony. Geometry is both thing and symbol. In Durand, it is quite instrumental, and if there is anything significant about symmetry it is because it's an economic principle. In Durand's case, furthermore, we have a "method" that for the first time is analytical, probing, and generative.

First there is a grammatical spreadsheet of possibilities of primitive combinations. In subsequent illustrations of the Precis these develop into various syntactical formations. They become propositions about buildings.

I would say JNL Durand is already protocomputational.

We also looked at mathematical systems and certain differences, such as those between geometry and topology, and between those and the Cartesian coordinate system, calculus, and catastrophe theory.

We looked at Eisenman's conflict with Euclidean geometry and Cartesian space and his introduction of the fold, Leibniz, catastrophe theory, and R. Thom, via Deleuze and Greg Lynn. Here one of the things at stake is the introduction of an ontology of events rather than an ontology of discrete forms.

Catastrophe theory combines calculus and the continuous variation/rates of change with topology. Its interest is to quantify qualitative states of change which calculus can't do on its own.

Ok, so this now becomes a new model but does so by showing the insufficiency of previous (Modernist) models. It is a kind of fifactic conflict. For the next blog please just read the two essays by Deleuze and identify how he uses models of meaning.

I want us to thoroughly distinguish between uses of computation for design and computation as a generative system.

In every case we see historically, we have various modifications in geometry being applied to geometry and the transformation of design. With the algorithm and computation proper we've hit a limit, since we are now no longer dealing with geometry and in fact, I would say, possibly not even mathematics.

One last conflict. Eisenman, like many others during the 90s, takes the figure of the catastrophe fold and literally transposes it to the Cartesian grid. Well, what's wrong with that? A lot. But for one, and primarily, the image of the catastrophe fold is a diagram - it's not a thing. Eisenman makes it a thing. A thingy! Punk.
Ok, hope this helps.
Peace out
P

Design Office for Research and Architecture
68 Jay Street
Brooklyn, NY 11201
USA
646-575-2287
petermacapia@labdora.com
http://labdora.com/
http://atlas.labdora.com/

Tuesday, January 27, 2009

Contrasts between topology and geometry – preliminary investigations

Euclid's definition of space relies on the construction of points, lines, and planes. These elements, when used in tandem, produce objects that are two-dimensionally grounded. By dividing his theories into definitions, postulates, and common notions, he is able to categorically define the objects constructed, explain their self-evident truths as related to mathematics, and discuss the self-evident truths that are not specific to mathematics. By using Euclid's definitions, postulates, and common notions, one is able to create a flat object that is in its nature concerned with its form.

One of the major distinctions Barr addresses when defining space is that topology isn't focused on a form or shape so much as on how it is put together. In Cartesian geometry the focus is on the aesthetic appeal of the object, which is defined by a multi-dimensional grid system. This system was passed down through the generations like a golden chalice. However, Euler's Law radically changed the world's perception of space, placing emphasis on continuity as opposed to object appeal. Barr notes that "a topologist is interested in those properties of a thing, that while they are in a sense geometrical, are the most permanent- the ones that will survive distortion and stretching."

The specific properties of a thing that will survive distortion and stretching are its faces, edges, and vertices. Through Euler's Law, topological invariants can be used to bend geometric polyhedra from an aesthetically driven form into a continuous, self-informed object. Through a series of scripted rules, topological geometry is capable of producing outcomes unimagined by Cartesian geometry. Space is no longer thought of as an inside/outside if/then statement; instead, it is defined by faces, edges, and vertices.
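For reference - an aside, not part of Barr's text - the invariant behind "Euler's Law" is the polyhedron formula relating vertices, edges, and faces, a quantity that survives stretching and distortion so long as the surface is not torn:

\[
V - E + F = 2, \qquad \text{e.g. for a cube: } 8 - 12 + 6 = 2 .
\]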

Peter Eisenman takes it upon himself to outright reject Euclidean and Cartesian geometry for the likes of topology and catastrophe theory. Through the research and analysis of Deleuze and Leibniz, Eisenman has equipped himself with an arsenal of weaponry that allows him to reject previous Euclidean and Cartesian geometry in favor of a more 'event'-based form. Drawing on Deleuze's idea of continuous variation through the fold, Eisenman proposes using these philosophical ideas as architectural matter. The concept of the fold allows for mediation between previous figure/ground strategies. The fold is neither figure nor ground, yet it is capable of reconstituting both. Eisenman felt that the fold would allow for "an opportunity to reassess the entire idea of a static urbanism that deals with objects rather than events."

Monday, January 26, 2009

On 2009_01_20




algorithmus conflictus @ pratt institute

use this blog to post responses to the readings, follow up on class discussions, or introduce new topics/questions.