It’s been a year of big changes, and my Substack “Life as a Disaster” took on a whole new course, basically becoming a chronicle of my research and insights into A.I.
It was a natural segue for me . . . ever since the debut of ChatGPT, I’ve understood A.I. as potentially disastrous for the architecture profession. Disasters usually aren’t unpredictable. Quite the opposite – they have a source code. And once you recognize the signs of a potential disaster, they translate easily into contexts that we don’t usually associate with the term ‘disaster.’ That’s what ‘Life as a Disaster’ was supposed to be about, so I guess it didn’t stray too far from its mission after all.
What follows is a rollup of my 2023 research and writing, with some predictions about what might follow in 2024. A friend recently told me that futurism isn’t about predicting the future – it’s about being able to navigate to slightly better decisions in the present.
In that spirit, here are some of the ‘predictions’ by which I’ll be navigating 2024:
For all the talk about how A.I. is going to change how we design, there’s comparatively little talk about how it’s going to change what we design. In the ’60s, architects designed social housing. In the ’80s, it was corporate towers and shopping malls. Changes in the zeitgeist provoke changes in program. Here are a few of the things I’d expect:
Designers have made huge strides in finding ways to make buildings more environmentally responsible, and lower energy costs. Powerful advances in software have helped. We should expect that trend to continue, as designers find more and more ways to optimize energy use. However, designers will run into an asymptote: there’s a certain amount of energy use and carbon cost embedded in the supply chain, and in the very act of designing and building a building. As the cost of carbon, water and energy continues to rise, and our climate crisis becomes more and more dire, the attractiveness of adaptive reuse will expand. It’s fun to design and build new buildings – for clients, for designers, and for builders. But we’re evolving – we’re starting to recognize that going forward, we’re going to have to account for the environmental costs associated with our work, either through carbon taxes, new environmental legislation, or other mechanisms. This will fundamentally change the value differential between ‘make the new thing’ and ‘refurbish the thing I already got.’ Clients will pivot first, and designers and builders will start to follow.
We’re starting to see the emergence of ‘office to residential’ projects in some cities. This became an economic imperative as cities hollowed out during the pandemic, and caused the crash in commercial real estate that we’re now living through. Developers won’t forget that lesson easily. Expect to see an interest in dynamic use changes as a design consideration. Nothing too dramatic – you’re not going to design a hospital with an expectation that it could be repurposed as a movie studio. But we’ll start to see an interest in ‘how do I design this thing so that if the original, intended purpose doesn’t work out, it can be something else.’ In other words, how do I make the building responsive to changing market and climatic conditions that no one can foresee?
Until now, a ‘Smart’ building usually just meant an algorithmically based environmental system – one that could lower the heating on particularly sunny days, and so forth. Some of these systems get damn impressive. It hasn’t really gone far beyond that, though, because the programming of these systems is technically complicated and expensive. It only really makes sense to develop them for defined use cases with a realizable ROI. That will start to change, when Natural Language Processing (NLP) meets the Internet of Things (IOT).
The IOT also suffered some early teething problems, as it was a technology in search of a problem. Developing the technology for a refrigerator to talk to a toaster is relatively easy, as it turns out, compared to convincing anyone that a refrigerator would need to talk to a toaster. With NLP, the utility of a lot of IOT technology is going to be given a second life. A refrigerator doesn’t need to talk to a toaster – a human would know that. An NLP intelligence creates possibilities that weren’t there before. I could say to a Smart Home Automation System ‘I would like to make chicken casserole for dinner’ and it could survey the contents of the fridge and respond ‘I’m sorry, but you’re out of sour cream, would you like me to order you some for next time?’ Or it could respond ‘Sounds delicious – I checked, and you have all the ingredients, would you like me to preheat the oven for you?’
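Under the hood, the exchange imagined above is just a language interface sitting on top of an inventory the sensors can report. Here’s a minimal sketch in Python – every name, recipe, and data structure is invented for illustration, and a real system would use an LLM for the parsing and the reply:

```python
# Hypothetical sketch of an NLP-driven smart-home exchange. The 'fridge'
# is just an IOT inventory the system can query; the language layer maps
# a free-form request to a recipe and checks it against that inventory.

RECIPES = {
    "chicken casserole": {"chicken", "sour cream", "onion", "cheese"},
}

FRIDGE_INVENTORY = {"chicken", "onion", "cheese"}  # what the sensors report

def respond(request: str) -> str:
    # A real system would use an LLM to map speech to a recipe;
    # here we just match on the dish name.
    for dish, ingredients in RECIPES.items():
        if dish in request.lower():
            missing = ingredients - FRIDGE_INVENTORY
            if missing:
                items = ", ".join(sorted(missing))
                return f"I'm sorry, but you're out of {items}. Order some for next time?"
            return "Sounds delicious! You have all the ingredients. Preheat the oven?"
    return "I don't know that recipe yet."

print(respond("I would like to make chicken casserole for dinner"))
```

The point of the sketch is that the ‘fridge talks to toaster’ problem disappears once a conversational layer sits above the devices: the appliances only need to report state, and the language model supplies the intent.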
Expect to see the conversation start to build around these possibilities. While some tech-heads will be eager to jump on it as the next new thing, there likely won’t be any widespread adoption in 2024. However, looking ahead 10 years, it’s a near certainty that we’ll have embedded A.I. in our homes, schools and offices. What will it do? Architects should start thinking about that now.
Our design mediums haven’t changed much since the digital revolution – we arrange bits in a computer to simulate lines on a page, which then directs the arrangement of steel, glass and wood on the jobsite. That’s going to start to change when we start arranging bits to arrange other bits. The last 12 months have seen a quantum leap in the quality of virtual technology, and the pandemic already predisposed us to working remotely. Those trends will accelerate, and start to bring virtual designing into the wheelhouse of architectural design. Look for:
The best way to sell a client your design services is to introduce them to your past work. After the work itself, architects spend the most time on their portfolios, which are just a TL;DR of that work. When the internet took off, many architects abandoned the physical portfolio in favor of a website. Expect the same to happen here. Why show clients flat representations of your past work when you can invite them into a fully immersive experience of all your past projects? The metaverse has had some serious missteps. My own personal theory is that after the pandemic, the last place anyone wanted to be was a metaverse. We wanted to be outside. And with people. We probably always will. But the metaverse is inevitable, in some respects. We’ll find that some places, spaces, interactions, and events are just much better to have in the metaverse than in the real world.
When we hear about VR in architecture, it’s usually in the context of a presentation – it’s a new way for us to show clients the design we’re doing. But clients are going to be using VR, too. How does that get assimilated into the design brief? In the near future, if you’re designing a flagship store for a clothing line, it’s likely that they will want their customers in every part of the world to be able to have a VR experience of the store remotely. They’ll also want their in-store shoppers to have an AR experience as they’re browsing the physical aisles. And all of this would need to be coordinated with the clothing line’s metaversal store. Architects will need to align those design efforts, so that the design of all four experiences is under one cohesive vision.
As immortal as the Architect/Contractor/Owner dynamic might seem, it too tends to evolve in response to changing sociological, historical and economic factors. The twin crises of A.I. and Climate Change are going to force some much-needed adaptations into our current models. They will be dynamic changes, depending on which professions adopt A.I. first, and how fast.
The ‘Architect Over-details / Contractor Over-RFIs’ arms race that’s been going on for decades will heat up with the widespread adoption of A.I. The complexity and length of drawing sets has exploded in recent decades. Which is a huge bummer, because BIM was supposed to cut us loose from all that. We all know why. Architects overdraw things, because they know that the Contractor is going to go looking for errors, omissions, typos, smudges, coffee stains – anything that they can claim might result in extra costs for them, and therefore extract more fees out of the client. The architect, for her part, must raise fees in order to cover the costs of all that extra, largely unnecessary detailing.
I have a lot of hope that in the long-term, A.I. is going to help us achieve truly integrated project delivery. In the immediate future, however, it will likely throw gasoline on the fire. Both Architects and Contractors are going to embrace A.I.-powered clash detection for BIM models, and A.I.-powered submittal review. This will escalate both the Architect’s need to overdraw things, and the Contractor’s need to over-inspect things.
Architects should begin talking about this with their clients. The Architect/Contractor friction is an incredibly counterproductive and costly defect of the AEC industry. A.I. might one day solve it, but could also make things worse in the near term.
I know I’ll get slammed for saying this, but . . . I think contractors are going to embrace A.I. a lot faster than architects. The financial benefits for architects are more long term, while those for contractors are more immediate. Getting a piece of AI-driven submittal review software now, on one project, might find that error that saves the contractor $100,000. It can pay for itself overnight. This will, of course, require that architects adopt A.I. quickly, because of the aforementioned A.I. Arms Race.
Designers and Builders have been trying to crack the code on offsite and modular construction for 100 years, to no avail. There are probably a lot of reasons for that, and if you’re interested in a deep dive, I recommend HUD’s 2023 Offsite Construction for Housing: Research Roadmap, done in partnership with MIT.
Several factors, including AI, give me a lot of optimism that we might be seeing a resurgence in the possibilities of offsite and modular:
1 The Housing Crisis: We (in the U.S.) need housing, and we need it bad. Decades of poor housing policy, and the intrusion of institutional investors into the homebuying market have caused a critical shortage of housing in the U.S. Housing hasn’t quite ranked as a leading issue in the 2024 Presidential Campaign, but I expect that it will.
2 Environmental Crisis: Offsite and Modular doesn’t intrinsically have a lower environmental impact than stick-built homes, but it can. Intrinsically, offsite and modular provide more granular control over the environmental inputs and outputs of building. Coupled with new supply chain strategies, there’s good reason to believe that Offsite and Modular can effectively bring down the environmental footprint of building all the new housing that we’re going to have to build.
3 Supply Chain Evolution: The pandemic hopefully woke us up to the basic idiocy of having something that you really need made in only one place, far from wherever you are. Through the CHIPS Act, the US government has already started taking steps to modify our supply chains so that we don’t run out of really critical things (like chips, naturally). We haven’t seen this show up in our construction infrastructure in a widespread way, but companies like Cuby are challenging the existing orthodoxy around both the conventional construction model and the offsite model. The principal expense of conventional construction is human labor. The principal challenge of offsite construction is in making the thing (e.g. the house) in one, centralized, fixed place, and having to transport and install it in some other place. That ‘other’ place often changes with market demand, meaning the costs change, too, as does the price that you can charge for the product. Moveable, popup house manufacturing plants can thread the needle between both models and potentially extract the best value of both.
Climate change, rather than A.I., is going to be the big driver here – changing both how our clients behave, and who our clients actually are. A.I. will have its own effects, too. With the combination of both, we should start to see some significant changes across the client landscape. My top three would be:
Clients will come to insist on the use of A.I. Part of Architecture’s eventual full adoption of BIM came at the behest of large, institutional and monied clients. They set the expectation – ‘we insist that you use this, because it’s going to result in lower costs for us.’ Expect the adoption of A.I. to go the same way. I have no way of knowing whether that has already found its way into project briefs, but it will. Expect large corporate clients and forward-thinking commercial clients to lead the way, beginning in 2024.
There’s an old saw that goes ‘inventing the ship necessarily invented the shipwreck, inventing the plane invented the plane crash . . .’ and so forth. So it is with A.I. In order to reap the enormous possibilities that A.I. promises, we have to enable it to see, hear, touch, and feel everything. The more that it does, the greater its potential capability. That necessarily creates a surveillance state. At some level, we already have one, given that each of us carries around a GPS tracker/microphone in our pockets every day. But we can put away our phones. We can turn off our computers. Once A.I. is fully baked into our buildings, and the built environment, there may be no escape. Sidewalk Labs’ failed Quayside Project in Toronto gave planners, designers, technologists and the general public a crash course in how & why we’re not ready to be fully plugged into the matrix. The project had several problems, but the paramount one always seemed to be concerns over privacy and data collection. How can a Smart City operate if it’s not collecting data? And if you collect all that data, you’ve created yourself a surveillance state, however benign or malicious.
At some level, an architect is a hand-holder, navigating their clients through the complex process of designing and building buildings. This will become part of the conversation – some clients are going to want all the data, and some will want none, and each choice will come with tradeoffs.
Major Insurers have already pulled out of Florida and California, and may eventually pull out of many more states (or all states) to avoid ‘uninsurable’ climate risks. This will have immediate implications for how & where we live. Most people can’t afford a home without a mortgage, and most banks won’t give you a mortgage if you don’t have insurance. That has several effects:
People who are ‘house rich’ – meaning, they have a house in a desirable location that’s worth a lot of money – may see the value of their properties plummet, because the pool of available buyers has suddenly shrunk. Reduce the number of people who can buy your home by 90%, and the price is going to come down, even if the remaining 10% are wealthy enough to buy it at the old price.
So-called ‘climate havens’ may see a boom. Not that there’s any place that’s entirely safe from disaster. But the places within the U.S. that are perceived as safer from the effects of climate change may attract many more buyers, if only because those are the places where you can still get insurance, and by extension, a mortgage.
Relatedly, places that are particularly vulnerable to climate change will start to see a population decline, even before any catastrophic effects are seen, as people start to gradually favor safer (more insurable) locations.
Frankenstein (and every good horror/sci-fi novel) works because it starts you off thinking that technology is the bad guy, and by the end, you start wondering whether it’s actually humans that are causing all the trouble. Similarly, in 2024, we’re going to ask less ‘how is A.I. going to affect what designers do?’ and start thinking more about ‘how can designers affect what A.I. does?’ Certain key events, which seem likely in 2024, but are inevitable regardless, are going to challenge our collective understanding about what design is, and how it should be practiced, and taught.
Bold Prediction: in 2024, at some point, somewhere, it will be revealed that an A.I. has won a significant design competition. It probably has already. We saw something analogous earlier in 2023 when a photographer won the Creative Category of the prestigious Sony World Photography Awards with images generated by A.I. He refused the prize, of course. He was just trying to make a point. Much of architects’ resistance to the intrusion of A.I. has centered around a disbelief that A.I. can do ‘design.’ But if an A.I. wins a major design award, what then?
Every Design Professor I know is struggling with how to incorporate A.I. into the classroom. Inevitably, students use image generating platforms like DALL-E and Midjourney to generate the kind of images that it took my generation an entire all-nighter to create. And why wouldn’t they? All-nighters suck.
As an educator, I believe this is both inevitable and ominous. It’s taking something out of design pedagogy, but it’s also invariably putting something back in its place. It’s too early to tell which is which, and whether the change represents a net positive or a net negative.
I think this debate will come to a head in 2024 at some point when a student faces potential disciplinary action for the use of A.I. At that point, a critical debate will ensue, with merit on both sides:
• On the one hand, students should be expected to comport with the requirements of the studio that they’re in, obviously.
• On the other hand, why should students be held to arbitrary rules, if it could be said that those rules have no pedagogical purpose? If a math professor said I had to do all of my math homework using an abacus, I would most definitely be using a calculator behind that professor’s back.
At this point, the lively debates around A.I. that are currently happening in architecture schools around the country will become something more than a debate: people’s reputations and careers will be on the line. The Architectural Academy will be forced to take a position, and not leave it up to the predilections of individual instructors.
If you haven’t checked out AECPlusTech’s database of emerging A.I. tools for AEC Professionals, you should. There are over 400 and counting. It is a startup free-for-all. There are A.I.s for doing construction documents, construction management, space planning, doing feasibility studies, and on and on. That’s too many.
We’ll see a consolidation in tools over the course of 2024, either through mergers, buyouts, or merely customers displaying a preference for tools with more varied capabilities. No one wants to maintain licenses for 400 different software platforms, especially if those platforms don’t easily speak with one another.
The market will demand consolidation as firms pick 2 or 3 winners out of the arena. And no, it won’t necessarily be Autodesk. Check out Sam Lubell’s piece Autodesk Has Ruled Architecture For Decades. These Startups Are Trying To Unseat It in Fast Company to learn more.
A.I., and automation in general, is going to rapidly speed up parts of the design process, and diminish the level of human involvement in some design phases. The math is simple: if you can automate away half of what an architect does in his/her day, you only need half the architects on your staff. Assuming you want to keep your staff, the only solution is to double your pipeline. Many architects will try to do this, leading to fierce competition. Some architects will respond to this competition by arbitraging away the productivity gains that they accrued through the implementation of A.I. under the sophisticated economic theory of ‘get the job now, figure out the accounting later.’
Architects don’t get paid for their time. They get paid for the value that that time produces. If technological developments allow me to produce designs of similar quality in half the time, I should get paid the same amount as I always did, because the value being created is the same.
However, we know that many architects don’t see it that way, and are happy to charge clients at the level of their own costs, or even beneath them. That will skew the market: bolstered by the proposals being offered by lowballer architects, clients will start to expect lower fees, arguing that the labor involved in the design is now less. That will force those architects who might otherwise try to defend their own value to lower their own fees, instigating a race to the bottom.
The solution to this isn’t clear. Architects have a nasty history of underselling the value of their work, and it appears that A.I. may aggravate this problem. When you start to see architects offering full Design, CD & CA packages for a few thousand dollars, the profession is going to have to act.
Any new transformative technology is going to elicit the whole range of human emotions. Typically, those emotions settle down as the technology becomes more and more integrated into daily life. No one shakes their fist at the Power Loom anymore for having replaced weavers. The process will continue with A.I., but will be complicated by the fact that A.I. is evolving faster than our ability to emotionally adapt to it. A few things that will change our feelings towards A.I.:
The man who coined the term ‘AI’ – John McCarthy – eventually became disenchanted with the term, lamenting that “As soon as it works, no one calls it A.I. anymore.” He called this ‘The A.I. Effect’ and he has been proven very, very right. There’s A.I. in your Netflix algorithm, and your Google searches, and in all sorts of things. But we don’t really think of those things as ‘A.I.’ in the same way that we now discuss ChatGPT as ‘A.I.’ As soon as it works, we assimilate it into our normal lives, and cease thinking about it as A.I.
For that reason, I think in 2024, A.I. will become simultaneously more and less visible. It will become more visible because of major breakthroughs that are bound to happen (like the GNoME project), and emergent threats (like election interference, cyberattacks, etc.). It will become less visible because it’ll be folded into the things we do every day. It was only a year between the debut of ChatGPT and the integration of similar NLP into the applications we use every day – like our browsers and MS Office.
I’ve written previously about the emergence of personalized, autonomous agents made possible by NLGAI. If you don’t know what an ‘agent’ is, you need to catch up in 2024. Basically, an agent utilizes the conversational power of an LLM to act (in some cases autonomously) on your behalf. As I predicted back in April, Microsoft was the first to roll out Microsoft 365 Copilot, which was kinda like an agent and had an immediate effect in the business world. However, OpenAI was the first to roll out commercially available agents in November, allowing even those with no coding experience to build and deploy agents.
Moving forward, we’ll start to see personalized agents cropping up as highly functional personal, intelligent, virtual assistants, helping with work, travel, groceries, correspondence, etc. The more personalized A.I. becomes, the more useful it will become, and the more we’ll accept it into our daily lives. We’ll succumb to McCarthy’s A.I. Effect, and personalized A.I. Agents will become a default expectation of everyday life.
‘Multiagents’ or ‘Agent Swarms’ are just customized agents, working in concert. Researcher Joon Park broke serious ground last spring when he released the paper Generative Agents: Interactive Simulacra of Human Behavior based on experiments he organized with a ‘village’ of twenty-five agents. I’m not really sure why it didn’t get more press than it did. The experiment began when a single bot was given a single inspiration: to throw a Valentine’s Day party. The agents subsequently discussed amongst themselves, and over two days, made new friends, invited each other, and coordinated the timing of the party so that they would all show up at the same time.
As customized, personalized agents grow in competence, it will make more and more sense to have them work with each other. Instead of asking an ‘Architect Agent’ to ideate on a thorny design problem, you might direct that agent to recruit a ‘Structural Engineer Agent’, a ‘Civil Engineer Agent’, a ‘Landscape Architect Agent’ and then facilitate a group discussion about what solutions seem viable.
The really wild part is that Agent Swarms would be infinitely scalable. It would be as easy to throw one ‘Engineer Agent’ at a problem as it would be to throw a thousand ‘Engineer Agents’ at the same problem, and there’s already evidence that GPTs working together reduce each other’s hallucinations, speed learning time, and improve the accuracy of solutions.
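Stripped of the LLM machinery, the ‘recruit specialists and facilitate a discussion’ pattern is just a coordinator fanning the same problem out to role-conditioned agents and pooling their answers. A toy sketch – the model call is a stub, and every name here is hypothetical:

```python
# Toy sketch of the agent-swarm pattern: a coordinator gives one problem
# to several role-conditioned agents and collects their proposals. In a
# real system, each agent would wrap an LLM call with a role-specific
# system prompt; here the 'model' is a placeholder function.

def llm_stub(role: str, problem: str) -> str:
    # Stand-in for an actual model call, conditioned on the role.
    return f"[{role}] perspective on: {problem}"

class Agent:
    def __init__(self, role: str):
        self.role = role

    def propose(self, problem: str) -> str:
        return llm_stub(self.role, problem)

def swarm_discuss(problem: str, roles: list[str]) -> list[str]:
    agents = [Agent(r) for r in roles]
    # Scaling is trivial: one engineer agent or a thousand is the same
    # loop, which is what makes swarms attractive.
    return [a.propose(problem) for a in agents]

proposals = swarm_discuss(
    "cantilever the reading room over the ravine",
    ["Architect", "Structural Engineer", "Civil Engineer", "Landscape Architect"],
)
for p in proposals:
    print(p)
```

A real swarm would add a second round in which the agents critique each other’s proposals, which is where the reported reductions in hallucination come from; the fan-out/pool structure stays the same.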
Relatedly, multimodal A.I. is going to take over the world. It seems inconceivable that ChatGPT only debuted a year ago. Its chief limitation seemed to be that it was constrained by language – it was a chatbot that required written text inputs and provided written text outputs. That’s over.
ChatGPT has already become multimodal, capable of analyzing written documents, spreadsheets, code, images, video, etc. We will come to expect this from our A.I. tools, to the point where having an A.I. that only operates in one medium will feel frustrating and archaic.
Of course we want an A.I. to help write our emails. But that A.I. should be able to do everything that we do when writing emails, like checking back through past correspondence to properly set the tone for the latest email, or judging, based on the calendar and the contact’s personality, whether an email is particularly urgent.
By the end of 2024, it will be the default expectation that any working A.I.s are capable of handling a wide range of media, and ‘thinking’ in multimodal ways, like humans do.
The conversation we’ll have about A.I. in 2024 will be way, way different than the conversation we had about A.I. in 2023. The conversation in 2023 was fueled by discovery, protest, resistance, enthusiasm, hype, and sometimes overhype. The conversation in 2024 will begin with acceptance that A.I. is not a new fad. It is not the latest software release. It is a fundamental change in the operating system of human civilization. With that beginning, our conversation will evolve and start to tackle a few big topics:
We’re only just getting to know the full environmental impact of generative A.I. Multiple tech giants have reported 20%–30% increases in water consumption in recent years as a result of generative A.I., and estimates from UC Riverside determined that for every 10 to 50 prompts in ChatGPT, 500 milliliters of water are consumed. They seemed as surprised as we were. That sucks – a big part of the reason to get excited about A.I. is that it might help us solve some of our intractable environmental problems!
If we’re anticipating that the use of A.I. is going to expand exponentially over the next few years, we have to expect its resource consumption to expand proportionally. There is no immediate, obvious solution to this problem. We can’t stop using A.I., and we can’t keep using it.
Designers have made huge strides in using software to lower the carbon footprint and energy consumption of the buildings they design – what if it is all offset by the insane energy demands of the A.I. software we’re using!?!
Recently, OpenAI has created a lot of buzz around the term ‘superalignment.’ Expect that conversation to expand in 2024. Superalignment refers to the technical, moral and philosophical challenge of aligning a superintelligence with our own human values and goals. Superintelligence ≠ AGI. AGI refers to A.I. systems that operate at a human level – they can perform any task that a human might do, at a human level. Superintelligence (or, ‘Artificial Superintelligence’) would be better than any human at all tasks. Theoretically, ASI systems would have emotions, beliefs, and desires of their own, in the way that humans do.
ASI represents an irreversible point in human history. Once ASI is achieved, there wouldn’t be any humans on the planet smart enough to outsmart the thing. So, we need to ensure that if such an intelligence were ever developed, that it was perfectly aligned with our own wants, dreams and plans for the future of humanity. We’ll only get one shot to get this right. Superalignment teams will be trying to make sure that we do. Perhaps no human project since the Manhattan Project has been as consequential for the future of humanity.
What is Synthetic Data, and why should you care?
Suppose you train an A.I. on every plane crash that has ever happened in an attempt to make an autopilot that will eliminate the possibility of air disasters. Noble, right? However, every flight is a potential crash, and the total number of plane crashes that have ever happened is an infinitesimally small fraction of all the flights that have ever happened. And all the flights that have ever happened are an infinitesimally small fraction of all the flights that could have happened.
Once an A.I. has assimilated and learned from every flight and crash that has ever happened, what then? How does it continue to improve itself? Easy. It just generates more flights, and more crashes (simulated, of course). It can then assimilate those too, and learn from them as well.
We encounter an obvious problem if the first generation of simulations created by A.I. are flawed. The second generation of A.I.-simulated flights would likely be worse. It’s easy to imagine one wrong assumption, or one hallucination, multiplying and magnifying at the speed of a machine.
This is the ‘Synthetic Data’ conversation in a nutshell. How can A.I.s generate high quality synthetic data that A.I.s can learn from? An A.I. pilot would be competent, if it learned from every flight and every crash that had ever happened. But it can only be better than human once its knowledge base exceeds that of humans. And there’s not really much point in adopting A.I.s to do this and that if they’re going to perform worse than humans.
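The compounding-error risk is easy to demonstrate with a toy statistical stand-in: fit a model to data, generate the next dataset from the fit, refit, and repeat. Here the ‘model’ is just a mean estimate with a small built-in bias, standing in for the one wrong assumption or hallucination described above – a deliberately simplified sketch, not a claim about how any real training pipeline works:

```python
import random

# Toy model of synthetic-data drift: each 'generation' fits a mean to
# its data, then generates the next dataset from that fit. A small
# systematic bias in the estimator (the stand-in for one flawed
# assumption) compounds generation after generation.

random.seed(0)
TRUE_MEAN, BIAS, N = 100.0, 1.0, 500

def fit_mean(data):
    # A flawed estimator: always off by BIAS.
    return sum(data) / len(data) + BIAS

def generate(mean, n):
    return [random.gauss(mean, 5.0) for _ in range(n)]

data = generate(TRUE_MEAN, N)          # generation 0: the 'real' flights
for gen in range(5):
    est = fit_mean(data)
    data = generate(est, N)            # train the next model on synthetic data
    print(f"generation {gen}: estimated mean = {est:.1f}")
```

In expectation, the estimate walks away from the true value by roughly BIAS per generation – a machine-speed version of one hallucination multiplying and magnifying itself.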
Reactions were mixed to Andrew Yang’s 2020 presidential campaign, which seemed initially motivated by a single issue: that rapidly advancing automation was going to displace millions of workers and we needed to get serious about it. At the time, we thought that was about blue collar workers, which probably explains why Yang’s message didn’t curry too much favor with political elites.
The tune has changed, and the consensus, led by scholars like the Susskinds, seems to be that white collar workers are now most at risk of being automated into unemployment. That changes the conversation.
Historically, revolutions aren’t started by poor people. They’re started by upper-middle class, and professional managerial class people. Poor people fight the revolutions, and often end up shedding the blood they demand. But the instigators come from a different social stratum. Lenin was a lawyer before he became a revolutionary, as were Gandhi, Robespierre, Castro and Nelson Mandela. Che Guevara and Sun Yat-sen were both doctors before they were called by the revolutionary spirit; Michael Collins was a civil servant and Leon Trotsky was a journalist.
When millions of poor people are unemployed, underemployed, disaffected, bored and miserable, the upper-middle class contemplate solutions. When they themselves are unemployed and miserable, they contemplate revolution. We probably won’t have such a revolution in 2024, but as the effects of automation are increasingly felt in law, medicine, architecture, engineering, research, etc., the conversation around UBI & UBS will move from ‘yeah, that’ll never work’ to ‘Well, maybe we should try it.’
Humanity has been dabbling in UBI experiments for decades – look for this research to be critically reexamined & reconsidered, even by those who have historically opposed the idea.
Figuring out every which way that A.I. is going to change the whole world isn’t really my beat. I leave that to the experts and try to focus on the world of design. That said, I think there are a few inevitabilities that could have big implications for the world, and for designers:
Changes in presidential administrations can dramatically shift federal funding priorities, and the GSA remains the single biggest client in the U.S. How any presidential election unfolds always has ramifications for designers, and this one is going to be a dumpster fire.

The speed at which deepfake technology has developed has taken just about everyone by surprise. We can expect that all sorts of malicious actors, both foreign and domestic, will try to influence the election via deepfakes, fake robocalls, bot campaigns, you name it. Even if none of those tactics are used at all, people will believe that they have been, and political operators will be keen to exploit those beliefs. ‘Deepfake’ will be 2024’s ‘Fake News’ – a colloquialism capable of dismissing evidence of any reality, no matter how overwhelming. Whenever I see a news clip of my candidate saying something I don’t agree with or think is wrong, I’ll content myself with the knowledge that he or she didn’t really say that – someone probably just made the clip with A.I. This could coax all of us into a sheltered, personal reality, unaffected by any ‘evidence’ coming from the ‘real’ world.
A protracted dispute over the election results, or a nonpeaceful transition of power, would have global effects. There’s no way to predict exactly what that would mean for the AEC professions, but probably none of it is good.
The U.S. will attempt to regulate AI, joining the fray along with Europe’s Artificial Intelligence Act, but it probably won’t matter much for designers. I don’t say that with any kind of joy. But I believe that the pace of technological advance is simply too fast for our legislative bodies to keep up. The most obvious legal challenge for designers is in copyright. Is it fair to authors, artists, designers and creators that some of these tech giants used their work to train their A.I. models, which can now reproduce those creators’ designs at will? No, of course not.
I believe that, under the right legal theory, designers could prevail in some kind of copyright suit. The New York Times is trying it now. But I don’t have much hope. By the time anyone prevails, the internet will be so glutted with A.I.-produced art, literature, products, and other designs that it will be impossible to untangle who borrowed what from whom. If you ask Midjourney, at this point, to generate a painting in the style of Van Gogh, it can. If you subsequently ask it to produce a painting in the style of that generated image, it can do that too. Repeat that process 1,000 times, and what do you have? I don’t know – I don’t have time to conduct such experiments. But that’s exactly what’s playing out, right now, across the world. That kind of dilution is going to make it nearly impossible to prevail in court.
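This dilution has a name in the research literature – ‘model collapse,’ where models trained on their own outputs gradually lose the richness of the original data. Here’s a toy sketch of the mechanism (my own illustration, with nothing to do with Midjourney’s actual internals): fit a simple statistical model to some data, sample a fresh dataset from the fit, and repeat. Each generation inherits a little sampling error, and over many generations the distribution quietly collapses.

```python
# Toy illustration of 'model collapse' -- NOT how any real image model works.
# We repeatedly fit a Gaussian to data, then replace the data with samples
# drawn from that fit. Sampling error compounds generation over generation,
# and the spread of the distribution (its 'stylistic variety') shrivels.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                # samples per 'generation'
data = rng.normal(0.0, 1.0, n)        # generation 0: the 'original' style
initial_std = data.std()

for _ in range(3000):
    mu, sigma = data.mean(), data.std()   # 'train' on the current data
    data = rng.normal(mu, sigma, n)       # 'generate' the next dataset

final_std = data.std()
print(f"spread after 3000 generations: {final_std:.2e} (started near {initial_std:.2f})")
```

The mean wanders only a little, but the spread decays toward zero: each fit slightly underestimates or overestimates the variance, and those errors compound multiplicatively. That’s the statistical version of ‘a copy of a copy of a copy.’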
Any legal energy on A.I. is probably going to be focused exactly where it’s focused now: on the protection of highly vulnerable groups, consumer privacy, critical infrastructure, etc. That’s probably a good thing. We don’t want some bad actor using an open-source A.I. model to throw an election, or disable a power grid, or something worse. I expect that for now, all the lawyers involved in A.I. are going to be focused on these kinds of problems. Hopefully they get around to designers’ issues one day, but I think that by then, it’ll unfortunately be too late.
The recent discovery by GNoME (another DeepMind project) of two million new materials apparently fast-forwarded the field of materials research by 800 years. As impressive as that is, we should get used to these kinds of discoveries as the ‘new normal.’ It’s eventually going to be an ordinary thing to hear about A.I.s making field-altering discoveries at a scale and pace humans just aren’t capable of.
What will those discoveries be? I have no idea. Maybe an A.I. finds a new form of cold fusion. Or identifies thousands of wrongly-convicted prisoners who can now be set free, like a sci-fi Innocence Project. Maybe it discovers previously undetected signals from advanced civilizations outside our galaxy. Who knows. The point is that companies like DeepMind are going to start using these powerful technologies to discover other things, besides just materials and Go strategies, and we’ll have to adjust to having our minds blown on a more regular basis. Look for several more ‘big’ discoveries in 2024, and for the pace of discovery to increase.