The creation of a sentient machine has captured the popular imagination for decades, as literature and film over the last 60-plus years attest: Isaac Asimov explored the social implications of Artificial Intelligence (AI) through his enduring Laws of Robotics (I, Robot, published 1950). AI drives the all-encompassing military apparatus of Skynet and its Terminators, as well as the cold, self-evident logic of HAL in 2001: A Space Odyssey. Star Trek explored more optimistic existential themes through Data and The Doctor, two programmed personalities striving to understand what it is to be human. Science fiction casts Artificial Intelligence as both humanity’s most important invention and a hubristic mistake leading to our extinction. These allegories hint at AI’s potential as both a creative and a destructive force: technological advances in AI will disrupt nearly every industry in the world, architecture included, and in doing so may make those industries unrecognizable from today.
With such a powerful catalyst in development, it’s no surprise that many articles have been written on the topic (including this one!). The term “Artificial Intelligence” has been used so often that it risks becoming a meaningless blanket term: from self-driving cars to the digital assistants on our phones, it is applied to a wide variety of computer-automated processes. These types of focused AI are simply series of algorithms that identify patterns based on a given set of starting parameters. While the input patterns are getting more complex (computers can now process speech as well as text, live video as well as photographs) and the algorithms are becoming increasingly sophisticated, these are still not true AI. A true artificial intelligence would improve its own code based on previous inputs, not unlike how we learn from past experience.
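The “focused AI” described above can be sketched in a few lines of code. This is a toy illustration, not any real system: the function names, the binary “feature patterns,” and the threshold are all invented for the example. The key point is that every parameter is hand-set by a programmer and never changes, no matter how many inputs the program sees.

```python
# A minimal sketch of "focused AI": a fixed-rule pattern matcher.
# All names and values here are illustrative, not from a real system.

def similarity(candidate, target):
    """Fraction of positions where two equal-length feature lists agree."""
    matches = sum(1 for c, t in zip(candidate, target) if c == t)
    return matches / len(target)

def flag_for_review(candidate, target, threshold=0.75):
    """Flag a candidate whose pattern is close enough to the hand-picked
    target. Note the hardcoded threshold: the program works within a fixed
    range and a human must still confirm every positive result."""
    return similarity(candidate, target) >= threshold

# Toy binary "feature patterns" standing in for, say, pixel intensities.
target = [1, 1, 0, 1, 0, 0, 1, 1]
print(flag_for_review([1, 1, 0, 1, 0, 1, 1, 1], target))  # close match -> True
print(flag_for_review([0, 0, 1, 0, 1, 1, 0, 0], target))  # opposite -> False
```

However sophisticated the similarity measure becomes, the program’s parameters stay frozen: it can only ever find patterns it was explicitly configured to find.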
For example, a medical computer that has been shown a sample of MRI scans identifying a target ailment can then sift through a database of scans and flag results of similar shape and size. It works within a fixed range, scoring how similar each pattern is to the original target, but it still requires a human eye to confirm a positive result. An AI version of this program would take those same scans and, through millions of iterations (per second!), teach itself the various discrepancies between healthy brain tissue and tumors, cysts, aneurysms, and so on. Through this “machine learning,” the computer would no longer be confined to its given parameters but could use that ever-expanding data set to identify any possible issue in any scan. It could then triage accordingly and forward the applicable information to wherever, and whomever, it is needed. Results would be instant, and treatment could begin immediately. On top of that, the computer could adjust the hospital’s environment (heat, light, humidity) based on patients’ biometrics as they go through the process.
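The contrast with the fixed-rule approach can be sketched with one of the simplest learning algorithms, a perceptron. To be clear, this is a hedged toy, not a medical tool: the “scan features” and labels are invented numbers, and real systems use far richer models. What it does show is the defining trait of machine learning described above: the program adjusts its own parameters from each example it gets wrong, rather than running on rules a programmer froze in advance.

```python
# A toy perceptron: the program corrects its own parameters from examples.
# The data below is invented for illustration, not real MRI values.

def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Learn one weight per feature plus a bias from (features, label) pairs."""
    n = len(examples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            score = sum(w * xi for w, xi in zip(weights, x)) + bias
            pred = 1 if score > 0 else 0
            error = y - pred  # the model updates itself from each mistake
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    """Classify a new example with the learned parameters."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Toy "scan features": [lesion size, contrast]; label 1 = abnormal.
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w, b = train_perceptron(X, y)
print(predict(w, b, [0.85, 0.9]))  # flags a large, high-contrast lesion
```

Every new labeled example nudges the weights further, which is why the article’s “ever-expanding data set” matters: the more scans such a system sees, the finer the distinctions it can draw on its own.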
Being able to analyze a scan more quickly and accurately might seem like a small change, but multiplied across thousands of scans it has a major impact on time and resource allocation. When those impacts are multiplied across hundreds of industries, AI can change how we interact with the built world itself. In Part II, we will explore how architects and designers may adopt AI to improve that built world.
(For a quick yet comprehensive overview of the different types of Artificial Intelligence, check out this Forbes article. For a more mind-bending take, read anything about Google’s DeepDream experiments.)