by Wojciech Gryc
Large language models like GPT-4, Claude, and others are ushering in a new world – one where AI is available to everyone, and anyone can benefit from showing an AI agent how to do things without building customized algorithms or data sets [1]. The unreasonable effectiveness of such models promises more models in the future, including ones that work across images, sound, video, and even 3-dimensional objects [2].
This new wave of AI is akin to being around when the Internet was developed or when the Apple App Store launched. At the same time, it is unclear what capabilities these models have today, what capabilities they will gain, and how they will interact with humanity. Hence the view of this memo: we are at an inflection point with AI.
The 21st century is “the most important century”; it promises either continued exponential growth for humanity or, at worst, a new dark age where nothing progresses. Given the challenges of climate change, an aging world with stalling population growth, and other geopolitical and socioeconomic risks, it is reasonable to fear that the 20th century was the peak of humanity’s economic growth. One way to address this is through artificial intelligence: with digital agents and robots that automate human tasks, we can continue experiencing the exponential growth we’re accustomed to [3].
This inflection point creates numerous opportunities to address some of our biggest challenges in the 21st century.
There are two broad types of risks we need to consider in this AI-enabled future.
Extrinsic risks – geopolitical, environmental, and economic risks. This is an unfortunately large bucket, but it covers anything external to AI research itself that could impede its progress, directly or indirectly. For example, the current unstable banking environment could trigger an economic crash and cut off funding for further research agendas. Climate change could lead to similar instability. There is also a risk that governments or citizens become afraid of AI and try to prevent its further development.
Intrinsic risks – a lack of understanding of AI models will prevent their use and integration into broader systems. Deep learning, as a subfield of AI, has generated much of the progress we’re seeing today with large language models and generative AI. However, humans need explainable systems to enable trust, and to enable accountability/liability within our existing social and legal systems.
A lack of explainability or trust will confine such models to extremely limited scenarios. Anthropic’s “Core Views on AI Safety” outlines these concerns in more detail. In short, we need to invest heavily in understanding and communicating with AI models to ensure we can continue using them [4]. Failing to do so can result in limited use of these models or, worse, a catastrophic failure that leads to such models being regulated away.
Addressing the above requires three broad buckets of work.
How the above is done is a deeply personal decision, but I hope as many people as possible get involved in all three buckets of work.