
22 March 2023

The AI Inflection Point and Planning for the Future

by Wojciech Gryc

Large language models like GPT-4, Claude, and others are ushering in a new world – one where AI is available to everyone, and anyone can benefit from showing an AI agent how to do things without building customized algorithms or data sets[1]. The unreasonable effectiveness of such models promises more models in the future, including ones that work across images, sound, video, and even 3-dimensional objects[2].
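To make the few-shot idea (see note 1) concrete, here is a minimal sketch of what "showing an AI agent how to do things" looks like in practice: two worked examples are embedded directly in the text sent to the model, and no custom algorithm or data set is built. The task, the examples, and the absence of any vendor-specific API call are all illustrative assumptions, not a particular product's interface.

```python
# Minimal sketch of few-shot prompting: the model is "shown how to do things"
# purely through examples embedded in the prompt, with no custom training.

def build_few_shot_prompt(examples, new_input):
    """Assemble a prompt from (input, output) example pairs plus a new input."""
    parts = ["Classify the sentiment of each review as Positive or Negative."]
    for text, label in examples:
        parts.append(f"Review: {text}\nSentiment: {label}")
    # Leave the final label blank so the model completes it.
    parts.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(parts)


examples = [
    ("The battery lasts all day and the screen is gorgeous.", "Positive"),
    ("It stopped working after a week and support never replied.", "Negative"),
]

prompt = build_few_shot_prompt(examples, "Setup took two minutes and it just works.")

# The assembled prompt would be sent to whichever model API you use (GPT-4,
# Claude, etc.); no specific call is shown here to avoid assuming an interface.
print(prompt)
```

At inference time, the model infers the pattern from the two examples and completes the final "Sentiment:" line. Writing those examples is the entire "training" step, which is why no coding capability is strictly required.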

This new wave of AI is akin to being around when the Internet was developed or when the Apple App Store launched. At the same time, it is entirely unclear what capabilities these models have today, what capabilities they will develop, and how they will interact with humanity. Hence the premise of this memo – we are at an inflection point with AI.

Why an AI Inflection Point is Needed

The 21st century is “the most important century”; it will either usher in continued exponential growth for humanity or, at worst, a new dark age where progress stalls. Given the challenges of climate change, an aging world with stalling population growth, and other geopolitical and socioeconomic risks, it is reasonable to fear that the 20th century was the peak of humanity’s economic growth. One way to address this is via artificial intelligence: with digital agents and robots that automate human tasks, we can continue experiencing the exponential growth we’re accustomed to[3].

This inflection point enables numerous opportunities and ways to address some of our biggest challenges in the 21st century. These include:

  1. Automation and productivity gains that address the challenges of an aging workforce. AI can dramatically improve productivity and enable us to take better care of retired and aging individuals, while also maintaining (if not increasing!) the level of output we are accustomed to.
  2. Personalized education, medicine, and other services that foster social mobility. Education, in particular, can be improved via personalized tutors, support for students who normally don’t have access to any, and much more. Used in this way, AI becomes an engine of social mobility.
  3. AI-aided research, thinking, and invention. Artificial General Intelligence (AGI) need not be invented for AI to enable new discoveries across pretty much any scientific field. AI-aided research can increase the rate at which discoveries are made, and thus enable further economic and social growth.

Risks to an AI Inflection Point

There are two broad types of risks we need to consider in this AI-enabled future.

Extrinsic risks – geopolitical, environmental, and economic risks. This is an unfortunately large bucket, but it can be summarized as anything that could impede progress on AI research – directly or indirectly – from outside the research itself. For example, the current unstable banking environment could trigger an economic crash and cut off funding for further research agendas. Climate change could lead to similar instability. There is also a risk that governments or citizens become afraid of AI and try to prevent its further development.

Intrinsic risks – a lack of understanding of AI models will prevent their use and integration into broader systems. Deep learning, as a subfield of AI, has generated much of the progress we’re seeing today with large language models and generative AI. However, humans need explainable systems in order to trust them and to assign accountability and liability within our existing social and legal systems.

A lack of explainability or trust will mean such models are used only in extremely limited scenarios. Anthropic’s “Core Views on AI Safety” outlines these concerns in more detail. In short, we need to invest heavily in understanding and communicating with AI models to ensure we can continue using them[4]. Failing to do so can result in limited use of these models or, worse yet, a catastrophic failure that results in such models being regulated away.

What Can We Do?

Addressing the above requires three buckets of work:

  1. Continued research into AI. Technological progress needs to continue, and this memo assumes it will do so at a pace similar to what we’ve seen in the past decade.
  2. Research into model explainability, communication, and understanding. This is necessary to address the intrinsic risks outlined above.
  3. Using AI to encourage positive political, economic, and other decision-making. This will enable AI to begin contributing to the betterment of the world, fulfilling the “most important century’s” promise and making life better for everyone.

How the above is done is a deeply personal decision, but I hope as many people as possible get involved in all three buckets of work.


Notes

  1. The best example of this is prompt engineering and few-shot learning, where people provide two or three examples to a model and it generates new results and ideas based on that prompt. This can be done without any coding capability.
  2. Meta showed some of the research it was doing into 3D generative AI at its “Inside the Lab” day in February 2022.
  3. Note that AI or AGI is not the only requirement here. Having access to energy and raw materials/resources is also important.
  4. There are many promising research areas in this regard, including cognitive factoring, iterated distillation and amplification, and process-oriented learning.