
One night in May 2020, during the height of lockdown, Deep Ganguli was worried.
Ganguli, then research director at the Stanford Institute for Human-Centered AI, had just been alerted to OpenAI’s new paper on GPT-3, its latest large language model. The new model was potentially 10 times more advanced than anything else of its kind, and it was doing things he had never thought possible for AI. The scaling data revealed in the research suggested there was no sign of that progress slowing down. Ganguli fast-forwarded five years in his head, running through the kinds of societal implications he spent his time at Stanford anticipating, and the changes he e …
Read the full story at The Verge.
