I'm a skeptic of exponential improvement in the models themselves. I think it's more likely to be logarithmic. Google got 80% of the way to making a self-driving car by 2018. Seven years later, it's only 90% of the way there.
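That 80%-to-90% framing can be turned into a back-of-the-envelope calculation. One hedged way to model "logarithmic-feeling" progress is to assume the remaining gap decays exponentially; the two data points then imply how long the last stretch would take. The figures and the decay model here are illustrative assumptions, not measurements:

```python
import math

# Toy model: if progress saturates, each extra "nine" of reliability takes
# roughly as long as all the progress before it. Assume the remaining error
# decays exponentially: error(t) = e0 * exp(-k * t).
e_2018 = 0.20   # "80% of the way there" in 2018
e_2025 = 0.10   # "90% of the way there" 7 years later

k = math.log(e_2018 / e_2025) / 7  # decay rate implied by those two points

def years_until(target_error, start_error=e_2025):
    """Years after 2025 until the remaining gap shrinks to target_error."""
    return math.log(start_error / target_error) / k

print(round(years_until(0.01)))  # → 23 (years after 2025 to reach 99%)
```

Under those assumptions, the jump from 90% to 99% takes over two decades, which is the commenter's intuition in numbers.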
But even if the tech doesn't get better, there's already space for a boom in gaming just by increasing our understanding of how to use what we have.
People who code have only just begun to realise how revolutionary it is to be able to talk to computers in English. We haven't even begun to touch the implications for game design.
For example, with existing tech, I think we can already create an entirely new subgenre of RTS where you control armies by sending orders from HQ instead of clicking units directly. "Lord Raglan wishes the cavalry to advance rapidly to the front, follow the enemy, and try to prevent the enemy carrying away the guns." Mistakes will only make that funnier and more 'real'.
Imagine Mass Effect where you can tell Garrus to flank the enemy without pausing combat.
Imagine meta city builders where you tell the computer to build a particular type of city or structure, and it generates a template to fit the tone you specify.
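The plumbing for that orders-from-HQ mechanic is mostly a translation problem: free-text orders become structured commands the game engine executes. Here is a minimal sketch, with a keyword matcher standing in for the LLM; all unit and action names are invented for illustration:

```python
import re

# Toy sketch: parse a free-text order into a structured command.
# In a real build, an LLM would do this mapping; a keyword matcher
# stands in for it here.
UNITS = {"cavalry": "cavalry_division", "infantry": "infantry_brigade"}
ACTIONS = {"advance": "MOVE_FORWARD", "flank": "FLANK", "hold": "HOLD"}

def parse_order(order: str) -> dict:
    words = re.findall(r"[a-z]+", order.lower())
    unit = next((UNITS[w] for w in words if w in UNITS), None)
    action = next((ACTIONS[w] for w in words if w in ACTIONS), None)
    # Ambiguity becomes part of the fun: unparseable orders default to HOLD.
    return {"unit": unit, "action": action or "HOLD"}

print(parse_order("Lord Raglan wishes the cavalry to advance rapidly"))
# → {'unit': 'cavalry_division', 'action': 'MOVE_FORWARD'}
```

The point is that misread orders don't need to be bugs; a vague dispatch that sends the wrong brigade forward is exactly the Balaclava-style chaos the mechanic is after.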
People judge AI by the ways they see it affecting their lives.
I can't look at photos anymore without squinting at them to determine *am I looking at something real?* I search for real-life photos to use as references for studying structure and lighting, and the AI images cluttering the results are worse than useless for my needs. I look at restaurant menus and see pictures of food that doesn't exist.
I see news articles AI-generated from Reddit comment threads. I see Studio Ghibli's work used to promote the Israeli military. I see fake social media profiles astroturfing for scummy politicians. Job postings are fake and job applicants are fake.
AI computation tools for weather and medical diagnostics are cool, but AI is a scourge in marketing and creative fields.
> Pretty much everyone I know in venture capital genuinely believes we’re in the beginning of a revolution that will make the world wealthier, healthier, and possibly even more free.
To me, this is the core point: in my experience, while there are some wins, the VC industry as a whole does not have a good track record of increasing overall health, wealth, or freedom for the world, only for themselves and their investors. As a result, when the VC industry says something like this, I have very little confidence that it is accurate, and even some indication that it may be intentionally misleading.
There’s probably a lot of merit on the health and freedom questions. But re: your take on wealth only going to investors, check out the book The Power Law by Sebastian Mallaby; he has a very interesting analysis of the role VC has played in bootstrapping broader economic growth that I found compelling. Fun read, too.
I will read this, thanks for the rec!
This is a fantastic article on AI and so informative too. Thank you!!
The John Carmack link is broken. I'm curious to see what it was. Thanks.
Fixed now, sorry about that and thanks for catching it!
As someone developing my first indie game, I have to say AI has enabled me in more ways than I could have imagined. Having an "intern" at my side that's able to walk me through best practices, explain new-to-me functions, and occasionally debug my mistakes better than I can is utterly invaluable. Even though that "intern" is often wrong and occasionally frustratingly stupid, it's still broadened my skills tremendously.
I don't know whether AI's intelligence will double every year, and I'm not convinced that actually matters. There is a whole world of possibilities still left to be unlocked with the models we have available to us today.
I think that's very interesting, because what is the material difference between the support gen AI provides you and being able to search for best practices and interact with forums on the internet?
Obviously there is an experiential difference: you only go to one place (your gen AI of choice), and it is more flexible in how it presents information (e.g. it can rephrase based on prompts). But the actual information is the same, and often comes from the same places.
My understanding of LLMs is that they effectively ingest as much data as they can and are then capable of generating content that conforms to the ingested data, based on some pretty advanced statistical analysis. They do not know anything; they are simply capable of reiterating what they have ingested. If that's true, then the only novel thing they offer is how a user gets to information, not what information is available.
Given that, what do these things enable that wasn't already possible through Google search?
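The "statistical reiteration" picture described above can be sketched with a toy bigram model: it can only emit word pairs that appeared in its training text. (Real LLMs generalize far beyond this caricature, but the sketch shows the mechanism being described. The corpus here is a made-up example.)

```python
import random
from collections import defaultdict

# A bigram "language model": for each word, record which words followed it
# in the ingested text, then generate by sampling from those records.
corpus = "the cavalry will advance and the infantry will hold".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start: str, length: int, seed: int = 0) -> str:
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        nxt = follows.get(words[-1])
        if not nxt:  # dead end: this word never had a successor
            break
        words.append(random.choice(nxt))
    return " ".join(words)

print(generate("the", 5))
```

Every bigram this model emits already existed in its corpus, which is the strong version of the claim: novelty in access, not in content.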
Moore's Law is not a universal law; it's a cherry-picked "huh, that's interesting". The vast majority of technologies and inventions (cars, coffee machines, hair gel) do not get infinitely better on a curve forever. To say it happened with microchips, therefore it will inevitably happen with AI, is ignorant. For further reading on how Moore's Law has been misrepresented and misused, I recommend The Myth of Artificial Intelligence by Erik Larson.
The fact that AI is an umbrella term for WILDLY different applications with similar underpinnings is part of the problem. When you talk about AI, the first thing that comes to mind is ChatGPT. And it's still very difficult to make a good value proposition for what it can reliably do based on what it costs. It's also hard to imagine what exponential efficiency is supposed to mean here for that reason.
On the other hand, the world of machine learning tools for image or voice recognition is doing amazing things. And the world of 3D modelling would not be what it is today without what are now being called "AI Based" tools. Procedural tools are absolutely vital to the 3D and broader CGI field, and I can definitely see those getting better and more efficient with time.
Yeah, ‘generative AI’ can be used to describe recent AI technologies more specifically, but it still doesn’t feel like a great term.
Hey Ryan,
Thank you for your nuanced and well-researched take on AI. I look forward to receiving more diverse articles in my inbox every week.
Before the COVID pandemic I would have found ChatGPT to be something out of Black Mirror, a distant future. But look how quickly the future has caught up.
It's very interesting to realise that we have yet to hit the ceiling of AI model integration, though I believe in time we will.
Perhaps a few more years of 'annoyances' for some people before we hit that next innovation phase. The dissonance is always going to be there; there are always skeptics, laggards, and late adopters. With the digital world filled with noise, polarising views, and tribes, it gets increasingly difficult to get everyone on the same page.
But articles like this one might help us get there faster.
As an engineer I look at AI as an important tool to support workflows and as an 'assistant'. I work for a company that has implemented its own AI model, and it has proven valuable.
So for STEM-related workflows I think it has its place. What I don't condone or agree with is AI being used in artistic and creative workflows. I mean, will AI ever replace the imagination of an artist? The only way that could happen is if the AI model learns from the artist or artists, and even then I think that's many years away. To start with a blank sheet of paper or screen and create something meaningful from an idea takes creativity and imagination; I'm pretty sure an AI model can't learn that just yet...
There are so many troubling things about generative AI that they outweigh the occasional benefits like weather prediction. As someone who is connected to the world of venture capital and probably has friends there, I can see why you might be sympathetic, but to me and many others it’s a tool to broaden capitalism’s exploitation, disenfranchising the labor of even more people.
AI futurists love to focus on how technology has some kind of ‘exponential growth’ (which isn’t proven to last) and will make the lives of future people “wealthier, healthier, and possibly even more free”, as you put it, while ignoring the fact that the technology exploits more people than it helps, and that the wealthy people developing and benefitting from AI have no incentive to rein in the tool that’s giving them more power.
I’m not an AI or debate expert, so my arguments are probably flawed, but the main reason I want to comment is that of course AI looks good when you completely ignore its political, cultural, and socio-economic impacts and just focus on ‘technological exponential growth = good’.
Maybe generative AI technologies can provide benefits, but their current positioning makes them evil and anti-humanist technologies.