And I don't mean The Resistance to Terminators.
AI automation promises to revolutionize white-collar office work. Some think there's plenty of resistance against quick AI adoption, and so there won't be major negative societal disruptions. I've come to believe that resistance will crumble.
Legal and copyright issues could slow AI down, but not for long
AI-generated code may be hard to accept into proprietary code bases due to legal uncertainty, e.g., copyright issues. For example, the Google v. Oracle Java lawsuit started in 2010 and only reached its conclusion in the Supreme Court in 2021. That's 11 years of legal uncertainty over whether copied code was infringing.
Having said that, Google kept using the disputed technology through those years of litigation, so I presume there are ways to operate around legal issues like that. And the legal cases against AI are already underway, as in the GitHub Copilot lawsuit [1], so the clock is already ticking. Currently, the AI side is winning [2].
It feels like there's some inevitability at play in resolving the copyright issues, too. The actors' guild has agreed to a contract with major media companies allowing limited AI usage. OpenAI is securing deals with media companies to train on and utilize their copyrighted materials. The writing is on the wall, and AI is writing it.
People and companies are resistant to change, but new folks will jump ahead
Some say "The future is already here; it's just unevenly distributed" (William Gibson).
One reason that happens is that the status quo lowers risk. If you're Apple and you have AI that can do the work of half your software development workforce, do you lay off 50% of your SDEs? No. No one has the risk tolerance for that big a move all at once. What if you were wrong? What if the wrong automation tech gets used, or the people you hire to manage it get it wrong? Betting all in on a single roll of the dice is too risky.
Not without an external crisis, at least.
The oil and gas industry used to employ a lot of people, until the oil price crashed and belt-tightening resulted in lots of layoffs and automation. Those jobs are never coming back, because they were automated away [11]. But without the price crash, the layoffs might not have been as numerous or severe.
After all, one way for a manager to get promoted is by managing more people and a bigger budget. So the status quo of using more people rather than automation is a self-interested move. A crisis like a big oil price crash is required to shake that status quo loose.
But new companies don't have the same status quo problem. While old companies wrestle with it, new startups will come in with automation and AI baked in to reduce costs, extend their funding runway, and reach product-market fit faster.
Old Tech will eventually come around to automation anyway
People say it's normal for FAANG-type companies to lose 5% or more of their tech workers every year for various reasons [3]: natural attrition, eliminating teams when projects get cancelled, "unregretted attrition", and so on. Depending on the speed of AI automation progress, old tech companies can just keep up 5% unregretted attrition each year to slowly turn the company over to AI.
Some might say it's already started since 2022 [4].
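To get a feel for how "slowly" that compounds, here's a minimal back-of-the-envelope sketch (assuming, purely for illustration, a flat 5% annual headcount reduction with the freed-up work absorbed by AI):

```python
# Hypothetical: how much of the original human headcount remains if a
# company sheds 5% per year and backfills the work with AI, not people?
attrition_rate = 0.05

for years in (5, 10, 20):
    remaining = (1 - attrition_rate) ** years
    print(f"After {years:2d} years: {remaining:.0%} of the original headcount remains")
# After  5 years: 77% ... after 10 years: 60% ... after 20 years: 36%
```

At that pace, a company could quietly hand over a third to two-thirds of its work to AI within a decade or two, without a single headline-grabbing mass layoff.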
And for those old stuffy companies who don't come around fast enough to automation? They can always turn from being an engineering company into a financial engineering firm, like Siemens, IBM or GE.
I’m kidding! Slightly. But just because Air Canada will probably think twice about using AI for customer service after that lawsuit [8], that doesn’t mean they won’t outsource to an AI customer service startup that’s also willing to defend them against lawsuits.
Lump of Labour fallacy is about the whole economy, not a single sector
Some say don't worry about AI automation producing negative societal disruptions in the form of mass unemployment, because the Lump of Labour is a fallacy [6].
The Lump of Labour fallacy is the misconception that there's only a fixed amount of work to be done. So don't worry about automation taking work away from people; there'll always be new work invented for people to do!
Some would say this time is different because AI could do that newly invented work too. But even without this AI-everywhere angle, you have to see that the new work may be of a very different type.
When manufacturing and software development work gets automated, what new work is there to do? Those workers can re-skill into running TikTok influencer channels?
Some say “yes, of course”, but there's just no guarantee that the new work would be "better" (e.g., a higher abstraction level of code, a safer workplace, higher pay, improved quality of life at work). It could be worse, or more dangerous. As climate change gets worse, maybe those displaced tech workers can re-skill into forest firefighting? I doubt non-robotic LLMs can do that.
Jevons Paradox applies to things people want, not things people are ambivalent about or worse
Some say don't worry about AI automation making office/tech/programming workers unemployed, because Jevons Paradox says greater efficiency will induce more demand for software, etc. [6]
Jevons Paradox states that increased efficiency leads to increased consumption. That is, as X becomes more efficient, more X will be used. Substitute X with:
- Oil
- Gas
- Electricity
- Microsoft Office
Look, people want dogs and cats. But generally speaking, people aren't into wanting software developers and assembly line workers for their own sake.
The USA is manufacturing more than ever, but employing fewer manufacturing workers than ever. Maybe AI will let us substitute "manufacturing" with "software developing" in that sentence.
Consumers want the goods, not the people producing them (see: offshoring). If AI can produce it cheaper with fewer humans involved and without animal testing, then consumers would want that instead (mainly because it's cheaper).
So at best, Jevons says people will want more software and apps than ever before, because AI will make producing them more efficient. But nothing says that the code has to be written by humans.
In fact, we've seen this last point before. Practically no one writes the assembly code that software needs in order to run; compilers have been writing it automatically for us for decades.
Fortunately, assembly code programmers could up-skill relatively easily to higher-level languages like Java. With AI, coders may have to up-skill to higher-level languages like English, and compete with all the non-coders who already English better than they do.
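To make that compiler analogy concrete, here's a toy sketch in Python (the function and its name are just an example I made up): a programmer writes the high-level code, and the interpreter compiles it to low-level instructions that practically nobody writes by hand.

```python
import dis

# A programmer writes this high-level code...
def total_price(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

# ...and the compiler emits the low-level instructions automatically.
# dis.dis() prints the generated bytecode; nobody hand-writes this.
dis.dis(total_price)
```

The prompt-to-code workflow is the same move, one level up: the human supplies intent in a high-level language (English), and the machine emits the lower-level artifact (code).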
Software developers will work at higher levels of abstraction. Yes, and so can everyone else
Some say software developers shouldn't worry about automation taking their jobs, because at best AI can spit out code given an architecture and specifications that the human software developer writes up into prompts. So the human SDE can now work at a higher level of abstraction: operate at the system design level and above, and double-check the code the AI produces to make sure there are no hallucinations.
Sure, yes. But what evidence is there that BSc graduates educated in algorithms and data structures can write those prompts better than, say, BA philosophy graduates who trained in close reading, analysis, and writing highly technical English in all their Analytic Philosophy classes?
There are also a lot of LLB graduates underemployed in the legal or adjacent fields, doing paralegal or repetitive property conveyancing work. Maybe some of them can do this AI software development prompting better than CS/SDE graduates?
In fact, we've seen this broadening (or democratizing) of a labour field before. Special event photography used to be highly technical and challenging work. You had to get the lighting, shutter speed, and aperture just right, or else you'd miss the moment, or waste and run out of expensive film. Digital SLR photography and big, inexpensive memory cards mean that many people with an eye for beautiful photos can now do wedding photography: take thousands of photos, then choose the best 50 afterwards. Or just fix it in "post" with Photoshop.
This means sky-high compensation and job security will come down: if not in one field (tech), then in any field that's at risk of AI automation (i.e., all white-collar or office jobs).
AI is a bubble, until it’s not
The last few points focused mostly on software development, but that's a canary in the coal mine. Some say software development is more resistant to AI automation because it's already in the business of automation [7]. But that just means that if (since?) software engineering labour is at risk, then many other fields are at risk too.
Some say AI is like crypto: all hype and a bubble, run like (or by) some of the same personalities. But that's a useless comparison. Could you, in 1999, have told whether the internet was a bubble based on how tulips were a bubble [9]? They're just not the same. And at least with AI there are clear and present use cases, no AGI required.
So forget AGI. Resistance to automation will crumble. There's too much money to be made in automating even just 10% [10] of the white-collar or office work labour market, in whole or in part, outright or by “just” improving human efficiency. That's the "killer app" of AI.
Don't focus so much on the automating-a-worker-outright-and-in-whole part. Instead, focus on the replacing-in-part, by "just" improving human efficiency, part. E.g., self-checkout didn't eliminate "cashiers", but it lets one supervisor do the work of, say, three cashiers.
How much profit could this "killer app" make, enough to justify the (non-)bubble? I lazily checked with Meta AI, and it says:
- telemarketing and public opinion researchers (including call center representatives) in the USA in 2020 probably had compensation of about $100B
- software developers (including applications and systems software developers) employed in the USA in 2020 made about $200B
- office managers, supervisors, support and assistants employed in the USA in 2020 made about $200B
That's $500B of annual compensation right there. So automating even just 10% of it captures $50B a year, and it'd take just 10 years to fill the $500 billion revenue gap [5]. That percentage will go up, the addressable market is not just the USA, and there are many more labour fields to automate than those.
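Here's that back-of-the-envelope arithmetic spelled out (assuming, as above, a $500B/year compensation pool and a flat 10% automation share; both numbers are rough):

```python
# Rough arithmetic behind the "killer app" revenue claim.
compensation_pool = 500e9  # ~$500B/year across the three job categories above
automation_share = 0.10    # assume AI captures just 10% of that work
revenue_gap = 500e9        # the $500B revenue gap [5]

annual_capture = compensation_pool * automation_share
years_to_fill_gap = revenue_gap / annual_capture

print(f"Annual capture: ${annual_capture / 1e9:.0f}B")    # $50B
print(f"Years to fill the gap: {years_to_fill_gap:.0f}")  # 10
```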
That's why I think AI adoption will come quicker than many expect, and that more negative societal disruptions are coming with it. We can't all re-skill to fight forest fires.
[1]: https://www.artificialintelligence-news.com/news/openai-and-microsoft-lawsuit-github-copilot/
[2]: https://www.developer-tech.com/news/judge-dismisses-majority-github-copilot-copyright-claims/
[3]: https://www.seattletimes.com/business/amazon/internal-amazon-documents-shed-light-on-how-company-pressures-out-6-of-office-workers/
[4]: https://layoffs.fyi
[5]: https://blog.carsoncheng.ca/2024/07/re-500b-ai-revenue-expectations-gap.html
[6]: AI and the automation of work. https://www.ben-evans.com/benedictevans/2023/7/2/working-with-ai
[7]: I believe Yann LeCun said something like that but I can’t find the source.
[8]: https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416
[9]: https://en.m.wikipedia.org/wiki/Tulip_mania
[10]: https://news.ycombinator.com/item?id=41465081 - study says there's a 26.08% increase in productivity with AI, and more specifically a 27% to 39% for junior level and 8% to 13% at senior level. So my "10%" guess wasn't too bad?
[11]: https://www.parklandinstitute.ca/job_creation_or_job_loss - “the Big Four are leading the push to automate away even more jobs in the coming years … The Alberta oil and gas industry employed 25,788 fewer workers in 2021 than in 2014 (a 15.5% reduction)”