2024-10-03

Resistance to AI Automation will crumble

And I don't mean The Resistance to Terminators.

AI automation promises to revolutionize white-collar and office work. Some think there's plenty of resistance against quick AI adoption, and so there won't be major negative societal disruptions. I've come to believe that resistance will crumble.

Legal and copyright issues could slow AI down, but not for long.


AI-generated code may be hard to accept into proprietary code bases due to legal uncertainty, e.g., copyright issues.  For example, the Google v. Oracle Java lawsuit started in 2010 and only reached its conclusion in the Supreme Court in 2021.  That's 11 years of legal uncertainty over whether code was copied.

Having said that, Google kept using the disputed technology through those years of litigation.  There are ways around legal issues like that, I presume.  And the legal cases against AI are already underway, as in the GitHub Copilot lawsuit [1], so the clock is already ticking.  And currently, the AI side is winning [2].

It feels like there's some inevitability at play too in terms of resolving the copyright issues.  The actors' guild has agreed to a contract with major media companies allowing limited AI usage. OpenAI is securing deals with media companies to train on and utilize their copyrighted materials. The writing is on the wall, and AI is writing it.

People and companies are resistant to change, but new folks will jump ahead


Some say "The future is already here; it's just unevenly distributed" (William Gibson).

One reason that happens is that the status quo lowers risk.  If you're Apple and you have AI that can do the work of half your software development workforce, do you lay off 50% of SDEs?  No.  No one has the risk tolerance for that big a move all at once.  What if you were wrong?  What if the wrong automation tech is used, or the people you hire to manage it get it wrong?  Betting "all in" on a single roll of the dice is too risky.

Not without an external crisis, at least.

The oil and gas industry used to employ a lot of people, until the oil price crashed and belt-tightening resulted in lots of layoffs and automation.  Those jobs are never coming back, because they were automated away [11].  But without the price crash, the layoffs might not have been as numerous or severe.

After all, one way for a manager to get promoted is by managing more people and a bigger budget.  So the status quo of using more people than automation is a self-interested move.  A crisis like a big oil price crash is required to shake that status quo loose.

But new companies don't have the same status quo problem.  Old companies will either succeed or fail.  New startups will come in with automation and AI baked in to reduce costs, extend their funding runway, and get product-market fit faster.

Old Tech will eventually come around to automation anyway


People say that it's normal for FAANG-type companies to lose 5% or more of their tech workers every year for various reasons [3]: attrition, eliminating teams whose projects get cancelled, "unregretted attrition", etc.  Depending on the speed of AI automation progress, old tech companies can just keep up 5% unregretted attrition each year to slowly turn over the company to AI.

Some might say it's already started since 2022 [4].

And for those old stuffy companies that don't come around to automation fast enough?  They can always turn from being an engineering company into a financial engineering firm, like Siemens, IBM or GE.

I’m kidding! Slightly.  But just because Air Canada will probably think twice about using AI for customer service after that lawsuit [8], that doesn’t mean they won’t outsource to an AI customer service startup that’s also willing to defend them against lawsuits.

Lump of Labour fallacy is about the whole economy, not a single sector


Some say don't worry about AI automation producing negative societal disruptions in the form of making everyone unemployed, because the Lump of Labour is a fallacy [6].

The Lump of Labour fallacy is the misconception that there's only a fixed amount of work to be done.  So don't worry about automation taking work away from people; there'll always be more work invented for people to do!

Some would say this time is different because AI could do that newly invented work too.  But even without this AI-everywhere angle, you have to see that the type of work may be different.

When manufacturing and software development work gets automated, what new work is there to do?  Those workers can re-skill into making TikTok influencing channels?

Some say “yes, of course”, but there's just no guarantee that the new work would be "better" (e.g. higher abstraction of code, safer workplace, higher paying, improved quality of life at work).  It could be worse or more dangerous.  As climate change gets worse, maybe those displaced tech workers can re-skill into forest fire fighting?  I doubt non-robotic LLMs can do that.

Jevons Paradox applies to things people want, not things people are ambivalent about or worse


Some say don't worry about AI automation making office/tech/programming workers unemployed because Jevons says greater efficiency will induce more demand for software, etc. [6]

Jevons Paradox states that increased efficiency leads to increased consumption.  i.e. as X becomes more efficient, more X will be used. Substitute X with:

  • Oil
  • Gas
  • Electricity
  • Microsoft Office


Look, people want dogs and cats.  But generally speaking, people aren't into wanting software developers and assembly line workers.

The USA is manufacturing more than ever, but is employing fewer manufacturing workers than ever. Maybe AI will let us substitute "manufacturing" with "software developing" in that sentence.

Consumers want the goods, not the people producing them (see: offshoring).  If AI can produce it cheaper with fewer humans involved and without animal testing, then consumers would want that instead (mainly because it's cheaper).

So at best, Jevons says people will want more software and apps than ever before because they'll get more efficient with AI.  But nothing says that the code has to be written by humans.

In fact, we've seen this last point before.  Practically no one writes the assembly code that software needs in order to run; compilers have been writing it for us automatically for decades.

Fortunately, assembly code programmers could easily up-skill to higher level languages like Java.  With AI, coders may have to up-skill to higher level languages like English, and compete with all the non-coders who can already English better than them.

Software developers will work at higher levels of abstraction.  Yes, and so can everyone else


Some say software developers shouldn't worry about automation taking their jobs, because at best AI can spit out code given some architecture and specifications that the human software developer has to write up into prompts.  So the human SDE can now work at a higher level of abstraction: work at the system design level and above, and double-check the code that AI produces to make sure there are no hallucinations.

Sure, yes.  But what evidence is there that BSc graduates educated in algorithms and data structures can write those prompts better than, e.g., BA philosophy graduates trained in close reading, analysis, and writing highly technical English in all their Analytic Philosophy classes?

There are also a lot of lawyers with LLBs who are underemployed in legal or adjacent fields, doing paralegal or repetitive property-conveyancing work.  Maybe some of them can do this AI software development prompting better than CS/SDE graduates?

In fact, we've seen this broadening (or democratizing) of a labour field before.  It used to be highly technical and challenging work to do special-event photography.  You had to get the lighting, shutter speed, and aperture just right, or else you'd miss the moment, or waste and run out of expensive film.  Digital SLR photography and big, inexpensive memory cards mean many people with an eye for beautiful photos can now do wedding photography: take 1000s of photos, then choose the best 50 afterwards.  Or just fix it in "post" with Photoshop.

This means sky-high compensation or job security will come down, if not in one field (tech), then maybe in any field that's at risk of AI automation (all white-collar or office jobs).

AI is a bubble, until it’s not


The last few points focused more on software development, but that’s a canary in the coal mine.  Some say software development is more resistant to AI automation because it’s already in the business of automation [7].  But that just means that if (since?) software engineering labour is at risk, then many other fields are also at risk too.

Some say AI is like crypto.  It's all hype and a bubble, run by some of the same personalities.  But that's a useless comparison.  Could you, in 1999, have told whether the internet was a bubble based on how tulips were a bubble [9]?  They're just not the same.  And at least with AI there are clear and present use-cases, no AGI required.

So forget AGI.  Resistance to automation will crumble.  There's too much money to be made in automating even just 10% [10] of the white-collar or office work labour market, in whole or in part, outright or by “just” improving human efficiency.  That's the "killer app" of AI.

Don't focus so much on the part about automating a worker outright and in whole.  Instead, focus on replacement in part, by "just" improving human efficiency.  E.g. self-checkout didn't eliminate "cashiers", but it allows one supervisor to do the work of (say) three cashiers.

How much profit can this “killer app" make that would justify the (non) bubble?  I lazily checked with Meta AI and it says:

  • telemarketing and public opinion researchers (including call center representatives) in the USA in 2020 probably had compensation of about $100B
  • software developers (including applications and systems software developers) employed in the USA in 2020 made about $200B
  • office managers, supervisors, support and assistants employed in the USA in 2020 made about $200B


That's $500B right there. So by automating even just 10%, it'd take just 10 years to fill the $500 billion revenue gap [5].  That percentage will go up, the addressable market is not just the USA, and there are many other labour fields to automate than those.
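To make the back-of-envelope arithmetic explicit, here's a quick sketch.  The compensation figures are the Meta AI estimates quoted above, and the $500B revenue gap is the one from [5]; everything else is just multiplication and division.

```python
# Rough annual compensation estimates quoted above (USD, 2020)
compensation = {
    "telemarketing / call centers": 100e9,
    "software developers": 200e9,
    "office managers, supervisors, support": 200e9,
}

total = sum(compensation.values())        # $500B per year in wages
automated_share = 0.10                    # automate "just 10%" of that work
annual_capture = total * automated_share  # $50B per year of potential revenue
revenue_gap = 500e9                       # the $500B gap from [5]
years_to_fill = revenue_gap / annual_capture

print(f"${total/1e9:.0f}B total, ${annual_capture/1e9:.0f}B/yr, "
      f"{years_to_fill:.0f} years to fill the gap")
# → $500B total, $50B/yr, 10 years to fill the gap
```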

That’s why I think AI adoption will be quicker and there’ll be more negative societal disruptions coming up.  We can’t all re-skill to fight forest fires.

EDIT: To clarify, I don't mean tech or any other jobs will definitely get decimated by AI efficiency gains.  It's just that quoting "Jevons" or "Lump of Labour" isn't a knock-out argument against that possibility.  The analysis has to go deeper than invoking those, by now, thought-stopping thoughts.

Let me volunteer one possibility.  Say AI makes coding 10% more efficient, meaning the cost of labour is reduced.  Because the cost of software production goes down, consumers want more apps, so many more than the 10% efficiency gain can cover that more human developers are needed.  This is the classic rebound effect in Jevons.  Notice it requires consumers to want so many more apps that it more than offsets the 10% efficiency gain; there is no guarantee it rebounds that much!
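That rebound condition can be made concrete with a toy model.  The numbers below are illustrative only, not from any study: each developer becomes 10% more productive, and headcount only grows when demand for software grows by more than that 10%.

```python
def headcount_needed(demand_units: float, output_per_dev: float) -> float:
    """Developers needed to produce a given amount of software."""
    return demand_units / output_per_dev

# Baseline: 100 units of demand, 1 unit of output per developer
baseline = headcount_needed(100, 1.0)   # 100 developers

# AI efficiency gain: each developer now produces 10% more
efficient = 1.10

# Weak rebound: demand grows only 5% -- fewer developers are needed
weak = headcount_needed(105, efficient)     # ~95.5 developers

# Strong rebound: demand grows 20% -- more than offsets the gain
strong = headcount_needed(120, efficient)   # ~109.1 developers

print(weak < baseline, strong > baseline)   # → True True
```

The whole argument hinges on which case actually materializes, which Jevons alone doesn't settle.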

EDIT 2: Also, jobs getting decimated by AI efficiency gains is not necessarily bad!  Machines decimated coal mining jobs, which was bad for jobs, but given there were better alternative jobs, it was great for the health and safety of would-be coal miners!


[1]: https://www.artificialintelligence-news.com/news/openai-and-microsoft-lawsuit-github-copilot/

[2]: https://www.developer-tech.com/news/judge-dismisses-majority-github-copilot-copyright-claims/

[3]: https://www.seattletimes.com/business/amazon/internal-amazon-documents-shed-light-on-how-company-pressures-out-6-of-office-workers/

[4]: https://layoffs.fyi

[5]: https://blog.carsoncheng.ca/2024/07/re-500b-ai-revenue-expectations-gap.html

[6]: AI and the automation of work. https://www.ben-evans.com/benedictevans/2023/7/2/working-with-ai

[7]: I believe Yann LeCun said something like that but I can’t find the source.

[8]: https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416

[9]: https://en.m.wikipedia.org/wiki/Tulip_mania

[10]: https://news.ycombinator.com/item?id=41465081 - study says there's a 26.08% increase in productivity with AI, and more specifically a 27% to 39% for junior level and 8% to 13% at senior level.  So my "10%" guess wasn't too bad?

[11]: https://www.parklandinstitute.ca/job_creation_or_job_loss - “the Big Four are leading the push to automate away even more jobs in the coming years … The Alberta oil and gas industry employed 25,788 fewer workers in 2021 than in 2014 (a 15.5% reduction)”

2024-10-02

Labour Automation vs Growth

Automation in oil and gas reduced the labour needed to extract and produce oil. Oil as a product is not a platform-platform: oil usage cannot induce demand for ever more oil by being a medium for platforms that can recursively host more platforms of products. Because oil does not induce exponential demand, automation has a good chance of reducing labour overall.

The flip side is that platform-platforms can recursively host more platforms of products. So platform-platforms have a chance to induce exponential demand, and while automation reduces the labour per unit produced, on the whole the exponential demand for units can balance out, if not increase, the labour required industry-wide.

C compilers are not platform-platforms. Compilers in general basically eliminated all assembly programming labour. The reason there are more programmers now than when assembly language reigned supreme is not that compiler automation freed up labour for more creative things. It's more that other platform-platforms, like the internet and operating systems, induced exponential demand for software.

Other examples of automation that are not platform-platforms and in fact eliminated a lot of labour include:

  • Manufacturing automation
  • Bookbinding automation (replaced by printing presses)
  • Telephone switchboard automation (replaced by digital systems)


Jevons Paradox states that increased efficiency leads to increased consumption.  i.e. as X becomes more efficient, more X will be used. Substitute X with:

  • Oil
  • Gas
  • Electricity
  • Microsoft Office


But software developers? Or assembly line workers? The USA is manufacturing more than ever, but is employing fewer manufacturing workers than ever. Maybe AI will allow us to substitute "manufacturing" with "software developing" in that sentence.

Jevons Paradox "works" only for items that consumers actually want. If extreme environmentalism "won", gas usage would go down no matter how efficient gas engines got. If coal-powered SUVs became unreasonably ultra trendy, coal use would go up regardless of how inefficient coal was.

Software development and manufacturing workers are not goods that consumers want. They want what's produced. And if AI can produce it cheaper with fewer humans involved and without animal testing, then consumers would want that instead.

2024-10-01

The Internet is a Platform-Platform. AI is not.

AI is not like the internet or computer operating system software in terms of its economic impact.

The latter two are platform-platforms while AI is not.

A platform is a kind of medium.  e.g. YouTube is a platform for videos.

Some platforms are special in that they are recursively platforms for platforms.

Computer hardware is a platform-platform.  It's a medium for various single-purpose software, of course, but more importantly it is also a medium for different operating systems (OSs).

OSs like macOS or Windows are platform-platforms.  The easiest and most reductive way to see this is that you can virtualize and run macOS on Windows, and vice versa.  More substantively, these OSs can run web browsers.

Web browsers and the physical internetwork they connect to together form a platform-platform.  Different businesses run on the web, but more importantly, the web or internet is a medium for running app stores on desktop and mobile devices.

Is AI a platform-platform?  What is it a medium for?  Does this medium support a platform recursively?

Or is AI more like a sufficiently smart general compiler?  AI can automate existing products and workflows, but it isn't a medium that will enable more products exponentially by carrying platforms on platforms … on platforms.

Others have noted similar ideas.  e.g. AI is a feature, not a product.  Actually, it's worse: "AI is a tech to enable a feature, not a feature in itself" [1].

Or that current AI are automated "infinite interns that can write anything for you" [2].  But follow that train of thought to the conclusion that a human-level current-generation AI would be an actually-human intern and now wonder: since when do human interns (even an infinity of them) form a medium that can be a platform for platforms?  (… unless you use them like transistors, to do computation as a human powered Turing machine, just to argue they can do what actual computers can already do.)

This distinction between products and platforms vs platform-platforms can inform us on the economic impact of AI on society.

 

[1]: https://www.threads.net/@benedictevans/post/C8C00e5OZBT

[2]: https://www.ben-evans.com/benedictevans/2023/7/2/working-with-ai

 

2024-09-16

Jevons Is a Paradox, Not a Rule

Jevons Paradox says that as technology becomes more efficient, overall resource consumption can increase. This was seen during the Industrial Revolution when more efficient coal engines led to higher coal usage. However, this paradox is not universal, and efficiency can also lead to reduced resource consumption.

In the context of AI coding tools (e.g. GitHub Copilot), there's a belief that increased efficiency will lead to more coding jobs by lowering development costs. While this may happen, history shows that technological advancements can also displace workers.

Counter Examples

The invention of programming compilers made coding more efficient but reduced demand for assembly language programmers, who were once critical to assembly-based software development. While many of those programmers probably found other coding jobs in higher-level languages, Jevons simply doesn't guarantee it.

Similar patterns have occurred in other industries more starkly. The mechanization of agriculture reduced the need for farm labor.  See this graph:

https://ourworldindata.org/grapher/number-of-people-employed-in-agriculture

Then there's the replacement of draft horses: internal combustion engine (ICE) vehicles meant horses were no longer needed, millions of draft horses were slaughtered or displaced, and their population dwindled.  See this graph:

https://www.researchgate.net/publication/338480301/figure/fig1/AS:845430833283085@1578577826802/Evolution-of-the-horse-population-in-France-from-1800-to-2010-translated-from-French.ppm

In recent years, coal consumption has fallen despite energy efficiency gains due to the shift to other energy sources (e.g. renewables, gas).

The rebound effect, which drives Jevons Paradox, doesn't always occur at full strength. For example, energy-efficient LED lighting and fuel-efficient cars have reduced overall energy and fuel consumption, despite some rebound in usage.  Similarly, AI tools may lead to fewer coding jobs, even if more code is produced.

Ultimately, while AI could increase software development demand, it may also reduce the need for certain types of programmers. History shows that efficiency gains don’t always lead to more jobs. Jevons didn't guarantee draft horses more jobs, after all.

This was written in collaboration with an AI — another example where more words will be written as efficiency per word increases but the number of writing jobs may well decrease (as it apparently already has: https://www.bbc.com/news/business-65906521 ).

2024-08-13

How to use Homebrew on a Multi-user macOS

You'll find descriptions of how to do this, like on StackOverflow: How to use Homebrew on a Multi-user MacOS Sierra Setup.


It's said that using `sudo` is wrong.

It's said that using a per-user local version of brew is right, but...

1. it doesn't play well with `nvm` (see)

2. it is completely and entirely unsupported (see)

3. many packages don't support it (see)

4. many packages will install from source instead of a binary (see)


So the practical, quick and dirty solution is to just use `sudo` (see).


In my experience, if I recall correctly, Homebrew on macOS 14 by default uses the group "admin" for what it installs, and it already sets "read" and "execute" permissions as needed.  The only things missing are "write" permissions for the group, and for each user to include brew binaries on their PATH.

Also, "Administrator" users on Macs are in group "admin" by default already too.  I'm guessing if your user is using brew, they're probably a macOS "Administrator" too (or else why would you let them use a global brew install?).

So I just ran in Terminal:

$ sudo chmod -R g+w $(brew --prefix)
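As a sanity check of what that command does, here's a small sketch using a throwaway scratch directory in place of the real brew prefix (on Apple silicon, `brew --prefix` is normally /opt/homebrew); it just demonstrates that `chmod -R g+w` adds the group-write bit to every entry under the prefix.

```shell
# Throwaway directory standing in for $(brew --prefix)
PREFIX=$(mktemp -d)
mkdir -p "$PREFIX/bin" "$PREFIX/Cellar"
chmod -R g-w "$PREFIX"   # simulate the default: no group write

# The fix: grant group write recursively, as the sudo command above does
chmod -R g+w "$PREFIX"

# List anything still missing the group-write bit (should print nothing)
find "$PREFIX" ! -perm -g+w

rm -rf "$PREFIX"
```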

Then for the user that wants to use brew, put in their home directory's ".zprofile" file:

eval "$(/opt/homebrew/bin/brew shellenv)"

To ensure brew is working for them, run in their Terminal: $ brew doctor

Warning: this is unsupported, it's said to be wrong, and you're letting all Admin users on that machine share one single installation of Homebrew!

If roommates fight over fridge space, you've got no one but yourself to blame for not buying each roommate their own fridge!

So what use-case does this safely enable?  A single human with multiple macOS user profiles to isolate their work space while sharing (with themselves!) the same global brew install.