The Jevons Paradox says that as a technology becomes more efficient, overall consumption of the resource it uses can increase. This was seen during the Industrial Revolution, when more efficient coal engines led to higher total coal usage. However, the paradox is not universal, and efficiency can also lead to reduced resource consumption.
In the context of AI coding tools (e.g. GitHub Copilot), there's a belief that increased efficiency will lead to more coding jobs by lowering development costs. While this may happen, history shows that technological advancements can also displace workers.
Counterexamples
The invention of programming compilers made coding more efficient but reduced demand for assembly language programmers, who were once critical to assembly-based software development. While many of those programmers probably found other coding jobs in higher-level languages, Jevons simply doesn't guarantee it.
Similar patterns have played out more starkly in other industries. The mechanization of agriculture reduced the need for farm labor. See this graph:
https://ourworldindata.org/grapher/number-of-people-employed-in-agriculture
Then there's the replacement of draft horses: internal-combustion (ICE) vehicles meant horses were no longer needed, and millions of draft horses were slaughtered or displaced as their population dwindled. See this graph:
https://www.researchgate.net/publication/338480301/figure/fig1/AS:845430833283085@1578577826802/Evolution-of-the-horse-population-in-France-from-1800-to-2010-translated-from-French.ppm
In recent years, coal consumption has fallen despite continued energy-efficiency gains, thanks to the shift to other energy sources (e.g. renewables, natural gas).
The rebound effect, which drives the Jevons Paradox, doesn’t always occur at full strength. For example, energy-efficient LED lighting and fuel-efficient cars have reduced overall energy and fuel consumption even where usage increased, because the rebound fell short of the efficiency gain. Similarly, AI tools may lead to fewer coding jobs, even if more code is produced.
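To make the partial-rebound point concrete, here's a back-of-the-envelope calculation. The 2x and 60% figures are made up for illustration, not measured:

```shell
# Net resource consumption scales as demand growth divided by efficiency gain.
# Jevons only bites when demand grows faster than efficiency improves.
efficiency_gain=2.0   # tech becomes 2x more efficient
demand_growth=1.6     # usage grows 60% in response (a partial rebound)
mult=$(awk -v d="$demand_growth" -v e="$efficiency_gain" 'BEGIN { printf "%.2f", d / e }')
echo "net consumption multiplier: $mult"   # 0.80 -> overall consumption falls 20%
```

With a full-strength rebound, demand growth would exceed the efficiency gain and the multiplier would rise above 1.00 — that's the paradox case.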
Ultimately, while AI could increase software development demand, it may also reduce the need for certain types of programmers. History shows that efficiency gains don’t always lead to more jobs. Jevons didn't guarantee draft horses more jobs, after all.
This was written in collaboration with an AI — another example where more words will be written as efficiency per word increases but the number of writing jobs may well decrease (as it apparently already has: https://www.bbc.com/news/business-65906521 ).
Tech Stuff and Notes
on software development, computing science, software technologies, learning, etc.
2024-09-16
Jevons Is a Paradox, Not a Rule
2024-08-13
How to use Homebrew on a Multi-user macOS
You'll find descriptions of how to do this, like on StackOverflow: How to use Homebrew on a Multi-user MacOS Sierra Setup.
It's said that using `sudo` is wrong.
It's said that using a per-user local version of brew is right, but...
1. it doesn't play well with `nvm` (see)
2. it is completely and entirely unsupported (see)
3. many packages don't support it (see)
4. many packages will install from source instead of a binary (see)
So the practical, quick and dirty solution is to just use `sudo` (see).
In my experience (if I recall correctly), Homebrew on macOS 14 by default installs into directories group-owned by "admin", and it already sets "read" and "execute" permissions where needed. The only things missing are group "write" permission, and getting the brew binaries onto each user's PATH.
Also, "Administrator" users on Macs are already in the group "admin" by default. I'd guess that if a user is running brew, they're probably a macOS "Administrator" too (or else why would you let them use a global brew install?).
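The `g+w` bit is the whole fix. Here's a sketch of what it does, on a throwaway directory rather than the real brew prefix:

```shell
# Simulate the brew prefix with a temp directory.
dir=$(mktemp -d)
chmod 755 "$dir"       # typical default: owner rwx, group r-x, other r-x
chmod -R g+w "$dir"    # the same flag used on $(brew --prefix)
# Print the resulting mode (GNU stat first, BSD/macOS stat as fallback).
perms=$(stat -c '%A' "$dir" 2>/dev/null || stat -f '%Sp' "$dir")
echo "$perms"          # drwxrwxr-x: group members can now write
rmdir "$dir"
```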
So I just ran in Terminal:
$ sudo chmod -R g+w $(brew --prefix)
Then, for each user who wants to use brew, put this in the ".zprofile" file in their home directory:
eval "$(/opt/homebrew/bin/brew shellenv)"
To ensure brew is working for them, run in their Terminal: $ brew doctor
Warning: this is unsupported, it's said to be wrong, and you're letting all Admin users on that machine share one single installation of Homebrew!
If roommates fight over fridge space, you've got no one but yourself to blame for not buying each roommate their own fridge!
So what use case does this safely enable? A single human with multiple macOS user profiles, isolating their workspaces while sharing (with themselves!) the same global brew install.
2024-08-12
You can mount Linux Ext or LUKS disks on Mac and Windows!
This is exciting! I’ve been looking for a disk format that works seamlessly across Mac, Windows, and Linux. Something like a USB or disk image that mounts on all three desktop platforms.
The obvious choice is exFAT, but what if I want a modern disk format with journaling for data safety?
Ext4 is one option, although it used to be read-only on Macs (see Mounting Ext2/Ext3/Ext4 USB Flash Drives on Mac: Read Only Success). Maybe things have gotten better in the last 10 years?
And what if I want disk encryption?
VeraCrypt with an exFAT disk works: see Sharing TrueCrypt USB volume on 3 platforms: Mac, Windows, Linux.
But what if you want both encryption and a modern file system? I couldn't find a solution to this, until I found…
linsk
In summary, linsk is an easy way to run an Alpine Linux VM (using qemu), mount a disk inside that VM, and then share the disk back out to your host machine via a network file-sharing protocol.
So now you can mount an Ext4 disk, or a LUKS encrypted Ext4 disk, on your Mac or Windows machine!
The project's usage notes are so good that you can just read them to learn how to install and use linsk.
Here's a quick reference, more for my own use than anything:
How to use linsk to mount an Ext4 disk within a LUKS encrypted disk on macOS
Note: If you're not me, please read the linsk documentation first and understand it completely before proceeding below! This assumes you installed linsk correctly, including qemu!
1. With the disk plugged into your Mac, make sure to unmount all volumes that macOS auto-mounts!!!
Warning Danger Caution Danger: If you don't unmount all volumes first, the following may delete, nuke, and destroy all your disk data. macOS likes to repeatedly auto-mount any volume it sees, so after every step below, make sure to unmount those volumes again!!! You'll see warnings to this effect from linsk in the terminal as well.
2. In Terminal, run: $ diskutil list
Find your disk and note its path: e.g. `/dev/diskX`
The value of `X` in `diskX` may change every time your Mac encounters the disk!
However, the LVM group/volume/LUKS-container names within the disk should be stable (unless intentionally changed), so in the future you can skip ahead to the `linsk run` command in step 5 below if you're just re-mounting the same volume.
3. Find the LUKS volume to mount within diskX. Run: $ sudo linsk ls dev:/dev/diskX
Note: the `vda` drives you'll see are the system drives within the Alpine Linux VM, so ignore those.
For this example, suppose: `vdb1` is the `crypto_LUKS` volume you're interested in.
4. Find the Ext4 volume inside the LUKS volume by running: $ sudo linsk ls dev:/dev/diskX --luks-container vdb1
Now suppose that `cryptcontainer` is the ext4 volume you want to mount.
5. Mount the ext4 volume. Run: $ sudo linsk run dev:/dev/diskX --luks-container vdb1 mapper/cryptcontainer
6. Alternatively, mount the ext4 volume and open a debug shell. Run: $ sudo linsk run --debug-shell dev:/dev/diskX --luks-container vdb1 mapper/cryptcontainer
This will mount that volume and open a shell within the Alpine Linux VM so you can do whatever you want to that disk volume from within Linux. Good for changing disk ownership or permissions as needed so your Mac can access it.
7. Mount the volume from macOS.
You can press Cmd+K in Finder ("Connect to Server") to connect to the network volume that linsk / Alpine Linux sets up for you locally.
You can also use the Mac Terminal. Run: $ mount_afp -i -o noowners afp://linsk@127.0.0.1:9000/linsk /Path/to/mountpoint
`sudo` is neither needed nor helpful here!
You might find that the folders in the mounted volume don't have the right permissions for you to open them. You might try `sudo chown` or `chmod`, etc., and find they don't fix this. You might try `umask` on your Mac, and find that doesn't work either.
If it's your own disk you want access to, you might as well just tell linsk to open up a debug shell into the Alpine Linux VM and change the permissions on the disk from there.
i.e. Run: $ sudo linsk run --debug-shell dev:/dev/diskX --luks-container vdb1 mapper/cryptcontainer
Then use the Linux chown/chmod tools as needed.
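Since only the `diskX` part tends to change between sessions, the whole invocation can be captured in a tiny helper. This is a hypothetical convenience of mine, not part of linsk; `vdb1` and `cryptcontainer` are just this example's names:

```shell
# Compose the `linsk run` command from the names discovered in steps 2-4.
linsk_mount_cmd() {
  disk="$1"    # e.g. /dev/diskX, from `diskutil list`
  luks="$2"    # e.g. vdb1, from `linsk ls dev:$disk`
  mapper="$3"  # e.g. cryptcontainer, from `linsk ls ... --luks-container`
  printf 'sudo linsk run dev:%s --luks-container %s mapper/%s\n' "$disk" "$luks" "$mapper"
}

linsk_mount_cmd /dev/diskX vdb1 cryptcontainer
# prints: sudo linsk run dev:/dev/diskX --luks-container vdb1 mapper/cryptcontainer
```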
2024-07-15
Re: AI heatwave
I’m trying to make sense of “The AI summer” [1].
OpenAI’s ChatGPT had a meteoric rise in popularity not because the technology works (it does, for some reasonable definition of “works”) but because the foundation for viral spread was already there:
> a lot of this is ‘standing on the shoulders of giants’ - OpenAI didn’t have to wait for people to buy devices or for telcos to build DSL or 3G
> ChatGPT is just a website or an app, and … it could ride on all of the infrastructure we’ve built over the last 25 years. So a huge number of people went off to try it last year.
But current AI’s problem is that no one knows what to do with it:
> The problem is that most of them haven’t been back. … most people played with it once or twice, or go back only every couple of weeks
> On one hand, getting a quarter to a third of the developed world’s population to try a new product in 18 months is very hard. But on the other, most people who tried it didn’t see how it was useful.
Current AI is more R&D than basic foundational research, but within R&D it is still more R than D, and it’s still far from yielding COTS [3] products:
> Accenture … Last summer it proudly announced that it had already done $300m of ‘generative AI’ work for clients… and that it had done 300 projects. Even an LLM can divide 300 by 300 - that’s a lot of pilots, not deployment.
> As a lot of people have now pointed out, all of that adds up to a stupefyingly large amount of capex (and a lot of other investment too) being pulled forward for a technology that’s mostly still only in the experimental budgets.
> an LLM by itself is not a product - it’s a technology that can enable a tool or a feature, and it needs to be unbundled or rebundled into new framings, UX and tools to be become useful. That takes even more time.
It took about 8 years (to approx. June 2022) for cloud adoption to touch 25%; in that time, the share of workloads expected to move to the cloud within 3 years only just passed 40% [2].
It took 2 more years and a pandemic (to approx. January 2024) for cloud adoption to reach about 30%, and for that expected-within-3-years figure to get near 50%:
> If you work in tech, cloud is old and boring and done, but it’s still only a third or so of enterprise workflows
> it took more than 20 years for 20% of US retail to move online
Gen AI and LLMs are here to stay, but it’ll still take many years, even decades, for them to spread everywhere and displace existing technologies and labor.
[1]: https://www.ben-evans.com/benedictevans/2024/7/9/the-ai-summer
[2]: https://www.ben-evans.com/benedictevans/2023/7/2/working-with-ai
[3]: https://en.wikipedia.org/wiki/Commercial_off-the-shelf
2024-07-12
RE: $500B AI revenue expectations gap
Part 1
Given the business that Sequoia Cap is in, it should not be surprising that they’d say things like:
> Investment incineration… a lot of people lose a lot of money during speculative technology waves. It’s hard to pick winners, but much easier to pick losers
> Winners vs. losers… there are always winners during periods of excess infrastructure building. AI is likely to be the next transformative technology wave… It will cause harm primarily to investors.
i.e. invest right and you’d capture a huge amount of value. Invest wrong and you’d be burning your money. So do investments with us.
Part 2
What I found interesting is the point about there being a:
> $500B … gap between the revenue expectations [$600B] implied by the AI infrastructure build-out, and actual revenue growth in the AI ecosystem [$100B] … that needs to be filled for each year of CapEx at today’s levels [GPU $150B, “Data Center Facility Build and Cost to Operate” $150B (they seem to have included OpEx in their “CapEx” figure)]
This means either some amazing AI killer apps will make $500B in sales, or some AI investments will get incinerated.
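The arithmetic behind the gap, using Sequoia's figures as quoted above (the 50% software-margin split is from their follow-up note [2], discussed below):

```shell
expected=600   # $B: revenue expectations implied by the AI infrastructure build-out
actual=100     # $B: actual revenue growth in the AI ecosystem
gap=$((expected - actual))
software_margin=$((gap / 2))   # the "software margin" half of the gap
echo "gap: \$${gap}B, of which software margin: \$${software_margin}B"
```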
Investment incineration "will cause harm primarily to investors" [1] — Nvidia, the data center builders, facility operators, and power companies will all have gotten paid for the work they do — but I wonder about the broader implications of the $500B revenue expectations gap.
Is it — the investments, not necessarily the GPT/LLM tech — irrational exuberance? How much of today’s Big Tech valuation is driven by it? How sensitive is it to interest rates? Notice this "bubble", if it is one, is not occurring during a ZIRP [3] period.
It seems AI startups aren’t the ones building AI data centers — "much of the incremental data center build-out is coming from big tech companies" [2]. So startups seem less affected by that cost.
But actually 50% of the $500B revenue expectations gap is “software margin” — that’s the margin earned by “The end user of the GPU—for example, Starbucks, X, Tesla, Github Copilot or a new startup” [2].
Which means when some of the $500B expected revenue doesn’t show up, it’ll be hitting the AI startups' margins.
Now remember the other 50% is “CapEx”: Nvidia GPU, and “Data Center Facility Build and Cost to Operate”. And remember that Nvidia, the data center builders, facility operators, and power companies will all have gotten paid for the work they will do — because they don’t work for free or for startups' equity. So it seems they won’t have their margins squeezed.
But doesn’t that also mean when some of the $500B expected revenue doesn’t show up, it’ll be hitting the Big Tech AI data center’s top line?
I don't know enough to know what will happen, but it seems some amount of AI investment cooling will hit AI startups and Big Tech's AI data center buildout. Big Tech has been and remains profitable, and their GPUs are paid for, so cooling will mainly change their product priorities and revenue forecasts (and thus stock price?). AI startups, however...
But perhaps, just in time, the Fed's interest rates will go down for unrelated reasons.
[1]: https://www.sequoiacap.com/article/ais-600b-question/
[2]: https://www.sequoiacap.com/article/follow-the-gpus-perspective/
[3]: https://en.wikipedia.org/wiki/Zero_interest-rate_policy