So things have been pretty bad lately. The culture war is as far from won as ever. And I’m getting sued by a patent troll, which is a daily reminder of how completely broken our societal reward mechanisms are. Thinking of naming my next song

complexity a decaying oligarchy can no longer maintain

and trying to write one of those 10 minute long emo ballads that spans from the industrial revolution to now, about how Nietzsche gave way to Bronze Age Mindset, and how after COVID it’s even hard to delight in that because it all feels so hollow. How many booster u get bro?

I reread the Unabomber manifesto (1995) recently, and it is as unanswered as ever.

I can’t bring myself to accept his conclusion that technology is bad and our only hope is to destroy it. Then I would forfeit any hope of being able to know everything. And any hope of meeting God. I will die before I give up my hope for technology because that’s the only thing I am living for.



And then there’s the other issue. We solved self driving cars years ago, we just haven’t found all the bugs yet. We hit a point in AI where it’s mostly the lack of compute that’s frustrating. I haven’t seen new ideas in a while; in fact, the field is moving further from its roots as all the noobs and hangers-on show up.

AI safety has come to mean that I can’t ask DALL-E for a picture of <censored> because it’s racist, while real AI safety, the kind about not accidentally destroying the world with machines, is marginalized. Answering the questions of real AI safety would involve addressing the Unabomber manifesto, and I suspect that even if you had an answer, in the current world it would speak way too much truth to power to happen.



I did promise some hope though, and it’s this. After 2 years of writing this blog, I have nothing better than Technology without Industry. Of course, this doesn’t happen if centralization itself is too beneficial. If AI is just compute, it’s already over. Compute scales really well, and if and when they wake up, centralized powers will have all of it.

The hope is that AI is largely data limited. Unlike compute, data has a very low cost of replication. And from this DeepMind paper, there’s reason to believe this might be true. Common Crawl (the entire internet!) is only 266TB, well within the storage purchasable by an individual today (around $5,000 in 2022). Video datasets are larger, but only by a few orders of magnitude.
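A quick back-of-envelope check on that $5,000 figure, assuming a commodity hard drive price of roughly $19/TB in 2022 (my assumption, not a number from this post):

```python
# Sanity check: cost for an individual to store all of Common Crawl.
# ASSUMPTION: ~$19/TB for commodity hard drives in 2022.
COMMON_CRAWL_TB = 266   # size claimed above
USD_PER_TB = 19         # assumed 2022 HDD price

cost = COMMON_CRAWL_TB * USD_PER_TB
print(f"~${cost:,} to store Common Crawl")  # ~$5,054
```

Which lands right around the $5,000 in the text; the point stands even if the per-TB price is off by a factor of two.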

If a $100,000 computer can train models that efficiently use all the data, the risk of centralization drops massively. Larger computers may still dominate at things that don’t involve data, like chess, but as long as they don’t win at the real world the way they win at chess, we are good.

As of now, this isn’t true though. The training alone (renting these computers) of the largest models costs millions. And we haven’t made Stable Diffusion Video Edition yet; we know how, we just can’t really train it yet. In a couple years, some big AI lab will, and it will likely be the most expensive model trained in history, costing over $100 million.
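Where "millions" comes from, sketched with assumed numbers (none of these are from the post): a GPT-3-scale training run of ~3.14e23 FLOPs, A100s at 312 TFLOPS peak with 30% utilization, rented at ~$2 per GPU-hour:

```python
# Rough estimate of the rental cost of one large training run.
# ALL of these inputs are assumptions for illustration:
FLOPS_NEEDED = 3.14e23       # ~GPT-3-scale training compute
A100_PEAK_FLOPS = 312e12     # A100 bf16 peak throughput
UTILIZATION = 0.30           # realistic fraction of peak sustained
USD_PER_GPU_HOUR = 2.0       # assumed cloud rental price

gpu_seconds = FLOPS_NEEDED / (A100_PEAK_FLOPS * UTILIZATION)
gpu_hours = gpu_seconds / 3600
cost = gpu_hours * USD_PER_GPU_HOUR
print(f"~{gpu_hours:,.0f} GPU-hours, ~${cost / 1e6:.1f}M")
```

On these assumptions it comes out to roughly a million GPU-hours and around $2 million, i.e. millions for one run before you count any failed runs or the hyperparameter search.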

If we make it to the point where the best models can be trained by individuals, and we are all limited by the same data, the world has a much better chance than if those with the largest computer win. Data tends toward being public, compute tends toward being private.

A world of open source, personally owned, decentralized AI sounds nice. And it’s not without precedent: this was PCs in the 80s.