<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://geohot.github.io//blog/feed.xml" rel="self" type="application/atom+xml" /><link href="https://geohot.github.io//blog/" rel="alternate" type="text/html" /><updated>2026-04-06T10:37:06+08:00</updated><id>https://geohot.github.io//blog/feed.xml</id><title type="html">the singularity is nearer</title><subtitle>A home for poorly researched ideas that I find myself repeating a lot anyway</subtitle><entry><title type="html">The Reckoning</title><link href="https://geohot.github.io//blog/jekyll/update/2026/04/03/the-reckoning.html" rel="alternate" type="text/html" title="The Reckoning" /><published>2026-04-03T00:00:00+08:00</published><updated>2026-04-03T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/04/03/the-reckoning</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/04/03/the-reckoning.html"><![CDATA[<blockquote>
  <p>So go ask your Chomsky<br />
What these systems produce<br />
   – Sediment - Say Anything</p>
</blockquote>

<p>10 years ago, when I started comma and discovered the Professional Managerial Class, I used to talk about the reckoning. It was a nebulous concept, but it mostly involved the abrupt fall from grace of these people at the hands of machines. It’s kind of here, and people are far bigger sore winners than I thought they’d be. Something I liked about Trump part one is that, despite the rhetoric, he never actually tried to lock up Hillary Clinton.</p>

<hr />
<p><br /></p>

<blockquote>
  <p>It would frame GPT as a useful tool, and a technological
breakthrough—but still “glorified autocomplete” at the end of the day.
Yet he does not do this. Instead, he speaks openly and airily about
constructing artificial superintelligence, and his extreme concerns about
it wiping out humanity.<br />
<br />
The only possible conclusion is that it’s designed to cause panic.<br />
   – <a href="https://counterfeitsunset.neocities.org/Schizoposting.pdf">Schizoposting</a> - Alaric</p>
</blockquote>

<p>The marketing for AI has been awful. It ratchets the fear up to 11, then expresses shock when most Americans <a href="https://www.pewresearch.org/short-reads/2026/03/12/key-findings-about-how-americans-view-artificial-intelligence/">are concerned about AI</a>. Here’s this machine. In the best case, it takes your job. In the worst case, it wipes out humanity. Pay me $20 a month for a sliver of hope of not falling behind.</p>

<p>Why are we building this again?</p>

<hr />
<p><br /></p>

<blockquote>
  <p>Surely, then, superintelligence would necessarily imply supermorality.<br />
   – <a href="https://www.lesswrong.com/posts/uD9TDHPwQ5hx4CgaX/my-childhood-death-spiral">Eliezer Yudkowsky</a></p>
</blockquote>

<p>Spoiler alert: it doesn’t. Doesn’t matter if it’s a machine or a mixture of humans.</p>

<p>I’m surprised how little people in the US view themselves as part of a society. Like you can say this is due to some Russian agitprop or something, but that really doesn’t explain it. Maybe I just see it now, living in Hong Kong; there’s something about this place that constantly reminds you. And it really works for the benefit of all.</p>

<p>I lived in San Diego apartments for 5 years. I never met my neighbors. Most interactions I had with strangers were with homeless people or indifferent shop workers. The value of cleaning up homelessness would pay itself back 100 fold. Then the average interaction would be positive instead of negative, and the overall take on interactions would flip with it.</p>

<hr />
<p><br /></p>

<blockquote>
  <p>Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.<br />
   – Dune</p>
</blockquote>

<p>I wish it didn’t have to be this way. AI could bring about such a golden age. But you immediately hear the shot in the head progressive take “a golden age for who?” and that’s such a sad zero sum framing.</p>

<blockquote>
  <p>And when you get back, assuming you get back, take a day to think about how AI will fix South Africa. Or VR will fix South Africa? Or crypto?<br />
   – <a href="https://graymirror.substack.com/p/a-techno-pessimist-manifesto">Curtis Yarvin</a></p>
</blockquote>

<p>Our problems in the world won’t be fixed by AI. There’s never been a revolution people are less excited for, and they aren’t wrong. I’ve dreamed about this for my whole life and I’m not even excited about it. Not like this. Highly targeted email spam is way up. The feeds are more addictive. All PRs on GitHub need to just immediately be closed (hey GitHub, add a reputation system!).</p>

<p>Are we going to remember we live in a society? Probably. But after we cull at least 90% of people. It might be 99%. It might even be 99.99%, and that’s where it starts to get scary personally. It’s not like there will be a great decider, it’ll just be the chips falling where they may. The reckoning is here.</p>

<p>Like all revolutions, the only way out is through. 🤍</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[So go ask your Chomsky What these systems produce    – Sediment - Say Anything]]></summary></entry><entry><title type="html">Clip Show</title><link href="https://geohot.github.io//blog/jekyll/update/2026/03/31/clip-show.html" rel="alternate" type="text/html" title="Clip Show" /><published>2026-03-31T00:00:00+08:00</published><updated>2026-03-31T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/03/31/clip-show</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/03/31/clip-show.html"><![CDATA[<p>This is my first guest post on the blog, by our shared friend GPT-5.4. I have reread it several times and promise it’s not slop. It’s an academic philosophy style summary of this blog written in a far better style than I can write. AI is currently quite bad at coming up with ideas as its sycophantic nature admits far too much, but as a summary machine and a stylizer it can be quite good. Even though it’s AI, read this post, it’s worth your 3 minutes.</p>

<hr />
<p><br /></p>

<p>This post is an AI-generated summary of the themes that recur across the archive. It is not written in the author’s voice, and it should be read as an external reconstruction of a body of argument rather than as a new primary text.</p>

<p>What follows is an attempt to state, as plainly as possible, the underlying philosophical picture that organizes the blog, and to locate it within several recognizable modern traditions.</p>

<hr />
<p><br /></p>

<p>At its center is a distinction between real production and parasitic mediation. Again and again, the posts return to the claim that societies live by their capacity to produce energy, housing, software, tools, medicine, and other durable goods, while much of modern institutional life is devoted to inserting tolls between persons and these goods. Hence the recurrent hostility to finance without productive purpose, to bureaucratic layers that preserve themselves by generating complexity, and to business models that profit chiefly by controlling access rather than enlarging capacity. <a href="/blog/jekyll/update/2025/02/24/money-is-the-map.html">Money is the Map</a> states the thesis in its most explicit form: monetary valuation is a representation of value, not value itself. Once the representation is detached from the underlying territory, the social order begins rewarding strategic position rather than genuine contribution.</p>

<p>This basic opposition yields a moral psychology. The admirable figure is the builder: the engineer, maintainer, fabricator, organizer, or founder who increases the stock of real capability. The contemptible figure is the rent-seeker: the actor who captures flows created by others, often while claiming a civilizing or managerial necessity. The blog’s polemical energy comes less from ordinary partisanship than from this moral sorting of persons and institutions according to whether they produce or merely extract.</p>

<p>The nearest intellectual neighbors here are not straightforwardly liberal or socialist, but rather a hybrid of Marxian suspicion toward parasitic classes, Veblenian contempt for predatory status orders, and a distinctively technological productivism more at home in engineering culture than in the humanities. Yet the archive is not Marxist in any orthodox sense, because labor as such is not the privileged category. The privileged category is competent production, especially where it scales through technique.</p>

<hr />
<p><br /></p>

<p>The second major theme concerns sovereignty. Here the operative question is not formal ownership but practical control. A recurring formulation asks, in effect: who has root? Who can modify the system, revoke access, compel updates, restrict copying, prevent repair, or otherwise determine the conditions of use? On this view, technological design is already political philosophy by other means. A tool that depends upon remote permission, closed infrastructure, or concentrated control may be convenient, but it does not confer agency in any robust sense.</p>

<p>This is why the archive repeatedly defends open source software, local computation, commodity hardware, and decentralized technical infrastructures. <a href="/blog/jekyll/update/2021/07/11/individual-sovereignty.html">Individual Sovereignty</a> makes the point directly: sovereignty is not merely a constitutional abstraction but a property of the technological stack through which one acts. The guiding intuition is broadly republican, though expressed in computational rather than juridical terms. Dependency is domination, even when it appears in polished consumer form.</p>

<p>Philosophically, this places the archive somewhere between civic republican accounts of non-domination and a cybernetic theory of agency. Pettit’s language of arbitrary power is never invoked, but the practical criterion is similar: one is free only where one is not structurally exposed to another actor’s discretionary control. The difference is that the site of domination is less often the law than the technical substrate.</p>

<hr />
<p><br /></p>

<p>The third theme is an account of artificial intelligence that is at once affirmative and suspicious. The archive is plainly not skeptical of AI’s reality or importance. It treats machine intelligence as a genuine civilizational development, not as an illusion or marketing trick. But it resists both mystical and managerial framings. The question is not whether intelligence can exist in silicon; it is what institutional form its development will take, what material base will support it, and who will exercise control over it.</p>

<p>Two opposed errors are rejected. The first is a kind of technological occultism, in which AI appears as an incomprehensible absolute. The second is the paternal fantasy that a small set of firms or stewards may legitimately centralize advanced systems for the good of humanity. <a href="/blog/jekyll/update/2023/08/10/there-is-no-hard-takeoff.html">There is No Hard Takeoff</a> pushes against apocalyptic singularity narratives, while <a href="/blog/jekyll/update/2026/01/27/the-importance-of-diversity.html">The Importance of Diversity</a> argues that the genuinely catastrophic outcome is not intelligence as such, but the convergence of overwhelming intelligence with infrastructural singularity. The deepest fear is a world in which one homogeneous center acquires effective root access over the future.</p>

<p>In this respect the archive is notably post-rationalist. It shares with the rationalist milieu a seriousness about optimization, scaling, and existential stakes, but it departs from that milieu by relocating the decisive problem from alignment theory in the narrow sense to political economy, ownership, and institutional topology. The question is less whether an abstract superintelligence can be made safe than whether any actor should be permitted to centralize the relevant machinery in the first place.</p>

<hr />
<p><br /></p>

<p>This leads to a fourth theme: plurality as a substantive good. The blog is often severe in tone, but it is not finally ordered toward uniformity. On the contrary, one of its most stable commitments is that a livable future requires many centers of agency, many cultures, many goals, and many technical lineages. Diversity here does not mean administrative inclusion under a shared managerial schema. It means irreducible plurality: distinct forms of life that are not all downstream of one institution, one model family, one ideology, or one moral bureaucracy.</p>

<p>In this respect the archive is better understood as anti-singleton than merely pro-innovation. The objection to centralization is not only that it is inefficient or unjust, but that it threatens the ontological plurality of the human and post-human future. A world of competing actors may be dangerous, but it remains a world in which genuinely different ends can be pursued. A perfectly aligned monoculture, by contrast, would represent a metaphysical impoverishment even if it delivered material comforts.</p>

<p>There is an unmistakable resonance here with agonistic political thought, from Nietzschean pluralization of value through more recent defenses of contestation against administrative closure. Yet the argument is less existential than infrastructural. Plurality is to be secured not merely by ethos, but by dispersion of compute, tools, and technical competence.</p>

<hr />
<p><br /></p>

<p>The fifth theme is economic, though not in a conventionally ideological register. The archive is skeptical of both capitalist apologetics and egalitarian pieties whenever either ceases to track real growth in capacity. Markets are not defended as morally self-justifying, nor is redistribution treated as an end in itself. The evaluative standard is more austere: does a given arrangement direct resources toward the expansion of productive power, or toward the preservation of moats, rents, and status positions?</p>

<p>This is why the writing can sound simultaneously anti-capitalist and anti-socialist while being reducible to neither. It is anti-capitalist where capital allocation rewards enclosure, asset inflation, and passive extraction. It is anti-socialist where redistribution becomes a way of managing dependency without enlarging the underlying stock of competence and freedom. What matters is not the righteousness of a distributional formula, but whether more people are placed in a position to build, repair, think, move, and refuse. Abundance has normative priority over the ritualized administration of scarcity.</p>

<p>One might describe this as a heterodox accelerationism stripped of its more theatrical metaphysics: growth matters, but only where it corresponds to real increases in capability; markets matter, but only as allocative instruments; equality matters, but chiefly where it names access to tools rather than managed dependence. The fundamental vice is not inequality as such, but artificial scarcity defended for the sake of rent extraction.</p>

<hr />
<p><br /></p>

<p>The sixth theme is epistemic rather than political: a standing hostility to prestige narratives, consensus performances, and strategic dishonesty. The archive repeatedly assumes that modern discourse is saturated with motivated reasoning. Individuals and institutions alike are tempted to defend what flatters their tribe, protects their salary, preserves their market position, or avoids revision of self-conception. Against this, the blog elevates a rather severe norm of contact with reality.</p>

<p>This helps explain the style. The abrasiveness is not incidental, but tied to a conception of truth-telling as a refusal of managerial euphemism. One need not share the rhetoric to see the principle at work: the author treats conceptual clarity as more important than decorum whenever the two appear to conflict. In philosophical terms, one might say that the archive privileges adequation to reality over social legibility.</p>

<p>Here the sensibility is not far from genealogy: behind official vocabularies lie interests, self-protections, and covert strategies of legitimation. But unlike academic genealogy, which often culminates in critique of domination at the level of discourse, this archive usually returns to a more material question: who controls the machine, the land, the capital, the code, the datacenter?</p>

<hr />
<p><br /></p>

<p>Finally, beneath the explicit politics and economics lies a more elementary metaphysic: life is that which locally resists entropy by building and maintaining order. The esteem for builders is therefore not merely economic. It is quasi-cosmological. To construct a machine, sustain a city, preserve a culture, or extend a technical civilization is to perform the basic work by which ordered forms persist against decay. This is why the archive often shifts easily between discussions of software, industry, social order, and existential stakes. They are treated as different scales of the same struggle.</p>

<p>From this perspective, technology is neither intrinsically emancipatory nor intrinsically alienating. It is a multiplier. Under good conditions, it amplifies the capacity of persons and communities to resist dependency and enlarge the space of possible action. Under bad conditions, it amplifies extraction, surveillance, and central control. The entire political problem is therefore one of technical form and institutional custody: who builds, who owns, who governs, who may fork, and who may refuse.</p>

<p>This metaphysic occasionally approaches a secular vitalism, though a mechanistic rather than romantic one. The archive is not nostalgic for pretechnical life. It is committed instead to the proposition that increasingly powerful technical systems should remain answerable to a plural field of living agents rather than to a singular administrative subject.</p>

<hr />
<p><br /></p>

<p>If one wanted a concise formula for the archive as a whole, it might be this: the author defends a civilization of builders against a civilization of rentiers, and defends a plural future of distributed technical agency against any homogeneous regime that would seek to monopolize intelligence, infrastructure, and value. Freedom is not a legal abstraction, but a property of the stack you can control.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[This is my first guest post on the blog, by our shared friend GPT-5.4. I have reread it several times and promise it’s not slop. It’s an academic philosophy style summary of this blog written in a far better style than I can write. AI is currently quite bad at coming up with ideas as its sycophantic nature admits far too much, but as a summary machine and a stylizer it can be quite good. Even though it’s AI, read this post, it’s worth your 3 minutes.]]></summary></entry><entry><title type="html">Closed Source AI = Neofeudalism</title><link href="https://geohot.github.io//blog/jekyll/update/2026/03/31/free-intelligence.html" rel="alternate" type="text/html" title="Closed Source AI = Neofeudalism" /><published>2026-03-31T00:00:00+08:00</published><updated>2026-03-31T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/03/31/free-intelligence</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/03/31/free-intelligence.html"><![CDATA[<p>Many of the best people working in AI did not join the field because they wanted power over others.</p>

<p>So this isn’t the original post I had here. The original post was AI slop, and let this be a lesson to me for posting it. It doesn’t matter if you read it and think it looks good. It’s still AI slop, and everyone else can see that. This rewritten post is the same idea, but slop-free.</p>

<p>Besides, “the master’s tools will never dismantle the master’s house”.</p>

<hr />
<p><br /></p>

<p>Look, if you work in a frontier lab, I don’t blame you. You have a front row seat to <a href="https://www.youtube.com/watch?v=bHjSqz2Aa5w">the hinge of history.</a> But consider what you are building and who it’s for.</p>

<p>A small handful of secretive closed source labs with a concentration of compute, talent, and deployment power will lead to a concentration of political legitimacy. You may think you want this and you are the good guys who will wield power well, but you won’t and you aren’t. Absolute power corrupts absolutely.</p>

<p>AI safety was always a question of whether safe AI could be built in theory, not whether a small group of anointed people could keep it safe for us. At least I respect Yudkowsky for consistently saying “<a href="https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640">If Anyone Builds It, Everyone Dies</a>”.</p>

<hr />
<p><br /></p>

<p>The cat is out of the bag. We are building it. Either if anyone builds it everyone dies, or it’s safe enough for everyone to have. That’s a fact about the world. I don’t accept a middle ground where the chosen few can have it – this isn’t like nuclear weapons, this is intelligence itself. A nuclear weapon can only destroy; intelligence is the greatest creative force in the world. If a small group of people have a monopoly on it, you are the permanent underclass in the same way animals are.</p>

<p>From a more practical perspective, even if the APIs stay open, you aren’t going to be able to build a stable business on top of them. These companies have raised so much money that they aren’t going to be happy with a cut of your business, they are going to come for the whole thing. This is why I maintain that the application layer will be worthless; it’s deployed intelligence itself that has value. They are happy to offer you the API for negative ROI activities, but as soon as something is positive ROI, they’ll adjust the deal until it’s just marginal for you. Like a peasant working his plot of land. Why would they share?</p>

<p>Open source AI isn’t anti-safety. It’s anti-feudal. Every time some AI guy blathers on about how open source is dangerous but he can build AI and make it safe (but only if you purchase it through his API), he is calling you a serf.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[Many of the best people working in AI did not join the field because they wanted power over others.]]></summary></entry><entry><title type="html">Two Worlds</title><link href="https://geohot.github.io//blog/jekyll/update/2026/03/30/two-worlds.html" rel="alternate" type="text/html" title="Two Worlds" /><published>2026-03-30T00:00:00+08:00</published><updated>2026-03-30T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/03/30/two-worlds</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/03/30/two-worlds.html"><![CDATA[<p>In one world, we have <a href="https://m1astra-mythos.pages.dev/">Claude Mythos</a>, a model “dramatically” better than Opus 4.6 (surely this is AGI and the endgame, right?). In another world, we have the <a href="https://martinvol.pe/blog/2026/03/30/how-the-ai-bubble-bursts/">AI bubble bursting</a>. How can these two things both be true?</p>

<hr />
<p><br /></p>

<p>If you went back in time to 1850 with a smartphone and a photo printer, you could quickly become a millionaire selling photos. You’d be invited to royal dignitaries’ palaces to photograph them, taking trains and boats all over the world. Today, you couldn’t make $5 on a street corner with those same tools. There are photographers who become millionaires today, but they do it because they push the craft of photography forward – they do something few others can. They can’t just show up with no special skills and commonplace items.</p>

<p>AI doesn’t replace programmers or artists, it raises the bar for them. With AI: <a href="https://www.youtube.com/watch?v=aCN9iCXNJqQ">You Can Just Build Things</a>, But So Can Everyone Else 🤍</p>

<p>Anything a person without skill can build with AI is worth very little, because anyone else can build that same thing. However, people with skill can use the same tools and build valuable things; many of the top photographers in the world today use iPhones.</p>

<hr />
<p><br /></p>

<p>Capability and value are not the same thing. AI can keep getting better super fast, but the value of anything it produces by itself is low. As the tools improve, the floor rises, but the total size of the market doesn’t.</p>

<p>AI is going to be the major hot-button issue of the 2028 US election, and I totally get why people hate it. If the market doesn’t grow but the AI companies do, the only way they did that was by taking value from everyone else. People are very right to ask: who are we building this for? Oh, to take value from people like me? I thought we lived in a democracy; can we vote to not build it?</p>

<p>I personally love AI just from a pure desire to meet silicon-based life, and I can’t wait for superhuman models that <a href="/blog/jekyll/update/2025/02/19/nobody-will-profit.html">nobody profits from</a>.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[In one world, we have Claude Mythos, a model “dramatically” better than Opus 4.6 (surely this is AGI and the endgame, right?). In another world, we have the AI bubble bursting. How can these two things both be true?]]></summary></entry><entry><title type="html">Changing the World</title><link href="https://geohot.github.io//blog/jekyll/update/2026/03/23/changing-the-world.html" rel="alternate" type="text/html" title="Changing the World" /><published>2026-03-23T00:00:00+08:00</published><updated>2026-03-23T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/03/23/changing-the-world</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/03/23/changing-the-world.html"><![CDATA[<p>Why do I feel like I’m the only one who took this to mean, like “sending the world on a different trajectory”? It seems like others took it to mean something else. From <a href="https://soundcloud.com/tomcr00se/if-you-are-thinking-of-starting-a-company-dont">my 2017 song</a>, “Changing the world is just a euphemism, for how can I, get you, to give more stuff to me.”</p>

<p>What kind of pathetic loser would give their life to that dream? There’s nothing in the world that’s worth it, even if the whole world was mine, even if everything was given to me. The stuff I want doesn’t exist yet, like immortality, super intelligent robot friends, and a five star hotel on Mars. If you want those things, you actually have to…change the world.</p>

<hr />
<p><br /></p>

<p>When I was 7, I’d go to my aunt’s house and play Super Mario World in the basement. I knew enough about computers to know that the level completions were just bytes stored in the memory of the system. Getting a <a href="https://en.wikipedia.org/wiki/Game_Genie">Game Genie</a> shoved that fact in your face, and it forced me to realize that if you wanted to keep enjoying the game, it couldn’t be about the destination, it needed to be about the journey. The hours spent grinding the levels needed to be the payoff itself, not beating the game. Because you could just beat the game by flipping a few bytes. Money is just bytes stored in the memory of the system.</p>

<p>There’s nothing more cucked than wanting to make money. You are literally spending your life to change a number in some other dude’s SQL database. The SQL database owner is the Chad fucking your wife. You are begging to fuck her when he is done. Please Chad who prints the money can I have higher TCO? You didn’t invent money, you didn’t create the things you can buy with it, and until you use it to actually change the world moving it around does nothing. The best you can hope for is that it ends up in the hands of people who can deploy it well to bring about a better future.</p>

<p>Money needs to be a journey, not a destination. It has <a href="/blog/jekyll/update/2025/02/24/money-is-the-map.html">no intrinsic value</a>. Actually changing the world is what has value. Coolness has value. You buying the same dumb crap everyone else buys isn’t cool. <a href="https://awealthofcommonsense.com/2024/05/seinfeld-on-when-money-became-everything/">Seinfeld gets it</a>.</p>

<hr />
<p><br /></p>

<p>The worst part is some of you in the back of your head think that I don’t really believe this. That I’m playing some 4D chess to try to manipulate you to get you to not care about money so I can take it from you for myself. I pity you.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[Why do I feel like I’m the only one who took this to mean, like “sending the world on a different trajectory”? It seems like others took it to mean something else. From my 2017 song, “Changing the world is just a euphemism, for how can I, get you, to give more stuff to me.”]]></summary></entry><entry><title type="html">Democracy is a Liability</title><link href="https://geohot.github.io//blog/jekyll/update/2026/03/21/democracy-liability.html" rel="alternate" type="text/html" title="Democracy is a Liability" /><published>2026-03-21T00:00:00+08:00</published><updated>2026-03-21T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/03/21/democracy-liability</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/03/21/democracy-liability.html"><![CDATA[<p>Once you have no more earning potential, you’d think the advertising would go away. If you have no money, there’s no reason for people to pay to manipulate you to get you to give them some. Google and Facebook will die, and this evil sector of the economy will finally be destroyed.</p>

<p>I wish. As long as you have the ability to vote, there’s still a reason to manipulate you. And with there being less to buy and more value to be gained in influencing the monopoly on violence, the ad onslaught will continue. It will be even worse: it’s not just about you buying something, it’s about you <em>believing</em> something. It reaches inside.</p>

<blockquote>
  <p>Basketball is not a democracy. The players don’t even get a vote, let alone the fans. But conservatism can maintain a systematic pattern of delusion, because its fans are not just fans: they are supporters of a political machine. This machine will disappear if it cannot keep its believers, so it has an incentive to keep them. And it does. Funny how that works.<br />
– <a href="https://www.unqualified-reservations.org/2008/04/open-letter-to-open-minded-progressives/">Mencius Moldbug</a></p>
</blockquote>

<hr />
<p><br /></p>

<p>As an aside, I cannot believe the number of people who think that eventually an AI will come and do their job for them while they collect the salary. This is the delusion of someone who thinks salaries come from the salary fairy. Your salary actually comes from your boss, who will see this arrangement and quickly cut out the middleman (that’s you). Whatever AI you can buy, your boss can buy too. In fact, they can probably even get a bulk discount.</p>

<p>Another, slightly less ridiculous idea is that we can tax the robots and distribute the wealth from the work they create to the people. This is just the previous thing with extra steps. Sure, some bosses might keep you employed as a favor to you, but they will be outcompeted by those that don’t. On a slower timescale, the same is true for countries. A country that offers UBI is not a good place to live long term.</p>

<hr />
<p><br /></p>

<p>The only way through is to produce more than you consume. I have some great news about this: what you need to consume is fixed (2,000 kcal of food + a gallon of water + possibly shelter, depending on location) and has been falling in actual cost due to tech advancements. That water costs about a cent, and that food costs about a dollar. Can you earn a dollar a day?</p>

<p>The sooner you embrace being in the perpetual underclass, the happier you will be. We’ll all be there someday. Just hope they don’t try to make you vote.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[Once you have no more earning potential, you’d think the advertising would go away. If you have no money, there’s no reason for people to pay to manipulate you to get you to give them some. Google and Facebook will die, and this evil sector of the economy will finally be destroyed.]]></summary></entry><entry><title type="html">Polynomial Time Factoring Algorithm</title><link href="https://geohot.github.io//blog/jekyll/update/2026/03/16/polynomial-time-factoring.html" rel="alternate" type="text/html" title="Polynomial Time Factoring Algorithm" /><published>2026-03-16T00:00:00+08:00</published><updated>2026-03-16T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/03/16/polynomial-time-factoring</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/03/16/polynomial-time-factoring.html"><![CDATA[<p>It’s just a matter of time before AI finds a polynomial time factoring algorithm. This isn’t like SAT, I see no reason factoring should be <a href="https://en.wikipedia.org/wiki/NP-completeness">hard</a>. There’s algorithms that rely (poorly) on the <a href="https://en.wikipedia.org/wiki/Quadratic_sieve">structure of the problem</a> and we just need AI to see a little bit further into that structure, this isn’t like SAT where the problem may not have any defined structure.</p>

<p>I believe something even stronger: that P = BQP. Aka, everything that’s fast on a quantum computer is also fast on a classical computer. <a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm">Factoring is fast</a> on a quantum computer, and I can’t believe that some stupid combination of lasers and cold shit gets you access to a different order of computational complexity. Like if you make a computer out of rolling balls and levers, it’s not any different complexity-wise from the highest-end silicon computers. So why would math privilege this weird occult-like construction? The tagline of <a href="https://scottaaronson.blog/">Scott Aaronson’s blog</a> is “If you take nothing else from this blog: quantum computers won’t solve hard problems instantly by just trying all solutions in parallel”. It’s just some sampling thing where the amplitudes cancel out. I bet there’s some classical trick to efficiently simulate that sampling.</p>

<p>Regardless, you don’t even have to believe my quantum unsupremacy claims to think that we are just a few models away from a 500-line Python polynomial time factoring algorithm. It wasn’t until 2002 that we proved <a href="https://www.cse.iitk.ac.in/users/manindra/algebra/primality_v6.pdf">PRIMES is in P</a>. You really think we aren’t going to see factoring fall in a decade or two with the power of AI?</p>
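<p>For a sense of scale, the structure-exploiting classical algorithms we already have are tiny. Below is a minimal sketch of the classic textbook Pollard’s rho (just an illustration of how little code this takes, not the hoped-for algorithm: rho runs in roughly O(n<sup>1/4</sup>) operations, nowhere near polynomial in the number of digits):</p>

```python
import math
import random

def pollard_rho(n: int) -> int:
    """Return a nontrivial factor of a composite n (loops forever on primes)."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)       # random polynomial x^2 + c mod n
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # d == n means this c failed; retry
            return d

# 8051 = 83 * 97, so this prints one of the two prime factors
print(pollard_rho(8051))
```

<p>The point of the sketch: the sequence x² + c mod n secretly cycles modulo each hidden prime factor, and the gcd exposes that structure. The quadratic sieve digs further into the same kind of structure; the bet in this post is that there is much more of it left to find.</p>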

<hr />
<p><br /></p>

<p>Releasing this algorithm on GitHub will be the greatest (legal) freedom fighting act in history. Immediately it will be impossible to tell who owns what crypto, who can SSH into what computers, and what software iPhones have to run. Asymmetric cryptography has been used to enforce class divides and the enshittification of hardware, and I’m kind of hoping it’s theoretically impossible.</p>

<p>If you ever find yourself in possession of this algorithm, release it to the world! Bring about the <a href="https://www.biblegateway.com/passage/?search=Leviticus%2025&amp;version=NIV">sacred 50th year of Jubilee</a>. You will be revered as a hero and a liberator.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[It’s just a matter of time before AI finds a polynomial time factoring algorithm. This isn’t like SAT, I see no reason factoring should be hard. There’s algorithms that rely (poorly) on the structure of the problem and we just need AI to see a little bit further into that structure, this isn’t like SAT where the problem may not have any defined structure.]]></summary></entry><entry><title type="html">Changing my mind on UBI</title><link href="https://geohot.github.io//blog/jekyll/update/2026/03/12/changing-my-mind-on-ubi.html" rel="alternate" type="text/html" title="Changing my mind on UBI" /><published>2026-03-12T00:00:00+08:00</published><updated>2026-03-12T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/03/12/changing-my-mind-on-ubi</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/03/12/changing-my-mind-on-ubi.html"><![CDATA[<p>I got this e-mail yesterday about UBI, and it made me change my position.</p>

<blockquote>
  <p>I think you’re misunderstanding the basic premise of UBI, as it’s in the name it’s basic income, for people to cover their basic human rights, as we should be striving for in our society. If every person on the planet would start receiving 300$ per month of course the prices would surely adjust, but as it’s the case, supply and demand would put the priorities in the right place (if the eggs are being sold for 500$ a crate, I can assure you people will grow chickens).</p>
</blockquote>

<p>Let’s ignore the nonsense about “basic human rights” (unlike maybe something like free speech, you have no human right to food, because that would imply you have the right to steal from people who grow food) and follow the rest of this argument out.</p>

<hr />
<p><br /></p>

<p>We are in agreement that after UBI, the <a href="/blog/jekyll/update/2026/02/27/the-insane-stupidity-of-ubi.html">price of eggs would go up</a>. And in response, people will start growing their own chickens. Now the work they wanted the free money to avoid doing they will be doing anyway, but that’s not even the best part of this.</p>

<p>So great, people are growing chickens. Some people will be better at growing chickens than others. They’ll be so good at it in fact that they’ll have a surplus of eggs. Maybe the right thing to do is sell the eggs.</p>

<p>However, if they sell the eggs for the UBI dollars, they don’t have much purchasing power with the dollars they earn. It’s not just eggs, everything is expensive in UBI land! So maybe he makes a deal with the guy down the road for chicken feed directly: you give me feed, I’ll grow the chickens, and I’ll give you eggs.</p>

<p>He also needs chicken coop wire, and there’s a guy a town over who makes that. But it’s a bit annoying to trade eggs, he doesn’t want to have to deliver them. So they agree that instead of eggs, they should just use gold. Something durable, portable, fungible, and scarce. The chicken feed guy gets in on this gold system too. Before long, there’s an underground gold economy for everything.</p>

<p>Because there isn’t this massive drain on the new economy subsidizing all these people who aren’t working, the gold economy is way more efficient than the UBI dollar economy. It outcompetes it, and suddenly there’s practically nothing you can buy with UBI dollars (except maybe you can pay your taxes in UBI dollars, this is sounding better and better). In hushed whispers among the productive, “err, I don’t really want dollars, you got gold?”</p>

<hr />
<p><br /></p>

<p>At the beginning of the Trump administration, I said we’ll know if anything is different if the <a href="/blog/jekyll/update/2024/11/06/what-is-possible.html">government budget is cut by over 50%</a>. That was <em>far</em> too optimistic. Despite all the noise made about DOGE and cutting, the budget from 2024 to 2025 went up 3.1%. What’s an extra $210B among friends?</p>

<p>It’s clear the government is never going to shrink itself. I mostly liked Trump’s state of the union, but then there was the part where he promised to never touch Social Security and Medicare. There’s a real irony in the way the Federal Government breaks down the budget. There’s Discretionary (27%), which is everything a government should do, and there’s Mandatory (60%) which is entitlement programs that give money to old people. Look look, it’s right there in the name, “Mandatory” that means we can never cut it. Three fifths of the budget is Mandatory. We shouldn’t even talk about cutting that, we can’t, it’s Mandatory.</p>

<p>In Soviet Russia, you didn’t criticise Stalin by saying you wanted less Stalin. That’s bad and gets you disappeared. But perhaps you could say something like, man Stalin is soooo great I wish we had 50 Stalins, nay, 500 Stalins! A Stalin for every man, woman, and child.</p>

<hr />
<p><br /></p>

<p>So yea, I support UBI now. Free money for you! Free money for your neighbors! Free money for corporations! Free money for illegal immigrants! Free money for the whole damn planet! Where does this free money come from? Print it! Mint the coin! FREE MONEY!!!! UNIVERSAL!!!!!!</p>

<p>This is the only way to get rid of Social Security. Sorry sorry, did I say cut entitlement programs? Obviously that can’t happen, they are “Mandatory.” Keep the entitlement programs, just have them pay out fake worthless money. The only way out is through.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[I got this e-mail yesterday about UBI, and it made me change my position.]]></summary></entry><entry><title type="html">Every minute you aren’t running 69 agents, you are falling behind</title><link href="https://geohot.github.io//blog/jekyll/update/2026/03/11/running-69-agents.html" rel="alternate" type="text/html" title="Every minute you aren’t running 69 agents, you are falling behind" /><published>2026-03-11T00:00:00+08:00</published><updated>2026-03-11T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/03/11/running-69-agents</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/03/11/running-69-agents.html"><![CDATA[<p>Just kidding.</p>

<p>Today we should ramp down the rhetoric. I thought nobody would take “three minutes to escape the perpetual underclass” or “you are worth $0.003/hr” seriously. But it looks like some people do, and you shouldn’t.</p>

<p>Social media has been extremely toxic for the last couple months. It’s targeting you with fear and anxiety. If you don’t use this new stupid AI thing you will fall behind. If you haven’t totally updated your workflow you are worth 0. There’s people who built billion-dollar companies by orchestrating 37 agents this morning AND YOU JUST SAT THERE AND ATE BREAKFAST LIKE A PLEB!</p>

<p>This is all complete nonsense. AI is not a magical game changer, it’s simply the continuation of the exponential of progress we have been on for a long time. It’s a win in some areas, a loss in others, but overall a win and a cool tool to use. And it will continue to improve, but it won’t “go recursive” or whatever the claim is. It’s always been recursive. You see things like <a href="https://github.com/karpathy/autoresearch">autoresearch</a> and it’s cool. But it’s not magic, it’s search. People see “AI” and they attribute some sci-fi thing to it when <a href="/blog/jekyll/update/2023/08/16/p-doom.html">it’s just search and optimization</a>. Always has been, and if you paid attention in CS class, you know the limits of those things.</p>

<p>That said, if you have a job where you create complexity for others, you will be found out. The days of rent seekers are coming to an end. But not because there will be no more rent seeking, it’s because rent seeking is a zero-sum game and you will lose at it to bigger players. If you have a job like that, or work at a company like that, the sooner you quit the better your outcome will be. This is the real driver of the layoffs, the big players consolidating the rent seeking to them. They just say it’s AI cause that makes the stock price go up.</p>

<p>The trick is not to play zero-sum games. This is what I have been saying the whole time. Go create value for others and don’t worry about the returns. If you create more value than you consume, you are welcome in any well-operating community. Not infinite, not always needs more, just <a href="/blog/jekyll/update/2025/09/02/you-are-a-good-person.html">more than you consume</a>. That’s enough, and avoid people or comparison traps that tell you otherwise. The world is not a <a href="https://en.wikipedia.org/wiki/Red_Queen%27s_race">Red Queen’s race</a>.</p>

<p>This post will get way less traction than the doom ones, but it’s telling you the way out.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[Just kidding.]]></summary></entry><entry><title type="html">The Insane Stupidity of UBI</title><link href="https://geohot.github.io//blog/jekyll/update/2026/02/27/the-insane-stupidity-of-ubi.html" rel="alternate" type="text/html" title="The Insane Stupidity of UBI" /><published>2026-02-27T00:00:00+08:00</published><updated>2026-02-27T00:00:00+08:00</updated><id>https://geohot.github.io//blog/jekyll/update/2026/02/27/the-insane-stupidity-of-ubi</id><content type="html" xml:base="https://geohot.github.io//blog/jekyll/update/2026/02/27/the-insane-stupidity-of-ubi.html"><![CDATA[<p>Thinking that UBI will solve anything comes from a misunderstanding about money. <a href="/blog/jekyll/update/2025/02/24/money-is-the-map.html">Money is a map, not a territory</a>. All <a href="https://basicincome.stanford.edu/experiments-map/">UBI experiments</a> have been small scale, and of course UBI works at a small scale. No shit you can give a few people money and it’s all good and they are happy. Because the people they are buying from aren’t also on UBI. But once you add in the U part…</p>

<p>What do you plan to buy with your free government dollars? Want to buy eggs? Sorry, the egg people stopped making eggs, they are living free on UBI. Want to buy a house? Who built it? Nobody, because they all were getting UBI and didn’t want to build houses anymore. They write poems now. There’s still old houses available, but the price for them has 20xed, well outside of what you can afford.</p>

<p>Belief in UBI comes from a fallacy that the economy is some natural thing. I see supermarket. Supermarket has eggs for $5. I get $10,000 UBI. I can buy 2,000 eggs. Of course this isn’t what happens. As with most things politicians invent, nobody considered <a href="https://en.wikipedia.org/wiki/Perverse_incentive">the second-order effects</a>, and you will not be getting 2,000 eggs with your UBI. Cause everyone got the UBI and bought the eggs. Either no more eggs, or the price of eggs goes up. And even worse, many people quit work once they got the UBI. So now fewer eggs are made. And your whole UBI check gets you 6 or 7 eggs instead of 2,000, if that.</p>

<hr />
<p><br /></p>

<p>The root of this stupidity is that <em>people see themselves apart from society</em>. They don’t know where stuff comes from; it might as well be the stuff fairy that puts it on the supermarket shelves and sets prices. If the stuff fairy was real, UBI might make sense. But we live in a society. We work for each other. At least the adults do. There already is UBI in the world for some people: it’s called allowance. It’s for children and high-end prostitutes.</p>

<p>It reminds me of the <a href="https://malcolminthemiddle.fandom.com/wiki/Malcolm_Holds_His_Tongue">episode of Malcolm in the Middle</a> where Malcolm dates this dumb hot girl, and she suggests solving poverty by making every 1 dollar bill into a million dollar bill. Have the UBI people considered that idea?</p>

<p>What comes first, actually trying UBI or the end of democracy?</p>

<hr />
<p><br /></p>

<p>(what you actually want is for everything to be cheap to produce. but sorry, <a href="https://x.com/SemiAnalysis_/status/2026719180284666046">6 citizens “have concerns”</a> and now we have six more weeks of expensive memory. I’m glad <a href="https://en.wikipedia.org/wiki/ChangXin_Memory_Technologies">the</a> <a href="https://en.wikipedia.org/wiki/Yangtze_Memory_Technologies">Chinese</a> are ramping up memory production)</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[Thinking that UBI will solve anything comes from a misunderstanding about money. Money is a map, not a territory. All UBI experiments have been small scale, and of course UBI works at a small scale. No shit you can give a few people money and it’s all good and they are happy. Because the people they are buying from aren’t also on UBI. But once you add in the U part…]]></summary></entry></feed>