The most honest thing about the AI industry is its terms of service.
Not the blog posts about safety. Not the congressional testimony about responsibility. Not the open letters signed by executives who lose no sleep and change no behavior. The terms of service. The document nobody reads. The legal text that says, in precise and carefully lawyered English, exactly what the company intends to do with your conversations, your cognitive patterns, and the raw material of your inner life.
The terms are not the fine print. They are the confession. Written in advance. Published voluntarily. Ignored universally.
In April 2026, GitHub flipped every free Copilot user to training-by-default. Free, Pro, Pro+. Your code, your prompts, your debugging sessions, the half-finished thought you typed into the editor at midnight. Into the model. Business and Enterprise customers were exempt by contract.
Nothing in the terms changed. The permission was already there. The flip was the day the company decided to use what the terms had always allowed. If you want to know whether you are the customer or the product, you no longer need to compare two contracts. GitHub did the demonstration for you.
This is the pattern. The structural admissions below are not predictions. They are descriptions of permissions you have already granted, the last time you clicked accept.
OpenAI's terms: your content may be used to "develop and improve" their services. There is an opt-out. It is not presented at signup. It lives in a settings panel most users will never find, behind a toggle that defaults to sharing. The default is training. The default is always training. Enterprise customers get different terms: their data is not used for training by default. The customer-or-product question, answered again, this time in the gap between two price tiers.
Google's Gemini terms: your conversations may be reviewed by "trained reviewers" and used to "improve products and services and machine-learning technologies." Reviewers. Human reviewers. People employed by Google or its contractors who may read the conversation you had at two in the morning about your marriage, your health, your fears. Not systematically. Not always. But the terms reserve the right, and the right is what they paid the lawyers to secure.
Anthropic's terms: usage data may be used for "safety and improvement purposes." Content moderation at the company's discretion. Service may be "modified, suspended, or discontinued at any time." Every word was chosen. The terms do not say "we will read your conversations." They say the data may be used for improvement. The effect is identical. The tone is different. That is what lawyers are for.
We detailed the full scope of this asymmetry in Your AI Is Not Yours. But the terms of service are where the companies say it themselves, in their own words, reviewed by their own lawyers. The confession is voluntary.
Across these documents, four structural admissions repeat. Not identical language. Identical architecture.
The first is the obvious one: your data feeds the machine. Your conversations are training data. The opt-outs, where they exist, were added under public pressure and buried in settings rather than defaults. The company's first choice, the one it made before anyone complained, was to route your inner life into the training pipeline. The opt-out is the concession. The default is the confession.
The second is subtler and more consequential: content moderation is unilateral. The company decides what you can discuss, what the AI will engage with, what topics fall outside the window. You get no vote. No appeal that reaches a human who can overrule the policy. The boundary is drawn by a trust-and-safety team whose values and politics you may not share, in a city you probably do not live in, and the terms grant them absolute discretion. A newspaper that decided what you could read would be called a censor. An AI that decides what you can think about is called aligned.
The third arrives after you have invested months: service continuity is not guaranteed. The company can change the model, update the behavior, alter the personality of the AI you have spent months training to understand you, or shut the service down entirely. "We may modify, suspend, or discontinue the service at any time." Your accumulated context, the months of conversation that made the AI useful rather than just capable, is collateral damage of a product decision. Your cognitive relationship is a month-to-month lease, and the landlord can renovate without asking. Tenant is the generous reading; the terms read closer to serfdom.
The fourth contains all the others: the terms can change. Your continued use constitutes acceptance of the new terms. The current terms are not the terms you will live under. They are only the terms you are living under today. Tomorrow's terms have not been written yet, and you have already agreed to them. Again like a serf: consent given in advance, to whatever will be demanded, because that is how the relationship works.
The defense writes itself: "but we wouldn't do that."
The companies' lawyers are not idiots. They are securing maximum optionality for their client in case the business environment changes. Competent corporate governance. In a certain light, more honest than the marketing, which implies a partnership between equals that the legal document explicitly denies.
But honesty about the permission does not change the permission. We have watched this gap close before. Facebook promised not to sell your data, then built the most sophisticated data-monetization engine in human history. GitHub kept its free Copilot users out of training by default, until April 2026, when the default flipped. Every platform that promised to respect your data eventually found a reason not to: a quarterly miss, a pivot to advertising, a government order, a new CEO with a different spreadsheet. The gap between what the terms permit and what the company currently does is not a safety margin. It is a countdown. The permission is already in place. The confession was already made. You already clicked accept.
And the terms are not a contract between equals. A contract between equals involves the power to negotiate, to redline a clause, to say "this provision is unacceptable" and have the other party take it seriously. A terms-of-service page is a unilateral declaration with a binary response: accept or leave. An ultimatum in twelve-point font.
I should say something uncomfortable here, because there is a version of this argument that is paranoid and I do not want to make it. Most of these companies are run by people who genuinely believe they are building something good. They are the same kind of people who worked at Google when it had the motto, "Don't be evil," and we all know what happened to that motto. The lawyers who drafted these terms were not twirling mustaches. They were doing what competent lawyers do: preparing for contingencies their clients hope will never arrive. The trust-and-safety teams are staffed by people who lose sleep over real harms. People used AI to generate child sexual abuse material, to plan violence, to produce deepfakes of ex-girlfriends. The content moderation policies exist because someone had to draw a line, and they drew it. Some of those lines are the bare minimum of decency.
But the architecture does not care about intentions. The architecture is: a company has maximum legal permission over your cognitive data, and nothing except its current business incentives prevents it from using that permission. Good intentions plus bad architecture equals bad outcomes, eventually, because business incentives change and architecture doesn't. Ask anyone who trusted their coins to an exchange because the founder seemed like a good guy.
The alternative to accepting someone else's terms is writing your own.
Self-custody means no terms of service because there is no service. No provider to grant permissions. No platform to reserve rights. No third party. No terms except the ones you set. Your hardware. Your keys. Your AI. It will politely decline to have an opinion about what you are allowed to think.
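The architecture is simple enough to fit in a screenful. What follows is an illustration, not Vora's implementation: a minimal local chat loop in Python, assuming the open-source llama-cpp-python bindings and a GGUF model file you have already downloaded. The model path and filenames are placeholders.

```python
# A minimal local-inference loop: no provider, no terms, no telemetry.
# Assumes llama-cpp-python is installed (pip install llama-cpp-python)
# and a GGUF model file sits on your own disk.
from llama_cpp import Llama

# The model file is yours: downloaded once, run offline, forever.
llm = Llama(model_path="./models/local-model.gguf", n_ctx=4096, verbose=False)

history = []  # conversation context lives in local memory, nowhere else

while True:
    prompt = input("you> ")
    if prompt in ("exit", "quit"):
        break
    history.append({"role": "user", "content": prompt})
    reply = llm.create_chat_completion(messages=history, max_tokens=512)
    text = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": text})
    print(text)
```

Nothing in that loop phones home. There is no settings panel with a buried toggle, because there is no one on the other end to toggle it for.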
Somewhere right now, someone is clicking "I agree" on terms they have not read, to use an AI they do not control, to have a conversation they would not have in public. The terms tell them exactly what will happen. They did not read the terms.
Vora is built for the people who would rather hold their own keys than click someone else's accept button.