How to Build AI Ethics Without Big Tech's Permission
Gregory Cowles · March 10, 2026 · 8 min read

SingularityNET participants decide together which AI models can enter their network - a choice that would normally require approval from a Silicon Valley board. That shift matters more than it sounds.

When OpenAI's board fired Sam Altman in late 2023, the entire AI industry held its breath waiting for a handful of executives to sort things out [8]. The decision affected millions of users, thousands of developers, and the trajectory of artificial general intelligence research. None of them got a vote.

Decentralised AI platforms are trying to fix that power imbalance. They're not waiting for permission.

The Problem with Boardroom Ethics

[Image: The stark contrast between centralised boardroom control and decentralised community governance]

Corporate AI development concentrates decision-making in ways that should make us uncomfortable. A small group of investors and executives determines which models get built, what data they train on, and who benefits from the technology [2].

That creates predictable problems. Algorithmic bias gets baked into systems because the people building them share similar backgrounds and incentives. Data privacy becomes negotiable when quarterly earnings are at stake. Transparency suffers because proprietary models are competitive advantages.

I think the uncomfortable truth is that corporate boards aren't designed to make ethical decisions; they're designed to maximise shareholder value. Those goals occasionally align, but not reliably.

Three Steps Communities Are Actually Taking

[Image: The three-step framework for community-driven AI governance]

Governance Through Tokens, Not Hierarchy

SingularityNET merged into the ASI Alliance with the FET token as its governance mechanism.
Token holders vote on which AI agents can join the network, how resources get allocated, and what ethical guidelines apply [1] [2]. It's not perfect - whoever holds more tokens holds more influence - but it beats a closed boardroom. At least the power distribution is visible. You can see who controls what.

Fetch.AI takes a similar approach with autonomous economic agents that operate according to community-set rules rather than corporate policy [6]. The agents can't be reprogrammed by executive fiat.

Building Transparent Infrastructure

Blockchain's immutability creates accountability that traditional systems can't match. When governance decisions get recorded on-chain, you can't quietly revise them later [3].

CUDOS provides the distributed computing backbone for these systems. Instead of relying on centralised cloud providers - which can shut down services or change terms unilaterally - the infrastructure itself gets spread across participants [2].

The tradeoff is speed. Decentralised networks currently struggle with the scalability needed for autonomous agents at scale [6]. Layer 2 solutions such as Arbitrum and Optimism are being adapted to handle higher throughput, and the recently announced ASI Chain from the Artificial Superintelligence Alliance brings us closer still.

Distributing Expertise, Not Just Power

Decentralisation risks replacing corporate groupthink with mob rule if you're not careful. The better platforms involve researchers, ethicists, and domain experts in governance structures [7]. SingularityNET's decentralised model includes specialist working groups that analyse proposals before community votes.

It's slower than a CEO making a snap decision, but perhaps that's the point. Consequential choices probably shouldn't happen quickly.

The Bits Nobody's Talking About

We need to be honest about the gaps. Token-based governance can recreate wealth-based power imbalances - early investors and insiders often control large stakes [2].
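The two mechanisms described so far - token-weighted voting and tamper-evident records of the outcomes - can be sketched in a few lines. This is a hypothetical illustration, not FET's or any platform's actual contract code; every name, balance, and proposal here is made up.

```python
import hashlib
import json

class GovernanceLedger:
    """Append-only, hash-chained log of governance decisions.

    A toy stand-in for an on-chain record: each entry commits to the
    previous entry's hash, so quietly revising history breaks the chain.
    """
    def __init__(self):
        self.entries = []

    def append(self, decision: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps({"decision": decision, "prev": prev_hash},
                             sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"decision": decision, "prev": prev_hash,
                             "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash; any edit to a past entry is detected."""
        prev = "genesis"
        for e in self.entries:
            payload = json.dumps({"decision": e["decision"], "prev": prev},
                                 sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

def tally_token_vote(balances: dict, votes: dict) -> dict:
    """Token-weighted tally: each address's vote counts in proportion to
    its token balance - the 'more tokens, more influence' property."""
    totals = {}
    for addr, choice in votes.items():
        totals[choice] = totals.get(choice, 0) + balances.get(addr, 0)
    return totals

# Hypothetical proposal: admit a new AI agent to the network.
balances = {"alice": 500, "bob": 300, "carol": 200}
votes = {"alice": "approve", "bob": "reject", "carol": "approve"}
result = tally_token_vote(balances, votes)  # {'approve': 700, 'reject': 300}

ledger = GovernanceLedger()
ledger.append({"proposal": "admit-agent-42", "tally": result})
assert ledger.verify()

# Tampering with a recorded decision is detectable:
ledger.entries[0]["decision"]["tally"]["reject"] = 700
assert not ledger.verify()
```

The sketch also makes the critique concrete: alice alone outvotes bob despite being one participant among three, which is exactly the wealth-based imbalance the next section worries about.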
That's better than pure corporate control, but it's hardly egalitarian.

There's also the regulatory arbitrage question. Decentralised platforms might dodge existing AI regulations like GDPR or the EU AI Act by distributing responsibility so thoroughly that enforcement becomes nearly impossible [5]. That could be liberation or it could be a loophole, depending on your perspective.

And honestly, we don't have failure case studies yet. Every example in the current discourse assumes decentralised governance produces better ethical outcomes. Maybe it does. But we should probably wait for some disasters before declaring victory.

What This Actually Means

The shift from corporate to community governance isn't about replacing one power structure with another; it's about making power visible and contestable [2]. When decisions happen in boardrooms, you can't participate even if you're affected. When they happen on-chain, you can at least see them coming. That transparency alone changes the game.

"When no single entity holds the reins, the likelihood of unchecked power accumulation decreases" [2]. It's not a guarantee of better ethics, but it's a structural improvement that makes better ethics possible.

The infrastructure still needs work. The governance mechanisms need refinement. The economic sustainability remains uncertain without venture capital or corporate profit models funding development.

But perhaps the real insight is this: you don't need Big Tech's permission to build alternative systems. The technology exists. The governance models are being tested. The communities are forming. The question isn't whether decentralised AI ethics can work. It's whether we'll build them before corporate control becomes so entrenched that alternatives become impossible.

Sources

[1] What is Decentralized AI Model - GeeksforGeeks
[2] Decentralization vs. Corporate Control: Who Will Shape the Future of AI? - SingularityNET
[3] The Ethical Implications of Decentralized AI: A New Frontier
[5] A Revolutionary Framework for AI Governance: Bridging Complexity, Ethics, and Global Justice
[6] How Will Decentralized AI Affect Big Tech? | Built In
[7] Building a case for decentralised AI ethics
[8] Why We Need Decentralized AI