Politics is a volatile thing to love. You read and study to understand patterns, you do your best to understand why things move the way they move, and you still get it wrong.
I know this from experience. I have predicted a fair number of things that never came to pass. However, I have also predicted a lot of things that did happen. Gambling on “futurology” and getting it right feels very rewarding because the consequences for losing are often minimal. People forget the wrong predictions and remember the good ones.
This creates a kind of survivorship bias, where we give too much credit to political theories because they got some part of their predictions right. Joseph Lanchester (a political-theory YouTuber known as Kraut) explained this phenomenon in a video criticizing Samuel Huntington’s ideology of cultural determinism. The fact that Huntington predicted the War on Terror a decade before it happened should not erase the bad predictions he made alongside it. And getting one part of the future right does not mean the underlying theory behind it is true.
But there is one political “theorist” (if we can call him that) whose words and predictions appear to have aged like fine wine (or, as a friend would say: aged fine like wine and bitter like truth).
The work of this particular politician in the mid-2000s brought together people who started writing about the need for intellectual property reform, the dangers of a tech-based totalitarian surveillance state, and the rights of citizens in the digital world.
He essentially laid the groundwork for digital democracy and civil rights in the digital era. And while his ideas garnered some attention in the late 2000s, they have largely disappeared today and have never entered mainstream politics.
But I believe the topics he (and the people around him) discussed will be some of the defining political questions of the mid-21st century: questions we will still be discussing intensely between 2030 and 2050.
However, I fear that due to the rather underground nature of all this, it will take time for the public to “re-appropriate” his ideas into the mainstream.
Thus, in this series of blog posts entitled “Digital Democracy”, I will present my own view on the topics they initially discussed, from the most “obvious” current talking points to the most obscure. I don’t promise an unbiased view, nor a fully correct one. They are simply my ideas presented in text.
However, I do hope they help you expand your political vocabulary in a way that allows you to contribute locally, whether in your city or your nation.
In the end, the exact name of this political theorist and the political party will be revealed. But by then, who discussed this will matter far less than what was discussed.
1. Digital Civic Education
(Or as I like to call it: Computer Science’s Malleus Maleficarum)
Computer Science and its ramifications since the 1950s (the internet, artificial intelligence, global networking) are the Gutenberg Press of our era.
When Gutenberg built his press in 1440, he triggered a printing revolution. Humanity suddenly had a tool capable of multiplying knowledge beyond anything previously imaginable. Information could be reproduced, distributed, and shared at a scale that fundamentally reshaped society.
But not everything the press produced was good. If you know anything about history (rather than repeating the usual myths), you’d know that the medieval period had no large-scale witch hunts, and burning at the stake was rare. Mainstream Christian doctrine rejected the existence of witches for most of the Middle Ages.
So what changed?
In 1486, a very unhappy man named Heinrich Kramer (a German, of course) wrote one of the first true bestsellers: the Malleus Maleficarum, often translated as The Hammer of Witches. This book massively popularized the idea of witches and helped manufacture moral panics across Europe and, later, the Americas. Once printed and distributed widely, it gave justification to wave after wave of witch hunts. In short: one man’s stupid obsession went “viral” centuries before the word existed.
And a lot of people were burned for it.
Why bring this up?
Because the Gutenberg Press was the tool that enabled all of this. It accelerated science, philosophy, literacy, and political debate, but also accelerated superstition, panic, and violence. The tool was “neutral”, but the people using it and the consequences were not.
If Computer Science is today’s Gutenberg Press (but exponentially more powerful) then we now experience a new Malleus Maleficarum every single day. You know exactly what they look like: fake news, misinformation, disinformation, character assassination, harassment campaigns, revenge porn, doxxing, deepfakes, algorithmic distortion, and countless other forms of digital harm.
Our information infrastructure has made it trivial for any harmful idea, lie, or manipulation to spread at scale before anyone can respond. The social cost is enormous, but the political vocabulary we use to discuss these problems is still primitive.
This is where the so-called “missing political party” had something important to say. Already in the mid-2000s:
- They warned that digital networks could be weaponized to spread misinformation.
- They argued for media literacy and information literacy as fundamental civic skills.
- They advocated for transparent algorithms and moderation policies, not opaque corporate governance.
- They defended digital privacy as a prerequisite for individual autonomy.
- They supported decentralization and open protocols to limit the concentration of power over information.
- And they consistently rejected both state and corporate monopolies over communication infrastructure.
Computers being misused is not the problem itself but a symptom of a society that lacks the tools (educational, governmental, and so on) to deal with it.
They recognized the growing danger of disinformation and algorithmic polarization. Instead of relying on censorship or heavy-handed moderation, they promoted media literacy, transparency in content moderation, and exposure to diverse viewpoints. Their assumption was simple: a democratic society requires citizens who understand how digital information systems work.
Which brings us to the next point:
2. Algorithmic Accountability
(Or: don’t bow to the algorithm)
As more of our information environment is shaped by algorithms, basic democratic principles require that these systems be understandable and open to scrutiny. Recommendation engines decide what we see, what we don’t see, and how information is prioritized. These decisions are not neutral. They shape public opinion, political behavior, and even personal identity.
I won’t spend much time explaining algorithmic bias or the attention economy. Plenty of people have already done that far better than I could.
Since we are shaped by algorithms, transparency is essential, and companies should be held accountable for their algorithms: what information they use, what they prioritize, and how they adjust themselves over time. This does not mean revealing proprietary source code per se, but providing enough information so that external experts can evaluate their behavior and detect harmful patterns.
Auditability is the next requirement. AI systems should be open to independent audits that can assess bias, discrimination, and other unintended consequences. Without this, we are left with a world where powerful actors can claim their systems are “fair” or “safe” without any way to verify it.
Algorithmic discrimination is already well documented in fields such as credit scoring, hiring, and predictive policing. The mistake most news outlets make is to attribute “intent” to this discrimination, as if a software engineer maliciously and willingly added an if/else to the code. The problem is much more nuanced and systemic: systems trained on biased data will reproduce those biases.
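To make this concrete, here is a toy sketch in Python with entirely made-up numbers (the groups, rates, and the “model” are all hypothetical). There is no malicious if/else anywhere, yet a model that merely imitates biased historical decisions fails a standard disparate-impact check:

```python
# Toy audit sketch with made-up data: no malicious branch anywhere,
# yet a "model" that imitates biased historical approvals reproduces
# the bias, and an external disparate-impact check catches it.
import random

random.seed(42)

# Hypothetical historical decisions: group B was approved less often.
history = (
    [{"group": "A", "approved": random.random() < 0.60} for _ in range(1000)]
    + [{"group": "B", "approved": random.random() < 0.35} for _ in range(1000)]
)

def historical_rate(group):
    rows = [r for r in history if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

# The "trained model": it simply mirrors each group's historical
# approval rate, which is what fitting to biased labels tends to do.
def model(applicant):
    return random.random() < historical_rate(applicant["group"])

# External audit: selection rate per group on fresh applicants.
applicants = [{"group": g} for g in ("A", "B") for _ in range(1000)]
rates = {
    g: sum(model(a) for a in applicants if a["group"] == g) / 1000
    for g in ("A", "B")
}
ratio = rates["B"] / rates["A"]
print(f"selection rates: {rates}, disparate impact ratio: {ratio:.2f}")
# Ratios below 0.8 fail the common "four-fifths" screening rule.
```

This is exactly the kind of check an independent auditor can run without ever seeing the proprietary source code: only inputs and outputs are needed.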
Public oversight is also necessary. Decisions that affect millions of people should not be made solely by private companies optimizing for engagement metrics. There must be mechanisms for citizens, researchers, and public institutions to question and challenge the impacts of algorithmic systems.
Echo chambers and algorithmically generated “bubbles” are not conspiracy theories; they are measurable outcomes of personalized filtering. This is not about forcing diversity of thought but about giving people a realistic view of how their online experience is being curated for them, and giving them the option to opt-out when necessary.
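As a toy illustration (not a model of any real platform), the sketch below shows the mechanism in miniature: a recommender that optimizes purely for repeat engagement collapses a feed onto a single topic, while even a small exploration knob, one crude stand-in for user control over curation, preserves diversity.

```python
# Toy filter-bubble simulation: a recommender that only repeats what
# the user clicked before collapses the feed onto one topic.
import random
from collections import Counter

TOPICS = ["politics", "science", "sports", "music", "tech"]

def recommend(history, exploration):
    """Mostly show more of the most-clicked topic; occasionally explore."""
    if history and random.random() > exploration:
        return Counter(history).most_common(1)[0][0]
    return random.choice(TOPICS)

def feed_diversity(exploration, steps=1000):
    history = []
    for _ in range(steps):
        history.append(recommend(history, exploration))  # user clicks everything
    return len(set(history[-100:]))  # distinct topics in the last 100 items

random.seed(0)
print("no exploration :", feed_diversity(0.0), "topic(s)")   # bubble: 1
print("10% exploration:", feed_diversity(0.1), "topic(s)")   # several
```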
But for any of this to be possible, we need technology to be more open.
3. Open Source for Open Societies
(Or: how to avoid technofeudalism)
In order for a society to understand and change digital information systems, it first requires access. You cannot meaningfully debate the functioning of systems you are legally or technically barred from inspecting.
And yet, most of the digital infrastructure that governs our daily lives (operating systems, critical algorithms, communication platforms, smart devices) remains opaque. This opacity is not accidental. It is a political choice, often justified through intellectual property law, commercial secrecy, or vague “security concerns.”
If the public cannot see how a system works, it cannot audit it, challenge it, or improve it. In practical terms, this creates a class divide: those who can understand and modify digital systems, and those who can only accept whatever is given to them. This is the foundation of what can only be called “technofeudalism” (a world where a small number of actors own the digital land, write the digital laws, and control the digital tenants).
The early advocates of digital democracy understood this risk clearly.
They defended net neutrality, universal internet access, and policies that reduce the digital divide. The internet, in their view, should remain an open, non-discriminatory medium (in the classical liberal sense: anyone should be able to enter and participate). Once a society’s primary communication channel is fragmented into corporate walled fiefdoms, the outcome is straightforward: whoever controls the fiefdom controls the discourse.
They also recognized the structural problems of the modern tech economy. Monopolistic ecosystems create dependency, reduce innovation, and centralize far too much power in the hands of companies whose incentives rarely align with public interest.
There is also the more practical concern: monopolization leads to the predictable “enshittification” cycle, where corners are cut, quality collapses, and users remain stuck because there is no viable alternative.
Crucially, their criticism did not fall into the simplistic “evil capitalism” trope. They recognized that free competition in the digital sphere is what actually drives innovation. Canonical, for example, has built a profitable business model without needing to lock users into a proprietary ecosystem.
This is why Free and Open-Source Software (FOSS, sometimes called Libre Software; I won’t get into the distinction here) occupies such a central place in their worldview.
Most people already use FOSS without realizing it. The vast majority of servers run on Linux. Android is based on the Linux kernel. Many essential internet technologies (Apache, Nginx, PostgreSQL, Firefox, VLC, Blender) are open-source. In practice, the digital world runs on FOSS.
Open-source software provides transparency, security, and accountability. It allows independent verification. Vulnerabilities, biases, and backdoors can be identified by anyone, not just the corporation that wrote the code. And if necessary, the public can fork, improve, or adapt the software to their needs. From a technical standpoint, openness accelerates development. As a computer scientist, I can confidently say there are only a handful of closed-source programs that genuinely outperform their open-source equivalents. Openness brings innovation.
This openness, however, has long been perceived as a threat by major corporations. In the 1990s and early 2000s, leaked Microsoft documents (the “Halloween Documents”) revealed how deeply the company feared Linux and open-source software. They admitted that free software was technologically competitive with their products. They also outlined a marketing strategy to spread FUD (Fear, Uncertainty, and Doubt) and outright lies, such as claiming nonexistent programs would “crash Windows.”
The documents also exposed a plan to “embrace, extend, extinguish” open protocols like HTTP and HTML. Had this succeeded, the modern internet would almost certainly be worse, more fragmented, and less open.
FOSS thus assures us a few essential freedoms:
- The freedom to use the program for any purpose.
- The freedom to study how it works.
- The freedom to modify it to suit your needs.
- The freedom to distribute copies, either original or modified.
Decentralization and interoperability follow naturally from these freedoms. A democratic government in the digital era should guarantee that citizens can use and build technology without being locked into systems controlled by unaccountable actors.
But openness of hardware and software only solves the tool side of an open digital society. We still need to address the legal frameworks that regulate the creation, sharing, and ownership of these tools.
4. Intellectual Property Reform
(Or: Aaron Swartz’s martyrdom and how AI will break our conception of IP)
You may not know the name Aaron Swartz, but you have almost certainly used something he created. At age 14, he co-authored the RSS 1.0 specification (one of the foundations of the modern web). Later, he helped develop Creative Commons (now one of the most popular license families in the world), contributed to the early architecture of Reddit, and played a key role in shaping the free-knowledge ecosystem around Wikipedia.
In fact, this blog post is written in Markdown, the markup language developed by John Gruber with help from Swartz.
So yes: a prodigy (one of the rare cases where that word is not exaggeration). By his early twenties, he had already shaped the cultural and technical backbone of the internet. He could have simply gone on to build companies, accumulate wealth, and disappear into the usual Silicon Valley mythology.
However, Swartz believed that access to knowledge was a fundamental right. In 2011, he downloaded a large number of academic papers from JSTOR using MIT’s open network. The intent was never proven to be malicious. The papers were publicly funded research that most citizens cannot read without paying extortionate fees to private intermediaries.
Basically, he just wanted to download and share academic PDFs.
However, the United States federal government decided to make an example of him. Prosecutors brought charges carrying up to 35 years in prison, among other penalties. JSTOR itself declined to press charges. MIT hesitated, then folded under pressure. But the prosecutors kept escalating.
Under this pressure, Aaron Swartz took his own life at age 26 in 2013.
In contrast, in 2025 OpenAI insisted that the company needed to download petabytes of copyrighted data off the internet because training AI models “would be impossible without resort to content protected by copyright.”
Intellectual Property is a funny concept, and I say this as someone with experience dealing with patents. It exists in a strange middle ground: it pretends that ideas behave like physical objects while simultaneously acknowledging that they don’t. A patent gives you exclusive rights over a method or process, even though nothing physically stops anyone from thinking about it or reinventing it. Copyright gives you exclusive rights over a work even though copying it does not deprive you of your own copy.
But how can something truly be entirely ours? Take this text, for example. I wrote it myself, but the information I rely on came from countless other people (for example, I had to look up Swartz’s age on Wikipedia). The structure of the internet that delivered that information was built by engineers I will never meet. The operating system, browser engine, and networking stack underneath all of this involve contributions from tens of thousands of developers across decades. Even the language itself (English) is a communal invention, refined through centuries of shared use.
So although the content is mine, its very existence depends on an entire ecosystem of others. Creativity is not a pure act of individual originality. It is a networked phenomenon. Every piece of writing, art, or research is embedded in a larger structure of ideas, tools, and infrastructure.
But don’t get me wrong: I’m not a radical. These things exist because authors need compensation for their work. Without some form of incentive, many forms of creative labour would simply not exist (people need to pay their bills, after all). The question is not whether creators deserve compensation; the question is how they should be compensated in a world where information behaves fundamentally differently from physical goods.
It is my belief that Generative AI will break IP as we know it through sheer scale. AI models ingest billions of works indirectly, remix them probabilistically, and output new content that may or may not resemble the originals. This happens every minute, with thousands of new images, videos, songs, and ideas being mixed and remixed.
This challenges the very premise of copyright enforcement. In the future, we will either end up with extremely strict, technologically enforced IP regimes (which would probably be impractical) or we will rethink what IP is supposed to achieve in the first place.
It is harmful when artists are exploited without compensation. But the “closedness” of IP is not what protects creators. Good law and good services protect creators. Gabe Newell, the co-founder of Valve and creator of Steam (the gaming platform), explained this better than most policymakers ever could: “The easiest way to stop piracy is to offer a better service.”
And he was right. Steam didn’t succeed because it criminalized users; it succeeded because it made purchasing games easier than pirating them. Convenience and fairness accomplished what decades of legal threats never did. Today, video game piracy is far less prevalent than it was decades ago.
Thus, our “missing party” believed that intellectual monopolies (with their long copyright terms, restrictive licensing, and privatized data) do not protect innovation. They restrict it.
For this reason, the members of our mysterious movement championed:
- A rethinking of what intellectual property and copyright are,
- Shorter copyright terms,
- Open access to scientific publications,
- Open educational resources,
- Open data for research and innovation,
- And a general shift toward a more balanced system of knowledge production.
5. E-governance and E-democracy
(Or: go beyond simply being E-stonia)
I’ll assume the reader already understands the basic case for democracy. As Churchill supposedly said: “it’s the worst form of government except for all the others”. Despite its flaws, it is the only system where citizens are truly protagonists and decision-makers of their countries.
It is the only system where ideas like Blackstone’s ratio can exist, because it is the only system where every citizen is equal before the law, and where each person is sovereign over themselves.
Our problem is that we are running a hyper-fast digital reality using analog tools. Democracy only works if people agree on the same basic facts (a premise weakened by the misinformation discussed in Section 1), if they have the infrastructure required for free speech (the premise of Section 3), and if they are willing to go through the slow, collective process of debate to reach common solutions (which again links to the need for digital civic education from Section 1).
Before going further, it helps to distinguish two concepts that are often mixed together.
E-government is the digitization of public services: online tax forms, digital IDs, electronic records, and so on. It makes bureaucracy faster and cheaper. This is useful, but it does not make a political system more democratic.
E-democracy, however, is something else entirely. It concerns who participates, how decisions are made, and how power is monitored. Unsurprisingly, governments tend to be far more enthusiastic about the first category than the second.
Representative democracies face familiar problems: low participation between elections, information asymmetries between citizens and institutions, slow feedback loops, and policymaking shaped by organized minorities rather than the general public. Digital tools cannot solve these issues (Ranum’s Law: you cannot solve social problems with software), but they can reduce some of the structural imbalances.
The first requirement is transparency. Laws, budgets, decision processes, and public contracts should be accessible in formats that are actually usable, not locked behind unreadable PDFs or buried in fragmented systems.
The second is open data. Non-sensitive government data should be published in ways that allow independent analysis. Journalists, researchers, civil organizations, and citizens should be able to verify claims, expose contradictions, and provide external oversight.
The third (and hardest) is meaningful participation. Citizens need structured ways to influence policy outside election cycles. This can include online consultations on draft laws, participatory budgeting tools, verified petition platforms with guaranteed responses, or systems that allow comments on specific articles of proposed legislation.
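To show how low the technical bar for that second requirement (open data) really is, here is a minimal sketch. The file name and column names are hypothetical, but real open-data portals publish machine-readable CSVs of exactly this shape:

```python
# Minimal sketch of the oversight that machine-readable open data enables.
# "budget_2025.csv" and its columns (ministry, planned_eur, actual_eur)
# are hypothetical; real portals publish files of this general shape.
import csv

with open("budget_2025.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Flag every ministry that spent more than it planned, largest first.
overruns = sorted(
    (
        (r["ministry"], float(r["actual_eur"]) - float(r["planned_eur"]))
        for r in rows
        if float(r["actual_eur"]) > float(r["planned_eur"])
    ),
    key=lambda pair: -pair[1],
)

for ministry, excess in overruns:
    print(f"{ministry}: overspent by {excess:,.0f} EUR")
```

Twenty lines of code, and any journalist or citizen can verify a budget claim. That is only possible when the data is published in a usable format rather than locked in a PDF.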
Of course, participation must be grounded in realism. E-democracy does not mean turning every issue into an online poll, nor does it mean automating political judgment through algorithms. It also requires acknowledging real risks: digital divides, astroturfing, bot activity, and the possibility that “participation” becomes a formality without any effect on decisions.
Artificial intelligence can help, but only in limited ways. It can summarize large volumes of public feedback, identify recurring themes, or highlight inconsistencies in policy drafts. (The real tragedy is that AI is far more effective at helping autocracies than democracies.) Systems used in public administration must be auditable, and citizens should be able to understand how algorithmic recommendations were produced whenever they affect them.
In short, e-governance and e-democracy should not be seen as automating politics. They are about giving citizens better tools to influence the processes that already shape their lives. A digital society cannot rely on analog habits forever, but modernization only works if it strengthens democratic oversight rather than weakening it.
This is also my main gripe with the E-stonia portrayal (and not simply envy because I’m Latvian. Spoiler: keep reading for Part 2). Estonia is a strong example of what e-government can achieve, but it is still struggling with e-democracy. Any tin-pot state can make an app to pay taxes from the comfort of your sofa. It takes real work to turn citizens into active, digitally literate participants.
6. Privacy
(Or as I like to call it: the only antidote to surveillance totalitarianism)
Benjamin Franklin once said that “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety”.
The fact that people give up their basic human rights for safety is already well-established in history. Most people will tell you that “you have nothing to fear if you have nothing to hide.” This is a convenient narrative for anyone who wants to harm you. In practice, privacy is about power asymmetry: who knows what about whom, who can act on that knowledge, and who can be profiled, pressured, or punished. A person with no privacy is not “transparent.” They are exposed. A society without privacy is easier to govern, but only in the authoritarian sense of the word.
Surveillance is not solely a state project. The corporate side is just as pervasive. Every click, view, purchase, and location ping is recorded, correlated, and monetized. This is the business model. The real issue is how these two systems reinforce each other. Governments gain access to private-sector data or buy it from brokers. Companies comply because they want to preserve their markets. The end result is a combined surveillance infrastructure, operated by multiple actors but serving the same function: mapping, predicting, and influencing human behavior at scale.
There is also the illusion that privacy is still a matter of personal choice, as if one could “opt out.” In reality, this is fiction. In the mixture of Brave New World and 1984 we are drifting toward, you have no meaningful way to avoid being digitalized. The simple act of leaving your house means you can be filmed, photographed, and tracked. Cameras observe where you go. Algorithms can identify faces, voices, or even the way you walk (gait biometrics). Your phone leaks metadata constantly. Your purchases, transportation habits, and online accounts fill in the rest. You never had a choice to opt in. You existed, therefore you became data.
And unfortunately, in the modern world, we have also discovered that most people have been duped into giving up their privacy for comfort. In giving up your own privacy, you often end up giving up the privacy of everyone around you.
For the political movement I have been referencing throughout this text, privacy was not a secondary concern. It was the baseline condition for autonomy. They opposed mass surveillance by both states and corporations. They supported end-to-end encryption and rejected calls for “backdoors,” understanding that a system with a secret key for the government is a system that others will eventually exploit. They defended the right to pseudonymous communication, the right to anonymity, and strict limits on data retention. They argued for data minimization by default: if a service does not need a piece of information, it should not collect it.
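To illustrate why a “backdoor” is incompatible with end-to-end encryption, here is a minimal sketch using the PyNaCl library (Python bindings to libsodium). The guarantee is structural: only the two endpoints ever hold private keys, so there is no third key to hand over without redesigning the protocol and weakening it for everyone.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, Box

alice_sk = PrivateKey.generate()   # never leaves Alice's device
bob_sk = PrivateKey.generate()     # never leaves Bob's device

# Each side only ever needs the *public* key of the other.
alice_box = Box(alice_sk, bob_sk.public_key)
bob_box = Box(bob_sk, alice_sk.public_key)

# encrypt() generates a random nonce and authenticates the ciphertext.
ciphertext = alice_box.encrypt(b"meet at the usual place")
print(bob_box.decrypt(ciphertext))  # b'meet at the usual place'
```

There is no line in this protocol where a government key could be inserted without changing the math for all users, which is exactly the point the anti-backdoor position makes.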
In their view, privacy was not about hiding from society. It was about preventing a situation where every action, message, or association can be reconstructed later and used against you (and of course it will be used against you. You are a fool to think anyone out there is thinking of your own good). Without privacy, participation becomes self-censorship. Without privacy, dissent becomes dangerous. Without privacy, the rest of democratic life becomes a performance under observation.
And I know some of you don’t care about this. You think it is just utopian theater. But I’ll say it: there is a practical reason why the greatest technological leaps of humanity in the past centuries were associated with pushes toward individual freedom. Innovation naturally breeds questions and change. The human spirit is inherently disruptive to the establishment, and innovators are distrusted by governments and disliked by the current market leaders.
This connects back to the earlier sections. Without privacy, e-democracy becomes hollow because participation is recorded, profiled, and scored. Without privacy, open-source and open data lose their democratic value because they can be repurposed for more efficient surveillance. Without privacy, the individual has no protected space in which to think, speak, or organize without oversight.
This is why I call privacy the only real antidote to surveillance totalitarianism. Not because it solves everything, but because without it, nothing else matters. Without privacy, we truly are techno-serfs of the new techno-feudalism.
7. The Reveal
(And why this political movement failed)
Well, if any of the political ideas here piqued your interest (or, if you found yourself nodding along or thinking “yeah, that makes sense”) then let it be known that you just read my long rant-take on the main talking points of the Pirate Party.
The Pirate Party was founded in Sweden in 2006 by an IT guy called Rick Falkvinge. He started it because he was concerned about changes to Sweden’s copyright law. The idea snowballed, and the movement quickly spread worldwide. He was the “mysterious political theorist” I mentioned at the beginning. And, to be fair, while he founded the party, many others took its discussions further than he ever did.
So why did it fail? Some hint at dramatic conspiracies, but I think the reasons are mostly straightforward: a PR problem and a coordination problem.
The PR problem is obvious. It is called the Pirate Party. The common voter will not take the time to understand that it is not a joke. Even highly respected and widely read people in Computer Science and Political Science have told me they assumed I was joking when I brought it up.
The branding is clever but self-sabotaging. They embraced the term “pirate” as a response to the entertainment industry’s accusations, which is understandable. However, politics rarely gives you the freedom to be authentic, let alone ironically edgy.
The coordination problem is more political. The movement lost steam in the post-2008 era, especially as the Occupy Wall Street-style energy faded. They attracted privacy purists, libertarians, and progressives, but failed to break through to the common voter. They also presented themselves as too radical and too “political”, and so failed to capitalize on the worldwide network of computer-science dweebs like me.
The 2010s brought different “urgent” issues to the forefront (immigration, populism, economic stagnation) which shifted attention away from digital rights. And the left–right divide certainly didn’t help a civic-based movement that never fit neatly into either side.
So yes, you get called a “fringe weirdo” for liking the Pirate Party. But every time I read something written by its members, or by people in their philosophical orbit, I end up genuinely impressed (shiver me timbers, I’d say) by how accurate their predictions about the future of technology were. In 2006, some of them were already thinking about problems that define 2025’s OpenAI debates.
And as I said at the beginning: I believe all these topics (digital commons, e-democracy, surveillance, transparency, intellectual property reform) will become central political debates in the coming decades.
The problem is that we have lost an entire political vocabulary that the Pirate Party introduced. Their terminology and frameworks never entered the general political bloodstream and are now forgotten. Without that vocabulary, we struggle to talk about digital rights in a meaningful way. Thus, we will always default to shallow discussions while our governance systems continue to struggle.
We don’t need to resurrect the Pirate Party as a political vehicle. But I believe we do need to pick apart the ideas they developed (the good ones, the realistic ones) and incorporate them into our current political conversations.
Which is why I called this series Digital Democracy instead of… I don’t know: the Pirate Party and its opinions. The debate over these issues is too important to be “held” by a single group. These questions do not belong to the left or the right, and the left-right divide may itself soon feel outdated. The politics of the new century will be about who controls data, who controls the algorithms that shape our lives, and who gets to decide what a “person” is in an era of biohacking, synthetic biology, and AI entities.
Because whether we like it or not, the future they warned about is already here. And we are sailing these waters unprepared and without any captain.
In Part 2, I’ll give my personal views on why Latvia lags behind in technology, and what discussions would be necessary to make it a country ahead of its time.
On a final note: one day in 2021, I was visiting Luxembourg and stumbled across the Pirate Party headquarters. (There is nothing more to that; I just think it is a neat coincidence.)

Header image by Lundgaard & Tranberg Arkitekter A/S
