Navigating the Future: Insights from Tech Policy Press

It feels like every day there’s something new happening with tech and how it affects our lives, especially when it comes to politics and public life. We’re seeing big tech companies get even bigger, and it makes you wonder what that means for public discussion and our democracy. Plus, there are big questions about how AI is going to change things, not just in the US and Europe but in places like Africa too. It’s a lot to take in, and figuring out the best way forward is tough.

Key Takeaways

  • Big tech’s massive influence is changing how we think about digital spaces, turning them into something more like private property than shared resources. This shift can hurt our ability to have good public conversations and build community.
  • The way tech companies control foundational AI models and digital platforms means concentrated power, which can limit democratic debate and make people trust online spaces less.
  • AI’s growing use in African democracies raises challenges around election integrity, surveillance of opponents and activists, and funding gaps that give well-resourced campaigns an outsized advantage.
  • Working with big tech companies on research can be tricky. While it can lead to good studies, there’s a risk that the company’s own interests might shape what research gets done and how it’s presented.
  • Open source technology offers a way to make AI development more collaborative and ethical, helping to ensure that communication systems serve the public good rather than just corporate interests.

Reimagining Digital Spaces as a Moral Commons

It feels like the internet, or at least the parts we use every day, has become something like a public park, but one that’s owned by a few really big companies. Think about search engines or social media platforms; they’re pretty much essential now for how we get information and talk to each other. But instead of being run for everyone’s benefit, they’re managed by private interests. This shift means that what could have been a shared space for public discussion and ethical growth is now just another piece of property, controlled by corporate goals.

This situation leads to a couple of problems. First, the digital tools themselves start to break down, becoming less reliable and trustworthy. Second, it messes with our own sense of right and wrong. When these platforms are designed to be more about taking from us and less about being honest, they chip away at the things democracy needs to survive: trust, open conversation, and seeing each other as equals. Without these shared spaces where we can practice being truthful, understanding others, and taking responsibility, we’re heading towards a place where people are just manipulated by technology, and being a citizen doesn’t mean much anymore.

Communitarian Critique of Big Tech’s Influence

We need to look at how these big tech companies are shaping our society, not just through rules and regulations, but by thinking about the kind of people we’re becoming. It’s not just about individual habits, but about our ability to connect and act together. Big tech often pushes us towards being more self-focused and letting others handle the hard moral questions. But we need to rebuild these shared spaces where we can think together, take responsibility, and grow ethically. This means shifting our approach to tech ethics to bring back the lifeblood of democracy: shared meaning, moral development, and the ability for people to act in their communities. Democracy isn’t just about having rules and representatives; it’s about having citizens who can act with good character. Reclaiming that ability is the best way to push back against the growing power of the tech industry.

Reviving Democratic Life Through Shared Meaning

In the digital age, things like search engines, social media platforms, and even the big AI models are becoming like public utilities. They’re used by so many people, and they’re so important, that they function like infrastructure. But their control remains private, and they aren’t really accountable to the people who rely on them every day. This isn’t just a question of accountability or new laws; it’s a kind of takeover. What could have been a shared resource for democratic discussion and ethical progress is now owned by corporations. This concentration of power means that the digital public goods themselves are getting worse, and the process is also damaging our moral compass. As these platforms become more about extracting value and less about being trustworthy, they weaken the very things democracy needs: trust, dialogue, and mutual respect. We need to think about who should control our communication systems, who gets to set the rules for what we see and believe, and how we can raise citizens who can handle these digital spaces with good judgment.

Elinor Ostrom’s work is really helpful here. She showed that shared resources don’t inevitably fall apart when they aren’t privatized or handed to a central authority. Her research focused on people working together and governing themselves, proving that commons can be maintained through collective responsibility and ethical care. In today’s world, where both public resources and civic life are at risk, Ostrom’s insights are more important than ever.

We also need to consider how to build alternative communication systems that are open and accountable. The open-source software community is in a good spot to help, because it can build systems that meet the specific needs of different places. By applying open-source ideas, this community can create platforms that give users more power, improve local governance, and encourage shared responsibility for managing content. This offers real alternatives to big tech, making sure that communication systems serve communities instead of being controlled by a few companies that don’t really understand local issues. Supporting open source can lead to tech policies that make technology more accessible and that benefit all citizens.

Cultivating Technomoral Virtues for Citizenship

Philosophers like Shannon Vallor point out that our old ways of talking about morality aren’t quite enough for a world with AI, algorithms, and constant reliance on platforms. She suggests we need to develop virtues—like honesty, humility, empathy, and courage—that fit our tech-filled lives. These aren’t just personal traits; they’re abilities we build and keep through shared activities. Against the way Big Tech pushes us to be hyper-individualistic and to outsource our moral thinking, Vallor’s ideas call for rebuilding our shared moral life. This means creating spaces and abilities for collective thinking, responsibility, and ethical growth. We need to move beyond just following rules or procedures and instead focus on reviving the core of democratic life: shared meaning, moral development, and civic action. Democracy needs citizens who can act virtuously, not just follow instructions. This is the most pressing way to counter the growing power of the tech elite.

The Privatization of Digital Public Goods

It feels like just yesterday that the internet was this wild, open space. Now, though, a few big companies have really locked down a lot of what we consider digital public goods. Think about search engines, social media platforms, and even the big AI models that are starting to shape so much of our world. These aren’t just private businesses anymore; they’re practically infrastructure for how we live, communicate, and get information. But here’s the kicker: they’re run by private companies, not by us, the people who rely on them every single day.

Concentrated Power in Foundational AI Models

We’re seeing a massive amount of power consolidate in the hands of a few companies that are building the core AI systems. These aren’t just tools; they’re becoming the bedrock for countless other applications and services. When a handful of tech giants control the development and deployment of these foundational models, it means they also get to set the rules, decide what information is prioritized, and essentially shape our digital reality. It’s like a small group of people owning all the roads and deciding who can drive on them and where they can go. This concentration raises serious questions about who truly benefits and who gets left behind. It’s a big shift from the idea of a shared digital space to one that’s increasingly owned and controlled by corporate interests. We need to think about how these powerful AI systems are developed and who has a say in their direction, especially considering their growing societal impact. It’s a complex issue, and understanding the implications is key to navigating the future of technology.

Enclosure of Democratic Discourse

When these digital public goods become private property, it’s a bit like an enclosure movement, but for our conversations and public life. Platforms that were once spaces for open discussion are now managed to maximize profit, often by capturing our attention and data. This can lead to a degradation of the quality of our public discourse. Instead of fostering genuine dialogue and mutual understanding, these systems can incentivize outrage and division because that often drives engagement. It means that the very spaces where we should be able to debate ideas and build consensus are being turned into something else entirely, something that doesn’t necessarily serve the public good. We’re losing the shared ground needed for a healthy democracy.

Degradation of Digital Infrastructure and Trust

This privatization doesn’t just affect our conversations; it also impacts the underlying digital infrastructure we depend on. As platforms become more focused on extracting value, they can become less reliable, less transparent, and frankly, less trustworthy. When the systems that provide essential services are privately owned and operated with profit as the primary motive, there’s a risk that the quality and integrity of those services can suffer. This erosion of trust is a serious problem. It makes it harder for us to rely on digital tools for important tasks, and it undermines the collective capacity we need to address shared challenges. We need to consider how to rebuild trust in these systems and ensure they serve us, rather than the other way around.

Navigating AI’s Impact on African Democracies

Artificial intelligence is showing up in Africa, and it’s got folks thinking about what it means for democracy on the continent. It’s not a simple story, though. On one hand, AI could help with things like getting more people involved in politics or making elections run smoother. But there’s also a real worry that it could be used for bad stuff, like spreading fake news or keeping tabs on people.

AI’s Influence on Election Integrity

When it comes to elections, AI is a bit of a double-edged sword. On the plus side, it could help campaigns reach voters more effectively or even spot attempts at election fraud. However, there’s a big concern about AI being used to create and spread disinformation, making it harder for voters to know what’s real. This is especially tricky because fact-checking systems often struggle with the unique cultural contexts in Africa, meaning they might flag legitimate local content as false. This could really mess with how people vote and trust the process. We’ve seen how social media can sway opinions, and AI tools could take that to a whole new level, potentially skewing results in ways that aren’t fair.

Surveillance and Control in Political Landscapes

Another major area of concern is how AI might be used for surveillance and control. Think about facial recognition technology – it’s already being used in some places to keep an eye on political opponents or activists. This kind of tech, when put in the wrong hands, can really shut down free speech and make people afraid to speak out. It’s a serious risk to the open discussions that democracy needs to thrive. The worry is that governments or powerful groups could use AI to monitor citizens, suppress dissent, and generally make it harder for people to organize and advocate for their rights.

Funding and Inequality in Political Campaigns

Then there’s the money side of things. AI tools can be expensive, and that means political parties or candidates who can afford them might get a big advantage. They could use AI to target voters with personalized messages, figure out the best ways to get their supporters to the polls, or even shape public opinion more effectively. This could create a really uneven playing field, where well-funded campaigns using AI can drown out others. It raises questions about fairness and whether everyone has a shot at winning, regardless of their budget. We need to think about how to make sure AI doesn’t just widen the gap between the rich and the poor in politics, or between urban and rural areas, or even between those who are tech-savvy and those who aren’t. It’s about making sure the democratic process stays open to everyone, not just those with the deepest pockets or the most advanced tech.

Challenges in Industry-Academia Collaboration

Working with big tech companies on research sounds like a dream, right? You get access to data, resources, and a chance to influence how these massive platforms operate. But, as it turns out, it’s not always that simple. There are some real hurdles to jump over.

Independence by Permission from Tech Giants

One of the biggest issues is maintaining true independence. When a company like Meta funds a research project, there’s always a question of whether the research is truly objective or if it’s subtly steered. In one major study, academics had to agree to not take any money directly from Meta to avoid any appearance of bias. That meant they couldn’t even hire extra help or get their own PR team, which is a big deal when you’re dealing with a project of that scale. This "independence by permission" means the company still holds a lot of the cards, even if the research itself is solid.

Guiding Workflow and Research Prioritization

Companies also tend to set the boundaries for the research. For instance, a project might be limited to studying only US elections or a specific time frame. While this can help manage the workflow and keep things focused, it also means certain questions or regions might get overlooked. Imagine if a study on social media’s impact on democracy could only look at data from one country. That’s a pretty big limitation, and it might not reflect the global picture. It makes you wonder if the company is shaping the research agenda to suit its own interests.

Potential for Agenda Setting in Research

This leads to the broader issue of agenda setting. When companies dictate the scope, timeline, and even the data access protocols, they can effectively control what gets studied and how. This isn’t necessarily malicious, but it can mean that research priorities align more with the company’s needs than with the public interest. For example, researchers might want unfettered access to raw user data, but companies often push back, citing privacy concerns. While privacy is important, it can also be used as a reason to limit transparency. Finding a balance where researchers can do their work without compromising user privacy, perhaps through new regulations that protect researchers within companies, is a tough nut to crack. It’s a complex dance, and getting it right is key for meaningful collaboration in AI development.

The Role of Open Source in Democratic Technology

Big Tech really runs most of our online communication these days, and honestly, they don’t always get what’s happening in different parts of the world. This is a problem because their algorithms and how they handle content aren’t always a good fit for local cultures. It can lead to bad stuff, like misinformation spreading or local voices getting drowned out. We need alternatives that actually work for communities, not just for a few big companies.

Open source, or FOSS (Free and Open Source Software), offers a way out. By sharing code, data, and AI models openly, we can build technology that’s more transparent and accountable. This means more people can understand how it works, suggest improvements, and even fix problems. It’s like building something together in a community workshop instead of buying something off the shelf that might not fit.

Promoting Collaboration in AI Development

When AI tools and the data used to train them are open, it opens the door for everyone to get involved. Researchers, small companies, and even individuals can collaborate on building AI that serves a wider range of needs. This open approach aligns with democratic ideas because it lets people understand and influence the tech that shapes their lives. Of course, we have to be careful about privacy and intellectual property when sharing data, but finding that balance is key to making AI work for the public good.

Ensuring Ethical AI Frameworks

Open source principles can help create ethical guidelines for AI. When the inner workings of AI systems are visible, it’s easier to spot and fix biases or unfair practices. This transparency is important for building trust and making sure AI doesn’t lead to discrimination or human rights issues. It’s about making sure AI is developed responsibly, with ethical considerations front and center.

Reclaiming Control of Communication Systems

Right now, a few big companies control how information flows globally. This isn’t ideal, especially when their understanding of local contexts is limited. Open source projects, like Mastodon for social media, provide alternatives. These platforms can be built with specific community needs in mind, promoting collaboration and shared responsibility for content. By supporting open source, we can create communication systems that truly serve diverse communities and give people more control over their digital spaces.

Addressing Systemic Risks in the Digital Age

We’re living in a time where a few big tech companies basically run the digital world. Think about search engines, social media platforms, and even the big AI models – they’re becoming like essential utilities, but they’re owned and controlled by private companies. This setup feels a lot like an enclosure, turning what could be shared public spaces into private property. It’s a big shift, and it means the rules and decisions made by these companies have a huge impact on all of us, yet we have very little say in how they operate.

This situation leads to a couple of major problems. First, the digital public goods themselves start to suffer, becoming less reliable and less trustworthy as platforms are designed to extract as much value as possible. Second, that same dynamic chips away at the things that make democracy work: trust, open discussion, and recognizing each other as equals. It’s like the foundations of our civic life are getting weaker.

EU’s Efforts to Reassert Public Oversight

Governments are starting to notice this. The European Union, for example, has put out rules like the Digital Services Act and the Digital Markets Act. These are attempts to bring back some public control over these massive platforms. The idea is to make sure these companies can’t just do whatever they want. It’s a step towards making them more accountable.

Researcher Access to Platform Data

But these rules often focus on procedures and technicalities. What seems to be missing is a clearer idea of why we need these rules beyond just preventing specific bad things. We need a way to think about the digital world as a shared space, a commons, where everyone has a stake. This means rethinking who owns these digital tools and how we all participate. Getting researchers better access to platform data is one way to start understanding the real impact these systems have, which is vital for making informed decisions about their future. It’s about building a better digital public sphere, not just following a checklist of regulations.

The Need for Normative Frameworks Beyond Regulation

Ultimately, we need to move past just regulating and start thinking about the bigger picture. It’s about building a digital environment that supports democratic values and allows for ethical growth. This involves cultivating certain virtues, like honesty and empathy, in how we interact online and how these technologies are built. It’s not just about following rules; it’s about creating a digital commons where shared meaning and civic responsibility can thrive. This is how we can push back against the concentration of power and ensure technology serves the public good.

Moving Forward: What’s Next for Tech and Democracy?

So, where does all this leave us? It’s clear that the way tech companies operate has a huge impact on our lives and how our societies function. We’ve seen how concentrating power in the hands of a few tech giants can privatize things that should be for everyone, like public discussion and access to information. This isn’t just about rules or regulations; it’s about rebuilding the shared spaces where we can all think and act together. We need to think about new ways to own, use, and be responsible for digital tools. It’s about making sure technology serves people and democracy, not the other way around. The conversation is ongoing, and figuring out how to do this right is the big challenge ahead.

Frequently Asked Questions

What is meant by ‘reimagining digital spaces as a moral commons’?

It means thinking of the internet and online platforms not just as places for business or entertainment, but as shared spaces where we can all grow morally and act responsibly together. Instead of big tech companies controlling everything, this idea suggests we should build these spaces with shared values and a focus on community well-being.

How has Big Tech privatized digital public goods?

Big tech companies have taken over services that many people rely on, like search engines and AI tools, and now control them for their own profit. This is like them owning public parks or libraries and deciding who can use them and how. It means these important digital tools aren’t truly shared or controlled by the public.

What are the main concerns about AI’s impact on democracies in Africa?

In Africa, AI can affect how fair elections are, and governments might use it for surveillance, watching people too closely. Also, political groups with more money can use AI for their campaigns, which might give them an unfair advantage over others, making the political playing field uneven.

What are the challenges when tech companies and universities work together on research?

Sometimes, when universities work with big tech companies, the companies might try to influence what research gets done or how it’s presented. This can make it hard for researchers to be truly independent, as their work might be shaped by the company’s goals or image.

How can open source help make technology more democratic?

Open source means the underlying code for technology is shared openly. This allows more people to work together on developing AI, making sure it’s built ethically. It also helps us take back control of our communication systems from big companies, promoting collaboration and transparency.

What needs to be done to manage the risks of technology in the digital age?

We need more than just rules; we need clear ethical guidelines for technology. This includes making sure companies are open about how their platforms work and giving researchers access to data so they can study the real effects. The goal is to ensure technology serves the public good, not just private interests.
