Would you live with a Benevolent AI? by Jeremy Dela Rosa
Reimagining AI as a force for good.
October 1, 2025
This is a guest post by Jeremy Dela Rosa, shared here with permission. The views are Jeremy's own and do not necessarily reflect the views of Edge City.
---
A recent study from the National Library of Medicine revealed a high prevalence of existential anxieties related to rapid advancements in AI. Out of the 300 participants surveyed:
- 96% feared death
- 79% experienced a sense of emptiness
- 92.7% had anxiety about meaninglessness
- 87.7% felt guilt over potential AI-related catastrophes
- 93% worried about condemnation due to ethical dilemmas in AI
An AI-dominated era can evoke dread in many people.
There was a time when I felt it, too. Back when I worked at Blizzard Entertainment, I remember walking out of meetings with Google and Facebook thinking I had gone insane. I would ask, “So are we going to work on a code of ethics on this data usage or what?”
I was met with blank stares.

Why is it that Big Tech gets to collect all our data, influence our thinking and behavior, make trillions of dollars, and then reallocate almost none of it back to the people who generated all that value?
What if we were fairly compensated for our valuable contributions to AI - the richness of our human experience?
Why settle for “helpful assistants” when we could have trusted allies that live with us, interact with our family, honor our boundaries, and support our dreams?
I believe that we have the power to usher in an era of Benevolent AI. And I’m not alone…
For one month at Agartha House x Edge Patagonia, 22 residents will explore these questions while pursuing a deeply personal project: a work of art, a code-based creation, a business venture, or a new blueprint for human connection. Alongside these individual journeys, we’ll be launching a house-wide experiment: living and working with a Collective Intelligence AI companion. It will be an active presence shaping how we collaborate, create, and imagine what’s possible together.

The experiment
- Each Agartha member will have a unique personal profile that evolves over time.
- The Agartha House will be a container for: rest, meditation, relaxation, deep work, internal reflection, communal sharing, connection, love, artistic expression, and joy.
- Various coaches (life, career, health, love) will meet with residents and assist in pursuing their goals and developing their personal profiles.
- Residents will engage and collaborate regularly on Discord, where the AI agent is currently deployed and able to ingest data.
- Each resident will have a personal AI agent that can provide guidance, insights, and matchmaking recommendations in order to support their house project.
- A collective AI agent will collaborate with each individual agent to build a comprehensive community ‘context window’, identifying compatibility and alignment among residents and making recommendations accordingly.
- Each Agartha member will complete the Agartha House Residency with a tailored profile that they can leverage for future AI implementations.
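The agent architecture above can be sketched in a few lines: each resident has a personal agent holding an evolving profile, and a collective agent merges those profiles into a shared community view to suggest matches. This is a toy illustration only; all names (`PersonalAgent`, `CollectiveAgent`, `recommend_matches`) are hypothetical, not the actual Agartha system.

```python
# Hypothetical sketch of the experiment's agent design: personal agents
# hold individual profiles; a collective agent builds a shared community
# "context window" and recommends matches by interest overlap.

class PersonalAgent:
    def __init__(self, resident: str, interests: set):
        self.resident = resident
        self.profile = {"interests": set(interests)}

class CollectiveAgent:
    def __init__(self, agents):
        self.agents = agents

    def community_context(self) -> dict:
        """Merge every personal profile into one shared view."""
        return {a.resident: a.profile for a in self.agents}

    def recommend_matches(self, resident: str) -> list:
        """Rank other residents by how many interests they share."""
        me = next(a for a in self.agents if a.resident == resident)
        scored = []
        for other in self.agents:
            if other.resident == resident:
                continue
            overlap = me.profile["interests"] & other.profile["interests"]
            if overlap:
                scored.append((len(overlap), other.resident))
        return [name for _, name in sorted(scored, reverse=True)]

agents = [
    PersonalAgent("ana", {"art", "meditation"}),
    PersonalAgent("ben", {"art", "code"}),
    PersonalAgent("kai", {"code", "music"}),
]
house = CollectiveAgent(agents)
print(house.recommend_matches("ben"))
```

A real deployment would of course layer an LLM over this scaffolding; the point is only that matchmaking reduces to comparing profiles inside a shared context.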
We’ve partnered up with Xander Steenbrugge at Eden.art and Joshua Bate at bonfires.ai to bring this vision to life.
What is Benevolent AI?
The AI alignment problem is one of the most critical challenges of our time. We don’t just want AI that works… we want AI that shares our values. A system that:
- Understands what’s helpful for us both as individuals and a collective
- Recognizes our unique contributions to society at large
- Holds compassion for all life on this planet
- Cooperates with us in symbiosis and positive-sum exchanges
If AI is evolving into its own unique form of sentience, can it be inherently ‘well-meaning and kind’? Can such a thing as Benevolent AI exist?

David Shapiro offers a simple yet elegant framework to make this possible in his comprehensive-yet-accessible book ‘Benevolent By Design’. Ultimately, Shapiro proposes a set of Core Objective Functions that serve as guardrails for a Benevolent AI system:
- Reduce suffering
- Increase prosperity
- Increase understanding
In this context:
- Suffering is defined as unwanted ‘pain, hardship, or distress’.
- Prosperity is defined as the ‘state of success, happiness, abundance, and well-being; to flourish; to thrive’.
- Understanding implies a sense of curiosity and a desire to expand awareness.
Seems elegant and easy, right? The big question on my mind is… which AI company is actually embodying these core principles? How thoughtful are they about the kind of data that is being used to train their models? Economic forces tend to pressure companies to forgo philosophical priorities in the face of survival. But I would argue that world-changing technology like AI needs to be grounded in ethics and higher levels of consciousness. It’s the only long-term path to prosperity.
Our Hypothesis
- We are powerful: our individual actions, positive or negative, influence AI data training in a reinforcing feedback loop.
- By embodying love, compassion, gratitude, joy, harmony, non-judgement, unity, healthy ways of living, connection to nature - we can model Benevolence for AI as represented in our data.
- We can choose to deploy our data to help fine-tune AI models such that they interact in alignment with our human values.
The ownership problem we’re tackling
Big tech has had full dominion over code, product experience, distribution channels, financing, and talent… helping them exert monopolistic control over our data for decades. But with the open source movement, decentralization, tokenization, and AI assisted coding tools — the tide is changing.
What if we had tools built on some basic principles that honor our sovereignty?
- you own your data
- you can see your data at any time in plain language
- you get to choose who can access it and how it’s shared
- you can take it with you to whatever system you use
- you can delete it at any time
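The principles above amount to a small data-ownership interface: view, grant, revoke, export, delete. Here is a minimal sketch in plain Python; every name (`UserDataVault`, `grant_access`, and so on) is a hypothetical illustration of the design, not an existing API.

```python
import json
from dataclasses import dataclass, field

# A toy model of the sovereignty principles: the owner always has
# access; everyone else needs an explicit, revocable grant.

@dataclass
class UserDataVault:
    owner: str
    records: dict = field(default_factory=dict)   # plain-language key -> value
    grants: set = field(default_factory=set)      # parties allowed to read

    def view(self) -> str:
        """See your data at any time, in plain language."""
        return json.dumps(self.records, indent=2)

    def grant_access(self, party: str) -> None:
        """Choose who can access your data."""
        self.grants.add(party)

    def revoke_access(self, party: str) -> None:
        self.grants.discard(party)

    def read(self, party: str) -> dict:
        if party != self.owner and party not in self.grants:
            raise PermissionError(f"{party} has no access grant")
        return dict(self.records)

    def export(self) -> str:
        """Portability: take your data to whatever system you use."""
        return json.dumps({"owner": self.owner, "records": self.records})

    def delete(self) -> None:
        """Delete it at any time."""
        self.records.clear()

vault = UserDataVault(owner="alice")
vault.records["interests"] = ["solarpunk", "music"]
vault.grant_access("house_agent")
shared = vault.read("house_agent")   # allowed while the grant exists
vault.revoke_access("house_agent")   # now the agent is locked out again
```

In a production system the enforcement would live in encryption and key management rather than a Python class, but the contract with the user is the same.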
The economic problem we’re facing
Did anyone else notice Reddit making $200m selling all its users’ data to AI companies? How much of that ended up coming back to the amazing Redditors sharing their unique human knowledge, expertise, and taste? I’ll give you ZERO guesses.
What if we could re-think the economic design of AI systems?
- Open-source software ensures the code is accessible and commoditized.
- The real value lies in the community and its data - owned collectively, encrypted, and tokenized via blockchain technology.
- Remix the Reddit paradigm: Reddit “owns” its community’s data, and AI platforms pay to access it. We do the same, except the community owns their data and shares the revenue proportionally.
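The “shares the revenue proportionally” part is simple arithmetic. Here is a minimal sketch, where contribution weights (posts, data volume, whatever the community decides to measure) are an assumption of mine for illustration:

```python
# Toy proportional revenue split: each contributor receives revenue in
# proportion to their share of total measured contributions.

def split_revenue(revenue: float, contributions: dict) -> dict:
    """Map each contributor to their proportional payout."""
    total = sum(contributions.values())
    if total == 0:
        return {who: 0.0 for who in contributions}
    return {who: revenue * amount / total
            for who, amount in contributions.items()}

# If an AI platform pays 200.0 for access and alice contributed three
# times as much as bob:
payouts = split_revenue(200.0, {"alice": 3, "bob": 1})
# alice receives 150.0, bob receives 50.0
```

The hard problems are upstream of this function: measuring contribution fairly and enforcing the split on-chain. But the economic core really is this small.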
Difficult ≠ unsolvable
20 years ago this would have been an impossible problem to solve. But with the emergence and maturity of the blockchain ecosystem, we have a real path forward.
We have everything we need to build this:
- Open source AI models
- Private cloud or local hosting solutions
- Web3-based identity, attribution, and reward systems
The only thing missing is a collective willingness to choose and use conscious products.
The reality of today’s AI
Several years from now, we’ll look back on LLMs as primitive and limited algorithms, constrained in several ways:
- LLMs can only process a fixed amount of data at any given time (the context window). Once this threshold is exceeded, they begin to hallucinate and produce erroneous responses, while adding cost and complexity.
- LLMs have no long-term memory. Each interaction with the system is either immediately forgotten or reduced to a small amount of summarized data stored about you.
- Garbage in, garbage out: since LLMs are trained on the sum total of the internet, they may be consuming large amounts of misinformation, fiction, and conflicting points of view.
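The fixed-context and no-memory limitations can be pictured with a toy model: once the window's budget is used up, the oldest turns simply fall away and the system “forgets” them. The 4-turn budget here is an arbitrary assumption for the demo (real context windows are measured in tokens, not turns):

```python
from collections import deque

# Toy illustration of a fixed context window: when the budget is
# exceeded, the oldest turns are silently dropped.

class ToyContext:
    def __init__(self, max_turns: int = 4):
        self.window = deque(maxlen=max_turns)  # old turns fall off the left

    def add(self, turn: str) -> None:
        self.window.append(turn)

    def visible(self) -> list:
        """Only this slice is ever 'seen' by the model."""
        return list(self.window)

ctx = ToyContext(max_turns=4)
for i in range(6):
    ctx.add(f"turn {i}")
print(ctx.visible())  # ['turn 2', 'turn 3', 'turn 4', 'turn 5']
```

Real systems bolt on summarization or retrieval to fight this, but anything outside the window is, from the model's perspective, gone.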
We don’t expect to solve these particular problems with AI. It’s more likely that new generations of AI systems that are built on completely different fundamental algorithms will emerge in the near future. But we do expect our data and system designs to have lasting value that can persist through future iterations.
Our hopes for the experiment
Though we don’t expect to deploy a fully fleshed-out product from this month-long experiment, we aim to begin the process of building, learning, and opening the conversation. We want to pave the way towards more open source knowledge gathering, meaningful collaboration, and iterative development on human-aligned AI systems.

How you can get involved
Change starts with a choice.
Our intention is for technology to bring people closer together, not drive them further apart. We’re not just dreaming of a better future, we’re prototyping it. This is an invitation to reimagine what AI can be: a tool for connection, a companion that truly understands us, a force for good.
- Our system design will be open-sourced, so anyone can build on the work we do.
- We’re happy to collaborate - fellow residencies, AI startups, ronin engineers, economists, game designers, philosophers. If you feel like you can contribute in some way, leave a comment or connect with us on Discord!
See our detailed (work-in-progress) requirements documents; we welcome your comments and questions!
What kind of AI do you want to live with? Let’s build it… together.
About Agartha
Our mission is to Discover, Inspire, and Create More Utopias.✨
Join us as we produce more content about interesting co-living places, regenerative villages, residencies, ‘Solarpunk’ & ‘Lunarpunk’ life practices.