🔥 AI Ethics Hot Topic! 🔥 Authors vs. Big Tech! 🤖✍️
You've probably seen the headlines: Major authors (like Sarah Silverman, Michael Chabon, etc.) are protesting companies like Meta & OpenAI! 📣
The Issue: They claim their copyrighted books 📚 were used without permission to train powerful AI models (like ChatGPT & Llama).
Why it's an AI Ethics Minefield: 🤔
♧ Fairness? Is it fair for AI companies to profit from authors' work without asking or paying? 💰
♧ Consent? Authors didn't agree to have their life's work become AI training fuel. Where's the consent? 🤷‍♀️
♧ Copyright? Does training AI on books violate copyright, or is it transformative 'fair use'? The law is playing catch-up! ©️⚖️
♧ Innovation vs. Rights? AI needs data to improve, but creators need their rights protected. How do we balance this? 💡 vs 🛡️
This sparks a huge debate about the future of creativity, compensation, and how we build AI responsibly.
What's YOUR take? 👇 React with an emoji!
👍 = Authors MUST be asked & compensated. It's their intellectual property!
⚡ = AI needs data to advance! It's like humans learning from books; fair use for progress.
#AIethics #AuthorsRights #Meta #OpenAI #Copyright #TechNews #ArtificialIntelligence #FairUse
4 Comments
This is an important and actually quite fascinating topic. I like to use the term 'agency chain', which refers to the overall contribution structure behind any value-creation process. LLM plagiarism is just one example of increasingly common social and economic dynamics where our old systems struggle to facilitate effective collaboration and coordination.
You could go for distributed training with varying levels of data sovereignty, etc., but I would rather take a step back and ask much more fundamental questions about how decision-making power distributions are formed in the world in general. I'm trying to transcend the immediate questions of fairness, consent, copyright, and so on. It would mean that we are no longer trying to prevent the use of openly accessible information, or to set conditions on it, but instead becoming collectively aware of who is behind any piece of value created. It is only natural that information flows and spreads at near-zero cost. Additions are always easier than subtractions. Doing is easier than undoing.
If one wants to ask how to reward contributors in value chains, one first has to ask how to identify the contributions to begin with. In this specific situation, the first thing is for the AI to have an epistemological understanding of its own knowledge. I see this as part of a cognition's capacity for reflection and one of the core components of (open-ended) intelligence itself. Any intelligent system needs to push the limits of reflection in order to transcend itself on a continuous basis. It is a bit like learning how to learn how to learn how to learn... higher-level metacognition, so to speak.
Once we have some knowledge of the agency and impact behind actions, it becomes possible to start making value judgments about them. In fact, when the underlying knowledge is properly recorded, the judgments can always be remade when new information on the impact becomes available. There is permanent skin in the game.
Just to give a simple example, think of the Mindplex reputation algorithm. Let's say there is always a collective mechanism to change it so that it better corresponds to the will of the community. In that case, instead of trying to game the current algorithm, your best guidance is to act according to what you believe the community's dynamic collective values are. In other words, your best ethical judgment.
At the end of the day, in principle, anyone can run their own personal value algorithm on the available data and act accordingly. If you pushed this logic to its extreme, we would be in a situation where every action is cross-valued by everyone else. On the way there, we would approach a certain kind of Pareto optimality, where these valuations settle into a kind of equilibrium. Of course the world is dynamic and we would never get there even in theory, but one doesn't need to. What you can get, though, are new sorts of value attractors that are far more complex, yet far richer, than, say, the money system. The challenge is that the multidimensionality and other complexity of real-world values must always be pushed somewhere (or be discarded, as has historically been the case). It is a question of recording real-world actions on collective cognitions (replicative ledgers), measuring their real-world impact, and finally aggregating reasonable valuations for a variety of needs and situations.
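To make that last part slightly more concrete, here is a minimal sketch in Python of an append-only ledger of contributions that anyone can later value with their own personal value function. Everything here (the names, the fields, the scoring) is invented purely for illustration; it is not any real Mindplex code.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass(frozen=True)
class Contribution:
    contributor: str   # who acted
    action: str        # what was done
    impact: float      # measured real-world impact; can be re-estimated later

@dataclass
class Ledger:
    entries: List[Contribution] = field(default_factory=list)

    def record(self, c: Contribution) -> None:
        # Append-only: additions are easier than subtractions.
        self.entries.append(c)

def personal_valuation(ledger: Ledger,
                       value_fn: Callable[[Contribution], float]) -> Dict[str, float]:
    # Aggregate each contributor's worth under *one person's* value function.
    # Because the raw entries stay on the ledger, the judgment can be remade
    # later with a different value_fn or with updated impact figures.
    totals: Dict[str, float] = {}
    for c in ledger.entries:
        totals[c.contributor] = totals.get(c.contributor, 0.0) + value_fn(c)
    return totals

# Example: one reader weights training-data contributions twice as heavily.
ledger = Ledger()
ledger.record(Contribution("author_A", "book_used_in_training", impact=3.0))
ledger.record(Contribution("lab_B", "model_training_run", impact=5.0))
my_view = personal_valuation(
    ledger,
    lambda c: c.impact * (2.0 if c.action == "book_used_in_training" else 1.0),
)
print(my_view)   # {'author_A': 6.0, 'lab_B': 5.0}

The point is not the particular scoring: the recorded actions stay fixed, while the valuations computed on top of them can keep changing as values and impact estimates evolve.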
In such a world, what ends up steering your actions is your expectation of the grand collective value algorithm over the infinite future: in other words, dynamic collective values. To the extent that we, as a global society, can empower everyday people with the right kinds of tools for this, we can transcend many of our current worsening problems, including unfairly acquired AI monopolies.
My apologies for going slightly off-topic. It is just something I've been thinking about recently.
I absolutely love how deeply you’ve reflected on this, Henriq! Your perspective on “agency chains” and the layers of metacognition really adds a powerful dimension to the conversation. It’s clear you’re not just thinking about the surface-level ethical concerns but also about the systemic shifts needed for truly equitable AI. We hope that with continued dialogue and community-driven innovation, we’ll soon see real progress toward addressing these challenges.
Wow, HenriqC! You just turned this into a very interesting thought experiment. Hmm, let me ask you one thing. Before we start discussing dynamic collective value algorithms or reputation systems, or even how to develop AI systems with epistemological self-awareness: how can we systematically identify and record contributions in the first place?
What underlying infrastructure would be necessary to reliably trace and record individual contributions within a value-creation “agency chain,” and how might such a system handle the challenges of attribution across distributed and dynamic networks? Your response excited me (time to change our value systems!), so I want to explore the technical, ethical, and logistical dimensions of identifying and crediting every contributor, both human and machine, within a complex, interconnected framework. Are we even capable of creating such infrastructure?
Thank you for sharing!