Written by: Ilya Rokach, European Humanities University.
In the European Union, corporate social responsibility (CSR) has become a critical pillar in balancing societal well-being with business profitability, particularly in the context of digital transformation and an evolving regulatory landscape. Generative AI can produce texts, artworks that have won competitions (e.g. “Théâtre D’opéra Spatial”), and music, and it introduces new ethical and legal dilemmas that require rethinking CSR principles, particularly regarding human dignity, oversight, and accountability (Vincent 2022).
The EU has responded to these challenges through groundbreaking legislation, most notably the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act), positioning itself as a global leader in ethical AI regulation. These instruments aim to align technological development with the EU’s fundamental values – respect for human dignity, freedom, and democracy – and directly influence how CSR is practiced in relation to AI.
In the context of generative AI development, CSR requires an ethical framework that goes beyond established business practice. Two theories anchor the discussion here: stakeholder theory, developed by Edward Freeman, and the Kantian ethics of Immanuel Kant.
Stakeholder theory emphasizes that corporations must consider the interests of all parties who affect or are affected by their actions. Generative AI disrupts this balance, creating risks of job displacement, privacy breaches, and algorithmic bias. For instance, tools like ChatGPT or DALL-E can displace creative roles and raise questions about companies’ ethical responsibility for cultural preservation and fair employment.
From the perspective of EU policymaking, these concerns are acknowledged in the AI Act, which introduces a risk-based classification of AI systems and imposes mandatory safeguards for “high-risk” applications that may affect employment, education, and access to services – areas where stakeholder rights are most vulnerable.
Kantian ethics, on the other hand, is grounded in human dignity and moral duty; its central principle, the categorical imperative, requires treating people never merely as means but always as ends in themselves. This imperative conflicts with the unregulated use of personal data for AI training, where individuals’ data and creative works are treated merely as raw input without consent, violating privacy and autonomy.
This ethical violation is particularly relevant in the EU context, where the GDPR offers a comprehensive legal framework to safeguard personal data. Under Article 5 of the GDPR, personal data must be processed lawfully, fairly, and transparently. Training AI on scraped personal data without a lawful basis breaches these principles, exposing companies to penalties and reputational risks.
CSR in AI requires transparency, protection of workers’ rights, and prevention of discrimination. Under the AI Act and GDPR these are not optional guidelines but legal duties, institutionalizing ethical corporate behavior. Thus, integrating the tenets of stakeholder theory and Kantian ethics helps businesses minimize the risks associated with AI and build public trust in the disruptive technologies of the future.
Indeed, generative AI is a revolutionary technology because it changes industries, their functioning, and future development, and creates new opportunities for innovation. At the same time, its rapid diffusion concentrates CSR risk in three areas: privacy, creative labor and authors’ rights, and manipulation (including deepfakes).
Many training pipelines rely on large-scale data collection; absent valid consent, this conflicts with GDPR’s lawfulness, purpose limitation, and data minimization, and erodes trust – central to CSR.
For example, the startup Clearview AI trained its algorithms on billions of images scraped from social media platforms without users’ knowledge or consent. In 2022, the Italian Data Protection Authority fined Clearview AI 20 million euros for unlawfully collecting and using users’ data. The case shows that without embedded CSR-grade data governance and consent controls, enforcement alone may not prevent recurrence. For CSR, the impacts include erosion of user trust and heightened risks of misuse of sensitive data.
Generative AI tools such as ChatGPT, Midjourney, DALL-E, and Suno have reconfigured creative markets and raised systemic questions about training-data provenance and the boundaries of appropriation.
A prime example is the ongoing “Stability AI, Midjourney and DeviantArt” lawsuit in the US. In January 2023, artists Sarah Andersen, Kelly McKernan, and Carla Ortiz filed a class action lawsuit against Stability AI, Midjourney, and DeviantArt. They claimed that these companies used their work for AI training without consent, infringing on the copyrights of millions of artists. The core dispute concerns the limits of text-and-data mining and fair use versus the protection of creative labor. In August 2024, the judge allowed some of the plaintiffs’ claims to proceed, including allegations of copyright and trademark infringement (CourtListener 2023). For CSR this highlights not only legal exposure but also the need for equitable value distribution between platforms and creators.
In the context of CSR, this case raises pressing socioeconomic issues. As AI-generated content becomes more pervasive and creative tasks can be automated, human creators face financial instability, a shrinking market for creativity, and reduced opportunities for self-actualization. This shift not only affects individual creators but also reduces cultural diversity in general, as the results of algorithms are often devoid of the specificities and nuances that are inherent in human creativity.
In the European Union, these concerns have prompted renewed debate over copyright law and the ethical boundaries of AI training. The European Commission has acknowledged the need to modernize the EU copyright framework in response to the rise of generative AI. While the EU Copyright Directive (2019/790) includes exceptions for text and data mining (TDM), it also allows rights holders to opt out – a provision that reflects the EU’s attempt to balance innovation with the protection of creators’ rights.
The manipulative use of generative AI poses a particularly insidious threat to human rights because, unlike traditional forms of persuasion and social engineering, deepfake manipulation relies on sophisticated algorithms that exploit psychological vulnerabilities and create hyper-realistic artificial content. For example, in 2019 the UK subsidiary of a German energy company fell victim to financial fraud carried out with a voice deepfake: fraudsters impersonated the chief executive of the German parent firm during a phone call. The fictitious CEO, whose voice was spoofed using AI, asked the head of the UK subsidiary to transfer EUR 220,000 to a Hungarian supplier’s account, which in fact belonged to the criminals (The Wall Street Journal 2019).
The incident exemplifies how model-enabled impersonation amplifies traditional fraud risks and therefore widens the CSR perimeter from internal controls to broader societal impacts. Deepfake videos have also been used in political propaganda to undermine public trust in institutions: during the February 2020 legislative assembly elections in Delhi, two fake videos of Delhi BJP president Manoj Tiwari criticizing the incumbent government of Arvind Kejriwal went viral on WhatsApp to influence voter perceptions (Vice 2020). These episodes sharpen the CSR obligation to build provenance and disclosure into AI-enabled communications.
The European Union has taken note of such risks. The AI Act includes provisions that specifically address manipulative uses of AI, especially those involving biometric identification, emotional recognition, and deepfake technologies. It mandates transparency, labeling of synthetic content in specified contexts, and risk-mitigation duties for high-risk systems, moving manipulation controls from “best practice” to binding compliance.
Across these cases, the common denominator is governance gaps rather than any inevitability of harm; three responses follow:
– strengthen data governance: embed consent, purpose limitation, and minimization by design; document lawful bases; and subject training datasets to independent audit, with escalation to suspensions where fines fail to deter;
– advance ethical design: test for disparate impact, curate representative data, and gate launches on bias thresholds (a minimal example of such a gate follows this list); the retired Amazon hiring model shows how historical data can entrench discrimination if left uncorrected. In the EU, the AI Act requires that high-risk AI systems undergo conformity assessments, including checks for bias, discrimination, and representativeness in training data, giving legal force to core CSR expectations around non-discrimination and human-rights safeguards. In addition, companies should establish internal ethics committees or an ethics officer to oversee AI development, with authority to pause deployments pending risk remediation;
– strengthen oversight and interoperability of rules: adapt IP frameworks to clarify text-and-data mining boundaries and creator opt-outs, and coordinate enforcement across jurisdictions to limit arbitrage.
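To make the “bias threshold” gate in the second recommendation concrete, the following is a minimal Python sketch. The data, group labels, and the informal four-fifths benchmark used as the threshold are all illustrative assumptions, not a complete fairness audit or a prescribed methodology.

```python
# Minimal sketch (hypothetical data and threshold): gate a deployment on a
# disparate-impact check before launch.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs; returns the selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical outcomes: (group, selected) pairs.
    outcomes = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    ratio = disparate_impact_ratio(outcomes)
    print(f"disparate impact ratio: {ratio:.2f}")
    # Assumed launch gate: block deployment below the informal "four-fifths" benchmark.
    if ratio < 0.8:
        print("launch blocked pending bias remediation")
```

In practice such a check would be one of several release criteria, run on representative evaluation data and reviewed by the ethics committee described above.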
Within the EU, cross-border regulatory consistency is being pursued through mechanisms such as the Digital Services Act (DSA) and Digital Markets Act (DMA), which, together with the AI Act, form a holistic framework for governing digital platforms, AI deployment, and consumer protection.
Closing privacy, creative-labor, and manipulation gaps is not only risk reduction but a strategic trust mandate for firms deploying generative AI. There is no doubt that generative AI has far-reaching implications, both positive and negative, for human rights. According to the principles of international law, human rights are central to the interaction between government, business and society. The Universal Declaration of Human Rights sets out key principles of dignity, freedom and equality that remain relevant in the digital age, regardless of the challenges posed by the development of generative AI.
The Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social and Cultural Rights establish obligations for states to protect human rights from violations, including those committed by non-state actors. In AI contexts this spans misuse of data, discriminatory algorithms, and the erosion of creative livelihoods.
The United Nations Guiding Principles on Business and Human Rights translate this into a corporate duty of human-rights due diligence – privacy, transparency, and bias mitigation included.
These global principles are reinforced at the European level by instruments such as the Charter of Fundamental Rights of the European Union, which enshrines the right to privacy, data protection, and non-discrimination. The EU’s legislative and ethical frameworks thus provide a concrete foundation for aligning CSR with human rights in the age of AI.
Debates about universality versus cultural relativism matter for global AI rollout. A Rawlsian “justice as fairness” lens supports baseline rights (privacy, equality, non-discrimination) that travel across contexts, while implementation can reflect local conditions. For example, the right to data privacy, codified in instruments such as the GDPR, is often seen as universal. The GDPR, a flagship regulation of the EU, exemplifies how supranational legal instruments can set global benchmarks for ethical data governance.
In contrast, cultural-relativist views emphasize sensitivity to local social norms (e.g., collectivist framings of data governance). For companies implementing generative AI, a pragmatic hybrid approach (anchoring to the UNGPs and EU-level baselines while adapting processes in consultation with local regulators and civil society) balances legitimacy and feasibility.
In the EU, this hybrid model is partially institutionalized through the AI Act and the European AI Alliance – a forum that brings together stakeholders from different sectors and cultures to contribute to a common vision of trustworthy and inclusive AI. In practice, integrating universal standards helps minimize rights risks and build trust at deployment.
Currently, the European Union has made significant progress by adopting the AI Act, which categorizes systems by risk and hardwires transparency, accountability, and fairness obligations. The AI Act operationalizes many CSR-related principles by requiring documentation, human oversight, and risk assessments for high-risk AI systems. It also prohibits certain manipulative or exploitative AI uses, reinforcing human dignity and autonomy. Foresight (e.g., deepfakes, data misuse, conflict settings) should inform iterative updates to keep the framework adaptive.
As stated earlier, companies play a key role in the practical realization of human rights, and their development strategies should include provisions for integrating ethical principles into the processes of developing and using AI. Human-rights impact assessments (HRIAs) should become standard for higher-risk use cases, with clear remediation gates.
Another important element is the transparency of algorithms, which means that AI systems must be explainable and verifiable. Explainability calibrated to audiences (regulators, affected users, auditors) and structured stakeholder engagement both increase accountability and social license.
Another important tool is AI ethics and awareness training for product, legal, and leadership teams, tied to concrete go/no-go criteria.
Effective remedies play an important role in addressing human rights violations in the context of generative AI development. Companies should ensure that complaints and grievance mechanisms are accessible with protected channels for employees, users, and third parties, and time-bound response SLAs.
Partnerships and collaboration between business, government, and civil society enhance the effectiveness of such measures. For example, the Partnership on AI, created in 2016 by major technology companies including Amazon, Apple, DeepMind, Facebook (now Meta), Google, IBM, and Microsoft, demonstrates the value of cross-sector collaboration, pooling resources and expertise to address complex ethical issues.
Other success stories of corporations that have implemented ethical AI initiatives are also instructive. One such case is Microsoft’s responsible AI program, often cited as a benchmark for the responsible implementation of AI technologies. The company regularly conducts human rights impact assessments to evaluate the effects of its AI systems on society and has created a Responsible AI Standard to guide the design and implementation of AI systems (Microsoft 2022). In addition, Microsoft regularly publishes a Responsible AI Transparency Report, prioritizing fairness and transparency in the use of AI (Microsoft n.d.). Such practices illustrate how CSR can be operationalized in large-scale AI programs.
A further, widely discussed example is IBM’s refusal to develop facial recognition systems. In June 2020, IBM CEO Arvind Krishna sent a letter to the U.S. Congress announcing that the company would cease developing and selling general-purpose facial recognition and analysis technologies. In the letter, Krishna emphasized the company’s concerns about the potential use of such technologies for mass surveillance, racial profiling, and violations of basic human rights and freedoms (BBC 2020a). This is a paradigmatic example of aligning commercial choices with rights-based CSR.
Another example is the European AI Alliance’s collaborative model. The European AI Alliance is an initiative of the European Commission that aims to establish an open dialogue between businesses, policymakers, and civil society on the development and operation of AI. The Alliance’s recommendations, including a focus on transparency and fairness of algorithms, informed the drafting of the EU AI Act (European Commission n.d.).
The European AI Alliance exemplifies how the EU integrates stakeholder engagement into the policymaking process – a fundamental element of both CSR and democratic legitimacy in AI governance.
Collectively, these examples show a range of pathways (internal standards, market exits, and multi-stakeholder fora) for embedding CSR into AI lifecycles.
As generative AI is increasingly utilized in business operations, accountability becomes a cornerstone of ethical governance. Companies implementing AI systems must address not only the operational and technical challenges of their business but also the ethical and societal implications of the systems they deploy.
Accountability in the context of generative AI encompasses three core dimensions: transparency, fairness and responsiveness:
- transparency: documentation of training data sources, model limitations, and foreseeable risks for relevant audiences;
- fairness: measurable non-discrimination targets and monitoring;
- responsiveness: accessible challenge and appeal mechanisms and effective remedies.
These principles of accountability are embedded in EU law: the GDPR guarantees individuals the right to meaningful information about the logic of automated decisions and the right not to be subject to solely automated decision-making, while the AI Act further institutionalizes these safeguards, requiring documentation, auditability, and human oversight for high-risk AI systems.
These aspects of accountability are consistent with the following ethical theories:
- deontological ethics, which emphasizes companies’ moral obligation to act transparently and fairly regardless of potential benefits and profits;
- utilitarianism, which focuses on maximizing benefits to society and minimizing harms, making accountability a key tool for ensuring that technologies produce net social benefit;
- virtue ethics, which involves building a corporate culture based on honesty, fairness, and compassion.
Together, these perspectives give corporate accountability a pluralistic normative basis rather than tying it to a single ethical tradition.
From a legal perspective, accountability in AI governance now combines the GDPR’s data-rights regime with the AI Act’s risk-based obligations. However, enforcement gaps remain while this regulatory framework is still maturing and its obligations are phased in.
To address these gaps, the EU AI Act introduces a layered risk-based approach, mandates conformity assessments, and strengthens the role of market surveillance authorities in all Member States. It also complements existing frameworks like the DSA, which focuses on platform accountability and the removal of illegal AI-generated content.
Meeting accountability requirements calls for specific practical mechanisms. A central element is transparency, which should be built into every stage of AI development and deployment.
For example, explainable AI (XAI) initiatives aim to create algorithms whose outputs can be understood by both technical and non-technical audiences. One example is IBM’s Watson Health system, used to support physicians in making diagnoses and choosing treatments: it analyzes patient medical data (medical history, lab tests, etc.) together with scientific publications to offer treatment recommendations. IBM reports using explainability techniques so that clinicians can interrogate those recommendations (IBM n.d.).
Companies should also implement mechanisms for documenting AI-related processes, including data lineage, model cards, and post-deployment monitoring.
Within the EU, this aligns with obligations to maintain “technical documentation” under Article 11 of the AI Act, and to register high-risk AI systems in the public database maintained by the European Commission.
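As an illustration of what such documentation might contain, the sketch below shows a minimal model-card-style record. The field names and values are hypothetical; a real system would map its documentation to the AI Act’s actual technical-documentation requirements rather than treating this as a compliance template.

```python
# Minimal sketch of a model-card-style documentation record
# (hypothetical fields; not an official AI Act documentation template).
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    training_data_sources: list[str]          # data lineage: where training data came from
    known_limitations: list[str]              # documented limitations and foreseeable risks
    fairness_metrics: dict[str, float] = field(default_factory=dict)
    post_deployment_monitoring: str = ""      # how the system is monitored after release

card = ModelCard(
    model_name="support-ticket-classifier",
    version="1.3.0",
    intended_use="Routing customer support tickets; not for employment decisions.",
    training_data_sources=["internal tickets 2021-2023 (consented)", "synthetic augmentation"],
    known_limitations=["degraded accuracy on non-English tickets"],
    fairness_metrics={"disparate_impact_ratio": 0.91},
    post_deployment_monitoring="monthly drift and bias report reviewed by the ethics committee",
)

# Serialize the record for audit trails or registration workflows.
print(json.dumps(asdict(card), indent=2))
```

Keeping such records versioned alongside the model makes later audits and database registrations far easier than reconstructing documentation after deployment.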
Of course, government and international organizations play an important role in accountability. For example, in September 2024, the UN’s High-Level Advisory Body on Artificial Intelligence published a report with recommendations on AI regulation. One of the key proposals is the creation of an independent International Scientific Panel on AI, supported by ITU/UNESCO to synthesize evidence and support intergovernmental coordination (United Nations 2024).
A critical element of an accountability framework is the availability of grievance mechanisms. Companies should provide users with accessible ways to report AI-related issues: clear intake, triage, and escalation pathways, plus reporting on outcomes. For example, Google has public AI responsibility principles and user-facing feedback channels (Google n.d.).
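As a sketch of what “clear intake, triage, and escalation pathways” with time-bound SLAs could look like in code, the example below uses hypothetical complaint categories and response deadlines; it is not modeled on any specific company’s grievance system.

```python
# Minimal sketch of a grievance intake record with a time-bound response SLA
# (hypothetical categories and deadlines, for illustration only).
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Assumed triage categories mapped to response deadlines in hours.
SLA_HOURS = {"privacy": 72, "bias_or_discrimination": 48, "harmful_content": 24, "other": 120}

@dataclass
class Grievance:
    reference: str
    category: str
    received_at: datetime

    def response_due(self) -> datetime:
        hours = SLA_HOURS.get(self.category, SLA_HOURS["other"])
        return self.received_at + timedelta(hours=hours)

    def is_overdue(self, now: datetime) -> bool:
        return now > self.response_due()

complaint = Grievance("GRV-0042", "harmful_content",
                      datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc))
print("response due:", complaint.response_due().isoformat())
print("overdue now:", complaint.is_overdue(datetime.now(timezone.utc)))
```

Publishing aggregate statistics from such a queue (volumes, categories, resolution times) is one way to make the “reporting on outcomes” element verifiable.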
However, corporations are not always held accountable: in 2020, Google was at the center of a scandal involving the firing of Timnit Gebru, one of the leading researchers in the field of AI ethics (BBC 2020b). The episode underscores the gap that can exist between stated principles and internal accountability.
Another example is the Cambridge Analytica scandal, which broke in 2018. The company accessed the personal data of millions of Facebook users without their consent and used it for data-driven political targeting in connection with the 2016 US presidential election and the Brexit referendum (BBC 2018). The case illustrates how weak data-governance controls can scale societal harm, reinforcing why GDPR-level safeguards and enforcement matter.
Companies have also faced criticism for insufficient control over AI-generated content: in 2023, Twitter (now X) was criticized and even dubbed a “ghost town of bots” as spam content spread at scale across the platform (ABC News 2024). This highlights platform-level obligations (e.g., DSA duties) to detect, label, and remove harmful or misleading AI-generated content.
This case reveals three interlinked problems: insufficient algorithmic control (Twitter’s moderation system was not adapted to the new complexity of AI-generated content); a lack of AI content governance (mechanisms to identify or label AI content were not implemented); and risks of human rights violations (the spread of inaccurate content destabilized political discourse in certain regions, undermining trust in government institutions).
The EU’s regulatory instruments such as the DSA and Code of Practice on Disinformation are aimed precisely at these types of risks – setting clear responsibilities for platforms to detect, label, and remove harmful or misleading AI-generated content.
To effectively manage risk and ensure fairness, businesses should consider the following recommendations:
- institutionalize accountability: empower ethics governance with decision rights; set bias/robustness gates as ship-criteria; and tie leadership incentives to compliance outcomes;
- align to AI Act/DSA through living compliance: maintain technical documentation, risk registers, incident reporting, and independent audits.
- deploy technical controls: dataset versioning, data-loss prevention for sensitive sources, watermarking/disclosure for synthetic media, and continuous post-deployment monitoring (a minimal disclosure-labeling sketch follows below).
These layers of culture, controls, and compliance mutually reinforce one another, building trust and reducing systemic risk.
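To make the synthetic-media disclosure point concrete, the sketch below attaches a machine-readable provenance label to generated content. The field names and schema are hypothetical and not drawn from any particular standard; production systems would more likely build on an established provenance scheme such as C2PA content credentials.

```python
# Minimal sketch: attach a machine-readable disclosure label to AI-generated content.
# Field names are hypothetical; real deployments would align with an established
# provenance standard rather than this ad-hoc schema.
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic_content(content: bytes, generator: str, purpose: str) -> dict:
    """Return a disclosure record binding a content hash to generation metadata."""
    return {
        "ai_generated": True,
        "generator": generator,                     # which model/system produced the content
        "purpose": purpose,                         # declared use, e.g. "marketing image"
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

image_bytes = b"...rendered image bytes..."
label = label_synthetic_content(image_bytes,
                                generator="in-house-diffusion-v2",
                                purpose="marketing image")
print(json.dumps(label, indent=2))
```

Binding the label to a content hash lets downstream platforms verify that the disclosure refers to the exact asset being distributed, supporting the labeling duties discussed above.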
Generative AI is a disruptive technology that is opening up new business opportunities while also jeopardizing respect for human rights and raising significant ethical issues. This paper reframed CSR for generative AI around three core risk domains (privacy, creative labor, manipulation) and grounded corporate obligations in stakeholder and Kantian ethics.
Operationally, firms should combine HRIAs, explainability, dataset governance, grievance mechanisms, and stakeholder engagement with AI Act/DSA compliance.
The European Union, through its legal instruments and policymaking initiatives, is actively shaping this new ethical frontier, offering a model that can be adapted globally.
Responsible use of generative AI requires binding governance, measurable fairness, and enforceable remedies, so innovation advances without trading away rights.
References
- Vincent, J. (2022, September 2). Who gets credit for AI-generated art? The question is posing problems for artists. The New York Times. Retrieved from https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html
- CourtListener. (2023). Case Summary: Stability AI, Midjourney, and DeviantArt. Retrieved from https://storage.courtlistener.com/recap/gov.uscourts.cand.407208/gov.uscourts.cand.407208.223.0_2.pdf
- The Wall Street Journal. (2019). Fraudsters Use AI to Mimic CEO’s Voice in Unusual Cybercrime Case. Retrieved from https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
- Vice. (2020). The First Use of Deepfakes in Indian Election by BJP. Retrieved from https://www.vice.com/en/article/the-first-use-of-deepfakes-in-indian-election-by-bjp/
- Microsoft. (2022). Microsoft’s Framework for Building AI Systems Responsibly. Retrieved from https://blogs.microsoft.com/on-the-issues/2022/06/21/microsofts-framework-for-building-ai-systems-responsibly/
- Microsoft. (n.d.). Responsible AI Transparency Report. Retrieved from https://www.microsoft.com/en-us/corporate-responsibility/responsible-ai-transparency-report
- BBC. (2020a). IBM to Stop Offering Facial Recognition Software. Retrieved from https://www.bbc.com/news/technology-52978191
- European Commission. (n.d.). European AI Alliance. Retrieved from https://digital-strategy.ec.europa.eu/en/policies/european-ai-alliance
- IBM. (n.d.). Explainable AI: IBM Watson Health. Retrieved from https://www.ibm.com/think/topics/explainable-ai
- United Nations. (2024). Governing AI for Humanity: Final Report. Retrieved from https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf
- Google. (n.d.). AI Responsibility Principles. Retrieved from https://ai.google/responsibility/principles/
- BBC. (2020b). Google Fires AI Ethics Researcher Timnit Gebru. Retrieved from https://www.bbc.com/news/technology-55281862
- BBC. (2018). Facebook and Cambridge Analytica Scandal. Retrieved from https://www.bbc.com/news/technology-43465968
- ABC News. (2024). Twitter X Fighting Bot Problem as AI Spam Floods the Internet. Retrieved from https://www.abc.net.au/news/science/2024-02-28/twitter-x-fighting-bot-problem-as-ai-spam-floods-the-internet/103498070



