Hiring for AI Ethics: What Creative Teams Need Now

When Nathalie Berdat joined the BBC two years ago as “employee number one” in the data governance function, she entered a role that barely existed in media organisations a decade prior. Today, as Head of Data and AI Governance, Berdat represents the vanguard of an emerging professional class: specialists tasked with navigating the treacherous intersection of artificial intelligence, creative integrity, and legal compliance. These aren't just compliance officers with new titles. They're architects of entirely new organisational frameworks designed to operationalise ethical AI use whilst preserving what makes creative work valuable in the first place.
The rise of generative AI has created an existential challenge for creative industries. How do you harness tools that can generate images, write scripts, and compose music whilst ensuring that human creativity remains central, copyrights are respected, and the output maintains authentic provenance? The answer, increasingly, involves hiring people whose entire professional existence revolves around these questions.
“AI governance is a responsibility that touches an organisation's vast group of stakeholders,” explains research from IBM on AI governance frameworks. “It is a collaboration between AI product teams, legal and compliance departments, and business and product owners.” This collaborative necessity has spawned roles that didn't exist five years ago: AI ethics officers, responsible AI leads, copyright liaisons, content authenticity managers, and digital provenance specialists. These positions sit at the confluence of technology, law, ethics, and creative practice, requiring a peculiar blend of competencies that traditional hiring pipelines weren't designed to produce.
The Urgency Behind the Hiring Wave
The statistics tell a story of rapid transformation. Recruitment for Chief AI Officers has tripled in the past five years, according to industry research. By 2026, over 40% of Fortune 500 companies are expected to have a Chief AI Officer role. In March 2024, the White House Office of Management and Budget mandated that all executive departments and agencies appoint a Chief AI Officer within 60 days.
Consider Getty Images, which employs over 1,700 individuals and represents the work of more than 600,000 journalists and creators worldwide. When the company launched its ethically trained generative AI tool in 2023, CEO Craig Peters became one of the industry's most vocal advocates for copyright protection and responsible AI development. Getty's approach, which includes compensating contributors whose work was included in training datasets, established a template that many organisations are now attempting to replicate.
The Writers Guild of America strike in 2023 crystallised the stakes. Hollywood writers walked out, in part, to protect their livelihoods from generative AI. The resulting contract included specific provisions requiring writers to obtain consent before using generative AI, and allowing studios to “reject a use of GAI that could adversely affect the copyrightability or exploitation of the work.” These weren't abstract policy statements. They were operational requirements that needed enforcement mechanisms and people to run them.
Similarly, SAG-AFTRA established its “Four Pillars of Ethical AI” in 2024: transparency (a performer's right to know the intended use of their likeness), consent (the right to grant or deny permission), compensation (the right to fair compensation), and control (the right to set limits on how, when, where and for how long their likeness can be used). Each pillar translates into specific production pipeline requirements. Someone must verify that consent was obtained, track where digital replicas are used, ensure performers are compensated appropriately, and audit compliance.
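To make this concrete, here is a minimal sketch, in Python, of how the four pillars might be encoded as a pre-production gate. The record fields and checks are hypothetical illustrations, not drawn from any actual SAG-AFTRA or studio tooling; they simply show how each pillar becomes something a person (or a system they oversee) must verify before a digital replica is used.

```python
from dataclasses import dataclass, field

@dataclass
class ReplicaUseRequest:
    """One proposed use of a performer's digital replica (illustrative fields only)."""
    performer_id: str
    intended_use: str          # transparency: what the replica will be used for
    consent_granted: bool      # consent: explicit permission on file for this use
    compensation_agreed: bool  # compensation: rate agreed and documented
    usage_limits: dict = field(default_factory=dict)  # control: e.g. {"territory": "UK", "term_months": 12}

def pillar_check(request: ReplicaUseRequest) -> list[str]:
    """Return unmet pillar requirements; an empty list means the use can go to review."""
    issues = []
    if not request.intended_use:
        issues.append("transparency: intended use not described")
    if not request.consent_granted:
        issues.append("consent: no documented permission for this use")
    if not request.compensation_agreed:
        issues.append("compensation: no agreed rate on file")
    if not request.usage_limits:
        issues.append("control: no limits (scope, term, territory) recorded")
    return issues

req = ReplicaUseRequest("perf-001", "background crowd scene, episode 4",
                        consent_granted=True, compensation_agreed=True,
                        usage_limits={"term_months": 12})
print(pillar_check(req))  # [] means all four pillars are documented for this use
```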
Deconstructing the Role
The job descriptions emerging across creative industries reveal roles that are equal parts philosopher, technologist, and operational manager. According to comprehensive analyses of AI ethics officer positions, the core responsibilities break down into several categories.
Policy Development and Implementation: AI ethics officers develop governance frameworks, conduct AI audits, and implement compliance processes to mitigate risks related to algorithmic bias, privacy violations, and discriminatory outcomes. This involves translating abstract ethical principles into concrete operational guidelines that production teams can follow.
At the BBC, James Fletcher serves as Lead for Responsible Data and AI, working alongside Berdat to engage staff on artificial intelligence issues. Their work includes creating frameworks that balance innovation with responsibility. Laura Ellis, the BBC's head of technology forecasting, focuses on ensuring the organisation is positioned to leverage emerging technology appropriately. This tripartite structure reflects a mature approach to operationalising ethics across a large media organisation.
Technical Assessment and Oversight: AI ethics officers need substantial technical literacy. They must understand machine learning algorithms, data processing, and model interpretability. When Adobe's AI Ethics Review Board evaluates new features before market release, the review involves technical analysis, not just philosophical deliberation. The company implemented this comprehensive AI programme in 2019, requiring that all products undergo training, testing, and ethics review guided by principles of accountability, responsibility, and transparency.
Dana Rao, who served as Adobe's Executive Vice President, General Counsel and Chief Trust Officer until September 2024, oversaw the integration of ethical considerations across Adobe's AI initiatives, including the Firefly generative AI tool. The role required bridging legal expertise with technical understanding, illustrating how these positions demand polymath capabilities.
Stakeholder Education and Training: Perhaps the most time-consuming aspect involves educating team members about AI ethics guidelines and developing a culture that preserves ethical and human rights considerations. Career guidance materials emphasise that AI ethics roles require “a strong foundation in computer science, philosophy, or social sciences. Understanding ethical frameworks, data privacy laws, and AI technologies is crucial.”
Operational Integration: The most challenging aspect involves embedding ethical considerations into existing production pipelines without creating bottlenecks that stifle creativity. Research on responsible AI frameworks emphasises that “mitigating AI harms requires a fundamental re-architecture of the AI production pipeline through an augmented AI lifecycle consisting of five interconnected phases: co-framing, co-design, co-implementation, co-deployment, and co-maintenance.”
The Copyright Liaison
Whilst AI ethics officers handle broad responsibilities, copyright liaisons focus intensely on intellectual property considerations specific to AI-assisted creative work. The U.S. Copyright Office's guidance, developed after reviewing over 10,000 public comments, established that AI-generated outputs based on prompts alone don't merit copyright protection. Creators must add considerable manual input to AI-assisted work to claim ownership.
This creates immediate operational challenges. How much human input is “considerable”? What documentation proves human authorship? Who verifies compliance before publication? Copyright liaisons exist to answer these questions on a case-by-case basis.
Provenance Documentation: Copyright liaisons ensure that creators keep records of their contributions to AI-assisted works. The Content Authenticity Initiative (CAI), founded in November 2019 by Adobe, The New York Times and Twitter, exists for exactly this purpose and now counts over 3,700 members. In February 2021, Adobe and Microsoft, together with Truepic, Arm, Intel and the BBC, founded the Coalition for Content Provenance and Authenticity (C2PA), which develops the open technical standard for content provenance.
The C2PA standard captures and preserves details about origin, creation, and modifications in a verifiable way. Information such as the creator's name, tools used, editing history, and time and place of publication is cryptographically signed. Copyright liaisons in creative organisations must understand these technical standards and ensure their implementation across production workflows.
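As a rough illustration of the kind of metadata involved, the sketch below assembles a simplified provenance record in Python. It is not the actual C2PA manifest schema, which defines specific assertion types and requires cryptographic signing with a trusted credential; it only shows the shape of the information a production workflow needs to capture and preserve.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(asset_bytes: bytes, creator: str, tools: list[str],
                            edit_history: list[str]) -> dict:
    """Assemble a simplified provenance record for a creative asset.
    A real C2PA manifest is cryptographically signed and bound to the asset."""
    return {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "creator": creator,
        "tools_used": tools,          # e.g. ["Photoshop", "Firefly (generative fill)"]
        "edit_history": edit_history, # ordered list of modifications
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

record = build_provenance_record(b"...image bytes...", "A. Creator",
                                 ["Photoshop", "Firefly"],
                                 ["crop", "generative fill: sky"])
print(json.dumps(record, indent=2))
```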
Legal Assessment and Risk Mitigation: Getty Images' lawsuit against Stability AI, which proceeded through 2024, exemplifies the legal complexities at stake. The case involved claims of copyright infringement, database right infringement, trademark infringement and passing off. Grant Farhall, Chief Product Officer at Getty Images, and Lindsay Lane, Getty's trial lawyer, navigated these novel legal questions. Organisations need internal expertise to avoid similar litigation risks.
Rights Clearance and Licensing: AI-assisted production complicates traditional rights clearance exponentially. If an AI tool was trained on copyrighted material, does using its output require licensing? If a tool generates content similar to existing copyrighted work, what's the liability? The Hollywood studios' June 2024 lawsuit against AI companies reflected industry-wide anxiety. Major figures including Ron Howard, Cate Blanchett and Paul McCartney signed letters expressing alarm about AI models training on copyrighted works.
Organisational Structures
Research indicates significant variation in reporting structures, with important implications for how effectively these roles can operate.
Reporting to the General Counsel: In 71% of the World's Most Ethical Companies, ethics and compliance teams report to the General Counsel. This structure ensures that ethical considerations are integrated with legal compliance. Adobe's structure, with Dana Rao serving as both General Counsel and Chief Trust Officer, exemplified this approach. The downside is potential over-emphasis on legal risk mitigation at the expense of broader ethical considerations.
Reporting to the Chief AI Officer: As Chief AI Officer roles proliferate, many organisations structure AI ethics officers as direct reports to the CAIO. This creates clear lines of authority and ensures ethics considerations are integrated into AI strategy from the beginning. The advantage is proximity to technical decision-making; the risk is potential subordination of ethical concerns to business priorities.
Direct Reporting to the CEO: Some organisations position ethics leadership with direct CEO oversight. This structure, used by 23% of companies, emphasises the strategic importance of ethics and gives ethics officers significant organisational clout. The BBC's structure, with Berdat and Fletcher operating at senior levels with broad remits, suggests this model.
The Question of Centralisation: Research indicates that centralised AI governance provides better risk management and policy consistency. However, creative organisations face a particular tension. Centralised governance risks becoming a bottleneck that slows creative iteration. The emerging consensus involves centralised policy development with distributed implementation. A central AI ethics team establishes principles and standards, whilst embedded specialists within creative teams implement these standards in context-specific ways.
Risk Mitigation in Production Pipelines
The true test of these roles involves daily operational reality. How do abstract ethical principles translate into production workflows that creative professionals can follow without excessive friction?
Intake and Assessment Protocols: Leading organisations implement AI portfolio management intake processes that identify and assess AI risks before projects commence. This involves initial use case selection frameworks and AI Risk Tiering assessments. For example, using AI to generate background textures for a video game presents different risks than using AI to generate character dialogue or player likenesses. Risk tiering enables proportionate oversight.
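A minimal sketch of such a tiering rule, assuming hypothetical use-case attributes rather than any published framework, might look like this:

```python
def risk_tier(use_case: dict) -> str:
    """Assign an illustrative risk tier to a proposed AI use case.
    Attributes and thresholds are hypothetical, not an industry standard."""
    if use_case.get("uses_performer_likeness") or use_case.get("generates_dialogue"):
        return "high"    # touches consent, publicity rights, or authorship directly
    if use_case.get("output_publicly_released"):
        return "medium"  # audience-facing output needs provenance and rights review
    return "low"         # internal, non-distributed uses (e.g. background textures)

print(risk_tier({"output_publicly_released": False}))  # low
print(risk_tier({"generates_dialogue": True}))         # high
```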
Checkpoint Integration: Rather than ethics review happening at project completion, leading organisations integrate ethics checkpoints throughout development. A typical production pipeline might include checkpoints at project initiation (risk assessment, use case approval), development (training data audit, bias testing), pre-production (content authenticity setup, consent verification), production (ongoing monitoring), post-production (final compliance audit), and distribution (rights verification, authenticity certification).
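In code terms, the checkpoints can be thought of as stage gates, as in the illustrative sketch below; the stage names mirror the pipeline described above, while the gating logic itself is a simplifying assumption rather than any organisation's actual tooling.

```python
# Each pipeline stage carries its own governance checks; a project cannot
# advance until every check for its current stage has been signed off.
CHECKPOINTS = {
    "initiation":      ["risk assessment", "use case approval"],
    "development":     ["training data audit", "bias testing"],
    "pre-production":  ["content authenticity setup", "consent verification"],
    "production":      ["ongoing monitoring"],
    "post-production": ["final compliance audit"],
    "distribution":    ["rights verification", "authenticity certification"],
}

def can_advance(stage: str, completed: set[str]) -> bool:
    """True if every check required at this stage has been completed."""
    return set(CHECKPOINTS[stage]) <= completed

print(can_advance("pre-production", {"content authenticity setup"}))  # False: consent not yet verified
```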
SAG-AFTRA's framework provides concrete examples. Producers must provide performers with “notice ahead of time about scanning requirements with clear and conspicuous consent requirements” and “detailed information about how they will use the digital replica and get consent, including a 'reasonably specific description' of the intended use each time it will be used.”
Automated Tools and Manual Oversight: Adobe's PageProof Smart Check feature automatically reveals authenticity data, showing who created content, what AI tools were used, and how it's been modified. However, research consistently emphasises that “human oversight remains crucial to validate results and ensure accurate verification.” Automated tools flag potential issues; human experts make final determinations.
Documentation and Audit Trails: Every AI-assisted creative project requires comprehensive records: what tools were used, what training data those tools employed, what human contributions were made, what consent was obtained, what rights were cleared, and what the final provenance trail shows. The C2PA standard provides technical infrastructure, but as one analysis noted: “as of 2025, adoption is lacking, with very little internet content using C2PA.” The gap between technical capability and practical implementation reflects the operational challenges these roles must overcome.
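A simple sketch of that audit discipline, using hypothetical field names, is a completeness check over each project's record:

```python
REQUIRED_FIELDS = [
    "tools_used", "training_data_sources", "human_contributions",
    "consents_obtained", "rights_cleared", "provenance_trail",
]

def audit_gaps(project_record: dict) -> list[str]:
    """Return the documentation fields that are missing or empty for a project."""
    return [f for f in REQUIRED_FIELDS if not project_record.get(f)]

record = {
    "tools_used": ["Firefly"],
    "training_data_sources": ["licensed stock library"],
    "human_contributions": ["concept sketches", "final compositing"],
    "consents_obtained": [],
    "rights_cleared": ["music licence #1042"],
}
print(audit_gaps(record))  # ['consents_obtained', 'provenance_trail']
```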
The Competency Paradox
Traditional educational pathways don't produce candidates with the full spectrum of required competencies. These roles require a combination of skills that academic programmes weren't designed to teach together.
Technical Foundations: AI ethics officers typically hold bachelor's degrees in computer science, data science, philosophy, ethics, or related fields. Technical proficiency is essential, but technical knowledge alone is insufficient. An AI ethics officer who understands neural networks but lacks philosophical grounding will struggle to translate technical capabilities into ethical constraints. Conversely, an ethicist who can't understand how algorithms function will propose impractical guidelines that technologists ignore.
Legal and Regulatory Expertise: The U.S. Copyright Office published its updated report in 2024 confirming that AI-generated content may be eligible for copyright protection if a human has made substantial creative contribution. However, as legal analysts noted, “the guidance is still vague, and whilst it affirms that selecting and arranging AI-generated material can qualify as authorship, the threshold of 'sufficient creativity' remains undefined.”
Working in legal ambiguity requires particular skills: comfort with uncertainty, the ability to make judgement calls with incomplete information, and an understanding of how to manage risk when clear rules don't exist. The European Union's AI Act, passed in 2024, classifies certain AI applications as high-risk and emphasises transparency, safety, and fundamental rights. The U.S. Congressional AI Working Group introduced the “Transparent AI Training Data Act” in May 2024, requiring companies to disclose the datasets used to train their models.
Creative Industry Domain Knowledge: These roles require deep understanding of creative production workflows. An ethics officer who doesn't understand how animation pipelines work or what constraints animators face will design oversight mechanisms that creative teams circumvent or ignore. The integration of AI into post-production requires treating “the entire post-production pipeline as a single, interconnected system, not a series of siloed steps.”
Domain knowledge also includes understanding creative culture. Creative professionals value autonomy, iteration, and experimentation. Oversight mechanisms that feel like bureaucratic impediments will generate resistance. Effective ethics officers frame their work as enabling creativity within ethical bounds rather than restricting it.
Communication and Change Management: An AI ethics officer might need to explain transformer architectures to the legal team, copyright law to data scientists, and production pipeline requirements to executives who care primarily about budget and schedule. This requires translational fluency across multiple professional languages. Change management skills are equally critical, as implementing new AI governance frameworks means changing how people work.
Ethical Frameworks and Philosophical Grounding: Microsoft's framework for responsible AI articulates six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Applying these principles to specific cases requires philosophical sophistication. When is an AI-generated character design “fair” to human artists? How much transparency about AI use is necessary in entertainment media versus journalism? These questions require reasoned judgement informed by ethical frameworks.
Comparing Job Descriptions
Analysis of AI ethics officer and copyright liaison job descriptions across creative companies reveals both commonalities and variations reflecting different organisational priorities.
Entry to Mid-Level Positions typically emphasise bachelor's degrees in relevant fields, 2-5 years' experience, technical literacy with AI/ML systems, familiarity with regulations and ethical frameworks, and strong communication skills. Salaries typically range from £60,000 to £100,000. These positions focus on implementation: executing governance frameworks, conducting audits, providing guidance, and maintaining documentation.
Senior-Level Positions (AI Ethics Lead, Head of Responsible AI) emphasise advanced degrees, 7-10+ years' progressive experience, demonstrated thought leadership, experience building governance programmes from scratch, and strategic thinking capability. Salaries typically range from £100,000 to £200,000 or more. Senior roles focus on strategy: establishing governance frameworks, defining organisational policy, external representation, and building teams.
Specialist Copyright Liaison Positions emphasise law degrees or equivalent IP expertise, deep knowledge of copyright law, experience with rights clearance and licensing, familiarity with technical standards like C2PA, and understanding of creative production workflows. These positions bridge legal expertise with operational implementation.
Organisational Variations: Tech platforms (Adobe, Microsoft) emphasise technical AI expertise. Media companies (BBC, The New York Times) emphasise editorial judgement. Entertainment studios emphasise union negotiations experience. Stock content companies (Getty Images, Shutterstock) emphasise rights management and creator relations.
Insights from Early Hires
Whilst formal interview archives remain limited (the roles are too new), available commentary from practitioners reveals common challenges and emerging best practices.
The Cold Start Problem: Nathalie Berdat's description of joining the BBC as “employee number one” in data governance captures a common experience. Early hires often enter organisations without established frameworks or organisational understanding of what the role should accomplish. Successful early hires emphasise the importance of quick wins: identifying high-visibility, high-value interventions that demonstrate the role's value and build organisational credibility.
Balancing Principle and Pragmatism: A recurring theme involves the tension between ethical ideals and operational reality. Effective ethics officers develop pragmatic frameworks that move organisations toward ethical ideals whilst acknowledging constraints. The WGA agreement provides an instructive example, permitting generative AI use under specific circumstances, with guardrails that protect writers whilst preserving studios' copyright interests.
The Importance of Cross-Functional Relationships: AI governance “touches an organisation's vast group of stakeholders.” Effective ethics officers invest heavily in building relationships across functions. These relationships provide early visibility into initiatives that may raise ethical issues, create channels for influence, and build reservoirs of goodwill. Adobe's structure, with the Ethical Innovation team collaborating closely with Trust and Safety, Legal, and International teams, exemplifies this approach.
Technical Credibility Matters: Ethics officers without technical credibility struggle to influence technical teams. Successful ethics officers invest in building technical literacy to engage meaningfully with data scientists and ML engineers. Conversely, technical experts transitioning into ethics roles must develop complementary skills: philosophical reasoning, stakeholder communication, and change management capabilities.
Documentation Is Thankless but Essential: Much of the work involves unglamorous documentation: creating records of decisions, establishing audit trails, maintaining compliance evidence. The C2PA framework's slow adoption despite technical maturity reflects this challenge. Technical infrastructure exists, but getting thousands of creators to actually implement provenance tracking requires persistent operational effort.
Emerging Trends and Evolving Positions
Several trends are reshaping these roles and spawning new specialisations.
Fragmentation and Specialisation: As AI governance matures, broad “AI ethics officer” roles are fragmenting into specialised positions. Emerging job titles include AI Content Creator (+134.5% growth), Data Quality Specialist, AI-Human Interface Designer, Digital Provenance Specialist, Algorithmic Bias Auditor, and AI Rights Manager. This specialisation enables deeper expertise but creates coordination challenges.
Integration into Core Business Functions: The trend is toward integration, with ethics expertise embedded within product teams, creative departments, and technical divisions. Research on AI competency frameworks emphasises that “companies are increasingly prioritising skills such as technological literacy; creative thinking; and knowledge of AI, big data and cybersecurity” across all roles.
Shift from Compliance to Strategy: Early-stage AI ethics roles focused heavily on risk mitigation. As organisations gain experience, these roles are expanding to include strategic opportunity identification. Craig Peters of Getty Images exemplifies this strategic orientation, positioning ethical AI development as business strategy rather than compliance burden.
Regulatory Response and Professionalisation: As AI governance roles proliferate, professional standards are emerging. UNESCO's AI Competency Frameworks represent early steps toward standardised training. The Scaled Agile Framework now offers an “Achieving Responsible AI” micro-credential. This professionalisation will likely accelerate as regulatory requirements crystallise.
Technology-Enabled Governance: Tools for detecting bias, verifying provenance, auditing training data, and monitoring compliance are becoming more sophisticated. However, research consistently emphasises that human judgement remains essential. The future involves humans and algorithms working together to achieve governance at scale.
The Creative Integrity Challenge
The fundamental question underlying these roles is whether creative industries can harness AI's capabilities whilst preserving what makes creative work valuable. Creative integrity involves multiple interrelated concerns: authenticity (can audiences trust that creative work represents human expression?), attribution (do creators receive appropriate credit and compensation?), autonomy (do creative professionals retain meaningful control?), originality (does AI-assisted creation maintain originality?), and cultural value (does creative work continue to reflect human culture and experience?).
AI ethics officers and copyright liaisons exist to operationalise these concerns within production systems. They translate abstract values into concrete practices: obtaining consent, documenting provenance, auditing bias, clearing rights, and verifying human contribution. The success of these roles will determine whether creative industries navigate the AI transition whilst preserving creative integrity.
Research and early practice suggest several principles for structuring these roles effectively: senior-level positioning with clear executive support, cross-functional integration, appropriate resourcing, clear accountability, collaborative frameworks that balance central policy development with distributed implementation, and ongoing evolution treating governance frameworks as living systems.
Organisations face a shortage of candidates with the full spectrum of required competencies. Addressing this requires interdisciplinary hiring that values diverse backgrounds, structured development programmes, cross-functional rotations, external partnerships with academic institutions, and knowledge sharing across organisations through industry forums.
A persistent challenge involves measuring success. Traditional compliance metrics capture activity but not impact. More meaningful metrics might include rights clearance error rates, consent documentation completeness, time-to-resolution for ethics questions, creator satisfaction with AI governance processes, reduction in legal disputes, and successful integration of new AI tools without ethical incidents.
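As an illustration of how such outcome metrics might be computed from per-project governance records (all field names here are hypothetical), consider:

```python
def governance_metrics(projects: list[dict]) -> dict:
    """Compute illustrative outcome metrics from per-project governance records.
    The goal is to measure impact rather than activity."""
    n = len(projects)
    return {
        "rights_clearance_error_rate": sum(p["clearance_errors"] for p in projects) / max(n, 1),
        "consent_docs_complete_pct": 100 * sum(p["consent_complete"] for p in projects) / max(n, 1),
        "median_days_to_resolve_ethics_query": sorted(p["ethics_query_days"] for p in projects)[n // 2],
    }

projects = [
    {"clearance_errors": 0, "consent_complete": True,  "ethics_query_days": 2},
    {"clearance_errors": 1, "consent_complete": True,  "ethics_query_days": 5},
    {"clearance_errors": 0, "consent_complete": False, "ethics_query_days": 3},
]
print(governance_metrics(projects))
```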
Building the Scaffolding for Responsible AI
The emergence of AI ethics officers and copyright liaisons represents creative industries' attempt to build scaffolding around AI adoption: structures that enable its use whilst preventing collapse of the foundations that make creative work valuable.
The early experience reveals significant challenges. The competencies required are rare. Organisational structures are experimental. Technology evolves faster than governance frameworks. Legal clarity remains elusive. Yet the alternative is untenable. Ungoverned AI adoption risks legal catastrophe, revolt within the creative community, and erosion of creative integrity. The 2023 Hollywood strikes demonstrated that creative workers will not accept unbounded AI deployment.
The organisations succeeding at this transition share common characteristics. They hire ethics and copyright specialists early, position them with genuine authority, resource them appropriately, and integrate governance into production workflows. They build cross-functional collaboration, invest in competency development, and treat governance frameworks as living systems.
Perhaps most importantly, they frame AI governance not as constraint on creativity but as enabler of sustainable innovation. By establishing clear guidelines, obtaining proper consent, documenting provenance, and respecting rights, they create conditions where creative professionals can experiment with AI tools without fear of legal exposure or ethical compromise.
The roles emerging today will likely evolve significantly over coming years. Some will fragment into specialisations. Others will integrate into broader functions. But the fundamental need these roles address is permanent. As long as creative industries employ AI tools, they will require people whose professional expertise centres on ensuring that deployment respects human creativity, legal requirements, and ethical principles.
The more than 3,700 members of the Content Authenticity Initiative, the negotiated agreements between SAG-AFTRA and the studios, and the AI governance frameworks at the BBC and Adobe all represent early infrastructure. The people implementing these frameworks day by day, troubleshooting challenges, adapting to new technologies, and operationalising abstract principles into concrete practices, are writing the playbook for responsible AI in creative industries.
Their success or failure will echo far beyond their organisations, shaping the future of creative work itself.
Sources and References
- IBM, “What is AI Governance?” (2024)
- European Broadcasting Union, “AI, Ethics and Public Media – Spotlighting BBC” (2024)
- Content Authenticity Initiative, “How it works” (2024)
- Adobe Blog, “5-Year Anniversary of the Content Authenticity Initiative” (October 2024)
- Variety, “Hollywood's AI Concerns Present New and Complex Challenges” (2024)
- The Hollywood Reporter, “Hollywood's AI Compromise: Writers Get Protection” (2023)
- Brookings Institution, “Hollywood writers went on strike to protect their livelihoods from generative AI” (2024)
- SAG-AFTRA, “A.I. Bargaining And Policy Work Timeline” (2024)
- The Hollywood Reporter, “Actors' AI Protections: What's In SAG-AFTRA's Deal” (2023)
- ModelOp, “AI Governance Roles” (2024)
- World Economic Forum, “Why you should hire a chief AI ethics officer” (2021)
- Deloitte, “Does your company need a Chief AI Ethics Officer” (2024)
- U.S. Copyright Office, “Report on Copyrightability of AI Works” (2024)
- Springer, “Defining organizational AI governance” (2022)
- Numbers Protocol, “Digital Authenticity: Provenance and Verification in AI-Generated Media” (2024)
- U.S. Department of Defense, “Strengthening Multimedia Integrity in the Generative AI Era” (January 2025)
- EY, “Three AI trends transforming the future of work” (2024)
- McKinsey, “The state of AI in 2025: Agents, innovation, and transformation” (2025)
- Autodesk, “2025 AI Jobs Report: Demand for AI skills in Design and Make jobs surge” (2025)
- Microsoft, “Responsible AI Principles” (2024)

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 | Email: tim@smarterarticles.co.uk