Privacy: EU Artificial Intelligence Act

Artificial Intelligence Act  :: Banned applications

 

Recognizing the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit:

  • Biometric categorization systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • Emotion recognition in the workplace and educational institutions;
  • Social scoring based on social behavior or personal characteristics;
  • AI systems that manipulate human behavior to circumvent their free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

Synthetic Media: AI SAG-AFTRA Deal Points

From this document: /AIFAQs.pdf

Big Picture Questions

AI Ban on Projects: AI not completely banned; focus on setting guardrails around its use.
Training AI on Work: Challenges in outright banning AI training on actors’ work due to legal complexities.
Consent for Digital Replicas: Producers must obtain explicit, informed consent for creating and using digital replicas.
Specificity in Consent: Legal requirements for clarity and specificity in consent provisions.

Consent

Refusal to Hire Without Consent: Producers can refuse to hire if consent for digital replica creation is not given.
Clear and Conspicuous Consent:  Legal standards for clear and conspicuous consent, including separate signing.
New Consent for Each Use: Requirement for separate consent for each specific use of digital replicas.
Informed Consent After Death: Provisions for post-mortem use of digital replicas and union’s role in consent process.

Principal Performers

Ownership of Digital Replicas: Legal ownership by employers with consent required for use.
Digital Replicas for Voice Actors: Inclusive provisions for voice actors in digital replica terms.
Difference Between Replica Types: Distinction between Employment-Based and Independently Created Digital Replicas.
Protections Against Unconsented Use: New contractual provisions protecting against unconsented use of digital replicas.

Background Actors

Ownership and Use of Background Actor Digital Replicas: Similar ownership and consent requirements as principal performers.
Compensation for Scanning and Use: Guidelines on compensation for scanning and use in projects.

Generative AI and Synthetic Assets

Use of Generative AI and Synthetic Performers: Contract language requiring notification and bargaining for the use of synthetic performers.

Protections Against Replacement by AI: Measures to prevent wholesale replacement of actors with AI-generated performers.

Miscellaneous

Handling Bankruptcy and Asset Transfer: Procedures for handling digital replicas in case of company bankruptcy.
Tracking and Enforcement of Digital Replica Terms: Strategies for monitoring and enforcing terms related to digital replicas.
Impact on Future Contract Negotiations: Implications of these terms on negotiations for other contracts.
Use of Digital Replicas During Strikes: Guidelines on the use of digital replicas during work stoppages or strikes.
Tracking Technology for Digital Replicas: Exploration of tracking technologies for monitoring the use of digital replicas.

Navigating the Digital Maze: Dark Patterns, Algorithm Ethics, Facial Recognition and the FTC

In this digital age, consumers face a deluge of potential harms ranging from privacy invasions to deceptive practices. Being online is like a dance of a thousand cuts.

While Congress has been paralyzed crafting legislation to address these rapidly evolving harms, the Federal Trade Commission (FTC) has emerged as a key player in enforcing accountability.

As one of the rare mechanisms capable of responding swiftly to digital malpractice, the FTC’s enforcement actions against dark patterns, unethical use of algorithms, and privacy breaches have become crucial in safeguarding consumer rights. The absence of comprehensive congressional intervention is at this point criminal and has allowed harms at both the national and the personal level. I took the time to extract some key points from recent enforcement actions.

FTC’s Crackdown on Dark Patterns

Definition of Dark Patterns: Dark patterns are deceptive design tactics used in websites and apps that trick or manipulate users into making unintended decisions, often resulting in unwanted subscriptions, purchases, or loss of privacy. These patterns can take various forms, such as confusing navigation, hidden costs, misleading wording, or bait-and-switch techniques.

Dark Pattern Characteristics
– Misleading Navigation: Design elements that intentionally confuse or mislead users.
– Hidden Costs: Concealing extra charges or subscriptions.
– Bait-and-Switch: Promising one thing but delivering another.
– Privacy Intrusion: Coercing users to surrender more personal data than necessary.

List of FTC Enforcement Cases on Dark Patterns
– Vonage: A $100 million settlement for consumers misled by dark patterns into unwanted service commitments.
– Credit Karma: Action taken for using dark patterns to mislead consumers about credit card pre-approvals.
– WW International: Demanded deletion of algorithmic systems developed from unlawfully obtained data.
– Everalbum, Inc.: Required the deletion of a facial recognition algorithm developed through deceptive practices.

The Everalbum case prompted a deeper dive into the FTC’s guidelines on the ethical use of facial recognition technology (FRT). The FTC recommends that companies using FRT prioritize consumer privacy, develop secure data practices, and ensure consumer awareness and consent. The FTC insists on explicit consumer consent before using consumer images or biometric data in ways not initially represented, and before identifying anonymous images of a consumer.

Navigating the Digital Maze: The Weaponization of Audio and Video

The world of audio and visual (A/V) media has experienced a profound transformation, evolving from a realm of entertainment and delight into one weaponized for manipulation and fear.

This evolution has given rise to what I term the “Dark A/V Era,” characterized by the growing exploitation of these mediums to disseminate misinformation, incite violence, and exploit vulnerabilities.  A loss of trust in what we hear and see.  

The Rise of Audio Manipulation

Audio technology, once celebrated for its power to connect and entertain, has taken a turn for the dark side. Synthetic voices, eerily accurate in mimicking human speech, have opened a Pandora’s box of deceptive dark arts.

Advanced algorithms are being exploited in “voice scams,” where unsuspecting individuals receive calls from voices indistinguishable from those of their loved ones (“Hi Mom, it’s me, I’m in jail and need you to send $10k”) or trusted authorities, undermining the integrity of communication and posing a significant threat to personal trust and safety. As these technologies become more sophisticated, they represent a disturbing evolution in the landscape of digital fraud, challenging our ability to discern the reality of what we are hearing.

Video’s Manipulative Grip

Likewise, video, once a source of amusement and delight, is now distorted into a tool for manipulation. Videos are meticulously crafted to change hearts and minds across political landscapes, deceiving viewers who are unable to discern the manipulation or unpracticed in new visual-consumption habits. Deepfake technology, enabling the realistic manipulation of video footage, has exacerbated this trend, blurring the line between fact and fiction. These tools are not being used for joy; they are being leveraged to evoke fear, anger, and hatred, often with the intention of inciting violence or promoting extremism.

Watermarks: An Insufficient Shield

While watermarks are intended to safeguard copyright and ownership, they fall short in addressing the root cause of this issue. The capacity to create and manipulate A/V content resides not solely in the tools themselves but in the intentions of those who wield them. The addition of a watermark alone does not deter malicious actors from exploiting these technologies for nefarious purposes. More importantly, it does not prevent viewing, which happens at lightning speed globally. Watermarks are mostly ignored, or noticed after the fact by the few who care. The social networks have made it clear they are not going to be the solution or the voice of reason: they allow manipulated media to exist and be promoted on their networks, often gaining reach through the networks’ own ranking algorithms.

Navigating the Dark A/V Era

To navigate the Dark A/V Era effectively, a holistic approach is imperative. This involves addressing the underlying motivations behind the creation and dissemination of harmful content, fostering digital literacy and critical thinking skills, and formulating ethical guidelines for the use of A/V technologies. 

A Collective Effort

Addressing the weaponization of audio and video necessitates a collective effort involving individuals, governments, and technology companies. None of that will happen in our lifetimes. That makes it all the more important for individuals to acquire the skills to identify and resist manipulation, because the hope that governments will enact policies regulating technology usage, or that technology companies will develop and adhere to ethical guidelines for their platforms, is unlikely to be realized.

Navigating the Digital Maze: Music Industry, Transforming the future of music creation: Dream Track, Lyria

Google DeepMind, in partnership with YouTube, has announced Lyria, an AI music generation model, and two AI experiments: Dream Track and Music AI tools. Lyria is designed to enhance creativity in music creation, and Dream Track allows creators on YouTube Shorts to generate unique soundtracks using AI-generated voices and styles of various artists. The Music AI tools, developed in collaboration with industry professionals, aid in transforming audio and creating new music.

DeepMind emphasizes responsible technology deployment, introducing SynthID for watermarking AI-generated audio, ensuring content traceability. These efforts are aligned with YouTube’s AI principles, focusing on the beneficial and responsible use of generative music technologies.

Key Points:

  • Introduction of Lyria and AI experiments for music creation.
  • Development of Music AI tools and Dream Track experiment.
  • Use of SynthID watermarking for traceability in AI-generated audio.

Ed note – stunning speed to market, market partnerships, and internal synergy. Warner’s earnings call alluded to not getting caught flat-footed this time around. While there are massive market forces at play here around this creation and creative process, this is a thought-through R&D playground. Seeing DeepMind and YouTube work at this depth, and considering the label and artist relations conversations that had to take place, this is a modern-day marvel.

Navigating the Digital Maze: YouTube AI Standards and Practices

Generative AI’s Potential and Responsibility: YouTube acknowledges the creative potential of generative AI but also its responsibility to protect the community. They emphasize that all content, regardless of how it’s generated, is subject to YouTube’s Community Guidelines​​.

Disclosure Requirements and Content Labels: YouTube plans to update its platform to inform viewers about synthetic content. Creators will be required to disclose if their content includes realistic altered or synthetic material, especially if it’s created using AI tools. Labels will be added to content descriptions and video players for sensitive topics​​.

Handling Synthetic Media: Some synthetic media may be removed from YouTube if it violates Community Guidelines, regardless of labeling. This includes content that shows realistic violence with the intent to shock or disgust viewers​​.

New Options for Creators, Viewers, and Artists: YouTube will allow for the removal of AI-generated content that simulates identifiable individuals, considering factors like satire or public figures’ involvement. This also extends to AI-generated music content mimicking an artist’s voice​​.

AI-Powered Content Moderation: YouTube uses a combination of AI classifiers and human reviewers to enforce its Community Guidelines. AI helps identify novel forms of abuse, increasing the speed and accuracy of content moderation​​.

Building Responsibility into AI Tools: YouTube is focused on developing AI tools responsibly, with a focus on building guardrails to prevent generation of inappropriate content and continuously improving protections against bad actors.

Parsed from: https://blog.youtube/inside-youtube/our-approach-to-responsible-ai-innovation/

Synthetic Media: Content Creation Tools

In the ever-evolving landscape of AI-driven media production, a new era of multi-modal generative AI is unfolding, giving rise to a vibrant ecosystem. Within this landscape, major players and open-source models are making significant strides, each contributing to the advancement of synthetic media production.

However, a key differentiator lies in their offerings: some provide warranties and indemnification for usage, processing time, and cost, while others cater to startups of varying sizes, stages, market fits, and capitalizations.

Among the early pioneers shaping this landscape are notable names such as Flawless, known for its GenAI film editing software, Genmo AI, specializing in transforming text into visual media, and Irreverent Labs, which excels in high-fidelity video creation. Jupitrr offers AI-generated B-rolls, while Kaiber focuses on AI videos for gaming trailers.

Kapwing provides a modern video creation platform, MNTN VIVA combines generative video with audio and stock footage, and Synthesia creates videos from text using AI avatars. An additional market mover, Runway, plays a crucial role in this transformative journey.

These companies exemplify the diversity within this market, showcasing a wide range of product offerings and technological approaches. The market is still in its nascent stages, with ample room for growth and innovation (“picks and shovels”). In terms of market structure, the synthetic video tooling industry encompasses technology providers, content creators, and end-users.

The customer base spans various sectors, with prominent applications in marketing, corporate communications, and training. Marketing applications include case studies, testimonials, and how-to videos, while corporate communications involve reports, team updates, and video presentations. Training applications are diverse, ranging from interactive learning modules to immersive simulations.

The addressable market for synthetic video tooling is expanding. With the global surge in video content consumption and the shift of social media platforms towards video formats like TikTok, the demand for efficient, scalable, and creative video production solutions is more pronounced than ever.

The versatility of synthetic video applications further extends from entertainment and advertising to education and corporate communications. In the realm of TV and film production, the use cases are myriad, ranging from prototyping scenes before shooting to translating content into multiple languages. Synthetic video technology is not viewed as a substitute for legacy media but as an incremental development, enhancing efficiency and productivity in all aspects of content creation.

While there is immense potential, the tools available for Hollywood-level professional enterprises and those accessible to individual “creators” are still in their infancy. Startups are often not targeting this market fit. I know of one company operating in stealth mode, and industry giants like Adobe have a strong foothold in this domain.

Large language models (LLMs) such as OpenAI’s, while capable, typically recommend working with professional video editing tools rather than relying solely on AI-generated content. Originally, the aim of this post was to discuss the evolution of the creation process, shifting from editing to “sculpting,” and what this signifies from concept to screen. Most of the aforementioned players are building tools that mimic the behaviors of legacy media, a strategy conducive to widespread adoption.

This shift represents more than an advancement; it signifies a paradigm shift in how we conceive, create, and interact with digital content. In our next discussion, we will delve deeper into how these innovations are revolutionizing conventional workflows from the ground up. We stand on the precipice of a broader video content creation value chain.

This space is poised to expand, reshaping both production and consumption behaviors, catalyzing the next generation of media and compounding the impact of existing media. The implications of synthetic video tooling go beyond being a mere content creation tool; it is a medium that redefines storytelling and visual communication, potentially democratizing the means of production further.

Editor’s Note: If you are involved or interested in this space, feel free to get in touch!

Navigating the Digital Maze: AI Current Snapshot

In an era where open models are accelerating in utility, OpenAI moved to a platform model, with offerings designed to lock in the ecosystem, during its first Developer Day. The platform model is a strategic move reminiscent of Apple’s integrated ecosystem approach; for some startups, a master class in the risks associated with supply-side dependencies. It’s worth watching the keynote for the new set of offerings, including GPT-4 Turbo, as well as the ability to create custom “agents,” called “GPTs.”

“We’re introducing copyright shield. Copyright Shield means that we will step in and defend our customers and pay the costs incurred, if you face legal claims around copyright infringement, and this applies both to ChatGPT Enterprise and the API.” IP is the unknown known, or the known unknown, depending on your p.o.v.

This was followed by the statement: “..let me be clear, this is a good time to remind people: do not train on data from the API or ChatGPT Enterprise, ever.” A caution against unauthorized use of the data, which could have legal, ethical, or technical implications, and a signal to the startups with supply-side dependencies alluded to above. We live in a complex world of dualities.

For me, the standout feature is reproducible outputs, which ensures that every time the model runs with the same seed and inputs, it produces the same output. I have already started using gen_id seeds for creating visual continuity and to test exactness and precision.
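The contract behind reproducible outputs is simply that generation is a pure function of (seed, inputs). Here is a minimal local sketch of that contract, using a seeded PRNG over a toy vocabulary as a stand-in for the model — no OpenAI API is called, and the `generate` function and its vocabulary are invented for illustration. (On the real API, the analogous knob is the `seed` parameter on chat completions, which OpenAI documents as best-effort determinism.)

```python
import hashlib
import random

def generate(prompt: str, seed: int, n_tokens: int = 5) -> list[str]:
    """Toy stand-in for a model: output is a pure function of (seed, prompt)."""
    # Derive a stable integer from (seed, prompt). sha256 is process-independent,
    # unlike Python's built-in hash(), which is salted per interpreter run.
    digest = hashlib.sha256(f"{seed}:{prompt}".encode()).hexdigest()
    rng = random.Random(int(digest, 16))
    vocab = ["sun", "moon", "river", "stone", "wind", "ember", "tide"]
    return [rng.choice(vocab) for _ in range(n_tokens)]

# Same seed + same input -> byte-identical output, run after run.
a = generate("a shot of a lighthouse at dusk", seed=42)
b = generate("a shot of a lighthouse at dusk", seed=42)
assert a == b
```

This same-seed-same-output property is what makes regenerating “the same shot” for visual continuity, or A/B-testing a prompt change in isolation, tractable.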

Meanwhile, traditional industry defenses (e.g., moats) are undergoing transformation. In the realm of entertainment, clear boundaries are being defined around the use of AI in the SAG-AFTRA negotiations.

The AMPTP addressed the issue of AI by offering an increase in salaries to professionals who allow themselves to be virtually replicated. It appears there is no commitment to cease training its AI systems. Wow, read that last sentence again.

The music industry, grappling with AI’s rise, is split on strategy. Among the major labels, Warner Music Group (WMG) hints at hopes for legislative support and a DRM/Content ID-style system, while Universal Music Group (UMG) suggests that bolstering laws, like the proposed federal right of publicity in the U.S., could address issues arising from synthetic media. (No comment -Ed)

The pace at which technology advances often outstrips the legal and business frameworks meant to govern it. An illustration of this is the emergence of AI-generated music on social platforms, such as Sorisori’s service that offers tracks mimicking well-known artists like Ariana Grande in novel contexts.

Tools meant to shield creators’ work from AI training, such as Nightshade and Glaze, are in their infancy. These early attempts to disrupt or trace AI’s supply-chain use of creative work face challenges in proving their efficacy.

One example turns a Korean song into an AI cover of South Korean singer IU singing “Cupid” by K-pop girl group Fifty Fifty. For a monthly subscription fee of 14,800 won (S$15.40) on the English website, subscribers can generate up to 200 tracks of music using AI. Spot-AI-fy, a YouTube channel that specializes in AI-generated music, has a total of 235 videos on the platform. Sites like this are everywhere.

Google’s reported $2 billion investment in Anthropic, an OpenAI competitor, signifies the intensifying competition in the AI space. This is the same Anthropic currently embroiled in a lawsuit with UMG over the unauthorized distribution of copyrighted lyrics through its AI model, Claude 2. The suit seeks potentially tens of millions in damages and could set a legal precedent.

In journalism, an industry fresh off being destroyed by social media, publishers are moving to a “block the bot and license” strategy to combat AI’s potential to further lay waste to this critical field.

The license plays are just theater; the AI companies eat, and will eat, what they want. In July, The Daily Telegraph revealed allegations that Google harvested around 1m online news articles from the Daily Mail.

The recent executive order signed by President Biden addresses various facets of artificial intelligence but notably does not delve into AI’s ramifications for creative industries. Overall, the general pulse seems to prioritize “existential risk” and other challenges, such as misinformation, safety standards, privacy, and civil rights.

IP protection is in that stack, but it’s a long road with many issues driving at the same time. Again, a differential (time/innovation) in which consequences will play out. If the Library of Congress comments from Anthropic (cited below) are any indicator of the company’s legal stance, there is a long, hard copyright fight ahead.

 

###

Citation: “Artificial Intelligence and Copyright.” Federal Register, vol. 88, no. 167, 30 Aug. 2023 (Anthropic comments).
Citation: “Actors’ union says no agreement on studios’ ‘final’ offer”, Agence France-Presse (online), 7 Nov 2023.
Citation: Glenn Chapman, “OpenAI Sees A Future Of AI ‘Superpowers On Demand’”, International Business Times: United Kingdom Edition (online), 7 Nov 2023.
Citation: “AI-generated music sparks debate in S. Korea”, The Straits Times (online), 7 Nov 2023.
Citation: “Artificial Intelligence regulation starts to take shape in US and UK”, The Cape Times (online), 7 Nov 2023.
Citation: Universal Music Group N.V. Q3 2023 earnings call.
N.B. Some links go to alternate sites carrying the same story not behind a paywall.


Navigating the Digital Maze: US AI EO

President Biden has issued an Executive Order aimed at advancing the safe, secure, and trustworthy development of artificial intelligence (AI) in the United States.

FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

Global related reading: Commission Nationale de l’Informatique et des Libertés

A law being proposed in France aims to regulate artificial intelligence through copyright.