Generative AI Reading List

This week's reading list…

DeepMind: Model evaluation for extreme risks

From Machine Learning to Autonomous Intelligence: Towards Machines That Can Learn, Reason & Plan – Yann LeCun, Northeastern University Institute for Experiential AI (slides and video)

How Rogue AIs may Arise [Yoshua Bengio]

National Artificial Intelligence Research And Development Strategic Plan

White House AI Fact Sheet

MAS.S68: Generative AI for Constructive Communication – Evaluation and New Research Methods

AI Canon – a curated list of resources we've relied on to get smarter about modern AI.

Art Isn't Dead, It's Just Machine-Generated

Democratic Inputs to AI  – OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.

How the Cost of Living Crisis Is Impacting DJs and Producers [Mixmag]


Unique Characteristics of Generative AI: Exploring the Creative Medium

Generative AI is a creative medium with unique characteristics, including non-deterministic outputs, latent space exploration, style transfer, adaptability, co-creation with humans, scalability, algorithmic and data-driven creativity, emergence, autonomous generation, and real-time adaptation. These qualities enable generative AI to serve as a powerful tool for artists, designers, and other creators, while also offering novel experiences and opportunities for collaboration.

10 characteristics that define Generative AI media:


  1. Non-deterministic outputs: Generative AI models often produce different results each time they are run, even with the same input. This introduces an element of variability and unpredictability to the creative process.
  2. Latent space exploration: Generative AI models map high-dimensional spaces, allowing for the exploration of a vast array of possibilities and combinations within the latent space. This enables the discovery of novel and surprising outputs.
  3. Style transfer and interpolation: Generative AI allows for the blending and transfer of styles between different inputs, creating unique combinations and aesthetic experiences.
  4. Adaptability and learning: Generative AI systems can be fine-tuned and adapted to specific domains or styles, enabling them to learn and evolve in response to new data and user preferences.
  5. Co-creation with human input: Generative AI models can be used as creative collaborators, augmenting human creativity by providing novel ideas and options for artists, designers, and other creators to work with.
  6. Scalability: The generative nature of AI enables the creation of large quantities of unique content quickly, making it a valuable tool for industries that require rapid content generation, such as advertising, entertainment, and gaming.
  7. Algorithmic and data-driven creativity: Generative AI relies on mathematical algorithms and data-driven processes to generate content, resulting in a unique form of creativity that differs from traditional human-generated art and design.
  8. Emergence and complexity: Due to the complex interactions of AI algorithms and data, generative AI systems can produce intricate and emergent patterns or behaviors that may not be explicitly programmed or anticipated.
  9. Autonomous generation: Generative AI models can create content without direct human intervention, enabling the generation of art, music, or other creative outputs with minimal human guidance.
  10. Real-time generation and adaptation: Generative AI models can be used to create content in real-time, allowing for dynamic and adaptive experiences in fields like video games, virtual reality, and interactive installations.
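Characteristic #1, non-deterministic outputs, usually comes from sampling: a model's logits are scaled by a temperature and an option is drawn at random, so the same input can yield different results on each run. A minimal sketch (the function name and logits below are illustrative, not from any particular model):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Draw an index from logits via temperature-scaled softmax.

    Higher temperature flattens the distribution (more variability);
    temperature near 0 approaches a deterministic argmax.
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Same input logits, independent draws -> potentially different outputs
logits = [2.0, 1.5, 0.5]
samples = [sample_token(logits, temperature=1.0) for _ in range(10)]
```

At temperature near zero the sampler collapses to the single most likely option; raising the temperature spreads probability across alternatives, which is where the run-to-run variability comes from.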


© Mark Ghuneim 2023

The Luring Test: AI and the Engineering of Consumer Trust

In “The Luring Test: AI and the Engineering of Consumer Trust,” Michael Atleson, an attorney in the FTC Division of Advertising Practices, dropped some guidance today. (Bold emphasis FTC 🔥) Last week's technology-sector earnings cycle being powered by talk of AI and ad targeting might not have been the ideal narrative to start with.

“This includes recent work relating to dark patterns and native advertising. Among other things, it should always be clear that an ad is an ad, and search results or any generative AI output should distinguish clearly between what is organic and what is paid. People should know if an AI product’s response is steering them to a particular website, service provider, or product because of a commercial relationship. And, certainly, people should know if they’re communicating with a real person or a machine.”


Entertainment Manufacturing: Navigating the Crossroads of Creativity and Compensation

When we speak of America as a manufacturing nation, it’s not just about the steel, cars, or technology we produce. It’s also about our exceptional ability to manufacture world-class entertainment.

In 2007, the writers’ strike resulted in a shift, filling the void with reality TV, a pale Xerox of our high-quality scripted content. This downgrade came at a cost, both in terms of cultural value and industry standards. However, we’re now seeing a recovery, thanks to massive investments in quality streaming content.

The Alliance of Motion Picture and Television Producers plays a critical role in this landscape. Their negotiations and deal points will shape the future of our entertainment industry, and they deserve our full support.

Consider the dynamics between the traditional broadcast TV model and the evolving streaming model. In the former, writers typically produce 22 scripts per season, with royalties based on performance. Streaming series, often fewer than 10 episodes, offer less upside for writers.

This discrepancy is just one of the many nuanced deal points that need to be addressed moving forward.

Another crucial issue is the role of AI and generative content. Given the potential impact on the industry, it’s no surprise that this has become a significant sticking point. We need to learn from the protracted conflict of 2007, a dance of a thousand cuts that left studios, writers, and viewers all suffering.
If the current negotiations drag on, we could be faced with a rise in creator TV or other lower-quality substitutes for well-produced and written narratives, causing pain points across all stakeholders.

Today, Hollywood stands at the crossroads of Digital Pennies Drive and High Production Costs Avenue. It is a tough neighborhood. Streaming services are vying to lock-in audiences and market share, all while grappling with escalating costs. As we navigate this intersection, it’s crucial that we strike a balance that maintains the quality of American entertainment while ensuring fair compensation for its creators.

Happiness POV

Reading the newsletter Happiness is a 2×2 Matrix last week, I came across this video interview with Hillel Einhorn on happiness. “Now, there are some interesting issues there about looking for evidence opposed, or evidence about non-occurrences. This was brought home to me dramatically in a Chinese restaurant one night after the meal. They brought the usual fortune cookies and I opened the cookie and read my fortune. It was a very interesting one. It said, ‘Don’t think about all of the things that you want that you don’t have, think of all of the things that you don’t want that you don’t have.'”

The speaker continued, “Well, that kind of stopped me dead. I don’t know who writes these things, but this is a very interesting one. So I immediately drew a two-by-two table – ‘want’, ‘not want’, ‘have’, ‘not have’. Of course, we think about what we want that we have, what we want that we don’t have, what we don’t want that we have. But rarely do we ever think about what we don’t want and what we don’t have. So, I like to use this example to point out that if the correlation between ‘wants’ and ‘haves’ is some notion of happiness, and because that ‘don’t want, don’t have’ cell is so large, we’re actually a lot happier than we think we are.”
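Einhorn's point can be made concrete: treat happiness as the correlation (here, the phi coefficient for a 2×2 table) between "want" and "have," and watch what happens as the "don't want, don't have" cell grows. A minimal sketch with made-up counts:

```python
import math

def phi(a, b, c, d):
    """Phi coefficient for a 2x2 table:

                    have    not-have
    want             a         b
    don't want       c         d
    """
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

# Counting only the cells we usually notice: modest correlation
modest = phi(a=10, b=5, c=5, d=10)

# Same wants and haves, but the huge 'don't want, don't have'
# cell counted in: the want-have correlation jumps
counting_everything = phi(a=10, b=5, c=5, d=1000)
```

With these illustrative numbers the coefficient rises from about 0.33 to about 0.66 once the "don't want, don't have" cell is counted, which is exactly Einhorn's claim that we're happier than we think we are.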





European Parliament Members Reach Provisional Agreement on Groundbreaking AI Act

Members of the European Parliament (MEPs) have provisionally agreed on the world’s first rulebook for artificial intelligence (AI), known as the AI Act. This legislation aims to regulate AI based on its potential for harm. The formalization of the Parliament’s position is imminent, with a committee vote scheduled for 11 May and a plenary vote in mid-June.

Key points from the Act include:

  1. General Purpose AI: The Act puts stricter regulations on foundation models, such as ChatGPT, which are AI systems that do not have a specific purpose. Generative AI models would need to comply with EU law and fundamental rights, including freedom of expression.
  2. Prohibited practices: Certain AI applications deemed to pose unacceptable risks are banned. These include AI-powered tools for general monitoring of interpersonal communications, biometric identification software (with certain exceptions for serious crimes), purposeful manipulation, emotion recognition software in certain domains, and predictive policing for administrative offenses.
  3. High-risk classification: AI solutions that pose a significant risk of harm to health, safety, or fundamental rights will be classified as high-risk, requiring them to follow stricter regulations, including risk management, transparency, and data governance. AI used to manage critical infrastructure will also be deemed high-risk if they present a severe environmental risk.
  4. Detecting biases: Providers of high-risk AI models can process sensitive data to detect negative biases, but under strict conditions. The processing must happen in a controlled environment, the data must not be shared with other parties, and it must be deleted after the assessment.
  5. General principles: All AI models should adhere to principles including human agency and oversight, technical robustness and safety, privacy and data governance, transparency, social and environmental well-being, diversity, non-discrimination, and fairness.
  6. Sustainability of high-risk AI: High-risk AI systems and foundation models will have to comply with European environmental standards and keep records of their environmental footprint.

The EU AI Act Newsletter #28 has up-to-date developments and analyses of the proposed EU artificial intelligence law.


The U.S., meanwhile, has been standing still on enacting legislation, and a lot of what has been adopted has roots in EU legislation. Here is a high-level comparison of the EU and U.S. positions on AI regulation.

EU and U.S. Positions on AI Regulation: A Comparison

EU Approach:

  • Comprehensive legislation tailored to specific digital environments
  • New requirements planned for high-risk AI in socioeconomic processes, government use of AI, and regulated consumer products
  • Emphasizes public transparency and influence over AI system design in social media and e-commerce

U.S. Approach:

  • Highly distributed across federal agencies without new legal authorities
  • Investments in non-regulatory infrastructure, such as AI risk management framework and evaluations of facial recognition software
  • Risk-based approach but lacks consistent federal approach to AI risks

Alignment and Misalignment:

  • Conceptual alignment on risk-based approach, key principles of trustworthy AI, and importance of international standards
  • Significant differences in AI risk management regimes, especially in socioeconomic processes and online platforms
  • EU-U.S. Trade and Technology Council: successful collaboration on metrics, methodologies, and international AI standards
  • Joint efforts in studying emerging AI risks and applications

Recommendations for Alignment:

  • U.S.: Execute federal agency AI regulatory plans, design strategic AI governance with EU-U.S. alignment, establish legal framework for online platform governance
  • EU: Create flexibility in sectoral implementation of EU AI Act, improve law for future EU-U.S. cooperation

The Brookings Institution offers this framing on the U.S. approach:

Regarding the U.S. federal government’s approach to AI risk management, it is characterized as risk-based, sectorally specific, and highly distributed across federal agencies. However, the development of AI policies in the U.S. has been uneven.

While there are guiding federal documents on AI harms, they have not created a consistent approach to AI risks. Federal agencies have not fully developed the required AI regulatory plans, with only a few agencies having comprehensive plans in response to the requirements.

The Biden administration has shifted focus from implementing Executive Order 13859 to the Blueprint for an AI Bill of Rights (AIBoR), developed by the White House Office of Science and Technology Policy (OSTP).

The AIBoR endorses a sectorally specific approach to AI governance, relying on associated federal agency actions rather than centralized action. However, the AIBoR is nonbinding guidance.


F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories

Today's paper is F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories, which is a mouthful. Let's break this one down into production talk.

This bad boy lets a filmmaker create virtual scenes or generate novel views of a scene from different camera angles. It's a framework that can synthesize high-quality images of a scene from various camera viewpoints, even ones that weren't originally captured. Boom!

Moreover, F2-NeRF is fast—it can be trained in just a few minutes, which makes it practical for use in filmmaking and animation workflows. Think of it as a powerful tool for creating dynamic and visually compelling virtual environments in your films.

As for the training data, F2-NeRF is trained on images (not videos or text) and leverages a technique called “Neural Radiance Fields” to learn how to generate new views of a scene based on the images provided. This allows it to produce high-quality renderings from any desired camera viewpoint.
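Under the hood, every NeRF variant (F2-NeRF included) renders an image by compositing predicted color and density along camera rays. Here's a minimal, generic sketch of that volume-rendering step, with scalar colors and made-up sample values; it is not F2-NeRF's actual code.

```python
import math

def composite_ray(densities, colors, deltas):
    """Standard NeRF-style volume rendering along one ray.

    densities: sigma_i predicted at each sample along the ray
    colors:    c_i at each sample (scalars here for simplicity)
    deltas:    distance between consecutive samples

    Computes C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i is the transmittance accumulated before sample i.
    """
    transmittance = 1.0
    out = 0.0
    for sigma, c, delta in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this sample
        out += transmittance * alpha * c
        transmittance *= (1.0 - alpha)          # light left after this sample
    return out

# A ray that passes through empty space, then hits two dense bright samples
pixel = composite_ray(densities=[0.0, 5.0, 5.0],
                      colors=[0.0, 1.0, 1.0],
                      deltas=[0.5, 0.5, 0.5])
```

Training a NeRF amounts to adjusting the network that predicts those densities and colors until rays rendered this way match the pixels of the input photos; novel views then come for free by shooting rays from any new camera pose.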

AI risk regulation frameworks:

Singapore’s Model AI Governance Framework: Soft law focused on explainable, transparent, and human-centric AI decision-making.

The Draft EU AI Act: Hard law that classifies AI systems into unacceptably risky, high-risk, and low-risk uses, and implements risk regulation for high-risk AI systems.

NIST’s AI Risk Management Framework (U.S.): Soft law that emphasizes enterprise risk management and an iterative approach to risk management.