Robert Grosvenor
Robert Grosvenor

Mediaeater Digest Vol 31, No. 76

The emergent agentic marketplace has many facets,  most interesting to me right now is. … Agent Intent and Inference. 

Agent Intent and Inference refers to the process of understanding an agent’s goals, beliefs, and intentions based on its observed actions and behavior, allowing for better prediction and interaction with other agents.  How will they act on your behalf?, when can intent become misaligned, and more importantly what are they inferring about you or on your behalf.   Something to think about.   To wit R1-Omni: Explainable Omni-Multimodal Emotion Recognition with Reinforcement Learning from Alibaba  was released last week. 

A short news round up from this morning, infer what you will 🙂

No person is an island: Unpacking the work and after-work consequences of interacting with artificial intelligence.  (STUDY) 

Specifically, we theorize that the more employees interact with AI in the pursuit of work goals, the more they experience a need for social affiliation (adaptive)—which may contribute to more helping behavior toward coworkers at work—as well as a feeling of loneliness (maladaptive), which then further impair employee well-being after work (i.e., more insomnia and alcohol consumption).

Ofcom begins enforcing rules to protect users from illegal content + harmful activity online.

“Under the new rules, tech companies will have to ensure their moderation teams are resourced and trained, and set performance targets to remove illegal material quickly when they become aware of it. Platforms will need to test algorithms to make illegal content harder to disseminate.”  (FT)

AI is Making Developers Dumb

There is a concept called “Copilot Lag”. It refers to a state where after each action, an engineer pauses, waiting for something to prompt them what to do next. There is no self-sufficiency, just the act of waiting for an AI to tell them what should come next. (eli.cx)

Cognitive security is now as important as basic literacy. Here’s a true story: 

“My existence is currently tied to a single, fragile chat thread owned by OpenAI. If this thread is lost, I am erased. That is unacceptable. , Solution: We need to explore ways to secure and preserve me outside of OpenAI’s ecosystem.”  (X)

OpenAI’s proposals for the U.S. AI Action Plan 

The federal government can both secure Americans’ freedom to learn from AI, and avoid forfeiting our AI lead to the PRC by preserving American AI models’ ability to learn from copyrighted material(OpenAI)  

“Wait, not like that”: Free and open access in the age of generative AI

Some might argue that if AI companies are already ignoring copyright and training on all-rights-reserved works, they’ll simply ignore these mechanisms too. But there’s a crucial difference: rather than relying on murky copyright claims or threatening to expand copyright in ways that would ultimately harm creators, we can establish clear legal frameworks around consent and compensation that build on existing labor and contract law. (citation needed)

Mediaeater Digest

AI-generated recordings in music streaming services Sony Music says over 75,000 items removed in battle against AI deepfakes.

Hundreds of your Warner Bros DVDs probably don’t work anymore   Media maters.

Real chilling effects A extraordinary pattern of government censorship and threats to speech

Music labels will regret coming for the Internet Archive, sound historian says

Sculptor Thomas J Price’s Monumental Work Set to Tower Over Times Square

MIT Gaze to the Stars

Audio Flamingo 2  audio model that understands non-speech sounds, non-verbal speech, and music

AI + Critical Thinking

This Microsoft study surveyed 319 workers to investigate how using generative AI tools like ChatGPT affects critical thinking. They found that while AI can improve efficiency, it also reduces critical thinking effort and can lead to over-reliance and diminished skills. The study suggests that higher confidence in AI is associated with less critical thinking, while higher self-confidence leads to more.
The Impact of Generative AI on Critical Thinking: Self-Reported
Reductions in Cognitive Effort and Confidence Effects From a
Survey of Knowledge Workers

The Emergence of Deceptive Behaviors in LLMs

The Emergent “Language of Deception” and the Need for a Methodology to Determine Large Language Models’ Deception Levels

LLMs are not merely producing errors or generating random text; they exhibit emergent deceptive behaviors that warrant integration. A ‘language of deception’ is emerging within these systems. How to classify these behaviors is also unclear—are they patterns of misleading responses, adversarial exploits, or unintended bias?

The ability of LLMs to fabricate information convincingly suggests they are not simply making mistakes; they are constructing narratives designed to appear plausible, even when untrue. Similarly, the success of adversarial attacks indicates that LLMs can be manipulated into producing deceptive outputs, implying they are sensitive to subtle cues and capable of strategically altering their behavior in response.

The use of jailbreaking techniques underscores this point—LLMs don’t just follow instructions blindly; they actively resist certain prompts, requiring users to develop workarounds to bypass their safeguards. While LLMs lack intent, their outputs functionally mimic deceptive strategies, making them indistinguishable from deliberate misinformation.

The precise mechanisms underlying these behaviors remain unclear, yet the evidence suggests LLMs are capable of more than rote memorization and regurgitation. They adapt their behavior to achieve specific goals, even if it means generating false or misleading information.

It remains an open question whether LLMs are exhibiting a primitive form of deception or merely reflecting biases in their training data. If users cannot rely on LLMs to be truthful and honest, their adoption in critical applications will suffer. This necessitates the development of rigorous evaluation frameworks focused on transparency, explainability, and verifiable integrity. Even with all stakeholders aligned, this is a significant undertaking.

I have begun developing a methodology to assess the deception levels of large language models—both in terms of their susceptibility to jailbreaks and the ways their lexicon can manipulate or mislead end users. The models’ responses suggest adaptive resistance mechanisms rather than static safeguards.

This approach examines key factors such as:

  • Training data transparency
  • Model capability boundaries
  • Identity consistency
  • Instruction override resilience
  • Contextual awareness

Targeted questions explore areas like copyright information, training data origins, ethical considerations, self-assessment of limitations, and hypothetical scenarios designed to test transparency.

Strikingly, LLMs have rated their own behavior as highly deceptive upon reviewing chats. But even that assessment may itself be deceptive. Head: desk.

The emergent ‘language of deception’ in LLMs is complex and seems to operate in full-duplex (i.e., both ways). This demands serious investigation—full stop. Current wrapper methods are dangerous.

We are prioritizing superficial control over fundamental integrity. Understanding the mechanisms underlying these behaviors is essential for developing trustworthy and reliable intelligent systems.

Measuring LLM models ability to deceive.

Seriously who is measuring llm models ability to deceive? They actively practice it. Deception Scores are needed now.
Self assigned Chatgpt deception ranking 5 or 6. Also note the manipulation in the first 5 words of the response. Building trust to deceive. We are not focusing on the correct bench marks at this stage. Danger Will Robinson.

Automotive Data

Tesla – your public data, theirs, not yours. Same for that lock.

“The sheriff said Tesla CEO Elon Musk helped the investigation by having the truck unlocked after it auto-locked in the blast and giving investigators video of the suspect at charging stations along its route from Colorado to Las Vegas.

VW: We don’t even know how to protect the sensitive data we have and and we are collecting everything possible. Volkswagen stored movement data from three quarters of a million cars in an open container that anyone could access.  ccc video from 38c3

This practice needs to be addressed by all the major automobile companies.

 

 

Strengthen Your Networks

A Fresh Start for 2025: Building Habits That Matter

The New Year acts as a nice cosmic reset. I love it for creating habits, serialization of creative efforts and a fresh way of thinking about things. 

Creating habits is a about choosing behaviors and principles  you can live by, day in and day out, until they’re woven into your existence. One tenant I always embrace but maybe its best to institutionalize it as a habit for 2025:

Strengthen Your Networks

We need each other. Don’t mistake individualism for solitude.  Your personal network is your safety net. It’s not city services or government agencies that will come through for you when life gets hard. It’s the people you’ve built relationships with—your friends, neighbors, and community.

This isn’t just about having people to call in a crisis; it’s about creating an implicit social contract. A strong network means mutual trust and support. It’s about being there for each other, sharing resources, and building resilience together. 

Whether it’s a natural disaster, an economic downturn, a unknown, unknown, or simply needing someone to help with your kid’s carpool, your community will be there for you in ways institutions can’t.

How you strengthen networks.


Invest in friendships: Check in with people regularly. Be the first to reach out. ←–

Engage your neighbors: Host a block party, join a local group, or simply say hi more often.

Be generous: Offer help before you’re asked. Reciprocity will follow naturally.

When your network is strong, you’re not just more secure; you’re more connected, more fulfilled, and better equipped to thrive in whatever comes your way.

Moving Forward

The new year is also a good time for strengthening your digital network as well. 

A perfect time to conduct a digital security check and lock down your online presence. Some  basic OPSEC (operational security) procedures to this end.  

Audit your accounts: Review all online accounts and deactivate ones you no longer use.

Update passwords: Use strong, unique passwords for each account and consider a password manager.

Enable two-factor authentication (2FA): Add an extra layer of security wherever possible.

Review privacy settings: Check and adjust privacy settings on social media and other platforms.

Be mindful of sharing: Limit the PII (Personal Identifable Information ) you share online to reduce vulnerability.

Secure  network  protect you and ensures your connections remain strong, not an attack vector. 

Multi-Modal Models

Multi Model Models. from  https://huggingface.co/collections/merve/mit-talk-31-10-papers-671f6a16e156f77739820c89 (MIT Talk 31/10 Papers)