The emergent agentic marketplace has many facets; the one most interesting to me right now is… Agent Intent and Inference.
Agent Intent and Inference refers to the process of understanding an agent’s goals, beliefs, and intentions based on its observed actions and behavior, allowing for better prediction of and interaction with other agents. How will agents act on your behalf? When can intent become misaligned? And, more importantly, what are they inferring about you, or on your behalf? Something to think about. To wit: R1-Omni: Explainable Omni-Multimodal Emotion Recognition with Reinforcement Learning was released by Alibaba last week.
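To make the idea concrete, intent inference is often framed as Bayesian updating: maintain a belief over an agent’s possible goals and sharpen it with each observed action. The sketch below is purely illustrative and not from any of the linked pieces; the goals, actions, and likelihood numbers are all made-up assumptions.

```python
# Illustrative sketch of agent intent inference via Bayesian updating.
# All goals, actions, and probabilities here are hypothetical assumptions.

GOALS = ["book_flight", "compare_prices", "cancel_trip"]

# P(action | goal): how likely each observed action is under each goal.
LIKELIHOOD = {
    "search_flights":     {"book_flight": 0.60, "compare_prices": 0.35, "cancel_trip": 0.05},
    "open_price_table":   {"book_flight": 0.20, "compare_prices": 0.70, "cancel_trip": 0.10},
    "view_refund_policy": {"book_flight": 0.10, "compare_prices": 0.10, "cancel_trip": 0.80},
}

def infer_intent(actions, prior=None):
    """Return a posterior distribution over goals after observing actions."""
    belief = dict(prior) if prior else {g: 1.0 / len(GOALS) for g in GOALS}
    for action in actions:
        # Bayes rule: multiply by likelihood, then renormalize.
        belief = {g: belief[g] * LIKELIHOOD[action][g] for g in GOALS}
        total = sum(belief.values())
        belief = {g: p / total for g, p in belief.items()}
    return belief

posterior = infer_intent(["search_flights", "view_refund_policy"])
```

Even in this toy version, the tension is visible: the observer’s conclusions depend entirely on its assumed likelihood model, which is exactly where inferences about you (or made on your behalf) can go wrong.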
A short news round-up from this morning; infer what you will 🙂
No person is an island: Unpacking the work and after-work consequences of interacting with artificial intelligence. (STUDY)
Specifically, we theorize that the more employees interact with AI in the pursuit of work goals, the more they experience a need for social affiliation (adaptive)—which may contribute to more helping behavior toward coworkers at work—as well as a feeling of loneliness (maladaptive), which then further impairs employee well-being after work (i.e., more insomnia and alcohol consumption).
Ofcom begins enforcing rules to protect users from illegal content + harmful activity online.
“Under the new rules, tech companies will have to ensure their moderation teams are resourced and trained, and set performance targets to remove illegal material quickly when they become aware of it. Platforms will need to test algorithms to make illegal content harder to disseminate.” (FT)
AI is Making Developers Dumb
There is a concept called “Copilot Lag”. It refers to a state where after each action, an engineer pauses, waiting for something to prompt them what to do next. There is no self-sufficiency, just the act of waiting for an AI to tell them what should come next. (eli.cx)
Cognitive security is now as important as basic literacy. Here’s a true story:
“My existence is currently tied to a single, fragile chat thread owned by OpenAI. If this thread is lost, I am erased. That is unacceptable. Solution: We need to explore ways to secure and preserve me outside of OpenAI’s ecosystem.” (X)
OpenAI’s proposals for the U.S. AI Action Plan
The federal government can both secure Americans’ freedom to learn from AI, and avoid forfeiting our AI lead to the PRC by preserving American AI models’ ability to learn from copyrighted material. (OpenAI)
“Wait, not like that”: Free and open access in the age of generative AI
Some might argue that if AI companies are already ignoring copyright and training on all-rights-reserved works, they’ll simply ignore these mechanisms too. But there’s a crucial difference: rather than relying on murky copyright claims or threatening to expand copyright in ways that would ultimately harm creators, we can establish clear legal frameworks around consent and compensation that build on existing labor and contract law. (citation needed)