Anthropic questions AI consciousness
PLUS: Adobe releases new Firefly models, third-party integrations
Good morning, AI enthusiasts. Anthropic just took the AI consciousness debate from science fiction to serious research — launching a new program to develop frameworks for assessing potential model welfare.
With their own researcher estimating a 15% chance that models are already conscious, are we nearing the existential debate on whether digital minds deserve ethical treatment?
In today’s AI rundown:
Anthropic’s new research explores AI welfare
Adobe’s new Firefly models, AI integrations
Turn your terminal into an AI coding assistant
Google DeepMind expands Music AI Sandbox
4 new AI tools & 4 job opportunities
LATEST DEVELOPMENTS
ANTHROPIC

Image source: GPT-4o / The Rundown
The Rundown: Anthropic just launched a new research program dedicated to “model welfare,” exploring the complex ethical questions around whether future AI systems might gain consciousness or deserve moral consideration.
The details:
Research areas include developing frameworks to assess consciousness, studying indicators of AI preferences and distress, and exploring interventions.
Anthropic hired Kyle Fish, its first AI welfare researcher, in 2024 to explore consciousness in AI — he estimates a 15% chance that models are already conscious.
The initiative follows increasing AI capabilities and a recent report (co-authored by Fish) suggesting AI consciousness is a near-term possibility.
Anthropic emphasized deep uncertainty around these questions, noting no scientific consensus on whether current or future systems could be conscious.
Why it matters: Sam Altman previously likened AI to a form of alien intelligence. These models may soon reach a level of capability that changes how we think about consciousness and ethics in relation to them. A polarizing divide is likely, especially since there is no agreed-upon threshold for when an AI could be considered “conscious” or deserving of rights.
TOGETHER WITH INNOVATING WITH AI
The Rundown: Innovating with AI’s new program, AI Consultancy Project, equips AI enthusiasts with all the resources to capitalize on the rapidly growing AI consulting market – which is set to 8x to $54.7B by 2032.
The program offers:
Tools and frameworks to find clients and deliver top-notch services
A 6-month roadmap to build a 6-figure AI consulting business
Students have landed their first AI clients in as little as 3 days
ADOBE

Image source: Adobe
The Rundown: Adobe just launched a major expansion of its Firefly AI platform at its MAX London event, introducing two powerful new image generation models, third-party integrations, a new collaborative workspace, and an upcoming mobile app.
The details:
The new Firefly Image Model 4 and 4 Ultra boost generation quality, realism, control, and speed, while supporting up to 2K resolution outputs.
Firefly's web app now offers access to third-party models like OpenAI's GPT ImageGen, Google's Imagen 3 and Veo 2, and Black Forest Labs’ Flux 1.1 Pro.
Firefly’s text-to-video capabilities are now out of beta, alongside the official release of its text-to-vector model.
Adobe also launched Firefly Boards in beta for collaborative AI moodboarding and announced the upcoming release of a new Firefly mobile app.
Adobe’s models are all commercially safe and IP-friendly, with a new Content Authenticity feature allowing users to easily apply AI-identifying metadata to their work.
Why it matters: OpenAI’s recent image generator and other rivals have shaken up creative workflows, but Adobe’s IP-safe focus and the addition of competing models into Firefly allow professionals to remain in their established suite of tools — keeping users in the ecosystem while still having flexibility for other model strengths.
AI TRAINING

The Rundown: In this tutorial, you will learn how to install and use OpenAI’s new Codex CLI coding agent that runs in your terminal, letting you explain, modify, and create code using natural language commands.
Step-by-step:
Make sure Node.js and npm are installed on your system.
Install Codex by running npm install -g @openai/codex in your terminal, then set your API key using export OPENAI_API_KEY="your-key-here".
Start an interactive session with codex or run commands directly like codex "explain this function".
Choose your comfort level with one of the three approval modes: suggest, auto-edit, or full-auto.
Pro tip: Always run it in a Git-tracked directory so you can easily review and revert changes if needed. For more info, here is the GitHub repository.
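The steps above can be sketched as a single shell session. This is a non-authoritative summary — the exact flags (including --approval-mode) follow the Codex CLI’s GitHub README at the time of writing, and the API key shown is a placeholder:

```shell
# 1. Prerequisite check — Codex requires Node.js and npm:
command -v node >/dev/null || echo "Node.js not found - install it first"

# 2. Install the Codex CLI globally (may need sudo depending on your npm setup):
#    npm install -g @openai/codex

# 3. Export your API key so the CLI can authenticate (placeholder shown):
export OPENAI_API_KEY="your-key-here"

# 4. Run interactively, or pass a one-off prompt with an approval mode:
#    codex                                    # interactive session
#    codex "explain this function"            # one-off natural-language command
#    codex --approval-mode auto-edit "fix the failing test"
```

Running this inside a Git-tracked directory, as the pro tip suggests, means every edit Codex proposes can be reviewed with git diff and undone with git checkout.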
PRESENTED BY IMAGINE AI LIVE
The Rundown: IMAGINE AI LIVE '25 gives your enterprise direct access to the AI pioneers most companies can't reach, with speakers like Bindu Reddy, Dan Siroker, and Nathan Labenz compressing years of learning into just three days.
Meet the AI experts on May 28-30 at the Fontainebleau Las Vegas and:
Bypass months of costly trial-and-error with frameworks built for enterprise scale
Connect with leaders who've successfully embedded AI across entire organizations
Get actionable roadmaps that translate cutting-edge capabilities into business impact
Accelerate your AI transformation with code AISPEAKERS200 to save $200 when you register by April 25th — limited VIP passes are still available.
GOOGLE DEEPMIND

Image source: Google DeepMind
The Rundown: Google DeepMind just released new upgrades to its Music AI Sandbox, introducing its new Lyria 2 music generation model alongside new creation and editing features for professional musicians.
The details:
The platform’s new “Create,” “Extend,” and “Edit” features allow musicians to generate tracks, continue musical ideas, and transform clips via text prompts.
The tools are powered by the upgraded Lyria 2 model, which features higher-fidelity, professional-grade audio generation compared to previous versions.
DeepMind also unveiled Lyria RealTime, a version of the model enabling interactive, real-time music creation and control by blending styles on the fly.
Access to the experimental Music AI Sandbox is expanding to more musicians, songwriters, and producers in the U.S. for broader feedback and exploration.
Why it matters: Google is targeting professional musicians, positioning Lyria 2 and the Sandbox as co-creation partners rather than just novelty music generators. Like every other medium, the creative landscape for musicians is being reshaped by AI, and these tools are a big step toward normalizing its currently polarizing use in the industry.
QUICK HITS
🔍 Dropbox Dash – AI universal search and knowledge management that will find every doc, video, image, or teammate across apps and turn content into first drafts, fast*
🎨 gpt-image-1 — OpenAI’s advanced image generation, now available via API
🤖 Researcher & Analyst - Copilot agents for research and data science tasks
🎆 Seedream 3.0 - Dreamina’s new high-level text-to-image model
*Sponsored Listing
🧠 DeepMind - Research Scientist
🛠️ OpenAI - NOC Technician
🌍 Scale AI - Strategic Projects Lead
📊 Perplexity AI - Revenue Operations Analyst
OpenAI reportedly plans to release an open-source reasoning model this summer that surpasses other open-source rivals on benchmarks and has a permissive usage license.
Tavus launched Hummingbird-0, a new SOTA lip-sync model that scores top marks in realism, accuracy, and identity preservation.
U.S. President Donald Trump signed an executive order establishing an AI Education Task Force and Presidential AI Challenge, aiming to integrate AI across K-12 classrooms.
Lovable unveiled Lovable 2.0, a new version of its app-building platform featuring “multiplayer” workspaces, an upgraded chat mode agent, an updated UI, and more.
Grammy winner Imogen Heap released five AI "stylefilters" on the music platform Jen, allowing users to generate new instrumental tracks inspired by her songs.
Higgsfield AI introduced a new Turbo model for faster and cheaper AI video generations, alongside seven new motion styles for additional camera control.
COMMUNITY
Join our next workshop on Monday, April 28th at 3 PM EST with Ellie Jacobs and Noam Markose from LTX Studio. In this live session, you’ll learn how to bring your AI-generated storyboards to life using LTX Studio’s powerful new timeline editor — no editing experience needed.
RSVP here. Not a member? Join The Rundown University on a 14-day free trial.
We’ll always keep this newsletter 100% free. To support our work, consider sharing The Rundown with your friends, and we’ll send you more free goodies.
That's it for today! Before you go, we’d love to know what you thought of today's newsletter to help us improve The Rundown experience for you.
See you soon,
Rowan, Joey, Zach, Alvaro, and Jason—The Rundown’s editorial team