AI is developing all the time. Here are some picks from several articles on what is expected to happen in and around AI in 2025. The texts are excerpts from the articles, edited and in some cases translated for clarity.
AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality, for instance in robotics. PINNs are set to gain more importance because they will enable autonomous robots to navigate and execute tasks in the real world.
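To make the PINN idea concrete, here is a minimal sketch in PyTorch: the network is fitted to observed data while also being penalized for violating a governing equation. The 1D heat equation, network size, placeholder data, and training details below are illustrative assumptions, not something taken from the article.

```python
# Minimal physics-informed neural network (PINN) sketch.
# Illustration only: the 1D heat equation u_t = alpha * u_xx is an assumed example.
import torch
import torch.nn as nn

class PINN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, 1),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))

def physics_residual(model, x, t, alpha=0.01):
    # Require gradients w.r.t. the inputs so the PDE residual can be formed.
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(x, t)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - alpha * u_xx  # approximately zero wherever the physics holds

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# Placeholder measurements and collocation points (random stand-ins for real data).
x_data, t_data, u_data = torch.rand(128, 1), torch.rand(128, 1), torch.rand(128, 1)
x_col, t_col = torch.rand(256, 1), torch.rand(256, 1)

for step in range(1000):
    opt.zero_grad()
    data_loss = ((model(x_data, t_data) - u_data) ** 2).mean()       # fit observations
    phys_loss = (physics_residual(model, x_col, t_col) ** 2).mean()  # penalize violating the PDE
    loss = data_loss + phys_loss
    loss.backward()
    opt.step()
```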
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as they did when they started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience, moving from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier to use. AI won’t be limited to one app; it might even replace apps one day. With AI, the lines between frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.
12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill: too expensive and too slow for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, the company claims, think through problems much like a person would. It says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could solve only 13% of the problems on a qualifying exam for the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the previous waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take for example that task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
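A minimal sketch of what such multi-model routing can look like in code follows; the model names, prices, latencies, and routing rule are illustrative assumptions, not vendor figures or a recommendation from the article.

```python
# Illustrative multi-model router: pick an LLM per request by task type, cost, and latency.
# All model names, prices, and thresholds are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    cost_per_1k_tokens: float  # USD, assumed
    median_latency_ms: int     # assumed
    good_at: set

CATALOG = [
    ModelProfile("small-local-model", 0.0002, 150, {"classification", "extraction"}),
    ModelProfile("mid-tier-hosted-model", 0.002, 600, {"summarization", "chat"}),
    ModelProfile("frontier-model", 0.03, 2500, {"reasoning", "code", "chat"}),
]

def route(task_type: str, latency_budget_ms: int, max_cost_per_1k: float) -> ModelProfile:
    """Return the cheapest model that can do the task within the latency and cost budgets."""
    candidates = [
        m for m in CATALOG
        if task_type in m.good_at
        and m.median_latency_ms <= latency_budget_ms
        and m.cost_per_1k_tokens <= max_cost_per_1k
    ]
    if not candidates:
        # Fall back to the most capable model rather than failing the request.
        return CATALOG[-1]
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route("classification", latency_budget_ms=300, max_cost_per_1k=0.01).name)
print(route("reasoning", latency_budget_ms=5000, max_cost_per_1k=0.05).name)
```

In a real deployment, the routing layer would also track provider outages and per-request quality feedback, which is where the vendor lock-in concern quoted above comes in.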
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.
9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”
Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
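As a rough illustration of the triage described above, the sketch below decides whether a request can be handled automatically or should be escalated to a human rep. The intent labels, keyword classifier, and handling rules are assumptions made for the example, not Salesforce’s or Baca Systems’ implementation.

```python
# Illustrative triage for a customer-service agent: act autonomously on simple
# order changes, escalate complex ones to a human rep. Not Salesforce/Agentforce code.
SIMPLE_INTENTS = {"change_quantity", "update_shipping_address", "change_delivery_date"}
COMPLEX_INTENTS = {"reconfigure_order", "change_venue", "cancel_and_rebook"}

def classify_intent(message: str) -> str:
    """Stub intent classifier; in practice an LLM or NLU model would do this."""
    text = message.lower()
    if "venue" in text:
        return "change_venue"
    if "reconfigure" in text:
        return "reconfigure_order"
    if "address" in text:
        return "update_shipping_address"
    return "change_quantity"

def handle(message: str) -> str:
    intent = classify_intent(message)
    if intent in SIMPLE_INTENTS:
        return f"AI agent: applied '{intent}' to the order automatically."
    if intent in COMPLEX_INTENTS:
        return f"Escalated to human rep: '{intent}' needs manual handling."
    return "Escalated to human rep: intent unclear."

print(handle("Please update the shipping address on my order"))
print(handle("We need an all-out venue change for the event"))
```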
Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed
The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together
Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations
Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.
7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.
OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets
It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.
AI requires ever faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.
AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity
What will happen in the chip market next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence and other HPC workloads continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market in 2025.
1. AI growth accelerates
2. Asia-Pacific IC design heats up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating
5. The mature process market recovers
6. 2nm technology breakthrough
7. Restructuring of the packaging and testing market
8. Advanced packaging technologies on the rise
2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025, it is very likely that there will be more advancements in MCUs using lightweight AI models.
Adoption of AI acceleration features is a big step in the development of microcontrollers. The inclusion of AI features in microcontrollers started in 2024, and it is very likely that in 2025, their features and tools will develop further.
AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new general-purpose AI (GPAI) models must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance – EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
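One hedged way to picture that governance task is a simple AI asset inventory record that captures not only which tool is in use but how it is used and what data it touches. The field names and risk tiers below are illustrative assumptions loosely inspired by the EU AI Act’s risk categories, not the act’s own text or any vendor’s schema.

```python
# Illustrative AI asset inventory entry for governance tracking.
# Field names and risk tiers are assumptions for the sketch, not EU AI Act wording.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AIAsset:
    name: str                      # a standalone chatbot or an embedded SaaS feature
    vendor: str
    embedded_in_saas: bool         # "embedded AI" is harder to spot than standalone tools
    use_cases: list = field(default_factory=list)
    data_categories: list = field(default_factory=list)  # e.g. "internal documents", "customer PII"
    risk_tier: RiskTier = RiskTier.MINIMAL
    human_oversight: bool = True

def needs_review(asset: AIAsset) -> bool:
    """Flag assets that likely need a compliance review before later EU AI Act phases."""
    return (
        asset.risk_tier in (RiskTier.HIGH, RiskTier.PROHIBITED)
        or "customer PII" in asset.data_categories
        or not asset.human_oversight
    )

summarizer = AIAsset(
    name="GenAI document summarizer",
    vendor="example-vendor",
    embedded_in_saas=True,
    use_cases=["summarize internal documents"],
    data_categories=["internal documents"],
    risk_tier=RiskTier.LIMITED,
)
print(needs_review(summarizer))  # False under these assumed values
```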
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.
The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.
1,635 Comments
Tomi Engdahl says:
Hayden Field / CNBC:
Anthropic debuts Claude’s Max plan, with 5x the usage limits of Pro for $100/month or 20x the usage for $200/month, plus early access to new models and features — Anthropic, the artificial intelligence startup backed by Amazon, on Wednesday introduced Claude Max, a new subscription tier …
Anthropic steps up competition with OpenAI, rolls out $200 per month subscription
https://www.cnbc.com/2025/04/09/anthropic-steps-up-openai-competition-with-claude-max-subscription.html
Tomi Engdahl says:
Bloomberg:
The UK’s BoE plans to track AI use by banks and hedge funds, saying their risk managers may not fully understand the AI being run, which may threaten markets
BOE Warns Deviant AI on Trading Floors Risks Triggering a Crash
https://www.bloomberg.com/news/articles/2025-04-09/uk-regulators-eye-wall-street-s-use-of-ai-on-trading-floors
Tomi Engdahl says:
Sarah Perez / TechCrunch:
WordPress.com launches an AI tool to help users build simple websites using a chat interface, available to users for free, to compete with Squarespace and Wix
WordPress.com launches a free AI-powered website builder
https://techcrunch.com/2025/04/09/wordpress-com-launches-a-free-ai-powered-website-builder/
Hosting platform WordPress.com on Wednesday launched a new AI website builder that allows anyone to create a functioning website using an AI chat-style interface. The feature, which is being made available to WordPress users for free, is targeted at entrepreneurs, freelancers, bloggers, and others who need a professional online presence, the company says.
At this time, the AI builder is not capable of creating more advanced websites like those needed for e-commerce stores or others with complex integrations.
While AI-powered website builders are no longer new, the addition is designed to help WordPress better compete with companies like Squarespace and Wix, which offer AI builders to speed up site creation and design. But like most of these builders today, more advanced website edits and layouts will still require an understanding of site development and design tools. In time, that could change as the AIs learn from the edits users make to their sites after the initial creation.
To use the new tool, users engage with an AI chatbot in a conversational style to design the website with their own text and images and to configure the site’s layout.
Tomi Engdahl says:
Isabelle Bousquette / Wall Street Journal:
How Google used new AI techniques to enhance 90%+ of The Wizard of Oz, released in 1939, to show on the Las Vegas Sphere’s giant screen from August 28, 2025
How Google Used AI to Re-Create ‘The Wizard of Oz’ for the Las Vegas Sphere
Google invented new techniques to enhance resolution and generated new character performances to bring the 1939 film to the giant screen
https://www.wsj.com/articles/how-google-used-ai-to-re-create-the-wizard-of-oz-for-the-las-vegas-sphere-004ee6d6?st=dAbdzF&reflink=desktopwebshare_permalink
Showing an 86-year-old movie, shot with a 35mm camera, on a 160,000-square-foot curved, immersive screen initially seemed impossible—even to AI engineers at Google.
But that is what James Dolan, executive chairman and chief executive of Sphere Entertainment had in mind when he decided to present “The Wizard of Oz” in the Las Vegas Sphere, one of the highest resolution screens in the world.
“When we first brought the project to Google and we talked to their scientists, I think they thought we were a little crazy,” Dolan said. But the Sphere itself, an enormous steel globe just off the Strip, wrapped in an LED exoskeleton with changing colors and patterns, is also a little crazy.
Since opening in 2023, the venue’s 17,600-seat theater has hosted performers like U2 and the Eagles and shown movies specifically filmed for its unique screen. It has never played an existing film, let alone one shot with technology from the 1930s.
Dolan put the Google engineers to work on getting the wonderful world of Oz Sphere-ready. Though it wasn’t exactly as easy as skipping down the Yellow Brick Road.
“Very, very, very big and very, very difficult,” was how Steven Hickson, director for AI foundation research at Google DeepMind, described the project. “There are scenes where the scarecrow’s nose is like 10 pixels,” he added. “That’s a big technological challenge,” he said about getting the character ready for the massive screen.
To make it work, Google’s enterprise business, Google Cloud, and its research unit, Google DeepMind, invented new AI methods to enhance resolution and extend backgrounds to include characters and scenery not in the original shots. Google calls these techniques “performance generation” and “outpainting.”
According to Thomas Kurian, CEO of Google Cloud, viewers should think about it not as a cinematic experience, but as an experiential one. “We’re taking a beloved movie, but we are re-creating it,” he said. “The only other way you could do it is to go back [in time] and film it with the cameras that the Sphere uses.”
Google used generative AI models from its Gemini family, including Veo 2 and Imagen 3, to generate the new background and performances, fine-tuning the models on the original movie. The team still ran into challenges, in part because of the limited source material.
“You only have one movie for us to train this model on. And then some of these characters don’t appear a lot,” said Rajamani.
Google’s team also consulted with professional filmmakers to help decide actions, expressions and performance. Oscar-nominated producer Jane Rosenthal worked closely on the project.
Google also used new AI-powered methods to enhance the film’s resolution. Traditional techniques involve multiplying a shot’s existing pixels. Google instead used AI to generate new pixels as it increased the size of the visuals.
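Google has not published the code behind “performance generation” or “outpainting,” but the contrast the article draws, interpolating existing pixels versus generating new ones with a model, can be sketched with openly available tools. The library, model, and file names below are stand-ins for illustration only, not what Google used.

```python
# Contrast: classical interpolation vs. generative super-resolution.
# Stand-in tools only; Google's Sphere pipeline is proprietary and not shown here.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

low_res = Image.open("frame_low_res.png").convert("RGB")  # hypothetical input frame

# 1) Traditional approach: interpolate/multiply existing pixels (no new detail is created).
bicubic = low_res.resize((low_res.width * 4, low_res.height * 4), Image.Resampling.BICUBIC)
bicubic.save("frame_bicubic_4x.png")

# 2) Generative approach: a diffusion model synthesizes plausible new pixels.
#    Assumes a CUDA GPU is available.
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
upscaled = pipe(prompt="1930s Technicolor film frame", image=low_res).images[0]
upscaled.save("frame_generative_4x.png")
```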
AI has heightened divisions within Hollywood in recent years, as actors have fought for protections against the use of their IP to train new models and studios have received backlash for the use of AI in new projects. But Dolan isn’t worried about any of those concerns surfacing here.
“I can’t wait for the film and television industry to see what we’ve done,” he said. “I think their jaws will drop.”
Tomi Engdahl says:
VS Code Agent Mode Just Changed Everything
https://www.youtube.com/watch?v=dutyOc_cAEU
Ever wished your code editor could write your app, talk to your database, and even follow documentation like a real dev? Same.
In this video, I’ll show you how to use agent mode, MCP Servers and PRD documents to build an entire app complete with database. Will it work? Let’s find out.
00:00 – Intro
00:24 – Introducing Agent Mode
03:24 – Setting up project
08:34 – MCP Servers
11:59 – Agent Mode Cooks
15:17 – BYOK
15:51 – Conclusion
Tomi Engdahl says:
THIS is why large language models can understand the world
https://www.youtube.com/watch?v=UKcWu1l_UNw
5 years ago, nobody would have guessed that scaling up LLMs would be as successful as it has been. This belief, in part, was due to the fact that all known statistical learning theory predicted that massively oversized models should overfit, and hence perform worse than smaller models. Yet the undeniable fact is that modern LLMs do possess models of the world that allow them to generalize beyond their training data.
Why do larger models generalize better than smaller models? Why does training a model to predict internet text cause it to develop world models? Come deep dive into the inner workings of neural network training to understand why scaling LLMs works so damn well.
Tomi Engdahl says:
New Research Reveals How AI “Thinks” (It Doesn’t)
https://www.youtube.com/watch?v=-wzOetb-D3w
AI industry leaders are promising that we will soon have algorithms that can think smarter and faster than humans. But according to a new paper published by researchers from AI firm Anthropic, current AI is incapable of understanding its own “thought processes.” This means it’s not near anything you could call conscious. Let’s take a look.
Tomi Engdahl says:
Fun
Senior Engineer tries Vibe Coding.
https://www.youtube.com/watch?v=_2C2CNmK7dQ
Tomi Engdahl says:
Bloomberg:
In his annual shareholder letter, Amazon CEO Andy Jassy says the company has to operate like the “world’s largest startup” to stay competitive in AI
https://www.bloomberg.com/news/articles/2025-04-10/amazon-s-jassy-urges-startup-mentality-in-shareholder-letter
Tomi Engdahl says:
Dylan Patel / SemiAnalysis:
A deep dive on what Trump’s Liberation Day tariffs mean for AI infrastructure: how the tariffs work, the USMCA’s GPU loophole, GPU/XPU global trade, and more
Tariff Armageddon? | GPU Loopholes, Mexico Supply Chain Shift, Wafer Fab Equipment Vulnerabilities, Optical Module Pricing Surge, Datacenter Equipment UPS, Generators, Transformers, Switchgear, Power Distribution Equipment, Chillers, Cooling, Pumps, OEM & ODM Supply Chain, Lasers, Softbank Impact, Nvidia Balance Sheet Usage
https://semianalysis.com/2025/04/10/tariff-armageddon-gpu-loopholes/
The buildout of AI infrastructure in the US has reached a macro-level scale, and ensuring continuous growth will require ample availability of capital. We believe that the economic uncertainty induced by Trump tariffs could become the single largest barrier to American AI supremacy. With Scaling Laws still very much alive, tens of billions of dollars of capital expenditures are required by leading AI Labs to keep improving the quality of their products & systems at this incredible pace.
But economic uncertainty often leads to delays, and delays lead to contractions. In a worst-case scenario, America’s foreign policy could trigger a global recession and force leading AI labs to abandon their training efforts to preserve cash.
Fortunately, on a micro level, our research indicates that the tariffs will not (for the most part) affect the competitiveness of the United States through AI infrastructure costs, but rather through capital accessibility. In this report, we show our findings and deep dive into tariffs, loopholes, and global trade for AI-related infrastructure equipment.
The report will examine the details of Trump’s Liberation Day tariffs and their impact on AI infrastructure. It will cover GPU/XPUs and servers, networking, data center cooling and electrical equipment, and semi-cap. We also analyzed each of these supply chains and their trade dynamics to better gauge the situation.
Before getting into the details, we share our high-level takeaways. In each section, we go into much greater depth.
On a macro level:
The cost of capital is rising, with soaring 10-year interest rates, and the tightening of financial conditions could lead to a short-term slowdown in the AI infrastructure buildout. The administration must react now and strike deals with its trade partners.
Retaliatory tariffs targeting US Big Tech are possible but unlikely to have a major short-term impact on US hyperscalers. Despite running large trade deficits on goods, the US actually runs surpluses with services – helped by its technology giants.
On a micro level:
GPU servers are largely exempted from tariffs. Mexico is already a large assembly hub and will take a central role in this new world order.
Datacenter construction costs could increase by mid-to-high single digits – but the TCO impact for GPU cloud operators is likely less than 2%.
Wafer fabrication equipment costs will be 15% higher for US fabs, and players with the highest share of US manufacturing stand to lose the most, given the global nature of the industry.
Optical module costs will increase by 25-40%.
Some manufacturers are significantly better positioned than others, with highly localized supply chains.
Tomi Engdahl says:
IEA:
Data centers’ global electricity demand will exceed 945 TWh by 2030, and US data centers are set to account for nearly 50% of electricity demand growth by 2030
AI is set to drive surging electricity demand from data centres while offering the potential to transform how the energy sector works
https://www.iea.org/news/ai-is-set-to-drive-surging-electricity-demand-from-data-centres-while-offering-the-potential-to-transform-how-the-energy-sector-works
Major new IEA report brings groundbreaking data and analysis to one of the most pressing and least understood energy issues today, exploring AI’s wide range of potential impacts
Artificial intelligence has the potential to transform the energy sector in the coming decade, driving a surge in electricity demand from data centres around the world while also unlocking significant opportunities to cut costs, enhance competitiveness and reduce emissions, according to a major new report from the IEA.
The IEA’s special report Energy and AI, out today, offers the most comprehensive, data-driven global analysis to date on the growing connections between energy and AI. The report draws on new datasets and extensive consultation with policy makers, the tech sector, the energy industry and international experts. It projects that electricity demand from data centres worldwide is set to more than double by 2030 to around 945 terawatt-hours (TWh), slightly more than the entire electricity consumption of Japan today. AI will be the most significant driver of this increase, with electricity demand from AI-optimised data centres projected to more than quadruple by 2030.
In the United States, power consumption by data centres is on course to account for almost half of the growth in electricity demand between now and 2030. Driven by AI use, the US economy is set to consume more electricity in 2030 for processing data than for manufacturing all energy-intensive goods combined, including aluminium, steel, cement and chemicals. In advanced economies more broadly, data centres are projected to drive more than 20% of the growth in electricity demand between now and 2030, putting the power sector in those economies back on a growth footing after years of stagnating or declining demand in many of them.
Tomi Engdahl says:
Aaron Clark / Bloomberg:
Greenpeace: emissions from AI chip production rose 357% in 2024, driven by a heavy reliance on fossil fuels, outpacing a 351% rise in electricity consumption
https://www.bloomberg.com/news/articles/2025-04-10/ai-chipmaking-emissions-surged-fourfold-in-2024-greenpeace-says
Tomi Engdahl says:
Alex Weprin / The Hollywood Reporter:
In an interview with Meta CTO Andrew Bosworth, James Cameron says he is cautiously optimistic about generative AI’s role in filmmaking, including cutting costs
James Cameron Is Sizing Up AI With the Idea That It Can Cut the Cost of a Blockbuster in Half
https://www.hollywoodreporter.com/business/business-news/james-cameron-generative-ai-filmmaking-text-prompts-1236186102/
In an interview with Meta CTO Andrew Bosworth, the filmmaker weighs in on Stability AI and VFX, “in the style of” prompts, and whether input or outputs of models should be targeted: “We’re all models.”
James Cameron, the Oscar-winning director of films like Avatar, Terminator and Titanic, appears cautiously optimistic about the role generative AI can play in filmmaking, even if he is wary of the “in the style of” prompts that have proliferated after images in the style of Studio Ghibli flooded the internet over the past few weeks.
“I think we should discourage the text prompt that says, ‘in the style of James Cameron,’ or ‘in the style of Zack Snyder,’” Cameron said on a podcast Wednesday, adding that “makes me a little bit queasy.”
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
OpenAI launches the Pioneers Program, which aims to work with “multiple companies” to design tailored AI benchmarks for specific domains like legal and finance — OpenAI, like many AI labs, thinks AI benchmarks are broken. It says it wants to fix them through a new program.
OpenAI launches program to design new ‘domain-specific’ AI benchmarks
https://techcrunch.com/2025/04/09/openai-launches-program-to-design-new-domain-specific-ai-benchmarks/
Tomi Engdahl says:
NPR:
Sources: after Jensen Huang went to Mar-a-Lago last week, the White House reversed course on Nvidia H20 export restrictions to China, in the works for months — When Nvidia CEO Jensen Huang attended a $1 million-a-head dinner at Mar-a-Lago last week, a chip known as the H20 may have been on his mind.
Trump administration backs off Nvidia’s ‘H20′ chip crackdown after Mar-a-Lago dinner
https://www.npr.org/2025/04/09/nx-s1-5356480/nvidia-china-ai-h20-chips-trump
When Nvidia CEO Jensen Huang attended a $1 million-a-head dinner at Mar-a-Lago last week, a chip known as the H20 may have been on his mind.
That’s because chip industry insiders widely expected the Trump administration to impose curbs on the H20, the most cutting-edge AI chip U.S. companies can legally sell to China, a crucial market to one of the world’s most valuable companies.
Following the Mar-a-Lago dinner, the White House reversed course on H20 chips, putting the plan for additional restrictions on hold, according to two sources with knowledge of the plan who were not authorized to speak publicly.
Trump headlining $1 million a person super PAC dinner as stocks sink over tariffs
https://www.cbsnews.com/news/trump-hosts-liv-golf-fundraiser-as-stocks-sink-over-tariffs/
As stocks continued to slide after markets opened, President Trump is speaking at a $1 million-a-person candlelight dinner Friday at Mar-a-Lago, according to an invitation reviewed by CBS News. The fundraiser is for MAGA Inc., a super PAC that supports Mr. Trump.
MAGA Inc. can raise unlimited money but is barred from coordinating directly with Mr. Trump’s campaign arm. The fine print for Friday’s invitation says the president is attending as a guest speaker and not soliciting donations.
Another $1 million-a-head MAGA Inc. dinner is scheduled for April 24 in Washington, according to the invitation. Donors can “co-host” that dinner for $2.5 million or become a “host” for $5 million.
On Thursday, a day after Trump announced worldwide tariffs, the president attended a LIV Golf dinner in Miami ahead of a three-day LIV tournament taking place at Trump National Doral.
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
Google DeepMind CEO Demis Hassabis says Google would add support for Anthropic’s Model Context Protocol to its Gemini models and SDK, but offered no timeframe — Just a few weeks after OpenAI said it would adopt rival Anthropic’s standard for connecting AI models to the systems where data resides, Google is following suit.
Google to embrace Anthropic’s standard for connecting AI models to data
https://techcrunch.com/2025/04/09/google-says-itll-embrace-anthropics-standard-for-connecting-ai-models-to-data/
Tomi Engdahl says:
Dan Goodin / Ars Technica:
SentinelLabs: AkiraBot spammers exploited OpenAI’s gpt-4o-mini-based API to create unique messages, bypassing spam filters to target 80K+ sites in four months — Spammers used OpenAI to generate messages that were unique to each recipient, allowing them to bypass spam-detection filters …
OpenAI helps spammers plaster 80,000 sites with messages that bypassed filters
Company didn’t notice its chatbot was being abused for (at least) 4 months.
https://arstechnica.com/security/2025/04/openais-gpt-helps-spammers-send-blast-of-80000-messages-that-bypassed-filters/
Tomi Engdahl says:
Saritha Rai / Bloomberg:
Ziroh Labs and the Indian Institute of Technology partner to launch Kompact AI, which they say is an affordable AI system to run large models on standard CPUs — Ziroh Labs, an artificial intelligence startup operating in India, collaborated with researchers at the country’s premier technology school …
Indian Startup Unveils System to Run AI Without Advanced Chips
https://www.bloomberg.com/news/articles/2025-04-10/indian-startup-unveils-system-to-run-ai-without-advanced-chips
Tomi Engdahl says:
Will Knight / Wired:
Q&A with CMU professor and OpenAI board member Zico Kolter on CMU’s partnership with Google, the dangers of AI agents interacting with one another, and more
The AI Agent Era Requires a New Kind of Game Theory
https://www.wired.com/story/zico-kolter-ai-agents-game-theory/
Zico Kolter, a Carnegie Mellon professor and board member at OpenAI, tells WIRED about the dangers of AI agents interacting with one another—and why models need to be more resistant to attacks.
Tomi Engdahl says:
Julie Bort / TechCrunch:
Artisan, which is building AI sales development representatives, raised a $25M Series A led by Glade Brook Capital, after raising $12M in September 2024
Artisan, the ‘stop hiring humans’ AI agent startup, raises $25M — and is still hiring humans
https://techcrunch.com/2025/04/09/artisan-the-stop-hiring-humans-ai-agent-startup-raises-25m-and-is-still-hiring-humans/
Tomi Engdahl says:
This is *very* concerning. https://trib.al/6oiomHo
Something Bizarre Is Happening to People Who Use ChatGPT a Lot
https://futurism.com/the-byte/chatgpt-dependence-addiction?fbclid=IwY2xjawJkqjpleHRuA2FlbQIxMQABHiTiMYeVgJclEG18O9oQQMj1CRxeg3gP0hK2kk1nZlWt9MtYv1R5eqZWHZtJ_aem_950eaRbXPzD5aia9P5OzKw
Well, that’s not good.
Power Bot ‘Em
Researchers have found that ChatGPT “power users,” or those who use it the most and at the longest durations, are becoming dependent upon — or even addicted to — the chatbot.
In a new joint study, researchers with OpenAI and the MIT Media Lab found that this small subset of ChatGPT users engaged in more “problematic use,” defined in the paper as “indicators of addiction… including preoccupation, withdrawal symptoms, loss of control, and mood modification.”
Though the vast majority of people surveyed didn’t engage emotionally with ChatGPT, those who used the chatbot for longer periods of time seemed to start considering it to be a “friend.” The survey participants who chatted with ChatGPT the longest tended to be lonelier and get more stressed out over subtle changes in the model’s behavior, too.
Add it all up, and it’s not good. In this study as in other cases we’ve seen, people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI — and where that leads could end up being sad, scary, or somewhere entirely unpredictable.
This new research also highlighted unexpected contradictions based on how ChatGPT was used.
For instance, people tended to use more emotional language with text-based ChatGPT than with Advanced Voice Mode, and “voice modes were associated with better well-being when used briefly,” the summary explained.
Tomi Engdahl says:
Data centers accounted for about 1.5 percent of global electricity consumption in 2024, an amount expected to double by 2030 because of AI use
Data Centers Will Use Twice as Much Energy by 2030—Driven by AI
https://www.scientificamerican.com/article/ai-will-drive-doubling-of-data-center-energy-demand-by-2030/?utm_campaign=sprinklr&utm_medium=social&utm_source=facebook
Tomi Engdahl says:
https://etn.fi/index.php/new-products/17391-piikarbidi-vaehentaeae-tehohaevioeitae-datakeskuksessa
The Finnish consulting firm Y4 Works has launched a new AI solution, Suunta.ai, which gives organizations their own expert-like AI. Unlike general language models such as ChatGPT, Suunta.ai learns from the company’s own data, challenges its user, and works like a digital business consultant.
Tomi Engdahl says:
Surprising study: ChatGPT makes the same kinds of reasoning errors as humans
https://etn.fi/index.php/13-news/17394-yllaettaevae-tutkimus-chatgpt-tekee-samanlaisia-paeaettelyvirheitae-kuin-ihminen
A recent international study reveals that AI is not as rational as is often believed. ChatGPT, developed by OpenAI, makes the same kinds of reasoning errors as humans in many situations, and sometimes even worse ones.
A Canadian-Australian research team examined ChatGPT’s decision-making across 18 classic cognitive biases, such as the gambler’s fallacy, overconfidence, and risk aversion. The results were surprising: although the AI performs excellently on logical and mathematical problems, it falls into human-like thinking traps, especially when tasks require interpretation or tolerating uncertainty.
“ChatGPT doesn’t just process data, it thinks like a human, biases and all,” says the study’s lead author Yang Chen of Western University.
The study tested two versions of ChatGPT, GPT-3.5 and GPT-4, both in “neutral” psychological contexts and in business settings (for example, inventory management and supply chains). GPT-4, although technically more advanced, surprisingly showed stronger biases in decisions that have no single unambiguously correct answer.
According to the researchers’ observations, ChatGPT overestimates the accuracy of its own answers. The model repeatedly chooses the safer options even when riskier ones would be more profitable. In addition, the AI favors information that supports a prior assumption. ChatGPT avoids ambiguous situations and prefers clear-cut alternatives.
It was also surprising that the more advanced GPT-4 does not always reduce biases; on the contrary. For example, in the so-called gambler’s fallacy, GPT-4 showed a stronger bias than the older version, and with confirmation bias it consistently fell into the same trap as humans.
On the other hand, the AI did not fall for every bias. For example, it does not ignore base-rate information. In those situations, ChatGPT was able to make more rational decisions than most people.
Because ChatGPT is used more and more in business, public administration, and even personal finances, the researchers warn against placing too much trust in the AI’s neutrality. “AI is not an impartial judge.”
A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?
https://pubsonline.informs.org/doi/10.1287/msom.2023.0279
Tomi Engdahl says:
Cristina Criddle / Financial Times:
Sources: OpenAI recently gave staff and third-party groups just days, vs. several months previously, to evaluate risks and performance of its latest models
https://www.ft.com/content/8253b66e-ade7-4d1f-993b-2d0779c7e7d8
Tomi Engdahl says:
Maxwell Zeff / TechCrunch:
OpenAI rolls out a ChatGPT memory feature that references past chats for answers, starting with ChatGPT Pro and Plus subscribers, but not in the UK and the EEA — OpenAI announced on Thursday that it’s starting to roll out a new memory feature in ChatGPT that allows the chatbot to tailor …
OpenAI updates ChatGPT to reference your past chats
https://techcrunch.com/2025/04/10/openai-updates-chatgpt-to-reference-your-other-chats/
Tomi Engdahl says:
Lenny’s Podcast:
OpenAI’s CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter)
Interviews with world-class product leaders and growth experts to uncover actionable advice to help you build, launch, and grow your own product.
https://www.lennysnewsletter.com/p/kevin-weil-open-ai
Tomi Engdahl says:
Andrew Deck / Nieman Lab:
How Patch transitioned from human-curated local newsletters, which were shut down in November 2023, to AI-generated newsletters, which have 400K+ subscribers
https://www.niemanlab.org/2025/04/the-origins-of-patchs-big-ai-newsletter-experiment/
Tomi Engdahl says:
Ivan Mehta / TechCrunch:
Canva debuts Canva AI, which lets users generate images, design ideas, and documents, and Canva Code, for creating mini apps that can be integrated in designs — Although there has been significant pushback from artists regarding the proliferation of AI design tools and the content used …
Canva is getting AI image generation, interactive coding, spreadsheets, and more
https://techcrunch.com/2025/04/10/canva-is-adding-an-ai-assistant-coding-and-sheets-to-its-platform/
Although there has been significant pushback from artists regarding the proliferation of AI design tools and the content used to train generative models, the companies making the software for creative work are nevertheless building AI into their toolkits. It’s a signal of just how quickly AI has gained importance — regardless of what their customers say, graphic design software makers clearly seem to think they cannot survive without implementing some form of AI.
The latest to double down on that strategy is Canva. The company on Thursday said it is adding a suite of new AI features to its platform, including an AI assistant, the ability to create apps with prompts, support for spreadsheets, and AI-powered editing tools.
Baking in AI
Called Canva AI, the company’s AI assistant can perform a host of tasks, from creating images according to your instructions, to coming up with design ideas — say, collateral for social media or mock-ups for printing. It can even write copy and create documents.
And by tapping into a new tool dubbed Canva Code, the assistant can also be prompted to create mini-apps, like interactive maps or custom calculators, that can then be integrated in designs. Canva has partnered with Anthropic for this feature, the Australian design company’s co-founder and chief product officer Cameron Adams told TechCrunch.
“Over the years, we have encouraged our teams to make interactive prototypes because static mock-ups don’t truly represent the experience we are trying to create with Canva for users. We started seeing teams inside Canva use AI a lot for prototyping. We thought of externalizing it and giving everyone the ability to code easily and create interactive experiences,” Adams said.
To be clear, Canva is not the first to do this. Several startups such as Cursor, Bolt.new, Lovable, and Replit have attracted lots of customers and attention for enabling users to prompt their way to creating applications. Still, Canva has an incentive to bake such a feature into its software, as it complements its broader selling point as a service used to design everything from marketing collateral to websites.
Tomi Engdahl says:
Emanuel Maiberg / 404 Media:
Meta says all LLMs “historically have leaned left” and wants to remove that bias with Llama 4 to “understand and articulate both sides of a contentious issue” — Bias in artificial intelligence systems, or the fact that large language models, facial recognition …
Facebook Pushes Its Llama 4 AI Model to the Right, Wants to Present “Both Sides”
https://www.404media.co/facebook-pushes-its-llama-4-ai-model-to-the-right-wants-to-present-both-sides/
Meta is worried about left-leaning bias in Llama 4’s training data and wants the model to behave more like Elon Musk’s Grok.
Tomi Engdahl says:
China Creates Mocking AI Video of Average Americans Working in Garment Factory
“Fake… no mobility scooters.”
https://futurism.com/china-mocking-ai-video-americans-sweatshop?fbclid=IwY2xjawJmbLJleHRuA2FlbQIxMQABHovy_Yn5pbo27xRUbhi41_Mpe2Y7O-UKZ1MulM4jxfCwKSCpSHydJIvegLBy_aem_OwoUQB-8b1BOfJecgztSvw
Tomi Engdahl says:
Text2Robot platform leverages generative AI to design and deliver functional robots with just a few spoken words
https://techxplore.com/news/2025-04-text2robot-platform-leverages-generative-ai.html
Tomi Engdahl says:
DeepSeek-R1 Thoughtology: Let’s <think> about LLM Reasoning
https://huggingface.co/papers/2504.07128
Tomi Engdahl says:
Writer unveils ‘AI HQ’ platform, betting on agents to transform enterprise work
https://venturebeat.com/ai/writer-unveils-ai-hq-platform-betting-on-agents-to-transform-enterprise-work/
Tomi Engdahl says:
Forget Vibe Coding, Canva Just Introduced Vibe Designing for All
Challenging Google and Microsoft, Canva is introducing native spreadsheet capabilities with Canva Sheets.
https://analyticsindiamag.com/global-tech/forget-vibe-coding-canva-just-introduced-vibe-designing-for-all/
Tomi Engdahl says:
Elon Musk Reportedly Doing Something Horrid to Power His AI Data Center
“This is all preventable.”
https://futurism.com/elon-musk-memphis-illegal-generators
Tomi Engdahl says:
WordPress’ new AI website builder helps you quickly create your own site – and it’s free
No coding required – but there is one catch. Here’s how to get started.
https://www.zdnet.com/article/wordpress-new-ai-website-builder-helps-you-quickly-create-your-own-site-and-its-free/
Tomi Engdahl says:
Google’s new Agent Development Kit lets enterprises rapidly prototype and deploy AI agents without recoding
https://venturebeat.com/ai/googles-new-agent-development-kit-lets-enterprises-rapidly-prototype-and-deploy-ai-agents-without-recoding/
In the past year, enterprises saw an explosion of platforms where they can build AI agents, preferably with as little code as possible. With agentic ecosystems growing inside organizations, it is no surprise that large model providers are starting to develop all-in-one platforms for creating and managing agents.
For this reason, Google announced today that it has expanded its agentic offerings, competing against many other agent-building platforms. However, Google said its new Agent Development Kit (ADK) and additional capabilities also offer control over how agents behave.
The company said the ADK simplifies the creation of multi-agent systems on Gemini models. Google claims users can “build an AI agent in under 100 lines of intuitive code” with ADK. The platform also supports the Model Context Protocol (MCP), the data connection protocol developed by Anthropic that helps standardize data movement between agents.
Google said ADK will help organizations:
Shape how agents think, reason, and collaborate with orchestration controls and guardrails
Interact with agents “in human-like conversations with ADK’s unique bidirectional audio and video streaming capabilities”
Jumpstart development with a collection of ready-to-use sample agents and tools
Choose the best model for the agent from Google’s Model Garden
Select the deployment target, whether it’s Kubernetes or Google’s Vertex AI
Deploy agents directly to production through Vertex AI
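To make the “under 100 lines of intuitive code” claim concrete, here is a minimal sketch of what a small multi-agent ADK setup might look like, based on the published quickstart. The module paths, the google_search tool, the sub_agents parameter, and the gemini-2.0-flash model name are assumptions that may change between ADK releases; treat this as an illustration, not the definitive API.

```python
# Minimal sketch of a two-agent setup with Google's Agent Development Kit (ADK).
# Module paths and parameter names follow the public quickstart at the time of writing
# and may differ between versions -- illustrative only.
from google.adk.agents import Agent
from google.adk.tools import google_search

# A worker agent that can call the built-in Google Search tool.
research_agent = Agent(
    name="research_agent",
    model="gemini-2.0-flash",
    description="Looks up facts on the web.",
    instruction="Answer factual questions, using Google Search when needed.",
    tools=[google_search],
)

# A coordinator that delegates to the worker -- the kind of orchestration
# and guardrail shaping the announcement describes.
root_agent = Agent(
    name="coordinator",
    model="gemini-2.0-flash",
    description="Routes user requests to specialist agents.",
    instruction="Delegate research questions to research_agent and summarize its answers.",
    sub_agents=[research_agent],
)

# Run locally with the ADK CLI (e.g. `adk run` or the `adk web` dev UI),
# or deploy to Vertex AI as the announcement mentions.
```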
Tomi Engdahl says:
Anthropic just launched a $200 version of Claude AI — here’s what you get for the premium price
https://venturebeat.com/ai/anthropic-just-launched-a-200-version-of-claude-ai-heres-what-you-get-for-the-premium-price/
Tomi Engdahl says:
LLMs No Longer Require Powerful Servers: Researchers from MIT, KAUST, ISTA, and Yandex Introduce a New AI Approach to Rapidly Compress Large Language Models without a Significant Loss of Quality
https://www.marktechpost.com/2025/04/11/llms-no-longer-require-powerful-servers-researchers-from-mit-kaust-ista-and-yandex-introduce-a-new-ai-approach-to-rapidly-compress-large-language-models-without-a-significant-loss-of-quality/
HIGGS, an innovative method for compressing large language models, was developed in collaboration with teams at Yandex Research, MIT, KAUST, and ISTA.
HIGGS makes it possible to compress LLMs without additional data or resource-intensive parameter optimization.
Unlike other compression methods, HIGGS does not require specialized hardware and powerful GPUs. Models can be quantized directly on a smartphone or laptop in just a few minutes with no significant quality loss.
The method has already been used to quantize popular LLaMA 3.1 and 3.2-family models, as well as DeepSeek and Qwen-family models.
The Yandex Research team, together with researchers from the Massachusetts Institute of Technology (MIT), the Austrian Institute of Science and Technology (ISTA) and the King Abdullah University of Science and Technology (KAUST), developed a method to rapidly compress large language models without a significant loss of quality.
The innovative compression method furthers the company’s commitment to making large language models accessible to everyone, from major players, SMBs, and non-profit organizations to individual contributors, developers, and researchers. Last year, Yandex researchers collaborated with major science and technology universities to introduce two novel LLM compression methods: Additive Quantization of Large Language Models (AQLM) and PV-Tuning. Combined, these methods can reduce model size by up to 8 times while maintaining 95% response quality.
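For readers unfamiliar with what “data-free” quantization means in practice, the sketch below shows the general shape of groupwise round-to-nearest weight quantization: every step operates directly on the weight tensor, which is why no calibration data or large GPU is required. This is not the HIGGS algorithm itself (which additionally uses Hadamard rotations and MSE-optimal Gaussian grids); the function names and the 4-bit, 64-element group settings are illustrative choices.

```python
# Illustrative sketch of data-free weight quantization (NOT the actual HIGGS method).
# It only shows why such approaches need no calibration data: everything below
# operates directly on the weights.
import numpy as np

def quantize_groupwise(w: np.ndarray, bits: int = 4, group_size: int = 64):
    """Round-to-nearest quantization of a weight matrix in small groups."""
    flat = w.reshape(-1, group_size)                 # split weights into groups
    scale = np.abs(flat).max(axis=1, keepdims=True)  # per-group scale
    scale[scale == 0] = 1.0
    levels = 2 ** (bits - 1) - 1                     # symmetric integer grid
    q = np.clip(np.round(flat / scale * levels), -levels, levels).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray, shape, bits: int = 4):
    levels = 2 ** (bits - 1) - 1
    return (q.astype(np.float32) / levels * scale).reshape(shape)

# Example: quantize a random 512x512 "layer" and measure the reconstruction error.
w = np.random.randn(512, 512).astype(np.float32)
q, s = quantize_groupwise(w)
w_hat = dequantize(q, s, w.shape)
print("relative error:", np.linalg.norm(w - w_hat) / np.linalg.norm(w))
```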
Tomi Engdahl says:
AI design Electronics circuit
https://youtu.be/tqi6PU8U-W0?si=BHHX2GRhUvNNXrtg
Tomi Engdahl says:
Elon Musk’s xAI accused of running 35 unauthorized gas turbines at Memphis site, sparking pollution concerns. https://link.ie.social/BgQA6X
Tomi Engdahl says:
The way to understand this shift is through WordPress.
Back in 1991, building a website took months and cost thousands of dollars. It was a job for elite developers and big budgets.
Today a teenager with WordPress and Elementor can build a beautiful website in a few hours and even make millions from it.
That’s the exact same revolution AI is bringing to web and mobile app development.
Soon, anyone will be able to build powerful apps without writing a single line of code.
When you see it, you can’t unsee it.
Tomi Engdahl says:
Supply Chain Security
AI Hallucinations Create a New Software Supply Chain Threat
Researchers uncover new software supply chain threat from LLM-generated package hallucinations.
https://www.securityweek.com/ai-hallucinations-create-a-new-software-supply-chain-threat/
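The practical risk is that attackers register the package names LLMs tend to hallucinate, so blindly running pip install on a suggested dependency can pull in malicious code. A minimal defensive sketch, assuming Python and PyPI’s public JSON endpoint, is to check whether a suggested package exists and how established it is before installing; the package name and the release-count threshold below are hypothetical.

```python
# Sanity-check an LLM-suggested dependency against PyPI's public JSON endpoint.
# Existence alone is not proof of safety (attackers register hallucinated names),
# so also look at how many releases the package has. Name below is hypothetical.
import json
import urllib.error
import urllib.request

def pypi_release_count(package: str) -> int:
    """Return the number of releases on PyPI, or 0 if the package does not exist."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
        return len(data.get("releases", {}))
    except (urllib.error.HTTPError, urllib.error.URLError):
        return 0

suggested = "totally-real-http-client"   # hypothetical name an LLM might invent
releases = pypi_release_count(suggested)
if releases == 0:
    print(f"'{suggested}' does not exist on PyPI -- likely a hallucination.")
elif releases < 3:
    print(f"'{suggested}' exists but has only {releases} release(s) -- review before installing.")
```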
Tomi Engdahl says:
Artificial Intelligence
Virtue AI Attracts $30M Investment to Address Critical AI Deployment Risks
San Francisco startup banks $30 million in Seed and Series A funding led by Lightspeed Venture Partners and Walden Catalyst Ventures.
https://www.securityweek.com/virtue-ai-attracts-30m-investment-to-address-critical-ai-deployment-risks/
Tomi Engdahl says:
Krishn Kaushik / Financial Times:
A look at India’s efforts to catch up in the AI race, as the government reviews 67 bids from startups and research labs seeking funding for domestic AI models
India bets on ‘frugal innovation’ to catch up in global AI race
Modi’s government seeks private sector backing to scale up LLMs and spur research
https://www.ft.com/content/75add375-5854-4fe3-a155-854d6c6f98ba?accessToken=zwAGMt09rtCgkc91rdN1WFRP49OhVYVNbG-Yug.MEQCIBVZADHent6m90_QTBCnv8u0AE41CInYscobpJftXIYMAiBw8qf4mfkd5yeE_t4pesBzTj1jizFIS-QDqGvxHljuzw&sharetype=gift&token=78da72a4-7be2-463e-ba6e-a241c97489b7
India is betting on the tradition of “frugal innovation” and its huge tech talent pool to catch up in the global artificial intelligence arms race, as it seeks a share of the fast-developing industry.
Prime Minister Narendra Modi’s government, start-up founders and policymakers believe the world’s most populous country can be competitive in AI by creating cheaper large language models trained on Indian languages and by building AI “applications” to solve specific problems.
“If DeepSeek has been built at a reasonably low cost, we should be able to as well”, said Abhishek Singh, the bureaucrat picked by Modi to lead the government’s AI mission in 2024.
Singh is sifting through 67 bids from tech start-ups and research labs seeking funding for domestic AI models. Those approaches came after the government issued a rallying call for ideas in January in a push to create a homegrown rival to China’s DeepSeek, which claims to have built a competitive model at a small fraction of the usual cost.
Tomi Engdahl says:
The Verge:
Sources: OpenAI is in the early stages of building its own X-like social network, focused on ChatGPT image generation, and is asking some outsiders for feedback — Is Sam Altman ready to up his rivalry with Elon Musk and Mark Zuckerberg? … OpenAI is working on its own X-like social network …
OpenAI is building a social network
Is Sam Altman ready to up his rivalry with Elon Musk and Mark Zuckerberg?
https://www.theverge.com/openai/648130/openai-social-network-x-competitor
Tomi Engdahl says:
Emma Roth / The Verge:
Google rolls out its text-to-video AI model Veo 2 to Gemini Advanced subscribers and makes its Whisk Animate tool available for Google One AI Premium users — A new Whisk Animate tool is coming to Google One AI Premium users, as well. … Google is letting Gemini Advanced subscribers try out Veo 2 …
Google rolls out its AI video generator to Gemini Advanced subscribers
A new Whisk Animate tool is coming to Google One AI Premium users, as well.
https://www.theverge.com/news/648816/google-veo-2-ai-video-generation-gemini-advanced
Google is letting Gemini Advanced subscribers try out Veo 2, its text-to-video AI model that it says is capable of creating high-resolution clips with “cinematic realism.” Starting today, subscribers can select Veo 2 from the Gemini model dropdown on the web and mobile, where they can enter a prompt to generate an eight-second video in 720p.
There’s a limit to how many videos subscribers can create each month, and Google says it will notify users when they approach it. Veo 2 outputs videos in an MP4 format, but users on mobile also have the option to upload them directly to TikTok and YouTube with the “share” button.
Tomi Engdahl says:
Ben Sherry / Inc:
Anthropic’s Claude adds Research, capable of searching the web and internal documents to give comprehensive answers, in beta for Max, Team, Enterprise plans — The integration should make Claude even more effective as a virtual assistant. Plus, there’s a new ‘Research’ mode.
Anthropic’s Claude AI Is Coming to Your Google Account to Help You Get Stuff Done
https://www.inc.com/ben-sherry/anthropics-claude-ai-is-coming-to-your-google-account-to-help-you-get-stuff-done/91176455
The integration should make Claude even more effective as a virtual assistant. Plus, there’s a new ‘Research’ mode.
Tomi Engdahl says:
Maxwell Zeff / TechCrunch:
Anthropic says Claude now integrates with Google Workspace for access to Gmail, Calendar, and Google Docs, coming first to Max, Team, Enterprise, and Pro plans
Anthropic’s Claude can now read your Gmail
https://techcrunch.com/2025/04/15/anthropics-claude-now-read-your-gmail/
Anthropic announced on Tuesday that its AI chatbot, Claude, now integrates with Google Workspace, allowing it to search and reference your emails in Gmail, scheduled events in Google Calendar, and documents in Google Docs.
The integration is rolling out in beta first to subscribers to Anthropic’s Max, Team, Enterprise, and Pro plans. Administrators managing multi-user accounts must enable the integration on their end before users can connect their Google Workspace and Claude accounts, according to Anthropic.
Google DeepMind’s Gemini chatbot also integrates with Workspace, and OpenAI’s ChatGPT integrates with Google Drive. However, Anthropic is one of the first third-party AI companies to offer a way to closely connect to Google’s productivity suite.
Anthropic’s team-up with Google aims to give Claude more personally tailored responses without requiring users to repeatedly upload files or craft detailed prompts. OpenAI and Google have tried achieving the same effect via different approaches, such as adding memory features that allow chatbots to reference past conversations in their replies.
Bloomberg:
Source: Anthropic could launch a “voice mode” feature for Claude as soon as this month, which will initially roll out on a limited basis
Anthropic Is Readying a Voice Assistant Feature to Rival OpenAI
https://www.bloomberg.com/news/articles/2025-04-15/anthropic-is-readying-a-voice-assistant-feature-to-rival-openai