AI is developing all the time. Here are some picks from several articles what is expected to happen in AI and around it in 2025. Here are picks from various articles, the texts are picks from the article edited and in some cases translated for clarity.
AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality or robotics. PINNs are set to gain more importance in the job market because they will enable autonomous robots to navigate and execute tasks in the real world.
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working for the first time through issues like AI-specific legal and data privacy terms (compared to when companies started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people. AI won’t be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.
12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill and are too expensive, and too slow, for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that thinks through problems much like a person would, it claims. The company says it can achieve PhD-level performance in challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could only solve 13% of the problems on the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained over the previous waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take for example that task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-modal routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.
9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize it’s potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a wholistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”
Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquires and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed
Kyberturvallisuuden ja tekoälyn tärkeimmät trendit 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together
Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations
Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.
7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.
OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets
It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.
Tekoäly edellyttää yhä nopeampia verkkoja
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.
AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity
Näin sirumarkkinoilla käy ensi vuonna
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence and HPC computing continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market by 2025.
1. AI growth accelerates
2. Asia-Pacific IC Design Heats Up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating.
5. Mature process market recovers
6. 2nm Technology Breakthrough
7. Restructuring the Packaging and Testing Market
8. Advanced packaging technologies on the rise
2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025, it is very likely that there will be more advancements in MCUs using lightweight AI models.
Adoption of AI acceleration features is a big step in the development of microcontrollers. The inclusion of AI features in microcontrollers started in 2024, and it is very likely that in 2025, their features and tools will develop further.
Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter from August 1 any new AI models based on GPAI standards must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance – EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.
The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.
1,635 Comments
Tomi Engdahl says:
AI proves that human fingerprints are not unique, upending 100 years of law enforcement
https://www.earth.com/news/ai-proves-that-fingerprints-are-not-unique-shattering-long-held-belief-legal-implications/
Tomi Engdahl says:
Self-hosting n8n: the easy way
#
cloud
#
devops
#
opensource
#
ai
n8n is the hottest “ai native” automation tool on the market right now. Basically no-code for AI workflows, BUT the pricing can be a bit daunting, with the cheapest paid plan starting at 24/month with a lot of limitations. Alternatively, you can self-host n8n! In this tutorial we’re going to setup a n8n (community version) instance on sliplane, for only 9 euros per month and (nearly) no limitations!
https://sliplane.io/blog/self-hosting-n8n-the-easy-way
Tomi Engdahl says:
Chinese tech industry floods market with AI models after DeepSeek success
Since DeepSeek upstaged OpenAI with a model that purportedly cost just several million dollars to build, China’s tech leaders have flooded the market with a number of low-cost AI-services
https://www.business-standard.com/technology/tech-news/chinese-tech-industry-floods-market-with-ai-models-after-deepseek-success-125032601455_1.html
Tomi Engdahl says:
And so it begins – Amazon Web Services is aggressively courting its own customers to use its Trainium tech rather than Nvidia’s GPUs
News
By Wayne Williams published March 29, 2025
Report claims AWS offered a 25% discount to make the switch
https://www.techradar.com/pro/and-so-it-begins-amazon-web-services-is-aggressively-courting-its-own-customers-to-use-its-trainium-tech-rather-than-nvidias-gpus
AWS urging customers to switch from Nvidia to its cheaper Trainium chip
It says its hardware offers the same performance with a 25 percent cost saving
Amazon’s pitch happened as Nvidia was showcasing its new hardware at GTC 2025
Tomi Engdahl says:
LLM providers on the cusp of an ‘extinction’ phase as capex realities bite
Only the strong will survive, but analyst says cull will not be as rapid as during dotcom era
https://www.theregister.com/2025/03/31/llm_providers_extinction/
Tomi Engdahl says:
Bill Gates nimeää ammatin, jossa riittää töitä jatkossakin
https://www.is.fi/digitoday/art-2000011137866.html
Tomi Engdahl says:
Xinmei Shen / South China Morning Post:
DeepSeek and Tsinghua University researchers detail an approach combining reasoning methods to let LLMs deliver better and faster results to general queries
DeepSeek unveils new AI reasoning method as anticipation for its next-gen model rises
https://www.scmp.com/tech/tech-trends/article/3305259/deepseek-unveils-new-ai-reasoning-method-anticipation-its-next-gen-model-rises
In collaboration with Tsinghua University, DeepSeek developed a technique combining reasoning methods to guide AI models towards human preferences
Tomi Engdahl says:
Tom Warren / The Verge:
Microsoft unveils a demo of Quake II generated using its Muse AI model, with limited playability in a browser, as part of the company’s Copilot for Gaming push
Microsoft has created an AI-generated version of Quake
https://www.theverge.com/news/644117/microsoft-quake-ii-ai-generated-tech-demo-muse-ai-model-copilot
Microsoft’s Muse AI model is now available as an AI-generated Quake II tech demo.
Tomi Engdahl says:
Rachel Metz / Bloomberg:
Source: AI coding startup Cursor, which has helped permeate “vibe coding”, hit 1M+ DAUs in March 2025, driven largely by word-of-mouth growth
AI Coding Assistant Cursor Draws a Million Users Without Even Trying
https://www.bloomberg.com/news/articles/2025-04-07/cursor-an-ai-coding-assistant-draws-a-million-users-without-even-trying
Word-of-mouth growth has helped turn a 60-person startup into one of the early hits of the generative AI era.
Tomi Engdahl says:
Mayank Parmar / BleepingComputer:
Sources and an APK teardown: OpenAI is testing watermarks for images generated via ChatGPT’s free account, however whether OpenAI rolls them out is unclear — OpenAI is reportedly testing a new “watermark” for the Image Generation model, which is a part of the ChatGPT 4o model.
OpenAI tests watermarking for ChatGPT-4o Image Generation model
https://www.bleepingcomputer.com/news/artificial-intelligence/openai-tests-watermarking-for-chatgpt-4o-image-generation-model/
OpenAI is reportedly testing a new “watermark” for the Image Generation model, which is a part of the ChatGPT 4o model.
This is an interesting move, and it’s likely because more and more users are generating Studio Ghibli artwork using the ImageGen model.
In fact, ChatGPT is in the news largely due to the Image Generation model, which is the most advanced multi-model shipped to date.
Not only can it accurately generate images with texts, but it also allows you to create realistic visuals, such as art produced by Studio Ghibli, a famous and big name in the Japanese studio world.
ChatGPT’s ImageGen model was previously limited to paid users (ChatGPT Plus customers), but it has now rolled out to everyone, including those with a free subscription.
As spotted by AI researcher Tibor Blaho, it looks like OpenAI is working on a new “ImageGen” watermark for free users.
My sources also told me that OpenAI recently started testing watermarks for images generated using ChatGPT’s free account.
If you subscribe to ChatGPT Plus, you’ll be able to save images without the watermark.
However, it’s unclear if OpenAI will move ahead with its plans to watermark images. Plans at OpenAI are always subject to change.
Tomi Engdahl says:
Matteo Wong / The Atlantic:
A look at the ARC-AGI exam designed by French computer scientist François Chollet to show the gulf between AI models’ memorized answers and “fluid intelligence” — Deep down, Sam Altman and François Chollet share the same dream. They want to build AI models that achieve …
The Man Out to Prove How Dumb AI Still Is
François Chollet has constructed the ultimate test for the bots.
https://www.theatlantic.com/technology/archive/2025/04/arc-agi-chollet-test/682295/?gift=2iIN4YrefPjuvZ5d2Kh3089M3DxlABplHmODO9XssmE&utm_source=copy-link&utm_medium=social&utm_campaign=share
When I spoke with him earlier this year, Chollet told me that AI companies have long been “intellectually lazy” in suggesting that their machines are on the path to a kind of supreme knowledge. At this point, those claims are based largely on the programs’ ability to pass specific tests (such as the LSAT, Advanced Placement Biology, and even an introductory sommelier exam). Chatbots may be impressive. But in Chollet’s reckoning, they’re not genuinely intelligent.
Chollet, like Altman and other tech barons, envisions AI models that can solve any problem imaginable: disease, climate change, poverty, interstellar travel. A bot needn’t be remotely “intelligent” to do your job. But for the technology to fulfill even a fraction of the industry’s aspirations—to become a researcher “akin to Einstein,” as Chollet put it to me—AI models must move beyond imitating basic tasks, or even assembling complex research reports, and display some ingenuity.
Chollet isn’t just a critic, nor is he an uncompromising one. He has substantial experience with AI development and created a now-prominent test to gauge whether machines can do this type of thinking.
Tomi Engdahl says:
Jamie Friedlander Serrano / Washington Post:
AI is aiding radiologists with tools like tumor-detection algorithms; 75%+ of AI software cleared by the US FDA for medical use is designed to support radiology — About two-thirds of radiology departments in the United States use AI in some capacity, according to a recent unpublished survey.
https://www.washingtonpost.com/health/2025/04/05/ai-machine-learning-radiology-software/?pwapi_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZWFzb24iOiJnaWZ0IiwibmJmIjoxNzQzODI1NjAwLCJpc3MiOiJzdWJzY3JpcHRpb25zIiwiZXhwIjoxNzQ1MjA3OTk5LCJpYXQiOjE3NDM4MjU2MDAsImp0aSI6IjQyZGU3MWVmLTkwNDMtNDg1YS1iNDI5LWI0YmE2NzNlZTBiNSIsInVybCI6Imh0dHBzOi8vd3d3Lndhc2hpbmd0b25wb3N0LmNvbS9oZWFsdGgvMjAyNS8wNC8wNS9haS1tYWNoaW5lLWxlYXJuaW5nLXJhZGlvbG9neS1zb2Z0d2FyZS8ifQ.-ONorPwtfOdZNwrCPaRDVR-5KRJXiuM8QOzvS-9XLHY
Tomi Engdahl says:
Generatiiviseen tekoälyyn investoidaan 644 miljardia dollaria tänä vuonna
https://etn.fi/index.php/13-news/17374-generatiiviseen-tekoaelyyn-investoidaan-644-miljardia-dollaria-taenae-vuonna
Gartnerin tuoreen ennusteen mukaan maailmanlaajuiset generatiivisen tekoälyn (GenAI) investoinnit nousevat 644 miljardiin Yhdysvaltain dollariin vuonna 2025. Tämä merkitsee 76,4 prosentin kasvua vuoteen 2024 verrattuna.
Vuonna 2024 käynnistetyt kunnianhimoiset sisäiset GenAI-projektit joutuvat suurennuslasin alle vuonna 2025, kun yritykset siirtyvät valmiisiin kaupallisiin ratkaisuihin, jotka tarjoavat ennakoitavampia hyötyjä ja käyttöönottoa. Gartnerin mukaan vaikka itse tekoälymallit kehittyvät, organisaatiot vähentävät omien mallien kehitystä keskittyen sen sijaan olemassa olevien ohjelmistojen GenAI-ominaisuuksiin.
644 miljardin dollarin panostuksista yli puolet eli 398 miljardia dollaria menee laitteisiin, 181 miljardia dollaria palvelimiin, 37 miljardia ohjelmistoihin ja 28 miljardia dollarissa palveluihin. Tänä vuonna GenAI-kulutusta vauhdittaa erityisesti tekoälyominaisuuksien sisällyttäminen laitteistoihin, kuten palvelimiin, älypuhelimiin ja tietokoneisiin. Näiden osuus koko GenAI-kuluista nousee jopa 80 prosenttiin.
Tomi Engdahl says:
Ben Blanchard / Reuters:
Foxconn reports Q1 revenue up 24.2% YoY to $49.5B, driven by AI demand, but says the impact of evolving global political conditions will need “close monitoring”
Foxconn reports record Q1 revenue, says it must closely watch global politics
https://www.reuters.com/technology/foxconn-reports-record-first-quarter-revenue-2025-04-05/
TAIPEI, April 5 (Reuters) – Taiwan’s Foxconn, the world’s largest contract electronics maker, posted its highest first-quarter revenue ever on strong demand for artificial intelligence products but said it would need to closely watch global politics.
Revenue for Apple’s (AAPL.O)
, opens new tab biggest iPhone assembler jumped 24.2% year-on-year to T$1.64 trillion ($49.5 billion), Foxconn (2317.TW), opens new tab said in a statement on Saturday, just missing the T$1.68 trillion LSEG SmartEstimate, which gives greater weight to forecasts from analysts who are more consistently accurate.
Tomi Engdahl says:
Emilia David / VentureBeat:
Anthropic’s Alignment Science team: “legibility” or “faithfulness” of reasoning models’ Chain-of-Thought can’t be trusted and models may actively hide reasoning
Don’t believe reasoning models’ Chains of Thought, says Anthropic
https://venturebeat.com/ai/dont-believe-reasoning-models-chains-of-thought-says-anthropic/
We now live in the era of reasoning AI models where the large language model (LLM) gives users a rundown of its thought processes while answering queries. This gives an illusion of transparency because you, as the user, can follow how the model makes its decisions.
However, Anthropic, creator of a reasoning model in Claude 3.7 Sonnet, dared to ask, what if we can’t trust Chain-of-Thought (CoT) models?
“We can’t be certain of either the ‘legibility’ of the Chain-of-Thought (why, after all, should we expect that words in the English language are able to convey every single nuance of why a specific decision was made in a neural network?) or its ‘faithfulness’—the accuracy of its description,” the company said in a blog post. “There’s no specific reason why the reported Chain-of-Thought must accurately reflect the true reasoning process; there might even be circumstances where a model actively hides aspects of its thought process from the user.”
Tomi Engdahl says:
Amazon is starting to test a new AI shopping agent, a feature it calls “Buy for Me,” with a subset users, the company announced in a blog post Thursday.
If Amazon doesn’t sell something that users are searching for, the Buy for Me feature will display products to users that other websites are selling. Then, users can select and request to purchase one of these products without ever leaving the Amazon Shopping app.
Read more from Maxwell Zeff here: https://tcrn.ch/4j5oQuH
#TechCrunch #technews #artificialintelligence #Amazon #ecommerce #shopping
Tomi Engdahl says:
Eliza Strickland / IEEE Spectrum:
A look at the state of AI in 2025 across training and inference costs, carbon footprint, investment activity, bills proposed in the US, and more — Stanford’s AI Index tracks performance, investment, public opinion, and more — If you read the news about AI, you may feel bombarded with conflicting messages: AI is booming.
12 Graphs That Explain the State of AI in 2025
Stanford’s AI Index tracks performance, investment, public opinion, and more
https://spectrum.ieee.org/ai-index-2025
Tomi Engdahl says:
Andrej Karpathy / @karpathy:
How LLMs, unlike previous transformative tech like the internet, disproportionately benefit regular people, with slower impact in corporations and governments
https://x.com/karpathy/status/1909308143156240538
ransformative technologies usually follow a top-down diffusion path: originating in government or military contexts, passing through corporations, and eventually reaching individuals – think electricity, cryptography, computers, flight, the internet, or GPS. This progression feels intuitive, new and powerful technologies are usually scarce, capital-intensive, and their use requires specialized technical expertise in the early stages.
So it strikes me as quite unique and remarkable that LLMs display a dramatic reversal of this pattern – they generate disproportionate benefit for regular people, while their impact is a lot more muted and lagging in corporations and governments. ChatGPT is the fastest growing consumer application in history, with 400 million weekly active users who use it for writing, coding, translation, tutoring, summarization, deep research, brainstorming, etc. This isn’t a minor upgrade to what existed before, it is a major multiplier to an individual’s power level across a broad range of capabilities. And the barrier to use is incredibly low – the models are cheap (free, even), fast, available to anyone on demand behind a url (or even local machine), and they speak anyone’s native language, including tone, slang or emoji. This is insane. As far as I can tell, the average person has never experienced a technological unlock this dramatic, this fast.
Tomi Engdahl says:
Bloomberg:
Publishers say site traffic has plummeted since Google rolled out AI Overviews; sources say Google acknowledged the drop in an October 2024 publisher meeting — In March 2024, website owner Morgan McBride was posing for photos in her half-renovated kitchen for a Google ad celebrating the ways …
https://www.bloomberg.com/news/articles/2025-04-07/google-ai-search-shift-leaves-website-makers-feeling-betrayed?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc0NDAzMzY5NywiZXhwIjoxNzQ0NjM4NDk3LCJhcnRpY2xlSWQiOiJTVFVFQkpEV1gyUFMwMCIsImJjb25uZWN0SWQiOiJEQUZGOTQ2MjMzOTM0NjI5QURDODEzNDRGQ0QwODBBOSJ9.g2Pj57IAuRue1QQiAlaOTEhlYtO4CumxIwre3di6lHA
Tomi Engdahl says:
Ryan Whitwam / Ars Technica:
Google rolls out Search’s AI Mode feature to millions more Labs users in the US with multimodal capabilities, letting users ask complex questions about pictures — Google started cramming AI features into search in 2024, but last month marked an escalation.
Google’s AI Mode search can now answer questions about images
Google’s AI Mode can now understand images as part of your searches.
https://arstechnica.com/gadgets/2025/04/googles-ai-mode-search-can-now-answer-questions-about-images/
Tomi Engdahl says:
David Shepardson / Reuters:
Memo: the White House orders federal agencies to name chief AI officers and expand their use of AI, rescinding Biden-era orders intended to place AI safeguards
Expanding AI use, White House orders agencies to develop strategies and name leaders
https://www.reuters.com/technology/artificial-intelligence/white-house-orders-agencies-name-chief-ai-officers-it-expands-use-2025-04-07/
WASHINGTON, April 7 (Reuters) – The White House said on Monday it is ordering federal agencies to name chief AI officers and develop strategies for an expansion of the government’s use of artificial intelligence, rescinding Biden-era orders intended to place safeguards on the technology.
The Office of Management and Budget directed government agencies to implement minimum-risk management practices for high-impact uses of AI and develop a generative AI policy in the coming months.
Tomi Engdahl says:
Rebecca Szkutak / TechCrunch:
IBM unveils the z17 mainframe with a Telum II chip, built for 250+ AI use cases and able to process 450B daily inferences, 50% more than z16, available June 8
IBM releases a new mainframe built for the age of AI
https://techcrunch.com/2025/04/07/ibm-releases-a-new-mainframe-built-for-the-age-of-ai/
Tomi Engdahl says:
Vauhini Vara / Bloomberg:
A look at Inkitt, a publishing platform that uses AI to create sequels and spinoffs of authors’ original work with minimal human input, raising quality concerns
https://www.bloomberg.com/features/2025-ai-romance-factory/
Tomi Engdahl says:
Retaining top AI talent is tough amid cutthroat competition between Google, OpenAI, and other heavyweights.
Google’s AI division, DeepMind, has resorted to using “aggressive” noncompete agreements for some AI staff in the U.K. that bar them from working for competitors for up to a year, Business Insider reports.
Some are paid during this time, in what amounts to a lengthy stretch of PTO. But the practice can make researchers feel left out of the quick pace of AI progress, reported BI.
Read more from Charles Rollet here: https://tcrn.ch/4jtJvbP
#TechCrunch #technews #artificialintelligence #bigtech #Google
Tomi Engdahl says:
Microsoft backs away from $1bn data center plans in Licking County, Ohio
Another data center project bites the dust
https://www.datacenterdynamics.com/en/news/microsoft-backs-away-from-1bn-data-center-plans-in-licking-county-ohio/
Microsoft has backed out of plans to build data centers in Licking County, Ohio.
The company told the Columbus Dispatch that it was no longer moving forward with its previous plans to invest $1 billion in three data center campuses in New Albany, Heath, and Hebron.
This is the latest in a series of data center project cancellations from Microsoft, with reports recently emerging that the company had pulled back on as much as 2GW of data center projects across the US and Europe. Shortly after, it was further reported that cancellations also occurred in APAC and the UK.
Speculation over Microsoft’s ever-growing list of canceled data center projects has been rife. Brokerage TD Cowen first brought attention to the matter, with analysts from the company speculating that the “lease cancellations and deferrals of capacity points to data center oversupply relative to its current demand forecast.”
Tomi Engdahl says:
Reuters:
Sources: US officials say DOGE is using AI to surveil at least one federal agency’s communications for anti-Trump talk; a source says DOGE is also using Signal — Trump administration officials have told some U.S. government employees that Elon Musk’s DOGE team of technologists …
Exclusive: Musk’s DOGE using AI to snoop on U.S. federal workers, sources say
https://www.reuters.com/technology/artificial-intelligence/musks-doge-using-ai-snoop-us-federal-workers-sources-say-2025-04-08/
Tomi Engdahl says:
Umar Shakir / The Verge:
Google rolls out Gemini Live camera and screenshare features to the Pixel 9 series and Samsung Galaxy S25 devices, available in 45 languages for users 18 and up
Gemini Live’s screensharing feature is rolling out to Pixel 9 and Galaxy S25 devices
It’s also coming soon to paid Gemini Advanced users on other devices.
https://www.theverge.com/news/644757/google-gemini-live-screen-share-video-camera-pixel-9
Tomi Engdahl says:
Maxwell Zeff / TechCrunch:
Amazon launches Nova Sonic, which can generate natural-sounding speech and that it claims is “the most cost-efficient” AI voice model on the market, via Bedrock
Amazon unveils a new AI voice model, Nova Sonic
https://techcrunch.com/2025/04/08/amazon-unveils-a-new-ai-voice-model-nova-sonic/
Tomi Engdahl says:
Cameron Emanuel-Burns / FinTech Futures:
Munich-based Hawk, which offers AI-powered anti-money laundering services, raised a $56M Series C led by One Peak and says it has 80+ customers worldwide — The Series C round builds on Hawk’s $17 million Series B in 2023, which was extended twice the following year.
https://www.fintechfutures.com/venture-capital-funding/german-aml-fintech-hawk-raises-56m-series-c-led-by-one-peak
Tomi Engdahl says:
Ryan Naraine / SecurityWeek:
Aurascape, which offers enterprise security tools to mitigate risks from third-party AI apps, emerges from stealth with $50M from Menlo, Mayfield, and others — Silicon Valley startup secures big investment from Menlo Ventures and Mayfield Fund to solve the “shadow AI” security problem.
Aurascape Banks Hefty $50 Million to Mitigate ‘Shadow AI’ Risks
Silicon Valley startup secures big investment from Menlo Ventures and Mayfield Fund to solve the “shadow AI” security problem.
https://www.securityweek.com/aurascape-banks-hefty-50-million-to-mitigate-shadow-ai-risks/
Tomi Engdahl says:
Mauro Orru / Wall Street Journal:
The EU unveils its AI Continent Action Plan, seeking to compete with the US and China, and says it wants to build “AI gigafactories” with ~100K AI chips
EU Bets on Gigafactories to Catch Up With U.S., China in AI Race
The bloc has been lagging behind since OpenAI’s 2022 release of ChatGPT
https://www.wsj.com/tech/ai/eu-bets-on-gigafactories-to-catch-up-with-u-s-china-in-ai-race-283683b8?st=MFLtzZ&reflink=desktopwebshare_permalink
The European Union said it would focus on building artificial-intelligence data and computing infrastructure and making it easier for companies to comply with regulation in a bid to catch up with the U.S. and China in the AI race.
The European Commission, the EU’s executive arm, said it wanted to develop a network of so-called AI gigafactories to help companies train the most complex models. Those facilities will be equipped with roughly 100,000 of the latest AI chips, around four times more than the number installed in AI factories being set up right now.
The announcement, part of the EU’s AI Continent Action Plan, underscores efforts from the block to position itself as a key player in the AI race against the U.S. and China. The EU has been lagging behind since OpenAI’s 2022 release of ChatGPT ushered in a spending bonanza.
Earlier this year, Washington announced Stargate, an AI joint venture that aims to build data centers in the U.S. for OpenAI. OpenAI, SoftBank Group, Oracle and MGX are the initial equity funders in Stargate, while Arm, Microsoft and Nvidia are technology partners. The companies are committing $100 billion initially, but plan to invest up to $500 billion over the next four years.
Beijing, for its part, has also made strides in the technology. Chinese company DeepSeek developed AI models that it said nearly matched American rivals despite using inferior chips, raising questions about the need to spend huge sums on advanced gear provided by Nvidia and other tech giants.
“The global race for AI is far from over,” said Henna Virkkunen, EU executive vice-president for tech sovereignty, security and democracy. “This action plan outlines key areas where efforts need to intensify to make Europe a leading AI continent.”
The EU in February pledged to mobilize 200 billion euros ($219.17 billion) in AI investments. More than 20 investors earmarked 150 billion euros for AI-related opportunities in Europe over the next five years, while the bloc is setting up a new 20 billion-euro fund for up to five AI gigafactories.
The EU plans to work with the private sector to roll out the infrastructure given the elevated costs, a senior EU official said, with member states and companies sharing the burden in a public-private partnership. The bloc posted a call for expressions of interest to attract investors.
EU officials also said the commission would set up an AI Act Service Desk, a point of contact to make it easier for companies to comply with the AI regulation in the bloc.
EU lawmakers last year approved the world’s most comprehensive legislation yet on artificial intelligence.
Tomi Engdahl says:
UK creating ‘murder prediction’ tool to identify people most likely to kill
Exclusive: Algorithms allegedly being used to study data of thousands of people, in project critics say is ‘chilling and dystopian’
https://www.theguardian.com/uk-news/2025/apr/08/uk-creating-prediction-tool-to-identify-people-most-likely-to-kill
The UK government is developing a “murder prediction” programme which it hopes can use personal data of those known to the authorities to identify the people most likely to become killers.
Researchers are alleged to be using algorithms to analyse the information of thousands of people, including victims of crime, as they try to identify those at greatest risk of committing serious violent offences.
The scheme was originally called the “homicide prediction project”, but its name has been changed to “sharing data to improve risk assessment”. The Ministry of Justice hopes the project will help boost public safety but campaigners have called it “chilling and dystopian”.
Tomi Engdahl says:
Financial Times:
How Unitree and other Chinese humanoid startups are competing with those in the US, fueled by cheaper components, rapid innovation, and state-led financing
https://www.ft.com/content/4ebac441-d5a8-4c6a-950c-a160274d389b
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
Google Cloud Next: Google updates Gemini Code Assist to add agentic capabilities, including creating apps from product specifications in Google Docs, in preview
Gemini Code Assist, Google’s AI coding assistant, gets ‘agentic’ abilities
https://techcrunch.com/2025/04/09/gemini-code-assist-googles-ai-coding-assistant-gets-agentic-upgrades/
Gemini Code Assist, Google’s AI coding assistant, is gaining new “agentic” capabilities in preview.
During its Cloud Next conference on Wednesday, Google said Code Assist can now deploy new AI “agents” that can take multiple steps to accomplish complex programming tasks. These agents can create applications from product specifications in Google Docs, for example, or perform code transformations from one language to another. Code Assist is now available in Android Studio in addition to other coding environments.
Code Assist’s upgrades are likely in response to competitive pressure from rivals such as GitHub Copilot, Cursor, and Cognition Labs, the creator of the viral programming tool Devin. The AI coding assistant market grows fiercer by the month, and there’s a lot of money in it. Considering the tech’s productivity-boosting potential, that’s not totally surprising.
Code Assist’s agents, which can be managed from a new Gemini Code Assist Kanban board, can generate work plans and report step-by-step progress on job requests. Beyond generating software and migrating code, the agents can implement new app features, execute code reviews, and generate unit tests and documentation, the company claims.
However, it’s unclear just how well Code Assist can do all this. Even the best code-generating AI today tends to introduce security vulnerabilities and bugs, studies have found, owing to weaknesses in areas like the ability to understand programming logic. One recent evaluation of Devin found that it completed just three out of 20 tasks successfully.
Tomi Engdahl says:
Joshua Rothman / New Yorker:
How AI could hollow out the media business, which employed under 50K US journalists in 2023, while enhancing some news reporting, including by processing data
Will A.I. Save the News?
Artificial intelligence could hollow out the media business—but it also has the power to enhance journalism.
https://www.newyorker.com/culture/open-questions/will-ai-save-the-news
Today, I’m surrounded by the news at seemingly every moment; checking on current events has become almost a default activity, like snacking or daydreaming. I have to take active steps to push the news away. This doesn’t feel right—shouldn’t I want to be informed?—but it’s necessary if I want to be present in my life.
It also doesn’t feel right to complain that the news is bad. There are many crises in the world; many people are suffering in different ways. But studies of news reporting over time have found that it’s been growing steadily more negative for decades. It’s clearly not the case that everything has been getting worse, incrementally, for the past eighty years. Something is happening not in reality but in the news industry. And since our view of the world beyond our direct experience is so dramatically shaped by the news, its growing negativity is consequential. It renders us angry, desperate, panicked, and fractious.
The more closely you look at the profession of journalism, the stranger it seems. According to the Bureau of Labor Statistics, fewer than fifty thousand people were employed as journalists in 2023, which is less than the number of people who deliver for DoorDash in New York City
Journalists serve the public good by uncovering disturbing truths, and this work contributes to the improvement of society, but the more these disturbing truths are uncovered, the worse things seem. Readers bridle at the negativity of news stories, yet they click on scary or upsetting headlines in greater numbers—and so news organizations, even the ones that strive for accuracy and objectivity, have an incentive to alarm their own audiences.
Gone are the days when cable was newfangled, and you could feel informed if you read the front page and watched a half-hour newscast while waiting for “The Tonight Show” to start. But this is also a bright spot when it comes to the news: it can change.
Certainly, change is coming. Artificial intelligence is already disrupting the ways we create, disseminate, and experience the news, on both the demand and the supply sides. A.I. summarizes news so that you can read less of it; it can also be used to produce news content. Today, for instance, Google decides when it will show you an “A.I. overview” that pulls information from news stories, along with links to the source material. On the science-and-tech podcast “Discovery Daily,” a stand-alone news product published by the A.I.-search firm Perplexity, A.I. voices read a computer-generated script.
It’s not so easy to parse the implications of these developments, in part because a lot of news already summarizes. Many broadcasts and columns essentially catch you up on known facts and weave in analysis. Will A.I. news summaries be better? Ideally, columns like these are more surprising, more particular, and more interesting than what an A.I. can provide. Then there are interviews, scoops, and other kinds of highly specific reporting; a reporter might labor for months to unearth new information, only for A.I. to hoover it up and fold it into some bland summary. But if you’re interested in details, you probably won’t be happy with an overview, anyway.
And yet there’s a broader sense in which “the news,” as a whole, is vulnerable to summary. There’s inherently a lot of redundancy in reporting, because many outlets cover the same momentous happenings, and seek to do so from multiple angles. (Consider how many broadly similar stories about the Trump Administration’s tariffs have been published in different publications recently.) There’s value in that redundancy, as journalists compete with one another in their search for facts, and news junkies value the subtle differences among competing accounts of the same events. But vast quantities of parallel coverage also enable a reader to ask a service like Perplexity, “What’s happening in the news today?,”
The continued spread of summarization could make human writers—with their own personalities, experiences, contexts, and insights—more valuable, both as a contrast to and a part of the A.I. ecosystem. (Ask ChatGPT what a widely published writer might think about any given subject—even subjects they haven’t written about—and their writing can seem useful in a new way.) It could also be that, within newsrooms, A.I. will open up new possibilities. “I really believe that the biggest opportunity when it comes to A.I. for journalism, at least in the short term, is investigations and research,” Zach Seward, the editorial director of A.I. initiatives at the Times, told me. “A.I. is actually opening up a whole new category of reporting that we weren’t even able to contemplate taking on previously—I’m talking about investigations that involve tens of thousands of pages of unorganized documents, or hundreds of hours of video, or every federal court filing.” Because reporters would be in the driver’s seat, Seward went on, they could use it to further the “genuine reporting of new information” without compromising “the fundamental obligation of a news organization—to be a reliable source of truth.” (“Our principle is we never want to shift the burden of verification to the reader,” Seward said at a forum on A.I. and journalism this past fall.)
But there’s no getting around the money problem. Even if readers value human journalists and the results they produce, will they still value the news organizations—the behind-the-scenes editors, producers, artists, and businesspeople—on which A.I. depends? It’s quite possible that, as A.I. rises, individual voices will survive while organizations die. In that case, the news could be hollowed out. We could be left with A.I.-summarized wire reports, Substacks, and not much else.
News travels through social media, which is also being affected by A.I. It’s easy to see how text-centric platforms, such as X and Facebook, will be transformed by A.I.-generated posts; as generative video improves, the same will be true for video-based platforms, such as YouTube, TikTok, and Twitch. It may become genuinely difficult to tell the difference between real people and fake ones—which sounds bad. But here, too, the implications are uncertain. A.I.-based content could find an enthusiastic social-media audience.
To understand why, you have to stop and think about what A.I. makes possible. This is a technology that separates form from content. A large language model can soak up information in one form, grasp its meaning to a great extent, and then pour the same information into a different mold. In the past, only a human being could take ideas from an article, a book, or a lecture, and explain them to another human being, often through the analog process we call “conversation.” But this can now be automated. It’s as though information has been liquefied so that it can more easily flow. (Errors can creep in during this process, unfortunately.)
It’s tempting to say that the A.I. result is only re-presenting information that already exists. Still, the power of reformulation—of being able to tell an A.I., “Do it again, a little differently”—shouldn’t be underestimated. A single article or video could be re-created and shared in many formats and flavors, allowing readers (or their algorithms) to decide which ones suit them best. Today, if you want to fix something around the house, you can be pretty sure that someone, somewhere, has made a YouTube video about how to do it; the same principle might soon apply to the news.
At the same time, however, the fluidity of A.I. could work against social platforms. Personalization might allow you to skip the process of searching, discovering, and sharing altogether; in the near future, if you want to listen to a podcast covering the news stories you care about most, an A.I. may be able to generate one.
Right now, the variable quality and uncertain accuracy of A.I. news protects sophisticated news organizations. “As the rest of the internet fills up with A.I.-generated slop, and it’s harder to tell the provenance of what you’re reading, then the value of being able to say, ‘This was reported and written by the reporters whose faces you see on the byline’ only goes up and up,” Seward said. As time passes and A.I. improves, however, different kinds of readers may find ways of embracing it. Those who enjoy social media may discover A.I. news content through it. (Some people are already doing this, on TikTok and elsewhere.) Those who don’t frequent social platforms may go directly to chatbots or other A.I. sources, or may settle on news products that are explicitly marketed as combining human journalists with A.I. Others may continue to prefer the old approach, in which discrete units of carefully vetted, thoroughly fact-checked journalism are produced by people and published individually.
Is it possible to imagine a future in which the script is flipped? As I wrote last week, many people who work in A.I. believe that the technology is improving far faster than is widely understood. If they’re right—if we cross the milestone of “artificial general intelligence,” or A.G.I., by 2030 or sooner—then we may come to associate A.I. “bylines” with balance, comprehensiveness, and a usefully nonhuman perspective. That might not mean the end of human reporters—but it would mean the advent of artificial ones.
For a while, I’ve been integrating A.I. into my news-reading process. I peruse the paper but keep my phone nearby, asking one of the A.I.s that I use (Claude, ChatGPT, Grok, Perplexity) questions as I go. “Tell me more about that prison in El Salvador,” I might say aloud. “What do firsthand accounts of life inside reveal?” Sometimes I’ve followed stories mainly through Perplexity, which is like a combination of ChatGPT and Google: you can search for information and then ask questions about it. “What’s going on with the Supreme Court?” I might ask. Then, beneath a bulleted list of developments, the A.I. will suggest follow-up questions. (“What are the implications of the Supreme Court’s decision on teacher-training grants?”) It’s possible to move seamlessly from a news update into a wide-ranging Q. & A. about whatever’s at stake. Articles are replaced by a conversation.
Tomi Engdahl says:
Are We Taking A.I. Seriously Enough?
There’s no longer any scenario in which A.I. fades into irrelevance. We urgently need voices from outside the industry to help shape its future.
https://www.newyorker.com/culture/open-questions/are-we-taking-ai-seriously-enough
Tomi Engdahl says:
The National Weather Service is no longer providing language translations of its products after their contract with an artificial intelligence company has lapsed.
Experts warn the change could put non-English speakers at risk of missing potentially life-saving warnings about extreme weather.
ABC News station KTRK meteorologist Elyse Smith explains the risk communities may face. https://abcnews.visitlink.me/jLUMtc
Tomi Engdahl says:
This is one of the very worst uses of AI. https://trib.al/LZ5WdpJ
AI Startup Deletes Entire Website After Researcher Finds Something Disgusting There
Gross.
https://futurism.com/ai-startup-deletes-website-researcher-deepfake?fbclid=IwY2xjawJkJIRleHRuA2FlbQIxMQABHrC1QWxVyAdDmGsqyPazJ-BmwcLVxMGxqoXh1b0biLeJsl9iq4p30mbCnBPf_aem_3ucOaMgb4uT8iIwzBIV2Rw
A South Korean website called GenNomis went offline this week after a researcher made a particularly alarming discovery: tens of thousands of AI-generated pornographic images created by its software, Nudify. The photos were found in an unsecured database, and included explicit images bearing the likeness of celebrities, politicians, random women, and children.
Jeremiah Fowler, the cybersecurity researcher who found the cache, says he immediately sent a responsible disclosure notice to GenNomis and its parent company, AI-Nomis, who then restricted the database from public access. Later, just hours after Wired approached GenNomis for comment, both it and its parent company seemed to disappear from the web entirely.
Often known as “deepfakes” because of their lifelike nature, fake porn images and videos based on real people have exploded throughout the internet as consumers get their hands on ever-more convincing generative AI.
The consequences of deepfake porn can be devastating, especially for women, who make up the vast majority of victims.
Beside the obvious lack of consent when a person is digitally undressed, this stuff has been used to tarnish politicians, get people fired, extort victims for money, and generate child sexual abuse materials. Beyond sexual violence, non-pornographic deepfakes are responsible for a huge increase in financial and cyber crimes and no small amount of blatant misinformation.
Tomi Engdahl says:
https://hackaday.com/2025/04/09/ask-hackaday-vibe-coding/
Tomi Engdahl says:
Google Targets SOC Overload With Automated AI Alert and Malware Analysis Tools
Google plans to unleash automated AI agents into overtaxed SOCs to reduce the manual workload for cybersecurity investigators.
https://www.securityweek.com/google-targets-soc-overload-with-automated-ai-alert-and-malware-analysis-tools/
Tomi Engdahl says:
Artificial Intelligence
AI Now Outsmarts Humans in Spear Phishing, Analysis Shows
Agentic AI has improved spear phishing effectiveness by 55% since 2023, research shows.
https://www.securityweek.com/ai-now-outsmarts-humans-in-spear-phishing-analysis-shows/
Tomi Engdahl says:
Artificial Intelligence
Google Pushing ‘Sec-Gemini’ AI Model for Threat-Intel Workflows
Experimental Sec-Gemini v1 touts a combination of Google’s Gemini LLM capabilities with real-time security data and tooling from Mandiant.
https://www.securityweek.com/google-pushing-sec-gemini-ai-model-for-threat-intel-workflows/
Tomi Engdahl says:
From https://hackaday.com/2025/04/09/ask-hackaday-vibe-coding/
Personally, I’ve found that going through the thought process well enough to articulate it for an AI to grok is like 90% of the mental effort of designing a program anyway. You need to think about the problem, think about how you want to solve the problem, think about the interface and abstraction you want to set up, and do the pseudo-coding (i.e. explaining yourself to genAI). What AI helps with is just the ‘busywork’ anyway. Thinking about the problem and your architecture and interface approach to it is like 90% of the battle. Writing out the code is the remaining 10%.
I suppose I do like AI for dealing with languages that I hate dealing with myself. Like bash scripts, javascript, and python.
From my recent experince with AI getting 19 out of 20 attempts wrong. Worse than the worst summer intern.
The answers look okay at first glance, enough that supposed experts will say “that’s amazing”. Queue painful argument about why it isnt right before you can fix the screw ups.
LLMs are the wrong approach for a lot of things.
So far, I’ve found GitHub Copilot to be good for 10-20% performance boost, depending on the complexity of the code base. I tried it at work after my employer bought everyone a subscrription, and liked it enough to buy my own subscription for personal projects.
I mostly just use the autocomplete aspect… You have to be pretty adept at evaluating its suggestions, because the majority of them are trash. But if you can evaluate at a glance and just keep typing, the bad suggestions have no impact, and the good ones save a bit of time. About 1 in 50 are something I prefer to what I was planning to write, and that feels like magic.
The chat UI has been equally hit-or-miss. Sometimes it doesn’t understand what I want, sometimes it seems to understand but suggests something that doesn’t actually work… but with that approach, the time spent formulating the question and reading the answer is not trivial enough to ignore.
I’ve heard that some of the other tools are great for bootstrapping new projects, but haven’t had a new project to try them with yet.
Anyway, the “vibe coding” idea sounds like wishful thinking today, but I’m intrigued by the trajectory things are on. The reasoning / chain-of-thought stuff has given me vastly better results than the ChatGPT hallucination factory that kicked off the LLM craze, so it’ll be interesting to see what happens when that gets merged with an IDE and a code base.
I recently tried Copilot’s typealong autocomplete feature on some safety-critical embedded firmware. Naturally, I audited the heck out of the thing. It was absolutely correct 90% of the time, and the rest was mistakes that could easily have killed people. Biggest one was when it offered up a fairly long function to control the operation of a solenoid valve. It assumed that logical high was “valve closed” and low was “open”. This was very much not the case.
Like all generative AI stuff, Copilot’s output is only useful in situations where quality isn’t a concern.
I’ve found that it is very useful for generating individual routines, particularly when I carefully have it make the building blocks that I’m going to have it assemble later. It is also great for introducing me to capabilities of APIs that I’ve never really dug into. That said, I need to carefully look over its code because it does some crazily inefficient things if you let it… bit operations by converting the bytes into strings and applying boost regex? seriously? I’ve never had success letting it build above the small module level. The time to quality check and correct its code is just longer than it would take for me to write it myself.
Tomi Engdahl says:
Now ChatGPT Can Make Breakfast For Me
https://hackaday.com/2023/02/02/now-chatgpt-can-make-breakfast-for-me/
Tomi Engdahl says:
Google siirtää sovelluskehityksen selaimeen – tekoäly isossa roolissa
https://etn.fi/index.php/13-news/17388-google-siirtaeae-sovelluskehityksen-selaimeen-tekoaely-isossa-roolissa
Google esitteli Cloud Next -tapahtumassaan uuden sukupolven sovelluskehitysalustan, Firebase Studion, joka siirtää koko kehitysprosessin selaimeen – suunnittelusta julkaisuun. Uutuus rakentuu vahvasti tekoälyavusteisuuden ympärille ja hyödyntää Googlen omaa Gemini-mallia läpi koko kehityksen.
Firebase Studio on pilvipohjainen IDE, joka yhdistää käyttöliittymäsuunnittelun, koodauksen, testauksen ja julkaisun yhdeksi saumattomaksi kokonaisuudeksi. Kehittäjät voivat aloittaa projekteja valmiista pohjista tai luonnostella sovelluksia vapaamuotoisesti kuvin, piirroksin tai tekstillä. Generatiivinen tekoäly auttaa luomaan mm. käyttöliittymiä, API-rakenteita ja AI-virtoja – ilman perinteistä koodausta.
Tekoäly ei toimi vain apurina koodauksessa, vaan mukana on kokonainen joukko älykkäitä agentteja. Esimerkiksi migraatioagentti auttaa päivittämään koodia Java-versioiden välillä, testausagentti simuloi käyttäjän toimintaa ja etsii bugeja käyttöliittymän läpi kulkien. Dokumentaatioagentti keskustelee koodista kuin wiki, mikä helpottaa uusien kehittäjien perehdytystä.
Sovellukset voidaan julkaista suoraan Firebase App Hosting -alustalle, joka hoitaa koko CI/CD-prosessin automaattisesti GitHubista käsin. Lisäksi kehittäjät voivat yhdistää Firebase Data Connectin avulla relaatiotietokantoja, GraphQL-rajapintoja ja koneoppimismalleja osaksi sovellustaan. Uutta on myös tuki Pythonille ja Golangille, sekä mahdollisuus käyttää avoimen lähdekoodin malleja kuten LLaMA ja Mistral.
Firebase Studiolla rakennetaan erityisesti generatiivisen tekoälyn sovelluksia – mm. puheentunnistusta, kuvagenerointia ja RAG-hakua hyödyntäviä ratkaisuja.
Introducing Firebase Studio and agentic developer tools to build with Gemini
https://cloud.google.com/blog/products/application-development/firebase-studio-lets-you-build-full-stack-ai-apps-with-gemini
Millions of developers use Firebase to engage their users, powering over 70 billion instances of apps every day, everywhere — from mobile devices and web browsers, to embedded platforms and agentic experiences. But full-stack development is evolving quickly, and the rise of generative AI has transformed not only how apps are built, but also what types of apps are possible. This drives greater complexity, and puts developers under immense pressure to keep up with many new technologies that they need to manually stitch together. Meanwhile, businesses of all sizes are seeking ways to make AI app development cycles more efficient, deliver quality software, and get to market faster.
Today at Google Cloud Next, we’re introducing a suite of new capabilities that transforms Firebase into an end-to-end platform to accelerate the complete application lifecycle. The new Firebase Studio, available to everyone in preview, is a cloud-based, agentic development environment powered by Gemini that includes everything developers need to create and publish production-quality AI apps quickly, all in one place. Several more updates across the Firebase platform are helping developers unleash their modern, data-driven apps on Google Cloud. These announcements will empower developers to forge new paths for building AI applications across multiple platforms.
Tomi Engdahl says:
Alexey Shabanov / TestingCatalog:
Google announces Agent2Agent, an open interoperability protocol to enable seamless collaboration between AI agents across diverse frameworks and vendors — Google has announced the launch of Agent2Agent (A2A), an open interoperability protocol designed to enable seamless collaboration between …
Google launches Agent2Agent protocol to connect AI agents across platforms
https://www.testingcatalog.com/google-launches-agent2agent-protocol-to-connect-ai-agents-across-platforms/
Google has announced the launch of Agent2Agent (A2A), an open interoperability protocol designed to enable seamless collaboration between AI agents across diverse frameworks and vendors. Aimed at enterprises, the protocol seeks to address the challenges of siloed systems by standardizing communication between agents, thereby automating complex workflows and enhancing productivity. Supported by over 50 technology partners, including Salesforce, SAP, ServiceNow, and MongoDB, A2A provides a universal framework for AI agents to securely exchange information, coordinate actions, and integrate across enterprise platforms.
Abner Li / 9to5Google:
Google launches Workspace Flows, an AI tool for workflow automation, in alpha, and new Docs, Sheets, Meet, Chat, and Vids features, like audio overviews in Docs
https://9to5google.com/2025/04/09/google-workspace-flows/
Tomi Engdahl says:
Google introduces Firebase Studio, an end-to-end platform that builds custom apps in-browser, in minutes
https://venturebeat.com/ai/google-introduces-firebase-studio-an-end-to-end-platform-that-builds-custom-apps-in-browser-in-minutes/
Tomi Engdahl says:
Alex Kantrowitz / Big Technology:
Q&A with Google Cloud CEO Thomas Kurian on Google Cloud Next, DeepMind, AWS, offering 200+ AI models, training and inference costs, efficient training, and more
Google Cloud CEO Thomas Kurian on AI Competition, Agents, and Tariffs
“We’ve been working on contingency plans for quite a while,” Kurian says on Tariffs.
https://www.bigtechnology.com/p/google-cloud-ceo-thomas-kurian-on
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
Google unveils Ironwood, its seventh-generation TPU and the first optimized for inference, coming later in 2025 in a 256-chip cluster or a 9,216-chip cluster
Ironwood is Google’s newest AI accelerator chip
https://techcrunch.com/2025/04/09/google-unveils-ironwood-a-new-ai-accelerator-chip/
Tomi Engdahl says:
Sarah Perez / TechCrunch:
YouTube expands its pilot to some top creators of tech that detects the AI-generated likenesses of famous people and declares support for the US’ NO FAKES ACT — YouTube on Wednesday announced an expansion of its pilot program designed to identify and manage AI-generated content that features the …
YouTube expands its ‘likeness’ detection technology, which detects AI fakes, to a handful of top creators
https://techcrunch.com/2025/04/09/youtube-expands-its-likeness-detection-technology-which-detects-ai-fakes-to-a-handful-of-top-creators/