AI trends 2025

AI is developing all the time. Here are picks from several articles on what is expected to happen in and around AI in 2025. The texts are excerpts from the articles, edited and in some cases translated for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
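To make the "plan, act, reflect" pattern concrete, here is a minimal sketch of an agent loop in Python. The call_model() function and the order-lookup tool are placeholders, not any vendor's API; a real agent would call an LLM and real business systems at those points.

```python
# Minimal sketch of an agent loop: the model decides on an action, a tool runs,
# the observation is fed back, and the loop repeats until the model answers.
# call_model() is a stand-in for any hosted or local LLM; the tool is illustrative.

def call_model(transcript: str) -> str:
    """Placeholder for a real LLM call."""
    if "OBSERVATION" not in transcript:
        return "ACTION search_orders customer_42"              # first step: gather data
    return "ANSWER The order is delayed at the carrier."       # second step: respond

def search_orders(query: str) -> str:
    return f"{query}: 1 open order, status=delayed_at_carrier"  # fake tool result

def run_agent(goal: str, max_steps: int = 5) -> str:
    transcript = f"GOAL {goal}"
    for _ in range(max_steps):
        step = call_model(transcript)
        if step.startswith("ACTION search_orders"):
            observation = search_orders(step.split()[-1])
            transcript += f"\nOBSERVATION {observation}"        # reflect, then loop again
        elif step.startswith("ANSWER"):
            return step.removeprefix("ANSWER ").strip()
    return "Stopped: step limit reached"

print(run_agent("Find out why customer_42's order is late"))
```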
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
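As a rough illustration of adapting a model to a company's own content, here is a minimal retrieval-grounding sketch. The keyword-overlap retrieval and the call_model() stub are illustrative only; a production setup would use an embedding index and a real LLM.

```python
# Minimal sketch of grounding a model in a company's own documents instead of
# relying only on public training data.

DOCS = [
    "Contract 881: maintenance window is Sundays 02:00-04:00 CET.",
    "Pricing note: enterprise tier includes 24/7 support and a 99.9% SLA.",
    "HR policy: remote work allowed up to 3 days per week.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    words = set(question.lower().split())
    scored = sorted(DOCS, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return scored[:k]

def call_model(prompt: str) -> str:
    return "(model answer based on the supplied context)"       # placeholder LLM call

question = "What SLA does the enterprise tier include?"
context = "\n".join(retrieve(question))
print(call_model(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```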
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality or robotics. PINNs are set to gain more importance in the job market because they will enable autonomous robots to navigate and execute tasks in the real world.
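A toy sketch of the physics-informed idea, assuming PyTorch is available: the network is trained to satisfy a differential equation (a harmonic oscillator, u'' + u = 0) rather than to fit scraped data, which is the core of the PINN approach.

```python
import torch

# Toy physics-informed network: learn u(t) satisfying u'' + u = 0 with u(0)=1, u'(0)=0
# (exact solution: cos t). The "training data" here is the physics itself.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = (torch.rand(64, 1) * 6.0).requires_grad_(True)          # collocation points on [0, 6]
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, t, torch.ones_like(du), create_graph=True)[0]
    residual = ((d2u + u) ** 2).mean()                          # how badly the ODE is violated

    t0 = torch.zeros(1, 1, requires_grad=True)
    u0 = net(t0)
    du0 = torch.autograd.grad(u0, t0, torch.ones_like(u0), create_graph=True)[0]
    initial = (u0 - 1.0).pow(2).mean() + du0.pow(2).mean()      # enforce u(0)=1, u'(0)=0

    loss = residual + initial
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(net(torch.tensor([[3.14159]]))))                    # should move toward cos(pi) = -1 as training converges
```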
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as when companies started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people. AI won’t be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill: too expensive and too slow for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
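A minimal sketch of what "small model" usage can look like in practice, assuming the Hugging Face transformers library is installed; the model name is illustrative and any compact instruction-tuned model could be swapped in (the first run downloads the weights).

```python
from transformers import pipeline

# Sketch: a small, locally runnable model standing in for a custom SLM.
# The model name below is illustrative, not a recommendation.
slm = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

prompt = ("Classify the sentiment of this support ticket as positive, neutral or negative:\n"
          "'The new invoice portal is slow and keeps logging me out.'")
print(slm(prompt, max_new_tokens=20)[0]["generated_text"])
```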
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could only solve 13% of the problems on the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the earlier waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take for example that task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
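A minimal sketch of the routing idea: pick the cheapest model that is adequate for the task and keep a stronger model as fallback. The model names, prices, and the ask() stub are illustrative, not any provider's real catalogue or API.

```python
# Sketch of multi-model routing: route each request to the cheapest model that is
# good enough for the task, falling back to a stronger one for unknown tasks.

MODELS = {
    "small-fast":   {"cost_per_1k_tokens": 0.0002, "good_for": {"classify", "extract"}},
    "mid-general":  {"cost_per_1k_tokens": 0.002,  "good_for": {"summarize", "draft"}},
    "large-reason": {"cost_per_1k_tokens": 0.02,   "good_for": {"plan", "analyze"}},
}

def pick_model(task: str) -> str:
    candidates = [name for name, spec in MODELS.items() if task in spec["good_for"]]
    if not candidates:
        return "large-reason"                        # unknown task: be safe, not cheap
    return min(candidates, key=lambda n: MODELS[n]["cost_per_1k_tokens"])

def ask(model: str, prompt: str) -> str:
    return f"[{model}] response to: {prompt[:40]}..."  # placeholder for a real API call

print(ask(pick_model("classify"), "Is this invoice a duplicate?"))
print(ask(pick_model("plan"), "Draft a migration plan for our ERP data."))
```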
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
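The escalation pattern described here can be sketched in a few lines; the stub classifier below stands in for an LLM and is not Salesforce's actual Agentforce API.

```python
# Sketch of the escalation pattern: simple order changes are handled automatically,
# anything complex is routed to a human rep with context.

SIMPLE_CHANGES = {"change quantity", "update shipping address", "change delivery date"}

def classify_request(message: str) -> str:
    msg = message.lower()
    for change in SIMPLE_CHANGES:
        if change in msg:
            return change
    return "complex"

def handle(message: str) -> str:
    intent = classify_request(message)
    if intent == "complex":
        return "Escalated to a human representative with full conversation context."
    return f"Done automatically: {intent} applied to the order."

print(handle("Please change quantity on order 7731 from 2 to 3"))
print(handle("We need to reconfigure the whole order and move the venue"))
```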

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.

AI requires ever faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

This is how the chip market will fare next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market in 2025.
1. AI growth accelerates
2. Asia-Pacific IC Design Heats Up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating.
5. Mature process market recovers
6. 2nm Technology Breakthrough
7. Restructuring the Packaging and Testing Market
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025 it is very likely that there will be more advances in MCUs running lightweight AI models.
Adding AI acceleration features is a big step in the development of microcontrollers. Their inclusion began in 2024, and in 2025 these features and the tools around them are very likely to develop further.

AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new AI models based on GPAI standards must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance – EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
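One way to start on the inventory problem is a simple asset register with a rough risk rating. The tiers and rules below are illustrative, not the EU AI Act's legal definitions.

```python
from dataclasses import dataclass

# Sketch of an AI asset inventory with a rough risk rating, in the spirit of the
# governance challenge described above. Tiers and rules are illustrative only.

@dataclass
class AIAsset:
    name: str
    vendor: str
    embedded: bool                  # AI bundled inside a SaaS product vs. a standalone tool
    handles_sensitive_data: bool
    use_case: str                   # e.g. "marketing copy", "document summarization", "hr screening"

def risk_tier(asset: AIAsset) -> str:
    if asset.use_case in {"hr screening", "credit scoring", "biometric id"}:
        return "high"               # typical examples of use cases regulators treat as high risk
    if asset.handles_sensitive_data:
        return "elevated"
    return "limited"

inventory = [
    AIAsset("ChatGPT", "OpenAI", embedded=False, handles_sensitive_data=True,
            use_case="document summarization"),
    AIAsset("CRM assistant", "SaaS vendor", embedded=True, handles_sensitive_data=False,
            use_case="marketing copy"),
]

for asset in inventory:
    print(f"{asset.name}: {risk_tier(asset)} risk, embedded={asset.embedded}")
```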
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

840 Comments

  1. Tomi Engdahl says:

    OpenAI wants its own chips as early as this year
    https://etn.fi/index.php/13-news/17141-openai-haluaa-omat-sirut-jo-taenae-vuonna

    OpenAI intends to reduce its dependence on Nvidia chips by developing its own AI processor, the first version of which is due to be manufactured as early as this year, Reuters reports.

    The ChatGPT developer is currently finalizing the design of its first in-house chip and plans to send it for production to Taiwan Semiconductor Manufacturing Co. (TSMC). Getting the design onto silicon is estimated to take about six months. Mass production is planned to start in 2026, but OpenAI could speed up the process by investing more, a Reuters source says.

    Inside OpenAI, the chip project is seen as strategic leverage in negotiations with other chip suppliers. The company plans to develop ever more powerful processors with each new version. If the first chip succeeds as expected, OpenAI could already be testing an alternative to Nvidia's chips late this year.

    According to Reuters, OpenAI's chip team is led by former Google expert Richard Ho, and it has doubled in recent months to 40 engineers. The team is also working with Broadcom. Developing a chip is an expensive process: a single version is estimated to cost $500 million, and total costs can double once the necessary software and infrastructure are built.

    Developers of generative AI models, such as OpenAI, Google, and Meta, need ever larger amounts of computing power, which is increasing demand for chips. Microsoft plans to invest $80 billion in AI infrastructure in 2025, and Meta has announced investments of $60 billion. OpenAI's participation in the $500 billion Stargate infrastructure program underlines the industry's growing needs.

  2. Tomi Engdahl says:

    Kate Knibbs / Wired:
    Thomson Reuters wins the first major US AI copyright ruling against fair use, in a case filed in May 2020 versus legal research AI startup Ross Intelligence — The Thomson Reuters decision has big implications for the battle between generative AI companies and rights holders.

    Thomson Reuters Wins First Major AI Copyright Case in the US
    The Thomson Reuters decision has big implications for the battle between generative AI companies and rights holders.
    https://www.wired.com/story/thomson-reuters-ai-copyright-lawsuit/

    Thomson Reuters has won the first major AI copyright case in the United States.

    In 2020, the media and technology conglomerate filed an unprecedented AI copyright lawsuit against the legal AI startup Ross Intelligence. In the complaint, Thomson Reuters claimed the AI firm reproduced materials from its legal research firm Westlaw. Today, a judge ruled in Thomson Reuters’ favor, finding that the company’s copyright was indeed infringed by Ross Intelligence’s actions.

    “None of Ross’s possible defenses holds water. I reject them all,” wrote US District Court of Delaware judge Stephanos Bibas, in a summary judgement.

  3. Tomi Engdahl says:

    Melissa Heikkilä / Financial Times:
    At the AI Action Summit, Eric Schmidt urged the West to focus on open-source AI or risk losing to China while calling for AI safety collaboration with China

    Eric Schmidt warns west to focus on open-source AI in competition with China
    https://www.ft.com/content/84cf0b2e-651d-4cb4-b426-ebc7afd634fa

  4. Tomi Engdahl says:

    The Information:
    Source: Apple and Alibaba submitted their co-developed AI features for approval by China’s cyberspace regulator; Apple considered but rejected DeepSeek’s models — Apple has recently started working with Chinese internet and e-commerce giant Alibaba Group to roll out artificial intelligence features …

    Apple Partners With Alibaba to Develop AI Features for iPhone Users in China
    https://www.theinformation.com/articles/apple-partners-with-alibaba-to-develop-ai-features-for-iphone-users-in-china

  5. Tomi Engdahl says:

    Wall Street Journal:
    AI Action Summit in Paris: European Commission President Ursula von der Leyen says the EU plans to mobilize €200B to invest in AI to catch the US and China

    EU Sets Out $200 Billion AI Spending Plan in Bid to Catch Up With U.S., China
    The announcement underscores efforts from the EU to position itself as a key player in the AI race
    https://www.wsj.com/tech/ai/eu-pledges-200-billion-in-ai-spending-in-bid-to-catch-up-with-u-s-china-7bf82ab5?st=oZ8jEh&reflink=desktopwebshare_permalink

  6. Tomi Engdahl says:

    Reuters:
    Paris AI summit: US VP JD Vance warns the EU that excessive AI regulation could strangle the tech and rejects content moderation as “authoritarian censorship”

    Vance tells Europeans that heavy regulation could kill AI
    https://www.reuters.com/technology/artificial-intelligence/europe-looks-embrace-ai-paris-summits-2nd-day-while-global-consensus-unclear-2025-02-11/

    Vance says Europeans risk killing AI with their red tape
    US, UK do not sign summit communique
    Vance says Trump will ensure US remains lead AI player

  7. Tomi Engdahl says:

    David DiMolfetta / Nextgov/FCW:
    Andesite, which combines human expertise and AI in its security operations center platform, raised an additional $23M seed, taking its total funding to $38.25M — Cybersecurity firm Andesite secured an added $23 million in seed funding, bringing its total funds to $38.5 million …

    AI-cybersecurity firm Andesite secures added $23M in funding
    https://www.nextgov.com/cybersecurity/2025/02/ai-cybersecurity-firm-andesite-secures-added-23m-funding/402891/

  8. Tomi Engdahl says:

    John Kang / Forbes:
    Sources: Meta is in talks to acquire South Korean AI chip startup FuriosaAI, which has raised ~$115M since its 2017 founding; the deal could close this month

    https://www.forbes.com/sites/johnkang/2025/02/11/meta-in-talks-to-buy-korean-ai-chip-startup-founded-by-samsung-engineer/

  9. Tomi Engdahl says:

    Can AI Early Warning Systems Reboot the Threat Intel Industry?

    News analysis: The big AI platforms are emerging as frontline early warning systems, detecting nation-state hackers at the outset of their campaigns. Can this help save the threat intel industry?

    https://www.securityweek.com/can-ai-early-warning-systems-reboot-the-threat-intel-industry/

  10. Tomi Engdahl says:

    Stephen Nellis / Reuters:
    US chip startup Groq says it has secured a $1.5B commitment from Saudi Arabia to expand the delivery of its AI chips to the country over the course of this year — U.S. semiconductor startup Groq said on Monday it has secured a $1.5 billion commitment from Saudi Arabia to expand the delivery of its advanced AI chips to the country.

    AI chip startup Groq secures $1.5 billion commitment from Saudi Arabia
    https://www.reuters.com/technology/artificial-intelligence/ai-chip-startup-groq-secures-15-billion-commitment-saudi-arabia-2025-02-10/

  12. Tomi Engdahl says:

    Chinese AI models DeepSeek and Alibaba's recently updated Qwen are already being used by criminals, security company Check Point reports. Many quickly moved from the popular ChatGPT to the new alternatives.

    New technology immediately ended up in criminal hands: 4 grim scams
    https://www.is.fi/digitoday/tietoturva/art-2000011018319.html

  13. Tomi Engdahl says:

    Check Point gives examples of how the new AI models are being misused:

    Qwen has been used to create information-stealing malware, aimed at taking sensitive data from unsuspecting users.

    Instructions are being shared online on how to strip the models' guardrails by feeding them long, complex written prompts. According to Check Point, this has become criminals' favorite way to bypass AI restrictions.

    Several discussions deal with using DeepSeek to bypass banks' anti-fraud systems, and many techniques for doing so are in circulation. The risk is significant financial damage.

    The three AI models, ChatGPT, Qwen, and DeepSeek, are being used together to send spam more efficiently than before. That can mean a larger volume of scam messages that are also more convincing.

    New technology immediately ended up in criminal hands: 4 grim scams
    https://www.is.fi/digitoday/tietoturva/art-2000011018319.html

  14. Tomi Engdahl says:

    “Cloud’s disappointing results suggest that AI-powered momentum might be beginning to wane.” https://trib.al/rmfBMK8

    Google’s Finances Are in Chaos as the Company Flails at Unpopular AI
    https://futurism.com/the-byte/googles-finances-chaos-unpopular-ai?fbclid=IwY2xjawIZWqRleHRuA2FlbQIxMQABHfP0viIEnMJ3PKjTmLZH-2yqviNnJ5DZRPkkWnJCDOnCng6PIBl3BcfTdg_aem_fwRkG4FX2x-e2R8ZPH8Frw

    Google’s parent company Alphabet failed to hit sales targets, falling 0.1 percent short of Wall Street’s revenue expectations — a fraction of a point that has seen the company’s stock slide almost eight percent today, in its worst performance since October 2023.

    It’s also a sign of the times: as the New York Times reports, the whiff was due to slower-than-expected growth of its cloud-computing division, which delivers its AI tools to other businesses.

    While a reported $96.5 billion in revenue versus the expected $96.6 billion sounds well within a margin of error, it shows that even for a company of Google’s scale and stature, actually making money off AI — even on the infrastructure side — is still a risky business.

    That’s despite Alphabet committing a whopping $75 billion on capital expenditures as it builds out AI infrastructure, $22 billion more than just last year, following the lead of its competitors in the AI space, including Meta. Everyone’s pouring money in, but it’s as unclear as ever when the industry will start generating meaningful revenue — and for how many players.

    Catching Up
    Investors are also still reeling from the emergence of Chinese AI startup DeepSeek, which shook Silicon Valley to its core last week. The company’s ultra-lean and highly efficient AI models — that can be trained for a tiny fraction of the price of Western competitors but still keep up — caught the AI industry by surprise, wiping out over $1 trillion in market value in a single day.

    Is Alphabet’s latest earnings result the canary in the coal mine? Should the AI industry brace for tougher days ahead as investors become increasingly skeptical of what the tech has to offer? Or are investors concerned over OpenAI’s ChatGPT overtaking Google’s search engine?

    Illustrating the drama, this week Google appears to have retroactively edited the YouTube video of a Super Bowl ad for its core AI model called Gemini, to remove an extremely obvious error the AI made about the popularity of gouda cheese.

    “Although it’s still well insulated, Google’s advantages in search hinge on its ubiquity and entrenched consumer behavior,” Emarketer senior analyst Evelyn Mitchell-Wolf told The Guardian.

    This year “could be the year those advantages meaningfully erode as antitrust enforcement and open-source AI models change the game,” she added. “And Cloud’s disappointing results suggest that AI-powered momentum might be beginning to wane just as Google’s closed model strategy is called into question by DeepSeek.”

  15. Tomi Engdahl says:

    There’s Apparently a Huge Financial Problem With Trump’s Massive AI Project
    “The intent is not to become a data center provider for the world, it’s for OpenAI.”
    https://futurism.com/huge-financial-problem-trump-ai-stargate?fbclid=IwY2xjawIZtNdleHRuA2FlbQIxMQABHY0Zp_uJ6Zs1aAu3nYjPZJDpiyF1On82HzkzYSyRofgdkGwliOfb3rLC4A_aem_3KDvD0pLKIpFYO6ckPZrVQ

    President Donald Trump’s behemoth $500 billion AI infrastructure project, dubbed Stargate, may be doomed from the start.

    Trump made the sweeping announcement earlier this week, revealing that the ChatGPT maker, investment company SoftBank, tech giant Oracle, and Abu Dhabi state-run AI fund MGX would initially spend a total of $100 billion on the project, with the eventual goal of reaching half a trillion dollars in just a few years.

    But in reality, according to the Financial Times’ sources, Stargate may be facing insurmountable financial challenges as it attempts to get off the ground.

    “They haven’t figured out the structure, they haven’t figured out the financing, they don’t have the money committed,” an unnamed source told the newspaper.

    Did Trump put the cart before the horse by making a splashy announcement before the pieces were in place? Critics of the project think it’s entirely possible.

  16. Tomi Engdahl says:

    AI agents will revolutionize working life: "We have now moved into the post-ChatGPT era"
    Tivi, 11 Feb 2025
    Generative AI has made it easier to put together PowerPoint presentations and write code. The real revolution in work, however, will come from AI agents.
    https://www.tivi.fi/uutiset/tekoalyagentit-mullistavat-tyoelaman-nyt-on-siirrytty-chatgptn-jalkeiseen-aikaan/a0bb45dc-a678-4e45-9985-f21a06963860

  17. Tomi Engdahl says:

    How to refactor code with GitHub Copilot
    Discover how to use GitHub Copilot to refactor your code and see samples of it in action.
    https://github.blog/ai-and-ml/github-copilot/how-to-refactor-code-with-github-copilot/

    Some standard ways of refactoring include:

    Simplifying complex conditionals (because no one should need a PhD to read your if statements)
    Extracting duplicated logic (so you’re not trying to maintain code in ten different places)
    Improving variable and function names (because doThing() is a crime against humanity)
    Converting monolithic functions into smaller, modular pieces (to prevent the dreaded “function that spans multiple screens” scenario)
    Refactoring isn’t just about tidiness—it’s about making your codebase more resilient, scalable, and enjoyable to work with. Let’s find out how GitHub Copilot can help you do it faster and with fewer headaches.
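    As a concrete example of two of those refactorings (simplifying a conditional and extracting duplicated logic), here is an illustrative before/after in Python; the code is made up for this comment, not taken from the GitHub post.

    ```python
    # Before: duplicated, hard-to-read conditional logic of the kind Copilot is
    # often asked to refactor.
    def shipping_cost_before(order):
        if order["country"] == "US":
            if order["total"] > 100:
                return 0
            else:
                return 5
        else:
            if order["total"] > 100:
                return 0
            else:
                return 15

    # After: the duplicated branch is extracted and the conditional simplified.
    FREE_SHIPPING_THRESHOLD = 100
    BASE_RATE = {"US": 5}
    DEFAULT_RATE = 15

    def shipping_cost_after(order):
        if order["total"] > FREE_SHIPPING_THRESHOLD:
            return 0
        return BASE_RATE.get(order["country"], DEFAULT_RATE)

    # Behavior is unchanged for every input, which is the point of a refactor.
    assert shipping_cost_before({"country": "DE", "total": 40}) == shipping_cost_after({"country": "DE", "total": 40}) == 15
    ```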

  18. Tomi Engdahl says:

    You can ask Copilot Chat to explain how some code works, either by asking in plain language or using the /explain slash command. To limit the scope of what Copilot looks at, select the code in your IDE before asking your query, or specify specific files for it to consider by using #file. While you’re at it, you can even ask it to add code comments to help you (or anyone else reading the code) in the future.

    Here are some sample prompts:

    Explain what this code does.
    What is this code doing?
    Add comments to this code to make it more understandable.
    You should use Copilot Chat to analyze and explain your codebase until you fully understand the code you’re looking to refactor.

    https://github.blog/ai-and-ml/github-copilot/how-to-refactor-code-with-github-copilot/

  19. Tomi Engdahl says:

    Here are some sample prompts:

    How would you improve this?
    Improve the variable names in this function.
    #file:pageInit.js, #file:socketConnector.js Offer suggestions to simplify this code.
    Copilot will then offer suggestions to improve the code in the way that you specified. This is great for getting started, but Copilot can do much more if you give it some guidance.

    https://github.blog/ai-and-ml/github-copilot/how-to-refactor-code-with-github-copilot/

  20. Tomi Engdahl says:

    ZitaoTech’s Latest Handheld Pops a Raspberry Pi 5 16GB and Optional AI Accelerator in Your Pocket
    If storage is more your thing, how about a 512GB SSD — all in a BlackBerry-inspired pocketable handheld form factor?
    https://www.hackster.io/news/zitaotech-s-latest-handheld-pops-a-raspberry-pi-5-16gb-and-optional-ai-accelerator-in-your-pocket-060e0aece6eb

  21. Tomi Engdahl says:

    New hack uses prompt injection to corrupt Gemini’s long-term memory
    There’s yet another way to inject malicious prompts into chatbots.
    https://arstechnica.com/security/2025/02/new-hack-uses-prompt-injection-to-corrupt-geminis-long-term-memory/

  22. Tomi Engdahl says:

    Helsinki tried AI: two weeks of work done in 10 minutes
    Anna Helakallio, 12 Feb 2025
    In the trial, an AI image recognition tool identified 87 percent of the objects correctly.
    https://www.tivi.fi/uutiset/helsinki-kokeili-tekoalya-kahden-viikon-tyo-hoitui-10-minuutissa/13bdc9a2-21c1-4be4-b5ee-757995cd225e

  23. Tomi Engdahl says:

    This AI Tool Lets You Build Apps Faster Than Googling | Amjad Masad (Replit)
    Why personal software is the future of coding and a live demo of Replit’s AI agent. Plus, how to find job security in tech in the AI era
    https://creatoreconomy.so/p/this-ai-tool-lets-you-build-apps-faster-than-googling

  24. Tomi Engdahl says:

    The cybersecurity crossroads: AI and quantum computing could save or endanger us
    AI enhances cybersecurity by detecting threats in real-time, yet hackers exploit it for advanced attacks; meanwhile, quantum computing threatens encryption, risking sensitive data; the future of cybersecurity depends on how quickly defenses evolve to counter these threats
    https://www.ynetnews.com/business/article/bjpsrj9y1g

  25. Tomi Engdahl says:

    Copilot Language Server SDK is now available
    https://github.blog/changelog/2025-02-10-copilot-language-server-sdk-is-now-available/

    We are excited to announce that the Copilot Language Server SDK is now publicly available. This enables any editor or IDE to integrate with GitHub Copilot via the language server protocol standard. Today, Copilot is available in popular editors such as VS Code, Visual Studio, JetBrains IDEs, Vim/Neovim, and most recently Xcode. A key ingredient of bringing Copilot to new editors has been the Copilot Language Server, which is used by all of those editors. At GitHub, we value developer choice and aim to empower developers to use Copilot with their favorite editor.

    The Copilot Language Server SDK is available now: @github/copilot-language-server

    This SDK can be used to integrate GitHub Copilot into any editor or IDE. See the documentation on the package to get started.

  26. Tomi Engdahl says:

    A bright future was predicted for these gadgets: here is the truth now
    11 Feb 2025
    AI-enabled PCs were predicted to revive the PC market, but their breakthrough has been delayed.
    https://www.mikrobitti.fi/uutiset/naille-kapineille-povattiin-loistavaa-tulevaisuutta-tassa-totuus-nyt/c3bc5039-c54b-430f-a068-1a7f221127dd

  27. Tomi Engdahl says:

    Study Finds That People Who Entrust Tasks to AI Are Losing Critical Thinking Skills
    “Atrophied and unprepared.”
    https://futurism.com/study-ai-critical-thinking

  28. Tomi Engdahl says:

    DeepMind working on distributed training of large AI models
    Alternate process could be a game changer if they can make it practicable
    https://www.theregister.com/2025/02/11/deepmind_distributed_model_training_research/

  29. Tomi Engdahl says:

    OpenAI has effectively canceled the release of o3, which was slated to be the company’s next major AI model release, in favor of what CEO Sam Altman is calling a “simplified” product offering.

    In a post on X on Wednesday, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in its AI-powered chatbot platform ChatGPT and API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model.

    Read more from Kyle Wiggers here: https://tcrn.ch/3Ex8XxE

  30. Tomi Engdahl says:

    6G will be a network that AI optimizes continuously
    https://etn.fi/index.php/13-news/17148-6g-tulee-olemaan-verkko-jota-tekoaely-optimoi-kaiken-aikaa

    The University of Oulu's 6G research programme has published a new white paper on upcoming 6G technology. The document's editor-in-chief is Lauri Lovén, head of the Future Computing Group research team at the University of Oulu. According to him, with 5G, cloud services and edge computing became integrated into mobile networks; with 6G, AI will be integrated into wireless networks.

    The document goes into considerable detail on using large language models in 6G networks. In 6G networks, AI is no longer just a support tool; it forms the core of the network's operating logic. Large language models (LLMs) and AI-managed systems will transform telecommunications infrastructure. At the same time, this development also brings challenges, such as real-time, low-latency decision-making and computational efficiency.

    Although AI brings enormous opportunities to 6G networks, the challenges cannot be ignored. One example is the millisecond-level latency requirement: a 6G network must be able to handle critical connections in near real time, for example in autonomous traffic or remote surgery. Current LLMs are not designed for decision-making at that speed.

    Large AI models also require significant computing power, which can become a problem especially in edge computing and on mobile devices. How well the models can meet security requirements is also largely an open question. Networks must be secure and reliable, but LLM decisions are hard to explain, which can be a challenge for regulation and trust.

    For this reason, 6G will not rely on LLMs alone but will use a hybrid model in which traditional machine learning algorithms and LLMs work side by side. Traditional models provide the accuracy and speed needed for critical network functions, while LLMs enable more advanced decision-making and autonomous operation.

    It is particularly interesting that LLMs, which in practice did not exist 2.5 years ago, are suddenly at the center of 6G development. In the early 2020s they were regarded mainly as experimental generative AI tools, but they have now become key systems guiding network management and optimization.

    The document can be found here:
    https://oulurepo.oulu.fi/bitstream/handle/10024/53842/nbnfioulu-202501211268.pdf?sequence=1&isAllowed=y

  31. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Sam Altman says GPT-5 will include o3, which is no longer set to ship as a standalone model, GPT-4.5 will be OpenAI’s last non-chain-of-thought model, and more — OpenAI has effectively canceled the release of o3, which was slated to be the company’s next major AI model …

    OpenAI postpones its o3 AI model in favor of a ‘unified’ next-gen release
    https://techcrunch.com/2025/02/12/openai-cancels-its-o3-ai-model-in-favor-of-a-unified-next-gen-release/

    OpenAI has effectively canceled the release of o3, which was slated to be the company’s next major AI model, in favor of what CEO Sam Altman is calling a “simplified” product offering.

    In a post on X on Wednesday, Altman said that in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in its AI-powered chatbot platform ChatGPT and API. As a result of that roadmap decision, OpenAI no longer plans to launch o3 as a stand-alone model.

    The company originally said in December that it aimed to release o3 sometime early this year. Just a few weeks ago, Kevin Weil, OpenAI’s chief product officer, said in an interview that o3 was on track for a “February-March” launch.

    “We want to do a better job of sharing our intended roadmap, and a much better job simplifying our product offerings,” Altman wrote in his post. “We want AI to ‘just work’ for you; we realize how complicated our model and product offerings have gotten. We hate the model picker [in ChatGPT] as much as you do and want to return to magic unified intelligence.”

  32. Tomi Engdahl says:

    Igor Bonifacic / Engadget:
    OpenAI plans to give free ChatGPT users unlimited GPT-5 access at “the standard intelligence setting”; Plus and Pro users will get to run GPT-5 at higher levels — It’s part of the company’s plan to simplify its product offerings. — OpenAI’s upcoming GPT-5 release will integrate …

    OpenAI will offer free ChatGPT users unlimited access to GPT-5
    It’s part of the company’s plan to simplify its product offerings.
    https://www.engadget.com/ai/openai-will-offer-free-chatgpt-users-unlimited-access-to-gpt-5-211935734.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cudGVjaG1lbWUuY29tLw&guce_referrer_sig=AQAAAKyCY30IgaseK9wYvmAY86T7JsZtKCaO6s6jtdsG_FLQHDMB_zExJUupiGftCDfs08NTZEEz5fMikV_BVI9SaTXLkQq4LQj6Qq3HH_fG7WuFTWLkwS26vH2ZtjTa5fxYAEWfELHprtZB_foqJUeHng-f77jZ6H04aE47hSuJ7z8e

    OpenAI’s upcoming GPT-5 release will integrate its o3 reasoning model and be available to free users, CEO Sam Altman revealed in a roadmap he shared on X. He said the company is also working to simplify how users interact with ChatGPT.

    “We want AI to ‘just work’ for you; we realize how complicated our model and product offerings have gotten,” Altman wrote. “We hate the model picker as much as you do and want to return to magic unified intelligence.”

    In its current iteration, forcing ChatGPT to use a specific model, such as o3-mini, involves either tapping the “Reason” button in the prompt bar or one of the options present in the model picker, which appears after the chatbot answers a question. If you pay for ChatGPT Plus or Pro, that dropdown menu can get pretty long, with multiple models and intelligence settings to choose from.

  33. Tomi Engdahl says:

    Kylie Robison / The Verge:
    OpenAI expands its Model Spec for how its AI models should behave, from 10 to 63 pages, emphasizing “customizability, transparency, and intellectual freedom” — ChatGPT is learning how to handle Stalin, ethical erotica, and trolley problems. — ChatGPT is learning how to handle Stalin …

    OpenAI is rethinking how AI models handle controversial topics
    ChatGPT is learning how to handle Stalin, ethical erotica, and trolley problems.
    https://www.theverge.com/openai/611375/openai-chatgpt-model-spec-controversial-topics

  34. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Court filing: Elon Musk will withdraw his $97.4B bid if OpenAI’s board of directors “preserve the charity’s mission” and halt its conversion to a for-profit

    Elon Musk will withdraw bid for OpenAI’s nonprofit if its board agrees to terms
    https://techcrunch.com/2025/02/12/elon-musk-will-withdraw-bid-for-openais-nonprofit-if-its-board-agrees-to-terms/

  35. Tomi Engdahl says:

    Vlad Savov / Bloomberg:
    Alibaba Chair Joe Tsai says Apple will use the Chinese tech giant’s AI technology on iPhones sold in China, speaking at the World Government Summit in Dubai — “Apple has been very selective, they talked to a number of companies in China, and in the end they choose to do business with us …

    Alibaba Wins Prized Role Powering AI on Apple’s iPhone in China

    Chairman Joe Tsai confirms Apple will use its AI technology
    Alibaba is staging a comeback after years of turbulence

    https://www.bloomberg.com/news/articles/2025-02-13/apple-s-iphone-will-use-alibaba-ai-in-china-joe-tsai-says

  36. Tomi Engdahl says:

    Michael Nuñez / VentureBeat:
    Snowflake and Anthropic partner to integrate Claude 3.5 Sonnet into Snowflake’s new Cortex Agents platform, embedding AI agents into corporate data environments

    Snowflake expands AI tools with Anthropic partnership — what it means for businesses
    https://venturebeat.com/ai/snowflake-expands-ai-tools-with-anthropic-partnership-what-it-means-for-businesses/

    Snowflake and Anthropic have unveiled a major partnership to embed AI agents directly into corporate data environments, empowering businesses to analyze vast amounts of information while maintaining strict security controls.

    The companies will integrate Anthropic’s Claude 3.5 Sonnet model into Snowflake’s new Cortex Agents platform, allowing organizations to deploy AI systems that can analyze both structured database information and unstructured content like documents within their existing security frameworks.

  37. Tomi Engdahl says:

    Reuters:
    Baidu says it will make its AI chatbot Ernie free starting April 1 to all users on both desktop and mobile platforms, citing improved tech and reduced costs

    Baidu to make AI chatbot Ernie Bot free of charge from April 1
    https://www.reuters.com/technology/artificial-intelligence/baidu-says-ai-model-ernie-free-april-2025-02-13/

  38. Tomi Engdahl says:

    Euractiv:
    After the AI Action Summit, the EU Commission says it will withdraw the AI Liability Directive, a strategic move to show goodwill to the Trump administration

    Commission withdraws AI liability directive after Vance attack on regulation
    https://www.euractiv.com/section/tech/news/commission-withdraws-ai-liability-directive-after-vance-attack-on-regulation/

    In the wake of criticism of regulation at the AI Action Summit in Paris, most notably by US vice-president JD Vance, the Commission added the AI liability directive to the list of legislative acts it plans to withdraw in its final 2025 work programme.

    The Commission published its final work programme late in the evening on 11 February with a final twist: the withdrawal of the AI liability directive.

    This move came in the wake of the AI Action Summit held in Paris on 10-11 February, during which US vice-president JD Vance was particularly vocal in his disapproval of the EU’s regulatory approach in tech.

    The summit, originally intended to promote human-centric AI, was eclipsed by substantial investment announcements by the EU and France totalling hundreds of billions of euros, essentially bids for relevance in the global AI race.

    In this context, withdrawing the AI liability directive can be understood as a strategic manoeuvre by the EU to present an image of openness to capital and innovation, to show that it prioritises competitiveness, and to signal goodwill to the new US administration.

    More pragmatically, the AI liability directive had been losing traction at EU level for the past year, following the adoption of the EU’s AI Act, which regulates AI models and systems based on their inherent risks to society.

    In this context, an additional AI liability law on top of the AI Act was increasingly seen as superfluous.

    The Commission justified withdrawing the directive by writing that there is “no foreseeable agreement” on the law, and it plans to assess whether another proposal should be tabled or another type of approach chosen.

    Reply
  39. Tomi Engdahl says:

    Digiuutiset
    The City of Helsinki has spent weeks on work that can be done in 10 minutes
    In a trial, an AI image recognition tool identified 87 percent of objects correctly. The City of Helsinki considers the results encouraging.
    https://www.iltalehti.fi/digiuutiset/a/89bdddba-45a8-4d2d-b0c7-87521e5cc238

    The City of Helsinki trialled the use of AI for maintaining data on public areas. The city carried out the trial together with Digia.

    The city maintains a register of public-area data, and keeping it up to date has proven slow and laborious; maintenance requires, for example, manually going through thousands of aerial photographs. The purpose of the trial was to find out whether the manual image recognition process required to maintain the register could be automated.

    The AI-based image recognition tool used in the trial identified 87 percent of objects correctly. Hilla Tilhi, the Digia technical consultant who carried out the trial, says in Digia’s press release that the tool exceeded both the city’s and the company’s expectations.

    “As expected, AI sped up the process: work that took people two weeks was completed by the AI in ten minutes. But it also exceeded expectations in accuracy,” Tilhi says in the release.

    Ritva Keko, project manager at the City of Helsinki, says in the release that the results of the trial were “truly encouraging”. The city intends to continue developing the method.

    “If all the time spent on improving the accuracy of the data can go into correcting the data instead of hunting for errors, we will be able to improve the quality of our data assets considerably,” Keko says in the release.
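
    The article does not describe Digia’s implementation, but the general pattern is straightforward: run an image recognition model over aerial photo tiles and score it against a manually verified sample. A minimal Python sketch of that evaluation step, with a hypothetical classify_tile model wrapper standing in for the actual tool, might look like this:

    # Illustrative only: classify_tile stands in for whatever recognition model
    # is used; the article does not describe the actual pipeline.
    from typing import Callable

    def evaluate_recognition(
        tiles: list[str],                      # paths to aerial image tiles
        ground_truth: dict[str, str],          # manually verified label per tile
        classify_tile: Callable[[str], str],   # hypothetical model wrapper
    ) -> float:
        """Share of tiles whose predicted label matches the manual label."""
        correct = sum(1 for t in tiles if classify_tile(t) == ground_truth[t])
        return correct / len(tiles)

    # evaluate_recognition(tiles, labels, model.predict) returning 0.87 would
    # correspond to the 87 percent accuracy reported in the trial.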

    Reply
  40. Tomi Engdahl says:

    A bold promise: hours of work finished in tens of minutes
    OpenAI says the tool is built for people doing “intensive knowledge work”.
    https://www.iltalehti.fi/digiuutiset/a/e0ff8b5c-4e71-487f-90a5-2866f5bf2b3d

    ChatGPT developer OpenAI has released Deep research, an AI agent tool intended for in-depth information gathering. The tool runs on OpenAI’s o3 model.

    Reuters, among others, has reported on the launch.

    Deep research is meant to carry out thorough, in-depth research. According to OpenAI, the tool can produce research-analyst-level reports by analysing a large number of web sources and compiling a summary of them.

    Deep research runs significantly more slowly than the company’s other tools: processing a single user request can take up to half an hour.

    OpenAI launches new AI tool to facilitate research tasks
    https://www.reuters.com/technology/openai-launches-new-ai-tool-facilitate-research-tasks-2025-02-03/

    Reply
  41. Tomi Engdahl says:

    https://microbit.org/get-started/features/ai/
    micro:bit CreateAI is a free, web-based tool that lets you program a micro:bit to recognise and respond to your movements, like clapping, waving, dancing or jumping. Collect your movement data from the micro:bit’s accelerometer, train, test and improve your own machine learning model, then use it in a Microsoft MakeCode program on your micro:bit.
    You can use any version of the micro:bit to collect data, train and test an ML model, but you need the faster processor on the micro:bit V2 to code with ML in MakeCode and run the ML model on your micro:bit.
    https://createai.microbit.org/
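
    As a taste of the underlying idea, here is a short MicroPython sketch for the micro:bit that samples the accelerometer and reacts to movement. It uses a fixed threshold purely for illustration; CreateAI instead trains a real machine learning model from recorded movement samples and runs it through MakeCode, so this is not the CreateAI workflow itself.

    # Illustration only: a fixed threshold stands in for the trained ML model
    # that CreateAI would generate from recorded movement data.
    from microbit import accelerometer, display, Image, sleep

    MOTION_THRESHOLD = 1500  # milli-g, arbitrary value chosen for illustration

    while True:
        x, y, z = accelerometer.get_values()        # raw accelerometer sample
        magnitude = (x * x + y * y + z * z) ** 0.5  # overall movement strength
        if magnitude > MOTION_THRESHOLD:
            display.show(Image.HAPPY)               # "movement recognised"
        else:
            display.show(Image.ASLEEP)              # idle
        sleep(100)                                  # roughly 10 samples per second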

    Reply
  42. Tomi Engdahl says:

    Emma Roth / The Verge:
    Google plans to begin testing an ML-based model in the US in 2025 that estimates whether a user is under 18 to help provide more “age-appropriate experiences” — Google says the technology will help it provide ‘age-appropriate experiences.’

    Google will use machine learning to estimate a user’s age
    Google says the technology will help it provide ‘age-appropriate experiences.’
    https://www.theverge.com/news/610512/google-age-estimation-machine-learning

    Reply
  43. Tomi Engdahl says:

    Allstate Is Demanding We Delete These Quotes by Its Exec About How It’s Using AI to Write Insurance Emails
    Baffling.
    https://futurism.com/allstate-demanding-delete-quotes-ai

    We were struck this week when the Wall Street Journal reported that Allstate, a major insurance company, had largely handed the task of writing claims emails over to an AI system.

    “The claim agent still looks at them just to make sure they’re accurate, but they’re not writing them anymore,” Allstate CIO Jeevanjee enthused to the newspaper.

    “When these emails used to go out, even though we had standards and so on, they would include a lot of insurance jargon,” he continued. “They weren’t very empathetic… Claims agents would get frustrated, and so it wasn’t necessarily great communication.”

    It was a fascinating story about the incursion of AI into yet another industry, so we ran a quick blog on it and moved on to other things.

    But then we got a genuinely bizarre email from someone on Allstate’s media relations team, claiming the WSJ’s reporting was flawed and that the newspaper was on the verge of taking it down.

    “I’m currently working with the Wall Street Journal to have it updated/removed due to the high number of inaccuracies,” the Allstate spokesperson told us, demanding that we delete our blog entirely.

    Later on, the WSJ did add a correction to its story — but only on two obscure points, about the number of insurance reps the company employs and the name of the vendor Allstate uses for estimating the cost of repairs.

    Meanwhile, Allstate’s media team continued to badger us, sending a lengthy table of requested changes, many of which involved deleting or altering statements by the company’s own exec, Jeevanjee (they were also very unhappy with a comparison to UnitedHealthcare, another insurer that’s reportedly deployed AI to deal with claims).

    To be clear, it’s not out of the ordinary for spokespeople to reach out to journalists to dispute factual claims, sometimes resulting in corrections.

    But it’s an entirely different matter when a company requests that direct quotes by its executives be deleted wholesale. And frankly, we were baffled: Jeevanjee is the company’s CIO, so you’d expect that he’d know exactly how its employees were using technology. And his quotes — the “claim agent still looks at them just to make sure they’re accurate, but they’re not writing them anymore” — weren’t remotely ambiguous.

    “We would like to correct how our Claims team uses AI tools,” the spokesperson replied, completely ignoring our questions. “Allstate employees are responsible for drafting and sending all customer emails, and they can choose to use AI tools to help improve clarity. Our employees are committed to helping restore our customers’ lives quickly with accuracy and empathy.”

    It’s hard not to speculate about what’s going on behind the scenes at Allstate. Was there a drastic miscommunication? Some sort of coverup of an actual policy?

    But the most likely explanation probably has to do with this: Allstate is starting to realize, like many other large companies, that customers don’t like the idea of being offloaded onto a flawed and over-hyped AI system.

    And as a result, when a newspaper reported that that’s exactly what it had been doing, its media arm panicked — and started making bizarre demands to journalists.

    Allstate Says Almost All Its Communications About Insurance Claims Are Done With AI Now
    “The claim agent still looks at them just to make sure they’re accurate, but they’re not writing them anymore.”
    https://futurism.com/allstate-almost-all-insurance-communications-ai

    Reply
  44. Tomi Engdahl says:

    CIO Journal
    Turns Out AI Is More Empathetic Than Allstate’s Insurance Reps
    Allstate said a large share of the emails sent to customers are now generated by AI, and that these are overall more empathetic and less accusatory than human-written ones
    https://www.wsj.com/articles/turns-out-ai-is-more-empathetic-than-allstates-insurance-reps-cf5f7c98?mod=latest_headlines

    Reply
