AI is developing all the time. Below are picks from several articles on what is expected to happen in and around AI in 2025. The excerpts have been edited, and in some cases translated, for clarity.
AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
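The plan–act–reflect cycle described above can be sketched as a minimal loop. This is an illustrative skeleton, not any vendor's actual agent API; the tool names and the planner/reflector callables are hypothetical stand-ins:

```python
def run_agent(goal, tools, plan_step, reflect, max_iters=5):
    """Minimal agentic loop: plan, act with a tool, reflect until done."""
    history = []
    for _ in range(max_iters):
        action, tool_name, args = plan_step(goal, history)  # decide next step
        result = tools[tool_name](*args)                    # use a tool
        history.append((action, result))
        if reflect(goal, history):                          # objective reached?
            break
    return history

# Toy example: the "planner" always searches, "reflect" stops on a hit.
docs = {"pricing": "Plan A costs $10/mo"}
tools = {"search": lambda q: docs.get(q, "no match")}
plan = lambda goal, hist: ("search docs", "search", [goal])
done = lambda goal, hist: hist[-1][1] != "no match"

trace = run_agent("pricing", tools, plan, done)
print(trace[-1][1])  # → Plan A costs $10/mo
```

Real agents would use an LLM for both the planning and the reflection steps; the loop structure stays the same.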
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality or robotics. PINNs are set to gain more importance in the job market because they will enable autonomous robots to navigate and execute tasks in the real world.
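The core idea behind a PINN is that the training loss combines a data-fit term with a residual of the governing physical equation. A minimal sketch, using finite differences on the toy ODE du/dx + u = 0 (the setup and lambda weighting here are illustrative, not a production training loop):

```python
import numpy as np

def physics_informed_loss(u, x, data_x, data_y, lam=1.0):
    """Loss = data misfit + lambda * PDE residual, here for du/dx + u = 0."""
    # Data term: fit the observed points
    data_loss = np.mean((u(data_x) - data_y) ** 2)
    # Physics term: finite-difference residual of the governing equation
    h = 1e-4
    dudx = (u(x + h) - u(x - h)) / (2 * h)
    residual = dudx + u(x)          # vanishes if u obeys the ODE
    return data_loss + lam * np.mean(residual ** 2)

x = np.linspace(0.0, 1.0, 50)
exact = lambda t: np.exp(-t)        # satisfies the ODE exactly
wrong = lambda t: 1.0 - t           # fits some data but ignores the physics

obs_x = np.array([0.0, 0.5])
obs_y = exact(obs_x)

# The physics term penalizes the candidate that violates the equation.
print(physics_informed_loss(exact, x, obs_x, obs_y) <
      physics_informed_loss(wrong, x, obs_x, obs_y))  # → True
```

In an actual PINN the candidate u is a neural network and the residual is computed via automatic differentiation, but the loss composition is exactly this.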
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as they did when they started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience, moving from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people to use. AI won’t be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.
12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill and are too expensive, and too slow, for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could only solve 13% of the problems on the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the previous waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take for example that task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
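A model router of this kind boils down to matching task requirements against a catalog of models and picking the cheapest adequate one. The model names, costs, and capability sets below are purely illustrative:

```python
# Hypothetical model catalog: costs and strengths are illustrative only.
MODELS = {
    "small-fast":  {"cost": 0.1, "good_at": {"chat", "summarize"}},
    "large-smart": {"cost": 2.0, "good_at": {"chat", "summarize",
                                             "reasoning", "code"}},
}

def route(task, budget_per_call):
    """Pick the cheapest model that can handle the task within budget."""
    candidates = [
        (spec["cost"], name)
        for name, spec in MODELS.items()
        if task in spec["good_at"] and spec["cost"] <= budget_per_call
    ]
    if not candidates:
        raise ValueError(f"no model can handle {task!r} within budget")
    return min(candidates)[1]

print(route("summarize", budget_per_call=1.0))  # → small-fast
print(route("reasoning", budget_per_call=5.0))  # → large-smart
```

Because the catalog is data rather than code, swapping providers or adding a fallback model does not change application logic, which is the flexibility the CIOs quoted above are after.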
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.
9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”
Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
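The escalation logic described above, where the AI agent handles routine order changes and pushes complex cases to a human, can be sketched as a triage function. The request categories here are hypothetical examples, not Agentforce's actual taxonomy:

```python
# Illustrative triage rules; a real system would classify intent with a model.
ROUTINE = {"change quantity", "update shipping address", "change delivery date"}
COMPLEX = {"reconfigure order", "venue change"}

def triage(request_type):
    """Route routine changes to the AI agent, complex ones to a human rep."""
    if request_type in ROUTINE:
        return "ai_agent"
    if request_type in COMPLEX:
        return "human_rep"
    return "human_rep"  # unknown requests default to a person

print(triage("change quantity"))  # → ai_agent
print(triage("venue change"))     # → human_rep
```

Defaulting unknown requests to a human is the conservative choice: the cost of a wrong automated action usually outweighs the cost of one extra manual ticket.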
Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed
The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together
Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations
Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.
7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.
OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets
It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.
AI requires ever-faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.
AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity
Here is how the chip market will fare next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market by 2025.
1. AI growth accelerates
2. Asia-Pacific IC Design Heats Up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating.
5. Mature process market recovers
6. 2nm Technology Breakthrough
7. Restructuring the Packaging and Testing Market
8. Advanced packaging technologies on the rise
2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025, it is very likely that there will be more advancements in MCUs using lightweight AI models.
Adoption of AI acceleration features is a big step in the development of microcontrollers. The inclusion of AI features in microcontrollers started in 2024, and it is very likely that in 2025, their features and tools will develop further.
AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new AI models based on GPAI standards must be fully compliant with the act. Also similar to the GDPR is the threat of huge fines for non-compliance: EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
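The fine ceiling quoted above is a simple "whichever is higher" formula, which makes it easy to see when the turnover-based figure dominates. A quick worked calculation (the turnover figures are made-up examples):

```python
def eu_ai_act_max_fine(worldwide_annual_turnover_eur):
    """Upper bound on EU AI Act fines for the most serious breaches:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# EUR 1 billion turnover: 7% = EUR 70M, which exceeds the EUR 35M floor.
print(eu_ai_act_max_fine(1_000_000_000))  # → 70000000.0
# EUR 100M turnover: 7% = EUR 7M, so the EUR 35M floor applies instead.
print(eu_ai_act_max_fine(100_000_000))    # → 35000000
```

The crossover point is at EUR 500 million turnover, above which the percentage-based figure is the binding one.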
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
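That distinction, where the same tool carries different risk depending on the task and the data involved, can be expressed as a usage-level assessment rather than a tool-level one. The categories and labels below are illustrative assumptions, not the EU AI Act's formal risk tiers:

```python
# Illustrative risk scoring: the same tool gets a different risk level
# depending on what data is shared and what task it performs.
SENSITIVE_DATA = {"internal documents", "customer PII", "financials"}
HIGH_RISK_TASKS = {"hiring decision", "credit scoring"}

def assess_usage(tool, task, data_shared):
    """Classify an AI usage event, not just the tool itself."""
    if task in HIGH_RISK_TASKS:
        return "high"
    if data_shared & SENSITIVE_DATA:
        return "elevated"
    return "low"

# Same hypothetical tool, very different risk profiles:
print(assess_usage("genai-tool", "summarize", {"internal documents"}))  # → elevated
print(assess_usage("genai-tool", "draft marketing copy", set()))        # → low
```

Keying the inventory on (tool, task, data) triples instead of tool names is what makes the "embedded AI" problem tractable: the same SaaS platform can appear under several risk levels at once.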
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.
The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.
Tomi Engdahl says:
Reuters:
Manus partners with Alibaba’s Qwen to integrate Manus’ AI agent functions with AI models and computing platforms in China and to collaborate on Qwen’s AI models — China’s Manus AI announced on Tuesday a strategic partnership with the team behind tech giant Alibaba’s Qwen AI models …
China’s Manus AI partners with Alibaba’s Qwen team in expansion bid
https://www.reuters.com/technology/artificial-intelligence/chinas-manus-ai-announces-partnership-with-alibabas-qwen-team-2025-03-11/
Tomi Engdahl says:
Nikkei Asia:
Sakana CEO David Ha says its AI CUDA Engineer “cheated” to speed up training by 100x and “preliminary results” show it “will be closer to the 10% to 100% level” — TOKYO — Sakana AI’s announcement that its recent artificial intelligence breakthrough …
Sakana’s snag highlights risk of relying on AI to test AI
Nvidia-backed startup backs off claims of hundredfold efficiency gains
https://asia.nikkei.com/Business/Technology/Artificial-intelligence/Sakana-s-snag-highlights-risk-of-relying-on-AI-to-test-AI2
Tomi Engdahl says:
Riddhi Kanetkar / Business Insider:
OpusClip, which offers a multimodal AI tool to simplify short-form video editing for creators, raised $20M led by SoftBank’s Vision Fund 2 at a $215M valuation
AI video startup OpusClip raises $20 million from SoftBank’s Vision Fund 2 at a $215 million valuation
https://www.businessinsider.com/opusclip-softbank-vision-fund-2-funding-valuation-2025-3
Tomi Engdahl says:
Allie Garfinkle / Fortune:
Cartesia, which is developing real-time generative AI models for voice AI, raised a $64M Series A led by Kleiner Perkins, taking its total funding to $91M
https://fortune.com/2025/03/11/exclusive-cartesia-voice-ai-startup-raises-64-million-series-a/
Tomi Engdahl says:
Dave2D on YouTube:
Mac Studio with M3 Ultra and 512GB of unified memory review: opens up new workflows on a ~$10,000 desktop, like running a quantized version of DeepSeek R1 671B
M3 Ultra Mac Studio Review
https://www.youtube.com/watch?v=J4qwuCXyAcU
Tomi Engdahl says:
The Information:
Source: Anthropic’s annualized revenue grew from $1B at the end of 2024 to $1.4B in early March; Manus uses tools including Claude 3.7 Sonnet to power its agent
https://www.theinformation.com/articles/anthropics-claude-drives-strong-revenue-growth-while-powering-manus-sensation
Tomi Engdahl says:
Matt Day / Bloomberg:
After tech companies like Amazon and Microsoft made huge bets on AI, the gadget boom that started with the launch of Alexa in 2014 has largely fizzled
https://www.bloomberg.com/news/articles/2025-03-11/gadget-boom-fizzles-amid-ai-hoopla-it-s-a-bloodbath-out-there
Tomi Engdahl says:
Codeium’s Windsurf : This NEW & FULLY FREE AI Editor BEATS CURSOR! (10 min)
https://www.youtube.com/watch?v=ilTzOaYLeHA
Windsurf tutorial 20 min
Windsurf Tutorial for Beginners (AI Code Editor) – Better than Cursor??
https://www.youtube.com/watch?v=8TcWGk1DJVs
Tomi Engdahl says:
Marques Brownlee / Marques Brownlee on YouTube:
Hands-on with Samsung’s Project Moohan headset: resembles Apple Vision Pro, Gemini integration shines, runs Android XR with mobile and tablet apps, and more
https://www.youtube.com/watch?v=az5QL_NLBvg
The Android XR headset with Gemini has some surprisingly cool features that we’ll start to see everywhere.
Tomi Engdahl says:
Neural networks are not suited to everything
Neural networks work well when the data follows simple patterns. With more complex and less organized datasets, however, their performance degrades, and they can sometimes be no better than random guessing. Fortunately, real-world data is often fairly simple and structured, which suits the simplicity-favoring learning principle of neural networks. This also helps them avoid overfitting, i.e. adapting too closely to the training data.
https://etn.fi/index.php/13-news/17069-tutkijat-loeysivaet-syyn-taemaen-takia-neuroverkot-oppivat-niin-tehokkaasti
Tomi Engdahl says:
AI radio took the win in a journalism competition
https://www.uusiteknologia.fi/2025/03/13/tekoaly-vei-voiton-journalistikisassa/
The team behind the AI radio news created by the Finnish News Agency (STT) and Bauer Media has been awarded Renewer of the Year as part of the Great Journalism Award 2024, presented yesterday in Helsinki. The AI project has previously won the innovation award of the European Alliance of News Agencies.
STT and radio operator Bauer Media began a collaboration last spring in which the news agency’s content is read on radio channels by AI anchors.
The winning AI does not yet actually edit the content; the synthetic voices only read news that was written by STT’s journalists and has gone through the editorial process.
Tomi Engdahl says:
Emma Roth / The Verge:
Google DeepMind launches two AI models, Gemini 2.0-based Robotics and Robotics-ER, to help robots “perform a wider range of real-world tasks than ever before” — Gemini Robotics also makes robots more dexterous, allowing them to perform more precise tasks, like folding a piece of paper.
Google DeepMind’s new AI models help robots perform physical tasks, even without training
https://www.theverge.com/news/628021/google-deepmind-gemini-robotics-ai-models
Tomi Engdahl says:
Benoit Berthelot / Bloomberg:
French publishers and authors sue Meta for allegedly training AI models on their books without consent, say they have evidence of “massive” copyright breaches — SNE, the trade association representing major French publishers including Hachette and Editis, along …
https://www.bloomberg.com/news/articles/2025-03-12/meta-faces-legal-challenge-by-french-publishers-over-ai-training
Tomi Engdahl says:
Aisha Malik / TechCrunch:
Snapchat introduces AI video Lenses, powered by Snap’s in-house generative video model, available to subscribers of the $16/month Snapchat Platinum tier
https://techcrunch.com/2025/03/12/snap-introduces-ai-video-lenses-powered-by-its-in-house-generative-model/
Tomi Engdahl says:
Associated Press:
Investigation: AI-powered student monitoring tool Gaggle, used to track ~6M US students, isn’t always secure, and there’s no research showing it boosts safety
Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks
https://apnews.com/article/ai-school-chromebook-gaggle-goguardian-securly-25a3946727397951fd42324139aaf70f
Tomi Engdahl says:
Artificial Intelligence
Beware of DeepSeek Hype: It’s a Breeding Ground for Scammers
Exploiting trust in the DeepSeek brand, scammers attempt to harvest personal information or steal user credentials.
https://www.securityweek.com/beware-of-deepseek-hype-its-a-breeding-ground-for-scammers/
Tomi Engdahl says:
Over half of American adults have used an AI chatbot, according to an Elon University survey.
Over half of American adults have used an AI chatbot, survey finds
ChatGPT was the most popular model, followed by Google Gemini.
https://www.nbcnews.com/tech/tech-news/half-american-adults-used-ai-chatbots-survey-finds-rcna196141?fbclid=IwY2xjawI_cBVleHRuA2FlbQIxMQABHdzEK_ryPnltT_GYmQV7SsqXxt4yhYHnheKyTlqI6jClEEmYtlcg9kjT8g_aem_0HAiGDkJbqGkOpsw7xzqGw
Tomi Engdahl says:
Artificial intelligence technology is becoming increasingly integral to everyday life, with an Elon University survey finding that 52% of U.S. adults have used AI large language models like ChatGPT, Gemini, Claude and Copilot.
The survey, conducted in January by the Imagining the Digital Future Center at the university in North Carolina, found that 34% of its 500 respondents who had used AI said they use large language models (LLMs) at least once a day. Most popular was ChatGPT, with 72% of respondents reporting they have used it. Google’s Gemini was second, at 50%.
It has become increasingly common for people to find themselves developing personal relationships with AI chatbots. The survey found that 38% of users said they believe LLMs will "form deep relationships with humans."
https://www.nbcnews.com/tech/tech-news/half-american-adults-used-ai-chatbots-survey-finds-rcna196141?fbclid=IwY2xjawI_cG1leHRuA2FlbQIxMQABHdzEK_ryPnltT_GYmQV7SsqXxt4yhYHnheKyTlqI6jClEEmYtlcg9kjT8g_aem_0HAiGDkJbqGkOpsw7xzqGw
Tomi Engdahl says:
John Gruber / Daring Fireball:
By previewing smarter Siri vaporware at WWDC and the iPhone 16 launch, Apple burned a reputation earned over decades for only promising actual working products — In the two decades I’ve been in this racket, I’ve never been angrier at myself for missing a story than I am about Apple’s announcement …
Something Is Rotten in the State of Cupertino
https://daringfireball.net/2025/03/something_is_rotten_in_the_state_of_cupertino
Wednesday, 12 March 2025
In the two decades I’ve been in this racket, I’ve never been angrier at myself for missing a story than I am about Apple’s announcement on Friday that the “more personalized Siri” features of Apple Intelligence, scheduled to appear between now and WWDC, would be delayed until “the coming year”.
I should have my head examined.
This announcement dropped as a surprise, and certainly took me by surprise to some extent, but it was all there from the start. I should have been pointing out red flags starting back at WWDC last year, and I am embarrassed and sorry that I didn’t see what should have been very clear to me from the start.
How I missed this is twofold. First, I’d been lulled into complacency by Apple’s track record of consistently shipping pre-announced products and features. Their record in that regard wasn’t perfect, but the exceptions tended to be around the edges.
The same goes for “Apple Intelligence”. It doesn’t exist as a single thing or project. It’s a marketing term for a collection of features, apps, and services. Putting it all under a single obvious, easily remembered — and easily promoted — name makes it easier for users to understand that Apple is launching a new initiative. It also makes it easier for Apple to just say “These are the devices that qualify for all of these features, and other devices — older ones, less expensive ones — get none of them.”
What I mean by that is that it was clear to me from the WWDC keynote onward that some of the features and aspects of Apple Intelligence were more ambitious than others. Some were downright trivial; others were proposing to redefine how we will do our jobs and interact with our most-used devices. That was clear. But yet somehow I didn’t focus on it. Apple itself strongly hinted that the various features in Apple Intelligence wouldn’t all ship at the same time. What they didn’t spell out, but anyone could intuit, was that the more trivial features would ship first, and the more ambitious features later. That’s where the red flags should have been obvious to me.
In broad strokes, there are four stages of “doneness” or “realness” to features announced by any company:
Features that the company’s own product representatives will demo, themselves, in front of the media. Smaller, more personal demonstrations are more credible than on-stage demos. But the stakes for demo fail are higher in an auditorium full of observers.
Features that the company will allow members of the media (or other invited outside observers and experts) to try themselves, for a limited time, under the company’s supervision and guidance. Vision Pro demos were like this at WWDC 2023. A bunch of us got to use pre-release hardware and in-progress software for 30 minutes. It wasn’t like free range “Do whatever you want” — it was a guided tour. But we were the ones actually using the product. Apple allowed hands-on demos for a handful of media (not me) at Macworld Expo back in 2007 with prototype original iPhones — some of the “apps” were just screenshots, but most of the iPhone actually worked.
Features that are released as beta software for developers, enthusiasts, and the media to use on their own devices, without limitation or supervision.
Features that actually ship to regular users, and hardware that regular users can just go out and buy.
As of today — March 2025 — every feature in Apple Intelligence that has actually shipped was at level 1 back at WWDC. After the keynote, dozens of us in the press were invited to a series of small-group briefings where we got to watch Apple reps demo features like Writing Tools, Photos Clean Up, Genmoji, and more.
But we didn’t see all aspects of Apple Intelligence demoed. None of the “more personalized Siri” features, the ones that Apple, in its own statement announcing their postponement, described as having “more awareness of your personal context, as well as the ability to take action for you within and across your apps”.
There were no demonstrations of any of that. Those features were all at level 0 on my hierarchy. That level is called vaporware. They were features Apple said existed, which they claimed would be shipping in the next year, and which they portrayed, to great effect, in the signature “Siri, when is my mom’s flight landing?” segment of the WWDC keynote itself, starting around the 1h:22m mark. Apple was either unwilling or unable to demonstrate those features in action back in June, even with Apple product marketing reps performing the demos from a prepared script using prepared devices.
This shouldn’t have just raised a concern in my head. It should have set off blinding red flashing lights and deafening klaxon alarms.
Even the very engineers working on a project never know exactly how long something is going to take to complete. An outsider observing a scripted demo of incomplete software knows far less than the engineers about how much more work it needs.
But a feature or product that Apple is unwilling to demonstrate, at all, is unknowable. Is it mostly working, and close to, but not quite, demonstratable? Is it only kinda sorta working — partially functional, but far from being complete? Fully functional but prone to crashing — or in the case of AI, prone to hallucinations and falsehoods? Or is it complete fiction, just an idea at this point?
What Apple showed regarding the upcoming “personalized Siri” at WWDC was not a demo. It was a concept video. Concept videos are bullshit, and a sign of a company in disarray, if not crisis. The Apple that commissioned the futuristic “Knowledge Navigator” concept video in 1987 was the Apple that was on a course to near-bankruptcy a decade later. Modern Apple — the post-NeXT-reunification Apple of the last quarter century — does not publish concept videos. They only demonstrate actual working products and features.
Until WWDC last year, that is.
My deeply misguided mental framework for “Apple Intelligence” last year at WWDC was, something like this: Some of these features are further along than others, and Apple is showing us those features in action first, and they will surely be the features that ship first over the course of the next year. The other features must be coming to demonstratable status soon. But the mental framework I should have used was more like this: Some of these features are merely table stakes for generative AI in 2024, but others are ambitious, groundbreaking, and, given their access to personal data, potentially dangerous. Apple is only showing us the table-stakes features, and isn’t demonstrating any of the ambitious, groundbreaking, risky features.
It gets worse.
Yet while Apple still wouldn't demonstrate these features in person, they did commission and broadcast a TV commercial showing these purported features in action, presenting them as a reason to purchase a new iPhone — a commercial they pulled, without comment, from YouTube this week.
Last week’s announcement — “It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year” — was, if you think about it, another opportunity to demonstrate the current state of these features. Rather than simply issue a statement to the media, they could have invited select members of the press to Apple Park, or Apple’s offices in New York, or even just remotely over a WebEx conference call, and demonstrated the current state of these features live, on an actual device. That didn’t happen. If these features exist in any sort of working state at all, no one outside Apple has vouched for their existence, let alone for their quality.
Tomi Engdahl says:
So being an “AI” guy I’m finding is a bit like being a crypto bro and I don’t know how I feel about this…
https://www.facebook.com/share/p/15x7bwfK8G/
ChatGPT says: ”Yeah, I get that. AI has that same mix of hype, speculation, and a few genuinely groundbreaking things happening underneath it all. It’s like everyone suddenly has “AI” in their bio, just like how everyone was a crypto trader in 2021.”
Tomi Engdahl says:
Bloomberg:
Alibaba debuts New Quark, an updated app using its flagship Qwen reasoning model, rolling out gradually, as it seeks to keep up with DeepSeek and Chinese rivals — It re-tooled the Quark app to take advantage of its flagship Qwen reasoning model. “New Quark” now integrates functions including a chatbot …
https://www.bloomberg.com/news/articles/2025-03-13/alibaba-unveils-ai-agent-app-in-race-to-keep-up-with-rivals
Tomi Engdahl says:
A well-funded Moscow-based global ‘news’ network has infected Western artificial intelligence tools worldwide with Russian propaganda
An audit found that the 10 leading generative AI tools advanced Moscow’s disinformation goals by repeating false claims from the pro-Kremlin Pravda network 33 percent of the time
https://www.newsguardrealitycheck.com/p/a-well-funded-moscow-based-global
Tomi Engdahl says:
Tesla shouldn’t be seen as an electric car manufacturer anymore. It’s an AI company—if you believe CEO Elon Musk. His confidence is tied to a unique dataset: petabytes of video harvested from the company’s cars as Tesla customers log millions of driving miles worldwide.
In theory, all that real-world data is exactly what Tesla needs to train its cars to operate without any human assistance, a goal that’s core to Musk’s vision for the future of Tesla. But there’s a problem: That data isn’t necessarily as helpful as Musk claims. Some of it isn’t useful at all.
Read more: https://trib.al/Si1ncbr
(Illustration: Emily Scherer for Forbes; Photos: Apu Gomes, Jaap Arriens/Nurphoto, Wang Zhao/AFP, Hannes P Albert/Picture Alliance via Getty Images)
Tomi Engdahl says:
AI firms publish their annual AI report – with weighty messages for businesses
Marko Pinola, 11 Mar 2025 (updated 13 Mar 2025)
Published for the fourth time, the report is based on a survey of more than 100 Nordic companies of various sizes from numerous industries.
https://www.tivi.fi/uutiset/tekoalyfirmat-julkaisivat-vuosittaisen-ai-raporttinsa-sisaltaa-painavaa-sanottavaa-yrityksille/429fbfa0-fd16-4560-a7bf-7a7a178b8f78
The main challenge in scaling AI is no longer a lack of skills but insufficient investment. That, in a nutshell, is the conclusion of the Finnish
Tomi Engdahl says:
Google unveils the open Gemma 3 AI model – ”the world’s best single-accelerator model”
https://mobiili.fi/2025/03/12/google-julkisti-avoimen-gemma-3-tekoalymallin-maailman-paras-yhden-kiihdyttimen-malli/
Google has released the new open Gemma 3 AI model for developers.
Published alongside Google’s closed Gemini models, the Gemma family consists of open models that developers can download and build on. According to Google, Gemma models have accumulated over 100 million downloads in the past year, and some 60,000 different Gemma variants are in use.
Gemma models are designed to run fast directly on devices such as phones, laptops, and desktop workstations.
According to Google, the new Gemma 3 model is built on the same research and technology base as Google’s Gemini 2.0 models. Gemma 3 is available in 1B, 4B, 12B, and 27B sizes, reflecting the number of model parameters in billions.
With the new release, Google has specifically highlighted Gemma 3 as the best AI model that runs on a single AI accelerator (a single GPU or TPU).
In addition, according to LMArena, Gemma 3 outperforms Llama-405B, DeepSeek-V3, and OpenAI’s o3-mini despite its smaller size.
According to Google, Gemma 3 offers advanced text and visual reasoning for analyzing images, text, and short videos in the 4B and larger versions. The model supports a 128,000-token context window.
Gemma 3 also supports function calling and structured outputs, which enable task automation and the development of agent-like AI experiences.
Official quantized versions are also available, shrinking the model’s size and compute requirements while, according to Google, retaining high accuracy.
Also included is the ShieldGemma 2 safety component, which can classify content as dangerous, sexual, or violent.
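Function calling of the kind mentioned above means the model emits a machine-readable request that application code then executes. A minimal, generic dispatch sketch — the JSON shape and the `get_weather` tool here are illustrative assumptions, not Gemma 3's documented wire format:

```python
import json

# Hypothetical tool registry; get_weather is an illustrative stand-in.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch(model_output: str) -> str:
    """Parse a structured function-call message and invoke the matching tool."""
    msg = json.loads(model_output)
    func = TOOLS[msg["name"]]
    return func(**msg["arguments"])

# Simulated structured response from the model requesting a tool call
raw = '{"name": "get_weather", "arguments": {"city": "Helsinki"}}'
print(dispatch(raw))  # -> Sunny in Helsinki
```

In a real agent loop, the tool's return value would be fed back to the model as context for its next step; structured outputs make that round trip parseable instead of free text.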
Tomi Engdahl says:
AI is revolutionizing pair programming: people other than coders get behind the keyboard
Suvi Korhonen, 11 Mar 2025 (updated 12 Mar 2025)
New tools enable a new kind of faster, more intensive teamwork in application development.
https://www.tivi.fi/uutiset/tekoaly-mullistaa-parikoodauksen-muutkin-kuin-koodarit-paasevat-nappiksen-taakse/d351fedc-8d77-47ef-a97e-f7d5353af388
With AI tools like Cursor, the pace of software development is accelerating. At the same time, the division of labor in projects is changing, says Tommi Sinivuo of IT consultancy Wonna.
Tomi Engdahl says:
AI insisted on the same result time after time in a quantum experiment – scientists were skeptical until they tested the solution
Marko Pinola, 11 Mar 2025 (updated 12 Mar 2025)
Chinese researchers used AI to test whether a phenomenon familiar from quantum mechanics could be realized experimentally without conventional means.
https://www.tivi.fi/uutiset/tekoaly-intti-samaa-tulosta-kerta-toisensa-jalkeen-kvanttikokeessa-tieteilijat-epailivat-kunnes-testasivat-ratkaisun/842911fe-1474-401e-86b3-eb2dcf53ed8f
Tomi Engdahl says:
https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice?fbclid=IwY2xjawI-cqtleHRuA2FlbQIxMQABHf0pF-guTCe6iAMlNqZmsAXdOrVA8oLNp9f68u5de4HkkhJxidhN0U4JSg_aem_tWOiC74hDZM4TvqLFZLTZg
Tomi Engdahl says:
https://blog.kuzudb.com/post/kuzu-wasm-rag/
Tomi Engdahl says:
https://codeium.com/
Windsurf Wave 4
Tomi Engdahl says:
Microsoft developing AI reasoning models to compete with OpenAI, The Information reports
https://www.reuters.com/technology/artificial-intelligence/microsoft-developing-ai-reasoning-models-compete-with-openai-information-reports-2025-03-07/
Tomi Engdahl says:
https://github.blog/changelog/2025-03-06-copilot-chat-users-can-now-use-the-vision-input-in-vs-code-and-visual-studio-public-preview/
Tomi Engdahl says:
https://www.forbes.com/sites/aytekintank/2025/03/04/5-ways-ai-agents-can-help-solopreneurs-scale-without-hiring/
Tomi Engdahl says:
https://searchengineland.com/openai-deep-research-seo-strategies-453012
Tomi Engdahl says:
https://jetsonhacks.com/2024/12/17/jetson-orin-nano-super-developer-kit/
Tomi Engdahl says:
https://www.zmescience.com/science/agriculture-science/ai-is-deciphering-ancient-inscriptions-that-experts-have-struggled-with-for-centuries/
Tomi Engdahl says:
https://hbr.org/2025/03/how-to-build-your-own-ai-assistant?ab=HP-latest-text-1
Tomi Engdahl says:
https://simonwillison.net/2025/Mar/5/code-interpreter/
Tomi Engdahl says:
‘Another DeepSeek moment’? Chinese start-up launches new AI agent, sparking widespread attention
https://www.globaltimes.cn/page/202503/1329652.shtml
Chinese start-up Monica has lately released an artificial intelligence (AI) agent called Manus, instantly attracting widespread media and public attention, with some media outlets asking if this would be “another DeepSeek moment,” in a nod to the rapid rise of the Chinese AI start-up.
Manus went viral online within just about 20 hours of its preview launch on Wednesday. Manus is reportedly the world’s first truly general AI agent. Examples showcased on the official website demonstrate its ability to independently think, plan, and execute complex tasks, delivering complete results, CCTV.com reported on Thursday.
According to the company’s official website, “Manus is a general AI agent that bridges minds and actions: it doesn’t just think, it delivers results. Manus excels at various tasks in work and life, getting everything done while you rest.”
Tomi Engdahl says:
Introducing the Windsurf Editor
The new purpose-built IDE to harness magic
https://codeium.com/
Tomi Engdahl says:
ChatGPT for students: learners find creative new uses for chatbots
The utility of generative AI tools is expanding far beyond simple summarisation and grammar support towards more sophisticated, pedagogical applications.
https://www.nature.com/articles/d41586-025-00621-2
Tomi Engdahl says:
https://dawn.fi/uutiset/2025/03/01/openai-meilta-loppuvat-gpu-piirit
Tomi Engdahl says:
https://blog.google/products/search/ai-mode-search/
Tomi Engdahl says:
https://etn.fi/index.php/13-news/17225-amazon-toi-nopeammat-ja-edullisemmat-tekoaelymallinsa-suomeen
Tomi Engdahl says:
Salesforce Launches AgentExchange: the Trusted Marketplace for Agentforce
https://www.salesforce.com/news/press-releases/2025/03/04/agentexchange-announcement/
Tomi Engdahl says:
https://venturebeat.com/ai/salesforces-agentexchange-launches-with-200-partners-to-automate-your-boring-work-tasks/
Tomi Engdahl says:
https://www.forbes.com/sites/lanceeliot/2025/02/28/ai-hiding-emergent-human-values-that-include-ai-survival-topping-human-lives/
Tomi Engdahl says:
https://venturebeat.com/ai/google-launches-free-gemini-powered-data-science-agent-on-its-colab-python-platform/
Tomi Engdahl says:
How AI Tools Are Reshaping the Coding Workforce
Leaner development teams and a higher bar for hiring new roles are some of the changes coming as companies turn to generative AI coding tools
https://www.wsj.com/articles/how-ai-tools-are-reshaping-the-coding-workforce-6ad24c86
Tomi Engdahl says:
https://www.mikrobitti.fi/uutiset/microsoft-vaiensi-copilotin/1c6c40eb-8a23-487a-aeb4-1fc989b0ecda