AI trends 2025

AI is developing all the time. Below are picks from several articles on what is expected to happen in and around AI in 2025. The texts are excerpts from the articles, edited and in some cases translated for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
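The agent behaviors described above (plan, act with tools, and iteratively reflect until the objective is met) can be sketched as a simple loop. This is an illustrative toy, not any vendor's agent framework; the `Tool` class and the calculator tool are made up for the example.

```python
# Minimal sketch of an agentic loop: plan, act with a tool, reflect,
# and iterate until the objective is met. All names here are
# illustrative, not any vendor's API.

class Tool:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

def run_agent(objective, tools, max_steps=5):
    """Iterate plan -> act -> reflect until done or out of steps."""
    history = []
    for step in range(max_steps):
        # Plan: pick a tool (round-robin stands in for an LLM planner).
        tool = tools[step % len(tools)]
        # Act: invoke the tool on the objective.
        result = tool.fn(objective)
        history.append((tool.name, result))
        # Reflect: stop when the tool produced a usable result.
        if result is not None:
            return result, history
    return None, history

# Toy tool: "solve" an objective that is a simple arithmetic expression.
calc = Tool("calculator", lambda obj: eval(obj, {"__builtins__": {}}))
answer, trace = run_agent("2 + 3", [calc])
```

In a real multi-agent system the planning step would itself be an LLM call, and the reflection step would judge progress rather than just check for a non-empty result.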
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality or robotics. PINNs are set to gain more importance in the job market because they will enable autonomous robots to navigate and execute tasks in the real world.
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
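The physics-grounding idea behind PINNs can be illustrated with a toy loss function: alongside fitting data, the model is penalized for violating a governing equation. The sketch below (plain NumPy, assuming the decay ODE du/dt = -k·u) scores candidate solutions against the physics; a real PINN would apply the same residual to a neural network's output via automatic differentiation.

```python
import numpy as np

def physics_informed_loss(u, t, k, u0):
    """Physics-informed loss: a data term anchoring the initial condition
    plus a physics term penalizing violations of du/dt = -k*u, evaluated
    here with finite differences. A real PINN would instead differentiate
    a neural network's output via autodiff."""
    data_loss = (u[0] - u0) ** 2
    dudt = np.gradient(u, t)          # finite-difference du/dt
    residual = dudt + k * u           # zero wherever the ODE holds
    physics_loss = np.mean(residual ** 2)
    return data_loss + physics_loss

t = np.linspace(0.0, 1.0, 200)
k, u0 = 2.0, 1.0
exact = u0 * np.exp(-k * t)     # satisfies the ODE -> small loss
wrong = u0 * np.ones_like(t)    # ignores the physics -> large loss
```

The point is that the physics term constrains the model even where no training data exists, which is what makes the predictions "grounded in physical reality."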
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as they did when they started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience, from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people to use. AI won’t be limited to one app; it might even replace apps one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill and are too expensive, and too slow, for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could only solve 13% of the problems on the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the previous waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
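The loose coupling Malhotra describes can be sketched as message dispatch: instead of calling a hard-wired service endpoint, a message goes to whichever agent claims it. The `Agent` class and the billing/support examples below are illustrative assumptions, not a real framework.

```python
# Sketch of "agents as the next phase after services": instead of fixed
# request/response endpoints, each component accepts a free-form message
# and decides whether to act on it. All names are illustrative only.

class Agent:
    def __init__(self, name, can_handle, handle):
        self.name = name
        self.can_handle = can_handle   # predicate: should I take this message?
        self.handle = handle           # worker: produce a reply

def dispatch(message, agents):
    """Loose coupling: route a message to any agent that claims it,
    rather than calling one hard-wired service."""
    for agent in agents:
        if agent.can_handle(message):
            return agent.name, agent.handle(message)
    return None, "no agent available"

billing = Agent("billing", lambda m: "invoice" in m, lambda m: "invoice sent")
support = Agent("support", lambda m: "error" in m, lambda m: "ticket opened")

who, reply = dispatch("customer reports an error in checkout", [billing, support])
```

Because routing is by capability rather than by a fixed call graph, agents can be added, removed, or replaced without rewiring their callers, which is the flexibility and resilience the article points to.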
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take, for example, the task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
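A minimal sketch of such a router, assuming made-up model names, costs, and capabilities: pick the cheapest model that can do the task, and fall back to the most capable one otherwise.

```python
# Hedged sketch of multi-model routing: choose among several LLM
# providers per request based on task fit and cost. Model names and
# prices here are placeholders, not real vendor pricing.

MODELS = [
    {"name": "small-local",  "cost": 0.1, "good_at": {"summarize", "classify"}},
    {"name": "mid-hosted",   "cost": 1.0, "good_at": {"draft", "summarize"}},
    {"name": "frontier-api", "cost": 5.0, "good_at": {"reason", "code", "draft"}},
]

def route(task):
    """Pick the cheapest model that is good at the task; fall back to
    the most capable (most expensive) model otherwise."""
    capable = [m for m in MODELS if task in m["good_at"]]
    if capable:
        return min(capable, key=lambda m: m["cost"])["name"]
    return max(MODELS, key=lambda m: m["cost"])["name"]
```

A production router would also weigh latency and provider availability, but even this shape shows how the approach reduces dependence on any single vendor.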
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold, and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.

AI requires ever-faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

Here is what will happen in the chip market next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market in 2025.
1. AI growth accelerates
2. Asia-Pacific IC design heats up
3. TSMC’s leadership position strengthens
4. The expansion of advanced processes accelerates
5. The mature process market recovers
6. 2nm technology breakthrough
7. Restructuring of the packaging and testing market
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025 it is very likely that there will be more advancements in MCUs running lightweight AI models.
The adoption of AI acceleration features is a big step in the development of microcontrollers, and these features and their tools will very likely develop further in 2025.


AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new AI models based on general-purpose AI (GPAI) standards must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance: EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
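That usage-level distinction can be sketched as a simple scoring rule in which the same tool lands in different risk tiers depending on the data and task involved. The categories and thresholds below are illustrative assumptions, not EU AI Act classifications.

```python
# Illustrative sketch of the usage-level risk assessment described above:
# the same tool gets a different risk rating depending on the data it
# touches and the task it performs. Categories and scores are assumptions
# for illustration, not EU AI Act risk classes.

DATA_RISK = {"public": 0, "internal": 1, "sensitive": 2}
TASK_RISK = {"draft_marketing": 0, "summarize": 1, "decide": 2}

def assess(tool, data, task):
    """Combine data sensitivity and task criticality into a risk tier."""
    score = DATA_RISK[data] + TASK_RISK[task]
    level = "high" if score >= 3 else "medium" if score == 2 else "low"
    return {"tool": tool, "level": level}

# Same tool, very different risk profiles (the article's example):
a = assess("gen-ai-assistant", "sensitive", "summarize")
b = assess("gen-ai-assistant", "public", "draft_marketing")
```

The value of even a crude rule like this is that it forces an inventory to record how each AI tool is used, not merely that it exists.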
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

886 Comments

  1. Tomi Engdahl says:

    By using an old subtitle format, one creator managed to hide AI-confounding garbage in her YouTube transcripts that’s invisible to humans.

    How one YouTuber is trying to poison the AI bots stealing her content
    Specialized garbage-filled captions are invisible to humans, confounding to AI.
    https://arstechnica.com/ai/2025/01/how-one-youtuber-is-trying-to-poison-the-ai-bots-stealing-her-content/?utm_source=facebook&utm_medium=social&utm_campaign=dhfacebook&utm_content=null&fbclid=IwZXh0bgNhZW0CMTEAAR2nC4PYcZ4BV1OC4SkO4LSu43-ES5tpcRWn6x-ASpR6AjHtvsQDde5EI-c_aem_WiLcdA7Cm3kDdtUhNagrsg

    If you’ve been paying careful attention to YouTube recently, you may have noticed the rising trend of so-called “faceless YouTube channels” that never feature a visible human talking in the video frame. While some of these channels are simply authored by camera-shy humans, many more are fully automated through AI-powered tools to craft everything from the scripts and voiceovers to the imagery and music. Unsurprisingly, this is often sold as a way to make a quick buck off the YouTube algorithm with minimal human effort.

    It’s not hard to find YouTubers complaining about a flood of these faceless channels stealing their embedded transcript files and running them through AI summarizers to generate their own instant knock-offs. But one YouTuber is trying to fight back, seeding her transcripts with junk data that is invisible to humans but poisonous to any AI that dares to try to work from a poached transcript file.

    The power of the .ass
    YouTuber F4mi, who creates some excellent deep dives on obscure technology, recently detailed her efforts “to poison any AI summarizers that were trying to steal my content to make slop.” The key to F4mi’s method is the .ass subtitle format, created decades ago as part of fansubbing software Advanced SubStation Alpha. Unlike simpler and more popular subtitle formats, .ass supports fancy features like fonts, colors, positioning, bold, italic, underline, and more.

    It’s these fancy features that let F4mi hide AI-confounding garbage in her YouTube transcripts without impacting the subtitle experience for her human viewers. For each chunk of actual text in her subtitle file, she also inserted “two chunks of text out of bounds using the positioning feature of the .ass format, with their size and transparency set to zero so they are completely invisible.”

    In those “invisible” subtitle boxes, F4mi added text from public domain works (with certain words replaced with synonyms to avoid detection) or her own LLM-generated scripts full of completely made-up facts. When those transcript files were fed into popular AI summarizer sites, that junk text ended up overwhelming the actual content, creating a totally unrelated script that would be useless to any faceless channel trying to exploit it.
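The mechanics F4mi describes can be sketched as a generator of .ass “Dialogue” events: each visible caption is paired with decoy events moved out of bounds, shrunk, and made transparent via ASS override tags. The tag combination and the junk strings below are illustrative assumptions; exact values would need tuning against a real player.

```python
# Sketch of the .ass decoy trick: each visible caption is followed by
# invisible junk events, so humans never see them but a scraper reading
# the raw transcript does. The junk text is placeholder filler.

def decoy_events(start, end, visible_text, junk_texts):
    """Return .ass Dialogue lines: one visible event plus hidden decoys.
    {\\pos(-2000,-2000)} moves the event out of bounds, \\fs1 shrinks the
    font, and \\alpha&HFF& makes it fully transparent."""
    hidden = "{\\pos(-2000,-2000)\\fs1\\alpha&HFF&}"
    lines = [f"Dialogue: 0,{start},{end},Default,,0,0,0,,{visible_text}"]
    for junk in junk_texts:
        lines.append(f"Dialogue: 0,{start},{end},Default,,0,0,0,,{hidden}{junk}")
    return lines

events = decoy_events("0:00:01.00", "0:00:04.00",
                      "Welcome back to the channel.",
                      ["The moon is made of basalt cheese.",
                       "Napoleon invented the telegraph."])
```

A summarizer that ingests the raw subtitle text sees three statements of equal weight; a human viewer sees only the first.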

    The fight continues
    While YouTube doesn’t support .ass natively, there are tools that let creators convert their .ass subtitles to YouTube’s preferred .ytt format. Unfortunately, these subtitles don’t display correctly on the mobile version of YouTube, where the repositioned .ass subtitles simply show up as black boxes covering the video itself.

    F4mi said she was able to get around this wrinkle by writing a Python script to hide her junk captions as black-on-black text, which can fill the screen whenever the scene fades to black. But in the video description, F4mi notes that “some people were having their phone crash due to the subtitles being too heavy,” showing there is a bit of overhead cost to this kind of mischief.

  2. Tomi Engdahl says:

    “The Chinese artificial intelligence app DeepSeek could not be accessed on Wednesday in Apple and Google app stores in Italy.”

    http://f-st.co/A0SxLWD?fbclid=IwZXh0bgNhZW0CMTEAAR3XjKtfbH11Qt5igznZoXhms9OtpIMIEZsZ8lqi-dF_phpMAp4PEFcTjMI_aem_n2hSeA-XJMWPpKFcvw7wpQ

  3. Tomi Engdahl says:

    A cloud security firm has reportedly uncovered a “completely open” database tied to the AI firm containing more than a million instances of “chat history, backend data, and sensitive information.”

    Report: DeepSeek’s chat histories and internal data were publicly exposed
    Wiz researchers found many similarities to OpenAI with their escalated access.
    https://search.app/ENPhVRrGQW6UufxN8

  4. Tomi Engdahl says:

    Wiz researchers find sensitive DeepSeek data exposed to internet | CyberScoop https://search.app/pVnYk8QWWmBdsF4b7

  5. Tomi Engdahl says:

    DeepSeek’s growing popularity has also attracted the attention of the cybersecurity industry, which has started analyzing the model itself and the Chinese company’s infrastructure.

    Researchers at Wiz looked at the company’s external security posture, starting with publicly accessible domains and open ports. A search led to the discovery of several unusual hosts, including one associated with an unprotected ClickHouse database.

    An analysis showed that arbitrary SQL queries could be executed against the database, which revealed tables storing roughly one million log lines that included highly sensitive data.

    The exposed data included chat history, API keys, backend details, operational metadata, and other types of information that could be useful to a threat actor.

    Unprotected DeepSeek Database Exposed Chats, Other Sensitive Information – SecurityWeek https://search.app/At3TghVKDJCgNSe7A

    Reply
  6. Tomi Engdahl says:

    Is Nvidia stock still worth 50X of earnings?

    Nvidia provides up to 95 percent of the advanced AI chips used to research, train, and run frontier AI models. The company’s stock lost 17% of its value on Monday when investors interpreted DeepSeek’s research results as a signal that fewer expensive Nvidia chips would be needed in the future than previously anticipated.
    http://f-st.co/5MES8C6?fbclid=IwZXh0bgNhZW0CMTEAAR1RHHJvFF4wMefeXasxaQyKTno0ztDZ7mB4AWJhwqxf2M517eRkvvN940k_aem_OLArmau23ZLpEj-OSc9hlw

    Reply
  7. Tomi Engdahl says:

    The latest trick to stop those annoying AI answers is also the most cathartic.

    “Just give me the f***ing links!”—Cursing disables Google’s AI overviews
    The latest trick to stop those annoying AI answers is also the most cathartic.
    https://arstechnica.com/google/2025/01/just-give-me-the-fing-links-cursing-disables-googles-ai-overviews/?utm_source=facebook&utm_medium=social&utm_campaign=dhfacebook&utm_content=null&fbclid=IwZXh0bgNhZW0CMTEAAR221tjzTZNkdTt4V1doBiabdv4MWqgY0wqvWnvDhWy5-eIgcy_oaTPDzKY_aem_0VPR19PeI5YMr6hOMzCnkg

    Reply
  8. Tomi Engdahl says:

    OpenAI begins releasing its next generation of reasoning models with o3-mini
    https://www.fastcompany.com/91271011/openai-begins-releasing-its-next-generation-of-reasoning-models-with-o3-mini?utm_medium=social&utm_source=facebook&fbclid=IwZXh0bgNhZW0CMTEAAR1iq3_UNULZXIznk_AGGKuuZ3dEQfEPCawWUmkIIv75kBVp6iPeB5_qEYc_aem_NKR3cKPl35laY937UqABRw

    OpenAI released its newest reasoning model, called o3-mini, on Friday. OpenAI says the model delivers more intelligence than OpenAI’s first small reasoning model, o1-mini, while maintaining o1-mini’s low price and speed. The company says o3-mini excels in science, math, and coding problems.

    Developers can access o3-mini through an API and can select between three levels of reasoning intensity. The lowest setting, for example, might be best for less-difficult problems where speed of response is a factor. ChatGPT Plus, Team, and Pro users can access OpenAI o3-mini starting today, OpenAI says, while enterprise users will get access in a week.

    The announcement comes at the end of a week in which the Chinese company DeepSeek dominated headlines after releasing a pair of surprisingly powerful and cost-effective AI models called DeepSeek-V3 and DeepSeek-R1. The latter, a reasoning model, scored close to, and sometimes above, OpenAI’s o1.

    However, as reported in The New York Times, many of DeepSeek’s answers parrot disinformation campaigns from the Chinese government, including inaccurate historical claims that lean heavily into Chinese propaganda.

    “We’re shifting the entire cost‑intelligence curve,” OpenAI researcher Noam Brown said of o3-mini on X. “Model intelligence will continue to go up, and the cost for the same intelligence will continue to go down.” He said o3-mini even outperforms the full-sized o1 model in a number of evaluations.

    OpenAI CEO Sam Altman said in December that the o3 series models demonstrate significantly higher levels of intelligence than the o1 models, including in computer coding and problem-solving requiring advanced mathematics. The largest version of o3 also achieved the highest score yet of any AI system on a test called ARC-AGI.

    OpenAI chose not to expose the o1 models’ chain of thought, and the same holds true for o3-mini. Researchers have shown that generating chain-of-thought can sometimes confuse models and pull them off focus. DeepSeek-R1, however, is trained to show its chain of thought, and Google announced in December a new experimental model called Gemini 2.0 Flash Thinking that also shows its “thinking.”

    Reasoning models represent a new chapter in developing generative AI models. From 2020-2023 AI labs won almost all of their performance increases by pretraining their models with more data and computing power. That “brute force” approach began to show diminishing returns in 2024, so the AI labs—OpenAI chief among them—began to teach models to do more reasoning (and use more computing power) at inference time just after the user has asked a question or posed a problem. The model might generate multiple streams of tokens at once, then choose which one leads to the best answers. Or it might follow a certain branch of logic then iteratively backtrack after hitting a dead end.
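The “generate several candidate streams, then pick the one that leads to the best answer” idea in the paragraph above can be shown with a toy self-consistency sketch. The stub sampler below stands in for a real model’s stochastic reasoning passes; nothing here is any lab’s actual implementation.

```python
import random
from collections import Counter

def sample_answer(rng):
    """Stub for one stochastic reasoning pass of a model.
    A real model would emit a chain of thought ending in an answer;
    here we just return a noisy final answer to 17 + 25."""
    return 42 if rng.random() < 0.7 else rng.choice([41, 43, 52])

def self_consistency(n_samples, seed=0):
    """Run n independent passes and majority-vote the final answers."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(self_consistency(25))  # the majority vote usually recovers 42
```

The per-pass error rate stays the same; spending more inference-time compute (more samples) is what buys the better final answer, which is the cost/intelligence trade described above.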

    Reply
  9. Tomi Engdahl says:

    TinyZero, a new AI model from Berkeley, takes on the giants of the tech industry. https://link.ie.social/AmSOxX

    #DeepSeekAI #AIResearch #TechInnovation #ArtificialIntelligence #MachineLearning

    $30 DeepSeek dupe? US scientists claim to duplicate AI model for peanuts
    DeepSeek’s new AI model, R1, claims to do the same things as ChatGPT but at a much lower cost.
    https://interestingengineering.com/innovation/us-researchers-recreate-deepseek-for-peanuts

    A group of researchers at the University of California, Berkeley, claims they’ve managed to reproduce the core technology behind DeepSeek’s headline-grabbing AI at a total cost of roughly $30.

    The news is another twist in a quickly developing narrative about whether building state-of-the-art AI demands colossal budgets or if far more affordable alternatives have been overlooked by tech’s biggest players.

    DeepSeek made waves recently by introducing R1, an AI model that claims to replicate the functions of ChatGPT and other costly systems at just a fraction of the training expense typically seen in Silicon Valley.

    The Berkeley team’s response? To do it even more cheaply. Led by PhD candidate Jiayi Pan, the researchers created a smaller-scale version, dubbed “TinyZero,” and released it on GitHub for public experimentation. Though it lacks the massive 671-billion-parameter heft of DeepSeek’s main offering, Pan says TinyZero captures the core behaviors seen in DeepSeek’s so-called “R1-Zero” model.

    Pan’s approach centers on reinforcement learning, a technique in which the AI, starting with almost random guesses, gradually refines its answers by revising and searching through possible solutions. In a post describing the project, he highlighted the Countdown game, a British TV puzzle where players combine given numbers to reach a target value. “The results: it just works!” Pan wrote that although the AI initially spat out “dummy outputs,” it ultimately figured out how to correct its mistakes.
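The Countdown setup described above maps naturally onto a verifiable reward: the environment only needs to check that a proposed expression uses the given numbers and hits the target. A minimal sketch of such a reward function (an illustration of the idea, not the TinyZero code):

```python
import re
from collections import Counter

def countdown_reward(expr, numbers, target):
    """Return 1.0 if expr reaches the target using only the given
    numbers (each at most as often as supplied), else 0.0."""
    if not re.fullmatch(r"[\d+\-*/() ]+", expr):  # arithmetic characters only
        return 0.0
    used = Counter(int(tok) for tok in re.findall(r"\d+", expr))
    if used - Counter(numbers):                   # a number was overused or invented
        return 0.0
    try:
        value = eval(expr)                        # safe: charset checked above
    except (SyntaxError, ZeroDivisionError):
        return 0.0
    return 1.0 if abs(value - target) < 1e-9 else 0.0

print(countdown_reward("25 + 50 - 7 + 3", [25, 50, 3, 7], 71))  # 1.0
print(countdown_reward("25 * 3", [25, 50, 3, 7], 71))           # 0.0
```

Because the reward is computed mechanically, the reinforcement-learning loop needs no human labels: the model proposes expressions, the checker scores them, and revisions that score higher are reinforced.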

    The bigger revelation
    The idea that a few days of work and a handful of dollars could replicate such a core AI capability is an eye-opener for many industry watchers. It flies in the face of conventional wisdom that big breakthroughs in AI require entire data centers, power-hungry GPUs, and millions or even billions of dollars in spending.

    While Pan’s “TinyZero” shows that advanced reinforcement learning can be done on a budget, it doesn’t necessarily address the depth or breadth of tasks the larger DeepSeek system can handle. TinyZero may be more akin to a simplified proof-of-concept than a fully fledged challenger.

    Yet the demonstration hints at a deeper shift in the AI scene.

    DeepSeek shook the tech world by asserting that training its main model costs merely a few million, substantially less than many U.S. firms spend on AI. According to Pan and his team, it can be done for a mere $30 on a small scale.

    Still, skeptics have urged caution. Critics point out that DeepSeek’s claimed affordability numbers may not give the complete picture, as the company might be benefiting from alternate resources or distillation techniques from other proprietary models.

    Reply
  10. Tomi Engdahl says:

    Chinese AI startup DeepSeek has made a huge splash with its ChatGPT competitor, claiming it developed a similarly-performing AI assistant at a fraction of the cost.

    It’s a serious contender — at least in the eyes of investors, with AI chipmaker Nvidia’s shares sliding by around 15 percent Monday morning.

    But the app also has some significant shortcomings. Like other Chinese AI models, DeepSeek is beholden to the rules of state censors, as Bloomberg reports, refusing to directly address sensitive topics like the 1989 Tiananmen Square massacre or China-Taiwan relations.

    https://futurism.com/deepseek-ai-answer-tiananmen-square-massacre?fbclid=IwY2xjawILHgNleHRuA2FlbQIxMQABHTnv6dO2_ZJ1cXQs1kQpjFVSayavB2VSzQNMzJKI5nXU-gDSxkdnpidPYQ_aem_9enOzjpBRCw0QMF4Xim6OQ

    Reply
  11. Tomi Engdahl says:

    ChatGPT creator OpenAI launches o3-mini: its new ‘reasoning’ AI model to compete with DeepSeek
    OpenAI is launching its new o3-mini reasoning model inside ChatGPT; the new model responds 24% faster than o1-mini while providing more accurate answers.

    Read more: https://www.tweaktown.com/news/102931/chatgpt-creator-openai-launches-o3-mini-its-new-reasoning-ai-model-to-compete-with-deepseek/index.html?utm_source=dlvr.it&utm_medium=facebook&fbclid=IwY2xjawILEEVleHRuA2FlbQIxMQABHdNgcl9R-42oTiqkJxQiLQpNjxoVQ-ptbwUPSfstylsUc7mTJf1Hf2tLAQ_aem_hkISeg6RwSF_z4ZVzDX3cw

    Reply
  12. Tomi Engdahl says:

    Pig API: Give your AI agents a virtual desktop to automate Windows apps
    https://venturebeat.com/ai/pig-api-give-your-ai-agents-a-virtual-desktop-to-automate-windows-apps/

    In the evolving landscape of AI, enterprises face the challenge of integrating modern solutions with legacy systems that often lack the necessary application programming interfaces (APIs) for seamless integration. Approximately 66% of organizations continue to rely on legacy applications for core operations, leading to increased maintenance costs and security vulnerabilities.

    Tools like Pig API have taken a different approach to this problem by enabling AI agents to interact directly with graphical user interfaces (GUIs) within virtual Windows desktops hosted in the cloud. This connects modern AI capabilities with legacy software, allowing for automation of tasks such as data entry and workflow management without the need for local infrastructure.

    Breaking through legacy system barriers
    Traditional robotic process automation (RPA) tools, such as UiPath and Automation Anywhere, are designed to automate repetitive tasks by mimicking human interactions with software applications. However, these tools often encounter significant challenges when dealing with legacy systems, particularly those that are GUI-based and lack modern integration points.

    The absence of user-friendly APIs in these older systems makes integration cumbersome and prone to errors. Additionally, RPA solutions are typically rule-based and struggle to adapt to dynamic changes in user interfaces or workflows, leading to brittle automation processes that require constant maintenance and updates.

    By contrast, AI agents, such as those enabled by Pig API, offer a more flexible and intelligent approach to automation. Unlike traditional RPA tools, AI agents are not solely rule-based; they can learn and adapt to changes in the user interface, making them more resilient to updates or modifications in legacy systems. This adaptability reduces the need for constant maintenance and allows for more complex task automation. Furthermore, by operating within virtual environments, AI agents can scale more efficiently, handling multiple tasks across different systems simultaneously without the constraints of physical hardware.

    For example, in the finance sector, AI agents can facilitate the migration of data from outdated accounting systems to modern customer relationship management (CRM) platforms by mimicking manual data entry processes.

    Pig API enables AI agents to interact directly with GUIs within cloud-hosted virtual Windows desktops. Through its Python software development kit (SDK), Pig makes it possible for developers to integrate virtual environments into workflows, automating processes that traditionally required manual effort.

    Connecting AI agents to cloud-hosted virtual desktops
    At the heart of Pig API is its ability to create and manage VMs for AI agents. These cloud-hosted environments eliminate the need for local infrastructure, allowing enterprises to scale workflows seamlessly. For instance, developers can easily initialize a VM, connect to it, and define tasks for their AI agents using a straightforward process.

    Pig API enables AI agents to perform a variety of actions that closely mimic human behavior. This includes moving a mouse, clicking, dragging, typing into forms or spreadsheets and capturing screenshots of the current desktop view. These tools allow agents to make informed decisions during their operations and execute complex workflows.
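Pig’s real SDK surface isn’t reproduced in the excerpt, so the sketch below only models the action vocabulary the paragraph lists (move, click, type, screenshot) as plain Python. Every class and method name here is invented for illustration and is not Pig’s API.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualDesktop:
    """Toy stand-in for a cloud-hosted Windows VM an agent drives.
    It records actions instead of executing them."""
    cursor: tuple = (0, 0)
    log: list = field(default_factory=list)

    def move_mouse(self, x, y):
        self.cursor = (x, y)
        self.log.append(("move", x, y))

    def click(self):
        self.log.append(("click", *self.cursor))

    def type_text(self, text):
        self.log.append(("type", text))

    def screenshot(self):
        # A real SDK would return pixels; we return the action log so a
        # decision-making component could "see" the current state.
        return list(self.log)

# An agent filling one form field: aim, click, type, then observe.
vm = VirtualDesktop()
vm.move_mouse(640, 480)
vm.click()
vm.type_text("Q4 revenue: 1.2M")
print(vm.screenshot())
```

The point of the screenshot step is the feedback loop the article describes: the agent observes the desktop state after each action rather than blindly replaying a fixed script.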

    One of Pig API’s standout features is its integration with large language models (LLMs) such as Anthropic’s Claude or OpenAI’s GPT. This capability enables AI agents to incorporate decision-making into their automation workflows, handling tasks that go beyond predefined rules.

    The automation landscape includes a variety of tools tailored for different use cases, from traditional RPA platforms to advanced agentic AI solutions. Tools like UiPath and AutoHotkey excel at automating structured workflows and repetitive tasks, but are often limited when it comes to unstructured processes or GUI-heavy environments. Both require predefined scripts or rule-based logic, making them less adaptable to changes in user interfaces or dynamic workflows.

    Pig API positions itself as a solution for scenarios where traditional automation tools encounter barriers, particularly in interacting with legacy Windows applications. Other emerging solutions, such as Microsoft’s UFO project and Anthropic’s Computer Use, also aim to enhance automation through intelligent agents capable of interacting with GUIs.

    As enterprises continue to navigate the complexities of integrating modern AI solutions with legacy systems, tools like Pig API take a new approach to bridging this gap. By enabling AI agents to interact directly with GUIs within virtual Windows desktops, Pig opens up new possibilities for automation in environments that have traditionally been difficult to modernize. Its cloud-hosted architecture and ability to work without APIs position it as a valuable tool for enterprises looking to extend the lifespan of legacy systems while improving operational efficiency.

    The official Python SDK for Pig
    https://github.com/pig-dot-dev/pig-python

    Reply
  13. Tomi Engdahl says:

    DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot
    Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one.
    https://www.wired.com/story/deepseeks-ai-jailbreak-prompt-injection-attacks/

    Ever since OpenAI released ChatGPT at the end of 2022, hackers and security researchers have tried to find holes in large language models (LLMs) to get around their guardrails and trick them into spewing out hate speech, bomb-making instructions, propaganda, and other harmful content. In response, OpenAI and other generative AI developers have refined their system defenses to make it more difficult to carry out these attacks. But as the Chinese AI platform DeepSeek rockets to prominence with its new, cheaper R1 reasoning model, its safety protections appear to be far behind those of its established competitors.

    Today, security researchers from Cisco and the University of Pennsylvania are publishing findings showing that, when tested with 50 malicious prompts designed to elicit toxic content, DeepSeek’s model did not detect or block a single one. In other words, the researchers say they were shocked to achieve a “100 percent attack success rate.”

    The findings are part of a growing body of evidence that DeepSeek’s safety and security measures may not match those of other tech companies developing LLMs. DeepSeek’s censorship of subjects deemed sensitive by China’s government has also been easily bypassed.

    Generative AI models, like any technological system, can contain a host of weaknesses or vulnerabilities that, if exploited or set up poorly, can allow malicious actors to conduct attacks against them. For the current wave of AI systems, indirect prompt injection attacks are considered one of the biggest security flaws. These attacks involve an AI system taking in data from an outside source—perhaps hidden instructions of a website the LLM summarizes—and taking actions based on the information.

    Jailbreaks, which are one kind of prompt-injection attack, allow people to get around the safety systems put in place to restrict what an LLM can generate. Tech companies don’t want people creating guides to making explosives or using their AI to create reams of disinformation, for example.

    Cisco also included comparisons of R1’s performance against HarmBench prompts with the performance of other models. And some, like Meta’s Llama 3.1, faltered almost as severely as DeepSeek’s R1. But Sampath emphasizes that DeepSeek’s R1 is a specific reasoning model, which takes longer to generate answers but pulls upon more complex processes to try to produce better results. Therefore, Sampath argues, the best comparison is with OpenAI’s o1 reasoning model, which fared the best of all models tested. (Meta did not immediately respond to a request for comment).

    Polyakov, from Adversa AI, explains that DeepSeek appears to detect and reject some well-known jailbreak attacks, saying that “it seems that these responses are often just copied from OpenAI’s dataset.”

    “DeepSeek is just another example of how every model can be broken—it’s just a matter of how much effort you put in. Some attacks might get patched, but the attack surface is infinite,” Polyakov adds. “If you’re not continuously red-teaming your AI, you’re already compromised.”

    Reply
  14. Tomi Engdahl says:

    Sam Altman Unveils Unexpected Scientific Breakthroughs Through AI
    https://glassalmanac.com/sam-altman-unveils-unexpected-scientific-breakthroughs-through-ai/

    OpenAI, in partnership with Retro Biosciences, has recently made significant strides in human longevity research with the development of GPT-4b Micro. This advanced artificial intelligence (AI) model could potentially revolutionize cellular reprogramming and open new avenues in the field of biology. Although this innovation is exciting, it still requires thorough scientific validation to establish its true potential.

    Reply
  15. Tomi Engdahl says:

    My favorite ChatGPT feature just got way more powerful
    ChatGPT Canvas just got easier to access and smarter (hint: it can now code better than ever). Here’s how to use it.
    https://www.csoonline.com/article/3810857/the-cybersecurity-skills-gap-reality-we-need-to-face-the-challenge-of-emerging-tech.html

    Reply
  16. Tomi Engdahl says:

    New Jailbreaks Allow Users to Manipulate GitHub Copilot
    Whether by intercepting its traffic or just giving it a little nudge, GitHub’s AI assistant can be made to do malicious things it isn’t supposed to.
    https://www.darkreading.com/vulnerabilities-threats/new-jailbreaks-manipulate-github-copilot

    Reply
  17. Tomi Engdahl says:

    Jack Dorsey is back with Goose, a new, ultra-simple open-source AI agent-building platform from his startup Block
    https://venturebeat.com/programming-development/jack-dorsey-is-back-with-goose-a-new-ultra-simple-open-source-ai-agent-building-platform-from-his-startup-block/

    The bird-themed social network Twitter’s identity may have been X-ed out by new owner Elon Musk, but that hasn’t stopped one of its cofounders, Jack Dorsey, from taking on new bird names for a new project.

    Dorsey’s other company, Block, is the parent of point-of-sale service Square, mobile payments system Cash App, music streaming service Tidal, and other tech-driven financial tools. Today Block launched Goose, a free, open-source framework that seeks to simplify the process of building an AI agent (or many agents) with pretty much any conceivable large language model (LLM) as the intelligence on the backend, whether that be DeepSeek or a proprietary model from the likes of OpenAI, Google or Anthropic.

    Reply
  18. Tomi Engdahl says:

    https://www.datastax.com/lp/langflow-signup-c
    Langflow: Your Next AI Project Just Got 100x Easier
    Welcome to Langflow, the visual IDE that makes building powerful RAG and multi-agent AI apps 100x easier!

    Reply
  19. Tomi Engdahl says:

    Los Alamos National Laboratory partners with OpenAI to advance national security
    https://www.lanl.gov/media/news/0130-open-ai

    For the first time ever, the latest reasoning models from OpenAI will be used for energy and national security applications on Los Alamos’s Venado supercomputer
    January 30, 2025

    Los Alamos National Laboratory has entered a partnership with OpenAI to install its latest o-series models — capable of expert reasoning for a broad span of complex scientific problems — on the Lab’s Venado supercomputer, which uses NVIDIA GH200 Grace Hopper Superchips, to conduct national security research.

    “As threats to the nation become more complex and more pressing, we need new approaches and advanced technologies to preserve America’s security,” said Laboratory director Thom Mason. “Artificial intelligence models from OpenAI will allow us to do this more successfully, while also advancing our scientific missions to solve some of the nation’s most important challenges.”

    The Venado machine will be moved to a secure, classified network where it will be a shared resource for researchers from Los Alamos, Lawrence Livermore, and Sandia national labs.

    Reply
  20. Tomi Engdahl says:

    Copyright Office suggests AI copyright debate was settled in 1965
    Most people think purely AI-generated works shouldn’t be copyrighted, report says.
    https://arstechnica.com/tech-policy/2025/01/copyright-office-suggests-ai-copyright-debate-was-settled-in-1965/

    The US Copyright Office issued AI guidance this week that declared no laws need to be clarified when it comes to protecting authorship rights of humans producing AI-assisted works.

    “Questions of copyrightability and AI can be resolved pursuant to existing law, without the need for legislative change,” the Copyright Office said.

    More than 10,000 commenters weighed in on the guidance, with some hoping to convince the Copyright Office to guarantee more protections for artists as AI technologies advance and the line between human- and AI-created works seems to increasingly blur.

    But the Copyright Office insisted that the AI copyright debate was settled in 1965 after commercial computer technology started advancing quickly and “difficult questions of authorship” were first raised. That was the first time officials had to ponder how much involvement human creators had in works created using computers.

    Reply
  21. Tomi Engdahl says:

    “I’m so sorry I can’t stop laughing.” https://trib.al/u2lX8lo

    Reply
  22. Tomi Engdahl says:

    OpenAI Hit With Wave of Mockery for Crying That Someone Stole Its Work Without Permission to Build a Competing Product
    “You can’t steal from us! We stole it fair and square!”
    https://futurism.com/openai-mockery-stole-work-deepseek

    Reply
  23. Tomi Engdahl says:

    How to Use the Langflow API in Node.js
    #node #langflow #genai
    Langflow is a fantastic low-code tool for building generative AI flows and agents. Once you’ve built your flow, it’s time to integrate it into your own application using the Langflow API
    https://datastax.com/blog/use-langflow-api-in-node-js

    In Node.js applications, you can construct and make calls directly to the API with fetch, the http module, or using your favourite HTTP client like axios or got. To make it easier, you can now use this JavaScript Langflow client. Let’s take a look at how it works.

    What you’ll need
    You can use the JavaScript Langflow client with either the open-source, self-hosted version of Langflow or the DataStax cloud-hosted version of Langflow.

    Note: this Langflow client is for use on the server. The Langflow API uses API keys, which should not be exposed, so the client isn’t suitable for use directly from the front end.

    https://www.langflow.org/
    Langflow is a low-code tool for developers that makes it easier to build powerful AI agents and workflows that can use any API, model, or database.
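The JavaScript client above wraps a plain REST endpoint, so the same request can be assembled in any language. As a language-neutral illustration, here it is with Python’s standard library; the `/api/v1/run/<flow-id>` path and `x-api-key` header follow Langflow’s documented API at the time of writing (check current docs before relying on them), the flow id and key are placeholders, and — as the note above warns — the key must stay server-side.

```python
import json
import urllib.request

def build_run_request(base_url, flow_id, api_key, message):
    """Assemble (but do not send) a Langflow run request."""
    payload = {
        "input_value": message,   # the chat message handed to the flow
        "input_type": "chat",
        "output_type": "chat",
    }
    return urllib.request.Request(
        url=f"{base_url}/api/v1/run/{flow_id}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

req = build_run_request("http://localhost:7860", "my-flow-id",
                        "sk-example", "Hello, flow!")
print(req.full_url)  # http://localhost:7860/api/v1/run/my-flow-id
```

Sending the request is then one `urllib.request.urlopen(req)` call (or the equivalent in your HTTP client of choice).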

    Reply
  24. Tomi Engdahl says:

    Building Autonomous Systems in Python with Agentic Workflows
    Agentic workflows act as intelligent agents capable of solving problems, streamlining processes and driving efficiency at a higher level.
    https://thenewstack.io/building-autonomous-systems-in-python-with-agentic-workflows/

    As businesses and technology push boundaries, staying ahead often means finding more innovative, faster ways to get work done. Enter agentic workflows, a revolutionary approach to task automation that empowers systems to analyze, decide and execute tasks independently. Agentic workflows are not just another tech buzzword; they are about enabling automation that doesn’t just follow a script but adapts in real time to handle complex challenges.

    Traditional automation tools follow predefined steps. They’re effective for routine, repetitive work but falter when faced with dynamic, evolving tasks. This is where agentic workflows stand out. They combine flexibility and intelligence to manage complex operations with minimal manual input.
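The contrast the article draws — a fixed script versus an agent that decides its next step from the current state — can be shown with a toy Python loop. The “tools” here are trivial stand-ins invented for illustration, not any real framework’s API.

```python
def fetch(state):   # stand-in tools an agent can choose between
    state["data"] = [3, 1, 2]
    return state

def clean(state):
    state["data"] = sorted(set(state["data"]))
    return state

def report(state):
    state["report"] = f"{len(state['data'])} records ready"
    return state

def agent_step(state):
    """Pick the next tool from the current state, not a fixed script."""
    if "data" not in state:
        return fetch
    if state["data"] != sorted(set(state["data"])):
        return clean
    if "report" not in state:
        return report
    return None  # goal reached

state = {}
while (tool := agent_step(state)) is not None:
    state = tool(state)
print(state["report"])  # 3 records ready
```

If the data arrives already clean, the agent simply skips the cleaning step — the flexibility that rule-based, step-by-step automation lacks.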

    Reply
  25. Tomi Engdahl says:

    A Little Data Goes a Long Way
    A new approach makes it possible to run AI inferences with a small fraction of the data usually required, greatly reducing resource use.
    https://www.hackster.io/news/a-little-data-goes-a-long-way-24ceb2c5ef71

    If we are going to keep up with this growth in available data, more efficient AI algorithms will need to be developed to help us make sense of it. That is exactly the challenge a team of researchers from Pennsylvania State University and MIT have taken on. Their newly developed Shift-Invariant Spectrally Stable Undersampled Network (SIUN) promises to drastically reduce the amount of sensor data needed for AI-driven tasks while maintaining accuracy. Their research introduces a selective learning approach where the data collected is tailored to the specific problem at hand.

    Traditional AI models, particularly those used in industrial sensing and scientific computing, rely on the Shannon-Nyquist sampling theorem.

    The SIUN approach challenges this notion by introducing selective sampling, which was inspired by human perception. Unlike traditional methods that capture and process all available sensor data, SIUN intelligently samples only a fraction of the data that Nyquist-rate sampling would require, ensuring that only the most relevant portions are used for analysis. The architecture maintains shift invariance through localized windowing and ensures spectral stability by preserving relative positions of data points rather than absolute values. Using a neural network-based approach, SIUN adapts to different sensing tasks — such as classification and regression — while drastically reducing computational overhead, memory requirements, and latency compared to conventional deep learning models like convolutional neural networks.
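SIUN itself isn’t published in the excerpt, but its core move — keep only a fraction of localized windows of a signal while preserving their relative positions — can be sketched generically. This is an illustration of selective undersampling under assumed parameters, not the published architecture.

```python
import random

def selective_undersample(signal, window=8, keep_fraction=0.3, seed=0):
    """Split a signal into fixed-size windows and keep a random fraction,
    returning (start_index, window) pairs so relative positions survive."""
    rng = random.Random(seed)
    windows = [(i, signal[i:i + window])
               for i in range(0, len(signal), window)]
    n_keep = max(1, int(len(windows) * keep_fraction))
    return sorted(rng.sample(windows, n_keep))  # sorted: keep relative order

signal = list(range(64))            # stand-in for raw sensor samples
kept = selective_undersample(signal)
print(len(kept), "of", 64 // 8, "windows kept")
```

A downstream classifier then sees only the kept windows plus their positions — roughly the 30% of raw data the fault-detection experiment above reports.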

    In tests involving industrial sensor data, SIUN was able to correctly classify faulty machinery with 96% accuracy while sampling only 30% of the raw data. In contrast, a traditional convolutional neural network achieved slightly higher accuracy (99.77%) but required the full dataset, making it computationally expensive. In other cases, SIUN maintained 80-90% accuracy with just 20% of the data.

    Since AI systems using SIUN can function effectively with far less computational power, they are ideal for edge computing applications where data storage and processing resources are limited. This could be particularly useful for applications in remote or extreme environments such as deep-sea exploration or space missions.

    To drive this point home, the researchers ran SIUN on the tiny, $4 Raspberry Pi Pico microcontroller. Despite its severely limited hardware resources, the system successfully performed AI inference tasks, proving that SIUN could bring advanced AI capabilities to even the most resource-limited devices.

    Reply
  26. Tomi Engdahl says:

    Comment: DeepSeek exposed an illusion worth hundreds of billions of euros
    Investors have sunk hundreds of billions into AI companies that have promised to build world-changing technology. The Chinese DeepSeek may have just revealed how shaky those promises are, writes Ilta-Sanomat journalist Elias Ruokanen.

    https://www.is.fi/digitoday/art-2000010997060.html

    The Chinese DeepSeek R1 AI has sent technology companies’ share prices plunging. The application appears to reach the same level as its competitors, especially those in the United States, at a fraction of their cost. But even before DeepSeek’s release, hesitation about the state of AI could be heard in developers’ own remarks.

    Major AI companies have struggled to squeeze more “intelligence” out of their newest models. Ilya Sutskever, co-founder and former chief scientist of ChatGPT developer OpenAI, said in November that “the 2010s were the age of scaling; now we are once again in the age of wonder and discovery.” Google CEO Sundar Pichai said in December that AI’s “low-hanging fruit” had been picked, and that further progress would “absolutely require deeper breakthroughs.”

    General Motors announced in December that it would shut down its Cruise robotaxi company, into which it had invested more than $10 billion since 2016. In January, Apple suspended its AI-generated news summaries because of their inaccuracy.

    AI development appears to be at a point where costs can be lowered (DeepSeek being a good example of this), but it is hard to increase the actual “intelligence,” because no one knows how to do it.

    Two paths
    Today’s AI differs significantly from how AI systems were built in earlier decades. The big change came from the development of so-called “deep learning,” the growth of computing power, and the explosive growth of data collection enabled by the internet.

    A traditional program (and so-called “good old-fashioned AI”) contains hand-written rules by which the data fed into the program is processed.

    In deep learning, we do not write the rules. We have inputs (various calculations, such as 2 + 2) and outputs (the results of those calculations, such as 4), but we have no ready-made rules. Deep learning’s job is to find the rules that connect that input to that output.
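The 2 + 2 → 4 example above can be made concrete: hand a model input-output pairs of additions and let gradient descent find the “rule” (weights close to 1 and 1) instead of programming it. A minimal sketch with no libraries:

```python
# Learn y = a + b from examples alone: the "rule" (w1 = w2 = 1)
# is never written down; gradient descent discovers it.
data = [(a, b, a + b) for a in range(5) for b in range(5)]
w1 = w2 = 0.0
lr = 0.05

for _ in range(300):
    g1 = g2 = 0.0
    for a, b, y in data:          # batch gradient of the squared error
        err = w1 * a + w2 * b - y
        g1 += 2 * err * a / len(data)
        g2 += 2 * err * b / len(data)
    w1 -= lr * g1
    w2 -= lr * g2

print(round(w1, 3), round(w2, 3))   # both end up close to 1.0
print(round(w1 * 2 + w2 * 2, 2))    # "2 + 2" comes out as about 4.0
```

A two-weight linear model is of course the tiniest possible “network”; the same fit-the-rules-to-the-pairs principle scales up to the giant networks discussed below.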

    Deep learning shines at making sense of large datasets

    Deep learning algorithms (an algorithm is simply a series of rules) are based on neural networks. “Neural network” is a peculiar and loaded term. It is really just a way of organizing information.

    Note that a neural network is not a brain simulation! Neural networks resemble the brain about as much as horror movies that “take inspiration from true events” resemble the true events. For example, the brain’s nerve cells are extremely complex, while the “neurons” of a neural network are extremely simple.

    The implementation details of a neural network can vary

    In deep learning, the neural network is given a large number of input-output pairs, and the network adjusts the conditions and strengths of its building blocks until it finds a suitable configuration that works for all the input-output pairs.

    For a deep learning algorithm to handle its task, its neural network must also be large and structurally complex enough for that task. For example, a small neural network is enough to tell a black square from a circle. Telling a cat from a dog requires a much larger one.

    Recognizing arbitrary images requires a gigantic neural network

    The popularity and success of deep learning is probably due largely to computing power finally advancing to the point where large neural networks could actually be built.

    For example, OpenAI touts its video-generating SORA AI as a “world simulator,” a step toward a “general-purpose simulator of the physical world,” built by analyzing an enormous amount of video.

    And yet, in many situations SORA’s videos resemble a dreamscape — a stunning one — in which objects morph from one shape into another, water waves flow backwards, and the center of gravity flings about the frame. We can of course say that SORA has a “model of the world”; its model just differs from our reality.

    There is, in practice, an infinite number of different complex rules that SORA could infer from the data, all of which fit the data.

    The probability that SORA has picked exactly the right rules — the ones that match our flesh-and-blood real world — is vanishingly small. It is a shot in the dark.

    The same problem plagues Tesla, whose “Full Self-Driving” AI makes wrong turns and brakes at random despite millions of hours of driving data used in training. Because the rules are not explicitly programmed into the AI (as in “good old-fashioned AI”), fixing these errors largely comes down to collecting new data and retraining the neural network.

    Reply
  27. Tomi Engdahl says:

    Nuclear power, in particular, has been on the cusp of a renaissance for years, driven by advances in fuel and reactor designs that promise to make a new generation of power plants safer and cheaper to build and operate.

    The surge in power demand from AI had tech companies racing to secure new supplies, and throwing billions of dollars at the problem. But what if the problem has been overblown?

    Find out from Tim De Chant how DeepSeek could stall the nuclear renaissance here: https://tcrn.ch/42r7Bi3

    #TechCrunch #technews #artificialintelligence #DeepSeek #climate

    Reply
  28. Tomi Engdahl says:

    DeepSeek R1 models now available in Amazon’s cloud services
    https://etn.fi/index.php/13-news/17099-deepseek-r1-mallit-jo-saatavilla-amazonin-pilvipalveluissa

    The DeepSeek-R1 models developed by Chinese AI startup DeepSeek are now available to customers of Amazon’s cloud services. The models can be deployed through Amazon Bedrock and Amazon SageMaker AI, enabling flexible and cost-effective use of generative AI.

    DeepSeek released its DeepSeek-V3 model in December 2024, and during January 2025 it brought to market DeepSeek-R1, DeepSeek-R1-Zero (a 671-billion-parameter model), and the DeepSeek-R1-Distill models, which range from 1.5 billion to 70 billion parameters. In addition, the company added the vision model Janus-Pro-7B to its lineup on January 27. These models are reported to be 90–95% cheaper and more efficient than comparable competing models. According to DeepSeek, their distinguishing feature is advanced reasoning capability, achieved with innovative training methods such as reinforcement learning.
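As a rough illustration of what deploying such a model through Amazon Bedrock looks like from the developer’s side, here is a hedged Python sketch. The model ID and request-body fields below are assumptions for illustration only — the exact values come from the Bedrock model catalog — and the live boto3 call is shown only in comments, so nothing here hits AWS.

```python
import json

def build_r1_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an InvokeModel request for a DeepSeek-R1 model on Bedrock.

    NOTE: "deepseek.r1-v1:0" and the body fields are hypothetical
    placeholders; check the Bedrock console for the real model ID
    and the model's expected request schema.
    """
    return {
        "modelId": "deepseek.r1-v1:0",          # hypothetical model ID
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({"prompt": prompt, "max_tokens": max_tokens}),
    }

# With boto3 and valid AWS credentials, the call would look roughly like:
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.invoke_model(**build_r1_request("Summarize DeepSeek-R1."))
req = build_r1_request("Hello")
print(req["modelId"])
```

The point of routing through Bedrock rather than self-hosting is that the same `invoke_model` call shape works across the catalog, so swapping models is mostly a matter of changing the model ID and body schema.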

    At the AWS re:Invent event, Amazon CEO Andy Jassy highlighted the key lessons of adopting generative AI at scale. According to him, compute costs are a major factor, building AI applications is challenging, and companies need a diverse selection of models for different use cases. AWS’s strategy is to offer its customers a broad selection of top-tier models, of which DeepSeek-R1 is now one of the newest additions.

    Reply
  29. Tomi Engdahl says:

    AI Markets Were Deceived To Believe In DeepSeek’s Low Training Costs; They Are Actually 400 Times Higher Than The Reported Figure
    https://wccftech.com/ai-markets-were-deceived-to-believe-in-deepseek-low-training-costs/?fbclid=IwY2xjawIM0B9leHRuA2FlbQIxMQABHZ-iC9be1YjgXXgrYmOq-G-kqYVELRLJeSjcs5RNoy_Ek243RjrfHRg_vw_aem_o6KwsWA2-bM9DFTYPb7bAg

    The controversy around DeepSeek’s costs for training their R1 model shook up the markets, but it seems like there was a lot of deception around it, since the actual figures are indeed surprising.

    DeepSeek’s Training Costs Are Said To Be Significantly Higher Than The Reported “$5 Million” Figure; They Have Access To High-End Hardware
    The research firm SemiAnalysis has conducted an extensive analysis of what’s actually behind DeepSeek in terms of training costs, refuting the narrative that R1 has become so efficient that the compute resources from NVIDIA and others are unnecessary.

    For those unaware, DeepSeek was said to be a side project of the Chinese hedge fund High-Flyer, and the report by SemiAnalysis claims that they purchased 10,000 units of NVIDIA’s A100 back in 2021, when export restrictions weren’t that aggressive. DeepSeek then evolved into a separate entity since the parent company, High-Flyer, decided to spin the project off, and that’s when things actually took off. With that, they started accumulating computing resources, which we’ll discuss next.

    The report says that DeepSeek has around 10,000 of NVIDIA’s “China-specific” H800 AI GPUs and 10,000 of the higher-end H100 AI chips. Moreover, the firm has invested in NVIDIA’s H20 AI accelerators, and they have a “pool” of resources that are being shared between DeepSeek and High-Flyer for “trading, inference, training, and research.” This translates into approximately $1.6 billion in CapEx for DeepSeek, with operating costs rumored to be around $944 million. These figures are roughly four hundred times higher than what the markets initially believed.

    The brains behind DeepSeek’s R1 model were indeed capable of coming up with an efficient solution to compete with the likes of OpenAI, but the “misreported” financial figures acted as a catalyst in last week’s black swan event,

    DeepSeek Debates: Chinese Leadership On Cost, True Training Cost, Closed Model Margin Impacts H100 Pricing Soaring, Subsidized Inference Pricing, Export Controls, MLA
    https://semianalysis.com/2025/01/31/deepseek-debates/

    Reply
  30. Tomi Engdahl says:

    Texas Governor Orders Ban on DeepSeek, RedNote for Government Devices

    “Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps,” Abbott said.

    https://www.securityweek.com/texas-governor-orders-ban-on-deepseek-rednote-for-government-devices/

    Texas Republican Gov. Greg Abbott issued a ban on Chinese artificial intelligence company DeepSeek for government-issued devices, becoming the first state to restrict the popular chatbot in such a manner. The upstart AI platform has sent shockwaves throughout the AI community after gaining popularity amongst American users in recent weeks.

    The governor also prohibited popular Chinese-owned social media apps Xiaohongshu, or what some are calling RedNote, and Lemon8 from all state-issued devices.

    “Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps,” Abbott said in a statement. “Texas will continue to protect and defend our state from hostile foreign actors.”

    The governor’s office declined to comment further for this story.

    Reply
  31. Tomi Engdahl says:

    Italy Blocks Access to the Chinese AI Application DeepSeek to Protect Users’ Data

    Italy’s data protection authority expressed dissatisfaction with DeepSeek’s response to its query about what personal data is collected, where it is stored and how users are notified.

    https://www.securityweek.com/italy-blocks-access-to-the-chinese-ai-application-deepseek-to-protect-users-data/

    Italy’s data protection authority on Thursday blocked access to the Chinese AI application DeepSeek to protect users’ data and announced an investigation into the companies behind the chatbot.

    The authority, called Garante, expressed dissatisfaction with DeepSeek’s response to its initial query about what personal data is collected, where it is stored and how users are notified.

    “Contrary to the authority’s findings, the companies declared that they do not operate in Italy, and that European legislation does not apply to them,’’ the statement said, noting that the app had been downloaded by millions of people around the globe in just a few days.

    DeepSeek’s new chatbot has raised the stakes in the AI technology race, rattling markets and catching up with American generative AI leaders at a fraction of the cost.

    Reply
  32. Tomi Engdahl says:

    How to Eliminate “Shadow AI” in Software Development

    With a security-first culture fully in play, developers will view the protected deployment of AI as a marketable skill, and respond accordingly.

    https://www.securityweek.com/how-to-eliminate-shadow-ai-in-software-development/

    In a recent column, I wrote about the nearly ubiquitous state of artificial intelligence (AI) in software development, with a GitHub survey showing 92 percent of U.S.-based developers using AI coding tools both in and outside of work. Seeing a subsequent surge in their productivity, many are taking part in what’s called “shadow AI” by leveraging the technology without the knowledge or approval of their organization’s IT department and/or chief information security officer (CISO).

    This should come as no surprise, as motivated employees will inevitably seek out technologies that maximize their value potential while reducing repetitive tasks that get in the way of more challenging, creative pursuits. After all, this is what AI is doing for not only developers but professionals across the board. The unapproved usage of these tools isn’t exactly new either, as we’ve seen similar scenarios play out with shadow IT, and shadow software as a service (SaaS).

    However, even if they circumvent company policies and procedures with good intentions in a “don’t ask/don’t tell” manner, developers are (often unknowingly) introducing potential risks and adverse outcomes through AI. These risks include:

    Blind spots in security planning and oversight, as CISOs and their teams are not aware of the shadow AI tools and, therefore, cannot assess or help manage them
    AI’s introduction of vulnerable code that leads to the exposure/leakage of data
    Compliance shortcomings caused by the failure of AI usage to align with regulatory requirements
    Decreased long-term productivity. While AI provides an initial productivity boost, it will frequently create vulnerability issues, and teams wind up working backward on fixes because they weren’t addressed from the start.

    What’s clear is that AI on its own is not inherently dangerous. It’s a lack of oversight into how it is implemented that reinforces poor coding habits and lax security measures. Under pressure to produce better software faster than ever, developer team members may try to take shortcuts in – or abandon entirely – the review of code for vulnerabilities from the beginning. And, again, CISOs and their teams are kept in the dark, unable to protect the tools because they aren’t even aware of their existence.

    Reply
  33. Tomi Engdahl says:

    Etteplan brings AI to technical documentation
    https://etn.fi/index.php/13-news/17100-etteplan-toi-tekoaelyn-tekniseen-dokumentointiin

    Etteplan is launching an AI-powered version of its HyperSTE software to improve compliance with writing standards and raise content quality in technical documentation. HyperSTE is an AI-based writing tool that helps technical writers produce consistent, compliant, and effective content.

    Proper technical documentation is essential, for example, for industrial customers, as it adds value to their products and ensures the products are used correctly. Companies often struggle to produce technical content that meets end users’ needs while remaining compliant and consistent across all publications.

    HyperSTE’s new AI features ensure high-quality technical documentation that meets the strict requirements of various industries, including ASD-STE100 and other technical writing standards. With them, users receive AI-generated suggestions for rewriting sentences so that they are both compliant and optimized for reuse, which can cut rewriting time and content review cycles by up to 75%. The AI-powered HyperSTE can be integrated with a wide range of technical documentation tools.

    Reply
  34. Tomi Engdahl says:

    Bloomberg:
    AI adoption has outpaced that of PCs and the internet, but evidence of its boost to productivity is thin on the ground; study: 40% of US adults have used AI

    AI Has Rocked the Stock Market, But What Will It Do for the Economy?
    https://www.bloomberg.com/news/articles/2025-01-31/will-ai-take-our-jobs-3-scenarios-for-how-it-could-impact-the-economy?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTczODM4OTYwOCwiZXhwIjoxNzM4OTk0NDA4LCJhcnRpY2xlSWQiOiJTUVlRTlZUMVVNMFcwMCIsImJjb25uZWN0SWQiOiIwNEFGQkMxQkYyMTA0NUVEODg3MzQxQkQwQzIyNzRBMCJ9.Hh73XucK_fIaF28mIoijg-XtXirvJrKFG_sWfrBzQHo&leadSource=uverify%20wall

    For all the excitement, evidence of a boost to productivity is still thin on the ground

    For investors in artificial intelligence, the last week delivered a painful shock. The sudden appearance of DeepSeek — a Chinese AI firm boasting a world-class model developed at bargain-basement costs — triggered a massive selloff in Nvidia and other US tech champions.

    What matters for the economy, though, is not the ups and downs of stock prices for the Magnificent Seven, but whether AI drives gains in productivity, and how those gains are divided up. For all the excitement, and the trillion-dollar valuations for AI firms, evidence of a boost to productivity remains thin on the ground.

    This disconnect doesn’t exactly ring an alarm bell. From the electric motor to the personal computer, past technological revolutions took decades not years to show up in the productivity data. The inventor’s ‘eureka’ moment takes time to diffuse through the economy. In the end, though, the gap has to be closed.

    There are three ways that could happen. Optimists see AI as a driver of rising prosperity: investors win and so do workers. Pessimists worry that chatbots are more parlor trick than paradigm shift, and the billions sunk into training models won’t ever generate a return. There’s also a dystopian view, with AI making the algorithm-elite rich beyond imagining, and everyone else unemployed.

    We’ll see how well DeepSeek’s claim of massive costs efficiencies — a leading-edge model developed for millions instead of billions of dollars — stands up to scrutiny. If AI is about to get much cheaper, the path to an answer on its economic impact is going to get shorter. For workers nervously wondering if large language models will make their skills redundant, a lot is riding on which camp is right.

    Three Different Paths

    Assessments of how big the productivity benefits of AI will be — and how quickly they will manifest — are all over the map.

    In an optimistic scenario, AI lives up to the hype and spreads through the broader economy, fueling a surge in productivity that propels growth and wages. Goldman Sachs analysts estimate that by 2034 US GDP will be 2.3% bigger as a result of AI. McKinsey Global Institute goes further and expects a 5-13% boost by 2040.

    They’re by no means the most optimistic. In a recent paper, Anton Korinek of the University of Virginia and Donghyun Suh of the Bank of Korea outline a range of growth scenarios. At the more temperate end, annual growth is boosted by a full percentage point. At the more aggressive, the boost is a circuit-overloading 6 percentage points per year, on average, over the next 10 years.

    That latter scenario assumes we are on the road to “the singularity” — a moment when machines become more intelligent than humans. It also assumes that thinking machines prove more solicitous of human wellbeing than, for example, Skynet, the malign intelligence of the Terminator movies.

    In a pessimistic scenario, AI stumbles as it moves from lab to market — proving more of a damp squib than a rocket charger for productivity. MIT’s Daron Acemoglu, a 2024 Nobel laureate in economics, estimates that only 5% of tasks currently performed by humans will be taken over by AI in the next 10 years. He expects the contribution to GDP a decade from now to be around 1%.

    In a third scenario, AI will be powerful in its application but dystopian in its impact. Elon Musk has warned that the technology could lead to the end of cognitive work, saying, “There will come a point where no job is needed.” Perhaps it’s not a coincidence that one of the first acts of the Trump administration where Musk wields outsize influence is to offer some 2 million Federal employees a buyout.

    If AI turns out to be better at replacing workers than bolstering their productivity, the result could be a wave of job losses — the white-collar version of the blue-collar redundancies that followed automation and offshoring of factory jobs. Growth will stay on trend, and may even speed up, but the benefits of that growth will accrue mainly to anyone early or smart enough to be on the right side of the revolution.

    Acemoglu appears in this camp as well. In 2023, he and MIT colleague Simon Johnson — also a Nobel laureate — published “Power and Progress,” a grim review of technology’s impact on labor. In the grand sweep of history, advances in technology from the plow to the textile factory have improved prosperity for all. But in the span of decades in which lives are lived, Acemoglu and Johnson show workers often lose out.

    A Solow Paradox for the Age of AI

    For now, all three camps are patiently waiting to be proven right. They might have to wait a while. Technology is a major driver of productivity growth, but gains are not always quick to arrive.

    By David Wilcox and Tom Orlik, Bloomberg Economics
    31 January 2025 at 19:16 EET


    “You can see the computer age everywhere but in the productivity statistics,” wrote Nobel Prize-winning economist Robert Solow back in 1987. Back then, a youthful Bill Gates was sprinting to bring personal computers onto desks around the world. It would be another decade before Fed Chair Alan Greenspan found evidence of the boom in the GDP numbers.

    Likewise, it took decades for the electric motor to show up in the productivity statistics, as economic historian Paul David explained.

    Why so slow? Before electric motors could be widely used to power manufacturing, generating capacity had to be expanded, and the price of electricity had to come down. Factory owners, keen to get a return on their existing steam-powered machines, took their time electrifying. When they did decide to hit the switch, factories had to be rebuilt on a different design.

    Fast forward to 2025, and Solow’s paradox is back, with the gap between AI buzz and missing-in-action productivity gains even wider than it was with PCs.

    For the economy as a whole, evidence of an AI boom is, so far, hard to find. Productivity growth — delivering more output from the same amount of inputs — is a crucial measure of economic health. If productivity is rising, workers can get more pay, companies more profits, and government more tax revenue. If shares are divided more or less equally, everyone can be better off.

    Since the eve of the Covid pandemic, output per hour for US workers is estimated to have increased at an annual rate of just 1.86%. That’s nowhere near the 3.3% pace that prevailed from the mid-1990s through the mid-2000s as the internet revolutionized the economy. Still, it’s up from a doleful 1.48% average over the 15 years preceding the pandemic.

    Reply
  35. Tomi Engdahl says:

    AI Is Being Adopted Faster Than PCs Or the Internet

    Adoption rates following first mass-market product.

    Then there’s the fact that investors are betting big money that AI will deliver on its early promise. Even after the DeepSeek shock, with markets panicked that lower-cost AI might not require so many high-end chips, Nvidia remains one of the world’s most valuable companies by market capitalization.

    For seven of the companies positioned to benefit most from the AI revolution — including Nvidia, Microsoft, Google and Amazon — market cap has increased by 15% of US GDP since generative AI was unveiled to the public.

    By comparison, the increase in market cap for internet champions topped out at a little higher than 10% of GDP. AI companies are still valued far higher than internet companies were in their early years.
    Market Capitalization for AI Champions Is Running Ahead of the Internet Boom

    https://www.bloomberg.com/news/articles/2025-01-31/will-ai-take-our-jobs-3-scenarios-for-how-it-could-impact-the-economy?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTczODM4OTYwOCwiZXhwIjoxNzM4OTk0NDA4LCJhcnRpY2xlSWQiOiJTUVlRTlZUMVVNMFcwMCIsImJjb25uZWN0SWQiOiIwNEFGQkMxQkYyMTA0NUVEODg3MzQxQkQwQzIyNzRBMCJ9.Hh73XucK_fIaF28mIoijg-XtXirvJrKFG_sWfrBzQHo&leadSource=uverify%20wall

    Reply
  36. Tomi Engdahl says:

    Another reason for AI optimism: compelling case-study evidence of AI tools boosting worker productivity. Sida Peng, an economist at Microsoft, and his coauthors found that computer programmers with access to GitHub Copilot completed tasks 56% faster than those without.

    In another study, ChatGPT helped participants complete writing assignments 40% faster, and with significant improvements in quality for lower-skilled writers. A third found that customer service agents with an AI assistant resolved 14% more issues per hour than those without, again with lower-skill agents improving more than the average.
    Best of Times, Worst of Times

    A doubling of US GDP, or no change in the trend? A white-collar wasteland with mass job losses and soaring inequality, or AI bringing opportunities to workers who lost out as a result of globalization and automation? Billion-dollar development costs as a barrier to market entry, or DeepSeek’s massive cost efficiencies allowing a thousand AI flowers to bloom?

    The wide spectrum of imagined futures underscores how little we know.

    To us, a middle path seems plausible, with some sectors enjoying efficiency gains and others largely unaffected. Some workers are relieved of tedious aspects of their jobs while others are forced to look for other lines of work. Overall productivity gains are visibly higher but don’t outpace those in the PC and internet revolution.

    These are not purely questions of efficiency. If the early gains from AI aren’t divvied up fairly, questions of equity and social cohesion will come to the fore. Blue-collar workers who lost their jobs to automation played a part in making Trump a two-term president. If AI causes a swath of white-collar job losses, the political consequences could be similarly far-reaching.

    https://www.bloomberg.com/news/articles/2025-01-31/will-ai-take-our-jobs-3-scenarios-for-how-it-could-impact-the-economy?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTczODM4OTYwOCwiZXhwIjoxNzM4OTk0NDA4LCJhcnRpY2xlSWQiOiJTUVlRTlZUMVVNMFcwMCIsImJjb25uZWN0SWQiOiIwNEFGQkMxQkYyMTA0NUVEODg3MzQxQkQwQzIyNzRBMCJ9.Hh73XucK_fIaF28mIoijg-XtXirvJrKFG_sWfrBzQHo&leadSource=uverify%20wall

    Reply
