AI trends 2025

AI is developing all the time. Here are some picks from several articles what is expected to happen in AI and around it in 2025. Here are picks from various articles, the texts are picks from the article edited and in some cases translated for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality or robotics. PINNs are set to gain more importance in the job market because they will enable autonomous robots to navigate and execute tasks in the real world.
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working for the first time through issues like AI-specific legal and data privacy terms (compared to when companies started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people. AI won’t be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill and are too expensive, and too slow, for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that thinks through problems much like a person would, it claims. The company says it can achieve PhD-level performance in challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could only solve 13% of the problems on the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained over the previous waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take for example that task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-modal routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize it’s potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a wholistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquires and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

Kyberturvallisuuden ja tekoälyn tärkeimmät trendit 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber ​​infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.

Tekoäly edellyttää yhä nopeampia verkkoja
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

Näin sirumarkkinoilla käy ensi vuonna
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence and HPC computing continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market by 2025.
1. AI growth accelerates
2. Asia-Pacific IC Design Heats Up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating.
5. Mature process market recovers
6. 2nm Technology Breakthrough
7. Restructuring the Packaging and Testing Market
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI ​​party in the MCU space started in 2024, and in 2025, it is very likely that there will be more advancements in MCUs using lightweight AI models.
Adoption of AI acceleration features is a big step in the development of microcontrollers. The inclusion of AI features in microcontrollers started in 2024, and it is very likely that in 2025, their features and tools will develop further.

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing

AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter from August 1 any new AI models based on GPAI standards must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance – EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

830 Comments

  1. Tomi Engdahl says:

    How I Would Learn AI as a Software Engineer/Programmer (If I Could Start Over in 2025)
    https://www.youtube.com/watch?v=EO2iYsMaV7g

    TOPICS COVERED:

    Essential Python skills for AI/ML
    Must-know math concepts (linear algebra, statistics)
    Machine learning fundamentals
    Deep learning frameworks (TensorFlow, PyTorch)
    Large Language Models (LLMs) and transformers
    Real-world AI project building
    Career guidance for AI developers

    TIMESTAMPS:
    00:00 Introduction
    00:39 Foundation in Programming & Math
    01:58 Machine Learning Essentials
    03:10 DataCamp
    05:01 Deep Learning & Modern AI Frameworks
    06:26 Project-Based Learning & Portfolio Building
    07:27 Staying Ahead: Continue Learning & Community
    08:32 AI Learning Roadmap for 2025 Final Thoughts

    Reply
  2. Tomi Engdahl says:

    xAI Grok 3 Launch in 9 Minutes
    https://www.youtube.com/watch?v=BDseU-kmDYY

    00:00 Introduction to Grok Three
    00:13 Hardware and Training Setup
    00:46 Performance and Benchmarks
    02:15 Reasoning Capabilities
    02:59 User Interface and Features
    06:15 Deep Search and Big Brain
    07:18 Access and Availability
    08:47 Conclusion and Final Thoughts

    Reply
  3. Tomi Engdahl says:

    First Look at JetBrains Junie Autonomous AI Agent
    https://www.youtube.com/watch?v=Ti-JGNvRDo4

    17 Feb 2025
    Last month Jetbrains released an early access preview of their new AI tool Junie. Junie is an agent. This means that we can give it a goal, it will devise a plan, and then execute tasks autonomously. It can even change its behaviour based on how things are going.

    In this first look at Junie I’m going to tell it the rules of Test Driven Development and see how well it can follow the process. As in the last episode, I will write the tests, and ask Junie to write the implementation code.

    This one is really, really interesting.

    Join Duncan as he takes an in-depth first look at Juni, JetBrains’ new AI tool, demonstrating how it autonomously follows the rules of test-driven development. Watch Duncan initiate tasks, review the tool’s performance, and explore features like file creation, proactive task execution, and interaction via guidelines. Despite some technical hiccups, Duncan showcases the potential of integrating AI with coding workflows, emphasizing strict TDD for effective problem-solving.

    Reply
  4. Tomi Engdahl says:

    Optimize Your AI – Quantization Explained
    https://www.youtube.com/watch?v=K75j8MkwgJ0

    Timestamps:
    [00:00] Introduction & Quick Overview
    [01:04] Why AI Models Need So Much Memory
    [02:00] Understanding Quantization Basics
    [03:20] K-Quants Explained
    [04:20] Performance Comparisons
    [04:40] Context Quantization Game-Changer
    [05:20] Practical Demo & Memory Savings
    [09:00] How to Choose the Right Model
    [09:50] Quick Action Steps & Conclusion

    Reply
  5. Tomi Engdahl says:

    What’s trending in Software-driven Automation (SDA) in 2025?
    https://www.linkedin.com/posts/jaakkoa1_ctrlxos-activity-7280517728942067712-9pG_?utm_source=share&utm_medium=member_android

    4. AI, of course, but how? Naturally AI can assist in efficient software development and testing. Also some algorithm optimisation and condition monitoring with AI and ML has been seen. But other than that, I haven’t yet really identified solutions ready for production in industrial automation operational use.

    Reply
  6. Tomi Engdahl says:

    Why Developers Are Ditching GitHub Copilot
    “I don’t need autocomplete. I need to tell Claude what I want, and it will give me the code.”
    https://analyticsindiamag.com/deep-tech/why-developers-are-ditching-github-copilot/

    Why are developers ditching GitHub Copilot?
    Why Developers Are Ditching GitHub Copilot
    Using GitHub Copilot is one sure-fire way to never actually learn how to do coding. Developers emphasise the importance of maintaining a clear mental model of their code. Copilot is not very useful for anything beyond auto complete.

    Reply
  7. Tomi Engdahl says:

    Pangea Launches AI Guard and Prompt Guard to Combat Gen-AI Security Risks

    Guardrail specialist releases new products to aid the development and use of secure gen-AI apps.

    https://www.securityweek.com/pangea-launches-ai-guard-and-prompt-guard-to-combat-gen-ai-security-risks/

    AI security specialist Pangea has added to its existing suite of corporate gen-AI security products with AI Guard and Prompt Guard. The first prevents sensitive data leakage from gen-AI applications, while the second defends against prompt engineering, preventing jailbreaks.

    According to the current OWASP Top 10 for LLM Applications 2025 (PDF), the number one risk for gen-AI applications comes from ‘prompt injection’, while the number two risk is ‘sensitive information disclosure’ (data leakage). With large organizations each developing close to 1,000 proprietary AI apps, Pangea’s new products are designed to prevent these apps succumbing to their major risks.

    Prompt engineering is a skill. It is the art of phrasing a gen-AI query in a manner that gets the most accurate and complete response. Malicious prompt engineering is a threat. It is the skill of phrasing a prompt in a way to obtain information, or elicit responses, that either should not be disclosed or could be used in a harmful manner.

    Pangea’s new Prompt Guard analyzes human and system prompts to detect and block jailbreak attempts or limit violations. Detection is done through heuristics, classifiers, and other techniques with, in Pangea’s announcement, ‘99% efficacy’.

    AI Guard is designed to prevent sensitive data leakage. It blocks malicious or undesirable content, such as profanity, hate speech, and violence. It examines prompt inputs, responses, and data ingestion from external sources to detect and block malicious content. It can prevent attempts to input false information including malware and malicious URLs, and can prevent the release of PII.

    In total, AI Guard employs more than a dozen detection technologies, and can understand over 50 types of confidential and personally identifiable information. It gathers threat intelligence from partners CrowdStrike, DomainTools, and ReversingLabs.

    “Prompt engineering,” explains Pangea co-founder and CEO Oliver Friedrichs, “is basically social engineering on a large language model to make it do things that it has been told not to do, circumventing the controls of a typical gen-AI application.” Prompt Guard can identify all common and specialized prompt injection techniques; and if and when new techniques emerge, they will be added to the system.

    Reply
  8. Tomi Engdahl says:

    Aaron Souppouris / Engadget:
    Humane says Ai Pin’s online features will stop working on February 28, when all customer data will be deleted, and it will refund some customers — Humane is discontinuing the AI Pin and selling out to HP. — AI hardware startup Humane has given its users just ten (10!) days notice that their Pins will be disconnected.

    https://www.engadget.com/ai/all-of-humanes-ai-pins-will-stop-working-in-10-days-225643798.html

    Brody Ford / Bloomberg:
    HP will acquire assets from Humane for $116M; Humane’s Ai Pin business will be wound down, and Humane’s team, including its founders, will join HP — – Humane’s Ai pin business to be wound down after rocky launch — New unit at HP will focus on implementation of AI in devices

    https://www.bloomberg.com/news/articles/2025-02-18/hp-116-million-deal-for-humane-includes-ip-but-no-ai-pin-device

    Reply
  9. Tomi Engdahl says:

    Emilia David / VentureBeat:
    OpenAI researchers, using the SWE-Lancer benchmark, find that real-world freelance software engineering work remains challenging for frontier language models — Large language models (LLMs) may have changed software development, but enterprises will need to think twice about entirely replacing …

    AI can fix bugs—but can’t find them: OpenAI’s study highlights limits of LLMs in software engineering
    https://venturebeat.com/ai/ai-can-fix-bugs-but-cant-find-them-openais-study-highlights-limits-of-llms-in-software-engineering/

    Large language models (LLMs) may have changed software development, but enterprises will need to think twice about entirely replacing human software engineers with LLMs, despite OpenAI CEO Sam Altman’s claim that models can replace “low-level” engineers.

    In a new paper, OpenAI researchers detail how they developed an LLM benchmark called SWE-Lancer to test how much foundation models can earn from real-life freelance software engineering tasks. The test found that, while the models can solve bugs, they can’t see why the bug exists and continue to make more mistakes.

    SWE-Lancer: Can Frontier LLMs Earn $1 Million
    from Real-World Freelance Software Engineering?
    https://arxiv.org/pdf/2502.12115

    We introduce SWE-Lancer, a benchmark of over
    1,400 freelance software engineering tasks from
    Upwork, valued at $1 million USD total in real-
    world payouts. SWE-Lancer encompasses both
    independent engineering tasks — ranging from
    $50 bug fixes to $32,000 feature implementa-
    tions — and managerial tasks, where models
    choose between technical implementation pro-
    posals.

    Effective tool use distinguishes top performers. We find
    that the strongest models make frequent use of the user tool,
    and are able to efficiently parse its outputs to reproduce,
    localize, and iteratively debug issues. The user tool often
    takes a nontrivial amount of time to run – a period of 90 to
    120 seconds – during which weaker models such as GPT-4o
    are prone to abandoning the tool altogether. The best per-
    forming models reason about the delay (which is disclosed
    in their instructions), set appropriate timeouts, and review
    results when available. An example user tool trajectory is in
    Appendix A10.

    The strongest models perform well across all task types.
    Tables 2 and 3 show pass@1 rates for IC SWE Diamond
    tasks across different task categories. Sonnet 3.5 performs
    best, followed by o1 and then GPT-4o, and pass@1 on
    Manager tasks is often more than double pass rate on IC
    SWE tasks. Sonnet 3.5 outperforms o1 by nearly 15% on
    UI/UX tasks in particular, and nearly 10% on IC SWE tasks
    involving implementing new features or enhancements.

    Agents excel at localizing, but fail to root cause, resulting
    in partial or flawed solutions. Agents pinpoint the source
    of an issue remarkably quickly, using keyword searches
    across the whole repository to quickly locate the relevant
    file and functions – often far faster than a human would.
    However, they often exhibit a limited understanding of how
    the issue spans multiple components or files, and fail to
    address the root cause, leading to solutions that are incorrect
    or insufficiently comprehensive. We rarely find cases where
    the agent aims to reproduce the issue or fails due to not
    finding the right file or location to edit. We provide several
    qualitative summaries of trajectories in Appendix A9.

    7. Impact Statement
    AI models with strong real-world software engineering ca-
    pabilities could enhance productivity, expand access to high-
    quality engineering capabilities, and reduce barriers to tech-
    nological progress. However, they could also shift labor
    demand—especially in the short term for entry-level and
    freelance software engineers—and have broader long-term
    implications for the software industry. Improving AI soft-
    ware engineering is not without risk. Advanced systems
    could carry model autonomy risk in self-improvement and
    potential exfiltration, while automatically generated code
    may contain security flaws, disrupt existing features, or stray
    from industry best practices, a consideration that is impor-
    tant if the world increasingly relies on model-generated code.
    SWE-Lancer provides a concrete framework to start tying
    model capabilities to potential real-world software automa-
    tion potential, therefore helping better measure its economic
    and social implications. By quantifying AI progress in soft-
    ware engineering, we aim to help inform the world about
    the potential economic impacts of AI model development,
    while underscoring the need for careful and responsible de-
    ployment. To further support responsible AI progress, we
    open-source a public eval set and report results on frontier
    AI models to help level-set the implications of AI progress.
    Future work should explore the societal and economic impli-
    cations of AI-driven development, ensuring these systems
    are integrated safely and effectively

    Reply
  10. Tomi Engdahl says:

    Test results

    After running the test, the researchers found that none of the models earned the full $1 million value of the tasks. Claude 3.5 Sonnet, the best-performing model, earned only $208,050 and resolved 26.2% of the individual contributor issues. However, the researchers point out, “the majority of its solutions are incorrect, and higher reliability is needed for trustworthy deployment.”

    The models performed well across most individual contributor tasks, with Claude 3.5-Sonnet performing best, followed by o1 and GPT-4o.

    “Agents excel at localizing, but fail to root cause, resulting in partial or flawed solutions,” the report explains. “Agents pinpoint the source of an issue remarkably quickly, using keyword searches across the whole repository to quickly locate the relevant file and functions — often far faster than a human would. However, they often exhibit a limited understanding of how the issue spans multiple components or files, and fail to address the root cause, leading to solutions that are incorrect or insufficiently comprehensive. We rarely find cases where the agent aims to reproduce the issue or fails due to not finding the right file or location to edit.”

    Interestingly, the models all performed better on manager tasks that required reasoning to evaluate technical understanding.

    These benchmark tests showed that AI models can solve some “low-level” coding problems and can’t replace “low-level” software engineers yet. The models still took time, often made mistakes, and couldn’t chase a bug around to find the root cause of coding problems. Many “low-level” engineers work better, but the researchers said this may not be the case for very long.

    https://venturebeat.com/ai/ai-can-fix-bugs-but-cant-find-them-openais-study-highlights-limits-of-llms-in-software-engineering/

    Reply
  11. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Fiverr unveils AI tools, including the Personal AI Creation Model, which lets gig workers configure an AI model trained on their work and set prices to use it — Gig marketplace Fiverr wants to let freelancers train AI on their bodies of work and use it to automate future jobs.

    Fiverr wants gig workers to offload some of their work to AI
    https://techcrunch.com/2025/02/18/fiverr-wants-gig-workers-to-offload-some-of-their-work-to-ai/

    Gig marketplace Fiverr wants to let freelancers train AI on their bodies of work and use it to automate future jobs.

    At an event on Tuesday, Fiverr announced the launch of several new efforts aimed at attracting gig workers to its platform and equipping them with generative AI tools. Perhaps the most ambitious is a program that’ll give freelancers doing voice-over, graphic design, and certain related work the ability to train AI on their content and to charge customers for access.

    Fiverr CEO Micha Kaufman pitched the move as a way to ensure gig workers “receive proper credit and compensation while giving them unprecedented tools to scale their work.”

    “This is about making our freelancers irreplaceable, not obsolete,” Kaufman said in a statement. “We built [these new features] to ensure creators remain at the center of the creative economy.”

    Reply
  12. Tomi Engdahl says:

    Jowi Morales / Tom’s Hardware:
    xAI says Grok-3 outperforms Gemini-2 Pro, DeepSeek-V3, Claude 3.5 Sonnet, and GPT-4o in some benchmarks; Musk says xAI’s mission is to “understand the universe”

    Elon Musk’s Grok 3 is now available, beats ChatGPT in some benchmarks — LLM took 10x more compute to train versus Grok 2
    There’s a powerful new AI model in town.
    https://www.tomshardware.com/tech-industry/artificial-intelligence/elon-musks-grok-3-is-now-available-beats-chatgpt-in-some-benchmarks-llm-took-10x-more-compute-to-train-versus-grok-2

    Elon Musk just launched Grok 3, the latest version of xAI’s LLM that was trained at the Colossus Supercluster in Memphis, Tennessee using 100,000 Nvidia H100 GPUs. He had previously said, about a week ago, that its full release was imminent and claimed that it would outperform its rivals. Today he launched the AI model via a live stream on X (formerly Twitter) showcasing impressive performance benchmark results.

    Musk began the presentation by saying “The mission of xAI and Grok is to understand the universe,” and explaining that he wants to answer questions like, “What’s going on? Where are the aliens? What is the meaning of life? How does the universe end? How did it start?” He added, “Of course, that’s to be a maximally truth-seeking AI even if that truth is sometimes at odds with what is politically correct.”

    After speaking about his goals with AI, Musk proclaimed that Grok 3 is an order of magnitude more capable than Grok 2, and that it was trained in a very short period. This was likely possible because of the massive number of GPUs xAI used for parallelized training, which also took just 19 days to set up — a record time especially since Nvidia’s CEO Jensen Huang said that that usually takes four years.

    Grok 3 isn’t just a single LLM though — instead, it’s a family of several models, with the first ones launched being Grok 3 and Grok 3 mini.

    Benchmarks shown by the xAI team reveal Grok-3 and Grok-3 mini models outperforming its competition, including Gemini-2 Pro, DeepSeek-V3, Claude 3.5 Sonnet, and GPT-4o, in several tests, including Math (AIME), Science (GPQA), and Coding (LCB). The reasoning models, which are accessible via the Grok app, also outperform the competition using the same benchmarks. Aside from this, the Grok app will have a new feature called DeepSearch, which scours the internet when questioned to then distill all the information into a single answer.

    Other experts have been given access to Grok 3 in advance and were able to test these claims. For example, former Tesla Director of AI and OpenAI founder Andrej Karpathy shared his test results on X, saying that Grok 3 + Thinking feels similar to OpenAI’s o1-pro model while being a bit better than DeepSeek-R1 and Gemini 2.0 Flash Thinking. This is actually quite a feat, especially since OpenAI and Google have had a massive head start over xAI.

    Grok 3 will be available to X Premium+ subscribers first.

    https://grok.com/?referrer=website

    Reply
  13. Tomi Engdahl says:

    Financial Times:
    Sources: Meta has led the charge against the EU’s AI Act this year, as Big Tech, with backing from President Trump, grows bolder in challenging EU regulations

    https://www.ft.com/content/3e75c36e-d29e-40ca-b2f1-74320e6b781f

    Reply
  14. Tomi Engdahl says:

    Wall Street Journal:
    An investor group plans to build an AI data center in South Korea that uses up to 3 gigawatts of power, starting with a $10B investment that could grow to $35B

    AI Data Center With Up to 3 Gigawatts of Power Is Envisioned for South Korea
    Few global facilities possess more than a gigawatt of power, making electricity for artificial-intelligence computing increasingly scarce
    https://www.wsj.com/tech/ai/ai-data-center-with-up-to-3-gigawatts-of-power-is-envisioned-for-south-korea-5141bd77

    Reply
  15. Tomi Engdahl says:

    Erin Woo / The Information:
    Sources describe disputes at Google between Google Labs and Workspace before NotebookLM’s launch and between Google Cloud and DeepMind over the pace of launches

    Google’s AI Efforts Marred by Turf Disputes
    https://www.theinformation.com/articles/googles-ai-efforts-marred-by-turf-disputes

    Reply
  16. Tomi Engdahl says:

    Jupyter AI
    Welcome to Jupyter AI, which brings generative AI to Jupyter. Jupyter AI provides a user-friendly and powerful way to explore generative AI models in notebooks and improve your productivity in JupyterLab and the Jupyter Notebook.
    https://jupyter-ai.readthedocs.io/en/latest/

    More specifically, Jupyter AI offers:

    An %%ai magic that turns the Jupyter notebook into a reproducible generative AI playground. This works anywhere the IPython kernel runs (JupyterLab, Jupyter Notebook, Google Colab, VSCode, etc.).

    A native chat UI in JupyterLab that enables you to work with generative AI as a conversational assistant.

    Support for a wide range of generative model providers and models (AI21, Anthropic, Cohere, Gemini, Hugging Face, MistralAI, OpenAI, SageMaker, NVIDIA, etc.).

    Reply
  17. Tomi Engdahl says:

    Generative AI in Jupyter
    https://blog.jupyter.org/generative-ai-in-jupyter-3f7174824862

    Jupyter AI brings generative artificial intelligence to Jupyter notebooks, giving users the power to explain and generate code, fix errors, summarize content, ask questions about their local files, and generate entire notebooks from a natural language prompt. Using its powerful magic commands and chat interface, Jupyter AI connects Jupyter with large language models (LLM) from providers such as AI21, Anthropic, AWS, Cohere, and OpenAI. We use LangChain to support all popular LLMs and providers, giving you access to new models as they are released. LangChain will let Jupyter AI use local models as well. Jupyter AI version 1.0, for JupyterLab 3, and Jupyter AI 2.0, for JupyterLab 4, are now available as free and open source software.

    Getting started with Jupyter AI

    Start using Jupyter AI by installing the appropriate version with pip:

    pip install ‘jupyter-ai>=1.0,<2.0' # If you use JupyterLab 3
    pip install jupyter-ai # If you use JupyterLab 4

    Then, launch JupyterLab. Jupyter AI provides two different interfaces to interact with LLMs. In JupyterLab, you can converse with a chat UI to assist you with your code. Also, in any supported notebook or IPython environment, including JupyterLab, Notebook, IPython, Colab, and Visual Studio Code, you can invoke LLMs using the %%ai magic command. Jupyter AI can turn any Jupyter Notebook session into a generative AI playground with support for text and image models.

    Project Jupyter is vendor-neutral, so Jupyter AI supports LLMs from AI21, Anthropic, AWS, Cohere, HuggingFace Hub, and OpenAI. More model providers will be added in the future. Please review a provider’s privacy policy and pricing model before you use it. We’re also working on support for locally-deployed models, for maximum privacy.

    The chat interface has its own configuration panel for choosing a language model and an embedding model, and for authenticating to each model’s provider. A language model responds to users’ messages in the chat panel. When you ask the chat interface to learn about local files, it uses an embedding model to parse these files and to assist when you ask questions about them.

    The chat interface, your AI assistant

    The chat interface puts you in conversation with Jupyternaut, a conversational agent using a language model of your choice.

    Jupyternaut communicates primarily through text, and it can also interact with files in JupyterLab. It can answer questions as a general-purpose AI assistant, include selections from your notebooks with your questions, insert AI-generated output into your notebooks, learn from and ask questions about your local files, and generate notebooks from a prompt.

    Using prompts that include the selected code, you can ask Jupyternaut to explain your code in plain English (or in any other language it can speak), make modifications to it, and identify errors in it. If you want, Jupyternaut can even replace your selection with its response. Please review AI-generated code before you run it, as you would review code written by another person.

    Reply
  18. Tomi Engdahl says:

    Jupyter AI — Its AI Features & How to use it
    https://medium.com/@techlatest.net/jupyter-ai-its-ai-features-how-to-use-it-179eb3ac619c

    Jupyter AI is an innovative tool that brings generative AI capabilities to Jupyter notebooks, enhancing productivity and enabling interactions with large language models (LLMs) like GPT-3, Claude, and Amazon Titan. This blog will explore the features of Jupyter AI and provide a detailed guide on how to use it effectively.

    Features of Jupyter AI

    Jupyter AI offers a range of powerful features that revolutionize the notebook experience for Generative AI:

    1. Generative AI Playground: Jupyter AI provides %ai & %%ai magic commands that turns the Jupyter notebook into a reproducible generative AI playground, allowing users to work with generative AI models in notebooks and improve productivity in JupyterLab and the Jupyter Notebook.

    2. Native Chat UI: It includes a native chat UI in JupyterLab that enables users to work with generative AI as a conversational assistant, facilitating natural conversational flow and effortless generative AI interactions.

    3. Support for Various LLM Providers: Jupyter AI supports a wide range of generative model providers and models, including AI21, Anthropic, Cohere, Hugging Face, OpenAI, SageMaker, and more, unlocking the power of these models within the workflow.

    4. Ethical and Responsible AI: Jupyter AI is designed with responsible AI and data privacy in mind, allowing users to select their preferred LLM, embedding model, and vector database to meet their individual needs. It prioritizes ethical considerations and social responsibility.

    The easiest way to get started with Jupyter AI is to use the chat interface.

    Conclusion

    In conclusion, Jupyter AI is a game-changing tool that empowers users to work with generative AI models, enhance productivity, and prioritize ethical and responsible AI usage within the Jupyter environment. With its seamless integration, powerful features, and user-friendly interface, Jupyter AI is set to redefine the notebook experience for data scientists, developers, and AI enthusiasts alike.

    By leveraging the capabilities of Jupyter AI, users can unlock the potential of generative AI, streamline their development process, and gain deeper insights into code, ultimately revolutionizing the way AI is integrated into the notebook workflow.

    Reply
  19. Tomi Engdahl says:

    Humane is shutting down the AI Pin and selling its remnants to HP
    HP is acquiring the Humane platform and team for $116 million.
    https://www.theverge.com/news/614883/humane-ai-hp-acquisition-pin-shutdown

    Bloomberg reports that “Humane’s team, including founders Imran Chaudhri and Bethany Bongiorno, will form a new division at HP to help integrate artificial intelligence into the company’s personal computers, printers and connected conference rooms,” per an HP executive. The new team will be called HP IQ, which will be “HP’s new AI innovation lab focused on building an intelligent ecosystem across HP’s products and services for the future of work,” according to an HP press release.

    Reply
  20. Tomi Engdahl says:

    Tekoäly on riski radioverkkojen protokollille
    https://etn.fi/index.php/13-news/17172-tekoaely-on-riski-radioverkkojen-protokollille

    Ruotsin puolustusvoimien tutkimuslaitos FOI on julkaissut raportin, joka paljastaa tekoälyn ja koneoppimisen (ML) tuomat uhat langattomille verkoille. Tutkimuksen mukaan tekoäly voi optimoida langattoman viestinnän tehokkuutta, mutta samalla se avaa uusia mahdollisuuksia vihamielisille hyökkäyksille, joita perinteiset kyberturvallisuuskeinot eivät välttämättä tunnista.

    Raportti keskittyy niin sanottuun vastustukselliseen koneoppimiseen (Adversarial Machine Learning, AML), joka tarkoittaa tekoälymallien harhaanjohtamista tai väärinkäyttöä. FOI:n tutkijat arvioivat, että langattomien verkkojen turvallisuus on vaarassa erityisesti seuraavilla osa-alueilla:

    Spektrintunnistus: Hyökkääjät voivat manipuloida radioverkkojen tekoälypohjaisia algoritmeja, jolloin järjestelmä arvioi taajuuden olevan vapaana, vaikka se on varattu. Tämä voi johtaa häiriöihin ja viestinnän katkeamiseen.
    Modulaatioluokitus: Tekoälyä voidaan huijata tunnistamaan radioviestinnän modulaatio väärin, mikä vaikeuttaa viestinnän salausta ja tietoturvaa.
    Radioresurssien hallinta: Hyökkääjät voivat vaikuttaa tekoälyn avulla siihen, miten verkko jakaa taajuuksia ja kapasiteettia, mikä voi aiheuttaa häiriöitä ja tehottomuutta.

    Tutkimuksessa kävi ilmi, että vaikka AML-hyökkäykset langattomissa verkoissa ovat vielä kehittymässä, ne eivät ole pelkästään teoreettisia uhkia. FOI:n analyysi osoittaa, että hyökkääjät voivat ilman täydellistä tietoa järjestelmästä hyödyntää tekoälyä radioverkkojen manipulointiin.

    Perinteinen kyberturvallisuus ei riitä

    Raportti korostaa, että langattomien verkkojen suojaaminen vaatii uusia tekoälypohjaisia vastatoimia. Nykyiset kyberturvallisuuskeinot eivät yksin riitä estämään AML-hyökkäyksiä, vaan tarvitaan tekoälyn vahvistettuja puolustusmekanismeja.

    Hot- och sårbarhetsanalys av attacker mot AI i trådlösa kommunikationssystem
    https://www.foi.se/rapporter/rapportsammanfattning.html?reportNo=FOI-R–5646–SE

    Reply
  21. Tomi Engdahl says:

    Google AI Studio for Beginners
    https://www.youtube.com/watch?v=IHOJUJjZbzc

    What is Google AI Studio and how does it work? Join Googler Paige Bailey as she explores how to get started with Google AI Studio to build with generative AI for beginners.

    This AI Will Change How You See the World: Google AI Studio
    https://www.youtube.com/watch?v=t06gsqIhwVs

    Imagine having an assistant right in your pocket that understands and explains the world through your phone’s camera. In this video, I’ll show you how to use Google AI Studio on your phone to make your life easier and more efficient. From identifying buildings and exploring their history to figuring out laundry settings and even baking chocolate chip cookies, this tool is a game-changer!

    TIMESTAMPS
    0:00 – Dive Into the World of AI
    0:42 – Set Up Google AI Studio in Seconds
    1:27 – Unlock Travel Insights with AI
    2:39 – Create Recipes from Your Ingredients
    3:38 – Simplify Laundry with AI Guidance
    4:50 – Wrap Up and Next Steps

    Reply
  22. Tomi Engdahl says:

    https://aistudio.google.com/welcome

    Google AI studio replaces your AI tech stack (full demo)
    https://www.youtube.com/watch?v=6h9y1rLem4c

    On this episode, Logan Kilpatrick, lead PM for Google’s AI Studio, provides a comprehensive demonstration of Google’s AI capabilities, focusing on the Gemini models and AI Studio platform. The presentation covers various features including long-context processing, reasoning models, and real-time AI interactions. The discussion emphasizes the platform’s accessibility for developers and entrepreneurs, with free API access and tools for building AI-powered applications.

    Timestamps:
    00:00 – Introduction and overview
    01:18 – Overview of Gemini and AI Studio
    03:40 – Long Context Use Case and Extracting Data from Media
    07:05 – Overview of Gemini Models
    08:13 – Gemini’s reasoning model demo
    12:36 – Spatial Understanding Capabilities
    15:23 – Startup Ideas leveraging AI’s Spatial Understanding Capabilities
    18:23 – Maps Explorer demo
    20:06 – Real-Time Streaming and AI Co-Presence
    22:31 – Democratizing Access to Learning

    Reply
  23. Tomi Engdahl says:

    Access Top AI Models
    via Single API Solution
    Chat, Text-to-Image, Text-to-Video, Music Generation, Voice, Embeddings, OCR and Vision models
    https://aimlapi.com/app/sign-in

    Reply
  24. Tomi Engdahl says:

    ChatGpt’s Code Interpreter to analyze network traffic from an excel file. wow
    https://www.youtube.com/watch?v=JBkyQSyRvgk

    Reply
  25. Tomi Engdahl says:

    How to Use OpenAI’s ChatGPT to Analyze Wireshark Packet Captures
    https://phillyt.medium.com/how-to-use-openais-chatgpt-to-analyze-wireshark-packet-captures-a4cca934710c

    Step 1: Gather Wireshark Packet Captures
    The first step is to capture network traffic using Wireshark.

    Step 2: Copy Packet Data into ChatGPT
    Once you have the Wireshark packet capture, select a single packet from the capture file and copy its data into ChatGPT. You can do this by right-clicking on the packet in Wireshark and selecting “Copy -> As Text”. Then paste the text into the input field of the ChatGPT interface.

    Step 3: Ask ChatGPT to Explain the Packet
    Now that you have the packet data in ChatGPT, you can ask ChatGPT to explain the packet. Simply type “explain this packet capture data” into the ChatGPT input field, followed by the copied packet data.

    Step 4: Repeat the Process for Multiple Packets
    Repeat the process for multiple packets in your Wireshark capture file. By analyzing multiple packets, you can gain a better understanding of the network traffic patterns and identify any potential issues or anomalies.

    Step 5: Use ChatGPT’s Explanations to Troubleshoot Network Issues
    By using ChatGPT to explain Wireshark packets, you can quickly and easily identify and troubleshoot network issues. For example, if you see an unusual number of packets with a certain destination address, you can use ChatGPT to explain the packets and determine the source of the problem.

    Reply
  26. Tomi Engdahl says:

    DeepSeek and Packet Analysis? Let’s find out…
    https://www.youtube.com/watch?v=TciLnWFM-bY

    With all the buzz around DeepSeek AI, I threw a couple of packet captures at it to see if it could help with the analysis and find root cause. First we have to convert the pcap to a text file in wireshark for Deepseek to analyze it.

    Reply
  27. Tomi Engdahl says:

    Network Troubleshooting Made Easy with ChatGPT | Snack Minute Ep. 96
    https://www.youtube.com/watch?v=W7KtYF0gDJU

    Reply
  28. Tomi Engdahl says:

    How to Get Hired When AI Does the Screening
    https://hbr.org/2025/02/how-to-get-hired-when-ai-does-the-screening?tpcc=orgsocial_edit&utm_campaign=hbr&utm_medium=social&utm_source=facebook&fbclid=IwZXh0bgNhZW0CMTEAAR3ZVEJrm6k5cM2DAFl467tBsiS-rHpxnCtEOvriqhjTPRX0C4kHy8tFJE0_aem_Ma8cF4Tw__Yy7Bk4NnR6KA

    The current job market is tight, and not just because of recent layoffs and stagnant hiring. AI and automation are changing the hiring landscape at all levels, and those on the hunt for a new opportunity can no longer ignore AI’s impact on their job search strategy.

    Understanding how companies are leveraging AI for hiring is critical for landing your next job. Here are five ways to prepare for an AI-based hiring process, from your resume to interviews.

    Applications, Resumes, and Cover Letters
    With such a competitive market, recruiters don’t have time to look at every application. Consider these tips to help ensure your resume surfaces to the top of the candidate pool.

    Change your narrative.
    When updating your resume, develop a strong narrative that shows recruiters and hiring managers your unique, AI-resistant capabilities. Start by mapping out your current work. Determine which tasks might soon be accomplished through AI and automation or whether your entire job could become obsolete.

    Reply
  29. Tomi Engdahl says:

    AI Converts Images to Code in SECONDS!
    https://www.youtube.com/watch?v=NiwDhFbjfPw

    Discover how to create stunning websites just by describing an image! With v0’s Generative UI, you can instantly transform your ideas into React, Tailwind CSS, and Shadcn UI code. Watch as I demonstrate how to convert images into fully functional web components in minutes!

    Reply
  30. Tomi Engdahl says:

    Convert Screenshot to Code in Minutes (With Cursor AI)
    https://www.youtube.com/watch?v=wyN3iMhgiFM

    Turn UI Screenshots into Functional Code in Minutes with Cursor
    In this video, learn how to transform a screenshot of a UI into functional code in just minutes using Cursor. The process involves two key secrets: using a well-crafted prompt file and utilizing Cursor’s chat feature instead of the composer feature. The video walks through setting up a Vite React project with Tailwind CSS and Framer Motion, creating a prompt file for Cursor, and converting a screenshot of a pricing section into working code. Detailed steps are provided, including setting up the project, using Cursor to generate code, and refining the code for a near-pixel-perfect result. The video emphasizes the efficiency and productivity gains from using this method.

    https://github.com/gopinav/awesome-cursor/blob/main/prompts/screenshot-to-code.md

    Reply
  31. Tomi Engdahl says:

    How to Use AI to Build Apps Quickly? Taking an Image and Turning it Into Code!
    https://www.youtube.com/watch?v=wf7DScN5WMU

    What you’ll learn:

    Introduction to Tldraw: Understanding the basics of this powerful tool.
    Image/Sketch to Code Conversion: Step-by-step guide on how to take an image and transform it into code.
    Practical Examples: Demonstrating real-life scenarios where this tool can be a game-changer.
    Tips & Tricks: Knowledge to help you get the most out of Tldraw.
    Future of AI in App Development: A discussion on how tools like Tldraw are shaping the future of programming.

    Reply
  32. Tomi Engdahl says:

    OpenAI’s ‘deep research’ tool: is it useful for scientists?
    The model produces cited, pages-long reports that might be helpful for generating literature reviews.
    https://www.nature.com/articles/d41586-025-00377-9?linkId=12827840&fbclid=IwY2xjawIjIjlleHRuA2FlbQIxMQABHVSROg1WWlk9rtX-iHjTd3sNh54ONGOi3MaVA-lRc3D96vC1pQ5_JOWMlw_aem_Xbu28p1Y8GqgGjuo_zIaKw

    Technology giant OpenAI has unveiled a pay-for-access tool called ‘deep research’, which synthesizes information from dozens or hundreds of websites into a cited report several pages long. The tool follows a similar one from Google, called ‘Deep Research’, released last December and acts as a personal assistant, doing the equivalent of hours of work in tens of minutes.

    Reply
  33. Tomi Engdahl says:

    IBM Deep Search uses AI to collect, convert, curate, and ultimately search large document collections like public documents, such as patents and research papers
    https://research.ibm.com/projects/deep-search

    Reply
  34. Tomi Engdahl says:

    Tarkista tämä, ennen kuin käytät Deepseekiä
    Valesivustot voivat näyttää hyvinkin uskottavilta.
    https://www.iltalehti.fi/digiuutiset/a/2d4203b2-7772-4bd5-9011-8e1b5c6f25a3

    Kiinalainen tekoälypalvelu Deepseek tuntuu tällä hetkellä kiinnostavan vähän kaikkia. Sen julkistaminen esimerkiksi söi Yhdysvaltain pörssiyrityksistä arvoa sadoilla miljardeilla. Nyt Deepseekiä ollaan myös pikavauhtia kieltämässä joissakin maissa.

    Deepseekin suosio on saanut myös huijarit aktivoitumaan erilaisten Deepseekin nimellä ratsastavien valesivujen muodossa

    Aidon Deepseekin osoite verkossa on http://www.deepseek.com. Jos sivuston osoite on jotain muuta, kyseessä on huijaus.

    Reply
  35. Tomi Engdahl says:

    How Hackers Manipulate Agentic AI with Prompt Engineering
    https://www.securityweek.com/how-hackers-manipulate-agentic-ai-with-prompt-engineering/

    Organizations adopting the transformative nature of agentic AI are urged to take heed of prompt engineering tactics being practiced by threat actors.

    The era of “agentic” artificial intelligence has arrived, and businesses can no longer afford to overlook its transformative potential. AI agents operate independently, making decisions and taking actions based on their programming. Gartner predicts that by 2028, 15% of day-to-day business decisions will be made completely autonomously by AI agents.

    However, as these systems become more widely accepted, their integration into critical operations as well as excessive agency—deep access to systems, data, functionalities, and permissions—make them appealing targets for cybercrime. One of the most subtle but powerful attack techniques that threat actors use to manipulate, deceive, or compromise AI agents involves prompt engineering.

    How Threat Actors Leverage Prompt Engineering to Exploit Agentic AI

    Threat actors utilize a number of prompt engineering techniques to compromise agentic AI systems, such as:

    Steganographic Prompting

    Remember SEO poisoning technique where white text was used on a white background to manipulate search engine results? If a visitor browses the web page, they are unable to read the hidden text. But if a search engine bot crawls the page, it can read it. Similarly, steganographic prompting involves a technique where hidden text or obfuscated instructions are embedded in a way that is invisible to the human eye but detectable by an LLM.

    Jailbreaking

    Jailbreaking is a prompting technique that manipulates AI systems into circumventing their own built-in restrictions, ethical standards, or safety measures. In the case of agentic AI systems, jailbreaking seeks to bypass built-in protections and safeguards, compelling the AI to behave in ways that go against its intended programming. There are a number of different techniques bad actors can employ to jailbreak AI guardrails:

    Role-playing: instructing the AI to adopt a persona that bypasses its restrictions.
    Obfuscation: using coded language, metaphors, or indirect phrasing to disguise malicious intent.
    Context manipulation: altering context such as prior interactions or specific details to guide the model into producing restricted outputs.

    Prompt Probing

    Prompt probing is a technique used to explore and understand the behavior, limitations, and vulnerabilities of an agentic AI system by systematically testing it with carefully crafted inputs (prompts). Although the technique is typically employed by researchers and developers to gain an understanding about how AI models respond to different types of inputs or queries, it is also used by threat actors as a precursor to more malicious activities, such as jailbreaking, prompt injection attacks, or model extraction.

    Mitigating the Risks of Prompt Engineering

    To defend against prompt engineering attacks, organizations must adopt a multi-layered approach. Key strategies include:

    Input Sanitization and Validation: Implement robust input validation and sanitization techniques to detect and block malicious prompts, to strip or detect hidden text, such as white-on-white text, zero-width characters, or other obfuscation techniques, prior to processing inputs.
    Improve Agent Robustness: Using techniques like adversarial training and robustness testing, train AI agents to recognize and resist adversarial inputs.
    Limit AI Agency: Restrict the actions that agentic AI systems can perform, particularly in high-stakes environments.
    Monitor Agent Behavior: Continuously monitor AI systems for unusual behavior and conduct regular audits to identify and address vulnerabilities.

    Train Users: Educate users about the risks of prompt engineering and how to recognize potential attacks.
    Implement Anomaly Detection: Investing in a converged network and security-as-a-service model like SASE ensures that organizations can identify anomalous activities and unusual behaviors, which are often triggered by prompt manipulations, across the entire IT estate.
    Deploy Human-in-the-Loop: Use human reviewers to validate AI outputs and to monitor critical and sensitive interactions.

    Reply
  36. Tomi Engdahl says:

    Pangea Launches AI Guard and Prompt Guard to Combat Gen-AI Security Risks

    Guardrail specialist releases new products to aid the development and use of secure gen-AI apps.

    https://www.securityweek.com/pangea-launches-ai-guard-and-prompt-guard-to-combat-gen-ai-security-risks/

    AI security specialist Pangea has added to its existing suite of corporate gen-AI security products with AI Guard and Prompt Guard. The first prevents sensitive data leakage from gen-AI applications, while the second defends against prompt engineering, preventing jailbreaks.

    According to the current OWASP Top 10 for LLM Applications 2025 (PDF), the number one risk for gen-AI applications comes from ‘prompt injection’, while the number two risk is ‘sensitive information disclosure’ (data leakage). With large organizations each developing close to 1,000 proprietary AI apps, Pangea’s new products are designed to prevent these apps succumbing to their major risks.

    OWASP Top 10 for LLM Applications 2025
    https://genai.owasp.org/resource/owasp-top-10-for-llm-applications-2025/

    Reply
  37. Tomi Engdahl says:

    Tom Warren / The Verge:
    Microsoft announces Muse AI, a first-of-its-kind generative AI model that can generate a game environment based on visuals, players’ controller actions, or both — Microsoft Research and Xbox game studio Ninja Theory have partnered to create a new AI model for gaming.

    Microsoft’s Xbox AI era starts with a model that can generate gameplay
    Microsoft Research and Xbox game studio Ninja Theory have partnered to create a new AI model for gaming.
    https://www.theverge.com/news/615048/microsoft-xbox-generative-ai-model-gaming-muse

    I reported in November that Microsoft was about to start a bigger effort to bring AI features to Xbox, and today, the company is unveiling what it’s calling a breakthrough in AI for gaming. Microsoft’s new Muse AI model could help Xbox developers create parts of games in the future, and the company says it’s even exploring the potential of using it to preserve classic games and optimize them for modern hardware.

    Microsoft Research has created Muse, a first-of-its-kind generative AI model that can generate a game environment based on visuals or players’ controller actions. It understands a 3D game world and game physics and can react to how players interact with a game.

    Reply

Leave a Reply to Tomi Engdahl Cancel reply

Your email address will not be published. Required fields are marked *

*

*