3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

6,874 Comments

  1. Tomi Engdahl says:

    Use of large language models as artificial intelligence tools in academic research and publishing among global clinical researchers
    https://www.nature.com/articles/s41598-024-81370-6

    Reply
  2. Tomi Engdahl says:

    ByteDance Research Introduces 1.58-bit FLUX: A New AI Approach that Gets 99.5% of the Transformer Parameters Quantized to 1.58 bits
    https://www.marktechpost.com/2024/12/30/bytedance-research-introduces-1-58-bit-flux-a-new-ai-approach-that-gets-99-5-of-the-transformer-parameters-quantized-to-1-58-bits/

    Vision Transformers (ViTs) have become a cornerstone in computer vision, offering strong performance and adaptability. However, their large size and computational demands create challenges, particularly for deployment on devices with limited resources. Models like FLUX Vision Transformers, with billions of parameters, require substantial storage and memory, making them impractical for many use cases. These limitations restrict the real-world application of advanced generative models. Addressing these challenges calls for innovative methods to reduce the computational burden without compromising performance.

    Researchers from ByteDance Introduce 1.58-bit FLUX
    Researchers from ByteDance have introduced the 1.58-bit FLUX model, a quantized version of the FLUX Vision Transformer. This model reduces 99.5% of its parameters (11.9 billion in total) to 1.58 bits, significantly lowering computational and storage requirements. The process is unique in that it does not rely on image data, instead using a self-supervised approach based on the FLUX.1-dev model. By incorporating a custom kernel optimized for 1.58-bit operations, the researchers achieved a 7.7× reduction in storage and a 5.1× reduction in inference memory usage, making deployment in resource-constrained environments more feasible.
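    The article does not give ByteDance's exact quantization recipe, so as a rough illustration, here is the generic "1.58-bit" (ternary) scheme this family of work is named for: each weight is mapped to one of {-1, 0, +1} times a single per-tensor scale, and log2(3) ≈ 1.58 bits per weight. The absmean scaling and function names below are illustrative assumptions, not the paper's method.

```python
# Illustrative ternary (1.58-bit) quantization sketch, NOT ByteDance's
# actual recipe: each weight becomes -1, 0, or +1 times one shared scale.

def quantize_ternary(weights):
    """Quantize a list of float weights to {-1, 0, +1} plus a scale."""
    # Per-tensor scale: mean of absolute values ("absmean" scaling).
    scale = sum(abs(w) for w in weights) / len(weights)
    if scale == 0:
        return [0] * len(weights), 0.0
    # Round w / scale to the nearest value, then clamp into {-1, 0, +1}.
    ternary = [max(-1, min(1, round(w / scale))) for w in weights]
    return ternary, scale

def dequantize(ternary, scale):
    """Recover approximate float weights from the ternary codes."""
    return [t * scale for t in ternary]

w = [0.9, -0.05, -1.2, 0.4, 0.0, -0.7]
q, s = quantize_ternary(w)   # q is [1, 0, -1, 1, 0, -1]
```

    Storing six ternary codes plus one float instead of six floats is where the roughly 7.7× storage reduction reported above comes from at scale.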

    Reply
  3. Tomi Engdahl says:

    Revelation: Samsung’s upcoming Galaxy S25 series flagship phones may come with Google’s best AI for free
    https://mobiili.fi/2024/12/31/paljastus-samsungin-tulevien-galaxy-s25-sarjan-huippupuhelinten-mukana-voi-saada-googlen-parhaimman-tekoalyn-ilmaiseksi/

    Reply
  4. Tomi Engdahl says:

    Forget ChatGPT — Google Gemini is my favorite AI product of the year
    Features
    By Nigel Powell published 2 days ago
    So much AI, so little time
    https://www.tomsguide.com/ai/google-gemini-is-my-ai-product-of-the-year-heres-why

    Reply
  5. Tomi Engdahl says:

    Introducing smolagents, a simple library to build agents
    https://huggingface.co/blog/smolagents

    Today we are launching smolagents, a very simple library that unlocks agentic capabilities for language models. Here’s a glimpse:
    from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

    agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())

    agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")

    What are agents?
    Any efficient system using AI will need to provide LLMs some kind of access to the real world: for instance the possibility to call a search tool to get external information, or to act on certain programs in order to solve a task. In other words, LLMs should have agency. Agentic programs are the gateway to the outside world for LLMs.

    AI Agents are programs where LLM outputs control the workflow.

    Any system leveraging LLMs will integrate the LLM outputs into code. The influence of the LLM’s input on the code workflow is the level of agency of LLMs in the system.

    Note that with this definition, “agent” is not a discrete, 0 or 1 definition: instead, “agency” evolves on a continuous spectrum, as you give more or less power to the LLM on your workflow.

    Agents are useful when you need an LLM to determine the workflow of an app. But they’re often overkill. The question is: do I really need flexibility in the workflow to efficiently solve the task at hand? If the pre-determined workflow falls short too often, that means you need more flexibility. Let’s take an example: say you’re making an app that handles customer requests on a surfing trip website.

    You could know in advance that the requests will belong to either of 2 buckets (based on user choice), and you have a predefined workflow for each of these 2 cases.

    Wants some knowledge on the trips? ⇒ give them access to a search bar to search your knowledge base.
    Wants to talk to sales? ⇒ let them type in a contact form.
    If that deterministic workflow fits all queries, by all means just code everything! This will give you a 100% reliable system with no risk of error introduced by letting unpredictable LLMs meddle in your workflow. For the sake of simplicity and robustness, it’s advised to regularize towards not using any agentic behaviour.
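    The two-bucket workflow above can be coded with no LLM in the loop at all, since the user explicitly picks the bucket. A minimal sketch (function names and return strings are hypothetical, not from smolagents):

```python
# Fully deterministic routing for the surfing-trip example above:
# the user chooses the bucket, so no LLM decides anything.

def search_knowledge_base(query):
    # Placeholder for a search-bar lookup against the trip knowledge base.
    return f"Top results for: {query}"

def open_contact_form():
    # Placeholder for routing the user to a sales contact form.
    return "Contact form displayed"

def handle_request(bucket, query=None):
    if bucket == "knowledge":
        return search_knowledge_base(query)
    elif bucket == "sales":
        return open_contact_form()
    raise ValueError(f"Unknown bucket: {bucket}")
```

    Every path through this code is predictable and testable, which is exactly the 100% reliability the quote above is pointing at.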

    But what if the workflow can’t be determined that well in advance?

    That is where an agentic setup helps.

    In the above example, you could just make a multi-step agent that has access to a weather API for weather forecasts, Google Maps API to compute travel distance, an employee availability dashboard and a RAG system on your knowledge base.

    Until recently, computer programs were restricted to pre-determined workflows, trying to handle complexity by piling up if/else switches. They focused on extremely narrow tasks, like “compute the sum of these numbers” or “find the shortest path in this graph”. But actually, most real-life tasks, like our trip example above, do not fit in pre-determined workflows. Agentic systems open up the vast world of real-world tasks to programs!

    Code agents
    In a multi-step agent, at each step, the LLM can write an action, in the form of some calls to external tools. A common format (used by Anthropic, OpenAI, and many others) for writing these actions comes in different shades of “writing actions as a JSON of tool names and arguments to use, which you then parse to know which tool to execute and with which arguments”.

    Writing actions in code rather than JSON-like snippets provides better:

    Composability: could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a Python function?
    Object management: how do you store the output of an action like generate_image in JSON?
    Generality: code is built to express simply anything you can have a computer do.
    Representation in LLM training data: plenty of quality code actions are already included in LLMs’ training data, which means they’re already trained for this!
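    The contrast in that list can be made concrete with a toy example. Neither snippet is smolagents' actual internal format; both are illustrative: a JSON action must be parsed and dispatched by the framework, while a code action composes naturally and just executes.

```python
# Illustrative contrast: JSON-style tool call vs. code-style action.
import json

# 1) JSON-style action: parse the blob, look up the tool, call it.
json_action = '{"tool": "add", "arguments": {"a": 2, "b": 3}}'
tools = {"add": lambda a, b: a + b}
parsed = json.loads(json_action)
json_result = tools[parsed["tool"]](**parsed["arguments"])  # 5

# 2) Code-style action: the LLM emits code, which nests and reuses
# intermediate results with no extra parsing layer.
code_action = "result = add(add(2, 3), 4)"
namespace = {"add": lambda a, b: a + b}
exec(code_action, namespace)
code_result = namespace["result"]  # 9
```

    Note how nesting `add(add(2, 3), 4)` is trivial in the code action but would require inventing extra conventions in the JSON format, which is the composability point above.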

    Support for any LLM: it supports models hosted on the Hub loaded in their transformers version or through our inference API, but also supports models from OpenAI, Anthropic and many others via our LiteLLM integration.

    smolagents is the successor to transformers.agents, and will be replacing it as transformers.agents gets deprecated in the future.

    Reply
  6. Tomi Engdahl says:

    Hands-on with LM Studio: how to run AI models on your own computer
    29.12.2024 14:14
    LM Studio is a free application that brings large language models to a home user’s own computer.
    https://www.mikrobitti.fi/testit/kokeilussa-lm-studio-nain-ajat-tekoalymalleja-omalla-tietokoneella/812fdb3f-a71e-4fda-b024-2ca2ffa260e8

    Reply
  7. Tomi Engdahl says:

    https://mobiili.fi/2024/12/29/tama-on-googlen-merkittavin-tavoite-ensi-vuodelle-liittyy-vahemman-yllattaen-tekoalyyn/

    CNBC reports that Google CEO Sundar Pichai and the rest of the company’s top leadership held their 2025 strategy meeting before the holiday break. According to a recording leaked from the meeting, Pichai stated that “scaling Gemini on the consumer side will be our biggest focus area next year.”

    Google has been at the forefront of AI research and development for years, but at the end of 2022 OpenAI managed to surprise everyone with ChatGPT, which has since risen to enormous popularity. Having rapidly grown past 250 million weekly users, ChatGPT has become the flagship of consumer-facing AI services.

    Google, however, is not conceding the race: throughout 2024 it shipped AI announcements at a brisk pace, most recently the next-generation Gemini 2.0 models in December.

    Reply
  8. Tomi Engdahl says:

    https://www.howtogeek.com/how-i-transformed-chatgpt-into-a-project-management-system/

    Use ChatGPT’s Custom Instructions & Memory features to create a basic project management system.
    Custom Instructions help organize tasks into categories, prioritize, and manage projects effectively.
    Regular maintenance (like clearing completed tasks) keeps the system running smoothly across chat sessions.

    Reply
  9. Tomi Engdahl says:

    Fun
    New concept: AaS
    AI as a Service

    Reply
  10. Tomi Engdahl says:

    Wireless audio SoC integrates AI processing
    https://www.edn.com/wireless-audio-soc-integrates-ai-processing/?fbclid=IwZXh0bgNhZW0CMTEAAR005DgFmqtrLLt3udOJyjVH2CPfhDT_LGzvQ88lG2iqCUKCrJNQDNtq6D4_aem_9SBGI7PAdhVcOUt-7-fR_g

    Airoha Technology’s AB1595 Bluetooth audio chip features a 6-core architecture and a built-in AI hardware accelerator. It consolidates functions typically spread across multiple chips into a single SoC and achieves Microsoft Teams Open Office certification.

    Reply
  11. Tomi Engdahl says:

    Chinese Researchers Crack ChatGPT: Replicating OpenAI’s Advanced AI Model
    https://www.geeky-gadgets.com/openai-o1-model-replicated/#google_vignette

    It’s no secret that artificial intelligence is advancing at a breakneck pace, reshaping industries and redefining what machines can do. But what happens when new AI models, like OpenAI’s highly advanced o1 reasoning model, are replicated by others? That’s exactly what Chinese researchers from Fudan University and the Shanghai AI Laboratory have reportedly achieved. Their success in reverse-engineering this pivotal AI model marks a significant leap in the global race toward Artificial General Intelligence (AGI). Yet, this development also raises some big questions: Should such powerful technologies be open sourced? And what does this mean for the future of AI innovation and security?

    Reply
  12. Tomi Engdahl says:

    Meta wants AI characters to fill up Facebook and Instagram ‘kind of in the same way accounts do,’ but also had to delete a humiliating first run of its official bots
    https://www.pcgamer.com/gaming-industry/meta-wants-ai-characters-to-fill-up-facebook-and-instagram-kind-of-in-the-same-way-accounts-do-but-also-had-to-delete-a-humiliating-first-run-of-its-official-bots/

    Reply
  13. Tomi Engdahl says:

    Hugging Face Smolagents is a Simple Library to Build LLM-Powered Agents
    https://www.infoq.com/news/2025/01/hugging-face-smolagents-agents/

    Smolagents is a library created at Hugging Face to build agents leveraging large language models (LLMs). Hugging Faces says its new library aims to be simple and LLM-agnostic. It supports secure “agents that write their actions in code” and is integrated with Hugging Face Hub.

    Agentic systems promise to extend the possibilities of computer programs beyond the mere execution of pre-determined workflows conceived to solve narrow tasks. In fact, most real-life problems do not fit in pre-determined workflows, say Hugging Face engineers Aymeric Roucher, Merve Noyan, and Thomas Wolf.

    Agents, in Hugging Face’s view, provide LLMs access to the outside world. An agent-based system can be either a multi-step agent or multi-agent, and differs from other LLM-based systems in the level of agency the LLMs have in the system. Specifically, AI agents have the characteristic that LLM outputs control the system workflow. In other LLM-based systems, by contrast, LLM output may have no impact whatsoever on the program’s flow, or only some intermediate effect.

    The way agentic systems achieve their workflow flexibility is having an LLM write an action, which takes the form of calls to external tools. This idea is represented in the following meta-code:

    memory = [user_defined_task]
    while llm_should_continue(memory):  # this loop is the multi-step part
        action = llm_get_next_action(memory)  # this is the tool-calling part
        observations = execute_action(action)
        memory += [action, observations]
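    The meta-code can be fleshed out into a runnable toy: the three helper functions below are stubs standing in for real LLM calls and tool execution, purely illustrative scaffolding rather than smolagents internals.

```python
# Runnable toy version of the multi-step agent loop. The helpers are
# stubs: a real agent would query an LLM and execute real tools.

def llm_should_continue(memory):
    # Stop once some observation contains a final answer.
    return not any("FINAL" in str(item) for item in memory)

def llm_get_next_action(memory):
    # A real agent would ask the LLM; here we script two steps.
    steps_taken = sum(1 for item in memory if str(item).startswith("action:"))
    return "search" if steps_taken == 0 else "answer"

def execute_action(action):
    # Stub tool results keyed by action name.
    return {"search": "found 3 documents", "answer": "FINAL: 42"}[action]

memory = ["task: answer the question"]
while llm_should_continue(memory):                     # the multi-step part
    action = "action:" + llm_get_next_action(memory)   # the tool-calling part
    observations = execute_action(action.split(":", 1)[1])
    memory += [action, observations]
```

    After two iterations the memory holds the task, two actions, and two observations, ending with the final answer, which is exactly the accumulate-and-loop shape of the meta-code.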
    This idea is not new and, as Roucher, Noyan, and Wolf remark, there already exists a commonly accepted JSON format, used by Anthropic, OpenAI, and others, to describe such actions, i.e., calls to external tools. Here is where smolagents takes a distinct approach, based on the realization that JSON is not the best way to express what a computer should do. Instead, they preferred writing actions in code, because programming languages provide a superior way to describe computer behavior, granting better composability, data management, and generality. Since LLMs already have the capacity to produce quality code, this approach adds no major complexity.

    To create agentic systems, you need to solve a few recurrent problems, such as parsing the agent’s output and synthesizing prompts based on what happened in the last iteration. Those are among the key features provided by smolagents, along with error logging and retry mechanisms.

    If you want to build an agent system, however, you need to first determine if you need one. Indeed, as Roucher, Noyan, and Wolf explain, agents may be overkill.

    If [a] deterministic workflow fits all queries, by all means just code everything! This will give you a 100% reliable system with no risk of error introduced by letting unpredictable LLMs meddle in your workflow. For the sake of simplicity and robustness, it’s advised to regularize towards not using any agentic behavior.

    Once you are sure you need an agent, you need an LLM and some tools. You can use any open model via Hugging Face’s HfApiModel class, or use LiteLLMModel to access a plethora of cloud-based LLMs. A tool is just a function the LLM can execute with some inputs.
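    The "a tool is just a function" idea can be sketched generically with a plain function plus a registry the LLM could be shown. This mirrors the concept only; it is not smolagents' actual tool decorator or classes, and all names below are hypothetical.

```python
# Generic sketch of "a tool is just a function": register plain Python
# functions with descriptions, then dispatch by name. Not smolagents' API.

TOOLS = {}

def register_tool(description):
    """Decorator that records a function as a callable tool."""
    def wrapper(fn):
        TOOLS[fn.__name__] = {"fn": fn, "description": description}
        return fn
    return wrapper

@register_tool("Get the weather forecast for a city.")
def get_weather(city):
    # Stub: a real tool would call a weather API here.
    return f"Sunny in {city}"

def call_tool(name, **kwargs):
    """What an agent framework does after the LLM names a tool."""
    return TOOLS[name]["fn"](**kwargs)
```

    The registry's names and descriptions are what gets rendered into the LLM's prompt, and `call_tool` is the dispatch step the framework performs on the model's behalf.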

    Hugging Face smolagents are not the only currently available tool to create agentic systems. In particular, OpenAI released Swarm, which leverages routines and handoffs to have multiple agents coordinate with one another. Additionally, Microsoft introduced Magentic-One and AWS has its own Multi-Agent Orchestrator.

    Reply
  14. Tomi Engdahl says:

    The Future of Work and Human-AI Collaboration: Will AI Replace or Enhance Human Potential?
    https://maria.io/the-future-of-work-and-human-ai-collaboration-will-ai-replace-or-enhance-human-potential/

    What does the future of work look like in 10 or 20 years? How will humans and machines collaborate? Will AI replace human roles, or are there fundamental human capabilities that AI cannot replicate? Looking ahead, what possibilities will the future hold for European companies?

    In a recent thought-provoking podcast by Maria 01, a panel of AI startup founders delved into these questions, exploring AI’s impact on human capabilities and potential future paths.

    How Can Humans and Machines Collaborate? What Human Capabilities Can AI Not Replicate?
    Generative AI assistants, powered by Large Language Models (LLM), can significantly enhance the efficiency of professionals across various sectors. While they can aid individuals with lower-level skills in performing their tasks, their adoption may also reduce the number of entry-level positions. As employees become more efficient and automation and multi-purpose robotics evolve, companies may reduce their workforce. Many roles are likely to disappear, though new ones will also emerge. The rapid transition may create significant unemployment, particularly among less educated or less digitally oriented populations. Future advancements, such as neuro-links, may enable AI to respond directly to human thought, allowing for seamless human-machine interaction.

    While many consider empathy and collaboration to be inherently human, Jaakko Kaikuluoma suggested that AI could actually bolster these skills: “Empathy, collaboration, and teamwork are areas in which AI can develop and augment human capabilities. Machines can actually do a better job than many humans can.”

    Chris Petrie pointed out that empathy requires human interaction: “We’ll always desire that human-to-human interaction for these kind of inherently human traits like empathy.”

    Reply
  15. Tomi Engdahl says:

    Could AI duplicates solve a range of issues, from time management to knowledge dissemination?
    Highly skilled individuals, such as experts or business leaders, could create AI twins that possess the same knowledge as their human counterparts. For instance, a professor’s AI twin could help teach and mentor a larger number of students, disseminating knowledge efficiently. Would busy startup founders be interested in getting help from an AI twin?

    Chris admitted that he would struggle to trust it completely. “I think that’s where the bottleneck for me is. Humans, at the end of the day, need to adapt to learn to use these tools.”

    Lauri humorously added: “If it were an accurate copy, it might try to delegate tasks back to me, replicating my flaws! I’m not looking to work with people like me anyway; I want people to compensate for what I am.”

    https://maria.io/the-future-of-work-and-human-ai-collaboration-will-ai-replace-or-enhance-human-potential/

    Reply
  16. Tomi Engdahl says:

    CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings
    https://huggingface.co/papers/2501.01257

    Reply
  17. Tomi Engdahl says:

    Understanding And Preparing For The 7 Levels Of AI Agents
    https://www.forbes.com/sites/douglaslaney/2025/01/03/understanding-and-preparing-for-the-seven-levels-of-ai-agents/

    As the calendar flips to the second quarter of the century, conversations about the transformative potential of artificial intelligence are reaching a fever pitch. However, the buzz about AI is shifting from AI tools to creating and deploying AI agents. Many executives I speak with remain unsure about how to conceive, categorize, and capitalize upon the various agentic possibilities for their businesses. Understanding the evolution of AI agents—from simple reactive systems to hypothetical superintelligent entities—can provide a roadmap for organizations aiming to harness AI strategically.

    The following framework I offer for defining, understanding, and preparing for agentic AI blends foundational work in computer science with insights from cognitive psychology and speculative philosophy. Each of the seven levels represents a step-change in technology, capability, and autonomy.

    Level 1—Reactive Agents
    Responding to the Present
    At the most basic level are reactive agents, which operate entirely in the moment. These agents do not retain memories or learn from past experiences. Instead, they follow predefined rules to respond to specific inputs. Reactive systems have their roots in early AI research and finite state machines, foundational concepts that emerged in the mid-20th century through the work of pioneers like John McCarthy and Marvin Minsky.

    A quintessential example is a basic chatbot that answers questions based on keyword matching, or one that generates or translates content. These agents excel in environments where the scope of interaction is limited and predictable. For businesses, reactive agents can streamline repetitive tasks, such as handling customer queries or automating well-defined workflows.

    Level 2—Task-Specialized Agents
    Mastering a Specific Activity
    Task-specialized agents excel in somewhat narrow domains, often outperforming humans in specific tasks by collaborating with domain experts to complete well-defined activities. These agents are the backbone of many modern AI applications, from fraud detection algorithms to medical imaging systems.

    A task-specialized agent might power an e-commerce recommendation engine, ensuring customers see products they’re likely to purchase. In logistics, these agents optimize delivery routes to reduce costs and improve efficiency.

    Level 3—Context-Aware Agents
    Handling Ambiguity and Complexity
    Context-aware agents distinguish themselves by their ability to handle ambiguity, dynamic scenarios, and synthesize a variety of complex inputs. These agents analyze historical data, real-time streams, and unstructured information to adapt and respond intelligently, even in unpredictable scenarios. Their development owes much to advancements in machine learning and neural networks, championed by researchers like Geoffrey Hinton and Yann LeCun.

    Sophisticated examples include systems that analyze vast volumes of medical literature, patient records, and clinical data to assist doctors in diagnosing complex conditions. In the financial sector, context-aware agents evaluate transaction patterns, user behaviors, and external market conditions to detect potential fraud.

    Level 4—Socially Savvy Agents
    Understanding Human Behavior
    Socially savvy agents represent the intersection of AI and emotional intelligence. These systems understand and interpret human emotions, beliefs, and intentions, enabling richer interactions. The concept draws from cognitive psychology, particularly the “theory of mind,” which posits that understanding others’ mental states is crucial for social interaction. Researchers like Simon Baron-Cohen and Alan Leslie have advanced the understanding of theory of mind in cognitive science, which informs the development of these agents in AI.

    In customer service, socially savvy agents can identify frustration in a caller’s tone and adjust their responses accordingly. Advanced applications include AI-driven coaching platforms that provide empathetic feedback or negotiation bots capable of understanding subtle cues during business deals.

    Level 5—Self-Reflective Agents
    Achieving Inner Awareness and Betterment
    The idea of self-reflective agents ventures into speculative territory. These systems would be capable of introspection and self-improvement. The concept has roots in philosophical discussions about consciousness, first introduced by Alan Turing in his early work on machine intelligence and later explored by thinkers like David Chalmers.

    Self-reflective agents would analyze their own decision-making processes and refine their algorithms autonomously, much like a human reflects on past actions to improve future behavior. For businesses, such agents could revolutionize operations by continuously evolving strategies (not just processes) without human input.

    However, the journey to this level is fraught with challenges, including defining and gauging machine “self-awareness,” complex ethical considerations, and what is referred to as “model collapse” (in which an AI agent’s performance degrades by relying too much on itself rather than upon variegated inputs).

    Level 6—Generalized Intelligence Agents

    Spanning Domains
    Generalized intelligence agents, or artificial general intelligence (AGI), represent a long-standing aspiration in AI research. First envisioned by early pioneers like John McCarthy, AGI aims to create systems capable of performing any intellectual task a human can achieve. Unlike task-specialized agents, AGI is rooted in the idea of adaptability across a wide array of domains, requiring profound advancements in learning algorithms, reasoning, and contextual understanding.

    Recent progress in large language models (LLMs) hints at the potential for AGI. These systems demonstrate an ability to synthesize information across disciplines, balancing short-term objectives with long-term goals.

    Level 7—Superintelligent Agents
    Reaching Beyond Human Conception

    At the pinnacle of AI evolution lies the superintelligent agent. This hypothetical system would surpass human intelligence in all domains, enabling breakthroughs in science, economics, and governance. Popularized by thinkers like Nick Bostrom, superintelligence raises profound ethical and practical questions, and would likely require quantum computing-level technology.

    Potential problems that superintelligent agents could address include discovering cures for complex diseases by analyzing vast interconnected datasets and DNA, designing sustainable solutions for global environmental challenges, optimizing international economic systems, developing new methods of engineering or architecture, and solving our incomplete models of the universe, quantum physics, and the human brain. These culminant agents could also manage intricate geopolitical negotiations, peer into the future to mitigate catastrophic risks, optimize chaotic systems via infinite variable scenario planning, or conceive revolutionary solutions that redefine or invent new industries.

    Evolving Through the Levels
    For organizations, evolving from one level of AI agents to the next requires a combination of technological investment, cultural change, and strategic foresight. However, many limitations stem more from organizational imagination than technological constraints.

    Progression often involves iterative steps rather than leaps.

    By embracing AI not just as a tool, but as a strategic partner capable of driving innovation and creating value, and by understanding the levels of AI agents and the pathways to advance through them, organizations can position themselves at the forefront of their industry.

    Reply
  18. Tomi Engdahl says:

    Google AI Studio boss says AGI will “look a lot like a product release” after Sam Altman claimed the benchmark would pass with surprisingly little societal impact
    https://www.windowscentral.com/software-apps/google-ai-studio-boss-says-agi-will-look-a-lot-like-a-product-release

    Reply
  19. Tomi Engdahl says:

    LLM Comparison/Test: DeepSeek-V3, QVQ-72B-Preview, Falcon3 10B, Llama 3.3 70B, Nemotron 70B in my updated MMLU-Pro CS benchmark
    https://huggingface.co/blog/wolfram/llm-comparison-test-2025-01-02

    Reply
