3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

6,837 Comments

  1. Tomi Engdahl says:

    Use of large language models as artificial intelligence tools in academic research and publishing among global clinical researchers
    https://www.nature.com/articles/s41598-024-81370-6

    Reply
  2. Tomi Engdahl says:

    ByteDance Research Introduces 1.58-bit FLUX: A New AI Approach that Gets 99.5% of the Transformer Parameters Quantized to 1.58 bits
    https://www.marktechpost.com/2024/12/30/bytedance-research-introduces-1-58-bit-flux-a-new-ai-approach-that-gets-99-5-of-the-transformer-parameters-quantized-to-1-58-bits/

    Vision Transformers (ViTs) have become a cornerstone in computer vision, offering strong performance and adaptability. However, their large size and computational demands create challenges, particularly for deployment on devices with limited resources. Models like FLUX Vision Transformers, with billions of parameters, require substantial storage and memory, making them impractical for many use cases. These limitations restrict the real-world application of advanced generative models. Addressing these challenges calls for innovative methods to reduce the computational burden without compromising performance.

    Researchers from ByteDance Introduce 1.58-bit FLUX
    Researchers from ByteDance have introduced the 1.58-bit FLUX model, a quantized version of the FLUX Vision Transformer. This model reduces 99.5% of its parameters (11.9 billion in total) to 1.58 bits, significantly lowering computational and storage requirements. The process is unique in that it does not rely on image data, instead using a self-supervised approach based on the FLUX.1-dev model. By incorporating a custom kernel optimized for 1.58-bit operations, the researchers achieved a 7.7× reduction in storage and a 5.1× reduction in inference memory usage, making deployment in resource-constrained environments more feasible.
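
    For intuition, 1.58 bits per weight is log2(3): each weight takes one of three values {-1, 0, +1}, plus a shared floating-point scale. The sketch below uses absmean scaling in the style of BitNet b1.58; ByteDance's exact quantization scheme may differ.

```python
def ternary_quantize(weights):
    """Quantize a list of float weights to {-1, 0, +1} plus one fp scale.

    1.58 bits = log2(3): three possible values per weight. The scale is
    the mean absolute value ("absmean"), as in BitNet b1.58; this is an
    illustrative sketch, not ByteDance's actual method.
    """
    scale = sum(abs(w) for w in weights) / len(weights) or 1e-8
    quantized = [max(-1, min(1, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the ternary codes."""
    return [q * scale for q in quantized]

weights = [0.4, -1.2, 0.05, 0.9]
q, s = ternary_quantize(weights)   # q holds only -1, 0, or +1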

    Reply
  3. Tomi Engdahl says:

    Revealed: Samsung’s upcoming Galaxy S25 series flagship phones may come with Google’s best AI for free
    https://mobiili.fi/2024/12/31/paljastus-samsungin-tulevien-galaxy-s25-sarjan-huippupuhelinten-mukana-voi-saada-googlen-parhaimman-tekoalyn-ilmaiseksi/

    Reply
  4. Tomi Engdahl says:

    Forget ChatGPT — Google Gemini is my favorite AI product of the year
    Features
    By Nigel Powell published 2 days ago
    So much AI, so little time
    https://www.tomsguide.com/ai/google-gemini-is-my-ai-product-of-the-year-heres-why

    Reply
  5. Tomi Engdahl says:

    Introducing smolagents, a simple library to build agents
    https://huggingface.co/blog/smolagents

    Today we are launching smolagents, a very simple library that unlocks agentic capabilities for language models. Here’s a glimpse:
    from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

    agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())

    agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")

    What are agents?
    Any efficient system using AI will need to give LLMs some kind of access to the real world: for instance, the ability to call a search tool to get external information, or to act on certain programs in order to solve a task. In other words, LLMs should have agency. Agentic programs are the gateway to the outside world for LLMs.

    AI Agents are programs where LLM outputs control the workflow.

    Any system leveraging LLMs will integrate the LLM outputs into code. The influence of the LLM’s output on the code workflow is the level of agency of LLMs in the system.

    Note that with this definition, “agent” is not a discrete, 0 or 1 definition: instead, “agency” evolves on a continuous spectrum, as you give more or less power to the LLM on your workflow.
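
    The spectrum can be made concrete with stub code (the fake_llm helper and its canned replies are illustrative, not part of smolagents):

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call, with canned replies for the demo."""
    canned = {
        "classify": "refund",
        "pick_tool": "get_weather Paris",
    }
    return canned.get(prompt, "ok")

# Level 0 — simple processor: the output is used as data, never as control flow.
summary = fake_llm("summarize")

# Level 1 — router: the output selects one of a few predefined branches.
branch = fake_llm("classify")
handled = "refund flow" if branch == "refund" else "info flow"

# Level 2 — tool caller: the output decides which function runs, with what args.
tool_name, arg = fake_llm("pick_tool").split()
tools = {"get_weather": lambda city: f"Sunny in {city}"}
tool_result = tools[tool_name](arg)
```

    Each level hands the LLM more control over the program's flow, which is exactly the "continuous spectrum" of agency described above.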

    Agents are useful when you need an LLM to determine the workflow of an app. But they’re often overkill. The question is: do I really need flexibility in the workflow to efficiently solve the task at hand? If the pre-determined workflow falls short too often, that means you need more flexibility. Let’s take an example: say you’re making an app that handles customer requests on a surfing trip website.

    You could know in advance that the requests will fall into one of two buckets (based on user choice), and you have a predefined workflow for each of these two cases.

    Want some knowledge on the trips? ⇒ give them access to a search bar to search your knowledge base
    Want to talk to sales? ⇒ let them type in a contact form.

    If that deterministic workflow fits all queries, by all means just code everything! This will give you a 100% reliable system with no risk of error introduced by letting unpredictable LLMs meddle in your workflow. For the sake of simplicity and robustness, it’s advisable to default to no agentic behaviour at all.
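
    The two-bucket case needs no LLM in the loop at all; a minimal sketch, with made-up helper names:

```python
def search_knowledge_base(query: str) -> str:
    """Stand-in for a real knowledge-base search."""
    return f"Top result for: {query}"

def submit_contact_form(message: str) -> str:
    """Stand-in for a real contact-form handler."""
    return "Thanks! Sales will be in touch."

def handle_request(choice: str, payload: str) -> str:
    """Route a surf-trip website request along one of two fixed paths.

    No LLM involved: the user's explicit choice selects the branch,
    so behavior is fully deterministic and 100% predictable.
    """
    if choice == "knowledge":
        return search_knowledge_base(payload)
    elif choice == "sales":
        return submit_contact_form(payload)
    raise ValueError(f"unknown request type: {choice!r}")
```

    Because the branch condition comes from an explicit user choice rather than model output, nothing unpredictable can enter the workflow.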

    But what if the workflow can’t be determined that well in advance?

    That is where an agentic setup helps.

    In the above example, you could just make a multi-step agent that has access to a weather API for weather forecasts, Google Maps API to compute travel distance, an employee availability dashboard and a RAG system on your knowledge base.

    Until recently, computer programs were restricted to pre-determined workflows, trying to handle complexity by piling up if/else switches. They focused on extremely narrow tasks, like “compute the sum of these numbers” or “find the shortest path in this graph”. But actually, most real-life tasks, like our trip example above, do not fit in pre-determined workflows. Agentic systems open up the vast world of real-world tasks to programs!

    Code agents
    In a multi-step agent, at each step the LLM can write an action in the form of calls to external tools. A common format for writing these actions (used by Anthropic, OpenAI, and many others) is some variation of “write the action as a JSON blob of tool names and arguments, which the framework then parses to determine which tool to execute and with which arguments”.

    Writing actions in code rather than JSON-like snippets provides better:

    Composability: could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a Python function?
    Object management: how do you store the output of an action like generate_image in JSON?
    Generality: code is built to express simply anything you can have a computer do.
    Representation in LLM training data: plenty of high-quality code is already included in LLMs’ training data, which means they’re already trained for this!
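
    The contrast between JSON actions and code actions can be sketched directly (the tool name is illustrative, and a real agent would sandbox the exec):

```python
import json

def get_weather(city: str) -> str:
    """Stub tool standing in for a real weather API call."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# JSON-style action: the LLM emits a string the framework must parse
# and dispatch — one tool call per action, no composition.
json_action = '{"tool": "get_weather", "arguments": {"city": "Paris"}}'
call = json.loads(json_action)
result_json = TOOLS[call["tool"]](**call["arguments"])

# Code-style action: the LLM emits Python that calls and composes tools
# directly; chaining, loops, and intermediate variables come for free.
code_action = "result = get_weather('Paris').upper()"
namespace = {"get_weather": get_weather}
exec(code_action, namespace)   # a real agent would run this in a sandbox
result_code = namespace["result"]
```

    The JSON route requires the framework to define dispatch logic for every shape of action, while the code route reuses the language's own semantics for composition and object handling.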

    Support for any LLM: smolagents supports models hosted on the Hub loaded in their transformers version or through our inference API, but also models from OpenAI, Anthropic, and many others via our LiteLLM integration.

    smolagents is the successor to transformers.agents, and will be replacing it as transformers.agents gets deprecated in the future.

    Reply
  6. Tomi Engdahl says:

    Hands-on with LM Studio: how to run AI models on your own computer
    29.12.2024 14:14
    LM Studio is a free application that brings large language models to the home user’s own computer.
    https://www.mikrobitti.fi/testit/kokeilussa-lm-studio-nain-ajat-tekoalymalleja-omalla-tietokoneella/812fdb3f-a71e-4fda-b024-2ca2ffa260e8

    Reply
  7. Tomi Engdahl says:

    https://mobiili.fi/2024/12/29/tama-on-googlen-merkittavin-tavoite-ensi-vuodelle-liittyy-vahemman-yllattaen-tekoalyyn/

    CNBC reports that Google CEO Sundar Pichai and the rest of the company’s top leadership held their 2025 strategy meeting before the holiday break. According to a recording leaked from the meeting, Pichai stated that “scaling Gemini on the consumer side will be our biggest area of focus next year”.

    Google has been at the forefront of AI research and development for years, but at the end of 2022 OpenAI managed to surprise everyone with its ChatGPT service, which has since risen to enormous popularity. Having rapidly grown to over 250 million weekly users, ChatGPT has become the leading star of consumer-facing AI services.

    Google, however, is not conceding the race: already during 2024 it released AI announcements at a brisk pace, most recently the new-generation Gemini 2.0 AI models in December.

    Reply
  8. Tomi Engdahl says:

    https://www.howtogeek.com/how-i-transformed-chatgpt-into-a-project-management-system/

    Use ChatGPT’s Custom Instructions & Memory features to create a basic project management system.
    Custom Instructions help organize tasks into categories, prioritize, and manage projects effectively.
    Regular maintenance (like clearing completed tasks) keeps the system running smoothly across chat sessions.

    Reply
  9. Tomi Engdahl says:

    Fun
    New concept: AaS
    AI as a Service

    Reply
  10. Tomi Engdahl says:

    Wireless audio SoC integrates AI processing
    https://www.edn.com/wireless-audio-soc-integrates-ai-processing/?fbclid=IwZXh0bgNhZW0CMTEAAR005DgFmqtrLLt3udOJyjVH2CPfhDT_LGzvQ88lG2iqCUKCrJNQDNtq6D4_aem_9SBGI7PAdhVcOUt-7-fR_g

    Airoha Technology’s AB1595 Bluetooth audio chip features a 6-core architecture and a built-in AI hardware accelerator. It consolidates functions typically spread across multiple chips into a single SoC and achieves Microsoft Teams Open Office certification.

    Reply
