3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

Comments

  1. Tomi Engdahl says:

    Generative AI tools are here, but who’s using them?
    Generative AI has much promise. But the road between here and delivering on those promises looks to be a lot longer than it seemed when ChatGPT first dropped in November 2022.
    https://www.techtarget.com/searchcontentmanagement/opinion/Generative-AI-tools-are-here-but-whos-using-them

  2. Tomi Engdahl says:

    Cloudflare’s new free tool stops bots from scraping your website content to train AI
    AI bots accessed around 39% of the top one million ‘internet properties’ using Cloudflare in June of 2024, according to the company
    https://www.zdnet.com/article/cloudflares-new-free-tool-stops-bots-from-scraping-your-website-content-to-train-ai/#google_vignette

  3. Tomi Engdahl says:

    Hacker group claims it leaked internal Disney Slack messages over AI concerns
    https://www.cnn.com/2024/07/15/business/internal-disney-slack-leak-hacker-group/index.html

    New York (CNN)

    An activist hacking group claimed it leaked thousands of Disney’s internal messaging channels, which included information about unreleased projects, raw images, computer code, and some logins.

    Nullbulge, the “hacktivist group,” claimed responsibility for the breach and said it leaked roughly 1.2 terabytes of information from Disney’s Slack, a communications platform. In an email on Monday to CNN, the group claimed it gained access through “a man with Slack access who had cookies.” The email also claimed the group was based out of Russia.

    “Disney was our target due to how it handles artist contracts, its approach to AI, and its pretty blatant disregard for the consumer,” the hacking group said over email.

  4. Tomi Engdahl says:

    WHISTLEBLOWERS SAY OPENAI BROKE PROMISE TO RIGOROUSLY TEST AI FOR DANGER BEFORE RELEASING
    https://futurism.com/the-byte/openai-accusations-promise-test-ai

  5. Tomi Engdahl says:

    Using artificial intelligence to make quantum computers a reality
    https://www.earth.com/news/using-artificial-intelligence-ai-to-make-quantum-computers-reality/

    Have you ever considered the potential of artificial intelligence (AI) to unlock the secrets of advanced quantum computing?

    This once seemingly impossible feat may soon become a reality, as suggested by new research from Australia’s national science agency, CSIRO.

    AI and quantum computing noise
    The research, published in the prestigious Physical Review Research journal, presents a fascinating and important concept.

    It indicates AI’s remarkable potential to process and resolve quantum errors, famously termed ‘qubit noise’.

    Now, why do these quantum errors matter so much in the universe of quantum computing?

    This noise, which arises from sources such as environmental interference and imperfections in the quantum hardware, is widely regarded as the largest hurdle in turning quantum computers from purely experimental devices into practical, everyday tools.
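
    The article stays high-level, but the core idea, a model that learns to map noisy error syndromes to corrections, can be shown at toy scale. The sketch below is my own illustration, not CSIRO’s method; the 3-qubit repetition code and the 5% readout noise are assumptions:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def sample(n):
        # Which qubit flipped: 0, 1, 2, or 3 meaning "no error".
        errors = rng.integers(0, 4, size=n)
        # Syndrome bits: parities of qubit pairs (0,1) and (1,2) after the flip.
        s1 = np.isin(errors, [0, 1]).astype(int)
        s2 = np.isin(errors, [1, 2]).astype(int)
        # Imperfect measurement: each syndrome bit is misread 5% of the time.
        flips = rng.random((n, 2)) < 0.05
        X = np.stack([s1, s2], axis=1) ^ flips
        return X, errors

    X_train, y_train = sample(20000)
    decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    X_test, y_test = sample(2000)
    print("decoder accuracy:", decoder.score(X_test, y_test))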

  6. Tomi Engdahl says:

    Goodbye Manual Prompting, Hello Programming With DSPy
    The DSPy framework aims to resolve consistency and reliability issues by prioritizing declarative, systematic programming over manual prompt writing.
    https://thenewstack.io/goodbye-manual-prompting-hello-programming-with-dspy/

    The development of scalable and optimized AI applications using large language models (LLMs) is still in its growing stages. Building applications based on LLMs is complex and time-consuming due to the extensive manual work involved, such as writing prompts.

    Prompt writing is the most important part of any LLM application, as it helps us extract the best possible results from the model. However, crafting an optimized prompt forces developers to rely heavily on trial and error, wasting significant time until the desired result is achieved.
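
    For flavor, here is roughly what that looks like in DSPy itself, a minimal sketch based on its documented API (the model identifier is a placeholder): you declare a typed signature for the step and let the framework compile the prompt, instead of hand-writing it.

    import dspy

    # Point DSPy at an LM backend (model identifier is illustrative).
    dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

    class AnswerFromContext(dspy.Signature):
        """Answer the question using only the given context."""
        context = dspy.InputField()
        question = dspy.InputField()
        answer = dspy.OutputField(desc="a short, factual answer")

    # DSPy builds (and can later optimize) the prompt behind this module.
    qa = dspy.ChainOfThought(AnswerFromContext)
    result = qa(context="DSPy compiles declarative LM programs.",
                question="What does DSPy do?")
    print(result.answer)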

  7. Tomi Engdahl says:

    “A conversation alone can lead us to think that an agent that looks and works very differently from us can have a mind.” https://trib.al/AW4pbUr

  8. Tomi Engdahl says:

    James Ferguson, founder of Macrostrategy Partners, believes the ongoing AI boom is a bubble, as reported by Bloomberg and Business Insider, among others.

    According to Ferguson, the current AI chip boom shows the hallmarks of a financial bubble. In his view, the situation resembles the dot-com bubble of the late 1990s: heavy investment is flowing into a new technology even though its actual impact is not yet known.

    Ferguson says that, despite the hype, generative AI has few practical use cases. On top of that, AI requires a great deal of energy just to operate.

    “The end result is something that is very expensive and that has not yet really proven itself anywhere outside a few narrow applications,” Ferguson says.

    The valuations of AI companies have surged recently: Nvidia briefly became the world’s most valuable company, and TSMC rose to become Asia’s most valuable listed company.

    https://www.mikrobitti.fi/uutiset/tekoaly-on-rajahtamaisillaan-oleva-kupla/abbc86c7-3ae2-4297-b161-c59050e6136a

  9. Tomi Engdahl says:

    Adopting AI takes active leadership – otherwise a hard landing may follow
    https://www.telia.fi/yrityksille/artikkelit/artikkeli/tekoalyn-kayttoonotto-vaatii-aktiivista-johtajuutta

  10. Tomi Engdahl says:

    https://www.tomshardware.com/tech-industry/artificial-intelligence/former-tesla-ai-director-reproduces-gpt-2-in-24-hours-for-only-672

    Former Tesla AI Director reproduces GPT-2 in 24 hours for only $672 — GPT-4 costs $100 million to train

  11. Tomi Engdahl says:

    The intense battle to stop AI bots from taking over the internet
    Artificial intelligence systems need to be trained on text – which has led their creators to gather up words from right across the web
    https://www.independent.co.uk/tech/ai-bots-artificial-intelligence-scraper-b2574865.html
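
    For sites that want to opt out the polite way, the usual first step is a robots.txt entry naming the crawlers AI vendors have publicly documented. A sketch, advisory only, since a scraper can simply ignore it:

    # robots.txt: ask documented AI training crawlers to stay away
    User-agent: GPTBot            # OpenAI's training crawler
    Disallow: /

    User-agent: CCBot             # Common Crawl, widely mined for training data
    Disallow: /

    User-agent: Google-Extended   # Google's AI-training opt-out token
    Disallow: /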

  12. Tomi Engdahl says:

    Develop a Cloud-Hosted RAG App With an Open Source LLM
    Follow this step-by-step guide to create a custom AI application using BentoML, LangChain and MyScaleDB.
    https://thenewstack.io/develop-a-cloud-hosted-rag-app-with-an-open-source-llm/
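
    Independent of the specific stack (BentoML, LangChain, MyScaleDB), the RAG loop itself is small. Below is a framework-free sketch of that loop; the hash-based embed() and the toy corpus are stand-ins I made up, not the tutorial’s code, and in the real app an embedding model and the vector database take their place:

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Stand-in for a real embedding model: hash words into a
        # fixed-size bag-of-words vector, normalized to unit length.
        v = np.zeros(256)
        for word in text.lower().split():
            v[hash(word) % 256] += 1.0
        n = np.linalg.norm(v)
        return v / n if n else v

    docs = [
        "BentoML packages and serves machine learning models.",
        "MyScaleDB is a vector database built on ClickHouse.",
        "LangChain chains LLM calls together with tools and retrievers.",
    ]
    doc_vecs = np.stack([embed(d) for d in docs])

    def retrieve(query: str, k: int = 2) -> list[str]:
        sims = doc_vecs @ embed(query)  # cosine similarity on unit vectors
        return [docs[i] for i in np.argsort(sims)[::-1][:k]]

    query = "What does BentoML do?"
    context = "\n".join(retrieve(query))
    # In the real app this prompt goes to the hosted open source LLM.
    print(f"Answer using only this context:\n{context}\n\nQuestion: {query}")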

  13. Tomi Engdahl says:

    Coding From Scratch Creates New Risks
    The good news for organizations is that CodeOps combines AI and human ingenuity to minimize these risks while saving time and money.
    https://thenewstack.io/coding-from-scratch-creates-new-risks/

    Digital assets, including apps and websites, are a must-have for organizations, and those that are innovative, intuitive, and fun to use can go a long way toward building long-lasting customer relationships. Creativity helps businesses stand out in a crowded marketplace, but many fail to realize that they don’t need to reinvent the wheel and start the app development process from scratch.

    In many new app development projects, a significant portion of the required code has already been written — up to 70% is often readily available. This code may originate from open source projects or have been previously developed by developers within the organization.

    Despite the abundance of existing code, efforts to prioritize code reuse have historically faced challenges. Solutions such as low- or no-code platforms often force disruption and demand new, non-transferable skill sets, contributing to resistance and failure. Many of these solutions also lack the technical maturity to deliver on their promises.

    This is why organizations turn to CodeOps, an AI-driven software development process prioritizing systematic code reuse. This helps teams avoid wasting time reinventing the wheel and, more importantly, significantly reduces the risks associated with writing code from scratch, including:

  14. Tomi Engdahl says:

    Study: AI can rob you of sleep and increase alcohol use
    AI | A new study points to the harms that excessive use of generative AI can cause in workplaces. According to the study, AI should not be thought of merely as a way to speed up work if an employer wants to minimize the problems it causes.
    https://www.hs.fi/visio/art-2000010569368.html

    Pekka, who works at a large listed company, opens his computer in the morning; its AI-assisted assistant checks his calendar, goes through incoming emails, and handles other routine tasks. The assistant leaves Pekka time to focus on more demanding work and helps when needed. Work moves along quickly, just as the company’s management hopes, and no time is lost being stuck on problems.

  15. Tomi Engdahl says:

    OpenAI could be on the brink of bankruptcy in under 12 months, with projections of $5 billion in losses
    By Kevin Okemwa
    OpenAI might need another round of funding to remain afloat.
    https://www.windowscentral.com/software-apps/openai-could-be-on-the-brink-of-bankruptcy-in-under-12-months-with-projections-of-dollar5-billion-in-losses?fbclid=IwZXh0bgNhZW0CMTEAAR3Q11gnyBhMJ00TThabsO7PLbGC976bB5tC63KO5NkU2w_azjyrCFDYCRY_aem_ghCoI8VKDqehi6JWWlYB2g

    What you need to know
    OpenAI is reportedly on the verge of bankruptcy with projections of a $5 billion loss.
    The startup spends $7 billion on training its AI models and $1.5 billion on staffing.
    The ChatGPT maker’s operating costs aren’t covered by the approximately $3.5 billion it generates in revenue.

    The AI boom is placing the major tech corporations invested in the landscape on a profitable path. In the past few months, we’ve watched Microsoft, Apple, and NVIDIA battle for the world’s most valuable company crown. Market analysts attribute their growth in revenue and profits to their early investment in and adoption of the technology across their products and services.

    Ironically, OpenAI, a key player in the AI landscape, could make losses amounting to $5 billion in 2024. According to a report by The Information, the ChatGPT maker might be on the brink of bankruptcy, with projections indicating it could run out of cash in the next 12 months.

    For context, OpenAI spends up to $700,000 daily to keep ChatGPT running. The amount is likely to fluctuate as the model becomes more sophisticated and advanced.

    The report sheds more light on OpenAI’s financial status, citing that the firm is well on its way to spending a whopping $7 billion on training its AI models and an additional $1.5 billion on staffing. These expenses alone put it miles ahead of its rivals’ projected spending for 2024.

    OpenAI reportedly receives discounted access to Microsoft’s Azure services.

    According to a report by Appfigures, GPT-4o’s launch led to the “biggest spike ever” in OpenAI’s ChatGPT revenue and downloads on mobile. The startup generates up to $2 billion annually from ChatGPT and an additional $1 billion from LLM access fees, translating to an approximate total revenue of between $3.5 billion and $4.5 billion annually.

    However, this barely covers the firm’s operational costs.
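
    The reported figures make the arithmetic plain: roughly $7 billion for training plus $1.5 billion for staffing is about $8.5 billion in costs; against roughly $3.5 billion in revenue, that leaves a shortfall of about $5 billion.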

    It’s worth noting that the company has already gone through seven rounds of funding, raising over $11 billion, and is currently valued at $80 billion. It’s also reported that OpenAI is running near total capacity, with 290,000 of its 350,000 servers dedicated to its AI assistant, ChatGPT.

  16. Tomi Engdahl says:

    Hollywood video game performers are going on strike because they worry studios could train AI to copy them
    https://fortune.com/2024/07/26/hollywood-video-game-performers-strike-worry-studios-train-ai-copy-them/

  17. Tomi Engdahl says:

    Study: AI “inbreeding” may cause model collapse for tools like ChatGPT, Microsoft Copilot
    By Jez Corden
    It’s like Game of Thrones, but for artificial intelligence large language models.
    https://www.windowscentral.com/software-apps/study-ai-incest-may-cause-model-collapse-for-tools-like-chatgpt-microsoft-copilot
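
    The mechanism is easy to caricature numerically: fit a distribution to data, sample from the fit, refit on the samples, and repeat. Each refit loses tail information, and in expectation the estimated spread shrinks every generation. A minimal sketch (my toy illustration, not the study’s experiment):

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=20)  # the "human" data

    for gen in range(30):
        mu, sigma = data.mean(), data.std()
        print(f"gen {gen:2d}: mean={mu:+.3f} std={sigma:.3f}")
        # The next generation trains only on the previous model's output.
        data = rng.normal(loc=mu, scale=sigma, size=20)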

  18. Tomi Engdahl says:

    OpenAI’s SearchGPT is based on the GPT-4 model; it searches for and summarizes real-time information from the internet and links to its sources for the user. The search engine’s interface closely resembles ChatGPT’s, and SearchGPT answers the user’s questions with fresh information retrieved from the web along with relevant links. SearchGPT can also be asked to help interpret, for example, images, videos, and graphs.

    https://muropaketti.com/tietotekniikka/tietotekniikkauutiset/openai-julkisti-tekoalyyn-pohjautuvan-searchgpt-hakukoneen/

    For now, SearchGPT is in a closed test.

  19. Tomi Engdahl says:

    “Once machines take over the process of doing science and engineering,” one expert said, “the progress is so quick, you can’t keep up.” Even if and when things go wrong.

    What happens if AI grows smarter than humans? The answer worries scientists.
    Some AI experts have begun to confront the ‘Singularity.’ What they see scares them.
    https://www.popsci.com/science/ai-singularity/?fbclid=IwZXh0bgNhZW0CMTEAAR25kdcv1TRqfD_kkeKsoCz-iYpqRmGTMMEaGdGk1i_7564IPsWKo_VlXgo_aem_jYN4zbNlyQe1ZVeJw80jDw

    In 1993, computer scientist and sci-fi author Vernor Vinge predicted that within three decades, we would have the technology to create a form of intelligence that surpasses our own. “Shortly after, the human era will be ended,” Vinge said.

    As it happens, 30 years later, the idea of an artificially created entity that can surpass—or at least match—human capabilities is no longer the domain of speculators and authors. Ranks of AI researchers and tech investors are seeking what they call artificial general intelligence (AGI): an entity capable of human-level performance at all kinds of intellectual tasks. If humans produce a successful AGI, some researchers now believe, “the end of the human era” will no longer be a vague, distant possibility.

  20. Tomi Engdahl says:

    Microsoft and Stanford University Researchers Introduce Trace: A Groundbreaking Python Framework Poised to Revolutionize the Automatic Optimization of AI Systems
    https://www.marktechpost.com/2024/07/28/microsoft-and-stanford-university-researchers-introduce-trace-a-groundbreaking-python-framework-poised-to-revolutionize-the-automatic-optimization-of-ai-systems/

  21. Tomi Engdahl says:

    How to Use Self-Healing Code to Reduce Technical Debt
    The idea of self-healing code with LLMs is exciting, but balancing automation and human oversight is still crucial.
    https://thenewstack.io/how-to-use-self-healing-code-to-reduce-technical-debt/
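
    The pattern behind the headline is worth sketching: run the code, catch the failure, hand the traceback to an LLM for a proposed patch, and keep a human in the approval loop. Everything here is illustrative; llm_propose_fix is a placeholder I made up, not a real API:

    import traceback

    def llm_propose_fix(source: str, error: str) -> str:
        # Placeholder for a call to your LLM of choice with the failing
        # source and its traceback; returns a proposed patched source.
        raise NotImplementedError

    def run_with_healing(source: str, max_attempts: int = 3) -> None:
        for attempt in range(max_attempts):
            try:
                exec(compile(source, "<snippet>", "exec"), {})
                print("succeeded")
                return
            except Exception:
                patch = llm_propose_fix(source, traceback.format_exc())
                # Human oversight: never auto-apply the model's patch.
                print(f"--- proposed patch, attempt {attempt + 1} ---\n{patch}")
                if input("apply? [y/N] ").strip().lower() != "y":
                    return
                source = patch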

  22. Tomi Engdahl says:

    OpenAI’s GPT-5 is coming out soon. Here’s what to expect, according to OpenAI customers and developers.
    https://www.businessinsider.com/openai-gpt5-gpt4-expectations-ai-2024-7

  23. Tomi Engdahl says:

    Llama 3.1 vs GPT-4o vs Claude 3.5: A Comprehensive Comparison of Leading AI Models
    https://www.marktechpost.com/2024/07/27/llama-3-1-vs-gpt-4o-vs-claude-3-5-a-comprehensive-comparison-of-leading-ai-models/

    The landscape of artificial intelligence has seen significant advancements with the introduction of state-of-the-art language models. Among the leading models are Llama 3.1, GPT-4o, and Claude 3.5. Each model brings unique capabilities and improvements, reflecting the ongoing evolution of AI technology. Let’s analyze these three prominent models, examining their strengths, architectures, and use cases.

    Llama 3.1: Open Source Innovation

    GPT-4o: Versatility and Depth

    GPT-4o, a variant of OpenAI’s GPT-4, is designed to balance versatility and depth in language understanding and generation. This model generates coherent, contextually accurate text across various applications, from creative writing to technical documentation.

    Claude 3.5: Speed and Precision

    Claude 3.5, developed by Anthropic, is designed to raise the industry standard for intelligence, emphasizing speed and precision.

  24. Tomi Engdahl says:

    OpenAI’s GPT-4o Voice Mode Says It Needs to Breathe
    https://futurism.com/the-byte/gpt-4o-needs-to-breathe

  25. Tomi Engdahl says:

    AI music startups say copyright violation is just rock and roll / Suno and Udio say their training methods fall under fair use and accuse major record labels of stifling industry competition.
    https://www.theverge.com/2024/8/2/24211842/ai-music-riaa-copyright-lawsuit-suno-udio-fair-use

    Several weeks after being targeted with copyright infringement lawsuits, AI music startups Suno and Udio have now accused the record labels that filed them of attempting to stifle competition within the music industry. Both companies admitted to training their music-generating AI models on copyrighted materials in separate legal filings, arguing that doing so is lawful under fair-use doctrine.

    The lawsuits against Suno and Udio were filed in June by the Recording Industry Association of America (RIAA), a group representing major record labels like Universal Music Group (UMG), Sony Music Entertainment, and Warner Records. Both cases accuse Suno and Udio of committing “copyright infringement involving unlicensed copying of sound recordings on a massive scale.” The RIAA is seeking damages of up to $150,000 for every work infringed.

  26. Tomi Engdahl says:

    In a blog post accompanying its own filing, Suno said that major record labels had misconceptions about how its AI music tools work, likening its model training to “a kid learning to write new rock songs by listening religiously to rock music” as opposed to just copying and repeating copyrighted tracks. Suno also admitted to training its model on online music, noting that other AI providers like OpenAI, Google, and Apple also source their training data from the open internet.
    https://www.theverge.com/2024/8/2/24211842/ai-music-riaa-copyright-lawsuit-suno-udio-fair-use

  27. Tomi Engdahl says:

    Taco Bell To Bring Voice AI Ordering To Hundreds Of US Drive-Throughs
    https://hackaday.com/2024/08/02/taco-bell-to-bring-voice-ai-ordering-to-hundreds-of-us-drive-throughs/

    Drive-throughs are a popular feature at fast-food places, where you can get some fast grub without even leaving your car. For the fast-food companies running them, they are also a big focus of automation, with the ideal being a voice assistant that can take orders and pass them on to the (still human) staff. This is probably in lieu of being able to make customers use the touchscreen-equipped order kiosks that are common these days. Pushing for this drive-through automation change now is Taco Bell, or more specifically its parent company, Yum Brands.

    https://finance.yahoo.com/news/taco-bell-bringing-ai-hundreds-160018683.html

  28. Tomi Engdahl says:

    This new type of artificial neural network is inspired by two Soviet mathematicians, and could make it easier to understand how an AI model arrived at its conclusions.

    A New Type of Neural Network Is More Interpretable
    Kolmogorov-Arnold Networks could point physicists to new hypotheses
    https://spectrum.ieee.org/kan-neural-network?share_id=8355973&socialux=facebook&utm_campaign=RebelMouse&utm_content=IEEE+Spectrum&utm_medium=social&utm_source=facebook&fbclid=IwZXh0bgNhZW0CMTEAAR0OrXxby280onow3Gd088i2PTTDYJ5kf-LYr4gpK4MJhsQjnXIlTNd6R50_aem_XcjHBlYByXhcq5EbJYSA2Q

    Artificial neural networks—algorithms inspired by biological brains—are at the center of modern artificial intelligence, behind both chatbots and image generators. But with their many neurons, they can be black boxes, their inner workings uninterpretable to users.

    Researchers have now created a fundamentally new way to make neural networks that in some ways surpasses traditional systems. These new networks are more interpretable and also more accurate, proponents say, even when they’re smaller. Their developers say the way they learn to represent physics data concisely could help scientists uncover new laws of nature.

    One way to think of neural networks is by analogy with neurons, or nodes, and synapses, or connections between those nodes. In traditional neural networks, called multi-layer perceptrons (MLPs), each synapse learns a weight—a number that determines how strong the connection is between those two neurons. The neurons are arranged in layers, such that a neuron from one layer takes input signals from the neurons in the previous layer, weighted by the strength of their synaptic connection. Each neuron then applies a simple function, called an activation function, to the sum total of its inputs.

    In the new architecture, the synapses play a more complex role. Instead of simply learning how strong the connection between two neurons is, they learn the full nature of that connection—the function that maps input to output. Unlike the activation function used by neurons in the traditional architecture, this function could be more complex—in fact a “spline” or combination of several functions—and is different in each instance. Neurons, on the other hand, become simpler—they just sum the outputs of all their preceding synapses. The new networks are called Kolmogorov-Arnold Networks (KANs), after two mathematicians who studied how functions could be combined. The idea is that KANs would provide greater flexibility when learning to represent data, while using fewer learned parameters.
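
    To make the contrast concrete, here is a minimal numerical sketch of one KAN-style layer. It is my own simplification (a fixed Gaussian basis stands in for the paper’s B-splines), but it shows the essential inversion: each edge carries its own learned one-dimensional function, and the neurons merely sum.

    import numpy as np

    class ToyKANLayer:
        def __init__(self, n_in, n_out, n_basis=8, seed=0):
            rng = np.random.default_rng(seed)
            self.centers = np.linspace(-2.0, 2.0, n_basis)  # basis grid
            # One coefficient vector per edge: shape (n_in, n_out, n_basis).
            self.coef = rng.normal(0.0, 0.1, (n_in, n_out, n_basis))

        def forward(self, x):  # x: shape (n_in,)
            # Gaussian bumps evaluated at each input: shape (n_in, n_basis).
            B = np.exp(-(x[:, None] - self.centers[None, :]) ** 2)
            # Edge functions phi[i, j] = sum_k coef[i, j, k] * B[i, k].
            phi = np.einsum("ik,ijk->ij", B, self.coef)
            return phi.sum(axis=0)  # each output neuron only sums: (n_out,)

    layer = ToyKANLayer(n_in=3, n_out=2)
    print(layer.forward(np.array([0.5, -1.0, 0.2])))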

    “It’s like an alien life that looks at things from a different perspective but is also kind of understandable to humans.”
    —Ziming Liu, computer scientist at MIT

    What’s more, the researchers could visually map out the KANs and look at the shapes of the activation functions, as well as the importance of each connection. Either manually or automatically they could prune weak connections and replace some activation functions with simpler ones, like sine or exponential functions. Then they could summarize the entire KAN in an intuitive one-line function (including all the component activation functions), in some cases perfectly reconstructing the physics function that created the dataset.

    “In the future, we hope that it can be a useful tool for everyday scientific research,” Liu says.

    One downside of KANs is that they take longer per parameter to train—in part because they can’t take advantage of GPUs. But they need fewer parameters. Liu notes that even if KANs don’t replace giant CNNs and transformers for processing images and language, training time won’t be an issue at the smaller scale of many physics problems.

    KAN: Kolmogorov-Arnold Networks
    https://arxiv.org/abs/2404.19756

  29. Tomi Engdahl says:

    The EU’s new AI Act: pioneering companies stand out through responsible AI practices
    The European Parliament approved the world’s first AI Act in plenary session in March 2024. Although the multi-stage law will only be applied in full two years after it enters into force, companies should start verifying in good time that the AI systems and data governance they use are compliant.
    https://www.dna.fi/yrityksille/blogi/-/blogs/eu-n-uusi-tekoalysaados-edellakavijayritykset-erottautuvat-vastuullisella-tekoalytoiminnalla?utm_source=facebook&utm_medium=social&utm_content=KAN-artikkeli-eu-n-uusi-tekoalysaados-edellakavijayritykset-erottautuvat-vastuullisella-tekoalytoiminnalla&utm_campaign=P_KAN_24-31-35_artikkelikampanja__&fbclid=IwZXh0bgNhZW0BMAABHT3VQve3g84HsVHzPGpK8CqNyCPD-35JBt72Oa5WG-YQa8mM6riXatXCrg_aem_WosSlXEiDIXbjc8xzBgYCw

  30. Tomi Engdahl says:

    The tool adds a pattern to how the large language model (LLM) writes its output, allowing OpenAI to detect if ChatGPT created it. However, the pattern remains unnoticeable to humans, thereby not impacting the LLM’s quality. Internal documentation says that the tool is 99.9% effective in detecting ChatGPT’s output, but OpenAI has yet to release it.

    While text watermarking is highly effective for detecting content written by ChatGPT, it cannot work with output from other LLMs like Gemini AI or Llama 3.

    https://www.tomshardware.com/tech-industry/artificial-intelligence/openai-has-built-a-text-watermarking-method-to-detect-chatgpt-written-content-company-has-mulled-its-release-over-the-past-year

  32. Tomi Engdahl says:

    Study finds that including “AI” in product descriptions makes them less appealing to consumers
    When indifference turns into active dislike
    https://www.techspot.com/news/104122-study-finds-including-ai-product-descriptions-makes-them.html?fbclid=IwZXh0bgNhZW0CMTEAAR0OKH_3s7aBlIkXNupaKf-fL0mwiDZ7dlEuSMdaV6hlDTYYEyHuLUDxwUI_aem_r0-HpMRvuGg1hOGtLDQcAA

    Facepalm: Companies love to shoehorn the term AI into their product descriptions, even if doing so seems weird or, at times, just stupid. They believe the inclusion of the initialism will appeal to consumers who want the latest cutting-edge tech. The reality, though, is that many people are put off when a product reveals its AI smarts.

    A study by Washington State University, published in the Journal of Hospitality Marketing & Management, surveyed 1,000 adults to evaluate the link between AI disclosure and consumer behavior.

    It was found that one group of participants was much less likely to buy a smart television when it included “AI” in its description. Another group that saw the same description just without the AI part was much more likely to buy the TV.

    The negative impact of using the term artificial intelligence was more pronounced in “high-risk” purchases such as expensive TVs, medical devices, or financial services.

    Low-risk products such as vacuum cleaners and service delivery robots that mentioned AI weren’t perceived quite as badly, but people still preferred their non-AI alternatives.

    “When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions,” Cicek said.

    Cicek summarized that marketers should be careful how they present artificial intelligence in their product descriptions, and that emphasizing the term might not be the best approach.

    Earlier this month, a poll asked if PC fans would be willing to pay extra money for hardware with AI capabilities and features. Over 22,000 people, a massive 84% of the overall vote, said no, they would not pay more. More than 2,200 participants said they didn’t know, while just under 2,000 voters said yes.

    Placing AI as the highlight of a product has become commonplace these days – you just have to look at AMD’s Strix Point mobile chips, which carry the Ryzen AI 300 branding. There are also AI PCs, apps, sales, services, and pretty much everything else you can think of.

  33. Tomi Engdahl says:

    New generative AI tools to boost biomedical research
    Specialized AIs, trained on highly curated data, show the potential to aid in the discovery of new materials and in the treatment of diseases, ranging from cancer to Alzheimer’s
    https://www.nature.com/articles/d42473-023-00458-1

  34. Tomi Engdahl says:

    OpenAI has the tech to watermark ChatGPT text—it just won’t release it
    Some say watermarking is the responsible thing to do, but it’s complicated.
    https://arstechnica.com/ai/2024/08/openai-has-the-tech-to-watermark-chatgpt-text-it-just-wont-release-it/?utm_source=facebook&utm_medium=social&utm_campaign=dhfacebook&utm_content=null&fbclid=IwZXh0bgNhZW0CMTEAAR2EQ1wxsuuQhYyS51f2kwUGneU4jjEoX5vMUnZZ5CAbL5dEGdpsscUumRk_aem_DBfZNjmWhf7bQ_6Fwjl_Lg

    According to The Wall Street Journal, there’s internal conflict at OpenAI over whether to release a watermarking tool that would allow people to test text to see whether it was generated by ChatGPT.

    To deploy the tool, OpenAI would make tweaks to ChatGPT that would lead it to leave a trail in the text it generates that can be detected by a special tool. The watermark would be undetectable by human readers without the tool, and the company’s internal testing has shown that it does not negatively affect the quality of outputs. The detector would be accurate 99.9 percent of the time. It’s important to note that the watermark would be a pattern in the text itself, meaning it would be preserved if the user copies and pastes the text or even if they make modest edits to it.
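
    OpenAI hasn’t disclosed how its watermark works. For intuition, the best-known academic scheme (the “green list” watermark of Kirchenbauer et al.) conveys the flavor: seed a hash on the preceding token to mark half the vocabulary “green,” nudge generation toward green tokens, then detect by counting green hits. A toy word-level detector in that style, purely illustrative and not OpenAI’s method:

    import hashlib

    def is_green(prev_token: str, token: str) -> bool:
        # Hash seeded on the previous token; half of all tokens are
        # "green" in each context.
        digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
        return digest[0] % 2 == 0

    def green_fraction(text: str) -> float:
        tokens = text.lower().split()
        hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
        return hits / max(len(tokens) - 1, 1)

    # Unwatermarked text hovers near 0.5; a generator that prefers green
    # tokens pushes this fraction high enough to flag statistically.
    print(green_fraction("the quick brown fox jumps over the lazy dog"))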

    Among those who have shown the most interest in using the tools are teachers and professors, who have seen a rapid rise of ChatGPT-generated school papers and other assignments. But OpenAI’s argument is this: 99.9 percent accuracy sounds like a lot, but imagine that one among 1,000 college papers was falsely labeled as cheating. That could lead to some unfortunate consequences for innocent students.

    OpenAI hasn’t rolled out the watermarking feature yet and is also investigating alternative solutions that are still in development, such as including cryptographically signed metadata in outputs.

    OpenAI previously released and supported an AI text detection tool.

  35. Tomi Engdahl says:

    Two new threat modes can flip generative AI model behavior from serving your GenAI applications to attacking them, according to three security researchers.

    While not as dangerous as the fictional Skynet scenario from the Terminator movie franchise, the demonstrated PromptWare and Advanced PromptWare attacks do provide a glimpse into the “substantial harm” that a jailbroken AI system can cause. From forcing an app into a denial-of-service attack to using an app’s AI to change prices in an e-commerce database, the threats are not only very real but are also likely to be used by malicious actors unless the potential harms of jailbreaking GenAI models are taken more seriously.

    https://www.forbes.com/sites/daveywinder/2024/08/05/hackers-warn-of-dangerous-new-0-click-promptware-threat-to-genai-apps/?fbclid=IwY2xjawEfmMFleHRuA2FlbQIxMQABHfTWLUSPsgmALcyFE49b5DhR7OtVpty9Sji4_uT6cobRtuGWFhOYMifq7g_aem_9tDhNBxoHgs9vZdS3NiRmQ

  36. Tomi Engdahl says:

    AI trained on AI churns out gibberish garbage
    https://www.popsci.com/technology/ai-trained-on-ai-gibberish/?fbclid=IwZXh0bgNhZW0CMTEAAR2jKebg3-lc7rYvEWGshEWLg5BlSBqgYkH6u5xMNOwgIEerZDPT2dlSv2o_aem_fVr7RFCeHHTuRYKI17KaUw

    OpenAI has disbanded its “superalignment team” tasked with staving off the potential existential risks from artificial intelligence less than a year after first announcing its creation. News of the dissolution was first confirmed earlier today by Wired and other outlets, alongside a lengthy thread posted to X by the company’s former superalignment team co-lead, Jan Leike. Prior to today’s explanation, Leike simply tweeted “I resigned,” on May 15 without offering any further elaboration.

