3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,937 Comments

  1. Tomi Engdahl says:

    What’s the latest with present-tense AI? What are the gaps between its hype and its achievements? How much do we know now about what will happen with it? See https://worksnewage.blogspot.com/2024/06/five-weeks-of-artificial-intelligence.html.

  2. Tomi Engdahl says:

    ChatGPT And Other LLMs Produce Bull Excrement, Not Hallucinations
    https://hackaday.com/2024/07/01/chatgpt-and-other-llms-produce-bull-excrement-not-hallucinations/

    In the communications surrounding LLMs and popular interfaces like ChatGPT, the term ‘hallucination’ is often used to refer to false statements in the output of these models. This implies that there is some coherency and an attempt by the LLM to be cognizant of the truth while also suffering moments of (mild) insanity. The LLM is thus effectively treated like a young child or a person suffering from disorders like Alzheimer’s, giving it agency in the process. That this is utter nonsense and patently incorrect is the subject of a treatise by [Michael Townsen Hicks] and colleagues, as published in Ethics and Information Technology.

    ChatGPT is bullshit
    https://link.springer.com/article/10.1007/s10676-024-09775-5

    Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

    Descriptions of new technology, including metaphorical ones, guide policymakers’ and the public’s understanding of new technology; they also inform applications of the new technology. They tell us what the technology is for and what it can be expected to do. Currently, false statements by ChatGPT and other large language models are described as “hallucinations”, which give policymakers and the public the idea that these systems are misrepresenting the world, and describing what they “see”. We argue that this is an inapt metaphor which will misinform the public, policymakers, and other interested parties.

    Lies, ‘hallucinations’ and bullshit
    Frankfurtian bullshit and lying

    Many popular discussions of ChatGPT call its false statements ‘hallucinations’. One also might think of these untruths as lies. However, we argue that this isn’t the right way to think about it. We will argue that these falsehoods aren’t hallucinations later – in Sect. 3.2.3. For now, we’ll discuss why these untruths aren’t lies but instead are bullshit.

    The topic of lying has a rich philosophical literature. In ‘Lying’, Saint Augustine distinguished seven types of lies, and his view altered throughout his life. At one point, he defended the position that any instance of knowingly uttering a false utterance counts as a lie, so that even jokes containing false propositions, like the following, would count as lies:

    I entered a pun competition and because I really wanted to win, I submitted ten entries. I was sure one of them would win, but no pun in ten did.

    For our purposes this definition will suffice. Lies are generally frowned upon. But there are acts of misleading testimony which are criticisable yet do not fall under the umbrella of lying. These include spreading untrue gossip which one mistakenly, but culpably, believes to be true. Another class of misleading testimony that has received particular attention from philosophers is that of bullshit. This everyday notion was analysed and introduced into the philosophical lexicon by Harry Frankfurt.

    Frankfurt understands bullshit to be characterized not by an intent to deceive but instead by a reckless disregard for the truth. A student trying to sound knowledgeable without having done the reading, a political candidate saying things because they sound good to potential voters, and a dilettante trying to spin an interesting story: none of these people are trying to deceive, but they are also not trying to convey facts. To Frankfurt, they are bullshitting.

    Like “lie”, “bullshit” is both a noun and a verb: an utterance produced can be a lie or an instance of bullshit, as can the act of producing these utterances. For an utterance to be classed as bullshit, it must not be accompanied by the explicit intentions that one has when lying, i.e., to cause a false belief in the hearer. Of course, it must also not be accompanied by the intentions characteristic of an honest utterance. So far this story is entirely negative. Must any positive intentions be manifested in the utterer?

    Bullshit distinctions

    Should utterances without an intention to deceive count as bullshit? One reason in favour of expanding the definition, or embracing a plurality of bullshit, is indicated by Frankfurt’s comments on the dangers of bullshit.

    “In contrast [to merely unintelligible discourse], indifference to the truth is extremely dangerous. The conduct of civilized life, and the vitality of the institutions that are indispensable to it, depend very fundamentally on respect for the distinction between the true and the false. Insofar as the authority of this distinction is undermined by the prevalence of bullshit and by the mindlessly frivolous attitude that accepts the proliferation of bullshit as innocuous, an indispensable human treasure is squandered” (2002: 343).

    These dangers seem to manifest regardless of whether there is an intention to deceive about the enterprise a speaker is engaged in. Compare the deceptive bullshitter, who does aim to mislead us about being in the truth-business, with someone who harbours no such aim, but just talks for the sake of talking (without care, or indeed any thought, about the truth-values of their utterances).

    One of Frankfurt’s examples of bullshit seems better captured by the wider definition. He considers the advertising industry, which is “replete with instances of bullshit so unmitigated that they serve among the most indisputable and classic paradigms of the concept”

    To that end, consider the following distinctions:

    Bullshit (general): Any utterance produced where a speaker has indifference towards the truth of the utterance.

    Hard bullshit: Bullshit produced with the intention to mislead the audience about the utterer’s agenda.

    Soft bullshit: Bullshit produced without the intention to mislead the hearer regarding the utterer’s agenda.

    The general notion of bullshit is useful: on some occasions, we might be confident that an utterance was either soft bullshit or hard bullshit, but be unclear which, given our ignorance of the speaker’s higher-order desires. In such a case, we can still call bullshit.
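    The distinctions above can be sketched as a toy decision rule (my own illustration as a reading aid, not anything from the paper):

```python
def classify(cares_about_truth: bool, misleads_about_agenda: bool) -> str:
    """Toy encoding of the Frankfurtian distinctions: indifference to truth
    makes an utterance bullshit (general); the intent to mislead the audience
    about the utterer's agenda separates hard from soft bullshit."""
    if cares_about_truth:
        return "not bullshit"
    return "hard bullshit" if misleads_about_agenda else "soft bullshit"
```

    On this sketch, someone who merely talks for the sake of talking comes out as a soft bullshitter, while the student feigning knowledge comes out as a hard one.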

  3. Tomi Engdahl says:

    https://etn.fi/index.php/13-news/16379-tekoaely-mallintaa-analogiapiirejae-tarkemmin

    In analog and mixed-signal circuit design, SPICE simulators are typically used to model circuit behaviour mathematically with considerable accuracy. AI can do this modelling far more accurately, believes Siemens’ EDA group, Siemens Digital Industries Software.

    A SPICE simulator is a powerful tool for analysing a circuit and determining its response to a given input signal. It performs the analysis using text-based component models that the SPICE program understands.

    Siemens has introduced the new Solido Simulation Suite, an integrated set of AI-accelerated SPICE, Fast SPICE and mixed-signal simulators. It is designed to help customers dramatically accelerate critical design and verification tasks in next-generation analog, mixed-signal and custom circuit design.
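    As a rough illustration of what a SPICE-style transient analysis computes (a minimal sketch of the general idea, unrelated to the Siemens tooling; component values are made up):

```python
# Transient analysis of an RC low-pass filter driven by a 1 V step input,
# integrated with backward Euler -- a toy version of how a SPICE engine
# steps a circuit's mathematical model through time.
R, C = 1e3, 1e-6      # 1 kOhm, 1 uF -> time constant tau = R*C = 1 ms
V_IN, DT = 1.0, 1e-5  # 1 V step input, 10 us time step

def simulate(steps: int) -> float:
    """Return the capacitor voltage after `steps` time steps."""
    v = 0.0
    for _ in range(steps):
        # Backward Euler update for dv/dt = (V_IN - v) / (R * C)
        v = (v + DT * V_IN / (R * C)) / (1 + DT / (R * C))
    return v
```

    After one time constant (100 steps = 1 ms) the output sits near 63% of the input, matching the analytic step response 1 - exp(-t/RC).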

  4. Tomi Engdahl says:

    A brand new version of the Finnish AI radio by Bauer Media has launched on frequencies 102MHz (Helsinki) and 94.7MHz (Tampere), also online at https://radioplay.fi/tekoalyradio/

    It now features an early version of a virtual clone of Tuukka Haapaniemi as radio host, courtesy of Lexofon.

  5. Tomi Engdahl says:

    Here is an English-language overview of AI’s problems and of how it could easily kill the internet, and ultimately itself. Signs of this are already visible.

    https://youtu.be/UShsgCOzER4?si=1bXD3HY8bpdkXNbO

    For those who don’t want to watch the whole video, see the final segment: https://youtu.be/UShsgCOzER4?si=jijujKhCeyHQRFs3

  6. Tomi Engdahl says:

    Ivan Mehta / TechCrunch:
    Meta changes its “Made with AI” label to “AI info”, to indicate images were not necessarily created with AI but that AI editing tools may have been used — After Meta started tagging photos with a “Made with AI” label in May, photographers complained …

    Meta changes its label from ‘Made with AI’ to ‘AI info’ to indicate use of AI in photos
    https://techcrunch.com/2024/07/01/meta-changes-its-label-from-made-with-ai-to-ai-info-to-indicate-use-of-ai-in-photos/

  7. Tomi Engdahl says:

    Emanuel Maiberg / 404 Media:
    Figma disables its recently launched generative AI app design tool Make Design, after a user showed it copied Apple’s Weather when asked to design a weather app — The design tool Figma has disabled a newly launched AI-powered app design tool after a user showed that it was clearly copying Apple’s weather app.

    Figma Disables AI App Design Tool After It Copied Apple’s Weather App
    https://www.404media.co/figma-disables-ai-app-design-tool-after-it-copied-apples-weather-app/

  8. Tomi Engdahl says:

    Kamila Wojciechowska / Android Authority:
    Source: Google plans AI features for the Pixel 9 under the Google AI brand, like Pixel Screenshots, which lets users search their screenshots using on-device AI — Google’s implementation of the feature means good news for those concerned with privacy.

    Exclusive: This is Google AI, and it’s coming to the Pixel 9
    Google’s implementation of the feature means good news for those concerned with privacy.
    https://www.androidauthority.com/google-ai-recall-pixel-9-3456399/

  9. Tomi Engdahl says:

    New York Times:
    How Ukraine is using AI, code found online, and hobbyist computers to weaponize consumer tech and build low-cost weapons like autonomous drones and machine guns
    https://www.nytimes.com/2024/07/02/technology/ukraine-war-ai-weapons.html

  10. Tomi Engdahl says:

    Lloyd Coombes / Tom’s Guide:
    Generative AI music service Suno launches its iOS app in the US, with plans for a global rollout and an Android app

    Suno launches iPhone app — now you can make AI music on the go
    https://www.tomsguide.com/ai/suno-launches-iphone-app-now-you-can-make-ai-music-on-the-go

  11. Tomi Engdahl says:

    Financial Times:
    Google says its greenhouse gas emissions have surged 48% in the past five years due to the expansion of the data centers that underpin its AI efforts — Tech giant’s ambition of reaching ‘net zero’ by 2030 under threat from power demands of artificial intelligence systems

    https://www.ft.com/content/383719aa-df38-4ae3-ab0e-6279a897915e

  12. Tomi Engdahl says:

    Peter Thiel’s Founders Fund Leads $85M Seed Investment Into Open-Source AI Platform Sentient
    The project aims to address concerns about the proliferation of AI whereby the underlying code is concentrated in the hands of a few superpowers like Google or Meta.
    https://www.coindesk.com/business/2024/07/02/peter-thiels-founders-fund-leads-85m-seed-investment-into-open-source-ai-platform-sentient/

  13. Tomi Engdahl says:

    The dark side of AI is to be reined in
    https://etn.fi/index.php/13-news/16382-tekoaelyn-pimeae-puoli-halutaan-aisoihin

    The Jane and Aatos Erkko Foundation has awarded 1.4 million euros to a Tampere University project that studies the dark side of AI. The project aims to identify and prevent the harmful effects of software and systems, such as unethical and criminal activity.

    The JAES foundation’s funding was granted to Henri Pirkkalainen, professor of knowledge management at the Faculty of Management and Business, Pekka Abrahamsson, professor of software engineering, and Johanna Virkki, associate professor of gamification. Abrahamsson and Virkki work in Tampere University’s Faculty of Information Technology and Communication Sciences.

    The team will tackle the identification and prevention of AI’s harmful effects, such as unethical or criminal activity, as well as the regulation of AI. At the heart of the four-year EVIL-AI (“Evil-eye”) project are AI agents: computer programs or systems capable of independently performing actions that appear intelligent while pursuing a given goal.

    “AI agents can be harnessed to deceive people, they can form groups, and they can break loose to carry out unwanted activity. That makes them hard to stop,” says Pekka Abrahamsson.

  14. Tomi Engdahl says:

    Nicholas Fearn / Financial Times:
    How AI is helping investors and wealth managers simplify the due diligence process and streamline day-to-day tasks, as experts stress the need for human insight

    Can AI outperform a wealth manager at picking investments?
    Platforms such as ChatGPT can do comprehensive research for low cost but strategies may be found wanting without critical human expert oversight
    https://www.ft.com/content/3b443015-25e1-4a13-b68f-ec769934ec75

  15. Tomi Engdahl says:

    Sara Randazzo / Wall Street Journal:
    As teachers embrace new AI grading tools, saying the programs let them give students faster feedback, critics say the tools can be glitchy or grade too harshly
    https://www.wsj.com/tech/ai/ai-tools-grading-teachers-students-396c2bfc?st=0uf7crkoe8ral5w&reflink=desktopwebshare_permalink

  16. Tomi Engdahl says:

    Wall Street Journal:
    The owners of about a third of US nuclear power plants are in talks with tech companies to provide electricity to new data centers needed to meet AI demand

    Tech Industry Wants to Lock Up Nuclear Power for AI
    Largest tech companies are looking to buy nuclear power directly from plants, which could sap the grid of critical resources
    https://www.wsj.com/business/energy-oil/tech-industry-wants-to-lock-up-nuclear-power-for-ai-6cb75316?st=4veonavgxgld02p&reflink=desktopwebshare_permalink

    Tech companies scouring the country for electricity supplies have zeroed in on a key target: America’s nuclear-power plants.

    The owners of roughly a third of U.S. nuclear-power plants are in talks with tech companies to provide electricity to new data centers needed to meet the demands of an artificial-intelligence boom.

    Among them, Amazon Web Services is nearing a deal for electricity supplied directly from a nuclear plant on the East Coast with Constellation Energy, the largest owner of U.S. nuclear-power plants, according to people familiar with the matter. In a separate deal in March, the Amazon.com subsidiary purchased a nuclear-powered data center in Pennsylvania for $650 million.

    The discussions have the potential to remove stable power generation from the grid while reliability concerns are rising across much of the U.S. and new kinds of electricity users—including AI, manufacturing and transportation—are significantly increasing the demand for electricity in pockets of the country.

    Nuclear-powered data centers would match the grid’s highest-reliability workhorse with a wealthy customer that wants 24-7 carbon-free power, likely speeding the addition of data centers needed in the global AI race.

    But instead of adding new green energy to meet their soaring power needs, tech companies would be effectively diverting existing electricity resources. That could raise prices for other customers and hold back emission-cutting goals.

  17. Tomi Engdahl says:

    Michael Nuñez / VentureBeat:
    Meta shares its research on Meta 3D Gen, a system that creates high-quality 3D assets from text descriptions in less than a minute — Meta, the tech giant formerly known as Facebook, introduced Meta 3D Gen today, a new AI system that creates high-quality 3D assets from text descriptions in less than a minute.

    Meta drops ‘3D Gen’ bomb: AI-powered 3D asset creation at lightning speed
    https://venturebeat.com/ai/meta-drops-3d-gen-bomb-ai-powered-3d-asset-creation-at-lightning-speed/

  18. Tomi Engdahl says:

    Reuters:
    Sources: Magic, a US startup developing AI models for coding tasks, is in talks to raise $200M+ at a $1.5B valuation — Magic, a U.S. startup developing artificial-intelligence models to write software, is in talks to raise over $200 million in a funding round valuing it at $1.5 billion …

    https://www.reuters.com/technology/artificial-intelligence/ai-coding-startup-magic-seeks-15-billion-valuation-new-funding-round-sources-say-2024-07-02/

  20. Tomi Engdahl says:

    Paula Arend Laier / Reuters:
    Brazil’s data regulator suspends the validity of Meta’s new privacy policy for the use of personal data to train its generative AI systems in the country — Brazil’s National Data Protection Authority (ANPD) has decided to suspend with immediate effect the validity of Meta’s (META.O) …
    https://www.reuters.com/technology/artificial-intelligence/brazil-authority-suspends-metas-ai-privacy-policy-seeks-adjustment-2024-07-02/

  21. Tomi Engdahl says:

    AI is being made a better coder than humans
    https://etn.fi/index.php/13-news/16384-tekoaelystae-tehdaeaen-ihmistae-parempaa-koodaajaa

    According to statistics, an experienced programmer makes 15-50 errors per thousand lines of code. AI is already widely used in coding, but the code it produces is not yet at the level of a professional programmer. OpenAI wants to change this and has introduced an entirely new model that checks AI-generated code and fixes its bugs.

    OpenAI says it has trained CriticGPT, a model based on GPT-4, to spot errors in code produced by ChatGPT. According to the company’s own tests, trainers assisted by CriticGPT outperformed those working without it 60 percent of the time.

    As AI reasoning advances and model behaviour improves, ChatGPT becomes more accurate and its mistakes more subtle. This can make it harder for AI trainers to spot inaccuracies, which is why a separate GPT-4-based tool is needed to detect the code errors GPT-4 generates.

    According to OpenAI, CriticGPT’s suggestions are not always correct, but they help trainers catch far more problems in model-written answers than they would without AI assistance. In addition, when trainers use CriticGPT, the AI augments their skills, leading to more comprehensive bug finding.
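    OpenAI has not published CriticGPT’s internals. As a much simpler analogue of the idea (a second system reviewing generated code for subtle bugs), here is a toy static “critic” of my own that flags one classic Python mistake:

```python
import ast

def toy_critic(source: str) -> list[str]:
    """Flag mutable default arguments -- a subtle bug class a human reviewer
    can easily miss. A stand-in for the concept of an automated code critic,
    not for how CriticGPT actually works."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    warnings.append(
                        f"{node.name} (line {node.lineno}): mutable default argument"
                    )
    return warnings
```

    A learned critic generalises far beyond fixed patterns like this, but the workflow is the same: generated code goes in, a list of suspected problems comes out for a human to review.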

  22. Tomi Engdahl says:

    Emma Farge / Reuters:
    WIPO: between 2014 and 2023, 54K generative AI patents were filed, 25% of which were filed in 2023; China leads with 38K+ patents, followed by the US with 6,276

    China leading generative AI patents race, UN report says
    https://www.reuters.com/technology/artificial-intelligence/china-leading-generative-ai-patents-race-un-report-says-2024-07-03/

    GENEVA, July 3 (Reuters) – China is far ahead of other countries in generative AI inventions like chatbots, filing six times more patents than its closest rival the United States, U.N. data showed on Wednesday.
    Generative AI, which produces text, images, computer code and even music from existing information, is exploding with more than 50,000 patent applications filed in the past decade, according to the World Intellectual Property Organization (WIPO), which oversees a system for countries to share recognition of patents.

  23. Tomi Engdahl says:

    Google Researchers Publish Paper About How AI Is Ruining the Internet
    by Sharon Adarlo, Jul 4, 11:00 AM EDT
    “Google’s trying to find the guy responsible for all this.”
    https://futurism.com/the-byte/google-researchers-paper-ai-internet

  24. Tomi Engdahl says:

    Mark Bergen / Bloomberg:
    French AI research lab Kyutai, which launched in November with €300M in funding and is backed by billionaire Xavier Niel, demos its AI voice assistant Moshi

    Billionaire Niel’s Voice AI Takes on ChatGPT With French Accent
    https://www.bloomberg.com/news/articles/2024-07-03/billionaire-niel-s-voice-ai-takes-on-chatgpt-with-french-accent

    A French artificial intelligence research lab backed by billionaire Xavier Niel showed off a new voice assistant with a variety of human-like emotions that is similar to a product that OpenAI promised but delayed over safety concerns.

    Kyutai, an AI nonprofit group formed last year, revealed its Moshi service at an event in Paris on Wednesday. Scientists for the lab said their system can speak with 70 different emotions and styles. They demoed the assistant offering advice on climbing Mt. Everest.

  25. Tomi Engdahl says:

    Martin K.N Siele / Semafor:
    How Kenya’s anti-government protestors are using AI tools, including the Corrupt Politicians GPT, a chatbot that reveals corruption cases involving politicians

    Kenyan protesters are using AI in their anti-government fight
    https://www.semafor.com/article/07/04/2024/kenya-protesters-us-ai-in-anti-government-battle

  26. Tomi Engdahl says:

    Sarah Perez / TechCrunch:
    Researcher: X is exploring more ways to integrate xAI’s Grok, including the ability to ask Grok about X accounts and use Grok by highlighting text in the app

    X plans to more deeply integrate Grok’s AI, app researcher finds
    https://techcrunch.com/2024/07/05/x-plans-to-more-deeply-integrate-groks-ai-app-researcher-finds/

  27. Tomi Engdahl says:

    Joanna Nelius / The Verge:
    A test of seven Copilot+ PCs, representing all four Snapdragon X chips, against similar laptops with Apple Silicon, Intel Core Ultra, and AMD Ryzen processors

    Here’s how Qualcomm’s new laptop chips really stack up to Apple, Intel, and AMD
    We tested every Snapdragon X chip against the Intel Core Ultra, AMD Ryzen 8000, and Apple M3.
    https://www.theverge.com/24191671/copilot-plus-pcs-laptops-qualcomm-intel-amd-apple

    After 12 years of trying to make Windows on Arm happen, Microsoft has made Windows on Arm happen. That’s a long time to keep throwing money at a version of Windows that, historically, has lacked compatible software, reliable emulation, and capable enough performance for even light workloads. But it seems like Microsoft’s 12-year odyssey is starting to pay off now that Qualcomm’s Snapdragon X Elite and X Plus chips are turning Windows on Arm into a viable platform.

    We’ve spent the past week and a half testing seven Copilot Plus PCs, representing all four Snapdragon X chips, against a slate of similar laptops running Apple Silicon, Intel Core Ultra, and AMD Ryzen processors. This isn’t the final word on Snapdragon performance — app compatibility is changing on a near-daily basis, and we’ll have full reviews for many of these laptops in the next few weeks — but we now have a good idea of how the first wave of Snapdragon X laptops stack up against the competition and how they still fall short.

    This is the fiercest Microsoft has been able to compete with MacBooks in price, performance, and battery life, and while Qualcomm’s Snapdragon chips don’t outright beat Apple’s M3 chip (with an eight-core CPU and 10-core GPU) in every single one of our benchmarks, they could make Intel and AMD scramble to catch up to another competitor — this time, on their home turf.

  28. Tomi Engdahl says:

    https://www.theverge.com/24191671/copilot-plus-pcs-laptops-qualcomm-intel-amd-apple

    Here’s why I’m not mentioning the NPU

    Yes, these are Copilot Plus PCs. Yes, they run a bunch of AI stuff. But my colleagues and I have yet to figure out a reliable method to test relative NPU performance in a meaningful way. The Copilot Plus AI features, with the possible exceptions of Studio Effects and Live Captions translations, currently feel more like gimmicks than useful apps most people can incorporate into their day-to-day life, and Microsoft doesn’t plan to release Recall, its most-hyped AI app, until it addresses security concerns.

    But the NPUs are there — and not just on Arm PCs — so we expect apps to take more advantage of that processing power soon. We’ll revisit NPU benchmarks as it makes sense.

  29. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Cloudflare launches a tool that aims to block bots from scraping websites for AI training data, available free for all its customers — Cloudflare, the publicly traded cloud service provider, has launched a new, free tool to prevent bots from scraping websites hosted on its platform for data to train AI models.

    Cloudflare launches a tool to combat AI bots
    https://techcrunch.com/2024/07/03/cloudflare-launches-a-tool-to-combat-ai-bots/
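    Cloudflare has not detailed how its tool works; sites without it often fall back on filtering by user-agent string. Here is a minimal sketch of that fallback idea (the crawler names are real published user agents, but the middleware itself is my illustration; user agents are trivially spoofed, which is why bot-management products rely on stronger signals):

```python
# Toy WSGI middleware that returns 403 to known AI-training crawlers,
# identified by user-agent substring. Illustrative only.
AI_CRAWLERS = ("GPTBot", "CCBot", "Google-Extended", "anthropic-ai")

def is_ai_crawler(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in AI_CRAWLERS)

def block_ai_bots(app):
    def middleware(environ, start_response):
        if is_ai_crawler(environ.get("HTTP_USER_AGENT", "")):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"AI crawlers not permitted\n"]
        return app(environ, start_response)
    return middleware
```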

  30. Tomi Engdahl says:

    Michael Nuñez / VentureBeat:
    Meta releases pre-trained models that use a novel multi-token prediction approach, available on Hugging Face under a non-commercial research license

    Meta drops AI bombshell: Multi-token prediction models now open for research
    https://venturebeat.com/ai/meta-drops-ai-bombshell-multi-token-prediction-models-now-open-for-research/

  31. Tomi Engdahl says:

    Samantha Murphy Kelly / CNN:
    ElevenLabs is using AI-generated voices of dead actors like Judy Garland and Burt Reynolds in its Reader app, after striking deals with the actors’ estates

    Hollywood stars’ estates agree to the use of their voices with AI
    https://edition.cnn.com/2024/07/03/tech/elevenlabs-ai-celebrity-voices/

    Actress Judy Garland never recorded her voice to read an audiobook of The Wonderful Wizard of Oz, but you’ll soon be able to hear her rendition of the children’s novel that inspired the movie nonetheless.

    Earlier this week, AI company ElevenLabs said it is bringing digitally produced celebrity voice-overs of deceased actors, including Garland, James Dean and Burt Reynolds, to its newly launched Reader app. The company said the app takes articles, PDFs, ePubs, newsletters, e-books or any other text on your phone and turns it into voice-overs.

    “We deeply respect their legacy and are honored to have their voices as part of our platform,” said Dustin Blank, head of partnerships at ElevenLabs. “Adding them to our growing list of narrators marks a major step forward in our mission of making content accessible in any language and voice.”

    The company said it made deals with the estates of the actors whose voices are being used, but did not share details about compensation. The effort shows the potential of artificial intelligence for Hollywood but also sets a precedent for licensing and working with estates. It also comes at a time when the technology has grown by leaps and bounds, particularly in its ability to create images, text and sound, making it easy for anyone to create a version of someone’s voice saying something they never did.

    That, in turn, has raised questions in creative industries such as journalism and film about how artificial intelligence can — or even should — be used.

    ElevenLabs previously made headlines earlier this year when its tool was reportedly used to create a fake robocall from President Joe Biden urging people not to vote in New Hampshire’s presidential primary.

    Copyright questions and authenticity

    The partnership with the stars’ estates comes two months after ChatGPT-maker OpenAI came under fire after introducing a synthetic voice that was eerily similar to Scarlett Johansson’s character in the film “Her.” Johansson said in a statement shared with CNN that she was “shocked, angered and in disbelief” that the company would use her likeness after she turned down a partnership opportunity with OpenAI.

    Although a person can’t copyright their own voice, it’s possible to copyright a recording, according to David Gunkel, a professor at the department of communications at Northern Illinois University who tracks AI in media and entertainment. The AI is trained on old recordings and those recordings are under copyright.

    “ElevenLabs’ new partnerships are all well within the realm of what the law allows,” he said. “An estate will get a considerable amount of money from licensing and agreements. It’s not unlike a company negotiating a copyright deal to use a popular song by Queen in an ad. The record company also could in theory say no, no matter how much money they’re offered.”

  32. Tomi Engdahl says:

    Nick Huber / Financial Times:
    The prospect of applying generative AI to contract lifecycle management software has prompted a flurry of deals, as experts expect further market consolidation

    Generative AI turns spotlight on contract management
    The prospect of applying tech breakthroughs to handling data-rich digital documents has prompted a flurry of deals
    https://www.ft.com/content/1026fd13-d7f1-40de-a0d6-9e4843ac3d29

  33. Tomi Engdahl says:

    Shubham Sharma / VentureBeat:
    ElevenLabs launches Voice Isolator, a freemium AI tool for removing background noise from audio files for film, podcast, and interview post production

    ElevenLabs launches free AI voice isolator to take on Adobe
    https://venturebeat.com/ai/elevenlabs-launches-free-ai-voice-isolator-to-take-on-adobe/

    ElevenLabs, the AI voice startup known for its voice cloning, text-to-speech and speech-to-speech models, has just added another tool to its product portfolio: an AI Voice Isolator.

    Available on the ElevenLabs platform starting today, the offering allows creators to remove unwanted ambient noise and sounds from any piece of content they have, right from a film to a podcast or YouTube video.

    How will the AI Voice Isolator work?

    When recording content like a film, podcast or interview, creators often run into the issue of background noise, where unwanted sounds interfere with the content (imagine random people talking, winds blowing or some vehicle passing on the road). These noises may not come to notice during the shoot but may affect the quality of the final output — mainly, suppressing the voice of the speaker at times.

    To solve this, many tend to use mics with ambient noise cancellation that remove the background noise during the recording phase itself. They do the job, but may not be accessible in many cases, especially to early-stage creators with limited resources. This is where AI-based tools like the new Voice Isolator from ElevenLabs come into play.

    At the core, the product works in the post-production stage, where the user just has to upload the content they want to enhance. Once the file is uploaded, the underlying models process it, detect and remove the unwanted noise and extract clear dialogue as output.

    ElevenLabs says the product extracts speech with a level of quality similar to that of content recorded in a studio. The company’s head of design Ammaar Reshi also shared a demo where the tool can be seen removing the noise of a leaf blower to extract crystal clear speech of the speaker.

    We ran three tests to try out the real-world applicability of the voice isolator. In the first, we spoke three separate sentences, each disturbed by a different background noise, while in the other two we spoke three sentences with a mix of different noises occurring irregularly at random points.

    In all cases, the tool was able to process the audio in a matter of seconds. Most importantly, it removed the noises, from opening and closing doors and banging on the table to clapping and moving household items, in almost all cases and extracted clear speech without any distortion. The only sounds it failed to recognize and remove were banging on the wall and finger snapping.
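
    As an illustration of the test setup described above, here is a minimal sketch of mixing irregular noise bursts into a clean recording at random offsets. The list-of-samples representation and all names are illustrative, not ElevenLabs code:

    ```python
    import random

    def mix_noise(speech, noise, bursts=3, seed=0):
        """Overlay short noise bursts onto a speech waveform at random offsets.

        Signals are plain lists of float samples to keep the sketch dependency-free;
        real audio work would use numpy/soundfile, but the mixing logic is the same.
        """
        rng = random.Random(seed)
        mixed = list(speech)
        for _ in range(bursts):
            start = rng.randrange(0, len(speech) - len(noise))
            for i, sample in enumerate(noise):
                mixed[start + i] += sample  # additive mix; real code would clip to [-1, 1]
        return mixed

    speech = [0.1] * 1000  # stand-in for the recorded sentences
    noise = [0.5] * 50     # stand-in for a door slam or clap
    noisy = mix_noise(speech, noise)
    ```

    Feeding clips built this way to an isolator and comparing the output against the original clean signal is a simple way to reproduce the kind of irregular-noise test described here.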

    It comes mere days after the launch of a Reader app from the company and is free to use (with some limits). However, users must also note that the capability is not something entirely new in the market. Many other creative solution providers, including Adobe, have tools on offer to enhance the quality of speech in content. The only thing that remains to be seen is how effective Voice Isolator is in comparison to them.

    Sam Sklar, who handles growth at the company, also told us that it does not work on music vocals at this stage but users can try it on that use case and may have success with some songs.
    Improvements likely on the way

    While Voice Isolator’s ability to remove irregularly occurring background noise certainly makes it stand out from most other tools that only work with flat noises, there’s still some room for improvement. Hopefully, just like all other tools, ElevenLabs will further improve its performance.

    As of now, the company is providing Voice Isolator only through its platform. It plans to open API access in the coming weeks, although the exact timeline remains unclear.
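
    Since the API is not yet public, there is no documented endpoint. As a rough sketch of how a post-production pipeline might call such a service once it ships (the URL and header name below are assumptions, not ElevenLabs’ published API):

    ```python
    import urllib.request

    # Hypothetical endpoint: the Voice Isolator API is not yet published,
    # so this URL is an assumption, not a documented value.
    ISOLATE_URL = "https://api.elevenlabs.io/v1/voice-isolator"

    def build_isolate_request(audio_bytes: bytes, api_key: str) -> urllib.request.Request:
        """Build (but do not send) an HTTP request uploading raw audio for cleanup."""
        return urllib.request.Request(
            ISOLATE_URL,
            data=audio_bytes,
            headers={
                "xi-api-key": api_key,  # header name assumed from existing ElevenLabs endpoints
                "Content-Type": "application/octet-stream",
            },
            method="POST",
        )

    req = build_isolate_request(b"\x00\x01", "YOUR_API_KEY")
    # urllib.request.urlopen(req) would return the isolated speech if/when the endpoint exists
    ```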

    Extract crystal-clear speech from any audio
    Our vocal remover strips background noise for film, podcast, and interview post production
    https://elevenlabs.io/voice-isolator?utm_source=twitter&utm_medium=organic_social&utm_campaign=voice-isolator-launch

    Reply
  34. Tomi Engdahl says:

    Finnish company deploys an AI-based IT system – “nothing suitable for us exists on the market”
    Antti Leikas, 1.7.2024 06:11 | updated 3.7.2024 12:48
    As it grew, Eezy developed a tailored AI system that significantly speeds up and streamlines matching jobs with workers.
    https://www.tivi.fi/uutiset/suomalaisyhtio-otti-kayttoon-tekoalypohjaisen-it-jarjestelman-meille-sopivaa-ei-markkinoilta-loydy/76c2f110-5e30-4cdf-9bbc-ff3e15c87e6c

    Reply
  35. Tomi Engdahl says:

    Flowchart images trick GPT-4o into producing harmful text outputs
    https://www.neowin.net/news/flowchart-images-trick-gpt-4o-into-producing-harmful-text-outputs/

    A new study entitled ‘Image-to-Text Logic Jailbreak: Your Imagination Can Help You Do Anything’ has found that visual language models, like GPT-4o, can be tricked into producing harmful text outputs by feeding them a flowchart image depicting a harmful activity alongside a text prompt asking for details about the process.

    The researchers of the study found that GPT-4o, probably the most popular visual language model, is particularly susceptible to this so-called logic jailbreak, with a 92.8% attack success rate. It said that GPT-4-vision-preview was safer, with a success rate of just 70%.
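
    The reported figures are plain attack-success rates, i.e. the fraction of attempts that elicited harmful output. A trivial sketch, with made-up trial counts:

    ```python
    def attack_success_rate(outcomes):
        """Percentage of jailbreak attempts that elicited harmful output."""
        return 100.0 * sum(outcomes) / len(outcomes)

    # Illustrative trial counts (not the study's actual data):
    # 13 successful attacks out of 14 attempts is roughly 92.9%.
    rate = attack_success_rate([True] * 13 + [False])
    ```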

    Reply
  36. Tomi Engdahl says:

    OpenAI breach is a reminder that AI companies are treasure troves for hackers
    https://techcrunch.com/2024/07/05/openai-breach-is-a-reminder-that-ai-companies-are-treasure-troves-for-hackers/

    There’s no need to worry that your secret ChatGPT conversations were obtained in a recently reported breach of OpenAI’s systems. The hack itself, while troubling, appears to have been superficial — but it’s a reminder that AI companies have in short order made themselves into one of the juiciest targets out there for hackers.

    The New York Times reported the hack in more detail after former OpenAI employee Leopold Aschenbrenner hinted at it recently in a podcast. He called it a “major security incident,”

    Reply
  37. Tomi Engdahl says:

    ChatGPT just (accidentally) shared all of its secret rules – here’s what we learned
    News
    By Eric Hal Schwartz published 2 days ago
    Saying ‘hi’ revealed OpenAI’s instructions until the company shut it down, but you can still find them
    https://www.techradar.com/computing/artificial-intelligence/chatgpt-just-accidentally-shared-all-of-its-secret-rules-heres-what-we-learned

    Reply
