3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

6,254 Comments

  1. Tomi Engdahl says:

    How Amazon is trying to make the world fall in love with its robots
    The retail giant’s autonomous robots have been precisely designed to avoid annoying or terrifying their human colleagues – but will the world welcome them, asks Andrew Griffin
    https://www.independent.co.uk/tech/amazon-autonomous-robots-proteus-b2628115.html

  2. Tomi Engdahl says:

    AI engineers claim new algorithm reduces AI power consumption by 95% — replaces complex floating-point multiplication with integer addition
    By Jowi Morales
    Addition is simpler than multiplication, after all.
    https://www.tomshardware.com/tech-industry/artificial-intelligence/ai-engineers-build-new-algorithm-for-ai-processing-replace-complex-floating-point-multiplication-with-integer-addition
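    The reported trick exploits how IEEE-754 floats are laid out. As a minimal illustration of the general idea (a Mitchell-style approximation, not the paper's exact algorithm): a float's bit pattern is roughly a scaled, biased log2 of its value, so adding two bit patterns and subtracting the bias of 1.0 approximates a multiply using one integer addition.

    import numpy as np

    BIAS = np.int64(0x3F800000)  # bit pattern of float32 1.0

    def approx_mul(a: float, b: float) -> float:
        # Reinterpret the floats as integers, add once, and re-bias; valid
        # for positive normal floats whose product stays in range.
        ia = np.float32(a).view(np.int32).astype(np.int64)
        ib = np.float32(b).view(np.int32).astype(np.int64)
        return float(np.int32(ia + ib - BIAS).view(np.float32))

    print(approx_mul(3.0, 5.0), 3.0 * 5.0)  # ~14.0 vs 15.0; error bounded ~11%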

  3. Tomi Engdahl says:

    Horror Studio Blumhouse Partners With Meta to Use Its AI Video Generator
    https://futurism.com/the-byte/horror-movie-studio-meta-ai-video

    As Variety reports, Blumhouse Productions, best known for franchises like “Halloween” and “The Purge,” has announced a partnership with Meta that grants it access to an early version of the tech company’s recently-unveiled video generation AI model Movie Gen.

  4. Tomi Engdahl says:

    Machines of Loving Grace: How AI Could Transform the World for the Better
    https://darioamodei.com/machines-of-loving-grace?fbclid=IwY2xjawGAGl1leHRuA2FlbQIxMQABHcjBkn-O-8M6bU0Oxy5GC0TNfA3Lz40o_wzRJOa4oC8sJBJVAN3uyXKplQ_aem_M6oPtGIVyhC4ygdxw5djJw#2-neuroscience-and-mind

    I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.

  5. Tomi Engdahl says:

    Cristina Criddle / Financial Times:
    How OpenAI, Google, and Anthropic are using differing approaches to improve “model behavior”, an emerging field shaping AI systems’ responses and characteristics

    https://www.ft.com/content/a7337550-9b42-45c8-9845-37d78d2c3209

  6. Tomi Engdahl says:

    Mark Gurman / Bloomberg:
    Sources: some at Apple think it is over two years behind the leaders in generative AI; a closer look at changes in Apple’s HR and hardware engineering groups — Apple’s new iPad mini highlights the company’s secret advantage in artificial intelligence. Also: Sonos weighs a headphone reboot …

    Apple’s New iPad Mini Highlights the Company’s Secret AI Advantage
    https://www.bloomberg.com/news/newsletters/2024-10-20/apple-s-latest-ipad-mini-highlights-ai-advantage-sonos-considers-new-headphones-m2hkz4mn

    Apple’s new iPad mini highlights the company’s secret advantage in artificial intelligence. Also: Sonos weighs a headphone reboot after a sluggish start; Amazon rolls out a color Kindle; and Jony Ive-designed jackets arrive. On the management front, Apple’s chief people officer and top recruiter depart, and the company names new hardware leaders.

    When Apple Inc. announced the first iPad mini upgrade in three years this past week, it chose to recycle its marketing strategy for the iPhone 16 and go all-in on AI features.

    The company’s smallest iPad will have 8 gigabytes of memory and the same processor — the A17 Pro — as the iPhone 15 Pro line from last year. That gives it enough horsepower to support the new Apple Intelligence platform. And, considering that the new model doesn’t have other major new changes, it’s no surprise that Apple is heavily touting the AI capabilities.

    The bigger obstacle is that the first Apple Intelligence features are underwhelming — with the more impressive capabilities coming later. In the iPad mini marketing on Apple’s website, the company spotlights four features; three of them aren’t launching until between December and March.

    At the start, the signature feature will be notification summaries. These can be quite helpful — if they’re accurate — but they lack the wow factor of competitors’ offerings. Compared with the latest fare from Google, OpenAI and Meta Platforms Inc., Apple’s AI is still far behind.

    At some point, Apple will either develop, hire or acquire its way into the top tier of AI companies.

    Apple has another advantage as it tries to catch up: the ability to roll out features to a massive base of devices.

    When Apple announced its AI features in June, the software was only compatible with two iPhone models and a couple of iPads, as well as Macs with its in-house silicon. Now, the four newest iPhones, almost every iPad and all the Macs can support it. By 2026, nearly every Apple device with a screen will run it.

    The Apple Watch doesn’t currently support the AI platform, but the notification summaries can be delivered to the device from a paired iPhone. And the company is working on bringing the features to the Vision Pro headset. Apple’s next wave of home devices, meanwhile, will also be built around AI capabilities.

    When Apple becomes a true player in AI, Google and Samsung Electronics Co. will be hard-pressed to roll out new features and upgrades at the same speed. They have more fragmented operating systems, and their hardware, software and services aren’t as tightly integrated.

    Still, Apple hasn’t yet shown it can achieve real competence in AI. Today, there’s little reason to buy products just to get Apple Intelligence. If consumers are sold on that idea by Apple’s marketing, they may be surprised to find few meaningful AI tools when they start using their new devices.

    But that raises a broader question: How much do customers actually care about AI? For now, the camera advancements on a new iPhone are a bigger draw.

    Apple has done a good job at convincing one group that it’s winning at artificial intelligence: investors.

    Some analysts have even made dubious claims that Apple Intelligence is already creating an “AI consumer revolution” that will “spark a massive holiday season.” But Apple’s AI glory is still years away. If the new iPhone is a hit this year, it will probably be because of everything but AI.

  7. Tomi Engdahl says:

    Bloomberg:
    Students, academics, and developers say AI writing detectors are most likely to falsely flag essays written in a more generic manner as written by AI tools — About two-thirds of teachers report regularly using tools for detecting AI-generated content. At that scale, even tiny error rates can add up quickly.

    AI Detectors Falsely Accuse Students of Cheating—With Big Consequences
    https://www.bloomberg.com/news/features/2024-10-18/do-ai-detectors-work-students-face-false-cheating-accusations?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcyOTMxMDIyMiwiZXhwIjoxNzI5OTE1MDIyLCJhcnRpY2xlSWQiOiJTTEs0Q1REV1gyUFMwMCIsImJjb25uZWN0SWQiOiIwNEFGQkMxQkYyMTA0NUVEODg3MzQxQkQwQzIyNzRBMCJ9.KXDUCkelNZlFLe5OhoinVXptHroG9RSu0iKUaCxKoQM


    Just weeks into the fall semester, Olmsted submitted a written assignment in a required class—one of three reading summaries she had to do each week. Soon after, she received her grade: zero. When she approached her professor, Olmsted said she was told that an AI detection tool had determined her work was likely generated by artificial intelligence.

    Olmsted disputed the accusation to her teacher and a student coordinator, stressing that she has autism spectrum disorder and writes in a formulaic manner that might be mistakenly seen as AI-generated, according to emails viewed by Bloomberg Businessweek. The grade was ultimately changed, but not before she received a strict warning: If her work was flagged again, the teacher would treat it the same way they would with plagiarism.

    Since OpenAI’s ChatGPT brought generative AI to the mainstream almost two years ago, schools have raced to adapt to a changed landscape. Educators now rely on a growing crop of detection tools to help spot sentences, paragraphs or entire assignments generated by artificial intelligence. About two-thirds of teachers report using an AI checker regularly, according to a survey of more than 450 instructors published in March by the Center for Democracy & Technology.

    The best AI writing detectors are highly accurate, but they’re not foolproof.

    In a test of essays written before ChatGPT’s release, Businessweek found the services falsely flagged 1% to 2% of the essays as likely written by AI, in some cases claiming near-100% certainty.

    Even such a small error rate can quickly add up, given the vast number of student assignments each year, with potentially devastating consequences for students who are falsely flagged. As with more traditional cheating and plagiarism accusations, students using AI to do their homework are having to redo assignments and facing failing grades and probation.
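    To see how fast a “tiny” error rate scales, a quick back-of-the-envelope check (the essay volume below is an illustrative assumption, not a figure from the article):

    essays_checked = 50_000_000   # assumed: student essays run through detectors yearly
    false_positive_rate = 0.01    # 1%, the low end of Businessweek's measurement
    print(f"{essays_checked * false_positive_rate:,.0f} essays falsely flagged")
    # -> 500,000 students wrongly accused per year at that volume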

    The students most susceptible to inaccurate accusations are likely those who write in a more generic manner, either because they’re neurodivergent like Olmsted, speak English as a second language (ESL) or simply learned to use more straightforward vocabulary and a mechanical style, according to students, academics and AI developers. A 2023 study by Stanford University researchers found that AI detectors were “near-perfect” when checking essays written by US-born eighth grade students, yet they flagged more than half of the essays written by nonnative English students as AI-generated. OpenAI recently said it has refrained from releasing an AI writing detection tool in part over concerns it could negatively affect certain groups, including ESL students.

    Businessweek also found that AI detection services can sometimes be tricked by automated tools designed to pass off AI writing as human. This could lead to an arms race that pits one technology against another, damaging trust between educators and students with little educational benefit.

    Turnitin, a popular AI detection tool that Olmsted says was used to check her work, has said it has a 4% false positive rate when analyzing sentences. Turnitin declined to make its service available for testing.

    While some educators have backed away from AI detectors and tried to adjust their curricula to incorporate AI instead, many colleges and high schools still use these tools. AI detection startups have attracted about $28 million in funding since 2019, according to the investment data firm PitchBook, with most of those deals coming after ChatGPT’s release. Deepfake detection startups, which can check for AI-generated text, images, audio and video, raised more than $300 million in 2023, up from about $65 million the year before, PitchBook found.

    The result is that classrooms remain plagued by anxiety and paranoia over the possibility of false accusations, according to interviews with a dozen students and 11 teachers across the US. Undergraduates now pursue a wide range of time-consuming efforts to defend the integrity of their work, a process they say diminishes the learning experience. Some also fear using commonplace AI writing assistance services and grammar checkers that are specifically marketed to students, citing concerns they will set off AI detectors.

    Eric Wang, Turnitin’s vice president for AI, says the company intentionally “oversamples” underrepresented groups in its data set. He says internal tests have shown Turnitin’s model doesn’t falsely accuse ESL students, and that its overall false positive rate for entire documents is below 1% and improving with each new release. Turnitin doesn’t train specifically on neurodivergent student data or have access to medical histories to assess that classification.

    Copyleaks co-founder and Chief Executive Officer Alon Yamin says its technology is 99% accurate. “We’re making it very clear to the academic institutions that nothing is 100% and that it should be used to identify trends in students’ work,” he says. “Kind of like a yellow flag for them to look into and use as an opportunity to speak to the students.”

    “Every AI detector has blind spots,” says Edward Tian, the founder and CEO of GPTZero.

    It’s challenging to quantify AI use in schools. In one test, Businessweek analyzed a separate set of 305 essays submitted to Texas A&M in the summer of 2023, after ChatGPT launched, and found the same AI detectors flagged about 9% as being generated by artificial intelligence.

    AI writing detectors typically look at perplexity, a measure of how predictable a passage is to a language model; low-perplexity text reads as generic and formulaic. “If the word choices tend to be more generic and formulaic, that work has a higher chance of being flagged by AI detectors,” says James Zou, a professor of biomedical data science at Stanford University and the senior author of the Stanford study on ESL students.
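    As a rough illustration of the signal detectors build on, perplexity can be computed with any open language model. A minimal sketch using GPT-2 via the Hugging Face transformers library (package availability assumed):

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # exp(mean negative log-likelihood per token); lower = more predictable
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(enc.input_ids, labels=enc.input_ids).loss
        return torch.exp(loss).item()

    # Formulaic prose scores lower and is more likely to be flagged.
    print(perplexity("The quick brown fox jumps over the lazy dog."))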

    “AI Humanizer” Edits a Human-Written Essay to Bypass AI Detection

    A Bloomberg test of a service called Hix Bypass found that a human-written essay that GPTZero had incorrectly scored as 98.1% AI dropped to 5.3% AI after being altered by the service.

    The fear of being flagged by AI detectors has also forced students to rethink using popular online writing assistance tools.

    Bloomberg found that using Grammarly to “improve” an essay or “make it sound academic” can turn work that previously passed as 100% human-written into work flagged as 100% AI-written. Grammarly’s spell checker and grammar suggestions, however, have only a marginal impact on making documents appear more AI-written.

    Kaitlyn Abellar, a student at Florida SouthWestern State College, says she has uninstalled plug-ins for such programs as Grammarly from her computer.

    Stevens said she was put on academic probation for a year after a disciplinary hearing determined she’d cheated. She insisted she wrote the assignment herself, using only Grammarly’s standard spell-checking and grammar features.

    “This was a well-intentioned student who had been using Grammarly in the responsible way and was flagged by a third-party technology saying you did wrong. We can’t help how Turnitin operates, like they understand that they have false flags.”

    To some educators and students alike, the current system feels unsustainable because of the strain it places on both sides of the teacher’s desk and because AI is here to stay.

    “Artificial intelligence is going to be a part of the future whether we like it or not,” says Adam Lloyd, an English professor at the University of Maryland. “Viewing AI as something we need to keep out of the classroom or discourage students from using is misguided.”

    Instead of using Turnitin, which is available to faculty at his school, Lloyd prefers to go with his intuition. “I know my students’ writing, and if I have a suspicion, I’ll have an open discussion,” he says, “not automatically accuse them.”

  8. Tomi Engdahl says:

    Todd Bishop / GeekWire:
    A historical look at Microsoft’s work in AI, from research to real-world applications, recent progress, competition, challenges, and the future

    https://www.geekwire.com/2024/ai-dreams-microsoft-50-chapter-1/

  9. Tomi Engdahl says:

    Todd Bishop / GeekWire:
    Microsoft unveils 10 new AI agents for its enterprise-focused Dynamics 365 apps covering sales, finance, and more, ahead of Salesforce’s Agentforce availability — AI agents are reigniting the competition between Microsoft and Salesforce. — Microsoft announced 10 new AI agents …
    https://www.geekwire.com/2024/microsoft-unveils-new-autonomous-ai-agents-in-advance-of-competing-salesforce-rollout/

  10. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    xAI launches an API for “grok-beta” priced at $5 per million input tokens or $15 per million output tokens; it is unclear which AI model “grok-beta” might be

    xAI, Elon Musk’s AI startup, launches an API
    https://techcrunch.com/2024/10/21/xai-elon-musks-ai-startup-launches-an-api/
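    xAI documented the API as compatible with the OpenAI SDK at launch, so, assuming that compatibility, a minimal call sketch looks like this (the key is a placeholder):

    from openai import OpenAI

    client = OpenAI(api_key="YOUR_XAI_KEY", base_url="https://api.x.ai/v1")
    resp = client.chat.completions.create(
        model="grok-beta",  # $5/M input tokens, $15/M output tokens
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(resp.choices[0].message.content)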

  11. Tomi Engdahl says:

    Simon Willison / Simon Willison’s Weblog:
    A look at some use cases of Anthropic’s Claude Artifacts, which lets users create interactive single-page apps via prompts

    Everything I built with Claude Artifacts this week
    https://simonwillison.net/2024/Oct/21/claude-artifacts/

  12. Tomi Engdahl says:

    Brooks Barnes / New York Times:
    Blade Runner 2049’s producer sues Musk, Tesla, and WBD for allegedly using AI to create imagery close to the film to promote Cybercab, despite a denied request — Alcon Entertainment, the Hollywood company behind “Blade Runner 2049,” said it had denied a request to use images from the movie but that Mr. Musk did so anyway.

    https://www.nytimes.com/2024/10/21/business/media/elon-musk-alcon-entertainment-robotaxi-lawsuit.html?unlocked_article_code=1.T04.mTgX.TqvTpLs-wB9B&smid=url-share

  13. Tomi Engdahl says:

    Google has already rolled out this new watermarking system with its Gemini chatbot.

    Google Is Now Watermarking Its AI-Generated Text
    But the DeepMind technology isn’t yet a practical solution for everyone
    https://spectrum.ieee.org/watermark?share_id=8471101&socialux=facebook&utm_campaign=RebelMouse&utm_content=IEEE+Spectrum&utm_medium=social&utm_source=facebook&fbclid=IwZXh0bgNhZW0CMTEAAR0cwL6oyIFoUDZhWdJN_TKgjjmPzlA3ylQ-tdhZ14lS5iUEPper6Qqc5z4_aem_JUiqHtfT3BD5lAEWYyMPdg

    The chatbot revolution has left our world awash in AI-generated text: It has infiltrated our news feeds, term papers, and inboxes. It’s so absurdly abundant that industries have sprung up to provide moves and countermoves. Some companies offer services to identify AI-generated text by analyzing the material, while others say their tools will “humanize” your AI-generated text and make it undetectable. Both types of tools have questionable performance, and as chatbots get better and better, it will only get more difficult to tell whether words were strung together by a human or an algorithm.

    Here’s another approach: Adding some sort of watermark or content credential to text from the start, which lets people easily check whether the text was AI-generated. New research from Google DeepMind, described today in the journal Nature, offers a way to do just that. The system, called SynthID-Text, doesn’t compromise “the quality, accuracy, creativity, or speed of the text generation,” says Pushmeet Kohli, vice president of research at Google DeepMind and a coauthor of the paper. But the researchers acknowledge that their system is far from foolproof, and isn’t yet available to everyone—it’s more of a demonstration than a scalable solution.

    Google has already integrated this new watermarking system into its Gemini chatbot, the company announced today, and it has also open-sourced the tool and made it available to developers building on Gemini. However, only Google and those developers currently have access to the detector that checks for the watermark. What’s more, the detector can only identify Gemini-generated text, not text generated by ChatGPT, Perplexity, or any other chatbot. As Kohli says: “While SynthID isn’t a silver bullet for identifying AI-generated content, it is an important building block for developing more reliable AI identification tools.”

    The Rise of Content Credentials
    Content credentials have been a hot topic for images and video, and have been viewed as one way to combat the rise of deepfakes. Tech companies and major media outlets have joined together in an initiative called C2PA, which has worked out a system for attaching encrypted metadata to image and video files indicating if they’re real or AI-generated. But text is a much harder problem, since text can so easily be altered to obscure or eliminate a watermark. While SynthID-Text isn’t the first attempt at creating a watermarking system for text, it is the first one to be tested on 20 million prompts.
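    The article doesn't detail SynthID-Text's sampling scheme, but the general family of generation-time text watermarks it belongs to can be sketched simply: a keyed hash of recent context selects a “green” subset of the vocabulary, sampling is biased toward it, and a detector holding the key tests whether a text uses green tokens more often than chance. A Kirchenbauer-style illustration, not Google's algorithm:

    import hashlib, random

    VOCAB = [f"tok{i}" for i in range(1000)]

    def green_set(prev_token: str, key: str, frac: float = 0.5) -> set:
        # Keyed, deterministic pseudorandom subset of the vocabulary.
        seed = int(hashlib.sha256((key + prev_token).encode()).hexdigest(), 16)
        return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * frac)))

    def green_fraction(tokens: list[str], key: str) -> float:
        # ~0.5 for unwatermarked text; significantly higher if the generator
        # biased its sampling toward each step's green set.
        hits = sum(t in green_set(p, key) for p, t in zip(tokens, tokens[1:]))
        return hits / max(len(tokens) - 1, 1)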

  14. Tomi Engdahl says:

    Cheap AI “video scraping” can now extract data from any screen recording
    Researcher feeds screen recordings into Gemini to extract accurate information with ease.
    https://arstechnica.com/ai/2024/10/cheap-ai-video-scraping-can-now-extract-data-from-any-screen-recording/

    Recently, AI researcher Simon Willison wanted to add up his charges from using a cloud service, but the payment values and dates he needed were scattered among a dozen separate emails. Inputting them manually would have been tedious, so he turned to a technique he calls “video scraping,” which involves feeding a screen recording video into an AI model, similar to ChatGPT, for data extraction purposes.

    What he discovered seems simple on its surface, but the quality of the result has deeper implications for the future of AI assistants, which may soon be able to see and interact with what we’re doing on our computer screens.

    “The other day I found myself needing to add up some numeric values that were scattered across twelve different emails,” Willison wrote in a detailed post on his blog. He recorded a 35-second video scrolling through the relevant emails, then fed that video into Google’s AI Studio tool, which allows people to experiment with several versions of Google’s Gemini 1.5 Pro and Gemini 1.5 Flash AI models.

    Willison then asked Gemini to pull the price data from the video and arrange it into a special data format called JSON (JavaScript Object Notation) that included dates and dollar amounts. The AI model successfully extracted the data, which Willison then formatted as a CSV (comma-separated values) table for spreadsheet use. After double-checking for errors as part of his experiment, the accuracy of the results—and what the video analysis cost to run—surprised him.

    “The cost [of running the video model] is so low that I had to re-run my calculations three times to make sure I hadn’t made a mistake,” he wrote. Willison says the entire video analysis process ostensibly cost less than one-tenth of a cent, using just 11,018 tokens on the Gemini 1.5 Flash 002 model. In the end, he actually paid nothing because Google AI Studio is currently free for some types of use.
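    For anyone wanting to reproduce the workflow, a minimal sketch with Google's google-generativeai Python SDK (the file name and prompt are illustrative; video uploads must finish server-side processing before use):

    import time
    import google.generativeai as genai

    genai.configure(api_key="YOUR_GEMINI_KEY")

    video = genai.upload_file("email-scroll.mp4")
    while video.state.name == "PROCESSING":  # poll until the upload is ready
        time.sleep(2)
        video = genai.get_file(video.name)

    model = genai.GenerativeModel("gemini-1.5-flash")
    resp = model.generate_content([
        video,
        "Extract every payment shown in this recording as JSON: "
        '[{"date": "...", "amount_usd": 0.0}]',
    ])
    print(resp.text)  # JSON that can then be converted to CSV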

  15. Tomi Engdahl says:

    AI Finland Gala 2024: the AI Deed of the Year has attracted over 1.4 million users
    https://www.uusiteknologia.fi/2024/10/24/ai-finland-gala-2024-tekoalytekopalkittu-saanut-yli-14-miljoonaa-hyodyntajaa/

    Yesterday at the Old Student House in Helsinki, the most important Finnish AI projects and actors of 2024 were awarded. The popular Elements of AI online course from MinnaLearn and the University of Helsinki was named AI Deed of the Year. Telecom companies were also among the winners, from Nokia to NTT Docomo and Telia.

    The AI gala, held yesterday in Helsinki, brought together over 400 business leaders, AI experts and innovators to celebrate AI achievements and to hear about the field’s future opportunities. The gala offered a chance for networking, peer learning and rapid transfer of knowledge between companies, and should thereby promote collaboration and innovation in Finland.

    The jury regarded the award-winning Elements of AI online course as one of the greatest AI-related deeds done in Finland in the past 10 years. Elements of AI has gathered over 1.4 million students around the world. It has succeeded in democratizing AI skills globally and has made Finland a central player in the international field of AI education, the jury explained.

  16. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Google open sources SynthID Text, which lets developers watermark and detect text generated by AI models, available under the Apache 2.0 license — Google is making SynthID Text, its technology that lets developers watermark and detect text generated by generative AI models, generally available.

    Google releases tech to watermark AI-generated text
    https://techcrunch.com/2024/10/23/google-releases-tech-to-watermark-ai-generated-text/

  17. Tomi Engdahl says:

    Kevin Roose / New York Times:
    A mother sues Character.AI after her 14-year-old son became obsessed with a chatbot before his suicide; Character.AI says it plans new safety features
    https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html?unlocked_article_code=1.UU4.cU7l.6htKp4WshpFU&smid=nytcore-ios-share&referringSource=articleShare&tgrp=cnt

  18. Tomi Engdahl says:

    Bestselling author Yuval Noah Harari warns: much of our information is junk, and it could lead to dictatorship
    https://yle.fi/a/74-20119550

    Harari tells Yle that algorithms threaten democracy around the world. In the United States, he says, voters are currently taking “a big gamble.”

    LONDON. The world-famous author Yuval Noah Harari has a dark message: AI could destroy humanity, or at least take power away from it. AI is already in full swing everywhere from social media to the battlefield.

    This is the core of the newly published book Nexus: A Brief History of Information Networks by the historian, whose books have sold tens of millions of copies.

    “What happens to humanity if millions or billions of AI applications make more and more of the decisions that concern us? AI decides whether you get a bank loan. AI decides whether to bomb your house. This is already happening,” Harari says in a Yle interview in London.

    Tens of millions of people have bought his brick-sized books, in which Harari connects historical events across hundreds of years and across cultures. Sapiens: A Brief History of Humankind alone has sold 45 million copies.

    In the new book, the future and AI take center stage. Harari sees world-changing threats on the horizon and relates them in the light style characteristic of his books.

    In Nexus he reminds us that, for the first time in history, humans have created something that is more than a tool. AI no longer always needs human help to learn. It makes our lives more comfortable, and we fail to notice when it begins making decisions we can neither predict nor control.

    The core concern of Harari’s book was echoed by this year’s Nobel laureate in physics: according to the Canadian-British Geoffrey Hinton, there is a grave danger that AI will become more intelligent than humans and slip out of human control.

    Junk information can lead to dictatorship

    One of Harari’s theses in the book is that information no longer accumulates into wisdom, even though there is vastly more of it than ever.

    “A large part of information is junk,” Harari says.

    Junk is cheap. Truth, by contrast, is expensive, because fact-checking takes time, money and energy.

    Disinformation is abundant on, among other platforms, the messaging service X, whose owner, the world’s richest man Elon Musk, is campaigning for US presidential candidate Donald Trump. X’s AI-driven algorithm pushes Musk’s posts to all of the service’s users.

    Social media and information technology have led to the collapse of democratic debate and the destabilization of democracy all over the world, Harari says.

    He argues that disinformation spread by algorithms erodes trust in the institutions democracy depends on, such as traditional media and scientific organizations. When people no longer believe in anything, they treat the news as a conspiracy of journalists and scientific studies as a conspiracy of researchers.

    “What remain are anarchy and dictatorship. And in that case most people will choose dictatorship,” Harari fears.

  19. Tomi Engdahl says:

    OpenAI’s AGI Czar Quits, Saying the Company Isn’t Ready for What It’s Building
    https://futurism.com/the-byte/openai-agi-readiness-head-resigns

    “The world is also not ready.”

    OpenAI’s researcher in charge of making sure the company (and the world) is prepared for the advent of artificial general intelligence (AGI) has resigned — and is warning that nobody is ready for what’s coming next.

    In a post on his personal Substack, the firm’s newly-resigned AGI readiness czar Miles Brundage said quitting his “dream job” after six years has been difficult. He says he’s doing so because he feels a great responsibility regarding the purportedly human-level artificial intelligence he believes OpenAI is ushering into existence.

    “I decided,” Brundage wrote, “that I want to impact and influence AI’s development from outside the industry rather than inside.”

    When it comes to being prepared to handle the still-theoretical tech, the researcher was unequivocal.

    “In short, neither OpenAI nor any other frontier lab is ready,” he wrote, “and the world is also not ready.”

    “AGI is an overloaded phrase that implies more of a binary way of thinking than actually makes sense.”

    Instead of there being some before-and-after AGI framework, the researcher said that there are, to quote many a hallucinogen enthusiast, levels to this shit.

    Indeed, Brundage said he was instrumental in the creation of OpenAI’s five-step scale of AI/AGI levels that got leaked to Bloomberg over the summer. On that scale, which ends with AI that can “do the work of an organization,” OpenAI believes the world is currently at the precipice of level two, which would be characterized by AI that has the capability of human-level reasoning.

    All the same, Brundage insists that both OpenAI and the world at large remain unprepared for the next-generation AI systems being built.

    Notably, Brundage still believes that while AGI can benefit all of humanity, it won’t automatically do so. Instead, the humans in charge of making it — and regulating it — have to go about doing so deliberately. That caveat suggests that he may not think OpenAI is being sufficiently deliberate in how it approaches AGI stewardship.

  20. Tomi Engdahl says:

    Top “Reasoning” AI Models Can be Brought to Their Knees With an Extremely Simple Trick
    Cutting-edge AI models may be a whole lot stupider than we thought.
    https://futurism.com/reasoning-ai-models-simple-trick

    A team of Apple researchers has found that advanced AI models’ alleged ability to “reason” isn’t all it’s cracked up to be.

    “Reasoning” is a word that’s thrown around a lot in the AI industry these days, especially when it comes to marketing the advancements of frontier AI language models.

    But marketing aside, there’s no agreed-upon industrywide definition for what reasoning exactly means. Like other AI industry terms, for example, “consciousness” or “intelligence,” reasoning is a slippery, ephemeral concept; as it stands, AI reasoning can be chalked up to an LLM’s ability to “think” its way through queries and complex problems in a way that resembles human problem-solving patterns.

    But that’s a notoriously difficult thing to measure. And according to the Apple scientists’ yet-to-be-peer-reviewed study, frontier LLMs’ alleged reasoning capabilities are way flimsier than we thought.

    For the study, the researchers took a closer look at the GSM8K benchmark, a widely-used dataset used to measure AI reasoning skills made up of thousands of grade school-level mathematical word problems. Fascinatingly, they found that just slightly altering given problems — switching out a number or a character’s name here or adding an irrelevant detail there — caused a massive uptick in AI errors.

    In short: when researchers made subtle changes to GSM8K questions that didn’t impact the mechanics of the problem, frontier AI models failed to keep up. And this, the researchers argue, suggests that AI models aren’t actually reasoning like humans, but are instead engaging in more advanced pattern-matching based on existing training data.

    “We hypothesize that this decline is due to the fact that current LLMs are not capable of genuine logical reasoning,” the researchers write. “Instead, they attempt to replicate the reasoning steps observed in their training data.”

    As the saying goes, fake it ’till you make it!

    A striking example of such an exploit is a mathematical reasoning problem involving kiwis, which reads as follows:

    Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?

    Of course, how small or large any of these kiwis are is irrelevant to the task at hand. But as the scientists’ work showed, the majority of AI models routinely — and erroneously — incorporated the extraneous detail into reasoning processes, ultimately resulting in errors.

    And in an even more simplistic test, researchers found that just switching out details like proper nouns or numbers caused a significant decrease in a model’s ability to correctly answer the question, with accuracy drops ranging from 0.3 percent to nearly ten percent across 20 top reasoning models.

    “LLMs remain sensitive to changes in proper names (e.g., people, foods, objects), and even more so when numbers are altered,”

    “Understanding LLMs’ true reasoning capabilities is crucial for deploying them in real-world scenarios where accuracy and consistency are non-negotiable — especially in AI safety, alignment, education, healthcare, and decision-making systems,” Farajtabar continued in the X thread. “Our findings emphasize the need for more robust and adaptable evaluation methods.”

    “Developing models that move beyond pattern recognition to true logical reasoning,” he added, “is the next big challenge for the AI community.”
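    The perturbation setup described above is easy to mimic: template a word problem so its names, numbers, and distractor details can be resampled, then check whether a model’s accuracy holds across the variants. A minimal illustrative sketch (not the Apple team’s code):

    import random

    TEMPLATE = ("{name} picks {a} kiwis on Friday and {b} on Saturday. On "
                "Sunday, {name} picks double Friday's count, but {d} of them "
                "are a bit smaller than average. How many kiwis in total?")

    def variant(rng: random.Random) -> tuple[str, int]:
        name = rng.choice(["Oliver", "Mia", "Ravi", "Elena"])
        a, b, d = rng.randint(20, 80), rng.randint(20, 80), rng.randint(2, 9)
        question = TEMPLATE.format(name=name, a=a, b=b, d=d)
        answer = a + b + 2 * a  # the "smaller" detail is irrelevant by design
        return question, answer

    rng = random.Random(0)
    for _ in range(3):
        q, ans = variant(rng)
        print(q, "->", ans)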

  21. Tomi Engdahl says:

    Legal responses against synthetic human-like fakes, more precisely digital look-alikes, or so-called “deep fakes” or “deepfakes”, are springing up around the planet

    Does your jurisdiction have existing or new laws against the menace of fake human-like images?

    Read up in the Stop Synthetic Filth! wiki’s article on laws against synthesis and related crimes.

    SSF wiki is a non-profit public service announcement wiki served from https://stop-synthetic-filth.org/wiki/Laws_against_synthesis_and_other_related_crimes

  22. Tomi Engdahl says:

    “If you believe what I believe, you have to just leave the company.”

    OpenAI Whistleblower Disgusted That His Job Was to Vacuum Up Copyrighted Data to Train Its Models
    https://futurism.com/the-byte/openai-whistleblower-copyrighted-data

    Sounding the Alarm
    A former OpenAI researcher is blowing the whistle on the company’s AI training practices, alleging that OpenAI violated copyright law to train its AI models — and arguing that OpenAI’s current business model stands to upend the business of the internet as we know it, according to The New York Times.

    The ex-staffer, a 25-year-old named Suchir Balaji, worked at OpenAI for four years before deciding to leave the AI firm due to ethical concerns. As Balaji sees it, because ChatGPT and other OpenAI products have become so heavily commercialized, OpenAI’s practice of scraping online material en masse to feed its data-hungry AI models no longer satisfies the criteria of the fair use doctrine. OpenAI — which is currently facing several copyright lawsuits, including a high-profile case brought last year by the NYT — has argued the opposite.

    “If you believe what I believe,” Balaji told the NYT, “you have to just leave the company.”

  23. Tomi Engdahl says:

    Chris Welch / The Verge:
    Google will add an AI info section in the image details view of Google Photos, for images edited with tools like Magic Editor and Magic Eraser — There’s no putting the genie back in the bottle when it comes to generative AI forever shaking our trust in photos, but the tech industry …

    Google Photos will soon show you if an image was edited with AI
    https://www.theverge.com/2024/10/24/24278663/google-photos-generative-ai-label-reimagine-best-take

    Now you’ll be able to see when generative AI has been used — or when multiple images are combined into one.

  24. Tomi Engdahl says:

    Maxwell Zeff / TechCrunch:
    In a post, Perplexity criticizes media companies that have sued over AI, saying they wish AI tools didn’t exist and prefer that corporations own reported facts — Perplexity shot back at media companies skeptical of AI’s benefits in a blog post Thursday responding to News Corp’s lawsuit filed against the startup earlier this week.

    ‘They wish this technology didn’t exist’: Perplexity responds to News Corp’s lawsuit
    https://techcrunch.com/2024/10/24/they-wish-this-technology-didnt-exist-perplexity-responds-to-news-corps-lawsuit/

    Perplexity shot back at media companies skeptical of AI’s benefits in a blog post Thursday, responding to News Corp’s lawsuit filed against the startup earlier this week. The lawsuit alleged Perplexity engaged in large-scale copyright violations against Dow Jones and the NY Post. Several other media organizations — including Forbes, The New York Times, and Wired — have made similar accusations against Perplexity.

    “There are around three dozen lawsuits by media companies against generative AI tools. The common theme betrayed by those complaints collectively is that they wish this technology didn’t exist,” said the Perplexity team in the blog. “They prefer to live in a world where publicly reported facts are owned by corporations, and no one can do anything with those publicly reported facts without paying a toll.”

    In just over 600 words, Perplexity makes several grandiose claims about the media industry but does little to back them up with facts or evidence, saying, “This is not the place to get into the weeds of it all.” That said, the overall tone represents a sharp change from how Perplexity has previously engaged with the media companies that power its AI search engine. In the post, Perplexity referenced an adversarial posture between the media and tech, calling the lawsuit “fundamentally shortsighted, unnecessary, and self-defeating.”

  25. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Anthropic launches an analysis tool to help Claude write and run JavaScript code, perform calculations, and analyze data from files, in preview — Anthropic’s Claude chatbot can now write and run JavaScript code. — Today, Anthropic launched a new analysis tool that helps Claude respond …

    Anthropic’s AI can now run and write code
    https://techcrunch.com/2024/10/24/anthropics-ai-can-now-run-and-write-code/

    Anthropic’s Claude chatbot can now write and run JavaScript code.

    Today, Anthropic launched a new analysis tool that helps Claude respond with what the company describes as “mathematically precise and reproducible answers.” With the tool enabled — it’s currently in preview — Claude can perform calculations and analyze data from files like spreadsheets and PDFs, rendering the results as interactive visualizations.

    “Think of the analysis tool as a built-in code sandbox, where Claude can do complex math, analyze data, and iterate on different ideas before sharing an answer,” Anthropic wrote in a blog post. “Instead of relying on abstract analysis alone, it can systematically process your data — cleaning, exploring, and analyzing it step-by-step until it reaches the correct result.”

    Anthropic gives a few examples of where this might be useful. For instance, a product manager could upload sales data and ask Claude for country-specific performance analysis, while an engineer could give Claude monthly financial data and have it create a dashboard highlighting key trends.

  26. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Concentric AI, which helps companies secure and track sensitive data, raised a $45M Series B, bringing its total funding to $67M

    Concentric helps companies keep track of their sensitive data
    https://techcrunch.com/2024/10/24/concentric-helps-companies-keep-track-of-their-sensitive-data/

    Enterprises have a data inventory problem. The amount of data they’re collecting and storing is increasing, and that data is being spread across disparate storage buckets. Yet many organizations rely on processes that essentially amount to pencil-and-paper methods for tracking data provenance. According to one survey, more than 50% of companies use Excel spreadsheets in their data privacy and compliance efforts.

    Karthik Krishnan, Shankar Subramaniam, and Madhu Shashanka thought they might have the engineering chops to build something to make this easier for companies. The trio had cut their teeth in cybersecurity: Years ago, Subramaniam and Shashanka had recruited Krishnan as one of the first employees at their behavioral analytics startup, Niara.

    A few years after Hewlett Packard acquired Niara, the trio began sketching out ideas for an enterprise data management tool. They envisioned a product that could catalog a company’s critical data — including information stored in infrequently accessed places — and automatically flag any data that’s at risk of compromise.

    “We hoped to solve one of the most pressing data security challenges facing the modern enterprise,” Krishnan told TechCrunch. “That is: identifying and securing business-critical information within structured and unstructured data, stored on-premises or in the cloud, at scale.”

  27. Tomi Engdahl says:

    Les Pounder / Tom’s Hardware:
    Raspberry Pi unveils the Raspberry Pi AI HAT+ in 13 and 26 TOPS versions in partnership with Hailo, after announcing branded SSDs, micro SD cards, and a bumper

    Raspberry Pi release higher performance AI HAT+ — 13 and 26 TOPS variants
    https://www.tomshardware.com/raspberry-pi/raspberry-pi-release-higher-performance-ai-hat-13-and-26-tops-variants

    Raspberry Pi has another new product to introduce this week, a continuation of its AI-centric products for the Raspberry Pi 5. The Raspberry Pi AI HAT+ comes in two versions, rated at 13 and 26 tera-operations per second (TOPS), and continues the company’s partnership with Hailo.

    Fresh from the news that Raspberry Pi now has its own branded SSDs, micro SD cards and a bumper for the Raspberry Pi 5, Raspberry Pi has also announced its third AI-centric product in the form of the Raspberry Pi AI HAT+.

    You may be thinking that this looks familiar, and you are partially correct. It looks very similar to the previously released Raspberry Pi AI Kit, and it also uses a Hailo-8 neural network inference accelerator. But the AI HAT+ has the accelerator built into the board rather than attached via an M.2 interface. It still uses the PCIe interface, running at Gen 3 speeds, much like the Raspberry Pi SSD, which also runs at PCIe Gen 3, an indication that Raspberry Pi may shift (if it hasn’t already) the default PCIe speed from Gen 2 to Gen 3 in a software update.

    The new Raspberry Pi AI HAT+ comes in two variants: a 26 TOPS version powered by a Hailo-8, and a 13 TOPS Hailo-8L version matching the performance of the Raspberry Pi AI Kit. Raspberry Pi also recently released the Raspberry Pi AI Camera kit, which does not use a Hailo accelerator. Instead, the camera kit uses a Sony IMX500 “Intelligent Vision Sensor,” which is not directly comparable to the Hailo boards. But we are working on a benchmark test that can compare the performance of all three devices.

  28. Tomi Engdahl says:

    David E. Sanger / New York Times:
    The Biden administration issues the first-ever National Security Memorandum on AI, detailing how the Pentagon and intel agencies should use and protect AI

    https://www.nytimes.com/2024/10/24/us/politics/biden-government-guidelines-ai.html?unlocked_article_code=1.Uk4.NA6q.sreB_UhMnV65&smid=url-share

  29. Tomi Engdahl says:

    New Rules for US National Security Agencies Balance AI’s Promise With Need to Protect Against Risks

    New rules from the White House on AI use by US national security and spy agencies aim to balance the technology’s promise with the need to protect against risks.

    https://www.securityweek.com/new-rules-for-us-national-security-agencies-balance-ais-promise-with-need-to-protect-against-risks/

  30. Tomi Engdahl says:

    You might be shocked — and not in a good way.

    The Environmental Toll of a Single ChatGPT Query Is Absolutely Wild
    https://futurism.com/the-byte/environment-openai-chatgpt?fbclid=IwZXh0bgNhZW0CMTEAAR3irgeX651IqO0udfzv9kjYWGs1bKouOHupTeGIrzUewGzzYZMadva2xgo_aem_v2XUz0GBQDVQhiUCobwzxQ

    Just how many resources are eaten up when you ask OpenAI’s ChatGPT to write a simple 100-word email?

    The answer may alarm you: about the equivalent of a full bottle of water, plus enough power to light 14 LED bulbs for an hour, according to The Washington Post’s consultation with UC Riverside researcher Shaolei Ren. That is an appreciable environmental toll on its own, but a staggering one when you multiply it out to the number of users worldwide.

    Say one out of every ten working Americans used ChatGPT just once a week to write an email. By Ren’s estimate, over a one-year period ChatGPT would guzzle 435 million liters of water and burn 121,517 megawatt-hours of power. That translates into all the water drunk by every household in Rhode Island for a day and a half, and enough electricity to light every household in Washington, DC for 20 days.
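    Those totals follow from straightforward scaling. A quick check with round numbers (assumed inputs: roughly 167 million working Americans, half a liter of water and 0.14 kWh per email, consistent with the figures above):

    workers = 167e6
    emails_per_year = (workers / 10) * 52       # a tenth of workers, weekly
    water_megaliters = emails_per_year * 0.5 / 1e6
    energy_mwh = emails_per_year * 0.14 / 1000
    print(f"{water_megaliters:.0f} million liters")  # ~434 million liters
    print(f"{energy_mwh:,.0f} MWh")                  # ~122,000 MWh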

    And that’s just today’s usage. With big tech so confident in the explosive potential of AI that Microsoft is looking to bring an entire nuclear plant back online to fuel its AI datacenters, those figures could come to look laughably low.

    Thirst Traps
    The reason ChatGPT consumes so much water is that AI data centers give off enormous heat when running calculations, and cooling those servers requires a tremendous amount of water. In places where electricity is cheap or water is scarce, AI data centers instead use electricity to run air conditioners to cool their servers.

  31. Tomi Engdahl says:

    Is there a bubble in AI, analyst Tero Kuittinen?
    https://www.taloustaito.fi/Rahat/onko-tekoalyssa-kupla-analyytikko-tero-kuittinen/#75642db5

    KARON GRILLI. “There is a bubble in [AI stocks]. This situation actually resembles the year 2000 quite closely,” analyst and advisor Tero Kuittinen answers bluntly on Karon Grilli.

    The year 2000 was the hottest phase of the internet mania. Kuittinen was then working as an analyst specializing in Nokia and became one of the most respected in the world in his field.

    The analogy between the AI mania currently humming through the US markets and the internet mania is obvious. Putting it bluntly, Nvidia, which makes the high-powered processors AI applications need, corresponds to the network builders of the internet boom.

    “Back then the share prices of Lucent, Nortel, Motorola, Nokia, Cisco, all of these firms, exploded. And the interesting thing is that the forecast of exploding traffic [underlying those companies’ investment story] came true: the amount of mobile data grew a millionfold, because people started watching TikTok videos three hours a day. But it didn’t help the hardware firms. They all collapsed. The margins were competed away. Even though everyone believed they had moats, the moats didn’t hold,” Kuittinen recalls.

    In the same way, Nvidia is currently believed to be able to keep its remarkably high margins.

    “How could they hold? Lucent and Cisco at least had customers who couldn’t compete with them. But Nvidia’s big customers, Microsoft, Google and Amazon, are feverishly working on their own chip plans and trying to build for themselves the very product Nvidia supplies to them.”

    The large US technology companies are spending staggering sums on AI investments.

    “AI alone would then have to generate a trillion in profits just to cover hundreds of billions in investments,” Kuittinen marvels.

    Of the giants, Apple has been more restrained than the others in the AI arms race. That is not necessarily the right call either.

    Kuittinen again compares the situation to 25 years ago.

    “Every operator had to join in back then; they couldn’t skip building 3G either. So even if an investment later looks wasted or poorly yielding, things would have gone even worse for anyone who stayed out.”

    Tero Kuittinen reckons that the biggest benefits of AI will come in medicine. Diagnostics companies that have harnessed AI have also been hot in investors’ minds and portfolios.

    “In America there is a strong current toward diagnostics and early-stage care moving into pharmacies. Many people want to keep important data hidden from insurance companies and from their own doctors.”

    Pharmacies, and even gyms, could in his estimation become a new channel for diagnoses and preventive care.

    “AI is going to fit this wonderfully.”

    By contrast, Kuittinen is skeptical of the language models that dominate public discussion.

    “AI finds the beginnings of a cancer in an image really well, but if you ask it which restaurant to visit in Hämeenlinna the day after tomorrow, the systems invent restaurants that never existed or that closed a year earlier.”

    The fabrications hallucinated by language models are starting to fill the internet.

    The recent performance of the US stock market makes Tero Kuittinen shake his head. He says he knows many very intelligent and wealthy people who have been expecting a recession since the spring of 2023, when Silicon Valley Bank went under.

  32. Tomi Engdahl says:

    I’ve mostly dumped Google Search for the smarter Perplexity and ChatGPT
    https://www.androidpolice.com/i-dumped-google-search-for-ai/

    Rest assured that you aren’t alone: Google Search results have indeed gotten worse. Google even added AI Overviews as a stopgap solution recently, but it’s unreliable to the point of being unusable. Things like these have added up over the past few months, which is why I have now turned to Google Search’s AI-first alternatives for my internet needs, and I don’t feel like turning back.

  33. Tomi Engdahl says:

    Claude AI Gets Bored During Coding Demonstration, Starts Perusing Photos of National Parks Instead
    https://futurism.com/the-byte/claude-ai-bored-demonstration

    While its developers were trying to record a coding demonstration, the latest version of Claude 3.5 Sonnet — Anthropic’s current flagship AI — got off track and produced some “amusing” moments, the company said in an announcement.

    It’s perilous to anthropomorphize machine learning models, but if this were a human employee, we’d diagnose them with a terminal case of being bored on the job. As seen in a video, Claude decides to blow off writing code, opens Google, and inexplicably browses through beautiful photos of Yellowstone National Park.

  34. Tomi Engdahl says:

    Toxicity testing using deep learning and stem cells might make animals redundant.

    Testing toxicity using stem cells and AI
    https://www.nature.com/articles/d42473-024-00249-2?utm_source=facebook&utm_medium=social&utm_campaign=APSR_NINDX_AWA1_GL_PCFU_CFULF_AI-YKH-AP24&fbclid=IwZXh0bgNhZW0BMABhZGlkAasTmcvU4lwBHR2FAKrVvQ7nX_HGUJ1Ji6R9XXBYjc7VmGdCaZNAWwFA-hzKanQiJ20PqA_aem_LpLh3SoBJjHTlm3DN_gDMw&utm_id=120210830217190572&utm_content=120211366248910572&utm_term=120211366248930572

    A new technique based on machine learning and stem cells may lead to personalized testing for hazardous chemicals and make toxin testing on animals a thing of the past.

  35. Tomi Engdahl says:

    Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said
    https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14?fbclid=IwY2xjawGL0fdleHRuA2FlbQIxMQABHRT6iAS0LebX1toIlew3lDXtZpu243UyprLYcs0DLAH5Qb4BZkbJhhN_WA_aem_mqO5rqhfB7u5DE7kSlz53w

    Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”

    But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.

    A machine learning engineer said he initially discovered hallucinations in about half of the over 100 hours of Whisper transcriptions he analyzed. Another developer said he found hallucinations in nearly every one of the 26,000 transcripts he created with Whisper.

    The problems persist even in well-recorded, short audio samples. A recent study by computer scientists uncovered 187 hallucinations in more than 13,000 clear audio snippets they examined.

    That trend would lead to tens of thousands of faulty transcriptions over millions of recordings, researchers said.

    Such mistakes could have “really grave consequences,” particularly in hospital settings, said Alondra Nelson

    “Nobody wants a misdiagnosis,”

    Whisper also is used to create closed captioning for the Deaf and hard of hearing — a population at particular risk for faulty transcriptions. That’s because the Deaf and hard of hearing have no way of identifying fabrications “hidden amongst all this other text,”

    OpenAI urged to address problem
    The prevalence of such hallucinations has led experts, advocates and former OpenAI employees to call for the federal government to consider AI regulations. At minimum, they said, OpenAI needs to address the flaw.

    “This seems solvable if the company is willing to prioritize it,”

    While most developers assume that transcription tools misspell words or make other errors, engineers and researchers said they had never seen another AI-powered transcription tool hallucinate as much as Whisper.

    In the last month alone, one recent version of Whisper was downloaded over 4.2 million times from open-source AI platform HuggingFace. Sanchit Gandhi, a machine-learning engineer there, said Whisper is the most popular open-source speech recognition model and is built into everything from call centers to voice assistants.

    The researchers determined that nearly 40% of the hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.

  36. Tomi Engdahl says:

    Google to develop AI that takes over computers, The Information reports
    https://www.reuters.com/technology/artificial-intelligence/google-develop-ai-that-takes-over-computers-information-reports-2024-10-26/?fbclid=IwY2xjawGL1etleHRuA2FlbQIxMQABHaWxgLIoyYy-34rVeWK7Az_kE35TFLDt-53p1PNqtPKkOtocht_OkVRzQg_aem_qEK58Gq-koS-59g43IT-wA

    Alphabet’s (GOOGL.O) Google is developing artificial intelligence technology that takes over a web browser to complete tasks such as research and shopping, The Information reported on Saturday.
    Google is set to demonstrate the product code-named Project Jarvis as soon as December with the release of its next flagship Gemini large language model, the report added, citing people with direct knowledge of the product.

    Anthropic and Google are trying to take the agent concept a step further with software that interacts directly with a person’s computer or browser, the report said.

    Google didn’t immediately respond to a Reuters request for comment.

    Reply
  37. Tomi Engdahl says:

    Meta Platforms to use Reuters news content in AI chatbot
    https://www.reuters.com/technology/artificial-intelligence/meta-platforms-use-reuters-news-content-ai-chatbot-2024-10-25/

    Meta Platforms said on Friday its artificial intelligence chatbot will use Reuters content to answer user questions in real time about news and current events, the latest AI tie-up between a big technology company and a news publisher.

    Meta AI, the company’s chatbot, is available across its services including Facebook, WhatsApp and Instagram. The social media giant did not disclose whether it plans to use Reuters content to train its large language model.
    “We can confirm that Reuters has partnered with tech providers to license our trusted, fact-based news content to power their AI platforms. The terms of these deals remain confidential,” a spokesperson for Reuters said in a statement.

    Reuters will be compensated for access to its journalism under a multi-year deal, according to a report on Friday from Axios, which first published the news.

    Through its partnership with Reuters, “Meta AI can respond to news-related questions with summaries and links to Reuters content,” a Meta spokesperson said in a statement sent by email.
    Other companies including ChatGPT-maker OpenAI and Jeff Bezos-backed startup Perplexity have struck similar AI partnerships with news organizations.

    Reply
  38. Tomi Engdahl says:

    Is the AI PC just hype?
    https://etn.fi/index.php/opinion/16766-onko-tekoaely-pc-pelkkaeae-hypeae

    It is fair to say that AI PCs are quickly becoming a major talking point among journalists and organizations alike. For companies considering a hardware refresh, making the right choice has never been more important, writes Elliot Jones, who heads strategic marketing at Kingston Technology.

    AI-powered applications are nothing new, but PCs with neural processing units (NPUs) designed to improve machine learning tasks are a relatively new class of device. With the rise of AI chatbots such as ChatGPT, we have heard a great deal about large language models (LLMs): algorithms that, through machine learning on large datasets, can understand and generate human language.

    An AI PC works in a similar way on a smaller, more local scale, using small language models (SLMs). SLMs are more limited, but they are better suited to optimizing an individual device and to performing narrower, more targeted tasks. One key advantage of SLMs is the ability to move data selectively between the machine’s physical storage and cloud storage networks, which offers the best of both worlds.
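
    As an illustration of the local-inference workload described above, here is a minimal sketch of running a small open language model on-device with the Hugging Face transformers library. The model name is just one example of a small open model; an NPU-accelerated runtime would be a further optimization:

        # Generate text locally with a small language model (SLM).
        # Assumes `pip install transformers torch`.
        from transformers import pipeline

        generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
        out = generator("Explain in one sentence why NPUs matter for laptops:",
                        max_new_tokens=60)
        print(out[0]["generated_text"])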

    Reply
  39. Tomi Engdahl says:

    Taryn Plumb / VentureBeat:
    Gartner: AI agents are “one of the most hyped topics in gen AI today”; survey: 75% of CEOs say AI will impact their industry, up from 21% in 2023

    Gartner predicts AI agents will transform work, but disillusionment is growing
    https://venturebeat.com/ai/gartner-predicts-ai-agents-will-transform-work-but-disillusionment-is-growing/

    Very quickly, the topic of AI agents has moved from ambiguous concepts to reality. Enterprises will soon be able to deploy fleets of AI workers to automate and supplement — and yes, in some cases supplant — human talent.

    “Autonomous agents are one of the hottest topics and perhaps one of the most hyped topics in gen AI today,” Gartner distinguished VP analyst Arun Chandrasekaran said at the Gartner Symposium/Xpo this past week.

    However, while autonomous agents are trending on the consulting firm’s new generative AI hype cycle, he emphasized that “we’re in the super super early stage of agents. It’s one of the key research goals of AI companies and research labs in the long run.”

    Top trends in Gartner’s AI Hype Cycle for gen AI

    Based on Gartner’s 2024 Hype Cycle for Generative AI, four key trends are emerging around gen AI — autonomous agents chief among them. Today’s conversational agents are advanced and versatile, but are “very passive systems” that need constant prompting and human intervention, Chandrasekaran noted. Agentic AI, by contrast, will need only high-level instructions, which it can break down into a series of execution steps.

    “For autonomous agents to flourish, models have to significantly evolve,” said Chandrasekaran. They need reasoning, memory and “the ability to remember and contextualize things.”
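
    The pattern is easy to see in code. Below is a deliberately simplified, vendor-neutral sketch of the loop Chandrasekaran describes: a high-level goal is decomposed into steps, and each result is fed back into the model’s working memory. The llm callable and tools dictionary are hypothetical stand-ins, not any real API:

        # Toy agent loop: plan a step, execute it with a tool, remember the result.
        def run_agent(goal: str, llm, tools: dict, max_steps: int = 5) -> list:
            memory = [f"Goal: {goal}"]
            for _ in range(max_steps):
                plan = llm("\n".join(memory) + "\nNext step as 'tool: argument', or DONE:")
                if plan.strip() == "DONE":
                    break
                tool, _, arg = plan.partition(":")
                result = tools.get(tool.strip(), lambda a: "unknown tool")(arg.strip())
                memory.append(f"Step: {plan} -> Result: {result}")
            return memory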

    Another key trend is multimodality, said Chandrasekaran. Many models began with text, and have since expanded into code, images (as both input and output) and video. A challenge in this is that “by the very aspect of getting multimodal, they’re also getting larger,” said Chandrasekaran.

    Open-source AI is also on the rise. Chandrasekaran pointed out that the market has so far been dominated by closed-source models, but open source provides customization and deployment flexibility — models can run in the cloud, on-prem, at the edge or on mobile devices.

    Finally, edge AI is coming to the fore. Much smaller models — between 1B and 10B parameters — will be used in resource-constrained environments. These can run on PCs or mobile devices, providing “acceptable and reasonable accuracy,” said Chandrasekaran.

    Models are “slimming down and extending from the cloud into other environments,” he said.

    Heading for the trough

    At the same time, some enterprise leaders say AI hasn’t lived up to the hype. Gen AI is beginning to slide into the trough of disillusionment (when technology fails to meet expectations), said Chandrasekaran. But this is “inevitable in the near term.”

    There are a few fundamental reasons for this, he explained. First, VCs have funded “an enormous amount of startups” — but they have still grossly underestimated the amount of money startups need to be successful. Also, many startups have “very flimsy competitive moats,” essentially serving as a wrapper on top of a model that doesn’t offer much differentiation.

    Also, “the fight for talent is real” — consider the recent acqui-hiring deals — and enterprises underestimate the amount of change management required. Buyers are also increasingly raising questions about business value (and how to track it).

    There are also concerns about hallucination and explainability, and there’s more to be done to make models more reliable and predictable. “We are not living in a technology bubble today,” said Chandrasekaran. “The technologies are sufficiently advancing. But they’re not advancing fast enough to keep up with the lofty expectations enterprise leaders have today.”

    Not surprisingly, the cost of building and using AI is another significant hurdle. In a Gartner survey, more than 90% of CIOs said that managing cost limits their ability to get value from AI. For instance, data preparation and inferencing costs are often greatly underestimated, explained Hung LeHong, a distinguished VP analyst at Gartner.

    Also, software vendors are raising their prices by up to 30% because AI is increasingly embedded into their product pipelines. “It’s not just the cost of AI, it’s the cost of applications they’re already running in their business,” said LeHong.

    Core AI use cases

    Still, enterprise leaders understand how instrumental AI will be going forward. Three-quarters of CEOs surveyed by Gartner say AI is the technology that will be most impactful to their industry, a significant leap from 21% just in 2023, LeHong pointed out.

    That percentage has been “going up and up and up every year,” he said.

    Right now, the focus is on internal customer service functions where humans are “still in the driver’s seat,” Chandrasekaran pointed out. “We’re not seeing a lot of customer-facing use cases yet with gen AI.”

    LeHong pointed out that a significant amount of enterprise-gen AI initiatives are focused on augmenting employees to increase productivity. “They want to use gen AI at individual employee level.”

    Chandrasekaran pointed to three business functions that stand out in adoption: IT, security and marketing. In IT, some uses for AI include code generation, analysis and documentation. In security, the technology can be used to augment SOCs when it comes to areas such as forecasting, incident and threat management and root cause analysis.

    In marketing, meanwhile, AI can be used to provide sentiment analysis based on social media posts and to create more personalized content. “I think marketing and gen AI are made for each other,” said Chandrasekaran. “These models are quite creative.”

    He pointed to some common use cases across these business functions: content creation and augmentation; data summarization and insights; process and workflow automation; forecasting and scenario planning; customer assistance; and software coding and co-pilots.

    Also, enterprises want the ability to query and retrieve from their own data sources. “Enterprise search is an area where AI is going to have a significant impact,” said Chandrasekaran. “Everyone wants their own ChatGPT.”
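
    The pattern behind this kind of enterprise search is typically retrieval-augmented generation: embed the company’s documents, retrieve the best match for a query, and hand it to a model as context. A minimal sketch with the sentence-transformers library; the documents and query are placeholders:

        # Retrieve the most relevant internal document for a query.
        # Assumes `pip install sentence-transformers`.
        from sentence_transformers import SentenceTransformer, util

        docs = ["Q3 revenue grew 12%.", "The VPN policy changed in May."]
        model = SentenceTransformer("all-MiniLM-L6-v2")
        doc_emb = model.encode(docs, convert_to_tensor=True)

        query_emb = model.encode("What happened to revenue?", convert_to_tensor=True)
        best = util.semantic_search(query_emb, doc_emb, top_k=1)[0][0]
        print(docs[best["corpus_id"]])  # context to prepend to an LLM prompt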

    AI is moving fast

    Additionally, Gartner forecasts that:

    By 2025, 30% of enterprises will have implemented an AI-augmented development and testing strategy, up from 5% in 2021.
    By 2026, more than 100 million humans will engage with robo or synthetic virtual colleagues and nearly 80% of prompting will be semi-automated. “Models are going to get increasingly better at parsing context,” said Chandrasekaran.
    By 2027, more than 50% of enterprises will have implemented a responsible AI governance program, and the number of companies using open-source AI will increase tenfold.

    With AI now “coming from everywhere,” enterprises are also looking to put specific leaders in charge of it, LeHong explained: right now, 60% of CIOs are tasked with leading AI strategies. Before gen AI, data scientists were “the masters of that domain,” he said.

    Reply
  40. Tomi Engdahl says:

    Brian Heater / TechCrunch:
    Apple launches some Apple Intelligence features like writing tools, image cleanup, and typing in Siri, with the release of iOS 18.1, iPadOS 18.1, and macOS 15.1 — Apple on Monday confirmed the general availability of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1.

    https://techcrunch.com/2024/10/28/apple-intelligence-goes-live-with-ios-181-update/

    Reply
  41. Tomi Engdahl says:

    Juli Clover / MacRumors:
    Apple says Apple Intelligence will roll out to iOS and iPadOS in the EU from April; EU macOS users can access Apple Intelligence in US English with macOS 15.1

    Apple Intelligence Rolling Out in the European Union Starting in April 2025
    https://www.macrumors.com/2024/10/28/apple-intelligence-eu-april-2025/

    Reply
  42. Tomi Engdahl says:

    Emma Roth / The Verge:
    Google says its AI Overview summaries will begin rolling out in over 100 countries and territories this week, meaning the feature will reach 1B+ monthly users — Google’s AI Overviews are expanding across more than 100 countries this week. The AI-generated search summaries will appear for users in Canada …

    Google’s AI search summaries are rolling out to over 100 more countries
    / Google’s AI Overviews will appear in Canada, Australia, South Africa, and many other locations.
    https://www.theverge.com/2024/10/28/24281860/google-ai-search-summaries-expand-more-countries

    Reply
  43. Tomi Engdahl says:

    Kylie Robison / The Verge:
    Meta disagrees with OSI’s definition of open-source AI; Llama doesn’t fit the definition due to commercial use restrictions and a lack of training data access — The Open Source Initiative (OSI) has released its official definition of “open” artificial intelligence, setting the stage …

    Open-source AI must reveal its training data, per new OSI definition
    https://www.theverge.com/2024/10/28/24281820/open-source-initiative-definition-artificial-intelligence-meta-llama

    Meta’s Llama does not fit OSI’s new definition.

    Reply
  44. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    The Open Source Initiative releases version 1.0 of its Open Source AI Definition, after years of collaboration with academia and the industry

    https://techcrunch.com/2024/10/28/we-finally-have-an-official-definition-for-open-source-ai/

    Reply
  45. Tomi Engdahl says:

    Ivan Mehta / TechCrunch:
    Read AI, which uses AI to summarize meetings, emails, and more, raised a $50M Series B and launches a free Chrome extension, after a $21M Series A in April 2024

    https://techcrunch.com/2024/10/28/read-ai-raises-50m-to-integrate-its-bot-with-slack-email-and-more/

    Reply
