3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,941 Comments

  1. Tomi Engdahl says:

    Microsoft announces new Copilot Copyright Commitment for customers
    https://blogs.microsoft.com/on-the-issues/2023/09/07/copilot-copyright-commitment-ai-legal-concerns/

    01.05.2024 Update: On November 15, 2023, Microsoft announced the expansion of the Copilot Copyright Commitment, now called the Customer Copyright Commitment, to include commercial customers using the Azure OpenAI Service.

    Microsoft’s AI-powered Copilots are changing the way we work, making customers more efficient while unlocking new levels of creativity. While these transformative tools open doors to new possibilities, they are also raising new questions. Some customers are concerned about the risk of IP infringement claims if they use the output produced by generative AI. This is understandable, given recent public inquiries by authors and artists regarding how their own work is being used in conjunction with AI models and services.

    To address this customer concern, Microsoft is announcing our new Copilot Copyright Commitment. As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.

    This new commitment extends our existing intellectual property indemnity support to commercial Copilot services and builds on our previous AI Customer Commitments. Specifically, if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters we have built into our products.

    You’ll find more details below. Let me start with why we are offering this program:

    We believe in standing behind our customers when they use our products. We are charging our commercial customers for our Copilots, and if their use creates legal issues, we should make this our problem rather than our customers’ problem. This philosophy is not new: For roughly two decades we’ve defended our customers against patent claims relating to our products, and we’ve steadily expanded this coverage over time. Expanding our defense obligations to cover copyright claims directed at our Copilots is another step along these lines.
    We are sensitive to the concerns of authors, and we believe that Microsoft rather than our customers should assume the responsibility to address them. Even where existing copyright law is clear, generative AI is raising new public policy issues and shining a light on multiple public goals. We believe the world needs AI to advance the spread of knowledge and help solve major societal challenges. Yet it is critical for authors to retain control of their rights under copyright law and earn a healthy return on their creations. And we should ensure that the content needed to train and ground AI models is not locked up in the hands of one or a few companies in ways that would stifle competition and innovation. We are committed to the hard and sustained efforts that will be needed to take creative and constructive steps to advance all these goals.
    We have built important guardrails into our Copilots to help respect authors’ copyrights. We have incorporated filters and other technologies that are designed to reduce the likelihood that Copilots return infringing content. These build on and complement our work to protect digital safety, security, and privacy, based on a broad range of guardrails such as classifiers, metaprompts, content filtering, and operational monitoring and abuse detection, including detection of output that potentially infringes third-party content. Our new Copilot Copyright Commitment requires that customers use these technologies, creating incentives for everyone to better respect copyright concerns.

    More details on our Copilot Copyright Commitment

    The Copilot Copyright Commitment extends Microsoft’s existing IP indemnification coverage to copyright claims relating to the use of our AI-powered Copilots, including the output they generate, specifically for paid versions of Microsoft commercial Copilot services and Bing Chat Enterprise. This includes Microsoft 365 Copilot that brings generative AI to Word, Excel, PowerPoint, and more – enabling a user to reason across their data or turn a document into a presentation. It also includes GitHub Copilot, which enables developers to spend less time on rote coding, and more time on creating wholly new and transformative outputs.

    Reply
  2. Tomi Engdahl says:

    GitHub Copilot copyright case narrowed but not neutered
    Microsoft and OpenAI fail to shake off AI infringement allegations
    https://www.theregister.com/2024/01/12/github_copilot_copyright_case_narrowed/

    Reply
  3. Tomi Engdahl says:

    Blog: Will you get in legal trouble for using GitHub Copilot for work?
    https://www.vincit.com/blog/will-you-get-in-legal-trouble-for-using-github-copilot-for-work

    GitHub Copilot is a tool for generating source code that has garnered a lot of interest. The tool has been trained on selected English-language source material and publicly available source code, including code in public repositories on GitHub. It uses this source data as a basis for suggestions, generating code from a textual description, a function name, or similar context in the source code. There has been much discussion about whether there could be legal implications in using the tool commercially. In this blog, we will look more closely at what it means to include the snippets Copilot generates in source code produced by programmers, in the legal context of the European Union.

    Immaterial rights related to source code

    Computer software in general can be protected legally through three distinct mechanisms: copyright, patents, and trade secrets. In our case, trade secrets do not apply, as we’re talking about public code. Software patents can apply if something you’re doing infringes on a patent – but as software patents focus more on “solutions” than on specific source code, that risk is not directly related to the use of Copilot, and Copilot should not add an extra dimension to watch out for. Our focus here is on copyright.

    When the immaterial rights of the source code GitHub Copilot uses were discussed, then-CEO of GitHub Nat Friedman responded on Twitter that training the model is fair use and that its output belongs to the operator of the tool.

    So the argument is twofold: training of the model is fair use, and the output belongs to the operator of the tool. Let’s take a look at these arguments.

    Microsoft Announces Copilot Copyright Commitment to Address IP Infringement Concerns
    https://www.infoq.com/news/2023/09/copilot-copyright-commitment/

    Microsoft recently published the Copilot Copyright Commitment to address concerns about potential IP infringement claims from content produced by generative AI. Under this commitment, which covers various products, including GitHub Copilot, Microsoft will take responsibility for potential legal risks if a customer faces copyright challenges.

    The commitment covers third-party IP claims based on copyright, patent, trademark, and trade secrets. It covers the customer’s use and distribution of the output content generated by Microsoft Copilot services and requires the customer to use the content filters and other safety systems built into the product.

    The Copilot Copyright Commitment extends the existing Microsoft IP indemnification coverage to the use of paid versions of Bing Chat Enterprise and commercial Copilot services, including Microsoft 365 Copilot and GitHub Copilot. According to the pledge, Microsoft will pay any legal damages if a third party sues a commercial customer for infringing their copyright by using those services.

    Reply
  4. Tomi Engdahl says:

    Melissa Heikkilä / MIT Technology Review:
    A look at AI video startup Synthesia, whose avatars are more human-like and expressive than predecessors, raising concerns over the consequences of realistic AI

    An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary
    https://www.technologyreview.com/2024/04/25/1091772/new-generative-ai-avatar-deepfake-synthesia/

    Synthesia’s new technology is impressive but raises big questions about a world where we increasingly can’t tell what’s real.

    Reply
  5. Tomi Engdahl says:

    Opinion piece: Software developers will not lose their jobs to AI
    AI will not replace demanding knowledge work in the IT industry, the writer argues.
    https://www.talouselama.fi/uutiset/ohjelmistokehittaja-ei-meneta-tyotaan-tekoalylle/a646ca9d-9e93-4f13-a0b0-776e8e630d1a

    For decades, the IT industry has used AI techniques to renew the business of other industries. Now AI is in turn renewing the IT industry itself, as a machine produces code on request, instantly. Software engineering, however, is much more than just typing out code.

    Reply
  6. Tomi Engdahl says:

    xAI, Elon Musk’s 10-month-old competitor to the AI phenom OpenAI, is raising $6 billion on a pre-money valuation of $18 billion, according to one trusted source close to the deal.

    The deal – which would give investors one quarter of the company – is expected to close in the next few weeks unless the terms of the deal change.

    Read more from Connie Loizos on xAI here: https://tcrn.ch/3QjS64s

    #TechCrunch #technews #xAI #ElonMusk #OpenAI

    Reply
  7. Tomi Engdahl says:

    That’s the thing people fail to realize: LLMs don’t actually understand what they are doing; they return strings that were scored against a dataset. You could just as easily make an LLM that only gives wrong answers or, in a funny enough case study, lewd answers – which the developers of ChatGPT discovered when they accidentally flipped a minus sign.
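    The commenter’s point can be seen in a toy sketch (entirely hypothetical, not any real model): a “model” reduced to returning the highest-scored candidate string, where flipping the sign of the score makes the same machinery return the worst-rated answer instead.

```python
# Toy illustration (hypothetical): a "model" that just returns the candidate
# string with the best score from a dataset of scored answers.
candidates = {"Paris": 0.9, "Lyon": 0.4, "a potato": 0.01}

def pick(scored, sign=+1):
    # With sign=+1 we return the best-scored string; with sign=-1 the same
    # code returns the worst one -- the effect of an accidental sign flip.
    return max(scored, key=lambda k: sign * scored[k])

print(pick(candidates))      # best-scored answer: "Paris"
print(pick(candidates, -1))  # sign flipped: "a potato"
```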

    Reply
  8. Tomi Engdahl says:

    How to Use ChatGPT for 3D Printing
    By Samuel L. Garbett, published Oct 11, 2023
    ChatGPT can help you create and fix G-code and STL files for 3D printing, and even generate simple 3D models. Let’s explore what it can do.
    https://www.makeuseof.com/chatgpt-how-to-use-for-3d-printing/

    https://3dfy.ai/
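    As a sketch of the “simple 3D models” the article mentions, this is the kind of script one might ask ChatGPT to produce (the script is a hypothetical illustration, not taken from the article): Python that writes a minimal one-facet ASCII STL file.

```python
# Hypothetical example: write a one-triangle ASCII STL file, the simplest
# valid 3D model a chatbot could plausibly be asked to generate.

def facet(normal, vertices):
    """Format one STL facet from a normal vector and three (x, y, z) vertices."""
    lines = [f"  facet normal {normal[0]} {normal[1]} {normal[2]}",
             "    outer loop"]
    for v in vertices:
        lines.append(f"      vertex {v[0]} {v[1]} {v[2]}")
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

def write_triangle_stl(path):
    # One right triangle in the XY plane, normal pointing up (+Z).
    body = facet((0.0, 0.0, 1.0),
                 [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)])
    with open(path, "w") as f:
        f.write(f"solid triangle\n{body}\nendsolid triangle\n")

write_triangle_stl("triangle.stl")
```

    A slicer will accept this file as-is; checking the generated geometry in a viewer before printing is exactly the kind of verification the article recommends.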

    Reply
  9. Tomi Engdahl says:

    AI models were set to writing attack code – one of them swept the pot
    Heidi Kähkönen, 24 Apr 2024
    The key was the vulnerabilities’ CVE descriptions: using them, one of the language models was able to write usable attack code for nearly all of the vulnerabilities in the study.
    https://www.tivi.fi/uutiset/tekoaly-laitettiin-kirjoittamaan-hyokkayskoodia-yksi-korjasi-koko-potin/6ee32866-b644-443b-bae8-7e15026a7cdb

    Reply
  10. Tomi Engdahl says:

    Tech CEOs Altman, Nadella, Pichai and Others Join Government AI Safety Board Led by DHS’ Mayorkas

    CEOs of major tech companies are joining a new artificial intelligence safety board to advise the federal government on how to protect the nation’s critical services from “AI-related disruptions.”

    https://www.securityweek.com/tech-ceos-altman-nadella-pichai-and-others-join-government-ai-safety-board-led-by-dhs-mayorkas/

    Artificial Intelligence
    CISA Rolls Out New Guidelines to Mitigate AI Risks to US Critical Infrastructure

    New CISA guidelines categorize AI risks into three significant types and push a four-part mitigation strategy.

    https://www.securityweek.com/cisa-rolls-out-new-guidelines-to-mitigate-ai-risks-to-us-critical-infrastructure/

    Reply
  11. Tomi Engdahl says:

    AI is taking jobs… from ice hockey analysts
    https://etn.fi/index.php/opinion/16154-tekoaely-vie-tyoet-jaeaekiekkoanalyytikoilta

    Tampere is now enjoying a well-earned long May Day holiday as Tappara celebrates the Finnish championship. The title should have come as no surprise, since Digia’s AI predicted it in advance. It had the top two right as early as December. What do we still need hockey analysts for?

    Throughout the season, Digia’s AI has published its predictions, which have mostly run contrary to the views of hockey analyst Petteri Sihvonen. Besides Tappara’s gold, the AI also correctly predicted Pelicans’ silver and Kärpät’s bronze. Sihvonen backed Ilves for the championship for nearly the whole season; when Ilves was eliminated, he switched his pick to Pelicans.

    The AI based its predictions on data from the Liiga statistics and results service.

    Reply
  12. Tomi Engdahl says:

    Artificial Intelligence
    Deepfake of Principal’s Voice Is the Latest Case of AI Being Used for Harm
    https://www.securityweek.com/deepfake-of-principals-voice-is-the-latest-case-of-ai-being-used-for-harm/

    Everyone — not just politicians and celebrities — should be concerned about this increasingly powerful deep-fake technology, experts say.

    The most recent criminal case involving artificial intelligence emerged last week from a Maryland high school, where police say a principal was framed as racist by a fake recording of his voice.

    The case is yet another reason why everyone — not just politicians and celebrities — should be concerned about this increasingly powerful deep-fake technology, experts say.

    “Everybody is vulnerable to attack, and anyone can do the attacking,” said Hany Farid, a professor at the University of California, Berkeley, who focuses on digital forensics and misinformation.

    Reply
  13. Tomi Engdahl says:

    Artificial Intelligence
    Why Using Microsoft Copilot Could Amplify Existing Data Quality and Privacy Issues

    Microsoft provides an easy and logical first step into GenAI for many organizations, but beware of the pitfalls.

    https://www.securityweek.com/why-using-microsoft-copilot-could-amplify-existing-data-quality-and-privacy-issues/

    Reply
  14. Tomi Engdahl says:

    Kalley Huang / The Information:
    Thanks to Meta’s open-source approach, some developers are releasing versions of Llama 3 – which ships with an 8K-token context window – modified to support longer context windows

    https://www.theinformation.com/articles/how-developers-gave-llama-3-more-memory

    Reply
  15. Tomi Engdahl says:

    Alex Kantrowitz / Big Technology:
    Elon Musk says he wants Grok to create news summaries by relying solely on X posts, without looking at article text, and improved story citations are coming

    Elon Musk’s Plan For AI News
    Musk emails with details on AI-powered news inside X. An AI bot will summarize news and commentary, sometimes looking through tens of thousands of posts per story.
    https://www.bigtechnology.com/p/elon-musks-plan-for-ai-news

    Reply
  16. Tomi Engdahl says:

    Reuters:
    Sources: Fei-Fei Li raised a seed round for a “spatial intelligence” startup using human-like visual data processing to create AI capable of advanced reasoning

    https://www.reuters.com/technology/stanford-ai-leader-fei-fei-li-building-spatial-intelligence-startup-2024-05-03/

    Reply
  17. Tomi Engdahl says:

    Pricey AI “device” turns out to just be an Android app with extra steps
    https://futurism.com/the-byte/pricey-ai-device-android-app-extra-steps

    “It looks like this AI gadget could have just been an app after all.”

    Secretive wearables startup Humane disappointed reviewers with its AI Pin, which quickly became one of the worst-reviewed tech products of all time.

    Competitor Rabbit’s R1, a similar — albeit cheaper — device that promises to be an AI chatbot-powered friend that can answer pretty much any question you can come up with, didn’t fare much better, with TechRadar calling it a “beautiful mess” that “nobody needs.”

    “I can’t believe this bunny took my money,” Mashable’s Kimberly Gedeon wrote in her review today. Famed YouTuber Marques “MKBHD” Brownlee slammed it as being “barely reviewable.”

    Reply
  18. Tomi Engdahl says:

    CEO requires employees to use ChatGPT at least 20 times a day
    Joona Komonen, 29 Apr 2024
    Moderna’s CEO has been a ChatGPT fan since late 2022.
    https://www.tivi.fi/uutiset/toimitusjohtaja-vaatii-tyontekijoita-kayttamaan-chatgptta-vahintaan-20-kertaa-paivassa/7399bb3c-91a5-40df-8de5-cd7ea137c7b1

    Reply
  19. Tomi Engdahl says:

    Claude 3 Opus has stunned AI researchers with its intellect and ‘self-awareness’ — does this mean it can think for itself?
    News
    By Roland Moore-Coyler published April 24, 2024
    Anthropic’s AI tool has beaten GPT-4 in key metrics and has a few surprises up its sleeve — including pontificating about its existence and realizing when it was being tested.
    https://www.livescience.com/technology/artificial-intelligence/anthropic-claude-3-opus-stunned-ai-researchers-self-awareness-does-this-mean-it-can-think-for-itself

    Reply
  20. Tomi Engdahl says:

    Sam Altman says helpful agents are poised to become AI’s killer function
    OpenAI’s CEO says we won’t need new hardware or lots more training data to get there.
    https://www.technologyreview.com/2024/05/01/1091979/sam-altman-says-helpful-agents-are-poised-to-become-ais-killer-function/

    A number of moments from my brief sit-down with Sam Altman brought the OpenAI CEO’s worldview into clearer focus. The first was when he pointed to my iPhone SE (the one with the home button that’s mostly hated) and said, “That’s the best iPhone.” More revealing, though, was the vision he sketched for how AI tools will become even more enmeshed in our daily lives than the smartphone.

    “What you really want,” he told MIT Technology Review, “is just this thing that is off helping you.” Altman, who was visiting Cambridge for a series of events hosted by Harvard and the venture capital firm Xfund, described the killer app for AI as a “super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.” It could tackle some tasks instantly, he said, and for more complex ones it could go off and make an attempt, but come back with questions for you if it needs to.

    Reply
  21. Tomi Engdahl says:

    AI-as-a-Service for Signal Processing
    March 20, 2024 by Renesas Electronics
    The Reality AI software suite provides AI tools optimized for solving problems related to sensors and signals, enabling notifications to applications and devices so they can take action. This article covers technical aspects of the approach to machine learning and architecture of the solution.
    https://www.allaboutcircuits.com/partner-content-hub/renesas-electronics/ai-as-a-service-for-signal-processing/

    Reply
  22. Tomi Engdahl says:

    Nick Bostrom Made the World Fear AI. Now He Asks: What if It Fixes Everything?
    Philosopher Nick Bostrom popularized the idea superintelligent AI could erase humanity. His new book imagines a world in which algorithms have solved every problem.
    https://www.wired.com/story/nick-bostrom-fear-ai-fix-everything/?fbclid=IwZXh0bgNhZW0CMTEAAR1kvwkLv2nemxhRfCdXQWgSoE7ceX-ZrwPjlocTe1ReUOAzzb3UvGICOm0_aem_Ab4r7j0SFCkK5uo_IepYEDt-SKTrQ9JQoZTynp01fW8kd24NEttfJTOrt16D-9v8kDsgH8h_SCZ9crCoduWFTRnA

    Reply
  23. Tomi Engdahl says:

    This tool works so well that no one dares put it in people’s hands
    https://www.iltalehti.fi/digiuutiset/a/31f1ebe9-c7f2-455e-b96e-79650ed69f39

    Significant progress has been made in AI-generated synthetic speech, says OpenAI. The company warns of the technology’s dangers and stresses that, at least for now, it is keeping it out of the hands of ordinary citizens.

    OpenAI, best known for ChatGPT, is showcasing its Voice Engine technology on its website.

    The technology can produce startlingly authentic-sounding speech from a short audio sample. According to OpenAI, it needs only 15 seconds of material for a convincing imitation.

    Its possibilities include translation: content creators, for example, could produce content in languages they do not speak while still using their own voice.

    The technology could also give a natural-sounding voice to people who find producing speech difficult or who have no voice of their own at all.

    Only carefully selected parties get access

    The flip side of such advanced technology is that it is easy to misuse.

    “We recognize that generating speech that resembles people’s voices has serious risks,” the post states, pointing among other things to the US presidential election, which offers a tempting opportunity to use the technology in harmful ways as well.

    OpenAI says it has decided not to make the technology more broadly available, at least for now. It stresses, however, that it is important to communicate openly about advances in AI.

    The startlingly authentic voice samples OpenAI has published are available on the company’s blog.

    Navigating the Challenges and Opportunities of Synthetic Voices
    https://openai.com/index/navigating-the-challenges-and-opportunities-of-synthetic-voices

    We’re sharing lessons from a small scale preview of Voice Engine, a model for creating custom voices.

    Reply
  24. Tomi Engdahl says:

    Ashley Carman / Bloomberg:
    Over 40K Audible books are marked as having been made with an AI “virtual voice”, saving authors hundreds or thousands of dollars per title on narration costs

    AI-Voiced Audiobooks Top 40,000 Titles on Audible
    https://www.bloomberg.com/news/newsletters/2024-05-02/audible-s-test-of-ai-voiced-audiobooks-tops-40-000-titles

    While authors appreciate the new revenue stream, audiobook listeners are complaining about the influx of new material

    Welcome to Amazon’s AI audiobooks era

    Last year, Amazon.com Inc. announced that self-published authors in the US who make their books available on the Kindle Store would soon be able to access a new tool in beta testing. Many of them, likely prohibited by cost and time, hadn’t yet turned their ebooks into audiobooks, but with the help of an Artificial Intelligence-generated “virtual voice” they could easily do so.

    In the months since the free tool launched in beta, authors have embraced it. Over 40,000 books on Audible are marked as having been created with it, and, in posts online, authors praise the fact that they have saved hundreds or thousands of dollars per title on narration costs. One author, Hassan Osman of the Writer on the Side blog, said turning one of his books into an audiobook took only 52 minutes.

    Still, some consumers worry this development portends a tough future for narrators who will lose work while listeners suffer from lagging quality.

    “So depressing to discover Virtual Voice narrations of audiobooks on Audible,” posted one X user. “Yes, they are good enough. Alas.”

    And while authors’ counterparts in the music business have actively fought against the introduction of AI into their industry and sought safeguards against it, audiobook narrators appear to be facing the threat of the technology without much recourse.

    One narrator, Ramon de Ocampo, responded to someone on X about the test, saying virtual voices had “not taken all the jobs. But it’s trying to.”

    In conversations I’ve had with publishing industry executives, most say they’re treading lightly into the AI realm. They tend to agree that using the technology for translated books makes sense, but they seem less convinced, at least outwardly, that completely AI-narrated audiobooks will be the future.

    Still, Audible and Amazon’s rollout gets at the developing tension between publishers, authors and consumers. For authors who can’t afford to produce their own audiobooks, it’s easy to understand why virtual voices appeal to them. But for listeners who want options, or at least the ability to support only human-created works, the filtering and labeling systems have to improve.

    Reply
  25. Tomi Engdahl says:

    Wall Street Journal:
    How role-playing with an AI chatbot can help prepare for difficult conversations with family, friends, and colleagues, such as for terminating an employee

    For Conversations You Dread, Try a Chatbot
    Role-playing with an AI conversationalist can prepare you to handle difficult subjects with family, friends and colleagues
    https://www.wsj.com/tech/ai/for-conversations-you-dread-try-a-chatbot-8eecb643?st=ghwceur5k8a7upe&reflink=desktopwebshare_permalink

    Reply
  26. Tomi Engdahl says:

    Tara Copp / Associated Press:
    US Air Force plans a fleet of 1,000+ AI-controlled jets, the first of them operating by 2028; some AI versions already beat human pilots in air-to-air combat

    An AI-controlled fighter jet took the Air Force leader for a historic ride. What that means for war
    https://apnews.com/article/artificial-intelligence-fighter-jets-air-force-6a1100c96a73ca9b7f41cbd6a2753fda

    EDWARDS AIR FORCE BASE, Calif. (AP) — With the midday sun blazing, an experimental orange and white F-16 fighter jet launched with a familiar roar that is a hallmark of U.S. airpower. But the aerial combat that followed was unlike any other: This F-16 was controlled by artificial intelligence, not a human pilot. And riding in the front seat was Air Force Secretary Frank Kendall.

    AI marks one of the biggest advances in military aviation since the introduction of stealth in the early 1990s, and the Air Force has aggressively leaned in. Even though the technology is not fully developed, the service is planning for an AI-enabled fleet of more than 1,000 unmanned warplanes, the first of them operating by 2028.

    Reply
  27. Tomi Engdahl says:

    Jessica Lucas / The Verge:
    Some teen users of Character.AI’s chatbots say they find the AI companions helpful, entertaining, and supportive but worry they may be addicted to the chatbots

    The teens making friends with AI chatbots
    Teens are opening up to AI chatbots as a way to explore friendship. But sometimes, the AI’s advice can go too far.
    https://www.theverge.com/2024/5/4/24144763/ai-chatbot-friends-character-teens

    Reply
  28. Tomi Engdahl says:

    Charlie Warzel / The Atlantic:
    A profile of ElevenLabs, whose founders seem unprepared for how its impressive AI voice cloning tech can change the internet and unleash political chaos — My voice was ready. I’d been waiting, compulsively checking my inbox. I opened the email and scrolled until I saw a button that said, plainly, “Use voice.”

    ElevenLabs Is Building an Army of Voice Clones
    https://www.theatlantic.com/technology/archive/2024/05/elevenlabs-ai-voice-cloning-deepfakes/678288/?gift=2iIN4YrefPjuvZ5d2Kh307h_8v3HIJtVjeJOtXYQHMM&utm_source=copy-link&utm_medium=social&utm_campaign=share

    A tiny start-up has made some of the most convincing AI voices. Are its creators ready for the chaos they’re unleashing?

    Reply
  29. Tomi Engdahl says:

    Rina Diane Caballar / IEEE Spectrum:
    As CS students experiment with AI coding tools, professors say courses need to focus less on syntax and more on problem solving, design, testing, and debugging

    AI Copilots Are Changing How Coding Is Taught
    Professors are shifting away from syntax and emphasizing higher-level skills
    https://spectrum.ieee.org/ai-coding

    Generative AI is transforming the software development industry. AI-powered coding tools are assisting programmers in their workflows, while jobs in AI continue to increase. But the shift is also evident in academia—one of the major avenues through which the next generation of software engineers learn how to code.

    Computer science students are embracing the technology, using generative AI to help them understand complex concepts, summarize complicated research papers, brainstorm ways to solve a problem, come up with new research directions, and, of course, learn how to code.

    “Students are early adopters and have been actively testing these tools,” says Johnny Chang, a teaching assistant at Stanford University pursuing a master’s degree in computer science. He also founded the AI x Education conference in 2023, a virtual gathering of students and educators to discuss the impact of AI on education.

    So as not to be left behind, educators are also experimenting with generative AI. But they’re grappling with techniques to adopt the technology while still ensuring students learn the foundations of computer science.

    “It’s a difficult balancing act,” says Ooi Wei Tsang, an associate professor in the School of Computing at the National University of Singapore. “Given that large language models are evolving rapidly, we are still learning how to do this.”

    Less Emphasis on Syntax, More on Problem Solving

    The fundamentals and skills themselves are evolving. Most introductory computer science courses focus on code syntax and getting programs to run, and while knowing how to read and write code is still essential, testing and debugging—which aren’t commonly part of the syllabus—now need to be taught more explicitly.

    “We’re seeing a little upping of that skill, where students are getting code snippets from generative AI that they need to test for correctness,” says Jeanna Matthews, a professor of computer science at Clarkson University in Potsdam, N.Y.
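    The testing habit Matthews describes can be made concrete. A hypothetical sketch (the snippet and its bug are illustrative, not from the article): treat an assistant-generated function as untrusted and probe it with edge cases before relying on it.

```python
# Illustrative sketch: testing an AI-generated snippet for correctness.

def ai_median(values):
    """A plausible assistant-generated median: looks fine at a glance, but
    is wrong for even-length inputs (it takes the upper-middle element)."""
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

def run_checks(fn):
    """Return (input, got, expected) for every edge case the function fails."""
    cases = [
        ([3, 1, 2], 2),        # odd length
        ([1, 2, 3, 4], 2.5),   # even length: the classic trap
        ([7], 7),              # single element
    ]
    return [(inp, fn(inp), want) for inp, want in cases if fn(inp) != want]

def fixed_median(values):
    """Corrected version: average the two middle elements when n is even."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

print(run_checks(ai_median))     # the even-length case is caught
print(run_checks(fixed_median))  # no failures
```

    The point is the workflow, not the example: the generated code passes the obvious case, and only a deliberate edge-case check exposes the bug.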

    Another vital expertise is problem decomposition. “This is a skill to know early on because you need to break a large problem into smaller pieces that an LLM can solve,” says Leo Porter, an associate teaching professor of computer science at the University of California, San Diego. “It’s hard to find where in the curriculum that’s taught—maybe in an algorithms or software engineering class, but those are advanced classes. Now, it becomes a priority in introductory classes.”

    “Given that large language models are evolving rapidly, we are still learning how to do this.”
    —Ooi Wei Tsang, National University of Singapore

    As a result, educators are modifying their teaching strategies. “I used to have this singular focus on students writing code that they submit, and then I run test cases on the code to determine what their grade is,” says Daniel Zingaro, an associate professor of computer science at the University of Toronto Mississauga. “This is such a narrow view of what it means to be a software engineer, and I just felt that with generative AI, I’ve managed to overcome that restrictive view.”

    Zingaro, who coauthored a book on AI-assisted Python programming with Porter, now has his students work in groups and submit a video explaining how their code works. Through these walk-throughs, he gets a sense of how students use AI to generate code, what they struggle with, and how they approach design, testing, and teamwork.

    “It’s an opportunity for me to assess their learning process of the whole software development [life cycle]—not just code,” Zingaro says. “And I feel like my courses have opened up more and they’re much broader than they used to be. I can make students work on larger and more advanced projects.”

    Avoiding AI’s Coding Pitfalls

    But educators are cautious given an LLM’s tendency to hallucinate. “We need to be teaching students to be skeptical of the results and take ownership of verifying and validating them,” says Matthews.

    Matthews adds that generative AI “can short-circuit the learning process of students relying on it too much.” Chang agrees that this overreliance can be a pitfall and advises his fellow students to explore possible solutions to problems by themselves so they don’t lose out on that critical thinking or effective learning process. “We should be making AI a copilot—not the autopilot—for learning,” he says.

    Other drawbacks include copyright and bias. “I teach my students about the ethical constraints—that this is a model built off other people’s code and we’d recognize the ownership of that,” Porter says. “We also have to recognize that models are going to represent the bias that’s already in society.”

    Adapting to the rise of generative AI involves students and educators working together and learning from each other. For her colleagues, Matthews’s advice is to “try to foster an environment where you encourage students to tell you when and how they’re using these tools. Ultimately, we are preparing our students for the real world, and the real world is shifting, so sticking with what you’ve always done may not be the recipe that best serves students in this transition.”

  30. Tomi Engdahl says:

    Isabelle Bousquette / Wall Street Journal:
    A look at AI companies gathering data from real people to create “digital twins”, to use as fashion models, focus group members, or clinical trial participants

    The AI-Generated Population Is Here, and They’re Ready to Work
    AI that can predict how specific humans will look, act and feel could do the jobs of fashion models, focus group members and clinical trial participants
    https://www.wsj.com/articles/the-ai-generated-population-is-here-and-theyre-ready-to-work-16f8c764?st=5sm4dgmd77py8sv&reflink=desktopwebshare_permalink

  31. Tomi Engdahl says:

    Agence France-Presse:
    Ukraine unveils an AI-generated foreign ministry spokesperson called Victoria Shi, who will make official statements “written and verified by real people”

    Ukraine unveils AI-generated foreign ministry spokesperson
    https://www.theguardian.com/technology/article/2024/may/03/ukraine-ai-foreign-ministry-spokesperson

    Victoria Shi is modelled on Rosalie Nombre, a singer and former contestant on Ukraine’s version of the reality show The Bachelor

    Ukraine on Wednesday presented an AI-generated spokesperson called Victoria who will make official statements on behalf of its foreign ministry.

    The ministry said it would “for the first time in history” use a digital spokesperson to read its statements, which will still be written by humans.

    Dressed in a dark suit, the spokesperson introduced herself as Victoria Shi, a “digital person”, in a presentation posted on social media. The figure gesticulates with her hands and moves her head as she speaks.

    The foreign ministry’s press service said that the statements given by Shi would not be generated by AI but “written and verified by real people”.

    “It’s only the visual part that the AI helps us to generate,” Dmytro Kuleba, the Ukrainian foreign minister, said, adding that the new spokesperson was a “technological leap that no diplomatic service in the world has yet made”

  32. Tomi Engdahl says:

    Joanna Stern / Wall Street Journal:
    Meta’s Ray-Ban smart glasses are a simple, reliable, and smartly priced AI gadget, whereas Humane’s AI Pin and the Rabbit R1 are more akin to science projects

    The AI Gadget That Can Make Your Life Better—and Two That Definitely Won’t
    The Humane AI Pin, Rabbit R1 and Ray-Ban Meta smart glasses take AI out of your smartphone and put it in a dedicated gadget. The future? Or just frustrating?
    https://www.wsj.com/tech/personal-tech/the-ai-gadget-that-can-make-your-life-betterand-two-that-definitely-wont-c51f49f0?st=s3j1ond4j1ehbcd&reflink=desktopwebshare_permalink

  33. Tomi Engdahl says:

    Wendy Lee / Los Angeles Times:
    How the AI-generated music video for Washed Out’s The Hardest Part was created entirely using OpenAI’s Sora, a first from a major record label

    Washed Out’s new music video was created with AI. Is it a watershed moment for Sora?
    https://www.latimes.com/entertainment-arts/business/story/2024-05-02/first-major-music-artist-uses-openai-sora-to-create-music-video

    “The Hardest Part,” a new song from indie pop artist Washed Out, is all about love lost, among the most human of themes.

    But ironically, to illustrate the tune’s sense of longing, the musician turned to something far less flesh-and-blood: artificial intelligence.

    With Thursday’s release of “The Hardest Part,” Macon, Ga.-based Washed Out, whose real name is Ernest Greene, has delivered the first collaboration between a major music artist and a filmmaker on a music video made with OpenAI’s Sora text-to-video technology, according to the singer-songwriter’s record label, Sub Pop.

    The roughly four-minute video, directed by Paul Trillo, speedily zooms the viewer through key elements of a couple’s life. The audience sees the characters — a red-haired woman and a dark-haired man — go from making out and smoking in a 1980s high school to getting married and having a child. “Don’t you cry, it’s all right now,” Greene croons. “The hardest part is that you can’t go back.”

    The couple aren’t played by real actors. They’re created entirely digitally through Sora’s AI.

    The video could mark the beginning of a groundbreaking trend of using AI in video production.

    “The Hardest Part” — the lead single from Greene’s new self-produced album, “Notes From a Quiet Life,” set for release on June 28 — is the longest music video made through Sora technology so far. The program creates short clips based on written text prompts. This enabled Trillo to build scenes in a way that would’ve been many times more expensive with actual actors, sets and locations.

    “Not having the limitations of budget and having to travel to different locations, I was able to explore all these different, alternate outcomes of this couple’s life,” Trillo said.

  34. Tomi Engdahl says:

    Will Knight / Wired:
    Q&A with Nick Bostrom on his book Deep Utopia: Life and Meaning in a Solved World, which considers a future in which AI solves all of humanity’s problems
    https://www.wired.com/story/nick-bostrom-fear-ai-fix-everything/

  35. Tomi Engdahl says:

    Rachel Metz / Bloomberg:
    A look at Runway’s second International AI Film Festival, which it says grew from 300 short-video submissions in 2023 to 3,000 in 2024 with 10 finalists chosen

    Startups Go to Hollywood: AI Movies Aren’t Just a Gimmick Anymore
    https://www.bloomberg.com/news/newsletters/2024-05-03/runway-sora-and-hollywood-ai-films-actually-get-good

    Artificial intelligence has changed filmmaking a lot over the past year, even though the technology is just getting started.

    AI comes to filmmaking

    This week Runway AI Inc., which makes AI video-generation and editing tools, held its second annual AI Film Festival in Los Angeles — its first stop before heading to New York next week. To give a sense of how much the event has grown, Runway co-founder Cristóbal Valenzuela said people submitted 300 videos for festival consideration last year. This year they sent in 3,000.

    A crowd of hundreds of filmmakers, techies, artists, venture capitalists and at least one well-known actor (Poker Face star Natasha Lyonne) gathered at the Orpheum Theatre in downtown LA Wednesday night to view the 10 finalists chosen by the festival’s judges.

    The films were made with a range of AI tools and were about as wacky as you might expect. In one, a cartoon kiwi bird went on an adventure across the ocean. In another, the modern struggle with anxiety was personified by a man trapped in a house fighting with a meat monster.

    The curious, excited vibe of the event was similar to last year, when I attended Runway’s first AI Film Festival last March. The videos, however, were markedly different this time around. They looked a lot less like experimental films and a lot more like, well, films.

    At the time of last year’s festival, Runway was about to publicly release software that would let anyone generate a short video from a text prompt, marking the most high-profile instance of such technology outside of a research lab.

    Generative AI’s Next Frontier Is Video
    https://www.bloomberg.com/news/articles/2023-03-20/generative-ai-s-next-frontier-is-video

    A simple prompt can generate a three-second video on a new AI tool from the startup Runway, hinting at a future of AI-created films and videos

  36. Tomi Engdahl says:

    Oliver Whang / New York Times:
    Some researchers are training AI models on headcam footage from infants and toddlers, to better understand language acquisition by both AI and children

    From Baby Talk to Baby A.I.
    https://www.nytimes.com/2024/04/30/science/ai-infants-language-learning.html?unlocked_article_code=1.pU0.Nhs2.iDvdhaloeCiY&smid=url-share

    Could a better understanding of how infants acquire language help us build smarter A.I. models?

  37. Tomi Engdahl says:

    Generative AI is coming to IoT devices
    https://etn.fi/index.php/13-news/16169-generatiivinen-tekoaely-tulee-iot-laitteisiin

    UK-founded Arm has its own family of neural processing units, called Ethos, and has now added a new member. The Ethos-U85 is designed to support transformer operations in low-power devices. In practice, Arm is bringing generative AI models to IoT devices.

    It is worth remembering, of course, that IoT devices still will not be able to run large language models, that is, AI workloads based on LLMs. At this stage, Arm says it has ported, for example, the machine-vision model ViT-Tiny and the generative language model TinyLlama-1.1B to the Ethos-U85.

    The Ethos-U85 was already a frequent topic a month ago at the Embedded World trade fair in Nuremberg. Many Arm customers praised the new NPU and said they were already designing it into their own chips, although publicly the matter could not yet be discussed.

    The Ethos-U85 features a third-generation microarchitecture. Compared with the second-generation U65, the U85 in its largest configuration is 4 times more performant and 20 percent more energy-efficient.

  38. Tomi Engdahl says:

    GenAI demands new skills – is it time to hire a Chief AI Officer?
    https://etn.fi/index.php/opinion/16175-genai-vaatii-uutta-osaamista-joko-on-aika-palkata-chief-ai-officer

    With generative AI, the data that companies use has moved into the spotlight. Also basking there are the experts of the future, who will be needed for the new kinds of roles that AI is transforming, writes Piia Hoffsten-Myllylä, Country Manager of Kyndryl Finland.

    We live in a data economy, and the importance of data keeps growing in almost every organization. The latest boost to data’s march of triumph comes from generative AI.

    The first step toward an organization’s AI readiness is organizing its data and drawing up a comprehensive data plan. Data quality directly affects the quality of generative AI, so a company must ensure the correctness of its data and avoid biases. A strong data foundation and architecture improve the reliability and quality of information, which can bring significant business benefits.

    To use generative AI solutions responsibly, attention must be paid to the models behind them. Solutions must be designed so that the decision-making processes behind them, as well as the nature and sources of the data used to train the generative AI, are transparent.

  39. Tomi Engdahl says:

    Rachel Metz / Bloomberg:
    Source: OpenAI is developing a feature for ChatGPT that can search the web and show results with citations to sources and images

    OpenAI Is Readying a Search Product to Rival Google, Perplexity
    https://www.bloomberg.com/news/articles/2024-05-07/openai-is-readying-an-ai-search-product-to-rival-google-perplexity?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcxNTEyODMzNywiZXhwIjoxNzE1NzMzMTM3LCJhcnRpY2xlSWQiOiJTRDRaTlFUMVVNMFcwMCIsImJjb25uZWN0SWQiOiJCQUY2MTkxMzA2NTk0RjNEOTI1MTc2MjdBQkQ3NzM1NSJ9.MWqse1xdzhsj7mDmrd8Ihpzm8p8Pc6lvxHSU5khNmNs

    The feature would let ChatGPT users search the web and cite sources in its results.

    OpenAI is developing a feature for ChatGPT that can search the web and cite sources in its results, according to a person familiar with the matter, potentially competing head on with Alphabet Inc.’s Google and AI search startup Perplexity.

    The feature would allow users to ask ChatGPT a question and receive answers that use details from the web with citations to sources such as Wikipedia entries and blog posts, according to the person, who asked to remain anonymous discussing private information. One version of the product also uses images alongside written responses to questions, when they’re relevant. If a user asked ChatGPT how to change a doorknob, for instance, the results might include a diagram to illustrate the task, the person said.

    The Information reported on a search product in development in February. The details on how the product might work have not previously been reported. OpenAI declined to comment.

    https://www.theinformation.com/articles/openai-develops-web-search-product-in-challenge-to-google

  40. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    OpenAI says it’s developing a Media Manager tool, slated for release by 2025, to let content owners identify their works to OpenAI and control how they are used

    OpenAI says it’s building a tool to let content creators ‘opt out’ of AI training
    https://techcrunch.com/2024/05/07/openai-says-its-building-a-tool-to-let-content-creators-opt-out-of-ai-training/

    OpenAI says that it’s developing a tool to let creators better control how their content’s used in training generative AI.

    The tool, called Media Manager, will allow creators and content owners to identify their works to OpenAI and specify how they want those works to be included or excluded from AI research and training.

    The goal is to have the tool in place by 2025, OpenAI says, as the company works with “creators, content owners and regulators” toward a standard — perhaps through the industry steering committee it recently joined.

    “This will require cutting-edge machine learning research to build a first-ever tool of its kind to help us identify copyrighted text, images, audio and video across multiple sources and reflect creator preferences,” OpenAI wrote in a blog post. “Over time, we plan to introduce additional choices and features.”

  41. Tomi Engdahl says:

    Katrina Manson / Bloomberg:
    Microsoft deploys a generative AI model entirely divorced from the internet, saying US intel agencies can now harness the tech to analyze top-secret information

    Microsoft Creates Top Secret Generative AI Service for US Spies
    https://www.bloomberg.com/news/articles/2024-05-07/microsoft-creates-top-secret-generative-ai-service-for-us-spies

    Product went live Thursday after overhauling a supercomputer
    US spy agencies are keen to harness generative AI technology
