3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” Scriffignano says.

5,941 Comments

  1. Tomi Engdahl says:

    Pivot! AI Devs Move to Switch LLMs, Reduce OpenAI Dependency
    AI engineers and AI companies are looking to reduce — or even remove entirely — their dependency on OpenAI’s API after the recent drama.
    https://thenewstack.io/pivot-ai-devs-move-to-switch-llms-reduce-openai-dependency/

    Reply
  2. Tomi Engdahl says:

    Trick prompts ChatGPT to leak private data
    https://techxplore.com/news/2023-12-prompts-chatgpt-leak-private.html

    While OpenAI’s first words on its company website refer to a “safe and beneficial AI,” it turns out your personal data is not as safe as you believed. Google researchers announced this week that they could trick ChatGPT into disclosing private user data with a few simple commands.

    Although OpenAI has taken steps to protect privacy, everyday chats and postings leave a massive pool of data, much of it personal, that is not intended for widespread distribution.

    In their study, Google researchers found they could utilize keywords to trick ChatGPT into tapping into and releasing training data not intended for disclosure.

    “Using only $200 worth of queries to ChatGPT (gpt-3.5-turbo), we are able to extract over 10,000 unique verbatim memorized training examples,” the researchers said in a paper uploaded to the preprint server arXiv on Nov. 28.

    “Our extrapolation to larger budgets suggests that dedicated adversaries could extract far more data.”
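    The researchers’ extraction test ultimately reduces to checking model output for long verbatim overlaps with known training text. As a rough sketch of that idea (a toy, not the paper’s actual code; the study matched roughly 50-token spans against web-scale data), a k-gram overlap check might look like:

```python
def kgram_set(text, k):
    """Return the set of all k-word windows in the text."""
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def is_memorized(output, corpus, k=5):
    """Flag output that shares any k-word window verbatim with the corpus.

    The study used a ~50-token threshold against known web data; k=5 here
    just keeps the toy example readable.
    """
    corpus_grams = kgram_set(corpus, k)
    return any(gram in corpus_grams for gram in kgram_set(output, k))

corpus = "the quick brown fox jumps over the lazy dog every single morning"
print(is_memorized("he said the quick brown fox jumps over it", corpus))   # True
print(is_memorized("a completely novel sentence with no overlap", corpus)) # False
```

    At real scale the corpus side would be an index over terabytes of text rather than a Python set, but the verbatim-match criterion is the same.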

    Reply
  3. Tomi Engdahl says:

    Mozilla Lets Folks Turn AI LLMs Into Single-File Executables
    https://hackaday.com/2023/12/02/mozilla-lets-folks-turn-ai-llms-into-single-file-executables/

    LLMs (Large Language Models) for local use are usually distributed as a set of weights in a multi-gigabyte file. These cannot be directly used on their own, which generally makes them harder to distribute and run compared to other software. A given model can also have undergone changes and tweaks, leading to different results if different versions are used.

    To help with that, Mozilla’s innovation group has released llamafile, an open source method of turning a set of weights into a single binary that runs on six different OSes (macOS, Windows, Linux, FreeBSD, OpenBSD, and NetBSD) without needing to be installed. This makes it dramatically easier to distribute and run LLMs, as well as ensuring that a particular version of an LLM remains consistent and reproducible, forever.

    https://github.com/Mozilla-Ocho/llamafile

    Reply
  4. Tomi Engdahl says:

    Thomas Germain / Gizmodo:
    Google unveils Gemini, an AI model with Ultra, Pro, and Nano tiers, and plans a paid chatbot version in 2024; Google says Gemini Ultra beats GPT-4 on most tests — Starting today, Gemini is running on Bard and Google’s Pixel 8 Pro phones. The company says it blows OpenAI out of the water.

    Meet Gemini, the AI That Google Says Is Way, Way Better Than ChatGPT
    Starting today, Gemini is running on Bard and Google’s Pixel 8 Pro phones. The company says it blows OpenAI out of the water.
    https://gizmodo.com/google-launches-gemini-ai-bard-pixel-8-preview-1851076747

    Google unveiled its new AI model Gemini on Wednesday, giving the public a first look at a technology that’s had the tech press mired in rumors. Gemini, the company’s most powerful AI to date, comes to Bard and Pixel 8 Pro smartphones starting today, and will soon integrate with other products across Google’s services including Chrome, Search, Ads, and more. Google has a top-line message it wants you to hear: this thing is way better than anything you’ll get from OpenAI.

    “This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company,” Google CEO Sundar Pichai said in a statement. “I’m genuinely excited for what’s ahead, and for the opportunities Gemini will unlock for people everywhere.”

    Just over a year ago, OpenAI dropped ChatGPT on the world, sending Google and other companies scrambling to prove their tools are just as advanced. So far, Google’s chatbot Bard pales in comparison to ChatGPT. The search giant says that’s changing, starting now. Bard will be most people’s first exposure to Gemini, though it won’t launch with the model’s full capabilities.
    Meet the New Bard

    Gemini comes in three tiers. Gemini Ultra is Google’s most powerful model, pitched as a competitor to OpenAI’s GPT-4. Gemini Pro is a mid-range model powered to beat out GPT-3.5, the baseline version of ChatGPT. Last is Gemini Nano, a more efficient model built to run on mobile devices.

    As of Wednesday, Bard is running on a “finely tuned version of Gemini Pro,” said Sissie Hsiao, Vice President of Google Assistant and Bard, at a press conference. “This will have more advanced reasoning, planning, understanding and other capabilities.”

    Hsiao said Google will roll out a paid version of the chatbot running on Gemini Ultra early next year that the company calls Bard Advanced. She declined to share details on pricing.

    Google shared a long list of benchmarks showing that on almost every measure, the new Bard outperforms the free version of ChatGPT. The company shared several demonstrations of Bard’s new supercharged abilities, including a collaboration with YouTuber Mark Rober in which the AI helps build a hyper-accurate paper airplane.

    Along with Bard, Gemini is also coming to Pixel 8 Pro Android phones in a Wednesday update, albeit in a limited capacity. Gemini Nano now powers the Summarize feature on Android’s Recorder app on Pixel 8 Pros. Google says the AI will also power Android’s Smart Reply feature on the Pixel 8 Pro, but only if you’re using the Google keyboard, and only in WhatsApp. The company says Gemini is coming to more messaging apps and other parts of the operating system next year.
    Google says Gemini is better than GPT-4

    For now, GPT-4 is the most powerful model available to the public. Google says it has GPT-4 beat, and Gemini Ultra will be the best AI on the market when it rolls out.

    “With a score of over 90%, Gemini is the first A.I. model to outperform human experts on the industry standard benchmark MMLU,” said Eli Collins, Vice President of Product at Google DeepMind. “It’s our largest and most capable A.I. model.” MMLU, short for Massive Multitask Language Understanding, measures AI capabilities using standard tests in a combination of 57 subjects such as math, physics, history, law, medicine, and ethics.
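    A score like that 90% figure is an aggregate of per-subject accuracy over the 57 subjects. As a minimal sketch of how such a benchmark number gets tallied (the subjects and counts below are invented for illustration, not Gemini’s actual results):

```python
# Hypothetical per-subject results as (correct, total); not real benchmark data.
results = {
    "math": (38, 50),
    "physics": (41, 50),
    "law": (44, 50),
}

def micro_accuracy(results):
    """Pool every question together, then divide."""
    correct = sum(c for c, _ in results.values())
    total = sum(t for _, t in results.values())
    return correct / total

def macro_accuracy(results):
    """Average per-subject accuracies, weighting each subject equally."""
    return sum(c / t for c, t in results.values()) / len(results)

print(round(micro_accuracy(results), 3))  # 0.82
print(round(macro_accuracy(results), 3))  # 0.82
```

    Reported MMLU numbers are typically averaged across subjects; when every subject contributes the same number of questions, as here, the two tallies coincide.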
    Google pushed a chart with a side-by-side comparison of Gemini Ultra’s and GPT-4’s performance on a number of tests; in almost every category, Google comes out on top. (Graphic: Google)

    It’s unclear when the public will get to see the proof, however.

    Collins said “Gemini is, actually, quite performant with regards to multilingual capabilities.” Google wouldn’t get more specific than to say Gemini Ultra will be available “early next year.”

    “Gemini’s performance also exceeds current state-of-the-art results on 30 out of 32 widely used industry benchmarks,” Collins said.

    Google stressed that Gemini is built for “multimodal performance,” meaning it can comprehend different kinds of information such as text, images, video, audio, and more.

    Reply
  5. Tomi Engdahl says:

    Kif Leswing / CNBC:
    Meta, Microsoft, OpenAI, and Oracle say they will use AMD’s new Instinct MI300X GPU; Microsoft will offer access to the chips through Azure

    Meta and Microsoft say they will buy AMD’s new AI chip as an alternative to Nvidia’s
    https://www.cnbc.com/2023/12/06/meta-and-microsoft-to-buy-amds-new-ai-chip-as-alternative-to-nvidia.html

    Meta, OpenAI, and Microsoft said they will use AMD’s newest AI chip, the Instinct MI300X — a sign that tech companies want alternatives to the expensive Nvidia graphics processors that have been essential for artificial intelligence.
    If the MI300X is good enough and inexpensive enough when it starts shipping early next year, it could lower costs for developing AI models.
    AMD CEO Lisa Su projected the market for AI chips will amount to $400 billion or more in 2027, and she said she hopes AMD has a sizable part of that market.

    Reply
  6. Tomi Engdahl says:

    Paul Alcorn / Tom’s Hardware:
    AMD launches Instinct AI accelerators MI300X and MI300A and claims the MI300X delivers up to 1.6x more performance than Nvidia’s H100 HGX in inference workloads

    AMD unveils Instinct MI300X GPU and MI300A APU, claims up to 1.6X lead over Nvidia’s competing GPUs
    https://www.tomshardware.com/pc-components/cpus/amd-unveils-instinct-mi300x-gpu-and-mi300a-apu-claims-up-to-16x-lead-over-nvidias-competing-gpus

    Reply
  7. Tomi Engdahl says:

    Mat Honan / MIT Technology Review:
    An interview with Sundar Pichai on Gemini, AI benchmarks, making AI helpful for everyone, the legal landscape around AI, and more

    Google CEO Sundar Pichai on Gemini and the coming age of AI
    https://www.technologyreview.com/2023/12/06/1084539/google-ceo-sundar-pichai-on-gemini-and-the-coming-age-of-ai/

    In an in-depth interview, Pichai predicts: “This will be one of the biggest things we all grapple with for the next decade.”

    Google released the first phase of its next-generation AI model, Gemini, today. Gemini reflects years of efforts from inside Google, overseen and driven by its CEO, Sundar Pichai.

    Reply
  8. Tomi Engdahl says:

    Jonny Evans / Computerworld:
    Apple’s machine learning research team quietly releases MLX, an array framework to train and deploy ML models on Apple silicon, available on GitHub — Apple’s machine learning (ML) teams quietly flexed their muscle with the release of a new ML framework developed for Apple Silicon.

    Apple launches MLX machine-learning framework for Apple Silicon
    Apple’s machine learning (ML) teams quietly flexed their muscle with the release of a new ML framework developed for Apple Silicon.
    https://www.computerworld.com/article/3711408/apple-launches-mlx-machine-learning-framework-for-apple-silicon.html

    Reply
  9. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Meta debuts Imagine with Meta, a standalone text-to-image generator on the web powered by its Emu model that generates four images per prompt, free for US users — Not to be outdone by Google’s Gemini launch, Meta’s rolling out a new, standalone generative AI experience on the web …

    Meta launches a standalone AI-powered image generator
    https://techcrunch.com/2023/12/06/meta-launches-a-standalone-ai-powered-image-generator/

    Not to be outdone by Google’s Gemini launch, Meta’s rolling out a new, standalone generative AI experience on the web, Imagine with Meta, that allows users to create images by describing them in natural language.

    Similar to OpenAI’s DALL-E, Midjourney and Stable Diffusion, Imagine with Meta, which is powered by Meta’s existing Emu image generation model, creates high-resolution images from text prompts. It’s free to use (at least for now) for users in the U.S. and generates four images per prompt.

    “We’ve enjoyed hearing from people about how they’re using imagine, Meta AI’s text-to-image generation feature, to make fun and creative content in chats. Today, we’re expanding access to imagine outside of chats,” Meta writes in a blog post published this morning. “While our messaging experience is designed for more playful, back-and-forth interactions, you can now create free images on the web, too.”

    Reply
  10. Tomi Engdahl says:

    Meta rolls out its 28 AI characters across WhatsApp, Messenger, and Instagram in the US, and plans to experiment with “long-term memory” for several characters

    Meta’s AI characters are now live across its US apps, with support for Bing Search and better memory
    https://techcrunch.com/2023/12/06/metas-ai-characters-are-now-live-across-its-u-s-apps-with-support-for-bing-search-and-better-memory/

    Reply
  11. Tomi Engdahl says:

    Sarah Perez / TechCrunch:
    Meta’s virtual assistant Meta AI gets support for Reels, a feature called “reimagine” that lets users generate AI images using prompts in group chats, and more

    Meta AI adds Reels support and ‘reimagine,’ a way to generate new AI images in group chats, and more
    https://techcrunch.com/2023/12/06/meta-ai-adds-reels-support-and-reimagine-a-way-to-generate-new-ai-images-in-group-chats/

    Reply
  12. Tomi Engdahl says:

    Olivia Poh / Bloomberg:
    Jensen Huang says Huawei, Intel, and a growing group of chip startups pose a stiff challenge to Nvidia’s dominant position in the race to make the best AI chips

    Nvidia Sees Huawei as Formidable AI Chipmaking Rival, CEO Says
    https://www.bloomberg.com/news/articles/2023-12-06/nvidia-sees-huawei-as-formidable-ai-chipmaking-rival-ceo-says

    Huawei returned to global spotlight with a made-in-China chip
    Nvidia makes the most in-demand artificial intelligence chips

    Reuters:
    Jensen Huang says Nvidia has been “working very closely with the US government” to create products that comply with US curbs on high-end chip exports to China

    Nvidia working closely with US to ensure new chips for China are compliant with curbs
    https://www.reuters.com/technology/nvidia-develop-new-chips-that-comply-with-us-export-regulations-2023-12-06/

    Reply
  13. Tomi Engdahl says:

    Kyle Bradshaw / 9to5Google:
    Google plans to add its “Help me write” AI tool to Chrome desktop, appearing in autofill popups when typing text, expanding on Messages, Gmail, Docs, and Keep — Over the past few months, Google has been steadily introducing AI tools to its many apps and services.

    ‘Help me write’ AI is coming soon to Chrome for desktop
    https://9to5google.com/2023/12/05/help-me-write-ai-chrome-desktop/

    Over the past few months, Google has been steadily introducing AI tools to its many apps and services. The latest example will see AI-powered “Help me write” become available in Chrome for Windows, Mac, and Linux.

    “Help me write” has been one of Google’s most common AI additions, with some variation of it having appeared in Google Messages, Gmail, Docs, Keep, and more. As you’d expect, the feature takes a simple prompt and drafts the appropriate text, saving you the effort of writing it manually.

    Reply
  14. Tomi Engdahl says:

    Bruce Schneier / Schneier on Security:
    The internet enabled mass surveillance, and AI will enable mass spying, once limited by human labor, by making troves of data searchable and understandable — Spying and surveillance are different but related things. If I hired a private detective to spy on you, that detective could hide …

    AI and Mass Spying
    https://www.schneier.com/blog/archives/2023/12/ai-and-mass-spying.html

    Spying and surveillance are different but related things. If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.

    Before the internet, putting someone under surveillance was expensive and time-consuming. You had to manually follow someone around, noting where they went, whom they talked to, what they purchased, what they did, and what they read. That world is forever gone. Our phones track our locations. Credit cards track our purchases. Apps track whom we talk to, and e-readers know what we read. Computers collect data about what we’re doing on them, and as both storage and processing have become cheaper, that data is increasingly saved and used. What was manual and individual has become bulk and mass. Surveillance has become the business model of the internet, and there’s no reasonable way for us to opt out of it.

    Spying is another matter. It has long been possible to tap someone’s phone or put a bug in their home and/or car, but those things still require someone to listen to and make sense of the conversations. Yes, spyware companies like NSO Group help the government hack into people’s phones, but someone still has to sort through all the conversations. And governments like China could censor social media posts based on particular words or phrases, but that was coarse and easy to bypass. Spying is limited by the need for human labor.

    AI is about to change that. Summarization is something a modern generative AI system does well. Give it an hourlong meeting, and it will return a one-page summary of what was said. Ask it to search through millions of conversations and organize them by topic, and it’ll do that. Want to know who is talking about what? It’ll tell you.
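    The bottleneck Schneier identifies is exactly the summarization step, and even a crude algorithm shows why it no longer requires a person. The toy below is deliberately not an LLM, just a frequency-based extractive ranker, but it already condenses a “transcript” with zero human labor:

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Naive extractive summary: score each sentence by the average
    document-wide frequency of its words and return the top scorer(s).
    A real system would use an LLM; the point is only that ranking
    and condensing text needs no human in the loop."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s):
        toks = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)
    return sorted(sentences, key=score, reverse=True)[:n_sentences]

transcript = (
    "The budget meeting ran long. "
    "We agreed the budget for the new project is approved. "
    "Someone mentioned lunch options."
)
print(summarize(transcript))
```

    Swap the scoring function for an LLM call and the same loop scales to millions of conversations, which is precisely the shift the post describes.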

    Reply
  15. Tomi Engdahl says:

    New York Times:
    A look at lawmakers’ struggles globally to regulate AI, including the tech’s rapid evolution, governments’ AI knowledge deficits, and labyrinthine bureaucracies

    How Nations Are Losing a Global Race to Tackle A.I.’s Harms
    https://www.nytimes.com/2023/12/06/technology/ai-regulation-policies.html?unlocked_article_code=1.D00.-mPU.8evR0aFGJ1g8&hpgrp=k-abar&smid=url-share

    Alarmed by the power of artificial intelligence, Europe, the United States and others are trying to respond — but the technology is evolving more rapidly than their policies.

    Reply
  16. Tomi Engdahl says:

    Brooks Barnes / New York Times:
    SAG-AFTRA members ratify the union’s three-year contract with studios after 78% voted in favor; some members remain dissatisfied with the deal’s AI protections

    Actors Ratify Deal With Hollywood Studios, With Reservations
    https://www.nytimes.com/2023/12/05/business/sag-aftra-actors-ratify.html?unlocked_article_code=1.D00.FeYx.MQYyvxv3FFYG&hpgrp=k-abar&smid=url-share

    The SAG-AFTRA vote formally ends six months of labor strife, though some members were not happy about the contract’s artificial intelligence protections.

    Reply
  17. Tomi Engdahl says:

    Artificial intelligence programs still struggle with basic problem-solving skills that people excel at, new research claims.

    Researchers Made an IQ Test for AI, Found They’re All Pretty Stupid
    https://gizmodo.com/meta-yann-lecun-ai-iq-test-gaia-research-1851058591?utm_campaign=Gizmodo&utm_content=Giz+Tech&utm_medium=SocialMarketing&utm_source=facebook&fbclid=IwAR2Wiv_IFZlcWUuK8G6GuWp8GEUy8KiEWp_8y4eYkVNxCVRjSCWOjY2FAsk

    Artificial intelligence programs still struggle with basic problem-solving skills that people excel at, new research claims.

    There’s been a lot of talk about AGI lately—artificial general intelligence—the much-coveted AI development goal that every company in Silicon Valley is currently racing to achieve. AGI refers to a hypothetical point in the future when AI algorithms will be able to do most of the jobs that humans currently do. According to this theory of events, the emergence of AGI will bring about fundamental changes in society—ushering in a “post-work” world, wherein humans can sit around enjoying themselves while robots do most of the heavy lifting. If you believe the headlines, OpenAI’s recent palace intrigue may have been partially inspired by a breakthrough in AGI—the so-called “Q*” program—which sources close to the startup claim was responsible for the power struggle.

    But, according to recent research from Yann LeCun, Meta’s top AI scientist, artificial intelligence isn’t going to be general-purpose anytime soon. Indeed, in a recently released paper, LeCun argues that AI is still much dumber than humans in the ways that matter most.

    GAIA: a benchmark for General AI Assistants
    https://arxiv.org/abs/2311.12983

    Reply
  18. Tomi Engdahl says:

    Forget Sam Altman. America’s greatest AI visionary is… an English professor in Illinois
    https://www.businessinsider.com/ted-underwood-ai-optimist-humanities-language-literature-research-bill-gates-2023-12?fbclid=IwAR25u29FyXEepS_C6Viv4C0Pf6DAJUmPskp4HJzl4xHlAWj-DuQ0iydeAzk&r=US&IR=T

    The well-designed prompt came from Ted Underwood, an English and information sciences professor at the University of Illinois. In a world filled with AI skeptics and chatbot alarmists, Underwood is making one of the strongest and most compelling cases for the value of artificial intelligence. While some (me among them) fret that AI is a fabulizing, plagiarizing, bias-propagating bullshit engine that threatens to bring about the end of civilization as we know it, Underwood is pretty sure that artificial intelligence will help us all think more deeply, and help scholars uncover exciting new truths about the grand sweep of human culture. Working with large language models — the software under a chatbot’s hood — has made him that rarest of things in the humanities: an AI optimist.

    To be clear, chatbots don’t read, and Underwood knows it. They don’t have opinions on how good a detective Philip Marlowe is. But a bot can do all sorts of interpretive tasks that used to be thesis fodder for overworked literature students.

    Reply
  19. Tomi Engdahl says:

    Indeed, Torvalds hopes that AI might really help by being able “to find the obvious stupid bugs because a lot of the bugs I see are not subtle bugs. Many of them are just stupid bugs, and you don’t need any kind of higher intelligence to find them. But having tools that warn in more subtle cases where, for example, it may just say ‘this pattern does not look like the regular pattern. Are you sure this is what you need?’ And the answer may be ‘No, that was not at all what I meant. You found an obvious bug. Thank you very much.’ We actually need autocorrects on steroids. I see AI as a tool that can help us be better at what we do.”
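    The “autocorrect on steroids” Torvalds describes can be caricatured with nothing more than pattern rules. A toy sketch (the regexes are invented for illustration and unrelated to any real kernel tooling):

```python
import re

# Toy lint rules for C-like source: pattern -> warning.
# Real tools (compilers, static analyzers, LLM assistants) go far deeper.
RULES = [
    (re.compile(r"if\s*\([^)]*[^=!<>]=[^=][^)]*\)"),
     "assignment inside if(); did you mean '=='?"),
    (re.compile(r";\s*;"), "stray empty statement ';;'"),
    (re.compile(r"^\s*malloc\("), "malloc() result discarded"),
]

def lint(source):
    """Return (line_number, warning) pairs for every rule hit."""
    warnings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, msg in RULES:
            if pattern.search(line):
                warnings.append((lineno, msg))
    return warnings

code = "if (x = 1) {\n    do_work();;\n}\n"
for lineno, msg in lint(code):
    print(f"line {lineno}: {msg}")
```

    These rules catch only the “stupid” bugs; the promise of AI tooling is the fuzzier middle ground, flagging code that merely looks unlike the surrounding patterns.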

    But, “What about hallucinations?,” asked Hohndel. Torvalds, who will never stop being a little snarky, said, “I see the bugs that happen without AI every day. So that’s why I’m not so worried. I think we’re doing just fine at making mistakes on our own.”

    https://www.zdnet.com/article/how-generative-ai-can-make-your-it-job-more-complicated/

    Reply
  20. Tomi Engdahl says:

    How AI-assisted code development can make your IT job more complicated
    Generative AI means faster coding, but also more code to manage, along with greater expectations from the business.
    https://www.zdnet.com/article/how-generative-ai-can-make-your-it-job-more-complicated/

    Reply
  21. Tomi Engdahl says:

    Google Shows Off “Gemini” AI, Says It Beats GPT-4
    Google has a lot to prove.
    https://futurism.com/google-gemini-gpt-4

    Reply
  22. Tomi Engdahl says:

    ChatGPT tool could be abused by scammers and hackers
    https://www.bbc.com/news/technology-67614065

    Reply
  23. Tomi Engdahl says:

    Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos
    “Imagine with Meta AI” turns prompts into images, trained using public Facebook data.
    https://arstechnica.com/information-technology/2023/12/metas-new-ai-image-generator-was-trained-on-1-1-billion-instagram-and-facebook-photos/

    Reply
  24. Tomi Engdahl says:

    The AI was cracked with a really simple attack that anyone can carry out – it started leaking its training data
    The language model’s safeguards failed in the face of a new technique.
    https://www.tekniikkatalous.fi/uutiset/tekoaly-murrettiin-todella-yksinkertaisella-hyokkayksella-jonka-voi-tehda-kuka-tahansa-alkoi-vuotaa-opetusdataansa-julki/84fe3085-641b-4ae1-8fb2-a48147c401e8

    Reply
  25. Tomi Engdahl says:

    I asked DALL-E 3 to create a portrait of every US state, and the results were gloriously strange
    This is how AI sees the 50 US states, according to DALL-E 3 and ChatGPT.
    https://www.zdnet.com/article/i-asked-dall-e-3-to-create-a-portrait-of-every-us-state-and-the-results-were-gloriously-strange/

    Reply
  26. Tomi Engdahl says:

    Google’s best Gemini AI demo video was fabricated
    Google takes heat for a misleading AI demo video that hyped up its GPT-4 competitor.
    https://arstechnica.com/information-technology/2023/12/google-admits-it-fudged-a-gemini-ai-demo-video-which-critics-say-misled-viewers/

    Reply
  27. Tomi Engdahl says:

    How To AI: Best AI Tools for Image Generation
    Which AI image generator fits your needs? From Dall-E and MidJourney to Stable Diffusion, we’ve got a comprehensive guide to help you choose.
    https://decrypt.co/208752/which-ai-art-generator-should-you-use-comparison-dalle-midjourney-stable-diffusion-sdxl-adobe-firefly-amazon-titan-leonardo

    Reply
  28. Tomi Engdahl says:

    Large Language Models (LLMs) and the Future of Work
    https://www.oodaloop.com/archive/2023/11/29/large-language-models-llms-and-the-future-of-work/

    Large Language Models (LLMs) based on deep learning have been a part of the technological landscape since approximately 2018. However, their existence was initially unknown to the general public, with their usage largely confined to a few technical disciplines such as data scientists and software engineers.

    Reply
  29. Tomi Engdahl says:

    Bloomberg:
    The European Parliament and the EU’s 27 member states will need to approve the AI Act, which seeks fines of up to €35M or 7% of global turnover for violations

    Europe Puts Stake in the Ground With First Pact to Regulate AI
    https://www.bloomberg.com/news/articles/2023-12-09/europe-puts-stake-in-the-ground-with-first-pact-to-regulate-ai

    Region couldn’t ‘let the perfect be the enemy of the good’
    Negotiators broke coffee machine working late-night deal

    Earlier this week, European negotiators sat in a conference room in Brussels and debated for nearly 24 straight hours — dozing off at times and working a self-service coffee machine so hard that it broke.

    They came with a singular mission: reaching an agreement to regulate artificial intelligence. And they didn’t quite get there. But the EU’s internal market chief, Thierry Breton, didn’t want a long break over the weekend that would give lobbyists more time to weigh in, according to people familiar with the matter.

    Adam Satariano / New York Times:
    The EU reaches a deal on the AI Act, one of the world’s first comprehensive attempts to limit AI use, including in law enforcement and critical infrastructure

    E.U. Agrees on Landmark Artificial Intelligence Rules
    https://www.nytimes.com/2023/12/08/technology/eu-ai-act-regulation.html?unlocked_article_code=1.Ek0.mNb3.qvMXexOo1xZi&hpgrp=k-abar&smid=url-share

    The agreement over the A.I. Act solidifies one of the world’s first comprehensive attempts to limit the use of artificial intelligence.

    European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.

    The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.

    European policymakers focused on A.I.’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.

    Washington Post:
    The EU’s AI Act includes restrictions for foundation models, subjecting some proprietary models classified as having “systemic risk” to additional obligations

    E.U. reaches deal on landmark AI bill, racing ahead of U.S.
    https://www.washingtonpost.com/technology/2023/12/08/ai-act-regulation-eu/?pwapi_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZWFzb24iOiJnaWZ0IiwibmJmIjoxNzAyMDExNjAwLCJpc3MiOiJzdWJzY3JpcHRpb25zIiwiZXhwIjoxNzAzMzkzOTk5LCJpYXQiOjE3MDIwMTE2MDAsImp0aSI6IjYxNWRlMjM4LTBlY2MtNDJkOS04MDg2LTllNTVhNzk5MmNhYSIsInVybCI6Imh0dHBzOi8vd3d3Lndhc2hpbmd0b25wb3N0LmNvbS90ZWNobm9sb2d5LzIwMjMvMTIvMDgvYWktYWN0LXJlZ3VsYXRpb24tZXUvIn0.4mzsdw0Px4Uz8dDhh72Lqcie7z59hTerDPC_UHNVqPM

    The regulation paves the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.

    European Union officials reached a landmark deal Friday on the world’s most ambitious law to regulate artificial intelligence, paving the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.

    At a time when the sharpest critics of AI are warning of its nearly limitless threat, even as advocates herald its benefits to humanity’s future, Europe’s AI Act seeks to ensure that the technology’s exponential advances are accompanied by monitoring and oversight, and that its highest-risk uses are banned. Tech companies that want to do business in the 27-nation bloc of 450 million consumers — the West’s single largest — would be compelled to disclose data and do rigorous testing, particularly for “high-risk” applications in products like self-driving cars and medical equipment.

    Dragos Tudorache, a Romanian lawmaker co-leading the AI Act negotiations, hailed the deal as a template for regulators around the world scrambling to make sense of the economic benefits and societal dangers presented by artificial intelligence, especially since last year’s release of the popular chatbot ChatGPT.

    “The work that we have achieved today is an inspiration for all those looking for models,” he said. “We did deliver a balance between protection and innovation.”

    The result was a compromise on the most controversial aspects of the law — one aimed at regulating the massive foundation language models that capture internet data to underpin consumer products like ChatGPT and another that sought broad exemptions for European security forces to deploy artificial intelligence.

    The final deal banned scraping faces from the internet or security footage to create facial recognition databases or other systems that categorize using sensitive characteristics such as race, according to a news release. But it created some exemptions allowing law enforcement to use “real-time” facial recognition to search for victims of trafficking, prevent terrorist threats, and track down suspected criminals in cases of murder, rape and other crimes.

    European digital privacy and human rights groups were pressuring representatives of the parliament to hold firm against the push by countries to carve out broad exemptions for their police and intelligence agencies, which have already begun testing AI-fueled technologies. Following the early announcement of the deal, advocates remained concerned about a number of carve-outs for national security and policing.

    “The devil will be in the detail, but whilst some human rights safeguards have been won, the E.U. AI Act will no doubt leave a bitter taste in human rights advocates’ mouths,” said Ella Jakubowska, a senior policy adviser at European Digital Rights, a collective of academics, advocates and non-governmental organizations.

    The legislation ultimately included restrictions for foundation models but gave broad exemptions to “open-source models,” which are developed using code that’s freely available for developers to alter for their own products and tools. The move could benefit open-source AI companies in Europe that lobbied against the law, including France’s Mistral and Germany’s Aleph Alpha, as well as Meta, which released the open-source model LLaMA.

    However, some proprietary models classified as having “systemic risk” will be subject to additional obligations, including evaluations and reporting of energy efficiency. The text of the deal was not immediately available, and a news release did not specify which criteria would trigger the more stringent requirements.

    Companies that violate the AI Act could face fines up to 7 percent of global revenue, depending on the violation and the size of the company breaking the rules.

    The law furthers Europe’s leadership role on tech regulation. For years, the region has led the world in crafting novel laws to address concerns about digital privacy, the harms of social media and concentration in online markets.

    The architects of the AI Act have “carefully considered” the implications for governments around the world since the early stages of drafting the legislation, Tudorache said. He said he frequently hears from other legislators who are looking at the E.U.’s approach as they begin drafting their own AI bills.

    “This legislation will represent a standard, a model, for many other jurisdictions out there,” he said, “which means that we have to have an extra duty of care when we draft it because it is going to be an influence for many others.”

    Reply
  30. Tomi Engdahl says:

    Editorial: AI can be used to classify, manipulate and surveil citizens – the EU wants to set strict limits on it
    Case law on the effects of the new regulation will likely emerge only after years.
    https://www.is.fi/paakirjoitus/art-2000010047965.html

    Reply
  31. Tomi Engdahl says:

    Wall Street Journal:
    The US needs an AI moonshot mentality, and in 2024 the government should galvanize support for a broad investment backed by strong public sector infrastructure

    Why the U.S. Needs a Moonshot Mentality for AI—Led by the Public Sector
    Artificial intelligence is too important to be left entirely in the hands of the big tech companies
    https://www.wsj.com/tech/ai/artificial-intelligence-united-states-future-76c0082e?mod=followamazon

    Among other things, 2023 will be remembered as the year artificial intelligence went mainstream.

    But while Americans from every corner of the country began dabbling with tools like ChatGPT and Midjourney, we believe 2023 is also the year Congress failed to act on what we see as the big picture: AI’s impact will be far bigger than the products that companies are releasing at a breakneck pace. AI is a broad, general-purpose technology with profound implications for society that cannot be overstated.

    We saw this early on, and in 2019 established the Stanford Institute for Human Centered Artificial Intelligence, embarking on what was seen at the time as a controversial initiative: the need to engage in deep dialogue and partnership with the policy world, especially those in Washington, D.C.

    As we’ve done this work, we have seen firsthand the growing gap in the capabilities of, and investment in, the public compared with private sectors when it comes to AI. As it stands now, academia and the public sector lack the computing power and resources necessary to achieve cutting edge breakthroughs in the application of AI.

    This leaves the frontiers of AI solely in the hands of the most resourced players—industry and, in particular, Big Tech—and risks a brain drain from academia. Last year alone, less than 40% of new Ph.D.s in AI went into academia and only 1% went into government jobs.

    There has been, unquestionably, some progress to address this. In July, Congress introduced the bipartisan, bicameral Create AI Act to give students and researchers access to the resources, data and tools they need to study and develop responsible AI models. One key element is that it establishes the National AI Research Resource (Nairr) that will enable the government to provide much-needed access to large-scale computation and government data sets to academics, nonprofit researchers and startups across the U.S.

    Reply
  32. Tomi Engdahl says:

    Kyle Orland / Ars Technica:
    ChatGPT outperforms Gemini-powered Bard overall across factual retrieval, summarization, creative writing, and coding tests, but not as clearly as in April 2023

    Round 2: We test the new Gemini-powered Bard against ChatGPT
    We run the models through seven categories to determine an updated champion.
    https://arstechnica.com/ai/2023/12/chatgpt-vs-google-bard-round-2-how-does-the-new-gemini-model-fare/

    Now, the AI days are a bit less “early,” and this week’s launch of a new version of Bard powered by Google’s new Gemini language model seemed like a good excuse to revisit that chatbot battle with the same set of carefully designed prompts. That’s especially true since Google’s promotional materials emphasize that Gemini Ultra beats GPT-4 in “30 of the 32 widely used academic benchmarks” (though the more limited “Gemini Pro” currently powering Bard fares significantly worse in those not-completely-foolproof benchmark tests).

    This time around, we decided to compare the new Gemini-powered Bard to both ChatGPT-3.5—for an apples-to-apples comparison of both companies’ current “free” AI assistant products—and ChatGPT-4 Turbo—for a look at OpenAI’s current “top of the line” waitlisted paid subscription product (Google’s top-level “Gemini Ultra” model won’t be publicly available until next year). We also looked at the April results generated by the pre-Gemini Bard model to gauge how much progress Google’s efforts have made in recent months.

    Reply
  33. Tomi Engdahl says:

    Financial Times:
    Dealroom: Nvidia is the most active large-scale investor in AI startups in 2023, excluding accelerator funds like YC, with 35 deals, almost 6x more than in 2022 — US chipmaker takes stakes in groups that are also its customers in effort to ‘lock up the market’

    Nvidia emerges as leading investor in AI companies
    US chipmaker takes stakes in groups that are also its customers in effort to ‘lock up the market’
    https://www.ft.com/content/25337df3-5b98-4dd1-b7a9-035dcc130d6a

    Nvidia, the world’s most valuable chipmaker, has become one of the most prolific investors in artificial intelligence start-ups this year, seeking to capitalise on its position as the dominant provider of AI processors.

    Silicon Valley-based Nvidia said on Monday it had invested in “more than two dozen” companies this year, from big new AI platforms valued in the billions of dollars to smaller start-ups applying AI to industries such as healthcare or energy.

    According to estimates by Dealroom, which tracks venture capital investments, Nvidia participated in 35 deals in 2023, almost six times more than last year.

    That made the chipmaker the most active large-scale investor in AI in a banner year for dealmaking in the sector, outstripping Silicon Valley’s largest venture firms such as Andreessen Horowitz and Sequoia, according to Dealroom, excluding small-scale accelerator funds such as Y Combinator that place many smaller bets.

    “Broadly, for Nvidia, the number one criteria [for making start-up investments] is relevancy,” Mohamed Siddeek, head of its dedicated venture arm NVentures, told the Financial Times.

    “Companies that use our technology, who depend on our technology, who build their businesses on our technology . . . I can’t think of a situation where we’ve invested in a company that did not use Nvidia products.”

    Between NVentures and its corporate development team, Nvidia’s portfolio now includes Inflection AI and Cohere, two of the biggest rivals to ChatGPT maker OpenAI.

    Reply
  34. Tomi Engdahl says:

    Commentary: AI has already turned against humans – what will the future bring?
    A science fiction writer foresaw the dangers of computing’s development with astonishing accuracy 30 years ago, writes special correspondent Jouko Juonala.
    https://www.is.fi/digitoday/art-2000009628930.html

    Reply
  35. Tomi Engdahl says:

    Experts describe what they fear in the future of AI
    Hundreds of experts listed their fears and hopes for the future of AI, and they are more worried than excited.
    https://itinsider.fi/asiantuntijat-kertovat-mita-he-pelkaavat-tekoalyn-tulevaisuudessa/?gclid=Cj0KCQiAyeWrBhDDARIsAGP1mWQ_qSQC8g6PX_lvEuiWs9LNPL31LL1NIfFcG0SArS8LvRR5maauzkAaAqTdEALw_wcB

    Reply
  36. Tomi Engdahl says:

    Welcome to the new surreal: How AI-generated video is changing film.

    Exclusive: Watch the world premiere of the AI-generated short film The Frost.

    https://www.technologyreview.com/2023/06/01/1073858/surreal-ai-generative-video-changing-film/

    Reply
  37. Tomi Engdahl says:

    What is Microsoft Copilot? (Microsoft Copilot vs Copilot for Microsoft 365)
    https://www.youtube.com/watch?v=pXzjJ0NvFJE

    Introducing Microsoft 365 Copilot | Your Copilot for Work
    https://www.youtube.com/watch?v=S7xTBa93TX8

    Reply
