3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

6,064 Comments

  1. Tomi Engdahl says:

    How Amazon is trying to make the world fall in love with its robots
    The retail giant’s autonomous robots have been precisely designed to avoid annoying or terrifying their human colleagues – but will the world welcome them, asks Andrew Griffin
    https://www.independent.co.uk/tech/amazon-autonomous-robots-proteus-b2628115.html

    Reply
  2. Tomi Engdahl says:

    AI engineers claim new algorithm reduces AI power consumption by 95% — replaces complex floating-point multiplication with integer addition
    By Jowi Morales
    Addition is simpler than multiplication, after all.
    https://www.tomshardware.com/tech-industry/artificial-intelligence/ai-engineers-build-new-algorithm-for-ai-processing-replace-complex-floating-point-multiplication-with-integer-addition
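    The linked piece reports a research claim about an approximate multiplier for AI workloads. As a rough, hypothetical sketch of why integer addition can stand in for floating-point multiplication at all, the Python toy below exploits the fact that an IEEE-754 bit pattern is roughly a scaled logarithm of the value. This is only the underlying intuition, not the algorithm from the article, and the helper names are made up for illustration.

        import struct

        def float_to_bits(x: float) -> int:
            # Reinterpret a 32-bit float's bytes as an unsigned integer.
            return struct.unpack("<I", struct.pack("<f", x))[0]

        def bits_to_float(b: int) -> float:
            return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

        BIAS = 127 << 23  # single-precision exponent bias, shifted into the exponent field

        def approx_mul(a: float, b: float) -> float:
            # For positive floats, the raw bit pattern is roughly proportional to
            # log2(value) plus a constant, so adding two bit patterns and
            # subtracting one bias approximates multiplication using only
            # integer addition and subtraction (no floating-point multiplier).
            return bits_to_float(float_to_bits(a) + float_to_bits(b) - BIAS)

        print(approx_mul(3.0, 7.0))  # prints 20.0, a rough stand-in for the exact 21.0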

    Reply
  3. Tomi Engdahl says:

    Horror Studio Blumhouse Partners With Meta to Use Its AI Video Generator
    https://futurism.com/the-byte/horror-movie-studio-meta-ai-video

    As Variety reports, Blumhouse Productions, best known for franchises like “Halloween” and “The Purge,” has announced a partnership with Meta that grants it access to an early version of the tech company’s recently-unveiled video generation AI model Movie Gen.

    Reply
  4. Tomi Engdahl says:

    Machines of Loving Grace
    How AI Could Transform the World for the Better
    https://darioamodei.com/machines-of-loving-grace?fbclid=IwY2xjawGAGl1leHRuA2FlbQIxMQABHcjBkn-O-8M6bU0Oxy5GC0TNfA3Lz40o_wzRJOa4oC8sJBJVAN3uyXKplQ_aem_M6oPtGIVyhC4ygdxw5djJw#2-neuroscience-and-mind

    I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.

    Reply
  5. Tomi Engdahl says:

    Cristina Criddle / Financial Times:
    How OpenAI, Google, and Anthropic are using differing approaches to improve “model behavior”, an emerging field shaping AI systems’ responses and characteristics

    https://www.ft.com/content/a7337550-9b42-45c8-9845-37d78d2c3209

    Reply
  6. Tomi Engdahl says:

    Mark Gurman / Bloomberg:
    Sources: some at Apple think it is over two years behind the leaders in generative AI; a closer look at changes in Apple’s HR and hardware engineering groups — Apple’s new iPad mini highlights the company’s secret advantage in artificial intelligence. Also: Sonos weighs a headphone reboot …

    Apple’s New iPad Mini Highlights the Company’s Secret AI Advantage
    https://www.bloomberg.com/news/newsletters/2024-10-20/apple-s-latest-ipad-mini-highlights-ai-advantage-sonos-considers-new-headphones-m2hkz4mn

    Apple’s new iPad mini highlights the company’s secret advantage in artificial intelligence. Also: Sonos weighs a headphone reboot after a sluggish start; Amazon rolls out a color Kindle; and Jony Ive-designed jackets arrive. On the management front, Apple’s chief people officer and top recruiter depart, and the company names new hardware leaders.

    When Apple Inc. announced the first iPad mini upgrade in three years this past week, it chose to recycle its marketing strategy for the iPhone 16 and go all-in on AI features.

    The company’s smallest iPad will have 8 gigabytes of memory and the same processor — the A17 Pro — as the iPhone 15 Pro line from last year. That gives it enough horsepower to support the new Apple Intelligence platform. And, considering that the new model doesn’t have other major new changes, it’s no surprise that Apple is heavily touting the AI capabilities.

    The bigger obstacle is that the first Apple Intelligence features are underwhelming — with the more impressive capabilities coming later. In the iPad mini marketing on Apple’s website, the company spotlights four features; three of them aren’t launching until between December and March.

    At the start, the signature feature will be notification summaries. These can be quite helpful — if they’re accurate — but they lack the wow factor of competitors’ offerings. Compared with the latest fare from Google, OpenAI and Meta Platforms Inc., Apple’s AI is still far behind.

    At some point, Apple will either develop, hire or acquire its way into the top tier of AI companies.

    Apple has another advantage as it tries to catch up: the ability to roll out features to a massive base of devices.

    When Apple announced its AI features in June, the software was only compatible with two iPhone models and a couple of iPads, as well as Macs with its in-house silicon. Now, the four newest iPhones, almost every iPad and all the Macs can support it. By 2026, nearly every Apple device with a screen will run it.

    The Apple Watch doesn’t currently support the AI platform, but the notification summaries can be delivered to the device from a paired iPhone. And the company is working on bringing the features to the Vision Pro headset. Apple’s next wave of home devices, meanwhile, will also be built around AI capabilities.

    When Apple becomes a true player in AI, Google and Samsung Electronics Co. will be hard-pressed to roll out new features and upgrades at the same speed. They have more fragmented operating systems, and their hardware, software and services aren’t as tightly integrated.

    Still, Apple hasn’t yet shown it can achieve real competence in AI. Today, there’s little reason to buy products just to get Apple Intelligence. If consumers are sold on that idea by Apple’s marketing, they may be surprised to find few meaningful AI tools when they start using their new devices.

    But that raises a broader question: How much do customers actually care about AI? For now, the camera advancements on a new iPhone are a bigger draw.

    Apple has done a good job at convincing one group that it’s winning at artificial intelligence: investors.

    Some analysts have even made dubious claims that Apple Intelligence is already creating an “AI consumer revolution” that will “spark a massive holiday season.” But Apple’s AI glory is still years away. If the new iPhone is a hit this year, it will probably be because of everything but AI.

    Reply
  7. Tomi Engdahl says:

    Bloomberg:
    Students, academics, and developers say AI writing detectors are most likely to falsely flag essays written in a more generic manner as written by AI tools — About two-thirds of teachers report regularly using tools for detecting AI-generated content. At that scale, even tiny error rates can add up quickly.

    AI Detectors Falsely Accuse Students of Cheating—With Big Consequences
    https://www.bloomberg.com/news/features/2024-10-18/do-ai-detectors-work-students-face-false-cheating-accusations?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcyOTMxMDIyMiwiZXhwIjoxNzI5OTE1MDIyLCJhcnRpY2xlSWQiOiJTTEs0Q1REV1gyUFMwMCIsImJjb25uZWN0SWQiOiIwNEFGQkMxQkYyMTA0NUVEODg3MzQxQkQwQzIyNzRBMCJ9.KXDUCkelNZlFLe5OhoinVXptHroG9RSu0iKUaCxKoQM

    Just weeks into the fall semester, Olmsted submitted a written assignment in a required class—one of three reading summaries she had to do each week. Soon after, she received her grade: zero. When she approached her professor, Olmsted said she was told that an AI detection tool had determined her work was likely generated by artificial intelligence.

    Olmsted disputed the accusation to her teacher and a student coordinator, stressing that she has autism spectrum disorder and writes in a formulaic manner that might be mistakenly seen as AI-generated, according to emails viewed by Bloomberg Businessweek. The grade was ultimately changed, but not before she received a strict warning: If her work was flagged again, the teacher would treat it the same way they would treat plagiarism.

    Since OpenAI’s ChatGPT brought generative AI to the mainstream almost two years ago, schools have raced to adapt to a changed landscape. Educators now rely on a growing crop of detection tools to help spot sentences, paragraphs or entire assignments generated by artificial intelligence. About two-thirds of teachers report using an AI checker regularly, according to a survey of more than 450 instructors published in March by the Center for Democracy & Technology.

    The best AI writing detectors are highly accurate, but they’re not foolproof.

    In a test of essays written before ChatGPT’s release, Businessweek found the detection services it tested falsely flagged 1% to 2% of the essays as likely written by AI, in some cases claiming near-100% certainty.

    Even such a small error rate can quickly add up, given the vast number of student assignments each year, with potentially devastating consequences for students who are falsely flagged. As with more traditional cheating and plagiarism accusations, students using AI to do their homework are having to redo assignments and facing failing grades and probation.
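    As a back-of-the-envelope illustration of how a small per-assignment error rate compounds over a school year (the rate and assignment count below are assumed for illustration, not figures from the article):

        # Assumed figures for illustration only: a 1% per-assignment false
        # positive rate and 30 AI-checked assignments in a school year.
        false_positive_rate = 0.01
        assignments_checked = 30

        # Chance an honest student is falsely flagged at least once,
        # treating each check as independent.
        p_flagged_at_least_once = 1 - (1 - false_positive_rate) ** assignments_checked
        print(f"{p_flagged_at_least_once:.1%}")  # about 26.0%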

    The students most susceptible to inaccurate accusations are likely those who write in a more generic manner, either because they’re neurodivergent like Olmsted, speak English as a second language (ESL) or simply learned to use more straightforward vocabulary and a mechanical style, according to students, academics and AI developers. A 2023 study by Stanford University researchers found that AI detectors were “near-perfect” when checking essays written by US-born eighth grade students, yet they flagged more than half of the essays written by nonnative English students as AI-generated. OpenAI recently said it has refrained from releasing an AI writing detection tool in part over concerns it could negatively affect certain groups, including ESL students.

    Businessweek also found that AI detection services can sometimes be tricked by automated tools designed to pass off AI writing as human. This could lead to an arms race that pits one technology against another, damaging trust between educators and students with little educational benefit.

    Turnitin, a popular AI detection tool that Olmsted says was used to check her work, has said it has a 4% false positive rate when analyzing sentences. Turnitin declined to make its service available for testing.

    While some educators have backed away from AI detectors and tried to adjust their curricula to incorporate AI instead, many colleges and high schools still use these tools. AI detection startups have attracted about $28 million in funding since 2019, according to the investment data firm PitchBook, with most of those deals coming after ChatGPT’s release. Deepfake detection startups, which can check for AI-generated text, images, audio and video, raised more than $300 million in 2023, up from about $65 million the year before, PitchBook found.

    The result is that classrooms remain plagued by anxiety and paranoia over the possibility of false accusations, according to interviews with a dozen students and 11 teachers across the US. Undergraduates now pursue a wide range of time-consuming efforts to defend the integrity of their work, a process they say diminishes the learning experience. Some also fear using commonplace AI writing assistance services and grammar checkers that are specifically marketed to students, citing concerns they will set off AI detectors.

    Eric Wang, Turnitin’s vice president for AI, says the company intentionally “oversamples” underrepresented groups in its data set. He says internal tests have shown Turnitin’s model doesn’t falsely accuse ESL students, and that its overall false positive rate for entire documents is below 1% and improving with each new release. Turnitin doesn’t train specifically on neurodivergent student data or have access to medical histories to assess that classification.

    Copyleaks co-founder and Chief Executive Officer Alon Yamin says its technology is 99% accurate. “We’re making it very clear to the academic institutions that nothing is 100% and that it should be used to identify trends in students’ work,” he says. “Kind of like a yellow flag for them to look into and use as an opportunity to speak to the students.”

    “Every AI detector has blind spots,” says Edward Tian, the founder and CEO of GPTZero.

    It’s challenging to quantify AI use in schools. In one test, Businessweek analyzed a separate set of 305 essays submitted to Texas A&M in the summer of 2023, after ChatGPT launched, and found the same AI detectors flagged about 9% as being generated by artificial intelligence.

    AI writing detectors typically look at perplexity, a measure of how complex the words are in any given submission. “If the word choices tend to be more generic and formulaic, that work has a higher chance of being flagged by AI detectors,” says James Zou, a professor of biomedical data science at Stanford University and the senior author of the Stanford study on ESL students.
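    A minimal sketch of that perplexity signal, assuming the Hugging Face transformers library and PyTorch, with GPT-2 as a stand-in scoring model; real detectors layer calibration and other signals on top of this:

        import torch
        from transformers import GPT2LMHeadModel, GPT2TokenizerFast

        tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

        def perplexity(text: str) -> float:
            # Lower perplexity means the model finds the wording more
            # predictable, which detectors treat as a hint that the text
            # could be machine-generated (or simply generic and formulaic).
            ids = tokenizer(text, return_tensors="pt").input_ids
            with torch.no_grad():
                loss = model(ids, labels=ids).loss  # mean cross-entropy per token
            return torch.exp(loss).item()

        print(perplexity("The cat sat on the mat because it was warm."))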

    “AI Humanizer” Edits a Human-Written Essay to Bypass AI Detection

    A Bloomberg test of a service called Hix Bypass found that a human-written essay that GPTZero incorrectly scored as 98.1% AI dropped to 5.3% AI after being altered by the service.

    The fear of being flagged by AI detectors has also forced students to rethink using popular online writing assistance tools.

    Bloomberg found that using Grammarly to “improve” an essay or “make it sound academic” can turn work that previously registered as 100% human-written into work flagged as 100% AI-written. Grammarly’s spell checker and grammar suggestions, however, have only a marginal impact on making documents appear more AI-written.

    Kaitlyn Abellar, a student at Florida SouthWestern State College, says she has uninstalled plug-ins for such programs as Grammarly from her computer.

    Another student, Stevens, said she was put on academic probation for a year after a disciplinary hearing determined she’d cheated. She insisted she wrote the assignment herself, using only Grammarly’s standard spell-checking and grammar features.

    “This was a well-intentioned student who had been using Grammarly in the responsible way and was flagged by a third-party technology saying you did wrong. We can’t help how Turnitin operates, like they understand that they have false flags.”

    To some educators and students alike, the current system feels unsustainable because of the strain it places on both sides of the teacher’s desk and because AI is here to stay.

    “Artificial intelligence is going to be a part of the future whether we like it or not,” says Adam Lloyd, an English professor at the University of Maryland. “Viewing AI as something we need to keep out of the classroom or discourage students from using is misguided.”

    Instead of using Turnitin, which is available to faculty at his school, Lloyd prefers to go with his intuition. “I know my students’ writing, and if I have a suspicion, I’ll have an open discussion,” he says, “not automatically accuse them.”

    Reply
  8. Tomi Engdahl says:

    Todd Bishop / GeekWire:
    A historical look at Microsoft’s work in AI, from research to real-world applications, recent progress, competition, challenges, and the future

    https://www.geekwire.com/2024/ai-dreams-microsoft-50-chapter-1/

    Reply
  9. Tomi Engdahl says:

    Todd Bishop / GeekWire:
    Microsoft unveils 10 new AI agents for its enterprise-focused Dynamics 365 apps covering sales, finance, and more, ahead of Salesforce’s Agentforce availability — AI agents are reigniting the competition between Microsoft and Salesforce. — Microsoft announced 10 new AI agents …
    https://www.geekwire.com/2024/microsoft-unveils-new-autonomous-ai-agents-in-advance-of-competing-salesforce-rollout/

    Reply
  10. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    xAI launches an API for “grok-beta” priced at $5 per million input tokens or $15 per million output tokens; it is unclear which AI model “grok-beta” might be

    xAI, Elon Musk’s AI startup, launches an API
    https://techcrunch.com/2024/10/21/xai-elon-musks-ai-startup-launches-an-api/
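    For a sense of scale at those listed prices, a quick cost estimate (the token counts below are made-up examples, not from the article):

        # Cost estimate at the listed grok-beta rates: $5 per million input
        # tokens and $15 per million output tokens.
        PRICE_IN_PER_M, PRICE_OUT_PER_M = 5.00, 15.00  # USD per million tokens

        def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
            return (input_tokens / 1_000_000 * PRICE_IN_PER_M
                    + output_tokens / 1_000_000 * PRICE_OUT_PER_M)

        # e.g. a 2,000-token prompt with a 500-token reply:
        print(f"${request_cost_usd(2_000, 500):.4f}")  # $0.0175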

    Reply
  11. Tomi Engdahl says:

    Simon Willison / Simon Willison’s Weblog:
    A look at some use cases of Anthropic’s Claude Artifacts, which lets users create interactive single-page apps via prompts

    Everything I built with Claude Artifacts this week
    https://simonwillison.net/2024/Oct/21/claude-artifacts/

    Reply
  12. Tomi Engdahl says:

    Brooks Barnes / New York Times:
    Blade Runner 2049’s producer sues Musk, Tesla, and WBD for allegedly using AI to create imagery close to the film to promote Cybercab, despite a denied request — Alcon Entertainment, the Hollywood company behind “Blade Runner 2049,” said it had denied a request to use images from the movie but that Mr. Musk did so anyway.

    https://www.nytimes.com/2024/10/21/business/media/elon-musk-alcon-entertainment-robotaxi-lawsuit.html?unlocked_article_code=1.T04.mTgX.TqvTpLs-wB9B&smid=url-share

    Reply
