3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

6,140 Comments

  1. Tomi Engdahl says:

    Joseph Cox / 404 Media:
    A look at Fusus, which links a town’s security cameras into one central hub and analyzes the feeds using AI, as its system is deployed across 2,400 US locations

    AI Cameras Took Over One Small American Town. Now They’re Everywhere
    https://www.404media.co/fusus-ai-cameras-took-over-town-america/

  2. Tomi Engdahl says:

    Did you order this pizza from Foodora? Many may not realize what is actually going on
    https://www.helsinginuutiset.fi/paikalliset/6311077#cxrecs_s

    Foodora has introduced AI-generated dish photos in Finland.

    Photos matter a great deal when restaurants compete for users’ favor in home-delivery apps. Delivery Hero, which owns Foodora, ran a study in Asia which found that photos increase restaurants’ sales by up to ten percent. In Finland the corresponding figure is eight percent, Foodora says.

    The AI-assisted dish photos Foodora has introduced in Finland are generated from existing dish photos together with a written description of the dish. The company does not disclose details of the technology it uses.

    – Internally we use several different AI platforms to reproduce a faithful image. To ensure quality, several pairs of eyes still go through the photos. For some images the AI may make mistakes, so every image must always be checked before we make it public, Foodora told Avec, a sister publication of Helsingin Uutiset.

    Thousands of AI images have already been added to the Foodora app during October, the company says. The main emphasis is nevertheless still on photos taken by humans.

    – Restaurants are offered a photographer on Foodora’s behalf. A restaurant can take the photographer if it wishes, or choose the AI images. We recommend genuine photos.

    AI-generated images are marked with a watermark in the Foodora app.

  3. Tomi Engdahl says:

    Wow. OpenAI leaked the new stuff they are about to announce. If this is accurate, then ChatGPT is going to be much more than a platform: more like an operating system for the creation of personalized apps.

    Time for all of us to get busy engaging with these new features.

    https://the-decoder.com/openais-massive-chatgpt-updates-leak-ahead-of-developer-conference/

  4. Tomi Engdahl says:

    Elon Musk says xAI is ready to release its first AI model to a ‘select group’
    https://www.businessinsider.com/elon-musk-xai-ready-launch-first-ai-model-2023-11?utm_source=facebook&utm_campaign=business-sf&utm_medium=social&fbclid=IwAR3Ts4jkD__xnsgeqVWSJ2J2CS9IWKa1lOa0wehkkXEBGgR6ZuI4EKgF3fw&r=US&IR=T

    Elon Musk says xAI is about to release its first AI model.
    He said the model would be available to a “select group” on Saturday.
    xAI was launched in July with the lofty goal of understanding the “true nature of the universe.”

    Elon Musk’s xAI is gearing up to release its first AI model.

    In a post on X, formerly Twitter, Musk announced his company was set to release the model to a “select group” on Saturday.

    “In some important respects, it is the best that currently exists,” he said of the tech. It’s unclear who is part of the “select group” to gain access to the model.

    The release is the first from Musk’s newly minted AI lab. Officially launched in July after months of speculation, the company has the lofty mission of understanding the “true nature of the universe.”

  5. Tomi Engdahl says:

    The first AI nation? A ship with 10,000 Nvidia H100 GPUs worth $500 million could become the first ever sovereign territory that relies entirely on artificial intelligence for its future
    By Ross Kelly
    The BlueSea Frontier Compute Cluster would be a roaming AI powerhouse at sea
    https://www.techradar.com/pro/the-first-ai-nation-a-ship-with-10000-nvidia-h100-gpus-worth-dollar500-million-could-become-the-first-ever-sovereign-territory-that-relies-entirely-on-artificial-intelligence-for-its-future?fbclid=IwAR1G0AoYfJMTH6rlO0LWADtGDY2c5M70SJRwXizDbVy2NVebGeHZmd72L3s

  6. Tomi Engdahl says:

    Here’s what we know about generative AI’s impact on white-collar work
    Some jobs are vulnerable to automation, but if you’re a ‘cyborg’ or a ‘centaur’ you can work better with robot help

    https://www.ft.com/content/b2928076-5c52-43e9-8872-08fda2aa2fcf?shareType=nongift&fbclid=IwAR2mPrREsR9XADOgUzPbVNz1uLIKr2CrI9BwFrxk2c9iiiRgAFEOMTTwpEs

    Since generative artificial intelligence burst on to the scene last November, the forecast for white-collar workers has been gloomy. OpenAI, the company behind ChatGPT, estimates that the jobs most at risk from the new wave of AI are those with the highest wages, and that someone in an occupation that pays a six-figure salary is about three times as exposed as someone making $30,000. McKinsey warns of the models’ ability to automate the application of expertise.

    I understand the temptation to wave away these warnings as mere projections. Thousands of years of history have lulled many of us into the false sense of security that automation is something that happens to other people’s jobs, never our own.

    Most strikingly, the study found that freelancers who previously had the highest earnings and completed the most jobs were no less likely to see their employment and earnings decline than other workers. If anything, they had worse outcomes. In other words, being more skilled was no shield against loss of work or earnings.

  7. Tomi Engdahl says:

    Neuromorphic computing research: Team proposes hardware that mimics the human brain
    https://techxplore.com/news/2023-11-neuromorphic-team-hardware-mimics-human.html

  8. Tomi Engdahl says:

    Will the AI bubble burst next year?
    Suvi Korhonen, 6 Nov 2023
    The gap between the hype and the real benefits will gradually become clear to the developers and users of AI applications. Legislation still lags behind.
    https://www.tivi.fi/uutiset/tekoalykupla-puhkeaa-ensi-vuonna/ebcb5146-fa01-4299-8536-848801a996cd

  9. Tomi Engdahl says:

    And the AI Winner is Open-Source
    As per open-source platform Hugging Face, Vicuna and Meta’s Llama-2 have been downloaded over 4 million times
    https://analyticsindiamag.com/and-the-ai-winner-is-open-source/

  10. Tomi Engdahl says:

    DISNEY HAS NO COMMENT ON MICROSOFT’S AI GENERATING PICTURES OF MICKEY MOUSE DOING 9/11
    https://futurism.com/disney-microsoft-ai-mickey-mouse

  11. Tomi Engdahl says:

    Researchers from Meta and UNC-Chapel Hill Introduce Branch-Solve-Merge: A Revolutionary Program Enhancing Large Language Models’ Performance in Complex Language Tasks
    https://www.marktechpost.com/2023/10/31/researchers-from-meta-and-unc-chapel-hill-introduce-branch-solve-merge-a-revolutionary-program-enhancing-large-language-models-performance-in-complex-language-tasks/

  12. Tomi Engdahl says:

    AI LETS JOHNNY CASH COVER TAYLOR SWIFT FROM BEYOND THE GRAVE
    https://futurism.com/the-byte/ai-johnny-cash-taylor-swift

  13. Tomi Engdahl says:

    The future of AI hardware: Scientists unveil all-analog photoelectronic chip
    https://techxplore.com/news/2023-10-future-ai-hardware-scientists-unveil.html

  14. Tomi Engdahl says:

    Microsoft unveils ‘LeMa’: A revolutionary AI learning method mirroring human problem solving
    https://venturebeat.com/ai/microsoft-unveils-lema-a-revolutionary-ai-learning-method-mirroring-human-problem-solving/

  15. Tomi Engdahl says:

    ARTIFICIAL INTELLIGENCE
    Has AI Surpassed Human Creativity?
    A new study compares human versus AI creativity with surprising results.
    https://www.psychologytoday.com/intl/blog/the-future-brain/202309/has-ai-surpassed-human-creativity

  16. Tomi Engdahl says:

    Samsung unveils Galaxy AI: much more AI coming to phones next year, with calls translated into another language in real time, for example
    https://mobiili.fi/2023/11/09/samsung-julkisti-galaxy-ain-tuo-ensi-vuonna-puhelimiin-paljon-lisaa-tekoalya-esimerkiksi-puhelut-kaantyvat-reaaliajassa-toiselle-kielelle/

    Samsung has announced Galaxy AI, the umbrella under which it will introduce various AI features to its Galaxy smartphones starting in 2024. In practice, the AI push in Samsung phones is expected to begin with the Galaxy S24 phones due to be announced in early 2024.

    Samsung describes Galaxy AI as a comprehensive mobile AI experience, powered both by on-device AI developed by Samsung and by cloud-based AI provided by Samsung’s partners. According to Samsung, Galaxy AI brings universal intelligence to the phone in an unprecedented way, enabling barrier-free communication, simplified productivity and boundless creativity.

    As a concrete example of upcoming Galaxy AI features, Samsung has already shown off the AI Live Translate Call function, which, as the name suggests, enables phone calls through an AI translator.

  17. Tomi Engdahl says:

    This tool identifies ChatGPT users with up to 100% accuracy
    The tool is up to four times more accurate than earlier detectors.
    https://www.tekniikkatalous.fi/uutiset/tama-tyokalu-paljastaa-chatgptn-kayttajat-jopa-100-tarkkuudella/ed5017ea-536a-49e0-aecb-556802af4456

    Jori Virtanen
    Heather Desaire, a chemist at the University of Kansas, says her research group has developed an effective tool for detecting when a scientific article has been written with the help of the ChatGPT AI.

  18. Tomi Engdahl says:

    People are afraid that AI will take their jobs, while their jobs could be taken by a bash script or an awk one-liner.
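The quip is easy to make concrete: plenty of routine clerical work reduces to a one-liner. A minimal sketch, where the file name and column layout are made up for illustration:

```shell
# A routine "job": total the amounts in column 2 of a CSV sales log.
printf 'alice,10\nbob,32\n' > /tmp/sales.csv
awk -F, '{ sum += $2 } END { print sum }' /tmp/sales.csv   # prints 42
```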

  19. Tomi Engdahl says:

    But their boss will need to ask the AI for the one-liner.

  20. Tomi Engdahl says:

    Elon Musk Announces Grok, a ‘Rebellious’ AI With Few Guardrails
    xAI, Elon Musk’s new company, claims to have built a powerful language model with cutting-edge performance in just two months.
    https://www.wired.com/story/elon-musk-announces-grok-a-rebellious-ai-without-guardrails/?mbid=social_facebook&utm_medium=social&utm_brand=wired&utm_source=facebook&utm_social-type=owned&fbclid=IwAR2BHsbX1Fk-PGNWdO2yRg55MJsv1bBnuiJ2hU-iyclTQgeCVESJvwGO7xk

    LAST WEEK, ELON Musk flew to the UK to hype up the existential risk posed by artificial intelligence. A couple of days later, he announced that his latest company, xAI, had developed a powerful AI—one with fewer guardrails than the competition.

    https://twitter.com/elonmusk/status/1721027243571380324

  21. Tomi Engdahl says:

    Researchers from MIT and NVIDIA Developed Two Complementary Techniques that could Dramatically Boost the Speed and Performance of Demanding Machine Learning Tasks
    https://www.marktechpost.com/2023/11/12/researchers-from-mit-and-nvidia-developed-two-complementary-techniques-that-could-dramatically-boost-the-speed-and-performance-of-demanding-machine-learning-tasks/

  22. Tomi Engdahl says:

    OPENAI SIGNALS THAT IT’LL DESTROY STARTUPS USING ITS TECH TO BUILD PRODUCTS
    https://futurism.com/the-byte/openai-signals-destroy-startups-using-its-tech

  23. Tomi Engdahl says:

    DeepMind just dropped Lyria, a state-of-the-art generative music model. Last year, I said 2022 was the year of pixels and 2023 would be the year of soundwaves. We are making great progress here!

    The most impressive demo is converting humming to a full instrument suite. I think Lyria will unlock all the operators we are used to in image models: text-based editing, style transfer, in-painting (fill out tracks), out-painting (continue a track), super-resolution, etc.

    Lyria is deployed as an intuitive software tool for musicians, in partnership with YouTube. This is the right move: ship the model! With enough artists on board, Lyria could spin a data flywheel that learns from the artists’ feedback and editing signals.

    Transforming the future of music creation
    https://deepmind.google/discover/blog/transforming-the-future-of-music-creation/?fbclid=IwAR1UcL-O0sMqEQ5s2zeIG23GB5qgVgjNAr9jQ081OpVuQcqrLi1yLITs2HQ

  24. Tomi Engdahl says:

    Mike Wheatley / SiliconANGLE:
    Meta unveils new AI tools to edit images and generate videos from text instructions, based on its image generation model Emu announced in September — Artificial intelligence researchers from Meta Platforms Inc. said today they have made significant advances in AI-powered image and video generation.

    Meta announces new breakthroughs in AI image editing and video generation with Emu
    https://siliconangle.com/2023/11/16/meta-announces-new-breakthroughs-ai-image-editing-video-generation-emu/

    The Facebook and Instagram parent has developed new tools that enable more control over the image editing process via text instructions, and a new method for text-to-video generation. The new tools are based on Meta’s Expressive Media Universe or Emu, the company’s first foundational model for image generation.

    Emu was announced in September and today it’s being used in production, powering experiences such as Meta AI’s Imagine feature, which allows users to generate photorealistic images in Messenger. In a blog post, Meta’s AI researchers explained that AI image generation is often a step-by-step process: the user tries a prompt, the generated picture isn’t quite what they had in mind, and they are forced to keep tweaking the prompt until the image is closer to what they imagined.

    Emu Edit for image editing

    What Meta wants to do is eliminate this process and give users more precise control, and that’s what its new Emu Edit tool is all about. It offers a novel approach to image manipulation in which the user simply inputs text-based instructions. It can perform local and global edits: adding or removing backgrounds, color and geometry transformations, object detection, segmentation and many other editing tasks.

    “Current methods often lean toward either over-modifying or under-performing on various editing tasks,” the researchers wrote. “We argue that the primary objective shouldn’t just be about producing a ‘believable’ image. Instead, the model should focus on precisely altering only the pixels relevant to the edit request.”

    To that end, Emu Edit has been designed to follow the user’s instructions precisely to ensure that pixels unrelated to the request are untouched by the edit made. As an example, if a user wants to add the text “Aloha!” to a picture of a baseball cap, the cap itself should not be altered.
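Meta has not published Emu Edit's internals in this excerpt, but the stated objective, altering only the pixels relevant to the edit request, can be illustrated with a simple mask-based composite. Everything below (the function name, the toy arrays) is illustrative, not Meta's API:

```python
import numpy as np

def masked_edit(image: np.ndarray, edited: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Composite an edit into an image, touching only the masked region.

    image  - original H x W x 3 array
    edited - candidate edit of the same shape (e.g. a model's output)
    mask   - boolean H x W array, True where the edit request applies
    """
    out = image.copy()
    out[mask] = edited[mask]   # pixels outside the mask stay bit-identical
    return out

# Toy check: edit the top-left quadrant only.
img = np.zeros((4, 4, 3))
edit = np.ones((4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
result = masked_edit(img, edit, mask)
print(result[0, 0, 0], result[3, 3, 0])  # 1.0 0.0
```

In the cap example from the article, the mask would cover only the region where "Aloha!" is to appear; every other pixel of the cap is returned unchanged.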

    Emu Video for video generation

    Meta’s AI team has also been focused on enhancing video generation. The researchers explained that using generative AI to create videos is similar to image generation; it just involves bringing those images to life by adding movement to the picture.

    The Emu Video tool leverages the Emu model and provides a simple method for text-to-video generation that’s based on diffusion models. Meta said the tool can respond to various inputs, including text only, image only or both together.

    The video generation process is split into a couple of steps, the first being to create an image conditioned by a text prompt, before creating a video based on that image and another text prompt. According to the team, this “factorized” approach offers an extremely efficient way to train video generation models.

    “We show that factorized video generation can be implemented via a single diffusion model,” the researchers wrote. “We present critical design decisions, like adjusting noise schedules for video diffusion, and multi-stage training that allows us to directly generate higher-resolution videos.”
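The factorized two-step pipeline described above can be sketched as follows. The generator functions are hypothetical stand-ins (toy random-array producers), not Meta's diffusion models; only the structure, text to image, then image plus text to video, reflects the description:

```python
import numpy as np

def generate_image(prompt: str, size: int = 8) -> np.ndarray:
    """Stand-in for step 1: a text-conditioned image model."""
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return rng.random((size, size, 3))

def generate_video(image: np.ndarray, prompt: str, n_frames: int = 4) -> np.ndarray:
    """Stand-in for step 2: a video model conditioned on the image and the text."""
    rng = np.random.default_rng(abs(hash(prompt + " video")) % 2**32)
    # Small cumulative perturbations of the first frame imitate "adding movement".
    drift = rng.normal(scale=0.01, size=(n_frames,) + image.shape)
    return np.clip(image[None, ...] + np.cumsum(drift, axis=0), 0.0, 1.0)

def text_to_video(prompt: str) -> np.ndarray:
    first_frame = generate_image(prompt)        # step 1: text -> image
    return generate_video(first_frame, prompt)  # step 2: (image, text) -> video

clip = text_to_video("a dog surfing a wave")
print(clip.shape)  # (4, 8, 8, 3): frames, height, width, channels
```

The design point the researchers make is that both conditioning steps can live in a single diffusion model; the factorization is in the conditioning, not necessarily in having two separate networks.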

    https://siliconangle.com/2023/09/27/meta-rolls-ai-experiences-across-apps-devices/

