3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,940 Comments

  1. Tomi Engdahl says:

    The EU’s AI Act divides AI systems into three classes: low risk, high risk, and prohibited. The low-risk category includes, for example, chatbots. Prohibited systems include those that use subliminal techniques to influence individuals. Also banned are AI systems that exploit vulnerable groups, systems used by public authorities for social scoring, and systems that use real-time biometric data for law enforcement in public places.

    AI systems in the high-risk category are tied to the safety of certain physical products, but the category can also cover services and functions important to citizens. Among other things, the proposal lists services that offer places in education or advertise employment, as well as services that give citizens access to public support and grants. Even systems that score creditworthiness, and systems used to prioritize responses in the event of a fire or accident, can count as high-risk.

    Companies that violate the AI Act’s provisions face fines. A fine for breaching the act is set either as a percentage of the offending company’s global annual turnover for the preceding financial year or as a predetermined amount, whichever is higher. SMEs and startups face proportionally smaller administrative fines.
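
    As a rough arithmetic illustration of that “whichever is higher” rule, here is a minimal Python sketch. The 7% and EUR 35 million ceilings are assumptions for illustration (commonly cited caps for prohibited-practice violations); the actual amounts depend on the type of violation.

    ```python
    def ai_act_max_fine(global_turnover_eur: float,
                        pct_cap: float = 0.07,         # assumed cap: 7% of global turnover
                        fixed_cap_eur: float = 35e6):  # assumed cap: EUR 35 million
        """Fine ceiling: a share of the previous year's global turnover
        or a fixed amount, whichever is higher."""
        return max(pct_cap * global_turnover_eur, fixed_cap_eur)

    # EUR 1B turnover: the 7% share (EUR 70M) exceeds the fixed EUR 35M cap.
    print(ai_act_max_fine(1_000_000_000))  # 70000000.0
    ```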

    https://etn.fi/index.php/13-news/16244-eurooppa-neuvosto-hyvaeksyi-tekoaelylain

    Reply
  2. Tomi Engdahl says:

    Machine learning method generates circuit synthesis for quantum computing
    https://techxplore.com/news/2024-05-machine-method-generates-circuit-synthesis.html#google_vignette

    Researchers from the University of Innsbruck have unveiled a novel method to prepare quantum operations on a given quantum computer, using a machine learning generative model to find the appropriate sequence of quantum gates to execute a quantum operation.

    Reply
  3. Tomi Engdahl says:

    Nitasha Tiku / Washington Post:
    Docs and sources: OpenAI didn’t use Scarlett Johansson’s voice for ChatGPT’s Sky and hired another actress in June, months before Sam Altman contacted Johansson — A different actress was hired to provide the voice for ChatGPT’s ‘Sky,’ according to documents and recordings shared with the Washington Post.

    OpenAI didn’t copy Scarlett Johansson’s voice for ChatGPT, records show
    https://www.washingtonpost.com/technology/2024/05/22/openai-scarlett-johansson-chatgpt-ai-voice/?pwapi_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZWFzb24iOiJnaWZ0IiwibmJmIjoxNzE2MzUwNDAwLCJpc3MiOiJzdWJzY3JpcHRpb25zIiwiZXhwIjoxNzE3NzMyNzk5LCJpYXQiOjE3MTYzNTA0MDAsImp0aSI6ImU2NWY2NTJmLTA2YzktNGFiOS05ZjUwLTNkZTBmZTYzZThjNSIsInVybCI6Imh0dHBzOi8vd3d3Lndhc2hpbmd0b25wb3N0LmNvbS90ZWNobm9sb2d5LzIwMjQvMDUvMjIvb3BlbmFpLXNjYXJsZXR0LWpvaGFuc3Nvbi1jaGF0Z3B0LWFpLXZvaWNlLyJ9.f-ocMS2qhjTBNAcnpWT0wbIwizUXdK4TlTLd7_2vL_E
    A different actress was hired to provide the voice for ChatGPT’s “Sky,” according to documents and recordings shared with the Washington Post.

    Reply
  4. Tomi Engdahl says:

    Tom Warren / The Verge:
    An interview with Microsoft Windows and Devices head Pavan Davuluri on his vision, Copilot AI, Windows Copilot Runtime, streaming Windows to devices, and more — Pavan Davuluri hasn’t even been the head of Windows for two months, but he’s already been tasked with announcing Microsoft’s transition …

    Microsoft’s new Windows chief on the future of the OS, Surface, and those annoying ads
    / Pavan Davuluri reflects on his vision for Windows, Surface hardware experimentation, and more.
    https://www.theverge.com/24162953/microsoft-pavan-davuluri-windows-surface-interview

    Reply
  5. Tomi Engdahl says:

    Emilia David / The Verge:
    Truecaller partners with Microsoft’s Azure AI Speech to let its AI Assistant users create an AI version of their voice based on a recorded clip to answer calls — Caller ID company Truecaller will let users create an AI version of their voice to answer calls.

    Truecaller and Microsoft will let users make an AI voice to answer calls
    / Truecaller partnered with Microsoft’s Azure AI Speech for users of its AI assistant.
    https://www.theverge.com/2024/5/22/24162753/truecaller-ai-microsoft-azure-voice-assistant

    Reply
  6. Tomi Engdahl says:

    Financial Times:
    Meta’s Yann LeCun says LLMs won’t reach human intelligence and instead FAIR is working on a “world modeling” vision, to create AI that can develop common sense — Yann LeCun argues current AI methods are flawed as he pushes for ‘world modelling’ vision for superintelligence

    Meta AI chief says large language models will not reach human intelligence
    Yann LeCun argues current AI methods are flawed as he pushes for ‘world modelling’ vision for superintelligence
    https://www.ft.com/content/23fab126-f1d3-4add-a457-207a25730ad9

    Meta’s artificial intelligence chief said the large language models that power generative AI products such as ChatGPT would never achieve the ability to reason and plan like humans, as he focused instead on a radical alternative approach to create “superintelligence” in machines.

    Yann LeCun, chief AI scientist at the social media giant that owns Facebook and Instagram, said LLMs had “very limited understanding of logic . . . do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan . . . hierarchically”.

    In an interview with the Financial Times, he argued against relying on advancing LLMs in the quest to make human-level intelligence, as these models can only answer prompts accurately if they have been fed the right training data and are, therefore, “intrinsically unsafe”.

    Instead, he is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence, although he said this vision could take 10 years to achieve.

    Meta has been pouring billions of dollars into developing its own LLMs as generative AI has exploded, aiming to catch up with rival tech groups, including Microsoft-backed OpenAI and Alphabet’s Google.

    Reply
  7. Tomi Engdahl says:

    Wall Street Journal:
    News Corp and OpenAI announce a multiyear agreement to bring News Corp’s news content to OpenAI; sources say the deal could be worth $250M+ over five years

    Business
    Media

    OpenAI, WSJ Owner News Corp Strike Content Deal Valued at Over $250 Million
    ‘The pact acknowledges that there is a premium for premium journalism,’ News Corp CEO Robert Thomson tells employees
    https://www.wsj.com/business/media/openai-news-corp-strike-deal-23f186ba?st=4hpdl1olm5kmdil&reflink=desktopwebshare_permalink

    Reply
  8. Tomi Engdahl says:

    GenAI will have a bigger impact than the internet or the smartphone
    https://etn.fi/index.php/13-news/16246-genai-vaikuttaa-enemmaen-kuin-internet-tai-aelypuhelin

    Global IT services company Tata Consultancy Services has published a research report probing how companies view AI. According to the report, 86 percent of business leaders have already deployed AI to grow their company’s existing revenue streams and to create new ones. More than half of Nordic business leaders believe AI will affect business in the future as much as or more than the internet (52%) or smartphones (55%) did.

    The report gives a comprehensive picture of AI adoption in companies and its effects on their business. It is based on a survey of nearly 1,300 CEOs and other senior executives across 12 industries and 24 countries, including Finland, Sweden, Norway and Denmark.

    According to the report, as many as 72 percent of Nordic executives (65% globally) do not believe AI will be decisive for their company’s competitiveness. While AI’s effects on the workforce continue to stir debate, the majority of the executives surveyed say that employees and human creativity will remain an important competitive factor for their companies. Still, 57 percent of the Nordic business leaders surveyed believe their employees will have to start using generative AI in their work within the next three years.

    Reply
  9. Tomi Engdahl says:

    AI seen as more beneficial than the internet or mobile phones
    https://www.uusiteknologia.fi/2024/05/23/tekoalysta-internetia-ja-kannykoita-enemman-hyotya/

    According to IT company Tata, Nordic business leaders believe AI will affect business more than anything before it. More than half of them expect AI to have at least as large an impact as the arrival of the internet or smartphones once did, or larger.

    India-based IT services company Tata Consultancy Services (TCS) has published its latest AI for Business research report on AI’s coming impact. According to the new study, 86 percent of business leaders have already deployed AI to grow existing revenue streams and create new ones. In addition, 69 percent of companies focus their use of AI more on driving innovation and growing revenue than on improving productivity and optimizing costs.

    Most business leaders believe AI will extend and improve employees’ skills, letting them focus on higher-value tasks that demand creativity and strategic thinking. Still, 31 percent of Nordic business leaders say they want to see how other companies and industries put AI to use before deciding on adoption themselves.

    “Companies do understand, however, that developing AI solutions is not easy and that building a business that makes advanced use of AI takes time,” says TCS chief technology officer Harrick Vin. According to the study, 57 percent of business leaders in the Nordics (45% globally) believe their companies must start using generative AI in their work within the next three years.

    Reply
  10. Tomi Engdahl says:

    Lucas Shaw / Bloomberg:
    Sources: Alphabet and Meta have talked to Hollywood studios about licensing content for AI video generation tools; Disney and Netflix aren’t willing to license — Studios seek to harness AI’s promise without losing control — Warner weighs licensing content; Disney, Netflix say no

    https://www.bloomberg.com/news/articles/2024-05-23/alphabet-meta-offer-millions-to-partner-with-hollywood-on-ai

    Reply
  11. Tomi Engdahl says:

    Fanny Potkin / Reuters:
    Sources: Nvidia cuts price of its China-specific H20 chip, selling it below Huawei’s Ascend 910B, underscoring the challenges Nvidia faces amid US sanctions — Nvidia’s (NVDA.O) most advanced AI chip it developed for the China market has got off to a weak start, with abundant supply forcing …

    https://www.reuters.com/technology/nvidia-cuts-china-prices-huawei-chip-fight-sources-say-2024-05-24/

    Reply
  12. Tomi Engdahl says:

    Kalley Huang / The Information:
    Sources: Meta is considering charging users for a more advanced version of Meta AI and is developing AI agents that can complete tasks without human supervision — Meta Platforms is considering charging users for a more advanced version of its artificial intelligence-powered assistant …

    https://www.theinformation.com/articles/meta-is-working-on-a-paid-version-of-its-ai-assistant

    Reply
  13. Tomi Engdahl says:

    Asa Fitch / Wall Street Journal:
    As Nvidia’s business booms, a look at some potential issues: rivals and key customers releasing their own AI chips, startups struggling to monetize AI, and more

    https://www.wsj.com/tech/ai/nvidias-business-is-booming-heres-what-could-slow-it-down-fd5ffa14?st=mgh7tc8mdth9tl7&reflink=desktopwebshare_permalink

    Reply
  14. Tomi Engdahl says:

    Beware – Your Customer Chatbot is Almost Certainly Insecure: Report

    As chatbots become more adventurous, the dangers will increase.

    https://www.securityweek.com/beware-your-customer-chatbot-is-almost-certainly-insecure-report/

    Reply
  15. Tomi Engdahl says:

    US Intelligence Agencies’ Embrace of Generative AI Is at Once Wary and Urgent

    U.S. intelligence agencies are scrambling to embrace the AI revolution, believing they’ll be smothered by exponential data growth as sensor-generated surveillance tech further blankets the planet.

    https://www.securityweek.com/us-intelligence-agencies-embrace-of-generative-ai-is-at-once-wary-and-urgent/

    Reply
  16. Tomi Engdahl says:

    Why We Need to Get a Handle on AI

    It will be interesting to see how AI continues to evolve and how it is used by defenders as they attempt to leapfrog attackers and protect the organization against new forms of AI attacks.

    https://www.securityweek.com/why-we-need-to-get-a-handle-on-ai/

    There has been a lot of talk about AI recently, debating its opportunities and potential risks. Today, AI can be trained on images and videos of real customers or executives to produce audio and video clips impersonating them. These have the potential to fool security systems; according to a report by identity verification platform Sumsub (PDF), the number of “deepfake” incidents in the financial technology sector alone increased by 700% in 2023, year on year.

    https://www.airisksummit.com/

    Reply
  18. Tomi Engdahl says:

    Study Finds That 52 Percent of ChatGPT Answers to Programming Questions Are Wrong
    https://futurism.com/the-byte/study-chatgpt-answers-wrong?fbclid=IwZXh0bgNhZW0CMTEAAR0XrL7KMRxxdK69vpZgSEGYvn77cJ5M49YSdfx2A2F6hvTVTrhgyodsXik_aem_ZmFrZWR1bW15MTZieXRlcw

    In recent years, computer programmers have flocked to chatbots like OpenAI’s ChatGPT to help them code, dealing a blow to places like Stack Overflow, which had to lay off nearly 30 percent of its staff last year.

    The only problem? A team of researchers from Purdue University presented research this month at the Computer-Human Interaction conference that shows that 52 percent of programming answers generated by ChatGPT are incorrect.

    That’s a staggeringly large proportion for a program that people are relying on to be accurate and precise, underlining what other end users like writers and teachers are experiencing: AI platforms like ChatGPT often hallucinate totally incorrect answers out of thin air.

    “We found that 52 percent of ChatGPT answers contain misinformation, 77 percent of the answers are more verbose than human answers, and 78 percent of the answers suffer from different degrees of inconsistency to human answers,” they wrote.

    What’s especially troubling is that many human programmers seem to prefer the ChatGPT answers. The Purdue researchers polled 12 programmers — admittedly a small sample size — and found they preferred the ChatGPT answers 35 percent of the time and failed to catch the AI-generated mistakes 39 percent of the time.

    Why is this happening? It might just be that ChatGPT is more polite than people online.

    The study demonstrates that ChatGPT still has major flaws — but that’s cold comfort to people laid off from Stack Overflow or programmers who have to fix AI-generated mistakes in code.

    Reply
  19. Tomi Engdahl says:

    I’ve found that asking it for a large task always gives garbage code. Break your request into elemental building blocks and then bolt them together yourself.

    ChatGPT/Copilot is decent (sometimes great) as a research tool, but the code it generates is generally garbage. I’ve never successfully had Copilot contribute lines that didn’t need editing or correcting. Barely worth my time, mostly.

    50% wrong sounds spot on.

    It doesn’t matter whether AI produces quality code. Only that HR departments and executives believe it does, or at least believe it does so more cheaply.

    It’s true half the time it gives you the wrong answer, half the time it gives you the right one. All you have to do is ask it the same stuff till you get consensus

    Yep. It gets lots wrong. But what it does 98% of the time is get me into the right ballpark so I can find the right places to look to finish what I’m after.

    Look at the code it was trained on … not surprised ;)

    Sometimes you just have to ask “is that correct?” Or “critique your answer” or “improve this code” as a blind second prompt, and then the second response is much better. Asking 3 separate GPTs and picking the best response is also effective. There are so many ways to get better answers than just the first one.

    Victor Andersen: Amazon’s Bedrock service does this; they cross-match 4 unique LLMs to validate results.
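
    A minimal sketch of the two tricks described above (a blind “critique your answer” second pass, and asking the same question several times and keeping the consensus), using the openai Python client. The model name and the exact-string vote are illustrative assumptions; real free-form answers would need normalization or a judge before voting.

    ```python
    from collections import Counter
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt: str, model: str = "gpt-4o") -> str:
        """Single-turn completion."""
        resp = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        return resp.choices[0].message.content

    def ask_with_critique(prompt: str) -> str:
        """Draft an answer, then feed it back with a blind self-critique prompt."""
        draft = ask(prompt)
        return ask(f"Question: {prompt}\n\nProposed answer:\n{draft}\n\n"
                   "Critique this answer and return a corrected version.")

    def ask_with_consensus(prompt: str, n: int = 3) -> str:
        """Ask the same question n times, keep the most common answer (naive vote)."""
        return Counter(ask(prompt) for _ in range(n)).most_common(1)[0][0]
    ```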

    I use it often when I know what needs to be done but I don’t want to think about it because I have bigger things to do. Experience is still necessary though, because yes, it’ll sometimes spit out crap that doesn’t work (and that’s not necessarily because it can’t answer – it can also be posed a question it doesn’t completely get). But I think it’s the best thing going right now for that kind of thing.

    Larry Santello agreed. It’s fairly useful; I use it as sort of a “writers room” – I know what it’s going to produce overall is crap and sometimes inaccurate, but it gives me some good ideas and usually is able to fix something I’ve already produced. Just don’t ask it math problems (which just baffles me – that seems like the first thing it would get correct). For example, I was making hot pepper jam but didn’t have the exact quantities asked for in my written recipe. I asked it to adjust based on what I had – let’s just say it was wrong immediately, and a quick “are you sure about that?” prompted it to correct. Cracks me up. Still a useful tool so long as you consider it is bound to make mistakes. Just can’t be your only tool.

    Hehehe… I tried a few Bash script requests and it gave me some pretty odd/convoluted answers that ultimately were broken. For simple stuff, sure… But why? More complex and it will lead you down the garden path.

    Everybody who works as a dev knows the struggle with client specs.

    It’s a matter of training. Yes, it can create code, but you still need someone to fix it when it spits out a function incorrectly.

    It definitely won’t give you the best answer. But all it can draw from are the outdated books and trash posts on reddit that it was built from.

    That’s where actual programming knowledge comes into play. I recently asked ChatGPT to “show me how to use letsencrypt with docker and nginx”. It made some incorrect assumptions, but I was able to work around them. I actually improved my nginx containerization in the process.

    I tried it and realised it was just making things up. It was telling me to use libs that don’t exist, and when I was asking it where to get those libs, it was telling me to apt install them, and the package it was suggesting didn’t even exist.

    When I was testing Google’s AI about a year ago, I instantly knew an answer was wrong (it made up a vanilla JavaScript function). I wished the function existed and had to find real documentation to verify my idea and ended up creating the function AI claimed existed.

    https://www.facebook.com/share/p/Vw8nzUxHMnrZVZ1J/

    Reply
  20. Tomi Engdahl says:

    “I think one of the most unfortunate names is ‘artificial intelligence.’” https://trib.al/OnMgEC0

    Reply
  21. Tomi Engdahl says:

    Elia: An Open Source Terminal UI for Interacting with LLMs
    https://www.marktechpost.com/2024/05/25/elia-an-open-source-terminal-ui-for-interacting-with-llms/

    People who work with large language models often need a quick and efficient way to interact with these powerful tools. However, many existing methods require switching between applications or dealing with slow, cumbersome interfaces.

    Some solutions are available, but they come with their own set of limitations. Web-based interfaces are common but can be slow and may not support all the models users need. Additionally, some applications are overly complicated, requiring extensive setup and configuration before being used effectively. This leaves users searching for a simpler, more streamlined way to work with their preferred language models.

    A new application, Elia, has been developed to address this issue. It offers a fast and easy-to-use terminal-based solution. This application allows users to chat with various large language models directly from their terminal. It supports popular proprietary models as well as local models, providing a flexible and efficient way to interact with AI.

    https://github.com/darrenburns/elia

    A snappy, keyboard-centric terminal user interface for interacting with large language models. Chat with ChatGPT, Claude, Llama 3, Phi 3, Mistral, Gemma and more.

    Reply
  22. Tomi Engdahl says:

    Attempts to Regulate AI’s Hidden Hand in Americans’ Lives Flounder in US Statehouses

    Only one of seven bills aimed at preventing AI’s penchant to discriminate when making consequential decisions — including who gets hired, money for a home or medical care — has passed.

    https://www.securityweek.com/attempts-to-regulate-ais-hidden-hand-in-americans-lives-flounder-in-us-statehouses/

    Reply
  23. Tomi Engdahl says:

    Mark Gurman / Bloomberg:
    A closer look at Apple’s AI strategy: Project Greymatter, local and cloud LLM data processing, Siri, approach to the chatbot partnership with OpenAI, and more — Though Apple’s first set of modern AI features won’t be as impressive as rival offerings, the company is betting that its massive customer base can give it an edge.

    Apple Bets That Its Giant User Base Will Help It Win in AI
    https://www.bloomberg.com/news/newsletters/2024-05-26/apple-ios-18-macos-15-ai-features-project-greymatter-privacy-openai-deal-lwni63s3

    Though Apple’s first set of modern AI features won’t be as impressive as rival offerings, the company is betting that its massive customer base can give it an edge.

    At Apple Inc.’s developers conference next month, the company will unveil a different approach to artificial intelligence, focusing on tools that ordinary consumers can use in their daily lives. The idea is to appeal to a user’s practical side — and leave some of the more whiz-bang features to other companies.

    Apple is in a challenging position. It needs to convince consumers and investors that it’s doing exciting things in AI. But the company is following major AI announcements from Microsoft Corp., Alphabet Inc.’s Google and OpenAI, which have stolen the spotlight.

    Apple is preparing to spend a good portion of its Worldwide Developers Conference laying out its AI-related features. At the heart of the new strategy is Project Greymatter — a set of AI tools that the company will integrate into core apps like Safari, Photos and Notes. The push also includes operating system features such as enhanced notifications.

    The system will work as follows: Much of the processing for less computing-intensive AI features will run entirely on the device. But if a feature requires more horsepower, the work will be pushed to the cloud.
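
    In outline, the reported system is a simple compute router. Here is a hypothetical sketch of the idea; none of these names are Apple APIs, and the budget threshold is invented.

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class AITask:
        name: str
        estimated_flops: float  # rough compute estimate for the request

    ON_DEVICE_BUDGET = 5e11  # hypothetical per-request budget for local silicon

    def run_locally(task: AITask) -> str:
        return f"{task.name}: handled on device (small model)"

    def run_in_cloud(task: AITask) -> str:
        return f"{task.name}: escalated to cloud (large model)"

    def route(task: AITask,
              local: Callable[[AITask], str] = run_locally,
              cloud: Callable[[AITask], str] = run_in_cloud) -> str:
        """Send light requests to the on-device model, heavy ones to the cloud."""
        return local(task) if task.estimated_flops <= ON_DEVICE_BUDGET else cloud(task)

    print(route(AITask("summarize notification", 1e10)))   # stays on device
    print(route(AITask("long-document rewrite", 5e12)))    # pushed to the cloud
    ```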

    Reply
  24. Tomi Engdahl says:

    Michael Nuñez / VentureBeat:
    Researchers find that GPT-4 outperforms human analysts in predicting future corporate earnings growth even when provided only with financial statements — Researchers from the University of Chicago have demonstrated that large language models (LLMs) can conduct financial statement analysis …

    The future of financial analysis: How GPT-4 is disrupting the industry, according to new research
    https://venturebeat.com/ai/the-future-of-financial-analysis-how-gpt-4-is-disrupting-the-industry-according-to-new-research/

    Researchers from the University of Chicago have demonstrated that large language models (LLMs) can conduct financial statement analysis with accuracy rivaling and even surpassing that of professional analysts. The findings, published in a working paper titled “Financial Statement Analysis with Large Language Models,” could have major implications for the future of financial analysis and decision-making.

    The researchers tested the performance of GPT-4, a state-of-the-art LLM developed by OpenAI, on the task of analyzing corporate financial statements to predict future earnings growth. Remarkably, even when provided only with standardized, anonymized balance sheets and income statements devoid of any textual context, GPT-4 was able to outperform human analysts.
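
    The shape of such an experiment can be approximated with a plain chain-of-thought prompt over an anonymized statement. This is a hedged sketch, not the authors’ code: the line items are invented, the model name is illustrative, and the paper fed the model several periods of data rather than the single one shown here.

    ```python
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    # Standardized, anonymized statement: no company name, no dates (values invented).
    statement = """Revenue: 100.0 | Cost of goods sold: 62.0 | SG&A: 18.0
    Operating income: 20.0 | Net income: 14.0
    Total assets: 180.0 | Total liabilities: 95.0 | Equity: 85.0"""

    prompt = (
        "You are a financial analyst. Based only on the standardized financial "
        "statement below, predict whether earnings will grow or shrink next period. "
        "Reason step by step about trends and ratios, then end with one word: "
        "'increase' or 'decrease'.\n\n" + statement
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative; the study evaluated GPT-4
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)
    ```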

    Reply
  25. Tomi Engdahl says:

    Wall Street Journal:
    An analysis of ChatGPT, Claude, Copilot, Gemini, and Perplexity’s responses to some real-life questions and everyday tasks: Perplexity ranked first overall — We tested OpenAI’s ChatGPT against Microsoft’s Copilot and Google’s Gemini, along with Perplexity and Anthropic’s Claude. Here’s how they ranked.

    The Great AI Challenge: We Test Five Top Bots on Useful, Everyday Skills
    OpenAI’s ChatGPT competes against Microsoft’s Copilot and Google’s Gemini, along with Perplexity and Anthropic’s Claude. Here’s how they rank.
    https://www.wsj.com/tech/personal-tech/ai-chatbots-chatgpt-gemini-copilot-perplexity-claude-f9e40d26?st=qr25rixxk1o73s6&reflink=desktopwebshare_permalink

    We have ChatGPT by OpenAI, celebrated for its versatility and ability to remember user preferences. (Wall Street Journal owner News Corp has a content-licensing partnership with OpenAI.) Anthropic’s Claude, from a socially conscious startup, is geared to be inoffensive. Microsoft’s Copilot leverages OpenAI’s technology and integrates with services like Bing and Microsoft 365. Google’s Gemini accesses the popular search engine for real-time responses. And Perplexity is a research-focused chatbot that cites sources with links and stays up to date.

    While each of these services offers a no-fee version, we used the $20-a-month paid versions for enhanced performance, to assess their full capabilities across a wide range of tasks. (We used the latest ChatGPT GPT-4o model and Gemini 1.5 Pro model in our testing.)

    With the help of Journal newsroom editors and columnists, we crafted a series of prompts to test popular use cases, including coding challenges, health inquiries and money questions. The same people judged the results without knowing which bot said what, rating them on accuracy, helpfulness and overall quality. We then ranked the bots in each category.

    Work writing
    Tone and detail matter in work-related writing. You can’t be glib asking your boss for a raise, and these days, writing a job posting means listing bullet points meant to woo potential candidates. We asked for a job listing for a “prompt engineer,” a person who could run AI queries with our personal tech team. (Sorry, folks, that job doesn’t exist…yet.)

    Creative writing
    One of the biggest surprises was the difference between work writing and creative writing. Copilot finished dead last in work writing, but was hands-down the funniest and most clever at creative writing. We asked for a poem about a poop on a log. We asked for a wedding toast featuring the Muppets. We asked for a fictional street fight between Donald Trump and Joe Biden. With Copilot, the jokes kept coming. Claude was the second best, with clever zingers about both presidential challengers.

    Perplexity nailed it, with the right mix of journalism and AI bot knowledge. Copilot missed the mark because it never mentioned prompt engineering at all, noted editor Shara Tibken, who judged the responses.
    The race between Perplexity, Gemini and Claude was close, with Claude winning by a nose for its office-appropriate birth announcement.

    Summarization
    For people just getting into generative-AI chatbots, summarization might be the best thing to try. It’s useful and unlikely to create unforeseen errors. Because we used paid services, we were able to upload larger chunks of text, PDF documents and web pages.

    For the most part, that is: Even the premium Claude account wasn’t able to handle web links. “Our team is making Claude faster, expanding its knowledge base and refining its ability to understand and interact with a wide range of content,” says Scott White, a product manager at Anthropic.

    Reply
  26. Tomi Engdahl says:

    Coding
    We also evaluated the bots on coding skill and speed. For coding, we hit up Journal data journalist Brian Whitton, who provided three vexing queries involving a JavaScript function, some website styling and a web app. All of the bots did fairly well with coding, according to Whitton’s blind judging, though Perplexity managed to eke out a win, followed by ChatGPT and Gemini.

    https://www.wsj.com/tech/personal-tech/ai-chatbots-chatgpt-gemini-copilot-perplexity-claude-f9e40d26?st=qr25rixxk1o73s6&reflink=desktopwebshare_permalink

    Overall results
    What did these Olympian challenges tell us? Each chatbot has unique strengths and weaknesses, making them all worth exploring. We saw few outright errors and “hallucinations,” where bots go off on unexpected tangents and completely make things up. The bots provided mostly helpful answers and avoided controversy.

    The biggest surprise? ChatGPT, despite its big update and massive fame, didn’t lead the pack. Instead, lesser-known Perplexity was our champ. “We optimize for conciseness,” says Dmitry Shevelenko, chief business officer at Perplexity AI. “We tuned our model for conciseness, which forces it to identify the most essential components.”

    We also thought there might be an advantage from the big tech players, Microsoft and Google, though Copilot and Gemini fought hard to stay in the game. Google declined to comment. Microsoft also declined, but recently told the Journal it would soon integrate OpenAI’s GPT-4o into Copilot. That could improve its performance.

    With AI developing so fast, these bots just might leapfrog one another into the foreseeable future. Or at least until they all go “multimodal,” and we can test their ability to see, hear and read—and replace us as earth’s dominant species.

    Reply
  27. Tomi Engdahl says:

    TIME:
    LLMs aren’t sentient; they lack the physiological states required for sensations like hunger and pain, and thus can’t have subjective experiences of such states — Artificial general intelligence (AGI) is the term used to describe an artificial agent that is at least as intelligent as a human …

    No, Today’s AI Isn’t Sentient. Here’s How We Know
    https://time.com/collection/time100-voices/6980134/ai-llm-not-sentient/

    Artificial general intelligence (AGI) is the term used to describe an artificial agent that is at least as intelligent as a human in all the many ways a human displays (or can display) intelligence. It’s what we used to call artificial intelligence, until we started creating programs and devices that were undeniably “intelligent,” but in limited domains—playing chess, translating language, vacuuming our living rooms.

    The felt need to add the “G” came from the proliferation of systems powered by AI, but focused on a single or very small number of tasks. Deep Blue, IBM’s impressive early chess playing program, could beat world champion Garry Kasparov, but would not have the sense to stop playing if the room burst into flames.

    One of the essential characteristics of general intelligence is “sentience,” the ability to have subjective experiences—to feel what it’s like, say, to experience hunger, to taste an apple, or to see red. Sentience is a crucial step on the road to general intelligence.

    Why some people believe AI has achieved sentience

    Over the past months, both of us have had robust debates and conversations with many colleagues in the field of AI, including some deep one-on-one conversations with some of the most prominent and pioneering AI scientists. The topic of whether AI has achieved sentience has been a prominent one. A small number of them believe strongly that it has. Here is the gist of their arguments by one of the most vocal proponents, quite representative of those in the “sentient AI” camp:

    “AI is sentient because it reports subjective experience. Subjective experience is the hallmark of consciousness. It is characterized by the claim of knowing what you know or experience. I believe that you, as a person, are conscious when you say ‘I have the subjective experience of feeling happy after a good meal.’ I, as a person, actually have no direct evidence of your subjective experience. But since you communicated that, I take it at face value that indeed you have the subjective experience and so are conscious.

    “Now, let’s apply the same ‘rule’ to LLMs. Just like any human, I don’t have access to an LLM’s internal states. But I can query its subjective experiences. I can ask ‘are you feeling hungry?’ It can actually tell me yes or no. Furthermore, it can also explicitly share with me its ‘subjective experiences,’ on almost anything, from seeing the color red, being happy after a meal, to having strong political views. Therefore, I have no reason to believe it’s not conscious or not aware of its own subjective experiences, just like I have no reason to believe that you are not conscious. My evidence is exactly the same in both cases.”

    Why they’re wrong

    While this sounds plausible at first glance, the argument is wrong. It is wrong because our evidence is not exactly the same in both cases. Not even close.

    When I conclude that you are experiencing hunger when you say “I’m hungry,” my conclusion is based on a large cluster of circumstances. First, is your report—the words that you speak—and perhaps some other behavioral evidence, like the grumbling in your stomach. Second, is the absence of contravening evidence, as there might be if you had just finished a five-course meal. Finally, and this is most important, is the fact that you have a physical body like mine, one that periodically needs food and drink, that gets cold when it’s cold and hot when it’s hot, and so forth.

    Now compare this to our evidence about an LLM. The only thing that is common is the report, the fact that the LLM can produce the string of syllables “I’m hungry.” But there the similarity ends. Indeed, the LLM doesn’t have a body and so is not even the kind of thing that can be hungry.

    If the LLM were to say, “I have a sharp pain in my left big toe,” would we conclude that it had a sharp pain in its left big toe? Of course not, it doesn’t have a left big toe! Just so, when it says that it is hungry, we can in fact be certain that it is not, since it doesn’t have the kind of physiology required for hunger.

    When humans experience hunger, they are sensing a collection of physiological states—low blood sugar, empty grumbling stomach, and so forth—that an LLM simply doesn’t have, any more than it has a mouth to put food in and a stomach to digest it. The idea that we should take it at its word when it says it is hungry is like saying we should take it at its word if it says it’s speaking to us from the dark side of the moon. We know it’s not, and the LLM’s assertion to the contrary does not change that fact.

    All sensations—hunger, feeling pain, seeing red, falling in love—are the result of physiological states that an LLM simply doesn’t have. Consequently we know that an LLM cannot have subjective experiences of those states. In other words, it cannot be sentient.

    An LLM is a mathematical model coded on silicon chips. It is not an embodied being like humans. It does not have a “life” that needs to eat, drink, reproduce, experience emotion, get sick, and eventually die.

    It is important to understand the profound difference between how humans generate sequences of words and how an LLM generates those same sequences. When I say “I am hungry,” I am reporting on my sensed physiological states. When an LLM generates the sequence “I am hungry,” it is simply generating the most probable completion of the sequence of words in its current prompt. It is doing exactly the same thing as when, with a different prompt, it generates “I am not hungry,” or with yet another prompt, “The moon is made of green cheese.” None of these are reports of its (nonexistent) physiological states. They are simply probabilistic completions.
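
    The “probabilistic completion” point is easy to make concrete with a toy next-token table (all probabilities invented for illustration): the “report” of hunger is just an argmax over a distribution.

    ```python
    # Toy next-token distributions, invented for illustration.
    NEXT = {
        ("I", "am"): {"hungry": 0.4, "not": 0.35, "tired": 0.25},
        ("am", "not"): {"hungry": 0.7, "tired": 0.3},
    }

    def complete(prefix: list[str], steps: int = 2) -> list[str]:
        """Greedy most-probable completion: no body, no blood sugar, just argmax."""
        words = list(prefix)
        for _ in range(steps):
            dist = NEXT.get(tuple(words[-2:]))
            if dist is None:
                break
            words.append(max(dist, key=dist.get))
        return words

    print(complete(["I", "am"]))         # ['I', 'am', 'hungry'] (a completion, not a report)
    print(complete(["I", "am", "not"]))  # ['I', 'am', 'not', 'hungry'] (same machinery)
    ```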

    We have not achieved sentient AI, and larger language models won’t get us there. We need a better understanding of how sentience emerges in embodied, biological systems if we want to recreate this phenomenon in AI systems. We are not going to stumble on sentience with the next iteration of ChatGPT.

    Reply
  28. Tomi Engdahl says:

    David Keohane / Financial Times:
    SoftBank’s outlay for AI investments has more than doubled to $8.9B in the 12 months since Masayoshi Son said the firm was ready to go on the “counteroffensive” — Japanese group says it will ‘step up’ artificial intelligence outlays without stretching finances

    SoftBank targets $9bn a year in AI investments while hunting bigger deals
    Japanese group says it will ‘step up’ artificial intelligence outlays without stretching finances
    https://www.ft.com/content/2245e6f1-c8fa-49a4-af8a-2871dd1161ec

    SoftBank is prepared to commit close to $9bn a year to artificial intelligence investments, even as the Japanese tech group holds back firepower for bigger deals aimed at accelerating what could be its most radical transformation to date.

    Founder Masayoshi Son has been vocal about his belief in AI and the need to reshape the company in the hunt for deals that can support the group’s crown jewel, UK-based chip designer Arm, which has soared in valuation since it went public last year.

    SoftBank’s outlay for investments and commitments has more than doubled to $8.9bn in the 12 months since Son said the company was ready to go on the “counteroffensive”. SoftBank said it was ready to maintain, or even exceed, the amount for the right mega-deal.

    “We will, in principle, be keeping the same kind of trend in terms of the pace of investment activities,” SoftBank’s chief financial officer Yoshimitsu Goto told the Financial Times. “From now on, we want to step up investments in AI companies.”

    Reply
  29. Tomi Engdahl says:

    Will Knight / Wired:
    How SLMs like Microsoft’s Phi-3, which can run locally on phones or PCs without big compromises, open up new AI use cases by being more responsive and private

    Pocket-Sized AI Models Could Unlock a New Era of Computing
    Research at Microsoft shows it’s possible to make AI models small enough to run on phones or laptops without major compromises to their smarts. The technique could open up new use cases for AI.
    https://www.wired.com/story/pocket-sized-ai-models-unlock-new-era-of-computing/

    Reply
  30. Tomi Engdahl says:

    Peter Kafka / Business Insider:
    Google says the vast majority of AI Overviews provide high-quality information and many of the viral examples have been uncommon queries or have been doctored

    Why Google is (probably) stuck giving out AI answers that may or may not be right
    https://www.businessinsider.com/google-ai-search-bad-answers-risk-peter-kafka-2024-5

    Google is giving users bad AI-generated answers. Again.
    In February, when this happened before, Google shelved the faulty AI product behind the results.
    But this time feels different — Google has basically committed to this idea as the future of the company.

    Step 1: Google rolls out a new AI-powered product.

    Step 2: Users quickly find the product’s flaws and point them out with social-media posts, which become news stories.

    Step 3: Google admits that its new AI-powered product is fundamentally flawed and puts it on ice.

    Yup, we’ve seen this drill before. Back in February, Google was shamed into shelving an image-generating feature for its AI chatbot.

    Now we are two steps into the same process: Google is widely rolling out its AI Overview feature, which replaces its usual answer to search queries — a list of links to sites where you might find the actual answer you want — with an AI-generated answer that tries to summarize the content on those sites. And people are finding examples of Google generating answers that are wrong and sometimes comically bad.

    Which is why my colleague Katie Notopoulos constructed and then ate a pizza made with glue.

    Here’s the formal version of that answer, via Google comms person Lara Levin:

    “The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback. We’re taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

    But like I’ve said. We’ve seen a version of this story before. What happens if people keep finding Bad Answers on Google and Google can’t whac-a-mole them fast enough? And, crucially, what if regular people, people who don’t spend time reading or talking about tech news, start to hear about Google’s Bad And Potentially Dangerous Answers?

    Because that would be a really, really big problem. Google does a lot of different things, but the reason it’s worth more than $2 trillion is still its two core products: search, and the ads that it generates alongside search results. And if people — normal people — lose confidence in Google as a search/answer machine …

    Well, that would be a real problem.

    Privately, Googlers are doubling down on the notion that these Bad Answers really are fringe problems. And that, unlike with its “woke Google” problem from a few months ago — where there really was a problem with the model Google was using to create images — that’s not the case here. Google never gets things 100% correct (they say even more quietly) because, in the end, it’s still just relying on what people publish on the internet. It’s just that some people are paying a lot more attention right now because there’s a new thing to pay attention to.

    I’m willing to believe that answer: I’ve been seeing Google’s AI answers in my search results for about a month, and they’re generally fine.

    But not every time.

    And the thing that’s very different between the old Google results and the new ones is the responsibility and authority Google is shouldering. In the past, Google was telling you somebody else could answer your question. Now Google is answering your question.

    It’s the difference between me handing you a map and me giving you directions that will send your car barreling over a cliff.

    You could argue, as my 15-year-old son does (we are weird people so we talk about this stuff at home), that Google shouldn’t be replacing its perfectly fine old-timey search results with AI-generated answers. If people wanted AI-generated answers, they’d go to ChatGPT, right?

    But of course, people going to ChatGPT is what Google is worried about. Which is why it’s making this major pivot — to disrupt itself before ChatGPT or other AI engines do.

    https://www.businessinsider.com/google-uses-ai-search-queries-links-answers-2024-4

    Reply
  31. Tomi Engdahl says:

    x.ai:
    Elon Musk’s xAI says it raised a $6B Series B at a pre-money valuation of $18B from investors including Valor Equity, Vy Capital, a16z, Sequoia, and others
    https://x.ai/blog/series-b

    Reply
  32. Tomi Engdahl says:

    Rita Liao / TechCrunch:
    Data.ai: of the top 20 education apps in the US App Store, five are AI agents that help with school assignments, the two most popular of which are Chinese-owned

    AI tutors are quietly changing how kids in the US study, and the leading apps are from China
    https://techcrunch.com/2024/05/25/ai-tutors-are-quietly-changing-how-kids-in-the-us-study-and-the-leading-apps-are-from-china/

    Evan, a high school sophomore from Houston, was stuck on a calculus problem. He pulled up Answer AI on his iPhone, snapped a photo of the problem from his Advanced Placement math textbook, and ran it through the homework app. Within a few seconds, Answer AI had generated an answer alongside a step-by-step process of solving the problem.

    A year ago, Evan would have been scouring long YouTube videos in hopes of tackling his homework challenges. He also had a private tutor, who cost $60 per hour. Now the arrival of AI bots is posing a threat to long-established tutoring franchises such as Kumon, the 66-year-old Japanese giant that has 1,500 locations and nearly 290,000 students across the U.S.

    “The tutor’s hourly cost is about the same as Answer AI’s whole year of subscription,” Evan told me. “So I stopped doing a lot of [in-person] tutoring.”

    Answer AI is among a handful of popular apps that are leveraging the advent of ChatGPT and other large language models to help students with everything from writing history papers to solving physics problems. Of the top 20 education apps in the U.S. App Store, five are AI agents that help students with their school assignments, including Answer AI, according to data from Data.ai on May 21.

    There is a perennial debate on the role AI should play in education. The advantages of AI tutors are obvious: They make access to after-school tutoring much more equitable. The $60-per-hour tutoring in Houston is already much more affordable than services in more affluent and academically cutthroat regions, like the Bay Area, which can be three times as expensive, Answer AI’s founder Ric Zhou told me.

    Zhou, a serial entrepreneur, also suggested that AI enables more personalized teaching, which is hard to come by in a classroom of 20 students.

    For now, AI tutors are mostly constrained to text-based interactions, but very soon, they will literally be able to speak to students in ways that optimize for each student’s learning style, whether that means a more empathetic, humorous, or creative style. OpenAI’s GPT-4o already demonstrated that an AI assistant that can generate voice responses in a range of emotive styles is within reach.

    When AI doesn’t help you learn

    The vision of equitable, AI-powered learning isn’t fully realized yet. Like other apps that forward API calls to LLMs, AI tutors suffer from hallucinations and can spit out wrong answers. Answer AI tries to improve its accuracy through retrieval-augmented generation (RAG), a method that grounds an LLM’s answers in retrieved domain knowledge — in this case, a sea of problem sets. But it’s still making more mistakes than the last-generation homework apps that match user queries against an existing library of practice problems, as those apps don’t try to answer questions they don’t already know.
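
    In outline, RAG here means: index the problem sets, retrieve the most similar solved problems at query time, and prepend them to the prompt. Below is a generic toy sketch with a bag-of-words retriever, not Answer AI’s implementation; the corpus and prompt are invented.

    ```python
    import math
    from collections import Counter

    # Toy stand-in for a "sea of problem sets": solved problems keyed by question.
    CORPUS = {
        "derivative of x^2": "d/dx x^2 = 2x (power rule).",
        "integral of 1/x": "The integral of 1/x is ln|x| + C.",
        "limit of sin(x)/x at 0": "The limit is 1.",
    }

    def bow(text: str) -> Counter:
        """Bag-of-words vector."""
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query: str, k: int = 1) -> list[str]:
        """Pick the k solved problems most similar to the query."""
        q = bow(query)
        ranked = sorted(CORPUS, key=lambda doc: cosine(q, bow(doc)), reverse=True)
        return [f"Q: {doc}\nA: {CORPUS[doc]}" for doc in ranked[:k]]

    query = "what is the derivative of x^2 ?"
    context = "\n\n".join(retrieve(query))
    prompt = f"Use these worked examples:\n\n{context}\n\nNow solve: {query}"
    print(prompt)  # this augmented prompt would then be sent to the LLM
    ```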

    Some students are aware of AI’s limitations.

    For now, educators aren’t sure what to do with AI.

    The reality is, it’s impossible for teachers and parents to prevent kids from using AI to study, so it may be more effective to educate kids on the role of AI as an imperfect assistant that sometimes makes mistakes rather than prohibit it completely. While it’s hard to discern whether a student has learned to solve a math problem by heart based on the answer they write, AI is at least good at detecting essays generated by AI. That makes it harder for students to cheat on humanities assignments which require more original thinking and expression.

    Chinese dominance

    The two most popular AI helpers in the U.S., as of May, are both Chinese-owned. One-year-old Question AI is the brainchild of the founders of Zuoyebang, a popular Chinese homework app that has raised around $3 billion in equity over the past decade. Gauth, on the other hand, was launched by TikTok parent ByteDance in 2019. Since its inception, Question AI has been downloaded six million times across Apple’s App Store and Google Play Store in the U.S., whereas its rival Gauth has amassed twice as many installs since its launch, according to data provided by market research firm SensorTower. (Both are published in the U.S. by Singaporean entities, a common tactic as Chinese tech receives growing scrutiny from the West.)

    The success of Chinese homework apps is a result of their concerted effort to target the American market in recent years. In 2021, China imposed rules to clamp down on its burgeoning private tutoring sector focused on the country’s public school curriculum.

    The fact that tutoring apps are likely to be using similar foundational AI technologies has leveled the playing field for foreign players, which can overcome language and cultural barriers by summoning AI to study user behavior. As Eugene Wei wrote in his canonical analysis of TikTok’s global success, “[A] machine learning algorithm significantly responsive and accurate can pierce the veil of cultural ignorance.”

    “A raw language model isn’t a ready-to-use AI agent, so we try to differentiate by fine-tuning our AI to teach more effectively. For example, our AI bot would invite students to ask follow-up questions after presenting an answer, encouraging deeper learning rather than just letting them copy the result.”

    Reply
  33. Tomi Engdahl says:

    Charley Grant / Wall Street Journal:
    How AI demand is driving a rally in old school stocks in the utilities, energy, and materials sectors, which are needed to make and operate AI products

    AI Is Driving ‘the Next Industrial Revolution.’ Wall Street Is Cashing In.
    Old-school stocks in the utilities, energy and materials sectors are outpacing the wider market
    https://www.wsj.com/finance/stocks/ai-is-driving-the-next-industrial-revolution-wall-street-is-cashing-in-8cc1b28f?st=exh7wuk9josoadj&reflink=desktopwebshare_permalink

    Demand for artificial intelligence is still booming, a year after the phenomenon first took Wall Street by storm. Far beyond the tech sector, investors are finding winners in old-school pick-and-shovel stocks.

    Deep-pocketed companies are investing heavily in AI technology, which has meant a windfall for chip makers such as Nvidia and for a host of businesses—such as suppliers of power, labor and raw materials—that help operate their products.

    Wall Street is taking notice. The utilities sector of the S&P 500 has returned 15% over the past three months, topping all other corners of the index. Energy and materials stocks have outperformed the broader market, which has advanced 4.2% over that period. Share prices are surging for industrial firms that stand to benefit from data-center expansion and renovation.

    Reply
  34. Tomi Engdahl says:

    Axios:
    Elon Musk says all of the $6B in xAI’s Series B is new money, rather than shares “given” to investors in his Twitter takeover; some investors also backed OpenAI

    https://www.axios.com/2024/05/27/elon-musk-xai-6-billion-funding-investors

    Reply
  35. Tomi Engdahl says:

    Everything can be used for good and for evil – including AI
    https://etn.fi/index.php/13-news/16261-kaikkea-voidaan-kaeyttaeae-hyvaeaen-ja-pahaan-myoes-tekoaelyae

    F-Secure’s corporate security business is now known as WithSecure. Today the company held its SPHERE24 event, where AI predictably took a leading role. According to Paolo Palumbo, who heads the company’s Intelligence team, AI threatens to shake companies’ and cyber defenders’ faith in cybersecurity.

    - Everything can be used for good and for evil. A cyber attacker does not have to care about the quality of the output: if the victim’s machine crashes, so what, Palumbo said, describing how different the rules are on the two sides of the cyber battlefield.

    OpenAI already spoke at length about its LLM models and generative AI back in February 2019. That was still the GPT-2 era, however, and people were not yet excited about the new technology. Even GPT-3 was not enough for that.

    In November 2022 everything changed. OpenAI introduced ChatGPT, initially built on GPT-3.5, and everyone fell in love with it, Palumbo notes. - In two months the new service reached 100 million active users, something that took TikTok nine months.

    Reply
  36. Tomi Engdahl says:

    Cade Metz / New York Times:
    OpenAI creates a Safety and Security Committee to explore AI risks and begins training its new flagship AI model, a process that could take nine months or more
    https://www.nytimes.com/2024/05/28/technology/openai-gpt4-new-model.html

    Reply
  37. Tomi Engdahl says:

    OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model

    OpenAI is setting up a new safety and security committee and has begun training a new artificial intelligence model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

    https://www.securityweek.com/openai-forms-safety-committee-as-it-starts-training-latest-artificial-intelligence-model/

    OpenAI says it’s setting up a safety and security committee and has begun training a new AI model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

    The San Francisco startup said in a blog post Tuesday that the committee will advise the full board on “critical safety and security decisions” for its projects and operations.

    The safety committee arrives as debate swirls around AI safety at the company, which was thrust into the spotlight after a researcher, Jan Leike, resigned and leveled criticism at OpenAI for letting safety “take a backseat to shiny products.” OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the “superalignment” team focused on AI risks that they jointly led.

    Reply
  38. Tomi Engdahl says:

    Social Distortion: The Threat of Fear, Uncertainty and Deception in Creating Security Risk

    While Red Teams can expose and root out organization specific weaknesses, there is another growing class of vulnerability at an industry level.

    https://www.securityweek.com/social-distortion-the-threat-of-fear-uncertainty-and-deception-in-creating-security-risk/

    In offensive security, there are a range of organization-specific vulnerabilities that create risk, from software/hardware vulnerabilities to processes and people. Attackers target and prey on any weakness they can identify. While Red Teams can expose and root out organization-specific weaknesses, there is another growing class of vulnerability at an industry level. It’s not a single actor, vulnerability or intentionally malicious campaign. It manifests in everything from governmental requirements and policy interference, to overblown, sometimes false alarms about technology safety, to active efforts to undermine research or authoritative industry voices. It’s a culture of disinformation, misinformation and misrepresentation that erodes trust, confuses employees, and overloads security teams chasing ghosts. Let’s examine the traditional pillars of security community culture and how they are being weakened and compromised, and even peek at where this all could go in a world of deepfakes and AI-fueled bias and hallucination.

    Reply
  39. Tomi Engdahl says:

    Mikko Hyppönen: AI will not stop at the human level
    https://etn.fi/index.php/13-news/16263-mikko-hyppoenen-tekoaely-ei-pysaehdy-ihmisen-tasolle

    AI can already be used to produce malicious code, yet security is in better shape than ever before. - There is, however, no reason to assume that the development of AI will stop at the level of human intelligence, said WithSecure research director Mikko Hyppönen yesterday at the company’s SPHERE24 event.

    We have been living in the ChatGPT era for almost two years. According to Hyppönen, this is a new technological revolution, of which our era has already seen a few. The smartphone was one; network connections everywhere, sometimes called connectivity, was another. As always, GenAI as a technological upheaval brings both benefits and harms. - We cannot pick only the benefits, Hyppönen reminded the audience.

    When a revolution arrives, we tend to overestimate its speed and underestimate its scope. According to Hyppönen, the same is happening with AI. - The promises of the internet revolution were not fulfilled right away either, but now they are fully real. It just took 20 years.

    Reply
  40. Tomi Engdahl says:

    Belle Lin / Wall Street Journal:
    PwC plans to roll out ChatGPT Enterprise to its 75K US staff and 26K UK staff, becoming the largest customer and first reseller of OpenAI’s enterprise product

    PwC Set to Become OpenAI’s Largest ChatGPT Enterprise Customer
    The Big Four accounting firm signs a new license and sales deal as the maker of ChatGPT ramps up its enterprise sales efforts
    https://www.wsj.com/articles/pwc-set-to-become-openais-largest-chatgpt-enterprise-customer-2eea1070?st=8zm5zf0ikwfqfj8&reflink=desktopwebshare_permalink

    Reply
  41. Tomi Engdahl says:

    Cade Metz / New York Times:
    OpenAI creates a Safety and Security Committee to explore AI risks and begins training its new flagship AI model, which won’t arrive for at least nine months
    https://www.nytimes.com/2024/05/28/technology/openai-gpt4-new-model.html?unlocked_article_code=1.vk0.c5lY.2SKeSXp53zNL&smid=url-share

    Reply
  42. Tomi Engdahl says:

    Emilia David / The Verge:
    Microsoft launches Copilot for Telegram in beta; the bot is limited to text-based requests and cannot generate images for now — Microsoft has added an official Copilot bot within the messaging app Telegram, which lets users search, ask questions, and converse with the AI chatbot.

    https://www.theverge.com/2024/5/28/24166451/telegram-copilot-microsoft-ai-chatbot

    Reply
  43. Tomi Engdahl says:

    Reuters Institute:
    A survey in six countries shows 32% of respondents think human editors check AI outputs before publishing them; many haven’t made up their minds yet about AI

    https://reutersinstitute.politics.ox.ac.uk/what-does-public-six-countries-think-generative-ai-news

    Reply
  44. Tomi Engdahl says:

    At Futurice, AI became a co-worker: “it works side by side with us”
    Aleksi Ylä-Anttila / Tivi:
    The leap in AI development frees up time for creative work, among other things. According to an expert, the discussion must address not only the productivity leap but also responsibility.
    https://www.tivi.fi/uutiset/futuricessa-tekoalysta-tuli-tyokaveri-tyoskentelee-rinta-rinnan-meidan-kanssamme/a63305ca-b763-4c73-82c8-118714058f9d

    Reply
