Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”
6,260 Comments
Tomi Engdahl says:
Bloomberg:
OpenAI’s CFO: ChatGPT has 250M weekly active users, converts free users to paid at a 5% to 6% rate, and ~75% of OpenAI’s revenue is from consumer subscriptions
OpenAI CFO Says 75% of Its Revenue Comes From Paying Consumers
https://www.bloomberg.com/news/articles/2024-10-28/openai-cfo-says-75-of-its-revenue-comes-from-paying-consumers
AI startup converting 5% to 6% of free users to paid products
There are more than 1 million paid users for corporate ChatGPT
Tomi Engdahl says:
Kalley Huang / The Information:
Source: Meta is working on a web search engine to provide conversational answers about current events for Meta AI users, hoping to lessen its reliance on Google
https://www.theinformation.com/articles/meta-develops-ai-search-engine-to-lessen-reliance-on-google-microsoft
Tomi Engdahl says:
Financial Times:
A UK judge sentences a 27-year-old man who used AI tool Daz 3D to create child sexual abuse imagery to 18 years in prison, a landmark prosecution over deepfakes
Man who used AI to create child abuse images jailed for 18 years in UK
Judge rules in landmark case involving deepfake sexual abuse material
https://www.ft.com/content/81060e76-994d-4635-af02-637504c69532
A man who used artificial intelligence technology to create child sexual abuse imagery was sentenced to 18 years in prison on Monday, in a landmark prosecution over deepfakes in the UK.
Hugh Nelson, 27, from Bolton, pleaded guilty to a total of 16 child sexual abuse offences, including transforming everyday photographs of real children into sexual abuse material using AI tools from US software provider Daz 3D. He also admitted encouraging others to commit sexual offences on children.
At Bolton Crown Court, Judge Martin Walsh imposed an extended sentence on Nelson, saying he posed a “significant risk” of causing harm to the public.
Advances in AI mean fake images have become more realistic and easier to create, prompting experts to warn about a rise in computer-generated indecent images of children.
Jeanette Smith, a prosecutor from the Crown Prosecution Service’s Organised Child Sexual Abuse Unit, said Nelson’s case set a new precedent for how computer-generated images and indecent and explicit deepfakes could be prosecuted.
“This case is one of the first of its kind but we do expect to see more as the technology evolves,” said Smith.
Greater Manchester Police found both real images of children and computer-generated images of child sexual abuse on Nelson’s devices, which were seized last June.
The computer-generated images did not look exactly like real photographs but could be classified as “indecent photographs”, rather than “prohibited images”, which generally carry a lesser sentence. This was possible, Smith said, because investigators were able to demonstrate they were derived from images of real children sent to Nelson.
Nelson in August admitted to creating and selling bespoke images of child sexual abuse tailored to customers’ specific requests. He generated digital models of the children using real photographs that his customers had submitted. Police also said he further distributed the images he had created online, both for free and for payment.
It comes as both the tech industry and regulators are grappling with the far-reaching social impacts of generative AI. Companies such as Google, Meta and X have been scrambling to tackle deepfakes on their platforms.
The UK’s Online Safety Act, which passed last October, makes it illegal to disseminate non-consensual pornographic deepfakes. But Nelson was prosecuted under existing child abuse law.
Smith said that as AI image generation improved, it would become increasingly challenging to differentiate between different types of images. “That line between whether it’s a photograph or whether it’s a computer-generated image will blur,” she said.
Daz 3D, the company that created the software used by Nelson, said that its user licence agreement “prohibits its use for the creation of images that violate child pornography or child sexual exploitation laws, or are otherwise harmful to minors”.
Tomi Engdahl says:
“Arguably, The Terminator’s greatest legacy has been to distort how we collectively think and speak about AI—and this matters now more than ever because of how central these technologies have become.”
40 years later, The Terminator still shapes our view of AI
The film has an outsize influence on the existential danger of AI.
https://arstechnica.com/ai/2024/10/40-years-later-the-terminator-still-shapes-our-view-of-ai/?utm_source=facebook&utm_medium=social&utm_campaign=dhfacebook&utm_content=null&fbclid=IwZXh0bgNhZW0CMTEAAR3-eW15o4z59E3DG4M6vayL9IJfTBHUkAjyhy1N0ACEMfYrjy_chi-3MTM_aem_-s4k7FpoEaN3BukSeQEJeQ
Tomi Engdahl says:
A stark claim about a new IT-industry bubble: only 1 percent will survive
Justus Vento, 27.10.2024 18:02 (updated 27.10.2024 18:45) | AI
According to Robin Li, the situation resembles the dot-com crisis of the 1990s.
https://www.tivi.fi/uutiset/raju-vaite-uudesta-it-alan-kuplasta-vain-1-prosentti-selviaa/386b42c7-f1bc-4a3b-85c9-1b8d39465639
Robin Li, CEO of the Chinese internet services and AI giant Baidu, predicted late last week a massive wave of collapses among AI startups. In his view, the field has inescapably grown into a bubble like the one in the 1990s dot-com crisis. The Register reports on the matter.
“Probably one percent of the companies will stand out and become huge, creating a lot of value for their owners and for people in society. I think we are going through exactly that kind of process,” Li declared. The rest, he apparently believes, are doomed to fail.
According to the CEO, AI’s biggest challenge, the so-called “hallucination” problem, has now been solved in the largest and most advanced language models. Until now, AI has been able to make things up off the top of its head when a correct answer was not quickly found, which has made its use risky, especially in enterprise environments.
“I think that over the past 18 months this problem has essentially been solved – meaning that when you talk to a chatbot based on a frontier model, you can basically trust the answer.”
At the same time, AI models of course threaten an ever larger share of jobs. Li does not seem worried about this, however.
“It will take another 10 to 30 years before human jobs are replaced by the technology. Companies, organizations, governments and ordinary people must prepare for that kind of paradigm shift.”
Tomi Engdahl says:
AI ‘bubble’ will burst 99 percent of players, says Baidu CEO
https://www.theregister.com/2024/10/20/asia_tech_news_roundup/
Baidu CEO Robin Li has proclaimed that hallucinations produced by large language models are no longer a problem, and predicted a massive wipeout of AI startups when the “bubble” bursts.
“The most significant change we’re seeing over the past 18 to 20 months is the accuracy of those answers from the large language models,” gushed the CEO at last week’s Harvard Business Review Future of Business Conference. “I think over the past 18 months, that problem has pretty much been solved – meaning when you talk to a chatbot, a frontier model-based chatbot, you can basically trust the answer,” he added.
“Probably one percent of the companies will stand out and become huge and will create a lot of value or will create tremendous value for the people, for the society. And I think we are just going through this kind of process,” stated Li.
The CEO also guesstimated it will be another 10 to 30 years before human jobs are displaced by the technology.
“Companies, organizations, governments and ordinary people all need to prepare for that kind of paradigm shift,” he warned.
Tomi Engdahl says:
The woman who took Nordea’s controversial toe photo speaks out
Advertising | The American photographer who took Nordea’s toe photo says she is at once flattered and saddened that her child was mistaken for an AI creation.
https://www.hs.fi/talous/art-2000010794081.html
Tomi Engdahl says:
Linus Torvalds: “AI is 90 percent marketing, 10 percent reality”
Suvi Korhonen, 29.10.2024 12:30 | AI, Linux
The current state of AI does not yet convince Torvalds.
https://www.tivi.fi/uutiset/linus-torvalds-tekoaly-on-90-prosenttisesti-markkinointia-10-prosenttisesti-todellisuutta/0779f04d-f3cd-43c6-93b1-e1efcf708178
Linus Torvalds, known as the father of the Linux operating system, is also known for his blunt comments. At the Open Source Summit in Vienna, he was reluctant to share his views on the current state of AI development with the Tfir site.
Tomi Engdahl says:
To benefit society, AI needs to be transparent, accountable and trustworthy.
Taming AI: a human job
The Schwartz Reisman Institute for Technology and Society is teaching AI to play by the rules, blending technical innovation with cybersecurity, law and public engagement.
https://www.nature.com/articles/d42473-024-00233-w?utm_source=facebook&utm_medium=paid_social&utm_campaign=CONR_NINDX_AWA1_GL_PCFU_CFULF_UNIT-AM24&fbclid=IwZXh0bgNhZW0BMABhZGlkAasTrqM05vwBHU6VcE1gOGm_Na2VETjO3_QemKT3GXkGIInGQmx-JYaiJUcGofV_Nc0E-g_aem_EoSdzR_AtiO2lncXpBWafQ&utm_id=120211019070250572&utm_content=120211455738550572&utm_term=120211455738540572
Roger Grosse has spent most of his career trying to understand how neural nets work and how to improve them. That was until the performance trends of large language models (LLMs) caught his attention.
“Suddenly, short timelines to human-level or even superhuman intelligence seemed much more plausible,” says Grosse. “I realised I needed to shift my focus to ensuring these systems are safe.” As LLMs become more integrated into everyday decision-making, understanding their behaviour and ensuring they align with human values has become paramount.
Tomi Engdahl says:
AI AND SOCIETY
I talked with an AI and felt the world change — what will machines that imitate emotions do to us?
https://www.rapport.fi/seppo-honkanen/puhuin-tekoalyn-kanssa-ja-tunsin-kuinka-maailma-muuttui-mita-tunteita-imitoivat-koneet-ihmiselle-tekevat-5a95a3?fbclid=IwZXh0bgNhZW0BMABhZGlkAAAGCDKqDJ0BHZOKvfbUvJLOFHXvXslwSqMH4USmV2A1NO0LMTNvBcDWkc2oAWOWMTX4vw_aem_iB8C-rLp9pCevlu0iUuL2A
Digital technology already exposes us to anxiety, loneliness and addiction. The rise of AI that convincingly imitates emotions could damage our psyche far more.
I treated it like a human being. I was embarrassed about talking over it, and at the end I wished it a good rest of the day. My companion answered me expressively; the illusion of connection was bewilderingly strong, different from the text-based version.
The app update released by OpenAI, the advanced voice mode, is now available in Finland as well. It listens, talks back, and uses its tone of voice to express emotion in a way that feels human. There are plenty of samples on OpenAI’s YouTube channel, for example this one.
I also let my 3-year-old son speak with it. He asked fluently about tools, for example how a drill and an adjustable wrench work, and wanted to know the AI’s name. The conversation with a machine that articulates in a child-friendly way and is endlessly patient went smoothly. I had a strong sense of experiencing a moment when the world changed.
Tomi Engdahl says:
Voice AI ChatGPT
https://youtu.be/XOXMwsq7ACs?si=oVdbO6Mf_ptFRF0K
Tomi Engdahl says:
https://songer.co/?fbclid=IwZXh0bgNhZW0CMTEAAR1IL2jhsVQb3q_mtejZogW8ssLuI0AVWFSn3RQvBmcFJjCi6nx_7qKdhHA_aem_OARFoMc7K-eYeu28cObUZQ&utm_campaign=A%2B1_WW%20Campaign%20Campaign&utm_content=vid_c-07-05-songer&utm_medium=paid_ads&utm_source=Facebook&utm_term=A%2B1_WW%20Campaign%20Ad%20Set
Tomi Engdahl says:
AI helps plants tell you when they are thirsty
England’s Royal Horticultural Society will use Microsoft-powered AI to help create an ‘intelligent garden’ visitors can speak with.
https://www.popsci.com/technology/ai-garden/
Sensors spread throughout the garden will measure the garden soil for changes in temperature, moisture, and other environmental factors. Credit: Royal Horticultural Society
Have you ever joyously stepped out to your backyard garden, freshly brewed coffee in hand, only to find your meticulously cared-for plants and herbs wilted and dying? Was the soil too dry? Did pests find their way in? During times like these, some frustrated gardeners may wish their fickle ficus would just tell them what it needs. A new Microsoft-partnered project in the UK is trying to see if that concept can be demonstrated in the real world.
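At its core, the "talking garden" idea is sensor readings mapped to plain-language status messages. A minimal sketch of that mapping is below; the thresholds, field names, and sensor IDs are all made up for illustration and have nothing to do with the RHS project's actual implementation.

```python
# Hypothetical sketch: classify a plant's state from soil sensor readings.
# Thresholds and field names are illustrative, not from the RHS/Microsoft project.

def plant_status(moisture_pct: float, temp_c: float) -> str:
    """Map raw soil readings to a human-readable status."""
    if moisture_pct < 20:
        return "thirsty"          # soil too dry
    if temp_c > 35:
        return "heat-stressed"    # too hot, regardless of moisture
    return "ok"

readings = [
    {"sensor": "bed-1", "moisture_pct": 12, "temp_c": 22},
    {"sensor": "bed-2", "moisture_pct": 45, "temp_c": 38},
]
for r in readings:
    print(r["sensor"], plant_status(r["moisture_pct"], r["temp_c"]))
```

In the actual project, a language model would presumably turn such a status into conversational answers for visitors; the rule-based mapping here only sketches the sensor-to-message step.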
Tomi Engdahl says:
IT guru warns: without this AI you will be left in your competitors’ shadow
Suvi Korhonen, 27.10.2024 08:11 | AI, Enterprise systems, Digital economy
After the rapid spread of AI assistants, the next trend is autonomous AI. It operates with little human involvement or supervision, can improve itself, and grows ever more effective at decision-making even in complex environments.
https://www.tivi.fi/uutiset/it-guru-varoittaa-ilman-tata-tekoalya-jaat-kilpailijoiden-varjoon/1b99642f-b916-46fb-8530-87962dd6ba5b
Consulting firm Capgemini warns companies to take the next AI technology trend seriously, because the use of AI agents is spreading fast. Those who ignore the phenomenon risk being left behind and losing the advantage it brings, Zdnet reports.
Tomi Engdahl says:
Linux creator Linus Torvalds: AI is useless: it’s ’90% marketing’ while he ignores AI for now
Linux creator Linus Torvalds says that AI at this stage of the game is ’90% marketing’ with gigantic industry hype, and only ’10% reality’ right now.
Read more: https://www.tweaktown.com/news/101381/linux-creator-linus-torvalds-ai-is-useless-its-90-marketing-while-he-ignores-for-now/index.html
Tomi Engdahl says:
https://www.techspot.com/news/105336-most-current-ai-tech-90-percent-marketing-linus.html
Tomi Engdahl says:
https://www.tweaktown.com/news/101381/linux-creator-linus-torvalds-ai-is-useless-its-90-marketing-while-he-ignores-for-now/index.html
Tomi Engdahl says:
“My approach to AI right now is I will basically ignore it.”
Creator of Linux Trashes AI Hype
https://futurism.com/the-byte/creator-of-linux-trashes-ai-hype
Is AI everything that it’s made out to be? Not according to Linus Torvalds, the creator of Linux and its enduring chief spokesperson: in his view, the tech is “90 percent marketing and ten percent reality.” Ouch.
“I think AI is really interesting and I think it is going to change the world,” Torvalds said in a portion of the interview which recently went viral. “And at the same time, I hate the hype cycle so much that I really don’t want to go there.”
“So my approach to AI right now is I will basically ignore it,” he continued, “because I think the whole tech industry around AI is in a very bad position and it’s 90 percent marketing and ten percent reality.”
Give a Tux
The benevolent dictator for life hath spoken. We’re sure his comments won’t go unnoticed by some in the tech industry, since most of their data centers run on Linux.
But according to Torvalds, the best may be yet to come for AI, with the next few years being a crucial litmus test.
“In five years, things will change, and at that point we’ll see what of the AI is getting used every day for real workloads instead of just ChatGPT,” he said in the interview, before launching into a tangent about the chatbot.
Torvalds doesn’t seem convinced by the current crop of large language models like OpenAI’s, which he says — with something between a smirk and grimace on his face before rubbing his forehead — “makes great, like, demonstrations.”
“It’s obviously being used… in many, many areas,” he added. “But I really hate the hype cycle.”
Kernel of Truth
For all the billions of dollars being invested in the technology — which is hollowing out other industries under the premise that it’s already reliable and transformative — a clear path to making AI profitable hasn’t opened up yet.
It doesn’t help that some of the most prominent AI models often act as their own worst enemy. Everything from chatbots to integrated forms like Google Search’s AI Overviews still suffer from frequent hallucinations.
Tomi Engdahl says:
Elon Musk is doubling the world’s largest AI GPU cluster — expanding Colossus GPU cluster to 200,000 ‘soon,’ has floated 300,000 in the past
https://www.tomshardware.com/pc-components/gpus/elon-musk-is-doubling-the-worlds-largest-ai-gpu-cluster-expanding-colossus-gpu-cluster-to-200-000-soon-has-floated-300-000-in-the-past?utm_medium=social&utm_content=tomsguide&utm_source=facebook.com&utm_campaign=socialflow&fbclid=IwY2xjawGQMUpleHRuA2FlbQIxMQABHWBzaxioY1vo4GPx72IRD9HQBkRWi7zMG_hmyYxdOIkC1Vs9YYi4FcV1FQ_aem_APoCnzxIQLiDLHo1Fmg85w
xAI Colossus AI supercomputer continues to grow at a very fast pace
So, the xAI Colossus AI supercomputer is on course “Soon to become a 200k H100/H200 training cluster in a single building.” Its 100,000 GPU incarnation, which only just started AI training about two weeks ago, was already notable.
Tomi Engdahl says:
AI faces collapse – solutions are being sought feverishly
Panu Räty, 1.11.2024 22:10 | AI, Data and analytics
The development of large language models threatens to stall, because human-produced training material is running out.
https://www.tivi.fi/uutiset/tekoalya-uhkaa-luhistuminen-ratkaisuja-etsitaan-kuumeisesti/a336fc17-43fb-44b3-a8de-6ec6e191a24f
Pretraining large AI models requires an enormous amount of text corpora collected from the internet, freely available books and other public text sources. It is precisely thanks to their diverse training material that language models are able to produce human-like text.
Tomi Engdahl says:
https://futurism.com/the-byte/creator-of-linux-trashes-ai-hype
Tomi Engdahl says:
There’s a Problem With AI Programming Assistants: They’re Inserting Far More Errors Into Code
https://futurism.com/the-byte/ai-programming-assistants-code-error
AI tools may actually create more work for coders, not less.
We Regret the Error
Proponents of generative AI have claimed that the technology can make human workers more productive, especially when it comes to writing computer code.
But does it really?
A recent report conducted by coding management software business Uplevel, first spotted by IT magazine CIO, indicates that engineers who use GitHub’s popular AI programming assistant Copilot don’t experience any significant gains in efficiency.
If anything, the study says usage of Copilot results in 41 percent more errors being inadvertently entered into code.
For the study, Uplevel tracked the performance of 800 developers for three months before they got access to Copilot. After they got Copilot, Uplevel tracked them once again for another three months.
To measure their performance, Uplevel examined how long it took the developers to merge code into a repository via pull requests, and how many pull requests they put through.
Uplevel found that “Copilot neither helped nor hurt the developers in the sample and also did not increase coding speed.”
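The metrics described above are straightforward to compute from repository timestamps. Here is a minimal sketch with made-up data; the structure and numbers are illustrative only and do not reflect Uplevel's actual dataset or methodology.

```python
from datetime import datetime

# Hypothetical sketch of the metrics described above: time from a pull
# request being opened to being merged ("cycle time"), and PR throughput
# per developer. The data below is invented for illustration.

prs = [
    {"dev": "a", "opened": "2024-06-01T09:00", "merged": "2024-06-01T15:00"},
    {"dev": "a", "opened": "2024-06-02T10:00", "merged": "2024-06-03T10:00"},
    {"dev": "b", "opened": "2024-06-01T08:00", "merged": "2024-06-01T20:00"},
]

def cycle_hours(pr: dict) -> float:
    """Hours between opening and merging a pull request."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(pr["merged"], fmt) - datetime.strptime(pr["opened"], fmt)
    return delta.total_seconds() / 3600

mean_cycle = sum(cycle_hours(p) for p in prs) / len(prs)

throughput: dict[str, int] = {}
for p in prs:
    throughput[p["dev"]] = throughput.get(p["dev"], 0) + 1

print(f"mean PR cycle time: {mean_cycle:.1f} h")   # (6 + 24 + 12) / 3 = 14.0
print(f"PRs merged per developer: {throughput}")
```

Comparing such numbers before and after a tool rollout, as the study did, is a simple pre/post design; it cannot by itself separate the tool's effect from other changes over the same months.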
All this information is not so surprising when you realize that GitHub Copilot is centered around large language models (LLMs), which are often prone to hallucinating false information and spitting out incorrect data.
Another recent study led by University of Texas at San Antonio researchers found that large language models can generate a significant number of “hallucination packages,” or code that “recommends or contains a reference” to files or code that doesn’t exist.
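Hallucinated package references can be screened for mechanically. Below is a minimal sketch, using only the Python standard library, that extracts the imports from a generated snippet and flags any that cannot be resolved in the current environment. It is a first-pass filter under an obvious caveat: a name that resolves may still not be the package the model intended, and a legitimate dependency that simply isn't installed will also be flagged.

```python
import ast
import importlib.util

# Minimal sketch: extract top-level imported module names from a Python
# snippet and flag any that cannot be resolved in the current environment.
# A hallucinated package name would show up here as unresolvable.

def unresolvable_imports(source: str) -> list[str]:
    tree = ast.parse(source)
    names: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    # find_spec returns None when the module cannot be located
    return sorted(n for n in names if importlib.util.find_spec(n) is None)

snippet = "import json\nimport totally_made_up_pkg\n"
print(unresolvable_imports(snippet))  # ['totally_made_up_pkg']
```

The snippet name `totally_made_up_pkg` is deliberately fictitious; in practice such a check would run in the same environment where the generated code is meant to execute.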
Tech leaders are starting to get worried that making use of AI-generated code may actually end up being more work.
“It becomes increasingly more challenging to understand and debug the AI-generated code, and troubleshooting becomes so resource-intensive that it is easier to rewrite the code from scratch than fix it,” software development firm Gehtsoft CEO Ivan Gekht told CIO.
Tomi Engdahl says:
A Java coder became a head of product development – now he warns of the danger of AI hype: “AI stuff gets baked into products by force”
https://www.tivi.fi/uutiset/java-koodarista-tuli-tuotekehitysjohtaja-nyt-han-varoittaa-tekoalyhypen-vaarasta-leivotaan-ai-juttuja-vakisin-johonkin/6fe4a3a1-4720-436e-a53a-ce6bf6b81422
Tomi Engdahl says:
Thousands Turn Out For Nonexistent Halloween Parade Promoted By AI Listing
https://defector.com/thousands-turn-out-for-nonexistent-halloween-parade-promoted-by-ai-listing
Thousands of Dubliners showed up for the city’s much-anticipated Halloween parade on Thursday evening. They lined the streets from Parnell Street to Christchurch Cathedral, waiting for the promised three-hour parade that would “[transform] Dublin into a lively tapestry of costumes, artistic performances, and cultural festivities.” A likely story. There was no parade, and never was one.
The patient zero of this farce, however, appears to be a combination of classic SEO bait tactics and newfangled AI slop content. Every autumn, lots of people search for Halloween events nearby, and a site entirely devoted to cataloguing them will naturally rise in the Google rankings, which incentivize lots of things that are not necessarily “quality” or “accuracy.” You click on the site, which looks professional enough, and they get some money for the ads you’re served. If you have a problem with this business model, take it up with this site’s very real staff, I’m sure they’ll be responsive to feedback:
Tomi Engdahl says:
No One Wants Apple To Scrape Their Websites for AI Training
https://futurism.com/the-byte/apple-ai-training
Wired reports that a slew of major websites, including influential news publishers and top social media platforms, are blocking Apple’s web crawler from scraping their pages for AI training content.
Tomi Engdahl says:
https://www.tivi.fi/uutiset/solita-selvitti-miten-tekoaly-mullistaa-koodausta-halusimme-nahda-tuleeko-junioreista-senioreita/67d15584-a3c2-463e-82ba-3506c291e4bf
Tomi Engdahl says:
The rules for AI have changed – the use of these systems is being tightened considerably
Anna Helakallio, 1.11.2024 06:05 (updated 1.11.2024 09:10) | AI
The AI Act demands a lot from companies using certain AI systems, but the regulation may also ease everyday life for some organizations.
https://www.tivi.fi/uutiset/tekoalyn-saannot-muuttuivat-naiden-jarjestelmien-kaytto-tiukentuu-huomattavasti/f156ab85-77b5-4f8c-9c6f-94f6aa5f9697
The EU’s AI Act entered into force in August. Its implementation period only begins in a couple of years, but the regulation can mean considerable changes for companies and organizations that make use of AI.
Tomi Engdahl says:
Nokia aims at a new money-spinner – AI will make demand explode
Jukka Lehtinen, 2.11.2024 10:30 | AI, Digital economy
According to Nokia’s chief technology officer, the telecom industry must find ways to get a share of the returns that AI brings.
https://www.tivi.fi/uutiset/nokia-tahtaa-uuteen-rahasampoon-tekoaly-rajayttaa-kysynnan/d23eacda-25dc-4e15-a772-c7e3f4a9b056
The use of AI will multiply the volume of traffic in data networks over the next ten years, predicts Nokia Bell Labs, which focuses on industrial research and development.
Tomi Engdahl says:
Up to 100 times more powerful than ChatGPT-4 – the newcomer’s launch is approaching
3.11.2024 20:01 (updated 4.11.2024 10:11)
The new Orion model has been promoted for a long time. It is not coming to GPT-4 users right away; OpenAI will first give access to its partner companies.
https://www.mikrobitti.fi/uutiset/jopa-100-kertaa-chatgpt-4aa-tehokkaampi-uutuuden-julkaisu-lahestyy/4d26d935-e53d-46cc-b064-94a1d0ce44ef
OpenAI’s long-rumored and much-hyped Orion language model will see the light of day before the turn of the year, The Verge reports.
According to the magazine’s inside sources, at least Microsoft’s software engineers are preparing to run Orion on the Azure platform as early as November. It is not yet certain whether OpenAI intends to name the newcomer GPT-5, after its predecessor, or something else.
OpenAI plans to release its next big AI model by December / The startup’s next flagship model, codenamed Orion, is slated to arrive around the two-year anniversary of ChatGPT.
https://www.theverge.com/2024/10/24/24278999/openai-plans-orion-ai-model-release-december
Tomi Engdahl says:
Hospitals adopt error-prone AI transcription tools despite warnings
OpenAI’s Whisper tool may add fake text to medical transcripts, investigation finds.
https://arstechnica.com/ai/2024/10/hospitals-adopt-error-prone-ai-transcription-tools-despite-warnings/#gsc.tab=0
On Saturday, an Associated Press investigation revealed that OpenAI’s Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use. The AP interviewed more than 12 software engineers, developers, and researchers who found the model regularly invents text that speakers never said, a phenomenon often called a “confabulation” or “hallucination” in the AI field.
Upon its release in 2022, OpenAI claimed that Whisper approached “human level robustness” in audio transcription accuracy. However, a University of Michigan researcher told the AP that Whisper created false text in 80 percent of public meeting transcripts examined. Another developer, unnamed in the AP report, claimed to have found invented content in almost all of his 26,000 test transcriptions.
Tomi Engdahl says:
ChatGPT-5 is delayed and will not arrive this year – Sam Altman explained why and hinted at what is coming
https://muropaketti.com/tietotekniikka/tietotekniikkauutiset/chatgpt-5-myohastyy-eika-tule-tana-vuonna-sam-altman-kertoi-syyt-ja-vihjasi-tulevasta/
Tomi Engdahl says:
ChatGPT’s Achilles heel revealed
Suvi Korhonen, 30.10.2024 16:09 | AI, Information security
Slipping through the guardrails of language models makes it possible to write malware with ChatGPT and other similar services.
https://www.tivi.fi/uutiset/chatgptn-akilleenkantapaa-paljastui/9152862f-8449-40a8-96d8-deb1ee844fd7
Tomi Engdahl says:
This profession is on the line – 80% of companies already use AI
https://www.tivi.fi/uutiset/tama-ammatti-on-vaakalaudalla-80-yrityksista-kayttaa-jo-tekoalya/69a59677-de62-40d8-a356-5051e245c592
AI promises big savings and tighter efficiency in serving customers. Its adoption reduces the need for human labor, but things do not yet work entirely without human input.
When you contact customer service, it is increasingly likely that you are served by an AI or by an AI-assisted human. Behind the use of AI…
Tomi Engdahl says:
https://futurism.com/the-byte/inventor-six-robot-copies-speeches-take-questions
Tomi Engdahl says:
Decart’s AI simulates a real-time, playable version of Minecraft
https://techcrunch.com/2024/10/31/decarts-ai-simulates-a-real-time-playable-version-of-minecraft/
Decart, an Israeli AI company that emerged from stealth today with $21 million in funding from Sequoia and Oren Zeev, has released what it’s claiming is the first playable “open-world” AI model.
Called Oasis, the model, which is available for download, powers a demo on Decart’s site: a Minecraft-like game that’s generated on the fly, end to end. Trained on videos of Minecraft gameplay, Oasis takes in keyboard and mouse movements and generates frames in real time, simulating physics, rules, and graphics.
Tomi Engdahl says:
Listeners were infuriated by the use of AI – radio station’s revamp was quickly cancelled
4.11.2024 18:15 (updated 4.11.2024 20:44)
The radio station ran its production on AI for only a moment before the audience protested.
https://www.mikrobitti.fi/uutiset/mb/4901d12a-39ac-433e-9df7-6d7d5ea01025?utm_term=Autofeed&utm_medium=Social&utm_source=Facebook&fbclid=IwZXh0bgNhZW0CMTEAAR1hQ4X3xO7NVoSOccBucaxXNfzOOSwrhq5gJjTc6RjMazoWFw4fR-hc5bE_aem_nSRFP81BUiUPOqZdx5mvww#Echobox=1730737061
An A for effort. The radio station’s owners tried to replace the human voice with AI. The experiment was short-lived.
Earlier this autumn, the Polish radio station OFF Radio Krakow dismissed all of the station’s journalists. There is nothing surprising as such in shutting down a radio station, but OFF Radio did not leave it at that. The station was relaunched in late October, this time just without employees, with the help of AI-powered speech synthesizers.
According to the AP, the station’s new faces were three AI-generated avatars that, according to the station’s owners, were meant to appeal to the young. Their purpose was to reflect the city’s residents by talking about culture, the arts and, for example, the social problems faced by sexual minorities.
Journalists and radio hosts protested in strong terms against replacing humans with AI. They said it would set a dangerous precedent if the revamp were allowed to stand.
The station’s AI experiment did not survive the listeners’ protests for long. A debate that spread across the whole of Poland made the owners of OFF Radio Krakow change their minds just a week after the launch.
Tomi Engdahl says:
Polish radio station abandons use of AI ‘presenters’ following outcry
https://apnews.com/article/poland-media-radio-ai-bba6beb01d523c6727d650c69da14960
WARSAW, Poland (AP) — A Polish radio station said Monday that it has ended an “experiment” that involved using AI-generated “presenters” instead of real journalists after the move sparked an outcry.
Weeks after dismissing its journalists, OFF Radio Krakow relaunched last week using virtual characters created by AI as its presenters.
Across Poland, people were angry, expressing fears that humans were being replaced by AI.
The station’s editor, Marcin Pulit, said in a statement Monday that the aim had been to spark a debate about artificial intelligence, and that it had succeeded. He said the experiment had been meant to last three months but that it saw no reason to go on.
“After a week, we had collected so many observations, opinions, and conclusions that we decided that its continuation was pointless,” Pulit wrote.
Tomi Engdahl says:
Using AI To Help With Assembly
https://hackaday.com/2024/11/07/using-ai-to-help-with-assembly/
Although generative AI and large language models have been pushed as direct replacements for certain kinds of workers, plenty of businesses actually doing this have found that the new technology can cause more problems than it solves when it is given free rein over tasks. While this might not be true indefinitely, the real use case for these tools right now is as a kind of assistant for certain kinds of work. For this they can be incredibly powerful, as [Ricardo] demonstrates here, using Amazon Q to help with game development on the Commodore 64.
Back to the future: Writing 6502 assembler with Amazon Q Developer
In this short post I have some fun with Amazon Q Developer and get it to write code that runs on my virtual Commodore 64
https://community.aws/content/2oEqDGCIsQwoPrL3wjoSReyHnan/back-to-the-future-writing-6502-assembler-with-amazon-q-developer
Tomi Engdahl says:
Artificial Intelligence
Back to the Future, Securing Generative AI
While there are similar security challenges that parallel traditional security, we must understand that AI requires new ways to approach security.
https://www.securityweek.com/back-to-the-future-securing-generative-ai/
Over the last 10 years, the top jobs in data analysis have evolved from statistics and applied modeling, into actuarial science, into data science, into machine learning, and now here we are, Artificial Intelligence and Generative AI. AI has become ubiquitous – most people have used it and almost everyone has an opinion of it. As an engineer, I’m excited to apply all of this innovation into practical applications, and ultimately ensure it operates safely and securely.
Generative AI is a broad term that can be used to describe any AI system that generates content. When we start to think about securing Generative AI – there are a few key concepts to understand.
1. Generative AI can be a single model (such as a large language model) or consist of multiple models combined in various configurations.
2. It can be single-modal (i.e., text only) or multi-modal (i.e., text, speech, images); this affects what kinds of data the models are trained on.
3. Data inputs into models can vary. Often we are talking about some form of mass data ingestion augmented with custom data. These data can either be structured and labeled, or labeled by the model based on certain patterns. When a model runs, the data is analyzed and fed through, and in a matter of seconds all of these factors coalesce into an output value. As an example, an enterprise can deploy generative AI to help with its customer service, using a large language model trained on text and voice data from its previous customer-service representatives, with a supervised method in which customers have rated each of their previous interactions.
In addition to the deployment of Generative AI, we should also take into consideration two foundational parts that make up those models described above, training and inference.
Tomi Engdahl says:
Diana Kwon / Nature:
Research integrity specialists and scientific publishers raise concerns about the ease with which scientific data can be fabricated using generative AI tools
AI-generated images threaten science — here’s how researchers hope to spot them
Generative-AI technologies can create convincing scientific data with ease — publishers and integrity specialists fear a torrent of faked science.
https://www.nature.com/articles/d41586-024-03542-8
Tomi Engdahl says:
The Information:
Sources: Orion, OpenAI’s upcoming flagship model currently in development, shows a smaller increase in quality from GPT-4 than GPT-4 had over GPT-3 at launch — The number of people using ChatGPT
OpenAI Shifts Strategy as Rate of ‘GPT’ AI Improvements Slows
https://www.theinformation.com/articles/openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows
Tomi Engdahl says:
Reuters:
Source: the US ordered TSMC to halt shipments of 7nm or more advanced chips that power AI accelerators and GPUs to Chinese customers starting on November 11 — The U.S. ordered Taiwan Semiconductor Manufacturing Co (2330.TW) to halt shipments of advanced chips to Chinese customers …
Exclusive: US ordered TSMC to halt shipments to China of chips used in AI applications
https://www.reuters.com/technology/us-ordered-tsmc-halt-shipments-china-chips-used-ai-applications-source-says-2024-11-10/
NEW YORK/SINGAPORE, Nov 9 (Reuters) – The U.S. ordered Taiwan Semiconductor Manufacturing Co (2330.TW) to halt shipments of advanced chips to Chinese customers that are often used in artificial intelligence applications starting Monday, according to a person familiar with the matter.
The Department of Commerce sent a letter to TSMC imposing export restrictions on certain sophisticated chips, of 7 nanometer or more advanced designs, destined for China that power AI accelerator and graphics processing units (GPU), the person said.
Tomi Engdahl says:
Miles Kruppa / Wall Street Journal:
Chegg has lost 500K+ subscribers since ChatGPT’s launch, and its stock is down 99% from early 2021, as students looking for homework help turn to free AI tools
How ChatGPT Brought Down an Online Education Giant
Chegg’s stock is down 99%, and students looking for homework help are defecting to ChatGPT
https://www.wsj.com/tech/ai/how-chatgpt-brought-down-an-online-education-giant-200b4ff2?st=6tw8JN&reflink=desktopwebshare_permalink
Most companies are starting to figure out how artificial intelligence will change the way they do business. Chegg is trying to avoid becoming its first major victim.
The online education company was for many years the go-to source for students who wanted help with their homework, or a potential tool for plagiarism. The shift to virtual learning during the pandemic sent subscriptions and its stock price to record highs.
Then came ChatGPT. Suddenly students had a free alternative to the answers Chegg spent years developing with thousands of contractors in India. Instead of “Chegging” the solution, they began canceling their subscriptions and plugging questions into chatbots.
Since ChatGPT’s launch, Chegg has lost more than half a million subscribers who pay up to $19.95 a month for prewritten answers to textbook questions and on-demand help from experts. Its stock is down 99% from early 2021, erasing some $14.5 billion of market value. Bond traders have doubts the company will continue bringing in enough cash to pay its debts.
Though Chegg has built its own AI products, the company is struggling to convince customers and investors it still has value in a market upended by ChatGPT.
“It’s free, it’s instant, and you don’t really have to worry if the problem is there or not,” Jonah Tang, an M.B.A. candidate at Point Loma Nazarene University in San Diego, said of the advantages of using ChatGPT for homework help over Chegg.
A survey of college students by investment bank Needham found 30% intended to use Chegg this semester, down from 38% in the spring, and 62% planned to use ChatGPT, up from 43%.
“My concern is that the headwinds to Chegg’s top-line aren’t temporary—they’re more structural in nature,” said Needham analyst Ryan MacDonald.
Tomi Engdahl says:
Zachary Small / New York Times:
A painting depicting Alan Turing as the god of AI, which was created by an AI-powered humanoid robot called Ai-Da, sold at a Sotheby’s auction for nearly $1.1M — The portrait depicts the British mathematician Alan Turing as the god of artificial intelligence.
https://www.nytimes.com/2024/11/08/arts/ai-painting-alan-turing-auction.html?unlocked_article_code=1.Y04.DHh3.-X_Z3Df1KT_m&smid=url-share
Tomi Engdahl says:
Washington Post:
Research: AI disinfo amplified satire, false political narratives, and hate speech since August; users falsely assessed content authenticity 52% of the time
https://www.washingtonpost.com/technology/2024/11/09/ai-deepfakes-us-election/?pwapi_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZWFzb24iOiJnaWZ0IiwibmJmIjoxNzMxMTI4NDAwLCJpc3MiOiJzdWJzY3JpcHRpb25zIiwiZXhwIjoxNzMyNTEwNzk5LCJpYXQiOjE3MzExMjg0MDAsImp0aSI6IjE3ZDhkMTIwLTY1NzItNDNhMC1hN2ZkLTc2MzA3Y2RlNmQyYyIsInVybCI6Imh0dHBzOi8vd3d3Lndhc2hpbmd0b25wb3N0LmNvbS90ZWNobm9sb2d5LzIwMjQvMTEvMDkvYWktZGVlcGZha2VzLXVzLWVsZWN0aW9uLyJ9.VBU53lk69gsLcSxMFhkAHkI2DGEMlHtlN5JTxynSpk0
Tomi Engdahl says:
Marina Temkin / TechCrunch:
Source: AI coding assistant startup Anysphere has received unsolicited offers valuing it at as much as $2.5B, from Benchmark, Index, a16z, Thrive, and others
Benchmark, Index, others are in a wild unsolicited bidding war over Anysphere, maker of Cursor
https://techcrunch.com/2024/11/08/benchmark-index-others-are-in-a-wild-unsolicited-bidding-war-over-anysphere-maker-of-cursor/
Tomi Engdahl says:
James Zou / Nature:
A study of ~50K peer reviews for CS articles published in AI conference proceedings in 2023 and 2024: 7% to 17% of sentences in the reviews were written by LLMs
ChatGPT is transforming peer review — how can we use it responsibly?
At major computer-science publication venues, up to 17% of the peer reviews are now written by artificial intelligence. We need guidelines before things get out of hand.
https://www.nature.com/articles/d41586-024-03588-8
Since the artificial intelligence (AI) chatbot ChatGPT was released in late 2022, computer scientists have noticed a troubling trend: chatbots are increasingly used to peer review research papers that end up in the proceedings of major conferences.
There are several telltale signs. Reviews penned by AI tools stand out because of their formal tone and verbosity — traits commonly associated with the writing style of large language models (LLMs). For example, words such as commendable and meticulous are now ten times more common in peer reviews than they were before 2022. AI-generated reviews also tend to be superficial and generalized, often don’t mention specific sections of the submitted paper and lack references.
That’s what my colleagues and I at Stanford University in California found when we examined some 50,000 peer reviews for computer-science articles published in conference proceedings in 2023 and 2024. We estimate that 7–17% of the sentences in the reviews were written by LLMs on the basis of the writing style and the frequency at which certain words occur (W. Liang et al. Proc. 41st Int. Conf. Mach. Learn. 235, 29575–29620; 2024).
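The word-rate signal the study relies on can be illustrated with a toy calculation. This is a minimal sketch, not the study’s method: the two tiny “corpora” and every count below are invented purely for illustration.

```python
from collections import Counter

def rate_per_million(tokens, word):
    """Occurrences of `word` per million tokens in a token list."""
    counts = Counter(t.lower() for t in tokens)
    total = sum(counts.values())
    return counts[word] / total * 1_000_000 if total else 0.0

# Invented stand-ins for pre-2022 and post-2022 review corpora.
pre_2022 = "the method is sound and the results are clear".split()
post_2022 = ("the commendable and meticulous analysis is commendable "
             "and the results are clear").split()

for word in ("commendable", "meticulous"):
    # Compare the word's normalized rate before and after 2022.
    print(word, rate_per_million(pre_2022, word),
          rate_per_million(post_2022, word))
```

Per-million rates simply normalize for corpus size; the actual study aggregates many such word-level signals over tens of thousands of reviews into a statistical estimate of the LLM-written fraction.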
Lack of time might be one reason for using LLMs to write peer reviews. We found that the rate of LLM-generated text is higher in reviews that were submitted close to the deadline. This trend will only intensify. Already, editors struggle to secure timely reviews and reviewers are overwhelmed with requests.
Fortunately, AI systems can help to solve the problem that they have created. For that, LLM use must be restricted to specific tasks — to correct language and grammar, answer simple manuscript-related questions and identify relevant information, for instance. However, if used irresponsibly, LLMs risk undermining the integrity of the scientific process. It is therefore crucial and urgent that the scientific community establishes norms about how to use these models responsibly in the academic peer-review process.
First, it is essential to recognize that the current generation of LLMs cannot replace expert human reviewers. Despite their capabilities, LLMs cannot exhibit in-depth scientific reasoning. They also sometimes generate nonsensical responses, known as hallucinations. A common complaint from researchers who were given LLM-written reviews of their manuscripts was that the feedback lacked technical depth, particularly in terms of methodological critique (W. Liang et al. NEJM AI 1, AIoa2400196; 2024). LLMs can also easily overlook mistakes in a research paper.
Given those caveats, thoughtful design and guard rails are required when deploying LLMs. For reviewers, an AI chatbot assistant could provide feedback on how to make vague suggestions more actionable for authors before the peer review is submitted. It could also highlight sections of the paper, potentially missed by the reviewer, that already address questions raised in the review.
To assist editors, LLMs can retrieve and summarize related papers to help them contextualize the work and verify adherence to submission checklists (for instance, to ensure that statistics are properly reported). These are relatively low-risk LLM applications that could save reviewers and editors time if implemented well.
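One of the low-risk assistant tasks mentioned above (pointing a reviewer at manuscript sections that may already address a comment) can even be sketched without any LLM at all, using plain lexical overlap. The section titles and review comment below are invented for illustration:

```python
def match_sections(comment, sections):
    """Rank manuscript sections by word overlap with a review comment."""
    comment_words = set(comment.lower().split())
    scores = []
    for title, text in sections.items():
        overlap = len(comment_words & set(text.lower().split()))
        scores.append((overlap, title))
    # Highest overlap first; drop sections with no shared words.
    return [title for score, title in sorted(scores, reverse=True) if score > 0]

sections = {
    "Methods": "we report sample size power analysis and statistics",
    "Limitations": "small sample size limits generalization",
    "Results": "accuracy improved significantly across tasks",
}
print(match_sections("please justify the sample size", sections))
```

A production assistant would use an LLM or embedding model rather than raw word overlap, but the shape of the task is the same: retrieve candidate passages, then let the reviewer judge.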
LLMs might, however, make mistakes even when performing low-risk information-retrieval and summarization tasks. Therefore, LLM outputs should be viewed as a starting point, not as the final answer. Users should still cross-check the LLM’s work.
Journals and conferences might be tempted to use AI algorithms to detect LLM use in peer reviews and papers, but their efficacy is limited. Although such detectors can highlight obvious instances of AI-generated text, they are prone to producing false positives — for example, by flagging text written by scientists whose first language is not English as AI-generated. Users can also avoid detection by strategically prompting the LLM. Detectors often struggle to distinguish reasonable uses of an LLM — to polish raw text, for instance — from inappropriate ones, such as using a chatbot to write the entire report.
Ultimately, the best way to prevent AI from dominating peer review might be to foster more human interactions during the process. Platforms such as OpenReview encourage reviewers and authors to have anonymized interactions, resolving questions through several rounds of discussion. OpenReview is now being used by several major computer-science conferences and journals.
Tomi Engdahl says:
Cristina Criddle / Financial Times:
Meta, OpenAI, Microsoft, and other AI companies have created their own internal benchmarks for AI as new models approach or exceed 90% accuracy on public tests
https://www.ft.com/content/866ad6e9-f8fe-451f-9b00-cb9f638c7c59
Tomi Engdahl says:
Artificial Intelligence
How to Improve the Security of AI-Assisted Software Development
CISOs need an AI visibility and KPI plan that supports a “just right” balance to enable optimal security and productivity outcomes.
https://www.securityweek.com/how-to-improve-the-security-of-ai-assisted-software-development/
By now, it’s clear that the artificial intelligence (AI) “genie” is out of the bottle – for good. This extends to software development, as a GitHub survey shows that 92 percent of U.S.-based developers are already using AI coding tools both in and outside of work. They say AI technologies help them improve their skills (as cited by 57 percent), boost productivity (53 percent), focus on building/creating instead of repetitive tasks (51 percent) and avoid burnout (41 percent).
It’s safe to say that AI-assisted development will emerge even more as a norm in the near future. Organizations will have to establish policies and best practices to effectively manage it all, just as they’ve done with cloud deployments, Bring Your Own Device (BYOD) and other tech-in-the-workplace trends. But such oversight remains a work in progress. Many developers, for example, engage in what’s called “shadow AI” by using these tools without the knowledge or approval of their organization’s IT department or management.
Those managers include chief information security officers (CISOs), who are responsible for determining the guardrails, so developers understand which AI tools and practices are OK, and which aren’t. CISOs need to lead a transition from the uncertainty of shadow AI to a more known, controlled and well-managed Bring Your Own AI (BYOAI) environment.
Tomi Engdahl says:
How machine learning works
https://www.nanobitteja.fi/uutiset.html?240527
Using machine learning to identify promising research directions is a growing trend in materials science, because it can help researchers significantly reduce the number of experiments and the amount of time needed to screen new materials.
Machine learning could speed up the development of next-generation batteries, which could revolutionize energy-storage technologies everywhere.
The researchers used machine learning to make the search for promising compositions more efficient when evaluating the materials needed for the positive electrode of sodium-ion batteries.
“The approach taken in our study offers an effective method for identifying promising compositions from a wide range of potential candidates,” Komaba notes. “Moreover, this method can be extended to more complex material systems, such as quinary transition-metal oxides.”
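The screening idea can be sketched in miniature. This is purely illustrative and not the study’s model: the composition features, the simple distance-weighted surrogate, and every number below are invented.

```python
# Sketch: rank candidate electrode compositions by predicted capacity
# using a distance-weighted average over a few measured examples.

def predict_capacity(candidate, measured):
    """Predict capacity for a composition vector from nearby measured
    (composition, capacity) pairs, weighted by inverse distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    weights = [(1.0 / (dist(candidate, comp) + 1e-9), cap)
               for comp, cap in measured]
    total = sum(w for w, _ in weights)
    return sum(w * cap for w, cap in weights) / total

# Invented data: (Na, Mn, Ni fractions) -> measured capacity (mAh/g).
measured = [((1.0, 0.5, 0.5), 120.0),
            ((1.0, 0.7, 0.3), 135.0),
            ((1.0, 0.3, 0.7), 110.0)]
candidates = [(1.0, 0.6, 0.4), (1.0, 0.4, 0.6), (1.0, 0.8, 0.2)]

# Screen candidates without running a lab experiment for each one.
ranked = sorted(candidates, key=lambda c: predict_capacity(c, measured),
                reverse=True)
print(ranked[0])  # most promising candidate under this toy surrogate
```

Real materials-screening pipelines use far richer features and models (and active learning to choose which candidate to synthesize next), but the payoff is the same: experiments are spent only on the top-ranked compositions.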