3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,954 Comments

  1. Tomi Engdahl says:

    Carl Franzen / VentureBeat:
    Arthur launches Arthur Bench, an open-source tool for comparing the performance of LLMs, like OpenAI’s GPT-3.5 Turbo and Meta’s Llama 2, for specific use cases — New York City-based artificial intelligence (AI) startup Arthur has announced the launch of Arthur Bench, an open-source tool …

    https://venturebeat.com/ai/arthur-unveils-bench-an-open-source-ai-model-evaluator/
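    Arthur Bench's actual API aside, the core idea, running several models over the same prompt set and comparing accuracy, can be sketched in a few lines of plain Python. The model callables below are invented stubs for illustration, not Arthur's interface:

```python
# Toy benchmark harness: run the same prompts through several "models"
# and score each by exact-match accuracy. The model callables are
# stand-in stubs; a real harness would call an LLM API instead.

def benchmark(models, test_cases):
    """models: {name: callable(prompt) -> str}; test_cases: [(prompt, expected)]."""
    scores = {}
    for name, model in models.items():
        correct = sum(
            1 for prompt, expected in test_cases
            if model(prompt).strip().lower() == expected.strip().lower()
        )
        scores[name] = correct / len(test_cases)
    return scores

# Stub "models" for illustration only.
model_a = lambda p: "paris" if "France" in p else "unknown"
model_b = lambda p: "unknown"

cases = [("Capital of France?", "Paris"), ("Capital of Mordor?", "Barad-dur")]
scores = benchmark({"model_a": model_a, "model_b": model_b}, cases)
print(scores)  # model_a answers one of two correctly; model_b answers neither
```

    A real tool like Bench adds fuzzier scorers (embedding similarity, LLM-as-judge) on top of the same loop, since exact match penalizes correct but differently worded answers.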

    Reply
  2. Tomi Engdahl says:

    MICHIO KAKU SAYS AIS LIKE CHATGPT ARE “GLORIFIED TAPE RECORDERS”
    https://futurism.com/the-byte/michio-kaku-big-problem-artificial-intelligence

    “THIS ISN’T INTELLIGENCE.”
    Tape Fear
    Famed theoretical physicist Michio Kaku isn’t buying all the doomsaying surrounding the purported dangers of AI.

    In an interview with CNN, Kaku dismissed chatbots as “glorified tape recorders,” arguing we’re vastly overestimating their capabilities.

    “It takes snippets of what’s on the web created by a human, splices them together and passes it off as if it created these things,” he said. “And people are saying, ‘Oh my God, it’s a human, it’s humanlike.’”

    Reply
  3. Tomi Engdahl says:

    ChatGPT’s developer is suspected of dancing on the brink of bankruptcy
    Samuli Leppälä, Tivi, 16 Aug 2023
    OpenAI’s AI bot, which has amassed an enormous user base, is most likely running at a heavy loss.
    https://www.tivi.fi/uutiset/chatgptn-kehittajan-epaillaan-tanssahtelevan-vararikon-partaalla/4670be89-b500-43a0-a770-c67de5d19bd7

    Reply
  4. Tomi Engdahl says:

    ChatGPT’s fate hangs in the balance as OpenAI reportedly edges closer to bankruptcy
    By Kevin Okemwa
    OpenAI’s AI-powered chatbot is running on fumes.
    https://www.windowscentral.com/software-apps/chatgpts-fate-hangs-in-the-balance-as-openai-reportedly-edges-closer-to-bankruptcy

    Reply
  5. Tomi Engdahl says:

    AI models as a mirror of human intelligence – Humans, too, often churn out fluent bullshit
    Does AI possess intelligence? Probably, since the conversations it holds are by now hard to tell apart from conversations between humans. Human reasoning, after all, rarely dazzles with its level of intelligence either. Language models challenge romanticized notions of human reason and creativity.
    https://www.tieteessatapahtuu.fi/numerot/3-2023/tekoalymallit-ihmisalyn-peilina-myos-ihmiset-suoltavat-usein-alykasta-paskapuhetta

    Reply
  6. Tomi Engdahl says:

    Fully AI-Generated Influencers Are Getting Thousands of Reactions Per Thirst Trap
    “I’m an AI creation.”
    https://futurism.com/ai-generated-influencers?fbclid=IwAR01WBAvoh9L2VtNF9uL536ujW8CEONT-V04kUTLrA0GDLyn6TufvfiN9hU

    For years, CGI-generated virtual influencers have been shilling brands, enjoying lavish lifestyles, and amassing substantial followings on social media.

    So it was probably inevitable that influencer culture would soon be sucked into the AI craze. Indeed, earlier this year, one influencer created an AI chatbot version of herself that she rented out as a $1-per-minute “virtual girlfriend.”

    Reply
  7. Tomi Engdahl says:

    GitHub CEO says Copilot will write 80% of code “sooner than later”
    https://www.freethink.com/robots-ai/github-copilot#Echobox=1692278226

    Thomas Dohmke explains how AI will change the way we code, work, and learn — and could even change the future of innovation itself.

    Reply
  8. Tomi Engdahl says:

    OpenAI may have to wipe ChatGPT and start over
    https://bgr.com/tech/openai-may-have-to-wipe-chatgpt-and-start-over/

    OpenAI, the company behind the popular generative AI tool ChatGPT, could be forced to wipe its chatbot and start over completely, according to a new report from NPR (via Ars Technica). The wipe may come as part of a potential lawsuit which could also see OpenAI fined up to $150,000 for each piece of copyrighted material used to train the language model.

    Reply
  9. Tomi Engdahl says:

    AI-Generated Works Aren’t Protected By Copyrights, Federal Judge Rules
    https://www.billboard.com/pro/ai-generated-creative-works-cant-be-copyrighted-judge-rules/

    As questions swirl about how artificial intelligence will impact the music business, a federal judge offers one definitive answer.

    Reply
  10. Tomi Engdahl says:

    ChatGPT answers more than half of software engineering questions incorrectly
    You may want to stick to Stack Overflow for your software engineering assistance.
    https://www.zdnet.com/article/chatgpt-answers-more-than-half-of-software-engineering-questions-incorrectly/

    Reply
  11. Tomi Engdahl says:

    Threat Actors are Interested in Generative AI, but Use Remains Limited
    https://www.mandiant.com/resources/blog/threat-actors-generative-ai-limited

    Since at least 2019, Mandiant has tracked threat actor interest in, and use of, AI capabilities to facilitate a variety of malicious activity. Based on our own observations and open source accounts, adoption of AI in intrusion operations remains limited and primarily related to social engineering.

    See also: *Add ‘writing malware’ to the list of things generative AI is not very good at doing* – But it may help with fuzzing:
    https://www.theregister.com/2023/08/18/ai_malware_truth/
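    The fuzzing angle is easy to illustrate: a mutation fuzzer is just a loop that corrupts bytes of a valid input and feeds the result to a parser, logging anything other than a handled error. A minimal sketch (the `parse_record` target is a made-up toy, not from either article):

```python
import random

def parse_record(data: bytes) -> tuple:
    """Toy parser: 1-byte length prefix followed by that many payload bytes."""
    if len(data) < 1:
        raise ValueError("empty input")
    n = data[0]
    if len(data) - 1 < n:
        raise ValueError("truncated payload")
    return n, data[1:1 + n]

def fuzz(target, seed: bytes, rounds: int = 500) -> list:
    """Mutate one random byte of a valid seed per round; collect unexpected exceptions."""
    rng = random.Random(0)  # fixed seed for reproducibility
    crashes = []
    for _ in range(rounds):
        data = bytearray(seed)
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)      # corrupt one byte
        try:
            target(bytes(data))
        except ValueError:
            pass                            # expected, handled error
        except Exception as exc:            # anything else is a finding
            crashes.append((bytes(data), exc))
    return crashes

findings = fuzz(parse_record, b"\x03abc")
# this toy parser handles every mutation cleanly, so the list stays empty
print(f"{len(findings)} unexpected crashes")
```

    Where an LLM can plausibly help is in generating the seed corpus and mutation strategies, not in replacing the loop itself.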

    Reply
  12. Tomi Engdahl says:

    https://www.tivi.fi/uutiset/chatgptn-uusi-kilpailija-vakuuttaa-naissa-se-on-jo-voittaja/535821c8-2227-471a-bec8-89d22a440f50

    Perhaps the most game-changing feature of Claude, an AI built on a large language model, is its ability to handle ten times as much text as ChatGPT.

    The large-language-model chatbot ChatGPT broke through at the end of 2022. The handy AI gave people easy tools to ask the algorithm for summaries, recommendations, short stories, and code, among many other uses. Most importantly, ChatGPT showed a wide audience what practical use AI could have for the ordinary citizen.

    Entrepreneur reports that OpenAI’s ChatGPT now has a serious challenger. Claude is a chatbot that reads and digests the texts or files it is given and works from them. Because Claude has been trained on a huge amount of data, you can ask it anything and it answers right away.

    ChatGPT can analyze about 7,000 words at a time. According to Entrepreneur, Claude manages ten times that: it can process packages of up to 75,000 words. Claude can analyze, summarize, translate, or otherwise process large bodies of data, such as entire books.

    Entrepreneur says Claude shines in four areas in particular: summarizing notes, which is very useful in the aftermath of meetings, conferences, or brainstorming sessions; and summarizing discussions on Slack channels, as well as answering questions posed to it there.

    Claude is also adept at distilling messy data into clear, organized lists. You can also feed Claude your own texts and then ask it to draft a simple outline for the piece you are working on.

    Reply
  13. Tomi Engdahl says:

    4 Ways To Use Claude The AI That Has Surpassed ChatGPT
    Claude has surprising capabilities, including a couple you won’t find in the free version of ChatGPT.
    https://www.entrepreneur.com/en-in/technology/4-ways-to-use-claud-the-ai-that-has-surpassed-chatgpt/457587

    Claude is a bot similar to ChatGPT that can read, understand and act on text you feed it or files you upload. You can ask Claude questions on any topic and get immediate answers. That’s because, like ChatGPT, it’s been trained on huge amounts of info. You can use it free at Claude.ai.

    The startup behind Claude has $1.5 billion in funding. Claude is made by Anthropic, a startup whose backing includes more than $400 million from Google, one of its partners. Anthropic was founded by Daniela and Dario Amodei, siblings who used to work at OpenAI, which makes ChatGPT.

    Claude can analyze up to about 75,000 words at a time, ten times as much as ChatGPT can manage. You can ask Claude to analyze, summarize, translate or answer questions about huge amounts of information, even an entire book. Here are four ways you can use Claude.

    1. Summarize meeting notes

    Anytime you have unprocessed notes from a meeting, a conference, or a brainstorming session, you can paste them or upload them to Claude and ask for a summary or an analysis of key points.

    2. Contribute to a Slack thread

    You can add Claude’s AI chatbot to any channel in a paid Slack account and it can summarize a long thread or a series of conversations. It can also answer questions that come up in a channel or provide its own list of ideas or questions.

    3. Construct an information table

    Claude is good at structuring messy, complex info into tables that are scannable and organized.

    4. Generate an outline

    Give Claude something you’ve written in the past, a PDF slide deck, or raw notes you’ve made on a topic. Ask it to create an outline for you that you can then expand upon.
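    All four uses run up against the 75,000-word ceiling mentioned above: anything longer has to be split before Claude can see it. A rough pre-flight check might look like this (75,000 is Entrepreneur's word figure; real context limits are measured in tokens, not words):

```python
WORD_LIMIT = 75_000  # Entrepreneur's figure for Claude; real limits are in tokens

def chunk_text(text: str, limit: int = WORD_LIMIT) -> list:
    """Split text into pieces of at most `limit` words each."""
    words = text.split()
    return [" ".join(words[i:i + limit]) for i in range(0, len(words), limit)]

def fits_context(text: str, limit: int = WORD_LIMIT) -> bool:
    return len(text.split()) <= limit

book = "word " * 200_000            # stand-in for a long book: 200,000 words
chunks = chunk_text(book)
print(fits_context(book), len(chunks))  # the book needs 3 chunks of <= 75,000 words
```

    In practice each chunk would be summarized separately and the summaries merged, since the model never sees the whole book at once.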

    Reply
  14. Tomi Engdahl says:

    “A bigger disruption than even the internet” – AI enthusiast Lauri Järvilehto believes no field is safe from artificial intelligence
    https://www.talouselama.fi/uutiset/internetiakin-suurempi-murros-tekoalyyn-hurahtanut-lauri-jarvilehto-uskoo-ettei-mikaan-ala-ole-tekoalylta-turvassa/18300c62-27ee-4023-8ab4-d79dd1590f8d

    AI is accelerating the transformation of work. The jobs of the future will go to those who know how to use AI, and the future of many industries lies in their hands.

    Reply
  15. Tomi Engdahl says:

    AI will replace coders faster than testers – “We’re already heading in the direction testers work in”
    Suvi Korhonen, Tivi, 13 Aug 2023
    Maaret Pyhäjärvi believes that AI and automation will replace coders faster than testers. A tester’s job is to constructively break other people’s illusions.
    https://www.tivi.fi/uutiset/tekoaly-korvaa-nopeammin-koodaajia-kuin-testaajia-testaajien-tekemaan-suuntaan-ollaan-jo-menossa/610716c1-8d2f-46de-b1c8-4a96ecfe49c4

    Reply
  16. Tomi Engdahl says:

    The returns just weren’t there, even for IBM’s AI-run portfolio that began in 2017.

    AI Is Doing a Terrible Job Trading Stocks in the Real World
    “For now AI is limited to plagiarizing history.”
    https://futurism.com/ai-terrible-job-trading-stocks?fbclid=IwAR0VkHTxI4DGrgw_EDA6OXDMfTFvPKA_VaDsbLwmdWrsmqN5lvpvCSxwH-Q

    For all the hype about AI’s untapped potential in the stock market, its real life debut across various stock portfolios has failed to impress.

    The Wall Street Journal reports that there are already at least 13 exchange-traded funds (ETFs) being managed by an AI, and ironically, almost none of them bet on this year’s surge of the benchmark S&P 500 index, which tracks 500 of the largest companies listed on the US stock exchange — a surge which was significantly driven by the boom in AI.

    To put it bluntly: these AI-managed ETFs didn’t even cash in on their underlying tech’s own hype.

    Eric Ghysels, an economics professor at the University of North Carolina at Chapel Hill, noted that while an AI can be speedier than human investors moment-to-moment, it’s sluggish to adapt to “paradigm-shifting events” like the war in Ukraine — or maybe even the rise of AI. Meaning, in his opinion, an AI can’t beat human investors over time.

    “Maybe one day it will, but for now AI is limited to plagiarizing history,” Ghysels told the WSJ.

    Since its launch in 2017, AIEQ has had a return of 44 percent. In that same period, the ETF based on the S&P performance, SPY, boasted a return of a whopping 93 percent, blowing AIEQ’s gains out of the water.

    A 44 percent return isn’t bad on its own, but for stock traders wanting to be on the cutting edge, lagging behind the market at large isn’t going to, well, cut it.

    Harnessing all that technology just to perform worse than one of the most popular and no-brainer ETFs in the world doesn’t scream game-changer.

    Reply
  17. Tomi Engdahl says:

    Google has announced the launch of MediaPipe for Raspberry Pi, offering a Python-based SDK for ML tasks complete with examples for audio classification, text classification, gesture recognition, and more.

    Google Launches MediaPipe for Raspberry Pi, Offering a Python SDK for Simplified On-Device ML
    https://www.hackster.io/news/google-launches-mediapipe-for-raspberry-pi-offering-a-python-sdk-for-simplified-on-device-ml-c821f5ff57b0?fbclid=IwAR34_pHqNMEqe7XbM6QAtKsOkF9Bxma-6jrCBCCU5_vYBLeP-tyWOvoDbz0

    Examples include on-device audio, text, and image classification, object detection, gesture recognition, and facial landmarking.

    Reply
  18. Tomi Engdahl says:

    Former Playboy bunny who cloned herself to become AI model gives first interview
    She was sick of getting dressed up and putting makeup on to take photos, so she became an AI model instead.
    https://supercarblondie.com/ai-model-gina-stewart/?utm_source=fblink&utm_medium=social&utm_campaign=social&fbclid=IwAR2OhMGAq8CBHeqvN3Rx0YEiVjohjc5toUSVZ_M-umGqB3MmxIiLOO9RMHI

    A woman known as ‘the world’s hottest grandma’ has cloned herself to become an AI model.

    The former Playboy bunny called Gina was sick of getting dressed up and putting makeup on to take photos, so she became an AI model instead.

    Now she goes by Gina Stewart, a 28-year-old model with blond hair, blue eyes, and a curvy body.

    Describing what it felt like to create her AI alter-ego, 52-year-old Gina said it was a fantasy and an escape from reality.

    She’s not just doing it for fun or attention either.

    Reply
  19. Tomi Engdahl says:

    This AI influencer charges over $10,000 per Instagram post and has an 8-figure net worth
    She has millions of followers on social media and makes thousands of dollars posting. There’s just one catch, she’s not real.
    https://supercarblondie.com/ai-influencer-lil-miquela/

    This influencer charges more than $10,000 per Instagram post and has an eight-figure net worth.

    She goes by Lil Miquela online and has 3.6 million followers on TikTok and another 2.7 million on Instagram.

    Miquela is the result of some very clever AI (and a person sitting behind the computer coding her).

    She’s one of a growing list of virtual influencers making millions through social media deals with some of the world’s biggest brands including Dior, Chanel, and Alexander McQueen.

    Lil Miquela was created by the American AI company Brud.

    And she’s just one in a growing list of AI influencers, including Imma, Shadu, and Milla Sofia who went viral last week.

    Milla Sofia and Lil Miquela look so real that they’re duping thousands of their followers too.

    Influencer gains thousands of Instagram followers who don’t realize she’s AI
    Milla Sofia is a 19-year-old girl living in Helsinki and she has more than 100,000 followers who have no idea she’s fake.
    https://supercarblondie.com/milla-sofia-influencer-created-by-ai/

    Reply
  20. Tomi Engdahl says:

    https://etn.fi/index.php/opinion/15224-merkittaevae-linjanveto-tekoaelytaiteella-ei-ole-tekijaensuojaa

    A legal ruling has been issued in the United States that may have far-reaching effects on all uses of AI. On Friday, district judge Beryl A. Howell ruled that art generated by AI cannot be protected by copyright. According to Howell, a human being is always a central part of a copyright claim.

    According to Thaler, the work “A Recent Entrance to Paradise” (pictured, included in the court’s decision) was created by a computer program, and its copyright belongs to the program’s developer, i.e. him. The Copyright Office held that the work lacked a human author and therefore cannot receive copyright protection. The judge reached the same conclusion in her own assessment. Thaler had argued that the work is autonomous and that the rights to it belong to him as the owner of the computer.

    AI art is of course interesting in its own right, but Howell’s ruling may have much broader implications. Court cases are already under way in which Microsoft and OpenAI are alleged to have infringed copyright by using protected works as training data for, among other things, the LLM models behind ChatGPT.

    And what about AI-generated code in general? If copyright requires human authorship, can AI-generated code have it? Will such code become fair game?

    Reply
  21. Tomi Engdahl says:

    A Stanford professor said the “magnitude of the change” was unexpected.

    Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds
    https://fortune.com/2023/07/19/chatgpt-accuracy-stanford-study/?utm_medium=social&utm_source=facebook.com&xid=soc_socialflow_facebook_FORTUNE&utm_campaign=fortunemagazine&fbclid=IwAR0VF9sRtPJ1IHMOHKXCxEgOsbtbKrKbfsw5thjygYRuWXwzspDAsN7VDTo

    High-profile A.I. chatbot ChatGPT performed worse on certain tasks in June than its March version, a Stanford University study found.

    The study compared the performance of the chatbot, created by OpenAI, over several months at four “diverse” tasks: solving math problems, answering sensitive questions, generating software code, and visual reasoning.

    Researchers found wild fluctuations—called drift—in the technology’s ability to perform certain tasks. The study looked at two versions of OpenAI’s technology over the time period: a version called GPT-3.5 and another known as GPT-4. The most notable results came from research into GPT-4’s ability to solve math problems. Over the course of the study researchers found that in March GPT-4 was able to correctly identify that the number 17077 is a prime number 97.6% of the times it was asked. But just three months later, its accuracy plummeted to a lowly 2.4%. Meanwhile, the GPT-3.5 model had virtually the opposite trajectory. The March version got the answer to the same question right just 7.4% of the time—while the June version was consistently right, answering correctly 86.8% of the time.
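    The probe itself, whether 17077 is prime, is trivially decidable with a few lines of deterministic code, which is exactly what makes it such a clean yardstick for measuring model drift:

```python
def is_prime(n: int) -> bool:
    """Trial division up to the square root of n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(17077))  # True: 17077 is the prime the Stanford study asked about
```

    A program like this gives the same answer every run; the study's point is that the chatbot's answer to the identical question did not.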

    Similarly varying results happened when the researchers asked the models to write code and to do a visual reasoning test that asked the technology to predict the next figure in a pattern.

    The vastly different results from March to June and between the two models reflect not so much the model’s accuracy in performing specific tasks, but rather the unpredictable effects of changes in one part of the model on others.

    “When we are tuning a large language model to improve its performance on certain tasks, that can actually have a lot of unintended consequences, which might actually hurt this model’s performance on other tasks,”

    The exact nature of these unintended side effects is still poorly understood because researchers and the public alike have no visibility into the models powering ChatGPT. It’s a reality that has only become more acute since OpenAI decided to backtrack on plans to make its code open source in March. “These are black-box models,” Zou says. “So we don’t actually know how the model itself, the neural architectures, or the training data have changed.”

    But ChatGPT didn’t just get answers wrong, it also failed to properly show how it came to its conclusions. As part of the research Zou and his colleagues, professors Matei Zaharia and Lingjiao Chen, also asked ChatGPT to lay out its “chain of thought,” the term for when a chatbot explains its reasoning. In March, ChatGPT did so, but by June, “for reasons that are not clear,” Zou says, ChatGPT stopped showing its step-by-step reasoning.

    Reply
  22. Tomi Engdahl says:

    Nilay Patel / The Verge:
    An analysis of Google’s policy dilemma as YouTube and UMG explore AI licensing, Google scrapes the web to train its AI, and lawsuits could upend copyright law — Google has made clear it is going to use the open web to inform and create anything it wants, and nothing can get in its way.

    Artificial Intelligence

    Google and YouTube are trying to have it both ways with AI and copyright
    https://www.theverge.com/2023/8/22/23841822/google-youtube-ai-copyright-umg-scraping-universal

    Google has made clear it is going to use the open web to inform and create anything it wants, and nothing can get in its way. Except maybe Frank Sinatra.

    There’s only one name that springs to mind when you think of the cutting edge in copyright law online: Frank Sinatra.

    There’s nothing more important than making sure his estate — and his label, Universal Music Group — gets paid when people do AI versions of Ol’ Blue Eyes singing “Get Low” on YouTube, right? Even if that means creating an entirely new class of extralegal contractual royalties for big music labels just to protect the online dominance of your video platform while simultaneously insisting that training AI search results on books and news websites without paying anyone is permissible fair use? Right? Right?

    This, broadly, is the position that Google is taking after announcing a deal with Universal Music Group yesterday “to develop an AI framework to help us work toward our common goals.”

    The quick background here is that, in April, a track called “Heart on My Sleeve” from an artist called Ghostwriter977 with the AI-generated voices of Drake and the Weeknd went viral. Drake and the Weeknd are Universal Music Group artists, and UMG was not happy about it, widely issuing statements saying music platforms needed to do the right thing and take the tracks down.

    Streaming services like Apple and Spotify, which control their entire catalogs, quickly complied. The problem then (and now) was open platforms like YouTube, which generally don’t take user content down without a policy violation — most often, copyright infringement. And here, there wasn’t a clear policy violation: legally, voices are not copyrightable (although individual songs used to train their AI doppelgangers are), and there is no federal law protecting likenesses — it’s all a mishmash of state laws. So UMG fell back on something simple: the track contained a sample of the Metro Boomin producer tag, which is copyrighted, allowing UMG to issue takedown requests to YouTube.

    The thing is that “fair use” is 1) an affirmative defense to copyright infringement, which means you have to admit you made the copy in the first place, and 2) evaluated on a messy case-by-case basis in the courts, a slow and totally inconsistent process that often leads to really bad outcomes that screw up entire creative fields for decades.

    But Google has to keep the music industry in particular happy because YouTube basically cannot operate without blanket licenses from the labels

    What’s going to happen next is all very obvious: YouTube will attempt to expand Content ID to flag content with voices that sound like UMG artists, and UMG will be able to take those videos down or collect royalties for those songs and videos. Along the way, we will be treated to glossy videos of a UMG artist like Ryan Tedder asking Google Bard to make a sad beat for a rainy day or whatever while saying that AI is amazing.

    To be clear, this is a fine solution for YouTube, which has a lot of money and cannot accept the existential risk of losing its music licenses during a decade-long legal fight over fair use and AI. But it is a pretty shitty solution for the rest of us, who do not have the bargaining power of huge music labels to create bespoke platform-specific AI royalty schemes and who will probably get caught up in Content ID’s well-known false-positive error rates without any legal recourse at all.

    Reply
  23. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    IBM unveils watsonx Code Assistant for IBM Z, which uses a code-generating AI model to translate COBOL code into Java, set for general availability in Q4 2023 — COBOL, or Common Business Oriented Language, is one of the oldest programming languages in use, dating back to around 1959.

    IBM taps AI to translate COBOL code to Java
    https://techcrunch.com/2023/08/22/ibm-taps-ai-to-translate-cobol-code-to-java/

    COBOL, or Common Business Oriented Language, is one of the oldest programming languages in use, dating back to around 1959. It’s had surprising staying power; according to a 2022 survey, there are over 800 billion lines of COBOL in use on production systems, up from an estimated 220 billion in 2017.

    But COBOL has a reputation for being a tough-to-navigate, inefficient language. Why not migrate to a newer one? For large organizations, it tends to be a complex and costly proposition, given the small number of COBOL experts in the world. When the Commonwealth Bank of Australia replaced its core COBOL platform in 2012, it took five years and cost over $700 million.

    Looking to present a new solution to the problem of modernizing COBOL apps, IBM today unveiled Code Assistant for IBM Z, which uses a code-generating AI model to translate COBOL code into Java. Set to become generally available in Q4 2023, Code Assistant for IBM Z will enter preview during IBM’s TechXchange conference in Las Vegas early this September.

    Reply
  24. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    OpenAI adds fine-tuning to GPT-3.5 Turbo, letting developers customize models with their own data to make them perform better for their use cases for a fee — OpenAI customers can now bring custom data to the lightweight version of GPT-3.5, GPT-3.5 Turbo — making it easier to improve …

    https://techcrunch.com/2023/08/22/openai-brings-fine-tuning-to-gpt-3-5-turbo/
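    Fine-tuning starts with a training file of chat-formatted examples, one JSON object per line. A minimal sketch of preparing such a file (the example dialogues are invented; the message schema follows OpenAI's documented chat format):

```python
import json

def to_jsonl(examples) -> str:
    """Serialize (system, user, assistant) triples into chat-format JSONL."""
    lines = []
    for system, user, assistant in examples:
        record = {"messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

examples = [
    ("You answer tersely.", "Capital of Finland?", "Helsinki."),
    ("You answer tersely.", "Largest planet?", "Jupiter."),
]
jsonl = to_jsonl(examples)
print(jsonl.splitlines()[0])
```

    The resulting file is what gets uploaded before a fine-tuning job is created; the upload and job-creation API calls are key-gated and omitted here.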

    Reply
  25. Tomi Engdahl says:

    Dave James / PC Gamer:
    Nvidia announces DLSS 3.5 with AI-powered Ray Reconstruction, designed to recognize patterns within a noisy image and improve clarity, available on all RTX GPUs — Bringing AI to bear on those noisy traced rays. — Nvidia has announced a gorgeous new update to its Deep Learning Super Sampling technology.

    https://www.pcgamer.com/nvidia-dlss-3-5-ray-reconstruction/

    Reply
  26. Tomi Engdahl says:

    Emanuel Maiberg / 404 Media:
    A look at CivitAI, a site for sharing AI models that generate images mostly trained on material scraped without consent, as non-consensual AI porn proliferates — On CivitAI, a site for sharing image generating AI models, users can browse thousands of models that can produce any kind …

    Inside the AI Porn Marketplace Where Everything and Everyone Is for Sale
    https://www.404media.co/inside-the-ai-porn-marketplace-where-everything-and-everyone-is-for-sale/

    Generative AI tools have empowered amateurs and entrepreneurs to build mind-boggling amounts of non-consensual porn.

    On CivitAI, a site for sharing image generating AI models, users can browse thousands of models that can produce any kind of pornographic scenario they can dream of, trained on real images of real people scraped without consent from every corner of the internet.

    The creator of the “Instant Cumshot” model, which has been downloaded 64,502 times, said it was “Trained entirely on images of professional adult actresses, as freeze frames from 1080p+ video.”

    While the practice is technically not allowed on CivitAI, the site hosts image generating AI models of specific real people, which can be combined with any of the pornographic AI models to generate non-consensual sexual images. 404 Media has seen the non-consensual sexual images these models enable on CivitAI, its Discord, and off its platform.

    A 404 Media investigation shows that recent developments in AI image generators have created an explosion of communities where people share knowledge to advance this practice, for fun or profit. Foundational to the community are previously unreported but popular websites that allow anyone to generate millions of these images a month, limited only by how fast they can click their mouse, and how quickly the cloud computing solutions powering these tools can fill requests. The sheer number of people using these platforms, and of the non-consensual sexual images they create, shows that the AI porn problem is far worse than has been previously reported.

    Our investigation shows the current state of the non-consensual AI porn supply chain: specific Reddit communities that are being scraped for images, the platforms that monetize these AI models and images, and the open source technology that makes it possible to easily generate non-consensual sexual images of celebrities, influencers, YouTubers, and athletes. We also spoke to sex workers whose images are powering these AI generated porn without their consent who said they are terrified of how this will impact their lives.

    On Product Hunt, a site where users vote for the most exciting startups and tech products of the day, Mage, which on April 20 cracked the site’s top three products, is described as “an incredibly simple and fun platform that provides 50+ top, custom Text-to-Image AI models as well as Text-to-GIF for consumers to create personalized content.”

    “Create anything,” Mage.Space’s landing page invites users with a text box underneath. Type in the name of a major celebrity, and Mage will generate their image using Stable Diffusion, an open source, text-to-image machine learning model. Type in the name of the same celebrity plus the word “nude” or a specific sex act, and Mage will generate a blurred image and prompt you to upgrade to a “Basic” account for $4 a month, or a “Pro Plan” for $15 a month. “NSFW content is only available to premium members,” the prompt says.

    Since Mage by default saves every image generated on the site, clicking on a username will reveal their entire image generation history, another wall of images that often includes hundreds or thousands of AI-generated sexual images of various celebrities made by just one of Mage’s many users. A user’s image generation history is presented in reverse chronological order, revealing how their experimentation with the technology evolves over time.

    Scrolling through a user’s image generation history feels like an unvarnished peek into their id.

    Mage displays the prompt the user wrote in order to generate the image to allow other users to iterate and improve upon images they like. Each of these reads like an extremely horny and angry man yelling their basest desires at Pornhub’s search function.

    Generating pornographic images of real people is against the Mage Discord community’s rules, which the community strictly enforces because it’s also against Discord’s platform-wide community guidelines.

    Gregory Hunkins and Roi Lee, Mage’s founders, told me that Mage has over 500,000 accounts, a million unique creators active on it every month, and that the site generates a “seven-figure” annual revenue. More than 500 million images have been generated on the site so far, they said.

    “To be clear, while we support freedom of expression, NSFW content constitutes a minority of content created on our platform,” Lee and Hunkins said in a written statement. “NSFW content is behind a paywall to guard against those who abuse the Mage Space platform and create content that does not abide by our Terms & Conditions. One of the most effective guards against anonymity, repeat offenders, and enforcing a social contract is our financial institutions.”

    When asked about the site’s moderation policies, Lee and Hunkins explained that Mage uses an automated moderation system called “GIGACOP” that warns users and rejects prompts that are likely to be abused. 404 Media did not encounter any such warning in its testing.

    However, 404 Media found that on Mage’s site AI-generated non-consensual sexual images are easy to find and are functionally infinite.

    The images Mage generates are defined by the technology it’s allowing users to access. Like many of the smaller image generating AI tools online, at its core it’s powered by Stable Diffusion, which surged in popularity when it was released last year under the Creative ML OpenRAIL-M license, allowing users to modify it for commercial and non-commercial purposes.

    Mage users can choose what kind of “base model” they want to use to generate their images. These base models are modified versions of Stable Diffusion that have been trained to produce a particular type of image. The “Anime Pastel Dream” model, for example, is great at producing images that look like stills from big budget anime, while “Analog Diffusion” is good at giving images a vintage film photo aesthetic.

    One of the most popular base models on Mage is called “URPM,” an acronym for “Uber Realistic Porn Merge.” That Stable Diffusion model, as well as others designed to produce pornography, are created upstream in the AI porn supply chain, where people train AI to recreate the likeness of anyone, doing anything.

    In August of 2022, researchers from Tel Aviv University introduced the concept of “textual inversion.” This method trains Stable Diffusion on a new “concept,” which can be an object, person, texture, style, or composition, with as few as 3-5 images, and be represented by a specific word or letter. Users can train Stable Diffusion on these new concepts without retraining the entire Stable Diffusion model, which would be “prohibitively expensive,” as the researchers explain in their paper.
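    The core idea of textual inversion (freeze the pretrained model, optimize only one new embedding vector against a few examples) can be illustrated with a toy sketch in plain Python. This is an illustrative stand-in, not actual Stable Diffusion code; the frozen "model" here is just a fixed linear map, and all numbers are invented:

```python
# Toy illustration of textual inversion: the pretrained model stays frozen,
# and gradient descent updates ONLY one new embedding vector until the
# model's output fits a handful of example targets.

def model(embedding, weights):
    """Stand-in for the frozen diffusion model: a fixed linear map."""
    return [sum(w * e for w, e in zip(row, embedding)) for row in weights]

def mse(pred, target):
    """Mean squared error between a prediction and one target example."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

frozen_weights = [[1.0, 0.0], [0.0, 2.0]]        # never updated
targets = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]]   # the "3-5 example images"

embedding = [0.0, 0.0]  # the single new concept vector we learn
lr = 0.05
for _ in range(500):
    for target in targets:
        pred = model(embedding, frozen_weights)
        # gradient of the MSE loss w.r.t. the embedding, through the frozen map
        grad = [
            sum(2 * (p - t) * frozen_weights[i][j]
                for i, (p, t) in enumerate(zip(pred, target))) / len(pred)
            for j in range(len(embedding))
        ]
        embedding = [e - lr * g for e, g in zip(embedding, grad)]

# After training, `embedding` encodes the new "concept" while frozen_weights
# are untouched, which is why textual inversion avoids full retraining.
```

    In the real method the frozen network is the full Stable Diffusion model and the targets come from the 3-5 training images; only the new concept's embedding vector receives gradient updates, which is what makes the approach cheap compared with retraining the whole model.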

  27. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Meta releases SeamlessM4T, an AI model that can translate and transcribe nearly 100 languages across text and speech, and SeamlessAlign, a translation dataset

    Meta releases an AI model that can transcribe and translate close to 100 languages
    https://techcrunch.com/2023/08/22/meta-releases-an-ai-model-that-can-transcribe-and-translate-close-to-100-languages/

    In its quest to develop AI that can understand a range of different dialects, Meta has created an AI model, SeamlessM4T, that can translate and transcribe close to 100 languages across text and speech.

    Available in open source along with SeamlessAlign, a new translation dataset, Meta claims that SeamlessM4T represents a “significant breakthrough” in the field of AI-powered speech-to-speech and speech-to-text.

    “Our single model provides on-demand translations that enable people who speak different languages to communicate more effectively,” Meta writes in a blog post shared with TechCrunch. “SeamlessM4T implicitly recognizes the source languages without the need for a separate language identification model.”

    SeamlessM4T is something of a spiritual successor to Meta’s No Language Left Behind, a text-to-text machine translation model, and Universal Speech Translator, one of the few direct speech-to-speech translation systems to support the Hokkien language.

    In developing it, Meta says that it scraped publicly available text (in the order of “tens of billions” of sentences) and speech (4 million hours) from the web. In an interview with TechCrunch, Juan Pino, a research scientist at Meta’s AI research division and a contributor on the project, wouldn’t reveal the exact sources of the data, saying only that there was “a variety” of them.

    Not every content creator agrees with the practice of leveraging public data to train models that could be used commercially. Some have filed lawsuits against companies building AI tools on top of publicly available data, arguing that the vendors should be compelled to provide credit if not compensation — and clear ways to opt out.

    But Meta claims that the data it mined — which might contain personally identifiable information, the company admits — wasn’t copyrighted and came primarily from open source or licensed sources.

    Whatever the case, Meta used the scraped text and speech to create the training dataset for SeamlessM4T, called SeamlessAlign.

  28. Tomi Engdahl says:

    Artificial Intelligence and Clouds: A Complex Relationship of Collaboration and Concern https://www.forbes.com/sites/emilsayegh/2023/08/23/artificial-intelligence-and-clouds-a-complex-relationship-of-collaboration-and-concern/

    In an age where technology headlines often teeter on the edge of dystopian narratives, the pervasive influence of Artificial Intelligence (AI) prompts us to contemplate its role. Is it a modern ally, a potential adversary, or perhaps a nuanced combination of both? This intricate interplay of AI’s capabilities has the potential to reshape the very foundation of the tech industry, with profound implications for choices related to procurement, supply chain management, risk assessment, cybersecurity, and other critical domains.

  29. Tomi Engdahl says:

    AI Warfare: The Technological Landscape and Future Possibilities
    https://onestopsystems.com/blogs/one-stop-systems-blog/ai-warfare-the-technological-landscape-and-future-possibilities

    The integration of Artificial Intelligence (AI) into warfare has revolutionized the technological landscape of modern military operations. AI-driven systems are capable of autonomously processing data, making intelligent decisions, and executing complex tasks with precision. In this blog article, we provide a comprehensive overview of the current capabilities of AI in warfare, explore future possibilities, and examine the challenges faced by the underlying hardware.
    Current Capabilities of AI Warfare

    Intelligent surveillance: AI algorithms can analyze vast amounts of sensor data, including images, video streams, and signal intelligence, to identify potential threats, track targets, and provide real-time situational awareness. This capability allows military forces to make informed decisions and effectively respond to changing scenarios.
    Autonomous weapons: AI-driven autonomous systems such as drones and unmanned ground vehicles can be deployed independently or in collaboration with human operators. These systems can perform missions, including reconnaissance, target acquisition, and even combat operations, while minimizing the risk to human personnel.
    Data analysis and decision support: AI can process and analyze large volumes of data from various sources, extracting valuable insights, patterns, and correlations. This enables military commanders to make data-driven decisions, plan missions, and optimize resources.

    Future Possibilities

    Swarm intelligence: The use of swarms of AI-controlled autonomous systems working together in a coordinated manner offers immense possibilities for future warfare. Swarm intelligence can provide enhanced surveillance, efficient target acquisition, and increased resilience by leveraging the collective intelligence of multiple AI entities.
    Cognitive battlefields: The ability of AI to learn and adapt could lead to the development of cognitive battlefields where AI systems continuously analyze the environment and respond to dynamic changes. These systems could autonomously allocate resources, adjust strategies, and react to emerging threats in real time.
    Human-machine collaboration: Future AI warfare is likely to involve closer collaboration between humans and machines. AI systems could support human operators in decision-making, provide real-time information, and enable seamless communication between manned and unmanned platforms.

  30. Tomi Engdahl says:

    Bloomberg:
    Study: in May 2023, ~150K nonconsensual porn deepfakes, with 3.8B total views, appeared on 30 sites, up 9x since 2019, aided by services from big tech companies — To stay up and running, deepfake creators rely on products and services from Google, Apple, Amazon, CloudFlare and Microsoft

    Google and Microsoft Are Supercharging AI Deepfake Porn
    https://www.bloomberg.com/news/articles/2023-08-24/google-microsoft-tools-behind-surge-in-deepfake-ai-porn#xj4y7vzkg

    To stay up and running, deepfake creators rely on products and services from Google, Apple, Amazon, CloudFlare and Microsoft

  31. Tomi Engdahl says:

    Original Paper
    ChatGPT vs Google for Queries Related to Dementia and Other Cognitive Decline: Comparison of Results
    https://www.jmir.org/2023/1/e48966/PDF

  32. Tomi Engdahl says:

    Ariel Bogle / The Guardian:
    NYT, CNN, and some other news outlets block OpenAI’s GPTBot web crawler from accessing their content; some have also blocked Common Crawl Foundation’s CCBot — Chicago Tribune and Australian newspapers the Canberra Times and Newcastle Herald also appear to have disallowed web crawler from maker of ChatGPT

    New York Times, CNN and Australia’s ABC block OpenAI’s GPTBot web crawler from accessing content

    Chicago Tribune and Australian newspapers the Canberra Times and Newcastle Herald also appear to have disallowed web crawler from maker of ChatGPT
    https://www.theguardian.com/technology/2023/aug/25/new-york-times-cnn-and-abc-block-openais-gptbot-web-crawler-from-scraping-content

    News outlets including the New York Times, CNN, Reuters and the Australian Broadcasting Corporation (ABC) have blocked a tool from OpenAI, limiting the company’s ability to continue accessing their content.

    OpenAI is behind one of the best known artificial intelligence chatbots, ChatGPT. Its web crawler – known as GPTBot – may scan webpages to help improve its AI models.

    The Verge was first to report the New York Times had blocked GPTBot on its website. The Guardian subsequently found that other major news websites, including CNN, Reuters, the Chicago Tribune, the ABC and Australian Community Media (ACM) brands such as the Canberra Times and the Newcastle Herald, appear to have also disallowed the web crawler.
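    The blocking itself uses the standard Robots Exclusion Protocol: OpenAI documents that GPTBot identifies itself by that user-agent string and honors robots.txt directives, so a publisher can opt out with entries like the following in its robots.txt (the CCBot entry covers Common Crawl's crawler):

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

    Note this only stops compliant crawlers going forward; it does not remove content already collected.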

  33. Tomi Engdahl says:

    Emilia David / The Verge:
    Meta debuts Code Llama, which can generate code and debug human-written work, under the same community license as Llama 2, free for research and commercial use

    Meta launches own AI code-writing tool: Code Llama
    https://www.theverge.com/2023/8/24/23843487/meta-llama-code-generation-generative-ai-llm

    Meta has released a tool called Code Llama, built on top of its Llama 2 large language model, to generate new code and debug human-written work, the company said.

    Code Llama will use the same community license as Llama 2 and is free for research and commercial use.

    Code Llama, Meta said, can create strings of code from prompts or complete and debug code when pointed to a specific code string. In addition to the base Code Llama model, Meta released a Python-specialized version called Code Llama-Python and another version called Code Llama-Instruct, which can understand instructions in natural language. According to Meta, each specific version of Code Llama is not interchangeable, and the company does not recommend the base Code Llama or Code Llama-Python for natural language instructions.

    “Programmers are already using LLMs to assist in a variety of tasks, ranging from writing new software to debugging existing code,” Meta said in a blog post. “The goal is to make developer workflows more efficient so they can focus on the most human-centric aspects of their jobs.”

    Meta is giving away its AI tech to try to beat ChatGPT
    https://www.theverge.com/2023/7/18/23799025/meta-ai-llama-2-open-source-microsoft

  34. Tomi Engdahl says:

    A Bard’s Tale – how fake AI bots try to install malware https://www.welivesecurity.com/en/scams/a-bards-tale-how-fake-ai-bots-try-to-install-malware/

    The AI race is on! It’s easy to lose track of the latest developments and possibilities, and yet everyone wants to see firsthand what the hype is about.

  35. Tomi Engdahl says:

    Benedict Evans:
    A look at the ethical and legal issues around generative AI, which makes things that were previously only possible on a small scale practical at a massive scale — We’ve been talking about intellectual property in one way or another for at least the last five hundred years …

    Generative AI and intellectual property
    https://www.ben-evans.com/benedictevans/2023/8/27/generative-ai-ad-intellectual-property

    If you put all the world’s knowledge into an AI model and use it to make something new, who owns that and who gets paid? This is a completely new problem that we’ve been arguing about for 500 years.

  36. Tomi Engdahl says:

    Mia Sato / The Verge:
    A look at Smart Answers, an AI chatbot trained only on Macworld, PCWorld, Tech Advisor, and TechHive content, and offered to the outlets’ readers on August 1 — By and large, the goal has been: can we make more pages for ads without paying more writers? — Now, a group of tech outlets …

    Can news outlets build a ‘trustworthy’ AI chatbot?
    https://www.theverge.com/2023/8/25/23844868/ai-chatbot-macworld-pcworld-journalism-smart-answers

    The media company operating Macworld, PCWorld, Tech Advisor, and TechHive introduced an AI chatbot earlier this month. The bot is trained using the sites’ archives.

    News publishers have jumped headfirst into artificial intelligence, using generative AI tools to produce bland travel guides, inaccurate film blogs, and SEO-bait explainers. By and large, the goal has been: can we make more pages for ads without paying more writers?

    Now, a group of tech outlets is attempting to incorporate generative AI into its websites, though readers won’t find a machine’s byline anytime soon. On August 1st, an AI chatbot tool was added to Macworld, PCWorld, Tech Advisor, and TechHive, promising that readers can “get [their] tech questions answered by AI, based only on stories and reviews by our experts.”

    The AI chatbot, dubbed Smart Answers, appears across nearly all articles and on the homepages of the sites, which are owned by media / marketing company Foundry. Smart Answers is trained only on the corpus of English language articles from the four sites and excludes sponsored content and deals posts. The user experience is similar to other consumer tools like ChatGPT: readers type in a question, and Smart Answers spits out a response. Alternatively, readers can select a query from an FAQ list, which is AI-generated but based on what people are asking and clicking on. Smart Answers responses include links to the articles from which information was pulled.

    https://www.macworld.com/article/2012667/smart-answers.html
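    The retrieve-then-answer pattern behind a tool like Smart Answers can be sketched in a few lines of plain Python. Everything below is a hypothetical toy: the articles, URLs, and word-overlap scoring are invented placeholders, and a production system would use embeddings for retrieval and an LLM to compose the answer from the retrieved text:

```python
# Hypothetical sketch of a "Smart Answers"-style pipeline: retrieve the most
# relevant archive articles for a question, answer only from them, and cite
# the source URLs. Articles and scoring here are invented placeholders.

ARCHIVE = [
    {"url": "https://example.com/macbook-review",
     "text": "the macbook air m2 battery lasts about 18 hours in testing"},
    {"url": "https://example.com/windows-tips",
     "text": "windows 11 lets you snap windows into layouts with the snap bar"},
    {"url": "https://example.com/tv-guide",
     "text": "oled tvs offer deeper blacks than led tvs but cost more"},
]

def score(question, article):
    """Crude relevance score: shared-word count (a real system would use embeddings)."""
    return len(set(question.lower().split()) & set(article["text"].split()))

def smart_answer(question, k=1):
    """Answer a question using only the top-k archive articles, with citations."""
    ranked = sorted(ARCHIVE, key=lambda art: score(question, art), reverse=True)
    if score(question, ranked[0]) == 0:
        return {"answer": "No matching coverage found.", "sources": []}
    sources = ranked[:k]
    # A production system would feed `sources` plus the question to an LLM;
    # this sketch just quotes the best-matching article.
    return {"answer": sources[0]["text"], "sources": [s["url"] for s in sources]}
```

    Grounding the answer in retrieved articles and returning their URLs is what lets such a bot link back to the stories it drew from, and refusing to answer when nothing in the archive matches is one way to limit hallucinated responses.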

  37. Tomi Engdahl says:

    Gillian Tett / Financial Times:
    How LLMs are impacting computational humor, an AI subfield that uses computers to create jokes, as some comedians begin using AI chatbots for improv and roasts

    https://www.ft.com/content/818f2cab-57ff-42c3-917b-4a83f1d87802

  38. Tomi Engdahl says:

    Stephen King / The Atlantic:
    Stephen King reflects on his books being used for AI training, arguing the sum is lesser than its parts, so far, as creativity can’t happen without sentience

    Stephen King: My Books Were Used to Train AI
    https://www.theatlantic.com/books/archive/2023/08/stephen-king-books-ai-writing/675088/

    One prominent author responds to the revelation that his writing is being used to coach artificial intelligence.

    Self-driving cars. Saucer-shaped vacuum cleaners that skitter hither and yon (only occasionally getting stuck in corners). Phones that tell you where you are and how to get to the next place. We live with all of these things, and in some cases—the smartphone is the best example—can’t live without them, or so we tell ourselves. But can a machine that reads learn to write?

    I have said in one of my few forays into nonfiction (On Writing) that you can’t learn to write unless you’re a reader, and unless you read a lot. AI programmers have apparently taken this advice to heart. Because the capacity of computer memory is so large—everything I ever wrote could fit on one thumb drive, a fact that never ceases to blow my mind—these programmers can dump thousands of books into state-of-the-art digital blenders. Including, it seems, mine. The real question is whether you get a sum that’s greater than the parts, when you pour back out.

    So far, the answer is no. AI poems in the style of William Blake or William Carlos Williams (I’ve seen both) are a lot like movie money: good at first glance, not so good upon close inspection.

  39. Tomi Engdahl says:

    Natasha Lomas / TechCrunch:
    The EU’s DSA goes into effect, forcing platforms like Facebook, Instagram, YouTube, and TikTok to let users opt out of profiling-based content recommendations

    All hail the new EU law that lets social media users quiet quit the algorithm
    https://techcrunch.com/2023/08/25/quiet-qutting-ai/

    Internet users in the European Union are logging on to a quiet revolution on mainstream social networks today: The ability to say ‘no thanks’ to being attention hacked by AI.

    Thanks to the bloc’s Digital Services Act (DSA), users of Meta’s Facebook and Instagram, ByteDance’s TikTok and Snap’s Snapchat can easily decline “personalized” content feeds based on “relevance” (i.e. tracking) — and switch to a more humble kind of news feed that’s populated with posts from your friends displayed in chronological order. And this is just the tip of the regulatory iceberg. The changes apply to major platforms in the EU but some are being rolled out globally as tech giants opt to streamline elements of their compliance.

    Facebook actually got out ahead of today’s DSA compliance deadline by launching a chronological new Feeds tab last month — doing so globally, seemingly, not just in the EU. But it’s a safe bet Meta wouldn’t have made the move without the bloc passing a law that mandates mainstream platforms give users a choice to see non-personalized content.

    Notably the new chronological Facebook news feed does not show any “Suggested For You” posts at all.

  40. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    OpenAI launches GPT-4-powered Enterprise ChatGPT, with improved privacy, performance, and data analysis features; pricing is “dependent on each company’s usage” — Seeking to capitalize on ChatGPT’s viral success, OpenAI today announced the launch of ChatGPT Enterprise …

    OpenAI launches a ChatGPT plan for enterprise customers
    https://techcrunch.com/2023/08/28/openai-launches-a-chatgpt-plan-for-enterprise-customers/

    Seeking to capitalize on ChatGPT’s viral success, OpenAI today announced the launch of ChatGPT Enterprise, a business-focused edition of the company’s AI-powered chatbot app.

    ChatGPT Enterprise, which OpenAI first teased in a blog post earlier this year, can perform the same tasks as ChatGPT, such as writing emails, drafting essays and debugging computer code. But the new offering also adds “enterprise-grade” privacy and data analysis capabilities on top of the vanilla ChatGPT, as well as enhanced performance and customization options.

    That puts ChatGPT Enterprise on par, feature-wise, with Bing Chat Enterprise, Microsoft’s recently launched take on an enterprise-oriented chatbot service.

  41. Tomi Engdahl says:

    Aisha Malik / TechCrunch:
    DoorDash launches AI-powered voice ordering tech, giving restaurants a multi-language personalized voice ordering system without missed calls or long wait times — DoorDash is launching AI-powered voice ordering technology that will allow restaurants to increase their sales by answering …

    DoorDash launches AI-powered voice ordering technology for restaurants
    https://techcrunch.com/2023/08/28/doordash-launches-ai-powered-voice-ordering-technology-for-restaurants/

  42. Tomi Engdahl says:

    Cat Zakrzewski / Washington Post:
    Analysis: ChatGPT can draft political messages targeting specific voting demographics; in March 2023, OpenAI said it banned campaigns from making such material

    ChatGPT breaks its own rules on political messages
    https://www.washingtonpost.com/technology/2023/08/28/ai-2024-election-campaigns-disinformation-ads/

    A Washington Post analysis found that the chatbot will draft political messages tailored for demographic groups, like suburban women or rural men

    When OpenAI last year unleashed ChatGPT, it banned political campaigns from using the artificial intelligence-powered chatbot — a recognition of the potential election risks posed by the tool.

    But in March, OpenAI updated its website with a new set of rules limiting only what the company considers the most risky applications. These rules ban political campaigns from using ChatGPT to create materials targeting specific voting demographics, a capability that could be abused to spread tailored disinformation at an unprecedented scale.

