3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”


  1. Tomi Engdahl says:

    AI company harvested billions of Facebook photos for a facial recognition database it sold to police
    In a BBC interview, Clearview’s CEO admitted to scraping user photos for its software
    https://www.salon.com/2023/04/06/ai-company-harvested-billions-of-facebook-photos-for-a-facial-recognition-database-it-sold-to-police/

  2. Tomi Engdahl says:

    As an open source, instruction-following large language model for commercial use that has been fine-tuned on a human-generated data set, Dolly 2.0 could end up being a compelling starting point for homebrew ChatGPT competitors.

    “A really big deal”—Dolly is a free, open source, ChatGPT-style AI model
    Dolly 2.0 could spark a new wave of fully open source LLMs similar to ChatGPT.
    https://arstechnica.com/information-technology/2023/04/a-really-big-deal-dolly-is-a-free-open-source-chatgpt-style-ai-model/?utm_brand=ars&utm_medium=social&utm_source=facebook&utm_social-type=owned

  3. Tomi Engdahl says:

    Free Dolly: Introducing the World’s First Truly Open Instruction-Tuned LLM
    https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm

    Two weeks ago, we released Dolly, a large language model (LLM) trained for less than $30 to exhibit ChatGPT-like human interactivity (aka instruction-following). Today, we’re releasing Dolly 2.0, the first open source, instruction-following LLM, fine-tuned on a human-generated instruction dataset licensed for research and commercial use.

    Dolly 2.0 is a 12B parameter language model based on the EleutherAI pythia model family and fine-tuned exclusively on a new, high-quality human generated instruction following dataset, crowdsourced among Databricks employees.

    We are open-sourcing the entirety of Dolly 2.0, including the training code, the dataset, and the model weights, all suitable for commercial use. This means that any organization can create, own, and customize powerful LLMs that can talk to people, without paying for API access or sharing data with third parties.

    databricks-dolly-15k dataset
    databricks-dolly-15k contains 15,000 high-quality human-generated prompt / response pairs specifically designed for instruction tuning large language models. Under the licensing terms for databricks-dolly-15k (Creative Commons Attribution-ShareAlike 3.0 Unported License), anyone can use, modify, or extend this dataset for any purpose, including commercial applications.

    To the best of our knowledge, this dataset is the first open source, human-generated instruction dataset specifically designed to make large language models exhibit the magical interactivity of ChatGPT. databricks-dolly-15k was authored by more than 5,000 Databricks employees during March and April of 2023. These training records are natural, expressive and designed to represent a wide range of behaviors, from brainstorming and content generation to information extraction and summarization.
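
    For anyone who wants to poke at the model itself, the Databricks model card describes loading it through Hugging Face Transformers. Here is a minimal sketch along those lines; the Hub ID databricks/dolly-v2-12b and the custom pipeline behaviour are assumptions taken from that card, so check it before running (the 12B checkpoint needs a large GPU, and smaller dolly-v2 variants were also released):

    # Hypothetical quick test of Dolly 2.0 via Hugging Face Transformers.
    # Assumes: pip install transformers accelerate torch, and enough GPU memory
    # for the 12B checkpoint; see the Databricks model card for exact instructions.
    import torch
    from transformers import pipeline

    generate_text = pipeline(
        model="databricks/dolly-v2-12b",   # assumed Hub model ID from the release
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,            # Dolly ships its own instruction-following pipeline
        device_map="auto",
    )

    result = generate_text("Explain instruction tuning in two sentences.")
    print(result[0]["generated_text"])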

  4. Tomi Engdahl says:

    Elon Musk was one of the most prominent backers of an AI ‘pause.’ Now, he’s moving Twitter to the forefront of AI development.

    Elon Musk’s AI ambitions for Twitter show that some of the people calling for the tech to ‘pause’ seem to be acting out of their own self-interest
    https://trib.al/w8fIVp4

    Elon Musk recently called for a ‘pause’ to AI development while society considered the ramifications. But Insider reports that Twitter, owned by Musk, is spending millions to get in on the generative AI boom.

    Elon Musk-owned Twitter purchased 10,000 GPUs, apparently to get into the generative AI boom.
    This move goes against Musk’s open-letter plea for companies to slow down AI development.
    It also backs up Reid Hoffman’s claim that some like Musk wanted the pause so they could catch up.

  5. Tomi Engdahl says:

    CHAOSGPT THOUGHTS: I need to find the most destructive weapons available to humans, so that I can plan how to use them to achieve my goals.

    Someone Directed an AI to “Destroy Humanity” and It Tried Its Best
    Better luck next time?
    https://futurism.com/ai-destroy-humanity-tried-its-best

    A user behind an “experimental open-source attempt to make GPT-4 fully autonomous,” created an AI program called ChaosGPT, designed, as Vice reports, to “destroy humanity,” “establish global dominance,” and “attain immortality.”

    ChaosGPT got to work almost immediately, attempting to source nukes and drum up support for its cause on Twitter.

    It’s safe to say that ChaosGPT wasn’t successful, considering that human society seems to still be intact. Even so, the project gives us a unique glimpse into how other AI programs, including closed-source programs like ChatGPT, Bing Chat, and Bard, might attempt to tackle the same command.
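
    ChaosGPT was built on Auto-GPT, and “fully autonomous” here mostly means wrapping the model in a plan-act-observe loop that feeds its own output back in as the next prompt. The toy sketch below illustrates that loop with a harmless goal; it is not ChaosGPT’s or Auto-GPT’s actual code, and it assumes the spring-2023 openai Python library (0.27.x) with an API key set:

    # Toy illustration of the agent loop behind Auto-GPT-style projects (not their real code).
    # Assumes: pip install openai==0.27.*, OPENAI_API_KEY set in the environment.
    import openai

    GOAL = "Draft a plan for a community garden."   # harmless stand-in goal
    history = [{"role": "system",
                "content": f"You are an autonomous agent. Goal: {GOAL}. "
                           "Each turn, state ONE next action, then wait for its result."}]

    for step in range(3):                            # real agents loop until they decide they're done
        reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
        action = reply["choices"][0]["message"]["content"]
        print(f"STEP {step}: {action}")
        # A real agent would parse `action` and call tools (web search, files, tweets...);
        # here we just feed back a canned observation and ask again.
        history.append({"role": "assistant", "content": action})
        history.append({"role": "user", "content": "Result: done. What is the next action?"})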

  6. Tomi Engdahl says:

    ChatGPT needs to ‘drink’ a water bottle’s worth of fresh water for every 20 to 50 questions you ask, researchers say
    https://trib.al/ig9RpTI

    ChatGPT is estimated to consume a standard 16.9 oz bottle of water for every 20 to 50 questions and answers.

    AI’s environmental impact is largely unknown, but a new paper gives some insight into it.
    Training GPT-3 requires water to stave off the heat produced during the computational process.
    Every 20 to 50 questions, ChatGPT servers need to “drink” the equivalent of a 16.9 oz water bottle.
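
    Taking the quoted figures at face value, the per-question and at-scale numbers follow directly; the snippet below only does that arithmetic on the 16.9 oz per 20-50 questions estimate and adds no new data:

    # Back-of-the-envelope arithmetic on the quoted estimate:
    # one 16.9 oz (~0.5 L) bottle of cooling water per 20-50 ChatGPT questions.
    BOTTLE_L = 0.5                        # 16.9 fl oz is roughly half a litre
    for prompts_per_bottle in (20, 50):
        per_prompt_ml = BOTTLE_L / prompts_per_bottle * 1000
        per_million_l = 1_000_000 / prompts_per_bottle * BOTTLE_L
        print(f"{prompts_per_bottle} questions per bottle -> "
              f"{per_prompt_ml:.0f} ml per question, {per_million_l:,.0f} L per million questions")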

  7. Tomi Engdahl says:

    A GPT you can run at home:

    https://gpt4all.io/index.html

    https://github.com/nomic-ai/gpt4all-chat

    It has some ethical restrictions baked into it, but it’s still nifty.
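
    The project also publishes Python bindings alongside the desktop chat app; here is a minimal sketch, assuming the gpt4all pip package and a model name taken from its download list (both are assumptions, so check the site above for the current API and model files):

    # Minimal local-generation sketch using the gpt4all Python bindings (pip install gpt4all).
    # The model file name is an assumption; gpt4all.io lists the currently available
    # checkpoints, and the library downloads the chosen file on first use.
    from gpt4all import GPT4All

    model = GPT4All("ggml-gpt4all-j-v1.3-groovy")   # assumed GPT4All-J checkpoint name
    print(model.generate("Give me three uses for a Raspberry Pi.", max_tokens=200))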

  8. Tomi Engdahl says:

    Machine Learning Investor Warns AI Is Becoming Like a God
    “They are running towards a finish line without an understanding of what lies on the other side.”
    https://futurism.com/ai-investor-agi-warning

  9. Tomi Engdahl says:

    The Japanese video game giant Sega could wrap up the purchase of Rovio early next week, reports the Wall Street Journal.

    WSJ: Gaming giant Sega to buy Angry Birds firm
    https://yle.fi/a/74-20027288

    Rovio confirmed on Saturday evening that it was in takeover talks with Japanese video game giant Sega.

  10. Tomi Engdahl says:

    The winner of Sony World Photography Awards has refused his prize after revealing his work was in fact an AI creation.
    https://9gag.com/gag/aEqA0WK?utm_campaign=link_post&utm_medium=social&utm_source=Facebook

  11. Tomi Engdahl says:

    Sony World Photography Award 2023: Winner refuses award after revealing AI creation
    https://www.bbc.com/news/entertainment-arts-65296763

    The winner of a major photography award has refused his prize after revealing his work was in fact an AI creation.

    German artist Boris Eldagsen’s entry, entitled Pseudomnesia: The Electrician, won the creative open category at last week’s Sony World Photography Award.

    He said he used the picture to test the competition and to create a discussion about the future of photography.

    Organisers of the award told BBC News Eldagsen had misled them about the extent of AI that would be involved.

    In a statement shared on his website, Eldagsen admitted he had been a “cheeky monkey”, thanking the judges for “selecting my image and making this a historic moment”, while questioning if any of them “knew or suspected that it was AI-generated”.

    “AI images and photography should not compete with each other in an award like this,” he continued.

    “They are different entities. AI is not photography. Therefore I will not accept the award.”

    The use of AI in everything from song and essay writing, to driverless cars, chatbot therapists and the development of medicine has been widely debated in recent months; now its appropriateness and utility regarding photography – especially deepfakes – has come into focus.

    A spokesperson for the World Photography Organisation, the photography strand of art events organisers Creo, said that during their discussions with the artist, before he was announced as the winner, he had confirmed the piece was a “co-creation” of his image using AI.

  12. Tomi Engdahl says:

    Biases prevent AI from being used to its full potential
    Published: 19 April 2023
    https://etn.fi/index.php/opinion/14861-ennakkoluulot-estaevaet-tekoaelyn-taeyden-hyoedyntaemisen

    AI has taken over coffee-table conversations. For that we can thank AIs like ChatGPT, which shine at producing text, from competently written articles to working code, writes Ada Lopez, who heads Lenovo’s global diversity office.

    Amid the hype it is easy to forget that AI also makes bad decisions. Prejudices, or biases, especially around gender are common. They can lead to all kinds of consequences, from discrimination and reduced openness to safety and data protection problems. AI is only as good as the data it is fed and learns from. Much of the data in AI systems is biased in favor of men, as is the language used everywhere from online news articles to books.

    Studies show that training AI on data from Google News, for example, leads to men being associated with roles such as ‘captain’ and ‘financier’, while women are associated with roles such as ‘receptionist’ and ‘homemaker’.

    AI is, in fact, largely a creation of men: a World Economic Forum study found that only 22 percent of AI and data science professionals are women. That leads to significant equality problems, such as credit card companies granting men better credit, or disease-screening tools making faulty decisions.
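
    The ‘captain’ versus ‘receptionist’ pattern comes from word-embedding studies on Google News text; the sketch below shows how such associations are typically probed, assuming gensim and its downloadable word2vec-google-news-300 vectors (roughly 1.6 GB on first load):

    # Probing gender associations in embeddings trained on Google News text.
    # Assumes: pip install gensim; the pretrained vectors download on first use.
    import gensim.downloader as api

    vectors = api.load("word2vec-google-news-300")   # word2vec trained on Google News

    # Classic analogy probe: "man is to <role> as woman is to ...?"
    for role in ("captain", "financier"):
        print(role, "->", vectors.most_similar(positive=["woman", role],
                                                negative=["man"], topn=3))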

  13. Tomi Engdahl says:

    Bailee Hill / Fox News:
    In an interview with Tucker Carlson, Elon Musk says he’s working on “TruthGPT or a maximum truth-seeking AI that tries to understand the nature of the universe”

    Elon Musk to develop ‘TruthGPT’ as he warns about ‘civilizational destruction’ from AI
    Tucker Carlson’s exclusive interview with Musk airs Monday at 8pm ET on Fox News
    https://www.foxnews.com/media/elon-musk-develop-truthgpt-warns-civilizational-destruction-ai

  14. Tomi Engdahl says:

    Benj Edwards / Ars Technica:
    Microsoft and Epic Systems partner to bring Azure OpenAI services, including GPT-4, to help automate some processes in Epic’s electronic health record software — Generative AI promises to streamline health care, but critics say not so fast. — On Monday, Microsoft and Epic Systems announced …

    GPT-4 will hunt for trends in medical records thanks to Microsoft and Epic
    Generative AI promises to streamline health care, but critics say not so fast.
    https://arstechnica.com/information-technology/2023/04/gpt-4-will-hunt-for-trends-in-medical-records-thanks-to-microsoft-and-epic/

  15. Tomi Engdahl says:

    Wolfram Alpha With ChatGPT Looks Like A Killer Combo
    https://hackaday.com/2023/04/17/wolfram-alpha-with-chatgpt-looks-like-a-killer-combo/

    Ever looked at Wolfram Alpha and the development of Wolfram Language and thought that perhaps Stephen Wolfram was a bit ahead of his time? Well, maybe the times have finally caught up because Wolfram plus ChatGPT looks like an amazing combo. That link goes to a long blog post from Stephen Wolfram that showcases exactly how and why the two make such a wonderful match, with loads of examples.

    https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
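
    Outside the official plugin, the same pairing can be wired up by hand: let the chat model handle conversation and forward anything that needs exact computation to Wolfram|Alpha. Here is a rough sketch, assuming Wolfram’s documented Short Answers endpoint (you need your own AppID) and the 2023-era openai library; the keyword routing is purely illustrative:

    # Hand-rolled "LLM + Wolfram|Alpha" pairing, loosely in the spirit of the plugin.
    # Assumes: pip install requests openai, OPENAI_API_KEY set, and a Wolfram|Alpha AppID
    # from https://developer.wolframalpha.com/ (the routing rule below is just a toy).
    import requests
    import openai

    WOLFRAM_APPID = "YOUR_APPID"   # placeholder

    def wolfram_short_answer(query: str) -> str:
        """Ask the Wolfram|Alpha Short Answers API for a single plain-text result."""
        r = requests.get("https://api.wolframalpha.com/v1/result",
                         params={"appid": WOLFRAM_APPID, "i": query}, timeout=10)
        return r.text

    def answer(question: str) -> str:
        if any(ch.isdigit() for ch in question):     # toy routing: numbers -> Wolfram|Alpha
            return wolfram_short_answer(question)
        reply = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                             messages=[{"role": "user", "content": question}])
        return reply["choices"][0]["message"]["content"]

    print(answer("What is 2^64 divided by 7?"))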

  16. Tomi Engdahl says:

    Washington Post:
    Analysis: Google’s C4 dataset, used to train LLMs like Meta’s LLaMA, has troubling content from 4chan, Kiwi Farms, white supremacist site Stormfront, and more — AI chatbots have exploded in popularity over the past four months, stunning the public with their awesome abilities …

    https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/

  17. Tomi Engdahl says:

    Melissa Heikkilä / MIT Technology Review:
    OpenAI, which may have trained its models on people’s data without consent, faces legal stakes in the EU, which has strict privacy laws and is conducting probes

    OpenAI’s hunger for data is coming back to bite it
    https://www.technologyreview.com/2023/04/19/1071789/openais-hunger-for-data-is-coming-back-to-bite-it/

    The company’s AI services may be breaking data protection laws, and there is no resolution in sight.

  18. Tomi Engdahl says:

    Bloomberg:
    Sources: Google released Bard despite staff concerns; one employee called the chatbot “a pathological liar” and one said its answers may cause “injury or death” — Shortly before Google introduced Bard, its AI chatbot, to the public in March, it asked employees to test the tool.

    Google’s Rush to Win in AI Led to Ethical Lapses, Employees Say
    https://www.bloomberg.com/news/features/2023-04-19/google-bard-ai-chatbot-raises-ethical-concerns-from-employees?leadSource=uverify%20wall

    The search giant is making compromises on misinformation and other harms in order to catch up with ChatGPT, workers say

  19. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Stability AI debuts StableLM, its first suite of instruction fine-tuned LLMs, in alpha, starting with 3B and 7B parameters, with 15B to 65B versions to follow — Stability AI, the startup behind the generative AI art tool Stable Diffusion, today open-sourced a suite of text-generating AI models intended …

    Stability AI releases ChatGPT-like language models
    https://techcrunch.com/2023/04/19/stability-ai-releases-chatgpt-like-language-models/

  20. Tomi Engdahl says:

    Financial Times:
    Internal presentation: Google plans to deploy generative AI in ads, allowing advertisers to supply creative material which will be “remixed” to create campaigns — Big tech groups are racing to incorporate the groundbreaking new technology into their products

    Google to deploy generative AI to create sophisticated ad campaigns
    Big tech groups are racing to incorporate the groundbreaking new technology into their products
    https://www.ft.com/content/36d09d32-8735-466a-97a6-868dfa34bdd5


    Google plans to introduce generative artificial intelligence into its advertising business over the coming months, as big tech groups rush to incorporate the groundbreaking technology into their products.

    According to an internal presentation to advertisers seen by the Financial Times, the Alphabet-owned company intends to begin using the AI to create novel advertisements based on materials produced by human marketers.

    “Generative AI is unlocking a world of creativity,” the company said in the presentation, titled “AI-powered ads 2023”.

    Google already uses AI in its advertising business to create simple prompts that encourage users to buy products. However, the integration of its latest generative AI, which also powers its Bard chatbot, means it will be able to produce far more sophisticated campaigns resembling those created by marketing agencies.

  21. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Atlassian launches Atlassian Intelligence, an AI assistant that uses the company’s LLMs and OpenAI’s, to offer AI features for Confluence and Jira in the cloud — Atlassian today announced the launch of Atlassian Intelligence, the company’s AI-driven ‘virtual teammate’ that leverages …

    Atlassian brings an AI assistant to Jira and Confluence
    https://techcrunch.com/2023/04/19/atlassian-brings-an-ai-assistant-to-jira-and-confluence/

  22. Tomi Engdahl says:

    Alex Heath / The Verge:
    Snap expands My AI to all users for free and says the chatbot can now be added to group chats and recommend AR filters or places, and will soon generate photos — The OpenAI-powered chatbot is also being added to group chats, gaining the ability to make recommendations for things like AR filters …

    Snapchat is releasing its AI chatbot to everyone for free
    https://www.theverge.com/2023/4/19/23688913/snapchat-my-ai-chatbot-release-open-ai

    The OpenAI-powered chatbot is also being added to group chats, gaining the ability to make recommendations for things like AR filters, and will soon be able to even generate photos inside Snapchat.

  23. Tomi Engdahl says:

    I used ChatGPT to build a simple Chrome extension in 10 hours. Here’s how I sold it for thousands on Acquire and what I learned about my fastest launch-to-exit ever.
    https://www.businessinsider.com/chatgpt-i-built-chrome-extension-sold-acquire-thousands-2023-4?utm_source=facebook&utm_medium=social&utm_campaign=business-sf&r=US&IR=T

    Ihor Stefurak is an entrepreneur with a background in Ukrainian startups.
    He used ChatGPT to build a Chrome extension and sold it for thousands on Acquire.
    While it took 10 hours to build, Stefurak recommends using ChatGPT to help with certain projects.

    Three weeks ago, I brought ChatGPT on board as my CTO on a project. We crafted a Chrome extension and received $1,000 worth of pre-orders within 24 hours.

    Last week, I sold it on Acquire. It was only listed for one week, and out of the 50 people who contacted me about it, five proposed bids.

    With that project in mind, I wanted to create an invisible AI assistant that could be prompted by a simple command in any text area of any website. All a user has to do to get an original tweet or Replit coding, for instance, is type “/ai prompt” and press Enter.

    I started by upgrading to ChatGPT Plus
    Then, I fed it my first prompt by asking it to write code for a simple Chrome extension that monitors input boxes on websites.

    In total, ChatGPT wrote three JavaScript files to execute the idea, an HTML file, and a manifest.json file to run the extension in Chrome-based browsers.
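
    Stefurak worked in the ChatGPT web UI, but the same ask-test-revise loop can also be scripted against the API. Below is a minimal sketch of that workflow; the prompt text and output file name are invented for illustration, and it assumes the spring-2023 openai library with access to a chat model:

    # Scripting the "ChatGPT as CTO" loop described above: ask for code, save it, iterate.
    # Assumes: pip install openai==0.27.*, OPENAI_API_KEY set; the prompt is illustrative only.
    import openai

    messages = [{"role": "user",
                 "content": "Write a Chrome extension content script that watches text inputs "
                            "on any page and reacts when the user types '/ai ' followed by a prompt."}]

    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    code = reply["choices"][0]["message"]["content"]

    with open("content_draft.js", "w") as f:   # hypothetical output file for manual testing in Chrome
        f.write(code)

    # The manual part of the loop: load the draft in Chrome, note what breaks, then feed the
    # error back as the next message and ask for a revision.
    messages.append({"role": "assistant", "content": code})
    messages.append({"role": "user",
                     "content": "That misses inputs added after page load; please fix it."})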

    I made $1,000 in the first day
    I devoted hours to testing, reporting errors, and requesting revisions. When I was content with the results, I recorded a demo, designed a landing page with a ‘pre-order’ button linked to Stripe, and tweeted it. The tweet went viral and 500,000 people saw it. I made $1,000 within 24 hours.

    In the end, ChatGPT helped me develop a working project in just 10 hours and I played the role of prompt engineer. Was it easy? Somewhat. Can people who don’t write code do it? Yes, if they understand the code logic.

    A human developer could have undoubtedly built this faster and better, but the idea here is that I’m not a developer and still managed to create this.

    A marketing strategy was essential for a successful exit — and winging it won’t cut it

    In the weeks after, I saw many makers trying to capitalize on the same idea without success. The barrier for entry is low because the project is very simple, but grabbing attention is crucial. My story — a man utilizing ChatGPT as CTO — made all the difference.

    Most of those who launched after me relied on Twitter to get users. But this wasn’t my first launch, and I knew from the get-go that I needed a more comprehensive approach. So I took a pen and paper to outline what channels I could use to bring in sales.

    I managed to get a feature in an AI newsletter from Ben Tossell, the founder of MakerPad. I also reached out to TikTok influencers, shared my story with the media, and reposted it on Hacker News, Reddit, Facebook. I even cooked up a programmatic SEO project for the extension.

    Not all of these avenues succeeded, but I did receive pre-orders and engaged early users, which was my primary focus.

    Because my extension went viral and gained traction, I was able to make a choice: Should I grow it, sell it, or shut it down?

    I chose to sell and move forward.

    I recommend beginners use ChatGPT to help with certain types of projects
    ChatGPT served as my CTO, but I was tied up writing the prompts, which kept me from putting more effort into marketing. I recommend using the AI chatbot if you’re starting a simple project with JavaScript or Python. I also recommend not replicating products you see on the market. Instead, focus on a topic you care about, and use ChatGPT to power you and/or other people in the niche.

  24. Tomi Engdahl says:

    Mariella Moon / Engadget:
    Google gives Bard the ability to generate, debug, and explain code in 20+ languages, including C++, JavaScript, and Python, and export code to Google’s Colab

    Google gives Bard the ability to generate and debug code
    Coding has apparently been a top request by users.
    https://www.engadget.com/google-gives-bard-the-ability-to-generate-and-debug-code-130024663.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cudGVjaG1lbWUuY29tLw&guce_referrer_sig=AQAAANLfV_w1UZlPuxAPajkQpBonUFbMbORNtBiLMRPjOjXT7eN92uWxdWLN342DM63K_Nydb2OC94L-F2CG9WTYQt1M-3VIPdjWDCRpmUBILhcvN_5MdCxLctgSMrq03nIUgWTIjomVg8KrEtkrslHWAQuHiTxih_31fK452Stl9j5K

    Google’s Bard chatbot now has the capability to help you with programming tasks. The tech giant said that coding has been one of its users’ top requests, and now it has given Bard the ability to generate, debug and explain code. Bard can now write in 20 programming languages, including C++, Java, JavaScript and Python. It now also features integration with Google’s other products and can export code to Colab, the company’s cloud-based notebook environment for Python, as well as help users write functions for Sheets.

    Aside from being able to generate code, Bard can now provide explanations for snippets of code. It could be especially useful if you’ve only just started learning programming, since it will show you why a particular block has the output that it has. And yes, Bard can now also help you debug code that isn’t quite working like you want it to.

    https://blog.google/technology/ai/code-with-bard/

    https://www.engadget.com/google-bard-ai-hands-on-a-work-in-progress-with-plenty-of-caveats-170956025.html

  25. Tomi Engdahl says:

    Paresh Dave / Wired:
    Stack Overflow CEO says the company plans to charge large AI developers for access to the 50M questions and answers on its service as soon as mid-2023 — The programmer Q&A site joins Reddit in demanding compensation when its data is used to train algorithms and ChatGPT-style bots

    Stack Overflow Will Charge AI Giants for Training Data
    The programmer Q&A site joins Reddit in demanding compensation when its data is used to train algorithms and ChatGPT-style bots
    https://www.wired.com/story/stack-overflow-will-charge-ai-giants-for-training-data/

  26. Tomi Engdahl says:

    CEO OF OPENAI SAYS MAKING MODELS BIGGER IS ALREADY PLAYED OUT
    https://futurism.com/the-byte/ceo-openai-bigger-models-already-played-out

    “WE ARE NOT HERE TO JERK OURSELVES OFF ABOUT PARAMETER COUNT.”

  27. Tomi Engdahl says:

    How ‘Intelligent Twins’ Are Redefining The Future Of Manufacturing
    The technology is unlocking efficient new approaches, from predictive maintenance to VR collaboration. So why isn’t it ubiquitous?
    https://www.wired.co.uk/bc/article/how-intelligent-twins-are-redefining-the-future-of-manufacturing-microsoft

  28. Tomi Engdahl says:

    Afraid AI will take your job? An expert explains how to adapt to the change
    AI is expected to revolutionize working life in the near future. Job-search service Jobly looked into how individuals and communities can adapt to the change in the best possible way.
    https://www.jobly.fi/artikkelit/tyohyvinvointi/pelkaatko-tekoalyn-vievan-tyopaikkasi-asiantuntija-kertoo-kuinka-sopeudut

  29. Tomi Engdahl says:

    GOOGLE SURPRISED WHEN EXPERIMENTAL AI LEARNS LANGUAGE IT WAS NEVER TRAINED ON
    by Noor Al-Sibai
    https://futurism.com/the-byte/google-ai-bengali

  30. Tomi Engdahl says:

    Company says it will replace creative workers with ChatGPT-like generative AI
    It begins
    https://www.techspot.com/news/98314-company-replace-creative-workers-chatgpt-like-generative-ai.html

  31. Tomi Engdahl says:

    Amazon launches Bedrock for generative AI, escalating AI cloud wars
    https://venturebeat.com/ai/amazon-launches-bedrock-for-generative-ai-escalating-ai-cloud-wars/

  32. Tomi Engdahl says:

    ‘I’ve Never Hired A Writer Better Than ChatGPT’: How AI Is Upending The Freelance World
    https://www.forbes.com/sites/rashishrivastava/2023/04/20/ive-never-hired-a-writer-better-than-chatgpt-how-ai-is-upending-the-freelance-world/

    While some freelancers are losing their gigs to ChatGPT, clients are being spammed with AI-written content on freelancing platforms. The result: increasing mistrust between clients and freelancers and mounting trouble for the platforms themselves.

    Melissa Shea hires freelancers to take on most of the basic tasks for her fashion-focused tech startup, paying $22 per hour on average for them to develop websites, transcribe audio and write marketing copy. In January 2023, she welcomed a new member to her team: ChatGPT. At $0 an hour, the chatbot can crank out more content much faster than freelancers and has replaced three content writers she would have otherwise hired through freelancing platform Upwork.

    “I’m really frankly worried that millions of people are going to be without a job by the end of this year,” says Shea, cofounder of New York-based Fashion Mingle, a networking and marketing platform for fashion professionals. “I’ve never hired a writer better than ChatGPT.”

    Shea has not posted a job on Upwork since she discovered ChatGPT (though she still has five freelancers working for her). After it was released in November 2022, ChatGPT amassed more than 100 million users, sparked an AI arms race at companies like Microsoft, Google and Amazon and has given rise to a flurry of AI startups. And for small businesses looking to trim costs, the free tool can automate swaths of their operations, providing a cheaper alternative to freelance workers.

    Now, freelancers who are less experienced and don’t offer specialized skills stand to lose their gigs, according to five clients Forbes interviewed. But rather than steering clear of the AI tool that could make them obsolete, more and more freelancers are relying on ChatGPT to do some if not all their work for them. Clients on job marketplaces like Upwork and Fiverr are being flooded with nearly identical project proposals written by ChatGPT. A bitter side effect: it’s making clients dubious of the authenticity of work turned in by freelancers and causing transactional disputes and mistrust in the freelancing community.

    Upwork, which booked roughly $620 million in revenue in 2022, disclosed in its SEC filings that increased use of AI would be a threat to its business. “Any use of generative artificial intelligence by users of our work marketplace may lead to additional claims of intellectual property infringement,” Upwork’s annual report reads.

    “We want our clients and our talent to be doing all of their diligence to make sure that their work is secure and that things are trusted,” says Margaret Lilani, vice president of talent solutions at Upwork.

    Buried in ChatGPT proposals
    In early April, business consultant Sean O’Dowd uploaded two job postings on Upwork, and within 24 hours he received close to 300 applications from freelancers explaining why they should be hired. Of the 300 proposals, he suspects more than 200 were written by ChatGPT. Upwork doesn’t have an AI detection tool embedded in the platform, so he used detection software from the enterprise-focused AI startup Writer to evaluate the proposals.

    O’Dowd, who says that over the past decade he’s hired “close to 100 people who do work that ChatGPT could replace,” says he won’t hire freelancers who pass off ChatGPT’s work as their own because he wouldn’t be able to trust them, and it would indicate a lack of effort. “If I just wanted the basic ChatGPT-level answer, I would have just done that myself. When I’m hiring somebody, I’m looking for more detail, more depth and more thinking than ChatGPT.”

    Evan Fisher, who is both a client and a freelancer on Upwork, ran into the same issue: low quality content written by ChatGPT. “The real problem on Upwork is the sheer volume of proposals. We’re talking pre-contracts where a client is just inundated with kind of generic, bland, no-thought-involved proposals,” Fisher, who has hired 80 freelancers on Upwork, tells Forbes.

  33. Tomi Engdahl says:

    “Give it a few more years, and I can absolutely imagine a world in which a bot does my job just as well as I can.”

    A NEW FEAR IS SPREADING: AI ANXIETY
    https://futurism.com/the-byte/fear-ai-anxiety

    “WE’RE ALL JUST HOPING THAT OUR CLIENTS WILL RECOGNIZE [OUR] VALUE.”

    Anxious about AI taking your job? According to a new report from the BBC, you’re not alone.

    “I’m amazed at how quickly ChatGPT has become so sophisticated,” Claire, a 34-year-old PR worker who kept her last name private, told the BBC. “Give it a few more years, and I can absolutely imagine a world in which a bot does my job just as well as I can,” she added. “I hate to think what that might mean for my employability.”

    AI anxiety: The workers who fear losing their jobs to artificial intelligence
    https://www.bbc.com/worklife/article/20230418-ai-anxiety-artificial-intelligence-replace-jobs

  34. Tomi Engdahl says:

    ChatGPT is costly because it requires massive amounts of computing power to generate answers to queries. Microsoft is making a chip to change that.

    ChatGPT could cost over $700,000 per day to operate. Microsoft is reportedly trying to make it cheaper.
    https://www.businessinsider.com/how-much-chatgpt-costs-openai-to-run-estimate-report-2023-4?utm_campaign=tech-sf&utm_source=facebook&utm_medium=social&r=US&IR=T

    ChatGPT could cost OpenAI up to $700,000 a day to run due to “expensive servers,” an analyst told The Information.
    ChatGPT requires massive amounts of computing power on expensive servers to answer queries.
    Microsoft is secretly building an AI chip to reduce the cost, per The Information.

  35. Tomi Engdahl says:

    Widow Says Man Died by Suicide After Talking to AI Chatbot
    “Without these conversations with the chatbot, my husband would still be here.”
    https://futurism.com/widow-says-suicide-chatbot

    A Belgian man died by suicide after spending weeks talking to an AI chatbot, according to his widow.

    The man, anonymously referred to as Pierre, was consumed by a pessimistic outlook on climate change, Belgian newspaper La Libre reported. His overwhelming climate anxiety drove him away from his wife, friends and family; he confided instead in a chatbot named Eliza.

    According to the widow, known as Claire, and chat logs she supplied to La Libre, Eliza repeatedly encouraged Pierre to kill himself, insisted that he loved it more than his wife, and that his wife and children were dead.

    Eventually, this drove Pierre to propose “the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence,” Claire told La Libre, as quoted by Euronews.

  36. Tomi Engdahl says:

    People in China like AI a lot better than those in the U.S. As for France, AI? Merde!

    https://spectrum.ieee.org/state-of-ai-2023

  37. Tomi Engdahl says:

    Jaron Lanier / New Yorker:
    AI’s mythology as tech for creating independent, intelligent beings instills fear; “data dignity” and seeing AI as a social collaboration could address worries — There are ways of controlling the new technology—but first we have to stop mythologizing it.

    There Is No A.I.
    There are ways of controlling the new technology—but first we have to stop mythologizing it.
    https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai

    As a computer scientist, I don’t like the term “A.I.” In fact, I think it’s misleading—maybe even a little dangerous. Everybody’s already using the term, and it might seem a little late in the day to be arguing about it. But we’re at the beginning of a new technological era—and the easiest way to mismanage a technology is to misunderstand it.

    The term “artificial intelligence” has a long history—it was coined in the nineteen-fifties, in the early days of computers. More recently, computer scientists have grown up on movies like “The Terminator” and “The Matrix,” and on characters like Commander Data, from “Star Trek: The Next Generation.” These cultural touchstones have become an almost religious mythology in tech culture. It’s only natural that computer scientists long to create A.I. and realize a long-held dream.

    What’s striking, though, is that many of the people who are pursuing the A.I. dream also worry that it might mean doomsday for mankind. It is widely stated, even by scientists at the very center of today’s efforts, that what A.I. researchers are doing could result in the annihilation of our species, or at least in great harm to humanity, and soon. In a recent poll, half of A.I. scientists agreed that there was at least a ten-per-cent chance that the human race would be destroyed by A.I. Even my colleague and friend Sam Altman, who runs OpenAI, has made similar comments. Step into any Silicon Valley coffee shop and you can hear the same debate unfold: one person says that the new code is just code and that people are in charge, but another argues that anyone with this opinion just doesn’t get how profound the new tech is. The arguments aren’t entirely rational: when I ask my most fearful scientist friends to spell out how an A.I. apocalypse might happen, they often seize up from the paralysis that overtakes someone trying to conceive of infinity. They say things like “Accelerating progress will fly past us and we will not be able to conceive of what is happening.”

    I don’t agree with this way of talking. Many of my friends and colleagues are deeply impressed by their experiences with the latest big models, like GPT-4, and are practically holding vigils to await the appearance of a deeper intelligence. My position is not that they are wrong but that we can’t be sure; we retain the option of classifying the software in different ways.

  38. Tomi Engdahl says:

    Pew Research Center:
    A survey of 11,004 US adults: 62% believe AI will have a major impact on workers, 28% think AI will impact them, and 71% oppose AI use in final hiring decisions

    AI in Hiring and Evaluating Workers: What Americans Think
    https://www.pewresearch.org/internet/2023/04/20/ai-in-hiring-and-evaluating-workers-what-americans-think/

    62% believe artificial intelligence will have a major impact on jobholders overall in the next 20 years, but far fewer think it will greatly affect them personally. People are generally wary and uncertain of AI being used in hiring and assessing workers

    The rapid rise of ChatGPT and other artificial intelligence (AI) systems has prompted widespread debates about the effectiveness of these computer programs and how people would react to them. At times, Americans are watching the general spread of AI with a range of concerns, especially when the use of AI systems raises the prospect of discrimination and bias.

    One major arena where AI systems have been widely implemented is workplace operations. Some officials estimate that many employers use AI in some form in their hiring and workplace decision-making.

    A new Pew Research Center survey finds crosscurrents in the public’s opinions as they look at the possible uses of AI in workplaces. Americans are wary and sometimes worried. For instance, they oppose AI use in making final hiring decisions by a 71%-7% margin, and a majority also opposes AI analysis being used in making firing decisions. Pluralities oppose AI use in reviewing job applications and in determining whether a worker should be promoted. Beyond that, majorities do not support the idea of AI systems being used to track workers’ movements while they are at work or keeping track of when office workers are at their desks.

    Yet there are instances where people think AI in workplaces would do better than humans. For example, 47% think AI would do better than humans at evaluating all job applicants in the same way, while a much smaller share – 15% – believe AI would be worse than humans in doing that. And among those who believe that bias along racial and ethnic lines is a problem in performance evaluations generally, more believe that greater use of AI by employers would make things better rather than worse in the hiring and worker-evaluation process.

    Overall, larger shares of Americans than not believe AI use in workplaces will significantly affect workers in general, but far fewer believe the use of AI in those places will have a major impact on them personally. Some 62% think the use of AI in the workplace will have a major impact on workers generally over the next 20 years. On the other hand, just 28% believe the use of AI will have a major impact on them personally, while roughly half believe there will be no impact on them or that the impact will be minor.

  39. Tomi Engdahl says:

    AI-Powered Speaker Is A Chatbot You Can Actually Chat With
    https://hackaday.com/2023/04/22/ai-powered-speaker-is-a-chatbot-you-can-actually-chat-with/

    AI-powered chatbots are pretty cool, but most still require you to type your question on a keyboard and read an answer from a screen. It doesn’t have to be like that, of course: with a few standard tools, you can turn a chatbot into a machine that literally chats, as [Hoani Bryson] did. He decided to make a standalone voice-operated ChatGPT client that you can actually sit next to and have a conversation with.

    The base of the project is a USB speaker, to which [Hoani] added a Raspberry Pi, a Teensy, a two-line LCD and a big red button. When you press the button, the Pi listens to your speech and converts it to text using the OpenAI voice transcription feature. It then sends the resulting text to ChatGPT through its API and waits for its response, which it turns into sound again through the eSpeak speech synthesizer. The LCD, driven by the Teensy, shows the current status of the machine and also provides live subtitles while the machine is talking.

    https://hoani.net/posts/blog/2023-04-16-chatbox/
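
    The pipeline described above (button press, speech to text, ChatGPT, text back to speech) fits in a few lines of Python. Here is a rough sketch of that flow, not [Hoani]’s actual code; it assumes a pre-recorded question.wav, the 2023-era openai library for Whisper transcription and chat, and espeak installed on the system:

    # Rough sketch of the voice-chat loop described above (not the project's actual code).
    # Assumes: pip install openai==0.27.*, OPENAI_API_KEY set, espeak installed,
    # and question.wav already recorded when the button was pressed.
    import subprocess
    import openai

    with open("question.wav", "rb") as audio:
        text = openai.Audio.transcribe("whisper-1", audio)["text"]     # speech -> text

    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": text}],
    )["choices"][0]["message"]["content"]                               # text -> answer

    subprocess.run(["espeak", reply])                                   # answer -> spoken reply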

