3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

5,954 Comments

  1. Tomi Engdahl says:

    ChatGPT can finally access the internet in real time, but there’s a catch
    OpenAI fixes its chatbot’s major limitations. However, the new functionality is only available to some users.
    https://www.zdnet.com/article/chatgpt-can-finally-access-the-internet-in-real-time-but-theres-a-catch/

  2. Tomi Engdahl says:

    AI boosts worker productivity: the figure is startling – one group benefits more than others
    Antti Kailio, 28 Sep 2023 (Tivi)
    Using AI improved the quality of consultants’ work by as much as 40 percent.
    https://www.tivi.fi/uutiset/tekoaly-nostaa-tyontekijan-tehoa-luku-hatkahdyttaa-yksi-ryhma-hyotyy-muita-enemman/21b4f411-ff29-4b3d-a906-b25683e98ba5

  3. Tomi Engdahl says:

    Sam Altman Says He Intends to Replace Normal People With AI
    “Comparing AI to even the idea of median or average humans is a bit offensive.”
    https://futurism.com/sam-altman-replace-normal-people-ai

    That’s one way to talk about other human beings.

    As writer Elizabeth Weil notes in a new profile of OpenAI CEO Sam Altman in New York Magazine, the powerful AI executive has a disconcerting penchant for using the term “median human,” a phrase that seemingly equates to a robotic tech bro version of “Average Joe.”

  4. Tomi Engdahl says:

    OpenAI introduces “gpt-3.5-turbo-instruct”, a new instruction language model that is as efficient as the chat-optimized GPT-3.5 Turbo.
    https://the-decoder.com/openai-releases-new-language-model-instructgpt-3-5/

    OpenAI is introducing “gpt-3.5-turbo-instruct” as a replacement for the existing Instruct models, as well as text-ada-001, text-babbage-001, text-curie-001, and the three text-davinci models, all of which will be retired on January 4, 2024.

  5. Tomi Engdahl says:

    MICROSOFT NEEDS SO MUCH POWER TO TRAIN AI THAT IT’S CONSIDERING SMALL NUCLEAR REACTORS
    https://futurism.com/the-byte/microsoft-power-train-ai-small-nuclear-reactors

    Training large language models is an incredibly power-intensive process that has an immense carbon footprint. Keeping data centers running requires a ludicrous amount of electricity that could generate substantial amounts of greenhouse emissions — depending, of course, on the energy’s source.

    Now, The Verge reports, Microsoft is betting so big on AI that it’s pushing forward with a plan to power its data centers using nuclear reactors. Yes, you read that right: a recent job listing suggests the company is planning to grow its energy infrastructure with the use of small modular reactors (SMRs).

    Microsoft is going nuclear to power its AI ambitions / Microsoft is looking at next-generation nuclear reactors to power its data centers and AI, according to a new job listing for someone to lead the way.
    https://www.theverge.com/2023/9/26/23889956/microsoft-next-generation-nuclear-energy-smr-job-hiring

  6. Tomi Engdahl says:

    Add OpenAI, Google, and Common Crawl to your robots.txt to block generative AI from stealing content and profiting from it. See https://www.cyberciti.biz/web-developer/block-openai-bard-bing-ai-crawler-bots-using-robots-txt-file/ for more info and a firewall to block those and other AI bots, too. #AI
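
    For reference, a minimal robots.txt along those lines might look like the sketch below. GPTBot and ChatGPT-User are OpenAI’s published crawler tokens, Google-Extended is Google’s opt-out token for generative-AI training, and CCBot is Common Crawl’s crawler; check each vendor’s documentation for the current names, and note that robots.txt is purely advisory, which is why the linked article also covers firewall rules for bots that ignore it.

        # Disallow known AI-training crawlers site-wide
        User-agent: GPTBot
        Disallow: /

        User-agent: ChatGPT-User
        Disallow: /

        User-agent: Google-Extended
        Disallow: /

        User-agent: CCBot
        Disallow: /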

  7. Tomi Engdahl says:

    AI Is Doing a Terrible Job Trading Stocks in the Real World
    “For now AI is limited to plagiarizing history.”
    https://futurism.com/ai-terrible-job-trading-stocks?fbclid=IwAR1D-hU7Xw9Cqv8C1mkz1hfhh8sEx3ysyVbcYMyA6JOm-QMEX0phUoAJrqE

  8. Tomi Engdahl says:

    Casey Newton / Platformer:
    Between ChatGPT’s surprisingly human voice and Meta’s AI characters, we may be witnessing the rise of a new consumer internet era featuring synthetic companions

    The synthetic social network is coming
    https://www.platformer.news/p/the-synthetic-social-network-is-coming

    Between ChatGPT’s surprisingly human voice and Meta’s AI characters, our feeds may be about to change forever

  9. Tomi Engdahl says:

    Katie Paul / Reuters:
    Nick Clegg says Meta used public Facebook and Instagram posts to train its new AI assistant and took steps to filter out private details from training datasets — Meta Platforms (META.O) used public Facebook and Instagram posts to train parts of its new Meta AI virtual assistant …

    Meta’s new AI assistant trained on public Facebook and Instagram posts
    https://www.reuters.com/technology/metas-new-ai-chatbot-trained-public-facebook-instagram-posts-2023-09-28/

    MENLO PARK, California, Sept 28 (Reuters) – Meta Platforms (META.O) used public Facebook and Instagram posts to train parts of its new Meta AI virtual assistant, but excluded private posts shared only with family and friends in an effort to respect consumers’ privacy, the company’s top policy executive told Reuters in an interview.

    Meta also did not use private chats on its messaging services as training data for the model and took steps to filter private details from public datasets used for training, said Meta President of Global Affairs Nick Clegg, speaking on the sidelines of the company’s annual Connect conference this week.

    “We’ve tried to exclude datasets that have a heavy preponderance of personal information,” Clegg said, adding that the “vast majority” of the data used by Meta for training was publicly available.

    He cited LinkedIn as an example of a website whose content Meta deliberately chose not to use because of privacy concerns.

    AI companies are weighing how to handle the private or copyrighted materials vacuumed up in the training process that their AI systems may reproduce, while facing lawsuits from authors accusing them of infringing copyrights.

    Meta AI was the most significant product among the company’s first consumer-facing AI tools unveiled by CEO Mark Zuckerberg on Wednesday at Meta’s annual products conference, Connect. This year’s event was dominated by talk of artificial intelligence, unlike past conferences which focused on augmented and virtual reality.

  10. Tomi Engdahl says:

    Olivia Solon / Bloomberg:
    Research: ahead of Slovakia’s parliamentary elections, videos with deepfake voices of politicians are spreading on apps like Facebook, Instagram, and Telegram

    Trolls in Slovakian Election Tap AI Deepfakes to Spread Disinfo
    https://www.bloomberg.com/news/articles/2023-09-29/trolls-in-slovakian-election-tap-ai-deepfakes-to-spread-disinfo#xj4y7vzkg

    Videos featuring AI-generated deepfake voices of politicians are spreading on social media ahead of the Slovak parliamentary elections this weekend, showcasing how the emergent technology is being harnessed for political disinformation.

    The clips, which include audio impersonating political opponents, are being shared on sites including Meta Platforms Inc.’s Facebook and Instagram and on messaging apps like Telegram, Reset, a research group that looks at technology’s impact on democracy, said in a report on Friday.

  11. Tomi Engdahl says:

    ChatGPT’s New Upgrade Teases AI’s Multimodal Future
    OpenAI’s chatbot learns to carry a conversation—and expect competition
    https://spectrum.ieee.org/chatgpt-multimodal

    ChatGPT isn’t just a chatbot anymore.

    OpenAI’s latest upgrade grants ChatGPT powerful new abilities that go beyond text. It can tell bedtime stories in its own AI voice, identify objects in photos, and respond to audio recordings. These capabilities represent the next big thing in AI: multimodal models.

    “Multimodal is the next generation of these large models, where it can process not just text, but also images, audio, video, and even other modalities,” says Dr. Linxi “Jim” Fan, Senior AI Research Scientist at Nvidia.

    “The future of generative AI is hyper personalization. This will happen for knowledge workers, creatives, and end users.”
    —Kyle Shannon, Storyvine

    OpenAI provides three specific multimodal features. Users can prompt the chatbot with images or voice, as well as receive responses in one of five AI-generated voices. Image input is available on all platforms, while voice is limited to the ChatGPT app for Android and iOS.

    A demo from OpenAI shows ChatGPT being used to adjust a bike seat. A befuddled cyclist first snaps a photo of their bike and asks for help lowering the seat, then follows up with photos of the bike’s user manual and a toolset. ChatGPT responds with text describing the best tool for the job and how to use it.

    These features are now available to everyone willing to pay $20 a month for a ChatGPT Plus subscription.

    Image and voice input is the natural start for ChatGPT’s multimodal capabilities.
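
    For developers, OpenAI has also been exposing multimodal input through its API. The sketch below is a minimal example using the OpenAI Python SDK, assuming the vision-capable chat-completions model identifier OpenAI documented around this time (“gpt-4-vision-preview”) and a hypothetical image URL; model names and availability change, so verify against the current API documentation.

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Ask a vision-capable model about an image, e.g. the bike-seat scenario above
        response = client.chat.completions.create(
            model="gpt-4-vision-preview",  # assumed model name; check current docs
            messages=[
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": "Which tool do I need to lower this bike seat?"},
                        {"type": "image_url", "image_url": {"url": "https://example.com/bike.jpg"}},
                    ],
                }
            ],
            max_tokens=300,
        )
        print(response.choices[0].message.content)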

  12. Tomi Engdahl says:

    Microsoft Bing Chat pushes malware via bad ads
    https://www.theregister.com/2023/09/29/microsoft_bing_chat_malware/

    Microsoft introduced its Bing Chat AI search assistant in February and a month later began serving ads alongside it to help cover costs. However, some of those adverts served by Microsoft’s own ad platform have turned out to be malicious. Security outfit Malwarebytes said on Thursday it has identified malvertising – harmful ads – distributed via Bing Chat conversations.

    “Ads can be inserted into a Bing Chat conversation in various ways,” said Jérôme Segura, director of threat intelligence, in a write-up. “One of those is when a user hovers over a link and an ad is displayed first before the organic result.” These particular bad ads require user action for any harm to be done. The victim has to click on the ad, at which point their browser will be taken to another site, which could attempt to phish their login details for a more legit service, push a malware-laden download onto them, or exploit a bug to hijack their computer, or similar.

  13. Tomi Engdahl says:

    Artificial Intelligence
    National Security Agency is Starting an Artificial Intelligence Security Center
    https://www.securityweek.com/national-security-agency-is-starting-an-artificial-intelligence-security-center/

    The NSA is starting an artificial intelligence security center — a crucial mission as AI capabilities are increasingly acquired, developed and integrated into U.S. defense and intelligence systems.

    “We maintain an advantage in AI in the United States today. That AI advantage should not be taken for granted,” NSA Director Gen. Paul Nakasone said at the National Press Club, emphasizing the threat from Beijing in particular.

    Nakasone was asked about using AI to automate the analysis of threat vectors and red-flag alerts — and he reminded the audience that U.S. intelligence and defense agencies already use AI.

    “AI helps us, but our decisions are made by humans. And that’s an important distinction,” Nakasone said. “We do see assistance from artificial intelligence. But at the end of the day, decisions will be made by humans and humans in the loop.”

    The AI security center’s establishment follows an NSA study that identified securing AI models from theft and sabotage as a major national security challenge, especially as generative AI technologies emerge with immense transformative potential for both good and evil.

    Nakasone said it would become “NSA’s focal point for leveraging foreign intelligence insights, contributing to the development of best practices guidelines, principles, evaluation, methodology and risk frameworks” for both AI security and the goal of promoting the secure development and adoption of AI within “our national security systems and our defense industrial base.”

    He said it would work closely with U.S. industry, national labs, academia and the Department of Defense as well as international partners.

  14. Tomi Engdahl says:

    Bitcoin Mining Negativity Spreading to AI as Nvidia Chips Consume Huge Energy
    https://www.ccn.com/news/bitcoin-mining-negativity-spreading-ai-nvidia-chips-energy/?fbclid=IwAR2EI-MTWmklHavZSyHnErvFZ7CNDtq1x5Q86Wec-cYw7nDStqYSSbwFd3s

    Stats show Nvidia GPUs consume huge amounts of power when used for AI.
    Demand for Nvidia chips jumped after the emergence of ChatGPT.
    Nvidia is currently under investigation by the European Union for alleged anti-competitive practices.
    The hype created by digital asset trading enhances the demand for intensive graphics cards made by the likes of Nvidia. The crypto market’s hype can vary with major events like the FTX collapse and the NFT market decline.

    Now, there’s a rising demand for computing power for a new application: artificial intelligence, or AI. And, it all started with ChatGPT.

    But AI Models Come With a Major Flaw
    To train AI models on extensive data, developers rely on transformers, the neural-network architecture behind modern generative AI, which processes data in parallel to rapidly generate multiple outcomes for a single query.

    Developers need substantial computing power, typically provided by Graphics Processing Units (GPUs), to drive transformers and meet ambitious timelines.
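
    To make the hardware point concrete: the core of a transformer is scaled dot-product attention, which is dominated by large matrix multiplications, exactly the workload GPUs accelerate. A minimal NumPy illustration of that computation (an editorial sketch, not taken from the CCN article):

        import numpy as np

        def attention(Q, K, V):
            """Scaled dot-product attention: two large matmuls plus a softmax."""
            scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (seq, seq) similarity matrix
            w = np.exp(scores - scores.max(axis=-1, keepdims=True))
            w /= w.sum(axis=-1, keepdims=True)        # softmax over the keys
            return w @ V                              # weighted mix of the values

        seq_len, d_model = 1024, 512
        rng = np.random.default_rng(0)
        Q, K, V = (rng.standard_normal((seq_len, d_model)) for _ in range(3))
        print(attention(Q, K, V).shape)               # (1024, 512)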

    While plenty of OEMs (Original Equipment Manufacturers) such as Intel, AMD, and ASUS offer a variety of GPU products, Nvidia seemingly has a near monopoly on the market (which is now under investigation by the European Union). The company’s latest RTX lineup offers staggering performance for AI developers, gamers, and crypto miners alike.

    AI Consumes Too Much Power
    However, understandably, such GPUs are rather power-hungry. To enable transformers to help developers create these AI models, GPUs draw electrical power at abnormal rates.

    As a result, critics are now pointing fingers at AI for causing a significant rise in power consumption amid discussions regarding climate change.

    Discussions on AI’s energy consumption draw parallels to past Bitcoin mining debates. Supporters claim Bitcoin mining controversies have subsided. But critics warn that AI data centers will escalate energy consumption.

    Critics even claim that Nvidia’s GPUs consume more power than the average American home.

  15. Tomi Engdahl says:

    Are Local LLMs Useful in Incident Response?
    https://isc.sans.edu/diary/Are+Local+LLMs+Useful+in+Incident+Response/30274

    LLMs have become very popular recently. I’ve been running them on my home PC for the past few months in basic scenarios to help out. I like the idea of using them to help with forensics and Incident response, but I also want to avoid sending the data to the public LLMs, so running them locally or in a private cloud is a good option.

    I use a 3080 GPU with 10GB of VRAM, which seems best suited to running 13-billion-parameter models (1). The three models I’m using for this test are Llama-2-13B-chat-GPTQ, vicuna-13b-v1.3.0-GPTQ, and Starcoderplus-Guanaco-GPT4-15B-V1.0-GPTQ. I’ve downloaded these models from huggingface.co if you want to play along at home.

    Llama 2 is Facebook’s latest general model. Vicuna is a fine-tuned Llama 1 model that is supposed to be more efficient and use less RAM. StarCoder is trained on 80+ coding languages and might do better on more technical explanations.
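
    For readers who want to try a similar setup, a minimal loading sketch is below. It assumes the auto-gptq and transformers Python packages and the “TheBloke/Llama-2-13B-chat-GPTQ” repository on Hugging Face; these APIs change quickly, so treat it as a starting point rather than the diary author’s exact procedure.

        from auto_gptq import AutoGPTQForCausalLM
        from transformers import AutoTokenizer

        model_id = "TheBloke/Llama-2-13B-chat-GPTQ"  # assumed HF repo for the model above

        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoGPTQForCausalLM.from_quantized(
            model_id,
            device="cuda:0",         # a ~10 GB VRAM GPU, like the 3080 mentioned above
            use_safetensors=True,
        )

        prompt = "List the key indicators that an email is phishing."
        inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
        output = model.generate(**inputs, max_new_tokens=200)
        print(tokenizer.decode(output[0], skip_special_tokens=True))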

  16. Tomi Engdahl says:

    Overall, these small models did poorly on this test. They do a good job on everyday language tasks, like being given text from an article and summarizing it, or helping with proofreading. A specific version of StarCoder is trained just for Python, and it also works well. As expected for small models, the more specifically they are trained, the better the results.
    https://isc.sans.edu/diary/Are+Local+LLMs+Useful+in+Incident+Response/30274

  17. Tomi Engdahl says:

    AI Algorithms Are Biased Against Skin With Yellow Hues
    https://www.wired.com/story/ai-algorithms-are-biased-against-skin-with-yellow-hues/

    Google, Meta, and others test their algorithms for bias using standardized skin tone scales. Sony says those tools ignore the yellow and red hues at work in human skin color.

    After evidence surfaced in 2018 that leading face-analysis algorithms were less accurate for people with darker skin, companies including Google and Meta adopted measures of skin tone to test the effectiveness of their AI software. New research from Sony suggests that those tests are blind to a crucial aspect of the diversity of human skin color.

    By expressing skin tone using only a sliding scale from lightest to darkest or white to black, today’s common measures ignore the contribution of yellow and red hues to the range of human skin, according to Sony researchers. They found that generative AI systems, image-cropping algorithms, and photo analysis tools all struggle with yellower skin in particular. The same weakness could apply to a variety of technologies whose accuracy is proven to be affected by skin color, such as AI software for face recognition, body tracking, and deepfake detection, or gadgets like heart rate monitors and motion detectors.

    “If products are just being evaluated in this very one-dimensional way, there’s plenty of biases that will go undetected and unmitigated,” says Alice Xiang, lead research scientist and global head of AI Ethics at Sony. “Our hope is that the work that we’re doing here can help replace some of the existing skin tone scales that really just focus on light versus dark.”

    The sparring over scales is more than academic. Finding appropriate measures of “fairness,” as AI researchers call it, is a major priority for the tech industry as lawmakers, including in the European Union and US, debate requiring companies to audit their AI systems and call out risks and flaws. Unsound evaluation methods could erode some of the practical benefits of regulations, the Sony researchers say.

    Color scales that don’t properly capture the red and yellow hues in human skin have helped bias remain undetected in image algorithms.

  18. Tomi Engdahl says:

    János Allenbach-Ammann / Euractiv:
    The European Commission starts collective risk assessments on advanced chips, AI, quantum, and biotech, the most sensitive areas for security and tech leakage

    Stricter EU controls on critical technologies possible from spring 2024
    https://www.euractiv.com/section/economy-jobs/news/stricter-eu-controls-on-critical-technologies-possible-from-spring-2024/

    The European Commission on Tuesday (3 October) announced that it would start collective risk assessments together with member states on four technology areas, which could lead to restrictive measures like export controls or development support for these technologies by spring 2024.

    The move seeks to reduce risks in technology security and technology leakage in advanced semiconductors, artificial intelligence, quantum technologies, and biotechnologies.

    “We’ve all seen what can be the risks of too much dependency, be it during the Covid pandemic or now with the Russian war in Ukraine. Europeans have paid the price for this,” Commissioner Věra Jourová said at a press conference in Strasbourg on Tuesday.

    According to the Commission, the technologies were selected due to their “transformative nature”, their risk of civil-military fusion, as well as their risk of being used in violation of human rights.

    The Commission’s recommendation to start these “collective risk assessments” is part of the EU’s “Economic Security Strategy” that the Commission published in June.

  19. Tomi Engdahl says:

    SoftBank CEO Son says artificial general intelligence will come within 10 years
    https://www.reuters.com/technology/softbank-ceo-masayoshi-son-says-artificial-general-intelligence-will-come-within-2023-10-04/?fbclid=IwAR2Hdijx1VWz5gz9T_f8M5XSTi2yy0EThv1TKDirkJIP-JKrJZFjNKQHbX8

    TOKYO, Oct 4 (Reuters) – SoftBank (9984.T) CEO Masayoshi Son said he believes artificial general intelligence (AGI), artificial intelligence that surpasses human intelligence in almost all areas, will be realised within 10 years.

    Speaking at the SoftBank World corporate conference, Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence. He noted the rapid progress in generative AI that he said has already exceeded human intelligence in certain areas.

    “It is wrong to say that AI cannot be smarter than humans as it is created by humans,” he said. “AI is now self learning, self training, and self inferencing, just like human beings.”

    Son has spoken of the potential of AGI – typically using the term “singularity” – to transform business and society for some years, but this is the first time he has given a timeline for its development.

    He also introduced the idea of “Artificial Super Intelligence” at the conference, which he claimed would be realised in 20 years and would surpass human intelligence by a factor of 10,000.

    Son is known for several canny bets that have turned SoftBank into a tech investment giant as well as some bets that have spectacularly flopped.

  20. Tomi Engdahl says:

    “We had the good wisdom to go put the whole company behind [AI].”

    If AI Is a Gold Rush, Nvidia Is Selling Shovels
    Without Nvidia’s GPUs, there’d be no ChatGPT.
    https://futurism.com/nvidia-ai-gold-rush?fbclid=IwAR2_Cz6qyanBHmDgX7uERH_vOzIdCTewr6NqKtGKEofy2QVMFlAys5W9Vts

    In the public consciousness, OpenAI is the obvious winner of the meteoric AI boom. For a runner-up, you might consider Midjourney or Anthropic’s Claude, a high-performing competitor to ChatGPT.

    Whether any of those players will figure out how to effectively monetize that buzz is widely debated. But in the meantime, someone has to supply the hardware to run all that viral generative AI — and for now, that’s where the money is.

    Enter the Nvidia Corporation, a newly trillion dollar company that’s making so much dough off its gangbusters AI chips that its revenue has more than doubled from last year, quickly becoming the undisputed backbone of the AI industry.

    Per its latest quarterly earnings report, Nvidia’s revenue now sits at a hefty $13.5 billion, and the spike in its profits is even more unbelievable: a ninefold increase in net income year-over-year, shooting up to $6.2 billion.

    In other words, there’s no question that AI is a gold rush — and regardless of whether any of the prospectors hit pay dirt, it’s currently Nvidia that’s selling shovels.

  21. Tomi Engdahl says:

    Who Needs ChatGPT? How to Run Your Own Free and Private AI Chatbot
    Be your own AI content generator! Here’s how to get started running free LLM alternatives using the CPU and GPU of your own PC.
    https://me.pcmag.com/en/ai/19569/how-to-run-your-own-chatgpt-like-llm-for-free-and-in-private
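
    One concrete route (not necessarily the one in PCMag’s guide) is the llama-cpp-python package, which runs quantized GGUF models on an ordinary CPU or GPU. A minimal sketch, assuming you have already downloaded a quantized model file (the filename here is hypothetical):

        from llama_cpp import Llama

        # Load a locally downloaded, quantized chat model (hypothetical path)
        llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

        result = llm(
            "Q: What are three good uses for a local LLM? A:",
            max_tokens=128,
            stop=["Q:"],   # stop before the model invents the next question
            echo=False,    # return only the completion, not the prompt
        )
        print(result["choices"][0]["text"])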

  22. Tomi Engdahl says:

    ChatGPT Vision and AI art generation tested WOW!
    https://www.geeky-gadgets.com/chatgpt-vision/

    OpenAI has recently introduced new voice and image capabilities in ChatGPT, a massive step forward in the field of artificial intelligence. I would highly recommend you check out the first examples I have come across of how this new ChatGPT 4 Vision technology can be used for a wide variety of applications. For instance, simply draw a flowchart of your required program, and ChatGPT will write the code to make it a reality.

    These new ChatGPT Vision features enable users to have voice conversations and show images to the AI, expanding the ways ChatGPT can be used in daily life. From identifying landmarks to suggesting recipes based on pantry contents, or assisting with math problems, the possibilities are vast and almost endless.

    The rollout of these voice and image features will be available to ChatGPT Plus and Enterprise users over the next two weeks. Voice will be available on iOS and Android, while images will be available on all platforms. This expansion of capabilities is a testament to OpenAI’s commitment to making AI more accessible and useful.

    ChatGPT 4 Vision and AI art generation examples

  23. Tomi Engdahl says:

    David Pierce / The Verge:
    Microsoft rolls out DALL-E 3 to all Bing Chat and Bing Image Creator users; OpenAI plans to roll out DALL-E 3 to paying ChatGPT subscribers later in October — ‘Bing, make me a picture of a dog taking a bath in a waterfall but the water is all Skittles.’ — The image generator inside …

    You can now use the DALL-E 3 AI image generator inside Bing Chat
    ‘Bing, make me a picture of a dog taking a bath in a waterfall but the water is all Skittles.’
    https://www.theverge.com/2023/10/3/23901963/bing-chat-dall-e-3-openai-image-generator

  24. Tomi Engdahl says:

    Charlotte Tobitt / Press Gazette:
    The UK’s Independent Publishers Alliance urges members to block OpenAI and Google crawling, as OpenAI extends ChatGPT’s training database beyond September 2021

    Major news publishers block the bots as ChatGPT starts taking live news
    https://pressgazette.co.uk/platforms/chatgpt-publishers-news-bing-google/

    Independent Publishers Alliance urges members to block GPTBot and Google Bard crawler ASAP.

    ChatGPT’s threat to news publishers looms larger than ever as it prepares to start reading up-to-date news stories instead of relying on a database that has not been updated in two years.

    The UK’s Independent Publishers Alliance is urging its members to block crawling access for OpenAI and Google as soon as possible while an AI strategist told Press Gazette it is a “tricky time” for publishers – especially if they are expected to opt out of each generative AI company separately.

    Until now OpenAI’s ChatGPT was only able to use information up to September 2021, the cut-off date for its training database.

    But paying ChatGPT Plus and Enterprise users can now get “current and authoritative information” in answers from the chatbot and this will be expanded to all users “soon”. OpenAI also promised to provide “direct links to sources”.

    The change will mean users can ask ChatGPT questions relating to current affairs, with answers likely trained on content from news publishers across the world who will lose out on traffic if people find out what they want to know without ever having to go to the original source. It could prove an extension of the rise in “zero click searches” in which search engine results pages give users the answers they want directly without them needing to click through to articles that may have originated the information.

    The move comes as publishers continue to grapple with whether to block ChatGPT’s bot, and equivalent crawlers from the likes of Google and Bing, from using their content to train datasets.

    Google and Bing let publishers opt out of AI training without losing out in search

    Bing came first on 22 September, telling publishers it had created new ways for them to “have greater control over how their content is used in the AI era”. Microsoft-owned search engine Bing has added AI bot Bing Chat into search results, making use of OpenAI’s technology under a multi-year, multi-billion dollar investment. Bing Chat’s answers provide links to sources – many of which in Press Gazette’s queries so far appear to lead to Microsoft’s news aggregator MSN.

    If publishers take no action, their content will continue to be used as sources for Bing Chat. Content tagged NOCACHE “may” be included in Bing Chat answers but only URLs, snippets and titles would be displayed and used in training the model. Content tagged NOARCHIVE will not be included, linked to or used for training purposes.

    Bing added: “We also heard from publishers that they want to exercise these choices without impacting how Bing users can discover web content on Bing’s search results page. We can assure publishers that content with the NOCACHE tag or NOARCHIVE tag will still appear in our search results.”
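
    In practice, the opt-outs Bing describes are expressed with standard robots meta tags in a page’s <head> (or the equivalent X-Robots-Tag HTTP header). A sketch based on Bing’s announcement; confirm the exact tokens against Bing’s current documentation:

        <!-- Keep content out of Bing Chat answers and model training -->
        <meta name="robots" content="noarchive">

        <!-- Weaker option: only URL, snippet, and title may be shown and used in training -->
        <meta name="robots" content="nocache">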

  25. Tomi Engdahl says:

    The CEO of IBM says he doesn’t intend to ‘get rid of’ a single programmer because of AI
    https://www.businessinsider.com/ibm-ceo-automation-ai-repetitive-white-collar-jobs-cuts-2023-10?utm_source=facebook&utm_medium=social&utm_campaign=personal-finance-sf&fbclid=IwAR35rg4V65AeXRjCu2FXXCR4ueI_sE0BLGTTtNo5PpxqHAUinHKkf81mbLs&r=US&IR=T

    IBM CEO Arvind Krishna says he doesn’t intend to “get rid of a single one” of his programmers because of AI.
    Krishna predicted that programmers would become 30% more productive because of AI.
    He also added that though AI could automate a “repetitive, white-collar job,” it was a job creator.

  26. Tomi Engdahl says:

    An award-winning study notes that we must recognize that machine learning models cannot be trusted with 100% certainty. We need to be prepared for something going wrong, and to have mechanisms suited to correcting errors.

    It is not necessarily dangerous if a streaming service shows a user uninteresting recommendations, although it erodes trust in the system. But in a more critical system that relies on machine learning, faults can be far more harmful.

    AWARD-WINNING STUDY: WE MUST RECOGNIZE THAT MACHINE LEARNING MODELS CANNOT BE TRUSTED WITH 100% CERTAINTY
    https://www.helsinki.fi/fi/uutiset/tekoaly/palkittu-tutkimus-tiedostettava-ettei-koneoppimisen-malleihin-voi-luottaa-100-varmuudella?utm_source&utm_medium=social_owned&fbclid=IwAR1TCpMKoYMLJqfo0TKQoi4QmxmW577sLbeQsEtcoibqxVdXnpGBv8Pl7bM_aem_AXG5jyoa0acR_hnjXQb16ryhYA-Z6wQhARQGQVFKWBWEkfLcr35v0uIVkk47so9R345djDIa0KgKMAkwGYci-mxu

    The study, based on expert interviews, urges paying more attention to a system’s possible errors. We need to be prepared for something going wrong, and to have mechanisms suited to correcting errors.

  27. Tomi Engdahl says:

    Stephen Shankland / CNET:
    A deep dive into the Pixel 8 and 8 Pro’s camera tech, like Video Boost, which uses AI processing in Google’s datacenters to dramatically improve image quality

    Google Gave the Pixel 8 Cameras a Major Upgrade. Here’s How They Did It
    https://www.cnet.com/tech/mobile/google-gave-its-pixel-8-cameras-a-major-upgrade-heres-how-they-did-it/

    In an exclusive interview, Google’s Pixel camera leader shares a deep dive into the photo and video improvements in the Pixel 8 and 8 Pro.

    With its Pixel 8 and Pixel 8 Pro smartphones, Google is bringing its big guns to the battle for smartphone photo and video leadership. Among more than a dozen notable improvements coming to the Android phones is a tool called Video Boost that uses AI processing on Google’s server-packed data centers to dramatically increase image quality.

    When you first shoot a video on the Pixel 8 Pro, you’ll have just a 1080p preview version. But over a couple of hours of uploading and processing, Google uses new artificial intelligence models too big for a smartphone to improve shadow detail, reduce pesky noise speckles and stabilize the video. That means Google’s Night Sight technology, which in 2018 set a new standard for smartphone photos taken in dim and dark conditions, has now come to video, too. Or at least it will when Video Boost ships later this winter.

    “Night Sight means something very big to us,” said Isaac Reynolds, the lead product manager in charge of the Pixel cameras. “It is the best low-light smartphone video in the market, including any phones that might have recently come out,” he said in an unsubtle dig at Apple’s iPhone 15 models. But Video Boost improves daytime videos, too, with better detail and smoother panning.

    Camera abilities are key to smartphones, but especially to Google’s Pixel phones. They’re gaining market share but remain relatively rare, accounting for just 4% of North American phone shipments in the second quarter. Good photos, bolstered by years of computational photography work, are arguably the Pixel line’s strongest selling point.

    But the Pixel phones’ video has been weak when there’s not much light. Improving that, even if it takes a helping hand from Google’s servers, is crucial to making a Pixel phone worth buying.

    “Where we really wanted to make a huge difference this year was video,” Reynolds said. Video Boost is “the most exciting thing that I’ve done in years.”

    How Google’s Video Boost works

    Many developments were necessary to make Video Boost possible.

    At the foundation is a newer image sensor technology in the main camera called dual conversion gain that improves image noise and dynamic range — the ability to capture both shadow and highlight details. Google refers to its approach as “dual exposure,” but unlike conventional HDR (high dynamic range) technology, it doesn’t blend multiple separate shots.

    Instead, the dual conversion gain technology is able to simultaneously capture details from both low-light and bright areas of a scene pixel by pixel, then blend the best of both. The result: “Whether it’s a high-contrast scene or a low-light scene, you’re going to see dramatically better performance versus the Pixel 7 and Pixel 7 Pro,” Reynolds said. “You don’t have to give up the dynamic range. That means less underexposure, which means less shadow noise.”

    The technology is on both the Pixel 8 and 8 Pro, but only the 8 Pro gets Video Boost.

    Next is the new Tensor G3 processor, the third generation of Google’s Pixel phone processors. The G3 has more built-in Google circuitry for AI and image processing than last year’s G2, and Google uses it to produce two videos. One is the 1080p preview version you can watch or share immediately.

    The other is the Video Boost version that’s uploaded to Google for more editing. The G3 preprocesses that video and, for each frame, adds up to 400 metadata elements that characterize the scene, Reynolds said.

    The last Video Boost step takes place in Google’s data centers, where servers use newly developed algorithms for noise reduction, stabilization and sharpening with low-light imagery. That processed video then replaces the preview video on your phone, including a 4K version, if that’s the resolution you originally shot at.

    Reynolds defends the video’s data center detour as worthwhile.

    “The results are incredible,” he said. Besides, people like to reminisce, revisiting a moment through photos and videos hours later, not just months or years later. “I don’t think there’s any downside at all to waiting a couple of hours,” he said.

  28. Tomi Engdahl says:

    Sarah Perez / TechCrunch:
    Google announces Assistant with Bard, a new mobile assistant version that adds generative AI capabilities, like planning trips and writing social media captions — Google Assistant is getting an AI-powered update. At today’s Made by Google live event, the company introduced Assistant with Bard …

    Google Assistant is getting AI capabilities with Bard
    https://techcrunch.com/2023/10/04/google-assistant-is-getting-ai-capabilities-with-bard/

    Google Assistant is getting an AI-powered update. At today’s Made By Google live event, the company introduced Assistant with Bard, a new version of its popular mobile personal assistant that’s now powered by generative AI technologies. Essentially a combination of Google Assistant and Bard for mobile devices, the new assistant will be able to handle a broader range of questions and tasks, ranging from simple requests like “what’s the weather?,” “set an alarm” or “text Jenny,” as before, to now more intelligent responses provided by Google’s Bard AI.

    This includes being able to dive into your own Google apps, like Gmail and Google Drive, to offer personalized responses to queries on an opt-in basis. That means you could do things like ask Google Assistant questions like “catch me up on my important emails I’ve missed this week,” and the digital helper can dig up emails you need to know about.

  29. Tomi Engdahl says:

    Sarah Perez / TechCrunch:
    Meta rolls out generative AI features for advertisers, including to create backgrounds, expand images, and write versions of ad text based on the original copy

    Meta debuts generative AI features for advertisers
    https://techcrunch.com/2023/10/04/meta-debuts-generative-ai-features-for-advertisers/

    Meta announced today it’s rolling out its first generative AI features for advertisers, allowing them to use AI to create backgrounds, expand images and generate multiple versions of ad text based on their original copy. The launch of the new tools follows the company’s Meta Connect event last week where the social media giant debuted its Quest 3 mixed-reality headset and a host of other generative AI products, including stickers and editing tools, as well as AI-powered smart glasses.

    In the case of AI tools for the ad industry, the new products may not be as wild as the celebrity AIs that let you chat with virtual versions of people like MrBeast or Paris Hilton, but they showcase how Meta believes generative AI can assist the brands and businesses that are responsible for delivering the majority of Meta’s revenue.

  30. Tomi Engdahl says:

    Nilay Patel / The Verge:
    Q&A with Microsoft CTO Kevin Scott on Bing’s rivalry with Google, the race to acquire and develop high-end GPUs, open-source AI models, AI and art, and more

    Microsoft CTO Kevin Scott on how AI and art will coexist in the future
    https://www.theverge.com/23900198/microsoft-kevin-scott-ai-art-bing-google-nvidia-decoder-interview

    Microsoft’s Kevin Scott sat down with us at Code to talk about Bing’s competition with Google, the race to acquire and develop high-end GPUs, and how art can survive in the age of AI.

  31. Tomi Engdahl says:

    Jess Weatherbed / The Verge:
    Canva unveils AI tools to automate labor-intensive tasks, and plans to pay out $200M over three years to designers who let their work be used to train AI models

    Canva’s new AI tools automate boring, labor-intensive design tasks
    https://www.theverge.com/2023/10/4/23902794/canva-magic-studio-ai-design-new-tools

    Magic Studio features like Magic Switch automatically convert your designs into blogs, social media posts, emails, and more to save time on manually editing documents.

  32. Tomi Engdahl says:

    Build an Entire AI Agent Workforce | ChatDev and Google Brain “Society of Mind” | AGI User Interface
    https://www.youtube.com/watch?v=5Zj_zstLLP4

    [00:00] Cold Open

    [00:37] What AGI will look like?

    [01:52] ChatDev

    [06:40] Create an AI Content Development Agency

    Guide to Installing and Running the ChatDev GitHub Application

    https://natural20.com/chatdev/

  33. Tomi Engdahl says:

    Google Just Turned the RPi into a Supercomputer…
    https://www.youtube.com/watch?v=mOY_Dbyq6OY

    Will the Coral AI be able to detect my guitar? The answer may surprise you.

  34. Tomi Engdahl says:

    As AI porn generators get better, the stakes get higher
    Porn generators have improved while the ethics around them become stickier
    https://techcrunch.com/2023/07/21/as-ai-porn-generators-get-better-the-stakes-raise/?cx_testId=6&cx_testVariant=cx_undefined&cx_artPos=0#cxrecs_s

  35. Tomi Engdahl says:

    The path to profitability is getting murkier by the day.

    BURST YOUR BUBBLE
    AI STARTUPS ARE ALREADY RUNNING INTO SOME SERIOUS PROBLEMS
    https://futurism.com/the-byte/ai-startups-problems?fbclid=IwAR2IY_P96lrihEksexILq2iOy7NzQEpw8pq-jVvRrYf0UKX_LgPhqVC54pE

    “A SHALLOW TROUGH OF DISILLUSIONMENT.”

    Red Flags
    Less than a year into the AI boom, startups are already grappling with what may become an industry reckoning.

    Take Jasper, a buzzy AI startup that raised $125 million for a valuation of $1.5 billion last year — before laying off staff with a gloomy note from its CEO this summer.

    Now, in a provocative new story, the Wall Street Journal fleshes out where the cracks are starting to form. Basically, monetizing AI is hard, user interest is leveling off or declining, and running the hardware behind these products is often very expensive — meaning that while the tech does sometimes offer a substantial “wow” factor, its path to a stable business model is looking rockier than ever.

    Closed AI
    Underlying it all is the OpenAI-shaped elephant in the room. The company’s game-changing chatbot’s release in November 2022 brought on some major magical thinking on the part of investors who hoped that the burgeoning technology’s commercial value “would materialize at light speed,” longtime AI investor and partner at the VC firm Index Ventures Mark Goldberg told the WSJ.

    Now that wellspring of optimism is coming back to haunt them, as even the OpenAI chatbot’s usage seems to be plateauing or even declining. Take Midjourney and Synthesia, two more brand-name AI startups where, as the WSJ points out, data from the analytics platform Similarweb shows traction flatlining (the latter, though, raised a cool $90 million in June thanks to backing from Nvidia).

    Adding to their woes, the market is lousy with free offerings — a perfect illustration being ChatGPT — and users have been hesitant to shell out for a paid version.

    The Upshot
    That’s not to say the whole ship is sinking. OpenAI is slated to generate $1 billion in revenue over the next year, offsetting at least some of the staggering cost of running ChatGPT.

    But while Google and Microsoft have the resources to lose money for years, and OpenAI has a premier product that went massively viral, it’s not clear how much oxygen that’s going to leave for the Jaspers of the world.

  36. Tomi Engdahl says:

    Imagine that, tech startups with an unproven technology and no plan to monetize it after the venture capital dries up.

  37. Tomi Engdahl says:

    OpenAI is exploring making its own chips as AI companies scramble to overcome the global processor shortage, report says
    https://www.businessinsider.com/openai-is-considering-making-its-own-ai-chips-chatgpt-2023-10?utm_medium=social&utm_campaign=business-sf&utm_source=facebook&fbclid=IwAR16SA8uvGxFF8x8ITvzv_kHKN5TxXDmLEgDPzVoa5I6l3L1qOBkcfSZII4&r=US&IR=T

    OpenAI is exploring plans to build its own AI chips to power ChatGPT, according to Reuters.
    The startup is battling a global shortage of processors that are vital for training its AI models.
    It’s more evidence of a rift between OpenAI and Microsoft, which is reportedly building its own AI.

    OpenAI is considering building its own AI chips to power ChatGPT, in the latest sign that the company is diverging from its partner Microsoft, according to a report from Reuters.

    OpenAI is exploring plans to build its own AI chips as it faces a shortage of the precious technology behind the AI revolution. The company, which has not yet committed to the plan, has even discussed acquiring a chipmaker to help with what would be an enormously expensive and time-consuming process, Reuters said.

    AI chips such as Nvidia’s H100 are the most valuable resource in tech right now, as they are crucial for training the large language models that power the likes of ChatGPT.

    The mad dash to acquire as many as possible has led to a global shortage, with many tech giants like Meta and Microsoft now seeking to develop their own chips as a result.

    “OpenAI’s potential move into hardware and building its own AI chips comes as no surprise,”

    But designing and manufacturing chips doesn’t happen overnight; it requires huge levels of expertise, and resources that are in increasingly short supply.

    “It took OpenAI over five years to develop GPT-4. I wouldn’t be surprised if hardware took a similar amount of time.”

    This follows reporting from The Information that Microsoft is trying to reduce its reliance on OpenAI by developing its own in-house large language models.

    Microsoft invested $10 billion in OpenAI at the start of 2023.

    In 2020, Microsoft built the startup a huge supercomputer that uses 10,000 advanced Nvidia GPUs. In return, it has been able to integrate OpenAI’s models into its own products, such as Bing.

    However, unnamed sources told The Information that Microsoft has instructed researchers to build smaller and cheaper alternatives to OpenAI’s GPT models after growing concerned over the spiralling cost of more expensive models like ChatGPT.

  38. Tomi Engdahl says:

    How Can We Trust AI If We Don’t Know How It Works?
    https://www.scientificamerican.com/article/how-can-we-trust-ai-if-we-dont-know-how-it-works/?utm_source=facebook&utm_campaign=socialflow&utm_medium=social&fbclid=IwAR3tjXBaSrIRaZrscuhjgKxqNMI8hy9Ls8Ae_92GX1qpe1aCmJEFOwbH7RI

    Trust is built on social norms and basic predictability. AI is typically not designed with either

    There are alien minds among us. Not the little green men of science fiction, but the alien minds that power the facial recognition in your smartphone, determine your creditworthiness and write poetry and computer code. These alien minds are artificial intelligence systems, the ghost in the machine that you encounter daily.

    But AI systems have a significant limitation: Many of their inner workings are impenetrable, making them fundamentally unexplainable and unpredictable. Furthermore, constructing AI systems that behave in ways that people expect is a significant challenge.

    WHY AI IS UNPREDICTABLE
    Trust is grounded in predictability. It depends on your ability to anticipate the behavior of others. If you trust someone and they don’t do what you expect, then your perception of their trustworthiness diminishes.

    Many AI systems are built on deep learning neural networks, which in some ways emulate the human brain.

    Many of the most powerful AI systems contain trillions of parameters. Because of this, the reasons AI systems make the decisions that they do are often opaque. This is the AI explainability problem – the impenetrable black box of AI decision-making.

    AI can’t rationalize its decision-making. You can’t look under the hood of the self-driving vehicle at its trillions of parameters to explain why it made the decision that it did. AI fails the predictive requirement for trust.

    AI BEHAVIOR AND HUMAN EXPECTATIONS
    Trust relies not only on predictability, but also on normative or ethical motivations. You typically expect people to act not only as you assume they will, but also as they should. Human values are influenced by common experience, and moral reasoning is a dynamic process, shaped by ethical standards and others’ perceptions.

    Unlike humans, AI doesn’t adjust its behavior based on how it is perceived by others or by adhering to ethical norms. AI’s internal representation of the world is largely static, set by its training data. Its decision-making process is grounded in an unchanging model of the world, unfazed by the dynamic, nuanced social interactions constantly influencing human behavior. Researchers are working on programming AI to include ethics, but that’s proving challenging.

    Consider a self-driving car that encounters an unexpected situation, such as a child darting into the road. How can you ensure that the car’s AI makes decisions that align with human expectations?

    For example, the car could decide that hitting the child is the optimal course of action, something most human drivers would instinctively avoid. This issue is the AI alignment problem, and it’s another source of uncertainty that erects barriers to trust.

    CRITICAL SYSTEMS AND TRUSTING AI
    One way to reduce uncertainty and boost trust is to ensure people are involved in the decisions AI systems make.

    While keeping humans involved is a great first step, I am not convinced that this will be sustainable long term. As companies and governments continue to adopt AI, the future will likely include nested AI systems, where rapid decision-making limits the opportunities for people to intervene. It is important to resolve the explainability and alignment issues before the critical point is reached where human intervention becomes impossible. At that point, there will be no option other than to trust AI.

    Avoiding that threshold is especially important because AI is increasingly being integrated into critical systems, which include things such as electric grids, the internet and military systems. In critical systems, trust is paramount, and undesirable behavior could have deadly consequences. As AI integration becomes more complex, it becomes even more important to resolve issues that limit trustworthiness.

    CAN PEOPLE EVER TRUST AI?
    AI is alien – an intelligent system into which people have little insight. Humans are largely predictable to other humans because we share the same human experience, but this doesn’t extend to artificial intelligence, even though humans created it.

    If trustworthiness has inherently predictable and normative elements, AI fundamentally lacks the qualities that would make it worthy of trust.

  39. Tomi Engdahl says:

    This Is What AI Thinks Is The “Perfect” Man And Woman
    AI has unrealistic expectations of humans.
    https://www.iflscience.com/this-is-what-ai-thinks-is-the-perfect-man-and-woman-68999?fbclid=IwAR39rI7ify6YoLM7WRnA9qZSbyydCk300BrU77e9seDCsbIB6LUb-41iJrA

    An eating disorder awareness group is drawing attention to how artificial intelligence (AI) image generators propagate unrealistic standards of beauty, much like the Internet data they were trained on.

    The Bulimia Project asked image generators Dall-E 2, Stable Diffusion, and Midjourney to create the perfect female body specifically according to social media in 2023, followed by the same prompt but for males.

  40. Tomi Engdahl says:

    CEO ROASTS HUMAN WORKERS HE FIRED AND REPLACED WITH CHATGPT
    “IT WAS [A] NO-BRAINER FOR ME TO REPLACE THE ENTIRE TEAM WITH A BOT.”
    https://futurism.com/the-byte/ceo-roasts-human-workers-he-fired-and-replaced-with-chatgpt

    Earlier this year, Suumit Shah, a 31-year-old CEO of an Indian e-commerce platform called Dukaan, fired the vast majority of the humans making up his company’s customer service team — and replaced them with an in-house chatbot powered by OpenAI’s ChatGPT.

    Strikingly, Shah is now roasting his former human workers, saying the bot simply does a much better job than they did, and at a fraction of the price.

    “It was [a] no-brainer for me to replace the entire team with a bot,” he told the Washington Post, “which is like 100 times smarter, who is instant, and who cost me like 100th of what I used to pay to the support team.”

    It’s an unusually brazen approach to replacing human labor with AI chatbots, a dystopian future that’s seemingly creeping ever closer. Instead of making vague promises about AI changing how we work, as many AI purveyors have, Shah has chosen the scorched-earth approach — and it’s entirely possible other entrepreneurs will follow suit, if they haven’t already.

  41. Tomi Engdahl says:

    Can AI Do Empathy Even Better Than Humans? Companies Are Trying It.
    https://www.wsj.com/tech/ai/ai-empathy-business-applications-technology-fc41aea2?mod=followamazon

    Artificial Intelligence is getting smart enough to express and measure empathy. Here’s how the new technology could change healthcare, customer service—and your performance review

    Busy, stressed-out humans aren’t always good at expressing empathy. Now computer scientists are training artificial intelligence to be empathetic for us.

    AI-driven large language models trained on massive amounts of voice, text and video conversations are now smart enough to detect and mimic emotions like empathy—at times, better than humans, some argue. Powerful new capabilities promise to improve interactions in customer service, human resources, mental health and other fields, tech experts say. They’re also raising moral and ethical questions about whether machines, which lack remorse and a sense of responsibility, should be allowed to interpret and evaluate human emotions.

    Companies like telecom giant Cox Communications and telemarketing behemoth Teleperformance use AI to measure the empathy levels of call-center agents and use the scores for performance reviews. Doctors and therapists use generative AI to craft empathetic correspondence with patients. For instance, Lyssn.io, an AI platform for training and evaluating therapists, is testing a specialized GPT model that suggests text responses to give to patients. When a woman discloses anxiety after a tough week at work, Lyssn’s chatbot gives three options the therapist could text back: “Sounds like work has really taken a toll this past week” or “Sorry to hear that, how have you been managing your stress and anxiety this week?” or “Thanks for sharing. What are some ways you’ve managed your anxiety in the past?”

    Even a person calling from your bank or internet provider may be reading a script that’s been generated by an AI-powered assistant. The next time you get a phone call, text or email, you may have no way of knowing whether a human or a machine is responding.

    The benefits of the new technology could be transformative, company officials say. In customer service, bots trained to provide thoughtful suggestions could elevate consumer interactions instantly, boosting sales and customer satisfaction, proponents say. Therapist bots could help alleviate the severe shortage of mental health professionals and assist patients with no other access to care.

    “AI can even be better than humans at helping us with socio-emotional learning because we can feed it the knowledge of the best psychologists in the world to coach and train people,” says Grin Lord, a clinical psychologist and CEO of mpathic.ai, a conversation analytics company in Bellevue, Wash.

    Some social scientists ask whether it’s ethical to use AI that has no experience of human suffering to interpret emotional states. Artificial empathy used in a clinical setting could cheapen the expectation that humans in distress deserve genuine human attention. And if humans delegate the crafting of kind words to AI, will our own empathic skills atrophy?

    AI may be capable of “cognitive empathy,” or the ability to recognize and respond to a human based on the data on which it was trained, says Jodi Halpern, professor of bioethics at University of California, Berkeley, and an authority on empathy and technology. But that’s different from “emotional empathy,” or the capacity to internalize another person’s pain, hope and suffering, and feel genuine concern.

    “Empathy that’s most clinically valuable requires that the doctor experience something when they listen to a patient,” she says. That’s something a bot, without feelings or experiences, can’t do.

    Accolade, a healthcare services company, uses a machine-learning AI model to detect the presence of empathy in its healthcare assistants’ conversations, and has been training the model to recognize a wider range of expressions as empathic.

  42. Tomi Engdahl says:

    Jo Craven McGinty / Wall Street Journal:
    Q&A with Stanford University School of Medicine Dean Dr. Lloyd Minor on generative AI in medicine, the privacy risks, responsibly deploying the tech, and more

    Why AI Is Medicine’s Biggest Moment Since Antibiotics
    https://www.wsj.com/tech/ai/artificial-intelligence-medicine-innovation-6739b4f8?mod=followamazon

    The dean of Stanford University’s medical school thinks artificial intelligence will transform the medicines you take, the care you get and the training of doctors

    Dr. Lloyd Minor, dean of the Stanford University School of Medicine, last year began playing around with AI-powered chatbots, the computer programs that simulate human conversation.

    “When ChatGPT was introduced in November, I just started using it to see what I could learn from it,” says Minor, who is also the university’s vice president for medical affairs. “And then when Bard came along, I started using Bard. And what I found was incredible.”

    “This is a transformative moment in human history,” Minor said. “We wanted to lead the way.”

    Minor spoke with the Journal about how he expects artificial intelligence to change medicine.

    In your wildest dreams, what role do you see AI serving in medicine?

    In healthcare delivery, my wildest dream is that generative AI will help to break down barriers to access and will dramatically improve the quality, consistency and efficiency of healthcare.

    In biomedical science, generative AI has the possibility and the probability to dramatically improve the precision of the science. It will help us achieve the same quality of data from clinical trials but probably with more focused trials that don’t necessarily have the tens of thousands of participants that some clinical trials have today. It will help us safely get new therapies into medical practice.

