Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”
6,156 Comments
Tomi Engdahl says:
The AI Effect: A New Era in Music and Its Unintended Consequences
https://www.youtube.com/watch?v=-eAQOhDNLt4
In this video I discuss my predictions of the impact AI will have on music creation going forward.
Tomi Engdahl says:
Bloomberg:
Source: a leaked April 2023 internal document by Luke Sernau, a senior software engineer at Google, argues that open source AI will outcompete Google and OpenAI
Google Is Falling Behind in AI Arms Race, Senior Engineer Warns
https://www.bloomberg.com/news/articles/2023-05-05/google-staffer-claims-in-leaked-ai-warning-we-have-no-secret-sauce?leadSource=uverify%20wall
Leaked document from a senior software engineer sounds the alarm about Google’s edge in artificial intelligence.
A senior software engineer at Google wrote a critique asserting that the internet search leader is losing its edge in artificial intelligence to the open-source community, where many independent researchers use AI technology to make rapid and unexpected advances.
The engineer, Luke Sernau, published the document on an internal system at Google in early April. Over the past few weeks, the document was shared thousands of times among Googlers, according to a person familiar with the matter, who asked not to be named because they were not authorized to discuss internal company matters. On Thursday, the document was published by the consulting firm SemiAnalysis, and made the rounds in
Tomi Engdahl says:
Ina Fried / Axios:
Tim Cook says Apple views AI as “huge” but there are “a number of issues that need to be sorted”, and doesn’t expand on how it plans to use AI in its products
Apple CEO Tim Cook says AI is “huge,” but care is needed
https://www.axios.com/2023/05/04/apple-ceo-tim-cook-ai-huge-care-needed
Tim Cook said Thursday that AI is “huge” but cautioned that there are “a number of issues that need to be sorted” and declined to say how Apple will incorporate the latest technologies into its products.
Why it matters: While Microsoft, Google and others are racing to add generative AI tools across their products, Apple has had little to say on the trend.
What they’re saying: Speaking to analysts after a better-than-expected earnings report, Cook noted that Apple has used machine learning and other AI approaches to power features such as crash detection and heart rate monitoring.
“We view AI as huge and will continue weaving it into our products on a very thoughtful basis,” Cook said, while noting the company doesn’t talk about its future roadmap. “The potential is certainly very interesting.”
Yes, but: Cook also sounded a cautious note. “I do think it’s very important to be deliberative and thoughtful,” he said. “There’s a number of issues that need to be sorted.”
The big picture: Cook’s comments came as the leaders of Microsoft, Google, OpenAI and Anthropic met at the White House Thursday.
Tomi Engdahl says:
Washington Post:
Sources: Google, which published AI papers prolifically, in February moved to sharing only after the work had been productized, due to OpenAI’s ChatGPT launch — For years the tech giant published scientific research that helped jump-start its competitors. But now it’s lurched into defensive mode.
https://www.washingtonpost.com/technology/2023/05/04/google-ai-stop-sharing-research/
Tomi Engdahl says:
Bloomberg:
Sources: Microsoft is helping finance AMD’s AI chip expansion and working with the chipmaker on Microsoft’s own processor for AI workloads, codenamed Athena — Microsoft Corp. is working with Advanced Micro Devices Inc. on the chipmaker’s expansion into artificial intelligence processors …
Microsoft Working With AMD on Expansion Into AI Processors
https://www.bloomberg.com/news/articles/2023-05-04/microsoft-is-helping-finance-amd-s-expansion-into-ai-chips?leadSource=uverify%20wall
Software maker will help boost AMD’s supply of in-demand parts
Companies need AI processing power amid ChatGPT-fueled boom
An Advanced Micro Devices Inc. processing chip. Photographer: David Paul Morris/Bloomberg
By Dina Bass and Ian King
4 May 2023 at 20:19 EEST; updated 5 May 2023 at 3:19 EEST
Microsoft Corp. is working with Advanced Micro Devices Inc. on the chipmaker’s expansion into artificial intelligence processors, according to people with knowledge of the situation, part of a multipronged strategy to secure more of the highly coveted components.
The companies are teaming up to offer an alternative to Nvidia Corp., which dominates the market for AI-capable chips called graphics processing units, said the people, who asked not to be identified because the matter is private. The software giant is providing support to bolster AMD’s efforts, including engineering resources, and working with the chipmaker on a homegrown Microsoft processor for AI workloads, code-named Athena, the people said.
Tomi Engdahl says:
Erin Woo / The Information:
Sources: OpenAI’s losses roughly doubled to ~$540M in 2022, and Sam Altman privately suggested the company may try to raise as much as $100B in the coming years — OpenAI’s losses roughly doubled to around $540 million last year as it developed ChatGPT and hired key employees from Google …
OpenAI’s Losses Doubled to $540 Million as It Developed ChatGPT
https://www.theinformation.com/articles/openais-losses-doubled-to-540-million-as-it-developed-chatgpt
OpenAI’s losses roughly doubled to around $540 million last year as it developed ChatGPT and hired key employees from Google, according to three people with knowledge of the startup’s financials. The previously unreported figure reflects the steep costs of training its machine-learning models during the period before it started selling access to the chatbot.
Even as revenue has picked up—reaching an annual pace of hundreds of millions of dollars just weeks after OpenAI launched a paid version of the chatbot in February—those costs are likely to keep rising as more customers use its artificial intelligence technology and the company trains future versions of the software. Reflecting that capital drain, CEO Sam Altman has privately suggested OpenAI may try to raise as much as $100 billion in the coming years to achieve its aim of developing artificial general intelligence that is advanced enough to improve its own capabilities, his associates said.
Tomi Engdahl says:
Tom Warren / The Verge:
Microsoft opens its Bing chatbot to all, removing the waitlist, and announces features like image and video results, persistent chat and history, and plug-ins
Microsoft’s Bing Chat AI is now open to everyone, with plug-ins coming soon / All you need is a Microsoft account to get access to the GPT-4-powered version of Bing
https://www.theverge.com/2023/5/4/23710071/microsoft-bing-chat-ai-public-preview-plug-in-support
Microsoft is making its Bing GPT-4 chatbot available to everyone today, no more waitlist necessary. All you need to do is sign in to the new Bing or Edge with your Microsoft account, and you’ll now access the open preview version that’s powered by GPT-4. Microsoft is also massively upgrading Bing Chat with lots of new features and even plug-in support.
This open preview launch comes nearly two months after Microsoft experimented with removing the waitlist for its new Bing Chat feature. The chatbot originally launched in a private preview in February, and Microsoft has been gradually opening it up ever since.
Microsoft is now adding more smart features to Bing Chat, including image and video results, new Bing and Edge Actions feature, persistent chat and history, and plug-in support. The plug-in support will be the key addition for developers and for the future of Bing Chat.
Microsoft is working with OpenTable to enable its plug-in for completing restaurant bookings within Bing Chat and WolframAlpha for generating visualizations. Microsoft will share a lot more at its Build conference later this month.
Tomi Engdahl says:
Alf Wilkinson / Financial Times:
A Finder.com experiment using ChatGPT to pick 38 stocks finds the portfolio rose 4.9% over eight weeks, compared to an average 0.8% loss for 10 popular UK funds
https://www.ft.com/content/c8de53ac-6978-4b88-b7ed-7cdbcbfebbbf
Tomi Engdahl says:
https://hackaday.com/2023/05/04/chatgpt-makes-a-3d-model-the-secret-ingredient-much-patience/
Tomi Engdahl says:
Thermal Camera Plus Machine Learning Reads Passwords Off Keyboard Keys
https://hackaday.com/2023/05/04/thermal-camera-plus-machine-learning-reads-passwords-off-keyboard-keys/
An age-old vulnerability of physical keypads is visibly worn keys. For example, a number pad with digits clearly worn from repeated use provides an attacker with a clear starting point. The same concept can be applied to keyboards by using a thermal camera with the help of machine learning, but it also turns out that some types of keys and typing styles are harder to read than others.
Thermal Cameras and Machine Learning Combine to Snoop Out Passwords
By Mark Tyson
https://www.tomshardware.com/news/thermal-cameras-and-machine-learning-combine-to-snoop-out-passwords
AI-driven ‘thermal attack’ analyzes touch-input heat signature after you have gone.
Researchers at the University of Glasgow have published a paper that highlights their so-called ThermoSecure implementation for discovering passwords and PINs. The name ThermoSecure provides a clue to the underlying methodology, as the researchers are using a mix of thermal imaging technology and AI to reveal passwords from input devices like keyboards, touchpads, and even touch screens.
ThermoSecure: Investigating the Effectiveness of AI-Driven Thermal Attacks on Commonly Used Computer Keyboards
https://dl.acm.org/doi/10.1145/3563693
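The core idea behind the attack is that keys pressed more recently retain more residual heat. A toy sketch of that principle (the temperatures, ambient value, and threshold here are invented for illustration; the paper's actual pipeline works on thermal images with a trained model, not a lookup like this):

```python
# Toy illustration of the thermal-attack principle: touched keys stay
# warmer than ambient, and keys pressed more recently are warmer still,
# so sorting touched keys from coolest to warmest estimates typing order.
AMBIENT_C = 22.0  # assumed ambient keyboard temperature, in Celsius

def guess_typing_order(key_temps: dict[str, float],
                       threshold: float = 1.0) -> list[str]:
    """Keys warmer than ambient by more than `threshold` °C, oldest press first."""
    touched = {k: t for k, t in key_temps.items()
               if t - AMBIENT_C > threshold}
    # Coolest touched key has cooled the longest, i.e. was pressed earliest.
    return sorted(touched, key=touched.get)

# Hypothetical readings taken moments after someone typed "cab":
readings = {"a": 25.1, "b": 26.0, "c": 24.3, "d": 22.1}
print("".join(guess_typing_order(readings)))  # → cab
```

In the real system the per-key temperatures come from segmenting a thermal photo of the keyboard, and a machine-learning model handles the noise that makes a fixed threshold like this unreliable in practice.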
Tomi Engdahl says:
Training the new ChatGPT cost over a hundred million dollars
https://etn.fi/index.php/13-news/14918-uuden-chatgpt-n-kouluttaminen-maksoi-yli-sata-miljoonaa-dollaria
OpenAI CEO Sam Altman said in an interview with Wired that training GPT-4 cost more than a hundred million dollars. This inevitably raises the question of equal access to AI: can only the very richest companies afford to develop their own LLMs?
GPT-4 is OpenAI's most advanced language model. It is currently available to ChatGPT Plus subscribers; the Plus tier costs $20 per month. Microsoft has already invested a billion dollars in the company and committed to investing a further $10 billion.
It is not in OpenAI's interest to open up its models. The company was valued at $29 billion when ChatGPT launched last fall, and there is no reason to assume its value will not keep growing strongly. A growing number of companies are connecting ChatGPT to their own applications through OpenAI's APIs.
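Connecting an application to ChatGPT through OpenAI's API essentially means POSTing a JSON payload to the chat completions endpoint. A minimal sketch of building such a request (model name and field layout as documented at the time; no request is actually sent, and the API key handling is omitted):

```python
import json

# Sketch of the JSON body an application would POST to
# https://api.openai.com/v1/chat/completions (with an Authorization
# header carrying the API key) to embed ChatGPT in its own product.
def build_chat_request(user_message: str,
                       model: str = "gpt-3.5-turbo") -> str:
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

body = build_chat_request("Summarize today's AI news in one sentence.")
print(json.loads(body)["model"])  # → gpt-3.5-turbo
```

The `messages` list is what makes the integration conversational: an application appends each user turn and each model reply before sending the next request.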
Tomi Engdahl says:
Switch ASIC Wires Together Data Centers for Faster AI
May 2, 2023
Broadcom’s Jericho3-AI acts as a high-bandwidth networking fabric for data centers focused on AI.
https://www.electronicdesign.com/technologies/embedded/article/21265142/electronic-design-switch-asic-wires-together-data-centers-for-faster-ai?utm_source=EG+ED+Connected+Solutions&utm_medium=email&utm_campaign=CPS230427070&o_eid=7211D2691390C9R&rdx.identpull=omeda|7211D2691390C9R&oly_enc_id=7211D2691390C9R
Tomi Engdahl says:
Biden, Harris Meet With CEOs About AI Risks
https://www.securityweek.com/biden-harris-meet-with-ceos-about-ai-risks/
Vice President Kamala Harris met with the heads of companies developing AI as the Biden administration rolls out initiatives to ensure the technology improves lives without putting people’s rights and safety at risk.
Vice President Kamala Harris met on Thursday with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people’s rights and safety at risk.
President Joe Biden briefly dropped by the meeting in the White House’s Roosevelt Room, saying he hoped the group could “educate us” on what is most needed to protect and advance society.
“What you’re doing has enormous potential and enormous danger,” Biden told the CEOs, according to a video posted to his Twitter account.
The popularity of AI chatbot ChatGPT — even Biden has given it a try, White House officials said Thursday — has sparked a surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images, music and computer code.
Tomi Engdahl says:
Jess Weatherbed / The Verge:
An in-depth look at the regulatory risks for OpenAI under GDPR, including questions around future data scraping and handling “right to be forgotten” requests — The European Union’s fight with ChatGPT is a glance into what’s to come for AI services.
OpenAI’s regulatory troubles are only just beginning
The European Union’s fight with ChatGPT is a glance into what’s to come for AI services.
https://www.theverge.com/2023/5/5/23709833/openai-chatgpt-gdpr-ai-regulation-europe-eu-italy
Tomi Engdahl says:
Will Oremus / Washington Post:
AI text generators are quietly authoring more of the internet; more AI-generated books and personalized articles mean fewer clients buying human-written content — From recipes to product reviews to how-to books, artificial intelligence text generators are quietly authoring more and more of the internet.
He wrote a book on a rare subject. Then a ChatGPT replica appeared on Amazon.
https://www.washingtonpost.com/technology/2023/05/05/ai-spam-websites-books-chatgpt/
From recipes to product reviews to how-to books, artificial intelligence text generators are quietly authoring more and more of the internet.
Chris Cowell, a Portland, Ore.-based software developer, spent more than a year writing a technical how-to book. Three weeks before it was released, another book on the same topic, with the same title, appeared on Amazon.
“My first thought was: bummer,” Cowell said. “My second thought was: You know what, that’s an awfully long and specific and cumbersome title to have randomly been picked.”
The book, titled “Automating DevOps with GitLab CI/CD Pipelines,” just like Cowell’s, listed as its author one Marie Karpos, whom Cowell had never heard of. When he looked her up online, he found literally nothing — no trace. That’s when he started getting suspicious.
The book bears signs that it was written largely or entirely by an artificial intelligence language model, using software such as OpenAI’s ChatGPT. (For instance, its code snippets look like ChatGPT screenshots.) And it’s not the only one. The book’s publisher, a Mumbai-based education technology firm called inKstall, listed dozens of books on Amazon on similarly technical topics, each with a different author, an unusual set of disclaimers and matching five-star Amazon reviews from the same handful of India-based reviewers. InKstall did not respond to requests for comment.
Experts say those books are likely just the tip of a fast-growing iceberg of AI-written content spreading across the web as new language software allows anyone to rapidly generate reams of prose on almost any topic.
“If you have a connection to the internet, you have consumed AI-generated content,” said Jonathan Greenglass, a New York-based tech investor focused on e-commerce. “It’s already here.”
What that may mean for consumers is more hyper-specific and personalized articles — but also more misinformation and more manipulation, about politics, products they may want to buy and much more.
As AI writes more and more of what we read, vast, unvetted pools of online data may not be grounded in reality, warns Margaret Mitchell, chief ethics scientist at the AI start-up Hugging Face. “The main issue is losing track of what truth is,” she said. “Without grounding, the system can make stuff up. And if it’s that same made-up thing all over the world, how do you trace it back to what reality is?”
Generative AI tools have captured the world’s attention since ChatGPT’s November release. Yet a raft of online publishers have been using automated writing tools based on ChatGPT’s predecessors, GPT-2 and GPT-3, for years. That experience shows that a world in which AI creations mingle freely and sometimes imperceptibly with human work isn’t speculative; it’s flourishing in plain sight on Amazon product pages and in Google search results.
Semrush, a leading digital marketing firm, recently surveyed its customers about their use of automated tools. Of the 894 who responded, 761 said they’ve at least experimented with some form of generative AI to produce online content, while 370 said they now use it to help generate most if not all of their new content, according to Semrush Chief Strategy Officer Eugene Levin.
“In the last two years, we’ve seen this go from being a novelty to being pretty much an essential part of the workflow,” Levin said.
In a separate report this week, the news credibility rating company NewsGuard identified 49 news websites across seven languages that appeared to be mostly or entirely AI-generated. The sites sport names like Biz Breaking News, Market News Reports, and bestbudgetUSA.com; some employ fake author profiles and publish hundreds of articles a day, the company said. Some of the news stories are fabricated, but many are simply AI-crafted summaries of real stories trending on other outlets.
Several companies defended their use of AI, telling The Post they use language tools not to replace human writers, but to make them more productive, or to produce content that they otherwise wouldn’t. Some are openly advertising their use of AI, while others disclose it more discreetly or hide it from the public, citing a perceived stigma against automated writing.
Ingenio used to pay humans to write birth sign articles on a handful of highly searched celebrities like Michael Jordan and Ariana Grande, said Josh Jaffe, president of its media division. But delegating the writing to AI allows sunsigns.com to cheaply crank out countless articles on not-exactly-A-listers, from Aaron Harang, a retired mid-rotation baseball pitcher, to Zalmay Khalilzad, the former U.S. envoy to Afghanistan. Khalilzad, the site’s AI-written profile claims, would be “a perfect partner for someone in search of a sensual and emotional connection.” (At 72, Khalilzad has been married for decades.)
In the past, Jaffe said, “We published a celebrity profile a month. Now we can do 10,000 a month.”
Jaffe said his company discloses its use of AI to readers, and he promoted the strategy at a recent conference for the publishing industry. “There’s nothing to be ashamed of,” he said. “We’re actually doing people a favor by leveraging generative AI tools” to create niche content that wouldn’t exist otherwise.
A cursory review of Ingenio sites suggests those disclosures aren’t always obvious, however.
Jaffe said he isn’t particularly worried that AI content will overwhelm the web. “It takes time for this content to rank well” on Google, he said — meaning that it appears on the first page of search results for a given query, which is critical to attracting readers. And it works best when it appears on established websites that already have a sizable audience: “Just publishing this content doesn’t mean you have a viable business.”
Google clarified in February that it allows AI-generated content in search results, as long as the AI isn’t being used to manipulate a site’s search rankings. The company said its algorithms focus on “the quality of content, rather than how content is produced.”
Reputations are at risk if the use of AI backfires. CNET, a popular tech news site, took flak in January when fellow tech site Futurism reported that CNET had been using AI to create articles or add to existing ones without clear disclosures. CNET subsequently investigated and found that many of its 77 AI-drafted stories contained errors.
But CNET’s parent company, Red Ventures, is forging ahead with plans for more AI-generated content, which has also been spotted on Bankrate.com, its popular hub for financial advice. Meanwhile, CNET in March laid off a number of employees, a move it said was unrelated to its growing use of AI.
BuzzFeed, which pioneered a media model built around reaching readers directly on social platforms like Facebook, announced in January it planned to make “AI inspired content” part of its “core business,” such as using AI to craft quizzes that tailor themselves to each reader. BuzzFeed announced last month that it is laying off 15 percent of its staff and shutting down its news division, BuzzFeed News.
“There is no relationship between our experimentation with AI and our recent restructuring,” BuzzFeed spokesperson Juliana Clifton said.
AI’s role in the future of mainstream media is clouded by the limitations of today’s language models and the uncertainty around AI liability and intellectual property.
That business is driven by a simple equation: how much it costs to create an article vs. how much revenue it can bring in. The main goal is to attract as many clicks as possible, then serve the readers ads worth just fractions of a cent on each visit — the classic form of clickbait. That seems to have been the model of many of the AI-generated “news” sites
NewsGuard found the sites by searching the web and analytics tools for telltale phrases such as “As an AI language model,” which suggest a site is publishing outputs directly from an AI chatbot without careful editing. One local news site, countylocalnews.com, churned out a series of articles on a recent day whose sub-headlines all read, “As an AI language model, I need the original title to rewrite it. Please provide me with the original title.”
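This kind of telltale-phrase screening is easy to sketch. The phrase list and the exact matching below are assumptions for illustration, not NewsGuard's actual methodology:

```python
# Minimal telltale-phrase scanner: flags text containing stock chatbot
# disclaimers that were pasted through to a page without editing.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my knowledge cutoff",
]

def telltale_hits(text: str) -> list[str]:
    """Return the telltale phrases found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [p for p in TELLTALE_PHRASES if p in lowered]

headline = ("As an AI language model, I need the original title "
            "to rewrite it. Please provide me with the original title.")
print(telltale_hits(headline))  # → ['as an ai language model']
```

A scan like this only catches the sloppiest cases; sites that lightly edit their chatbot output leave no such fingerprint.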
Then there are sites designed to induce purchases, which insiders say tend to be more profitable than pure clickbait these days. A site called Nutricity, for instance, hawks dietary supplements using product reviews that appear to be AI-generated, according to NewsGuard’s analysis.
In the past, such sites often outsourced their writing to businesses known as “content mills,” which harness freelancers to generate passable copy for minimal pay. Now, some are bypassing content mills and opting for AI instead.
“Previously it would cost you, let’s say, $250 to write a decent review of five grills,” Semrush’s Levin said. “Now it can all be done by AI, so the cost went down from $250 to $10.”
The problem, Levin said, is that the wide availability of tools like ChatGPT means more people are producing similarly cheap content, and they’re all competing for the same slots in Google search results or Amazon’s on-site product reviews. So they all have to crank out more and more article pages, each tuned to rank highly for specific search queries, in hopes that a fraction will break through. The result is a deluge of AI-written websites, many of which are never seen by human eyes.
It isn’t just text. Google users have recently posted examples of the search engine surfacing AI-generated images.
The rise of AI is already hurting the business of Textbroker, a leading content platform based in Germany and Las Vegas, said Jochen Mebus, the company’s chief revenue officer. While Textbroker prides itself on supplying credible, human-written copy on a huge range of topics, “People are trying automated content right now, and so that has slowed down our growth,” he said.
Mebus said the company is prepared to lose some clients who are just looking to make a “fast dollar” on generic AI-written content.
He said a recent survey of the company’s customers found that 30 to 40 percent still want exclusively “manual” content, while a similar-size chunk is looking for content that might be AI-generated but human-edited to check for tone, errors and plagiarism.
“I don’t think anyone should trust 100 percent what comes out of the machine,” Mebus said.
Levin said Semrush’s clients have also generally found that AI is better used as a writing assistant than a sole author. “We’ve seen people who even try to fully automate the content creation process,” he said. “I don’t think they’ve had really good results with that. At this stage, you need to have a human in the loop.”
For Cowell, whose book title appears to have inspired an AI-written copycat, the experience has dampened his enthusiasm for writing.
“My concern is less that I’m losing sales to fake books, and more that this low-quality, low-priced, low-effort writing is going to have a chilling effect on humans considering writing niche technical books in the future,” he said. It doesn’t help, he added, knowing that “any text I write will inevitably be fed into an AI system that will generate even more competition.”
Amazon removed the impostor book, along with numerous others by the same publisher, after The Post contacted the company for comment.
AI-written books aren’t against Amazon’s rules, per se, and some authors have been open about using ChatGPT to write books sold on the site. (Amazon founder and executive chairman Jeff Bezos owns The Washington Post.)
“Amazon is constantly evaluating emerging technologies and innovating to provide a trustworthy shopping experience for our customers,” Amazon spokesperson Hamilton said in a statement.
Tomi Engdahl says:
Theo Wayt / The Information:
Amazon is working on AI tools to generate photos and videos for merchants’ ad campaigns on its platform, as the company seeks to build a broader ad business
Amazon Plans to Generate Photos and Videos for Advertisers Using AI
https://www.theinformation.com/articles/amazon-plans-to-generate-photos-and-videos-for-advertisers-using-ai
Amazon is building a team to work on artificial intelligence tools that will generate photos and videos for merchants to use in advertising campaigns on its platform, a company spokesperson confirmed, efforts that could help diversify its ad business.
Amazon’s ad business has grown by double-digit percentages every quarter since Amazon started breaking out its revenue in 2021. It brought in $38 billion last year but currently centers on ads that give merchants a boost in search results. However, Amazon is trying to build a broader ad business, including through selling spots on its free video-streaming service, Freevee, as well as during Thursday Night Football broadcasts on Prime Video. The company also sells audio ads on Amazon Music and even runs digital ads on screens inside Amazon Fresh grocery stores, among other efforts.
Tomi Engdahl says:
There’s a little problem with AI: it’s shockingly expensive.
OPENAI IS LOSING A FLABBERGASTING AMOUNT OF MONEY ON CHATGPT
https://futurism.com/the-byte/openai-losing-money-chatgpt
AND COSTS ARE STILL ON THE RISE.
OpenAI, the Elon Musk-founded research firm behind ChatGPT, is apparently hemorrhaging money on the game-changing chatbot that put it on the map.
People familiar with the company’s losses confirmed to The Information that OpenAI spent upwards of $540 million last year while developing its widely-used chatbot — including funds it used to poach talent from the likes of Google.
The report highlights just how expensive it is to run and maintain the popular AI tool.
And OpenAI’s costs are still on the rise, something that could turn the company into “the most capital-intensive startup in Silicon Valley history,” as CEO Sam Altman suggested earlier this week, according to the report.
The Information’s latest figure is roughly in line with Fortune’s reporting in January that revealed the breakdown of the company’s 2022 expenses, which totaled $544.5 million: “$416.45 million on computing and data, $89.31 million on staff, and $38.75 million in unspecified other operating expenses.”
These costs would have been amassed before OpenAI struck a multi-year, multi-billion dollar deal with Microsoft at the beginning of this year.
In April, Dylan Patel, chief analyst at consulting firm SemiAnalysis, also told The Information that he estimated it costs $700,000 per day to run ChatGPT due to computing costs.
At the end of last year, Reuters reported that OpenAI CEO Sam Altman has set the company’s revenue bar very high in an investor pitch, with estimates that the firm could make $200 million this year and $1 billion next year.
Compared to the $30 million OpenAI made in revenue last year, according to Fortune, that figure seems almost impossibly high.
Tomi Engdahl says:
“AI researchers are more like late-stage teenagers.”
STANFORD DIRECTOR: AI SCIENTISTS’ “FRONTAL CORTEX IS MASSIVELY UNDERDEVELOPED”
https://futurism.com/the-byte/stanford-ai-scientists-frontal-cortex-underdeveloped
By Maggie Harrison
Kids in America
Associate director of Stanford’s Institute for Human-Centered Artificial Intelligence Robert Reich threw absolute daggers this week when, while speaking to Esquire about the newness of the AI industry and how that impacts its relationship to ethics, he likened those in the burgeoning field to actual children.
“AI researchers are more like late-stage teenagers,” Reich told Esquire, comparing those in the still-very-much-developing field of AI to those in more established, similarly ethics-concerned biomedical tech. “They’re newly aware of their power in the world, but their frontal cortex is massively underdeveloped.”
“Their [sense of] social responsibility,” he added, “as a consequence is not so great.”
Gotta say: damn. If the industry AI bros are indeed acting like kids, Reich may have just cemented himself as the adult in the room.
Teenage Invincibility
Honestly, when you think about it, it’s a pretty solid analogy.
For one thing, teens are often risk-happy thrill-seekers, and folks who work in AI — particularly the younger crowd — are constantly talking about how the unregulated technology that they’re building might annihilate humanity. But as their risk aversion generally seems to hover right around zero, they keep building the technology regardless.
It’s also common for young people to find themselves questioning the more existential questions of life, perhaps losing and finding religion or lack thereof along the way, and some can argue that the quest to build AI is a quest to do the same. After all, the folks in Silicon Valley are quite literally attempting to design a separate — and possibly even superior — being in our own image. Whether the goal is to build a version of a god in a machine or become a god to a machine will depend on the individual, but we’re certain that both outlooks are present in the fast-moving, competitive field.
Of course, the field is young, and Silicon Valley has long taken a similarly teen-like “move fast and break things” approach to all sorts of tech.
But as it should probably go without saying: the folks helming the AI race aren’t actually teenagers, and for the most part, their frontal cortexes are developed.
Tomi Engdahl says:
On top of that, sociopaths tend to do well in management and CEO’S often fall in this category, and they are the ones calling the shots in the industry.
Tomi Engdahl says:
Seems like he was comparing the INDUSTRY to a teen, and not individual researchers. The industry is entering the equivalent of its “later teens” and grapples with those existential questions as it becomes mature. Certainly the entire cohort (of business, research groups, and startups) will continue to age, and continuing the analogy some will become the equivalent of functional adults while others will become superstars, criminals, or even become stunted by the overwhelming experience of growing up and never amount to much. But the individuals who make up those organizations will bear the consequences.
Tomi Engdahl says:
SNOOP DOGG EXPRESSES VIEWS ON AI THREAT
https://futurism.com/the-byte/snoop-dogg-ai-threat
“I’M LIKE, ARE WE IN A F*****G MOVIE RIGHT NOW, OR WHAT?”
Not sure how to feel about AI? You’re in good company.
During a conference on Wednesday, Variety asked rapper and business mogul Calvin “Snoop Dogg” Broadus Jr. to share his thoughts on AI in regard to the ongoing Writers Guild strike.
Tomi Engdahl says:
And, well, Snoop didn’t hold back. He expressed a mix of fascination and concern, comparing the rise of AI to sci-fi movies, and questioned whether he should just give up and invest in the buzzy tech.
“It’s blowing my mind because I watched movies on this as a kid years ago,” Broadus reflected during the panel discussion, as quoted by Ars Technica. “When I see this shit I’m like what is going on?”
“Shit, what the fuck?” he added. “I’m lost, I don’t know.”
Snoop’s bewilderment is reflective of a much broader conversation: It’s increasingly difficult to keep up with the AI world, tech that’s seemingly breaking new ground on a monthly basis.
“And I heard the dude, the old dude that created AI, saying, ‘This is not safe, ’cause the AIs got their own minds, and these motherfuckers gonna start doing their own shit,’” Broadus continued, referring to Geoffrey Hinton.
Given Broadus’ usual tech enthusiasm — the rapper is a vocal fan of digital assets like crypto and NFTs, and even performed inside the metaverse last year — his ambivalence towards AI is surprising.
But then again, the rapidly developing, consumer-facing AI systems that we’ve seen crop up in recent months are increasingly difficult to make sense of.
Tomi Engdahl says:
AI SHOWS WHAT MARK ZUCKERBERG WOULD LOOK LIKE LIVING IN POVERTY
https://futurism.com/the-byte/ai-zuckerberg-poverty
Tomi Engdahl says:
David Ingram / NBC News:
Two OpenAI contractors, one of them earning $15 per hour, speak about their work labeling the text and photos used to train the company’s products, like ChatGPT — Two OpenAI contractors spoke to NBC News about their work training the system behind ChatGPT.
ChatGPT is powered by these contractors making $15 an hour
https://www.nbcnews.com/tech/innovation/openai-chatgpt-ai-jobs-contractors-talk-shadow-workforce-powers-rcna81892
Tomi Engdahl says:
Josh Taylor / The Guardian:
An interview with Jürgen Schmidhuber, whose neural network work in the 1990s went on to be used in Google Translate and Siri, on why many AI fears are misplaced — Jürgen Schmidhuber believes AI will progress to the point where it surpasses human intelligence and will pay no attention to people
Rise of artificial intelligence is inevitable but should not be feared, ‘father of AI’ says
https://www.theguardian.com/technology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says
Tomi Engdahl says:
Wall Street Journal:
Research papers and staff interviews detail the workarounds Chinese companies like Huawei, Baidu, and Alibaba use to develop AI systems without the latest chips — Research in China on workarounds, such as using software to leverage less powerful chips, is accelerating
U.S. Sanctions Drive Chinese Firms to Advance AI Without Latest Chips
https://www.wsj.com/articles/u-s-sanctions-drive-chinese-firms-to-advance-ai-without-latest-chips-f6aed67f?mod=djemalertNEWS
Tomi Engdahl says:
Miles Kruppa / Wall Street Journal:
Documents: Google plans to make search more “visual, snackable, personal, and human”, adding more short videos, more social media posts, and AI conversations — Changes aim to respond to queries that can’t be easily answered by traditional ‘10 blue links’ web results
Google Plans to Make Search More ‘Personal’ with AI Chat and Video Clips
https://www.wsj.com/articles/google-search-ai-artificial-intelligence-chatbot-tiktok-67c08870?mod=djemalertNEWS
Google is shifting the way it presents search results to incorporate conversations with artificial intelligence, along with more short video and social-media posts, a departure from the list of website results that has made it the dominant search engine for decades.
The changes represent a response to big shifts in the way people access information on the internet, including the emergence of AI bots like ChatGPT. They would nudge the service further away from its traditional format, known informally as the “10 blue links,” according to company documents and people familiar with the matter.
Google plans to make its search engine more “visual, snackable, personal, and human,” with a focus on serving young people globally, according to the documents.
Tomi Engdahl says:
Jessica Lyons Hardcastle / The Register:
The organizers of DEF CON AI Village, set to run from August 10 to August 13, 2023, say they will host “thousands” of people to find bugs and biases in LLMs — Can’t wait to see how these AI models hold up against a weekend of red-teaming by infosec’s village people
DEF CON to set thousands of hackers loose on LLMs
https://www.theregister.com/2023/05/06/ai_hacking_defcon/
Tomi Engdahl says:
Will Oremus / Washington Post:
AI text generators are quietly authoring more of the internet; more AI-generated books and personalized articles mean fewer clients buying human-written content
https://www.washingtonpost.com/technology/2023/05/05/ai-spam-websites-books-chatgpt/
Tomi Engdahl says:
Ryan Weeks / The Block:
A look at Art Blocks, an NFT marketplace for generative art that enforces 5% royalty fees and whose sales fell from $587M in August 2021 to $6.5M in April 2023
As NFT sales dwindle, Art Blocks resists pinning hopes on a renewed crypto bull run
https://www.theblock.co/post/229643/art-blocks-erick-calderon-crypto-bull-run
Quick Take
Volumes have fallen steeply for Art Blocks, the generative art platform that boomed in the last crypto bull run.
But founder Erick Calderon remains unwavering on topics such as creator royalties, while dismissing excessive fundraising as “gross.”
“I’m a little bit timid and hesitant… because I think a lot of that ‘invest in your startup’ mentality is just waiting for that next bull run — and there may not be another bull run.”
Tomi Engdahl says:
Jess Weatherbed / The Verge:
An in-depth look at the regulatory risks for OpenAI under GDPR, including questions around future data scraping and handling “right to be forgotten” requests
OpenAI’s regulatory troubles are only just beginning
The European Union’s fight with ChatGPT is a glance into what’s to come for AI services.
https://www.theverge.com/2023/5/5/23709833/openai-chatgpt-gdpr-ai-regulation-europe-eu-italy
OpenAI managed to appease Italian data authorities and lift the country’s effective ban on ChatGPT last week, but its fight against European regulators is far from over.
Earlier this year, OpenAI’s popular and controversial ChatGPT chatbot hit a big legal snag: an effective ban in Italy. The Italian Data Protection Authority (GPDP) accused OpenAI of violating EU data protection rules, and the company agreed to restrict access to the service in Italy while it attempted to fix the problem. On April 28th, ChatGPT returned to the country, with OpenAI lightly addressing GPDP’s concerns without making major changes to its service — an apparent victory.
The GPDP has said it “welcomes” the changes ChatGPT made. However, the firm’s legal issues — and those of companies building similar chatbots — are likely just beginning. Regulators in several countries are investigating how these AI tools collect and produce information, citing a range of concerns from companies’ collection of unlicensed training data to chatbots’ tendency to spew misinformation. In the EU, they’re applying the General Data Protection Regulation (GDPR), one of the world’s strongest legal privacy frameworks, the effects of which will likely reach far outside Europe. Meanwhile, lawmakers in the bloc are putting together a law that will address AI specifically — likely ushering in a new era of regulation for systems like ChatGPT.
Tomi Engdahl says:
Leaked Internal Google Document Claims Open Source AI Will Outcompete Google And OpenAI
https://hackaday.com/2023/05/05/leaked-internal-google-document-claims-open-source-ai-will-outcompete-google-and-openai/
Tomi Engdahl says:
Neural Networks All the Way Down
Artificial neural networks have learned to understand their biological counterparts, and even reconstruct video clips from brain waves.
https://www.hackster.io/news/neural-networks-all-the-way-down-9ac3956bc031
Tomi Engdahl says:
Will our descendants curse us for bringing AI into the world?
WARREN BUFFETT COMPARES AI TO THE ATOM BOMB
https://futurism.com/the-byte/warren-buffett-ai-atom-bomb
“BUT IS IT GOOD FOR THE NEXT 200 YEARS OF THE WORLD THAT THE ABILITY TO DO SO HAS BEEN UNLEASHED?”
Long Shadow
One of the world’s foremost financiers is sounding the alarm on artificial intelligence — even comparing it to the invention of the atomic bomb.
During Berkshire Hathaway’s annual meeting over the weekend, CEO Warren Buffett paraphrased Albert Einstein’s famous quote about the atomic bomb when saying that “with AI, it can change everything in the world, except how men think and behave, and that’s a big step to take.”
Tomi Engdahl says:
Tech giant stops hiring for jobs where AI is expected to replace humans – 7,800 jobs at risk
Antti Kailio | May 8, 2023, 13:30 | AI, Working life
Up to a third of the company’s support functions could lose their jobs in the coming years, if the CEO is to be believed.
https://www.tekniikkatalous.fi/uutiset/teknojatti-lopetti-palkkaamisen-tehtaviin-joissa-tekoalyn-uskotaan-korvaavan-ihmisen-vaarassa-7800-tyopaikkaa/34b78b7d-60b6-4f47-a759-f4b2ed45c150
Technology company IBM will no longer hire for roles in which it believes AI will replace humans within the next few years, Bloomberg reports.
IBM to Pause Hiring for Jobs That AI Could Do
https://www.bloomberg.com/news/articles/2023-05-01/ibm-to-pause-hiring-for-back-office-jobs-that-ai-could-kill
Roughly 7,800 IBM jobs could be replaced by AI, automation
CEO Krishna says IBM to pause hiring for replaceable roles
International Business Machines Corp. Chief Executive Officer Arvind Krishna said the company expects to pause hiring for roles it thinks could be replaced with artificial intelligence in the coming years.
Hiring in back-office functions — such as human resources — will be suspended or slowed, Krishna said in an interview. These non-customer-facing roles amount to roughly 26,000 workers, Krishna said. “I could easily see 30% of that getting replaced by AI and
Tomi Engdahl says:
Robodog Peeling Off a Model’s Clothes Is a Viral Riff on Ominous Tech
We can’t stop thinking about the robodog stunt. But only because it sucked.
https://futurism.com/robodogs-models-clothes-viral-ominous-tech
Tomi Engdahl says:
Angus Loten / Wall Street Journal:
Wendy’s partners with Google to automate its drive-through using an AI chatbot, rolling out in June to an Ohio restaurant; the bot has been programmed to upsell — The fast-food chain has customized a language model with terms like ‘JBC’ for junior bacon cheeseburger and ‘biggie bags’ for meal combos
Wendy’s, Google Train Next-Generation Order Taker: an AI Chatbot
https://www.wsj.com/articles/wendys-google-train-next-generation-order-taker-an-ai-chatbot-968ff865?mod=djemalertNEWS
Wendy’s is automating its drive-through service using an artificial-intelligence chatbot powered by natural-language software developed by Google and trained to understand the myriad ways customers order off the menu.
With the move, Wendy’s is joining an expanding group of companies that are leaning on generative AI for growth.
The Dublin, Ohio-based fast-food chain’s chatbot will be officially rolled out in June at a company-owned restaurant in Columbus, Ohio, Wendy’s said. The goal is to streamline the ordering process and prevent long lines in the drive-through lanes from turning customers away, said Wendy’s Chief Executive Todd Penegor.
Wendy’s didn’t disclose the cost of the initiative beyond saying the company has been working with Google in areas like data analytics, machine learning and cloud tools since 2021.
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
IBM announces watsonx, a suite of AI services that includes watsonx.ai, an “enterprise studio for AI builders”, watsonx.data, and watsonx.governance — IBM, like pretty much every tech giant these days, is betting big on AI. — At its annual Think conference …
IBM intros a slew of new AI services, including generative models
https://techcrunch.com/2023/05/09/ibm-intros-a-slew-of-new-ai-services-including-generative-models/
IBM, like pretty much every tech giant these days, is betting big on AI.
At its annual Think conference, the company announced IBM Watsonx, a new platform that delivers tools to build AI models and provide access to pretrained models for generating computer code, text and more.
It’s a bit of a slap in the face to IBM’s back-office managers, who just recently were told that the company will pause hiring for roles it thinks could be replaced by AI in the coming years.
Tomi Engdahl says:
Low De Wei / Bloomberg:
Chinese authorities arrest a man for using ChatGPT to write and spread fake news articles, one of the first known instances, with one article having 15K+ views — Chinese authorities have detained a man for using ChatGPT to write fake news articles, in what appears to be one of the first instances …
China Arrests ChatGPT User Who Faked Deadly Train Crash Story
https://www.bloomberg.com/news/articles/2023-05-09/china-arrests-chatgpt-user-who-faked-deadly-train-crash-story?leadSource=uverify%20wall
Tomi Engdahl says:
AI machines aren’t ‘hallucinating’. But their makers are
https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein
Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves
Inside the many debates swirling around the rapid rollout of so-called artificial intelligence, there is a relatively obscure skirmish focused on the choice of the word “hallucinate”.
“No one in the field has yet solved the hallucination problems,” Sundar Pichai, the CEO of Google and Alphabet, told an interviewer recently.
That’s true – but why call the errors “hallucinations” at all? Why not algorithmic junk? Or glitches? Well, hallucination refers to the mysterious capacity of the human brain to perceive phenomena that are not present, at least not in conventional, materialist terms. By appropriating a word commonly used in psychology, psychedelics and various forms of mysticism, AI’s boosters, while acknowledging the fallibility of their machines, are simultaneously feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species. How else could bots like Bing and Bard be tripping out there in the ether?
Warped hallucinations are indeed afoot in the world of AI, however – but it’s not the bots that are having them; it’s the tech CEOs who unleashed them, along with a phalanx of their fans, who are in the grips of wild hallucinations, both individually and collectively. Here I am defining hallucination not in the mystical or psychedelic sense, mind-altered states that can indeed assist in accessing profound, previously unperceived truths. No. These folks are just tripping: seeing, or at least claiming to see, evidence that is not there at all, even conjuring entire worlds that will put their products to use for our universal elevation and education.
Tomi Engdahl says:
OpenAI’s new tool attempts to explain language models’ behaviors
https://techcrunch.com/2023/05/09/openais-new-tool-attempts-to-explain-language-models-behaviors/
It’s often said that large language models (LLMs) along the lines of OpenAI’s ChatGPT are a black box, and certainly, there’s some truth to that. Even for data scientists, it’s difficult to know why a model responds the way it does, like inventing facts out of whole cloth.
In an effort to peel back the layers of LLMs, OpenAI is developing a tool to automatically identify which parts of an LLM are responsible for which of its behaviors. The engineers behind it stress that it’s in the early stages, but the code to run it is available in open source on GitHub as of this morning.
“We’re trying to [develop ways to] anticipate what the problems with an AI system will be,” William Saunders, the interpretability team manager at OpenAI, told TechCrunch in a phone interview. “We want to really be able to know that we can trust what the model is doing and the answer that it produces.”
Tomi Engdahl says:
STRANGELY-DRESSED EXPERT WARNS THAT AI COULD REPLACE 80% OF JOBS SOON
https://futurism.com/the-byte/expert-ai-replace-jobs
“YOU COULD PROBABLY OBSOLETE MAYBE 80 PERCENT OF JOBS THAT PEOPLE DO, WITHOUT HAVING AN AGI.”
Big Shoes
The guy responsible both for Sophia the Robot and for popularizing the term “artificial general intelligence” — as in the much-discussed concept of AGI — has a prediction that’s as bold as some of his fashion choices.
Tomi Engdahl says:
MIT Professor Compares Ignoring AGI to “Don’t Look Up”
“Sadly, I now feel that we’re living the movie ‘Don’t Look Up’ for another existential threat: unaligned superintelligence.”
https://futurism.com/mit-professor-agi-dont-look-up
Tomi Engdahl says:
Reality Is Melting as Lawyers Claim Real Videos Are Deepfakes
“Suddenly there’s no more reality.”
https://futurism.com/reality-melting-lawyers-deepfakes
Last month, Tesla CEO Elon Musk’s lawyers argued that 2016 recordings of him making big promises about Tesla’s Autopilot software could have been deepfaked.
Tomi Engdahl says:
Brady Snyder / XDA Developers:
Google launches a dedicated Labs page, where users can sign up to test Google’s early ideas for features and products, including Search and Workspace AI tools — Google is using artificial intelligence to improve its existing products, and you can sign up to try them in Labs now. Here’s how to get started.
https://www.xda-developers.com/join-waitlist-google-generative-ai-tools/
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
Google launches MusicLM, an experimental AI tool that can turn text prompts into several song versions, in its AI Test Kitchen app on the web, Android, and iOS
Google makes its text-to-music AI public
https://techcrunch.com/2023/05/10/google-makes-its-text-to-music-ai-public/
Google today released MusicLM, a new experimental AI tool that can turn text descriptions into music. Available in the AI Test Kitchen app on the web, Android or iOS, MusicLM lets users type in a prompt like “soulful jazz for a dinner party” or “create an industrial techno sound that is hypnotic” and have the tool create several versions of the song.
Users can specify instruments like “electronic” or “classical,” as well as the “vibe, mood, or emotion” they’re aiming for, as they refine their MusicLM-generated creations.
When Google previewed MusicLM in an academic paper in January, it said that it had “no immediate plans” to release it. The coauthors of the paper noted the many ethical challenges posed by a system like MusicLM, including a tendency to incorporate copyrighted material from training data into the generated songs.
Tomi Engdahl says:
James Vincent / The Verge:
Google rebrands its AI tools for its productivity apps as Duet AI and teases Sidekick, a feature that can read, summarize, and answer questions about documents
Google rebrands AI tools for Docs and Gmail as Duet AI — its answer to Microsoft’s Copilot / Google wants to make Gmail, Docs, Sheets, and Slides more useful with the help of generative AI. But most of its features are still in development.
https://www.theverge.com/2023/5/10/23718301/google-ai-workspace-features-duet-docs-gmail-io
Tomi Engdahl says:
Ben Schoon / 9to5Google:
Google launches Project Tailwind, an “AI-first notebook” experiment that pulls information from users’ Google Drive documents, available via a US-only waitlist
Google’s ‘Project Tailwind’ is an AI notebook that helps with study and more
https://9to5google.com/2023/05/10/google-project-tailwind-ai-notebook/
Google has launched “Project Tailwind,” a new AI-first tool that is effectively a notebook of the future to help you research information as you write about it.
Alongside its new AI integration in Google Search and Workspace products such as Docs and Gmail, Google has also launched “Project Tailwind.”
Google explains Tailwind as an “AI-first notebook” that pulls information from the documents that you upload or have in Google Drive.
Tailwind is your AI-first notebook, grounded in the information you choose and trust. Tailwind is an experiment, and currently available in the U.S. only. Join the waitlist to try it for yourself.
Users can ask the AI questions in natural language and get responses in the context of their documents, which includes notes. There are also buttons for “New Ideas,” “Reading Quiz,” and “Summary.” Effectively, this can create study guides based not on information from the web but on the information you give it, to help with study and learning. Tailwind then cites all of its sources within your own documents.
The waitlist for “Project Tailwind” is open now, but it’s only available in the United States for now. Google says it is in its “early days” still.
https://thoughtful.sandbox.google.com/about
Tomi Engdahl says:
David Pierce / The Verge:
Google shows AI features coming to Search, including an AI-powered “snapshot” that summarizes search results with links to sites “corroborating” the information — Google is moving slowly and carefully to make AI happen. Maybe too slowly and too carefully for some people.
The AI takeover of Google Search starts now
Google is moving slowly and carefully to make AI happen. Maybe too slowly and too carefully for some people. But if you opt in, a whole new search experience awaits.
https://www.theverge.com/2023/5/10/23717120/google-search-ai-results-generated-experience-io
The future of Google Search is AI. But not in the way you think. The company synonymous with web search isn’t all in on chatbots (even though it’s building one, called Bard), and it’s not redesigning its homepage to look more like a ChatGPT-style messaging system. Instead, Google is putting AI front and center in the most valuable real estate on the internet: its existing search results.
To demonstrate, Liz Reid, Google’s VP of Search, flips open her laptop and starts typing into the Google search box. “Why is sourdough bread still so popular?” she writes and hits enter. Google’s normal search results load almost immediately. Above them, a rectangular orange section pulses and glows and shows the phrase “Generative AI is experimental.” A few seconds later, the glowing is replaced by an AI-generated summary: a few paragraphs detailing how good sourdough tastes, the upsides of its prebiotic abilities, and more. To the right, there are three links to sites with information that Reid says “corroborates” what’s in the summary.
Google calls this the “AI snapshot.” All of it is generated by Google’s large language models, all of it sourced from the open web.
Tomi Engdahl says:
James Vincent / The Verge:
Google makes Bard available in English in 180 countries and territories, promises AI image generation from Adobe and integration with services like Instacart — Google is adding a smorgasbord of new features to its AI chatbot Bard, including support for new languages (Japanese and Korean) …
Google drops waitlist for AI chatbot Bard and announces oodles of new features / Support for new languages, dark mode, export functions, and visual search were all announced today. Google is throwing everything at its AI chatbot Bard — while cautioning users it’s just an ‘experiment.’
https://www.theverge.com/2023/5/10/23718066/google-bard-ai-features-waitlist-dark-mode-visual-search-io