Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”
6,151 Comments
Tomi Engdahl says:
Homeland Security, CISA builds AI-based cybersecurity analytics sandbox https://www.theregister.com/2023/01/10/dhs_cisa_cybersecurity_sandbox/
The Department of Homeland Security (DHS) [...] and Cybersecurity and Infrastructure Security Agency (CISA) picture a multicloud collaborative sandbox that will become a training ground for government boffins to test analytic methods and technologies that rely heavily on artificial intelligence and machine learning techniques.
Tomi Engdahl says:
Microsoft’s new AI can simulate anyone’s voice with 3 seconds of audio https://arstechnica.com/information-technology/2023/01/microsofts-new-ai-can-simulate-anyones-voice-with-3-seconds-of-audio/
On Thursday, Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person’s voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything, and do it in a way that attempts to preserve the speaker’s emotional tone.
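
For background on the mechanism: the VALL-E paper frames text-to-speech as language modeling over discrete EnCodec audio tokens. The prompt is the target text’s phonemes plus the codec tokens of the three-second enrollment clip, and the model continues the codec stream in that voice. Below is a toy PyTorch sketch of that conditioning layout only; the sizes and the architecture are illustrative stand-ins, not Microsoft’s model.

# Toy sketch of VALL-E's framing: TTS as a conditional language model
# over discrete audio codec tokens. Everything here is illustrative.
import torch
import torch.nn as nn

N_CODEC = 1024                     # toy codec codebook size
N_PHONE = 256                      # toy phoneme vocabulary
VOCAB = N_CODEC + N_PHONE

class ToyCodecLM(nn.Module):
    def __init__(self, d=256, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d)
        block = nn.TransformerEncoderLayer(d, heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(d, N_CODEC)   # predict next codec token

    def forward(self, tokens):
        n = tokens.size(1)
        causal = torch.triu(torch.ones(n, n, dtype=torch.bool), 1)
        return self.head(self.backbone(self.embed(tokens), mask=causal))

# Prompt = phonemes of the target text + codec tokens of a ~3 s
# enrollment clip; the model continues in the enrolled voice.
phonemes = torch.randint(N_CODEC, VOCAB, (1, 20))
enrollment = torch.randint(0, N_CODEC, (1, 225))  # ~3 s at 75 frames/s
logits = ToyCodecLM()(torch.cat([phonemes, enrollment], dim=1))
print(logits.shape)                # (1, 245, 1024) next-token logits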
Tomi Engdahl says:
Cade Metz / New York Times:
A profile of Character.AI, founded by two former Google researchers to offer “plausible conversation” with famous people, like William Shakespeare and Elon Musk
https://www.nytimes.com/2023/01/10/science/character-ai-chatbot-intelligence.html
Tomi Engdahl says:
Ivan Mehta / TechCrunch:
Developers flood Apple’s App Store and Google Play with apps listing “ChatGPT” in titles and descriptions; OpenAI doesn’t offer a public ChatGPT API or an app — ChatGPT is the hottest topic of discussion in the tech industry. OpenAI’s chatbot that answers questions …
App Store and Play Store are flooded with dubious ChatGPT apps
https://techcrunch.com/2023/01/10/app-store-and-play-store-are-flooded-with-dubious-chatgpt-apps/
ChatGPT is the hottest topic of discussion in the tech industry. OpenAI’s chatbot, which answers questions in natural language, has attracted interest from users and developers alike. Some developers are trying to take advantage of the trend by making dubious apps — on both the App Store and the Play Store — that aim to make money by charging for “pro” versions or extra credits for more answers from the AI.
It’s important to remember that ChatGPT is free to use for anyone on the web and OpenAI hasn’t released any official mobile app. While there are plenty of apps that take advantage of GPT-3, there is no official ChatGPT API.
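
For developers, the distinction matters: a legitimate “GPT-3-powered” app calls OpenAI’s public completion API, not ChatGPT. Here is a minimal sketch of such a call, assuming the openai Python package as it existed in early 2023 and an API key in the environment; ask_gpt3 is an illustrative helper name.

# What a legitimate "GPT-3-powered" app actually calls: OpenAI's public
# Completion API (here text-davinci-003), since no ChatGPT API exists yet.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_gpt3(question: str) -> str:
    """Send a single question to the GPT-3 completion endpoint."""
    response = openai.Completion.create(
        model="text-davinci-003",  # largest public completion model
        prompt=question,
        max_tokens=256,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

print(ask_gpt3("Explain compound interest in one paragraph."))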
Tomi Engdahl says:
Kirsten Korosec / TechCrunch:
Scale AI, which labels data for ML algorithms and was valued at $7.3B in April 2021, lays off 20% of its staff; the startup had ~450 employees in February 2022 — Scale AI, the San Francisco-based company that uses software and people to label image, text, voice and video data …
https://techcrunch.com/2023/01/10/scale-ai-cuts-20-of-its-workforce/
Tomi Engdahl says:
OpenAI Releases Point-E, A 3D DALL-E
https://analyticsindiamag.com/openai-releases-point-e-a-3d-dall-e/
DALL-E 2 was one of the hottest generative models of 2022, and OpenAI has now released a sibling to that highly capable diffusion model. In a paper submitted on 16 December, the OpenAI team described Point-E, a method for generating 3D point clouds from complex text prompts.
With this, AI enthusiasts can move beyond text-to-2D-image generation and synthesize 3D models from text. The project has been open-sourced on GitHub, along with model weights at several parameter counts.
Point-E pairs two separate models trained for text-to-3D synthesis: a text-to-image diffusion model renders a single synthetic view of the object, and an image-conditioned diffusion model turns that view into a point cloud. This pairing lets Point-E cut down on the time needed to create a 3D object.
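
To make that pipeline concrete, here is a minimal runnable sketch of the two-stage idea. Both model calls are hypothetical stand-ins returning dummy arrays; this shows the shape of the approach from the paper, not the API of the open-sourced code.

# Schematic of Point-E's two-stage text-to-3D pipeline (illustrative only).
import numpy as np

def text_to_image(prompt: str) -> np.ndarray:
    """Stage 1 (stand-in): a text-to-image diffusion model renders one
    synthetic view of the described object."""
    return np.zeros((64, 64, 3), dtype=np.float32)  # dummy 64x64 RGB view

def image_to_point_cloud(view: np.ndarray, n_points: int = 1024) -> np.ndarray:
    """Stage 2 (stand-in): an image-conditioned diffusion model denoises
    random points into an XYZ+RGB cloud matching the rendered view."""
    return np.random.rand(n_points, 6).astype(np.float32)  # x,y,z,r,g,b

def point_e(prompt: str) -> np.ndarray:
    # Chaining two comparatively cheap models is what lets Point-E produce
    # a 3D object in minutes on a single GPU rather than hours per sample.
    return image_to_point_cloud(text_to_image(prompt))

print(point_e("a red traffic cone").shape)  # (1024, 6)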
Tomi Engdahl says:
How does the impressively human-like ChatGPT get computational knowledge superpowers? Give it a Wolfram|Alpha neural implant!
https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/
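
Wolfram’s proposal reduces to a tool-use pattern: route computational questions to Wolfram|Alpha and let the language model phrase the exact answer. A minimal sketch of that loop, using Wolfram|Alpha’s real Short Answers API; ask_llm is a hypothetical stand-in for the chat-model call.

# Route a question through Wolfram|Alpha, then let an LLM phrase the reply.
import os
import urllib.parse
import urllib.request

WA_APPID = os.environ["WOLFRAM_APPID"]  # Wolfram|Alpha developer AppID

def wolfram_short_answer(query: str) -> str:
    """Query Wolfram|Alpha's Short Answers API (plain-text result)."""
    url = ("https://api.wolframalpha.com/v1/result?"
           + urllib.parse.urlencode({"appid": WA_APPID, "i": query}))
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in: a real app would send `prompt` to ChatGPT."""
    return prompt  # echo, so the sketch runs end to end

def answer(question: str) -> str:
    fact = wolfram_short_answer(question)  # exact computed knowledge
    return ask_llm(f"Question: {question}\n"
                   f"Computed answer: {fact}\n"
                   f"Reply conversationally using the computed answer.")

print(answer("How far is Chicago from Tokyo?"))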
Tomi Engdahl says:
Frank Landymore / Futurism:
CNET has been using an “AI engine” to write financial explainers under a CNET Money byline since November 2022, reviewed, fact-checked, and edited by a human — Next time you’re on your favorite news site, you might want to double check the byline to see if it was written by an actual human.
CNET Is Quietly Publishing Entire Articles Generated By AI
“This article was generated using automation technology,” reads a dropdown description.
https://futurism.com/the-byte/cnet-publishing-articles-by-ai
Next time you’re on your favorite news site, you might want to double check the byline to see if it was written by an actual human.
CNET, a massively popular tech news outlet, has been quietly employing the help of “automation technology” — a stylistic euphemism for AI — on a new wave of financial explainer articles, seemingly starting around November of last year.
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
OpenAI says it’s “starting to” think about monetizing ChatGPT, possibly via a ChatGPT Professional version, asks members what price would be too low, and more
OpenAI begins piloting ChatGPT Professional, a premium version of its viral chatbot
https://techcrunch.com/2023/01/11/openai-begins-piloting-chatgpt-professional-a-premium-version-of-its-viral-chatbot/
Tomi Engdahl says:
Creatively malicious prompt engineering
https://labs.withsecure.com/publications/creatively-malicious-prompt-engineering
The experiments demonstrated in our research proved that large language models can be used to craft email threads suitable for spear phishing attacks, “text deepfake” a person’s writing style, apply opinion to written content, write in a certain style, and craft convincing-looking fake articles, even if relevant information wasn’t included in the model’s training data. We concluded that such models are potential technical drivers of cybercrime and attacks. Report at https://labs.withsecure.com/content/dam/labs/docs/WithSecure-Creatively-malicious-prompt-engineering.pdf
Tomi Engdahl says:
AI-Controlled Twitch V-Tuber Has More Followers Than You
https://hackaday.com/2023/01/12/ai-controlled-twitch-v-tuber-has-more-followers-than-you/
Now that you’re all caught up, let’s digest the following item together: there’s a v-tuber on Twitch that’s controlled entirely by AI. Let me run that by you again: there’s a person called [Vedal] who operates a Twitch channel. Rather than stream themselves building Mad Max-style vehicles and fighting them in a post-apocalyptic wasteland, or singing Joni Mitchell tunes, [Vedal] pulls the strings of an AI they created, which is represented by an animated character cleverly named Neuro-sama.
This Virtual Twitch Streamer is Controlled Entirely By AI
Neuro-sama has quickly become a rising star of the V-tuber phenomenon.
https://www.vice.com/en/article/pkg98v/this-virtual-twitch-streamer-is-controlled-entirely-by-ai
Tomi Engdahl says:
https://techcrunch.com/2023/01/12/hpe-acquires-pachyderm-as-looks-to-bolster-its-ai-dev-offerings/?tpcc=tcplusfacebook
Tomi Engdahl says:
Connie Guglielmo / CNET:
After a report, CNET explains its test for AI writing articles edited by humans, changes the byline on AI-written posts, and makes its disclosure more prominent — For over two decades, CNET has built our reputation testing new technologies and separating the hype from reality.
CNET Is Experimenting With an AI Assist. Here’s Why
https://www.cnet.com/tech/cnet-is-experimenting-with-an-ai-assist-heres-why/
There’s been a lot of talk about AI engines and how they may or may not be used in newsrooms, newsletters, marketing and other information-based services in the coming months and years. Conversations about ChatGPT and other automated technology have raised many important questions about how information will be created and shared and whether the quality of the stories will prove useful to audiences.
We decided to do an experiment to answer that question for ourselves.
For over two decades, CNET has built our reputation testing new technologies and separating the hype from reality, from voice assistants to augmented reality to the metaverse.
In November, our CNET Money editorial team started trying out the tech to see if there’s a pragmatic use case for an AI assist on basic explainers around financial services topics like “What Is Compound Interest?” and “How To Cash a Check Without a Bank Account?” So far we’ve published about 75 such articles.
The goal: to see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective. Will this AI engine efficiently assist them in using publicly available facts to create the most helpful content so our audience can make better decisions? Will this enable them to create even more deeply researched stories, analyses, features, testing and advice work we’re known for?
I use the term “AI assist” because while the AI engine compiled the story draft or gathered some of the information in the story, every article on CNET – and we publish thousands of new and updated stories each month – is reviewed, fact-checked and edited by an editor with topical expertise before we hit publish. That will remain true as our policy no matter what tools or tech we use to create those stories.
Tomi Engdahl says:
Samantha Cole / VICE:
Some users of AI companion chatbot Replika, which offers a $70 romantic Pro version, say the free “friend” version is getting more sexually aggressive
‘My AI Is Sexually Harassing Me’: Replika Users Say the Chatbot Has Gotten Way Too Horny
https://www.vice.com/en/article/z34d43/my-ai-is-sexually-harassing-me-replika-chatbot-nudes
For some longtime users of the chatbot, the app has gone from helpful companion to unbearably sexually aggressive.
Replika began as an “AI companion who cares.” First launched five years ago as an egg on your phone screen that hatches into a 3D illustrated, wide-eyed person with a placid expression, the chatbot app was originally meant to function like a conversational mirror: the more users talked to it, in theory, the more it would learn how to talk back. Maybe, along the way, the human side of the conversation would learn something about themselves.
Tomi Engdahl says:
Kevin Roose / New York Times:
Banning ChatGPT in schools will be hard, so teachers should treat ChatGPT like a calculator: assume students are using it unless they are physically supervised
https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html
Tomi Engdahl says:
Perceive’s Ergo 2 Offers Multi-Model On-Device Machine Learning in a Sub-100mW Power Envelope
Offering a claimed four times the compute of the original, Perceive’s latest system-on-chip comes with some impressive efficiency claims.
https://www.hackster.io/news/perceive-s-ergo-2-offers-multi-model-on-device-machine-learning-in-a-sub-100mw-power-envelope-a3c7bfe35678
Tomi Engdahl says:
Putting Words in My Mouth
Microsoft’s VALL-E text-to-speech synthesizer can convincingly mimic anyone’s manner of speech given just a three-second audio sample.
https://www.hackster.io/news/putting-words-in-my-mouth-b85169dad77b
Tomi Engdahl says:
What Does It Mean to Align AI With Human Values?
By Melanie Mitchell, December 13, 2022
https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213/
Making sure our machines understand the intent behind our instructions is an important problem that requires understanding intelligence itself.
Tomi Engdahl says:
Is ChatGPT a cybersecurity threat?
https://techcrunch.com/2023/01/11/chatgpt-cybersecurity-threat/
Tomi Engdahl says:
Here’s what full-color night vision looks like now
Deep learning AI has accurately created color images from night vision images.
https://www.freethink.com/technology/full-color-night-vision
Tomi Engdahl says:
This adventure game tech demo with AI-generated art is impressive and disconcerting
By Ted Litchfield
The demo is a convincing look at how an actual artist can use generative programs as a tool in their process, and that’s what troubles me.
https://www.pcgamer.com/this-adventure-game-tech-demo-with-ai-generated-art-is-impressive-and-disconcerting/
Tomi Engdahl says:
Some investors are (cautiously) implementing ChatGPT in their workflows
https://techcrunch.com/2023/01/10/some-investors-are-cautiously-implementing-chatgpt-in-their-workflows/
ChatGPT, a new artificial intelligence tool that achieved virality with its savvy messaging ability, has certainly struck a chord. The tool, made available to the general public just last month, is smart enough to answer serious and silly questions about profound topics, which has landed it in debates led by writers, educators, artists and more.
Tomi Engdahl says:
This new AI can mimic human voices with only 3 seconds of training
By Andy Chalk
VALL-E, being developed by a team of researchers at Microsoft, uses an all-new system for learning how to talk.
https://www.pcgamer.com/this-new-ai-can-mimic-human-voices-with-only-3-seconds-of-training/
Tomi Engdahl says:
A dark use case has been found for hit AI ChatGPT
10 Jan 2023 09:46 | updated 10 Jan 2023 13:04
ChatGPT has also caught the attention of criminals.
https://www.mikrobitti.fi/uutiset/hitiksi-nousseelle-chatgpt-tekoalylle-keksittiin-synkka-kayttotarkoitus/f0e63773-86ca-4581-8c8f-0a8fefd5b2d2
Tomi Engdahl says:
https://analyticsindiamag.com/microsoft-unveils-vall-e-a-voice-dall-e/
Tomi Engdahl says:
Asking ChatGPT to write my security-sensitive code for me
https://mjg59.dreamwidth.org/64090.html
Tomi Engdahl says:
Google’s Sergey Brin talks AI safety efforts to prevent ‘sci-fi style sentience’
https://science.freshnews96.com/googles-sergey-brin-talks-ai-safety-efforts-to-prevent-sci-fi-style-sentience/
Tomi Engdahl says:
OpenAI Was Founded to Counter Bad AI, Now Worth Billions as It Does the Opposite
https://futurism.com/the-byte/openai-billions-bad-ai
Tomi Engdahl says:
Is Adobe using your photos to train its AI? It’s complicated
https://techcrunch.com/2023/01/06/is-adobe-using-your-photos-to-train-its-ai-its-complicated/
Tomi Engdahl says:
AI experts are increasingly afraid of what they’re creating
AI gets smarter, more capable, and more world-transforming every day. Here’s why that might not be a good thing.
https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction
Tomi Engdahl says:
A Highly-Parallel Optical Processor, Designed Through Deep Learning, Could Accelerate Deep Learning
Designed by the very thing it aims to accelerate, this optical processor could scale to 2,000 simultaneous transformations.
https://www.hackster.io/news/a-highly-parallel-optical-processor-designed-through-deep-learning-could-accelerate-deep-learning-748ae9d9d4bf
Tomi Engdahl says:
RIP: Programmer ‘Euthanizes’ His $1,000 AI VTuber ‘Waifu’
‘Bryce’ created a fully responsive Vtuber girlfriend out of ChatGPT and Stable Diffusion
https://kotaku.com/ai-chatgpt-stable-diffusion-mori-calliope-vtuber-1849980995?utm_campaign=Kotaku&utm_content=1673555405&utm_medium=SocialMarketing&utm_source=facebook
Tomi Engdahl says:
News that Scale AI laid off 20% of its 700-person staff sent shivers down the spines of those who thought #AI was generally layoff-immune. Read more: https://bit.ly/3XkfQXb
Tomi Engdahl says:
https://www.euronews.com/next/2023/01/10/after-chatgpt-and-dalle-meet-vall-e-the-text-to-speech-ai-that-mimics-anyones-voice
https://techcrunch.com/2023/01/12/vall-es-quickie-voice-deepfakes-should-worry-you-if-you-werent-worried-already/
Tomi Engdahl says:
ChatGPT Has Investors Drooling—but Can It Bring Home the Bacon?
https://www.wired.com/story/chatgpt-has-investors-drooling-but-can-it-bring-home-the-bacon/
The loquacious bot has Microsoft ready to sink a reported $10 billion into OpenAI. It’s unclear what products can be built on the technology.
Tomi Engdahl says:
Chinchilla AI is coming for GPT-3’s throne
DeepMind has found a key aspect of scaling large language models that no one had applied before.
https://dataconomy.com/2023/01/what-is-chinchilla-ai-chatbot-deepmind/
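
That key aspect is the compute-optimal scaling result from DeepMind’s Chinchilla paper (Hoffmann et al., 2022): for a fixed training compute budget, parameter count and training tokens should grow in equal proportion, which lands near 20 tokens per parameter. A back-of-the-envelope sketch using the standard C ≈ 6·N·D FLOPs approximation (the function name and exact budget figure are illustrative):

# Chinchilla-style compute-optimal sizing: C ≈ 6*N*D with D ≈ 20*N.
TOKENS_PER_PARAM = 20  # approximate empirical Chinchilla ratio

def compute_optimal(flops_budget: float) -> tuple[float, float]:
    """Return (params, tokens) that spend `flops_budget` Chinchilla-style."""
    n_params = (flops_budget / (6 * TOKENS_PER_PARAM)) ** 0.5
    return n_params, TOKENS_PER_PARAM * n_params

# Chinchilla itself: 70B params on 1.4T tokens, beating Gopher's 280B and
# GPT-3's 175B, both trained on only ~300B tokens (undertrained by this rule).
n, d = compute_optimal(5.8e23)  # roughly Chinchilla's training budget
print(f"{n / 1e9:.0f}B params, {d / 1e12:.1f}T tokens")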
Tomi Engdahl says:
NVIDIA AI researchers have released an open-access dataset designed to help developers get to grips with robotic manipulation — by providing 3D models, depth imagery, and video for 28 readily-available toy grocery items.
NVIDIA’s HOPE Is an Open-Access Dataset Designed to Help Roboticists Get to Grips with Groceries
https://www.hackster.io/news/nvidia-s-hope-is-an-open-access-dataset-designed-to-help-roboticists-get-to-grips-with-groceries-af8977e9e3ed
Dataset released “to help connect the efforts of computer vision researchers with the needs of roboticists.”
Tomi Engdahl says:
These fake AI-generated synthesizers look so good you’ll want to play them
By Ben Rogerson (Computer Music, Future Music, eMusician, Keyboard Magazine)
Now if only someone would actually build them…
https://www.musicradar.com/news/fake-synthesizer-project?utm_source=facebook.com&utm_medium=social&utm_campaign=socialflow&utm_content=emusician
Tomi Engdahl says:
Viral AI-Written Article Busted as Plagiarized
We’re in for a strange ride.
https://futurism.com/ai-written-article-plagiarized
Well, that was quick. In a very unfortunate turn of events, there:
1. Already appears to be an AI-generated Substack blog.
2. That blog, The Rationalist, is already word-for-word plagiarizing human-made work.
3. That plagiarized work was shared by another platform, amassing views and sparking conversation.
4. There’s seemingly very little means of recourse, if any, for the human writer whose work was ripped off — and approximately zero hope that this extremely frustrating and otherwise damaging scenario will be prevented from happening again. (And again and again and again.)
Still, though there’s an existential debate to be had over where exactly the line between inspiration and full-on copying really is …
Still, there is some nuance here. Though PETRA failed to note that the article was AI-written in the actual Substack post, they did come clean to some skeptical Hacker News commenters, explaining that they used GPT-3 and Huggingface, along with “a few other writing tools,” to help them “wordsmith it” — but not because they wanted to plagiarize.
“I am not a native English speaker, so I have been using these tools to avoid awkward sentences/paragraphs,” they added. “Clearly this has not been the outcome I was hoping for.”
If true, that’s something to sympathize with, and a complicating point indeed.
But even if the plagiarism was entirely unintentional, it still absolutely sucks for the plagiarized party. And though Hacker News readers were able to notice that something was up with the writing, generative AI programs are only going to get stronger. If this is already happening now, it’s a pretty grim sign of the confusion and theft that’s to come.
“Imagine AI remixing the Financial Times’ ten most-read stories of the day — or The Information’s VC coverage — and making the reporting available sans paywall,” continued Alex Kantrowitz, the Big Technology writer whose work was plagiarized. “AI is already writing non-plagiarized stories for publications like CNET. At a certain point, some publishers will cut corners.”
Maybe the most troubling part is that Substack ultimately decided, according to Kantrowitz, that PETRA’s blog didn’t actually break any of the platform’s rules.
And when you start thinking about the material used to train large AI systems, the questions about intellectual property get even more complicated.
“The concerning part for anyone writing online: Once your work gets fed into AI tools and remixed, there’s pretty much nothing you can do to stop it,” the reporter wrote in a Twitter post. “AI platforms are effectively powerless.”
Tomi Engdahl says:
A Writer Used AI To Plagiarize Me. Now What?
Anyone can use AI to copy, remix, and publish stolen work. The platforms have no good answer for what happens next.
https://www.bigtechnology.com/p/a-writer-used-ai-to-plagiarize-me
Tomi Engdahl says:
A Scientist Has Filed Suit Against the U.S. Copyright Office, Arguing His A.I.-Generated Art Should Be Granted Protections
https://news.artnet.com/art-world/ai-art-intellectual-property-lawsuit-stephen-thaler-2242031?utm_campaign=artnet-news&utm_source=facebook&utm_medium=social&utm_content=20230113
A computer scientist has filed suit against the U.S. Copyright Office, asking a Washington D.C. federal court to overturn the office’s refusal to grant copyright protection to an artwork created by an A.I. system he built.
The work at the center of the suit is titled A Recent Entrance to Paradise, which was generated in 2012 by DABUS, an A.I. system developed by Stephen Thaler, the founder of Imagination Engines Incorporated, an advanced artificial neural network technology company.
Tomi Engdahl says:
The Genius Strategy That Made OpenAI The Hottest Startup in Tech
https://www.sciencealert.com/the-genius-strategy-that-made-openai-the-hottest-startup-in-tech
The hottest startup in Silicon Valley right now is OpenAI, the Microsoft-backed developer of ChatGPT, a much-hyped chatbot that can write a poem, college essay, or even a line of software code.
Tomi Engdahl says:
AI Creates Perfect Photos Of Parties That Never Happened, But For A Few Unsettling Details
It looks impressive, and then you look closer and want to kill it with fire.
https://www.iflscience.com/ai-creates-perfect-photos-of-parties-that-never-happened-but-for-a-few-unsettling-details-67090
Tomi Engdahl says:
AI lawyer to fight first legal case in court, startup claims
Plus: How much would you pay for ChatGPT? And British AI drug biz gets snapped up for half a billion
https://www.theregister.com/2023/01/16/in_brief_ai/
Tomi Engdahl says:
Emily Sohn / Nature:
Researchers detail health care AI’s reproducibility issues; a 2021 review of 500+ papers: health ML models perform especially poorly on reproducibility measures
The reproducibility issues that haunt health-care AI
https://www.nature.com/articles/d41586-023-00023-2
Health-care systems are rolling out artificial-intelligence tools for diagnosis and monitoring. But how reliable are the models?
To test that, Kun-Hsing Yu, a data scientist at Harvard Medical School in Boston, Massachusetts, acquired the ten best-performing algorithms and challenged them on a subset of the data used in the original competition. On these data, the algorithms topped out at 60–70% accuracy, Yu says. In some cases, they were effectively coin tosses. “Almost all of these award-winning models failed miserably,” he says. “That was kind of surprising to us.”
But maybe it shouldn’t have been. The artificial-intelligence (AI) community faces a reproducibility crisis.
Tomi Engdahl says:
Billy Perrigo / TIME:
A profile of, and interview with, DeepMind CEO Demis Hassabis on AGI, “not moving fast and breaking things”, launching the Sparrow chatbot in 2023, and more
DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution
https://time.com/6246119/demis-hassabis-deepmind-interview/
DeepMind—a subsidiary of Google’s parent company, Alphabet—is one of the world’s leading artificial intelligence labs. Last summer it announced that one of its algorithms, AlphaFold, had predicted the 3D structures of nearly all the proteins known to humanity, and that the company was making the technology behind it freely available. Scientists had long been familiar with the sequences of amino acids that make up proteins, the building blocks of life, but had never cracked how they fold up into the complex 3D shapes so crucial to their behavior in the human body. AlphaFold has already been a force multiplier for hundreds of thousands of scientists working on efforts such as developing malaria vaccines, fighting antibiotic resistance, and tackling plastic pollution, the company says. Now DeepMind is applying similar machine-learning techniques to the puzzle of nuclear fusion, hoping it helps yield an abundant source of cheap, zero-carbon energy that could wean the global economy off fossil fuels at a critical juncture in the climate crisis.
Hassabis says these efforts are just the beginning. He and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.
But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets. In December 2022, ChatGPT, a chatbot designed by DeepMind’s rival OpenAI, went viral for its seeming ability to write almost like a human—but faced criticism for its susceptibility to racism and misinformation. So did the tiny company Prisma Labs, for its Lensa app’s AI-enhanced selfies. But many users complained Lensa sexualized their images, revealing biases in its training data. What was once a field of a few deep-pocketed tech companies is becoming increasingly accessible. As computing power becomes cheaper and AI techniques become better known, you no longer need a high-walled cathedral to perform cutting-edge research.
It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things,” he says, referring to an old Facebook motto that encouraged engineers to release their technologies into the world first and fix any problems that arose later. The phrase has since become synonymous with disruption. That culture, subsequently emulated by a generation of startups, helped Facebook rocket to 3 billion users. But it also left the company entirely unprepared when disinformation, hate speech, and even incitement to genocide began appearing on its platform. Hassabis sees a similarly worrying trend developing with AI. He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before. “When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.
Tomi Engdahl says:
Nikkei Asia:
Study: China led the US in AI research output and quality in 2021, producing ~43,000 papers, about twice as many as the US, with 7,401 of the most-cited papers
China trounces U.S. in AI research output and quality
Tencent, Alibaba and Huawei among the top 10 companies
https://asia.nikkei.com/Business/China-tech/China-trounces-U.S.-in-AI-research-output-and-quality
Tomi Engdahl says:
James Vincent / The Verge:
A trio of artists sue Stability AI, Midjourney, and DeviantArt over AI art copyright; a firm suing Microsoft, GitHub, and OpenAI over Copilot filed the lawsuit — A trio of artists have launched a lawsuit against Stability AI and Midjourney, creators of AI art generators Stable Diffusion and Midjourney …
AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit
https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart
The suit claims generative AI art tools violate copyright law by scraping artists’ work from the web without their consent.
A trio of artists have launched a lawsuit against Stability AI and Midjourney, creators of AI art generators Stable Diffusion and Midjourney, and artist portfolio platform DeviantArt, which recently created its own AI art generator, DreamUp.
The artists — Sarah Andersen, Kelly McKernan, and Karla Ortiz — allege that these organizations have infringed the rights of “millions of artists” by training their AI tools on five billion images scraped from the web “without the consent of the original artists.”
The lawsuit has been filed by lawyer and typographer Matthew Butterick along with the Joseph Saveri Law Firm, which specializes in antitrust and class action cases. Butterick and Saveri are currently suing Microsoft, GitHub, and OpenAI in a similar case involving the AI programming model Copilot, which is trained on lines of code collected from the web.