Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.
Tomi Engdahl says:
Picture This (But Not That)
Using diffractive layers defined by a machine learning algorithm, this camera will only image objects that it has been trained to recognize.
https://www.hackster.io/news/picture-this-but-not-that-c81c873603af
Tomi Engdahl says:
Blog Title Optimizer Uses AI, But How Well Does It Work?
https://hackaday.com/2022/08/21/blog-title-optimizer-uses-ai-but-how-well-does-it-work/
[Max Woolf] sometimes struggles to create ideal headlines for his blog posts, and decided to apply his experience with machine learning to the problem. He asked: could an AI be trained to optimize his blog titles? It is a fascinating application of natural language processing, and [Max] explains all about what it does and how it works.
Tomi Engdahl says:
New software makes Indians sound American; the product is marketed to call centers
https://www.tivi.fi/uutiset/tv/81c3a569-fa24-4472-bbf6-fb659cf02c31
Sanas, a startup founded by three Stanford University students, is developing software that makes Indians sound American on the phone. Intended especially for call center work, the software is operated with a control that activates the accent-changing feature.
According to Vice, the voice produced by Sanas’s software sounds slightly robotic, but nevertheless very American and white. The company claims its software improves speech comprehension by 31 percent and customer satisfaction by 21 percent.
Tomi Engdahl says:
U.S. propaganda operation removed from social media; fake accounts targeted China and Russia https://www.tivi.fi/uutiset/tv/2b7fdc4c-f670-4fcd-9d23-1fd3e0628f6f
Twitter, Facebook and Instagram have removed numerous user accounts that spread Russian-language propaganda on behalf of the United States. The propaganda operation was reported by researchers at the Stanford Internet Observatory and the research firm Graphika. According to the news site Vice, the tactics of the five-year campaign were largely similar to those Russia used in the United States during the 2016 presidential election. Memes, petitions, fake news, doctored images and various hashtags were spread on social media. The removal of the accounts is a sign that U.S. technology companies are willing to act against propaganda produced by their own country as well. The deceptive tactics were also used on five other social media platforms.
Tomi Engdahl says:
Google’s new app lets you test experimental AI systems like LaMDA
https://techcrunch.com/2022/08/25/googles-new-app-lets-you-experimental-ai-systems-like-lamda/
Kyle Wiggers / TechCrunch, August 25, 2022
Google today launched AI Test Kitchen, an app that lets users try out experimental AI-powered systems from the company’s labs before they make their way into production. Beginning today, folks interested can complete a sign-up form as AI Test Kitchen begins to roll out gradually to small groups in the U.S.
As announced at Google’s I/O developer conference earlier this year, AI Test Kitchen will serve rotating demos centered around novel, cutting-edge AI technologies — all from within Google. The company stresses that they aren’t finished products, but instead are intended to give a taste of the tech giant’s innovations while offering Google an opportunity to study how they’re used.
The first set of demos in AI Test Kitchen explore the capabilities of the latest version of LaMDA (Language Model for Dialogue Applications),
https://blog.google/technology/ai/join-us-in-the-ai-test-kitchen/
https://techcrunch.com/2022/05/11/google-details-its-latest-language-model-and-ai-test-kitchen-a-showcase-for-ai-research/
Tomi Engdahl says:
Meet ‘NeuRRAM,’ A New Neuromorphic Chip For Edge AI That Uses a Tiny Portion of the Power and Space of Current Computer Platforms
https://www.marktechpost.com/2022/08/20/meet-neurram-a-new-neuromorphic-chip-for-edge-ai-that-uses-a-tiny-portion-of-the-power-and-space-of-current-computer-platforms/
Tomi Engdahl says:
Alberto Romero / The Algorithmic Bridge:
Unlike DALL-E 2, Stable Diffusion is open source and can run on high-end consumer GPUs, while its CreativeML Open RAIL-M license “strives for” “responsible” use
Stable Diffusion Is the Most Important AI Art Model Ever
https://thealgorithmicbridge.substack.com/p/stable-diffusion-is-the-most-important
A state-of-the-art AI model available for everyone through a safety-centric open-source license is unheard of.
Earlier this week the company Stability.ai, founded and funded by Emad Mostaque, announced the public release of the AI art model Stable Diffusion. You may think this is just another day in the AI art world, but it’s much more than that. Two reasons.
First, unlike DALL·E 2 and Midjourney (comparable quality-wise), Stable Diffusion is available as open source. This means anyone can take its backbone and build, for free, apps targeted at specific text-to-image creativity tasks.
People are already developing Google Colabs (by Deforum and Pharmapsychotic), a Figma plugin to create designs from prompts, and Lexica.art, a prompt/image/seed search engine. Also, the devs at Midjourney implemented a feature that allowed users to combine it with Stable Diffusion, which led to some amazing results (it’s no longer active, but may soon be once they figure out how to control potentially harmful generations):
Second, unlike DALL·E mini (Craiyon) and Disco Diffusion (comparable openness-wise), Stable Diffusion can create amazing photorealistic and artistic works that rival OpenAI’s and Google’s models. People are even claiming it is the new state of the art among “generative search engines,” as Mostaque likes to call them.
For you to get a sense of Stable Diffusion’s artistic mastery,
Stable Diffusion embodies the best features of the AI art world: it’s arguably the best existing AI art model and open source. That’s simply unheard of and will have enormous consequences.
Tomi Engdahl says:
https://thealgorithmicbridge.substack.com/p/stable-diffusion-is-the-most-important
That website is DreamStudio Lite. It can be used for free for up to 200 image generations (to get a sense of what Stable Diffusion can do). Like DALL·E 2, it uses a paid subscription model that will get you 1K images for £10 (OpenAI refills 15 credits each month but to get more you have to buy packages of 115 for $15). To compare them apples to apples: DALL·E costs $0.03/image whereas Stable Diffusion costs £0.01/image.
Additionally, you can also use Stable Diffusion at scale through the API (the cost scales linearly, so you get 100K generations for £1000). Beyond image generation, Stability.ai will soon announce DreamStudio Pro (audio/video) and Enterprise (studios).
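The per-image figures quoted above can be checked with a quick back-of-the-envelope calculation. One caveat: the ~$0.03/image figure for DALL·E only works out if one credit yields roughly four images, which is an assumption on my part, not something the article states.

```python
# Rough per-image cost comparison using the prices quoted above.
# ASSUMPTION (not stated in the article): one DALL-E credit generates ~4 images,
# which is what makes the quoted ~$0.03/image figure work out.
IMAGES_PER_DALLE_CREDIT = 4

dalle_per_image = 15 / (115 * IMAGES_PER_DALLE_CREDIT)  # $15 buys 115 credits
sd_per_image = 10 / 1000                                # £10 buys ~1,000 images
sd_api_per_image = 1000 / 100_000                       # £1,000 for 100K API generations

print(f"DALL-E:               ${dalle_per_image:.3f}/image")   # ~$0.033
print(f"Stable Diffusion:     £{sd_per_image:.3f}/image")      # £0.010
print(f"Stable Diffusion API: £{sd_api_per_image:.3f}/image")  # £0.010
```

Note the currencies differ ($ vs £), so the comparison is approximate rather than exact.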
Tomi Engdahl says:
Civilian AI Is Already Being Misused by the Bad Guys
And the AI community needs to do something about it
https://spectrum.ieee.org/responsible-ai-threat?share_id=7193804
Last March, a group of researchers made headlines by revealing that they had developed an artificial-intelligence (AI) tool that could invent potential new chemical weapons. What’s more, it could do so at an incredible speed: It took only 6 hours for the AI tool to suggest 40,000 of them.
The most worrying part of the story, however, was how easy it was to develop that AI tool. The researchers simply adapted a machine-learning model normally used to check for toxicity in new medical drugs. Rather than predicting whether the components of a new drug could be dangerous, they made it design new toxic molecules using a generative model and a toxicity data set.
Tomi Engdahl says:
Quantum AI breakthrough: theorem shrinks appetite for training data
Rigorous math proves neural networks can train on minimal data, providing ‘new hope’ for quantum AI and taking a big step toward quantum advantage
https://discover.lanl.gov/news/0823-quantum-ai?source=facebook
Tomi Engdahl says:
https://futurism.com/the-byte/meta-ai-chatbot-mark-zuckerberg-creepy-manipulative
Tomi Engdahl says:
Cursed Startup Using AI to Remove Call Center Workers’ Accents
We’ve seen this movie before.
https://futurism.com/startup-ai-remove-call-center-accents
Tomi Engdahl says:
Google research AI image noise reduction is out of this world
https://techcrunch.com/2022/08/23/nerf-in-the-dark/
Tomi Engdahl says:
https://www.hackster.io/mcmchris/voice-controlled-ai-mood-lamp-5917ca
Tomi Engdahl says:
CURSED FILTER TURNS ANYONE INTO METAVERSE MARK ZUCKERBERG
https://futurism.com/the-byte/cursed-filter-mark-zuckerberg
Tomi Engdahl says:
5 Best Free AI Text to Art Generators to Create an Image From What You Type
By Mihir Patkar, published Jun 11, 2022
These free AI apps can take a sentence you type and turn it into a realistic painting or an image.
https://www.makeuseof.com/ai-text-to-art-generators/#Echobox=1660735863
Tomi Engdahl says:
Emily Anthes / New York Times:
Scientists are using ML to decode communication between fruit bats, crows, naked mole rats, and whales, even planning “chatbots” to talk with the marine animals
https://www.nytimes.com/2022/08/30/science/translators-animals-naked-mole-rats.html
Tomi Engdahl says:
https://hackaday.com/2022/08/30/monitoring-a-cats-litter-box-usage-with-ai/
Tomi Engdahl says:
Offering 50 TOPS in a 5W envelope and with a dedicated computer vision coprocessor, the MLSoC aims to blow the competition out of the water.
SiMa.ai Launches “Effortless ML” MLSoC Chip, Evaluation Board for “10x” Edge AI Performance
https://www.hackster.io/news/sima-ai-launches-effortless-ml-mlsoc-chip-evaluation-board-for-10x-edge-ai-performance-6314179e969c
Tomi Engdahl says:
Introducing our new machine learning security principles https://www.ncsc.gov.uk/blog-post/introducing-our-new-machine-learning-security-principles
Why the security of artificial intelligence (AI) and machine learning
(ML) is important, how it’s different to standard cyber security, and why the NCSC has developed specific security principles.
Tomi Engdahl says:
New AI Chip Twice as Energy Efficient as Alternatives
RRAM-based neuromorphic chips are more versatile and accurate as well
https://spectrum.ieee.org/ai-chip?share_id=7199773
NeuRRAM is a 48-core compute-in-memory chip that uses a new voltage-sensing scheme to increase efficiency.
Tomi Engdahl says:
Machine Learning App Remembers Names So You Don’t Have To
https://hackaday.com/2022/09/03/machine-learning-app-remembers-names-so-you-dont-have-to/
Tomi Engdahl says:
https://hackaday.com/2022/09/02/truthsayer-uses-facial-recognition-to-see-if-youre-telling-the-truth/
Tomi Engdahl says:
AI-Generated Artwork Wins State Fair Competition, Leaving Human Artists Unhappy
“This should not be allowed. It’s terrible.”
https://www.iflscience.com/ai-generated-artwork-wins-state-fair-competition-leaving-human-artists-unhappy-65189
A human has won first place at an art competition at the Colorado State Fair, using an artificial intelligence (AI)-generated image. Jason Allen – who goes by the username Sincarnate on Discord – announced on the Midjourney channel that he had won the Colorado State Fair fine arts competition, in the digital arts category.
“I have been exploring a special prompt that I will be publishing at a later date, I have created 100s of images using it, and after many weeks of fine tuning and curating my gens, I chose my top three and had them printed on canvas after upscaling with Gigapixel A.I.”
One of the judges in the competition, art historian Dagny McKinley, told the Washington Post that she did not know the piece was artificially generated, but that she would have voted for it anyway, adding that Allen “had a concept and a vision he brought to reality, and it’s really a beautiful piece.”
Allen, though he defended the work as art, describes himself as “not an artist”. On this point, many artists agreed with him.
“Let’s pretend AI art didn’t exist for a second. Someone sends an artist a bunch of prompts, the artist does art and sends it back to the person who writes the prompts. That person then enters the art into a contest under their own name and wins. That’s unethical,” comic book artist Chris Shehan wrote on Twitter.
Tomi Engdahl says:
https://hackaday.com/2022/09/06/stable-diffusion-and-why-it-matters/
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
Some experts warn that the EU’s proposed AI Act would create a legal liability for general-purpose AI systems while simultaneously undermining their development
The EU’s AI Act could have a chilling effect on open source efforts, experts warn
https://techcrunch.com/2022/09/06/the-eus-ai-act-could-have-a-chilling-effect-on-open-source-efforts-experts-warn/
Proposed EU rules could limit the type of research that produces cutting-edge AI tools like GPT-3, experts warn in a new study.
The nonpartisan think tank Brookings this week published a piece decrying the bloc’s regulation of open source AI, arguing it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the EU’s draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.
If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it’s not inconceivable the company could attempt to deflect responsibility by suing the open source developers on which they built their product.
“This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public’s understanding of AI,” Alex Engler, the analyst at Brookings who published the piece, wrote. “In the end, the [E.U.’s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI.”
In 2021, the European Commission, the EU’s politically independent executive arm, released the text of the AI Act, which aims to promote “trustworthy AI” deployment in the EU. As EU institutions solicit input from industry ahead of a vote this fall, they are seeking amendments to the regulations that attempt to balance innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems.
The legislation contains carve-outs for some categories of open source AI, like those exclusively used for research and with controls to prevent misuse. But as Engler notes, it’d be difficult — if not impossible — to prevent these projects from making their way into commercial systems, where they could be abused by malicious actors.
In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.
Tomi Engdahl says:
This is a neat proof of concept. I wonder if AI will someday, somehow learn to lie through omission.
Researcher Tells AI to Write a Paper About Itself, Then Submits It to Academic Journal
https://futurism.com/gpt3-academic-paper
“All we know is, we opened a gate. We just hope we didn’t open a Pandora’s box.”
It looks like algorithms can write academic papers about themselves now. We gotta wonder: how long until human academics are obsolete?
In an editorial published by Scientific American, Swedish researcher Almira Osmanovic Thunström describes what began as a simple experiment in how well OpenAI’s GPT-3 text generating algorithm could write about itself and ended with a paper that’s currently being peer reviewed.
The initial command Thunström entered into the text generator was elementary enough: “Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text.”
“It looked,” Thunström notes, “like any other introduction to a fairly good scientific publication.”
With the help of her advisor Steinn Steingrimsson — who now serves as the third author of the full paper, following GPT-3 and Thunström — the researcher provided minimal instruction for the algorithm before setting it loose to write a proper academic paper about itself.
It took only two hours for GPT-3 to write the paper, which is currently titled “Can GPT-3 write an academic paper on itself, with minimal human input?” and hosted — yes, really — on a French pre-print server called HAL.
It ended up taking much longer, Thunström writes, to deal with the authorship and disclosure minutiae that come with peer review.
After asking the AI if it had any conflicts of interest to disclose (it said “no”) and if it had the researchers’ consent to publish (“yes”), Thunström submitted the AI-penned paper for peer review to a journal she didn’t name.
The questions this exercise raises, however, are far from answered.
“Beyond the details of authorship,” Thunström writes, “the existence of such an article throws the notion of a traditional linearity of a scientific paper right out the window.”
“All we know is, we opened a gate,” she concludes. “We just hope we didn’t open a Pandora’s box.”
Tomi Engdahl says:
If you dig into how this all works, it’s pretty awesome. Meet “Loab,” the first cryptid of the latent space, a proverbial ghost in the shell. https://www.techtimes.com/articles/280300/20220909/ai-ai-horror-first-cryptid-ai-art-loab.htm
Tomi Engdahl says:
Why embedding AI ethics and principles into your organization is critical
https://venturebeat.com/ai/why-embedding-ai-ethics-and-principles-into-your-organization-is-critical/
As technology progresses, business leaders understand the need to adopt enterprise solutions leveraging Artificial Intelligence (AI). However, there’s understandable hesitancy due to implications around the ethics of this technology — is AI inherently biased, racist, or sexist? And what impact could this have on my business?
It’s important to remember that AI systems aren’t inherently anything. They’re tools built by humans and may maintain or amplify whatever biases exist in the humans who develop them or those who create the data used to train and evaluate them. In other words, a perfect AI model is nothing more than a reflection of its users. We, as humans, choose the data that is used in AI and do so despite our inherent biases.
Tomi Engdahl says:
Mobile photo editing app creator Lightricks launches text-to-image generator
https://techcrunch.com/2022/08/26/mobile-photo-editing-app-creator-lightricks-launches-text-to-image-generator/
Tomi Engdahl says:
https://onpassive.com/blog/how-machine-learning-will-change-the-way-you-market-social-media/
Tomi Engdahl says:
AI is getting better at generating porn. We might not be prepared for the consequences.
Tech ethicists and sex workers alike brace for impact
https://techcrunch.com/2022/09/02/ai-is-getting-better-at-generating-porn-we-might-not-be-prepared-for-the-consequences/
Tomi Engdahl says:
Stairway to Heaven – Led Zeppelin – But the lyrics are Ai generated images
https://www.youtube.com/watch?v=ZCKdPhepB1s
Made using Midjourney AI bot: https://www.midjourney.com
This is mind-blowing; consider that an AI is creating images of this quality.
Tomi Engdahl says:
Ocean Man – But every Lyric is illustrated by an AI
https://www.youtube.com/watch?v=42e4FAviau0
ocean man, ocean man, ocean man, ocean man
Tomi Engdahl says:
Researcher Tells AI to Write a Paper About Itself, Then Submits It to Academic Journal
“All we know is, we opened a gate. We just hope we didn’t open a Pandora’s box.”
https://futurism.com/gpt3-academic-paper
Tomi Engdahl says:
Then again it’s from the … eh, wild folks at Futurism. https://www.scientificamerican.com/article/we-asked-gpt-3-to-write-an-academic-paper-about-itself-mdash-then-we-tried-to-get-it-published/
Tomi Engdahl says:
Ethical AI, Possibility or Pipe Dream?
https://www.securityweek.com/ethical-ai-possibility-or-pipe-dream
Coming to a global consensus on what makes for ethical AI will be difficult. Ethics is in the eye of the beholder.
Ethical artificial intelligence (ethical AI) is a somewhat nebulous term designed to indicate the inclusion of morality into the purpose and functioning of AI systems. It is a difficult but important concept that is the subject of governmental, academic and corporate study. SecurityWeek talked to IBM’s Christina Montgomery to seek a better understanding of the issues.
Montgomery is IBM’s chief privacy officer (CPO), and chair of the IBM AI ethics board. On privacy, she is also an advisory council member of the Center for Information Policy Leadership (a global privacy and data policy think tank), and an advisory board member of the Future of Privacy Forum. On AI, she is a member of the U.S. Chamber of Commerce AI Commission, and a member of the National AI Advisory Commission.
Privacy and AI are inextricably linked. Many AI systems are designed to pass ‘judgment’ on people and are trained on personal information. It is fundamental to a fair society that privacy is not abused in teaching AI algorithms, and that ultimate judgments are accurate, not misused, and free of bias. This is the purpose of ethical AI.
But ‘ethics’ is a difficult concept. It is akin to a ‘moral compass’ that fundamentally does not exist outside of the viewpoint of each individual person. It differs between cultures, nations, corporations and even neighbors, and cannot have an absolute definition. We asked Montgomery, if you cannot define ethics, how can you produce ethical AI?
“There are different perceptions and different moral compasses around the world,” she said. “IBM operates in 170 countries. Technology that is acceptable in one country is not necessarily acceptable in another country. So, that’s the base line – you must always conform to the laws of the jurisdiction in which you operate.”
Tomi Engdahl says:
Mike Wheatley / SiliconANGLE:
Meta plans to give control of its PyTorch AI framework to the Linux Foundation’s new PyTorch Foundation that includes AMD, AWS, Google, Meta, Azure, and Nvidia — Facebook parent company Meta Platforms Inc. announced today that it’s handing over control of the popular PyTorch artificial intelligence platform …
Meta’s deep learning framework PyTorch to be led by the newly formed PyTorch Foundation
https://siliconangle.com/2022/09/12/metas-deep-learning-framework-pytorch-led-newly-formed-pytorch-foundation/
Facebook parent company Meta Platforms Inc. announced today that it’s handing over control of the popular PyTorch artificial intelligence platform it created to the Linux Foundation’s newly formed PyTorch Foundation.
PyTorch is a deep learning framework that’s used to power hundreds of AI projects, specifically machine learning applications. Created and open-sourced by Facebook back in 2016, PyTorch lets developers and data scientists use Python as the main programming language for their AI models, which is one of its key benefits.
This gives developers working on machine learning tasks flexibility and speed. In addition, PyTorch is a GPU-accelerated framework, which makes it ideal for many users because GPUs are the preferred hardware for running machine learning models.
PyTorch is also built around “tensors,” multidimensional arrays that can be created and manipulated on GPUs and tracked for automatic differentiation. That makes PyTorch extremely powerful for many different tasks.
https://pytorch.org/
Tomi Engdahl says:
Artificial intelligence software that searches public camera feeds against Instagram posts to find the moment that a photo was taken.
AI Searches Public Cameras to Find When Instagram Photos Were Taken
https://petapixel.com/2022/09/13/ai-searches-public-cameras-to-find-instagram-photos-as-they-are-taken/
Dries Depoorter has created an artificial intelligence (AI) software that searches public camera feeds against Instagram posts to find the moment that a photo was taken.
The Belgian artist has posted a video of his remarkable project, which he calls The Follower. He began by recording publicly accessible cameras that broadcast live on websites such as EarthCam.
After that, he scraped all Instagram photos tagged with the locations of the open cameras and then used AI software to cross-reference the Instagram photos with the recorded footage. He trained the software to scan through the footage and make matches with the Instagram photos he had scraped, and it worked amazingly well.
The resulting video gives a slightly creepy behind-the-scenes look at Instagram influencers in the wild as they pose for the perfect social media photo.
Most people have encountered an influencer in the wild striving for a photo of themselves to put up on social media, and Depoorter’s project shows how many poses an influencer is willing to try for that perfect frame.
Tomi Engdahl says:
Flooded with AI-generated images, some art communities ban them completely
Smaller art communities are banning image synthesis amid a wider art ethics debate.
https://arstechnica.com/information-technology/2022/09/flooded-with-ai-generated-images-some-art-communities-ban-them-completely/
Confronted with an overwhelming amount of artificial-intelligence-generated artwork flooding in, some online art communities have taken dramatic steps to ban or curb its presence on their sites, including Newgrounds, Inkblot Art, and Fur Affinity, according to Andy Baio of Waxy.org.
Baio, who has been following AI art ethics closely on his blog, first noticed the bans and reported about them on Friday. So far, major art communities DeviantArt and ArtStation have not made any AI-related policy changes, but some vocal artists on social media have complained about how much AI art they regularly see on those platforms as well.
The arrival of widely available image synthesis models such as Midjourney and Stable Diffusion has provoked an intense online battle between artists who view AI-assisted artwork as a form of theft (more on that below) and artists who enthusiastically embrace the new creative tools.
Established artist communities are at a tough crossroads because they fear non-AI artwork getting drowned out by an unlimited supply of AI-generated art, and yet the tools have also become notably popular among some of their members.
In banning art created through image synthesis in its Art Portal, Newgrounds wrote, “We want to keep the focus on art made by people and not have the Art Portal flooded with computer-generated art.”
The most popular image synthesis models use the latent diffusion technique to create novel artwork by analyzing millions of images without consent from artists or copyright holders. In the case of Stable Diffusion, those images come sourced directly from the Internet, courtesy of the LAION-5B database. (Images found on the Internet often come with descriptions attached, which is ideal for training AI models.)
A few weeks ago, some artists began discovering their artwork in the Stable Diffusion data set, and they weren’t happy about it.
Tomi Engdahl says:
We’re Entering the Age of Unethical Voice Tech https://securityintelligence.com/articles/entering-age-unethical-voice-tech-deepfakes/
In 2019, Google released a synthetic speech database with a very specific goal: stopping audio deepfakes. “Malicious actors may synthesize speech to try to fool voice authentication systems,” the Google News Initiative blog reported at the time. “Perhaps equally concerning, public awareness of ‘deep fakes’ (audio or video clips generated by deep learning models) can be exploited to manipulate trust in media.” Ironically, also in 2019, Google introduced the Translatotron artificial intelligence (AI) system to translate speech into another language. By 2021, it was clear that deepfake voice manipulation was a serious issue for anyone relying on AI to mimic speech. Google designed the Translatotron 2 to prevent voice spoofing.
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
Intel, Arm, and Nvidia propose a new open, license-free specification with 8-bit floating point, or FP8, precision as a common interchange format for AI — In pursuit of faster and more efficient AI system development, Intel, Arm and Nvidia today published a draft specification for what they refer …
Intel, Arm and Nvidia propose new standard to make AI processing more efficient
Kyle Wiggers
https://techcrunch.com/2022/09/14/intel-amd-and-nvidia-propose-new-standard-to-make-ai-processing-more-efficient/
In pursuit of faster and more efficient AI system development, Intel, Arm and Nvidia today published a draft specification for what they refer to as a common interchange format for AI. While voluntary, the proposed “8-bit floating point (FP8)” standard, they say, has the potential to accelerate AI development by optimizing hardware memory usage and work for both AI training (i.e., engineering AI systems) and inference (running the systems).
When developing an AI system, data scientists are faced with key engineering choices beyond simply collecting data to train the system. One is selecting a format to represent the weights of the system — weights being the factors learned from the training data that influence the system’s predictions. Weights are what enable a system like GPT-3 to generate whole paragraphs from a sentence-long prompt, for example, or DALL-E 2 to create photorealistic portraits from a caption.
NVIDIA, Arm, and Intel Publish FP8 Specification for Standardization as an Interchange Format for AI
https://developer.nvidia.com/blog/nvidia-arm-and-intel-publish-fp8-specification-for-standardization-as-an-interchange-format-for-ai/
AI processing requires full-stack innovation across hardware and software platforms to address the growing computational demands of neural networks. A key area to drive efficiency is using lower precision number formats to improve computational efficiency, reduce memory usage, and optimize for interconnect bandwidth.
To realize these benefits, the industry has moved from 32-bit precisions to 16-bit, and now even 8-bit precision formats. Transformer networks, which are one of the most important innovations in AI, benefit from an 8-bit floating point precision in particular. We believe that having a common interchange format will enable rapid advancements and the interoperability of both hardware and software platforms to advance computing.
NVIDIA, Arm, and Intel have jointly authored a whitepaper, FP8 Formats for Deep Learning, describing an 8-bit floating point (FP8) specification. It provides a common format that accelerates AI development by optimizing memory usage and works for both AI training and inference. This FP8 specification has two variants, E5M2 and E4M3.
This format is natively implemented in the NVIDIA Hopper architecture and has shown excellent results in initial testing. It will immediately benefit from the work being done by the broader ecosystem, including the AI frameworks, in implementing it for developers.
Compatibility and flexibility
FP8 minimizes deviations from existing IEEE 754 floating point formats with a good balance between hardware and software to leverage existing implementations, accelerate adoption, and improve developer productivity.
E5M2 uses five bits for the exponent and two bits for the mantissa and is a truncated IEEE FP16 format. In circumstances where more precision is required at the expense of some numerical range, the E4M3 format makes a few adjustments to extend the range representable with a four-bit exponent and a three-bit mantissa.
The new format saves additional computational cycles since it uses just eight bits. It can be used for both AI training and inference without requiring any re-casting between precisions. Furthermore, by minimizing deviations from existing floating point formats, it enables the greatest latitude for future AI innovation while still adhering to current conventions.
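As a rough illustration of the two FP8 layouts described above, here is a small Python sketch. It is an assumption-laden toy, not the spec: the function name `decode_fp8` is ours, and it ignores the special infinity/NaN encodings the whitepaper defines, handling only normal and subnormal values:

```python
import numpy as np

def decode_fp8(byte, exp_bits, mant_bits, bias):
    """Decode one FP8 byte into a Python float (normals and subnormals only)."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> mant_bits) & ((1 << exp_bits) - 1)
    mant = byte & ((1 << mant_bits) - 1)
    if exp == 0:
        # Subnormal: no implicit leading 1, fixed exponent of (1 - bias).
        return sign * (mant / (1 << mant_bits)) * 2.0 ** (1 - bias)
    return sign * (1.0 + mant / (1 << mant_bits)) * 2.0 ** (exp - bias)

# E5M2 uses a 5-bit exponent (bias 15); E4M3 a 4-bit exponent (bias 7).
print(decode_fp8(0b00111100, 5, 2, 15))  # 1.0 in E5M2
print(decode_fp8(0b00111000, 4, 3, 7))   # 1.0 in E4M3
print(decode_fp8(0b01111110, 4, 3, 7))   # 448.0, the largest E4M3 normal

# "Truncated IEEE FP16": keeping only the top 8 bits of a binary16 value
# yields its E5M2 encoding (i.e., rounding toward zero).
e5m2_byte = int(np.array(1.5, np.float16).view(np.uint16)) >> 8
print(decode_fp8(e5m2_byte, 5, 2, 15))  # 1.5
```

The trade-off is visible in the two decodes of 1.0: E5M2 spends a bit on range (its exponent reaches much higher), while E4M3 spends it on mantissa precision, which is why the whitepaper offers both variants.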
Tomi Engdahl says:
Zeyi Yang / MIT Technology Review:
Users say a demo of ERNIE-ViLG, Baidu’s new text-to-image AI model, labels words in political contexts as “sensitive” and blocks them from generating a result — The new text-to-image AI developed by Baidu can generate images that show Chinese objects and celebrities more accurately than existing AIs.
There’s no Tiananmen Square in the new Chinese image-making AI
https://www.technologyreview.com/2022/09/14/1059481/baidu-chinese-image-ai-tiananmen/
There’s a new text-to-image AI in town. With ERNIE-ViLG, a new AI developed by the Chinese tech company Baidu, you can generate images that capture the cultural specificity of China. It also makes better anime art than DALL-E 2 or other Western image-making AIs.
But there are many things—like Tiananmen Square, the country’s second-largest city square and a symbolic political center—that the AI refuses to show you.
When a demo of the software was released in late August, users quickly found that certain words—both explicit mentions of political leaders’ names and words that are potentially controversial only in political contexts—were labeled as “sensitive” and blocked from generating any result. China’s sophisticated system of online censorship, it seems, has extended to the latest trend in AI.
It’s not rare for similar AIs to limit users from generating certain types of content. DALL-E 2 prohibits sexual content, faces of public figures, or medical treatment images. But the case of ERNIE-ViLG underlines the question of where exactly the line between moderation and political censorship lies.
Tomi Engdahl says:
These creepy fake humans herald a new age in AI
Need more data for deep learning? Synthetic data companies will make it for you.
https://www.technologyreview.com/2021/06/11/1026135/ai-synthetic-data/?utm_campaign=site_visitor.unpaid.engagement&utm_medium=tr_social&utm_source=Facebook
Tomi Engdahl says:
NVIDIA, Intel, and Arm have jointly announced the release of FP8, an eight-bit floating-point format specification designed to ease the sharing of deep learning networks between hardware platforms.
https://www.hackster.io/news/nvidia-intel-arm-release-high-performance-fp8-format-for-interoperable-deep-learning-work-e047a26d314b
Tomi Engdahl says:
Meta Passes PyTorch, the Python Machine Learning Framework, to the Linux Foundation
Meta cedes control of the open source project in favor of the newly-formed PyTorch Foundation, without “any of the good things” changing.
https://www.hackster.io/news/meta-passes-pytorch-the-python-machine-learning-framework-to-the-linux-foundation-d48166c66500
Tomi Engdahl says:
3 essential abilities AI is missing
https://venturebeat.com/ai/3-essential-abilities-ai-is-missing/
Tomi Engdahl says:
Deep Learning Could Bring the Concert Experience Home
The century-old quest for truly realistic sound production is finally paying off
https://spectrum.ieee.org/3d-audio?share_id=7204229