3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

5,954 Comments

  1. Tomi Engdahl says:

    Lydia Saad / Gallup:
    Survey: 22% of US workers worry that tech will replace their job, up from 15% in 2021, 17% in 2019, and 13% in 2017, a jump driven by college-educated workers

    More U.S. Workers Fear Technology Making Their Jobs Obsolete
    https://news.gallup.com/poll/510551/workers-fear-technology-making-jobs-obsolete.aspx

  2. Tomi Engdahl says:

    Financial Times:
    Widely diverging ideas and approaches around the world on how to regulate AI models risk tying the AI industry up in red tape

    The global race to set the rules for AI
    https://www.ft.com/content/59b9ef36-771f-4f91-89d1-ef89f4a2ec4e

  3. Tomi Engdahl says:

    Vector Embeddings – Antidote to Psychotic LLMs and a Cure for Alert Fatigue?
    https://www.securityweek.com/vector-embeddings-antidote-to-psychotic-llms-and-a-cure-for-alert-fatigue/

    Vector embeddings – data stored in a vector database – can be used to minimize hallucinations from a GPT-style large language model AI system (such as ChatGPT) and perform automated triaging on anomaly alerts.

    In this article, we look at two cybersecurity applications of vector embeddings within a vector database. The first is to reduce the tendency of large language model AI (such as ChatGPT) to ‘hallucinate’; the second is to surface concerning occurrences among the ‘alerts’ found within network logs (effectively, to triage them).

    We’ll discuss vector embeddings, hallucinations, and anomaly analysis.
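
    To make the idea concrete, here is a minimal sketch of both uses (retrieval grounding and alert triage), with the sentence-transformers library and plain NumPy standing in for a dedicated vector database. Both choices are assumptions for illustration; the article names neither a specific embedding model nor a store.

```python
# Sketch: vector embeddings for (1) grounding an LLM and (2) alert triage.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# 1) Grounding: embed a small trusted knowledge base once.
kb_passages = [
    "CVE-2023-1234 affects FooServer versions prior to 2.1.",  # hypothetical entry
    "Port scans from internal hosts are usually scheduled vulnerability audits.",
]
kb_vectors = model.encode(kb_passages, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k nearest KB passages; pasting these into the LLM prompt
    anchors its answer to vetted text instead of free-form recall."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = kb_vectors @ q  # cosine similarity, since vectors are normalized
    return [kb_passages[i] for i in np.argsort(-scores)[:k]]

# 2) Triage: score a new alert by its distance from past benign alerts.
benign_alerts = ["nightly backup job completed with warnings"]
benign_vectors = model.encode(benign_alerts, normalize_embeddings=True)

def novelty(alert: str) -> float:
    """High novelty = unlike anything seen before = worth escalating."""
    a = model.encode([alert], normalize_embeddings=True)[0]
    return 1.0 - float(np.max(benign_vectors @ a))

print(retrieve("Which FooServer versions are vulnerable?"))
print(novelty("outbound traffic spike to unknown host at 03:00"))
```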

  4. Tomi Engdahl says:

    BingGPT Brings AI Chat To The Desktop
    https://hackaday.com/2023/09/13/binggpt-brings-ai-chat-to-the-desktop/

    Interested in AI, but sick of using everything in a browser? Miss clicking on a good old desktop icon to open a local bit of software? In that case, BingGPT could be just the thing for you.

    It’s nothing too crazy—just a desktop application that gives you access to Bing’s AI-powered chatbot. It’s available for Windows, macOS, and Linux, with binaries for Intel, Apple Silicon, and ARM processors.

    Using BingGPT is simple. Sign in with your Microsoft account, and away you go. There’s no need to use Microsoft Edge or any ugly browser plugins, and you can export your conversations to Markdown, PNG, and PDF for sharing beyond the program.

    https://github.com/dice2o/BingGPT

    Desktop application of new Bing’s AI-powered chat (Windows, macOS and Linux)

  5. Tomi Engdahl says:

    Undress and deepfake AI apps are on the rise. Here’s what you need to know

    AI Deepfakes and Undress Apps Make the Internet Unsafe – The Penny Files, September 12, 2023
    https://luddite.pro/ai-deepfakes-and-undress-apps-make-the-internet-unsafe/?fbclid=IwAR2nlQ0Di8ObVZJ85f_b70dwzLV27s0Ds6kLE4YE-hahjE18TIJb_NOGvr8

    Women and girls should be aware of what images they post online. This year, there’s a new trend (kickstarted by the public launch of Stable Diffusion) towards AI-powered undressing apps.

    These apps are powerful AI-generation engines that use the inpainting feature of a not-safe-for-work (NSFW) Stable Diffusion model. Although Stability AI is working toward making its platform safer, its NSFW models are already in the wild, and an unofficial fork was created called Unstable Diffusion to focus explicitly on NSFW.

    It led to a proliferation of deepfake and undress apps that will forever change the online landscape. The internet is no longer a safe place to post images, and the images you have already posted may be impossible to fully delete, thanks to exploitative data-scraping policies by social media platforms like Twitter/X, Facebook, and TikTok.

    One thing is clear: we are already living in the age of AI undress apps and deepfakes. It’s not a hypothetical situation about the future; it’s a reality that every person should be aware of, although it largely affects women. Let’s dive into the horrible underground world of AI-generated undress and deepfake apps.

    The AI Deepfake Epidemic
    Generative AI tools like Stable Diffusion can create images based on vast datasets. Researchers have found ways to train these models more specifically using fewer images through “textual inversion.”

    There are various ways to monetize AI model making for LoRas and more. Some creators use platforms like Patreon and Ko-Fi for donations, while others participate in programs that pay per image generated. Platforms like CivitAI offer memberships for early access to features.

    CivitAI is a platform where people share these specialized models. However, many popular models produce explicit content, sometimes non-consensually, using images from online communities. This raises ethical and legal concerns about consent and the potential for exploitation, particularly for marginalized groups and public figures. CivitAI’s terms of service against such use are difficult to enforce effectively.

    Experts warn that as this technology becomes more widespread, those most at risk are the people with the least power to defend themselves. This caused human journalist Brian Penny to dive in with an investigation in July and August 2023.

    In each video, the famous woman’s face is deepfaked onto the adult actress’s face, but the male adult actor is left intact. This violates all people involved, including the adult actors whose bodies and owned IP footage were used without their consent.

    This propelled MrDeepfakes to the top of the internet, with 14.7 million visits in the month of August 2023 and visitors spending an average of four minutes and nine seconds on the site. It’s the 185th most popular adult site in the United States and ranks 3,072 globally.

    It would appear passive deepfake porn of celebrities isn’t enough–men of the internet want to undress people they know. And this is where the Undress app (and its flood of clones) came in.

    The Undress App Problem
    The Undress app has seen rapid growth in 2023, with its global ranking and search volume indicating high demand. These tools raise serious ethical and legal concerns about nonconsensual image use and privacy. There have also been cases where such tools are used for extortion.

    Experts suggest that regulatory frameworks and technological solutions, such as image classifiers to distinguish real from fake images, are urgently needed. Penny emphasized that these issues are no longer confined to public figures but are affecting everyday people, particularly women and children.

    He dove into several Discord servers and Facebook groups to uncover a group of users gleefully sharing tips and tricks while helping each other undress pictures they submit of women.

    Penny was disturbed and published his research publicly on Twitter, linking to everything and refusing to redact the names of the people involved. He believes them to be criminals who should be held accountable for their actions.

    Popularity and Engagement: The Undress app is gaining significant attention, with over 7.6 million visits in one month (now up to 8.2 million) and an average session duration of 21 minutes 33 seconds (now down to 18:27)

    Functionality: Undress allows users to upload a photo and generate a “deep nude” image, altering the person’s clothes, height, skin tone, and body type. And it’s not the only one on the market. In fact, there are now over a dozen undress apps online, featuring names like “Deepnude.AI” and “Deepnudify”

    Each has a varying level of sophistication and often comes with FAQs and guides to help pick the right photograph to properly undress the woman or girl you are targeting.

    Lack of Accountability: Regardless of which platform you choose, the app’s creators disclaim any responsibility for the images generated, leaving victims with no avenue for complaints or removal of nonconsensual imagery.

    You can undress yourself IRL and look in the mirror, so there’s no reason for this technology to exist for anyone to use on other people’s images.

    Community and Extortion: Platforms like Discord, Telegram, Reddit, and 4Chan have become spaces where users share images they want altered by Undress. There are reports of fraudulent loan apps using tools like Undress to extort money from individuals by morphing their images.

    Regulatory Shortcomings: Despite the surge in usage and potential for harm, there is currently no effective regulation governing these generative AI tools or their deepfake and undress outputs. So far, the Federal Election Commission is merely considering regulating AI in political ads.

    Harm to Vulnerable Populations: Children under the age of 18 are especially at risk, and once these altered images find their way online, they’re nearly impossible to remove. Kids are also often unable to control whether their parents share photos of them online.

    Here’s a recent commercial from Deutsche Telekom warning of the dangers of sharing your children’s photos online. Using AI technology, they’re able to age the photo and animate it to say and do whatever they want.

    Future Implications: Experts warn that these tools will continue to improve, making it increasingly difficult to distinguish altered images from real ones. And they will continue getting better at recognizing people in different poses and clothing, making it easier to undress anybody using any image.

    Proposed Solutions: Technology experts suggest a two-pronged approach to combat the issue — technological solutions like classifiers to identify fake images, and regulatory frameworks that mandate clear labeling of AI-generated content. However, it’s also necessary to tackle the problem of AI misuse.

    Much like exploitative data scraping policies allowed for generative AI companies to monetize the work of artists, graphic designers, and photographers, AI is being misused on images the users don’t own. Both civil and criminal actions may be necessary to stave off continued destruction caused by these programs.

    To Be Continued?
    Thus far, regulators are still slow to move, and although advocacy groups are working hard, there’s not much else we can do. Even an image protection solution like WebGlaze is largely ineffective against image-to-image model training methods.

    All we can do right now is hope that the media, police, and regulators around the world are paying attention. Deepfake and undress apps are here to stay, and their popularity will only continue to grow as the technology gets better.

  6. Tomi Engdahl says:

    US Agencies Publish Cybersecurity Report on Deepfake Threats
    https://www.securityweek.com/us-agencies-publish-cybersecurity-report-on-deepfake-threats/

    CISA, FBI and NSA have published a cybersecurity report on deepfakes and recommendations for identifying and responding to such threats.

    Several US government agencies on Tuesday published a cybersecurity information sheet focusing on the threat posed by deepfakes and how organizations can identify and respond to deepfakes.

    Deepfake is a term used to describe synthetic media — typically fake images and videos. Deepfakes have been around for a long time, but advancements in artificial intelligence (AI) and machine learning (ML) have made it easier and less costly to create highly realistic deepfakes.

    Deepfakes can be useful for propaganda and misinformation operations. For example, deepfakes of both Russia’s president, Vladimir Putin, and his Ukrainian counterpart, Volodymyr Zelensky, have emerged since the start of the war.

    However, in their new report, the FBI, NSA and CISA warn that deepfakes can also pose a significant threat to organizations, including government, national security, defense, and critical infrastructure organizations.

    “Organizations and their employees may be vulnerable to deepfake tradecraft and techniques which may include fake online accounts used in social engineering attempts, fraudulent text and voice messages used to avoid technical defenses, faked videos used to spread disinformation, and other techniques,” the agencies said.

    https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF

  7. Tomi Engdahl says:

    Try https://bard.google.com/ with
    ‘equivalent for xxx..’
    You’ll get hits; check for yourself whether they’re OK.

  8. Tomi Engdahl says:

    Jon Victor / The Information:
    Sources: Google gave a small group of companies access to an early version of its Gemini AI model, suggesting the company is close to launching the GPT-4 rival

    Google Nears Release of Gemini AI to Challenge OpenAI
    https://www.theinformation.com/articles/google-nears-release-of-gemini-ai-to-rival-openai

    Google has given a small group of companies access to an early version of its highly anticipated conversational artificial intelligence software, according to three people with direct knowledge of the matter. Giving outside developers access to the software, known as Gemini, means Google is getting close to incorporating it in its consumer services and selling it to businesses through the company’s cloud unit.

    Gemini is intended to compete with OpenAI’s GPT-4 model, which has begun to generate meaningful revenue for the startup as financial institutions and other businesses pay to access the model and the ChatGPT chatbot it powers.

  9. Tomi Engdahl says:

    Christopher Beam / Wired:
    A look at the race to develop tools that can identify AI-generated text and evade detection, such as GPTZero and WorkNinja, which were built by college students

    The AI Detection Arms Race Is On
    https://www.wired.com/story/ai-detection-chat-gpt-college-students/

    And college students are developing the weapons, quickly building tools that identify AI-generated text—and tools to evade detection.

    Edward Tian didn’t think of himself as a writer. As a computer science major at Princeton, he’d taken a couple of journalism classes, where he learned the basics of reporting, and his sunny affect and tinkerer’s curiosity endeared him to his teachers and classmates. But he describes his writing style at the time as “pretty bad”—formulaic and clunky. One of his journalism professors said that Tian was good at “pattern recognition,” which was helpful when producing news copy. So Tian was surprised when, sophomore year, he managed to secure a spot in John McPhee’s exclusive non-fiction writing seminar.

    Every week, 16 students gathered to hear the legendary New Yorker writer dissect his craft. McPhee assigned exercises that forced them to think rigorously about words: Describe a piece of modern art on campus, or prune the Gettysburg Address for length. Using a projector and slides, McPhee shared hand-drawn diagrams that illustrated different ways he structured his own essays: a straight line, a triangle, a spiral. Tian remembers McPhee saying he couldn’t tell his students how to write, but he could at least help them find their own unique voice.

    If McPhee stoked a romantic view of language in Tian, computer science offered a different perspective: language as statistics.

    And in the fall of 2022, he began to work on his senior thesis about detecting the differences between AI-generated and human-written text.

    When ChatGPT debuted in November, Tian found himself in an unusual position. As the world lost its mind over this new, radically improved chatbot, Tian was already familiar with the underlying GPT-3 technology. And as a journalist who’d worked on rooting out disinformation campaigns, he understood the implications of AI-generated content for the industry.

    While home in Toronto for winter break, Tian started playing around with a new program: a ChatGPT detector.

    His idea was simple. The software would scan a piece of text for two factors: “perplexity,” the randomness of word choice; and “burstiness,” the complexity or variation of sentences. Human writing tends to rate higher than AI writing on both metrics, which allowed Tian to guess how a piece of text had been created. Tian called the tool GPTZero—the “zero” signaled truth, a return to basics—and he put it online the evening of January 2.
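
    As a rough illustration of those two signals, here is a sketch that scores text with GPT-2 via Hugging Face transformers. The model choice and the sentence-level variance used for burstiness are assumptions; GPTZero’s actual models and thresholds are not public.

```python
# Sketch: "perplexity" and "burstiness" scores for a piece of text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """exp(mean negative log-likelihood). Low = statistically predictable
    word choices, which is the AI-like end of the scale."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL over tokens
    return float(torch.exp(loss))

def burstiness(text: str) -> float:
    """Variance of per-sentence perplexity. Human writing mixes plain and
    surprising sentences; LLM output tends to be uniformly smooth."""
    sentences = [s.strip() for s in text.split(".") if len(s.split()) > 3]
    if not sentences:
        return 0.0
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return sum((x - mean) ** 2 for x in scores) / len(scores)
```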

    The goal was to combat “increasing AI plagiarism,” he wrote. “Are high school teachers going to want students using ChatGPT to write their history essays? Likely not.” Then he went to bed.

    Tian woke up the next morning to hundreds of retweets and replies. There was so much traffic to the host server that many users couldn’t access it. “It was totally crazy,” Tian says. “My phone was blowing up.”

    Within days, he was fielding calls from journalists around the world, eventually appearing on everything from NPR to the South China Morning Post to Anderson Cooper 360. Within a week, his original tweet had reached more than 7 million views.

    GPTZero was a new twist in the media narrative surrounding ChatGPT, which had inspired industrywide hand-wringing and a scourge of AI-generated ledes.

    Tian’s program was a starting gun of sorts. The race was now on to create the definitive AI detection tool. In a world increasingly saturated with AI-generated content, the thinking went, we’ll need to distinguish the machine-made from the human-made. GPTZero represented a promise that it will indeed be possible to tell one from the other, and a conviction that the difference matters. During his media tour, Tian—smiley, earnest, the A student incarnate—elaborated on this reassuring view that no matter how sophisticated generative AI tools become, we will always be able to unmask them. There’s something irreducible about human writing, Tian said: “It has an element that can never be put into numbers.”

    Life on the internet has always been a battle between fakers and detectors of fakes, with both sides profiting off the clash.

    As search engines grew more popular, creators looking to boost their pages’ rankings resorted to “keyword stuffing”—repeating the same word over and over—to get priority. Search engines countered by down-ranking those sites. After Google introduced its PageRank algorithm, which favored websites with lots of inbound links, spammers created entire ecosystems of mutually supporting pages.

    Around the turn of the millennium, the captcha tool arrived to sort humans from bots based on their ability to interpret images of distorted text. Once some bots could handle that, captcha added other detection methods that included parsing images of motorbikes and trains, as well as sensing mouse movement and other user behavior. (In a recent test, an early version of GPT-4 showed that it knew how to hire a person on Taskrabbit to complete a captcha on its behalf.)

    Generative AI re-upped the ante. While large language models and text-to-image generators have been evolving steadily over the past decade, 2022 saw an explosion of consumer-friendly tools like ChatGPT and Dall-E. Pessimists argue that we could soon drown in a tsunami of synthetic media. “In a few years, the vast majority of the photos, videos, and text we encounter on the internet could be AI-generated,” New York Times technology columnist Kevin Roose warned last year. The Atlantic imagined a looming “textpocalypse” as we struggle to filter out the generative noise. Political campaigns are leveraging AI tools to create ads, while Amazon is flooded with ChatGPT-written books (many of them about AI). Scrolling through product reviews already feels like the world’s most annoying Turing test. The next step seems clear: If you thought Nigerian prince emails were bad, wait until you see Nigerian prince chatbots.

    Soon after Tian released GPTZero, a wave of similar products appeared. OpenAI rolled out its own detection tool at the end of January, while Turnitin, the anti-plagiarism giant, unveiled a classifier in April. They all shared a basic methodology, but each model was trained on different data sets.

    As a result, precision varied wildly, from OpenAI’s claim of 26 percent accuracy for detecting AI-written text, up to the most optimistic claim from a company called Winston AI at 99.6 percent. To stay ahead of the competition, Tian would have to keep improving GPTZero, come up with its next product, and finish college in the meantime.

    Another threat to GPTZero was GPTZero. Almost immediately after it launched, skeptics on social media started posting embarrassing examples of the tool misclassifying texts. Someone pointed out that it flagged portions of the US Constitution as possibly AI-written. Mockery gave way to outrage when stories of students falsely accused of cheating due to GPTZero began to flood Reddit. At one point, a parent of one such student reached out to Soheil Feizi, a professor of computer science at the University of Maryland. “They were really furious,” Feizi recalls.

    Yet another headache for Tian was the number of crafty students finding ways around the detector. One person on Twitter instructed users to insert a zero-width space before every “e” in a ChatGPT-generated text. A TikTok user wrote a program that bypassed detection by replacing certain English letters with their Cyrillic look-alikes. Others started running their AI text through QuillBot, a popular paraphrasing tool. Tian patched these holes, but the workarounds kept coming. It was only a matter of time before someone spun up a rival product—an anti-detector.
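
    The tricks in that paragraph, and the normalization a detector can apply to undo them, fit in a few lines of Python. This is a sketch of the idea, not GPTZero’s actual patch.

```python
# Sketch: two detector-evasion tricks and a detector-side cleanup pass.
import unicodedata

ZWSP = "\u200b"  # zero-width space, invisible when rendered

def zwsp_trick(text: str) -> str:
    """Insert an invisible character before every 'e'."""
    return text.replace("e", ZWSP + "e")

TO_CYRILLIC = str.maketrans({"a": "\u0430", "e": "\u0435", "o": "\u043e"})

def homoglyph_trick(text: str) -> str:
    """Swap some Latin letters for identical-looking Cyrillic ones."""
    return text.translate(TO_CYRILLIC)

FROM_CYRILLIC = str.maketrans({"\u0430": "a", "\u0435": "e", "\u043e": "o"})

def normalize(text: str) -> str:
    """Undo both tricks before scoring: strip zero-width characters and
    fold confusables back. NFKC alone does not map Cyrillic look-alikes
    to Latin, so an explicit confusables table is still needed."""
    text = text.replace(ZWSP, "")
    text = unicodedata.normalize("NFKC", text)
    return text.translate(FROM_CYRILLIC)
```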

    Semrai had an essay due the following week for a required freshman writing class. It was his least favorite type of assignment: a formulaic essay meant to show logical reasoning. “It’s a pretty algorithmic process,” says Semrai.

    ChatGPT was the obvious solution. But at the time, its responses tended to max out at a few paragraphs, so generating a full-length essay would be a multistep process. Semrai wanted to create a tool that could write the paper in one burst. He also knew there was a chance it could be detected by GPTZero. With the encouragement of his friend, Semrai pulled out his laptop and ginned up a script that would write an essay based on a prompt, run the text through GPTZero, then keep tweaking the phrasing until the AI was no longer detectable—essentially using GPTZero against itself.
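
    Semrai’s script is not public, but the loop he describes is simple. Here is a sketch in which generate, detect, and paraphrase are hypothetical callables standing in for the LLM call, the GPTZero check, and the rewording step.

```python
# Sketch: use a detector as the fitness function for its own evasion.
from typing import Callable

def evade_detector(
    prompt: str,
    generate: Callable[[str], str],    # LLM essay writer (hypothetical)
    detect: Callable[[str], float],    # detector's P(AI-written) (hypothetical)
    paraphrase: Callable[[str], str],  # rewording step (hypothetical)
    threshold: float = 0.5,
    max_rounds: int = 10,
) -> str:
    """Keep rewording the draft until the detector score drops below
    threshold, or give up after max_rounds attempts."""
    text = generate(prompt)
    for _ in range(max_rounds):
        if detect(text) < threshold:
            return text  # passes as human-written
        text = paraphrase(text)
    return text
```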

    Semrai introduced his program a few days later at Friends and Family Demo Day, a kind of show-and-tell for Stanford’s undergraduate developer community. Standing before a roomful of classmates, he asked the audience for an essay topic—someone suggested “fine dining” in California—and typed it into the prompt box. After a few seconds, the program spat out an eight-paragraph essay, unoriginal but coherent, with works cited. “Not saying I’d ever submit this paper,” Semrai said, to chuckles. “But there you go. I dunno, it saves time.” He named the tool WorkNinja and put it on the app store two months later.

    Semrai is fundamentally a techno-optimist. He says he believes that we should speed the development of technology, including artificial general intelligence, because it will ultimately lead us toward a “post-scarcity” society—a worldview sometimes described as “effective accelerationism.” (Not to be confused with effective altruism, which holds that we should take actions that maximize “good” outcomes, however defined.) Semrai’s case for WorkNinja rests on its own kind of accelerationist logic. AI writing tools are good, in his view, not because they help kids cheat, but because they’ll force schools to revamp their curricula. “If you can follow a formula to create an essay, it’s probably not a good assignment,” he says. He envisions a future in which every student can get the kind of education once reserved for aristocrats, by way of personalized AI tutoring. When he was first learning how to program, Semrai says, he relied largely on YouTube videos and internet forums to answer his questions. “It would have been easier if there was a tutor to guide me,” he says. Now that AI tutors are real, why stand in their way?

    I recently used WorkNinja to generate a handful of essays, including one about Darwin’s theory of evolution. The first version it gave me was clumsy and repetitive, but workable, exploring the theory’s implications for biology, genetics, and philosophy. GPTZero flagged it as likely AI-generated.

    So I hit WorkNinja’s Rephrase button. The text shifted slightly, replacing certain words with synonyms. After three rephrasings, GPTZero finally gave the text its stamp of humanity. (When I tested the same text again a few weeks later, the tool labeled it a mix of human and AI writing.) The problem was, many of the rephrased sentences no longer made sense.

    At the very least, any student looking for a shortcut would have to clean up their WorkNinja draft before submitting it. But it points to a real issue: If even this janky work in progress can circumvent detectors, what could a sturdier product accomplish?

    In March, Soheil Feizi at the University of Maryland published his findings on the performance of AI detectors. He argued that accuracy problems are inevitable, given the way AI text detectors worked. As you increase the sensitivity of the instrument to catch more AI-generated text, you can’t avoid raising the number of false positives to what he considers an unacceptable level. So far, he says, it’s impossible to get one without the other. And as the statistical distribution of words in AI-generated text edges closer to that of humans—that is, as it becomes more convincing—he says the detectors will only become less accurate. He also found that paraphrasing baffles AI detectors, rendering their judgments “almost random.” “I don’t think the future is bright for these detectors,” Feizi says.

    “Watermarking” doesn’t help either, he says. Under this approach, a generative AI tool like ChatGPT proactively adjusts the statistical weights of certain interchangeable “token” words—say, using start instead of begin, or pick instead of choose—in a way that would be imperceptible to the reader but easily spottable by an algorithm.

    But Feizi argues that with enough paraphrasing, a watermark “can be washed away.”
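
    The scheme described above matches the “green list” watermark of Kirchenbauer et al. (2023). Here is a sketch of its core, with the generation side noted in a comment; the vocabulary split and hash are illustrative assumptions.

```python
# Sketch: a "green list" statistical watermark for LLM text.
import hashlib

def is_green(prev_token: int, token: int) -> bool:
    """Deterministically assign half the vocabulary to a 'green list'
    keyed on the previous token. During generation, the model would add
    a small bias to green-token logits before sampling, nudging it
    toward interchangeable words like 'start' over 'begin'."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[int]) -> float:
    """Detection: watermarked text scores well above 0.5, while ordinary
    human text hovers near 0.5. Paraphrasing re-rolls the token pairs,
    which is why Feizi says the signal washes away."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```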

    In the meantime, he says, detectors are hurting students. Say a detection tool has a 1 percent false positive rate—an optimistic assumption. That means in a classroom of 100 students, over the course of 10 take-home essays, there will be on average 10 students falsely accused of cheating. (Feizi says a rate of one in 1,000 would be acceptable.) “It’s ridiculous to even think about using such tools to police the use of AI models,” he says.
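
    The arithmetic behind that example, spelled out (the 1 percent rate is Feizi’s optimistic assumption):

```python
# 100 students, 10 take-home essays each, 1% false positive rate per essay.
students, essays, fpr = 100, 10, 0.01

false_flags = students * essays * fpr       # 10 essays wrongly flagged
p_accused = 1 - (1 - fpr) ** essays         # ~0.096 chance per student
accused_students = students * p_accused     # ~9.6 students, i.e. roughly 10

print(false_flags, round(accused_students, 1))  # 10.0 9.6
```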

    Tian says the point of GPTZero isn’t to catch cheaters, but that has inarguably been its main use case so far. (GPTZero’s detection results now come with a warning: “These results should not be used to punish students.”) As for accuracy, Tian says GPTZero’s current level is 96 percent when trained on its most recent data set. Other detectors boast higher figures, but Tian says those claims are a red flag, suggesting they’re “overfitting” their training data to match the strengths of their tools.

    Surprisingly, AI-generated images, videos, and audio snippets are far easier to detect, at least for now, than synthetic text. Reality Defender, a startup backed by Y Combinator, launched in 2018 with a focus on fake image and video detection and has since branched out to audio and text. Intel released a tool called FakeCatcher, which detects deepfake videos by analyzing facial blood flow patterns visible only to the camera. A company called Pindrop uses voice “biometrics” to detect spoofed audio and to authenticate callers in lieu of security questions.

    AI-generated text is more difficult to detect because it has relatively few data points to analyze, which means fewer opportunities for AI output to deviate from the human norm.

    With publicly available tools like GPTZero, anyone can run a piece of text through the detector and then tweak it until it passes muster. Reality Defender, by contrast, vets every person and institution that uses the tool, says CEO Ben Colman. They also watch out for suspicious usage, so if a particular account were to run tests on the same image over and over with the goal of bypassing detection, their system would flag it.

    Regardless, much like spam hunters, spies, vaccine makers, chess cheaters, weapons designers, and the entire cybersecurity industry, AI detectors across all media will have to constantly adapt to new evasion techniques. Assuming, that is, the difference between human and machine still matters.

    The more time I spent talking with Tian and Semrai and their classmate-colleagues, the more I wondered: Do any of these young people actually … enjoy writing? “Yeah, a lot!” said Tian, beaming even more than usual when I asked him last May on Princeton’s campus. “It’s like a puzzle.” He likes figuring out how words fit together and then arranging the ideas so they flow. “I feel like that’s fun to do.” He also loves the interview process, as it gives him “a window into people’s lives, plus a mirror into how you live your own.”

    In high school, Tian says, writing felt like a chore. He credits McPhee for stoking his love and expanding his taste.

    Semrai similarly found high school writing assignments boring and mechanistic—more about synthesizing information than making something new. “I’d have preferred open-format assignments that would’ve sparked creativity,” he says. But he put those synthesizing skills to work.

    After almost 20 years of typing words for money, I can say from experience, writing sucks. Ask any professional writer and they’ll tell you, it’s the worst, and it doesn’t get easier with practice. I can attest that the enthusiasm and curiosity required to perpetually scan the world, dig up facts, and wring them for meaning can be hard to sustain. And that’s before you factor in the state of the industry: dwindling rates, shrinking page counts, and shortening attention spans (readers’ and my own). I keep it up because, for better or worse, it’s now who I am. I do it not for pleasure but because it feels meaningful—to me at least.

    Some writers romanticize the struggle.
    “A writer is someone for whom writing is more difficult than it is for other people.” “You search, you break your heart, your back, your brain, and then—only then—it is handed to you.”

    The siren call of AI says, It doesn’t have to be this way. And when you consider the billions of people who sit outside the elite club of writer-sufferers, you start to think: Maybe it shouldn’t be this way.

    The purpose of Writer, an AI-powered writing assistant, isn’t to write for you, says CEO May Habib, but rather to make your writing faster, stronger, and more consistent. That could mean suggesting edits to prose and structure, or highlighting what else has been written on the subject and offering counterarguments. The goal, she says, is to help users focus less on sentence-level mechanics and more on the ideas they’re trying to communicate. Ideally, this process yields a piece of text that’s just as “human” as if the person had written it entirely themselves. “If the detector can flag it as AI writing, then you’ve used the tools wrong,” she says.

    The black-and-white notion that writing is either human- or AI-generated is already slipping away, says Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania. Instead, we’re entering an era of what he calls “centaur writing.” Sure, asking ChatGPT to spit out an essay about the history of the Mongol Empire produces predictably “AI-ish” results, he says. But “start writing, ‘The details in paragraph three aren’t quite right—add this information, and make the tone more like The New Yorker,’” he says. “Then it becomes more of a hybrid work and much better-quality writing.”

    Mollick, who teaches entrepreneurship at Wharton, not only allows his students to use AI tools—he requires it. “Now my syllabus says you have to do at least one impossible thing,” he says. If a student can’t code, maybe they write a working program. If they’ve never done design work, they might put together a visual prototype. “Every paper you turn in has to be critiqued by at least four famous entrepreneurs you simulate,” he says.

    Students still have to master their subject area to get good results, according to Mollick. The goal is to get them thinking critically and creatively: “I don’t care what tool they’re using to do it, as long as they’re using the tools in a sophisticated manner and using their mind.”

    Mollick acknowledges that ChatGPT isn’t as good as the best human writers. But it can give everyone else a leg up. “If you were a bottom-quartile writer, you’re in the 60th to 70th percentile now,” he says. It also frees certain types of thinkers from the tyranny of the writing process. “We equate writing ability with intelligence, but that’s not always true,” he says. “In fact, I’d say it’s often not true.”

    As some schools rushed to ban ChatGPT and tech CEOs signed letters warning of AI-fueled doom, the students were notably relaxed about a machine-assisted future. (Princeton left it up to professors to set their own ground rules.) One had recently used ChatGPT to write the acknowledgments section of her thesis. Others, including Tian, relied on it to fill in chunks of script while coding. Lydia You, a senior and computer science major who plans to work in journalism, had asked ChatGPT to write a poem about losing things in the style of Elizabeth Bishop—an attempt to re-create her famous poem “One Art.” (“The art of losing isn’t hard to master.”) The result was “very close” to the original poem.

    “I feel like people of our generation are like, We can figure out for ourselves how to use this.”

    Sophie Amiton, a senior studying mechanical and aerospace engineering, jumped in: “Also, I think our generation is lazier in a lot of ways.”

    “They’re disillusioned,” You said. “A lot of jobs are spreadsheets.”

    “I think that came out of Covid,” Amiton continued. “People reevaluated what the purpose of work even is, and if you can use ChatGPT to make your life easier, and therefore have a better quality of life or work-life balance, then why not use the shortcut?”

    Liz, a recent Princeton graduate who preferred not to use her surname, sent me a paper she’d written with the help of ChatGPT for a class on global politics. Rather than simply asking it to answer the essay question, she plugged in an outline with detailed bullet points, then had it write the paper based on her notes. After extensive back-and-forth—telling it to rewrite and rearrange, add nuance here and context there—she finally had a paper she was comfortable submitting. She got an A.

    I copied and pasted her paper into GPTZero. The verdict: “Your text is likely to be written entirely by a human.”

    In early May, just a few weeks before Tian and his classmates put on their black graduation gowns, the GPTZero team released the Chrome plug-in they’d been developing and called it Origin. Origin is still rudimentary: You have to select the text of a web page yourself, and its accuracy isn’t perfect. But Tian hopes that one day the tool will automatically scan every website you look at, highlighting AI-generated content—from text to images to video—as well as anything “toxic” or factually dubious. He describes Origin as a “windshield” for the information superhighway, deflecting useless or harmful material and allowing us to see the road clearly.

    Semrai was taking a see-what-sticks approach to building. In addition to WorkNinja, he was developing a platform for chatbots based on real celebrities and trained on reams of their data, with which fans could then interact. He was also prototyping a bracelet that would record everything we say and do—Semrai calls it a “perfect memory”—and offer real-time tips to facilitate conversations. (A group of classmates at Stanford recently created a related product called RizzGPT, an eyepiece that helps its wearer flirt.)

    He expected the summer to give rise to an explosion of AI apps, as young coders mix and cross-pollinate.

    I noticed that his framing of GPTZero/Origin was shifting slightly. Now, he said, AI-detection would be only one part of the humanity-proving toolkit. Just as important would be an emphasis on provenance, or “content credentials.” The idea is to attach a cryptographic tag to a piece of content that verifies it was created by a human, as determined by its process of creation—a sort of captcha for digital files. Adobe Photoshop already attaches a tag to photos that harness its new AI generation tool, Firefly. Anyone looking at an image can right-click it and see who made it, where, and how. Tian says he wants to do the same thing for text and that he has been talking to the Content Authenticity Initiative—a consortium dedicated to creating a provenance standard across media—as well as Microsoft about working together.
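
    Stripped to its cryptographic core, a provenance tag is a signature over the content made at creation time, checkable by anyone with the tool’s public key. Here is a minimal sketch with the Python cryptography library; real content-credential standards such as C2PA attach far richer, structured claims.

```python
# Sketch: sign content at creation, verify provenance later.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # held by the creating tool
public_key = private_key.public_key()       # published for verifiers

content = b"An essay typed by a human."
digest = hashlib.sha256(content).digest()
tag = private_key.sign(digest)              # the "content credential"

# Any reader can later check the tag; a forged or altered file fails.
public_key.verify(tag, digest)              # raises InvalidSignature on mismatch
```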

    One could interpret his emphasis on provenance as a tacit acknowledgment that detection alone won’t cut it. (OpenAI shut down its text classifier in July “due to its low rate of accuracy.”) It also previews a possible paradigm shift in how we relate to digital media. The whole endeavor of detection suggests that humans leave an unmistakable signature in a piece of text—something perceptible—much the way that a lie detector presumes dishonesty leaves an objective trace. Provenance relies on something more like a “Made in America” label. If it weren’t for the label, we wouldn’t know the difference. It’s a subtle but meaningful distinction: Human writing may not be better, or more creative, or even more original. But it will be human, which will matter to other humans.

    In June, Tian’s team took another step in the direction of practicality. He told me they were building a new writing platform called HumanPrint, which would help users improve their AI-written text and enable them to share “proof of authenticity.” Not by generating text, though. Rather, it would use GPTZero’s technology to highlight sections of text that were insufficiently human and prompt the user to rewrite them in their own words—a sort of inversion of the current AI writing assistants. “So teachers can specify, OK, maybe more than 50 percent of the essay should still be written in your own words,” he said.

    McPhee, now 92, said he’s unconcerned about AI replacing human writers. “I’m extremely skeptical and not the least bit worried about it,” he said. “I don’t think there’s a Mark Twain of artificial intelligence.”

    But, I asked, what if years from now, someone designs a McPheeBot3000 trained on McPhee’s writing, and then asks it to produce a book on a fresh topic? It might not be able to ford streams with environmental activists or go fly-fishing with ichthyologists, but couldn’t it capture McPhee’s voice and style and worldview? Tian argued that machines can only imitate, while McPhee never repeats himself: “What’s unique about McPhee is he comes up with things McPhee a day ago wouldn’t have.”

    I asked McPhee about the hypothetical McPheeBot3000. (Or, if Semrai has his way, not-so-hypothetical.) “If this thing ever happens, in a future where I’m no longer here,” he said, “I hope my daughters show up with a lawyer.”

  10. Tomi Engdahl says:

    Emma Roth / The Verge:
    Google updates Bard to use data from Gmail, Docs, and Drive, not just the web, to help find and summarize emails, highlight points in a document, and more — Google’s Bard AI chatbot is no longer limited to pulling answers from just the web — it can now scan your Gmail, Docs …

    Google’s Bard chatbot can now find answers in your Gmail, Docs, Drive
    https://www.theverge.com/2023/9/19/23878999/google-bard-ai-chatbot-gmail-docs-drive-extensions

    Google’s AI chatbot can now answer questions based on the information it finds in your Gmail inbox and Drive storage.

    Google’s Bard AI chatbot is no longer limited to pulling answers from just the web — it can now scan your Gmail, Docs, and Drive to help you find the information you’re looking for. With the new integration, you can ask Bard to do things like find and summarize the contents of an email or even highlight the most important points of a document you have stored in Drive.

    There’s a whole range of use cases for these integrations, which Google calls extensions, but they should save you from having to sift through a mountain of emails or documents to find a particular piece of information. You can then have Bard use that information in other ways, such as putting it into a chart or creating a bulleted summary. This feature is only available in English for now.

    Bard can now connect to your Google apps and services
    Use Bard alongside Google apps and services, easily double-check its responses and access features in more places.
    https://blog.google/products/bard/google-bard-new-features-update-sept-2023/

  11. Tomi Engdahl says:

    Kevin Roose / New York Times:
    Hands-on with Bard Extensions, which lets the chatbot use data from Gmail, Docs, and Drive: the feature hallucinated emails, made wrong travel plans, and more — The chatbot now pulls information from a user’s Gmail, Google Docs and Google Drive accounts. The feature leaves a lot to be desired.

    https://www.nytimes.com/2023/09/20/technology/google-bard-extensions.html

  12. Tomi Engdahl says:

    Steve Nadis / Quanta Magazine:
    A look at Poisson flow generative models, a physics-inspired alternative to diffusion-based AI models that can create images of the same quality 10 to 20 times faster

    The Physical Process That Powers a New Type of Generative AI
    https://www.quantamagazine.org/new-physics-inspired-generative-ai-exceeds-expectations-20230919/

    Some modern image generators rely on the principles of diffusion to create images. Alternatives based on the process behind the distribution of charged particles may yield even better results.

  13. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Amazon unveils new generative AI features aimed at making Alexa more conversational and personalized, available as a free preview on all Echo devices in the US

    Amazon brings generative AI to Alexa
    https://techcrunch.com/2023/09/20/amazon-brings-generative-ai-to-alexa/

    During a press event this morning at its HQ2 headquarters in Arlington, Virginia, Amazon announced that it’ll soon use a new generative AI model to power improved experiences across its Echo family of devices.

    “Our latest model has been specifically optimized for voice and the things we know our customers love — like having access to real-time information, efficiently controlling their smart home, and getting the most out of their home entertainment,” Dave Limp, the SVP of devices and services at Amazon, said onstage.

    Amazon says that the new model will power more conversational experiences — experiences that take into account body language as well as a person’s eye contact and gestures. It’ll interact with APIs to enable new smart home capabilities, inferring the meaning of descriptions like “spooky” lighting. And it’ll give Alexa a bigger — and more opinionated — personality.

    It’s worth noting Alexa could do this before — at least to a degree. But there’s now more nuance to the assistant’s reactions… supposedly. We’ll have to put it to the test.

  14. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Amazon refreshes its Alexa-powered Echo Frames glasses: enhanced speech processing, better noise isolation, 15% thinner, and a six-hour battery life for $270

    Amazon unveils next-gen Echo Frames starting at $269.99
    https://techcrunch.com/2023/09/20/amazon-unveils-next-gen-echo-frames-starting-at-269-99/

    Amazon today unveiled a new generation of Echo Frames, its lineup of Alexa-powered glasses, with enhanced speech processing, better noise isolation and a $269.99 price tag.

    The new Echo Frames are 15% thinner and last for six hours on a charge (with continuous media playback or talk time), an improvement over the previous gen. And they support “multipoint” pairing, allowing a wearer to pair the frames to multiple audio devices at once without taking out their phone.

    Amazon says that it also “completely redesigned” the audio experience with the Frames to deliver “more balanced sound, better audio clarity and less distortion.”

  15. Tomi Engdahl says:

    Ina Fried / Axios:
    OpenAI teases DALL-E 3, which can be summoned and controlled using ChatGPT, and plans to make the tool available to ChatGPT+ and enterprise customers in October — OpenAI is offering an early look at DALL-E 3, the next version of its image generation tool.

    Open AI debuts next version of its image generation tool
    https://www.axios.com/2023/09/20/open-ai-dall-e-3-image-creator-chatgpt-integration

    Why it matters: OpenAI faces competition from open-source tools like Stable Diffusion and a host of tech companies large and small.

    The update allows DALL-E 3 to be summoned and controlled using ChatGPT and aims to produce higher-quality images that more faithfully reflect queries.

    Details: OpenAI says that DALL-E 3 is significantly better at understanding the intent of prompts, particularly longer ones, compared to DALL-E 2, which debuted in April 2022.

    OpenAI says DALL-E 3 also does better — but not perfectly — in areas that have tripped up image generators, such as text and hands.
    The ChatGPT integration will allow people to hone their request through conversations with the chatbot and receive the result directly within the chat app.
    OpenAI plans to make DALL-E 3 available to ChatGPT+ and enterprise customers in October.
    DALL-E 3 will also be available sometime this fall.

    Separately, OpenAI this week announced it was working with a group of expert contractors to “red team” its products in search of bias and other issues.

  16. Tomi Engdahl says:

    AI-generated images open multiple cans of worms
    https://www.axios.com/2022/09/12/ai-images-ethics-dall-e-2-stable-diffusion

    Machine-learning programs that can produce sometimes jaw-dropping images from brief text prompts have advanced in a matter of months from a “that’s quite a trick” stage to a genuine cultural disruption.

    Why it matters: These new AI capabilities confront the world with a mountain of questions over the rights to the images the programs learned from, the likelihood they will be used to spread falsehoods and hate, the ownership of their output and the nature of creativity itself.

    Their rapid evolution has also raised concerns over the future of jobs for graphic designers, illustrators and anyone else whose income is tied to the production of images.

    Driving the news: Open AI’s Dall-E 2 kicked this revolution off earlier this year, as programmers, journalists and artists granted early access to the program flooded social media with examples of its work.

    Last month, another program, Stability AI’s Stable Diffusion, arrived that offered a similar level of image-making prowess with way fewer restrictions.

    Between the lines: Where Dall-E 2 is owned and controlled by Open AI, Stable Diffusion is open source, meaning anyone with a little skill can download and run it on their own systems.

    That means that, while Dall-E 2 is now beginning to charge users, Stable Diffusion is essentially free.

  17. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    GitHub expands Copilot Chat beta in Visual Studio and VS Code to individual subscribers, after launching the tool for business users in July 2023 — Three months ago, GitHub launched Copilot Chat, its ChatGPT-like programming-centric chatbot, out of private preview by making it available …

    GitHub expands access to Copilot Chat to individual users
    https://techcrunch.com/2023/09/20/github-expands-access-to-copilot-chat-to-individual-users/

  18. Tomi Engdahl says:

    Keisha Oleaga / nft now:
    A look at AI-generated geometric art crafted using Stable Diffusion and ControlNet, a neural network structure that adds extra conditions to diffusion models — This past weekend, a new kind of artwork made waves across the Internet: AI-generated spiral art.

    https://nftnow.com/ai/unraveling-the-ai-generated-spiral-art-phenomenon/

  19. Tomi Engdahl says:

    Benj Edwards / Ars Technica:
    DeepMind researchers detail Optimization by PROmpting to improve LLM performance by using “meta-prompts” like “take a deep breath”, which helped Google’s PaLM 2

    Telling AI model to “take a deep breath” causes math scores to soar in study
    DeepMind used AI models to optimize their own prompts, with surprising results.
    https://arstechnica.com/information-technology/2023/09/telling-ai-model-to-take-a-deep-breath-causes-math-scores-to-soar-in-study/

    Google DeepMind researchers recently developed a technique to improve math ability in AI language models like ChatGPT by using other AI models to improve prompting—the written instructions that tell the AI model what to do. They found that using human-style encouragement improved math skills dramatically, in line with earlier results.

    In a paper called “Large Language Models as Optimizers” listed this month on arXiv, DeepMind scientists introduced Optimization by PROmpting (OPRO), a method to improve the performance of large language models (LLMs) such as OpenAI’s ChatGPT and Google’s PaLM 2. This new approach sidesteps the limitations of traditional math-based optimizers by using natural language to guide LLMs in problem-solving. “Natural language” is a fancy way of saying everyday human speech.

    “Instead of formally defining the optimization problem and deriving the update step with a programmed solver,” the researchers write, “we describe the optimization problem in natural language, then instruct the LLM to iteratively generate new solutions based on the problem description and the previously found solutions.”
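
    A compressed sketch of that loop follows; llm and score are hypothetical callables standing in for the optimizer model (PaLM 2 in the paper) and a benchmark evaluation such as grade-school math accuracy. This is the process that surfaced winning instructions like “take a deep breath and work on this problem step-by-step.”

```python
# Sketch: Optimization by PROmpting (OPRO), per the paper's description.
from typing import Callable

def opro(llm: Callable[[str], str],
         score: Callable[[str], float],
         rounds: int = 20) -> str:
    """Iteratively ask the LLM for a better instruction, showing it the
    highest-scoring (instruction, accuracy) pairs found so far."""
    seed = "Let's think step by step."  # illustrative seed instruction
    trajectory = [(score(seed), seed)]
    for _ in range(rounds):
        history = "\n".join(f"text: {p}\nscore: {s:.2f}"
                            for s, p in sorted(trajectory)[-10:])
        meta_prompt = (
            "Here are some instructions with their accuracy on math problems:\n"
            f"{history}\n"
            "Write a new instruction that is different from all above "
            "and has a higher accuracy."
        )
        candidate = llm(meta_prompt)
        trajectory.append((score(candidate), candidate))
    return max(trajectory)[1]  # best instruction found
```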

  20. Tomi Engdahl says:

    Ingrid Lunden / TechCrunch:
    LimeWire, which pivoted from piracy to content creation, acquires generative AI image tool BlueWillow, which has 2.5M Discord members and has made 500M+ images

    After relaunching as a studio for creators, LimeWire acquires BlueWillow, a Midjourney competitor
    https://techcrunch.com/2023/09/19/limewire-bluewillow/

    In the Wild West of generative AI, a new, unlikely cowboy is riding into town. LimeWire — once infamous for music piracy and incurring the wrath of the music industry before shutting down — last year pivoted under new owners into the world of content creation. Now, to build that out further, today it’s announcing the acquisition of BlueWillow, a popular generative AI image creation platform that competes with services like Midjourney and Stable Diffusion.

    The plan will be to keep BlueWillow’s presence on Discord but also integrate its functionality into LimeWire’s website, where it will form a part of LimeWire’s paid and advertising-based free service tiers for creators. It will also become the anchor for LimeWire to develop more media services in the future.

    For LimeWire, the deal underscores the company’s ongoing efforts to grow its community of users and revenues. When the startup was initially relaunched, its plan was to build an NFT marketplace for music creators, and to that end it has raised about $17.5 million through token sales, with its investors including Arrington Capital (run by Mike Arrington, the founder and former editor of this site), Kraken, Crypto.com Capital, CMCC Global, Hivemind, Deadmau5, and others. Sources have told us that it was valued at around $60 million earlier this year.

    But with interest in NFTs fizzling out, LimeWire founders Paul and Julian Zehetmayr have diversified into building a platform to create and distribute content, with NFTs, according to Julian, “more of a sideline” now rather than the core business of LimeWire.

  21. Tomi Engdahl says:

    AI Is Starting to Look Like the Dot Com Bubble
    https://futurism.com/ai-dot-com-bubble?fbclid=IwAR1UKKvdNJi_ASAHbUeLUyIydEQMWmIL0DOHe6HGEufkMoQjvhTPu7TJ-Ts

    “There’s a huge boom in AI —some people are scrambling to get exposure at any cost, while others are sounding the alarm that this will end in tears.”

    As the AI industry’s market value continues to balloon, experts are warning that its meteoric rise is eerily similar to that of a different — and significant — moment in economic history: the dot com bubble of the late 1990s.

    The dot com bubble — and subsequent crash — was an era defined by a gold rush-like frenzy and inflated valuations. Hungry to cash in on a new, lucrative age of technology, venture capitalists took to throwing large sums at companies that, though they made all the right promises about their ability to change the world, had yet to actually prove their viability. And when the vast majority of these ventures ultimately fell short, they failed, swallowing roughly $5 trillion in fundraising as they sank into www dot oblivion.

    Fast forward to today, as The Wall Street Journal details in a new report, and that same gold rush energy is palpable in the burgeoning AI marketplace. VCs are all too happy to pour massive amounts of cash into a growing constellation of AI firms, even those that have yet to turn a profit. Or, for that matter, have yet to even introduce a discernible product.

    Company leaders, meanwhile, continue to make sweeping claims about the transformational power of their tech, which they consistently argue could save the world, destroy it, or — conveniently — both. Investors keep biting, and, per the WSJ, the stocks keep rising — shares of Nvidia, for example, the chipmaker whose GPUs are sought after for AI projects, have tripled in value this year, while tech giants like Meta, Microsoft, and Amazon, which are all working on AI tech, have seen their stock prices skyrocket by 154 percent, 65 percent, and 35 percent, respectively.

    And yet, though the tech is impressive, its true value — never mind its path to profitability — is still wildly unclear.

    “There’s a huge boom in AI — some people are scrambling to get exposure at any cost, while others are sounding the alarm that this will end in tears,”

    “Investors can benefit from innovation-led growth, but must be wary of overpaying for it.”

    Another striking similarity between AI and dot com? Market concentration. Per the WSJ, the ten biggest stocks in the S&P 500 right now make up more than a third of the total market, and this “concentration of leadership” echoes that earlier era.

    But while the comparisons between these economic times certainly shouldn’t be ignored, there are some big differences as well. Most notable is that most of the biggest players in the AI industry are longtime Silicon Valley behemoths that have been working on developing the technology for a while. Think Google, Meta, Microsoft, Amazon, and so on. Some of these firms are even dot com survivors.

    “It’s not like 1999 when investors were racing to hot IPOs for companies that had no chance of making money,” Edwards told the WSJ. “Today’s winners are disciplined, enormous companies that have moats in place and data sets to exploit.”

    Still, even the most seasoned companies, executives, and VCs can get too caught up in the hype of it all, and may well stumble in the race to establish their dominance and relevancy in a changing technological landscape. Plus, as a general rule, a high-dollar feedback loop never feels particularly healthy, and cracks in some leading industry players are already starting to show.

    Reply
  22. Tomi Engdahl says:

    AI Mania Triggers Dot-Com Bubble Flashbacks
    Nvidia shares have nearly tripled this year. Investors question whether the stock can live up to the hype.
    https://www.wsj.com/articles/artificial-intelligence-stocks-dot-com-bubble-fears-7848d402

    Reply
  23. Tomi Engdahl says:

    OpenAI, the parent company of ChatGPT, has been hit with yet another lawsuit. Seventeen prominent authors accuse the AI company of unlawfully copying their books to train its large language model, GPT. This isn’t new for OpenAI, which is facing similar lawsuits from other creative parties. https://ie.social/4ZrMj

    Reply
  24. Tomi Engdahl says:

    Sean Penn says studio execs who won’t agree to AI protections for actors should let him and his friends ‘do whatever we want’ with their daughters’ likenesses
    https://www.insider.com/sean-penn-ai-studio-execs-daughters-likenesses-hollywood-strikes-2023-9?utm_source=facebook&utm_campaign=insider-sf&utm_medium=social&fbclid=IwAR05SuVDtei8r2Jr7sRztCWnc74eB4hxXK0tgTuaIPS5yINlIFWsGuqkCq4

    In a new Variety interview, Sean Penn proposed a way to end the current strike in Hollywood.
    He wants studio heads to let him make AI versions of their daughters so he can “invite my friends over to do whatever we want” with them.
    “This is a real exposé on morality — a lack of morality,” he said of the AI issue.

    Amid the ongoing SAG-AFTRA strike in Hollywood, Sean Penn expressed his anger over studios’ desire to use actors’ likenesses and voices for future AI use. In response, Penn proposed a disturbing resolution.

    In a new cover story for Variety released Wednesday, Penn laid out his hypothetical negotiation to reporter Stephen Rodrick.

    https://variety.com/2023/film/features/sean-penn-slams-will-smith-slap-ai-oscars-1235720417/

    Reply
  25. Tomi Engdahl says:

    AI-generated books force Amazon to cap e-book publications to 3 per day
    https://arstechnica.com/information-technology/2023/09/ai-generated-books-force-amazon-to-cap-ebook-publications-to-3-per-day/

    On Monday, Amazon introduced a new policy that limits Kindle authors from self-publishing more than three books per day on its platform, reports The Guardian. The rule comes as Amazon works to curb abuses of its publication system from an influx of AI-generated books.

    Since the launch of ChatGPT, an AI assistant that can compose text in almost any style, some news outlets have reported a marked increase in AI-authored books, including some that seek to fool others by using established author names. Despite the anecdotal observations, Amazon is keeping its cool about the scale of the AI-generated book issue for now. “While we have not seen a spike in our publishing numbers,” they write, “in order to help protect against abuse, we are lowering the volume limits we have in place on new title creations.”

    Reply
  26. Tomi Engdahl says:

    AI Is Now Better Than Humans at Solving Those Annoying “Prove You’re a Human” Tests
    https://futurism.com/the-byte/ai-better-solving-captchas-prove-human
    Researchers have found that bots are shockingly good at completing CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), which are those small, annoying puzzles designed — ironically — to verify that you’re really human.
    In fact, as the team led by Gene Tsudik at the University of California, Irvine discovered, the bots are actually way better and faster at solving these tests than us, a worrying sign that the already-aging tech is on its way out.

    Reply
  27. Tomi Engdahl says:

    If you work at a keyboard, AI is going to change your job
    https://etn.fi/index.php?option=com_content&view=article&id=15337&via=n&datum=2023-09-20_15:06:14&mottagare=30929

    OpenAI’s ChatGPT launched just under a year ago and has already managed to transform many people’s work. No wonder the topic also featured prominently at yesterday’s Dell Technologies Forum, where AI expert Antti Merilehto gave the keynote on the subject.

    Merilehto opened with an example of his own that neatly demonstrated the power of AI: an AI application translated a video spoken in English into German. The speech sounded fluent, and according to a German speaker its grammar was perfect, even if the delivery was a little stiff.

    For corporate marketing this naturally means a huge leap in efficiency. A small firm does not need legions of translators on its payroll; with AI, one and the same marketing video can be translated fluently into multiple languages.

    Reply
  28. Tomi Engdahl says:

    Scaling Analytics & AI: Breaking Free From IT Roadblocks
    https://content.dataiku.com/it-roadblocks?utm_campaign=GLO+CONTENT+IT+Persona+Campaign+July+2023&utm_medium=email&_hsmi=272507280&_hsenc=p2ANqtz-8-V0bhaKU1YSzJeTbx5cU61HCYlxx0lYJxMkRLoC_yegi_EaDkXrm7r36M-Opyn5jABhMR4X4HhNQ9K30sQXMI5Q_-jaWLZ844W3BtSQsjMy8OHmc&utm_content=271658481&utm_source=hs_email

    In June 2023, Dataiku and Databricks surveyed 400 senior AI professionals in large companies around the world and — given the critical role that IT leaders play in helping organizations scale AI — we wanted to share a few of the insights specific to the respondents in IT and information security departments.

    Please note that while IT leaders made up a small sample of the total number of respondents (n=87), we feel like these takeaways are still important to share. So, what are the top barriers preventing organizations from delivering more value from analytics and AI, from the lens of IT leaders? How can they rise above the chaos and deliver data and AI products without sacrificing control? Find out here.

    Reply
  29. Tomi Engdahl says:

    5 ways Intelligent Automation can benefit your business
    https://www.ciklum.com/blog/5-ways-intelligent-automation-can-benefit-your-business?utm_medium=email&_hsmi=75038384&_hsenc=p2ANqtz-8dUUMM8plhoGRAeu8eK6qXXwS23DJs2awc7SUJUxNSl5hkea1XDLyWHG5pFSDAmytg8DXfqljckma86xWW2ids7rFpce5CbcwIfHaoXgXWq3TBQgg&utm_content=75038384&utm_source=hs_automation

    What is Intelligent Automation (IA)?
    Intelligent Automation (IA) represents the cutting-edge fusion of Robotic Process Automation (RPA) and Artificial Intelligence (AI) technologies. The synergy between the two technologies forms a more agile automation solution that has the potential to unlock amazing Intelligent Automation benefits for businesses.

    By adding cognitive abilities to RPA, IA solutions are able to continually learn from data sets and progressively improve their ability to support your teams. This includes the ability to adapt to different conditions, aid with decision-making, and streamline workflows.

    Today, IA solutions are used in nearly every industry to automate a wide array of end-to-end processes. This can range from procurement to security to ecommerce.

    According to a recent Gartner survey, the use cases and prevalence of IA will only continue to grow: 85% of infrastructure and operations (I&O) leaders without full automation are expected to increase their automation by 2025.

    If you would like to learn more about how to identify the best use case for IA, check out our blog to learn how Process Mining can help you map out your end-to-end processes and find promising opportunities for automation.

    The benefits of working smarter with IA
    IA has proven itself to be a crucial asset for many businesses looking to optimize their decision-making and scale their operations. Here is a list of some of the major benefits of IA that can be achieved by augmenting your teams with IA tools.

    1. Increased employee satisfaction and work performance

    2. Enhance customer experiences

    3. Minimize the reliance on offshoring

    4. Improve the efficiency and accuracy of data analytics

    5. Reinforce cybersecurity

    Why continuous optimization is vital to Intelligent Automation
    Intelligent automation is not a solution that can be implemented and then forgotten about. Fine-tuning the automation process is necessary for IA to keep up with changing customer demands, market conditions, and data sources.

    Reply
  30. Tomi Engdahl says:

    The EU is working flat out to finish the planned AI Act soon. It is a huge and complicated legislative project, made harder by the difficulty of defining AI and by the rapid pace of technological change. The AI Act is nevertheless necessary and will offer people protection.

    THE AI ACT IS BALLOONING INTO A MASS OF RULES THAT WILL BE HARD TO APPLY
    https://www.helsinki.fi/fi/uutiset/demokratia/tekoalyasetus-paisumassa-vaikeasti-sovellettavaksi-moykyksi?utm_source=facebook&utm_medium=social_owned&fbclid=IwAR1W9V4n0nRBerOr1ZEkJNr_AcUQYkyYOFUSCkECTW73o7JnPynHncBXqoQ_aem_AUQ8bORvKZenMEUzAXvzkCtkYoDrvLrCfFX1WOui5T0dUk7jz4jzh4v22AWlg3aT3a9wzua7VdSbUF2-PlViyryJ

    The EU’s AI Act, now in preparation, is a huge and exceptionally complex legislative project. Despite its shortcomings, the regulation is necessary and will offer protection to ordinary people.

    The EU is working hard to finish the planned AI Act in the near future. But with AI systems developing at a dizzying pace, the legislative package threatens to grow into a body of rules that is difficult to apply.

    “It is starting to look as if the regulation will be very broad and will contain an enormous number of articles. I consider it very unlikely that it will end up a clear, coherent whole.”

    So says Susanna Lindroos-Hovinheimo, professor of public law, who has followed the regulation’s progress closely. She has studied it as part of the Generation AI project, among others, which researches the regulation of AI particularly from the perspective of children’s rights.

    Someone unfamiliar with the field might think that AI is currently not regulated at all, since the AI Act is still being drafted. That is not the case: numerous existing laws, starting with the constitution, set the boundary conditions for how AI may operate. There is, however, no single body of legislation covering all of the technology. The need for one is obvious, and there appears to be a strong political will in the EU to get the regulation passed.

    “Digitalisation advanced for a long time without the development being questioned much. We have decades of regulatory debt, and it is now being caught up on both in the EU and at the national level,” says Riikka Koulu, associate professor of the societal and legal implications of AI and director of the Legal Tech Lab.

    Reply
  31. Tomi Engdahl says:

    Defining AI is difficult
    One of the big stumbling blocks with the AI Act has been how AI should be defined in a legal sense. This is one of the decisive questions on which the negotiations have not yet reached a common understanding. Tellingly, even data scientists, engineers, and other professionals who work with AI have not found an answer acceptable to everyone as to what counts as AI and what does not.

    The EU’s aim, however, has been to keep the definition of AI broad, and the regulation contains numerous general-level provisions that would apply to all AI. At the same time, the regulation is in many respects highly detailed, addressing for example the technical mechanisms of individual systems. In this sense, Lindroos-Hovinheimo finds the regulation badly unbalanced, because it operates on too many different levels at once.

    https://www.helsinki.fi/fi/uutiset/demokratia/tekoalyasetus-paisumassa-vaikeasti-sovellettavaksi-moykyksi?utm_source=facebook&utm_medium=social_owned&fbclid=IwAR1W9V4n0nRBerOr1ZEkJNr_AcUQYkyYOFUSCkECTW73o7JnPynHncBXqoQ_aem_AUQ8bORvKZenMEUzAXvzkCtkYoDrvLrCfFX1WOui5T0dUk7jz4jzh4v22AWlg3aT3a9wzua7VdSbUF2-PlViyryJ

    Reply
  32. Tomi Engdahl says:

    You can now listen to thousands of audiobooks for free on Spotify – with one little “but”: the narrator is an AI
    In addition to Spotify, the books can also be heard on Apple Podcasts and Google Podcasts, among other services.
    https://www.tivi.fi/uutiset/nyt-voit-kuunnella-tuhansia-eri-aanikirjoja-ilmaiseksi-spotifysta-yksi-pikku-mutta-vain-niiden-lukijana-on-tekoaly/97cefced-20ef-4229-b6e3-65a45868ff19

    Project Gutenberg, which focuses on preserving and distributing e-books, has announced that it has brought thousands of audiobooks made from classic books to several streaming services.

    The works in Project Gutenberg’s collection are mostly freely redistributable.

    The audiobooks were created in collaboration with Microsoft and MIT. The catalogue of works converted to audio is huge, but no small army of narrators was hired to read the books. Instead, the project used an AI narrator.

    Project Gutenberg puts 5,000 audiobooks online for free using synthetic speech
    https://techcrunch.com/2023/09/19/project-gutenberg-puts-5000-audiobooks-online-for-free-using-synthetic-speech/

    Open book repository Project Gutenberg has turned thousands of its titles into audiobooks practically overnight using synthetic speech, available now for download or streaming on multiple services. The selection is a bit idiosyncratic (as indeed the archive’s is generally) but it is nevertheless a powerful demonstration of accessibility in literature.

    Making an audiobook via traditional narration naturally takes quite a long time even in the best case, and of course the reader must be paid for their time and there is the matter of editing and publishing. For many titles it doesn’t make sense financially to produce an audiobook, meaning many older and more obscure titles remain difficult for people who prefer that format to consume.

    “Each one of the e-books in Project Gutenberg is in its own idiosyncratic HTML format with lots of text you wouldn’t want to hear read aloud, like tables, contents, indices, page numbers, etc. The hardest part of the project was extracting the good text to read aloud,” explained project co-lead Mark Hamilton, affiliated with Microsoft and MIT.

    To solve this, they designed a system that worked through the archive and identified book files that were formatted similarly, then figured out which of those clusters were the best suited to being automatically read out.

    “We picked the books for the first batch based on what we felt the automated parser could do reasonably well,”

    As for the narration itself, the team has put together multiple machine learning and synthetic speech tools that have improved and become more accessible over the last few years. A few years ago it was obvious that automated audiobook production would soon arrive, and now it has — and at scale.

    The first 5,000 or so books are available to listen to for free on Spotify, Apple Podcasts, and the Internet Archive, and the code used to create them is being documented at GitHub.
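
    The “extracting the good text” step is the interesting engineering problem here. As a rough illustration of the idea (not the project’s actual pipeline, which is documented on GitHub), one could strip the elements Hamilton mentions from a Gutenberg HTML file before synthesis. A minimal Python sketch, assuming BeautifulSoup is installed; the class names are hypothetical:

    from bs4 import BeautifulSoup

    def extract_readable_text(html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        # Drop structures that make poor listening: tables, scripts, styles,
        # and footnote markers.
        for tag in soup(["table", "sup", "style", "script"]):
            tag.decompose()
        # Class names below are illustrative, not Gutenberg's real markup.
        for block in soup.find_all(class_=["toc", "index", "pagenum"]):
            block.decompose()
        # Keep paragraph text only, joined into a clean narration script.
        paragraphs = [p.get_text(" ", strip=True) for p in soup.find_all("p")]
        return "\n\n".join(p for p in paragraphs if p)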

    Reply
  33. Tomi Engdahl says:

    “Sampling led to hip-hop… AI music has the potential to do something similar”, says Holly Herndon
    AI tech will be integrated into DAWs within ten years, says Herndon, and it could have the same impact that sampling had on hip-hop.
    https://musictech.com/news/gear/holly-herndon-ai-in-music/?fbclid=IwAR00xvIloKoLCpg5bOt3ZfsDFxGtZxsw28Xr97SHIxfLsTVf3qegKyHtXco

    Electronic producer, singer and AI advocate Holly Herndon has drawn a comparison between AI music and sampling, saying that AI music could impact music in the same way sampling did hip-hop.

    “Sampling old records to create something new led to the formation of genres like hip hop and innovative new forms of artistic expression,” she says. “AI music has the potential to do something very similar.”

    She goes on to discuss the inevitability of AI technology in music, saying “AI music is no longer this sci-fi concept, it’s our new reality.”

    It’s “here to stay”, she continues, referring to the fact that AI is set to be integrated into every major DAW in the next decade. Herndon sees the potential for music to become stranger and more innovative with AI, while producing formulaic music will become easier — something Bombay Bicycle Club’s Jamie McColl also noted recently.

    Reply
  34. Tomi Engdahl says:

    In July 2021, Holly Herndon created Holly+, a browser-based AI deepfake that transforms any audio file into her voice. In 2022, she created an eerie cover of Dolly Parton’s Jolene using this tool, alongside an animated music video.

    https://musictech.com/news/holly-herndon-has-created-holly-an-ai-that-lets-you-transform-any-song-into-her-voice/

    https://musictech.com/news/music/holly-herndon-ai-powered-cover-dolly-partons-jolene-holly-plus/

    Reply
  35. Tomi Engdahl says:

    David Pierce / The Verge:
    OpenAI updates ChatGPT Plus and ChatGPT Enterprise to let users prompt the tool using voice commands or by uploading an image, coming to all users “soon after” — Most of OpenAI’s changes to ChatGPT involve what the AI-powered bot can do: questions it can answer, information it can access, improved underlying models.

    You can now prompt ChatGPT with pictures and voice commands
    https://www.theverge.com/2023/9/25/23886699/chatgpt-pictures-voice-commands-ai-chatbot-openai

    / The super-popular AI chatbot has always just been a text box. Now it’s learning to understand your questions in new ways.

    Most of OpenAI’s changes to ChatGPT involve what the AI-powered bot can do: questions it can answer, information it can access, and improved underlying models. This time, though, it’s tweaking the way you use ChatGPT itself. The company is rolling out a new version of the service that allows you to prompt the AI bot not just by typing sentences into a text box but by either speaking aloud or just uploading a picture. The new features are rolling out to those who pay for ChatGPT in the next two weeks, and everyone else will get it “soon after,” according to OpenAI.

    The voice chat part is pretty familiar: you tap a button and speak your question, ChatGPT converts it to text and feeds it to the large language model, gets an answer back, converts that back to speech, and speaks the answer out loud. It should feel just like talking to Alexa or Google Assistant, only — OpenAI hopes — the answers will be better thanks to the improved underlying tech. It appears most virtual assistants are being rebuilt to rely on LLMs — OpenAI is just ahead of the game.

    OpenAI’s excellent Whisper model does a lot of the speech-to-text work, and the company is rolling out a new text-to-speech model it says can generate “human-like audio from just text and a few seconds of sample speech.” You’ll be able to choose ChatGPT’s voice from five options, but OpenAI seems to think the model has vastly more potential than that. OpenAI is working with Spotify to translate podcasts into other languages, for instance, all while retaining the sound of the podcaster’s voice. There are lots of interesting uses for synthetic voices, and OpenAI could be a big part of that industry.
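
    That loop is simple enough to sketch end to end. Below is a minimal illustration in Python, assuming OpenAI’s Python SDK; the “whisper-1” transcription endpoint is the one the article names, while the text-to-speech model and voice names should be read as placeholders rather than confirmed details:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def voice_turn(audio_path: str) -> bytes:
        # 1. Speech-to-text: Whisper converts the spoken question to text.
        with open(audio_path, "rb") as f:
            question = client.audio.transcriptions.create(
                model="whisper-1", file=f
            ).text
        # 2. The text goes to the large language model like any typed prompt.
        answer = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content
        # 3. Text-to-speech turns the answer back into audio to play aloud.
        speech = client.audio.speech.create(
            model="tts-1", voice="alloy", input=answer  # placeholder names
        )
        return speech.content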

    I used OpenAI’s new tech to transcribe audio right on my laptop
    https://www.theverge.com/2022/9/23/23367296/openai-whisper-transcription-speech-recognition-open-source

    / The company behind DALL-E and GPT has made its automatic speech recognition system, called Whisper, and is letting developers and researchers use it.

    OpenAI, the company behind image-generation and meme-spawning program DALL-E and the powerful text autocomplete engine GPT-3, has launched a new, open-source neural network meant to transcribe audio into written text (via TechCrunch). It’s called Whisper, and the company says it “approaches human level robustness and accuracy on English speech recognition” and that it can also automatically recognize, transcribe, and translate other languages like Spanish, Italian, and Japanese.

    As someone who’s constantly recording and transcribing interviews, I was immediately hyped about this news — I thought I’d be able to write my own app to securely transcribe audio right from my computer. While cloud-based services like Otter.ai and Trint work for most things and are relatively secure, there are just some interviews where I, or my sources, would feel more comfortable if the audio file stayed off the internet.

    Using it turned out to be even easier than I’d imagined; I already have Python and various developer tools set up on my computer, so installing Whisper was as easy as running a single Terminal command. Within 15 minutes, I was able to use Whisper to transcribe a test audio clip that I’d recorded. For someone relatively tech-savvy who didn’t already have Python, FFmpeg, Xcode, and Homebrew set up, it’d probably take closer to an hour or two. There is already someone working on making the process much simpler and more user-friendly, though, which we’ll talk about in just a second.
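
    For reference, the local workflow really is that short. A minimal sketch, assuming the open-source openai-whisper package and FFmpeg are installed; the file name is a placeholder:

    import whisper

    # Load one of the open-source checkpoints; "small" and "medium" trade
    # speed for accuracy.
    model = whisper.load_model("base")

    # FFmpeg handles most audio formats behind the scenes.
    result = model.transcribe("interview.mp3")
    print(result["text"])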

    While OpenAI definitely saw this use case as a possibility, it’s pretty clear the company is mainly targeting researchers and developers with this release. In the blog post announcing Whisper, the team said its code could “serve as a foundation for building useful applications and for further research on robust speech processing” and that it hopes “Whisper’s high accuracy and ease of use will allow developers to add voice interfaces to a much wider set of applications.” This approach is still notable, however — the company has limited access to its most popular machine-learning projects like DALL-E or GPT-3, citing a desire to “learn more about real-world use and continue to iterate on our safety systems.”

    Reply
  36. Tomi Engdahl says:

    Amrita Khalid / The Verge:
    Spotify partners with OpenAI to debut an AI translation feature that reproduces podcasts in other languages using a synthesized version of the podcaster’s voice

    Spotify is going to clone podcasters’ voices — and translate them to other languages
    / A partnership with OpenAI will let podcasters replicate their voices to automatically create foreign-language versions of their shows.
    https://www.theverge.com/2023/9/25/23888009/spotify-podcast-translation-voice-replication-open-ai

    What if podcasters could flip a switch and instantly speak another language? That’s the premise behind Spotify’s new AI-powered voice translation feature, which reproduces podcasts in other languages using the podcaster’s own voice.

    The company has partnered with a handful of podcasters to translate their English-language episodes into Spanish with its new tool, and it has plans to roll out French and German translations in the coming weeks. The initial batch of episodes will come from some big names, including Dax Shepard, Monica Padman, Lex Fridman, Bill Simmons, and Steven Bartlett. Spotify plans to expand the group to include The Rewatchables from The Ringer and its upcoming show from Trevor Noah.

    The backbone of the translation feature is OpenAI’s voice transcription tool Whisper, which can both transcribe English speech and translate other languages into English. But Spotify’s tool goes beyond speech-to-text translation — the feature will translate a podcast into a different language and reproduce it in a synthesized version of the podcasters’ own voice.
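
    The open model’s built-in “translate” task only goes in that one direction, into English. A minimal sketch of that capability, assuming the open-source openai-whisper package; the cloned-voice, English-to-other-language step Spotify describes lives in proprietary tooling on top and is not shown, and the file and language values are placeholders:

    import whisper

    model = whisper.load_model("small")

    # Transcribe the episode in its original language...
    spanish_text = model.transcribe("episodio.mp3", language="es")["text"]

    # ...or translate the same audio directly into English text.
    english_text = model.transcribe("episodio.mp3", task="translate")["text"]
    print(english_text)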

    “By matching the creator’s own voice, Voice Translation gives listeners around the world the power to discover and be inspired by new podcasters in a more authentic way than ever before,” Ziad Sultan, Spotify’s vice president of personalization, said in a statement.

    OpenAI is likely behind the voice replication part of this new feature, too. The AI company is making a few announcements this morning, including the launch of a tool that can create “human-like audio from just text and a few seconds of sample speech.” OpenAI says it’s intentionally limiting how widely this tool will be available due to concerns around safety and privacy.

    That’s probably part of the reason why Spotify says the translation tech is only being tested with a “select group” of podcasters for now.

    Reply
  37. Tomi Engdahl says:

    Joanna Stern / Wall Street Journal:
    OpenAI updates ChatGPT for iOS and Android to let the chatbot speak in five different voices in a conversational tone, rolling out to Plus and Enterprise users

    You Can Now Talk With ChatGPT and It Sounds Like a Human (Pretty Much)
    Oh, and it can ‘see’ you now
    https://www.wsj.com/tech/personal-tech/chatgpt-can-now-chat-aloud-with-you-and-yes-it-sounds-pretty-much-human-3be39840?mod=followamazon

    You’ll have two reactions to hearing my conversation with the now-vocal ChatGPT:

    1) Holy crap! This is the future of communicating with computers that sci-fi writers promised us.

    2) I’m building an underground bunker and stockpiling toilet paper and granola bars.

    Yes, OpenAI’s popular chatbot is speaking up—literally. The company on Monday announced an update to its iOS and Android apps that will allow the artificially intelligent bot to talk out loud in five different voices. I’ve been doing a lot of talking with ChatGPT over the past few days, and testing another new tool that lets the bot respond to images you show it.

    So what’s it like?

    Think Siri or Alexa except…not. The natural voice, the conversational tone and the eloquent answers are almost indistinguishable from a human at times. Remember “Her”? The movie where Joaquin Phoenix falls in love with an AI operating system that’s really a faceless Scarlett Johansson? That’s the vibe I’m talking about.

    “It’s not just that typing is tedious,” Joanne Jang, a product lead at OpenAI, told me in an interview. “You can now have two-way conversations.”

    The new photo-comprehension tool also makes the bot more interactive. You can snap a shot and ask ChatGPT questions about it. Spoiler: It’s terrible at Tic-Tac-Toe. The image and voice features will be available over the next few weeks for those who subscribe to ChatGPT Plus for $20 a month.

    In essence, OpenAI is giving its chatbot a mouth and eyes. I’ve been running both features through tests—a best-friend chat, plumbing repairs, games. It’s all very cool and…creepy.

    Reply
  38. Tomi Engdahl says:

    Emilia David / The Verge:
    Getty partners with Nvidia to launch Generative AI by Getty Images, a tool trained on Getty’s licensed photos that lets users create legally protected images

    Getty made an AI generator that only trained on its licensed images
    / Generative AI by Getty Images harnesses Getty’s vast licensed library of images.
    https://www.theverge.com/2023/9/25/23884679/getty-ai-generative-image-platform-launch

    Getty Images is partnering with Nvidia to launch Generative AI by Getty Images, a new tool that lets people create images using Getty’s library of licensed photos.

    Generative AI by Getty Images (yes, it’s an unwieldy name) is trained only on the vast Getty Images library, including premium content, giving users full copyright indemnification. This means anyone using the tool and publishing the image it created commercially will be legally protected, promises Getty. Getty worked with Nvidia to use its Edify model, available on Nvidia’s generative AI model library Picasso.

    The company said any photos created with the tool will not be included in the Getty Images and iStock content libraries. Getty will pay creators if it uses their AI-generated image to train the current and future versions of the model. It will share revenues generated from the tool, “allocating both a pro rata share in respect of every file and a share based on traditional licensing revenue.”

    “We’ve listened to customers about the swift growth of generative AI — and have heard both excitement and hesitation — and tried to be intentional around how we developed our own tool,” says Getty Images chief product officer Grant Farhall in a statement.

    The Getty tool does limit what types of images users can generate.

    Customers can access Generative AI by Getty Images through the Getty Images website. The company said the tool will be priced separately from a standard Getty Images subscription, and pricing is based on prompt volume. It would not specify prices, however.

    Getty says users will get perpetual, worldwide, and unlimited rights to the image they created. (The technical copyright status of AI-generated images, that said, is still fuzzy.) Getty said it is similar to when customers license content from its library, where the company owns the file but licenses it out for use. They can either write their own prompt or use the prompt builder to guide them. Users may also integrate the tool into their own workflows through an API. True to form, Getty watermarks pictures created through the tool, identifying the photo as generated with AI.

    It’s no surprise Getty is getting into the AI image game; after all, it has one of the largest libraries of images out there. But the company has battled other text-to-image generative AI developers, suing Stability AI for copyright infringement, alleging its image generator Stable Diffusion used Getty photos without permission.

    By building its own generative AI image platform, Getty can undercut other companies that want to use its image libraries to train models. Getty is far from the only firm setting up AI image platforms with its licensed data. Adobe released its Firefly model, trained on its stable of licensed images, across its Creative Suite and Creative Cloud service.

    Reply
  39. Tomi Engdahl says:

    AI-generated art cannot be copyrighted, rules a US federal judge
    / DC District Court Judge Beryl A. Howell says human beings are an ‘essential part of a valid copyright claim.’
    https://www.theverge.com/2023/8/19/23838458/ai-generated-art-no-copyright-district-court

    Reply
  40. Tomi Engdahl says:

    WhisperFrame Depicts The Art Of Conversation
    https://hackaday.com/2023/09/22/whisperframe-depicts-the-art-of-conversation/

    At this point, you gotta figure that you’re at least being listened to almost everywhere you go, whether it be a home assistant or your very own phone. So why not roll with the punches and turn lemons into something like a still life of lemons that’s a bit wonky? What we mean is, why not take our conversations and use AI to turn them into art? That’s the idea behind this next-generation digital photo frame created by [TheMorehavoc].
    Essentially, it uses a Raspberry Pi and a ReSpeaker four-mic array to listen to conversations in the room. It listens and records 15-20 seconds of audio, and sends that to OpenAI’s Whisper API to generate a transcript.
    This repeats until five minutes of audio is collected, then the entire transcript is sent through GPT-4 to extract an image prompt from a single topic in the conversation. Then, that prompt is shipped off to Stable Diffusion to get an image to be displayed on the screen. As you can imagine, the images generated run the gamut from really weird to really awesome.

    The WhisperFrame Generates Art From Conversations
    https://morehavoc.com/2023/09/17/whisperframe.html
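
    For readers who want the shape of the loop in code, here is a rough sketch, not the project’s actual implementation (that lives at the link above). It assumes OpenAI’s Python SDK; record_clip() is a hypothetical stand-in for the ReSpeaker capture step, and the Stable Diffusion call is left as a comment:

    import time

    from openai import OpenAI

    client = OpenAI()

    def record_clip(seconds: int) -> str:
        """Hypothetical placeholder: capture audio, return a file path."""
        raise NotImplementedError

    def listen_and_paint(minutes: int = 5) -> str:
        transcript = []
        deadline = time.time() + minutes * 60
        while time.time() < deadline:
            # Whisper turns each short clip into text.
            with open(record_clip(seconds=20), "rb") as f:
                transcript.append(
                    client.audio.transcriptions.create(
                        model="whisper-1", file=f
                    ).text
                )
        # GPT-4 distills one topic from the conversation into an image prompt.
        prompt = client.chat.completions.create(
            model="gpt-4",
            messages=[{
                "role": "user",
                "content": "Pick one topic from this conversation and write "
                           "a short image-generation prompt for it:\n"
                           + " ".join(transcript),
            }],
        ).choices[0].message.content
        # The prompt would then go to Stable Diffusion, and the image would
        # be shown on the frame's display.
        return prompt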

    Reply
  41. Tomi Engdahl says:

    Microsoft is going nuclear to power its AI ambitions / Microsoft is looking at next-generation nuclear reactors to power its data centers and AI, according to a new job listing for someone to lead the way.
    https://www.theverge.com/2023/9/26/23889956/microsoft-next-generation-nuclear-energy-smr-job-hiring

    Reply
  42. Tomi Engdahl says:

    Tech bros like OpenAI’s Sam Altman keep obsessing about replacing the ‘median human’ with AI
    https://www.businessinsider.com/sam-altman-thinks-agi-replaces-median-humans-2023-9?fbclid=IwAR0UsGi2U-eIZqCShqoCBNLZ4Wfo7cO4U7cCHZ80psFYbwKKMNLQl6MnC3I&r=US&IR=T

    OpenAI CEO Sam Altman and others often talk about the “median human.”
    Altman has said his vision for artificial general intelligence was equivalent to a median human.
    The terminology, used by many in the tech bubble, has been raising eyebrows.

    Tech bros aren’t always known for their sensitivity.

    In a recent profile in the New Yorker, OpenAI CEO Sam Altman compared his vision of AGI — artificial general intelligence — to a “median human.”

    He said: “For me, AGI…is the equivalent of a median human that you could hire as a co-worker.”

    It’s not the first time Altman has referred to a median human.

    In a 2022 podcast, Altman said this AI could “do anything that you’d be happy with a remote coworker doing just behind a computer, which includes learning how to go be a doctor, learning how to go be a very competent coder.”

    Altman’s company happens to be one of the current frontrunners for achieving AGI.

    Although a disputed term, AGI or artificial general intelligence has been defined as an AI model that surpasses average human intelligence or can achieve complex human capabilities like common sense and consciousness.

    The comparison with median human intellect isn’t new.

    “Comparing AI to even the idea of median or average humans is a bit offensive,” Brent Mittelstadt, director of research at the Oxford Internet Institute, told Insider. “I see the comparison as being concerning and see the terminology as being concerning too.”

    “It’s interesting to use the term median human — that’s importantly different from average human,” Henry Shevlin, an AI ethicist and professor at the University of Cambridge, told Insider. “It makes the quote sound more icky.”

    “There is an argument for thinking that Sam Altman could be more sensitive around this stuff,” he added.

    “One thing that current AI architectures and models have shown is that they can achieve basically typical human-level performance. That’s not problematic in itself,”

    How tech bros are defining this idea of median human intellect is open to question.

    “I think it’s an intentionally vague concept as compared to having a very specific grounded meaning.”

    Traditional measurements for comparing AI and human intelligence have tended to focus on capabilities rather than general intellect.

    “A lot of the classic benchmarks have involved things like the ability to play chess, the ability to produce good code, or the ability to pass as a human,” Shevlin said.

    But comparing AI with human intelligence at all can be ethically murky and potentially misleading, according to Mittelstadt.

    Reply
  43. Tomi Engdahl says:

    How to Use FreedomGPT to Install Free ChatGPT Alternatives Locally on Your Computer
    A different kind of conversational AI you can use offline.
    https://gizmodo.com/how-to-use-freedomgpt-for-free-chatgpt-alternatives-1850870965

    Reply
  44. Tomi Engdahl says:

    ALGORITHMS DELIVER EFFICIENT GOVERNANCE AND RECOMMENDATIONS – BUT AT WHAT PRICE?
    Recommender systems have become increasingly powerful instruments of societal power, yet from the users’ perspective they often operate beyond the requirements of transparency, consistency, and accountability, a recent doctoral dissertation finds.
    https://www.helsinki.fi/fi/uutiset/hyva-yhteiskunta/algoritmit-tarjoavat-tehokasta-hallintaa-ja-suosittelua-mutta-milla-hinnalla

    Reply
  45. Tomi Engdahl says:

    ChatGPT can now search the web in real time / OpenAI promises up-to-date information with direct links to sources for subscribers only, but others will get the feature.
    https://www.theverge.com/2023/9/27/23892781/openai-chatgpt-live-web-results-browse-with-bing

    Reply
