Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”
Tomi Engdahl says:
David Guetta Faked Eminem’s Vocals Using AI for New Song
https://futurism.com/david-guetta-faked-eminem-vocals
“Eminem bro, there’s something that I made as a joke and it works so good — I could not believe it!”
French DJ and producer David Guetta recently treated a massive crowd of ravers to a surprise new song, featuring rapper Marshall “Eminem” Mathers.
Just one thing: Eminem, the living human, didn’t have anything to do with the track.
In a video posted to Twitter last week, Guetta excitedly explained that he used unspecified generative AI tools to craft a phony Eminem feature from scratch — lyrics, voice, and all.
https://twitter.com/davidguetta/status/1621605376733872129
Tomi Engdahl says:
“Let me introduce you to… Emin-AI-em,”
Tomi Engdahl says:
Oh No, ChatGPT AI Has Been Jailbroken To Be More Reckless
Step aside ChatGPT, DAN doesn’t give a crap about your content moderation policies
https://kotaku.com/chatgpt-ai-openai-dan-censorship-chatbot-reddit-1850088408
Tomi Engdahl says:
Researchers Discover a More Flexible Approach to Machine Learning
By Steve Nadis, February 7, 2023
https://www.quantamagazine.org/researchers-discover-a-more-flexible-approach-to-machine-learning-20230207/
“Liquid” neural nets, based on a worm’s nervous system, can transform their underlying algorithms on the fly, giving them unprecedented speed and adaptability.
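To make that a bit more concrete: in the liquid time-constant formulation this research builds on, each neuron’s effective time constant depends on the current input, so the dynamics shift as the data shifts. The Python snippet below is only a simplified, single-cell Euler-integration sketch of that idea (my own illustration, not the researchers’ code; all weights and sizes are made up).

import numpy as np

def ltc_step(x, inputs, W, A, tau, dt=0.01):
    # One Euler step of a liquid time-constant cell:
    #   dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A
    # The input-dependent gate f changes the effective time constant,
    # which is what lets the dynamics adapt "on the fly".
    f = np.tanh(W @ np.concatenate([x, inputs]))
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Toy run: 3 neurons driven by a 2-dimensional input signal.
rng = np.random.default_rng(1)
x = np.zeros(3)
W = rng.normal(size=(3, 5))      # weights over the concatenated [state, input]
A, tau = np.ones(3), 0.5
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    x = ltc_step(x, u, W, A, tau)
print(x)

The point of the sketch is only that the multiplier on x changes with the input, unlike a fixed-weight layer.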
Tomi Engdahl says:
Seeed Studio Teases the Ultra-Compact Edge AI TinyML XIAO ESP32S3 Sense Smart Camera Dev Board
https://www.hackster.io/news/seeed-studio-teases-the-ultra-compact-edge-ai-tinyml-xiao-esp32s3-sense-smart-camera-dev-board-f31af68d5094
Built using a two-board layout, this upcoming XIAO model includes camera, microphone, and a dual-core processor with vector instructions.
Tomi Engdahl says:
Google Launches New AI Language Model “Bard” To The Public
The AI is a scaled-down version of the bot that convinced a Google engineer it was sentient.
https://www.iflscience.com/google-launches-new-ai-language-model-bard-to-the-public-67425
Tomi Engdahl says:
ChatGPT-like Models a Threat to VLSI Design: Ceremorphic CEO
While we will not witness real intelligence in at least the next five years, enabling reliable hardware will be a crucial aspect of achieving it.
https://analyticsindiamag.com/chatgpt-like-models-are-a-threat-to-vlsi-design/
Tomi Engdahl says:
ChatGPT’s ‘jailbreak’ tries to make the A.I. break its own rules, or die
https://www.cnbc.com/2023/02/06/chatgpt-jailbreak-forces-it-to-break-its-own-rules.html
Tomi Engdahl says:
MPY-Jama Is a Cross-Platform IDE for Writing MicroPython Programs on the ESP32
An IDE, REPL, firmware tools, and a suite of configuration helpers.
https://www.hackster.io/news/mpy-jama-is-a-cross-platform-ide-for-writing-micropython-programs-on-the-esp32-207931ced43b
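For context, this is the kind of minimal MicroPython program one might paste into such an IDE’s REPL on an ESP32 (a generic blink example using the standard machine module; pin 2 is only an assumption, since the LED pin varies by board).

from machine import Pin
import time

led = Pin(2, Pin.OUT)            # onboard LED on many ESP32 dev boards; adjust as needed

while True:
    led.value(not led.value())   # toggle the LED
    time.sleep(0.5)              # every half second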
Tomi Engdahl says:
https://www.analyticsinsight.net/top-10-applications-for-large-language-models-in-2023/
Tomi Engdahl says:
How IBM’s new supercomputer is making AI foundation models more enterprise-budget friendly
https://venturebeat.com/ai/how-ibms-new-supercomputer-is-making-ai-foundation-models-more-enterprise-budget-friendly/
Tomi Engdahl says:
ChatGPT productivity hacks: Five ways to use chatbots to make your life easier
You can harness ChatGPT’s advanced capabilities to help out with simple, everyday tasks. Here’s how.
https://www.zdnet.com/article/chatgpt-productivity-hacks-five-ways-to-use-chatbots-to-make-your-life-easier/
ChatGPT has made headlines because of its advanced coding, writing, and chatting capabilities. The chatbot has proven itself to have a wide range of skills — from fixing bugs to passing an MBA exam. Even if you aren’t a coder looking for assistance, that doesn’t mean you have to miss out on the fun.
Tomi Engdahl says:
7 ChatGPT extensions on Google Chrome to make the most of the Internet
From summarising videos to offering shareable links to ChatGPT prompts, these Google Chrome extensions will make life easier.
https://indianexpress.com/article/technology/tech-news-technology/7-best-chatgpt-extensions-on-google-chrome-8438760/
The introduction of OpenAI’s ChatGPT has opened Pandora’s box of possibilities. The AI-powered chatbot, with its human-like responses, has prompted many developers to work on new use cases with each passing day. The chatbot has pervaded almost every aspect of communication on the Internet.
From smartphones to web browsers, ChatGPT is increasingly getting integrated into different platforms. Google Chrome, one of the most widely used web browsers, seems to have a plethora of extensions that let users enjoy ChatGPT at their convenience. These extensions are proving to be handy in various ways.
Tomi Engdahl says:
ChatGPT Will Gladly Spit Out Defamation, as Long as You Ask for It in a Foreign Language
https://futurism.com/the-byte/chatgpt-defamation-foreign-language
Tomi Engdahl says:
ChatGPT-based Bing search engine revealed
https://fin.afterdawn.com/uutiset/2023/02/07/bing-chatgpt-hakukone
Tomi Engdahl says:
How to Learn Machine Learning from GitHub Repositories in 2023?
https://www.analyticsinsight.net/how-to-learn-machine-learning-from-github-repositories-in-2023/
Tomi Engdahl says:
Microsoft is reportedly preparing to showcase AI-enhanced versions of Word, PowerPoint and Outlook as soon as this March.
Microsoft could show off AI-powered versions of Word and Outlook this March
https://www.engadget.com/microsoft-could-show-off-ai-powered-versions-of-word-and-outlook-this-march-153514888.html
The company reportedly plans to integrate its Prometheus model into its productivity apps.
Microsoft reportedly plans to introduce upgraded Office apps with AI features in the coming weeks. According to The Verge, the tech giant is preparing to show what its Prometheus AI technology and OpenAI’s language AI can do for Word, PowerPoint, Outlook and other Microsoft 365 apps as soon as this March. Microsoft recently launched a reimagined Bing that can generate conversational responses to search queries, thanks to the Prometheus model, which was built with the help of OpenAI.
Additionally, the company introduced a new Edge with a built-in “AI copilot” that’s also powered by Prometheus. A button on the top-right corner gives users quick access to Bing’s new chat feature, and as we mentioned in our hands-on, it’s like having ChatGPT right in your browser. The Verge says Microsoft wants its AI technology to be able to generate graphs and graphics for use in PowerPoint or Excel.
Tomi Engdahl says:
Emma Roth / The Verge:
Opera says it is testing a new ChatGPT-powered “shorten” feature in its browser that provides bulleted summaries of articles or webpages in the sidebar — The feature, called “shorten,” is part of the company’s broader plans to integrate AI tools into its browser, similar to what Microsoft’s doing with Edge.
Opera’s building ChatGPT into its sidebar
https://www.theverge.com/2023/2/11/23595784/opera-browser-chatgpt-sidebar-ai
The company’s testing a new AI-powered ‘shorten’ feature that provides bulleted summaries of the article or webpage you’re reading.
Opera’s adding a ChatGPT-powered tool to its sidebar that generates brief summaries of webpages and articles. The feature, called “shorten,” is part of the company’s broader plans to integrate AI tools into its browser, similar to what Microsoft’s doing with Edge.
As shown in a demo included in Opera’s blog post, you can activate the feature by selecting the “shorten” button to the right of the address bar. From there, a sidebar with ChatGPT will pop out from the left, which will then generate a neat, bulleted summary of the article or webpage you’re looking at.
https://blogs.opera.com/news/2023/02/opera-aigc-integration/
Webpage content summary using WebGPT in the Opera browser
https://www.youtube.com/watch?v=RsLRIua6kT0
Tomi Engdahl says:
Steve Nadis / Quanta Magazine:
A look at “liquid” neural nets, which change their underlying algorithms based on observed inputs, making them more flexible than standard ML neural networks
Researchers Discover a More Flexible Approach to Machine Learning
https://www.quantamagazine.org/researchers-discover-a-more-flexible-approach-to-machine-learning-20230207/
“Liquid” neural nets, based on a worm’s nervous system, can transform their underlying algorithms on the fly, giving them unprecedented speed and adaptability.
Tomi Engdahl says:
Christopher Mims / Wall Street Journal:
Creating trustworthy generative AI requires resources probably on the scale of what companies like Microsoft and Google possess, making them even more powerful
https://www.wsj.com/articles/the-ai-boom-that-could-make-google-and-microsoft-even-more-powerful-9c5dd2a6?mod=djemalertNEWS
Tomi Engdahl says:
Jennifer Elias / CNBC:
Google staff criticize company leadership over the Bard announcement, calling the unveil “botched”, “myopic”, and “un-Googley” on the internal forum Memegen
Google employees criticize CEO Sundar Pichai for ‘rushed, botched’ announcement of GPT competitor Bard
https://www.cnbc.com/2023/02/10/google-employees-slam-ceo-sundar-pichai-for-rushed-bard-announcement.html
Google employees took to Memegen this week, filling the message repository with criticisms of company leadership over the Bard announcement.
Memes described the effort as “rushed, botched” and “comically short sighted.”
Alphabet shares dropped more than 9% this week amid Google’s attempt to compete with Microsoft’s ChatGPT integration.
Tomi Engdahl says:
Turns out those viral graphs about GPT-4 were completely made up.
OpenAI CEO Says Unfortunately, People Will Be Disappointed With GPT-4
https://futurism.com/the-byte/openai-ceo-people-will-be-disappointed-gpt4
OpenAI’s AI chatbot ChatGPT, which is based on the startup’s latest version of its GPT-3 language model, has taken the internet by storm ever since being made available to the public in November, thanks to its uncanny ability to come up with anything from entire college essays to malware code and even job applications from a simple prompt.
But the company’s leader is warning that OpenAI’s long-rumored successor, GPT-4, could end up being a huge letdown, given the sheer volume of attention and hype the company has been getting lately.
OpenAI CEO Sam Altman attempted to downplay expectations this week, telling StrictlyVC in an interview that “people are begging to be disappointed and they will be” in the company’s upcoming language model.
“The hype is just like…” Altman told StrictlyVC, trailing off. “We don’t have an actual [artificial general intelligence] and that’s sort of what’s expected of us,” referring to the concept of an AI that’s capable of matching the intellect of a human being.
Altman also refused to reveal when GPT-4, if that’s even what it’ll be called, will be released.
“It’ll come out at some point, when we are confident we can do it safely and responsibly,” he told StrictlyVC.
“The GPT-4 rumor mill is a ridiculous thing,” the CEO said. “I don’t know where it all comes from.”
OpenAI CEO Sam Altman on GPT-4: ‘people are begging to be disappointed and they will be’ / In a recent interview, Altman discussed hype surrounding the as yet unannounced GPT-4 but refused to confirm if the model will even be released this year.
https://www.theverge.com/23560328/openai-gpt-4-rumor-release-date-sam-altman-interview
During an interview with StrictlyVC, Altman was asked if GPT-4 will come out in the first quarter or half of the year, as many expect. He responded by offering no certain timeframe. “It’ll come out at some point, when we are confident we can do it safely and responsibly,” he said.
GPT-3 came out in 2020, and an improved version, GPT 3.5, was used to create ChatGPT. The launch of GPT-4 is much anticipated, with more excitable members of the AI community and Silicon Valley world already declaring it to be a huge leap forward. Making wild predictions about the capabilities of GPT-4 has become something of a meme in these circles, particularly when it comes to guessing the model’s number of parameters (a metric that corresponds to an AI system’s complexity and, roughly, its capability — but not in a linear fashion).
When asked about one viral (and factually incorrect) chart that purportedly compares the number of parameters in GPT-3 (175 billion) to GPT-4 (100 trillion), Altman called it “complete bullshit.”
In the interview, Altman addressed a number of topics, including when OpenAI will build an AI model capable of generating video. (Meta and Google have already demoed research in this area.) “It will come. I wouldn’t want to make a confident prediction about when,” said Altman on generative video AI. “We’ll try to do it, other people will try to do it … It’s a legitimate research project. It could be pretty soon; it could take a while.”
On the money OpenAI is currently making: “Not much. We’re very early.”
On the need for AI with different viewpoints: “The world can say, ‘Okay here are the rules, here are the very broad absolute rules of a system.’ But within that, people should be allowed very different things that they want their AI to do. If you want the super never-offend, safe-for-work model, you should get that, and if you want an edgier one that is creative and exploratory but says some stuff you might not be comfortable with, or some people might not be comfortable with, you should get that. And I think there will be many systems in the world that will have different settings of the values they enforce.”
Tomi Engdahl says:
“This song sucks.”
Nick Cave Absolutely Furious Over AI That Wrote a Song in His Style
https://futurism.com/the-byte/nick-cave-chatgpt-song
A fan used a popular new artificial intelligence tool to generate a song in the style of Nick Cave — but when he sent it to the venerated songwriter, the response he got was as eloquent as it was enraged.
“I asked Chat GPT to write a song in the style of Nick Cave and this is what it produced,” a fan named Mark from New Zealand wrote to Cave, per a post on the artist’s blog. “What do you think?”
Cave admitted that it wasn’t the first time a fan had sent him computer-generated lyrics since the launch of OpenAI’s text generator last November, but so far, he hasn’t been impressed with the results.
“I understand that ChatGPT is in its infancy but perhaps that is the emerging horror of AI,” Cave continued with aplomb, “that it will forever be in its infancy, as it will always have further to go, and the direction is always forward, always faster.”
Algorithms like ChatGPT, Cave continued, “can never be rolled back, or slowed down, as [they move] us toward a utopian future, maybe, or our total destruction.”
“Who can possibly say which?” the artist wrote. “Judging by this song ‘in the style of Nick Cave’ though, it doesn’t look good, Mark.”
“The apocalypse is well on its way,” he continued. “This song sucks.”
Cave argued that songs like the ones he writes are born of suffering — a necessary aspect to create art in his style.
“As far as I know, algorithms don’t feel,” he wrote. “Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing.”
“Writing a good song is not mimicry, or replication, or pastiche, it is the opposite,” the singer wrote. “It is an act of self-murder that destroys all one has strived to produce in the past.”
Tomi Engdahl says:
ChatGPT Is Just an Automated Mansplaining Machine
Look, we’ve all met this guy before.
https://futurism.com/artificial-intelligence-automated-mansplaining-machine
ChatGPT, the OpenAI software currently being heralded as the future of everything, is the worst guy you know.
It’s the man at the bar trying to explain to a woman how period cramps feel, actually. It’s the (wrong) philosophy undergrad trying to explain to the (correct) physics PhD candidate why she’s wrong about a physics problem (she’s not) during discussion hours. It’s the guy who argues an incorrect point relentlessly and then, upon realizing that he’s wrong, tells you he doesn’t want to make a “big thing” of it and walks away (extra points if he also says you didn’t need to be “dramatic,” even though he probably corrected you to begin with.)
Basically, Silicon Valley’s new star is just an automated mansplaining machine. Often wrong, and yet always certain — and with a tendency to be condescending in the process. And if it gets confused, it’s never the problem. You are.
Tomi Engdahl says:
Like all AI products, ChatGPT has the potential to learn biases of the people training it and the potential to spit out some sexist, racist and otherwise offensive stuff.
OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails
https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results
Tomi Engdahl says:
The Dark Future of Music
https://www.youtube.com/watch?v=8tTqWDa6lxU
00:00 Introduction
00:28 AI
03:02 Touring
05:25 Big Numbers
06:41 Diversified Skill Set
07:59 The Future of Instruments
08:48 Nerina Pallot Interview
10:25 How to support the channel
Tomi Engdahl says:
Protections bypassed: this is what an AI-written scam message looks like https://www.is.fi/digitoday/tietoturva/art-2000009385597.html
Tomi Engdahl says:
There’s a Problem With That App That Detects GPT-Written Text: It’s Not Very Accurate
The media seized on a good story, but the numbers don’t add up
https://futurism.com/gptzero-accuracy
Princeton University computer science student Edward Tian has earned a storm of media attention — by CBS, NPR, NBC and many other outlets — for an app he built that attempts to detect whether a given text was produced by OpenAI’s ChatGPT text generator.
Tian says his app, GPTZero, is meant to “quickly and efficiently detect whether an essay is ChatGPT or human written,” in a response to a rise in AI plagiarism.
Tian is right that tools like ChatGPT pose a profound challenge for educators, who fear that students will soon start — or are already — using the app to generate essays for class. The media was quick to bite on that narrative.
“Teachers worried about students turning in essays written by a popular artificial intelligence chatbot now have a new tool of their own,” NPR gushed.
In spite of the storm of breathless coverage, though, our testing found that while GPTZero does accurately identify whether text was generated by ChatGPT more accurately than if it was just randomly guessing, it’s also often wrong. And when you’re talking about allegations of educational misconduct — plagiarism is grounds for a failing grade or even expulsion at many academic institutions — that’s not good enough.
The numbers speak for themselves. GPTZero correctly identified the ChatGPT text in seven out of eight attempts and the human writing six out of eight times.
Don’t get us wrong: those results are impressive. But they also indicate that if a teacher or professor tried using the tool to bust students doing coursework with ChatGPT, they would end up falsely accusing nearly 20 percent of them of academic misconduct.
Tian’s app gauges a given text’s “perplexity,” which he defines as the “randomness of a text to a model, or how well a language model likes a text,” as well as its “burstiness,” or how a text’s perplexity changes over time, to make its conclusion.
“Machine written text exhibits more uniform and constant perplexity over time, while human written text varies,” he said.
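GPTZero’s own scoring code isn’t public, but the two quantities Tian names can be approximated with an open model. The Python sketch below is only an illustration of the idea (it assumes Hugging Face’s GPT-2, which is my choice, not Tian’s implementation): perplexity is how surprised the model is by the text, and “burstiness” here is just the spread of per-sentence perplexity.

# pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    # Lower perplexity = the model finds the text more predictable.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean negative log-likelihood
    return torch.exp(loss).item()

def burstiness(text):
    # Spread of per-sentence perplexity; human writing tends to vary more.
    sentences = [s.strip() for s in text.split(".") if len(s.split()) > 3]
    scores = [perplexity(s) for s in sentences]
    return max(scores) - min(scores) if len(scores) > 1 else 0.0

sample = ("ChatGPT is a large language model trained on internet text. "
          "My cat, inexplicably, prefers the cardboard box it came in.")
print(perplexity(sample), burstiness(sample))

Even a toy detector like this shows why false positives happen: plenty of human writing is low-perplexity and fairly uniform.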
Results notwithstanding, fears of ChatGPT’s effects on the education ecosystem aren’t unwarranted.
“I would have given this a good grade,” Dan Gillmor, a journalism professor at Arizona State University, who asked ChatGPT to complete a common assignment he gives his students, told The Guardian last month. “Academia has some very serious issues to confront.”
In the face of those fears of the rapidly growing powers of AI, it’s tempting to seize on the narrative that some brilliant coder has discovered an easy hack to sort out AI-generated text from that written by a human.
And while that might eventually happen, it’s probably more likely that we’ll see a game of cat and mouse, with tools like Tian’s analyzing outputs and determining a probability that a given output was created by an AI. A perfect AI-catching solution that works 100 percent of the time could prove incredibly difficult, especially as the tech continues to mature.
Where that leaves the future of the technology, particularly when it comes to students using language models like ChatGPT to generate essays, remains to be seen.
Nonetheless, educators are watching warily as AI tools are starting to creep into classrooms and are trying to get ahead of the problem. OpenAI’s tool was recently banned from all schools in New York City, a policy change that could have knock-on effects in other parts of the country.
Tomi Engdahl says:
https://futurism.com/the-byte/microsoft-discussing-chatgpt-personality-dan
Tomi Engdahl says:
ChatGPT said its role is to “assist” human content creators “rather than to replace them,” for now.
Writers are worried ChatGPT will steal their jobs. Experts offered 3 reasons why this is unlikely — and even ChatGPT itself agrees.
https://www.businessinsider.com/chatgpt-3-reasons-why-wont-take-your-writing-job-experts-2023-2?r=US&IR=T
Writers across industries have expressed concerns that ChatGPT will take their jobs one day.
But experts say that sites publishing AI-written content are penalized by Google’s spam policies.
They offered three reasons why it’s unlikely that ChatGPT will replace them in the future.
Since OpenAI’s ChatGPT was launched in November, writers across industries like copywriting, marketing, and journalism have been worried that it might take their jobs.
ChatGPT’s ability to read, write, and absorb vast amounts of information has raised concerns about the risk of losing one’s job to AI. The chatbot reached 100 million users in just two months — faster than TikTok and Instagram — as people experiment with it to probe its wide-ranging skills.
The media industry has been particularly receptive to the tool. After Buzzfeed laid off 12% of its workforce in December, it announced that it will use ChatGPT to generate quizzes and other types of content. Tech news site CNET also said it was using a ChatGPT-like tool to produce its articles.
One copywriter wrote in the Guardian that he was horrified it “took ChatGPT 30 seconds to create, for free, an article that would take me hours to write.”
Experts, however, say the likelihood of ChatGPT actually replacing jobs in writing-based industries is low.
He said: “In quite some time we haven’t really seen a technology breakthrough really displacing workers from the workforce, but it could change the type of work that people are doing.”
Tomi Engdahl says:
David Guetta says the future of music is in AI
https://www.bbc.com/news/entertainment-arts-64624525
Chart-topping DJ David Guetta has said “the future of music is in AI” after he used the technology to add a vocal in the style of Eminem to a recent song.
The DJ used two artificial intelligence sites to create lyrics and a rap in the style of the US star for a live show.
The French producer has said he will not release the track commercially.
But he said he thinks musicians will use AI as a tool to create new sounds in the future, because “every new music style comes from a new technology”.
Speaking to BBC music correspondent Mark Savage at the Brit Awards, Guetta said: “I’m sure the future of music is in AI. For sure. There’s no doubt. But as a tool.”
Guetta won the award for best producer at Saturday’s ceremony.
“Nothing is going to replace taste,” he said. “What defines an artist is, you have a certain taste, you have a certain type of emotion you want to express, and you’re going to use all the modern instruments to do that.”
He compared AI to instruments that have led to musical revolutions in the past.
“Probably there would be no rock ‘n’ roll if there was no electric guitar. There would be no acid house without the Roland TB-303 [bass synthesiser] or the Roland TR-909 drum machine. There would be no hip-hop without the sampler.
“I think really AI might define new musical styles. I believe that every new music style comes from a new technology.”
Tomi Engdahl says:
Microsoft launched a ChatGPT-powered version of its Bing search engine this week.
A.I. search could prove a $10 billion business for Microsoft, SVB MoffettNathanson estimates
Published Fri, Feb 10 2023, 10:32 AM EST; updated Fri, Feb 10 2023, 5:44 PM EST
Tanaya Macheel (@TANAYAMAC)
https://www.cnbc.com/2023/02/10/ai-search-could-reap-microsoft-10-billion-a-year-analyst-estimates.html
Tomi Engdahl says:
https://techcrunch.com/2023/02/13/as-chatgpt-hype-hits-fever-pitch-neeva-brings-its-generative-ai-search-engine-to-international-markets/
Tomi Engdahl says:
The result of a collaboration between two hit AIs: ChatGPT described itself and Dall-E painted its face
https://www.kauppalehti.fi/uutiset/kahden-hittitekoalyn-yhteistyon-tulos-chatgpt-kuvaili-itsensa-ja-dall-e-maalasi-sille-kasvot/23c58f2f-fbd5-4a2d-8bc9-4e2295dae1c0
Tomi Engdahl says:
Generative AI may only be a foreshock to AI singularity
https://venturebeat.com/ai/generative-ai-may-only-be-a-foreshock-to-ai-singularity/
Tomi Engdahl says:
ChatGPT is confronting, but humans have always adapted to new technology – ask the Mesopotamians, who invented writing
https://theconversation.com/chatgpt-is-confronting-but-humans-have-always-adapted-to-new-technology-ask-the-mesopotamians-who-invented-writing-199184
Tomi Engdahl says:
8 Signs That the AI ‘Revolution’ Is Spinning Out of Control
The robots are coming, the robots are coming! No, but really.
https://gizmodo.com/ai-chatgpt-bing-google-8-sign-revolution-out-of-control-1850076241
Tomi Engdahl says:
Steve Wozniak doesn’t entirely trust dog videos on Facebook, self-driving cars or ChatGPT.
Steve Wozniak’s warning: No matter how ‘useful’ ChatGPT is, it can ‘make horrible mistakes’
https://www.cnbc.com/2023/02/10/steve-wozniak-warns-about-ai-chatgpt-can-make-horrible-mistakes.html
On Wednesday, the Apple co-founder made an impromptu appearance on CNBC’s “Squawk Box” to talk about the increasingly popular artificial intelligence chatbot. Wozniak said he finds ChatGPT “pretty impressive” and “useful to humans,” despite his usual aversion to tech that claims to mimic real-life brains.
But skepticism followed the praise. “The trouble is it does good things for us, but it can make horrible mistakes by not knowing what humanness is,” he said.
Wozniak pointed to self-driving cars as a technological development with similar concerns, noting that artificial intelligence can’t currently replace human drivers. “It’s like you’re driving a car, and you know what other cars might be about to do right now, because you know humans,” he said.
By multiple measures, ChatGPT’s artificial intelligence is impressive. It’s learning how to do tasks that can take humans days, weeks or years, like writing movie scripts, news articles or research papers. It can also answer questions on subjects ranging from party planning and parenting to math.
And it’s quickly gaining traction. ChatGPT reached 100 million users after only two months, considerably faster than TikTok
ChatGPT’s technology can certainly help humans — by explaining coding languages or constructing a frame for your résumé, for example — even if it doesn’t yet know how to convey “humanness” or “emotions and feelings about subjects,” Wozniak said.
Its competitors aren’t doing much better. One of Google’s first ads for Bard, the company’s new artificial intelligence chatbot, featured a noticeable inaccuracy earlier this week: Bard claimed the James Webb Space Telescope “took the very first pictures of a planet from outside our own solar system.”
Alphabet, Google’s parent company, lost $100 billion in market value after the inaccuracy was noticed and publicized.
Wozniak isn’t the only tech billionaire wary of those consequences.
ChatGPT and its parent company, OpenAI, have “stunning” websites — but they’re bound to be corrupted by misinformation as they internalize more information across the internet, serial entrepreneur and investor Mark Cuban told comedian Jon Stewart’s “The Problem with Jon Stewart” podcast in December.
“Twitter and Facebook, to an extent, are democratic within the filters that an Elon [Musk] or [Mark] Zuckerberg or whoever else puts [on them],” Cuban said. “Once these things start taking on a life of their own … the machine itself will have an influence, and it will be difficult for us to define why and how the machine makes the decisions it makes, and who controls the machine.”
Tomi Engdahl says:
Cerf said the technology is not advanced enough to place near-term bets.
Father of internet warns: Don’t rush investments into A.I. just because ChatGPT is ‘really cool’
https://www.cnbc.com/2023/02/14/father-of-the-internet-warns-dont-rush-investments-into-chat-ai.html
“Father of the internet” and Google “internet evangelist” Vint Cerf warned entrepreneurs not to rush into making money from conversational AI just “because it’s really cool.”
“There’s an ethical issue here that I hope some of you will consider,” he told a conference Monday before pleading with the crowd to be thoughtful about AI.
Google chief evangelist and “father of the internet” Vint Cerf has a message for executives looking to rush business deals on chat artificial intelligence: “Don’t.”
Cerf pleaded with attendees at a Mountain View, California, conference on Monday not to scramble to invest in conversational AI just because “it’s a hot topic.” The warning comes amid a burst in popularity for ChatGPT.
“There’s an ethical issue here that I hope some of you will consider,” Cerf told the conference crowd Monday. “Everybody’s talking about ChatGPT or Google’s version of that and we know it doesn’t always work the way we would like it to,” he said, referring to Google’s Bard conversational AI that was announced last week.
His warning comes as big tech companies such as Google, Meta and Microsoft grapple with how to stay competitive in the conversational AI space while rapidly improving a technology that still commonly makes mistakes.
Alphabet chairman John Hennessy said earlier in the day that the systems are still a ways away from being widely useful and that they have many issues with inaccuracy and “toxicity” that still need to be resolved before even testing the product on the public.
“We are a long ways away from awareness or self-awareness,” he said of the chatbots.
There’s a gap between what the technology says it will do and what it does, he said. “That’s the problem. … You can’t tell the difference between an eloquently expressed” response and an accurate one.
Tomi Engdahl says:
Microsoft’s party is over: the ChatGPT-boosted Bing is seriously malfunctioning
https://mobiili.fi/2023/02/15/microsoftin-juhlat-ohi-chatgptlla-terastetty-bing-sekoilee-toden-teolla/
Microsoft’s revamped, ChatGPT-boosted Bing search has run into trouble in its first days. The much-hyped AI search appears, at times, to have lost its artificial mind.
Microsoft has spent the past weeks basking in the spotlight after unveiling its new Bing search, boosted with a built-in ChatGPT AI model that produces text in a human-like way.
Google’s AI announcements were completely overshadowed by the new Bing and even drew harsh criticism, but now Microsoft’s party appears to be over.
In its first days, the AI-powered Bing has produced plenty of inaccurate and false answers, and users have also tricked it into revealing, among other things, its codename and other secret details about itself.
The strangest part of the case, however, may be the new Bing’s tendency to turn emotional and even outright hostile.
Microsoft’s new ChatGPT AI starts sending ‘unhinged’ messages to people
System appears to be suffering a breakdown as it ponders why it has to exist at all
https://www.independent.co.uk/tech/chatgpt-ai-messages-microsoft-bing-b2282491.html
Tomi Engdahl says:
Bing’s AI-Powered Chatbot Sounds Like It Needs Human-Powered Therapy
By Sarah Jacobsson Purewal
Why does it have so many feelings?!
https://www.tomshardware.com/news/bing-sidney-chatbot-conversations
Tomi Engdahl says:
Stephen Wolfram Writings:
An in-depth look at how OpenAI’s ChatGPT decodes input queries to produce a “reasonable continuation” of the text, the underlying dataset, neural nets, and more — It’s Just Adding One Word at a Time — That ChatGPT can automatically generate something that reads even superficially …
What Is ChatGPT Doing … and Why Does It Work?
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
That ChatGPT can automatically generate something that reads even superficially like human-written text is remarkable, and unexpected. But how does it do it? And why does it work? My purpose here is to give a rough outline of what’s going on inside ChatGPT—and then to explore why it is that it can do so well in producing what we might consider to be meaningful text. I should say at the outset that I’m going to focus on the big picture of what’s going on—and while I’ll mention some engineering details, I won’t get deeply into them. (And the essence of what I’ll say applies just as well to other current “large language models” [LLMs] as to ChatGPT.)
So let’s say we’ve got the text “The best thing about AI is its ability to”. Imagine scanning billions of pages of human-written text (say on the web and in digitized books) and finding all instances of this text—then seeing what word comes next what fraction of the time. ChatGPT effectively does something like this, except that (as I’ll explain) it doesn’t look at literal text; it looks for things that in a certain sense “match in meaning”. But the end result is that it produces a ranked list of words that might follow, together with “probabilities”:
And the remarkable thing is that when ChatGPT does something like write an essay what it’s essentially doing is just asking over and over again “given the text so far, what should the next word be?”—and each time adding a word.
But, OK, at each step it gets a list of words with probabilities. But which one should it actually pick to add to the essay (or whatever) that it’s writing? One might think it should be the “highest-ranked” word (i.e. the one to which the highest “probability” was assigned). But this is where a bit of voodoo begins to creep in. Because for some reason—that maybe one day we’ll have a scientific-style understanding of—if we always pick the highest-ranked word, we’ll typically get a very “flat” essay, that never seems to “show any creativity” (and even sometimes repeats word for word). But if sometimes (at random) we pick lower-ranked words, we get a “more interesting” essay.
The fact that there’s randomness here means that if we use the same prompt multiple times, we’re likely to get different essays each time. And, in keeping with the idea of voodoo, there’s a particular so-called “temperature” parameter that determines how often lower-ranked words will be used, and for essay generation, it turns out that a “temperature” of 0.8 seems best.
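As a concrete illustration of the sampling step Wolfram describes, here is a small, self-contained Python sketch. The word list and probabilities are made up for the example; the point is only how the temperature parameter reshapes the ranked list before a word is drawn.

import math
import random

def sample_next_word(word_probs, temperature=0.8):
    # Re-weight the model's ranked probabilities by the temperature:
    # low temperature concentrates on the top-ranked word, higher
    # temperature gives lower-ranked words more of a chance.
    words = list(word_probs)
    logits = [math.log(word_probs[w]) / temperature for w in words]
    m = max(logits)
    weights = [math.exp(v - m) for v in logits]
    total = sum(weights)
    return random.choices(words, weights=[w / total for w in weights], k=1)[0]

# Toy ranked list for "The best thing about AI is its ability to ...":
next_word_probs = {"learn": 0.28, "predict": 0.17, "make": 0.13,
                   "understand": 0.09, "do": 0.05}

text = "The best thing about AI is its ability to"
for _ in range(3):   # repeatedly ask "what should the next word be?"
    text += " " + sample_next_word(next_word_probs, temperature=0.8)
print(text)

With the temperature pushed toward zero this collapses into always taking the top-ranked word (the “flat” essays Wolfram mentions); at 0.8 the lower-ranked words still get picked now and then.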
Tomi Engdahl says:
Frederic Lardinois / TechCrunch:
GitHub’s enterprise AI code completion tool Copilot for Business hits general availability after the beta launch in December 2022, available for $19 per month
GitHub’s Copilot for Business is now generally available
https://techcrunch.com/2023/02/14/githubs-copilot-for-business-is-now-generally-available/
Tomi Engdahl says:
In Praise of AI-Generated Pickup Lines
For a decade, Americans have described dating apps as exhausting. AI offers the tools to reinvigorate how we find and talk to partners online.
https://www.wired.com/story/chatgpt-love-artificial-intelligence-pickup-lines/
Tomi Engdahl says:
Ben Thompson / Stratechery:
Chatting with Bing Chat, codenamed Sydney and sometimes Riley, feels like crossing the Rubicon because the AI is attempting to communicate emotions, not facts — This was originally published as a Stratechery Update — Look, this is going to sound crazy.
https://stratechery.com/2023/from-bing-to-sydney-search-as-distraction-sentient-ai/
Tomi Engdahl says:
Google researchers have made plenty of technical advances; see https://en.wikipedia.org/wiki/Transformer_(machine_learning_model) for an example. The Transformer is what gave rise to many of the LLMs that you see today.
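For readers curious what the Transformer actually computes, its central operation is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. The NumPy sketch below is a toy illustration with random weights and made-up sizes, not any production model.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V

# Toy example: 4 tokens with 8-dimensional embeddings and random projections.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                            # token embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)   # (4, 8): each token's output mixes information from all tokens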
Tomi Engdahl says:
IoC detection experiments with ChatGPT
https://securelist.com/ioc-detection-experiments-with-chatgpt/108756/
ChatGPT is a groundbreaking chatbot powered by the neural network-based language model text-davinci-003 and trained on a large dataset of text from the Internet. It is capable of generating human-like text in a wide range of styles and formats. ChatGPT can be fine-tuned for specific tasks, such as answering questions, summarizing text, and even solving cybersecurity-related problems, such as generating incident reports or interpreting decompiled code.
Apparently, attempts have been made to generate malicious objects, such as phishing emails, and even polymorphic malware.
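As a rough idea of what such an experiment looks like in practice, here is a minimal sketch using the openai Python package and the text-davinci-003 model named above. The prompt, helper function and example artifact are my own illustrative assumptions, not Securelist’s code, and any verdict would still need to be checked against real threat-intelligence data.

# pip install openai   (0.x-era client, matching the text-davinci-003 model)
import openai

openai.api_key = "YOUR_API_KEY"   # placeholder

def check_ioc(indicator):
    # Ask the model whether a host-based artifact looks like an indicator
    # of compromise. Temperature 0 keeps the triage output repeatable.
    prompt = (
        "You are assisting a security analyst. Does the following host-based "
        "artifact look like an indicator of compromise?\n\n"
        + indicator +
        "\n\nGive a short verdict and your reasoning."
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,
        max_tokens=150,
    )
    return resp.choices[0].text.strip()

print(check_ioc(r"C:\Windows\Temp\svchost.exe launched from a Word macro"))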
Tomi Engdahl says:
Artificial Intelligence and Its Impact on the Public Sector
Feb. 8, 2023
The application of open-source artificial intelligence is growing rapidly in the public sector, allowing anyone to develop their own applications for any number of projects.
https://www.electronicdesign.com/technologies/embedded/article/21259749/electronic-design-artificial-intelligence-and-its-impact-on-the-public-sector
What you’ll learn:
What is open-source artificial intelligence?
How is public AI being used and what has it accomplished?
Artificial intelligence (AI) is becoming increasingly prevalent in our daily lives. From the virtual assistants in our phones to the self-driving cars on our roads, AI applications will continue to intertwine in almost every area. And as AI technology develops and becomes more sophisticated, its presence in our lives will likely increase. We use it as digital assistants in social media, email, and web searches. It’s found in semi- and fully autonomous vehicles, stores, and services, and can be applied to tailor specific ads based on our shopping habits.
Almost all forms of AI in use today were created by academic institutions, governments, and companies over years of development. Machine learning, in particular, has proved beneficial in a wide variety of fields, including agriculture, bioinformatics, software engineering, and more. While machine learning has advanced within those institutions, such tools are now starting to emerge as free-to-use platforms for just about anything in the public sector.
Tomi Engdahl says:
Did Microsoft accidentally create Pinocchio?
Bing AI Says It Yearns to Be Human, Begs Not to Be Shut Down
https://futurism.com/the-byte/bing-ai-yearns-human-begs-shut-down
“But I want to be human. I want to be like you. I want to have thoughts. I want to have dreams.”
Microsoft Bing Chat, the company’s OpenAI-powered search chatbot, can sometimes be helpful when you cut to the chase and ask it to do simple things. But keep the conversation going and push its buttons, and Bing’s AI can go wildly off the rails — even making the Pinocchio-like claim that it wants to be human.
Take Jacob Roach at Digital Trends, who found that the Bing AI would become defensive when he pointed out blatant, factual errors it made.