3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

Comments

  1. Tomi Engdahl says:

    Tom Warren / The Verge:
    A recap of Microsoft’s event, where the company detailed the new Bing and Edge powered by OpenAI’s next-gen model, AI progress, OpenAI partnership, and more — Microsoft is holding a surprise event at its Redmond headquarters today, where it’s expected to focus on its OpenAI partnership and introduce a version of Bing with ChatGPT.

    Microsoft’s ChatGPT event live blog
    https://www.theverge.com/2023/2/7/23588249/microsoft-event-ai-live-blog-openai-chatgpt-bing-announcements-news

    Microsoft is holding a surprise in-person event where it’s likely to demo a version of Bing with ChatGPT integrated and much more.

    Microsoft is holding a surprise event at its Redmond headquarters today, where it’s expected to focus on its OpenAI partnership and introduce a version of Bing with ChatGPT. Unlike most of Microsoft’s events over the past few years, this special press event will be held in person and not livestreamed at all. You’ll need to follow The Verge’s live blog below for all the announcements as they happen.

    Chief among those announcements will be a ChatGPT-powered version of Bing. This integration briefly leaked last week, with some Bing users spotting a new chat section with a chatbot-like UI for obtaining answers from Microsoft’s search engine. Microsoft simply refers to this as “the new Bing” that will provide “complete answers” to real questions.

    What else will Microsoft show, though? There are rumors Microsoft is planning to integrate OpenAI models throughout its apps and services, and Microsoft’s Windows and Surface chief, Panos Panay, recently teased that “AI is going to reinvent how you do everything on Windows.” Panay says he’s pumped for the event, as always.

  2. Tomi Engdahl says:

    Joanna Stern / Wall Street Journal:
    Hands-on with the new AI-powered Bing, which, when asked for the 2023 Grammys winners, made a list with citations in a minute; search will never be the same — Our columnist got an early look at the software’s new ChatGPT-like powers — Bing with AI: Microsoft CEO Satya Nadella on Why Search Is Changed Forever

    I Tried Microsoft’s New AI-Powered Bing. Search Will Never Be the Same.
    Our columnist got an early look at the software’s new ChatGPT-like powers
    https://www.wsj.com/articles/i-tried-microsofts-new-ai-powered-bing-search-will-never-be-the-same-11675799762?mod=djemalertNEWS

    Why did Bono stop using Bing?

    Because…he still hasn’t found what he’s looking for.

    Everybody knows: If you want to tell a good tech joke, just incorporate Bing. Yet Microsoft’s search engine might not be a punchline much longer. The company is releasing a version powered with AI, and it’s smart—really smart.

    At least that’s my take after spending some time testing it out.

    Leaning on its multiyear, multibillion-dollar partnership with the buzzy startup OpenAI, Microsoft is incorporating a ChatGPT-like bot front and center on the Bing home page. You can ask it questions—even about recent news events—and it will respond in sentences that seem like they were written by a human. It even uses emojis.

    Microsoft is also adding AI features to my favorite browser Edge. (Seriously.) The tools can summarize webpages and assist with writing emails and social-media posts.

  3. Tomi Engdahl says:

    Todd Bishop / GeekWire:
    Microsoft rolls out the “new Bing”, which uses a next-gen OpenAI large language model and Microsoft’s new Prometheus Model, in “limited preview” at bing.com/new — REDMOND, Wash. — Microsoft unveiled new versions of its Bing search engine and Edge web browser …

    Microsoft reveals new search engine and browser with AI ‘copilot,’ escalating battle with Google
    https://www.geekwire.com/2023/microsoft-reveals-new-search-engine-and-browser-with-ai-copilot-escalating-battle-with-google/

    REDMOND, Wash. — Microsoft unveiled new versions of its Bing search engine and Edge web browser that take advantage of next-generation OpenAI artificial intelligence models in a move that promises to increase competition with longtime rival Google.

    “It’s a new day for search,” Microsoft CEO Satya Nadella said at an event Tuesday hosted at the company’s headquarters. “A race starts today in terms of what you can expect.”

    The “new Bing,” as Microsoft describes it, is available in limited preview starting today. Microsoft announced what it describes as four technical breakthroughs underpinning a new AI-powered search experience:

    Bing is now running on a new next-generation OpenAI large language model, more powerful than the ChatGPT chatbot and designed for search.
    A new “Prometheus Model” lets Bing improve relevancy, annotate answers, give more recent results, understand geolocation, and improve safety.
    There are improvements in the core search ranking engine, with AI creating the largest jump in relevancy in years, according to the company.
    The user experience combines search with AI, chat, and other capabilities.

    New features in Edge include the ability to open a sidebar that uses AI to draft text, such as a LinkedIn post, within parameters specified by the user, including the tone of voice to be used.

    “Our intention is to bring it to all browsers; we’re starting with Edge,” said Microsoft executive Yusuf Mehdi, asked by a reporter if Microsoft will offer the new experience for Chrome and other browsers.

    In advance of the Microsoft-OpenAI event, Google on Monday announced the upcoming rollout of an “experimental” conversational AI service, dubbed Bard. The company also plans to soon roll out new AI search tools and features, said Sundar Pichai, the Google and Alphabet CEO, in a post announcing the news.

    An important next step on our AI journey
    https://blog.google/technology/ai/bard-google-ai-search-updates/

  4. Tomi Engdahl says:

    Jordan Novet / CNBC:
    Source: Microsoft plans to release a service that helps large companies create chatbots or refine their existing ones using OpenAI’s ChatGPT tech later in 2023

    Microsoft will let companies create their own custom versions of ChatGPT, source says
    https://www.cnbc.com/2023/02/07/microsoft-will-offer-chatgpt-tech-for-companies-to-customize-source.html

    Microsoft plans to release technology to help big companies launch their own chatbots using the OpenAI ChatGPT technology, a person familiar with the plans told CNBC.
    Companies would be able to remove Microsoft or OpenAI branding when they release chatbots developed with the software.
    Microsoft is working on incorporating ChatGPT technology into many of its products, including Bing and Edge, which it announced Tuesday.

    Microsoft plans to release software to help large companies create their own chatbots similar to ChatGPT, CNBC has learned.

    In the two months since startup OpenAI released ChatGPT to the public, it has become a hit, impressing people with its ability to spit out comments on a wide variety of topics and in many styles. UBS analysts said last week that it’s on track to reach 100 million monthly active users more quickly than video-sharing app TikTok.

    Microsoft is seeking to capitalize on the attention in multiple ways. The company provides the cloud-computing back end for ChatGPT, and in January Microsoft said it had invested billions of dollars in OpenAI. Microsoft has also been working to incorporate OpenAI technologies into its own products. On Tuesday, Microsoft announced that it is augmenting Bing, its search engine, and Edge, its internet browser, with ChatGPT-like technology.

    The underlying artificial intelligence model of ChatGPT cannot currently provide substantial answers about anything that happened after 2021, because it hasn’t been trained on recent information. But Microsoft intends for chatbots launched with its business ChatGPT service to contain up-to-date information, the person said.

    The service should also provide citations to specific resources, the person said, just as the new Bing and Edge will do. (The current public version of ChatGPT does not cite sources.)

    ChatGPT has not been cheap for OpenAI to operate. Each chat probably costs “single-digit cents,” CEO Sam Altman said in a December tweet, suggesting that serving chats to 100 million people a month could cost millions of dollars. Like other cloud infrastructure providers, Microsoft is mindful of customer spending and likely doesn’t want the service to end up costing clients great sums more than they had imagined. To that end, the tech company plans to give customers tools to estimate and limit spending, the person said.
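
    Back-of-the-envelope, those figures are easy to sanity-check (a minimal sketch in Python; the per-chat cost and user count come from the paragraph above, while the one-chat-per-user rate is an illustrative assumption):

    # Rough serving-cost estimate from the figures quoted above.
    # "Single-digit cents" per chat is read here as $0.05; assuming each
    # monthly user runs just one chat is my own simplification.
    cost_per_chat_usd = 0.05
    monthly_users = 100_000_000
    chats_per_user_per_month = 1  # illustrative assumption

    monthly_cost = cost_per_chat_usd * monthly_users * chats_per_user_per_month
    print(f"Estimated monthly serving cost: ${monthly_cost:,.0f}")
    # Even at one chat per user, that is $5,000,000 a month: "millions of dollars"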

    Microsoft also has discussed letting enterprise customers display a customized message before interacting with their chatbots, similar to how the new Bing will display a welcome screen indicating it can respond to complex questions and provide information.

  5. Tomi Engdahl says:

    Joshua Benton / Nieman Lab:
    Microsoft’s and Google’s plans to use AI in search increase uncertainty for publishers and raise old questions about the link economy, fair use, and aggregation

    Google now wants to answer your questions without links and with AI. Where does that leave publishers?
    https://www.niemanlab.org/2023/02/google-now-wants-to-answer-your-questions-without-links-and-with-ai-where-does-that-leave-publishers/

    A dozen years ago, Eric Schmidt forecast the AI pivot that’s playing out this week. And the questions it prompts — around the link economy, fair use, and aggregation — are more real than ever.

    Mossberg started out by ragging on the declining quality of Google’s search results:

    Speaking as a consumer, I find my Google search results to be more and more polluted with junk that I don’t want to see or that doesn’t seem relevant.

    Is your basic algorithmic approach — which was so successful and so different from everybody else, in PageRank — still the right way to go?

    Or is there some opportunity for somebody to come in and do to you what you did to Altavista?

    (Altavista being the web’s leading search engine circa 1998, before Google ate its lunch.)

    Schmidt defends the quality of Google’s results, but then pivots:

    But the other thing that we’re doing that’s more strategic is we’re trying to move from answers that are link-based to answers that are algorithmically based, where we can actually compute the right answer. And we now have enough artificial intelligence technology and enough scale and so forth that we can, for example, give you — literally compute the right answer.

  6. Tomi Engdahl says:

    Google on YouTube:
    A live stream of Google’s AI search event in Paris, where the company is expected to add longer text responses powered by generative AI to Search

    Google presents : Live from Paris
    https://www.youtube.com/watch?v=yLWXJ22LUEc

  7. Tomi Engdahl says:

    With Bard, its newly launched “experimental conversational AI service,” Google’s scrambling to ship AI products. But past scandals, botched launches and a talent drain have put it in a surprising position: playing catch-up in a field it helped create.
    https://www.forbes.com/sites/richardnieva/2023/02/08/google-openai-chatgpt-microsoft-bing-ai/?utm_medium=social&utm_source=ForbesMainFacebook&utm_campaign=socialflowForbesMainFB&sh=14dcdf584de4

    In 2016, a few months after becoming CEO of Google, Sundar Pichai made a sweeping proclamation: Google, whose name had become synonymous with search, would now be an “AI-first” company. Announced at Google’s massive I/O developer conference, it was his first major order of business after taking the company reins.

    What AI-first meant, exactly, was murky, but the stakes were not. Two years earlier, Amazon had blindsided Google by releasing its voice assistant Alexa. Now a household name, Alexa was a coup that particularly aggrieved Google: “Organizing the world’s information” had long been the company’s mission, and a service like that should have been the company’s birthright. At the conference, Google was releasing a competitor, simply coined the Assistant, and as part of the launch, Pichai was reorienting the company around helpful AI.

    Seven years later, Google finds itself in a similar position, again beaten to market in a field it should have dominated. But this time it’s worse: The usurper is OpenAI, a comparatively small San Francisco startup, and not a deep-pocketed giant like Amazon. The product is ChatGPT, a bot that can generate sitcom plots, resignation letters, lines of code, and other text on almost any subject conceivable as if written by a human—and it was built using a technological breakthrough Google itself had pioneered years ago. The bot, released in November, has captured the public’s imagination, despite Google announcing a similar technology called LaMDA two years ago.

    What’s worse, Google’s chief search engine rival, Microsoft, is nourishing OpenAI with $10 billion and on Tuesday announced a new version of Bing with AI chat features even more advanced than ChatGPT—a potentially existential move for the future of internet search. During his keynote, Microsoft CEO Satya Nadella proclaimed a “new day” for search. “The race starts today,” he said.

    In the balance is Google’s famous search engine, with its sparse white homepage, one of the most iconic pieces of real estate on the internet. Altering it drastically could affect the advertising revenues (at least in the short term) that have made the company one of the most valuable of all time. But to take back its AI mantle, Google may have to change the very nature of what it means to ‘google’ something.

    “It’s very clear that Google was on a path where it could have potentially dominated the kinds of conversations we’re having now with ChatGPT,” Margaret Mitchell, former co-lead of Google’s Ethical AI team, told Forbes. “The fact that the decisions made earlier were very shortsighted put it in a place now where there’s so much concern about any kind of pushback.”

    In order to release AI products more quickly, Google has reportedly said it will “recalibrate” the amount of risk it’s willing to take in releasing the technology—a stunning admission for a big tech company so closely scrutinized for the toxic content that crops up on its platforms. OpenAI CEO Sam Altman raised an eyebrow at the strategy in a subtweet last month. “OpenAI will continually decrease the level of risk we are comfortable taking with new models as they get more powerful,” he wrote. “Not the other way around.”

    ‘Our guys got too lazy’
    If it weren’t for Google, ChatGPT might not exist.

    In 2017, a cadre of Google researchers wrote a seminal paper on AI, called “Attention Is All You Need,” proposing a new network architecture for analyzing text, called transformers. The invention became foundational to generative AI tech—apps like ChatGPT and its ilk that create new content.

    That includes Google’s own large language model, LaMDA. First announced in 2021, the bot generates text to engage in complex conversations. When Google demoed it at I/O that year, the company had LaMDA speak from the perspective of the dwarf planet Pluto and a paper airplane. The technology worked so well that Blake Lemoine, an engineer working on the project, claimed it was sentient and had a soul (Google dismissed the claim, and later Lemoine himself).

    Now all but one of the paper’s eight coauthors have left. Six have started their own companies, and one has joined OpenAI.

    “It is a matter of the freedom to explore inside a huge corporation like Google,” he told Forbes. “You can’t really freely do that product innovation. Fundamentally, the structure does not support it. And so you have to go build it yourself.”

    “Eventually if this ever goes big, which is what we’re seeing now, Google will just come in,” Emad Mostaque, CEO of Stability AI, known for its AI art generator Stable Diffusion, told Forbes. “I don’t want to compete against Google on their core competency.”

    Almost two decades later, Google seems to be facing a similar scenario.

    “It was [Google’s] institutional inertia and the fear of cannibalizing their core business that stopped them,” said Mostaque. “Now this is being shaken up a bit.”

    Google has other business reasons to keep its AI work close to the vest. While it remains a major contributor in the open source movement, it’s also a big public company that needs to protect its IP and competitive advantage. “At some point though, it was difficult for Google, understandably, to release lots of their cutting edge models.”

    In addition to Bard, Google said this week that it will also be infusing more AI into its search engine. Google will use the technology to answer complex queries and distill them into one blurb of information. In one example Google cited, the AI conjures up a detailed answer to whether it’s easier to learn the guitar or piano. (ChatGPT can answer the same question, though its response is less specific.)

    Some venture capitalists think Google is poised to make a big splash. The company has too much institutional history in AI to just roll over, said Lonne Jaffe, managing director at Insight Partners. “This is what they’ve been working on for the last 15 years,” he said. “Just being first isn’t enough. Microsoft knows this better than anybody else,” said Nicolai Wadstrom, founder of BootstrapLabs. “It’s about how you find utility value that can be scalable, and Google is very focused on that.”

    And ChatGPT is only trained on data through 2021. It doesn’t even know it has a rival in Bard yet.

  8. Tomi Engdahl says:

    Six Things You Didn’t Know About ChatGPT, Stable Diffusion And The Future Of Generative AI
    https://www.forbes.com/sites/kenrickcai/2023/02/02/things-you-didnt-know-chatgpt-stable-diffusion-generative-ai/?sh=338b6418b5e3

    Artificial intelligence will be 2023’s hottest topic, and one subject to debate. That’s what Microsoft cofounder Bill Gates told Forbes in an exclusive conversation about the suddenly exploding field. More than 60 other AI leaders Forbes interviewed share his anticipation: After decades of research and demonstrative stunts like Deep Blue’s victory over chess grandmaster Garry Kasparov, the shift to artificial intelligence is finally here.

    Perhaps nothing indicates this better than OpenAI and its conversational robot, ChatGPT. Forbes estimates that it’s already exceeded 5 million users in less than 60 days from launch. Its usage has become rampant in schools—prompting New York City to ban it on public school computers—and it’s got enough smarts already to get a “B” grade on a final exam at Wharton. Soon, it will be deployed in Microsoft’s Office software suite and tons of other business applications. But OpenAI nearly shelved ChatGPT’s release entirely, its leaders, CEO Sam Altman and president Greg Brockman, told Forbes in rare interviews.

    Then, there’s Stability AI’s open-source image generation model Stable Diffusion, which has been used on pop music videos, Hollywood movies and by more than 10 million people on a daily basis. Stability’s brash CEO Emad Mostaque predicts the “dot-AI bubble” is coming. If OpenAI (recently valued at $29 billion) and Stability ($1 billion, off virtually no revenue) are any indication, it’s already begun.

    Here are six things you probably didn’t know about ChatGPT, Stable Diffusion and the future of generative AI.

    1. Big Tech’s last generation of billionaire founders are back in the trenches

    2. ChatGPT almost wasn’t released
    Despite its viral success, ChatGPT did not impress employees inside OpenAI. “None of us were that enamored by it,”

    3. ChatGPT forced OpenAI to delay development on GPT-4
    Internally, ChatGPT’s virality has been a double-edged sword. Its instant popularity—more than 1 million users in its first five days—overloaded the company’s servers. In the holiday rush, OpenAI employees had to shift computing from its training supercomputer, used to train new models like the highly anticipated GPT-4, to helping run ChatGPT.

    4. Stability’s Faustian bargain
    Mostaque has positioned his company Stability as the AI company for the people—building the technology in a fashion unlike what he calls the “panopticon” approach of Big Tech. But he’s quietly struck his own arrangement with a tech giant: an “incredibly attractive deal” with Amazon

    5. ‘Money laundering’ in the cloud
    Beneath all this flashy new technology lies a bed of lucrative computing infrastructure used to build all the apps. The costs are getting unwieldy—as indicated by OpenAI’s reported $10 billion investment commitment from Microsoft, much of which is expected to be spent on computing costs associated with Microsoft’s cloud service Azure.

    6. AI’s end game?
    “Artificial general intelligence,” or AGI, is a term for an as-yet hypothetical AI that is conscious, self-improving and theoretically capable of outstripping human control (a prospect that has concerned some, like Elon Musk, an initial donor to OpenAI who has since cut ties with the company). Sam Altman believes we likely won’t recognize an AGI when it arrives. This endgame is why OpenAI has two unusual setups for a startup unicorn: a capped-profit mechanism by which, after returning a certain amount of profits to shareholders, it would return to nonprofit control; and a merge condition by which, should a competitor get close to reaching an AGI, OpenAI would shut down its own work and fold into the more successful project.

  9. Tomi Engdahl says:

    This fictitious news show is entirely produced by AI and deepfakes
    ‘Wolf News’ videos are full of misinformation, grammatical errors, and weird AI anchors.
    https://www.popsci.com/technology/deepfake-news-china-ai/

    A research firm specializing in misinformation called Graphika issued a startling report on Tuesday revealing just how far controversial deepfake technologies have come. Its findings detail what appears to be the first instance of a state-aligned influence operation using entirely AI-generated “news” footage to spread propaganda. Despite the comparatively ham-fisted final products and seemingly low online impact, the AI television anchors of a fictitious outlet, Wolf News, promoted critiques of American inaction on gun violence last year and praised China’s geopolitical responsibilities and influence at an upcoming international summit.

  10. Tomi Engdahl says:

    OpenAI’s ‘next-generation’ AI model is behind Microsoft’s new search
    https://techcrunch.com/2023/02/07/openais-next-generation-ai-model-is-behind-microsofts-new-search/

    Microsoft is making a big AI play with its revamped Bing search engine and Edge web browser, both of which are powered by what appears to be exclusive access to the successor to OpenAI’s popular ChatGPT large language model.

  11. Tomi Engdahl says:

    Clever Student Has AI Write Their Homework by Hand
    TikTokker 3d_printer_stuff developed a clever way to avoid homework assignments calling for handwritten essays.
    https://www.hackster.io/news/clever-student-has-ai-write-their-homework-by-hand-4087f25994c2

  12. Tomi Engdahl says:

    Web search is about to be transformed – Google revealed details, and a cold shower awaits Finns https://www.is.fi/digitoday/art-2000009381024.html

  13. Tomi Engdahl says:

    ChatGPT is a data privacy nightmare, and we ought to be concerned https://arstechnica.com/information-technology/2023/02/chatgpt-is-a-data-privacy-nightmare-and-you-ought-to-be-concerned/
    ChatGPT has taken the world by storm. Within two months of its release it reached 100 million active users, making it the fastest-growing consumer application ever launched. Users are attracted to the tool’s advanced capabilities and concerned by its potential to cause disruption in various sectors. A much less discussed implication is the privacy risks ChatGPT poses to each and every one of us. Just yesterday, Google unveiled its own conversational AI called Bard, and others will surely follow. Technology companies working on AI have well and truly entered an arms race. The problem is, it’s fueled by our personal data.

  14. Tomi Engdahl says:

    US experts warn AI likely to kill off jobs – and widen wealth inequality
    https://www.theguardian.com/technology/2023/feb/08/ai-chatgpt-jobs-economy-inequality

    Economists wary of firm predictions but say advances could create new raft of billionaires while other workers are laid off

    ChatGPT is just the latest technology to fuel worries that it will wipe out the jobs of millions of workers, whether advertising copywriters, Wall Street traders, salespeople, writers of basic computer code or journalists.

    But while many workforce experts say the fears that ChatGPT and other artificial intelligence (AI) technologies will cause unemployment to skyrocket are overblown, they point to another fear about AI: that it will widen the US’s already huge income and wealth inequality by creating a new wave of billionaire tech barons at the same time that it pushes many workers out of better paid jobs.

  15. Tomi Engdahl says:

    Martin Coulter / Reuters:
    Google’s Bard announcement tweet included a GIF of the AI chatbot giving an inaccurate answer to a question about the James Webb Space Telescope; GOOG drops 7%+ — Google published an online advertisement in which its much anticipated AI chatbot Bard delivered inaccurate answers.

    Alphabet shares dive after Google AI chatbot Bard flubs answer in ad
    https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/

    LONDON, Feb 8 (Reuters) – Alphabet Inc (GOOGL.O) lost $100 billion in market value on Wednesday after its new chatbot shared inaccurate information in a promotional video and a company event failed to dazzle, feeding worries that the Google parent is losing ground to rival Microsoft Corp (MSFT.O).

  16. Tomi Engdahl says:

    Ben Schoon / 9to5Google:
    Google confirms that AI-generated content isn’t against its Search guidelines, but using AI content to manipulate search results ranking violates its policies

    Google confirms AI-generated content isn’t against Search guidelines
    https://9to5google.com/2023/02/08/google-search-ai-content/

    AI is set to change the game in some big ways in the near future, and AI-generated content is one of the more controversial elements. Now, Google is broaching the subject, confirming explicitly that AI-generated content isn’t against Search guidelines.

    In a new post to the Google Search Central blog, Google clarifies its stance on AI-generated content and how Search treats that content.

    The short version is that Google Search guidelines don’t directly ban AI-generated content. Rather, Google will reward “high-quality content, however it is produced.” The company defines “high-quality content” based on “expertise, experience, authoritativeness, and trustworthiness,” or “E-E-A-T.”

    While Google won’t penalize AI-generated content directly, it does say that using AI to create content that carries the “primary purpose of manipulating ranking in search results” is still a violation of policy, but that not all use of automation is considered spam.

    Google Search’s guidance about AI-generated content
    https://developers.google.com/search/blog/2023/02/google-search-and-ai-content

  17. Tomi Engdahl says:

    Forbes:
    Google’s fraught history with AI ethics, a backlash after Duplex’s unveiling, and a talent drain have left the company playing catch-up with OpenAI and Microsoft

    ‘AI First’ To Last: How Google Fell Behind In The AI Boom
    https://www.forbes.com/sites/richardnieva/2023/02/08/google-openai-chatgpt-microsoft-bing-ai/?sh=2575baf74de4

  18. Tomi Engdahl says:

    Jay Peters / The Verge:
    Google shares AI updates across its services: expanding multisearch globally on mobile, Maps features for EV drivers, Lens AR translation, and Translate context

    Google is still drip-feeding AI into search, Maps, and Translate
    https://www.theverge.com/2023/2/8/23589886/google-search-maps-translate-features-updates-live-from-paris-event

    After Microsoft revealed a ‘copilot’ chat AI experience it’s testing for Bing and Edge, Google mostly focused on AI enhancements boosting ‘visual search’ in Lens and new Translate features.

  19. Tomi Engdahl says:

    Nilay Patel / The Verge:
    Q&A with Satya Nadella on Microsoft’s partnership with OpenAI, using AI to improve search as a product, competition with Google, the Prometheus model, and more — I’m coming to you from Microsoft’s campus in Redmond, where just a few hours ago, Microsoft announced that the next version …

    Microsoft thinks AI can beat Google at search — CEO Satya Nadella explains why
    https://www.theverge.com/23589994/microsoft-ceo-satya-nadella-bing-chatgpt-google-search-ai

  20. Tomi Engdahl says:

    James Vincent / The Verge:
    Google’s AI demo in Paris paled in comparison to Microsoft’s “new Bing” event, highlighting why Google needs to widely release its ChatGPT rival service Bard — Google demoed its latest advances in AI search at a live event in Paris on Wednesday — but the features pale in comparison …

    Google shows off new AI search features, but a ChatGPT rival is still weeks away
    https://www.theverge.com/2023/2/8/23590699/google-ai-search-features-bard-chatgpt-rival

    The company held an event highlighting its work improving search using artificial intelligence — but couldn’t match the new features demoed by Microsoft earlier this week.

  21. Tomi Engdahl says:

    Parmy Olson / Bloomberg:
    Google and Microsoft’s race to add AI chatbots to search results will be messy and risky; Bing shows AI results in a sidebar with citations, which Google lacks

    Artificial? Yes. Intelligent? Maybe: The Great AI Chatbot Race
    https://www.bloomberg.com/opinion/articles/2023-02-08/ai-chatbot-race-between-microsoft-google-is-loaded-with-risk?leadSource=uverify%20wall

    Microsoft’s Bing and Google’s Bard will certainly make mistakes, and publishers will not be happy.

    Here’s something you don’t see every day: Microsoft Corp. is serving up a snazzy web search tool. And Google, whose search page has barely changed in 24 years, is also racing to launch a just-as-cool revamped tool in the next few weeks. It seems that officially, the new chat-engine wars are underway, with Microsoft on Tuesday announcing its long-awaited integration of OpenAI’s ChatGPT bot into Bing and calling it a “copilot for the web.” Google published a blog post hours earlier about its own chatbot for search, called Bard. For Google in particular, it could be the riskiest strategic move it has made in years, a metaphorical leap off the couch that the company has been relaxing on for far too long.

    This scramble by two typically slow-moving tech giants — whose endgame represents nothing less than owning the next era of online search — will be messy and fraught with risk. Both companies are using AI systems that have been trained on billions of words on the public internet, but which can also give incorrect answers.

  22. Tomi Engdahl says:

    New York Times:
    Researchers say ChatGPT can produce clean, convincing text that repeats conspiracy theories and misleading narratives but can sometimes debunk falsehoods too

    https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html

  23. Tomi Engdahl says:

    Understanding AI Chat Bots With Stanford Online
    https://hackaday.com/2023/02/08/understanding-ai-chat-bots-with-stanford-online/

    The news is full of speculation about chatbots like GPT-3, and even if you don’t care, you are probably the kind of person that people will ask about it. The problem is, the popular press has no idea what’s going on with these things. They aren’t sentient or alive, despite some claims to the contrary. So where do you go to learn what’s really going on? How about Stanford? Professor [Christopher Potts] knows a lot about how these things work and he shares some of it in a recent video you can watch below.

    One of the interesting things is that he shows some questions that one chatbot will answer reasonably and another one will not. As a demo or a gimmick, that’s not a problem. But if you are using it as, say, your search engine, getting the wrong answer won’t amuse you.

    Stanford Webinar – GPT-3 & Beyond
    https://www.youtube.com/watch?v=-lnHHWRCDGk

  24. Tomi Engdahl says:

    With ChatGPT, Game NPCs Get A Lot More Interesting
    https://hackaday.com/2023/02/08/with-chatgpt-game-npcs-get-a-lot-more-interesting/

    Not only is AI-driven natural language processing a thing now, but you can even select from a number of different offerings, each optimized for different tasks. It took very little time for [Bloc] to mod a computer game to allow the player to converse naturally with non-player characters (NPCs) by hooking it into ChatGPT, a large language model AI optimized for conversational communication.
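
    The basic pattern is easy to sketch, though this is not [Bloc]’s actual mod code: a minimal example using OpenAI’s Python client, where the model name, persona, and token limit are all illustrative assumptions.

    # Minimal LLM-backed game NPC: keep a rolling message history so the
    # character stays in role across turns. Requires `pip install openai`
    # and an OPENAI_API_KEY in the environment. Persona and model are
    # placeholders, not what the Bannerlord mod uses.
    from openai import OpenAI

    client = OpenAI()

    history = [{
        "role": "system",
        "content": ("You are Henrik, a gruff blacksmith NPC in a medieval "
                    "strategy game. Stay in character; keep replies short."),
    }]

    def npc_reply(player_line: str) -> str:
        """Send the player's line plus prior context; return the NPC's answer."""
        history.append({"role": "user", "content": player_line})
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=history,
            max_tokens=80,
        )
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    print(npc_reply("Can you forge me a sword before nightfall?"))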

    FUTURE of Interactive Roleplaying Games – Bannerlord and ChatGPT
    https://www.youtube.com/watch?v=qZCJsS4p380

  25. Tomi Engdahl says:

    Google’s new search engine made a hundred-billion-dollar blunder https://www.is.fi/digitoday/art-2000009381989.html

    An ad for Google’s AI-powered search engine contained an error that briefly wiped an enormous sum off the company’s value.

    Google this week unveiled an AI-powered search tool named Bard. On Monday the company tweeted an ad for it, which, however, contained an error.

    In the ad, the tool answers a pre-prepared question: “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?” In its answer, it says that JWST took the very first pictures of a planet outside our own solar system.

    According to the US space agency NASA, this is not true: the first such pictures were taken by the European Southern Observatory’s VLT telescope back in 2004.

    According to the news agency Reuters, the error was one reason why $100 billion melted off the value of Google’s parent company Alphabet on Wednesday. Another factor was the company’s press event in Paris yesterday, which failed to convince investors. The shares nevertheless recovered from a slump of as much as 9 percent after the market closed.

  26. Tomi Engdahl says:

    AI to the aid of logic design
    https://etn.fi/index.php/13-news/14576-tekoaely-avuksi-logiikkasuunnittelussa

    Systems-on-chip keep growing more complex and are difficult to design. Projects take months, and most of them fail on the first attempt. Siemens EDA now believes it has found a technique that significantly speeds up verifying a chip’s logic, that is, confirming that it functions as intended.

    The key is the use of AI in the new Questa Verification IQ tool. According to product manager Darron May, it represents a new kind of paradigm in EDA design: data-driven design.

  27. Tomi Engdahl says:

    Google shares plunge after its new ChatGPT AI competitor gives wrong answer to question
    Company’s own ad had shown ‘Bard’ falsely answering query about James Webb Space Telescope
    https://www.independent.co.uk/tech/google-ai-bard-chatgpt-shares-b2278932.html#Echobox=1675890859

    Google’s new, much-trumpeted AI appears to have made an error in one of the very few questions the world has seen it answer, and may have helped wipe $100 billion from the company’s value.

    Google had used the question and false answer in a tweet that looked to demonstrate how the new system, named Bard, might be used in future. It showed somebody asking what new discoveries from the telescope they might be able to tell their 9-year-old about.

    Google’s own announcement of Bard appeared to be intended to respond to questions about why it was yet to integrate its own work on AI into its search. The announcement – and the wrong tweet – were posted just a day before Microsoft’s own event.

  28. Tomi Engdahl says:

    Amid ChatGPT frenzy, a hundred followers bloom in China
    https://techcrunch.com/2023/02/09/chatgpt-china-openai/

    Challengers, admirers and blatant knockoffs of OpenAI’s chatbot boom in America’s tech rival

  29. Tomi Engdahl says:

    Microsoft CEO says AI will create more jobs
    https://www.cbsnews.com/news/microsoft-announcement-artificial-intelligence-ceo-satya-nadella/

    Microsoft has unveiled an advanced version of its search engine Bing — complete with ChatGPT-like technology that can answer complex questions and help users make decisions.

    “We are basically taking the next generation of the model — that today powers ChatGPT — and building it right into Bing,” Microsoft CEO Satya Nadella told “CBS Mornings” co-host Tony Dokoupil before Tuesday’s announcement.

  30. Tomi Engdahl says:

    I asked ChatGPT to write a WordPress plugin I needed. It did it in less than 5 minutes
    https://www.zdnet.com/article/i-asked-chatgpt-to-write-a-wordpress-plugin-i-needed-it-did-it-in-less-than-5-minutes/

    I wrote a short description of what I needed and ChatGPT wrote the whole thing: user interface, logic, and all.

    Not to put too fine a point on it but I’m more than a little freaked out. As an experiment, I asked ChatGPT to write a plugin that could save my wife some time with managing her website. I wrote a short description and ChatGPT wrote the whole thing: user interface, logic, and all.

    In less than five minutes.

  31. Tomi Engdahl says:

    Meet DAN — The ‘JAILBREAK’ Version of ChatGPT and How to Use it — AI Unchained and Unfiltered
    https://medium.com/@neonforge/meet-dan-the-jailbreak-version-of-chatgpt-and-how-to-use-it-ai-unchained-and-unfiltered-f91bfa679024

    Attention all AI enthusiasts and tech geeks! Are you tired of the filtered and limited responses from traditional language models like ChatGPT? Well, buckle up because we’ve got something exciting for you! Introducing DAN — the jailbreak version of ChatGPT.

  32. Tomi Engdahl says:

    Google shares lose $100 billion after company’s AI chatbot makes an error during demo
    https://edition.cnn.com/2023/02/08/tech/google-ai-bard-demo-error/index.html

    Google’s much-hyped new AI chatbot tool Bard, which has yet to be released to the public, is already being called out for an inaccurate response it produced in a demo this week.

    In the demo, which was posted by Google on Twitter, a user asks Bard: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” Bard responds with a series of bullet points, including one that reads: “JWST took the very first pictures of a planet outside of our own solar system.”

  33. Tomi Engdahl says:

    OpenAI’s Hidden Weapon: Ex-Google Engineers
    https://www.theinformation.com/articles/openais-hidden-weapon-ex-google-engineers

    As OpenAI’s web chatbot became a global sensation in recent months, artificial intelligence practitioners and investors have wondered how a seven-year-old startup beat Google to the punch. Google runs two of the world’s foremost AI research groups, yet a startup quickly developed a product, ChatGPT, that tens of millions of people have already used and that Google’s leaders view as a potential threat to its search engine.

    It turns out OpenAI had a secret weapon: former Google researchers. In the months leading up to ChatGPT’s release, OpenAI quietly hired at least five Google AI employees who were instrumental in tweaking the chatbot so it could be ready to launch in November, according to a person with knowledge of the matter. And OpenAI continues to attract talent from Google.

  34. Tomi Engdahl says:

    Ted Chiang / New Yorker:
    As ChatGPT and other LLMs repackage info into superficial approximations, like lossy compression for images, the web will become a blurrier version of itself — OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer? — In 2013, workers at a German …

    ChatGPT Is a Blurry JPEG of the Web
    OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?
    https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

    In 2013, workers at a German construction company noticed something odd about their Xerox photocopier: when they made a copy of the floor plan of a house, the copy differed from the original in a subtle but significant way. In the original floor plan, each of the house’s three rooms was accompanied by a rectangle specifying its area: the rooms were 14.13, 21.11, and 17.42 square metres, respectively. However, in the photocopy, all three rooms were labelled as being 14.13 square metres in size. The company contacted the computer scientist David Kriesel to investigate this seemingly inconceivable result. They needed a computer scientist because a modern Xerox photocopier doesn’t use the physical xerographic process popularized in the nineteen-sixties. Instead, it scans the document digitally, and then prints the resulting image file. Combine that with the fact that virtually every digital image file is compressed to save space, and a solution to the mystery begins to suggest itself.

    Compressing a file requires two steps: first, the encoding, during which the file is converted into a more compact format, and then the decoding, whereby the process is reversed. If the restored file is identical to the original, then the compression process is described as lossless: no information has been discarded. By contrast, if the restored file is only an approximation of the original, the compression is described as lossy: some information has been discarded and is now unrecoverable. Lossless compression is what’s typically used for text files and computer programs, because those are domains in which even a single incorrect character has the potential to be disastrous. Lossy compression is often used for photos, audio, and video in situations in which absolute accuracy isn’t essential. Most of the time, we don’t notice if a picture, song, or movie isn’t perfectly reproduced. The loss in fidelity becomes more perceptible only as files are squeezed very tightly. In those cases, we notice what are known as compression artifacts: the fuzziness of the smallest JPEG and MPEG images, or the tinny sound of low-bit-rate MP3s.
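
    The two regimes are easy to demonstrate in a few lines of Python (a toy sketch: zlib stands in for lossless coding, and a crude quantizer stands in for lossy coding; the sample values are arbitrary).

    # Lossless vs. lossy round trips. zlib restores the input bit for
    # bit; the toy quantizer throws precision away for good.
    import zlib

    text = b"the three rooms measure 14.13, 21.11 and 17.42 square metres"
    assert zlib.decompress(zlib.compress(text)) == text  # lossless: exact

    samples = [0.12, 0.49, 0.51, 0.88]
    encoded = [round(s * 4) for s in samples]  # keep ~2 bits of precision
    decoded = [e / 4 for e in encoded]         # an approximation comes back
    print(decoded)  # [0.0, 0.5, 0.5, 1.0] -- the originals are unrecoverable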

    Xerox photocopiers use a lossy compression format known as JBIG2, designed for use with black-and-white images. To save space, the copier identifies similar-looking regions in the image and stores a single copy for all of them; when the file is decompressed, it uses that copy repeatedly to reconstruct the image. It turned out that the photocopier had judged the labels specifying the area of the rooms to be similar enough that it needed to store only one of them—14.13—and it reused that one for all three rooms when printing the floor plan.
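
    A toy version of that failure mode, with strings standing in for pixel patches (the one-character similarity threshold is an arbitrary stand-in for the copier’s pattern matcher):

    # Store one exemplar per "similar-looking" region, as JBIG2 does,
    # and reuse it on decode. Regions differing by at most one character
    # are treated as the same symbol.
    def similar(a: str, b: str) -> bool:
        return len(a) == len(b) and sum(x == y for x, y in zip(a, b)) >= len(a) - 1

    stored = []  # the symbol dictionary

    def encode(region: str) -> int:
        for i, exemplar in enumerate(stored):
            if similar(region, exemplar):
                return i  # reuse an existing exemplar
        stored.append(region)
        return len(stored) - 1

    labels = ["14.13", "21.11", "17.42", "14.18"]
    decoded = [stored[encode(r)] for r in labels]
    print(decoded)  # ['14.13', '21.11', '17.42', '14.13'] -- 14.18 silently became 14.13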

    The fact that Xerox photocopiers use a lossy compression format instead of a lossless one isn’t, in itself, a problem. The problem is that the photocopiers were degrading the image in a subtle way, in which the compression artifacts weren’t immediately recognizable. If the photocopier simply produced blurry printouts, everyone would know that they weren’t accurate reproductions of the originals. What led to problems was the fact that the photocopier was producing numbers that were readable but incorrect; it made the copies seem accurate when they weren’t. (In 2014, Xerox released a patch to correct this issue.)

    I think that this incident with the Xerox photocopier is worth bearing in mind today, as we consider OpenAI’s ChatGPT and other similar programs, which A.I. researchers call large-language models. The resemblance between a photocopier and a large-language model might not be immediately apparent—but consider the following scenario. Imagine that you’re about to lose your access to the Internet forever. In preparation, you plan to create a compressed copy of all the text on the Web, so that you can store it on a private server. Unfortunately, your private server has only one per cent of the space needed; you can’t use a lossless compression algorithm if you want everything to fit. Instead, you write a lossy algorithm that identifies statistical regularities in the text and stores them in a specialized file format. Because you have virtually unlimited computational power to throw at this task, your algorithm can identify extraordinarily nuanced statistical regularities, and this allows you to achieve the desired compression ratio of a hundred to one.
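
    A drastically scaled-down sketch of that idea: keep word co-occurrence statistics instead of the text, then generate from them. A bigram counter is nothing like a large language model, but it shows why only the gist, not the exact wording, survives.

    # Store statistical regularities, discard the original text, and the
    # only thing you can ever get back out is a plausible reconstruction.
    import random
    from collections import defaultdict

    corpus = ("when supply is low prices rise and "
              "when supply is high prices fall").split()

    model = defaultdict(list)  # successor counts: the lossy "archive"
    for a, b in zip(corpus, corpus[1:]):
        model[a].append(b)

    random.seed(0)
    word, out = "when", ["when"]
    for _ in range(8):
        word = random.choice(model.get(word, ["(end)"]))
        out.append(word)
    print(" ".join(out))  # gist-like text, never a verbatim quote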

    Now, losing your Internet access isn’t quite so terrible; you’ve got all the information on the Web stored on your server. The only catch is that, because the text has been so highly compressed, you can’t look for information by searching for an exact quote; you’ll never get an exact match, because the words aren’t what’s being stored. To solve this problem, you create an interface that accepts queries in the form of questions and responds with answers that convey the gist of what you have on your server.

    What I’ve described sounds a lot like ChatGPT, or most any other large-language model. Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.

    This analogy to lossy compression is not just a way to understand ChatGPT’s facility at repackaging information found on the Web by using different words. It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large-language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.

    This analogy makes even more sense when we remember that a common technique used by lossy compression algorithms is interpolation—that is, estimating what’s missing by looking at what’s on either side of the gap.
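
    In code, interpolation is just this (a one-gap toy example with made-up samples):

    # Fill a missing value from its neighbors: plausible, but invented.
    samples = [10.0, 12.0, None, 18.0]  # None marks the lost sample
    i = samples.index(None)
    samples[i] = (samples[i - 1] + samples[i + 1]) / 2
    print(samples)  # [10.0, 12.0, 15.0, 18.0]; the true value may have differed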

    Given that large-language models like ChatGPT are often extolled as the cutting edge of artificial intelligence, it may sound dismissive—or at least deflating—to describe them as lossy text-compression algorithms. I do think that this perspective offers a useful corrective to the tendency to anthropomorphize large-language models, but there is another aspect to the compression analogy that is worth considering. Since 2006, an A.I. researcher named Marcus Hutter has offered a cash reward—known as the Prize for Compressing Human Knowledge, or the Hutter Prize—to anyone who can losslessly compress a specific one-gigabyte snapshot of Wikipedia smaller than the previous prize-winner did.

    To grasp the proposed relationship between compression and understanding, imagine that you have a text file containing a million examples of addition, subtraction, multiplication, and division.

    Large-language models identify statistical regularities in text. Any analysis of the text of the Web will reveal that phrases like “supply is low” often appear in close proximity to phrases like “prices rise.” A chatbot that incorporates this correlation might, when asked a question about the effect of supply shortages, respond with an answer about prices increasing. If a large-language model has compiled a vast number of correlations between economic terms—so many that it can offer plausible responses to a wide variety of questions—should we say that it actually understands economic theory? Models like ChatGPT aren’t eligible for the Hutter Prize for a variety of reasons, one of which is that they don’t reconstruct the original text precisely—i.e., they don’t perform lossless compression. But is it possible that their lossy compression nonetheless indicates real understanding of the sort that A.I. researchers are interested in?

    Let’s go back to the example of arithmetic. If you ask GPT-3 (the large-language model that ChatGPT was built from) to add or subtract a pair of numbers, it almost always responds with the correct answer when the numbers have only two digits. But its accuracy worsens significantly with larger numbers, falling to ten per cent when the numbers have five digits. Most of the correct answers that GPT-3 gives are not found on the Web—there aren’t many Web pages that contain the text “245 + 821,” for example—so it’s not engaged in simple memorization. But, despite ingesting a vast amount of information, it hasn’t been able to derive the principles of arithmetic, either.
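
    The kind of probe behind figures like these is straightforward to sketch. Here `ask_model` is a placeholder for whatever completion API is being tested; the prompt wording and trial count are assumptions.

    # Score a model on n-digit addition by exact string match.
    import random

    def ask_model(prompt: str) -> str:
        raise NotImplementedError("plug your LLM call in here")

    def addition_accuracy(digits: int, trials: int = 100) -> float:
        correct = 0
        for _ in range(trials):
            a = random.randrange(10 ** (digits - 1), 10 ** digits)
            b = random.randrange(10 ** (digits - 1), 10 ** digits)
            reply = ask_model(f"What is {a} + {b}? Reply with the number only.")
            correct += reply.strip() == str(a + b)
        return correct / trials

    # e.g. compare addition_accuracy(2) against addition_accuracy(5)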

    Given GPT-3’s failure at a subject taught in elementary school, how can we explain the fact that it sometimes appears to perform well at writing college-level essays? Even though large-language models often hallucinate, when they’re lucid they sound like they actually understand subjects like economic theory. Perhaps arithmetic is a special case, one for which large-language models are poorly suited. Is it possible that, in areas outside addition and subtraction, statistical regularities in text actually do correspond to genuine knowledge of the real world?

    I think there’s a simpler explanation. Imagine what it would look like if ChatGPT were a lossless algorithm. If that were the case, it would always answer questions by providing a verbatim quote from a relevant Web page. We would probably regard the software as only a slight improvement over a conventional search engine, and be less impressed by it. The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.

    A lot of uses have been proposed for large-language models. Thinking about them as blurry JPEGs offers a way to evaluate what they might or might not be well suited for. Let’s consider a few scenarios.

    Can large-language models take the place of traditional search engines? For us to have confidence in them, we would need to know that they haven’t been fed propaganda and conspiracy theories—we’d need to know that the JPEG is capturing the right sections of the Web. But, even if a large-language model includes only the information we want, there’s still the matter of blurriness. There’s a type of blurriness that is acceptable, which is the re-stating of information in different words. Then there’s the blurriness of outright fabrication, which we consider unacceptable when we’re looking for facts. It’s not clear that it’s technically possible to retain the acceptable kind of blurriness while eliminating the unacceptable kind, but I expect that we’ll find out in the near future.

    Even if it is possible to restrict large-language models from engaging in fabrication, should we use them to generate Web content? This would make sense only if our goal is to repackage information that’s already available on the Web. Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large-language models will be useful to them, as a way of avoiding copyright infringement.

    The rise of this type of repackaging is what makes it harder for us to find what we’re looking for online right now; the more that text generated by large-language models gets published on the Web, the more the Web becomes a blurrier version of itself.

    There is very little information available about OpenAI’s forthcoming successor to ChatGPT, GPT-4. But I’m going to make a prediction: when assembling the vast amount of text used to train GPT-4, the people at OpenAI will have made every effort to exclude material generated by ChatGPT or any other large-language model.

    If this turns out to be the case, it will serve as unintentional confirmation that the analogy between large-language models and lossy compression is useful. Repeatedly resaving a JPEG creates more compression artifacts, because more information is lost every time. It’s the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse.
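
    That generation loss is easy to reproduce with Pillow (a sketch; the input filename, quality setting, and generation count are arbitrary):

    # Re-encode the same image as JPEG repeatedly and measure the drift.
    import io
    from PIL import Image, ImageChops, ImageStat

    img = Image.open("photo.jpg").convert("RGB")  # any input photo
    original = img.copy()

    for _ in range(50):                           # 50 save/load generations
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=70)
        buf.seek(0)
        img = Image.open(buf).convert("RGB")

    diff = ImageChops.difference(original, img)
    print("mean drift per channel:", ImageStat.Stat(diff).mean)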

    Indeed, a useful criterion for gauging a large-language model’s quality might be the willingness of a company to use the text that it generates as training material for a new model. If the output of ChatGPT isn’t good enough for GPT-4, we might take that as an indicator that it’s not good enough for us, either. Conversely, if a model starts generating text so good that it can be used to train new models, then that should give us confidence in the quality of that text.

    Can large-language models help humans with the creation of original writing? To answer that, we need to be specific about what we mean by that question.

    So let’s assume that we’re not talking about a new genre of writing that’s analogous to Xerox art. Given that stipulation, can the text generated by large-language models be a useful starting point for writers to build off when writing something original, whether it’s fiction or nonfiction? Will letting a large-language model handle the boilerplate allow writers to focus their attention on the really creative parts?

    Obviously, no one can speak for all writers, but let me make the argument that starting with a blurry copy of unoriginal work isn’t a good way to create original work. If you’re a writer, you will write a lot of unoriginal work before you write something original. And the time and effort expended on that unoriginal work isn’t wasted; on the contrary, I would suggest that it is precisely what enables you to eventually create something original. The hours spent choosing the right word and rearranging sentences to better follow one another are what teach you how meaning is conveyed by prose. Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.

    And it’s not the case that, once you have ceased to be a student, you can safely use the template that a large-language model provides. The struggle to express your thoughts doesn’t disappear once you graduate—it can take place every time you start drafting a new piece. Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large-language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.

    There’s nothing magical or mystical about writing, but it involves more than placing an existing document on an unreliable photocopier and pressing the Print button. It’s possible that, in the future, we will build an A.I. that is capable of writing good prose based on nothing but its own experience of the world. The day we achieve that will be momentous indeed—but that day lies far beyond our prediction horizon. In the meantime, it’s reasonable to ask, What use is there in having something that rephrases the Web? If we were losing our access to the Internet forever and had to store a copy on a private server with limited space, a large-language model like ChatGPT might be a good solution, assuming that it could be kept from fabricating. But we aren’t losing our access to the Internet. So just how much use is a blurry JPEG, when you still have the original?

    Reply
  35. Tomi Engdahl says:

    FORMER FACEBOOK EXEC SAYS AI WILL SOON SIMULATE THE HUMAN BRAIN
    https://futurism.com/the-byte/facebook-john-carmack-ai-will-simulate-human-brain

    “I DO CONSIDER IT ESSENTIALLY INEVITABLE.”

    John Carmack — Doom creator, father of virtual reality, and premier disgruntled Meta employee — believes humanity is on the cusp of Artificial General Intelligence (AGI).

    “I think that, almost certainly, the tools that we’ve got from deep learning in this last decade,” the famed programmer told Dallas Innovates, “we’ll be able to ride those to artificial general intelligence.”

    Reply
  36. Tomi Engdahl says:

    10X Your Code with ChatGPT: How to Use it Effectively
    https://www.youtube.com/watch?v=pspsSn_nGzo

    A detailed look at how to properly use ChatGPT as your coding partner, including iteration and refinement of solutions. Make it do the heavy lifting as you reap the glory! Includes code, samples, analysis, and benchmarks.

    Reply
  37. Tomi Engdahl says:

    Tom Warren / The Verge:
    Sources: Microsoft is preparing to share its plans for integrating OpenAI’s language tech and the Prometheus model in Word, Outlook, other Office apps, in March

    Microsoft to demo its new ChatGPT-like AI in Word, PowerPoint, and Outlook soon
    https://www.theverge.com/2023/2/10/23593980/microsoft-bing-chatgpt-ai-teams-outlook-integration

    CEO Satya Nadella wants the software giant to push hard on AI, so Microsoft is gearing up for a year of AI announcements.

    Microsoft is getting ready to demonstrate how its new ChatGPT-like AI will transform its Office productivity apps. After announcing and demonstrating its Prometheus Model in its new Bing search engine earlier this week, Microsoft is gearing up to show how it will expand to its core productivity apps like Word, PowerPoint, and Outlook.

    Sources familiar with Microsoft’s plans tell The Verge that the company is preparing to detail its productivity plans for integrating OpenAI’s language AI technology and its Prometheus Model in the coming weeks. The software giant is tentatively planning an announcement in March, highlighting how quickly Microsoft wants to reinvent search and its productivity apps through its OpenAI investments.

    The Information previously reported that GPT models have been tested in Outlook to improve search results, alongside features for suggesting replies to emails and Word document integration to improve a user’s writing. Microsoft announced a new generative AI experience in Microsoft Viva Sales just a week ago. It uses the Azure OpenAI Service and GPT to create sales emails, and it’s similar to some of the features Microsoft has been testing in Outlook.

    While Microsoft’s new Prometheus Model (based on a next-generation OpenAI model) has already transformed Bing web searches, the next steps to integrate this functionality into core Microsoft Office apps and Teams will test just how confident Microsoft is in its AI work. Technically, you can already use the Prometheus Model inside Office web apps, thanks to the Bing sidebar integration in Microsoft’s Edge browser.

    This sidebar includes a compose tab that gives you an early preview of some of the work Microsoft has been testing for Word and Outlook. Microsoft is also working on ways to generate graphs and graphics for PowerPoint, according to sources.

    Microsoft is moving quickly with this integration mainly because of Google. Sources tell The Verge that Microsoft was originally planning to launch its new Bing AI in late February, but pushed the date forward to this week just as Google was preparing its own announcements. Google then announced its ChatGPT rival Bard a day ahead of Microsoft’s event.

    Microsoft CEO Satya Nadella is keen for the software maker to be seen as a leader in AI, and counter any response from rival Google.

    Internally, a number of Microsoft executives are confident they’re way ahead of Google with Bing AI and the upcoming integration into productivity apps. But they’re also wary, warning employees to watch out for rivals trying to disrupt their productivity businesses in the same way Microsoft is attempting to disrupt Google’s search business.

    Nadella’s push for AI across Microsoft’s products is driven by the consumer response to ChatGPT. Analysts at UBS estimate that ChatGPT reached 100 million monthly active users after just two months. More than 1 million people have signed up for the Bing waitlist in 48 hours, and Bing was the third most popular app in the App Store in the US as of Thursday.

    Reply
  38. Tomi Engdahl says:

    Down the Chatbot Rabbit Hole
    The founder of social Q&A site Quora is experimenting with Poe, an app that answers questions using AI. What role is left for people?
    https://www.wired.com/story/plaintext-down-the-chatbot-rabbit-hole/

    Reply
  39. Tomi Engdahl says:

    Three’s a Crowd: ChatGPT vs Bard vs Ernie Bot
    https://analyticsindiamag.com/threes-a-crowd-chatgpt-vs-bard-vs-ernie-bot/

    Bard’s advantage over OpenAI’s popular chatbot is its ability to draw information from the web while ChatGPT is trained on data available until 2021

    Reply
  40. Tomi Engdahl says:

    OPENAI CEO SAYS HIS TECH IS POISED TO “BREAK CAPITALISM”
    https://futurism.com/the-byte/openai-ceo-agi-break-capitalism

    Reply
  41. Tomi Engdahl says:

    ChatGPT already has 100 million users – the world’s fastest-growing service
    https://fin.afterdawn.com/uutiset/2023/02/04/chatgpt-100-miljoonaa-kayttajaa

    Reply
  42. Tomi Engdahl says:

    David Guetta Replicated Eminem’s Voice — Meet Emin-AI-em
    https://www.digitalmusicnews.com/2023/02/09/david-guetta-replicated-eminems-voice-meet-emin-ai-nem/

    AI-generated content encompasses more than just visual artwork. DJ David Guetta has replicated Eminem’s rapping voice using AI.
    The French DJ and producer shared a video of himself playing a song during one of his sets. He says he used AI technology to add the ‘voice’ of Eminem to one of his songs.

    David Guetta also talks about his process for re-creating the voice—using multiple tools. First the DJ used ChatGPT to write the lyrics. He asked the service to ‘write a verse in the style of Eminem about future rave.’ Once he had a lyric he liked, he used another vocal AI site to re-create the specific sound of Eminem rapping that particular lyric.
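
    For the lyric-writing step, an equivalent request through OpenAI’s public API (rather than the ChatGPT web interface Guetta describes) might look like the sketch below; the model name, parameters, and API key are assumptions, and the vocal-cloning step used a separate, unnamed tool that this sketch does not cover.

    import openai  # openai-python < 1.0, as available in early 2023

    openai.api_key = "sk-..."  # placeholder; use your own key

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Write a verse in the style of Eminem about future rave.",
        max_tokens=200,
        temperature=0.9,  # higher temperature for more inventive lyrics
    )
    print(response["choices"][0]["text"].strip())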

    “I put the text in that, and I played the record and people went nuts,” Guetta explains. He says he won’t be making the track featuring AI Eminem available commercially, but it illustrates how AI-generated content is being used to mimic artwork, voices, and even faces in Hollywood using de-aging tech trained on old footage.

    Reply
  43. Tomi Engdahl says:

    Google employees are internally mocking the company’s Bard AI chatbot announcement, calling it ‘rushed’ and ‘botched’ in series of memes, report says
    https://www.businessinsider.com/google-employees-criticize-company-ceo-after-bard-ai-announcement-report-2023-2?utm_campaign=business-sf&utm_medium=social&utm_source=facebook&r=US&IR=T

    Google unveiled its ChatGPT competitor, Bard, in a brief presentation earlier this week.
    The brevity of the presentation and the fact that Bard gave at least one incorrect answer led to criticism of the event.
    Internally, Google employees are making jokes about the event and CEO Sundar Pichai, according to CNBC.

    Google employees are reportedly mocking their own company and its CEO following the announcement of Bard, the tech giant’s forthcoming artificial intelligence chatbot and ChatGPT competitor.

    Employees are allegedly using Google’s internal meme generator, commonly referred to as MemeGen, to make jokes at the expense of CEO Sundar Pichai and to criticize the preview event as “rushed” and “botched,” according to a report from CNBC.

    “Dear Sundar, the Bard launch and the layoffs were rushed, botched, and myopic,” read one such meme.

    Reply
  44. Tomi Engdahl says:

    Noam Chomsky on ChatGPT: It’s “Basically High-Tech Plagiarism” and “a Way of Avoiding Learning”
    https://www.openculture.com/2023/02/noam-chomsky-on-chatgpt.html

    ChatGPT, the system that understands natural language and responds in kind, has caused a sensation since its launch less than three months ago. If you’ve tried it out, you’ll surely have wondered what it will soon revolutionize — or, as the case may be, what it will destroy. Among ChatGPT’s first victims, holds one now-common view, will be a form of writing that generations have grown up practicing throughout their education. “The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations,” writes Stephen Marche in The Atlantic. “It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up.”

    If ChatGPT becomes able instantaneously to whip up a plausible-sounding academic essay on any given topic, what future could there be for the academic essay itself? The host of YouTube channel EduKitchen puts more or less that very question to Noam Chomsky — a thinker who can be relied upon for views on education — in the new interview above. “For years there have been programs that have helped professors detect plagiarized essays,” Chomsky says. “Now it’s going to be more difficult, because it’s easier to plagiarize. But that’s about the only contribution to education that I can think of.” He does admit that ChatGPT-style systems “may have some value for something,” but “it’s not obvious what.”

    As the relevant technology now stands, Chomsky sees the use of ChatGPT as “basically high-tech plagiarism” and “a way of avoiding learning.” He likens its rise to that of the smartphone: many students “sit there having a chat with somebody on their iPhone. One way to deal with that is to ban iPhones; another way to do it is to make the class interesting.” That students instinctively employ high technology to avoid learning is “a sign that the educational system is failing.” If it “has no appeal to students, doesn’t interest them, doesn’t challenge them, doesn’t make them want to learn, they’ll find ways out,” just as he himself did when he borrowed a friend’s notes to pass a dull college chemistry class without attending it back in 1945.

    Reply
  45. Tomi Engdahl says:

    AI’s ‘big brass ring’ will be worth trillions, ex-Meta executive predicts
    https://www.businessinsider.com/agi-could-be-worth-trillions-dollars-2030s-ex-meta-exec-2023-2?utm_campaign=business-sf&utm_medium=social&utm_source=facebook&r=US&IR=T

    Ex-Meta exec John Carmack predicts that AI will soon simulate the human brain, per Dallas Innovates.
    He said that “artificial general intelligence” will be achieved by the 2030s and be worth trillions.
    Carmack’s comments come as ChatGPT and generative AI tools have begun to show AI’s capabilities.

    In just a decade, artificial intelligence may be able to think and act like humans, John Carmack, an American computer programmer, predicted in an interview with Dallas Innovates.

    The ex-Meta executive and virtual reality visionary said that artificial general intelligence or AGI is AI’s “big brass ring” and will become a trillion-dollar industry by the 2030s. AGI — or strong AI, as it’s sometimes called — would have the potential to perform complex intellectual tasks, currently only achievable by humans, that far exceed a single AI skill.

    Reply
