3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

6,260 Comments

  1. Tomi Engdahl says:

    A Shiny New Programming Language
    Mirror is an entirely new concept in programming — just supply function signatures and some input-output examples, and AI does the rest.
    https://www.hackster.io/news/a-shiny-new-programming-language-e41357506c46

    Anyone with even a passing interest in machine learning understands how these algorithms learn to perform their intended function by example. This has proven to be a very powerful technique. It has made it possible to build algorithms that can reliably recognize complex objects in images, for example, which would be virtually impossible with standard rules-based programming techniques.

    Austin Z. Henley of Carnegie Mellon University has been exploring the idea of using a set of examples when running inferences against a trained model as well. Specifically, Henley has designed a proof-of-concept programming-by-example language powered by a large language model (LLM). The basic idea of this unique language, called Mirror, is that the programmer provides just a few input-output examples for each function, and the LLM writes and executes the actual code behind the scenes.

    To use Mirror, a user first defines the signature (name, input and output parameter data types) of a function. This is followed by one or more example calls of the function, with appropriate input and output parameters supplied. Functions are then called and chained together as needed to accomplish the programmer’s goal.

    On the backend, a traditional recursive descent parser makes a pass over the program before the result is sent to an OpenAI LLM, along with a prompt instructing it to generate JavaScript code that satisfies the constraints of the examples. The generated code is shown to the programmer, giving them the opportunity to provide more examples if things do not look quite right.
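
    The post does not show Mirror’s implementation, but the pipeline described above is easy to sketch. Below is a minimal, hypothetical Python version of the backend step: build a prompt from the user’s signature and example calls, then ask an OpenAI model for JavaScript that satisfies them. The prompt wording, model choice, and helper names are assumptions for illustration, not Henley’s actual code.

    ```python
    # Hypothetical sketch of an LLM-backed programming-by-example backend,
    # in the spirit of Mirror (illustrative only; not Henley's actual code).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_function(signature: str, examples: list[tuple[str, str]]) -> str:
        """Ask the model for JavaScript satisfying the signature and examples."""
        example_lines = "\n".join(f"{call} == {expected}" for call, expected in examples)
        prompt = (
            "Write a JavaScript function with this signature:\n"
            f"{signature}\n"
            "It must satisfy all of these input-output examples:\n"
            f"{example_lines}\n"
            "Return only the code, with no explanation."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed; the post just says "an OpenAI LLM"
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # A function specified purely by its signature and two example calls.
    code = generate_function(
        "isPalindrome(s: string) -> boolean",
        [('isPalindrome("racecar")', "true"), ('isPalindrome("mirror")', "false")],
    )
    print(code)  # shown to the programmer, who adds examples if it looks wrong
    ```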

    If you would like to take a crack at programming with Mirror for yourself, a browser-based playground has been made available in the GitHub repository. Just supply your own OpenAI API key, and you are good to go.

    The concept behind Mirror is very interesting and could ultimately lead to new, more efficient ways of working with computers. But for now, it is in the very early stages and is better suited to experimentation than to real work.

    https://austinhenley.com/blog/mirrorlang.html

  2. Tomi Engdahl says:

    In Other News: Nvidia Fixes Critical Flaw, Chinese Linux Backdoor, New Details in WhatsApp-NSO Lawsuit

    Noteworthy stories that might have slipped under the radar: Nvidia fixes vulnerability with rare ‘critical’ severity, Chinese APT’s first Linux backdoor, new details emerge from the WhatsApp-NSO lawsuit.

    https://www.securityweek.com/in-other-news-nvidia-fixes-critical-flaw-chinese-linux-backdoor-new-details-in-whatsapp-nso-lawsuit/

    Google says AI-enhanced fuzzing is paying off

    Google says AI-enhanced fuzzing has proven to be highly effective in identifying vulnerabilities in open source projects. Over two dozen vulnerabilities were discovered recently, including an OpenSSL issue that Google believes wouldn’t have been found with existing fuzz targets written by humans.

    Leveling Up Fuzzing: Finding more vulnerabilities with AI
    https://security.googleblog.com/2024/11/leveling-up-fuzzing-finding-more.html

    Recently, OSS-Fuzz reported 26 new vulnerabilities to open source project maintainers, including one vulnerability in the critical OpenSSL library (CVE-2024-9143) that underpins much of internet infrastructure. The reports themselves aren’t unusual—we’ve reported and helped maintainers fix over 11,000 vulnerabilities in the 8 years of the project.

    But these particular vulnerabilities represent a milestone for automated vulnerability finding: each was found with AI, using AI-generated and enhanced fuzz targets. The OpenSSL CVE is one of the first vulnerabilities in a critical piece of software that was discovered by LLMs, adding another real-world example to a recent Google discovery of an exploitable stack buffer underflow in the widely used database engine SQLite.

    This blog post discusses the results and lessons over a year and a half of work to bring AI-powered fuzzing to this point, both in introducing AI into fuzz target generation and expanding this to simulate a developer’s workflow. These efforts continue our explorations of how AI can transform vulnerability discovery and strengthen the arsenal of defenders everywhere.

    The story so far

    In August 2023, the OSS-Fuzz team announced AI-Powered Fuzzing, describing our effort to leverage large language models (LLMs) to improve fuzzing coverage and find more vulnerabilities automatically—before malicious attackers could exploit them. Our approach was to use the coding abilities of an LLM to generate more fuzz targets, which are similar to unit tests that exercise relevant functionality to search for vulnerabilities.
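
    For readers who have never written one, a fuzz target is a small entry point that feeds fuzzer-generated bytes into the code under test. Here is a minimal hand-written example using Atheris, Google’s coverage-guided fuzzer for Python (OSS-Fuzz mostly builds C/C++ targets with libFuzzer, but the shape is the same); the choice of the json module as the code under test is just for illustration.

    ```python
    # Minimal hand-written fuzz target using Atheris, Google's coverage-guided
    # fuzzer for Python. The fuzzer mutates the input bytes, guided by coverage,
    # and reports any input that triggers an unexpected exception or crash.
    import sys
    import atheris

    with atheris.instrument_imports():
        import json  # stand-in for the library under test

    def TestOneInput(data: bytes) -> None:
        fdp = atheris.FuzzedDataProvider(data)
        text = fdp.ConsumeUnicodeNoSurrogates(4096)
        try:
            json.loads(text)
        except ValueError:
            pass  # malformed input is expected; anything else is a finding

    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
    ```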

    The ideal solution would be to completely automate the manual process of developing a fuzz target end to end:

    1. Drafting an initial fuzz target.

    2. Fixing any compilation issues that arise.

    3. Running the fuzz target to see how it performs, and fixing any obvious mistakes causing runtime issues.

    4. Running the corrected fuzz target for a longer period of time, and triaging any crashes to determine the root cause.

    5. Fixing vulnerabilities.

    In August 2023, we covered our efforts to use an LLM to handle the first two steps. We were able to use an iterative process to generate a fuzz target with a simple prompt including hardcoded examples and compilation errors.

    In January 2024, we open sourced the framework that we were building to enable an LLM to generate fuzz targets. By that point, LLMs were reliably generating targets that exercised more interesting code coverage across 160 projects. But there was still a long tail of projects where we couldn’t get a single working AI-generated fuzz target.

    To address this, we’ve been improving the first two steps, as well as implementing steps 3 and 4.
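
    The framework itself is open source, but as a rough illustration of what automating steps 1 and 2 looks like, here is a hypothetical Python sketch of the iterative loop: ask an LLM for a fuzz target, try to build it, and feed any compiler errors back into the prompt. The model name, prompt wording, and build command are assumptions, not the framework’s actual code.

    ```python
    # Hypothetical sketch of steps 1 and 2: draft a fuzz target with an LLM and
    # iterate, feeding compiler errors back into the prompt until it builds.
    import subprocess
    from openai import OpenAI

    client = OpenAI()

    def ask_llm(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def generate_fuzz_target(api_description: str, max_attempts: int = 5) -> str | None:
        prompt = ("Write a libFuzzer fuzz target in C++ exercising this API. "
                  f"Return only code.\n{api_description}")
        for _ in range(max_attempts):
            target = ask_llm(prompt)  # step 1: draft a target
            with open("target.cc", "w") as f:
                f.write(target)
            build = subprocess.run(
                ["clang++", "-fsanitize=fuzzer,address", "target.cc", "-o", "target"],
                capture_output=True, text=True,
            )
            if build.returncode == 0:
                return target  # steps 3-4 would now run it and triage crashes
            # Step 2: show the model its compilation errors and try again.
            prompt += f"\n\nThat attempt failed to compile:\n{build.stderr}\nFix it."
        return None
    ```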

  3. Tomi Engdahl says:

    Asa Fitch / Wall Street Journal:
    A look at the rise of super clusters that use ~100K of Nvidia’s GPUs for training giant AI models and the new engineering challenges arising from such clusters

    AI’s Future and Nvidia’s Fortunes Ride on the Race to Pack More Chips Into One Place
    Musk’s xAI and Meta are among firms building clusters of advanced chips
    https://www.wsj.com/tech/ai/nvidia-chips-ai-race-96d21d09?st=SnHx7P&reflink=desktopwebshare_permalink

    Tech titans have a new way to measure who is winning in the race for AI supremacy: who can put the most Nvidia chips in one place.

    Companies that run big data centers have been vying for the past two years to buy up the artificial-intelligence processors that are Nvidia’s specialty. Now some of the most ambitious players are escalating those efforts by building so-called super clusters of computer servers that cost billions of dollars and contain unprecedented numbers of Nvidia’s most advanced chips.

    Elon Musk’s xAI built a supercomputer it calls Colossus—with 100,000 of Nvidia’s Hopper AI chips—in Memphis in a matter of months. Meta Chief Executive Mark Zuckerberg said last month that his company was already training its most advanced AI models with a conglomeration of chips he called “bigger than anything I’ve seen reported for what others are doing.”

    A year ago, clusters of tens of thousands of chips were seen as very large. OpenAI used around 10,000 of Nvidia’s chips to train the version of ChatGPT it launched in late 2022, UBS analysts estimate.

    Such a push toward larger super clusters could help Nvidia sustain a growth trajectory that has seen it rise from about $7 billion of quarterly revenue two years ago to more than $35 billion today. That jump has helped make it the world’s most-valuable publicly listed company, with a market capitalization of more than $3.5 trillion.

    Installing many chips in one place, linked together by superfast networking cables, has so far produced larger AI models at faster rates. But there are questions about whether ever-bigger super clusters will continue to translate into smarter chatbots and more convincing image-generation tools.

    The continuation of the AI boom for Nvidia also depends in great measure on how the largest clusters of chips pan out. The trend promises not only a wave of buying for its chips but also growing demand for Nvidia’s networking equipment, which is fast becoming a significant business, bringing in billions of dollars of sales each year.

    Nvidia CEO Jensen Huang said that while the biggest clusters for training giant AI models now top out at around 100,000 of Nvidia’s current chips, “the next generation starts at around 100,000 Blackwells. And so that gives you a sense of where the industry is moving.”

    The stakes are high for companies such as xAI and Meta, which are racing against each other for computing-power bragging rights but are also gambling that having more of Nvidia’s chips, called GPUs, will translate into commensurately better AI models.

    “There is no evidence that this will scale to a million chips and a $100 billion system, but there is the observation that they have scaled extremely well all the way from just dozens of chips to 100,000,” said Dylan Patel, the chief analyst at SemiAnalysis, a research firm.

    In addition to xAI and Meta, OpenAI and Microsoft have been working to build up significant new computing facilities for AI. Google is building massive data centers to house chips that drive its AI strategy.

  4. Tomi Engdahl says:

    Robert Triggs / Android Authority:
    A look at the shortcomings of Google’s Tensor SoC for Pixel devices, with four generations failing to impress in key performance and power efficiency metrics

    Has Google’s Tensor project failed?
    Would future Pixels be better off returning to Snapdragon?
    https://www.androidauthority.com/has-google-tensor-failed-3499240/

    Thanks to an unprecedented leak from Google’s gchips division, we already know virtually everything about the processors coming in Google’s next-gen Pixel 10 and Pixel 11 flagships. While Google’s Tensor project has scored a few wins, most notably powering some unique AI and photography features, four generations of chips have failed to impress in key performance and power efficiency metrics, leaving Google’s consumers behind the curve. Unfortunately, according to Google’s internal projections, the Tensor G5 and Tensor G6 will continue to chart a virtually unchanged course.

    Google’s next two generations of smartphones will continue to leave power users wanting in many regards. Would Google be better off going back to Snapdragon? Has Google’s Tensor project already failed? I’m afraid there are increasingly strong arguments to be made here.

  5. Tomi Engdahl says:

    Benedict Evans:
    An overview of macro tech trends for 2025, focusing on generative AI, LLMs, scaling challenges with training ever bigger AI models, the capex surge, and more

    Presentations
    Every year, I produce a big presentation exploring macro and strategic trends in the tech industry.
    For 2025, ‘AI eats the world’.
    https://www.ben-evans.com/presentations

  6. Tomi Engdahl says:

    Reuters:
    Utilities, regulators, and researchers in six countries say the power demand surge caused by AI and data centers is being met in the near-term by fossil fuels

    Data-center reliance on fossil fuels may delay clean-energy transition
    https://www.reuters.com/technology/artificial-intelligence/how-ai-cloud-computing-may-delay-transition-clean-energy-2024-11-21/

    Data centers increase reliance on fossil fuels, delaying transition to clean energy
    Utilities add gas plants, delay retirements to meet data-center demand
    Data companies’ green pledges fall short, rely on existing clean power

    A spike in electricity demand from the world’s big data providers is raising a worrying possibility for the world’s climate: a near-term surge in fossil-fuel use.
    Utilities, power regulators and researchers in a half-dozen countries told Reuters the surprising growth in power demand driven by the rise of artificial intelligence and cloud computing is being met in the near-term by fossil fuels like natural gas, and even coal, because the pace of clean-energy deployments is moving too slowly to keep up.

  7. Tomi Engdahl says:

    Franz Lidz / New York Times:
    How researchers used AI and drones to find 303 previously uncharted Nazca Lines in Peru in six months, almost doubling the number that had been mapped by 2020

    Hundreds More Nazca Lines Emerge in Peru’s Desert
    With drones and A.I., researchers managed to double the number of mysterious geoglyphs in a matter of months.
    https://www.nytimes.com/2024/11/23/science/nazca-lines-peru-ai.html?unlocked_article_code=1.cU4.ZAeM.g7COqiKRoCSZ&smid=url-share

    “It took nearly a century to discover a total of 430 figurative geoglyphs,” said Masato Sakai, an archaeologist at Yamagata University in Japan who has studied the lines for 30 years.

    Dr. Sakai is the lead author of a survey published in September in the Proceedings of the National Academy of Sciences that found 303 previously uncharted geoglyphs in only six months, almost doubling the number that had been mapped as of 2020. The researchers used artificial intelligence in tandem with low-flying drones that covered some 243 square miles. Their conclusions also provided insights into the symbols’ enigmatic purpose.

    The newly found images — an average of 30 feet across — could have been detected in past flyovers if the pilots had known where to look. But the pampa is so immense that “finding the needle in the haystack becomes practically impossible without the help of automation,” said Marcus Freitag, an IBM physicist who collaborated on the project.

    To identify the new geoglyphs, which are smaller than earlier examples, the investigators used an application capable of discerning the outlines from aerial photographs, no matter how faint. “The A.I. was able to eliminate 98 percent of the imagery,” Dr. Freitag said. “Human experts now only need to confirm or reject plausible candidates.”

    That 2 percent flagged by A.I. amounted to 47,410 potential sites from the desert plain. Dr. Sakai’s team then pored over the high-resolution photos and narrowed the field to 1,309 candidates. “These were then categorized into three groups based on their potential, allowing us to predict their likelihood of being actual geoglyphs before visiting,” Dr. Sakai said.

    Two years ago the researchers started scouting the more promising locations by foot and with drones, ultimately “ground-truthing” 303 geoglyphs. Among the depictions were plants, people, snakes, monkeys, cats, parrots, llamas and a grisly tableau of a knife-wielding orca severing a human head. Of the new figures, 244 were suggested by the technology, while the other 59 were identified during the fieldwork unaided by A.I.
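
    The figures above also let you reconstruct the detection funnel with some back-of-the-envelope arithmetic, assuming the 47,410 candidates were exactly the 2 percent that survived AI screening:

    ```python
    # Back-of-the-envelope funnel from the figures quoted above.
    after_ai = 47_410                   # candidate sites flagged by the model (2%)
    total_screened = after_ai / 0.02    # patches examined before AI filtering
    shortlist = 1_309                   # kept after expert review of the photos
    ai_suggested, field_only = 244, 59  # 244 + 59 == 303 confirmed geoglyphs

    print(f"~{total_screened:,.0f} patches screened")             # ~2,370,500
    print(f"shortlist rate: {shortlist / after_ai:.1%}")          # ~2.8% of candidates
    print(f"AI hit rate so far: {ai_suggested / shortlist:.1%}")  # ~18.6%
    ```

    Since only the more promising locations have been visited so far, that final hit rate is a lower bound rather than a verdict on the remaining candidates.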

