3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

6,040 Comments

  1. Tomi Engdahl says:

    Kylie Robison / The Verge:
    A look at the promise of AI agents as a way for companies to monetize models; PitchBook: AI agent startups raised $8.2B over the last 12 months, up 81.4% YoY

    Agents are the future AI companies promise — and desperately need
    / And they’re betting you’ll pay for it.
    https://www.theverge.com/2024/10/10/24266333/ai-agents-assistants-openai-google-deepmind-bots

    Humans have automated tasks for centuries. Now, AI companies see a path to profit in harnessing our love of efficiency, and they’ve got a name for their solution: agents.

    AI agents are autonomous programs that perform tasks, make decisions, and interact with environments with little human input, and they’re the focus of every major company working on AI today. Microsoft has “Copilots” designed to help businesses automate things like customer service and administrative tasks. Google Cloud CEO Thomas Kurian recently outlined a pitch for six different AI productivity agents, and Google DeepMind just poached OpenAI’s co-lead on its AI video product, Sora, to work on developing a simulation for training AI agents. Anthropic released a feature for its AI chatbot, Claude, that will let anyone create their own “AI assistant.” OpenAI includes agents as level 2 in its 5-level approach to reach AGI, or human-level artificial intelligence.

  2. Tomi Engdahl says:

    Carl Franzen / VentureBeat:
    Chinese researchers debut AI model Pyramid Flow, which makes a five-second, 384p video in 56 seconds, built using a new technique called pyramidal flow matching

    New high quality AI video generator Pyramid Flow launches — and it’s fully open source!
    https://venturebeat.com/ai/new-high-quality-ai-video-generator-pyramid-flow-launches-and-its-fully-open-source/

    The number of AI video generation models continues to grow with a new one, Pyramid Flow, launching this week and offering high quality video clips up to 10 seconds in length — quickly, and all open source.

    Developed by a collaboration of researchers from Peking University, Beijing University of Posts and Telecommunications, and Kuaishou Technology — the latter the creator of the well-reviewed proprietary Kling AI video generator — Pyramid Flow leverages a new technique wherein a single AI model generates video in stages, most of them low resolution, saving only a full-res version for the end of its generation process.

    It’s available as raw code for download on Hugging Face and GitHub, and can be run in an inference shell, but this requires the user to download and run the model code on their own machine; a minimal download sketch follows the links below.
    https://twitter.com/reach_vb/status/1844241948233826385

    https://huggingface.co/rain1011/pyramid-flow-sd3
    https://github.com/jy0205/Pyramid-Flow
    https://github.com/jy0205/Pyramid-Flow/blob/main/video_generation_demo.ipynb
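
    For readers who want to try it, the download step looks roughly like the following sketch, assuming the huggingface_hub Python package. The generation calls themselves come from the Pyramid-Flow repository’s demo notebook linked above, so they are not reproduced here.

    # Minimal sketch: fetch the Pyramid Flow weights locally before running the
    # repository's own inference code (video_generation_demo.ipynb).
    from huggingface_hub import snapshot_download

    model_path = snapshot_download(
        "rain1011/pyramid-flow-sd3",     # model card listed above
        local_dir="./pyramid-flow-sd3",  # any local folder with enough disk space
    )
    print("Model weights downloaded to:", model_path)
    # From here, follow the repository's demo notebook, which loads these weights
    # and generates clips (for example, five seconds at 384p) on a local GPU.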

  3. Tomi Engdahl says:

    Lauren Goode / Wired:
    Some developers say OpenAI’s GPT Store is a mixed bag, with revenue sharing reserved for a tiny number of GPT creators in an invite-only pilot program in the US
    https://www.wired.com/story/openai-gpt-store/

  4. Tomi Engdahl says:

    Silicon Valley is debating if AI weapons should be allowed to decide to kill https://tcrn.ch/3NmZpqf

  5. Tomi Engdahl says:

    Schools Are Using AI Surveillance to Catch Students Vaping Inside Bathrooms
    https://futurism.com/the-byte/schools-ai-surveillance

    A Colorado school district is using facial recognition technology to police vaping in bathrooms.

    As the Denver Post reports, the Cheyenne Mountain School District in Colorado Springs already boasts a network of 400 AI-powered facial recognition cameras scattered throughout its school buildings. The school district argues they ensure school safety and facilitate responses to emergency situations. Critics, however, argue that spending district money to put kids under sci-fi-esque surveillance comes with a slew of practical and ethical concerns.

    According to the report, the company behind the facial recognition cameras has also installed smart air sensors, designed to detect whether a kid is vaping or smoking weed.

    The idea is that if the air sensors detect vape or THC-laden smoke in bathrooms, surveillance cameras can then be used to locate and identify the culprit.

    But while safety should always be top of mind for educators, experts have consistently warned that the perceived safety benefits of deploying facial recognition are still pretty murky.

  6. Tomi Engdahl says:

    Why did the Nobel Prize in Physics go to AI researchers this year? Surprising the academic community, the 2024 Nobel Prize in Physics was awarded to AI pioneers John J. Hopfield and Geoffrey E. Hinton for work ultimately still grounded in physics.

    Why the Nobel Prize in Physics Went to AI Research
    Nobel committee recognizes scientists for foundational research in neural networks
    https://spectrum.ieee.org/nobel-prize-in-physics?share_id=8460382&socialux=facebook&utm_campaign=RebelMouse&utm_content=IEEE+Spectrum&utm_medium=social&utm_source=facebook&fbclid=IwZXh0bgNhZW0CMTEAAR12UtOhM46XVWkgsO_LuAD1Qpw24dY4gVKGY-YPBJIxOJHAG3_VWbYWFbg_aem_LHQMBWabkhYeCpI_iAc0gw

    The Nobel Prize Committee for Physics caught the academic community off-guard by handing the 2024 award to John J. Hopfield and Geoffrey E. Hinton for their foundational work in neural networks.

    The pair won the prize for their seminal papers, both published in the 1980s, that described rudimentary neural networks. Though much simpler than the networks used for modern generative AI like ChatGPT or Stable Diffusion, their ideas laid the foundations on which later research built.
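
    To give a flavor of how rudimentary those early networks were, here is a toy sketch of the kind of associative memory Hopfield described in 1982 (an illustration only, not the laureates’ code): binary patterns are stored with a Hebbian weight rule and recalled from a corrupted cue.

    # Toy Hopfield-style associative memory: store two binary patterns with a
    # Hebbian rule, then recover one of them from a corrupted starting state.
    import numpy as np

    patterns = np.array([
        [1, -1, 1, -1, 1, -1, 1, -1],
        [1, 1, 1, 1, -1, -1, -1, -1],
    ])
    n = patterns.shape[1]

    # Hebbian storage: units that are active together get a positive weight.
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    # Recall: start from the first pattern with two bits flipped as "noise".
    state = patterns[0].copy()
    state[[1, 2]] *= -1
    for _ in range(5):
        state = np.sign(W @ state)
        state[state == 0] = 1

    print("Recovered first pattern:", np.array_equal(state, patterns[0]))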

    Even Hopfield and Hinton didn’t believe they’d win, with the latter telling The Associated Press he was “flabbergasted.” After all, AI isn’t what comes to mind when most people think of physics. However, the committee took a broader view, in part because the researchers based their neural networks on “fundamental concepts and methods from physics.”

    “Initially, I was surprised, given it’s the Nobel Prize in Physics, and their work was in AI and machine learning,” says Padhraic Smyth, a distinguished professor at the University of California, Irvine. “But thinking about it a bit more, it was clearer to me why [the Nobel Prize Committee] did this.” He added that physicists in statistical mechanics have “long thought” about systems that display emergent behavior.

    And the connection between neural networks and physics isn’t a one-way street. Machine learning was crucial to the discovery of the Higgs boson, where it sorted the data generated by billions of proton collisions. This year’s Nobel Prize for Chemistry further underscored machine learning’s importance in research, as the award went to a trio of scientists who built an AI model to predict the structures of proteins.

  7. Tomi Engdahl says:

    ByteDance lays off hundreds of TikTok employees in shift to AI content moderation https://tcrn.ch/3U6L8BU

  8. Tomi Engdahl says:

    OpenAI confirms threat actors use ChatGPT to write malware
    https://www.bleepingcomputer.com/news/security/openai-confirms-threat-actors-use-chatgpt-to-write-malware/?fbclid=IwZXh0bgNhZW0CMTEAAR10pJvi-YZuSm-HzK3hsSlFF79hkFw9AVLUaLdlWYBHJTMNlw-Pk6XotyI_aem_317dZ4NZ-dgS3fuzvgdB_A

    OpenAI has disrupted over 20 malicious cyber operations abusing its AI-powered chatbot, ChatGPT, for debugging and developing malware, spreading misinformation, evading detection, and conducting spear-phishing attacks.

    The report, which focuses on operations since the beginning of the year, constitutes the first official confirmation that mainstream generative AI tools are being used to enhance offensive cyber operations.

    The first signs of such activity were reported by Proofpoint in April, which suspected TA547 (aka “Scully Spider”) of deploying an AI-written PowerShell loader for its final payload, the Rhadamanthys info-stealer.

  9. Tomi Engdahl says:

    Gary Marcus / Marcus on AI:
    Apple AI researchers say they found no evidence of formal reasoning in language models and their behavior is better explained by sophisticated pattern matching

    LLMs don’t do formal reasoning – and that is a HUGE problem
    Important new study from Apple
    https://garymarcus.substack.com/p/llms-dont-do-formal-reasoning-and

    A superb new article on LLMs from six AI researchers at Apple who were brave enough to challenge the dominant paradigm has just come out.

    Everyone actively working with AI should read it, or at least this terrific X thread by senior author, Mehrdad Farajtabar, that summarizes what they observed. One key passage:

    “we found no evidence of formal reasoning in language models …. Their behavior is better explained by sophisticated pattern matching—so fragile, in fact, that changing names can alter results by ~10%!”

    One particularly damning result was a new task the Apple team developed, called GSM-NoOp.

    This kind of flaw, in which reasoning fails in light of distracting material, is not new.

    There is just no way you can build reliable agents on this foundation, where changing a word or two in irrelevant ways or adding a few bits of irrelevant info can give you a different answer.

    Another manifestation of the lack of sufficiently abstract, formal reasoning in LLMs is the way in which performance often falls apart as problems are made bigger.

    Performance is ok on small problems, but quickly tails off.

    We can see the same thing in integer arithmetic. A fall-off in accuracy on increasingly large multiplication problems has repeatedly been observed, in both older and newer models. (Compare with a calculator, which would be at 100%.)
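
    The kind of test behind that observation is easy to reproduce. Below is a minimal sketch; ask_llm is a hypothetical placeholder for whatever chat API or local model you use, so this illustrates the methodology rather than any particular benchmark result.

    # Sketch: sample random n-digit multiplications, ask the model, and score
    # exact matches. Accuracy typically drops as n grows; a calculator stays at 100%.
    import random

    def ask_llm(prompt: str) -> str:
        """Hypothetical placeholder: send the prompt to an LLM and return its reply."""
        raise NotImplementedError("wire this up to your model or API of choice")

    def accuracy_for_digits(n_digits: int, trials: int = 50) -> float:
        correct = 0
        for _ in range(trials):
            a = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
            b = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
            reply = ask_llm(f"What is {a} * {b}? Answer with the number only.")
            answer = "".join(ch for ch in reply if ch.isdigit())
            correct += answer == str(a * b)
        return correct / trials

    for n in range(1, 9):
        print(n, "digits:", accuracy_for_digits(n))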

    Elon Musk’s putative robotaxis are likely to suffer from a similar affliction: they may well work safely in the most common situations, but are also likely to struggle to reason abstractly enough in some circumstances.

    The refuge of the LLM fan is always to write off any individual error. The patterns we see here, in the new Apple study, and the other recent work on math and planning (which fits with many previous studies), and even the anecdotal data on chess, are too broad and systematic for that.

    The inability of standard neural network architectures to reliably extrapolate — and reason formally — has been the central theme of my own work back to 1998 and 2001, and has been a theme in all of my challenges to deep learning, going back to 2012, and LLMs in 2019.

    I strongly believe the current results are robust. After a quarter century of “real soon now” promissory notes, I would want a lot more than hand-waving to be convinced that an LLM-compatible solution is in reach.

  10. Tomi Engdahl says:

    Dario Amodei:
    An essay on what “powerful AI” might look like and how it could positively transform the world in biology, neuroscience, economic development, work, and more

    Machines of Loving Grace
    How AI Could Transform the World for the Better
    https://darioamodei.com/machines-of-loving-grace

    October 2024

    I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.

    In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong.

    First, however, I wanted to briefly explain why I and Anthropic haven’t talked that much about powerful AI’s upsides, and why we’ll probably continue, overall, to talk a lot about risks. In particular, I’ve made this choice out of a desire to:

    Maximize leverage. The basic development of AI technology and many (not all) of its benefits seems inevitable (unless the risks derail everything) and is fundamentally driven by powerful market forces. On the other hand, the risks are not predetermined and our actions can greatly change their likelihood.
    Avoid perception of propaganda. AI companies talking about all the amazing benefits of AI can come off like propagandists, or as if they’re attempting to distract from downsides. I also think that as a matter of principle it’s bad for your soul to spend too much of your time “talking your book”.
    Avoid grandiosity. I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation. I think it’s dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.
    Avoid “sci-fi” baggage. Although I think most people underestimate the upside of powerful AI, the small community of people who do discuss radical AI futures often does so in an excessively “sci-fi” tone (featuring e.g. uploaded minds, space exploration, or general cyberpunk vibes). I think this causes people to take the claims less seriously, and to imbue them with a sort of unreality. To be clear, the issue isn’t whether the technologies described are possible or likely (the main essay discusses this in granular detail)—it’s more that the “vibe” connotatively smuggles in a bunch of cultural baggage and unstated assumptions about what kind of future is desirable, how various societal issues will play out, etc. The result often ends up reading like a fantasy for a narrow subculture, while being off-putting to most people.

    Yet despite all of the concerns above, I really do think it’s important to discuss what a good world with powerful AI could look like, while doing our best to avoid the above pitfalls.

    The list of positive applications of powerful AI is extremely long (and includes robotics, manufacturing, energy, and much more), but I’m going to focus on a small number of areas that seem to me to have the greatest potential to directly improve the quality of human life. The five categories I am most excited about are:

    Biology and physical health
    Neuroscience and mental health
    Economic development and poverty
    Peace and governance
    Work and meaning

    My predictions are going to be radical as judged by most standards (other than sci-fi “singularity” visions), but I mean them earnestly and sincerely.

  11. Tomi Engdahl says:

    Christopher Mims / Wall Street Journal:
    A profile of, and interview with, Meta’s Yann LeCun, who says today’s AI models aren’t intelligent and warnings about AI’s existential peril are “complete BS”

    This AI Pioneer Thinks AI Is Dumber Than a Cat
    Yann LeCun, an NYU professor and senior researcher at Meta Platforms, says warnings about the technology’s existential peril are ‘complete B.S.’
    https://www.wsj.com/tech/ai/yann-lecun-ai-meta-aa59e2f5?st=SYmYBM&reflink=desktopwebshare_permalink

    Yann LeCun helped give birth to today’s artificial-intelligence boom. But he thinks many experts are exaggerating its power and peril, and he wants people to know it.

    While a chorus of prominent technologists tell us that we are close to having computers that surpass human intelligence—and may even supplant it—LeCun has aggressively carved out a place as the AI boom’s best-credentialed skeptic.

    On social media, in speeches and at debates, the college professor and Meta Platforms AI guru has sparred with the boosters and Cassandras who talk up generative AI’s superhuman potential, from Elon Musk to two of LeCun’s fellow pioneers, who share with him the unofficial title of “godfather” of the field. They include Geoffrey Hinton, a friend of nearly 40 years who on Tuesday was awarded a Nobel Prize in physics, and who has warned repeatedly about AI’s existential threats.

    LeCun thinks that today’s AI models, while useful, are far from rivaling the intelligence of our pets, let alone us. When I ask whether we should be afraid that AIs will soon grow so powerful that they pose a hazard to us, he quips: “You’re going to have to pardon my French, but that’s complete B.S.”

    In 2019, LeCun won the A.M. Turing Award, the highest prize in computer science, along with Hinton and Yoshua Bengio. The award, which led to the trio being dubbed AI godfathers, honored them for work foundational to neural networks, the multilayered systems that underlie many of today’s most powerful AI systems, from OpenAI’s chatbots to self-driving cars.

    LeCun jousts with rivals and friends alike. He got into a nasty argument with Musk on X this spring over the nature of scientific research, after the billionaire posted in promotion of his own artificial-intelligence firm.

    LeCun also has publicly disagreed with Hinton and Bengio over their repeated warnings that AI is a danger to humanity.

    Bengio says he agrees with LeCun on many topics, but they diverge over whether companies can be trusted with making sure that future superhuman AIs aren’t either used maliciously by humans, or develop malicious intent of their own.

    “I hope he is right, but I don’t think we should leave it to the competition between companies and the profit motive alone to protect the public and democracy,” says Bengio. “That is why I think we need governments involved.”

    LeCun thinks AI is a powerful tool. Throughout our interview, he cites many examples of how AI has become enormously important at Meta, and has driven its scale and revenue to the point that it’s now valued at around $1.5 trillion. AI is integral to everything from real-time translation to content moderation at Meta, which in addition to its Fundamental AI Research team, known as FAIR, has a product-focused AI group called GenAI that is pursuing ever-better versions of its large language models.

    “The impact on Meta has been really enormous,” he says.

    At the same time, he is convinced that today’s AIs aren’t, in any meaningful sense, intelligent—and that many others in the field, especially at AI startups, are ready to extrapolate its recent development in ways that he finds ridiculous.

    If LeCun’s views are right, it spells trouble for some of today’s hottest startups, not to mention the tech giants pouring tens of billions of dollars into AI. Many of them are banking on the idea that today’s large language model-based AIs, like those from OpenAI, are on the near-term path to creating so-called “artificial general intelligence,” or AGI, that broadly exceeds human-level intelligence.

    OpenAI’s Sam Altman last month said we could have AGI within “a few thousand days.” Elon Musk has said it could happen by 2026.

    LeCun says such talk is likely premature.

    LeCun thinks real artificial general intelligence is a worthy goal—one that Meta, too, is working on.

    “In the future, when people will talk to their AI system, to their smart glasses or whatever else, we need those AI systems to basically have human-level characteristics, and really have common sense, and really behave like a human assistant,” he says.

    But creating an AI this capable could easily take decades, he says—and today’s dominant approach won’t get us there.

    The generative-AI boom has been powered by large language models and similar systems that train on oceans of data to mimic human expression. As each generation of models has become much more powerful, some experts have concluded that simply pouring more chips and data into developing future AIs will make them ever more capable, ultimately matching or exceeding human intelligence. This is the logic behind much of the massive investment in building ever-greater pools of specialized chips to train AIs.

    LeCun thinks that the problem with today’s AI systems is how they are designed, not their scale. No matter how many GPUs tech giants cram into data centers around the world, he says, today’s AIs aren’t going to get us artificial general intelligence.

    His bet is that research on AIs that work in a fundamentally different way will set us on a path to human-level intelligence.

    The large language models, or LLMs, used for ChatGPT and other bots might someday have only a small role in systems with common sense and humanlike abilities, built using an array of other techniques and algorithms.

    Today’s models are really just predicting the next word in a text, he says. But they’re so good at this that they fool us. And because of their enormous memory capacity, they can seem to be reasoning, when in fact they’re merely regurgitating information they’ve already been trained on.

    “We are used to the idea that people or entities that can express themselves, or manipulate language, are smart—but that’s not true,” says LeCun. “You can manipulate language and not be smart, and that’s basically what LLMs are demonstrating.”

  12. Tomi Engdahl says:

    OpenAI Says Iranian Hackers Used ChatGPT to Plan ICS Attacks
    https://www.securityweek.com/openai-says-iranian-hackers-used-chatgpt-to-plan-ics-attacks/

    OpenAI has disrupted 20 cyber and influence operations this year, including the activities of Iranian and Chinese state-sponsored hackers.

    A report published this week by OpenAI reveals that the artificial intelligence company has disrupted more than 20 cyber and covert influence operations since the beginning of the year, including the activities of Iranian and Chinese state-sponsored hackers.

    The report highlights the activities of three threat groups that have abused ChatGPT to conduct cyberattacks.

    One of these threat actors is CyberAv3ngers, a group linked to Iran’s Islamic Revolutionary Guard Corps (IRGC) that has made headlines this year for its attacks on the water sector.

    The group has targeted industrial control systems (ICS) at a water utility in Ireland (the attack left people without water for two days), a water utility in Pennsylvania, and other water facilities in the United States.

    These attacks did not involve sophisticated hacking and instead relied on the fact that many organizations leave ICS exposed to the internet and protected with easy to obtain default credentials.

    According to OpenAI, accounts associated with CyberAv3ngers used ChatGPT to conduct reconnaissance, but also to help them with vulnerability exploitation, detection evasion, and post-compromise activity.

    Many of the reconnaissance activities are related to conducting attacks on programmable logic controllers (PLCs) and other ICS.

    Specifically, the hackers asked ChatGPT for industrial ports and protocols that can connect to the internet; industrial routers and PLCs commonly used in Jordan, as well as electricity companies and contractors in this country; and default passwords for Tridium Niagara devices and Hirschmann RS industrial routers.

  13. Tomi Engdahl says:

    OpenAI unveils benchmarking tool to measure AI agents’ machine-learning engineering performance
    https://techxplore.com/news/2024-10-openai-unveils-benchmarking-tool-ai.html

    A team of AI researchers at OpenAI has developed a tool for AI developers to measure AI agents’ machine-learning engineering capabilities. The team has written a paper describing the benchmark tool, which it has named MLE-bench, and posted it on the arXiv preprint server. The team has also posted a web page on the company site introducing the new tool, which is open-source.

    MLE-bench
    Evaluating Machine Learning Agents on Machine Learning Engineering
    https://openai.com/index/mle-bench/

  14. Tomi Engdahl says:

    Assessing Developer Productivity When Using AI Coding Assistants
    https://hackaday.com/2024/10/15/assessing-developer-productivity-when-using-ai-coding-assistants/

    We have all seen the advertisements and glossy flyers for coding assistants like GitHub Copilot, which promised to use ‘AI’ to help you write code and complete programming tasks faster than ever, yet how much of that has worked out since Copilot’s introduction in 2021? According to a recent report by code analysis firm Uplevel, there are no significant benefits, while GitHub Copilot also introduced 41% more bugs. Commentary from development teams suggests that while the coding assistant makes for faster code writing, debugging or maintaining that code afterwards is often not realistic.

    None of this should be a surprise, of course, as this mirrors what we already found when covering this topic back in 2021. With GitHub Copilot and kin being effectively Large Language Models (LLMs) that are trained on codebases, they are best considered to be massive autocomplete systems targeting code. Much like with autocomplete on e.g. a smartphone, the experience is often jarring and full of errors. Perhaps the most fair assessment of GitHub Copilot is that it can be helpful when writing repetitive, braindead code that requires very little understanding of the code to get right, while it’s bound to helpfully carry in a bundle of sticks and a dead rodent like an overly enthusiastic dog when all you wanted was for it to grab that spanner.

    Until Copilot and kin develop actual intelligence, it would seem that software developer jobs are still perfectly safe from being taken over by our robotic overlords.

    Devs gaining little (if anything) from AI coding assistants
    https://www.cio.com/article/3540579/devs-gaining-little-if-anything-from-ai-coding-assistants.html

    Code analysis firm sees no major benefits from AI dev tool when measuring key programming metrics, though others report incremental gains from coding copilots with emphasis on code review.

    Coding assistants have been an obvious early use case in the generative AI gold rush, but promised productivity improvements are falling short of the mark — if they exist at all.

    Many developers say AI coding assistants make them more productive, but a recent study set forth to measure their output and found no significant gains. Use of GitHub Copilot also introduced 41% more bugs, according to the study from Uplevel, a company providing insights from coding and collaboration data.

    The study measured pull request (PR) cycle time, or the time to merge code into a repository, and PR throughput, the number of pull requests merged. It found no significant improvements for developers using Copilot.
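
    For reference, both metrics are simple to compute once pull-request timestamps are available. The sketch below uses hypothetical example records; in practice the data would come from a Git hosting provider’s API.

    # PR cycle time = time from opening a pull request to merging it;
    # PR throughput = number of pull requests merged in the measured period.
    from datetime import datetime, timedelta
    from statistics import median

    prs = [  # hypothetical example records
        {"opened": datetime(2024, 9, 2, 9, 0), "merged": datetime(2024, 9, 3, 15, 30)},
        {"opened": datetime(2024, 9, 4, 11, 0), "merged": datetime(2024, 9, 4, 16, 45)},
        {"opened": datetime(2024, 9, 5, 8, 0), "merged": None},  # still open, excluded
    ]

    merged = [pr for pr in prs if pr["merged"] is not None]
    cycle_hours = [(pr["merged"] - pr["opened"]) / timedelta(hours=1) for pr in merged]

    print("PR throughput:", len(merged), "merged PRs")
    print(f"Median PR cycle time: {median(cycle_hours):.1f} hours")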

    In addition to measuring productivity, the Uplevel study looked at factors in developer burnout, and it found that GitHub Copilot hasn’t helped there, either. The amount of working time spent outside of standard hours decreased for both the control group and the test group using the coding tool, but it decreased more when the developers weren’t using Copilot.

    Uplevel’s study was driven by curiosity over claims of major productivity gains as AI coding assistants become ubiquitous, says Matt Hoffman, product manager and data analyst at the company. A GitHub survey published in August found that 97% of software engineers, developers, and programmers reported using AI coding assistants.

    “We’ve seen different studies of people saying, ‘This is really helpful for our productivity,’” he says. “We’ve also seen some people saying, ‘You know what? I’m kind of having to be more of a [code] reviewer.’”

    A representative of GitHub Copilot didn’t have a comment on the study, but pointed to a recent study saying developers were able to write code 55% faster using the coding assistant.

    “Our team’s hypothesis was that we thought that PR cycle time would decrease,” Hoffman says. “We thought that they would be able to write more code, and we actually thought that defect rate might go down because you’re using these gen AI tools to help you review your code before you even get it out there.”

    “We heard that people are ending up being more reviewers for this code than in the past, and you might have some false faith that the code is doing what you expect it to,” Hoffman adds. “You just have to keep a close eye on what is being generated; does it do the thing that you’re expecting it to do?”

    In the trenches, development teams are reporting mixed results.

    Developers at Gehtsoft USA, a custom software development firm, haven’t seen major productivity gains with coding assistants based on large language model (LLM) AIs, says Ivan Gekht, CEO of the company. Gehtsoft has been testing coding assistants in sandbox environments but has not used them with customer projects yet.

    “Using LLMs to improve your productivity requires both the LLM to be competitive with an actual human in its abilities and the actual user to know how to use the LLM most efficiently,” he says. “The LLM does not possess critical thinking, self-awareness, or the ability to think.”

    There’s a difference between writing a few lines of code and full-fledged software development, Gekht adds. Coding is like writing a sentence, while development is like writing a novel, he suggests.

    “Software development is 90% brain function — understanding the requirements, designing the system, and considering limitations and restrictions,” he adds. “Converting all this knowledge and understanding into actual code is a simpler part of the job.”

    Like the Uplevel study, Gekht also sees AI assistants introducing errors in code. Each new iteration of the AI-generated code ends up being less consistent when different parts of the code are developed using different prompts.

    “It becomes increasingly more challenging to understand and debug the AI-generated code, and troubleshooting becomes so resource-intensive that it is easier to rewrite the code from scratch than fix it,” he says.

    Seeing gains

    The coding assistant experience at Innovative Solutions, a cloud services provider, is much different. The company is seeing significant productivity gains using coding assistants like Claude Dev and GitHub Copilot, says Travis Rehl, the CTO there. The company also uses a homegrown Anthropic integration to monitor pull requests and validate code quality.

    Rehl has seen developer productivity increase by two to three times, based on the speed of developer tickets completed, the turnaround time on customer deliverables, and the quality of tickets, measured by the number of bugs in code.

    Rehl’s team recently completed a customer project in 24 hours by using coding assistants, when the same project would have taken them about 30 days in the past, he says.

    Still, some of the hype about coding assistants — such as suggestions they will replace entire dev teams rather than simply supplement or reshape them — is unrealistic, Rehl says. Coding assistants can be used to quickly sub out code or optimize code paths by reworking segments of code, he adds.

    “Expectations around coding assistants should be tempered because they won’t write all the code or even all the correct code on the first attempt,” he says. “It is an iterative process that, when used correctly, enables a developer to increase the speed of their coding by two or three times.”

  16. Tomi Engdahl says:

    https://etn.fi/index.php/opinion/16617-tekoaely-pitaeae-ottaa-kaeyttoeoen-kaikilla-tasoilla-organisaatiossa

    AI is no longer just a buzzword; it is already transforming practically every industry. Business leaders see how AI could be their next success factor. Investments in AI technology are growing, and according to the Lenovo-commissioned IDC report CIO PlayBook 2024 – All About Smarter AI, AI investments are expected to grow by 61 percent this year compared with last year. But how many companies manage to put those investments in the right place in the organization, asks Lenovo’s Jari Hatermaa.

    The IDC report states unambiguously that AI is a necessity. It is a new reality that almost everyone is aware of and is investing significant resources in. Too many companies, however, lack the infrastructure needed for those investments to pay off and create added value for the business.

  17. Tomi Engdahl says:

    Startup can identify deepfake video in real time
    Reality Defender says it has a solution for AI-generated video scams.
    https://www.wired.com/story/real-time-video-deepfake-scams-reality-defender/

    Ren is a product manager at Reality Defender, a company that makes tools to combat AI disinformation. During a video call last week, I watched him use some viral GitHub code and a single photo to generate a simplistic deepfake of Elon Musk that maps onto his own face. This digital impersonation was to demonstrate how the startup’s new AI detection tool could work. As Ren masqueraded as Musk on our video chat, still frames from the call were actively sent over to Reality Defender’s custom model for analysis, and the company’s widget on the screen alerted me to the fact that I was likely looking at an AI-generated deepfake and not the real Elon.

    Sure, I never really thought we were on a video call with Musk, and the demonstration was built specifically to make Reality Defender’s early-stage tech look impressive, but the problem is entirely genuine. Real-time video deepfakes are a growing threat for governments, businesses, and individuals. Recently, the chairman of the US Senate Committee on Foreign Relations mistakenly took a video call with someone pretending to be a Ukrainian official. An international engineering company lost millions of dollars earlier in 2024 when one employee was tricked by a deepfake video call. Also, romance scams targeting everyday individuals have employed similar techniques.

    “It’s probably only a matter of months before we’re going to start seeing an explosion of deepfake video, face-to-face fraud,” says Ben Colman, CEO and cofounder at Reality Defender. When it comes to video calls, especially in high-stakes situations, seeing should not be believing.

    The startup is laser-focused on partnering with business and government clients to help thwart AI-powered deepfakes.

    “We’re very pro-AI,” he says. “We think that 99.999 percent of use cases are transformational—for medicine, for productivity, for creativity—but in these kinds of very, very small edge cases the risks are disproportionately bad.”

    Academic researchers are also looking into different approaches to address this specific kind of deepfake threat. “These systems are becoming so sophisticated to create deepfakes. We need even less data now,” says Govind Mittal, a computer science PhD candidate at New York University. “If I have 10 pictures of me on Instagram, somebody can take that. They can target normal people.”

  18. Tomi Engdahl says:

    AI systems process private data, and that is why they must not remember too much
    Professor of data science Antti Honkela studies privacy-preserving AI. When AI models are trained on sensitive data, it must be ensured that the model does not memorize and reveal that information.
    https://www.helsinki.fi/fi/matemaattis-luonnontieteellinen-tiedekunta/ajankohtaista/tekoalyjarjestelmat-kasittelevat-yksityisia-tietoja-ja-siksi-ne-eivat-saa-muistaa-liikaa

  19. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Mistral releases Les Ministraux AI models in 3B and 8B sizes with 128K context windows, aimed at on-device translation, internet-less smart assistants, and more

    Mistral releases new AI models optimized for laptops and phones
    https://techcrunch.com/2024/10/16/mistral-releases-new-ai-models-optimized-for-edge-devices/

    French AI startup Mistral has released its first generative AI models designed to be run on edge devices, like laptops and phones.

    Two Les Ministraux models are available — Ministral 3B and Ministral 8B — both of which have a context window of 128,000 tokens, meaning they can ingest roughly the length of a 50-page book.

    Ministral 8B is available for download as of today — albeit strictly for research purposes. Mistral is requiring devs and companies interested in Ministral 8B or Ministral 3B self-deployment setups to contact it for a commercial license.

    Otherwise, devs can use Ministral 3B and Ministral 8B through Mistral’s cloud platform, La Platforme, and other clouds with which the startup has partnered in the coming weeks. Ministral 8B costs 10 cents per million output/input tokens (~750,000 words), while Ministral 3B costs 4 cents per million output/input tokens.
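
    At those prices, a back-of-envelope cost estimate is straightforward. The sketch below uses the quoted per-million-token rates and assumes roughly 1.33 tokens per word, derived from the article’s figure of about 750,000 words per million tokens.

    # Rough cost estimate from the quoted prices: $0.10 per million tokens for
    # Ministral 8B and $0.04 per million tokens for Ministral 3B (input + output).
    PRICE_PER_MILLION_TOKENS = {"ministral-8b": 0.10, "ministral-3b": 0.04}
    TOKENS_PER_WORD = 1_000_000 / 750_000  # ~1.33, assumption based on the article

    def estimate_cost(model: str, words_in: int, words_out: int) -> float:
        tokens = (words_in + words_out) * TOKENS_PER_WORD
        return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS[model]

    # Example: summarizing a 10,000-word document into a 500-word summary.
    for model in PRICE_PER_MILLION_TOKENS:
        print(model, f"${estimate_cost(model, 10_000, 500):.4f}")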

    There’s been a trend toward small models, lately, which are cheaper and quicker to train, fine-tune, and run than their larger counterparts. Google continues to add models to its Gemma small model family, while Microsoft offers its Phi collection of models. In the most recent refresh of its Llama suite, Meta introduced several small models optimized for edge hardware.

  20. Tomi Engdahl says:

    Adobe unveils AI video generator trained on licensed content
    New text-to-video tool focuses on video pros, made with content owner permission.
    https://arstechnica.com/ai/2024/10/adobe-unveils-ai-video-generator-trained-on-licensed-content/

  21. Tomi Engdahl says:

    Top “Reasoning” AI Models Can be Brought to Their Knees With an Extremely Simple Trick
    Cutting-edge AI models may be a whole lot stupider than we thought.
    https://futurism.com/reasoning-ai-models-simple-trick

    A team of Apple researchers has found that advanced AI models’ alleged ability to “reason” isn’t all it’s cracked up to be.

    “Reasoning” is a word that’s thrown around a lot in the AI industry these days, especially when it comes to marketing the advancements of frontier AI language models. OpenAI, for example, recently dropped its “Strawberry” model, which the company billed as its next-level large language model (LLM) capable of advanced reasoning. (That model has since been renamed just “o1.”)

    But marketing aside, there’s no agreed-upon industrywide definition for what reasoning exactly means. Like other AI industry terms, for example, “consciousness” or “intelligence,” reasoning is a slippery, ephemeral concept; as it stands, AI reasoning can be chalked up to an LLM’s ability to “think” its way through queries and complex problems in a way that resembles human problem-solving patterns.

  22. Tomi Engdahl says:

    Ashton Kutcher Threatens That Soon, AI Will Spit Out Entire Movies
    https://futurism.com/the-byte/ashton-kutcher-ai-entire-movies

    “You can generate any footage that you want.”

    Ashton Kutcher — who’s no stranger to controversy these days — has an eye-rolling prophecy about the future of filmmaking.

    In the near future, the “That ’70s Show” star predicts, entire movies will be generated with artificial intelligence. Specifically, it’ll be OpenAI’s much-touted video generation tool Sora that’ll be paving the way to this nightmarish future, a prediction informed by his fiddling with a beta version of the tool.

    “You can generate any footage that you want. You can create good 10, 15-second videos that look very real,” Kutcher said during a conversation with former Google CEO Eric Schmidt at the Berggruen Salon in LA, per Variety.

  23. Tomi Engdahl says:

    Apple study exposes deep cracks in LLMs’ “reasoning” capabilities
    Irrelevant red herrings lead to “catastrophic” failure of logical inference.
    https://arstechnica.com/ai/2024/10/llms-cant-perform-genuine-logical-reasoning-apple-researchers-suggest/

  24. Tomi Engdahl says:

    New AI Can Accurately Detect Health Conditions By Just Looking At Your Tongue
    “The color, shape, and thickness of the tongue can reveal a litany of health conditions.”
    https://futurism.com/neoscope/ai-detects-illness-tongue

  25. Tomi Engdahl says:

    Why You Can’t Trust Chatbots—Now More Than Ever
    Even after language models were scaled up, they proved unreliable on simple tasks
    https://spectrum.ieee.org/chatgpt-reliability

    AI chatbots such as ChatGPT and other applications powered by large language models have found widespread use, but are infamously unreliable. A common assumption is that scaling up the models driving these applications will improve their reliability—for instance, by increasing the amount of data they are trained on, or the number of parameters they use to process information. However, more recent and larger versions of these language models have actually become more unreliable, not less, according to a new study.

    Large language models (LLMs) are essentially supercharged versions of the autocomplete feature that smartphones use to predict the rest of a word a person is typing. ChatGPT, perhaps the most well-known LLM-powered chatbot, has passed law school and business school exams, successfully answered interview questions for software-coding jobs, written real estate listings, and developed ad content.

    But LLMs frequently make mistakes.
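
    The “supercharged autocomplete” framing is easy to see in code. The sketch below assumes the transformers and torch packages and uses the small GPT-2 model (not one of the chatbots discussed) to print the model’s top guesses for the next token of a prompt.

    # A causal language model only assigns probabilities to the next token;
    # chat behavior is built by repeatedly sampling from this distribution.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(next_token_logits, dim=-1)
    top = torch.topk(probs, k=5)

    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx))!r:>12}  p={float(p):.3f}")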

  26. Tomi Engdahl says:

    AMD unveils powerful new AI chip to challenge Nvidia
    AMD CEO Lisa Su on the MI325X: “This is the beginning, not the end of the AI race.”
    https://arstechnica.com/ai/2024/10/amd-unveils-powerful-new-ai-chip-to-challenge-nvidia/

  27. Tomi Engdahl says:

    ChatGPT’s new feature is being praised: “the most important tool of the year”
    https://www.tivi.fi/uutiset/chatgptn-uutta-ominaisuutta-ylistetaan-vuoden-tarkein-tyokalu/744dc95e-7bdd-4d41-b213-46c99d881faf

    Last week OpenAI announced its new ChatGPT Canvas feature, which turns the service into a more interactive collaborator rather than an automation tool that does all the work for the user.

    The Tom’s Guide site praises Canvas as the most important new tool of the year. According to the site, Canvas works as a writing and coding editor and resembles Anthropic’s Artifacts feature. Canvas goes further, though, editing text with AI-generated comments. It also includes writing tools familiar from word processors.

    https://www.tomsguide.com/ai/chatgpt/ive-been-testing-chatgpt-canvas-heres-why-i-think-its-the-most-important-ai-tool-of-the-year

