3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

6,744 Comments

  1. Tomi Engdahl says:

    A Shiny New Programming Language
    Mirror is an entirely new concept in programming — just supply function signatures and some input-output examples, and AI does the rest.
    https://www.hackster.io/news/a-shiny-new-programming-language-e41357506c46

    Anyone with even a passing interest in machine learning understands how these algorithms learn to perform their intended function by example. This has proven to be a very powerful technique. It has made it possible to build algorithms that can reliably recognize complex objects in images, for example, which would be virtually impossible with standard rules-based programming techniques.

    Austin Z. Henley of Carnegie Mellon University has been exploring the idea of using a set of examples when running inferences against a trained model as well. Specifically, Henley has been designing a proof of concept programming-by-example programming language that is powered by a large language model (LLM). The basic idea of this unique language, called Mirror, is that the programmer should just provide a few input-output examples for a function, then the LLM should write and execute the actual code behind the scenes.

    To use Mirror, a user first defines the signature (name, input and output parameter data types) of a function. This is followed by one or more example calls of the function, with appropriate input and output parameters supplied. Functions are then called and chained together as needed to accomplish the programmer’s goal.

    On the backend, a traditional recursive descent parser makes a pass before the result is sent to an OpenAI LLM along with a prompt instructing it to generate JavaScript code to complete the functions with code that satisfies the constraints of the examples. The code is shown to the programmer, giving them the opportunity to provide more examples if things do not look quite right.
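The loop Mirror relies on, checking LLM-generated code against the programmer's examples, can be sketched in a few lines of Python (a hypothetical illustration of the technique only; Mirror itself generates JavaScript, and none of these names come from its actual API):

```python
def satisfies_examples(func, examples):
    """Return True if func reproduces every (inputs, expected_output) pair."""
    return all(func(*inputs) == expected for inputs, expected in examples)

# A Mirror-style specification: a signature plus input-output examples.
# Hypothetical function: is_even(x: int) -> bool
examples = [((2,), True), ((3,), False), ((0,), True)]

# Stand-in for LLM-generated code. In Mirror, the model proposes a body
# and the result is checked against the supplied examples.
def candidate(x):
    return x % 2 == 0

print(satisfies_examples(candidate, examples))  # True
```

If a candidate fails the check, the programmer adds a distinguishing example and the model regenerates, which is the feedback loop described above.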

    If you would like to take a crack at programming with Mirror for yourself, a browser-based playground has been made available in the GitHub repository. Just supply your own OpenAI API key, and you are good to go.

    The concept behind Mirror is very interesting, and could ultimately lead to new and more efficient ways of working with computers in the future. But for now, it is in the very early stages and looks like something more appropriate for playing around with than much of anything else.

    https://austinhenley.com/blog/mirrorlang.html

  2. Tomi Engdahl says:

    In Other News: Nvidia Fixes Critical Flaw, Chinese Linux Backdoor, New Details in WhatsApp-NSO Lawsuit

    Noteworthy stories that might have slipped under the radar: Nvidia fixes vulnerability with rare ‘critical’ severity, Chinese APT’s first Linux backdoor, new details emerge from the WhatsApp-NSO lawsuit.

    https://www.securityweek.com/in-other-news-nvidia-fixes-critical-flaw-chinese-linux-backdoor-new-details-in-whatsapp-nso-lawsuit/

    Google says AI-enhanced fuzzing is paying off

    Google says AI-enhanced fuzzing has proven to be highly effective in identifying vulnerabilities in open source projects. Over two dozen vulnerabilities were discovered recently, including an OpenSSL issue that Google believes wouldn’t have been found with existing fuzz targets written by humans.

    Leveling Up Fuzzing: Finding more vulnerabilities with AI
    https://security.googleblog.com/2024/11/leveling-up-fuzzing-finding-more.html

    Recently, OSS-Fuzz reported 26 new vulnerabilities to open source project maintainers, including one vulnerability in the critical OpenSSL library (CVE-2024-9143) that underpins much of internet infrastructure. The reports themselves aren’t unusual—we’ve reported and helped maintainers fix over 11,000 vulnerabilities in the 8 years of the project.

    But these particular vulnerabilities represent a milestone for automated vulnerability finding: each was found with AI, using AI-generated and enhanced fuzz targets. The OpenSSL CVE is one of the first vulnerabilities in a critical piece of software that was discovered by LLMs, adding another real-world example to a recent Google discovery of an exploitable stack buffer underflow in the widely used database engine SQLite.

    This blog post discusses the results and lessons over a year and a half of work to bring AI-powered fuzzing to this point, both in introducing AI into fuzz target generation and expanding this to simulate a developer’s workflow. These efforts continue our explorations of how AI can transform vulnerability discovery and strengthen the arsenal of defenders everywhere.

    The story so far

    In August 2023, the OSS-Fuzz team announced AI-Powered Fuzzing, describing our effort to leverage large language models (LLMs) to improve fuzzing coverage to find more vulnerabilities automatically—before malicious attackers could exploit them. Our approach was to use the coding abilities of an LLM to generate more fuzz targets, which are similar to unit tests that exercise relevant functionality to search for vulnerabilities.

    The ideal solution would be to completely automate the manual process of developing a fuzz target end to end:

    Drafting an initial fuzz target.

    Fixing any compilation issues that arise.

    Running the fuzz target to see how it performs, and fixing any obvious mistakes causing runtime issues.

    Running the corrected fuzz target for a longer period of time, and triaging any crashes to determine the root cause.

    Fixing vulnerabilities.

    In August 2023, we covered our efforts to use an LLM to handle the first two steps. We were able to use an iterative process to generate a fuzz target with a simple prompt including hardcoded examples and compilation errors.

    In January 2024, we open sourced the framework that we were building to enable an LLM to generate fuzz targets. By that point, LLMs were reliably generating targets that exercised more interesting code coverage across 160 projects. But there was still a long tail of projects where we couldn’t get a single working AI-generated fuzz target.

    To address this, we’ve been improving the first two steps, as well as implementing steps 3 and 4.
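A fuzz target, the artifact the LLM is being asked to write here, is just a small entry point that feeds untrusted bytes into the code under test (in OSS-Fuzz these are typically C/C++ harnesses driven by libFuzzer). A toy Python sketch of the idea, with a deliberately buggy parser and illustrative names only:

```python
import random

def parse_record(data: bytes):
    """Code under test: parses a 'length byte + payload' record."""
    if not data:
        raise ValueError("empty input")  # documented, expected failure
    length = data[0]
    payload = data[1:1 + length]
    return payload[length - 1]  # bug: IndexError when the payload is truncated

def fuzz_target(data: bytes):
    """The fuzz target: exercise the parser; documented errors are tolerated."""
    try:
        parse_record(data)
    except ValueError:
        pass  # expected error path, not a bug

def run_fuzzer(iterations=1000, seed=0):
    """Drive the target with random inputs and collect crashing ones."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            fuzz_target(data)
        except Exception:
            crashes.append(data)  # unexpected exception: a finding to triage
    return crashes

print(f"{len(run_fuzzer())} crashing inputs found")
```

The steps in the list above map onto this sketch: drafting the target, fixing it until it runs, then letting it run longer and triaging whatever crashes fall out.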

  3. Tomi Engdahl says:

    Asa Fitch / Wall Street Journal:
    A look at the rise of super clusters that use ~100K of Nvidia’s GPUs for training giant AI models and the new engineering challenges arising from such clusters

    AI’s Future and Nvidia’s Fortunes Ride on the Race to Pack More Chips Into One Place
    Musk’s xAI and Meta are among firms building clusters of advanced chips
    https://www.wsj.com/tech/ai/nvidia-chips-ai-race-96d21d09?st=SnHx7P&reflink=desktopwebshare_permalink

    Tech titans have a new way to measure who is winning in the race for AI supremacy: who can put the most Nvidia chips in one place.

    Companies that run big data centers have been vying for the past two years to buy up the artificial-intelligence processors that are Nvidia’s specialty. Now some of the most ambitious players are escalating those efforts by building so-called super clusters of computer servers that cost billions of dollars and contain unprecedented numbers of Nvidia’s most advanced chips.

    Elon Musk’s xAI built a supercomputer it calls Colossus—with 100,000 of Nvidia’s Hopper AI chips—in Memphis in a matter of months. Meta Chief Executive Mark Zuckerberg said last month that his company was already training its most advanced AI models with a conglomeration of chips he called “bigger than anything I’ve seen reported for what others are doing.”

    A year ago, clusters of tens of thousands of chips were seen as very large. OpenAI used around 10,000 of Nvidia’s chips to train the version of ChatGPT it launched in late 2022, UBS analysts estimate.

    Such a push toward larger super clusters could help Nvidia sustain a growth trajectory that has seen it rise from about $7 billion of quarterly revenue two years ago to more than $35 billion today. That jump has helped make it the world’s most-valuable publicly listed company, with a market capitalization of more than $3.5 trillion.

    Installing many chips in one place, linked together by superfast networking cables, has so far produced larger AI models at faster rates. But there are questions about whether ever-bigger super clusters will continue to translate into smarter chatbots and more convincing image-generation tools.

    The continuation of the AI boom for Nvidia also depends in great measure on how the largest clusters of chips pan out. The trend promises not only a wave of buying for its chips but also growing demand for Nvidia’s networking equipment, which is fast becoming a significant business and brings in billions of dollars of sales each year.

    Huang said that while the biggest clusters for training for giant AI models now top out at around 100,000 of Nvidia’s current chips, “the next generation starts at around 100,000 Blackwells. And so that gives you a sense of where the industry is moving.”

    The stakes are high for companies such as xAI and Meta, which are racing against each other for computing-power bragging rights but are also gambling that having more of Nvidia’s chips, called GPUs, will translate into commensurately better AI models.

    “There is no evidence that this will scale to a million chips and a $100 billion system, but there is the observation that they have scaled extremely well all the way from just dozens of chips to 100,000,” said Dylan Patel, the chief analyst at SemiAnalysis, a research firm.

    In addition to xAI and Meta, OpenAI and Microsoft have been working to build up significant new computing facilities for AI. Google is building massive data centers to house chips that drive its AI strategy.

  4. Tomi Engdahl says:

    Robert Triggs / Android Authority:
    A look at the shortcomings of Google’s Tensor SoC for Pixel devices, with four generations failing to impress in key performance and power efficiency metrics

    Has Google’s Tensor project failed?
    Would future Pixels be better off returning to Snapdragon?
    https://www.androidauthority.com/has-google-tensor-failed-3499240/

    Thanks to an unprecedented leak from Google’s gchips division, we already know virtually everything about the processors coming in Google’s next-gen Pixel 10 and Pixel 11 flagships. While Google’s Tensor project has scored a few wins, most notably powering some unique AI and photography features, four generations of chips have failed to impress in key performance and power efficiency metrics, leaving Google’s consumers behind the curve. Unfortunately, according to Google’s internal projections, the Tensor G5 and Tensor G6 will continue to chart a virtually unchanged course.

    Google’s next two generations of smartphones will continue to leave power users wanting in many regards. Would Google be better off going back to Snapdragon? Has Google’s Tensor project already failed? I’m afraid there are increasingly strong arguments to be made here.

  5. Tomi Engdahl says:

    Benedict Evans:
    An overview of macro tech trends for 2025, focusing on generative AI, LLMs, scaling challenges with training ever bigger AI models, the capex surge, and more

    Presentations
    Every year, I produce a big presentation exploring macro and strategic trends in the tech industry.
    For 2025, ‘AI eats the world’.
    https://www.ben-evans.com/presentations

  6. Tomi Engdahl says:

    Reuters:
    Utilities, regulators, and researchers in six countries say the power demand surge caused by AI and data centers is being met in the near-term by fossil fuels

    Data-center reliance on fossil fuels may delay clean-energy transition
    https://www.reuters.com/technology/artificial-intelligence/how-ai-cloud-computing-may-delay-transition-clean-energy-2024-11-21/

    Data centers increase reliance on fossil fuels, delaying transition to clean energy
    Utilities add gas plants, delay retirements to meet data-center demand
    Data companies’ green pledges fall short, rely on existing clean power

    A spike in electricity demand from the world’s big data providers is raising a worrying possibility for the world’s climate: a near-term surge in fossil-fuel use.
    Utilities, power regulators and researchers in a half-dozen countries told Reuters the surprising growth in power demand driven by the rise of artificial intelligence and cloud computing is being met in the near-term by fossil fuels like natural gas, and even coal, because the pace of clean-energy deployments is moving too slowly to keep up.

  7. Tomi Engdahl says:

    James Allen / Financial Times:
    How AI is reshaping the data-intensive field of Grand Prix racing, including helping design cars, setting F1’s technical regulations, and shaping race strategy

    AI enters the race to reshape life on and off the track
    The world’s most technically advanced sport is so data-intensive that humans can no longer manage alone
    https://www.ft.com/content/3586067f-27c3-4f62-946d-670585ae830d

    Formula One is the world’s most technologically advanced sport. For more than a century, it has been an incubator of future technologies for the automotive, oil and tyre industries. So it is hardly surprising that the motorsport is now attracting companies working in artificial intelligence.

    This rapidly evolving technology is set to reshape the data-intensive field of Grand Prix racing. Some engineers are predicting AI could one day take on the full design of a car, but the technology is not expected to replace the driver — autonomous car racing debuted earlier this year in a separate Abu Dhabi-funded motorsport series.

    Each F1 car is fitted with 300 sensors, generating 1.1mn data points per second on the track. And the key to improving the performance of car and driver is to process that huge volume of information as quickly as possible — a task that AI makes easier.

    Tanuja Randery, managing director of Amazon Web Services Europe, a partner of F1 and of Scuderia Ferrari, says the sport is the perfect environment for the new technology and has been embraced by all F1 teams. “We give them data to be able to improve their techniques and performance,” she explains. “Given the billions of data points generated here, the ability for us to do something with F1 is significant.”

    James Vowles, Williams Racing team principal, who has recruited a team to work on AI and machine learning, says the technology could not have come sooner. “Data is growing exponentially, so it’s already at the point where humans can’t ingest all the data coming in from one car,” he notes.

    F1’s technical department and the Fédération Internationale de l’Automobile, motorsport’s governing body, have long been able to simulate lap performance but were unable to model every aspect of racing, such as the effect of aerodynamic wake generated by a car on the one behind. So, engineers combined AI with computational fluid dynamics (CFD) to produce better simulations, resulting in a 30 per cent increase in overtaking.

    AI can spot patterns in CFD simulations of specific bodywork parts. “You might do 100 different iterations of a piece to find the best one,” says Rob Smedley of Smedley Group, which consults across F1 on AI and data analysis. “But there may be something in one of those local minima that you haven’t spotted because you’re not looking for it.

    “Humans instinctively look for the best piece — we’re not conditioned to look for the wrong piece that has potential,” explains Smedley, a former Ferrari and Williams F1 engineer. “AI helps teams to do that.”

    The technology also helps shape race strategy: the high-pressure tactical decisions teams make on pit-stop timing and tyre selection, which can win or lose races. “It’s quite a complex game, with lots of different variables,” says Smedley. “And there can be a triggering variable, like a pit stop or a safety car, that changes the game. AI is important to govern that, to assimilate and understand.”

    It is even being used to personalise the experience of F1’s 700mn global fans. AWS collaborates with software group Salesforce to tailor content to different geographies and demographics. “F1’s really trying to engage a lot more female and younger audiences, and succeeding,” says Randery of AWS. “But, to do that, you should be able to localise your content significantly.”

    F1 produces TV broadcast feeds from the 24 circuits around the world and relies increasingly on AI technology in everything from shot selection for video replays to processing clips for real time social media output.

    “The hardest thing about covering F1 is we have 20 cars and action happening simultaneously around a large real estate, at 200 miles an hour, unlike a stadium sport, where it’s right in front of you,” says Dean Locke, F1 director of broadcast and media. “There is a real hunger for content during the race and straight after,” he adds. “Machine learning can really help us in areas like storytelling and graphics.”

    Examples of this are the recent introduction of a TV graphic that shows the time lost by a driver making a mistake in a corner, or another that forecasts when a battle between two drivers will take place.

    Vowles believes AI will become core to race car design. “Do I see a car being designed, or at least bits of the car being designed with AI technology? Yes, but many, many years from now,” he says. “The one bit of it that I don’t want to see change is drivers. I’m here because we have some of the most incredible elite athletes in the world pushing themselves and the car to the limits.”

  8. Tomi Engdahl says:

    Franz Lidz / New York Times:
    How researchers used AI and drones to find 303 previously uncharted Nazca Lines in Peru in six months, almost doubling the number that had been mapped by 2020

    Hundreds More Nazca Lines Emerge in Peru’s Desert
    With drones and A.I., researchers managed to double the number of mysterious geoglyphs in a matter of months.
    https://www.nytimes.com/2024/11/23/science/nazca-lines-peru-ai.html?unlocked_article_code=1.cU4.ZAeM.g7COqiKRoCSZ&smid=url-share

    “It took nearly a century to discover a total of 430 figurative geoglyphs,” said Masato Sakai, an archaeologist at Yamagata University in Japan who has studied the lines for 30 years.

    Dr. Sakai is the lead author of a survey published in September in the Proceedings of the National Academy of Sciences that found 303 previously uncharted geoglyphs in only six months, almost doubling the number that had been mapped as of 2020. The researchers used artificial intelligence in tandem with low-flying drones that covered some 243 square miles. Their conclusions also provided insights into the symbols’ enigmatic purpose.

    The newly found images — an average of 30 feet across — could have been detected in past flyovers if the pilots had known where to look. But the pampa is so immense that “finding the needle in the haystack becomes practically impossible without the help of automation,” said Marcus Freitag, an IBM physicist who collaborated on the project.

    To identify the new geoglyphs, which are smaller than earlier examples, the investigators used an application capable of discerning the outlines from aerial photographs, no matter how faint. “The A.I. was able to eliminate 98 percent of the imagery,” Dr. Freitag said. “Human experts now only need to confirm or reject plausible candidates.”

    That 2 percent flagged by A.I. amounted to 47,410 potential sites from the desert plain. Dr. Sakai’s team then pored over the high-resolution photos and narrowed the field to 1,309 candidates. “These were then categorized into three groups based on their potential, allowing us to predict their likelihood of being actual geoglyphs before visiting,” Dr. Sakai said.

    Two years ago the researchers started scouting the more promising locations by foot and with drones, ultimately “ground-truthing” 303 geoglyphs. Among the depictions were plants, people, snakes, monkeys, cats, parrots, llamas and a grisly tableau of a knife-wielding orca severing a human head. Of the new figures, 244 were suggested by the technology, while the other 59 were identified during the fieldwork unaided by A.I.
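Taken together, the figures describe a steep triage funnel. A back-of-the-envelope sketch (assuming the "98 percent eliminated" is exact, which the article only gives approximately):

```python
# Funnel figures as reported above.
flagged = 47_410      # candidates surviving the AI filter (the 2 percent kept)
shortlisted = 1_309   # after the team pored over high-resolution photos
confirmed = 303       # ground-truthed geoglyphs (244 AI-suggested + 59 on foot)

# If 47,410 sites are the 2 percent the AI did not eliminate,
# the pool it screened was about 50 times larger:
total_screened = flagged * 100 // 2
print(f"~{total_screened:,} candidate sites screened")

# Each stage sharply narrows the pool human experts must inspect:
print(f"shortlist rate: {shortlisted / flagged:.1%}")       # 2.8%
print(f"confirmation rate: {confirmed / shortlisted:.1%}")  # 23.1%
```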

  9. Tomi Engdahl says:

    AI features are wanted, but the reason for buying a new phone is an old familiar one
    https://etn.fi/index.php/13-news/16884-tekoaelyae-halutaan-mutta-syy-uuden-puhelimen-hankintaan-on-vanha-tuttu

    A survey commissioned by OnePlus and conducted by OnePoll revealed Finnish consumers’ growing interest in the AI features of smartphones. The most common reason to trade a phone in for a newer one, however, is poor battery life.

    A thousand Finnish smartphone users took part in the study, of whom 45 percent considered AI features an important part of a smartphone. The most popular AI features in the survey were real-time translations, favored by 39 percent of respondents. Second most popular were AI-based photo tools, such as removing objects and people from a picture, valued by 38 percent of respondents. Automated functions (35 percent) and AI-based search features (31 percent) also attracted considerable interest.

    The study also highlighted the most common problems Finns have with their smartphones. The biggest complaints were fast battery drain, a problem for 35 percent of respondents, and poor screen visibility in bright sunlight (25 percent). Other frequently mentioned problems were app crashes (15 percent), slow performance (14 percent) and slow charging (12 percent).

  10. Tomi Engdahl says:

    Stephen Nellis / Reuters:
    Nvidia unveils Fugatto, an AI model for generating music and audio that can also modify voices, trained on open-source data, and weighs whether to release it

    https://www.reuters.com/technology/artificial-intelligence/nvidia-shows-ai-model-that-can-modify-voices-generate-novel-sounds-2024-11-25/

  11. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    PlayAI, which uses AI to clone voices for $49 or $99 per month and recently rolled out AI agents, raised a $21M seed co-led by 500 Startups and Kindred Ventures
    https://techcrunch.com/2024/11/25/playai-clones-voices-on-command/

  12. Tomi Engdahl says:

    AI has upended job hunting in two years
    https://etn.fi/index.php/13-news/16885-tekoaely-sekoitti-tyoenhaun-kahdessa-vuodessa

    ChatGPT has now been on the market for two years. In that time it has managed to revolutionize many industries, but it has thrown job hunting into disarray. According to a study by recruitment services provider Remote, 75 percent of employers have received job applications that were written by AI and contained false information.

    High-quality, typo-free applications are no particular blessing for companies seeking new staff or for recruitment firms. On the contrary, according to Remote’s survey of business leaders, AI-generated job applications create significant problems for employers. AI-written applications are more common for management positions and at large companies. The use of AI also increases the number of unqualified candidates in recruitment processes.

    The use of AI in job hunting, and the challenges it creates, are growing. At the same time, employers’ single biggest recruitment challenge is finding the right kind of talent (38%). Six out of ten respondents believe AI-written applications will increase the number of unqualified candidates in recruitment processes. Of this group, 74 percent say the issue creates challenges for their business, and 33 percent call it a very significant challenge.

    AI-written job applications cause more problems at workplaces that favor remote work (58% of respondents) and hybrid work (45%) than at office-centric ones (43%). They are also a common challenge especially in higher-level roles, such as senior positions (58% of respondents) and executive roles (60%).

    The study also found that the larger the company, the more likely it is to receive AI-written applications. 65 percent of responding companies with fewer than 250 employees consider AI-written applications a problem; the share rises to 84 percent among larger companies with more than 250 employees. AI-written applications are most common in finance (53%), healthcare (51%), and education and training (47%).

  13. Tomi Engdahl says:

    How to Improve the Security of AI-Assisted Software Development
    https://www.securityweek.com/how-to-improve-the-security-of-ai-assisted-software-development/

    CISOs need an AI visibility and KPI plan that supports a “just right” balance to enable optimal security and productivity outcomes.

    By now, it’s clear that the artificial intelligence (AI) “genie” is out of the bottle – for good. This extends to software development, as a GitHub survey shows that 92 percent of U.S.-based developers are already using AI coding tools both in and outside of work. They say AI technologies help them improve their skills (as cited by 57 percent), boost productivity (53 percent), focus on building/creating instead of repetitive tasks (51 percent) and avoid burnout (41 percent).

    It’s safe to say that AI-assisted development will emerge even more as a norm in the near future. Organizations will have to establish policies and best practices to effectively manage it all, just as they’ve done with cloud deployments, Bring Your Own Device (BYOD) and other tech-in-the-workplace trends. But such oversight remains a work in progress. Many developers, for example, engage in what’s called “shadow AI” by using these tools without the knowledge or approval of their organization’s IT department or management.

    Those managers include chief information security officers (CISOs), who are responsible for determining the guardrails, so developers understand which AI tools and practices are OK, and which aren’t. CISOs need to lead a transition from the uncertainty of shadow AI to a more known, controlled and well-managed Bring Your Own AI (BYOAI) environment.

    The time for the transition is now, as recent academic and industry research reveals a precarious state: Forty-four percent of organizations are concerned about risks related to AI-generated code, according to the State of Cloud-Native Security Report 2024 (PDF). Research from Snyk shows that fifty-six percent of software and security team members say insecure AI suggestions are common. Four of five developers bypass security policies to use AI (i.e., shadow AI), but only one of every ten are scanning most of their code, often because the process adds more cycles for code review and thus slows overall workflows.

    In a Stanford University study, researchers found that a mere 3 percent of developers using an AI assistant wrote secure products, compared to 21 percent without access to AI. Thirty-six percent of those with AI access created products that were vulnerable to SQL injections, compared to 7 percent of those without access.

    The adoption of a well-conceived and executed BYOAI strategy would greatly help CISOs overcome the challenges as developers leverage these tools to crank out code at a rapid pace. With close collaboration between security and coding teams, CISOs will no longer stand outside of the coding environment with zero awareness of who is using what. They will cultivate a culture in which developers recognize they cannot trust AI blindly, because doing so will lead to multitudes of issues down the road. Many teams are already familiar with the need to “work backwards” to fix poor coding and security that weren’t addressed from the start, so perhaps AI security awareness will also highlight this more obviously for developers going forward.

    So how do CISOs reach this state? By incorporating the following practices and perspectives:

    Establish visibility. The surest way to eliminate shadow AI is to remove AI from the shadows, right? CISOs need to acquire “lay of the land” visibility of the tools developer teams are using, what tools they aren’t using, and why. With this, they will have a solid sense of where the code is coming from and whether any AI involvement is introducing cyber risks.

    Strike a security/productivity balance. CISOs cannot keep teams from finding their own tools – nor should they. Instead, they must seek a fine balance between productivity and security. They need to be willing to allow relevant AI-related activity within certain boundaries, if it results in meeting production goals with minimal or at least acceptable risks.

    In other words, as opposed to adopting a “Department of No” mentality, CISOs should approach the creation of guidelines and endorsed processes for their developer teams with a mindset of, “We appreciate that you’re discovering new AI solutions that will enable you to create software more efficiently. We just want to ensure your solutions won’t cause security problems that ultimately hinder productivity. So let’s work on this together.”

    Measure it. Again, in the spirit of collaboration, CISOs should work with coding teams to come up with key performance indicators (KPIs) that measure both the productivity and reliability/safety of software. The KPIs should answer the questions, “How much are we producing with AI? How quickly are we doing it? Is the security of our processes getting better, or worse?”

    Bear in mind that these are not “security” KPIs. They are “organizational” KPIs and must align to company strategies and goals. In the best of possible worlds, developers will perceive the KPIs as something that better informs them, rather than something that burdens them. They will recognize that KPIs help them reach “more/faster/better” levels, while keeping the risk factor in check.

    Developer teams may be more on board with a “security first” partnership than CISOs anticipate. In fact, these team members rank security reviewing at the top of their priority list when deploying AI coding tools, along with code reviewing. They also believe collaboration results in cleaner and more protected code writing.

  14. Tomi Engdahl says:

    Deep Learning Scaling is Predictable, Empirically
    https://arxiv.org/abs/1712.00409?fbclid=IwY2xjawGy-3tleHRuA2FlbQIxMQABHbX3A0lTgtD-EkuAsmbYN5UtlWSaxCjPQJYjGYWX321qa70Bx3PTUY0UxQ_aem_UXw7y3b-iqH4ywSkAf4NHw

    Deep learning (DL) creates impactful advances following a virtuous recipe: model architecture search, creating large training data sets, and scaling computation. It is widely believed that growing training sets and models should improve accuracy and result in better products. As DL application domains grow, we would like a deeper understanding of the relationships between training set size, computational scale, and model accuracy improvements to advance the state-of-the-art.
    This paper presents a large scale empirical characterization of generalization error and model size growth as training sets grow.

    Our empirical results show power-law generalization error scaling across a breadth of factors, resulting in power-law exponents—the “steepness” of the learning curve—yet to be explained by theoretical work. Further, model improvements only shift the error but do not appear to affect the power-law exponent. We also show that model size scales sublinearly with data size.
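
    The reported power-law scaling, error(m) ≈ a·m^β, can be demonstrated on synthetic data: if error halves per decade of training data, an ordinary least-squares fit in log-log space recovers β. The data points below are invented for illustration and are not from the paper.

```python
import math

# Hypothetical learning-curve data: (training set size m, generalization error).
# Error halves for every 10x more data, i.e. an exact power law.
data = [(1_000, 0.40), (10_000, 0.20), (100_000, 0.10), (1_000_000, 0.05)]

# Power-law scaling error(m) = a * m**beta becomes a straight line in logs:
#   log error = log a + beta * log m
# so beta can be estimated with ordinary least squares on the log-log points.
xs = [math.log(m) for m, _ in data]
ys = [math.log(e) for _, e in data]
n = len(data)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
beta = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
       sum((x - mean_x) ** 2 for x in xs)
log_a = mean_y - beta * mean_x

print(f"estimated exponent beta = {beta:.3f}")  # → -0.301, the curve's "steepness"
print(f"predicted error at 10M examples = {math.exp(log_a) * 10_000_000 ** beta:.4f}")
```

    The fitted exponent is the "steepness" the paper measures empirically; extrapolating the line is exactly how such power laws are used to predict accuracy at larger data scales.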

    Reply
  15. Tomi Engdahl says:

    ChatGPT’s popularity is growing fast – 3.7 billion visits per month
    https://dawn.fi/uutiset/2024/11/10/chatgpt-suosio-kasvussa?fbclid=IwY2xjawGzBitleHRuA2FlbQIxMQABHUch6pIMm-3aLrjCDF-Q-7FeOrnjLkJxyw1QFbUKh98wGDAHuyvPOOEnEA_aem_74BkJLB5Fgpj-KNKA3nNVg

    OpenAI’s AI bot ChatGPT has risen at a furious pace to become one of the world’s most popular online services. Its growth has been almost unprecedented.

    Web analytics firm SimilarWeb estimates in its latest review that the service draws 3.7 billion visits every single month. That is a staggering 115.9 percent year-over-year increase, meaning the number of visits has more than doubled in a year.

    ChatGPT has thus rapidly climbed into the ranks of the world’s most popular online services and currently holds eighth place among all sites. It is also the only service in the top ten that moved up compared to the previous month.

    Ahead of ChatGPT, in seventh place, is Wikipedia, followed in sixth by the crisis-ridden X, formerly Twitter. At the current pace, ChatGPT looks set to sweep past both in the coming months, as its growth curve has been nearly vertical since the summer of 2024.

    Other AI services, such as Google’s Gemini, are also doing well and growing fast, but the numbers are on an entirely different level from ChatGPT’s: Gemini had about 292 million visits in October 2024, less than a tenth of ChatGPT’s traffic.

    Reply
  16. Tomi Engdahl says:

    Based on recent experience, Perplexity.ai hallucinates less than ChatGPT, but it couldn’t manage to write Vimscript.

    Reply
  17. Tomi Engdahl says:

    I mostly run into hallucinations when there is no clear solution to the problem at all. It would be nice if LLMs someday learned to say, in those cases, that there is no good enough answer, so let’s leave it at that.

    Reply
  18. Tomi Engdahl says:

    The Information:
    Source: Amazon’s new generative AI model, codenamed Olympus, can process images, video, and text, and could be unveiled at next week’s AWS re:Invent

    Amazon Develops Video AI Model, Hedging Its Reliance on Anthropic
    https://www.theinformation.com/articles/amazon-develops-video-ai-model-hedging-its-reliance-on-anthropic

    Reply
  19. Tomi Engdahl says:

    Check Point revamps its firewall: AI transforms cybersecurity
    https://etn.fi/index.php/13-news/16895-check-point-uudisti-palomuurinsa-tekoaely-mullistaa-kyberturvan

    Check Point Software Technologies has announced its new Quantum Firewall Software R82, which brings unprecedented AI-based cybersecurity solutions to the market. The software is intended to address the global rise in cyber threats faced by organizations, which has reached as much as 75 percent. R82 leverages advanced AI technology and offers effective protection against zero-day attacks, phishing, malware, and DNS vulnerabilities.

    Check Point’s head of product Nataly Kremer emphasizes that as threats grow more complex, organizations need intelligent and agile solutions to stay a step ahead. The new software not only delivers world-class security innovations but also makes deploying them easy and scalable, which is vital in today’s business environment.

    The Quantum Firewall R82 software uses four new AI engines that enable it to block up to 99.8 percent of all zero-day attacks. This translates to blocking more than 500,000 additional attacks per month. The software is also designed to support the agility of data centers and application development: deploying virtual servers is now up to three times faster, enabling rapid application development and effortless management of multi-tenant environments.

    Another significant improvement is the software’s use of NIST-approved Kyber encryption, which provides security resistant to quantum computers. This ensures that organizations’ encrypted data remains safe in the future, when quantum computers may threaten current encryption standards.

    Check Point has also added several AI-based innovations to the software, such as the Infinity AI Copilot assistant, which speeds up threat resolution and security management, and the GenAI Protect solution, which enables safe use of generative AI in enterprises. The company also offers the Infinity External Risk Management service, which monitors and blocks threats in real time.

    Reply
  20. Tomi Engdahl says:

    The cloud is no longer a competitive advantage
    https://etn.fi/index.php/opinion/16896-pilvi-ei-enaeae-ole-kilpailuetu

    The pace of cloud migrations has slowed, and many companies have already captured the competitive advantage the cloud offers. How does this affect the role and priorities of IT service partners going forward? Jukka-Pekka Ahonen, enterprise architect at TCS, shares his views.

    Cloud technologies are reaching maturity. For many companies the cloud is no longer a competitive advantage but the foundation of their digital solutions. On the other hand, there is still plenty of work to do in optimizing cloud services; large system or organizational overhauls, for example, are often the moments when it is important to assess the cloud estate as a whole more closely.

    At the same time, new technologies such as generative AI are opening entirely new opportunities for IT service providers to meet the needs of customers and end customers. GenAI also demands new kinds of scalability from cloud services; at TCS, for example, we factor in the scaling requirements of large language models, as well as the logical placement of sandboxes, when designing cloud architectures.

    You say that cloud partners should dare to experiment, and even to fail, in order to create added value. How can IT service partners bring such bold ideas to their customers in practice? What might such bold experiments look like?

    My own experience shows that an innovative culture of experimentation produces results only if failures are learned from and plans are adjusted accordingly. You must have the courage to try new things and accept that failures are part of the process. This requires open communication and a willingness to share experiences from both parties, the customer and the partner. At TCS, for example, we have adopted operating models in which customers actively participate in experiments and development. This approach ensures that both technological and business goals are met.

    This kind of boldness is not limited to technology adoption; it extends to trust between partners. Openness about challenges, risks, and uncertainties allows partners to jointly design and test solutions that might not be possible under a traditional project model.

    Over my career I have seen that the best results come when technology is approached from customer needs. Genuinely strategic partners do not start from technological possibilities but from the customer’s real problems and goals. New technologies like GenAI make it possible to address needs that may not even have been identified before. This requires partners to be able to step into the customer’s shoes without preconceptions, understand their challenges, and look for solutions that are genuinely meaningful.

    Reply
  21. Tomi Engdahl says:

    Anthropic bets on personalization in the AI arms race with new ‘styles’ feature
    https://venturebeat.com/ai/anthropic-bets-on-personalization-in-the-ai-arms-race-with-new-styles-feature/

    Anthropic, a leading artificial intelligence company backed by major tech investors, announced today a significant update to its Claude AI assistant that allows users to customize how the AI communicates — a move that could reshape how businesses integrate AI into their workflows.

    The new “styles” feature, launching today on Claude.ai, enables users to preset how Claude responds to queries, offering formal, concise, or explanatory modes. Users can also create custom response patterns by uploading sample content that matches their preferred communication style.

    Reply
  22. Tomi Engdahl says:

    Four tie-ups uncover the emerging AI chip design models
    https://www.edn.com/four-tie-ups-uncover-the-emerging-ai-chip-design-models/#google_vignette

    The semiconductor industry is undergoing a major realignment to serve artificial intelligence (AI) and related environments like data centers and high-performance computing (HPC). That’s partly because AI chips mandate new design skills, tools, and methodologies.

    As a result, IP suppliers, chip design service providers, and AI specialists are far more prominent in the AI-centric design value chain. Below are four design use cases that underscore the realignment in chip design models serving AI applications.

    Reply
  23. Tomi Engdahl says:

    Why PyTorch Gets All the Love
    PyTorch has emerged as a top choice for researchers and developers due to its relative ease of use and continuing improvement in performance.
    https://thenewstack.io/why-pytorch-gets-all-the-love/

    PyTorch has grown exponentially since rising from the research community in 2016 and finding its place in the data science world. PyTorch’s star count on GitHub grew 388% between 2017 and 2018, according to GitHub data. Even though growth steadied to about 17% from 2022 to 2023, and TensorFlow remains the top deep learning framework, with nearly 39% market share, according to 6Sense, vs. 24% for PyTorch, momentum is on PyTorch’s side.

    Reply
  24. Tomi Engdahl says:

    Anthropic launches tool to connect AI systems directly to datasets / The Model Context Protocol connects an AI system to multiple data sources, which Anthropic says can eliminate the need to create custom code for each one.
    https://www.theverge.com/2024/11/25/24305774/anthropic-model-context-protocol-data-sources

    Reply
  25. Tomi Engdahl says:

    Sam Altman Says AGI Is “Achievable With Current Hardware”
    What does that even mean?
    https://futurism.com/sam-altman-agi-achievable-current-hardware

    OpenAI has made realizing artificial general intelligence, the point at which the capabilities of an AI surpass those of a human, its number one priority — yet plenty of questions remain.

    For one, the point at which AGI will become a reality remains a huge point of contention: experts’ predictions range from a few years to the better part of a decade, while others argue that the current trajectory of machine learning is a dead end that will never get there.

    Reply
  26. Tomi Engdahl says:

    Zuckerberg Says It’s Fine to Train AI on Your Data Because It Probably Has No Value Anyway
    https://futurism.com/the-byte/zuckerberg-fine-train-ai-data-no-value

    “Individual creators or publishers tend to overestimate the value of their specific content.”
    Fair Game
    AI companies have been indiscriminately scraping mind-boggling amounts of content to train their AI models, a controversial practice that has led to a litany of lawsuits by copyright holders, from major record labels and newspapers to artists and authors.

    Reply
  27. Tomi Engdahl says:

    Order your own CustomGPT to help with content production
    Do you recognize the marketer’s pain? The website and ad campaigns regularly need new copy, but the experts are too busy and hiring someone new is not an option. A CustomGPT tailored to your organization’s needs can help.
    https://www.hopkins.fi/palvelut/customgpt-yritykselle/

    Reply
  28. Tomi Engdahl says:

    Church Sets Up AI-Powered Jesus Inside Confessional Booth
    https://futurism.com/the-byte/ai-powered-jesus-confession-booth

    “I think there is a thirst to talk with Jesus.”
    In Jesus’ Name
    A church in the Swiss city of Lucerne has set up a computer inside a confessional booth that allows churchgoers to converse with an “AI Jesus.”

    Reply
  29. Tomi Engdahl says:

    Competition heats up in generative AI
    https://etn.fi/index.php/13-news/16899-kisa-kovenee-generatiivisessa-tekoaelyssae

    The generative AI market has seen tremendous growth in recent years, with new players entering the field almost daily. Although competition has tightened, OpenAI’s ChatGPT long remained the clear leader. This year Meta took a giant leap and emerged as a serious challenger.

    According to figures presented by AltIndex.com, Meta AI’s market share in the United States, the world’s largest AI market, rose to 31 percent this year. With that, Meta’s market share caught up with ChatGPT’s.

    Global competition in generative AI has accelerated at a record pace, and ever more applications are fighting for users’ attention. Although ChatGPT is still the largest single player, Meta AI’s strong rise, especially in the United States, threatens its long-standing dominance.

    Several factors lie behind Meta AI’s success. The AI is seamlessly integrated into Meta’s platforms, such as Facebook, Instagram, and WhatsApp, making it easily accessible to a broad user base. In addition, Meta’s AI tools are designed especially for content production, marketing, and customer service, offering users highly personalized and interactive experiences. On top of this, Meta has invested in aggressive marketing campaigns and competitive pricing models that make it an attractive alternative to OpenAI.

    According to the Statista Consumer Insights survey, Meta AI’s market share in the United States grew from 16 percent to 31 percent in a year, nearly doubling. Over the same period, ChatGPT’s share rose only slightly, from 26 percent to 31 percent.

    Statista’s data also shows that other players have made significant progress. Google’s Gemini grew its market share from 13 percent to 27 percent. Microsoft’s Copilot follows in fourth place with a 14% share, while Snapchat AI and Microsoft Bing AI each hold 12%.

    Competition in generative AI is set to take a new turn next year, with Apple having entered the market at the end of 2024.

    Reply
  30. Tomi Engdahl says:

    Anthropic’s Claude: The AI Junior Employee Transforming Business
    https://www.forbes.com/sites/charlestowersclark/2024/11/29/anthropics-claude-the-ai-junior-employee-transforming-business/

    Large language models like ChatGPT and Claude have yet to match the versatility of human workers. This is partly because AI relies on uploaded data for context. Thus, AI tools have primarily served as co-pilots, helping users complete specific tasks, but unable to assist autonomously.

    Beyond Co-Pilot Assistance

    Last month, Anthropic released a new function via its API — Claude “Computer Use.” Despite its innocuous title, Computer Use represents the closest any mainstream AI has come to human-like agency.

    Anthropic’s Beta Computer Use enables Claude to interact directly with software environments and applications – navigating menus, typing, clicking, and executing complex, multi-step processes independently.

    This functionality mimics robotic process automation (RPA) in performing repetitive tasks, but it goes further by simulating human thought processes, not just actions. Unlike RPA systems that rely on pre-programmed steps, Claude can interpret visual inputs (like screenshots), reason about them, and decide on the best course of action.

    For instance, a business might task Claude with organizing customer data from a CRM, correlating it with financial data, and then crafting personalized WhatsApp messages – all without human intervention. A developer might request Claude to set up a Kubernetes cluster, integrating it with the right configurations and data. Such capabilities make it feasible to delegate work to Claude in the same way one would assign tasks to a junior employee.

    However, there are trade-offs: relying solely on Claude’s Computer Use can be slow, because it mimics human actions step by step. Furthermore, as the name implies, Computer Use needs exclusive access to a computer while it works.
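
    Stripped of the model itself, a Computer Use-style agent is an observe, reason, act loop. In the sketch below a hard-coded stub stands in for the LLM, and the screen strings and action vocabulary are invented; it shows only the control flow, not Anthropic’s actual API.

```python
# Minimal observe -> reason -> act loop. stub_model() is a stand-in for a
# real LLM call; screens are plain strings instead of real screenshots.

def stub_model(observation: str) -> dict:
    """Pretend LLM: map what is 'seen' on screen to the next UI action."""
    if "login page" in observation:
        return {"action": "type", "target": "username", "text": "alice"}
    if "dashboard" in observation:
        return {"action": "click", "target": "export_csv"}
    return {"action": "done"}

def run_agent(screens: list[str]) -> list[dict]:
    """Drive the loop over a sequence of 'screenshots'."""
    actions = []
    for screen in screens:          # observe
        step = stub_model(screen)   # reason
        actions.append(step)        # act (here: just record the action)
        if step["action"] == "done":
            break
    return actions

trace = run_agent(["login page", "dashboard", "report saved"])
print([step["action"] for step in trace])  # → ['type', 'click', 'done']
```

    The step-by-step nature of this loop is also why the approach is slow: every action requires a fresh round trip through the model.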

    Reply
  31. Tomi Engdahl says:

    “Agents let teams unleash their output based on their ideas, not their size,” Vassilev explains. Each set of agents provided by Relevance is estimated to handle workflows equivalent to what would typically require five full-time employees. This could include activities such as lead qualification, personalized onboarding, and proactive customer success outreach—tasks that would be prohibitively resource-intensive without automation.
    https://www.forbes.com/sites/charlestowersclark/2024/11/29/anthropics-claude-the-ai-junior-employee-transforming-business/

    Reply
  32. Tomi Engdahl says:

    The Autonomous Edge
    The key distinction between co-pilots and autonomous agents lies in execution: autonomous agents can carry out tasks independently. As Vassilev puts it:

    “A co-pilot makes you twice as productive, but an autonomous agent lets you delegate the work entirely, leaving you to review the output.”

    As an example, Relevance uses its own AI agents to research new customer signups and generate tailored recommendations, onboard users by pre-creating tools customized to their needs, and follow up with personalized communications. These agents shift human roles from task execution to oversight, freeing up time for strategic and creative work.

    Trust and Guardrails
    Despite their potential, AI agents are not infallible. Vassilev likens deploying AI agents to onboarding a new hire:

    “You wouldn’t let a new hire send an email to your customer’s CEO without oversight. Similarly, AI agents require a strong human-in-the-loop process.”

    Ensuring that AI agents perform safely relies on setting guardrails around what they can and cannot do, and on training them properly, much as you would a junior employee.

    https://www.forbes.com/sites/charlestowersclark/2024/11/29/anthropics-claude-the-ai-junior-employee-transforming-business/

    Reply
  33. Tomi Engdahl says:

    Microsoft AI Introduces LazyGraphRAG: A New AI Approach to Graph-Enabled RAG that Needs No Prior Summarization of Source Data
    https://www.marktechpost.com/2024/11/26/microsoft-ai-introduces-lazygraphrag-a-new-ai-approach-to-graph-enabled-rag-that-needs-no-prior-summarization-of-source-data/

    In AI, a key challenge lies in improving the efficiency of systems that process unstructured datasets to extract valuable insights. This involves enhancing retrieval-augmented generation (RAG) tools, combining traditional search and AI-driven analysis to answer localized and overarching queries. These advancements address diverse questions, from highly specific details to more generalized insights spanning entire datasets. RAG systems are critical for document summarization, knowledge extraction, and exploratory data analysis tasks.

    One of the main problems with existing systems is the trade-off between operational costs and output quality. Traditional methods like vector-based RAG work well for localized tasks like retrieving direct answers from specific text fragments.
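
    The localized-retrieval step that vector-based RAG performs well can be sketched in a few lines. Below, bag-of-words cosine similarity stands in for learned embeddings, and the corpus and query are invented for illustration; a production system would embed with a neural model and hand the top fragments to an LLM as context.

```python
import math
from collections import Counter

# Toy corpus of text fragments a RAG system might index.
docs = [
    "The invoice is due on the first of the month.",
    "Reset your password from the account settings page.",
    "Quarterly revenue grew ten percent year over year.",
]

def embed(text: str) -> Counter:
    """Crude stand-in for a learned embedding: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank fragments by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# The retrieved fragment(s) would then be passed to an LLM as context.
print(retrieve("how do I reset my password"))
```

    This is exactly the "localized" case the article describes: the answer lives in one fragment. Global questions that span the whole corpus are where graph-enabled approaches like LazyGraphRAG come in.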

    Reply
  34. Tomi Engdahl says:

    Google DeepMind Research Unlocks the Potential of LLM Embeddings for Advanced Regression
    https://www.marktechpost.com/2024/11/28/google-deepmind-research-unlocks-the-potential-of-llm-embeddings-for-advanced-regression/

    Large Language Models (LLMs) have revolutionized data analysis by introducing novel approaches to regression tasks. Traditional regression techniques have long relied on handcrafted features and domain-specific expertise to model relationships between metrics and selected features. However, these methods often struggle with complex, nuanced datasets that require semantic understanding beyond numerical representations. LLMs provide a groundbreaking approach to regression by leveraging free-form text, overcoming the limitations of traditional methods. Bridging the gap between advanced language comprehension and robust statistical modeling is key to redefining regression in the age of modern natural language processing.

    Existing research methods for LLM-based regression have largely overlooked the potential of service-based LLM embeddings as a regression technique. While embedding representations are widely used in retrieval, semantic similarity, and downstream language tasks, their direct application to regression remains underexplored.
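
    The idea of using embeddings as regression features can be sketched with a toy nearest-neighbor regressor. The 2-D vectors below are hand-made stand-ins for LLM embeddings (a real setup would call an embedding service), and the rent-prediction task is invented for illustration.

```python
import math

# (embedding, target) pairs. In the paper's setting the vectors would come
# from an LLM embedding API applied to free-form text like apartment
# descriptions; here they are fabricated 2-D vectors so the example runs
# offline.
train = [
    ([0.9, 0.1], 2400.0),  # "sunny downtown loft"
    ([0.8, 0.2], 2200.0),  # "central studio near the park"
    ([0.1, 0.9], 900.0),   # "small room in the suburbs"
    ([0.2, 0.8], 1100.0),  # "quiet flat far from the center"
]

def knn_regress(query_vec: list[float], k: int = 2) -> float:
    """Predict by averaging the targets of the k nearest embeddings."""
    nearest = sorted(train, key=lambda pair: math.dist(query_vec, pair[0]))[:k]
    return sum(y for _, y in nearest) / k

# A new "downtown-like" description lands near the first two examples.
print(knn_regress([0.85, 0.15]))  # → 2300.0
```

    The point is that distances in embedding space carry semantic information the raw text's surface features do not, which is what makes embeddings usable as regression inputs.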

    Reply
  35. Tomi Engdahl says:

    Andrew Ng’s Team Releases ‘aisuite’: A New Open Source Python Library for Generative AI
    https://www.marktechpost.com/2024/11/29/andrew-ngs-team-releases-aisuite-a-new-open-source-python-library-for-generative-ai/

    Generative AI (Gen AI) is transforming the landscape of artificial intelligence, opening up new opportunities for creativity, problem-solving, and automation. Despite its potential, several challenges arise for developers and businesses when implementing Gen AI solutions. One of the most prominent issues is the lack of interoperability between different large language models (LLMs) from multiple providers. Each model has unique APIs, configurations, and specific requirements, making it difficult for developers to switch between providers or use different models in the same application. This fragmented landscape often leads to increased complexity, extended development time, and challenges for engineers aiming to create effective Gen AI applications.
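
    The interoperability gap described above is essentially an adapter problem. The sketch below illustrates the idea with offline stubs and a hypothetical `chat()` helper; it mimics the "provider:model" routing style that unified libraries such as aisuite aim for, but it is not aisuite’s actual API.

```python
# One call shape, with the provider chosen by a "provider:model" string.
# The providers here are offline stubs, not real SDK clients.

def _openai_stub(model: str, messages: list[dict]) -> str:
    return f"[openai/{model}] " + messages[-1]["content"].upper()

def _anthropic_stub(model: str, messages: list[dict]) -> str:
    return f"[anthropic/{model}] " + messages[-1]["content"][::-1]

PROVIDERS = {"openai": _openai_stub, "anthropic": _anthropic_stub}

def chat(model_id: str, messages: list[dict]) -> str:
    """Route a provider-agnostic request, e.g. 'openai:gpt-4o'."""
    provider, model = model_id.split(":", 1)
    return PROVIDERS[provider](model, messages)

# Swapping providers changes one string, not the application code.
msgs = [{"role": "user", "content": "hello"}]
print(chat("openai:gpt-4o", msgs))     # → [openai/gpt-4o] HELLO
print(chat("anthropic:claude", msgs))  # → [anthropic/claude] olleh
```

    Hiding each provider's API behind one routing function is what lets developers switch models, or mix them in one application, without rewriting call sites.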

    Reply
  36. Tomi Engdahl says:

    Innovation
    Train Your Brain to Work Creatively with Gen AI
    https://hbr.org/2024/11/train-your-brain-to-work-creatively-with-gen-ai

    There are countless articles on how to use generative AI (gen AI) to improve work, automate repetitive tasks, summarize meetings and customer engagements, and synthesize information. There are also scores of virtual libraries brimming with prompting guides to help us achieve more effective and even fantastical output using gen AI tools. Many common digital tools already feature integrated AI co-pilots to automagically enhance and complete writing, coding, designing, creating, and whatever it is you’re working on. But there is so much more to generative AI beyond enhancing or accelerating what we already do. With the right mindset shift, or mindshift, we can train our brains to creatively rethink how we use these tools to unlock entirely new value and achieve exponential outcomes in what’s becoming an AI-first world.

    When most people prompt, they do so within the paradigm of how they think about what could or should come next. For example, when searching Google, users may ask a question, search for the best Thai restaurant “near me,” or insert specific criteria based on filtered output, such as “best downhill mountain bike for intermediate riders.” That approach is often carried into prompting. The results build on a linear path of thinking, research, and decision-making based upon the world as we know it. This is perfectly normal and effective. In fact, it’s how today’s generative AI models largely work.

    Generative AI relies on natural language processing (NLP) to understand the request and generate relevant results. It’s basically pattern recognition and pattern assembly based on instructions to deliver output that completes the task at hand. This approach aligns with our brains’ default mode: pattern recognition and efficiency-seeking, which favors short, straightforward prompts to get immediate, predictable results.

    If most people use gen AI in this way, then no matter how powerful the tools, we inadvertently create a new status quo in how we work and create. Training our brains to challenge our thinking, our assumptions of AI capabilities, and our expectations for predictable results starts with a mindshift, to recognize AI not as just a tool, but as a partner in innovation and exploring the unfamiliar.

    AI Enhances Today’s Work While Also Unlocking Tomorrow’s Opportunities, Today
    My friend Dharmesh Shah, co-founder and chief technology officer of HubSpot, once shared online that “you’re competing with A.I.”

    “That’s right,” people agreed. They went on to share other reactions including, “AI is going to take jobs,” with some harboring more dystopian outlooks, such as: “AI is going to destroy us.”

    AI empowers you to take what you do today and make it more efficient, more scalable, less expensive, and more automated. More importantly, AI supercharges your capabilities, letting you do what you couldn’t do yesterday and augmenting your performance. It’s this part that requires imagination, creative and repetitive training, and a willingness to step beyond your comfort zone and explore the unknown (and have fun while you’re at it).

    If you’re not 100% satisfied with the results, you could ask it to regenerate ideas or guide it with more specific details or nuances, e.g., provide recipes under X calories, only recipes for baking or air frying, etc.

    You can also experiment in fun ways by adding your personality into the mix and exploring unconventional, radical, or previously impossible requests. This makes the output more creative, surprising, and mind-blowing.

    The idea here is to think about your interactions creatively, to practice by challenging your own conventions around how you think gen AI should work, and also the outcomes you think are expected or possible.

    Rethinking Collaboration with AI for More Creative, Innovative Outcomes
    Changing your mindset to more creatively and openly collaborate with AI is about the willingness to explore the unknown and the capacity to learn, unlearn, and experiment. Plus, it’s a lot of fun.

    Tapping into AI’s creative and transformative potential and training your brain for an AI-first world requires us to mindshift our prompting style, engaging AI as a collaborative partner, rather than just a tool.

    12 Exercises to Train Your Brain to Work More Creatively With AI
    Here are a dozen ways to train our brains to achieve broader, more innovative outcomes with gen AI:

    1. Set a daily “exploratory prompting” practice
    2. Frame prompts around “What if” and “How might we” questions
    3. Embrace ambiguity and curiosity in prompts
    4. Use prompts to explore rather than to solve
    5. Chain prompts to develop ideas iteratively
    6. Think metaphorically or analogically
    7. Prompt for perspectives, beyond facts
    8. Experiment with “role-play” prompts
    9. Ask for impossibilities and involve experiential scenarios
    10. WWAID — Reimagine AI’s role in the solution itself
    11. Establish a weekly “future-driven prompt” session
    12. Keep a journal of “breakthrough prompts”

    Generative AI is not just a tool; it’s a catalyst to rewire our thinking patterns, break free from the constraints of linear logic, and unlock creative insights we didn’t know we were capable of. A mindshift rewires us to abandon outdated thinking, embrace curiosity, and activate exponential potential by asking better, more audacious questions. To harness its power in innovative ways, the first step is to adopt what I call “exponential curiosity,” which allows us to move from merely using AI to actively co-creating with it.
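
    Exercise 5, chaining prompts iteratively, has a simple mechanical shape: each step’s output is folded into the next prompt. The stub below stands in for a real LLM call, and the function names and topic are invented.

```python
# Prompt chaining: the output of one step becomes the input of the next.
# stub_llm() just wraps its prompt so the chaining is visible in the result.

def stub_llm(prompt: str) -> str:
    return f"idea({prompt})"

def chain(topic: str, steps: list[str]) -> str:
    result = topic
    for step in steps:
        # Fold the previous result into the next prompt.
        result = stub_llm(f"{step}: {result}")
    return result

print(chain("reusable packaging",
            ["brainstorm variants", "pick the boldest", "sketch a pilot"]))
```

    With a real model, each intermediate result would be refined rather than merely wrapped, but the iterative structure is the same.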

    Reply
  37. Tomi Engdahl says:

    Deploying Ultralytics YOLO models on Raspberry Pi devices
    https://www.raspberrypi.com/news/deploying-ultralytics-yolo-models-on-raspberry-pi-devices/

    In this guest post, Ultralytics, creators of the popular YOLO (You Only Look Once) family of convolutional neural networks, share their insights on deploying and running their powerful AI models on Raspberry Pi devices, offering solutions for a wide range of real-world problems.

    Computer vision is redefining industries by enabling machines to process and understand visual data like images and videos. To truly grasp the impact of vision AI, consider this: Ultralytics YOLO models, such as Ultralytics YOLOv8 and the newly launched Ultralytics YOLO11, which support computer vision tasks like object detection and image classification, have been used over 100 billion times. There are 500 to 600 million uses every day and thousands of uses every second across applications like robotics, agriculture, and more.

    Reply
  38. Tomi Engdahl says:

    Automation and AI are changing continuous learning and the ways we work
    Published: 27 Nov 2024
    Lauri Järvilehto, professor of practice at Aalto University, stresses that automation does not eliminate professions or the need for skilled workers; rather, it reshapes how work is done and what skills professionals need. Continuous learning helps people cope with changes in working life, and AI helps people cope with continuous learning.

    https://www.aalto.fi/fi/uutiset/automaatio-ja-tekoaly-muuttavat-jatkuvaa-oppimista-ja-tyon-tekemisen-tapoja

    Reply
  39. Tomi Engdahl says:

    Alibaba researchers unveil Marco-o1, an LLM with advanced reasoning capabilities
    https://venturebeat.com/ai/alibaba-researchers-unveil-marco-o1-an-llm-with-advanced-reasoning-capabilities/

    The recent release of OpenAI o1 has brought great attention to large reasoning models (LRMs), and is inspiring new models aimed at solving complex problems classic language models often struggle with. Building on the success of o1 and the concept of LRMs, researchers at Alibaba have introduced Marco-o1, which enhances reasoning capabilities and tackles problems with open-ended solutions where clear standards and quantifiable rewards are absent.

    OpenAI o1 uses “inference-time scaling” to improve the model’s reasoning ability by giving it “time to think.” Basically, the model uses more compute cycles during inference to generate more tokens and review its responses, which improves its performance on tasks that require reasoning. o1 is renowned for its impressive reasoning capabilities, especially in tasks with standard answers such as mathematics, physics and coding.
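
    The simplest form of inference-time scaling is best-of-N sampling: draw several candidate answers and keep the one a verifier scores highest. The fixed candidate pool and scorer below are toy stand-ins, not how o1 or Marco-o1 actually work internally.

```python
# Toy "model": a fixed pool of candidate answers to "what is 6 * 7?",
# as if sampled from an LLM with nonzero temperature.
CANDIDATES = [41, 47, 42, 39, 44]

def score(answer: int) -> float:
    """Toy verifier: prefers answers close to the true product."""
    return -abs(answer - 6 * 7)

def best_of_n(n: int) -> int:
    # Drawing more candidates (spending more inference compute) can only
    # improve the best score found so far.
    return max(CANDIDATES[:n], key=score)

print(best_of_n(1), best_of_n(5))  # → 41 42
```

    Techniques like MCTS refine the same principle: instead of scoring whole answers, they allocate the extra inference compute across a search tree of partial reasoning steps.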

    However, many applications involve open-ended problems that lack clear solutions and quantifiable rewards. “We aimed to push the boundaries of LLMs even further, enhancing their reasoning abilities to tackle complex, real-world challenges,” Alibaba researchers write.

    Marco-o1 is a fine-tuned version of Alibaba’s Qwen2-7B-Instruct that integrates advanced techniques such as chain-of-thought (CoT) fine-tuning, Monte Carlo Tree Search (MCTS) and reasoning action strategies.

    The researchers trained Marco-o1 on a combination of datasets, including the Open-O1 CoT dataset; the Marco-o1 CoT dataset, a synthetic dataset generated using MCTS; and the Marco-o1 Instruction dataset, a collection of custom instruction-following data for reasoning tasks.

    Reply
  40. Tomi Engdahl says:

    Nobel Prize-Winning AI Breakthrough Paves the Way for Quantum Chemistry
    https://scitechdaily.com/nobel-prize-winning-ai-breakthrough-paves-the-way-for-quantum-chemistry/

    The Nobel Prize in Chemistry was awarded to three leaders in AI for predicting protein structures, while a Korean research team made strides in quantum computing, estimating molecular properties with unprecedented accuracy and fewer resources, promising advancements in drug development and material sciences.

    The Nobel Prize in Chemistry was just awarded to Professor David Baker from the University of Washington, Google DeepMind CEO Demis Hassabis, and Google DeepMind researcher John Jumper. Their groundbreaking work uses AI to predict protein structures, unlocking new possibilities for drug discovery and the creation of advanced materials. As AI and data science continue to revolutionize research, quantum computing is emerging as another transformative force in these fields.

    Reply
  41. Tomi Engdahl says:

    Finnish Institute of Occupational Health warns about AI (translated from Finnish: "Työterveyslaitos varoittaa tekoälystä")
    Anna Helakallio, 27 Nov 2024
    AI adoption comes with many problems, says Jere Immonen, a researcher at the Finnish Institute of Occupational Health (Työterveyslaitos).
    https://www.tivi.fi/uutiset/tyoterveyslaitos-varoittaa-tekoalysta/887ab00c-8dc6-457e-bda8-fd534ad4bf65

    The use of AI involves ethical and climate-related problems, the institute reports. The finding is based on a study it commissioned.

    Jere Immonen, the institute researcher who conducted the study, says in a press release that one of the biggest concerns is "the division of responsibility between humans and AI." In addition, the development of language models raises ethical problems.

    "The training of large language models has often been carried out in countries of the global south, at low pay and in questionable working conditions," Immonen says in the release.

    Another significant AI-related problem is climate impact. Using AI without ecological goals can lead to environmental harm.

    "In particular, AI systems' demands for data storage and processing increase energy consumption," Immonen says.

    AI can, however, advance the green transition. Immonen says that AI's greatest climate lever could be its influence in steering human behavior.

    "In finance, for example, generative AI can help direct investments toward targets that advance the green transition."

    The study also found that not all AI-related concerns are necessarily grounded in reality. AI has been feared to replace many jobs, but it can even have a positive effect on working life. In expert work, for example, AI has reduced routine tasks.

    "The change generative AI brings to expert work reshapes the nature and content of the work rather than fully replacing it," Immonen says.

    Reply
  42. Tomi Engdahl says:

    Why Small Language Models Are The Next Big Thing In AI
    https://www.forbes.com/sites/deandebiase/2024/11/25/why-small-language-models-are-the-next-big-thing-in-ai/

    With Elon Musk’s xAI raising an additional $5 billion in funding from Andreessen Horowitz, Qatar Investment Authority, Valor Equity Partners, and Sequoia — and Amazon investing an additional $4 billion in OpenAI rival Anthropic — artificial intelligence enters the holiday season on fire.

    But while Microsoft, Google, Meta, Amazon and others invest billions in developing general-purpose large language models to handle a variety of tasks, one size does not fit all when it comes to AI. What’s good for these big dogs may not be what your company needs. And even if there is an impending bubble, now more than ever, the C-suite needs to better understand the impact of these technologies.

    With (too many) LLM startups enabling computers to synthesize vast amounts of data and respond to natural-language queries, LLM-powered AI is becoming critical for businesses across the globe. "The response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable," AWS CEO Matt Garman stated in a release on their expanding partnership and investments. "By continuing to deploy Anthropic models in Amazon Bedrock and collaborating with Anthropic on the development of our custom Trainium chips, we'll keep pushing the boundaries of what customers can achieve with generative AI technologies."

    For many companies, LLMs are still the best choice for specific projects. For others, though, they can be expensive for businesses to run, as measured in dollars, energy, and computing resources. According to IDC calculations, worldwide AI spending will double over the next four years to $632 billion (seems low), with generative AI growing rapidly to represent 32% of all spending.

    I suspect there are emerging alternatives that will work better in certain instances—and my discussions with dozens of CEOs support that.

    Your Company Needs Small Language Models (SLMs)
    So, what exactly are small language models? They are simply language models trained only on specific types of data to produce customized outputs. A critical advantage is that the data is kept within the firewall, so external models are not being trained on potentially sensitive data. The beauty of SLMs is that they scale both computing and energy use to the project's actual needs, which can help lower ongoing expenses and reduce environmental impact.

    Another important alternative, domain-specific LLMs, specialize in one type of knowledge rather than offering broader knowledge. Domain-specific LLMs are heavily trained to deeply understand a given category and respond more accurately to queries in that domain, whether those queries come from, say, a CMO or a CFO.
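
    One common way these specialist models get deployed is behind a router that sends each query to a small domain model when the domain is recognized, and to a general model otherwise. The sketch below is hypothetical: the model names and the keyword matcher are invented for illustration, and a production router would typically use a classifier rather than keywords.

```python
# Map each domain to trigger keywords; matched queries go to that
# domain's (hypothetical) small specialist model.
SPECIALISTS = {
    "finance": ["revenue", "ebitda", "cash flow"],
    "marketing": ["campaign", "brand", "conversion"],
}

def route(query):
    """Return the name of the model that should handle this query."""
    q = query.lower()
    for domain, keywords in SPECIALISTS.items():
        if any(k in q for k in keywords):
            return f"slm-{domain}"
    return "general-llm"    # fall back to the big general-purpose model

print(route("Forecast Q3 cash flow"))
print(route("Tell me about the company history"))
```

    The design point is the one the article makes: the expensive general model only sees traffic the cheap specialists cannot claim.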

    AI’s Hallucination, Power And Training Challenges
    Since LLMs require thousands of AI processing chips (GPUs) to process hundreds of billions of parameters, they can cost millions of dollars to build, especially during training, but also afterwards, when they're handling user inquiries.
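
    The rough arithmetic behind that claim can be checked with the common ~6 × parameters × tokens estimate of training FLOPs for dense transformers. Every number below is an illustrative assumption, not a figure from the article.

```python
def training_flops(params, tokens):
    """Common estimate: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

def gpu_days(flops, flops_per_gpu_per_s=4e14, utilization=0.4):
    """Assume an H100-class GPU (~400 TFLOP/s peak) at 40% utilization."""
    return flops / (flops_per_gpu_per_s * utilization) / 86400

# Illustrative run: a 70B-parameter model trained on 2T tokens.
f = training_flops(params=70e9, tokens=2e12)
print(f"{f:.1e} FLOPs, roughly {gpu_days(f):,.0f} GPU-days")
```

    At cloud GPU rates of a few dollars per GPU-hour, tens of thousands of GPU-days lands squarely in the millions of dollars, before serving costs even start.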

    Reply
  43. Tomi Engdahl says:

    Multilingual and open source: OpenGPT-X research project releases large language model
    https://techxplore.com/news/2024-11-multilingual-source-opengpt-large-language.html

    Reply
  44. Tomi Engdahl says:

    Think Globally, Compute Locally
    This new deep learning architecture brings multimodal transformer models to the edge for fast processing of IoT sensor data.
    https://www.hackster.io/news/think-globally-compute-locally-8297583dc099

    Reply
