3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

6,249 Comments

  1. Tomi Engdahl says:

    Etla: Generative AI has increased the demand for labor
    https://www.uusiteknologia.fi/2024/11/19/etla-generatiivinen-tekoaly-on-lisannyt-tyon-kysyntaa/

    Generative AI has not, at least so far, caused negative labor-market effects in Finland, according to a new report from the Research Institute of the Finnish Economy (Etla). The results point rather to AI having raised labor productivity and, through that, the demand for labor. Earnings growth in occupations exposed to AI has also been faster than in non-exposed occupations.

    As generative AI develops and its use expands, the results may of course change, the researchers caution in advance. The results nevertheless show that earnings growth has been faster in exposed occupations than in non-exposed ones. “In employment trends, by contrast, there has been no difference between these groups,” says Etla research director Antti Kauhanen.

    The labor-market effects of generative AI have been debated ever since ChatGPT was released in November 2022. So far, the most reliable studies of these effects have examined the platform economy, comparing tasks more exposed to AI with less exposed ones before and after ChatGPT’s release.

    According to those results, the demand for labor declined in AI-exposed tasks. Exposure means the extent to which an occupation consists of tasks that could in principle be performed with AI.

    The Etla study published today, “The effects of generative AI are not yet visible in the labor market – wages in the most exposed occupations have if anything risen” (Etla Muistio 143), compares earnings and employment trends in occupations more exposed to generative AI (GenAI) and in less exposed occupations before and after ChatGPT’s release. The study draws on Statistics Finland’s income register.

    Generative AI has not caused negative labor-market effects here, at least not so far. We have examined the effects up to August 2024, a period of 20 months. The results point rather to AI having raised labor productivity and, through that, both the demand for labor and earnings.

    International studies have found that with the arrival of generative AI, the demand for work and pay have declined in translation but increased in web development. In Finland, AI has rather increased the demand for labor.

    Reply
  2. Tomi Engdahl says:

    Assembly of AI computers begins in Salo
    https://www.uusiteknologia.fi/2024/11/19/salossa-alkaa-jollan-tekoalytietokoneiden-kasaaminen/

    Jolla, which began as a mobile phone company, is starting assembly of a new kind of AI computer in Nokia’s former production facilities in Salo. The machine is the new Mind2, whose assembly begins at the Salo IoT Campus in December. Several other technology companies, from design to battery manufacturing, already operate on the campus.

    The former Nokia premises in Salo offer plenty of room to grow. At the same time, Jolla can point the way for the rest of the Finnish electronics industry by showing that production can be built and developed at home.

    “With our own final assembly we can quality-check the electronics and software in detail before the products ship to users,” says Jolla CEO Sami Pienimäki.

    Jolla has largely become a software house, and the Mind2 is built on the Venho.AI artificial-intelligence platform, whose core principles are security and keeping the entire digital experience under the user’s own control.

    Jolla’s new AI computer Mind2 is the first step toward the company’s new Edge AI strategy, under which it develops AI software that runs on local devices. The company’s software can be used, for example, in smart home devices, smart glasses, drones, and car cockpits.

    The Jolla Mind2 can be paired with a phone or another device. Mind2 handles the secure AI processing of its user’s data while also acting as a personal server.

    https://etn.fi/index.php/13-news/16855-jollan-tekoaelytietokone-valmistetaan-salossa

    Reply
  3. Tomi Engdahl says:

    Apple’s AI arrives in Finland
    Petteri Uusitalo / Tivi, 16.11.2024:
    The delay is due to EU regulation, which prohibits Apple from favoring its own services and products.
    https://www.tivi.fi/uutiset/applen-tekoaly-saapuu-suomeen/0608f0dd-a11b-4cf8-b7ae-2518e43bfe5f

    Reply
  4. Tomi Engdahl says:

    Report: OpenAI to launch a task-performing AI agent – first release planned for January
    https://mobiili.fi/2024/11/14/raportti-openailta-tulossa-tehtavia-suorittava-tekoalyagentti-ensimmainen-julkaisu-suunnitelmissa-tammikuulle/

    Reply
  5. Tomi Engdahl says:

    How I write code using Cursor: A review
    https://www.arguingwithalgorithms.com/posts/cursor-review.html

    In forums relating to AI and AI coding in particular, I see a common inquiry from experienced software developers: Is anyone getting value out of tools like Cursor, and is it worth the subscription price?

    A few months into using Cursor as my daily driver for both personal and work projects, I have some observations to share about whether this is a “need-to-have” tool or just a passing fad, as well as strategies to get the most benefit quickly which may help you if you’d like to trial it. Some of you may have tried Cursor and found it underwhelming, and maybe some of these suggestions might inspire you to give it another try.

    What is Cursor?
    Cursor is a fork of Visual Studio Code (VS Code) which has Large Language Model (LLM) powered features integrated into the core UI. It is a proprietary product with a free tier and a subscription option; however, the pricing sheet doesn’t cover what the actual subscriber benefits are and how they compare to competing products. I’ll try to clarify that when discussing the features below based on my own understanding, but a quick summary:

    Tab completion: This is a set of proprietary fine-tuned models that both provide code completion in the editor, as well as navigate to the next recommended action, all triggered by the Tab key. Only available to subscribers.
    Inline editing: This is a chat-based interface for making edits to selected code with a simple diff view using a foundation model such as GPT or Claude. Available to free and paid users.
    Chat sidebar: This is also a chat-based interface for making larger edits in a sidebar view, allowing more room for longer discussion, code sample suggestions across multiple files, etc. using a foundation model such as GPT or Claude. Available to free and paid users.
    Composer: This is yet another chat-based interface specifically meant for larger cross-codebase refactors, generating diffs for multiple files that you can page through and approve, also using a foundation model such as GPT or Claude. Available to free and paid users.

    Tab completion
    While other LLM-powered coding tools focus on a chat experience, so far in my usage of Cursor it’s the tab completion that fits most naturally into my day-to-day practice of coding and saves the most time. A lot of thought and technical research has apparently gone into this feature, so that it can not only suggest completions for a line, several lines, or a whole function, but it can also suggest the next line to go to for the next edit. What this amounts to is being able to make part of a change, and then auto-complete related changes throughout the entire file just by repeatedly pressing Tab.

    One way to use this is as a code refactoring tool on steroids. For example, suppose I have a block of code with variable names in under_score notation that I want to convert to camelCase. It is sufficient to rename one instance of one variable, and then tab through all the lines that should be updated, including the other related variables. Many tedious, error-prone tasks can be automated in this way without having to write a script to do so.
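
    As a hypothetical illustration of that refactor (not Cursor’s own output), the change being tabbed through might look like this:

    # Before: names in under_score notation.
    user_name = "ada"
    user_age = 36
    print(user_name, user_age)

    # After renaming one instance and accepting the tab suggestions:
    userName = "ada"
    userAge = 36
    print(userName, userAge)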

    Sometimes tab completion will independently find a bug and propose a fix. Many times it will suggest imports when I add a dependency in Python or Go. If I wrap a string in quotes, it will escape the contents appropriately.

    All in all, this tool feels like it is reading my mind, guessing at my next action, and allowing me to think less about the code and more about the architecture of what I am building.

    Also worth noting: The completions are incredibly fast, and I never felt a delay waiting for a suggestion. They appear basically as soon as I stop typing. Having too long a wait would surely be a deal-breaker for me.

    One complaint: sometimes a completion is dead wrong, and I intentionally dismiss it. Subsequently, but very infrequently, I will accept a totally different completion and the previously declined suggestion will quietly be applied as well.

    Inline editing, chat sidebar, and composer
    As far as I can tell, these features are all very similar in their interaction with a foundation model – I use Claude 3.5 Sonnet almost exclusively – and the variance is in the user interface.

    Inline editing can be invoked by selecting some code and pressing Ctrl-K/Cmd-K. I type in the desired changes, and get a nice diff in the file that I can accept or reject. I use this mostly to implement bits of code inside a function or make minor refactors.

    Here is an example which takes an application’s database API and creates a REST API to access it, with parameter validation and correct HTTP status codes, then writes a client library to access that REST API.
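
    The original post shows this as a recorded demo. As a rough, hypothetical sketch of the kind of code such a prompt produces (the names, the Flask framework, and the requests-based client are my assumptions, not the post’s), it might look like:

    from flask import Flask, abort, jsonify, request
    import requests

    app = Flask(__name__)
    _DB = {}  # stand-in for the application's database API

    @app.get("/users/<int:user_id>")
    def get_user(user_id: int):
        user = _DB.get(user_id)
        if user is None:
            abort(404, description="user not found")  # correct status for a missing row
        return jsonify(user), 200

    @app.post("/users")
    def create_user():
        body = request.get_json(silent=True)
        if not body or not isinstance(body.get("name"), str):
            abort(400, description="'name' (string) is required")  # parameter validation
        user_id = len(_DB) + 1
        _DB[user_id] = {"id": user_id, "name": body["name"]}
        return jsonify(_DB[user_id]), 201

    def create_user_client(base_url: str, name: str) -> dict:
        # Minimal client-library wrapper over the REST API above.
        resp = requests.post(f"{base_url}/users", json={"name": name}, timeout=10)
        resp.raise_for_status()
        return resp.json()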

    As another example, here I am using the chat sidebar to convert the client library from Python to Go.

    Changes to my workflow
    The most exciting thing about a tool like Cursor is not that I can write code faster, because honestly the actual writing of code is not the bottleneck; in fact, I often have to slow myself down to avoid focusing too much on the code and not enough on the high-level problem being solved. The real value is in changing how I code.

    Summary
    Whether I’ll be using Cursor in a few years or have moved on to another tool, I can’t really tell. I am confident that at the time of writing this, Cursor is the best example of the potential of LLM coding assistants, and if you want to explore how this type of tool might be of value I suggest you give it a spin.

    Reply
  6. Tomi Engdahl says:

    Made a small shell script (curl+jq) that works like ChatGPT, but uses Llama3 kindly provided by DuckDuckGo https://github.com/zoobab/curlduck

    Reply
  7. Tomi Engdahl says:

    Yoshua Bengio / Financial Times:
    AI pioneer Yoshua Bengio says models like OpenAI’s o1 could accelerate research on AI itself and calls for more urgent AI regulation to protect the public
    https://www.ft.com/content/894669d6-d69d-4515-a18f-569afbf710e8

    Reply
  8. Tomi Engdahl says:

    Danielle Abril / Washington Post:
    Microsoft debuts an AI interpreter in Teams in limited preview that can simulate speaker voices and offer near-real-time voice interpretation in nine languages

    https://www.washingtonpost.com/business/2024/11/19/ai-voice-translator-microsoft-language-meetings/

    Reply
  9. Tomi Engdahl says:

    Sara Fischer / Axios:
    Meta is forming a new product group to build AI tools for the 200M businesses that use its apps, led by Clara Shih, who most recently was Salesforce AI’s CEO. Shih will be responsible for developing and monetizing tools within Meta’s AI portfolio that are catered specifically to businesses.

    Scoop: Meta forms product group to build AI tools for businesses
    https://www.axios.com/2024/11/19/meta-new-ai-tools-businesses

    Reply
  10. Tomi Engdahl says:

    Dina Bass / Bloomberg:
    Microsoft unveils Azure AI Foundry to help customers build AI apps, including an SDK, tools for deploying AI agents, and a portal that replaces Azure AI Studio — The software maker also previewed new chips and Office AI features. … Azure AI Foundry will make it easier to switch between …

    Microsoft Unveils Software to Ease AI App Development, Model Switching
    The software maker also previewed new chips and Office AI features.
    https://www.bloomberg.com/news/articles/2024-11-19/microsoft-unveils-software-to-ease-ai-app-development-model-switching

    Reply
  11. Tomi Engdahl says:

    Matt Marshall / VentureBeat:
    Microsoft says it has the largest enterprise AI agent ecosystem, with 100K+ orgs making or editing AI agents in Copilot Studio and “2x growth in just a quarter”

    Microsoft quietly assembles the largest AI agent ecosystem—and no one else is close
    https://venturebeat.com/ai/microsoft-quietly-assembles-the-largest-ai-agent-ecosystem-and-no-one-else-is-close/

    Microsoft has quietly built the largest enterprise AI agent ecosystem, with over 100,000 organizations creating or editing AI agents through its Copilot Studio since launch – a milestone that positions the company ahead in one of enterprise tech’s most closely watched and exciting segments.

    “That’s a lot faster than we thought, and it’s a lot faster than any other kind of cutting edge technology we’ve released,” Charles Lamanna, Microsoft’s executive responsible for the company’s agent vision, told VentureBeat. “And that was like a 2x growth in just a quarter.”

    The rapid adoption comes as Microsoft significantly expands its agent capabilities. At its Ignite conference starting today, the company announced it will allow enterprises to use any of the 1,800 large language models (LLMs) in the Azure catalog within these agents – a significant move beyond its exclusive reliance on OpenAI’s models. The company also unveiled autonomous agents that can work independently, detecting events and orchestrating complex workflows with minimal human oversight. (See our full coverage of today’s Microsoft agent announcements here.)

    Reply
  12. Tomi Engdahl says:

    Sharon Goldman / Fortune:
    How Mark Zuckerberg made Llama a cornerstone of AI ambitions at Meta, whose smartphone-era services and products have been constrained by Apple and Google

    How Mark Zuckerberg has fully rebuilt Meta around Llama
    https://fortune.com/2024/11/19/zuckerberg-meta-ai-openai-llama/?sge246

    It was the summer of 2023, and the question at hand was whether to release a Llama into the wild.

    The Llama in question wasn’t an animal: Llama 2 was the follow-up release of Meta’s generative AI model—a would-be challenger to OpenAI’s GPT-4. The first Llama had come out a few months earlier. It had originally been intended only for researchers, but after it leaked online, it caught on with developers, who loved that it was free—unlike the large language models (LLMs) from OpenAI, Google, and Anthropic—as well as state-of-the-art. Also unlike those rivals, it was open source, which meant researchers, developers, and other users could access the underlying code and its “weights” (which determine how the model processes information) to use, modify, or improve it.

    Yann LeCun, Meta’s chief AI scientist, and Joelle Pineau, VP of AI research and head of Meta’s FAIR (Fundamental AI Research) team, wanted to give Llama 2 a wide open-source release. They felt strongly that open-sourcing Llama 2 would enable the model to become more powerful more quickly, at a lower cost. It could help the company catch up in a generative AI race in which it was seen as lagging badly behind its rivals, even as the company struggled to recover from a pivot to the metaverse whose meager offerings and cheesy, legless avatars had underwhelmed investors and customers.

    But there were also weighty reasons not to take that path. Once customers got accustomed to a free product, how could you ever monetize it? And as other execs pointed out in debates on the topic, the legal repercussions were potentially ugly: What if someone hijacked the model to go on a hacking spree? It didn’t help that two earlier releases of Meta open-source AI products had backfired badly, earning the company tongue-lashings from everyone from scientists to U.S. senators.

    Reply
  13. Tomi Engdahl says:

    Daniel Thomas / Financial Times:
    ProRata, which is launching an AI search engine next month, inks licensing deals with DMG Media and more; sources say DMG bought a stake that values it at $130M

    https://www.ft.com/content/c917a1e1-60a5-42c5-9158-6199f8a1f9ab

    Reply
  14. Tomi Engdahl says:

    Financial Times:
    How AI breakthroughs like diffusion models, vision language models, and liquid neural networks are transforming the way robots learn new skills and tasks

    Are the robots finally coming?
    Advances in physical AI mean machines are learning skills previously thought impossible
    https://ig.ft.com/ai-robots/

    “This is not science fiction,” declared Jensen Huang, boss of chip giant Nvidia, in June, referring to the use of AI to instruct robots to carry out real world tasks. “The next wave of AI is physical AI. AI that understands the laws of physics, AI that can work among us.”

    In many ways this robotics revolution seems long overdue. For decades, people have envisioned living alongside humanoid domestic droids capable of doing their mundane chores.

    But after years of slow progress, researchers now appear to be on the cusp of the dramatic advances required to create a new generation of automatons. AI has powered a number of recent research breakthroughs, bringing within reach complex tasks that had previously separated humans from robots.

    While a multi-purpose machine helper capable of doing everything a human can, only better, is still a way off, the fact that a robot can now put a T-shirt on a coat hanger shows how far the field has come. Such developments could be transformative in the fields of homecare, health and manufacturing — and investors are taking note.

    The excitement around recent advances is attracting growing interest and large sums of cash from a welter of researchers, big tech companies and investors, even if the quantum leap in funding has not quite arrived. More than $11bn of robotics and drone venture capital deals had been done as of late October, surpassing last year’s $9.72bn but not quite reaching the $13.23bn of 2022, according to PitchBook.

    “The floodgates have really opened.”

    Yet in the real world, getting robots to perform even mundane tasks has proved difficult. Interacting with people remains particularly challenging, given robots need to navigate our dynamic spaces and understand the subtle ways humans communicate intentions. The fact that Elon Musk’s humanoid Optimus robots — which were seen serving drinks at a Tesla event — were actually operated remotely by humans is a case in point.

    Limitations of hardware and, especially, software have restricted robot abilities, even as they have transformed some industrial processes, such as automating warehouses. Previous generations of machines had to be programmed using complicated code or were taught slowly through trial and error, techniques that resulted in limited abilities in narrowly defined tasks performed in highly controlled environments.

    But thanks to advances in AI, the past two years have been different, even for those who have been working in the field for some time. “There’s an excitement in the air, and we all think this is something that is advancing at a much faster pace than we thought,” says Carolina Parada, head of Google DeepMind’s robotics team. “And that certainly has people energised.”

    Some of the biggest leaps in the field have been in software, in particular in the way robots are trained.

    AI-powered behaviour cloning methods — where a task is demonstrated to a robot multiple times by a human — have produced remarkable results. Researchers at the Toyota Research Institute can now teach robot arms complex movements within hours instead of weeks.

    Key to this learning process is “diffusion”. Well-known within the world of AI image generation, the technique has been further developed by roboticists.

    Instead of using diffusion to generate images, roboticists have begun to use it to produce actions.

    This means robots can learn a new task — such as how to use a hammer or turn a screw — and then apply it in different settings.

    When diffusion is used for robot manipulation tasks, noise is applied to a training dataset in the form of random trajectories, in a similar way to pixels being added to images.

    The diffusion model then gradually removes this noise until a clean trajectory remains that the robot can follow.
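
    As a toy sketch of that idea (not any lab’s actual pipeline), imagine denoising a one-dimensional action trajectory; a real system would replace the hand-written denoiser below with a learned model:

    import numpy as np

    rng = np.random.default_rng(0)
    T, STEPS = 50, 100                 # waypoints per trajectory, denoising steps
    target = np.linspace(0.0, 1.0, T)  # stand-in for a demonstrated path

    def denoise_direction(x):
        # A trained diffusion model would predict the noise to strip away;
        # this toy version simply points from the noisy sample to the target.
        return target - x

    x = rng.normal(size=T)  # start from pure noise, as in image diffusion
    for _ in range(STEPS):
        x = x + 0.05 * denoise_direction(x)  # remove a little noise each step

    print("max deviation from demonstrated path:", float(np.abs(x - target).max()))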

    Researchers at Stanford University and Google DeepMind have had success with similar diffusion techniques. “We picked three of the most dexterous tasks we could think of . . . and tried to see if we could train a policy [the robot AI system] to do each,” said Stanford assistant professor Chelsea Finn in a post on X. “All of them worked!”

    The team had taught a previous version of the robot to autonomously cook shrimp, clean up stains and call a lift.

    Building on LLMs

    The extraordinary progress in generating text and images using AI over the past two years has been driven by the invention of large language models (LLMs), the systems underpinning chatbots.

    Roboticists are now building on these and their cousins, visually conditioned language models, sometimes called vision-language models, which connect textual information and imagery.

    With access to huge existing troves of text and image data, researchers can “pre-train” their robot models on the nuances of the physical world and how humans describe it, even before they begin to teach their machine students specific actions.

    Reply
  15. Tomi Engdahl says:

    Bloomberg:
    Source: Microsoft signs a deal with News Corp’s HarperCollins to use nonfiction titles to train an unannounced AI model; HarperCollins says authors can opt out. Nonfiction books will be used for training AI models; AI companies and publishers have butted heads in lawsuits.

    https://www.bloomberg.com/news/articles/2024-11-19/microsoft-signs-ai-learning-deal-with-news-corp-s-harpercollins

    Reply
  16. Tomi Engdahl says:

    A Shiny New Programming Language
    Mirror is an entirely new concept in programming — just supply function signatures and some input-output examples, and AI does the rest
    https://www.hackster.io/news/a-shiny-new-programming-language-e41357506c46

    Reply
  17. Tomi Engdahl says:

    US Gathers Allies to Talk AI Safety as Trump’s Vow to Undo Biden’s AI Policy Overshadows Their Work

    Trump promised in his presidential campaign platform to “repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology.”

    https://www.securityweek.com/us-gathers-allies-to-talk-ai-safety-as-trumps-vow-to-undo-bidens-ai-policy-overshadows-their-work/

    President-elect Donald Trump has vowed to repeal President Joe Biden’s signature artificial intelligence policy when he returns to the White House for a second term.

    What that actually means for the future of AI technology remains to be seen. Among those who could use some clarity are the government scientists and AI experts from multiple countries gathering in San Francisco this week to deliberate on AI safety measures.

    Hosted by the Biden administration, officials from a number of U.S. allies — among them Australia, Canada, Japan, Kenya, Singapore, the United Kingdom and the 27-nation European Union — began meeting Wednesday in the California city that’s a commercial hub for AI development.

    Their agenda addresses topics such as how to better detect and combat a flood of AI-generated deepfakes fueling fraud, harmful impersonation and sexual abuse.

    It’s the first such meeting since world leaders agreed at an AI summit in South Korea in May to build a network of publicly backed safety institutes to advance research and testing of the technology.

    Reply
  18. Tomi Engdahl says:

    Surf Security Adds Deepfake Detection Tool to Enterprise Browser

    Surf Security has released Deepwater, a deepfake detection tool integrated into the company’s enterprise browser.

    https://www.securityweek.com/surf-security-adds-deepfake-detection-tool-to-enterprise-browser/

    Surf’s Enterprise Zero-Trust Browser is a security-focused browser that provides data leakage prevention, download protection, anti-social engineering, and access control capabilities.

    Reply
  19. Tomi Engdahl says:

    Stubb at Slush: we are at a turning point
    https://etn.fi/index.php/13-news/16864-stubb-slushissa-olemme-kaeaennekohdassa

    The startup event Slush opened today at the Helsinki Expo Centre, fittingly in sleet. AI plays a major role at the event, and President Alexander Stubb, who made it to the venue, also touched on it in his opening address.

    Stubb has been to Slush before. He recalled 2008, when the thinking was that everyone should learn to code. “Now AI seems to take care of that,” Stubb said.

    In his familiar manner, Stubb listed three main things that everyone, including the companies hunting for funding at Slush, should think about. “First, you have to consider what technology does for you. Data is collected on your behalf all the time, so you have to consider what technology means for us. Life is more than managing data, though. You have to think about how technology can be used to show empathy,” Stubb advised.

    The same applies to companies. Does the company do good, does it help us, does it make healthcare, say, better? “You have to think about how technology can improve the world.”

    According to Stubb, we are now coming to a turning point. “With AI, for the first time we have something that exceeds our own abilities. We need rules for AI so that humans stay in the driver’s seat.”

    Reply
  20. Tomi Engdahl says:

    Paresh Dave / Wired:
    Four former Google executives say innovation by rivals is key to breaking Google Search’s dominance, and ChatGPT-like tools will one day supplant Google Search

    Selling Chrome Won’t Be Enough to End Google’s Search Monopoly
    Despite shared concerns about Google’s power, critics of the company and former executives express little agreement on what, if anything, can really be done to increase competition.
    https://www.wired.com/story/doj-google-chrome-antitrust/

    To dismantle Google’s illegal monopoly over how Americans search the web, the US Department of Justice wants the tech giant to end its lucrative partnership with Apple, share a trove of proprietary data with competitors and advertisers, and “promptly and fully divest Chrome,” Google’s web browser, which controls over half of the US market. The government wants Google to sell Chrome to a buyer it approves, arguing the divestiture would “pry open the monopolized markets to competition, remove barriers to entry, and ensure there remain no practices likely to result in unlawful monopolization.”

    Reply
  21. Tomi Engdahl says:

    Maxwell Zeff / TechCrunch:
    Jensen Huang says “foundation model pretraining scaling is intact” and models like OpenAI’s o1 could play a larger role in Nvidia’s business moving forward

    Nvidia’s CEO defends his moat as AI labs change how they improve their AI models
    https://techcrunch.com/2024/11/20/nvidias-ceo-defends-his-moat-as-ai-labs-change-how-they-improve-their-ai-models/

    Nvidia raked in more than $19 billion in net income during the last quarter, the company reported on Wednesday, but that did little to assure investors that its rapid growth would continue. On its earnings call, analysts prodded CEO Jensen Huang about how Nvidia would fare if tech companies start using new methods to improve their AI models.

    The method that underpins OpenAI’s o1 model, or “test-time scaling,” came up quite a lot. It’s the idea that AI models will give better answers if you give them more time and computing power to “think” through questions. Specifically, it adds more compute to the AI inference phase, which is everything that happens after a user hits enter on their prompt.
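
    OpenAI hasn’t published o1’s method, but one simple flavor of test-time scaling is self-consistency sampling: draw several answers, spending more inference compute, and keep the most common one. A minimal sketch, with ask_model as an invented stand-in for a sampled LLM call:

    import random
    from collections import Counter

    def ask_model(prompt: str) -> str:
        # Invented stand-in: a real system would sample an LLM at a
        # nonzero temperature here.
        return random.choice(["42", "42", "42", "41"])

    def answer_with_test_time_scaling(prompt: str, n_samples: int = 16) -> str:
        # More samples means more inference-time compute and, often, a better answer.
        votes = Counter(ask_model(prompt) for _ in range(n_samples))
        return votes.most_common(1)[0][0]

    print(answer_with_test_time_scaling("What is 6 * 7?"))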

    Nvidia’s CEO was asked whether he was seeing AI model developers shift over to these new methods and how Nvidia’s older chips would work for AI inference.

    Huang indicated that o1, and test-time scaling more broadly, could play a larger role in Nvidia’s business moving forward, calling it “one of the most exciting developments” and “a new scaling law.” Huang did his best to assure investors that Nvidia is well positioned for the change.

    Reply
  22. Tomi Engdahl says:

    Michael Nuñez / VentureBeat:
    A look at OpenScholar, an LLM for scientific research built by the Allen Institute for AI and the University of Washington that outperforms GPT-4o on accuracy

    OpenScholar: The open-source A.I. that’s outperforming GPT-4o in scientific research
    https://venturebeat.com/ai/openscholar-the-open-source-a-i-thats-outperforming-gpt-4o-in-scientific-research/

    Scientists are drowning in data. With millions of research papers published every year, even the most dedicated experts struggle to stay updated on the latest findings in their fields.

    A new artificial intelligence system, called OpenScholar, is promising to rewrite the rules for how researchers access, evaluate, and synthesize scientific literature. Built by the Allen Institute for AI (Ai2) and the University of Washington, OpenScholar combines cutting-edge retrieval systems with a fine-tuned language model to deliver citation-backed, comprehensive answers to complex research questions.

    “Scientific progress depends on researchers’ ability to synthesize the growing body of literature,” the OpenScholar researchers wrote in their paper. But that ability is increasingly constrained by the sheer volume of information. OpenScholar, they argue, offers a path forward—one that not only helps researchers navigate the deluge of papers but also challenges the dominance of proprietary AI systems like OpenAI’s GPT-4o.

    How OpenScholar’s AI brain processes 45 million research papers in seconds

    At OpenScholar’s core is a retrieval-augmented language model that taps into a datastore of more than 45 million open-access academic papers. When a researcher asks a question, OpenScholar doesn’t merely generate a response from pre-trained knowledge, as models like GPT-4o often do. Instead, it actively retrieves relevant papers, synthesizes their findings, and generates an answer grounded in those sources.
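
    A minimal sketch of that retrieve-then-generate pattern, assuming a toy three-document corpus and a TF-IDF retriever (OpenScholar’s datastore holds 45 million papers and its generator is a fine-tuned LLM):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "Paper A: retrieval-augmented models ground answers in sources.",
        "Paper B: large language models can hallucinate citations.",
        "Paper C: diffusion models generate images from noise.",
    ]
    vectorizer = TfidfVectorizer().fit(corpus)
    doc_matrix = vectorizer.transform(corpus)

    def retrieve(query: str, k: int = 2) -> list[str]:
        # Rank documents by cosine similarity to the query and keep the top k.
        scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
        return [corpus[i] for i in scores.argsort()[::-1][:k]]

    def grounded_prompt(query: str) -> str:
        # A real system hands this prompt to an LLM; here we only build it.
        passages = "\n".join(retrieve(query))
        return f"Answer using only these sources:\n{passages}\nQ: {query}"

    print(grounded_prompt("Do language models fabricate citations?"))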

    This ability to stay “grounded” in real literature is a major differentiator. In tests using a new benchmark called ScholarQABench, designed specifically to evaluate AI systems on open-ended scientific questions, OpenScholar excelled. The system demonstrated superior performance on factuality and citation accuracy, even outperforming much larger proprietary models like GPT-4o.

    One particularly damning finding involved GPT-4o’s tendency to generate fabricated citations—hallucinations, in AI parlance. When tasked with answering biomedical research questions, GPT-4o cited nonexistent papers in more than 90% of cases. OpenScholar, by contrast, remained firmly anchored in verifiable sources.

    Reply
  23. Tomi Engdahl says:

    Leah Nylen / Bloomberg:
    Filing: the US DOJ’s proposal requires Google to allow websites more ability to opt-out of its AI products and provide more ad placement controls to advertisers

    https://www.bloomberg.com/news/articles/2024-11-21/justice-department-seeks-google-chrome-sale-to-curb-monopoly

    Reply
  24. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    OpenAI releases a free online course for K-12 teachers to integrate ChatGPT into their classrooms, but some educators remain skeptical about the technology

    OpenAI releases a teacher’s guide to ChatGPT, but some educators are skeptical
    https://techcrunch.com/2024/11/20/openai-releases-a-teachers-guide-to-chatgpt-but-some-educators-are-skeptical/

    OpenAI envisions teachers using its AI-powered tools to create lesson plans and interactive tutorials for students. But some educators are wary of the technology — and its potential to go awry.

    Today, OpenAI released a free online course designed to help K-12 teachers learn how to bring ChatGPT, the company’s AI chatbot platform, into their classrooms. Created in collaboration with the nonprofit organization Common Sense Media, with which OpenAI has an active partnership, the one-hour, nine-module program covers the basics of AI and its pedagogical applications.

    OpenAI says that it’s already deployed the course in “dozens” of schools, including the Agua Fria School District in Arizona, the San Bernardino School District in California, and the charter school system Challenger Schools. Per the company’s internal research, 98% of participants said the program offered new ideas or strategies that they could apply to their work.

    “Schools across the country are grappling with new opportunities and challenges as AI reshapes education,” Robbie Torney, senior director of AI programs at Common Sense Media, said in a statement. “With this course, we are taking a proactive approach to support and educate teachers on the front lines and prepare for this transformation.”

    But some educators don’t see the program as helpful — and think it could, in fact, mislead.

    https://www.commonsense.org/education/training/chatgpt-k12-foundations

    Reply
  25. Tomi Engdahl says:

    Matt Swayne / The Quantum Insider:
    Google researchers introduce AlphaQubit, an AI-powered machine-learning decoder that surpasses existing methods in identifying and correcting quantum computing errors, reducing errors by 6% compared …

    AI Power For Quantum Errors: Google Develops AlphaQubit to Identify, Correct Quantum Errors
    https://thequantuminsider.com/2024/11/20/ai-power-for-quantum-errors-google-develops-alphaqubit-to-identify-correct-quantum-errors/

    Reply
  26. Tomi Engdahl says:

    Larry Dignan / Constellation Research:
    Snowflake reports Q3 revenue up 28% YoY to $942.09M, vs. $898.46M est., unveils a deal with Anthropic to bring Claude to Cortex AI; SNOW jumps 18%+ after hours — Snowflake said it has inked a multi-year deal to bring Anthropic’s Claude models to Snowflake Cortex AI.

    Snowflake, Anthropic ink LLM partnership, delivers strong Q3, acquires Datavolo
    https://www.constellationr.com/blog-news/insights/snowflake-anthropic-ink-llm-partnership-delivers-strong-q3-acquires-datavolo

    Reply
  27. Tomi Engdahl says:

    Wall Street Journal:
    Sources: xAI has told investors it raised $5B in a funding round valuing it at $50B and that its revenue has reached $100M on an annualized basis. The artificial intelligence startup has more than doubled its valuation from the spring.

    Elon Musk’s xAI Startup Is Valued at $50 Billion in New Funding Round
    The artificial-intelligence company has more than doubled its valuation since the spring
    https://www.wsj.com/tech/ai/elon-musks-startup-xai-valued-at-50-billion-in-new-funding-round-7e3669dc?st=qC75Gy&reflink=desktopwebshare_permalink

    Elon Musk’s artificial-intelligence startup, xAI, has told investors it raised $5 billion in a funding round valuing it at $50 billion—more than twice what it was valued at several months ago.

    Qatar’s sovereign-wealth fund, Qatar Investment Authority, and investment firms Valor Equity Partners, Sequoia Capital and Andreessen Horowitz are expected to participate in the round, according to people familiar with the matter. The financing brings the total amount xAI has raised to $11 billion this year.

    xAI was previously raising funds at a $40 billion valuation, before factoring in the new cash, The Wall Street Journal reported. Over the past few weeks, xAI raised that figure by $5 billion in negotiations with investors. The infusion of new cash brings its total post-investment value to $50 billion.

    xAI was valued at $24 billion when it raised $6 billion in the spring.

    Reply
  28. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Chinese AI company DeepSeek unveils DeepSeek-R1-Lite-Preview, a “reasoning” AI model that it claims is competitive with OpenAI’s o1, and plans to open source it

    A Chinese lab has released a ‘reasoning’ AI model to rival OpenAI’s o1
    https://techcrunch.com/2024/11/20/a-chinese-lab-has-released-a-model-to-rival-openais-o1/

    A Chinese lab has unveiled what appears to be one of the first “reasoning” AI models to rival OpenAI’s o1.

    On Wednesday, DeepSeek, an AI research company funded by quantitative traders, released a preview of DeepSeek-R1, which the firm claims is a reasoning model competitive with o1.

    Unlike most models, reasoning models effectively fact-check themselves by spending more time considering a question or query. This helps them avoid some of the pitfalls that normally trip up models.

    Similar to o1, DeepSeek-R1 reasons through tasks, planning ahead, and performing a series of actions that help the model arrive at an answer. This can take a while. Like o1, depending on the complexity of the question, DeepSeek-R1 might “think” for tens of seconds before answering.

    DeepSeek claims that DeepSeek-R1 (or DeepSeek-R1-Lite-Preview, to be precise) performs on par with OpenAI’s o1-preview model on two popular AI benchmarks, AIME and MATH. AIME uses other AI models to evaluate a model’s performance, while MATH is a collection of word problems. But the model isn’t perfect. Some commentators on X noted that DeepSeek-R1 struggles with tic-tac-toe and other logic problems (as does o1).

    DeepSeek can also be easily jailbroken — that is, prompted in such a way that it ignores safeguards. One X user got the model to give a detailed meth recipe.

    And DeepSeek-R1 appears to block queries deemed too politically sensitive. In our testing, the model refused to answer questions about Chinese leader Xi Jinping, Tiananmen Square, and the geopolitical implications of China invading Taiwan.

    Reply
  29. Tomi Engdahl says:

    Zac Hall / 9to5Mac:
    Meta unveils new Messenger features, including audio and video voicemail, AI backgrounds for video calls, and hands-free calling and messaging via Siri

    https://9to5mac.com/2024/11/20/facebook-messenger-siri-ai-video-backgrounds-voicemail/

    Reply
  30. Tomi Engdahl says:

    First photonic processor unveiled – faster AI training
    https://etn.fi/index.php/13-news/16868-ensimmaeinen-fotoniprosessori-julkistettiin-ai-koulutus-nopeammaksi

    The German startup Q.ANT has unveiled its first commercial photonic processor, which brings a whole new level of energy efficiency and performance to computing and AI applications. The new Native Processing Unit (NPU) computes with light instead of conventional electrons.

    The processor is expected to be up to 30 times more energy-efficient than conventional CMOS-based solutions, and it promises substantial performance gains especially in AI inference and complex mathematical calculations.

    Q.ANT CEO Michael Förtsch describes the processor as “a milestone in uniting photonics with sustainable development”. The product is based on the company’s own LENA architecture (Light Empowered Native Arithmetics) and uses thin-film lithium niobate technology that Q.ANT has been developing since 2018. The processor comes in a PCI Express-compatible form, making it easy to integrate into existing computer systems.

    The photonic processor can tackle the computational challenges of, for example, large language models (LLMs) and machine learning far more energy-efficiently. Q.ANT’s tests showed that the device can cut the number of machine-learning parameters and operations almost in half compared with conventional methods. This also opens the door to faster AI training and inference.

    Reply
  31. Tomi Engdahl says:

    AI – Implementing the Right Technology for the Right Use Case

    Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.

    https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/

    If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.

    Don’t get me wrong, organizations are already starting to adopt AI across a broad range of business divisions:

    Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
    Employees are using third party GenAI tools for research and productivity purposes
    Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
    Companies are building their own LLMs for internal use cases and commercial purposes.

    AI is still maturing

    However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. One of the most well-known models to measure technology maturity is the Gartner hype cycle. This tracks tools through the initial “innovation trigger”, through the “peak of inflated expectations” to the “trough of disillusionment”, followed by the “slope of enlightenment” and finally reaching the “plateau of productivity”.

    Taking this model, I liken AI to the hype that we witnessed around cloud a decade ago when everyone was rushing to migrate to “the cloud” – at the time a universal term that had different meaning to different people. “The cloud” went through all stages of the hype cycle and we continue to find more specific use cases to focus on for greatest productivity. In the present day many are now thinking about how they ‘right-size’ their cloud to their environment. In some cases, they are moving part of their infrastructure back to on-premises or hybrid/multi-cloud models.

    Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization. This is a theme that also emerged as cybersecurity automation matured – the need to identify the right use case for the technology, rather than try to apply it across the board.

    There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.

    Understanding what data is being shared

    This is a fundamental issue for security leaders: identifying who is using AI tools and what they are using them for. What company data are they sharing with external tools, are those tools secure, and are they as innocent as they seem? For example, are the GenAI code assistants used by developers returning bad code and introducing a security risk? Then there are aspects like Dark AI, the malicious use of AI technologies to facilitate cyber-attacks; hallucinations; and data poisoning, where malicious data is fed in to manipulate a model, which could result in bad decisions being made.

    To this point, a survey (PDF) of Chief Information Security Officers (CISOs) by Splunk found that 70% believe generative AI could give cyber adversaries more opportunities to commit attacks. Certainly, the prevailing opinion is that AI is benefiting attackers more than defenders.

    Finding the right balance

    Therefore, our approach to AI is focused on taking a balanced view. AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained, so it is about finding the right balance and using the technology in the right scenarios for the right use case and getting the outcomes that you need.

    Looking to the future, as companies better understand the use cases for AI, it will evolve from regular Gen AI to incorporate additional technologies as well. To date, generative AI applications have overwhelmingly focused on the divergence of information. That is, they create new content based on a set of instructions. As AI evolves, we believe we will see more applications of AI that converge information. In other words, they will show us less content by synthesizing the information available, which industry pundits are aptly calling “SynthAI”. This will bring a step function change to the value that AI can deliver – I’ll discuss this in a future article.

    https://www.splunk.com/en_us/pdfs/gated/ebooks/the-ciso-report.pdf

    Reply
  32. Tomi Engdahl says:

    Mirror: An LLM-powered programming-by-example programming language
    https://austinhenley.com/blog/mirrorlang.html

    Programming by example is a technique where users provide examples of the outcome they want, and the system generates code that can perform it. For example, in Excel, you can demonstrate how you want a column formatted through an example or two, and Excel will learn a pattern and apply it to the rest.

    But what if there was a programming language that only allows programming by example? Can we integrate AI into traditional programming languages?

    I wanted to take the idea of programming by example to the extreme. In the Mirror language, all you can do is define functions through a set of example input-outputs pairs, then call the functions. That is it. Everything must be expressed through examples.

    Let’s start with a really simple example of how we would express an is_even function:

    signature is_even(x: number) -> bool
    example is_even(0) -> true
    example is_even(1) -> false
    example is_even(222) -> true
    example is_even(-99) -> false

    You provide the function name, parameters and their types, and the return type. Then you provide one or more examples with the expected result. It uses a strict syntax and supports a handful of basic types. You can create as many functions as you need and then chain them together.

    After parsing using a traditional recursive descent parser, the “compiler” uses an LLM to generate JavaScript that satisfies the constraints expressed by the examples. You can see the generated code to verify if it is correct, or you can provide more examples and recompile it.
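
    The post doesn’t show the compiler’s internals, but the generate-and-check loop it describes might look roughly like this sketch, where llm_generate is an invented stand-in (returning a hard-coded candidate so the example runs) and a Python eval substitutes for executing the generated JavaScript:

    def llm_generate(signature: str, examples) -> str:
        # Invented stand-in: the real compiler prompts an LLM with the
        # signature and the examples. We return a fixed candidate for is_even.
        return "lambda x: x % 2 == 0"

    def run_candidate(code: str, args):
        # Mirror executes generated JavaScript; evaluating a Python lambda
        # stands in for that here.
        return eval(code)(*args)

    def compile_by_example(signature: str, examples, retries: int = 3) -> str:
        for _ in range(retries):
            code = llm_generate(signature, examples)
            # Accept the candidate only if every example pair is satisfied.
            if all(run_candidate(code, args) == want for args, want in examples):
                return code
        raise RuntimeError("no candidate satisfied all examples; add more examples")

    print(compile_by_example(
        "is_even(x: number) -> bool",
        [((0,), True), ((1,), False), ((222,), True), ((-99,), False)],
    ))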

    You can call your functions, either chaining them or passing literals as arguments, and it will execute the generated JavaScript, and print out the result of each.

    Mirror: An LLM-powered programming-by-example programming language.
    https://github.com/AZHenley/Mirror

    In the Mirror programming language, you define functions by providing sets of input-outputs pairs and you can call those functions. That is it. The entire program behavior must be specified through those examples.

    Reply
  34. Tomi Engdahl says:

    AI is a scale function

    That said, AI is and will continue to be a useful tool. In today’s economic climate, as businesses adapt to a new normal of continuous change, AI—alongside automation—can be a scale function for cybersecurity teams, enabling them to pivot and scale to defend against evermore diverse attacks. In fact, our recent survey of 750 cybersecurity professionals found that 58% of organizations are already using AI in cybersecurity to some extent.

    https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/

    Reply
  35. Tomi Engdahl says:

    Mark Gurman / Bloomberg:
    Sources: Apple is testing a more conversational version of Siri, dubbed “LLM Siri”, with plans to release it in spring 2026 as part of iOS 19 and macOS 16

    Apple Readies More Conversational Siri in Bid to Catch Up in AI
    https://www.bloomberg.com/news/articles/2024-11-21/apple-readies-more-conversational-llm-siri-in-bid-to-rival-openai-s-chatgpt

    Company is working on overhaul that employees call ‘LLM Siri’
    Unveiling planned for next year, with rollout coming in 2026

    Reply
  36. Tomi Engdahl says:

    The Information:
    Sources: OpenAI considered making a browser, discussed deals to power AI features on Samsung devices and search on sites and apps from Condé Nast and others. OpenAI is preparing to launch a frontal assault on Google. The ChatGPT owner recently considered developing a web browser …
    https://www.theinformation.com/articles/openai-considers-taking-on-google-with-browser

    Reply
  37. Tomi Engdahl says:

    Aisha Malik / TechCrunch:
    Brave Search introduces AI chat to let users ask follow-up questions to initial queries, bringing together capabilities of chat-first and search-first tools

    Brave Search adds AI chat for follow-up questions after your initial query
    https://techcrunch.com/2024/11/21/brave-search-adds-ai-chat-for-follow-up-questions-after-your-initial-query/

    Brave announced on Thursday that it’s introducing an AI chat mode for follow-up questions based on initial queries on Brave Search. Earlier this year, the company launched “Answer with AI” summaries that appear above search results after you submit a query to give you an easy-to-read answer in response to a question. Now the Answer with AI summaries will include a chat bar that lets you ask follow-up questions regarding your initial query.

    The new feature gives users access to an experience that’s not available on Google, the largest search engine in the world. While Google does offer “AI Overviews,” which are similar to Brave’s “Answer with AI” summaries, the company does not offer a way for users to ask additional questions based on an initial query, as users instead have to complete an entirely new search when looking for more information.

    If you go to Brave Search and type in “Christopher Nolan films,” you will get an AI-generated summary that gives you an overview of who Christopher Nolan is and a list of some of his notable films. Starting today, you will now see a chat bar under the AI-generated summary that reads, “Ask a follow-up question.”

    Reply
  38. Tomi Engdahl says:

    Aisha Malik / TechCrunch:
    Meta is rolling out voice message transcripts on WhatsApp globally over the coming weeks in select languages, and says they are generated on users’ devices — WhatsApp announced on Thursday it’s rolling out voice message transcripts. The Meta-owned company says the new feature will come …

    WhatsApp rolls out voice message transcripts
    https://techcrunch.com/2024/11/21/whatsapp-rolls-out-voice-message-transcripts/

    Reply
  39. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    NYC-based Lightning AI, which lets customers fine-tune and run AI models in their preferred cloud environments, raised $50M, taking its total funding to $103M

    Lightning looks to make managing AI a piece of cake
    https://techcrunch.com/2024/11/21/lightning-ai-looks-to-make-managing-ai-a-piece-of-cake/

    Reply
  40. Tomi Engdahl says:

    AI took over Slush
    https://etn.fi/index.php/13-news/16871-tekoaely-valtasi-slushin

    The startup event Slush is over, but what were the takeaways? AI was already widely on display last year, but this time it was clearly center stage. Can a company even be founded anymore without some connection to AI? Or, to put it in Slush terms, can a business idea without one still get funding?

    President Alexander Stubb, who opened the event, already spoke about AI. According to him, we are now coming to a turning point. “With AI, for the first time we have something that exceeds our own abilities. We need rules for AI so that humans stay in the driver’s seat,” Stubb demanded.

    This year Slush hosted Chris Malachowsky, one of the founders of Nvidia, currently by far the largest semiconductor company. Nvidia is now minting money at a furious pace, but its early years after the company’s founding in 1993 were not pure joy either. Its first product flopped completely.

    “We wanted to become the world’s best graphics company, but we came from the workstation world, where a high price was accepted.”

    In AI, the vast majority of companies are startups. The label also fits Hugging Face, the developer of open models, whose co-founder Thomas Wolf shared his views on the differences between open and closed source in the AI business in a Slush panel. According to Wolf, AI is a foundational technology like the internet.

    “We depend on the network, and soon AI will be everywhere. It would be an impossible situation if you always had to develop models from scratch to make use of AI,” Wolf said.

    OpenAI’s ChatGPT was of course built on closed models, which during the first couple of years of the AI era were indeed superior in performance. Now open models have become highly capable, Wolf noted. The number of models also keeps growing: Meta’s Llama, for example, has 1,800 different variants.

    It is also thanks to open models that small models are now spreading fast. AI can already be run locally on a laptop. Smaller models keep getting more accurate and better.

    So what happens next in AI? What will we hear about at next year’s Slush? According to Wolf, AI is at least coming to science. “With AI you can predict a great many things in science, such as the weather. What is now slow and expensive, AI will make fast and affordable,” Wolf concluded.

    Reply
