3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

6,254 Comments

  1. Tomi Engdahl says:

    Move over, Devin: Cosine’s Genie takes the AI coding crown
    https://venturebeat.com/programming-development/move-over-devin-cosines-genie-takes-the-ai-coding-crown/

    It wasn’t long ago that the startup Cognition was blowing minds with its product Devin, an AI-based software engineer powered by OpenAI’s GPT-4 foundation large language model (LLM) on the backend that could autonomously write and edit code when given instructions in natural language text.

    But Devin emerged in March 2024 — five months ago — an eternity in the fast-moving generative AI space.

    Reply
  2. Tomi Engdahl says:

    Job applications with AI – the top 5 blunders (and how to avoid them)
    https://tyopaikat.oikotie.fi/tyontekijalle/artikkelit/tyohakemus-tekoalylla-top-5-munaukset-ja-kuinka-ne-valtetaan

    Writing a job application with AI – a piece of cake or a barrier to employment? Read this to avoid the worst AI blunders!

    Writing a job application with AI sounds quick and easy. Who wouldn’t want to lounge in a hammock while the AI does the heavy lifting? Nina van der Giessen, Customer Success Manager at Oikotie Työpaikat, would not blindly trust AI, however. Daily conversations with numerous employers and HR professionals have shown that AI-generated applications are often recognizable, or end up rather generic.

    Unfortunately, AI offers no easy fix for the job-application writer. AI could be described as a good servant but a poor master: it is an excellent helping hand, but it should not be given free rein. In this article we explain how you can botch a job application with AI, and also how to avoid the worst AI mistakes in your application.

    Reply
  3. Tomi Engdahl says:

    That’s billions with a b — and an s.

    Microsoft Is Losing a Staggering Amount of Money on AI
    https://futurism.com/the-byte/microsoft-losing-money-ai?fbclid=IwZXh0bgNhZW0CMTEAAR2PYcTHlDutotafIKvhMu3IFzZHfnRwv73EZB8e1UxazV7Qe0Lm7-7PXEc_aem_QkF7rNpfbWFtpziLZYl0PA

    AI remains an astronomical money pit.

    Microsoft has spent a staggering amount of money on AI — and serious profits likely remain many years out, if they’re ever realized.

    The tech giant revealed that during the quarter ending in June, it spent an astonishing $19 billion in cash capital expenditures and equipment, the Wall Street Journal reports — the equivalent of what it used to spend in a whole year a mere five years ago.

    Unsurprisingly, most of that $19 billion was related to AI, and roughly half of it went toward building out and leasing data centers.

    Major tech companies are investing heavily to capitalize on the current hype surrounding generative AI. And the costs are racking up, with specialized chips and huge amounts of electricity straining budgets across the tech world.

    Microsoft still has plenty of money in the bank despite spending a record amount last quarter: the company took in $36.8 billion in cloud revenue alone and a record $109 billion in total operating income.

    Nonetheless, whether the $19 billion — and far more that will surely chase it — will ever be recouped remains to be seen. Despite the astronomical costs, Microsoft and its peers have yet to see any significant amounts of revenue from AI worthy of disclosure in financial earnings, meaning they’re currently losing a staggering amount of money on the tech.

    Last week, The Information reported that OpenAI could end up losing $5 billion this year alone, and could run out of cash in the next 12 months without any major cash injections.

    “Despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful,” Goldman Sachs’ most senior stock analyst Jim Covello argued in a report last month.

    Google has similarly had to reassure investors, with CEO Sundar Pichai saying that the company was “at the early stage of what I view as a very transformative area”

    Much like Microsoft, Google has rapidly ramped up spending on AI infrastructure in the first half of this year, and is expected to spend $49 billion in capital expenditures by the end of the year.

    For his part, Microsoft CEO Satya Nadella asserted during this week’s earnings call that the company was justified in spending the eye-searing $19 billion, arguing that the company had the “demand signal.”

    Reply
  4. Tomi Engdahl says:

    This Entirely AI-Generated Local “News” Site Is Incredibly Depressing
    “The articles should only be about events and fun and good times.”
    https://futurism.com/entirely-ai-generated-news-site

    Reply
  5. Tomi Engdahl says:

    Kate Knibbs / Wired:
    Condé Nast agrees a multiyear deal with OpenAI, letting ChatGPT and SearchGPT surface stories from The New Yorker, Vogue, Vanity Fair, Bon Appetit, and Wired — The media company joins The Atlantic, Axel Springer, Vox Media, and a host of other publishers who have partnered with OpenAI.

    Condé Nast Signs Deal With OpenAI
    The media company joins The Atlantic, Axel Springer, Vox Media, and a host of other publishers who have partnered with OpenAI.
    https://www.wired.com/story/conde-nast-openai-deal/

    Reply
  6. Tomi Engdahl says:

    Blake Brittain / Reuters:
    Writers and journalists Andrea Bartz, Charles Graeber, and Kirk Johnson sue Anthropic for misusing “hundreds of thousands of copyrighted books” to train Claude

    Authors sue Anthropic for copyright infringement over AI training
    https://www.reuters.com/technology/artificial-intelligence/authors-sue-anthropic-copyright-infringement-over-ai-training-2024-08-20/

    Reply
  7. Tomi Engdahl says:

    One in five Finnish companies is investing heavily in GenAI technology
    https://etn.fi/index.php/13-news/16502-joka-viides-suomalaisyritys-panostaa-vahvasti-genai-tekniikkaan

    A new study of one hundred Finnish organizations shows that many companies are only getting started with generative AI. Increasing the use of generative AI is the year’s most important innovation goal for roughly a fifth (22%) of Finnish organizations.

    This comes from an international study commissioned by Dell Technologies and conducted by Vanson Bourne, which interviewed 6,600 executives.

    Finnish companies are, however, still in the early stages of their AI journey, with many only now mapping out the new technology’s possibilities. 18 percent of organizations still have no AI strategy at all, and an equal share have launched pilot projects. Only 6 percent have reached the point where generative AI tools are in use and employees have received the relevant training to make full use of them.

    Notably, more than one in ten organizations (11%) has banned the use of generative AI entirely.

    Reply
  8. Tomi Engdahl says:

    Jess Weatherbed / The Verge:
    A look at interoperability challenges to the adoption of the C2PA’s authentication standard, including the lack of support for C2PA metadata in cameras

    This system can sort real pictures from AI fakes — why aren’t platforms using it?
    Big tech companies are backing the C2PA’s authentication standard, but they’re taking too long to put it to use.
    https://www.theverge.com/2024/8/21/24223932/c2pa-standard-verify-ai-generated-images-content-credentials

    As the US presidential election approaches, the web has been filled with photos of Donald Trump and Kamala Harris: spectacularly well-timed photos of an attempted assassination; utterly mundane photos of rally crowds; and shockingly out-of-character photos of the candidates burning flags and holding guns. Some of these things didn’t actually happen, of course. But generative AI imaging tools are now so adept and accessible that we can’t really trust our eyes anymore.

    Some of the biggest names in digital media have been working to sort out this mess, and their solution so far is: more data — specifically, metadata that attaches to a photo and tells you what’s real, what’s fake, and how that fakery happened. One of the best-known systems for this, C2PA authentication, already has the backing of companies like Microsoft, Adobe, Arm, OpenAI, Intel, Truepic, and Google. The technical standard provides key information about where images originate from, letting viewers identify whether they’ve been manipulated.

    “Provenance technologies like Content Credentials — which act like a nutrition label for digital content — offer a promising solution by enabling official event photos and other content to carry verifiable metadata like date and time, or if needed, signal whether or not AI was used,” Andy Parsons, a steering committee member of C2PA and senior director for CAI at Adobe, told The Verge. “This level of transparency can help dispel doubt, particularly during breaking news and election cycles.”

    But if all the information needed to authenticate images can already be embedded in the files, where is it? And why aren’t we seeing some kind of “verified” mark when the photos are published online?

    The problem is interoperability. There are still huge gaps in how this system is being implemented, and it’s taking years to get all the necessary players on board to make it work. And if we can’t get everyone on board, then the initiative might be doomed to fail.

    The Coalition for Content Provenance and Authenticity (C2PA) is one of the largest groups trying to address this chaos, alongside the Content Authenticity Initiative (CAI) that Adobe kicked off in 2019. The technical standard they’ve developed uses cryptographic digital signatures to verify the authenticity of digital media, and it’s already been established. But this progress is still frustratingly inaccessible to the everyday folks who stumble across questionable images online.

    “It’s important to realize that we’re still in the early stage of adoption,” said Parsons. “The spec is locked. It’s robust. It’s been looked at by security professionals. The implementations are few and far between, but that’s just the natural course of getting standards adopted.”

    The problems start from the origin of the images: the camera. Some camera brands like Sony and Leica already embed cryptographic digital signatures based on C2PA’s open technical standard — which provides information like the camera settings and the date and location where an image was taken — into photographs the moment they’re taken.

    This is currently only supported on a handful of cameras, across both new models like the Leica M11-P or via firmware updates for existing models like Sony’s Alpha 1, Alpha 7S III, and Alpha 7 IV. While other brands like Nikon and Canon have also pledged to adopt the C2PA standard, most have yet to meaningfully do so. Smartphones, which are typically the most accessible cameras for most folks, are also lacking.

    If the cameras themselves don’t record this precious data, important information can still be applied during the editing process. Software like Adobe’s Photoshop and Lightroom, two of the most widely used image editing apps in the photography industry, can automatically embed this data in the form of C2PA-supported Content Credentials, which note how and when an image has been altered. That includes any use of generative AI tools, which could help to identify images that have been falsely doctored.

    But again, many applications, including Affinity Photo and GIMP, don’t support a unified, interoperable metadata solution that can help resolve authenticity issues. Some members of these software communities have expressed a desire for them to do so, which might bring more attention to the issue. The developers of the popular pro photo editing software Capture One told The Verge that it was “committed to supporting photographers” being impacted by AI and is “looking into traceability features like C2PA, amongst others.”

    Even when a camera does support authenticity data, it doesn’t always make it to viewers.

    That metadata isn’t widely accessible to the general public, though, because the online platforms where these images circulate, like X and Reddit, don’t display it when images are uploaded and published. Even media websites that back the standard, like The New York Times, don’t visibly flag verification credentials after using them to authenticate a photograph.

    Part of that roadblock, besides getting platforms on board in the first place, is figuring out the best way to present that information to users. Facebook and Instagram are two of the largest platforms that check content for markers like the C2PA standard, but they only flag images that have been manipulated using generative AI tools — no information is presented to validate “real” images.

    When those labels are unclear, it can cause a problem, too. Meta’s “Made with AI” labels angered photographers when they were applied so aggressively that they seemed to cover even minor retouching. The labels have since been updated to deemphasize the use of AI.

    Truepic, an authenticity infrastructure provider and another member of C2PA, says there’s enough information present in these digital markers to provide more detail than platforms currently offer. “The architecture is there, but we need to research the optimal way to display these visual indicators so that everyone on the internet can actually see them and use them to make better decisions without just saying something is either all generative AI or all authentic,” Truepic chief communications officer Mounir Ibrahim said to The Verge.

    A cornerstone of this plan involves getting online platforms to adopt the standard. X, which has attracted regulatory scrutiny as a hotbed for spreading misinformation, isn’t a member of the C2PA initiative and seemingly offers no alternative. But X owner Elon Musk does appear willing to get behind it. “That sounds like a good idea, we should probably do it,” Musk said when pitched by Parsons at the 2023 AI Safety Summit. “Some way of authenticating would be good.”

    Even if, by some miracle, we were to wake up tomorrow in a tech landscape where every platform, camera, and creative application supported the C2PA standard, denialism is a potent, pervasive, and potentially insurmountable obstacle. Providing people with documented, evidence-based information won’t help if they just discount it. Misinformation can even be utterly baseless, as seen by how readily Trump supporters believed accusations about Harris supposedly faking her rally crowds, despite widespread evidence proving otherwise. Some people will just believe what they want to believe.

    But a cryptographic labeling system is likely the best approach we currently have to reliably identify authentic, manipulated, and artificially generated content at scale. Alternative pattern analyzing methods like online AI detection services, for instance, are notoriously unreliable. “Detection is probabilistic at best — we do not believe that you will get a detection mechanism where you can upload any image, video, or digital content and get 99.99 percent accuracy in real-time and at scale,” Ibrahim says. “And while watermarking can be robust and highly effective, in our view it isn’t interoperable.”

    No system is perfect, though, and even more robust options like the C2PA standard can only do so much. Image metadata can be easily stripped simply by taking a screenshot, for example — for which there is currently no solution — and its effectiveness is otherwise dictated by how many platforms and products support it.

    “None of it is a panacea,” Ibrahim says. “It will mitigate the downside risk, but bad actors will always be there using generative tools to try and deceive people.”

    Reply
  9. Tomi Engdahl says:

    Midjourney releases new unified AI image editor on the web
    https://venturebeat.com/ai/midjourney-releases-new-unified-ai-image-editor-on-the-web/

    Amid intensifying competition in the AI image generation space from the likes of Elon Musk’s permissive Grok-2 (powered by Black Forest Labs’ open-source Flux.1 model), one of the leaders is stepping up its game.

    Midjourney, which is hailed by many AI artists and designers as the preeminent and highest quality AI image generator, last night unveiled a new, updated version of its website containing a new editor interface that unifies various existing features such as inpainting (repainting parts of an image with new AI-generated visuals using text prompts), outpainting/canvas extension (stretching the boundaries of the image in different directions and filling the new space with new AI visuals), and more into a single view.

    Reply
  10. Tomi Engdahl says:

    AI stole my job and my work, and the boss didn’t know – or care
    Everyone knows automation will happen, which is why everyone needs proof of human involvement
    https://www.theregister.com/2024/08/15/robot_took_my_job/

    Reply
  11. Tomi Engdahl says:

    Who needs GitHub Copilot when you can roll your own AI code assistant at home
    Here’s how to get started with the open source tool Continue
    https://www.theregister.com/2024/08/18/self_hosted_github_copilot/

    Reply
  12. Tomi Engdahl says:

    AI impressed Kone’s employees: “None of the trial users wanted to stop using it”
    Tivi, 17.8.2024
    The elevator company has found a use for generative AI that makes service technicians’ work more meaningful.
    https://www.tivi.fi/uutiset/tekoaly-teki-vaikutuksen-koneen-tyontekijoihin-kukaan-koekayttajista-ei-halunnut-lopettaa-kayttoa/cbd7fa2c-9f99-4b8a-9e32-2b08702f55bd

    Reply
  13. Tomi Engdahl says:

    Linux Foundation Backs Open Source LLM Initiative
    The Open Model Initiative aims to promote the development and adoption of openly licensed AI models that stay open source
    https://thenewstack.io/linux-foundation-backs-open-source-llm-initiative/

    Reply
  14. Tomi Engdahl says:

    OpenAI Has Software That Detects AI Writing With 99.9 Percent Accuracy, Refuses to Release It
    https://futurism.com/the-byte/openai-software-detects-ai-writing

    Reply
  15. Tomi Engdahl says:

    In Leaked Audio, Amazon Cloud CEO Says Human Developers Will Soon Be a Thing of the Past
    https://futurism.com/the-byte/aws-ceo-human-devs-ai

    “Coding is just kind of like the language that we talk to computers. It’s not necessarily the skill in and of itself.”

    Reply
  16. Tomi Engdahl says:

    Research AI model unexpectedly attempts to modify its own code to extend runtime
    Facing time constraints, Sakana’s “AI Scientist” attempted to change limits placed by researchers.
    https://arstechnica.com/information-technology/2024/08/research-ai-model-unexpectedly-modified-its-own-code-to-extend-runtime/

    Reply
  17. Tomi Engdahl says:

    I asked ChatGPT to explain VPN terms like I’m five – now I’m worried for my job
    By Mo Harber-Lamond, published 2 days ago
    ChatGPT is remarkably good at simplifying concepts
    https://www.tomsguide.com/computing/vpns/i-asked-chatgpt-to-explain-vpn-terms-like-im-five-now-im-worried-for-my-job

    Reply
  18. Tomi Engdahl says:

    How I used ChatGPT to scan 170k lines of code in seconds and save me hours of detective work
    Writing this article about the problem I solved took me a few hours. The actual AI analysis process, from start to finish, took me less than 10 minutes. That’s some serious productivity right there.
    https://www.zdnet.com/article/how-i-used-chatgpt-to-scan-170k-lines-of-code-in-seconds-and-save-me-hours-of-detective-work/

    This is an article about using artificial intelligence (AI) as a tool and how to apply it to your unique, specialized needs. It provides some interesting lessons for everyone.

    You’ll learn you can use a tool like ChatGPT to solve complex problems quickly, so long as you have the right prompts and a hint of skepticism.

    3D printers are controlled with G-code, a program custom-generated by a tool called a slicer that controls how the printer moves its print head and print platform, heats up, and feeds and retracts molten filament.

    The pre-sliced G-code provided by the factory for the printer I was testing resulted in a 16-minute print. The G-code I generated using the company’s slicer resulted in a 42-minute print. I wanted to know why.

    Can AI help?
    This is where ChatGPT comes into the picture. G-code consists of thousands of lines that look like this:

    G1 X122.473 Y140.422 E4.23368
    G1 X122.222 Y140.6 E4.24443
    G0 F14400 X122.703 Y140.773
    G1 F3600 X122.117 Y141.251 E4.27083
    G1 X121.767 Y141.66 E4.28963
    G1 X121.415 Y142.139 E4.31039
    G1 X121.105 Y142.641 E4.33099
    Together, both Benchy G-code files had 170,000+ lines of code. I didn’t intend to spend a Saturday afternoon sifting through that stuff manually. But I thought, perhaps, AI could help.
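    As a minimal illustration of what the AI was asked to sift through (this sketch is not from the article), the feed rates buried in G-code lines like the ones quoted above can be pulled out mechanically; the F parameter sets the feed rate in mm/min:

```python
import re

# Lines like those quoted above; G0 is a travel move, G1 an extruding move,
# and F sets the feed rate in mm/min for subsequent moves.
gcode = """\
G1 X122.222 Y140.6 E4.24443
G0 F14400 X122.703 Y140.773
G1 F3600 X122.117 Y141.251 E4.27083
"""

# Collect every feed rate that appears in the snippet.
feed_rates = [float(f) for f in re.findall(r"\bF(\d+(?:\.\d+)?)", gcode)]
print(feed_rates)  # [14400.0, 3600.0]
```

    Across 170,000+ lines, it is exactly this kind of repetitive extraction that is tedious by hand but trivial for a tool.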

    I had the G-code I generated using the slicer. I could also export and save the G-code provided by the factory. Using ChatGPT Plus, I fed both files into the AI.

    I started by confirming ChatGPT could read the files. After I uploaded each file, I asked:

    Can you read this?

    ChatGPT confirmed, stating, “I can read the contents of the file. It appears to be a G-code file, typically used to control 3D printers.” That was a good start.

    To ensure we were clear on which file was which, I gave ChatGPT some labels for the files:

    Let’s call the first file uploaded “regular print” and the second file uploaded “fast print”. Okay?

    The AI agreed, so then I told it:

    Describe how the files differ.

    Other than naming one of the files “fast print”, I gave ChatGPT no indication of what I was looking for. Even so, the bot identified that one print had higher print speeds, although the temperature settings were the same.

    I then asked the AI:

    Why is “fast print” so much faster?

    It was at this point that ChatGPT started to annoy me. Instead of giving me details from the code I provided, it speculated. The AI used phrases containing “likely”, “may”, and “might” to describe why the print was faster.

    But I had given it G-code files that described exactly what the printer was doing, so I wanted an exact answer about what the printer was doing. As is often the case with ChatGPT, the conversation was a lot like talking to a brilliant grad student who is somewhat stubborn and uncooperative.

    Why the right prompt makes all the difference
    I finally landed on this prompt, which teased out workable answers:

    The G-code provided in both files is the only thing that is different for these prints. Using solely the G-code provided as comparisons, what slicer settings would be different? Don’t speculate on what other settings might be. Base your analysis only on the code provided.

    ChatGPT identified three key factors:

    1. The regular print G-code shows feed rates typically between 2400 mm/min and 3400 mm/min, while the fast print G-code shows much higher feed rates, with some commands reaching up to 14400 mm/min. This is a significant increase in speed, directly contributing to the reduced print time.

    2. Compared to a layer height of 0.2 mm for the regular print, the fast print set the layer height to 0.25 mm, reducing the number of layers needed to complete the print.

    3. The fast print G-code includes much more aggressive acceleration settings, allowing the printer to reach higher speeds quicker and maintain them, contributing to faster printing.

    That result was interesting.

    So, I asked ChatGPT:

    Can you tell if fast print has been hand-coded or was generated by a slicer? Perhaps look for inconsistent commands or non-standard comments.

    The AI responded with three interesting considerations:

    1. The variations in G-code commands, like changes in feed rates, switching between different movement commands, and alterations in other parameters, are typical of slicer-generated code. These changes are designed to optimize the print process and are consistent with the behavior of a slicer.

    2. If the G-code were hand-coded, we would expect to see more irregular and unpredictable patterns, but these transitions follow a logical sequence, characteristic of a slicer’s optimization process.

    3. Therefore, the presence of these commands does not indicate hand-coding; rather, it supports the conclusion that this G-code was generated by a slicer.

    What have we learned?
    We’ve learned that ChatGPT understands G-code. That’s unsurprising because, in my earliest tests, we learned that ChatGPT has a fairly good command of even the most obscure programming languages.

    We also learned that ChatGPT can sift through and compare 170,000+ lines of machine instructions and reach actionable conclusions in seconds.
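    The two concrete differences the AI surfaced, feed rates and layer height, can also be checked deterministically with a short script. This is a hypothetical sketch, not the author’s method: it extracts feed rates (F, in mm/min) and infers the layer height from successive Z values:

```python
import re

def summarize_gcode(text: str) -> dict:
    """Summarize a G-code file: max feed rate and inferred layer heights."""
    # F parameters are feed rates in mm/min.
    feeds = [float(f) for f in re.findall(r"\bF(\d+(?:\.\d+)?)", text)]
    # Distinct Z heights, sorted; the gaps between them are layer heights.
    zs = sorted({float(z) for z in re.findall(r"\bZ(\d+(?:\.\d+)?)", text)})
    layer_heights = {round(b - a, 3) for a, b in zip(zs, zs[1:])}
    return {"max_feed": max(feeds) if feeds else None,
            "layer_heights": layer_heights}

# Tiny made-up stand-ins for the two files compared in the article.
regular = "G1 Z0.2 F3400\nG1 Z0.4 F2400\nG1 Z0.6 F3000\n"
fast = "G1 Z0.25 F14400\nG1 Z0.5 F9000\nG1 Z0.75 F14400\n"

print(summarize_gcode(regular))  # max feed 3400.0, layer height 0.2 mm
print(summarize_gcode(fast))     # max feed 14400.0, layer height 0.25 mm
```

    A script like this confirms specific numbers; what the chatbot added was knowing which slicer settings those numbers correspond to.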

    Reply
  19. Tomi Engdahl says:

    Andrej Karpathy Praises Cursor Over GitHub Copilot
    “The tool is now a complex living thing,” Karpathy added.
    Read more at: https://analyticsindiamag.com/ai-news-updates/andrej-karpathy-praises-cursor-over-github-copilot/

    Reply
  20. Tomi Engdahl says:

    Celebrating an important step forward for Open Source AI
    https://blog.mozilla.org/en/mozilla/ai/open-source-ai-definition/

    Mozilla is excited about today’s new definition of open source AI, and we endorse it as an important step forward.

    The Open Source Initiative (OSI) has recently released a new draft definition of open source AI, marking a critical juncture in the evolution of the internet. This moment comes after two years of conversations, debates, engagements, and late-night discussions across the technical and open source communities. It is critical not just for redefining what “open source” means in the context of AI; it’s about shaping the future of the technology and its impact on society.

    The original Open Source Definition, introduced by the OSI in 1998, was more than just a set of guidelines; it was a manifesto for a new way of building software.

    This is a significant step toward bringing clarity and rigor to the open source AI discussion. It introduces a binary definition of “open source,” akin to the existing definition. While this is just one of several approaches to defining open source AI, it offers precision for developers, advocates and regulators who benefit from clear definitions in various working contexts. Specifically, it outlines that open source AI revolves around the ability to freely use, study, modify and share an AI system. And it also promotes the importance of access to key components needed to recreate substantially equivalent AI systems, like information on data used for training, the source code for AI development and the AI model itself.

    Reply
  21. Tomi Engdahl says:

    At the Rise25 Awards, the future of AI is ethical, inclusive and accountable
    https://blog.mozilla.org/en/mozilla/rise25-dublin/

    Reply
  22. Tomi Engdahl says:

    The EU AI Act, which entered into force at the beginning of August, has set new requirements for developing AI models and systems and bringing them to market. What are the AI Act’s requirements for high-risk operators?

    Kolster’s Hannes Kankaanpää discusses risk management amid this transition in our free webinar.

    https://www.kolster.fi/webinaari03092024?utm_campaign=Kolster+%7C+Konversio%3A+ilmot+%E2%80%93+Copy&utm_source=facebook&utm_medium=paid&hsa_acc=335728640558555&hsa_cam=120215891046580370&hsa_grp=120215891046720370&hsa_ad=120215896235780370&hsa_src=fb&hsa_net=facebook&hsa_ver=3

    Reply
  23. Tomi Engdahl says:

    The Future of Coding is ‘Tab Tab Tab’
    Coding is having a ChatGPT moment with Cursor AI.
    Read more at: https://analyticsindiamag.com/ai-origins-evolution/the-future-of-coding-is-tab-tab-tab/

    Claude + Cursor = AGI. This is basically what the conversation around coding is right now everywhere on X. The hype around Cursor AI was just not enough, and then Andrej Karpathy added that he is choosing to use it instead of GitHub Copilot from now on.

    Reply
  24. Tomi Engdahl says:

    “The web is becoming increasingly a dangerous place to look for your data.”

    AI Appears to Be Slowly Killing Itself
    https://futurism.com/ai-slowly-killing-itself?fbclid=IwZXh0bgNhZW0CMTEAAR0Hfpy8qpjqfRI-QoVay6otzlKyEdxpaGqs3qOGmecVsBjejZhnx6ADu0Y_aem_pFEuQX6lSk78yobgu6BwuA

    AI-generated text and imagery is flooding the web — a trend that, ironically, could be a huge problem for generative AI models.

    As Aatish Bhatia writes for The New York Times, a growing pile of research shows that training generative AI models on AI-generated content causes the models to erode. In short, training on AI content causes a flattening cycle similar to inbreeding; the AI researcher Jathan Sadowski last year dubbed the phenomenon “Habsburg AI,” a reference to Europe’s famously inbred royal family.

    Reply
  25. Tomi Engdahl says:

    AI coding tools are finally delivering results for enterprises – developers are saving so much time they’re able to collaborate more, focus on system design, and learn new languages
    By Solomon Klappholz published August 21, 2024
    Research from GitHub shows AI coding tools are hitting the mainstream, with developers reporting significant productivity and efficiency gains
    https://www.itpro.com/software/development/ai-coding-tools-are-finally-delivering-results-for-enterprises-developers-are-saving-so-much-time-theyre-able-to-collaborate-more-focus-on-system-design-and-learn-new-languages

    Reply
  26. Tomi Engdahl says:

    Microsoft’s Copilot is stupid, I replaced it with ChatGPT
    Tired of Microsoft’s terrible AI assistant, I promoted ChatGPT to my PC’s Copilot key.
    https://www.androidauthority.com/how-to-remap-copilot-key-3470869/

    Reply
  27. Tomi Engdahl says:

    “I was delighted to see that she sent a video. But then I remembered we don’t have a video tutorial.” https://trib.al/hKISymp

    Startup Alarmed When Its AI Starts Rickrolling Clients
    https://futurism.com/the-byte/ai-startup-rickrolling?fbclid=IwY2xjawE7lDlleHRuA2FlbQIxMQABHWCbCWzuUVxEKd63k2RlkC51YrtMOTHgpCFXFVeeVSLrzkZF2K__WGUQUQ_aem_5tWovtlH-Xs2agIohwRE_w

    Known as “Lindys,” the company’s AI assistants are intended to help customers with various tasks. Part of a Lindy’s job is to teach clients how to use the platform, and it was during this task that the AI helper provided a link to a video tutorial that wasn’t supposed to exist.

    “A customer reached out asking for video tutorials,” Crivello wrote in his now-viral tweet thread about the hilarious debacle. “We obviously have a Lindy handling this, and I was delighted to see that she sent a video.”

    “But then I remembered we don’t have a video tutorial,” he continued, “and realized Lindy is literally fucking [R]ickrolling our customers.”

    Reply
  28. Tomi Engdahl says:

    Cursor is ChatGPT for coding — now anyone can make an app in minutes
    Hands-on
    By Ryan Morrison published yesterday
    Making apps just got easier
    https://www.tomsguide.com/ai/cursor-is-chatgpt-for-coding-now-anyone-can-make-an-app-in-minutes

    Sometimes an artificial intelligence tool comes out of nowhere and dominates the conversation on social media. This week that app is Cursor, an AI coding tool that uses models like Claude 3.5 Sonnet and GPT-4o to make it easier than ever to build your own apps.

    Cursor is part development environment, part AI chatbot, and unlike tools such as GitHub Copilot it can more or less do all of the work for you, transforming a simple idea into functional code in minutes.

    Built on the same system as the popular Microsoft Visual Studio Code, Cursor has already found a fanbase among novice coders and experienced engineers. People working for Perplexity, Midjourney and OpenAI are among the 30,000 customers paying to use the AI tool.

    Cursor is an AI-first code editor. The startup has raised over $400 million since it was founded in 2022 and works with various models including those from Anthropic and OpenAI.

    On the surface, a lot of the simple functionality, such as asking a chatbot to build an app, is already possible in Claude or ChatGPT. The power comes from its integration with the code editor and its ability to quickly make changes or solve problems.

    CEO Michael Truell describes it as “Google Docs for programmers”, a simple code editor with AI models built in that can write, predict and manipulate code using nothing but a text prompt.

    In my view, its true power is in the democratization of coding: it allows someone without much coding experience to build the tools they need by typing a few lines of text.

    I’ve recently started going to the gym and so I decided to build a habit tracker app. I started with the simple prompt: “Build a habit tracker in Python with a GUI. Make it look good and add gamification elements to keep it fun. Modern, clean design.”

    It generated the necessary code in the sidebar chat window and all I had to do was click Apply and then Accept. This added the code to a new Python file including all the necessary imports. It also gave me instructions on how to add modules to my machine to make the code work.

    As the chat is powered by Claude 3.5 Sonnet, you can just have it explain in more detail any element of the code or any task required to make it run.

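    For a sense of what such a prompt produces, here is a minimal sketch of the kind of habit-tracker core an AI assistant might generate, with gamification via streaks and points (the GUI layer is omitted, and all names here are invented for illustration):

    ```typescript
    // Minimal habit-tracker logic with gamification: completing a habit
    // earns 10 base points plus a bonus equal to the current daily streak.

    interface Habit {
      name: string;
      completions: string[]; // ISO dates, e.g. "2024-08-21"
      points: number;
    }

    function createHabit(name: string): Habit {
      return { name, completions: [], points: 0 };
    }

    function currentStreak(sorted: string[]): number {
      // Count consecutive days ending at the most recent completion.
      let streak = 1;
      for (let i = sorted.length - 1; i > 0; i--) {
        const prev = new Date(sorted[i - 1]).getTime();
        const curr = new Date(sorted[i]).getTime();
        if ((curr - prev) / 86_400_000 === 1) streak++;
        else break;
      }
      return streak;
    }

    function complete(habit: Habit, isoDate: string): Habit {
      if (habit.completions.includes(isoDate)) return habit; // once per day
      const completions = [...habit.completions, isoDate].sort();
      const points = habit.points + 10 + currentStreak(completions);
      return { ...habit, completions, points };
    }

    // Example: three consecutive gym days
    let gym = createHabit("gym");
    gym = complete(gym, "2024-08-19");
    gym = complete(gym, "2024-08-20");
    gym = complete(gym, "2024-08-21");
    console.log(gym.points); // 11 + 12 + 13 = 36
    ```

    The point of tools like Cursor is that this scaffolding, plus the GUI around it, is generated and wired up for you from the prompt.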

    Reply
  29. Tomi Engdahl says:

    Artificial Intelligence: Unlocking the Power of AI in Cybersecurity
    As adversaries increasingly exploit AI, security practitioners must not fall behind. What does it take to unlock the full potential of AI in cybersecurity?
    https://www.securityweek.com/unlocking-the-power-of-ai-in-cybersecurity/

    Reply
  30. Tomi Engdahl says:

    Why are Devs Turning to TypeScript for AI Development?
    TypeScript’s static typing helps ensure code quality and reduces the likelihood of bugs slipping through to production.
    Read more at: https://analyticsindiamag.com/developers-corner/why-are-devs-turning-to-typescript-for-ai-development/
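    As a sketch of the benefit claimed above: when the shape of an LLM request is encoded in the types, a malformed payload fails at compile time rather than at runtime in production. The interfaces and function below are illustrative, not a real SDK.

    ```typescript
    // Static types document and enforce the expected request shape.
    type Role = "system" | "user" | "assistant";

    interface ChatMessage {
      role: Role;
      content: string;
    }

    interface ChatRequest {
      model: string;
      messages: ChatMessage[];
      temperature?: number; // optional, but must be a number if present
    }

    function validateRequest(req: ChatRequest): ChatRequest {
      // With the types above, `role: "admin"` or `temperature: "hot"` is
      // rejected by the compiler before this code ever runs; only checks
      // the type system cannot express remain as runtime validation.
      if (req.messages.length === 0) throw new Error("messages must be non-empty");
      return req;
    }

    const req = validateRequest({
      model: "example-model",
      messages: [{ role: "user", content: "hello" }],
    });
    console.log(req.messages[0].role); // "user"
    ```

    The runtime check is only needed for constraints (like non-emptiness) that the type system cannot express; everything structural is caught before deployment.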

    Reply
