3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

6,742 Comments

  1. Tomi Engdahl says:

    Large language models can be squeezed onto your phone — rather than needing 1000s of servers to run — after breakthrough
    News | By Keumars Afifi-Sabet | Published 2 days ago
    Running massive AI models locally on smartphones or laptops may be possible after a new compression algorithm trims down their size — meaning your data never leaves your device. The catch is that it might drain your battery in an hour.
    https://www.livescience.com/technology/artificial-intelligence/large-language-models-can-be-squeezed-onto-your-phone-rather-than-needing-1000s-of-servers-to-run-after-breakthrough

  2. Tomi Engdahl says:

    A new algorithm, dubbed Calibration Aware Low precision Decomposition with Low Rank Adaptation (CALDERA), compresses the massive amounts of data needed to run a large language model (LLM) by trimming redundancies in the code and reducing the precision of its layers of information.

    This leaner LLM performs with slightly lower accuracy and nuance than the uncompressed version, scientists said in a study published May 24 to the preprint database arXiv, ahead of a presentation at the Conference on Neural Information Processing Systems (NeurIPS) in December.
    https://www.livescience.com/technology/artificial-intelligence/large-language-models-can-be-squeezed-onto-your-phone-rather-than-needing-1000s-of-servers-to-run-after-breakthrough
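
    The paper's pitch combines two standard compression ingredients: low-rank decomposition and reduced numerical precision. As a rough, generic illustration of that idea (a minimal sketch, not the published CALDERA algorithm), a weight matrix can be approximated by truncated-SVD factors plus a coarsely quantized residual:

    import numpy as np

    def compress_weight(W, rank=8, bits=4):
        # Low-rank part: keep only the top-`rank` singular directions.
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
        # Residual part: uniform quantization to 2**bits levels.
        R = W - low_rank
        lo, scale = R.min(), (R.max() - R.min()) / (2**bits - 1)
        q = np.round((R - lo) / scale).astype(np.uint8)  # stored compactly
        return low_rank + (q * scale + lo)               # decompressed approximation

    W = np.random.randn(256, 256)
    W_hat = compress_weight(W)
    print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))

    Real methods are far more careful (calibration data, per-layer sensitivity, accuracy-aware objectives), but the storage win has the same shape: two skinny factor matrices plus a few bits per residual entry instead of 16 or 32 bits per weight.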

  3. Tomi Engdahl says:

    AI found a new way to create quantum entanglement
    In a surprise discovery, researchers found a new way to generate quantum entanglement for particles of light, which could make building quantum information networks easier
    https://www.newscientist.com/article/2459102-ai-found-a-new-way-to-create-quantum-entanglement/

  4. Tomi Engdahl says:

    Microsoft moves to stop M365 Copilot from ‘oversharing’ data
    News analysis | Dec 04, 2024 | Data Privacy, Generative AI, Microsoft

    The generative AI assistant can surface sensitive information in over-permissioned files, a growing concern for businesses testing the technology. Microsoft is adding new features to SharePoint and Purview to make it easier to control what the tool can access.

    https://www.computerworld.com/article/3616459/microsoft-moves-to-stop-m365-copilot-from-oversharing-data.html

  5. Tomi Engdahl says:

    How ChatGPT’s Canvas Can Help You Use AI More Productively
    Canvas, which is available to OpenAI’s paid subscribers, is a little bit like an AI-powered Google Docs. Here’s how to use it.
    https://www.wired.com/story/how-to-use-chatgpt-canvas-productivity/

    With multiple AI platforms and bots competing against each other—there’s Copilot, Gemini, ChatGPT, Claude, and Perplexity, to name just a few—we’re seeing new updates and upgrades appear on a frequent basis. One of the newest additions OpenAI has pushed out to ChatGPT is called Canvas, and it’s a little bit like an AI-powered Google Docs.

  6. Tomi Engdahl says:

    Microsoft’s new Copilot Vision feature one-ups ChatGPT. Here’s how to access it
    Have you ever needed a second opinion on something you are looking at online, but didn’t have anyone readily available to chat with? Now, you can have an AI-powered assistant at your beck and call.
    https://www.zdnet.com/article/microsofts-new-copilot-vision-feature-one-ups-chatgpt-heres-how-to-access-it/

    Microsoft Copilot has proven a worthy competitor to ChatGPT. As ChatGPT continues to evolve, so has Copilot, though recently it has been trailing behind. This latest update, however, might be enough to restore its competitive edge.

    On Thursday, Microsoft made Copilot Vision, an experience in which Copilot can view and understand the context of what you’re doing online to provide verbal real-time assistance, available in preview for Pro subscribers through Copilot Labs.

    When the user enables Copilot Vision, it can read along with you, discuss issues you are having while browsing, analyze your site, and provide insights based on what it sees. It’s essentially an assistant for all your browsing needs — on-call whenever you need it.

    I had the opportunity to demo the feature at a Microsoft Copilot and Windows event in NYC in October, where I witnessed real-world applications that showcased some of Copilot Vision’s assistance value.

  7. Tomi Engdahl says:

    It’s Nearly Impossible To Get A Job in Artificial Intelligence Right Now
    https://www.forbes.com/sites/saibala/2024/12/05/its-nearly-impossible-to-get-a-job-in-artificial-intelligence-right-now/

    Over the past few years, the topics of artificial intelligence and machine learning have become immensely popular, establishing themselves as mainstays of discussion in most households and workplaces. Undoubtedly, the industry has grown exponentially in a relatively short amount of time, propelled by numerous small and large players that are actively trying to capture market share.

    Alongside this explosion in growth has emerged a gargantuan demand for the right talent and workforce to ideate and develop these products. However, despite the demand, the standards for getting a job in artificial intelligence and machine learning have never been more stringent. This is especially true for AI jobs in healthcare.

    Research indicates that, conservatively, the AI market is expected to reach nearly $267 billion in value by 2027, growing at nearly a 37% CAGR. In particular, the market cap for healthcare AI is projected to increase more than tenfold over the next eight years. Accordingly, the numbers also show that jobs relating to AI and ML have grown by nearly 74% annually over the last four years, indicating that the market is starving for the talent needed to fuel innovation.

    However, these figures may incorrectly lead a healthcare AI enthusiast to believe that plenty of jobs are available and that many opportunities exist for new entrants to the field. This could not be further from the truth. In fact, despite the heavy demand for talent in these fields, the standards for hiring in artificial intelligence and machine learning have only grown more demanding.

    Candidates for this work are expected to have a confluence of expertise across numerous disciplines, ranging from programming and IT architecture to at least a baseline understanding of computational engineering and neural networks. Moreover, the more senior roles that spearhead product teams or actually oversee the development of foundation and large language models are increasingly being granted to academics and industry stalwarts who have previously pursued the work in a research or academic capacity. This is especially true in the field of healthcare, which requires intricate industry knowledge to apply alongside the fundamental principles of AI.

  8. Tomi Engdahl says:

    Nvidia’s new AI audio model can synthesize sounds that have never existed
    What does a screaming saxophone sound like? The Fugatto model has an answer…
    https://arstechnica.com/ai/2024/11/nvidias-new-ai-audio-model-can-synthesize-sounds-that-have-never-existed/

  9. Tomi Engdahl says:

    Soon, the tech behind ChatGPT may help drone operators decide which enemies to kill
    OpenAI and Palmer Luckey’s weapons company sign agreement to explore lethal drone defense for military use.
    https://arstechnica.com/ai/2024/12/openai-and-anduril-team-up-to-build-ai-powered-drone-defense-systems/

  10. Tomi Engdahl says:

    Open Source Developers Guide to the EU AI Act
    https://huggingface.co/blog/eu-ai-act-for-oss-developers

    The EU AI Act, the world’s first comprehensive legislation on artificial intelligence, has officially come into force, and it’s set to impact the way we develop and use AI – including in the open source community. If you’re an open source developer navigating this new landscape, you’re probably wondering what this means for your projects. This guide breaks down key points of the regulation with a focus on open source development, offering a clear introduction to this legislation and directing you to tools that may help you prepare to comply with it.

    TL;DR: The AI Act may apply to open source AI systems and models, with specific rules depending on the type of model and how they are released. In most cases, obligations involve providing clear documentation, adding tools to disclose model information when deployed, and following existing copyright and privacy rules. Fortunately, many of these practices are already common in the open source landscape, and Hugging Face offers tools to help you prepare to comply, including tools to support opt-out processes and redaction of personal data. Check out model cards, dataset cards, Gradio watermarking, support for opt-out mechanisms and personal data redaction, licenses and others!

    The EU AI Act is a binding regulation that aims to foster responsible AI. To that end, it sets out rules that scale with the level of risk the AI system or model might pose while aiming to preserve open research and support small and medium-sized enterprises (SMEs). As an open source developer, many aspects of your work won’t be directly impacted – especially if you’re already documenting your systems and keeping track of data sources. In general, there are straightforward steps you can take to prepare for compliance.

    The regulation takes effect over the next two years and applies broadly, not just to those within the EU. If you’re an open source developer outside the EU but your AI systems or models are offered to or impact people within the EU, they fall under the Act.

    The regulation works at different levels of the AI stack: it imposes different obligations depending on whether you are a provider (which includes developers), deployer, or distributor, and on whether you are working on an AI model or an AI system.

    In the AI Act, rules scale with the level of risk the AI system or model might pose. For all AI systems, risks may be:

    Unacceptable: systems that violate human rights, for example an AI system that scrapes facial images from the internet or CCTV footage. These systems are prohibited and cannot be put on the market.
    High: systems that may adversely impact people’s safety or fundamental rights, for example dealing with critical infrastructure, essential services, law enforcement. These systems need to follow thorough compliance steps before being put on the market.
    Limited: systems that interact directly with people and have the potential to create risks of impersonation, manipulation, or deception. These systems need to meet transparency requirements. Most generative AI models can be integrated into systems that fall into this category. As a model developer, your models will be easier and more likely to be integrated into AI systems if you already follow the requirements, such as by providing sufficient documentation.
    Minimal: the majority of systems – those that don’t pose the risks above. They need only comply with existing laws and regulations; the AI Act adds no new obligations.
    For general purpose AI (GPAI) models, there is another risk category called systemic risk: GPAI models using substantial computing power, today defined as over 10^25 FLOPs for training, or that have high-impact capabilities. According to a study by Stanford, in August 2024, based on estimates from Epoch, only eight models (Gemini 1.0 Ultra, Llama 3.1-405B, GPT-4, Mistral Large, Nemotron-4 340B, MegaScale, Inflection-2, Inflection-2.5) from seven developers (Google, Meta, OpenAI, Mistral, NVIDIA, ByteDance, Inflection) would meet the default systemic-risk criterion of being trained using at least 10^25 FLOPs. Obligations vary depending on whether such a model is open source or not.

    Our focus in this short guide is on limited risk AI systems and open source non-systemic risk GPAI models, which should encompass most of what is publicly available on the Hub. For other risk categories, make sure to check out further obligations that may apply.

    For limited risk AI systems
    Limited-risk AI systems interact directly with people (end users) and may create risks of impersonation, manipulation, or deception. For example, a chatbot producing text or a text-to-image generator – tools that can also facilitate the creation of misinformation materials or of deepfakes. The AI Act aims to tackle these risks by helping the general end user understand that they are interacting with an AI system. Today, most GPAI models are not considered to present systemic risk. In the case of limited-risk AI systems, the obligations below apply whether or not they are open source.

    Developers of limited-risk AI systems need to:

    Disclose to the user that they are interacting with an AI system unless this is obvious, keeping in mind that end users might not have the same technical understanding as experts, so you should provide this information in a clear and thorough way.
    Mark synthetic content: AI-generated content (e.g., audio, images, videos, text) must be clearly marked as artificially generated or manipulated in a machine-readable format. Existing tools like Gradio’s built-in watermarking features can help you meet these requirements.
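
    What “machine-readable” marking looks like in practice is still being standardized, but as a toy illustration (a minimal sketch embedding a disclosure tag in PNG text metadata via Pillow – one possible approach, not an Act-approved scheme; the model name is hypothetical):

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    img = Image.new("RGB", (256, 256))              # stand-in for a generated image
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # machine-readable disclosure
    meta.add_text("generator", "example-model-v1")  # hypothetical model name
    img.save("output.png", pnginfo=meta)

    print(Image.open("output.png").text)            # {'ai_generated': 'true', ...}

    Robust approaches (for example, watermarks that survive re-encoding, or C2PA-style signed provenance) go further, since plain metadata is trivially stripped.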
    Note that you may also be a ‘deployer’ of an AI system, not only a developer. Deployers of AI systems are people or companies using an AI system in their professional capacity. In that case, you also need to comply with the following:

    For emotion recognition and biometric systems: deployers must inform individuals about the use of these systems and process personal data in accordance with relevant regulations.
    Disclosure of deepfakes and AI-generated content: deployers must disclose when AI-generated content is used. When the content is part of an artistic work, the obligation is to disclose that generated or manipulated content exists in a way that does not spoil the experience.
    The information above needs to be provided in clear language, at the latest at the time of the user’s first interaction with, or exposure to, the AI system.

    The AI Office, in charge of implementing the AI Act, will help create codes of practice with guidelines for detecting and labeling artificially generated content. These codes are currently being written with industry and civil society participation, and are expected to be published by May 2025. Obligations will be enforced starting August 2026.

    For open source non-systemic risk GPAI models
    The following obligations apply if you are developing open source GPAI models, e.g. LLMs, that do not present systemic risk. Open source for the AI Act means “software and data, including models, released under a free and open source license that allows them to be openly shared and where users can freely access, use, modify and redistribute them or modified versions thereof”. Developers can select from a list of open licenses on the Hub. Check if the chosen license fits the AI Act’s open source definition.

    The obligations for non-systemic open source GPAI models are as follows:

    Draft and make available a sufficiently detailed summary of the content used to train the GPAI model, according to a template provided by the AI Office.
    The level of detail of the content is still under discussion but should be relatively comprehensive.
    Implement a policy to comply with EU law on copyright and related rights, notably to comply with opt-outs. Developers need to ensure they are authorized to use copyright-protected material, which can be obtained with the authorization of the rightsholder or when copyright exceptions and limitations apply. One of these exceptions is the Text and Data Mining (TDM) exception, a technique used extensively in this context for retrieving and analyzing content. However, the TDM exception generally does not apply when a rightsholder clearly expresses that they reserve the right to use their work for these purposes – this is called “opt-out.”

    The EU AI Act also ties into existing regulations on copyright and personal data, such as copyright directive and data protection regulation. For this, look to Hugging Face-integrated tools that support better opt-out mechanisms and personal data redaction, and stay updated on recommendations from European and national bodies like CNIL.

    Projects on Hugging Face have implemented ways to understand and act on opt-outs from training data, such as BigCode’s Am I In The Stack app and the integration of a Spawning widget for datasets with image URLs. With these tools, creators can simply opt out of allowing their copyrighted material to be used for AI training. As opt-out processes mature to help creators publicly signal that they do not want their content used for AI training, these tools can be quite effective in honoring those decisions.

  11. Tomi Engdahl says:

    Refactoring AI code: The good, the bad, and the weird
    Feature | Dec 02, 2024 | Artificial Intelligence, Generative AI, Software Development

    https://www.infoworld.com/article/3610521/refactoring-ai-code-the-good-the-bad-and-the-weird.html

    For most developers, maintaining and refactoring legacy code is all in a day’s work. But what about when the code was written by AI?

  12. Tomi Engdahl says:

    Generative AI is no longer a novelty in the software development world: it’s being increasingly used as an assistant (and sometimes a free agent) to write code running in real-world production. But every developer knows that writing new code from scratch is only a small part of their daily work. Much of a developer’s time is spent maintaining an existing codebase and refactoring code written by other hands.

    What’s it like doing maintenance work when those hands aren’t human, or were generated by an AI copilot? Can AI tools like GitHub Copilot, Vercel’s v0, or Cursor IDE be used to lighten the load? I spoke to real-world practitioners to find out how this next phase of the AI revolution is playing out.

    AI written code: Useful, sometimes weird

    All the developers I spoke to expressed varying degrees of enthusiasm over the usefulness of generative AI as part of the development process. They also acknowledged that AI-generated code has its quirks—and those quirks can make it tricky to maintain and refactor.

    Dev Nag has been in the trenches with AI coding tools for several years and is the CEO of QueryPal, a software company focused on AI-powered ticket generation. He describes the process of refactoring and maintaining AI-generated code as surprisingly challenging. “The code often lacks consistency in style and naming conventions, which can make a codebase feel disjointed,” he says. “I’ve spent many hours cleaning up and standardizing AI-generated code to fit a project’s conventions.”

    Dhaval Gajjar, CEO of IT services and consulting company Pranshtech Solutions, CTO of SaaS development company Textdrip, and an experienced software developer, agrees. “AI-based code typically is syntactically correct but often lacks the clarity or polish that comes from a human developer’s understanding of best practices,” he says. “Developers often need to clean up variable names, simplify logic, or restructure code for better readability.”

    To Travis Rehl, CTO at Innovative Solutions, which migrates, modernizes, and builds next-gen systems on the cloud, the oddness of working with AI-written code in order to refactor or maintain it can go deeper. “When the AI has employed unfamiliar patterns or libraries, it can be challenging to refactor without a deep understanding of these choices,” he says. “There’s also the risk of breaking intricate dependencies that the AI might have created. It’s definitely a different experience. You’re often working with code that feels both familiar and alien at the same time. The AI might use approaches that seem unconventional to human developers, leading to ‘Why did it do it this way?’ moments.”

    Nag and Gajjar both note that AI-generated code can be more complex than human-written code that achieves the same results. According to Gajjar, “AI tools are known to overengineer solutions so that the code produced is bulkier than it really should be for simple tasks. There are often extraneous steps that developers have to trim off, or a simplified structure must be achieved for efficiency and maintainability.” Nag adds that AI can “throw in error handling and edge cases that aren’t always necessary. It’s like it’s trying to show off everything it knows, even when a simpler solution would suffice.”
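
    A contrived Python pair illustrating the pattern Nag and Gajjar describe (invented for illustration, not code from the article): the first version shows the assistant-style ceremony, the second is the simpler function a reviewer might reduce it to.

    # Over-engineered: defensive checks and steps a simpler design doesn't need.
    def parse_port_verbose(value):
        if value is None:
            raise TypeError("value must not be None")
        if not isinstance(value, str):
            raise TypeError("value must be a string")
        stripped = value.strip()
        if not stripped.lstrip("+-").isdigit():
            raise ValueError("value must be numeric")
        port = int(stripped)
        if port < 0 or port > 65535:
            raise ValueError("port out of range")
        return port

    # Refactored: int() already raises on bad input; one range check suffices.
    def parse_port(value: str) -> int:
        port = int(value)
        if not 0 <= port <= 65535:
            raise ValueError(f"port out of range: {port}")
        return port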

    There’s one bit of showing off that Innovative Solutions’ Rehl appreciates. “AI also does things like add a ton of commentary around what a function does. That’s a double-edged sword,” he notes: “it’s useful for humans, but it also adds a lot of bloat to a codebase. But think about the next time you use the AI: You want some description as to the purpose of that function that the AI can read again later to understand the business context around it.”

    AI eats its own dog food
    Despite these quirks, the developers I spoke with thought that AI-generated code had its place within the software development lifecycle. In fact, they said that AI tools could be helpful in the code maintenance and refactoring process. Somewhat ironically, AI tools can even be used to overcome some of the flaws found in AI code to begin with.

    For instance, Rehl at Innovative Solutions deploys AI tools for both code analysis and automated refactoring. “AI can quickly analyze large codebases and identify areas that need refactoring, potential bugs, or optimization opportunities,” he said. “For simpler refactoring tasks, like renaming variables or extracting methods, AI tools can perform these operations across the entire codebase with high accuracy.” QueryPal’s Nag noted that he’s “used AI to great effect for changes across a codebase, like updating deprecated API calls.”

    Because commercially available AI tools are trained on best practices and patterns they’ve learned from vast codebases, they can also be deployed to suggest improvements that might not be immediately obvious to the human eye. “AI tools are excellent at identifying patterns and suggesting improvements, which can speed up the refactoring process significantly,” said Nag.

    “Tools like GitHub Copilot propose code simplification, correction of inefficiency, or even the restructuring of logic once identified from some patterns,” added Pranshtech’s Gajjar. “It can help with auto-repetitive tasks, clean boilerplate code, or even hinting towards those parts that demand refactoring.”

    Rehl described a practical situation in which he was able to use AI tools to refactor AI-assisted code. “The AI had created a sophisticated React component structure, but it didn’t align perfectly with the data model I had set up in the back end,” he explained.

    We still need humans in the loop
    None of the developers I spoke to suggested that AI was ready to be set loose on a codebase—at least not yet. Jason Wingate, a developer and the CEO of brand development company Emerald Ocean Ltd., said that AI tools allowed him to accelerate refactoring, but human oversight was still key. “Always review and refine AI-generated code changes,” he said.

    Wingate described a fundamentally iterative process of coding with AI assistance. “The most basic way would be to ask for refactoring suggestions and give it a chunk of the code. Include basic information about the language, coding standards, and conventions. Depending on what you’re looking for exactly, you can drill down with more questions of what exactly you’re looking to achieve.” Based on your goals—performance? readability?—you could either implement the suggestions yourself or get the AI to do it, he said. “Then review it, and (most likely) do it again.”

    Wingate also cautioned developers to watch out for hallucinations and run tests to ensure the tool followed prompts correctly. “You may put ‘using Sarah’s coding standards’—which means absolutely nothing—and it still may say ‘Sure! I’ll use Sarah’s coding standards!” he said.

    QueryPal’s Nag also noted the need to carefully review AI-generated code. “In my experience, the key to successfully using AI in both initial development and refactoring is to treat it as a very knowledgeable but sometimes unreliable junior team member. You wouldn’t let a new hire push code directly to production without review, and the same goes for AI-generated code. I always ensure that experienced developers on my team review and adjust the AI’s output.”

    The future isn’t quite yet
    Every developer and IT leader I spoke to emphasized that we are still in the early days of generative AI. For most shops, the amount of AI-assisted (or fully AI-written) code in their codebases will be relatively small. But it will inevitably grow as AI-assisted refactoring snowballs.

    Rehl referenced the strangler vine pattern to explain how he sees the process playing out:

    When you have old tech and you want to make new tech, you could either build a net new system alongside of the old one and just cut over, or you can build the components of the new around the old. You start swapping the component tree and its strangler vines. They come in and strangle the tree. I think that’s going to happen over time because of AI. As the AI is exposed to existing systems as a copilot, it’s going to start automatically documenting what’s surrounding the tree. And then, a year later, it will have enough commentary to understand the business context it was trying to achieve, and it can just take over.

    https://www.infoworld.com/article/3610521/refactoring-ai-code-the-good-the-bad-and-the-weird.html

  13. Tomi Engdahl says:

    GitHub’s boast that Copilot produces high-quality code challenged
    We’re shocked – shocked – that Microsoft’s study of its own tools might not be super-rigorous
    https://www.theregister.com/2024/12/03/github_copilot_code_quality_claims/

    GitHub’s claim that the quality of programming code written with its Copilot AI model is “significantly more functional, readable, reliable, maintainable, and concise,” has been challenged by software developer Dan Cîmpianu.

    Cîmpianu, based in Romania, published a blog post in which he assails the statistical rigor of GitHub’s Copilot code quality data.

    GitHub last month cited research indicating that developers using Copilot:

    Had a 56 percent greater likelihood to pass all ten unit tests in the study (p=0.04);
    Wrote 13.6 percent more lines of code with GitHub Copilot on average without a code error (p=0.002);
    Wrote code that was more readable, reliable, maintainable, and concise by 1 to 3 percent (p=0.003, p=0.01, p=0.041, p=0.002, respectively);
    Were 5 percent more likely to have their code approved (p=0.014).

    Cîmpianu takes issue with the choice of assignment, given that writing a basic Create, Read, Update, Delete (CRUD) app is the subject of endless online tutorials and therefore certain to have been included in training data used by code completion models. A more complex challenge would be better, he contends.

    He then goes on to question GitHub’s inadequately explained graph that shows 60.8 percent of developers using Copilot passed all ten unit tests while only 39.2 percent of developers not using Copilot passed all the tests.

    More significantly, Cîmpianu takes issue with GitHub’s claim that devs using Copilot produced significantly fewer code errors. As GitHub put it, “developers using GitHub Copilot wrote 18.2 lines of code per code error, but only 16.0 without. That equals 13.6 percent more lines of code with GitHub Copilot on average without a code error (p=0.002).”

    Cîmpianu argues that 13.6 percent is a misleading use of statistics because it only refers to two additional lines of code.
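
    (Checking the arithmetic: (18.2 − 16.0) / 16.0 ≈ 13.75 percent, in line with GitHub’s rounded 13.6 percent figure given rounding of the inputs – but in absolute terms it is about 2.2 extra lines between errors, which is Cîmpianu’s point.)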

    Cîmpianu is also unhappy with GitHub’s claim that Copilot-assisted code was more readable, reliable, maintainable, and concise by 1 to 3 percent. He notes that the metrics for code style and code reviews can be highly subjective, and that details about how code was assessed have not been provided.

    Another paper by researchers affiliated with Bilkent University in Turkey, released in April 2023 and revised in October 2023, found that ChatGPT, GitHub Copilot, and Amazon Q Developer (formerly CodeWhisperer) all produce errors. And to the extent those errors produced “code smells” – poor coding practices that can give rise to vulnerabilities – “the average time to eliminate them was 9.1 minutes for GitHub Copilot, 5.6 minutes for Amazon CodeWhisperer, and 8.9 minutes for ChatGPT.”

    That paper concludes, “All code generation tools are capable of generating valid code nine out of ten times with mostly similar types of issues. The practitioners should expect that for 10 percent of the time the generated code by the code generation tools would be invalid. Moreover, they should test their code thoroughly to catch all possible cases that may cause the generated code to be invalid.”

    Nonetheless, a lot of developers are using AI coding tools like GitHub Copilot as an alternative to searching for answers on the web. Often, a partially correct code suggestion is enough to help inexperienced coders make progress. And those with substantial coding experience also see value in AI code suggestion models.

    As veteran open source developer Simon Willison observed in a recent interview [VIDEO]: “Somebody who doesn’t know how to program can use Claude 3.5 artefacts to produce something useful. Somebody who does know how to program will do it better and faster and they’ll ask better questions of it and they will produce a better result.”

    For GitHub, maybe the message is that code quality, like security, isn’t top of mind for many developers.

    Cîmpianu contends it shouldn’t be that way. “[I]f you can’t write good code without an AI, then you shouldn’t use one in the first place,” he concludes.

    Try telling that to the authors who don’t write good prose, the recording artists who aren’t good musicians, the video makers who never studied filmmaking, and the visual artists who can’t draw very well.

  14. Tomi Engdahl says:

    Sam Altman lowers the bar for AGI / ‘My guess is we will hit AGI sooner than most people in the world think and it will matter much less,’ says OpenAI’s CEO.
    https://www.theverge.com/2024/12/4/24313130/sam-altman-openai-agi-lower-the-bar

  15. Tomi Engdahl says:

    ‘An easy button to get off Windows’: Amazon’s new AI moves Microsoft apps to Linux
    by Todd Bishop on December 3, 2024 at 12:19 pm


    https://www.geekwire.com/2024/an-easy-button-to-get-off-windows-amazon-offers-ai-to-move-legacy-microsoft-apps-to-linux/

  16. Tomi Engdahl says:

    DeepMind’s Genie 2 can generate interactive worlds that look like video games
    https://techcrunch.com/2024/12/04/deepminds-genie-2-can-generate-interactive-worlds-that-look-like-video-games/

    DeepMind, Google’s AI research org, has unveiled a model that can generate an “endless” variety of playable 3D worlds.

    Called Genie 2, the model — the successor to DeepMind’s Genie, which was released earlier this year — can generate an interactive, real-time scene from a single image and text description (e.g. “A cute humanoid robot in the woods”). In this way, it’s similar to models under development by Fei-Fei Li’s company, World Labs, and Israeli startup Decart.

  17. Tomi Engdahl says:

    How to generate unit tests with GitHub Copilot: Tips and examples
    Learn how to generate unit tests with GitHub Copilot and get specific examples, a tutorial, and best practices.
    https://github.blog/ai-and-ml/how-to-generate-unit-tests-with-github-copilot-tips-and-examples/

    Developers writing enough unit tests? Sure, and my code never has bugs on a Friday afternoon.

    Whether you’re an early-career developer or a seasoned professional, writing tests—or writing enough tests—is a challenge. That’s especially true with unit tests, which help developers catch bugs early, validate code, aid with refactoring, improve code quality, and play a core role in Test-Driven Development (TDD).

    All of this to say, you can save a lot of time (and write better, more robust code) by automating your test generation—and AI coding tools are making that easier and quicker than ever.

    GitHub Copilot, GitHub’s AI-powered coding assistant, helps generate test cases on the fly and can save you time. I’ll be honest: I heavily rely on GitHub Copilot to generate tests in my own workflows—but I still manually write a number of them to help formulate my thoughts.

    You should always start with the purpose of your unit tests and the ultimate audience and role they’ll play. Here are a few helpful things to consider:

    Consider your testing philosophy. Are you looking to isolate classes and dependencies or write high-level tests that validate overall behavior against your requirements? It’s not an either/or question—but you should consider exactly what outcome you’re looking to achieve.
    Define the purpose—and audience—of your tests. Clearly state the purpose of each test to help future developers know when it’s safe to delete them. Tests should support requirements, classes, or APIs clearly. Tests should also be written with their audience in mind. Maybe you’re looking to satisfy a product owner, help with QA, educate new team members, or enable refactoring work.
    Focus on utility. Always prioritize what’s most useful and needed for your projects. TDD, for instance, requires practice and should improve your speed and confidence instead of slowing you down.

    How GitHub Copilot helps generate unit tests
    GitHub Copilot uses generative AI to provide real-time code suggestions in your IDE and via chat-based functions in your IDE and across your GitHub projects.

    Based on the context in your code or chat-based queries (or even slash commands you use after highlighting specific code blocks), it can suggest relevant unit tests, covering typical scenarios like edge cases, common inputs, and failure modes. This ability to anticipate and generate test code can lead to better code coverage and more resilient applications.

    So, how does this work in practice? Imagine you’re testing a piece of business logic—like validating your inputs with a regular expression. Writing unit tests can feel (and often is) repetitive and time consuming because you need to test various edge cases to ensure the code works as expected.

    Instead of manually writing every test case, you can use GitHub Copilot to generate tests on your behalf by highlighting your code or logic, and let Copilot suggest unit tests to cover a range of inputs and edge cases.

    There are a number of ways to generate unit tests with GitHub Copilot. For instance, you can select the code you want to test, right click in your IDE and select Copilot->Generate Tests. You can also use the slash command /tests in your IDE to generate tests (you’ll want to highlight the code or logic block first that you’re looking to test). And then you always have GitHub Copilot Chat—both in your IDE and across your online GitHub experience—that you can prompt to find existing tests or use to generate new ones.

    When should you avoid using GitHub Copilot to generate unit tests?
    I tend to write tests manually in the same scenarios where I write code manually, because I know what I want, so I just do it and get it done. But sometimes I need to formulate my thoughts, and the process of manually writing code can help me determine what I’m trying to do and how to do it. From there, I ask GitHub Copilot to expand what I’ve already built.

    Key benefits of using GitHub Copilot to generate unit tests
    Even if I don’t always use GitHub Copilot for unit tests, I use it a lot when it comes to unit tests. Some of the biggest benefits I find when using GitHub Copilot to generate unit tests include:

    Saving time on routine tasks. Unit tests are perfect candidates for automation because of their repetitive nature. With Copilot, you can offload much of the grunt work, letting you focus on coding features rather than manually writing test cases.
    Supporting TDD. TDD involves writing tests before implementing the code itself—a process that can feel daunting when other autocompletion tools don’t offer any suggestions. Copilot changes the game here. It “trusts” your description of the application you’re building, helping you generate tests for functionalities that don’t exist yet. For example, you can describe an app’s functionality to Copilot, and it will generate tests for those features. Then, you can build the app to meet the requirements of those tests to put TDD into play in your workflow.
    Increasing test coverage. By letting Copilot handle initial test generation, you can quickly cover a broad range of cases. You can then refine and extend those tests, ensuring they meet your exact requirements. This iterative process improves confidence in your test suite and the code it verifies.

    Best practices for using GitHub Copilot to generate unit tests
    During my time using GitHub Copilot for test generation, I’ve come away with a number of personal best practices that may prove useful.

    Highlight the code you want to test. You always want to highlight the code or logic you want Copilot to focus on when generating tests or before using the slash command. In my experience, this feels incredibly intuitive, but I often hear questions from a lot of first timers.
    Be specific in your prompts about what you want to test. Copilot doesn’t code like humans do. If I create a function, for instance, I focus on what the function does and how it works. Copilot doesn’t truly read code; it just evaluates patterns. So, if you know there is a specific part of the function you’re looking to test, tell Copilot to “look for this” or look for a specific piece of logic.
    Provide context. When using Copilot, make sure to add comments or docstrings explaining the intended behavior of your code. You can also use a #[file] command to get Copilot to point at existing tests you’ve written. This helps Copilot generate more accurate and meaningful tests.
    Review suggestions carefully. Just like with human-generated code, never trust any tests Copilot generates without going through your normal review process. Review the output yourself, run it through linters, and check the code.
    Be flexible and iterative. At the end of the day, unit tests are code that effectively describe code. The first iteration of generated tests, for instance, may not necessarily be exactly what you’re looking for. I find sometimes that it won’t generate mock objects, or sometimes it will hallucinate. Don’t be afraid to reframe your prompt or question.
    Ask Copilot if you’re missing any tests. You can always prompt Copilot with the question “is there anything I’m not testing?” and Copilot will—in my experience—provide a number of tests I hadn’t considered around edge cases, requirement verifications, and more. Try it out for yourself; it’s something I’ve found incredibly helpful. I also like using Copilot to generate tests for error conditions and code paths that generate expected failures. Testing for these is just as important as testing with good inputs so you know your application can handle errors gracefully.
    Use test coverage tools. Use coverage tools (like Jest’s coverage in JavaScript or Cobertura in Java) to assess Copilot’s test coverage and fill in any gaps. And here’s a pro tip: if you combine Copilot with a code coverage tool, you can quickly find untested code paths and use Copilot to generate tests and reduce the risk of unforeseen errors.

    Example 1: Creating unit tests in Python with GitHub Copilot
    Suppose we have a Python function that checks that a price is greater than 0 and less than or equal to 1000. Let’s use Copilot to generate test cases for it.

    def validate_price(price: float):
        if price <= 0:
            raise ValueError("Price must be greater than 0")
        elif price > 1000:
            raise ValueError("Price must be less than or equal to 1000")
    With Copilot enabled, you’d want to enter the following slash command and prompt in Copilot Chat: /tests create tests for validate_price. Check the edge cases as well
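
    A representative suite (a sketch of the kind of output Copilot returns for this prompt, not its verbatim suggestion; it assumes validate_price above is in scope) might look like:

    import unittest

    class TestValidatePrice(unittest.TestCase):
        def test_price_zero(self):
            with self.assertRaises(ValueError):
                validate_price(0)

        def test_price_edge_case_zero(self):        # redundant with the above
            with self.assertRaises(ValueError):
                validate_price(0.0)

        def test_price_negative(self):
            with self.assertRaises(ValueError):
                validate_price(-5)

        def test_price_above_limit(self):
            with self.assertRaises(ValueError):
                validate_price(1000.01)

        def test_price_upper_boundary(self):
            validate_price(1000)                    # exactly 1000 is allowed

    if __name__ == "__main__":
        unittest.main()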

    While these tests work, they can also be improved with a follow-up prompt to remove redundancies (such as test_price_edge_case_zero and test_price_zero) and to add a test validating prices within the acceptable range (for example, validate_price(500)). Also remember: GitHub Copilot is powered by nondeterministic LLMs that don’t always produce the same results, and you should always review code suggested by Copilot—and re-prompt Copilot to improve that code.

  18. Tomi Engdahl says:

    Meet MegaParse: An Open-Source AI Tool for Parsing Various Types of Documents for LLM Ingestion
    https://www.marktechpost.com/2024/12/03/meet-megaparse-an-open-source-ai-tool-for-parsing-various-types-of-documents-for-llm-ingestion/

    In the evolving landscape of artificial intelligence, language models are becoming increasingly integral to a variety of applications, from customer service to real-time data analysis. One key challenge, however, remains: preparing documents for ingestion into large language models (LLMs). Many existing LLMs require specific formats and well-structured data to function effectively. Parsing and transforming different types of documents—ranging from PDFs to Word files—for machine learning tasks can be tedious, often leading to information loss or requiring extensive manual intervention. As generative AI continues to grow, the need for an efficient, automated solution to transform various data types into an LLM-ready format has become even more apparent.

  19. Tomi Engdahl says:

    I test AI tools for a living, and these are the only two worth paying for
    Generative AI can supercharge your productivity, but which tools truly deliver? These are my must-have subscriptions for programming, graphics, and creative workflows that actually pay off.
    https://www.zdnet.com/article/i-test-ai-tools-for-a-living-and-these-are-the-only-two-worth-paying-for/#google_vignette

    Generative AI is everywhere. It’s hard to believe it’s been only about two years since this new paradigm of productivity went mainstream, but here we are.

    As part of my job, I test AI tools. I’ll dig into just about any AI-related technology and see what I can make it do. Many of you have read my ongoing shootouts comparing AIs for programming, looking at AI content checkers, and more.

    1. ChatGPT Plus – $20/mo
    Speaking of AI and programming: ChatGPT has essentially doubled my programming output. I use AI to help me with common-knowledge programming. I talked about it in depth in my 25 tips article, but the core benefit is getting ChatGPT to write code for published APIs, so I don’t have to spend time searching for code examples and trying to reverse engineer comments on various programming boards.

    And yes, I mentioned ChatGPT. With the exception of Perplexity, which basically uses ChatGPT, none of the other AIs I tested could reliably help me with actual code I was working on for some real-world projects. ChatGPT did.

    In fact, that’s a big part of why I’m paying $20/mo for ChatGPT Plus. Sure, I’ve signed up and paid for some of the other AIs just to test them, but ChatGPT Plus is the only chatbot I have found so consistently useful that I keep it as a regularly used tool.

    I use ChatGPT for lots of research tasks, sometimes throwing math problems at it, and all sorts of other questions and problems I’m dealing with. While I never take its output as an unimpeachable source of truth, I do find ChatGPT to be a very useful sounding board, substantially more so than a quick Google search.

    As for why I pay for ChatGPT Plus, and not, say, Google Gemini’s pro version: it’s because of the programming. I’ve done tangible tests and know it works much better. I did sign up and pay for Gemini Advanced for a few months, but it just didn’t provide the value I got with ChatGPT. I also broke it really hard. Oops.

    Now, to be fair, I did outline five ways that an AI could help me in Gmail. If Gemini Advanced could do these things reliably, I’d sign back up in a heartbeat. But I just don’t need the current email message I’m reading summarized, and I sure don’t need it to write a friendlier or more professional version of whatever I’ve currently written.

    2. Midjourney – $10/mo
    Both the free and Plus versions of ChatGPT include access to DALL-E 3, the integrated text-to-image generative AI tool. You get more pictures in a given time period with the Plus version, but otherwise there’s no real difference between the free and the Plus version.

    I’ve had a lot of fun experimenting with DALL-E 3, including using it to generate gloriously strange images of every US state.

    When I compared DALL-E and Midjourney, which I did in two comprehensive tests, the results were quite close. Both Midjourney and DALL-E 3 did great jobs with most of the images and failed with a few.

    But even though I get DALL-E 3 with my $20/mo ChatGPT Plus fee, I pay an extra $10/mo for Midjourney. Why?

    Some of the answer is subjective. I like a lot of the images I get with Midjourney. Midjourney also allows you to describe artist styles, and lets you riff off a vast array of stylistic choices. DALL-E 3, perhaps because of guardrails imposed by OpenAI, doesn’t present as much choice. That said, I showed how DALL-E flagrantly swiped images of Snoopy and Jack Skellington, so those guardrails are a bit porous.

    But I also have two specific and objective answers about why I pay for Midjourney. First, because image generation is so subjective, it’s nice to have a variety of tools when seeking a representation of what you have in your head. I’ll try different prompts and even the same prompts with both tools, and take what works best.

    Second, every month I generate a promotional image for my wife’s online business. She has an e-commerce site that supports a popular hobby. Each month, on her very active Facebook group, she gives a craft-along theme to her users. I generate an image for that theme. Over the months, I’ve found that Midjourney does a far better job of generating an image that incorporates elements of the hobby than DALL-E 3.

    Because Midjourney shaves what used to be two to three hours of work pushing pixels in Photoshop to generate those images down to about 10 minutes, it’s worth the $10/month to me just for that project.

    Photoshop Generative Fill – Honorable mention

  20. Tomi Engdahl says:

    Qodo’s fully autonomous agent tackles the complexities of regression testing
    https://venturebeat.com/ai/qodos-fully-autonomous-agent-tackles-the-complexities-of-regression-testing/

    Code is continuously evolving in the software development process, requiring ongoing testing for quality and maintainability. This is the root of regression testing, in which existing tests are re-run to ensure that modified code continues to function as intended.

    However, regression testing can be time-consuming and complex, and it is often neglected in favor of other priorities.

    Qodo (formerly CodiumAI) says it can ease headaches around the process with the release today of its new fully autonomous AI regression testing agent, Qodo Cover. Its agent creates validation suites to ensure that software applications are, essentially, behaving. The 2-and-a-half-year-old startup announced its new tool at AWS re:Invent, where it also pitched as a finalist in an AWS Unicorn Tank competition.

    “We’re moving toward a place where AI doesn’t just write code — it helps tackle the majority of developers’ workload by proving that code functions correctly,” Qodo CEO Itamar Friedman told VentureBeat.

    Qodo Cover is the newest of these. The fully autonomous agent analyzes source code and performs regression tests to validate it as it changes throughout its lifecycle. The platform ensures that each test runs successfully, passes and increases the amount of code it covers — and only keeps those that meet all three criteria.

    It’s estimated that enterprise developers spend only an hour a day actually writing code; the rest of their time goes to crucial tasks such as testing and review, Friedman pointed out. However, “many companies are rushing to generate code with AI, focusing on that one hour while ignoring the rest of the equation.”

    Traditional testing approaches simply don’t scale, he noted, which can stall the next leap in software development where AI can reliably generate 80% or more of high-quality code. “Just like how hardware verification revolutionized chip manufacturing a few decades ago, we’re now at a similar inflection point with software. When 25% or more of code is AI-generated, we need new paradigms to ensure reliability.”

    Hugging Face-approved
    Demonstrating its ability to generate production-quality tests, a pull request generated fully autonomously by Qodo Cover was recently accepted into Hugging Face’s PyTorch Image Models repository. Pull requests are a means of quality control in software development, allowing collaborators to propose and review changes before they are integrated into a codebase. This can keep bad code and bugs out of the main codebase to ensure quality and consistency.

    The acceptance by Hugging Face validates Qodo’s offering and exposes it to more than 40,000 projects in the popular machine learning (ML) repository.

    “Getting a contribution accepted into a major open-source project is a signal that AI agents are beginning to operate at the level of professional developers when it comes to understanding complex codebases and maintaining high standards for quality,” said Friedman. “It’s a peek into how software development will evolve.”

    Building off of Meta research
    Qodo Cover is built on an open-source project that Qodo launched in May. That project was based on TestGen-LLM, a tool developed by Meta researchers to fully automate test coverage. To overcome challenges with large language model (LLM)-generated tests, the researchers set out to answer specific questions:

    Does the test compile and run properly?
    Does the test increase code coverage?
    Once those questions are validated, it’s important to perform a manual investigation, Friedman writes in a blog post. This involves asking:

    How well is the test written?
    How much value does it actually add?
    Does it meet any additional requirements?
    Users provide several inputs to Qodo Cover, including:

    The source file for code to be tested
    Existing test suite
    Coverage report
    Command for building and running suites
    Code coverage targets and maximum number of iterations to run
    Additional context and prompting options
    Qodo Cover then generates more tests in the same style, validates them using the runtime environment (i.e., do they build and pass?), reviews metrics such as increased code coverage, and updates existing test suites and coverage reports. This repeats until the code reaches the coverage threshold or the maximum number of iterations is hit.
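
    That acceptance logic reduces to a simple loop. Here is a minimal Python sketch of it (illustrative only, not Qodo’s implementation; it assumes hypothetical helpers where run_suite returns the coverage percentage, or None when the candidate fails to build or pass):

    def coverage_loop(generate_tests, run_suite, accept, target=80, max_iters=5):
        coverage = run_suite(None)             # baseline coverage, no candidate
        for _ in range(max_iters):
            if coverage >= target:
                break                          # coverage threshold reached
            candidate = generate_tests()       # LLM proposes a new test
            new_cov = run_suite(candidate)     # does it build, run, and pass?
            if new_cov is not None and new_cov > coverage:
                accept(candidate)              # keep: meets all three criteria
                coverage = new_cov
        return coverage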

    Giving devs full control, providing progress reports
    Qodo’s agent can be deployed as a comprehensive tool that analyzes full repositories to identify gaps and irregularities and extend test suites. Or, it can be established as a GitHub action that creates pull requests automatically to suggest tests for newly-changed code. Qodo emphasizes that developers maintain full control and have the ability to review and selectively accept tests. Each pull request also includes detailed coverage progress reports.

    Qodo Cover supports all popular AI models, including GPT-4o and Claude 3.5 Sonnet. The company says it delivers high-quality results across more than a dozen programming languages including JavaScript, TypeScript, C++, C#, Ruby, Go and Rust. It is intended to integrate with Qodo Merge, which reviews and handles pull requests, and coding tool Qodo Gen.

  21. Tomi Engdahl says:

    GenAI comes for jobs once considered ‘safe’ from automation
    Specialty in cognitive non-routine tasks means high-skilled city workers affected
    https://www.theregister.com/2024/12/03/ai_oecd_report/

    Jobs in geographical areas and occupations once thought to be at low risk of automation will soon be the most affected by generative AI, according to the Organization for Economic Co-operation and Development (OECD).

    “Generative AI will transform many jobs, but its impact will be greatest in regions that have been least exposed to past waves of automation,” the OECD stated in its Job Creation and Local Economic Development 2024 report, released late last week.

    The impact of generative AI on these roles will be even greater and more wide-sweeping than that of previous automation technologies, the report said.

    Previously, automation affected more rural and manufacturing jobs, but generative AI’s specialty lies in cognitive non-routine tasks, meaning it will affect more high-skilled workers and women than previous automation technologies. This also means its impact will be seen in metropolitan areas, where these roles are typically based.

    The OECD puts out a version of this report annually, reflecting major changes and disruptions in the job market for that year. The 2024 report focused on the geography of generative AI, while the 2023 version focused on examining the landscape of green jobs.

    The latest report measured the impact of generative AI by how much of workers’ tasks could become at least 50 percent faster through its use.

  22. Tomi Engdahl says:

    AWS brings multi-agent orchestration to Bedrock
    https://venturebeat.com/ai/aws-brings-multi-agent-orchestration-to-bedrock/

    AWS is doubling down on AI agents with the announcement of multiagent capabilities on its Amazon Bedrock platform.

    During his keynote at the AWS re:Invent conference, AWS CEO Matt Garman said customers building agents on Bedrock wanted a means to make agents that work together.

    “While a single agent can be useful, more complex tasks, like performing financial analysis across hundreds or thousands of different variables, may require a large number of agents with their own specializations,” AWS said. “However, creating a system that can coordinate multiple agents, share context across them, and dynamically route different tasks to the right agent requires specialized tools and generative AI expertise that many companies do not have available.”

    The new capabilities allow enterprises using Bedrock to build AI agents and establish their entire agentic ecosystem, including orchestration agents that manage multiple agents and handle workflows requiring multiple steps.

  23. Tomi Engdahl says:

    AWS Reveals Multi-Agent Orchestrator Framework for Managing AI Agents
    https://www.infoq.com/news/2024/12/aws-multi-agent/

    AWS has introduced Multi-Agent Orchestrator, a framework designed to manage multiple AI agents and handle complex conversational scenarios. The system routes queries to the most suitable agent, maintains context across interactions, and integrates seamlessly with a variety of deployment environments, including AWS Lambda, local setups, and other cloud platforms.

    The framework supports dual-language implementation in Python and TypeScript and accommodates both streaming and non-streaming agent responses. It includes pre-built agents for rapid deployment and provides extensive features such as intelligent intent classification, robust context management, and the scalability to integrate new agents or customize existing ones. This makes it a versatile tool for enterprises managing diverse AI applications.
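
    As a purely illustrative sketch of that routing idea (the framework’s real classifier is model-driven; the keyword rules and agent names below are made up), intent classification plus per-session context can be as simple as:

    # Toy intent-based router with per-session context. The real
    # Multi-Agent Orchestrator uses an LLM classifier; this keyword
    # version only sketches the control flow.
    from collections import defaultdict

    AGENT_KEYWORDS = {
        "billing_agent": ["invoice", "refund", "charge"],
        "tech_support_agent": ["error", "crash", "install"],
    }

    def classify(query: str) -> str:
        q = query.lower()
        for agent, keywords in AGENT_KEYWORDS.items():
            if any(k in q for k in keywords):
                return agent
        return "general_agent"  # fallback

    sessions = defaultdict(list)  # session_id -> list of (agent, query) turns

    def route(session_id: str, query: str) -> str:
        agent = classify(query)
        sessions[session_id].append((agent, query))  # context persists across turns
        return agent

    print(route("u1", "I was double charged on my invoice"))  # billing_agent
    print(route("u1", "now the app keeps crashing"))          # tech_support_agent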

    Reply
  24. Tomi Engdahl says:

    Researchers from the University of Texas at Austin, led by Asst. Prof. Yuhao Kang, have introduced the “Soundscape-to-Image Diffusion Model.” This innovative AI technology generates highly accurate images from audio recordings of city streets.

    Trained on a dataset of 10-second audio-visual clips from various global locations, the system can create visual representations that match the recorded ambient sounds with striking accuracy. The technology holds potential for forensic applications and offers insight into how sound shapes our perception of environments. The study, published in Nature, opens new avenues for enhancing urban design and community well-being.

    #AI #SoundscapeToImage #Technology #UrbanDesign #ForensicApplications #Soundscape #DeepLearning #UTAustin #Innovation #FutureTech

    Reply
  25. Tomi Engdahl says:

    Data center power demand could lead to an energy shortage. AI is driving an explosive rise in electricity consumption, Gartner estimates.

    Stark forecast: Electricity consumption is exploding, and data centers face an energy shortage
    7 Dec 2024, 17:30
    https://www.uusisuomi.fi/uutiset/us/6dcf9eef-678a-46f6-b24d-87548d17b0de?utm_term=Autofeed&utm_medium=Social&utm_source=Facebook&fbclid=IwZXh0bgNhZW0CMTEAAR1Bei2Hg6MBhnmFNjq4ioselTyBfgSNOAEcmdImnse1955O6ru6HSx7rfs_aem_10JXPpXf99lP7o9pFg_fhw#Echobox=1733586081

    Forty percent of data centers will be operationally constrained by 2027, according to consultancy Gartner. The situation stems from the increased electricity consumption driven by AI.

    Gartner estimates that AI-optimized data centers will require 500 terawatt-hours of power per year in 2027, up from just 195 terawatt-hours last year. Gartner analyst Bob Johnson says data centers’ electricity demand will exceed utilities’ ability to expand their capacity quickly enough.

    “This, in turn, threatens to disrupt energy availability and lead to energy shortages, which will restrict the growth of data centers starting in 2026,” Johnson says in Gartner’s analysis.

    Gartner Predicts Power Shortages Will Restrict 40% of AI Data Centers By 2027
    Sydney, Australia, November 12, 2024

    Rapid Growth in Energy Consumption For GenAI Will Exceed Power Utilities’ Capacity
    AI and generative AI (GenAI) are driving rapid increases in electricity consumption, with data center forecasts over the next two years reaching as high as 160% growth, according to Gartner, Inc. As a result, Gartner predicts 40% of existing AI data centers will be operationally constrained by power availability by 2027.
    https://www.gartner.com/en/newsroom/press-releases/2024-11-12-gartner-predicts-power-shortages-will-restrict-40-percent-of-ai-data-centers-by-20270
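
    As a quick sanity check on those figures: growing from 195 TWh to 500 TWh implies roughly 156 percent growth, consistent with the “as high as 160%” forecast Gartner cites.

    # Sanity check of the Gartner figures quoted above.
    last_year_twh = 195   # data center consumption last year, per the article
    forecast_twh = 500    # AI-optimized data centers in 2027, per Gartner

    growth = (forecast_twh - last_year_twh) / last_year_twh
    print(f"Implied growth: {growth:.0%}")  # ~156%, near Gartner's "as high as 160%"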

    Reply
  26. Tomi Engdahl says:

    https://www.facebook.com/share/p/GoZ24ejqAV7Y52aa/

    I asked GPT-4o: “What advice do you recommend to produce text that is authentically human?”

    ***** its response…

    Producing text that feels authentically human involves blending natural tone, emotional nuance, and context-awareness with clear communication. Here are some actionable recommendations:

    1. Understand Your Audience

    • Tailor your language to match the needs, preferences, and expectations of your readers.
    • Use empathy to imagine how they might feel or react as they read your text.

    2. Use Conversational Tone

    • Write as if you’re speaking to someone directly.
    • Use contractions (e.g., “don’t” instead of “do not”) to make the text feel less formal, unless the context demands otherwise.

    3. Embrace Imperfection

    • Authentic human writing isn’t flawless. Small quirks like informal phrasing, rhetorical questions, or even slight redundancy can make writing feel relatable.
    • Avoid overly polished or overly structured writing unless required by the context.

    4. Add Emotional Resonance

    • Express emotions or reflect on feelings where appropriate. For example, “I was genuinely thrilled to see…” conveys more humanity than “I was pleased to see…”
    • Use figurative language sparingly, but effectively, to evoke imagery or feelings.

    5. Incorporate Personal Experience or Perspective

    • Share relatable anecdotes or observations to ground your text in real-world experiences.
    • Phrases like “in my experience” or “I’ve noticed that…” can make your text more engaging.

    6. Vary Sentence Length and Structure

    • Mix long, complex sentences with short, punchy ones for rhythm and flow.
    • Avoid robotic, formulaic patterns.

    7. Use Humor or Playfulness (Where Appropriate)

    • A light joke, a pun, or a playful tone can make text feel alive.
    • For instance, “I’m no expert, but even I know that coffee before bed isn’t a great idea.”

    8. Acknowledge Uncertainty

    • Humans often hedge their claims with phrases like “I think,” “it seems,” or “maybe.” Adding these can make writing sound more authentic and less rigid.

    9. Pay Attention to Cultural and Contextual Nuances

    • Use idioms, colloquialisms, or culturally relevant references that resonate with your audience.
    • For example, “a stitch in time saves nine” could add a personal, localized touch.

    10. Revise with Intent

    • Read your text aloud to hear its tone and rhythm. Does it sound like something you’d say in real life?
    • Ask, “Does this feel natural? Would I say this to a friend or colleague?”

    11. Avoid Overuse of Jargon or Buzzwords

    • While expertise matters, humans communicate best with clarity. Replace overly technical terms with simple, relatable language when possible.

    12. Emphasize Storytelling

    • People connect with narratives. Instead of listing facts, weave them into a story.
    • Example: “When I first started writing, I thought every sentence had to be perfect. It turns out, imperfections are what make writing human.”

    By combining these techniques, you can craft text that feels natural, engaging, and authentically human, resonating with readers on a deeper level.

    Reply
  27. Tomi Engdahl says:

    Davide Castelvecchi / Nature:
    A randomized study at a corporate lab employing more than 1,000 researchers: teams using AI discovered 44% more new materials than teams with standard workflows

    Huge randomized trial of AI boosts discovery — at least for good scientists
    A controlled study at a firm measured the effects of using AI to assist research, and saw increases in discoveries and patents.
    https://www.nature.com/articles/d41586-024-03939-5

    Reply
  28. Tomi Engdahl says:

    How close is AI to human-level intelligence?
    Large language models such as OpenAI’s o1 have electrified the debate over achieving artificial general intelligence, or AGI. But they are unlikely to reach this milestone on their own.
    https://www.nature.com/articles/d41586-024-03905-1

    OpenAI’s latest artificial intelligence (AI) system dropped in September with a bold promise. The company behind the chatbot ChatGPT showcased o1 — its latest suite of large language models (LLMs) — as having a “new level of AI capability”. OpenAI, which is based in San Francisco, California, claims that o1 works in a way that is closer to how a person thinks than do previous LLMs.

    The release poured fresh fuel on a debate that’s been simmering for decades: just how long will it be until a machine is capable of the whole range of cognitive tasks that human brains can handle, including generalizing from one task to another, abstract reasoning, planning and choosing which aspects of the world to investigate and learn from?

    Such an ‘artificial general intelligence’, or AGI, could tackle thorny problems, including climate change, pandemics and cures for cancer, Alzheimer’s and other diseases. But such huge power would also bring uncertainty — and pose risks to humanity. “Bad things could happen because of either the misuse of AI or because we lose control of it,” says Yoshua Bengio, a deep-learning researcher at the University of Montreal, Canada.

    The revolution in LLMs over the past few years has prompted speculation that AGI might be tantalizingly close. But given how LLMs are built and trained, they will not be sufficient to get to AGI on their own, some researchers say. “There are still some pieces missing,” says Bengio.

    What’s clear is that questions about AGI are now more relevant than ever.

    Why the AGI debate changed

    The phrase artificial general intelligence entered the zeitgeist around 2007 after its mention in an eponymously named book edited by AI researchers Ben Goertzel and Cassio Pennachin. Its precise meaning remains elusive, but it broadly refers to an AI system with human-like reasoning and generalization abilities. Fuzzy definitions aside, for most of the history of AI, it’s been clear that we haven’t yet reached AGI.

    Reply
  29. Tomi Engdahl says:

    Dan Gallagher / Wall Street Journal:
    A look at chipmaker Marvell, whose market cap surpassed Intel’s, driven by its data center business; CEO Matt Murphy has reportedly been floated as Intel’s CEO

    Meet the Small AI Chip Maker Now More Valuable Than Intel
    Marvell’s role in helping tech titans create their own data center chips has boosted its revenue—and valuation
    https://www.wsj.com/tech/marvell-ai-chip-manufacturing-faa89cb6?st=tATaKW&reflink=desktopwebshare_permalink

    Reply
  30. Tomi Engdahl says:

    Margherita Bassi / Gizmodo:
    A study involving 747,604 women finds those who paid extra for AI-enhanced mammograms were 21% more likely to have breast cancer detected than those who didn’t — In a study, women who chose AI-powered mammograms were 21% more likely to have cancer detected than those who didn’t.

    AI Is Detecting More Breast Cancer Cases, Study Suggests
    In a study, women who chose AI-powered mammograms were 21% more likely to have cancer detected than those who didn’t.
    https://gizmodo.com/ai-is-detecting-more-breast-cancer-cases-study-suggests-2000534894

    Reply
  31. Tomi Engdahl says:

    Om Malik / Crazy Stupid Tech:
    The rise of generative AI will force the web browser to evolve again, just as the web browser evolved for a mobile-first world by contorting itself into apps — I’m addicted to Apple’s Vision Pro. It’s a nearly perfect entertainment device, serving as my ideal television. Sure, I would like it to be lighter.

    Will A.I. Eat The Browser?
    https://crazystupidtech.com/archive/will-ai-eat-the-browser/

    For most of us, it’s hard to imagine life without an internet browser. But as AI disaggregates information from text, video, and music into unique remixable AI chatbot answer streams, it’s clear to me that over the next decade the browser will need to adapt or die.

    Ever since I saw the earliest versions of Humane’s AIPin, Snap’s AR glasses, and caught wind of what would become Apple’s Vision Pro, I have wondered about the durability of the browser. Just over two years ago, with the arrival of a user-friendly version of ChatGPT, everything fell into place.

    I don’t expect these devices to dominate the world next year or the year after, but the journey has begun. And it’s already clear that many of these emerging devices are not like the computers we have used thus far. For starters, some of them won’t even have screens or keyboards.

    Secondly, with the rise of generative AI, we are starting to see atomization of web pages themselves. This in itself undermines the original premise of the web and how it has been built thus far. If there are no documents to connect, how does the browser do what it has done so far? (Bill Gross made a similar point in a conversation with Fred earlier this year. You can read Fred’s story on his new company on CrazyStupidTech.com.)

    More importantly, lost in the “AI” and “AGI” hype is the fact that the real breakthrough is the ability of large language models and related technologies to take data and create logical streams, generating text, video, or audio content. This is the fundamental advancement from an “information” standpoint. Even early (and recently developed) tools like NotebookLM (which creates audio from text) give us a directional view of the future.

    For instance, a decade (or sooner) from now, a customer of AppleNews could ask it to create a curated morning news show featuring information from preselected sources and topics, and have a synthetically created influencer either read it to them or have them watch it on a future version of Vision Pro, or something akin to it.

    None of this is science fiction — you can pretty much do all of these things now, albeit poorly. In time, it won’t just be possible — it will be second nature. As such, it will be a big change in how the information ecosystem on the internet has worked so far. These new technologies give us an opportunity to have more personalized, dialog-centric control over the information.

    Current apps require active user engagement; we must consciously track everything. We are always taking photos, logging information, manually tracking calories, checking ingredient lists, and researching nutrition facts when grocery shopping. The technical challenge isn’t just building a better food database. It’s creating seamless monitoring and intervention without requiring constant user input.

    In the near future, you can imagine a non-human entity — let’s call it a DietBot — acting as your personal nutritionist and meal planner and requiring little to no effort on your part.

    While browsers are so ubiquitous that it may be hard to imagine life without them, the truth is that we humans have had to adapt to what has been a document-centric web experience. We have been forced to adapt to technological constraints, rather than technology truly adapting to human needs.

    The entire ecosystem of the web exists for monetization by large platforms, and, as people like Flipboard founder and CEO Mike McCue, who worked for Netscape during its heyday, will tell you, it has served this purpose quite well.

    “Since the mid-90s, the web and the web browser have been exclusively focused on connecting and rendering content using open standards like HTML and HTTP,” he said. “This worked well for decades and fueled the rise of super valuable web-based businesses like Amazon, Airbnb, and many others.”

    McCue believes that with protocols like ActivityPub, combined with AI, we can create a more personalized, mediated information experience. While he views AI interfaces like Claude and ChatGPT as a seismic shift, he believes that “you’ll always need some technical vehicle.” What will change is how that vehicle is used. Just as the browser evolved for a mobile-first world by contorting itself into apps, the personalized, interactive, dialogue-centric AI system will force the browser to evolve again.

    So, what might that evolution look like?

    Josh Miller, co-founder of The Browser Company, is making “Arc,” a browser for the AI-first era. He believes that there is less of a need for the user interface of the browser of the past, but the internals of the browser are going to be pivotal for our future. “While most think we are building a browser,” Miller said in a conversation, “what we are building is a browser-based system.”

    He wants to transform the browser from a mere viewer to an operating system-like entity that maintains personal preferences and behaviors at the system level, allowing us to use “AI” across devices without replicating our choices at the app level. His new browser-based OS will understand user context and preferences at a fundamental level, making it easier to create personalized experiences. Rather than having applications dictate how we interact with information, our usage patterns and preferences will shape how information and services are presented to us.

    Miller believes the web browser’s core technologies, especially those that are open and widely adopted standards, make it easy for browsers to evolve quickly and adapt to a future where we will interact with multiple devices — not just desktop or laptop computers, or mobile phones. After all, wearables and devices without screens will need to browse, retrieve, and interact with information without the need for a browser as we know it.

    Just as the iPhone positioned itself as a reinvention of the phone, the browser will go through a similar transition, Miller said. The transition however “will be gradual” and the current form of the browser “will actually be an important” part of that transition “almost as a way to bridge” people to the future and “let their guard down.”

    As VR, AR, audio interfaces, and chat become ever more central to our daily lives — and not just for Vision Pro addicts like myself, but for everyone — the web browser’s limitations are becoming increasingly apparent. There is no doubt in my mind that the implications of this seismic change in what a browser does, and how it works, will be felt far and wide.

    Reply
  32. Tomi Engdahl says:

    Miles Kruppa / Wall Street Journal:
    A look at Chatbot Arena, which now ranks 170+ AI models; the two UCB researchers who launched the project in April 2023 hope to grow it into a Wikipedia for AI — Ranking AI is tricky, so two students developed a way to make the best bots battle — BERKELEY, Calif.—Record labels have the Billboard Hot 100.

    The UC Berkeley Project That Is the AI Industry’s Obsession
    Ranking AI is tricky, so two students developed a way to make the best bots battle
    https://www.wsj.com/tech/ai/the-uc-berkeley-project-that-is-the-ai-industrys-obsession-bc68b3e3?st=UiVdBt&reflink=desktopwebshare_permalink

    Record labels have the Billboard Hot 100. College football has its playoff rankings. Artificial intelligence has a website, run by two university students, called Chatbot Arena.

    Roommates Anastasios Angelopoulos and Wei-Lin Chiang never imagined the graduate school project they developed last year would quickly become the most-watched ranking of the world’s best AI systems.

    Traditionally, AI technologies have been assessed through advanced math, science and law tests. Chatbot Arena lets users ask a question, get answers from two anonymous AI models and rate which one is better.

    The ratings are aggregated onto a leaderboard where big Silicon Valley players like OpenAI, Google and Meta Platforms vie for supremacy with lesser-known startups from China and Europe.

    “Everyone is striving to be at the top of this leaderboard,” said Joseph Spisak, a director of product management at Meta Platforms working on AI. “It’s amazing to have a few students get together and be able to create that level of impact.”
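
    The mechanics behind such a leaderboard are worth a sketch: pairwise votes can be aggregated with Elo-style ratings (Chatbot Arena has described using Elo and Bradley-Terry methods). The minimal Python version below uses made-up battle data, purely as illustration.

    # Minimal Elo-style aggregation of pairwise battles into a leaderboard,
    # the general idea behind arena-style rankings. Battle data is made up.
    from collections import defaultdict

    K = 32  # update step size
    ratings = defaultdict(lambda: 1000.0)

    def expected(a: float, b: float) -> float:
        # Probability that a player rated `a` beats one rated `b`.
        return 1.0 / (1.0 + 10 ** ((b - a) / 400))

    def update(winner: str, loser: str) -> None:
        e_win = expected(ratings[winner], ratings[loser])
        ratings[winner] += K * (1 - e_win)
        ratings[loser] -= K * (1 - e_win)

    battles = [("model_a", "model_b"), ("model_a", "model_c"), ("model_b", "model_c")]
    for w, l in battles:
        update(w, l)

    for model, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
        print(f"{model}: {r:.0f}")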

    Reply
  33. Tomi Engdahl says:

    Financial Times:
    Sources: ByteDance has taken an early lead in the generative AI race in China by poaching top talent from rivals and becoming Nvidia’s biggest customer in China
    https://www.ft.com/content/e90f4a83-bc31-4a5c-b9ab-28d722924143

    Reply
  34. Tomi Engdahl says:

    Financial Times:
    Meta says there are no plans to release its AI-powered coding tool Metamate externally; Metamate lacks the more autonomous, agent-like features of rivals’ tools

    https://www.ft.com/content/68828793-2978-4fc7-9f00-df2ca4e8b2b0

    Reply
  35. Tomi Engdahl says:

    OpenAI:
    OpenAI launches Sora in select countries; ChatGPT Pro offers up to 500 videos per month at up to 1080p, and ChatGPT Plus offers up to 50 videos at up to 720p — We’re moving our video generation model out of research preview. — Learn more System Card — Our video generation model …

    Sora is here
    We’re moving our video generation model out of research preview.
    https://openai.com/index/sora-is-here/

    Reply
  36. Tomi Engdahl says:

    Marques Brownlee / Marques Brownlee on YouTube:
    Review of OpenAI’s Sora video generator: excels at depicting landscapes, people, and stylistic content but often fails with basic physics like object permanence — SORA generates videos. This is the first review.

    https://www.youtube.com/watch?v=OY2x0TyKzIQ

    Reply
  37. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    OpenAI says Sora limits the depiction of minors, and only a subset of users will be able to create videos of a real person using uploaded footage of that person

    https://techcrunch.com/2024/12/09/openai-is-only-letting-some-sora-users-create-videos-of-real-people/

    Reply
  38. Tomi Engdahl says:

    Saritha Rai / Bloomberg:
    A growing cohort of AI startups are recruiting scores of experts to train models on highly specialized tasks for sensitive sectors like defense and health care — From spotting weeds in cotton fields to scanning the bush for signs of poachers, AI startups are recruiting scores of experts for highly specialized tasks.

    https://www.bloomberg.com/news/articles/2024-12-10/secret-to-ai-profitability-is-hiring-a-lot-more-doctorates

    Reply
  39. Tomi Engdahl says:

    Kali Hays / Fortune:
    Stability AI CEO Prem Akkaraju says the startup, now focused on its API and licensing, has seen “triple digit growth” and has “a clean balance sheet, no debt”

    Stability AI’s new CEO, hired six months ago, says business growing by ‘triple digits’ and no more debt
    https://fortune.com/2024/12/09/stability-ai-new-ceo-prem-akkaraju-business-triple-digit-growth-greycroft-dana-settle-brainstormai/

    Reply
  40. Tomi Engdahl says:

    Nilay Patel / The Verge:
    Q&A with Microsoft AI CEO Mustafa Suleyman on AGI, Copilot, AI agents, the OpenAI deal, joining Microsoft in March 2024, managing ~10K staff, DeepMind, and more
    https://www.theverge.com/24314821/microsoft-ai-ceo-mustafa-suleyman-google-deepmind-openai-inflection-agi-decoder-podcast

    Reply
  41. Tomi Engdahl says:

    Evgeny Morozov / Boston Review:
    A look at generative AI’s impact, its structural issue of concentrated power among just a few companies, and utopian questions over an alternative vision for AI

    https://www.bostonreview.net/forum/the-ai-we-deserve/

    Reply
  42. Tomi Engdahl says:

    AI is not Designed for You
    https://www.youtube.com/watch?v=6Lxk9NMeWHg

    How long is it going to take until AI is useful, and why is everyone raving about this extremely mid tech? Here’s my theory!

    Reply
