3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

6,742 Comments

  1. Tomi Engdahl says:

    Integrated multi-modal sensing and learning system could give robots new capabilities
    https://techxplore.com/news/2024-11-multi-modal-robots-capabilities.html#google_vignette

    Reply
  2. Tomi Engdahl says:

    Researchers from the University of Maryland and Adobe Introduce DynaSaur: The LLM Agent that Grows Smarter by Writing its Own Functions
    https://www.marktechpost.com/2024/11/23/researchers-from-the-university-of-maryland-and-adobe-introduce-dynasaur-the-llm-agent-that-grows-smarter-by-writing-its-own-functions/

    Reply
  3. Tomi Engdahl says:

    Hugging Face Releases Observers: An Open-Source Python Library that Provides Comprehensive Observability for Generative AI APIs
    https://www.marktechpost.com/2024/11/23/hugging-face-releases-observers-an-open-source-python-library-that-provides-comprehensive-observability-for-generative-ai-apis/

    Hugging Face has introduced Observers, a cutting-edge tool that enhances transparency and understanding of generative AI interactions. This open-source Python SDK offers developers an easy and flexible way to track, analyze, and manage interactions with AI models, marking a significant advancement in AI observability. Observers is a comprehensive solution for monitoring and analyzing generative AI systems. It enables developers to track interactions with various AI models, store observational data in multiple backends, and query these interactions efficiently. Transparency and ease of use are at the heart of its design, ensuring users can gain deep insights into their AI operations with minimal configuration.

    https://huggingface.co/blog/davidberenstein1957/observers-a-lightweight-sdk-for-ai-observability
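
    The Hugging Face blog post above documents the real Observers API. Purely to illustrate the pattern such an SDK implements (wrapping each model call so the prompt, response, and latency land in a queryable local store), here is a rough, hypothetical Python sketch; the class and table names are made up and are not the library’s:

        # Minimal illustration of the observability pattern: wrap a model call so
        # every prompt/response pair is stored in a local, queryable backend.
        # This is NOT the Observers API; names here are hypothetical.
        import sqlite3, time

        class ObservedClient:
            def __init__(self, generate_fn, db_path="observations.db"):
                self.generate_fn = generate_fn        # any function: prompt -> text
                self.db = sqlite3.connect(db_path)
                self.db.execute(
                    "CREATE TABLE IF NOT EXISTS interactions "
                    "(ts REAL, prompt TEXT, response TEXT, latency REAL)"
                )

            def generate(self, prompt):
                start = time.time()
                response = self.generate_fn(prompt)
                self.db.execute(
                    "INSERT INTO interactions VALUES (?, ?, ?, ?)",
                    (start, prompt, response, time.time() - start),
                )
                self.db.commit()
                return response

        # Usage with a stand-in model; wrap a real API call the same way.
        client = ObservedClient(lambda p: "echo: " + p)
        print(client.generate("Hello, world"))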

    Reply
  4. Tomi Engdahl says:

    Analysis
    Microsoft’s 10 new AI agents strengthen its enterprise automation lead
    https://venturebeat.com/ai/microsofts-10-new-ai-agents-strengthen-its-enterprise-automation-lead/

    Microsoft made waves at Ignite 2024 with its announcement that 10 autonomous AI agents are now available for enterprise use. Microsoft effectively declared that AI agents are ready for prime time — achieving what others have yet to accomplish.

    Microsoft’s pre-built agents target core enterprise operations – from CRM and supply chain management to financial reconciliation. While competitors like Salesforce and ServiceNow offer AI agent solutions in some limited areas, Microsoft has created an extensive agent ecosystem that reaches beyond its own platform. The system includes 1,400 third-party connectors and supports customization across 1,800+ large language models. The scale of adoption is equally significant: 100,000 organizations are already creating or modifying agents, Microsoft says, with deployment rates doubling last quarter – adoption numbers that dwarf those of competitors

    https://www.microsoft.com/en-us/dynamics-365/blog/business-leader/2024/10/21/transform-work-with-autonomous-agents-across-your-business-processes/

    Reply
  5. Tomi Engdahl says:

    ESP32-Based Personal AI Terminal with ChatGPT
    https://www.elektormagazine.com/articles/esp32-personal-ai-terminal-chatgpt

    Can an AI like ChatGPT pass the Turing test? We explore this with an ESP32, keyboard, TFT display, and Google text-to-speech. Is it obvious it’s a machine? Check out this AI terminal project.

    Whether AI solutions such as ChatGPT can pass the Turing test is still up for debate. Back in the day, Turing imagined a human operator judging replies to questions sent and received using an electromechanical teletype machine. Here we build a 21st-century version of Turing’s original experimental concept, using an ESP32 with a keyboard and TFT display to communicate exclusively with ChatGPT via the Internet. In addition, Google text-to-speech together with a tiny I2S amplifier module and speaker lets you listen in to the conversation. In our case, it’s obvious from the start that we are communicating with a machine — isn’t it?

    There is no doubt that AI tools such as OpenAI’s ChatGPT and Google’s Gemini can be real game changers in so many situations. I have used ChatGPT to develop quite complex control solutions. I provide the initial idea, and as I give more inputs, it refines the code, making it better with each iteration. It can even convert Python code to MicroPython or an Arduino sketch.
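
    The article itself covers the full hardware build; at its core, the software side boils down to an HTTPS request to OpenAI’s chat completions endpoint. A rough MicroPython sketch of that one step on an ESP32 follows, assuming Wi-Fi is already connected and the urequests module is installed; the TFT display and Google text-to-speech parts are omitted, and the model name is just an example:

        # Rough MicroPython sketch: send one prompt to the OpenAI chat completions
        # API from an ESP32 and print the reply. Wi-Fi setup, the TFT display and
        # text-to-speech from the article are omitted; urequests must be installed.
        import urequests

        API_KEY = "sk-..."  # your OpenAI API key

        def ask_chatgpt(prompt):
            resp = urequests.post(
                "https://api.openai.com/v1/chat/completions",
                headers={
                    "Authorization": "Bearer " + API_KEY,
                    "Content-Type": "application/json",
                },
                json={
                    "model": "gpt-4o-mini",
                    "messages": [{"role": "user", "content": prompt}],
                },
            )
            answer = resp.json()["choices"][0]["message"]["content"]
            resp.close()
            return answer

        print(ask_chatgpt("Can a machine pass the Turing test?"))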

    Reply
  6. Tomi Engdahl says:

    What Are AI Agents? Here’s how AI agents work, why people are jazzed about them, and what risks they hold
    https://spectrum.ieee.org/ai-agents

    The artificial intelligence world is abuzz with talk of AI agents. Microsoft recently released a set of autonomous agents that could help streamline customer service, sales, and supply chain tasks. Similarly, OpenAI unveiled Swarm, an experimental framework to explore better coordination between multi-agent systems. Meanwhile, Claude, the large language model (LLM) from Anthropic, is taking agentic AI to the next level with the beta stage of its computer use skills—from moving a mouse cursor around the screen to clicking buttons and typing text using a virtual keyboard.

    So, what exactly are AI agents?

    “AI agents are advanced artificial intelligence systems that are able to complete a task or make a decision,” says Adnan Ijaz, director of product management for Amazon Q Developer, an AI-powered software development assistant from Amazon Web Services (AWS). “Humans set the goal, and agents figure out on their own, autonomously, the best course of action.” The agents can interface with external systems to take action in the world.

    In addition to this autonomy, agentic AI can also receive feedback and continually improve on a task, says Yoon Kim, an assistant professor at MIT’s Computer Science and Artificial Intelligence Laboratory.

    Think of AI agents as a more capable version of generative AI. While both technologies rely on LLMs as their underlying model, generative AI creates new content based on the patterns it learned from its training data. Agentic systems, on the other hand, are not only able to generate content but are also able to take action based on the information they gain from their environment. “So all of that is essentially a step further than generative AI,” Ijaz says.

    How AI Agents Work
    To fulfill a particular task, AI agents usually follow a three-part workflow. First, they determine the goal through a user-specified prompt. Next, they figure out how to approach that objective by breaking it down into smaller, simpler subtasks and collecting the needed data. Finally, they execute tasks using what’s contained in their knowledge base plus the data they’ve amassed, making use of any functions they can call or tools they have at their disposal.

    Let’s take booking flights as an example, and imagine a prompt to “book the cheapest flight from A to B on Y date.” An AI agent might first search the web for all flights from A to B on Y date, scan the search results, and select the lowest-priced flight. The agent then calls a function that connects to the application programming interface (API) of the airline’s flight booking platform. The agent makes a booking for the chosen flight, entering the user’s details based on the information stored in its knowledge base.
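
    As a deliberately simplified illustration of that three-step loop, here is a hedged Python sketch. The search and booking functions are hypothetical stand-ins for the web-search and airline-API tools a real agent would call, and in practice an LLM, not hand-written logic, would decide which tool to use at each step:

        # Simplified sketch of the goal -> plan -> execute workflow described above.
        # search_flights() and book_flight() are hypothetical stand-ins for the
        # web-search and airline-booking tools a real agent would call.

        def search_flights(origin, dest, date):
            # Stand-in for a web search / flight-data API call.
            return [
                {"flight": "XY123", "price": 129.0, "seats": 3},
                {"flight": "XY456", "price": 99.0, "seats": 0},
            ]

        def book_flight(flight, passenger):
            # Stand-in for the airline's booking API; returns a confirmation code.
            return "CONFIRMED-" + flight["flight"] + "-" + passenger

        def flight_agent(goal, knowledge_base):
            # 1. The goal comes from the user prompt (parsed by hand here; an LLM would do it).
            origin, dest, date = goal["from"], goal["to"], goal["date"]
            # 2. Break the goal into subtasks and gather the needed data.
            options = search_flights(origin, dest, date)
            cheapest = min(options, key=lambda f: f["price"])
            # 3. Execute, keeping the human in the loop as described in the article.
            if cheapest["seats"] == 0:
                return "Cheapest flight is full; asking the user how to proceed."
            return book_flight(cheapest, knowledge_base["passenger_name"])

        print(flight_agent({"from": "A", "to": "B", "date": "2025-06-01"},
                           {"passenger_name": "Jane Doe"}))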

    “The key point of agentic interaction is that the system is able to understand the goal you’re trying to accomplish and then operate on it autonomously,” says Ijaz. However, humans are still in the loop, guiding the process and intervening when required. For instance, the flight-booking AI agent might be instructed to notify the user if the cheapest flight has no available seats, allowing the user to decide on the next step. “If at any point humans don’t think the system is going in the right direction, they can override it—they have control,” Ijaz adds.

    Promises and Pitfalls of Agentic AI
    Much like generative AI, agentic AI holds the promise of increased efficiency and improved productivity, with the agent performing mundane tasks that would otherwise be tedious or repetitive for the average human.

    “If these systems become trustworthy enough, then we could have agents arrange a calendar for you or reserve restaurants on your behalf—do stuff that you would otherwise have an assistant do,” says Kim.

    The keyword there is trustworthy, with data privacy and security as major challenges for agentic AI. “Agents are looking at a large swath of data. They are reasoning over it, they’re collecting that data. It’s important that the right privacy and security guardrails are implemented,” Ijaz says.

    Agentic AI is still in its early stages, and as AI agents evolve, they’ll hopefully make people’s lives easier and more productive. But caution is still recommended for the risks they pose. “It’s an important advancement, so I think all the attention it’s getting is warranted,” Ijaz says. “Agents are another tool in the armory for humans, and humans will put those tools to good use granted that we build those agents in ways that follow responsible AI practices.”

    Reply
  7. Tomi Engdahl says:

    Google’s AI Breakthrough Brings Quantum Computing Closer to Real-World Applications
    Researchers have developed an AI-driven technique to stabilize quantum states, a breakthrough that could make quantum computing practical.
    https://decrypt.co/292918/ai-breakthrough-brings-quantum-computing-closer-to-real-world-applications

    Reply
  8. Tomi Engdahl says:

    Google Cloud launches AI Agent Space amid rising competition
    https://venturebeat.com/ai/google-cloud-launches-ai-agent-space-amid-rising-competition/

    As we’ve covered here before at VentureBeat, the cloud computing wars have swiftly morphed into the AI wars, with leading cloud computing divisions Google Cloud, Microsoft Azure, and Amazon Web Services (AWS) all rolling out new tools for customers to access, use, deploy, and build atop a range of AI models.

    Therefore, it was not too surprising to learn this week that Google Cloud was offering a new AI agent ecosystem program called AI Agent Space.

    https://console.cloud.google.com/marketplace/browse?filter=category:ai-agent&pli=1&inv=1&invt=Abi5yQ

    Reply
  9. Tomi Engdahl says:

    Microsoft’s new Copilot Actions use AI to automate repetitive tasks / Microsoft is also improving Copilot in a variety of Office apps soon.
    https://www.theverge.com/2024/11/19/24299961/microsoft-copilot-actions-powerpoint-outlook-ai-improvements

    Reply
  10. Tomi Engdahl says:

    What to know about OpenAI’s new, free AI training course
    The free course meant for teachers is supposed to demystify AI.
    https://www.fastcompany.com/91232424/what-know-about-openais-new-free-ai-training-course

    Reply
  11. Tomi Engdahl says:

    Cloudy with a chance of GPU bills: AI’s energy appetite has CIOs sweating
    Public cloud expenses have businesses scrambling for alternatives that won’t melt the budget
    https://www.theregister.com/2024/11/29/public_cloud_ai_alternatives/

    Canalys Forums EMEA 2024 Organizations are being forced to rethink where they host workloads in response to ballooning AI demands combined with rising energy bills, and shoving them into the public cloud may not be the answer.

    CIOs are facing a quandary over rising power consumption from the huge compute demands of training and deploying advanced AI models, while energy costs are simultaneously rising. Finding some way to square this circle is becoming a big concern for large corporates, according to Canalys.

    Speaking at the recent Canalys Forum EMEA in Berlin, chief analyst Alastair Edwards said that every company is trying to figure out what model or IT architecture they need to deploy to take best advantage of the business transformation that “AI promises”.

    Reply
  12. Tomi Engdahl says:

    Should You Still Learn to Code in an A.I. World?
    Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.
    https://www.nytimes.com/2024/11/24/business/computer-coding-boot-camps.html

    Reply
  13. Tomi Engdahl says:

    Microsoft preps big guns to shift Copilot software and PCs
    IT admins be warned: 13,000 tech suppliers coming for your employer’s checkbook
    https://www.theregister.com/2024/11/29/microsoft_preps_big_guns_for/

    Canalys Forums EMEA 2024 When Microsoft needs to make a market, it turns to the channel – a nebulous term used for resellers, distributors and an assortment of other independent third party suppliers that sell wares and services. And by goodness Microsoft needs more feet on the street than ever if it’s going to appease investors desperate to see returns on the billions of dollars it’s betting on Generative Artificial Intelligence.

    According to some estimates, Microsoft has sunk $13 billion into OpenAI, basing its own LLM Prometheus on the ChatGPT-4 foundation, tweaked for certain functionalities. Then there’s the eye-watering capital expenditure Microsoft is forking out on datacenters to manage this tech in expectation of customers signing up.

    Reply
  14. Tomi Engdahl says:

    Coding With SLMs and Local LLMs: Tips and Recommendations
    Small language models and local LLMs are increasingly popular with devs. We list the best models and provide tips for evaluation.
    https://thenewstack.io/coding-with-slms-and-local-llms-tips-and-recommendations/

    While the impact of GitHub Copilot and other mainstream AI solutions on coding is undeniable, plenty of questions arise around the trend as a whole.

    For starters, many developers aren’t too comfortable sharing their code, oftentimes proprietary, with third parties. There’s also the financial part of the equation since API costs can accumulate pretty quickly — especially if you’re using the most advanced models.

    Enter local language models and their diminutive equivalents, such as small language models. The developer community has been increasingly vocal about their benefits, so let’s see what all the fuss is about. In addition to the concept itself, we’ll cover the best models, their benefits, and how this affects AI-aided development as a whole.

    What Are Locally Hosted LLMs?
    Locally hosted LLMs are advanced machine learning models that operate entirely within your local environment. These models, typically boasting billions of parameters, offer sophisticated code generation, contextual understanding, and debugging capabilities. Deploying LLMs locally allows developers to circumvent the latency, privacy issues, and subscription costs associated with cloud-hosted solutions.

    Running an LLM locally provides fine-grained control over model optimization and customization, which is particularly useful for specialized development environments.

    Furthermore, fine-tuning an LLM on proprietary codebases enables more context-aware suggestions, which can significantly streamline complex workflows. The ability to maintain sensitive data locally also reduces exposure to privacy risks, making this option attractive for enterprise developers who need compliance with strict data governance policies.

    However, running large models requires substantial hardware resources — typically multicore CPUs or GPUs with ample memory — making it a choice better suited for those with robust setups or specific performance needs. The trade-off is a powerful, adaptable tool that can provide deep insight and support in coding scenarios.
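
    For a concrete sense of what running a code model locally can look like, here is a short sketch that queries a locally hosted model over HTTP. It assumes an Ollama server is running on its default port with a coding model already pulled; that is just one common setup, and other local runtimes (llama.cpp, LM Studio, and so on) expose similar endpoints:

        # Minimal sketch: ask a locally hosted code model for a completion via
        # Ollama's REST API. Assumes "ollama serve" is running on localhost:11434
        # and a model such as deepseek-coder-v2 has been pulled; adjust to taste.
        import requests

        def local_complete(prompt, model="deepseek-coder-v2"):
            resp = requests.post(
                "http://localhost:11434/api/generate",
                json={"model": model, "prompt": prompt, "stream": False},
                timeout=120,
            )
            resp.raise_for_status()
            return resp.json()["response"]

        print(local_complete("Write a Python function that reverses a string."))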

    What Are SLMs?
    SLMs, or small language models, are lightweight versions of their LLM counterparts. They are designed with fewer parameters, optimizing them for speed and efficiency without sacrificing core capabilities like code completion and simple context handling. They can’t do everything, but what they can do, they do brilliantly.

    The smaller architecture of SLMs also makes them highly efficient for tasks where reduced latency and smaller memory footprints are essential. SLMs are suitable for scenarios such as rapid prototyping, embedded systems development, or working on machines with limited computational resources.

    SLMs are also considered attractive because experts believe phones will be able to run them efficiently in a matter of months. I’ve seen experiments with SLMs using computer vision to read bank statements and submit data to FreshBooks — more use cases like that will emerge.

    While Google, Microsoft and Anthropic are focused on giant models distributed as a service, Apple has emerged as a leader in the open source SLM space. Their OpenELM family is made to be run on mobile devices, and early feedback suggests they have the ability to complete coding tasks efficiently.

    How to Choose the Best Model for Coding
    Selecting the optimal local LLM or SLM for your development needs involves a combination of community insights, empirical benchmarks, and personal testing. Start by exploring community-driven leaderboards, which rank models based on various performance metrics such as speed, accuracy, and parameter efficiency.

    Benchmarks are critical, but they are not one-size-fits-all. Public benchmarks can provide a general overview of a model’s performance in standardized tasks, but the ultimate test is always how a model performs within your specific development environment.

    Running personal benchmarks on tasks typical of your workflows will help identify how well a given model fits your real-world needs — whether that’s generating boilerplate code, debugging legacy applications, or offering context-aware suggestions.
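
    A personal benchmark does not need to be elaborate. The sketch below runs a couple of prompts taken from a typical workflow against two candidate models and prints the latency plus the start of each answer; the model names and the local Ollama endpoint are assumptions, so substitute whatever you actually run:

        # Tiny personal-benchmark harness: time each candidate model on prompts
        # drawn from your own workflow and skim the answers. Model names and the
        # local endpoint are assumptions; adapt them to your own setup.
        import time
        import requests

        PROMPTS = [
            "Generate boilerplate for a Flask REST endpoint that returns JSON.",
            "Explain what this regex does: ^\\d{4}-\\d{2}-\\d{2}$",
        ]
        MODELS = ["deepseek-coder-v2", "qwen2.5-coder:7b"]

        def run(model, prompt):
            resp = requests.post(
                "http://localhost:11434/api/generate",
                json={"model": model, "prompt": prompt, "stream": False},
                timeout=300,
            )
            resp.raise_for_status()
            return resp.json()["response"]

        for model in MODELS:
            for prompt in PROMPTS:
                start = time.time()
                answer = run(model, prompt)
                print(f"{model} | {time.time() - start:5.1f}s | {answer[:80]!r}")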

    Best Local LLMs for Coding
    I know the word “best” holds a lot of weight, but keep in mind that every list like this is subjective. Every benchmark, every test and every use case — they’re all different, so a model that works for you might not be ideal for someone else.

    DeepSeek V2.5
    DeepSeek V2.5 is an open source model that integrates the capabilities of DeepSeek-V2-Chat and DeepSeek-Coder-V2-Instruct, enhancing both general conversational and coding proficiencies. It supports a context length of up to 128K tokens, facilitating extensive dialogue management and complex data processing. This makes it ideal for larger projects.

    It seems to match the coding capabilities of GPT-4o, demonstrating strong and comprehensive coding abilities alongside good general and mathematical skills.

    Qwen2.5 Coder

    The model supports a context length of 128K tokens and is proficient in 92 programming languages. It has achieved top performance on multiple code generation benchmarks, including EvalPlus, LiveCodeBench, and BigCodeBench, and performs comparably to GPT-4o in code repair tasks.

    I also like this model because it comes in various sizes, with versions at 0.5, 1.5, 3, 7, 14 and 32 billion parameters. Hence, even those with lower-specced devices can run it to assist with coding tasks.

    Nxcode-CQ-7B-orpo
    Nxcode-CQ-7B-orpo is a local Large Language Model optimized for coding tasks. It offers balanced performance for simple coding scenarios, providing a lightweight solution for developers seeking efficient code generation and understanding capabilities.

    In contrast with Qwen2.5 and LLaMa 3, Nxcode-CQ-7B-orpo is designed to handle fundamental coding tasks effectively, making it suitable for projects with less complexity. Hence, it’s the best learning assistant for code-related tasks and basics related to JavaScript web development. I found it lackluster when dealing with more complex Three.js animation, for instance.

    OpenCodeInterpreter-DS-33B
    OpenCodeInterpreter-DS-33B is a high-parameter model focusing on advanced code interpretation and dynamic problem-solving, created by a team of Chinese scientists. It excels in understanding complex code structures and generating sophisticated code solutions.

    Artigenz-Coder-DS-6.7B
    Developed by an Indian team, Artigenz-Coder-DS-6.7B is tailored for rapid code prototyping, offering efficient code generation capabilities. While it may not match the robustness of larger models, it provides a practical solution for developers needing quick code drafts and prototyping assistance.

    Downsides of Local LLMs for Coding
    More than anything, local models are limited by hardware. With Nvidia’s top-of-the-line H100 GPU costing up to $40,000 and tech giants hoarding billions of dollars worth of them, there’s no conceivable way any individual or organization can match this computing power. That’s without even mentioning the fact these companies employ the foremost AI engineers and have a head start over entire countries.

    Then, there’s the fact that this data isn’t safe just because it’s on your device. Don’t be surprised to see people becoming more wary of accessing captive portals when connecting to WiFi networks, as hackers will try to steal local LLM data. Remember, this is still a nascent field and there are still vulnerabilities we aren’t even aware of.

    Finally, there’s also the unfortunate fact that Claude 3.5 Sonnet and o1-preview are far ahead of any open source, locally run competition. You can’t beat vast amounts of VRAM and billions of dollars in R&D funds.

    Conclusion
    Many consider local LLMs and SLMs the future of coding assistants. Copilot, ChatGPT and Claude might have tens of billions in financial backing, but you’ll always be at the mercy of someone else’s software, restrictions, censorship and, of course, data center issues.

    Locally hosted models, on the other hand, are completely private and won’t require sharing code with a third party. Furthermore, you’re not at the mercy of the cloud or the limitations of your API budget.

    So, what’s the holdup? Well, these LLMs are not only less impressive from a performance standpoint, but they’re also less intuitive and harder to fine-tune than ready-made, no-code coding assistants like Copilot. Nevertheless, local models are already nearing the performance of mainstream ones, and the likes of Apple and Meta are focusing their efforts on open source. Exciting times are upon us.

    Reply
  15. Tomi Engdahl says:

    Microsoft Introduces Magentic-One, a Generalist Multi-Agent System
    https://www.infoq.com/news/2024/11/microsoft-magentic-one/

    Microsoft has announced the release of Magentic-One, a new generalist multi-agent system designed to handle open-ended tasks involving web and file-based environments. This system aims to assist with complex, multi-step tasks across various domains, improving efficiency in activities such as software development, data analysis, and web navigation.

    Magentic-One uses a multi-agent architecture led by an Orchestrator agent that coordinates four specialized agents: WebSurfer, which handles browser-based tasks such as navigating websites and interacting with online content; FileSurfer, which manages file-related operations, including reading documents and navigating directories; Coder, which writes and analyzes code to create solutions; and ComputerTerminal, which executes code and performs system-level operations.

    The system employs modular design principles, enabling agents to function independently and adapt to new tasks without significant system changes. Built on Microsoft AutoGen, an open-source framework for developing multi-agent systems, Magentic-One is model-agnostic and compatible with different large language models (LLMs), including GPT-4o.
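
    Purely to illustrate the orchestrator-plus-specialists shape described above, here is a schematic Python sketch; it is not the AutoGen or Magentic-One API, the agent stubs only echo what they would do, and the plan is hard-coded where an LLM-driven Orchestrator would produce it:

        # Schematic sketch of an orchestrator routing subtasks to specialist agents,
        # mirroring the WebSurfer / FileSurfer / Coder / ComputerTerminal split
        # described above. Illustrative only; not the AutoGen or Magentic-One API.

        class Agent:
            def handle(self, task: str) -> str:
                raise NotImplementedError

        class WebSurfer(Agent):
            def handle(self, task): return "[web] fetched pages for: " + task

        class FileSurfer(Agent):
            def handle(self, task): return "[files] read documents for: " + task

        class Coder(Agent):
            def handle(self, task): return "[coder] wrote code for: " + task

        class ComputerTerminal(Agent):
            def handle(self, task): return "[terminal] executed: " + task

        class Orchestrator:
            def __init__(self):
                self.agents = {"web": WebSurfer(), "files": FileSurfer(),
                               "code": Coder(), "run": ComputerTerminal()}

            def solve(self, plan):
                # plan: (agent_name, subtask) pairs an LLM-driven planner would produce
                return [self.agents[name].handle(task) for name, task in plan]

        print(Orchestrator().solve([("web", "find the dataset"),
                                    ("code", "write a parser"),
                                    ("run", "execute the parser")]))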

    Reply
  16. Tomi Engdahl says:

    After Shaking Up Search Engine Market With SearchGPT, OpenAI Is Now Gearing Up To Challenge Google With Its Own Web Browser
    https://wccftech.com/after-shaking-up-search-engine-market-with-searchgpt-openai-is-now-gearing-up-to-challenge-google-with-its-own-web-browser/

    Reply
  17. Tomi Engdahl says:

    Top 14 AI Code Generator
    https://dev.to/dev_kiran/top-14-ai-code-generators-8ih

    1. Qodo – First AI Coding Platform (formerly Codium)
    2. v0
    v0 is a generative chat interface with in-depth knowledge on modern web technologies. It can provide technical guidance while building on the web, generate UI with client-side functionality, write and execute code in JavaScript and Python, build diagrams explaining complex programming topics, and more.
    3. Cursor
    Cursor is an AI-powered code editor designed to make software development easier. It is a fork of Visual Studio Code (VS Code), so it looks and feels familiar to VS Code users.

    Reply
  18. Tomi Engdahl says:

    Top 14 AI Code Generator
    https://dev.to/dev_kiran/top-14-ai-code-generators-8ih

    4. GitHub Copilot
    It is an AI-driven code completion assistant developed by GitHub. It enables you to write code faster and more efficiently by providing context-aware code suggestions directly within the editor.

    5. Intellicode
    Microsoft’s IntelliCode is an AI-powered tool designed to make coding faster and easier. It works within Visual Studio and Visual Studio Code to give you intelligent code recommendations based on the specific context of your project.

    6. Sourcegraph Cody
    Cody AI assistant uses the latest LLMs and codebase context to help you understand, write, and fix code faster. This makes it incredibly useful for tasks like troubleshooting, finding dependencies, refactoring code, and even learning a new codebase quickly.

    7. Tabnine
    Tabnine is a smart coding assistant that understands your coding style and helps you complete your code faster and with fewer errors.

    8. Codiga
    Codiga works by analyzing your code in real-time to detect issues, suggest improvements, and enforce coding standards automatically. Codiga integrates directly with popular IDEs and code editors, making it easy to spot and fix potential bugs or inefficiencies as you write.

    9. Replit
    Replit is an online coding platform that allows developers to code, compile, and deploy projects directly from the browser. It supports multiple programming languages and offers built-in collaboration features, making it easy to work with teammates or share projects with others.

    10. DeepCode AI
    Unlike many AI coding tools that rely solely on a single machine learning model, DeepCode AI uses a hybrid approach that combines symbolic AI, generative AI, and machine learning models, all trained on a vast amount of security-specific data.

    11. Figstack
    Figstack is an AI-powered platform that helps developers interpret and understand code more effectively. It provides features such as code explanations, language translation, and function documentation generation, making it easy to dive deep into unfamiliar code.

    12. Mutable AI
    Mutable AI integrates with popular IDEs to provide real-time code suggestions, instant fixes, and even refactoring recommendations based on your project’s context. Beyond autocomplete, MutableAI also automates repetitive tasks, making it easier to refactor, add comments, or adjust code to follow best practices.

    Reply
  19. Tomi Engdahl says:

    https://dev.to/dev_kiran/top-14-ai-code-generators-8ih

    13. Amazon CodeWhisperer
    Amazon CodeWhisperer is a tool developed by AWS to enhance productivity by providing real-time, context-aware code recommendations. Integrated with popular IDEs, CodeWhisperer can suggest entire lines or blocks of code based on your current task, whether you’re working in Python, Java, JavaScript, or other supported languages.

    14. CodeGeeX
    CodeGeeX is an AI-powered code generation tool designed to assist developers in writing, completing, and optimizing code more efficiently. It leverages deep learning models trained on a wide variety of programming languages and codebases, where it can provide context-aware code suggestions, complete code snippets, and even generate entire functions or modules.

    Reply
  20. Tomi Engdahl says:

    https://www.also.com/ec/cms5/fi_5710/1550_deals/article/article_74573.jsp?mc=5710.facebook.paid-social.microsoft-csp.azure-openai-service.is239423.social-ad-cta..fi.also&fbclid=IwY2xjawGsiMBleHRuA2FlbQEwAGFkaWQBqxX4g3IE_gEdP1eSSffdu8lQuB1t5-nRA7Vs0bKuRbE0C0kPgmCQOOeoea_PZjp0vJ-k_aem_nYJoZPbHaWjq3bEv4CupBA&utm_medium=paid&utm_source=fb&utm_id=120213972090980494&utm_content=120213972146250494&utm_term=120213972090990494&utm_campaign=120213972090980494

    Reply
  21. Tomi Engdahl says:

    LILYGO Launches a Dual-Microcontroller Edge AI Smart Camera Slash Tiny Autonomous Car
    With an Espressif ESP32 and a Kendryte K210 on board, this tiny camera can wander around your desk on-command.
    https://www.hackster.io/news/lilygo-launches-a-dual-microcontroller-edge-ai-smart-camera-slash-tiny-autonomous-car-9527371f68ed

    Reply
  22. Tomi Engdahl says:

    This AI Agent Will Defend You From Cyber Attacks
    https://www.forbes.com/sites/gilpress/2024/11/20/this-ai-agent-will-defend-you-from-cyber-attacks/

    Coming out of stealth, cybersecurity startup Twine announced today $12 million in seed funding, co-led by Ten Eleven Ventures and Dell Technologies Capital, with participation from angel investors including the founders of Wiz. Twine plans to address cybersecurity’s critical talent shortage by developing AI agents or “digital employees” to augment companies’ security teams. Alex, Twine’s first digital employee, is an expert in identity and access management or IAM.

    Twine is the first company to develop and sell cyber security digital employees. One of the first examples of a digital employee has been Amelia, developed by IPsoft ten years ago. With today’s generative AI and LLMs, AI agents have skills that go beyond basic tasks, working as digital employees in a variety of departments. 11x.ai, for example, focuses on automating sales and marketing operations and Alchemyst AI on sales development.

    “The primary LLM that we are currently using is OpenAI’s GPT4, and as experienced cybersecurity practitioners, all our products are built with security in mind through the entire architecture, including AI models along with the data associated with them,” says Porat. “Twine uses a separation of duty framework, keeping the agentic decision making from the data and deterministic functions.”

    According to Porat, Alex becomes operational within minutes. However, as a new employee, during the initial phase of onboarding to the organization, Alex primarily handles basic tasks. “Just like any human employee, the more access and trust given — the more Alex can do across the organization,” says Porat. Alex—and Twine’s future versions of digital employees—learn not only the specific corporate environment in which they are deployed but continuously update themselves with new knowledge and best practices related to their specific cybersecurity expertise.

    Reply
  23. Tomi Engdahl says:

    ChatGPT Exposes Its Instructions, Knowledge & OS Files
    According to Mozilla, users have a lot more power to manipulate ChatGPT than they might realize. OpenAI hopes those manipulations remain within a clearly delineated sandbox.
    https://www.darkreading.com/cloud-security/chatgpt-exposes-instructions-knowledge-os-files

    Reply
  24. Tomi Engdahl says:

    Google Uses AI to Discover 20-Year-Old Software Bug
    AI unearths 26 new vulnerabilities in open-source software projects, including a bug in OpenSSL.
    https://uk.pcmag.com/security/155430/google-uses-ai-to-discover-20-year-old-software-bug

    Google recently used an AI program to help it discover a software bug that’s persisted in an open-source software project for the past two decades.

    The software bug is among 26 vulnerabilities Google recently identified with the help of a ChatGPT-like AI tool, the company said in a blog post on Wednesday.

    Reply
  25. Tomi Engdahl says:

    Laura Rautjoki makes art together with artificial intelligence and is a rarity in the art world
    AI images have stormed into social media, but their use in art is still marginal. Ethical questions are also a source of concern.
    https://yle.fi/a/74-20123498

    Reply
  26. Tomi Engdahl says:

    World’s first open source Software Testing Agent
    Effortless, autonomous test automation: zero coding, zero maintenance, infinite possibilities.
    https://testzeus.com/hercules

    Wait, what’s an AI agent again?
    “Agent = LLM + Memory + Planning skills + Tool use”

    Lilian Weng
    @lilianweng

    Easy to Start. Simple to Scale.
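
    Read literally, the formula quoted above maps onto a very small amount of structure. The sketch below composes those four ingredients into one schematic object; the LLM is stubbed out with a lambda and the two tools are invented for illustration:

        # The four ingredients of "Agent = LLM + Memory + Planning skills + Tool use"
        # composed into one schematic object. The LLM is a stub and the tools are
        # invented; a real testing agent would wire in an actual model and browser.

        class MiniAgent:
            def __init__(self, llm, tools):
                self.llm = llm          # callable: prompt -> text (stubbed below)
                self.memory = []        # running history of what the agent has done
                self.tools = tools      # tool name -> callable

            def plan(self, goal):
                # Planning: ask the "LLM" which tool fits the goal.
                return self.llm("Which tool solves: " + goal + "? Options: "
                                + ", ".join(self.tools))

            def act(self, goal):
                tool_name = self.plan(goal)
                result = self.tools[tool_name](goal)
                self.memory.append((goal, tool_name, result))
                return result

        agent = MiniAgent(llm=lambda prompt: "click",   # stub LLM always picks "click"
                          tools={"click": lambda g: "clicked for: " + g,
                                 "type": lambda g: "typed for: " + g})
        print(agent.act("press the submit button"))
        print(agent.memory)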

    Reply
  27. Tomi Engdahl says:

    Your next AI project just got 100x easier
    Welcome to Langflow, the visual IDE that makes building powerful RAG and multi-agent AI apps 100x easier!
    https://astra.datastax.com/signup?type=langflow

    Reply
  28. Tomi Engdahl says:

    ChatGPT is two years old
    https://etn.fi/index.php/13-news/16905-chatgpt-on-kaksivuotias

    Artificial intelligence has changed our world in many ways. ChatGPT turned two years old on Saturday, and cybersecurity company Check Point has compiled a summary of what has happened during those two years. These two years have been full of rapid development and growth, but also of the emergence of entirely new kinds of challenges.

    ChatGPT was launched at the end of 2022 and reached one million users in just five days, an unprecedented achievement. The user base has since grown to a staggering 200 million, and AI has become an integral part of everyday life for businesses and individuals alike. ChatGPT’s ability to respond quickly, fluently, and in context has revolutionized several industries and helped people solve complex problems in record time.

    Generative AI has improved workflows, sped up projects, and made it possible to complete previously time-consuming tasks in minutes. In cybersecurity, for example, ChatGPT has helped companies analyze large volumes of data, identify threats, and respond to them more effectively. At the same time, it has also opened new opportunities for cybercriminals, who can use AI to craft realistic phishing messages or develop malware.

    Although generative AI has changed the way people work and operate, it has also brought significant risks. Check Point notes that responsible use of AI is the key to harnessing it safely in the future. Training employees, protecting data, and setting clear rules for AI use are essential so that the technology can drive progress without harmful side effects.

    ChatGPT’s story is only beginning, and its potential to shape how people and businesses operate is enormous. The most important thing is to make sure that potential is used responsibly.

    Reply
  29. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Elon Musk files for an injunction to halt OpenAI’s for-profit transition, alleging that OpenAI is discouraging investors from backing rivals like xAI, and more — Attorneys for tech billionaire Elon Musk have filed for a preliminary injunction against OpenAI, several of its co-founders …
    https://techcrunch.com/2024/11/30/elon-musk-files-for-injunction-to-halt-openais-transition-to-a-for-profit/

    Reply
  30. Tomi Engdahl says:

    Maria Deutscher / SiliconANGLE:
    AWS adds new generative AI features to Amazon Connect, which helps companies run their contact centers, including letting Lex-powered assistants use Amazon Q — Amazon Web Services Inc. is adding more artificial intelligence features to its Amazon Connect service, which helps companies run their contact centers more efficiently.

    AWS upgrades Amazon Connect with new generative AI features
    https://siliconangle.com/2024/12/01/aws-upgrades-amazon-connect-new-generative-ai-features/

    Reply
