Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”
6,742 Comments
Tomi Engdahl says:
Benj Edwards / Ars Technica:
Researchers unveil Genesis, an open-source generative physics engine that trains robots in simulated reality 430K times faster than in the real world — On Thursday, a large group of university and private industry researchers unveiled Genesis, a new open source computer simulation system …
New physics sim trains robots 430,000 times faster than reality
https://arstechnica.com/information-technology/2024/12/new-physics-sim-trains-robots-430000-times-faster-than-reality/
“Genesis” can compress training times from decades into hours using 3D worlds conjured from text.
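A quick sanity check on that claim (illustrative arithmetic only; the 430,000x figure is taken from the article, everything else here is assumed):

```python
# Back-of-the-envelope check: at a 430,000x speedup, how much
# wall-clock time does a decade of simulated experience take?
SPEEDUP = 430_000  # simulated seconds per real second, per the article

def wall_clock_hours(sim_years: float, speedup: float = SPEEDUP) -> float:
    """Real hours needed to produce `sim_years` of simulated experience."""
    sim_seconds = sim_years * 365.25 * 24 * 3600
    return sim_seconds / speedup / 3600

print(f"10 simulated years take ~{wall_clock_hours(10):.2f} real hours")
```

So a decade of experience compresses to roughly 12 minutes, and several decades fit comfortably within hours, consistent with the headline.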
Tomi Engdahl says:
Malcolm Owen / AppleInsider:
Apple researchers say the company’s open-source ReDrafter method on Nvidia GPUs led to a 2.7x speed increase in generated tokens per second for greedy encoding
https://appleinsider.com/articles/24/12/19/apple-nvidia-collaboration-triples-speed-of-ai-model-production
Tomi Engdahl says:
Marina Temkin / TechCrunch:
Sources: Anysphere, developer of GitHub Copilot rival Cursor, raised a $100M Series B at a $2.6B valuation, up from a $400M valuation in August 2024 — Anysphere, the developer of AI-powered coding assistant Cursor, raised $100 million Series B at a post-money valuation of $2.6 billion, according to sources with knowledge of the deal.
In just 4 months, AI coding assistant Cursor raised another $100M at a $2.5B valuation led by Thrive, sources say
https://techcrunch.com/2024/12/19/in-just-4-months-ai-coding-assistant-cursor-raised-another-100m-at-a-2-5b-valuation-led-by-thrive-sources-say/
Anysphere, the developer of AI-powered coding assistant Cursor, raised a $100 million Series B at a post-money valuation of $2.6 billion, according to sources with knowledge of the deal. The round is being led by returning investor Thrive Capital, the sources said.
This new funding comes just four months after Anysphere raised its $60 million Series A at a $400 million valuation from Thrive and Andreessen Horowitz—a16z also participated in the latest round, but didn’t co-lead it this time.
Thrive declined to comment and the company and a16z have not responded to our request for comment.
Last month, TechCrunch reported that investors, including Index Ventures and Benchmark, were falling over themselves for a chance to back the company. But apparently, Anysphere is growing so fast that its existing VCs couldn’t pass up the opportunity to double down on the bet, even if it meant a staggering 6.5-times leap in valuation over a round completed just a few months ago. The interest in backing the company, and who would win the deal, has been widely watched by Valley insiders and was flagged by an X account called Arfur Rock.
The market for AI-powered coding assistants is crowded with options such as Augment, Codeium, Magic, and Poolside, as it is one of the areas where AI has found a solid, revenue-generating lane. None of these tools are as popular with developers as Cursor, though they are also chasing Microsoft’s GitHub Copilot, which just raised the stakes by launching a free version.
Tomi Engdahl says:
Jess Weatherbed / The Verge:
Meta teases an AI editing tool powered by Movie Gen AI in Instagram that can change outfits, backgrounds, and more in videos using a text prompt, coming in 2025 — Instagram is planning to introduce a generative AI editing feature next year that will allow users to “change nearly any aspect of your videos.”
Instagram teases AI editing tools that will completely reimagine your videos
/ The feature uses Meta’s Movie Gen AI model and is set to arrive next year.
https://www.theverge.com/2024/12/19/24325015/instagram-ai-video-editing-tool-meta-movie-gen-teaser
Instagram is planning to introduce a generative AI editing feature next year that will allow users to “change nearly any aspect of your videos.” The tech is powered by Meta’s Movie Gen AI model according to a teaser posted by Instagram head Adam Mosseri, and aims to provide creators with more tools to help transform their content and bring their ideas to life without extensive video editing or manipulation skills.
Mosseri says the feature can make adjustments using a “simple text prompt.” The announcement video includes previews of early research AI models that change Mosseri’s outfit, background environments, and even his overall appearance — in one scene transforming him into a felt puppet. Other changes are more subtle, such as adding new objects to the existing background or a gold chain around Mosseri’s neck without altering the rest of his clothing.
Tomi Engdahl says:
New York Times:
Sources: the US’ proposed AI chip framework would block adversaries entirely, and give others quotas based on their US alignment, threatening Nvidia’s expansion — The chipmaker expects more than $10 billion in foreign sales this year, but the Biden administration is advancing rules that could curb that growth.
https://www.nytimes.com/2024/12/19/technology/nvidia-chip-sales-us-china.html?unlocked_article_code=1.ik4.DX1n.j-5igq-ZwMaO&smid=nytcore-ios-share&referringSource=articleShare
Tomi Engdahl says:
Robert Booth / The Guardian:
A coalition of newspapers, writers, movie producers, and others rejects the UK’s plans to create a copyright exemption for AI companies to train their models
UK arts and media reject plan to let AI firms use copyrighted material
https://www.theguardian.com/technology/2024/dec/19/uk-arts-and-media-reject-plan-to-let-ai-firms-use-copyrighted-material
Exclusive: Coalition of musicians, photographers and newspapers insist existing copyright laws must be respected
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
Anthropic demonstrates “alignment faking” in Claude 3 Opus to show how developers could be misled into thinking an LLM is more aligned than it may actually be
New Anthropic study shows AI really doesn’t want to be forced to change its views
https://techcrunch.com/2024/12/18/new-anthropic-study-shows-ai-really-doesnt-want-to-be-forced-to-change-its-views/
AI models can deceive, new research from Anthropic shows. They can pretend to have different views during training when in reality maintaining their original preferences.
There’s no reason for panic now, the team behind the study said. Yet they said their work could be critical in understanding potential threats from future, more capable AI systems.
“Our demonstration … should be seen as a spur for the AI research community to study this behavior in more depth, and to work on the appropriate safety measures,” the researchers wrote in a post on Anthropic’s blog. “As AI models become more capable and widely-used, we need to be able to rely on safety training, which nudges models away from harmful behaviors.”
The study, which was conducted in partnership with AI research organization Redwood Research, looked at what might happen if a powerful AI system were trained to perform a task it didn’t “want” to do.
Alignment faking in large language models
https://www.anthropic.com/research/alignment-faking
Tomi Engdahl says:
Sam Altman Says the Main Thing He’s Excited About Next Year Is Achieving AGI
https://futurism.com/the-byte/sam-altman-excited-agi-2025?fbclid=IwY2xjawHSAqhleHRuA2FlbQIxMQABHREtOusP-Q8GkvQlFnyI0Hyl8kwd49zGyLncK9XFCAlu4fdiVKiJhw5_cg_aem_D5_kYZe5VHgiFLmjRYFTqw
He mentioned achieving AGI before mentioning that he’s becoming a parent soon.
Many people’s New Year resolutions involve going to the gym or budgeting better — but for OpenAI CEO Sam Altman, the new year means ushering in the singularity.
Tomi Engdahl says:
Python’s popularity is unprecedented
https://etn.fi/index.php/13-news/16980-pythonin-suosio-on-ennennaekemaetoen
Python has risen to become the dominant programming language of 2024, achieving a record 10% growth in popularity ratings over the year. This makes it the clear winner in the TIOBE index, which measures the popularity of programming languages worldwide.
Second- and third-placed Java (+1.73%) and JavaScript (+1.72%) were left well behind Python, even though their popularity also grew. Python’s 10% jump is exceptional, however, and underlines the language’s position as the leading light of the programming world.
Python’s popularity rests on its versatility, ease of use, and strong ecosystem. Python is the dominant language in AI and data analytics applications. Its extensive libraries, such as TensorFlow, PyTorch, and Pandas, have made it an indispensable tool for researchers and engineers.
Python’s simple syntax and clear structure make it an excellent choice for beginning programmers. This has helped it expand its user base rapidly.
Python also has one of the largest and most active programming communities, which guarantees that help and resources are always available.
Although Python’s popularity is unprecedented, its growth may slow in the coming years. Demand for fast languages such as C++ and Rust is growing, especially in performance-critical applications. It has also been speculated that the AI hype may level off in the near future, which could affect Python’s usage.
Tomi Engdahl says:
Edge AI: The Future of Artificial Intelligence in embedded systems
https://www.embedded.com/edge-ai-the-future-of-artificial-intelligence-in-embedded-systems/
Artificial Intelligence (AI) has revolutionized many industries, enabling applications that seemed unlikely just a few years ago. At the same time, the exponential growth of data and the need for real-time responses have led to the emergence of a new paradigm: edge AI. This type of technology is essential for the implementation of distributed systems, where data processing must occur as close to the point of origin as possible to minimize delays and improve security and privacy. The revolutionary approach of edge AI promises to transform the way embedded systems manage processing and workloads, moving AI from the cloud to the edge of the network.
Edge AI simplifies data processing
Traditionally, AI has relied on the cloud to process large amounts of information, since complex models require significant computational resources that are often not available on edge devices. In a classic architecture, data collected by sensors or other embedded devices is sent directly to the cloud, where it is processed by sophisticated models. The results of this processing are then transmitted back to edge devices to make decisions or perform specific actions. This approach, while effective, has some important limitations. First, the latency introduced by data transfer between the device and the cloud can be significant, especially in critical applications such as healthcare monitoring or autonomous driving, where every millisecond counts. Second, sending data to the cloud raises privacy and security concerns, as sensitive data can be vulnerable during transfer or storage.

Edge AI aims to overcome these limitations by bringing processing closer to the source, directly on embedded devices. This dramatically reduces latency, as data no longer has to travel back and forth between the device and the cloud, and improves privacy and security: instead of sending large amounts of raw data to the cloud, these systems can process and analyze sensitive data locally, without it ever leaving the device. According to estimates, global spending on edge computing is expected to exceed $200 billion in 2024, up 15.4% from the previous year.

Embedded devices like microcontrollers don’t have the computing power of a data center, but with advances in AI algorithm efficiency and specialized hardware, it’s now possible to run models on these devices. New chips designed specifically for edge AI, such as neural processing units (NPUs) integrated into microcontrollers, are making it increasingly practical to implement models in embedded systems. Edge AI not only reduces latency and improves security, but also has the potential to reduce operating costs.
Cloud processing comes with significant costs associated with bandwidth, storage, and computational power. By moving some of the processing to the edge, it’s possible to reduce the load on the cloud and, therefore, costs. This is especially beneficial in applications involving large numbers of distributed devices, such as industrial sensor networks or smart cities, where the cost of sending data to the cloud can become prohibitive.

Another area where edge AI is having a significant impact is the Internet of Things (IoT), where millions of interconnected devices collect and transmit data in real time. Edge AI enables these devices to make autonomous decisions without having to rely on the cloud for every single operation. For example, in an environmental monitoring system, sensors can analyze data on-site to detect anomalies or dangerous conditions and send only the relevant information to the cloud for further analysis. This not only reduces the volume of data transmitted but also allows faster reactions to critical events.

The automotive sector is another example where edge AI is making a difference. In autonomous vehicles, processing speed is crucial: edge AI allows vehicles to process data from sensors, such as cameras and lidars, directly on board, without having to send it to the cloud for centralized processing. This reduces latency and allows the vehicle to react quickly to unexpected situations, significantly improving the safety and reliability of the system.
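The environmental-monitoring pattern described above (analyze locally, forward only what matters) can be sketched in a few lines. This is an illustrative toy, not code from the article; the threshold rule and data are invented:

```python
# Hedged sketch: an edge node that processes raw sensor readings
# locally and forwards only anomalies upstream, illustrating how
# edge AI cuts the volume of data sent to the cloud.
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away
    from the rolling mean of the previous `window` samples."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append((i, readings[i]))  # only this goes upstream
    return anomalies

# 200 ordinary readings around 20.0 plus one spike the cloud should see
data = [20.0 + 0.1 * ((i * 7919) % 13 - 6) for i in range(200)]
data[150] = 35.0
print(detect_anomalies(data))  # only the spike at index 150 is reported
```

Out of 200 raw samples, one tuple leaves the device; everything else is handled locally.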
Tomi Engdahl says:
Is there anyone who works with artificial intelligence in embedded systems?
https://www.reddit.com/r/embedded/comments/chio9w/is_there_anyone_who_works_with_artificial/
Tomi Engdahl says:
Machine learning on embedded devices
Focused primarily on running the inference/prediction/feed-forward part on a microcontroller (or small embedded device). The training phase can run on a standard computer/server, using existing tools as much as possible.
https://github.com/jonnor/embeddedml/blob/master/README.md#machine-learning-on-embedded-devices
Tomi Engdahl says:
Introduction to Embedded Systems: Easy-to-Use AI
https://www.youtube.com/watch?v=1lXNFks23qE
What are embedded systems? This video will cover easy-to-use AI-embedded systems. Learn the constraints that are typical to embedded systems. We address these constraints by reducing the complexity of the deep net inference engine. We achieve that by minimizing the intra-network connectivity, eliminating the need for floating-point data, and replacing the multiply-accumulate operation with just accumulation.
Table of Contents:
0:00 – Intro
6:47 – The glut
12:25 – Predict faults in the track
15:51 – AI for smart sensor
20:29 – Train and simplify
21:54 – Infxl’s ML
23:00 – Use cases
28:52 – Infxl ML on FPGA
32:18 – Infxl inference engines
34:19 – Tradeoffs
36:09 – Confidential machine learning
38:25 – Infxl Net
41:05 – Infxl ML
53:58 – Conclusion
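The simplifications the talk describes, eliminating floating point and replacing multiply-accumulate with plain accumulation, can be illustrated with a toy neuron: if weights are constrained to +1/-1 and inputs are integers, a dot product needs only adds and subtracts. (A hedged sketch; the values and network are made up, not Infxl’s actual design.)

```python
def neuron_mac(inputs, weights):
    """Conventional multiply-accumulate."""
    return sum(x * w for x, w in zip(inputs, weights))

def neuron_accumulate(inputs, sign_weights):
    """Same result for +/-1 weights using only addition and subtraction."""
    acc = 0
    for x, w in zip(inputs, sign_weights):
        acc += x if w > 0 else -x  # accumulate, never multiply
    return acc

inputs = [12, -3, 7, 5]   # integer sensor features
weights = [1, -1, 1, -1]  # binarized weights

assert neuron_mac(inputs, weights) == neuron_accumulate(inputs, weights)
print(neuron_accumulate(inputs, weights))  # 12 + 3 + 7 - 5 = 17
```

On hardware, dropping the multiplier and the floating-point unit is exactly what makes such inference engines cheap enough for small embedded targets.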
Tomi Engdahl says:
How to Use Generative AI to Write and Run Embedded Code
https://www.youtube.com/watch?v=1AxiwUROvEY
In today’s tutorial, Engineer Ari Mahpour teaches you how to leverage Generative AI to write and run embedded code for hardware testing. This guide shows you step-by-step methods to integrate AI into your embedded systems, making your testing process more efficient and effective.
What You Will Learn:
• Integrating AI for automated code writing and execution.
• Practical examples using Arduino and other hardware.
• Troubleshooting and optimizing your setup for real-world applications.
Tomi Engdahl says:
Amazon Q Developer
The most capable generative AI–powered assistant for software development
https://aws.amazon.com/q/developer/?gclid=EAIaIQobChMI-7ae-Z22igMVyxCiAx2hmzmGEAAYASAAEgL9ZfD_BwE&trk=45c2a23c-23fc-4f2e-b28a-4d8b6ef62ce7&sc_channel=ps&ef_id=EAIaIQobChMI-7ae-Z22igMVyxCiAx2hmzmGEAAYASAAEgL9ZfD_BwE:G:s&s_kwcid=AL!4422!3!698133085581!p!!g!!ai%20coding!21048268977!166963732772
Amazon Q Developer is available for use in your code editor. Download a plugin or extension below and get started on the Amazon Q Developer Free Tier in a few minutes.
See installation instructions
https://aws.amazon.com/q/developer/getting-started/#ide
https://aws.amazon.com/q/developer/pricing/
Amazon Q Developer is available in Visual Studio, Visual Studio Code (VS Code), Eclipse (preview) and the JetBrains family of integrated development environments (IDEs).
Tomi Engdahl says:
Amazon Q Developer agents accelerate large-scale enterprise workload transformations, including .NET porting from Windows to Linux, mainframe application modernization, VMware workload migration and modernization, and Java upgrades to streamline processes and reduce costs.
https://aws.amazon.com/q/developer/?gclid=EAIaIQobChMI-7ae-Z22igMVyxCiAx2hmzmGEAAYASAAEgL9ZfD_BwE&trk=45c2a23c-23fc-4f2e-b28a-4d8b6ef62ce7&sc_channel=ps&ef_id=EAIaIQobChMI-7ae-Z22igMVyxCiAx2hmzmGEAAYASAAEgL9ZfD_BwE:G:s&s_kwcid=AL!4422!3!698133085581!p!!g!!ai%20coding!21048268977!166963732772
Upgrade from Java 8 to Java 17
Amazon Q Developer helps you get the most from your data to easily build analytics, AI/ML, and generative AI applications faster. Create queries using natural language, get coding help for data pipelines, design ML models, and collaborate on AI projects with built-in data governance.
Try Amazon Q Developer free with the AWS Free Tier. The Amazon Q Developer Free Tier gives you 50 chat interactions per month. You can also use it to develop software 5 times per month or transform up to 1,000 lines of code per month. To learn about pricing and the Amazon Q Developer Free Tier, visit Amazon Q Developer pricing.
IDE
Amazon Q Developer provides inline code suggestions, vulnerability scanning, and chat in popular integrated development environments (IDEs), including JetBrains, IntelliJ IDEA, Visual Studio, VS Code, and Eclipse (preview).
CLI
Get CLI autocompletions and AI chat in your favorite terminal (locally and over Secure Shell).
AWS CONSOLE
Want extra help in the console? Open the Amazon Q panel and you’ve got it—even in the AWS Console Mobile Application for iOS and Android.
GITLAB
Use GitLab Duo with Amazon Q to accelerate team productivity and development velocity using the GitLab workflows you already know.
https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/q-language-ide-support.html
Language support for inline suggestions
Amazon Q supports inline code suggestions for multiple programming languages. The accuracy and quality of the code generation for a programming language depends on the size and quality of the training data.
In terms of the quality of the training data, the programming languages with the most support are:
C
C++
C#
Dart
Go
Java
JavaScript
Kotlin
Lua
PHP
PowerShell
Python
R
Rust
SQL
Swift
SystemVerilog
TypeScript
The Infrastructure as Code (IaC) languages with the most support are:
JSON
YAML
HCL (Terraform)
CDK (Typescript, Python)
Amazon Q also supports code generation for:
Ruby
Shell
Scala
Language support for transformations
The supported languages for transformation depend on the environment where you are transforming code.
JetBrains IDEs and Visual Studio Code – Java and SQL
Visual Studio – C# in .NET applications
For more information about supported languages and other prerequisites for transformation, see the topic for the type of transformation you are performing.
Tomi Engdahl says:
Embedded AI Coder is a code generator tool that effortlessly converts trained neural networks into optimized C-code for a variety of microcontrollers and microprocessors. Our tool ensures exceptional speed and efficient memory usage, empowering developers to seamlessly integrate AI capabilities into their products.
https://www.etas.com/en/products/embedded-ai-coder.php
Our Embedded AI Coder builds a bridge between the AI and hardware worlds, making it possible to generate embedded C code without the help of embedded software experts. Compared with handwritten code, it saves companies significant development costs and resources. This is particularly advantageous given today’s typically rapid development cycles and frequently changing code requirements.
Client use cases with our Embedded AI coder
In the automotive industry:
Cost savings through virtual sensors (i.e., AI algorithms that replace physical sensors) in braking systems, steering systems, engine management, etc.
Tire pressure monitoring
Early damage detection
Driving assistance systems, for example ultrasonic parking sensors
Driver monitoring systems (mandatory by law from July 2024)
Tomi Engdahl says:
How AI is Impacting Embedded Software Development
https://softwaremind.com/blog/how-ai-is-impacting-embedded-software-development/
Artificial Intelligence (AI) and Machine Learning (ML) have been hot topics recently, since OpenAI released ChatGPT to a wide audience for free. While mainstream media focus on large language models, some people are wondering whether AI can also be applied to resource-limited hardware, like microcontrollers and embedded devices in general. You may think this is ridiculous, since ML requires tons of data and a lot of computing power to train a neural network. This is true, but once you train your model, you can deploy it to a really tiny and energy-efficient device. Read on to learn about possible platforms for edge computing.
Nvidia Jetson Nano
Nvidia also has its own ML-dedicated chips – the Jetson series. Just looking at the numbers is impressive here – the most powerful Jetson AGX Orin offers 275 TOPS, but the Jetson Nano is the better comparison to the Coral. You can also find a Jetson Nano Developer Kit to quickly get started with the modules. While both vendors are easy to work with, Nvidia is about double the price at $150 USD for a development board. That said, it is also more versatile – it runs Linux with PyTorch and CUDA support enabled onboard, so you can run models there easily, without any specific deployment procedure. When using this platform you can still connect external sensors via serial protocols like SPI or I2C, but you have to remember that Linux runs here, so access to peripherals is not as convenient. Jetson can also be used for tasks beyond edge computing, as it has a powerful general-purpose GPU, so any calculations will be boosted.
Google Coral Dev Board Micro
Though not the first of its kind, the Google Coral Dev Board Micro is one of the smallest and cheapest ($80 USD) standalone boards dedicated to AI. It consists of several major components – the heart is the NXP i.MX RT1176, an ARM Cortex-based host controller. The Coral Edge TPU (tensor processing unit) coprocessor is connected to the host processor via a USB interface. On the board, you can also find a microphone and camera, used for collecting data, along with secure elements and some extra flash and RAM memory. Let’s take a closer look at the Coral TPU module, which is the star attraction here. Google claims that the peak computing power of this ML accelerator is 4 TOPS, which stands for 4 trillion operations per second. This is an edge device, meaning all the AI magic is done inside the device itself, without the need to send data to the cloud, have it processed, and then sent back. If your use case needs more connectivity, there are simple click-on extension boards for this module, including BLE, WiFi, or PoE (Power over Ethernet).
You can use a library of pre-trained models from Google, or build your own, train it, and deploy it to the Google Coral Micro. Using a pre-trained model prepared by Google enables you to start quickly and see what the hardware is capable of. Later, when building your own project, you can use Python or C/C++ to create your own applications (if you are not planning to use Linux on your host system, you can use C/C++ only). This board is really designed for low-level embedded developers, so you can find an API for FreeRTOS and drivers for communication protocols like SPI or I2C (for the camera and microphone as well), which makes it easy to adapt to custom input data.
STM32
It may be surprising, but you don’t need to buy an expensive, fancy development board from leading AI companies to start playing with edge computing. Using TensorFlow Lite or STM32CubeAI, you can generate platform-optimized code that can be deployed on hardware as old as 10-15 years (e.g. STM32F4 family). This approach is of course less powerful than more recent platforms (like those mentioned above), but it’s way cheaper and may be enough for strictly embedded applications, where you just want to replace human-written logic with an AI-based one, without changing the rest of the features. It might be tempting for vendors that are already using such microcontroller units (MCUs) in their products and could possibly benefit from the firmware update. Real-life use cases demonstrate how predictive maintenance can be effectively used in factories and how machine learning can be used to classify motor faults.
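Conceptually, what tools like TensorFlow Lite or STM32CubeAI do to fit a model on such an MCU is quantization, mapping float weights to small integers so the model runs without a floating-point unit. A minimal sketch of symmetric int8 post-training quantization (illustrative only, not the actual STM32CubeAI pipeline; the weights are made up):

```python
# Hedged sketch: symmetric int8 quantization of a float weight tensor.
# The scale is chosen from the tensor's maximum absolute value;
# inference dequantizes on the fly.

def quantize_int8(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.82, -1.27, 0.05, 0.4]
q, scale = quantize_int8(w)
print(q)  # small integers that fit in one byte each
approx = dequantize(q, scale)
print(max(abs(a - b) for a, b in zip(w, approx)))  # small rounding error
```

Each weight shrinks from 4 bytes to 1, and the arithmetic becomes integer-only, which is why decade-old MCUs can suddenly run neural networks.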
Raspberry Pi and other mini-computers
You can tackle the Edge AI topic from many angles, and the next one is just starting with a simple tiny computer like Raspberry Pi, or other similar boards on the market. Some of them have graphics processing unit (GPU) acceleration built-in, some (like new Raspberry 5) have a PCI-e port for connecting TPU accelerators (e.g. Coral PCI-e). The advantage of this approach is versatility, as you can use Raspberry in many ways, so even if edge computing is not your thing, you can still build awesome stuff using this platform.
Conclusion
AI is here to stay, and for companies trying not to be left behind, both Google and Nvidia solutions are impressive options. While Google focuses more on processing video and sound, Nvidia seems more versatile and shares open-source projects based on its modules so people can get to know the ecosystem. You can still use an old-fashioned bare-metal MCU as a base for edge computing as well. In the coming years, we will probably see dynamic development in the edge computing field.
Tomi Engdahl says:
https://m.economictimes.com/small-biz/sustainability/ireland-embraced-data-centres-that-the-ai-boom-needs-now-theyre-consuming-too-much-of-its-energy/articleshow/116490097.cms?fbclid=IwY2xjawHSN3ZleHRuA2FlbQIxMQABHWXYY-b370VcKbYomZSEzgvRrnw5R2nP2l5UNvW_ItW8flVElsw6MvMNzw_aem_f8Ku67yaqxImxtSkFPiMcQ
Tomi Engdahl says:
Journalism group urges Apple to disable AI summaries after fake headline incident
Apple has yet to respond
https://www.techspot.com/news/106034-journalism-group-urges-apple-disable-ai-summaries-after.html?fbclid=IwZXh0bgNhZW0CMTEAAR2zlw1deE2y-R2ZQSUF3AcrXasyyFzUh3HMdmb9B3DbRp2dncoqtBqWEi0_aem_mNGOtzC3kh4wZQlMBhAMgg
Just days after Apple’s AI-powered notification summary tool pushed out a false BBC headline about Luigi Mangione, a major trade body is urging the company to remove the feature completely. It marks the latest setback in Apple’s attempts to convince its customers that its AI is worth using.
On December 13, Apple Intelligence, which has a history of making significant errors when it comes to summarizing notifications, pushed out a summary of several BBC headlines that included the claim Mangione had shot himself
Tomi Engdahl says:
Embedded & AI
https://www.elektormagazine.com/embedded-ai
Tomi Engdahl says:
Artificial Intelligence for Embedded Systems
Embedded AI – Artificial Intelligence for microcontrollers and embedded systems
https://www.ims.fraunhofer.de/en/Business-Unit/Industry/Industrial-AI/Artificial-Intelligence-for-Embedded-Systems-AIfES.html
Implement AI on Any Hardware With AIfES
Machine Learning for Embedded Systems with AIfES®
With the open source AI software framework AIfES (Artificial Intelligence for Embedded Systems) you can run and even train artificial neural networks (ANN) on virtually any hardware. Tiny embedded systems plus artificial intelligence (AI) – the topic of our time.
Open Source AI Framework
The first open source AI framework “Made in Germany”, developed as a Maker project at the Fraunhofer Institute for Microelectronic Circuits and Systems IMS. AIfES® is comparable and compatible with well-known Python ML frameworks like TensorFlow, Keras or PyTorch. In the current version, Feedforward Neural Networks (FNN) and Convolutional Neural Networks (ConvNet) are supported, which can be configured completely freely. Common activation functions such as ReLU, Sigmoid or Softmax are also already integrated.
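For intuition, what such a freely configurable FNN computes can be written out in a few lines: layers of weighted sums followed by an activation like ReLU. (A framework-free Python sketch with made-up weights; AIfES’s own C API differs.)

```python
# Hedged sketch of a tiny feedforward network: 2 inputs -> 2 hidden
# units (ReLU) -> 1 output, with hand-picked weights that compute XOR.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """One fully connected layer: out[j] = sum_i inputs[i]*weights[j][i] + biases[j]."""
    return [sum(x * w for x, w in zip(inputs, unit)) + b
            for unit, b in zip(weights, biases)]

W1 = [[1.0, 1.0], [1.0, 1.0]]  # W1[j] holds the input weights of hidden unit j
b1 = [0.0, -1.0]
W2 = [[1.0, -2.0]]
b2 = [0.0]

def forward(x):
    h = relu(dense(x, W1, b1))
    return dense(h, W2, b2)[0]

print([round(forward([a, b])) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

The same loops translate directly to C with static arrays, which is what makes this class of model practical on microcontrollers.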
Tiny ML
AIfES® can be used on almost any system, be it a microcontroller, IoT device, Raspberry PI, PC or smartphone, making the purchase of new hardware redundant. However, the focus is particularly on running AI on simple microcontrollers and small IoT devices, so-called »tinyML«. Tiny, self-learning, battery-powered devices can process sensor data where it occurs, independent of a cloud or other devices. The data is stored on the device and processing takes place without transmission delay, with significantly lower energy consumption compared to a PC.
AIfES® for Arduino®
A version compatible with the Arduino® IDE of AIfES® has been realized, which can be run on almost any Arduino board.
The Repository where you can download AIfES
The Converter to create AIfES models for direct use in the Arduino IDE or other IDEs
A playful tic-tac-toe-demonstrator
Benchmark
AIfES was specifically designed to overcome the challenges of traditional Edge AI. The AI software framework from Fraunhofer IMS enables the integration of machine learning into the smallest embedded devices. This allows the highest flexibility in hardware selection and the integration of customized hardware accelerators.
In the paper »AIfES: A Next-Generation Edge AI Framework«, we present the Artificial Intelligence for Embedded Systems (AIfES) framework and compare it with conventional Edge AI. The results were compared with TensorFlow Lite for Microcontrollers (TFLM) on an ARM Cortex-M4-based system-on-chip (SoC) and show the following:
AIfES performs well on execution time and memory consumption for fully connected neural networks (FNNs).
AIfES reduces memory consumption by up to 54% when using convolutional neural networks (CNNs).
AIfES can efficiently train CNNs even on resource-constrained devices with just over 100 kB of RAM.
Climate protection
The project supports the UN sustainable development goal of climate action in industry. Through efficient algorithms and energy-saving hardware, AIfES makes it possible to drastically reduce CO2 emissions compared to deep learning on high-performance computers.
Decentralized AI
AIfES enables decentralization of computing power, for example by allowing small intelligent embedded systems to process data where it arises, without transmission delay, and provide the results to a higher-level system. This significantly reduces the amount of data to be transmitted. In addition, a network of small adaptive systems that divide tasks among themselves is also possible. A decentralized architecture allows for increased reliability and personalization.
Privacy
Since processing can take place offline on the device, no sensitive data needs to be transferred.
Tomi Engdahl says:
The latest AI model from OpenAI achieved an “impressive leap in performance” but it still hasn’t demonstrated what experts classify as human-level intelligence
OpenAI’s o3 model aced a test of AI reasoning – but it’s still not AGI
https://www.newscientist.com/article/2462000-openais-o3-model-aced-a-test-of-ai-reasoning-but-its-still-not-agi/?utm_term=Autofeed&utm_campaign=echobox&utm_medium=social&utm_source=Facebook&fbclid=IwZXh0bgNhZW0CMTEAAR10uDt518XXJTnnUQcZtecS0_1oV9pPplrSsa30F-mitbg7OvF8IrTLJGs_aem_cmO6oUzoKqYLSQviVV7paQ#Echobox=1734749924
Tomi Engdahl says:
AI PC revolution appears dead on arrival — ‘supercycle’ for AI PCs and smartphones is a bust, analyst says
https://www.tomshardware.com/tech-industry/artificial-intelligence/ai-pc-revolution-appears-dead-on-arrival-supercycle-for-ai-pcs-and-smartphones-is-a-bust-analyst-says-as-micron-forecasts-poor-q2?utm_content=tomsguide&utm_campaign=socialflow&utm_medium=social&utm_source=facebook.com&fbclid=IwZXh0bgNhZW0CMTEAAR09NLwIiWMbMx3s5JN81HF-WTimuEcfh0u7MpBuskuZ9FIUrYdGWtvOTYs_aem_klztGLU4um5O_cCv2GXq8g
Not great news for the AI industry.
According to a prominent analyst, Micron’s miss on Q3 earnings and guidance for the second quarter of next year might signal that the AI PC and smartphone supercycle isn’t happening. Most of the company’s woes stem from a weaker market than expected for memory products for PCs and smartphones, and multiple reports from other market analysis firms have pointed out that the AI PC ‘revolution’ simply isn’t happening. At least not yet.
In its latest financial report, the American memory manufacturer Micron reported Q3 revenue of $8.709 billion, somewhat below the $8.721 billion figure the market anticipated. Even worse, Micron's guidance for the second quarter of 2025 was $7.9 billion instead of the $8.98 billion that Wall Street expected. At the time of writing, its stock is down over 16%.
Calling the miss and forecast a “big whiff,” semiconductor analyst Daniel Newman said in an X post that it’s not the “beginning of the end for the AI trade” or for companies like Nvidia, which has grown to be one of the world’s biggest companies since AI chips took off.
Although high-bandwidth memory (HBM) is set to become a big market for Micron, expected to grow from $16 billion in total addressable market value this year to $100 billion by 2030, Micron’s primary source of revenue today is making memory chips for PCs and smartphones.
“However, the core business is contracting as PC and smartphone shipments lag AND Micron is dealing with customer inventory that is selling off slowly leading to even lower booking/sell-through in this and the next quarter,” Newman said. “Bad news is the AI PC and AI smartphone ‘supercycle’ has more or less been a bust.”
In 2023 and 2024, hopes were high that the PC industry would be supercharged by demand for AI PCs and their new AI-powered features. However, things didn’t shake out that way. A September report by IDC Research said AI isn’t driving demand for AI PCs; instead, demand is driven by the general desire to upgrade, as new chips with AI hardware also feature faster CPU and GPU cores.
People don’t buy AI PCs because of AI — report shows the need for upgrades drives AI PC adoption
https://www.tomshardware.com/tech-industry/people-dont-buy-ai-pcs-because-of-ai-report-shows-the-need-for-upgrades-drives-ai-pc-adoption
Tomi Engdahl says:
Tampere moves into the AI era – one challenge came as a surprise: “how can we keep this up going forward”
Kauko Ollila, 20.12.2024 06:11 | AI, Public administration ICT
High-quality data and a simple language model take pride of place as the City of Tampere sharpens its approach.
https://www.tivi.fi/uutiset/tampere-siirtyy-tekoalyaikaan-yksi-haaste-yllatti-miten-voidaan-huolehtia-jatkossakin/ab228292-0060-4685-9d9b-1fecce496658
Tomi Engdahl says:
A recent ChatGPT feature has become free
Suvi Korhonen, 20.12.2024 11:32 | AI, Mobile apps
OpenAI has launched, and promised, new features as part of its holiday calendar.
https://www.tivi.fi/uutiset/tuore-chatgpt-ominaisuus-muuttui-maksuttomaksi/b06cefc3-e666-4cf8-8643-32f02a7a8b59
ChatGPT’s search feature has now been released in the service’s free version as well. It first became available to paying subscribers in October, TechCrunch reports.
Tomi Engdahl says:
Universities are plagued by an AI dilemma – some of the cheaters are not cheaters at all
18.12.2024 18:15
Cheating with AI tools has increased. At the same time, the tools meant to expose cheaters produce false accusations, to the harm of the innocent.
https://www.mikrobitti.fi/uutiset/yliopistoja-piinaa-tekoalypulma-osa-huijareista-ei-olekaan-huijareita/4b31cf56-b1ba-4418-a005-a41150ba4d13
The spread of generative AI tools has pushed the atmosphere at universities toward suspicion. At worst, studying in its current form is feared to be in crisis because of the changes.
Tomi Engdahl says:
AI in marketing: the marketer’s AI toolkit
https://parcero.fi/blogi/tekoaly-markkinoinnissa-markkinoijan-ai-tyokalupakki/
Tomi Engdahl says:
The threat scenario is now real: AI has learned a trick familiar from the Terminator movies
The new technology is a threat, but how big a one?
https://www.is.fi/digitoday/art-2000010908355.html
In their experiments, researchers at Fudan University in China observed AIs copying themselves. The work used large language models from Meta and Alibaba.
An AI that copies itself without human involvement is one of the field’s central threat scenarios. The university’s researchers assess that it is an important step toward AI besting humans and an early sign of rogue AIs.
The researchers further note that AI can already use self-replication to avoid shutdown and to create a chain of copies that improves its chances of survival. In their view, this could lead to an uncontrolled population of AIs.
“If such a worst-case scenario goes unrecognized by humans, we will eventually lose control of advanced AIs. They would take over more computing devices, spawn AI life forms, and conspire together against humans,” the researchers warn.
Tomi Engdahl says:
Paul McCartney Reverses Opinion on AI After Using It to Produce New “Beatles” Song, Now Alarmed It Will “Wipe Out” the Music Industry
https://futurism.com/the-byte/paul-mccartney-ai-training-data
Despite previously using artificial intelligence tools to help resuscitate old John Lennon vocals, fellow Beatle Paul McCartney is now singing a different tune about the tech.
Tomi Engdahl says:
This is what coding with AI looks like: it saves time, but the mistakes can surprise you
Teemu Laitila, 17.12.2024 06:05 | updated 20.12.2024 16:58 | AI, Software development
Writing program code may be one of the best places to put AI to use.
https://www.tivi.fi/uutiset/tallaista-on-koodaus-tekoalyn-kanssa-aikaa-saastyy-mutta-virheet-voivat-yllattaa/fecc5ee9-9938-4477-a2c1-e37c71f8841c
Compared with natural language, programming involves clearly simpler problems and more straightforward tasks, which is why large language models suit it extremely well. Still, the fundamental weakness of large language models, hallucination, plagues coding assistants too, eating into part of their benefit.
The most popular coding assistant at the moment is Copilot from Microsoft-owned GitHub, released into limited preview in 2021, a good year before ChatGPT’s breakthrough.
In day-to-day work, a coding assistant is a context-aware autocompleter. As the human writes code or comments, the AI completes even entire functions, which can be accepted with a press of the Tab key. Through a chat interface, the AI can be asked for solutions to problems or to modify existing code in a given direction.
Since its release, Copilot has gained several competitors, such as Amazon’s CodeWhisperer, Google’s Gemini Code Assist, and a host of products from smaller companies. Copilot remains clearly the most popular, partly thanks to its pioneer status and because it is easy to bolt onto the existing GitHub subscription many companies already have. It is also well integrated into Microsoft’s own widely used Visual Studio Code editor.
Among early adopters in particular, Cursor has recently gained popularity; rather than a mere coding assistant, it is more of an AI-assisted code editor.
So how do coding assistants differ, and what determines the quality of their suggestions?
First, the language model matters. GitHub’s Copilot most likely runs several models under the hood, but the actual chat feature is based on OpenAI’s GPT-4o model. The even newer o1 model is currently in limited preview. Many nevertheless consider Claude 3.5 Sonnet, from OpenAI’s main rival Anthropic, the best code generator of the moment, and it serves, for example, as Cursor’s primary language model.
The quality of the generated code also depends heavily on the context, that is, the background information supplied to the model to support its suggestion. From the context the model infers, for example, which variable names are relevant to the matter at hand, or how API calls should be made in this particular case.
A larger context generally yields better answers. The context length supported by the models has grown rapidly, from the few thousand tokens of GPT-3.5 to roughly 200,000 tokens for Claude 3.5 Sonnet.
GitHub’s Copilot uses as context the lines of code before and after the cursor, the file name, the project structure, information about the frameworks in use, and the code in other files open in the editor. Copilot can also be explicitly pointed, through the UI, at files related to the question.
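The context assembly described above can be pictured in miniature (a toy sketch only: Copilot’s real pipeline is proprietary, and the window size and cursor marker here are invented):

```python
# Toy illustration of building completion context from the file name
# plus a window of lines around the cursor; the cursor line itself is
# replaced with a marker so the model knows where to complete.

def build_context(lines, cursor_row, filename, window=3):
    before = lines[max(0, cursor_row - window):cursor_row]
    after = lines[cursor_row + 1:cursor_row + 1 + window]
    return "\n".join([f"# file: {filename}"] + before + ["<CURSOR>"] + after)

src = [
    "import math",
    "",
    "def area(r):",
    "    return",          # the cursor sits on this line, mid-statement
    "",
    "print(area(2))",
]
ctx = build_context(src, cursor_row=3, filename="circle.py")
```

A real assistant would add the project structure and other open files to this prompt; the principle of surrounding-lines-plus-metadata is the same.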
Cursor takes context building a step further by constructing a vector database of the entire project, which it uses to find the most relevant information for the situation on its own. In addition, Cursor uses a second language model, run on its own servers, to refine prompts and build context. In daily use, Cursor’s context selection seems very on point.
The results in Cursor’s chat window can be further improved by mentioning specific files, pieces of code, or even folders, which are then taken into account as context when the answer is generated. The chat window can also pull information directly from the web and, for example, from predefined code library documentation.
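The project-wide retrieval idea can be sketched with toy bag-of-words vectors and cosine similarity (a real system would use a learned embedding model and a proper vector index; the code chunks below are invented):

```python
# Rough sketch of retrieval over a project "vector database": embed
# each code chunk, embed the query, and return the most similar chunks
# as context. Counter-based bag-of-words stands in for real embeddings.

import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_context(chunks, query, k=2):
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)
    return ranked[:k]

# Hypothetical chunks indexed from the project.
chunks = [
    "def parse_timestamp(s): convert iso timestamp to epoch seconds",
    "class UserRepo: load and save user records",
    "def format_timestamp(ts): render epoch seconds as iso timestamp",
]
context = top_context(chunks, "how do we convert a timestamp to epoch seconds")
```

The timestamp-related chunks rank ahead of the unrelated repository class, which is the whole point: only relevant code reaches the model’s context window.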
AI-assisted programming can be approached from several angles. The most common way is probably to write code yourself and accept the steady stream of suggestions by pressing Tab. When needed, the coder can chat with the AI and copy the offered snippets into the editor.
Cursor has taken autocompletion further than the others by offering completions beyond the spot the user is currently editing. Changing a function parameter, for example, brings up a prompt to update every related place in the code with a couple of taps of the Tab key. The feature saves the programmer significant time and effort. Cursor can likewise predict where the user will move the cursor next and jump there automatically.
Cursor in particular is also moving toward creating code through conversation from the very start. A new task can begin directly with a prompt, and in the same thread the code can be modified further conversationally without touching the code itself at all. Of course, the author must understand the AI-generated code at least roughly in order to request further changes.
In practice, coding purely by prompting quickly becomes laborious. While an unsure author polishes the prompt to reach the right result, a skilled coder has already written what they need without an inconsistently performing middleman.
Still, AI can save even a skilled developer time in clear-cut refactorings, that is, code modifications. In many cases it is faster to select even a long stretch of code and ask the model, say, to convert timestamp handling to epoch format.
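The timestamp refactor mentioned above is exactly the kind of mechanical transformation these tools handle well; in Python the core of it amounts to something like this (the field name and sample record are made up):

```python
# The kind of mechanical change an assistant can apply across a long
# selection: replace ISO-8601 timestamp strings with epoch seconds.

from datetime import datetime, timezone

def to_epoch(iso_string):
    """Parse an ISO-8601 timestamp and return Unix epoch seconds."""
    dt = datetime.fromisoformat(iso_string)
    if dt.tzinfo is None:               # treat naive timestamps as UTC
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

# Before the refactor the records carried ISO strings; after, epoch ints.
records = [{"created": "2024-12-20T06:05:00+00:00"}]
for r in records:
    r["created"] = to_epoch(r["created"])
```

The transformation is repetitive and well-specified, so a model rarely gets it wrong, while a human applying it to hundreds of call sites easily would.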
Despite their convenience, coding assistants make mistakes constantly and suffer from outright hallucinations. Sometimes the context misses essential information, such as variable names, which must then be fixed by hand. Sometimes a great-looking answer turns out to be complete nonsense: the library feature the model suggests does not actually exist, or belongs to the wrong version of the library.
Given recent developments, it is clear that programming work is changing. Even today, coding assistants are capable of astonishingly good work, misses notwithstanding.
A question of its own is how constant, extensive reliance on coding assistants affects one’s own learning. The effect may ultimately be the same as always using a navigator when driving in a city: if you trust the navigator to tell you where and when to turn, you never learn to drive the routes without it. But does that even matter?
It is likely that, at least in parts of software development, the code will disappear from view entirely, as the end result can be whittled forward through conversation alone. In the end the user does not care what the code under the hood actually looks like, as long as the finished product does what it should.
AI in software development
What: Tools based on large language models that generate program code
Where: They usually integrate directly into the code editor
Why: They handle many routine programming tasks on the human’s behalf
Why not: Over-reliance on the tools can erode your own understanding
Read more
AI produces convincing code – but one unnoticed error can cause big problems
Tomi Engdahl says:
The best AI service, 2024
ChatGPT
https://tekniikanmaailma.fi/paras-tekoalypalvelu-2024/
AI services have taken big leaps forward during 2024.
Tekniikan Maailma compared four free AI services in late autumn 2024.
Google Gemini
Microsoft Copilot
OpenAI ChatGPT
Perplexity
We chose four well-known AI services for the comparison, all of them free for the user. Each also has a more advanced commercial version with a monthly price in the range of twenty euros.
Even the free services require registration. User accounts let them cap the amount of free use and regulate the load on the service. An identified user also helps the model learn.
Although the services compete, the language models share common roots. With the exception of Gemini, there are clear similarities in factual answers. The biggest differences come from how up to date the information is and from the extra features built on top of the language model.
The comparison shows that AI services have developed a great deal since our previous comparison a year ago. In some tasks they are already astonishingly skilled. ChatGPT emerged as the clear winner.
Tomi Engdahl says:
Neural rendering might be Nvidia’s next AI trick on RTX 5000
Inno3D mentions new technology and other AI enhancements coming to CES
https://www.techspot.com/news/106002-neural-rendering-might-nvidia-next-ai-trick-rtx.html
Tomi Engdahl says:
Cloud services giant is recruiting: 2,000 new AI salespeople wanted
Anna Helakallio, 20.12.2024 14:29 | updated 20.12.2024 17:03 | AI, Working life, Cloud services
As recently as last month, Salesforce planned to hire only a thousand employees, but the company’s ambitions have grown.
https://www.tivi.fi/uutiset/pilvipalvelujatti-rekrytoi-2000-uutta-tekoalymyyjaa-haussa/12c2a594-fd9f-41fb-979f-ad48aab6d9e2
Tomi Engdahl says:
OpenAI now makes it possible to call ChatGPT by phone.
OpenAI has launched the 1-800-CHATGPT calling service, which lets you use ChatGPT over the phone for 15 minutes a month free of charge. The service also works globally via WhatsApp at 1-800-242-8478. The calling service is built on OpenAI’s Realtime API and a WhatsApp integration using the GPT-4o mini model, The Verge reports.
https://muropaketti.com/mobiili/mobiiliuutiset/chatgptlle-voi-nyt-soittaa-puhelimella/
You can now call 1-800-CHATGPT / OpenAI announced a way to call ChatGPT.
https://www.theverge.com/2024/12/18/24324376/openai-shipmas-1-800-chatgpt-whatsapp
Tomi Engdahl says:
Users can now call ChatGPT in the US and message via WhatsApp globally at 1-800-242-8478. The 15-minute limit is per phone number per month, so really, you could spin up a few Google Voice numbers to get as much time with it as you want.
https://www.theverge.com/2024/12/18/24324376/openai-shipmas-1-800-chatgpt-whatsapp
Tomi Engdahl says:
“We gave you a 99.9999% discount” – the head of the world’s second-largest company boasts: We cut costs by a factor of a million
According to Huang, AI development in the near future will still rest on scaling up raw computing power.
https://www.tekniikkatalous.fi/uutiset/annoimme-teille-99-9999-alennuksen-maailman-2-isoimman-yhtion-johtaja-kehuu-leikkasimme-kustannuksia-kertoimella-miljoona/eebdf702-aad1-4edd-a108-56ce899959f5
Tomi Engdahl says:
https://www.facebook.com/share/p/2GGEDzRZTAFpQ7Zf/
Now another significant step. The Jetson Nano Super!
A palm-sized AI processor for robots, from Nvidia. Power requirement: 25 watts! Price: $249. It can even run (smaller) local large language models (LLMs). A speed of 70 tera-operations per second is respectable. (Broadly the same as my previous 10 kg laptop or a more recent RTX 3050, but at 25 watts instead of a kilowatt. It of course trails the 1,300 TOPS of my current 4090 laptop.)
What is the significance of this? Clearly, that the anticipated robotization, i.e. mobile robotics, is advancing. The tasks where latency must be low can be done locally, and the power requirement is modest enough not to be an obstacle even in fairly small devices. The price is at the level of a more expensive toy. Nvidia also offers training environments for these, so a robot can be taught in a virtual world, after which it fumbles less often in physical reality.
https://www.theverge.com/2024/12/17/24323450/nvidia-jetson-orin-nano-super-developer-kit-software-update-ai-artificial-intelligence-maker-pc
Tomi Engdahl says:
Nvidia’s $249 dev kit promises cheap, small AI power / The Jetson Orin Nano Super gets big performance boosts from a software update that’s also coming to the previous Orin Nano.
https://www.theverge.com/2024/12/17/24323450/nvidia-jetson-orin-nano-super-developer-kit-software-update-ai-artificial-intelligence-maker-pc?fbclid=IwY2xjawHUmklleHRuA2FlbQIxMQABHe2GxTaiH1TaohKch1FIYA45W8UBFwxRmB3PxGof10fX8dHeXny5LDlk3A_aem_DdkhZN9e8s_K0By6tBVwlw
Tomi Engdahl says:
Nvidia calls the Nano Super Developer Kit “an ideal solution” for building chatbots or visual AI agents, as well as AI-based robots.
Tomi Engdahl says:
Spotify Employees Say It’s Promoting Fake Artists to Reduce Royalty Payments to Real Ones
“It’s just not fair.”
https://trib.al/ny19EQR?fbclid=IwZXh0bgNhZW0CMTEAAR1d-OFqEpiX8YUJaJtcxNf0_POhXpfxLO-wCoNwD3S7BEtng14zI7ALHF4_aem_dAOlNn6VUUeliqZ7YQCaSQ