Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”
Tomi Engdahl says:
Katrina Manson / Financial Times:
Ex-Pentagon software chief Nicolas Chaillan, who recently resigned, says China is headed for global dominance due to advances in AI, ML, and cyber capabilities — Nicolas Chaillan speaks of ‘good reason to be angry’ as Beijing heads for ‘global dominance’ through technological innovation
https://t.co/64L6TWH6aE
Tomi Engdahl says:
Tim Culpan / Bloomberg:
Despite fears in the West, China’s alleged AI prowess is mostly limited to computer vision, useful for domestic control of citizens, while US expertise is far broader
https://www.bloomberg.com/opinion/articles/2021-10-11/china-isn-t-the-ai-juggernaut-the-west-fears-u-s-expertise-is-more-valuable
Tomi Engdahl says:
Kyle Wiggers / VentureBeat:
Microsoft and Nvidia claim to have trained the largest and most capable AI language model yet, containing 530B parameters — Microsoft and Nvidia today announced that they trained what they claim is the largest and most capable AI-powered language model to date: Megatron-Turing Natural Language Generation (MT-NLP).
https://venturebeat.com/2021/10/11/microsoft-and-nvidia-team-up-to-train-one-of-the-worlds-largest-language-models/
Tomi Engdahl says:
“We believed as long as we’re making clear this is a parody, we’re not doing anything to harm his image,” says visual effects artist Chris Umé, the creator of the hyper-realistic Tom Cruise deepfakes. https://cbsn.ws/3Bycd5u
Tomi Engdahl says:
Will AI Help Design Your Next Product?
Sept. 29, 2021
Machine learning is creeping into tools used to design chips, software, and more.
https://www.electronicdesign.com/altembedded/article/21175371/electronic-design-will-ai-help-design-your-next-product
Machine-learning (ML) and artificial-intelligence (AI) models based on deep neural networks (DNNs) are being exploited in a plethora of applications from voice analysis in the cloud for smart speakers to identifying objects for self-driving cars. Many of these applications employ multiple models that perform different types of identification and optimization chores.
But consumer and business applications aren’t the only places where AI/ML is coming into play. AI/ML software-development kits allow designers to incorporate these technologies into their own products, and tool developers are integrating them into their solutions so that the compiler you’re using might have an AI/ML model or two tuning your next design.
AI/ML will continue to crop up in more tools. However, in conventional language compilers like gcc or LLVM, it’s made fewer inroads because such compilers already have a good deal of optimization implemented by design.
Tomi Engdahl says:
You may not know the term “generative adversarial network” yet, but you can see it in action on a website that uses artificial intelligence to generate spookily realistic images of people who don’t exist. The site is aptly named: thispersondoesnotexist.com. https://cbsn.ws/3lvn7ng
Tomi Engdahl says:
Researchers from MIT and the Qatar Center for Artificial Intelligence have developed a machine learning system that analyzes high-resolution satellite imagery, GPS coordinates and historical crash data in order to map potential accident-prone sections in road networks, successfully predicting accident ‘hot spots’ where no other data or previous methods would indicate them.
DEEP LEARNING
AI Predicts Accident Hot-Spots From Satellite Imagery and GPS Data
https://www.unite.ai/ai-predicts-accident-hot-spots-from-satellite-imagery-and-gps-data/
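The article doesn’t describe the MIT/QCRI model itself, but the general idea of scoring road-grid cells by combining crash history with map-derived features can be sketched in a toy form. Everything below (the feature names, the weights, the scoring rule) is illustrative, not taken from the paper:

```python
# Toy illustration (not the MIT/QCRI model): rank road-grid cells by a
# risk score combining historical crash counts with a map-derived feature.
def risk_scores(cells, w_history=0.6, w_density=0.4):
    """Each cell is a dict with 'crashes' (historical count) and
    'road_density' (e.g. derived from satellite imagery), both >= 0."""
    max_crashes = max(c["crashes"] for c in cells) or 1
    max_density = max(c["road_density"] for c in cells) or 1
    return [
        w_history * c["crashes"] / max_crashes
        + w_density * c["road_density"] / max_density
        for c in cells
    ]

cells = [
    {"crashes": 12, "road_density": 0.9},
    {"crashes": 0,  "road_density": 0.8},   # no history, but dense roads
    {"crashes": 1,  "road_density": 0.1},
]
scores = risk_scores(cells)
hot_spot = scores.index(max(scores))
print(hot_spot)  # cell 0 ranks highest
```

Note how the second cell, with no crash history at all, still outranks the third purely on imagery-derived features; that is the paper’s headline claim (predicting hot spots where historical data alone would show nothing) in miniature.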
Tomi Engdahl says:
AI and robots will revolutionize office work
https://etn.fi/index.php/opinion/12704-tekoaely-ja-robotit-mullistavat-toimistotyoen
The coronavirus pandemic forced companies into the digital age as employees shifted to remote work. But what happens next in office work? Simple tasks will most likely disappear. Robots and AI are coming to the office, says Thomas Opolski, sales director at Speech Processing Solutions.
A great deal has been written about how the pandemic changed working life overnight: dining tables and guest rooms turned into home offices, remote meetings attended in sweatpants, and schoolchildren taught at a distance.
Fortunately, many workplaces are now returning to on-site work, with the lessons of remote working still fresh in mind. Companies are adopting flexible work and office arrangements that allow working from home more broadly than before, which often means higher productivity, better control over one’s own work, and less time spent commuting, leaving more time for family.
But hybrid work is only the beginning. It will be followed by a series of changes, both large and small, that will transform workplaces.
Tomi Engdahl says:
Just like Smokey the Bear, AI can prevent forest fires, at least those caused by failures in the power grid
SMOKEY THE AI
https://spectrum.ieee.org/smokey-the-ai
Smart image analysis algorithms, fed by cameras carried by drones and ground vehicles, can help power companies prevent forest fires
The 2021 Dixie Fire in northern California is suspected of being caused by Pacific Gas & Electric’s equipment. The fire is the second-largest in California history.
The 2020 fire season in the United States was the worst in at least 70 years, with some 4 million hectares burned on the west coast alone. These West Coast fires killed at least 37 people, destroyed hundreds of structures, caused nearly US $20 billion in damage, and filled the air with smoke that threatened the health of millions of people. And this was on top of a 2018 fire season that burned more than 700,000 hectares of land in California, and a 2019-to-2020 wildfire season in Australia that torched nearly 18 million hectares.
While some of these fires started from human carelessness—or arson—far too many were sparked and spread by the electrical power infrastructure and power lines. The California Department of Forestry and Fire Protection (Cal Fire) calculates that nearly 100,000 burned hectares of those 2018 California fires were the fault of the electric power infrastructure, including the devastating Camp Fire, which wiped out most of the town of Paradise. And in July of this year, Pacific Gas & Electric indicated that blown fuses on one of its utility poles may have sparked the Dixie Fire, which burned nearly 400,000 hectares.
Until these recent disasters, most people, even those living in vulnerable areas, didn’t give much thought to the fire risk from the electrical infrastructure. Power companies trim trees and inspect lines on a regular—if not particularly frequent—basis.
However, the frequency of these inspections has changed little over the years, even though climate change is causing drier and hotter weather conditions that lead to more intense wildfires. In addition, many key electrical components are past their expected service lives, including insulators, transformers, arrestors, and splices that are more than 40 years old. Many transmission towers, most built for a 40-year lifespan, are entering their final decade.
The way the inspections are done has changed little as well.
Historically, checking the condition of electrical infrastructure has been the responsibility of men walking the line.
Recently, power utilities have started using drones to capture more information more frequently about their power lines and infrastructure. In addition to zoom lenses, some are adding thermal sensors and lidar onto the drones.
Thermal sensors pick up excess heat from electrical components like insulators, conductors, and transformers. If ignored, these electrical components can spark or, even worse, explode. Lidar can help with vegetation management, scanning the area around a line and gathering data that software later uses to create a 3-D model of the area.
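The core of the thermal-inspection step described above is simple to state: flag anything in the thermal image that runs much hotter than its surroundings. A minimal sketch of that idea, with made-up thresholds and data (not any utility’s actual pipeline), might look like this:

```python
# Sketch of the thermal-inspection idea (illustrative thresholds, not a
# utility's actual pipeline): flag pixels whose temperature in a drone's
# thermal frame exceeds ambient by a set margin.
def flag_hotspots(thermal_grid, ambient_c, margin_c=25.0):
    """thermal_grid: 2-D list of per-pixel temperatures in Celsius.
    Returns (row, col) coordinates running hotter than ambient + margin."""
    return [
        (r, c)
        for r, row in enumerate(thermal_grid)
        for c, temp in enumerate(row)
        if temp > ambient_c + margin_c
    ]

frame = [
    [21.0, 22.5, 21.8],
    [22.1, 78.4, 23.0],   # overheating splice or insulator
    [21.5, 22.0, 21.7],
]
print(flag_hotspots(frame, ambient_c=22.0))  # [(1, 1)]
```

Real systems pair detections like this with component recognition, so the flagged pixels can be attributed to a specific insulator or transformer rather than just a location in the frame.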
Bringing any technology into the mix that allows more frequent and better inspections is good news. And it means that, using state-of-the-art as well as traditional monitoring tools, major utilities are now capturing more than a million images of their grid infrastructure and the environment around it every year.
Now for the bad news. When all this visual data comes back to the utility data centers, field technicians, engineers, and linemen spend months analyzing it—as much as six to eight months per inspection cycle. That takes them away from their jobs of doing maintenance in the field. And it’s just too long: By the time it’s analyzed, the data is outdated.
It’s time for AI to step in. And it has begun to do so. AI and machine learning have begun to be deployed to detect faults and breakages in power lines.
Multiple power utilities, including Xcel Energy and Florida Power and Light, are testing AI to detect problems with electrical components on both high- and low-voltage power lines. These power utilities are ramping up their drone inspection programs to increase the amount of data they collect (optical, thermal, and lidar), with the expectation that AI can make this data more immediately useful.
Tomi Engdahl says:
This is literally a technique available in the game Uplink.
AI-Savvy Criminals Clone Executive’s Voice in $35 Million Deepfake Bank Heist
https://singularityhub.com/2021/10/20/ai-savvy-criminals-pulled-off-a-35-million-deepfake-bank-heist/
Thanks to the advance of deepfake technology, it’s becoming easier to clone peoples’ voices. Some uses of the tech, like creating voice-overs to fill in gaps in Roadrunner, the documentary about Anthony Bourdain released this past summer, are harmless (though even the ethics of this move were hotly debated when the film came out). In other cases, though, deepfaked voices are being used for ends that are very clearly nefarious—like stealing millions of dollars.
An article published last week by Forbes revealed that a group of cybercriminals in the United Arab Emirates used deepfake technology as part of a bank heist that transferred a total of $35 million out of the country and into accounts all over the world.
In this case, criminals used deepfake software to recreate the voice of an executive at a large company (details around the company, the software used, and the recordings to train said software don’t appear to be available). They then placed phone calls to a bank manager with whom the executive had a pre-existing relationship, meaning the bank manager knew the executive’s voice. The impersonators also sent forged emails to the bank manager confirming details of the requested transactions. Between the emails and the familiar voice, when the executive asked the manager to authorize transfer of millions of dollars between accounts, the manager saw no problem with going ahead and doing so.
The fraud took place in January 2020, but a relevant court document was just filed in the US last week. Officials in the UAE are asking investigators in the US for help tracing $400,000 of the stolen money that went to US bank accounts at Centennial Bank.
Tomi Engdahl says:
Breakthrough proof clears path for quantum AI
https://phys.org/news/2021-10-breakthrough-proof-path-quantum-ai.html
Convolutional neural networks running on quantum computers have generated significant buzz for their potential to analyze quantum data better than classical computers can. While a fundamental solvability problem known as “barren plateaus” has limited the application of these neural networks for large data sets, new research overcomes that Achilles heel with a rigorous proof that guarantees scalability.
“The way you construct a quantum neural network can lead to a barren plateau—or not,” said Marco Cerezo, co-author of the paper titled “Absence of Barren Plateaus in Quantum Convolutional Neural Networks,” published today by a Los Alamos National Laboratory team in Physical Review X. Cerezo is a physicist specializing in quantum computing, quantum machine learning, and quantum information at Los Alamos. “We proved the absence of barren plateaus for a special type of quantum neural network. Our work provides trainability guarantees for this architecture, meaning that one can generically train its parameters.”
As an artificial intelligence (AI) methodology, quantum convolutional neural networks are inspired by the visual cortex. As such, they involve a series of convolutional layers, or filters, interleaved with pooling layers that reduce the dimension of the data while keeping important features of a data set.
These neural networks can be used to solve a range of problems, from image recognition to materials discovery.
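The “barren plateau” problem mentioned above has a standard schematic statement (the notation here is generic, not drawn from the article):

```latex
% A cost landscape C(\theta) of an n-qubit parameterized circuit has a
% barren plateau when the gradient's variance vanishes exponentially in n:
\operatorname{Var}_{\boldsymbol{\theta}}\bigl[\partial_{\theta_k} C(\boldsymbol{\theta})\bigr]
  \in O\bigl(b^{-n}\bigr), \qquad b > 1,
% so randomly initialized gradient-based training stalls at scale. The
% trainability guarantee quoted for QCNNs is that this variance instead
% shrinks at worst polynomially with the number of qubits:
\operatorname{Var}_{\boldsymbol{\theta}}\bigl[\partial_{\theta_k} C(\boldsymbol{\theta})\bigr]
  \in \Omega\bigl(1/\operatorname{poly}(n)\bigr).
```

In words: for generic deep quantum circuits, gradients become exponentially small as qubits are added, while the Los Alamos proof shows the QCNN architecture escapes this, which is why the result matters for scaling to large data sets.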
Tomi Engdahl says:
Next-Generation Batteries Will Be Brought to You by AI
Oct. 13, 2021
Maintaining a competitive edge in the battery manufacturing space may come down to the successful adoption of AI to accelerate testing phases and identify areas for cost efficiencies and performance improvements.
https://www.electronicdesign.com/power-management/whitepaper/21177573/addionics-nextgeneration-batteries-will-be-brought-to-you-by-ai
Tomi Engdahl says:
https://www.tomshardware.com/news/the-pentagon-aims-to-predict-enemys-actions-through-ai
Tomi Engdahl says:
Machine learning will unlock new value from urban IoT platforms
https://cities-today.com/industry/machine-learning-unlock-value-urban-iot-platforms/
Tomi Engdahl says:
Kyle Wiggers / VentureBeat:
AWS launches new EC2 instances powered by AI accelerators from Intel’s Habana, claims 40% better price-performance to train ML models over latest GPU instances
Amazon launches AWS instances powered by Habana’s AI accelerator chip
https://venturebeat.com/2021/10/26/amazon-launches-aws-instances-powered-by-habanas-ai-accelerator-chip/
Tomi Engdahl says:
Edge AI accelerators are just sand without future-ready software
https://www.edn.com/edge-ai-accelerators-are-just-sand-without-future-ready-software/
While artificial intelligence (AI) advancements are often powered by massive GPUs in data centers, deployment of AI algorithms on edge devices requires a new generation of power- and cost-efficient AI chips. In recent years, several vendors have developed innovative AI chip architectures to address this need. However, unlike GPUs, which have well-established programming models and software toolchains, current AI processors often focus on performance benchmarks while software support lags behind.
An AI processor can’t be characterized solely by metrics like the number of tera operations per second (TOPS) or ResNet50 inferences it can process per second.
Tomi Engdahl says:
Bryan Walsh / Axios:
GitHub says as much as 30% of new code on its network is written with its AI tool Copilot, and 50% of developers who tried it since July have kept using it — The open-source software developer GitHub says as much as 30% of newly written code on its network is being done with the help of the company’s AI programming tool Copilot.
Nearly a third of new code on GitHub is written with AI help
https://www.axios.com/copilot-artificial-intelligence-coding-github-9a202f40-9af7-4786-9dcb-b678683b360f.html
Why it matters: Copilot can look at code written by a human programmer and suggest further lines or alternative code, eliminating some of the repetitive labor that goes into coding.
How it works: Copilot is built on the OpenAI Codex algorithm, which was trained on terabytes of openly available source code and can translate human language into programming language. It serves as a more sophisticated autocomplete tool for programmers.
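Copilot itself is a large neural model, but the “sophisticated autocomplete” framing can be illustrated with a much humbler stand-in: a bigram model that learns next-token suggestions from a tiny corpus of code. Everything here (the corpus, the tokenization by whitespace) is a toy, not how Codex works:

```python
# Toy stand-in for code autocomplete: learn which token most often
# follows each token in a small corpus, then suggest continuations.
from collections import Counter, defaultdict

def train_bigrams(snippets):
    follows = defaultdict(Counter)
    for snippet in snippets:
        tokens = snippet.split()
        for a, b in zip(tokens, tokens[1:]):
            follows[a][b] += 1
    return follows

def suggest(follows, token):
    """Return the most frequently observed next token, or None."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

corpus = [
    "for i in range ( n ) :",
    "for item in items :",
    "for i in range ( 10 ) :",
]
model = train_bigrams(corpus)
print(suggest(model, "range"))  # '('
print(suggest(model, "in"))     # 'range'
```

The gap between this and Copilot is the model, not the interface: Codex conditions on whole files and natural-language comments rather than a single preceding token, but the suggest-as-you-type loop is the same shape.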
“We hear a lot from our users that their coding practices have changed using Copilot,” says Oege de Moor, VP of GitHub Next, the team rolling out Copilot. “Overall, they’re able to become much more productive in their coding.”
Between the lines: The company will announce at its GitHub Universe conference today that it will be rolling out Copilot support for all popular programming languages, including Java.
Tomi Engdahl says:
Man Arrested for Uncensoring Japanese Porn With AI in First Deepfake Case
https://www.vice.com/en/article/xgdq87/deepfakes-japan-arrest-japanese-porn
Deepfake technology could practically reverse the pixelation in Japanese adult videos, raising legal and ethical questions.
Tomi Engdahl says:
Chinese server builder Inspur trains monster text-generating neural network
Yuan 1.0 said to pass Turing test, and require many fewer GPUs than the GPT-3 Microsoft licensed from OpenAI
https://www.theregister.com/2021/10/28/yuan_1_natural_language_model/
Tomi Engdahl says:
https://blog.attractive.ai/2021/09/can-ai-monkey-understand-your-website.html
Tomi Engdahl says:
How to leverage AI in your properties
https://www.caverion.fi/opas/tekoaly-ebook/
Tomi Engdahl says:
A groundbreaking AI that constantly tests key functions of your service. For free.
https://attractive.ai/en/monitoring
Tomi Engdahl says:
Facebook to Shut Down Face-Recognition System, Delete Data
https://www.securityweek.com/facebook-shut-down-face-recognition-system-delete-data
Facebook said it will shut down its face-recognition system and delete the faceprints of more than 1 billion people amid growing concerns about the technology and its misuse by governments, police and others.
“This change will represent one of the largest shifts in facial recognition usage in the technology’s history,” Jerome Pesenti, vice president of artificial intelligence for Facebook’s new parent company, Meta, wrote in a blog post on Tuesday. “Its removal will result in the deletion of more than a billion people’s individual facial recognition templates.”
Tomi Engdahl says:
Who has the patience to edit hours of multiple footage from a family event? AI does.
Watch Out, Wedding Videographers, AI Is Coming for You
Automated video editors will intelligently merge simultaneous streams of events
https://spectrum.ieee.org/ai-video-editing
Although the quality of video recorded by smartphones has been improving dramatically in recent years, the hassle of collecting and assembling multiple recordings of a single event has changed little. Sure, TikTok mavens, Instagram influencers, and other dedicated amateurs have learned how to use editing software to piece together engaging, shareable, smartphone movies.
But that leaves a lot of us out of the picture—though not for much longer. The next frontier of consumer video creation will be powered by AI, not by a professional videographer or dedicated amateur. These systems will intelligently and automatically combine video from multiple smartphones and other video devices, including action cameras, drones, gimbal cameras, or virtually any other connected camera into one finished production. We think this kind of system will be available to consumers within 2-3 years.
This is consumer multicam video production, an ecosystem of technologies that may just put wedding videographers out of business, or at least give them a run for the money. The building blocks for this system already exist. They include the cameras and advanced video processing software built into today’s smartphones, AI that’s already great at image recognition, and high speed, low latency wireless communications, including high-speed LTE wireless, Wi-Fi networks, and 5G.
Tomi Engdahl says:
Ingrid Lunden / TechCrunch:
H2O.ai, which provides an open source machine learning platform to build smart applications, raises a $100M Series E at a $1.6B pre-money valuation — H2O.ai — a startup that has developed an open-source framework as well as proprietary apps that make it easier for any kind of enterprise …
H2O.ai raises $100M at a $1.6B pre-money valuation for tools to make AI usable by any kind of enterprise
https://techcrunch.com/2021/11/07/h2o-ai-raises-100m-at-a-1-7b-valuation-for-tools-to-make-ai-usable-by-any-kind-of-enterprise/
H2O.ai — a startup that has developed an open-source framework as well as proprietary apps that make it easier for any kind of enterprise to build and operate artificial intelligence-based services — has seen a surge of interest as AI applications have become more ubiquitous, and enterprises beyond tech companies want to get in on the action. Now, it has raised $100 million to fuel its growth, a round of funding that values H2O.ai at $1.7 billion post-money ($1.6 billion pre-money).
Democratize AI
with the H2O AI Hybrid Cloud
https://www.h2o.ai/
Tomi Engdahl says:
Kyle Wiggers / VentureBeat:
Nvidia unveils Riva Custom Voice, a toolkit which companies can use to create custom, “human-like” voice assistants with only 30 minutes of speech data
Nvidia’s Riva Custom Voice lets companies create custom voices powered by AI
https://venturebeat.com/2021/11/09/nvidias-riva-custom-voice-lets-companies-create-custom-voices-powered-by-ai/
At its fall 2021 GPU Technology Conference (GTC), Nvidia unveiled Riva Custom Voice, a new toolkit that the company claims can enable customers to create custom, “human-like” voices with only 30 minutes of speech recording data. According to Nvidia, businesses can use Riva Custom Voice to develop a virtual assistant with a unique voice, while call centers and developers can leverage it to launch brand voices and apps to support people with speech and language disabilities.
Brand voices like Progressive’s Flo are often tasked with recording phone trees and e-learning scripts in corporate training video series. For companies, the costs can add up — one source pegs the average hourly rate for voice actors at $39.63, plus additional fees for interactive voice response (IVR) prompts. Synthesization could boost actors’ productivity by cutting down on the need for additional recordings, potentially freeing the actors up to pursue more creative work — and saving businesses money in the process.
Tomi Engdahl says:
Mike Wheatley / SiliconANGLE:
Nvidia makes Omniverse, a real-time collaborative design tool, generally available, and now allows developers to create hyper-realistic, interactive AI avatars
Nvidia brings highly realistic, walking, talking AI avatars to its Omniverse design tool
https://siliconangle.com/2021/11/09/nvidia-brings-highly-realistic-walking-talking-ai-avatars-omniverse-design-tool/
Nvidia Corp. is expanding its hyper-realistic graphics collaboration platform and ecosystem Nvidia Omniverse with a new tool for generating interactive artificial intelligence avatars.
The company also announced a new synthetic data-generation engine that can generate physically simulated synthetic data for training deep neural networks. The new capabilities, announced at Nvidia GTC 2021 today, are designed to expand the usefulness of the Omniverse platform and enable the creation of a new breed of AI models.
Nvidia Omniverse is a real-time collaboration tool that’s designed to bring together graphics artists, designers and engineers to create realistic, complex simulations for a variety of purposes. Industry professionals from aerospace, architecture, construction, media and entertainment, manufacturing and gaming all use the software.
The platform has been available in beta for a couple of years, and today it finally launches into general availability.
NVIDIA Omniverse
Creating and Connecting Virtual Worlds
https://www.nvidia.com/en-us/omniverse/
NVIDIA Omniverse™ is an easily extensible, open platform built for virtual collaboration and real-time physically accurate simulation. Creators, designers, researchers, and engineers can connect major design tools, assets, and projects to collaborate and iterate in a shared virtual space. Developers and software providers can also easily build and sell Extensions, Apps, Connectors, and Microservices on Omniverse’s modular platform to expand its functionality.
Tomi Engdahl says:
NVIDIA Embedded has unveiled its next-generation edge AI Jetson module and developer kit, the Jetson AGX Orin, promising “server-class” performance in the palm of your hand and a sixfold boost over its predecessor.
NVIDIA Unveils Jetson AGX Orin — and Boasts of Sixfold Performance Boost
https://www.hackster.io/news/nvidia-unveils-jetson-agx-orin-and-boasts-of-sixfold-performance-boost-ab5206f9793b
New module and developer kit due to hit shelves in Q1 2022, but the company isn’t sharing pricing just yet.
Tomi Engdahl says:
Nanosatellite processes images with the help of AI
https://etn.fi/index.php/13-news/12807-nanosatelliitti-kaesittelee-kuvia-tekoaelyn-avulla
As electronics advance, AI can be put to use in ever smaller devices. At the upcoming Space Tech Expo, CSUG and Teledyne will present details of a nanosatellite that can recognize images before transmitting them to a ground station.
The QlevEr Sat demo, designed to fit in a 10 x 20 x 30 cm 6U CubeSat nanosatellite, is built around the quad-core Teledyne e2v Qormino QLS1046 processor running at 1.8 GHz. The system-on-chip uses 64-bit Arm Cortex-A72 cores and has 4 GB of DDR4 memory. Images are captured with a 16-megapixel Emerald CMOS image sensor.
The QlevEr Sat system first acquires wide-area images, which are then processed by the on-board compute resource running a custom AI algorithm. The algorithm was developed at the MIAI institute (Multidisciplinary Institute in Artificial Intelligence) with a focus on optimizing performance for embedded targets.
The solution can serve many potential applications. The primary one is deforestation monitoring, but others include monitoring volcanic activity, assessing damage from natural or man-made disasters, tracking urban growth, analyzing glacier movement, ocean research, and possible defense-related tasks.
https://semiconductors.teledyneimaging.com/en/newsroom/deep-learning-ai-in-space-enabled-by-qormino-processing-module/
Tomi Engdahl says:
AI Don’t Like the Look of This
These autonomous drones have onboard AI processing that can detect chemical and explosive threats from the air to aid first responders.
https://www.hackster.io/news/ai-don-t-like-the-look-of-this-c1e736518204
Tomi Engdahl says:
How Artificial Intelligence (AI) Stops Cybercriminals
https://www.hackread.com/how-artificial-intelligence-ai-stops-cybercriminals/
Newer AI algorithms are extremely good at analyzing data traffic, access, and transfer, as well as detecting outliers or anomalies in data trends. Below are some of the ways AI can prevent and mitigate the damage caused by cybercrime.
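The simplest form of the outlier detection the article alludes to is a statistical baseline test: flag any observation that deviates sharply from the rest of the series. The sketch below uses a z-score over traffic volumes; the data and threshold are invented for illustration, and real products layer learned models on top of ideas like this:

```python
# Minimal anomaly detector: flag observations far from the series mean,
# measured in standard deviations (a z-score test).
import math

def anomalies(volumes, threshold=2.5):
    """Return indices of observations more than `threshold` standard
    deviations from the mean of the series."""
    n = len(volumes)
    mean = sum(volumes) / n
    var = sum((v - mean) ** 2 for v in volumes) / n
    std = math.sqrt(var) or 1.0
    return [i for i, v in enumerate(volumes) if abs(v - mean) / std > threshold]

# Hourly outbound traffic in MB; one exfiltration-like spike.
traffic = [102, 98, 110, 95, 105, 99, 101, 2500, 97, 103]
print(anomalies(traffic))  # [7]
```

The weakness of this baseline, and the opening for ML, is that a single fixed threshold can’t adapt to daily cycles or gradual drift; learned models estimate what “normal” looks like per host, per hour, and per protocol.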
Tomi Engdahl says:
11 Myths About Analog Compute
Nov. 9, 2021
In the beginning there was analog. Then digital computing appeared. But analog never went away.
https://www.electronicdesign.com/technologies/analog/article/21180871/mythic-11-myths-about-analog-compute
What you’ll learn:
Comparing digital compute with analog compute
What strides have been taken to make analog a better alternative to digital?
How deep neural networks come into play.
In 1974, Theodore Nelson, the inventor of hypertext, wrote in his book “Computer Lib/Dream Machines” that “analog computers are so unimportant compared to digital computers that we will polish them off in a couple of paragraphs.” This popular attitude toward analog computing hasn’t shifted much in the decades since then, despite the incredible advances made in analog computing technology.
The computational speeds and power efficiency of analog compared to digital have been promising for a long time. The problem is that developing analog systems has traditionally been beset by a number of hurdles, including the size and cost of analog processors. The explosion of the IoT and the growth of AI applications have renewed interest in developing new approaches to analog computing to solve some of the challenges associated with increasingly complex workloads.
Edge AI applications need to be low-cost, small-form-factor devices with low latency, high performance, and low power (see figure). It might surprise many people that analog solutions offer a very compelling solution to these challenges. Recent advances in analog technology, combined with the use of non-volatile memory like flash memory, have eliminated the traditional hurdles.
What follows are 11 common myths associated with analog computing.
1. Digital compute is better than analog compute.
Digital computing solutions have ushered in the Information Age and transformed what once were room-sized computers into incredibly powerful machines that can fit in the palm of our hands. It’s fair to say that for a long time, digital computing solutions were superior to analog solutions for most applications. However, times have changed and when we look at the needs of the future—one where every device will be equipped with powerful AI at the edge—it’s clear that digital compute won’t be able to keep up.
2. Moore’s Law will continue scaling.
Today, only a few manufacturers can follow the Moore’s Law trend—down from dozens in the 1990s—as it’s simply too cost-prohibitive. Process node improvements have slowed down while manufacturing costs have been dramatically rising. Simply put, it’s no longer business as usual with Moore’s Law scaling; new approaches are needed for the next generation of AI processing.
3. Analog systems are too complex to design.
Modern electronic-design-automation (EDA) tools have come a long way to enable high-speed simulation of analog circuits with a high level of fidelity. In addition, the ability for analog circuits to automatically calibrate and compensate for error has progressed by leaps and bounds.
4. Analog compute is mainly a research effort.
In the 1950s and 1960s, analog computers started to become obsolete for commercial applications, although analog computing was still used in research studies and certain industrial and military applications. Of course, a lot has changed since then. Companies like Mythic are taking analog processors to production.
5. Analog systems aren’t capable of high performance.
Analog circuits can be incredibly fast, since they don’t need to rely on logic propagating through digital logic gates, or digital values pulled out of memory banks. By using tiny electrical currents steered through flash-memory arrays, massively parallel matrix operations can be performed in less than one microsecond.
Such performance makes analog systems ideal for compute-intensive workloads like video-analytics applications that use object detection, classification, and depth estimation.
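The physics behind that claim can be illustrated with a short numpy sketch. This is a toy model, not Mythic’s actual design: each flash cell’s programmed conductance stands in for a weight, input voltages drive cell currents via Ohm’s law, and Kirchhoff’s current law sums the currents on each shared output wire, so an entire matrix-vector product happens in one parallel analog step.

```python
import numpy as np

# Toy model of analog compute-in-memory (illustrative, not a real device):
# conductances[i, j] is the programmed conductance of the flash cell at
# row i, column j; voltages[j] is the input applied to column j.
rng = np.random.default_rng(0)
conductances = rng.uniform(0.0, 1.0, size=(4, 8))  # weights, arbitrary units
voltages = rng.uniform(0.0, 1.0, size=8)           # inputs, arbitrary units

# The "analog" result: each cell passes current G*V (Ohm's law), and the
# currents on each row wire sum (Kirchhoff's current law). All four row
# currents appear simultaneously, with no clocking or memory fetches.
row_currents = conductances @ voltages

# The equivalent digital computation, performed term by term.
digital = np.array([sum(conductances[i, j] * voltages[j] for j in range(8))
                    for i in range(4)])

assert np.allclose(row_currents, digital)
```

The parallelism is the point: in the physical array every multiply and every accumulate happens at once, which is why sub-microsecond matrix operations are plausible.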
6. Analog is power-hungry.
One under-the-radar problem is that digital systems are forced to store neural networks in DRAM, which is an expensive, inconvenient, and power-hungry approach. DRAM consumes lots of power both during active use and during idle periods, so system architects spend a great deal of time and effort to maximize the utilization of the processors.
Another issue with digital systems is that they’re extremely precise, and that precision comes at a huge cost in performance and power, especially when it comes to neural networks.
In practice, AI doesn’t need that level of precision. In fact, some analog processors, such as Mythic’s Analog Matrix Processor, perform analog compute inside very dense non-volatile memory and are already up to 10X more energy-efficient than digital systems (with the potential to be 100X to 1000X more energy-efficient for certain use cases). They’re also much faster and can pack 8X more information into the memory.
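A quick numpy sketch shows why inference tolerates reduced precision. This is a generic demonstration, not Mythic’s implementation: quantizing a layer’s FP32 weights to 8 bits (256 levels, roughly the resolution an analog cell must hold) changes the layer’s output only slightly.

```python
import numpy as np

# Illustrative example: a random fully connected layer and input.
rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.5, size=(64, 128)).astype(np.float32)  # FP32 weights
x = rng.normal(0.0, 1.0, size=128).astype(np.float32)        # input vector

# Symmetric 8-bit quantization of the weights: map to integer levels
# in [-127, 127], then scale back to the original range.
scale = np.abs(w).max() / 127.0
w_q = (np.round(w / scale).clip(-127, 127) * scale).astype(np.float32)

exact = w @ x    # full-precision layer output
approx = w_q @ x # output with 8-bit weights

# Relative error of the quantized output, typically a fraction of a percent.
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"relative output error with 8-bit weights: {rel_err:.4f}")
```

Since a small output perturbation rarely changes a network’s prediction, spending power on full digital precision buys little for inference workloads.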
7. Analog chips are expensive to design and manufacture.
There has long been a perception that analog is much more expensive to design and manufacture than digital systems. However, the truth is that it’s becoming increasingly difficult for digital systems to keep up with the increasing costs of manufacturing and mask-set prices, which can reach beyond $100 million for the 1- to 3-nm range.
Analog systems offer a host of performance and power advantages, while also being incredibly cost-efficient. This is because high performance and incredible memory density can be achieved on older process nodes with analog compute. These process nodes are significantly lower cost in terms of mask sets and wafer prices, are mature and stable, and have far greater manufacturing capacity compared to bleeding-edge nodes.
8. Analog systems—like digital systems—must store neural networks in DRAM.
One of the most important aspects of hardware is how much memory can be packed into a processor per square millimeter, and how much power the memory draws. For digital systems, the mainstream memories—SRAM and DRAM—tend to consume too much power, take up too much chip area, and aren’t improving fast enough to drive the improvements needed for today’s AI era.
Analog systems have the advantage of being able to use non-volatile memory (NVM), which offers impressive densities and solves the power-leakage problem. Some analog systems employ flash memory, one of the most common types of NVM.
9. Analog can’t run complex deep neural networks.
Conventional digital processing systems support complex deep neural networks (DNNs). The problem is that these platforms take up considerable silicon real estate, require DRAM, and consume lots of energy, which is why many AI applications offload most of the deep-learning work to remote cloud servers. For systems that require real-time processing for DNNs, the data must be processed locally.
When analog compute is combined with flash technology, processors can run multiple large, complex DNNs on-chip. This eliminates the need for DRAM chips and enables incredibly dense weight storage inside a single-chip accelerator. Processors can further maximize inference performance by having many of the compute-in-memory elements operate in parallel. With the growing demand for real-time processing, this type of on-chip execution of complex DNN models will become increasingly critical.
10. Analog systems aren’t as compact as digital systems.
It’s true that analog systems have traditionally been far too big. However, new approaches make it possible to design incredibly compact systems. One reason is the high density of flash: by combining analog compute with flash memory, it’s possible to use a single flash transistor as a storage medium, a multiplier, and an adder (accumulator) circuit.
11. Analog systems aren’t resilient to changing environmental conditions.
One strength of digital is its wide tolerance for changing environmental conditions, such as shifts in temperature and fluctuating supply voltages. In analog systems of the past, any tiny variation in voltage could introduce errors during processing.
However, some approaches make it possible for analog to match that resiliency to different environmental conditions, and to deliver it at scale. Most modern analog circuits are software-controlled and use a bevy of compensation and calibration techniques. As a result, they can be manufactured even in modern digital processes that exhibit a high degree of variation.
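The compensation idea can be sketched in a few lines. This is a hypothetical model, not any vendor’s actual scheme: an analog channel drifts to an unknown gain and offset, and software recovers both by measuring the channel’s response to two known reference inputs, then applies the inverse correction.

```python
import numpy as np

# Hypothetical drifted analog channel: its true gain and offset are
# unknown to the calibration software.
TRUE_GAIN, TRUE_OFFSET = 1.07, -0.03

def analog_channel(x):
    """Model of a channel affected by process/temperature variation."""
    return TRUE_GAIN * x + TRUE_OFFSET

# Calibration step: drive two known reference levels and solve the
# two-point linear system for the gain and offset.
lo, hi = 0.0, 1.0
m_lo, m_hi = analog_channel(lo), analog_channel(hi)
est_gain = (m_hi - m_lo) / (hi - lo)
est_offset = m_lo - est_gain * lo

def corrected(x):
    """Undo the estimated gain/offset on a raw channel reading."""
    return (analog_channel(x) - est_offset) / est_gain

# After calibration, readings match the true inputs.
rng = np.random.default_rng(2)
samples = rng.uniform(0.0, 1.0, size=16)
recovered = np.array([corrected(s) for s in samples])
assert np.allclose(recovered, samples)
```

Real systems extend this idea with periodic re-calibration and temperature compensation, but the principle is the same: software measures the drift and cancels it.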
Tomi Engdahl says:
AI inspects circuit boards
https://etn.fi/index.php?option=com_content&view=article&id=12814&via=n&datum=2021-11-11_15:33:17&mottagare=30929
Automation company OMRON announces that it has brought to market a circuit-board inspection system that is the first in the world to use artificial intelligence in the inspection process for PCB subassemblies. The solution is called the VT-S10.
In recent years, demand for PCB assemblies for 5G, electric vehicles, and autonomous driving has grown rapidly. This also raises the quality requirements for the boards, since faults in these applications can prove dangerous to people. As PCB assemblies become denser, inspection becomes increasingly challenging, and traditional PCB inspection systems struggle to capture accurate images of features such as solder-joint shapes, which limits the scope and parameters of the inspection.
Tomi Engdahl says:
Clock and Data Recovery Plus AI Will Fuel the Data Center
Oct. 18, 2021
In this Q&A, Semtech’s Timothy Vang discusses how CDR solutions and AI development will optimize the data-center and wireless industries.
https://www.electronicdesign.com/technologies/analog/article/21176757/electronic-design-clock-and-data-recovery-plus-ai-will-fuel-the-data-center?utm_source=EG%20ED%20Connected%20Solutions&utm_medium=email&utm_campaign=CPS211110011&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R
Tomi Engdahl says:
AI can write a passing college paper in 20 minutes
Natural language processing is on the cusp of changing our relationship with machines forever.
https://www.zdnet.com/article/ai-can-write-a-passing-college-paper-in-20-minutes/
AI can do a lot of things extremely well. One thing that it can do just okay — which, frankly, is still quite extraordinary — is write college term papers.
That’s the finding from EduRef, a resource for students and educators, which ran an experiment to determine if a deep learning language prediction model known as GPT-3 could get passing marks in an anonymized trial.
“We hired a panel of professors to create a writing prompt, gave it to a group of recent grads and undergraduate-level writers, and fed it to GPT-3 and had the panel grade the anonymous submissions and complete a follow up survey for thoughts about the writers,” according to an EduRef post. The results were a surprising demonstration of the natural-language prowess of AI.
However, the hurdles for advanced natural language processing are enormous. According to a 2019 paper by the Allen Institute for Artificial Intelligence, machines fundamentally lack commonsense reasoning — the ability to understand what they’re writing. That finding is based on a critical reevaluation of standard tests of commonsense reasoning in machines, such as the Winograd Schema Challenge.
Which makes the results of the EduRef experiment that much more striking.
The AI scored the highest grades in U.S. History and Law writing prompts, earning a B- in both assignments. GPT-3 scored a “C” in a research paper on Covid-19 Vaccine Efficacy, scoring better than one human writer.
Overall, the instructor evaluations suggested that writing produced by GPT-3 was able to mimic human writing in grammar, syntax, and word frequency, although the papers felt somewhat technical. As you might expect, the AI completed the assignments dramatically faster than the human participants: the average time between assignment and completion was 3 days for the humans, versus 3 to 20 minutes for GPT-3.
“Even without being augmented by human interference, GPT-3′s assignments received more or less the same feedback as the human writers,” according to EduRef. “While 49.2% of comments on GPT-3′s work were related to grammar and syntax, 26.2% were about focus and details. Voice and organization were also mentioned, but only 12.3% and 10.8% of the time, respectively. Similarly, our human writers received comments in nearly identical proportions. Almost 50% of comments on the human papers were related to grammar and syntax, with 25.4% related to focus and details. Just over 13% of comments were about the humans’ use of voice, while 10.4% were related to organization.”
Aside from potentially troubling implications for educators, what this points to is a dawning inflection point for natural language processing, heretofore a decidedly human characteristic.
Tomi Engdahl says:
Before AI can reshape healthcare, it needs to help shorten the waiting list
Rolling out artificial intelligence in healthcare still faces barriers. But even small-scale solutions are helping the NHS face its biggest problem
https://www.wired.co.uk/bc/article/ai-reshape-healthcare-shorten-waiting-list?utm_source=mixed-placement&utm_medium=social&utm_campaign=paid-spon–&utm_brand=wired&utm_social-type=paid&fbclid=IwAR1h0XpjJDJffxW_z8O9Ms4zm2-FGWZ_1qwuBB1BUpCYlh1VnOVAzeRzOi4
Artificial intelligence is already being used to read medical imagery, help patients manage conditions such as diabetes, and even suggest new cancer-drug regimens. But with the world still reeling from the pandemic, AI is now needed to help an even more pressing problem. “The biggest pressure is trying to marshal our resources against the waiting list,” said Catherine Pollard, director of tech policy at NHSx, speaking at a recent Microsoft Health breakfast at WIRED Health: Tech. “We have really significant staffing challenges. That needs solutions – ones that are really practical.”
5.6 million people in the UK are currently on hospital waiting lists. Digitisation is helping: remote appointments, for example, soared in popularity during the pandemic and can cut down time spent on non-urgent cases. “We’ve got 55 per cent of our adult population registered on our app. 600,000 patients total, over a million consultations,”
That’s where practical AI can come in useful. Enabling doctors to use voice assistant technology can save time that would previously have been used typing up notes. Natural Language Processing (NLP) can then help clinicians and researchers access those notes more effectively.
Tomi Engdahl says:
Bryan Walsh / Axios:
OpenAI makes its API for GPT-3 generally available in supported countries, eliminating the waiting list for access — The artificial intelligence research company OpenAI will eliminate the waiting list for access to the API of its natural-language-processing (NLP) program GPT-3.
OpenAI’s GPT-3 gets a little bit more open
https://www.axios.com/openai-gpt-3-waiting-list-api-929fd309-f8e1-4571-862a-879492e5ebc6.html
Tomi Engdahl says:
Artificial intelligence: Everyone wants it, but not everyone is ready
https://www.zdnet.com/article/artificial-intelligence-everyone-wants-it-but-not-everyone-is-ready/
Business is bullish on AI, but it takes a well-developed understanding to deliver visible business benefits.
Tomi Engdahl says:
Supercomputers Flex Their AI Muscles
New benchmarks reveal science-task speedups
https://spectrum.ieee.org/ai-supercomputer
Tomi Engdahl says:
From a low-resolution source image to a strikingly sharp super-resolution image: the AI-generated picture closely resembles the original.
https://www.youtube.com/watch?v=WCAF3PNEc_c&list=FL38u1Xr_cEh61HgFsknw9zA&index=3&ab_channel=TwoMinutePapers
Tomi Engdahl says:
AI sees in the dark
https://www.youtube.com/watch?v=bcZFQ3f26pA&ab_channel=TwoMinutePapers
Tomi Engdahl says:
A few conventional photographs provide enough material to build a video in which the viewpoint looks in different directions and rotates around the scene.
https://www.youtube.com/watch?v=dZ_5TPWGPQI&ab_channel=TwoMinutePapers
Tomi Engdahl says:
Nvidia brings a supercomputer to the palm of your hand
https://etn.fi/index.php?option=com_content&view=article&id=12799&via=n&datum=2021-11-09_15:17:17&mottagare=31202
Nvidia has made numerous product announcements at its GTC conference, combining growth in GPU computing power with ever-faster development of machine-learning models. One of the most interesting new products is a new Jetson single-board computer.
The Jetson AGX Orin is the world’s smallest, most powerful, and most energy-efficient AI supercomputer for robotics, autonomous machines, medical devices, and other forms of embedded computing. Compared with its predecessor, the Jetson Xavier, Orin delivers six times the computing power on a similar palm-sized board.
Tomi Engdahl says:
Jetson AGX Orin
Next-level AI performance for next-gen robotics.
https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-orin/
Tomi Engdahl says:
A team from the USC Viterbi School of Engineering has turned to generative adversarial networks — the technology normally associated with deepfake videos and generated images like This Person Does Not Exist — to build better brain-computer interfaces for people with disabilities.
Deepfake-Style GAN Tech Could Prove Key to Improving Brain-Computer Interface Training
https://www.hackster.io/news/deepfake-style-gan-tech-could-prove-key-to-improving-brain-computer-interface-training-cea42e8b1846
Running just one minute of data through a GAN provides the equivalent of 20 minutes of training, the team finds.
Tomi Engdahl says:
https://www.uusiteknologia.fi/2021/11/23/ilmainen-verkkokurssi-tekoalyn-etiikasta/
Tomi Engdahl says:
https://ethics-of-ai.mooc.fi/fi/
Tomi Engdahl says:
Soon you’ll open your door with your face
https://etn.fi/index.php/13-news/12869-pian-avaat-ovesi-kasvoillasi
NXP Semiconductors has introduced an application-ready platform that brings 3D face recognition, powered by machine learning and AI, to smart-home systems. The solution provides reliable 3D face recognition indoors and outdoors under a range of lighting conditions, including bright sunlight, dim night light, and other difficult conditions that challenge traditional face-recognition systems.
The platform is built around a 3D SLM camera. It enables recognition of the actual person, helping to distinguish a real person from spoofing attempts such as a photograph, an imitation mask, or a 3D model, in order to prevent unauthorized access.
NXP EdgeReady MCU-Based Solution for 3D Face Recognition
https://www.nxp.com/design/designs/nxp-edgeready-mcu-based-solution-for-3d-face-recognition:VIZN3D?tid=vanmcu-vision3D
Tomi Engdahl says:
AI Reveals Previously Unknown Biology – We Might Not Know Half of What’s in Our Cells
https://scitechdaily.com/ai-reveals-previously-unknown-biology-we-might-not-know-half-of-whats-in-our-cells/
Tomi Engdahl says:
https://2050.earth/predictions/doctors-will-be-completely-replaced-by-robots