3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

7,002 Comments

  1. Tomi Engdahl says:

    Now just $59, is the Jetson Nano 2GB the perfect entry point to GPU-accelerated artificial intelligence at the edge?

    Gareth Halfacree goes hands-on with the NVIDIA developer kit: https://bit.ly/3lfrDTP

    Reply
  2. Tomi Engdahl says:

    InAccel Promises ML Performance Boosts with No Code Changes, Courtesy of FPGA Acceleration
    https://www.hackster.io/news/inaccel-promises-ml-performance-boosts-with-no-code-changes-courtesy-of-fpga-acceleration-6834cfb9038a

    Company’s Accelerated Machine Learning Studio allows existing code to be accelerated tenfold or more on automatically-managed FPGAs.

    Reply
  3. Tomi Engdahl says:

    NXP Launches AI Ethics Initiative
    https://www.eetimes.com/nxp-launches-ai-ethics-initiative/

    NXP has launched an AI ethics initiative intended to encourage the ethical development of AI systems in edge devices. The initiative, a framework of five key principles, is intended for NXP to use when developing AI applications or AI enabling technologies, but the company hopes to also set a good example for its customers.

    Edge AI systems today include all manner of devices that sense their environment and analyze the data in real time, on-device. This might be a smartphone using facial recognition to unlock itself, or home appliances that respond to the user’s voice commands. Many use NXP’s microcontrollers and application processors that are optimized for machine learning tasks.

    NXP started work on its AI ethics framework 18 months ago, following the model of the successful Charter of Trust for IoT Security, a cross-industry initiative founded in 2018. Input and insights were sought from engineers and from customers around the world.

    NXP’s five key principles for ethical AI systems are:

    Non-maleficence. Systems should not harm human beings and algorithmic bias should be minimized through ongoing research and data collection.
    Human autonomy. AI systems should preserve the autonomy of human beings and warrant freedom from subordination to — or coercion by — AI systems.
    Explainability and transparency. Vital to build and maintain trust of AI systems — users need to be aware they are interacting with AI and need the ability to retrace the system’s decisions.
    Continued attention and vigilance. To promote cross-industrial approaches to AI risk mitigation, foster multi-stakeholder networks to share new insights, best practices and information.
    Privacy and security by design. These factors must be considered from the start; they cannot be bolted on as an afterthought. Traditional software attack vectors must be addressed, but they alone are not sufficient. Strive to build new frameworks for next-gen AI/ML.

    Reply
  4. Tomi Engdahl says:

    Philip Winston / kmeme:
    A GPT-3 AI bot posing as a human went seemingly undetected for a week on r/AskReddit, answering one question per minute on various topics including suicide

    GPT-3 Bot Posed as a Human on AskReddit for a Week
    https://www.kmeme.com/2020/10/gpt-3-bot-went-undetected-askreddit-for.html

    The MIT Technology Review called GPT-3 shockingly good after it was released in June of this year. GPT-3 is not an AI entity or an agent, it has no reason or logic or memory.

    Instead it’s basically autocomplete on steroids: it does not just guess the word you are typing, it will write paragraphs upon paragraphs of what might plausibly come next after any “prompt” that you give it. Don’t like the result? Hit refresh and get an entirely new attempt.
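
    The “autocomplete on steroids” idea can be illustrated with a toy sketch: a word-level bigram model that repeatedly predicts the next word from the previous one. The corpus and sampling below are made up for illustration; GPT-3 does the same thing in spirit, only with 175 billion parameters instead of a lookup table.

    ```python
    import random
    from collections import defaultdict

    # Tiny training corpus; a real language model sees hundreds of gigabytes.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Count which word follows which word.
    next_words = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        next_words[a].append(b)

    def complete(prompt, length=5, seed=0):
        """Extend the prompt one word at a time, like autocomplete run repeatedly."""
        rng = random.Random(seed)
        words = prompt.split()
        for _ in range(length):
            candidates = next_words.get(words[-1])
            if not candidates:
                break
            words.append(rng.choice(candidates))
        return " ".join(words)

    print(complete("the"))
    ```

    Changing the seed is the lookup-table equivalent of hitting refresh: same prompt, a different plausible continuation.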

    Reply
  5. Tomi Engdahl says:

    UK passport photo checker shows bias against dark-skinned women
    https://www.bbc.com/news/technology-54349538

    Women with darker skin are more than twice as likely to be told their photos fail UK passport rules when they submit them online than lighter-skinned men, according to a BBC investigation.

    One black student said she was wrongly told her mouth looked open each time she uploaded five different photos to the government website.

    The passport application website uses an automated check to detect poor quality photos which do not meet Home Office rules. These include having a neutral expression, a closed mouth and looking straight at the camera.

    BBC research found this check to be less accurate on darker-skinned people.

    Ms Owusu said she managed to get a photo approved after challenging the website’s verdict, which involved writing a note to say her mouth was indeed closed.

    “If the algorithm can’t read my lips, it’s a problem with the system, and not with me.”

    Other reasons given for photos being judged to be poor quality included “there are reflections on your face” and “your image and the background are difficult to tell apart”

    “I understood the software was problematic – it was not my camera.

    Reply
  6. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    Google details how it is using AI to improve search, including indexing individual passages from webpages, allowing searches for songs by humming, and more — During a livestreamed event this afternoon, Google detailed the ways it’s applying AI and machine learning to improve the Google Search experience.

    Google details how it’s using AI and machine learning to improve search
    https://venturebeat.com/2020/10/15/google-details-how-its-using-ai-and-machine-learning-to-improve-search/

    Reply
  7. Tomi Engdahl says:

    Sarah Perez / TechCrunch:
    Google says Duplex has completed 1M+ bookings, from restaurant reservations to movie tickets, since launch and is piloting shopping and food ordering tasks

    Duplex, Google’s conversational AI, has updated 3M+ business listings since pandemic
    https://techcrunch.com/2020/10/15/duplex-googles-conversational-a-i-has-updated-3m-business-listings-since-pandemic/

    Reply
  8. Tomi Engdahl says:

    Brazilian researchers have developed a way to use #AI to combine ancient maps and modern satellite images to bring history alive.

    Use AI To Convert Ancient Maps Into Satellite-Like Images
    https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/ai-ancient-maps-satellite-images

    Reply
  9. Tomi Engdahl says:

    A radical new technique lets AI learn with practically no data
    “Less than one”-shot learning can teach a model to identify more objects than the number of examples it is trained on.
    https://www.technologyreview.com/2020/10/16/1010566/ai-machine-learning-with-tiny-data/

    Machine learning typically requires tons of examples. To get an AI model to recognize a horse, you need to show it thousands of images of horses. This is what makes the technology computationally expensive—and very different from human learning. A child often needs to see just a few examples of an object, or even only one, before being able to recognize it for life.

    In fact, children sometimes don’t need any examples to identify something.

    Now a new paper from the University of Waterloo in Ontario suggests that AI models should also be able to do this—a process the researchers call “less than one”-shot, or LO-shot, learning. In other words, an AI model should be able to accurately recognize more objects than the number of examples it was trained on. That could be a big deal for a field that has grown increasingly expensive and inaccessible as the data sets used become ever larger.

    If it’s possible to shrink the 60,000 images of the MNIST training set down to 10, why not squeeze them into five? The trick, they realized, was to create images that blend multiple digits together and then feed them into an AI model with hybrid, or “soft,” labels. (Think back to a horse and rhino having partial features of a unicorn.)

    “If you think about the digit 3, it kind of also looks like the digit 8 but nothing like the digit 7,” says Ilia Sucholutsky, a PhD student at Waterloo and lead author of the paper. “Soft labels try to capture these shared features. So instead of telling the machine, ‘This image is the digit 3,’ we say, ‘This image is 60% the digit 3, 30% the digit 8, and 10% the digit 0.’”

    With carefully engineered soft labels, even two examples could theoretically encode any number of categories. “With two points, you can separate a thousand classes or 10,000 classes or a million classes,” Sucholutsky says.
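
    Sucholutsky’s quote maps directly onto a data representation: a hard label is a one-hot vector, while a soft label is a probability distribution over classes. A minimal sketch of that difference, using his digit example:

    ```python
    import numpy as np

    # A hard label says "this image is the digit 3" as a one-hot vector.
    hard_label = np.zeros(10)
    hard_label[3] = 1.0

    # A soft label captures shared features, as in Sucholutsky's example:
    # "60% the digit 3, 30% the digit 8, and 10% the digit 0."
    soft_label = np.zeros(10)
    soft_label[3], soft_label[8], soft_label[0] = 0.6, 0.3, 0.1

    # Both are valid probability distributions over the 10 digit classes.
    assert np.isclose(hard_label.sum(), 1.0)
    assert np.isclose(soft_label.sum(), 1.0)

    # Training against a soft label typically uses cross-entropy, which
    # penalizes confusing 3 with 8 far less than confusing 3 with 7.
    def cross_entropy(label, prediction, eps=1e-12):
        return -np.sum(label * np.log(prediction + eps))
    ```

    The engineering challenge the paper tackles is choosing these distributions so carefully that a handful of blended examples carry the class boundaries of a much larger data set.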

    This is what the researchers demonstrate in their latest paper, through a purely mathematical exploration. They play out the concept with one of the simplest machine-learning algorithms, known as k-nearest neighbors (kNN), which classifies objects using a graphical approach.

    If you want to train a kNN model to understand the difference between apples and oranges, you must first select the features you want to use to represent each fruit. Perhaps you choose color and weight, so for each apple and orange, you feed the kNN one data point with the fruit’s color as its x-value and weight as its y-value. The kNN algorithm then plots all the data points on a 2D chart and draws a boundary line straight down the middle between the apples and the oranges. At this point the plot is split neatly into two classes, and the algorithm can now decide whether new data points represent one or the other based on which side of the line they fall on.
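
    The apples-versus-oranges setup above fits in a few lines of plain Python. The feature values here are invented for illustration (color as the x-value, weight in grams as the y-value), and classification is a majority vote among the k nearest training points:

    ```python
    import math

    # Made-up training data: (color, weight) points with fruit labels.
    train = [
        ((0.2, 150), "apple"), ((0.3, 160), "apple"), ((0.25, 140), "apple"),
        ((0.8, 200), "orange"), ((0.9, 210), "orange"), ((0.85, 190), "orange"),
    ]

    def knn_classify(point, k=3):
        """Vote among the k training points nearest to `point`."""
        nearest = sorted(train, key=lambda t: math.dist(point, t[0]))[:k]
        votes = [label for _, label in nearest]
        return max(set(votes), key=votes.count)

    print(knn_classify((0.28, 155)))  # lands in the apple cluster
    ```

    A soft-labeled variant would replace each string label with a probability distribution over classes and weight the votes accordingly, which is how two engineered points can, in principle, separate many classes.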

    While the idea of LO-shot learning should transfer to more complex algorithms, the task of engineering the soft-labeled examples grows substantially harder. The kNN algorithm is interpretable and visual, making it possible for humans to design the labels; neural networks are complicated and impenetrable, meaning the same may not be true. Data distillation, which works for designing soft-labeled examples for neural networks, also has a major disadvantage: it requires you to start with a giant data set in order to shrink it down to something more efficient.

    “The conclusion is depending on what kind of data sets you have, you can probably get massive efficiency gains,” he says.

    “Most significantly, ‘less than one’-shot learning would radically reduce data requirements for getting a functioning model built.” This could make AI more accessible to companies and industries that have thus far been hampered by the field’s data requirements. It could also improve data privacy, because less information would have to be extracted from individuals to train useful models.

    Sucholutsky emphasizes that the research is still early, but he is excited.

    People’s initial reaction is to say that the idea is impossible, he says. When they suddenly realize it isn’t, it opens up a whole new world.

    Reply
  10. Tomi Engdahl says:

    Anil Ananthaswamy / Knowable Magazine:
    A look at “neurosymbolic AI”, which combines techniques of deep neural networks and “good old-fashioned AI”, with comments from its proponents and critics — The unlikely marriage of two major artificial intelligence approaches has given rise to a new hybrid called neurosymbolic AI.

    AI’s next big leap
    https://knowablemagazine.org/article/technology/2020/what-is-neurosymbolic-ai

    The unlikely marriage of two major artificial intelligence approaches has given rise to a new hybrid called neurosymbolic AI. It’s taking baby steps toward reasoning like humans and might one day take the wheel in self-driving cars.

    Reply
  11. Tomi Engdahl says:

    Bot Generated Fake Nudes Of Over 100,000 Women Without Their Knowledge, Says Report
    https://www.forbes.com/sites/siladityaray/2020/10/20/bot-generated-fake-nudes-of-over-100000-women-without-their-knowledge-says-report/?utm_campaign=forbes&utm_source=facebook&utm_medium=social&utm_term=Gordie/#676f7264696

    Around 104,852 women had their photos uploaded to a bot on the WhatsApp-like text messaging app Telegram; the photos were then used to generate computer-generated fake nudes of the women without their knowledge or consent, researchers revealed on Tuesday.

    These so-called “deepfake” images were created by an ecosystem of bots on the messaging app Telegram that could generate fake nudes on request, according to a report released by Sensity, an intelligence firm that specializes in deepfakes.

    The Telegram channels the researchers examined were made up of 101,080 members worldwide, with 70% coming from Russia and other eastern European countries.

    The bot that generated these images received significant advertising on the Russian social media website VK.

    70%. That’s the percentage of the bot’s victims who were private individuals, not celebrities or influencers. The fake nudes of these women were generated using photos that were either taken from social media or private material. This is unlike deepfake non-consensual pornographic videos, where celebrities are often the target.

    A deepfake bot on Telegram is violating women by forging nudes from regular pics
    https://www.cnet.com/news/a-deepfake-bot-on-telegram-is-violating-women-by-forging-nudes-from-regular-pics/

    Free, easy and requiring just a single still photo, the deepfake bot has produced more than 100,000 fake pornographic images — and that’s just the ones posted publicly online.

    Reply
  12. Tomi Engdahl says:

    Thousands Of Women Have No Idea A Telegram Network Is Sharing Fake Nude Images Of Them
    https://www.buzzfeednews.com/article/janelytvynenko/telegram-deepfake-nude-women-images-bot

    A new AI bot primarily spreading across Russia and Eastern Europe has created fake nude images of more than 680,000 women.

    Reply
  13. Tomi Engdahl says:

    How human-AI collaboration could kickstart the economy
    https://cybernews.com/editorial/how-human-ai-collaboration-could-kickstart-the-economy/?utm_source=facebook&utm_medium=cpc&utm_campaign=rm&utm_content=human_ai_collaboration&fbclid=IwAR2ZXpqXZMdBQKx8x7D2XeaStTPUtZzxzZ4nHhbO4lSMSOm3vwoLFDqfCaI

    Back in the early 80’s, offices were still full of typewriters rather than computers. Visionaries such as Steve Jobs preached that personal computers would enable teams to work more efficiently and create new opportunities. But many believed that the arrival of technology would be at the expense of their jobs. We all know how that story ended.

    Even 25 years ago, cloud architects, UX designers, app developers, social media, and community managers didn’t exist. But once again, we have the same concerns that the automated workplace will replace human workers. The reality is that human creativity, collaboration, and reflection will always take centre stage.

    We need to change the narrative and begin building a more resilient AI-ready workforce.

    It’s time for human employees to break free from a robotic existence and get back to doing what they do best, being human. Content creation, communication, empathy, critical thinking, management, and strategy are just a few human skills where machines cannot compete.

    AI is serendipity. It is here to liberate us from routine jobs, and it is here to remind us what it is that makes us human.

    As humans, we are often guilty of underestimating our strengths and what comes naturally to us. We beat ourselves up for not remembering telephone numbers or recalling information like we used to before we relied on our smartphones for everything. Sure, machines can now beat most of us in a game of chess or Go, but they cannot understand the intonation, sentiment, or emotion of a human opponent.

    Machines are also hopeless at navigating unfamiliar landscapes or manipulating objects. Contrary to what you read in the articles dominating your newsfeeds, machines cannot replace employees. But there is an increasing realization that human and machine intelligence perfectly complement each other.

    What we perceive as difficult problems are easy for machines to solve in a fraction of the time. Equally, what we find easy will make little or no sense to our new virtual colleagues. Ironically, the more we lean on technology to automate tasks, the more we increase the value of the human skills required to solve problems that machines cannot. Welcome to a brave new world of collaborative intelligence.

    Can Human-AI collaboration prevent us repeating the same mistakes?
    As humans, we are guilty of creating more problems than solutions with AI technology. Predictive policing, the weaponization of space, and misuse of facial recognition are but a few examples that highlight the fragility of the human condition.

    Yet AI retains the potential to do good in the world and move humanity forward once again.

    We are already witnessing what happens when machines learn to be biased and prejudiced from the data that we feed them. Typically, this results in questionable behaviour.

    It is in our reach to build networks of humans and machines that sense, think, and collaborate with greater efficacy than either humans or AI systems alone.

    The World Economic Forum

    Reply
  14. Tomi Engdahl says:

    Activists Build Facial Recognition to ID Cops Who Hide Their Badges
    https://futurism.com/the-byte/activists-build-facial-recognition-id-cops-hide-badges

    In order to hold police accountable when they try to hide their identities, a growing number of activists are developing facial recognition tools that identify cops, The New York Times reports — a striking inversion of the way cops tend to use facial recognition on protestors and suspects.

    It’s a satisfying role reversal: police hiding their identities while cracking down on protests are being outed by the same invasive technology that they use to surveil the populace.

    Building these tools has become simple, the NYT reports, due to increasingly common off-the-shelf software. The real challenge, activists say, is finding enough images of local police to train the algorithm. They’ve had luck on social media, they told the newspaper.

    “For a while now, everyone was aware the big guys could use this to identify and oppress the little guys, but we’re now approaching the technological threshold where the little guys can do it to the big guys,” Andrew Maximov, a developer working on a similar project, told the NYT. “It’s not just the loss of anonymity. It’s the threat of infamy.”

    Reply
  15. Tomi Engdahl says:

    MonoEye Uses a Single Wide-Angle GoPro Chest Camera to Perform Accurate Human Motion Capture
    https://www.hackster.io/news/monoeye-uses-a-single-wide-angle-gopro-chest-camera-to-perform-accurate-human-motion-capture-08fa16235b7a

    A GoPro fitted with a 280-degree wide-angle lens, feeding three neural networks, proves more than a match for traditional mocap systems.

    “Our method estimates accurate 3D pose that is comparable with third-person view-based monocular motion capture methods. The combination of RGB images and deep learning methods provide stable results even in outdoor environments. We can estimate 3D human pose with camera orientation information [...] by combining the prediction results of CameraPoseNet and BodyPoseNet. We can distinguish the motions that have the same position and different camera directions, that previous portable motion capture systems are not able to distinguish.”

    MonoEye: Multimodal Human Motion Capture System Using A Single Ultra-Wide Fisheye Camera
    https://dl.acm.org/doi/10.1145/3379337.3415856

    Reply
  16. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    Microsoft, IBM, Nvidia, and others released an open framework to help security analysts detect, counter, and remediate threats against machine learning systems

    Microsoft and MITRE release framework to help fend off adversarial AI attacks
    https://venturebeat.com/2020/10/22/microsoft-and-mitre-release-framework-to-help-fend-off-adversarial-ai-attacks/

    Microsoft, the nonprofit MITRE Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch today released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts to detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with MITRE to build a schema that organizes the approaches employed by malicious actors in subverting machine learning models, bolstering monitoring strategies around organizations’ mission-critical systems.

    According to a Gartner report, through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems. Despite these reasons to secure systems, Microsoft claims its internal studies find most industry practitioners have yet to come to terms with adversarial machine learning. Twenty-five out of the 28 businesses responding to the Seattle company’s recent survey indicated they don’t have the right tools in place to secure their machine learning models.

    The Adversarial ML Threat Matrix — which was modeled after the MITRE ATT&CK Framework — aims to address this with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE vetted to be effective against production systems. With input from researchers at the University of Toronto, Cardiff University, and the Software Engineering Institute at Carnegie Mellon University, Microsoft and MITRE created a list of tactics that correspond to broad categories of adversary action
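
    One of the attack classes mentioned above, adversarial samples, can be sketched in a few lines. This is a toy illustration (invented linear model, random data), not anything from the threat matrix itself: it perturbs an input along the gradient sign direction, the idea behind the fast gradient sign method, until a linear classifier flips its prediction.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=20)             # weights of a linear classifier: sign(w @ x)
    x = w + 0.1 * rng.normal(size=20)   # an input the model classifies as +1

    def predict(v):
        return 1 if w @ v > 0 else -1

    assert predict(x) == 1

    # For a linear model, the direction that most quickly lowers the score is
    # -sign(w). Step just far enough in that direction to cross the boundary.
    eps = 1.1 * (w @ x) / np.abs(w).sum()
    x_adv = x - eps * np.sign(w)

    print(predict(x), predict(x_adv))   # the perturbed copy is misclassified
    ```

    Training-data poisoning and model theft, the matrix’s other headline attack classes, target the pipeline around the model rather than individual inputs like this.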

    Reply
  17. Tomi Engdahl says:

    Can’t afford an AI-accelerating Nvidia Jetson Nano? Open-source emulator lets you prototype Python apps for it
    Get a feel for the gizmo’s programming environment
    https://www.theregister.com/2020/10/27/nvidia_jetson_emulator/

    If you’ve been thinking about playing with an Nvidia single-board computer for an AI task, but you’re not quite ready to part with your cash for something like the Jetson Nano just yet, here’s an application-level emulator of the hardware you can tinker with.

    It’s the Jetson AI-Computer Emulator, an open-source project created by machine-learning software engineer Tea Vui Huang.

    https://pypi.org/project/jetson-emulator/

    Reply
  18. Tomi Engdahl says:

    Jetson Emulator Gives Students A Free AI Lesson
    https://hackaday.com/2020/10/26/jetson-emulator-gives-students-a-free-ai-lesson/

    With the Jetson Nano, NVIDIA has done a fantastic job of bringing GPU-accelerated machine learning to the masses. For less than the cost of a used graphics card, you get a turn-key Linux computer that’s ready and able to handle whatever AI code you throw at it. But if you’re trying to set up a lab for 30 students, the cost of even relatively affordable development boards can really add up.

    Which is why [Tea Vui Huang] has developed jetson-emulator. This Python library provides a work-alike environment to NVIDIA’s own “Hello AI World” tutorials designed for the Jetson family of devices, with one big difference: you don’t need the actual hardware. In fact, it doesn’t matter what kind of computer you’ve got; with this library, anything that can run Python 3.7.9 or better can take you through NVIDIA’s getting started tutorial.

    So what’s the trick? Well, if you haven’t guessed already, it’s all fake. Obviously it can’t actually run GPU-accelerated code without a GPU, so the library [Tea] has developed simply pretends. It provides virtual images and even “live” camera feeds to which randomly generated objects have been assigned.

    Reply
  19. Tomi Engdahl says:

    Who’s got the best-performing AI? MLPerf has come up with new sets of benchmarks that measure how quickly an already-trained neural network can accomplish its task with new data. And, as usual, NVIDIA dominated the proceedings

    https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/new-ai-inferencing-records

    Reply
  20. Tomi Engdahl says:

    Neural network racing cars around a track
    https://m.youtube.com/watch?v=wL7tSgUpy8w

    Teaching a neural network to drive a car. It’s a simple network with a fixed number of hidden nodes (no NEAT), and no bias. Yet it manages to drive the cars fast and safe after just a few generations. Population is 650. The network evolves through random mutation (no cross-breeding). Fitness evaluation is currently done manually.
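
    The evolution loop described above, with mutation only and no crossover, is easy to sketch. The fitness function in the video scores driving; here it is a stand-in (distance to a hidden target vector), and all the numbers are chosen for illustration:

    ```python
    import random

    random.seed(1)
    TARGET = [0.5, -0.2, 0.8, 0.1]  # stand-in for "drives fast and safe"

    def fitness(weights):
        # Higher is better; 0 is a perfect match to the target.
        return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

    def mutate(weights, rate=0.1):
        # Random mutation only, no cross-breeding, as in the video.
        return [w + random.gauss(0, rate) for w in weights]

    population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                       # keep the fittest
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(40)]     # refill by mutation

    best = max(population, key=fitness)
    ```

    Because selection is manual in the video, the `fitness` call there is literally a human picking which cars looked best each generation.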

    Reply
  21. Tomi Engdahl says:

    AI Camera Ruins Soccer Game For Fans After Mistaking Referee’s Bald Head For Ball
    https://www.iflscience.com/technology/ai-camera-ruins-soccar-game-for-fans-after-mistaking-referees-bald-head-for-ball/

    The AI camera appeared to mistake the man’s bald head for the ball for a lot of the match, repeatedly swinging back to follow the linesman instead of the actual game. Many viewers complained they missed their team scoring a goal because the camera “kept thinking the Lino bald head was the ball,” and some even suggested the club would have to provide the linesman with a toupee or hat.

    With no fans allowed in the stadium due to Covid-19 restrictions, the fans of Inverness Caledonian Thistle FC and their opponents Ayr United could only watch via the cameras, and so were treated mostly to a view of the linesman’s head while the exciting moments of the match occurred off-camera, though some fans saw this as a bonus given the usual quality of performance.

    Reply
  22. Tomi Engdahl says:

    Programming language Python is a big hit for machine learning. But now it needs to change
    Despite its popularity, Python could become limited to data science alone on its current trajectory, say two experts.
    https://www.zdnet.com/article/programming-language-python-is-a-big-hit-for-machine-learning-but-now-it-needs-to-change/

    Reply
  23. Tomi Engdahl says:

    If at First You Don’t Succeed…
    LAND is a new machine learning approach that teaches robots to learn from their mistakes.
    https://www.hackster.io/news/if-at-first-you-don-t-succeed-fdd75a23f98e

    Reply
  24. Tomi Engdahl says:

    This toilet recognizes your butthole and uploads photos to the cloud
    https://mashable.com/article/smart-toilet-analprint-scan/

    Reply
  25. Tomi Engdahl says:

    AI has cracked a key mathematical puzzle for understanding our world
    https://www.technologyreview.com/2020/10/30/1011435/ai-fourier-neural-network-cracks-navier-stokes-and-partial-differential-equations/?utm_term=Autofeed&utm_campaign=site_visitor.unpaid.engagement&utm_medium=tr_social&utm_source=Facebook#Echobox=1604049241

    Partial differential equations can describe everything from planetary motion to plate tectonics, but they’re notoriously hard to solve.

    Reply
  26. Tomi Engdahl says:

    South Park creators have a new political satire series with some of the best AI-generated deepfakes on the internet yet
    https://www.theregister.com/2020/11/02/in_brief_ai/

    Not only is it pretty funny, the quality of the technology is shockingly good. The transitions and subtle facial expressions are smooth, apart from Zuckerberg who appears as robotic as ever, making it all the more realistic really.

    Reply
  27. Tomi Engdahl says:

    Conversational AI coming to a store near you
    https://cybernews.com/editorial/conversational-ai-coming-to-a-store-near-you/

    The arrival of COVID-19 has forced companies to find innovative ways to connect with customers while socially distanced to thrive in a contactless world. Retailers, restaurants, and salons are just a few prominent examples of businesses that have been challenged to think differently and find more creative ways to communicate and serve their customers.

    Reply
  28. Tomi Engdahl says:

    Learn the Basics of Training a Raspberry Pi Deep Learning Model on CGI Data
    Hugo Ponte explains how an object recognition model was trained using synthetic data while avoiding the sim2real gap.
    https://www.hackster.io/news/learn-the-basics-of-training-a-raspberry-pi-deep-learning-model-on-cgi-data-cb7c8a7a0146

    Reply
  29. Tomi Engdahl says:

    Must see: it gets better and better as you watch (or scarier and scarier), and it’s fact-based.

    From Essays to Coding, This New A.I. Can Write Anything
    https://m.youtube.com/watch?v=Te5rOTcE4J4

    Reply
  30. Tomi Engdahl says:

    Roboticists Make Robot AI More “Spontaneous” — By Implementing Chaotic Itinerancy
    Inspired by the chaotic operation of animal brains, a team from the University of Tokyo have come up with a “chaotic neural network.”
    https://www.hackster.io/news/roboticists-make-robot-ai-more-spontaneous-by-implementing-chaotic-itinerancy-f657e44a7445

    Reply
  31. Tomi Engdahl says:

    Scientists Create an AI From a Sheet of Glass
    https://futurism.com/scientists-create-ai-glass

    It turns out that you don’t need a computer to create an artificial intelligence. In fact, you don’t even need electricity.

    In an extraordinary bit of left-field research, scientists from the University of Wisconsin–Madison have found a way to create artificially intelligent glass that can recognize images without any need for sensors, circuits, or even a power source — and it could one day save your phone’s battery life.

    When the team then wrote down a number, the light reflecting off the digit would enter one side of the glass. The bubbles and impurities would scatter the lightwaves in certain ways depending on the number until they reached one of 10 designated spots — each corresponding to a different digit — on the opposite side of the glass.

    The glass could essentially tell the researcher what number it saw — at the speed of light and without the need for any traditional computing power source.

    “We’re accustomed to digital computing, but this has broadened our view,” Yu said. “The wave dynamics of light propagation provide a new way to perform analog artificial neural computing.”

    “We could potentially use the glass as a biometric lock, tuned to recognize only one person’s face,” Yu said. “Once built, it would last forever without needing power or internet, meaning it could keep something safe for you even after thousands of years.”

    Reply
  32. Tomi Engdahl says:

    €8 Million VEDLIoT Project Looks to Use Machine Learning to Improve the Internet of Things
    With 12 partners already selected, the European Commission-funded project has room for up to 10 more IoT projects.
    https://www.hackster.io/news/8-million-vedliot-project-looks-to-use-machine-learning-to-improve-the-internet-of-things-7a21f15a3c61

    Reply
