3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” Scriffignano says.

7,002 Comments

  1. Tomi Engdahl says:

    New AI Can Detect Emotion With Radio Waves
    https://www.defenseone.com/technology/2021/02/new-ai-can-detect-emotion-radio-waves/171863/

    There are national security and privacy implications to an experimental UK neural network that deciphers how people respond to emotional stimuli.

  2. Tomi Engdahl says:

    AI Teaches Itself Diplomacy
    https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-learns-diplomacy-gaming

    Now that DeepMind has taught AI to master the game of Go—and furthered its advantage in chess—they’ve turned their attention to another board game: Diplomacy. Unlike Go, it is a seven-player game that requires a combination of competition and cooperation, and on each turn players move simultaneously, so they must reason about what others are reasoning about them, and so on.

    “It’s a qualitatively different problem from something like Go or chess,”

  3. Tomi Engdahl says:

    Science of Love app turns to hate for machine-learning startup in Korea after careless whispers of data
    Plus: Dodgy-looking AI coding bootcamp and a murderous conversation with OpenAI’s GPT-3?
    https://www.theregister.com/2021/02/15/in_brief_ai/?utm_source=dlvr.it&utm_medium=facebook

  4. Tomi Engdahl says:

    Mila Jasper / Nextgov:
    A report from the National Security Commission on AI, which is chaired by Eric Schmidt, finds that the US is not fully prepared for AI competition with China

    U.S. Unprepared for AI Competition with China, Commission Finds
    https://www.nextgov.com/emerging-tech/2021/03/us-unprepared-ai-competition-china-commission-finds/172377/

    Retaining any edge will take White House leadership and a substantial investment, according to the National Security Commission on Artificial Intelligence.

    The National Security Commission on Artificial Intelligence is out with its comprehensive final report, which recommends a path forward for ensuring U.S. superiority in AI and calls for the Defense Department and the intelligence community to become “AI-ready” by 2025.

    NSCAI on Monday during a public meeting voted to approve its final report, which will also be sent to Congress. The report culminates two years of work that began after the 2019 National Defense Authorization Act established the commission to review advances in AI, machine learning and associated technologies.

    “The bottom line … is we don’t feel this is the time for incremental toggles to federal research budgets or adding a few new positions in the Pentagon for Silicon Valley technologists,” Commission Vice Chair Robert Work, former deputy secretary of defense, said during the meeting. “Those just won’t cut it. This will be expensive and requires significant change in mindset at the national, and agency, and Cabinet levels. America needs White House leadership, Cabinet member action, and bipartisan congressional support to win the AI competition and the broader technology competition.”

    The report lays out recommendations—along with detailed blueprints for action—on 16 different topics under two main umbrellas: defense in the AI era and winning the technology competition. Commissioners also identified four main pillars of interest orienting their recommendations: leadership, talent, hardware and innovation investment.

  5. Tomi Engdahl says:

    Physicist creates AI algorithm that may prove reality is a simulation
    https://bigthink.com/surprising-science/physicist-creates-ai-algorithm-prove-reality-simulation?utm_medium=Social&facebook=1&utm_source=Facebook#Echobox=1614559122

    A physicist creates an AI algorithm that predicts natural events and may prove the simulation hypothesis.

    Princeton physicist Hong Qin creates an AI algorithm that can predict planetary orbits.
    The scientist partially based his work on the simulation hypothesis, which holds that reality is a simulation.
    The algorithm is being adapted to predict the behavior of plasma and can be applied to other natural phenomena.

    A scientist devised a computer algorithm which may lead to transformative discoveries in energy and whose very existence raises the likelihood that our reality could actually be a simulation.

  6. Tomi Engdahl says:

    DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs. We’ve found that it has a diverse set of capabilities, including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images.

    https://openai.com/blog/dall-e/

  7. Tomi Engdahl says:

    … A team of researchers at CSIRO’s Data61, the data and digital arm of Australia’s national science agency, devised a systematic method of finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system called a recurrent neural network and deep reinforcement learning. To test their model, they carried out three experiments in which human participants played games against a computer.

    AI can now learn to manipulate human behaviour
    https://theconversation.com/ai-can-now-learn-to-manipulate-human-behaviour-155031

    In a series of experiments, Australian researchers showed how machines can find vulnerabilities in human decision-making and exploit them to influence our behaviour.

  8. Tomi Engdahl says:

    New research indicates the whole universe could be a giant neural network
    All the universe is a neural network, and all the humans merely nodes
    https://thenextweb.com/neural/2021/03/02/new-research-indicates-the-whole-universe-could-be-a-giant-neural-network/

  9. Tomi Engdahl says:

    “We’ll never have true AI without first understanding the brain”
    https://www.technologyreview.com/2021/03/03/1020247/artificial-intelligence-brain-neuroscience-jeff-hawkins/

    Neuroscientist and tech entrepreneur Jeff Hawkins claims he’s figured out how intelligence works—and he wants every AI lab in the world to know about it.

  10. Tomi Engdahl says:

    This channel shows the cutting edge of AI. Making an instant 3D model from a 2D photo is pretty much a piece of cake at this moment. https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg

  11. Tomi Engdahl says:

    How AI Will Impact The Future Of Work And Life
    https://trib.al/BePtiUd

    AI, or artificial intelligence, seems to be on the tip of everyone’s tongue these days. While I’ve been aware of this major trend in tech development for a while, I’ve noticed AI appearing more and more as one of the most in-demand areas of expertise for job seekers.

  12. Tomi Engdahl says:

    #Deepfakes aren’t sophisticated enough yet to fool the vast majority of us, but deepfake innovations can be co-opted by bad actors to mess with other kinds of #machinelearning applications.

    Improved Technology for Deepfakes Highlights a Supply Chain Problem
    https://spectrum.ieee.org/riskfactor/telecom/security/deepfakes-supply-chain

    Every time a realistic-looking deepfake video makes the rounds—and lately, it feels like there is one every few days—there are warnings that the technology has advanced to the extent that these videos generated by artificial intelligence will be used in disinformation and other attacks.

    Typically, deepfake videos are generated by putting a person’s face onto the body of someone else, with the facial movements manipulated to fit the original video using artificial intelligence. The technology isn’t yet sophisticated enough to pass as real under careful viewing, but it is improving rapidly, creating more opportunities for malicious actors to co-opt these applications for their own purposes.

    “There is not a lot of harm yet [with deepfakes], but you can envision how this tech might be used in the future for other kinds of attacks, as the technology matures,” Sherman said at a recent Ai4 Cybersecurity 2021 Summit.

    The risk isn’t hypothetical. Back in 2019, an executive at a United Kingdom-based energy company received a phone call from his boss in Germany instructing him to wire €200,000 (US$220,000) to a Hungarian supplier within the hour. The call had actually been deepfake audio.

    While it was bad that the company lost money, the damage wasn’t catastrophic. And that is what Sherman worries about. Currently, generating deepfake videos requires a good deal of technical expertise, time, processing power, and data, so it is still out of reach of the average user. Typically, transferring a person’s face onto the video of another person involves collecting thousands of pictures of both people, encoding the images using a deep learning neural network, and calculating features. Transferring the face of a person onto a video of another person could easily wind up involving 175 million parameters and millions of updates, Sherman said.
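
    The architecture behind that face transfer is worth sketching. Below is a deliberately minimal illustration of the shared-encoder, two-decoder autoencoder idea that underlies classic face-swap tools; the class name, layer shapes, and sizes are invented for illustration, and real systems use convolutional networks at far larger scale.

        # Minimal sketch (assumed names and sizes): one shared encoder learns
        # identity-independent face features; one decoder per person learns to
        # reconstruct that person. A swap encodes a frame of person A and
        # decodes it with person B's decoder.
        import torch
        import torch.nn as nn

        class FaceSwapAutoencoder(nn.Module):
            def __init__(self, dim=64 * 64 * 3, latent=256):
                super().__init__()
                self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(dim, latent), nn.ReLU())
                self.decoder_a = nn.Sequential(nn.Linear(latent, dim), nn.Sigmoid())
                self.decoder_b = nn.Sequential(nn.Linear(latent, dim), nn.Sigmoid())

            def forward(self, x, person):
                z = self.encoder(x)                             # shared features
                decoder = self.decoder_a if person == "a" else self.decoder_b
                return decoder(z).view(x.shape)                 # rebuilt as that person

        # Training reconstructs each person through their own decoder; the
        # "deepfake" step feeds person A's frames through decoder_b.
        model = FaceSwapAutoencoder()
        fake = model(torch.rand(1, 3, 64, 64), person="b")

    Even this toy has roughly nine million weights, which hints at why production-quality swaps reach the 175 million parameters Sherman describes.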

  13. Tomi Engdahl says:

    Clever Neural Network Takes a Leap Towards Real-Time 3D Holography — on a Smartphone
    https://www.hackster.io/news/clever-neural-network-takes-a-leap-towards-real-time-3d-holography-on-a-smartphone-c40940e84f41

    New system can use depth data from cameras and LiDAR sensors, increasingly common on smartphones, to generate holograms in near-real-time.

  14. Tomi Engdahl says:

    Next Raspberry Pi CPU Will Have Machine Learning Built In
    By Les Pounder
    Better machine learning is on the horizon
    https://www.tomshardware.com/news/raspberry-pi-pico-machine-learning-next-chip

  15. Tomi Engdahl says:

    Deep coding: are Cloud ML and Microsoft DeepCoder the first step toward automated coding?
    https://www.spindox.it/en/blog/deep-coding-ai-2/

    “Deep” is fashionable today: we have “deep” learning, “deep” vision, “deep” writing… so why not deep coding? Developing artificial intelligence software today can be a long process, so why not automate coding? Developing machines that can develop themselves with deep coding is the last frontier of automation, perhaps the definitive one.

  16. Tomi Engdahl says:

    Over the next few decades, AI is predicted to be the most significant commercial opportunity in the world—for companies and nations both. AI could advance the global GDP by 14 percent by 2030—$14 to $15 trillion. That’s no chump change—which is why despite the glitches of AI adoption, we need to keep moving ahead if we want in on the action. How do we do it? Start with the basics. Make sure you are entirely digitized so that you can pull and utilize data across departments. Make sure your AI projects are scalable so they can grow and spread throughout the company. And lastly, make sure that you have a cohesive AI strategy in place. At this point in digital transformation, you simply can’t advance without one.

    Six Reasons Why We Haven’t Seen Full AI Adoption
    https://www.forbes.com/sites/danielnewman/2019/03/12/6-reasons-why-we-havent-seen-full-ai-adoption/amp/?__twitter_impression=true

    On one hand, we know AI is the future of business. After all, manpower simply isn’t fast enough to keep up with the pace of consumer demand. That said, there’s a big difference between knowing AI is the future and actually implementing AI within your business successfully. That latter part—AI adoption—is where many companies are finding themselves stuck.

  17. Tomi Engdahl says:

    Researchers Blur Faces That Launched a Thousand Algorithms
    Managers of the ImageNet data set paved the way for advances in deep learning. Now they’ve taken a big step to protect people’s privacy
    https://www.wired.com/story/researchers-blur-faces-launched-thousand-algorithms/?utm_medium=social&utm_brand=wired&utm_source=facebook&utm_social-type=owned&mbid=social_facebook

  18. Tomi Engdahl says:

    Sam Altman:
    AI will drive a new tech revolution, applying Moore’s law to everything; to rebalance the newly created wealth, society should favor taxing capital over labor — My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe.

    Moore’s Law for Everything
    https://moores.samaltman.com/

  19. Tomi Engdahl says:

    Fears of ‘digital dictatorship’ as Myanmar deploys AI
    https://news.trust.org/item/20210318130045-zsgja

    Protesters fear they are being tracked by CCTV cameras armed with facial recognition technology

    * Cameras with AI technology can scan faces and licence plates, and alert police

    * Most of the equipment is from Chinese tech firm Huawei

  20. Tomi Engdahl says:

    This New AI Exists For The Sole Purpose of Arguing With Humans
    https://www.sciencealert.com/this-new-ai-exists-for-the-sole-purpose-of-arguing-with-humans

    It can’t beat us. Yet. But an artificial intelligence (AI) program developed by scientists at IBM is so good at debating against humans that it may just be a matter of time before we can’t compete.

    Project Debater, an autonomous debating system that’s been in development for several years, is capable of arguing against humans in a meaningful way, taking a side in a debate on a chosen topic, and making a strong case for why its viewpoint is the right one.

    https://www.research.ibm.com/artificial-intelligence/project-debater/

  21. Tomi Engdahl says:

    Amazon Delivery Drivers Forced to Sign ‘Biometric Consent’ Form or Lose Job
    https://www.vice.com/en/article/dy8n3j/amazon-delivery-drivers-forced-to-sign-biometric-consent-form-or-lose-job

    The new cameras, which are being implemented nationwide, use artificial intelligence to access drivers’ location, movement, and biometric data.

    Amazon delivery drivers nationwide have to sign a “biometric consent” form this week that grants the tech behemoth permission to use AI-powered cameras to access drivers’ location, movement, and biometric data.

    If the company’s delivery drivers, who number around 75,000 in the United States, refuse to sign these forms, they lose their jobs.

  22. Tomi Engdahl says:

    Writing code today typically requires a keyboard and mouse. Why? #TalonVoice and Serenade AI say the human voice is all that’s needed to program. Coding-by-voice breaks down barriers of access, injury and abilities – and may just be the interface to beat.

    https://spectrum.ieee.org/computing/software/programming-by-voice-may-be-the-next-frontier-in-software-development

  23. Tomi Engdahl says:

    An AI bot created a version of Eminem’s ‘My Name Is’—and it’s frighteningly spot-on
    https://conversations.indy100.com/eminem-ai-bot-my-name-is-song-2021?utm_content=Echobox&utm_medium=Social&utm_source=Facebook#Echobox=1616257793

    Who would have thought that the voice behind the clever revamp is not a person!?

  24. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    AWS announces the general availability of Lookout for Metrics, a service for businesses that uses machine learning to analyze metrics and performance indicators

    Amazon launches Lookout for Metrics, an AWS service to monitor business performance
    https://venturebeat.com/2021/03/25/amazon-launches-lookout-for-metrics-an-aws-service-to-monitor-business-performance/

    Amazon today announced the general availability of Lookout for Metrics, a fully managed service that uses machine learning to monitor key factors impacting the health of enterprises. Launched at re:Invent 2020 last December in preview, Lookout for Metrics can now be accessed by most Amazon Web Services (AWS) customers via the AWS console and through supporting partners.

    Organizations analyze metrics and key performance indicators to help their businesses run effectively and efficiently. Traditionally, business intelligence tools are used to manage this data across disparate sources, but identifying anomalies in it is challenging. Traditional rule-based methods look for data that falls outside of fixed numerical ranges, and those ranges are static, unable to adapt to conditions like the time of day, the day of the week, seasons, or business cycles.
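
    The difference is easy to show in code. The following is only a schematic contrast, not the Lookout for Metrics API: a fixed rule applies one range at all times, while even a simple learned baseline conditions the expected range on context such as the hour of day.

        # Schematic contrast (not the AWS API): a static threshold vs. a
        # context-aware baseline for flagging anomalous metric values.
        import statistics

        def static_rule(value, lo=0.0, hi=200.0):
            # Same fixed range at 3 a.m. and at peak traffic.
            return value < lo or value > hi

        def seasonal_rule(value, hour, history_by_hour, k=3.0):
            # Expected range is learned per hour of day from past values.
            past = history_by_hour[hour]
            mu = statistics.mean(past)
            sigma = statistics.stdev(past)
            return abs(value - mu) > k * sigma

        history = {14: [95, 102, 98, 110, 105]}   # typical 2 p.m. values
        print(static_rule(130), seasonal_rule(130, 14, history))  # False True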

  25. Tomi Engdahl says:

    Deepfakes and ethics. “All three of the latest free services say they’re mostly being used for positive purposes: satire, entertainment and historical re-creations. The problem is, we already know there are plenty of bad uses for deepfakes, too.”

    https://www.washingtonpost.com/technology/2021/03/25/deepfake-video-apps/

  26. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    Facebook announces Fairness Flow, an internal toolkit for assessing AI bias, amid criticism of its work on AI biases and skepticism of the tool from experts

    AI experts warn Facebook’s anti-bias tool is ‘completely insufficient’
    https://venturebeat.com/2021/03/31/ai-experts-warn-facebooks-anti-bias-tool-is-completely-insufficient/

    Facebook today published a blog post detailing Fairness Flow, an internal toolkit the company claims enables its teams to analyze how some types of AI models perform across different groups. Developed in 2018 by Facebook’s Interdisciplinary Responsible AI (RAI) team in consultation with Stanford University, the Center for Social Media Responsibility, the Brookings Institute, and the Better Business Bureau Institute for Marketplace Trust, Fairness Flow is designed to help engineers determine how the models powering Facebook’s products perform across groups of people.

    The post pushes back against the notion that the RAI team is “essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization [on Facebook’s platform],” as MIT Tech Review’s Karen Hao wrote in an investigative report earlier this month. Hao alleges that the RAI team’s work — mitigating bias in AI — helps Facebook avoid proposed regulation that might hamper its growth. The piece also claims that the company’s leadership has repeatedly weakened or halted initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

    According to Facebook, Fairness Flow works by detecting forms of statistical bias in some models and data labels commonly used at Facebook. Here, Facebook defines “bias” as systematically applying different standards to different groups of people, like when Facebook-owned Instagram’s system disabled the accounts of U.S.-based Black users 50% more often than accounts of those who were white.

    Given a dataset of predictions, labels, group membership (e.g., gender or age), and other information, Fairness Flow can divide the data a model uses into subsets and estimate its performance. The tool can determine whether a model accurately ranks content for people from a specific group, for example, or whether a model under-predicts for some groups relative to others. Fairness Flow can also be used to compare annotator-provided labels with expert labels, which yields metrics showing the difficulty in labeling content from groups and the criteria used by the original labelers.
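
    Fairness Flow itself is internal to Facebook, so the sketch below is only a hypothetical illustration of the slicing step described above, with every name invented: split the predictions by group membership and compute the same metric on each slice.

        # Hypothetical sketch of per-group model assessment (not Facebook's
        # actual tool): slice predictions by group and compute one metric per
        # slice, exposing systematic under-prediction for some groups.
        from collections import defaultdict

        def metric_by_group(predictions, labels, groups, metric):
            buckets = defaultdict(lambda: ([], []))
            for pred, label, group in zip(predictions, labels, groups):
                buckets[group][0].append(pred)
                buckets[group][1].append(label)
            return {g: metric(preds, labs) for g, (preds, labs) in buckets.items()}

        # Mean predicted score per group; all four items share the true label 1.
        mean_score = lambda preds, labs: sum(preds) / len(preds)
        print(metric_by_group([0.9, 0.2, 0.8, 0.3], [1, 1, 1, 1],
                              ["a", "b", "a", "b"], mean_score))
        # {'a': 0.85, 'b': 0.25} -> the model under-predicts for group "b"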

    How we’re using Fairness Flow to help build AI that works better for everyone
    https://ai.facebook.com/blog/how-were-using-fairness-flow-to-help-build-ai-that-works-better-for-everyone

    AI plays an important role across Facebook’s apps — from enabling stunning AR effects, to helping keep bad content off our platforms, to directly improving the lives of people in our communities through our COVID-19 Community Help hub. As AI-powered services become ubiquitous in everyday life, it’s becoming even more important to understand how systems might affect people around the world and how to help ensure the best possible outcomes for them.

    Facebook and the AI systems we use have a broad set of potential impacts on and responsibilities related to important social issues from data privacy and ethics, to security, the spread of misinformation, polarization, financial scams, and beyond. As an industry, we are working to understand those impacts, and as a research community, we have only just begun the journey of developing the qualitative and quantitative engineering, research, and ethical toolkits for grappling with and addressing them.

    Fairness in AI — a broad concern across the industry — is one such area of social responsibility that can have an enormous impact on the people who use our products and services. One aspect of fairness in AI relates to how AI-driven systems may affect people in diverse groups, or groups that have been historically marginalized in society. This has been a long-standing focus for Facebook’s engineers and researchers.

  27. Tomi Engdahl says:

    Michael I. Jordan says that people are getting confused about the meaning of AI in discussions of technology trends

    Stop Calling Everything AI, Machine-Learning Pioneer Says
    https://spectrum.ieee.org/the-institute/ieee-member-news/stop-calling-everything-ai-machinelearning-pioneer-says

    Artificial-intelligence systems are nowhere near advanced enough to replace humans in many tasks involving reasoning, real-world knowledge, and social interaction. They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence, not engaging deeply and creatively, says Michael I. Jordan.

  28. Tomi Engdahl says:

    “People are getting confused about the meaning of AI in discussions of technology trends—that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans,” he says. “We don’t have that, but people are talking as if we do.”
    https://spectrum.ieee.org/the-institute/ieee-member-news/stop-calling-everything-ai-machinelearning-pioneer-says

  29. Tomi Engdahl says:

    In Computero: Hear How AI Software Wrote a ‘New’ Nirvana Song
    Computer-generated artificial tracks by Jimi Hendrix, Amy Winehouse and Jim Morrison highlight a new project that helps bring attention to mental illness
    https://www.rollingstone.com/music/music-features/nirvana-kurt-cobain-ai-song-1146444/

  30. Tomi Engdahl says:

    Death Of The Turing Test In An Age Of Successful AIs
    https://hackaday.com/2021/04/06/death-of-the-turing-test-in-an-age-of-successful-ais/

    IBM has come up with an automatic debating system called Project Debater that researches a topic, presents an argument, listens to a human rebuttal and formulates its own rebuttal. But does it pass the Turing test? Or does the Turing test matter anymore?

    The Turing test was first introduced in 1950, often cited as year one for AI research. It asks, “Can machines think?” Today we’re more interested in machines that can intelligently make restaurant recommendations, drive our car along the tedious highway to and from work, or identify the surprising-looking flower we just stumbled upon. These all fit the definition of AI as a machine that can perform a task normally requiring the intelligence of a human. Though as you’ll see below, Turing’s test wasn’t for intelligence, or even for thinking, but rather for determining a test subject’s sex.

    The Turing test as we know it today is to see if a machine can fool someone into thinking that it’s a human.

  31. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    Google open sources Lyra in beta, an audio codec that uses ML to create high-quality voice calls for low-bandwidth networks or archiving large amounts of speech — Google today open-sourced Lyra in beta, an audio codec that uses machine learning to produce high-quality voice calls.

    Google launches Lyra codec in beta to reduce voice call bandwidth usage
    https://venturebeat.com/2021/04/06/google-launches-lyra-codec-in-beta-to-reduce-voice-call-bandwidth-usage/

    Google today open-sourced Lyra in beta, an audio codec that uses machine learning to produce high-quality voice calls. The code and demo, which are available on GitHub, compress raw audio down to 3 kilobits per second for “quality that compares favorably to other codecs,” Google says.

    While mobile connectivity has steadily increased over the past decade, the explosive growth of on-device compute power has outstripped access to reliable, fast internet. Even in areas with reliable connections, the emergence of work-from-anywhere and telecommuting has stretched data limits. For example, early in the pandemic, nearly 90 of the top 200 U.S. cities saw internet speeds decline as bandwidth became strained, according to BroadbandNow.

    Lyra – enabling voice calls for the next billion users
    https://opensource.googleblog.com/2021/04/lyra-enabling-voice-calls-for-next-billion-users.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+GoogleOpenSourceBlog+%28Google+Open+Source+Blog%29

    The past year has shown just how vital online communication is to our lives. Never before has it been more important to clearly understand one another online, regardless of where you are and whatever network conditions are available. That’s why in February we introduced Lyra: a revolutionary new audio codec using machine learning to produce high-quality voice calls.

    As part of our efforts to make the best codecs universally available, we are open sourcing Lyra, allowing other developers to power their communications apps and take Lyra in powerful new directions. This release provides the tools needed for developers to encode and decode audio with Lyra, optimized for the 64-bit ARM Android platform, with development on Linux. We hope to expand this codebase and develop improvements and support for additional platforms in tandem with the community.

    Lyra’s architecture is separated into two pieces, the encoder and the decoder. When someone talks into their phone, the encoder captures distinctive attributes from their speech. These speech attributes, also called features, are extracted in chunks of 40ms, then compressed and sent over the network. It is the decoder’s job to convert the features back into an audio waveform that can be played out over the listener’s phone speaker. The features are decoded back into a waveform via a generative model. Generative models are a particular type of machine learning model well suited to recreating a full audio waveform from a limited number of features.

    The Lyra architecture is very similar to traditional audio codecs, which have formed the backbone of internet communication for decades. Whereas these traditional codecs are based on digital signal processing (DSP) techniques, the key advantage for Lyra comes from the ability of the generative model to reconstruct a high-quality voice signal.

    The Lyra code is written in C++ for speed, efficiency, and interoperability, using the Bazel build framework with Abseil and the GoogleTest framework for thorough unit testing. The core API provides an interface for encoding and decoding at the file and packet levels. The complete signal processing toolchain is also provided, which includes various filters and transforms. Our example app integrates with the Android NDK to show how to integrate the native Lyra code into a Java-based android app. We also provide the weights and vector quantizers that are necessary to run Lyra.
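
    The numbers imply a striking packet budget: at 3 kilobits per second with one feature packet per 40ms frame (25 packets a second), each packet carries only about 120 bits. The sketch below shows that encode/transmit/decode shape conceptually; the real Lyra API is C++, and every name here is a placeholder.

        # Conceptual sketch of a Lyra-style neural codec (placeholder names,
        # not the real C++ API): frame audio into 40 ms chunks, compress each
        # chunk to a tiny feature packet, and resynthesize each packet with a
        # generative model on the receiving side.
        import numpy as np

        SAMPLE_RATE = 16_000                    # assumed sample rate
        FRAME = int(0.040 * SAMPLE_RATE)        # 640 samples per 40 ms frame

        def encode(audio, extract_features, quantize):
            """Return one compressed feature packet (~120 bits at 3 kbps) per frame."""
            frames = [audio[i:i + FRAME] for i in range(0, len(audio) - FRAME + 1, FRAME)]
            return [quantize(extract_features(frame)) for frame in frames]

        def decode(packets, generative_model):
            """Condition a generative model on each packet to rebuild the waveform."""
            return np.concatenate([generative_model(packet) for packet in packets])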

  32. Tomi Engdahl says:

    New Uses For AI
    https://semiengineering.com/new-uses-for-ai/?cmid=06eb7317-4814-40ed-a978-7517d016962b

    Big improvements in power and performance stem from low-level intelligence.

  33. Tomi Engdahl says:

    Startup Transforms Compute-In-Memory
    https://www.eetimes.com/startup-transforms-compute-in-memory/

    At the TinyML Summit, early-stage analog AI accelerator startup Areanna presented the first public reveal of its architecture, disclosing some of the features of its 40 TOPS/W SRAM array-based design. The unusual design integrates analog-to-digital and digital-to-analog conversion within the memory array. Since ADCs and DACs typically take up the vast majority of silicon area and power budget for compute-in-memory designs, integrating this functionality within the memory array could be a game changer for analog compute technology.

    Areanna is led by former Tektronix analog design engineer Behdad Youssefi alongside another ex-Tek colleague, Patrick Satarzadeh. They remain the company’s only two full-time employees, working alongside a couple of part-time engineers and several advisors. The company has a test chip, with one computing tile based on its architecture, up and running.

  34. Tomi Engdahl says:

    A Scientist Taught AI to Generate Pickup Lines. The Results are Chaotic.
    “I Love You, I Love You, I Love You To The confines of death and disease, the legions of earth rejoices. Woe be to the world!”
    https://www.vice.com/en/article/z3vjey/artificial-intelligence-ai-pickup-lines

  35. Tomi Engdahl says:

    A CPU Will Do
    Algorithmic advancements allow commodity CPUs to outperform highly-specialized hardware in training large machine learning models.
    https://www.hackster.io/news/a-cpu-will-do-fa6558887d83

    Computer scientists at Rice University are working to democratize machine learning with some algorithmic advancements that can allow CPUs to perform better than GPUs in deep learning model training. The team built upon the open source sub-linear deep learning engine (SLIDE) that recasts model training as a problem that can be solved with sparse hash table based back-propagation, rather than matrix multiplication. In their research, they uncovered ways in which the current implementation of SLIDE is less than optimal, and have proposed modifications that exploit technological advances available to modern CPUs.
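
    A toy version of the hash-table trick makes the idea concrete. The following is a heavy simplification of SLIDE (see the paper for the real algorithm): neurons are indexed once by locality-sensitive hashes of their weight vectors, and each input then evaluates only the neurons that fall into its own hash bucket, rather than multiplying against the full weight matrix.

        # Heavily simplified SLIDE-style forward pass: locality-sensitive
        # hashing selects a sparse set of likely-active neurons, replacing
        # the dense matrix multiplication of an ordinary layer.
        import numpy as np

        rng = np.random.default_rng(0)
        d, n_neurons, n_bits = 64, 1024, 8
        W = rng.normal(size=(n_neurons, d))     # one weight vector per neuron
        planes = rng.normal(size=(n_bits, d))   # shared random hyperplanes

        def lsh(v):
            # Sign pattern against the hyperplanes acts as the bucket id.
            return tuple((planes @ v > 0).astype(int))

        buckets = {}
        for j in range(n_neurons):              # index every neuron once, up front
            buckets.setdefault(lsh(W[j]), []).append(j)

        def sparse_forward(x):
            active = buckets.get(lsh(x), [])    # neurons hashing like the input
            return {j: float(W[j] @ x) for j in active}   # only these dot products

        print(len(sparse_forward(rng.normal(size=d))), "of", n_neurons, "neurons evaluated")

    With 8 hyperplanes there are 256 buckets, so an input touches only a handful of the 1,024 neurons on average; SLIDE builds on the same principle at full training scale.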

  36. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    Nvidia announces Morpheus, an AI-powered cloud-native app framework to help cybersecurity providers detect and prevent breaches in real-time, now in preview

    Nvidia announces Morpheus, an AI-powered app framework for cybersecurity
    https://venturebeat.com/2021/04/12/nvidia-announces-morpheus-an-ai-powered-app-framework-for-cybersecurity/

    During its GTC 2021 virtual keynote this morning, Nvidia announced Morpheus, a “cloud-native” app framework aimed at providing cybersecurity partners with AI skills that can be used to detect and mitigate cybersecurity attacks. Using machine learning, Morpheus identifies, captures, and acts on threats and anomalies, including leaks of sensitive data, phishing attempts, and malware.

    Morpheus is available in preview from today, and developers can apply for early access on Nvidia’s landing page.

    Reflecting the pace of adoption, the AI in cybersecurity market will reach $38.2 billion in value by 2026, Markets and Markets projects. That’s up from $8.8 billion in 2019, representing a compound annual growth rate of around 23.3%. Just last week, a study from MIT Technology Review Insights and Darktrace found that 96% of execs at large enterprises are considering adopting “defensive AI” against cyberattacks.

  37. Tomi Engdahl says:

    Ryan Smith / AnandTech:
    NVIDIA unveils Grace, a high-performance Arm-based server CPU for large-scale neural network workloads, expected to become available in NVIDIA products in 2023

    NVIDIA Unveils Grace: A High-Performance Arm Server CPU For Use In Big AI Systems
    https://www.anandtech.com/show/16610/nvidia-unveils-grace-a-highperformance-arm-server-cpu-for-use-in-ai-systems

  38. Tomi Engdahl says:

    Karissa Bell / Engadget:
    Twitter says it will study any “unintentional harms” caused by its algorithms and will make the findings public as part of its Responsible ML Initiative

    Twitter will study ‘unintentional harms’ caused by its algorithms
    The company will study its content recommendations and image cropping as part of the effort.
    https://www.engadget.com/twitter-will-study-algorithms-for-unintentional-harm-182722681.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cudGVjaG1lbWUuY29tLw&guce_referrer_sig=AQAAAByhNpY7Z9dr0E6Cc5Q72cmQTspEdwcFgRMdK3pZVztrTyRVR_js7gbefYDWfg-vwv5hC0mTIDBdsXqMp66e0do1EZ_Re_GlJEnpnGKpm-6ZMVHgPKcrSLcPHNxe_C9nFQ6LqmSj0S3yAGkdT3Az84pnIF5dHg_O1by4WhG2Lmr4

