3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

7,003 Comments

  1. Tomi Engdahl says:

    New Facial Recognition Tech Only Needs Your Eyes and Eyebrows
    You won’t be able to hide behind a mask
    https://onezero.medium.com/new-facial-recognition-tech-only-needs-your-eyes-and-eyebrows-9e7dc155cd7f

    The term “facial recognition” typically refers to technology that can identify your entire face. How that recognition happens can vary, and can include infrared or lidar technology. Either way, you need the geometry of a person’s entire face to make it work.
    But in the coronavirus era, when everyone is advised to wear a mask, exposed faces are increasingly rare. That’s breaking facial recognition systems everywhere, from iPhones to public surveillance apparatuses.

    Now, facial recognition company Rank One says it has a solution. This week, the company released a new form of facial recognition called periocular recognition, which can supposedly identify individuals by just their eyes and eyebrows. Rank One says the new system uses an entirely different algorithm from its standard facial recognition system and is specifically meant for masked individuals. Rank One says it will ship the technology to all of its active customers for free.
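
    As a rough illustration of the periocular idea (not Rank One’s algorithm), the open-source face_recognition library can already locate eye and eyebrow landmarks, and a sketch along these lines could crop that region for a downstream model; the file name and padding below are placeholders.

    import face_recognition  # assumed installed: pip install face_recognition

    # Load an image and find facial landmarks (eyes, eyebrows, nose, lips, ...)
    image = face_recognition.load_image_file("person.jpg")  # hypothetical input
    landmarks = face_recognition.face_landmarks(image)

    if landmarks:
        points = []
        for key in ("left_eyebrow", "right_eyebrow", "left_eye", "right_eye"):
            points.extend(landmarks[0][key])
        xs, ys = zip(*points)
        pad = 10  # small margin around the landmarks
        top, bottom = max(min(ys) - pad, 0), max(ys) + pad
        left, right = max(min(xs) - pad, 0), max(xs) + pad
        periocular = image[top:bottom, left:right]  # numpy crop of eyes + eyebrows
        # This crop is what a mask-tolerant recognition model would be trained on.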

    Reply
  2. Tomi Engdahl says:

    Microsoft and Intel project converts malware into images before analyzing it
    https://www.zdnet.com/article/microsoft-and-intel-project-converts-malware-into-images-before-analyzing-it/

    Microsoft and Intel have recently collaborated on a new research project that explored a new approach to detecting and classifying malware. Called STAMINA (STAtic Malware-as-Image Network Analysis), the project relies on a new technique that converts malware samples into grayscale images and then scans the image for textural and structural patterns specific to malware samples.

    Microsoft and Intel Labs work on STAMINA, a new deep learning approach for detecting and classifying malware.
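
    The core trick is easy to sketch. The following is a minimal, hypothetical illustration of the general bytes-to-grayscale-image idea (not Microsoft’s STAMINA pipeline); the file names and the fixed width are placeholders.

    import numpy as np
    from PIL import Image

    # Read the raw bytes of a (suspect) binary and view them as 8-bit pixels.
    with open("sample.bin", "rb") as f:       # hypothetical input file
        data = np.frombuffer(f.read(), dtype=np.uint8)

    width = 256                               # fixed width; height follows from file size
    height = len(data) // width
    pixels = data[: width * height].reshape(height, width)

    # Save as a grayscale image that an image classifier could then scan
    # for textural and structural patterns.
    Image.fromarray(pixels, mode="L").save("sample.png")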

    Reply
  3. Tomi Engdahl says:

    Thunderbolt Flaws Expose Millions of PCs to Hands-On Hacking
    https://www.wired.com/story/thunderspy-thunderbolt-evil-maid-hacking/

    Security paranoiacs have warned for years that any laptop left alone with a hacker for more than a few minutes should be considered compromised. Now one Dutch researcher has demonstrated how that sort of physical-access hacking can be pulled off in an ultra-common component: the Intel Thunderbolt port found in millions of PCs.

    Also:
    https://thehackernews.com/2020/05/thunderbolt-vulnerabilities.html
    https://threatpost.com/millions-thunderbolt-devices-thunderspy-attack/155620/
    https://www.zdnet.com/article/thunderbolt-flaws-affect-millions-of-computers-even-locking-unattended-devices-wont-help/
    https://www.bleepingcomputer.com/news/security/new-thunderbolt-security-flaws-affect-systems-shipped-before-2019/

    Reply
  4. Tomi Engdahl says:

    Tenstorrent Is Changing the Way We Think About AI Chips
    https://www.designnews.com/electronics-test/tenstorrent-changing-way-we-think-about-ai-chips/181638547462981?ADTRK=InformaMarkets&elq_mid=13153&elq_cid=876648

    GPUs and CPUs are reaching their limits as far as AI is concerned. That’s why Tenstorrent is creating something different.

    GPUs and CPUs are not going to be enough to ensure a stable future for artificial intelligence. “GPUs are essentially at the end of their evolutionary curve,” Ljubisa Bajic, CEO of AI chip startup Tenstorrent, told Design News. “[GPUs] have done a great job; they’ve pushed the field to the point where it is now. But in order to make any kind of order-of-magnitude jumps, GPUs are going to have to go.”

    Spiking neural networks (SNNs) more closely mimic the functions of biological neurons, which send information via spikes in electrical activity. “Here people try to simulate natural neurons almost directly by writing out the differential equations that describe their operation and then implementing them as close as we can in hardware,” Bajic explained. “So to an engineer this comes down to basically having many scalar processor cores connected to the scalar network.”

    This is very inefficient from a hardware standpoint. But Bajic said that SNNs share an efficiency with biological neurons: only a certain percentage of neurons are activated, depending on what the network is doing, which is highly desirable in terms of power consumption in particular.
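
    To make the “differential equation” view concrete, here is a minimal leaky integrate-and-fire (LIF) neuron stepped with Euler integration; the parameters are purely illustrative and have nothing to do with Tenstorrent’s hardware.

    # Simulate one LIF neuron: tau * dv/dt = -(v - v_rest) + I
    dt, steps = 1e-3, 1000            # 1 ms time step, 1 s of simulated time
    tau, v_rest, v_thresh, v_reset = 0.02, 0.0, 1.0, 0.0
    input_current = 1.2               # constant input drive

    v = v_rest
    spike_times = []
    for t in range(steps):
        v += dt / tau * (-(v - v_rest) + input_current)  # Euler step
        if v >= v_thresh:             # membrane potential crosses threshold
            spike_times.append(t * dt)
            v = v_reset               # reset after the spike
    print(len(spike_times), "spikes in 1 s of simulation")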

    Reply
  5. Tomi Engdahl says:

    Sony Builds AI Into a CMOS Image Sensor
    https://spectrum.ieee.org/view-from-the-valley/sensors/imagers/sony-builds-ai-into-a-cmos-image-sensor

    Sony today announced that it has developed and is distributing smart image sensors. These devices use machine learning to process captured images on the sensor itself. They can then select only relevant images, or parts of images, to send on to cloud-based systems or local hubs.

    This technology, says Mark Hanson, vice president of technology and business innovation for Sony Corp. of America, means practically zero latency between the image capture and its processing; low power consumption enabling IoT devices to run for months on a single battery; enhanced privacy; and far lower costs than smart cameras that use traditional image sensors and separate processors.

    Sony builds these chips by thinning and then bonding two wafers—one containing chips with light-sensing pixels and one containing signal processing circuitry and memory. This type of design is only possible because Sony is using a back-illuminated image sensor.

    “We originally went to backside illumination so we could get more pixels on our device,” says Hanson. “That was the catalyst to enable us to add circuitry; then the question was what were the applications you could get by doing that.”

    Sony’s smart image processor can identify and track objects, only sending data on to the cloud when it spots an anomaly.
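
    The pattern Hanson describes boils down to “run inference on the device and only upload the exceptions.” A toy sketch of that control flow follows; detect_objects() and upload() are hypothetical stand-ins, not Sony APIs.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str
        confidence: float

    def detect_objects(frame):            # hypothetical on-sensor inference
        return [Detection("person", 0.91)]

    def upload(frame, detections):        # hypothetical cloud call
        print("uploading frame with", [d.label for d in detections])

    EXPECTED = {"person", "car"}          # labels considered normal for this scene

    def process(frame):
        detections = detect_objects(frame)
        anomalies = [d for d in detections
                     if d.label not in EXPECTED and d.confidence > 0.8]
        if anomalies:
            upload(frame, anomalies)      # only anomalous frames leave the device
        # otherwise nothing is sent: low bandwidth, low latency, better privacy

    process(frame=None)                   # placeholder frame object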

    Reply
  6. Tomi Engdahl says:

    https://etn.fi/index.php/13-news/10774-nvidian-tekoalyhirmulla-54-miljardia-transistoria#ETNartikel

    Nvidia originally planned to hold its GPU Technology Conference in March. Instead the event was held virtually, and CEO Jensen Huang’s keynote was recorded in his own kitchen. In the talk he unveiled the most powerful AI processor in the world so far.

    Until now, Nvidia’s Volta chips have been the yardstick against which AI processors are measured. The company’s new A100 chip made Volta look slow at a stroke. Built from 54 billion transistors on a 7-nanometer process, the chip is up to 20 times more powerful than the Volta chips, Huang enthused.

    The A100 is the first processor based on Nvidia’s new Ampere architecture. According to Nvidia, eighteen service providers have already committed to using A100-based systems, including Alibaba Cloud, Amazon Web Services, Baidu Cloud, Cisco, Dell Technologies, Google Cloud, Hewlett Packard Enterprise, Microsoft Azure and Oracle.

    Reply
  7. Tomi Engdahl says:

    Sony’s new processor brings AI directly into cameras
    https://etn.fi/index.php?option=com_content&view=article&id=10768&via=n&datum=2020-05-14_15:26:54&mottagare=31202

    The user can choose what data the IMX500 chips deliver to the device: raw pixel data (i.e. normal image information), metadata, an ISP-processed image, or a selected region of interest. According to Sony, the chips intelligently recognize objects in video in 1.3 milliseconds, which in practice means real-time tracking of objects in the frame.

    Users can load the AI models of their choice into the chip’s embedded memory and modify and update the models according to the requirements of their application or the conditions at the deployment site. The same models can also be used across different applications, from object recognition to behavior recognition.

    Reply
  8. Tomi Engdahl says:

    Artificial intelligence is struggling to cope with how the world has changed
    Narrow artificial intelligence is finding it hard to make good predictions in an environment filled with change. That means it’s time to look at better models for AI.

    https://www.zdnet.com/article/ai-models-are-struggling-to-cope-with-our-rapidly-changing-world/

    Reply
  9. Tomi Engdahl says:

    CSAIL Engineers Build a System for Tracking Home Appliances — Without Manual Intervention
    https://www.hackster.io/news/csail-engineers-build-a-system-for-tracking-home-appliances-without-manual-intervention-fe13adaf32ac

    By monitoring people’s positions and the overall power usage of a property, a new deep learning system can infer appliance locations.

    Reply
  10. Tomi Engdahl says:

    Which should the robot save: the innocent fisherman or the drunken boater? Answer and see what others think
    https://yle.fi/aihe/artikkeli/2020/04/28/kumpi-robotin-pitaisi-pelastaa-viaton-kalastaja-vai-juoppokuski-vastaa-ja-katso

    Welcome to the future. Robots rescue and care for you, and artificial intelligence drives your car. What kinds of decisions do you think AIs should make? What kind of ethical code should be programmed into robots? Read four stories about robots facing conflicting demands and answer how the robot should act.

    Reply
  11. Tomi Engdahl says:

    Nvidia’s bleeding-edge Ampere GPU architecture revealed: 5 things PC gamers need to know
    Nvidia’s next-gen GPU architecture is finally here.
    https://www.pcworld.com/article/3543834/nvidia-ampere-gpu-architecture-reveal-geforce-pc-gamers.html

    Reply
  12. Tomi Engdahl says:

    Sony Launches “World’s First” Vision Sensor with On-Board Edge AI Processing Capabilities
    Designed for edge AI, Sony’s new IMX500 and IMX501 can run computer vision tasks entirely locally — and quickly, too.
    https://www.hackster.io/news/sony-launches-world-s-first-vision-sensor-with-on-board-edge-ai-processing-capabilities-66d0bfcf50e2

    Reply
  13. Tomi Engdahl says:

    Hands-On with the NVIDIA Jetson Xavier NX Developer Kit
    Finally available in a bundle with baseboard, does NVIDIA’s Volta-based edge AI acceleration machine deliver on its promises?
    https://www.hackster.io/news/hands-on-with-the-nvidia-jetson-xavier-nx-developer-kit-fda389cbe7d2

    Unveiled late last year, the Jetson Xavier NX is the latest entry in NVIDIA’s deep-learning-accelerating Jetson family. Described by the company as “the world’s smallest supercomputer” and directly targeting edge AI implementations, the Developer Kit edition, which bundles the core system-on-module (SOM) board with an expansion baseboard, was originally due to launch in March this year, but a last-minute delay saw the device slip to May, launching today at $399. Does it deliver on its heady promise?

    Reply
  14. Tomi Engdahl says:

    Campbell Kwan / ZDNet:
    Sony and Microsoft announce partnership to embed Microsoft Azure AI capabilities onto Sony’s new image sensor with built-in AI that was announced last week

    Microsoft and Sony to create smart camera solutions for AI-enabled image sensor
    https://www.zdnet.com/article/microsoft-and-sony-partnership-to-create-smart-camera-solutions-with-ai-enable-image-sensor/

    Sony’s image sensor will have Microsoft Azure artificial intelligence capabilities.

    Sony and Microsoft have joined together to create artificial intelligence-powered (AI) smart camera solutions to make it easier for enterprise customers to perform video analytics, the companies announced.

    The companies will embed Microsoft Azure AI capabilities onto Sony’s AI-enabled image sensor IMX500. Announced last week, the IMX500 is the world’s first image sensor to contain a pixel chip and logic chip. The logic chip, called Sony’s digital signal processor, is dedicated to AI signal processing, along with memory for the AI model.

    “Video analytics and smart cameras can drive better business insights and outcomes across a wide range of scenarios for businesses,” said Takeshi Numoto, corporate vice president and commercial chief marketing officer at Microsoft.

    Sony and Microsoft also announced that they will create a smart camera managed app powered by Azure Internet of Things (IoT) and cognitive services that it hopes to use alongside the IMX500 sensor to provide new video analytics use cases for enterprise customers.

    According to Sony, the app will allow independent software vendors (ISVs) and smart camera original equipment manufacturers (OEMs) to develop AI models,

    Reply
  15. Tomi Engdahl says:

    https://semiengineering.com/week-in-review-design-low-power-94/

    Nvidia made a slew of announcements along with the company’s virtual GTC keynote. Primary among them was its new GPU architecture, Ampere. The new architecture aims to unify AI training and inference and boost performance by up to 20x over its predecessors. It adds automatic mixed precision and support for both Tensor Float (TF32) and Floating Point 64 (FP64). The first Ampere GPU is the A100, a universal workload accelerator built for AI, data analytics, scientific computing and cloud graphics. It can be partitioned into as many as seven independent instances for inferencing tasks, or combined with other A100s to act as a single GPU.
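
    For context, automatic mixed precision is already a one-scaler-and-one-context-manager affair in frameworks like PyTorch. A minimal sketch follows; the model and data are placeholders, and the speedups the announcement touts assume Ampere-class hardware.

    import torch
    from torch import nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

    x = torch.randn(64, 512, device=device)            # placeholder batch
    y = torch.randint(0, 10, (64,), device=device)

    for _ in range(10):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast(enabled=(device == "cuda")):
            loss = loss_fn(model(x), y)                 # forward pass in mixed precision
        scaler.scale(loss).backward()                   # scaled backward pass
        scaler.step(optimizer)                          # unscale and apply the update
        scaler.update()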

    Nvidia’s EGX Edge AI platform was expanded with the addition of the EGX A100 for larger commercial off-the-shelf servers (which incorporates newly acquired Mellanox technology and is based on the Ampere architecture) and the credit-card-sized EGX Jetson Xavier NX for micro servers and edge AI. And with health at the forefront of many minds, the company’s Clara healthcare platform was updated for faster genomic sequencing, new AI models, and integration of sensors for smart hospitals.

    Reply
  16. Tomi Engdahl says:

    Spiking Neural Networks: Research Projects or Commercial Products?
    Opinions differ widely, but in this space that isn’t unusual.
    https://semiengineering.com/spiking-neural-networks-research-projects-or-commercial-products/

    Spiking neural networks (SNNs) often are touted as a way to get close to the power efficiency of the brain, but there is widespread confusion about what exactly that means. In fact, there is disagreement about how the brain actually works.

    Some SNN implementations are less brain-like than others. Depending on whom you talk to, SNNs are either a long way away or close to commercialization. The varying definitions of SNNs lead to differences in how the industry is seen.

    “A few startups are doing their own SNNs,” said Ron Lowman, strategic marketing manager of IP at Synopsys. “It’s being driven by guys that have expertise in how to train, optimize, and write software for them.”

    On the other hand, Flex Logix Inference Technical Marketing Manager Vinay Mehta said that, “SNNs are out further than reinforcement learning,” referring to a machine-learning concept that’s still largely in the research phase.

    The entire notion of a “neural network” is motivated by attempts to model how the brain works. But current neural networks — like the convolutional neural networks (CNNs) that are so prevalent today – don’t follow the design of the brain. Instead, they rely on matrix multiplication for incorporating synaptic weights and gradient-descent algorithms for supervised training.
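
    To make the contrast concrete, here is a minimal “classical” network in NumPy: the synaptic weights are plain matrices and training is gradient descent on a loss, with no spikes anywhere. The data is a toy problem invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                          # 100 samples, 3 features
    y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy labels

    W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)          # synaptic weights as matrices
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
    lr = 0.1

    for _ in range(500):
        h = np.tanh(X @ W1 + b1)                           # hidden layer: matrix multiply
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))               # output probability
        grad_out = (p - y) / len(X)                        # gradient of cross-entropy loss
        grad_W2 = h.T @ grad_out
        grad_h = (grad_out @ W2.T) * (1 - h ** 2)
        grad_W1 = X.T @ grad_h
        W2 -= lr * grad_W2; b2 -= lr * grad_out.sum(axis=0)   # gradient-descent updates
        W1 -= lr * grad_W1; b1 -= lr * grad_h.sum(axis=0)

    print("training accuracy:", float(((p > 0.5) == y).mean()))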

    Those working on SNNs often refer to these as “classical” networks or “artificial” neural networks (ANNs). That said, Alexandre Valentian, head of advanced technologies and system-on-chip laboratory for CEA-Leti, noted that CNNs reflect more of an approach or type of application, while SNNs reflect an implementation. “CNNs can be implemented in spikes – it’s not CNN vs. SNN.”

    Reply
  17. Tomi Engdahl says:

    11 Myths About Inference Acceleration
    The inference-acceleration market has heated up dramatically, and in turn, it’s led to many misconceptions circulating amongst ill-informed vendors and customers. This article debunks 11 of the most common myths.

    https://www.electronicdesign.com/technologies/embedded-revolution/article/21131142/11-myths-about-inference-acceleration?utm_source=EG+ED+IoT+for+Engineers&utm_medium=email&utm_campaign=CPS200519052&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R

    Reply
  18. Tomi Engdahl says:

    Facial recognition firms are scrambling to see around face masks
    https://www.cnet.com/google-amp/news/facial-recognition-firms-are-scrambling-to-see-around-face-masks/

    Because of face coverings prompted by the coronavirus pandemic, companies are trying to ID people based on just their eyes and cheekbones.

    Reply
  19. Tomi Engdahl says:

    Face Recognition with Python, in Under 25 Lines of Code
    https://realpython.com/face-recognition-with-python/
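
    In the same spirit as the linked tutorial (though not necessarily its exact code), a few lines of OpenCV are enough to detect faces with the bundled Haar cascade; the input file name is a placeholder.

    import cv2  # pip install opencv-python

    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_cascade = cv2.CascadeClassifier(cascade_path)

    image = cv2.imread("people.jpg")                    # hypothetical input image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5, minSize=(30, 30))
    for (x, y, w, h) in faces:                          # draw a box around each face
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("people_detected.jpg", image)
    print("Found", len(faces), "face(s)")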

    Reply
  20. Tomi Engdahl says:

    AI/DC: I made a bot write an AC/DC song
    https://www.youtube.com/watch?v=vpEVsDN84Hc

    Using lyrics.rip to scrape the Genius Lyrics Database, I made a Markov Chain write AC/DC lyrics. This is the end result: “Great Balls”.
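
    A word-level Markov chain like the one behind lyrics.rip takes only a few lines of Python; this toy sketch (with a placeholder corpus standing in for scraped lyrics) shows the idea.

    import random
    from collections import defaultdict

    corpus = "we rock all night we roll all night we rock and roll all night long"
    # In the real thing, the corpus would be the scraped lyrics text.

    words = corpus.split()
    chain = defaultdict(list)                 # word -> list of words that followed it
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)

    word = random.choice(words)               # random starting word
    line = [word]
    for _ in range(11):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)       # pick one of the observed next words
        line.append(word)
    print(" ".join(line))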

    Reply
  21. Tomi Engdahl says:

    Help wanted: Autonomous robot guide
    As Postmates ramps up its autonomous delivery, an at-home tech job emerges
    https://techcrunch.com/2020/05/17/help-wanted-autonomous-robot-guide/

    Reply
  22. Tomi Engdahl says:

    The MaeGo Autonomous Robot Lets Kids Learn Coding While Playing
    https://www.hackster.io/news/the-maego-autonomous-robot-lets-kids-learn-coding-while-playing-fdfbf4193790

    MaeGo is an autonomous robot rover built for a target shooting game, but it also gives kids an opportunity to learn coding.

    Reply
  23. Tomi Engdahl says:

    Speech Synthesis Enters The Uncanny Valley, Or ‘What Will Biggie Rap Next?’
    http://www.synthtopia.com/content/2020/05/17/speech-synthesis-enters-the-uncanny-valley-or-what-will-biggie-rap-next/

    Speech synthesis, the use of computers to generate realistic human speech, is rapidly entering the ‘uncanny valley’ – creepily almost-realistic.

    Recent approaches have used neural networks, trained using only speech examples and text transcripts, to generate human-like text-to-speech synthesis.

    The ‘voice’ was computer-generated, using a text-to-speech model trained on the speech patterns of The Notorious B.I.G. In a nutshell, the approach uses an AI to ‘learn’ how an audio file of an individual’s speech corresponds to a text transcript. Once trained, the model can synthesize speech from text that conforms to the ‘learned’ speech patterns.

    The Vocal Synthesis channel on Youtube features a wide range of examples that demonstrate what’s currently possible.

    https://m.youtube.com/channel/UCRt-fquxnij9wDnFJnpPS2Q

    Reply
  24. Tomi Engdahl says:

    Luxonis Unveils Ultra-Compact DepthAI-Compatible 4k60 Computer Vision Module, MegaAI
    https://www.hackster.io/news/luxonis-unveils-ultra-compact-depthai-compatible-4k60-computer-vision-module-megaai-1b16b581c207

    Supporting 4k30 encode and 4k60 streaming from an on-board camera module, plus 4 TOPS of compute, the megaAI is small but mighty.

    Reply
  25. Tomi Engdahl says:

    Automated Pinball Machine Scores Big with Computer Vision
    https://www.hackster.io/news/automated-pinball-machine-scores-big-with-computer-vision-a3a67efa90e5

    This scratch-built pinball machine doesn’t just play ball; it plays itself.

    The entire machine is made from scratch using CNC-routed plywood, solenoid-powered actuators, some hobbyist electronics, and a Linux computer.

    An Arduino Mega lies at the heart of this build. However, most off-the-shelf pinball components use solenoids, which run on a 48 V source and draw more current than the Mega can deliver. The team went with some IRF44V MOSFETs to safely switch the required power to the various flippers and bumpers, along with some protection circuitry to boot.

    As for the automation, a webcam mounted above the playfield keeps an eye on the ball’s position with the help of a computer running an OpenCV script. This looks for a ball entering the “Flip Zone” and sends a command to the Mega to trigger the flippers when it’s time to strike.

    Interestingly, the OpenCV script does not detect the ball with circle detection but by comparing against a reference photo. A picture is taken of the playfield with no ball present and the flippers down, then all subsequent frames are compared against this baseline. Any differences between the two images are marked as a potential ball.
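
    A rough sketch of that reference-photo approach (not the builders’ exact script) looks like this with OpenCV: difference the current frame against the empty-playfield baseline, threshold it, and treat any sizable blob as a candidate ball. File names and thresholds are placeholders.

    import cv2  # pip install opencv-python

    baseline = cv2.imread("empty_playfield.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical
    frame = cv2.imread("current_frame.jpg", cv2.IMREAD_GRAYSCALE)       # hypothetical

    diff = cv2.absdiff(baseline, frame)                  # what changed vs. the baseline
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for contour in contours:
        if cv2.contourArea(contour) < 50:                # ignore tiny noise blobs
            continue
        x, y, w, h = cv2.boundingRect(contour)
        cx, cy = x + w // 2, y + h // 2
        # If (cx, cy) falls inside the "Flip Zone", the script would send a
        # command (e.g. over serial) to the Arduino Mega to fire the flippers.
        print("possible ball at", (cx, cy))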

    https://www.instructables.com/id/Arduino-Pinball-Machine-That-Plays-Itself/

    Reply
  26. Tomi Engdahl says:

    Researchers Create Perovskite “Tree” Memories for More Natural, Energy-Efficient AI Hardware
    https://www.hackster.io/news/researchers-create-perovskite-tree-memories-for-more-natural-energy-efficient-ai-hardware-710d1ac4c1cd

    Designed to replace software-based AI with specific hardware, the new material is claimed to be considerably more efficient.

    Reply
  27. Tomi Engdahl says:

    Influencers Say Instagram Is Biased Against Plus-Size Bodies, And They May Be Right
    https://www.buzzfeednews.com/article/laurenstrapagiel/influencers-say-instagram-is-more-likely-to-remove-photos

    Plus-size influencers have long complained about their posts being flagged on social media, and there are a few reasons why it might be happening.

    There have been numerous reports of people like Fatale having their pictures and videos flagged and removed from social media.

    Even very famous women aren’t spared.

    While there’s no hard data showing images of plus-size people are flagged more often, there have been so many anecdotes of it that influencers can’t help but see a pattern.

    According to experts who spoke with BuzzFeed News, it’s very possible they are right. Content moderation on social media apps is usually a mix of artificial intelligence and human moderators, and both methods have a potential bias against larger bodies.

    “Technology and discrimination goes way back,” he told BuzzFeed News. “Anytime you design a new project or a new prototype you have to think about how it is going to break.”

    Companies like Facebook build their own proprietary image and video moderation AI. They build it by feeding it millions of images so it can identify patterns and learn what is acceptable and what is not. It learns, for example, to identify pornography, or a nipple, or a bikini. As it scans images uploaded by users, it decides how likely that image is to contain banned content. If it’s very sure, it can automatically flag the content. If it’s only sort of sure, it can forward that content along to a human to double-check.
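
    That confidence-based routing can be pictured as a simple two-threshold rule; the numbers below are invented for illustration, since real platforms’ thresholds and models are proprietary.

    def route_image(banned_probability, auto_flag_at=0.95, review_at=0.60):
        """Decide what to do with an image given a model's confidence score."""
        if banned_probability >= auto_flag_at:
            return "auto-remove"              # very sure: flag automatically
        if banned_probability >= review_at:
            return "send to human review"     # only sort of sure: escalate
        return "allow"

    for score in (0.99, 0.70, 0.20):
        print(score, "->", route_image(score))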

    The problem is there are so many gray areas, and the AI can only make its guesses based on what it’s been taught. That’s where the first potential problem arises. If the AI wasn’t fed many images of plus-size women, which is a possibility given the bias against larger bodies in media, that could be the start of a problem.

    “If you take two models, one plus-size one not plus-size, there’s a chance there are more pixels related to skin,” he said.

    Since the AI doesn’t know the context of what it’s seeing, this could lead to incorrect categorization. However, these AI systems are built by people, and people are biased.

    “This technology is not trained to remove content based on a person’s size, it is trained to look for violating elements — such as visible genitalia or text containing hate speech.”

    Because of all these gray areas, and because of the sheer scale of these moderation databases, actually fixing a potential problem like this would be expensive and time-consuming, and companies have very little motivation to do it.

    Lo said apps like Instagram or TikTok are under pressure to keep things PG, both to stay available on app stores and because of laws like FOSTA-SESTA and the resources it takes to remove terrorism-related content. It’s just easier to err on the side of caution.

    “I shouldn’t be silenced and erased because you are hypersexualizing my body because it’s bigger,”

    Reply
  28. Tomi Engdahl says:

    Sony will embed Microsoft Azure AI on its intelligent vision sensor, the IMX500, to extract more image data from a smart camera and to give the option of cloud-based AI inferencing across multiple cameras and devices. Sony is also making a smart camera app that works with the sensor and includes Azure IoT and Cognitive Services, so systems can be hooked up to the cloud via Microsoft Azure if needed. Sony and Microsoft are targeting enterprises that, for instance, may want to gather inventory or shelf-stock data and use AI to turn it into actionable intelligence. Independent software vendors (ISVs) specializing in computer vision and video analytics solutions, as well as smart camera original equipment manufacturers (OEMs), are the targets, and both Sony and Microsoft will work with partners and enterprise customers in these areas as part of Microsoft’s AI & IoT Insider Labs program, according to a press release.

    https://www.sony.com/en_us/SCA/company-news/press-releases/sony-corporation-of-america/2020/sony-semiconductor-solutions-and-microsoft-partner-to-create-sma.html

    Reply
  29. Tomi Engdahl says:

    AI for cybersecurity is a hot new thing—and a dangerous gamble
    https://www.technologyreview.com/2018/08/11/141087/ai-for-cybersecurity-is-a-hot-new-thing-and-a-dangerous-gamble/

    Machine learning and artificial intelligence can help guard against cyberattacks, but hackers can foil security algorithms by targeting the data they train on and the warning flags they look for.
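
    The training-data attack surface is easy to demonstrate on a toy problem (this is illustrative only, not from the article): flipping a fraction of the training labels typically degrades a simple classifier.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    flip = rng.choice(len(y_tr), size=len(y_tr) // 4, replace=False)  # poison 25% of labels
    y_poisoned[flip] = 1 - y_poisoned[flip]
    poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

    print("accuracy with clean labels:   ", clean.score(X_te, y_te))
    print("accuracy with poisoned labels:", poisoned.score(X_te, y_te))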

    Reply
  30. Tomi Engdahl says:

    Microsoft sacks journalists to replace them with robots
    Users of the homepages of the MSN website and Edge browser will now see news stories generated by AI
    https://www.theguardian.com/technology/2020/may/30/microsoft-sacks-journalists-to-replace-them-with-robots

    Reply
  31. Tomi Engdahl says:

    Microsoft is cutting dozens of MSN news production workers and replacing them with artificial intelligence
    https://www.seattletimes.com/business/local-business/microsoft-is-cutting-dozens-of-msn-news-production-workers-and-replacing-them-with-artificial-intelligence/

    Microsoft won’t renew the contracts for dozens of news production contractors working at MSN and plans to use artificial intelligence to replace them, several people close to the situation confirmed on Friday.

    The roughly 50 employees — contracted through staffing agencies Aquent, IFG and MAQ Consulting — were notified Wednesday that their services would no longer be needed beyond June 30.

    “Like all companies, we evaluate our business on a regular basis,”

    Reply
  32. Tomi Engdahl says:

    Dave Gershgorn / OneZero:
    Analysis of NIST-submitted vendors shows at least 45 companies now advertise real-time facial recognition services, from RealNetworks to Toshiba

    From RealPlayer to Toshiba, Tech Companies Cash in on the Facial Recognition Gold Rush
    At least 45 companies now advertise real-time facial recognition
    https://onezero.medium.com/from-realplayer-to-toshiba-tech-companies-cash-in-on-the-facial-recognition-gold-rush-b40ab3e8f1e2

    Reply
  33. Tomi Engdahl says:

    Hardware Security For AI Accelerators
    Learn about the threats to AI/ML assets.
    https://semiengineering.com/hardware-security-for-ai-accelerators/

    Dedicated accelerator hardware for artificial intelligence and machine learning (AI/ML) algorithms are increasingly prevalent in data centers and endpoint devices. These accelerators handle valuable data and models, and face a growing threat landscape putting AI/ML assets at risk. Using fundamental cryptographic security techniques performed by a hardware root of trust can safeguard these assets from attack.

    Hardware Security for AI Accelerators
    https://go.rambus.com/hardware-security-for-ai-accelerators

    Reply
  34. Tomi Engdahl says:

    Introducing Google Coral Edge TPU — a New Machine Learning ASIC from Google
    https://medium.com/nordcloud-engineering/google-coral-edge-tpu-efc1ba24d319

    Reply
  35. Tomi Engdahl says:

    Can the EU make AI “trustworthy”? No – but they can make it just
    https://edri.org/can-the-eu-make-ai-trustworthy-no-but-they-can-make-it-just/

    Today, 4 June 2020, European Digital Rights (EDRi) submitted its answer to the European Commission’s consultation on the AI White Paper. In addition to our response, our additional paper outlines recommendations to the European Commission for a fundamental rights-based AI regulation. You can find our consultation response, recommendations paper, and answering guide for the public here.

    Reply
  36. Tomi Engdahl says:

    IBM will no longer offer, develop, or research facial recognition technology
    https://www.theverge.com/2020/6/8/21284683/ibm-no-longer-general-purpose-facial-recognition-analysis-software

    IBM’s CEO says we should reevaluate selling the technology to law enforcement

    IBM will no longer offer general purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology, IBM tells The Verge.

    “IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

    Reply
  37. Tomi Engdahl says:

    Amazon Won’t Let Police Use Its Facial Recognition Technology For One Year
    https://www.forbes.com/sites/rachelsandler/2020/06/10/amazon-wont-let-police-use-its-facial-recognition-technology-for-one-year/?utm_campaign=forbes&utm_source=facebook&utm_medium=social&utm_term=Gordie/#676f7264696

    After facing scrutiny for its ties to police in the wake of the George Floyd protests, Amazon said Wednesday it would ban police from using its controversial facial recognition technology for one year.

    Organizations working to end human trafficking, such as Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics, can continue using the technology, which is called Rekognition.

    In a statement, Amazon said it hopes Congress will pass legislation governing the use of facial recognition during the moratorium.

    Reply
