3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

7,003 Comments

  1. Tomi Engdahl says:

    “Unorthodox” AI Helps Identify Best Cancer Treatments
    https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/ai-for-cancer-treatments

    AlphaGo became the first household AI name by teaching itself to play the ancient Chinese game Go and then beating the world’s best human player. Self-driving cars use AI systems to learn to park or merge into traffic by practicing the maneuvers over and over until they get it right.

    It’s clear that AI programs are good at training themselves to win, maximize, or perfect. But what if success means striking a balance?

    In cancer treatments, doctors endeavor to dose patients with enough drugs to kill as many tumor cells as possible but as few patient cells as possible. In other words, they balance shrinking a tumor with minimizing side effects.

    “We said, ‘Wait. This sounds like a machine-learning search problem and optimization issue,’ ”

    Today at the 2018 Machine Learning for Healthcare conference at Stanford University, Shah and researcher Gregory Yauney will present a self-learning artificial intelligence model that undertakes that balancing act. Trained with real patient data

    Currently, doctors typically choose how to dose a patient using protocols based on animal studies and past clinical trials, some from the 1950s or earlier. Shah figured there was room for improvement.

    treatment can make patients very, very sick.

    “Our goal was to reduce toxicity and dosing for patients who unfortunately have this disease,” says Shah.

    It wasn’t an easy goal to achieve.
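    The balancing act described above is, at bottom, a reward-design problem. A toy sketch of such a trade-off in Python (the weights are purely illustrative; this is not the MIT model):

        # Reward tumor kill, penalize harm to healthy cells (both scaled 0..1).
        def dosing_reward(tumor_shrinkage, toxicity,
                          w_efficacy=1.0, w_toxicity=1.2):
            return w_efficacy * tumor_shrinkage - w_toxicity * toxicity

        # A moderate dose can out-score a harsher one once toxicity is priced in:
        print(dosing_reward(0.6, 0.2))  # 0.36
        print(dosing_reward(0.8, 0.5))  # ~0.2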

    Reply
  2. Tomi Engdahl says:

    Klarity uses AI to strip drudgery from contract review
    https://techcrunch.com/2018/08/17/klarity-uses-ai-to-strip-drudgery-from-contract-review/?utm_source=tcfbpage&sr_share=facebook

    Klarity, a member of the Y Combinator 2018 Summer class, wants to automate much of the contract review process by applying artificial intelligence, specifically natural language processing.

    Company co-founder and CEO Andrew Antos has experienced the pain of contract reviews firsthand. After graduating from Harvard Law, he landed a job spending 16 hours a day reviewing contract language, a process he called mind-numbing. He figured there had to be a way to bring technology to bear on the problem, and Klarity was born.

    “A lot of companies are employing internal or external lawyers because their customers, vendors or suppliers are sending them a contract to sign,”

    Reply
  3. Tomi Engdahl says:

    Oracle open sources Graphpipe to standardize machine learning model deployment
    https://techcrunch.com/2018/08/15/oracle-open-sources-graphpipe-to-standardize-machine-learning-model-deployment/?utm_source=tcfbpage&sr_share=facebook

    Oracle, a company not exactly known for having the best relationship with the open source community, is releasing a new open source tool today called Graphpipe, which is designed to simplify and standardize the deployment of machine learning models.

    The tool consists of a set of libraries and tools for following the standard.


    Vish Abrams, whose background includes helping develop OpenStack at NASA and later helping launch Nebula, an OpenStack startup in 2011, is leading the project. He says as his team dug into the machine learning workflow, they found a gap. While teams spend lots of energy developing a machine learning model, it’s hard to actually deploy the model for customers to use. That’s where Graphpipe comes in.

    He points out that it’s common with newer technologies like machine learning for people to get caught up in the hype. Even though the development process keeps improving, he says that people often don’t think about deployment.

    “Graphpipe is what’s grown out of our attempt to really improve deployment stories for machine learning models, and to create an open standard around having a way of doing that to improve the space,” Abrams told TechCrunch.

    As Oracle dug into this, they identified three main problems. For starters, there is no standard way to serve APIs, leaving you to use whatever your framework provides. Next, there is no standard deployment mechanism, which leaves developers to build custom ones every time. Finally, they found existing methods leave performance as an afterthought, which in machine learning could be a major problem.

    “We created Graphpipe to solve these three challenges. It provides a standard, high-performance protocol for transmitting tensor data over the network, along with simple implementations of clients and servers that make deploying and querying machine learning models from any framework a breeze,”

    https://blogs.oracle.com/developers/introducing-graphpipe
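    For a taste of the workflow, here is a minimal client-side sketch. The remote.execute call follows the graphpipe-py README; the server address and input shape here are assumptions:

        # Query a running Graphpipe model server (assumes `pip install graphpipe`
        # and a server listening on localhost:9000).
        import numpy as np
        from graphpipe import remote

        data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # e.g. one RGB image
        pred = remote.execute("http://127.0.0.1:9000", data)      # tensor in, tensor out
        print(pred.shape)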

    Reply
  4. Tomi Engdahl says:

    Two Startups Use Processing in Flash Memory for AI at the Edge
    https://spectrum.ieee.org/tech-talk/computing/embedded-systems/two-startups-use-processing-in-flash-memory-for-ai-at-the-edge

    Irvine, Calif.–based Syntiant thinks it can use embedded flash memory to greatly reduce the amount of power needed to perform deep-learning computations. Austin, Texas–based Mythic thinks it can use embedded flash memory to greatly reduce the amount of power needed to perform deep-learning computations. They both might be right.

    A growing crowd of companies are hoping to deliver chips that accelerate otherwise onerous deep learning applications, and to some degree they all have similarities because “these are solutions that are created by the shape of the problem,” explains Mythic founder and CTO Dave Fick.

    Reply
  5. Tomi Engdahl says:

    Lime: Explaining the predictions of any machine learning classifier
    https://github.com/marcotcr/lime

    This project is about explaining what machine learning classifiers (or models) are doing. At the moment, we support explaining individual predictions for text classifiers or classifiers that act on tables (numpy arrays of numerical or categorical data) or images, with a package called lime (short for local interpretable model-agnostic explanations).
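    A minimal usage sketch for tabular data, following the package’s README; the random-forest model and iris data set here are stand-ins:

        # Explain one prediction of a tabular classifier with lime.
        # Assumes `pip install lime scikit-learn`.
        from sklearn.datasets import load_iris
        from sklearn.ensemble import RandomForestClassifier
        from lime.lime_tabular import LimeTabularExplainer

        iris = load_iris()
        clf = RandomForestClassifier(n_estimators=100).fit(iris.data, iris.target)

        explainer = LimeTabularExplainer(
            iris.data,
            feature_names=iris.feature_names,
            class_names=iris.target_names,
        )
        # Fit a local, interpretable linear model around one instance.
        exp = explainer.explain_instance(iris.data[0], clf.predict_proba, num_features=2)
        print(exp.as_list())  # e.g. [('petal width (cm) <= 0.30', 0.25), ...]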

    Reply
  6. Tomi Engdahl says:

    University of Michigan’s MPU Could Bring Memristors to the General Computing World
    https://blog.hackster.io/university-of-michigans-mpu-could-bring-memristors-to-the-general-computing-world-fe6ad5220590

    Researchers from the University of Michigan have developed what they term “memory-processing units” and believe memristors could one day be housed on a chip and used for general computing applications, such as fast, energy-efficient CPUs and memory. This would make them well suited to low-power hardware in smartphones, and to supercomputing, where high heat and power consumption limit performance.

    Reply
  7. Tomi Engdahl says:

    A Poet of Computation Who Uncovers Distant Truths
    By Erica Klarreich, August 1, 2018
    https://www.quantamagazine.org/computer-scientist-constantinos-daskalakis-wins-nevanlinna-prize-20180801/

    The theoretical computer scientist Constantinos Daskalakis has won the Rolf Nevanlinna Prize for explicating core questions in game theory and machine learning.

    Reply
  8. Tomi Engdahl says:

    Deep neural network on arduino – MNIST handwritten
    https://hackaday.io/project/41159-deep-neural-network-on-arduino-mnist-handwritten

    170 neurons running on an Arduino can recognize MNIST digits with 97% accuracy

    Proof of concept – running deep neural network with 170 neurons on Atmega328 – Arduino.

    Network topology is fully connected, with ReLU activation in two hidden layers and linear activation in the output layer.

    The network was trained on a GPU (GTX 1080) using my own neural network framework. After learning, the weights were rounded and mapped into 8-bit fixed point.

    The layer computing kernel is nothing more than a vector-matrix multiplication. I used some unrolling tricks (30% speed increase on an Intel i5); it is similar on an ARM Cortex-M4, because of the pipeline.

    There is no significant loss of accuracy: I compared 32-bit float weights against the 8-bit signed char versions.
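    A minimal NumPy sketch of that float-to-int8 mapping (not the author’s framework code):

        import numpy as np

        def quantize_int8(w):
            """Map float weights to int8 with a per-layer scale factor."""
            scale = np.abs(w).max() / 127.0               # largest magnitude -> +/-127
            q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
            return q, scale

        w = np.random.randn(170, 784).astype(np.float32)  # toy layer weights
        q, scale = quantize_int8(w)
        err = np.abs(w - q.astype(np.float32) * scale).max()
        print("max rounding error: %.5f" % err)           # small vs. weight range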

    Reply
  9. Tomi Engdahl says:

    AI chips for big data and machine learning: GPUs, FPGAs, and hard choices in the cloud and on-premise
    https://www.zdnet.com/article/ai-chips-for-big-data-and-machine-learning-gpus-fpgas-and-hard-choices-in-the-cloud-and-on-premise/

    How can GPUs and FPGAs help with data-intensive tasks such as operations, analytics, and machine learning, and what are the options?

    Reply
  10. Tomi Engdahl says:

    Democratic artificial intelligence will shape future technologies: Gartner
    https://www.zdnet.com/article/democratic-artificial-intelligence-will-shape-future-technologies-gartner/

    The research firm believes that AI will end up in everyone’s hands sooner than we think.

    Reply
  11. Tomi Engdahl says:

    Google just put an AI in charge of keeping its data centers cool
    https://www.zdnet.com/article/google-just-put-an-ai-in-charge-of-keeping-its-data-centers-cool/

    DeepMind’s neural networks will tweak data center conditions to cut power usage.

    Google is putting an artificial intelligence system in charge of its data center cooling after the system proved it could cut energy use.

    Now Google and its AI company DeepMind are taking the project further; instead of recommendations being implemented by human staff, the AI system is directly controlling cooling in the data centers that run services including Google Search, Gmail and YouTube.

    “This first-of-its-kind cloud-based control system is now safely delivering energy savings in multiple Google data centers,” Google said.

    Safety-first AI for autonomous data center cooling and industrial control
    https://www.blog.google/inside-google/infrastructure/safety-first-ai-autonomous-data-center-cooling-and-industrial-control/

    Reply
  12. Tomi Engdahl says:

    What is AI? Everything you need to know about Artificial Intelligence
    https://www.zdnet.com/article/what-is-ai-everything-you-need-to-know-about-artificial-intelligence/

    An executive guide to artificial intelligence, from machine learning and general AI to neural networks.

    What is artificial intelligence (AI)?

    It depends who you ask.

    What are the uses for AI?

    AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon’s Alexa and Apple’s Siri, to recognise who and what is in a photo, to spot spam, or detect credit card fraud.
    What are the different types of AI?

    At a very high level artificial intelligence can be split into two broad types: narrow AI and general AI.

    Narrow AI is what we see all around us in computers today: intelligent systems that have been taught, or have learned, how to carry out specific tasks without being explicitly programmed to do so.

    This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

    Reply
  13. Tomi Engdahl says:

    How AI is decommoditizing the chip industry
    https://venturebeat.com/2018/08/16/how-ai-is-decommoditizing-the-chip-industry/

    Since the early days of computing, there has always been this idea that artificial intelligence would one day change the world. We’ve seen this future depicted in countless pop culture references and by futurist thinkers for decades, yet the technology itself remained elusive. Incremental progress was mostly relegated to fringe academic circles and expendable corporate research departments.

    That all changed five years ago. With the advent of modern deep learning, we’ve seen a real glimpse of this technology in action: Computers are beginning to see, hear, and talk. For the first time, AI feels tangible and within reach.

    AI development today is centered around deep learning algorithms like convolutional networks, recurrent networks, generative adversarial networks, reinforcement learning, capsule nets, and others. The one thing all of these have in common is that they take an enormous amount of computing power. To make real progress toward generalizing this kind of intelligence, we need to overhaul the computational systems that fuel this technology.

    The 2009 discovery of the GPU as a compute device is often viewed as a critical juncture that helped usher in the Cambrian explosion around deep learning. Since then, the investment in parallel compute architectures has exploded. The excitement around Google’s TPU (Tensor Processing Unit) is a case in point, but the TPU is really just the beginning. New dedicated AI chip startups raised $1.5 billion in 2017 alone, a CB Insights spokesperson told my team. This is astonishing.

    Reply
  14. Tomi Engdahl says:

    From Cozmo to Vector: How Anki Designs Robots With Emotional Intelligence
    https://www.designnews.com/electronics-test/cozmo-vector-how-anki-designs-robots-emotional-intelligence/203383207559288?ADTRK=UBM&elq_mid=5342&elq_cid=876648

    Anki creates the AI-powered robots Cozmo and Vector, but it wants you to think of them as characters, not toys or machines.

    The little robot on my desk knows my name and recognizes my face. He tells me when he needs maintenance. He has three cubes that he loves to stack, knock over, and play with. He’ll also tell me when he’s bored and wants to play a game. When he wins, he’ll dance around. And if I beat him too many times, he’ll get sad or throw a temper tantrum.

    His name is Cozmo. And even though he’s a 2-inch-tall toy robot, the people that created him don’t want you to think of him as a toy or even as a machine. They want you to think of him as a character, like Wall-E, brought to life. Anki, the company behind Cozmo, says its mission is to “create robots that move you,” combining robotics and artificial intelligence to create technologies with which people can build emotional bonds.

    Reply
  15. Tomi Engdahl says:

    Root says its AI-driven approach to insurance can save good drivers more than 50% on their policies

    A new unicorn is born: Root Insurance raises $100 million for a $1 billion valuation
    https://techcrunch.com/2018/08/22/a-new-unicorn-is-born-root-insurance-raises-100-million-for-a-1-billion-valuation/?utm_source=tcfbpage&sr_share=facebook

    Root Insurance, an Ohio-based car insurance startup with a tech twist, said Wednesday it has raised $100 million in a Series D funding round led by Tiger Global Management, pushing the company’s valuation to $1 billion.

    Root says its approach allows good drivers to save more than 50 percent on their policies compared to traditional insurance carriers.

    The company uses AI algorithms to adjust risk and sometimes provide discounts.

    Reply
  16. Tomi Engdahl says:

    Using AI In Chip Manufacturing
    https://semiengineering.com/using-ai-in-chip-manufacturing/

    Coventor’s CTO drills down into predictive maintenance, the impact of variation, and what this means for yield and future technology.

    Reply
  17. Tomi Engdahl says:

    The New Deep Learning Memory Architectures You Should Know About
    https://semiengineering.com/the-new-deep-learning-memory-architectures-you-should-know-about-2/

    Machines are making faster and better decisions than humans about business and personal matters.

    Reply
  18. Tomi Engdahl says:

    5 trending open source machine learning JavaScript frameworks
    https://opensource.com/article/18/5/machine-learning-javascript-frameworks?sc_cid=7016000000127ECAAY

    Whether you’re a JavaScript developer who wants to dive into machine learning or a machine learning expert who plans to use JavaScript, these open source frameworks may intrigue you.

    The tremendous growth of the machine learning field has been driven by the availability of open source tools that allow developers to build applications easily.

    JavaScript developers have been using various frameworks for training and deploying machine learning models in the browser.

    Reply
  19. Tomi Engdahl says:

    Machine learning: 9 challenges
    https://www.kaspersky.com/blog/machine-learning-nine-challenges/23553/

    The future will probably be awesome, but at present, artificial intelligence (AI) poses some questions, most often to do with morality and ethics. How has machine learning already surprised us? Can you trick a machine, and if so, how difficult is it? And will it all end up with Skynet and the rise of the machines?

    Strong and weak artificial intelligence

    What could go wrong?
    It’s still unclear when strong AI will be developed, but weak AI is already here, working hard in many areas.

    1. Bad intentions
    If we teach an army of drones to kill people using machine learning, can the results be ethical?

    2. Developer bias
    Even if machine-learning algorithm developers mean no harm, a lot of them still want to make money — which is to say, their algorithms are created to benefit the developers, not necessarily for the good of society.

    3. System parameters don’t always include ethics
    Computers by default don’t know anything about ethics. An algorithm can put together a national budget with the goal of “maximizing GDP/labor productivity/life expectancy,” but without ethical limitations programmed into the model, it might eliminate budgets for schools, hospices, and the environment, because they don’t directly increase the GDP.

    4. Ethical relativity
    Ethics change over time, and sometimes quickly. For example, opinions on such issues as LGBT rights and interracial or intercaste marriage can change significantly within a generation.
    Ethics can also vary between groups within the same country, never mind in different countries.

    5. Machine learning changes humans
    Machine-learning systems — just one example of AI that affects people directly — recommend new movies to you based on your ratings of other films and after comparing your preferences with those of other users. Some systems are getting pretty good at it.

    6. False correlations
    A false correlation occurs when things completely independent of each other exhibit very similar behavior, which can create the illusion that they are somehow connected. For example, did you know that margarine consumption in the US correlates strongly with the divorce rate in Maine? (A toy demonstration follows after this list.)

    7. Feedback loops
    Feedback loops are even worse than false correlations. A feedback loop is a situation where an algorithm’s decisions affect reality, which in turn convinces the algorithm that its conclusion is correct.
    For example, a crime-prevention program in California suggested that police should send more officers to African-American neighborhoods based on the crime rate (the number of recorded crimes). But more police cars in a neighborhood led to local residents reporting crimes more frequently.

    8. “Contaminated” or “poisoned” reference data
    The results of algorithm learning depend largely on the reference data that form the basis of learning. The data may turn out to be bad or distorted, whether by accident or through someone’s malicious intent (in the latter case, it’s usually called “poisoning”).
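    As promised under item 6, a small NumPy demonstration of how easily two completely independent random walks produce a striking “correlation”:

        import numpy as np

        rng = np.random.default_rng(0)
        best = 0.0
        for _ in range(1000):                     # many short, independent pairs
            a = rng.normal(size=20).cumsum()      # random walk 1
            b = rng.normal(size=20).cumsum()      # random walk 2, unrelated to 1
            r = np.corrcoef(a, b)[0, 1]
            best = max(best, abs(r))
        print("strongest |correlation| found: %.2f" % best)  # routinely > 0.9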

    Reply
  20. Tomi Engdahl says:

    Everybody dance now – incredible machine-learning-based human pose video transfer https://arxiv.org/abs/1808.07371 shown in this video https://youtu.be/PCBTZh41Ris

    Reply
  21. Tomi Engdahl says:

    How A.I is Shaking the Foundations of Cybersecurity
    https://www.rsaconference.com/blogs/how-ai-is-shaking-the-foundations-of-cybersecurity?utm_source=facebook&utm_medium=social&utm_content=ai-shaking-foundations-blog&utm_campaign=august-image-3652018

    The crossroads are coming faster than ever for the world of cybersecurity.

    Information security leaders have had to do a lot of adapting over the past decade. They’ve shifted from an emphasis on protecting the perimeter of the network to defending access points. They’ve turned a large portion of their outward gaze inward as knowledge and awareness of insider threats mushroomed. They’ve extended their layers of protection endlessly to accommodate the growing thirst for mobile access to data. And they’ve accommodated unfathomable complexity as migration of applications to the cloud has accelerated.

    But all that might prove to be child’s play compared to what’s coming next, which is the very same wave gripping every part of nearly every industry: Namely, the fast-approaching age of artificial intelligence.

    Think about that: A show aimed at school-aged children is teaching them how to verify the effectiveness of an AI program, and is even educating them about one of the technology’s groundbreakers. It doesn’t take a huge leap to see that as a method for guiding the next generation of workers into AI-related jobs.

    Which is why security experts should pause before expressing skepticism about AI’s hype in the security realm. Sure, there’s a certain Kool-Aid quality to all the AI talk. Yes, it’s going to be some years before AI possesses the maturity to deliver on its many promises. And there’s no doubt that security professionals have every reason to fret over their future career paths if AI starts doing all of their work.

    Reply
  22. Tomi Engdahl says:

    Machine learning success needs systematic workflow
    https://www.edn.com/design/systems-design/4461025/Machine-learning-success-needs-systematic-workflow?utm_source=Aspencore&utm_medium=EDN&utm_campaign=social

    There will never be an easy “Point A” to “Point B” when it comes to machine learning (ML). Before even tackling this concept, engineers and scientists should understand they will be tweaking constantly and altering different ideas and approaches to improve their algorithms and models. During this process, challenges will arise, especially with handling data and determining the right model.

    When getting started with machine learning, it’s important for beginners to understand and appreciate the following

    Reply
  23. Tomi Engdahl says:

    Network monitoring is hard… If only there was some kind of machine that could learn to do it
    *AI bursts through wall* ‘OHHH YEAHHH!’
    https://www.theregister.co.uk/2018/08/22/ai_network_monitoring/

    Reply
  24. Tomi Engdahl says:

    AI image recognition systems can be tricked by copying and pasting random objects
    Picture of a human + elephant = Chair. Good job.
    https://www.theregister.co.uk/2018/08/28/ai_image_recognition_tricked/

    You don’t always need to build fancy algorithms to tamper with image recognition systems – adding objects in random places will do the trick.

    In most cases, adversarial models are used to change a few pixels here and there to distort images so objects are incorrectly recognized. A few examples have included stickers that turn images of bananas into toasters, or silly glasses that fool facial recognition systems into believing you’re someone else. Let’s not forget the classic case of a turtle being mistaken for a rifle, to really drill home how easy it is to outwit AI.

    Now researchers from York University and the University of Toronto in Canada have shown that it’s possible to mislead neural networks by copying and pasting pictures of objects into images, too. No clever trickery is needed here.

    They performed a series of experiments with models taken from the TensorFlow Object Detection API, an open source framework built by engineers at Google to perform image recognition tasks.
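    The perturbation itself is easy to reproduce. A sketch using Pillow (the file paths are placeholders, and running an actual detector on the result is left out):

        # Paste a cut-out object into a scene at a random location -- the simple
        # copy-and-paste attack described above. Requires `pip install Pillow`.
        import random
        from PIL import Image

        scene = Image.open("living_room.jpg").convert("RGB")     # placeholder path
        obj = Image.open("elephant_cutout.png").convert("RGBA")  # placeholder path

        # Pick a random position where the object fits fully inside the scene.
        x = random.randint(0, scene.width - obj.width)
        y = random.randint(0, scene.height - obj.height)
        scene.paste(obj, (x, y), mask=obj)  # the alpha channel masks the paste
        scene.save("scene_with_elephant.jpg")
        # Feed both images to the detector and compare: objects near the paste
        # are often mislabeled or vanish entirely.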

    Reply
  25. Tomi Engdahl says:

    Smut slinger dreams of AI software to create hardcore flicks with your face – plus other machine-learning news
    Your need-to-know
    https://www.theregister.co.uk/2018/08/27/ai_roundup/

    Experimental ‘insult bot’ gets out of hand during unsupervised weekend
    Creators ticked off for running CPU flat out over the break
    https://www.theregister.co.uk/2018/08/27/who-me/

    Reply
  26. Tomi Engdahl says:

    Mary Jo Foley / ZDNet:
    Microsoft to add AI-powered audio and video transcription, file recommendations, and content-sharing tools to OneDrive for Business and SharePoint later in 2018 — Microsoft is adding automated audio and video transcription, file recommendations and new content-sharing options to its OneDrive …

    Microsoft touts more AI-powered OneDrive for Business and SharePoint features due later this year
    https://www.zdnet.com/article/microsoft-touts-more-ai-powered-onedrive-for-business-and-sharepoint-features-due-later-this-year/

    Microsoft is adding automated audio and video transcription, file recommendations and new content-sharing options to its OneDrive for Business and SharePoint storage services later this year.

    Starting “later this year,” Microsoft will be adding automated transcription to video and audio files stored in OneDrive and SharePoint. This transcription will use the same technology that Microsoft uses in its Microsoft Stream business video service. OneDrive and SharePoint video and audio files will become fully searchable thanks to these transcription services.

    Reply
  27. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Google Cloud says its Text-to-Speech API is generally available with support for more languages and that its Speech-to-Text API can transcribe multiple speakers

    Google updates its speech services for developers
    https://techcrunch.com/2018/08/28/google-updates-its-speech-services-for-developers/

    Google Cloud’s Text-to-Speech and Speech-to-Text APIs are getting a bunch of updates today that introduce support for more languages, make it easier to hear auto-generated voices on different speakers, and promise better transcripts thanks to improved tools for speaker recognition, among other things.

    With this update, the Cloud Text-to-Speech API is now also generally available.

    Let’s look at the details. The highlight of the release for many developers is probably the launch of the 17 new WaveNet-based voices in a number of new languages. WaveNet is Google’s technology for using machine learning to create text-to-speech audio files. The result is a more natural-sounding voice.

    With this update, the Text-to-Speech API now supports 14 languages and variants and features a total of 30 standard voices and 26 WaveNet voices.
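    A minimal synthesis sketch in the style of the Cloud Text-to-Speech Python quickstart of the time; module layout and voice names vary between client-library versions, so treat the details as assumptions:

        from google.cloud import texttospeech

        client = texttospeech.TextToSpeechClient()
        synthesis_input = texttospeech.types.SynthesisInput(text="Hello from WaveNet")
        voice = texttospeech.types.VoiceSelectionParams(
            language_code="en-US", name="en-US-Wavenet-A")   # one of the WaveNet voices
        audio_config = texttospeech.types.AudioConfig(
            audio_encoding=texttospeech.enums.AudioEncoding.MP3)

        response = client.synthesize_speech(synthesis_input, voice, audio_config)
        with open("output.mp3", "wb") as out:
            out.write(response.audio_content)  # playable MP3 bytes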

    Reply
  28. Tomi Engdahl says:

    Charlie Warzel / BuzzFeed News:
    Reporter outlines how he used Lyrebird’s free AI-powered software to make a realistic copy of his voice, says the digital recreation fooled his mom

    I Used AI To Clone My Voice And Trick My Mom Into Thinking It Was Me
    https://www.buzzfeednews.com/article/charliewarzel/i-used-ai-to-clone-my-voice-and-trick-my-mom-into-thinking

    You can watch our journey into the terrifying future of fake news in the BuzzFeed News series “Follow This” on Netflix.

    In January a man named Aviv Ovadya scared the shit out of me. I’d arranged to chat with him about the future of disinformation, expecting a sober prediction of coming years that would get incrementally worse. But Ovadya painted a far bleaker picture — a future in which an array of easy-to-use and seamless technology would democratize the ability to manipulate perception and falsify reality. What happens, he mused, “when anyone can make it appear as if anything has happened, regardless of whether or not it did?”

    Ovadya told me about “reality apathy,” “human puppets,” and “the Infocalypse.” It was terrifying — more so because early versions of some of the dystopian technology we discussed are already here; some of it is even available to the public.

    Which is how I ended up creating an AI-rendered digital recreation of my voice that was so convincing it fooled the person who arguably knows my voice better than anyone: my mom.

    Reply
  29. Tomi Engdahl says:

    Russell Brandom / The Verge:
    Q&A with five experts on whether it is time to regulate facial recognition technology and its usage by police, and what needs to be done to overcome racial bias

    How should we regulate facial recognition?
    https://www.theverge.com/2018/8/29/17792976/facial-recognition-regulation-rules

    Reply
  30. Tomi Engdahl says:

    No need to code your webpage yourself, says Microsoft – draw it and our AI will do the rest
    Bots try to shift web designers into quality assurance
    https://www.theregister.co.uk/2018/08/29/microsoft_sketch2code_html/

    Microsoft has introduced an AI-infused web design tool called Sketch2Code that converts hand-drawn webpage mockups into functional HTML markup. It’s not to be confused with a similar Airbnb project that has been referred to, unofficially, as sketch2code.

    For years, drag-and-drop web page building apps have been capable of much the same thing, allowing users to move predefined and custom objects onto a digital workspace in order to generate the working web code.

    Sketch2Code Documentation
    https://github.com/Microsoft/ailab/tree/master/Sketch2Code

    Sketch2Code is a solution that uses AI to transform a picture of a handwritten user interface design into valid HTML markup.
    Process flow

    The transformation from a handwritten image to HTML implemented by this solution proceeds as follows:

    1. The user uploads an image through the website.
    2. A custom vision model predicts which HTML elements are present in the image and their locations.
    3. A handwriting recognition service reads the text inside the predicted elements.
    4. A layout algorithm uses the spatial information from the bounding boxes of the predicted elements to generate a grid structure that accommodates them all.
    5. An HTML generation engine uses all these pieces of information to generate HTML markup reflecting the result.
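    A runnable toy that mimics this flow end to end; every model is stubbed with canned results (the real system calls vision and handwriting-recognition services):

        from dataclasses import dataclass

        @dataclass
        class Element:
            kind: str        # predicted HTML element type (step 2)
            box: tuple       # (x, y, w, h) bounding box from the vision model
            text: str = ""   # filled in by handwriting recognition (step 3)

        def detect_elements(image):
            # Stub for the custom vision model.
            return [Element("h1", (10, 5, 200, 30)),
                    Element("button", (10, 50, 80, 25))]

        def recognize_text(image, element):
            # Stub for the handwriting-recognition service.
            return {"h1": "My Page", "button": "Submit"}[element.kind]

        def generate_html(elements):
            # Steps 4-5: order elements top-to-bottom, left-to-right, emit markup.
            ordered = sorted(elements, key=lambda e: (e.box[1], e.box[0]))
            body = "\n".join("  <%s>%s</%s>" % (e.kind, e.text, e.kind)
                             for e in ordered)
            return "<html>\n<body>\n%s\n</body>\n</html>" % body

        image = b"..."   # stand-in for the uploaded picture (step 1)
        elements = detect_elements(image)
        for e in elements:
            e.text = recognize_text(image, e)
        print(generate_html(elements))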

    Reply
  31. Tomi Engdahl says:

    The Five Steps to Bringing Machine Intelligence to Industry
    https://www.ge.com/digital/blog/five-steps-bringing-machine-intelligence-industry

    Here are five essential steps that companies are taking to successfully exploit machine learning in industrial systems:

    Step 1: Clearly define terms

    First, a quick review:

    Artificial intelligence (AI), while often used interchangeably with machine learning, is technically the superset. It encompasses any type of intelligence that can be provided by a machine.
    Machine learning is a programming technique that allows computers to learn and improve without being explicitly programmed. It can encompass both supervised and unsupervised learning, where the former does not mean supervised by a human, but rather learning from past outcomes. Machine learning can encompass a wide range of algorithms and be applied to a broad range of applications—from targeted advertising to voice recognition to anti-malware and more.
    Deep learning is a specific type of machine learning, in which the algorithm is structured like a neural network. Deep learning in its multiple flavors, including deep reinforcement learning, is among the most exciting areas in machine learning today, especially for industrial applications. For example, GE is using deep learning to help vision systems recognize signs of corrosion during inspection. Exciting work is also being done by startups in this space, applying deep reinforcement learning to systems that can take sets of data and learn the optimal way to control industrial systems, robots, HVAC systems, and more.
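    To make the supervised/unsupervised distinction above concrete, here is a minimal scikit-learn sketch (toy data, not GE code):

        # Supervised learning fits labeled examples; unsupervised finds structure.
        from sklearn.cluster import KMeans
        from sklearn.datasets import load_iris
        from sklearn.linear_model import LogisticRegression

        X, y = load_iris(return_X_y=True)

        supervised = LogisticRegression(max_iter=1000).fit(X, y)  # learns from labels y
        print(supervised.predict(X[:3]))   # predicted classes

        unsupervised = KMeans(n_clusters=3, n_init=10).fit(X)     # never sees y
        print(unsupervised.labels_[:3])    # cluster ids (arbitrary numbering)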

    Step 2: Know your business model
    For a time, just offering “machine learning” or “predictive analytics” constituted a business model in itself. As the toolkits and algorithms used to generate analytics have become less expensive and more widespread—to the point of becoming commoditized—that window is now closing. Increasingly, real profitability won’t come just from having a great algorithm, but from having large amounts of great data that algorithm can learn from.

    Step 3: Recognize the importance of domain expertise
    To support industrial systems, machine learning algorithms and tools must incorporate a deep, fundamental understanding of the physical behavior of the assets being managed, as well as the context in which those assets are being used. As noted in the previous step, real value creation in machine learning is increasingly likely to come not from compute and algorithms, but from their application to unique data, informed by deep domain expertise.

    Step 4: Recognize the value of both physical and data models
    The physical nature of industrial devices and systems implies a knowable, objective truth: there is real meaning to a temperature or pressure measurement on a physical asset. Failure modes of a device have a physical origin and explanation. For this reason, companies like GE build sophisticated models of the assets they support. Human expertise can be encoded in decision rules and analytics that help inform the actions required given a set of observed inputs.

    Step 5: Break up the problem into smaller components
    As the pressure mounts to apply analytics and machine learning more quickly and effectively, many companies have stepped in to try to answer this call, with mixed success. One of the biggest problems with many approaches is that they are trying to create a unified “machine learning” solution that extends from the “whiteboard” phase all the way to production. In reality, the things that dictate success in the early whiteboard phase are very different (and sometimes opposed) to what drives success in production.

    Reply
  32. Tomi Engdahl says:

    Energy-Efficient AI
    How to improve the energy efficiency of AI operations.
    https://semiengineering.com/energy-efficient-ai/

    Reply
  33. Tomi Engdahl says:

    AI sucks at stopping online trolls spewing toxic comments
    It’s easy for hate speech to slip past dumb machines
    https://www.theregister.co.uk/2018/08/31/ai_toxic_comments/

    New research has shown just how bad AI is at dealing with online trolls.

    Such systems struggle to automatically flag nudity and violence, don’t understand text well enough to shoot down fake news and aren’t effective at detecting abusive comments from trolls hiding behind their keyboards.

    A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv.

    Adversarial examples can be created automatically by using algorithms to misspell certain words, swap characters for numbers, add random spaces between words, or attach innocuous words such as ‘love’ to sentences.
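    A sketch of those tricks in a few lines of Python (illustrative only, not the researchers’ code):

        import random

        LEET = str.maketrans({"o": "0", "i": "1", "e": "3", "a": "4"})

        def perturb(sentence):
            s = sentence.translate(LEET)                        # characters -> numbers
            chars = list(s)
            chars.insert(random.randrange(1, len(chars)), " ")  # add a random space
            return "".join(chars) + " love"                     # innocuous suffix

        print(perturb("you are a total idiot"))  # e.g. "y0u 4re 4 t0t4l 1d 10t love"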

    The models failed to pick up on the adversarial examples, which successfully evaded detection. These tricks wouldn’t fool humans, but machine learning models are easily blindsided.

    https://arxiv.org/pdf/1808.09115.pdf

    Reply
  34. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    Baidu launches EZDL, a platform that lets users build custom machine learning models using a drag-and-drop interface and without writing any code — Without the technical know-how and the right tools, training machine learning algorithms can be an exercise in frustration.

    Baidu launches EZDL, an AI model training platform that requires no coding experience
    https://venturebeat.com/2018/09/01/baidu-launches-ezdl-an-ai-model-training-platform-that-requires-no-coding-experience/

    Without the technical know-how and the right tools, training machine learning algorithms can be an exercise in frustration. Luckily, for folks who don’t have the wherewithal to wade through the jargon, Baidu this week launched an online tool in beta — EZDL — that makes it easy for virtually anyone to build, design, and deploy artificial intelligence (AI) models without writing a single line of code.

    Baidu’s EZDL was built with performance, ease of use, and security in mind, said Youping Yu, general manager of Baidu’s AI ecosystem division, and it targets three broad categories of machine learning: image classification, object detection, and sound classification. It’s aimed at small and medium-sized businesses, with the goal of “breaking down the barrier” to allow everyone to access AI “in the most convenient and equitable way,” Yu said.

    To train a model, EZDL requires 20-100 images, or more than 50 audio files, assigned to each label, and training takes between 15 minutes and an hour.

    Reply
  35. Tomi Engdahl says:

    Paul Sawers / VentureBeat:
    Google releases Content Safety API, an AI-powered tool to identify online child sexual abuse material and reduce human reviewers’ exposure to the content

    Google releases AI-powered Content Safety API to identify more child abuse images
    https://venturebeat.com/2018/09/03/google-releases-ai-powered-content-safety-api-to-identify-more-child-abuse-images/

    Reply
  36. Tomi Engdahl says:

    http://deepangel.media.mit.edu

    Deep Angel is an artificial intelligence that erases objects from photographs.

    Part art, part technology, and part philosophy, Deep Angel shares Angelus Novus’ gaze into the future. With this platform, you can explore the future of automated media manipulation by either uploading your own photos or submitting a public Instagram account to the AI. Beyond manipulation, Deep Angel enables you to uncover the aesthetics of absence.

    Reply
  37. Tomi Engdahl says:

    IPUs? These New Chips Are Minted For Marketing
    https://www.wired.com/story/ipus-these-new-chips-minted-for-marketing/

    IPU
    n. Short for intelligence processing unit, a new kind of computer chip optimized for AI.

    Way back in the early 2000s, when the first Xbox came out, researchers discovered they could hack videogame consoles for scientific uses.

    Today, researchers still use GPU chips, not just for modeling but for artificial intelligence. Since each one contains lots of mini brains that crowdsource the work in parallel, they’re good at big-data jobs like image recognition. Good, but not awesome. So companies are taking that idea and racing to create a new generation of chips just for AI.

    A startup called Graphcore (which recently built a 2,000-teraflop AI supercomputer the size of a gaming PC) calls them IPUs. Get it? I for intelligence.

    As a name, IPU, unlike its bland _PU predecessors, seems minted for marketing. And for good reason.

    If Moore’s Law taps out soon, as many think it could, future gains in speed will come from specialization: niche chips designed for narrow uses. In a business ruled by lumbering giants, that’s a bonanza for newbies, and the VC money is flying.

    Reply
  38. Tomi Engdahl says:

    IBM’s new system automatically selects the optimal AI algorithm
    https://venturebeat.com/2018/09/04/ibms-new-system-automatically-selects-the-optimal-ai-algorithm/

    Not all deep learning systems — that is to say, systems consisting of layered nodes that ingest data, transform it, output it, and pass it on — are created equal. No algorithm is appropriate for every task, and finding the optimal one can be a long and frustrating exercise. Luckily, there’s hope: IBM developed a system that automates the process.

    Martin Wistuba, a data scientist at IBM Research Ireland, described the method in a recent blog post and accompanying paper. He claims it’s 50,000 times faster than other approaches, with only a small increase in error rate.

    “At IBM, engineers and scientists select the best architecture for a deep learning model from a large set of possible candidates. Today this is a time-consuming manual process; however, using a more powerful automated AI solution to select the neural network can save time and enable non-experts to apply deep learning faster,” he wrote. “My evolutionary algorithm is designed to reduce the search time for the right deep learning architecture to just hours, making the optimization of deep learning network architecture affordable for everyone.”
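    The IBM system itself is not public, but the evolutionary idea can be sketched in miniature: mutate a candidate architecture and keep it when cross-validated accuracy improves (toy scale; the real search mutates far richer architecture descriptions):

        import random

        from sklearn.datasets import load_digits
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier

        X, y = load_digits(return_X_y=True)

        def fitness(layers):
            clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=300,
                                random_state=0)
            return cross_val_score(clf, X, y, cv=3).mean()

        def mutate(layers):
            layers = list(layers)
            i = random.randrange(len(layers))
            layers[i] = max(4, layers[i] + random.choice((-16, 16)))  # tweak a width
            if random.random() < 0.3:
                layers.append(32)                                     # or go deeper
            return tuple(layers)

        best, best_score = (32,), fitness((32,))
        for _ in range(5):            # a few generations of mutate-and-select
            cand = mutate(best)
            score = fitness(cand)
            if score > best_score:
                best, best_score = cand, score
        print(best, round(best_score, 3))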

    Reply
  39. Tomi Engdahl says:

    Intelligent Machines: Can You Build a Robot You Can Trust?
    https://www.techbriefs.com/component/content/article/tb/stories/blog/32917?utm_source=TBnewsletter&utm_medium=Email&utm_campaign=20180904_Main_Newsletter&eid=376641819&bid=2225912

    Rehabilitation patients require a reliable relationship with their physical therapists. But what if that therapist is a robot?

    Although not a mainstream technology in clinical settings (…yet), socially assistive robots (SARs) are already being used as effective rehab strategies for patients who are recovering from strokes or other diseases causing severe functional deficits.

    For the SAR technology to become commonplace, however, robotics engineers will need to deliver both technology improvements and something slightly more abstract and complicated: trust.

    In short: Smarter SARs will likely be more trustworthy for humans to interact with — perhaps much as in the case of human-human interaction.
    — Dr. Philipp Kellmeyer

    Reply
  40. Tomi Engdahl says:

    Turning to Machine Learning for Industrial Automation Applications
    https://www.electronicdesign.com/industrial-automation/turning-machine-learning-industrial-automation-applications?code=UM_NN8DS_010&utm_rid=CPG05000002750211&utm_campaign=19564&utm_medium=email&elq2=24cfaf26357b4bd3b0cb46ba13993c21

    We look at companies using machine learning in their industrial automation and manufacturing facilities and what results it’s generating for those businesses.

    At its core, machine learning is the construction of algorithms that learn from data and make predictions on it by building models from sample inputs. If we break it down further, machine learning borrows heavily from computational statistics (prediction modeling using computers) and mathematical optimization, which provides methods, theory, and application data to those models. In essence, it creates its own data models based on algorithms and then uses them to predict defined patterns within a range of data sets.

    Machine-learning algorithms can be broken down into five types: supervised, unsupervised, semi-supervised, active, and reinforcement, all of which act just like they sound. Supervised algorithms are given both inputs and expected outputs by humans, along with feedback on predictive accuracy during training; the trained model is then applied to new data sets. Unsupervised learning receives no labeled training data and relies on techniques such as “deep learning” (an aspect of AI that automates predictive analytics) to analyze data and find patterns it can use for prediction. Semi-supervised learning is provided with incomplete training sets and learns by filling in the missing components. Active learning queries a human or other information source for labels on the most informative examples. Reinforcement learning receives feedback as the program takes actions in a dynamic environment and learns a predictive model from those actions.

    There are a considerable number of algorithmic approaches to machine learning, and that number continues to grow given that AI, Deep Neural Networks (DNNs), and machine learning (all part of the same family) are still in their infancy. While volumes of information are written for each, they were all created for specific applications. For example, reinforcement learning can be found in self-driving vehicles and computer opponents in video games while decision-tree learning is used extensively for mining big data.

    That being said, machine learning has a surprising number of applications that move beyond self-driving vehicles and video games, including the medical industry (helps physicians make a more informed diagnosis), the financial industry (portfolio management, stock trading, fraud detection, etc.), and retail/customer service (pinpoint customer behaviors for advertisements), to name just a few.
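    As a concrete instance of the decision-tree learning mentioned above, a short scikit-learn example on a bundled data set:

        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_breast_cancer(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy: %.2f" % tree.score(X_te, y_te))  # ~0.9 or better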

    Reply
  41. Tomi Engdahl says:

    Of ML and malware: What’s in store?
    https://www.welivesecurity.com/2018/09/04/ml-malware-whats-in-store/

    All things labeled Artificial Intelligence (AI) or Machine Learning (ML) are making waves, but talk of them in cybersecurity contexts often muddies the waters. A new ESET white paper sets out to bring some clarity to a subject where confusion often reigns supreme.

    It is no mean feat to find an area in business and technology where the proponents of Artificial Intelligence (AI) or Machine Learning (ML) don’t tout the benefits of any of their manifold applications. Cybersecurity is no exception, of course. Given the promised benefits of the technology and the urgency of stemming the rising tide of internet-borne threats, the sustained fever that this “next big thing” has triggered is understandable.

    However, this is also why it might be good to cool down and consider the broader picture, including where the technology’s often already-apparent promise and limitations reside. And, of course, it would be remiss of us not to consider the attendant risks and ask whether AI can fuel future malware.

    Reply
