3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,”

7,002 Comments

  1. Tomi Engdahl says:

    Issie Lapowsky / Protocol:
    FTC warns companies over “the sale or use of … racially biased algorithms”, which could violate federal laws, including the FCRA and the ECOA — In a blog post this week, the Federal Trade Commission signaled that it’s taking a hard look at bias in AI …

    FTC issues stern warning: Biased AI may break the law
    https://www.protocol.com/ftc-bias-ai

    In a blog post this week, the Federal Trade Commission signaled that it’s taking a hard look at bias in AI, warning businesses that selling or using such systems could constitute a violation of federal law.

    “The FTC Act prohibits unfair or deceptive practices,” the post reads. “That would include the sale or use of – for example – racially biased algorithms.”

    The post also notes that biased AI can violate the Fair Credit Reporting Act and the Equal Credit Opportunity Act. “The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits,” it says. “The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.”

    “For me, algorithmic bias is an economic justice issue. We see disparate outcomes coming out of algorithmic decision-making that disproportionately affect and harm Black and brown communities and affect their ability to participate equally in society,” FTC acting Chairwoman Rebecca Kelly Slaughter said at the time. “That’s something we need to address, and we need to think about whether it fits under our unfairness framework, whether we might have rule-making authority that could apply to it or whether we use statutes like the Equal Credit Opportunity Act or the Fair Credit Reporting Act.”

    In the blog post, the FTC urges businesses to test their algorithms for bias, to base their systems on datasets that aren’t missing key demographics and to avoid exaggerating what those systems can do, among other things. On Twitter, University of Washington School of Law professor Ryan Calo called the post a “shot across the bow.”
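
    The kind of check the FTC is pointing at can be as simple as comparing a model’s outcomes across demographic groups. The sketch below is illustrative only; the column names, data, and the 0.8 rule-of-thumb threshold are assumptions, not anything the FTC prescribes:

```python
# Illustrative disparate-impact check on a binary approval outcome,
# grouped by a protected attribute. Column names, data, and the 0.8
# "four-fifths" threshold are assumptions for this sketch.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's approval rate to the highest group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical scored applications: 1 = approved, 0 = denied.
applications = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(applications, "group", "approved")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold, not a legal standard
    print("Warning: approval rates differ substantially across groups.")
```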

    Reply
  2. Tomi Engdahl says:

    Regulatory frameworks for AI and healthcare will undoubtedly continue to grow, especially as innovators attempt to solve increasingly difficult problems with technology.

    The EU Is Proposing Regulations On AI—And The Impact On Healthcare Could Be Significant
    https://trib.al/6l9PzPp

    The emphasis and development of artificial intelligence (AI) is swiftly growing, with innovators across the globe trying to create more viable use-cases for this groundbreaking technology. AI’s market reach has penetrated nearly every large industry, including manufacturing, retail, infrastructure, financial services, defense, and healthcare, among countless other sectors.

    Healthcare especially has attracted an incredible amount of attention in the AI space. The value proposition of AI in healthcare is undoubtedly extensive, especially as the industry is poised to surpass $11 trillion in market valuation, and given that healthcare is such an inherently data-rich, innovation-heavy, and operationally nuanced field.

    Last week, the European Union (EU) put forth its “Proposal for a Regulation on a European approach for Artificial Intelligence,” intending to create “the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.”

    Reply
  3. Tomi Engdahl says:

    “Those of us in machine learning are really good at doing well on a test set,” machine learning pioneer Andrew Ng told an online forum last week, “but unfortunately deploying a system takes more than doing well on a test set.”

    Andrew Ng X-Rays the AI Hype
    https://spectrum.ieee.org/view-from-the-valley/artificial-intelligence/machine-learning/andrew-ng-xrays-the-ai-hype

    There are challenges in making a research paper into something useful in a clinical setting, he indicated.

    “It turns out,” Ng said, “that when we collect data from Stanford Hospital, then we train and test on data from the same hospital, indeed, we can publish papers showing [the algorithms] are comparable to human radiologists in spotting certain conditions.”

    But, he said, “It turns out [that when] you take that same model, that same AI system, to an older hospital down the street, with an older machine, and the technician uses a slightly different imaging protocol, that data drifts to cause the performance of AI system to degrade significantly. In contrast, any human radiologist can walk down the street to the older hospital and do just fine.

    “So even though at a moment in time, on a specific data set, we can show this works, the clinical reality is that these models still need a lot of work to reach production.”

    This gap between research and practice is not unique to medicine, Ng pointed out, but exists throughout the machine learning world.

    “All of AI, not just healthcare, has a proof-of-concept-to-production gap,” he says. “The full cycle of a machine learning project is not just modeling. It is finding the right data, deploying it, monitoring it, feeding data back [into the model], showing safety—doing all the things that need to be done [for a model] to be deployed. [That goes] beyond doing well on the test set, which fortunately or unfortunately is what we in machine learning are great at.”
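
    A toy illustration of the gap Ng describes (not his setup): a classifier that looks excellent on a held-out test set from its own “hospital” can lean on a scanner-specific artifact and fall apart once that artifact disappears at the hospital down the street. Everything below is synthetic and assumed for the sketch:

```python
# Synthetic illustration of the proof-of-concept-to-production gap:
# in the "training hospital" a protocol artifact happens to track the
# label, so the model scores well on its own test set but degrades
# badly on data from a hospital where that artifact is absent.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, artifact_tracks_label=True):
    y = rng.integers(0, 2, size=n)
    signal = y + rng.normal(scale=1.0, size=n)               # weak true signal
    if artifact_tracks_label:
        artifact = 3.0 * y + rng.normal(scale=0.3, size=n)   # imaging quirk correlated with labels
    else:
        artifact = rng.normal(scale=0.3, size=n)             # different protocol: quirk gone
    return np.column_stack([signal, artifact]), y

X_train, y_train = make_data(2000)
X_test,  y_test  = make_data(500)                               # same hospital
X_other, y_other = make_data(500, artifact_tracks_label=False)  # hospital down the street

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("same-hospital test accuracy :", model.score(X_test, y_test))
print("other-hospital accuracy     :", model.score(X_other, y_other))
```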

    Reply
  4. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    Microsoft open sources Counterfit, an AI security risk assessment tool that comes preloaded with algorithms that can be used to evade and steal AI models — Microsoft today open-sourced Counterfit …

    Microsoft open-sources Counterfit, an AI security risk assessment tool
    https://venturebeat.com/2021/05/04/microsoft-open-sources-counterfit-an-ai-security-risk-assessment-tool/

    Microsoft today open-sourced Counterfit, a tool designed to help developers test the security of AI and machine learning systems. The company says that Counterfit can enable organizations to conduct assessments to ensure that the algorithms used in their businesses are robust, reliable, and trustworthy.

    AI is being increasingly deployed in regulated industries like health care, finance, and defense. But organizations are lagging behind in their adoption of risk mitigation strategies. A Microsoft survey found that 25 out of 28 businesses indicated they don’t have the right resources in place to secure their AI systems, and that security professionals are looking for specific guidance in this space.

    AI security risk assessment using Counterfit
    https://www.microsoft.com/security/blog/2021/05/03/ai-security-risk-assessment-using-counterfit/

    Today, we are releasing Counterfit, an automation tool for security testing AI systems as an open-source project. Counterfit helps organizations conduct AI security risk assessments to ensure that the algorithms used in their businesses are robust, reliable, and trustworthy.

    AI systems are increasingly used in critical areas such as healthcare, finance, and defense. Consumers must have confidence that the AI systems powering these important domains are secure from adversarial manipulation. For instance, one of the recommendations from Gartner’s Top 5 Priorities for Managing AI Risk Within Gartner’s MOST Framework, published in January 2021, is that organizations “Adopt specific AI security measures against adversarial attacks to ensure resistance and resilience,” noting that “By 2024, organizations that implement dedicated AI risk management controls will successfully avoid negative AI outcomes twice as often as those that do not.”

    However, performing security assessments of production AI systems is nontrivial. Microsoft surveyed 28 organizations, spanning Fortune 500 companies, governments, non-profits, and small and medium sized businesses (SMBs), to understand the current processes in place to secure AI systems. We found that 25 out of 28 businesses indicated they don’t have the right tools in place to secure their AI systems and that security professionals are looking for specific guidance in this space.
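
    Counterfit’s own commands aren’t shown here; the snippet below is just a hand-rolled sketch of the simplest kind of evasion attack such tools automate, a fast-gradient-sign perturbation against a toy logistic-regression model. The weights, input, and epsilon are all made up for illustration:

```python
# Hand-rolled sketch of an evasion attack of the sort security tools
# like Counterfit bundle: an FGSM-style perturbation against a toy
# logistic-regression "model". Weights, input, and epsilon are made up.
import numpy as np

# A toy trained model: weights, bias, and one input it classifies correctly.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.8, -0.6, 0.4])   # true label: 1

def predict_proba(v):
    return 1.0 / (1.0 + np.exp(-(w @ v + b)))

# Gradient of the cross-entropy loss w.r.t. the input, for label y = 1.
grad = (predict_proba(x) - 1.0) * w

# Nudge the input in the direction that increases the loss the fastest.
epsilon = 0.8
x_adv = x + epsilon * np.sign(grad)

print("clean prediction      :", round(float(predict_proba(x)), 3))      # ~0.94 -> class 1
print("adversarial prediction:", round(float(predict_proba(x_adv)), 3))  # drops below 0.5 -> misclassified
```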

    Reply
  5. Tomi Engdahl says:

    In its draft proposal for an EU-wide law, the European Commission argues that there are some things AI just shouldn’t be used for

    Too Perilous For AI? EU Proposes Risk-Based Rules
    https://spectrum.ieee.org/tech-talk/artificial-intelligence/embedded-ai/euairules

    As part of its emerging role as a global regulatory watchdog, the European Commission published a proposal on 21 April for regulations to govern artificial intelligence use in the European Union.

    The economic stakes are high: the Commission predicts European public and private investment in AI will reach €20 billion a year this decade, and that was before up to €134 billion more was earmarked for digital transitions in Europe’s Covid-19 pandemic recovery fund, some of which the Commission presumes will fund AI, too. Add to that investments in AI made outside the EU but targeting EU residents, since these rules will apply to any use of AI in the EU, not just to EU-based companies or governments.

    Things aren’t going to change overnight: the EU’s AI rules proposal is the result of three years of work by bureaucrats, industry experts, and public consultations and must go through the European Parliament—which requested it—before it can become law. EU member states then often take years to transpose EU-level regulations into their national legal codes.

    The proposal defines four tiers for AI-related activity and differing levels of oversight for each. The first tier is unacceptable risk: some AI uses would be banned outright in public spaces, with specific exceptions granted by national laws and subject to stricter logging and additional human oversight. The to-be-banned AI activity that has probably garnered the most attention is real-time remote biometric identification, i.e. facial recognition. The proposal also bans subliminal behavior modification and social scoring applications, and suggests fines of up to 6 percent of commercial violators’ global annual revenue.

    The proposal next defines a high-risk category, determined by the purpose of the system and the potential and probability of harm. Examples listed in the proposal include job recruiting, credit checks, and the justice system. The rules would require such AI applications to use high-quality datasets, document their traceability, share information with users, and account for human oversight. The EU would create a central registry of such systems under the proposed rules and require approval before deployment.

    Limited-risk activities, such as the use of chatbots or deepfakes on a website, will have less oversight but will require a warning label, to allow users to opt in or out. Then finally there is a tier for applications judged to present minimal risk.

    “I think one of the ideas behind this piece of regulation was trying to balance risk and get people excited about AI and regain trust,”

    Reply
  6. Tomi Engdahl says:

    #TBT: “Perhaps the Turing test doesn’t assess whether a machine is intelligent, but whether we’re willing to accept it as intelligent.”

    Untold History of AI: Why Alan Turing Wanted AI Agents to Make Mistakes
    https://spectrum.ieee.org/tech-talk/tech-history/dawn-of-electronics/untold-history-of-ai-why-alan-turing-wanted-ai-to-make-mistakes

    Part 3: Turing’s Error

    In 1950, at the dawn of the digital age, Alan Turing published what was to become his best-known article, “Computing Machinery and Intelligence,” in which he poses the question, “Can machines think?”

    Instead of trying to define the terms “machine” and “think,” Turing outlines a different method for answering this question derived from a Victorian parlor amusement called the imitation game. The rules of the game stipulated that a man and a woman, in different rooms, would communicate with a judge via handwritten notes. The judge had to guess who was who, but their task was complicated by the fact that the man was trying to imitate a woman.

    Inspired by this game, Turing devised a thought experiment in which one contestant was replaced by a computer.

    Reply
  7. Tomi Engdahl says:

    Report: AI assistants don’t want to learn
    https://www.uusiteknologia.fi/2021/05/06/raportti-tekoalyavustimet-eivat-halua-oppia/

    Today’s AI solutions are not yet as advanced as we might imagine. A recent Finnish study examined how people try to make use of AI, for example with streaming services’ recommendation algorithms and voice-controlled smart assistants. For many, the solutions are far too rigid in how they work.

    “Everyday technologies equipped with AI often force their users into a passive role, without giving people adequate opportunities to ‘teach’ them. The users we interviewed repeatedly ran into situations where they would have wanted to nudge the AI in the right direction or tell it that it had made a mistake,” says researcher Kirsi Hantula of Alice Labs.

    “Many consumers have been using various AI-powered services for more than a decade, and they are no longer as easy to please. When a service’s algorithm still feels too inaccurate and cannot be taught, people grow tired of recommendations that miss the mark or of the mistakes a smart assistant keeps repeating. We call this algorithm fatigue: the user feels that reality is still far from the kind of help they would want from the technology,” says Hantula of Alice Labs.

    Reply
  8. Tomi Engdahl says:

    AI Is Harder Than We Think: 4 Key Fallacies in AI Research
    https://singularityhub.com/2021/05/06/to-advance-ai-we-need-to-better-understand-human-intelligence-and-address-these-4-fallacies/

    Artificial intelligence has been all over headlines for nearly a decade, as systems have made quick progress in long-standing AI challenges like image recognition, natural language processing, and games. Tech companies have woven machine learning algorithms into search and recommendation engines and facial recognition systems, and OpenAI’s GPT-3 and DeepMind’s AlphaFold promise even more practical applications, from writing to coding to scientific discoveries.

    Indeed, we’re in the midst of an AI spring, with investment in the technology burgeoning and an overriding sentiment of optimism and possibility towards what it can accomplish and when.

    Reply
  9. Tomi Engdahl says:

    Smile! You’re on Doppler Camera!
    Vid2Doppler creates synthetic Doppler radar sensor data from videos of human activities to train privacy-preserving machine learning models.
    https://www.hackster.io/news/smile-you-re-on-doppler-camera-e18951501d69

    Reply
  10. Tomi Engdahl says:

    The Universe Is a Machine That Keeps Learning, Scientists Say
    Basically, we live in one giant algorithm.
    https://www.popularmechanics.com/science/a36112655/universe-is-self-learning-algorithm/

    Reply
  11. Tomi Engdahl says:

    I Built A Bot To Apply To Thousands Of Jobs At Once–Here’s What I Learned
    As this job seeker’s “faith in the front-facing application process eroded into near oblivion,” a lower-tech strategy took its place.
    https://www.fastcompany.com/3069166/i-built-a-bot-to-apply-to-thousands-of-jobs-at-once-heres-what-i-learned

    Reply
  12. Tomi Engdahl says:

    Watching AI Slowly Forget a Human Face Is Incredibly Creepy
    This time lapse of a neural network with the neurons slowly switching off is a haunting experiment in machine learning.
    https://www.vice.com/en/article/evym4m/ai-told-me-human-face-neural-networks?utm_source=motherboardtv_facebook&utm_medium=social

    A programmer created an algorithmically-generated face, and then made the network slowly forget what its own face looked like.

    The result, a piece of video art titled “What I saw before the darkness,” is an eerie time-lapse view of the inside of a demented AI’s mind as its artificial neurons are switched off, one by one, HAL 9000 style.
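
    As a rough sketch of the mechanism (not the artist’s actual code), the idea is simply to zero out a network’s hidden units in steps and regenerate the output after each step; with a trained face generator the output decays into noise, which is what the time lapse shows. The toy network and sizes below are assumptions:

```python
# Toy sketch of the mechanism: switch off hidden units of a small random
# "generator" in steps and regenerate the output each time. With a real
# trained face generator, the images decay into noise as units vanish.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(64, 16))      # latent (16) -> hidden (64), stand-in for trained weights
W2 = rng.normal(size=(8 * 8, 64))   # hidden -> 8x8 "image"
z  = rng.normal(size=16)            # fixed latent code for "the face"

def generate(active_units: int) -> np.ndarray:
    h = np.tanh(W1 @ z)
    h[active_units:] = 0.0           # switch off the remaining neurons
    return (W2 @ h).reshape(8, 8)

# One "frame" per step, with fewer and fewer neurons left alive.
frames = [generate(k) for k in range(64, -1, -8)]
print("mean output magnitude per frame:",
      [round(float(np.abs(f).mean()), 2) for f in frames])
```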

    Reply
  13. Tomi Engdahl says:

    James Vincent / The Verge:
    Google announces LaMDA, a language model for dialogue applications that it says represents a “breakthrough” for having natural conversations with AI — Google wants to build AI that understands language better — Artificial intelligence is a huge part of Google’s business …

    Google showed off its next-generation AI by talking to Pluto and a paper airplane
    Google wants to build AI that understands language better
    https://www.theverge.com/2021/5/18/22442328/google-io-2021-ai-language-model-lamda-pluto?scrolla=5eb6d68b7fedc32c19ef33b4

    Reply
  14. Tomi Engdahl says:

    For companies that use ML, labeled data is the key differentiator
    https://techcrunch.com/2021/05/18/for-companies-that-use-ml-labeled-data-is-the-key-differentiator/?tpcc=ecfb2020

    AI is driving the paradigm shift that is the software industry’s transition from writing logical statements to data-centric programming. Data is now oxygen: the more training data a company gathers, the brighter its AI-powered products will burn.

    Why is Tesla so far ahead with advanced driver assistance systems (ADAS)? Because no one else has collected as much information — it has data on more than 10 billion driven miles, helping it pull ahead of competition like Waymo, which has only about 20 million miles. But any company that is considering using machine learning (ML) cannot overlook one technical choice: supervised or unsupervised learning.

    There is a fundamental difference between the two. For unsupervised learning, the process is fairly straightforward: The acquired data is directly fed to the models, and if all goes well, they will identify patterns.

    Elon Musk compares unsupervised learning to the human brain, which gets raw data from the six senses and makes sense of it. He recently shared that making unsupervised learning work for ADAS is a major challenge that hasn’t been solved yet.

    Supervised learning is currently the most practical approach for most ML challenges. O’Reilly’s 2021 report on AI Adoption in the Enterprise found that 82% of surveyed companies use supervised learning, while only 58% use unsupervised learning.
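
    The practical difference is easy to see in code. In the minimal scikit-learn sketch below (the dataset and model choices are illustrative), the supervised model needs the labels to train, while the unsupervised one sees only the raw features and has to find structure on its own:

```python
# Minimal contrast between supervised and unsupervised learning on the
# same data (scikit-learn; dataset and model choices are illustrative).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=300, centers=3, random_state=42)

# Supervised: needs the labels y and learns to predict them directly.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised training accuracy:", clf.score(X, y))

# Unsupervised: sees only X and has to find structure (clusters) on its own;
# the cluster IDs it assigns mean nothing until a human maps them to labels.
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("first 10 cluster assignments:", km.labels_[:10].tolist())
```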

    Reply
  15. Tomi Engdahl says:

    This week at Google I/O, CEO Sundar Pichai announced a powerful new AI chip called the TPUv4. Its performance is especially impressive when you stack up its scores on standard ML training benchmarks against other leading AI systems.

    https://spectrum.ieee.org/tech-talk/computing/hardware/heres-how-googles-tpu-v4-ai-chip-stacked-up-in-training-tests

    Reply
  16. Tomi Engdahl says:

    It has become its own creator.

    Google’s AI Is Now Creating Its Own AI
    https://www.iflscience.com/technology/google-ai-creating-own-ai/

    Reply
  17. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Microsoft, an investor in OpenAI, says it’s now using the GPT-3 model in its low-code Power Apps service to translate natural language text into code — Unlike in other years, this year’s Microsoft Build developer conference is not packed with huge surprises — but there’s one announcement …

    Microsoft uses GPT-3 to let you code in natural language
    https://techcrunch.com/2021/05/25/microsoft-uses-gpt-3-to-let-you-code-in-natural-language/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cudGVjaG1lbWUuY29tLw&guce_referrer_sig=AQAAAHL5rWBgUwlOQYdwD16XZG7isdAE6rgdF4_qQ_90BAMLNodLU-bfF0I9GCnA1MLLHw51NPwZrmRPT1oto9DYtmqBw7IEhVCdrmO2EbKVV9JFuqycWRYqd73swS3riyYkZ8EqGZZQmRoapeeFRw4vnPcpFMmEfOnE-fEs6ZyEcZfh

    Unlike in other years, this year’s Microsoft Build developer conference is not packed with huge surprises — but there’s one announcement that will surely make developers’ ears perk up: The company is now using OpenAI’s massive GPT-3 natural language model in its no-code/low-code Power Apps service to translate spoken text into code in its recently announced Power Fx language.

    Now don’t get carried away. You’re not going to develop the next TikTok while only using natural language. Instead, what Microsoft is doing here is taking some of the low-code aspects of a tool like Power Apps and using AI to essentially turn those into no-code experiences, too. For now, the focus here is on Power Apps formulas, which, despite the low-code nature of the service, are something you’ll have to write sooner or later if you want to build an app of any sophistication.

    “Using an advanced AI model like this can help our low-code tools become even more widely available to an even bigger audience by truly becoming what we call no code,” said Charles Lamanna, corporate vice president for Microsoft’s low-code application platform.
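
    The article doesn’t detail the exact pipeline, but the general pattern (a natural-language request in, formula text out) can be sketched with the 2021-era OpenAI completions API. The prompt wording, engine choice, and example output below are assumptions for illustration, not Microsoft’s actual Power Apps integration:

```python
# Rough sketch of the pattern only (natural language in, formula text out),
# using the 2021-era OpenAI completions API. Prompt wording, engine choice,
# and the example output are assumptions, not Microsoft's pipeline.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Translate the request into a Power Fx formula.\n"
    "Request: show accounts whose name contains 'Contoso', newest first\n"
    "Formula:"
)

response = openai.Completion.create(
    engine="davinci",   # illustrative engine choice
    prompt=prompt,
    max_tokens=60,
    temperature=0,
    stop=["\n"],
)
print(response.choices[0].text.strip())
# Might print something along the lines of:
# SortByColumns(Filter(Accounts, "Contoso" in 'Account Name'), "createdon", Descending)
```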

    Reply
  18. Tomi Engdahl says:

    Microsoft uses GPT-3 to let you code in natural language
    https://techcrunch.com/2021/05/25/microsoft-uses-gpt-3-to-let-you-code-in-natural-language/
    Microsoft is using OpenAI’s massive GPT-3 natural language model in its no-code/low-code Power Apps service to translate spoken text into code in its recently announced Power Fx language.

    Reply
  19. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    Facebook says it partnered with Microsoft to launch the PyTorch Enterprise Support Program, helping companies use the open source ML library PyTorch

    Facebook and Microsoft launch PyTorch Enterprise Support Program
    https://venturebeat.com/2021/05/25/facebook-and-microsoft-launch-pytorch-enterprise-support-program/

    Facebook today announced the launch of the PyTorch Enterprise Support Program, which enables service providers to develop and offer tailored enterprise-grade support to their customers. Facebook says the new offering, built in collaboration with Microsoft, was created in direct response to feedback from PyTorch enterprise users developing models in production for mission-critical apps.

    PyTorch, which Facebook publicly released in January 2017, is an open source machine learning library based on Torch, a scientific computing framework and script language that is in turn based on the Lua programming language. While TensorFlow has been around slightly longer (since November 2015), PyTorch continues to see rapid uptake in the data science and developer community. It claimed one of the top spots for fast-growing open source projects last year, according to GitHub’s 2018 Octoverse report, and Facebook recently revealed that in 2019 the number of contributors on the platform grew more than 50% year-over-year to nearly 1,200.

    The PyTorch Enterprise Support Program is available to any service provider and “designed [to] mutually benefit all program participants by sharing and improving PyTorch long-term support (LTS),” Facebook says — including contributions of hotfixes and other improvements found while working with customers and on their systems. To benefit the open source community, hotfixes that participants develop will be tested and fed back to the LTS releases of PyTorch regularly through PyTorch’s pull request process.

    The standard way of researching and deploying with different release versions of PyTorch won’t change with the PyTorch Enterprise Support Program. But to participate, service providers must apply and meet a set of program terms and certification requirements. Once accepted, the service provider becomes a program participant and can offer a packaged PyTorch Enterprise support service with LTS, prioritized troubleshooting, useful integrations, and more.

    Reply
  20. Tomi Engdahl says:

    #TBT: “IBM discovered that its powerful technology was no match for the messy reality of today’s health care system.” Eliza Strickland’s 2019 investigation into the failure of Watson.

    How IBM Watson Overpromised and Underdelivered on AI Health Care
    https://spectrum.ieee.org/biomedical/diagnostics/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care

    One dazzling 2014 demonstration of Watson’s brainpower showed off its potential to transform medicine using AI—a goal that IBM CEO Virginia Rometty often calls the company’s moon shot. In the demo, Watson took a bizarre collection of patient symptoms and came up with a list of possible diagnoses, each annotated with Watson’s confidence level and links to supporting medical literature.

    Within the comfortable confines of the dome, Watson never failed to impress: Its memory banks held knowledge of every rare disease, and its processors weren’t susceptible to the kind of cognitive bias that can throw off doctors. It could crack a tough case in mere seconds. If Watson could bring that instant expertise to hospitals and clinics all around the world, it seemed possible that the AI could reduce diagnosis errors, optimize treatments, and even alleviate doctor shortages—not by replacing doctors but by helping them do their jobs faster and better.

    Outside of corporate headquarters, however, IBM has discovered that its powerful technology is no match for the messy reality of today’s health care system. And in trying to apply Watson to cancer treatment, one of medicine’s biggest challenges, IBM encountered a fundamental mismatch between the way machines learn and the way doctors work.

    IBM’s bold attempt to revolutionize health care began in 2011. The day after Watson thoroughly defeated two human champions in the game of Jeopardy!, IBM announced a new career path for its AI quiz-show winner: It would become an AI doctor. IBM would take the breakthrough technology it showed off on television—mainly, the ability to understand natural language—and apply it to medicine.

    “Reputationally, I think they’re in some trouble,” says Robert Wachter, chair of the department of medicine at the University of California, San Francisco, and author of the 2015 book The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age (McGraw-Hill). In part, he says, IBM is suffering from its ambition: It was the first company to make a major push to bring AI to the clinic. But it also earned ill will and skepticism by boasting of Watson’s abilities. “They came in with marketing first, product second, and got everybody excited,” he says. “Then the rubber hit the road. This is an incredibly hard set of problems, and IBM, by being first out, has demonstrated that for everyone else.”

    At a 2017 conference of health IT professionals, IBM CEO Rometty told the crowd that AI “is real, it’s mainstream, it’s here, and it can change almost everything about health care,” and added that it could usher in a medical “golden age.” She’s not alone in seeing an opportunity: Experts in computer science and medicine alike agree that AI has the potential to transform the health care industry. Yet so far, that potential has primarily been demonstrated in carefully controlled experiments. Only a few AI-based tools have been approved by regulators for use in real hospitals and doctors’ offices. Those pioneering products work mostly in the visual realm, using computer vision to analyze images like X-rays and retina scans. (IBM does not have a product that analyzes medical images, though it has an active research project in that area.)

    Looking beyond images, however, even today’s best AI struggles to make sense of complex medical information. And encoding a human doctor’s expertise in software turns out to be a very tricky proposition. IBM has learned these painful lessons in the marketplace, as the world watched. While the company isn’t giving up on its moon shot, its launch failures have shown technologists and physicians alike just how difficult it is to build an AI doctor.

    Reply
  21. Tomi Engdahl says:

    Startup Flawless AI and researchers at the Max Planck Institute are commercializing AI that can capture actors’ performances from 2D film or TV image frames and transform the actor’s voice and facial expression to fit an entirely different language.

    AI Modifies Actor Performances for Flawless Dubbing
    https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/ai-modifies-actor-performances-for-flawless-dubbing

    Reply
  22. Tomi Engdahl says:

    Fast Company:
    During the pandemic, electronic health record provider Epic quickly rolled out a non-peer reviewed AI-powered tool for triaging patients across US hospitals

    How a largely untested AI algorithm crept into hundreds of hospitals
    https://www.fastcompany.com/90641343/epic-deterioration-index-algorithm-pandemic-concerns

    During the pandemic, the electronic health record giant Epic quickly rolled out an algorithm to help doctors decide which patients needed the most immediate care. Doctors believe it will change how they practice.

    Reply
  23. Tomi Engdahl says:

    Killer drones may have autonomously attacked humans for the first time
    https://metro.co.uk/2021/05/31/killer-drones-may-have-autonomously-attacked-humans-for-the-first-time-14679285/

    Drones have been a mainstay on the battlefield for years now, but they have always required a human pilot to pull the trigger.

    That might be about to change.

    Last year, a group of Libyan rebels were attacked by drones acting autonomously, according to a UN report.

    The report alleges these ‘unmanned combat aerial vehicles and lethal autonomous weapons systems’ attacked the rebels without any input from a human operator.

    The Kargu-2 drones can be flown by human operators, or they can use on-board cameras and artificial intelligence to seek out targets autonomously.

    The drones ‘were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.’

    The report was provided by an anonymous source to New Scientist.

    If it proves to be accurate, it will be the first time that an autonomous drone has hunted down and attacked a human.

    Drones packed with explosives may have ‘hunted down’ and attacked HUMANS for the first time without using a remote pilot to guide them
    https://www.dailymail.co.uk/sciencetech/article-9629801/Fully-autonomous-drones-hunted-attacked-humans-time.html

    Reply
  24. Tomi Engdahl says:

    Alyse Stanley / Gizmodo:
    UN report: a Turkish-made autonomous weaponized drone “hunted down” and attacked a human target without instructions to do so during a 2020 conflict in Libya

    The Age of Autonomous Killer Robots May Already Be Here
    https://gizmodo.com/flying-killer-robot-hunted-down-a-human-target-without-1847001471?scrolla=5eb6d68b7fedc32c19ef33b4

    A “lethal” weaponized drone “hunted down” and “remotely engaged” human targets without its handlers’ say-so during a conflict in Libya last year, according to a United Nations report first covered by New Scientist this week. Whether there were any casualties remains unclear, but if confirmed, it would likely be the first recorded death carried out by an autonomous killer robot.

    In March 2020, a Kargu-2 attack quadcopter, which the agency called a “lethal autonomous weapon system,” targeted retreating soldiers and convoys led by Libyan National Army’s Khalifa Haftar during a civil conflict with Libyan government forces.

    “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” the UN Security Council’s Panel of Experts on Libya wrote in the report.

    It remains unconfirmed whether any soldiers were killed in the attack, although the UN experts imply as much. The drone, which can be directed to self-destruct on impact, was “highly effective” during the conflict in question when used in combination with unmanned combat aerial vehicles, according to the panel. The battle resulted in “significant casualties,” it continued, noting that Haftar’s forces had virtually no defense against remote aerial attacks.

    The Kargu-2 is a so-called loitering drone that uses machine learning algorithms and real-time image processing to autonomously track and engage targets. According to Turkish weapons manufacturer STM, it’s specifically designed for asymmetric warfare and anti-terrorist operations and has two operating modes, autonomous and manual. Several can also be linked together to create a swarm of kamikaze drones.

    Reply
  25. Tomi Engdahl says:

    A rogue killer drone ‘hunted down’ a human target without being instructed to, UN report says
    https://www.businessinsider.com/killer-drone-hunted-down-human-target-without-being-told-un-2021-5

    A deadly drone “hunted down” a human target without being instructed to do so, according to a UN report.
    The incident took place during clashes in Libya last year, the Daily Star reported.
    Experts are sounding the alarm about the lack of regulation around using “killer robots.”

    A “lethal” weaponized drone “hunted down a human target” without being told to for the first time, according to a UN report seen by the New Scientist.

    The March 2020 incident saw a KARGU-2 quadcopter autonomously attack a human during a conflict between Libyan government forces and a breakaway military faction, led by the Libyan National Army’s Khalifa Haftar, the Daily Star reported.

    Reply
  26. Tomi Engdahl says:

    China’s gigantic multi-modal AI is no one-trick pony
    https://www.engadget.com/chinas-gigantic-multi-modal-ai-is-no-one-trick-pony-211414388.html

    Sporting 1.75 trillion parameters, Wu Dao 2.0 is roughly ten times the size of OpenAI’s GPT-3.

    When OpenAI’s GPT-3 model made its debut in May of 2020, its performance was widely considered to be the literal state of the art. Capable of generating text indiscernible from human-crafted prose, GPT-3 set a new standard in deep learning. But oh what a difference a year makes. Researchers from the Beijing Academy of Artificial Intelligence announced on Tuesday the release of their own generative deep learning model, Wu Dao, a mammoth AI seemingly capable of doing everything GPT-3 can do, and more.

    Reply
  27. Tomi Engdahl says:

    Don’t fear the singularity
    Technology is moving up the skill curve. As AI becomes more ‘I’, how far can it go?
    https://artofthepossible.economist.com/how-artificial-intelligence-is-shaping-future

    You can look at the revolution in artificial intelligence (AI) in two ways—as two alternate universes, almost. One, where machines support and enhance the human. Star Trek, if you will. The other, where access to technology has created an impassable divide between the haves and have-nots, more like the dystopian film Elysium.

    Reply
  28. Tomi Engdahl says:

    Top 7 Most Common Errors When Implementing AI and Machine Learning Systems in 2021. #AI #MachineLearning #BigData #DeepLearning

    https://www.immuniweb.com/blog/top-ai-machine-learning-errors.html

    Reply
  29. Tomi Engdahl says:

    A multi-tier approach to #MachineLearning at the edge can help streamline both development and deployment for the #AIoT
    #EVS21 #AI #IoT
    https://buff.ly/3ioEPHD

    Reply
  30. Tomi Engdahl says:

    Melissa Heikkilä / Politico:
    An open letter with 170 signatories in 55 countries calls for a ban on biometric recognition tech, after EU proposes AI rules that critics say have loopholes — Activists fear loopholes in the bloc’s artificial intelligence bill could allow for widespread facial recognition beyond Europe’s borders.

    Europe’s AI rules open door to mass use of facial recognition, critics warn
    https://www.politico.eu/article/eu-ai-artificial-intelligence-rules-facial-recognition/

    Activists fear loopholes in the bloc’s artificial intelligence bill could allow for widespread facial recognition beyond Europe’s borders.

    Reply
