Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” Scriffignano says.
7,003 Comments
Tomi Engdahl says:
Researchers succeeded in converting human brain signals directly into speech – the AI’s success rate was as high as 97 percent
https://tekniikanmaailma.fi/tutkijat-onnistuivat-muuttamaan-ihmisen-aivosignaaleja-suoraan-puheeksi-tekoalyn-onnistumisprosentti-oli-parhaimmillaan-97/
Tomi Engdahl says:
Hospitals Deploy AI Tools to Detect COVID-19 on Chest Scans
https://spectrum.ieee.org/the-human-os/biomedical/imaging/hospitals-deploy-ai-tools-detect-covid19-chest-scans
Tomi Engdahl says:
Scientists develop AI that can turn brain activity into text
Researchers in US tracked the neural data from people while they were speaking
https://www.theguardian.com/science/2020/mar/30/scientists-develop-ai-that-can-turn-brain-activity-into-text
Tomi Engdahl says:
https://techcrunch.com/2020/04/01/deepminds-agent57-ai-agent-can-best-human-players-across-a-suite-of-57-atari-games/
Tomi Engdahl says:
Extracting invoices using AI in a few lines of code
https://medium.com/@bzamecnik/extracting-invoices-using-ai-in-a-few-lines-of-code-96e412df7a7a
Extracting information from invoices is hard because no two invoices are alike. People mostly spend time doing it by hand. Big companies try to set up template-based software and struggle to handle the many corner cases.
At Rossum we train state-of-the-art neural networks to extract data successfully from previously unseen invoices.
Whether you’re building a mobile app for tracking finances and want to import invoices via the phone camera, or you’re a student helping your accountant mom avoid typing up a stack of invoices on weekends, you can save a lot of tedious work with the Elis Extraction API.
https://rossum.ai/developers/
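For developers who want to try this, here is a minimal sketch of what calling such an extraction API from Python might look like. The endpoint URL, field names, and response shape below are illustrative assumptions, not Rossum’s documented contract — see the developer link above for the real API.

import requests

API_URL = "https://api.example.com/v1/invoices"  # hypothetical endpoint, not Rossum's real one
API_KEY = "your-api-key"

def extract_invoice(pdf_path):
    """Upload an invoice PDF and return the extracted fields as a dict."""
    with open(pdf_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": "Bearer " + API_KEY},
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()  # e.g. {"invoice_number": "...", "total": "..."}

print(extract_invoice("invoice.pdf"))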
Tomi Engdahl says:
Understanding the limits of convolutional neural networks — one of AI’s greatest achievements
https://thenextweb.com/neural/2020/03/20/understanding-the-limits-of-convolutional-neural-networks-one-of-ais-greatest-achievements-syndication/
After a prolonged winter, artificial intelligence is experiencing a scorching summer mainly thanks to advances in deep learning and artificial neural networks. To be more precise, the renewed interest in deep learning is largely due to the success of convolutional neural networks (CNNs), a neural network structure that is especially good at dealing with visual data.
But what if I told you that CNNs are fundamentally flawed? That was what Geoffrey Hinton, one of the pioneers of deep learning, talked about in his keynote speech at the AAAI conference, one of the main yearly AI conferences.
The difference between CNNs and human vision
“CNNs learn everything end to end. They get a huge win by wiring in the fact that if a feature is good in one place, it’s good somewhere else. This allows them to combine evidence and generalize nicely across position,” Hinton said in his AAAI speech. “But they’re very different from human perception.”
One of the key challenges of computer vision is to deal with the variance of data in the real world. Our visual system can recognize objects from different angles, against different backgrounds, and under different lighting conditions.
Creating AI that can replicate the same object recognition capabilities has proven to be very difficult.
“CNNs are designed to cope with translations,” Hinton said. This means that a well-trained convnet can identify an object regardless of where it appears in an image. But they’re not so good at dealing with other effects of changing viewpoints such as rotation and scaling.
One approach to solving this problem, according to Hinton, is to use 4D or 6D maps to train the AI and later perform object detection. “But that just gets hopelessly expensive,” he added.
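Hinton’s translation point can be made concrete with a toy example: a convolution followed by global max pooling scores a pattern the same wherever it appears in the image, but a rotated copy of the pattern no longer matches the filter. A minimal numpy sketch (my own illustration, not from the article):

import numpy as np

def response(image, kernel):
    """Convolve, then global-max-pool: the network's score for one feature."""
    kh, kw = kernel.shape
    best = -np.inf
    for i in range(image.shape[0] - kh + 1):
        for j in range(image.shape[1] - kw + 1):
            best = max(best, float(np.sum(image[i:i+kh, j:j+kw] * kernel)))
    return best

kernel = np.array([[1., 1.], [-1., -1.]])   # a fixed "learned" horizontal-edge feature
pattern = np.array([[1., 1.], [0., 0.]])    # a horizontal edge

original = np.zeros((8, 8)); original[1:3, 1:3] = pattern
shifted  = np.zeros((8, 8)); shifted[4:6, 5:7] = pattern
rotated  = np.zeros((8, 8)); rotated[1:3, 1:3] = np.rot90(pattern)

print(response(original, kernel), response(shifted, kernel))  # 2.0 2.0 -- translation handled
print(response(rotated, kernel))                              # 1.0 -- rotation is not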
Tomi Engdahl says:
Charles J. Simon, Author, Will Computers Revolt? – Interview Series
https://www.unite.ai/charles-j-simon-author-will-computers-revolt-interview-series/
Tomi Engdahl says:
Using Amazon Rekognition to Identify Persons of Interest for Law Enforcement
https://aws.amazon.com/blogs/machine-learning/using-amazon-rekognition-to-identify-persons-of-interest-for-law-enforcement/
Tomi Engdahl says:
The Future of Design Is Machine Learning
https://www.hackster.io/news/the-future-of-design-is-machine-learning-82fdb93fcc3a
Great products no longer mean curvy aluminum and elegant iconography. They need magical experiences, powered by embedded software and ML.
Tomi Engdahl says:
https://www.uusiteknologia.fi/2020/02/20/liika-saately-ei-saa-jarruttaa-tekoalyn-kehitysta/
Tomi Engdahl says:
OpenSource GUI Tool For OpenCV And DeepLearning
https://hackaday.com/2020/02/28/opensource-gui-tool-for-opencv-and-deeplearning/
AI and Deep Learning for computer vision projects have come to the masses. This can be attributed partly to the community projects that help ease the pain for newbies. [Abhishek] contributes one such project called Monk AI, which comes with a GUI for transfer learning.
https://github.com/Tessellate-Imaging/monk_v1
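Monk’s GUI generates this kind of code for you. As a rough idea of what transfer learning boils down to underneath, here is a generic Keras sketch (not Monk’s actual API — see the GitHub repo for that): a pretrained backbone is frozen, and only a small new head is trained on your task.

import tensorflow as tf

# Pretrained feature extractor, frozen
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

# New classification head for your own task
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # your labeled image datasets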
Tomi Engdahl says:
Researchers Train a Neural Network to Detect Human Occupancy by Sniffing Ambient Wi-Fi Signals
https://www.hackster.io/news/researchers-train-a-neural-network-to-detect-human-occupancy-by-sniffing-ambient-wi-fi-signals-21127327da3b
By monitoring the Wi-Fi signals in a room, and passing the data through a convolutional neural network, human presence can be detected.
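The article doesn’t give the exact architecture or input format, but the general shape of such a classifier is easy to sketch. Assuming windows of 100 time steps by 52 subcarrier amplitudes (an illustrative guess, not the paper’s setup), a small Keras CNN for occupied/empty classification might look like:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 5, activation="relu", input_shape=(100, 52)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(room is occupied)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])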
Tomi Engdahl says:
Self-supervised learning is the future of AI
https://thenextweb.com/neural/2020/04/05/self-supervised-learning-is-the-future-of-ai-syndication/
Despite the huge contributions of deep learning to the field of artificial intelligence, there’s something very wrong with it: It requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn’t emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.
Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.
In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for “self-supervised learning,” his roadmap to solve deep learning’s data problem.
Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it’s really hard to predict which technique will succeed in creating the next AI revolution (or if we’ll end up adopting a totally different strategy).
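The core idea is that the training signal comes from the data itself rather than from human labels. A minimal sketch of one common pretext task — reconstructing masked-out parts of the input — in Keras (my illustration, not LeCun’s code):

import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 32).astype("float32")  # "unlabeled" data
mask = np.random.rand(*x.shape) < 0.25          # hide 25% of each example
x_masked = np.where(mask, 0.0, x)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(32),                   # reconstruct the full input
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_masked, x, epochs=3, verbose=0)      # the targets are the data itself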
Tomi Engdahl says:
Smartphone Videos Produce Highly Realistic 3D Face Reconstructions
Carnegie Mellon Method Foregoes Expensive Scanners, Camera Setups, Studios
https://www.cs.cmu.edu/news/smartphone-videos-produce-highly-realistic-3d-face-reconstructions
Tomi Engdahl says:
Even more digital: Axel Springer Academy is the first school of journalism to integrate artificial intelligence into its curriculum
https://www.axelspringer.com/en/press-releases/even-more-digital-axel-springer-academy-is-the-first-school-of-journalism-to-integrate-artificial-intelligence-into-its-curriculum
Tomi Engdahl says:
AI spots critical Microsoft security bugs 97% of the time
https://venturebeat.com/2020/04/16/ai-spots-critical-microsoft-security-bugs-97-of-the-time/
Microsoft claims to have developed a system that correctly distinguishes between security and non-security software bugs 99% of the time, and that accurately identifies critical, high-priority security bugs on average 97% of the time. In the coming months, it plans to open-source the methodology on GitHub, along with example models and other resources.
Tomi Engdahl says:
[Classifying 4M Reddit posts in 4k subreddits] How to build an end-to-end machine learning pipeline and an actual data product that suggests subreddits for a post? https://hubs.ly/H0p6YZm0
#MachineLearning #fastText
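For anyone curious what the fastText side of such a pipeline looks like, a minimal sketch is below. Training data is one line per post, prefixed with __label__<subreddit>; the file name and hyperparameters are placeholders.

import fasttext

# Each line of train.txt looks like: "__label__learnpython How do I read a CSV file?"
model = fasttext.train_supervised(input="train.txt", epoch=5, wordNgrams=2)

# Suggest the top-3 subreddits for a new post title
labels, probs = model.predict("my first neural network project", k=3)
print(labels, probs)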
Tomi Engdahl says:
Microsoft: Our AI can spot security flaws from just the titles of developers’ bug reports
https://www.zdnet.com/article/microsoft-our-ai-can-spot-security-flaws-from-just-the-titles-of-developers-bug-reports/
Microsoft’s machine-learning model can speed up the triage process when handling bug reports. Microsoft says its machine-learning model correctly distinguishes between security and non-security bugs 99% of the time. It can also accurately identify critical security bugs 97% of the time.
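Microsoft hasn’t published the model yet, but the task itself — deciding whether a bug is security-related from its title alone — is standard supervised text classification. A toy scikit-learn baseline, purely illustrative:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = security bug, 0 = non-security bug
titles = [
    "Buffer overflow in image parser",
    "XSS in comment form",
    "Typo in settings dialog",
    "Crash when window is resized",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(titles, labels)

# Predict for an unseen title (1 = security)
print(clf.predict(["SQL injection in login endpoint"]))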
Tomi Engdahl says:
Starsky Robotics Failure Offers a Sobering Look at the State of AI
https://www.designnews.com/automation-motion-control/starsky-robotics-failure-offers-sobering-look-state-ai/181515550662728?ADTRK=InformaMarkets&elq_mid=12956&elq_cid=876648
The shutdown of autonomous trucking company Starsky Robotics brings some key takeaways for the state of AI and autonomous vehicles.
Starsky Robotics was one of the pioneering companies in the autonomous trucking space.
Autonomous trucking company Starsky Robotics has closed its doors after failing to secure additional funding in late 2019. Anyone who follows the AI industry or the autonomous vehicle space knows that not every startup was going to make it to the finish line. But the demise of one of the pioneering companies in the autonomous trucking space strikes a particular chord.
Here was a company working on a specific niche case that has gathered interest from both government and private companies, and actually succeeding at it. Starsky took a LiDAR-free approach to autonomous trucking – focusing instead on automotive-grade camera systems and radar. The company made headlines in 2019 when it became the first company to have an autonomous truck drive on a live highway.
And Starsky was working in the right space. Analysts at Allied Market Research predict that the self-driving truck market will be worth about $1.7 billion by 2025.
So what went wrong?
In a Medium post, Starsky’s co-founder and CEO, Stefan Seltz-Axmacher, offered his own insights into what led to his company’s fall. More than a reflection on the company itself, the closing of Starsky Robotics is a wake-up call on the state of autonomous vehicles and the capabilities of artificial intelligence today.
Here are three key takeaways in the wake of Starsky Robotics shutting down:
AI Is Not Smart Enough
While there is plenty of evidence of autonomous vehicles working in test conditions at most levels up to Level 4, the push toward Level 5 autonomy isn’t happening nearly as fast as was predicted even a few years ago. OEMs, suppliers, and analysts alike are significantly rolling back their predictions on the arrival of fully autonomous cars and trucks that require no human supervision or intervention. Today, the most optimistic estimates put us at 10 or more years out, in part because of the lengthy amount of time that will need to be spent on testing.
“It’s widely understood that the hardest part of building AI is how it deals with situations that happen uncommonly, i.e. edge cases,” Seltz-Axmacher said. “In fact, the better your model, the harder it is to find robust data sets of novel edge cases. Additionally, the better your model, the more accurate the data you need to improve it.”
Safety Isn’t Sexy
Even without Level 5 vehicles on the roads, the general public is already showing concerns over their safety. In a 2019 survey by AAA, about 71 percent of US drivers reported that they would be afraid to ride in a fully autonomous vehicle.
But as big of a concern as safety is for autonomous vehicles, what gets people excited is features.
“The problem is that people are excited by things that happen rarely, like Starsky’s unmanned public road test. Even when it’s negative, a plane crash gets far more reporting than the 100 people who die each day in automotive accidents,” Seltz-Axmacher said. “By definition, building safety is building the unexceptional; you’re specifically trying to make a system which works without exception.”
All of that invisible work, however, doesn’t exactly get investors revved up about a company, he added. “Investors expect founders to lie to them — so how are they to believe that the unmanned run we did actually only had a [one] in a million chance of a fatal accident? If they don’t know how hard it is to do unmanned, how do they know someone else can’t do it next week?”
Traditional Industry Is Embattled
On one hand, autonomous trucks could address some challenges being faced by the trucking industry itself. The trucking industry is looking at driver shortages that could reach into the hundreds of thousands in the next few years.
While autonomous trucks could help shore up these gaps, doing so also brings social concerns and taps into the ongoing debate over just how much automation will continue to impact jobs. Last summer, the global shipping company Maersk faced heavy backlash over an announcement that it was introducing driverless cargo carriers to the Port of Los Angeles – a move that could potentially displace hundreds of human dockworkers’ jobs.
The Road Ahead
The end of Starsky Robotics doesn’t portend an end to the autonomous trucking market. But it should serve as a clear reminder that autonomous vehicles aren’t moving at the breakneck pace they were once predicted to be.
There are still several major companies in the field – including TuSimple, which recently inked a deal with automotive parts supplier ZF to develop new autonomous trucking technologies, including LiDAR, radar, and computing systems.
“If we showed anything at Starsky, it’s that this is buildable if you religiously focus on getting the person out of the vehicle in limited-use cases,” Seltz-Axmacher wrote. “But it will need to be someone else to realize that vision.”
Tomi Engdahl says:
Debunking the Myth of China’s AI Superiority
https://www.eetimes.com/debunking-the-myth-of-chinas-ai-superiority/
Who’s winning the U.S. vs. China AI battle? It’s a question frequently asked but too often poorly answered. The responses tend to be either oversimplified or exceedingly complex, when not muddled by both sides imposing their own, politically motivated answers, colored by the U.S.-China trade war, and now further complicated by acrimony over the coronavirus outbreak.
In the U.S., China is widely believed to be building a technological lead in artificial intelligence. That belief is challenged, however, in a recently published study on China’s AI chips, written by economist Dieter Ernst.
Tomi Engdahl says:
Friend 1: AI is going to take over the world and turn us all into slaves.
Friend 2: Yeah, right. AI can’t even draw a fucking cat.
Me: https://ThisCatDoesNotExist.com
Tomi Engdahl says:
MIT Reduces AI’s Environmental Impact, Carbon Emissions with High-Efficiency Once-for-All Networks
https://www.hackster.io/news/mit-reduces-ai-s-environmental-impact-carbon-emissions-with-high-efficiency-once-for-all-networks-60099f23edf4
Instead of training an individual network for each target device, an OFA network can cover a multitude of target systems.
Tomi Engdahl says:
AIoT – How IoT Leaders are Breaking Away
Based on findings from a study of executives from across the globe
https://www.sas.com/sas/offers/19/aiot-how-iot-leaders-are-breaking-away-emea.html
Tomi Engdahl says:
https://www.reddit.com/r/Suomi/comments/g2d2ma/rsuomi_transformer/
Tomi Engdahl says:
New AI improves itself through Darwinian-style evolution
https://bigthink.com/surprising-science/automl
AutoML-Zero is a proof-of-concept project that suggests the future of machine learning may be machine-created algorithms.
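AutoML-Zero evolves whole learning algorithms from primitive operations, which is far beyond a few lines of code, but the Darwinian loop itself – mutate, evaluate, select – is simple. A drastically simplified sketch (my illustration, not the AutoML-Zero code) that evolves the coefficients of a line to fit data:

import random

data = [(x, 3 * x + 1) for x in range(-5, 6)]  # target function: y = 3x + 1

def loss(individual):
    a, b = individual
    return sum((a * x + b - y) ** 2 for x, y in data)

# Random initial population of candidate (a, b) "programs"
population = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(20)]

for generation in range(200):
    population.sort(key=loss)
    survivors = population[:10]                                    # selection
    children = [[gene + random.gauss(0, 0.1) for gene in random.choice(survivors)]
                for _ in range(10)]                                # mutation
    population = survivors + children

print(population[0], loss(population[0]))  # best individual, close to [3, 1]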
Tomi Engdahl says:
Next Front for Computer Security: Machine Learning
https://innovate.ieee.org/innovation-spotlight/machine-learning-security/
As the machine learning field grows, security needs to be built into the design, not just patched on after issues develop. So says Gary McGraw, the man whom many consider the father of software security.
According to McGraw, at the beginning of the computer revolution, computer security was often an afterthought – after all, holes in the system could be patched and firewalls could protect the broken thing from hackers. But, ultimately, McGraw and others were able to convince computer manufacturers that it makes a lot more sense to build security in.
You’ve written that in the field of computer security, “a distinct absence of science is a problem.” What do you mean by this, and why do you think there is a lack of science?
First, let me stress that there are people doing science in computer security now, especially in the IEEE community. However, in commercial security, I believe there’s a distinct lack of science. There are many ideas and theories that aren’t backed by data, and a plethora of people who are convinced their way is the right way without real evidence. Faith-based security is folly.
Your recent research has focused on machine learning threats, and you’ve written that until recently not much attention has been focused on this issue. Why do you think organizations haven’t focused on machine learning security until now?
Machine learning is pretty new in terms of the hype cycle. There’s been some progress made with security, but I was interested to learn how ML really works. I dug into the literature with three other guys to see what’s happened in the last 25 years and discovered that the answer to “what’s new” is really simple: our machines are way, way faster, and our data sets are much bigger.
We’re seeing progress with machine learning, but it’s not breakthroughs in terms of cognitive science. We’re just learning how we can make machines do more interesting things. We’re so psyched about all of the things we can make them do, yet nobody’s really thinking about the security risks of how we’re doing what we’re doing. If we really want to secure machine learning, we have to identify and then manage all of the risks.
Beyond that, my view is that the machine learning work we’re doing is something that’s going to become commercially viable.
Tomi Engdahl says:
Why we deploy machine learning models with Go — not Python
There’s more to production machine learning than Python scripts
https://towardsdatascience.com/why-we-deploy-machine-learning-models-with-go-not-python-a4e35ec16deb
Tomi Engdahl says:
Google Researchers Turn to AI to Design Future Chips and Get Moore’s Law Back On Track
https://www.hackster.io/news/google-researchers-turn-to-ai-to-design-future-chips-and-get-moore-s-law-back-on-track-3f448b48a3a8
The growing complexity of modern semiconductors is slowing progress, but Google has a potential solution: letting AI do the layout.
Tomi Engdahl says:
Springer has released 65 Machine Learning and Data books for free
https://towardsdatascience.com/springer-has-released-65-machine-learning-and-data-books-for-free-961f8181f189
Hundreds of books are now free to download
Springer has released hundreds of free books on a wide range of topics to the general public. The list, which includes 408 books in total, covers many scientific and technological topics. To save you some time, I have created a list of the 65 books that are relevant to the data and Machine Learning field.
Among the books, you will find some dealing with the mathematical side of the domain (algebra, statistics, and more), along with more advanced books on Deep Learning and other advanced topics. You can also find good books on various programming languages such as Python, R, and MATLAB.
Tomi Engdahl says:
Setting Up TensorFlow on the MaaXBoard
https://www.hackster.io/monica/setting-up-tensorflow-on-the-maaxboard-f33cbc
Go through the tutorial here to get the MaaXBoard setup in headless mode, expand the filesystem, increase the swap file size, and set up remote desktop.
TensorFlow setup
Building TensorFlow from source can be a challenging step. It cannot be done on the MaaXBoard as is; it must be cross-compiled on another, more powerful machine (a PC).
Tomi Engdahl says:
Artificial Intelligence and Machine Learning-Based Medical Devices: A Products Liability Perspective
How will products liability law be applied to AI?
https://www.mddionline.com/artificial-intelligence-and-machine-learning-based-medical-devices-products-liability-perspective?ADTRK=InformaMarkets&elq_mid=12999&elq_cid=876648
No enterprise better illustrates the careful balance between the endless potential of AI against the unique risks of products liability concerns than the medical device industry. This article discusses the uses and unique benefits of AI in the medical device context, while also exploring the developing products liability risks.
Why Medical Devices?
The medical industry, and in particular the field of diagnostic devices, has become fertile territory for AI/ML product development. This is no doubt because of the overlap between the sort of data recognition and processing these diagnostic devices require and the similar abilities of AI/ML systems. Making an accurate medical diagnosis is an incredibly complex procedure that requires a doctor to synthesize many often-contradictory data alongside individual patients’ subjective complaints, frequently under significant time constraints. To that end, some studies have estimated that as many as 12 million diagnostic errors occur every year in the U.S. alone. [2] Another symptom of the difficulty of diagnosis is the massive overuse of diagnostic testing; for example, researchers estimate that more than 50 CT scans, 50 ultrasounds, 15 MRIs, and 10 PET scans (125 tests altogether) are performed annually per every 100 Medicare recipients above the age of 65—many of which are medically unnecessary. [3] It requires no leap of the imagination to devise how AI/ML-based medical devices, which promise significantly improved diagnostic outcomes while massively reducing cost, could soon become commonplace at the point of care. Current devices on the market include the IDx-Dr, an AI-based diagnostic tool approved by the FDA in 2018, which independently analyzes images of the retina for signs of diabetic retinopathy, [4] and Medtronic’s Sugar.IQ Diabetes Assistant software, which pairs with a user’s continuous glucose monitor to offer personalized real-time advice on how certain foods or activities may affect blood glucose levels. [5]
ML systems operate by analyzing data and, in turn, observing how various patterns and outcomes derive from those data. [6] Thus, in theory, ML algorithms can make predictions and decisions with increasing accuracy; as the quality, scope, and, in many cases, organization of inputted data improve, so too does the accuracy of the corresponding outputted determinations.
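That claim – accuracy improves as training data grows – is easy to demonstrate on synthetic data. A quick scikit-learn sketch (my illustration, unrelated to any medical device):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Accuracy on the same held-out set, with growing amounts of training data
for n in (50, 500, 3750):
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, round(clf.score(X_test, y_test), 3))  # accuracy rises with n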
Challenges in the Courtroom
In the meantime, there also remains significant uncertainty about how the most common products liability doctrines will apply to AI/ML software as a medical device (SaMD). Products liability claims often sound in strict liability, meaning a plaintiff need not prove that a defendant was negligent, only that its product was defective or unreasonably dangerous. [16] Traditionally, however, strict liability claims only arise from “tangible” products. Thus, the threshold question is whether such software may even be susceptible to products liability exposure. Courts have struggled with this issue and are likely to split in determining whether software fits as a product.
If AI/ML software is deemed to be a product for the purposes of products liability claims, then many additional issues arise regarding the application of existing legal doctrines to this unique technology. Because AI/ML products are highly iterative and fluid, they are also likely to present novel challenges for both plaintiffs and defendants in litigating the three most common classes of products liability claims: design defect, manufacturing defect, and failure to warn.
Design and manufacturing defect claims, for example, operate on the assumption that a product is fixed. But an adaptive product, like AI/ML SaMD, has the capacity to constantly evolve and redesign itself over time, including after it is sold. As such, we can expect to see a shift in the focus of defect claims towards the adaptations implemented by a piece of software over time, as opposed to preliminary design concepts. These theories of liability will also require an increased focus on the testing and data sets used to “train” AI/ML software in order to establish the existence of defects.
The same can be said for the sufficiency of warnings: How can a product’s manufacturer adequately warn users of potential risks if the product controls its own function and, by extension, the risks it may present? In the context of medical devices, as discussed above, FDA has proposed that manufacturers provide a detailed and finite range of algorithm change protocols (ACPs). It is likely, therefore, that product warnings will dovetail with these ACPs to cover the limited range of possible adaptations and, by extension, foreseeable risks. Plaintiffs, for their part, may seek to expand the doctrine of “post-sale duty to warn” to argue that defendants must continually monitor their products and provide updated warnings to users even after the time of sale.
Technological innovation outpaces the law, and artificial intelligence/machine learning is no different. Regardless of how legal doctrines evolve with the introduction of AI/ML-based products, until firm legal and regulatory guidelines progress, one thing is certain: There will be significant disagreement about how products liability law is applied.
Tomi Engdahl says:
This Meme Does Not Exist
https://imgflip.com/ai-meme
Example
https://imgflip.com/i/3yon4v
Tomi Engdahl says:
Andy Baio / Waxy.org:
Roc Nation files copyright claim over deepfake YouTube videos which use AI to impersonate Jay-Z’s voice, raising new copyright and fair use questions
With questionable copyright claim, Jay-Z orders deepfake audio parodies off YouTube
https://waxy.org/2020/04/jay-z-orders-deepfake-audio-parodies-off-youtube/
On Friday, I linked to several videos by Vocal Synthesis, a new YouTube channel dedicated to audio deepfakes — AI-generated speech that mimics human voices, synthesized from text by training a state-of-the-art neural network on a large corpus of audio.
The videos are remarkable, pairing famous voices with unlikely dialogue: Bob Dylan singing Britney Spears, Ayn Rand and Slavoj Žižek dueting Sonny and Cher, Tucker Carlson reading the Unabomber Manifesto, Bill Clinton reciting “Baby Got Back,” or JFK touting the intellectual merits of Rick and Morty.
Many of the videos have been remixed by fans, adding music to create hilarious and surreal musical mashups.
Over the weekend, for the first time, the anonymous creator of Vocal Synthesis received a copyright claim on YouTube, taking two of his videos offline with deepfaked audio of Jay-Z reciting the “To Be or Not To Be” soliloquy from Hamlet and Billy Joel’s “We Didn’t Start the Fire.”
https://www.youtube.com/channel/UCRt-fquxnij9wDnFJnpPS2Q
Tomi Engdahl says:
High school student Andrew Bernas recently took first place in the AI for Social Good category of NVIDIA Embedded’s AI at the Edge Challenge.
Silicon Valley High Schooler Takes Top Award in Hackster Jetson Nano Competition
https://www.hackster.io/news/silicon-valley-high-schooler-takes-top-award-in-hackster-jetson-nano-competition-95138d3c08fa
Andrew Bernas recently won first place in the AI for Social Good category of the NVIDIA-supported AI at the Edge Challenge.
Tomi Engdahl says:
https://www.eetimes.eu/10-chip-startups-for-ai-in-edge-and-endpoint-applications/?utm_source=Aspencore+Network+Newsletters&utm_campaign=1e270f145a-EMAIL_CAMPAIGN_2020_04_27_11_11&utm_medium=email&utm_term=0_6c71af1646-1e270f145a-383755753
Tomi Engdahl says:
Over the next few decades, AI is predicted to be the most significant commercial opportunity in the world—for companies and nations both. AI could advance the global GDP by 14 percent by 2030—$14 to $15 trillion. That’s no chump change—which is why despite the glitches of AI adoption, we need to keep moving ahead if we want in on the action. How do we do it? Start with the basics. Make sure you are entirely digitized so that you can pull and utilize data across departments. Make sure your AI projects are scalable so they can grow and spread throughout the company. And lastly, make sure that you have a cohesive AI strategy in place. At this point in digital transformation, you simply can’t advance without one.
Six Reasons Why We Haven’t Seen Full AI Adoption
https://www.forbes.com/sites/danielnewman/2019/03/12/6-reasons-why-we-havent-seen-full-ai-adoption/amp/?__twitter_impression=true
On one hand, we know AI is the future of business. After all, manpower simply isn’t fast enough to keep up with the pace of consumer demand. That said, there’s a big difference between knowing AI is the future and actually implementing AI within your business successfully. That latter part—AI adoption—is where many companies are finding themselves stuck.
No one said digital transformation would be easy—but you’re not alone if you assumed AI adoption would be a cakewalk. Today’s AI is a miracle worker. If it can translate languages, process invoices, and change marketing messages in real time, it must be a magic bullet. Right? Except when it comes to implementation. Yes, AI is meant to make your business life easy. But real-life conditions don’t always cooperate. If your company has been less than successful in its AI efforts, you are not alone. The following are a few reasons that I see AI adoption failing to reach full-penetration in businesses around the globe.
Tomi Engdahl says:
Why we chose AWS over GCP for machine learning
And why we might’ve gotten it wrong
https://towardsdatascience.com/why-we-chose-aws-over-gcp-for-machine-learning-99a0dcb3d14a
About 10 months ago, a few of us began working on our model serving infrastructure. We wanted to build a tool that would take a trained model and turn it into a production web service, without us having to write glue code or wrangle AWS/Kubernetes for each deployment.
For our use cases and the test cases we ran it through, the tool worked great. When we open sourced it, however, we got an unexpected bit of feedback.
Cortex, our model serving tool, only supports AWS. Since we personally worked on AWS, and because we understood AWS to be the most popular cloud, we assumed multi-cloud support didn’t need to be much of a priority.
We seem to have been wrong about that one.
Since open-sourcing Cortex, our number one feature request far and away has been GCP support.
Tomi Engdahl says:
Artificial Intelligence Outperforms Human Intel Analysts In a Key Area
https://www.defenseone.com/technology/2020/04/artificial-intelligence-outperforms-human-intel-analysts-one-key-area/165022/
A Defense Intelligence Agency experiment shows AI and humans have different risk tolerances when data is scarce.
In a real situation where humans and AI were looking at enemy activity, those positions would be reversed.
Artificial intelligence can actually be more cautious than humans about its conclusions in situations when data is limited. While the results are preliminary, they offer an important glimpse into how humans and AI will complement one another in critical national security fields.
DIA analyzes activity from militaries around the globe.
In theory, with less data, the human analyst should be less certain in their conclusions, like the characters in WarGames. After all, humans understand nuance and can conceptualize a wide variety of outcomes. The researchers found the opposite.
“Once we began to take away sources, everyone was left with the same source material — which was numerous reports, generally social media, open source kinds of things, or references to the ship being in the United States — so everyone had access to the same data. The difference was that the machine, and those responsible for doing the machine learning, took far less risk — in confidence — than the humans did,” he said. “The machine actually does a better job of lowering its confidence than the humans do. … There’s a little bit of humor in that because the machine still thinks they’re pretty right.”
Tomi Engdahl says:
What Are Deepfakes and How Are They Created?
https://spectrum.ieee.org/tech-talk/computing/software/what-are-deepfakes-how-are-they-created
Tomi Engdahl says:
New AI Tool Turns Any Song Into A Custom Beat Saber Map, And It Really Works
https://uploadvr.com/beat-sage-ai-beat-saber-custom/
Community-created custom maps have long been a staple of Beat Saber, the untouchable king of VR rhythm games.
However, a new tool that utilizes neural networks and artificial intelligence might change the entire custom map scene. Beat Sage, which released last week, is able to generate a custom Beat Saber map out of any song on YouTube. Not only that, but with the right song, it actually works shockingly well.
Tomi Engdahl says:
Top 25 Machine Learning Startups To Watch In 2020
https://www.forbes.com/sites/louiscolumbus/2020/04/26/top-25-machine-learning-startups-to-watch-in-2020/
Tomi Engdahl says:
Custom Object Detection with CSI IR Camera on NVIDIA Jetson
https://www.hackster.io/pjdecarlo/custom-object-detection-with-csi-ir-camera-on-nvidia-jetson-c6d315
Detect anything at any time using a Camera Serial Interface Infrared Camera on an NVIDIA Jetson Nano with Azure IoT and Cognitive Services.
Tomi Engdahl says:
Machine Learning Engineers WILL EXIST in 10+ Years
A counter to my post saying ML Engineers will NOT exist in 10 years.
https://towardsdatascience.com/machine-learning-engineers-will-exist-in-10-years-3e9422337c3f
The creation of value from Artificial Intelligence and more specifically Machine Learning over the past 5–10 years has been monumental. All the way from startups to established tech companies we’ve seen Machine Learning make its way into products up and down industries. And we’re seeing estimates come out of our famed futurists in consulting firms that speak of trillions of dollars of value creation annually!
Regardless of title, there will be engineers working on Machine Learning far beyond the next 10 years. This field is largely nascent and has only gotten a minor taste of the value it will create in the long run.
If you’re a hiring manager, I hope this article serves as an indication you should see if there’s an opportunity for ML to positively disrupt your business.
AutoML is great for those one size fits all & cookie-cutter problems that don’t need a domain expert’s touch.
But guess what? The majority of value-creating problems DO need a domain expert’s touch! Machine Learning Engineers will always have a place in the puzzle. Whether it’s making the connections between data pipelines and models, bridging the gap between Data Scientists and Data Engineers. Or maybe it’s in more research-heavy environments where a model needs to find its way into a scalable system. Or more prominently, maybe it’s working directly for the companies building the Machine Learning infrastructure of the future.
Tomi Engdahl says:
Never Gonna Give You Up, but an AI attempts to continuously generate more of the song
https://m.youtube.com/watch?v=iJgNpm8cTE8
Tomi Engdahl says:
AI will be used by groups to recruit members and influence their opinions, and by foreign governments to influence elections
Don’t Regulate Artificial Intelligence: Starve It
https://blogs.scientificamerican.com/observations/dont-regulate-artificial-intelligence-starve-it/
The potential dangers of this technology are great enough that we need to be very careful about how powerful we allow it to be
Artificial intelligence is still in its infancy. But it may well prove to be the most powerful technology ever invented. It has the potential to improve health, supercharge intellects, multiply productivity, save the environment and enhance both freedom and democracy.
But as that intelligence continues to climb, the danger from using AI in an irresponsible way also brings the potential for AI to become a social and cultural H-bomb. It’s a technology that can deprive us of our liberty, power autocracies and genocides, program our behavior, turn us into human machines and, ultimately, turn us into slaves. Therefore, we must be very careful about the ascendance of AI; we don’t dare make a mistake. And our best defense may be to put AI on an extreme diet.
Tomi Engdahl says:
Researchers at North Carolina State have developed the first countermeasure to protect neural networks against a particularly sneaky kind of cyberattack.
Preventing AI From Divulging Its Own Secrets
https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/how-prevent-ai-power-usage-secrets
One of the sneakiest ways to spill the secrets of a computer system involves studying its pattern of power usage while it performs operations. That’s why researchers have begun developing ways to shield the power signatures of AI systems from prying eyes.
Among the AI systems most vulnerable to such attacks are machine learning algorithms that help smart home devices or smart cars automatically recognize different types of images or sounds such as words or music. Such algorithms consist of neural networks designed to run on specialized computer chips embedded directly within smart devices, instead of inside a cloud computing server located in a data center miles away.
This physical proximity enables such neural networks to quickly perform computations with minimal delay, but also makes it easy for hackers to reverse-engineer the chip’s inner workings using a method known as differential power analysis.
“This is more of a threat for edge devices or Internet of Things devices, because an adversary can have physical access to them,” says Aydin Aysu, an assistant professor of electrical and computer engineering at North Carolina State University in Raleigh. “With physical access, you can then measure the power or you can look at the electromagnetic radiation.”
Differential power analysis attacks have already proven effective against a wide variety of targets such as the cryptographic algorithms that safeguard digital information and the smart chips found in ATM cards or credit cards. But Aysu and his colleagues see neural networks as equally likely targets with possibly even more lucrative payoffs for hackers or commercial competitors at a time when companies are embedding AI systems in seemingly everything.
The researchers started out by showing how an adversary can use power consumption measurements to reveal the secret weight values that help determine a neural network’s computations. By repeatedly having the neural network run specific computational tasks with known input data, an adversary can eventually figure out the power patterns associated with the secret weight values. For example, this method revealed the secret weights of an unprotected binarized neural network by running just 200 sets of power consumption measurements.
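The paper’s attack targets real hardware, but the statistical core of differential power analysis fits in a few lines. In this toy numpy model (my illustration, not the researchers’ code), power draw is modeled as the Hamming weight of an intermediate product plus noise, and correlating measurements against predictions for each candidate weight exposes the secret:

import numpy as np

rng = np.random.default_rng(0)
secret_weight = 23

def hamming(values):
    """Hamming weight (number of set bits) of each integer."""
    return np.array([bin(int(v)).count("1") for v in values])

# 200 "measurements": power draw models the Hamming weight of input*weight, plus noise
inputs = rng.integers(0, 256, size=200)
power = hamming(inputs * secret_weight) + rng.normal(0, 0.5, size=200)

# The attacker correlates measured power with the prediction for every candidate weight
candidates = range(1, 256)  # weight 0 predicts constant leakage, so skip it
scores = [abs(np.corrcoef(power, hamming(inputs * w))[0, 1]) for w in candidates]
print(candidates[int(np.argmax(scores))])  # expected: 23 (46, 92, ... predict identical
                                           # leakage, so the first match wins)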
Tomi Engdahl says:
Edge Impulse now has full support for WebAssembly, allowing you to deploy efficient machine learning on browser, mobile or Node.js!
Audio-based shower timer with a phone, Machine Learning and WebAssembly
https://www.edgeimpulse.com/blog/audio-based-shower-timer-with-a-phone-machine-learning-and-webassembly/
The past years have brought us three really interesting things: 1) TensorFlow Lite, a portable runtime for neural networks aimed at running machine learning models on mobile devices. 2) Emscripten, a toolchain that can compile C and C++ applications into WebAssembly, making these applications available from any browser. 3) Access to many sensors straight from your mobile web browser. All these features are cool on their own, but together they enable something truly magical: classifying what happens in the real world straight from the browser. Let’s put that in practice!
Collecting data, extracting features and training a neural network
Everything in machine learning starts with data, and the very first step of building a machine learning model should be to determine what data you actually need. For this model we know we want to obtain raw audio data of typical bathroom activities (brushing teeth, sink on/off, walking around, opening/closing the shower door), captured both with and without the shower on. That way the ‘shower on/off’ is the only discriminating action that the model will learn.
From model to WebAssembly
The machine learning model that I trained consists of two steps: 1) preprocessing of the data, by extracting a spectrogram from the raw audio waveform using MFCC, 2) running the spectrogram through a neural network to get a classification using TensorFlow. To run this model in the browser we thus need to convert both these steps into something the browser understands.
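Before worrying about the browser, that two-step pipeline can be sketched in Python. This is only a structural illustration – file names and layer shapes are placeholders, not Edge Impulse’s actual SDK:

import librosa
import numpy as np
import tensorflow as tf

# Step 1: raw audio -> MFCC spectrogram
signal, sr = librosa.load("bathroom_clip.wav", sr=16000)  # placeholder file
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)   # shape (13, frames)
features = mfcc.T[np.newaxis, ...]                        # shape (1, frames, 13)

# Step 2: spectrogram -> class probabilities via a small neural network
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 3, activation="relu", input_shape=features.shape[1:]),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(2, activation="softmax"),       # shower on / shower off
])
print(model.predict(features))  # untrained weights here; training happens in Edge Impulse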
One way would be to reimplement the MFCC algorithm in JavaScript, convert the neural network using TensorFlow.js, and wire this up. But that takes real effort and is prone to bugs. There is another way. As we (at Edge Impulse) already run these algorithms on very constrained microcontrollers – typically an 80MHz Cortex-M4F with 128K RAM – we have an optimized embedded C++ SDK that contains implementations of the MFCC algorithm and TensorFlow Lite, plus binding code to run both the preprocessing step and the classifier.
Paired with Emscripten – which can cross-compile C++ codebases to WebAssembly – we can thus convert our existing codebase to something that runs in the browser, without having to reimplement anything.
You can get a C++ library that contains the MFCC code, the TensorFlow Lite runtime, and the TensorFlow Lite model.
https://docs.edgeimpulse.com/docs/audio-classification