3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” Scriffignano says.

Comments

  1. Tomi Engdahl says:

    https://www.hackster.io/news/get-your-wires-crossed-with-spinenet-a45ed448f9ad

    A scale-permuted convolutional neural network can improve object detection performance.

  2. Tomi Engdahl says:

    Lattice sensAI 3.0 for on-device AI processing at the edge leverages customized convolutional neural network (CNN) accelerator IP for the low-power 28-nm FD-SOI-based CrossLink-NX family of FPGAs. The stack simplifies implementation of common CNN networks and is optimized to take advantage of the underlying parallel processing architecture of the FPGA. Developers can easily compile a trained neural network model and download it to a CrossLink-NX FPGA.
    https://www.edn.com/ai-stack-adds-cnn-accelerator-ip/?utm_source=newsletter&utm_campaign=link&utm_medium=EDNWeekly-20200709

  3. Tomi Engdahl says:

    Pentagon’s Joint AI Center (JAIC) Testing First Lethal AI Projects
    https://www.unite.ai/pentagons-joint-ai-center-jaic-testing-first-lethal-ai-projects/

    The new acting director of the Joint Artificial Intelligence Center (JAIC), Nand Mulchandani, gave his first-ever Pentagon press conference on July 8, where he laid out what is ahead for the JAIC and how current projects are unfolding.

    One of the Pentagon’s main objectives was to have algorithms implemented into “warfighting systems” by the end of 2017.

    The proposal was met with strong opposition.

    According to Mulchandani, that dynamic has changed and the JAIC is now receiving support from tech firms.

    Termed JAIC 2.0, the new plan includes six mission initiatives, all underway: joint warfighting operations, warfighter health, business process transformation, threat reduction and protection, joint logistics, and the newest one, joint information warfare, which includes cyber operations.

    There is special focus now being turned to the joint warfighting operations mission, which adopts the priorities of the National Defense Strategy in regard to technological advances in the United States military.

    “I don’t want to start straying into issues around autonomy and lethality versus lethal — or lethality itself. So yes, it is true that many of the products we work on will go into weapon systems.”

    “None of them right now are going to be autonomous weapon systems.”

  4. Tomi Engdahl says:

    OpenCV AI Kit aims to do for computer vision what Raspberry Pi did for hobbyist hardware
    https://techcrunch.com/2020/07/14/opencv-ai-kit-aims-to-do-for-computer-vision-what-raspberry-pi-did-for-hobbyist-hardware/?tpcc=ECFB2020

    A new gadget called the OpenCV AI Kit, or OAK, looks to replicate the success of Raspberry Pi and other minimal computing solutions, but for the growing fields of computer vision and 3D perception. Its new multi-camera PCBs pack a lot of capability into a small, open-source unit and are now seeking funding on Kickstarter.

    The OAK devices use their cameras and onboard AI chip to perform a number of computer vision tasks, like identifying objects, counting people, finding distances to and between things in frame and more. This info is sent out in polished, ready-to-use form.

    Like the Raspberry Pi, which has grown to become the first choice for hobbyist programmers dabbling in hardware, pretty much everything about these devices is open source under the permissive MIT license. And it’s officially affiliated with OpenCV, a widespread set of libraries and standards used in the computer vision world.

    https://www.kickstarter.com/projects/opencv/opencv-ai-kit
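
    For a feel of the kind of task OAK runs on its onboard AI chip, here is a minimal host-side sketch of object detection and person counting using OpenCV’s DNN module. The MobileNet-SSD model files, class ID, and confidence threshold are assumptions for illustration; the OAK itself executes such models on-device through its own tooling rather than on the host:

    import cv2

    # Assumed model files for a Caffe MobileNet-SSD (hypothetical local paths)
    net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                                   "MobileNetSSD_deploy.caffemodel")
    PERSON_CLASS_ID = 15  # "person" in the standard MobileNet-SSD VOC label map

    frame = cv2.imread("scene.jpg")

    # Preprocess to the 300x300 input the model expects
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape: (1, 1, N, 7)

    # Count people above a confidence threshold
    people = 0
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        class_id = int(detections[0, 0, i, 1])
        if class_id == PERSON_CLASS_ID and confidence > 0.5:
            people += 1
    print(f"people in frame: {people}")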

  5. Tomi Engdahl says:

    A European privacy body (European Data Protection Board) said it “has doubts” that using facial recognition technology developed by U.S. company Clearview AI is legal in the EU.

    Controversial US facial recognition technology likely illegal, EU body says
    https://www.politico.eu/article/clearview-ai-use-likely-illegal-says-eu-data-protection-watchdog/

    Clearview AI allows users to link facial images of an individual to a database of more than three billion pictures.

  6. Tomi Engdahl says:

    An algorithm that merges online and offline reinforcement learning
    https://www.google.com/amp/s/techxplore.com/news/2020-07-algorithm-merges-online-offline.amp

    In recent years, a growing number of researchers have been developing artificial neural network (ANN)-based models that can be trained using a technique known as reinforcement learning (RL). RL entails training artificial agents to solve a variety of tasks by giving them “rewards” when they perform well, for instance, when they classify an image correctly.

    So far, most ANN-based models have been trained using online RL methods, in which an agent that starts with no experience of its task learns by interacting with a virtual environment. However, this approach can be quite expensive, time-consuming and inefficient.

    More recently, some studies explored the possibility of training models offline. In this case, an artificial agent learns to complete a given task by analyzing a fixed dataset, and thus does not actively interact with a virtual environment. While offline RL methods have achieved promising results on some tasks, they do not allow agents to learn actively in real time.

    Researchers at UC Berkeley recently introduced a new algorithm that is trained using both online and offline RL approaches.

    The AWAC algorithm developed by Nair and his colleagues can be pre-trained offline just as well as techniques that are specifically designed for offline training. However, its performance improves further, and by a significant margin, when it is subsequently trained online.

    Project software repository: awacrl.github.io/
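
    In spirit, AWAC’s actor update raises the likelihood of logged actions in proportion to their exponentiated advantage, which is why the same update rule works first on a fixed offline dataset and then on freshly collected online data. A heavily simplified PyTorch-style sketch (not the authors’ code; the network interfaces and the temperature lam are assumptions):

    import torch

    def awac_actor_loss(policy, q_net, value_net, states, actions, lam=1.0):
        # Advantage of the logged actions under the current critic
        with torch.no_grad():
            advantage = q_net(states, actions) - value_net(states)
            weights = torch.exp(advantage / lam)  # advantage-weighted regression
        # Raise the likelihood of high-advantage actions, whether they came
        # from the offline dataset or the online replay buffer
        log_prob = policy.log_prob(states, actions)  # assumed policy interface
        return -(weights * log_prob).mean()

    Pre-training runs this loss over the fixed offline dataset; online fine-tuning keeps applying the same loss while new environment transitions are appended to the buffer.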

  7. Tomi Engdahl says:

    Wearable Reasoner uses AI to tell the wearer if an argument is supported by evidence.

    Trust Me, I’m an AI
    https://www.hackster.io/news/trust-me-i-m-an-ai-9075a87f4efc

    A group from the MIT Media Lab has created a prototype device called the Wearable Reasoner that they believe can help. Wearable Reasoner is a pair of glasses that can listen for arguments and inform the wearer whether or not they are supported by evidence.

    The wearable portion of the device is a pair of Bose Frames glasses. These glasses have speakers, a microphone, and are capable of connecting to a smartphone via Bluetooth. When activated, the linked smartphone will continually listen for an utterance with the help of the iOS Speech framework, which converts the detected utterance into text. The text is then sent to a cloud API that restores inter-word punctuation, and then finally, that result is sent to another API running the researchers’ reasoning algorithm. Based on the results from the reasoning algorithm, an audible response will be sent to the Bose Frames speakers to indicate if the detected utterance is supported by evidence.

    https://www.media.mit.edu/projects/wearable-reasoner/overview/
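
    The pipeline is simple to sketch end to end. In the sketch below, step 1 (speech-to-text) happens on the phone via the iOS Speech framework, and the two cloud endpoints are hypothetical placeholders for the punctuation and reasoning services, whose real URLs and response formats are not public:

    import requests

    def judge_utterance(utterance_text: str) -> str:
        # Step 2: restore inter-word punctuation (hypothetical endpoint)
        punctuated = requests.post("https://example.com/punctuate",
                                   json={"text": utterance_text}).json()["text"]
        # Step 3: the researchers' reasoning model labels the claim
        # (hypothetical endpoint and label set)
        label = requests.post("https://example.com/reason",
                              json={"text": punctuated}).json()["label"]
        # Step 4: the label drives an audible response through the
        # Bose Frames speakers, e.g. "supported by evidence" or not
        return label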

  8. Tomi Engdahl says:

    “The fundamental advantage of NXP Semiconductors’ face recognition solution is that it runs entirely on a low-cost, energy-efficient processor like the i.MX RT106F.”

    Face and Emotion Recognition with Just an NXP i.MX RT Microcontroller
    https://www.hackster.io/news/face-and-emotion-recognition-with-just-an-nxp-i-mx-rt-microcontroller-5beeb4f94cbf

    Hands-on experience with NXP’s i.MX RT, an Arm-based solution that does not need the cloud.

  9. Tomi Engdahl says:

    New Hardware Mimics Spiking Behavior of Neurons With High Efficiency
    https://spectrum.ieee.org/tech-talk/computing/hardware/new-hardware-mimics-spiking-behavior-of-neurons-with-high-efficiency

    Nothing computes more efficiently than a brain, which is why scientists are working hard to create artificial neural networks that mimic the organ as closely as possible. Conventional approaches use artificial neurons that work together to learn different tasks and analyze data; however, these artificial neurons do not have the ability to actually “fire” like real neurons, releasing bursts of electricity that connect them to other neurons in the network. The third generation of this computing tech aims to capture this real-life process more accurately – but achieving such a feat is hard to do efficiently.
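
    That “firing” is easy to see in a toy model. Below is a minimal leaky integrate-and-fire neuron, the textbook starting point for spiking networks (parameter values are arbitrary illustrations): it accumulates input current, leaks charge over time, and emits a discrete spike when its membrane potential crosses a threshold.

    # Toy leaky integrate-and-fire neuron (illustrative parameters)
    def lif_neuron(inputs, threshold=1.0, leak=0.9):
        v = 0.0                      # membrane potential
        spikes = []
        for current in inputs:
            v = leak * v + current   # integrate input, leak charge over time
            if v >= threshold:       # fire once the threshold is crossed
                spikes.append(1)
                v = 0.0              # reset after the spike
            else:
                spikes.append(0)
        return spikes

    print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.8]))  # -> [0, 0, 1, 0, 0]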

  10. Tomi Engdahl says:

    This Affordable Glove Can Translate Sign Language into Speech in Real-Time
    https://www.hackster.io/news/this-affordable-glove-can-translate-sign-language-into-speech-in-real-time-37adf76c4b9d

    UCLA engineers have developed a wearable glove device that can translate American Sign Language into English speech in real-time.

  11. Tomi Engdahl says:

    How to fight COVID-19 with machine learning
    9 ways machine learning is helping us fight the viral pandemic.
    https://www.datarevenue.com/en-blog/machine-learning-covid-19?utm_source=facebook&utm_medium=cpc&utm_campaign=%5BNew%5D+Articles+-+Conversion+Goals&utm_content=COVID19

    Viral pandemics are a serious threat. COVID-19 is not the first, and it won’t be the last.

    But, like never before, we are collecting and sharing what we learn about the virus. Hundreds of research teams around the world are combining their efforts to collect data and develop solutions.

    We want to shine a light on their work and show how machine learning is helping us to:

    Identify who is most at risk,
    Diagnose patients,
    Develop drugs faster,
    Find existing drugs that can help,
    Predict the spread of the disease,
    Understand viruses better,
    Map where viruses come from, and
    Predict the next pandemic.

    Let’s promote the research to fight this pandemic – and prepare ourselves better for the next one.

  12. Tomi Engdahl says:

    Reverse engineering of 3-D-printed parts by machine learning reveals security vulnerabilities
    https://techxplore.com/news/2020-07-reverse-d-printed-machine-reveals-vulnerabilities.html

  13. Tomi Engdahl says:

    https://lexfridman.com/ai/

    Artificial Intelligence podcast (AI podcast) is a series of conversations about technology, science, and the human condition hosted by Lex Fridman.

  14. Tomi Engdahl says:

    AI ethics reading list
    https://www.aitruth.org/aiethics-readinglist

    This is a compilation of books, papers, and resources that AI ethicists recommend to help you manage your AI initiatives responsibly or, more generally, to get to know AI ethics better. Thanks to all who have helped compile the list. Please consider this a living, ever-evolving list that will grow as new AI ethics works come forward.

  15. Tomi Engdahl says:

    Researchers have created a way to train deep reinforcement learning algorithms on hardware commonly available in academic labs.

    Powerful AI Can Now Be Trained on a Single Computer
    https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/powerful-ai-can-now-be-trained-on-a-single-computer

    The enormous computing resources required to train state-of-the-art artificial intelligence systems mean well-heeled tech firms are leaving academic teams in the dust. But a new approach could help balance the scales, allowing scientists to tackle cutting-edge AI problems on a single computer.

    A 2018 report from OpenAI found the processing power used to train the most powerful AI is increasing at an incredibly fast pace, doubling every 3.4 months. One of the most data-hungry approaches is deep reinforcement learning, where AI learns through trial and error by iterating through millions of simulations. Impressive recent advances on video games like StarCraft and Dota 2 have relied on servers packed with hundreds of CPUs and GPUs.

    Specialized hardware such as the Cerebras Systems Wafer Scale Engine promises to replace these racks of processors with a single large chip perfectly optimized for training AI. But with a price tag running into the millions, it’s not much solace for under-funded researchers.

    Now a team from the University of Southern California and Intel Labs has created a way to train deep reinforcement learning (RL) algorithms on hardware commonly available in academic labs. In a paper presented at the 2020 International Conference on Machine Learning (ICML) this week, the researchers describe how they were able to use a single high-end workstation to train AI with state-of-the-art performance on the first-person shooter video game Doom.

    “Any progress towards democratizing RL and reducing the energy needs for doing research is a step in the right direction.”

    The inspiration for the project was a classic case of necessity being the mother of invention.

    Using a single machine equipped with a 36-core CPU and one GPU, the researchers were able to process roughly 140,000 frames per second while training on Atari video games and Doom, double the throughput of the next best approach.

    The leading approach to deep RL places an AI agent in a simulated environment that provides rewards for achieving certain goals, which the agent uses as feedback to work out the best strategy. This involves three main computational jobs: simulating the environment and the agent; deciding what to do next based on learned rules called a policy; and using the results of those actions to update the policy.

    Training is always limited by the slowest process, says lead author Aleksei Petrenko, but these three jobs are often intertwined in standard deep RL approaches, making it hard to optimize them individually. The researchers’ new approach, dubbed Sample Factory, splits them up so resources can be dedicated to getting them all running at peak speed.

    Piping data between processes is another major bottleneck, as these processes can often be spread across multiple machines.
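
    A minimal sketch of that decoupling follows. It is heavily simplified (the real Sample Factory uses shared-memory tensors and asynchronous batched GPU inference; the toy environment and random policy here are stand-ins), but it shows the idea: each of the three jobs runs in its own process, so the slowest stage no longer drags the other two down.

    import multiprocessing as mp
    import random

    BATCH_SIZE = 4  # illustrative

    def make_env():
        # Stand-in environment; replace with e.g. an Atari or Doom env
        class ToyEnv:
            def reset(self):
                return 0.0
            def step(self, action):
                return random.random(), 0.0, random.random() < 0.1, {}
        return ToyEnv()

    def simulator(obs_q, act_q):
        # Job 1: step the environment (CPU-bound; many copies run in parallel)
        env = make_env()
        obs = env.reset()
        while True:
            obs_q.put(obs)
            obs, _, done, _ = env.step(act_q.get())
            if done:
                obs = env.reset()

    def policy_server(obs_q, act_q, traj_q):
        # Job 2: choose actions (batched GPU inference in the real system;
        # a random policy stands in here)
        while True:
            obs = obs_q.get()
            action = random.choice([0, 1])
            act_q.put(action)
            traj_q.put((obs, action))

    def learner(traj_q):
        # Job 3: update the policy from collected experience
        while True:
            batch = [traj_q.get() for _ in range(BATCH_SIZE)]
            print(f"learner received a batch of {len(batch)} transitions")

    if __name__ == "__main__":
        obs_q, act_q, traj_q = mp.Queue(), mp.Queue(), mp.Queue()
        procs = [mp.Process(target=simulator, args=(obs_q, act_q)),
                 mp.Process(target=policy_server, args=(obs_q, act_q, traj_q)),
                 mp.Process(target=learner, args=(traj_q,))]
        for p in procs:
            p.start()
        for p in procs:
            p.join()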

  16. Tomi Engdahl says:

    “We are setting ourselves up for technological domination.”

    Researcher Warns: Algorithms Are “Using and Even Controlling Us”
    http://www.futurism.com/professor-technology-controlling-us

    We’re surrounded by algorithms in almost everything we do, from browsing the web to making financial decisions. But do we, as humans, still have a say in the way those algorithms shape our reality?

    Maybe not, according to new research.

    “Our exploration led us to the conclusion that, over time, the roles of information technology and humans have been reversed,” wrote Dionysios Demetis, a professor at the Center for Systems Studies at Hull University in Yorkshire, England, in an essay for The Conversation. “In the past, we humans used technology as a tool. Now, technology has advanced to the point where it is using and even controlling us.”

    Rather than having algorithms and machines make decisions that don’t affect us in any way, Demetis argues that we are in fact “deeply affected by them in unpredictable ways.” And humans made it that way: “we have progressively restricted our own decision-making capacity and allowed algorithms to take over.”

    That over-reliance on algorithms, according to Demetis, can also result in market crashes — not caused by a “bug in the programming” but learned behavior that “emerged from the interaction of millions of algorithmic decisions playing off each other in unpredictable ways.”

  17. Tomi Engdahl says:

    Collect a dataset from scratch, train a custom smile/frown detector CNN, and deploy it on an OpenMV Cam in 15 minutes!

    Smile Detection with OpenMV IDE and Edge Impulse
    https://m.youtube.com/watch?feature=youtu.be&v=YKwVope5RsU

    The future of being able to easily train neural networks is finally here! With the launch of Edge Impulse you can now easily train powerful CNNs in the cloud for any application and have them run onboard your OpenMV Cam.

    In this video Kwabena shows off how you can:

    1. Collect a dataset of faces using an OpenMV Cam and OpenMV IDE.
    2. Train a custom smile/frown detector CNN using Edge Impulse in the cloud.
    3. Deploy this custom CNN on an OpenMV Cam and detect smiles/frowns.

    IN 15 MINUTES!!!
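
    On the deployment side, running an Edge Impulse-trained model on the OpenMV Cam takes only a few lines of MicroPython. This is a hedged sketch: the model filename and label order are assumptions carried over from the training step, and the tf module shown is the 2020-era OpenMV API:

    import sensor, tf

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=2000)  # let the camera settle

    labels = ["smile", "frown"]  # assumed label order from training
    while True:
        img = sensor.snapshot()
        # Run the trained CNN on the frame and report the top class
        for obj in tf.classify("trained.tflite", img):
            scores = obj.output()
            print(labels[scores.index(max(scores))])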

  18. Tomi Engdahl says:

    OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless
    https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/

    The AI is the largest language model ever created and can generate amazing human-like text on demand but won’t bring us closer to true intelligence.

    “Playing with GPT-3 feels like seeing the future,” Arram Sabeti, a San Francisco–based developer and artist, tweeted last week. That pretty much sums up the response on social media in the last few days to OpenAI’s latest language-generating AI.

    OpenAI first described GPT-3 in a research paper published in May. But last week it began drip-feeding the software to selected people who requested access to a private beta. For now, OpenAI wants outside developers to help it explore what GPT-3 can do, but it plans to turn the tool into a commercial product later this year, offering businesses a paid-for subscription to the AI via the cloud.

    GPT-3 is the most powerful language model ever.

    Sabeti linked to a blog post where he showed off short stories, songs, press releases, technical manuals, and more that he had used the AI to generate. GPT-3 can also produce pastiches of particular writers.

    There is even a reasonably informative article about GPT-3 written entirely by GPT-3.

    Others have found that GPT-3 can generate any kind of text, including guitar tabs or computer code. For example, by tweaking GPT-3 so that it produced HTML rather than natural language, web developer Sharif Shameem showed that he could make it create web-page layouts by giving it prompts like “a button that looks like a watermelon” or “large text in red that says WELCOME TO MY NEWSLETTER and a blue button that says Subscribe.” Even legendary coder John Carmack, who pioneered 3D computer graphics in early video games like Doom and is now consulting CTO at Oculus VR, was unnerved: “The recent, almost accidental, discovery that GPT-3 can sort of write code does generate a slight shiver.”

    We have a low bar when it comes to spotting intelligence. If something looks smart, it’s easy to kid ourselves that it is. The greatest trick AI ever pulled was convincing the world it exists. GPT-3 is a huge leap forward—but it is still a tool made by humans, with all the flaws and limitations that implies.

    GPT-3: An AI that’s eerily good at writing almost anything
    https://arr.am/2020/07/09/gpt-3-an-ai-thats-eerily-good-at-writing-almost-anything/
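
    The “prompt programming” behind demos like Shameem’s is few-shot text completion. With the 2020 private-beta Python client, a call looked roughly like the sketch below (assumptions: beta access, the davinci engine name, and the illustrative prompt; the returned HTML is whatever the model chooses to generate):

    import openai  # 2020-era beta client; requires a private-beta API key

    openai.api_key = "sk-..."  # placeholder

    # Few-shot prompt: two description->HTML examples, then the real request
    prompt = ("description: a button that says Stop\n"
              "html: <button>Stop</button>\n"
              "\n"
              "description: large text in red that says WELCOME\n"
              "html: <h1 style=\"color:red\">WELCOME</h1>\n"
              "\n"
              "description: a blue button that says Subscribe\n"
              "html:")

    completion = openai.Completion.create(
        engine="davinci",   # the largest GPT-3 model in the beta
        prompt=prompt,
        max_tokens=64,
        temperature=0.0,    # keep output deterministic-ish for code-like tasks
        stop=["\ndescription:"],  # stop before the model invents a new example
    )
    print(completion.choices[0].text)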

  19. Tomi Engdahl says:

    OpenAI’s GPT-3 may be the biggest thing since bitcoin
    JUL 18, 2020
    https://maraoz.com/2020/07/18/openai-gpt3/

    Summary: I share my early experiments with OpenAI’s new language prediction model (GPT-3) beta. I explain why I think GPT-3 has disruptive potential comparable to that of blockchain technology.

    OpenAI, a non-profit artificial intelligence research company backed by Peter Thiel, Elon Musk, Reid Hoffman, Marc Benioff, Sam Altman and others, released its third generation of language prediction model (GPT-3) into the open-source wild. Language models allow computers to produce random-ish sentences of approximately the same length and grammatical structure as those in a given body of text.

    I predict that, unlike its two predecessors (PTB and OpenAI GPT-2), OpenAI GPT-3 will eventually be widely used to pretend the author of a text is a person of interest, with unpredictable and amusing effects on various communities. I further predict that this will spark a creative gold rush among talented amateurs to train similar models and adapt them to a variety of purposes, including: mock news, “researched journalism”, advertising, politics, and propaganda.

    Now for the fun part
    I have a confession: I did not write the above article.
    But I did it on my own blog! This article was fully written by GPT-3. Were you able to recognize it?

    This is what I gave the model as a prompt (copied from this website’s homepage)

    and then just copied what the model generated verbatim, with minor spacing and formatting edits (no other characters were changed). I generated different results a couple of times (fewer than 10) until I felt the writing style somewhat matched my own, and published it. I also added the cover image. Hope you were as surprised as I was with the quality of the result.

    That said, I do believe GPT-3 is one of the major technological advancements I’ve seen so far.

  20. Tomi Engdahl says:

    Linus Tech Tips says:
    I can safely retire now.
    https://m.youtube.com/watch?v=G0z50Am4Uw4

    Clone your voice with Resemble AI today at https://www.resemble.ai/

    Special thanks to the Deepfakers who helped us:
    Ctrl_shift_face: https://lmg.gg/Z2QsH
    The Deepfake Channel: https://lmg.gg/EJYWp
    VFXChris Ume: https://lmg.gg/3UBwo

    Robot Linus Reviews a Keyboard – Deepfake – Cleave Truly Ergonomic
    https://m.youtube.com/watch?v=34AmKPJNfCg

  21. Tomi Engdahl says:

    Machines can learn unsupervised ‘at speed of light’ after AI breakthrough, scientists say
    https://www.independent.co.uk/life-style/gadgets-and-tech/news/ai-machine-learning-light-speed-artificial-intelligence-a9629976.html

    The performance of the photon-based neural network processor is 100 times higher than that of an electrical processor

    Researchers have achieved a breakthrough in the development of artificial intelligence by using light instead of electricity to perform computations.

    The new approach significantly improves both the speed and efficiency of machine learning neural networks – a form of AI that aims to replicate the functions performed by a human brain in order to teach itself a task without supervision.

    Researchers from George Washington University in the US discovered that using photons within neural network (tensor) processing units (TPUs) could overcome the limitations of electronic processors and create more powerful and power-efficient AI.

    “We found that integrated photonic platforms that integrate efficient optical memory can obtain the same operations as a tensor processing unit, but they consume a fraction of the power and have higher throughput,” said Mario Miscuglio, one of the paper’s authors.

    “When opportunely trained, [the platforms] can be used for performing inference at the speed of light.”

    Potential commercial applications for the innovative processor include 5G and 6G networks, as well as data centres tasked with performing vast amounts of data processing.

    Dr Miscuglio said: “Photonic specialised processors can save a tremendous amount of energy, improve response time and reduce data centre traffic.”

  22. Tomi Engdahl says:

    https://www.gwern.net/GPT-3

    GPT-3 Creative Fiction
    Creative writing by OpenAI’s GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling. Plus advice on effective GPT-3 prompt programming & avoiding common errors.

  23. Tomi Engdahl says:

    Five ways to bring a UX lens to your AI project
    https://tcrn.ch/3eNJQEg

    Whatever data sets you’re planning to use, it’s highly likely that people were involved in capturing that data or will be engaging with your AI feature in some way. Principles for UX design and data visualization should be an early consideration at data capture and in the presentation of data to users.

    1. Consider the user experience early
    2. Be transparent about how you’re using data
    3. Collect user insights on how your model performs
    4. Evaluate accessibility when collecting user data
    5. Consider how you will measure fairness at the start of model development

  24. Tomi Engdahl says:

    Research Proves End-to-End Analog Chips for AI Computation Possible
    https://www.eetimes.com/research-breakthrough-promises-end-to-end-analog-chips-for-ai-computation/

    A research collaboration between neuromorphic chip startup Rain Neuromorphics and Canadian research institute Mila has proved that training neural networks using entirely analog hardware is possible, creating the possibility of end-to-end analog neural networks. This has important implications for neuromorphic computing and AI hardware as a whole: it promises entirely analog AI chips that can be used for training and inference, making significant savings on compute, power, latency and size. The breakthrough marries electrical engineering and deep learning to open the door for AI-equipped robots that can learn on their own in the field, more like a human does.

  25. Tomi Engdahl says:

    Combining the strengths of the brain and an artificial neural network can bring a whole new dimension to machine intelligence.

    Creating a new paradigm of technology
    https://www.nature.com/articles/d42473-020-00239-0?utm_source=facebook&utm_medium=social&utm_campaign=bcon-samsung_article_1&utm_content=MachineIntelligence

    Driven by its R&D institute, Samsung is set to lead the next century of technological innovation

    “Hundreds of PhD holders gather here with one goal — to develop groundbreaking yet commercially viable technology. I feel truly fortunate to be able to discuss new and emerging knowledge with top experts every day,” says Sungwoo Hwang, president of the Samsung Advanced Institute of Technology (SAIT), an R&D arm of Samsung Electronics established in 1987.

    Mimicking the brain at a network level

    SAIT is also working on an artificial electronic brain that copies biological synaptic organization to create an unprecedented neuromorphic electronic platform that exhibits the unique capabilities of the human brain.

    “The digital ANN (Artificial Neural Net) processor is a calculator — it handles big data well, unlike the brain,” says Donhee Ham, a Samsung Fellow and a professor of applied physics and electrical engineering at Harvard University who is leading the project. “The brain, being a chemical machine, works very differently from the ANN. It has the advantage of low power requirements and excels at tasks such as fast learning and cognition — areas where the ANN falls short,” he adds.

    An artificial electronic brain that brings alive the unique network capabilities of the biological one will add a new dimension to machine intelligence. “Developing such an artificial brain, by harnessing neurobiology and memory technology, is our goal,” says Ham.

  26. Tomi Engdahl says:

    A startup called SilviaTerra has harnessed machine learning to achieve forestry’s holy grail: producing detailed forest inventories from remotely sensed data.

    This AI Can See the Forest and the Trees
    https://spectrum.ieee.org/artificial-intelligence/machine-learning/this-ai-can-see-the-forest-and-the-trees

  27. Tomi Engdahl says:

    Robots Use Common Sense to Navigate Around the House
    https://www.hackster.io/news/robots-use-common-sense-to-navigate-around-the-house-f8cc9255982b

    Researchers have developed a technique that uses ML to teach robots how to recognize objects and understand where they’d likely be found.

  28. Tomi Engdahl says:

    Will The Latest AI Kill Coding?
    AI can now code in any language without additional training.
    https://towardsdatascience.com/will-gpt-3-kill-coding-630e4518c04d

    In 2017, researchers asked: Could AI write most code by 2040? OpenAI’s GPT-3, now in use by beta testers, can already code in any language. Machine-dominated coding is almost at our doorstep.
    GPT-3 was trained on hundreds of billions of words, essentially the entire Internet, which is why it can code in CSS, JSX, Python — you name it.

    Further, GPT-3 doesn’t need to be “trained” for various language tasks, since its training data is all-encompassing. Instead, the network constrains itself to the task at hand when given trivial instructions.
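
    Concretely, “trivial instructions” means the task lives entirely in the prompt: the same frozen model switches languages when only the prompt text changes. A tiny illustration (the prompts are made up, and complete() is a placeholder for a GPT-3 completion call like the client sketch in an earlier comment):

    # Same frozen model, no task-specific fine-tuning: only the prompt changes.
    def complete(prompt):
        return "..."  # placeholder for the model's continuation

    prompts = {
        "python": "# A Python function that returns the nth Fibonacci number\ndef fib(n):",
        "css": "/* CSS that centers .card horizontally */\n.card {",
    }
    for lang, prompt in prompts.items():
        print(lang, "->", complete(prompt))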

  29. Tomi Engdahl says:

    Researchers at the University of Montreal led by Yoshua Bengio, along with colleagues at startup Rain Neuromorphics, have come up with a way for analog AIs to train themselves. It could lead to continuously learning, low-power analog systems with far greater computational ability than most in the industry now consider possible.

    Startup and Academics Find Path to Powerful Analog AI
    https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/startup-and-academics-find-path-to-powerful-analog-ai

  30. Tomi Engdahl says:

    Elon Musk claims AI will overtake humans ‘in less than five years’
    Existential threat posed by artificial intelligence is much closer than previously predicted, billionaire warns
    https://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-artificial-intelligence-ai-singularity-a9640196.html

  31. Tomi Engdahl says:

    AI is struggling to adjust to 2020
    https://techcrunch.com/2020/08/02/ai-is-struggling-to-adjust-to-2020/?tpcc=ECFB2020

    2020 has made every industry reimagine how to move forward in light of COVID-19, civil rights movements, an election year and countless other big news moments. On a human level, we’ve had to adjust to a new way of living. We’ve started to accept these changes and figure out how to live our lives under these new pandemic rules. While humans settle in, AI is struggling to keep up.

    The issue with AI training in 2020 is that, all of a sudden, we’ve changed our social and cultural norms. The truths that we have taught these algorithms are often no longer actually true. With visual AI specifically, we’re asking it to immediately interpret the new way we live with updated context that it doesn’t have yet.

    Computer vision models are struggling to appropriately tag depictions of the new scenes or situations we find ourselves in during the COVID-19 era. Categories have shifted. For example, say there’s an image of a father working at home while his son is playing. AI is still categorizing it as “leisure” or “relaxation.” It is not identifying this as “work” or “office,” despite the fact that working with your kids next to you is the very common reality for many families during this time.

    Another issue for AI right now is that machine learning algorithms are still trying to understand how to identify and categorize faces with masks. Faces are being detected as solely the top half of the face, or as two faces — one with the mask and a second of only the eyes. This creates inconsistencies and inhibits accurate usage of face detection models.

    One path forward is to retrain algorithms to perform better when given solely the top portion of the face (above the mask). The mask problem is similar to classic face detection challenges such as someone wearing sunglasses or detecting the face of someone in profile. Now masks are commonplace as well.

    At this point, we’re expanding the parameters of what the algorithm sees as a face — be it a person wearing a mask at a grocery store, a nurse wearing a mask as part of their day-to-day job or a person covering their face for religious reasons.
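
    One concrete version of “expanding the parameters” is to fall back to an upper-face detector when a full-face detector finds nothing. With OpenCV’s stock Haar cascades that fallback is a few lines (the cascade files ship with OpenCV; the thresholds and the two-stage strategy itself are illustrative, not a production face detector):

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    gray = cv2.cvtColor(cv2.imread("masked_person.jpg"), cv2.COLOR_BGR2GRAY)

    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        # Full-face detection failed (e.g. a mask covers nose and mouth):
        # fall back to the visible upper half of the face
        eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        print(f"no full face found; {len(eyes)} eye regions detected instead")
    else:
        print(f"{len(faces)} full face(s) detected")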

  32. Tomi Engdahl says:

    Nvidia is out in front of the competition when it comes to helping artificial intelligence learn at a faster pace.

    New Records for AI Training
    https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/new-records-for-ai-training

    The most broadly accepted suite of seven standard tests for AI systems released its newest rankings Wednesday, and GPU-maker Nvidia swept all the categories for commercially available systems with its new A100 GPU-based computers, breaking 16 records. It was, however, the only entrant in some of them.

    The rankings are by MLPerf, a consortium with membership from both AI powerhouses like Facebook, Tencent, and Google and startups like Cerebras, Mythic, and SambaNova. MLPerf’s tests measure the time it takes a computer to train a particular set of neural networks to an agreed-upon accuracy. Since the previous round of results, released in July 2019, the fastest systems have improved by an average of 2.7x, according to MLPerf.

    “MLPerf was created to help the industry separate the facts from fiction in AI.”

    In this, the third round of MLPerf training results, the consortium added two new benchmarks and substantially revised a third, for a total of seven tests. The two new benchmarks are called BERT and DLRM.

    BERT, short for Bidirectional Encoder Representations from Transformers, is used extensively in natural language processing tasks such as translation, search, text understanding and generation, and question answering. It is trained using Wikipedia.
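
    A useful mental model of the benchmark: MLPerf measures wall-clock time until a model first reaches the agreed accuracy, not throughput over a fixed number of steps. As a toy harness (all callables are placeholders):

    import time

    def time_to_accuracy(model, train_one_epoch, evaluate, target):
        # MLPerf-style metric: seconds until the model first reaches
        # the agreed-upon quality target
        start = time.time()
        while evaluate(model) < target:
            train_one_epoch(model)
        return time.time() - start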

