3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” Scriffignano says.

5,948 Comments

  1. Tomi Engdahl says:

    The New Jobs for Humans in the AI Era
    Artificial intelligence threatens some careers, but these opportunities are on the rise
    https://www.wsj.com/tech/ai/the-new-jobs-for-humans-in-the-ai-era-db7d8acd?mod=series_foeevergreen

  2. Tomi Engdahl says:

    Anthropic:
    A research paper details how decomposing groups of neural network neurons into “interpretable features” may improve safety by enabling the monitoring of LLMs — Neural networks are trained on data, not programmed to follow rules. With each step of training …

    Decomposing Language Models Into Understandable Components
    https://www.anthropic.com/index/decomposing-language-models-into-understandable-components

    Neural networks are trained on data, not programmed to follow rules. With each step of training, millions or billions of parameters are updated to make the model better at tasks, and by the end, the model is capable of a dizzying array of behaviors. We understand the math of the trained network exactly – each neuron in a neural network performs simple arithmetic – but we don’t understand why those mathematical operations result in the behaviors we see. This makes it hard to diagnose failure modes, hard to know how to fix them, and hard to certify that a model is truly safe.

    Neuroscientists face a similar problem with understanding the biological basis for human behavior.

    We can simultaneously record the activation of every neuron in the network, intervene by silencing or stimulating them, and test the network’s response to any possible input.

    Unfortunately, it turns out that the individual neurons do not have consistent relationships to network behavior. For example, a single neuron in a small language model is active in many unrelated contexts, including: academic citations, English dialogue, HTTP requests, and Korean text. In a classic vision model, a single neuron responds to faces of cats and fronts of cars. The activation of one neuron can mean different things in different contexts.

    In our latest paper, Towards Monosemanticity: Decomposing Language Models With Dictionary Learning, we outline evidence that there are better units of analysis than individual neurons, and we have built machinery that lets us find these units in small transformer models. These units, called features, correspond to patterns (linear combinations) of neuron activations. This provides a path to breaking down complex neural networks into parts we can understand, and builds on previous efforts to interpret high-dimensional systems in neuroscience, machine learning, and statistics.

    In a transformer language model, we decompose a layer with 512 neurons into more than 4000 features which separately represent things like DNA sequences, legal language, HTTP requests, Hebrew text, nutrition statements, and much, much more. Most of these model properties are invisible when looking at the activations of individual neurons in isolation.

    Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
    https://transformer-circuits.pub/2023/monosemantic-features/index.html

    Using a sparse autoencoder, we extract a large number of interpretable features from a one-layer transformer.
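
    The dictionary-learning step can be sketched compactly. In the minimal sketch below, the 512-neuron layer width and the roughly 4,000-entry feature dictionary come from the excerpt above; the ReLU encoder, linear decoder, L1 sparsity penalty, and all hyperparameter values are assumptions about a typical sparse-autoencoder setup, not Anthropic’s released code.

        import torch
        import torch.nn as nn

        class SparseAutoencoder(nn.Module):
            """Decompose d_model-dimensional MLP activations into sparse features."""
            def __init__(self, d_model=512, n_features=4096):
                super().__init__()
                self.encoder = nn.Linear(d_model, n_features)  # feature activations
                self.decoder = nn.Linear(n_features, d_model)  # feature directions

            def forward(self, acts):
                feats = torch.relu(self.encoder(acts))  # sparse, non-negative codes
                return self.decoder(feats), feats       # reconstruction + codes

        # Stand-in for activations recorded from the transformer's MLP layer.
        activation_batches = [torch.randn(1024, 512) for _ in range(100)]

        sae = SparseAutoencoder()
        opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
        l1_coeff = 1e-3  # weight of the sparsity penalty (illustrative value)

        for acts in activation_batches:
            recon, feats = sae(acts)
            # Reconstruct the activations while keeping few features active at once.
            loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().mean()
            opt.zero_grad(); loss.backward(); opt.step()

    Each learned feature is then a direction in activation space, i.e. a linear combination of neurons, matching the description in the excerpt.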

  3. Tomi Engdahl says:

    Microsoft may debut its first AI chip at Ignite 2023 to mitigate cost
    By Kevin Okemwa published 1 day ago
    Microsoft might unveil its first tailor-made AI chip at Ignite in November
    https://www.windowscentral.com/software-apps/microsoft-may-debut-its-first-ai-chip-at-ignite-2023-to-mitigate-cost?fbclid=IwAR3RrRRnGOCq1sahCpNvi84NNx5HpMucq5gIUafhDHuHZzoJXu0HLZQR914

    What you need to know
    Microsoft’s annual developer conference, Ignite 2023, will take place in November.
    Sources reveal that the company might unveil its first dedicated artificial intelligence chip during the event.
    The chip will be used in Microsoft’s data center servers and also to power AI capabilities across its productivity apps.
    NVIDIA is unable to meet the high demand for GPUs, which could potentially impact Microsoft’s ventures in AI technology, affecting profitability.

  4. Tomi Engdahl says:

    It’s a big promise, but the company’s CEO says that Getty’s AI tool proves that it’s possible for generative AI models to be built in ways that respect the intellectual property rights of artists and rightsholders.

    Getty Images built a “socially responsible” AI tool that rewards artists
    Getty Images CEO: AI makers that don’t pay artists create “a sad world.”
    https://arstechnica.com/tech-policy/2023/10/getty-images-built-a-socially-responsible-ai-tool-that-rewards-artists/?utm_brand=ars&utm_source=facebook&utm_medium=social&utm_social-type=owned&fbclid=IwAR01L6LR1Jg8RB1la62fgljTo_V2DXBmzU1mUs-5KY3eceAjKe9QsJLiyOs

    Getty Images CEO Craig Peters told the Verge that he has found a solution to one of AI’s biggest copyright problems: creators suing because AI models were trained on their original works without consent or compensation. To prove it’s possible for AI makers to respect artists’ copyrights, Getty built an AI tool using only licensed data that’s designed to reward creators more and more as the tool becomes more popular over time.

    “I think a world that doesn’t reward investment in intellectual property is a pretty sad world,” Peters told The Verge.

    The conversation happened at Vox Media’s Code Conference 2023, with Peters explaining why Getty Images—which manages “the world’s largest privately held visual archive”—has a unique perspective on this divisive issue.

    In February, Getty Images sued Stability AI over copyright concerns regarding the AI company’s image generator, Stable Diffusion. Getty alleged that Stable Diffusion was trained on 12 million Getty images and even imitated Getty’s watermark—controversially seeming to add a layer of Getty’s authenticity to fake AI images.

    Now, Getty has rolled out its own AI image generator that has been trained in ways that are unlike most of the popular image generators out there. Peters told The Verge that because of Getty’s ongoing mission to capture the world’s most iconic images, “Generative AI by Getty Images” was intentionally designed to avoid major copyright concerns swirling around AI images—and compensate Getty creators fairly.

    It’s a big promise, but Peters said that Getty’s AI tool proves that it’s possible for generative AI models to be built in ways that respect intellectual property (IP) rights of artists and rightsholders.

  5. Tomi Engdahl says:

    Former Google CEO Warns That Humans Will Fall in Love With AIs
    “What happens when people fall in love with their AI tutor?”
    https://www.iltalehti.fi/paakirjoitus/a/807e5dcf-de53-4336-bcee-f0dab1a5c821

    Eric Schmidt, former Google CEO and coauthor of the book “The Age of AI,” has said that he’s worried humans will start falling in love with AI.

    It’s a fair concern, considering that, well, a good number of them already have.

    “Imagine a world where you have an AI tutor that increases the educational capability of everyone in every language globally,” Schmidt told ABC News in a Sunday interview, adding that this use case, among others, is “remarkable.”

  6. Tomi Engdahl says:

    UK opposition leader targeted by AI-generated fake audio smear https://therecord.media/keir-starmer-labour-party-leader-audio-smear-social-media-deepfake

    An audio clip posted to social media on Sunday, purporting to show Britain’s opposition leader Keir Starmer verbally abusing his staff, has been debunked as being AI-generated by private-sector and British government analysis.

  7. Tomi Engdahl says:

    AI is way more intrusive and exploitative than you think.

    AI ISN’T MAGIC, IT’S BEING USED TO SPY ON YOU, EXPERTS WARN
    https://futurism.com/the-byte/ai-magic-spy-on-you-experts-warn?fbclid=IwAR3ww3GdNssdq2VPN8cl97MH8iD49-R30PyGH1pAosRybWt8Slct700C4KM

  8. Tomi Engdahl says:

    GENERATING AI
    Generative artificial intelligence (AI) tools like chatbots are changing the way many people use AI.
    https://www.electronicdesign.com/magazine/51584

    Trying to Understand Generative AI
    Oct. 2, 2023
    The rise of generative AI has been explosive, but unknowns remain. And what exactly are its capabilities? I took a deeper dive to find out.
    https://www.electronicdesign.com/blogs/altembedded/article/21274753/electronic-design-trying-to-understand-generative-ai

    What you’ll learn:

    What is generative AI?
    What generative AI can do.
    What generative AI can’t do.

    Artificial intelligence (AI), machine learning (ML) and generative AI are all the rage and the terms everyone is using. Unfortunately, they’re now being used to describe a wide range of items and implementations that often have minimal relationship to each other. Those who don’t deal with the details of various platforms like ChatGPT tend to oversimplify what they think is going on and overestimate the capabilities or actual operation of these systems.

    AI/ML covers a lot of ground from rule-based systems to large language models (LLMs). The latter is where generative AI comes from—more on that later. The challenges faced by AI researchers early on involved training or how to get information into the system so that it could be used to respond to inputs. Artificial neural networks (ANNs) took an approach modeled on biological neural networks; however, ANNs and their bio counterparts are different. Like most approximations, things work the same if you don’t look too closely.

    It turns out that training neural networks is possible, but it takes a good bit of input. And the computations required to make this work are significant as one progresses from the simple idea of ANNs to production. Likewise, a whole host of neural-network type systems have been implemented, many tailored for specific input flows or analysis like image pattern recognition. Generative AI is one such area, but even here, generative AI covers a lot of ground.
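
    As a concrete toy illustration of that training loop (parameters nudged step by step to fit data, rather than rules being programmed in), here is a minimal network learning XOR; the architecture and learning rate are arbitrary choices, not anything from the article.

        import torch
        import torch.nn as nn

        # XOR: the classic tiny task a single linear layer cannot solve.
        X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
        y = torch.tensor([[0.], [1.], [1.], [0.]])

        net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1), nn.Sigmoid())
        opt = torch.optim.Adam(net.parameters(), lr=0.05)

        for step in range(2000):  # each step updates every parameter a little
            loss = nn.functional.binary_cross_entropy(net(X), y)
            opt.zero_grad(); loss.backward(); opt.step()

        print(net(X).round())  # approximately [[0.], [1.], [1.], [0.]]

    Scaling this same loop from four examples and a few dozen parameters up to billions of both is where the significant computation mentioned above comes in.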

  9. Tomi Engdahl says:

    AI can now be told to design chips
    https://etn.fi/index.php/13-news/15438-tekoaelyae-voi-nyt-komentaa-suunnittelemaan-siruja

    Copilot is already a familiar concept in coders’ hands: AI can generate error-free code from prompts. Now SnapMagic has introduced its own AI-based ”wingman” for electronic circuit design.

    SnapMagic was known in the EDA world as SnapEDA. With its move into AI, the company is rebranding itself, so SnapMagic Copilot is in a sense a new start for the whole firm. The company also says it has raised additional funding from the world’s leading AI and developer-tool investors.

    So what is this about? SnapMagic Copilot combines state-of-the-art AI with the company’s massive proprietary dataset to automate some of the most time-consuming processes in electronics design. For example, it can complete partial designs: a designer can ask the tool to add a microcontroller to a circuit, and SnapMagic will also place the required decoupling capacitors on its connections.

    Designers can drive the PCB tool in natural language. They can, for example, ask for ”a non-inverting amplifier with a gain of 2″, and SnapMagic creates the circuit and adds orderable components to the design.

  10. Tomi Engdahl says:

    Artificial Intelligence
    Applying AI to API Security
    https://www.securityweek.com/applying-ai-to-api-security/

    While there is quite a bit of buzz and hype around AI, it is a technology that can add tremendous value to security programs.

    It is hard to go anywhere in the security profession these days without the topic of artificial intelligence (AI) coming up. Indeed, AI is a popular topic. Like many popular topics, there is quite a bit of buzz and hype around it. All of a sudden, it seems that everyone you meet is leveraging AI in a big way.

    As you can imagine, this creates quite a bit of fog around the topic of AI. In particular, it can be difficult to understand when AI can add value and when it is merely being used for its buzz and hype. Beyond buzz and hype, however, how can we know when AI is being leveraged in a useful way to creatively solve problems?

    In my experience, AI works best when applied to specific problems. In other words, AI needs to be carefully, strategically, and methodically leveraged in order to tackle certain problems that suit it. While there are many such problems, API security is one such problem that I’ve experienced AI producing good results for.

    Let’s take a look at five ways in which AI can be leveraged to improve API security:

    API discovery: AI can be leveraged to study request and response data for APIs. Behavioral analysis can be performed to discover previously unknown API endpoints.

    Schema enforcement/access control: As AI studies request and response data for APIs, there are other benefits beyond API discovery. Schemas for specific API endpoints can be learned and then enforced, and subsequent departures from learned schemas can be observed and then mitigated.

    Exposure of sensitive data: Yet another benefit to AI studying request and response data for APIs is the ability to identify sensitive data in transit. This includes the detection and flagging of Personally Identifiable Information (PII) that is being exposed. The exposure of sensitive data, including PII, is a big risk for most enterprises. Improving the ability to detect and mitigate the exposure of sensitive data improves overall API security.

    Layer 7 DDoS protection: While most enterprises have DDoS protection at layers 3 and 4, they may not have it at layer 7. With APIs, layer 7 is where the bulk of the action is. Thus, AI can be leveraged to help protect API endpoints from the misuse and abuse that can happen at layer 7. AI can be applied to analyze metrics and log data collected from an enterprise’s API endpoints. The visibility generated by this continuous analysis and baselining of API endpoint behavior provides insights and alerting on anomalies, which can then be used to generate layer 7 protection policies. Improved layer 7 DDoS protection means improved API security.

    Malicious user detection: Malicious users, or clients, pose a significant risk to most enterprises. All client interactions, including those with API endpoints, can be analyzed for the enterprise over time, and outliers can be identified.

    Both AI and API security are top of mind for most security professionals these days. While there is quite a bit of buzz and hype around AI, it is a technology that can add tremendous value to security programs. Not surprisingly, like many technologies, AI works best when applied to specific problems that suit it. In my experience, API security happens to be one of those problems. By carefully, strategically, and methodically applying AI to API security, enterprises can improve their overall security postures.
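
    As a minimal illustration of the baselining idea behind the last two items (not any particular product’s implementation; the log format, z-score test, and threshold are all assumptions), one can baseline per-client request volume and flag statistical outliers:

        import math
        from collections import Counter

        def flag_outlier_clients(log_lines, z_threshold=3.0):
            """Flag clients whose request volume sits far above the learned baseline."""
            per_client = Counter(line.split()[0] for line in log_lines)  # client id first
            counts = list(per_client.values())
            mean = sum(counts) / len(counts)
            std = math.sqrt(sum((c - mean) ** 2 for c in counts) / len(counts))
            return [client for client, n in per_client.items()
                    if std > 0 and (n - mean) / std > z_threshold]

        # 50 ordinary clients making 10 calls each, plus one making 500.
        logs = [f"user{i} GET /v1/users" for i in range(50) for _ in range(10)]
        logs += ["mallory GET /v1/export"] * 500
        print(flag_outlier_clients(logs))  # ['mallory']

    Real systems baseline far more signals (endpoints, schemas, timing, payloads), but the pattern of learning normal behavior and alerting on departures from it is the same.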

  11. Tomi Engdahl says:

    Rebecca Bellan / TechCrunch:
    Foxconn and Nvidia announce plans to build “AI factories”, a new class of data centers to accelerate the development of autonomous EVs and industrial robots — Nvidia and Foxconn are working together to build so-called “AI factories,” a new class of data centers that promise …

    Foxconn and Nvidia are building ‘AI factories’ to accelerate self-driving cars
    https://techcrunch.com/2023/10/17/foxconn-and-nvidia-are-building-ai-factories-to-accelerate-self-driving-cars/

    Nvidia and Foxconn are working together to build so-called “AI factories,” a new class of data centers that promise to provide supercomputing powers to accelerate the development of self-driving cars, autonomous machines and industrial robots.

    Nvidia founder and CEO Jensen Huang and Foxconn chairman and CEO Young Liu announced the collaboration at Hon Hai Tech Day in Taiwan on Tuesday. The AI factory is based on an Nvidia GPU computing infrastructure that will be built to process, refine and transform vast amounts of data into valuable AI models and information.

    “We’re building this entire end-to-end system where on the one hand, you’re building this advanced EV car…with an AI brain inside that allows it to interact with drivers and interact with passengers, as well as autonomously drive, complemented by an AI factory that develops a software for this car,” said Huang onstage at the event. “This car will go through life experience and collect more data. The data will go to the AI factory, where the AI factory will improve the software and update the entire AI fleet.”

  12. Tomi Engdahl says:

    Bloomberg:
    Researchers: soundscapes and an AI model trained on 100+ wildlife songs can be an effective and low-cost tool to track biodiversity recovery in tropical forests — By “listening” to forest soundscapes and identifying animal species, AI can be used to evaluate biodiversity efforts, according to a new study.

    To Track a Forest’s Recovery, Artificial Intelligence Just Listens
    https://www.bloomberg.com/news/articles/2023-10-17/to-track-biodiversity-researchers-are-turning-to-ai#xj4y7vzkg

    By “listening” to forest soundscapes and identifying animal species, AI can be used to evaluate biodiversity efforts, according to a new study.

  13. Tomi Engdahl says:

    Refuel:
    LLMs can label data as well as humans, but 100x faster — Looking to clean, label or enrich your data, but taking too much time? Just describe your problem to our LLMs and let them do the work for you in minutes.

    https://www.refuel.ai/blog-posts/llm-labeling-technical-report?utm_source=techmeme&utm_medium=sponsor_post
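
    Refuel’s pipeline isn’t published in this post; the following is a generic sketch of what LLM-assisted labeling looks like, with ask_llm as a hypothetical stand-in for whatever chat-completion API is actually used.

        LABELS = ["positive", "negative", "neutral"]

        def ask_llm(prompt: str) -> str:
            # Hypothetical stand-in: wire this to your chat-completion API of choice.
            raise NotImplementedError

        def label_texts(texts):
            labeled = []
            for text in texts:
                prompt = (f"Classify the sentiment of the text as one of {LABELS}.\n"
                          f"Text: {text}\nAnswer with the label only.")
                answer = ask_llm(prompt).strip().lower()
                labeled.append(answer if answer in LABELS else "unknown")  # guard bad output
            return labeled

    The claimed speedup comes from replacing a human pass over every row with one model call per row (or per batch).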

  14. Tomi Engdahl says:

    Financial Times:
    UK and Chinese officials say China plans to attend the AI summit at Bletchley Park in December, despite controversy over alleged spying by Beijing in the UK — Rishi Sunak is keen to bring together world leaders and tech executives to define governance of new technology

    https://www.ft.com/content/6a615c49-5bd9-4a81-8c31-cacbf3db1e2c

  15. Tomi Engdahl says:

    Sebastian Herrera / Wall Street Journal:
    Amazon details Sequoia, its new AI and robotics tools for its warehouses to help reduce delivery times by up to 25% and identify inventory up to 75% faster

    https://www.wsj.com/tech/amazon-introducing-warehouse-overhaul-with-robotics-to-speed-deliveries-40e3e65?mod=followamazon

  16. Tomi Engdahl says:

    Irina Anghel / Bloomberg:
    PwC partners with OpenAI to consult clients on complex tax, legal, and HR matters, as the Big Four audit firms look to AI to cut costs and boost productivity

    PwC Offers Advice From Bots in Deal With ChatGPT Firm OpenAI
    https://www.bloomberg.com/news/articles/2023-10-17/pwc-offers-advice-from-bots-in-deal-with-chatgpt-firm-openai#xj4y7vzkg

    PricewaterhouseCoopers LLP has teamed up with ChatGPT owner OpenAI to offer clients advice generated by artificial intelligence as the Big Four audit firms look to cut costs and boost productivity.

  17. Tomi Engdahl says:

    Bloomberg:
    Document: the EU is considering a three-tiered approach to regulating generative AI under the AI Act and wants to finalize the legislation by the end of 2023

    EU Plans Stricter Rules for Most Powerful Generative AI Models
    https://www.bloomberg.com/news/articles/2023-10-18/eu-plans-stricter-rules-for-most-powerful-generative-ai-models#xj4y7vzkg

    Regulations would place technology in three categories
    ‘Very capable’ AI systems would require more vetting

  18. Tomi Engdahl says:

    Katie Honan / The City:
    NYC Mayor Eric Adams is using AI to make robocalls in languages he doesn’t speak, raising ethical issues over making voters think he is fluent in many languages

    Tongue Twisted: Adams Taps AI to Make City Robocalls in Languages He Doesn’t Speak
    https://www.thecity.nyc/2023/10/16/adams-taps-ai-robocalls-languages-he-doesnt-speak/

    New York City law requires public documents and announcements be made available in a wide range of languages, but the mayor’s computer-assisted pretending raises alarms for some ethics experts.

  19. Tomi Engdahl says:

    Evident Vascular launches with $35M for AI-enabled intravascular ultrasound
    https://www.mobihealthnews.com/news/evident-vascular-launches-35m-ai-enabled-intravascular-ultrasound

    The company developed an AI-powered intravascular catheter-based ultrasound that allows providers to capture more precise images of the inside of one’s blood vessels.

  20. Tomi Engdahl says:

    While companies are heavily investing in AI tech that can generate business memos or code, the cost of running advanced AI models is proving to be a significant hurdle—and some services, like Microsoft’s GitHub Copilot, drive significant operational losses.

    So far, AI hasn’t been profitable for Big Tech
    https://arstechnica.com/information-technology/2023/10/so-far-ai-hasnt-been-profitable-for-big-tech/?utm_medium=social&utm_brand=ars&utm_source=facebook&utm_social-type=owned&fbclid=IwAR0BicLFYLCxOxv5MKdU_m0c_XfdKOYmMHeJAbuDM2V8_3bZjpzLZPk2dk4

    Microsoft loses around $20 per user per month on GitHub Copilot, according to the WSJ.

  21. Tomi Engdahl says:

    The researchers weren’t quite sure why the AI had chosen this peculiar shape for the robot, which is filled with holes.

    AI was told to design a robot that could walk. Within seconds, it generated a ‘small, squishy, and misshapen’ thing that spasms.
    https://www.businessinsider.com/ai-designs-small-squishy-misshapen-robot-walks-by-spasming-northwestern-2023-10?utm_campaign=business-sf&utm_source=facebook&utm_medium=social&fbclid=IwAR37OQd4TTMw72Fc537wu3MlxSBKxyBHSpILVaabUG_KsRuRPzzVbhvcHXg&r=US&IR=T

    A group of researchers asked AI to design a walking robot.
    The result was a robot that “looks nothing like any animal that has ever walked the earth,” per the researchers.
    They weren’t sure why the AI designed a robot with this peculiar shape.

    When a group of researchers asked an AI to design a robot that could walk, it created a “small, squishy and misshapen” thing that walks by spasming when filled with air.

    The researchers — affiliated with Northwestern University, MIT, and the University of Vermont — published their findings in an article for the Proceedings of the National Academy of Sciences on October 3.

    “We told the AI that we wanted a robot that could walk across land. Then we simply pressed a button and presto!”

  22. Tomi Engdahl says:

    https://hackaday.com/2023/10/15/hackaday-links-october-15-2023/

    Speaking of “Stupid AI Tricks,” Google AI tools are now in charge of traffic lights in a dozen cities around the world, and things are going pretty well, if the company’s report is to be trusted (Narrator: It’s not). On its face, Project Green Light is something any driver could get behind, as it aims to analyze real-time traffic data and train models that will be used to control the timing of traffic lights at major intersections, resulting in more green lights, smoother traffic flows, and reduced emissions from idling vehicles. Apparently the dataset is drawn from Google Maps traffic data, which of course uses geolocation data from phones that are zipping along with their owners, or more likely stuck waiting for the light to change. It really does seem like a good idea, but when Google is involved, why does it seem like something bad will happen?

    Google’s AI stoplight program is now calming traffic in a dozen cities worldwide
    Project Green Light is coming to even more intersections next year.
    https://www.engadget.com/google-ai-stoplight-program-project-green-light-sustainability-traffic-110015328.html

    It’s been two years since Google first debuted Project Green Light, a novel means of addressing the street-level pollution caused by vehicles idling at stop lights. At its Sustainability ‘23 event on Tuesday, the company discussed some of the early findings from that program and announced another wave of expansions for it.

    Green Light uses machine learning systems to comb through Maps data to calculate the amount of traffic congestion present at a given light, as well as the average wait times of vehicles stopped there. That information is then used to train AI models that can autonomously optimize the traffic timing at that intersection, reducing idle times as well as the amount of braking and accelerating vehicles have to do there. It’s all part of Google’s goal to help its partners collectively reduce their carbon emissions by a gigaton by 2030.
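
    Google hasn’t published Green Light’s internals; as a toy sketch of the core idea described above (measured waits drive the signal timing), the function below shifts green time toward the approaches where vehicles currently wait longest. The 90-second cycle and minimum-green constraint are illustrative assumptions.

        def retime(avg_wait_by_approach, cycle=90, min_green=15):
            """Split a fixed cycle's green time in proportion to average waits."""
            total_wait = sum(avg_wait_by_approach.values())
            spare = cycle - min_green * len(avg_wait_by_approach)
            return {approach: min_green + spare * wait / total_wait
                    for approach, wait in avg_wait_by_approach.items()}

        # North-south traffic is currently waiting four times longer than east-west.
        print(retime({"north_south": 80.0, "east_west": 20.0}))
        # {'north_south': 63.0, 'east_west': 27.0}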

  23. Tomi Engdahl says:

    Wall Street Journal:
    At a conference, Sam Altman and other tech leaders say AI will likely bring seismic workforce changes, eliminating professions, a hard sell to the most affected

    Tech Leaders Say AI Will Change What It Means to Have a Job
    At WSJ Tech Live, OpenAI’s Sam Altman said coming workforce changes will play out unevenly
    https://www.wsj.com/tech/ai/tech-leaders-say-ai-will-change-what-it-means-to-have-a-job-2dd556fb?mod=followamazon

    Artificial intelligence will likely lead to seismic changes to the workforce, eliminating many professions and requiring a societal rethink of how people spend their time, prominent tech leaders said Tuesday.

    Speaking at The Wall Street Journal’s Tech Live conference on Tuesday, OpenAI CEO Sam Altman said that the changes could hit some people in the economy more seriously than others, even if society as a whole improves. This will likely be a hard sell for the most affected people, he said.

    “We are really going to have to do something about this transition,” said Altman, who added that society will have to confront the speed at which the change happens. “People need to have agency, the ability to influence. We need to jointly be architects of the future.”

  24. Tomi Engdahl says:

    Kevin Roose / New York Times:
    Stanford unveils the Foundation Model Transparency Index, featuring 100 indicators; Llama 2 led at 54%, GPT-4 placed third at 48%, and PaLM 2 took fifth at 40% — Stanford researchers have ranked 10 major A.I. models on how openly they operate. — How much do we know about A.I.?

    https://www.nytimes.com/2023/10/18/technology/how-ai-works-stanford.html

  25. Tomi Engdahl says:

    Financial Times:
    Sources: the UK plans to announce an international advisory group on AI, loosely modeled on the UN’s IPCC, at the November 2023 AI summit in Bletchley Park — The British government wants to draw on expert knowledge about the fast-developing technology — The UK is planning to announce …

    UK poised to establish global advisory group on AI
    https://www.ft.com/content/700dd87c-d040-4215-9d1e-d36ba42c7655

  26. Tomi Engdahl says:

    Financial Times:
    Mustafa Suleyman, Eric Schmidt, and others propose an IPCC-like International Panel on AI Safety, an objective advisory body to help shape protocols and norms

    Mustafa Suleyman and Eric Schmidt: We need an AI equivalent of the IPCC
    https://www.ft.com/content/d84e91d0-ac74-4946-a21f-5f82eb4f1d2d

  27. Tomi Engdahl says:

    Microsoft Offers Up to $15,000 in New AI Bug Bounty Program
    Microsoft is offering rewards of up to $15,000 in a new bug bounty program dedicated to its new AI-powered Bing.
    https://www.securityweek.com/microsoft-offers-up-to-15000-in-new-ai-bug-bounty-program/

  28. Tomi Engdahl says:

    Lucas Shaw / Bloomberg:
    Sources: YouTube is building an AI tool that lets creators record audio using famous musicians’ voices and seeking major record labels’ rights to train the tool

    YouTube Working on Tool That Lets Creators Sing Like Drake
    https://www.bloomberg.com/news/articles/2023-10-19/youtube-working-on-tool-that-would-let-creators-sing-like-drake

    YouTube is seeking rights from major music companies
    Video site had hoped to unveil the music tool last month

    YouTube is developing a tool powered by artificial intelligence that would let creators record audio using the voices of famous musicians, according to people familiar with the matter.

    The video site has approached music companies about obtaining the rights to songs it could use to train this tool, said the people, who asked not to be identified because the talks are confidential. Major record labels have yet to sign off on any deal, though discussions between the two sides continue.

  29. Tomi Engdahl says:

    Kevin Roose / New York Times:
    Stanford unveils the Foundation Model Transparency Index, featuring 100 indicators; Llama 2 led at 54%, GPT-4 placed third at 48%, and PaLM 2 took fifth at 40%
    https://www.nytimes.com/2023/10/18/technology/how-ai-works-stanford.html

  30. Tomi Engdahl says:

    Murray Stassen / Music Business Worldwide:
    Universal Music Publishing Group, Concord Music Group, and ABKCO sue Anthropic for allegedly violating their copyrights by using song lyrics to train AI models
    https://www.musicbusinessworldwide.com/ai-company-anthropic-amazon-sued-universal-music-group/

  31. Tomi Engdahl says:

    Susan D’Agostino / Bulletin of the Atomic Scientists:
    Q&A with Yoshua Bengio on nuance in headlines about AI, taboos among AI researchers, and why top researchers may disagree about AI’s potential risks to humanity

    ‘AI Godfather’ Yoshua Bengio: We need a humanity defense organization
    https://thebulletin.org/2023/10/ai-godfather-yoshua-bengio-we-need-a-humanity-defense-organization/

    Yoshua Bengio, Mila’s scientific director, is a pioneer in artificial neural networks and deep learning—an approach to machine learning inspired by the brain. In 2018, Bengio, Meta chief AI scientist Yann LeCun, and former Google AI researcher Geoffrey Hinton received the Turing Award—known as the “Nobel” of computing—for “conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.” Together, the three computer scientists are known as the “godfathers of AI.”

    In July, Bengio spoke to a US Senate subcommittee that is considering possible legislation to regulate the fast-evolving technology. There, he explained that he and other top AI researchers have revised their estimates for when artificial intelligence could achieve human levels of broad cognitive competence.

    “Previously thought to be decades or even centuries away, we now believe it could be within a few years or decades,” Bengio told the senators. “The shorter timeframe, say five years, is really worrisome because we will need more time to effectively mitigate the potentially significant threats to democracy, national security, and our collective future.”

    Last month, on a day when temperatures in Montreal soared past 90 degrees Fahrenheit, Bengio and I sat down in his office to discuss nuance in attention-grabbing headlines about AI, taboos among AI researchers, and why top AI researchers may disagree about the risks AI may pose to humanity. This interview has been edited and condensed for clarity.

  32. Tomi Engdahl says:

    Harmonic Lands $7M Funding to Secure Generative AI Deployments
    https://www.securityweek.com/harmonic-lands-7m-funding-to-secure-generative-ai-deployments/

    British startup is working on software to mitigate against the ‘wild west’ of unregulated AI apps harvesting company data at scale.

  33. Tomi Engdahl says:

    Generative AI is the catalyst for AI Governance
    Learn how to inventory, control, and report on your LLMs with ModelOp
    https://go.modelop.com/govern-LLM

  34. Tomi Engdahl says:

    IBM Research’s NorthPole chip blurs the line between processing and memory to speed up AI algorithms while reducing energy consumption.

    What Have You Done for Me, Latency?
    https://www.hackster.io/news/what-have-you-done-for-me-latency-9c2c2264a843?fbclid=IwAR2T6TNdQWQ_Gxuulw6pvQZjjxGCXCnq64JovY_UNLWTsugDOiqi-KZi_E4

  35. Tomi Engdahl says:

    Artificial general intelligence is the big goal of OpenAI. What exactly that is, however, is still up for debate.

    According to OpenAI CEO Sam Altman, systems like GPT-4 or GPT-5 would have passed for AGI to “a lot of people” ten years ago. “Now people are like, well, you know, it’s like a nice little chatbot or whatever,” Altman said.

    https://the-decoder.com/openai-ceo-sam-altman-says-chatgpt-would-have-passed-for-an-agi-10-years-ago/

  36. Tomi Engdahl says:

    Instead of allowing others to leverage her likeness without her consent, she’s getting in on the ground floor.

    Riley Reid Launches Site for Adult Performers to Create AI Versions of Themselves
    https://futurism.com/riley-reid-ai-site?fbclid=IwAR2iJc_qZbWBoDm33GCXosvutrxYdTY5xGdmOjeAWLYe4TNfg0bmQ3mH3fs

    “I feel like we’re gonna be a huge part of AI adapting into our society, because porn is always like that.”

    Adult film star Riley Reid isn’t afraid of artificial intelligence — so instead of allowing others to leverage her likeness without her consent, she’s getting in on the ground floor.

    As recently launched independent media company 404 Media reports, Reid has cofounded a new company called Clona.ai that allows personalities like herself to create AI chatbot versions of themselves — and instead of shying away from sexting like other companies, the venture is leaning into it.

    “The reality is, AI is coming, and if it’s not Clona, it’s somebody else,” Reid told the site. (Exchanging more than a few messages with her Clona.ai avatar costs $30 per month.) “When [other people] use deepfakes or whatever — if I’m not partnering up with it, then someone else is going to steal my likeness and do it without me.”

    To create her virtual avatar, Clona.ai trained its Reid algorithm using her “YouTube videos, podcasts and interviews, or my X-rated scenes, to get some of my naughty bits as well, and everything on the sexy side, as well as the personal companion and intimate side.”

    Earlier this month, Facebook’s parent company also announced that it’s created over a dozen AI chatbots that are based on a variety of celebrities, including Kendall Jenner, Tom Brady, and YouTube creator James “MrBeast” Donaldson.

    Most disturbing, though, are sites that don’t get consent from public figures before recreating them as chatbots, such as Janitor.ai or Chub.ai — both of which, 404 pointed out, are hosting chatbots of Reid that she had nothing to do with.

    That sounds like the key idea behind Clona: allowing performers to create their own authorized chatbots that, unlike Meta, don’t view certain topics as verboten.

    “I feel like we’re gonna be a huge part of AI adapting into our society, because porn is always like that,” Reid told 404. “It’s what it did with the internet.”

    It also comes after years of adult performers being treated poorly by the tech world, from exploitative streaming sites that steal their content to deepfakes that alter their clips without their permission.

    Instead of rejecting an AI future, in other words, Reid has chosen to embrace it.

  37. Tomi Engdahl says:

    Lucas Shaw / Bloomberg:
    A look at YouTube’s negotiations with record labels over an AI tool that lets creators perform using major musicians’ voices, a pivotal moment for AI in music — Negotiations between the video site and record labels could be a key moment in the spread of AI technology.

    The Music Industry’s First Reckoning With AI Is Upon Us
    https://www.bloomberg.com/news/newsletters/2023-10-22/the-music-industry-s-first-reckoning-with-ai-is-upon-us

    Negotiations between the video site and record labels could be a key moment in the spread of AI technology.

  38. Tomi Engdahl says:

    AI will never threaten humans, says top Meta scientist
    https://www.ft.com/content/30fa44a1-7623-499f-93b0-81e26e22f2a6?fbclid=IwAR1WcbgDOhfBiDDy2Rtt14sCbo-E9S1SOLCT6DLPe4tMM_osI8NHNcDYHlE

    Artificial intelligence is still dumber than cats, says pioneer Yann LeCun, so worries over existential risks are ‘premature’

    Premature regulation of artificial intelligence will only serve to reinforce the dominance of the big technology companies and stifle competition, Yann LeCun, Meta’s chief AI scientist, has said.

    “Regulating research and development in AI is incredibly counterproductive,”

    “They want regulatory capture under the guise of AI safety.”

    Demands to police AI stemmed from the “superiority complex” of some of the leading tech companies that argued that only they could be trusted to develop AI safely, LeCun said. “I think that’s incredibly arrogant. And I think the exact opposite,” he said in an interview for the FT’s forthcoming Tech Tonic podcast series.

    Regulating leading-edge AI models today would be like regulating the jet airline industry in 1925 when such aeroplanes had not even been invented, he said. “The debate on existential risk is very premature until we have a design for a system that can even rival a cat in terms of learning capabilities, which we don’t have at the moment,” he said.

    Meta, which has launched its own LLaMA generative AI model, has broken ranks with other big tech companies, such as Google and Microsoft-backed OpenAI, in championing more accessible open-source AI systems. OpenAI’s latest model GPT-4 is a so-called black box in which the data and code used to build the model are not available to third parties.

  39. Tomi Engdahl says:

    APPLE IS APPARENTLY SPENDING BILLIONS ON SECRET AI DEVELOPMENT
    https://futurism.com/the-byte/apple-spending-billions-ai

    CAN APPLE CATCH UP?

  40. Tomi Engdahl says:

    In a 1st, AI neural network captures ‘critical aspect of human intelligence’
    By Nicoletta Lanese published 3 days ago
    Scientists have demonstrated that an AI system called a neural network can be trained to show “systematic compositionality,” a key part of human intellect.
    https://www.livescience.com/technology/artificial-intelligence/in-a-1st-ai-neural-network-captures-critical-aspect-of-human-intelligence

  41. Tomi Engdahl says:

    A high-school teacher came up with a way to expose students who cheated with ChatGPT
    https://yle.fi/a/74-20055878

    AI has rapidly transformed schooling, for better and for worse. We asked how students cheat with AI and how the cheating can be prevented.

    When a barely passing student hands Laura Salonen a text written in flawless English with expert-level terminology, the teacher’s alarm bells start ringing.

    Dozens of teachers told Yle they had suspected a student of cheating with AI when we recently put the question to them. Most commonly, the AI is asked to write an analysis, an essay, or a composition.

    ChatGPT, launched in November 2022, revolutionized the culture of cheating. It is a chatbot that can produce, say, a passable, fully written film analysis from next to no starting information. The student barely has to do anything themselves.

    She and the teachers who commented on Yle.fi now share their best tips for rooting out cheating.

    Have the writing assignment rewritten, or summarized orally, in front of the teacher. If no text emerges, or the student cannot think of anything to say, the cheater has been caught.

    Use the schools’ AI-detection tools. They indicate whether a text is wholly or partly AI-generated. The problem is that detection tools cannot keep pace with the chatbots’ development and are therefore not very effective.

    Examine the language used. AI often uses demanding sentence structures and word choices that are beyond a comprehensive-school or upper-secondary student’s skills. In a long text, the AI’s language is also inconsistent and repeats the same sentence structures.

    Have the assignment written at school in a closed learning environment (e.g. Abitti). Abitti is the locked-down environment used by upper secondary schools; it can only be accessed on the school network with a rotating login code, which makes cheating very difficult.

    Test learning orally or with concept maps, for example through a presentation or an exam.

    Track the time spent on the assignment and its edit history (one way to do this is sketched after this list). For example, Google Docs history can be browsed, which exposes the pasting of long blocks of text. Learning environments also have timer features that show how long an answer took to write.

    Have the assignment written by hand, based on non-digital sources, with no internet or other aids allowed.

    Rephrase the assignment. AI does not cope well with large source materials and cannot interpret images, video, maps, or graphs.

    Personalize the assignment or make it highly topical. The student must, for example, describe their own experiences or their own project. AI cannot yet gather information about very recent events.
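
    The edit-history tip can be made concrete. In the sketch below, the revision format (elapsed seconds, total characters) is a simplified assumption, not what any specific learning environment actually exposes; it flags intervals where text appeared far faster than anyone can type.

        def suspicious_revisions(revisions, max_chars_per_sec=8.0):
            """Flag revision intervals where text grew faster than plausible typing."""
            flags = []
            for (t0, n0), (t1, n1) in zip(revisions, revisions[1:]):
                rate = (n1 - n0) / max(t1 - t0, 1)  # characters added per second
                if rate > max_chars_per_sec:        # far beyond typing speed: likely a paste
                    flags.append((t0, t1, n1 - n0))
            return flags

        # (seconds since start, total characters) at each saved revision.
        history = [(0, 0), (60, 240), (120, 480), (125, 3500)]  # 3,000+ chars in 5 s
        print(suspicious_revisions(history))  # [(120, 125, 3020)]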

  42. Tomi Engdahl says:

    Salonen has caught cheaters by asking them to rewrite the text on paper in front of her.

    “It’s quite suspicious if you remember nothing at all about a text you wrote yesterday or the day before.”

    Some have admitted their cheating at that point, at the latest. The most common reaction is embarrassment. Some, however, deny what they did and get upset, since the consequence of cheating is that the course is terminated. Nor is it unheard of for parents to get angry at the teacher.

    https://yle.fi/a/74-20055878

  43. Tomi Engdahl says:

    Salonen has noticed that, as users of AI, students fall into three groups: those who use it properly, those who cheat deliberately, and those who cheat by accident.

    It is permitted to use AI as a source of inspiration, for example. It is perfectly fine to look at what kind of structure could be applied to one’s own analysis, or to check whether one has remembered to cover everything in an answer, Salonen says.

    Those who cheat by accident fail to apply this correctly.

    “They consult the AI for a model, which is allowed, but then cannot rework the text into their own. Almost out of carelessness, they end up copying the AI-generated text.”

    https://yle.fi/a/74-20055878

  44. Tomi Engdahl says:

    This is how Auto-GPT works: an AI that handles online tasks for you
    Petteri Järvinen | Oct. 25, 2023 | AI
    Auto-GPT independently carries out tasks the user gives it, rather than just providing answers.
    https://www.tivi.fi/uutiset/nain-toimii-auto-gpt-tekoaly-joka-hoitaa-verkossa-tehtavia-puolestasi/19e459b5-63d5-4a52-b14b-eb50aff24b68
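
    The article itself contains no code. As a rough sketch of how Auto-GPT-style agents work in general (a language model proposes the next action, a tool executes it, and the observation is fed back until the model declares the goal met), with ask_llm and the toy tool set as assumptions rather than Auto-GPT’s actual interfaces:

        def ask_llm(transcript: str) -> str:
            # Hypothetical stand-in for a chat-completion API call.
            raise NotImplementedError

        TOOLS = {
            "search": lambda query: f"(search results for {query!r})",
            "write_file": lambda text: "(file written)",
        }

        def run_agent(goal: str, max_steps: int = 10) -> str:
            transcript = (f"Goal: {goal}\n"
                          "Reply as 'tool: argument' or 'FINISH: answer'.")
            for _ in range(max_steps):
                reply = ask_llm(transcript)
                if reply.startswith("FINISH:"):          # the model says it is done
                    return reply[len("FINISH:"):].strip()
                tool, _, arg = reply.partition(":")
                action = TOOLS.get(tool.strip(), lambda a: "(unknown tool)")
                transcript += f"\n{reply}\nObservation: {action(arg.strip())}"
            return "stopped: step limit reached"         # guard against endless loops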

