Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.
Tomi Engdahl says:
Alexandre Tanzi / Bloomberg:
A look at Death Clock, an app that claims to predict the user’s likely date of death using an AI model trained on a dataset of 1,200+ life expectancy studies — the model is popular on a fitness app, and the technology may have wider applications in economics and finance
https://www.bloomberg.com/news/articles/2024-11-30/ai-is-promising-a-more-exact-prediction-of-the-day-you-ll-die
Tomi Engdahl says:
Iris Kim / NBC News:
How Indigenous engineers are using AI to preserve Native American languages, building speech recognition models for 200+ endangered languages in North America — Indigenous languages are rapidly disappearing, and AI could help preserve them, according to Indigenous technologists.
https://www.nbcnews.com/tech/innovation/indigenous-engineers-are-using-ai-preserve-culture-rcna176012
Tomi Engdahl says:
Financial Times:
OpenAI says it plans to build data centers in parts of the US Midwest and Southwest; sources: OpenAI is spending $5B+ a year and is “not close to breaking even”
https://www.ft.com/content/e91cb018-873c-4388-84c0-46e9f82146b4
Tomi Engdahl says:
Willie D. Jones / IEEE Spectrum:
A look at AI-enabled dashcam tech being developed by Samsara, Motive, Nauto, and others to detect fatigue and deliver real-time audio alerts to drowsy drivers
AI Dash Cams Give Wake-Up Calls to Drowsy Drivers
Innovative tech detects driver fatigue and signals them to take a break
https://spectrum.ieee.org/driver-drowsiness-detection
Tomi Engdahl says:
While the results can be a bit hit or miss, the vast array of capabilities on display here helps support Nvidia’s description of its AI audio model as “a Swiss Army knife for sound.” https://arstechnica.visitlink.me/jFGTmt
Tomi Engdahl says:
A Shiny New Programming Language
Mirror is an entirely new concept in programming — just supply function signatures and some input-output examples, and AI does the rest.
https://www.hackster.io/news/a-shiny-new-programming-language-e41357506c46
Anyone with even a passing interest in machine learning understands how these algorithms learn to perform their intended function by example. This has proven to be a very powerful technique. It has made it possible to build algorithms that can reliably recognize complex objects in images, for example, which would be virtually impossible with standard rules-based programming techniques.
Austin Z. Henley of Carnegie Mellon University has been exploring the idea of using a set of examples when running inferences against a trained model as well. Specifically, Henley has been designing a proof-of-concept programming-by-example language powered by a large language model (LLM). The basic idea of this unique language, called Mirror, is that the programmer should just provide a few input-output examples for a function, and the LLM writes and executes the actual code behind the scenes.
To use Mirror, a user first defines the signature (name, input and output parameter data types) of a function. This is followed by one or more example calls of the function, with appropriate input and output parameters supplied. Functions are then called and chained together as needed to accomplish the programmer’s goal.
On the backend, a traditional recursive descent parser makes a pass before the result is sent to an OpenAI LLM, along with a prompt instructing it to generate JavaScript code that completes the functions and satisfies the constraints of the examples. The code is shown to the programmer, giving them the opportunity to provide more examples if things do not look quite right.
If you would like to take a crack at programming with Mirror for yourself, a browser-based playground has been made available in the GitHub repository. Just supply your own OpenAI API key, and you are good to go.
https://github.com/AZHenley/Mirror
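To make that loop concrete, here is a rough sketch of the programming-by-example idea in Python. The prompt wording, the model choice, and the `synthesize` helper are invented for illustration; Mirror’s actual implementation lives in the repository above.

```python
# Hedged sketch of a programming-by-example backend: send a function
# signature plus input-output examples to an LLM and get code back.
# Not Mirror's real implementation; prompt format and names are invented.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def synthesize(signature: str, examples: list[tuple]) -> str:
    shots = "\n".join(f"{inp!r} -> {out!r}" for inp, out in examples)
    prompt = (f"Write a JavaScript function with signature {signature} "
              f"that satisfies these input-output examples:\n{shots}\n"
              f"Return only the code.")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model would do here
        messages=[{"role": "user", "content": prompt}],
    )
    # In Mirror, the generated code is shown to the programmer, who can
    # add more examples if it looks wrong.
    return resp.choices[0].message.content

print(synthesize("isEven(n: number): boolean", [(2, True), (3, False)]))
```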
Tomi Engdahl says:
https://etn.fi/index.php/13-news/16909-tekoaely-tulee-yhae-tiukemmin-osaksi-piirien-suunnittelua
Siemens Digital Industries Software has announced a new generation of its electronics design software that combines AI, cloud connectivity, and a user-friendly interface. The Xpedition, HyperLynx, and PADS tools are now integrated into a unified solution that speeds up design processes and eases the development of complex circuit boards. AI features such as predictive analytics, error detection, and workflow optimization help engineers work more accurately, faster, and with fewer resources.
The software’s AI-assisted tools can analyze designs in real time and offer recommendations for improvements that can reduce manufacturing problems and shorten the time to production. For example, during design the software can automatically identify potential electrical interference problems or suboptimal component placements and propose solutions before they become significant challenges. In addition, the AI can help users optimize circuit performance and energy efficiency by offering suggestions based on extensive simulations and previous design data.
Tomi Engdahl says:
Iris Deng / South China Morning Post:
Chinese government data: generative AI users in China reached 230M at the end of June 2024; Ernie Bot had an 11.5% share, ChatGPT had 7.8%, and Gemini had 3.8% — The number of generative artificial intelligence (GenAI) users in China reached 230 million at the end of June …
https://www.scmp.com/tech/big-tech/article/3288989/chinas-generative-ai-users-reach-230-million-start-ups-big-tech-roll-out-llm-services
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
World Labs, founded by AI pioneer Fei-Fei Li, gives an “early preview” of its first project, an AI system that can generate game-like, 3D scenes from an image — World Labs, the startup founded by AI pioneer Fei-Fei Li, has unveiled its first project: an AI system
https://techcrunch.com/2024/12/02/world-labs-ai-can-generate-interactive-3d-scenes-from-a-single-photo/
Tomi Engdahl says:
Financial Times:
Preqin data shows the US accounted for 83% of VC funding in G7 economies over the past decade; OECD data shows AI VC funding in China is second only to the US — A month after dropping out of a Stanford University PhD programme in the spring of last year, Demi Guo and her friend Chenlin Meng had raised $5mn for their start-up.
https://www.ft.com/content/1201f834-6407-4bb5-ac9d-18496ec2948b
Tomi Engdahl says:
Financial Times:
OpenAI CFO Sarah Friar says OpenAI is weighing an ads model, but plans to be “thoughtful”; sources and LinkedIn analysis: OpenAI hired Meta and Google ad talent
OpenAI explores advertising as it steps up revenue drive
ChatGPT maker hires advertising talent from big tech rivals
https://www.ft.com/content/9350d075-1658-4d3c-8bc9-b9b3dfc29b26?accessToken=zwAGKEm0gbFQkdOTUNB1FlhNPNOLybmz38KbJg.MEUCIF53vKtuJOVZPK0YbD5kx_89evRoErqYyGyIndr71OvMAiEA8cAOD1R0AuYw193BehhKafLkoGTT4LGmQd0nPxaWnjo&sharetype=gift&token=e68e95d2-8019-42ef-b327-1de9c0797307
Tomi Engdahl says:
Ben Thompson / Stratechery:
How generative AI may bridge current computing paradigms to future innovations, especially in wearables, which will benefit from context-aware user interfaces
The Gen AI Bridge to the Future
https://stratechery.com/2024/the-gen-ai-bridge-to-the-future/
Tomi Engdahl says:
Maxwell Zeff / TechCrunch:
Early tests show Perplexity’s AI shopping agent, which lets users make purchases within its app, sometimes takes hours to process purchases or runs into issues
The race is on to make AI agents do your online shopping for you
https://techcrunch.com/2024/12/02/the-race-is-on-to-make-ai-agents-do-your-online-shopping-for-you/
Millions of Americans will pop open their laptops to buy gifts this holiday season, but tech companies are racing to turn the job of online shopping over to AI agents instead.
Perplexity recently released an AI shopping agent for its paying customers in the United States. It’s supposed to navigate retail websites for you, find the products you’re looking for, and even click the checkout button on your behalf.
Perplexity may be the first major AI startup to offer this, but others have been exploring the space for a while — so expect to see more AI shopping agents in 2025. OpenAI and Google are reportedly developing their own AI agents that can make purchases, such as booking flights and hotels. It would also make sense for Amazon, where millions of people already search for products, to evolve its AI chatbot, Rufus, to help with checkout as well.
Tomi Engdahl says:
“This is the first time ever that someone in the field of artificial intelligence did a prediction of the speed to singularity.”
Humanity May Reach Singularity Within Just 6 Years, Trend Shows
By one major metric, artificial general intelligence is much closer than you think.
https://www.popularmechanics.com/technology/robots/a63057078/when-the-singularity-will-happen/?fbclid=IwY2xjawG8xpJleHRuA2FlbQIxMQABHRrRvzQWLhAL_B1SIflAf7uAJxpRc9kZ_C5m9hwSUvsVt7hPU7V1o-aw9A_aem_1vD1sLxqcgGSsM8HAxFQYQ
By one unique metric, we could approach technological singularity by the end of this decade, if not sooner.
A translation company developed a metric, Time to Edit (TTE), to calculate the time it takes for professional human editors to fix AI-generated translations compared to human ones. This may help quantify the speed toward singularity.
An AI that can translate speech as well as a human could change society.
In the world of artificial intelligence, the idea of “singularity” looms large. This slippery concept describes the moment AI advances beyond human control and rapidly transforms society. The tricky thing about AI singularity (and why it borrows terminology from black hole physics) is that it’s enormously difficult to predict where it begins and nearly impossible to know what’s beyond this technological “event horizon.”
However, some AI researchers are on the hunt for signs of reaching singularity, measured by AI progress approaching skills and abilities comparable to a human’s.
One such metric, defined by Translated, a Rome-based translation company, is an AI’s ability to translate speech at the accuracy of a human. Language is one of the most difficult AI challenges, but a computer that could close that gap could theoretically show signs of Artificial General Intelligence (AGI).
“That’s because language is the most natural thing for humans,” Translated CEO Marco Trombetti said at a conference in Orlando, Florida, in December 2022. “Nonetheless, the data Translated collected clearly shows that machines are not that far from closing the gap.”
The company tracked its AI’s performance from 2014 to 2022 using a metric called “Time to Edit,” or TTE, which calculates the time it takes professional human editors to fix AI-generated translations compared to human ones. Over that eight-year period, analyzing more than 2 billion post-edits, Translated’s AI showed a slow but undeniable improvement as it closed the gap toward human-level translation quality.
On average, it takes a human translator roughly one second to edit each word of another human translator, according to Translated. In 2015, it took professional editors approximately 3.5 seconds per word to check a machine-translated (MT) suggestion—today, that number is just 2 seconds. If the trend continues, Translated’s AI will be as good as human-produced translation by the end of the decade (or even sooner).
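The projection follows from simple arithmetic. Here is a minimal sketch of the linear extrapolation: the 2015 and 2022 figures come from the article, while the assumption of a strictly linear trend is made only for illustration.

```python
# Linear extrapolation of Translated's Time to Edit (TTE) figures.
# Data points are from the article; the linear-trend assumption is ours.
HUMAN_BASELINE = 1.0            # seconds per word to edit a human translation
tte_2015, tte_2022 = 3.5, 2.0   # seconds per word to edit MT output

rate = (tte_2015 - tte_2022) / (2022 - 2015)      # ~0.21 s/word per year
years_left = (tte_2022 - HUMAN_BASELINE) / rate   # ~4.7 years
print(f"Projected parity year: {2022 + years_left:.0f}")  # ~2027
```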
“The change is so small that every single day you don’t perceive it, but when you see progress … across 10 years, that is impressive,” Trombetti said on a podcast. “This is the first time ever that someone in the field of artificial intelligence did a prediction of the speed to singularity.”
Although this is a novel approach to quantifying how close humanity is to the singularity, this definition runs into problems similar to those of identifying AGI more broadly. And while perfecting human speech is certainly a frontier in AI research, that impressive skill doesn’t necessarily make a machine intelligent (not to mention that many researchers don’t even agree on what “intelligence” is).
Tomi Engdahl says:
Anyone thinking current ‘ai’ is on course for a ‘singularity’ grossly misunderstands how very much not ‘ai’ the product behind the marketing is.
So… Ray Kurzweil (a pioneer in the field of AI who has made many accurate predictions about technology over decades) is not someone in the field of AI who has been making predictions about the singularity? I think someone didn’t research their article well.
He’s been pretty close to very accurate the entire time.
He has some new books out. I can see why people thought he was a “crackpot” back in 2000, but I’ve followed him since then and he’s right. Exponential growth and improvement. Most people can’t comprehend that, apparently.
I have read The Singularity Is Nearer recently. I just don’t believe there’s anything about LLM architectures that implies any rigorous capabilities of metacognition or cognitive self improvement at this juncture.
I think it relies heavily on magical thinking. Path dependent theories of history don’t really hold up, and it really bothers me.
Aesthetically I think it’s a cool vision and I can see a world with a singularity, but I just don’t see any rigorous application of social science to his futurism.
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
AWS unveils Nova models for Bedrock, including Micro, Lite, Pro, and Premier for text generation, and Canvas for image generation and Reel for video generation — At its re:Invent conference on Tuesday, Amazon Web Services (AWS), Amazon’s cloud computing division, announced a new family …
Amazon announces Nova, a new family of multimodal AI models
https://techcrunch.com/2024/12/03/amazon-announces-nova-a-new-family-of-multimodal-ai-models/
Kif Leswing / CNBC:
Apple says it’s using Amazon’s Trainium and Graviton chips to serve search services and is evaluating if Trainium2 chips can be used to pretrain its AI models — Apple is currently using Amazon Web Services’ custom artificial intelligence chips for services like search and will evaluate …
Apple says it uses Amazon’s custom AI chips
https://www.cnbc.com/2024/12/03/apple-says-it-uses-amazons-custom-ai-chips-.html
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
AWS launches Automated Reasoning checks to combat AI hallucinations, and Model Distillation, a tool to transfer the capabilities of a large model to a small one
AWS’ new service tackles AI hallucinations
https://techcrunch.com/2024/12/03/aws-new-service-tackles-ai-hallucinations/
Amazon Web Services (AWS), Amazon’s cloud computing division, is launching a new tool to combat hallucinations — that is, scenarios where an AI model behaves unreliably.
Announced at AWS’ re:Invent 2024 conference in Las Vegas, the service, Automated Reasoning checks, validates a model’s responses by cross-referencing customer-supplied info for accuracy. (Yes, the word “checks” is lowercased.) AWS claims in a press release that Automated Reasoning checks is the “first” and “only” safeguard for hallucinations.
But that’s, well … putting it generously.
Automated Reasoning checks is nearly identical to the Correction feature Microsoft rolled out this summer, which also flags AI-generated text that might be factually wrong. Google also offers a tool in Vertex AI, its AI development platform, to let customers “ground” models by using data from third-party providers, their own datasets, or Google Search.
In any case, Automated Reasoning checks, which is available through AWS’ Bedrock model hosting service (specifically the Guardrails tool), attempts to figure out how a model arrived at an answer — and discern whether the answer is correct. Customers upload info to establish a ground truth of sorts, and Automated Reasoning checks creates rules that can then be refined and applied to a model.
As a model generates responses, Automated Reasoning checks verifies them, and, in the event of a probable hallucination, draws on the ground truth for the right answer. It presents this answer alongside the likely mistruth so customers can see how far off-base the model might’ve been.
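As a toy illustration of that workflow — and emphatically not the actual Bedrock API, whose calls the article does not document — the core of such a check reduces to comparing claims against customer-supplied ground truth:

```python
# Invented sketch of a verify-against-ground-truth check. The real AWS
# Automated Reasoning checks service derives logical rules; this toy
# version just looks claims up in a table supplied by the customer.
ground_truth = {
    "standard employees get 20 vacation days": True,
    "standard employees get 30 vacation days": False,
}

def verify(response_claims: list[str]) -> list[dict]:
    findings = []
    for claim in response_claims:
        if ground_truth.get(claim) is False:
            # Probable hallucination: pair it with a supported statement,
            # mirroring how the article says corrections are presented.
            correct = next(c for c, ok in ground_truth.items() if ok)
            findings.append({"claim": claim, "correction": correct})
    return findings

print(verify(["standard employees get 30 vacation days"]))
```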
AWS says PwC is already using Automated Reasoning checks to design AI assistants for its clients. And Swami Sivasubramanian, VP of AI and data at AWS, suggested that this type of tooling is exactly what’s attracting customers to Bedrock.
But as one expert told me this summer, trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water.
AI models hallucinate because they don’t actually “know” anything. They’re statistical systems that identify patterns in a series of data, and predict which data comes next based on previously seen examples. It follows that a model’s responses aren’t answers, then, but predictions of how questions should be answered — within a margin of error.
AWS claims that Automated Reasoning checks uses “logically accurate” and “verifiable reasoning” to arrive at its conclusions. But the company volunteered no data showing that the tool is reliable.
In other Bedrock news, AWS this morning announced Model Distillation, a tool to transfer the capabilities of a large model (e.g., Llama 405B) to a small model (e.g., Llama 8B) that’s cheaper and faster to run. An answer to Microsoft’s Distillation in Azure AI Foundry, Model Distillation provides a way to experiment with various models without breaking the bank, AWS says.
“After the customer provides sample prompts, Amazon Bedrock will do all the work to generate responses and fine-tune the smaller model,” AWS explained in a blog post, “and it can even create more sample data, if needed, to complete the distillation process.”
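For readers new to distillation, here is a minimal, hedged sketch of the workflow the blog post describes; `StubModel` and everything else below are invented stand-ins, not the Bedrock Model Distillation API:

```python
# Teacher-student distillation in miniature: the large model answers the
# customer's sample prompts, and the pairs become fine-tuning data for
# the small model. All names are invented for illustration.
class StubModel:
    def __init__(self, name):
        self.name = name
    def generate(self, prompt):
        return f"{self.name} answer to: {prompt}"
    def fine_tune(self, pairs):
        print(f"fine-tuning {self.name} on {len(pairs)} pairs")

def distill(teacher, student, sample_prompts):
    pairs = [(p, teacher.generate(p)) for p in sample_prompts]
    # The blog post notes the service can also synthesize extra sample
    # data if needed; that step is omitted here.
    student.fine_tune(pairs)
    return student

distill(StubModel("llama-405b"), StubModel("llama-8b"),
        ["Summarize Q3 revenue", "Draft a support reply"])
```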
But there are a few caveats.
Model Distillation only works with Bedrock-hosted models from Anthropic and Meta at present. Customers have to select a large and small model from the same model “family” — the models can’t be from different providers. And distilled models will lose some accuracy — “less than 2%,” AWS claims.
If none of that deters you, Model Distillation is now available in preview, along with Automated Reasoning checks.
Also available in preview is “multi-agent collaboration,” a new Bedrock feature that lets customers assign AI to subtasks in a larger project. A part of Bedrock Agents, AWS’ contribution to the AI agent craze, multi-agent collaboration provides tools to create and tune AI to things like reviewing financial records and assessing global trends.
Customers can even designate a “supervisor agent” to break up and route tasks to the AIs automatically. The supervisor can “[give] specific agents access to the information they need to complete their work,” AWS says, and “[determine] what actions can be processed in parallel and which need details from other tasks before [an] agent can move forward.”
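A rough, self-contained sketch of that supervisor pattern might look like the following; every name is invented for illustration, and the real Bedrock Agents API differs:

```python
# Supervisor pattern in miniature: split a job into subtasks, route each
# to a specialist agent, and pass dependent subtasks the outputs they
# need. Dependency-free subtasks could run in parallel, as the article
# notes; this sketch runs them sequentially for simplicity.
from dataclasses import dataclass, field

@dataclass
class Subtask:
    name: str
    depends_on: list = field(default_factory=list)

def run_agent(agent: str, subtask: Subtask, context: list) -> str:
    # Stand-in for a call to a specialist AI agent.
    return f"{agent} handled '{subtask.name}' using {len(context)} inputs"

def supervise(subtasks, routing):
    results = {}
    for sub in subtasks:  # assume listed in dependency order
        context = [results[d] for d in sub.depends_on]
        results[sub.name] = run_agent(routing[sub.name], sub, context)
    return results

plan = [Subtask("review financials"),
        Subtask("assess trends"),
        Subtask("write summary",
                depends_on=["review financials", "assess trends"])]
print(supervise(plan, {"review financials": "finance-agent",
                       "assess trends": "research-agent",
                       "write summary": "writer-agent"}))
```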
Tomi Engdahl says:
https://techcrunch.com/2024/07/13/what-exactly-is-an-ai-agent/
Tomi Engdahl says:
Frederic Lardinois / TechCrunch:
AWS announces the general availability of its Trainium2 chips, and EC2 Trn2 UltraServers, which feature 64 Trainium2 chips, and slates Trainium3 for late 2025 — At its re:Invent conference, AWS today announced the general availability of its Trainium2 (T2) chips for training and deploying large language models (LLMs).
AWS’ Trainium2 chips for building LLMs are now generally available, with Trainium3 coming in late 2025
https://techcrunch.com/2024/12/03/aws-trainium2-chips-for-building-llms-are-now-generally-available-with-trainium3-coming-in-late-2025/
Belle Lin / Wall Street Journal:
AWS announces Project Rainier, a cluster with “hundreds of thousands” of Trainium2 chips, slated for 2025, that Anthropic aims to use to train and run AI models
Amazon Announces Supercomputer, New Server Powered by Homegrown AI Chips
The company’s megacluster of chips for artificial-intelligence startup Anthropic will be among the world’s largest, it said, and its new giant server will lower the cost of AI as it seeks to build an alternative to Nvidia
https://www.wsj.com/articles/amazon-announces-supercomputer-new-server-powered-by-homegrown-ai-chips-18c196fc?st=QmLNWk&reflink=desktopwebshare_permalink
Amazon’s cloud computing arm Amazon Web Services Tuesday announced plans for an “Ultracluster,” a massive AI supercomputer made up of hundreds of thousands of its homegrown Trainium chips, as well as a new server, the latest efforts by its AI chip design lab based in Austin, Texas.
The chip cluster will be used by the AI startup Anthropic, in which the retail and cloud-computing giant recently invested an additional $4 billion. The cluster, called Project Rainier, will be located in the U.S. When ready in 2025, it will be one of the largest in the world for training AI models, according to Dave Brown, Amazon Web Services’ vice president of compute and networking services.
Amazon Web Services also announced a new server called Ultraserver, made up of 64 of its own interconnected chips, at its annual re:Invent conference in Las Vegas Tuesday.
Additionally, AWS on Tuesday unveiled Apple as one of its newest chip customers.
Combined, Tuesday’s announcements underscore AWS’s commitment to Trainium, the in-house-designed silicon the company is positioning as a viable alternative to the graphics processing units, or GPUs, sold by chip giant Nvidia.
Tomi Engdahl says:
Frederic Lardinois / TechCrunch:
Amazon says third-party services like Zoom, Asana, and others will soon be able to integrate Amazon Q Business, and gives Q Developer a wide range of updates
Amazon’s Q Business AI agents get smarter
https://techcrunch.com/2024/12/03/amazons-q-business-ai-agent-gets-smarter/
A year ago, AWS announced Q, its AI assistant platform for business users and developers. Q Developer is getting a wide range of updates today and so is Q Business. The focus for Q Business is on new integrations that can help businesses bring in more data from third-party tools, the ability for third-party platforms to integrate Q into their own services, and new actions that will allow Q to perform tasks on behalf of its users across third-party applications like Google Workspace, Microsoft 365, and Smartsheet, among others.
More Q in QuickSight
Previously, Q was already able to pull in data from about 40 enterprise tools ranging from data stores like Amazon’s own S3 to services like Google Drive, SharePoint, Zendesk, Box, and Jira. Q then creates a canonical index of all of this data (keeping access permissions and other settings intact). The idea now is to expand the types of data the service can index and then use that to provide ever more personalized results. This index, after all, is at the core of Q’s capabilities.
Now businesses will be able to take the data they have stored in databases, data warehouses, and data lakes and combine it with the rest of their business data, be that documents, wikis, or emails — and they can now do so in QuickSight, AWS’ business intelligence service. Amazon Q in QuickSight, the company says, will allow employees to query this data and quickly generate charts and graphs with the help of Q (or augment existing charts with content from a wider variety of sources).
More Q on third-party platforms
The feature that is maybe the most interesting from a business perspective is that third-party services like Zoom, Asana, Miro, PagerDuty, Smartsheet, and others will now be able to integrate Amazon Q Business into their own services. These services will get access to an API that will allow their generative AI-powered experiences to access the same index that is also used by Q.
Tomi Engdahl says:
Aisha Malik / TechCrunch:
Meta says AI content made up less than 1% of election-related misinformation on its apps during major elections in the US, the UK, India, Indonesia, and others
Meta says AI content made up less than 1% of election-related misinformation on its apps
https://techcrunch.com/2024/12/03/meta-says-ai-content-made-up-less-than-1-of-election-related-misinformation-on-its-apps/
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
Google DeepMind unveils Genie 2, a model that can generate 3D worlds from a single prompt image, playable by humans or AI agents using keyboard and mouse inputs — DeepMind, Google’s AI research org, has unveiled a model that can generate an “endless” variety of playable 3D worlds.
https://techcrunch.com/2024/12/04/deepminds-genie-2-can-generate-interactive-worlds-that-look-like-video-games/
William J. Broad / New York Times:
Google DeepMind unveils GenCast, an AI weather model that the company claims outperforms traditional methods on up to 15-day weather and deadly storm forecasts — GenCast, from the company’s DeepMind division, outperformed the world’s best predictions of deadly storms as well as everyday weather.
https://www.nytimes.com/2024/12/04/science/google-ai-weather-forecast.html
Tomi Engdahl says:
Hayden Field / CNBC:
Anduril and OpenAI partner to deploy advanced AI systems for national security missions, focusing on “improving the nation’s counter-unmanned aircraft systems” — OpenAI and Anduril on Wednesday announced a partnership allowing the defense tech company to deploy advanced artificial …
OpenAI partners with defense company Anduril
https://www.cnbc.com/2024/12/04/openai-partners-with-defense-company-anduril.html
OpenAI and Anduril on Wednesday announced a partnership allowing the defense tech company to deploy advanced AI systems for “national security missions.”
It’s part of a broader and controversial trend of AI companies not only walking back bans on military use of their products, but also entering into partnerships with defense industry giants.
Last month, Anthropic and defense contractor Palantir announced a partnership with Amazon Web Services to “provide U.S. intelligence and defense agencies access” to Anthropic’s AI systems.
Tomi Engdahl says:
Simon Willison / Simon Willison’s Weblog:
First impressions of Amazon Nova LLMs: they are competitive with Google Gemini and extremely inexpensive, and may position Amazon as a top tier model provider — Amazon released three new Large Language Models yesterday at their AWS re:Invent conference. The new model family is called Amazon Nova …
https://simonwillison.net/2024/Dec/4/amazon-nova/
Tomi Engdahl says:
Financial Times:
xAI plans to expand its Colossus supercomputer tenfold to incorporate 1M+ GPUs; work has already begun to increase the size of its Memphis, Tennessee facility — Facility in Memphis expected to incorporate more than 1mn GPUs as billionaire’s xAI aims to catch up with rivals
Elon Musk plans to expand Colossus AI supercomputer tenfold
Facility in Memphis expected to incorporate more than 1mn GPUs as billionaire’s xAI aims to catch up with rivals
https://www.ft.com/content/9c0516cf-dd12-4665-aa22-712de854fe2f?accessToken=zwAGKH6yWwXgkdOcBRbP3RJGZdOqInEt6FT-Lw.MEUCIAuQBEJ1VLrDDjvKtcFXNhAvbHBMD5Qc5RD0821VsgVwAiEAsg6H7QZGLKgKTdEa55O69lKOIiML9qkGxNpwAAcLJig&sharetype=gift&token=3aa571b8-1dc1-42b4-b0e9-e9ddeffd6ce3
Tomi Engdahl says:
Charles Rollet / TechCrunch:
Three members of Google’s NotebookLM team, including its lead Raiza Martin, are leaving to launch a startup focused on building “a user-first AI product” — Three members of Google’s NotebookLM team, including its team lead and designer, have announced they are leaving Google for a new stealth startup.
Key leaders behind Google’s viral NotebookLM are leaving to create their own startup
https://techcrunch.com/2024/12/04/key-leaders-behind-googles-viral-notebooklm-are-leaving-to-create-their-own-startup/
Tomi Engdahl says:
The Verge:
Sam Altman says OpenAI will begin 12 days of “shipmas” on December 5, with new features, products, and demos, sources say including text-to-video AI tool Sora — Happy holidays from OpenAI. The AI startup plans to kick off a “shipmas” period of new features and products for 12 days, starting on December 5th.
OpenAI’s 12 days of ‘shipmas’ include Sora and new reasoning model
/ OpenAI is getting ready for a busy, festive period of AI announcements.
https://www.theverge.com/2024/12/4/24312352/openai-sora-o1-reasoning-12-days-shipmas
Happy holidays from OpenAI. The AI startup plans to kick off a “shipmas” period of new features, products, and demos for 12 days, starting on December 5th. The announcements will include OpenAI’s long-awaited text-to-video AI tool Sora and a new reasoning model, sources familiar with OpenAI’s plans tell The Verge.
Just ahead of the launch, a few OpenAI employees began teasing the coming releases on social media: “What’s on your Christmas list?” a member of the technical staff posted. “Got back just in time to put up the shipmas tree,” another staffer wrote. Sora lead Bill Peebles responded to a staffer who posted that OpenAI is “unbelievably back” with one word: “Correct.” The startup’s senior vice president also responded with IYKYK (if you know, you know).
The imminent launch of Sora comes just weeks after artists leaked access to the model in protest, claiming OpenAI used them for “unpaid R&D and PR.”
Google has also debuted its latest generative AI video model ahead of the Sora release. Veo is now available for businesses to start incorporating into their content creation pipelines. Originally unveiled in May, three months after OpenAI announced Sora, Veo is now in a private preview through Google’s Vertex AI platform.
One of the 12 days of OpenAI announcements may include a new Santa-inspired voice for ChatGPT. Some ChatGPT users have spotted code that replaces the voice mode button with a snowflake.
Tomi Engdahl says:
ChatGPT’s search results for news are ‘unpredictable’ and frequently inaccurate
/ Researchers say that despite a confident tone, OpenAI’s chatbot search gave ‘partially or entirely incorrect responses’ to most of their requests.
https://www.theverge.com/2024/12/3/24312016/chatgpt-search-results-review-inaccurate-unpredictable
Tomi Engdahl says:
Tuskira Scores $28.5M for AI-Powered Security Mesh
Tuskira is working on an AI-powered security mesh promising to integrate fragmented security tools and mitigate risk exposure in real time
https://www.securityweek.com/tuskira-scores-28-5m-for-ai-powered-security-mesh/
Tomi Engdahl says:
https://www.uusiteknologia.fi/2024/12/05/suomen-tekoalykehitysta-tuetaan-huippunimilla/
The new AI Finland network has set up an Advisory Board to support its mission of making Finland a global forerunner in developing and applying artificial intelligence. Alongside Silo AI’s Peter Salin, it includes many top experts from both research and companies, small and large.
Tomi Engdahl says:
AI will soon monitor the driver in the car
https://etn.fi/index.php/13-news/16925-tekoaely-valvoo-pian-kuljettajaa-autossa
LG Electronics and the AI-focused semiconductor company Ambarella have announced a jointly developed driver monitoring system (DMS) that will be shown at CES 2025 in January. The system uses Ambarella’s CV25 AI chipset and is already in production on the lines of a global carmaker.
Ambarella’s CV25 chipset enables real-time analysis of high-resolution video from the vehicle’s interior cameras. It offers accurate object detection, smooth video processing, and high energy efficiency. The chipset also works in low light and varying conditions, ensuring reliable monitoring regardless of the time of day or weather.
Tomi Engdahl says:
Generative AI threatens music and video industry revenues
https://www.uusiteknologia.fi/2024/12/05/generatiivinen-tekoaly-uhkaa-musiikki-ja-videoalan-tuloja/
The market for music and audiovisual content produced by generative AI is estimated to grow from three billion to 64 billion euros by 2028, reports CISAC, the umbrella organization of copyright societies, in a new study. At the same time, the study predicts that human-created content will lose market share.
The study calculates that as AI content replaces traditional works across different fields of use, music creators will lose 24 percent and audiovisual creators 21 percent of their income by 2028. Creators’ incomes are at risk if AI content takes over the market and compensation questions remain unresolved.
The study’s key conclusion is that generative AI endangers the future of creative work from two directions: works are used without permission to train AI, and AI content displaces human-created art in the market. At the same time, generative AI produces significant economic benefits, especially for technology companies.
Tomi Engdahl says:
Does AI threaten humanity as in sci-fi? A machine learning researcher answers
https://youtu.be/389amUmXXqs
Tomi Engdahl says:
OpenAI launched a new subscription plan for ChatGPT — and it’s very expensive
Confirming leaks this morning, OpenAI announced ChatGPT Pro, a new $200-per-month subscription tier that provides unlimited access to all of OpenAI’s models, including the full version of its o1 “reasoning” model.
OpenAI released a preview of o1 in September. Compared to the preview, users can expect “a faster, more powerful, and accurate reasoning model that is even better at coding and math,” an OpenAI spokesperson told TechCrunch.
Read more from Kyle Wiggers: https://tcrn.ch/41gnbwn
Tomi Engdahl says:
“Researchers have long warned of the dangers of building relationships with chatbots. But an array of popular apps are now offering AI companions to millions of predominantly female users who are spinning up AI girlfriends, AI husbands, AI therapists — even AI parents — despite long-standing warnings from researchers about the potential emotional toll of interacting with humanlike chatbots.
While artificial intelligence companies struggle to convince the public that chatbots are essential business tools, a growing audience is spending hours building personal relationships with AI. In September, the average user on the companion app Character.ai spent 93 minutes a day talking to one of its user-generated chatbots, often based on popular characters from anime and gaming, according to global data on iOS and Android devices from market intelligence firm Sensor Tower.
That’s 18 minutes longer than the average user spent on TikTok. And it’s nearly eight times longer than the average user spent on ChatGPT, which is designed to help “get answers, find inspiration and be more productive.”
These users don’t always stick around, but companies are wielding data to keep customers coming back.
The Palo Alto-based Chai Research — a Character.ai competitor — studied the chat preferences of tens of thousands of users to entice consumers to spend even more time on the app, the company wrote in a paper last year. In September, the average Chai user spent 72 minutes a day in the app, talking to customized chatbots, which can be given personality traits like “toxic,” “violent,” “agreeable” or “introverted.”
Some Silicon Valley investors and executives are finding the flood of dedicated users — who watch ads or pay monthly subscription fees — hard to resist. While Big Tech companies have mostly steered clear of AI companions, which tend to draw users interested in sexually explicit interactions, app stores are now filled with companion apps from lesser-known companies in the United States, Hong Kong and Cyprus, as well as popular Chinese-owned apps, such as Talkie AI and Poly.AI. https://www.washingtonpost.com/technology/2024/12/06/ai-companion-chai-research-character-ai/
Tomi Engdahl says:
Creativity is the last refuge of the artist. But the advent of artificial systems capable of replicating technical skill and style poses a fundamental question: What role does the human play in art creation when technology can replace skill?
This problem isn’t a new one. If we look at the long history of technology, we can see how new tools always extend the definition of what art is, writes Henry Shevlin.
“If one were to draw a lesson from these cases, it is that history (and ultimately the art world) seems to be on the side of those who would extend the concept of art to include new forms of human creativity.” But is AI-assisted art a special case?
Tap the link to find out: https://iai.tv/articles/the-artist-is-dead-ai-killed-them-auid-2275
Tomi Engdahl says:
One teen worried that they wouldn’t be able to cope with their suicidal thoughts without the help of a bot. https://trib.al/gYd1nrJ
Tomi Engdahl says:
“Under current terms, when OpenAI creates AGI — defined as a “highly autonomous system that outperforms humans at most economically valuable work” — Microsoft’s access to such a technology would be void. The OpenAI board would determine when AGI is achieved.
The start-up is considering removing the stipulation from its corporate structure, enabling the Big Tech group to continue investing in and accessing all OpenAI technology after AGI is achieved, according to multiple people with knowledge of the discussions.”
Developing GPT is burning through such astronomical amounts of money that OpenAI would rather give up its original AGI precautionary principles than the possibility of asking Microsoft for more billions in venture funding in the future.
OpenAI seeks to unlock investment by ditching ‘AGI’ clause with Microsoft
Start-up discusses removing provision to protect powerful technology from being misused for commercial purposes
https://www.ft.com/content/2c14b89c-f363-4c2a-9dfc-13023b6bce65?shareType=nongift&fbclid=IwY2xjawHAVC9leHRuA2FlbQIxMQABHV5-LdtgYjC6En6HcBEhwKC7LMtG6g-pI25RPl4yhLZgHyKnNIeueB2bcQ_aem_-X3iMIzzriG7XV0XWZOAuA
OpenAI is in discussions to ditch a provision that shuts Microsoft out of its most advanced models when the start-up achieves “artificial general intelligence”, as it seeks to unlock billions of dollars of future investment.
Under current terms, when OpenAI creates AGI — defined as a “highly autonomous system that outperforms humans at most economically valuable work” — Microsoft’s access to such a technology would be void. The OpenAI board would determine when AGI is achieved.
The start-up is considering removing the stipulation from its corporate structure, enabling the Big Tech group to continue investing in and accessing all OpenAI technology after AGI is achieved, according to multiple people with knowledge of the discussions. A final decision has not been made and options are being discussed by the board, they added.
The clause was included to protect the potentially powerful technology from being misused for commercial purposes, giving ownership of the technology to its non-profit board. According to OpenAI’s website: “AGI is explicitly carved out of all commercial and IP licensing agreements.”
Tomi Engdahl says:
When two neurodivergent individuals meet: watch two AI chatbots have an “AI to AI” convo and see their chat heat up! A world where AI talks to itself might sound like sci-fi, but it’s happening now.
Credit: rohitmen01 / TT
Tomi Engdahl says:
Here’s a fine AI tool I just found: https://www.meshy.ai
Tomi Engdahl says:
ChatGPT launches an outrageously expensive new Pro subscription tier
https://muropaketti.com/tietotekniikka/tietotekniikkauutiset/chatgpt-julkaisi-uuden-poskettoman-kalliin-pro-tilaustason/
OpenAI has released a new ChatGPT Pro subscription tier priced at $200 per month before taxes.
Subscribers to the $200-per-month ChatGPT Pro get unlimited access to the company’s newest and most powerful o1 model, including the new o1 pro mode version. The model is particularly strong in areas such as coding, mathematics, and analyzing legal cases.
Tomi Engdahl says:
In Tests, OpenAI’s New Model Lied and Schemed to Avoid Being Shut Down
https://futurism.com/the-byte/openai-o1-self-preservation
It pursued survival at all costs
Survival Instinct
It sounds like OpenAI’s latest AI is showing signs of a drive for self-preservation.
Tomi Engdahl says:
How close is AI to human-level intelligence?
Large language models such as OpenAI’s o1 have electrified the debate over achieving artificial general intelligence, or AGI. But they are unlikely to reach this milestone on their own.
https://www.nature.com/articles/d41586-024-03905-1
OpenAI’s latest artificial intelligence (AI) system dropped in September with a bold promise. The company behind the chatbot ChatGPT showcased o1 — its latest suite of large language models (LLMs) — as having a “new level of AI capability”. OpenAI, which is based in San Francisco, California, claims that o1 works in a way that is closer to how a person thinks than do previous LLMs.
The release poured fresh fuel on a debate that’s been simmering for decades: just how long will it be until a machine is capable of the whole range of cognitive tasks that human brains can handle, including generalizing from one task to another, abstract reasoning, planning and choosing which aspects of the world to investigate and learn from?
Such an ‘artificial general intelligence’, or AGI, could tackle thorny problems, including climate change, pandemics and cures for cancer, Alzheimer’s and other diseases. But such huge power would also bring uncertainty — and pose risks to humanity. “Bad things could happen because of either the misuse of AI or because we lose control of it,” says Yoshua Bengio, a deep-learning researcher at the University of Montreal, Canada.
The revolution in LLMs over the past few years has prompted speculation that AGI might be tantalizingly close. But given how LLMs are built and trained, they will not be sufficient to get to AGI on their own, some researchers say. “There are still some pieces missing,” says Bengio.
What’s clear is that questions about AGI are now more relevant than ever. “Most of my life, I thought people talking about AGI are crackpots,” says Subbarao Kambhampati, a computer scientist at Arizona State University in Tempe. “Now, of course, everybody is talking about it. You can’t say everybody’s a crackpot.”
Why the AGI debate changed
The phrase artificial general intelligence entered the zeitgeist around 2007 after its mention in an eponymously named book edited by AI researchers Ben Goertzel and Cassio Pennachin. Its precise meaning remains elusive, but it broadly refers to an AI system with human-like reasoning and generalization abilities. Fuzzy definitions aside, for most of the history of AI, it’s been clear that we haven’t yet reached AGI. Take AlphaGo, the AI program created by Google DeepMind to play the board game Go. It beats the world’s best human players at the game — but its superhuman qualities are narrow, because that’s all it can do.
Tomi Engdahl says:
One researcher quoted in the article thinks that the model incorporates a chain-of-thought (CoT) generator that creates numerous CoT prompts for a user query, plus a mechanism to select a good prompt from the choices. During training, o1 is taught not only to predict the next token, but also to select the best CoT prompt for a given query. The addition of CoT reasoning explains why, for example, o1-preview — the advanced version of o1 — correctly solved 83% of problems in a qualifying exam for the International Mathematical Olympiad, a prestigious mathematics competition for high-school students, according to OpenAI. That compares with a score of just 13% for the company’s previous most powerful LLM, GPT-4o.
https://www.nature.com/articles/d41586-024-03905-1
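If that hypothesis is right, the selection step amounts to a best-of-N search over candidate reasoning chains. A minimal sketch, where `generate_cot` and `score` are hypothetical stand-ins rather than anything OpenAI has published:

```python
# Best-of-N selection over chain-of-thought candidates, as hypothesized
# above. `generate_cot` and `score` are invented placeholders.
import random

def best_of_n(query, generate_cot, score, n=8):
    candidates = [generate_cot(query) for _ in range(n)]
    return max(candidates, key=score)  # keep the highest-rated chain

# Toy demo with random stand-ins:
print(best_of_n("2+2?", lambda q: f"chain {random.random():.2f}",
                lambda c: float(c.split()[1])))
```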
Tomi Engdahl says:
LLMs, says Chollet, irrespective of their size, are limited in their ability to solve problems that require recombining what they have learnt to tackle new tasks. “LLMs cannot truly adapt to novelty because they have no ability to basically take their knowledge and then do a fairly sophisticated recombination of that knowledge on the fly to adapt to new context.”
https://www.nature.com/articles/d41586-024-03905-1
Tomi Engdahl says:
OpenAI CEO Sam Altman says it’d be ‘un-American’ for Elon Musk to wield political influence to harm rivals
https://techcrunch.com/2024/12/04/openai-ceo-sam-altman-says-itd-be-un-american-for-elon-musk-to-wield-political-influence-to-harm-rivals/
Tomi Engdahl says:
Opening the black box: how ‘explainable AI’ can help us understand how algorithms work
https://theconversation.com/opening-the-black-box-how-explainable-ai-can-help-us-understand-how-algorithms-work-244080
When you visit a hospital, artificial intelligence (AI) models can assist doctors by analysing medical images or predicting patient outcomes based on historical data. If you apply for a job, AI algorithms can be used to screen resumés, rank job candidates and even conduct initial interviews. When you want to watch a movie on Netflix, a recommendation algorithm predicts which movies you’re likely to enjoy based on your viewing habits. Even when you are driving, predictive algorithms are at work in navigation apps like Waze and Google Maps, optimising routes and predicting traffic patterns to ensure faster travel.
In the workplace, AI-powered tools like ChatGPT and GitHub Copilot are used to draft e-mails, write code and automate repetitive tasks, with studies suggesting that AI could automate up to 30% of worked hours by 2030.
But a common issue of these AI systems is that their inner workings are often complex to understand – not only for the general public, but also for experts!
AI and machine learning: what’s in a name?
With the current move toward integration of AI into organisations and the widespread mediatisation of its potential, it is easy to get confused, especially with so many terms floating around to designate AI systems, including machine learning, deep learning and large language models, to name but a few.
In simple terms, AI refers to the development of computer systems that perform tasks requiring human intelligence such as problem-solving, decision-making and language understanding. It encompasses various subfields like robotics, computer vision and natural language understanding.
One important subset of AI is machine learning, which enables computers to learn from data instead of being explicitly programmed for every task. Essentially, the machine looks at patterns in the data and uses those patterns to make predictions or decisions. For example, think about an e-mail spam filter. The system is trained with thousands of examples of both spam and non-spam e-mails. Over time, it learns patterns such as specific words, phrases or sender details that are common in spam.
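A minimal sketch of that spam-filter example, using scikit-learn’s naive Bayes classifier on invented toy data, shows patterns being learned from labeled examples rather than written as rules:

```python
# A spam filter learned from examples rather than hand-written rules.
# The four training e-mails are invented toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = ["win a free prize now", "claim your free money",
          "meeting agenda for tomorrow", "lunch at noon?"]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)                   # learn word patterns per class
print(model.predict(["free prize money"]))  # -> ['spam'], most likely
```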
Deep learning, a further subset of machine learning, uses complex neural networks with multiple layers to learn even more sophisticated patterns. Deep learning has been shown to be of exceptional value when working with image or textual data and is the core technology at the basis of various image recognition tools or large language models such as ChatGPT.
Regulating AI
The examples above demonstrate the broad application of AI across different industries. Several of these scenarios, such as suggesting movies on Netflix, seem relatively low-risk. However, others, such as recruitment, credit scoring or medical diagnosis, can have a large impact on someone’s life, making it crucial that they happen in a manner that is aligned with our ethical objectives.
Recognising this, the European Union proposed the AI Act, which its parliament approved in March. This regulatory framework categorises AI applications into four different risk levels: unacceptable, high, limited and minimal, depending on their potential impact on society and individuals. Each level is subject to different degrees of regulations and requirements.
Unacceptable risk AI systems, such as systems used for social scoring or predictive policing, are prohibited in the EU, as they pose significant threats to human rights.
High-risk AI systems are allowed but they are subject to the strictest regulation, as they have the potential to cause significant harm if they fail or are misused, including in settings such as law enforcement, recruitment and education.
Limited risk AI systems, such as chatbots or emotion recognition systems, carry some risk of manipulation or deceit. Here it is important that humans are informed about their interaction with the AI system.
Minimal risk AI systems include all other AI systems, such as spam filters, which can be deployed without additional restrictions.
Tomi Engdahl says:
For high-risk AI systems, Article 86 of the AI Act establishes the right to request an explanation of decisions made by AI systems, which is a significant step toward ensuring algorithmic transparency.
However, beyond legal compliance, transparent AI systems offer several other benefits for both model owners and those impacted by the systems’ decisions.
Transparent AI
First, transparency builds trust: when users understand how an AI system works, they are more likely to engage with it. Secondly, it can prevent biased outcomes, allowing regulators to verify whether a model unfairly favours specific groups. Finally, transparency enables the continuous improvement of AI systems by revealing mistakes or unexpected patterns.
But how can we achieve transparency in AI?
In general, there are two main approaches to making AI models more transparent.
First, one could use simple models like decision trees or linear models to make predictions. These models are easy to understand because their decision-making process is straightforward. For example, a linear regression model could be used to predict house prices based on features like the number of bedrooms, square footage and location. The simplicity lies in the fact that each feature is assigned a weight, and the prediction is simply the sum of these weighted features. This means one can clearly see how each feature contributes to the final house price prediction.
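A few lines of code make that transparency concrete: the prediction is nothing more than a weighted sum, so each feature’s contribution can be read directly off the weights. The weights and the example house below are invented for illustration:

```python
# An interpretable linear model: price = bias + sum of weighted features.
# Each term's contribution to the prediction is directly visible.
weights = {"bedrooms": 15_000, "square_feet": 120, "location_score": 30_000}
bias = 50_000

def predict_price(house: dict) -> int:
    return bias + sum(weights[f] * house[f] for f in weights)

house = {"bedrooms": 3, "square_feet": 1500, "location_score": 4}
print(predict_price(house))  # 50000 + 45000 + 180000 + 120000 = 395000
```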
However, as data becomes more complex, these simple models may no longer perform well enough.
This is why developers often turn to more advanced “black-box models” like deep neural networks, which can handle larger and more complex data but are difficult to interpret. For example, a deep neural network with millions of parameters can achieve a very high performance, but the way it reaches its decisions is not understandable to humans, because its decision-making process is too large and complex.
Explainable AI
Another option is to use these powerful black-box models alongside a separate explanation algorithm to clarify the model or its decisions. This approach, known as “explainable AI”, allows us to benefit from the power of complex models while still offering some level of transparency.
One well-known method is counterfactual explanation. A counterfactual explanation explains the decision of a model by identifying minimal changes to the input features that would lead to a different decision.
For instance, if an AI system denies someone a loan, a counterfactual explanation might inform the applicant: “If your income had been $5,000 higher, your loan would have been approved.” This makes the decision more understandable while the underlying machine-learning model can still be very complex. However, one downside is that these explanations are approximations, which means there may be multiple ways to explain the same decision.
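A toy sketch shows how such a counterfactual can be found by search: nudge the input until the decision of a (deliberately simple, invented) loan model flips:

```python
# Counterfactual explanation by search: find the smallest income increase
# that flips the decision of a toy loan model. Rule and numbers invented.
def approves(income: float, debt: float) -> bool:
    return income - 0.5 * debt >= 40_000   # toy decision rule

def counterfactual_income(income: float, debt: float, step=1_000) -> int:
    extra = 0
    while not approves(income + extra, debt):
        extra += step
    return extra

print(counterfactual_income(income=50_000, debt=30_000))
# -> 5000: "If your income had been $5,000 higher, your loan
#           would have been approved."
```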
The road ahead
As AI models become increasingly complex, their potential for transformative impact grows – but so does their capacity to make mistakes. For AI to be truly effective and trusted, users need to understand how these models reach their decisions.
Transparency is not only a matter of building trust but is also crucial to detect errors and ensure fairness. For instance, in self-driving cars, explainable AI can help engineers understand why the car misinterpreted a stop sign or failed to recognise a pedestrian. Similarly, in hiring, understanding how an AI system ranks job candidates can help employers avoid biased selections and promote diversity.
By focusing on transparent and ethical AI systems, we can ensure that technology serves both individuals and society in a positive and equitable way.
https://theconversation.com/opening-the-black-box-how-explainable-ai-can-help-us-understand-how-algorithms-work-244080
Tomi Engdahl says:
AI-Powered Robots Can Be Tricked Into Acts of Violence
Researchers hacked several robots infused with large language models, getting them to behave dangerously—and pointing to a bigger problem ahead.
https://www.wired.com/story/researchers-llm-ai-robot-violence/
Tomi Engdahl says:
https://www.theatlantic.com/newsletters/archive/2024/12/a-glimpse-at-a-post-gpt-future/680913/