Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” Scriffignano says.
Tomi Engdahl says:
From Ideas to Code: Enhancing Arduino Programming with chatGPT!
https://www.youtube.com/watch?v=S6ClV3dWiT4
In this video I am putting ChatGPT from OpenAI to the test, asking it to write four Arduino programs, from a simple sketch to fairly complex ones.
Tomi Engdahl says:
Ex-Google CEO warns there’s a time to consider “unplugging” AI systems
https://www.axios.com/2024/12/15/ai-dangers-computers-google-ceo?fbclid=IwY2xjawHL_l5leHRuA2FlbQIxMQABHfGFDFvagWJc7b6Wju5lhng-3V_BJxbV4wgChmNWc_IwcQq2bf7ALQRpVw_aem_bUXbd2lsWOkupXVuJmMMJw
Former Google CEO Eric Schmidt warned that when a computer system reaches a point where it can self-improve, “we seriously need to think about unplugging it.”
Why it matters: The multi-faceted artificial intelligence race is far from the finish line — but in just a few short years, the boundaries of the field have been pushed exponentially, sparking both awe and concern.
Regulations are in a state of flux, with discussions on Capitol Hill sputtering as this chapter of Congress nears its close.
But companies are still charging ahead.
“I’ve never seen innovation at this scale,” Schmidt said on ABC’s “This Week.” While he celebrated “remarkable human achievement,” he warned of the unforeseen dangers of rampant development.
Tomi Engdahl says:
AI will soon know more precisely how we behave – and that could benefit us
At the University of Oulu, AI is being combined with various disciplines, such as psychology, education, and health. The goal is to get AI to assist us in new ways.
https://yle.fi/a/74-20128602
Tomi Engdahl says:
https://www.facebook.com/share/p/Rt4soCp3PVQKjMFW/
Dani Filth, the enigmatic frontman of Cradle of Filth, has voiced his concerns about the rising influence of artificial intelligence (AI) in the music industry. Speaking with Spain’s “Metal Journal”, Dani offered a candid critique of the technology, highlighting its potential risks and its profound impact on creativity and human connection, as well as its broader implications for the arts.
When asked if he sees AI as a valuable innovation or a looming threat, Dani did not hesitate, labeling it as ‘dangerous.’ He shared a chilling account of his encounter with an AI music-generation program, stating: “I have a friend who is a computer programmer. He writes code for computer games and all kinds of weird and wonderful things, and last January I went for a meal at his house and he showed me something then that scared the shit outta me, which was a program that was very new at the time where you could literally just type in what kind of music you wanted, what the lyrics should be about, how you want the video to look, what genre it should be — you put all these things in and five minutes later, you had a song.”
Dani’s critique centers on the ‘soulless’ quality of AI-generated content. He elaborated: “The trouble is it’s soulless because essentially it’s just taking bits of information — millions — from around the web and binds them very quickly. And it learns. I know artists that are A.I. creators, and the longer they do it, the better it becomes.”
“You get a painter, for example, or a band that spends a year writing an album, recording it, putting all the visuals together, releasing it. These things can do it almost instantaneously. So not only is it taking away from the entertainment industry, whether it’s music, art, theater, cinema, but as soon as it becomes attached to a physical robot, something that can physically do the job that his mind creates, then it’s gonna affect every walk of life”, he explained.
Tomi Engdahl says:
Jo Constantz / Bloomberg:
How some companies are using AI agents: McKinsey for client onboarding, insurer Nsure to handle customer requests, and Accenture to support its marketing team
Work Shift
Technology & Skills
Big Tech’s New AI Obsession: Agents That Do Your Work for You
AI agents go beyond chatbots. “This is really the rise of digital labor.”
https://www.bloomberg.com/news/articles/2024-12-13/ai-agents-and-why-big-tech-is-betting-on-them-for-2025?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTczNDIzNjU3MiwiZXhwIjoxNzM0ODQxMzcyLCJhcnRpY2xlSWQiOiJTT0ZXR1ZUMEcxS1cwMCIsImJjb25uZWN0SWQiOiIwNEFGQkMxQkYyMTA0NUVEODg3MzQxQkQwQzIyNzRBMCJ9.5N873GpOGzb-w0vYDmgxDBqgCGyojK7V9r8gRaceBtU&leadSource=uverify%20wall
If you’re just getting up to speed on chatbots and copilots, you’re already falling behind. Talk in Silicon Valley now is squarely focused on agents — artificial intelligence that can handle multistep chores like onboarding clients, approving expenses and not just routing but actually responding to customer-service requests, all with minimal human supervision.
OpenAI Chief Executive Officer Sam Altman calls agents “the next giant breakthrough.” Salesforce Inc. has already signed deals to install AI agents at more than 200 companies including Accenture Plc, Adecco Group, FedEx Corp., International Business Machines Corp. and RBC Wealth Management.
“We’re really at the edge of a revolutionary transformation,” Salesforce CEO Marc Benioff said on the software company’s most recent earnings call. “This is really the rise of digital labor.”
Skeptics may note the similar tone of excitement that accompanied the 2022 debut of ChatGPT. While the OpenAI chatbot dazzled, it has yet to widely unlock substantial productivity gains or radically alter most workplaces. Agent technology goes a step further, not just performing parlor tricks and spitting out plausible responses to queries but actually doing the kinds of repetitive tasks that today are handled by millions of humans.
Agents are not just for workplaces. You might use one someday soon to research, select, and book each component of a lengthy vacation itinerary, for example. What Altman finds more exciting, though, is the prospect of an agent that acts like “a really smart senior coworker who you can collaborate on a project with,” he said in a podcast interview last month. “The agent can go do a two-day task — or two-week task — really well, and ping you when it has questions, but come back to you with a great work product.”
https://www.bloomberg.com/news/articles/2024-11-13/openai-nears-launch-of-ai-agents-to-automate-tasks-for-users
Tomi Engdahl says:
Suzanne Vranica / Wall Street Journal:
The merger of advertising giants Omnicom and Interpublic shows how the industry is preparing for an AI-driven upheaval, which is squeezing out creative talent
Sorry, Mad Men. The Ad Revolution Is Here.
Two advertisers are combining into a $30 billion behemoth to harness the data, tech and AI expertise now dominating Madison Avenue—and all the marketing you see
https://www.wsj.com/business/media/advertising-revolution-artificial-intelligence-data-mad-men-omnicom-interpublic-3c0c056b?st=oV8zbg&reflink=desktopwebshare_permalink
During an hour-long call this past week to sell investors on the virtues of a $30 billion merger of two advertising giants, data and technology came up a dozen times each. AI, eight times. “Creativity” was uttered once.
The speakers were the leaders of Omnicom Group and Interpublic Group, who plan to combine into the world’s biggest advertising business, one known for bold creative icons and legendary campaigns like Apple’s “Think different” and Mastercard’s “Priceless.”
Madison Avenue has been rapidly changing for over a decade, but the threats to an industry once centered on creatives have never been so great. Tech giants control more than half of the $1 trillion ad market, and quants armed with reams of data direct ad buying. Now, generative artificial intelligence is sending shock waves through the marketing world, promising to create and personalize ads cheaper and faster than ever.
Many industries talk about preparing for AI. With this blockbuster deal, the ad industry is trying to transform for it.
Tomi Engdahl says:
George Hammond / Financial Times:
A profile of Donald Trump’s AI and crypto czar David Sacks, who has urged AI companies and other tech startups to boost US competitiveness and national security
https://www.ft.com/content/82e859c8-ab66-47ad-bea0-11277bcd7a86
Tomi Engdahl says:
Pieter Haeck / Politico:
The EC pledges €750M of the €1.5B planned investment to fund AI-optimized supercomputers across seven sites to help EU startups compete with global rivals
Europe jumps into ‘incredibly costly’ AI supercomputing race
https://www.politico.eu/article/europe-costly-artificial-intelligence-race-supercomputer-startups/
The EU will fund AI-optimized supercomputers across the bloc to help startups compete with U.S. rivals.
Tomi Engdahl says:
Lillian Perlmutter / Rest of World:
Mexico’s Yucatán state says AI-powered app MeMind contributed to a 9% drop in suicides over the past two years by connecting 10,000 at-risk people to treatment
Mexico is using an AI-powered app to prevent suicides
MeMind has connected 10,000 at-risk people with mental health treatment, contributing to a 9% drop in suicides.
https://restofworld.org/2024/mexico-suicide-rates-yucatan-memind-app/
Wellness tracker company MeMind created a tailor-made version of its app for the state to curb a suicide epidemic.
The app’s AI diagnostic tool detects small changes in behavior patterns.
Over the past two years, the app’s AI tool has connected 10,000 at-risk people with mental health treatment.
Last fall, school administrators at the University of Yucatán asked Abraham Slim, a 23-year-old medical student, to fill out a survey on his smartphone.
The questions on the survey did not relate to his subjects but probed into his mental health. They asked if he had “started to work out the details of how to kill” himself the previous month, and if he “intended to carry out this plan,” among other queries.
The questionnaire was part of an artificial intelligence-backed mental health care initiative launched by the southern Mexican state of Yucatán, whose suicide rate is double that of the rest of the country. The survey “made me think about my mental health in a different way,” Slim told Rest of World. The app’s goal of raising consciousness about mental illness could “have a real effect, especially in urbanized areas,” he said.
Tomi Engdahl says:
Jess Weatherbed / The Verge:
An analysis of App Store rankings shows how market saturation for AI-based creative apps is making it harder to find good apps that use AI in more focused ways
AI is booming on the App Store, and developers are taking advantage of it
Many high-ranking AI apps feel like an attempted cash grab, and it’s not easy to separate the trash from the treasure.
https://www.theverge.com/2024/12/9/24314972/apple-app-store-ai-apps-art-design-photography
2024 in review: AI
There’s endless hype around generative AI this year, and app developers have clearly been paying attention. AI-focused tools are blowing up Apple’s App Store charts in almost every category, occupying top 10 rankings across education, productivity, and photo editing. Opportunities are particularly rife for free graphics and design apps, a category that’s positively saturated with AI content creation tools.
But quantity doesn’t mean quality — and a great many of these apps are bewilderingly bad. I’ve been using some of the most popular offerings to understand the state of AI tools as we head into 2025. And for every serious attempt to make AI useful, there seem to be several more designed to cash in on the hype: quietly paywalling features they advertise behind pricey subscriptions and greatly misrepresenting the results that users can achieve, if the app even works at all.
Around half of the App Store’s top 10 graphics and design apps have “AI” in the name, and three of them are made by the same company — HUBX, an app developer founded in 2022 that’s based in Turkey. One of its apps, DaVinci AI, is advertised as an AI image generator with some photo editing features. Almost every tool is locked behind a $30 annual (or $5 weekly) subscription fee, and the free trial only unlocks a subpar text-to-image feature that gives you a choice between using unspecified versions of the Stable Diffusion and DALL-E AI models.
The images it produces are low-quality and resized or cropped incorrectly. In-app ads appear when you click pretty much any link. The UI is unpleasant to navigate. If you do pay for the full version, you can’t download any edited images without slapping an ugly watermark on them. And yet, it sits far above the rankings for more recognizable creative platforms like Microsoft Designer, which has its own built-in text-to-image AI generator. Adobe Express is the only design-focused app in the same category that currently ranks above DaVinci AI.
The other two high-ranking HUBX products I tested were just as lackluster: the Home AI “interior design” app spits out hallucination-riddled images of rooms that are barely usable even as concept plans, and the Tattoo AI app refused to work entirely. Both apps have the same feature paywalls and review pop-up reminders as DaVinci AI. The copy-and-pasted App Store version history notes across all HUBX products are also devoid of any details.
While all three apps feature a surprisingly large number of five-star reviews, the user feedback across social media and the App Store comments is overwhelmingly negative. A recurring complaint is that customer service is impossible to get ahold of. HUBX never responded to my requests for comment.
Apps that clearly advertise having AI features are highly attractive.
AI-focused creative apps aren’t all bad, but the app market saturation makes it harder to find the good ones that use the technology in more focused ways. Apps like Google’s Magic Editor and Adobe Photoshop have specific tools for removing unwanted objects from photos or inserting new ones in specific places. Smaller developers that provide similar features and perform those tasks well are also climbing the App Store rankings.
For example, Photoroom and Picsart AI — all-in-one graphic design apps akin to platforms like Canva and Adobe Express — are actually pretty good! They rank highly in the free “Photo and Video” App Store category and provide a similar variety of features to quickly create online content using premade digital assets and templates, alongside some AI-powered editing tools like automatic background and object removal.
Neither app blew me away, but they did exactly what they advertised: the background removal features aren’t as good as Canva’s, but they did the job, and object removal erased undesirable aspects of photos even if it wasn’t as convincing as Google’s Magic Editor. Both apps lock some of their more premium editing features and digital assets behind a $13 monthly subscription, which is pretty standard if you’re being upfront about it — individual Canva and Adobe Express subscriptions start at $15 and $10 per month, respectively, by comparison.
The divide suggests the spammy AI app phenomenon is a mass-market one that hasn’t quite caught on with traditional photographers and illustrators who still want to use their tried-and-true apps. It also suggests that AI itself holds less appeal if users are expected to pay for it, which tracks with the kind of creative AI apps that appear most frequently — custom tattoos, logo makers, and interior design — which are skilled services that usually require payment. People don’t necessarily want to remove the skill barrier; they want to remove the financial one.
Using in-app charges to appear on the App Store’s free category, and thereby attract a wider audience, is hardly new. Mobile games have been taking advantage of this loophole for years — and what’s now happening with the creative AI market looks eerily familiar.
Tomi Engdahl says:
How does Google Translate work? Can it be considered an example of artificial intelligence (AI)?
Quora
https://www.quora.com/How-does-Google-Translate-work-Can-it-be-considered-an-example-of-artificial-intelligence-AI
Google Translate is a powerful tool that leverages artificial intelligence (AI) to facilitate translation between different languages. Here’s how it works and how it fits into the broader category of AI:
How Google Translate Works
Neural Machine Translation (NMT):
– Google Translate primarily uses a technology called Neural Machine Translation. NMT employs deep learning techniques to improve translation quality by considering entire sentences rather than just individual words or phrases.
– This approach allows the system to capture context, grammar, and nuances of language, resulting in more fluent and accurate translations.
Training on Large Datasets:
– Google Translate is trained on vast amounts of bilingual text data from various sources, including websites, books, and user contributions. This extensive dataset helps the model learn patterns, vocabulary, and syntax in multiple languages.
Continuous Learning:
– The system continuously improves through user interactions. Feedback from users helps refine translations over time, allowing the model to adapt to new phrases, slang, and language usage trends.
Contextual Understanding:
– NMT models use attention mechanisms to weigh the importance of different words in a sentence, which helps in understanding context and producing more coherent translations.
Support for Multiple Languages:
– Google Translate supports over 100 languages, making it a versatile tool for global communication. It can also handle various dialects and regional variations.
Is Google Translate AI?
Yes, Google Translate is a prime example of artificial intelligence. Here are a few reasons why:
Machine Learning: It uses machine learning algorithms to improve its translations based on data and user feedback, a hallmark of AI systems.
Natural Language Processing (NLP): Google Translate employs NLP techniques to understand and generate human language, which is a key area within AI.
Automation: The ability to automatically translate text without human intervention showcases the capabilities of AI in processing and generating language.
Conclusion
In summary, Google Translate is a sophisticated application of AI that utilizes neural networks and machine learning to provide translations. Its reliance on large datasets and continuous learning makes it a dynamic tool that exemplifies the advancements in artificial intelligence, particularly in the field of natural language processing.
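Google’s production system is proprietary, but the same NMT principle can be demonstrated with open tools. Below is a minimal sketch, assuming the Hugging Face transformers library (plus its sentencepiece dependency) and the public Helsinki-NLP MarianMT English-to-French checkpoint, none of which is anything Google ships: the tokenizer turns the sentence into token IDs, and an encoder-decoder model with attention generates the translation.

from transformers import MarianMTModel, MarianTokenizer

# Public encoder-decoder translation model (not Google's own system).
model_name = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentence = "The weather is beautiful today."
inputs = tokenizer(sentence, return_tensors="pt")  # sentence -> token IDs
outputs = model.generate(**inputs)                 # encoder-decoder with attention
print(tokenizer.decode(outputs[0], skip_special_tokens=True))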
Tomi Engdahl says:
Google Translate
A Brief Discussion on the AI behind Google Translate
https://mariam-jaludi.medium.com/google-translate-b6ad6328e7f2
Google Translate is a translation service developed by Google. If you’ve travelled, tried to learn another language, or wanted to understand the comments section under a video or post, chances are you’ve used Google Translate to help. Google Translate is available through a web browser interface and mobile apps for Android and iOS, and it even has an API that helps developers build browser extensions and software applications. Google Translate began in 2006 by using UN and European Parliament transcripts to gather linguistic data. It currently supports over 100 languages and boasts over 500 million daily users.
Let’s look at what approaches we could take to build our own translator:
Approach 1: Word-for-Word Translation
Word-for-Word translation involves taking every single word in the source language sentence and finding the corresponding word in the target language.
Approach 2: Neural Networks
Neural networks learn to solve problems by looking at a vast number of examples. They can be used to define grammar for a translator. A simplistic model of a translator using a neural network might look something like this:
Neural networks are taught language patterns and eventually are able to translate a given English sentence into French all on their own.
To continue our example of English to French translation, a neural network takes an English sentence or sequence of words as an input and gives a French sentence or sequence as an output. In order for this input to be interpreted by a neural network, it must be converted into a format it understands, i.e. a vector or matrix.
Vectors and matrices are an assortment of numbers representing data. The conversion from sentence to vector, called the Vector Mapper, is the first part of the network. It takes our English sentence and returns a vector that a computer can understand.
Because a translator deals with sentences or sequences of words, we can use a Recurrent Neural Network (RNN). RNNs are networks that learn to solve problems involving sequences, such as sentences.
Once a sentence is translated into a vector, it needs to be translated into a French sentence. This vector mapping is done with a second neural network. Once again, because we are working with sentences, another RNN can be used. Together, these two neural networks make the basic foundation of a language translator. This is called the Encoder-Decoder Architecture.
The encoder encodes the input sequence of length n to n encoding vectors and the decoder decodes the vectors back into language.
More specifically, these RNNs are called “Long Short-Term Memory Recurrent Neural Networks” (LSTM-RNN). LSTM networks are able to deal with longer sentences fairly well. They were introduced by Hochreiter and Schmidhuber in 1997 and were refined and popularized by many people in later works.
The encoder-decoder architecture works well for medium-length sentences (around 15–20 words). However, LSTM-RNN encoder-decoder structures do not fare as well with longer sentences, because RNNs are not able to address the complexity of grammar in longer sentences.
In order to look in both directions, forwards and backward, a normal RNN is replaced with a bi-directional recurrent neural network.
Approach 3: Bi-Directional Recurrent Neural Network
Bi-directional recurrent neural networks were introduced in 1993 but gained popularity recently with the emergence of deep learning. If we are performing English-to-French translation, then while generating a given word in the French translation, we want to look at both the words that come before it and the words that come after it.
With a bi-directional network, we are able to do this. This solves a big problem but also brings up a new issue: is every word in a sentence pivotal to the structure of the previous and next word? Which words should we focus on more in a large sentence?
The Bi-Directional RNN model works as follows:
An English sentence is fed to an encoder.
The encoder translates the sentence to a vector (numbers).
The vector is sent to the Attention Mechanism (AM). The AM decides which French words will be generated by which English words.
The decoder will then generate the French translation, one word at a time, focusing its attention on the words determined by the AM, producing the French sentence.
Bahdanau et al. found that this model performs better than the original encoder/decoder architecture.
Let’s go back to Google Translate. In November 2016, Google announced its transition to neural machine translation.
Google Translate’s AI works much like the bi-directional RNN model above, but at a much larger scale. Instead of using one LSTM for the encoder and one for the decoder, it uses eight layers for the encoder and another eight for the decoder, with connections between the layers. The first two layers are bi-directional, taking both forward and backward context into consideration. Google chose not to use bi-directional RNNs at every layer to save computation time.
This is done because deeper networks are better at modelling complex problems. This network is more capable of understanding the semantics of language and grammar. In the final model, English text is passed, word for word to the encoder. It converts these words into a number of word vectors. These word vectors are then passed into an attention mechanism. This determines the English words to focus on while generating some French word. This data is passed to the decoder which generates the French sentence one word at a time. This is a very high level summary of how Google Translate’s AI works. Google’s work on Neural Machine Translation utilizes state-of-the-art training techniques to improve the Google Translation system. Google Translate uses RNNs to directly learn the mapping between a sentence in one language to a sentence in another.
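As a toy illustration of the attention step described above, here is a short NumPy sketch; all vectors are random stand-ins for what a real network would learn. Each encoder vector is scored against the current decoder state, the scores are turned into weights with a softmax, and the weighted mix of encoder vectors becomes the context that guides the next output word.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# One vector per English word from the encoder (random stand-ins here).
encoder_states = np.random.randn(5, 8)  # 5 words, 8-dimensional vectors
decoder_state = np.random.randn(8)      # decoder state while producing one French word

scores = encoder_states @ decoder_state  # dot-product score for each English word
weights = softmax(scores)                # which English words to focus on
context = weights @ encoder_states       # weighted sum passed to the decoder

print("attention weights:", np.round(weights, 3))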
Great! You now know how Google Translate works! Now onto a deeper look into neural networks in a future discussion…
Tomi Engdahl says:
Click add-on board revolutionizes machine vibration analysis with ML models
https://etn.fi/index.php/13-news/16960-click-lisaekortti-mullistaa-koneiden-vaeraehtelyanalyysin-ml-mallien-avulla
MIKROE, known for its mikroBUS-based embedded boards, has introduced the ML Vibro Sens Click add-on board, which brings efficiency and precision to machine condition monitoring and vibration analysis.
The board gives developers an innovative tool for collecting and analyzing vibration data to train machine learning models. It is built around NXP’s FXLS8974CF three-axis, low-g accelerometer, which combines high performance with very low power consumption.
The ML Vibro Sens Click suits a wide range of uses, such as condition monitoring of industrial equipment, motion analysis in smart clothing, and detection of earthquakes and other seismic events. The add-on board can be used to distinguish between normal and abnormal vibration states in machines.
https://www.mikroe.com/ml-vibro-sens-click
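As a rough illustration of the kind of vibration analysis such a board enables, the NumPy sketch below extracts two classic features, RMS level and dominant frequency, from simulated accelerometer traces. The sampling rate and signals are made up, and this is not MIKROE’s or NXP’s actual software.

import numpy as np

SAMPLE_RATE = 1000  # Hz, assumed sampling rate

def vibration_features(samples):
    """Return RMS level and dominant frequency of one accelerometer axis."""
    rms = np.sqrt(np.mean(samples ** 2))
    spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    return rms, freqs[spectrum.argmax()]

# Simulated data: a healthy 50 Hz hum vs. the same hum plus a strong
# 180 Hz component, as a bearing fault might produce.
t = np.arange(0, 1, 1.0 / SAMPLE_RATE)
healthy = 0.1 * np.sin(2 * np.pi * 50 * t)
faulty = healthy + 0.2 * np.sin(2 * np.pi * 180 * t)

for name, signal in [("healthy", healthy), ("faulty", faulty)]:
    rms, peak = vibration_features(signal)
    print(f"{name}: rms={rms:.3f} g, dominant={peak:.0f} Hz")

Features like these would then be the inputs to a trained ML model that separates normal from abnormal vibration states.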
Tomi Engdahl says:
Eliza Strickland / IEEE Spectrum:
Q&A with Fei-Fei Li on her NeurIPS talk “Ascending the Ladder of Visual Intelligence”, her startup World Labs, giving machines 3D spatial intelligence, and more
AI Godmother Fei-Fei Li Has a Vision for Computer Vision
Her startup, World Labs, is giving machines 3D spatial intelligence
https://spectrum.ieee.org/fei-fei-li-world-labs
Stanford University professor Fei-Fei Li has already earned her place in the history of AI. She played a major role in the deep learning revolution by laboring for years to create the ImageNet dataset and competition, which challenged AI systems to recognize objects and animals across 1,000 categories. In 2012, a neural network called AlexNet sent shockwaves through the AI research community when it resoundingly outperformed all other types of models and won the ImageNet contest. From there, neural networks took off, powered by the vast amounts of free training data now available on the Internet and GPUs that deliver unprecedented compute power.
In the 13 years since ImageNet, computer vision researchers mastered object recognition and moved on to image and video generation. Li cofounded Stanford’s Institute for Human-Centered AI (HAI) and continued to push the boundaries of computer vision. Just this year she launched a startup, World Labs, which generates 3D scenes that users can explore. World Labs is dedicated to giving AI “spatial intelligence,” or the ability to generate, reason within, and interact with 3D worlds. Li delivered a keynote yesterday at NeurIPS, the massive AI conference, about her vision for machine vision, and she gave IEEE Spectrum an exclusive interview before her talk.
Tomi Engdahl says:
Stephanie Stacey / Financial Times:
Workers are adopting generative AI faster than companies can issue guidelines on how to do so; a survey says ~25% of the US workforce already uses the tech weekly
Bosses struggle to police workers’ use of AI
Staff are adopting large language models faster than companies can issue guidelines on how to do so
https://www.ft.com/content/cd08b45d-12dc-447e-bd59-1c366a7e6396?accessToken=zwAGKV9bYHFYkdPNCLRdEtxEftO9WRw2an5jlg.MEUCIBLPOpSM78Av9wRV5ZiC1eCLREw6Af85BOagHQbkW5pOAiEA2hvnDJdXGSJEU4IMPUUx_qRepv-ymGg3LXbJ6RwfM4g&sharetype=gift&token=56a62062-81f0-4255-87ec-d021c2ba527b
Matt had a secret helping hand when he started his new job at a pharmaceutical company in September.
The 27-year-old researcher, who asked to be identified by a pseudonym, was able to keep up with his more experienced colleagues by turning to OpenAI’s ChatGPT to write the code they needed for their work.
“Part of it was sheer laziness. Part of it was genuinely believing it could make my work better and more accurate,” he says.
Matt still does not know for sure whether this was allowed. His boss had not explicitly prohibited him from accessing generative AI tools such as ChatGPT but neither had they encouraged him to do so — or laid down any specific guidelines on what uses of the technology might be appropriate.
“I couldn’t see a reason why it should be a problem but I still felt embarrassed,” he says. “I didn’t want to admit to using shortcuts.”
Employers have been scrambling to keep up as workers adopt generative AI at a much faster pace than corporate policies are written. An August survey by the Federal Reserve Bank of St Louis found nearly a quarter of the US workforce was already using the technology weekly, rising closer to 50 per cent in the software and financial industries. Most of these users were turning to tools such as ChatGPT to help with writing and research, often as an alternative to Google, as well as using it as a translation tool or coding assistant.
But researchers warn that much of this early adoption has been happening in the shadows, as workers chart their own paths in the absence of clear corporate guidelines, comprehensive training or cyber security protection. By September, almost two years after the launch of ChatGPT, fewer than half of executives surveyed by US employment law firm Littler said their organisations had brought in rules on how employees should use generative AI.
Among the minority that have implemented a specific policy, many employers’ first impulse was to jump to a blanket ban. Companies including Apple, Samsung, Goldman Sachs, and Bank of America prohibited employees from using ChatGPT in 2023, according to Fortune, primarily due to data privacy concerns. But as AI models have become more popular and more powerful, and are increasingly seen as key to staying competitive in crowded industries, business leaders are becoming convinced that such prohibitive policies are not a sustainable solution.
“We started at ‘block’ but we didn’t want to maintain ‘block’,” says Jerry Geisler, chief information security officer at US retailer Walmart. “We just needed to give ourselves time to build . . . an internal environment to give people an alternative.”
Walmart prefers staff to use its in-house systems — including an AI-powered chatbot called ‘My Assistant’ for secure internal use — but does not ban its workers from using external platforms, so long as they do not include any private or proprietary information in their prompts.
It has, however, installed systems to monitor requests that workers submit to external chatbots on their corporate devices. Members of the security team will intercept unacceptable behaviour and “engage with that associate in real-time”, says Geisler.
He believes instituting a “non-punitive” policy is the best bet for keeping up with the ever-shifting landscape of AI. “We don’t want them to think they’re in trouble because security has made contact with them. We just want to say: ‘Hey, we observed this activity. Help us understand what you’re trying to do and we can likely get you to a better resource that will reduce the risk but still allow you to meet your objective.’
“I would say we see probably almost close to zero recidivism when we have those engagements,” he says.
Walmart is not alone in developing what Geisler calls an “internal gated playground” for employees to experiment with generative AI. Among other big companies, McKinsey has launched a chatbot called Lilli, Linklaters has started one called Laila, and JPMorgan Chase has rolled out the somewhat less creatively named “LLM Suite”.
Companies without the resources to develop their own tools face even more questions — from which services, if any, to procure for their staff, to the risk of growing dependent on external platforms.
Workers must only access generative AI using the company’s subscription to ChatGPT Pro.
“The worst-case scenario is that people use their own ChatGPT account and you lose control of what’s being put into that,” says Usher.
She acknowledges that her current approach of asking employees to request approval for each individual use of generative AI may not be sustainable as the technology becomes a more established part of people’s working processes. “We’re really happy to keep changing our policies,” she says.
Even with more permissive strategies, workers who have been privately using AI to accelerate their work may not be willing to share what they have learnt.
“They look like geniuses. They don’t want to not look like geniuses,”
A report published last month by workplace messaging service Slack found that almost half of desk workers would be uncomfortable telling their managers they had used generative AI — largely because, like Matt, they did not want to be seen as incompetent or lazy, or risk being accused of cheating.
Workers polled by Slack also said they feared that, if their bosses knew about productivity gains made using AI, they would face lay-offs, and that those who survived future cuts would simply be handed a heavier workload.
The shifting legal landscape can also make it tricky for companies to implement a long-term strategy for AI. Legislation is under development in regions including the US, EU, and UK but companies still have few answers about how the technology will affect intellectual property rights, or fit into existing data privacy and transparency regulations. “The uncertainty is just leading some firms to try to ban anything to do with AI,”
For those attempting to develop some kind of strategy, Rose Luckin, a professor at University College London’s Knowledge Lab, says the “first hurdle” is simply figuring out who within the organisation is best placed to investigate what kinds of AI will be useful for their work. Luckin says she has so far seen this task assigned to everyone from a chief executive to a trainee asked to research and design the rule book for how her more senior colleagues should be using AI. “It’s weird that it’s become my job,” she says. “I’m literally the most junior member of staff.”
Tomi Engdahl says:
UK creative sector seeks copyright protection from AI companies
https://www.ft.com/content/c697f9be-57ac-43e4-a351-4b096e2136d0
Tomi Engdahl says:
https://www.hackster.io/sologithu/a-federated-approach-to-train-and-deploy-embedded-ai-models-6e6508
Tomi Engdahl says:
https://www.reddit.com/r/embedded/comments/16nnn07/i_just_dont_understand_the_difference_between/
Problem one: you had good curiosity but you asked an idiot.
LLMs are very good at constructing coherent and plausible sentences. They are less good at being accurate or factual. They can be great tools for creativity.
AI prose is a great way to generate ways of saying something. It can conversationally explain things. It’s a super good tool that will continue to get better
But
AI facts need to be validated. AI analysis needs to be validated. AI opinions need to be validated.
Imagine a really smart confident friend explaining something to you. For fun? No problem! Writing a paper for school? You better verify it before you find out your friend thinks James I of England and James VI of Scotland were different people, etc.
Anyway. Just a side rant about coming in here with a primary opinion from an AI answer.
Tomi Engdahl says:
https://hackaday.com/2024/12/16/robot-air-hockey-player-predicts-your-next-move/
https://www.youtube.com/watch?v=VZdKkK-lPW4
https://www.instructables.com/Air-Hockey-Robot/
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
Google DeepMind unveils Veo 2, a video generation model that can create clips over two minutes long at resolutions up to 4K, available in VideoFX via a waitlist — Google DeepMind, Google’s flagship AI research lab, wants to beat OpenAI at the video generation game — and it might just, at least for a little while.
Google DeepMind unveils a new video model to rival Sora
https://techcrunch.com/2024/12/16/google-deepmind-unveils-a-new-video-model-to-rival-sora/
Google DeepMind, Google’s flagship AI research lab, wants to beat OpenAI at the video-generation game — and it might just, at least for a little while.
On Monday, DeepMind announced Veo 2, a next-gen video-generating AI and the successor to Veo, which powers a growing number of products across Google’s portfolio. Veo 2 can create two-minute-plus clips in resolutions up to 4k (4096 x 2160 pixels).
Notably, that’s 4x the resolution — and over 6x the duration — that OpenAI’s Sora can achieve.
It’s a theoretical advantage for now, granted. In Google’s experimental video creation tool, VideoFX, where Veo 2 is now exclusively available, videos are capped at 720p and eight seconds in length. (Sora can produce up to 1080p, 20-second-long clips.)
Tomi Engdahl says:
Jay Peters / The Verge:
Google debuts Whisk, an image generator that takes other images as prompts to suggest the subject, scene, and style, and uses the new, “latest” Imagen 3 version — Google has announced a new AI tool called Whisk that lets you generate images using other images as prompts instead of requiring a long text prompt.
Google’s Whisk AI generator will ‘remix’ the pictures you plug in
Whisk is Google’s ‘fun’ AI experiment that uses images for prompts and doesn’t need words.
https://www.theverge.com/2024/12/16/24322614/google-whisk-ai-generator-remix-pictures-plug-in
Tomi Engdahl says:
Rachel Metz / Bloomberg:
OpenAI makes ChatGPT Search available to logged-in users, not just subscribers, and says they can set ChatGPT Search as their browser’s default search engine — The product, called ChatGPT Search, will be available to any user who is logged in with an account for the chatbot across …
OpenAI Rolls Out ChatGPT Search Features to All Users
https://www.bloomberg.com/news/articles/2024-12-16/openai-rolls-out-chatgpt-search-features-to-all-users
Tomi Engdahl says:
Alex Heath / The Verge:
Q&A with Arm CEO Rene Haas on AI’s potential, working with Intel, TSMC, and Samsung, the Trump admin, China and IP licenses, the CHIPS Act, OpenAI, and more
Arm CEO Rene Haas on the AI chip race, Intel, and what Trump means for tech
The head of the ubiquitous chip design firm on the ‘breathtaking’ pace of AI.
https://www.theverge.com/24320687/arm-ceo-rene-haas-on-the-ai-chip-race-intel-and-what-trump-means-for-tech
Rene is a fascinating character in the tech industry. He’s worked at two of the most important chip companies in the world: first Nvidia, and now Arm. That means he’s had a front-row seat to how the industry has changed in the shift from desktop to mobile and how AI is now changing everything all over again.
Arm has been central to these shifts, as the company that designs, though doesn’t build, some of the most important computer chips in the world. Arm’s architectures are behind Apple’s custom iPhone and Mac chips, they’re in electric cars, and they’re powering AWS servers that host huge chunks of the internet.
What do you think about the efforts by the Biden administration with the CHIPS Act to bring more domestic production here? Do you think we need a Manhattan Project for AI, like what OpenAI has been pitching?
I don’t think we need a government, OpenAI, Manhattan-type project. I think the work that’s being done by OpenAI, Anthropic, or even the work in open source that’s being driven by Meta with Llama, we’re seeing fantastic innovation on that. Can you say the US is a leader in terms of foundation and frontier models? Absolutely. And that’s being done without government intervention. So, I don’t think it’s necessary with AI, personally.
What is that shift? Is it a new product? Is it a hardware breakthrough, a combination of both? Some kind of wearable?
Well, as I said, whether it’s a wearable, a PC, a phone, or a car, the chips that are being designed are just being stuffed with as much compute capability as possible to take advantage of what might be there. So it’s a bit of chicken-and-egg. You load up the hardware with as much capability hoping that the software lands on it, and the software is innovating at a very, very rapid pace. That intersection will come where suddenly, “Oh my gosh, I’ve shrunk the large language model down to a certain size. The chip that’s going in this tiny wearable now has enough memory to take advantage of that model. As a result, the magic takes over.” That will happen. It will be gradual and then sudden.
Are you bullish on all these AI wearables that people are working on? I know Arm is in the Meta Ray-Bans, for example, which I’m actually a big fan of. I think that form factor’s interesting. AR glasses, headsets — do you think that is a big market?
Yeah, I do. It’s interesting because in many of the markets that we have been involved in, whether it’s mainframes, PCs, mobile, wearables, or watches, some new form factor drives some new level of innovation. It’s hard to say what that next form factor looks like. I think it’s going to be more of a hybrid situation, whether it’s around glasses or around devices in your home that are more of a push device than a pull device. Instead of asking Alexa or asking Google Assistant what to do, you may have that information pushed to you. You may not want it pushed to you, but it could get pushed to you in such a way that it’s looking around corners for you. I think the form factor that comes in will be somewhat similar to what we’re seeing today, but you may see some of these devices get much more intelligent in terms of the push level.
Amazon just announced that it’s working on the largest data center for AI with Anthropic, and Arm is really getting into the data center business. What are you seeing there with the hyperscalers and their investments in AI?
The amount of investment is through the roof. You just have to look at the numbers of some of the folks who are in this industry. It’s a very interesting time because we’re still seeing an insatiable investment in training right now. Training is hugely compute intensive and power intensive, and that’s driving a lot of the growth. But the level of compute that will be required for inference is actually going to be much larger. I think it’ll be better than half, maybe 80 percent over time would be inference. But the amount of inference cases that will need to run are far larger than what we have today.
That’s why you’re seeing companies like CoreWeave, Oracle, and people who are not traditionally in this space now running AI cloud. Well, why is that? Because there’s just not enough capacity with the traditional large hyperscalers: the Amazons, the Metas, the Googles, the Microsofts. I think we’ll continue to see a changing of the landscape — maybe not a changing so much, but certainly opportunities for other players in terms of enabling and accessing this growth.
It’s very, very good for Arm because we’ve seen a very large increase in growth in market share for us in the data center. AWS, which builds its Graviton general-purpose devices based on Arm, was at re:Invent this week. It said that 50 percent of all new deployments are Graviton. So 50 percent of anything new at AWS is Arm, and that’s not going to decrease. That number’s just going to go up.
Tomi Engdahl says:
Sarah Perez / TechCrunch:
YouTube adds a tool to let creators authorize third parties, including Amazon, Anthropic, Apple, Meta, Microsoft, and OpenAI to train AI models on their videos
YouTube will now let creators opt in to third-party AI training
https://techcrunch.com/2024/12/16/youtube-will-let-creators-opt-out-into-third-party-ai-training/
YouTube on Monday announced it will give creators more choice over how third parties can use their content to train their AI models. Starting today, creators and rights holders will be able to flag for YouTube if they’re permitting specific third-party AI companies to train models on the creator’s content.
From a new setting within the creator dashboard, YouTube Studio, creators will be able to opt in to this new feature, if they choose. Here, they’ll see a list of 18 companies they can select as having authorization to train on the creator’s videos.
The companies on the initial list include AI21 Labs, Adobe, Amazon, Anthropic, Apple, ByteDance, Cohere, IBM, Meta, Microsoft, Nvidia, OpenAI, Perplexity, Pika Labs, Runway, Stability AI, and xAI. YouTube notes these companies were chosen because they’re building generative AI models and are likely sensible choices for a partnership with creators. However, creators will also be able to select a setting that says “All third-party companies,” which means they’re letting any third party train on their data — even if they’re not listed.
Eligible creators are those with access to the YouTube Studio Content Manager with an administrator role, the company also notes. They’ll also be able to view or change their third-party training settings within their YouTube Channel settings at any time.
Third Party AI Trainability on YouTube
https://support.google.com/youtube/thread/313644973/third-party-ai-trainability-on-youtube?hl=en&sjid=17931526359361196897-EU
We know creators want more control over how they partner with third-party companies to develop new generative AI tools. That’s why in September, we announced that we were exploring ways to give creators more choice over how third parties might use their YouTube content.
Over the next few days, we’ll be rolling out an update where creators and rights holders can choose to allow third-party companies to use their content to train AI models directly in Studio Settings under “Third-party training.”
To be eligible for AI training, a video must be allowed by the creator as well as the applicable rights holders. This could include owners of content detected by Content ID.
This update does not change our Terms of Service. Accessing creator content in unauthorized ways, such as unauthorized scraping, remains prohibited.
Tomi Engdahl says:
Suunta.ai is like ChatGPT, but a custom AI tailored to your company. It learns your company’s strategies, brand, and tone of voice, producing answers that are aligned with your business.
Want to hear more? Book a free demo and we’ll show you how Suunta.ai can take your organization forward!
Tomi Engdahl says:
Let’s Build a ChatGPT Smart Speaker
https://www.youtube.com/watch?v=vdptq224aYQ
Let’s build a ChatGPT smart speaker and see how well it performs. This is a quick fun weekend project that uses a Raspberry Pi 4 and some simple Python Code.
https://github.com/DaveBben/chatgpt-smart-speaker
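For a sense of what such a project involves, here is a minimal Python loop in the same spirit. It assumes the SpeechRecognition (with PyAudio), openai, and pyttsx3 packages and an OPENAI_API_KEY environment variable; the linked repository’s actual code may differ.

import speech_recognition as sr
import pyttsx3
from openai import OpenAI

client = OpenAI()            # reads OPENAI_API_KEY from the environment
recognizer = sr.Recognizer()
voice = pyttsx3.init()       # offline text-to-speech

while True:
    with sr.Microphone() as source:  # USB microphone on the Pi
        print("Listening...")
        audio = recognizer.listen(source)
    try:
        question = recognizer.recognize_google(audio)  # speech to text
    except sr.UnknownValueError:
        continue  # could not understand, listen again
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
    voice.say(reply)         # speak the answer
    voice.runAndWait()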
Tomi Engdahl says:
Building a SMART Home Assistant with ChatGPT and a Raspberry Pi
https://www.youtube.com/watch?v=1lATPsPnCrc
In this video, I’ll walk you through the process of building an AI-powered personal assistant using ChatGPT and a Raspberry Pi. This project is an exciting alternative to commercial smart home devices, offering a more customizable and hands-on experience…oh and it’s actually smart, not just a kitchen timer!
Tomi Engdahl says:
ChatGPT Creates Arduino Code
https://www.youtube.com/watch?v=MBX009FZB80
In this video, Dave uses ChatGPT 4o to create code to program an Arduino for a Digital Thermometer project. Join Dave as he walks you through this project.
Dave also used the DALL-E AI to create the robot programming an Arduino graphic.
Here is the prompt I used for the graphic:
Create a photo of chatgpt writing code to program an Arduino Uno
Here is the prompt I used to create the Arduino code:
Please create an arduino program for an arduino uno that measures temperature and displays that temperature in degrees F on a 2 line LCD display that has a serial interface using a backpack board. The temperature probe is a dallas semiconductor probe. also, please show a wiring diagram for the project and an explanation of how it works.
Tomi Engdahl says:
[Arduino Library] OpenAI’s API for ChatGPT
https://www.youtube.com/watch?v=IuvEtq73gyE
“Connecting ChatGPT in an Arduino environment is a significant accomplishment that can open up a range of possibilities for implementing conversational AI in various applications.
The ChatGPT Client library likely includes a set of functions and tools that enable communication between the ChatGPT model and the Arduino board.
It may include functions to initialize and configure the communication channels, load the ChatGPT model onto the Arduino board, and interact with the model using input from sensors or other external devices.
The integration of ChatGPT with Arduino can have many practical applications, including smart home devices, robotics, and interactive installations. For example, an Arduino-powered chatbot could provide voice-activated control for a smart home or provide natural language communication between a robot and humans.
Overall, The ChatGPT Client library represents an important step in bringing sophisticated AI capabilities to the Arduino platform and enabling new and innovative applications.” by ChatGPT
Tomi Engdahl says:
https://www.youtube.com/watch?v=IuvEtq73gyE
*Timestamps
0:00 – ChatGPT Client For Arduino
0:33 – Open AI (Pricing, API KEY)
1:41 – Library Install
2:37 – Ready To Go
4:25 – ChatGPT with MCU (Use Case [1] – More Additional Info)
5:57 – ChatGPT with MCU (Use Case [2] – Data Analysis)
6:41 – ChatGPT with MCU (Use Case [3] – Direct Use)
[ChatGPT Client For Arduino]
https://github.com/0015/ChatGPT_Client_For_Arduino
[OpenAI]
https://platform.openai.com/
Tomi Engdahl says:
It’s Time to Pay Attention to A.I. (ChatGPT and Beyond)
https://www.youtube.com/watch?v=0uQqMxXoNVs
Imagine being able to have a natural language conversation about anything with a computer. This is now possible and available to many people for the first time with ChatGPT. In this episode we take a look at the consequences and some interesting insights from OpenAI’s CEO Sam Altman.
Tomi Engdahl says:
Give your Arduino project a chatGPT AI brain – for ALMOST free
https://www.youtube.com/watch?v=dv9cyqVv0CI
This Arduino lesson was created by Programming Electronics Academy. We are an online education company who seeks to help people learn about electronics and programming through the ubiquitous Arduino development board.
**We have no affiliation whatsoever with Arduino LLC, other than we think they are cool.**
00:00 Introduction
00:12 Project Overview
00:35 Project Map
01:23 Actual Demo
02:09 We’ve made contact…
02:48 Existential AI fear…
03:08 Software Overview
04:08 Library Specifics
04:50 Earn 1 Gold Coin $$
05:19 Best Arduino Video yet…
Tomi Engdahl says:
Google’s New TPU Turns Raspberry Pi into a Supercomputer!
https://www.youtube.com/shorts/VRk_itxLZQI
Well that escalated quickly…
https://coral.ai/
Google for local AI
Coral helps you bring on-device AI application ideas from prototype to production. We offer a platform of hardware components, software tools, and pre-compiled models for building devices with local AI.
We’re making AI more accessible
We’re creating a flexible development system that makes it easy for you to grow embedded AI products into reality.
Tomi Engdahl says:
LEGO ChatGPT Arduino robot!
https://www.youtube.com/watch?v=6-yY51_YgYE
A robot with an Arduino Uno, ChatGPT, and LEGO bricks.
Tomi Engdahl says:
ChatGPT Arduino Programming Ultimate Test, ChatGPT for Arduino Projects
https://www.youtube.com/watch?v=Le2L5smHJHQ
ChatGPT Arduino Programming Ultimate Test, ChatGPT for Arduino Projects
https://www.electroniclinic.com/chatgpt-arduino-programming-ultimate-test-chatgpt-for-arduino-projects/#google_vignette
Tomi Engdahl says:
https://hackaday.com/2024/06/22/uncovering-chatgpt-usage-in-academic-papers-through-excess-vocabulary/
Tomi Engdahl says:
Audio Synthesizer Hooked Up With ChatGPT Interface
https://hackaday.com/2023/12/25/audio-synthesizer-hooked-up-with-chatgpt-interface/
ChatGPT is being asked to handle all kinds of weird tasks, from determining whether written text was created by an AI, to answering homework questions, and much more. It’s good at some of these tasks, and absolutely incapable of others. [Filipe dos Santos Branco] and [Edward Gu] had an out of the box idea, though. What if ChatGPT could do something musical?
They built a system that, at the press of a button, would query ChatGPT for a 10-note melody in a given musical key. Once the note sequence is generated by the large language model, it’s played out by a PWM-based synthesizer running on a Raspberry Pi Pico.
ChatGPT Pico Melodies
https://ece4760.github.io/Projects/Fall2023/elg227_fad28/FinalReport.html
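The playback side of such a build is simple. Here is a rough MicroPython sketch for the Pico that drives a speaker with PWM through a hard-coded 10-note melody of the kind ChatGPT might return; the pin number and (frequency, duration) note format are assumptions, not the project’s actual code.

from machine import Pin, PWM
import time

speaker = PWM(Pin(15))  # assumed: speaker or piezo on GP15

# A 10-note melody as (frequency in Hz, duration in ms) pairs, the kind of
# sequence you might parse out of ChatGPT's text reply.
melody = [(262, 300), (294, 300), (330, 300), (349, 300), (392, 300),
          (440, 300), (392, 300), (330, 300), (294, 300), (262, 600)]

for freq, ms in melody:
    speaker.freq(freq)
    speaker.duty_u16(32768)  # 50% duty cycle, audible square wave
    time.sleep_ms(ms)
    speaker.duty_u16(0)      # short silence between notes
    time.sleep_ms(50)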
Tomi Engdahl says:
ChatGPT And Other LLMs Produce Bull Excrement, Not Hallucinations
https://hackaday.com/2024/07/01/chatgpt-and-other-llms-produce-bull-excrement-not-hallucinations/
In the communications surrounding LLMs and popular interfaces like ChatGPT the term ‘hallucination’ is often used to reference false statements made in the output of these models. This implies that there is some coherency and an attempt by the LLM to be cognizant of the truth while also suffering moments of (mild) insanity. The LLM thus effectively is treated like a young child or a person suffering from disorders like Alzheimer’s, giving it agency in the process. That this is utter nonsense and patently incorrect is the subject of a treatise by [Michael Townsen Hicks] and colleagues, as published in Ethics and Information Technology.
Much of the distinction lies in the difference between a lie and bullshit, as so eloquently described in [Harry G. Frankfurt]’s 1986 essay and 2005 book On Bullshit. Whereas a lie is intended to deceive and cover up the truth, bullshitting is done with no regard for, or connection with, the truth. The bullshitting is only intended to serve the immediate situation, reminiscent of the worst of sound bite culture.
ChatGPT is bullshit
https://link.springer.com/article/10.1007/s10676-024-09775-5
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
Large language models (LLMs), programs which use reams of available text and probability calculations in order to create seemingly-human-produced writing, have become increasingly sophisticated and convincing over the last several years, to the point where some commentators suggest that we may now be approaching the creation of artificial general intelligence (see e.g. Knight, 2023 and Sarkar, 2023). Alongside worries about the rise of Skynet and the use of LLMs such as ChatGPT to replace work that could and should be done by humans, one line of inquiry concerns what exactly these programs are up to: in particular, there is a question about the nature and meaning of the text produced, and of its connection to truth. In this paper, we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense (Frankfurt, 2002, 2005). Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.
The structure of the paper is as follows: in the first section, we outline how ChatGPT and similar LLMs operate. Next, we consider the view that when they make factual errors, they are lying or hallucinating: that is, deliberately uttering falsehoods, or blamelessly uttering them on the basis of misleading input information. We argue that neither of these ways of thinking is accurate, insofar as both lying and hallucinating require some concern with the truth of their statements, whereas LLMs are simply not designed to accurately represent the way the world is, but rather to give the impression that this is what they’re doing. This, we suggest, is very close to at least one way that Frankfurt talks about bullshit. We draw a distinction between two sorts of bullshit, which we call ‘hard’ and ‘soft’ bullshit, where the former requires an active attempt to deceive the reader or listener as to the nature of the enterprise, and the latter only requires a lack of concern for truth. We argue that at minimum, the outputs of LLMs like ChatGPT are soft bullshit: bullshit – that is, speech or text produced without concern for its truth – that is produced without any intent to mislead the audience about the utterer’s attitude towards truth. We also suggest, more controversially, that ChatGPT may indeed produce hard bullshit: if we view it as having intentions (for example, in virtue of how it is designed), then the fact that it is designed to give the impression of concern for truth qualifies it as attempting to mislead the audience about its aims, goals, or agenda. So, with the caveat that the particular kind of bullshit ChatGPT outputs is dependent on particular views of mind or meaning, we conclude that it is appropriate to talk about ChatGPT-generated text as bullshit, and flag up why it matters that – rather than thinking of its untrue claims as lies or hallucinations – we call bullshit on ChatGPT.
Tomi Engdahl says:
Financial Times:
The UK opens a consultation on AI and copyright, and aims to offer an “opt out” copyright exception, force companies to open models to scrutiny, and more
https://www.ft.com/content/2ced1e1f-7d14-44d7-b188-464ddd69890d
Tomi Engdahl says:
See What ‘They’ See In Your Photos
https://hackaday.com/2024/12/17/see-what-they-see-in-your-photos/
Once upon a time, a computer could tell you virtually nothing about an image beyond its file format, size, and color palette. These days, powerful image recognition systems are a part of our everyday lives. They See Your Photos is a simple website that shows you just how much these systems can interpret from a regular photo.
The website simply takes your image submission, runs it through the Google Vision API, and spits back a description of the image. I tried it out with a photograph of myself and was pretty impressed with what the vision model saw.
https://theyseeyourphotos.com/
Your photos reveal a lot of private information.
In this experiment, we use Google Vision API to extract the story behind a single photo.
Vision AI
Extract insights from images, documents, and videos
https://cloud.google.com/vision?hl=en
Access advanced vision models via APIs to automate vision tasks, streamline analysis, and unlock actionable insights. Or build custom apps with no-code model training and low cost in a managed environment.
New customers get up to $300 in free credits to try Vision AI and other Google Cloud products.
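Trying the same call yourself takes only a few lines with the official google-cloud-vision Python client. A minimal sketch, assuming authenticated Google Cloud credentials and a local photo.jpg as a placeholder; note the site itself prompts the model for a narrative description rather than the plain labels shown here.

# pip install google-cloud-vision; auth via GOOGLE_APPLICATION_CREDENTIALS
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:              # placeholder filename
    image = vision.Image(content=f.read())

# Label detection: what objects and concepts the model sees in the photo
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")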
Tomi Engdahl says:
Back in 2014, the webcomic XKCD stated that it would be an inordinately difficult task for a computer to determine if a digital photo contained a bird. These days, a computer model can tell us what’s in a photo down to the intimate details, and even make amusing assertions as to the lives of the subjects in the image and their intentions. We’ve come a long way, to be sure.
https://hackaday.com/2024/12/17/see-what-they-see-in-your-photos/
https://xkcd.com/1425/
Tomi Engdahl says:
AWS now puts its AI powerhouse to work
https://etn.fi/index.php/13-news/16967-aws-tuo-nyt-ai-tehomyllynsae-tositoimiin
AWS does not intend to cede the training of large language models as a playground for Nvidia and AMD. A year ago the company introduced its new high-performance processor for AI servers, the Trainium2 chip. Now it is making the processor-based compute resources, or instances in AWS parlance, available to customers.
Trainium2 supports massive models with hundreds of billions or even trillions of parameters, which are often too large for a single server. This enables the development of extremely large generative AI applications.
The processor hums at the core of the Trainium2 Ultra Servers solution. Each such "ultra server" combines four Trainium2 instances, producing a highly efficient and scalable environment for training AI models.
Trainium2 and the Ultra Servers are expected to be available to users early next year (2025). The Trainium chips are manufactured by subsidiary Annapurna Labs, which AWS acquired in 2015.
Tomi Engdahl says:
https://www.raspberrypi.com/products/ai-camera/
https://www.raspberrypi.com/documentation/accessories/ai-camera.html
Tomi Engdahl says:
https://www.st.com/content/st_com/en/campaigns/simplifying-edge-ai-deployment-in-sensor-to-cloud-solutions-mems-mcecosys.html?ecmp=tt41646_gl_ps_nov2024&aw_kw=ai%20based%20sensors&aw_m=p&aw_c=22025921731&aw_tg=kwd-2383072184613&aw_gclid=EAIaIQobChMI9Ybv6vSuigMVZliRBR23dAZ0EAAYASAAEgJ69vD_BwE&gad_source=1&gclid=EAIaIQobChMI9Ybv6vSuigMVZliRBR23dAZ0EAAYASAAEgJ69vD_BwE
Ultralow-power 3-axis smart accelerometer with AI, antialiasing filter, and advanced digital features
https://www.st.com/en/mems-and-sensors/lis2dux12.html
The LIS2DUX12 is a smart, digital, 3-axis linear accelerometer whose MEMS and ASIC have been expressly designed to combine the lowest supply current possible with features such as always-on antialiasing filtering, a finite state machine (FSM) and machine learning core (MLC) with adaptive self-configuration (ASC).
The device has a dedicated internal engine to process motion and acceleration detection including free-fall, wake-up, single/double/triple-tap recognition, activity/inactivity, and 6D/4D orientation.
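The FSM and MLC are loaded with configurations generated by ST's desktop tooling, but basic access to the part is ordinary I2C. Below is a hedged MicroPython sketch of the identification step only; the I2C address, pin choice, and register address are assumptions to verify against the LIS2DUX12 datasheet.

# Read the WHO_AM_I register of a (presumed) LIS2DUX12 over I2C.
from machine import Pin, I2C

I2C_ADDR = 0x19    # assumption: ST accelerometers commonly sit at 0x18/0x19
WHO_AM_I = 0x0F    # assumption: the usual ST MEMS identification register

i2c = I2C(0, scl=Pin(5), sda=Pin(4))    # pin numbers depend on your board
chip_id = i2c.readfrom_mem(I2C_ADDR, WHO_AM_I, 1)[0]
print("WHO_AM_I = 0x%02X" % chip_id)    # compare with the datasheet value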
Tomi Engdahl says:
First AI-enhanced smart accelerometers from STMicroelectronics raise performance and efficiency for always-aware applications
https://newsroom.st.com/media-center/press-item.html/n4538.html
The integration of accelerometers and artificial intelligence makes IIoT operate smoothly
10 Mar 2020
https://www.arrow.com/en/research-and-events/articles/adi-the-integration-of-accelerometers-and-artificial-intelligence-makes-iiot-operate-smoothly
Tomi Engdahl says:
https://hailo.ai/lp/the-worlds-best-performing-ai-processor-for-edge-devices/?utm_term=ai%20acceleration&utm_campaign=generic_phrase_eu&utm_source=adwords&utm_medium=ppc&hsa_acc=6640877786&hsa_cam=20263195089&hsa_grp=151008117300&hsa_ad=661879767255&hsa_src=g&hsa_tgt=kwd-1457880491998&hsa_kw=ai%20acceleration&hsa_mt=p&hsa_net=adwords&hsa_ver=3&gad_source=1&gclid=EAIaIQobChMI9Ybv6vSuigMVZliRBR23dAZ0EAMYAiAAEgLb9fD_BwE
Tomi Engdahl says:
Kyle Wiggers / TechCrunch:
OpenAI adds o1 to its API, but only for certain developers to start, announces new versions of GPT-4o and GPT-4o mini as part of its Realtime API, and more — OpenAI is bringing o1, its “reasoning” AI model, to its API — but only for certain developers, to start.
OpenAI brings its o1 reasoning model to its API — for certain developers
https://techcrunch.com/2024/12/17/openai-brings-its-o1-reasoning-model-to-its-api-for-certain-developers/
OpenAI is bringing o1, its “reasoning” AI model, to its API — but only for certain developers, to start.
Starting Tuesday, o1 will begin rolling out to devs in OpenAI’s “tier 5” usage category, the company said. To qualify for tier 5, developers must have spent at least $1,000 with OpenAI and hold an account that is more than 30 days old, counting from their first successful payment.
O1 replaces the o1-preview model that was already available in the API.
Unlike most AI, reasoning models like o1 effectively fact-check themselves, which helps them avoid some of the pitfalls that normally trip up models. As a drawback, they often take longer to arrive at solutions.
They’re also quite pricey — in part because they require a lot of computing resources to run. OpenAI charges $15 for every ~750,000 words o1 analyzes and $60 for every ~750,000 words the model generates. That’s 6x the cost of OpenAI’s latest “non-reasoning” model, GPT-4o.
O1 in the OpenAI API is far more customizable than o1-preview, thanks to new features like function calling (which allows the model to be connected to external data), developer messages (which let devs instruct the model on tone and style), image analysis, and structured outputs. o1 also has an API parameter, “reasoning_effort”, that enables control over how long the model “thinks” before responding to a query.
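In code, those knobs map onto the Chat Completions call roughly as follows. A minimal sketch, assuming the official openai Python SDK, an OPENAI_API_KEY in the environment, and tier-5 access; the prompt strings are placeholders.

# pip install openai
from openai import OpenAI

client = OpenAI()    # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",
    reasoning_effort="low",    # "low" | "medium" | "high"
    messages=[
        # Reasoning models take "developer" messages in place of system prompts
        {"role": "developer", "content": "Answer in one short paragraph."},
        {"role": "user", "content": "Why do reasoning models cost more to run?"},
    ],
)
print(response.choices[0].message.content)

Dialing reasoning_effort down trades answer depth for latency and token cost, which matters at o1's per-token prices.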
Tomi Engdahl says:
Financial Times:
A profile of 114-year-old Japanese conglomerate Hitachi, whose market cap recently hit $100B after becoming an industrial software and hardware provider
‘Monetising data’: how Hitachi has soared with bets on AI future
https://www.ft.com/content/56eb8539-ed4d-45ce-bcc2-6774354091d2
Tomi Engdahl says:
Avram Piltch / Tom’s Hardware:
Nvidia unveils Jetson Orin Nano Super Developer Kit, a $249 compact AI development board that promises 67 TOPS, compared to 40 TOPS of the $499 last-gen kit — The Jetson Orin Nano Super Developer Kit will be available later this month and comes with 8GB of RAM.
Nvidia’s new $249 AI development board promises 67 TOPS at half the price of the previous 40 TOPS model
https://www.tomshardware.com/tech-industry/artificial-intelligence/nvidia-launches-new-usd249-ai-development-board-that-does-67-tops