Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”
Tomi Engdahl says:
Training A Self-Driving Kart
https://hackaday.com/2024/12/21/__trashed-11/
There are certain tasks that humans perform every day that are notoriously difficult for computers to figure out. Identifying objects in pictures, for example, seems fairly straightforward, yet computers have only managed it with any semblance of accuracy in the last few years, and even then it can’t be done without huge amounts of computing resources. Similarly, driving a car is a surprisingly complex task that even companies promising full self-driving vehicles haven’t managed to fully solve despite working on the problem for over a decade now. [Austin] demonstrates this difficulty in his latest project, which adds self-driving capabilities to a small go-kart.
[Austin] had been working on this project at the local park but grew tired of packing up all his gear when he wanted to work on his machine-learning algorithms. So he took all the self-driving equipment off of the first kart and incorporated it into a smaller kart with a very small turning radius so he could develop it in his shop.
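At this scale, “adding self-driving” usually means behavioral cloning: record camera frames and the human’s steering while driving manually, then train a small network to imitate that mapping. [Austin]’s actual code isn’t shown here; the following is a minimal PyTorch sketch under that assumption, with all names and dimensions hypothetical:

```python
# Minimal behavioral-cloning sketch: learn a steering angle from camera frames.
# Assumes (frame, steering) pairs were recorded while driving the kart by hand.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),   # 64x64 -> 30x30
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),  # 30x30 -> 13x13
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),  # 13x13 -> 6x6
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 6 * 6, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh(),  # steering normalized to [-1, 1]
        )

    def forward(self, x):
        return self.head(self.features(x))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a dummy batch (replace with real logged data).
frames = torch.rand(8, 3, 64, 64)     # normalized camera frames
targets = torch.rand(8, 1) * 2 - 1    # recorded steering angles
loss = loss_fn(model(frames), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The appeal of the shop-sized kart is exactly this loop: each data-collection and retraining cycle happens indoors instead of at the park.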
Tomi Engdahl says:
https://softwaremind.com/blog/how-ai-is-impacting-embedded-software-development/
Tomi Engdahl says:
Artificial Intelligence
AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
February 2025: The Beginning of the EU AI Act Rollout
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new AI models based on GPAI (general-purpose AI) standards must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance – EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.
High-Risk Applications and the Challenge of AI Asset Inventories
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
The concept of Shadow IT – employees using unsanctioned tools – is not new, but generative AI tools have amplified the problem.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Understanding AI Use Cases: Beyond Tool Tracking
Where frameworks like the EU AI Act grow more complex is in their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing.
For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content. As AI usage expands, organizations must gain detailed visibility into these use cases to evaluate their risk profiles and ensure regulatory compliance.
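To make this concrete, here is a minimal sketch of an inventory entry that records use cases rather than just tools. The fields and the triage rule are illustrative assumptions, not anything prescribed by the Act:

```python
# Illustrative AI asset inventory entry: track not just the tool,
# but how it is used and what data it touches.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    tool: str            # e.g. "ChatGPT", or a SaaS app with embedded AI
    embedded: bool       # AI baked into a SaaS backend vs. a standalone tool
    task: str            # what the AI is actually doing
    data_shared: str     # "public", "internal", or "sensitive"

def risk_tier(uc: AIUseCase) -> str:
    """Toy triage rule: same tool, different use case, different risk."""
    if uc.data_shared == "sensitive":
        return "high"
    if uc.data_shared == "internal" or uc.embedded:
        return "medium"
    return "low"

# The article's example: one tool, two very different risk profiles.
summarizing = AIUseCase("ChatGPT", False, "summarize internal docs", "sensitive")
marketing = AIUseCase("ChatGPT", False, "draft marketing copy", "public")
print(risk_tier(summarizing))  # high
print(risk_tier(marketing))    # low
```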
The EU AI Act: Part of a Larger Governance Puzzle
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025. Regardless of geography, organizations will face growing pressure to understand, manage, and document their AI deployments.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments. While the challenges are significant, proactive organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
Three steps to success
With regulatory momentum accelerating globally, preparation today will be essential to avoid disruption tomorrow. Here’s what organizations can do now:
Establish an AI Committee – if you haven’t already, assemble a cross-functional team to tackle the challenge of AI. This should include a governance representative as well as security and business stakeholders
Get visibility – understand what your employees are using and what they are using it for
Train users to understand AI and its risks
Tomi Engdahl says:
AI – Implementing the Right Technology for the Right Use Case
Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the years when organizations start to focus on the specific use cases where AI is most productive and, more importantly, to understand how to implement guardrails and governance so that security teams view it less as a risk and more as a benefit to the organization.
Don’t get me wrong, organizations are already starting to adopt AI across a broad range of business divisions:
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third-party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. One of the best-known models for measuring technology maturity is the Gartner hype cycle. This tracks tools from the initial “innovation trigger”, through the “peak of inflated expectations” and the “trough of disillusionment”, followed by the “slope of enlightenment”, until finally reaching the “plateau of productivity”.
Taking this model, I liken AI to the hype we witnessed around cloud a decade ago, when everyone was rushing to migrate to “the cloud” – at the time a universal term that had different meanings to different people. “The cloud” went through all stages of the hype cycle, and we continue to find more specific use cases to focus on for the greatest productivity. Today, many are thinking about how to “right-size” their cloud to their environment, in some cases moving part of their infrastructure back to on-premises or to hybrid/multi-cloud models.
Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization. This is a theme that also emerged as cybersecurity automation matured: the need to identify the right use case for the technology, rather than try to apply it across the board.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
Understanding what data is being shared
This is a fundamental issue for security leaders: identifying who is using AI tools, what they are using AI for, what company data they are sharing with external tools, whether those tools are secure, and whether they are as innocent as they seem. For example, are the GenAI code assistants that developers use returning bad code and introducing a security risk? Then there are threats like Dark AI (the malicious use of AI technologies to facilitate cyber-attacks), hallucinations, and data poisoning, where malicious data is fed in to manipulate models and could result in bad decisions being made.
To this point, a survey (PDF) of Chief Information Security Officers (CISOs) by Splunk found that 70% believe generative AI could give cyber adversaries more opportunities to commit attacks. Certainly, the prevailing opinion is that AI is benefiting attackers more than defenders.
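One practical control for the data-sharing concern above is a pre-flight screen that checks prompts for obviously sensitive content before they leave for an external tool. A minimal sketch, with illustrative patterns standing in for a real DLP engine:

```python
# Minimal pre-flight screen for prompts sent to external GenAI tools.
# Patterns are illustrative; a real deployment would use a DLP engine.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal only)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

prompt = "Summarize this CONFIDENTIAL report and email bob@example.com"
hits = screen_prompt(prompt)
if hits:
    print(f"Blocked: prompt matched {hits}")  # Blocked: ['email', 'internal_marker']
else:
    print("Prompt allowed")
```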
Finding the right balance
Therefore, our approach to AI is focused on taking a balanced view. AI certainly won’t solve every problem, and it should be used like automation: as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight be maintained. So it is about finding the right balance, using the technology in the right scenarios for the right use cases, and getting the outcomes that you need.
Looking to the future, as companies better understand the use cases for AI, it will evolve from regular GenAI to incorporate additional technologies as well. To date, generative AI applications have overwhelmingly focused on the divergence of information. That is, they create new content based on a set of instructions. As AI evolves, we believe we will see more applications of AI that converge information. In other words, they will show us less content by synthesizing the information available, which industry pundits are aptly calling “SynthAI”. This will bring a step-function change to the value that AI can deliver – I’ll discuss this in a future article.
https://www.splunk.com/en_us/pdfs/gated/ebooks/the-ciso-report.pdf
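To make the divergence/convergence distinction above concrete: a generative model expands a short instruction into new content, while a convergent system compresses many inputs into fewer, denser outputs. A toy extractive summarizer illustrates the convergent direction; this is purely an illustration, not how a production “SynthAI” system would be built:

```python
# Toy "convergent" AI: reduce many sentences to the few that carry
# the most frequently used content words (extractive summarization).
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if len(w) > 3)  # crude short-word filter

    def score(s: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in top)  # keep original order

text = ("AI adoption is growing. Security teams worry about AI risk. "
        "Governance makes AI adoption safer. Cats are unrelated.")
print(summarize(text))
# -> "Security teams worry about AI risk. Governance makes AI adoption safer."
```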
Tomi Engdahl says:
Parmy Olson / Bloomberg:
As OpenAI, Anthropic, and Google reportedly see diminishing AI training returns, a break from the market hype could be useful, just as with previous innovations
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity
AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://ai.meta.com/open/
https://hamming.ai/
Tomi Engdahl says:
https://hackaday.com/2024/11/20/an-animated-walkthrough-of-how-large-language-models-work/
Tomi Engdahl says:
Siemens brings artificial intelligence to industrial engineering
https://etn.fi/index.php/13-news/16840-siemens-toi-tekoaelyn-teollisuuden-suunnitteluun
Generative AI is rapidly making its way into the industrial world as well. Siemens Industrial Copilot is the industrial world’s first software product powered by generative AI and the only copilot on the market that writes code for automation technology.
Now the German thyssenkrupp Automation Engineering is deploying Siemens Industrial Copilot globally. The generative-AI-based software product speeds up engineering and the operation of the production line. Thyssenkrupp will roll out Siemens’ Copilot at large scale over the coming year.
A recent report from research and consulting firm Gartner says that by 2028, 75 percent of software developers will regularly use generative AI for code generation, up from less than 10 percent at the beginning of 2023.
Thyssenkrupp, which builds machines and factories, is integrating Siemens’ AI assistant into a machine used for quality inspection of electric-vehicle batteries. The AI assistant helps develop structured control language code for programmable logic controllers, integrates the code into the software environment, and creates visualizations. This lets engineering teams cut down on repetitive, monotonous tasks such as automating data management and configuring sensors, so teams can work more efficiently and optimize their processes.
Tomi Engdahl says:
Open Source AI
https://ai.meta.com/open/
400M+ downloads of Llama, Meta’s open source AI model.
When things are open source, people have equal access – and when people have equal access, everyone benefits.
Tomi Engdahl says:
Charles Q. Choi / IEEE Spectrum:
Researchers detail RoboPAIR, an algorithm designed to induce robots that rely on LLMs for their inputs to ignore the models’ safeguards without exception
It’s Surprisingly Easy to Jailbreak LLM-Driven Robots
Researchers induced bots to ignore their safeguards without exception
https://spectrum.ieee.org/jailbreak-llm
AI chatbots such as ChatGPT and other applications powered by large language models (LLMs) have exploded in popularity, leading a number of companies to explore LLM-driven robots. However, a new study now reveals an automated way to hack into such machines with 100 percent success. By circumventing safety guardrails, researchers could manipulate self-driving systems into colliding with pedestrians and robot dogs into hunting for harmful places to detonate bombs.
Essentially, LLMs are supercharged versions of the autocomplete feature that smartphones use to predict the rest of a word that a person is typing. LLMs trained to analyze text, images, and audio can make personalized travel recommendations, devise recipes from a picture of a refrigerator’s contents, and help generate websites.
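That “supercharged autocomplete” framing can be demonstrated directly: a language model just assigns probabilities to the next token, over and over. A minimal sketch using the Hugging Face transformers library and the small, public GPT-2 model (chosen for size, not capability):

```python
# LLMs as autocomplete: repeatedly pick a likely next token.
# Uses the small public GPT-2 model via Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The robot picked up the"
for _ in range(5):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    next_id = int(logits[0, -1].argmax())  # greedy: most probable next token
    text += tokenizer.decode(next_id)
print(text)
```

Robot-control stacks add a translation layer on top of this loop, mapping generated text to robot commands, which is exactly the surface the jailbreak attacks target.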
The extraordinary ability of LLMs to process text has spurred a number of companies to use the AI systems to help control robots through voice commands, translating prompts from users into code the robots can run. For instance, Boston Dynamics’ robot dog Spot, now integrated with OpenAI’s ChatGPT, can act as a tour guide. Figure’s humanoid robots and Unitree’s Go2 robot dog are similarly equipped with ChatGPT.
However, a group of scientists has recently identified a host of security vulnerabilities for LLMs. So-called jailbreaking attacks discover ways to develop prompts that can bypass LLM safeguards and fool the AI systems into generating unwanted content, such as instructions for building bombs, recipes for synthesizing illegal drugs, and guides for defrauding charities.
Tomi Engdahl says:
Michael Nuñez / VentureBeat:
FrontierMath, a new benchmark for evaluating AI models’ advanced mathematical reasoning, shows current AI systems solve less than 2% of its challenging problems — Artificial intelligence systems may be good at generating text, recognizing images, and even solving basic math problems …
AI’s math problem: FrontierMath benchmark shows how far technology still has to go
https://venturebeat.com/ai/ais-math-problem-frontiermath-benchmark-shows-how-far-technology-still-has-to-go/
Tomi Engdahl says:
Etla: Generative AI has increased demand for labor
https://www.uusiteknologia.fi/2024/11/19/etla-generatiivinen-tekoaly-on-lisannyt-tyon-kysyntaa/
Generative AI has, at least so far, not caused negative labor-market effects in Finland, according to a recent report from the Research Institute of the Finnish Economy (Etla). The results instead suggest that AI has raised labor productivity and, through it, the demand for labor. Earnings growth in AI-exposed occupations has also been faster than in non-exposed occupations.
The researchers caution in advance that the results may change as generative AI develops and its use expands. Still, the results show that earnings growth has been faster in exposed occupations than in non-exposed ones. “By contrast, there have been no differences in employment growth between these groups,” says Etla research director Antti Kauhanen.
Tomi Engdahl says:
Bloomberg:
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets
OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Tomi Engdahl says:
Michael Nuñez / VentureBeat:
Google DeepMind releases AlphaFold 3′s source code and model weights for academic use, which could accelerate scientific discovery and drug development — Google DeepMind has unexpectedly released the source code and model weights of AlphaFold 3 for academic use, marking a significant advance …
Google DeepMind open-sources AlphaFold 3, ushering in a new era for drug discovery and molecular biology
https://venturebeat.com/ai/google-deepmind-open-sources-alphafold-3-ushering-in-a-new-era-for-drug-discovery-and-molecular-biology/
Google DeepMind has unexpectedly released the source code and model weights of AlphaFold 3 for academic use, marking a significant advance that could accelerate scientific discovery and drug development. The surprise announcement comes just weeks after the system’s creators, Demis Hassabis and John Jumper, were awarded the 2024 Nobel Prize in Chemistry for their work on protein structure prediction.
The true test of AlphaFold 3 lies ahead in its practical impact on scientific discovery and human health. As researchers worldwide begin using this powerful tool, we may see faster progress in understanding and treating disease than ever before.
https://deepmind.google/
https://github.com/google-deepmind/alphafold3
Major AlphaFold upgrade offers boost for drug discovery
Latest version of the AI models how proteins interact with other molecules — but DeepMind restricts access to the tool.
https://www.nature.com/articles/d41586-024-01383-z
Tomi Engdahl says:
Simon Willison / Simon Willison’s Weblog:
A look at Qwen2.5-Coder-32B-Instruct, which Alibaba claims can match GPT-4o coding capabilities and is small enough to run on a MacBook Pro M2 with 64GB of RAM — There’s a whole lot of buzz around the new Qwen2.5-Coder Series of open source (Apache 2.0 licensed) LLM releases from Alibaba’s Qwen research team.
https://simonwillison.net/2024/Nov/12/qwen25-coder/
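Willison ran the 32B model through Apple-silicon tooling; a generic way to try the same series is via transformers with one of the smaller variants, since the 32B model needs far more memory than most laptops have. A sketch, assuming the published Qwen model IDs:

```python
# Try a small Qwen2.5-Coder variant locally via transformers.
# The 32B model discussed in the post needs much more RAM/VRAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-1.5B-Instruct"  # small sibling of the 32B model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```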
Tomi Engdahl says:
Google unveils Gemini 2.0 Flash Thinking, an AI model that shows its “train of thought”
https://mobiili.fi/2024/12/20/google-julkisti-ajatuskulkunsa-esittavan-gemini-2-0-flash-thinking-tekoalymallin/
Google has continued its run of new AI model announcements by introducing the Gemini 2.0 Flash Thinking model.
As its name suggests, Gemini 2.0 Flash Thinking is a more “thinking” AI model. Its answers take slightly longer, as the model considers the request, and its response to it, more carefully and from multiple angles.
As a special feature, Gemini 2.0 Flash Thinking can show its chain of thought as it answers the problems posed to it: the model thinks out loud, as Google puts it. Gemini 2.0 Flash Thinking is fundamentally built on the speed and performance of Gemini 2.0 Flash, but according to Google, “thinking out loud” leads to better performance on answers that require more reasoning, such as more complex coding and math questions.
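A sketch of calling such a model from Python with Google’s google-generativeai SDK; the experimental model name below is the announcement-era ID and may have changed since:

```python
# Ask the experimental "thinking" Gemini model a reasoning-heavy question.
# Model name is the announcement-era experimental ID and may have changed.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")
response = model.generate_content(
    "A bat and a ball cost 1.10 euros together; the bat costs 1 euro "
    "more than the ball. What does the ball cost?"
)
print(response.text)  # the model reasons step by step before answering
```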
Tomi Engdahl says:
By Pradeep Viswanathan – OpenAI is changing its corporate structure again to raise more funds and better support its mission. It plans to transform into a Public Benefit Corporation (PBC) while retaining its non-profit arm.
OpenAI will be transforming its for-profit into a Delaware Public Benefit Corporation
https://www.neowin.net/news/openai-will-be-transforming-its-for-profit-into-a-delaware-public-benefit-corporation/?fbclid=IwZXh0bgNhZW0CMTEAAR0a1aNlXzc0iZqJzOenAwIDxAzMI1u57ebC_Ef_qSsjJ-gcM_3_JVLEhrY_aem_l7QxSemZKzjSXC-8FRVJGA
After months of speculation, OpenAI today officially announced that it is changing its corporate structure to best support its mission. Back in 2015, OpenAI started as a nonprofit. Since it faced difficulties in raising funds, OpenAI announced its “capped profit” structure in 2019.
The OpenAI Nonprofit remained as before, and its board was responsible for the overall governance of all OpenAI activities. The new for-profit subsidiary issued equity to raise capital and was responsible for research, development, commercialization, and other core operations. Now, OpenAI again wants to raise billions to continue pursuing the mission. However, investors are expecting changes to OpenAI’s corporate structure.
OpenAI’s board is now working with outside legal and financial advisors on how best to structure OpenAI. It is considering transforming OpenAI into a Delaware Public Benefit Corporation (PBC) issuing ordinary shares of stock, with the following objectives:
Tomi Engdahl says:
Should chatbots have rights – and should we care?
Some prominent researchers argue that we should pay heed to the welfare of AIs. Are they right, wonders Alex Wilkins
https://www.newscientist.com/article/mg26435233-300-should-chatbots-have-rights-and-should-we-care/?utm_term=Autofeed&utm_campaign=echobox&utm_medium=social&utm_source=Facebook&fbclid=IwZXh0bgNhZW0CMTEAAR0MbXrNe3-NsOD5ZjmzizU0BVvlyjRga-JyoiwGvrEjf-_2PATUc4vwonk_aem_01zHbe0g1EfYVVGpf5lcVA#Echobox=1735225509
Is your chatbot in distress? Many people, myself included, would scoff at this question. It is just computer code, optimised to predict the next word in a sequence. But some philosophers and psychologists say that we shouldn’t be so quick to dismiss this question, perhaps even granting chatbots their own rights. They might have a point.
In a recent academic paper, “Taking AI Welfare Seriously”, one group of researchers argue for a precautionary approach to how we treat AIs. They don’t look to answer the question of whether an AI is conscious or not, but say we should start…
Tomi Engdahl says:
AI is more than ChatGPT: “the crux is how you put it to use in the business”
Kauko Ollila, 23.12.2024 06:05
Finnish companies invest in AI in the name of efficiency but often overlook its strategic potential, says Jaakko Lehtinen, head of automation and technology at Capgemini.
https://www.tivi.fi/uutiset/tekoaly-on-enemman-kuin-chatgpt-pihvi-on-siina-miten-saadaan-liiketoiminnan-kayttoon/683939df-2276-4f49-8b8c-6da9fcc30565
The giant language model is not the crux. “Rather, it is the quality, manageability and consistency of the data,” stresses Jaakko Lehtinen, head of automation and technology at Capgemini’s Sogeti business unit.
IT consultancy Capgemini has also had a hand in the AI portion of the City of Tampere’s intranet renewal project. “We have made the 2010s
Tomi Engdahl says:
You can now chat with ChatGPT on WhatsApp
OpenAI this week launched a new service that lets you send messages to ChatGPT on WhatsApp.
https://yle.fi/a/74-20133091
You can now also talk to the AI tool ChatGPT in the WhatsApp messaging app.
ChatGPT’s developer OpenAI launched the new service this week.
You can start a conversation with ChatGPT by adding ChatGPT’s number to your phone’s contacts. The number is 1 800 242 8478.
With US and Canadian phone plans you can even make voice calls to ChatGPT. In Finland, you will have to settle for messaging.