Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,”
6,254 Comments
Tomi Engdahl says:
ChatGPT has accelerated email conversation hijacking
https://etn.fi/index.php/13-news/16351-chatgpt-kiihdytti-saehkoepostikeskustelujen-kaappauksia
Barracuda Networks, which develops cloud-based security solutions, has published a report examining email-related threats. According to the report, attacks involving business email compromise have increased over the past 12 months and now account for 10.6 percent of all email-based social engineering attacks, i.e. attacks that manipulate users. The influence of ChatGPT is visible in this trend.
Email conversation hijacking has increased by 70 percent since 2022, even though it is a resource-intensive approach for attackers. A Business Email Compromise (BEC) attack is a type of phishing attack in which a cybercriminal combines various social engineering techniques to get the victim to act in the desired way. The criminal may, for example, pose in email as a company executive in order to get an employee, customer, or vendor to transfer money to the wrong account, pay a fake invoice, or hand over confidential information. Traditional phishing attacks usually target a large number of employees, whereas BEC attacks are highly targeted.
Tomi Engdahl says:
AI predicts network congestion – already with 85% accuracy
https://www.uusiteknologia.fi/2024/06/19/tekoaly-ennustaa-verkkojen-ruuhkautumista-jo-85-tarkkuudella/
Finnish fiber-optic company Lounea and Creanord, which specializes in network performance and quality assurance, have run practical trials on using AI for predictive network analytics. The project used machine learning and AI to predict upcoming network congestion. According to the results, AI was able to identify up to 85 percent of future problems.
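The article does not describe the model Lounea and Creanord actually used, but the basic shape of predictive network analytics can be sketched: collect per-link measurements (utilization, latency, loss), label them with whether congestion followed, and train a classifier to score new measurements. A toy sketch with scikit-learn and synthetic data, purely to illustrate the idea:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy data: each row is [utilization %, avg latency ms, packet loss %] for a link;
# the label says whether congestion was observed shortly afterwards.
rng = np.random.default_rng(0)
X = rng.uniform([0, 1, 0], [100, 50, 5], size=(1000, 3))
y = ((X[:, 0] > 70) & (X[:, 1] > 20)).astype(int)  # synthetic labeling rule

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, pred):.2f}")

In a real deployment the features would come from the operator's monitoring pipeline and the labels from observed congestion events rather than a synthetic rule.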
Tomi Engdahl says:
Ericsson: AI is revolutionizing the development of networks and network processors
https://etn.fi/index.php/13-news/16347-ericsson-tekoaely-mullistaa-verkkojen-ja-verkkoprosessorien-kehityksen
Soon everyone will be using an average of 50 gigabytes of mobile data per month. This demands ever more performance from networks and their components. Ericsson is increasingly using AI in the development of next-generation networks and processors, says Dag Lindbo, who is responsible for the company's software technology development.
Lindbo presented the company's use of AI at the AWS Summit in Stockholm. The company has not previously spoken publicly about its research projects or its use of AI tools in them. The work is in fact quite new: Ericsson has started using more AI in its product development only in recent months.
As an example, Lindbo described a system that virtually simulates the behavior of radio signals on the company's Stockholm campus. The system simulates all the transmitted and reflected radio waves that the mobile network is made up of.
Tomi Engdahl says:
https://www.securityweek.com/facial-recognition-startup-clearview-ai-settles-privacy-suit/
Tomi Engdahl says:
Artificial Intelligence
AI Weights: Securing the Heart and Soft Underbelly of Artificial Intelligence
AI model weights govern outputs from the system, but altered or ‘poisoned’, they can make the output erroneous and, in extremis, useless and dangerous.
https://www.securityweek.com/ai-weights-securing-the-heart-and-soft-underbelly-of-artificial-intelligence/
AI model weights are simultaneously the heart and soft underbelly of AI systems. They govern the outputs from the system. But altered or ‘poisoned’, they can make the output erroneous and, in extremis, useless and even dangerous.
It is essential that these weights are protected from bad actors. As we move towards greater AI integration in business, model weights become the new corporate crown jewels; but this is a new and not fully understood threat surface that must be secured. This becomes more essential with future generations of complex gen-AI systems – so important that RAND, a non-profit research organization, has developed a security framework (PDF) for what is termed Frontier AI.
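Securing weights is largely an operational problem (access control, provenance, monitored storage), but one basic control is easy to show: verify the integrity of a weight file against a digest recorded at release time before loading it. A minimal sketch, where the file path and reference digest are placeholders:

import hashlib
from pathlib import Path

# Placeholder values: the digest would be published alongside the model release.
EXPECTED_SHA256 = "<known-good digest published with the model>"
WEIGHTS_PATH = Path("model/weights.safetensors")  # hypothetical checkpoint path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large checkpoints fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of_file(WEIGHTS_PATH)
    if actual != EXPECTED_SHA256:
        # Refuse to serve a model whose weights do not match the published digest.
        raise RuntimeError(f"Weight file digest mismatch: {actual}")
    print("Weight file integrity check passed.")

A hash check only catches tampering of the stored artifact; the RAND framework referenced above goes much further, into access controls and hardening of the training and serving pipeline.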
Tomi Engdahl says:
Artificial Intelligence
When Vendors Overstep – Identifying the AI You Don’t Need
AI models are nothing without vast data sets to train them and vendors will be increasingly tempted to harvest as much data as they can and answer any questions later.
https://www.securityweek.com/when-vendors-overstep-identifying-the-ai-you-dont-need/
Tomi Engdahl says:
Ivan Mehta / TechCrunch:
Meta launches its Meta AI chatbot in India with support for English only, a week after Google’s Gemini app on Android debuted with support for 9 local languages — After a few months of testing during the general elections, Meta is making its Llama-3-powered AI chatbot available to all users in India.
Meta makes its AI chatbot available to all users in India
https://techcrunch.com/2024/06/23/meta-makes-its-ai-chatbot-available-to-all-users-in-india/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cudGVjaG1lbWUuY29tLw&guce_referrer_sig=AQAAAJMZevVzp3QppLycVFq9mC8mKfDsE6GEexiHjfz1qpzSoosAyNScqQo4kwG2bTBQLDtqSbwsVloEnNt8XJzMPJ4l1cKFKNvfm-fM-QiEy7ze3m4wE8xysv1KWMznO3_y2Oqauulp13ARgChyYx3dmqGG4FLZp9WOBJznI31uGyS6
Tomi Engdahl says:
Politico:
Y Combinator and 140 AI startups sign a letter opposing California’s AI safety bill, saying the bill could harm California’s ability to retain its AI talent
‘Little Tech’ brings a big flex to Sacramento
https://www.politico.com/newsletters/california-playbook/2024/06/21/little-tech-brings-a-big-flex-to-sacramento-00164369
THE BUZZ: FIRST IN PLAYBOOK: One of Silicon Valley’s heaviest hitters is wading into the fight over California’s AI regulations.
Y Combinator — the venture capitalist firm that brought us Airbnb, Dropbox and DoorDash — today issued its opening salvo against a bill by state Sen. Scott Wiener that would require large AI models to undergo safety testing.
Wiener, a San Francisco Democrat whose district includes YC, says he’s proposing reasonable precautions for a powerful technology. But the tech leaders at Y Combinator disagree, and are joining a chorus of other companies and groups that say it will stifle California’s emerging marquee industry.
“This bill, as it stands, could gravely harm California’s ability to retain its AI talent and remain the location of choice for AI companies,” read the letter, which was signed by more than 140 AI startup founders.
It’s the first time the startup incubator, led by prominent SF tech denizen Garry Tan, has publicly weighed in on the bill. They argue it could hurt the many fledgling companies Y Combinator supports — about half of which are now AI-related.
While Wiener’s bill explicitly targets the biggest AI models, Y Combinator is leaning into the argument that it will hurt the little guys.
“This grassroots letter, which bubbled up from our community of founders in less than 48 hours, represents the voice of ‘Little Tech’ in California,” Luther Lowe, Y Combinator’s head of public policy, said in a statement.
Tomi Engdahl says:
Anton Shilov / Tom’s Hardware:
Qualcomm makes its AI models, which are optimized for 45 TOPS Hexagon NPU, available to developers for building AI-enabled apps for Snapdragon X Elite devices
Qualcomm makes its AI models available to app developers
https://www.tomshardware.com/tech-industry/artificial-intelligence/qualcomm-makes-its-ai-models-available-to-app-developers
Tomi Engdahl says:
AI Power Consumption: Rapidly Becoming Mission-Critical
https://www.forbes.com/sites/bethkindig/2024/06/20/ai-power-consumption-rapidly-becoming-mission-critical/
The IEA is projecting global electricity demand from AI, data centers and crypto to rise to 800 TWh in 2026 in its base case scenario, a nearly 75% increase from 460 TWh in 2022. The agency’s high case scenario calls for demand to more than double to 1,050 TWh.
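The percentages follow directly from the IEA figures quoted above; a quick check of the arithmetic:

# Quick check of the IEA figures quoted above (TWh).
base_2022 = 460
base_case_2026 = 800
high_case_2026 = 1050

print(f"Base case increase: {(base_case_2026 - base_2022) / base_2022:.1%}")  # ~73.9%, i.e. "nearly 75%"
print(f"High case multiple: {high_case_2026 / base_2022:.2f}x")               # ~2.28x, i.e. "more than double"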
Tomi Engdahl says:
Mia Sato / The Verge:
The RIAA sues AI music services Suno and Udio over alleged mass copyright infringement and claims they are trying to “hide the full scope of their infringement” — A group of record labels including the big three — Universal Music Group (UMG), Sony Music Entertainment …
Major record labels sue AI company behind ‘BBL Drizzy’
/ Citing ‘en masse’ copyright infringement, the RIAA and labels sued Udio, the company behind the hit track, and Suno, an AI music company with a Microsoft partnership.
https://www.theverge.com/2024/6/24/24184710/riaa-ai-lawsuit-suno-udio-copyright-umg-sony-warner
Tomi Engdahl says:
Mark Gurman / Bloomberg:
Sources: Apple rejected Meta’s overtures to integrate Llama into the iPhone months ago, in part because it sees Meta’s privacy practices as not stringent enough
Apple Spurned Idea of iPhone AI Partnership With Meta Months Ago
https://www.bloomberg.com/news/articles/2024-06-24/apple-spurned-idea-of-iphone-ai-partnership-with-meta-months-ago
Apple has been looking to forge agreements to use AI chatbots
Report indicated that Apple and Meta are in discussions
Tomi Engdahl says:
The Information:
Sources: Google has been developing a product to create customizable chatbots, which could be modeled on celebrities, and plans to launch it as soon as 2024
https://www.theinformation.com/articles/google-develops-challenger-to-metas-chatbots-and-character-ai
Tomi Engdahl says:
Cristina Criddle / Financial Times:
A Google DeepMind study of 200 observed incidents of misuse between January 2023 and March 2024 finds political deepfakes are the most common misuse of AI
https://www.ft.com/content/8d5bc867-c69d-44df-839f-d43c92785435
Tomi Engdahl says:
Jay Peters / The Verge:
Shopify unveils AI-focused updates for merchants, including suggested replies and product categorization, and launches its AI chatbot Sidekick in early access
Shopify’s AI ‘Sidekick’ chatbot for merchants is now in early access
/ The company is announcing a bunch of AI-focused updates.
https://www.theverge.com/2024/6/24/24183504/shopify-ai-sidekick-chatbot-merchants-summer-edition
Shopify’s new AI “Sidekick” chatbot is officially launching in early access, the company said as part of its “Summer ‘24 Edition” updates announced on Monday.
Sidekick, which the company first revealed last year, functions as a support chatbot for merchants, helping them do things like make discount codes, generate reports about their store, or suggest blog post ideas. Sure, it sounds like pretty typical AI chatbot stuff, but it seems like it could be a useful way to get help or assistance, especially if Shopify continues to tailor the chatbot to fit merchant needs.
Sidekick is live across thousands of Shopify stores, but it’s currently limited to merchants who have English stores in North America, Vanessa Lee, Shopify’s vice president of product, says in an interview with The Verge. If you don’t have Sidekick and want to try it, you can sign up for the waitlist. And the company wants to make it available in other languages and other locales, Lee says.
Tomi Engdahl says:
Testing Large Language Models For Circuit Board Design Aid
https://hackaday.com/2024/06/24/testing-large-language-models-for-circuit-board-design-aid/
Beyond bothering large language models (LLMs) with funny questions, there’s the general idea that they can act as supporting tools. Theoretically they should be able to assist with parsing and summarizing documents, while answering questions about e.g. electronic design. To test this assumption, [Duncan Haldane] employed three of the more highly praised LLMs to assist with circuit board design. These LLMs were GPT-4o (OpenAI), Claude 3 Opus (Anthropic) and Gemini 1.5 (Google).
The tasks ranged from ‘stupid questions’, like asking for the delay per unit length of a trace on a PCB, to finding parts for a design, to designing an entire circuit. Of these tasks, only the ‘parsing datasheets’ task could be considered successful. This involved uploading the datasheet for a component (nRF5340) and asking the LLM to make a symbol and footprint, in this case for the text-centric JITX format but KiCad/Altium should be possible too. This did require a few passes, as there were glitches and omissions in the generated footprint.
Testing Generative AI for Circuit Board Design
https://blog.jitx.com/jitx-corporate-blog/testing-generative-ai-for-circuit-board-design
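For context, the trace-delay "stupid question" mentioned above has a compact first-order answer that any of the models should reproduce: delay per unit length is sqrt(eps_eff)/c, where eps_eff is the effective dielectric constant of the board material. A small sketch using typical FR-4 values (the numbers are textbook approximations, not from the article):

import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def delay_ps_per_inch(eps_eff: float) -> float:
    """First-order propagation delay of a transmission line: t_pd = sqrt(eps_eff) / c."""
    seconds_per_meter = math.sqrt(eps_eff) / C
    return seconds_per_meter * 0.0254 * 1e12  # convert s/m to ps/inch

# Stripline in FR-4: fields are entirely inside the dielectric, so eps_eff ~= eps_r (~4.3).
print(f"Stripline  (eps_eff=4.3): {delay_ps_per_inch(4.3):.0f} ps/inch")  # ~176 ps/inch
# Microstrip: part of the field is in air, so eps_eff is lower (~3 is a common estimate).
print(f"Microstrip (eps_eff=3.0): {delay_ps_per_inch(3.0):.0f} ps/inch")  # ~147 ps/inch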
Tomi Engdahl says:
Uncovering ChatGPT Usage In Academic Papers Through Excess Vocabulary
https://hackaday.com/2024/06/22/uncovering-chatgpt-usage-in-academic-papers-through-excess-vocabulary/
That students these days love to use ChatGPT for assistance with reports and other writing tasks is hardly a secret, but in academia it’s becoming ever more prevalent as well. This raises the question of whether ChatGPT-assisted academic writing can be distinguished somehow. According to [Dmitry Kobak] and colleagues it can, with a strong sign of ChatGPT use being the presence of a lot of flowery excess vocabulary in the text. As detailed in their prepublication paper, the frequency of certain style words shows a remarkable change in the vocabulary used in the published works examined.
For their study they looked at over 14 million biomedical abstracts from 2010 to 2024 obtained via PubMed. These abstracts were then analyzed for word usage and frequency, which shows both natural increases in word frequency (e.g. from the SARS-CoV-2 pandemic and Ebola outbreak), as well as massive spikes in excess vocabulary that coincide with the public availability of ChatGPT and similar LLM-based tools.
Delving into ChatGPT usage in academic writing through excess vocabulary
https://arxiv.org/abs/2406.07016
Recent large language models (LLMs) can generate and revise text with human-level performance, and have been widely commercialized in systems like ChatGPT. These models come with clear limitations: they can produce inaccurate information, reinforce existing biases, and be easily misused. Yet, many scientists have been using them to assist their scholarly writing. How widespread is LLM usage in the academic literature currently? To answer this question, we use an unbiased, large-scale approach, free from any assumptions on academic LLM usage. We study vocabulary changes in 14 million PubMed abstracts from 2010-2024, and show how the appearance of LLMs led to an abrupt increase in the frequency of certain style words. Our analysis based on excess word usage suggests that at least 10% of 2024 abstracts were processed with LLMs. This lower bound differed across disciplines, countries, and journals, and was as high as 30% for some PubMed sub-corpora. We show that the appearance of LLM-based writing assistants has had an unprecedented impact in the scientific literature, surpassing the effect of major world events such as the Covid pandemic.
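The method boils down to comparing word frequencies in recent abstracts against a pre-LLM baseline and flagging the excess. A toy sketch of that comparison with pandas; the four example abstracts are made up, and the real study uses 14 million PubMed abstracts with a far more careful baseline:

from collections import Counter
import pandas as pd

# Hypothetical input: one row per abstract, with publication year and text.
abstracts = pd.DataFrame({
    "year": [2022, 2022, 2024, 2024],
    "text": [
        "we measured protein levels in mice",
        "patients were enrolled in a cohort study",
        "we delve into the intricate interplay of protein levels",
        "this study underscores the pivotal role of the cohort",
    ],
})

def word_frequencies(texts: pd.Series) -> Counter:
    """Fraction of abstracts in which each lowercase word appears at least once."""
    counts: Counter = Counter()
    for text in texts:
        counts.update(set(text.lower().split()))
    total = len(texts)
    return Counter({word: c / total for word, c in counts.items()})

baseline = word_frequencies(abstracts.loc[abstracts.year < 2023, "text"])
current = word_frequencies(abstracts.loc[abstracts.year == 2024, "text"])

# Excess frequency: how much more common a word is in 2024 than in the pre-LLM baseline.
excess = {word: current[word] - baseline.get(word, 0.0) for word in current}
for word, delta in sorted(excess.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{word:12s} excess frequency {delta:+.2f}")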
Tomi Engdahl says:
Simon Sharwood / The Register:
Alibaba creates an English-language version of Modelscope, an Amazon Bedrock-like service for building AI models launched in 2022, and claims to host 5K+ models
Alibaba Cloud unleashes thousands of Chinese AI models to the world
Like Bedrock or Azure OpenAI Studio – but with the added fun of geopolitical risk
https://www.theregister.com/2024/06/25/alibaba_modelscope_english_translation/
Alibaba Cloud has created an English language version of Modelscope, its models-as-service offering.
Modelscope was launched in 2022 and is broadly comparable to services like Amazon Web Services’ Bedrock or Microsoft’s Azure OpenAI Studio. All are essentially libraries of foundation models from a variety of sources, offered in a form designed to ease the job of plumbing them into apps built on their respective cloudy hosts.
Alibaba Cloud claims it has won over five million developers as users of Modelscope since its 2022 debut, and that it is home to over 5,000 models. The Chinese web giant’s own “Qwen” models are present, plus plenty from Chinese startups. The service also hosts what Alibaba Cloud describes as “over 1,500 high-quality Chinese-language datasets and an extensive range of toolkits that support data processing.”
While the service has previously been accessible from anywhere – it’s a cloud, after all – like its Western rivals Alibaba Cloud offers a text-heavy user experience. Your correspondent has plenty of experience applying online translation tools to Chinese text, and can report they regularly produce some odd artefacts that would not delight developers.
Tomi Engdahl says:
Bacon ice cream and nugget overload sees misfiring McDonald’s AI withdrawn
https://www.bbc.com/news/articles/c722gne7qngo
McDonald’s is removing artificial intelligence (AI) powered ordering technology from its drive-through restaurants in the US, after customers shared its comical mishaps online.
A trial of the system, which was developed by IBM and uses voice recognition software to process orders, was announced in 2019.
It has not proved entirely reliable, however, resulting in viral videos of bizarre misinterpreted orders ranging from bacon-topped ice cream to hundreds of dollars’ worth of chicken nuggets.
McDonald’s told franchisees it would remove the tech from the more than 100 restaurants it has been testing it in by the end of July, as first reported by trade publication Restaurant Business.
Tomi Engdahl says:
HOW DOES AI MAKE SEARCH ENGINE OPTIMIZATION MORE EFFECTIVE?
https://parcero.fi/blogi/miten-tekoaly-tehostaa-hakukoneoptimointia/
Tomi Engdahl says:
Is the future of manufacturing made with AI?
Deploying AI in manufacturing will enable companies of all sorts to maximize efficiency and lead the way in innovating the industry.
https://blog.3ds.com/topics/company-news/is-the-future-of-manufacturing-made-with-ai/
Tomi Engdahl says:
https://hackaday.com/2024/06/18/human-brains-can-tell-deepfake-voices-from-real-ones/
Tomi Engdahl says:
This 20,000HP AI-generated rocket engine took just two weeks to design and looks like HR Giger’s first attempt at designing a trumpet
https://www.pcgamer.com/hardware/this-20000hp-ai-generated-rocket-engine-took-just-two-weeks-to-design-and-looks-like-hr-gigers-first-attempt-at-designing-a-trumpet/
Tomi Engdahl says:
RIAA Sues Popular AI Song Generators for Training on Major Artists’ Work
The suits alleged Udio and Suno trained their AI models on the works of artists like Mariah Carey and The Beach Boys. Suno says it produces ‘completely new outputs.’
https://uk.pcmag.com/ai/152959/riaa-sues-popular-ai-song-generators-for-training-on-major-artists-work
Tomi Engdahl says:
Real or fake?
https://yle.fi/a/74-20092635
Tomi Engdahl says:
Ahead of GPT-5 launch, another test shows that people cannot distinguish ChatGPT from a human in a conversation test — is it a watershed moment for AI?
https://www.techradar.com/pro/ahead-of-the-launch-of-gpt-5-another-test-shows-that-people-cannot-distinguish-chatgpt-from-a-human-in-a-conversation-test-is-it-a-watershed-moment-for-ai
Tomi Engdahl says:
Devin Coldewey / TechCrunch:
The RIAA’s lawsuit against generative music startups will be the bloodbath AI needs and an object lesson in hubris for similarly unethical AI companies — Like many AI companies, Udio and Suno relied on large-scale theft to create their generative AI models.
The RIAA’s lawsuit against generative music startups will be the bloodbath AI needs
https://techcrunch.com/2024/06/25/the-riaas-lawsuit-against-generative-music-startups-will-be-the-bloodbath-ai-needs/
Like many AI companies, music generation startups Udio and Suno appear to have relied on unauthorized scrapes of copyrighted works in order to train their models. This is by their own and investors’ admission, as well as according to new lawsuits filed against them by music companies. If these suits go before a jury, the trial could be both a damaging exposé and a highly useful precedent for similarly sticky-fingered AI companies facing certain legal peril.
The lawsuits, filed by the Recording Industry Association of America (RIAA), put us all in the uncomfortable position of rooting for the RIAA, which for decades has been the bogeyman of digital media. I myself have received nastygrams from them! The case is simply that clear.
The gist of the two lawsuits, which are extremely similar in content, is that Suno and Udio (strictly speaking, Uncharted Labs doing business as Udio) indiscriminately pillaged more or less the entire history of recorded music to form datasets, which they then used to train a music-generating AI.
And here let us quickly note that these AIs don’t “generate” so much as match the user’s prompt to patterns from their training data and then attempt to complete that pattern. In a way, all these models do is perform covers or mashups of the songs they ingested.
That Suno and Udio did ingest said copyrighted data seems, for all intents and purposes (including legal ones), very likely. The companies’ leadership and investors have been unwisely loose-lipped about the copyright challenges of the space.
Tomi Engdahl says:
Samantha Cole / 404 Media:
Users of the popular AI chatbot platform Character.AI are reporting that their bots’ personalities changed a few days ago and aren’t as fun as they once were — The company denied making “major changes,” but users report noticeable differences in the quality of their chatbot conversations.
‘No Bot is Themselves Anymore:’ Character.ai Users Report Sudden Personality Changes to Chatbots
https://www.404media.co/character-ai-chatbot-changes-filters-roleplay/
Tomi Engdahl says:
The Information:
Sources: Huawei is struggling to ramp up production of its Ascend AI server chip, China’s leading alternative to Nvidia’s A100, due to a new US crackdown
https://www.theinformation.com/articles/a-new-u-s-crackdown-is-crippling-chinas-best-hope-to-rival-nvidia
Tomi Engdahl says:
Better code with an open model
https://etn.fi/index.php/13-news/16365-avoimella-mallilla-parempaa-koodia
An ever larger share of programming code is generated by AI. French company Mistral AI, known as a developer of open LLM models, has introduced a model built specifically for coders. According to tests, Codestral produces more accurate and more efficient code than earlier models.
Codestral is an open-weight AI model designed specifically for coders. It helps developers write and fix code through an API. Codestral can be used to build advanced AI applications for software developers.
The model has been trained on a diverse dataset covering more than 80 programming languages, including the most popular ones such as Python, Java, C, C++, JavaScript, and Bash. It also performs well with languages like Swift and Fortran.
Codestral saves developers time and effort in many ways. It can perform coding tasks, write tests, and complete any partial code, including in the middle of existing code (fill-in-the-middle). The result is fewer bugs.
Mistral AI has compared Codestral to other LLM models popular among coders, such as CodeLlama 70B, Deepseek Coder 33B, and Llama 3 70B.
Codestral integrates with popular environments such as VSCode and JetBrains. For research and evaluation use, the model can be downloaded free of charge from HuggingFace. More information here.
https://mistral.ai/news/codestral/
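For research or evaluation use, running the downloaded model through the Hugging Face transformers library looks roughly like the sketch below. The repository name is an assumption (check the HuggingFace and Mistral pages linked above for the exact id and the fill-in-the-middle prompt format), and a model of this size needs a correspondingly large GPU:

# A minimal code-completion sketch using the Hugging Face transformers library.
# The model id is an assumption; see Mistral's Codestral page for the exact repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Codestral-22B-v0.1"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Ask the model to continue a partial Python function.
prompt = 'def fibonacci(n: int) -> int:\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))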
Tomi Engdahl says:
Reddit has a warning for AI companies and other scrapers: play by our rules or get blocked. The company said in an update that it plans to update its Robots Exclusion Protocol (robots.txt file), which allows it to block automated scraping of its platform.
https://www.engadget.com/reddit-puts-ai-scrapers-on-notice-205734539.html?fbclid=IwZXh0bgNhZW0CMTEAAR2U5JUBiwFj6qVVHtY7HObdjLQiXk5ku-sj_r6mSsiD56sD0QWQE4wQ8Ug_aem_h3wOykyst_tu8h-Y51FFCA&guccounter=1&guce_referrer=aHR0cDovL20uZmFjZWJvb2suY29tLw&guce_referrer_sig=AQAAAM1c4eJEnpfxZup1OzNpneIDPCH0T_ib4GnKpjPVFhXy5VAiAOuMod-LjcdblWb28cM8uRzbORqRa4tYrI4sjuu8F8QrEcpI5TludYZcpJVSOD46FQd_e9rv3ex8fWLtvW92zV6_7L2tD_SmXk7WxZ7hBngVV5eylGhFRN0sRPTk
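The mechanism Reddit is updating is the Robots Exclusion Protocol: a plain-text robots.txt file that tells crawlers which user agents may fetch which paths. A well-behaved scraper checks it before fetching anything, which Python's standard library can do directly (the user-agent string and URL below are just examples):

from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt (Robots Exclusion Protocol).
parser = RobotFileParser()
parser.set_url("https://www.reddit.com/robots.txt")
parser.read()

# A well-behaved crawler checks permission before fetching a page.
user_agent = "ExampleResearchBot/1.0"  # example user-agent string
url = "https://www.reddit.com/r/technology/"
print(f"Allowed for {user_agent}: {parser.can_fetch(user_agent, url)}")

The protocol itself is advisory, which is why Reddit also says scrapers that ignore it will be blocked.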
Tomi Engdahl says:
AI threatens to wipe out these seven professions “entirely”
https://www.is.fi/taloussanomat/art-2000010523867.html
Demand remains strong in professions that are essential to the functioning of society, a recent study shows.
According to the recent study, seven occupational groups are at risk of disappearing entirely in the future as AI takes over the labor market.
The estimate comes from the IT company Cvapp, which offers online tools for, among other things, creating résumés.
The finding was that administrative professionals in particular, such as secretaries, payroll clerks, bookkeepers, and data entry workers, are at risk of losing their jobs to AI-powered software.
Bank teller and postal clerk jobs are also likely to disappear entirely. For example, there were only two job postings for postal clerk positions in the entire country.
Occupations related to technology and automation, such as various engineering roles, face only a slight risk of disappearing.
Tomi Engdahl says:
AI propelled Microsoft to a strong result
In after-hours trading, the software giant’s share price jumped about four percent.
https://www.is.fi/taloussanomat/art-2000010386876.html
Tomi Engdahl says:
Mikko Alasaarela was bullied at school and grew up “personality-disordered” – now millions around the world know him
Malicious algorithms are hijacking our children, warns an expert who helped develop them.
https://www.is.fi/taloussanomat/art-2000010347384.html
Tomi Engdahl says:
Llama.ttf Is AI, In A Font
https://hackaday.com/2024/06/26/llama-ttf-is-ai-in-a-font/
It’s a great joke, and like all great jokes it makes you think. [Søren Fuglede Jørgensen] managed to cram a 15 M parameter large language model into a completely valid TrueType font: llama.ttf. Being an LLM-in-a-font means that it’ll do its magic across applications – in your photo editor as well as in your text editor.
What magic, we hear you ask? Say you have some text, written in some non-AI-enabled font. Highlight that, and swap over to llama.ttf. The first thing it does is to change all “o” characters to “ø”s, just like [Søren]’s parents did with his name. But the real magic comes when you type a run of exclamation points. In any normal font, they’re just exclamation points, but llama.ttf replaces them with the output of the TinyStories LLM, run locally in the font. Switching back to another font reveals them to be exclamation points after all. Bønkers!
This is all made possible by the HarfBuzz font extensions library.
Something screams mischief about running arbitrary WASM while you type, but we remind you that since PostScript, font rendering engines have been able to run code in order to help with the formatting problem. This ability was inherited by PDF
https://fuglede.github.io/llama.ttf/
Tomi Engdahl says:
Jay Peters / The Verge:
Google adds support for 110 new languages in Translate, up from 133 languages before this update, its largest expansion ever, aided by the company’s PaLM 2 LLM
Google Translate is getting support for more than 110 new languages
/ That’s nearly double what it supported before.
https://www.theverge.com/2024/6/27/24186223/google-translate-110-new-languages
Tomi Engdahl says:
Ceylan Yeğinsu / New York Times:
Testing ChatGPT, Vacay, and Mindtrip to plan a Norway trip shows none were perfect, but the AI chatbots streamlined travel decisions complementing one another
My First Trip to Norway, With A.I. as a Guide
https://www.nytimes.com/2024/06/26/travel/norway-artficial-intelligence-planners.html?unlocked_article_code=1.2k0.ZMNf.aMN7ZO4Zr9l6&smid=url-share
Can artificial intelligence devise a bucket-list vacation that checks all the boxes: culture, nature, hotels and transportation? Our reporter put three virtual assistants to the test.
The assignment was clear: Test how well artificial intelligence could plan a trip to Norway, a place I’d never been. So I did none of my usual obsessive online research and instead asked three A.I. planners to create a four-day itinerary. None of them, alas, mentioned the saunas or the salmon.
Tomi Engdahl says:
An AI version of Al Michaels will deliver Olympic recaps on Peacock
/ NBC’s AI-generated voice could create nearly 7 million customized recaps during the Paris Olympics.
https://www.theverge.com/2024/6/26/24185774/olympics-ai-al-michaels-voice-recaps
Legendary sportscaster Al Michaels is going to give daily, personalized recaps of the Paris Olympics on Peacock — well, an AI-generated Al Michaels voice will. In practice, the effect is a lot like hearing a sports announcer’s voice in a video game like Madden, except it’s spitting out lines about real-life sports, which, in this case, means custom Olympics coverage.
Here’s how it works. To set up what NBC is calling “Your Daily Olympic Recap” in the Peacock app, you’ll provide your name (the AI voice can welcome the “majority” of people by their first name, NBC says in a press release) and pick up to three types of sports that are interesting to you and up to two types of highlights (for example, “Top Competition” or “Viral & Trending Moments”). Then, each morning, you’ll get your Michaels-led rundown.
Tomi Engdahl says:
Will Knight / Wired:
OpenAI details CriticGPT, a model based on GPT-4 to catch errors in ChatGPT’s code output, assisting human trainers in assessing and spotting errors — Having humans rate a language model’s outputs produced clever chatbots. OpenAI says adding AI to the loop could help make them even smarter and more reliable.
OpenAI Wants AI to Help Humans Train AI
https://www.wired.com/story/openai-rlhf-ai-training/
Tomi Engdahl says:
Michael Nuñez / VentureBeat:
Meta releases LLM Compiler, a family of models built on Code Llama specifically designed for code optimization tasks, available with 7B and 13B parameters — Meta has unveiled the Meta Large Language Model (LLM) Compiler, a suite of robust, open-source models designed to optimize code and revolutionize compiler design.
Meta’s LLM Compiler is the latest AI breakthrough to change the way we code
https://venturebeat.com/ai/metas-llm-compiler-is-the-latest-ai-breakthrough-to-change-the-way-we-code/
Meta has unveiled the Meta Large Language Model (LLM) Compiler, a suite of robust, open-source models designed to optimize code and revolutionize compiler design. This innovation has the potential to transform the way developers approach code optimization, making it faster, more efficient, and cost-effective.
The researchers behind LLM Compiler have addressed a significant gap in applying large language models to code and compiler optimization, which has been underexplored. By training the model on a massive corpus of 546 billion tokens of LLVM-IR and assembly code, they have enabled it to comprehend compiler intermediate representations, assembly language, and optimization techniques.
“LLM Compiler enhances the understanding of compiler intermediate representations (IRs), assembly language, and optimization techniques,” the researchers explain in their paper. This enhanced understanding allows the model to perform tasks previously reserved for human experts or specialized tools.
https://huggingface.co/collections/facebook/llm-compiler-667c5b05557fe99a9edd25cb
Tomi Engdahl says:
Ken Yeung / VentureBeat:
Google says Gemma 2 will be available to researchers and developers through Vertex AI from July as a new 9B-parameter model and the prior 27B-parameter model — Google says Gemma 2, its open lightweight model series, will be available to researchers and developers through Vertex AI starting next month.
Google’s Gemma 2 series launches with not one, but two lightweight model options—a 9B and 27B
https://venturebeat.com/ai/googles-gemma-2-series-launches-with-not-one-but-two-lightweight-model-options-a-9b-and-27b/
Google says Gemma 2, its open lightweight model series, will be available to researchers and developers through Vertex AI starting next month. But while it initially only contained a 27-billion parameter member, the company surprised us by also including a 9-billion one.
Gemma 2 was introduced back in May at Google I/O as the successor to Gemma’s 2-billion and 7-billion parameter models, which debuted in February. The next-gen Gemma model is designed to run on Nvidia’s latest GPUs or a single TPU host in Vertex AI. It targets developers who want to incorporate AI into their apps or edge devices such as smartphones, IoT devices, and personal computers.
Tomi Engdahl says:
Wired:
Amazon is investigating Perplexity over whether the AI search startup is violating AWS rules by scraping websites that attempted to prevent it from doing so — AWS hosted a server linked to the Bezos family- and Nvidia-backed search startup that appears to have been used to scrape the sites …
https://www.wired.com/story/aws-perplexity-bot-scraping-investigation/
Tomi Engdahl says:
Sarah Perez / TechCrunch:
In an interview, Mark Zuckerberg says there will not be “just one AI”, disparages closed-source AI competitors as trying to “create God”, and more — Riffing on what he sees for the future of AI, Meta CEO Mark Zuckerberg said in an interview published Thursday …
Zuckerberg disses closed-source AI competitors as trying to ‘create God’
https://techcrunch.com/2024/06/27/zuckerberg-disses-closed-source-ai-competitors-as-trying-to-create-god/
Riffing on what he sees for the future of AI, Meta CEO Mark Zuckerberg said in an interview published Thursday that he deeply believes that there will not be “just one AI.” Touting the value of open source to put AI tools into many people’s hands, Zuckerberg took a moment to disparage the efforts of unnamed competitors who he sees as less than open, adding that they seem to think they’re “creating God.”
“I don’t think that AI technology is a thing that should be kind of hoarded and … that one company gets to use it to build whatever central, single product that they’re building,” Zuckerberg said in a new YouTube interview with Kane Sutter (@Kallaway).
“I find it a pretty big turnoff when people in the tech industry … talk about building this ‘one true AI,’” he continued. “It’s almost as if they kind of think they’re creating God or something and … it’s just — that’s not what we’re doing,” he said. “I don’t think that’s how this plays out.”
“I get why, if you’re in some AI lab … you want to feel like what you’re doing is super important, right? … It’s like, ‘We’re building the one true thing for the future.’ But I just think, like, realistically, that’s not how stuff works, right?” Zuckerberg explained. “It’s not like there was one app on people’s phones that people use. There’s not one creator that people want all their content from. There’s not one business that people want to buy everything from.”
Tomi Engdahl says:
Ivan Mehta / TechCrunch:
Meta starts testing user-created AI chatbots on Instagram in the US; Mark Zuckerberg says the chatbots, made using Meta AI studio, will be clearly labeled as AI
Meta starts testing user-created AI chatbots on Instagram
https://techcrunch.com/2024/06/27/meta-starts-testing-user-created-ai-chatbots-on-instagram/
Tomi Engdahl says:
Ben Goggin / NBC News:
ChatGPT and Microsoft’s Copilot seemingly drew on conservative misinformation to repeat a false claim about CNN’s debate broadcast being on a “1-2 minute delay”
OpenAI’s ChatGPT and Microsoft’s Copilot repeated a false claim about the presidential debate
The AI programs seemingly drew on conservative misinformation posted just hours earlier to generate their answers.
https://www.nbcnews.com/tech/internet/openai-chatgpt-microsoft-copilot-fallse-claim-presidential-debate-rcna159353
Tomi Engdahl says:
The Information:
As tech companies grapple with the limited availability of Nvidia chips globally, Chinese companies like Alibaba face a tougher situation due to US restrictions
https://www.theinformation.com/articles/chinas-ai-sector-faces-fallout-from-u-s-chip-curbs
Tomi Engdahl says:
Ivan Mehta / TechCrunch:
Character.AI lets users talk to AI characters over calls in multiple languages, including English and Chinese, and says 3M+ users made 20M+ calls during testing
Character.AI now allows users to talk with AI avatars over calls
https://techcrunch.com/2024/06/27/character-ai-now-allows-users-to-talk-with-avatars-over-calls/
Tomi Engdahl says:
Stuck on code debugging? Fix bugs in seconds with Claude 3.5 Sonnet, our industry-leading AI that excels at coding.
https://claude.ai/login?returnTo=%2F%3F%253Futm_source%3Dmeta%26utm_medium%3Dpaidcpc%26utm_campaign%3DBamboo_US_Conversions_Prospecting_SignUps_FreeSignups_Broad_SonnetCodingDebugSeconds_ST_0_StuckOnCoddingDebugging_TryClaudeFree_TalkToClaude_ClaudeLogin%26utm_source%3Dfb%26utm_id%3D120211640674470387%26utm_content%3D120212838258100387%26utm_term%3D120211640674450387
Tomi Engdahl says:
A video of Kekkonen is spreading on social media: “I have been dead for almost 40 years”
In the video circulating on social media, the former president explains how AI can be misused. The video serves as a clever reminder of how advanced deepfakes are nowadays.
https://www.talouselama.fi/uutiset/somessa-leviaa-video-kekkosesta-mina-olen-ollut-kuolleena-kohta-40-vuotta/a9bcdf80-ad5c-46d1-b8d2-76dceb13fcbc
https://youtu.be/3hRtxGBgsMw?si=dbrL1zwqtoxBySnT