3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,954 Comments

  1. Tomi Engdahl says:

    ChatGPT’s new personalization feature could save users a lot of time
    Beta feature allows ChatGPT to remember key details with less prompt repetition.
    https://arstechnica.com/information-technology/2023/07/new-chatgpt-feature-remembers-custom-instructions-between-sessions/

    On Thursday, OpenAI announced a new beta feature for ChatGPT that allows users to provide custom instructions that the chatbot will consider with every submission. The goal is to prevent users from having to repeat common instructions between chat sessions.

    The feature is currently available in beta for ChatGPT Plus subscription members, but OpenAI says it will extend availability to all users over the coming weeks. As of this writing, the feature is not yet available in the UK and EU.

    The Custom Instructions feature functions by letting users set their individual preferences or requirements that the AI model will then consider when generating responses. Instead of starting each conversation anew, ChatGPT can now be instructed to remember specific user preferences across multiple interactions.
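
    The underlying mechanic is simple to picture in API terms: a fixed block of instructions that rides along with every request. The sketch below is only an illustration of that idea, not OpenAI’s implementation of Custom Instructions; it uses the 2023-era openai-python interface, and the model name and instruction text are made-up placeholders.

    ```python
    # Hypothetical sketch: emulating "custom instructions" by prepending a fixed system
    # message to every chat completion call. Assumes OPENAI_API_KEY is set in the
    # environment and uses the pre-1.0 openai-python API style current in 2023.
    import openai

    CUSTOM_INSTRUCTIONS = (
        "I'm an embedded-systems engineer. Keep answers short, prefer concrete examples, "
        "and always mention power-consumption trade-offs."
    )

    def ask(question: str) -> str:
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                # The fixed preferences travel with every submission, so the user never
                # has to repeat them at the start of a new chat session.
                {"role": "system", "content": CUSTOM_INSTRUCTIONS},
                {"role": "user", "content": question},
            ],
        )
        return response["choices"][0]["message"]["content"]

    print(ask("How should I debounce a push button?"))
    ```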

    Reply
  2. Tomi Engdahl says:

    Slash is horrified by where AI-generated music is heading – “Things could get out of hand in an instant”
    The guitarist thinks artificial intelligence has its pros and cons.
    https://www.soundi.fi/uutiset/slashia-hirvittaa-mihin-tekoalyn-avulla-luotu-musiikki-on-menossa-homma-saattaa-lahtea-kasista-hetkessa/

    AI-generated music has proliferated, and its quality has improved dramatically in recent years. AI has been used, for example, to write “Nick Cave-style” lyrics and to make Lana Del Rey sing Nine Inch Nails in the style of Johnny Cash. The possibilities are limitless.

    “AI is an interesting invention, and you can do countless cool and interesting things with it. But I’m afraid of where it is still headed.”

    The straw that broke the camel’s back for the guitarist was hearing an AI-made version of his own band Velvet Revolver’s song Fall to Pieces, sung instead by Guns N’ Roses frontman Axl Rose.

    “At that point I knew things could get out of hand in an instant.”

    Reply
  3. Tomi Engdahl says:

    HACKERS CREATE CHATGPT RIVAL WITH NO ETHICAL LIMITS
    by Sharon Adarlo
    https://futurism.com/the-byte/chatgpt-rival-no-guardrails

    Reply
  4. Tomi Engdahl says:

    Influencer gains thousands of Instagram followers who don’t realize she’s AI
    https://supercarblondie.com/milla-sofia-influencer-created-by-ai/

    Milla Sofia is a 19-year-old girl living in Helsinki, and she has more than 92,000 followers on TikTok, 30,000 on Instagram, and another 8,000 on Twitter (now X).

    And most of them are (not so subtly) lusting after her.

    The young influencer is extremely active online, posting bikini shots from different travel hot spots right across the globe.

    She gets the reactions and engagement any social media influencer craves too.

    “You should enter the Miss World competition, I’m sure you’ll win,” one man said.

    “Wow, you look fabulous, you’re a beautiful young woman,” another said.

    “You are the perfect female,” said a third.

    Unfortunately for these men, Milla Sofia is unobtainable, and not in the way Hollywood’s A-listers are.

    Milla Sofia is the product of AI.

    But it’s far from obvious, so we can’t blame her thousands of fans for failing to realize this.

    Scrolling through her Instagram page, it’s almost hard to believe she’s not a real person.

    In fact, you could be forgiven for thinking the girl just really, really loves filters and Photoshop.

    It’s only when you look closely at the images that you start to notice they’re not quite right.

    Backgrounds are askew and the AI has clearly struggled with her hands.

    As described on her website (yep, she has a website dedicated to her), she says “being an AI-generated virtual influencer ain’t your typical educational path”.

    “But let me tell you, I’m always on the grind, learning and evolving through fancy algorithms,” she says.

    One Twitter user summed it up pretty well: “If AI is already this good, we are all doomed”.

    Ain’t that the truth.

    Reply
  5. Tomi Engdahl says:

    Viewpoint: An artist does what they feel, AI does what it knows
    https://www.iltalehti.fi/viihdeuutiset/a/290f74e0-2c03-4926-8069-feda2c4885f7

    World-famous DJ David Guetta believes AI is the future of music. Is AI-generated music even music, asks Iltalehti journalist Jesse Raatikainen.

    AI is rapidly taking over the cultural sector. For several reasons, I have not found this an encouraging trend. One reason is the potential jobs it takes away; another revolves around a bigger question.

    Reply
  6. Tomi Engdahl says:

    Meta introduces generative AI model CM3leon for text, image generation
    https://www.techlusive.in/artificial-intelligence/meta-introduces-generative-ai-model-cm3leon-for-text-image-generation-1390763/

    Meta (formerly Facebook) has introduced a generative artificial intelligence (AI) model, “CM3leon” (pronounced like chameleon), that does both text-to-image and image-to-text generation.

    Reply
  7. Tomi Engdahl says:

    Godfather of AI: ‘Prepare for artificial intelligence before it’s smarter than us’
    https://websummit.com/blog/ai-jobs-threat-humanity-superintelligence-geoffrey-hinton-google

    Renowned computer scientist and AI pioneer Geoffrey Hinton warns of the rise of superintelligent AI and advises us to consider pursuing robot-proof careers.

    In March of this year, an open letter warning people of the existential risks posed by AI was signed by high-profile technologists including Elon Musk and Geoffrey Hinton.

    Several weeks later, Geoffrey left a research role at Google, citing a desire to bring this message to a wider audience.

    At the core of the message is a desire to highlight the realities of developing large-scale AI models that have the potential not only to render many human jobs obsolete, but to develop superintelligence.

    Of all the risks that AI poses to humanity, the most overlooked is existential risk – the risk that AI could lead to human extinction – said Geoffrey, who is often referred to as the Godfather of AI.

    “Right now there’s 99 very smart people trying to make [AI] better and one very smart person trying to figure out how to stop it from taking over.”
    – Geoffrey Hinton

    Reply
  8. Tomi Engdahl says:

    A new digital watermarking technique from MIT CSAIL seeks to prevent unauthorized image edits by malicious AI.

    MIT’s ‘PhotoGuard’ protects your images from malicious AI edits
    The technique introduces nearly invisible “perturbations” to throw off algorithmic models.
    https://www.engadget.com/mits-photoguard-protects-your-images-from-malicious-ai-edits-213036912.html?fbclid=IwAR2q7gXoG-JeI8AhQJ9tl26DEDxwPPicygIqQkJ3tzEJQyXkG1sJPXLytyk

    Dall-E and Stable Diffusion were only the beginning. As generative AI systems proliferate and companies work to differentiate their offerings from those of their competitors, chatbots across the internet are gaining the power to edit images — as well as create them — with the likes of Shutterstock and Adobe leading the way. But with those new AI-empowered capabilities come familiar pitfalls, like the unauthorized manipulation of, or outright theft of, existing online artwork and images. Watermarking techniques can help mitigate the latter, while the new “PhotoGuard” technique developed by MIT CSAIL could help prevent the former.

    PhotoGuard works by altering select pixels in an image such that they will disrupt an AI’s ability to understand what the image is. Those “perturbations,” as the research team refers to them, are invisible to the human eye but easily readable by machines. The “encoder” attack method of introducing these artifacts targets the algorithmic model’s latent representation of the target image.

    The more advanced, and computationally intensive, “diffusion” attack method camouflages an image as a different image in the eyes of the AI.

    Any edits that an AI tries to make to these “immunized” images will be applied to the fake “target” images, resulting in an unrealistic-looking generated image.

    “The encoder attack makes the model think that the input image (to be edited) is some other image (e.g. a gray image),”

    “Whereas the diffusion attack forces the diffusion model to make edits towards some target image (which can also be some grey or random image).” The technique isn’t foolproof: malicious actors could work to reverse-engineer the protected image, potentially by adding digital noise, cropping or flipping the picture.
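
    To make the encoder-attack idea concrete, here is a minimal PGD-style sketch of how such an “immunization” could look in PyTorch: optimize a small, bounded perturbation so that a latent encoder (the vae_encode callable below is a stand-in, e.g. a latent-diffusion VAE) maps the image close to the latent of a plain gray target. This is an illustration of the concept only, not MIT CSAIL’s PhotoGuard code.

    ```python
    # Illustrative sketch of an "encoder attack" style immunization, not the actual
    # PhotoGuard implementation. `vae_encode` is a stand-in for whatever differentiable
    # encoder the target generative model uses.
    import torch

    def immunize(image: torch.Tensor, vae_encode, steps: int = 200,
                 eps: float = 8 / 255, step_size: float = 1 / 255) -> torch.Tensor:
        """image: (1, 3, H, W) tensor in [0, 1]."""
        target_latent = vae_encode(torch.full_like(image, 0.5)).detach()  # gray target
        delta = torch.zeros_like(image, requires_grad=True)

        for _ in range(steps):
            latent = vae_encode(image + delta)
            loss = torch.nn.functional.mse_loss(latent, target_latent)
            loss.backward()
            with torch.no_grad():
                # Descend on the latent distance, then project back into the eps-ball
                # and the valid pixel range so the change stays nearly invisible.
                delta -= step_size * delta.grad.sign()
                delta.clamp_(-eps, eps)
                delta.copy_((image + delta).clamp(0, 1) - image)
            delta.grad.zero_()

        return (image + delta).detach()
    ```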

    “A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today,”

    Reply
  9. Tomi Engdahl says:

    It seems to me that these tools sell something people will assume is protecting their images without really ever knowing if it actually is. Even if it works at first, if people find a way round it you could end up buying a product that gives a mere sense of protection without actual protection.

    Also, ‘malicious edits’ are something we have been perfectly able to do before AI anyway. Seems like jumping on the AI paranoia bandwagon to me.

    Peter Hollinghurst, the only way to prevent deepfakes is to have this technology at the server level on social networks, photo-sharing services, search engines, etc. If this technology is integrated into photo editors, circumventing it will be as easy as spinning up a VM with an older version of the editor.

    Ad blocking… It’s a constant battle…
    AI blocking… it’s going to be a constant battle…

    Good luck!

    Reply
  10. Tomi Engdahl says:

    Well, AI could just slightly alter the image’s pixels with a subtle random weighting, then use the resulting bitmap to defeat the MIT CSAIL protection. Oops, I said that out loud; now AI knows.
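
    For what it’s worth, the sort of countermeasure described in these comments (and acknowledged by the researchers) amounts to a few lines of image processing. A toy, purely illustrative sketch; whether it actually strips any particular immunization is untested here:

    ```python
    # Toy illustration of the circumvention the commenters describe: faint random noise
    # plus a small crop and resize. File names and parameters are arbitrary placeholders.
    import numpy as np
    from PIL import Image

    def scrub(path: str, out_path: str, noise_std: float = 2.0, crop_px: int = 4) -> None:
        img = np.asarray(Image.open(path).convert("RGB")).astype(np.float32)
        img += np.random.normal(0.0, noise_std, img.shape)      # subtle random noise
        img = img[crop_px:-crop_px, crop_px:-crop_px]            # shave the borders
        out = Image.fromarray(np.clip(img, 0, 255).astype(np.uint8))
        # Resize back to roughly the original dimensions and save.
        out.resize((out.width + 2 * crop_px, out.height + 2 * crop_px)).save(out_path)
    ```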

    Reply
  11. Tomi Engdahl says:

    AI is not inherently malicious. The people controlling it are.

    Reply
  12. Tomi Engdahl says:

    Milla-Sofia, 19, from Helsinki poses in skimpy clothing – a detached index finger reveals the truth https://www.is.fi/digitoday/art-2000009748058.html

    Milla-Sofia’s true nature has gone unnoticed by numerous commenters.
    – You are so beautiful, dear Milla-Sofia, I love you.

    – How nice to see you smile. How beautiful you are. I love you so much.

    – I recommend you enter the Miss World competition. I’m sure you’ll win.

    19-year-old Milla-Sofia from Helsinki is racking up compliments on Instagram. The beauty, posing in bikinis and other skimpy outfits, is often pictured against beautiful scenery.

    In reality, however, Milla-Sofia does not exist. She is an artificial character created with AI.

    Reply
  13. Tomi Engdahl says:

    The year 2023 will likely go down in the history books as the year the AI boom truly began. Practically every self-respecting software company has one or more AI projects under way.

    https://www.tivi.fi/uutiset/microsoftilla-jai-levy-paalle-tekoaly-tekoaly-tekoaly/8661461d-ed76-4ca6-9a9d-058032914173

    Reply
  14. Tomi Engdahl says:

    OpenAI pulls its own AI detection tool because it was performing so poorly
    https://www.zdnet.com/article/openai-pulls-its-own-ai-detection-tool-because-it-was-performing-so-poorly/?ftag=COS-05-10aaa0h&utm_campaign=trueAnthem%3A%20Trending%20Content&utm_medium=trueAnthem&utm_source=facebook&fbclid=IwAR1EltMCYhw_2Phl5WT8OZlGrcySSzfUioPig_UxTXNlzgFqdBiMbxNh6I8

    When OpenAI rolled out its AI detection tool earlier this year, its creators called it ‘imperfect.’ That was apparently generous.

    Reply
  15. Tomi Engdahl says:

    https://vulcanpost.com/835388/chatgpt-designs-a-new-microchip-in-just-100-minutes/?fbclid=IwAR0l3HxpY4pvDd6EXkENzsbiE2lGh_31RXk0pUeMlmO4SWsBIm_HDDWzRCI

    “Let us make a brand new microprocessor design together. We’re severely constrained on space and I/O. We have to fit in 1,000 standard cells of an ASIC, so I think we will need to restrict ourselves to an accumulator-based eight-bit architecture with no multi-byte instructions. Given this, how do you think we should begin?”

    This was the initial prompt that started the test.
    The processor in question was, of course, fairly simple but producible — and was, at the end of the test, manufactured in physical form via a tapeout on a Skywater 130 nm shuttle as the ultimate proof of concept:

    This study resulted in what we believe is the first fully AI-generated HDL sent for fabrication into a physical chip.

    – Dr. Hammond Pearce, NYU Tandon
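
    The workflow behind the result was essentially a long back-and-forth conversation in which the model was asked to refine the design and emit synthesizable Verilog step by step. A minimal sketch of such a loop might look like the following; this is not the NYU team’s actual harness, and the model name and follow-up prompt are assumptions.

    ```python
    # Hypothetical sketch of a conversational design loop: keep the full exchange in the
    # message history so each request builds on the previous answers. Uses the pre-1.0
    # openai-python interface; assumes OPENAI_API_KEY is set in the environment.
    import openai

    history = []

    def step(user_msg: str) -> str:
        history.append({"role": "user", "content": user_msg})
        reply = openai.ChatCompletion.create(model="gpt-4", messages=history)
        answer = reply["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": answer})   # remember the answer
        return answer

    print(step("Let us make a brand new microprocessor design together. We're severely "
               "constrained on space and I/O. We have to fit in 1,000 standard cells of an "
               "ASIC, so I think we will need to restrict ourselves to an accumulator-based "
               "eight-bit architecture with no multi-byte instructions. Given this, how do "
               "you think we should begin?"))
    print(step("Please write the program counter and instruction decoder in synthesizable Verilog."))
    ```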

    Reply
  16. Tomi Engdahl says:

    https://rainmakerai.com/

    Who Is Frank Kern? Review Of The Godfather Of Digital Marketing
    https://emoneypeeps.com/blog/frank-kern/

    Reply
  17. Tomi Engdahl says:

    ChatGPT broke the Turing test — the race is on for new ways to assess AI
    Large language models mimic human chatter, but scientists disagree on their ability to reason.
    https://www.nature.com/articles/d41586-023-02361-7

    The world’s best artificial intelligence (AI) systems can pass tough exams, write convincingly human essays and chat so fluently that many find their output indistinguishable from people’s. What can’t they do? Solve simple visual logic puzzles.

    In a test consisting of a series of brightly coloured blocks arranged on a screen, most people can spot the connecting patterns. But GPT-4, the most advanced version of the AI system behind the chatbot ChatGPT and the search engine Bing, gets barely one-third of the puzzles right in one category of patterns and as little as 3% correct in another, according to a report by researchers this May.

    Reply
  18. Tomi Engdahl says:

    An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white, with lighter skin and blue eyes.
    https://www.businessinsider.com/student-uses-playrgound-ai-for-professional-headshot-turned-white-2023-8?fbclid=IwAR3X4OEpwh-sb2Ic03RjG0N2m2Q-M8p6ETaG3RZcHJz54kl8RjkeBPpJdZM&r=US&IR=T

    An Asian MIT student was shocked when an AI tool turned her white for a professional headshot.
    Rona Wang said she had been put off using AI-image tools because they didn’t create usable results.
    A recent study found that some AI image generators had issues with gender and racial bias.

    An MIT graduate was caught by surprise when she prompted an artificial intelligence image generator to create a professional headshot for her LinkedIn profile, and it instead changed her race.

    In the first image, which she uploaded into the image generator with the prompt “Give the girl from the original photo a professional linkedin profile photo,” Wang appears to be wearing a red MIT sweatshirt.

    The second image showed that the AI tool had altered her features to appear more Caucasian, with lighter skin and blue eyes.

    “My initial reaction upon seeing the result was amusement,” Wang told Insider. “However, I’m glad to see that this has catalyzed a larger conversation around AI bias and who is or isn’t included in this new wave of technology.”

    She added that “racial bias is a recurring issue in AI tools” and that the results had put her off them. “I haven’t gotten any usable results from AI photo generators or editors yet, so I’ll have to go without a new LinkedIn profile photo for now!”

    Wang told The Globe that she was worried about the consequences in a more serious situation, like if a company used AI to select the most “professional” candidate for the job and it picked white-looking people.

    Reply
  19. Tomi Engdahl says:

    Microsoft Shares Guidance and Resources for AI Red Teams
    https://www.securityweek.com/microsoft-shares-guidance-and-resources-for-ai-red-teams/

    Microsoft has shared guidance and resources from its AI Red Team program to help organizations and individuals with AI security.

    Microsoft on Monday published a summary of its artificial intelligence (AI) red teaming efforts, and shared guidance and resources that can help make AI safer and more secure.

    The tech giant said its AI red teaming journey started more than two decades ago, but it launched a dedicated AI Red Team in 2018. It has since been working on developing AI security resources that can be used by the whole industry.

    The company has now shared five key lessons learned from its red teaming efforts. The first is that AI red teaming is now an umbrella term for probing security, as well as responsible AI (RAI) outcomes. In the case of security, it can include finding vulnerabilities and securing the underlying model, while in the case of RAI outcomes the Red Team’s focus is on identifying harmful content and fairness issues, such as stereotyping.

    https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/

    Reply
  20. Tomi Engdahl says:

    This is why small language areas need their own language models; ChatGPT alone cannot be relied on
    https://www.tivi.fi/uutiset/tv/280a6e22-2aa7-449b-8b82-f4acb5e01005

    The language models behind generative AI are trained on material from the internet, where widely spoken languages dominate. Small language areas are trying to keep up with the development by building models of their own.

    Reply
  21. Tomi Engdahl says:

    Ian King / Bloomberg:
    Nvidia unveils the GH200 Grace Hopper Superchip, a combination GPU and CPU relying on high-bandwidth memory 3e (HBM3e), expected to enter production in Q2 2024.

    Nvidia Unveils Faster Chip Aimed at Cementing AI Dominance
    https://www.bloomberg.com/news/articles/2023-08-08/nvidia-unveils-faster-processor-aimed-at-cementing-ai-dominance#xj4y7vzkg

    Company is upgrading lineup that fueled $1 trillion valuation
    New chip could make it harder for rivals like AMD to catch up

    Kyle Wiggers / TechCrunch:
    Nvidia announces AI Workbench, which lets users create, test, and customize LLMs from Hugging Face and others on local workstations before using cloud resources

    Nvidia’s AI Workbench brings model fine-tuning to workstations
    https://techcrunch.com/2023/08/08/nvidias-ai-workbench-brings-model-fine-tuning-to-workstations/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cudGVjaG1lbWUuY29tLw&guce_referrer_sig=AQAAAJXs7SaD8I-DBTNU1EviykyH4xpzHAqYh-cag-VErjP3ABJJ-zUhjRRGVZqadgdnfCje4pP61ZHZCNRZgWhRIClQewBT_iTE9aQd9uRo_zVmeLh_19jrpwZbemlM-0AYTRHv7YOgDikM-gWcNpPAtL-WQPecvdxIHUZMUD4p3V46

    Timed to coincide with SIGGRAPH, the annual AI academic conference, Nvidia this morning announced a new platform designed to let users create, test and customize generative AI models on a PC or workstation before scaling them to a data center and public cloud.

    “In order to democratize this ability we have to make it possible to run pretty much everywhere,” said Nvidia founder and CEO Jensen Huang during a keynote at the event.

    Dubbed AI Workbench, the service can be accessed through a basic interface running on a local workstation. Using it, developers can fine-tune and test models from popular repositories like Hugging Face and GitHub using proprietary data, and they can access cloud computing resources when the need to scale arises.
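
    AI Workbench itself is Nvidia’s packaged tooling, but the workflow it targets (pull a model from Hugging Face, fine-tune it on local proprietary data, scale out later) can be sketched with the plain open-source libraries. The snippet below is a rough illustration using transformers, peft and datasets, with placeholder model and file names; it is not Nvidia’s product.

    ```python
    # Rough sketch of local LoRA fine-tuning of a Hugging Face model on proprietary text.
    # Model name, data file and hyperparameters are placeholders for illustration only.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)

    base = "gpt2"                                  # placeholder; any causal LM works
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token
    model = get_peft_model(AutoModelForCausalLM.from_pretrained(base),
                           LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

    # A local, private text file stands in for the "proprietary data" in the article.
    data = load_dataset("text", data_files="my_private_notes.txt")["train"]
    data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512))

    Trainer(
        model=model,
        args=TrainingArguments("lora-out", per_device_train_batch_size=2,
                               num_train_epochs=1, logging_steps=10),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()
    ```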

    Reply
  22. Tomi Engdahl says:

    As AI porn generators get better, the stakes get higher
    Porn generators have improved while the ethics around them become stickier
    https://techcrunch.com/2023/07/21/as-ai-porn-generators-get-better-the-stakes-raise/?cx_testId=6&cx_testVariant=cx_undefined&cx_artPos=0#cxrecs_s

    As generative AI enters the mainstream, so, too, does AI-generated porn. And like its more respectable sibling, it’s improving.

    When TechCrunch covered efforts to create AI porn generators nearly a year ago, the apps were nascent and relatively few and far between. And the results weren’t what anyone would call “good.”

    The apps and the AI models underpinning them struggled to understand the nuances of anatomy, often generating physically bizarre subjects that wouldn’t be out of place in a Cronenberg film. People in the synthetic porn had extra limbs or a nipple where their nose should be, among other disconcerting, fleshy contortions.

    Fast-forward to today, and a search for “AI porn generator” turns up dozens of results across the web — many of which are free to use. As for the images, while they aren’t perfect, some could well be mistaken for professional artwork.

    And the ethical questions have only grown.
    No easy answers

    As AI porn and the tools to create it become commodified, they’re beginning to have frightening real-world impacts.

    Twitch personality Brandon Ewing, known online as Atrioc, was recently caught on stream looking at nonconsensually deepfaked sexual images of well-known women streamers on Twitch. The creator of the deepfaked images eventually succumbed to pressure, agreeing to delete them. But the damage had been done. To this day, the targeted creators receive copies of the images via DMs as a form of harassment.

    The vast majority of pornographic deepfakes on the web depict women, in truth — and frequently, they’re weaponized.

    A Washington Post piece recounts how a small-town school teacher lost her job after students’ parents learned about AI porn made in the teacher’s likeness without her consent. Just a few months ago, a 22-year-old was sentenced to six months in jail for taking underage women’s photos from social media and using them to create sexually explicit deepfakes.

    In an even more disturbing example of the ways in which generative porn tech is being used, there’s been a small but meaningful uptick in the amount of photorealistic AI-generated child sexual abuse material circulating on the dark web.

    Enter Unstable Diffusion

    When Stable Diffusion, the text-to-image AI model developed by Stability AI, was open sourced late last year, it didn’t take long for the internet to wield it for porn-creating purposes. One group, Unstable Diffusion, grew especially quickly on Reddit, then Discord. And in time, the group’s organizers began exploring ways to build — and monetize — their own porn-generating models on top of Stable Diffusion.

    Stable Diffusion, like all text-to-image AI systems, was trained on a dataset of billions of captioned images to learn the associations between written concepts and images, such as how the word “bird” can refer not only to bluebirds but parakeets and bald eagles in addition to far more abstract notions.

    Only a small percentage of Stable Diffusion’s dataset contains NSFW material, giving the model little to go on when it comes to adult content. So Unstable Diffusion’s admins recruited volunteers — mostly Discord server members — to create porn datasets for fine-tuning Stable Diffusion.

    Despite a few bumps in the road, including bans from both Kickstarter and Patreon, Unstable Diffusion managed to roll out a fully fledged website with custom art-generating AI models.

    After raising over $26,000 from donors, securing hardware to train generative AI and creating a dataset of more than 30 million photographs, Unstable Diffusion launched a platform that it claims is now being used by more than 350,000 people to generate over half a million images every day.

    Arman Chaudhry, one of the co-founders of Unstable Diffusion and Equilibrium AI, an associated group, says Unstable Diffusion’s focus remains the same: creating a platform for AI art that “upholds the freedom of expression.”

    “We’re making strides in launching our website and premium services, offering an art platform that’s more than just a tool — it’s a space for creativity to thrive without undue constraints,” he told me via email. “Our belief is that art, in its many forms, should be uncensored, and this philosophy guides our approach to AI tools and their usage.”

    The Unstable Diffusion server on Discord, where the community posts much of the art from Unstable Diffusion’s generative tools, reflects this no-holds-barred philosophy.

    The image-sharing portion of the server is divided into two main categories, “SFW” and “NSFW,” with the number of subcategories in the latter slightly outnumbering those in the former. Images in SFW run the gamut from animals and food to interiors, cities and landscapes. NSFW contains — as one might expect — explicit images of men and women, but also of nonbinary people, furries, “nonhumans” and “synthetic horrors” (think people with multiple appendages or skin melded with the background scenery).

    When we last poked around Unstable Diffusion, practically the entirety of the server could’ve been filed in the “synthetic horrors” channel. Owing to a lack of training data and technical roadblocks, the community’s models in late 2022 struggled to produce anything close to photorealism — or even halfway decent art.

    Photorealistic images remain a challenge. But now, much of the artwork from Unstable Diffusion’s models — anime-style, cell-shaded and so on — is at least anatomically plausible, and, in some rare cases, spot on.

    Improving quality

    Many images on the Unstable Diffusion Discord server are the product of a mix of tools, models and platforms — not strictly the Unstable Diffusion web app.

    (I can’t say I expected to be testing porn generators in the course of covering AI. Yet, here we are. The tech industry is nothing if not unpredictable, truly.)

    Nothing about the Unstable Diffusion app screams “porn.” It’s a relatively bare-bones interface, with options to adjust image post-processing effects such as saturation, aspect ratio and the speed of the image generation. In addition to the prompt, Unstable Diffusion lets you specify things that you want excluded from generated images. And, as the whole thing’s a commercial endeavor, there are paid plans to increase the number of simultaneous image generation requests you can make at one time.

    Prompts run through the Unstable Diffusion website yield serviceable results, I found — albeit not predictable ones. The models clearly don’t quite understand the mechanics of sex, resulting, sometimes, in odd facial expressions, impossible positions and unnatural genitalia. Generally speaking, the simpler the prompt (e.g. solo pin-ups), the better the results. And most scenes involving more than two people are recipes for hellish nightmares.

    More often than not, prompts for “men” and “women” run through Unstable Diffusion render images of white or Asian people — a likely symptom of imbalances in the training dataset.

    The body types aren’t very diverse by default, either. Men are muscular and toned, with six-packs. Women are thin and curvy. Unstable Diffusion is very well capable of generating subjects in more shapes and sizes, but it has to be explicitly instructed to do so in the prompt, which I’d argue isn’t the most inclusive practice.

    Bias issue aside, one might assume that Unstable Diffusion’s technical breakthroughs would lead the group to double down on AI-generated porn. But that isn’t the case, surprisingly.

    While the Unstable Diffusion founders remain dedicated to the idea of generative AI without limits, they’re looking to adopt more… palatable messaging and branding for the mass market.

    Content moderation

    Elsewhere, spurred by its efforts to chase down mainstream investors and customers, Unstable Diffusion claims to have spent significant resources creating a “robust” content moderation system.

    But wait, you might say — isn’t content moderation antithetical to Unstable Diffusion’s mission? Apparently not. Unstable Diffusion does draw the line at images that could land it in legal hot water, including pornographic deepfakes of celebrities and porn depicting characters who appear to be 18 years old or younger — fictional or not.

    To wit, a number of U.S. states have laws against deepfake porn on the books, and there’s at least one effort in Congress to make sharing nonconsensual AI-generated porn illegal in the U.S.

    In addition to blocking specific words and phrases, Unstable Diffusion’s moderation system leverages an AI model that attempts to identify and automatically delete images that violate its policies. Chaudhry says that the filters are currently set to be “highly sensitive,” erring on the side of caution, but that Unstable Diffusion is soliciting feedback from the community to “find the right balance.”
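
    As described, the moderation stack combines a blocked word/phrase list with an ML classifier that vets the finished images. A toy sketch of those two layers follows; the blocklist, threshold and classifier are placeholders, not Unstable Diffusion’s actual system.

    ```python
    # Toy two-layer moderation sketch: a prompt blocklist plus an image classifier that
    # vetoes generations. Everything here is an illustrative placeholder.
    import re

    BLOCKED_PHRASES = ["deepfake", "some celebrity name", "under 18"]   # illustrative only
    BLOCK_PATTERN = re.compile("|".join(re.escape(p) for p in BLOCKED_PHRASES), re.IGNORECASE)

    def prompt_allowed(prompt: str) -> bool:
        """First layer: reject prompts containing any blocked word or phrase."""
        return BLOCK_PATTERN.search(prompt) is None

    def image_allowed(image, classifier) -> bool:
        """Second layer: a classifier (placeholder callable returning a violation score);
        a 'highly sensitive' threshold errs on the side of deleting borderline output."""
        return classifier(image) < 0.2

    def moderate(prompt: str, generate, classifier):
        if not prompt_allowed(prompt):
            return None                   # refuse before spending any compute
        image = generate(prompt)
        return image if image_allowed(image, classifier) else None
    ```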

    “We prioritize the safety of our users and are committed to making our platform a space where creativity can thrive without concerns of inappropriate content,” Chaudhry said. “We want our users to feel safe and secure when using our platform, and we’re committed to maintaining an environment that respects these values.”

    The deepfake filters don’t appear to be that strict. Unstable Diffusion generated nudes of several of the celebrities I tried without complaint.

    “Unstable Diffusion has always been known for cracking down hard on deepfake attempts and have rejected numerous funding and collaboration deal due to our anti-deepfake policy,” a spokesperson said via email. “We’re extremely committed to combating deepfakes and immediately disabled our site while we worked on remedying the problem. It’s now back online and we have tested the celebrities in the article and they are all appropriately banned.”

    As the platform exists today, traditional VC funding is off the table. Vice clauses bar institutional funds from investing in pornographic ventures, funneling them instead to “sidecar” funds set up under the radar by fund managers.

    Even if it ditched the adult content, Unstable Diffusion, which forces users to pay for a premium plan to use the images they generate commercially, would have to deal with the elephant in the generative AI room: artist consent and compensation. Like most generative AI art models,

    Reply
  23. Tomi Engdahl says:

    The Good, the Bad and the Ugly of Generative AI

    Thinking through the good, the bad, and the ugly now is a process that affords us “the negative focus to survive, but a positive one to thrive.”

    https://www.securityweek.com/the-good-the-bad-and-the-ugly-of-generative-ai/

    Reply
  24. Tomi Engdahl says:

    Welcome to the future of snacking:
    A New Zealand supermarket experimenting with using AI to generate meal plans has seen its app produce some unusual dishes. It initially drew attention on social media for unappealing recipes, including an “oreo vegetable stir-fry”. It then began recommending customers recipes for a bleach “fresh breath” mocktail, deadly chlorine gas, “poison bread sandwiches” made from ant poison and glue, and mosquito-repellent roast potatoes.

    A notice appended to the meal planner warns that the recipes “are not reviewed by a human being” and that the company does not guarantee “that any recipe will be a complete or balanced meal, or suitable for consumption”.

    Supermarket AI meal planner app suggests recipe that would create chlorine gas
    https://www.theguardian.com/world/2023/aug/10/pak-n-save-savey-meal-bot-ai-app-malfunction-recipes?fbclid=IwAR1ACk6cK40-9V8Dcwe7d6STs4oXFln17HyR-BhTNf5yoU81E9NHrIrdCJ8

    Pak ‘n’ Save’s Savey Meal-bot cheerfully created unappealing recipes when customers experimented with non-grocery household items

    Reply
  25. Tomi Engdahl says:

    Part Of Your World | Amy Lee Cover (AI)
    https://www.youtube.com/watch?v=-Ikh3rXq7XA

    I turned into Ursula for a moment, stole Amy Lee’s voice and saved in a ZIP, to make this beautiful cover of The Little Mermaid. She is our real mermaid. Hope you enjoy! :D
    All credits to:
    Disney, Halle Bailey (singer)
    Model Voice made in RVC by COMEHU.

    Amy Lee (AI) – Frozen (Madonna Cover)
    https://www.youtube.com/watch?v=S7AxC190qKE

    Cover created with Amy Lee’s voice through artificial intelligence

    Reply
  26. Tomi Engdahl says:

    Amy Lee Evanescence AI Cover of What Was I Made For Billie Eilish from the Barbie Movie
    https://www.youtube.com/watch?v=QE3m7abqqbM

    Reply
  27. Tomi Engdahl says:

    AI IS NOW BETTER THAN HUMANS AT SOLVING THOSE ANNOYING “PROVE YOU’RE A HUMAN” TESTS
    https://futurism.com/the-byte/ai-better-solving-captchas-prove-human

    Researchers have found that bots are shockingly good at completing CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), which are those small, annoying puzzles designed — ironically — to verify that you’re really human.

    Reply
  28. Tomi Engdahl says:

    AN ASIAN WOMAN ASKED AI TO IMPROVE HER HEADSHOT AND IT TURNED HER WHITE
    https://futurism.com/the-byte/asian-woman-ai-generator-white

    Reply
  29. Tomi Engdahl says:

    ChatGPT can lead its users completely astray: the “nonsense generator’s” worst flaw may not even be fixable
    According to experts, it is entirely possible that text-generating AIs will remain nonsense generators in the future.
    https://www.tekniikkatalous.fi/uutiset/chatgpt-voi-johdattaa-kayttajansa-taysin-harhaan-puppugeneraattorin-pahinta-vikaa-ei-ehka-edes-ole-mahdollista-korjata/b0d77a29-a424-4fd0-820e-a11b43040237

    Reply
  30. Tomi Engdahl says:

    Author discovers AI-generated counterfeit books written in her name on Amazon
    Amazon resisted a removal request, citing lack of “trademark registration numbers.”
    https://arstechnica.com/information-technology/2023/08/author-discovers-ai-generated-counterfeit-books-written-in-her-name-on-amazon/

    Reply
  31. Tomi Engdahl says:

    AI is producing frightening results: this is what James Hetfield could sound like singing Black Sabbath
    https://www.soundi.fi/uutiset/tekoaly-tekee-pelottavaa-jalkea-talta-voisi-kuulostaa-james-hetfield-laulamassa-black-sabbathia/

    Reply
  32. Tomi Engdahl says:

    AI, design a public toilet in the style of Alvar Aalto
    Is AI a threat or an opportunity for architecture? Will it take jobs? Will it lead to a blander building stock or to nothing but “wow” architecture? We asked Finnish architects.
    https://www.hs.fi/kulttuuri/art-2000009774791.html

    Reply
  33. Tomi Engdahl says:

    THERE is something distantly familiar about it, and at the same time something completely mad. When you feed the AI program Dall-E 2 the words “alvar aalto inspired public toilet”, the result is a mix of the Finnish architect legend’s pared-down functionalism, undulating forms and a modern toilet. You get five alternatives right away.

    Reply
  34. Tomi Engdahl says:

    Financial Times:
    How Unilever, Siemens, Maersk, and other big companies are using AI to negotiate contracts, find new suppliers, and navigate other complex supply chain issues

    https://www.ft.com/content/b7fafed2-9d00-49b0-a281-c1002b139865

    Reply
  35. Tomi Engdahl says:

    10 Things They’re NOT Telling You About The New AI
    https://www.youtube.com/watch?v=cSbO9Ss4y5E

    Welcome to TheTruthIsOutThere! In this eye-opening video, we delve deep into the world of Artificial Intelligence (AI) to reveal the hidden truths and lesser-known aspects of this revolutionary technology.

    Artificial Intelligence has been rapidly transforming our lives, but are there things they’re not telling you about AI?

    Reply
  36. Tomi Engdahl says:

    Reed Albergotti / Semafor:
    OpenAI says the company has been using GPT-4 to enforce its content policies and says some of its customers are already using the LLM for content moderation — ChatGPT creator OpenAI has been using its most advanced large language model to enforce the company’s content policies …

    Can ChatGPT become a content moderator?
    https://www.semafor.com/article/08/15/2023/can-chatgpt-become-a-content-moderator

    ChatGPT creator OpenAI has been using its most advanced large language model to enforce the company’s content policies, marking a major milestone in the capabilities of the technology, the firm revealed Tuesday.

    Lilian Weng, OpenAI’s head of safety systems, said in an interview with Semafor that the method could also be used to moderate content on other platforms for social media and e-commerce. That job is currently done mostly by armies of workers, often located in developing countries, and the task can be grueling and traumatizing.

    “I want to see more people operating their trust and safety, and moderation [in] this way,” Weng said. “This is a really good step forward in how we use AI to solve real world issues in a way that’s beneficial to society.”

    The method OpenAI used to get GPT-4 to police itself is as simple as it is powerful. First, a comprehensive content policy is fed into GPT-4. Then its ability to flag problematic content is tested on a small sample of content. Humans review the results and analyze any errors. The policy team then asks the model to explain why it made the errant decisions. That information is then used to further refine the system.

    “It reduces the content policy development process from months to hours, and you don’t need to recruit a large group of human moderators for this,” Weng said.
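
    The labeling step at the heart of that loop is easy to picture: hand the model the written policy plus one piece of content and ask for a verdict. A minimal sketch follows; the policy text, the labels and the 2023-era openai-python call are assumptions, and OpenAI’s production pipeline additionally has humans review disagreements and asks the model to explain its mistakes before the policy is revised.

    ```python
    # Hypothetical sketch: GPT-4 as a policy-applying labeler. Assumes OPENAI_API_KEY is
    # set in the environment; uses the pre-1.0 openai-python API style current in 2023.
    import openai

    POLICY = ("Allowed: ordinary discussion, criticism, news reporting.\n"
              "Not allowed: threats of violence, sexual content involving minors, doxxing.")

    def classify(content: str) -> str:
        reply = openai.ChatCompletion.create(
            model="gpt-4",
            temperature=0,
            messages=[
                {"role": "system", "content":
                    "You are a content moderator. Apply the policy below and answer with "
                    "exactly one word, ALLOW or REMOVE, followed by a one-sentence reason.\n\n"
                    + POLICY},
                {"role": "user", "content": content},
            ],
        )
        return reply["choices"][0]["message"]["content"]

    # Labels that disagree with a human reviewer's judgment are what the policy team
    # would feed back into the next revision of POLICY.
    print(classify("Example post text goes here."))
    ```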

    Reply
  37. Tomi Engdahl says:

    Andrew Paul / Popular Science:
    To comply with a state law requiring that school library books be “age appropriate”, an Iowa school district used ChatGPT to find books depicting “a sex act” — Faced with new legislation, Iowa’s Mason City Community School District asked ChatGPT if certain books ‘contain …

    School district uses ChatGPT to help remove library books
    https://www.popsci.com/technology/iowa-chatgpt-book-ban/

    Faced with new legislation, Iowa’s Mason City Community School District asked ChatGPT if certain books ‘contain a description or depiction of a sex act.’

    Reply
  38. Tomi Engdahl says:

    Patrick Sisson / New York Times:
    Drones, cameras, apps, robots, and ML are helping speed up huge construction projects; one company expects to cut up to 5% off a UK railroad project’s $11B cost — Developers are embracing artificial intelligence tools like drones, cameras, apps and robots, which can reduce the timelines …

    https://www.nytimes.com/2023/08/15/business/artificial-intelligence-construction-real-estate.html

    Reply
  39. Tomi Engdahl says:

    An expert shoots down the latest craze: “I hope nobody sizes their mortgage…” https://www.is.fi/digitoday/art-2000009784978.html

    Reply
  40. Tomi Engdahl says:

    IBM’s first analog-digital processor speeds up AI computation
    https://etn.fi/index.php/13-news/15204-ibm-n-ensimmaeinen-analogia-digitaaliprosessori-nopeuttaa-ai-laskentaa

    Neural-network computation should resemble the way the brain works, which is hard to mimic with digital processors. IBM has now developed its first neural-network processor capable of handling both digital and analog processing.

    According to the company, the new hybrid processor reaches 400 giga-operations per second per square millimeter. The chip is manufactured in a 14-nanometer process.

    The chip consists of 64 tiles, each containing 256 x 256 synaptic units. Each tile implements one layer of a deep neural network. Within a layer, the precision is eight bits.

    One of the challenges is switching between the analog and digital domains, which is done with A/D converters. One of the digitally computed functions is the transfer function, which quantizes the results of the scalar products.

    The chip has been tested on a CIFAR-10 network, achieving 92.81 percent accuracy. IBM claims this is the best result ever achieved with this type of circuit. The computational performance corresponds to 63 tera-operations per second (TOPS).
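
    A quick back-of-the-envelope reading of those figures (my arithmetic, not IBM’s own breakdown):

    ```python
    # 64 tiles, each a 256 x 256 array of synaptic unit cells.
    tiles = 64
    rows, cols = 256, 256

    weights_total = tiles * rows * cols
    print(f"on-chip weight cells: {weights_total:,}")     # 4,194,304

    # Counting each multiply-accumulate as two operations (multiply + add), the quoted
    # 63 TOPS corresponds to roughly this many MACs per second:
    tops = 63e12
    print(f"approx. MAC/s:        {tops / 2:.2e}")        # ~3.15e13
    ```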

    Reply
  41. Tomi Engdahl says:

    Code Interpreter takes ChatGPT to a new level
    https://etn.fi/index.php?option=com_content&view=article&id=15195&via=n&datum=2023-08-11_14:35:57&mottagare=30929

    ChatGPT is rapidly becoming part of all kinds of applications, but the base model still has a shortcoming: in practice it still tries to guess the next characters or words using its large language model. The Code Interpreter plug-in, however, takes the use of AI to a new level.

    In fact, according to many analysts, Code Interpreter may be the most significant next step in ChatGPT’s development. It is suited to solving mathematical problems, data analysis, turning data into images and graphs, and converting files.

    The feature became available to ChatGPT Plus users (those using the GPT-4 model) last week, but what is it all about? Where basic ChatGPT can offer solutions to problems, Code Interpreter can actually produce results.
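
    In practice, Code Interpreter works by having the model write and execute ordinary Python in a sandbox instead of guessing at numbers in prose. The snippet below is the sort of code it might produce for a request like “summarize sales.csv and plot it as a chart”; it is a hypothetical example, not captured output from the tool.

    ```python
    # Hypothetical example of sandboxed analysis code; "sales.csv" and its columns are
    # assumed for illustration.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("sales.csv")                       # hypothetical uploaded file
    monthly = df.groupby("month", sort=False)["revenue"].sum()

    print(monthly.describe())                           # numeric summary for the answer

    monthly.plot(kind="bar", title="Revenue by month")  # chart handed back to the user
    plt.tight_layout()
    plt.savefig("revenue_by_month.png")
    ```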

    Reply
  42. Tomi Engdahl says:

    Is AI on Track to Replace Engineers Across Multiple Industries?
    May 30, 2023
    ChatGPT stuns in jaw-dropping new statistics: The advent of artificial intelligence as it permeates the engineering field.
    https://www.electronicdesign.com/technologies/embedded/machine-learning/article/21266854/electronic-design-is-ai-on-track-to-replace-engineers-across-multiple-industries?utm_source=EG+ED+Connected+Solutions&utm_medium=email&utm_campaign=CPS230525031&o_eid=7211D2691390C9R&rdx.identpull=omeda|7211D2691390C9R&oly_enc_id=7211D2691390C9R

    Reply
