Audio and video trends for 2017

Here are some audio and video trend picks for the year 2017:

It seems that the 3D craze is over. So long, 3DTV – we won’t miss you. BBC News reports that at this year’s CES trade show there was barely a whimper about 3D TV, compared to just two years ago when it was being heralded as the next big thing. In the cinema, 3D was milked for all it was worth, and even James Cameron, who directed Avatar, is fed up with it. There are currently no major manufacturers making 3D TVs, as Samsung, LG and Sony have all stopped making 3D-enabled televisions. According to CNET’s report, TV makers are instead focusing on newer technologies such as HDR.

360-degree virtual reality video is hot now. Movie studios are pouring resources into virtual reality storytelling. The article 360-Degree Video Playback Coming to VLC, VR Headset Support Planned for 2017 reports that the VLC media player now previews 360° video and photo support in its desktop apps, that the feature will come to mobile soon, and that dedicated VLC apps for VR headsets are due in 2017.

4K and 8K video resolutions are hot. Test broadcasting of 8K started in August 2016 in Japan, and full service is scheduled for 2018. According to the Socionext Introduces 8K HEVC Real-Time Encoder Solution press release, virtual reality technology, which is seeing rapid growth in the global market, requires 8K resolution because the current 4K resolution cannot support a full 360-degree wraparound view with adequate detail.
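
To make the resolution claim concrete, here is a rough back-of-the-envelope sketch. The equirectangular projection and the ~100° headset field of view are my own assumptions, not figures from the press release:

```cpp
// Back-of-the-envelope check of why 360-degree video pushes toward 8K sources.
// Assumptions (mine): an equirectangular 360-degree frame and a headset with a
// roughly 100-degree horizontal field of view.
#include <cstdio>

int main() {
    const double fov_deg = 100.0;              // assumed headset horizontal FOV
    const int widths[] = {3840, 7680};         // 4K UHD vs. 8K frame widths
    for (int w : widths) {
        double visible = w * fov_deg / 360.0;  // pixels that fall inside the FOV
        std::printf("%d-wide frame -> ~%.0f pixels across the visible view\n",
                    w, visible);
    }
    // A 4K frame leaves only ~1067 pixels for the whole visible field, well below
    // what current per-eye headset panels can display; 8K roughly doubles that.
    return 0;
}
```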

The article Fake News Is About to Get Even Scarier than You Ever Dreamed says that advances in audio and video technology are becoming so sophisticated that they will be able to replicate real news—real TV broadcasts, for instance, or radio interviews—in unprecedented, and truly indecipherable, ways. Adobe showed off a new product, nicknamed “Photoshop for audio”, that lets you type words and have them spoken in the exact voice of someone you have a recording of. Technologists can also record video of someone talking and then change their facial expressions in real time. Digital avatars can be almost indistinguishable from real people – in the latest Star Wars movie it is hard to tell which actors are real and which are computer-generated.

Antique audio formats seem to be making a comeback. By now, it isn’t news that vinyl albums continue to sell. It is interesting that UK vinyl sales have reached a 25-year high, to the point that vinyl records outsold digital downloads in the UK, at least for one week.

I would not have guessed that Cassettes Are Back, and Booming. But a new report says that sales of music on cassette are up 140 percent. The antiquated format is being embraced by everyone from indie musicians to Eminem and Justin Bieber. For some strange reason it turns out there’s a place for archaic physical media of questionable audio fidelity—even in the Spotify era.

Enhance! RAISR Sharp Images with Machine Learning. The article Google RAISR Intelligently Makes Low-Res Images High Quality reports that with Google’s RAISR machine-learning-driven image enhancement technique, images can be up to 75% smaller without losing their detail.

The article Improving Multiscreen Services reports that operators have discovered challenges as they try to meet subscribers’ requirements for any content on any device. Operators must choose from a variety of options for preparing and delivering video to multiple screens. And unlike the purpose-built video networks of the past, multiscreen OTT distribution has no well-defined quality standards such as IPTV’s SCTE-168.

The article 2017: Digital Advertising to overtake TV Advertising in US this year reports that, according to PricewaterhouseCoopers, ad spend on digital advertising will surpass TV ads for the first time in 2017. For years television put up a tough fight against the internet in ad spend, but online advertising is set to decisively take over the market in 2017. For details, check How TV ad spending stacks up against digital ad spending in 4 charts.

Embedded vision, hyperspectral imaging, and multispectral imaging were among the trends identified at VISION 2016.

 

624 Comments

  1. Tomi Engdahl says:

    Handheld Gimbal with Off-The-Shelf Parts
    http://hackaday.com/2017/08/10/handheld-gimbal-with-off-the-shelf-parts/

    For anything involving video capture while moving, most videographers, cinematographers, and camera operators turn to a gimbal. In theory it is a simple machine, needing only three sets of bearings to allow the camera to maintain a constant position despite a shifting, moving platform. In practice it’s much more complicated, and gimbals can easily run into the thousands of dollars. While it’s possible to build one to reduce the extravagant cost, few use 100% off-the-shelf parts like [Matt]’s handheld gimbal.

    Handheld Camera Gimbal
    For Mirrorless and Mid-Size DSLRs.
    https://hackaday.io/project/25740-handheld-camera-gimbal

  2. Tomi Engdahl says:

    Disney’s Building Its Own Netflix. Everyone Else Might, Too
    https://www.wired.com/story/disney-leaving-netflix

    The boats are coming for Disney’s movies, ready to evacuate them from Netflix’s disputed shore. The studio’s deal with the streaming service expires next year; in 2019 everything that smells even faintly of mouse will move to a new redoubt. Disney and Pixar movies will supply the pipeline for a new Disney-owned streaming platform, a company rep said during an earnings announcement. (CEO Bob Iger also said he wasn’t sure if the Star Wars and Marvel movies would be on the same new service or somewhere else entirely.)

    Don’t cry for Big Red, though. This isn’t the beginning of the end of Netflix—but it may well be the end of the beginning of what Hollywood calls Streaming Video on Demand. The first sign of SVOD’s Phase Two (to use a Marvelism) was the shift from “showing other people’s stuff” to “making stuff.” Netflix does it with, for example, the upcoming Defenders and Stranger Things. Hulu has The Handmaid’s Tale; Amazon Prime has Transparent.

  3. Tomi Engdahl says:

    Robin Wauters / Tech.eu:
    Facebook acquires German computer vision startup Fayteq, which specializes in adding and removing objects from videos

    Facebook acquires German video modification and motion tracking technology startup fayteq
    http://tech.eu/brief/facebook-acquires-fayteq/

    Fayteq, a small German startup that develops technologies for video manipulation, has shut down all sales of its products and services, according to its website. Deutsche Startups reported this morning that the company, based in central Germany’s Erfurt, has in fact been acquired by Facebook. The social media giant later confirmed the acquisition with the news site Variety.

    The blog mentions that one of the startup’s investors, bm|t, published a news item about the acquisition (without mentioning Facebook as the buyer) but this has seemingly since been removed.

    Fayteq was founded in August 2011 as a spin-off from the Technical University of Ilmenau, and employs about 10 people according to LinkedIn.

    According to Siegfried Vater, a business angel and partner of Fayteq, the startup offered “innovative technologies in the area of off-line and real-time video manipulation, removing the border between reality and fiction.” He also writes that the company provides (or used to provide, at least) “sophisticated solutions for digital product placement, i.e. insertion and replacement of advertisements, seamless object insertion in and removal from video streams as well as logo removal from video sequences”.

    One of its products was an advanced motion tracker called FayIN, reviewed by Videomaker.com here about a year ago.

  4. Tomi Engdahl says:

    This is stupid. This is only going to increase piracy.

    Don’t ruin streaming by turning it into cable
    https://techcrunch.com/2017/08/11/dont-ruin-streaming-by-turning-it-into-cable/?ncid=rss&utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook

    Technology was going to free us from cable. It’s right there in the phrase “cord cutting” — a liberation from the bonds of traditional television. This is supposed to be the era of on-demand entertainment, when we don’t have to subscribe to some bloated cable package in order to get the content we want. But the golden age of television has yet to meet its streaming counterpart. And if the news this week from companies like Disney is any indication, we’re steadily moving in the wrong direction.

    It’s a growing trend toward fragmentation of streaming services that will ultimately work against the best interests of consumers. A world in which every film studio and television station has its own proprietary offering sounds like a bit of a nightmare — worse even than the most convoluted of cable plans. It’s a sort of death by a thousand cuts, each studio and TV station emptying viewers’ bank accounts, $5 or $10 at a time.

    Record labels attempted something similar in the post-Napster land rush, each launching proprietary music services. But most consumers don’t have loyalties to record labels, they have loyalty to bands

    The rush to fragment the video streaming landscape is being driven by studios that can’t wait to shoot themselves in the foot, in hopes of creating a walled content garden. It’s a shame really, because services like Netflix, Hulu and Amazon have proven that users will pay for access to content, as long as it’s part of a simple solution.

    ___
    Yeah, but they didn’t want ten different accounts, with ten different passwords, ten different apps, and ten different monthly payments. People want one place with most of the things they want. Otherwise everything will just get pirated, because it is easier than searching a bunch of different services.

  5. Tomi Engdahl says:

    Over 50,000 digitized pieces of vinyl can now be listened to on Internet Archive
    Get familiar with the Great 78 Project
    https://www.theverge.com/2017/8/12/16126346/50000-digitized-vinyl-internet-archive-great-78-project

  6. Tomi Engdahl says:

    SoundCloud: You can’t stop the music, nobody can stop the music
    Singaporean group that funded Dell EMC buy among bailout funders as new CEO steps in
    https://www.theregister.co.uk/2017/08/13/soundcloud_survives_secures_funding/

    SoundCloud has avoided collapse, announcing that it has secured “the largest financing round” it’s ever secured.

    The audio storage and streaming site laid off forty per cent of its staff in July and batted away rumours that it had just a few weeks’ worth of cash to hand.

    The company’s strife saw it become an exemplar of the risks inherent in unprofitable cloud services that may have users galore but can’t go on forever without either black ink or willing investors. And when such outfits flame out, they tend to do so without leaving users much time in which to retrieve their stuff.

    With SoundCloud VC-funded, but never profitable during nine years of operation, users feared the worst.

  7. Tomi Engdahl says:

    Crowdfunding Campaign Seeks a Libre Recording of a Newly-Completed Bach Work
    https://entertainment.slashdot.org/story/17/08/13/0436203/crowdfunding-campaign-seeks-a-libre-recording-of-a-newly-completed-bach-work

    Robert Douglass’s Kickstarter campaigns have resulted in free fan-funded open source recordings of Bach’s Goldberg Variations and the 48 pieces in his Well-Tempered Clavier, Book 1. “Even Richard Stallman found these recordings, and he promptly wrote an email encouraging us to drop the word ‘Open’ in favor of ‘Free’ or ‘Libre’,” Douglass tells BoingBoing (adding “when RMS writes you telling you to change the name of your music project, you change the name of your music project.”)

    Now Douglass is crowdfunding a libre recording of Bach’s last masterpiece, 20 fugues developed from a single theme called “the Art of the Fugue”.

    Kickstarting a “libre” recording of all of Bach’s fugues
    http://boingboing.net/2017/08/08/kimiko-ishizaka.html

  8. Tomi Engdahl says:

    Instagram photos reveal predictive markers of depression
    https://epjdatascience.springeropen.com/articles/10.1140/epjds/s13688-017-0110-z

    Using Instagram data from 166 individuals, we applied machine learning tools to successfully identify markers of depression. Statistical features were computationally extracted from 43,950 participant Instagram photos, using color analysis, metadata components, and algorithmic face detection. Resulting models outperformed general practitioners’ average unassisted diagnostic success rate for depression.

    These results suggest new avenues for early screening and detection of mental illness.

  9. Tomi Engdahl says:

    Amazon kills its European DVD rental biz, Lovefilm
    https://techcrunch.com/2017/08/14/amazon-kills-its-european-dvd-rental-biz-lovefilm/?ncid=rss&utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook

    Amazon is getting out of the DVD rental business with the closure of Lovefilm, the so-called “Netflix of Europe” that Amazon bought in 2011

    subscription service that lets consumers rent DVDs that are sent out by mail – similar to Netflix’s original business model before it became the streaming powerhouse it is today.

    According to Amazon, however, DVD rentals by mail are no longer in demand.

    also points to Amazon Prime Video as an alternative going forward

  10. Tomi Engdahl says:

    Kerry Flynn / Mashable:
    Snapchat debuts Crowd Surf feature, which stitches together Snaps from select live events by syncing audio, to create multi-perspective viewing experiences

    Snapchat’s newest feature is a game changer for concerts
    http://mashable.com/2017/08/14/snapchat-crowd-surf-concerts-our-stories/#TcNhKP3kRSqk

    Having FOMO about not seeing Lorde at Outside Lands? Well, Snapchat just released a feature that could help alleviate your melodrama.

    Called Crowd Surf, the feature connects snaps based on their audio and stitches them together in an attempt to give a near-seamless look at a live event from multiple perspectives.

    The new feature is already live within select Our Stories curated by Snapchat, with Lorde’s recent performance as the prime example. Users can see different perspectives of the same footage by clicking a new button in the right corner of their mobile screen.

    Because of the audio connection, which Mashable has learned is a proprietary machine learning technology built in-house by Snap’s Research team, Snapchat users can essentially change the camera angle without losing the context of what’s being shown.
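
    Snap’s audio-matching system is proprietary, so the following is only an illustration of the general idea: two clips of the same event can be lined up by finding the time offset where their audio tracks correlate best. A minimal brute-force sketch (not Snap’s actual method):

    ```cpp
    // Minimal sketch: estimate the time offset between two recordings of the same
    // event by brute-force cross-correlation of their mono, equal-sample-rate audio.
    // Illustrative only; real systems use far more robust audio fingerprinting.
    #include <cstdio>
    #include <cstddef>
    #include <vector>

    // Returns the lag (in samples) of b relative to a that maximizes correlation.
    long best_lag(const std::vector<float>& a, const std::vector<float>& b, long max_lag) {
        long best = 0;
        double best_score = -1e300;
        for (long lag = -max_lag; lag <= max_lag; ++lag) {
            double score = 0.0;
            for (std::size_t i = 0; i < a.size(); ++i) {
                long j = static_cast<long>(i) + lag;
                if (j >= 0 && j < static_cast<long>(b.size()))
                    score += a[i] * b[static_cast<std::size_t>(j)];
            }
            if (score > best_score) { best_score = score; best = lag; }
        }
        return best;
    }

    int main() {
        // Toy signals: b is a copy of a delayed by 3 samples.
        std::vector<float> a = {0, 0, 1, 2, 3, 2, 1, 0, 0, 0, 0, 0, 0};
        std::vector<float> b = {0, 0, 0, 0, 0, 1, 2, 3, 2, 1, 0, 0, 0};
        std::printf("estimated lag: %ld samples\n", best_lag(a, b, 6)); // expect 3
        return 0;
    }
    ```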

    Snapchat first showed Crowd Surf off Monday with footage from Lorde’s performance at Outside Lands.

  11. Tomi Engdahl says:

    Joe Flint / Wall Street Journal:
    Shonda Rhimes is leaving ABC for Netflix, says the streaming service will provide her with greater creative freedom than network television — Prolific producer will create new shows for streaming service, which faces growing competition from Amazon, Disney

    Netflix Signs ‘Scandal’ Creator Shonda Rhimes Away From ABC, as Battle for Talent Escalates
    https://www.wsj.com/articles/netflix-signs-scandal-creator-shonda-rhimes-away-from-abc-as-battle-for-talent-escalates-1502683261

    Prolific producer will create new shows for streaming service, which faces growing competition from Amazon, Disney

    Netflix Inc. has recruited prolific television producer Shonda Rhimes, the creator of ABC hits such as “Scandal” and “Grey’s Anatomy,” the clearest sign yet of an arms race for talent between new and old entertainment industry giants.

  12. Tomi Engdahl says:

    Google is turning Street View imagery into pro-level landscape photographs using artificial intelligence
    http://nordic.businessinsider.com/google-street-view-into-pro-level-landscape-with-ai-2017-7/

    A new experiment from Google is turning imagery from the company’s Street View service into impressive digital photographs using nothing but artificial intelligence (AI).

    Google is using machine learning algorithms to train a deep neural network to roam around places such as Canada’s and California’s national parks, look for potentially suitable landscape images, and then work on them with special post-processing techniques.

  13. Tomi Engdahl says:

    Over 50,000 digitized pieces of vinyl can now be listened to on Internet Archive
    Get familiar with the Great 78 Project
    https://www.theverge.com/2017/8/12/16126346/50000-digitized-vinyl-internet-archive-great-78-project

    New York’s ARChive of Contemporary Music (ARC) has been preserving audiovisual materials since 1985, and a little over a year ago, it partnered with the Internet Archive to bring its Great 78 Project to the public. Along with audiovisual digitization vendor George Blood L.P. and additional volunteers, the Great 78 Project to date has put over 50,000 digitized 78rpm discs and cylinder recordings on the Internet Archive, which can be listened to in all their crackling glory.

    An ongoing project, the Internet Archive actually has over 200,000 donated physical recordings, most of which are from the 1950s and earlier.

    The Internet Archive’s focus in digitizing the records lies in genres that are less commonly available and are overlooked. The collection offers expansive selections in early blues, bluegrass, yodeling, and, as Jessica Thompson of Coast Mastering notes, even several Novachord synthesizer recordings from 1941.

    Digitizing these older records is a complicated process

    http://great78.archive.org/

  14. Tomi Engdahl says:

    Using graphics processors for image compression
    http://www.vision-systems.com/articles/print/volume-22/issue-7/features/using-graphics-processors-for-image-compression.html?cmpid=enl_vsd_vsd_newsletter_2017-08-14

    Image compression plays a vitally important part in many imaging systems by reducing the amount of data needed to store and/or transmit image data. While many different methods exist to perform such image compression, perhaps the most well-known and widely adopted of these is the baseline JPEG standard.

    Originally developed by the Joint Photographic Experts Group (JPEG; https://jpeg.org), a working group of both the International Standardization Organization (ISO, Geneva, Switzerland; http://www.iso.org) and the International Electrotechnical Commission (IEC, Geneva, Switzerland; http://www.iec.ch), the baseline JPEG standard is a lossy form of compression based on the discrete cosine transform (DCT).

    since the baseline JPEG standard can achieve 15:1 compression with little perceptible loss in image quality,

    Graphics acceleration

    In the past, JPEG image compression was performed on either host PCs or digital signal processors (DSPs). Today, with the advent of graphics processors such as the TITAN and GEFORCE series of graphics processors from NVIDIA (Santa Clara, CA, USA; http://www.nvidia.com) that contain hundreds of processor cores, image compression can be performed much faster (Figure 1). Using the company’s Compute Unified Device Architecture (CUDA), developers can now use an application programming interface (API) to build image compression applications using C/C++.

    Because CUDA provides a software abstraction of the GPU’s underlying hardware, and because the baseline JPEG compression standard can be somewhat parallelized, the baseline JPEG compression process can be split into threads that act as individual programs, working in the same memory space and executing concurrently.

    Before an image can be compressed, however, it must be transferred to the CPU’s host memory and then to the GPU memory. To capture image data, standards such as GenICam from the European Machine Vision Association (EMVA; Barcelona, Spain; http://www.emva.org) provide a generic interface for GigE Vision, USB3 Vision, CoaXPress, Camera Link HS, Camera Link and 1394 DCAM-based cameras that allow such data acquisition to be easily made. When available, a GenTL Consumer interface can be used to link to the camera manufacturer’s GenTL Producer

    However, using the cudaMemcpy function is not the fastest method of transferring image data to the GPU memory. A higher bandwidth can be achieved between CPU memory and GPU memory by using page-locked (or “pinned”) memory.
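
    As a rough illustration of that point, here is a minimal CUDA C++ sketch that stages a frame in page-locked (pinned) host memory before copying it to the device. The frame size and the use of a stream are my own assumptions, not details from the article:

    ```cpp
    // Minimal sketch: copy a frame to the GPU from pinned (page-locked) host memory.
    // Pinned memory lets the DMA engine read the buffer directly and makes the
    // asynchronous copy legal, so transfers can overlap with kernel execution.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        const size_t frameBytes = 1920UL * 1080UL * 3UL;   // assumed 8-bit RGB frame

        void* hostFrame = nullptr;
        void* devFrame  = nullptr;
        cudaStream_t stream;

        cudaMallocHost(&hostFrame, frameBytes);            // page-locked host buffer
        cudaMalloc(&devFrame, frameBytes);                 // device buffer
        cudaStreamCreate(&stream);

        // ... fill hostFrame from the frame grabber / GenTL producer here ...

        // Asynchronous host-to-device copy from the pinned buffer.
        cudaMemcpyAsync(devFrame, hostFrame, frameBytes,
                        cudaMemcpyHostToDevice, stream);

        // ... launch demosaic / color conversion / JPEG kernels on the same stream ...

        cudaStreamSynchronize(stream);                     // wait for copy (and kernels)

        cudaStreamDestroy(stream);
        cudaFree(devFrame);
        cudaFreeHost(hostFrame);
        std::printf("transferred %zu bytes via pinned memory\n", frameBytes);
        return 0;
    }
    ```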

    There is also a third approach to overcoming the data transfer speed between the host and GPU memory. Allowing a frame grabber and the GPU to share the same system memory eliminates the CPU-memory-to-GPU-memory copy time and can be achieved using NVIDIA’s Direct-for-Video (DVP) technology. However, because NVIDIA DVP is not currently available for general use, BitFlow

    Before image compression can occur, the data from the image sensor in the camera must be correctly formatted. Most of today’s color image sensors use the Bayer filter mosaic, an array of RGB color filters arranged on a grid of photosensors

    Since the Bayer mosaic pattern produces an array of separate R, G and B pixel data at different locations on the image sensor, a Bayer demosaicing (interpolation) algorithm must be used to generate individual red (R), green (G) and blue (B) values at each pixel location. Several methods exist to perform this interpolation including bilinear interpolation (http://bit.ly/VSD-BiLin), linear interpolation with 5×5 kernels (http://bit.ly/VSD-LinIn), adaptive-homogeneity-directed algorithms (http://bit.ly/VSD-DEM) and using directional filtering with an a posteriori decision (http://bit.ly/VSD-POST) each of which has their own quality/computational tradeoffs.
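
    As a concrete (and deliberately simple) example, here is a sketch of plain bilinear interpolation of the green channel for an RGGB Bayer layout; the layout and the restriction to the green plane are my simplifications, and real pipelines use the more sophisticated methods linked above:

    ```cpp
    // Minimal sketch: bilinear interpolation of the missing green values of an
    // RGGB Bayer mosaic. Border pixels and the red/blue planes are omitted for brevity.
    #include <cstdint>
    #include <vector>

    // raw: single-channel Bayer data, row-major, RGGB pattern assumed.
    std::vector<uint16_t> demosaic_green(const std::vector<uint8_t>& raw, int w, int h) {
        std::vector<uint16_t> green(static_cast<size_t>(w) * h, 0);
        auto at = [&](int x, int y) -> int { return raw[static_cast<size_t>(y) * w + x]; };

        for (int y = 1; y < h - 1; ++y) {
            for (int x = 1; x < w - 1; ++x) {
                bool greenSite = ((x + y) % 2) == 1;          // G sites lie on the odd diagonal in RGGB
                int value = greenSite
                    ? at(x, y)                                 // sensor measured green here
                    : (at(x - 1, y) + at(x + 1, y) +           // red/blue site: average the
                       at(x, y - 1) + at(x, y + 1) + 2) / 4;   // four green neighbours
                green[static_cast<size_t>(y) * w + x] = static_cast<uint16_t>(value);
            }
        }
        return green;
    }
    ```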

    Color data can be reduced by transforming the RGB images into the YUV color space (or, more accurately, the Y’CbCr color space, where Y’ represents luminance and the U and V components represent chroma, or color difference, values).

    Generally, the YUV (4:4:4) format, which samples each YUV component equally, is not used in lossy JPEG image compression schemes since the chrominance difference channels (Cr and Cb) can be sampled at half the sample rate of the luminance without any noticeable image degradation. In YUV (4:2:2), U and V are sampled horizontally at half the rate of the luminance while in YUV (4:2:0), Cb and Cr are sub-sampled by a factor of 2 in both the vertical and horizontal directions.
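
    A minimal sketch of that conversion, using the JFIF/BT.601 coefficients commonly used with baseline JPEG (the article does not say which matrix its implementation uses, so treat the numbers as the generic textbook choice):

    ```cpp
    // Minimal sketch: convert one 8-bit RGB pixel to Y'CbCr with the JFIF (BT.601)
    // matrix typically used for JPEG. 4:2:2 subsampling then simply keeps the Cb/Cr
    // values of every second pixel horizontally.
    #include <cstdio>

    struct YCbCr { double y, cb, cr; };

    YCbCr rgb_to_ycbcr(double r, double g, double b) {
        YCbCr p;
        p.y  =  0.299    * r + 0.587    * g + 0.114    * b;          // luma
        p.cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0;  // blue-difference chroma
        p.cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0;  // red-difference chroma
        return p;
    }

    int main() {
        YCbCr p = rgb_to_ycbcr(200, 120, 40);   // an arbitrary orange-ish pixel
        std::printf("Y'=%.1f Cb=%.1f Cr=%.1f\n", p.y, p.cb, p.cr);
        return 0;
    }
    ```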

    Perhaps the most commonly used mode for JPEG image compression, the YUV (4:2:2) mode

    Bayer interpolation, color balancing and color space conversion can all be performed on the GPU. To perform these tasks, the image in the GPU is split into a number of blocks during which they are unpacked from 8-bit to 32-bit integer format (the native CUDA data type). For each block, pixels are organized in 32 banks of 8 pixels as this fits the shared memory architecture in the GPU. This allows 4 pixels to be fetched simultaneously by the processor cores of the GPU (stream processors) with each thread reading 4 pixels and processing one pixel at a time.

    After images are transformed from RGB to YUV color space, the JPEG algorithm is applied to each individual YUV plane (Figure 4). At the heart of the JPEG algorithm is the discrete cosine transform (DCT). This is used to transform individual 8 x 8 blocks of pixels in the image from the spatial to frequency domain.

    Zig-zag ordering is applied to each of the Y, U and V components separately, and two different quantization tables are used: one for the Y component and another for the U and V components.
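
    To make the core of the algorithm concrete, here is a naive, unoptimized CPU-side sketch of the forward DCT on one 8×8 block followed by quantization. A real GPU implementation would use a separable transform over many blocks in parallel, and the flat quantizer below is a toy placeholder rather than a real JPEG table:

    ```cpp
    // Naive sketch: forward 8x8 DCT-II (as used by baseline JPEG) and quantization
    // of a single block. Reference code only; not optimized and not a real encoder.
    #include <cmath>
    #include <cstdio>

    const int    N  = 8;
    const double PI = 3.14159265358979323846;

    void dct8x8(const double in[N][N], double out[N][N]) {
        for (int u = 0; u < N; ++u) {
            for (int v = 0; v < N; ++v) {
                double cu = (u == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
                double cv = (v == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
                double sum = 0.0;
                for (int x = 0; x < N; ++x)
                    for (int y = 0; y < N; ++y)
                        sum += in[x][y] *
                               std::cos((2 * x + 1) * u * PI / 16.0) *
                               std::cos((2 * y + 1) * v * PI / 16.0);
                out[u][v] = 0.25 * cu * cv * sum;
            }
        }
    }

    int main() {
        double block[N][N], coeff[N][N];
        for (int x = 0; x < N; ++x)
            for (int y = 0; y < N; ++y)
                block[x][y] = ((x + y) % 2 ? 80.0 : 90.0) - 128.0;  // toy, level-shifted data

        dct8x8(block, coeff);

        const double quant = 16.0;                  // toy flat quantizer, not a JPEG table
        int quantized[N][N];
        for (int u = 0; u < N; ++u)
            for (int v = 0; v < N; ++v)
                quantized[u][v] = static_cast<int>(std::round(coeff[u][v] / quant));

        std::printf("quantized DC coefficient: %d\n", quantized[0][0]);
        return 0;
    }
    ```

    Zig-zag ordering then reads the quantized coefficients along anti-diagonals from the DC term outward, so the long runs of zeros produced by quantization are grouped together for the run-length and entropy coding stages.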

  15. Tomi Engdahl says:

    Cherlynn Low / Engadget:
    Qualcomm unveils new Spectra depth-sensing camera tech, expected to be part of the next flagship Snapdragon Mobile Platform — Dual cameras are so passé. Qualcomm is getting ready to define the next generation of cameras for the Android ecosystem.

    Qualcomm’s new depth-sensing camera is surprisingly effective
    The IR-based system could be the next dual camera.
    https://www.engadget.com/2017/08/15/qualcomm-spectra-premium-computer-vision-depth-sensing-module/

    Dual cameras are so passé. Qualcomm is getting ready to define the next generation of cameras for the Android ecosystem. It’s adding three new camera modules to its Spectra Module Program, which lets device manufacturers select readymade parts for their products. The additions are an iris-authentication front-facing option, an Entry-Level Computer Vision setup and a Premium Computer Vision kit. The latter two carry out passive and active depth sensing, respectively, using Qualcomm’s newly revamped image-signal-processing (ISP) architecture.

    Of the three new modules, the most intriguing is the premium computer vision kit. That option is capable of active depth sensing, using an infrared illuminator, IR camera and a 16-megapixel RGB camera (or 20-MP, depending on the configuration). The illuminator fires a light that creates a dot pattern (using a filter), and the IR camera searches for and reads the pattern. By calculating how the dots warp over a subject and the distance between points, the system can tell how far away something is. And since this technology uses infrared light, it can also work in the dark.
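
    The article does not give the math, but structured-light systems of this kind generally turn the measured dot displacement into distance by triangulation. A rough sketch, where the baseline and focal length are made-up illustrative values rather than Qualcomm’s module parameters:

    ```cpp
    // Rough sketch of structured-light depth from triangulation: a projected IR dot
    // shifts sideways in the camera image (disparity) by an amount inversely
    // proportional to the distance of the surface it lands on.
    #include <cstdio>

    int main() {
        const double baseline_m = 0.05;     // assumed emitter-to-camera spacing (5 cm)
        const double focal_px   = 1400.0;   // assumed camera focal length in pixels

        const double disparities_px[] = {70.0, 35.0, 14.0};
        for (double d : disparities_px) {
            double depth_m = baseline_m * focal_px / d;   // depth = baseline * focal / disparity
            std::printf("disparity %5.1f px -> depth %.2f m\n", d, depth_m);
        }
        return 0;
    }
    ```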

    We’ll have to wait and see it in action for ourselves before knowing if it’ll be effective in the real world, but so far the technology is impressive.

    The module can get very detailed, since it uses more than 10,000 points of depth and can discern up to 0.125mm between the dots. This precision is important. “Depth sensing is going to be mission critical going forward,” Qualcomm’s product marketing lead for camera and computer vision, Philip-James Jacobowitz, told Engadget.

    There are plenty of useful applications for depth sensing, one of the most widespread being creating artificial depth of field in images. It can also help in facial detection, recognition and authentication; 3D object reconstruction; and localization and mapping, according to Qualcomm.

  16. Tomi Engdahl says:

    Josh Constine / TechCrunch:
    Facebook improves Camera with ability to go Live, shoot two-second GIFs, and make full screen text posts; all can be shared to Story, Messenger, and News Feed

    Facebook boosts snubbed Stories Camera with Live, GIF & text sharing
    https://techcrunch.com/2017/08/15/facebook-camera-gifs-live/

    Despite the tepid reception for Facebook Stories, the social network is doubling down on its full-screen Camera feature. Today Facebook added the ability to go Live, shoot two-second GIFs and share full-screen text posts on colored background from Facebook Camera, which lets you share to Facebook Stories, Direct messaging and the traditional News Feed.

  17. Tomi Engdahl says:

    RS Components – Handheld test instrument for HDMI enables quick on-site testing and troubleshooting (LeCroy 780)
    http://www.electropages.com/2017/08/rs-components-handheld-test-instrument-hdmi-enables-quick-on-site-testing-troubleshooting/?utm_campaign=2017-08-16-Electropages&utm_source=newsletter&utm_medium=email&utm_term=article&utm_content=RS+Components+-+Handheld+test+instrument+for+HDMI+enables+quick+on-site+

    The Teledyne LeCroy 780 handheld test instrument for HDMI is a battery-powered, portable video and audio generator and HDMI analyser that enables you to conduct quick, on-site verification testing and troubleshooting of your HDMI system and analogue video displays. It is available now from RS Components.

    The instrument is equipped with both a reference HDMI source and a reference HDMI sink interface allowing you to test audio, video and HDMI protocols—HDCP, EDID, CEC and info-frames—of any type of HDMI device: sources, repeaters and sinks. The device supports HDMI testing at pixel rates up to 165MHz and TMDS rates up to 225MHz for deep colour on the output and pixel rates up to 150MHz on the HDMI input.

  18. Tomi Engdahl says:

    Twitch gamers live-stream their vital signs to keep fans hooked
    https://www.newscientist.com/article/2144051-twitch-gamers-live-stream-their-vital-signs-to-keep-fans-hooked/

    Never let ’em see you sweat. It might appear to be sound advice, but maybe not for people who stream their gaming online. They seem to do better with audiences if they broadcast details like their heart rate and sweat levels.

    Lots of us now spend a big chunk of our time watching others play games. Roughly 10 million people tune in every day to watch the more than 2 million people who stream their games on platforms like the Amazon-owned Twitch. Many of these live‑streamers hope to make it to professional e-sports contests, where the big names can take home millions of dollars.

    But winning and keeping an audience is hard. There are lots of games to watch, and Twitch spectators are a fickle bunch.

    Robinson and her colleagues created a prototype tool called All the Feels. The software pulls physiological data from a Fitbit-like wristband and displays the readings in a bar graph next to the gaming window.

    It also uses face recognition software to turn the player’s emotional state into one more feature of the game. When it determines that their joy, surprise, anger, disgust, sadness or fear has hit a certain threshold, the corresponding emoji flashes up on screen.

    “Everyone likes the voyeurism of seeing someone’s insides,” says Regan Mandryk at the University of Saskatchewan, Canada.

  19. Tomi Engdahl says:

    Tatiana Siegel / Hollywood Reporter:
    A new episode of Game of Thrones was accidentally aired on HBO Nordic and HBO Espana, with the leak then spreading to file-sharing sites — Though the episode was posted for a “brief” amount of time, the leak quickly spread on the Internet early Wednesday. — It’s a case of deja vu for HBO.

    New ‘Game of Thrones’ Episode Leaks
    http://www.hollywoodreporter.com/news/game-thrones-episode-leaks-is-pulled-down-1030098

    It’s a case of deja vu for HBO.

    For the second time in two weeks, the network has seen an upcoming episode of Game of Thrones leak ahead of its scheduled premiere. An HBO Europe spokesperson acknowledged the leak, which was described as “brief.”

    “We have learned that the upcoming episode of Game of Thrones was accidentally posted for a brief time on the HBO Nordic and HBO Espana platforms,” the spokesperson said. “The error appears to have originated with a third-party vendor and the episode was removed as soon as it was recognized. This is not connected to the recent cyber incident at HBO in the U.S.”

  20. Tomi Engdahl says:

    Jason Guerrasio / Business Insider:
    A day after MoviePass announces its new $10 a month plan, AMC Theaters says it is consulting with attorneys as to whether it can block the service — Following the surprising news on Tuesday that MoviePass would begin a $9.95-a-month subscription service in which members can see one movie …

    The world’s largest movie-theater chain is trying to block MoviePass’ new $10-a-month plan
    http://nordic.businessinsider.com/amc-theaters-trying-to-block-moviepass-2017-8?op=1&r=US&IR=T

    Following the surprising news on Tuesday that MoviePass would begin a $9.95-a-month subscription service in which members can see one movie a day in US theaters, AMC Theaters has announced it is looking into whether it can block the service.

    The largest theater chain in the world issued a statement late Tuesday saying it was consulting with its attorneys on whether it could stop accepting MoviePass.

    “AMC believes that holding out to consumers that first-run movies can be watched in theaters at great quantities for a monthly price of $9.95 isn’t doing moviegoers any favors,” the statement said. “In AMC’s view, that price level is unsustainable and only sets up consumers for ultimate disappointment down the road if or when the product can no longer be fulfilled.”

    That is the biggest question many have in the exhibition world: How will MoviePass be financially sustainable? BoxOfficeMojo says movie tickets in the US cost $8.89 on average. At that price, the company will lose money on a subscriber who sees just two movies a month.

    One source told Business Insider it’s assumed that the company would be relying on advertising revenue, but MoviePass would have to do huge levels of traffic to really make any money. The service had about 20,000 subscribers in December and hopes to add 100,000 more with the new plan.

  21. Tomi Engdahl says:

    Tripp Mickle / Wall Street Journal:
    Sources: Apple has set a budget of ~$1B to procure and produce original content over the next year, could acquire and produce as many as 10 TV shows — Company immediately becomes a considerable competitor in crowded market for original shows — Apple Inc. has set a budget …

    Apple Readies $1 Billion War Chest for Hollywood Programming
    Company immediately becomes a considerable competitor in crowded market for original shows
    https://www.wsj.com/articles/apple-readies-1-billion-war-chest-for-hollywood-programming-1502874004

  22. Tomi Engdahl says:

    Multiple Monitors With Multiple Pis
    http://hackaday.com/2017/08/17/multiple-monitors-with-multiple-pis/

    One of the most popular uses for the Raspberry Pi in a commercial setting is video walls, digital signage, and media players. Chances are, you’ve probably seen a display or other glowing rectangle displaying an advertisement or tweets, powered by a Raspberry Pi. [Florian] has been working on a project called info-beamer for just this use case, and now he has something spectacular. He can display a video on multiple monitors using multiple Pis, and the configuration is as simple as taking a picture with your phone.

    [Florian] created the info-beamer package for the Pi for video playback (including multiple videos at the same time), displaying public transit information, a twitter wall, or a conference information system. A while back, [Florian] was showing off his work on reddit when he got a suggestion for auto-configuration of multiple screens. A few days later, everything worked.

    Automatic video wall configuration with info-beamer hosted
    https://www.youtube.com/watch?v=GI00HTJhSMU

    This is an exciting new feature I’ve made available for the info-beamer hosted digital signage system: You can create a video wall consisting of freely arranged screens in seconds. The screens don’t even have to be planar. Just rotate and place them as you like. Configuration is as simple as creating a picture of your screens once you’ve physically set them up. The rest is completely automatic: You don’t have to configure screen resolutions, orientation, size or anything else really. You just have to take a picture. It can’t get easier.

  23. Tomi Engdahl says:

    Think 12G-SDI Over Coax Isn’t Possible? Think Again!
    http://www.belden.com/blog/broadcastav/think-12g-sdi-over-coax-isn-t-possible-think-again.cfm

    4K is a term commonly used to describe video display resolution that is about 4000 pixels. That is roughly eight times the resolution of high definition (and four times the resolution of 1080p). The broadcast version of 4K, called UHD (ultra-high definition), has a resolution of 3840 pixels by 2160 lines. DCI (Digital Cinema Initiatives), Hollywood’s 4K version, has a resolution of 4096 pixels by 2160 lines. Both have a clock rate of close to 12 GHz, hence the 12G-SDI.
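
    The “close to 12 GHz” figure falls out of simple arithmetic on the transported raster. A back-of-the-envelope sketch, assuming 2160p60 with 10-bit 4:2:2 sampling and a nominal total raster of 4400 × 2250 samples including blanking (my own round numbers, not taken from the article):

    ```cpp
    // Back-of-the-envelope: where the "12G" in 12G-SDI comes from.
    // Assumptions (mine): 2160p60, 10-bit 4:2:2, total raster 4400 x 2250 incl. blanking.
    #include <cstdio>

    int main() {
        const double total_width     = 4400.0;  // samples per line, incl. blanking
        const double total_height    = 2250.0;  // lines per frame, incl. blanking
        const double frame_rate      = 60.0;
        const double bits_per_sample = 20.0;    // 10-bit luma + 10-bit alternating chroma

        double bps = total_width * total_height * frame_rate * bits_per_sample;
        std::printf("approx. serial rate: %.2f Gbps\n", bps / 1e9);   // ~11.88 Gbps
        return 0;
    }
    ```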

    Starting Down the Path Toward 12G-SDI

    Several years ago, we created an RG-6 video cable (1694A) that carried HD over 370 feet (113m). But when 3G-SDI hit the scene, that video signal – also called 1080p/60 or 1080p/50 – was double the bandwidth of high definition (HD), which reduced distance capabilities of 1694A down to 78m.

    Although it has changed over the years, the magic distance for video cables today is 100m (328 feet). I’ve always wondered where that number came from. Isn’t that the distance limitation of data cables like Category 5e, 6 or 6A? How does that apply to video cable?

    But then it occurred to me: Most broadcast and video installations use data cables. In fact, some professionals say that these applications will eventually consist solely of data cables. Right now, many installations have a hybrid design with both data and coax cables, so maybe it makes sense that coax cable follow the same rule.

    That’s when we decided to create the first cable designed specifically to carry signals up to 100m for 3G-SDI, 1794A – and we did so about five years ago. This was a slightly larger cable than 1694A. Today, however, HD pretty much rules the video world, so the souped-up Type 7 cable we created didn’t end up being a hot seller.

    But would 12G-SDI signal transmission over coax cable ever be possible? Most people said no.

    If you know the clock frequency, or data rate, of the application, you can determine how far along the cable you can safely go and still maintain a picture. In any digital data system, the actual data cannot go past a frequency of half the clock. This is called the Nyquist limit. Originally, the formula for SD-SDI was -30 dB (attenuation) at ½ the clock frequency. Using this formula, digital signals could easily be sent hundreds, even thousands, of feet before reaching the -30dB distance. Then, with the change to HD, SMPTE standard ST 292 was written with a more conservative formula of -20 dB at ½ the clock frequency. That means that you can’t go as far. This safe distance was very conservative.

    Not only was the cable improving, but so were the connectors and chips sending and receiving the signals. When it looked like 4K would eventually become standard, Belden lobbied the SMPTE standards group to change the distance formula for these applications. We proposed a new formula and got our wish: -40 dB at ½ the clock frequency. This means that, for a 12 GHz cable, attenuation must be no more than -40 dB at 6 GHz (½ the clock frequency of 12 GHz). For the 6 GHz version of 4K, the formula would be -40 dB at 3 GHz.
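
    Put as arithmetic, a cable’s reach under that rule is just the 40 dB budget divided by its attenuation at half the clock frequency. A small sketch with a placeholder attenuation figure (check the actual datasheet for real numbers):

    ```cpp
    // Sketch: estimate 12G-SDI reach from the "-40 dB at half the clock frequency" rule.
    // The attenuation figure is a hypothetical placeholder, not a Belden 4794R spec.
    #include <cstdio>

    int main() {
        const double allowed_loss_db   = 40.0;   // budget at 6 GHz (half of the ~12 GHz clock)
        const double atten_db_per_100m = 33.0;   // hypothetical cable loss at 6 GHz per 100 m

        double reach_m = allowed_loss_db / atten_db_per_100m * 100.0;
        std::printf("estimated reach: %.0f m\n", reach_m);   // ~121 m with these numbers
        return 0;
    }
    ```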

    There are quite a few new things to notice on this chart. The first is the column for SMPTE ST-425, which covers quad-link 12 GHz for UHDTV1. This was the original 12 GHz delivery system, which split 12 Gbps into four cables. In that case, each cable carries 3 Gbps/3 GHz cable performance, which already exists. But, with the new formula (-40 dB at ½ the clock frequency), they go farther than they did in the previous SMPTE ST-424 standard, even though they are the same cables you’ve always been using.

    If the cables under the -30 dB and -20 dB formulas can go two or three times the distance shown, where’s the cliff for the 12G-SDI cable – especially in the SMPTE ST 2082-1 column?

    A New Cable for 12G-SDI Signal Transmission

    We recently released a new cable for 4K/UHDTV (12G-SDI): Belden 4794R. This coax cable is the first designed specifically for single-link 4K UHD video in the broadcast market for 12G-SDI signal transmission.

  24. Tomi Engdahl says:

    4K UHD wireless camera transmitter supports dual SFP modules for quad 3/6/12G SDI/HDMI/Fiber Optic/SMPTE 2022-6 HD-SDI over IP interfaces, plus Wi-Fi, Bluetooth
    http://www.cablinginstall.com/articles/2017/07/vislink-4k-uhd.html?cmpid=enl_cim_cim_data_center_newsletter_2017-08-17

    xG Technology, Inc. (NASDAQ: XGTI), a provider of wireless video solutions for the broadcast, law enforcement and defense markets, and private mobile broadband networks for critical communications, announces that its Vislink business will showcase its new HCAM, an HEVC 4K UHD camera transmitter, to the European market at IBC 2017 (Sep. 15-19) in Amsterdam, the world’s largest media, entertainment and technology show. HCAM will be among the exciting products for broadcast, sports and entertainment being jointly presented by Vislink and IMT at Stand 1.A69 at the conference.

    HCAM represents the next generation of HEVC 4K UHD on-camera wireless transmitters for broadcast, ENG and prosumer cameras due to its highly flexible and configurable mounting options and intuitive video interfaces. HCAM permits 4K UHD wireless video with a 70ms latency via Vislink’s world leading RF modulation.

    “We are excited to showcase HCAM to the European market at IBC 2017,”

  25. Tomi Engdahl says:

    AI creates fictional scenes out of real-life photos
    It’s paint by numbers for creating dreamy worlds.
    https://www.engadget.com/2017/08/17/ai-creates-fictional-scenes-out-of-real-life-photos/

    AI’s not quite ready to build photorealistic worlds on its own. But it’s getting pretty close.

    Researcher Qifeng Chen of Stanford and Intel fed his AI system 5,000 photos from German streets. Then, with some human help, it can build slightly blurry made-up scenes. The image at the top of this article is an example of the network’s output.

    To create an image a human needs to tell the AI system what goes where. Put a car here, put a building there, place a tree right there. It’s paint by numbers and the system generates a wholly unique scene based on that input.

    It’s not going to replace the high-end special effects houses that spend months building a world. But, it could be used to create video game and VR worlds where not everything needs to look perfect in the near future.

  26. Tomi Engdahl says:

    Anousha Sakoui / Bloomberg:
    Sources: movie studios, including Warner Bros. and Universal Pictures, in talks with Apple, Comcast to offer movie rentals mere weeks after theatrical releases

    Hollywood, Apple Said to Mull Rental Plan, Defying Theaters
    https://www.bloomberg.com/news/articles/2017-08-18/hollywood-apple-are-said-to-mull-rental-plan-defying-theaters

    Movie studios are considering whether to ignore the objections of cinema chains and forge ahead with a plan to offer digital rentals of films mere weeks after they appear in theaters, according to people familiar with the matter.

    Some of the biggest proponents, including Warner Bros. and Universal Pictures, are pressing on in talks with Apple Inc. and Comcast Corp. on ways to push ahead with the project even without theater chains, the people said. After months of negotiations, the two sides have been unable to arrive at a mutually beneficial way to create a $30 to $50 premium movie-download product.

    Studios are said to be focused on the project despite exhibitor pushback
    Theater chains are said to seek 10 years of revenue split

  27. Tomi Engdahl says:

    Lyor Cohen / YouTube Blog:
    Lyor Cohen, YouTube’s Global Head of Music, on promoting new artists, copyright safe harbors, and how ads and subscriptions can thrive together — Earlier this year, I was asked by Google (because they know I am pre “Sucker M.C.”) to work on a Doodle celebrating the 44th anniversary of the music that changed my life.

    Five observations from my time at YouTube
    http://youtube.googleblog.com/2017/08/five-observations-from-my-time-at.html

  28. Tomi Engdahl says:

    If Music Be the Food of Love, Have Your Ears Checked
    https://www.eeweb.com/blog/max_maxfield/if-music-be-the-food-of-love-have-your-ears-checked

    Most people’s hearing degrades over time, as you’ll discover if you take this online frequency hearing test, but PYOUR Audio headphones may offer the solution.

    In fact, I just heard from the folks at Absolute Audio Labs who say that studies show one out of every three people has some type of hearing damage. In order to address this, as seen in this video, they’ve created PYOUR Audio (“Pure Audio”), which they describe as “Revolutionary headphones for people who suffer from hearing loss.”

    Combined with an innovative app and advanced software, PYOUR Audio adapts the sound of your music to how well you can actually hear. After you’ve tested your hearing via a simple but super-accurate hearing test, the app calculates the best sound settings for you and programs them into the headphones (you can also dial in settings to further protect your hearing and customize the frequency response to your own personal taste).

    The reason I’m waffling on about this here is that most people’s hearing degrades over time. For example, I just took this online frequency hearing test, and now I’m sad because the highest frequency I can hear is 10 kHz.
    http://www.noiseaddicts.com/2009/03/can-you-hear-this-hearing-test/

  29. Tomi Engdahl says:

    Making Visible Watermarks More Effective
    https://research.googleblog.com/2017/08/making-visible-watermarks-more-effective.html

    Whether you are a photographer, a marketing manager, or a regular Internet user, chances are you have encountered visible watermarks many times. Visible watermarks are those logos and patterns that are often overlaid on digital images provided by stock photography websites, marking the image owners while allowing viewers to perceive the underlying content so that they could license the images that fit their needs. It is the most common mechanism for protecting the copyrights of hundreds of millions of photographs and stock images that are offered online daily.

    It’s standard practice to use watermarks on the assumption that they prevent consumers from accessing the clean images, ensuring there will be no unauthorized or unlicensed use. However, in “On The Effectiveness Of Visible Watermarks” recently presented at the 2017 Computer Vision and Pattern Recognition Conference (CVPR 2017), we show that a computer algorithm can get past this protection and remove watermarks automatically, giving users unobstructed access to the clean images the watermarks are intended to protect.

    The Vulnerability of Visible Watermarks
    Visible watermarks are often designed to contain complex structures such as thin lines and shadows in order to make them harder to remove. Indeed, given a single image, for a computer to detect automatically which visual structures belong to the watermark and which structures belong to the underlying image is extremely difficult. Manually, the task of removing a watermark from an image is tedious, and even with state-of-the-art editing tools it may take a Photoshop expert several minutes to remove a watermark from one image.

    However, a fact that has been overlooked so far is that watermarks are typically added in a consistent manner to many images. We show that this consistency can be used to invert the watermarking process

    The first step of this process is identifying which image structures are repeating in the collection. If a similar watermark is embedded in many images, the watermark becomes the signal in the collection and the images become the noise, and simple image operations can be used to pull out a rough estimation of the watermark pattern.

    To actually recover the image underneath the watermark, we need to know the watermark’s decomposition into its image and alpha matte components.
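
    The paper models a watermarked image as a per-pixel blend of the watermark and the original photo, J = α·W + (1−α)·I, so once the watermark W and its alpha matte α have been estimated from the collection, each image can be inverted pixel by pixel. A minimal sketch of that inversion step (my paraphrase of the model; the actual method also refines W, α and the images jointly rather than applying one closed-form division):

    ```cpp
    // Sketch: invert the visible-watermark matting model J = a*W + (1-a)*I per pixel,
    // given an estimated watermark W and alpha matte a (all planes in [0,1]).
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    std::vector<float> recover_image(const std::vector<float>& J,   // watermarked image
                                     const std::vector<float>& W,   // estimated watermark
                                     const std::vector<float>& a) { // estimated alpha matte
        std::vector<float> I(J.size());
        for (std::size_t p = 0; p < J.size(); ++p) {
            float denom = std::max(1.0f - a[p], 1e-4f);              // guard against a ~ 1
            float v = (J[p] - a[p] * W[p]) / denom;
            I[p] = std::min(std::max(v, 0.0f), 1.0f);                // clamp to valid range
        }
        return I;
    }
    ```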

    The vulnerability of current watermarking techniques lies in the consistency in watermarks across image collections. Therefore, to counter it, we need to introduce inconsistencies when embedding the watermark in each image.

    In a nutshell, the reason this works is that removing the randomly-warped watermark from any single image additionally requires estimating the warp field that was applied to the watermark for that image — a task that is inherently more difficult.

    http://openaccess.thecvf.com/content_cvpr_2017/papers/Dekel_On_the_Effectiveness_CVPR_2017_paper.pdf

  30. Tomi Engdahl says:

    The Windows App Store is Full of Pirate Streaming Apps
    https://yro.slashdot.org/story/17/08/21/1343244/the-windows-app-store-is-full-of-pirate-streaming-apps

    When we were browsing through the “top free” apps in the Windows Store, our attention was drawn to several applications that promoted “free movies” including various Hollywood blockbusters such as “Wonder Woman,” “Spider-Man: Homecoming,” and “The Mummy.” Initially, we assumed that a pirate app may have slipped past Microsoft’s screening process. However, the ‘problem’ doesn’t appear to be isolated.

    The Windows App Store is Full of Pirate Streaming Apps
    https://torrentfreak.com/the-windows-app-store-is-full-of-pirate-streaming-apps-170820/

    In recent years streaming piracy has become a popular pastime for millions of people. A lot of this takes place through ‘rogue’ websites or dedicated pirate devices, which are often scolded by the movie industry. The problem is not limited to unauthorized platforms alone though, as the “trusted” Windows App Store is full of widely used pirate apps as well.

  31. Tomi Engdahl says:

    Verizon To Start Throttling All Smartphone Videos To 480p or 720p
    https://news.slashdot.org/story/17/08/22/144226/verizon-to-start-throttling-all-smartphone-videos-to-480p-or-720p

    Verizon Wireless will start throttling video streams to resolutions as low as 480p on smartphones this week. Most data plans will get 720p video on smartphones, but customers won’t have any option to completely un-throttle video.

    1080p will be the highest resolution provided on tablets, effectively ruling out 4K video on Verizon’s mobile network. Anything identified as a video will not be given more than 10Mbps worth of bandwidth. This limit will affect mobile hotspot usage as well. Verizon started selling unlimited smartphone data plans in February of this year, and the carrier said at the time that it would deliver video to customers at the same resolution used by streaming video companies.

    Verizon to start throttling all smartphone videos to 480p or 720p
    No 4K video allowed—new bandwidth limits apply to mobile hotspots, too.
    https://arstechnica.com/information-technology/2017/08/verizon-to-start-throttling-all-smartphone-videos-to-480p-or-720p/

  32. Tomi Engdahl says:

    Disney Will Price Streaming Service At $5 Per Month, Analyst Says
    https://news.slashdot.org/story/17/08/21/2026205/disney-will-price-streaming-service-at-5-per-month-analyst-says

    Earlier this month, Disney announced it would end its distribution deal with Netflix and launch its own streaming service in 2019. Now, according to MoffettNathanson analyst Michael Nathanson, we have learned that Disney’s new streaming service will be priced around $5 per month in order to drive wider adoption.

    Disney will price streaming service at $5 per month, analyst says
    http://www.fiercecable.com/online-video/disney-will-price-streaming-service-at-5-analyst-says

    Disney’s upcoming branded streaming service will likely be priced around $5 per month in order to drive wider adoption, according to MoffettNathanson analyst Michael Nathanson.

    Nathanson said that the new Disney streaming service and the upcoming ESPN streaming service need a clear distinction. The ESPN service will likely test different prices as it prepares ESPN to be ready to go fully over-the-top, according to the report, but the Disney service is about building asset value instead of taking licensing money from SVOD deals.

    At $5 per month in ARPU, Nathanson sees revenues from the Disney streaming service ranging from $34 million to $38 million in the first year and more than $230 million by year three.

  33. Tomi Engdahl says:

    Lizzie Plaugic / The Verge:
    Sony Music signs remixing deal with rights management firm Dubset; source says Dubset working on deals with Warner Music and Universal Music Group

    Sony Music’s new deal with Dubset will let it monetize unofficial remixes
    Dubset wants to legitimize distribution for DJs and producers
    https://www.theverge.com/2017/8/22/16183680/sony-music-dubset-remixes-rights-clearance

    Rights clearance startup Dubset has just inked a deal with Sony Music to allow monetization of its songs in DJ sets and of unofficial tracks that sample its artists. This means that artists could have their bootleg remixes and DJ sets legally cleared and distributed on Apple Music and Spotify, with every rights holder receiving a portion of royalties. This is the first major label Dubset has secured, and it’s a big step toward tackling gray area material that has proved problematic for platforms like SoundCloud.

    The deal is a tangible move forward in the music industry’s ongoing battle with sampling and bootlegs. Dubset claims more than 35,000 labels and publishers are already using its services, but the Sony agreement will massively expand coverage for DJs, who often use recognizable radio songs for remixes and DJ sets. Sony can set rules to restrict some of its content, but Dubset CEO Stephen White says “the majority” of the label’s catalog will be available.

    Dubset has platforms called MixSCAN and MixBANK, which work in tandem to identify songs and distribute royalties. After a track is uploaded on MixBANK, Dubset’s proprietary tech scans it for samples, identifies the rights owners, and clears the track for distribution on Apple Music or Spotify. Dubset also handles the royalties for each track uploaded, dividing the money between rights holders and DJs. “Sony was open to the deal because it not only opens up another revenue stream for them, it also revitalizes their back catalog,” says Alex Dias, a content manager for Dubset.

    Dubset recently succeeded in clearing a full DJ set for Apple Music, White tells The Verge. A landmark achievement, this marks the first time a sample-heavy mix has appeared on the streaming service in its entirety.

    Dubset’s deal is also a blow to SoundCloud. Once a popular platform for DJ mixes and bootlegs, SoundCloud has in recent years struggled to appeal to creators while also trying to appease labels gunning for copyright takedowns.

    “For artists, [this deal] means that the music being used in remixes and mixes is now being properly controlled, properly monetized, and they’re getting fairly compensated for the use of their works,” White says.

    A source close to Dubset tells The Verge that the company is also working on similar deals with Universal Music Group and Warner Music.

  34. Tomi Engdahl says:

    Matt Weinberger / Business Insider:
    LinkedIn starts rolling out video upload support for its Android and iOS apps

    This is why LinkedIn is betting big on letting people share videos
    http://www.businessinsider.com/linkedin-opens-video-sharing-2017-8?op=1&r=US&IR=T&IR=T

    If you’ve been on LinkedIn recently, you may have noticed that some users are making and sharing videos, basically turning the social network into a more professional version of YouTube or Facebook Live.

    Following this initial testing period, video will soon be available to all users. On iOS and Android, the LinkedIn app is getting a “video” button that will let you record a new video or upload one you’ve already taken. The new feature will be available to many users on Tuesday, and the company will roll it out globally over the next few weeks.

    There are any number of places to post a video online. A key reason to post it on LinkedIn, though, is to share it with your professional audience, said Peter Roybal, a senior product manager at the company. Just like any other post on LinkedIn, text or otherwise, you would share videos because you want them to be noticed by your professional network.

    “Of course [users are] going to share content that is most relevant to the people they’re connected to,” Roybal said.

    LinkedIn users have been eager to have access to the video feature, he said.

  35. Tomi Engdahl says:

    Natasha Lomas / TechCrunch:
    Prisma Labs, whose app turns photos into “artworks” using AI, shifts focus to selling its tools to other firms, says app has 5M-10M MAUs and won’t be retired

    Prisma shifts focus to b2b with an API for AI-powered mobile effects
    https://techcrunch.com/2017/08/19/prisma-shifts-focus-to-b2b-with-an-api-for-ai-powered-mobile-effects/

    The startup behind the Prisma style transfer app is shifting focus onto the b2b space, building tools for developers that draw on its expertise using neural networks and deep learning technology to power visual effects on mobile devices.

    It’s launched a new website, Prismalabs.ai, detailing this new offering.

    Initially, say Prisma’s co-founders, they’ll be offering an SDK for developers wanting to add effects like style transfer and selfie lenses to their own apps — likely launching an API mid next week.

    https://prismalabs.ai/

    Reply
  36. Tomi Engdahl says:

    Personalized Ad Insertion: The Key to OTT Success
    http://www.broadbandtechreport.com/articles/2017/08/personalized-ad-insertion-the-key-to-ott-success.html?cmpid=enl_btr_btr_video_technology_2017-08-21

    As consumer demand has rendered over-the-top (OTT) video provision a necessity, providers have been forced to look at how to monetize it. The market is highly competitive, the audience is fickle, and of course, monetizing digital content is almost as complicated as delivering it in the first place. In addition to subscription, OTT providers have also become reliant on advertising for revenue. But consumers are not particularly willing to sit through irrelevant ads, and this poses a problem for a business model reliant on advertising to generate revenue. In fact, a report from IPG’s Media Lab states that 65% of people skip online video advertising. The trick is to ensure a worthwhile experience for the viewer, as well as added value for the advertiser.

    Metadata is Critical

    Once upon a time, OTT was pretty much solely based on on-demand programming. Ads were added at the beginning – known as “pre-roll” – or in one or more “breaks” in the middle of the content – “mid-roll.” If the on-demand content is recorded from a live/linear transmission, the ads that are already in place may be retained for a few days (in the United States, Nielsen counts the ratings of content viewed after the original air date for three days). This doesn’t do anything to personalize the ads, of course.

    In order to make that happen, “metadata markers” are placed at the appropriate points in the video file during processing, and when the viewer reaches that pre-determined point in the video, a call is made to an ad decision server and the returned ads are dynamically inserted on the fly. Ads can be served based upon each user’s demographic profile, their location, and the device they are using, as well as information contained in the metadata markers, such as what the content is, what the genre is, and how far along in the content the viewer is. As each ad is played, tracking URLs can be called to report that it has been (or is being) viewed, and some of the more sophisticated systems allow reporting of click-through engagement as the viewer goes off to look at a shiny new BMW. The video will wait, of course, for the viewer to interact before resuming. This is a key advantage of on-demand viewing.
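
    (As a rough illustration of that client-side flow – reach a marker, ask an ad decision server what to play for this viewer, then fire tracking URLs as the ad is shown – a player-side sketch might look like the following. The endpoint, query parameters and response format are assumptions made for illustration, not any particular vendor’s API.)

        # Client-side dynamic ad insertion sketch: at a metadata marker, call an
        # (assumed) ad decision server with viewer and content details, play the
        # returned ads, and fire their tracking URLs. Endpoint and JSON shape are
        # hypothetical.
        import json
        import urllib.parse
        import urllib.request

        ADS_ENDPOINT = "https://ads.example.com/decision"  # hypothetical ad decision server

        def request_ads(marker, viewer):
            params = urllib.parse.urlencode({
                "content_id": marker["content_id"],
                "genre": marker["genre"],
                "position_s": marker["position_s"],  # how far into the content we are
                "device": viewer["device"],
                "geo": viewer["geo"],
            })
            with urllib.request.urlopen(f"{ADS_ENDPOINT}?{params}") as resp:
                return json.load(resp)["ads"]  # e.g. [{"url": ..., "impression_urls": [...]}]

        def play_break(marker, viewer, play_fn):
            for ad in request_ads(marker, viewer):
                play_fn(ad["url"])  # hand the ad stream to the player
                for url in ad.get("impression_urls", []):
                    urllib.request.urlopen(url)  # report that the ad was (or is being) viewed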

    This model has been in place for several years already, generating material revenues for content owners and distributors.

    Today, though, it is more complicated, as existing linear TV broadcasts are being made available online more or less at the same time. Online live/linear video is typically delayed by about 30 seconds to a minute from the broadcast because of the protocols used to get continuous video through a medium designed for file delivery (i.e., the Internet). Hence, there are significant differences between on-demand and linear delivery when it comes to advertising.

    Firstly, the timing of exactly when to insert and the length of the advertising breaks are much more critical in a live/linear situation. If the ad is inserted too early, it will cut off programming, and if it is inserted too late, the viewer will see whatever is to be replaced (filler promos, other ads etc.).

    Secondly, television is scheduled in advance with breaks full of ads that the TV broadcaster has sold.

    So, in addition to tailoring the ads for individual user demographics, location and device type, the ads need to be scheduled in advance and be the same length as in the original broadcast. Individual broadcast TV ads must be capable of being removed and replaced with new ads.
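
    (A toy way to picture that constraint – the replacement ads have to exactly fill the break the broadcaster scheduled – is a naive break-filling routine like the one below. Real ad decisioning also weighs targeting, pricing, frequency capping and pacing; the ad IDs and durations here are made up.)

        # Naive sketch of filling a fixed-length live/linear ad break: take targeted
        # ads (best match first) whose durations fit, and pad with slate/filler so
        # the break length exactly matches the original broadcast break.
        def fill_break(break_s, candidate_ads):
            """candidate_ads: list of (ad_id, duration_s), best-targeted first."""
            chosen, remaining = [], break_s
            for ad_id, duration in candidate_ads:
                if duration <= remaining:
                    chosen.append((ad_id, duration))
                    remaining -= duration
            if remaining:  # no exact fit: pad with slate so the timing still holds
                chosen.append(("slate", remaining))
            return chosen

        print(fill_break(90, [("bmw_30s", 30), ("cola_45s", 45), ("bank_20s", 20)]))
        # [('bmw_30s', 30), ('cola_45s', 45), ('slate', 15)]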

    Where the Ads are Inserted Matters

    Up until now, we have discussed the decisions regarding what ads are replaced/inserted. Another significant shift since the start of OTT advertising in the on-demand-only days is where the ads are physically inserted into streams being delivered online.

    In the early days, dynamic insertion almost always took place on the device itself. The principal advantage was that it enabled the online delivery of the content to be completely uniform to every device.

    However, there were significant disadvantages. In particular, hackers could quite easily interrupt the ad insertion process and essentially steal the ad inventory.

    As a result of the deficiencies in client-side insertion (as the above is known), providers started to insert ads on the server instead. Server-side insertion makes it much easier to splice ads in seamlessly and virtually eliminates the ability of hackers to steal inventory, because from the device there is no easy way to detect where the content ends and the ads start: there are no client-side calls to ad decision servers and no metadata markers in the delivered stream. It also simplifies the app on the device somewhat, at the expense of making tracking URLs harder, though not impossible, to deliver.
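
    (For HTTP streaming, server-side insertion is commonly implemented as manifest manipulation: the ad-break segments in each viewer’s personal playlist are swapped for that viewer’s ads, so the client just plays one continuous stream. Below is a much-simplified, HLS-flavored sketch; the cue-marker tags and segment URIs are illustrative assumptions, not a complete implementation.)

        # Server-side ad stitching sketch via manifest manipulation: segments between
        # cue-out/cue-in markers in a per-viewer HLS media playlist are replaced with
        # that viewer's ad segments, with discontinuity tags around the splice.
        def stitch_ads(playlist_lines, ad_segments):
            out, in_break = [], False
            for line in playlist_lines:
                if line.startswith("#EXT-X-CUE-OUT"):  # start of the original ad break
                    in_break = True
                    out.append("#EXT-X-DISCONTINUITY")
                    for duration, uri in ad_segments:  # splice in the personalized ads
                        out.append(f"#EXTINF:{duration:.3f},")
                        out.append(uri)
                    out.append("#EXT-X-DISCONTINUITY")
                elif line.startswith("#EXT-X-CUE-IN"):  # end of the original break
                    in_break = False
                elif not in_break:
                    out.append(line)  # keep content segments as-is
            return out

        viewer_playlist = stitch_ads(
            ["#EXTINF:6.000,", "content_001.ts",
             "#EXT-X-CUE-OUT:30", "#EXTINF:6.000,", "break_filler_001.ts", "#EXT-X-CUE-IN",
             "#EXTINF:6.000,", "content_002.ts"],
            ad_segments=[(6.0, "ad_bmw_001.ts"), (6.0, "ad_bmw_002.ts")],
        )
        print("\n".join(viewer_playlist))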

    Because delivery over the Internet requires individual streams for each viewer, it is theoretically possible to deliver completely different ads (and content for that matter) to each viewer. In practice this is unlikely to happen.

    The broadcaster essentially gets to re-sell the same inventory, since the same time slot can now be reused for different ads served to different viewers. These ads are also more relevant to the viewer and so are less likely to be a chore to watch. So everyone wins.

    Reply
  37. Tomi Engdahl says:

    Sarah Perez / TechCrunch:
    Facebook partners with sports network Stadium, will exclusively broadcast 15 college football games live — Facebook wants you to watch more video on its site – including those you can’t see elsewhere. To that end, the social network earlier this month launched a dedicated section for original video, called Watch.

    Facebook will live stream over a dozen college football games this year
    https://techcrunch.com/2017/08/23/facebook-will-live-stream-over-a-dozen-college-football-games-this-year/

    Facebook wants you to watch more video on its site – including those you can’t see elsewhere. To that end, the social network earlier this month launched a dedicated section for original video, called Watch. Now that’s being expanded with the addition of live-streamed college football games, broadcast in partnership with sports network Stadium.

    The deal will bring 15 live college football games to Facebook, including nine Conference USA games and six Mountain West games.

    Unlike games aired on traditional television, the games broadcast on Facebook will take advantage of the digital platform to introduce a number of interactive elements as part of the viewing experience. For example, they will include a live, curated chat from football personalities alongside the on-air presentation. Plus, a social team and other correspondents will work to engage the at-home audience in conversation.

    Reply
  38. Tomi Engdahl says:

    Malachy Browne / New York Times:
    YouTube machine learning tech removed thousands of videos that may document atrocities in Syria, advocates say; some videos reinstated after creator objections
    http://www.nytimes.com/2017/08/22/world/middleeast/syria-youtube-videos-isis.html

    Reply
  39. Tomi Engdahl says:

    Georg Szalai / Hollywood Reporter:
    Snapchat’s VP of Content Nick Bell says the company expects to debut scripted shows by year’s end, aiming to complement TV

    Snapchat Content Chief Expects First Scripted Fare This Year, Says “Mobile Is Not a TV Killer”
    http://www.hollywoodreporter.com/news/snapchat-content-chief-expects-first-scripted-fare-year-says-mobile-is-not-a-tv-killer-1031153

    “We want to be friends of media,” says Nick Bell at the Edinburgh Television Festival as he outlines the company’s mobile-first TV strategy.

    Snapchat owner Snap has been looking to reinvent and complement TV for mobile-focused millennials, with VP of Content Nick Bell leading the charge.

    The executive on Wednesday outlined and discussed the company’s mobile-first TV strategy on the first day of the Edinburgh TV Festival, saying the company doesn’t expect to do away with traditional television and would start doing scripted fare by year’s end.

    “Mobile is not a TV killer,” he said. In fact, he argued, mobile is the most complementary thing to TV yet: “there is no better place to watch a great show than on the glowing box on the wall,” while watching longform content on a mobile device is not a great experience.

    “We want to be friends of media,” he said. Snap’s focus is on “joining the dots” between TV and mobile, the executive added. Optimizing known properties from content makers is part of that, such as doing Snapchat series that are “complementary” to existing TV shows and reimagining them for the new platform. After all, “Snapchat shows drive tune-in to TV,” he noted, mentioning that such series as The Bachelor get audience boosts of around 15 percent from related Snapchat content.

    Reply
  40. Tomi Engdahl says:

    Nikon’s new D850 has 45.7 megapixels and enough features to tempt Canon shooters
    Jack of all trades, master of some
    https://www.theverge.com/circuitbreaker/2017/8/24/16193250/nikon-d850-dslr-camera-45-megapixels

    Nikon has a new full-frame DSLR: the D850. Announced today, the D850 is a monster of a camera in terms of specs, and it’s one that will cost accordingly — the retail price is $3,299 for just the body when it goes on sale in September. The pro-level D5 may still be the king of Nikon’s current DSLR offerings, but at first glance it’s the D850 that will likely be the Nikon full-frame camera that gets the most use by pros, semi-pros, and amateurs with deep pockets. For all intents and purposes, this is Nikon’s flagship camera going forward.

    So what does a 2017 flagship camera look like in Nikon’s eyes? Well, this camera has just about everything you could want from a full-frame DSLR these days. It’s built around a hefty 45.7-megapixel CMOS sensor that’s back-side illuminated — a first for any of Nikon’s full-frame cameras. That should make the D850 handle low light situations pretty well despite the high megapixel count, which usually limits low light quality. And to wrangle those mega files, Nikon’s included its top-line Expeed 5 image processor.

    Nikon’s also following a major trend in digital cameras by not including a low pass filter on the D850, which — combined with the high megapixel count — means the camera should be able to capture incredible detail.

    And yet, Nikon’s not positioning the D850 as simply a great tool for stills and studio photographers. In fact, judging from its specs, there’s a pretty good case to be made for the D850 in almost any shooting scenario. More bluntly: it doesn’t carry as many of the tradeoffs that high-resolution DSLRs like Canon’s two-year-old 50-megapixel 5DS line ask of their users in exchange for that resolution. It also outclasses (on paper, at least) Canon’s own “jack of all trades” full-frame camera, the year-old 5D Mark IV.

    That all starts with the D850’s video capabilities — it shoots 4K UHD footage at 30 or 24 frames per second, and 1080p video at up to 120 fps. It can record uncompressed 4:2:2 8-bit 4K UHD footage to an external recorder over the HDMI port while recording locally to a card at the same time.
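
    (A quick back-of-the-envelope calculation shows why that uncompressed feed goes out over HDMI to an external recorder rather than to the memory card: 8-bit 4:2:2 averages 16 bits per pixel, so UHD at 30 fps works out to roughly half a gigabyte per second before audio or container overhead.)

        # Data rate of uncompressed 8-bit 4:2:2 UHD video at 30 fps.
        width, height, fps = 3840, 2160, 30
        bytes_per_pixel = 2  # 8-bit 4:2:2 chroma subsampling averages 16 bits/pixel
        rate_bytes = width * height * bytes_per_pixel * fps
        print(f"{rate_bytes / 1e6:.0f} MB/s (~{rate_bytes * 8 / 1e9:.1f} Gbit/s)")
        # roughly 498 MB/s, i.e. about 4 Gbit/s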

    There’s an 8K time-lapse video mode, too, which is double the resolution that’s typically found on DSLRs these days.

    There’s also a “silent shooting” option — something more commonly seen on mirrorless cameras, not DSLRs — that lets users shoot up to 6 fps at full resolution (or up to 30 fps at 8.6 megapixels).

    The D850 has two memory card slots — one XQD and one SD — to help capture all that data, and the battery will last for about 1,800 shots (or 70 minutes of video).

    Wi-Fi, Bluetooth, and SnapBridge (the company’s solution for maintaining a constant connection to your smartphone) are all included as well.

    Reply
  41. Tomi Engdahl says:

    Sarah Perez / TechCrunch:
    Report: Roku’s market share among streaming media players in US is up to 37%, with Amazon’s Fire TV at 24%, Chromecast at 18%, and Apple TV down to 15% — Roku isn’t only maintaining its lead as the top streaming media player device in the U.S., it’s increasing it.

    Roku is the top streaming device in the U.S. and still growing, report finds
    https://techcrunch.com/2017/08/23/roku-is-the-top-streaming-device-in-the-u-s-and-still-growing-report-finds/

    Roku isn’t only maintaining its lead as the top streaming media player device in the U.S., it’s increasing it. That’s the conclusion from the latest industry report out today from market intelligence firm Parks Associates, which states that 37 percent of streaming devices in U.S. households are Roku devices, as of the first quarter of this year.

    That’s up from 30 percent in the same quarter last year, the report notes.

    Reply
  42. Tomi Engdahl says:

    Spotify just signed the last big music label deal it needs to go public
    Now that it has Warner Music on board, look for a nontraditional IPO at the end of this year or early 2018.
    https://www.recode.net/2017/8/24/16199514/spotify-warner-music-label-deal-ipo

    Spotify has cleared what ought to be the last major hurdle before it goes public: It has renewed a licensing deal with Warner Music Group.

    That means that Spotify now has deals in place with all three of the major music labels, and that means it will be able to tell investors that it has a grip on music costs for the next few years.

    Reply
  43. Tomi Engdahl says:

    Steven Melendez / Fast Company:
    A look at projects by Mozilla and Google to make large, open-source datasets containing crowdsourced voice samples available to developers

    Google, Mozilla, And The Race To Make Voice Data For Everyone
    The voice-control platform wars are getting open sourced.
    https://www.fastcompany.com/40449278/google-mozilla-and-the-race-to-make-voice-data-for-everyone

    A voice-controlled virtual assistant – Siri, Alexa, Cortana, or Google Home – is only as good as the data that powers it. Training these programs to understand what you are saying requires a whole lot of real-world examples of human speech.

    That gives existing voice recognition companies a built-in advantage, because they have already amassed giant collections of sample speech data that can be used to train their algorithms.

    Google acknowledged as much on Thursday, in releasing a crowdsourced dataset of global voice recordings. The 65,000 one-second audio clips include people from around the world saying simple command words – yes, no, stop, go and the like. This comes just a couple of weeks after Mozilla, the organization behind the open source Firefox browser, introduced a new project called Common Voice. Its goal is to build a freely available, crowdsourced dataset of voice samples of people from around the world speaking a wide variety of sample words and sentences.
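
    (Command-word releases like this are typically laid out as one folder per spoken word, each holding short one-second WAV clips, so building a labeled training list takes only a few lines. The dataset path below is a placeholder, and the exact layout may differ between dataset versions.)

        # Build a labeled list from a keyword dataset organized as one folder per word,
        # each containing short WAV clips (folder name = label). Path is a placeholder.
        import wave
        from pathlib import Path

        DATASET_DIR = Path("speech_commands")  # placeholder: wherever the clips were extracted

        def load_examples(dataset_dir):
            examples = []
            for wav_path in sorted(dataset_dir.glob("*/*.wav")):
                label = wav_path.parent.name  # the folder name is the spoken word
                with wave.open(str(wav_path), "rb") as wav:
                    frames = wav.readframes(wav.getnframes())
                    rate = wav.getframerate()  # clips are nominally one second of 16 kHz audio
                examples.append((label, rate, frames))
            return examples

        examples = load_examples(DATASET_DIR)
        print(len(examples), "clips,", len({label for label, _, _ in examples}), "distinct words")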

    Google’s recordings were collected as part of the AIY do-it-yourself artificial intelligence program, designed to enable makers to experiment with machine learning.

    In full, it’s more than a gigabyte of sound, but that’s just a tiny fraction of the total amount of voice data Google has collected to train its own AI systems. The company once opened an automated directory assistance service which, it turned out, was primarily a way for them to gather human voice data.

    Amazon’s Alexa transmits voice queries from its users to a server, where they’re used to further train the tool. Apple teaches Siri new languages and dialects by hiring speakers to read particular passages of known text, and by having humans transcribe snippets of audio from the service’s speech-to-text dictation mode. Microsoft has reportedly set up simulated apartments around the world to grab audio snippets in a homelike setting to train its Cortana digital assistant.

    All of that is privately held, and generally unavailable to academics, researchers, or would-be competitors. That’s why Mozilla decided to launch its Common Voice project.

    Reply
  44. Tomi Engdahl says:

    Millions illegally streamed Mayweather-McGregor fight
    https://www.cnet.com/news/illegal-streaming-of-mayweather-mcgregor-fight-reaches-millions/

    About 2.9 million people watched Saturday’s big fight illegally through pirated streams and social media, according to a digital security firm.

    Apparently, watching the Floyd Mayweather-Conor McGregor fight was a big draw on the illegal streaming circuit, too.

    An estimated 2.9 million people saw the undefeated boxing champion Mayweather beat McGregor, the UFC mixed martial arts champion, on 239 illegal streams on Saturday, according to digital security platform Irdeto.

    Twitter’s livestreaming service Periscope emerged as the winner of that bout, drawing people who wanted to watch the fight but didn’t want to pay the $100 pay-per-view fee.

    Saturday’s fight was apparently seen without permission on various social media platforms including Facebook, YouTube, Twitch and Periscope, which appeared to be shutting down streams as they popped up on their respective sites. Other streams popped up on more typical pirate-streaming websites and on illicit streaming devices, which had been advertised on e-commerce sites including Amazon, eBay and Alibaba.

    The illegal streaming of the fight, which cost $100 to watch on Showtime pay-per-view in the US, probably won’t hurt either fighter’s bank account. Mayweather and McGregor reportedly earned $100 million and $30 million, respectively, despite the fact that the fight ultimately didn’t sell out.

    Reply
  45. Tomi Engdahl says:

    Netflix is Turning 20—But Its Birthday Doesn’t Matter
    https://www.wired.com/story/netflix-20th-anniversary

    A few years ago, one eagle-eyed YouTube user uploaded a true internet find: a 1998 DVD-ROM ad for a new service called NetFlix.com. Over a swell of stringed instruments and a parade of movie posters from Raging Bull to Twins, the new DVD rental company explained itself (“You won’t have to search for a video store that carries more than a few titles”). “Holy S**t!” wrote one commenter. “They had Netflix in ’98?!” They sure did, Shadowkey392.

    In fact, today marks the 20th anniversary of the birth of the company—August 29, 1997, is when Reed Hastings, fresh off the sale of his company Pure Atria (née Pure Software), cofounded it with his colleague Marc Randolph. It wasn’t even named Netflix then—it was called Kibble.

    Reply
  46. Tomi Engdahl says:

    Publishers Are Making More Video — Whether You Want It or Not
    https://news.slashdot.org/story/17/08/29/1441205/publishers-are-making-more-video—-whether-you-want-it-or-not

    Americans are expected to spend 81 minutes a day watching digital video in 2019, up from 61 minutes in 2015, according to projections by research firm eMarketer. Time spent reading a newspaper is expected to drop to 13 minutes a day from 16 minutes during that time. The question is whether those trends will sustain the growing number of outlets flooding social networks with video clips.

    Dozens of writers and editors have also been laid off this summer at news outlets like Vocativ, Fox Sports, Vice and MTV News. All of the moves were tied in part to focusing more resources on making videos. Publishers are heading in this direction even though polls show consumers find video ads more irritating than TV commercials. Google and Apple are testing features that let you mute websites with auto-play videos or block them entirely. More young Americans prefer reading the news to watching it, according to a survey last year by the Pew Research Center. But many publishers have little choice.

    Publishers Are Making More Video—Whether You Want It or Not
    https://www.bloomberg.com/news/articles/2017-08-29/publishers-are-making-more-video-whether-you-want-it-or-not

    Digital media churns out videos for tech and media giants
    Adults to spend 81 minutes a day on digital video: eMarketer

    Reply
  47. Tomi Engdahl says:

    Musician Taryn Southern on composing her new album entirely with AI
    How artificial intelligence simplifies music production for solo artists
    https://www.theverge.com/2017/8/27/16197196/taryn-southern-album-artificial-intelligence-interview

    Break Free – Taryn Southern (Official Music Video)
    https://www.youtube.com/watch?v=XUs6CznN8pw

    The music track and video art were created using artificial intelligence; the lyrics and vocal melodies were written by Taryn.

    Reply
  48. Tomi Engdahl says:

    Wall Street Journal:
    Sources: Apple scrambles to land pricing deals with Hollywood on 4K content for its upcoming Apple TV; Apple wants to charge $20/movie, studios want $5-10 more — Tech giant wants to have major Hollywood films available in ultra-high definition on the new device

    Apple Spars With Movie Studios Over Pricing Ahead of Apple TV Rollout
    Tech giant wants to have major Hollywood films available in ultra-high definition on the new device
    https://www.wsj.com/articles/apple-studios-at-odds-over-movie-pricing-ahead-of-new-apple-tv-rollout-1504004401?mod=e2tw

    Reply
