Audio and video trends for 2019

Here are some audio and video trends for 2019:

The global Hi-Fi Systems market was valued at millions of US dollars in 2018 and is expected to grow. The EISA Awards have selected their Hi-Fi product category winners, but I did not see any really fancy new innovations there that would excite me. The Hi-Fi speaker market has seen considerable consolidation over the years but is still expected to grow. The global Hi-Fi speaker system market is highly competitive: established international brands, domestic brands and new entrants all form the competitive landscape. The market is expected to grow at a higher rate than in previous years due to the booming electronics industry, driven by rising incomes and the increasing affordability of technology products globally. Thanks to technology adoption and smart gadgets, the North America region is showing steady growth in the Hi-Fi speaker system market. From a technology standpoint, the Hi-Fi market is based on largely stabilized technology, as class D amplifiers have been mainstream for many years.

Smart TVs are everywhere. The vast majority of televisions available today are “smart” TVs, with internet connections, ad placement, and streaming services built in. Despite the added functionality, TV prices are lower than ever. Your new smart TV was so affordable because it is collecting and selling your data. It is clear that TV companies are in a cutthroat business, and that companies like Vizio would have to charge higher prices for hardware if they didn’t run content, advertising, and data businesses. Google wants sensors and cameras in every room of your home to watch and analyze you, patents show.

Competition between streaming services stays high. Apple is embracing the TV industry for the first time: Vizio and LG TVs will support AirPlay 2 and HomeKit, while Samsung TVs will get an iTunes Movies & TV app, as well as AirPlay 2 support. Google and Amazon remain important players in the smart speaker market.

4K video resolution is still hot in 2019 – it is becoming mainstream and getting cheaper. Peraso showcases 4K wireless video at CES 2019. LG has produced a market-ready rollable OLED TV. The new 75-inch 4K Micro LED TV announced at CES 2019 proves Samsung is serious about scaling the technology to do battle with OLED. But it seems that even in 2019 the “4K” trend remains woefully deficient from a compelling-content-availability standpoint. CES 2019 is already full of weird and wonderful monitors.

But new higher 8K resolution is being pushed to market. The “8K” (resolution) tagline was apparently everywhere at CES this year. Samsung announced a 98-inch 8K TV because why not. LG has come strong to CES 2019 with an 88-inch 8K OLED TV, a 75-inch 8K LED/LCD TV, HDMI 2.1, new auto calibration features, Alexa built in, and many more features. It seems that this ongoing evolution is occurring out of necessity: as a given-size (and -pixel-dense) display becomes a low profit margin commodity, manufacturers need to continually “up-rev” one or both key consumer-attention-grabbing parameters (along with less quantifiable attributes like image quality) in order to remain profitable … assuming they can continue to stimulate sufficient-sized consumer demand in the process. I am not sure whether they can push 8K to the mass market in the next few years.

Wall size TVs are coming. Samsung announced a modular TV at CES. Samsung first showcased this MicroLED TV technology at CES 2018, showing how the screens are composed of millions of individual LEDs. Individual screens can be combined to create massive displays, which the company calls The Wall TV. The wall-sized displays shown in recent years at CES are, in my opinion, quite ridiculous, at least for the masses.

HDMI updates are coming. At present, most HDMI equipment uses the 2.0 standard (adopted in 2013), which provides support for example for 4K video. The HDMI Forum announced the new 2.1 standard already in November 2017, but it only started showing up at CES in January 2019, where 8K fiber-optic HDMI cables were on display. The 2.1 standard is a big change in technology, as the bus bandwidth increases from 18 gigabits to 48 gigabits per second. This enables up to 10K video transmission and up to 120 frames per second.
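
To put the jump from 18 to 48 gigabits per second in perspective, a rough back-of-the-envelope calculation of uncompressed video bit rates helps. The Python sketch below uses my own simplifying assumptions (10-bit 4:2:0 video, no blanking or link overhead), so the figures are illustrative estimates rather than numbers from the HDMI specification:

```python
# Rough uncompressed-video bandwidth estimates (illustrative assumptions:
# 10 bits per sample, 4:2:0 chroma subsampling = 1.5 samples per pixel,
# no blanking intervals, audio or link-coding overhead).

RESOLUTIONS = {
    "4K  (3840x2160)":  (3840, 2160),
    "8K  (7680x4320)":  (7680, 4320),
    "10K (10240x4320)": (10240, 4320),
}

def raw_gbps(width, height, fps, bits_per_sample=10, samples_per_pixel=1.5):
    """Approximate uncompressed payload bit rate in Gbit/s."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

for name, (w, h) in RESOLUTIONS.items():
    for fps in (60, 120):
        print(f"{name} @ {fps:3d} fps: about {raw_gbps(w, h, fps):5.1f} Gbit/s")

print("HDMI 2.0 link budget: ~18 Gbit/s, HDMI 2.1: ~48 Gbit/s")
```

Running it shows that 4K at 60 fps already needs roughly 7–8 Gbit/s of payload, and that 8K at 120 fps or 10K modes exceed even 48 Gbit/s, which is part of why the highest HDMI 2.1 modes also rely on Display Stream Compression.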

Bendable displays are really coming to PCs and smart phones. LG’s “rollable” display shown this year neatly showcased the technology’s inherent flexibility while also addressing the question of how to hide a gargantuan display when it’s not in use. Several foldable smart phones have been shown. Chinese company Royole was showing off the FlexPai at CES in Las Vegas.

Micro displays for VR and AR glasses have developed further. MicroLED is better looking, more efficient and more versatile than any previous display tech. Now all Samsung, Sony, LG and others have to do is figure out how to manufacture it affordably. Nanoco Technologies and Plessey Semiconductors have partnered to shrink the pixel size of monolithic microLED displays using Nanoco’s cadmium-free quantum-dot (CFQD quantum dots) semiconductor nanoparticle technology. Microchips and organic LEDs that deliver 4K-like high resolution displays at a quarter of the size and half the weight of existing virtual reality (VR) headsets have been developed under a European Union project. Marc Andreessen says VR will be “1,000” times bigger than AR even though VR seems to be the popular whipping boy amongst the tech community.

There seems to be no shortage of angst about the current (and unfortunately burgeoning) popularity of the term artificial intelligence (AI). Intelligence has been defined in many ways, which makes it hard to get a good picture of what is going on. I am still waiting for sensibly intelligent AI to do something useful. But the ability of a sufficiently trained deep learning system to pattern-match images, sound samples, computer viruses, network hacking attempts, and the like is both impressive and effective.

Potential problems related to self-driving car technologies and cameras are expected. A man at CES in Las Vegas says that a car-mounted lidar permanently damaged the sensor in his new $1,998 Sony a7R II mirrorless camera. The lidar laser power rules ensure the lasers are safe for human eyes, but not necessarily for camera sensors. Is this something that camera and car manufacturers need to figure out together?

2019 will be the year of open source, in software and even hardware. The open source video player app VLC has now reached 3 billion downloads.

When almost all AV products are pushing more and more features, it seems that almost everything is too complicated for the average Joe.

1,491 Comments

  1. Tomi Engdahl says:

    A GoPro for beetles: Researchers create a robotic camera backpack for insects
    UNIVERSITY OF WASHINGTON

    Scientists strapped a tiny camera to a beetle to test just how small video technology can get
    https://edition.cnn.com/2020/07/15/us/beetle-tiny-cameras-scli-scn-intl/index.html

    Scientists have developed a tiny wireless camera that can ride on the back of an insect, giving users a bug’s-eye view of the world.

    Researchers at the University of Washington in the US developed the technology to test the potential of miniature cameras. Their device weighs about 250 milligrams — around one-tenth the weight of a playing card — and streams video to a smartphone at one to five frames per second.
    “We have created a low-power, low-weight, wireless camera system that can capture a first-person view of what’s happening from an actual live insect or create vision for small robots,” senior author Shyam Gollakota said in a statement.

    They used a tiny, ultra-low-power black-and-white camera that can sweep across a field of view with the help of a mechanical arm, in order to best mimic how an insect sees its environment.

    The camera and arm were controlled via Bluetooth from a smartphone, which can be up to 120 meters (394 feet) away. It currently requires batteries, but the researchers are looking at creating a version that uses other forms of power.

    A GoPro for beetles: Researchers create a robotic camera backpack for insects
    https://www.eurekalert.org/pub_releases/2020-07/uow-agf070720.php

    “Vision is so important for communication and for navigation, but it’s extremely challenging to do it at such a small scale. As a result, prior to our work, wireless vision has not been possible for small robots or insects.”

    Typical small cameras, such as those used in smartphones, use a lot of power to capture wide-angle, high-resolution photos, and that doesn’t work at the insect scale. While the cameras themselves are lightweight, the batteries they need to support them make the overall system too big and heavy for insects — or insect-sized robots — to lug around. So the team took a lesson from biology.

    “One advantage to being able to move the camera is that you can get a wide-angle view of what’s happening without consuming a huge amount of power,”

    The researchers attached their removable system to the backs of two different types of beetles — a death-feigning beetle and a Pinacate beetle. Similar beetles have been known to be able to carry loads heavier than half a gram, the researchers said.

    “We made sure the beetles could still move properly when they were carrying our system,” said co-lead author Ali Najafi, a UW doctoral student in electrical and computer engineering. “They were able to navigate freely across gravel, up a slope and even climb trees.”

    The beetles also lived for at least a year after the experiment ended.

    “We added a small accelerometer to our system to be able to detect when the beetle moves. Then it only captures images during that time,” Iyer said. “If the camera is just continuously streaming without this accelerometer, we could record one to two hours before the battery died. With the accelerometer, we could record for six hours or more, depending on the beetle’s activity level.”
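
    Those battery-life numbers are essentially a duty-cycling effect. A tiny illustrative model (with my own placeholder battery capacity and current values, not figures from the UW paper) shows how gating capture on the accelerometer stretches the runtime:

```python
# Illustrative duty-cycle model for accelerometer-gated capture.
# The battery capacity and current draws are assumed placeholder values,
# NOT numbers from the University of Washington paper; the point is only
# to show why streaming frames just while the insect moves stretches a
# tiny battery from a couple of hours to many hours.

BATTERY_MAH       = 10.0   # assumed tiny battery capacity
STREAM_CURRENT_MA = 5.0    # assumed draw while capturing and streaming
IDLE_CURRENT_MA   = 0.5    # assumed draw with only the accelerometer awake

def runtime_hours(active_fraction):
    """Battery life when the camera streams only `active_fraction` of the time."""
    avg_ma = (active_fraction * STREAM_CURRENT_MA
              + (1 - active_fraction) * IDLE_CURRENT_MA)
    return BATTERY_MAH / avg_ma

print(f"continuous streaming : {runtime_hours(1.0):4.1f} h")
print(f"beetle moving 25%    : {runtime_hours(0.25):4.1f} h")
print(f"beetle moving 10%    : {runtime_hours(0.10):4.1f} h")
```

    With these placeholder numbers the pattern matches what the researchers report: roughly two hours when streaming continuously, and several times longer when frames are captured only while the beetle moves.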

    While the team is excited about the potential for lightweight and low-power mobile cameras, the researchers acknowledge that this technology comes with a new set of privacy risks.

    “As researchers we strongly believe that it’s really important to put things in the public domain so people are aware of the risks and so people can start coming up with solutions to address them,” Gollakota said.

  2. Tomi Engdahl says:

    Tiny Camera Backpack Turns Beetles Into FPV Inspection Robots
    UW researchers developed a tiny camera that can ride aboard an insect, requiring very little energy to perform inspections and surveillance.
    https://www.hackster.io/news/tiny-camera-backpack-turns-beetles-into-fpv-inspection-robots-278132d940a3

  3. Tomi Engdahl says:

    VidMob rethinks video production in the pandemic era
    https://techcrunch.com/2020/07/14/vidmob-pandemic/

  4. Tomi Engdahl says:

    While Not Quite a Camera Obscura, This Obscure Camera Has Captured Our Interest!
    Pim de Groot takes us on an exploration into a DIY digital camera!
    https://www.hackster.io/news/while-not-quite-a-camera-obscura-this-obscure-camera-has-captured-our-interest-03b0a9c03f87

  5. Tomi Engdahl says:

    Zrythm Approaching Beta As An Easy-To-Use, Open-Source Digital Audio Workstation
    https://www.phoronix.com/scan.php?page=news_item&px=Zrythm-0.8.694-Open-Source-DAW

    When it comes to open-source audio software, the Ardour digital audio workstation and Audacity audio editor are the two flagship offerings. But Zrythm continues advancing as another promising open-source digital audio workstation project. Zrythm is currently in a late alpha stage with its newest release this weekend but a beta appears to be on the horizon.

    Zrythm is an open-source DAW that is cross-platform, supports JACK and other audio backends, supports a variety of plugins, and has a growing list of other features for this digital audio workstation that is based on a GTK3 interface.

  6. Tomi Engdahl says:

    Connected audio was a bad choice
    https://tcrn.ch/30aMr7e

    Good audio hardware should be timeless, and devices that need frequent firmware updates, have proprietary support for a certain operating system or can lose integration support quickly fly in the face of that.

    Home entertainment integrations with these speakers are just awful, even among products built by the same company. Repeatedly connecting my stereo HomePods to my Apple TV has been maddening.

    Smart assistants are much less ambitious than they were years ago and the ceiling of innovation already seems to have come down significantly. Third party integrations have sunk far below expectations and it’s pretty uncertain that these voice interfaces have as bright a future as these tech companies once hoped.

    Now, many of you will say that my true error was a lack of commitment to one ecosystem, which is undoubtedly spot-on and yet I don’t think any of the players had precisely what I wanted hence the wildly piecemeal approach. Dumping more funds into a robust Sonos setup probably would have been the wisest commitment, but I have commitment issues and I think part of it was a desire to see what was out there.

  7. Tomi Engdahl says:

    Researchers developed a batteryless remote sensor powered completely by solar panels and capable of wirelessly transmitting images.

    Meet Camaroptera, the Solar-Powered Remote Image Sensor That Manages Itself
    https://www.hackster.io/news/meet-camaroptera-the-solar-powered-remote-image-sensor-that-manages-itself-ba950dc2887b

  8. Tomi Engdahl says:

    SONY, SAMSUNG, SHARP, JVC, PANASONIC, AND ALL UNIVERSAL CHINA BOARD SERVICE MODE
    https://www.youtube.com/watch?v=kpGQ4RyM7bg

  9. Tomi Engdahl says:

    Music consumption has changed, although many artists haven’t yet figured it out. You have to be constantly present in the consumer’s life if you want their money. There’s no going back because it’s the people who have changed their habits.

    Spotify CEO Daniel Ek says working musicians may no longer be able to release music only “once every three to four years”
    https://www.thefader.com/2020/07/30/spotify-ceo-daniel-ek-says-working-musicians-can-no-longer-release-music-only-once-every-three-to-four-years/

    Spotify CEO Daniel Ek discussed streaming and sustainability in a recent interview with Music Ally published on Thursday. Ek denied criticisms that Spotify pays insufficient royalties to artists, and insisted that the role of the musician had changed in today’s “future landscape.”

    Ek claimed that a “narrative fallacy” had been created and caused music fans to believe that Spotify doesn’t pay musicians enough for streams of their music. “Some artists that used to do well in the past may not do well in this future landscape,” Ek said, “where you can’t record music once every three to four years and think that’s going to be enough.”

    What is required from successful musicians, Ek insisted, is a deeper, more consistent, and prolonged commitment than in the past. “The artists today that are making it realize that it’s about creating a continuous engagement with their fans. It is about putting the work in, about the storytelling around the album, and about keeping a continuous dialogue with your fans.”

  10. Tomi Engdahl says:

    LTA Headphones Offer User-Customized Audio Experiences
    A modular, open hardware framework for high-end headphones.
    https://www.hackster.io/news/lta-headphones-offer-user-customized-audio-experiences-3174d47b93e3

  11. Tomi Engdahl says:

    The Montara monolithic true #MEMS speaker delivers high-fidelity, full-bandwidth sound with low THD for in-ear #audio devices

    MEMS speaker is implemented entirely in silicon
    https://www.edn.com/mems-speaker-is-implemented-entirely-in-silicon/?utm_content=buffer4812d&utm_medium=social&utm_source=edn_facebook&utm_campaign=buffer

    Montara, a monolithic true MEMS speaker from xMEMS Labs, delivers high-fidelity, full-bandwidth sound with low total harmonic distortion (THD) for in-ear personal audio devices, such as wireless ear buds. Further, the IP57-rated Montara is dust-resistant and waterproof up to 1 meter.

    The Montara microspeaker has a bandwidth of 20 Hz to 20 kHz with flat frequency response at >110 dB SPL. THD is <0.5% at 300 Hz/94 dB SPL. Mechanical latency of <0.1 ms enables active noise cancellation across a wider frequency range.

    Montara samples and evaluation kits are available now to select customers, with production slated for early 2021. LGA 5-lead packaging options include standard (6.05×8.4×0.985 mm) and side-firing (6.05×1.0×8.4 mm).

    https://xmems.com/products/

  12. Tomi Engdahl says:

    Researchers at the University of Texas at San Antonio have created an automated program called AutoFoley that analyzes the movement in video frames and creates its own artificial sound effects to match the scene. In a survey, the majority of people polled indicated that they believed the fake sound effects were real.

    New AI Dupes Humans into Believing Synthesized Sound Effects Are Real
    https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/new-ai-dupes-humans-into-believing-synthesized-sound-effects-are-real

    The model, AutoFoley, is described in a study published June 25 in IEEE Transactions on Multimedia.

    “Adding sound effects in post-production using the art of Foley has been an intricate part of movie and television soundtracks since the 1930s,”

    The first machine learning model extracts image features (e.g., color and motion) from the frames of fast-moving action clips to determine an appropriate sound effect.

    The second model analyzes the temporal relationship of an object in separate frames. By using relational reasoning to compare different frames across time, the second model can anticipate what action is taking place in the video.

    In a final step, sound is synthesized to match the activity or motion predicted by one of the models.
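
    As a purely illustrative sketch of that two-stage idea (not the actual AutoFoley system, which uses trained deep networks and temporal relational reasoning on a Foley sound dataset), the toy Python below uses frame differencing as a stand-in for the feature extractor and a simple rule as a stand-in for the action predictor; the final step only names the sound class a real system would synthesize:

```python
# Toy sketch of a two-stage video-to-sound-effect pipeline. This is NOT the
# AutoFoley model: frame differencing stands in for learned image/motion
# features, a threshold rule stands in for the temporal action predictor,
# and "synthesis" is reduced to naming a sound class.

import numpy as np

def motion_features(frames):
    """Stage 1 stand-in: mean absolute difference between consecutive frames."""
    return np.array([np.mean(np.abs(frames[i + 1].astype(float) - frames[i].astype(float)))
                     for i in range(len(frames) - 1)])

def predict_action(motion):
    """Stage 2 stand-in: map the temporal motion profile to a coarse action class."""
    if motion.mean() < 1.0:
        return "ambience"
    return "impact" if motion.max() > 3 * motion.mean() else "continuous motion"

def choose_sound(action):
    """Final step stand-in: pick the sound class a real system would synthesize."""
    return {"ambience": "room tone",
            "impact": "thud / hit",
            "continuous motion": "footsteps / rustle"}[action]

# Synthetic demo clip: mostly static frames with one abrupt change ("impact").
rng = np.random.default_rng(0)
frames = [np.full((64, 64), 100, dtype=np.uint8) + rng.integers(0, 3, (64, 64), dtype=np.uint8)
          for _ in range(10)]
frames[5] = rng.integers(0, 255, (64, 64), dtype=np.uint8)  # sudden scene change

motion = motion_features(frames)
action = predict_action(motion)
print("predicted action:", action, "->", choose_sound(action))
```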

  13. Tomi Engdahl says:

    How China’s ACRCloud detects copyrighted music in short videos
    https://techcrunch.com/2020/08/12/acrcloud-profile/

  14. Tomi Engdahl says:

    AI Magic Makes Century-Old Films Look New
    Denis Shiryaev uses algorithms to colorize and sharpen old movies, bumping them up to a smooth 60 frames per second. The result is a stunning glimpse at the past.
    https://www.wired.com/story/ai-magic-makes-century-old-films-look-new/

  15. Tomi Engdahl says:

    How to Turn Your Photography Camera Into a Webcam
    Months into the pandemic, webcams are still hard to find. But if you’re a shutterbug, you already have a better option.
    https://www.wired.com/story/how-to-turn-your-camera-into-a-webcam/

  16. Tomi Engdahl says:

    The growing importance of innovative MEMS microphones
    https://www.eetimes.com/the-growing-importance-of-innovative-mems-microphones/

    Microphone technology is becoming increasingly important as audio applications and features proliferate. Different factors have to be considered for a variety of applications, ranging from audio quality to water and dust robustness to size and cost. Infineon offers a portfolio of XENSIV™ MEMS microphones that ranges from low-cost models to products that deliver the highest performance levels.

  17. Tomi Engdahl says:

    Qualcomm’s Mission: Get Better Video to People Everywhere
    https://www.eetimes.com/qualcomms-mission-get-better-video-to-people-everywhere/

    The future of communications is about video.

    The recent focus on media applications like TikTok and business services such as Zoom during the pandemic is a clear indication of how streaming video has become such an important part of people’s lives these days.

    The MPEG and ITU standards have evolved over the years, but they have always been the work of multiple companies that pool their IP and license it to equipment vendors and content providers.

    In July the latest version of the standard was finalized — the Versatile Video Coding (VVC) standard (also known as H.266). Final publication will occur within a few months and first deployments are expected in 2021.

    The article explained some of the changes from the previous standard, High Efficiency Video Coding (HEVC), or H.265.

    HEVC has been critical to delivering 4K resolution and High Dynamic Range (HDR) video to consumers, but it has been around since 2013 and there have been a lot of developments since then. Without video compression technology a 4K/60 Hz HDR stream would take 7 Gbps! For reference, Roku and other streaming media boxes only require up to 25 Mbps for 4K UHD video with existing compression standards.

    VVC will further compress file sizes by 40% over HEVC as we prepare for 8K video streams. Several tools for HDR and wide color gamut were annexed onto the previous HEVC/H.265 spec and are now more tightly integrated into VVC. The new standard has enhanced VR and 360 video support for the next generation of immersive experiences. Game streaming will also be a key future entertainment option, and the standard must be cognizant of latency issues. There are additional text and graphics compression features designed for video conferencing and other business-related applications like Zoom.

    The goal is to make the delivered video appear as close to the original uncompressed video stream as possible, but the end result is dependent on the content. All video compression technologies are based on finding redundant patterns of raw pixels that can be replaced by smaller codes.

    Part of the compression comes from comparing multiple video frames and finding pixel patterns that are similar between frames but displaced by a motion vector. Compression is possible by modeling the motion between frames and sending only the local changes.
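
    A quick sanity check of those figures: assuming 10-bit 4:2:0 video for the uncompressed case (my assumption; the 25 Mbps streaming rate and the roughly 40% VVC saving come from the article), the numbers compare like this:

```python
# Back-of-the-envelope check of the bit rates quoted above.
# Assumption (mine): raw "4K/60 HDR" means 3840x2160, 60 fps, 10-bit 4:2:0.

WIDTH, HEIGHT, FPS = 3840, 2160, 60
BITS_PER_SAMPLE, SAMPLES_PER_PIXEL = 10, 1.5      # 10-bit 4:2:0

raw_bps = WIDTH * HEIGHT * FPS * BITS_PER_SAMPLE * SAMPLES_PER_PIXEL
hevc_stream_bps = 25e6                            # typical 4K UHD streaming rate (from the article)
vvc_stream_bps = hevc_stream_bps * (1 - 0.40)     # ~40% bit-rate saving claimed for VVC

print(f"raw 4K/60 10-bit 4:2:0 : {raw_bps / 1e9:.1f} Gbit/s")
print(f"HEVC 4K stream         : {hevc_stream_bps / 1e6:.0f} Mbit/s "
      f"(~{raw_bps / hevc_stream_bps:.0f}x compression)")
print(f"VVC at similar quality : ~{vvc_stream_bps / 1e6:.0f} Mbit/s")
```

    The raw figure works out to roughly 7.5 Gbit/s, which matches the "7 Gbps" quoted above, and the compression ratio against a 25 Mbit/s stream is on the order of 300:1.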

  18. Tomi Engdahl says:

    Qualcomm Drives Completion of Next-Generation Video Compression Standard
    https://www.qualcomm.com/news/releases/2020/07/15/qualcomm-drives-completion-next-generation-video-compression-standard

    With a 40% increase in video streaming efficiency, Versatile Video Coding will reimagine how consumers digest digital content & accelerate adoption of video for remote work, education and telemedicine

  19. Tomi Engdahl says:

    Application Note: Loudspeaker Electroacoustic Measurements
    This Audio Precision application note provides an overview of the key electroacoustic measurements used to characterize the performance of loudspeaker drive units and loudspeaker systems.
    https://www.electronicdesign.com/resources/white-paper/whitepaper/21135247/application-note-loudspeaker-electroacoustic-measurements?code=AudioPrecisionER1-07222020&utm_rid=CPG05000002750211&utm_campaign=32293&utm_medium=email&elq2=0043a5306f094d16a34341dc993d0baa&oly_enc_id=7211D2691390C9R

  20. Tomi Engdahl says:

    Transparent OLED Hitting The Market With Xiaomi’s Mi TV LUX Transparent Edition
    https://hackaday.com/2020/08/21/transparent-oled-hitting-the-market-with-xiaomis-mi-tv-lux-transparent-edition/

    One of the major advantages of OLED over LCD panels is that the former can be made using far fewer layers as the pixels themselves are emitting the light instead of manipulating the light from a backlight. This led some to ask the question of whether it’s possible to make an OLED panel that is transparent or at least translucent. As Xiaomi’s new Mi TV LUX OLED Transparent Edition shows, the answer there is a resounding ‘yes’. Better yet, for a low-low price of about $7,200 you can own one of these 55″ marvels.

    Transparent OLED technology is not new, of course. Back in 2018 LG was showing off a prototype TV that used one of the early transparent OLED panels.

    It does appear that this kind of technology would be highly suitable for signage purposes, while also allowing for something like an invisible television or display in a room that could be placed in front of a painting or other decoration.

  21. Tomi Engdahl says:

    Ted Kinsman shows how to photograph water ripples using three LEDs and an Arduino-controlled dripper.

    (via PetaPixel)

    How to Do Water Ripple Tank Shadow Photography
    https://petapixel.com/2020/08/22/water-ripple-tank-shadow-photography/

    A simple ripple tank is the mainstay of every physics teacher’s demo collection. The typical demonstration is done with a point light source (a little tungsten light bulb) a few feet above the ripple tank. The ripple tank is in reality a shallow pan of water with a clear bottom. The ripples are observed by placing a sheet of paper a foot or so below the pan of water.

  22. Tomi Engdahl says:

    For those in the past, the concept of the color blue might not have existed at all.

    Did Ancient People Really Not See The Color Blue?
    https://www.iflscience.com/space/did-ancient-people-really-not-see-the-color-blue/

    The color blue is in the middle of a mystery that links biology, psychology, art, and linguistics. Many believe that the way we see blue – that is, as a distinct color – is actually a modern development. For those in the past, the concept of the color blue might not have existed at all. Even some cultures today don’t see blue in the same way as people in the West.

    This fact may seem impossible but it’s true. You may argue that the sky is blue and so is the sea, but it’s possible your experience is putting a label on it. You might say that blue is real but to misquote Morpheus in The Matrix, “’real’ is simply electrical signals interpreted by your brain.” As humans, we have many examples that reveal to us that our brains are not perfect boxes of logic, but instead full of biases and easily tricked. One of the best recent examples of this is the infamous dress photo: Is it white and gold or black and blue?

  23. Tomi Engdahl says:

    Fun guitar instructions

    An instructional video that teaches you everything you need to know, and nothing you don’t.
    https://www.facebook.com/CollegeHumor/videos/303162960762516/

  24. Tomi Engdahl says:

    Why ‘The Mandalorian’ Uses Virtual Sets Over Green Screen | Movies Insider
    https://m.youtube.com/watch?feature=share&v=Ufp8weYYDE8

    Published June 11, 2020
    For decades, film and TV productions have used green and blue screens to place actors into new environments. Now, LED walls are revolutionizing this process by projecting 3D environments in real time behind actors to provide the illusion of being in a physical location. These methods were put to the test on Disney’s “The Mandalorian,” of which over half was filmed indoors on a virtual set. This process of combining traditional cinematography techniques with advanced world-building technology effectively eliminates the need for a green screen.

  25. Tomi Engdahl says:

    Many Canon cameras can now automatically back up pictures to Google Photos
    https://tcrn.ch/34Eodov

    Canon and Google today announced a new software integration that enables automatic Google Photos backup of pictures taken with select Canon cameras — a full list is available here, but it’s most of their recent interchangeable lens cameras dating back basically to when they started getting Wi-Fi on board.

  26. Tomi Engdahl says:

    Make shaky footage a thing of the past! Use this two-axis DIY gimbal to capture smooth shots with your GoPro: https://bit.ly/2YEhrLD

