Computer technology trends for 2016

The PC market seems to be stabilizing in 2016; I expect it to shrink only slightly. While mobile devices have been named as the culprit for the fall in PC shipments, IDC said that other factors may be in play. It is still pretty hard to make any decent profit building PC hardware unless you are one of the biggest players – so again Lenovo, HP, and Dell are increasing their collective dominance of the PC market like they did in 2015. I expect changes like spin-offs and maybe some mergers with smaller players like Fujitsu, Toshiba and Sony. The EMEA server market looks to be a two-horse race between Hewlett Packard Enterprise and Dell, according to Gartner. HPE, Dell and Cisco “all benefited” from Lenovo’s acquisition of IBM’s EMEA x86 server organisation.

The tablet market is no longer a high-growth market – tablet sales have started to decline, and the decline continues in 2016 as owners hold onto their existing devices for more than three years. iPad sales are set to continue declining, and the iPad Air 3, to be released in the first half of 2016, does not change that. IDC predicts that the detachable tablet market is set for growth in 2016 as more people turn to hybrid devices. Two-in-one tablets have been popularized by offerings like the Microsoft Surface, with options ranging dramatically in price and specs. I am not myself convinced that the growth will be as strong as IDC forecasts, even though companies have started to purchase tablets for workers in jobs such as retail sales or field work (Apple iPads and Windows and Android tablets managed by the company). Combined volume shipments of PCs, tablets and smartphones are expected to increase only in the single digits.

All your consumer tech gear should be cheaper come July, as there will be fewer import tariffs on IT products: a World Trade Organization (WTO) deal agrees that tariffs on imports of consumer electronics will be phased out over seven years starting in July 2016. The agreement affects around 10 percent of world trade in information and communications technology products and will eliminate around $50 billion in tariffs annually.


In 2015 storage was rocked to its foundations, and those new innovations will be taken into wider use in 2016. The storage market went through strategic, foundation-shaking turmoil in 2015 as the external shared disk array storage playbook was torn to shreds. The all-flash data centre idea has definitely taken off as an achievable vision, with primary data stored in flash and the rest held in cheap and deep storage. Flash drives largely solve the disk drive latency problem, so there is much less need for hybrid drives. There is conviction that storage should be located as close to servers as possible (virtual SANs, hyper-converged industry appliances and NVMe fabrics). The hybrid cloud concept was adopted and supported by everybody. Flash started out in 2-bits/cell MLC form, which rapidly became standard, and TLC (3-bits/cell, or triple-level cell) has started appearing. Industry-standard NVMe drivers for PCIe flash cards appeared. Intel and Micron blew non-volatile memory preconceptions out of the water in the second half of the year with their joint 3D XPoint memory announcement. Boring old disk tech got shingled magnetic recording (SMR) and helium-filled drive technology, as the drive industry focuses on capacity-optimizing its drives. We got key:value store disk drives with an Ethernet NIC on board, and basic GET and PUT object storage facilities came into being. The tape industry developed a 15TB LTO-7 format.

The use of SSDs will increase and their prices will drop. SSDs were in more than 25% of new laptops sold in 2015, are expected to be in 31% of new consumer laptops in 2016, and in more than 40% by 2017. The prices of mainstream consumer SSDs have fallen dramatically every year over the past three years, while HDD prices have not changed much. SSD prices will decline to 24 cents per gigabyte in 2016, and in 2017 they are expected to drop to 11-17 cents per gigabyte (meaning a 1TB SSD would on average retail for $170 or less).
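As a quick sanity check on those per-gigabyte forecasts, a few lines of Python reproduce the arithmetic (the prices are the forecast figures quoted above, not measured data):

    # Forecast consumer SSD prices in USD per gigabyte, from the text above.
    forecast = {2016: 0.24, 2017: 0.17}  # 2017: upper end of the 11-17 cent range

    for year, usd_per_gb in sorted(forecast.items()):
        # Retail price of a 1 TB (~1000 GB) drive at that per-gigabyte price.
        print(year, "1TB SSD ~ $%.0f" % (usd_per_gb * 1000))
    # 2016 1TB SSD ~ $240
    # 2017 1TB SSD ~ $170  -> matches the "$170 or less" estimate above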

Hard disk sales will decrease, but the technology is not dead. Sales of hard disk drives have been decreasing for several years now (118 million units in the third quarter of 2015), but according to Seagate, hard disk drives (HDDs) are set to stay relevant for at least 15 to 20 years. HDDs remain the most popular data storage technology because they are the cheapest in terms of per-gigabyte cost. While SSDs are generally getting more affordable, high-capacity solid-state drives are not going to become as inexpensive as hard drives any time soon.

Because all-flash storage systems with homogeneous flash media are still too expensive to serve as a solution for every enterprise application workload, enterprises will increasingly turn to performance-optimized storage solutions that use a combination of multiple media types to deliver cost-effective performance. The speed advantage of Fibre Channel over Ethernet has evaporated. Enterprises are also starting to seek alternatives to snapshots that are simpler and easier to manage, and that allow data and application recovery to a point just before a data error or logical corruption occurred.

Local storage and the cloud finally make peace in 2016, as decision-makers across the industry have now acknowledged the potential for enterprise storage and the cloud to work in tandem. Over 40 percent of data worldwide is expected to live on or move through the cloud by 2020, according to IDC.


Open standards for data center development are now a reality thanks to advances in cloud technology, with Facebook’s Open Compute Project serving as the industry’s leader in this regard. This allows more consolidation for those that want it. Consolidation used to refer to companies moving all of their infrastructure to the same facility, but some experts have begun to question this strategy, as the rapid increase in data quantities and apps in the data center has made centralized facilities more difficult to operate than ever before. Server virtualization, more powerful servers and an increasing number of enterprise applications will continue to drive higher I/O requirements in the datacenter.

Cloud consolidation starts in earnest in 2016: the number of options for general infrastructure-as-a-service (IaaS) cloud services and cloud management software will be much smaller at the end of 2016 than at the beginning. The major public cloud providers will gain strength, with Amazon, IBM SoftLayer and Microsoft capturing a greater share of the business cloud services market. Lock-in is a real concern for cloud users, because PaaS players have the age-old imperative to find ways to tie customers to their platforms and aren’t afraid to use them, so advanced users will want to establish reliable portability across PaaS products in multi-vendor, multi-cloud environments.

The year 2016 will be harder for legacy IT providers than 2015 was. In its report, IDC states that “By 2020, More than 30 percent of the IT Vendors Will Not Exist as We Know Them Today.” Many enterprises are turning away from traditional vendors and toward cloud providers, and they are increasingly leveraging open source; in short, they are becoming software companies. The best companies will build cultures of performance and doing the right thing – and will make data, and the processes around it, self-service for all their employees. Design thinking will guide companies that want to change the lives of their customers and employees. 2016 will see a lot more work in trying to manage services that simply aren’t designed to work together or even be managed – for example, getting Whatever-as-a-Service cloud systems to play nicely with existing legacy systems. Competent developers are therefore a scarce commodity. Some companies are starting to see the cloud as a form of outsourcing that is fast burning up in-house IT operations jobs, with varying success.

There are still too many old-fashioned companies that just can’t understand what digitalization will mean for their business. In 2016, some companies’ boards still think the web is just for brochures and porn and don’t believe their business models can be disrupted. It gets worse for many traditional companies: Amazon, for example, is a retailer both on the web and, increasingly, for things like food deliveries. Amazon and others are playing to win. Digital disruption has happened and will continue.

Windows 10 will gain more ground in 2016. If 2015 was a year of revolution, 2016 promises to be a year of consolidation for Microsoft’s operating system. I expect Windows 10 adoption in companies to start in 2016. Windows 10 is likely to be a success in the enterprise, but I expect that word from heavyweights like Gartner, Forrester and Spiceworks, suggesting that half of enterprise users plan to switch to Windows 10 in 2016, is more than a bit optimistic. Windows 10 will also be used in China, as Microsoft played the game better there than with Windows 8, which was banned in China.

Windows is now delivered “as a service”, meaning incremental updates with new features as well as security patches, but Microsoft still seems to work internally to a schedule of milestone releases. Next up is Redstone, rumoured to arrive around the anniversary of Windows 10, midway through 2016. Windows servers will get an update too: 2016 should also include the release of Windows Server 2016. Server 2016 includes updates to the Hyper-V virtualisation platform, support for Docker-style containers, and a new cut-down edition called Nano Server.

Windows 10 will get some of the features promised but not delivered in 2015. Windows 10 was promised for PCs and mobile devices in 2015 to deliver a unified user experience. Continuum is a new, adaptive user experience offered in Windows 10 that optimizes the look and behavior of apps and the Windows shell for the physical form factor and the customer’s usage preferences. The promise was the same unified interface for PCs, tablets and smartphones – but in 2015 it was delivered only for PCs and some tablets. Windows 10 Mobile for smartphones is finally expected to ship in 2016 – the release may be the last roll of the dice for Microsoft’s struggling mobile platform. Microsoft’s Plan A is to get as many apps and as much activity as it can on Windows on all form factors with the Universal Windows Platform (UWP), which enables the same Windows 10 code to run on phone and desktop. Despite a steady inflow of new well-known apps, it remains unclear whether the Universal Windows Platform can maintain momentum with developers. Can Microsoft keep the developer momentum going? I am not sure. In addition there are plans for tools for porting iOS apps and an Android runtime, so expect delivery of some or all of the Windows Bridges (iOS, web app, desktop app, Android) announced at the April 2015 Build conference, in the hope of getting more apps into the unified Windows 10 app store. Windows 10 does hold out some promise for Windows Phone, but it’s not going to make an enormous difference. Losing the battle for the web and mobile computing is a brutal loss for Microsoft: when you consider the size of those two markets combined, the desktop market seems like a stagnant backwater.

Older Windows versions will not die in 2016 as fast as Microsoft and security people would like. Expect Windows 7 diehards to continue holding out in 2016 and beyond. And there are still many companies that run their critical systems on Windows XP because “there are some people who don’t have an option to change.” Often the OS is running in automation and process control systems behind business- and mission-critical operations, both in the private sector and in government enterprises; the US Navy, for example, is using the obsolete Windows XP to run critical tasks. It all comes down to money and resources, but if someone is obliged to keep something running on an obsolete system, it is completely the wrong approach to information security.


Virtual reality has grown immensely over the past few years, but 2016 looks like the most important year yet: it will be the first time that consumers can get their hands on a number of powerful headsets for viewing alternate realities in immersive 3D. Virtual reality will move toward the mainstream as Sony, Samsung and Oculus bring consumer products to market in 2016. The whole virtual reality hype could be rebooted as early builds of the final Oculus Rift hardware start shipping to developers. Maybe HTC‘s and Valve‘s Vive VR headset will suffer in the next few months. Expect a banner year for virtual reality.

GPU and FPGA acceleration will be widely used in high-performance computing. Both Intel and AMD have products with the CPU and GPU on the same chip, and there is software support for using the GPU (learn CUDA and/or OpenCL). Many mobile processors also have the CPU and GPU on the same chip. FPGAs are circuits that can be baked into a specific application but can also be reprogrammed later. There was a lot of interest in 2015 in using FPGAs to accelerate computations as the next step after GPUs, and I expect that interest to grow even more in 2016. FPGAs are not quite as efficient as a dedicated ASIC, but they are about as close as you can get without translating the actual source code directly into a circuit. Intel bought Altera (a big FPGA company) in 2015 and plans to begin selling products with a Xeon chip and an Altera FPGA in a single package, possibly as early as 2016.
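For readers who want to try GPU computing, here is a minimal vector-add sketch using OpenCL through the pyopencl Python bindings (an assumption: pyopencl and an OpenCL runtime are installed; CUDA code follows the same pattern with NVIDIA-specific tooling):

    import numpy as np
    import pyopencl as cl  # assumes pyopencl + an OpenCL driver are installed

    a = np.random.rand(50000).astype(np.float32)
    b = np.random.rand(50000).astype(np.float32)

    ctx = cl.create_some_context()   # picks an OpenCL device (GPU if available)
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel body runs once per array element, in parallel on the device.
    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
    """).build()

    prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    assert np.allclose(out, a + b)   # same result as the CPU computation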

Artificial intelligence, machine learning and deep learning will be talked about a lot in 2016. Neural networks, which were academic exercises (but little more) for decades, are increasingly becoming mainstream success stories: heavy (and growing) investment in the technology – which enables the identification of objects in still and video images, words in audio streams, and the like after an initial training phase – comes from the formidable likes of Amazon, Baidu, Facebook, Google, Microsoft, and others. So-called “deep learning” has been enabled by the combination of the evolution of traditional neural network techniques, the steadily increasing processing “muscle” of CPUs (aided by algorithm acceleration via FPGAs, GPUs, and, more recently, dedicated co-processors), and the steadily decreasing cost of system memory and storage. There were many interesting releases at the end of 2015: Facebook Inc. in February released portions of its Torch software, while Alphabet Inc.’s Google division more recently open-sourced parts of its TensorFlow system. IBM also turned up the heat under competition in artificial intelligence by making SystemML freely available to share and modify through the Apache Software Foundation. So I expect 2016 to be the year these techniques are tried in practice, and I expect deep learning to be hot at CES 2016. Several respected scientists issued a letter warning about the dangers of artificial intelligence (AI) in 2015, but I don’t worry about a rogue AI exterminating mankind; I worry about an inadequate AI being given control over things that it’s not ready for. How will machine learning affect your business? MIT has a good free intro to AI and ML.
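To make the “initial training phase” concrete, here is a toy two-layer neural network learning XOR with plain numpy – a minimal sketch of backpropagation, not the API of any of the frameworks named above:

    import numpy as np

    # XOR: the classic function a single-layer network cannot learn.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 4))   # input -> hidden weights
    W2 = rng.normal(size=(4, 1))   # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):          # the training phase
        h = sigmoid(X @ W1)         # forward pass
        out = sigmoid(h @ W2)
        # Backward pass: gradient of the squared error through both layers.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= h.T @ d_out           # gradient descent, learning rate 1
        W1 -= X.T @ d_h

    print(out.round(2))             # approaches [[0], [1], [1], [0]]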

Computers, which excel at big data analysis, can help doctors deliver more personalized care. Can machines outperform doctors? Not yet. But in some areas of medicine, they can make the care doctors deliver better. Humans repeatedly fail where computers — or humans behaving a little bit more like computers — can help. Computers excel at searching and combining vastly more data than a human so algorithms can be put to good use in certain areas of medicine. There are also things that can slow down development in 2016: To many patients, the very idea of receiving a medical diagnosis or treatment from a machine is probably off-putting.

The Internet of Things (IoT) was talked about a lot in 2015, and it will be a hot topic for IT departments in 2016 as well. Many companies will notice that security issues are important in it. The newest wearable technology – smart watches and other smart devices – responds to voice commands and interprets the data we produce; it learns from its users and generates appropriate responses in real time. Interest in the Internet of Things will also bring interest in real-time business systems: not only real-time analytics, but real-time everything. This will start in earnest in 2016, but the trend will take years to play out.

Connectivity and networking will be hot, and it is not just about IoT. CES will focus on how connectivity is proliferating in everything from cars to homes, realigning diverse markets. The interest will affect job markets: network jobs are hot, and salaries are expected to rise in 2016, as wireless network engineers, network admins, and network security pros can expect above-average pay gains.

Linux will stay big in the network server market in 2016. The web server marketplace is one arena where Linux has had the greatest impact: today, the majority of web servers are Linux boxes, including most of the world’s busiest sites. Linux also runs many parts of the Internet infrastructure that moves the bits from server to user. Linux will continue to rule the smartphone market as well, being at the core of Android. New IoT solutions will most likely be built using Linux in many parts of the system.

Microsoft and Linux are not the enemies they were a few years ago. Common sense says that Microsoft and the FOSS movement should be perpetual enemies, but it looks like Microsoft is waking up to the fact that Linux is here to stay. Microsoft cannot feasibly wipe it out, so it has to embrace it. Microsoft is already partnering with Linux companies to bring popular distros to its Azure platform. In fact, Microsoft has even gone so far as to create its own Linux distro for its Azure data center.


Web browsers are increasingly going 64-bit, as Firefox started the 64-bit era on Windows and Google is killing Chrome for 32-bit Linux. At the same time, web browsers are losing old legacy features like NPAPI and Silverlight. Who will miss them? The venerable NPAPI plugin standard, which dates back to the days of Netscape, is now showing its age and causing more problems than it solves, and will see native support removed from Firefox by the end of 2016. It was already removed from Google Chrome with very little impact. The biggest issue was the lack of support for Microsoft’s Silverlight, which brought down several top streaming media sites – but they are actively switching to HTML5 in 2016. I don’t miss Silverlight. Flash will continue to be available owing to its popularity for web video.

SHA-1 will be at least partially retired in 2016. Due to recent research showing that SHA-1 is weaker than previously believed, Mozilla, Microsoft and now Google are all considering bringing the deadline forward by six months to July 1, 2016.
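The migration is from SHA-1 signatures to the SHA-2 family (typically SHA-256). Python’s standard hashlib shows the two side by side – a minimal illustration of the hash functions involved, not a certificate tool:

    import hashlib

    data = b"message to be signed"
    print("SHA-1  :", hashlib.sha1(data).hexdigest())    # 160-bit digest, being retired
    print("SHA-256:", hashlib.sha256(data).hexdigest())  # SHA-2 family replacement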

Adobe’s Flash has been under attack from many quarters over security as well as for slowing down web pages. If you wish that Flash would finally die in 2016, you might be disappointed. Adobe seems to be trying to kill the name with a rebranding trick: Adobe Flash Professional CC is now Adobe Animate CC. In practice it probably does not mean much, but Adobe seems to acknowledge the inevitability of an HTML5 world. Adobe wants to remain a leader in interactive tools, and the pivot to HTML5 requires new messaging.

The trend of trying to use the same language and tools on both the user end and the server back-end continues. Microsoft is pushing its .NET and Azure cloud platform tools. Amazon, Google and IBM have their own sets of tools. Java is in decline. JavaScript is going strong on both the web browser and the server end with node.js, React and many other JavaScript libraries. Apple is also trying to bend its Swift programming language, so far used mainly to make iOS applications, to run on servers with the Perfect project.

Java will still stick around, but Java’s decline as a language will accelerate as new stuff isn’t being written in Java, even if it runs on the JVM. We will not see the new Java 9 in 2016, as Oracle has delayed its release by six months; The Register reports that Java 9 is delayed until Thursday, March 23rd, 2017, just after tea-time.

Containers will rule the world as Docker continues to develop, gains security features, and adds various forms of governance. Until now Docker has been for tire-kicking, used in production only by the early-adopter crowd, but that can change as vendors start to claim that they can properly manage big data and container farms.

NoSQL databases will take hold wherever they can be called “highly scalable” or “cloud-ready.” Expect 2016 to be the year when a lot of big brick-and-mortar companies publicly adopt NoSQL for critical operations. Basically, NoSQL can be seen as a key:value store, and this idea has also expanded to storage systems: we got key:value store disk drives with an Ethernet NIC on board and basic GET and PUT object storage facilities, as sketched below.
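Stripped to its essence, the key:value model behind both NoSQL databases and those object-storage drives is just two operations. A purely illustrative in-memory sketch in Python (not any particular product’s API):

    class KeyValueStore:
        """Toy key:value store exposing the GET/PUT model described above."""

        def __init__(self):
            self._data = {}

        def put(self, key, value):
            self._data[key] = value     # overwrite-on-write, like object stores

        def get(self, key):
            return self._data[key]      # raises KeyError if the key is absent

    store = KeyValueStore()
    store.put("user:42", b'{"name": "Alice"}')
    print(store.get("user:42"))         # b'{"name": "Alice"}'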

In the database world, Big Data will still be big, but it needs to be analyzed in real time. A typical big data project usually involves some semi-structured data, a bit of unstructured data (such as email), and a whole lot of structured data (stuff stored in an RDBMS). While the cost of Hadoop on a per-node basis is pretty inconsequential, the cost of understanding all of the schemas, getting them into Hadoop, and structuring them well enough to perform the analytics is still considerable. Remember that you’re not “moving” to Hadoop, you’re adding a downstream repository, so you need to worry about systems integration and latency issues. Apache Spark will also get interest, as Spark’s multi-stage in-memory primitives provide more performance for certain applications. Big data brings with it responsibility – digital consumer confidence must be earned.
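As a flavour of those in-memory primitives, here is a minimal PySpark word count – a sketch assuming a Spark installation, with a hypothetical input path:

    from pyspark import SparkContext

    sc = SparkContext(appName="WordCount")

    # Each transformation builds on the previous one; intermediate results
    # can be cached in memory, which is where Spark gains over plain MapReduce.
    counts = (sc.textFile("hdfs:///data/input.txt")   # path is hypothetical
                .flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))

    counts.cache()                    # keep the RDD in memory for reuse
    for word, n in counts.take(10):
        print(word, n)

    sc.stop()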

IT security continues to be a huge issue in 2016. You might be able to achieve adequate security against hackers and internal threats, but every attempt to make systems idiot-proof just means the idiots get upgraded. Firms are ever more connected to each other and to the general outside world, so in 2016 we will see even more service firms accidentally leaking critical information and a lot more firms having their reputations scorched by incompetence-fuelled security screw-ups. Good security people are needed more and more – a joke doing the rounds among IT execs doing interviews is: “if you’re a decent security bod, why do you need to look for a job?”

There will still be unexpected single points of failure in big distributed networked systems. The cloud behind the silver lining is that Amazon or any other cloud vendor can be as fault-tolerant, distributed and well supported as you like, but if a service like Akamai or Cloudflare were to die, you still stop. That’s not a single point of failure in the classical sense, but it’s really hard to manage unless you go for full cloud agnosticism – which is costly. This is hard to justify when the failure rate is so low, so the irony is that the reliability of the content delivery networks means fewer businesses work out what to do if they fail. Oh, and no one seems to test their mission-critical data centre properly, because it’s mission-critical. So they just over-specify where they can and cross their fingers (= pay twice and get half the coverage for other vulnerabilities).

For IT start-ups it seems that Silicon Valley’s cheap-capital party is coming to an end: the valley is cooling, not crashing. Valuations are falling and valuation expectations are re-calibrating down as the era of cheap money ends. That could mean trouble for weaker startups.

 

933 Comments

  1. Tomi Engdahl says:

    Say hello to Samsung and Netlist’s flash-DRAM grenade: HybriDIMM
    Shoving NAND on a DIMM with a DRAM cache to speed access
    http://www.theregister.co.uk/2016/08/08/samsung_and_netlist_hybridimm/

    Gold plate can give a durable and affordable alloy a 24-carat veneer finish, adding value to cheap metal. DRAM gives Samsung-Netlist Hybrid DIMMs a cache veneer, providing what looks like DRAM to applications but is really persistent NAND underneath, cheaper than DRAM and lots of it.

    HybriDIMM is the product result of combining Netlist HyperVault technology with Samsung DRAM and NAND; an initiative that began in November last year.

    The idea is to use NAND as a DRAM substitute, masking its slowness with predictive software called PreSight, that loads data into DRAM from NAND in anticipation of it being needed.

    The first generation HybriDIMM is configurable with 256-512GB NAND + 8‑16GB DRAM per DIMM/1866 MTS 3DPC for Broadwell CPUs, with Linux support. It has block storage mode and an application direct mode.

    Gen 2 HybriDIMM will add to this and be configurable to 1TB NAND + 32GB DRAM per DIMM/2400 MTS 2DPC for Purley processors. It will have both Linux and Windows support.

    This is broadly similar to Diablo’s Memory1 technology, which currently has 128GB DDR4-format DIMMs available, with 256GB ones coming, enabling up to 4TB of “memory” in a 2-socket server.

    There are three broad classes of non-volatile DIMMs – that is, memory DIMMS with flash onboard – according to JEDEC task group leader Jonathan Hinkle:

    NVDIMM-N is a DRAM/Flash hybrid memory module that only uses the Flash to save the DRAM contents upon a triggering event such as power failure. It only uses the Flash to make the data in DRAM persistent, and only needs enough Flash to do this.
    NVDIMM-F is a category we created to represent all-flash DIMMs – think ULLtraDIMM – like those made by Diablo/SanDisk.
    NVDIMM-P is not fully defined yet and may mostly have speed like DRAM, but may be somewhat slower by including capacity from NVM or NAND Flash.

  2. Tomi Engdahl says:

    Companies prefer the old programming languages

    Recent research reveals that companies continue to seek experts in “old” languages: topping the list are Java, Python and C.

    The results showed that practically all companies recruiting coders want Java experts, nearly 90 percent of enterprises are looking for Python coders, and preference for C rises to 70 percent.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4800:yritykset-suosivat-vanhoja-ohjelmointikielia&catid=13&Itemid=101

  3. Tomi Engdahl says:

    Ina Fried / Recode:
    Intel is acquiring deep-learning company Nervana Systems, the first big exit for Andy Rubin’s Playground Global; a source says the deal is valued at $350M+

    Intel is paying at least $350 million to buy deep-learning startup Nervana Systems
    The chip giant is betting that machine learning is going to be a big deal in the data center.
    http://www.recode.net/2016/8/9/12413600/intel-buys-nervana–350-million

    Intel is snapping up deep learning startup Nervana Systems in a huge bet that artificial intelligence represents the next big shift inside corporate data centers.

    The chip giant isn’t actually saying how much it is paying, but a source with knowledge of the deal said it is valued at more than $350 million.

    Intel vice president Jason Waxman told Recode that the shift to artificial intelligence could dwarf the move to cloud computing. Machine learning, he said, is needed as we move from a world in which people control a couple of devices that connect to the Internet to one in which billions of devices are connecting and talking to one another.

    “There is far more data being given off by these machines than people can possibly sift through,”

    Nervana’s approach has some direct appeal to a chipmaker like Intel in that the company has been working to bring machine learning all the way into the silicon, rather than simply making software that can run on top of anyone’s cluster of graphics chips.

    In data centers, Intel is operating from a position of strength as its chips dominate.

  4. Tomi Engdahl says:

    Micron demos 3D XPoint in drives
    Latency and IOPS, but no prices shown
    http://www.eetimes.com/document.asp?doc_id=1330280&

    Micron revealed performance data of working solid-state drives based on 3D XPoint memory on the first day of the Flash Memory Summit here. Separately rival Toshiba showed progress on conventional NAND flash, and mega-customer Facebook called for multiple new kinds of memory products.

    A Micron engineer showed prototype SSDs with XPoint memory chips on a PCIe Gen 3 interface handling writes in less than 20 microseconds and reads in less than 10 microseconds, ten times faster than existing NAND SSDs. Devices using four PCIe channels delivered up to 900,000 I/O operations per second. SSDs using eight-lane PCIe peaked at 1.9 million IOPS.

    Overall, Micron promised the drives it will brand as Quantx will deliver four times the capacity of DRAM. Compared to NAND it will offer ten times lower latency and 10x higher IOPS at up to 32 queues.

    Facebook is among a handful of potential customers testing XPoint products.

  5. Tomi Engdahl says:

    Monkeying with Virtual Reality
    Brave New World: Virtual, augmented, hybrid, hyper, diminished realities
    http://www.eetimes.com/author.asp?section_id=30&doc_id=1330089&

    Augmented reality involves adding computer-generated sensory input to a real-world environment, but what if the computer deletes things from the scene?

    My mind is currently churning with thoughts about the potential uses of various forms of virtual and augmented realities (VR and AR), all sparked by my recent acquisition of a virtual reality system composed of an Oculus Rift coupled with a processor-graphics combo.

  6. Tomi Engdahl says:

    SIGGRAPH 2016: Still Addicted to Computer Graphics
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1330267&

    Still looking for light in all the odd places…

    I haven’t been to the ACM SIGGRAPH conference in more than 20 years, but the mere mention of the special interest group (SIG) still evokes fond memories. Excerpts from last week’s gathering (July 24-28) in Anaheim ― particularly the computer animation contents ― reminded me once again of what I had missed. It is the technology ― the pure computation muscle it takes to animate a 30-second video sequence, or a feature-length film, for that matter ― that still fascinates.

    There is hardly a commercial animated or special effects film or lead-in to the 10-o’clock TV news that isn’t supported by computer graphics. The SIGGRAPH gathering reminds animators of how many machine cycles it takes your graphics processing unit (GPU) to light up a video screen.

    Follow the light
    Despite the intensified fascination with immersive realities (AR and VR), the technology for animating a computer graphics screen remains very much familiar. “Pay attention to where the light comes from in each frame of the film,”

    Looking at the presentations, pictures and online renderings made available from SIGGRAPH, it turns out the basic questions haven’t changed. We’re still trying to identify where light comes from in a graphics picture frame. Only the scale and sophistication have grown massively: what color does your 16-millionth pixel have to be if you’re rendering a chase through thick fog? You have to account for every pixel ― millions of them ― by asking the same question:

    In terms of the pictures it renders, your graphics processing unit (GPU) will treat each pixel as some combination of red, green and blue, and separately calculate the light intensity (256 levels, 2^8, i.e. 8 bits) for each color. Thus, on a computer graphics screen, each pixel represents one of 16.7 million possible colors (256 x 256 x 256), refreshed ― depending on the horizontal scan rate ― 60 or 75 times per second. Historically, there was a component called a Graphics DAC, or RAMDAC, three digital-to-analog converters on one chip, which would execute RGB color combinations and light intensity levels for each pixel. While the RAMDAC disappeared inside the integrated GPU after 2006, the principles of operation are largely the same today: the GPU determines the color levels and light intensity for each pixel on the screen, still asking the same basic question: where does the light come from in each frame?

    To be sure, there are algorithms and programming techniques which speed the computation of pixel light intensity, specialized hardware to speed the delivery of pixels to a display screen, and cataloged software subroutines that can supply code for illuminating furry animals.

    Note that SIGGRAPH presentations (those viewable online) seem divided between physics-and-math tutorials on what happens to light on various reflective and light-absorbing surfaces, and graphics product announcements with demonstrations intended to make film makers like George Lucas sit up and take notice.

    SIGGRAPH University – Introduction to “Physically Based Shading in Theory and Practice”
    https://www.youtube.com/watch?v=j-A0mwsJRmk

  7. Tomi Engdahl says:

    The First Evil Maid-Proof Computer
    http://hackaday.com/2016/08/09/the-first-evil-maid-proof-computer/

    It doesn’t matter how many bits your password has, how proven your encryption is, or how many TrueCrypt volumes are on your computer. If someone wants data off your device, they can get it if they have physical access to your device.

    Today, Design Shift has released ORWL (as in George Orwell), the first computer designed with physical security in mind. This tiny disc of a computer is designed to defeat an Evil Maid through some very clever engineering on top of encryption tools we already use.

    At its heart, ORWL is a relatively basic PC. The CPU is an Intel Skylake, graphics are integrated Intel 515 with 4K support over a micro HDMI connection, RAM is either 4 or 8GB, storage is a 120 or 480GB SSD with AES 256-bit encryption, and wireless is Bluetooth 4.1 and 802.11 a/b/g/n/AC. Power is delivered through one of the two USB 3.0 Type C connectors.

    The reason ORWL exists is to be a physically secure computer, and this is where the fun happens. ORWL’s entire motherboard is surrounded by an ‘active secure mesh’ – an enclosure wrapped with electronic traces monitored by the MAX32550 DeepCover Secure Cortex-M3 microcontroller.

    If this microcontroller detects a break in this mesh, the SSD auto-encrypts, the CPU shuts down, and all data is lost. Even turning on the computer requires a secure key with NFC and Bluetooth LE. If ORWL is moved, or inertial sensors are tripped when the key is away, the secure MCU locks down the system.

    We first heard of ORWL a few months ago from Black Hat Europe. Now this secure computer is up on Crowdsupply, with an ORWL available for $700

    https://www.crowdsupply.com/design-shift/orwl/

  8. Tomi Engdahl says:

    New policy demands 20 percent of federal code be open source
    Plus, agencies will have to share internally-developed code among themselves.
    https://www.engadget.com/2016/08/09/new-policy-demands-20-percent-of-federal-code-be-open-source/

    For years, the Obama Administration has been pushing for greater transparency and parity between federal agencies and the general public. After months of negotiations and discussions, the Office of Management and Budget is easing open federal computer code for inspection. The OMB revealed its finalized requirements for the Federal Source Code policy on Monday, which demand federal projects make at least 20 percent of their computer code open source. What’s more, agencies will be expected to share all internally-developed code with one another.

    The OMB plans to test this new policy during a two-year pilot program. Should the release of this code be deemed more valuable than whatever issues and unforeseen pitfalls arise from its being made public, the OMB may decide to increase the open-source requirements.

    These new rules don’t apply to privately developed code, even if it’s used by the government.

  9. Tomi Engdahl says:

    Flash and Chrome
    https://chrome.googleblog.com/2016/08/flash-and-chrome.html

    Adobe Flash Player played a pivotal role in the adoption of video, gaming and animation on the Web. Today, sites typically use technologies like HTML5, giving you improved security, reduced power consumption and faster page load times. Going forward, Chrome will de-emphasize Flash in favor of HTML5. Here’s what that means for you. Today, more than 90% of Flash on the web loads behind the scenes to support things like page analytics. This kind of Flash slows you down, and starting this September, Chrome 53 will begin to block it. HTML5 is much lighter and faster, and publishers are switching over to speed up page loading and save you more battery life. You’ll see an improvement in responsiveness and efficiency for many sites.

    In December, Chrome 55 will make HTML5 the default experience, except for sites which only support Flash. For those, you’ll be prompted to enable Flash when you first visit the site. Aside from that, the only change you’ll notice is a safer and more power-efficient browsing experience.

  10. Tomi Engdahl says:

    Software upgrade exhaustion
    http://www.edn.com/electronics-blogs/brians-brain/4442522/Software-upgrade-exhaustion?_mc=NL_EDN_EDT_EDN_consumerelectronics_20160810&cid=NL_EDN_EDT_EDN_consumerelectronics_20160810&elqTrackId=13f30f377e3e4c26abce6ac1bf1b6376&elq=ea5d319c66e4459cb3efe7a5eefb2257&elqaid=33392&elqat=1&elqCampaignId=29183

    The updates went more smoothly than I feared they might, a worry that had compelled me to ensure I’d made full backups of both systems beforehand. But along with the OS upgrades came requisite upgrades to a whole host of utilities and other applications that no longer worked properly as-is. Time-consuming? Yep. Tedious? You bet. Frustrating? Need I respond? Normally

    Wait … that’s not all. As I type these words, I’m backing up both of my primary NASs, a four-drive Netgear ReadyNAS NV+ and two-drive Duo, to USB 2-tethered external HDDs. I

    Plus, there’s the fact that the aggregate network storage backup requirements of my ever-expanding computing stable have exceeded the capacities of the NASs’ current hard drives … and since both NASs employ RAID arrays, I can’t just swap out the existing HDDs for larger replacements and automatically gain more storage space. Instead, I need to back the NASs up, replace all of the HDDs (thereby wiping the NASs), create new RAID arrays, then copy the backed-up files back … crossing my fingers that no bits get dropped in the process.

    Speaking of counting my blessings, after reading this, you might chalk these up as the “first world” rants of someone who should be grateful for the technology abundance in his life (and is always free to discard some of it in the pursuit of simplification, after all). You’d be right. And I realize that software updates have their place; both to fix bugs and add (sometimes questionably) valuable features. But geez, this constantly running upgrade treadmill is exhausting. Sound off with your thoughts if you find yourself in the same place.

  11. Tomi Engdahl says:

    Flash technology is now developing at a rapid pace as manufacturers move to 3D structures. At the Flash Memory Summit in Santa Clara, Samsung introduced future products based on its V-NAND circuits. The company believes that before 2020 a single solid-state disk will hold more than one hundred terabytes of data.

    In Santa Clara, Samsung introduced a 32 TB SAS disk aimed at servers. The drive is built from as many as 512 of the new 64-layer V-NAND chips, which Samsung packs into 32 solid-state packages.

    The 32 TB SSD will go into production in 2.5-inch format next year. It reads data at 1,500 megabytes per second and writes at 900 megabytes per second.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4820:ssd-levylle-sopii-pian-100-teratavua&catid=13&Itemid=101

  12. Tomi Engdahl says:

    Hewlett Packard Enterprise acquires SGI for $275 million
    http://venturebeat.com/2016/08/11/hewlett-packard-enterprise-acquires-sgi-for-275-million/

    Hewlett Packard Enterprise, which split from HP last year, today announced that it has acquired SGI, a company that makes servers, storage, and software for high-performance computing, for $275 million in cash and debt.

    SGI (originally known as Silicon Graphics) was cofounded in 1981 by Jim Clark, who later cofounded Netscape with Marc Andreessen. Following a years-long decline, and after being de-listed from the New York Stock Exchange, SGI filed for Chapter 11 bankruptcy in 2009. That year it was acquired by Rackable Systems, which later adopted the SGI branding. SGI’s former campus in Mountain View, California, is now the site of the Googleplex.

    “HPE and SGI believe that by combining complementary product portfolios and go-to-market approaches they will be able to strengthen the leading position and financial performance of the combined business,” HP said in a statement on the deal.

    In combining itself with another legacy Silicon Valley brand, HPE is bolstering its enterprise-focused hardware and software with storied HPC gear.

  13. Tomi Engdahl says:

    Deep Learning To Be In Smartphones Soon, Say CEVA, Rockchip
    http://www.eetimes.com/document.asp?doc_id=1330287&

    China’s leading fabless semiconductor company Rockchip has licensed CEVA’s XM4 imaging and vision DSP to enhance the imaging and computer vision capabilities of its SoC product lines.

    Rockchip will leverage the CEVA-XM4 for a range of advanced imaging and vision features at low power consumption among which include low-light enhancement, digital video stabilization, object detection and tracking, and 3D depth sensing. In addition, the CEVA-XM4 will enable Rockchip to use the latest deep learning technologies, utilizing CEVA’s comprehensive Deep Neural Network (CDNN2) software framework.

    By offloading these performance-intensive tasks from the CPUs and GPUs, the highly-efficient DSP dramatically reduces the power consumption of the overall system, while providing complete flexibility.

    Deep learning features soon to integrate smartphones, say partners CEVA and Rockchip
    http://www.electronics-eetimes.com/news/deep-learning-features-soon-integrate-smartphones-say-partners-ceva-and-rockchip

  14. Tomi Engdahl says:

    Intel to Acquire Deep Learning Nervana
    Nervana neural chip part of the deal
    http://www.eetimes.com/document.asp?doc_id=1330281&

    Intel will announce its intention to acquire Nervana Systems at its Intel Developer Forum next week (IDF 2016, San Francisco, Calif., August 16-to-18)—a bid to obsolete the graphics processor unit (GPU) for deep learning artificial intelligence (AI) applications.

    Intel dominates the high-performance computing (HPC) market, but Nvidia has made significant inroads into deep learning verticals with its sophisticated GPUs. However, Nervana Systems (Palo Alto, Calif.) has already made a significant dent in Nvidia’s Cuda software for its GPUs with Nervana’s Neon cloud service that is Cuda-compatible. Intel, however, is acquiring Nervana for its promised deep-learning accelerator chip, which it promises by 2017. If the chip plays out as advertised, Intel will sell Deep Learning accelerator hardware boards that beat Nvidia’s GPU boards, while its newly acquired Neon cloud service will outperform Nvidia’s Cuda software.

  15. Tomi Engdahl says:

    DRAMs to drag ICs to -2% in 2016
    http://www.eetimes.com/document.asp?doc_id=1330288&

    Oversupply of DRAM will drive average selling prices (ASPs) of the memory chips down 16% this year, dragging down the overall IC market to a contraction of 2% in 2016, according to the latest report from market watcher IC Insights.

    Declining shipments of PC, notebooks and tablets as well as a slowdown in smartphone growth will contribute to an overall decline of 19% in the DRAM market this year, the company said. DRAM prices are known for big swings with ASPs hitting a recent high of $3.16 in 2014 up from a recent low of $1.69 in 2012.

  16. Tomi Engdahl says:

    Samsung Debuts 3D XPoint Killer
    3D NAND variant stakes out high-end SSDs
    http://www.eetimes.com/document.asp?doc_id=1330285&

    Samsung lobbed a new variant of its 3D NAND flash into the gap Intel and Micron hope to fill with their emerging 3D XPoint memory. The news came one day after Micron showed at the Flash Memory Summit performance figures for its version of the XPoint solid-state drives (SSDs) under a new Quantx brand.

    Samsung announced plans for what it called Z-NAND chips that will power SSDs with similar performance but lower costs and risk than the 3D XPoint drives. However, it was secretive about the details of the technology that will appear in products sometime next year.

    By contrast, a Micron engineer leading its XPoint SSD program was surprisingly candid in an interview with EE Times. She described current prototypes using early XPoint chips and an FPGA-based controller for the SSDs expected to ship in about a year.

    Samsung’s Z-NAND will deliver 10x faster reads than multi-level cell flash and writes that are twice as fast, the company said. At the drive level, they will support both reads and writes at about 20 microseconds, suggesting some of write performance comes from an enhanced controller.

  17. Tomi Engdahl says:

    Anya George Tharakan / Reuters:
    Nvidia Q2 revenue up 24% YoY to $1.43B, with net income $253M, up from $26M one year ago

    Gaming, data center strength propels Nvidia to another solid quarter
    http://www.reuters.com/article/us-nvidia-results-idUSKCN10M2EG

  18. Tomi Engdahl says:

    Josh Mitchell / Wall Street Journal:
    Tech companies are increasingly hiring students from coding boot camps but doubt whether such academies can replace the four-year computer science degree — Employers are increasingly hiring graduates from nontraditional schools like Flatiron — NEW YORK—In a graffiti-splashed classroom …

    Coding Boot Camps Attract Tech Companies
    Employers are increasingly hiring graduates from nontraditional schools like Flatiron
    http://www.wsj.com/article_email/coding-boot-camps-attract-tech-companies-1470945503-lMyQjAxMTE2ODE2MTUxMzE4Wj

    In a graffiti-splashed classroom in lower Manhattan, students are learning to write computer code at a private academy whose methods and results have caught the eye of Silicon Valley and the Obama administration.

    The Flatiron School’s 12-week course costs $15,000, but earns students no degree and no certificate. What it does get them, at an overwhelming rate, is a well-paying job. Nearly everyone graduates, and more than nine in 10 land a job within six months at places like Alphabet Inc.’s Google and Kickstarter. Average starting salary: $74,447.

    Employers are increasingly hiring graduates of the Flatiron model—short, intensely focused curricula that are constantly retailored to meet company needs. Success, its backers say, could help fuel a revolution in how the U.S. invests in higher education, pushing more institutions toward teaching distinct aptitudes and away from granting broad degrees.

  19. Tomi Engdahl says:

    Jena McGregor / Washington Post:
    As his fifth anniversary as Apple CEO approaches, Tim Cook reflects on his tenure and Apple’s future: mistakes, progress made, AI and AR efforts, more — Apple’s CEO talks iPhones, AI, privacy, civil rights, missteps, China, taxes, Steve Jobs — and steers right past the car rumors

    Tim Cook, the interview: Running Apple ‘is sort of a lonely job’
    http://www.washingtonpost.com/sf/business/wp/2016/08/13/2016/08/13/tim-cook-the-interview-running-apple-is-sort-of-a-lonely-job/

  20. Tomi Engdahl says:

    14 Views from the Flash Summit
    http://www.eetimes.com/document.asp?doc_id=1330297&

    New persistent memories and techniques promise to reshape computing. At the Flash Memory Summit here, engineers talked about how they are driving shifts in everything from server design and network storage to machine learning and flash chip prices.

    One of the hot topics of the event was closing the gap between flash storage arrays on the network and solid-state drives (SSDs) on the server. The idea that systems could access flash memory whether it is local or on the other side of the data center is driving new system, silicon and software designs.

    Fueling the trend, the NVM Express group just released a specification to enable the NVMe flash interface to run over networks such as Ethernet, Fibre Channel and Infiniband. The so-called NVMe over fabrics spec supports various schemes for direct memory access.

    The upcoming PCI Express Gen 4 standard also is driving the move, in part because PCIe already forms the plumbing for NVMe. The Gen 4 speeds will blow away older SAS and SATA interfaces on solid-state drives. Ultimately PCIe is expected to dominate SSDs and become relatively low cost.

    There’s plenty of work to be done, noted Stefanie Woodbury, director of advanced architecture at Micron and design lead for the company’s 3D XPoint-based SSDs. Interoperability will be critical given three fabrics, multiple direct-memory schemes and various block, file and streaming storage semantics – as well as emerging memory types such as 3D XPoint, she said.

  21. Tomi Engdahl says:

    AMD, Nvidia GPU Battle Heats Up
    http://www.eetimes.com/document.asp?doc_id=1330305&

    The first battles between AMD’s Polaris and Nvidia’s Pascal are now being waged, but it’s too early to tell who will win the war of the new graphics processors.

    “It may come down to whose factory is pumping out the most,” McCarron said.

    Nvidia may have an edge here in using TSMC’s 16FF+ process. Multiple chips are now ramping in the technology the Taiwan foundry took some time to flesh out.

    By contrast AMD depends on the 14nm process Globalfoundries licensed from Samsung. AMD has a history of great architectural innovations in silicon it has had trouble making in some of the same fabs it spun out several years ago as the genesis of Globalfoundries.

    “It’s hard to tell if anyone has an edge in product delivery, but that should become obvious in the next quarter or two as we go into the holiday season,” said McCarron.

    Ultimately the two giants could each claim victories in different segments of the market.

  22. Tomi Engdahl says:

    HybriDIMM uses the standard DDR4 interface and fits Intel x86 servers directly, without BIOS or hardware modifications. Applications, however, get access to their data up to a thousand times faster. And compared to other high-end memory, HybriDIMM is up to 80 percent cheaper.

    There are several existing designs in which NAND flash backs up the primary DRAM; some of them, for example, flush the DRAM contents to flash during a power outage. According to the companies, the hybrid DRAM solution that Netlist and Samsung have built is, however, the first to bring storage-class capacity at working-memory speed.

    First-generation HybriDIMM memory combines 256-512 GB of NAND with 8-16 GB of DRAM. In the second-generation version the NAND can be scaled up to a terabyte, and the DRAM grows to 32 GB.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4845:hybridimuisti-korvaa-dram-muistit&catid=13&Itemid=101

  23. Tomi Engdahl says:

    WD: Resistance is not futile
    SanDisk ReRAM becomes WD’s XPoint competitor
    http://www.theregister.co.uk/2016/08/16/wd_says_resistance_is_not_futile/

    WD, with its acquired SanDisk operation, is squaring up to Intel and Micron’s XPoint with ReRAM – Resistive RAM technology.

    Back in October last year, the then independent SanDisk joined forces with HPE to fight XPoint. The two signed an agreement to develop Storage-Class Memory (SCM*), a non-volatile storage medium with DRAM-class access speed, which meant faster data access than NAND. The two said at the time that their SCM was “expected to be up to 1,000 times faster than flash storage and offer up to 1,000 times more endurance than flash storage.”

    HP would contribute its Memristor tech, with SanDisk bringing its ReRAM tech. Once WD acquired SanDisk, we waited to see if ReRAM would remain its anti-XPoint weapon of choice.

  24. Tomi Engdahl says:

    Even laptops are not selling

    PC sales were supposed to turn upward this year: Windows 10’s teething problems were supposed to be over, and Intel’s Kaby Lake processors were supposed to bring the extra power users are looking for. None of this, however, will turn laptop sales around.

    DigiTimes predicts that fewer than 150 million laptops will be sold this year in total, 7.3 percent less than a year earlier.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4850:edes-lapparit-eivat-mene-kaupaksi&catid=13&Itemid=101

  25. Tomi Engdahl says:

    Lucas Matney / TechCrunch:
    Intel shows off a wireless all-in-one virtual reality headset called Project Alloy, with all of its cameras, sensors, and input controls built-in — Intel moved full-throttle into the VR space with the announcement of an all-in-one virtual reality headset alongside a new Intel Merged Reality platform at the Intel Developers Forum.

    Intel shows off all-in-one Project Alloy virtual reality headset
    https://techcrunch.com/2016/08/16/intel-shows-off-all-in-one-project-alloy-virtual-reality-headset/

    Intel moved full-throttle into the VR space with the announcement of an all-in-one virtual reality headset alongside a new Intel Merged Reality platform at the Intel Developers Forum. An announcer teased the dramatic reveal with the phrase, “what if you could move freely without any restriction on what you do next?”

    Project Alloy is completely wireless, something that distinguishes it significantly from headsets like the Oculus Rift or HTC Vive. The company’s headset is notable in that it is an all-on-one device with all of the cameras, sensors and input controls built-in.

    Intel CEO Brian Krzanich called virtual reality, “one of those fundamental shifts that redefines how we work, how we’re entertained, and how we communicate in the world.”

    Project Alloy relies entirely on hand-tracking as input through its integrated sensors.

  26. Tomi Engdahl says:

    All Windows 10 PCs will support HoloLens next year
    AR comes closer to mainstream
    http://www.theverge.com/2016/8/16/12503868/microsoft-windows-holographic-windows-10-shell-features

    Virtual reality and augmented reality are being touted as the next big thing in truly innovative computing, but they’re still not exactly mainstream due to the computing power required and the need to be tethered to a phone or PC. Intel and Microsoft think they can change that.

    Today at Intel’s annual developers conference, Microsoft’s Windows chief, Terry Myerson, announced a partnership with the chip maker that will make all future Windows 10 PCs able to support mixed reality applications.

    “All Windows 10 PCs next year will include a holographic shell,” Myerson said, the same operating system that runs on the company’s HoloLens headset. PCs will work with a head-mounted display, and run all Windows Holographic applications, Myerson said, allowing wearers to interact not just with 3D applications but also 2D apps. Microsoft will enable these apps through a future Windows update and the company’s universal Windows app platform.

    Reply
  27. Tomi Engdahl says:

    Dean Takahashi / VentureBeat:
    Intel debuts a silicon photonics module for data centers, which uses a hybrid laser to beam info at 100Gbps across 2km; Microsoft’s Azure is an early adopter — Intel is launching a new silicon photonics product that will make it a lot easier to hurl data around data centers at tremendous speeds.

    Intel debuts silicon photonics module for lightning-fast connectivity in data centers
    http://venturebeat.com/2016/08/17/intels-silicon-photonics-for-data-centers-can-send-data-at-100-gigabits-per-second-over-two-kilometer/

    Intel is launching a new silicon photonics product that will make it a lot easier to hurl data around data centers at tremendous speeds.

    The Intel PSM4 silicon photonics module can deliver 100 gigabits per second across two kilometers, making it easier to share data at high speeds across the “spine” of a data center. The technology is the result of years-long efforts to bring both electronics and optical components onto a single piece of silicon, which is lower cost and easier to make.

    “Electrons running over network cables won’t cut it,” Bryant said. “Intel has been working on silicon photonics over 16 years. We are the first to light up silicon.”

    Other ways of delivering data often require optical technology, which is harder to manufacture and costs more than silicon-based products.

    Microsoft is an early adopter of the silicon photonics technology for use in its Azure data centers. Microsoft is also starting to test field-programmable gate arrays (FPGAs) from Intel’s Altera business in its data centers.

    Reply
  28. Tomi Engdahl says:

    Lucian Armasu / Tom’s Hardware:
    Intel unveils next-gen Xeon Phi chips for deep learning; Nvidia says Intel used old benchmarks in marketing; Xeon Phi chips seem to compete on price regardless — Intel recently published some Xeon Phi benchmarks, which claimed that its “Many Integrated Core” Phi architecture …

    Chip Fights: Nvidia Takes Issue With Intel’s Deep Learning Benchmarks
    http://www.tomshardware.com/news/nvidia-intel-deep-learning-benchmarks,32491.html

    Intel recently published some Xeon Phi benchmarks, which claimed that its “Many Integrated Core” Phi architecture, based on small Atom CPUs rather than GPUs, is significantly more efficient and higher performance than GPUs for deep learning. Nvidia seems to have taken issue with this claim, and has published a post in which it detailed the many reasons why it believes Intel’s results are deeply flawed.
    GPUs Vs. Everything Else

    Whether they are the absolute best for the task or not, it’s not much of a debate that GPUs are the mainstream way to train deep learning neural networks right now. That’s because training neural networks requires low-precision computation (as low as 8-bit) rather than the high-precision computation for which CPUs are generally built. Whether GPUs will one day be replaced by more efficient alternatives for most customers remains to be seen.

    However, GPUs are not the only game in town when it comes to training deep neural networks. As the field seems to be booming right now, there are all sorts of companies, old and new, trying to take a share of this market for deep learning-optimized chips.

    In its paper, Intel claimed that four Knights Landing Xeon Phi chips were 2.3x faster than “four GPUs.” Intel also claimed that Xeon Phi chips could scale 38 percent better across multiple nodes (up to 128, which according to Intel can’t be achieved by GPUs).

    Nvidia’s main arguments seem to be that Intel was using old data in its benchmarks, which can be misleading when comparing against GPUs.

    AI Chip Competition Heating Up (In A Good Way)

    It’s likely that Xeon Phi is still quite behind GPU systems when it comes to deep learning, in both the performance and software support dimensions. However, if Nvidia’s DGX-1 can barely beat 21 Xeon Phi servers, then that also means the Xeon Phi chips are quite competitive price-wise.

    A DGX-1 currently costs $129,000, whereas a single Xeon Phi server chip costs anywhere from $2,000 to $6,000. Even a system using 21 of Intel’s highest-end Xeon Phi chips roughly matches the Nvidia DGX-1 on price (21 × $6,000 = $126,000).

    Although the fight between Nvidia and Intel is likely to ramp up significantly over the next few years, what’s going to be even more interesting is whether ASIC-like chips like Google’s TPU can actually be the ones to win the day.

    https://newsroom.intel.com/newsroom/wp-content/uploads/sites/11/2016/06/Intel-ISC16-press-deck-x.pdf

    Reply
  29. Tomi Engdahl says:

    Tom Warren / The Verge:
    NVIDIA brings desktop GTX 1000 series GPUs to laptops, says they are VR-ready when on AC power — Nvidia first tried its hand at putting desktop-class graphics chips inside a notebook last year, with its Maxwell-based GTX 980. That was a hint at what the US-based technology company was really working on …

    Nvidia brings desktop GPUs to laptops for ‘VR ready’ gaming
    http://www.theverge.com/circuitbreaker/2016/8/16/12480554/nvidia-gtx-1000-series-laptop-gpu-vr-ready-features

    Paul Thurrott / Thurrott.com:
    HP announces VR-capable OMEN X desktop, OMEN 17 laptop featuring NVIDIA GTX 1000 series GPUs
    http://www.thurrott.com/hardware/76424/hp-targets-diy-crowd-new-high-end-gaming-gear

    Reply
  30. Tomi Engdahl says:

    IT can kill workers:

    Even if you exercise, too much sitting time is bad
    http://www.cbsnews.com/news/even-if-you-exercise-prolonged-sitting-time-is-bad-for-heart-health/

    Even if you exercise regularly, too much sitting can still be bad for your heart, a leading cardiologists’ group warns.

    The American Heart Association (AHA) also says that too many people are spending far too much time on chairs and sofas, period.

    “Based on existing evidence, we found that U.S. adults are sedentary for about six to eight hours a day,” said Deborah Rohm Young, chair of the AHA panel that wrote the new advisory.

    The problem only gets worse with age. “Adults 60 years and older spend between 8.5 to 9.6 hours a day in sedentary time,” Young said in an AHA news release. She directs behavioral research at Kaiser Permanente Southern California.

    One heart specialist said the new stance is justified.

    “Don’t be a ‘sitting duck for cardiovascular disease’ — move more, sit less,”

    “Regardless of how much physical activity someone gets, prolonged sedentary time could negatively impact the health of your heart and blood vessels,”

    According to the AHA, people should try to get at least 30 minutes of moderate to vigorous exercise a day to reach the recommended 150 minutes of moderate exercise or 75 minutes of vigorous exercise a week.

    “Our lives have become focused around activities requiring us to be still — whether it be commuting or transportation, our computers, or the television or computer in our leisure time,” Steinbaum said. “Sociologically, instead of being active to be productive or to have enjoyment, our productivity and fun often require minimal exertion.”

    Reply
  31. Tomi Engdahl says:

    Josh Constine / TechCrunch:
    Facebook is building its own Steam-style desktop gaming platform with Unity
    https://techcrunch.com/2016/08/18/facebook-desktop-game-platform/

    Facebook may try to compete with Steam, or at least win back revenue lost when casual gaming shifted to mobile. Today Facebook formally announced it’s working with game engine Unity to build a dedicated, downloadable desktop gaming platform, plus it’s broadening the Facebook.com experience for gamers.

    Both will allow publishers to offer their iOS and Android games on desktop in addition to the casual games Facebook is known for, while the desktop PC app could support more hardcore games.

    Reply
  32. Tomi Engdahl says:

    Watson for the Masses
    CognizeR offers AI to millions
    http://www.eetimes.com/document.asp?doc_id=1330313&

    The Columbus Collaboratory has integrated IBM’s Watson into the R programming language with its CognizeR package.

    The open-source R programming language, used by millions of engineers, scientists, statisticians and researchers worldwide, now has direct access to Watson on IBM’s Bluemix cloud. Called CognizeR, the new capability is offered courtesy of Columbus Collaboratory, an ecosystem of companies focused on compiling a common repository of open-source code for advanced analytics and cyber security. CognizeR can be downloaded for free.

    As the premier artificial intelligence (AI) solution from IBM, Watson has in the past required the manual coding of calls to its application programming interface (API) for every app being developed to use Watson. CognizeR simplifies access to Watson’s “Cognitive AI” capabilities by adding a bullet-proof family of built-in calls to the increasingly popular R language.

    “What is important here is that as more and more people start using standard statistical packages like R, Watson’s API services become a viable option for modeling and deep learning using the cloud services available on IBM’s BlueMix,”
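
    To make concrete what CognizeR saves R users from writing: before such wrappers, every app had to hand-code REST calls to Watson’s API. Here is a minimal TypeScript sketch of that kind of plumbing, assuming a 2016-era Bluemix Tone Analyzer endpoint with per-service basic-auth credentials (the URL, version parameter and credential names are illustrative assumptions, not details from the article):

    // Hand-rolled call to a Watson service on IBM Bluemix -- the kind of REST
    // plumbing that CognizeR wraps for R users. Endpoint and credentials are
    // illustrative assumptions, not taken from the article above.
    const WATSON_TONE_URL =
      'https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone?version=2016-05-19';

    async function analyzeTone(text: string): Promise<unknown> {
      const response = await fetch(WATSON_TONE_URL, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          // Bluemix services of that era issued per-service username/password pairs.
          Authorization: 'Basic ' + btoa('serviceUsername:servicePassword'),
        },
        body: JSON.stringify({ text }),
      });
      if (!response.ok) {
        throw new Error(`Watson request failed: HTTP ${response.status}`);
      }
      return response.json(); // tone scores as parsed JSON
    }

    analyzeTone('R users can now skip writing this kind of plumbing.')
      .then(console.log)
      .catch(console.error);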

    Reply
  33. Tomi Engdahl says:

    Wintel Reunites in Mixed Reality
    Intel’s Project Alloy cuts cord on headsets
    http://www.eetimes.com/document.asp?doc_id=1330314&

    Intel and Microsoft will collaborate on what they call a mixed-reality platform for Windows 10. The effort, along with new depth-sensing cameras from Intel, injected excitement into the keynote at the annual Intel Developer Forum here, but fell short of defining a new Wintel platform of the magnitude of the declining PC.

    Intel will contribute a wireless head-mounted display to the effort, making its hardware and software freely available late next year. Project Alloy uses Intel’s RealSense 3D cameras to enable users to move with six degrees of freedom and use their hands to control virtual spaces.

    In a demo of Alloy, a user walked among virtual rooms.

    Reply
  34. Tomi Engdahl says:

    AMD’s Zen Takes on Intel
    Competition returns to x86 in 2017
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1330323&

    Last night, AMD demonstrated an eight-core, dual-threaded 3 GHz desktop chip using its new Zen core, narrowly edging out a similar high-end desktop CPU from archrival Intel.

    The most impressive demo I saw this week, as the 2016 Intel Developer Forum wraps up, came from Intel’s competitor Advanced Micro Devices at an event Wednesday night. AMD showed desktop and server versions of chips using its newly developed Zen CPU core, and to say the results were impressive would be an understatement.

    Despite Intel’s demos of its next-generation Kaby Lake PC processor, the surprise star of this week is Zen. AMD’s Summit Ridge desktop processor and its Naples server processor are the first two instantiations of AMD’s re-booted x86 core and processor architecture. AMD impressed attendees here in a manner we haven’t seen from the company in a decade by achieving a stated goal of delivering a 40% improvement in instruction-level parallelism compared to its previous Excavator x86 core.

    AMD has been working on the Zen core since 2012. The CPU core is a completely new design that will target Intel’s Core processor at every performance level. The first Zen chip was demonstrated earlier this year at Computex in Taiwan.

    AMD demonstrated the performance with a head-to-head comparison against Intel’s Core i7 Extreme Edition, an 8-core monster that sells for a suggested retail price of $1,100.

    Reply
  35. Tomi Engdahl says:

    Dave McClure / 500 Hats:
    Expect more non-tech public companies to buy tech unicorns as a hedge against disruption — Yeah, There’s a Bubble… But it Ain’t in Tech … Everybody in the press loves to write stories about the next “Tech Bubble”. — Of course, they all think they’ve seen this movie before—twice in fact, in 2000 and in 2008.

    The Unicorn Hedge
    There’s a Bubble… but it Ain’t in Tech
    https://500hats.com/welcome-to-the-unicorn-hedge-2fd3c6b50f89#.fdsuqrkc8

    Abstract: the press have been whining “there’s another bubble in tech!” for years but it hasn’t happened (yet)… meanwhile VC-funded startups continue to raise capital, drive innovation, and disrupt incumbents. While some claim the recent downturn in unicorn financings and valuations is proof they were right (finally!) they couldn’t be more wrong — valuations have calmed down, but tech entrepreneurs and investors aren’t going anywhere. In fact, that ugly little asset class called Venture Capital is poised for monstrous growth as thousands of startups aim to disrupt EVERY public company, and since VC fund returns have risen over the past decade they don’t completely suck as much as they used to.

    No, the next bubble is NOT in tech where innovation and capital are never in short supply… rather, the REAL bubble is in far-too-generous P/E multiples and valuations of global public companies, whose business models are being obliterated by startups and improved by orders of magnitude. As more Fortune 500 CEOs recognize and admit their vulnerability to disruption, expect them to hedge their own public valuations by buying the very same unicorns that keep them awake at night… Welcome to the Unicorn Hedge.

    Reply
  36. Tomi Engdahl says:

    7 key data center innovations
    http://www.cablinginstall.com/articles/pt/2016/08/7-key-data-center-innovations.html?cmpid=Enl_CIM_DataCenters_August222016

    Froehlich adds, “If projections are anywhere near accurate, then we’re looking at global growth rate in new data centers of approximately 10% to 15% each year for the foreseeable future. If that’s the case, many of these new facilities will likely be implementing one or more of these innovations to reduce overall energy consumption and keep ahead of computing needs.”

    InformationWeek’s list of the 7 top data center innovations in the industry today is as follows:

    1. Artificial Intelligence
    2. Underwater Data Centers
    3. SDN Gains
    4. Free Cooling
    5. Micro Data Centers
    6. Close-Coupled Cooling
    7. Directly Modulated Lasers on Silicon

    Reply
  37. Tomi Engdahl says:

    Takashi Mochizuki / Wall Street Journal:
    Sources: Sony to debut a new PlayStation 4 standard model alongside a high-end version on September 7 — Japanese tech company to introduce a standard and high-end version of the console next month — TOKYO— Sony Corp. plans to introduce a new PlayStation 4 standard model alongside …

    Sony to Sell Updated Model of Standard PlayStation 4
    Japanese tech company to introduce a standard and high-end version of the console next month
    http://www.wsj.com/article_email/sony-to-sell-updated-model-of-standard-playstation-4-1471833069-lMyQjAxMTA2NDI2MjcyODI1Wj

    Sony Corp. plans to introduce a new PlayStation 4 standard model alongside a high-end version next month, people familiar with the matter said, in an effort to maintain demand for the best-selling videogame console.

    With the release of the two models, Sony hopes to attract hard-core fans and more casual users to its videogame platform, analysts said. The company has been trying to build a community of users so that it can earn consistent revenue from subscriptions and software downloads.

    Reply
  39. Tomi Engdahl says:

    Are Energy Standards for Computers on Horizon?
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1330329&

    California has put the wheels in motion, and the NRDC says electricity use by computers can be cut in half using off-the-shelf technology with no impact on performance, and at negligible cost.

    The computer industry may soon be facing another round of regulatory measures. This time, they may come in the form of state-imposed energy efficiency rules.

    The California Energy Commission appears to be moving ahead with the nation’s first energy efficiency standards for computers and monitors. Some reports indicate that the standards, which would apply to the power-use settings for desktops, laptops and computer monitors sold in the state, may be adopted by the end of this year; given California’s market size and influence, adoption of these standards could spark industrywide changes, the reports noted.

    The standards, which would vary by computer type and possibly be phased in during 2017 and/or 2018, would save consumers hundreds of millions of dollars every year, according to the CEC’s March 2015 press release. For desktop computers alone, it is estimated that a $2 increase in manufacturing costs will return $69 to consumers in energy savings over the five-year life of a desktop, the organization claims.

    Reply
  40. Tomi Engdahl says:

    Google will discontinue almost all Chrome apps

    Three years ago Google launched Chrome apps on multiple platforms – Windows, Mac and Linux. Now the company says it will phase those apps out on these platforms within the next two years. Chrome apps come in two types: packaged apps, which run independently of the Chrome browser, and hosted apps, which are essentially a kind of add-on. According to Google, packaged apps account for just one percent of Chrome app usage.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4888:google-lopettaa-lahes-kaikki-chrome-sovellukset&catid=13&Itemid=101
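
    For context on the distinction above: a packaged app ships its own background script and opens its own window, independent of the browser UI. A minimal sketch of such a background script, written here in TypeScript (the chrome.app.runtime and chrome.app.window calls follow Google’s documented packaged-app API; treat the specifics as illustrative):

    // background.ts -- entry point of a hypothetical Chrome packaged app.
    // The chrome.app.* APIs have no built-in TypeScript typings here,
    // so declare the global loosely.
    declare const chrome: any;

    chrome.app.runtime.onLaunched.addListener(() => {
      // Packaged apps open their own top-level window, outside the browser UI.
      chrome.app.window.create('index.html', {
        bounds: { width: 800, height: 600 },
      });
    });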

    Reply
  41. Tomi Engdahl says:

    Microsoft reveals secret HoloLens processor specs
    http://www.theverge.com/circuitbreaker/2016/8/23/12602516/microsoft-hololens-holographic-processing-unit-specifications

    Microsoft has finally revealed exactly what’s inside its HoloLens headset. While The Verge got exclusive access to a deconstructed HoloLens developer edition back in April, Microsoft has been keeping the details of its special Holographic Processing Unit (HPU) very secret. Microsoft revealed most of the HoloLens specifications earlier this year, and the special HPU is designed to do most of the processing so the CPU and GPU only have to launch apps and display the holograms. Microsoft custom-designed the HPU; it takes all of the data from the cameras and sensors and processes it in real time so you can use gestures accurately.

    The Register reports that Microsoft’s special custom-designed HPU is a TSMC-fabricated 28nm coprocessor that has 24 Tensilica DSP cores. It has around 65 million logic gates, 8MB of SRAM, and an additional layer of 1GB of low-power DDR3 RAM. That RAM is separate from the 1GB that’s available to the Intel Atom Cherry Trail processor, and the HPU itself can handle around a trillion calculations per second.

    Reply
  42. Tomi Engdahl says:

    Microsoft’s HoloLens secret sauce: A 28nm customized 24-core DSP engine built by TSMC
    How to make your own virtual reality brain
    http://www.theregister.co.uk/2016/08/22/microsoft_hololens_hpu/

    Reply
  43. Tomi Engdahl says:

    WebVR 1.0 available in Firefox Nightly
    16 August 2016
    https://blog.mozvr.com/webvr-1-0-available-in-firefox-nightly/

    The WebVR API is a set of DOM interfaces that enable WebGL rendering into Virtual Reality headsets and access to the various sensors for orientation, positioning, and input controls.

    As of today, August 16, 2016, Firefox Nightly will support the WebVR 1.0 API. This replaces the earlier WebVR API implementation with the standard proposed by the WebVR W3C community group. Our earlier article on the proposal has some resources to help you get started.

    Firefox has been heavily optimized for the best VR experience. The latest updates include a dedicated VR rendering path that ensures smooth and consistent frame rendering, with lower latency and rendering at the headset’s native frame rate, independent of the user’s main monitor.

    This is the same API that has been implemented in the latest Chromium experimental builds and the Samsung Gear VR browser. In addition to enabling access to the latest WebVR sites, the new API supports new features such as rendering separate content to the user’s main display for spectator views and the ability to traverse links between WebVR pages.
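
    Here is a minimal TypeScript sketch of the render loop those interfaces enable, based on the WebVR 1.0 proposal (navigator.getVRDisplays, requestPresent, getFrameData, submitFrame); the typings are loosened with any, and the WebGL drawing itself is elided:

    // WebVR 1.0 exposes VRFrameData as a constructible global.
    declare const VRFrameData: { new (): any };

    const canvas = document.querySelector('canvas') as HTMLCanvasElement;

    (navigator as any).getVRDisplays().then((displays: any[]) => {
      if (displays.length === 0) return; // no headset connected
      const vrDisplay = displays[0];

      // Presenting must be triggered by a user gesture, such as a click.
      canvas.addEventListener('click', async () => {
        await vrDisplay.requestPresent([{ source: canvas }]);
        const frameData = new VRFrameData();

        const onFrame = () => {
          vrDisplay.getFrameData(frameData); // pose + per-eye view/projection
          // ... render left and right eye views into the canvas with WebGL ...
          vrDisplay.submitFrame();           // hand the finished frame to the headset
          // Headset-rate rAF, independent of the main monitor's refresh rate.
          vrDisplay.requestAnimationFrame(onFrame);
        };
        vrDisplay.requestAnimationFrame(onFrame);
      });
    });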

    Reply
  44. Tomi Engdahl says:

    NVMe over Fabrics Support Coming to the Linux 4.8 Kernel
    http://www.linuxjournal.com/content/nvme-over-fabrics-support-coming-linux-48-kernel?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    The Flash Memory Summit recently wrapped up its conferences in Santa Clara, California, and only one type of Flash technology stole the show: NVMe over Fabrics (NVMeF). From the many presentations and company announcements, it was obvious NVMeF was the topic that most interested the attendees.

    With the first industry specifications announced in 2011, Non-Volatile Memory Express (NVMe) quickly rose to the forefront of Solid State Drive (SSD) technologies. Historically, SSDs were built on top of Serial ATA (SATA), Serial Attached SCSI (SAS) and Fibre Channel buses. These interfaces worked well for the maturing Flash memory technology, but with all the protocol overhead and bus speed limitations, it did not take long for these drives to experience performance bottlenecks. Today, modern SAS drives operate at 12 Gbit/s, while modern SATA drives operate at 6 Gbit/s. This is why the technology shifted its focus to PCI Express (PCIe). With the bus closer to the CPU and PCIe capable of performing at increasingly stellar speeds, SSDs seemed to fit right in. Using PCIe 3.0, modern drives can achieve speeds as high as 40 Gbit/s. Leveraging the benefits of PCIe, it was then that the NVMe was conceived. Support for NVMe drives was integrated into the Linux 3.3 mainline kernel (2012).

    What really makes NVMe shine over the operating system’s SCSI stack is its simpler and faster queueing mechanism.

    Almost immediately, the PCIe SSDs were marketed for enterprise-class computing with a much higher price tag. Although still more expensive than its SAS or SATA cousins, the dollar per gigabyte of Flash memory continues to drop—enough to convince more companies to adopt the technology. However, there was still a problem. Unlike the SAS or SATA SSDs, NVMe drives did not scale very well. They were confined to the server they were plugged in to.

    Today, the most commonly deployed SAN is based on iSCSI, which is SCSI over TCP/IP. Technically, NVMe drives can be configured within a SAN environment, although the protocol overhead introduces latencies that make it a less than ideal implementation. In 2014, the NVM Express committee was poised to rectify this with the NVMeF standard.

    The goals behind NVMeF are simple: enable an NVMe transport bridge, which is built around the NVMe queuing architecture, and avoid any and all protocol translation overhead other than the supported NVMe commands (end to end). With such a design, network latencies noticeably drop (less than 200 ns). This design relies on the use of PCIe switches. There is a second design that has been gaining ground and that is based on the existing Ethernet fabrics using Remote Direct Memory Access (RDMA).

    Call it a coincidence, but also recently, the first release candidate for the 4.8 kernel introduced a lot of new code to support NVMeF. The patches were submitted as part of a joint effort by the hard-working developers over at Intel, Samsung and others.

    Reply
  45. Tomi Engdahl says:

    PC memory moves to DDR5

    The SDRAM main memory in a PC is attached to the processor over a bus that currently goes by the name DDR4. Many have assumed the next step would be a jump to newer, more exotic memory technologies, but the DDR5 standard is already being drafted this year.

    According to Intel, DDR5 memory will arrive first in servers, on a schedule being discussed for around 2020; workstations and other PCs are expected to follow a year to a year and a half later.

    Next year, the fastest Intel-supported DDR4 bus will reach a clock speed of 3200 MHz.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4892:pc-muisteissa-siirrytaan-ddr5-aikaan&catid=13&Itemid=101

    Reply
  46. Tomi Engdahl says:

    Death to copper cables: Intel turns to light for fast data transfers
    http://www.cablinginstall.com/articles/pt/2016/08/death-to-copper-cables-intel-turns-to-light-for-fast-data-transfers.html

    Intel’s first silicon photonics modules are designed for data transfers between servers. The company believes the days of using copper wires for data transfers, both between computers and inside of them, are numbered because optical communications advancements are rising fast on the horizon. The chipmaker has started shipping silicon photonics modules, which use light and lasers to speed up data transfers between computers.
    Read More at CIO India

    Death to copper cables: Intel turns to light for fast data transfers
    http://www.cio.in/news/death-copper-cables-intel-turns-light-fast-data-transfers

    Intel’s first silicon photonics modules are designed for data transfers between servers

    Intel believes the days of using copper wires for data transfers, both between computers and inside of them, are numbered because optical communications are on the horizon.

    The chipmaker has started shipping silicon photonics modules, which use light and lasers to speed up data transfers between computers.

    The silicon photonics components will initially allow for optical communications between servers and data centers, stretching over long distances, said Diane Bryant, executive vice president and general manager of Intel’s Data Center Group.

    Over time, Intel will put optical communications at the chip level, Bryant said during a keynote at Intel Developer Forum on Wednesday. That means light will drive communications inside computers.

    PCs and servers today use older electrical wiring for data transfers. But data transfer speeds over those cables have hit a brick wall, and fiber optics provide a way to shuffle data at faster speeds, Bryant said.

    In addition to an ability to stretch across kilometers, the fiber optic cables will take up less space than older cables, Jason Waxman, corporate vice president and general manager of Intel’s Data Center Solutions Group, said in an interview.

    The first silicon photonics modules will allow for data transfers at up to 100Gbps (bits per second). The technology will be based on the widely used Ethernet protocol, but servers will require special switches to support silicon photonics. Ultimately, silicon photonics could support other data transfer and networking protocols.

    The silicon photonics transceivers and other components will be widely available later in the year, though many implementations could take place early next year, Waxman said.

    Intel has released a connector called MXC for silicon photonics connections between servers. The chipmaker has also created a protocol called O-PCI (Optical PCI) for PCI-Express communications over optical cables.

    Reply
  47. Tomi Engdahl says:

    Oracle reveals Java Applet API deprecation plan
    Big Red nods to plugin-hostile browser-makers, outlines proper Applet pension plan
    http://www.theregister.co.uk/2016/08/24/oracle_reveals_java_applet_api_deprecation_plan/

    Oracle has revealed its interim plan to help Java devs deal with browser-makers’ imminent banishment of plug-ins.

    Years of bugs in Java, Flash and other plugins have led browser-makers to give up on plugins. Apple recently decided that its Safari browser will just pretend Java, Flash and Silverlight aren’t installed. Google has announced it will soon just not run any Flash content in its Chrome browser.

    Oracle saw this movement coming and in January 2016 announced it would “deprecate the Java browser plugin in JDK 9”.

    Oracle explains its decision to work this way as follows:

    We do not intend to remove the Applet API in the next major release, hence we will not specify forRemoval = true in these annotations. If at some later point we do propose to remove this API then we will add forRemoval = true to these annotations at least one major release in advance.

    Oracle adds that “These annotations will cause deprecation warnings to be emitted by the Java compiler for all code that uses this API. If warnings are treated as errors, they will result in the build failure.”

    Reply
  48. Tomi Engdahl says:

    PlayStation 3 Games Are Coming To PC
    https://games.slashdot.org/story/16/08/23/1848206/playstation-3-games-are-coming-to-pc

    PlayStation 3 games are coming to Windows. Sony said Tuesday that it is bringing its PlayStation Now game-streaming program to Windows PCs. The service broadcasts PlayStation 3 games over the internet similar to the way Netflix beams movies to devices like Roku.

    PlayStation games are coming to PC, and other signs the end is nigh
    http://www.cnet.com/news/playstation-games-are-coming-to-windows-pc/

    Soon, you won’t need a PlayStation to experience games like God of War or Uncharted. Also, cats and dogs are living together now.

    Sony is actually, really, truly bringing PlayStation 3 games to your Windows PC, console wars be damned.

    The catch: you’ll be playing those games over the internet with Sony’s streaming game service, PlayStation Now. Think Netflix.

    PlayStation Now has already been around for a couple of years on the PS4, PS3, and PS Vita handheld, plus a handful of Blu-ray players and smart TVs. For $20 a month or $45 for three months (£13 monthly in the UK, but alas, not available in Australia), the service gives players unlimited access to a long list of over 400 PlayStation 3 games. (The service is available only in those countries as well as in Canada and Japan, with Belgium and the Netherlands currently in beta.)

    Like Netflix or any other streaming service, the quality can vary wildly depending on your internet connection — Sony requires a solid 5Mbps connection at all times, and that doesn’t change today.

    What changes is the size of Sony’s audience. With a Windows laptop or tablet, you aren’t tethered to a big-screen TV. You could theoretically take these PlayStation games anywhere — and wherever you go, your saved games stream with you.

    There are some caveats, though. In addition to the pricey monthly subscription and the stable internet connection, Sony recommends your Windows device have a 3.5GHz (or faster!) processor for best results.

    And you’ll need a DualShock 4 controller to play on Windows, instead of the older DualShock 3 that worked just fine with PlayStation Now on other platforms.

    Reply
