I did not have time to post my computer technology predictions at the end of 2016. Because I missed the year-end deadline, I thought there was no point in posting anything before the news from CES 2017 had been published. Here are some of my picks on the current computer technology trends:
CES 2017 had 3 significant technology trends: deep learning goes deep, Alexa everywhere and Wi-Fi gets meshy. The PC sector seemed to be pretty boring.
Gartner expects IT sales to grow (2.7%), but hardware sales will see no growth and may even drop this year. TEKsystems’ 2017 IT forecast shows IT budgets rebounding from a slump in 2016 and IT leaders’ confidence high going into the new year, but challenges around talent acquisition and organizational alignment will persist. Programming and software development continue to be among the most crucial and hard-to-find IT skill sets.
Smartphone sales (expected to be 1.89 billion units) and PC sales (expected to be 432 million units) will not grow in 2017. According to IDC, PC shipments declined for a fifth consecutive year in 2016 as the industry continued to suffer from stagnation and a lack of compelling drivers for upgrades. Both Gartner and IDC estimated that PC shipments declined about 6% in 2016. Revenue in the traditional (non-cloud) IT infrastructure segment decreased 10.8 per cent year over year in the third quarter of 2016. The only PC category with potential for growth is ultramobiles (which includes the Microsoft Surface and Apple MacBook Air). Demand for memory chips is increasing.
Browsers suffer from JavaScript-creep disease: the browsing experience seems to get slower even though computers and broadband connections are getting faster all the time. Bloat on web pages has been going on for ages, and this trend seems set to continue.
Microsoft is trying all it can to make people switch from older Windows versions to Windows 10. Microsoft says that continued use of Windows 7 increases maintenance and operating costs for businesses because of malware attacks that could have been avoided by upgrading to Windows 10, and that Windows 7 does not meet the demands of modern technology; it recommends Windows 10. In February 2017 Microsoft ends its 20-year-long tradition of monthly security updates. The free Windows 10 “Creators Update” is coming in early 2017, featuring 3D and mixed reality, 4K gaming and more.
Microsoft plans to emulate x86 instructions on ARM chips, throwing a compatibility lifeline to future Windows tablets and phones. Microsoft’s x86-on-ARM64 emulation is coming in 2017: the capability is coming to Windows 10, though not until “Redstone 3” in the fall of 2017.
Parents should worry less about the amount of time their children spend using smartphones, computers and video games, because screen time is actually beneficial, the University of Oxford has concluded. According to the study, 257 minutes is the time teens can spend on computers each day before it starts to harm their wellbeing.
Outsourcing IT operations to foreign countries is not trendy anymore, and companies live in uncertain times. India’s $150 billion outsourcing industry stares at an uncertain future. In the past five years, revenue and profit growth for the top five companies listed on the BSE have halved. Industry leader TCS has also felt the impact as it shifted its business model towards software platforms and chased digital contracts.
Containers will become hot this year and cloud will stay hot. Research firm 451 Research predicts that containerization will be a US$762 million business this year and will grow to $2.6 billion worth of software business in 2020 (a 40 per cent annual growth rate).
Cloud services are expected to have a 22 percent annual growth rate. By 2020, the sector would grow from the current $22.2 billion to $46 billion. In Finland, 30% of companies now prefer to buy cloud services when buying IT (20 per cent of the IT budget goes to cloud). Cloud spend is set to make up over a third of IT budgets by 2017: cloud and hosting services will be responsible for 34% of IT budgets by 2017, up from 28% at the end of 2016, according to 451 Research. Cloud services have many advantages, but they also have disadvantages. In five years, SaaS will be the cloud that matters.
As the cloud grows, so does cloud companies’ spending on cloud hardware. Cloud hardware spend has hit US$8.4bn per quarter as traditional kit sinks, and 2017 is forecast to see cloud kit clock $11bn every 90 days. In 2016’s third quarter, vendor revenue from sales of infrastructure products (server, storage, and Ethernet switch) for cloud IT, including public and private cloud, grew by 8.1 per cent year over year to $8.4 billion. Private cloud accounted for $3.3 billion, with the rest going to public clouds. Data centers need lower-latency components, so Google searches for better silicon.
The first signs of the decline and fall of the 20+ year x86 hegemony will appear in 2017. The availability of industry leading fab processes will allow other processor architectures (including AMD x86, ARM, Open Power and even the new RISC-V architecture) to compete with Intel on a level playing field.
USB-C will now come to displays – the Type-C USB connector promises to really become the single physical interface for all equipment. The HDMI connector will disappear from laptops in the future. Thunderbolt 3 is arranged to work with USB Type-C, but it’s not the same thing (Thunderbolt is four times faster than USB 3.1).
World’s first ‘exascale’ supercomputer prototype will be ready by the end of 2017, says China
It seems that Oracle begins aggressively pursuing Java licensing fees in 2017. Java SE is free, but Java SE Suite and various flavors of Java SE Advanced are not. Oracle is massively ramping up audits of Java customers it claims are in breach of its licences – six years after it bought Sun Microsystems. Huge sums of money are at stake. The version of Java in contention is Java SE, with three paid flavours that range from $40 to $300 per named user and from $5,000 to $15,000 for a processor licence. If you download Java, you get everything – so you need to make sure you are installing only the components you are entitled to, and you need to remove the bits you aren’t using.
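To get a feel for the sums involved, here is a purely hypothetical cost estimate using the per-unit prices quoted above (the deployment numbers below are invented for illustration, not from any price list or audit):

# Hypothetical Java SE Advanced exposure, using the per-unit prices quoted above.
named_user_price_low, named_user_price_high = 40, 300        # per named user
processor_price_low, processor_price_high = 5_000, 15_000    # per processor licence

named_users = 500   # hypothetical desktop installs
processors = 16     # hypothetical server CPUs running paid Java SE components

low = named_users * named_user_price_low + processors * processor_price_low
high = named_users * named_user_price_high + processors * processor_price_high
print(f"Estimated exposure: ${low:,} - ${high:,}")   # $100,000 - $390,000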
The “Your Year in Review, Unsung Hero” article sees the following trends in 2017:
- A battle between ASICs, GPUs, and FPGAs to run emerging workloads in artificial intelligence
- A race to create the first generation of 5G silicon
- Continued efforts to define new memories that have meaningful impact
- New players trying to take share in the huge market for smartphones
- An emerging market for VR gaining critical mass
Virtual reality will stay hot on both PC and mobile. “VR is the heaviest heterogeneous workload we encounter in mobile—there’s a lot going on, much more than in a standard app,” said Tim Leland, a vice president for graphics and imaging at Qualcomm. The challenge is the need to process data from multiple sensors and respond to it with updated visuals in less than 18 ms to keep up with the viewer’s head motion, so the CPUs, GPUs, DSPs, sensor fusion core, display engine, and video-decoding block are all running at close to full tilt.
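To put that 18 ms figure in perspective, here is a quick back-of-the-envelope calculation (the per-stage split below is purely illustrative, not Qualcomm’s actual pipeline budget):

# Motion-to-photon budget vs. display refresh rate (illustrative numbers only).
budget_ms = 18.0   # target quoted above
for refresh_hz in (60, 90, 120):
    frame_ms = 1000.0 / refresh_hz
    print(f"{refresh_hz} Hz display -> {frame_ms:.1f} ms per frame "
          f"({'inside' if frame_ms <= budget_ms else 'over'} the {budget_ms} ms budget)")

# Hypothetical split of the 18 ms budget across pipeline stages (ms):
stages = {"sensor fusion": 2.0, "render (GPU)": 9.0, "timewarp/compose": 2.0, "display scan-out": 5.0}
print("total:", sum(stages.values()), "ms")   # 18.0 ms

Every stage in that chain has to fit inside the same budget, which is why all of the blocks listed above end up running close to full tilt at the same time.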
932 Comments
Tomi Engdahl says:
Salvador Rodriguez / Reuters:
Coding boot camps in the US are facing stiff competition in a crowded field, with eight schools having shut down or planning to close in 2017
Some U.S. coding boot camps stumble in a crowded field
http://www.reuters.com/article/us-bootcamps-enterprise-idUSKBN1AP2D7
The hype is fading for coding “boot camps,” for-profit U.S. schools offering graduates entry into the lucrative world of software development.
Closures are up in a field now jammed with programs promising to teach students in just weeks the skills needed to get hired as professional coders. So far this year, at least eight schools have shut down or announced plans to close in 2017, according to the review website Course Report.
Tomi Engdahl says:
Blair Hanley Frank / VentureBeat:
OpenAI bot has defeated three top professional Dota 2 players in 1v1 play over the last week — An artificial intelligence has beaten one of the world’s top Dota 2 players in single combat today. Danil Ishutin, better known by his gaming handle “Dendi,” threw in the towel in the middle …
OpenAI’s bot beats top Dota 2 player so badly that he quits
https://venturebeat.com/2017/08/11/openais-bot-beats-top-dota-2-player-so-badly-that-he-quits/
An artificial intelligence has beaten one of the world’s top Dota 2 players in single combat today. Danil Ishutin, better known by his gaming handle “Dendi,” threw in the towel in the middle of a second game against a bot that OpenAI created, one that had been beating him handily.
The two squared off in a pair of 1-on-1 matches in this multiplayer online battle arena (MOBA) game at The International, one of the biggest esports events in the world. The first character who scored two kills or destroyed an in-game tower would be crowned the winner. In the first game, OpenAI’s bot appeared dominant
It’s a milestone for OpenAI, an organization that’s trying to make sure future artificial general intelligences are positive additions to humankind. Greg Brockman, the organization’s cofounder and CTO, said in a streamed interview that this validates a step toward more impactful systems.
“What we’ve built here is a general learning system, which is still limited in a number of ways, but it’s still capable enough to beat the best human pros at Dota,” Brockman said. “This is a step toward building more general systems which can learn more complicated, messy, and important real-world tasks like being a surgeon.”
Tomi Engdahl says:
Elon Musk + AI + Microsoft = Awesome Dota 2 Player
https://games.slashdot.org/story/17/08/12/2020216/elon-musk–ai–microsoft–awesome-dota-2-player?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29
Tonight during Valve’s yearly Dota 2 tournament, a surprise segment introduced what could be the best new player in the world — a bot from Elon Musk-backed startup OpenAI. Engineers from the nonprofit say the bot learned enough to beat Dota 2 pros in just two weeks of real-time learning.
OpenAI bot bursts into the ring, humiliates top Dota 2 pro gamer in ‘scary’ one-on-one bout
BAH GAWD! BAH GAWD!
https://www.theregister.co.uk/2017/08/12/openai_bot_beats_top_dota_2_players_in_surprise_match/
In the past hour or so, an AI bot crushed a noted professional video games player at Dota 2 in a series of one-on-one showdowns.
The computer player was built, trained and optimized by OpenAI, Elon Musk’s AI boffinry squad based in San Francisco, California. In a shock move on Friday evening, the software agent squared up to top Dota 2 pro gamer Dendi, a Ukrainian 27-year-old, at the Dota 2 world championships dubbed The International.
The OpenAI agent beat Dendi in less than 10 minutes in the first round, and trounced him again in a second round, securing victory in a best-of-three match. “This guy is scary,” a shocked Dendi told the huge crowd watching the battle at the event. Musk was jubilant.
According to OpenAI, its machine-learning bot was also able to pwn two other top human players this week: SumaiL and Arteezy. Although it’s an impressive breakthrough, it’s important to note this popular strategy game is usually played as a five-versus-five team game – a rather difficult environment for bots to handle.
Complex strategy games are all the rage in the AI world at the moment. Some of the biggest companies, such as Facebook and Google’s DeepMind, are racing to conquer games including StarCraft or Montezuma’s Revenge.
Tomi Engdahl says:
IBM Deep Learning Breaks Through
Comes close to coveted linear scaling efficiency
http://www.eetimes.com/document.asp?doc_id=1332152&
IBM Research has reported an algorithmic breakthrough for deep learning that comes close to achieving the holy grail of ideal scaling efficiency: Its new distributed deep-learning (DDL) software enables a nearly linear speedup with each added processor (see figure). The development is intended to achieve similar speedups for each server added to IBM’s DDL algorithm.
The aim “is to reduce the wait time associated with deep-learning training from days or hours to minutes or seconds,” according to IBM fellow and Think Blogger Hillery Hunter, director of the Accelerated Cognitive Infrastructure group at IBM Research.
Hunter notes in a blog post on the development that “most popular deep-learning frameworks scale to multiple GPUs in a server, but not to multiple servers with GPUs.” The IBM team “wrote software and algorithms that automate and optimize the parallelization of this very large and complex computing task across hundreds of GPU accelerators attached to dozens of servers,” Hunter adds.
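To make the “nearly linear” claim concrete, scaling efficiency is commonly computed as the measured speedup divided by the ideal speedup for that number of workers. A tiny sketch with invented timing numbers (not IBM’s measurements) shows the calculation:

# Scaling efficiency = actual speedup / ideal (linear) speedup. Numbers below are invented.
def scaling_efficiency(t_single, t_parallel, n_workers):
    speedup = t_single / t_parallel
    return speedup / n_workers

t_one_gpu = 64.0 * 3600    # hypothetical: 64 hours to train on one GPU
t_256_gpus = 0.27 * 3600   # hypothetical: about 16 minutes on 256 GPUs
eff = scaling_efficiency(t_one_gpu, t_256_gpus, 256)
print(f"speedup ~{t_one_gpu / t_256_gpus:.0f}x, efficiency ~{eff:.0%}")   # ~237x, ~93%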
IBM Research achieves record deep learning performance with new software technology
https://www.ibm.com/blogs/research/2017/08/distributed-deep-learning/
Tomi Engdahl says:
A Parable: “The Blind GPUs and the Elephant”
https://www.ibm.com/blogs/think/2017/08/a-parable-the-blind-gpus-and-the-elephant/
This parable is helpful describing the problem that we are solving and context for the promising early results we have achieved in image recognition with deep learning. Despite initial disagreement, if these people are given enough time they can share enough information to piece together a pretty accurate collective picture of an elephant.
Deep learning is a widely used AI method to help computers understand and extract meaning from images and sounds through which humans experience much of the world. It holds promise to fuel breakthroughs in everything from consumer mobile app experiences to medical imaging diagnostics. But progress in accuracy and the practicality of deploying deep learning at scale is gated by technical challenges, such as the need to run massive and complex AI models – a process for which training times are measured in days and weeks.
For our part, my team in IBM Research has been focused on reducing these training times for large models with large data sets. Our objective is to reduce the wait-time associated with deep learning training from days or hours to minutes or seconds, and enable improved accuracy of these AI models. To achieve this, we are tackling grand-challenge scale issues in distributing deep learning across large numbers of servers and GPUs.
Tomi Engdahl says:
Place your bets: How long will 1TFLOPS HPE box last in space without proper rad hardening
NASA about to find out, thanks to SpaceX launch to ISS
https://www.theregister.co.uk/2017/08/12/spacex_hpe_iss_launch/
SpaceX and HPE will put a modest little supercomputer into space next week to test how computer systems operate in extreme conditions.
On Monday, August 14, HPE’s Spaceborne Computer will blast off to the International Space Station aboard a SpaceX CRS-12 rocket. It’s part of an experiment to examine if commercial off-the-shelf computers can survive a year in space, what with all the radiation, vibrations and so on, something that will be useful to know for the long trip to Mars.
The Spaceborne Computer isn’t exactly a top-of-the-range supercomputer, but it will be the most advanced machine to be sent to space. It can hit about one teraflop in terms of performance, we’re told, will mostly run benchmarking software on Red Hat Enterprise Linux, and will be built out of two HPE Apollo Intel x86 servers with a 56Gbps interconnect.
High Performance Commercial Off-The-Shelf (COTS) Computer System on the ISS (Spaceborne Computer) – 08.09.17
https://www.nasa.gov/mission_pages/station/research/experiments/2304.html
Tomi Engdahl says:
Darrell Etherington / TechCrunch:
Swift and LLVM creator Chris Lattner joins Google Brain following his Tesla Autopilot stint — One of the key creators behind Apple programming language Swift, Chris Lattner, is on the move again. After a short six month stay at Tesla, which he joined last year from Apple to act as VP …
Swift creator Chris Lattner joins Google Brain after Tesla Autopilot stint
https://techcrunch.com/2017/08/14/swift-creator-chris-lattner-joins-google-brain-after-tesla-autopilot-stint/
Chris Lattner, one of the key creators behind the Apple programming language Swift, is on the move again. After a short six-month stay at Tesla, which he joined last year from Apple to act as VP of Autopilot Software, Lattner announced on Twitter today that his next stop is Google Brain.
Tomi Engdahl says:
AnandTech:
AMD’s Radeon RX Vega 64 review: on average neck-and-neck with NVIDIA’s GeForce GTX 1080 in gaming performance but less power efficient — We’ve seen the architecture. We’ve seen the teasers. We’ve seen the Frontier. And we’ve seen the specifications.
The AMD Radeon RX Vega 64 & RX Vega 56 Review: Vega Burning Bright
by Ryan Smith & Nate Oh on August 14, 2017 9:00 AM EST
http://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review
Tomi Engdahl says:
WPS Office 2016 for Linux
http://www.linuxjournal.com/content/wps-office-2016-linux
Promising the world’s best office experience for the Linux community, WPS Software presents WPS Office 2016 for Linux: a high-performing yet considerably more affordable alternative to Microsoft Office that is fully compatible with and comparable to the constituent PowerPoint, Excel and Word applications. The WPS Office suite, with more than 1.2 billion installs across all platforms, is a complete office suite, including Writer, Presentation, Spreadsheets and a built-in PDF reader. Linux, Windows, Android and iOS versions are available.
https://www.wps.com/
Tomi Engdahl says:
Mary Jo Foley / ZDNet:
Microsoft announces general availability of .NET Core 2.0, .NET Standard 2.0 and ASP.NET Core 2.0
Microsoft’s .NET Core 2.0: What’s new and why it matters
http://www.zdnet.com/article/microsofts-net-core-2-0-whats-new-and-why-it-matters/
Microsoft is rolling out .NET Core 2.0, .NET Standard 2.0 and ASP.NET Core 2.0, advancing the company’s goal of more of a common .NET platform everywhere.
Microsoft’s .NET Core 2.0 is done and available for download as of August 14.
Simultaneously, Microsoft is delivering ASP.NET Core 2.0, Entity Framework Core 2.0 and the complete .NET Standard 2.0 specification.
Microsoft’s primary goal with .NET Core and .NET Standard was to make more of its application programming interfaces (APIs) consistent across the different versions of .NET.
Microsoft officials said the team has gone from 13,000 APIs in .NET Standard 1.6 to 32,000 in .NET Standard 2.0. Most of the newly added APIs are .NET Framework APIs, which means developers should have an easier time porting their existing .NET Framework code to .NET Standard.
Microsoft forked the .NET Framework to make the .NET Core subset of it more portable across platforms in 2014. .NET Core became the open-source cross-platform implementation of the .NET development platform that runs on Windows, Linux and macOS.
.NET Core includes the .NET runtime, a set of framework libraries, a set of software development kit tools and language compilers. .NET Standard is an API spec that describes the .NET interfaces that developers can use across all .NET platforms.
Currently, .NET Standard 2.0 is supported on .NET Framework 4.6.1, .NET Core 2.0, Mono 5.4, Xamarin iOS 10.14, Xamarin.Mac 3.8 and Xamarin.Android 7.5. Support for Windows 10 Universal Windows Platform (UWP) is expected later this year.
.NET Core 2.0 provides performance improvements in the runtime and framework. It adds support for .NET on six new platforms, including Debian Stretch, SUSE Linux Enterprise Server 12 SP2 and macOS High Sierra. The RyuJIT just-in-time compiler (x86 version) is included in .NET Core 2.0, and Linux ARM32 is now supported in preview.
Tomi Engdahl says:
Paul Thurrott / Thurrott.com:
Microsoft memo: Surface Book return rate hit 17% in late-’15; source: Intel was made scapegoat for Surface Book problems, as other OEMs handled Skylake’s issues — Thurrott.com has seen an internal Microsoft memo that indicates that the software giant is readying a broader campaign to undercut …
Here’s What Microsoft is Saying Internally About Surface Quality and Reliability
http://www.thurrott.com/mobile/microsoft-surface/132832/heres-microsoft-saying-internally-surface-quality-reliability
Thurrott.com has seen an internal Microsoft memo that indicates that the software giant is readying a broader campaign to undercut this past week’s news from Consumer Reports. It also provides greater insight into why Microsoft believes the Consumer Reports recommendations are incorrect.
“It’s important for us to always learn more from our customers and how they view their ownership journey with our products,” the memo, from Microsoft corporate vice president Panos Panay reads. “Feedback like this [from Consumer Reports] stings, but pushes us to obsess more about our customers.”
Panay says that Microsoft will continue to “engage” with Consumer Reports and try to both learn from their survey and testing to improve things for customers and “reverse their findings.”
Tomi Engdahl says:
Using graphics processors for image compression
http://www.vision-systems.com/articles/print/volume-22/issue-7/features/using-graphics-processors-for-image-compression.html?cmpid=enl_vsd_vsd_newsletter_2017-08-14
Image compression plays a vitally important part in many imaging systems by reducing the amount of data needed to store and/or transmit image data. While many different methods exist to perform such image compression, perhaps the most well-known and well-adopted of these is the baseline JPEG standard.
Originally developed by the Joint Photographic Experts Group (JPEG; https://jpeg.org), a working group of both the International Standardization Organization (ISO, Geneva, Switzerland; http://www.iso.org) and the International Electrotechnical Commission (IEC, Geneva, Switzerland; http://www.iec.ch), the baseline JPEG standard is a lossy form of compression based on the discrete cosine transform (DCT).
The baseline JPEG standard can achieve 15:1 compression with little perceptible loss in image quality.
Graphics acceleration
In the past, JPEG image compression was performed on either host PCs or digital signal processors (DSPs). Today, with the advent of graphics processors such as the TITAN and GEFORCE series of graphics processors from NVIDIA (Santa Clara, CA, USA; http://www.nvidia.com) that contain hundreds of processor cores, image compression can be performed much faster (Figure 1). Using the company’s Compute Unified Device Architecture (CUDA), developers can now use an application programming interface (API) to build image compression applications using C/C++.
Because CUDA provides a software abstraction of the GPUs underlying hardware and the baseline JPEG compression standard can be somewhat parallelized, the baseline JPEG compression process can be split into threads that act as individual programs, working in the same memory space and executing concurrently.
Before an image can be compressed, however, it must be transferred to the CPU’s host memory and then to the GPU memory. To capture image data, standards such as GenICam from the European Machine Vision Association (EMVA; Barcelona, Spain; http://www.emva.org) provide a generic interface for GigE Vision, USB3 Vision, CoaXPress, Camera Link HS, Camera Link and 1394 DCAM-based cameras that allow such data acquisition to be easily made. When available, a GenTL Consumer interface can be used to link to the camera manufacturer’s GenTL Producer
However, using the cudaMemcpy function is not the fastest method of transferring image data to the GPU memory. A higher bandwidth can be achieved between CPU memory and GPU memory by using page-locked (or “pinned”) memory.
There is also a third approach to overcoming the data transfer speed between the host and GPU memory. Allowing a frame grabber and the GPU to share the same system memory, eliminates the CPU memory copy to GPU memory copy time and can be achieved using NVIDIA’s Direct-for-Video (DVP) technology. However, because NVIDIA DVP is not currently available for general use, BitFlow
Before image compression can occur, the data from the image sensor in the camera must be correctly formatted. Most of today’s color image sensors use the Bayer filter mosaic, an array of RGB color filters arranged on a grid of photosensors.
Since the Bayer mosaic pattern produces an array of separate R, G and B pixel data at different locations on the image sensor, a Bayer demosaicing (interpolation) algorithm must be used to generate individual red (R), green (G) and blue (B) values at each pixel location. Several methods exist to perform this interpolation, including bilinear interpolation (http://bit.ly/VSD-BiLin), linear interpolation with 5×5 kernels (http://bit.ly/VSD-LinIn), adaptive-homogeneity-directed algorithms (http://bit.ly/VSD-DEM) and directional filtering with an a posteriori decision (http://bit.ly/VSD-POST), each of which has its own quality/computational tradeoffs.
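As a rough illustration of the simplest of those options, bilinear interpolation, the sketch below fills in the missing color samples of an RGGB Bayer mosaic by averaging neighboring samples (plain NumPy rather than the CUDA implementation the article describes):

import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (H x W float array) into an H x W x 3 RGB image."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # interpolates sparse R and B samples
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # interpolates the denser G samples

    r = convolve(raw * r_mask, k_rb, mode='mirror')
    g = convolve(raw * g_mask, k_g,  mode='mirror')
    b = convolve(raw * b_mask, k_rb, mode='mirror')
    return np.dstack([r, g, b])

rgb = demosaic_bilinear(np.random.rand(8, 8))   # toy 8x8 mosaic
print(rgb.shape)                                # (8, 8, 3)

In the CUDA version described in the article, it is exactly this kind of per-pixel neighborhood averaging that gets split across the GPU’s thread blocks.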
Color data can be reduced by transforming the RGB images into the YUV color space (or, more accurately, the Y’CbCr color space, where Y’ represents luminance and the U and V components represent chroma, or color difference, values).
Generally, the YUV (4:4:4) format, which samples each YUV component equally, is not used in lossy JPEG image compression schemes since the chrominance difference channels (Cr and Cb) can be sampled at half the sample rate of the luminance without any noticeable image degradation. In YUV (4:2:2), U and V are sampled horizontally at half the rate of the luminance while in YUV (4:2:0), Cb and Cr are sub-sampled by a factor of 2 in both the vertical and horizontal directions.
The YUV (4:2:2) mode is perhaps the most commonly used mode for JPEG image compression.
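As a minimal sketch of that color-space conversion and chroma subsampling step (assuming the standard BT.601/JFIF coefficients; the article does not say which matrix the GPU code uses), the NumPy version looks like this:

import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 RGB image (0-255 floats) to Y'CbCr using BT.601/JFIF coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def subsample_420(chroma):
    """4:2:0 subsampling: average each 2x2 block, halving chroma resolution in both directions."""
    h, w = chroma.shape
    return chroma[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

y, cb, cr = rgb_to_ycbcr(np.random.rand(16, 16, 3) * 255)
print(y.shape, subsample_420(cb).shape)   # (16, 16) (8, 8)

For 4:2:2 you would average only horizontal pairs of chroma samples, keeping full vertical resolution.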
Bayer interpolation, color balancing and color space conversion can all be performed on the GPU. To perform these tasks, the image in the GPU is split into a number of blocks during which they are unpacked from 8-bit to 32-bit integer format (the native CUDA data type). For each block, pixels are organized in 32 banks of 8 pixels as this fits the shared memory architecture in the GPU. This allows 4 pixels to be fetched simultaneously by the processor cores of the GPU (stream processors) with each thread reading 4 pixels and processing one pixel at a time.
After images are transformed from RGB to YUV color space, the JPEG algorithm is applied to each individual YUV plane (Figure 4). At the heart of the JPEG algorithm is the discrete cosine transform (DCT). This is used to transform individual 8 x 8 blocks of pixels in the image from the spatial to frequency domain.
Zig-Zag ordering is applied to each of the Y, U and V components separately and two different quantization tables are used for both the Y and UV components.
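As a compact, illustrative sketch of those last steps (the 8x8 DCT, quantization and zig-zag ordering), here is a NumPy/SciPy version; the quantization table is the example luminance table from Annex K of the JPEG specification, and real encoders scale it according to the selected quality setting:

import numpy as np
from scipy.fftpack import dct

# Example luminance quantization table from the JPEG specification (Annex K).
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def dct2(block):
    """2-D type-II DCT of an 8x8 block; norm='ortho' matches the JPEG coefficient scaling."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def zigzag(block8x8):
    """Return the 64 coefficients of an 8x8 block in JPEG zig-zag order."""
    idx = sorted(((r, c) for r in range(8) for c in range(8)),
                 key=lambda rc: (rc[0] + rc[1],
                                 rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    return np.array([block8x8[r, c] for r, c in idx])

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128   # level shift, as JPEG does
coeffs = np.round(dct2(block) / Q_LUMA)                          # quantize
print(zigzag(coeffs)[:10])                                       # low-frequency coefficients come first

The quantized, zig-zag-ordered coefficients would then be run-length and Huffman coded to produce the final JPEG bitstream.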
Tomi Engdahl says:
Hewlett Packard Enterprise Sends Supercomputer into Space to Accelerate Mission to Mars
August 11, 2017 • By Alain Andreoli, SVP & GM, Data Center Infrastructure Group • Blog Post
https://news.hpe.com/hewlett-packard-enterprise-sends-supercomputer-into-space-to-accelerate-mission-to-mars/
In this article
A mission to Mars will require sophisticated computing capabilities to cut down on communication latencies and ensure astronauts’ survival
In an effort to advance this mission, HPE and NASA launched a supercomputer into space on the SpaceX Dragon Spacecraft
The Spaceborne Computer is a year-long experiment—roughly the same amount of time it would take to get to Mars—which will test a supercomputer’s ability to function in the harsh conditions of space
Tomi Engdahl says:
Old Firefox add-ons get ‘dead man walking’ call
After version 57, plugins go to browser heaven
https://www.theregister.co.uk/2017/08/14/firefox_57_to_disable_all_extensions/
The end of legacy Firefox plugins is drawing closer, with Mozilla’s Jorge Villalobos saying they’ll be disabled in an upcoming nightly build of the browser’s 57th edition.
While he didn’t specify just how soon the dread date will arrive, Villalobos writes: “There should be no expectation of legacy add-on support on this or later versions”.
It’s been a long dark tea-time of the soul for plugins: back in March, with Version 52, the devs made Flash the only anointed plugin, with anything reliant on the Netscape Plugin API (NPAPI) forbidden.
There’s always a legacy base, however, and that’s what Mozilla’s taking aim at in Version 57.
“All legacy add-ons will have strict compatibility set, with a maximum version of 56.*. This is the end of the line for legacy add-on compatibility. They can still be installed on Nightly with some preference changes, but may break due to other changes happening in Firefox”, Villalobos’ post states.
Developers can no longer upload legacy add-ons with the maximum version set higher than 56.
Tomi Engdahl says:
A Pragmatic Approach to Your Digital Transformation Journey
http://www.securityweek.com/pragmatic-approach-your-digital-transformation-journey
From the Amazon juggernaut to the now legendary story of Uber, examples of digital disruption reshaping markets and industries abound. In fact, in their 2017 State of Digital Disruption study, the Global Center for Digital Business Transformation (DBT Center) says that in just two years digital disruption has gone from a peripheral concern to top-of-mind. The DBT Center’s latest study finds that among the 636 business leaders polled across 44 countries and 14 industries, 75 percent believe that digital disruption will have a major or transformative impact on their industry. This is in sharp contrast to the 26 percent that felt that way when last surveyed in 2015.
With a security strategy and architecture in place, you are now ready to take on the next key stages in your digital journey.
1. Hyper-connectivity: Driving new patterns of rich connections between people, process, data, and things.
2. Data integration: Embedding data-driven insights and decisions directly into the workflows and applications that drive business.
3. Machine learning: Automating insight from business and operational data to intelligently scale key initiatives.
As with most new and challenging endeavors, you need to define a pragmatic approach to mastering hyper-connectivity, data integration, and machine learning.
Just as novice runners don’t start with a marathon – they begin with a 5K and work up from there – the same is true as you embark on digital transformation. With a strong cybersecurity foundation in place, the most successful journeys begin with initiatives that involve strategic, but limited, connectivity and data integration. As digital value is realized, you build on success, incrementally expanding connectivity and integration and layering in machine learning.
Tomi Engdahl says:
Samsung Portable SSD T5 Review: 64-Layer V-NAND Debuts in Retail
by Ganesh T S on August 15, 2017 10:00 AM EST
http://www.anandtech.com/show/11719/samsung-portable-ssd-t5-review-64layer-vnand-debuts-in-retail
Flash technology has seen rapid advancements in the last few years, including the mass production of planar 1x nm NAND, TLC, and 3D NAND. External high-speed interfaces such as USB 3.1 Gen 2 and Thunderbolt 3 have also become ubiquitous. The advent of Type-C has also enabled device vendors to agree upon a standardized connector for their equipment (be it mobile devices or desktop PCs). These advances have led to the appearance of small and affordable direct-attached storage units with very high performance for day-to-day data transfer applications.
Samsung has been an active participant in the high-performance external SSD market with their Portable SSD series. The T1 was introduced in early 2015, while the T3 came out in early 2016. The T3 was the first retail product to utilize Samsung’s 48-layer TLC V-NAND. Today, Samsung is launching the Portable SSD T5. It is a retail pilot vehicle for their 64-layer TLC V-NAND as they ramp up its production. The Portable SSD T5 comes in four different capacity points – 250GB, 500GB, 1TB, and 2TB. It also moves up to a USB 3.1 Gen 2 Type-C interface, while retaining the same compact form factor and hardware encryption capabilities of the Portable SSD T3.
Tomi Engdahl says:
64-bit Firefox is the New Default on 64-bit Windows
https://tech.slashdot.org/story/17/08/15/133216/64-bit-firefox-is-the-new-default-on-64-bit-windows
Users on 64-bit Windows who download Firefox will now get our 64-bit version by default. That means they’ll install a more secure version of Firefox, one that also crashes a whole lot less. How much less? In our tests so far, 64-bit Firefox reduced crashes by 39% on machines with 4GB of RAM or more.
64-bit Firefox is the new default on 64-bit Windows
https://blog.mozilla.org/firefox/firefox-64-default-64-bit-windows/
Tomi Engdahl says:
Server DRAM Supply Expected to Remain Tight
http://www.eetimes.com/document.asp?doc_id=1332155&
Server DRAM revenue among the top three DRAM vendors — Samsung Electronics, SK Hynix and Micron Technology — rose by 30 percent sequentially in the second quarter as the tight supply of DRAM chips continued to lift average selling prices, according to market watcher DRAMeXchange. The firm expects server DRAM supply to remain tight throughout the remainder of 2017.
Despite product mix adjustments, suppliers had trouble meeting the various growing demands of the DRAM market, said DRAMeXchange, a unit of market research firm TrendForce that tracks memory chip pricing.
“Thanks to the increase in the average memory density of server systems, as evidenced by the adoption of high-density 32GB RDIMMs and 64GB LRDIMMs in this year’s first half, the profit margin of server DRAM surged,” said Mark Liu, a DRAMeXchange analyst, in a press statement.
Tomi Engdahl says:
Katie Roof / TechCrunch:
Sources: MongoDB has filed confidentially for IPO, has submitted S-1, aims to go public before year end
Database provider MongoDB has filed confidentially for IPO
https://techcrunch.com/2017/08/15/database-provider-mongodb-has-filed-confidentially-for-ipo/
Tomi Engdahl says:
Bloomberg:
China’s key advantages in race to build AI: the government’s financial support, ability to mandate foreign firms’ cooperation, and pervasive data on citizens
China’s Plan for World Domination in AI Isn’t So Crazy After All
https://www.bloomberg.com/news/articles/2017-08-14/china-s-plan-for-world-domination-in-ai-isn-t-so-crazy-after-all
China has key AI ingredient: a vast well of government data
Big companies, startups, nation plowing money into field
Xu Li’s software scans more faces than maybe any on earth. He has the Chinese police to thank.
Xu runs SenseTime Group Ltd., which makes artificial intelligence software that recognizes objects and faces, and counts China’s biggest smartphone brands as customers. In July, SenseTime raised $410 million, a sum it said was the largest single round for an AI company to date. That feat may soon be topped, probably by another startup in China.
Tomi Engdahl says:
Intel Officially Reveals Post-8th Generation Core Architecture Code Name: Ice Lake, Built On 10nm+
https://hardware.slashdot.org/story/17/08/15/220257/intel-officially-reveals-post-8th-generation-core-architecture-code-name-ice-lake-built-on-10nm
Intel has confirmed the existence of a new processor family called Ice Lake that will be made on Intel’s 10nm+ process. The company published basic information on the Ice Lake architecture on their codename decoder. AnandTech reports:
Intel Officially Reveals Post-8th Generation Core Architecture Code Name: Ice Lake, Built on 10nm+
by Ian Cutress on August 15, 2017 9:20 AM EST
http://www.anandtech.com/show/11722/intel-reveals-ice-lake-core-architecture-10nm-plus
In an unusual move for Intel, the chip giant has ever so slightly taken the wraps off of one of their future generation Core architectures. Basic information on the Ice Lake architecture has been published over on Intel’s codename decoder, officially confirming for the first time the existence of the architecture and that it will be made on Intel’s 10nm+ process.
This is an unexpected development as the company has yet to formally detail (let alone launch) the first 10nm Core architecture – Cannon Lake – and it’s rare these days for Intel to talk more than a generation ahead in CPU architectures. Equally as interesting is the fact that Intel is calling Ice Lake the successor to their upcoming 8th generation Coffee Lake processors, which codename bingo aside, throws some confusion on where the 14nm Coffee Lake and 10nm Cannon Lake will eventually stand.
As a refresher, the last few generations of Core have been Sandy Bridge, Ivy Bridge, Haswell, Broadwell and Skylake, with Kaby Lake being the latest, released at the start of the year. Kaby Lake is Intel’s third Core product produced using a 14nm lithography process, specifically the second-generation ‘14 PLUS’ (or 14+) version of Intel’s 14nm process.
Tomi Engdahl says:
10 early warning signs of ERP disaster
http://www.cio.com/article/3214631/enterprise-resource-planning/10-early-warning-signs-of-erp-disaster.html
Enterprise resource planning systems reap business efficiency by tying together disparate systems and services. But an ERP implementation can quickly unravel into disaster if you don’t heed these early warning signs.
Enterprise resource planning (ERP) systems weave together software and services from disparate departments into a cohesive, collaborative whole. Companies rely on ERP systems to keep their business operations running smoothly. Without ERP implementations to coordinate purchases, customer orders, financial transactions and beyond, modern business would take a significant step back in efficiency.
But the integrated nature of ERP systems makes them tricky to maintain and update. Many ERP systems are out of date or need major improvements. Plus, many organizations, seeking cost savings and flexibility, are considering migrating their ERP needs to the cloud. All of these factors make ERP systems ripe for disaster.
Tomi Engdahl says:
Lenovo thought PC salesfolk could sell servers and was wrong by about $500m
‘Over-integration’ saw x86 business fail to launch
https://www.theregister.co.uk/2017/08/16/lenovo_server_sales_slowdown_back_story/
In Q3 2014, the last period before offloading its x86 server business to Lenovo, IBM had server revenue of US$2.33bn across x86 and its more exotic architectures. In Q1 2017 IBM still had server revenue of $831.5m. Lenovo hauled in $731.5m which, once we do the math of IBM’s old revenue minus Lenovo’s current cash count, is a difference of about $750m.
So where did that money go? Some went into the ether as the server market shrank. Some went to new hyperscale wholesalers. If we round things to the nearest half-billion we end up with around $500m per quarter no longer landing in Lenovo’s pockets.
Today the company told El Reg the sales dip can be attributed to “over-integration” with its existing sales teams.
That term was given to us today by Rod Lappin, Lenovo’s senior veep for global sales and marketing, who explained it describes the company’s early efforts to have its existing PC-centric sales teams sell servers. The company did so because its PC sales team was extensive in terms of numbers and global presence, and had a fine track record. Integrating server and PC sales was thought to be the way to win.
But it turns out the PC sales people didn’t have the right skills or contacts: the company couldn’t have the right conversations with the right people.
Tomi Engdahl says:
When is a Barracuda not a Barracuda? When it’s really AWS S3
Now you can replicate backups to Barracuda’s actually-Amazonian cloud
https://www.theregister.co.uk/2017/08/16/barracuda_aws_s3/
Barracuda’s backup appliances can now replicate data to Amazon’s S3 cloud silos.
According to the California-based outfit, its backup appliance is now available in three flavors:
On-premises physical server
On-premises virtual server
In-cloud virtual server
Data can be backed up from Office 365, physical machines, and virtual machines running in Hyper-V or VMware systems, to an appliance. This box can then replicate its backups to a second appliance, typically at a remote site, providing a form of disaster recovery, or send the data to S3 buckets in AWS. For small and medium businesses with no second data centre, replicating to Amazon’s silos provides an off-site protection resource.
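Barracuda’s appliances handle the S3 replication internally, but as a generic, hypothetical sketch of the same off-site pattern (the bucket name, key and file path are placeholders, and AWS credentials are assumed to be configured), pushing a backup archive into an S3 bucket with the AWS SDK for Python looks roughly like this:

# Generic off-site copy of a backup archive to an S3 bucket (not Barracuda's implementation).
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="/backups/office365-2017-08-16.bak",   # hypothetical local backup archive
    Bucket="example-offsite-backups",               # hypothetical bucket name
    Key="site-a/office365-2017-08-16.bak",
)
print("backup archive replicated to S3")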
Tomi Engdahl says:
The future of Python: Concurrency devoured, Node.js next on menu
Programming language keeps getting fatter amid awkward version 3 split
https://www.theregister.co.uk/2017/08/16/python_future/
Analysis The PyBay 2017 conference, held in San Francisco over the weekend, began with a keynote about concurrency.
Though hardly a draw for a general interest audience, the topic – an examination of multithreaded and multiprocess programming techniques – turns out to be central to the future of Python.
Since 2008, the Python community has tried to reconcile incompatibility between Python 2 and newly introduced Python 3.
For years, adoption of Python 3 was slow and some even dared to suggest Python didn’t have a future.
Python remains one of the most popular programming languages.
It might even be argued that Python is resurgent, thanks to its utility for data science and machine learning projects. Now that almost all the popular Python packages have been ported to Python 3, doubts about the language have receded into the background.
But there’s a counterpoint. JavaScript is also exceedingly popular, more so than Python by Redmonk’s measure. And it has some advantages when dealing with browsers and interfaces. In April, Stanford began testing a version of its introductory programming course taught in JavaScript instead of Java.
Python and JavaScript are widely used in part because they’re easier to pick up than Java, C, C++, and C#. Both have active communities that write and maintain a large number of libraries. And neither has a strong corporate affiliation, the way Java has with Oracle, C# has with Microsoft, and Swift has with Apple.
Concurrency is important, he said, because it’s how the real world operates. Things don’t always happen in a predictable sequence.
Code that implements concurrency can handle multiple tasks that complete at different times. It’s essential for writing applications that scale
Last December, Python version 3.6 arrived, bringing with it non-provisional support for the asyncio module introduced in Python 3.4. The module provides a mechanism for writing single-threaded concurrent code.
“It’s insanely difficult to get large multi-threaded programs correct,” Hettinger explained. “For complex systems, async is much easier to get right than threads with locks.”
The thing is, asynchronous code is Node.js’s reason for being. Node, a JavaScript runtime environment, was created to allow non-blocking, event-driven programming. Python is moving rather quickly into the same territory, and to do so, the size of the language – in terms of the standard library – has expanded considerably.
“There’s twice as much Python as you know,” as Hettinger put it.
Event-driven programming was available in Python long before Node existed, through the Twisted framework
Event-driven programming relies on an event loop that runs continuously. Presented with asynchronous requests, it can process them without blocking other requests.
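A minimal asyncio sketch of that idea, a single-threaded event loop interleaving several slow requests instead of blocking on each one, could look like this (the request names and delays are made up):

import asyncio

async def handle_request(name, delay):
    # Simulate a slow I/O-bound request; await yields control back to the event loop.
    await asyncio.sleep(delay)
    return f"{name} finished after {delay}s"

async def main():
    # Three "requests" run concurrently on one thread; total time is ~2s, not 1+2+1.5s.
    results = await asyncio.gather(
        handle_request("a", 1.0),
        handle_request("b", 2.0),
        handle_request("c", 1.5),
    )
    for r in results:
        print(r)

loop = asyncio.get_event_loop()      # Python 3.6-era API, matching the article's timeframe
loop.run_until_complete(main())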
“I think async is the future,” said Hettinger. “Threading is so hard to get right.”
“Based on my own experience at PyCon, asyncio is really bringing the community together around event-driven concurrency as the main, blessed way to do concurrency at a language level in Python.”
Over the next decade, Lefkowitz believes the Python community will need to improve packaging and deployment. “JavaScript has a better back-end story than Python has a front-end story right now,” he said.
Tomi Engdahl says:
First-day-on-the-job dev: I accidentally nuked production database, was instantly fired
Um. Who put production credentials in onboarding doc?
https://www.theregister.co.uk/2017/06/05/dev_accidentally_nuked_production_database_was_allegedly_instantly_fired/
“How screwed am I?” a new starter asked Reddit after claiming they’d been marched out of their job by their employer’s CTO after “destroying” the production DB – and had been told “legal” would soon get stuck in.
Accidentally destroyed production database on first day of a job, and was told to leave, on top of this i was told by the CTO that they need to get legal involved, how screwed am i?
https://np.reddit.com/r/cscareerquestions/comments/6ez8ag/accidentally_destroyed_production_database_on/
Tomi Engdahl says:
Who wants multiple virtual workstations on a GPU in a blade server?
https://www.theregister.co.uk/2017/08/18/nvidia_virtual_data_center_workstation/
NVIDIA reckons engineering types do, so it’s cut a new GPU and software to carve it up
NVIDIA’s cranked up the virtual workstation caper by giving the world a new GPU that slots into blade servers, plus software to let it run multiple workstation-grade VMs.
The new GPU is the TESLA P6 and uses NVIDIA’s Pascal architecture, the company’s current flagship. The P6 has 2,048 CUDA cores, 16 GB of memory and uses the MXM form factor so it can slot into blades.
The GPU is offered to those who want to build very dense GPU-enhanced compute rigs for the usual suspects: very graphic applications, machine learning and so on. But NVIDIA’s also decided it and the P4, P40 and P100 GPUs should also be put in harness to run virtual workstations.
Tomi Engdahl says:
Software definer wants you to befriend the ‘BFC’, do a bit of ‘reverse virtualization’
What’s that, TidalScale? The Big Friendly what?
https://www.theregister.co.uk/2017/08/18/tidalscale_is_software_defining_servers/
TidalScale is building a software-defined server product. But how would that work, as it needs to run in a server and you can’t really redefine the server you are running in, can you?
Its software creates virtual machines, of course, called TidalPods, and these are used to “right-size” X86 servers dynamically to application workload needs.
An X86 server has fixed hardware resources – CPU cores, memory, IO capacity and network ports. Oddly, to Reg storage desk, direct-attached storage (DAS) is not included. TidalScale’s HyperKernel software, based on the FreeBSD hypervisor bhyve, creates an abstracted and virtual software-defined server based on a group of physical X86 servers, a cluster, a non-uniform memory access or NUMA cluster.
TidalScale does so-called “reverse virtualisation” and is a distributed hypervisor running on each compute node.
Tomi Engdahl says:
We Print 50 Trillion Pages a Year, and Xerox Is Betting That Continues
https://yro.slashdot.org/story/17/08/17/2023221/we-print-50-trillion-pages-a-year-and-xerox-is-betting-that-continues
For most of its 111-year history, Xerox has been known as one of the tech industry’s most innovative companies. Now the legendary copier company is reinventing itself. In January, Xerox made the bold decision to split itself into two, spinning off its business services operations into a separate company called Conduent.
Speaking with Fortune’s Susie Gharib, Jacobson says Xerox is still “one of the top patent producing companies in the world”
http://fortune.com/2017/08/17/xerox-transformation-plan/
Tomi Engdahl says:
China’s Lenovo warns of cost challenges as it sinks to Q1 loss
http://www.reuters.com/article/us-lenovo-group-results-idUSKCN1AY02D
Chinese personal computer maker Lenovo Group Ltd (0992.HK) warned of higher costs and margin pressure due to shortages of components like memory chips, as it posted its first quarterly loss in almost two years on Friday.
Lenovo, which gave up its title as the world’s largest PC maker to HP Inc (HPQ.N) in the quarter through June, lost $72 million compared with a profit of $173 million for the same period last year.
Tomi Engdahl says:
UK.gov is hiring IT bods with skills in … Windows Vista?!
And Server 2003. Yep, this is the year 2017 and we’re not making this up
https://www.theregister.co.uk/2017/08/18/ukgov_is_hiring_it_bods_with_skills_in_windows_vista/
Freelance IT type? Know about the gubbins of Windows XP, Vista and Server 2003? Don’t care about all that IR35 guff? We’ve got great news – UK.gov wants to hire you.
Strictly speaking the role is with an agency rather than the Almighty Government itself, but the Technical Architect vacancy specifies competency in “Windows 2003 Server (R2), 2008, 2012, 2016, XP, Vista Windows 10 build, configuration and implementation”. [sic]
Knowledge of “auditing and security products” is listed under the “desirable” heading, as is the non-essential (ahem) skill of “software and hardware integration”.
Tomi Engdahl says:
David McCabe / Axios:
US lawmakers and advocates on both the left and right are increasingly calling for regulating Google, Facebook, and Amazon
The walls are closing in on tech giants
https://www.axios.com/the-walls-close-in-on-tech-2473228710.html
Tech behemoths Google, Facebook and Amazon are feeling the heat from the far-left and the far-right, and even the center is starting to fold.
Why it matters: Criticism over the companies’ size, culture and overall influence in society is getting louder as they infiltrate every part of our lives. Though it’s mostly rhetoric rather than action at the moment, that could change quickly in the current political environment.
The political establishment is starting to buy in to these concerns, too: Democrats are urging tougher antitrust enforcement as part of their “Better Deal” platform. Republican leadership staffers told Google, Facebook and Amazon that aggressive pro-net neutrality advocacy would put their policy objectives at risk; sources say they invoked privacy as one issue where the companies could be vulnerable.
As history shows, it takes time for talk to turn to action: AT&T’s antitrust disputes with its skeptics festered over a decade, and Microsoft’s opponents agitated for years before the government took them seriously. And fringe arguments have a way of becoming mainstream: Critics of Ma Bell and Microsoft looked like outliers before picking up steam.
Tomi Engdahl says:
Qualcomm moved its Snapdragon designers to its ARM server chip. We peek at the results
Centriq 2400 blueprints revealed this week
https://www.theregister.co.uk/2017/08/20/qualcomm_custom_cpu_design/
Hot Chips Qualcomm moved engineers from its flagship Snapdragon chips, used in millions of smartphones and tablets, to its fledgling data center processor family Centriq.
This shift in focus, from building the brains of handheld devices to concentrating on servers, will be apparent on Tuesday evening, when the internal design of Centriq is due to be presented at engineering industry conference Hot Chips in Silicon Valley.
The reassignment of engineers from Snapdragon to Centriq explains why the mobile side suddenly switched from its in-house-designed Kryo cores to using off-the-shelf ARM Cortex cores, or minor variations of them. Effectively, it put at least a temporary pause on fully custom Kryo development.
Late last year, Qualcomm unveiled the Snapdragon 835, its premium system-on-chip that will go into devices from top-end Android smartphones to Windows 10 laptops this year.
Tomi Engdahl says:
Neo-Nazi Daily Stormer loses its Russian domain, too
Russian official cites “strict regime” for combatting extremism online.
https://arstechnica.com/tech-policy/2017/08/neo-nazi-daily-stormer-loses-its-russian-domain-too/
When the Daily Stormer lost control of its .com domain in the face of a social media protest, the infamous hate site sought virtual refuge in Russia. For a few hours on Wednesday, the site re-appeared at the domain “dailystormer.ru” before the site lost DDoS protection from CloudFlare and disappeared from the Web once again.
Now the Russians have nixed the Daily Stormer’s new online home, citing the country’s laws against hate speech. According to Radio Free Europe, the Russian company responsible for registering the Daily Stormer’s Russian domain received a letter from Russian authorities asking it “to look into the possibility of register suspension due to extremist content of this domain. So we decided to suspend [the] domain Dailystormer.ru.”
Tomi Engdahl says:
Peter Bright / Ars Technica:
Intel unveils 8th Gen Intel Core processors based on an updated “Kaby Lake refresh” architecture, claims 40% performance boost over earlier gen — No Coffee Lake or Cannonlake here; these are doubled up Kaby Lake parts. — The first “8th generation” Intel Core processors roll out today …
Intel’s first 8th generation processors are just updated 7th generation chips
No Coffee Lake or Cannonlake here; these are doubled up Kaby Lake parts.
https://arstechnica.com/gadgets/2017/08/intel-first-8th-generation-processors-are-just-updated-7th-generation-chips/
The first “8th generation” Intel Core processors roll out today: a quartet of 15W U-series mobile processors. Prior generation U-series parts have had two cores, four threads; these new chips double that to four cores and eight threads. They also bump up the maximum clock speed to as much as 4.2GHz, though the base clock speed is sharply down at 1.9GHz for the top end part (compared to the 7th generation’s 2.8GHz). But beyond those changes, there’s little to say about the new chips, because in a lot of ways, the new chips aren’t really new.
Although Intel is calling these parts “8th generation,” their architecture, both for their CPU and their integrated GPU, is the same as “7th generation” Kaby Lake. In fact, Intel calls the architecture of these chips “Kaby Lake refresh.” Kaby Lake was itself a minor update on Skylake, adding an improved GPU (with, for example, hardware-accelerated support for 4K H.265 video) and a clock speed bump. The new chips continue to be built on Intel’s “14nm+” manufacturing process, albeit a somewhat refined one.
Tomi Engdahl says:
eSports gamers targeted for device delivering electric shock to enhance brain performance
http://www.abc.net.au/news/2017-08-20/humm-device-gives-brain-electric-shock-to-make-it-perform/8822400
Researchers are developing a computer interface to boost your brain’s performance by delivering it an electric shock — and they think eSports gamers will be lining up to get their hands on it.
The Perth research group — named HUMM — is developing a “brain computer interface” that makes your mind work both faster and better by delivering a shock of electricity.
Its prototype device consists of a headset with four electrodes to measure brain waves. It then stimulates the brain to improve performance.
HUMM’s four founders started developing the device with the support of UWA’s innovation quarter, and they’re aiming to sell it first to eSports gamers.
“It is a good place for us to build the core competency in the technology and the understanding of the neuroscience,” co-founder Iain McIntyre said.
“For us to be able to build products in the future that will target 7 billion people rather than 250 million.”
Tomi Engdahl says:
Your next computer will be nearly twice as fast as existing PCs
http://mashable.com/2017/08/21/intel-core-eighth-gen-processors/#X8zusDAbKaql
Computers may no longer be tech darlings now that everyone uses their smartphones to do basically everything, but they’re about to get really exciting again.
About every two years, Intel introduces new processors that shape the computing industry, and this year’s no different. Whereas the last two generations of chips were more like half steps in performance, the new 8th-generation “Kaby Lake Refresh” Core processors are claimed to be up to 40 percent faster than 7th-gen “Kaby Lake” chips, and nearly twice as fast as a typical five-year-old PC.
It’s a big bump up in performance and you’ll feel the speed even if you’re just browsing the web.
The first computers (laptops and 2-in-1s) with 8th-gen chips, in Core i5 or i7 (U-series) flavors, will ship at the beginning of September, with more than 145 designs planned. Desktop processors will ship in the fall, and enterprise versions later.
Tomi Engdahl says:
Facebook won’t change React.js license despite Apache developer pain
We love open source so much we can’t drop sueball shield, says The Social Network™
https://www.theregister.co.uk/2017/08/21/facebook_apache_openbsd_plus_license_dispute/
Facebook’s decided to stick with its preferred version of the BSD license despite the Apache Foundation sin-binning it for any future projects.
The Foundation barred use of Facebook’s BSD-plus-Patents license in July, placing it in the “Category X” it reserves for “disallowed licenses”.
Facebook’s BSD+Patents license earned that black mark because the Foundation felt it “includes a specification of a PATENTS file that passes along risk to downstream consumers of our software imbalanced in favor of the licensor, not the licensee, thereby violating our Apache legal policy of being a universal donor.”
Apache’s decision became a problem because Facebook’s React UI-building JavaScript library has been widely adopted by projects whose code is licensed in ways the Foundation approves. Developers are therefore faced with disentangling React if they want to stay on the right side of the T&Cs.
Developers who didn’t fancy that work therefore kicked off a GitHub thread calling for Facebook to change React’s licence.
Facebook could have walked away from open source, Facebook engineering director Adam Wolff says, but instead “decided to add a clear patent grant when we release software under the 3-clause BSD license, creating what has come to be known as the BSD + Patents license. The patent grant says that if you’re going to use the software we’ve released under it, you lose the patent license from us if you sue us for patent infringement.”
Wolff says Facebook believes “that if this license were widely adopted, it could actually reduce meritless litigation for all adopters, and we want to work with others to explore this possibility.”
Tomi Engdahl says:
Intel is announcing new processors, which usually means that prices of older PCs come down. Not this time. DRAM prices are still rising, which is also reflected in laptop prices.
According to DRAMeXchange, DRAM worth 16.51 billion dollars was sold in the second quarter. Contract prices for PC memory grew by 10 per cent during the quarter.
This is not expected to change in the near future, which is bad news for consumers.
Samsung is still clearly the largest DRAM manufacturer. In the second quarter its sales increased by 20.7 per cent compared with the January–March quarter; Samsung sold over $7.6 billion worth of memory in April–June.
The second largest manufacturer, SK Hynix, increased its sales by 11.2 per cent to $4.5 billion. Third was Micron, whose sales grew by 20.2 per cent to almost $3.6 billion. Together these three account for about 95% of the market.
Source: http://etn.fi/index.php?option=com_content&view=article&id=6701&via=n&datum=2017-08-21_14:03:40&mottagare=31202
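As a quick sanity check, the vendor figures quoted above do add up to the roughly 95% combined share claimed. A minimal sketch in Python, using the reported Q2 2017 revenues in billions of dollars:

```python
# Back-of-the-envelope check of the DRAM market figures quoted above (Q2 2017, billions of USD).
total_market = 16.51
vendors = {"Samsung": 7.6, "SK Hynix": 4.5, "Micron": 3.6}

for name, revenue in vendors.items():
    print(f"{name}: {revenue / total_market:.1%} of Q2 DRAM revenue")

combined = sum(vendors.values()) / total_market
print(f"Top three combined: {combined:.1%}")  # prints roughly 95%, matching the claim
```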
Tomi Engdahl says:
Q’comm Details ARM Server SoCs
Centriq built up from dual-core processors
http://www.eetimes.com/document.asp?doc_id=1332174
Qualcomm will describe the custom ARM core inside its first server processor at Hot Chips this week. The Falkor CPU is at the heart of the company’s 10-nm Centriq 2400, a 48-core SoC that will ship later this year, targeting big data centers.
To date, a handful of companies have tried to gain footholds in servers with ARM-based products. They have generally failed so far because their parts could not match the performance of Intel’s x86-based Xeon. However, earlier this year Microsoft’s data center group announced it is testing SoCs from Qualcomm and rival Cavium.
It’s still unclear how Qualcomm will fare. The company did not provide any performance, power consumption or price information on its parts.
Tomi Engdahl says:
AI Sees New Apps, Chips, says Q’comm
Researchers pick up Amsterdam team
http://www.eetimes.com/document.asp?doc_id=1332162&
Lab work is extending machine learning to serve new applications and define new hardware architectures, said a Qualcomm researcher. He spoke on the occasion of the company acquiring Scyfer B.V., a small AI research team affiliated with the University of Amsterdam that it had been working with previously.
Scyfer acted as a consulting firm, applying machine learning to industrial, IoT, banking, and mobile sectors. The group is now part of Qualcomm Research, seeking to expand machine learning in areas such as computer vision and natural language processing and exploring how emerging algorithms will impact the design of hardware accelerators.
“As the algorithms change, we think there is a space here for co-designing the neural networks and the hardware,” said Jeff Gehlhaar, a vice president of technology for corporate R&D who is responsible for AI at Qualcomm.
“As these networks evolve, we are starting to see patterns in execution profiles”
Tomi Engdahl says:
HPE memory options rising by double digits… from today
Meanwhile, DRAM makers toasting record sales hauls
https://www.theregister.co.uk/2017/08/21/hpe_server_price_rises_serve_you_right/
HPE is hiking server memory prices by up to 20 per cent from today, according to communications with the channel, seen by us.
The background is that the industry is recovering from a global DRAM shortage and consequently the component has become more expensive.
In an email about the changes sent to trade customers, HPE stated: “On Monday, August 21, HPE will raise the list prices of older and low volume memory SKUs by approximately 10 per cent to 20 per cent.”
Tomi Engdahl says:
Microsoft Speech Recognition Now As Accurate As Professional Transcribers
https://slashdot.org/story/17/08/21/0420215/microsoft-speech-recognition-now-as-accurate-as-professional-transcribers
Microsoft’s speech recognition system hits a new accuracy milestone
https://techcrunch.com/2017/08/20/microsofts-speech-recognition-system-hits-a-new-accuracy-milestone/
Microsoft announced today that its conversational speech recognition system has reached a 5.1% error rate, its lowest so far. This surpasses the 5.9% error rate reached last year by a group of researchers from Microsoft Artificial Intelligence and Research and puts its accuracy on par with professional human transcribers who have advantages like the ability to listen to text several times.
Both studies transcribed recordings from the Switchboard corpus, a collection of about 2,400 telephone conversations that has been used by researchers to test speech recognition systems since the early 1990s.
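The error rates quoted here are word error rates, the standard metric for Switchboard-style benchmarks: the word-level edit distance between the system’s transcript and a human reference, divided by the number of words in the reference. The articles don’t show the computation, so here is a minimal illustrative sketch (toy example only, nothing like the real multi-hour test sets):

```python
# Minimal word error rate (WER) sketch: WER = (substitutions + deletions + insertions) / reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 error / 6 reference words ≈ 0.167
```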
Tomi Engdahl says:
The sky is blue, water is wet and UK PC shipments are down
Political uncertainty blamed for crap Q2
https://www.theregister.co.uk/2017/08/21/uk_pc_shipments_down/
Brexit and the general election were highlighted by Gartner as being among the reasons why the good folk of Britain purchased far fewer PCs in Q2.
Sales of computers into retailers and distributors dropped by around eleven per cent year-on-year in the period from the start of April to the end of June, according to data sent to The Register by the analyst.
This translated into shipments of around 1.9 million desktops, notebooks and ultramobiles: broken down into market segments, sales to consumers dropped ten per cent and businesses dropped by eight per cent.
Of the top five major vendors, Lenovo was hit hardest, with unit sales crashing 22.6 per cent; HP Inc went backwards to the tune of 11.8 per cent and Apple dropped 9.6 per cent.
Tomi Engdahl says:
NVMe Will Oust SCSI by 2020
http://www.eetimes.com/document.asp?doc_id=1332171&
It wasn’t long ago that flash storage was reserved for high-demand data only. Now all-flash array adoption is not only outpacing hybrid arrays, but those with NVMe look to be rapidly hitting the mainstream.
Tegile Systems is putting its stake in the ground with what it said is the first unified all-NVMe array on the market with its IntelliFlash N-series. However, the company is also giving customers the flexibility to dial up or dial down the amount of flash they want to use over the life of the array. Rob Commins, Tegile’s vice president of marketing, told EE Times in a telephone interview that the new storage platform can take the form of an all-NVMe flash array, use multiple grades of flash, or a hybrid array with spinning disk. Tegile’s management algorithm will absorb what’s available to balance the density.
He said the N-5000 series is a “memory-class storage array” and comes with an extensive set of data management services, including deduplication and compression for data reduction, encrypted data at rest, and complete data protection with snaps, clones and replication.
Commins said Tegile is taking a three-phase approach to implementing NVMe. “As the technology develops, we will put an embedded NVMe fabric in there that allows us to expand the pool of NVMe,” he said. As the NVMe ecosystem matures over the next year-and-a-half to two years, he said, Tegile will expose NVMe at the front end into full memory / flash fabric that hosts can natively connect to over a 40Gb fabric.
Tegile’s broader strategy has been to offer a level of modularity to its arrays so customers aren’t always having to do forklift upgrades. They can swap drives out over the life of the array, as well as controllers.
“It’s going to be a race between us with a full suite of data management software getting into NVMe against NMVe hardware vendors who need to build software,” he said.
With only a 20 percent premium on NVMe SSDs, he said, the protocol will quickly become the de facto standard. “It’s going to flip pretty fast,” he said.
Eric Burgener, IDC’s research director for storage, said the research firm is forecasting more revenue for NVMe SSDs than any other interface in 2020, and that by then it will have replaced SCSI. “The trend we’ve seen with all-flash array vendors is a rush to put a stake in the ground as to what they are doing with NVMe,” he said.
IDC has segmented the market in three categories: primary storage, big data, and rack scale flash.
Burgener said vendors are taking two different approaches to NVMe in their arrays. One is to add it piecemeal with a roadmap for customers that allows them to integrate NVMe devices followed by controllers and then fabric to the host. The other is to ship a complete NVMe system right away. “Most enterprise workloads don’t need this kind of capability yet,” he said, “but some of the vendors are going to be providing it. By and large it’s positioning the platform for future growth. It gives customers a warm fuzzy that their vendor is on the leading edge.”
There will be a combination of things that drive the need for NVMe, including real-time big data analytics, said Burgener, which today is generally only something undertaken by large enterprises with custom applications for that specific vertical. “But we see real-time big data analytics becoming a mainstream type of workload over the course of the next three years.”
Tegile started as a hybrid flash array vendor, and started to shift to all-flash in late 2015, said Burgener, while continuing to make the hybrid arrays available.
Tegile has previously used SanDisk’s InfiniFlash in its arrays but is now using commodity SSDs
“Density doesn’t seem to be a reason these days to go with a custom design,”
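The data management services Commins mentions (deduplication in particular) are easier to picture with a toy model. The sketch below is not Tegile’s implementation, just a generic illustration of block-level dedup: identical blocks are stored once, keyed by a content hash, and duplicate writes only add a reference.

```python
# Generic block-level deduplication sketch (illustrative only, not a vendor implementation).
import hashlib

class DedupStore:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}    # content hash -> block bytes (stored once)
        self.refcount = {}  # content hash -> number of references

    def write(self, data: bytes):
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            key = hashlib.sha256(block).hexdigest()
            if key not in self.blocks:
                self.blocks[key] = block
            self.refcount[key] = self.refcount.get(key, 0) + 1
            refs.append(key)
        return refs  # the logical object is just a list of block references

store = DedupStore()
store.write(b"A" * 8192)
store.write(b"A" * 8192)   # an identical copy stores no new blocks
print(len(store.blocks))   # 1 unique block kept on "disk"
```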
Tomi Engdahl says:
Microsoft unveils Project Brainwave for real-time AI
August 22, 2017 | Posted by Microsoft Research Blog
https://www.microsoft.com/en-us/research/blog/microsoft-unveils-project-brainwave/
Today at Hot Chips 2017, our cross-Microsoft team unveiled a new deep learning acceleration platform, codenamed Project Brainwave.
We designed the system for real-time AI, which means the system processes requests as fast as it receives them, with ultra-low latency. Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users.
The Project Brainwave system is built with three main layers:
A high-performance, distributed system architecture;
A hardware DNN engine synthesized onto FPGAs; and
A compiler and runtime for low-friction deployment of trained models.
First, Project Brainwave leverages the massive FPGA infrastructure that Microsoft has been deploying over the past few years. By attaching high-performance FPGAs directly to our datacenter network, we can serve DNNs as hardware microservices, where a DNN can be mapped to a pool of remote FPGAs and called by a server with no software in the loop. This system architecture both reduces latency, since the CPU does not need to process incoming requests, and allows very high throughput, with the FPGA processing requests as fast as the network can stream them.
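To make the “hardware microservice” idea more concrete, here is a deliberately simplified sketch. The class and endpoint names are invented for illustration and this is not Microsoft’s API; it only shows the shape of the design: a trained DNN is synthesized onto a pool of network-attached FPGAs, and servers dispatch inference requests straight to that pool rather than running the model locally.

```python
# Conceptual sketch of a DNN served as a "hardware microservice" (all names are hypothetical).
import itertools

class FpgaPool:
    def __init__(self, endpoints):
        # Round-robin over the FPGAs that hold the synthesized DNN.
        self._cycle = itertools.cycle(endpoints)

    def serve(self, request_tensor):
        fpga = next(self._cycle)
        # In the real system the request travels over the datacenter network directly
        # to the FPGA, with no host CPU in the request path; here we only simulate it.
        return f"result from {fpga} for a payload of {len(request_tensor)} values"

pool = FpgaPool(["fpga-rack1-03", "fpga-rack2-17"])
print(pool.serve([0.1, 0.7, 0.2]))
```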
Tomi Engdahl says:
Social VR is evolving, and AltspaceVR paid the price
https://venturebeat.com/2017/08/22/the-apparent-demise-of-altspacevr-shows-how-social-vr-is-changing/
Last month, AltspaceVR, one of the most beloved startups in the burgeoning, albeit struggling, VR consumer industry announced it was shutting its doors due to what they described as “unforeseen financial difficulty”. This is by no means an ordinary player in the VR community. The California-based startup is one of the earliest pioneers in the sector, having raised millions beginning with its seed round back in prehistoric 2014 and, more important, entering the incredibly crucial category of social VR.
There’s not a single person I know in the VR industry who doesn’t nod their heads at the importance of social VR, which may very well end up being the “killer app” experience that establishes mainstream appeal to VR. There are only a handful of others in the space, like Sansar by Linden Lab, High Fidelity, WebVR-based JanusVR, and most recently, Facebook Spaces (more on that in a bit).
It’s social VR that also touches on the underlying vision of immersive computing as the quintessential medium to eventually and inevitably open us up to the Ready Player One sort of “Metaverse” that so many in the industry are deeply invested in seeing created. The building of endlessly expansive virtual worlds is almost literally the holy grail of the industry. So, no wonder that the news of AltspaceVR’s collapse felt like a punch in the stomach!
Tomi Engdahl says:
Salvador Rodriguez / Reuters:
Google says it will announce specs for Titan, a security chip that scans cloud hardware for evidence of tampering, on Thursday
Google touts Titan security chip to market cloud services
http://www.reuters.com/article/us-alphabet-google-titan-idUSKCN1B22D6
SAN FRANCISCO (Reuters) – Alphabet Inc’s (GOOGL.O) Google this week will disclose technical details of its new Titan computer chip, an elaborate security feature for its cloud computing network that the company hopes will enable it to steal a march on Amazon.com Inc (AMZN.O) and Microsoft Corp (MSFT.O).
Titan is the size of a tiny stud earring that Google has installed in each of the many thousands of computer servers and network cards that populate its massive data centers that power Google’s cloud services.
Google is hoping Titan will help it carve out a bigger piece of the worldwide cloud computing market, which is forecast by Gartner to be worth nearly $50 billion.
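Google has not published Titan’s internals in this article, but the “scans hardware for evidence of tampering” role is essentially a hardware root of trust: measure the firmware before it runs and refuse to proceed if the measurement does not match a known-good value. The sketch below is a generic illustration of that idea with made-up values, not Google’s implementation.

```python
# Generic root-of-trust style firmware check (illustrative only; values are hypothetical).
import hashlib

# A digest provisioned from a trusted build; kept in tamper-resistant hardware in a real design.
KNOWN_GOOD_DIGEST = hashlib.sha256(b"trusted firmware build 1.0").hexdigest()

def firmware_untampered(firmware_image: bytes) -> bool:
    return hashlib.sha256(firmware_image).hexdigest() == KNOWN_GOOD_DIGEST

print(firmware_untampered(b"trusted firmware build 1.0"))   # True: measurement matches, boot proceeds
print(firmware_untampered(b"trusted firmware build 1.O"))   # False: evidence of tampering
```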
Tomi Engdahl says:
Microsoft has been the de facto operating system supplier for companies for as long as anyone can remember. That may no longer be the case. Google has introduced an expanded version of the Chrome OS familiar from its popular Chromebooks.
The idea of Chrome Enterprise is simple. The platform known from Chromebooks has been extended with business features – network management, print management, update management, and so on – and it is offered to companies at a price of $50 per device per year.
Source: http://www.etn.fi/index.php/13-news/6713-chrome-enterprise-haastaa-microsoftin-yrityksissa
More:
Introducing Chrome Enterprise
https://www.blog.google/topics/connected-workspaces/introducing-chrome-enterprise/
Since we launched Chrome OS in 2009, our goal has been to build the simplest, fastest, and most secure operating system possible. And we’ve been inspired by all the ways we’ve seen businesses embrace Chrome, from Chromebooks in the office, to shared Chrome devices in the field, to signage and kiosks for customer engagement in retail. But with so many different business needs—not to mention so many different devices—companies have also told us they want a single, cost-effective solution that gives them the flexibility and control to keep their employees connected. That’s why today we’re announcing Chrome Enterprise.
Chrome Enterprise offers a host of features, including access to enterprise app storefronts, deep security controls, 24/7 support, as well as integration with cloud and on-premise management tools, VMware Workspace ONE and Microsoft® Active Directory®. We invite you to join our Chrome Enterprise webinar on August 23 to learn more and take part in our live Q&A.