Computer trends for 2015

Here comes my long list of computer technology trends for 2015:

Digitalisation is coming to change all business sectors and our daily work even more than before. Digitalisation also changes the IT sector: traditional software packages are moving rapidly into the cloud. The need to own or rent your own IT infrastructure is dramatically reduced. Automated configuration and monitoring will become truly possible. The workload of software implementation projects will be reduced significantly, as software needs less adjustment. Traditional IT outsourcing is definitely threatened. Security management is one of the key factors to change, as security threats increasingly come from the digital world. For the IT sector, digitalisation simply means: “cheaper and better.”

The phrase “Communications Transforming Business” is becoming the new normal. The pace of change in enterprise communications and collaboration is very fast. A new set of capabilities, empowered by the combination of Mobility, the Cloud, Video, software architectures and Unified Communications, is changing expectations for what IT can deliver.

Global Citizenship: Technology Is Rapidly Dissolving National Borders. Besides your passport, what really defines your nationality these days? Is it where you live? Where you work? The language you speak? The currency you use? If it is any of these, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Increasingly, technological developments will allow us to live and work almost anywhere on the planet… (and even beyond). In my mind, a borderless world will be a more creative, lucrative, healthy, and frankly, exciting one. Especially for entrepreneurs.

The traditional enterprise workflow is ripe for huge change as the focus moves away from working in a single context on a single device to the workflow being portable and contextual. InfoWorld’s executive editor, Galen Gruman, has coined a phrase for this: “liquid computing.” The increase in productivity is promised to be stunning, but the loss of control over data will cross an alarming threshold for many IT professionals.

Mobile will be used more and more. Currently, 49 percent of businesses across North America have adopted between one and ten mobile applications, indicating significant acceptance of these solutions. Embracing mobility, when properly leveraged, promises to increase visibility and responsiveness in the supply chain. Increased employee productivity and business process efficiencies are seen as key business impacts.

The Internet of things is a big, confusing field waiting to explode.  Answer a call or go to a conference these days, and someone is likely trying to sell you on the concept of the Internet of things. However, the Internet of things doesn’t necessarily involve the Internet, and sometimes things aren’t actually on it, either.

The next IT revolution will come from an emerging confluence of liquid computing plus the Internet of things. Those two trends are connected — or should connect, at least. If we are to trust the consultants, we are in a sweet spot for significant change in computing that all companies and users should look forward to.

Cloud will be talked about a lot and taken into wider use. Cloud is the next generation of the supply chain for IT. A global survey of executives predicted a growing shift towards third-party providers to supplement internal capabilities with external resources. CIOs are expected to adopt a more service-centric enterprise IT model. Global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion in 2014 (up 20% from $145.2 billion in 2013), and growth will continue to be fast (“By 2017, enterprise spending on the cloud will amount to a projected $235.1 billion, triple the $78.2 billion in 2011“).
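
As a quick sanity check, those quoted figures are internally consistent; the following few lines of Python use only the numbers quoted above:

    # Sanity check of the quoted cloud spending figures (billions of USD)
    spend_2013, spend_2014 = 145.2, 174.2
    print((spend_2014 / spend_2013 - 1) * 100)  # -> ~20.0, the stated 20% growth

    spend_2011, spend_2017 = 78.2, 235.1
    print(spend_2017 / spend_2011)              # -> ~3.0, the stated tripling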

The rapid growth in mobile, big data, and cloud technologies has profoundly changed market dynamics in every industry, driving the convergence of the digital and physical worlds, and changing customer behavior. It’s an evolution that IT organizations struggle to keep up with. To succeed in this situation, you need to combine traditional IT with agile and web-scale innovation. There is value in both the back-end operational systems and the fast-changing world of user engagement. You are now effectively operating two-speed IT (bimodal IT, two-speed IT, or traditional IT/agile IT). You need a new API-centric layer in the enterprise stack, one that enables two-speed IT.
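
To make the API-centric layer idea concrete, here is a minimal sketch in Python using Flask. The in-memory LEGACY_ORDERS table is a hypothetical stand-in for a slow-moving back-end system of record; the point is that fast-moving web and mobile teams code against the JSON contract while the back-end evolves at its own pace:

    # Minimal sketch of an API layer for two-speed IT (hypothetical data)
    from flask import Flask, jsonify, abort

    app = Flask(__name__)

    # Hypothetical stand-in for a legacy ERP/database lookup
    LEGACY_ORDERS = {"1001": {"status": "shipped", "items": ["keyboard", "mouse"]}}

    @app.route("/api/v1/orders/<order_id>")
    def get_order(order_id):
        order = LEGACY_ORDERS.get(order_id)
        if order is None:
            abort(404)
        # The API layer owns this JSON contract; the systems behind it can change
        return jsonify(id=order_id, **order)

    if __name__ == "__main__":
        app.run(port=8080)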

As Robots Grow Smarter, American Workers Struggle to Keep Up. Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work. Automation is not only replacing manufacturing jobs, it is displacing knowledge and service workers too.

In many countries the IT recruitment market is flying, having picked up to a post-recession high. Employers beware – after years of relative inactivity, job seekers are gearing up for change. Economic improvements and an increase in business confidence have led to a burgeoning jobs market and an epidemic of itchy feet.

Hopefully the IT department is increasingly being seen as a profit centre rather than a cost centre, with IT budgets commonly split between keeping the lights on and spending on innovation and revenue-generating projects. Historically IT was about keeping the infrastructure running and there was no real understanding outside of that, but the days of IT being locked in a basement are gradually changing. CIOs and CMOs must work more closely to increase focus on customers next year or risk losing market share, Forrester Research has warned.

Good questions to ask: Where do you see the corporate IT department in five years’ time? With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT? What IT process or activity is the most important in creating superior user experiences to boost user/customer satisfaction?

 

Windows Server 2003 goes end of life in summer 2015 (July 14, 2015). There are millions of servers globally still running the 13-year-old OS, with one in five customers forecast to miss the 14 July deadline when Microsoft turns off extended support. There were estimated to be 2.7 million WS2003 servers in operation in Europe some months back. This will keep system administrators busy, because there is only around half a year left, and an upgrade to Windows Server 2008 or Windows Server 2012 may turn out to be difficult. Microsoft and support companies do not seem interested in continuing Windows Server 2003 support, so for those who need it, the custom pricing can be ”incredibly expensive”. At this point it seems that many organizations have the desire for a new architecture, and one option they are considering is to move the servers to the cloud.

Windows 10 is coming to PCs and mobile devices. Just a few months back, Microsoft unveiled its new operating system, Windows 10. The new Windows 10 OS is designed to run across a wide range of machines, including everything from tiny “internet of things” devices in business offices to phones, tablets, laptops, and desktops to computer servers. Windows 10 will have exactly the same requirements as Windows 8.1 (the same minimum PC requirements that have existed since 2006: a 1GHz, 32-bit chip with just 1GB of RAM). A technical preview is already available. Microsoft says to expect AWESOME things from Windows 10 in January. Microsoft will share more about the Windows 10 ‘consumer experience’ at an event on January 21 in Redmond and is expected to show the Windows 10 mobile SKU at the event.

Microsoft is going to monetize Windows differently than before. Microsoft Windows has made headway in the market for low-end laptops and tablets this year by reducing the price it charges device manufacturers, charging no royalty on devices with screens of 9 inches or less. That has resulted in a new wave of Windows notebooks in the $200 price range and tablets in the $99 price range. The long-term success of the strategy against Android tablets and Chromebooks remains to be seen.

Microsoft is pushing Universal Apps concept. Microsoft has announced Universal Windows Apps, allowing a single app to run across Windows 8.1 and Windows Phone 8.1 for the first time, with additional support for Xbox coming. Microsoft promotes a unified Windows Store for all Windows devices. Windows Phone Store and Windows Store would be unified with the release of Windows 10.

Under new CEO Satya Nadella, Microsoft realizes that, in the modern world, its software must run on more than just Windows. Microsoft has already revealed Microsoft Office programs for the Apple iPad and iPhone. It also has an email client for both the iOS and Android mobile operating systems.

With Mozilla Firefox and Google Chrome grabbing so much of the desktop market—and Apple Safari, Google Chrome, and Google’s Android browser dominating the mobile market—Internet Explorer is no longer the force it once was. The article “Microsoft May Soon Replace Internet Explorer With a New Web Browser” says that Microsoft’s Windows 10 operating system will debut with an entirely new web browser code-named Spartan. This new browser is a departure from Internet Explorer, the Microsoft browser whose relevance has waned in recent years.

SSD capacity has always lagged well behind that of hard disk drives (hard disks are in 6TB and 8TB territory while SSDs are primarily 256GB to 512GB). Intel and Micron will try to kill the hard drive with new flash technologies. Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Later (within the next two years) Intel promises 10TB+ SSDs thanks to 3D Vertical NAND flash memory. SSD interfaces are also evolving beyond traditional hard disk interfaces. PCIe flash and NVDIMMs will make their way into shared storage devices more in 2015. The ULLtraDIMM™ SSD connects flash storage to the memory channel via standard DIMM slots, in order to close the gap between storage devices and system memory (less than five microseconds write latency at the DIMM level).

Hard disks will still be made in large quantities in 2015. It seems that NAND is not taking over the data centre immediately. The big problem is $/GB. Estimates of shipped disk and SSD capacity out to 2018 show disk growing faster than flash. The world’s ability to make and ship SSDs is falling behind its ability to make and ship disk drives – for SSD capacity to match disk by 2018 we would need roughly eight times more flash foundry capacity than we have. New disk technologies such as shingling, TDMR and HAMR are upping areal density per platter and bringing down cost/GB faster than NAND technology can. At present, solid-state drives with extreme capacities are very expensive. I expect that in 2015 SSD prices will still be so much higher than hard disk prices that anybody who needs to store large amounts of data will want to consider SSD + hard disk hybrid storage systems.
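
A back-of-the-envelope calculation shows why hybrids are attractive. The prices and the “hot data” share below are illustrative assumptions, not quotes:

    # Rough $/GB comparison for a 10TB data set (all numbers are assumptions)
    hdd_per_gb, ssd_per_gb = 0.04, 0.50   # assumed street prices, $/GB
    data_gb, hot_fraction = 10000, 0.1    # 10TB total; assume 10% is "hot"

    all_hdd = data_gb * hdd_per_gb
    all_ssd = data_gb * ssd_per_gb
    hybrid = (data_gb * hot_fraction * ssd_per_gb
              + data_gb * (1 - hot_fraction) * hdd_per_gb)

    print("all-HDD: $%.0f, all-SSD: $%.0f, hybrid: $%.0f" % (all_hdd, all_ssd, hybrid))
    # -> all-HDD: $400, all-SSD: $5000, hybrid: $860

With these assumed numbers, the hybrid gets flash speed for the hot tenth of the data at roughly a sixth of the all-SSD price.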

PC sales, and even laptop sales, are down, and manufacturers are pulling out of the market. The future is all about mobile devices. We have entered the post-PC era so deeply that even the tablet market seems to be saturating, as most people who want one already have one. The crazy years of huge tablet sales growth are over. Tablet shipment growth in 2014 was already quite low (7.2% in 2014, to 235.7M units). There are no strong reasons for either growth or decline in the tablet market in 2015, so I expect it to be stable. IDC expects the iPad to see its first-ever decline, and I expect that too, because the market seems to be increasingly taken by Android tablets that have turned out to be “good enough”. Wearables, Bitcoin or messaging may underpin the next consumer computing epoch, after the PC, internet, and mobile.

There will be new tiny PC form factors coming. Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that”. Intel has likened the compute stick to similar thumb PCs that plug into an HDMI port and are offered by PC makers with the Android OS and an ARM processor (for example the Wyse Cloud Connect and many cheap Android sticks). Such devices typically don’t have internal storage, but can be used to access files and services in the cloud. Intel expects that the stick-sized PC market will grow to tens of millions of devices.

We have entered the post-Microsoft, post-PC programming era: the portable REVOLUTION. Tablets and smart phones are fine for consuming information: a great way to browse the web, check email, stay in touch with friends, and so on. But what does a post-PC world mean for creating things? If you’re writing platform-specific mobile apps in Objective-C or Java then no, the iPad alone is not going to cut it. You’ll need some kind of iPad-to-server setup in which your iPad becomes a mythical thin client for the development environment running on your PC or in the cloud. If, however, you’re working with scripting languages (such as Python and Ruby) or building web-based applications, the iPad or another tablet could be a usable development environment. At least it is worth testing.

You need to prepare to learn new languages that are good for specific tasks. Attack of the one-letter programming languages: from D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following. Watch out! The coder in the next cubicle might have been bitten and infected with a crazy-eyed obsession with a programming language that is not Java and goes by a mysterious one-letter name. Each offers compelling ideas that could do the trick in solving a particular problem you need fixed.

HTML5’s “Dirty Little Secret”: It’s Already Everywhere, Even In Mobile. Just look under the hood. “The dirty little secret of native [app] development is that huge swaths of the UIs we interact with every day are powered by Web technologies under the hood.” When people say Web technology lags behind native development, what they’re really talking about is the distribution model. It’s not that the pace of innovation on the Web is slower, it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Vine is a great example of a modern JavaScript app. It’s lightning fast on desktop and on mobile, and shares the same codebase for ease of maintenance.

Docker, meet hype. Hype, meet Docker. Docker: sorry, you’re just going to have to learn about it. Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds. Docker containers are supported by very many Linux systems. And it is not just Linux anymore: Docker’s app containers are coming to Windows Server, says Microsoft. What containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other. What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications.
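
As a minimal sketch of how simple that has become, the official Docker SDK for Python can pull an image and run an isolated container in a few lines (this requires a running Docker daemon; the “ubuntu” image name is just an example from the public registry):

    # Launch a sandboxed container that shares the host kernel
    # (pip install docker; a local Docker daemon must be running)
    import docker

    client = docker.from_env()  # connect to the local Docker daemon
    output = client.containers.run("ubuntu", "echo hello from a container")
    print(output.decode())      # -> hello from a container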

Domestic software is on the rise in China. China is planning to purge foreign technology and replace it with homegrown suppliers: China is aiming to purge most foreign technology from banks, the military, state-owned enterprises and key government agencies by 2020, stepping up efforts to shift to Chinese suppliers, according to people familiar with the effort. In tests, workers have replaced Microsoft Corp.’s Windows with a homegrown operating system called NeoKylin (FreeBSD based desktop O/S), and Dell commercial PCs are to preinstall NeoKylin in China. The plan is driven by national security concerns and marks an increasingly determined move away from foreign suppliers. There are cases of replacing foreign products at all layers, from applications and middleware down to infrastructure software and hardware. Foreign suppliers may be able to avoid replacement if they share their core technology or give China’s security inspectors access to their products. The campaign could have lasting consequences for U.S. companies including Cisco Systems Inc. (CSCO), International Business Machines Corp. (IBM), Intel Corp. (INTC) and Hewlett-Packard Co. A key government motivation is to bring China up from low-end manufacturing to the high end.

 

Data center markets will grow. MarketsandMarkets forecasts the data center rack server market to grow from $22.01 billion in 2014 to $40.25 billion by 2019, at a compound annual growth rate (CAGR) of 7.17%. North America (NA) is expected to be the largest region for the market’s growth in terms of revenues generated, but Asia-Pacific (APAC) is also expected to emerge as a high-growth market.

The rising need for virtualized data centers and incessantly increasing data traffic are considered strong drivers for the global data center automation market. The SDDC comprises software-defined storage (SDS), software-defined networking (SDN) and software-defined server/compute, wherein all three components are empowered by specialized controllers, which abstract the control plane from the underlying physical equipment. These controllers virtualize the network, server and storage capabilities of a data center, thereby giving better visibility into data traffic routing and server utilization.
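
The control-plane/data-plane split at the heart of this can be caricatured in a few lines of Python. This is a toy model, not a real controller (real ones, such as OpenDaylight or Ryu, speak protocols like OpenFlow to the hardware):

    # Toy model: a central controller computes forwarding state;
    # "switches" are dumb devices that only do table lookups
    class Switch:
        def __init__(self, name):
            self.name, self.flow_table = name, {}

        def forward(self, dst):
            return self.flow_table.get(dst, "drop")  # data plane: dumb lookup

    class Controller:
        def __init__(self, topology):
            self.topology = topology  # global view: {switch: {dst: port}}

        def push_flows(self, switches):
            for sw in switches:       # control plane: install state centrally
                sw.flow_table.update(self.topology[sw.name])

    s1 = Switch("s1")
    Controller({"s1": {"10.0.0.2": "port2"}}).push_flows([s1])
    print(s1.forward("10.0.0.2"))  # -> port2
    print(s1.forward("10.9.9.9"))  # -> drop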

New software-defined networking apps will be delivered in 2015. And so will software-defined storage. And software-defined almost anything (I am waiting for the day we see software-defined software). Customers are ready to move away from vendor-driven proprietary systems that are overly complex and impede their ability to rapidly respond to changing business requirements.

Large data center operators will be using more and more custom hardware of their own instead of standard servers from traditional computer manufacturers. Intel is betting on (customized) commodity chips for cloud computing, and it expects that over half the chips Intel will sell to public clouds in 2015 will have custom designs. The biggest public clouds (Amazon Web Services, Google Compute, Microsoft Azure), other big players (like Facebook or China’s Baidu) and other public clouds (like Twitter and eBay) all have huge data centers that they want to run optimally. Companies like A.W.S. “are running a million servers, so floor space, power, cooling, people — you want to optimize everything”. That is why they want specialized chips. Customers are willing to pay a little more for the special run of chips. While most of Intel’s chips still go into PCs, about one-quarter of Intel’s revenue, and a much bigger share of its profits, come from semiconductors for data centers. In the first nine months of 2014, the average selling price of PC chips fell 4 percent, but the average price of data center chips was up 10 percent.

We have seen GPU acceleration taken into wider use. Special servers and supercomputer systems have long been accelerated by moving calculations to graphics processors. The next step in acceleration will be adding FPGAs to accelerate x86 servers. FPGAs provide a unique combination of highly parallel custom computation, relatively low manufacturing/engineering costs, and low power requirements. FPGA circuits may provide a lot more computing power at a much lower power consumption, but traditionally programming them has been time consuming. This can change with the introduction of new tools (the next step from techniques learned from GPU acceleration). Xilinx has developed its SDAccel tools to develop algorithms in the C, C++ and OpenCL languages and translate them to FPGAs easily. IBM and Xilinx have already demoed FPGA-accelerated systems. Microsoft is also doing research on accelerating applications with FPGAs.
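
To give a feel for the programming model these tools build on, here is a minimal OpenCL vector-add driven from Python with pyopencl. The same kind of OpenCL C kernel is what SDAccel-style flows map onto FPGA fabric; here it simply runs on whatever OpenCL device (GPU or CPU) is available:

    # Minimal OpenCL example (pip install pyopencl numpy)
    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel below is plain OpenCL C -- the kind of code FPGA flows compile
    prg = cl.Program(ctx, """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    assert np.allclose(out, a + b)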


If there is one enduring trend for memory design in 2014 that will carry through to next year, it’s the continued demand for higher performance. The trend toward high performance is never going away. At the same time, the goal is to keep costs down, especially when it comes to consumer applications using DDR4 and mobile devices using LPDDR4. LPDDR4 will gain a strong foothold in 2015, and not just to address mobile computing demands. The reality is that LPDDR3, or even DDR3 for that matter, will be around for the foreseeable future (as the lowest-cost DRAM, whatever that may be). Designers are looking for subsystems that can easily accommodate DDR3 in the immediate future, but will also be able to support DDR4 when it becomes cost-effective or makes more sense.

Universal Memory for Instant-On Computing will be talked about. New memory technologies promise to be strong contenders for replacing the entire memory hierarchy for instant-on operation in computers. HP is working with memristor memories that are promised to be akin to RAM but can hold data without power.  The memristor is also denser than DRAM, the current RAM technology used for main memory. According to HP, it is 64 and 128 times denser, in fact. You could very well have a 512 GB memristor RAM in the near future. HP has what it calls “The Machine”, practically a researcher’s plaything for experimenting on emerging computer technologies. Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system in 2015 (Linux++, in June 2015). HP must still make significant progress in both software and hardware to make its new computer a reality. A working prototype of The Machine should be ready by 2016.

Chip designs that enable everything from a 6 Gbit/s smartphone interface to the world’s smallest SRAM cell will be described at the International Solid State Circuits Conference (ISSCC) in February 2015. Intel will describe a Xeon processor packing 5.56 billion transistors, and AMD will disclose an integrated processor sporting a new x86 core, according to a just-released preview of the event. The annual ISSCC covers the waterfront of chip designs that enable faster speeds, longer battery life, more performance, more memory, and interesting new capabilities. There will be many presentations on first designs made in 16 and 14 nm FinFET processes at IBM, Samsung, and TSMC.

 

1,403 Comments

  1. Tomi Engdahl says:

    Mozilla’s Flash-killer ‘Shumway’ appears in Firefox nightlies
    Open source SWF player promises alternative to Adobe’s endless security horror
    http://www.theregister.co.uk/2015/02/16/mozillas_flashkiller_shumay_appears_in_firefox_nightlies/

    In November 2012 the Mozilla Foundation announced “Project Shumway”, an effort to create a “web-native runtime implementation of the SWF file format.”

    Two-and-a-bit years, and a colossal number of Flash bugs later, Shumway has achieved an important milestone by appearing in a Firefox nightly, a step that suggests it’s getting closer to inclusion in the browser.

    Shumway’s been available as a plugin for some time, and appears entirely capable of handling the SWF files.

    When it arrives in a full version of Firefox, it will mean that about 15.1 per cent of all web surfing won’t need Flash

  2. Tomi Engdahl says:

    Adobe Photoshop turns 25
    Company celebrates by showcasing 25 of the most creative Photoshop artists under 25
    http://www.theinquirer.net/inquirer/news/2396127/adobe-photoshop-turns-25

    ADOBE PHOTOSHOP, the software that can make any Tom, Dick or Harry look like a million dollars, turned the ripe old age of 25 today.

    Photoshop is probably one of the most pervasive pieces of software known to man. It has helped to create logos, advertisements, marketing material and high-profile product designs across the world, from the logos on your clothes, to the apps downloaded on your smartphone and even the objects you buy and have around you every day.

    If everything that has been designed using Photoshop tools was to display a sign saying so, it would be inescapable. It is, therefore, one of the most recognised software brands in the world, with tens of millions of users.

    It is the go-to application for digital image manipulation across all media, from print to film to the web. So much so, that the software has become a part of our cultural consciousness in that ‘photoshopping’ is now a buzzword for all types of image manipulation.

  3. Tomi Engdahl says:

    Report: ICT downtime costs businesses $4 million per year
    http://www.cablinginstall.com/articles/2015/02/infonetics-ict-downtime-survey.html

    Technology researcher Infonetics Research, now part of IHS, Inc. (NYSE: IHS), recently conducted in-depth surveys with 205 medium and large businesses in North America and discovered that companies are losing as much as $100 million per year to downtime related to information and communication technology (ICT).

    “Businesses participating in our ICT downtime survey are losing almost $4 million a year to downtime on average, about half a percent of their total revenue,” comments Matthias Machowinski, directing analyst for enterprise networks and video at Infonetics Research.

    According to the survey, the most common causes of ICT downtime are failures of equipment, software and third-party services; power outages; and human error. Infonetics’ respondent organizations said they experience an average of two outages and four degradations per month, with each event lasting around six hours.

    “Fixing the downtime issue is the smallest cost component,” adds Machowinski. “The real cost is the toll downtime takes on employee productivity and company revenue, illustrating the criticality of ICT infrastructure in the day-to-day operations of an organization.”
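
    (A quick back-of-the-envelope check of those survey figures in Python — rough, since it treats degradations as if they were full outages:)

        events_per_month = 2 + 4        # outages + degradations, from the survey
        hours_per_event = 6
        hours_per_year = events_per_month * hours_per_event * 12   # = 432
        print(4000000.0 / hours_per_year)   # -> ~$9,259 per affected hour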

  4. Tomi Engdahl says:

    Debian on track to prove binaries’ origins
    Reproducible binary project 83% complete
    http://www.theregister.co.uk/2015/02/23/debian_project/

    Debian is on its way to becoming what could be the first operating system to prove the origin of its binaries, technologist Micah Lee says.

    The feat will allow anyone to independently confirm that Debian binaries were built from a reported source package.

    So far a project team devoted to confirming the reproducibility of builds has knocked off 83 percent of source packages within the main archive of the unstable distribution.

    The effort will not be completed in time for the release of the next major Debian release, codenamed Jessie, but could see reproducible builds a feature for the following stable release dubbed Stretch.

    “The team developed the tool debbindiff to provide in-depth detailed diffs of binary packages,” Debian said in a report note.

  5. Tomi Engdahl says:

    Linux kernel: 19 million lines of code

    The Linux Foundation has published its Linux kernel development report for the sixth time. The report makes clear that the 3.18 kernel includes almost 48 thousand files and nearly 19 million lines of code.

    When Linus Torvalds announced the first version of the kernel in 1991, it had only about 10 thousand lines of code.

    Since 2005, a total of 11,695 developers from more than 1,200 companies have participated in Linux kernel development

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2454:linux-ytimessa-19-miljoonaa-koodirivia&catid=13&Itemid=101

    Linux is developed in companies

    According to the Linux Foundation survey of those contributing to the kernel code base, only 16.4 per cent did not report working for a company. Independent consultants accounted for 2.5 per cent of kernel developers.

    At the moment, the Linux kernel has reached version 3.19. Version 3.19 contains slightly less than 19 million lines of code.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2444:linuxia-kehitetaan-yrityksissa&catid=13&Itemid=101

  6. Tomi Engdahl says:

    Linux is going to get live updates

    The next major Linux kernel may include a long-awaited new feature: the 3.20 kernel is gaining the ability to upgrade kernel components without restarting the system.

    There have been two different ways to update kernel components: one developed by SUSE and another by Red Hat.

    The developers decided to reconcile and combine these into one solution. The ball has now been thrown to Torvalds: the aim is that Linus Torvalds, ruler of the kernel, will take the technology up for consideration and possible inclusion in kernel version 3.20.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2413:linuxiin-tulossa-live-paivitys&catid=13&Itemid=101

  7. Tomi Engdahl says:

    Multi-core ARM processors coming from many companies…

    Chip Packs 100, 64-bit ARM Cores
    200G Tilera-MX joins ARM party
    http://www.eetimes.com/document.asp?doc_id=1325714&

    EZchip will pack a hundred 64-bit ARM A53 cores into a network processor competing for sockets in servers and communications systems with the likes of Broadcom, Cavium and Intel. The 200 Gbit/second Tile-Mx100 could outgun most if not all competitors despite the fact it will be made in a 28nm process and probably won’t be in production until 2017.

    The chip is based on the Tile-Gx multicore architecture EZchip acquired in July from Tilera. Besides moving Tile-Gx from a proprietary to an ARM core, the new generation adopts some key blocks from EZchip such as a traffic manager it hopes helps it stand apart from its larger competitors.

    “No one else we know of has announced a 100-core ARM v8 processor, so it will be one of the most powerful networking processors out there,” said Tom Halfhill, senior analyst with market watcher The Linley Group (Mountain View, Calif.)

    Cavium will deliver later this year its ThunderX, a 48-core chip using a single-threaded 64-bit ARM core. Broadcom is designing a competing 16nm chip based on multi-threaded superscalar ARM cores expected to be in production next year. Its existing XLP980 chip using 20 MIPS cores is already a heady competitor because its quad-threaded architecture enables it to handle many packet flows.

  8. Tomi Engdahl says:

    Violin Doubles Down on Flash Fabric Architecture
    http://www.eetimes.com/document.asp?doc_id=1325753&

    Violin Memory is committing to its custom flash architecture with a refresh of its storage platform that includes data services customers are looking for if they are to adopt all-flash arrays as primary storage.

    Erik Ottem, Violin’s director of product marketing, said the updates reflect the company’s strategy to move away from just servicing niche applications and becoming a viable option for customers’ primary storage requirements at the same price point or lower.

    But cost is not the only thing customers are looking at when opting to make an AFA primary storage, said Ottem; they are looking for consistent performance, especially in virtualized environments that are prone to latency spikes, as well as data services, which he said high-end customers want to be able to turn on and off at a granular level. The 7300 FSP, for example, includes user selectable, block-level inline de-duplication and compression. The Concerto OS 7 combines Violin’s previous operating system, vMOS 5, and its Concerto data management and inline data-reduction capabilities.

  9. Tomi Engdahl says:

    HP reorgs cloud biz as Marten Mickos cedes key responsibilities
    https://gigaom.com/2015/02/23/hp-reorgs-cloud-biz-as-marten-mickos-cedes-key-responsibilities/

    Marten Mickos, the former Eucalyptus CEO who became HP’s top cloud guy in September when HP bought Eucalyptus, is turning over key responsibilities to three other HP executives, according to an internal email message viewed by Gigaom.

    This looks like the HP-ization of Eucalyptus, a company whose claim to fame was offering a private cloud that supported key Amazon Web Services APIs.

    Given those roots, it remains unclear just how much HP Helion will carry over that AWS API support. Before buying Eucalyptus, HP said it was pulling that support from its public cloud.

  10. Tomi Engdahl says:

    Memory Underdogs Rise by 2020
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1325705&

    Emerging kinds of non-volatile memory chips will get traction in mainstream applications by 2020, but DRAM and NAND will continue to dominate for years.

    The global market for today’s emerging non-volatile memory (NVM) technologies will expand from $65 million in 2014 to $7 billion by 2020, an impressive rise of more than 118% per year. However, this business will represent less than 10% of the total 2020 memory market, so there’s still a long way to go before the newcomers replace today’s NAND flash and DRAM chips.

    Memory chips are key building blocks in everything from cameras to cars. Thus standalone chips represent one of the largest semiconductor market segments, totaling $78 billion last year, according to our January report. The need for memory will surge in the next few years especially thanks to the continued explosion in Internet traffic driven by the rising number and use of connected devices.

    NAND and DRAM are well established, but their scalability is becoming increasingly complex and problematic.

    So far, sales of MRAM/STTMRAM and RRAM are still limited mainly to niche industrial and wearable markets due to their limited density.

    Improving scalability and therefore price per gigabit is the key to winning mainstream NAND and DRAM applications. Currently, limited product densities push the price per gigabit to several hundred dollars, a limiting factor for mass market adoption since NAND and DRAM chips cost less than $1/Gbit.

    In the next five years, scalability and chip density of the new memories will be greatly improved, and this will open up many new applications.

    Enterprise storage will be the killer market by far for emerging NVM chips in 2020. The new technologies will greatly improve data center storage performance, where requirements are intensifying as traffic rises.

    Microcontrollers for wearables, smart cards and other markets will increasingly adopt the emerging memories as the scaling of embedded MCU flash runs out of steam, especially after 2018 at the 28nm node.

  11. Tomi Engdahl says:

    AMD Describes Notebook Processor
    Carrizo does HEVC, packs south bridge I/O
    http://www.eetimes.com/document.asp?doc_id=1325722&

    Advanced Micro Devices will describe at the International Solid-State Circuits Conference here the engineering prowess behind its next-generation notebook processor. Carrizo packs 29% more transistors and squeezes double-digit gains in performance and drops in power consumption out of the same 28nm process and die area as its current Kaveri chip.

    AMD uses the extra space to pack its previously external south bridge I/O unit into the die, saving system level power. The company claims the chip is also the first x86 to provide hardware-assisted decode of the new High Efficiency Video Codec (H.265).

    Because “AMD’s integrated graphics core is quite powerful, better than any of Intel’s GPUs – [Carrizo is] the equivalent of a graphics card with a processor and south bridge for free,”

  12. Tomi Engdahl says:

    ‘Utterly unusable’ MS Word dumped by SciFi author Charles Stross
    LibreOffice on MacOSX is now Laundry Files man’s preferred change-tracking tool
    http://www.theregister.co.uk/2015/02/24/utterly_unusable_ms_word_dumped_by_scifi_author_charles_stross/

    British SciFi author Charles Stross once had the protagonist of his Laundry Files series, sysadmin/demon-hunter Bob Howard, narrate his day by saying “I’m sitting in my office, shivering over a cooling cup of coffee and reading The Register when my door opens without warning …”*.

    Stross is welcome in these pages for that inclusion alone, but in the past we’ve also featured his declaration that Microsoft Word is ”a tyrant of the imagination” . And that was just for starters. He went on to call it “a petty, unimaginative, inconsistent dictator that is ill-suited to any creative writer’s use.”

  13. Tomi Engdahl says:

    How good a techie are you? Objective about yourself and your skills?
    I think I am… So come judge me, readers
    http://www.theregister.co.uk/2015/02/24/how_objective_are_you_sysadmins/

    How objective are you? Can you design IT solutions outside your own experience? Are you capable of testing unfamiliar and uncomfortable software, services and solutions with an open mind or do you immediately lash out against the mere idea of change?

    How far outside of your direct experience can you really step and at what point is the advice you give doing more harm than good?

    These probably strike you as somewhat existential questions; indeed, in asking them I am hoping to trigger a moment of professional solipsism in you with an end goal of opening your mind to a discussion about the very nature of our profession itself.

    There was lots of talk about how “credentials don’t matter so much as industry certifications” and “references speak louder than degrees”.

    Getting my ISP designation and joining CIPS would make me a legally recognized IT professional. The credentials are cross-recognized by other professional associations in other nations

    What holds me up is ethics. One of the most critical aspects of joining a professional association is a commitment to professional ethics. I believe passionately in this concept; demonstrating my commitment to it was one of the strongest motivators behind my joining the Canadian Association of Journalists, an organization with similar ethical requirements.

    Insert tab A into slot B

    Computers are everywhere, they are in everything. The Internet Of Things is coming and, quite frankly, the day has already come that if IT practitioners are not capable of thinking beyond just the technical aspects of our job then people will die.

    Create a wireless pacemaker but forget to secure it against a world filled with madmen? People die. Create a car where electronics systems can be tampered with to override driver input on steering, braking, etc? People die. Create electronic display signage for emergency situations without taking into account people with visual disabilities of various types? People die.

    I’m scared of the future. I’m scared of a world of armed drones and cybernetic implants, of self-driving cars and creepy “always on” wearable video cameras. I’m scared of a world where these products and services are designed and overseen by nerds who can’t overcome brand loyalty or make objective judgments about privacy.

  14. Tomi Engdahl says:

    Firefox 36 arrives with full HTTP/2 support and a new design for Android tablets
    http://venturebeat.com/2015/02/24/firefox-36-arrives-with-full-http2-support-and-a-new-design-for-android-tablets/

    Mozilla today launched Firefox 36 for Windows, Mac, Linux, and Android. Major additions to the browser include full HTTP/2 support and a new tablet user interface on Android.

    The biggest news for the browser is undoubtedly HTTP/2 support, the roadmap for which Mozilla outlined just last week. Mozilla plans to keep various draft levels of HTTP/2, already in Firefox, for a few versions. These will be removed “sometime in the near future.”

    HTTP/2, the second major version of Hypertext Transfer Protocol (HTTP) and the biggest update in years, was finalized earlier this month. It is the first new version of the HTTP protocol since HTTP 1.1, which was standardized back in June 1999.

  15. Tomi Engdahl says:

    Can Tracking Employees Improve Business?
    http://tech.slashdot.org/story/15/02/24/2058209/can-tracking-employees-improve-business

    The rise of wearable technologies and big-data analytics means companies can track their employees’ behavior if they think it will improve the bottom line. Now an MIT Media Lab spinout called Humanyze has raised money to expand its technology pilots with big companies.

    Pilots with Bank of America and Deloitte have led to significant business improvements, but workplace privacy is a big concern going forward.

    Humanyze Hits Up Investors to Support “People Analytics” in Business
    http://www.xconomy.com/boston/2015/02/24/humanyze-hits-up-investors-to-support-people-analytics-in-business/?single_page=true

    The startup is called Humanyze—it was formerly known as Sociometric Solutions—and it spun out of the MIT Media Lab in 2011. Since then, the eight-person company has been heads-down developing technology to help businesses improve their performance by understanding how their employees behave on a daily basis.

    The key? Gathering and analyzing data on how employees talk to customers, who talks to whom within companies, what times of day people send e-mail, make phone calls, go on break, and so on. If it all sounds a little Big Brother-ish, well, Humanyze has thought carefully about those privacy concerns (more on that below).

    The field of data-driven human resources is starting to see a lot of interest—and hype—whether you call it people analytics, reality mining, or “Moneyball” for business. The trend is being driven in part by mobile and wearable technologies, as well as the rise of big-data analytics. Yet, as Waber puts it, it’s surprising “how data-driven companies are about their business, but it never includes their people.”

  16. Tomi Engdahl says:

    The Intel IT department guide to running an IT department
    Interview Firm’s IT director Chris Shaw talks mobility, big data and the IoT
    http://www.theinquirer.net/inquirer/feature/2396587/the-intel-it-department-guide-to-running-an-it-department

    MANAGING your company’s IT can sometimes be overwhelming. The pace of change and the technical requirements of your organisation, coupled with budgetary restraints, can mean that rolling out a flagship system is like rolling a boulder uphill.

    He is keen to point out that Intel is just like any other company when it comes to the need to solve problems.

    “If you’re in IT at Intel you’re in no greater privileged position than you are in any other organisation. We have targets which we need to hit,” he said.

    “Like any other organisation we have to demonstrate our value within the organisation. We want to share our journey, not just our successes, but some of the things we’ve had to take a bit of a detour on.”

    The report, for the first time this year, adds a separate chapter for the Internet of Things (IoT).

    “We tried to make it familiar to the industry so we’ve used that familiar SMAC stack (social, mobile, analytics and cloud) but this year we’ve started to introduce thought processes for the IoT,” he said.

    “It’s so bleeding edge and so innovative that it is creating brand new ways of doing things, so we’ve separated it from the cloud in the hope of demonstrating some initiatives we’ve had that other organisations might be able to take advantage of too.”

    “We’re recognising that being able to collaborate on the same document at the same time is already improving efficiency. We’re seeing how that is scaling out to other parts of the business on a variety of things, which are achieving an efficiency and a velocity as a result,”

    Intel has recognised that the days of locking down company data on a need-to-know basis are long gone.

    “It’s always difficult to explain the ROI on social activity to external management types that aren’t on board, but let me give you an example. By coordinating our meeting room schedules on a social platform, we’ve calculated that we save about two minutes per meeting,”

    But the moral of the story is one of adding value. “The key to the story is that, just like any other company, we couldn’t just go and say: ‘Please Mr CSO, write us a check for millions of pounds.’ We had to go and prove there was value,”

    “It doesn’t matter what industry you’re in, you need to prove to the people that hold the purse strings that you are adding value, and it’s through processes like this that we’ll continue to see the value that IT within an organisation can bring.”

    “We work with open source technologies such as OpenStack and Apache to help people make these bespoke environments that work for them.”

    “It has allowed us to reduce the number of data centres we have. We used to have a data centre in every city where we had an office. Now it’s down in the region of around 60,”

    Finally, we turn to the IoT. It’s the buzz phrase of the moment, but it also represents the next revolution in technology, just as the web did 25 years ago.

    Shaw talks about it being something that the company has spearheaded since before it was a thing.

    “There’s been a part of us involved in the idea of embedding our products within items that stretches back 20 years to embedding chips in F1 engine management systems, so we have a long pedigree in this,” he said.

    “We’ve set up Intel IoT Labs all over the world. There’s one at our UK base in Swindon. We’ve got some investment in smart cities – how we can help London build a flagship smart city – and we’re doing that using some interesting internal pilots.”

    It’s these pilots that once again see Intel thinking small and local and then global and external.

    “We added motion sensors to the room. If after five or 10 minutes after the meeting starts there’s no motion, it’s either a really boring meeting or more likely they’ve not turned up, in which case an event is triggered to free up the meeting room.

    “Another idea is temperature sensors, not just to control temperatures in a building but in a room or department to improve productivity and reduce sickness through coughs and colds.”

  17. Tomi Engdahl says:

    Embedded World 2015: Skylake-DT Industrial Motherboard Spotted
    by Ian Cutress on February 24, 2015 2:44 PM EST
    http://www.anandtech.com/show/9006/embedded-world-2015-skylakedt-industrial-motherboard-spotted

    Even with Broadwell not completely out of the door, a lot of attention is being put towards Skylake, the 14nm architecture update from Intel. Current information out in the wild seems to contain a lot of hearsay and supposed leaks, but now we actually have at least some indication that Skylake is coming thanks to ComputerBase.de, which spotted an ASRock industrial motherboard with the LGA1151 socket for Skylake processors at Embedded World.

    One of the big talking points of Skylake is the DDR4 compatibility, but this board throws a spanner into that by supporting two DDR3L-1600 SO-DIMM slots for up to 16GB of memory. It is also worth noting the separate chipset (most likely a server grade C236 for the next Xeon E3 CPUs) and support for three HDMI ports on the board.

  18. Tomi Engdahl says:

    Storage modernisation reality check
    Are you a zero or a hero?
    http://www.theregister.co.uk/2015/02/25/storage_modernisation_reality_check/

    Keeping up with evolving storage demands is tough. This came through strongly in a recent Reg research study.

    Those taking part told us that pressure on the storage infrastructure wasn’t just down to increasing data volumes. New application requirements, the impact of virtualisation, and escalating business expectations for access and availability were among the other factors called out. As a result, many spoke about problems with hardware, software and maintenance costs, together with increased management overhead and challenges maintaining service levels.

    The study also told us, however, that those who had invested significantly in the right blend of modern technologies such as flash-based systems, storage virtualisation, and various forms of automation and scale-out solutions (to complement existing solutions) judged themselves to be in a much better position than the majority. They were meeting current business demands and service level requirements more effectively, and felt more confident about the future. If you could articulate these benefits well enough, making the case for investment in storage modernisation should, at least in theory, be relatively straightforward.

    The reality, of course, is often quite different. Half of those taking part in the research said one of their problems was that senior execs did not appreciate the need to invest, and a similar number alluded to budgets not keeping up with growth in demand

    When it comes to storage, siloed budgets are particularly significant because most tell us that resource pooling is an important part of creating a more efficient, flexible and responsive environment

    The harsh reality is, though, that standing still is not really an option over the longer term, and the more you rely purely on traditional technologies and the manual processes that frequently go with them, the more you are just stacking up problems for the future. Storage teams who are unable to support their colleagues running heavily virtualised infrastructures or implementing new-style applications will also be increasingly marginalised. Alternative options such as cloud storage and self-managing storage appliances will creep into the organisation in an independent and uncoordinated manner, which won’t be good for either you or the business.

    Fight, if necessary, to get storage modernisation onto the agenda

  19. Tomi Engdahl says:

    XenData’s storage Jurassic Park: PC tape backup is BACK
    Ask your granny about external tape drives, son
    http://www.theregister.co.uk/2015/02/25/xendata_storage_jurassic_park_usb/

    Back in the old days, those dim and distant ones before the millennium, before the ’90s even, you could backup your PC to external tape drives. It was a nightmare mix of weird software and slow transfer speeds … and then external disks came along to rescue us.

    Now XenData is bringing PC tape backup back, with USB-connect LTO6 drive and LTFS data access.

    Its X2500-USB is an LTO-6 external drive format archiving system for personal use with a USB 3.0 link (up to 140MB/sec) to PCs and notebooks.

    The drive is loaded with either a 2.5TB LTO-6 cartridge or 1.5TB LTO-5 format one. Files are written to the tape using either LTFS or the TAR format, with the tape cartridge formatted appropriately.

    The drive is less than 2.4 inches (60 mm) thick. XenData says its product is good for storing video, audio and image files.

    The X2500-USB is priced at $5,495, and product shipments start in March 2015.

    That’s a lot of money compared to an external disk drive and you would need hundreds of cartridges and/or a need for off-system storage for it to be worthwhile.

  20. Tomi Engdahl says:

    Big data = big loss for Hadoop-flinger Hortonworks
    Professional services the future, for now
    http://www.theregister.co.uk/2015/02/25/hortonworks_inaugural_quarter_results/

    If the future of big data is Hadoop, those peddling it still have a long journey ahead of them.

    Newly IPO’d Hortonworks’ inaugural results show a company whose losses are growing as its business expands.

    Further, it’s not subscriptions to Hortonworks’ implementation of the open-source Hadoop that are growing most – it is professional services.

    That translates as Hadoop remaining complicated to install and use for those customers who are finally ready to make the jump and start crunching big data.

    That’s an inconvenient truth for a tech firm trying to become the next Red Hat or Salesforce – living off open source software funded by accelerating and continuous subscriptions.

    Hortonworks dishes out Hadoop for HDS: Mmmm, open source with big vendor gravy
    This elephant can dance in the Big Data ballroom
    http://www.theregister.co.uk/2015/02/13/hds_doing_hadoop_for_enterprise_with_hortonworks/

    HDS will offer open-source data muncher Hadoop to the enterprise after doing a deal with Hortonworks.

    Hadoop distributor Hortonworks has signed an agreement with HDS to jointly promote and support the software. HDS can now deliver Hortonworks’ Data Platform (HDP), Hadoop in other words, to its enterprise customers.

    Hortonworks strategic marketing veep John Kreisa offered this canned quote: “The strategic agreement also provides a joint engineering commitment for the two companies on current and future projects that will help make Hadoop enterprise-ready.”

    This deal is another piece for HDS’ Big Data Lego castle, and follows on from the Pentaho data analytics acquisition.

  21. Tomi Engdahl says:

    New Dell boxes reverberate with Blue Thunder
    Lightening the hyper-converged load
    http://www.theregister.co.uk/2015/02/25/blue_thunder_reverberates_from_dell/

    Dell has revealed a range of new hyper-converged XC appliances, running on Nutanix software and containing 13G PowerEdge servers.

    The appliances converge compute, storage and networking functions and are the second wave of such machines, following Dell’s announcement of an OEM deal with Nutanix in June 2014.

    They form part of Dell’s software-defined storage strategy – code-named Blue Thunder – which also incorporates elements from VMware, Microsoft, Nexenta, Red Hat (Ceph) and Hadoop.

    Nutanix – from whom the appliance SW is OEM’d – is not announcing new models itself.

    Users can get into the XC Series/Nutanix appliances at the lowest starting list price to date, at roughly $38,000 US with 3 years of Dell ProSupport.

  22. Tomi Engdahl says:

    Google now automatically converts Flash ads to HTML5
    http://venturebeat.com/2015/02/25/google-now-automatically-converts-flash-ads-to-html5/

    Google today began automatically converting Adobe Flash ads to HTML5. As a result, it’s now even easier for advertisers to target users on the Google Display Network without a device or browser that supports Flash.

    Back in September, Google began offering interactive HTML5 backups when Flash wasn’t supported. The Flash-to-HTML5 conversion tools for the Google Display Network and DoubleClick Campaign Manager created an HTML5 version of Flash ads, showing an actual ad rather than a static image backup.

  23. Tomi Engdahl says:

    Quentin Hardy / New York Times:
    Google researchers have created an AI system that taught itself to play and win 49+ 1980s games; could be used for robots, driverless cars in future — A Google Computer Can Teach Itself Games — In a big step in the development of computers capable of independent reasoning
    http://bits.blogs.nytimes.com/2015/02/25/a-google-computer-can-teach-itself-games/

  24. Tomi Engdahl says:

    Flash for Even Commodity Servers
    SolidFire’s software-only adds performance
    http://www.eetimes.com/document.asp?doc_id=1325806&

    The only reason for adding flash to your computer is to speed up access to your data, unless you’re talking laptops, which also like the thinner profile and lower weight. The biggest consumers of flash, however, are the massive arrays filling entire racks for cloud-based high-performance systems. But now even enterprise data centers and server farms — such as Google’s — are incorporating flash, but not without the pain of writing the software that decides what’s the most important data to keep in the flash cache, and when to shuffle it back and forth to disk or tape without conflicts.

    That’s where SolidFire (Boulder, Colo.) comes in. SolidFire claims to be the number one vendor of solid-state arrays, but also claims its software is its only proprietary intellectual property (IP) and is now unbundling it so even the hyperscale server farms can use SolidFire.

    “We believe that eventually flash will replace all storage, except perhaps backup and archival data, but for now we want to provide the biggest number of options to allow every server to get in on the acceleration achieved by adding flash to your storage system,” Dave Wright, SolidFire CEO told EE Times.

  25. Tomi Engdahl says:

    7 Linux Facts That Will Surprise You
    http://www.informationweek.com/software/7-linux-facts-that-will-surprise-you/d/d-id/1319177

    Here are seven things we bet you didn’t know about Linux and why it remains a software project of historic proportions.

  26. Tomi Engdahl says:

    Firefox 36 swats bugs, adds HTTP2 and gets certifiably serious
    Three big bads, six medium messes and 1024-bit certs all binned in one release
    http://www.theregister.co.uk/2015/02/26/mozilla_swats_17_bugs_in_firefox_36/

    Mozilla has outfoxed three critical and six high severity flaws in its latest round of patches for its flagship browser.

    It stomps out memory safety bugs, exploitable use-after-free crashes, and a buffer overflow.

    Of the critical crashes, bad guys could potentially craft attacks targeting MP4 video playback through a buffer overflow in the libstagefright library (CVE-2015-0829).

    The new version of the browser also adds HTTP2 support
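
    As a quick, hedged illustration of what HTTP/2 support means in practice, here is how one can check whether a server negotiates HTTP/2 from Python, assuming the third-party httpx library installed with its HTTP/2 extra (pip install httpx[http2]):

    # Probe protocol negotiation; prints "HTTP/2" if the server and client
    # agree on it, otherwise "HTTP/1.1".
    import httpx

    with httpx.Client(http2=True) as client:
        response = client.get("https://www.google.com/")
        print(response.http_version)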

  27. Tomi Engdahl says:

    Syneto: Behold, blockheads – an all-flash array… based on ZFS
    ZFS filesystem plus integrated KVM hypervisor
    http://www.theregister.co.uk/2015/02/26/synetos_smb_allflash_array/

    A ZFS-based all-flash array? That’s something of a rarity in this block-focused all-flash world. Europe-based Syneto has developed just such a product, updating its SMB-focused, ZFS-based Extreme 220.

    The Extreme 220 is a 2U box with 24 hot-swap slots, holding from 2TB to 144TB of SATA or SAS SSDs. That means 6TB SSDs are used to get the most capacity. There are two additional bays for OS disk drives.

    Syneto says the box, driven by 2 x Xeon E5 6-core processors, delivers up to 340,000 IOPS.

    Dragos Chioran, head of marketing at Syneto, said the IOPS number “is based on software RAIDs. The number of IOPS in software RAIDs is highly dependent on the number of disk groups (virtual devices) present in a data pool. Considering that our VDI architectures are always deployed in mirror configurations (analogous to RAID 10), the number of IOPS provided by a disk group is equal to one disk’s IOPS.”
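
    Taking the quoted rule of thumb at face value, the arithmetic looks like this; the per-disk figure below is my own hypothetical number for illustration, not a Syneto spec:

    # Per the vendor's rule: each mirrored disk group (vdev) contributes
    # roughly one disk's worth of IOPS to the pool.
    def pool_iops(mirror_vdevs, iops_per_disk):
        return mirror_vdevs * iops_per_disk

    # 24 SSD slots paired into 12 mirrors, at a hypothetical ~28,000 IOPS/SSD:
    print(f"{pool_iops(12, 28_000):,}")   # 336,000 -- the ballpark of the 340k claim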

  28. Tomi Engdahl says:

    Making emotive games from open data
    http://www.wired.co.uk/news/archive/2015-02/20/kati-london-gamifying-data

    Microsoft researcher Kati London’s aim is “to try to get people to think of data in terms of personalities, relationships and emotions”, she tells the audience at the Story Festival in London. Through Project Sentient Data, she uses her background in games development to create fun but meaningful experiences that bridge online interactions and things that are happening in the real world.

    One such experience invited children to play against the real-time flow of London traffic through an online game called the Code of Everand. The aim was to test the road safety knowledge of 9-11 year olds and “make alertness something that kids valued”.

  29. Tomi Engdahl says:

    Benchmark Stresses Big Chips
    CoreMark-Pro targets 32-, 64-bit processors
    http://www.eetimes.com/document.asp?doc_id=1325752&

    The Embedded Microprocessor Benchmark Consortium (EEMBC), a trade group of 47 chip and system designers, released CoreMark-Pro. The suite of benchmarks for 32- and 64-bit processors expands the original CoreMark, a single performance test released in 2009 for microcontrollers and processors.

    The CoreMark-Pro suite provides a much richer set of metrics for high-end chips and also provides an extension of the group’s AndEBench, a suite of tests for Android-based systems. The consortium also is working on systems benchmarks for the Internet of Things and networking infrastructure.

    CoreMark-Pro consists of five integer and four floating-point tests. The integer workloads include JPEG compression, Zip compression, an XML parser, the SHA-256 security algorithm, and a more memory-intensive version of the original CoreMark. The floating-point workloads include a fast-Fourier transform, a linear algebra routine derived from Linpack, an enhanced version of the Livermore Loops benchmark, and a neural-net algorithm to evaluate patterns.
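
    For a feel of how such a suite measures a workload, here is a toy, stdlib-only timing harness around one of the listed workload types (SHA-256). It is only in the spirit of the benchmark, not EEMBC’s code:

    # Chain SHA-256 hashes so the work cannot be optimized away, then
    # report throughput.
    import hashlib, time

    def sha256_workload(data, iterations):
        digest = data
        for _ in range(iterations):
            digest = hashlib.sha256(digest).digest()
        return digest

    payload = b"x" * 4096
    start = time.perf_counter()
    sha256_workload(payload, 10_000)
    elapsed = time.perf_counter() - start
    print(f"{10_000 / elapsed:,.0f} hashes/second")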

  30. Tomi Engdahl says:

    Google is working on a Chrome reading mode, try it out
    http://www.engadget.com/2015/02/25/google-chrome-reader-mode/

    Google wants to give your peepers a break. Google Chromium Evangelist Francois Beaufort laid out early versions of Reader Mode for Chrome desktop and mobile in a post today on Google Plus (of course). Reader Mode is designed to make on-screen text easier to absorb, by removing unnecessary pictures, boxes, buttons and ads. Safari has long featured a Reader Mode, and extensions such as Readability offer similar services for Chrome, but now Google is getting into the game itself with these Reader-friendly experiments.

    Google’s project is based on Chromium’s open-source DOM Distiller, meaning technical minds can poke around right in the code.
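
    The core idea is simple enough to sketch: keep the paragraph text, drop scripts, styles and navigation chrome. DOM Distiller’s real heuristics are far richer; this stdlib-only toy of mine only illustrates the concept:

    from html.parser import HTMLParser

    class ReaderMode(HTMLParser):
        SKIP = {"script", "style", "nav", "aside", "footer"}

        def __init__(self):
            super().__init__()
            self.skip_depth = 0          # how deep we are inside skipped tags
            self.in_paragraph = False
            self.paragraphs = []

        def handle_starttag(self, tag, attrs):
            if tag in self.SKIP:
                self.skip_depth += 1
            elif tag == "p" and self.skip_depth == 0:
                self.in_paragraph = True
                self.paragraphs.append("")

        def handle_endtag(self, tag):
            if tag in self.SKIP and self.skip_depth > 0:
                self.skip_depth -= 1
            elif tag == "p":
                self.in_paragraph = False

        def handle_data(self, data):
            if self.in_paragraph and self.skip_depth == 0:
                self.paragraphs[-1] += data

    parser = ReaderMode()
    parser.feed("<nav>menu</nav><p>Article text.</p><script>ads()</script>")
    print(parser.paragraphs)             # ['Article text.']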

  31. Tomi Engdahl says:

    Microsoft man: Internet Explorer had to go because it’s garbage
    Even Redmond is fed up with ‘IE-specific behavior’
    http://www.theregister.co.uk/2015/02/26/microsoft_spartan_browser_rationale/

    Microsoft says it decided to start from scratch with a new web rendering engine for Windows 10 because keeping up with web standards while maintaining compatibility with older, noncompliant versions of Internet Explorer had simply become too much of a burden.

    “Fixing long standing interoperability bugs with other modern browsers could actually break sites who have coded to the IE-specific behavior,” Microsoft’s Charles Morris observed in a blog post on Thursday, echoing a lament that web developers have voiced for a decade or more.

    Recent versions of IE have included various backward-compatibility modes that force the browser to repeat the errors of earlier versions. But for Microsoft, the need to essentially maintain two browsers simultaneously – one that did things the IE way and one that did them properly – was making it difficult to keep up with the rapid pace of web standards development.

  32. Tomi Engdahl says:

    The Programmers Who Want To Get Rid of Software Estimates
    http://developers.slashdot.org/story/15/02/26/2025205/the-programmers-who-want-to-get-rid-of-software-estimates

    This article has a look inside the #NoEstimates movement, which wants to rid the software world of time estimates for projects. Programmers argue that estimates are wrong too often and a waste of time. Other stakeholders believe they need those estimates to plan and to keep programmers accountable. Is there a middle ground?

    Estimates? We Don’t Need No Stinking Estimates!
    How a hashtag lit the nerdy world of project management aflame — or at least got it mildly worked up
    https://medium.com/backchannel/estimates-we-don-t-need-no-stinking-estimates-dcbddccbd3d4

    As long as we’ve been making software, we’ve been screwing up its deadlines. Beginning in the 1960s, as industry began to demand ambitious software projects, programmers began to find that the harder they tried to deliver polished work on time, the more miserably they failed. In the 1960s Frederick Brooks, tasked with leading a massive IBM programming project, famously discovered that adding more programmers to a late software project only makes it later.

    Well, that sucked.

    The annals of software-project history are packed with epic train-wrecks.

    Late software projects run up costs, incur collateral damage and sometimes take down entire companies. And so the software industry has devoted decades to waging a war on lateness — trying frontal assault, enfilade, sabotage, diplomacy and bribes, and using tactics with names such as object oriented programming, the Rational Unified Process, open-source, agile and extreme programming.

    Estimates play a part in nearly all of these approaches. Estimates are the siege-engines of the war on lateness. If we use them carefully and patiently and relentlessly, the hope is, maybe, eventually, we’ll win.

    Why is software so late? One venerable intellectual tradition in the field says the answer lies in software’s very nature. Since code costs nothing to copy, programmers are, uniquely, always solving new problems. If the problem already had a solution, you’d just grab a copy from the shelf. On top of that, we have a very hard time saying when any piece of software is “done.”

    There are lots of ways to try to do software estimates, but most of them look like this: First, you break your project down into pieces small enough to get your head around. Then you figure out how long each of those parts will take, breaking them down further into smaller pieces as needed. Then you add it up! There’s your estimate.

    You can do this all at once up front — that makes you a “waterfall” type, who likes to finish one thing before you start another. Or you can do it in little chunks as you go along — that’s the style popular today, because it gives you more room to change course. Teams around the world now use the agile “Scrum” technique, in which programmers consult with “project owners” to divide work up into “stories,” then eyeball these stories to guess how long they will take and how many can fit into a (brief, usually two-week) “sprint.”

    In this world, putting detailed days-and-hours estimates on stories is out of fashion; teams pick from a slew of different guesstimate styles. They assign “points” to each story, or they take a “shirt sizing” approach, assigning each story a label like S, M, L, XL.
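
    The roll-up these styles feed into is simple arithmetic; a sketch with made-up numbers:

    # Sum story points, divide by the team's historical velocity, round up
    # to get a sprint count. Every input here is a guess.
    stories = {
        "login page": 3,
        "password reset": 2,
        "billing integration": 8,
        "admin dashboard": 5,
    }
    velocity = 10                           # points completed per two-week sprint
    total_points = sum(stories.values())
    sprints = -(-total_points // velocity)  # ceiling division
    print(f"{total_points} points -> ~{sprints} sprints")

    That the point values and the velocity are themselves guesses is exactly what the #NoEstimates camp objects to.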

    Some developers swear by these techniques; others roll their eyes at what they see as fashion trends in the fickle programming marketplace. The trouble remains: However you arrive at them, software project estimates are too often wrong, and the more time we throw at making them, the more we steal from the real work of building software. Also: Managers have a habit of treating developers’ back-of-the-envelope estimates as contractual deadlines, then freaking out when they’re missed. And wait, there’s more: Developers, terrified by that prospect, put more and more energy into obsessive trips down estimation rabbit-holes. Estimation becomes a form of “yak-shaving” — a ritual enacted to put off actual work.

    Zuill recommends quitting estimates cold turkey. Get some kind of first-stab working software into the customer’s hands as quickly as possible, and proceed from there. What does this actually look like? Zuill says that when a manager asks for an estimate up front, developers can ask right back, “Which feature is most important?”—then deliver a working prototype of that feature in two weeks. Deliver enough working code fast enough, with enough room for feedback and refinement, and the demand for estimates might well evaporate. That’s how Zuill says it has worked for him for more than a decade. “Let’s stop trying to predict the future,” he says. “Let’s get something done and build on that — we can steer towards better.”

  33. Tomi Engdahl says:

    If in doubt, blow $4bn: IBM says it will fatten up on cloud, mobile, Big Data cake by 2018
    Ginni eyes $40bn-a-year sales
    http://www.theregister.co.uk/2015/02/26/ibm_investor_meeting/

    IBM is betting billions that it can claw its way back to growth by focusing on what it calls its “strategic imperatives,” including cloud, data analytics, mobility, social networking, and security.

    During Big Blue’s annual meeting with financial analysts on Thursday, CFO Martin Schroeter said IBM plans to shift $4bn of spending toward these hot-button areas in 2015.

    The goal of the investment, he said, is to grow the segments to $40bn in annual revenue by 2018.

  34. Tomi Engdahl says:

    Google open-sources HTTP/2-based RPC framework
    Chocolate Factory’s microservices code is yours to coddle
    http://www.theregister.co.uk/2015/02/27/google_opensources_http2based_rpc_framework/

    Google has open-sourced something called “gRPC” that it says represents “a brand new framework for handling remote procedure calls” using HTTP/2.

    The Chocolate Factory says it has dogfooded gRPC on its own microservices and that it “enables easy creation of highly performant, scalable APIs and microservices” and offers a “bandwidth and CPU efficient, low latency way to create massively distributed systems that span data centers, as well as power mobile apps, real-time communications, IoT devices and APIs.”

    HTTP/2’s inclusion is important, Google reckons, because the newly-signed-off standard can pack more jobs into a single TCP connection, which means less work for a mobile device’s innards to perform and therefore helps battery life.
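
    For flavor, here is a hedged skeleton of a gRPC service in Python using the grpcio package. The service definition and the ping_pb2* modules (which protoc would generate from it) are hypothetical names of my own, not anything Google shipped:

    # Assumed ping.proto:
    #   service Ping { rpc Echo (Msg) returns (Msg); }
    #   message Msg { string text = 1; }
    from concurrent import futures
    import grpc
    import ping_pb2, ping_pb2_grpc        # hypothetical protoc-generated modules

    class PingService(ping_pb2_grpc.PingServicer):
        def Echo(self, request, context):
            # Calls are multiplexed over one HTTP/2 TCP connection -- the
            # efficiency gain Google cites.
            return ping_pb2.Msg(text=request.text)

    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    ping_pb2_grpc.add_PingServicer_to_server(PingService(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()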

  35. Tomi Engdahl says:

    Why China is kicking foreign tech firms off its government procurement list
    https://www.techinasia.com/china-kicking-foreign-tech-firms-government-procurement-list/

    Yesterday, Reuters published an exclusive report on the banishment of some major US tech firms from China’s approved government procurement list. In short, government-run departments and agencies in China are no longer allowed to buy equipment from Apple, Cisco, and Intel’s McAfee, among others.

    The question many are asking: “Is it a security measure, or is it protectionism?” Reuters gives equal voice to both Chinese authorities, who claim to have eliminated these companies over security concerns stemming from Edward Snowden’s high-profile PRISM leaks, and the western firms, who allege China is using Snowden as an excuse to implement protectionist economic policies.

    I’m inclined to side with the latter. China has used similar tactics before, and they’ve proven effective. China has been weaning its internet and tech infrastructure off foreign firms for the past few years, replacing them with homegrown alternatives as they arise. Reuters reports Cisco had 60 items on the procurement list in 2012, which dwindled to zero in 2014.

  36. Tomi Engdahl says:

    Sony’s VAIO replacement is the ultrathin Xperia Z4 Tablet
    Coming in June with a Snapdragon 810 processor, Microsoft Office apps, and an impossibly thin profile
    http://www.theverge.com/2015/3/2/8128655/sony-xperia-z4-tablet-specs-feature-release-date-mwc-2015

    Announced at Mobile World Congress a year ago, the 10-inch Xperia Z2 Tablet remains one of the thinnest and lightest tablets around, but now it has a successor that steps things up a notch. The new Xperia Z4 Tablet matches the iPad Air 2’s 6.1mm thickness while being notably lighter at just 393g for its LTE model. It’s still waterproof, it still lasts an awfully long time, and of course it runs the latest version of Android, Lollipop.

  37. Tomi Engdahl says:

    MediaTek Wrestles ARM A72 into Tablets
    http://www.eetimes.com/document.asp?doc_id=1325735&

    Taiwan-based chip vendor MediaTek announced a tablet SoC based on ARM’s new Cortex-A72 processor. The quad-core MT8173 chip is already shipping, targeting high-end systems.

    “The tablet category is totally deceived by lots of cost containment but is not relenting on the specs. I think industry as a whole is scratching its head as to what you can make a tablet do,” MediaTek’s business marketing vice president and general manager Mohit Bhushan told EE Times. “The next big thing is to make the tablet your workhorse for productivity, for enterprise.”

    To that end, MediaTek wants to leverage ARM’s new offerings to grow its share in tablets. The MT8173 is a 64-bit multi-core big.LITTLE architecture that combines two Cortex-A72 CPUs and two Cortex-A53 CPUs to provide a six-fold performance increase over MediaTek’s previous generation tablet chip. MediaTek did not provide power consumption or exact performance specs.

    The SoC runs at 2.4 GHz performance thanks to updates to MediaTek’s software. “We’ve innovated on CorePilot, the software that looks at the characteristics of cores and moderates them on thermal frequency and power planes to match the best CPU to best kind of job,” Bhushan said.
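
    As a conceptual sketch (not MediaTek’s algorithm), the big.LITTLE routing decision CorePilot embodies can be caricatured in a few lines:

    # Route demanding tasks to fast "big" cores, light tasks to efficient
    # "LITTLE" cores, subject to thermal headroom.
    def assign_core(task_load, thermal_ok=True):
        """task_load: estimated CPU demand in [0, 1]."""
        if task_load > 0.5 and thermal_ok:
            return "Cortex-A72 (big)"
        return "Cortex-A53 (LITTLE)"

    for name, load in [("video encode", 0.9), ("background sync", 0.1)]:
        print(name, "->", assign_core(load))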

  38. Tomi Engdahl says:

    Google’s artificial intelligence breakthrough may have a huge impact on self-driving cars and much more
    http://www.washingtonpost.com/blogs/innovations/wp/2015/02/25/googles-artificial-intelligence-breakthrough-may-have-a-huge-impact-on-self-driving-cars-and-much-more/

    Google researchers have created an algorithm that has a human-like ability to learn, marking a significant breakthrough in the field of artificial intelligence. In a paper published in Nature this week, the researchers demonstrated that the algorithm could master many Atari video games better than humans, simply through playing the game and learning from experience.

    “We can go all the way from pixels to actions as we call it and actually it can work on a challenging task that even humans find difficult,” said Demis Hassabis, one of the authors of the paper. “We know now we’re on the first rung of the ladder and it’s a baby step, but I think it’s an important one.”

    The researchers only provided the general-purpose algorithm its score on each game, and the visual feed of the game, leaving it to then figure out how to win. It dominated Video Pinball, Boxing and Breakout, but struggled with Montezuma’s Revenge and Asteroids.

    The algorithm is designed to tackle any sequential decision-making problem.
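
    The Nature system is a deep Q-network; the underlying loop can be shown with a tabular Q-learning toy on a five-state corridor, where, as in the Atari setup, only the score signal guides the agent. This is illustrative, not DeepMind’s implementation:

    import random

    n_states, actions = 5, [-1, +1]        # move left or right along the corridor
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    for _ in range(2000):
        s = 0
        while s != n_states - 1:           # reward of 1 only at the far right
            if random.random() < epsilon:
                a = random.choice(actions)                 # explore
            else:
                a = max(actions, key=lambda x: Q[(s, x)])  # exploit
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Nudge the value toward reward + discounted best next value:
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
            s = s2

    print([max(actions, key=lambda x: Q[(s, x)]) for s in range(n_states - 1)])
    # Should print [1, 1, 1, 1]: the learned policy always moves right.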

    “In the future I think what we’re most psyched about is using this type of AI to help do science and help with things like climate science, disease, all these areas which have huge complexity in terms of the data that the human scientists are having to deal with,” Hassabis said.

    Another potential use case might be telling your phone to plan a trip to Europe and having it book your hotels and flights.

  39. Tomi Engdahl says:

    Bloody TECH GIANTS… all they do is WASTE investors’ MONEY
    … and thank heavens they do
    http://www.channelregister.co.uk/2015/02/25/the_tech_giants_are_wasting_investors_cash/

    The idea that the tech giants are simply going to waste the pots of cash with which they have been entrusted is certainly counter-intuitive, but it wouldn’t surprise me at all if they did. For that’s pretty much the fate of all investment: to be wasted.

    The claim about the tech giants can be found here:

    Yet the reality is completely different – and far more interesting. What is actually happening is that the tech giants have started blowing money on an epic scale. From challenging the car industry, to virtual reality glasses and watches that double up as computers, or TV series that don’t even have a script, the tech barons have embarked on a colossal spending spree from which the returns are likely to be meagre.

    For consumers, that’s great. We’ll get a lot of cool new stuff paid for by the likes of Sergey Brin, Jeff Bezos and Mark Zuckerberg. Much of it will even be free. But for shareholders, and the founders themselves, that will be terrible – and it may well end up costing them their dominance of the industry.

    The point is also made that Google is “investing” $10bn a year in R&D and doesn’t really have all that much to show for it. But then we almost certainly shouldn’t be thinking of Google as an economic entity anyway. In a business sense it’s an ad business and nothing else.

    The capitalists just don’t make all that much from the deployment of their capital. The VC industry has of course long acknowledged this in their internal thinking. Nine out of 10 investments will go kablooie in some manner, but it’s the tenth that becomes a 20-bagger and makes the entire process worthwhile. That is what I mean about why capitalism succeeds.
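
    Made explicit, that portfolio arithmetic is a one-liner:

    # Nine total losses plus one 20x winner still doubles the fund.
    outcomes = [0.0] * 9 + [20.0]          # multiple-of-investment per deal
    print(sum(outcomes) / len(outcomes))   # 2.0 -- a 2x return on the whole fund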

  40. Tomi Engdahl says:

    Valve and HTC Reveal “Vive” SteamVR Headset
    http://games.slashdot.org/story/15/03/02/018234/valve-and-htc-reveal-vive-steamvr-headset

    Today Valve and HTC revealed the “Vive” SteamVR headset, which is designed to compete with Oculus and others aiming for a high-end VR experience on PC. The Vive headset uses dual 1200×1080 displays at 90Hz and a “laser position sensor” to provide positional tracking.

    Valve’s VR headset is called the Vive and it’s made by HTC
    Plug this thing into your PC for a whole new world of Steam-powered experiences
    http://www.theverge.com/2015/3/1/8127445/htc-vive-valve-vr-headset

    HTC has just announced the Vive, a virtual reality headset developed in collaboration with Valve. It will be available to consumers later this year, with a developer edition coming out this spring. The company has promised to have a significant presence at the Game Developers Conference next week, where devs will have a chance to play with Valve’s VR technology.

    The Vive Developer Edition uses two 1200 x 1080 displays that refresh at 90 frames per second, “eliminating jitter” and achieving “photorealistic imagery,” according to HTC. The displays are said to envelop your entire field of vision with 360-degree views.

    The device uses a gyrosensor, accelerometer, and laser position sensor to track your head’s movements as precisely as one-tenth of a degree. Most surprisingly, there will be something called the Steam VR base station, which will let you walk around the virtual space instead of using a controller. A pair of the base stations can “track your physical location … in spaces up to 15 feet by 15 feet.”
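
    Valve has not published its tracking math, but the generic building block for fusing a fast-but-drifting gyro with a noisy-but-absolute reference is the complementary filter; a sketch of mine with made-up samples:

    # Integrate the gyro for responsiveness, then pull gently toward the
    # absolute reading (here an accelerometer angle) to cancel drift.
    def fuse_pitch(prev_pitch, gyro_rate_dps, accel_pitch_deg, dt, alpha=0.98):
        return alpha * (prev_pitch + gyro_rate_dps * dt) + (1 - alpha) * accel_pitch_deg

    pitch = 0.0
    for gyro, accel in [(10.0, 0.5), (9.5, 1.0), (0.0, 1.2)]:  # fake samples
        pitch = fuse_pitch(pitch, gyro, accel, dt=0.01)
    print(round(pitch, 3))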

    Gattis added at MWC that “It’s also really light, so you can wear it for a long time without feeling weighed down.”

    It will be available with a pair of HTC-made wireless controllers for manipulating objects or shooting weapons through hand tracking.

  41. Tomi Engdahl says:

    First peek at the next Ubuntu 15.04 nester line-up
    Xubuntu, Kubuntu, Barney McGrubuntu, Cuthbert, GNOME and MATE
    http://www.theregister.co.uk/2015/03/02/ubuntu_15_04_flavours_vivid_vervet/

    Ubuntu 15.04 is here – almost. The first beta of Vivid Vervet has been delivered, and with it have come images of the penguin flock that nestles on this OS.

    I looked at Xubuntu, Kubuntu, Ubuntu GNOME and Ubuntu MATE but there’s also Lubuntu and the China-centric Ubuntu Kylin, which I didn’t test.

    The biggest news is that Ubuntu MATE has finally gained an “official” blessing from Canonical. That doesn’t mean much right now – other than the fact that you can now download it from a Canonical URL – but it is obviously good news for the future of Ubuntu MATE.

    If you’re dying to try MATE with Ubuntu behind it you can grab the beta, but I’d strongly suggest waiting.

    MATE may be the attention-grabbing newcomer, but perhaps the biggest changes await users of Kubuntu with KDE.

    Kubuntu 15.04 will be the first Kubuntu release to default to the impressive new Plasma 5 interface. Plasma 5 is perhaps most notable for its visual changes, which see KDE embracing a more streamlined, “flat” interface, but it’s also the first version of KDE to be powered by Qt 5 and the recently released KDE Frameworks 5.

    Under the hood, all these flavours share updates to the base system and a new kernel, all of which will also be part of the main, Unity-based version of Ubuntu 15.04, which is still an alpha release.

  42. Tomi Engdahl says:

    Google+ is dead! Long live Photos and Streams and Hangouts
    http://www.neowin.net/news/google-is-dead-long-live-photos-and-streams-and-hangouts

    Google+ was never the popular kid in school, and it looks like Google is finally ready to give up on the project and break it apart into services that people might actually want to use.

    After a few rumors came forth, mainly thanks to an interview with Sundar Pichai, it’s now official: Google+ will be broken up into services. The confirmation came from Bradley Horowitz who announced he’s now the head of Photos and Streams, two elements of G+ that have now been spun off.

  43. Tomi Engdahl says:

    Unreal Engine 4 Is Now Free
    http://games.slashdot.org/story/15/03/02/1910214/unreal-engine-4-is-now-free

    In 2014, Epic Games took the step of making Unreal Engine 4 available to everyone by subscription for $19 per month. Today, this general-purpose game engine is available to everyone for free. This includes future updates, the full C++ source code of the engine, documentation, and all sorts of bonus material.

    If You Love Something, Set It Free
    https://www.unrealengine.com/blog/ue4-is-free

    Unreal Engine 4 is now available to everyone for free, and all future updates will be free!

    You can download the engine and use it for everything from game development, education, architecture, and visualization to VR, film and animation. When you ship a game or application, you pay a 5% royalty on gross revenue after the first $3,000 per product, per quarter. It’s a simple arrangement in which we succeed only when you succeed.
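
    The quoted terms reduce to a small piece of arithmetic:

    # 5% royalty on gross revenue above the first $3,000, per product, per quarter.
    def ue4_royalty(gross_revenue_quarter):
        return 0.05 * max(0.0, gross_revenue_quarter - 3000.0)

    print(ue4_royalty(2500))    # 0.0    -- under the threshold, nothing owed
    print(ue4_royalty(50000))   # 2350.0 -- 5% of the $47,000 above the threshold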

    This is the complete technology we use at Epic when building our own games. It scales from indie projects to high-end blockbusters; it supports all the major platforms; and it includes 100% of the C++ source code.

  44. Tomi Engdahl says:

    Ask Slashdot: Which Classic OOP Compiled Language: Objective-C Or C++?
    http://ask.slashdot.org/story/15/03/02/2342234/ask-slashdot-which-classic-oop-compiled-language-objective-c-or-c

    Comments:

    C++ is still very much a living, actively developed language. There’s a lot of people using it for modern projects. It’s well supported under pretty much all modern operating systems & you have excellent tools available under Linux.

    There’s not a lot of reason to pick up Objective C unless you plan on targeting Apple. It’s pretty much a dead language everywhere else, outside of a few niche projects.

    C++ is darn-near universal. It is everywhere and everyone uses it for everything.

    Learn C++, and if you find yourself needing to dabble in Objective-C for some Apple device, no problem there. Doing the reverse would be more challenging and would limit your skills.

    Even for non-GUI work, Qt is a blessing if you want to do cross-platform programming. The library does a lot, ranging from database access to network programming, all in a very well documented and well thought out API.

  45. Tomi Engdahl says:

    Intel Tablet SoCs Pack LTE
    http://www.eetimes.com/document.asp?doc_id=1325861&

    Intel announced a new family of SoCs and modems for mobile devices under its Atom brand, including what appear to be its first tablet chips with integrated LTE and entry-level chips using ARM Mali graphics cores. The chip giant focused much of its news on its laptop-class graphics now available for tablets.

    “We’re in all technologies, licensed and unlicensed, and are marrying them to compute and graphics,” Aicha Evans, vice president and general manager of Intel’s wireless platform R&D group, said, adding that the company is “not backing down” from mobile competitors MediaTek and Qualcomm.

    The 64-bit Atom X5 and X7 are designed to be media-rich, with a 50% improvement in 3D graphics over previous generations, using Intel’s Generation 8 graphics – the same GPUs it uses in its laptop and desktop chips. Intel also added support for its RealSense 3D camera, Pro WiDi wireless streaming for enterprise, and True Key authentication programs.

    A third generation LTE modem was announced alongside the new Atom line, with the first commercial devices expected in the second half of 2015. Intel’s XMM 7360 modem supports 29 LTE bands with CAT-9 and CAT-10 carrier aggregation, and is capable of up to 300 Mbits/second downlink and 50 Mbits/s uplink.

    “The modem comes with [IEEE 802.11ac], GNSS/GPS, and NFC all integrated so customers can get to market very quickly.”

  47. Tomi Engdahl says:

    How a HPC array helps humanity destroy the Ebola virus
    Big Iron makes everyone’s lives better
    http://www.theregister.co.uk/2015/03/03/ringfencing_ebola_with_hpc/

    Agencies fighting the outbreak have had to understand where the outbreak is spreading, where communities in its path are most vulnerable and where to focus their treatment and containment efforts to ring-fence the affected areas.

    With such a fast-moving virus it’s been necessary to use computer simulations to model how the virus infects populations and assess how effective different actions, such as treatment station location, isolation measures, contact levels, hospital upgrades and hygiene education programs can be.

    In other words, to assess where’s best to concentrate resources in order to limit the numbers of people who will die from the Ebola epidemic.

    The approach is to use IT to simulate a human population in areas such as Liberia, Mali, Guinea and elsewhere, along with the characteristics that encourage – and inhibit – Ebola’s spread. The aim is to create a disease-spread model that is as close as possible to the actual West African outbreak’s history.
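
    The textbook starting point for such disease-spread models is the SIR compartment model; the real Ebola simulations are agent-based and far richer, and the parameters below are hypothetical, but the mechanics follow the same idea:

    # Susceptible -> Infected -> Removed, stepped forward with simple Euler
    # integration. Lowering beta models isolation and hygiene interventions.
    def sir_step(S, I, R, beta, gamma, dt):
        N = S + I + R
        new_infections = beta * S * I / N * dt
        new_recoveries = gamma * I * dt
        return (S - new_infections,
                I + new_infections - new_recoveries,
                R + new_recoveries)

    S, I, R = 999_000.0, 1_000.0, 0.0      # a notional population of one million
    for day in range(180):
        S, I, R = sir_step(S, I, R, beta=0.3, gamma=0.1, dt=1.0)
    print(f"after 180 days: {I:,.0f} infected, {R:,.0f} removed")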

  48. Tomi Engdahl says:

    Jolla’s Sailfish OS reaches 2.0, embraces Intel, licensing
    http://www.slashgear.com/jollas-sailfish-os-reaches-2-0-embraces-intel-licensing-02371529/

    There comes a time when every child grows up and starts to leave home for his or her own adventures. For Jolla’s own Sailfish OS, that time has come. In preparation for the upcoming launch of the successfully crowdfunded Jolla Tablet next quarter, the Finnish startup is announcing the next major version, Sailfish 2.0. While details on what’s new for end users are still quite slim, the release is meant more to entice partners, particularly of the hardware kind, to join in on the fun.

    Sailfish OS was primarily designed to be used on Jolla’s ARM-powered first smartphone.

    That tablet is expected to ship in the next few months, but while things are still cooking on the hardware front, Jolla apparently has a ready-made software meal. Sailfish OS 2.0, as it is dubbed, is the next evolution of the platform. It scales the OS from smartphones to tablets, from ARM to Intel. In particular, it is adding support for the Intel Atom x3 processor platform. This is Intel’s first processor to combine a 64-bit Atom CPU with 3G or 4G connectivity. In short, this opens the floodgates for more Sailfish-compatible devices in the future.

  49. Tomi Engdahl says:

    Here comes Vulkan: The next generation of the OpenGL graphics API
    OpenGL is 22… it’s time for a replacement – Khronos Group
    http://www.theregister.co.uk/2015/03/03/here_comes_vulkan_the_next_generation_of_the_opengl_graphics_api/

    The Khronos Group, non-profit custodian of the OpenGL cross-platform graphics API, has announced its replacement, called Vulkan.

    Vulkan, previously known as glNext, is being presented in detail at The Game Developers Conference (GDC) currently under way in San Francisco.

    Khronos has also announced OpenCL 2.1, an updated version of its API for general-purpose parallel programming. OpenCL is typically used to take advantage of the multiple cores in a GPU to accelerate applications that require intensive processing. Both standards are designed for cross-platform implementation, unlike platform-specific APIs such as DirectX for Windows or the Metal API on Apple hardware.

    Underpinning both Vulkan and OpenCL 2.1 is a new version of SPIR (Standard Portable Intermediate Representation), an intermediate language developed by Khronos for parallel compute and graphics. As with the popular LLVM project, using an intermediate language enables support for multiple programming languages and toolsets. The new SPIR-V includes the flow control, graphics and parallel constructs necessary for high performance graphics programming. Support for the OpenGL Shader Language (GLSL), for defining drawing effects, is under development, and future support for a C++ shader language is likely. Use of an intermediate language enables simplification of GPU drivers since they only need to consume SPIR-V code. The use of SPIR-V in both Vulkan and OpenCL 2.1 is a significant move towards convergence of the two standards.
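
    The payoff of an intermediate language is easy to show in miniature: many front-ends target one small IR, and each back-end only has to consume that IR. This toy of mine mimics the idea, not SPIR-V’s actual format:

    # A three-instruction "program" that any source language could compile to.
    ir_program = [
        ("load", "a", 2.0),
        ("load", "b", 3.0),
        ("mul",  "c", "a", "b"),
    ]

    def interpret(ir):                     # stand-in for one vendor's driver back-end
        env = {}
        for op, dest, *args in ir:
            if op == "load":
                env[dest] = args[0]
            elif op == "mul":
                env[dest] = env[args[0]] * env[args[1]]
        return env

    print(interpret(ir_program)["c"])      # 6.0; another back-end could emit GPU code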

