Computer trends for 2015

Here comes my long list of computer technology trends for 2015:

Digitalisation will change all business sectors and our daily work even more than before. It also changes the IT sector itself: traditional software packages are moving rapidly into the cloud, and the need to own or rent your own IT infrastructure is dramatically reduced. Automated configuration and monitoring become truly possible. The workload of software implementation projects will shrink significantly, because the software needs less adjustment. Traditional IT outsourcing is definitely threatened. Security management is one of the key factors to get right, as security threats increasingly target the digital world. For the IT sector, digitalisation simply means: “cheaper and better.”

The phrase “Communications Transforming Business” is becoming the new normal. The pace of change in enterprise communications and collaboration is very fast. A new set of capabilities, empowered by the combination of Mobility, the Cloud, Video, software architectures and Unified Communications, is changing expectations for what IT can deliver.

Global Citizenship: Technology Is Rapidly Dissolving National Borders. Besides your passport, what really defines your nationality these days? Is it where you live? Where you work? The language you speak? The currency you use? If so, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Increasingly, technological developments will allow us to live and work almost anywhere on the planet… (and even beyond). In my mind, a borderless world will be a more creative, lucrative, healthy and, frankly, exciting one. Especially for entrepreneurs.

The traditional enterprise workflow is ripe for huge change as the focus moves away from working in a single context on a single device to the workflow being portable and contextual. InfoWorld’s executive editor, Galen Gruman, has coined a phrase for this: “liquid computing.” The promised increase in productivity is stunning, but the loss of control over data will cross an alarming threshold for many IT professionals.

Mobile will be used more and more. Currently, 49 percent of businesses across North America have adopted between one and ten mobile applications, indicating significant acceptance of these solutions. Properly leveraged, mobility promises to increase visibility and responsiveness in the supply chain. Increased employee productivity and business process efficiencies are seen as the key business impacts.

The Internet of things is a big, confusing field waiting to explode.  Answer a call or go to a conference these days, and someone is likely trying to sell you on the concept of the Internet of things. However, the Internet of things doesn’t necessarily involve the Internet, and sometimes things aren’t actually on it, either.

The next IT revolution will come from an emerging confluence of liquid computing and the Internet of things. The two trends are connected — or should connect, at least. If we are to trust the consultants, we are in a sweet spot for significant change in computing that all companies and users should look forward to.

Cloud will be talked about a lot and taken into wider use. Cloud is the next generation of the supply chain for IT. A global survey of executives predicted a growing shift towards third-party providers to supplement internal capabilities with external resources. CIOs are expected to adopt a more service-centric enterprise IT model. Global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion in 2014 (up 20% from $145.2 billion in 2013), and growth will continue to be fast (“By 2017, enterprise spending on the cloud will amount to a projected $235.1 billion, triple the $78.2 billion in 2011“).

The rapid growth in mobile, big data, and cloud technologies has profoundly changed market dynamics in every industry, driving the convergence of the digital and physical worlds, and changing customer behavior. It’s an evolution that IT organizations struggle to keep up with. To succeed in this situation you need to combine traditional IT with agile, web-scale innovation. There is value in both the back-end operational systems and the fast-changing world of user engagement. You are now effectively operating at two speeds (call it bimodal IT, two-speed IT, or traditional IT/agile IT). You need a new API-centric layer in the enterprise stack, one that enables two-speed IT.
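
To make the idea of an API-centric layer concrete, here is a minimal sketch in Python using the Flask micro-framework (my choice for illustration only, not something the analysts prescribe). A thin API facade serves the fast-moving web and mobile front ends while hiding the slower back-end systems of record; the endpoint and back-end function names here are hypothetical.

```python
# A minimal sketch of an API-centric layer for "two-speed IT":
# fast-changing clients call this facade, which hides the slower
# back-end systems of record behind a stable, versioned contract.
# Assumes Flask is installed (pip install flask); names are illustrative.
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_customer_from_erp(customer_id):
    # Placeholder for a call into the traditional back end
    # (ERP, mainframe, legacy database, etc.).
    return {"id": customer_id, "name": "Example Customer", "segment": "retail"}

@app.route("/api/v1/customers/<int:customer_id>")
def get_customer(customer_id):
    # The agile front end only ever sees this JSON contract; the
    # back-end integration can change behind it at its own pace.
    return jsonify(fetch_customer_from_erp(customer_id))

if __name__ == "__main__":
    app.run(port=8080)
```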

As Robots Grow Smarter, American Workers Struggle to Keep Up. Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work. Automation is not only replacing manufacturing jobs, it is displacing knowledge and service workers too.

In many countries the IT recruitment market is flying, having picked up to a post-recession high. Employers beware – after years of relative inactivity, job seekers are gearing up for change. Economic improvements and an increase in business confidence have led to a burgeoning jobs market and an epidemic of itchy feet.

Hopefully the IT department is increasingly being seen as a profit centre rather than a cost centre, with IT budgets commonly split between keeping the lights on and spending on innovation and revenue-generating projects. Historically IT was about keeping the infrastructure running and there was no real understanding outside of that, but the days of IT being locked in a basement are gradually changing. CIOs and CMOs must work more closely to increase focus on customers next year or risk losing market share, Forrester Research has warned.

Good questions to ask: Where do you see the corporate IT department in five years’ time? With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT? What IT process or activity is the most important in creating superior user experiences to boost user/customer satisfaction?


Windows Server 2003 goes end of life in summer 2015 (July 14, 2015). There are millions of servers globally still running the 13-year-old OS, with one in five customers forecast to miss the 14 July deadline when Microsoft turns off extended support. There were estimated to be 2.7 million WS2003 servers in operation in Europe a few months back. This will keep system administrators busy, because there is only about half a year left and upgrading to Windows Server 2008 or Windows Server 2012 may prove difficult. Microsoft and support companies do not seem interested in continuing Windows Server 2003 support beyond that, so for those who need it, custom-priced extended support can be “incredibly expensive”. At this point it seems that many organizations want a new architecture anyway, and one option they are considering is moving those servers to the cloud.

Windows 10 is coming to PCs and mobile devices. Just a few months back Microsoft unveiled a new operating system, Windows 10. The new Windows 10 OS is designed to run across a wide range of machines, including everything from tiny “internet of things” devices in business offices to phones, tablets, laptops and desktops to computer servers. Windows 10 will have exactly the same requirements as Windows 8.1 (the same minimum PC requirements that have existed since 2006: a 1GHz, 32-bit chip with just 1GB of RAM). A technical preview is already available. Microsoft says to expect AWESOME things of Windows 10 in January. Microsoft will share more about the Windows 10 ‘consumer experience’ at an event on January 21 in Redmond and is expected to show the Windows 10 mobile SKU at the event.

Microsoft is going to monetize Windows differently than before. Windows has made headway in the market for low-end laptops and tablets this year because Microsoft reduced the price it charges device manufacturers, charging no royalty on devices with screens of 9 inches or less. That has resulted in a new wave of Windows notebooks in the $200 price range and tablets in the $99 price range. The long-term success of the strategy against Android tablets and Chromebooks remains to be seen.

Microsoft is pushing the Universal Apps concept. Microsoft has announced Universal Windows Apps, allowing a single app to run across Windows 8.1 and Windows Phone 8.1 for the first time, with additional support for Xbox coming. Microsoft promotes a unified Windows Store for all Windows devices. The Windows Phone Store and Windows Store will be unified with the release of Windows 10.

Under new CEO Satya Nadella, Microsoft realizes that, in the modern world, its software must run on more than just Windows. Microsoft has already released Office programs for Apple’s iPad and iPhone. It also has an email client for both the iOS and Android mobile operating systems.

With Mozilla Firefox and Google Chrome grabbing so much of the desktop market—and Apple Safari, Google Chrome, and Google’s Android browser dominating the mobile market—Internet Explorer is no longer the force it once was. The article Microsoft May Soon Replace Internet Explorer With a New Web Browser says that Microsoft’s Windows 10 operating system will debut with an entirely new web browser code-named Spartan. This new browser is a departure from Internet Explorer, the Microsoft browser whose relevance has waned in recent years.

SSD capacity has always lagged well behind hard disk drives (hard disks are in 6TB and 8TB territory while mainstream SSDs are primarily 256GB to 512GB). Intel and Micron will try to kill the hard drive with new flash technologies. Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Within the next two years Intel promises 10TB+ SSDs thanks to 3D vertical NAND flash memory. SSD interfaces are also evolving beyond traditional hard disk interfaces: PCIe flash and NVDIMMs will make their way into shared storage devices more in 2015. The ULLtraDIMM™ SSD connects flash storage to the memory channel via standard DIMM slots, in order to close the gap between storage devices and system memory (less than five microseconds write latency at the DIMM level).

Hard disks will still be made in large quantities in 2015. It seems that NAND is not taking over the data centre immediately; the great problem is $/GB. Estimates of shipped disk and SSD capacity out to 2018 show disk growing faster than flash. The world’s ability to make and ship SSDs is falling behind its ability to make and ship disk drives – for SSD capacity to match disk by 2018 we would need roughly eight times more flash foundry capacity than we have. New disk technologies such as shingling, TDMR and HAMR are upping areal density per platter and bringing down cost/GB faster than NAND technology can. At present solid-state drives with extreme capacities are very expensive. I expect that through 2015 SSD prices will still be so much higher than hard disks that anyone who needs to store large amounts of data will want to consider SSD + hard disk hybrid storage systems.
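
To see why $/GB pushes people toward hybrid setups, here is a quick back-of-the-envelope calculation in Python. The drive prices are rough illustrative assumptions, not vendor quotes.

```python
# Rough $/GB comparison and hybrid-storage estimate.
# All prices are illustrative assumptions, not vendor quotes.
hdd_price_usd, hdd_capacity_gb = 260.0, 6000    # ~6 TB nearline hard disk
ssd_price_usd, ssd_capacity_gb = 240.0, 512     # ~512 GB SATA SSD

hdd_per_gb = hdd_price_usd / hdd_capacity_gb    # about $0.04 / GB
ssd_per_gb = ssd_price_usd / ssd_capacity_gb    # about $0.47 / GB
print("SSD costs roughly %.0fx more per GB than disk" % (ssd_per_gb / hdd_per_gb))

# Hybrid layout: keep the hot 10% of a 100 TB data set on flash,
# the cold 90% on disk.
total_gb = 100000
hot_fraction = 0.10
hybrid_cost = (total_gb * hot_fraction * ssd_per_gb +
               total_gb * (1.0 - hot_fraction) * hdd_per_gb)
all_flash_cost = total_gb * ssd_per_gb
print("hybrid: ~$%.0f   all-flash: ~$%.0f" % (hybrid_cost, all_flash_cost))
```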

PC sales, and even laptop sales, are down, and manufacturers are pulling out of the market. The future is all about mobile devices. We have entered the post-PC era so deeply that even the tablet market seems to be saturating, as most people who want one already have one. The crazy years of huge tablet sales growth are over. Tablet shipment growth in 2014 was already quite low (7.2%, to 235.7M units). There are no strong reasons for either growth or decline in the tablet market in 2015, so I expect it to be stable. IDC expects the iPad to see its first-ever decline, and I expect that too, because the market is increasingly taken by Android tablets that have turned out to be “good enough”. Wearables, Bitcoin or messaging may underpin the next consumer computing epoch, after the PC, internet, and mobile.

There will be new tiny PC form factors coming. Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that”. The compute stick can be likened to similar thumb PCs that plug into an HDMI port and are offered by PC makers with the Android OS and ARM processors (for example the Wyse Cloud Connect and many cheap Android sticks). Such devices typically don’t have internal storage, but can be used to access files and services in the cloud. Intel expects the stick-sized PC market to grow to tens of millions of devices.

We have entered the post-Microsoft, post-PC programming era: the portable revolution. Tablets and smartphones are fine for consuming information: a great way to browse the web, check email, stay in touch with friends, and so on. But what does a post-PC world mean for creating things? If you’re writing platform-specific mobile apps in Objective-C or Java then no, the iPad alone is not going to cut it. You’ll need some kind of iPad-to-server setup in which your iPad becomes a mythical thin client for the development environment running on your PC or in the cloud. If, however, you’re working with scripting languages (such as Python and Ruby) or building web-based applications, the iPad or another tablet could be a usable development environment. At least it is worth a test.

You need to prepare to learn new languages that are good for specific tasks. Attack of the one-letter programming languages: from D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following. Watch out! The coder in the next cubicle might have been bitten and infected with a crazy-eyed obsession with a programming language that is not Java and goes by a mysterious one-letter name. Each offers compelling ideas that could do the trick in solving a particular problem you need fixed.

HTML5’s “Dirty Little Secret”: It’s Already Everywhere, Even In Mobile. Just look under the hood. “The dirty little secret of native [app] development is that huge swaths of the UIs we interact with every day are powered by Web technologies under the hood.” When people say Web technology lags behind native development, what they’re really talking about is the distribution model. It’s not that the pace of innovation on the Web is slower, it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Vine is a great example of a modern JavaScript app. It’s lightning fast on desktop and on mobile, and shares the same codebase for ease of maintenance.

Docker, meet hype. Hype, meet Docker. Docker: sorry, you’re just going to have to learn about it. Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds. Docker containers are supported by very many Linux systems, and it is not just Linux anymore: Docker’s app containers are coming to Windows Server, says Microsoft. What containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other. What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications.
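
As a concrete (and deliberately tiny) example of what launching applications in their own sandboxes looks like, here is a sketch that drives the Docker command-line client from Python. It assumes the docker CLI and daemon are installed on a Linux host and that the ubuntu:14.04 image is available; it is only meant to show the shape of the workflow, not to be a complete Docker tutorial.

```python
# Minimal illustration of containerization with Docker, driven from Python.
# Assumes the docker CLI is installed and the Docker daemon is running.
import subprocess

def run_in_container(image, *command):
    # Each invocation starts a fresh container that shares the host kernel
    # but has its own filesystem, process table and network namespace.
    # --rm removes the container again once the command exits.
    return subprocess.check_output(
        ["docker", "run", "--rm", image] + list(command)
    ).decode()

# Two "applications" running side by side, isolated from each other:
print(run_in_container("ubuntu:14.04", "uname", "-a"))
print(run_in_container("ubuntu:14.04", "cat", "/etc/os-release"))
```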

Domestic software is on the rise in China. China is planning to purge foreign technology and replace it with homegrown suppliers: China is aiming to purge most foreign technology from banks, the military, state-owned enterprises and key government agencies by 2020, stepping up efforts to shift to Chinese suppliers, according to people familiar with the effort. In tests workers have replaced Microsoft Corp.’s Windows with a homegrown operating system called NeoKylin (a Linux-based desktop OS). Dell commercial PCs will preinstall NeoKylin in China. The push for change is driven by national security concerns and marks an increasingly determined move away from foreign suppliers. There are cases of replacing foreign products at all layers, from applications and middleware down to infrastructure software and hardware. Foreign suppliers may be able to avoid replacement if they share their core technology or give China’s security inspectors access to their products. The campaign could have lasting consequences for U.S. companies including Cisco Systems Inc. (CSCO), International Business Machines Corp. (IBM), Intel Corp. (INTC) and Hewlett-Packard Co. A key government motivation is to bring China up from low-end manufacturing to the high end.


Data center markets will grow. MarketsandMarkets forecasts the data center rack server market to grow from $22.01 billion in 2014 to $40.25 billion by 2019, at a compound annual growth rate (CAGR) of 7.17%. North America (NA) is expected to be the largest region for the market’s growth in terms of revenues generated, but Asia-Pacific (APAC) is also expected to emerge as a high-growth market.

The rising need for virtualized data centers and incessantly increasing data traffic are considered strong drivers for the global data center automation market. The SDDC comprises software-defined storage (SDS), software-defined networking (SDN) and software-defined server/compute, wherein all three components are empowered by specialized controllers, which abstract the control plane from the underlying physical equipment. These controllers virtualize the network, server and storage capabilities of a data center, thereby giving better visibility into data traffic routing and server utilization.
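
In practice, abstracting the control plane means that instead of configuring boxes one by one, you script against a controller's northbound API. The sketch below shows the general shape of such a call from Python; the controller address, endpoint path and JSON fields are purely hypothetical, since every SDN controller exposes its own API.

```python
# Hypothetical call to an SDN controller's northbound REST API instead of
# configuring individual switches. The endpoint and fields are invented for
# illustration; real controllers (OpenDaylight, NSX, etc.) differ.
# Assumes the 'requests' library is installed.
import requests

CONTROLLER = "http://sdn-controller.example.local:8181"   # hypothetical address

policy = {
    "name": "web-tier-to-db-tier",
    "source_segment": "web",
    "destination_segment": "db",
    "allowed_ports": [5432],
}

resp = requests.post(CONTROLLER + "/api/v1/policies", json=policy, timeout=5)
resp.raise_for_status()
print("controller accepted policy:", resp.json())
```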

New software-defined networking apps will be delivered in 2015. And so will software-defined storage, and software-defined almost anything else (I am waiting for the day we see software-defined software). Customers are ready to move away from vendor-driven proprietary systems that are overly complex and impede their ability to rapidly respond to changing business requirements.

Large data center operators will be using more and more of their own custom hardware instead of standard PCs and servers from traditional computer manufacturers. Intel is betting on (customized) commodity chips for cloud computing, and it expects that over half the chips it sells to public clouds in 2015 will have custom designs. The biggest public clouds (Amazon Web Services, Google Compute, Microsoft Azure), other big players (like Facebook or China’s Baidu) and other public clouds (like Twitter and eBay) all have huge data centers that they want to run optimally. Companies like A.W.S. “are running a million servers, so floor space, power, cooling, people — you want to optimize everything”. That is why they want specialized chips, and customers are willing to pay a little more for a special run of chips. While most of Intel’s chips still go into PCs, about one-quarter of Intel’s revenue, and a much bigger share of its profits, comes from semiconductors for data centers. In the first nine months of 2014, the average selling price of PC chips fell 4 percent, but the average price of data center chips was up 10 percent.

We have seen GPU acceleration taken into wider use: special servers and supercomputer systems have long been accelerated by moving computation to graphics processors. The next step in acceleration will be adding FPGAs to accelerate x86 servers. FPGAs provide a unique combination of highly parallel custom computation, relatively low manufacturing/engineering costs, and low power requirements. FPGA circuits may provide a lot more computing power at a much lower power consumption, but traditionally programming them has been time-consuming. This can change with the introduction of new tools (the next step from technologies learned from GPU acceleration). Xilinx has developed its SDAccel tools to let developers write algorithms in C, C++ and OpenCL and translate them to FPGAs easily. IBM and Xilinx have already demoed FPGA-accelerated systems. Microsoft is also doing research on accelerating applications with FPGAs.
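
The attraction of the OpenCL route is that, at least in principle, the same kernel source can target CPUs, GPUs or (through vendor flows such as SDAccel) FPGAs. Below is a minimal sketch using PyOpenCL just to show what such a kernel looks like; it runs on whatever OpenCL device happens to be available and is not an SDAccel example as such.

```python
# Minimal OpenCL vector add, run via PyOpenCL on whatever OpenCL device is
# available (CPU or GPU here; FPGA vendor flows such as SDAccel accept
# similar kernel source). Assumes numpy and pyopencl are installed.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out)
{
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
out = np.empty_like(a)

ctx = cl.create_some_context()           # pick any available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

prg = cl.Program(ctx, KERNEL_SRC).build()
prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)   # enqueue the kernel
cl.enqueue_copy(queue, out, out_buf)                    # read back the result
assert np.allclose(out, a + b)
print("vector add verified on", ctx.devices[0].name)
```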


If there is one enduring trend in memory design from 2014 that will carry through to next year, it’s the continued demand for higher performance. The trend toward high performance is never going away. At the same time, the goal is to keep costs down, especially when it comes to consumer applications using DDR4 and mobile devices using LPDDR4. LPDDR4 will gain a strong foothold in 2015, and not just to address mobile computing demands. The reality is that LPDDR3, or even DDR3 for that matter, will be around for the foreseeable future (as the lowest-cost DRAM, whatever that may be). Designers are looking for subsystems that can easily accommodate DDR3 in the immediate future, but will also be able to support DDR4 when it becomes cost-effective or makes more sense.

Universal Memory for Instant-On Computing will be talked about. New memory technologies promise to be strong contenders for replacing the entire memory hierarchy for instant-on operation in computers. HP is working with memristor memories that are promised to be akin to RAM but can hold data without power.  The memristor is also denser than DRAM, the current RAM technology used for main memory. According to HP, it is 64 and 128 times denser, in fact. You could very well have a 512 GB memristor RAM in the near future. HP has what it calls “The Machine”, practically a researcher’s plaything for experimenting on emerging computer technologies. Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system in 2015 (Linux++, in June 2015). HP must still make significant progress in both software and hardware to make its new computer a reality. A working prototype of The Machine should be ready by 2016.

Chip designs that enable everything from a 6 Gbit/s smartphone interface to the world’s smallest SRAM cell will be described at the International Solid State Circuits Conference (ISSCC) in February 2015. Intel will describe a Xeon processor packing 5.56 billion transistors, and AMD will disclose an integrated processor sporting a new x86 core, according to a just-released preview of the event. The annual ISSCC covers the waterfront of chip designs that enable faster speeds, longer battery life, more performance, more memory, and interesting new capabilities. There will be many presentations on first designs made in 16 and 14 nm FinFET processes at IBM, Samsung, and TSMC.


1,403 Comments

  1. Tomi Engdahl says:

    Ailing AMD battered by goodwill, inventory charges
    Unsold PC product weighs down chipmaker in rocky Q4
    http://www.theregister.co.uk/2015/01/21/amd_q4_2014_earnings/

    AMD emerged from a bloody fourth quarter of its fiscal 2014 on Tuesday, reporting revenues down and profits sapped by painful one-time accounting charges.

    The chipmaker’s total revenues for the three months ending on December 27, 2014 were US$1.24bn, a 22 per cent year-on-year decline. And the figure for the full year was better, but only just, with total revenues gaining 4 per cent to reach $5.51bn.

    That quarterly revenue figure matched Wall Street’s expectations, but believe it or not, AMD’s earnings missed estimates – even though the analysts were only hoping to see a penny per diluted share. The firm’s reported earnings for the quarter were completely flat.

    Investors were not much impressed, however, and they sent AMD’s share price down nearly 6.3 per cent on the news.

    Reply
  2. Tomi Engdahl says:

    Windows 10: Microsoft’s date with destiny
    Putting success on hold for another two years?
    http://www.theregister.co.uk/2015/01/21/windows_10_meets_mobile/

    Reply
  3. Tomi Engdahl says:

    The Tech Industry’s Legacy: Creating Disposable Employees
    http://tech.slashdot.org/story/15/01/21/164225/the-tech-industrys-legacy-creating-disposable-employees

    VentureBeat is running an indictment of the tech industry’s penchant for laying off huge numbers of people, which they say is responsible for creating a culture of “disposable employees.” According to recent reports, layoffs in the tech sector reached over 100,000 last year, the highest total since 2009. Of course, there are always reasons for layoffs

    But the article argues that this is often just a smokescreen. “The notion here is that somehow these companies are backed into a corner, with no other option than to fire people. And that’s just not true. These companies are making a choice. They’re deciding that it’s faster and cheaper to chuck people overboard and find new ones than it is to retrain them.”

    Disposable employees may be tech industry’s greatest achievement
    http://venturebeat.com/2015/01/20/disposable-employees-may-be-tech-industrys-greatest-achievement/

    A Bloomberg story about layoffs rising at tech companies has been making the rounds. It seems so counter-intuitive: How can job cuts be soaring when tech is booming?

    Everyone seems surprised, but they shouldn’t be. Tech, more than any other industry, has succeeded in convincing us over the past two decades that workers are entirely disposable.

    We throw out smartphones after two years. Replace our PCs every few years when they get slow. Why can’t we do the same with people?

    Turns out, we can. Or, at least, we can try.

    “Despite the overall strength of the tech sector, employers in the computer industry saw the heaviest downsizing of the year, announcing a total of 59,528 planned layoffs. That is 69 percent more than a year ago.”

    That was seriously bucking a trend during a year in which layoff announcements across all industries in the U.S. fell from 509,051 to 483,171. As a result, in 2014, tech accounted for one out of 5 layoff announcements.

    The good news, I suppose, is that these numbers are well below the post dot-com bubble era.

    But if you look before the dot-com bust (1997-1999) and the post-bust, pre-recession years (2004-2008), tech still represented an unusually large percentage of overall job cuts.

    Benner sums up many of the explanations that are commonly handed out for the tech industry’s exuberant human weed-whacking. Companies buy other companies and need to rationalize headcount. And there’s all that disruption. Big companies, in particular, are seeing their business models challenged by startups, so they need to shed employees with skills they no longer need, and hire people with the right skills.

    “But I agree with them: It’s absolutely necessary for all of these companies to clean house,” Benner says. “They have to do it to change and, perhaps, survive.”

    She adds: “But a big part of the industry is trying to revamp, catch up, and keep the market healthy — and they’re cutting jobs to get there.”

    These companies are making a choice. They’re deciding that it’s faster and cheaper to chuck people overboard and find new ones than it is to retrain them. The economics of cutting rather than training may seem simple, but it’s a more complex calculation than most people believe.

    But severance is not the only cost. There are intangibles, like morale. And there are other fiscal costs, like the price of recruiting and orienting new employees. There is a cost in time and money when you have to train new employees in your internal procedures and culture, get their computer hooked up, help them find the toilets, etc.

    More often, layoffs make things worse, rather than better.

    “Some managers compare layoffs to amputation: that sometimes you have to cut off a body part to save the whole,” he wrote. “As metaphors go, this one is particularly misplaced. Layoffs are more like bloodletting, weakening the entire organism.”

    Which all begs the real question: If layoffs don’t really work, why does everyone keep doing them, even in good times?

    “Part of the answer lies in the immense pressure corporate leaders feel — from the media, from analysts, from peers — to follow the crowd no matter what,” Pfeffer said.

    In other words, corporate leaders in tech lay people off because it’s what everyone else does. This is perhaps the saddest commentary on the tech industry. Rather than being full of mavericks, tech’s overuse of layoffs reveals it to be an industry led by people who are unable to think for themselves

    Reply
  4. Tomi Engdahl says:

    AMD Upbeat Despite $330M Loss
    http://www.eetimes.com/document.asp?doc_id=1325357&

    Advanced Micro Devices reported large operating losses in its latest quarterly financial update, but AMD officials remained hopeful about the company’s trajectory for 2015.

    Reply
  5. Tomi Engdahl says:

    Brian Beach / Backblaze Blog:
    Backblaze: 4 TB HGST and Seagate hard drives are most reliable, 3 TB Seagate and WD drives fail often

    What is the Best Hard Drive?
    https://www.backblaze.com/blog/best-hard-drive/

    Reply
  6. Tomi Engdahl says:

    Jessi Hempel / Wired:
    Inside Nadella’s plan to reinvent Microsoft, starting with HoloLens, a flatter org chart, and a willingness to forge new partnerships — Satya Nadella’s Got a Plan to Make You Care About Microsoft. The First Step? Holograms … REDMOND MISSED MOBILE — The old Microsoft never gained turf in smartphone operating systems.

    http://www.wired.com/2015/01/microsoft-nadella/

    Reply
  7. Tomi Engdahl says:

    Emil Protalinski / VentureBeat:
    Microsoft announces Office 2016 universal apps for Windows 10 will arrive in the second half of 2015

    Microsoft announces Office 2016, coming in the second half of 2015
    http://venturebeat.com/2015/01/22/microsoft-announces-office-2016-slated-for-release-in-the-second-half-of-2015/

    At its Windows 10 event yesterday, Microsoft unveiled the touch-optimized version of Office. Today, the company offered more details about that version, and then snuck in another announcement: The next desktop version is under development, it is called Office 2016, and it will be generally available “in the second half of 2015.”

    Office for Windows 10 (Word, Excel, PowerPoint, OneNote, and Outlook), meanwhile, is also slated to arrive later this year, though Microsoft has shared more about it and plans to offer a preview in the coming weeks.

    Reply
  8. Tomi Engdahl says:

    Andreessen Horowitz:
    Andreessen Horowitz explains 16 tech trends of interest including VR and machine learning
    http://a16z.com/2015/01/22/16-things/

    Reply
  9. Tomi Engdahl says:

    Gov.UK inhaled G-Cloud, spat out framework – ex-lead claims
    Original team member claims there are ‘vision’ problems
    http://www.channelregister.co.uk/2015/01/23/gcloud_subsumed_into_gds_coding_house/

    Whitehall is coming under fire for subsuming the G-Cloud into its coding house GDS (government digital service) at the expense of encouraging uptake of the framework.

    Mark Craddock, former G-cloud lead, said: “GDS is obsessed with what I call pub-prietary software – the public sector building everything in-house and putting itself in danger of replicating the failures of the large [system integrators].”

    “G-Cloud is in danger of losing focus, because the poorly designed, implemented and over complex digital services store is consuming too much effort,” he said.

    Reply
  10. Tomi Engdahl says:

    Wide-Spread SSD Encryption is Inevitable
    http://www.eetimes.com/document.asp?doc_id=1325401&

    The recent Sony hack grabbed headlines in large part due to the political fallout, but it’s not the first corporate enterprise to suffer a high profile security breach and probably won’t be the last.

    Regardless, it’s yet another sign that additional layers of security may be needed as hackers find ways to break through network firewalls and pull out sensitive data, whether it’s Hollywood secrets from a movie studio, or customer data from retailers such as Home Depot or Target. And sometimes it’s not only outside threats that must be dealt with; those threats can come from within the firewall.

    While password-protected user profiles on the client OS have been standard for years, self-encrypting SSDs are starting to become more appealing as they allow for encryption at the hardware level, regardless of OS, and can be deployed in a variety of scenarios, including enterprise workstations or in a retail environment.

    In general, SSDs are becoming more common. SanDisk, for example, is bullish about adoption by average notebook users,

    A survey by the Storage Networking Industry Association presented at last year’s Storage Visions Conference found users lacked interest in built-in encryption features for SSDs, particularly in the mobile space.

    Ritu Jyoti, chief product officer at Kaminario, said customers are actually requesting encryption as a feature for its all-flash array, but also voice concerns about its effect on performance. “They do ask the question.” Customers in the financial services sector in particular are looking for encryption on their enterprise SSDs, she said, driven by compliance demands, as well as standards outlined by the National Institute of Standards and Technology.

    Jyoti said SEDs and encryption of all-flash arrays have become a growing trend in the enterprise. “They are going to become the defacto standard very quickly.”

    George Crump, president and founder of research firm Storage Switzerland, recently blogged about Kaminario’s new all-flash array and addressed its new features, including encryption, which he wrote is critical for flash systems in particular because of the way controllers manage flash. “When NAND flash cell wears out the flash controller, as it should, it marks that cell as read-only. The problem is that erasing a flash cell requires that null data be written to it,” he wrote. “But how do you do that if the flash controller had previously marked the cell as read-only? If you can’t erase the data, but you can read it, then some enterprising data thief may be able to get to your data.”

    Crump noted that some vendors have special utilities they claim will override this setting to make sure the erasure can be done, but he has yet to see any guarantee this is the case.

    Reply
  11. Tomi Engdahl says:

    Increased Functionally Drives Flash Array Adoption
    http://www.eetimes.com/document.asp?doc_id=1325411&

    Flash arrays are here to stay, according to recent research released by IDC, and adoption is growing at a rapid pace.

    The research covers both all-flash arrays (AFAs) and hybrid flash arrays (HFAs) and shows the worldwide market for flash arrays will hit US$11.3 billion in 2014. IDC credits the growth to a wider variety of offerings from vendors that handle different, increasingly complex workloads. The forecast is based on worldwide data on AFA and HFA sales revenue and raw terabytes sold between January 1, 2012 and June 30, 2014 that IDC has collected.

    “The market grew significantly faster than we expected,” said Eric Burgener, research director for storage systems at IDC. It was clear throughout last year that adoption was accelerating beyond initial forecasts, he said. The worldwide HFA and AFA segments will reach $10.0 billion and $1.3 billion, respectively, in 2014, according to IDC’s research.

    In the early days, organizations were typically buying AFAs for specific applications, such as databases that required a high level of performance, said Burgener. What’s driven last year’s growth is that vendors have been adding features that enterprises have come to expect on legacy storage systems. One of the major trends IDC identified for 2014 is that enterprises are now using AFAs for more primary applications – anywhere from five to eight, he said.

    More flash-based platforms are delivering enterprise-class data services, including snapshots, clones, encryption, replication, and quality of service as well as storage efficiency features, added Burgener, and once enterprises start putting more applications on a flash array, they don’t want to use the individual features such as replication for each application; they want one replication capability for all applications on that array.

    Enterprises are figuring out that efficiently-used flash is better than inefficiently-used hard drives, Peters said, and although there is a place for AFAs in some organizations, “a hybrid infrastructure is where most people will end up.”

    Reply
  12. Tomi Engdahl says:

    Parents in Taiwan are now legally obliged to limit their kids’ computer time
    http://www.neowin.net/news/parents-in-taiwan-are-now-legally-obliged-to-limit-their-kids-computer-time

    It is quite a given fact that many children these days spend a lot of time playing on the computer, their phones, or on a tablet. It seems that Taiwan is apparently very aware of this, and has recently expanded an existing law that puts a limit on the usage time for children whenever using gadgets, Quartz reports.

    The law states that children under 18 “may not constantly use electronic products for a period of time that is not reasonable.” The regulation also puts excessive computer or similar gadgets usage on par with common vices like smoking, drinking, chewing betel nuts and doing drugs.

    Parents in this country are now legally obliged to stop their kids spending time on computers
    http://qz.com/332675/parents-in-this-country-are-now-legally-obliged-to-stop-their-kids-spending-time-on-computers/

    A lot of parents are worried about their children spending too much in front of the phone or tablet. Parents in Taiwan now have to do something about it.

    Lawmakers have expanded existing legislation to say that children under 18 on the island “may not constantly use electronic products for a period of time that is not reasonable.”

    Parents who expose their kids to electronic products to the point where they become “physically or mentally” ill are liable for a $1,600 fine. Of course, the law doesn’t say exactly how much time is unreasonable, which will no doubt complicate enforcement.

    The American Academy of Pediatrics, which recommends a maximum of two hours a day of screen time for kids, found in a recent study that US eight-year-olds spend an average of eight hours with some form of media—and many child-development psychologists urge more unstructured play time. In addition, there is another factor not covered in this law, which is the damage done to children from the fact that their parents are themselves always connected.

    Taiwan is not the only country to take steps to regulate the use of electronic media, and particularly gaming among teenagers. China has been trying to deter people from playing online games for more than three hours at a stretch since 2005 and adopted further regulation in 2010, while South Korea last year regulating online games and e-sports as if they were addictive substances.

    If the Taiwanese law is successful and copied by others, it may help to prevent nomophobia—”no mobile phobia”—the fear of being without one’s electronic device, which a study recently suggested can actually impair mental performance.

    Reply
  13. Tomi Engdahl says:

    Firefox enters the realm of virtual reality with the Oculus Rift
    http://www.zdnet.com/article/firefox-enters-the-realm-of-virtual-reality-with-the-oculus-rift/

    Summary:Mozilla has added support for virtual reality apps running on the Oculus Rift headset to an experimental version of the Firefox browser.

    Users of an experimental build of Firefox will be able to explore virtual reality inside the browser after Mozilla added support for the Oculus Rift headset.

    People running Firefox Nightly will be able to use the Rift to experience 3D environments inside web pages, following the addition of support for the WebVR API.

    Virtual reality allows users to traverse 3D spaces by donning a headset that tracks their head movements and allows them to look around a 3D computer-generated world.

    While VR content on the web is scarce today, WebVR could eventually see VR scenes embedded into web pages, for example a car maker could embed a 3D 1:1 model of a vehicle for people to explore.

    The final version of the Rift is expected to have a resolution of at least 2560 x 1440 and to improve on the already impressive head tracking of the developer versions. Realising a high resolution is seen as important to reducing the ‘screen door’ effect when using the headset – the name given to the gaps between pixels that are visible when the user’s eyes are right next to the screen.

    Reply
  14. Tomi Engdahl says:

    The future of GPUs is open standards. GPUs won’t take off until all major vendors support the latest (OpenCL 2.0) standards.
    Here is the list of conformant products:
    https://www.khronos.org/conformance/adopters/conformant-products#opencl

    Reply
  15. Tomi Engdahl says:

    IT professionals believe in open source

    The vast majority of IT professionals are replacing traditional software with open source tools.
    The reason for this is not the price, but better security.

    A Europe-wide Ponemon Institute study reported that 67 percent of IT professionals consider open source software to provide better continuity for enterprise systems. In the US the figure is even higher, at 74 percent.

    In the past, open source software was often justified by its lower price. Now the number one criterion is improved security.

    76 percent of the surveyed IT professionals believe that open source transparency increases the reliability of an application. Two out of three believe that transparency increases security and reduces privacy risks.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2335:it-ammattilaiset-uskovat-avoimeen-koodiin&catid=13&Itemid=101

    Reply
  16. Tomi Engdahl says:

    Librem 15: A Free/Libre Software Laptop That Respects Your Essential Freedoms
    https://www.crowdsupply.com/purism/librem-laptop

    The first high-end laptop that respects your freedom and privacy.

    The Purism Librem 15 is the first high-end laptop in the world that ships without mystery software in the kernel, operating system, or any software applications. Every other consumer-grade laptop you can purchase comes with an operating system that includes suspect, proprietary software, and there’s no way for you to know what that software does.

    The reality is that unless every aspect of your kernel, operating system, and software applications are free/libre and open source, there is no way to know that your computer is truly working in your best interest. Purism is the first to solve this problem.

    http://puri.sm/

    Reply
  17. Tomi Engdahl says:

    IBM To Cut More Than 110,000 Jobs, Report Says
    http://news.sky.com/story/1415024/ibm-to-cut-more-than-110000-jobs-report-says

    A report by a respected Silicon Valley journalist for Forbes said around 26% of the company’s 430,000-strong workforce would be cut this week.

    The company’s reorganisation is codenamed Project Chrome, and most of the staff being laid off are in the US.

    It will pave the way for IBM to focus on cloud computing, it is claimed, rather than its traditional hardware business.

    However, some technology commentators have said they are sceptical about the size of the reported cuts.

    “Last year, IBM hired 45,000 people, and the company currently has about 15,000 job openings around the world for new skills in growth areas such as cloud, analytics, security, and social and mobile technologies.

    IBM dismisses report saying more than 110,000 jobs being cut in massive overhaul
    http://business.financialpost.com/2015/01/26/ibm-corp-expected-to-cut-more-than-110000-jobs-in-massive-overhaul-report-says/?__lsa=9bdd-6ea4

    IBM dismissed on Monday a Forbes magazine report claiming the technology firm is preparing to cut about 26% of its workforce.

    IBM is in the process of layoffs, as disclosed in its latest earnings report last week, but they will affect “several thousand” employees only, according to an emailed statement from IBM to Reuters.

    “IBM does not comment on rumors, even ridiculous or baseless ones,”

    Reply
  18. Tomi Engdahl says:

    Brandon Bailey / Associated Press:
    Microsoft shares down 4.4% after hours as investors react to “not great” results for flagship business — Microsoft earnings report doesn’t excite market — 1 photo — SAN FRANCISCO (AP) — Big jumps in sales of its Surface tablets, cloud computing software
    http://www.bigstory.ap.org/article/41d4dab0754ca1d913c82e6b013d4541/microsoft-earnings-report-doesnt-excite-market

    Microsoft sees Surface revenue jump to $1.1B in Q2 2015, sells record 10.5M Lumia smartphones — As part of the company’s latest quarterly earnings announcement, Microsoft today revealed Surface revenue increased to $1.1 billion in Q2 2015, driven mainly by Surface Pro 3 and its accessories.
    http://venturebeat.com/2015/01/26/microsoft-sees-surface-revenue-jump-to-1-1b-in-q2-2015-sells-record-10-5m-lumia-smartphones/

    Reply
  19. Tomi Engdahl says:

    Linux chaps want to recycle your mobe as a supercomputer
    Slot your old smartphone into this chassis and let penguins pick its brains
    http://www.theregister.co.uk/2015/01/27/recycling_smartphone_motherboards_as_desktop_clusters/

    A Finnish group of phone developers, hoping to get the world interested in modular smartphones, has proposed a nifty idea for re-using their phone motherboards: turn them into clusters.

    The Linux-based Puzzlephone project wants to extend the life of smartphones by making more of the phone replaceable, on the premise that most of the hardware can last a decade, but consumers are locked into a much shorter upgrade cycle.

    The group, Circular Devices, is also giving thought to what to do with motherboards after they’ve been upgraded, and that’s where the cluster idea comes from.

    So the designers are getting to work on a chassis that can house multiple of the Puzzlephone’s motherboards, so that boards returned by upgrading users can be recycled as clusters they reckon could scale from home and small business users up to public institutions.

    PUZZLECLUSTER: The First Reuse Application of the PUZZLEPHONE
    http://www.puzzlephone.com/puzzlecluster-the-first-reuse-application-of-the-puzzlephone/753#more-753

    Reply
  20. Tomi Engdahl says:

    IBM details PowerPC microserver aimed at square kilometre array
    Fedora fires up on Freescale T4240-powered dense and frugal server
    http://www.theregister.co.uk/2015/01/27/ibm_details_powerpc_microserver_aimed_at_square_kilometre_array/

    IBM has revealed more about a PowerPC microserver it says will help to crunch data gathered by the square kilometre array (SKA), the colossal radio telescope to be built across South Africa and Australia.

    Once operational, the SKA is expected to generate around an exabyte – a million terabytes – of data each day. Even sorting the junk is going to need plenty of computing power, but operations at that scale need to be frugal lest electricity costs alone make the SKA horribly expensive to operate.

    IBM and others have been pondering this for some time and Big Blue’s efforts have included an effort to construct a microserver capable of being deployed in very, very, dense configurations. IBM’s trick for making that possible is water-cooling for each server.

    Freescale T4240 system-on-chip’s 12 actual (and 24 virtual) cores humming

    Reply
  21. Tomi Engdahl says:

    SimpliVity claims fivefold sales boost, hugs Cisco tightly
    Underdog grows into possible buyout target
    http://www.theregister.co.uk/2015/01/20/simplivity_posts_record_growth_whats_next/

    Things are looking up for hyperconverged vendor SimpliVity, which reported record growth in 2014 on the back of a number of strategic wins and a key partnership with Cisco.

    Simplivity is claiming a nearly 500 per cent increase in sales compared to 2013 and has now passed 400 employees worldwide, all of which makes me wonder what the future has in store for them.

    “over the course of 2014, SimpliVity shipped 1,500 OmniCube and OmniStack licenses”.

    “1,500 OmniCube and Omnistack licenses” can probably be translated as “1,500 deployments”.

    What Cisco chooses to do in this market is increasingly important. Cisco bought Invicta and thus has its own fibre channel arrays, something that has not made its previous hyperconverged partners EMC and NetApp happy. VMware – a member of the EMC federation – is leading the software defined networking revolution, the end result of which will be a gutting of Cisco’s networking revenues.

    The politics have gotten even more complicated with VMware’s vSAN emerging.

    Reply
  22. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Former Opera CEO Launches Vivaldi, A New Browser For Power Users — Opera’s former CEO Jon von Tetzchner is launching the first preview of Vivaldi today, a new Chromium-based browser that is squarely aimed at power users. Vivaldi features tools like Quick Commands for using written commands instead

    Former Opera CEO Launches Vivaldi, A New Browser For Power Users
    http://techcrunch.com/2015/01/27/vivaldi-the-four-browsers/

    Vivaldi features tools like Quick Commands for using written commands instead of the mouse, an Opera-like Speed Dial for quickly accessing bookmarks, a note-taking feature and the ability to organize tabs into stacks.

    “We are making a browser for our friends,” von Tetzchner told me earlier this week. “Vivaldi is for all those people who want more from their browsers.”

    The team has worked on the Vivaldi browser for the last year-and-a-half, and while this is clearly still an early preview and many features are still missing, the browser feels pretty polished already.

    The Vivaldi team decided to go with Chromium as the foundation of the browser. The team was obviously too small to write its own engine from scratch, and while von Tetzchner also looked at using Mozilla’s engine and WebKit, he decided to go with Google’s project in the end.

    “Going with WebKit didn’t make a lot of sense,” he said. “And going with Mozilla — we felt that fewer people were using it. They were two good choices in any case, but we went with the safer choice.”

    In the long run, Vivaldi will also include a built-in mail client. That seems like a throwback to an earlier generation of browsers, but it’s something Opera long included in its browser and that, according to Tetzchner, is something he always liked to have in a browser himself.

    “I don’t think webmail is for everyone,” he told me. “It’s definitely not for the advanced user.”

    Vivaldi currently has about 25 employees and von Tetzchner is self-funding the project for now.

    Reply
  23. Tomi Engdahl says:

    Increased Functionally Drives Flash Array Adoption
    http://www.eetimes.com/document.asp?doc_id=1325411&

    Flash arrays are here to stay, according to recent research released by IDC, and adoption is growing at a rapid pace.

    The research covers both all-flash arrays (AFAs) and hybrid flash arrays (HFAs) and shows the worldwide market for flash arrays will hit US$11.3 billion in 2014. IDC credits the growth to a wider variety of offerings from vendors that handle different, increasingly complex workloads.

    “The market grew significantly faster than we expected,”

    The worldwide HFA and AFA segments will reach $10.0 billion and $1.3 billion, respectively, in 2014, according to IDC’s research.

    Adoption of AFAs started around 2010, Burgener said, although vendors such as Violin Memory were selling them as early as 2007. The flash array landscape now includes startup revenue leaders Nimble Storage, Pure Storage, and SolidFire, while traditional enterprise storage vendors such as EMC, NetApp, Dell and others have all moved to offer flash-optimized HFAs, and in some cases AFAs. Some have acquired startups to add or enhance their flash capabilities.

    In the early days, organizations were typically buying AFAs for specific applications, such as databases that required a high level of performance

    More flash-based platforms are delivering enterprise-class data services, including snapshots, clones, encryption, replication, and quality of service as well as storage efficiency features

    Reply
  24. Tomi Engdahl says:

    3 key challenges for data center migration in 2015
    http://www.cablinginstall.com/articles/2015/01/tufin-datacenter-migration.html

    1. The need to understand your business applications in detail
    2. Minimizing disruption to the business during a migration
    3. Ensuring systems are secure and compliant

    Reply
  25. Tomi Engdahl says:

    6 data center trends to watch in 2015
    http://www.cablinginstall.com/articles/2015/01/emerson-six-datacenter-trends.html

    1. Cloud comes of age — Cloud computing has become established in the data center ecosystem as most organizations already use some form of software-as-a-service (SaaS). Now cloud is poised to expand from that foothold and become an engine of innovation.

    2. Integration extends its reach — Integrated systems were developed to help organizations deploy and scale applications faster while reducing risk and total costs. With rapid changes in many markets being driven by innovation, digitization and mobility, the need for speed that integration and convergence delivers is greater than ever. As a result, integration and convergence has expanded beyond the IT stack to the systems that support that stack. Most notably, data center facilities are now being designed and constructed from integrated, prefabricated modules

    3. Convergence goes macro — Technology systems aren’t the only things experiencing a convergence. The telecommunications and IT industries are moving closer together as voice and data services are now routinely consumed on the same device.

    4. Software paves the way for more software — Virtualization marked one of the most significant trends in the data center industry in the last twenty years. The impact of this development will continue to drive change for the foreseeable future as virtualization extends beyond computing to networking and storage.

    5. The edge gets stronger — After years of consolidation and centralization, IT organizations are turning their attention to the edge of the network to improve interactions with customers and applications. As organizations grow their use of analytics, location-based services, and personalized content, edge of network facilities will become critical in achieving competitive advantage. Capitalizing on this opportunity will require standard, intelligent and high availability infrastructure deployed close to users.

    6. Security becomes the new availability — When it comes to risk mitigation, data center managers have long had a singular focus: prevent downtime. Downtime hasn’t become any less of a risk, but a new threat has emerged in the form of cyber security.

    “Data centers are undergoing fundamental changes as management shifts their focus to issues such as speed of deployment, manageability, scalability, efficiency and security,”

    Reply
  26. Tomi Engdahl says:

    PwC: Forget margin, tech biz, HUSTLE for that SALE
    Also, you could be sold for parts. Sorry.
    http://www.channelregister.co.uk/2015/01/27/pwc_tech_companies_revenue_not_margins/

    The phrase “turnover is vanity, profit is sanity” apparently does not apply to the tech sector, with PwC advising companies this year to focus on revenue growth above margins and “yield maximum shareholder value”.

    According to the report 2015 Technology Industry Trends this is particularly important in a “volatile market”, which is set to see more HP-style “conscious uncouplings”.

    Last year eBay’s 20-year-old online auction business was separated from PayPal. Meanwhile IBM has invested billions of dollars by making dozens of purchases, especially in the cloud computing and big data sectors. By doing this, the company hopes to boost its sluggish revenue growth.

    “These are just early examples of the kinds of ongoing corporate re-evaluations that we believe many technology companies will undertake during the next few quarters,”

    According to the research “in the tech industry, in almost all situations, widening revenue streams is the only viable option for long-term survival.”

    Reply
  27. Tomi Engdahl says:

    Intel, Microsoft Improve Odds for AR
    Augmented Reality 101
    http://www.eetimes.com/document.asp?doc_id=1325410&

    There’s been an onslaught of announcements in the AR/VR field in just the last few weeks. The biggest was last week’s unveiling by Microsoft of its HoloLens platform. HoloLens projects three-dimensional images into the air, integrating virtual digital elements into the physical world. Picture Ben Kenobi talking to Princess Leia.

    Microsoft’s announcement turned tech-savvy media into instant AR/VR believers — sort of. Microsoft’s demo was definitely cool.

    But does this mean that 2015 will finally become the year when AR/VR starts going mainstream in earnest?

    Before we know the answer, we should consider what’s already clear. Big guns in the tech industry have departed the sidelines and begun throwing money and talent at AR/VR. Earlier this month, Intel announced a $24.8 million investment in Vuzix, a maker of enterprise-grade smart glasses.

    AR: Next-gen computing battle
    All these data points show that the industry sees AR/VR as a new platform where the next-generation computing battle will unfold.

    Ori Inbar, co-founder and CEO of AugmentedReality.org, sees AR as “inevitable.” He said, “Consider the innovation cycles of computing from mainframes, to personal computers, to mobile computing, to wearables: It was driven by our need for computers to get smaller, better, and cheaper. Wearables are exactly that — mini computers on track to shrink and disappear on our bodies.”

    Microsoft noted that HoloLens will run on the Windows 10 operating system and that it should be pretty simple for developers who write software for other Windows 10 devices to adapt their code to it. Evidently, Microsoft has invited other VR and AR companies to start building experiences and hardware based on its new holographic platform.

    IHS analyst Harding-Rolls calls the HoloLens initiative in particular and AR/VR in general “Microsoft’s attempt to take a major role in establishing” the next-generation computing platform.

    Well, that’s the big picture.

    “Currently the major technology companies are split between deeply immersed virtual reality solutions and more open, less immersed, augmented reality solutions, which are generally less physically disorientating.”

    Reply
  28. Tomi Engdahl says:

    Jonathan Vanian / Gigaom:
    Netflix is revamping its data architecture for streaming movies — Netflix is revamping the computing architecture that processes data for its streaming video service, according to a Netflix blog post that came out on Tuesday. — The Netflix engineering team wanted an architecture

    Netflix is revamping its data architecture for streaming movies
    https://gigaom.com/2015/01/27/netflix-is-revamping-its-data-architecture-for-streaming-movies/

    The Netflix engineering team wanted an architecture that can handle three key areas the video-streaming giant believes greatly affect the user experience: knowing what titles a person has watched; knowing where in a given title a person stopped watching; and knowing what else is being watched on someone’s account, which is helpful for family members who may be sharing one account.

    Although Netflix’s current architecture allows the company to handle all of these tasks, and the company built a distributed stateful system (meaning that the system keeps track of all user interaction and video watching and can react to any of those changes on the fly) to handle the activity, Netflix “ended up with a complex solution that was less robust than mature open source technologies” and wants something that’s more scalable.

    There’s a viewing service that’s split up into a stateful tier that stores the data for active views in memory; Cassandra is used as the primary data store with the Memcached key-value store built on top for data caching. There’s also a stateless tier that acts as “a fallback mechanism when a stateful node was unreachable.”

    This basically means that when an outage occurs, the data stored in the stateless tier can transfer over to the end user, even though that data may not be exactly as up-to-date or as relevant as the data held in the stateful tier.
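
    To make the fallback idea concrete, here is a minimal sketch in Python (purely hypothetical, not Netflix's actual code) of a read path that prefers the in-memory stateful tier and drops back to possibly stale data from a backing store when that tier is unreachable, favouring availability over strict consistency. The class and method names are invented for illustration:

    # Hypothetical sketch of a "stateful tier with stateless fallback" read path.
    class ViewingDataService:
        def __init__(self, stateful_tier, fallback_store):
            self.stateful_tier = stateful_tier    # in-memory, most up-to-date view state
            self.fallback_store = fallback_store  # e.g. a Cassandra/Memcached-backed cache

        def get_viewing_state(self, account_id, title_id):
            try:
                # Happy path: the stateful node holding this active view answers.
                return self.stateful_tier.get(account_id, title_id)
            except ConnectionError:
                # Stateful node unreachable: serve stale-but-available data instead.
                return self.fallback_store.get(account_id, title_id)

    # Trivial in-memory stand-ins, just to show the call pattern:
    class DictTier:
        def __init__(self, data):
            self.data = data
        def get(self, account_id, title_id):
            return self.data[(account_id, title_id)]

    service = ViewingDataService(DictTier({("acct1", "s02e03"): "00:41:12"}),
                                 DictTier({("acct1", "s02e03"): "00:39:00"}))
    print(service.get_viewing_state("acct1", "s02e03"))  # served from the stateful tier

    The only point of the sketch is the ordering of the two lookups; the real complexity lives in the data stores, partitioning and cross-region replication described above.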

    In regard to caching, the Netflix team apparently finds Memcached helpful for the time being, but is looking for a different technology

    Things got a bit more complex from an architecture perspective when Netflix “moved from a single AWS region to running in multiple AWS regions”

    For Netflix’s upcoming architecture overhaul, the company is looking at a design that accommodates these three principles: availability over consistency; microservices; and polyglot persistence, which means having multiple data storage technologies to be used for different, specific purposes.
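
    And as a rough illustration of what "polyglot persistence" can mean in practice (again a hypothetical sketch, not Netflix's design; all names are invented), a single write can be fanned out to whichever store suits each purpose:

    class PolyglotViewingStore:
        """Route each kind of viewing data to the store best suited to it."""
        def __init__(self, recent_cache, history_store, analytics_log):
            self.recent_cache = recent_cache    # low-latency "where did I stop?" lookups
            self.history_store = history_store  # durable per-account watch history
            self.analytics_log = analytics_log  # append-only feed for offline analysis

        def record_progress(self, account_id, title_id, position):
            self.recent_cache[(account_id, title_id)] = position
            self.history_store.setdefault(account_id, set()).add(title_id)
            self.analytics_log.append((account_id, title_id, position))

    store = PolyglotViewingStore({}, {}, [])
    store.record_progress("acct1", "s02e03", "00:41:12")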

    Netflix’s Viewing Data: How We Know Where You Are in House of Cards
    http://techblog.netflix.com/2015/01/netflixs-viewing-data-how-we-know-where.html

    Reply
  29. Tomi Engdahl says:

    Apple Reports Record First Quarter Results
    Highest-ever revenue & earnings drive 48% increase in EPS
    Growth led by record revenue from iPhone, Mac & App Store
    http://www.apple.com/pr/library/2015/01/27Apple-Reports-Record-First-Quarter-Results.html

    CUPERTINO, California—January 27, 2015—Apple® today announced financial results for its fiscal 2015 first quarter ended December 27, 2014. The Company posted record quarterly revenue of $74.6 billion and record quarterly net profit of $18 billion, or $3.06 per diluted share.

    The results were fueled by all-time record revenue from iPhone® and Mac® sales as well as record performance of the App Store℠. iPhone unit sales of 74.5 million also set a new record.

    Reply
  30. Tomi Engdahl says:

    Five years of Sun software under Oracle: Were the critics right?
    They said Java, MySQL and Solaris were all doomed
    http://www.theregister.co.uk/2015/01/28/five_years_sun_software_under_oracle/

    Whose Java is it anyway?

    While the Java Community Process (JCP) is technically open, Oracle has followed Sun’s lead in taking the foremost governance role over the Java specs – and if anything, it has gripped the reins even tighter. That quickly led to dissension in the ranks, which culminated in the Apache Software Foundation quitting the JCP Executive Committee in December 2010 following clashes with Oracle over licensing issues.

    “The commercial concerns of a single entity, Oracle, will continue to seriously interfere with and bias the transparent governance of the ecosystem,” the ASF said, adding that independent, open-source implementations of Java were impossible under Oracle’s licensing terms.

    To be fair, the ASF had raised similar complaints when Sun was in charge of Java. But Sun was generally thought of as a friendlier company than Oracle, and certainly a less litigious one.

    But what of the Java platform itself? Work on the Java Development Kit (JDK) and the language itself has mostly continued apace following the merger. Unfortunately, in Java Land that means the process of adding new features and pruning outdated specs remains as slow and arduous as ever.

    Not that these delays can be laid solely at Oracle’s feet.

    Yet in many ways the Java community seems to have bitten off more than it can chew.

    Further complicating matters was the seemingly endless parade of critical zero-day security vulnerabilities that plagued Java throughout 2012 and 2013. The bug bonanza grew so severe at one point that Oracle was forced to rethink its typical, thrice-annual Critical Patch Update schedule when customers threatened to uninstall Java en masse. But fixing bugs has its price, and the change in priorities led to still more delays for Java 8.

    My, my, my, Maria

    Longtime MySQL users, on the other hand, seem less willing to swear fealty to Oracle – particularly now that there’s a legitimate fork of the open source database, in the form of MariaDB.

    MySQL co-creator Monty Widenius – who left Sun just before the Oracle acquisition closed – never had much faith that Oracle would honor its commitments to maintain MySQL as open source software. And sure enough, Big Red has done its best to capitalize on the code by releasing certain modules as proprietary software available only to customers of the “enterprise edition,” not to mention upping the price of support.

    Little wonder that many of the major Linux distributions have dropped MySQL from their default installation images in favor of MariaDB, including Arch Linux, Fedora, OpenSuse, Red Hat, and Slackware. (Others, including Ubuntu, have stuck with MySQL as the default.)

    Still other customers are switching not because of some sort of open source puritanism but wariness of Oracle – as evidenced by the fact that MariaDB offers its own subscription-based enterprise version of its MySQL fork. These customers don’t mind paying for their open-source database, but an Oracle support contract isn’t their bag.

    So were the naysayers right? Has Oracle’s purchase of Sun resulted in the dismantling of a once-great software portfolio? The answer, it seems, is a mixed bag.

    Reply
  31. Tomi Engdahl says:

    VMware cracks US$6bn for the year, says SaaS will slow growth
    New businesses are doing well, but cloudy cash comes in spurts
    http://www.theregister.co.uk/2015/01/28/vmware_cracks_us6bn_for_the_year_says_saas_will_slow_growth/

    VMware today used its Q4 2014 earnings call to warn its revenue growth will slow because it’s doing so well in the cloud.

    The overall picture is rosy for VMware, which announced a US$1,703 million quarter and a $6.03 billion year, up 15 per cent and 16 per cent respectively on the corresponding periods from 2013.

    The NSX network virtualisation business is cracking the 400-customer and $200m-a-year barriers, and featured in nine out of ten new $10m deals closed last quarter. One financial services company has even spent $10m on NSX alone.

    VSAN is doing well, having racked up its 1000th paying customer and AirWatch is humming along at $200m a year of sales.

    The Reg’s virtualisation desk has been waiting for some time for VMware to offer a number of some sort to describe the performance of its vCloud Air cloud efforts. That number finally arrived today in the form of the mention that hybrid cloud and software-as-a-service accounted for five per cent of Q4 revenue, or about $85 million.

    Reply
  32. Tomi Engdahl says:

    Talent management
    21st-Century Talent Spotting
    https://hbr.org/2014/06/21st-century-talent-spotting

    Why did the CEO of the electronics business, who seemed so right for the position, fail so miserably? And why did Algorta, so clearly unqualified, succeed so spectacularly? The answer is potential: the ability to adapt to and grow into increasingly complex roles and environments. Algorta had it; the first CEO did not.

    Having spent 30 years evaluating and tracking executives and studying the factors in their performance, I now consider potential to be the most important predictor of success at all levels, from junior management to the C-suite and the board.

    With this article, I share those lessons. As business becomes more volatile and complex, and the global market for top professionals gets tighter, I am convinced that organizations and their leaders must transition to what I think of as a new era of talent spotting—one in which our evaluations of one another are based not on brawn, brains, experience, or competencies, but on potential.

    A New Era

    The first era of talent spotting lasted millennia. For thousands of years, humans made choices about one another on the basis of physical attributes.

    I was born and raised during the second era, which emphasized intelligence, experience, and past performance. Throughout much of the 20th century, IQ—verbal, analytical, mathematical, and logical cleverness—was justifiably seen as an important factor in hiring processes (particularly for white-collar roles), with educational pedigrees and tests used as proxies. Much work also became standardized and professionalized. Many kinds of workers could be certified with reliability and transparency, and since most roles were relatively similar across companies and industries, and from year to year, past performance was considered a fine indicator. If you were looking for an engineer, accountant, lawyer, designer, or CEO, you would scout out, interview, and hire the smartest, most experienced engineer, accountant, lawyer, designer, or CEO.

    I joined the executive search profession in the 1980s, at the beginning of the third era of talent spotting, which was driven by the competency movement still prevalent today.

    Now we’re at the dawn of a fourth era, in which the focus must shift to potential. In a volatile, uncertain, complex, and ambiguous environment (VUCA is the military-acronym-turned-corporate-buzzword), competency-based appraisals and appointments are increasingly insufficient. What makes someone successful in a particular role today might not tomorrow if the competitive environment shifts, the company’s strategy changes, or he or she must collaborate with or manage a different group of colleagues. So the question is not whether your company’s employees and leaders have the right skills; it’s whether they have the potential to learn new ones.

    Unfortunately, potential is much harder to discern than competence (though not impossible, as I’ll describe later).

    The recent noise about high unemployment rates in the United States and Europe hides important signals: Three forces—globalization, demographics, and pipelines—will make senior talent ever scarcer in the years to come.

    The impact of demographics on hiring pools is also undeniable. The sweet spot for rising senior executives is the 35-to-44-year-old age bracket, but the percentage of people in that range is shrinking dramatically.

    The third phenomenon is related and equally powerful, but much less well known: Companies are not properly developing their pipelines of future leaders.

    In many companies, particularly those based in developed markets, I’ve found that half of senior leaders will be eligible for retirement within the next two years, and half of them don’t have a successor ready or able to take over. As Groysberg puts it, “Companies may not be feeling pain today, but in five or 10 years, as people retire or move on, where will the next generation of leaders come from?”

    Reply
  33. Tomi Engdahl says:

    Ask Slashdot: What Makes a Great Software Developer?
    http://ask.slashdot.org/story/15/01/27/2340240/ask-slashdot-what-makes-a-great-software-developer

    What does it take to become a great — or even just a good — software developer? According to developer Michael O. Church’s posting on Quora (later posted on LifeHacker), it’s a long list: great developers are unafraid to learn on the job, manage their careers aggressively, know the politics of software development (which he refers to as ‘CS666′), avoid long days when feasible, and can tell fads from technologies that actually endure… and those are just a few of his points.

    What Makes a Great Software Developer?
    http://news.dice.com/2015/01/27/makes-great-software-developer/?CMPID=AF_SD_UP_JS_AV_OG_DNA_

    What does it take to become a great—or even just a good—software developer?

    According to developer Michael O. Church’s posting on Quora (later posted on LifeHacker), developers who want to compete in a highly competitive industry should be unafraid to learn on the job; manage their careers aggressively; recognize under- and over-performance (and avoid both); know the politics of software development (which he refers to as “CS666”); avoid fighting other people’s battles; and physically exercise as often as possible.

    That’s not all: Church feels that developers should also manage their hours (and avoid long days when feasible), learn as much as they can, and “never apologize for being autonomous or using your own time.”

    Whether or not you subscribe to Church’s list, he makes another point that’s valuable to newbie and experienced developers alike: Recognize which technologies endure, and which will quickly fade from the scene. “Half of the ‘NoSQL’ databases and ‘big data’ technologies that are hot buzzwords won’t be around in 15 years,” he wrote. “On the other hand, a thorough working knowledge of linear algebra (and a lack of fear with respect to the topic!) will always suit you well.” While it’s good to know what’s popular, he added, “You shouldn’t spend too much time [on fads].” (Guessing which are fads, however, takes experience.)

    His own conclusion: If you want to become a great programmer or developer, learn to slow down.

    “If you create something with a solid foundation that is usable, maintainable and meets a real need, it will be as relevant when you finally bring it to market as it was when you came up with the idea, even if it took you much longer than you anticipated,” Gertner wrote. “In my experience, projects fail far more often because the software never really works properly than because they missed a tight market window.”

    But that hasn’t stopped many developers from embracing the concept of speed, even if it means skipping over things like code reviews.

    “We need to recognize that our job isn’t about producing more code in less time, it’s about creating software that is stable, performant [sic], maintainable and understandable.”

    What I Wish I Knew When Starting Out as a Software Developer: Slow the Fuck Down
    http://blog.salsitasoft.com/what-i-wish-i-knew-when-starting-out-as-a-software-developer-slow-the-fuck-down/

    Most obviously, Michael urges young programmers deciding how much effort to put into their work to “ebb towards underperformance”. He goes on to impart a wealth of useful tips for navigating corporate politics and coming out on top.

    More to the point, however, is that only one of Michael’s fourteen points (“Recognize core technological trends apart from fluff”) is specific to software development at all. The rest are good general career advice whatever your line of work. The implication of the article’s title (“…as a software developer”) is that it should be somehow specific to our industry, and that’s how I would approach the question.

    When I was a teenager, I thought I was a truly great programmer. I could sit down and bang out a few hundred lines of C or Pascal and have them compile and run as expected on the first try. I associated greatness in software development with an ability to solve hard technical problems and a natural, almost instinctive sense of how to translate my solutions into code.

    Even more than that, I associated it with speed.

    It wasn’t until after university that cracks in my self-satisfaction started to show.

    The most striking lesson I have learned is that timeframes that seem absurdly bloated beforehand tend to look reasonable, even respectable, at the end of a project. Countless times over the course of my career, my colleagues and I have agreed that a project couldn’t possibly take more than, say, three months. Then it ends up taking nine months or twelve months or more (sometimes much more). But looking back, with all the details and pitfalls and tangents in plain sight, you can see that the initial estimate was hopelessly unrealistic. If anything, racing to meet it only served to slow the project down.

    In my experience, projects fail far more often because the software never really works reliably than because they missed a tight market window.

    And yet the vast majority of developers still seem reluctant to spend time on unit tests, design documentation and code reviews (something I am particularly passionate about). There is a widespread feeling that our job is about writing code, and that anything else is a productivity-killing distraction.

    We need to recognize that our job isn’t about producing more code in less time, it’s about creating software that is stable, performant, maintainable and understandable (to you or someone else, a few months or years down the road).

    What I Wish I Knew When I Started My Career as a Software Developer
    http://lifehacker.com/what-i-wish-i-knew-when-i-started-my-career-as-a-softwa-1681002791

    1. Don’t be afraid to learn on the job.
    2. Manage your career aggressively. Take responsibility for your own education and progress.
    3. Recognize under-performance and over-performance and avoid them.
    4. Never ask for permission unless it would be reckless not to. Want to spend a week investigating something on your own initiative? Don’t ask for permission. You won’t get it.
    5. Never apologize for being autonomous or using your own time.
    6. Learn CS666 (what I call the politics of software development) and you can usually forget about it. Refuse to learn it, and it’ll be with you forever.
    7. Don’t be quixotic and try to prove your bosses wrong. When young engineers feel that their ideas are better than those of their superiors but find a lack of support, they often double down and throw in a lot of hours.
    8. Don’t fight other peoples’ battles. As you’re young and inexperienced, you probably don’t have any real power in most cases. Your intelligence doesn’t automatically give you credibility.
    9. Try to avoid thinking in terms of “good” versus “bad.” Be ready to play it either way. Young people, especially in technology, tend to fall into those traps—labeling something like a job or a company “good” or “bad” and thus reacting emotionally and sub-optimally.
    10. Never step back on the salary scale except to be a founder. As a corollary, if you step back, expect to be treated as a founder. A 10% drop is permissible if you’re changing industries
    11. Exercise. It affects your health, your self-confidence, your sex life, your poise and your career. That hour of exercise pays itself off in increased productivity.
    12. Long hours: sometimes okay, usually harmful. The difference between 12% growth and 6% growth is meaningful.
    13. Recognize core technological trends apart from fluff. Half of the “NoSQL” databases and “big data” technologies that are hot buzzwords won’t be around in 15 years. On the other hand, a thorough working knowledge of linear algebra (and a lack of fear with respect to the topic!) will always suit you well. There’s a lot of nonsense in “data science” but there is some meat to it.
    14. Finally, learn as much as you can. It’s hard. It takes work. This is probably redundant with some of the other points, but once you’ve learned enough politics to stay afloat, it’s important to level up technically.

    Reply
  34. Tomi Engdahl says:

    Finnish companies are not ready to cope with big data

    Although the majority of companies are pushing ahead with big data projects, 49 percent of respondents in the Nordic countries say that their company's data center infrastructure is not ready to handle the new data volumes and complexity. Among Finnish companies the figure was as high as 63 percent.

    This comes from a survey conducted by SAS Institute in December, which asked around 300 of the largest companies about their big data plans. The survey found that 80 percent of key personnel at large companies see the amount of data their company needs to collect and analyze increasing.

    As many as 92 percent of survey respondents believe that new kinds of data collection and analytics could give their company a competitive advantage in the future. According to SAS Institute, this shows that Nordic companies have internalized the importance of big data and understand the impact that better data collection and analytics can have on a company's productivity and competitiveness.

    Two-thirds of companies plan to upgrade their infrastructure, or buy new infrastructure, to support data collection. The survey also shows, however, that companies are uncertain about the right technology choices and about how to get started with new investments.

    As many as 95 percent of the Finnish respondents report that they have a growing need for analytics.

    Gartner recently estimated that within five years half of the world's data will be stored using Hadoop technology. Currently, about 15 percent of Finnish respondents report that they are using Hadoop or are in the process of adopting it.

    Hadoop is an open source technology for storing, organizing and analyzing large amounts of data.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2346:suomalaisyritykset-eivat-selvia-big-datasta&catid=13&Itemid=101

    Reply
  35. Tomi Engdahl says:

    Spinning rust into gold: WD profit grows … though sales dip
    Rival Seagate being outsold by Milligan’s Mob
    http://www.theregister.co.uk/2015/01/28/wd_fortified_with_more_profit_fewer_discs_seagate/

    Western Digital sold 61 million drives in its latest quarter, 2.1 million less than a year ago, but made $30m more profit, and it out-shipped and out-earned rival Seagate

    All in all, there’s not a lot of change and a less-than-hoped-for PC sales number caused a decline in disk drive units shipped (61 million compared with 63.1 million a year ago, and 64.7 million a quarter ago).

    Where’s all that Big Data that’s supposed to be flooding in? Not on the desktop, that’s for sure.

    It’s expected that PC disk revenues will continue to decline as SSD take-up on the desktop increases

    Reply
  36. Tomi Engdahl says:

    Watch out Seagate: Here comes WD with a hybrid flash/disk drive launch
    ‘Moving toward simplification of the PC subsystem’
    http://www.theregister.co.uk/2015/01/07/seagate_wd_hybrid_flash_disk_drive/

    WD is demonstrating a faster-than-Seagate 4TB hybrid flash/disk drive at Storage Visions 2015.

    It is built with a 3.5-inch disk drive, an SSD of up to 128GB, and the SATA Express PCIe interface, with the flash and disk components presented as a single volume.

    The demos include commercially available ASRock and Gigabyte motherboards with cabled SATA Express PCIe interconnect flexibility and single-volume caching and RAID configuration options.

    The SATA Express PCIe scheme has multiple PCIe lanes and two SATA 3.0 (6Gbit/s) ports active through a single host SATA Express connector. WD demonstrated hard disk drives connected using SATA Express in June last year, with partners ASUS and Gigabyte.

    Hybrid flash/disk drives offer disk levels of capacity and near-SSD performance at lower prices.

    WD has been focussing on building in greater amounts of flash than Seagate.

    “We see the industry moving toward simplification of the overall PC subsystem to a single storage bus based around the PCIe protocol,” said Gary Meister, WD’s SVP of engineering. “In this demo, we placed a hard drive, flash NAND and SATAe technology into one package, freeing up one slot in the system.”

    Reply
  37. Tomi Engdahl says:

    Testing USB 3.1: Some Preliminary Results with the MSI X99A Gaming 9 ACK
    by Ian Cutress on January 28, 2015 11:00 AM EST
    http://www.anandtech.com/show/8938/testing-usb-3-1-some-preliminary-results-with-the-msi-x99a-gaming-9-ack

    Everyone wants more. Everyone wants more storage space, faster access, cheaper costs. It should have been here yesterday, but we are restrained by the current rate of development, which sometimes moves in strides and sometimes does the 1-2 quickstep. For anyone that owns a desktop, a laptop, or a portable device that needs charging, the USB standard is one such technology that would benefit from being faster, more efficient and less expensive. The USB ecosystem has seen off several other standards during its tenure, and is now ubiquitous worldwide when it comes to data transfer and charging.

    At CES 2015 we saw the first of a new wave of products ushering in the latest standard from the USB Implementers Forum, USB-IF, known as USB 3.1. Since the move from USB 2.0 to USB 3.0 in most first world business environments has taken a number of years, other standards such as Thunderbolt are projecting themselves to be a catch-all solution featuring data and display technology throughput with daisy chaining. USB 3.1 attempts to trump this by adding power to the mix, accepting a new type of reversible connector, and integrating itself by virtue of backwards compatibility with all the USB 1.0, USB 2.0 and USB 3.0 devices on the market.

    The penultimate one is very important, as it indicates that USB Type-C will cover DisplayPort, USB 3.1 data rates and power all within a single specification, more so than Thunderbolt. The last one is one of the reasons for today’s article: MSI is bringing motherboards with USB 3.1 to market.

    The controller requires two PCIe 2.0 lanes, meaning that the peak throughput for the Type A connectors overall will be 10 Gbps between the two, similar to the way a hub works.

    The hardware is an ASMedia testing board with the ASM1352R controller in place, connected to two SSDs.

    USB 3.1 is technically good for 1.25 GB/sec, minus overhead due to the 128b/132b encoding (3%) and other functionality (sources state only 7.2 Gbps is useable, limiting peaks to 900 MBps).
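
    The arithmetic behind those figures is easy to reproduce; note that the 7.2 Gbps "usable" number is the one quoted above from third-party sources, not a value taken from the specification:

    raw_gbps = 10.0                      # USB 3.1 Gen 2 signalling rate
    encoded_gbps = raw_gbps * 128 / 132  # 128b/132b line coding, ~3% overhead
    usable_gbps = 7.2                    # figure quoted by the sources above
    print(round(encoded_gbps, 2), "Gb/s after encoding")  # ~9.7 Gb/s, i.e. ~1.21 GB/s
    print(round(usable_gbps / 8 * 1000), "MB/s usable")   # ~900 MB/s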

    All in all, USB 3.1 from the motherboard controller point of view is working. All we are waiting for is market adoption and the devices to start appearing that can use it. Unfortunately, my predictions lay it on more so for H2 2016 or H1 2017. I say this as a result of time scale adoption on USB 3.0, plus the added issues tied in with Type C connectors.

    Only the top USB 3.0 drives currently push the limits of the specification, or docking ports that can handle SSDs. So it will be a while before the basic run-of-the-mill flash drives can take advantage, as they will have to devise a better way to extract performance. But ultimately one of the benefits of USB 3.1 will be for docking, or external hubs that can take advantage of power plus data, especially on the go.

    Reply
  38. Tomi Engdahl says:

    Anonymous No More: Your Coding Style Can Give You Away
    http://developers.slashdot.org/story/15/01/28/1937252/anonymous-no-more-your-coding-style-can-give-you-away

    Researchers from Drexel University, the University of Maryland, the University of Goettingen, and Princeton have developed a “code stylometry” that uses natural language processing and machine learning to determine the authors of source code based on coding style.

    CSI Computer Science: Your coding style can give you away
    http://www.itworld.com/article/2876179/csi-computer-science-your-coding-style-can-give-you-away.html

    New research shows that programmers have ways of writing code which are almost as unique as fingerprints

    Researchers from Drexel University, the University of Maryland, the University of Goettingen, and Princeton have developed a “code stylometry,” which uses natural language processing and machine learning to determine the authors of source code based on coding style. Their findings, which were recently published in the paper “De-anonymizing Programmers via Code Stylometry,” could be applicable to a wide range of situations where determining the true author of a piece of code is important. For example, it could be used to help identify the author of malicious source code and to help resolve plagiarism and copyright disputes.

    The authors based their code stylometry on traditional style features, such as layout (e.g., whitespace) and lexical attributes (e.g., counts of various types of tokens). Their real innovation, though, was in developing what they call “abstract syntax trees,” which are similar to parse trees for sentences and are derived from language-specific syntax and keywords. These trees capture a syntactic feature set which, the authors wrote, “was created to capture properties of coding style that are completely independent from writing style.” The upshot is that even if variable names, comments or spacing are changed, say in an effort to obfuscate, while the functionality is unaltered, the syntactic feature set won’t change.
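
    As a toy illustration of why syntax-tree features survive renaming and reformatting (this is not the researchers' actual pipeline, which targets C/C++ source; it just shows the general idea using Python's built-in ast module):

    import ast
    from collections import Counter

    def ast_style_features(source_code):
        """Normalized counts of AST node types: a crude, style-like feature vector."""
        tree = ast.parse(source_code)
        counts = Counter(type(node).__name__ for node in ast.walk(tree))
        total = sum(counts.values())
        return {name: n / total for name, n in counts.items()}

    # Renaming variables and changing comments leaves the vector untouched,
    # which is the property the syntactic feature set relies on.
    a = "def f(x):\n    return x + 1\n"
    b = "def grow(value):  # renamed and commented\n    return value + 1\n"
    assert ast_style_features(a) == ast_style_features(b)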

    Here were some of their key findings:

    - Their code stylometry achieved 95% accuracy in identifying the author of anonymous code.
    - Accuracy rates weren’t statistically different when using an off-the-shelf C++ code obfuscator
    - Coding style is better defined when solving harder problems. The identification accuracy rate improved when the training dataset was based on more difficult programming problems.

    In any case, though, be aware that your fingerprints are all over your code, for better or for worse.

    Reply
  39. Tomi Engdahl says:

    Office everywhere: More great news for Office on iOS and Android
    http://blogs.microsoft.com/blog/2015/01/29/office-everywhere-great-news-office-ios-android/

    Last week, we saw the new experiences coming with Windows 10, including how Office universal apps for Windows 10 will deliver new touch scenarios for the smallest devices all the way up to Microsoft Surface Hub. This came on the heels of our recent Office for iPhone release and Office for iPad update, as well as the Preview of Office for Android tablets. These are all important steps toward our goal of bringing the power of Office everywhere – to everyone, on every device.

    To date, we’ve seen more than 80 million downloads of Office on iPhone and iPad worldwide.

    With these releases, we’ve only scratched the surface of what’s possible as we work to reinvent productivity for a mobile-first, cloud-first world — and today, we are making two important announcements that continue this journey.

    First, we are pleased to announce that we’re removing the “preview” label from our Office for Android apps. As of today, Word for Android tablet, Excel for Android tablet and PowerPoint for Android tablet will be available in the Google Play store as free downloads. These apps join the highly rated OneNote for Android, as well. We are also committed to supporting the Intel chipsets via a native implementation that will be available within a quarter.

    If you’re a consumer, just download the free apps from the Google Play store and then log in with your Microsoft Account to create files, print and perform day-to-day editing for free.

    Reply
  40. Tomi Engdahl says:

    Intel 5th Gen vPro Goes 60GHz Wireless
    vPro Wireless Display (WiDi)/wireless docking
    http://www.eetimes.com/document.asp?doc_id=1325466&

    Intel’s new Pro Wireless Display (WiDi) and 60-GHz Wireless Docking technology enables tablets, laptops, detachable 2 in 1s and any other x86-based form factor to cut all cords. Keyboards, large-screen displays, printers, network connections, mice, USB accessories and everything else that once required a wire is now obsolete when using Intel’s 5th gen vPro core processors, according to Intel’s Tom Garrison, vice president and general manager of the Business Client Platform Division within the PC Client Group.

    HP and Fujitsu already have models available for sale, 12 worldwide OEMs have committed to 5th gen vPro and the top six OEMs will be showing demonstrations of their offerings today in New York City and London.

    “5th generation vPro processors are cutting the last cords to the computers using them,”

    The main additions to the 5th gen core processors are the integration of new on-chip capabilities to drive wireless displays and 60GHz docking stations, which Garrison claims will multiply employee productivity with automatic connection to the nearest large-screen displays in the meeting room and to keyboards, printers, mice and displays in the office cubicle.

    “Intel’s new ‘no wires workplace’ will increase employee efficiency by moving designs into the physical world more quickly, increasing productivity,”

    Original-equipment manufacturers (OEMs), such as ActionTec, are already manufacturing Pro WiDi adapters that connect to projectors, large-screen displays, printers, docking stations and anything else in the office that used to require a wired connection.

    Reply
  41. Tomi Engdahl says:

    Intel Announces Broadwell vPro Processors: Wireless Docking and More
    by Stephen Barrett on January 29, 2015 7:00 AM EST
    http://www.anandtech.com/show/8943/intel-announces-broadwell-vpro-processors-wireless-docking-and-more

    While Intel formally announced availability of Broadwell-U processors at CES this year, vendors did not actually have any devices available for purchase containing Intel vPro technology. Today that changes, as Intel states the HP Elite x2 1011 and several devices from Fujitsu sporting 5th Generation Intel vPro processors are now available with more to arrive shortly. Businesses that rely on vPro’s management features are now able to purchase new laptops containing Intel’s Broadwell-U processors with vPro features.

    If you’re not familiar with vPro, this is primarily an out-of-band management technology that Intel builds into several of their products such as SSDs, NICs, WiFi cards, chipsets, and CPUs. Intel brands their out-of-band management as Intel Active Management Technology (AMT). While many business professionals experience IT management such as software updates and group policy enforcement, these are all at the OS level. Intel AMT provides IT tools at the hardware level, which means remote PCs can be accessed even when the OS is down or the power is off.

    For example, if a device is lost containing sensitive data, AMT could be used to access location services of the device, restrict access, or even erase data. Another neat feature of AMT is using an integrated VNC server, allowing remote monitoring of the Intel integrated graphics feed and even keyboard/mouse control. Going even further, Intel AMT can even redirect the boot process of a PC to an IT provided remote image.

    Over time, vPro has broadened to include more than just Intel AMT. Importantly, vPro also includes Intel Trusted Execution Technology (TXT), which works with a Trusted Platform Module (TPM) to secure a device against low level attacks and provide unique secure device identifiers to management systems. With Broadwell vPro processors, Intel is again expanding vPro to encompass more technology with Intel Wireless Docking and Intel Pro Wireless Display. It is important to note that these features are available to manufacturers using a vPro package from Intel, but each device may not implement them.

    Intel Wireless Docking could be the most exciting new feature. Using four channels of 802.11ad at 60 GHz radio frequency, Intel claims a total bandwidth of 7 Gbps. All data passed between the dock and device is protected with 128-bit AES hardware encryption, and two monitors plus USB 3.0 are supported

    One important note about Intel Wireless Docking is that while it uses 802.11ad 60 GHz networking (WiGig), the actual protocol layer Intel runs is different than devices such as the Dell D5000 that also uses 802.11ad; the two are not compatible. Intel stated devices such as the Dell D5000 are typically using 802.11ad as a simple USB bridge using the WiGig Bus Extension (WBE) layer, whereas Intel Wireless Docking is a tight integration with the Broadwell SoC providing new experiences such as those listed above, on-screen-display, and remote firmware management. Intel uses the WiGig Display Extension (WDE) and WiGig Serial Extension (WSE) layers. Intel states they are working with the other Wi-Fi Alliance members through various plug fest and other activities to promote interoperability.

    Intel Pro Wireless Display (Pro WiDi) might sound familiar, and this is a variation of the Intel Wireless Display (WiDi) technology with the key naming difference being the insertion of Pro.

    Finally, Intel Identity Protection Technology (IPT) now supports multi-factor authentication. This provides IT with more options to specify which authentication factors can be used for enterprise applications, such as a paired Bluetooth device.

    Reply
  42. Tomi Engdahl says:

    Shrinking the server carbon footprint

    The European Commission wants to define the carbon footprint of data centers. A pilot program for this purpose was launched back in 2013. The aim is to harmonize legislation across the Union.

    A product's carbon footprint is referred to as its PEF (Product Environmental Footprint). The Commission defines the PEF category rules, and leading manufacturers produce group-specific, life-cycle-based guidelines for design and manufacturing. The goal is a more detailed and complete picture of a product's environmental impact than the current calculation method, which is based on operating efficiency alone.

    Life cycle assessment takes into account not only the energy consumed during operation but also the energy and resources consumed in manufacturing, installing, dismantling and recycling the product. Life cycle analysis gives both manufacturers and users a more accurate picture of the environmental impact of the whole system.

    The result is a more holistic picture for the decisions that affect green data centers and the selection of the installed IT equipment. For example, a product that is energy efficient in operation but unreliable, or based on a less sustainable design, may not make the development of a "greener" product line possible.

    The aim for manufacturers and system integrators is to minimize the impact of all of these factors in order to ensure a balanced approach to the environmental effects. Power supply architecture plays a key role in ensuring the right balance.

    The growing degree of integration in servers has made it possible to compress multiple processor cores and their supporting logic into the same system-on-chip (SoC) circuits, a number of which can be found on each card.

    Only two decades ago, 150 watts was a realistic maximum for a brick-class power supply. Now quarter-brick converters, which take up only 21 square centimeters of printed circuit board space, can deliver up to 864 watts and soon even a kilowatt.

    The thermal compatibility of this high-density power with its environment is one of the key factors.

    Since the most important cooling methods in servers are air conduction and forced airflow, airflow planning is an important part of the design. Open-frame power supplies have become popular because their structure improves airflow efficiency. They also use less metal in their structures and enclosures. Their actual performance depends on the operating conditions.

    An open structure is much more sensitive to the direction of the airflow than a closed one.

    To support the high currents and reliability requirements of multi-core servers, power supplies must often be used in parallel N+1 configurations. Regulation is the key issue in parallel architectures.

    To ensure the correct functioning of the powered circuits, the voltage fed to them must be held within very tight tolerances, often less than ±30 millivolts.

    Digital control is a more flexible and more efficient approach. It can also reduce the use of materials by allowing cheaper passive components.

    By taking these system-level requirements into account, the designers of advanced IT systems for data centers can meet future, stricter legal requirements based on products' life-cycle environmental impact.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2355:palvelimen-hiilijalanjalki-pienemmaksi&catid=26&Itemid=140

    Reply
  43. Tomi Engdahl says:

    World could ‘run out of storage capacity’ within two years warns Seagate
    Blame the Data Capacity Gap
    http://www.techradar.com/news/internet/data-centre/world-could-run-out-of-storage-capacity-within-2-years-warns-seagate-vp-1278040/1

    Mark Whitby, SVP of branded products at Seagate, walks us through the fascinating world of storage, warning us of the dangers of not producing enough storage capacity and introducing us to the concept of the Zettabyte.

    TRP: Why should people care about storage?

    MW: Data has never been more important. As valuable as oil and just as difficult to mine, model and manage, data is swiftly becoming a vital asset to businesses the world over.

    Companies large and small are taking their first steps in data analytics, keen to gain an insight into how their customers behave and so better position themselves in the market place. Although still in its infancy, analytics holds the potential to one day allow them to find solutions, sell more products and develop customer trust.

    Data centres today are not equipped to be able to handle the anticipated influx generated by the Internet of Things, nor geared towards feeding it smoothly across to the analytics platforms where it can prove its worth. There is little chance that the billions of whirring silicon-based hard drives around the world will be able to keep up with the flood of data driven by the 26 billion connected devices (not including some 7.3 billion smartphones, tablets and PCs) that Gartner predicts will be in use by 2020.

    TRP: What do you think will be the main challenge facing the storage industry over the next 5 years?

    MW: Three words: data capacity gap.

    We are entering a world where everything is connecting to everything else and the resulting big data is anticipated to solve virtually all our problems. However, by 2016, the hard drives housed in all those connected devices, whirring away in countless data centres, will start to reach their limits.

    The total amount of digital data generated in 2013 was about 3.5 zettabytes (that’s 35 with 20 zeros following). By 2020, we’ll be producing, even at a conservative estimate, 44 zettabytes of data annually.

    At this current rate of production, by 2016 the world will be producing more digital information than it can easily store. By 2020, we can predict a minimum capacity gap of over six zettabytes – nearly double all the data that was produced in 2013.

    TRP: If the world is running out of storage, why can we not simply increase production of hard drives and build more data centres?

    MW: Unfortunately, the imminent breach between storage demand and production is not a problem that can so easily be solved. The fact of the matter is that it’s far harder to manufacture capacity than it is to generate data.

    TRP: What are some of the latest innovations in data storage that could help heal the data capacity gap in 2020?

    MW: Silicon may be the work-horse that has helped us get to where we are today, but it’s starting to show its age. Fortunately, there is an impressive amount of innovation taking place in the industry at the moment and a number of these advances could help us to seal the data storage breach over the next five to 10 years.

    RRAM (resistive random access memory) is one such example.

    If RRAM doesn’t seem quite far enough removed from the world of silicon-based storage, there’s also DNA to consider. Last year, a team of scientists from the European Bioinformatics Institute reportedly stored a complete set of Shakespeare’s sonnets, a PDF of the first paper to describe DNA’s double helix structure, a 26-second mp3 clip from Martin Luther King Jr.’s “I Have a Dream” speech, a text file of a compression algorithm, and a JPEG photograph in a strand of DNA, no bigger than a speck of dust.

    TRP: Is Seagate developing any new storage solutions at the moment?

    MW: Heat-assisted magnetic recording (HAMR) is one new technology that Seagate is investing in. This method uses lasers to first heat the high-stability media before magnetically recording data. HAMR is expected to increase the limit of magnetic recording by more than a factor of 100 and this could theoretically result in storage capacities as great as 50 terabits per square inch – current hard drives generally have a capacity of only a few hundred gigabits per square inch.
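
    A quick sanity check of the "factor of 100" claim using the article's own numbers, taking "a few hundred gigabits" as roughly 500 gigabits per square inch:

    hamr_tb_per_sq_in = 50        # theoretical HAMR areal density quoted above
    current_tb_per_sq_in = 0.5    # ~500 Gbit per square inch for today's drives
    print(hamr_tb_per_sq_in / current_tb_per_sq_in)  # -> 100.0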

    TRP: Will CIOs need to supplement existing storage resources?

    MW: CIOs certainly need to consider the implications of a data capacity gap for their business and address it by thinking strategically and longer term in regards to their storage resources.

    One of the latest big data storage methods is a tiered model using existing technologies. This model utilises a more efficient capacity-tier based on pure object storage at the drive level. Above this sits a combination of high performance HDD (hard disk drives), SSHD (solid state hybrid) and SSD (solid state drives).

    Reply
  44. Tomi Engdahl says:

    MIT Randomizes Tasks To Speed Massive Multicore Processors
    http://hardware.slashdot.org/story/15/02/02/0317222/mit-randomizes-tasks-to-speed-massive-multicore-processors

    Researchers at the Massachusetts Institute of Technology have created a data structure that they claim can help large multicore processors churn through their workloads more effectively. Their trick? Do away with the traditional first-come, first-served work queue and assign tasks more randomly.

    Parallelizing common algorithms
    Researchers revamp a common “data structure” so that it will work with multicore chips.
    http://newsoffice.mit.edu/2015/new-priority-queues-data-structure-0130

    Today, hardware manufacturers are making computer chips faster by giving them more cores, or processing units. But while some data structures are well adapted to multicore computing, others are not. In principle, doubling the number of cores should double the efficiency of a computation. With algorithms that use a common data structure called a priority queue, that’s been true for up to about eight cores — but adding any more cores actually causes performance to plummet.

    In simulations, algorithms using their data structure continued to demonstrate performance improvement with the addition of new cores, up to a total of 80 cores.

    A priority queue is a data structure that, as its name might suggest, sequences data items according to priorities assigned them when they’re stored. At any given time, only the item at the front of the queue — the highest-priority item — can be retrieved. Priority queues are central to the standard algorithms for finding the shortest path across a network and for simulating events, and they’ve been used for a host of other applications, from data compression to network scheduling.

    With multicore systems, however, conflicts arise when multiple cores try to access the front of a priority queue at the same time.

    To avoid this problem, Kopinsky; fellow graduate student Jerry Li; their advisor, professor of computer science and engineering Nir Shavit; and Microsoft Research’s Dan Alistarh, a former student of Shavit’s, relaxed the requirement that each core has to access the first item in the queue. If the items at the front of the queue can be processed in parallel — which must be the case for multicore computing to work, anyway — they can simply be assigned to cores at random.

    But a core has to know where to find the data item it’s been assigned, which is harder than it sounds.
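
    A toy, single-threaded sketch of the relaxation idea (not the researchers' actual concurrent data structure): instead of always popping the single highest-priority item, pop a random item from among the first few, so that cores contending on the head could be spread across different near-front slots:

    import heapq
    import random

    class RelaxedPriorityQueue:
        def __init__(self, relaxation=8):
            self._heap = []
            self._relaxation = relaxation  # how far from the front a pop may land

        def push(self, priority, item):
            heapq.heappush(self._heap, (priority, item))

        def pop_relaxed(self):
            # Pull up to `relaxation` smallest entries, hand back one at random,
            # and push the rest back. In a real concurrent design this is what
            # would let different cores grab different near-front items in parallel.
            k = min(self._relaxation, len(self._heap))
            front = [heapq.heappop(self._heap) for _ in range(k)]
            chosen = front.pop(random.randrange(k))
            for entry in front:
                heapq.heappush(self._heap, entry)
            return chosen

    q = RelaxedPriorityQueue()
    for p in range(100):
        q.push(p, "task-%d" % p)
    print(q.pop_relaxed())  # one of the 8 highest-priority tasks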

    Reply
  45. Tomi Engdahl says:

    Port authority: Belkin Thunderbolt 2 Express Dock
    Costly connectedness with 4K support
    http://www.theregister.co.uk/2015/02/02/review_belkin_thunderbolt_2_express_dock_f4u085/

    Reply
  46. Tomi Engdahl says:

    Google Earth Pro drops $399 subscription, now available for free
    http://www.slashgear.com/google-earth-pro-drops-399-subscription-now-available-for-free-31366926/

    Google has recently revealed that it is dropping the paid subscription from Google Earth Pro, a more robust version of its Google Earth software. The Pro service will now be available for free, previously costing $399 per year. While the standard version of Google Earth has often been more than enough for casual map and globe users, the Pro version has seen use among professionals from scientists to businesses, who have been able to take advantage of advanced features previously not available for free.

    Among the main differences between Google Earth and its Pro sibling are high-resolution imagery and automated geographic location.

    The imagery found in Google Earth is the same between the two versions; however, Pro has features such as animated movie creation and area measurements that are targeted at businesses.

    Reply
  47. Tomi Engdahl says:

    Worldwide tablet shipments fall for the first time – down by 12% in Q4 2014
    - Notebook shipments remain flat as the total PC market declines 6%
    http://www.canalys.com/newsroom/worldwide-tablet-shipments-fall-first-time-%E2%80%93-down-12-q4-2014

    Reply
  48. Tomi Engdahl says:

    Sony Sells Off Sony Online Entertainment
    http://games.slashdot.org/story/15/02/02/1915244/sony-sells-off-sony-online-entertainment

    Sony Online Entertainment is to become Daybreak Game Company and turn its focus to multi-platform gaming. The company has been acquired by Columbus Nova and is now an indie studio.

    SOE acquired, becomes Daybreak Game Company
    http://www.gamesindustry.biz/articles/2015-02-02-soe-acquired-becomes-daybreak-game-company

    Sony Online Entertainment titles headed to Xbox and mobile [UPDATE: SOE just wasn't "particularly strategic" for Sony, say analysts]

    Reply
  49. Tomi Engdahl says:

    Obama’s budget packs HUGE tax breaks for poor widdle tech giants
    Only the little people pay taxes – you don’t get rich cutting checks to the IRS
    http://www.theregister.co.uk/2015/02/03/obama_submits_budget_with_masssive_tax_break_for_tech_firms/

    President Obama’s budget for 2016 includes a whopping tax break that US tech giants have been demanding – because it will let them bring trillions of dollars held in offshore accounts to America without running up huge bills.

    Under today’s tax code, companies that earn money overseas must pay 35 per cent to Uncle Sam if they want to bring the money into the US and spend it. As a result there is an estimated $2tn sitting in bank accounts that firms are unwilling to bring home and pay tax on.

    In his budget to Congress, Obama proposes that the tax on foreign income be reduced to 19 per cent to encourage repatriation of funds. In addition he wants a special “one-off” tax holiday where firms would pay 14 per cent to the IRS.

    Reply
  50. Tomi Engdahl says:

    Rumours of on-premises software’s demise greatly exaggerated
    Microsoft commits to another version of SharePoint for your bit barn
    http://www.theregister.co.uk/2015/02/03/rumours_of_onpremises_softwares_demise_greatly_exaggerated/

    Microsoft has let it be known that it will create another version of SharePoint, and release a version of it for on-premises operations.

    “We know that the move to cloud doesn’t happen all at once,” Microsoft writes, adding “While we’ve seen growing demand for SharePoint Online, we recognize that our customers have a range of requirements that make maintaining existing SharePoint Server deployments the right decision for some.”

    There’s also a statement of intention that this will all be delivered in a hybrid model, a non-surprise of a position given Office365 is herding productivity suite users to the cloud.

    It’s no surprise that Microsoft’s persisting with on-premises SharePoint, as many users have lots of data – often large files – managed by the application. Going cloud-only would likely make for an inferior user experience for those sucking down 20MB files a few times a day.

    Reply
