Computer trends for 2015

Here comes my long list of computer technology trends for 2015:

Digitalisation is coming to change all business sectors, and our daily work, even more than before. Digitalisation also changes the IT sector: traditional software packages are moving rapidly into the cloud. The need to own or rent your own IT infrastructure is dramatically reduced. Automated configuration and monitoring become truly possible. The workload of software implementation projects will shrink significantly, as software needs less adjustment. Traditional IT outsourcing is definitely threatened. Security management is one of the key areas of change, as security threats increasingly come from the digital world. For the IT sector, digitalisation simply means: “cheaper and better.”

The phrase “Communications Transforming Business” is becoming the new normal. The pace of change in enterprise communications and collaboration is very fast. A new set of capabilities, empowered by the combination of Mobility, the Cloud, Video, software architectures and Unified Communications, is changing expectations for what IT can deliver.

Global Citizenship: Technology Is Rapidly Dissolving National Borders. Besides your passport, what really defines your nationality these days? Is it where you live? Where you work? The language you speak? The currency you use? If it is, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Increasingly, technological developments will allow us to live and work almost anywhere on the planet… (and even beyond). In my mind, a borderless world will be a more creative, lucrative, healthy, and frankly, exciting one. Especially for entrepreneurs.

The traditional enterprise workflow is ripe for huge change as the focus moves away from working in a single context on a single device to the workflow being portable and contextual. InfoWorld’s executive editor, Galen Gruman, has coined a phrase for this: “liquid computing.” The promised increase in productivity is stunning, but the loss of control over data will cross an alarming threshold for many IT professionals.

Mobile will be used more and more. Currently, 49 percent of businesses across North America have adopted between one and ten mobile applications, indicating a significant acceptance of these solutions. Embracing mobility, when properly leveraged, promises to increase visibility and responsiveness in the supply chain. Increased employee productivity and business process efficiencies are seen as the key business impacts.

The Internet of things is a big, confusing field waiting to explode.  Answer a call or go to a conference these days, and someone is likely trying to sell you on the concept of the Internet of things. However, the Internet of things doesn’t necessarily involve the Internet, and sometimes things aren’t actually on it, either.

The next IT revolution will come from an emerging confluence of liquid computing plus the Internet of things. These two trends are connected — or should connect, at least. If we are to trust the consultants, we are at a sweet spot for significant change in computing that all companies and users should look forward to.

Cloud will be talked about a lot and taken more into use. Cloud is the next generation of the supply chain for IT. A global survey of executives predicted a growing shift towards third-party providers to supplement internal capabilities with external resources. CIOs are expected to adopt a more service-centric enterprise IT model. Global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion in 2014 (up 20% from $145.2 billion in 2013), and growth will continue to be fast (“By 2017, enterprise spending on the cloud will amount to a projected $235.1 billion, triple the $78.2 billion in 2011”).

The rapid growth in mobile, big data, and cloud technologies has profoundly changed market dynamics in every industry, driving the convergence of the digital and physical worlds, and changing customer behavior. It’s an evolution that IT organizations struggle to keep up with. To succeed in this situation, you need to combine traditional IT with agile and web-scale innovation. There is value in both the back-end operational systems and the fast-changing world of user engagement. You are now effectively operating two-speed IT (bimodal IT, two-speed IT, or traditional IT/agile IT). You need a new API-centric layer in the enterprise stack, one that enables two-speed IT.
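To make that API-centric layer concrete, here is a minimal sketch in Python (standard library only) of a thin REST facade through which fast-moving web and mobile front ends can consume data from a slow-moving system of record. The fetch_order_from_legacy adapter and the /orders route are hypothetical illustrations, not any particular product’s API.

```python
# Minimal sketch of an API facade in front of a "slow IT" back end.
# fetch_order_from_legacy() is a hypothetical adapter; in reality it
# might call a mainframe, an ERP system, or a SOAP service.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def fetch_order_from_legacy(order_id):
    # Stand-in for a call into the back-end system of record.
    return {"order_id": order_id, "status": "shipped"}

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expose /orders/<id> as JSON for fast-moving front ends.
        if self.path.startswith("/orders/"):
            order = fetch_order_from_legacy(self.path.rsplit("/", 1)[-1])
            body = json.dumps(order).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), ApiHandler).serve_forever()
```

The point of the layer is decoupling: the agile side iterates on clients against a stable JSON contract, while the traditional side evolves at its own pace behind the adapter.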

As Robots Grow Smarter, American Workers Struggle to Keep Up. Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work. Automation is not only replacing manufacturing jobs, it is displacing knowledge and service workers too.

In many countries the IT recruitment market is flying, having picked up to a post-recession high. Employers beware: after years of relative inactivity, job seekers are gearing up for change. Economic improvements and an increase in business confidence have led to a burgeoning jobs market and an epidemic of itchy feet.

Hopefully the IT department is increasingly being seen as a profit centre rather than a cost centre, with IT budgets commonly split between keeping the lights on and spending on innovation and revenue-generating projects. Historically IT was about keeping the infrastructure running, and there was no real understanding outside of that, but the days of IT being locked in a basement are gradually changing. CIOs and CMOs must work more closely to increase focus on customers next year or risk losing market share, Forrester Research has warned.

Good questions to ask: Where do you see the corporate IT department in five years’ time? With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT? What IT process or activity is the most important in creating superior user experiences to boost user/customer satisfaction?

 

Windows Server 2003 goes end of life in summer 2015 (July 14, 2015). There are millions of servers globally still running the 13-year-old OS, with one in five customers forecast to miss the 14 July deadline when Microsoft turns off extended support. A few months back there were estimated to be 2.7 million WS2003 servers in operation in Europe. This will keep system administrators busy, because there is just around half a year left, and an update to Windows Server 2008 or Windows Server 2012 may prove difficult. Microsoft and support companies do not seem interested in continuing Windows Server 2003 support, so for those who need it, custom pricing can be “incredibly expensive”. At this point it seems that many organizations have a desire for a new architecture, and one option they are considering is moving the servers to the cloud.

Windows 10 is coming to PCs and mobile devices. Just a few months back, Microsoft unveiled its new operating system, Windows 10. The new Windows 10 OS is designed to run across a wide range of machines, from tiny “internet of things” devices in business offices, to phones, tablets, laptops and desktops, to computer servers. Windows 10 will have exactly the same requirements as Windows 8.1 (the same minimum PC requirements that have existed since 2006: a 1GHz, 32-bit chip with just 1GB of RAM). A technical preview is available. Microsoft says to expect awesome things from Windows 10 in January. Microsoft will share more about the Windows 10 ‘consumer experience’ at an event on January 21 in Redmond and is expected to show a Windows 10 mobile SKU at the event.

Microsoft is going to monetize Windows differently than before. Windows has made headway in the market for low-end laptops and tablets this year because Microsoft reduced the price it charges device manufacturers, charging no royalty on devices with screens of 9 inches or less. That has resulted in a new wave of Windows notebooks in the $200 price range and tablets in the $99 price range. The long-term success of the strategy against Android tablets and Chromebooks remains to be seen.

Microsoft is pushing Universal Apps concept. Microsoft has announced Universal Windows Apps, allowing a single app to run across Windows 8.1 and Windows Phone 8.1 for the first time, with additional support for Xbox coming. Microsoft promotes a unified Windows Store for all Windows devices. Windows Phone Store and Windows Store would be unified with the release of Windows 10.

Under new CEO Satya Nadella, Microsoft realizes that, in the modern world, its software must run on more than just Windows. Microsoft has already revealed Microsoft Office programs for Apple’s iPad and iPhone. It also has an email client for both the iOS and Android mobile operating systems.

With Mozilla Firefox and Google Chrome grabbing so much of the desktop market—and Apple Safari, Google Chrome, and Google’s Android browser dominating the mobile market—Internet Explorer is no longer the force it once was. The article “Microsoft May Soon Replace Internet Explorer With a New Web Browser” says that Microsoft’s Windows 10 operating system will debut with an entirely new web browser code-named Spartan. This new browser is a departure from Internet Explorer, the Microsoft browser whose relevance has waned in recent years.

SSD capacity has always lagged well behind hard disk drives (hard disks are in 6TB and 8TB territory while SSDs are primarily 256GB to 512GB). Intel and Micron will try to kill the hard drive with new flash technologies. Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Within the next two years, Intel promises 10TB+ SSDs thanks to 3D vertical NAND flash memory. SSD interfaces are also evolving beyond traditional hard disk interfaces. PCIe flash and NVDIMMs will make their way into shared storage devices more in 2015. The ULLtraDIMM™ SSD connects flash storage to the memory channel via standard DIMM slots, in order to close the gap between storage devices and system memory (less than five microseconds write latency at the DIMM level).

Hard disks will still be made in large quantities in 2015. It seems that NAND is not taking over the data centre immediately. The big problem is $/GB. Estimates of shipped disk and SSD capacity out to 2018 show disk growing faster than flash. The world’s ability to make and ship SSDs is falling behind its ability to make and ship disk drives – for SSD capacity to match disk by 2018 we would need roughly eight times more flash foundry capacity than we have. New disk technologies such as shingling, TDMR and HAMR are upping areal density per platter and bringing down cost/GB faster than NAND technology can. At present, solid-state drives with extreme capacities are very expensive. I expect that in 2015 SSD prices will still be so much higher than hard disk prices that anyone who needs to store large amounts of data will want to consider SSD + hard disk hybrid storage systems.
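To illustrate the principle behind such hybrid systems, here is a toy Python sketch of hot/cold tiering: the most frequently accessed blocks are promoted to a small, fast SSD tier while everything else stays on cheap disk. The capacity figure and access pattern are made up for the example; real arrays use far more sophisticated policies.

```python
# Toy hot/cold tiering policy for an SSD + HDD hybrid store.
# SSD_CAPACITY_BLOCKS is deliberately tiny so the effect is visible.
from collections import Counter

SSD_CAPACITY_BLOCKS = 3
access_counts = Counter()
ssd_tier, hdd_tier = set(), set()

def read_block(block_id):
    """Record an access and report which tier served it."""
    access_counts[block_id] += 1
    tier = "SSD" if block_id in ssd_tier else "HDD"
    rebalance()
    return tier

def rebalance():
    # Keep the hottest blocks on flash, demote the rest to disk.
    hot = {b for b, _ in access_counts.most_common(SSD_CAPACITY_BLOCKS)}
    ssd_tier.clear()
    ssd_tier.update(hot)
    hdd_tier.clear()
    hdd_tier.update(set(access_counts) - hot)

for b in [1, 2, 3, 1, 1, 2, 4, 5, 1, 2]:
    read_block(b)
print("SSD tier:", sorted(ssd_tier))  # the frequently read blocks
print("HDD tier:", sorted(hdd_tier))  # the cold ones
```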

PC sales, and even laptop sales, are down, and manufacturers are pulling out of the market. The future is all about the device. We have entered the post-PC era so deeply that even the tablet market seems to be saturating, as most people who want one already have one. The crazy years of huge tablet sales growth are over. Tablet shipment growth in 2014 was already quite low (7.2% in 2014, to 235.7M units). There are no strong reasons for either growth or decline in the tablet market in 2015, so I expect it to be stable. IDC expects the iPad to see its first-ever decline, and I expect that too, because the market seems increasingly taken by Android tablets that have turned out to be “good enough”. Wearables, Bitcoin or messaging may underpin the next consumer computing epoch, after the PC, internet, and mobile.

There will be new tiny PC form factors coming. Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that”. Intel likens the compute stick to similar thumb PCs that plug into an HDMI port and are offered by PC makers with the Android OS and an ARM processor (for example the Wyse Cloud Connect and many cheap Android sticks). Such devices typically don’t have internal storage, but can be used to access files and services in the cloud. Intel expects the stick-sized PC market to grow to tens of millions of devices.

We have entered the post-Microsoft, post-PC programming era: the portable REVOLUTION. Tablets and smartphones are fine for consuming information: a great way to browse the web, check email, stay in touch with friends, and so on. But what does a post-PC world mean for creating things? If you’re writing platform-specific mobile apps in Objective-C or Java then no, the iPad alone is not going to cut it. You’ll need some kind of iPad-to-server setup in which your iPad becomes a mythical thin client for the development environment running on your PC or in the cloud. If, however, you’re working with scripting languages (such as Python and Ruby) or building web-based applications, the iPad or another tablet could be a usable development environment. At least it is worth testing.

You need to prepare to learn new languages that are good for specific tasks. Attack of the one-letter programming languages: from D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following. Watch out! The coder in the next cubicle might have been bitten and infected with a crazy-eyed obsession with a programming language that is not Java and goes by a mysterious one-letter name. Each offers compelling ideas that could do the trick in solving a particular problem you need fixed.

HTML5’s “Dirty Little Secret”: It’s Already Everywhere, Even In Mobile. Just look under the hood. “The dirty little secret of native [app] development is that huge swaths of the UIs we interact with every day are powered by Web technologies under the hood.” When people say Web technology lags behind native development, what they’re really talking about is the distribution model. It’s not that the pace of innovation on the Web is slower, it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Vine is a great example of a modern JavaScript app. It’s lightning fast on desktop and on mobile, and shares the same codebase for ease of maintenance.

Docker, meet hype. Hype, meet Docker. Docker: sorry, you’re just going to have to learn about it. Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds. Docker containers are supported by very many Linux systems. And it is not just Linux anymore: Docker’s app containers are coming to Windows Server, says Microsoft. What containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other. What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications.
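As a taste of that workflow, here is a minimal sketch that drives the Docker command-line client from Python to pull an image and start two isolated instances of it. It assumes the docker CLI and daemon are installed; the nginx image and the port numbers are just illustrative choices.

```python
# Minimal sketch: pull an image and run two sandboxed containers that
# share the host kernel but cannot interfere with each other.
# Requires the docker CLI and a running Docker daemon.
import subprocess

def docker(*args):
    print("$ docker", " ".join(args))
    subprocess.check_call(["docker"] + list(args))

docker("pull", "nginx")                                          # fetch the packaged app
docker("run", "-d", "--name", "web1", "-p", "8080:80", "nginx")  # first instance
docker("run", "-d", "--name", "web2", "-p", "8081:80", "nginx")  # second, isolated instance
docker("ps")                                                     # both show as running
```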

Domestic software is on the rise in China. China is planning to purge foreign technology and replace it with homegrown suppliers: China is aiming to purge most foreign technology from banks, the military, state-owned enterprises and key government agencies by 2020, stepping up efforts to shift to Chinese suppliers, according to people familiar with the effort. In tests, workers have replaced Microsoft Corp.’s Windows with a homegrown operating system called NeoKylin (a homegrown desktop OS; the original Kylin was FreeBSD-based, while NeoKylin builds on Linux). Dell commercial PCs will preinstall NeoKylin in China. The plan is driven by national security concerns and marks an increasingly determined move away from foreign suppliers. There are cases of replacing foreign products at all layers, from applications and middleware down to infrastructure software and hardware. Foreign suppliers may be able to avoid replacement if they share their core technology or give China’s security inspectors access to their products. The campaign could have lasting consequences for U.S. companies including Cisco Systems Inc. (CSCO), International Business Machines Corp. (IBM), Intel Corp. (INTC) and Hewlett-Packard Co. A key government motivation is to bring China up from low-end manufacturing to the high end.

 

Data center markets will grow. MarketsandMarkets forecasts the data center rack server market to grow from $22.01 billion in 2014 to $40.25 billion by 2019, at a compound annual growth rate (CAGR) of 7.17%. North America (NA) is expected to be the largest region for the market’s growth in terms of revenues generated, but Asia-Pacific (APAC) is also expected to emerge as a high-growth market.

The rising need for virtualized data centers and incessantly increasing data traffic are considered strong drivers for the global data center automation market. The SDDC comprises software-defined storage (SDS), software-defined networking (SDN) and software-defined server/compute, wherein all three components are empowered by specialized controllers, which abstract the control plane from the underlying physical equipment. These controllers virtualize the network, server and storage capabilities of a data center, thereby giving better visibility into data traffic routing and server utilization.

New software-defined networking apps will be delivered in 2015. And so will software-defined storage, and software-defined almost anything (I am waiting for the day we see software-defined software). Customers are ready to move away from vendor-driven proprietary systems that are overly complex and impede their ability to rapidly respond to changing business requirements.

Large data center operators will be using more and more of their own custom hardware instead of standard PCs and servers from traditional computer manufacturers. Intel is betting on (customized) commodity chips for cloud computing, and it expects that over half the chips it sells to public clouds in 2015 will have custom designs. The biggest public clouds (Amazon Web Services, Google Compute, Microsoft Azure), other big players (like Facebook or China’s Baidu) and other public clouds (like Twitter and eBay) all have huge data centers that they want to run optimally. Companies like A.W.S. “are running a million servers, so floor space, power, cooling, people — you want to optimize everything”. That is why they want specialized chips. Customers are willing to pay a little more for the special run of chips. While most of Intel’s chips still go into PCs, about one-quarter of Intel’s revenue, and a much bigger share of its profits, come from semiconductors for data centers. In the first nine months of 2014, the average selling price of PC chips fell 4 percent, but the average price of data center chips was up 10 percent.

We have seen GPU acceleration taken into wider use: special servers and supercomputer systems have long been accelerated by moving calculations to graphics processors. The next step in acceleration will be adding FPGAs to accelerate x86 servers. FPGAs provide a unique combination of highly parallel custom computation, relatively low manufacturing/engineering costs, and low power requirements. FPGA circuits can provide much more performance at much lower power consumption, but programming them has traditionally been time consuming. This can change with the introduction of new tools (the next step from techniques learned in GPU acceleration). Xilinx has developed its SDAccel tools to develop algorithms in C, C++ and OpenCL and translate them to FPGAs easily. IBM and Xilinx have already demoed FPGA-accelerated systems. Microsoft is also doing research on accelerating applications with FPGAs.
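To give a feel for the programming model that tools such as SDAccel build on, here is a minimal OpenCL vector-addition sketch using the PyOpenCL bindings (pip install pyopencl). It runs on whatever OpenCL device is available, GPU or CPU, and on an FPGA only if a vendor platform exposes one, so treat the FPGA angle as conceptual; the kernel itself is ordinary OpenCL C.

```python
# Vector addition on whatever OpenCL device create_some_context() finds.
import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()   # may prompt for a device choice
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel is standard OpenCL C; FPGA tool flows compile this into
# hardware pipelines instead of GPU threads.
prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)   # verify against the NumPy result
```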


If there is one enduring trend in memory design from 2014 that will carry through to next year, it’s the continued demand for higher performance. The trend toward high performance is never going away. At the same time, the goal is to keep costs down, especially when it comes to consumer applications using DDR4 and mobile devices using LPDDR4. LPDDR4 will gain a strong foothold in 2015, and not just to address mobile computing demands. The reality is that LPDDR3, or even DDR3 for that matter, will be around for the foreseeable future (as the lowest-cost DRAM, whatever that may be). Designers are looking for subsystems that can easily accommodate DDR3 in the immediate future, but will also be able to support DDR4 when it becomes cost-effective or makes more sense.

Universal Memory for Instant-On Computing will be talked about. New memory technologies promise to be strong contenders for replacing the entire memory hierarchy for instant-on operation in computers. HP is working on memristor memories that are promised to be akin to RAM but can hold data without power. The memristor is also denser than DRAM, the current RAM technology used for main memory. According to HP it is, in fact, 64 to 128 times denser. You could very well have 512GB of memristor RAM in the near future. HP has what it calls “The Machine”, practically a researcher’s plaything for experimenting with emerging computer technologies. Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system in 2015 (Linux++, in June 2015). HP must still make significant progress in both software and hardware to make its new computer a reality. A working prototype of The Machine should be ready by 2016.

Chip designs that enable everything from a 6 Gbit/s smartphone interface to the world’s smallest SRAM cell will be described at the International Solid State Circuits Conference (ISSCC) in February 2015. Intel will describe a Xeon processor packing 5.56 billion transistors, and AMD will disclose an integrated processor sporting a new x86 core, according to a just-released preview of the event. The annual ISSCC covers the waterfront of chip designs that enable faster speeds, longer battery life, more performance, more memory, and interesting new capabilities. There will be many presentations on first designs made in 16 and 14 nm FinFET processes at IBM, Samsung, and TSMC.

 

1,403 Comments

  1. Tomi Engdahl says:

    UPDATE 3-HP to buy Wi-Fi gear maker Aruba Networks for $2.7 bln
    http://www.reuters.com/article/2015/03/02/aruba-networks-ma-hp-idUSL4N0W44QO20150302

    Hewlett-Packard Co said it would buy Wi-Fi network gear maker Aruba Networks Inc for about $2.7 billion, the biggest deal for the world’s No. 2 PC maker since its botched acquisition of Britain’s Autonomy Plc in 2011.

    HP has had a dismal record for big acquisitions

    HP, which has struggled to adapt to mobile and online computing, plans to separate its computer and printer businesses from its corporate hardware and services operations this year.

  2. Tomi Engdahl says:

    IT Leadership: Signs You’re a Micromanager (And How to Stop)
    http://www.cio.com/article/2889159/leadership-management/it-leadership-signs-youre-a-micromanager-and-how-to-stop.html

    Micromanagement may seem harmless, but it’s sabotaging your teams, your productivity and morale from within, and stifling your business’s ability to grow. Here’s how to tell if you’re a micromanager, and some steps you can take to overcome this fatal flaw.

    Are you never quite satisfied with your team’s results? Do you avoid delegating at all costs, often taking on work that’s far below your experience and talent level just because you’re certain no one else can do it as well as you can? Are you constantly demanding status updates, progress reports and check-ins? It’s time to face the facts: You’re a micromanager.

    Micromanagement might not appear to be a big deal — you’re just trying to make sure tasks and projects are on time, done right and in ways that will benefit the business, right? — but the truth is, it’s incredibly damaging to every aspect of a business, says Stu Coleman, partner and senior managing director at WinterWyman Financial Contracting and a self-proclaimed recovering micromanager.

    Micromanagement Stunts Growth, Erodes Morale and Slows Productivity

    “At its core, micromanagement chokes off growth. You can’t micromanage a team, or an entire business and expect to grow; you have to support, groom and grow leadership within your organization that can take on roles and responsibilities that you used to perform so you can focus on strategic issues,” says Coleman.

    Micromanagement also negatively impacts employee morale, engagement and productivity, says Bob Hewes, senior partner with oversight for leadership development, coaching and knowledge management at Camden Consulting, and it can lead to high turnover rates and even recruiting and talent retention problems.

    “Micromanagement is almost never benign. It’s a destructive force that goes far beyond a ‘management style;’ it really gets at morale, at engagement and you’ll find that micromanagers often see high turnover of their direct reports,”

  3. Tomi Engdahl says:

    Unity 5 is here!
    http://unity3d.com/

    Unity 5 is ready for download! Find out more about how you can do it all with our latest release: Create outstanding games, connect to your true fans, and achieve greater success.

    The engine of your dreams
    Unity 5 personal edition: All engine features & all platforms
    http://unity3d.com/unity/personal-edition

    Aspiring developers everywhere: Get ready for Unity 5 Personal Edition, the new free version of the best development platform for creating multiplatform 2D/3D games and interactive experiences.

    Unity 5 Personal Edition includes all engine features: everything that’s new in Unity 5, all the top-tier features from previous releases that are loved by the pros, and deployment to all platforms (with the new Personal Edition splash screen).

  5. Tomi Engdahl says:

    Game developers favor Steam and PC over other platforms
    http://venturebeat.com/2015/03/02/game-developers-are-favoring-steam-and-pc-platforms/

    About 75 percent of game developers say that Steam and the PC platform are “very important” for the future of the game industry, according to a survey by the International Game Developers Association.

    The survey of 2,200 developers shows that many view the PC as the top platform for the next five years. And 25 percent of developers said that “proprietary platforms” such as virtual reality are also very important, said Kate Edwards, the executive director of the International Game Developers Association, in an interview with GamesBeat.

    Those results suggest a good outlook for PC games, Steam Machines, and PC-related technologies such as virtual reality, Edwards said.

  6. Tomi Engdahl says:

    Sony Targets 2016 For ‘Project Morpheus’ VR on PS4
    http://www.wired.com/2015/03/sony-ps4-project-morpheus-release-date/

    Sony unveiled a new, much-enhanced prototype of its Project Morpheus virtual reality hardware for PlayStation 4 at the Game Developers Conference in San Francisco on Tuesday, saying that it intended to release the peripheral in the first half of 2016.

  7. Tomi Engdahl says:

    AMD’s LiquidVR Announced: AMD Gets Expanded VR Headset Functionality
    by Ryan Smith on March 3, 2015 8:30 PM EST
    http://www.anandtech.com/show/9043/amds-liquidvr-announced-amd-gets-expanded-vr-headset-functionality

    2015 is going to be known as the year of virtual reality at GDC. Before the expo floor has even opened VR pitches, announcements, and press conference invitations are coming fast and furious. Though Oculus is still the favored child in the PC space, a number of other companies are either trying to pitch their own headsets, or alternatively are working on the middleware portion of the equation – bridging the gap between current systems and the VR hardware. Recent developments in the field have clearly sparked a lot of consumer and developer interest in the idea, and now we are in the rapid expansion phase of technological growth.

  8. Tomi Engdahl says:

    Next Generation OpenGL Becomes Vulkan: Additional Details Released
    by Ryan Smith on March 3, 2015 3:02 AM EST
    http://www.anandtech.com/show/9038/next-generation-opengl-becomes-vulkan-additional-details-released

    Continuing this week’s GDC-2015 fueled blitz of graphics API news releases, we have Khronos, the industry consortium behind OpenGL, OpenCL, and other cross-platform compute and graphics APIs.

    Back in August of 2014 Khronos unveiled their own foray into low-level graphics APIs, announcing the Next Generation OpenGL Initiative (glNext). Designed around similar goals as Mantle, DirectX 12, and Metal, glNext would bring a low-level graphics API to the Khronos ecosystem, and in the process making it the first low-level cross-platform API. 2014’s unveiling was a call for participation, and now at GDC Khronos is announcing additional details on the API.

    First and foremost glNext has a name: Vulkan. In creating the API Khronos has made a clean break from OpenGL – something that game industry developers have wanted to do since OpenGL 3 was in development – and as a result they are also making a clean break on the name as well so that it’s clear to users and developers alike that this is not OpenGL.

    not only does Vulkan not bring with it the compatibility baggage of the complete history of OpenGL, but like other low-level APIs it will also have a higher skill requirement than high-level OpenGL.

  9. Tomi Engdahl says:

    Khronos Announces OpenCL 2.1: C++ Comes to OpenCL
    by Ryan Smith on March 3, 2015 3:01 AM EST
    http://www.anandtech.com/show/9039/khronos-announces-opencl-21-c-comes-to-opencl

    Alongside today’s announcements of Vulkan and SPIR-V, Khronos is also using the occasion of the 2015 Game Developers Conference to announce the next iteration of OpenCL, OpenCL 2.1.

    OpenCL 2.1 marks two important milestones for OpenCL. First and foremost, OpenCL 2.1 marks the point where OpenCL (compute) and graphics (Vulkan) come together under a single roof in the form of SPIR-V. With SPIR-V now in place, developers can write graphics or compute code using SPIR, forming a common language frontend that will allow Vulkan and OpenCL to accept many of the same high level languages.

    But more significant about OpenCL 2.1 is that after several years of proposals and development, OpenCL is now gaining support for an official C++ dialect, extending the usability of OpenCL into even higher-level languages. Having originally launched using the OpenCL C dialect in 2008, there was almost immediate demand for the ability to write OpenCL code in C++, something that has taken the hardware and software some time to catch up to. And though C++ is not new to GPU computing – NVIDIA’s proprietary CUDA has supported it for some time – this marks the introduction of C++ to the cross-platform OpenCL API.

  10. Tomi Engdahl says:

    CONFIRMED: Tiny Windows Server is on the way
    Cloud OS at the bottom, server in the middle and a dedicated client on top
    http://www.theregister.co.uk/2015/03/04/confirmed_tiny_windows_server_is_on_the_way/

    Microsoft’s plans to decompose Windows Server into a far lighter and leaner beast are real. On Monday we reported on the emergence of a Microsoft slide deck outlining a “Nano” version of Windows Server aimed at the cloud.

    Microsoft’s now pointed to an earlier statement about just what it plans.

    “We are going to have a cloud-optimised server,” he says. “Windows Server will be deeply refactored for a cloud scenario [with] just the components required to do that and nothing else.

    “Apps will target the existing set of APIs or the second set of cloud APIs.”

    Snover also says Microsoft “will be clear about a client vs a server” because “we have been fuzzy on this.” Windows Server Next will therefore offer the chance to install a designated client.

  11. Tomi Engdahl says:

    AMD Enters Virtual Reality Fray With LiquidVR SDK At GDC
    The Game Developer Conference in San Francisco is underway and virtual reality (VR) is getting plenty of love from big names in gaming. One of them is AMD, which announced today its LiquidVR SDK that will help developers customize VR content for AMD hardware.

    Read more at http://hothardware.com/news/amd-tackles-virtual-reality-with-liquidvr-sdk-at-gdc#pwfHlmytciM5uvbG.99

  12. Tomi Engdahl says:

    Bringing Hadoop in from the cold: MapR is throwing adoption barriers on the fire
    V4.1 has three shortcuts to get clusters up and running faster
    http://www.theregister.co.uk/2015/03/04/bringing_hadoop_in_from_the_cold/

    The pace of Hadoop development is relentless. Hortonworks recently had its IPO. Distribution owners strive to get their versions deployed faster. Alliances are forming fast. MapR is extending its own reach by adding features so the software can be adopted more easily.

    MapR was founded in 2009 to produce an enterprise-grade Hadoop distribution, and has been aggressively funded and grown: it has raised US$174m from five rounds, plus US$30m to finance its debt during an US$80m series E-stage in 2014. It has some 700 customers.

    Meanwhile, Hortonworks, founded in 2011, grew at an even more breakneck pace, and raised US$248m in five rounds.

    Before MapR can IPO, it has to grow its business well beyond its 700 or so customers, and show that investors won’t be disappointed.

    To that end, the latest MapR Hadoop 4.1 distribution includes:

    MapR-DB table replication: this provides multiple active replica clusters across the world thanks to realtime asynchronous replication.
    Table replication delivers realtime disaster recovery to reduce the risk of data loss upon site-wide failure.
    MapR POSIX client gives apps running on edge nodes NFS access, with compression, parallel access, authentication and encryption supported.
    A C API for MapR-DB giving software engineers the ability to write realtime Hadoop applications.

    MapR says its active-active, cross-data-center capability means operational data can be stored and processed close to users or devices, and replicated to a central analytics cluster for larger-scale analytics on enterprise-wide data.

    Mixing Big Data with other workloads on a set of servers

    Mesosphere has devised a Data Centre OS (DCOS) for managing data center and cloud resources at scale. The DCOS core is Mesos, a distributed systems kernel that abstracts CPU, memory, storage and other compute resources, allowing developers to treat the data center as a single pool of resource.

    MapR has got together with Mesosphere to produce Myriad, a resource management framework that allows Apache YARN jobs to run alongside other applications and services in enterprise and cloud data centers.

  13. Tomi Engdahl says:

    Oracle’s piping hot new pot of Java takes out the trash (faster)
    JDK 8 Update 40 improves memory handling, JavaScript support
    http://www.theregister.co.uk/2015/03/04/java_8_update_40/

    Oracle’s latest update to the Java Development Kit doesn’t add any new language features or change any APIs, but it still includes a number of enhancements that should please Java developers and users.

    Released on Tuesday – a couple of weeks ahead of Java SE 8’s first birthday – Java Development Kit 8 Update 40 (JDK 8u40) improves performance, memory management, and JavaScript support and includes updates to the JavaFX UI framework for accessibility and OS X compatibility.

    It also fixes quite a few bugs, as usual, but none of them are security vulnerabilities this time around.

  14. Tomi Engdahl says:

    GitLab Acquires Gitorious
    http://developers.slashdot.org/story/15/03/04/002226/gitlab-acquires-gitorious

    Code repository GitLab has purchased rival service Gitorious. Gitorious users are now able to import their projects into GitLab. They must do so by the end of May, because Gitorious will shut down on June 1st.

    Code collaboration platform GitLab acquires rival Gitorious, will shut it down on June 1
    http://thenextweb.com/insider/2015/03/03/gitlab-acquires-rival-gitorious-will-shut-june-1/

    Both GitLab and Gitorious are open-source code collaboration platforms based on the Git repository management system, the same one used by their more commercialized competitors GitHub and Bitbucket. Users can host their projects on the platform’s servers or download GitLab for free and install it on their own hardware; the company makes money on additional services and support.

    “Most people use GitLab on-premises, with more than 100,000 organizations using it we estimate there are millions of on-premise end users,”

    As for Gitorious, it currently has a whopping 822,000 registered users with “only a portion” of them being active. Among the bigger projects using Gitorious are Qt, OpenSUSE and XBMC, which will join GitLab’s famous clients that include AT&T, Expedia, Stack Overflow and NASA.

    The four employees at Powow, the company behind Gitorious, won’t be joining the team of GitLab

  15. Tomi Engdahl says:

    SanDisk Ultra® microSDXC™ UHS-I Card
    World’s Highest Capacity microSD™ Card
    http://www.sandisk.com/products/memory-cards/microsd/ultra-premium-edition/

    Breakthrough capacity of 200GB** means you never have to stop shooting, saving, and sharing. This durable microSDXC™ UHS-I card can hold up to 20 hours of Full HD video** before you even have to think about moving anything to your PC. And with premium card-to-PC transfer speeds, you can transfer up to 1200 photos a minute

  16. Tomi Engdahl says:

    Eric Johnson / Re/code:
    Oculus VR Will Go ‘Full Consumer’ With Samsung by Year’s End
    http://recode.net/2015/03/04/oculus-vr-will-go-full-consumer-with-samsung-by-years-end/

    Oculus VR’s mobile virtual reality headset, the Samsung Gear VR, will get a full-on consumer push by the end of the year, CTO John Carmack said today at the Game Developers Conference.

    Carmack’s talk, titled “The Dawn of Mobile VR,” covered the current state of the Gear VR, which is aimed at developers and early adopters, and is officially dubbed an “innovator edition.” It was originally made to work only with Samsung’s Galaxy Note 4, but Oculus recently announced that it will release another innovator edition soon that works on Samsung’s Galaxy S6 and S6 Edge smartphones, too.

  17. Tomi Engdahl says:

    Marco Chiappetta / HotHardware.com News:
    Nvidia teases its next video card, the GeForce GTX Titan X, with 12GB VRAM, 8B transistors at GDC

    NVIDIA Announces Flagship GeForce GTX Titan X 12GB GPU At GDC 2015
    Read more at http://hothardware.com/news/nvidia-announced-flagship-geforce-gtx-titan-x-at-gdc-2015#77uRcvRgZPBVlf9y.99

  18. Tomi Engdahl says:

    IBM Slumps, Cisco Gains In 2014 Server Sales
    http://www.eetimes.com/document.asp?doc_id=1325905&

    IBM and HP slipped while Cisco and Lenovo gained share in the worldwide server market in 2014. That’s the upshot of Gartner’s fourth-quarter and full-year report on server shipments, revenues, and market share shifts in 2014, which was released Tuesday.

  19. Tomi Engdahl says:

    Nvidia Gives Android a Console Play
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1325887&

    Nvidia’s Shield Console marks its latest move into the systems business, pitting Android games against the likes of the Sony Playstation and Microsoft Xbox.

  20. Tomi Engdahl says:

    Using Google Cloud Pub/Sub to Connect Applications and Data Streams
    http://googlecloudplatform.blogspot.fi/2015/03/using-Google-Cloud-pubsub-to-Connect-applications-and-data-streams.html

    Many applications need to talk to other varied, and often, distributed systems reliably and in real-time. To make sure things aren’t lost in translation, you need a flexible communication model to get messages between multiple systems simultaneously.

    That’s why we’re making the beta release of Google Cloud Pub/Sub available today, as a way to connect applications and services, whether they’re hosted on Google Cloud Platform or on-premises. The Google Cloud Pub/Sub API provides:

    Scale: offering all customers, by default, up to 10,000 topics and 10,000 messages per second

    Global deployment: dedicated resources in every Google Cloud Platform region enhance availability without increasing latency

    Performance: sub-second notification even when tested at over 1 million messages per second

    https://cloud.google.com/pubsub/
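    As a rough sketch of what publishing a message over the REST API looks like: the code below uses the v1-style endpoint, which may differ from the beta’s exact path, and PROJECT, TOPIC and TOKEN are placeholders you must supply (TOKEN is an OAuth2 access token).

    ```python
    # Rough sketch of publishing one message to a Pub/Sub topic over REST.
    # PROJECT, TOPIC and TOKEN are placeholders; the v1 endpoint shown may
    # differ from the beta API's versioned path.
    import base64
    import json
    import urllib.request

    PROJECT, TOPIC, TOKEN = "my-project", "my-topic", "<oauth2-access-token>"

    url = ("https://pubsub.googleapis.com/v1/projects/%s/topics/%s:publish"
           % (PROJECT, TOPIC))
    body = json.dumps({
        "messages": [{"data": base64.b64encode(b"hello pub/sub").decode("ascii")}]
    }).encode("utf-8")

    req = urllib.request.Request(url, data=body, headers={
        "Authorization": "Bearer " + TOKEN,
        "Content-Type": "application/json",
    })
    print(urllib.request.urlopen(req).read())  # response lists the message IDs
    ```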

  21. Tomi Engdahl says:

    Solution to Nausea Puts Virtual Reality Closer to Market
    http://www.nytimes.com/2015/03/05/technology/solution-to-nausea-puts-virtual-reality-closer-to-market.html?_r=0

    Few technologies have generated more attention than virtual reality, which promises to immerse people in 3-D games and video.

    Yet for the last couple of years, the companies building virtual reality headsets have begged for patience from content creators and the public. The companies’ biggest concern: that unpolished virtual reality products could make people physically sick.

    The public’s wait for virtual reality is nearing an end. In recent days, several of the most prominent companies making headsets offered rough timetables for consumer versions of their products, ending the guessing game about when virtual reality would get its first real test.

    The most closely watched of those companies, Oculus VR, which is owned by Facebook, said it expected to begin widely selling a product before the end of the year. Oculus has teamed up with Samsung on the product, a headset that uses a mobile phone as a screen.

    Sony said this week that it planned to ship its own virtual reality headset for the PlayStation 4 console, known as Project Morpheus, during the first half of next year. And Valve, an influential game maker and online game retailer, said HTC would start selling a virtual reality headset designed by the two companies before the end of this year. The device will be called Vive.

    It is well known that virtual reality headsets can cause motion sickness and eyestrain in people who use them, though the severity varies by person, the type of game being played and the length of time a game is played.

    Oculus and other companies are still making technical modifications to their products to avoid those effects. They are encouraging game developers to avoid creating virtual environments that tend to cause nausea, like roller coaster rides.

    a “nightmare scenario” that has worried him and other Oculus executives. “People like the demo, they take it home, and they start throwing up,” he said.

    “The fear is if a really bad V.R. product comes out, it could send the industry back to the ’90s,” he said.

    In that era, virtual reality headsets flopped, disappointing investors and consumers. “It left a huge, smoking crater in the landscape,”

    “It’s going to be a little bit rocky,” Mr. Sweeney said about the development of virtual reality. “Some people are going to ship products that won’t be good. But there is so much momentum behind this that V.R. is an inevitability.”

  22. Tomi Engdahl says:

    Data Virtualization is here again

    Sometimes even good ideas need time to mature. Data virtualization dates back to the previous century, but only now is it properly finding its place in business analytics.

    Data virtualization was developed to solve a problem that predates it. The traditional way of working has been to copy data from the operational systems into a separate database for analysis, the data warehouse, which can be queried at will without touching the production databases.

    This procedure has many advantages: data from a variety of systems can be collected, standardized and organized in the most efficient manner for analysis needs, and running reports does not load the production systems. The data warehouse can also accumulate historical data, which makes it easy to run time series over longer periods.

    Analytic tools build on the data warehouse technology, which is usually an SQL database. To load the warehouse with data collected from business applications, a tool class of its own has been developed: ETL software (short for extract, transform and load), which is run as analysis needs dictate, for example on weekends or at night.

    So where is the problem? Building a data warehouse that serves analytics well is not a simple thing to do. Since there are several source systems, the work requires experts on the various applications and their data content.

    It is not unusual for a data warehouse construction or renovation to be a one- or two-year project.

    It would be convenient to have a logical intermediate layer in which the repository’s data collection and aggregation rules could be defined, and which would apply them to data queries and report calls made directly against the source systems. This is precisely the idea of data virtualization: to offer one and the same view of, and access to, the information, regardless of where and in what form it is stored.
    You could also call it integration through virtualization.

    Instead of applications reading data from each individual source application and database, with their individual conventions and data formats, the details are hidden behind a common virtualized SQL interface.
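    As a toy illustration of the idea (not any vendor’s implementation), here is a sketch using Python’s standard-library sqlite3 module: two stand-in “source” databases are attached to one connection and exposed through a single SQL view, so a report can query them without knowing where each table lives. The database, table and column names are made up for the example.

    ```python
    # Toy sketch of data virtualization: two separate "source" databases
    # exposed through one SQL view. All names here are illustrative.
    import sqlite3

    # Create two stand-in operational systems.
    for name, rows in [("crm.db", [("alice", "fi")]), ("shop.db", [("bob", "se")])]:
        db = sqlite3.connect(name)
        db.execute("CREATE TABLE IF NOT EXISTS customers (name TEXT, country TEXT)")
        db.execute("DELETE FROM customers")
        db.executemany("INSERT INTO customers VALUES (?, ?)", rows)
        db.commit()
        db.close()

    # The "virtualization layer": one connection with both sources attached
    # and a single logical view over them. (A TEMP view is used because
    # SQLite only lets temporary views reference attached databases.)
    hub = sqlite3.connect(":memory:")
    hub.execute("ATTACH 'crm.db' AS crm")
    hub.execute("ATTACH 'shop.db' AS shop")
    hub.execute("""CREATE TEMP VIEW all_customers AS
                   SELECT name, country FROM crm.customers
                   UNION ALL
                   SELECT name, country FROM shop.customers""")

    # A report queries the view without knowing where the data lives.
    print(hub.execute("SELECT * FROM all_customers").fetchall())
    ```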

    Source: http://summa.talentum.fi/article/tv/2-2015/133539

  23. Tomi Engdahl says:

    Nokia’s N1 Tablet Looks Good For ‘Designed In Finland, Built On Android’ Strategy
    http://techcrunch.com/2015/03/04/nokia-n1-hands-on/

    Nokia is showing off the Android tablet that’s part of its post-mobile phones strategy here at Mobile World Congress in Barcelona. It’s only available in China right now but the company tells TechCrunch it’s “looking at European markets” to consider whether to bring the N1 here too.

    Announced last November, the slender, almost 8-inch N1 is a brand licensing collaboration between Nokia and Chinese electronics manufacturing firm Foxconn. It’s not clear how the revenue share breaks down — and Nokia wouldn’t specify when we asked — but it’s definitely doing less of the heavy lifting, with Foxconn making the hardware and also handling distribution, sales and marketing. Nokia contributes design work, its brand name and Z Launcher Android launcher, also revealed last year.

    Nokia’s Tuukka Järvenpää, lead product manager of brand licensing, said the Nokia N9 design team worked with Foxconn on the tablet’s design

    It remains to be seen whether Nokia’s N1 tablet ends up being a China-only experiment. The version on show here in Barcelona has Google Play pre-loaded but the actual product sold in Asia comes with a Chinese app store, as is common in the market.

    If it is planning to bring Nokia-branded smartphones back to the market in 2016, the well-crafted, quality feel of this ‘Designed In Finland, made in China’ tablet certainly augurs well. After the moribund Windows Phone years for Nokia, this experimental new hardware direction is finally putting a spring back in its step.

  24. Tomi Engdahl says:

    Cisco, MapR first to top Big Data TPC benchmark tree
    Although there’s no actual tree yet. Or even a shrub
    http://www.theregister.co.uk/2015/03/05/cisco_and_mapr_big_data_tpc_benchmark/

    Cisco and MapR have posted the first Big Data TPCx-HS benchmark, but, of course, there’s nothing to actually compare it with yet.

    The TPC Express HS (TPCx-HS) benchmark result was based on clustered Cisco UCS servers running Red Hat Linux and MapR Hadoop.

    TPC is the Transaction Processing Council, which helps produce auditable benchmarks.

    The TPCx-HS specification says the test “stresses both hardware and software including Hadoop run-time, Hadoop Filesystem API compatible systems and MapReduce layers”.

  25. Tomi Engdahl says:

    CIOs Report That Spending Is on the Rise
    http://www.cio.com/article/2887832/budget/cios-report-that-spending-is-on-the-rise.html

    IT leaders expect to see bigger technology budgets in the coming year, according to CIO’s most recent Tech Poll. However, while spending in key areas is up, CIOs aren’t buying into all the buzz-worthy trends.

    IT leaders will see bigger technology budgets in the coming year, according to CIO’s most recent Tech Poll, which is conducted regularly to gauge IT spending and stages of implementation in key technology categories.

    Not only are tech budgets bigger, this year IT spending is rising more than it has in the past six years, according to our survey of 211 IT executives. Fifty-seven percent of respondents will see an overall budget increase in the coming year, up from 53 percent and 51 percent the previous two years, respectively. IT budgets are expected to rise an average of 6.2 percent, up from 5.2 percent a year ago and 3.9 percent in November 2012.

  26. Tomi Engdahl says:

    Ubisoft Has New Video Game Designed To Treat Lazy Eye
    http://games.slashdot.org/story/15/03/05/0334203/ubisoft-has-new-video-game-designed-to-treat-lazy-eye

    Ubisoft, in partnership with McGill University, has developed a game designed to treat lazy eye, a condition that affects 1-5% of the population. The game works as a treatment by training both eyes.

    Ubisoft has new video game designed to treat lazy eye
    Treatment still needs approval from Health Canada and the Food and Drug Administration in U.S.
    http://www.cbc.ca/news/canada/montreal/ubisoft-has-new-video-game-designed-to-treat-lazy-eye-1.2979850

    The Montreal-based gaming company Ubisoft has developed a video game it says could be used to treat amblyopia, also known as lazy eye.

    Amblyopia is a condition in children where vision in one eye does not develop properly. If not treated early, the vision problems in the eye can become permanent. Correcting the eye later in life does not restore the lost vision.

    Ubisoft developed the game Dig Rush over two years. The company says it’s the first video game based on a patented method for the treatment of amblyopia.

    Treatment for the condition usually involves a child wearing an eye patch. But children were more likely to abandon the treatment because of the stigma of wearing an eye patch.

    Ferland said the game involves controlling moles on the tablet screen. The moles are digging for gold in mines.

    The treatment involves training both eyes, not just the weak eye, by using different levels of contrast of red and blue that the patient sees using stereoscopic glasses.

    Using this method, the physician can adjust the game’s settings in accordance with the patient’s specific condition.

  27. Tomi Engdahl says:

    I wore the Vive VR headset and didn’t want to take it off
    http://www.theverge.com/2015/3/4/8146523/htc-vive-valve-vr-headset-hands-on-preview

    HTC’s Very Immersive Visual Experience is more impressive for its motion tracking than its display technology

    If the point of virtual reality headsets is to transport their wearer into another world, then HTC’s Vive VR is already a success. I strapped myself in for one of HTC’s demos at Mobile World Congress in Barcelona this week, and I did indeed feel like I had stepped into the 3D worlds rendered around me. But that standard has already been met by the excellent Oculus Rift, the headset that got the world excited about VR all over again. Where the Vive VR looks to set itself apart is by expanding the scope and scale of motions that I can perform while inside its simulated realms.

    Without motion tracking, VR headsets have a natural ceiling to what they can do. The Vive VR lifts that by using a laser-based motion tracking system — all the weird little glass plates scattered across the front of the headset are laser receptors — to know the exact position and orientation of my head at all times. Valve’s Steam VR software platform makes sense of that data, in combination with information about what I’m doing with the controllers, to generate a highly precise picture of what I’m doing and where I’m doing it in three-dimensional space. The next step is to simply (simply!) generate a 3D environment that reacts appropriately to my gestures and movements.

    Though they have a very reasonable 1200 x 1080 resolution each, I could detect the red, green, and blue subpixels

  28. Tomi Engdahl says:

    Source 2 Will Also Be Free
    http://games.slashdot.org/story/15/03/05/1325258/source-2-will-also-be-free

    Valve is officially debuting its Source 2 engine at GDC this week alongside a host of other new technologies, and it’s expected to launch at a competitive price: free. The news of its release coincides with Epic making Unreal Engine 4 free-to-download and Unity announcing a full-featured free version of Unity 5.

  29. Tomi Engdahl says:

    Intel Reveals Unlocked, Socketed Broadwell and Core i7 NUC With Iris Graphics
    http://hardware.slashdot.org/story/15/03/05/1449200/intel-reveals-unlocked-socketed-broadwell-and-core-i7-nuc-with-iris-graphics

    Intel held an event at a location adjacent to GDC last night, where the company discussed updates to its 5th Gen Core processor line-up, Intel graphics developments, the Intel Hardware SDK, and its various game developer tools. Chris Silva, Director of Marketing for the Premium Notebook and Client Graphics teams, disclosed that a socketed, unlocked, 65W desktop processor based on Intel’s Broadwell architecture, featuring Iris graphics, is due to arrive sometime in mid-2015.

    Intel Reveals Unlocked, Socketed Broadwell CPU and Core i7 NUC With Iris Graphics At GDC
    Read more at http://hothardware.com/news/intel-reveals-unlocked-socketed-broadwell-cpu-and-core-i7-nuc-with-iris-graphics-at-gdc#5LHGMyPCqzRxr1wV.99

  30. Tomi Engdahl says:

    VMware sued for alleged GPL license infractions
    http://www.pcworld.com/article/2893852/vmware-sued-for-alleged-gpl-license-infractions.html?null

    A Linux kernel developer is suing VMware in Germany, alleging the company has not complied with copyright terms for using open-source software.

    Christoph Hellwig, who holds copyrights on portions of the Linux kernel, alleges VMware combined proprietary source code with open-source code in its ESXi product line but has not released it publicly as required by the General Public License version 2 (GPLv2). The suit was filed in district court in Hamburg.

    The Software Freedom Conservancy, a charity that supports open-source software projects, is funding Hellwig’s lawsuit through a grant, according to a news release.

    VMware said Thursday it believes the lawsuit is without merit.

    Conservancy Announces Funding for GPL Compliance Lawsuit
    VMware sued in Hamburg, Germany court for failure to comply with the GPL on Linux
    http://sfconservancy.org/news/2015/mar/05/vmware-lawsuit/

  31. Tomi Engdahl says:

    Flashy upstarts facing IPO pressure. Get on with it then
    VC sequence could end not with a bang, but a whimper
    http://www.theregister.co.uk/2015/02/27/five_flash_array_startup_ipo_survivors_face_future_stagnation/

    Conventional wisdom has it that VC-funded tech start-ups get acquired or, hopefully, go through an IPO to become successful stand-alone companies. However, such may not be the fate of the five main surviving all-flash array (AFA) vendors in the face of fast and furious mainstream supplier reaction to their success.

    Step back a moment and remember the last big round of storage array start-up disruption from 3PAR, Compellent, EqualLogic, Isilon, LeftHand, Nexsan, Storwize and others.

    Basically, the good ones got bought.

    However, the same pattern is not happening with all-flash arrays, where four mainstream vendors have devised their own flash-array tech, with three buying their way in and one new enterprise storage array supplier doing likewise, leaving the five remaining AFA startups looking somewhat exposed.

    Cisco bought Whiptail and its Invicta array, now in temporary hibernation
    Dell all-flashed its Storage Centre (Compellent) array
    EMC bought XtremIO and DSSD and has all-flash VMAX/VNX arrays
    HDS all-flashed its VSP and HUS arrays with a HAF module
    HP all-flashed its 3PAR array with the 7450
    IBM bought TMS and so gained its FlashSystem product line
    NetApp all-flashed its E-Series and FAS arrays and is building its own FlashRay tech
    WD bought Skyera so its HGST sub could enter the market

    This leaves Kaminario, Nimbus Data, Pure Storage, SolidFire, Syneto, and Violin Memory as a distinct group of AFA suppliers all trying for glory by replacing disk arrays with their flash systems.

    There is now no mainstream storage array supplier lacking an all-flash array, meaning they have no need to buy one. This makes acquisition unlikely as a viable exit strategy for the stand-alone suppliers. One, Violin, has IPOed already and the other four have to IPO successfully so their backers can get their cash out.

  32. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Microsoft in surprise partnership with Google to improve Angular 2, the next version of Google’s JavaScript web app framework.

    Microsoft And Google Collaborate On Angular 2 Framework, TypeScript Language
    http://techcrunch.com/2015/03/05/microsoft-and-google-collaborate-on-typescript-hell-has-not-frozen-over-yet/

    Here’s a partnership that may come as a surprise to many: Microsoft and Google are working together to help make Angular 2 — the next (and somewhat controversial) version of Google’s JavaScript web app framework — better.

    Angular has been using its own AtScript superset of Microsoft’s TypeScript for a while now. TypeScript is Microsoft’s attempt at extending JavaScript with features like type annotations, generics and modules. Going forward, the two languages will converge. Angular 2 will be written in TypeScript, and developers will be able to write their Angular 2 applications in this language, too.

    The AtScript language made its debut last October, but it looks like the AtScript name will be retired in favor of TypeScript.

    Angular, at various times in its development, was written in plain JavaScript, Google’s own Dart language and AtScript (there are still separate Dart and JavaScript versions of Angular 1.x today).

    Angular 2 has been widely criticized in the developer community because it breaks compatibility with the previous version. Adopting a Microsoft-led language may make it even harder for some to stomach the move to the new version. It’s definitely a win for TypeScript, though, which has seen growing adoption over the last year since its 1.0 release.

  33. Tomi Engdahl says:

    Huge shortage of skilled Linux developers

    The Linux Foundation has published a new report on jobs in the sector. The report makes clear that almost all companies would like to hire more Linux experts. The problem is that qualified coders are now very difficult to find.

    Companies are currently looking above all for open-cloud experts. If your CV lists OpenStack or CloudStack skills, you can in practice pick your job yourself.

    23 percent of corporate recruiters, in turn, value security expertise. 19 percent, or roughly one in five, say their company is looking for Linux hackers who develop software-based solutions for network applications, namely SDN (software-defined networking) components.

    The problem in the Linux job market at the moment is that supply and demand do not match: Linux is used so widely in all kinds of devices and applications that, according to the Foundation, the need for Linux professionals is growing much faster than new professionals can be trained.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2522:linux-osaajista-valtava-pula&catid=13&Itemid=101

  34. Tomi Engdahl says:

    2015 Linux Jobs Report: Linux Professionals in High Demand
    http://www.linuxfoundation.org/news-media/announcements/2015/03/2015-linux-jobs-report-linux-professionals-high-demand

    Rise of open cloud platforms driving need for Linux expertise; hiring managers increasingly looking to certified professionals for top positions

    NEW YORK and SAN FRANCISCO, March 4, 2015 – Recruiters are increasing efforts to hire Linux talent, according to the 2015 Linux Jobs Report, which forecasts the Linux job market based on a survey of hiring managers and Linux professionals. Hiring managers are also looking more to evidence of formal training and certifications to identify qualified prospects.

    The 2015 Linux Jobs Report includes data from hiring managers (1,010) and Linux professionals (3,446) and provides an overview of the state of the market for Linux careers and what motivates professionals in this industry.

    The purpose of this report is to inform the industry about the latest Linux job trends and how they impact the ability of professionals to find rewarding Linux job opportunities and for employers to attract and retain qualified talent.

    “Competition for Linux talent is accelerating, as the software becomes more ubiquitous,” said Shravan Goli, President of Dice. “Hiring managers need to ensure they are offering the right set of incentives to attract talent, while professionals need to provide evidence of their knowledge and skills, especially in areas of growing demand such as the cloud.”

    Nearly all hiring managers are looking to recruit Linux professionals in the next six months. With new Linux-based systems, projects and products constantly emerging, hiring the right talent to support all the growth continues to be a priority amongst employers. Ninety-seven percent of hiring managers report they will bring on Linux talent relative to other skills areas in the next six months.

    The rise of open cloud platforms is creating even more demand for Linux professionals with the right expertise. Forty-two percent of hiring managers say that experience in OpenStack and CloudStack will have a major impact on their hiring decisions, while 23 percent report security is a sought-after area of expertise and 19 percent are looking for Linux talent with Software-Defined Networking (SDN) skills.

    Linux-certified professionals will be especially well positioned in the job market this year, with 44 percent of hiring managers saying they’re more likely to hire a candidate with Linux certification, and 54 percent expecting either certification or formal training of their SysAdmin candidates.

    ”Demand for Linux talent continues apace, and it’s becoming more important for employers to be able to verify candidates have the skillsets they need,”

  35. Tomi Engdahl says:

    China’s Lenovo has grown into a major server manufacturer, based, of course, on its purchase of IBM’s x86 server business. Last year’s growth rates look outrageous because of that acquisition.

    HP was by far the largest supplier in October-December in revenue terms: it sold 3.9 billion dollars’ worth of servers, 28 percent of the total server market, which in the fourth quarter was a hair less than 14 billion dollars.
    Dell maintained the second position.

    In total, 2.7 million servers were sold worldwide in October-December (5% more than a year earlier).

    Nearly a quarter of server machines are now sold in the European market.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2521:lenovo-kasvoi-suureksi-palvelimissa&catid=13&Itemid=101

  36. Tomi Engdahl says:

    Grab your pitchforks: Ubuntu to switch to systemd on Monday
    It’s Debian all o’er agin, I tells ye!
    http://www.theregister.co.uk/2015/03/07/ubuntu_to_switch_to_systemd/

    The Ubuntu Project is set to move forward with a plan to make a controversial system management tool a key part of Ubuntu Linux.

    On Monday, March 9, the Ubuntu maintainers will reconfigure the code base for the forthcoming version of the OS so that it uses the much-debated systemd suite of tools to handle core initialization tasks and manage system daemons.

    That means that when Ubuntu 15.04 ships (presumably in April), all new Ubuntu installs will be running systemd by default.

    It’s a move that’s sure to annoy some.

  37. Tomi Engdahl says:

    Bank of America wants to shove its IT into an OpenCompute cloud. What could go wrong?
    Cheaper customised hardware is good for Facebook
    http://www.theregister.co.uk/2015/03/07/bank_of_america_cloud/

    Selling hardware to the financial industry used to be a cash cow for big-name server makers, but they will be getting short shrift from Bank of America, which is shifting its IT into a white-box-powered, software-defined cloud.

    “I worry that some of the partners that we work closely with won’t be able to make this journey,” David Reilly, chief technology officer at the bank, told The Wall Street Journal.

    Bank of America made the decision to slide the bulk of its backend computing systems to the cloud in 2013, and wants to have 80 per cent of its systems running in software-defined data centres within the next three years. Last year it spent over $3bn on new computing kit.

    To make the move, the bank is talking to hardware manufacturers building low-cost, no-brand cloud boxes as part of the OpenCompute Project

    “What works for a Facebook or a Google may not work in a highly regulated environment such as the one we operate within,” Reilly noted.

  38. Tomi Engdahl says:

    ‘If cloud existed decades ago, backups wouldn’t have been developed’
    Plus: ‘We’re in growth mode’, says 2,000 employee-zapping SAP
    http://www.theregister.co.uk/2015/03/08/quotw_ending_6_march/

  39. Tomi Engdahl says:

    Obama Administration Claims There Are 545,000 IT Job Openings
    http://news.slashdot.org/story/15/03/09/2038211/obama-administration-claims-there-are-545000-it-job-openings

    The White House has established a $100 million program that endorses fast-track, boot camp IT training efforts and other four-year degree alternatives.

    Central to the plan is the White House’s assertion that there are 545,000 unfilled IT jobs. It has not explained how it arrived at this number.

    The White House’s $100M, H-1B funded tech job plan comes under fire
    http://www.computerworld.com/article/2894417/the-white-house-s-100m-h-1b-funded-tech-job-plan-comes-under-fire.html

    Obama administration says there are 545,000 IT job openings, but experts question that number

    President Barack Obama said in a speech Monday in Washington that the program’s goals are “to help employers link up and find and hire folks based on their actual skills and not just their resumes. It doesn’t matter where you learned code, it just matters how good you are in writing code.”

    “If you can do the job, you should get the job,” he said.

    The $100 million will be made available through grants distributed by the Department of Labor to organizations, employers, training institutions and others that address the White House goals.

    Norm Matloff, a professor of computer science at the University of California-Davis who has long challenged the idea that there’s a shortage of technical talent, said “the subtext of the White House announcement is to justify expanding the H-1B program.”

    “Just look at all the cases, including the recent Southern California Edison incident, in which Americans are laid off and forced to train their foreign-worker replacements. Clearly, it’s the foreign workers who need the training, not the Americans,” said Matloff. “The fact is that employers don’t want to hire Americans; they want cheap, immobile labor.”

    But a lot of the openings are for specific skills and experience, and not entry-level jobs, said Janulaitis. Technologists with specialties in security, containerization, cloud-based apps and big data can find jobs. “The real issue is when entry-level positions will be available,” he said.

  40. Tomi Engdahl says:

    Intel SoCs it to ‘em with new D: Tiny but powerful
    Chipzilla reveals more on plan to stuff Broadwells in hyperscale data centre
    http://www.theregister.co.uk/2015/03/09/intel_xeon_d/

    After some teasing late last year, Intel has taken the full wraps off the Xeon D: a system-on-chip (SoC) version of its data-centre darling that it hopes will excite hyperscale operators and those keen on very dense server rigs.

    The Xeon D will come in four and eight CPU core models, D-1520 and D-1540. Each will run two threads per core, at a list price of $199 and $581 respectively. The Broadwell 14nm microarchitecture powers both packages, and they are in production.

    Manufacturers are already picking them up for 50 designs, we’re told, for networking gear, back-end controllers of internet-of-things devices, storage systems, and more. It’s not targeted at core data centre gear – that’s for the Xeon E series – but it’s for the “edge” of the data centre.

    The new chips are proper Broadwell Xeons, so enterprise features like virtualisation extensions, trusted execution technology and encryption-speeding AES-NI are all present. There are even some new inclusions, in the form of an on-die power management system that can control power consumption without waiting for the operating system to dictate terms.

    While we’re on power consumption, the 1520 can do its thing with 25 watts of juice. The 1540 needs 40.

    The SoCs certainly have the grunt to pull off software-defined networking: both pack twin 10Gb ethernet, eight USB ports (four apiece of 2.0 and 3.0) and more PCIE lanes than you can poke a stick at (we think 28 was mentioned in the briefing).

    Intel seems unworried by overlaps with Atom SoCs, which it sees as having roles in less demanding machines. Nor does it feel that the market for conventional Xeons is at risk: the model Ds are plenty powerful, but aren’t designed to handle the weightiest transactional workloads.

  41. Tomi Engdahl says:

    Intel Xeon D Launched: 14nm Broadwell SoC for Enterprise
    by Ian Cutress on March 9, 2015 8:00 PM EST
    http://www.anandtech.com/show/9070/intel-xeon-d-launched-14nm-broadwell-soc-for-enterprise

    It is very rare for Intel to come out and announce a new integrated platform. Today this comes in the form of Xeon D, best described as the meeting in the middle between Xeon E3 and Atom SoCs, taking the best bits of both and fitting into the low-end server market, where efficiency and networking are the priorities. Xeon D, also known as Broadwell-DE, combines up to eight high-performance Broadwell desktop cores and the PCH onto a single die, shrinks both down to 14 nm for power consumption/die area, and offers an array of server features normally found with the Xeon/Avoton line. This is being labeled as the first proper Intel Xeon SoC platform.

    A slide currently doing the rounds from Intel’s pre-briefings on Xeon D shows the current top-of-the-line Xeon D-1540, with eight Broadwell cores for a total of sixteen threads.

    Speaking of networking, the SoC will have bandwidth for two direct 10GbE connections, which will also work in 1G and 2.5G modes. These are optimized for virtualization, allowing 128 Tx and Rx queues per port as well as SR-IOV and VMDq enhancements. With the integration on board, driver support should also be easier to manage than with external controller solutions.

    The SoC also supports the more common server and enterprise aspects normally associated with this product range – virtualization, separate external system control and RAS (reliability, availability and serviceability).

  42. Tomi Engdahl says:

    VR Gears Up for Prime Time
    Samsung headset expected this fall
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1325963&

    Samsung is expected to roll out this fall a consumer version of its Gear VR headset co-developed with Oculus, the next step in bringing virtual reality to the mainstream.

  43. Tomi Engdahl says:

    Intel’s 1st Xeon SoC Twists ARM
    Microservers that outperform ARM
    http://www.eetimes.com/document.asp?doc_id=1325955&

    PORTLAND, Ore. — Intel’s first Xeon system-on-chip (SoC) is twisting ARM’s microserver ambitions, Charles King, principal analyst at Pund-IT in Hayward, California, told EE Times.

    “It’s not just Intel’s first SoC in the Xeon family,” King told us. “It’s the beginning of a new era at Intel — expect them to move very fast in SoCs. We are going to see many more SoCs specifically designed to combat ARM microservers plus serve many datacenter functions.”

    Today’s Xeon SoCs are designed for hosting and cloud services such as web hosting, memory caching, dynamic web serving, and warm storage. But the future Xeon SoCs to which King refers will be optimized for storage- and network-oriented products such as storage-area networks (SANs) and network-attached storage (NAS), mid-range routers, wireless base stations and embedded IoT devices. About 75 percent of current design wins for the Xeon D are for network, storage and IoT designs, whereas microservers are under development at Cisco, HP, NEC, Quanta Cloud Technology, Sugon and Supermicro.

    Intel’s previous strategy to best ARM in microservers was to beef up its Atom line with its second-generation Atom processor C2750, but no more, according to King — “Atom will become a consumer-only SoC,” he told us. The Xeon D-1500 family delivers 3.4-times faster performance per node and up to 1.7-times better performance per watt, which will also make it useful in high-end IoT devices that ARM cannot match, according to Intel. So far the only 14-nanometer 64-bit ARM core was made by Intel for Altera. Samsung has also shown a 14nm 64-bit ARM prototype core, but released no details or delivery estimates. The closest ARM has come to the Xeon D’s 14nm 64-bit cores are those made with TSMC’s 16nm process, which are not due out until later this year.

  44. Tomi Engdahl says:

    Exploiting the DRAM Rowhammer Bug To Gain Kernel Privileges
    http://it.slashdot.org/story/15/03/10/0021231/exploiting-the-dram-rowhammer-bug-to-gain-kernel-privileges

    ‘Rowhammer’ is a problem with some recent DRAM devices in which repeatedly accessing a row of memory can cause bit flips in adjacent rows. We tested a selection of laptops and found that a subset of them exhibited the problem. We built two working privilege escalation exploits that use this effect. One exploit uses rowhammer-induced bit flips to gain kernel privileges on x86-64 Linux when run as an unprivileged userland process.

    Project Zero
    Exploiting the DRAM rowhammer bug to gain kernel privileges
    http://googleprojectzero.blogspot.fi/2015/03/exploiting-dram-rowhammer-bug-to-gain.html

    Overview

    “Rowhammer” is a problem with some recent DRAM devices in which repeatedly accessing a row of memory can cause bit flips in adjacent rows. We tested a selection of laptops and found that a subset of them exhibited the problem. We built two working privilege escalation exploits that use this effect. One exploit uses rowhammer-induced bit flips to gain kernel privileges on x86-64 Linux when run as an unprivileged userland process. When run on a machine vulnerable to the rowhammer problem, the process was able to induce bit flips in page table entries (PTEs). It was able to use this to gain write access to its own page table, and hence gain read-write access to all of physical memory.

    We don’t know for sure how many machines are vulnerable to this attack, or how many existing vulnerable machines are fixable. Our exploit uses the x86 CLFLUSH instruction to generate many accesses to the underlying DRAM, but other techniques might work on non-x86 systems too.

    We expect our PTE-based exploit could be made to work on other operating systems; it is not inherently Linux-specific. Causing bit flips in PTEs is just one avenue of exploitation; other avenues for exploiting bit flips can be practical too. Our other exploit demonstrates this by escaping from the Native Client sandbox.

    This works because DRAM cells have been getting smaller and closer together. As DRAM manufacturing scales down chip features to smaller physical dimensions, to fit more memory capacity onto a chip, it has become harder to prevent DRAM cells from interacting electrically with each other. As a result, accessing one location in memory can disturb neighbouring locations, causing charge to leak into or out of neighbouring cells. With enough accesses, this can change a cell’s value from 1 to 0 or vice versa.
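
    To make the access pattern concrete, here is a minimal C sketch of the kind of hammering loop described above. It is an illustrative reconstruction, not Project Zero’s actual exploit code, and it glosses over the hard part: choosing addr1 and addr2 so that they map to different rows of the same DRAM bank.

        #include <stdint.h>

        /* Sketch of a rowhammer access loop (x86-64, GCC/Clang).
           addr1 and addr2 are assumed to map to different rows of
           the same DRAM bank; a real attack must find such a pair. */
        static void hammer(volatile uint8_t *addr1, volatile uint8_t *addr2,
                           long iterations)
        {
            for (long i = 0; i < iterations; i++) {
                (void)*addr1;  /* activate one aggressor row */
                (void)*addr2;  /* activate the other aggressor row */
                /* Flush both cache lines so the next reads reach DRAM
                   instead of the CPU cache; this is the CLFLUSH trick
                   mentioned in the overview. */
                __asm__ volatile("clflush (%0)" : : "r"(addr1) : "memory");
                __asm__ volatile("clflush (%0)" : : "r"(addr2) : "memory");
            }
        }

    Hammering like this for a few hundred thousand iterations is what can flip bits in a victim row sitting between the two aggressor rows, which is also why disallowing the CLFLUSH instruction (as the NaCl mitigation described below does) blunts this particular exploit path.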

    Exploiting rowhammer bit flips

    Yoongu Kim et al say that “With some engineering effort, we believe we can develop Code 1a into a disturbance attack that … hijacks control of the system”, but say that they leave this research task for the future. We took on this task!

    We found various machines that exhibit bit flips (see the experimental results below). Having done that, we wrote two exploits:

    The first runs as a Native Client (NaCl) program and escalates privilege to escape from NaCl’s x86-64 sandbox, acquiring the ability to call the host OS’s syscalls directly. We have mitigated this by changing NaCl to disallow the CLFLUSH instruction. (I picked NaCl as the first exploit target because I work on NaCl and have written proof-of-concept NaCl sandbox escapes before.)
    The second runs as a normal x86-64 process on Linux and escalates privilege to gain access to all of physical memory. This is harder to mitigate on existing machines.

  45. Tomi Engdahl says:

    Linux kernel devs adopt Bill and Ted’s excellent code of conduct
    Penguin dude Torvalds issues guide to being nice to each other when coding kernels
    http://www.theregister.co.uk/2015/03/10/linux_kernel_devs_adopt_bill_and_teds_excellent_code_of_conduct/

    The Linux kernel development community and its leader Linus Torvalds are both famously feisty: strong words are often exchanged on the Linux Kernel Mailing List, while Linux Lord Linus Torvalds is seldom shy of speaking his mind.

    Of late, however, Torvalds has copped some criticism for being a little too strident, perhaps counter-productively so, as he’s thought to have scared off a few developers with his blunt commentaries on their contributions to the kernel.

    Into that climate comes a new “Code of Conflict” posted on Torvalds’ personal git.kernel.org page:

    https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/CodeOfConflict?id=ddbd2b7ad99a418c60397901a0f3c997d030c65e

  46. Tomi Engdahl says:

    An incredibly shrinking Firefox faces endangered species status
    http://www.computerworld.com/article/2893514/an-incredibly-shrinking-firefox-faces-endangered-species-status.html?google_editors_picks=true

    Desktop browser continues to bleed user share; combined desktop + mobile share falls under 10%

    Mozilla’s Firefox is in danger of making the endangered species list for browsers.

    Just two weeks after Mozilla’s top Firefox executive said that rumors of its demise were “dead wrong,” the iconic browser dropped another three-tenths of a percentage point in analytics firm Net Applications’ tracking, ending February with 11.6%.

    Mozilla has been credited with restarting browser development, which had been moribund under IE.

    But Firefox has fallen on hard times.

    In the last 12 months, Firefox’s user share — an estimate of the portion of all those who reach the Internet via a desktop browser — has plummeted by 34%. Since Firefox crested at 25.1% in April 2010, Firefox has lost 13.5 percentage points, or 54% of its peak share.

    The numbers for Firefox were even worse when the desktop and mobile data are combined.

    Firefox’s total user share — an amalgamation of desktop and mobile — was 9.5% for February, its lowest level since Computerworld began tracking the metric nearly six years ago.

    Mozilla faces a double whammy: Its flagship desktop browser continues to bleed share, while the company has been unable to attract a significant mobile audience. Although the company has long offered Firefox on Android and its Firefox OS has landed on an increasing number of low-end smartphone makers’ devices, its February mobile share was less than seven-tenths of one percent, about four times smaller than the second-from-the-bottom mobile browser, Microsoft’s IE.

    Apple, which had long trailed Mozilla in desktop + mobile browser user share, has leapfrogged its rival because of Firefox’s decline: Safari on desktop and mobile had a cumulative 11.8% user share, down half a point from July 2014. More than two-thirds of Apple’s total was credited to Safari on iOS.

    Google has been the biggest beneficiary of the losses suffered by Mozilla. Last month, it had a combined desktop/mobile user share of 27.6%, 5 percentage points higher than seven months ago.

    Together, the aged stock Android browser and its replacement, Chrome, accounted for 41.5% of all mobile browsers by Net Applications’ count. Google’s pair remained behind Apple’s Safari on mobile, but has narrowed the gap.

  47. Tomi Engdahl says:

    HP launches hyperscale Cloudline servers, aims for white box market
    http://www.zdnet.com/article/hp-launches-cloudline-servers-aims-for-white-box-market/

    Summary: HP launched a family of servers with minimalist designs as a result of its partnership with Foxconn.

    Hewlett-Packard on Tuesday launched a new set of servers designed for hyperscale cloud providers, who prefer white box gear. The server family, called Cloudline, is the result of a joint venture with Hon Hai’s Foxconn.

    CEO Meg Whitman alluded to the Foxconn partnership during HP’s first quarter earnings conference call. Whitman said that HP is looking to partner and play in the market for servers from original design manufacturers. So-called ODM servers are moving up the market share charts.

    HP announced Cloudline at the Open Compute Summit. What HP is trying to do is thread the needle between offering ODM servers and surrounding them with support and scale.

    Gromala said that HP isn’t concerned with cannibalization of the company’s existing server portfolio. Cloudline is for hyperscale deployments and probably wouldn’t be used for mission critical applications that need more redundancy on both hardware and software. For instance, HP’s Proliant servers have more redundancy and serve a different architecture. “Cloudline is for data centers willing to trade off capabilities and components for costs,” said Gromala.

    As for pricing, the Cloudline servers are likely to be anywhere from 10 percent to 25 percent less expensive than HP’s standard servers. The revenue split between HP and Foxconn isn’t disclosed.

  48. Tomi Engdahl says:

    The quiet computing revolution built into Apple’s 12-inch MacBook
    http://www.washingtonpost.com/blogs/the-switch/wp/2015/03/10/the-quiet-computing-revolution-built-into-apples-12-inch-macbook/

    But a tiny feature of the new laptop may end up being just as critical a development for the computing world: the new USB port.

    The small plug — technically called “USB Type-C” — is the only one on Apple’s latest MacBook. That’s provoking some groans among road warriors who often need multiple ports to charge their phones, connect to a flash drive and keep their laptops charged at the same time. But inconveniences aside, it’s easy to see how Apple is forcing a major change in the industry.

    Why is USB-C so important? Versatility. It enables the transfer of power, data and even a video signal — all at the fastest rates. In the MacBook, that’s allowed Apple to collapse all of the ports you’d ordinarily see on the side of a computer into one.

    USB-C is like a super-powered version of the old familiar USB ports that have been on laptops for years. It’s reversible, meaning there’s no right-side-up to the plugs. It can deliver more power, faster — so you can connect large external devices like monitors and hard drives with it. And the energy can flow both ways, so that you can charge a phone from your laptop (as before) or you can tell your phone to charge your laptop.

    Apple is so confident in USB-C’s capabilities that it’s provided only one USB-C port on the 12-inch MacBook. That’s potentially problematic for people who need to charge their laptop and also connect a monitor at the same time. The decision also portends a robust industry for after-market adapters and splitters.

  49. Tomi Engdahl says:

    Intel gives Facebook the D – Xeons thrust web pages at the masses
    System designs and software libraries published
    http://www.theregister.co.uk/2015/03/10/facebook_open_compute_yosemite/

    Open Compute Summit Facebook is using Intel’s Xeon D processors to build stacks of web servers for the 1.39 billion people who visit the social network every month.

    The OpenRack server design is codenamed Yosemite and is available for anyone to use under the Open Compute Project. The hardware “dramatically increases speed and more efficiently serves Facebook traffic,” the website’s engineers boast.

    Each sled holds four boards, and on each board sits a single Xeon D-1540 processor package with its own RAM and flash storage. That D-1540 part features eight cores (16 threads) running at 2GHz, plus two 10Gb Ethernet ports, PCIe and other IO.

    Each processor consumes up to 65W, 90W for the whole server card, and 400W (TDP) for a full sled. A single rack can hold 48 sleds, which adds up to 192 Xeon Ds and 1,536 Broadwell cores. The Yosemite motherboard has a 50Gb/s multi-host network interconnect that hooks the four CPU boards through a single Ethernet port.

    The key thing is that this design is easier for Facebook’s software engineers to program. Each independent server is essentially a single socket processor with its own RAM, storage and NIC, whereas previous designs are two-socket affairs. The single-socket design gets rid of all the NUMA headaches present in a two-socket system, when writing and tuning multi-threaded code to generate and serve web pages.
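
    As a rough illustration of the NUMA tuning work that a single-socket design makes unnecessary, here is a small C sketch using Linux’s libnuma. It is a generic example under my own assumptions, not Facebook’s code; the node number and buffer size are arbitrary.

        /* Build with: gcc numa_demo.c -lnuma */
        #include <numa.h>
        #include <stdio.h>

        int main(void)
        {
            if (numa_available() < 0) {
                puts("No NUMA policy support on this machine.");
                return 0;
            }
            printf("%d NUMA node(s)\n", numa_max_node() + 1);

            /* On a two-socket box, memory touched from the "far" socket
               pays extra latency, so code pins data and threads together: */
            size_t size = 1 << 20;
            void *buf = numa_alloc_onnode(size, 0); /* memory on node 0 */
            numa_run_on_node(0);                    /* thread on node 0 */
            /* ... do the multi-threaded work on buf here ... */
            numa_free(buf, size);
            return 0;
        }

    On a single-socket server card like Yosemite’s there is only one node, so none of this placement work, or the profiling needed to get it right, applies.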

    “890 million people visit Facebook on mobile every day. We have to build the infrastructure to support this.”

