Computer trends for 2015

Here comes my long list of computer technology trends for 2015:

Digitalisation is changing all business sectors and our daily work even more than before. Digitalisation also changes the IT sector itself: traditional software packages are moving rapidly into the cloud, and the need to own or rent your own IT infrastructure is dramatically reduced. Automated configuration and monitoring become truly possible. The workload of software implementation projects will shrink significantly because the software needs less adjustment. Traditional IT outsourcing is definitely threatened. Security management is one of the key factors to change, as security threats increasingly target the digital world. For the IT sector, digitalisation simply means “cheaper and better.”

The phrase “Communications Transforming Business” is becoming the new normal. The pace of change in enterprise communications and collaboration is very fast. A new set of capabilities, empowered by the combination of Mobility, the Cloud, Video, software architectures and Unified Communications, is changing expectations for what IT can deliver.

Global Citizenship: Technology Is Rapidly Dissolving National Borders. Besides your passport, what really defines your nationality these days? Is it where you live? Where you work? The language you speak? The currency you use? If it is, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Increasingly, technological developments will allow us to live and work almost anywhere on the planet… (and even beyond). In my mind, a borderless world will be a more creative, lucrative, healthy, and frankly, exciting one. Especially for entrepreneurs.

The traditional enterprise workflow is ripe for huge change as the focus moves away from working in a single context on a single device to the workflow being portable and contextual. InfoWorld’s executive editor, Galen Gruman, has coined a phrase for this: “liquid computing.” The promised increase in productivity is stunning, but the loss of control over data will cross an alarming threshold for many IT professionals.

Mobile will be used more and more. Currently, 49 percent of businesses across North America have adopted between one and ten mobile applications, indicating significant acceptance of these solutions. When properly leveraged, mobility promises to increase visibility and responsiveness in the supply chain. Increased employee productivity and business process efficiencies are seen as key business impacts.

The Internet of things is a big, confusing field waiting to explode.  Answer a call or go to a conference these days, and someone is likely trying to sell you on the concept of the Internet of things. However, the Internet of things doesn’t necessarily involve the Internet, and sometimes things aren’t actually on it, either.

The next IT revolution will come from an emerging confluence of liquid computing and the Internet of things. These two trends are connected, or at least should connect. If we are to trust the consultants, we are in a sweet spot for significant change in computing that all companies and users should look forward to.

Cloud will be talked about a lot and taken more into use. Cloud is the next generation of the IT supply chain. A global survey of executives predicted a growing shift towards third-party providers to supplement internal capabilities with external resources. CIOs are expected to adopt a more service-centric enterprise IT model. Global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion in 2014 (up 20% from $145.2 billion in 2013), and growth will continue to be fast (“By 2017, enterprise spending on the cloud will amount to a projected $235.1 billion, triple the $78.2 billion in 2011“).

The rapid growth in mobile, big data, and cloud technologies has profoundly changed market dynamics in every industry, driving the convergence of the digital and physical worlds, and changing customer behavior. It’s an evolution that IT organizations struggle to keep up with. To succeed in this situation you need to combine traditional IT with agile, web-scale innovation. There is value in both the back-end operational systems and the fast-changing world of user engagement. You are now effectively operating two-speed IT (bimodal IT, two-speed IT, or traditional IT/agile IT). You need a new API-centric layer in the enterprise stack, one that enables two-speed IT.
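As a rough illustration of what such an API-centric layer can look like, here is a minimal sketch in Python using Flask. The legacy endpoint, URL and field names are hypothetical placeholders, not any particular product’s API; the point is only that the slow-moving system of record stays behind a thin, stable API while fast-moving mobile and web front ends iterate against it independently.

from flask import Flask, jsonify
import requests

# Hypothetical address of a slow-moving back-end system of record.
LEGACY_URL = "http://legacy-erp.internal/orders"

app = Flask(__name__)

@app.route("/api/v1/orders/<order_id>")
def get_order(order_id):
    # Fetch the record from the legacy system as-is...
    raw = requests.get(f"{LEGACY_URL}/{order_id}", timeout=5).json()
    # ...and expose it in a stable, front-end friendly shape.
    return jsonify({
        "id": raw.get("ORDER_NO"),       # hypothetical legacy field names
        "status": raw.get("STATUS_CD"),
        "total": raw.get("TOT_AMT"),
    })

if __name__ == "__main__":
    app.run(port=8080)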

As Robots Grow Smarter, American Workers Struggle to Keep Up. Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work. Automation is not only replacing manufacturing jobs, it is displacing knowledge and service workers too.

In many countries the IT recruitment market is flying, having picked up to a post-recession high. Employers beware – after years of relative inactivity, job seekers are gearing up for change. Economic improvements and an increase in business confidence have led to a burgeoning jobs market and an epidemic of itchy feet.

Hopefully the IT department is increasingly being seen as a profit centre rather than a cost centre, with IT budgets commonly split between keeping the lights on and spending on innovation and revenue-generating projects. Historically IT was about keeping the infrastructure running and there was no real understanding outside of that, but the days of IT being locked in a basement are gradually changing. CIOs and CMOs must work more closely to increase focus on customers next year or risk losing market share, Forrester Research has warned.

Good questions to ask: Where do you see the corporate IT department in five years’ time? With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT? What IT process or activity is the most important in creating superior user experiences to boost user/customer satisfaction?

 

Windows Server 2003 goes end of life in summer 2015 (July 14, 2015). There are millions of servers globally still running the 13-year-old OS, with one in five customers forecast to miss the 14 July deadline when Microsoft turns off extended support. A few months back there were estimated to be 2.7 million WS2003 servers in operation in Europe. This will keep system administrators busy, because there is only about half a year left, and an update to Windows Server 2008 or Windows Server 2012 may prove difficult. Microsoft and support companies do not seem to be interested in continuing Windows Server 2003 support, so for those who need it, the custom pricing can be “incredibly expensive”. At this point it seems that many organizations want a new architecture and are considering moving the servers to the cloud as one option.

Windows 10 is coming to PCs and mobile devices. Just a few months back Microsoft unveiled a new operating system, Windows 10. The new Windows 10 OS is designed to run across a wide range of machines, including everything from tiny “internet of things” devices in business offices to phones, tablets, laptops, and desktops to computer servers. Windows 10 will have exactly the same requirements as Windows 8.1 (the same minimum PC requirements that have existed since 2006: a 1GHz, 32-bit chip with just 1GB of RAM). A technical preview is already available. Microsoft says to expect AWESOME things of Windows 10 in January. Microsoft will share more about the Windows 10 ‘consumer experience’ at an event on January 21 in Redmond and is expected to show the Windows 10 mobile SKU at the event.

Microsoft is going to monetize Windows differently than before. Microsoft Windows has made headway in the market for low-end laptops and tablets this year by reducing the price it charges device manufacturers, charging no royalty on devices with screens of 9 inches or less. That has resulted in a new wave of Windows notebooks in the $200 price range and tablets in the $99 price range. The long-term success of the strategy against Android tablets and Chromebooks remains to be seen.

Microsoft is pushing Universal Apps concept. Microsoft has announced Universal Windows Apps, allowing a single app to run across Windows 8.1 and Windows Phone 8.1 for the first time, with additional support for Xbox coming. Microsoft promotes a unified Windows Store for all Windows devices. Windows Phone Store and Windows Store would be unified with the release of Windows 10.

Under new CEO Satya Nadella, Microsoft realizes that, in the modern world, its software must run on more than just Windows. Microsoft has already revealed Microsoft Office programs for Apple iPad and iPhone. It also has an email client for both the iOS and Android mobile operating systems.

With Mozilla Firefox and Google Chrome grabbing so much of the desktop market—and Apple Safari, Google Chrome, and Google’s Android browser dominating the mobile market—Internet Explorer is no longer the force it once was. The article “Microsoft May Soon Replace Internet Explorer With a New Web Browser” says that Microsoft’s Windows 10 operating system will debut with an entirely new web browser code-named Spartan. This new browser is a departure from Internet Explorer, the Microsoft browser whose relevance has waned in recent years.

SSD capacity has always lagged well behind hard disk drives (hard disks are in 6TB and 8TB territory while SSDs are primarily 256GB to 512GB). Intel and Micron will try to kill the hard drive with new flash technologies. Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Within the next two years Intel promises 10TB+ SSDs thanks to 3D Vertical NAND flash memory. SSD interfaces are also evolving beyond traditional hard disk interfaces. PCIe flash and NVDIMMs will make their way into shared storage devices more in 2015. The ULLtraDIMM™ SSD connects flash storage to the memory channel via standard DIMM slots, in order to close the gap between storage devices and system memory (less than five microseconds write latency at the DIMM level).

Hard disks will still be made in large amounts in 2015. It seems that NAND is not taking over the data centre immediately. The big problem is $/GB. Estimates of shipped disk and SSD capacity out to 2018 show disk growing faster than flash. The world’s ability to make and ship SSDs is falling behind its ability to make and ship disk drives – for SSD capacity to match disk by 2018 we would need roughly eight times more flash foundry capacity than we have. New disk technologies such as shingling, TDMR and HAMR are upping areal density per platter and bringing down cost/GB faster than NAND technology can. At present solid-state drives with extreme capacities are very expensive. I expect that in 2015 SSD prices will still be so much higher than hard disk prices that everybody who needs to store large amounts of data will want to consider SSD + hard disk hybrid storage systems.
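To see why hybrid makes sense, here is a back-of-the-envelope calculation in Python. The per-gigabyte prices and the 10% “hot data” share below are rough placeholder assumptions for illustration, not quotes from any vendor.

# Rough, assumed street prices in $/GB (placeholders, not quotes).
SSD_PRICE_PER_GB = 0.45
HDD_PRICE_PER_GB = 0.04

capacity_gb = 100_000      # 100 TB of data to store
hot_fraction = 0.10        # assume only 10% of it is performance-critical

all_ssd = capacity_gb * SSD_PRICE_PER_GB
all_hdd = capacity_gb * HDD_PRICE_PER_GB
hybrid = (capacity_gb * hot_fraction * SSD_PRICE_PER_GB
          + capacity_gb * (1 - hot_fraction) * HDD_PRICE_PER_GB)

print(f"All-flash: ${all_ssd:,.0f}")   # ~$45,000
print(f"All-disk:  ${all_hdd:,.0f}")   # ~$4,000
print(f"Hybrid:    ${hybrid:,.0f}")    # ~$8,100

With these assumed numbers the hybrid tier costs a fraction of all-flash while still keeping the latency-sensitive slice of the data on SSD.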

PC sales, and even laptop sales, are down, and manufacturers are pulling out of the market. The future is all about mobile devices. We have entered the post-PC era so deeply that even the tablet market seems to be saturating, as most people who want one already have one. The crazy years of huge tablet sales growth are over. Tablet shipment growth in 2014 was already quite low (7.2% growth in 2014, to 235.7M units). There are no strong reasons for either growth or decline in the tablet market in 2015, so I expect it to be stable. IDC expects the iPad to see its first-ever decline, and I expect that too, because the market seems to be more and more taken by Android tablets that have turned out to be “good enough”. Wearables, Bitcoin or messaging may underpin the next consumer computing epoch, after the PC, internet, and mobile.

There will be new tiny PC form factors coming. Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that”. The compute stick is likened to similar thumb PCs that plug into an HDMI port and are offered by PC makers with the Android OS and ARM processors (for example the Wyse Cloud Connect and many cheap Android sticks). Such devices typically don’t have internal storage, but can be used to access files and services in the cloud. Intel expects that the stick-sized PC market will grow to tens of millions of devices.

We have entered the post-Microsoft, post-PC programming era: the portable revolution. Tablets and smart phones are fine for consuming information: a great way to browse the web, check email, stay in touch with friends, and so on. But what does a post-PC world mean for creating things? If you’re writing platform-specific mobile apps in Objective-C or Java then no, the iPad alone is not going to cut it. You’ll need some kind of iPad-to-server setup in which your iPad becomes a mythical thin client for the development environment running on your PC or in the cloud. If, however, you’re working with scripting languages (such as Python and Ruby) or building web-based applications, the iPad or another tablet could be a usable development environment. At least it is worth testing.

You need to prepare to learn new languages that are good for specific tasks. Attack of the one-letter programming languages: from D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following. Watch out! The coder in the next cubicle might have been bitten and infected with a crazy-eyed obsession with a programming language that is not Java and goes by a mysterious one-letter name. Each offers compelling ideas that could do the trick in solving a particular problem you need fixed.

HTML5’s “Dirty Little Secret”: It’s Already Everywhere, Even In Mobile. Just look under the hood. “The dirty little secret of native [app] development is that huge swaths of the UIs we interact with every day are powered by Web technologies under the hood.” When people say Web technology lags behind native development, what they’re really talking about is the distribution model. It’s not that the pace of innovation on the Web is slower, it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Vine is a great example of a modern JavaScript app. It’s lightning fast on desktop and on mobile, and shares the same codebase for ease of maintenance.

Docker, meet hype. Hype, meet Docker. Docker: sorry, you’re just going to have to learn about it. Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds. Docker containers are supported by very many Linux systems, and it is not only Linux anymore, as Docker’s app containers are coming to Windows Server, says Microsoft. What containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other. What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications.
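As a minimal sketch of what that looks like in practice, the snippet below uses the Docker SDK for Python to start and stop a containerized application. It assumes a local Docker daemon and the docker Python package, and uses the public nginx image purely as an example.

import docker

client = docker.from_env()   # connect to the local Docker daemon

# Start a containerized application. The container gets its own filesystem,
# process table and network namespace, but shares the host's kernel.
web = client.containers.run(
    "nginx:latest",            # packaged application image
    detach=True,               # run in the background
    ports={"80/tcp": 8080},    # map container port 80 to host port 8080
    name="demo-web",
)
print(web.short_id, web.status)

# Containers start and stop in seconds because no guest OS has to boot.
web.stop()
web.remove()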

Domestic software is on the rise in China. China is planning to purge foreign technology and replace it with homegrown suppliers. China is aiming to purge most foreign technology from banks, the military, state-owned enterprises and key government agencies by 2020, stepping up efforts to shift to Chinese suppliers, according to people familiar with the effort. In tests, workers have replaced Microsoft Corp.’s Windows with a homegrown operating system called NeoKylin (a FreeBSD-based desktop OS), and Dell will preinstall NeoKylin on commercial PCs in China. The plan is driven by national security concerns and marks an increasingly determined move away from foreign suppliers. There are cases of replacing foreign products at all layers, from applications and middleware down to infrastructure software and hardware. Foreign suppliers may be able to avoid replacement if they share their core technology or give China’s security inspectors access to their products. The campaign could have lasting consequences for U.S. companies including Cisco Systems Inc. (CSCO), International Business Machines Corp. (IBM), Intel Corp. (INTC) and Hewlett-Packard Co. A key government motivation is to bring China up from low-end manufacturing to the high end.

 

Data center markets will grow. MarketsandMarkets forecasts the data center rack server market to grow from $22.01 billion in 2014 to $40.25 billion by 2019, at a compound annual growth rate (CAGR) of 7.17%. North America (NA) is expected to be the largest region for the market’s growth in terms of revenues generated, but Asia-Pacific (APAC) is also expected to emerge as a high-growth market.

The rising need for virtualized data centers and incessantly increasing data traffic are considered strong drivers for the global data center automation market. The SDDC comprises software-defined storage (SDS), software-defined networking (SDN) and software-defined server/compute, wherein all three components are empowered by specialized controllers, which abstract the control plane from the underlying physical equipment. These controllers virtualize the network, server and storage capabilities of a data center, thereby giving better visibility into data traffic routing and server utilization.

New software-defined networking apps will be delivered in 2015. And so will software-defined storage, and software-defined almost anything (I am waiting for the day we see software-defined software). Customers are ready to move away from vendor-driven proprietary systems that are overly complex and impede their ability to rapidly respond to changing business requirements.

Large data center operators will be using more and more of their own custom hardware instead of standard PCs from traditional computer manufacturers. Intel is betting on (customized) commodity chips for cloud computing and expects that over half the chips it sells to public clouds in 2015 will have custom designs. The biggest public clouds (Amazon Web Services, Google Compute, Microsoft Azure), other big players (like Facebook or China’s Baidu) and other public clouds (like Twitter and eBay) all have huge data centers that they want to run optimally. Companies like AWS “are running a million servers, so floor space, power, cooling, people — you want to optimize everything”. That is why they want specialized chips. Customers are willing to pay a little more for the special run of chips. While most of Intel’s chips still go into PCs, about one-quarter of Intel’s revenue, and a much bigger share of its profits, come from semiconductors for data centers. In the first nine months of 2014, the average selling price of PC chips fell 4 percent, but the average price on data center chips was up 10 percent.

We have seen GPU acceleration taken into wider use. Special servers and supercomputer systems have long been accelerated by moving calculations to graphics processors. The next step in acceleration will be adding FPGAs to accelerate x86 servers. FPGAs provide a unique combination of highly parallel custom computation, relatively low manufacturing/engineering costs, and low power requirements. FPGA circuits can provide a lot more computing power at much lower power consumption, but traditionally programming them has been time-consuming. This can change with the introduction of new tools (the next step from technologies learned from GPU acceleration). Xilinx has developed the SDAccel tools to develop algorithms in C, C++ and OpenCL and translate them to FPGAs easily. IBM and Xilinx have already demoed FPGA-accelerated systems. Microsoft is also doing research on accelerating applications with FPGAs.
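To give a feel for the programming model those tools start from, here is a minimal OpenCL-style data-parallel kernel written with pyopencl. It runs on whatever OpenCL device is available (CPU or GPU); compiling this kind of kernel down to FPGA logic is exactly the step tools such as SDAccel add on top, and this sketch does not do that part.

import numpy as np
import pyopencl as cl

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

ctx = cl.create_some_context()       # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# One work-item per element: massively parallel by construction, which is
# what makes this style of kernel a good fit for GPUs and FPGAs alike.
program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out)
{
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)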


If there is one enduring trend for memory design in 2014 that will carry through to next year, it’s the continued demand for higher performance. The trend toward high performance is never going away. At the same time, the goal is to keep costs down, especially when it comes to consumer applications using DDR4 and mobile devices using LPDDR4. LPDDR4 will gain a strong foothold in 2015, and not just to address mobile computing demands. The reality is that LPDDR3, or even DDR3 for that matter, will be around for the foreseeable future (lowest-cost DRAM, whatever that may be). Designers are looking for subsystems that can easily accommodate DDR3 in the immediate future, but will also be able to support DDR4 when it becomes cost-effective or makes more sense.

Universal Memory for Instant-On Computing will be talked about. New memory technologies promise to be strong contenders for replacing the entire memory hierarchy for instant-on operation in computers. HP is working with memristor memories that are promised to be akin to RAM but can hold data without power.  The memristor is also denser than DRAM, the current RAM technology used for main memory. According to HP, it is 64 and 128 times denser, in fact. You could very well have a 512 GB memristor RAM in the near future. HP has what it calls “The Machine”, practically a researcher’s plaything for experimenting on emerging computer technologies. Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system in 2015 (Linux++, in June 2015). HP must still make significant progress in both software and hardware to make its new computer a reality. A working prototype of The Machine should be ready by 2016.

Chip designs that enable everything from a 6 Gbit/s smartphone interface to the world’s smallest SRAM cell will be described at the International Solid State Circuits Conference (ISSCC) in February 2015. Intel will describe a Xeon processor packing 5.56 billion transistors, and AMD will disclose an integrated processor sporting a new x86 core, according to a just-released preview of the event. The annual ISSCC covers the waterfront of chip designs that enable faster speeds, longer battery life, more performance, more memory, and interesting new capabilities. There will be many presentations on first designs made in 16 and 14 nm FinFET processes at IBM, Samsung, and TSMC.

 

1,403 Comments

  1. Tomi Engdahl says:

    Massive Worldwide Layoff Underway At IBM
    Posted 3 Feb 2015 | 17:00 GMT
    http://spectrum.ieee.org/view-from-the-valley/at-work/tech-careers/massive-worldwide-layoff-underway-at-ibm

    Project Chrome, a massive layoff that IBM is pretending is not a massive layoff, is underway. First reported by Robert X. Cringely (a pen name) in Forbes, about 26 percent of the company’s global workforce is being shown the door. At more than 100,000 people, that makes it the largest mass layoff at any U.S. corporation in at least 20 years. Cringely wrote that notices have started going out, and most of the hundred-thousand-plus will likely be gone by the end of February.

    IBM immediately denied Cringely’s report, indicating that a planned $600 million “workforce rebalancing” was going to involve layoffs (or what the company calls “Resource Actions”) of just thousands of people. But Cringely responded that he never said that the workforce reductions would be all called layoffs—instead, multiple tactics are being used

    The news is coming in from around the world, and is affecting folks in sales, support, engineering—just about every job description. The only IBM’ers spared are those working in semiconductor manufacturing, an operation that is in the process of being acquired by Global Foundries.

    Alliance@IBM, the IBM employees’ union, says it has so far collected reports of 5000 jobs eliminated

    Those are official layoff numbers. But then there’s that performance rating ploy—also known as a stealth layoff—that involves giving a previously highly rated employee the lowest rating (a 3), before showing them the door.

    This isn’t the first time IBM employees have received aberrant poor performance reviews shortly before a layoff; it’s reportedly standard operating procedure.

    Next Week’s Bloodbath At IBM Won’t Fix The Real Problem
    http://www.forbes.com/sites/robertcringely/2015/01/22/next-weeks-bloodbath-at-ibm-wont-fix-the-real-problem/

    I’ve been hearing since before Christmas about Project Chrome, the code name for what has been touted to me as the biggest reorganization in IBM history.

    To fix its business problems and speed up its “transformation,” next week about 26 percent of IBM’s employees will be getting phone calls from their managers. A few hours later a package will appear on their doorsteps with all the paperwork. Project Chrome will hit many of the worldwide services operations. The USA will be hit hard, but so will other locations.

    Those employees will all be gone by the end of February.

    In the USA mainframe and storage talent will see deep cuts. This is a bit short-sighted and typical for IBM. They just announced the new Z13 mainframe and hope it will stimulate sales. Yet they will be cutting the very teams needed to help move customers from their old systems to the new Z13.

    The storage cuts are likely to be short-sighted, too. Most cloud services use different storage technology than customers use in their data centers. This makes data replication and synchronization difficult. IBM’s cloud business needs to find a way to efficiently work well with storage systems found in customer data centers. Whacking the storage teams won’t help with this problem.

    Project Chrome appears to be a pure accounting resource action — driven by the executive suite and designed to make IBM’s financials look better for the next few quarters.

    Global Technology Services, the outsourcing part of IBM, is continuing to lose customers.

    When reached, IBM sent the following response: “We do not comment on rumors, even ridiculous or baseless ones. If anyone had checked information readily available from our public earnings statements, or had simply asked us, they would know that IBM has already announced the company has just taken a $600 million charge for workforce rebalancing.”

    The biggest reorganization in IBM’s history will not really begin until the Project Chrome resource actions are done. People let go will be excluded from consideration for the new business units. In a few months those new business units will start to work calling on IBM customers to sell them on the new CAMSS (Cloud, Analytics, Mobile, Social and Security) stuff. They will walk into a hornet’s nest.

    If you are an investor or Wall Street analyst it’s time to take a closer look at IBM’s messaging.

    So while IBM is supposedly transforming, they are also losing business and customers every quarter. What are they actually doing to fix this? Nothing.

  2. Tomi Engdahl says:

    Microsoft pushes the open source door a little wider with .NET CoreCLR
    http://www.neowin.net/news/microsoft-pushes-the-open-source-door-a-little-wider-with-net-coreclr

    Microsoft has been pushing the open source message for many months now, and today the company announced a few updates to this initiative. The Redmond-based company has taken the next step in completely open-sourcing the full .NET Core server side stack by releasing the source code for the .NET Core Common Language Runtime (CLR).

    This release includes the full CoreCLR, which is the “execution engine” for .NET Core, including RyuJIT, the .NET GC, native interop and many other .NET runtime components. When you run ASP.NET 5 apps on top of .NET Core, CoreCLR is the component that is responsible for executing your code, in addition to the CoreFX/BCL libraries that you depend on.

    Microsoft says that there will be several additional milestones as they work to open source and cross-platform the .NET runtime and server-side stack.

    In addition to the open source announcements, Microsoft will also be hosting a virtual .NET conference on March 18th and 19th where developers can expect to hear more about .NET Core.

  3. Tomi Engdahl says:

    ARM Cores Take on PC Processors
    2016 handsets to run 4K video, console games
    http://www.eetimes.com/document.asp?doc_id=1325546&

    ARM announced a new processor core, GPU core, and interconnect targeted for mobile SoCs. The 64-bit Cortex-A72 processor core, Mali-T880 graphics core, and CoreLink CCI-500 aim to power a new class of mobile devices that “serve as your primary and only compute platform.”

    While slim on specs, ARM promises that its Cortex-A72 processor will deliver 3.5 times the performance of its Cortex-A15 devices while consuming 75% less power. ARM says its Mali-T880 graphics cores are similarly impressive in generation-over-generation improvements supporting resolutions up to 4K pixels at 120 frames/second.

    Phones using the cores are expected in 2016 that deliver video with the quality of a set-top box, console-class gaming and virtual reality experiences while staying in a smartphone power budget, said Nandan Nayampally, an ARM vice president of marketing. ARM officials are banking on its wide ecosystem of partners to redirect content from PCs and TVs to the new handsets.

    “I think ARM is throwing down a bit of a gauntlet saying they are going to be able to do all the content creation and the virtual reality and the other things that everybody thought they needed a midrange or high-end PC to do,” said Nathan Brookwood, principal of market watcher Insight64.

    Several ARM partners in the film, communications, and gaming industries at an event here said they were encouraged by the ways the compute improvements and power consumption decreases could affect their industries.

    An engineer from headset maker Oculus said the high level performance of Mali cores and their competitors allow for real time scheduling and other immersive experiences. Oculus uses a Cortex-M3 microcontroller from ST Micro in its Rift virtual-reality goggles. Samsung’s version of the headset, co-developed with Oculus, runs on a Cortex A-15 and A7 and Mali cores in the Samsung Galaxy S4.

    “We found some surprising things about mobile SoCs and GPUs that were very helpful in making virtual reality happen,”

    More than ten partners, including HiSilicon, MediaTek, and Rockchip, have licensed the Cortex-A72, which is based on the ARMv8-A architecture and is backward compatible with existing 32-bit software.

    With the new cores, ARM continues its practice of providing so-called POP IP, this time for TSMC’s 16nm FinFET+ process, to help chip makers rapidly migrate from 32nm or 28nm process nodes with predictable performance and power results. In the 16nm TSMC process the new Cortex-A72 can run at a sustained 2.5 GHz data rate, ARM said.

    playing catch up when it comes to GPUs
    “Now, if you have the highest [graphics] performance you’re either Qualcomm, Nvidia, or using an Imagination GPU,”

  4. Tomi Engdahl says:

    Intel Triples Tablet Application Processor Shipments
    Intel’s shipments of application processors for tablet computers more than tripled in 3Q14, according to market research firm Strategy Analytics.
    http://www.eetimes.com/document.asp?doc_id=1325524&

    Strategy Analytics estimates that Intel’s tablet application processor shipments more than tripled in Q3 2014 compared to Q3 2013, thanks to increased traction in Android-based tablets.

    The firm added that during 3Q14 low-cost Chinese and Taiwanese tablet AP companies including Actions Semiconductor, Allwinner, MediaTek, Spreadtrum, Rockchip and others increased their cumulative volume share to 36 percent compared to 29 percent in Q3 2013. Strategy Analytics forecasts continued momentum for these vendors in 2015 as well.

  5. Tomi Engdahl says:

    Who Gains the Most from ARM’s New IP?
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1325545&

    Introducing new IP, ARM also forecasts tablet’s demise.

    A suite of new IP — ranging from ARM Cortex A72 processor to cache coherent interconnect and new Mali T880 GPU — announced by ARM Tuesday (Feb. 3) has exposed some of the leading processing-core company’s hopes and dreams for mobile phones in 2016.

    Next year’s mobiles, as envisioned by ARM, will go beyond mere phones to become “primary computing platforms,” said Ian Ferguson, vice president, segment marketing at ARM.

    The 2016 mobile phones will be able to see, hear and understand users much better, through a new set of interfaces (going beyond voice, including gestures).

    The growing CPU and GPU processing power enabled by ARM’s new IP cores also suggests that the next-generation phone will be up to the task of “creating content,” instead of just consuming it, Ferguson added.

    Sensor data will be captured, processed and analyzed more locally, instead of being sent to the cloud, according to ARM. Phones will cease to be just a “conduit,” said Ferguson.

    But really, who will gain most from ARM’s new IP?

    The answer is China. Most striking in ARM’s announcement is the undeniable rising power of Asian fablesses, foundries and consumers that ARM is poised to serve.

    I’m sure Apple, Samsung and Qualcomm all have plans to leverage ARM’s new IP, but their primary focus is more on developing their own custom CPU architecture.

    Taiwan Semiconductor Manufacturing Co. (TSMC) also comes out as a big winner. ARM’s new physical IP suite is optimized for the TSMC 16nm FinFET+ process. Announcements on support for other foundries will probably come later

    Chip companies interested in enabling “All-Day Compute Devices” using ARM’s new cores in 2016 mobile phones are likely to resort to TSMC’s 16nm FinFET process, but not others.

    Asian consumers are also playing an important role in deciding the desired features and functions in 2016 mobile phones.

  6. Tomi Engdahl says:

    Wide-Spread SSD Encryption is Inevitable
    http://www.eetimes.com/document.asp?doc_id=1325401&

    The recent Sony hack grabbed headlines in large part due to the political fallout, but it’s not the first corporate enterprise to suffer a high profile security breach and probably won’t be the last.

    Regardless, it’s yet another sign that additional layers of security may be needed as hackers find ways to break through network firewalls and pull out sensitive data, whether it’s Hollywood secrets from a movie studio, or customer data from retailers such as Home Depot or Target. And sometimes it’s not only outside threats that must be dealt with; those threats can come from within the firewall.

    While password-protected user profiles on the client OS have been standard for years, self-encrypting SSDs are starting to become more appealing as they allow for encryption at the hardware level, regardless of OS, and can be deployed in a variety of scenarios, including enterprise workstations or in a retail environment.

    In general, SSDs are becoming more common.

    A survey by the Storage Networking Industry Association presented at last year’s Storage Visions Conference found users lacked interest in built-in encryption features for SSDs, particularly in the mobile space. One of the chief concerns they had when adding features such as encryption to MCUs and SSDs is their effect on performance. Even though many SSDs being shipped today have data protection and encryption features built in, often those capabilities are not being switched on by OEMs, due to the misconception that encryption can reduce performance.

    Ritu Jyoti, chief product officer at Kaminario, said customers are actually requesting encryption as a feature for its all-flash array, but also voice concerns about its effect on performance. “They do ask the question.” Customers in the financial services sector in particular are looking for encryption on their enterprise SSDs, she said, driven by compliance demands, as well as standards outlined by the National Institute of Standards and Technology.

    George Crump, president and founder of research firm Storage Switzerland, recently blogged about Kaminario’s new all-flash array and addressed its new features, including encryption, which he wrote is critical for flash systems in particular because of the way controllers manage flash. “When NAND flash cell wears out the flash controller, as it should, it marks that cell as read-only. The problem is that erasing a flash cell requires that null data be written to it,” he wrote.

    Briefing Note: Kaminario Delivers Encryption, Poised for 2015 Growth
    http://storageswiss.com/2014/12/23/kaminario-delivers-encryption-poised-for-2015/

  7. Tomi Engdahl says:

    Increased Functionally Drives Flash Array Adoption
    http://www.eetimes.com/document.asp?doc_id=1325411&

    Flash arrays are here to stay, according to recent research released by IDC, and adoption is growing at a rapid pace.

    The research covers both all-flash arrays (AFAs) and hybrid flash arrays (HFAs) and shows the worldwide market for flash arrays will hit US$11.3 billion in 2014. IDC credits the growth to a wider variety of offerings from vendors that handle different, increasingly complex workloads.

  8. Tomi Engdahl says:

    Out Googling Google on Big Data Searches
    NU giving away superior search algorithm
    http://www.eetimes.com/document.asp?doc_id=1325551&

    Almost every search algorithm through unstructured Big Data uses a technique called latent Dirichlet allocation (LDA). Northwestern University professor Luis Amaral became curious as to why LDA-based searches appear to be 90 percent inaccurate and unrepeatable 80 percent of the time, often delivering different “hit lists” for the same search string. To solve the conundrum Amaral took apart LDA, found its flaws, and fixed them.

    Now he is offering the improved version, which not only returns more accurate results but returns exactly the same list every time it is used on the same database. He’s offering all this for free to Google, Yahoo, Watson, and any other search engine makers — from recommendation systems to spam filtering to digital image processing and scientific investigation.

    “The common algorithmic implementation of the LDA model is incredibly naive,” Amaral told EE Times.

    First, there is this unrealistic belief that one is able to detect topics
    The other big problem with LDA is that it uses a technique that more often than not gets stuck in what are called local maximums.

    “The common algorithm assumes that by pretty much using steepest ascent it can find the global maximum in the likelihood function landscape. Physicists know from the study of disordered systems that when the landscape is rough, one gets trapped in local maxima and that the specific local maxima found depends on the initial state. In the specific case of LDA, what this means is that depending on the initial guess of the parameter values one is estimating, one gets a different estimate of the parameters,” Amaral told us.
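    The sensitivity to initialization described above is easy to reproduce with a stock LDA implementation. The sketch below uses scikit-learn’s standard LDA (not Amaral’s improved algorithm) and simply shows that two runs on the same corpus, differing only in the random seed, typically report different topics.

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data[:2000]
    vec = CountVectorizer(max_features=5000, stop_words="english")
    X = vec.fit_transform(docs)
    vocab = vec.get_feature_names_out()

    def top_words(seed, n_topics=10, n_top=8):
        # Plain (variational) LDA: the result depends on the random starting point.
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed).fit(X)
        return [[vocab[i] for i in comp.argsort()[-n_top:]] for comp in lda.components_]

    # Same data, same settings, different seeds -> usually different local maxima,
    # hence different topic lists from run to run.
    print(top_words(seed=0)[0])
    print(top_words(seed=1)[0])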

  9. Tomi Engdahl says:

    Tom Warren / The Verge:
    Microsoft stops manufacturing the Nokia Lumia 2520 tablet, thus ending all Windows RT devices
    http://www.theverge.com/2015/2/3/7974759/windows-rt-is-dead

  10. Tomi Engdahl says:

    George Avalos / Mercury News:
    Report: Silicon Valley powers to record job boom, but surge produces income gap

    Report: Silicon Valley powers to record job boom, but surge produces income and gender gap
    http://www.mercurynews.com/business/ci_27449198/report-record-job-boom-silicon-valley-but-surge

    SAN JOSE — The Silicon Valley economy is red hot and the growth is intensifying, according to the latest Joint Venture Silicon Valley Index released Tuesday, but the surge has been accompanied by a yawning income and gender gap.

    “The hot economy is getting hotter,” said Russell Hancock, president of Joint Venture Silicon Valley. “It is really extraordinary. We are blowing through every economic record.”

    Silicon Valley, defined as Santa Clara County, San Mateo County and Fremont, added about 58,000 jobs in 2014, a 4.1 percent annual jump measured over a 12-month period that ended in June.

    Unlike previous growth periods, such as the dot-com boom, that were driven by a speculative enthusiasm without real revenue, the current economic surge has more heft with more mature companies, Hancock said.

    “In 2000, if you were a teenager and had a business plan on the back of a napkin, you would get financing,” Hancock said. “Back then, the venture capitalists were tripping over themselves to finance startups.”

    This time around, Hancock said, the job gains, particularly in technology, are sustained; established firms such as Google and Apple have shown consistent growth; and digital companies capitalizing on “the Internet of Things” have plenty of new territory to explore for growth opportunities. In addition, venture capitalists are being more careful with their investments.

  11. Tomi Engdahl says:

    User Interface Designers: Start Seeing Colorblind People
    http://www.designnews.com/author.asp?section_id=1386&doc_id=276504&

    In order to understand how colorblindness works, it’s important to understand the basics of how color vision works.

    There’s nothing particularly magical about the number three: most birds, insects, reptiles, and amphibians have four types of cones, and a few have five. These animals can perceive and distinguish between millions of colors that humans can’t. If birds had invented color television, CRTs would probably have at least four kinds of phosphors.

    In humans, a lack of medium-wavelength cones is called deuteranopia.
    Not everyone who is lacking one type of cone has deuteranopia. Some people have blue and green cones, but lack red ones; this is called protanopia.

    Some people have all three types of cones, but still experience color blindness. This is because the peak sensitivity of one cone type is shifted.

    By now, you’ve probably realized that colorblind people don’t simply see in black and white.

    User interface designers who ignore the fact that some form of colorblindness affects as much as 10% of the male population do so at their own risk. Relying on users to distinguish between colors that are fairly close together in the spectrum is a good way to ensure that at least some of them will have difficulties. There are programs (including the open-source Color Oracle) that allow you to see things the way colorblind people see them. This can allow you to spot possible issues. That being said, it’s a good practice to use text, shape, and patterns to distinguish between user interface elements, rather than color alone.

    That also goes for spreadsheets, presentations, CAD files, and FEA results, among other things. It even goes for electrical wiring: I’ve been known to stick pieces of tape with the words “red” and “green” on wires so I won’t mix them up.

    Color Oracle
    Design for the Color Impaired
    http://colororacle.org/

    Color Oracle is a free color blindness simulator for Windows, Mac and Linux. It takes the guesswork out of designing for color blindness by showing you in real time what people with common color vision impairments will see.
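    A small, hypothetical example of the “use text, shape, and patterns rather than color alone” advice in plotting code: each data series gets its own marker, line style and text label in addition to a color, so the chart stays readable for colorblind viewers or in grayscale.

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 10, 30)
    # Illustrative series; names and data are made up for the example.
    series = {
        "baseline":  (np.sin(x),       "o", "-"),
        "optimized": (np.sin(x) * 1.3, "s", "--"),
        "projected": (np.sin(x) * 1.6, "^", ":"),
    }

    for name, (y, marker, style) in series.items():
        # Marker shape + line style + legend text all distinguish the series,
        # so color is a redundant cue rather than the only one.
        plt.plot(x, y, marker=marker, linestyle=style, label=name)

    plt.legend()
    plt.xlabel("time")
    plt.ylabel("value")
    plt.savefig("accessible_plot.png")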

  12. Tomi Engdahl says:

    Microsoft’s Nadella is on a mission to make Windows matter again
    http://www.cnet.com/news/microsoft-nadella-is-on-a-mission-to-make-windows-matter-again/

    The CEO of the world’s largest software maker, who celebrates one year on the job today, is changing Microsoft from the inside out — because he has to.

    If former Microsoft CEO Steve Ballmer saw Windows as a cash cow that just needed to be milked, Satya Nadella seems to view the software as a workhorse straining to pull the company out of its rut.

    He’s got a tough sell.

    Sure, almost 90 percent of all personal computers run some version of Windows. But by 2016, over 2 billion people — or more than a quarter of the world’s population — will have a smartphone, according to eMarketer. And where does Microsoft stand in one of the fast-growing technology arenas in the world? In the shadows, with its Windows software for mobile devices holding a paltry 2.7 percent share of the market, well behind Google’s Android and Apple’s iOS offerings.

    Nadella, who’s spent the last 23 years at Microsoft, knows that the company’s days as master of the tech industry are long gone. Now he’s trying to make it relevant by pushing Microsoft into the age of mobile computing. For Nadella, it’s a question of “renew” or die.

    “You renew yourself every day. Sometimes you’re successful, sometimes you’re not. But it’s the average that counts,”

    With Windows 10, the next version of Microsoft’s operating system, due this year, developers are being promised the ability to write to a single code base. That could be the lure Microsoft needs to convince developers who want to write just once and create the apps that look and feel the same across computers, tablets and smartphones, regardless of what software powers the device.

    Microsoft also needs to be “a company whose developer tools are perceived as being useful to everyone, not just on their own platforms.”

    “Microsoft is going to stay in the phone business as long they’re not cratering,” Gartner’s Adrian said. “Not because they want to get to No. 1 or No. 2 in the phone business, but because as a market participant they learn a lot that extends to the rest of the company.” Microsoft, for instance, can apply what it learns in the consumer market to the enterprise space — and vice versa.

    “Ballmer didn’t have a sense of where Microsoft should go,” said Kay. Nadella, with his steadier hand than the famously bombastic Ballmer, might have a better chance of changing the software behemoth’s course.

    Nadella knows the outcome for Microsoft if he can’t renew the business.

  13. Tomi Engdahl says:

    Google acquires maker of Toontastic storytelling app
    http://www.cnet.com/news/google-acquires-toontastic-for-tike-friendly-tinkering/

    The search giant scoops up Launchpad Toys and its popular make-your-own cartoon app for kids, which is now free to all users.

    Launchpad Toys offers a few kid-friendly mobile apps, including augmented reality app TeleStory, but Toontastic has proven to be its most popular child-focused mobile program. Toontastic is a storytelling app that allows kids to create their own cartoons and tell a story. The app is designed for Apple’s iPad, and as the company puts it, is essentially a modern-day puppet show.

    Once those shows are put on, they can be recorded and shared with others.

  14. Tomi Engdahl says:

    In just one year, Satya Nadella made Microsoft so much better
    http://www.businessinsider.com/satya-nadella-totally-changed-microsoft-2015-2?op=1

    One year ago, Microsoft appointed long-time Microsoftie Satya Nadella as the company’s historic third CEO, and by all accounts, it’s been a helluva ride for the company ever since.

    Much of it good. Some of it not so good.

    What everyone can agree is that Nadella immediately set about creating a more cooperative Microsoft willing to partner with competitors, not just shoot bullets at them.

    That was in stark contrast to the highly combative CEOs it had before, co-founder Bill Gates and early sales chief Steve Ballmer.

    Nadella still has a lot of work to do before the world sees Microsoft as back in the saddle again. But he’s made tremendous progress.

    He’s built excitement for Windows again, thanks to Windows 10.

    He’s built excitement for Windows 10 among developers

    He’s making it easier to bring Android and iOS apps to Windows: He’s offering developers the “Holy Grail” in app development, tools that let an app developer write the app once and easily convert it to all Windows versions and to iOS and Android.

    He’s rolled out a bunch of new products that show what the “New Microsoft” is all about. Nadella’s vision for Microsoft is to “reinvent productivity.”

    He’s built excitement for up-and-coming new Microsoft products. Microsoft’s computerized glasses, HoloLens blew folks away when it was demonstrated.

    He’s turned some of Microsoft’s most bitter rivals into partners.

    He’s overseen Microsoft’s biggest layoff ever — to the applause of employees. Microsoft has cut 18,000 employees, most of them from the feature-phone business of acquired company Nokia.

    Nadella basically ended Microsoft’s war with Apple and Android. He’s still competing with them, but not treating them like pariahs.

    He launched Microsoft into the hottest new market, called the Internet of Things (IoT), which will become a $1.6 trillion market within four years, Microsoft says.

    He made Windows free for all devices with nine-inch or smaller screens, a major business-model change for Windows.

    He ended some of Microsoft’s tackier marketing tactics.

    Overall, though, Nadella has had a pretty remarkable first year. The stock shows it

  15. Tomi Engdahl says:

    Are virtualisation and the cloud SNUFFING OUT traditional backup software?
    Watch out for upstarts, enterprise storage bigshots
    http://www.theregister.co.uk/2015/02/05/virtualisation_and_the_cloud_set_to_kill_trad_backup_sw_growth/

    Five years ago, you might have said that Symantec, EMC and other mainstream suppliers dominated the enterprise backup market with their products – Backup Exec, Net Backup, Networker, TSM and others. But this is no longer the case.

    Server virtualisation and the cloud have ripped gaping holes in the market and newer and smaller backup vendors – including Asigra, Veeam, Infrascale and more – are flooding through, growing their businesses at rates that make mainstream backup software product growth seem like trickles compared to gushing streams.

  16. Tomi Engdahl says:

    Two tiers to stop storage weeping: It’s finally happening
    Flash + object strategies
    http://www.theregister.co.uk/2015/01/16/more_data_more_needs_we_need_to_rethink_storage/

    Comment Enterprises are storing much more data than they did in the past (no news here), and they are going to be storing much more in the near future (no news here either). About a year ago I wrote an article about the necessity for enterprises to consider a new two-tier strategy based on flash and object storage technologies.

    You can see the first signs of this happening ― even if we are at the beginning of a long trail. There are aspects we need to take into serious account to make it really successful.

    Flash memory, in all its nuances and implementations, isn’t a niche technology any more and every primary data storage deal in 2015 will contain a certain amount of flash. Some will be all-flash, others will be hybrid but it can no longer be avoided.

    The economics of traditional primary workloads (IOPS and latency-sensitive) running on flash memory are undeniable when compared to spinning media. But the opposite is also true: when it comes to space ($/GB), the hard disk still wins hands down.
    when data is correctly organised, you can stream data out of a disk very quickly and at a lower cost than flash.

    Primary storage could be part of a hyper-converged infrastructure or external arrays and it has all the smart data services we are now used to seeing (I mean thin provisioning, snapshots, remote replicas and so on). On the other side we could have huge object-based scale-out distributed infrastructures capable of managing several petabytes of data for all non-primary (or better, non IOPS/latency-sensitive workloads), in practice everything ranging from file services and big data, to backup and cold data (like archiving).

    Smart cloud-based analytics is becoming more and more common for primary storage vendors (vendors like Nimble are giving analytics a central role in their strategy, and rightly so). We can’t say the same for secondary systems (which are becoming not so secondary after all). If object-storage becomes the platform to store all the rest of our data, then analytics will become of considerably greater importance in the future.

    One more important aspect is application-awareness. Some primary storage systems know when they are working with a particular database or hypervisor (just to bring up a couple of examples), and they enable specific performance profiles or features to offload servers from doing some heavy tasks (VMWARE vSphere Storage APIs are a vivid example here).

    We need similar functionalities on secondary object-based storage too, but in this case it is necessary to climb up the stack.

    This post stands somewhere between predictions and wishful thinking. But many of the necessary pieces to build the complete picture are ready or in development. However, connecting all the dots will take time, probably somewhere in the range of four to five years.

    Storage is changing, thinking about a new strategy
    http://juku.it/en/storage-changing-thinking-new-straegy/

    Today’s business needs are quickly changing. IT is in charge of the infrastructure that should serve these needs but traditional approaches are no longer aligned with the requests. It’s not about what works and what doesn’t, it’s all about TCA and TCO: the way you buy infrastructure (storage in this case), provision, access and manage it.

    SDS and Flash are really happening

    Nonetheless, next generation All Flash Arrays and Hybrid Arrays are having a big momentum and they all show numbers that are impossible to see on traditional primary storage systems… especially when you also compare prices.

  17. Tomi Engdahl says:

    Helium HDD prices rise way above air-filled spinning rust
    Squeaky voice time for HGST as trad drives come in cheaper
    http://www.theregister.co.uk/2015/02/06/helium_drives_cost_more_than_air_filled_hdds/

    A 6TB Helium-filled drive from HGST costs $120 more than a traditional 6TB drive from Seagate.

    Analyst haus Stifel Nicolaus’ MD, Aaron Rakers, interviewed Seagate’s CFO, Pat O’Malley, and noted afterwards that Seagate estimated an approximate $30/drive extra cost versus traditional air-filled drives. Rakers notes the “HGST (WD) UltraStar He6 (helium) 6TB 7.2k HDDs are currently listed at an average price of ~$550, which compares to the Seagate Enterprise Constellation 6TB 7.2k HDDs listed at ~$420-$430.”

    The Seagate drive has six platters, according to our understanding, while the HGST one has seven.

    Rakers says Seagate was unable to fulfil the strong demand for its 6TB drives in the latest quarter.
    Such a drive takes more than three months for components to pass through the supply chain; a 6TB+ HDD requires up to 20+ weeks of supply chain – 13-14 weeks for heads (wafer-to-head production) and 3+ weeks of test cycles.

  18. Tomi Engdahl says:

    Out Googling Google on Big Data Searches
    NU giving away superior search algorithm
    http://www.eetimes.com/document.asp?doc_id=1325551&

  19. Tomi Engdahl says:

    Wanna see something insane? How about an SSH library written in x64 assembly?

    HeavyThing x86_64 assembler library
    https://2ton.com.au/HeavyThing/

    GPLv3 licensed, commercial support available
    Very fast TLS 1.2 client/server implementation
    Very fast SSH2 client/server implementation

  20. Tomi Engdahl says:

    A Pixelated Platform Game That Never Plays the Same Way Twice
    http://www.wired.com/2015/02/moonman-videogame/

    Minecraft set the gaming world on fire by creating a world where anything can be built. Moonman offers almost the opposite experience. Its eponymous hero, which bears some resemblance to a Minecraft Creeper, is charged with exploring seven themed and completely destroyable worlds. His goal is to find and assemble a series of moon fragments by exploring exotic lands and winning boss fights along the way.

    It’s a simple conceit with an approachable, old-school aesthetic. Its creator, Ben Porter, says you can finish the game in an hour. That’s not long at all, but what makes Moonman work is its infinite replay factor. Every time you start a new session, forests, villages, and crypts are generated anew.

    Porter says successful procedural generation involves a mix of algorithms, hard-coded design features, and above all, a sense of taste.

    “I went through three different fluid simulation techniques before settling on something that works well, but, more importantly, doesn’t subtract the fun-ness from the game.”

    “Modern games technology and tools mean that one developer can now do something that took five developers in the ’90s, when pixel art games were at their peak,” says Porter. “Now we can really start to experiment and push the medium in all different, wonderful, directions.”

    Reply
  21. Tomi Engdahl says:

    Where the iPad should go next: Look toward Windows 10
    http://www.cnet.com/news/where-the-ipad-should-go-next-look-toward-windows-10/

    Where can the iPad go from here? Microsoft has been coming up with some pretty good ideas.

    The iPad is nearly five years old. Ever since its launch, the product has ridden a thin dividing line between iPhones and Macs: between mobile devices and computers.

    Apple’s world has since become all about the iPhone. Look at the latest sales numbers, and they’re shockingly lopsided: iPhones now represent almost three-quarters of Apple’s revenue. Macs and iPads, while still selling in numbers that would cause celebrations at other companies, just don’t seem to have the same fire. Sales of iPads, in fact, are trending downward.

    But 2015 seems to offer the promise of new things in the world of both product lines: the two biggest rumors predict a “big screen” iPad, and a new “small screen” MacBook Air. The irony? Both of these products are said to include a screen in the 12.5-inch realm.

    If these rumors are real, shouldn’t this be just one product? Maybe a thin and light touchscreen tablet, with a detachable keyboard? And maybe — just maybe — running an operating system that combines the best of Mac OS X and iOS?

    For me, the answer lies with Microsoft.

    What Microsoft is getting right: One world, multiple devices

    Windows 8 took a first crack at cross-device one-size-fits-all computing for a variety of types of tablets, laptops and all-in-one PCs, and failed. The experience wasn’t fun for a lot of people, and it turned out that some types of hybrid PCs were better suited to Windows 8 than others.

    Windows 10 looks like it’s made all the right improvements, and explores cross-device computing in a way that feels very Google. Universal apps promise to run on phones, tablets, PCs and maybe even the Xbox One. In practice we’ll see how that goes, but in theory it sounds wonderful.

    Google’s apps like YouTube, Gmail, Drive and Docs take everything to the next level of true connectivity and I can use Google Drive on any device. It generally works well, no matter where I am. Isn’t that the works-anywhere philosophy brought to life? Microsoft’s apps, especially OneDrive, take a similar approach.

    Apple hasn’t been sitting still in this regard. In fact, iOS and OS X have gotten a lot closer, and core apps are coming together in a similar way. But Macs still stand apart from iPhones and iPads.

    Microsoft Windows 10’s Continuum has another solution: it promises to swap seamlessly between touchscreen tablet and keyboard-connected modes, with software that smartly recognizes when keyboard peripherals are attached. It promises to switch between an app-based tablet and a more traditional computer desktop — intelligently.

    Reply
  22. Tomi Engdahl says:

    Pimp my cluster: GPUs, liquid nitrogen and AAAAAH! ..that new compiler smell
    Vets, first timers face off at supercomputing compo
    http://www.theregister.co.uk/2015/02/09/no_u_in_team_unless_its_us_cluster_kid_warriors/

    Reply
  23. Tomi Engdahl says:

    Linux 3.19 released for your computing pleasure
    3.20 is next and 4.0 is nowhere in sight
    http://www.theregister.co.uk/2015/02/09/linux_319_released_for_your_computing_pleasure/

    Version 3.19 of the Linux kernel has been signed off by Linus Torvalds.

    New in this release is improved support for Intel and AMD graphics, plus support for LZ4 compression in SquashFS, which should make for better Linux performance on Live CDs.

    The KVM Hypervisor has dropped support for the IA64 chip, a milestone in that architecture’s demise.

    Reply
  24. Tomi Engdahl says:

    Daniel Eran Dilger / AppleInsider:
    LinkedIn, American Airlines, and other major developers begin to adopt Apple’s Swift, report improved productivity and fewer errors
    http://appleinsider.com/articles/15/02/07/apples-new-swift-programming-language-takes-flight-with-getty-images-american-airlines-linkedin-and-duolingo

    Reply
  25. Tomi Engdahl says:

    MongoDB’s feline brains trust sink their claws into NoSQL
    Relational payoff in the wings?
    http://www.theregister.co.uk/2015/02/10/mongodb_swallows_wiredtiger/

    By any measure, BerkeleyDB was a hit. It became the world’s most widely deployed embedded and open-source database, meaning that the company which did the most work to maintain it, Sleepycat, got swallowed by Larry Ellison’s database giant Oracle in 2006.

    Mike Olson was Sleepycat’s business chief. He’s now CEO of Hadoop venture Cloudera, which is talking about an IPO.

    Michael Cahill and Keith Bostic – two of the brains behind both Berkeley DB and Sleepycat – followed up with database storage engine specialist WiredTiger.

    “We can access an incredible amount of experience in this area – collectively they have more than 75 years of innovation in this space. There aren’t many people who possess their level of experience,” Kelly Stirman, MongoDB director of products, told The Reg.

    “We can continue to develop and work with that technology and gain better visibility and control over how WiredTiger develops in the future,” he added.

    MongoDB 3.0 sees a fundamental re-write of the MongoDB core storage engine and back-end architecture, based on WiredTiger. WiredTiger makes MongoDB a more flexible and scalable NoSQL database.

    MongoDB had been a slave to locking – make a change to one field or object and the entire database was locked while changes were replicated. In the understated lexicon that is dev-speak, this was, er, “sub-optimal.”

    WiredTiger brought in document-level locking, which meant greater flexibility in environments where lots of updates and changes are made. That means places with lots of machine data flooding in – for example, IoT apps, website updates or data analysis.

    MongoDB can now handle lots of write-heavy loads out of the box and claims a write-throughput improvement of between seven and ten times over the previous version.

    WiredTiger also adds compression to MongoDB. The NoSQL document store will now eat between 50 and 80 per cent less disk space than before.

    The biggest impact of WiredTiger is it sees MongoDB embark upon a policy of working with lots of different engines and data types. MongoDB 3.0 has three storage engines: the original MMAP, WiredTiger plus in-memory for those shy of writing their data to disk.

    MongoDB lets you work across different data types, too – it’s not either/or – while keeping the same JSON programming model.
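
    As a rough illustration of the per-document, write-heavy pattern described above, here is a minimal Python sketch. It assumes pymongo 3.x talking to a local MongoDB 3.0 server started with the WiredTiger engine (mongod --storageEngine wiredTiger); the database, collection and field names are invented for the example:

    # Minimal sketch of a write-heavy, per-document workload.
    # Assumes pymongo 3.x and a local mongod started with WiredTiger:
    #   mongod --storageEngine wiredTiger
    from pymongo import MongoClient

    client = MongoClient("localhost", 27017)
    sensors = client.iot_demo.sensor_readings  # hypothetical database/collection

    # Each write touches a single document, so with document-level locking
    # concurrent writers no longer contend for a database-wide lock.
    for device_id in range(100):
        sensors.update_one(
            {"_id": device_id},
            {"$set": {"last_value": device_id * 0.5}, "$inc": {"updates": 1}},
            upsert=True,
        )

    print(sensors.find_one({"_id": 0}))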

    Reply
  26. Tomi Engdahl says:

    PostgreSQL, the NoSQL Database
    http://www.linuxjournal.com/content/postgresql-nosql-database

    One of the most interesting trends in the computer world during the past few years has been the rapid growth of NoSQL databases. The term may be accurate, in that NoSQL databases don’t use SQL in order to store and retrieve data, but that’s about where the commonalities end. NoSQL databases range from key-value stores to columnar databases to document databases to graph databases.

    On the face of it, nothing sounds more natural or reasonable than a NoSQL database. The “impedance mismatch” between programming languages and databases, as it often is described, means that we generally must work in two different languages, and in two different paradigms. In our programs, we think and work with objects, which we carefully construct. And then we deconstruct those objects, turning them into two-dimensional tables in our database. The idea that I can manipulate objects in my database in the same way as I can in my program is attractive at many levels.

    In some ways, this is the holy grail of databases: we want something that is rock-solid reliable, scalable to the large proportions that modern Web applications require and also convenient to us as programmers. One popular solution is an ORM (object-relational mapper), which allows us to write our programs using objects.
    ORMs certainly make it more convenient to work with a relational database, at least when it comes to simple queries.
    But ORMs have their problems as well, in no small part because they can shield us from the inner workings of our database.

    NoSQL advocates say that their databases have solved these problems, allowing them to stay within a single language. Actually, this isn’t entirely true. MongoDB has its own SQL-like query language, and CouchDB uses JavaScript. But there are adapters that do similar ORM-like translations for many NoSQL databases, allowing developers to stay within a single language and paradigm when developing.

    The ultimate question, however, is whether the benefits of NoSQL databases outweigh their issues. I have largely come to the conclusion that, with the exception of key-value stores, the answer is “no”—that a relational database often is going to be a better solution. And by “better”, I mean that relational databases are more reliable, and even more scalable, than many of their NoSQL cousins.

    The thing is, even the most die-hard relational database fan will admit there are times when NoSQL data stores are convenient. With the growth of JSON in Web APIs, it would be nice to be able to store the result sets in a storage type that understands that format and allows me to search and retrieve from it. And even though key-value stores, such as Redis, are powerful and fast, there are sometimes cases when I’d like to have the key-value pairs connected to data in other relations (tables) in my database.

    If this describes your dilemma, I have good news for you. As I write this, PostgreSQL, an amazing database and open-source project, is set to release version 9.4. This new version, like all other PostgreSQL versions, contains a number of optimizations, improvements and usability features. But two of the most intriguing features to me are HStore and JSONB, features that actually turn PostgreSQL into a NoSQL database.

    PostgreSQL was and always will be relational and transactional, and adding these new data types hasn’t changed that. But having a key-value store within PostgreSQL opens many new possibilities for developers. JSONB, a binary version of JSON storage that supports indexing and a large number of operators, turns PostgreSQL into a document database, albeit one with a few other features in it besides.
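
    To make the JSONB point concrete, here is a small Python sketch, assuming psycopg2 and a reachable PostgreSQL 9.4 database called "demo"; the table and the documents are invented for the example. It stores JSON documents, adds a GIN index and queries them with the containment operator @>:

    # Minimal sketch: PostgreSQL 9.4's JSONB as a document store.
    # Assumes psycopg2 and a PostgreSQL 9.4 database named "demo".
    import psycopg2
    from psycopg2.extras import Json

    conn = psycopg2.connect("dbname=demo")
    cur = conn.cursor()

    cur.execute("CREATE TABLE IF NOT EXISTS api_results (id serial PRIMARY KEY, doc jsonb)")
    # A GIN index speeds up containment (@>) and key-existence queries.
    cur.execute("CREATE INDEX api_results_doc_idx ON api_results USING gin (doc)")

    cur.execute("INSERT INTO api_results (doc) VALUES (%s)",
                [Json({"user": "alice", "tags": ["postgres", "jsonb"], "score": 42})])

    # Find every document that contains the given JSON fragment.
    cur.execute("SELECT doc FROM api_results WHERE doc @> %s", [Json({"user": "alice"})])
    print(cur.fetchall())

    conn.commit()
    cur.close()
    conn.close()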

    Reply
  27. Tomi Engdahl says:

    Linaro Launches an Open-Source Spec For ARM SBCs
    http://hardware.slashdot.org/story/15/02/10/0434229/linaro-launches-an-open-source-spec-for-arm-sbcs

    Not content to just standardize ARM-based Linux and Android software, Linaro has just launched 96Boards, an open-source spec for ARM-based single board computers. Along with the spec’s rollout, Linaro also announced a $129 HiKey SBC

    The 96Boards initiative plans to offer a series of specs for small-footprint 32- and 64-bit Cortex-A boards, including an Enterprise Edition (EE) of its spec in Q2.

    Linaro launches open ARM SBC spec, and an octa-core SBC
    http://linuxgizmos.com/linaro-launches-open-arm-sbc-spec-and-an-octa-core-sbc/

    Linaro has launched an open-source spec for ARM SBCs called “96Boards,” first available in a $129 “HiKey” SBC, featuring a Huawei octa-core Cortex-A53 SoC.

    Linaro, the ARM-backed not-for-profit engineering organization that has aimed to standardize open source Linux and Android software for Cortex-A processors, is now trying to do the same thing for hardware. Linaro, which is owned by ARM and many of its top system-on-chip licensees, has launched 96Boards.org, a cross between a single board computer hacker community and an x86-style hardware standards organization.

    96Boards.org has released a Consumer Edition (CE) of the spec with an 85 x 54mm or 85 x 100mm footprint, and both 40- and 60-pin expansion connectors for stackable boards. This will be followed in the second quarter by an Enterprise Edition (EE).

    The 96Boards initiative will offer a series of specs for small-footprint 32- and 64-bit Cortex-A boards “from the full range of ARM SoC vendors,” says Linaro

    32- and 64-bit ARM Open Hardware Boards
    https://www.96boards.org/

    Reply
  28. Tomi Engdahl says:

    RMS Objects To Support For LLVM’s Debugger In GNU Emacs’s Gud.el
    http://developers.slashdot.org/story/15/02/08/210241/rms-objects-to-support-for-llvms-debugger-in-gnu-emacss-gudel

    An anonymous reader writes with the news that Richard Stallman is upset over the prospect of GNU Emacs’s Grand Unified Debugger (Gud.el) supporting LLVM’s LLDB debugger. Stallman says it looks like there is a systematic effort to attack GNU packages and calls for the GNU Project to respond strategically.

    He wrote his concerns to the mailing list after a patch emerged that would optionally support LLDB alongside GDB as an alternative debugger for Emacs.

    RMS Feels There’s “A Systematic Effort To Attack GNU Packages”
    http://www.phoronix.com/scan.php?page=news_item&px=RMS-Emacs-Gud-LLVM

    The patch proposal for gud.el is just about adding basic LLDB support and not about stripping out the GDB support, adding a bunch of LLVM code into Emacs, or anything along those lines… Just about supporting LLDB as an alternative debugger. Richard Stallman responded to this work by writing:

    It looks like there is a systematic effort to attack GNU packages. The GNU Project needs to respond strategically, which means not by having each GNU package cooperate with each attack. For now, please do NOT install this change.

    “Neither Windows nor MacOS was intended to push major GNU packages out of use. What I see here appears possibly to be exactly that. Whether that is the case is what I want to find out.”

    For now the LLVM LLDB debugger support isn’t being added to Emacs gud.el, even though it’s very basic support and an alternative to GDB

    Reply
  29. Tomi Engdahl says:

    Guide to Software Collections – From CentOS Dojo Brussels 2015
    http://www.karan.org/blog/2015/02/05/guide-to-software-collections-from-centos-dojo-brussels-2015/

    At the CentOS Dojo Brussels 2015 Honza Horak presented on Software Collections. Starting from what they are, how they work and how they are implemented. During this 42 min session he also ran through how people can create their own collections and how they can extend existing ones.

    Software Collections are a way to deliver parallel-installable rpm trees that might contain extensions to software already on the machine, or might deliver a new version of a component (e.g. hosting multiple versions of Python or Ruby on the same machine at the same time, still manageable via rpm tools).

    Reply
  30. Tomi Engdahl says:

    DMCA Exemption Campaign Would Let Fans Run Abandoned Games
    http://games.slashdot.org/story/15/02/11/0146207/dmca-exemption-campaign-would-let-fans-run-abandoned-games

    Games that rely on remote servers became the norm many years ago, and as those games age, it’s becoming more and more common for the publisher to shut them down when they’re no longer popular. This is a huge problem for the remaining fans of the games, and the Digital Millennium Copyright Act forbids the kind of hacks and DRM circumvention required for the players to host their own servers.

    An Exemption to the DMCA Would Let Game Fans Keep Abandoned Games Running
    https://www.eff.org/let-game-fans-keep-abandoned-games-running

    Reply
  31. Tomi Engdahl says:

    Which Freelance Developer Sites Are Worth Your Time?
    http://developers.slashdot.org/story/15/02/11/0334259/which-freelance-developer-sites-are-worth-your-time

    Many websites allow you to look for freelance programming jobs or Web development work.

    The problem for developers in the European Union and the United States is that competition from rivals in developing countries is crushing fees for everybody, as the latter can often undercut on price.

    His conclusion? “It’s my impression that the bottom has already been reached, in terms of contractor pricing; to compete these days, it’s not just a question of price, but also quality and speed.” Do you agree?

    Are Freelance Developer Sites Worth Your Time?
    http://news.dice.com/2015/02/10/are-freelance-developer-sites-worth-your-time/?CMPID=AF_SD_UP_JS_AV_OG_DNA_

    The big question, of course, isn’t so much cost as quality, and you can find excellent code anywhere from Indiana to India. I once commissioned some Flash development work via a freelance site and received a range of bids from $30 to $5,000; I shortlisted five and asked to see evidence of previous work from them. One was an Indian guy living in Thailand with a really extensive portfolio. I picked him and he didn’t disappoint: For $150 I got a terrific piece of work done, which included source code. (It was so well done I gave him a $75 bonus.)

    There are reasons to stick closer to home, of course, when selecting a freelancer. Time zone differences can delay changes; trying to arrange discussions with a developer eight (or more) time zones away can quickly become a real pain—even if you leave instructions that are as detailed as possible, chances are good you’ll still have to talk to him or her face-to-face. But cheaper prices are nonetheless a strong motivation for picking developers in developing countries.

    Reply
  32. Tomi Engdahl says:

    Microsoft researchers say their newest deep learning system beats humans — and Google
    http://venturebeat.com/2015/02/09/microsoft-researchers-say-their-newest-deep-learning-system-beats-humans-and-google/

    Microsoft Research has outdone itself again when it comes to a trendy type of artificial intelligence called deep learning.

    In a new academic paper, employees in the Asian office of the tech giant’s research arm say their latest deep learning system can outperform humans by one metric.

    The Microsoft creation got a 4.94 percent error rate for the correct classification of images in the 2012 version of the widely recognized ImageNet data set, compared with a 5.1 percent error rate among humans, according to the paper. The challenge involved identifying objects in the images and then correctly selecting the most accurate categories for the images, out of 1,000 options. Categories included “hatchet,” “geyser,” and “microwave.”

    Delving Deep into Rectifiers:
    Surpassing Human-Level Performance on ImageNet Classification
    http://arxiv.org/pdf/1502.01852v1.pdf

    In this work, we study rectifier neural networks for image classification from two aspects.

    Based on our PReLU networks (PReLU-nets), we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [29]). To our knowledge, our result is the first to surpass human-level performance (5.1%, [22]) on this visual recognition challenge
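
    The PReLU unit at the core of that result is easy to state: it behaves like an ordinary rectifier for positive inputs but passes negative inputs through a small learned slope instead of zeroing them. Here is a minimal numpy sketch of the forward pass; in the paper the coefficient a is a trained, per-channel parameter, and 0.25 is only its initial value, used here as a constant for illustration:

    import numpy as np

    def prelu(x, a=0.25):
        # Parametric ReLU: f(x) = x for x > 0, a * x otherwise.
        # In the paper, a is learned per channel; 0.25 is its initial value.
        return np.where(x > 0, x, a * x)

    print(prelu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [-0.5 -0.125 0. 1.5]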

    Reply
  33. Tomi Engdahl says:

    Is Modern Linux Becoming Too Complex?
    http://linux.slashdot.org/story/15/02/11/0444232/is-modern-linux-becoming-too-complex

    Debian developer John Goerzen asks whether Linux has become so complex that it has lost some of its defining characteristics. “I used to be able to say Linux was clean, logical, well put-together, and organized. I can’t really say this anymore. Users and groups are not really determinative for permissions, now that we have things like polkit running around.”

    Has modern Linux lost its way? (Some thoughts on jessie)
    http://changelog.complete.org/archives/9299-has-modern-linux-lost-its-way-some-thoughts-on-jessie

    This is, in my mind, orthogonal to the systemd question. I used to be able to say Linux was clean, logical, well put-together, and organized. I can’t really say this anymore.

    systemd may help with some of this, and may hurt with some of it; but I see the problem more of an attitude of desktop environments to add features fast without really thinking of the implications. There is something to be said for slower progress if the result is higher quality.

    Then as I was writing this, of course, my laptop started insisting that it needed the root password to suspend when I close the lid. And it’s running systemd. There’s another quagmire…

    Reply
  34. Tomi Engdahl says:

    It’s not the cloud: The problem lies between the chair and the computer – Gartner
    Nearly 100% of private clouds ‘fail’ ‘cos people just don’t ‘get it’
    http://www.theregister.co.uk/2015/02/12/gartner_private_cloud_fail/

    Nearly all private clouds customers build are failing – and the biggest problem isn’t the technology, it’s people.

    That’s according to analyst Gartner, which reckons 95 per cent of the people it polled reported problems with their private clouds.

    The single biggest problem is failure of organisations to change the way they work once they’ve got a cloud running.

    That’s according to Thomas Bittman, vice president and distinguished analyst, who surveyed 140 attendees of Gartner’s Data Center Conference in December.

    The single biggest trap private cloud-spinners have fallen into is failing to change their operational model.

    Technology is one thing, but agile clouds need agile processes – and processes aren’t changing for the more on-demand, service-oriented world of cloud.

    “People are your biggest supporters and your biggest roadblocks,” Bittman wrote last week.

    The next biggest problem is “doing too little”, followed by failure to change the funding model.

    Reply
  35. Tomi Engdahl says:

    Firefox To Mandate Extension Signing
    http://tech.slashdot.org/story/15/02/11/210247/firefox-to-mandate-extension-signing

    In a recent blog post, Mozilla announced its intention to require extensions to be signed in Firefox, without any possible user override. From the post: “For developers hosting their add-ons on AMO, this means that they will have to either test on Developer Edition, Nightly, or one of the unbranded builds. The rest of the submission and review process will remain unchanged, except that extensions will be automatically signed once they pass review.”

    Introducing Extension Signing: A Safer Add-on Experience
    https://blog.mozilla.org/addons/2015/02/10/extension-signing-safer-experience/

    This year will bring big changes for add-on development, changes that we believe are essential to safety and performance, but will require most add-ons to be updated to support them. I’ll start with extension signing, which will ship earlier, and cover other changes in an upcoming post.

    The Mozilla add-ons platform has traditionally been very open to developers. Not only are extensions capable of changing Firefox in radical and innovative ways, but developers are entirely free to distribute them on their own sites, not necessarily through AMO, Mozilla’s add-ons site. This gives developers great power and flexibility, but it also gives bad actors too much freedom to take advantage of our users.

    Extensions that change the homepage and search settings without user consent have become very common, just like extensions that inject advertisements into Web pages or even inject malicious scripts into social media sites. To combat this, we created a set of add-on guidelines all add-on makers must follow, and we have been enforcing them via blocklisting (remote disabling of misbehaving extensions). However, extensions that violate these guidelines are distributed almost exclusively outside of AMO and tracking them all down has become increasingly impractical. Furthermore, malicious developers have devised ways to make their extensions harder to discover and harder to blocklist, making our jobs more difficult.

    An easy solution would be to force all developers to distribute their extensions through AMO, like what Google does for Chrome extensions. However, we believe that forcing all installs through our distribution channel is an unnecessary constraint. To keep this balance, we have come up with extension signing

    All Firefox extensions are affected by this change, including extensions built with the Add-ons SDK. Other add-on types like themes and dictionaries will not require signing and continue to install and work normally. Signature verification will be limited to Firefox, and there are no plans to implement this in Thunderbird or SeaMonkey at the moment.

    Reply
  36. Tomi Engdahl says:

    Your Java Code Is Mostly Fluff, New Research Finds
    http://developers.slashdot.org/story/15/02/11/1744246/your-java-code-is-mostly-fluff-new-research-finds

    But here’s the bottom line: Only about 5% of written Java code captures the core functionality.

    Your code is far more chaff than wheat
    http://www.itworld.com/article/2881655/your-code-is-far-more-chaff-than-wheat.html

    New research finds that the core functionality of a program is encapsulated by just a small fraction of its code

    If you’re a software developer, what percentage of the code you write represents actual functionality, versus filler, fluff or code that’s just required by the language to run? 95%? 75%? 50%? No matter what you guessed, you’re probably way off, because new research has discovered that only about 5% of written code captures the core functionality it actually provides.

    In a new paper titled A Study of “Wheat” and “Chaff” in Source Code, researchers from the University of California, Davis, Southeast University in China, and University College London theorized that, just as with natural languages, some – and probably, most – written code isn’t necessary to convey the point of what it does.

    The authors claimed that the wheat of a function could be encapsulated by small sets of keywords, which they called the “minimum distinguishing subset” or MINSET. The MINSET can be derived by breaking a method down into lexemes (i.e., code delimited by space or punctuation), discarding what’s not important to the behavior of the function and mapping the remaining ones to keywords. Those keywords then make up the MINSET.

    To test their theory that MINSETs of functions are, in fact, a small percentage of the code written

    Here are their main findings:

    MINSETS are surprisingly small. The mean MINSET size of a method was 1.55 keywords and the largest consisted of 6.
    MINSET size didn’t increase with method size. When looking only at the 1,000 largest methods, the average and max MINSET size actually decreased to 1.12 and 4, respectively. This indicates, the authors wrote, that “minsets are small and potentially effective indices of unique information even for abnormally large methods.”

    Most code is almost all chaff. On average, only 4.6% of the unique lexemes in a method make up the MINSET. That is, over 95% of the code is chaff.
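
    The article does not reproduce the paper’s full MINSET construction, but the first step it describes, splitting a method into lexemes on whitespace and punctuation and discarding language scaffolding, is easy to sketch in Python. The noise list and the filtering below are only a crude stand-in for what the researchers actually do:

    import re

    # Toy illustration of the lexeme step described above. The real MINSET
    # derivation in the paper is considerably more involved than this.
    JAVA_NOISE = {"public", "private", "static", "void", "int", "return",
                  "new", "if", "else", "for", "while", ""}

    def lexemes(source):
        # Split on whitespace and punctuation, as in the article's description.
        return re.split(r"[\s\W]+", source)

    def keyword_candidates(source):
        return {tok for tok in lexemes(source) if tok.lower() not in JAVA_NOISE}

    method = "public int addInterest(Account acct) { return acct.balance + rate; }"
    print(keyword_candidates(method))
    # e.g. {'addInterest', 'Account', 'acct', 'balance', 'rate'}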

    Reply
  37. Tomi Engdahl says:

    Elementary OS: Why We Make You Type “$0”
    http://news.slashdot.org/story/15/02/11/1753219/elementary-os-why-we-make-you-type-0

    Open source software can always be acquired without charge, but it can still incur significant development costs. Elementary OS wants to make people aware of this, and has changed its website to suggest donating when downloading, making users explicitly enter “$0” if they want a free download. This is the same strategy Canonical has used when offering Ubuntu. The Elementary OS blog explains: “Developing software has a huge cost. Some companies offset that cost by charging hundreds of dollars for their software, making manufacturers pay them to license the software, or selling expensive hardware with the OS included. Others offset it by mining user data and charging companies to target ads to their users. [...]”

    Payments
    Or, Why we make you type “$0”
    http://blog.elementaryos.org/post/110645528530/payments

    We explicitly say you can download Luna for free, we include a pay-what-you-want (including $0) text entry with $10 pre-filled, and we also include an explicit “Download Luna for free” link that simply sets the text entry to $0 for you.

    Users have downloaded Luna over 2,000,000 times. Around 99.875% of those users download without paying. Of the tiny 0.125% who do, the most common payments are the default $10, followed by $1. But again, only a tiny fraction of one percent of users even decide to pay in the first place.

    On the new site, we’re changing the approach. Rather than a text entry by default, we’ve opted to present users with some easy one-button choices

    Why We Make You Type “$0”

    We want users to understand that paying for software is important and not paying for it is an active choice. We didn’t exclude a $0 button to deceive you; we believe our software really is worth something.

    But Open Source means Free!

    elementary is under no obligation to release our compiled operating system for free download. We’ve invested money into its development, hosting our website, and supporting users. However, we understand the culture that currently surrounds open source: users tend to feel that they should receive full, compiled releases of software at zero cost. While we could rightfully disallow free downloads, we don’t want to. We believe in the pay-what-you-want model and want to see it succeed. Most importantly, we don’t want to lock out people who may not be able to afford pricey software, especially in volume.

    Reply
  38. Tomi Engdahl says:

    Elementary, My Dear Linux User
    http://www.linuxjournal.com/content/elementary-my-dear-linux-user

    I suspect there are as many Ubuntu-based Linux distributions as there are all other distributions combined. Many of them are designed with a specific purpose in mind.

    Elementary OS is just another in a long list of variants, but what it does, it does very well.

    Upon first boot, Elementary obviously is designed to look and function like OS X.

    Elementary OS includes the Ubuntu Software Center, and like most variants, it can install any program in the Ubuntu repositories. Out of the box, however, it’s a clean, fast operating system that people familiar with OS X will recognize right away.

    The version based on Ubuntu 14.04 is still in beta, but the stable version is available today and looks very similar. Give it a spin at http://elementaryos.org.

    Reply
  39. Tomi Engdahl says:

    Dell XPS 13 Teardown
    https://www.ifixit.com/Teardown/Dell+XPS+13+Teardown/36157

    Dell says “no” to physics and threads a 13.3″ HD display into an impossibly small laptop.

    The early 2015 Dell XPS 13 is our newest bit of teardown tech—time to tear it open!

    This is a good example of what companies should do for us in the first place; Dell is by no means losing anything by giving away its service manuals. If you open the computer during the warranty period, you just void the warranty, end of story. But after that time is gone, if you want to do your own repairs, you can do it without guessing or breaking stuff. I’m a hardcore Apple user and certified technician, but this one really got my attention; it’s a nice looking laptop.

    Reply
  40. Tomi Engdahl says:

    Dell XPS 13 Teardown: ‘Strikingly Similar’ To MacBook Air, Says iFixit
    http://www.forbes.com/sites/brookecrothers/2015/02/12/dell-xps-13-teardown-strikingly-similar-to-macbook-air-says-ifixit/

    Reviews have compared the new Dell XPS 13 regularly to the MacBook Air. Now repair site iFixit is proclaiming a “striking” resemblance in some key respects.

    Calling the 2015 XPS 13 “Dell’s New MacBook Air,” the widely-followed teardown experts at iFixit cracked open an XPS 13 to find a blueprint for a MacBook Air.

    Reply
  41. Tomi Engdahl says:

    ONTAP isn’t putting NetApp ONTOP
    All that is storage does not beget gold
    http://www.theregister.co.uk/2015/02/12/ontap_isnt_putting_netapp_ontop_instead_its_blinding_the_firm/

    NetApp’s third quarter results were poor but NetApp revenues have been flat or falling for a couple of years now. Management doesn’t seem to view this as a deep-set problem requiring deep-set answers.

    NetApp remains convinced of the fundamentals and its strategy of using an ONTAP data fabric to provide end-to-end storage facilities across private and public clouds.

    FlashRay is a single controller product in limited availability, with little or no ONTAP integration and few data services. The sense of excitement around it is palpably absent.

    Over-reliance on ONTAP

    El Reg would sum up NetApp by saying it is focusing too much on its installed base and, secondly, is far too ONTAP-centric. Every strategic initiative has to support ONTAP in some way. Look at what happened to FlashRay. The product, intended to be revolutionary at its inception, is arguably being stifled now by the concentration on all-flash FAS for data services and EF560 for speed.

    Reply
  42. Tomi Engdahl says:

    Traditional enterprise workloads on an all-flash array? WHY WOULD I BOTHER?
    Latency from humans in front of the screen
    http://www.theregister.co.uk/2015/02/12/interesting_question/

    Are all-flash arrays ready for legacy enterprise workloads? The latest little spat between EMC and HP bloggers asked that question.

    But it’s not really an interesting question. A more interesting question would be: “Why would I put traditional enterprise workloads on an AFA?”

    More and more I’m coming across people who are asking precisely that question and struggling to come up with an answer. Yes, an AFA makes a workload run faster, but what does that gain me? It really is very variable across application type and where the application bottlenecks are. If you have a workload that does not rely on massive scale and parallelism, you will probably find that a hybrid array will suit you better and you will gain pretty much all the benefits of flash at a fraction of the cost.

    If all your latency is the human in front of the screen, the differences in response times from your storage become pretty insignificant.

    Reply
  43. Tomi Engdahl says:

    DDN purrs, rubs itself around Big Blue’s legs, snuggles up to POWER
    Storage firm hasn’t snubbed Intel CPUs just yet…
    http://www.theregister.co.uk/2015/02/13/ddn_getting_more_power/

    Intel CPU-using DDN has joined the OpenPOWER Foundation, which is focussed on the use of IBM’s competing POWER processor design.

    DDN, which supplies storage arrays for HPC, big data and enterprise use, says that, by working with the foundation it will help partners and customers drive HPC infrastructure technologies further into enterprise markets.

    These include DDN’s fastest growth areas: financial services, oil and gas and large scale web, cloud and service providers.

    DDN has systems that employ IBM’s GPFS parallel file system software.

    Reply
  44. Tomi Engdahl says:

    W3C turns BROWSERS into VIBRATORS
    New API should shake things up and generate some buzz
    http://www.theregister.co.uk/2015/02/13/w3c_turns_browsers_into_vibrators/

    Web wonks at the W3C have issued a new Recommendation that gives browsers control of vibrators.

    Recommendations are the W3C’s polite way of defining standards, so this week’s notification that the Vibration API has attained this status means the world now has a standard way to make devices throb, buzz, jitter, oscillate or flutter.

    The API’s intended uses include gaming and adding tactile feedback to all manner of applications.

    A few folks have also pondered whether the standard might be abused, for example by shaking a phone so that its owner thinks they have received a call or notification. Battery depletion is another scenario

    Vibration API
    http://www.w3.org/TR/2015/REC-vibration-20150210/

    // vibrate for 1000 ms
    navigator.vibrate(1000);

    // cancel any existing vibrations
    navigator.vibrate(0);
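
    The API also accepts a pattern of alternating vibration and pause durations in milliseconds, which is what gives games and other tactile-feedback uses finer control, for example:

    // vibrate for 200 ms, pause for 100 ms, then vibrate for 200 ms again
    navigator.vibrate([200, 100, 200]);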

    Reply
  45. Tomi Engdahl says:

    FOCUS! 7680 x 4320 notebook and fondleslab screens are coming
    DisplayPort standard beefs up to plonk 8k screens in your lap
    http://www.theregister.co.uk/2015/02/13/displayport_standard_beefs_up_to_handle_8k_notebooks/

    An update to the DisplayPort standard is promising to bring about a new line of 8k screens for notebooks, tablets and all-in-ones.

    The Video Electronics Standards Association (VESA) has published a new standard for embedded display hardware. The eDP 1.4a update will support connections of 8.1Gbps per-channel and allow for resolutions of 7680 x 4320, or 8K.

    The embedded display category includes notebooks, tablets, smartphones and all-in-one PCs, meaning vendors will soon be able to offer those devices with clearer, higher resolution screens.

    The eDP 1.4a will also add support for partial update functions, in which the GPU only changes a portion of the screen on refresh, and Multi-SST Operation, a feature which VESA believes will allow vendors to build thinner LCD displays that require less power to operate.
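
    A quick back-of-the-envelope calculation in Python shows why a feature like partial update matters at this resolution. The lane count (4), colour depth (24 bits per pixel) and refresh rate (60 Hz) below are assumptions for illustration, not figures from the article:

    # Rough arithmetic only; 4 lanes, 24 bpp and 60 Hz are assumed values.
    lanes, lane_rate_gbps = 4, 8.1
    link_raw_gbps = lanes * lane_rate_gbps     # 32.4 Gbps of raw link rate

    pixels = 7680 * 4320
    full_frame_gbps = pixels * 24 * 60 / 1e9   # ~47.8 Gbps for uncompressed 8K at 60 Hz
    print(link_raw_gbps, round(full_frame_gbps, 1))

    Under those assumptions, pushing every pixel of an uncompressed 8K frame 60 times a second would need more than the raw link provides, which is why only refreshing the part of the screen that has changed is such a useful optimisation.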

    Reply
  46. Tomi Engdahl says:

    What Does It Mean to Be a Data Scientist?
    http://news.dice.com/2015/02/12/what-does-it-mean-to-be-a-data-scientist/?CMPID=AF_SD_UP_JS_AV_OG_DNA_

    At Dice, we’ve had a data-science team for two years. As research and development for the firm, we’ve worked on a number of different projects

    At times we may get technical, show some neat visualizations of our data, or wax lyrical about current and emerging trends in the industry

    To be honest, I often don’t tell people I am a data scientist. It’s not that I don’t enjoy my job (I do!) nor that I’m not proud of what we’ve achieved (I am); it’s just that most people don’t really understand what you mean when you say you’re a data scientist, or they assume it’s some fancy jargon for something else

    If I do answer, I normally follow up with analogies. In the modern world, data science is pervasive; it impacts a lot of what we do, particularly online. So I talk about how I work on recommender systems, citing Netflix and Amazon as examples, and work on enhancing our search engine. The latter always involves a reference to Google, the pinnacle of search sophistication

    I’m not the first to try and define “data scientist,” and I won’t be the last.

    IBM stresses that a data scientist, above all else, must have a strong business acumen and be able to effectively communicate ideas to core decision makers in the organization.

    Uses the Right Tools

    There are a plethora of tools for data science, from machine learning to statistical analysis and crunching large datasets. It can be very tempting to spend a lot of time researching different tools, and using the coolest new toys to solve a particular problem. However, it’s important to actually get some work done, and there’s only so much time you can spend evaluating tools: You need to be selective, and listen to what other people in the industry recommend for similar problems.

    The technology industry is as much driven by fads as the fashion world, and there is a tendency to try to use new technologies for problems they aren’t suited to handle. The best and most commonly stated example of this is Hadoop. A lot of companies seem to be under the impression that if you’re not using Hadoop, then you are not doing data science. The reality is that a lot of businesses don’t have the amount of data that warrants a Hadoop cluster. For those that do, it may still not be the best tool out there; certain tasks, for instance certain machine learning algorithms, have to be executed in a serial manner and cannot take full advantage of MapReduce.

    Similarly, Hadoop is not a good tool for running complex queries, which is one of the reasons that Google has moved away from the pure MapReduce paradigm they invented into more complicated systems such as Spanner. At Dice, we find Amazon’s RedShift more than competent for most of our Big Data-processing needs, and also leverage Apache Spark for some of the most processing-intensive tasks.

    Reply
  47. Tomi Engdahl says:

    XO-Infinity is a modular laptop for students, picks up where OLPC left off
    http://liliputing.com/2015/02/xo-infinity-is-a-modular-laptop-for-students-picks-up-where-olpc-left-off.html

    The One Laptop Per Child project introduced the idea of small, low-cost, low-power, durable laptops that could be used by students around the world, with a special focus on developing markets. The first XO Laptop units shipped in 2007, and over the next few years the developers improved the hardware and software… and inspired consumer devices like netbooks as well as other education-focused projects like Intel’s Classmate PC lineup.

    It’s been a few years since OLPC launched a new laptop, much of the core team has left the project, and these days the company partners with Vivitar to offer cheap Android tablets.

    But the folks at OLPC’s Australian partner One Education have decided to pick up where OLPC left off. They’ve just introduced a new laptop design called the XO-Infinity. It’s a modular laptop designed to be upgraded and modified, and it could ship in 2016.

    Reply
  48. Tomi Engdahl says:

    Top 10 crowd-funded PCs: How Steve Jobs’ heirs are building the next great computer
    http://www.zdnet.com/article/top-10-crowd-funded-pcs-how-steve-jobs-heirs-are-building-the-next-great-computerreat-computer/

    Summary: The crowd-funding revolution has led to a number of fascinating desktop, laptop, and tablet PC projects. Here are some of the most noteworthy — and successful.

    Reply
