Computer trends for 2015

Here comes my long list of computer technology trends for 2015:

Digitalisation will change all business sectors, and our daily work, even more than before. It also changes the IT sector itself: traditional software packages are moving rapidly into the cloud, and the need to own or rent your own IT infrastructure is dramatically reduced. Automated configuration and monitoring become truly possible. The workload of software implementation projects will shrink significantly because software needs less adjustment. Traditional IT outsourcing is definitely threatened. Security management is one of the key factors to change, as security threats increasingly come from the digital world. For the IT sector, digitalisation simply means “cheaper and better.”

The phrase “Communications Transforming Business” is becoming the new normal. The pace of change in enterprise communications and collaboration is very fast. A new set of capabilities, empowered by the combination of Mobility, the Cloud, Video, software architectures and Unified Communications, is changing expectations for what IT can deliver.

Global Citizenship: Technology Is Rapidly Dissolving National Borders. Besides your passport, what really defines your nationality these days? Is it where you live? Where you work? The language you speak? The currency you use? If so, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Increasingly, technological developments will allow us to live and work almost anywhere on the planet… (and even beyond). In my mind, a borderless world will be a more creative, lucrative, healthy, and, frankly, exciting one. Especially for entrepreneurs.

The traditional enterprise workflow is ripe for huge change as the focus moves away from working in a single context on a single device to the workflow being portable and contextual. InfoWorld’s executive editor, Galen Gruman, has coined a phrase for this: “liquid computing.” The promised increase in productivity is stunning, but the loss of control over data will cross an alarming threshold for many IT professionals.

Mobile will be used more and more. Currently, 49 percent of businesses across North America have adopted between one and ten mobile applications, indicating significant acceptance of these solutions. Properly leveraged, mobility promises to increase visibility and responsiveness in the supply chain. Increased employee productivity and business process efficiencies are seen as the key business impacts.

The Internet of things is a big, confusing field waiting to explode.  Answer a call or go to a conference these days, and someone is likely trying to sell you on the concept of the Internet of things. However, the Internet of things doesn’t necessarily involve the Internet, and sometimes things aren’t actually on it, either.

The next IT revolution will come from an emerging confluence of liquid computing plus the Internet of things. The two trends are connected — or should connect, at least. If we are to trust the consultants, we are in a sweet spot for significant change in computing that all companies and users should look forward to.

Cloud will be talked about a lot and taken into wider use. Cloud is the next generation of supply chain for IT. A global survey of executives predicted a growing shift towards third-party providers to supplement internal capabilities with external resources. CIOs are expected to adopt a more service-centric enterprise IT model. Global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion in 2014 (up 20% from $145.2 billion in 2013), and growth will continue to be fast (“By 2017, enterprise spending on the cloud will amount to a projected $235.1 billion, triple the $78.2 billion in 2011”).

The rapid growth in mobile, big data, and cloud technologies has profoundly changed market dynamics in every industry, driving the convergence of the digital and physical worlds and changing customer behavior. It’s an evolution that IT organizations struggle to keep up with. To succeed in this situation you need to combine traditional IT with agile and web-scale innovation. There is value in both the back-end operational systems and the fast-changing world of user engagement. You are now effectively operating two-speed IT (also called bimodal IT or traditional IT/agile IT). You need a new API-centric layer in the enterprise stack, one that enables two-speed IT.
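
To make the API-centric layer idea concrete, here is a minimal sketch in Python (using Flask purely as an example; the endpoint path, back-end stub and returned fields are hypothetical, not from any of the sources above). The point is that fast-changing mobile and web apps program against a stable API contract, while the slow-moving system of record evolves on its own schedule behind it.

# Minimal sketch of an API-centric layer for two-speed IT (Flask assumed installed).
# Endpoint path, back-end stub and data fields are hypothetical illustrations.
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_order_from_backend(order_id):
    # Stand-in for a call into the slow-moving operational back-end (ERP, mainframe, ...).
    return {"id": order_id, "status": "shipped", "items": 3}

@app.route("/api/v1/orders/<int:order_id>")
def get_order(order_id):
    # Agile front ends consume this stable JSON contract,
    # decoupled from the back-end system's release cycle.
    return jsonify(fetch_order_from_backend(order_id))

if __name__ == "__main__":
    app.run(port=5000)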

As Robots Grow Smarter, American Workers Struggle to Keep Up. Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work. Automation is not only replacing manufacturing jobs, it is displacing knowledge and service workers too.

In many countries the IT recruitment market is flying, having picked up to a post-recession high. Employers beware – after years of relative inactivity, job seekers are gearing up for change. Economic improvements and an increase in business confidence have led to a burgeoning jobs market and an epidemic of itchy feet.

Hopefully the IT department is increasingly being seen as a profit centre rather than a cost centre, with IT budgets commonly split between keeping the lights on and spending on innovation and revenue-generating projects. Historically IT was about keeping the infrastructure running and there was no real understanding outside of that, but the days of IT being locked in a basement are gradually changing. CIOs and CMOs must work more closely to increase focus on customers next year or risk losing market share, Forrester Research has warned.

Good questions to ask: Where do you see the corporate IT department in five years’ time? With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT? What IT process or activity is the most important in creating superior user experiences to boost user/customer satisfaction?

 

Windows Server 2003 reaches end of life in summer 2015 (July 14, 2015). There are millions of servers globally still running the 13-year-old OS, with one in five customers forecast to miss the 14 July deadline when Microsoft turns off extended support. There were estimated to be 2.7 million WS2003 servers in operation in Europe a few months back. This will keep system administrators busy, because there is only about half a year left, and an upgrade to Windows Server 2008 or Windows Server 2012 may bring difficulties. Microsoft and support companies do not seem interested in continuing Windows Server 2003 support, so for those who need it, the custom pricing can be “incredibly expensive”. At this point it seems that many organizations want a new architecture and consider moving the servers to the cloud as one option.
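
To put the remaining time in numbers, a quick back-of-the-envelope calculation in Python (counting from 1 January 2015, which is my own assumption for illustration):

# Days left until Windows Server 2003 extended support ends on July 14, 2015,
# counted from the start of 2015.
from datetime import date

days_left = (date(2015, 7, 14) - date(2015, 1, 1)).days
print(days_left, "days left")  # 194 days, i.e. a little over six months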

Windows 10 is coming to PCs and mobile devices. Just a few months back Microsoft unveiled its new operating system, Windows 10. The new Windows 10 OS is designed to run across a wide range of machines, including everything from tiny “internet of things” devices in business offices to phones, tablets, laptops, and desktops to computer servers. Windows 10 will have exactly the same requirements as Windows 8.1 (the same minimum PC requirements that have existed since 2006: a 1GHz, 32-bit chip with just 1GB of RAM). A technical preview is already available. Microsoft says to expect AWESOME things of Windows 10 in January. Microsoft will share more about the Windows 10 ‘consumer experience’ at an event on January 21 in Redmond and is expected to show a Windows 10 mobile SKU at the event.

Microsoft is going to monetize Windows differently than before. Microsoft Windows has made headway in the market for low-end laptops and tablets this year by reducing the price it charges device manufacturers, charging no royalty on devices with screens of 9 inches or less. That has resulted in a new wave of Windows notebooks in the $200 price range and tablets in the $99 price range. The long-term success of the strategy against Android tablets and Chromebooks remains to be seen.

Microsoft is pushing the Universal Apps concept. Microsoft has announced Universal Windows Apps, allowing a single app to run across Windows 8.1 and Windows Phone 8.1 for the first time, with additional support for Xbox coming. Microsoft promotes a unified Windows Store for all Windows devices. The Windows Phone Store and Windows Store will be unified with the release of Windows 10.

Under new CEO Satya Nadella, Microsoft realizes that, in the modern world, its software must run on more than just Windows. Microsoft has already revealed Microsoft Office programs for the Apple iPad and iPhone. It also has an email client for both the iOS and Android mobile operating systems.

With Mozilla Firefox and Google Chrome grabbing so much of the desktop market—and Apple Safari, Google Chrome, and Google’s Android browser dominating the mobile market—Internet Explorer is no longer the force it once was. The article “Microsoft May Soon Replace Internet Explorer With a New Web Browser” says that Microsoft’s Windows 10 operating system will debut with an entirely new web browser code-named Spartan. This new browser is a departure from Internet Explorer, the Microsoft browser whose relevance has waned in recent years.

SSD capacity has always lagged well behind hard disk drives (hard disks are in 6TB and 8TB territory while SSDs are primarily 256GB to 512GB). Intel and Micron will try to kill the hard drive with new flash technologies. Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Later (in the next two years) Intel promises 10TB+ SSDs thanks to 3D Vertical NAND flash memory. Interfaces to SSDs are also evolving beyond traditional hard disk interfaces. PCIe flash and NVDIMMs will make their way into shared storage devices more in 2015. The ULLtraDIMM™ SSD connects flash storage to the memory channel via standard DIMM slots, in order to close the gap between storage devices and system memory (less than five microseconds write latency at the DIMM level).

Hard disks will still be made in large quantities in 2015. It seems that NAND is not taking over the data centre immediately. The big problem is $/GB. Estimates of shipped disk and SSD capacity out to 2018 show disk growing faster than flash. The world’s ability to make and ship SSDs is falling behind its ability to make and ship disk drives – for SSD capacity to match disk by 2018 we would need roughly eight times more flash foundry capacity than we have. New disk technologies such as shingling, TDMR and HAMR are upping areal density per platter and bringing down cost/GB faster than NAND technology can. At present, solid-state drives with extreme capacities are very expensive. I expect that in 2015 SSD prices will still be so much higher than hard disk prices that everybody who needs to store large amounts of data will want to consider SSD + hard disk hybrid storage systems.
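
To see why hybrid setups are attractive, here is a rough $/GB sketch in Python. The per-gigabyte prices and the 10% hot-data share are my own illustrative assumptions, not figures from the estimates above:

# Back-of-the-envelope storage cost comparison for a 10 TB data set.
# Prices and hot-data fraction below are assumptions for illustration only.
ssd_price_per_gb = 0.50   # assumed SATA SSD street price, $/GB
hdd_price_per_gb = 0.04   # assumed 3.5-inch hard disk price, $/GB
capacity_gb = 10000

all_ssd = capacity_gb * ssd_price_per_gb
all_hdd = capacity_gb * hdd_price_per_gb
# Hybrid: keep a hot 10% working set on SSD, the remaining 90% on disk.
hybrid = 0.1 * capacity_gb * ssd_price_per_gb + 0.9 * capacity_gb * hdd_price_per_gb

print("All-SSD: $%.0f, All-HDD: $%.0f, Hybrid: $%.0f" % (all_ssd, all_hdd, hybrid))
# All-SSD: $5000, All-HDD: $400, Hybrid: $860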

PC sales, and even laptop sales, are down, and manufacturers are pulling out of the market. The future is all about the device. We have entered the post-PC era so deeply that even the tablet market seems to be saturating, as most people who want one already have one. The crazy years of huge tablet sales growth are over. Tablet shipment growth in 2014 was already quite low (7.2% in 2014, to 235.7M units). There is no great reason to expect either growth or decline in the tablet market in 2015, so I expect it to be stable. IDC expects the iPad to see its first-ever decline, and I expect that too, because the market seems to be more and more taken by Android tablets that have turned out to be “good enough”. Wearables, Bitcoin or messaging may underpin the next consumer computing epoch, after the PC, internet, and mobile.

New tiny PC form factors are coming. Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that”. The compute stick has been likened to similar thumb PCs that plug into an HDMI port and are offered by PC makers with the Android OS and an ARM processor (for example the Wyse Cloud Connect and many cheap Android sticks). Such devices typically don’t have much internal storage, but can be used to access files and services in the cloud. Intel expects the stick-sized PC market to grow to tens of millions of devices.

We have entered the era of post-Microsoft, post-PC programming: the portable revolution. Tablets and smartphones are fine for consuming information: a great way to browse the web, check email, stay in touch with friends, and so on. But what does a post-PC world mean for creating things? If you’re writing platform-specific mobile apps in Objective-C or Java then no, the iPad alone is not going to cut it. You’ll need some kind of iPad-to-server setup in which your iPad becomes a mythical thin client for the development environment running on your PC or in the cloud. If, however, you’re working with scripting languages (such as Python and Ruby) or building web-based applications, the iPad or another tablet could be a usable development environment. At least it is worth testing.
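
As a concrete example of what “tablet as a development environment” can mean in practice, here is a minimal Python sketch: a tiny web service, written with only the standard library, that you could edit over SSH (or in a browser-based editor) on a remote server from a tablet and run there with the stock interpreter. The port number and message are arbitrary choices of mine.

# Minimal standard-library web service - small enough to write and run
# from a tablet's SSH client against a remote server.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to every GET request with a plain-text greeting.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"Hello from a server edited on a tablet\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), HelloHandler).serve_forever()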

You need to prepare to learn new languages that are good for specific tasks. Attack of the one-letter programming languages: from D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following. Watch out! The coder in the next cubicle might have been bitten and infected with a crazy-eyed obsession with a programming language that is not Java and goes by a mysterious one-letter name. Each offers compelling ideas that could do the trick in solving a particular problem you need fixed.

HTML5’s “Dirty Little Secret”: It’s Already Everywhere, Even In Mobile. Just look under the hood. “The dirty little secret of native [app] development is that huge swaths of the UIs we interact with every day are powered by Web technologies under the hood.”  When people say Web technology lags behind native development, what they’re really talking about is the distribution model. It’s not that the pace of innovation on the Web is slower, it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Vine is a great example of a modern JavaScript app. It’s lightning fast on desktop and on mobile, and shares the same codebase for ease of maintenance.

Docker, meet hype. Hype, meet Docker. Docker: sorry, you’re just going to have to learn about it. Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds. Docker containers are supported by very many Linux systems. And it is not just Linux anymore, as Docker’s app containers are coming to Windows Server, says Microsoft. What containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other. What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications.
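
To make the idea tangible, here is a minimal sketch that drives the Docker command line from Python’s standard library. It assumes the Docker engine and CLI are installed locally; the image name and echoed message are arbitrary illustration choices.

# Launch a throwaway container: it shares the host kernel but gets its own
# filesystem, process table and network namespace, and is removed afterwards (--rm).
import subprocess

result = subprocess.run(
    ["docker", "run", "--rm", "alpine", "echo", "hello from a container"],
    stdout=subprocess.PIPE, universal_newlines=True, check=True)
print(result.stdout.strip())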

Domestic software is on the rise in China. According to the article “China Is Planning to Purge Foreign Technology and Replace With Homegrown Suppliers”, China is aiming to purge most foreign technology from banks, the military, state-owned enterprises and key government agencies by 2020, stepping up efforts to shift to Chinese suppliers, according to people familiar with the effort. In tests, workers have replaced Microsoft Corp.’s Windows with a homegrown operating system called NeoKylin (a FreeBSD-based desktop OS), and Dell commercial PCs will preinstall NeoKylin in China. The plan is driven by national security concerns and marks an increasingly determined move away from foreign suppliers. There are cases of replacing foreign products at all layers, from applications and middleware down to infrastructure software and hardware. Foreign suppliers may be able to avoid replacement if they share their core technology or give China’s security inspectors access to their products. The campaign could have lasting consequences for U.S. companies including Cisco Systems Inc. (CSCO), International Business Machines Corp. (IBM), Intel Corp. (INTC) and Hewlett-Packard Co. A key government motivation is to bring China up from low-end manufacturing to the high end.

 

Data center markets will grow. MarketsandMarkets forecasts the data center rack server market to grow from $22.01 billion in 2014 to $40.25 billion by 2019, at a compound annual growth rate (CAGR) of 7.17%. North America (NA) is expected to be the largest region for the market’s growth in terms of revenues generated, but Asia-Pacific (APAC) is also expected to emerge as a high-growth market.
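
As a quick sanity check on those growth figures, the compound annual growth rate implied by the quoted 2014 and 2019 endpoints can be computed directly in Python:

# CAGR implied by the forecast endpoints quoted above:
# $22.01 billion (2014) growing to $40.25 billion (2019), i.e. over five years.
start, end, years = 22.01, 40.25, 5
cagr = (end / start) ** (1.0 / years) - 1
print("Implied CAGR: %.1f%%" % (cagr * 100))  # roughly 12.8% per year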

The rising need for virtualized data centers and the incessantly increasing data traffic are considered strong drivers for the global data center automation market. The SDDC comprises software-defined storage (SDS), software-defined networking (SDN) and software-defined server/compute, wherein all three components are empowered by specialized controllers that abstract the control plane from the underlying physical equipment. These controllers virtualize the network, server and storage capabilities of a data center, thereby giving better visibility into data traffic routing and server utilization.

New software-defined networking apps will be delivered in 2015. And so will software-defined storage, and software-defined almost anything (I am waiting for the day we see software-defined software). Customers are ready to move away from vendor-driven proprietary systems that are overly complex and impede their ability to rapidly respond to changing business requirements.

Large data center operators will be using more and more of their own custom hardware instead of standard PCs from traditional computer manufacturers. Intel is betting on (customized) commodity chips for cloud computing and expects that over half the chips it sells to public clouds in 2015 will have custom designs. The biggest public clouds (Amazon Web Services, Google Compute, Microsoft Azure), other big players (like Facebook or China’s Baidu) and other public clouds (like Twitter and eBay) all have huge data centers that they want to run optimally. Companies like A.W.S. “are running a million servers, so floor space, power, cooling, people — you want to optimize everything”. That is why they want specialized chips, and customers are willing to pay a little more for a special run of chips. While most of Intel’s chips still go into PCs, about one-quarter of Intel’s revenue, and a much bigger share of its profits, come from semiconductors for data centers. In the first nine months of 2014, the average selling price of PC chips fell 4 percent, but the average price of data center chips was up 10 percent.

We have seen GPU acceleration taken into wider use. Special servers and supercomputer systems have long been accelerated by moving calculations to graphics processors. The next step in acceleration will be adding FPGAs to accelerate x86 servers. FPGAs provide a unique combination of highly parallel custom computation, relatively low manufacturing/engineering costs, and low power requirements. FPGA circuits may provide a lot more computing power at a much lower power consumption, but traditionally programming them has been time-consuming. This can change with the introduction of new tools (the next step from techniques learned from GPU acceleration). Xilinx has developed its SDAccel tools to develop algorithms in the C, C++ and OpenCL languages and translate them to FPGAs easily. IBM and Xilinx have already demoed FPGA-accelerated systems. Microsoft is also doing research on accelerating applications with FPGAs.
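
To give a flavour of the OpenCL programming model that tools like SDAccel compile down to FPGA fabric, here is a minimal vector-add sketch in Python using PyOpenCL. It runs on whatever OpenCL device is available (CPU or GPU); the package choice and the trivial kernel are my own illustration, not part of the Xilinx tooling described above.

# Minimal OpenCL host + kernel example (assumes PyOpenCL and NumPy are installed).
import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()      # pick any available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The same kind of kernel, written in OpenCL C, is what an FPGA toolchain
# would synthesize into parallel hardware.
program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)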


If there is one enduring trend from memory design in 2014 that will carry through to next year, it’s the continued demand for higher performance. The trend toward high performance is never going away. At the same time, the goal is to keep costs down, especially when it comes to consumer applications using DDR4 and mobile devices using LPDDR4. LPDDR4 will gain a strong foothold in 2015, and not just to address mobile computing demands. The reality is that LPDDR3, or even DDR3 for that matter, will be around for the foreseeable future (as the lowest-cost DRAM, whatever that may be). Designers are looking for subsystems that can easily accommodate DDR3 in the immediate future, but will also be able to support DDR4 when it becomes cost-effective or makes more sense.

Universal Memory for Instant-On Computing will be talked about. New memory technologies promise to be strong contenders for replacing the entire memory hierarchy for instant-on operation in computers. HP is working with memristor memories that are promised to be akin to RAM but can hold data without power.  The memristor is also denser than DRAM, the current RAM technology used for main memory. According to HP, it is 64 and 128 times denser, in fact. You could very well have a 512 GB memristor RAM in the near future. HP has what it calls “The Machine”, practically a researcher’s plaything for experimenting on emerging computer technologies. Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system in 2015 (Linux++, in June 2015). HP must still make significant progress in both software and hardware to make its new computer a reality. A working prototype of The Machine should be ready by 2016.

Chip designs that enable everything from a 6 Gbit/s smartphone interface to the world’s smallest SRAM cell will be described at the International Solid State Circuits Conference (ISSCC) in February 2015. Intel will describe a Xeon processor packing 5.56 billion transistors, and AMD will disclose an integrated processor sporting a new x86 core, according to a just-released preview of the event. The annual ISSCC covers the waterfront of chip designs that enable faster speeds, longer battery life, more performance, more memory, and interesting new capabilities. There will be many presentations on first designs made in 16 and 14 nm FinFET processes at IBM, Samsung, and TSMC.

 

1,403 Comments

  1. Tomi Engdahl says:

    Self-aware storage? It’ll be fine. Really – your arrays aren’t the T-1000
    Smarter, savvier boxes will change the way we look at infrastructure
    http://www.theregister.co.uk/2015/04/10/thinking_different_about_storage/

    Comment In the last few months I have had several interesting briefings with storage vendors. Now, I need to stop and try to connect the dots, and think about what could come next.

    It’s incredible to see how rapidly the storage landscape is evolving and becoming much smarter than in the past. This will change the way we store, use and manage data and, of course, the design of future infrastructure.

    I recently put a few ideas together in a couple of posts, here and here, but I will try to develop my thoughts even further now.

  2. Tomi Engdahl says:

    New DARPA project aims to do away with IT updates
    http://www.zdnet.com/article/new-darpa-project-aims-to-do-away-with-it-updates/

    Summary: DARPA is hoping to do away with software updates as it embarks on a new project to establish a computer system designed to outlive 100 years of technological change.

    The United States Defense Advanced Research Projects Agency (DARPA) is launching a program aimed at helping do away with software updates, with plans to design a computer system that has the ability to outlive over 100 years of technological change.

    The four-year project, Building Resources Adaptive Software Systems (BRASS), for which DARPA is currently soliciting research proposals, will look at the computational and algorithmic requirements needed for software systems and accompanying data to remain functional for a century or longer.

    DARPA, which played a vital role in establishing the protocol standards that led to the development of the internet, claims that such advances are likely to require new “linguistic abstractions” and resource-aware programs that are able to discover and specify program transformations, as well as systems designed to monitor changes in the surrounding digital ecosystem.

    “Technology inevitably evolves, but, very often, corresponding changes in libraries, data formats, protocols, input characteristics, and models of components in a software ecosystem undermine the behaviour of applications,” said DARPA program manager Suresh Jagannathan. “The inability to seamlessly adapt to new operating conditions undermines productivity, hampers the development of cybersecure infrastructure, and raises the long-term risk that access to important digital content will be lost as the software that generates and interprets content becomes outdated.”

    DARPA’s new project comes as the COBOL programming language enters its 56th year of existence, with the language still being widely used in legacy applications by enterprises within the business and finance sectors, albeit after a handful of major revisions.

  3. Tomi Engdahl says:

    Jason Koebler / Motherboard:
    As Flash usage wanes, Internet archivists struggle to preserve Flash websites — Gone in a Flash: The Race to Save the Internet’s Least Favorite Tool — Navigating awful, 100 percent Flash-based sites is an experience many of us have had, and is unequivocally part of internet canon.

    http://motherboard.vice.com/read/gone-in-a-flash-the-race-to-save-the-internets-least-favorite-tool

  4. Tomi Engdahl says:

    AWS joins Azure and Watson in bringing machine learning to big data
    Retail secrets offered to the wider community
    http://www.theinquirer.net/inquirer/news/2403570/aws-joins-azure-and-watson-in-bringing-machine-learning-to-big-data

    AMAZON WEB SERVICES (AWS) has announced that it will soon offer machine learning as an option to customers.

    The technology is the same creepy stuff that uses algorithms to recommend things that you might also like when you browse Amazon’s retail site.

    Amazon said at the AWS Summit in San Francisco this week that it will help customers to use the big data at their fingertips in more intelligent ways.

    The news follows hot on the heels of Microsoft’s introduction of similar technology in the Azure Cloud platform earlier in the year, and IBM’s Watson which has had the facility for some time.

    “Amazon has a long legacy in machine learning,” said Jeff Bilger, a senior manager with Amazon Machine Learning.

    “It powers the product recommendations customers receive on Amazon.com. It is what makes Amazon Echo able to respond to your voice, and it is what allows us to unload an entire truck full of products and make them available for purchase in as little as 30 minutes.”

  5. Tomi Engdahl says:

    US nuclear fears block Intel China supercomputer update
    http://www.bbc.com/news/technology-32247532

    The US government has refused to let Intel help China update the world’s biggest supercomputer.

    Intel applied for a licence to export tens of thousands of chips to update the Tianhe-2 computer.

    The Department of Commerce refused, saying it was concerned about nuclear research being done with the machine.

    Separately, Intel has signed a $200m (£136m) deal with the US government to build a massive supercomputer at one of its national laboratories.

    The Tianhe-2 uses 80,000 Intel Xeon chips to generate a computational capacity of more than 33 petaflops. A petaflop is equal to about one quadrillion calculations per second.

    According to the Top 500, an organisation that monitors supercomputers, the Tianhe-2 has been the world’s most powerful machine for the past 18 months.

  6. Tomi Engdahl says:

    Java Byte Code, Ahead Of Time Compilers, And A TI-99
    http://hackaday.com/2015/04/12/java-byte-code-ahead-of-time-compilers-and-a-ti-99/

    Java famously runs on billions of devices, including workstations, desktops, tablets, supercomputers, and jewelry. Yes, jewelry. Look it up. [Michael] realized Java doesn’t run on Commodore 64s, TI-99s, and a whole bunch of other platforms. Not anymore.

    Last year, [Michael] wrote Java Grinder, a Java byte-code compiler that compiles classes into assembly language instead of being part of a JVM. This effectively turns Java from a Just In Time compiled language to a normally compiled language, like C. He wrote this for the 6502/6510, the MSP430, and a Z80. The CPU in the TI-99/4A is a weird beast, though, and finally [Michael] turned this Java Grinder on that CPU, the TMS9900.

    http://www.mikekohn.net/micro/retro_console_java.php

  7. Tomi Engdahl says:

    PC sales fell further

    In the first quarter, 71.7 million PCs were sold. That is 5.2 per cent lower than a year earlier. According to Gartner, the market will shrink this year, but at a slower pace.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2670:pc-myynti-laski-edelleen&catid=13&Itemid=101

  8. Tomi Engdahl says:

    Tech.eu M&A report – Q1 2015: the number of tech exits in Europe has increased by 160% year-over-year
    http://tech.eu/features/4324/eu-tech-exits-report-q1-2015/

    Tech.eu took another deep dive into all the available data on European tech company exits announced during the first quarter of this year. In total, we counted 140 deals, of which 134 were acquisitions.

    Still, there’s clearly more appetite for mergers and acquisitions of European technology companies. To wit, the maturation of the EU-wide technology industry is now even more measurable and quantifiable than before, and Tech.eu will continue to closely track deals and improve our reporting of the data.

  9. Tomi Engdahl says:

    BitTorrent launches its Maelstrom P2P Web Browser in a public beta
    http://thenextweb.com/insider/2015/04/11/bittorrent-launches-its-maelstrom-p2p-web-browser-in-a-public-beta/

    Back in December, we reported on the alpha for BitTorrent’s Maelstrom, a browser that uses BitTorrent’s P2P technology in order to place some control of the Web back in users’ hands by eliminating the need for centralized servers.

    Along with the beta comes the first set of developer tools for the browser, helping publishers and programmers to build their websites around Maelstrom’s P2P technology

    Build the Future of the Internet with Project Maelstrom
    http://blog.bittorrent.com/2015/04/10/project-maelstrom-developer-tools/

    Today marks the next step toward a distributed web with the beta release of Project Maelstrom. With Project Maelstrom, we aim to deliver technology that can sustain an open internet; one that doesn’t require servers, that allows anyone to publish to a truly open web, and that uses the power of distributed technology to scale efficiently.

    Let’s start building the future together. Here’s what’s new since the alpha release:

    Improved stability
    Support for auto-update
    DHT visualization for users when loading torrents
    Developer publishing tool

    The developer tool for publishing will help you build for Project Maelstrom easily, even from the command line.

  10. Tomi Engdahl says:

    Learn yourself hireable: Top tips for improving your tech appeal
    No time? No budget? No problem – get yourself skilled up
    http://www.theregister.co.uk/2015/04/13/skills_for_new_jobs/

    There comes a point in most people’s career when they get a bit bored of the day job and start looking to move, but one factor that can prevent upward mobility is a tired CV.

    Aside from the obvious updating and checking for grammar and punctuation errors, what else can give that bit of sparkle back to the resumé and get it noticed?

    A lot of companies are reluctant to provide staff with specialist vendor training courses, not only due to cost but also because of the sometimes-misplaced belief that once a staff member is trained up, they may look to move on. If your company is one of these types of employer, how can you escape this catch-22?

    Fortunately, it turns out there are a number of courses and exams available that don’t cost an arm and a leg that can help prove understanding of the fundamentals. Examples of quick wins to help you get your CV back in the game again include:

    Associate exams
    Self-learning
    Vendor-agnostic courses and exams

  11. Tomi Engdahl says:

    The US administration has chosen Intel to produce the next two supercomputers at Argonne National Laboratory. One of the new machines will bring massive computing power, but also a ridiculous power consumption.

    Aurora is based on 50,000 Intel Xeon Phi compute units, with a computational power of 180 petaflops. When all the processors are fully engaged in calculation, the power consumption rises to 13 MW.

    Aurora is the fastest supercomputer system whose construction has been publicly announced. At present, the world’s fastest supercomputer is the 33.86-petaflop Chinese Tianhe-2. Aurora’s computing power is more than five times that of this machine.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2672:intelin-tuleva-superkone-kuluttaa-13-megawattia-tehoa&catid=13&Itemid=101

  12. Tomi Engdahl says:

    Software Testing Needs More Automation
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1326326&

    As software and hackers get more sophisticated, QA testing will gain importance. That will require more software QA engineers and more test automation. Software QA budgets are on the rise.

  13. Tomi Engdahl says:

    Four boggling websites we found hidden in the BitTorrent network using the Maelstrom browser
    Chromium clone enters beta, plus tools to set up sites
    http://www.theregister.co.uk/2015/04/14/maelstrom_browser_beta/

    BitTorrent has released its Maelstrom combo-browser to all as a beta release.

    Maelstrom merges the open-source Chromium web browser with a BitTorrent client, so you can fetch and render regular webpages on the internet, and download stuff from the BitTorrent file-sharing network, all from the same application.

    Clicking on a torrent link to, say, a flick distributed over the peer-to-peer network starts playing the video in your browser tab.

    More interesting, though, is the ability to view static websites hosted in the BitTorrent network: there are tools available to create and seed simple pages into the network, which people can find using a torrent link. There’s no central server for the pages – they’re pulled from those seeding the site for you.

  14. Tomi Engdahl says:

    Is hyperconvergence a good thing? Ask a mini computer veteran
    Divergence and convergence brings us back to the future?
    http://www.theregister.co.uk/2015/04/14/the_appeal_of_appliances/

    Hyper-converged systems integrate compute, storage and networking into a single purchasable entity that is easier to deploy, operate and manage than traditional best-of-breed component systems.

    They are a step up from converged systems that integrated just storage and compute. That’s the simple story – but the definitions are blurry and the boundaries movable.

    Remember minicomputers? Way back when, even before the PC, minicomputers from DEC and Data General were all-in-one systems with server, operating system software, storage and networking all included with a single purchase order number.

    Divergence began and became more normal, with servers, storage arrays and network switches of various types being developed for specific needs and bought separately.

    A new type of reseller came into being: the system integrator, whose expertise lay in bringing together the various components to build a customised system for users. Thus we arrived at what are now called traditional IT systems.

    Implementing IT systems in this way is a complex affair but a big advantage is that you are not locked in to a single supplier. If you get badly treated by a storage array vendor, then you switch to a different one and connect its array to the cables coming from the servers.

    Another benefit is that the resulting IT system is well matched to your particular requirements.

    These are undoubted benefits but the costs of these unconverged systems have been rising

    All the components bought separately have their own support needs and arrangements and costs, their own management interfaces, skill sets and costs, and their own supplier relationships. Then there are the complex software update and patching arrangements needed to keep the whole thing functioning.

    IT budgets are flat, data volumes are rising, processing needs are rising, and somehow costs have to be taken out of the equation.

    One way is to reduce the separate component-related capital expenditure (CAPEX) and operating expenditure (OPEX) elements. If a single supplier can become responsible for a combined set of components, cutting both the OPEX and CAPEX but without losing the ability to avoid supplier lock-in, IT budgets comes under less strain.

    EMC, with its VMWare subsidiary, and Cisco have been instrumental in integrating their components with their VCE (Virtual Computing Environment) initiative

    Other suppliers have developed similar offerings and two startups created hyper-converged systems from the ground up: Nutanix, using commodity-off-the-shelf (COTS) hardware, and SimpliVity, using COTS plus ASIC-based acceleration, with their own operating system software.

    Hyper-convergence wins

    These proved attractive to customers, even though lock-in was present, and both suppliers attracted plenty of venture capital funding and grew quickly.

    Competitors sprang up offering hyper-converged system software which channel partners could use with certified hardware to produce their own hyper-converged systems; think Maxta and ScaleIO.

    Such software-driven hyper-convergence, with sets of certified hardware components reduced the hardware lock-in disadvantage, although not the software one.

    Target market

    IT departments running lots of standard virtual machines, say from 100 to 400, or 250 to 1,000 virtual desktops, could buy EVO:RAIL appliance units to power them. This would be a lot easier to do than buying the separate components, paying for them to be integrated and then operating the resulting multi-supplier infrastructure.

    Multiple EVO:RAIL clusters could be used where more virtual machines are needed. Each cluster would start small and grow appliance by appliance, with scaling needing a standard balance of compute, storage and networking resources.

    If your application scales up past 1,000 virtual machines then EVO:RAIL won’t cope with that. Bigger systems are needed, such as the coming EVO:RACK, Vblocks or ones built using more traditional multi-supplier component IT.

  15. Tomi Engdahl says:

    Hortonworks to gobble Budapest-based SequenceIQ
    Rapid deployment tool for Hadoop clusters here we come
    http://www.theregister.co.uk/2015/04/14/hortonworks_sequence_iq_purchase/

    Hortonworks decided it needed an automated tool to launch Hadoop clusters in the cloud, or any environment supporting Docker containers, and one buy of SequenceIQ later, ’tis done.

    The loss-making slinger of enterprise Apache Hadoop is to consume the Budapest-based open source developer of rapid deployment tools, for an undisclosed sum.

    The buy isn’t expected to close until the end of calendar Q2 at which point Hortonworks will begin rolling SequenceIQ’s wares into its Hortonworks Data Platform (HDP) and shift the tech into the Apache Software Foundation.

  16. Tomi Engdahl says:

    Intel Reports Q1 2015 Earnings: Lower PC Sales And Higher Data Center Revenues
    by Brett Howse on April 14, 2015 11:00 PM EST
    http://www.anandtech.com/show/9159/intel-reports-q1-2015-earnings-lower-pc-sales-and-higher-data-center-revenues

    Intel released their Q1 2015 earnings today. The company posted revenues of $12.8 billion USD for the quarter which is down 13% from Q4 2014, and flat year-over-year.

  17. Tomi Engdahl says:

    Intel Keeps It Simple for IoT Firmware Developers
    Firmware Engine boots up precertified code
    http://www.eetimes.com/document.asp?doc_id=1326347&

    Based on the Unified Extensible Firmware Interface (UEFI) standard, Firmware Engine is a GUI-based tool, hosted on a Microsoft Windows platform, that allows developers to quickly create binary firmware images based on Intel-certified UEFI code. These binaries can then be used by a developer to create the basic software needed to initialize platform hardware and launch operating systems such as Microsoft Windows, Android, and Linux.

    The UEFI standard defines a software interface between an operating system and platform firmware and replaces the Basic Input/Output System (BIOS) firmware interface used on early Intel- and Windows-based systems. As of Version 2.4 released in 2013, it now supports Intel’s Itanium, x86, and x86-64 as well as ARM’s AArch32 and AArch64. The Linux kernel is also able to use EFI at boot time.

    In a recent blog post, Michael Greene, vice president of the Intel Software and Services Group, said that the tool was developed because device manufacturers expressed the need for firmware to do one basic job, booting their systems.

    Barriers removed – simplified and accelerated firmware development
    http://blogs.intel.com/evangelists/2015/04/07/barriers-removed-simplified-and-accelerated-firmware-development/

  18. Tomi Engdahl says:

    Moore from the Archives
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1326342&

    Moore’s law at age 50 continues to be viable but for how long? Here’s a look into EE Time’s archives, including a link to what Gordon Moore thought of his own law’s chances for longevity.

  19. Tomi Engdahl says:

    MySQL maestro Percona makes NoSQL play with Tokutek gobble
    Gains transaction-safe MongoDB distro plus speedy database tech
    http://www.theregister.co.uk/2015/04/14/percona_buys_tokutek/

    MySQL consulting and services outfit Percona has acquired Tokutek, makers of a commercial, transaction-safe distribution of the open source MongoDB NoSQL database.

    “TokuMX, a fully functional transactional, document (row) level locking NoSQL derivative of MongoDB, will now be serviced, supported and remotely managed by Percona’s industry leading staff of experts,” the company said in a statement on Tuesday.

  20. Tomi Engdahl says:

    Why are enterprises being irresistibly drawn towards SSDs?
    Life in the fast lane
    http://www.theregister.co.uk/2015/04/14/why_are_enterprises_being_irresistibly_drawn_towards_ssds/

    SSDs have been the subject of hype, hype and more hype. Any of us who have used them in our personal computers know the benefits SSDs bring, but personal experiences and hype alone don’t explain the robustness of enterprise flash adoption.

    Flash has a lot of naysayers. Any article on The Register about flash inevitably invites at least one crank commenter who feels the need to tell the world how flash is just too new and they wouldn’t be caught dead deploying it in their data centre.

    As someone who uses flash in production, I wonder when (or if) the naysayers stopped using mercury delay lines and magnetic core memory. Or do they just etch the 1s and 0s directly onto recording wire manually with an electron microscope?

    It is easy to make fun; I have taken the plunge and run flash under real world conditions for years. I have burned out flash drives well before their supposed lifetimes and seen flash drives that seem to last forever. I have a feel for what is going to wear flash out and what isn’t, but despite having run up against its very real limitations, I certainly wouldn’t go back to magnetics.

    But flash does have technical limitations and they are easily magnified by those who don’t know what they are doing. Flash is faster than traditional magnetic disk, but it is not a like-for-like replacement and it won’t kill traditional disk.

  21. Tomi Engdahl says:

    Need speed? Then PCIe it is — server power without the politics
    No longer for nerds and HPC geeks
    http://www.theregister.co.uk/2015/04/14/pcie_breaks_out_server_power/

    How many PCI Express (PCIe) lanes does your computer have? How many of those are directly provided by the CPU? With only a few exceptions, nobody except high-end gamers and the High Performance Computing (HPC) crowd care. Regular punters (maybe) care about how many PCIe slots they have.

    So why is it that gamers and HPC types get worked up about the details of lanes and why are start-ups emerging with the mission of taking PCIe out from inside the server, and of using it to connect nodes?

    We should start with “what exactly is PCIe”.

    Graphics cards, RAID controllers, network cards, HBAs and just about everything else you can think of connects to a modern computer through PCIe. Other items you might add to a computer – a hard disk via the SATA interface or a keyboard via USB – plug into a controller which often backs on to PCIe.

    To understand how this all slots together today, it’s easiest to understand how it all used to work.

    Eventually, what’s left of the Northbridge (the PCIe controller) was simply built into the CPU as well. The Northbridge had disappeared entirely.

    This means that PCIe devices have a shorter path to the CPU (yay!). If, however, you want more PCIe lanes on a system than are provided by the CPU die itself you either need another CPU or you hang them off the (distant and slow) Southbridge (boo!).

    Adding PCIe lanes is thusly not really possible.

    In a system-on-a-chip solution, the Southbridge is integrated into the CPU die and things can get all manner of complicated, as engineers seek to cut out as many interconnects as possible.

    Flash drives are fast. SATA is slow. This is a problem.

    The Southbridge of current Intel chips uses DMI 2.0, which with an x4 link provides a paltry 20 Gbit/s between the CPU and the Southbridge. A single USB 3.0 port is (theoretically) 5Gbit/s while individual SATA ports are 6Gbit/sec. That’s before we get into the handful of PCIe lanes hanging off the Southbridge too.

    Clearly, “onboard SATA” is unlikely to deliver on advertised speeds even if you plug in flash drives capable of pushing the limits. This, just by the by, is why everyone loves PCIe SSDs, and why NVMe SSDs (PCIe in a SAS/SATA form factor and hotswap tray) are such a big deal. They bypass the Southbridge and use the much (much) faster PCIe.

    Networks suck

    Not only does anything hanging off the Southbridge suck, but today’s networks suck. Network cards generally plug into PCIe, but compared with having a CPU talk directly to its RAM, going out across the network to put or get information takes positively forever. Sadly, we’re using networks for everything these days.

    We need reliable shared storage between nodes in a cluster so we use a network. That network could be fibre channel, or it could be hyperconverged or anything in between, but it still requires one bundle of “does things with data” to talk to another bundle of “does things with data”. This is true no matter whether you call any given device a server, an SAN array, or what-have-you.

    Those of us who paid attention to the HPC world will remember a time, about 10 years ago, where Hypertransport was one of the most promising new technologies available.

    Needless to say, this utopia never really arrived.

    Today, all over again

    Today technologies have evolved somewhat to fill the gap. For some time storage devices have been able to do Direct Memory Access: they dump the information directly into RAM without bothering the CPU. Network cards now offer this capability, allowing a remote computer to write directly to a server’s RAM without having to wait on the slow network stack.

    This is important because without DMA, network cards can only move at the speed of the operating system.

    Of course, it’s never fast enough. We still want that Hypertransport computer where everything talks directly to the CPU. PCIe SSDs are considered not quite fast enough by some, and so Memory Channel Storage has evolved, to bring that storage even closer to the CPU, and make it even lower latency.

    Similarly, RDMA networks are fast, but there’s still a translation happening where PCIe is converted into Ethernet or Infiniband and then back again. A new wave of startups such as A3Cube are emerging. A3Cube was started by Emilio Billi, one of the folks behind Hypertransport.

    So Billi is now extending PCIe outside the server and using it to lash nodes together. And A3Cube isn’t the only company trying this,

    Success in bringing PCIe outside the box will probably depend largely on the people involved, and their ability to convince us to collectively invest in the changes to our hypervisors, operating systems and applications necessary to really take advantage of PCIe as an inter-node interconnect.

    The biggest problem I see currently is that eliminating communication layers between nodes – Ethernet, the TCP/IP stack and so forth – is a deeply nerdy endeavour. The benefits are clear, but it’s spectacularly complicated and the various minds behind the differing approaches are understandably proud of what they have managed to achieve.

    That can – and does – lead to blind spots regarding market realities. The CTO of one of the companies involved in extending PCIe outside the box recently said he didn’t view two companies that use the same bus with different goals as competitors. In one sense, he’s right: if the technological approach is sufficiently different, the result will be two very different products that simply won’t compete.

    PCIe matters. Whether you’re just plugging in a graphics card or a RAID controller or you are seeking to build the next-generation supercomputer, in the modern computer it’s the one interconnect that binds everything together.

  22. Tomi Engdahl says:

    IT Consultant Talks About ‘Negotiating for Nerds’ (Video)
    http://it.slashdot.org/story/15/04/14/1840210/it-consultant-talks-about-negotiating-for-nerds-video

    Matt Heusser did a Slashdot video interview back in 2013 titled How to Become an IT Expert Companies Seek Out and Pay Well.

    Today, Matt is with us again. This video is about ‘Negotiating for Nerds.’ Matt talks about negotiating a pay raise or consulting fee increase, starting with learning who has the actual power to negotiate with you. This is essential knowledge if you are employed (or self-employed) in IT and want to make sure you’re getting all you are worth.

    How to Become an IT Expert Companies Seek Out and Pay Well (Video)
    http://developers.slashdot.org/story/13/01/07/208229/how-to-become-an-it-expert-companies-seek-out-and-pay-well-video

  23. Tomi Engdahl says:

    For marketing pros, digital equals dollars
    http://www.cio.com/article/2909513/cmo-role/for-marketing-pros-digital-equals-dollars.html

    Marketing professionals who bring skills in areas such as SEO, social media, Web design and analytics can demand top dollar. (Includes infographic.)

    Quick, how much does a marketing manager make? A lot more than you think. Try $133,700. Marketing managers are one of the highest-paid professions in the country this year, according to Chicago-based Digital Professional Institute, which looked at some 500 job descriptions.

    Of course, there’s a catch. Digital Professional Institute points out that 93 percent of today’s marketing positions require at least one digital skill, such as SEO, website design or analytics

    Tech infusion brings higher pay

    It shouldn’t be a big surprise, considering sweeping trends in the marketing technology space that now has the CMO eyeing the CEO job and, in turn, the CIO eyeing the CMO job. Simply put, marketing has become infused with technology. Today’s marketing manager must have a deep understanding in marketing tech tools.

    The boost in pay is a reflection of marketing’s heightened role, as digital marketers are tasked with building an online mobile and social relationship with the customer. This puts them within a hair’s breadth of the sales conversion. Marketers will play a key role in influencing customers on their buying decisions, essentially cutting out the salesperson. In fact, Forrester forecasts that 1 million B2B sales jobs will disappear by the year 2020.

  24. Tomi Engdahl says:

    “Booting Up” a California Computer Efficiency Standard?
    http://www.edn.com/electronics-blogs/eye-on-efficiency/4439183/-Booting-Up–a-California-Computer-Efficiency-Standard-?_mc=NL_EDN_EDT_EDN_today_20150415&cid=NL_EDN_EDT_EDN_today_20150415&elq=32e16977841c4c9bb5457852642ca4ab&elqCampaignId=22552&elqaid=25360&elqat=1&elqTrackId=5e41a57163d741f2a657da83feef1f84

    In March, the California Energy Commission (CEC) published their proposed minimum energy efficiency requirements for computers and displays. These two product groups account for approximately five percent of California’s commercial and residential electricity consumption, according to the CEC. The commission believes that the proposed standard could save 2,702 GWh per year, reducing consumer and business electricity costs by up to $430 million annually.

    Computer products covered by the CEC’s proposal include desktops, notebooks, thin-clients, small-scale servers, and workstations. Tablets, game consoles, handheld gaming devices, servers other than small-scale units, and industrial small scale servers are excluded.

    Strongly influenced by ENERGY STAR’s Program Requirements for Computers version 6.1, the proposal uses a Total Energy Consumption (TEC) approach which focuses on the limiting the computer’s annual energy consumption during non-productive idle, standby, and off modes.

  25. Tomi Engdahl says:

    DDN unveils filer boxes for all occasions at NAB 2015
    Flies two new fancy filers for filer fetishists
    http://www.theregister.co.uk/2015/04/15/ddn_flies_two_new_filers/

    DDN has launched two new filers: the MEDIAScaler for media workflows and a refreshed GRIDScaler for HPC-style workloads.

    Both were launched at NAB 2015, the National Association of Broadcasting Show in Las Vegas, where user attendees will typically work with large sets of often multi-component video files and need fast access to large amounts of content for processing. It’s a magnet for filer performance fetishists.

    The MEDIAScaler is claimed to support more than 10,000 simultaneous users and sustain its performance with more than 90 per cent of the system’s resources utilised. It is configured as a cluster of nodes sharing a single file namespace, with load-aware software making sure the highest-performing available node is assigned to incoming work.

    DDN claims Quantum (StorNext) and NetApp only support around 8 concurrent uncompressed 4K streams with Isilon supporting up to 1. Quantum and NetApp claim up to 1.6GB per sec bandwidth, compared to the MEDIAScaler’s 4GB per sec, with Isilon doing less than 1GB per sec.

    DDN says the MEDIAScaler array can support various media workloads such as data ingest, editing, transcoding, distributing, collaborating and archiving, and is marketing it as an end-to-end media workflow platform. High-performance nodes do the ingest, editing and transcoding work. Cloud nodes look after collaboration, distribution and high-availability/disaster recovery. Cold data is sent to object and/or tape-based so-called active archives.

    Reply
  26. Tomi Engdahl says:

    Revealed: The AMAZING technology behind Apple’s $1299 Retina MacBooks – a lot of glue
    RAM and SSDs stuck down with Sir Jony’s juice
    http://www.theregister.co.uk/2015/04/16/retina_macbook_ifixit_teardown/

    Reply
  27. Tomi Engdahl says:

    Intel Does Balancing Act
    Q1 flat: PCs down & IoT, servers, tablets up
    http://www.eetimes.com/document.asp?doc_id=1326348&

    Intel reported flat Q1 revenue year over year, with $2.6 billion operating income, up 4 percent over last year. Intel, the largest semiconductor company, hit Q1 revenue of $12.8 billion with operating income of $2.6 billion (net income $2.0 billion), the company reported Tuesday (April 14) in its earnings call.

    “Year-over-year revenues were flat, with double-digit revenue growth in the data center, IoT and memory businesses offsetting lower than expected demand for business desktop PCs,” said Intel CEO Brian Krzanich in a prepared statement.

    Reply
  28. Tomi Engdahl says:

    Would MediaTek Dare Take on Intel in PC Business?
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1326356&

    MediaTek, the world’s third-largest chip designer, is rumored to take on Intel in the PC business this year.

    Some interesting speculation has cropped up on the Internet in recent days that MediaTek, the world’s third-largest chip designer, may take on Intel in the PC business sometime around the end of this year.

    Much of the conjecture centers around MediaTek’s MT8173 processor, which combines two ARM Cortex A53 cores with a pair of Cortex A72 cores. The MediaTek chip scores about 1,500 on single-core Geekbench results — reportedly the highest score ever for a mobile processor, including those from Intel.

    An extrapolation of the 1,500 single-core score indicates a multi-core grade of about 7,500 for an octa-core processor based on A72 technology. The MT8173 is fabricated with 28 nm process technology at Taiwan Semiconductor Manufacturing Co. (TSMC).

    If MediaTek were to produce the chip with TSMC’s 16 nm FinFET when the technology becomes commercially available later this year, so the story goes, MediaTek’s new version of the chip would benefit from a 25% boost in performance, yielding a single-core performance score of about 1,850 and an octa-core rating as high as 9,000.

    That would put the hypothetical MediaTek chip in the same ballpark as Intel’s desktop PC processors.
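
    The arithmetic behind that speculation is easy to reproduce — note that all of the inputs below are the article’s assumptions, not measured results, and that the octa-core figure implies roughly fivefold (not eightfold) multi-core scaling from the single-core score:

    # Back-of-the-envelope reconstruction of the MediaTek speculation above.
    # All inputs are assumptions quoted in the article, not benchmark results.
    single_core_28nm = 1500     # reported Geekbench single-core score (MT8173)
    octa_core_28nm   = 7500     # article's extrapolated octa-core grade
    finfet_boost     = 1.25     # assumed +25% from TSMC 16nm FinFET

    single_core_16nm = single_core_28nm * finfet_boost  # 1875, article rounds to ~1850
    octa_core_16nm   = octa_core_28nm * finfet_boost    # 9375, article says "as high as 9000"

    print(f"16nm estimates: {single_core_16nm:.0f} single-core, {octa_core_16nm:.0f} octa-core")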

    Reply
  29. Tomi Engdahl says:

    The Crazy-Tiny Next Generation of Computers
    http://tech.slashdot.org/story/15/04/15/1941243/the-crazy-tiny-next-generation-of-computers

    University of Michigan professors are about to release the design files for a one-cubic-millimeter computer, or mote. They have finally reached a goal set in 1997, when UC Berkeley professor Kristofer Pister coined the term “smart dust” and envisioned computers blanketing the Earth. Such motes are likely to play a key role in the much-ballyhooed Internet of Things.

    The Crazy-Tiny Next Generation of Computers
    “Smart dust,” long heralded in research papers and sci-fi, is now a reality. Just don’t sneeze.
    https://medium.com/backchannel/the-crazy-tiny-next-generation-of-computers-17e89e472839

    The race to build the world’s smallest computer has been in the works since UC Berkeley professor Kristofer Pister coined the phrase “smart dust” in 1997, back when Apple computers were the size of large lapdogs, and smart dust the stuff of fan-boy fiction. Pister envisioned a future where pinhead-sized computers would blanket the earth like a neural cloud, relaying real-time data about people and the environment. Each particle of “dust” would function as a single autonomous computer: a tiny bundle of power, sensor, computing and communication chips that could gather and relay information about its environment, perform basic data-processing and communicate with one another, consuming almost zero energy. And each computer would be no larger than one cubic millimeter in size.

    But Pister’s smart dust vision never came to pass. After leaving academia to found a company called Dust Networks in early 2003, Pister got derailed by the mechanics of running a company and stopped scaling computers full time. The smallest mote his company currently makes is about the size of a sugar cube: good for performing diagnostics on utilities, not so good for exploring the brain or anything else that requires a very small and unobtrusive presence — qualities essential for meshing with the much-ballyhooed Internet of Things. “Probably my single most important contribution is coming up with a catchy name,”

    By July of 1999, Pister had developed a mote with a volume of 100 cubic millimeters that had a working transmitter. Jason Hill, a colleague of Pister’s, created the TinyOS operating system

    By 2010, Dutta was at the University of Michigan, where colleagues David Blaauw and Dennis Sylvester had created a 1.5-mm³ marvel called the Phoenix Chip. The solar-powered sensor system was intended to measure intraocular pressure for glaucoma patients. Dutta was impressed. But he wanted to push towards “tagging” things in the environment, particularly for monitoring scarce natural resources.

    So Blaauw, Sylvester and Dutta sketched plans for what would become the M3 project.

    The M3 team has spoken with corporations about including smart dust in wearables, and has recently launched a for-profit company called CubeWorks.

    Reply
  30. Tomi Engdahl says:

    Should you use Debian or Ubuntu?
    http://www.itworld.com/article/2852249/should-you-use-debian-or-ubuntu.html

    In today’s open source roundup: Debian versus Ubuntu. Plus: Five Linux distros for your computer, and which game genres need more games on Linux?

    Debian versus Ubuntu

    Ubuntu and Debian are two of the most popular options for Linux users. But newer users can sometimes be unaware of the differences between the two, and might not understand which one is the better choice for their needs. Datamation looks at the pros and cons of Ubuntu and Debian.

    Ubuntu is specifically designed to be easy for inexperienced users to use. Initial configuration of Debian may be more difficult

    Ubuntu’s early motto was “Linux for human beings”, while Debian describes itself as “the universal operating system.”

    Community is probably the biggest distinguishing feature besides distribution “flavor”. The Ubuntu forums are more accessible to newcomers, while Debian forums are more technical. Both distributions depend heavily on a large community of volunteer open-source software developers and users who provide free support for each other while using the software.

    Reply
  31. Tomi Engdahl says:

    Hybrid IT? Not a long-term thing, says AWS CTO
    Cloud security now ‘much stronger than on-premises’
    http://www.theregister.co.uk/2015/04/16/hybrid_it_its_not_a_longterm_thing_says_amazon_web_services_cto/

    AWS Summit Hybrid IT — systems that are part on-premises and part public cloud — is simply a path to the cloud, not a destination, Amazon CTO Werner Vogels told the 3,000 attendees at the AWS (Amazon Web Services) Summit in London yesterday.

    “We have built a whole set of services that allow you to run seamlessly together [services] on-premise [and] in the cloud,” Vogels said.

    “However, you have to realise that in our eyes hybrid IT is not the endpoint … There will be less and less data centres over time. Hybrid IT is the path to more cloud usage. Many more of your applications and services will move over into AWS.”

    It often makes sense to extend into the cloud for scalability, sometimes called “cloudbursting”, or for resilience.

    Reply
  32. Tomi Engdahl says:

    iFixit:
    MacBook teardown: terraced battery fully glued to body, CPU/RAM/flash storage soldered to logic board, 1/10 repairability score

    Retina Macbook 2015 Teardown
    https://www.ifixit.com/Teardown/Retina+Macbook+2015+Teardown/39841

    Reply
  33. Tomi Engdahl says:

    Nvidia starts building ‘the world’s most powerful’ supercomputer
    Summit machine is so powerful it “could change the world”
    http://www.theinquirer.net/inquirer/news/2404397/nvidia-starts-building-the-worlds-most-powerful-supercomputer

    NVIDIA HAS ANNOUNCED that it has started building two supercomputers using its NVLink GPU speed interconnect technology which it says are so powerful they “could change the world”.

    The announcement marks the latest development since the US Department of Energy (DoE) threw some $325m at IBM and Nvidia to build the world’s fastest supercomputers by 2017.

    Tipped to deliver more than three times the performance of those currently available, the first of the two machines, dubbed Summit, is now being constructed at the Oak Ridge National Laboratory in Tennessee and will be used for civilian research.

    The second and less powerful supercomputer, called Sierra, will power nuclear weapons simulation and is currently being built at the Lawrence Livermore National Laboratory in California.

    The Summit supercomputer will not be completed until 2018, despite the DoE’s original goal of 2017, but will offer 150 to 300 petaflops of computational performance when completed, Nvidia said.

    Reply
  34. Tomi Engdahl says:

    Businesses are not using apps from the Windows Store
    When it comes to universal apps, Flappy Bird beats Firefox browser
    http://www.theinquirer.net/inquirer/news/2404396/windows-users-are-not-using-microsofts-apps-for-business

    MICROSOFT HAS PUBLISHED some statistics relating to the use of the Windows Store, in turn revealing that businesses haven’t taken to using Windows apps.

    The move towards convergence with Windows 10 is now a few months away, and the stats provide some interesting insight into the way that customers are taking to the new-style Windows Apps introduced in Windows 8, and their relationship to the company’s mobile platform.

    However, the first point to note is that Microsoft states: “As the vast majority of our users are now using the latest version of the OS, it is the right time to prepare for Windows 10 by moving your apps to Windows and Windows Phone 8.1.”

    We hate to be pedantic, but almost 60 percent of users are currently running Windows 7, according to the monthly figures from Net Applications.

    Finally, with the worldwide adoption rate of Windows Apps continuing to increase, the company claims that an app in English will attract just 16 percent of the potential audience, but localising it to the top 10 languages gives 65 percent penetration.

    Worth thinking about when you’re compiling your universal apps.

    Microsoft spills some beans on the Windows 10 Universal Apps platform
    Convergence is the way forward
    http://www.theinquirer.net/inquirer/news/2397827/microsoft-spills-some-beans-on-the-windows-10-universal-apps-platform

    Reply
  35. Tomi Engdahl says:

    Bloodied SanDisk preps for job cuts after market reading mis-steps
    Taking ‘aggressive measures to regain excellence’. Better hurry
    http://www.theregister.co.uk/2015/04/16/sandisk_bloodied_by_misreading_misstep/

    SanDisk said its Q1 numbers were going to be bad, and bad they were, with revenue and profit drops due to an embedded component material screw-up and execs not seeing where the market was going. Job cuts are coming.

    What’s happened is that SanDisk fell short in the enterprise market, mis-read trends, and then discovered it needed a better product qualification and validation process too. These weaknesses have badly affected its results.

    So what are the big issues?

    Mehrotra pointed to four problems:

    Product issues, including qualification delays impacting embedded and enterprise sales
    Reduced 2015 opportunity in the enterprise market due to rapid market shifts
    Weaker-than-anticipated pricing
    Supply challenges

    Looks like SanDisk simply mis-read the enterprise flash market.

    PCIe and SATA Market movements

    The second issue is a reduced-opportunity whammy “due to market shifts in PCIe and SATA”. PCIe looks bad: “Our Q1 results as well as 2015 revenue estimates for our Fusion-io PCIe solutions are significantly below our original plan”.

    That’s happening because “a substantial portion of the PCIe TAM (total addressable market) is moving to lower cost solutions using enterprise SATA SSDs”.

    SATA SSDs have gotten fast enough to make PCIe flash look expensive for the performance it delivers. SanDisk was the PCIe market leader but has a lowly share in the enterprise SATA SSD market. NVMe with its standard drivers should expand the PCIe market,

    Now, another biggy — supply challenges

    SanDisk expects second-quarter revenue to show an annual decline, partly due to the faster-than-anticipated ending of client SSD shipments to that large customer.

    Enterprise SATA and SAS SSD revenues will also decline.

    Third- and fourth-quarter revenues are also expected to show year-on-year declines, making for an all-in-all miserable 2015.

    The screw-ups in product qualification, client SSD sales, and enterprise market mis-steps have been disastrous.

    Reply
  36. Tomi Engdahl says:

    Is the Death of the PC Industry Exaggerated?
    http://www.ebnonline.com/author.asp?section_id=3226&doc_id=277243&&elq=31567fca78c8473ea0bae727225fd3b0&elqCampaignId=22576&elqaid=25387&elqat=1&elqTrackId=84f3d64945484589a76cd3abe81dd909

    Volume trends are at best difficult to resolve when industry segments are at a tipping point, making the demise of the personal computer a much discussed topic with few solid conclusions. Recent numbers show that peak-PC is behind us, but we have to adjust for one-time factors such as new operating systems or different model configurations to get a long-term trend picture.

    These short-term effects have impacted the PC market significantly. The fiasco around Windows 8 seriously damaged the PC franchise. First, it removed direct pressure to upgrade PC desktops and mobiles to match the new code. These upgrades would have included touch-screen features as well as better graphics and raw horsepower increases, improving the average unit price by around 20% to 25%.

    The OS upgrade failed because the user base baulked at moving from a high quality and stable Windows 7 to a new set of interfaces that better matched the smartphone.

    Microsoft completely misunderstood the reluctance of its user base to give up on app interfaces and screen operations that had been ingrained for nearly three decades, and the result was that the wave of upgrades turned into a delayed ripple.

    The belated arrival of Windows 8.1 hasn’t helped much.

    Microsoft hasn’t helped its case by strongly pushing the browser-based alternative of Office-365. The idea that a cheap, dumb tablet or screen could do the job as well is especially intriguing when control of the app set and security are considered. The cloud office wins hands down in these categories, though performance still raises questions on occasion.

    Hybrid disk drives likewise are in the sales doldrums, as low-end SSD drives are priced close to hybrid drives.

    The outside forces on the PC market are interesting. Tablet sales are sluggish too,

    Mobile phones, with the exception of Apple’s product, also are hitting the “it’s good enough” barrier.

    So what does this mean for PC volumes? A few months back, the major pundits predicted a return to growth towards the end of 2015. Most talked about low-single digit numbers

    IDC’s latest report shows PC sales down to 68.5 million units, which is the lowest number since 2009. Lenovo and HP still eked out a 3 percent growth but everyone else took a hit.

    Finally, Intel has just announced Q1 2015 results that have their client-computing unit dropping 8.4% in revenue year over year, due to weak PC demand

    Taken together, these are strong indicators of a market past its peak. The next question is how fast the decline will occur.

    The major driving force is the deployment of browser-based solutions, especially those, like Office-365 and Google Apps, which follow a SaaS model. The mess over Windows 8 has created an Oklahoma land-grab opportunity for these SaaS solutions.

    The lack of a boost from Windows 10

    Reply
  37. Tomi Engdahl says:

    The best HDMI operating system sticks
    http://www.zdnet.com/article/the-best-hdmi-operating-system-sticks/

    Summary: There’s a new kind of computer in town and it resides on an HDMI stick that’s not much bigger than a pack of gum.

    Reply
  38. Tomi Engdahl says:

    Is IT getting predictable? IT confidence rises despite budget slowdown
    http://www.zdnet.com/article/paradox-it-confidence-rises-while-tech-budgets-slow/

    Summary: Technology priorities seem to have settled into a predictable pattern: security, big data, mobile and cloud. Everyone seems to know the drill — for now, at least.

    Don’t let the business side catch wind of this: IT budget growth will be tepid over the coming year, yet IT leaders are more convinced than ever they can deliver the goods to the business.

    That’s the key takeaway from TEKsystems’ Annual IT Forecast for 2015, which surveyed 500 IT executives on their hopes and dreams for the coming year and beyond.

    Still, technology executives aren’t heading for the bunkers — if anything, they’re feeling relaxed and ready to take on the world. Seventy-one percent of IT leaders report confidence in their ability to satisfy business demands in 2015, representing an increase from 66 percent and 54 percent in forecasts for 2014 and 2013, respectively.

    Where spending is increasing from 2014 to 2015:

    Security 65%
    Mobile 54%
    Cloud 53%
    BI/big data 49%
    Storage 46%
    Legacy modernization 36%

    Skills demand also remains predictable, the TEKsystems survey finds:

    Programmers/developers
    Software engineers
    Architects
    Project managers
    Security specialists
    Business intelligence
    Big data analytics

    Reply
  39. Tomi Engdahl says:

    Problems & Solutions: Analyzing The EU’s Antitrust Charges Against Google Over Shopping Search
    The EU has raised four concerns about how Google has dominated shopping search. A close look at each charge and possible fixes that might emerge.
    http://searchengineland.com/problems-solutions-eu-antitrust-google-218899

    Reply
  40. Tomi Engdahl says:

    IBM vs. Intel in Supercomputer Bout
    Two National Labs choose IBM, One Intel
    http://www.eetimes.com/document.asp?doc_id=1326359&

    The U.S. wants to one-up China in supercomputers and it’s looking for a few good semiconductor architectures to help out.

    The fastest supercomputer in the world is currently the Chinese Tianhe-2 running at a peak of 55 petaflops on Intel Xeon and Xeon Phi processors. The Collaboration of Oak Ridge, Argonne and Lawrence Livermore (CORAL) project financed by the U.S. Department of Energy (DOE) aims to one-up the Chinese with up to 200 petaflops systems by 2018. The three systems, named Summit, Aurora and Sierra, respectively, have also pitted IBM/Nvidia and their graphics processing units (GPUs) against Intel/Cray’s massively parallel x86 (Xeon Phi) architecture.

    Reply
  41. Tomi Engdahl says:

    AMD Withdraws From High-Density Server Business
    http://hardware.slashdot.org/story/15/04/17/029214/amd-withdraws-from-high-density-server-business

    “AMD has pulled out of the market for high-density servers, reversing a strategy it embarked on three years ago with its acquisition of SeaMicro.”

    Update: AMD withdraws from high-density server business
    The company is ditching a business it formed by acquiring SeaMicro three years ago
    http://www.computerworld.com/article/2911158/servers/amd-withdraws-from-high-density-server-business.html

    AMD has pulled out of the market for high-density servers, reversing a strategy it embarked on three years ago with its acquisition of SeaMicro.

    AMD delivered the news Thursday as it announced financial results for the quarter. Its revenue slumped 26 percent from this time last year to $1.03 billion, and its net loss increased to $180 million, the company said.

    AMD paid $334 million to buy SeaMicro, which developed a new type of high-density server aimed at large-scale cloud and Internet service providers.

    The chip maker needs to focus its resources on growth opportunities, and microservers like those SeaMicro developed haven’t been as popular as expected, AMD CEO Lisa Su said on the company’s earnings call.

    AMD still sees growth potential in the server market, but not from selling complete systems. It’s returned its focus to x86 chips and to the development of its first ARM server processor, code-named Seattle.

    That chip seems to be delayed, however. Volume shipments will start in the second half of this year

    Reply
  42. Tomi Engdahl says:

    Are YOU The One? Become a guru of your chosen sysadmin path
    Deeper than training, knowing yourself
    http://www.theregister.co.uk/2015/04/17/are_you_the_one_true_sysadmin_guru/

    Systems administrators are system administrators, right? Not really. Once upon a time systems admins were jacks of all trades and (perhaps) masters of them all. Most of the IT-related functions were performed by an administrator, and if some new technology came along they adapted and learnt the new package or system.

    However, as IT has evolved and become much more complex, a lot of administrators have tended to become more specialised in specific areas. In big organisations one can make a career out of a single product, such as Oracle or Microsoft.

    This contrasts starkly with the smaller businesses who still have one, or – if they are lucky – two administrators that support the entire infrastructure, from the plug of the photocopier through to the mail system or that nasty application that is mission-critical but that nobody dares touch. The outlooks between the two groups are very different as well; rarely do admins move between large and small environments.

    Companies are aware of the difference in culture and outlook. When hiring, larger companies tend to recruit their new staff from existing pools of talent, often specifically from companies in the same industry sectors.

    It is much cheaper doing it that way, rather than having to train them up and potentially certify them so that they achieve not only the correct level of industry recognition but also security clearance. Amongst bigger companies there is also a greater degree of specialisation. Network people don’t touch the software and vice versa.

    For anyone considering the move up, it is a major change in the way infrastructure is managed. No longer can you just rock up to a machine, ensure the backup is good and then replace a disk. In a highly managed and audited environment it becomes a drawn out process of:

    1. Ask the local operators to verify the failure

    2. Once verified, raise a call to get a new disk. Create and submit a change request

    3. Get approval from business owners

    4. Represent at Change Management and then get the local guys to replace the disk

    5. Then verify and close the change. Changes also need to be raised from incidents

    Reply
  43. Tomi Engdahl says:

    Flash dead end is deferred by TLC and 3D
    Behold, data centre bods, the magical power of three
    http://www.theregister.co.uk/2015/04/17/flash_deadend_deferral_with_tlc_and_3d/

    The arrival of a flash dead-end is being delayed by two technologies, both involving the number three – three-level cell (TLC) flash and three-dimensional (3D) flash – with the combination promising much higher flash chip capacities.

    As ever with semiconductor technology, users want more data in the same space and faster access to it too, please.

    Progress means making devices faster and denser: getting more transistors in flash dies, and hence more cells, with no access time penalty or shortened working life.

    Flash data access can be speeded up by using PCIe NVMe interfaces, with several lanes active simultaneously, and so going faster than SAS or SATA disk-based interfaces.

    It can also be hastened by putting the flash in memory DIMM slots, as SanDisk’s ULLtraDIMM product does using Diablo Technologies’ intellectual property. However, Diablo and SanDisk are involved in a legal case in which NetList alleges that they are using its intellectual property improperly. Until that is resolved, DIMM flash technology is in hibernation.

    But the core issue is flash chip capacity: how can we get denser chips and hence larger capacity SSDs?

    With flash memory this has been achieved by adding a bit to the original single-level cell (SLC) flash, and by making the process geometry smaller.

    It is currently in the 1X area, meaning cell sizes in the range of 19nm to 10nm.

    Smaller cells don’t last as long as larger cells as they sustain fewer write cycles. With two-bit-per-cell technology, called MLC, the cell stores two bits through four levels of charge, and this adds to the process shrink problem.

    This has been managed successfully with better error detection and the use of digital signal processing techniques by the flash controllers, so that weaker signals can be read reliably with 2X-class flash (29-20nm cell geometry).

    Shrink the process size to the 1X area, however, and the problems get worse the further below the 19nm level we get. Go below 10nm and they look insoluble. You can’t defeat physics.

    TLC technology has been around for some years. It gives an immediate 50 per cent increase in capacity over MLC flash

    A serious problem is detecting the level of charge in the cell. What happens is that there are eight possible levels, double the four levels of MLC flash, which is double the two levels of SLC flash.

    SLC flash can have two levels of charge, or states, equivalent to binary 1 or 0. MLC adds a second bit to the SLC cell, meaning each SLC state splits into two further states, 0 or 1, giving four states in total.

    TLC flash goes one stage further, adding a third bit and therefore two additional states for each MLC state.
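
    A minimal sketch of the bit-versus-state relationship described above — the only inputs are the bits-per-cell counts already given in the text:

    # Bits stored per cell determine how many charge levels the controller
    # must distinguish, which is why TLC is harder to read reliably than MLC.
    cell_types = {"SLC": 1, "MLC": 2, "TLC": 3}   # bits per cell

    for name, bits in cell_types.items():
        states = 2 ** bits                        # distinguishable charge levels
        vs_mlc = bits / cell_types["MLC"]         # relative capacity per cell
        print(f"{name}: {bits} bit(s), {states} charge states, {vs_mlc:.0%} of MLC capacity")

    That 150 per cent line for TLC is the same 50 per cent capacity gain over MLC quoted above; the price is eight charge levels packed into a cell that keeps shrinking.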

    The main flash foundry operators, Intel-Micron, Samsung and SanDisk-Toshiba, are all active in TLC and 3D NAND developments.

    Reply
  44. Tomi Engdahl says:

    JetBrains releases CLion – new cross-platform IDE for C/C++ users
    Also out is ReSharper C++ for Visual C++ folk
    http://www.theregister.co.uk/2015/04/17/jetbrains_releases_clion_crossplatform_ide_c_plus_users/

    Developer tools company JetBrains has released CLion, a new cross-platform IDE for C and C++.

    JetBrains is a survivor in a product area dominated either by vendor-specific tools (such as Microsoft’s Visual Studio and Apple’s Xcode) or free open source projects (Eclipse and NetBeans).

    The core JetBrains product, IntelliJ IDEA, is a Java IDE with strong smart editor and refactoring features. Building on this foundation, the company has created IDEs for PHP, Python and Ruby, as well as AppCode which runs on the Mac and supports C, C++, Objective-C and Swift.

    All these tools run on the JVM (Java Virtual Machine).

    Reply
  45. Tomi Engdahl says:

    Paul Thurrott / Thurrott.com:
    Former Microsoft designer talks about original Windows Phone UX and mistakes the company made with Modern UI — Ex-Microsoft Designer Explains the Move Away from Metro — Windows Phone fans pining for the days of Metro panoramas and integrated experiences have had a tough couple of years

    Ex-Microsoft Designer Explains the Move Away from Metro
    https://www.thurrott.com/mobile/windows-phone/3000/ex-microsoft-designer-explains-the-move-away-from-metro

    Windows Phone fans pining for the days of Metro panoramas and integrated experiences have had a tough couple of years, with Microsoft steadily removing many of the platform’s user experience differentiators. But as I’ve argued, there’s reason behind this madness. And now an ex-Microsoft design lead who actually worked on Windows Phone has gone public and agreed with this assessment. You may have loved Windows Phone and Metro, but it had to change.

    Why a hamburger menu? “Windows Phone’s original interaction model put actions on the bottom and navigation on the sides, as swipes,” the design lead notes. “That’s not a great pattern for a variety of reasons.” Long story short, panoramas and pivots are good for exploration (like spinning through photos) but are not good for organizing information. And other platforms have adopted a common UX layout with common actions (commands) on the bottom, navigation on the top, and less-needed commands found hidden behind a hamburger menu or similar UI. Only Windows Phone lacked this UX model. So it had to change … And swiping sucks. It hides content

    But people need to be able to use it with one hand. “What the research is showing is that people aren’t actually as wedded to one handed use as we used to believe they are. Don’t get me wrong, this is clearly a tradeoff. Frequently used things have to be reachable, even one-handed. But hamburgers are not frequently used, and one-handed use is not ironclad. Combine those two factors together and you see why the industry has settled on this standard. It wasn’t random … And, sorry. But the hamburger has some real issues, but ‘I can’t reach uncommon things without adjusting my hand on my massive phone and that annoys me because it reminds me of the dominant OS on earth’ [is not one of them].”

    But the bottom is better. “It turns out bottom is not better. You’d think that something 3 pixels from your palm would be easier to reach than something in the middle of the phone. But nope. The way average people hold phones means the middle of the device is the best location. Both bottom and top require your hand to make a bit of a shift to reach. You don’t use the hamburger very often … You have to design for the 80% case, no matter how much that annoys the other (vocal) 20%

    What does the future look like? “Imagine your kid is writing a book report and doing a presentation in 10 years. I don’t think it’s in Word and PowerPoint as we know it today. But I do think it could be with Microsoft software. And statistically speaking, it’s on Android … “

    Reply
  46. Tomi Engdahl says:

    30 per cent of servers, storage and switches now sold to clouds
    US$16.5 billion spent on public cloud alone in 2014 says IDC
    http://www.theregister.co.uk/2015/04/20/30_per_cent_of_servers_storage_and_switches_now_sell_to_clouds/

    Three in ten servers, storage arrays and ethernet switches are now being sold to clouds, either private or public, says abacus-wielder IDC.

    The company’s Worldwide Quarterly Cloud IT Infrastructure Tracker for 2014′s fourth quarter found that “total cloud IT infrastructure spending grew by 14.4% year over year to $8.0 billion”, accounting for 30 per cent of all IT infrastructure spend. That’s up from

    Across the entire year, the market-watcher reckons “cloud IT infrastructure spending totaled $26.4 billion, up 18.7% year over year from $22.3 billion; private cloud spending was just under $10.0 billion, up 20.7% year over year, while public cloud spending was $16.5 billion, up 17.5% year over year.”
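
    Those growth percentages are easy to cross-check against the dollar figures (small differences are rounding in IDC’s numbers):

    # Sanity-check of IDC's year-over-year cloud infrastructure figures above.
    spend_2014 = {"total": 26.4, "private": 10.0, "public": 16.5}  # $bn
    yoy_growth = {"total": 0.187, "private": 0.207, "public": 0.175}

    for segment, spend in spend_2014.items():
        implied_2013 = spend / (1 + yoy_growth[segment])
        print(f"{segment}: implied 2013 spend ${implied_2013:.1f}bn")

    The implied 2013 total comes out at about $22.2bn, in line with the $22.3 billion quoted, and the implied private ($8.3bn) and public ($14.0bn) figures sum to roughly the same number.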

    Reply
  47. Tomi Engdahl says:

    Microsoft absorbs open-source internal startup MS Open Technology
    2,000 open source projects later, Redmond reckons it’s ready to imbibe open goodness
    http://www.theregister.co.uk/2015/04/20/microsoft_absorbs_opensource_spinoff/

    In what would once have been thought of as an Ominous Sign™, Microsoft has decided it’s time for the MS Open Technologies “startup subsidiary” to be absorbed back into Redmond proper.

    Rather than being ominous, the move seems encouraging, with Microsoft promising to set up a new open source program office in Redmond proper, tasked to “scale the learnings and practices in working with open source and open standards that have been developed in MS Open Tech across the whole company”.

    With a bunch of successes under its belt, including open source .Net, Docker containerisation for Windows, and MS Build, it looks like Microsoft has decided it can get into bed with open source technology without needing a chaperone.

    The original idea behind MS Open Tech was that it would let Redmond pal up with the open source world while firewalling its proprietary properties.

    Reply
  48. Tomi Engdahl says:

    Qumulo gets fat appliance alongside skinny fast one
    Scale-out big box is a 1,000-node wannabe Isilon-beater
    http://www.theregister.co.uk/2015/04/20/qumulo_gets_fat_appliance_alongside_skinny_fast_one/

    Qumulo – the start-up building a better-than-Isilon scale-out NAS and staffed by Isilon vets – has launched a capacity-focused hardware product to complement its fast skinny one.

    Qumulo’s QC24 is a 1U appliance focused on performance, with 24TB of raw disk and 1.6TB of raw flash capacity (2 x 800GB eMLC SSDs) per node, with the node count running up to 1,000. It’s powered by a 6-core Xeon E5 1650v2 3.50GHz CPU with 64GB of RAM.

    The QC208 appliance is optimised for capacity, being a 4U enclosure running commodity hardware in a 4-node cluster design. Per-node it has 208TB of raw HDD capacity – 26 x 8TB HDD – and 2.6TB of raw SSD capacity per node – 13 x 200GB eMLC SSD – all hot-swappable.
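
    The per-node figures multiply out as quoted, and extrapolating to the 1,000-node ceiling mentioned for the cluster gives a feel for the headline scale (that extrapolation is an illustration, not a Qumulo claim):

    # Raw capacity arithmetic from the per-node figures quoted above.
    # Scaling both models to 1,000 nodes is an illustrative assumption only.
    nodes = {
        "QC24":  {"hdd_tb": 24,     "ssd_tb": 2 * 0.8},   # 2 x 800GB eMLC SSD
        "QC208": {"hdd_tb": 26 * 8, "ssd_tb": 13 * 0.2},  # 26 x 8TB HDD, 13 x 200GB SSD
    }

    for name, n in nodes.items():
        raw_pb_at_1000 = n["hdd_tb"] * 1000 / 1000   # TB/node x 1,000 nodes, then TB -> PB
        print(f"{name}: {n['hdd_tb']}TB HDD + {n['ssd_tb']:.1f}TB SSD per node, "
              f"~{raw_pb_at_1000:.0f}PB raw HDD at 1,000 nodes")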

    Reply
