Computer trends for 2015

Here comes my long list of computer technology trends for 2015:

Digitalisation is coming to change all business sectors and our daily work even more than before. Digitalisation also changes the IT sector: traditional software packages are moving rapidly into the cloud. The need to own or rent your own IT infrastructure is dramatically reduced. Automated tools for configuration and monitoring will become truly possible. The workload of software implementation projects will be reduced significantly as software needs less adjustment. Traditional IT outsourcing is definitely threatened. Security management is one of the key areas of change, as security threats increasingly come from the digital world. For the IT sector, digitalisation simply means: “cheaper and better.”

The phrase “Communications Transforming Business” is becoming the new normal. The pace of change in enterprise communications and collaboration is very fast. A new set of capabilities, empowered by the combination of Mobility, the Cloud, Video, software architectures and Unified Communications, is changing expectations for what IT can deliver.

Global Citizenship: Technology Is Rapidly Dissolving National Borders. Besides your passport, what really defines your nationality these days? Is it where you live? Where you work? The language you speak? The currency you use? If so, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Increasingly, technological developments will allow us to live and work almost anywhere on the planet… (and even beyond). In my mind, a borderless world will be a more creative, lucrative, healthy, and frankly, exciting one. Especially for entrepreneurs.

The traditional enterprise workflow is ripe for huge change as the focus moves away from working in a single context on a single device to the workflow being portable and contextual. InfoWorld’s executive editor, Galen Gruman, has coined a phrase for this: “liquid computing.” The increase in productivity is promised to be stunning, but the loss of control over data will cross an alarming threshold for many IT professionals.

Mobile will be used more and more. Currently, 49 percent of businesses across North America have adopted between one and ten mobile applications, indicating significant acceptance of these solutions. Embracing mobility promises to increase visibility and responsiveness in the supply chain when properly leveraged. Increased employee productivity and business process efficiencies are seen as key business impacts.

The Internet of things is a big, confusing field waiting to explode.  Answer a call or go to a conference these days, and someone is likely trying to sell you on the concept of the Internet of things. However, the Internet of things doesn’t necessarily involve the Internet, and sometimes things aren’t actually on it, either.

The next IT revolution will come from an emerging confluence of liquid computing plus the Internet of things. Those two trends are connected — or should connect, at least. If we are to trust the consultants, we are in a sweet spot for significant change in computing that all companies and users should look forward to.

Cloud will be talked about a lot and taken more into use. Cloud is the next-generation supply chain for IT. A global survey of executives predicted a growing shift towards third-party providers to supplement internal capabilities with external resources. CIOs are expected to adopt a more service-centric enterprise IT model. Global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion in 2014 (up 20% from $145.2 billion in 2013), and growth will continue to be fast (“By 2017, enterprise spending on the cloud will amount to a projected $235.1 billion, triple the $78.2 billion in 2011“).

The rapid growth in mobile, big data, and cloud technologies has profoundly changed market dynamics in every industry, driving the convergence of the digital and physical worlds, and changing customer behavior. It’s an evolution that IT organizations struggle to keep up with. To succeed in this situation there is a need to combine traditional IT with agile and web-scale innovation. There is value in both the back-end operational systems and the fast-changing world of user engagement. You are now effectively operating two-speed IT (bimodal IT, two-speed IT, or traditional IT/agile IT). You need a new API-centric layer in the enterprise stack, one that enables two-speed IT.
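To make the API-centric layer idea more concrete, here is a minimal sketch of such a facade, assuming Flask and a hypothetical get_order_from_erp() helper (both the route and the helper are illustrative, not a reference implementation): the fast-moving “agile IT” side consumes a small, stable REST API while the slower systems of record stay untouched behind it.

    import flask

    app = flask.Flask(__name__)

    def get_order_from_erp(order_id):
        # Hypothetical call into a slow-moving back-end system of record.
        return {"id": order_id, "status": "shipped"}

    @app.route("/api/v1/orders/<order_id>")
    def order(order_id):
        # The "fast" side of two-speed IT only ever sees this stable JSON API.
        return flask.jsonify(get_order_from_erp(order_id))

    if __name__ == "__main__":
        app.run(port=8080)

The point is the decoupling: the API layer can evolve at web speed while the back-end behind it changes on its own slower cycle.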

As Robots Grow Smarter, American Workers Struggle to Keep Up. Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work. Automation is not only replacing manufacturing jobs, it is displacing knowledge and service workers too.

In many countries the IT recruitment market is flying, having picked up to a post-recession high. Employers beware – after years of relative inactivity, job seekers are gearing up for change. Economic improvements and an increase in business confidence have led to a burgeoning jobs market and an epidemic of itchy feet.

Hopefully the IT department is increasingly being seen as a profit centre rather than a cost centre, with IT budgets commonly split between keeping the lights on and spending on innovation and revenue-generating projects. Historically IT was about keeping the infrastructure running and there was no real understanding outside of that, but the days of IT being locked in a basement are gradually changing. CIOs and CMOs must work more closely to increase focus on customers next year or risk losing market share, Forrester Research has warned.

Good questions to ask: Where do you see the corporate IT department in five years’ time? With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT? What IT process or activity is the most important in creating superior user experiences to boost user/customer satisfaction?

 

Windows Server 2003 goes end of life in summer 2015 (July 14, 2015). There are millions of servers globally still running the 13-year-old OS, with one in five customers forecast to miss the 14 July deadline when Microsoft turns off extended support. There were estimated to be 2.7 million WS2003 servers in operation in Europe some months back. This will keep system administrators busy, because there is only around half a year left, and an upgrade to Windows Server 2008 or Windows Server 2012 may have difficulties. Microsoft and support companies do not seem to be interested in continuing Windows Server 2003 support, so for those who need it the custom pricing can be “incredibly expensive”. At this point it seems that many organizations have the desire for a new architecture and consider moving the servers to the cloud as one option.

Windows 10 is coming to PCs and mobile devices. Just a few months back Microsoft unveiled a new operating system, Windows 10. The new Windows 10 OS is designed to run across a wide range of machines, including everything from tiny “internet of things” devices in business offices to phones, tablets, laptops, and desktops to computer servers. Windows 10 will have exactly the same requirements as Windows 8.1 (the same minimum PC requirements that have existed since 2006: a 1GHz, 32-bit chip with just 1GB of RAM). A technical preview is available. Microsoft says to expect AWESOME things of Windows 10 in January. Microsoft will share more about the Windows 10 ‘consumer experience’ at an event on January 21 in Redmond and is expected to show the Windows 10 mobile SKU at the event.

Microsoft is going to monetize Windows differently than before. Microsoft Windows has made headway in the market for low-end laptops and tablets this year by reducing the price it charges device manufacturers, charging no royalty on devices with screens of 9 inches or less. That has resulted in a new wave of Windows notebooks in the $200 price range and tablets in the $99 price range. The long-term success of the strategy against Android tablets and Chromebooks remains to be seen.

Microsoft is pushing the Universal Apps concept. Microsoft has announced Universal Windows Apps, allowing a single app to run across Windows 8.1 and Windows Phone 8.1 for the first time, with additional support for Xbox coming. Microsoft promotes a unified Windows Store for all Windows devices. The Windows Phone Store and Windows Store would be unified with the release of Windows 10.

Under new CEO Satya Nadella, Microsoft realizes that, in the modern world, its software must run on more than just Windows. Microsoft has already revealed Microsoft Office programs for Apple iPad and iPhone. It also has an email client compatible with both the iOS and Android mobile operating systems.

With Mozilla Firefox and Google Chrome grabbing so much of the desktop market—and Apple Safari, Google Chrome, and Google’s Android browser dominating the mobile market—Internet Explorer is no longer the force it once was. The “Microsoft May Soon Replace Internet Explorer With a New Web Browser” article says that Microsoft’s Windows 10 operating system will debut with an entirely new web browser code-named Spartan. This new browser is a departure from Internet Explorer, the Microsoft browser whose relevance has waned in recent years.

SSD capacity has always lagged well behind hard disk drives (hard disks are in 6TB and 8TB territory while SSDs are primarily 256GB to 512GB). Intel and Micron will try to kill the hard drives with new flash technologies. Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Later (within the next two years) Intel promises 10TB+ SSDs thanks to 3D vertical NAND flash memory. Interfaces to SSDs are also evolving beyond traditional hard disk interfaces. PCIe flash and NVDIMMs will make their way into shared storage devices more in 2015. The ULLtraDIMM™ SSD connects flash storage to the memory channel via standard DIMM slots, in order to close the gap between storage devices and system memory (less than five microseconds write latency at the DIMM level).

Hard disks will still be made in large amounts in 2015. It seems that NAND is not taking over the data centre immediately. The big problem is $/GB. Estimates of shipped disk and SSD capacity out to 2018 show disk growing faster than flash. The world’s ability to make and ship SSDs is falling behind its ability to make and ship disk drives – for SSD capacity to match disk by 2018 we would need roughly eight times more flash foundry capacity than we have. New disk technologies such as shingling, TDMR and HAMR are upping areal density per platter and bringing down cost/GB faster than NAND technology can. At present solid-state drives with extreme capacities are very expensive. I expect that during 2015 the prices for SSDs will still be so much higher than hard disks that everybody who needs to store large amounts of data will want to consider SSD + hard disk hybrid storage systems.
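As a back-of-the-envelope illustration of that $/GB argument, here is a minimal sketch with purely hypothetical prices (the per-gigabyte figures and the 10% “hot data” share below are my own assumptions, not numbers from any vendor): tiering only the hot fraction of the data onto flash keeps most of the cost advantage of disk.

    # Illustrative only: hypothetical $/GB figures to show why hybrid tiering looks attractive.
    hdd_cost_per_gb = 0.04      # assumed hard disk price per GB
    ssd_cost_per_gb = 0.45      # assumed SSD price per GB
    capacity_gb = 10000         # 10 TB of data to store
    hot_fraction = 0.10         # assume 10% of the data is "hot" and lives on SSD

    hybrid = capacity_gb * (hot_fraction * ssd_cost_per_gb + (1 - hot_fraction) * hdd_cost_per_gb)
    all_ssd = capacity_gb * ssd_cost_per_gb
    all_hdd = capacity_gb * hdd_cost_per_gb
    print("hybrid: $%.0f   all-SSD: $%.0f   all-HDD: $%.0f" % (hybrid, all_ssd, all_hdd))
    # -> hybrid: $810   all-SSD: $4500   all-HDD: $400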

PC sales, and even laptop sales, are down, and manufacturers are pulling out of the market. The future is all about the device. We have entered the post-PC era so deeply that even the tablet market seems to be saturating, as most people who want one already have one. The crazy years of huge tablet sales growth are over. Tablet shipment growth in 2014 was already quite low (7.2% in 2014, to 235.7M units). There are no great reasons for growth or decline to be seen in the tablet market in 2015, so I expect it to be stable. IDC expects that the iPad sees its first-ever decline, and I expect that too, because the market seems to be more and more taken by Android tablets that have turned out to be “good enough”. Wearables, Bitcoin or messaging may underpin the next consumer computing epoch, after the PC, internet, and mobile.

There will be new tiny PC form factors coming. Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that”. Intel likened the compute stick to similar thumb PCs that plug into an HDMI port and are offered by PC makers with the Android OS and ARM processors (for example the Wyse Cloud Connect and many cheap Android sticks). Such devices typically don’t have internal storage, but can be used to access files and services in the cloud. Intel expects that the stick-sized PC market will grow to tens of millions of devices.

We have entered the post-Microsoft, post-PC programming era: the portable REVOLUTION. Tablets and smart phones are fine for consuming information: a great way to browse the web, check email, stay in touch with friends, and so on. But what does a post-PC world mean for creating things? If you’re writing platform-specific mobile apps in Objective-C or Java then no, the iPad alone is not going to cut it. You’ll need some kind of iPad-to-server setup in which your iPad becomes a mythical thin client for the development environment running on your PC or in the cloud. If, however, you’re working with scripting languages (such as Python and Ruby) or building web-based applications, the iPad or another tablet could be a usable development environment. At least worth testing.

You need to prepare to learn new languages that are good for specific tasks. Attack of the one-letter programming languages: from D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following. Watch out! The coder in the next cubicle might have been bitten and infected with a crazy-eyed obsession with a programming language that is not Java and goes by a mysterious one-letter name. Each offers compelling ideas that could do the trick in solving a particular problem you need fixed.

HTML5’s “Dirty Little Secret”: It’s Already Everywhere, Even In Mobile. Just look under the hood. “The dirty little secret of native [app] development is that huge swaths of the UIs we interact with every day are powered by Web technologies under the hood.”  When people say Web technology lags behind native development, what they’re really talking about is the distribution model. It’s not that the pace of innovation on the Web is slower, it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Vine is a great example of a modern JavaScript app. It’s lightning fast on desktop and on mobile, and shares the same codebase for ease of maintenance.

Docker, meet hype. Hype, meet Docker. Docker: sorry, you’re just going to have to learn about it. Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds. Docker containers are supported by very many Linux systems, and it is not only Linux anymore, as Docker’s app containers are coming to Windows Server, says Microsoft. What containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other. What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications.
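As a minimal sketch of what “package, distribute, deploy” looks like from code, assuming a local Docker daemon and the Docker SDK for Python (the docker package; the alpine image and echo command are just illustrative):

    import docker

    client = docker.from_env()              # connect to the local Docker daemon

    # Pull a stock image and run a throwaway, sandboxed container from it.
    client.images.pull("alpine:latest")
    output = client.containers.run("alpine:latest",
                                   ["echo", "hello from a container"],
                                   remove=True)
    print(output.decode())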

Domestic software is on the rise in China. China is planning to purge foreign technology and replace it with homegrown suppliers. China is aiming to purge most foreign technology from banks, the military, state-owned enterprises and key government agencies by 2020, stepping up efforts to shift to Chinese suppliers, according to people familiar with the effort. In tests, workers have replaced Microsoft Corp.’s Windows with a homegrown operating system called NeoKylin (FreeBSD based desktop O/S). Dell will preinstall NeoKylin on commercial PCs in China. The plan for changes is driven by national security concerns and marks an increasingly determined move away from foreign suppliers. There are cases of replacing foreign products at all layers, from applications and middleware down to infrastructure software and hardware. Foreign suppliers may be able to avoid replacement if they share their core technology or give China’s security inspectors access to their products. The campaign could have lasting consequences for U.S. companies including Cisco Systems Inc. (CSCO), International Business Machines Corp. (IBM), Intel Corp. (INTC) and Hewlett-Packard Co. A key government motivation is to bring China up from low-end manufacturing to the high end.

 

Data center markets will grow. MarketsandMarkets forecasts the data center rack server market to grow from $22.01 billion in 2014 to $40.25 billion by 2019, at a compound annual growth rate (CAGR) of 7.17%. North America (NA) is expected to be the largest region for the market’s growth in terms of revenues generated, but Asia-Pacific (APAC) is also expected to emerge as a high-growth market.

The rising need for virtualized data centers and incessantly increasing data traffic are considered strong drivers for the global data center automation market. The SDDC comprises software-defined storage (SDS), software-defined networking (SDN) and software-defined server/compute, wherein all three components are empowered by specialized controllers, which abstract the controlling plane from the underlying physical equipment. This controller virtualizes the network, server and storage capabilities of a data center, thereby giving better visibility into data traffic routing and server utilization.

New software-defined networking apps will be delivered in 2015. And so will software-defined storage. And software-defined almost anything (I am waiting for when we see software-defined software). Customers are ready to move away from vendor-driven proprietary systems that are overly complex and impede their ability to rapidly respond to changing business requirements.

Large data center operators will be using more and more of their own custom hardware instead of standard PCs from traditional computer manufacturers. Intel is betting on (customized) commodity chips for cloud computing, and it expects that over half the chips Intel will sell to public clouds in 2015 will have custom designs. The biggest public clouds (Amazon Web Services, Google Compute, Microsoft Azure), other big players (like Facebook or China’s Baidu) and other public clouds (like Twitter and eBay) all have huge data centers that they want to run optimally. Companies like AWS “are running a million servers, so floor space, power, cooling, people — you want to optimize everything”. That is why they want specialized chips. Customers are willing to pay a little more for the special run of chips. While most of Intel’s chips still go into PCs, about one-quarter of Intel’s revenue, and a much bigger share of its profits, come from semiconductors for data centers. In the first nine months of 2014, the average selling price of PC chips fell 4 percent, but the average price on data center chips was up 10 percent.

We have seen GPU acceleration taken into wider use. Special servers and supercomputer systems have long been accelerated by moving calculations to graphics processors. The next step in acceleration will be adding FPGAs to accelerate x86 servers. FPGAs provide a unique combination of highly parallel custom computation, relatively low manufacturing/engineering costs, and low power requirements. FPGA circuits may provide a lot more computing power out of a much lower power consumption, but traditionally programming them has been time consuming. This can change with the introduction of new tools (just the next step from techniques learned from GPU acceleration). Xilinx has developed SDAccel tools to develop algorithms in C, C++ and OpenCL and translate them to FPGAs easily. IBM and Xilinx have already demoed FPGA-accelerated systems. Microsoft is also doing research on accelerating applications with FPGAs.
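To give a flavour of the OpenCL programming model that tool flows like SDAccel build on, here is a minimal vector-add sketch, assuming the pyopencl package and whatever OpenCL device is available (CPU, GPU, or an FPGA board with an OpenCL runtime); the kernel string is the kind of C-like code such tools compile for the target device:

    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)

    ctx = cl.create_some_context()          # pick an available OpenCL device
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags

    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel: each work-item adds one element pair in parallel.
    prg = cl.Program(ctx, """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *c) {
        int i = get_global_id(0);
        c[i] = a[i] + b[i];
    }
    """).build()

    prg.vadd(queue, a.shape, None, a_buf, b_buf, c_buf)

    c = np.empty_like(a)
    cl.enqueue_copy(queue, c, c_buf)        # copy the result back to the host
    assert np.allclose(c, a + b)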


If there is one enduring trend for memory design in 2014 that will carry through to next year, it’s the continued demand for higher performance. The trend toward high performance is never going away. At the same time, the goal is to keep costs down, especially when it comes to consumer applications using DDR4 and mobile devices using LPDDR4. LPDDR4 will gain a strong foothold in 2015, and not just to address mobile computing demands. The reality is that LPDDR3, or even DDR3 for that matter, will be around for the foreseeable future (whatever the lowest-cost DRAM may be). Designers are looking for subsystems that can easily accommodate DDR3 in the immediate future, but will also be able to support DDR4 when it becomes cost-effective or makes more sense.

Universal Memory for Instant-On Computing will be talked about. New memory technologies promise to be strong contenders for replacing the entire memory hierarchy for instant-on operation in computers. HP is working with memristor memories that are promised to be akin to RAM but can hold data without power.  The memristor is also denser than DRAM, the current RAM technology used for main memory. According to HP, it is 64 and 128 times denser, in fact. You could very well have a 512 GB memristor RAM in the near future. HP has what it calls “The Machine”, practically a researcher’s plaything for experimenting on emerging computer technologies. Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system in 2015 (Linux++, in June 2015). HP must still make significant progress in both software and hardware to make its new computer a reality. A working prototype of The Machine should be ready by 2016.

Chip designs that enable everything from a 6 Gbit/s smartphone interface to the world’s smallest SRAM cell will be described at the International Solid State Circuits Conference (ISSCC) in February 2015. Intel will describe a Xeon processor packing 5.56 billion transistors, and AMD will disclose an integrated processor sporting a new x86 core, according to a just-released preview of the event. The annual ISSCC covers the waterfront of chip designs that enable faster speeds, longer battery life, more performance, more memory, and interesting new capabilities. There will be many presentations on first designs made in 16 and 14 nm FinFET processes at IBM, Samsung, and TSMC.

 

1,403 Comments

  1. Tomi Engdahl says:

    Dell and HP tech support staff are telling customers to ditch Windows 10
    The words ‘technical’ and ‘support’ don’t seem to apply here
    http://www.theinquirer.net/inquirer/news/2433750/dell-and-hp-tech-support-staff-are-telling-customers-to-ditch-windows-10

    A NUMBER OF OEMs have been caught discouraging the use of Microsoft Windows 10, and in some cases persuading customers to roll back to Windows 8.1.

    Research conducted by Laptop magazine for its annual Tech Showdown found that telephone support agents for Dell and HP told customers that they don’t encourage upgrading to Windows 10.

    The Dell agent told researchers that the company was getting “a ton of support calls from Windows 10 users” and recommended rolling back to 8.1, while a second agent said that there are “a lot of glitches” in the new OS.

    An HP agent spent an hour trying to get an HP proprietary feature working in Windows 10, took control of the researcher’s computer, attempted to fix it, failed, attempted to roll back to Windows 8.1, failed again and then suggested buying a $40 rescue USB.

  2. Tomi Engdahl says:

    10 Years, $1 Billion In Funding, 1 Online Form
    https://www.washingtonpost.com/rweb/politics/a-decade-into-a-project-to-digitize-us-immigration-forms-just-1-is-online/2015/11/08/f63360fc-830e-11e5-a7ca-6ab6ec20f839_story.html

    Heaving under mountains of paperwork, the government has spent more than $1 billion trying to replace its antiquated approach to managing immigration with a system of digitized records, online applications and a full suite of nearly 100 electronic forms.

    A decade in, all that officials have to show for the effort is a single form that’s now available for online applications and a single type of fee that immigrants pay electronically. The 94 other forms can be filed only with paper.

    This project, run by U.S. Citizenship and Immigration Services, was originally supposed to cost a half-billion dollars and be finished in 2013. Instead, it’s now projected to reach up to $3.1 billion and be done nearly four years from now

    From the start, the initiative was mismanaged, the records and interviews show. Agency officials did not complete the basic plans for the computer system until nearly three years after the initial $500 million contract had been awarded to IBM, and the approach to adopting the technology was outdated before work on it began.

    Only three of the agency’s scores of immigration forms have been digitized — and two of these were taken offline after they debuted because nearly all of the software and hardware from the original system had to be junked.

    The sole form now available for electronic filing is an application for renewing or replacing a lost “green card” — the document given to legal permanent residents.

  3. Tomi Engdahl says:

    What Computational Physics Is Really About
    http://www.wired.com/2015/11/what-computational-physics-is-really-about/

    I would like to address the following question:

    When you use a computer to solve a problem (I would call this a numerical calculation), is it an experiment or theory? Or is it something else?

    It’s a very common question. One that comes up often—usually when drinking beer with scientists from a variety of fields. I think it’s an important topic to discuss in order to help everyone understand the nature of science.

    Here are some examples of scientific models:

    A lump of clay in the shape of an amoeba.
    A chart showing the transfers of energy as a block slides along a table.
    The idea that forces change the velocity of objects.
    The equation for the gravitational force between two objects.
    A differential equation describing the motion of a mass on a spring.
    Oh, and a computer program that calculates the motion of a baseball with air resistance, this is a model too.

    So, a model can be many different things. It doesn’t have to be a mathematical model—but that’s often what we see in science.

    Now for a conversation with this computational physicist. Here are some key points that will be brought up.

    Computers are very important in science.
    We create some code and then run it. It produces data which is then analyzed.
    Since a computer program produces data, it is very much like an experiment that produces data.
    Oh, but the computer program is also theoretical because we created it.
    Computational science bridges both theory and experiment. It’s sort of like the third kind of science (with the other two being theoretical and experimental).

    A Computer Program Is a Model

    When you write a computer program, it does indeed give you some numbers in the end. Also, it is true that you don’t always know what these values will look like until you actually run the program. This doesn’t mean it’s like a real experiment. In the end, the program was made by a human and not real life.
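    As a concrete example of the kind of model the article is talking about, here is a minimal sketch (my own illustration, not from the article) that steps the motion of a baseball with air resistance forward in time using simple Euler integration; the mass, drag coefficient and other constants are assumed ballpark values:

        import math

        # Assumed, approximate constants for a baseball (illustrative values).
        m = 0.145        # mass, kg
        g = 9.81         # gravity, m/s^2
        rho = 1.2        # air density, kg/m^3
        C_d = 0.35       # drag coefficient
        A = 0.0042       # cross-sectional area, m^2
        dt = 0.001       # time step, s

        # Initial conditions: thrown at 40 m/s, 35 degrees above horizontal, from 1 m up.
        v0, angle = 40.0, math.radians(35)
        vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
        x, y = 0.0, 1.0

        while y > 0:
            v = math.hypot(vx, vy)
            F_drag = 0.5 * rho * C_d * A * v * v     # drag force opposes velocity
            ax = -(F_drag / m) * (vx / v)
            ay = -g - (F_drag / m) * (vy / v)
            vx, vy = vx + ax * dt, vy + ay * dt
            x, y = x + vx * dt, y + vy * dt

        print("range with air resistance: %.1f m" % x)

    The program is just the differential equations from the list above turned into small repeated arithmetic steps, which is why it sits in the “model” category rather than being an experiment.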

  4. Tomi Engdahl says:

    Google open sources machine learning software
    TensorFlow to help your phone find cats and, you know, everything else
    http://www.theregister.co.uk/2015/11/09/google_open_sources_tensorflow/

    Google is helping bring HAL to life by open sourcing its machine learning software.

    The software, called TensorFlow, is the successor to the DistBelief system that the online giant used for the past five years to make sense of the vast amounts of data it has access to.

    DistBelief has sifted email for spam, scanned YouTube videos for cars or cats, improved speech recognition and many other applications. But it was closely tied in with Google internal systems and was difficult to configure.

    TensorFlow – Google’s latest machine learning system, open sourced for everyone
    http://googleresearch.blogspot.fi/2015/11/tensorflow-googles-latest-machine_9.html
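    For readers who want to see what a TensorFlow program actually looks like, here is a minimal sketch using the graph-and-session API as released in 2015 (the numbers are arbitrary; later TensorFlow versions changed this style):

        import tensorflow as tf

        # Build a small dataflow graph: nodes are operations, edges are tensors.
        a = tf.constant([1.0, 2.0, 3.0])
        b = tf.constant([4.0, 5.0, 6.0])
        c = a * b                      # element-wise multiply node

        # Nothing is computed until the graph is run in a session.
        with tf.Session() as sess:
            print(sess.run(c))         # -> [ 4. 10. 18.]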

  5. Tomi Engdahl says:

    H2O.ai Raises $20M For Its Open Source Machine Learning Platform
    http://techcrunch.com/2015/11/09/h2o-ai-raises-20m-for-its-open-source-machine-learning-platform/

    H2O is an open source platform for data scientists and developers who need a fast machine learning engine for their applications. H2O.ai, the company behind the service, today announced that it has raised a $20 million Series B funding round led by Paxion Capital Partners (the new firm of GoPro board member Michael Marks) and existing investors Nexus Venture Partners and Transamerica. New investor Capital One Growth Ventures also joined this round. In total, the company has now raised $34 million.

    The H2O platform is the company’s main product, but it also offers a number of tools around that platform. These include Sparkling Water, which combines the Apache Spark data processing engine with the H2O platform (see where that name comes from?), as well as Flow, an open source notebook-style user interface for H2O (similar to iPython notebooks).

  6. Tomi Engdahl says:

    Block storage is dead, says ex-HP and Supermicro data bigwig
    Recent consolidations show ‘distinct lack of imagination’
    http://www.theregister.co.uk/2015/11/10/block_storage_dead_interview/

    Block storage is dead, object storage can be faster than file storage, and storage-class memory will be the only local storage on a server. So said Robert Novak … but who he?

    Novak was until recently a Distinguished Technologist in the HP servers’ Hyperscale Business Unit and has had an interesting employment history, being Director of Systems Architecture at Nexenta Systems, April 2012 – December 2014, introducing NexentaEdge at VMworld 2014, which was a scale-out storage architecture that provides a global name space across a cluster of clusters while offering global inline deduplication, dynamic load balancing, capacity balancing and more.

    Before that he was Director of Enterprise Servers at Supermicro, July 2007 – April 2012.

    El Reg Robert, why is block storage dead and what has that to do with Hollerith cards?

    Robert Novak I have been working in the storage industry for a very long time. I used to teach second-year computer science students about the Unix File System and how it used inodes (now called metadata) to track where a file was placed on the blocks of a disk drive.

    In a recent piece of work on looking at new file systems, I started my research by collecting every book I could find in print on storage and file systems.

    In each book they begin with a description of the “Unit Record Device”. Very few of your readers are old enough to remember using them, but in the heyday of the IBM mainframe it was known as the 80 column punched card. This card was actually a revamping of an older technology known as the Hollerith card which had its roots in punched railway tickets.

    The “unit record” was too small to keep as separate records on storage devices (even for tape) so that the unit records were collected into groups of records called “blocks”.

    El Reg Is object storage based in underlying file storage and how did that come about?

    Robert Novak Most object stores started their life by storing objects as a collection of files. Some object stores actually manage objects directly on top of blocks in their own file system, but most of them are built on top of file storage and use separate spaces in the file storage to separate the metadata (name of object, date of creation, owner, etc.) from the data (picture, video, document).

    El Reg How will key/value storage and direct disk addressing improve that?

    Robert Novak Let’s talk about key/value storage first. In 2013, Seagate announced its plans to build Key/Value storage devices, the “Kinetic” drive, and actually started to ship those drives one year later in 2014.

    With these drives, you don’t need to know anything about the size of the drive, the size of the blocks of storage on the drive, or where on the drive the data is actually stored.

    All you need to know is the “key” (up to 4096 bits in the Kinetic model). This is politically incorrect and no disrespect is intended, but I sometimes refer to it as the Chinese Laundry model of storage. You take your clothing to the Chinese Laundry store and drop it off for cleaning. The proprietor gives you a ticket with a number on the ticket.

    A few days later you return to the laundry to reclaim your clothing (value), but you forgot your laundry ticket (key). The proprietor says, “No tickee, no laundry”.

    Key/Value drives work in a similar fashion. However instead of the proprietor giving you a ticket (key), you create your own key for the data that must be globally unique.

    The difference that this makes is that the host server knows nothing about WHERE on the device the data is stored. It does not build any dependencies on the data the way other file systems did.
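    To illustrate the host-side difference Novak describes, here is a toy sketch (purely illustrative, not the Kinetic API): the host addresses data only by a key it chose itself and never learns where on the device the bytes live, unlike block storage where the host manages block addresses directly.

        class ToyKeyValueDrive:
            """Hypothetical stand-in for a key/value device such as a Kinetic drive."""

            MAX_KEY_BYTES = 512            # Kinetic keys are up to 4096 bits

            def __init__(self):
                self._blobs = {}           # the device decides internally where data lives

            def put(self, key, value):
                if len(key) > self.MAX_KEY_BYTES:
                    raise ValueError("key too long")
                self._blobs[key] = value

            def get(self, key):
                return self._blobs[key]    # unknown key -> KeyError ("no tickee, no laundry")

        drive = ToyKeyValueDrive()
        drive.put(b"backups/2015-11-10/db.dump", b"...binary payload...")
        print(len(drive.get(b"backups/2015-11-10/db.dump")))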

  7. Tomi Engdahl says:

    White box servers? We can do that, says HP Enterprise chief
    Give us your hosting masses
    http://www.theregister.co.uk/2015/11/10/hpe_white_box_servers/

    The Far Eastern white box makers are sucking up some server volumes in Europe at the expense of the big brands – but that’s just fine, according to Hewlett Packard Enterprise’s EMEA chief.

    Stats from IDC revealed that in Q2 the ODMs (original design manufacturers) grew unit sales and revenues by 40 and 42 per cent respectively in the region by selling to smaller hosting firms.

    This equates to shipment market share of seven per cent but only two per cent of the total factory revenues. Peter Ryan, MD for EMEA at HPE, told us: “ODMs have been making progress. But while it’s a lot of units it’s very little of the [overall] revenues value of the market … they are not swamping the whole market, that is clear.”

    “At the same time we’ve seen very significant double digit growth in the region in our own server business for our standard x86 business in local currency,” he said. “So there is solid demand in the marketplace in Western Europe for both private and public environments, whether they are hosted by someone else or whether you are building them out.”

    It is a different story in the US, where white box builders are a significantly bigger presence. Here they are building specs to design for Google, Facebook and the other mammoth service providers, a trend that started four years ago.

    The white box trend is spreading to Europe, albeit in a gentler way. Europe’s indigenous hosting firms tend to be much smaller than their US counterparts – but customers’ data sovereignty concerns means Americans are on their way. Only this month, for instance, AWS and Microsoft have announced that they are opening data centres in the UK.

    China and the cloud sink their teeth into server sales
    Tier-1 server suppliers feel the bite as Chinese vendors increase market share
    http://www.theregister.co.uk/2015/08/04/china_and_the_cloud_sinking_teeth_into_server_sales/

  8. Tomi Engdahl says:

    Apple’s Tim Cook declares the end of the PC and hints at new medical product
    http://www.telegraph.co.uk/technology/apple/11984806/Apples-Tim-Cook-declares-the-end-of-the-PC-and-hints-at-new-medical-product.html

    Exclusive interview: The Apple boss has two missions – taking on the office PC with his new devices and keeping his customers safe from cyber criminals

    Unlike in the consumer world, where phones and tablets have revolutionised consumption habits, PCs remain kings of the workplace.

    But time may finally be running out for the traditional computer. Looking at the shiny new super-sized iPad Pros tucked away in a special room on the third floor of Apple’s flagship Covent Garden store, complete with detachable keyboards, split view functionality and Apple Pencil stylus, it is clear that the world’s largest company has radical plans to change the way we work.

    “I think if you’re looking at a PC, why would you buy a PC anymore? No really, why would you buy one?”, asks Tim Cook, Apple’s chief executive

    “Yes, the iPad Pro is a replacement for a notebook or a desktop for many, many people. They will start using it and conclude they no longer need to use anything else, other than their phones,”

    The second is music and movie consumers: the sound system and speakers are so powerful that the iPad appears to pulsate in one’s hands when one plays a video.

  9. Tomi Engdahl says:

    Tom Warren / The Verge:
    Tim Cook’s claim that the iPad Pro could replace PCs for many people shows Apple hasn’t learned from Microsoft’s Surface

    Apple has learned nothing from Microsoft’s Surface
    The iPad Pro won’t replace a PC just yet
    http://www.theverge.com/2015/11/10/9704020/apple-tim-cook-ipad-pro-replaces-a-pc

    Apple’s 12.9-inch iPad Pro goes on sale tomorrow, and that means CEO Tim Cook is ready to drum up interest in his company’s giant tablet. In a series of interviews with UK publications, Cook has been discussing how a bigger iPad will change the world, but one particular quote really stands out. “I think if you’re looking at a PC, why would you buy a PC anymore?” asks Cook, in an interview with The Telegraph. “No really, why would you buy one?”

    Cook argues that the iPad Pro “is a replacement for a notebook or a desktop” for lots of people. “They will start using it and conclude they no longer need to use anything else, other than their phones.” While it’s certainly true that some consumers are replacing PCs with tablets, it’s not a trend that has caught on broadly for businesses yet. iPad sales growth has halted, and Tim Cook really wants people to believe in the iPad once again.

    There are a number of reasons why iPads aren’t selling as well anymore. Old iPads are probably still sufficient for basic tasks and don’t need to be replaced, in the same way that PC makers are desperately teaming up together to convince consumers to replace their old laptops. There’s also the unavoidable truth that tablets aren’t replacing PCs yet. The stats don’t lie, and Apple’s latest financial results show that Mac sales are up 3 percent and iPad sales are down a staggering 20 percent.

    It’s easy to see that Apple has learned nothing from Microsoft’s Surface work. The original Surface RT shipped with just one angle for its kickstand and it was awkward to use as a laptop replacement on your lap. The Surface Pro is a little better in the lap, but the stylus still doesn’t really have a safe home so it constantly falls off. Apple’s iPad Pro can only be used at one angle with the keyboard, and there’s no place to store the stylus when you’re not using it.

    It also goes far beyond the hardware fundamentals. Apple’s iPad Pro ships with a tablet operating system that doesn’t have true support for a mouse and keyboard. You won’t use a trackpad on the iPad Pro, you’ll touch the screen.

    Future versions of both the iPad Pro and Surface Pro will address the obvious usability issues, but Microsoft has already started dropping hints at its own direction.

  10. Tomi Engdahl says:

    Google open sources its machine learning engine Tensorflow
    AI AI Oh!
    http://www.theinquirer.net/inquirer/news/2433999/google-open-sources-its-machine-learning-engine-tensorflow

    GOOGLE HAS announced the open-sourcing of its machine learning engine TensorFlow.

    Despite sounding like a sanitary product, TensorFlow is in fact behind some of Google’s biggest recent advances, such as the improvements in speech recognition that have allowed Google Now to expand.

    “TensorFlow is not complete; it is intended to be built upon, improved, and extended.

    Everything you need is included, from the source code itself, development kits, Apache 2.0 licenced examples, tutorials, and sample use cases.

    Earlier this year, a Tensorflow project made the news when Google’s Deepdream showed us what computers dream about. It turns out that when you show them Fear and Loathing in Las Vegas, they dream about some quite terrifying stuff that takes it to a whole other level.

    TensorFlow, Google’s Open Source AI, Signals Big Changes in Hardware Too
    http://www.wired.com/2015/11/googles-open-source-ai-tensorflow-signals-fast-changing-hardware-world/

    In open sourcing its artificial intelligence engine—freely sharing one of its most important creations with the rest of the Internet—Google showed how the world of computer software is changing.

    These days, the big Internet giants frequently share the software sitting at the heart of their online operations. Open source accelerates the progress of technology. In open sourcing its TensorFlow AI engine, Google can feed all sorts of machine-learning research outside the company, and in many ways, this research will feed back into Google.

    But Google’s AI engine also reflects how the world of computer hardware is changing. Inside Google, when tackling tasks like image recognition and speech recognition and language translation, TensorFlow depends on machines equipped with GPUs, or graphics processing units, chips that were originally designed to render graphics for games and the like, but have also proven adept at other tasks. And it depends on these chips more than the larger tech universe realizes.

  11. Tomi Engdahl says:

    Intel expands Xeon D chip family to fuel shift to cloud-ready communications
    Offers more performance, energy efficiency, and twice the maximum memory
    http://www.theinquirer.net/inquirer/news/2434075/intel-expands-xeon-d-chip-family-to-fuel-shift-to-cloud-ready-communications

    INTEL HAS EXPANDED its family of low-power Xeon D processors to include better support for network and storage in order to speed up the move towards cloud-ready communications.

    The fresh Xeon D-1500 products come in nine flavours and are said to offer more performance, energy efficiency, and twice the maximum memory of the previous iteration, making them ideal for dense environments in networking, cloud and enterprise storage, as well as IoT applications, Intel said.

    “Billions of devices are becoming connected – from smartphones to cars to factories – and that brings new use cases and service opportunities that drive unprecedented growth in network and storage demands,” explained Intel. “Today’s networks are not designed in a way that allows communications providers to quickly or cost effectively expand their infrastructure.”

    Intel believes that for us to take advantage of the Internet of Things (IoT) and enhance mobile computing experiences, communications networks need to be “re-architected”, with increased programmability and built-in flexibility throughout the infrastructure to handle the anticipated increase in volume and complexity of data traffic.

    Intel said that more than 50 vendors are currently building systems using these new Xeon D-1500 chips.

    As part of the same networking communications focus, Intel also unveiled a host of Ethernet controllers.

    The Ethernet Multi-host Controller FM10000 Family is said to combine Ethernet technology with advanced switch resources for use in high-performance communications network applications and dense server platforms.

    With up to 200Gbps of high-bandwidth multi-host connectivity and multiple 100GbE ports, the FM10000 Ethernet controller delivers a better packet processing capability that should help to reduce network traffic bottlenecks within and between servers.

  12. Tomi Engdahl says:

    Imagination gives MIPS Warrior CPU a 64-bit boost with P6600 chip
    Brings improvements such as deep 16-stage pipeline with multi-issue and OoO execution
    http://www.theinquirer.net/inquirer/news/2434173/imagination-gives-mips-warrior-cpu-a-64-bit-boost-with-p6600-chip

    IMAGINATION TECHNOLOGIES has introduced three new additions to the MIPS Warrior CPU family, updating its embedded 32-bit M-class CPUs with the new M6200 and M6250, as well as the higher performing P-class CPU with the 64-bit P6600.

    The MIPS P6600 is touted as “the next evolution” of the P-class family and is intended to “pave the way” to future generations of higher performance 64-bit processors.

  13. Tomi Engdahl says:

    Nvidia Brings Computer Vision and Deep Learning to the Embedded World
    http://hackaday.com/2015/11/10/nvidia-brings-computer-vision-and-deep-learning-to-the-embedded-world/

    Today, Nvidia announced their latest platform for advanced technology in autonomous machines. They’re calling it the Jetson TX1, and it puts modern GPU hardware in a small and power efficient module. Why would anyone want GPUs in an embedded format? It’s not about frames per second; instead, Nvidia is focusing on high performance computing tasks – specifically computer vision and classification – in a platform that uses under 10 Watts.

    For the last several years, tiny credit card sized ARM computers have flooded the market. While these Raspberry Pis, BeagleBones, and router-based dev boards are great for running Linux, they’re not exactly very powerful. x86 boards also exist, but again, these are lowly Atoms and other Intel embedded processors. These aren’t the boards you want for computationally heavy tasks. There simply aren’t many options out there for high performance computing on low-power hardware.

    The Jetson TX1 uses a 1 TFLOP/s 256-core Maxwell GPU, a 64-bit ARM A57 CPU, 4 GB of DDR4 RAM, and 16 GB of eMMC flash for storage, all in a module the size of a credit card. The Jetson TX1 runs Ubuntu 14.04, and is compatible with the computer vision and deep learning tools available for any other Nvidia platform. This includes Nvidia Visionworks, OpenCV, OpenVX, OpenGL, machine learning tools, CUDA programming, and everything you would expect from a standard desktop Linux box.
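    As a small taste of the vision workloads such a board targets, here is a minimal sketch assuming the OpenCV Python bindings and an input image file named frame.jpg (the filename and thresholds are just examples):

        import cv2

        # Read a frame, find edges, save the result -- a lightweight building block
        # of the computer-vision pipelines boards like the Jetson TX1 are aimed at.
        img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(img, 100, 200)    # hysteresis thresholds are assumptions
        cv2.imwrite("edges.jpg", edges)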

    Nvidia unveils credit card-sized ‘supercomputer’ for portable AI
    CEO wants to be the ultimate helicopter parent
    http://www.theregister.co.uk/2015/11/11/nvidias_latest_ai_hardware/

    Older readers may think of Nvidia as a graphics firm but, according to CEO Jen-Hsun Huang, the firm is now all-in on accelerating computing and machine learning.

    “I’ve been in the computer industry for 30-some years and this has got to be one of the most exciting things that’s happening: The ability for computers to learn, the ability for computers to write software themselves and do artificially intelligent things is revolutionizing web services,” he said.

    To that end he spent Tuesday showing off new hardware that the company has launched for the machine learning market. At the data center end there are new GPU accelerators aimed at easing the video and graphics workload that servers handle, and then in the afternoon the firm showed off the Jetson TX1.

    This 50 x 87mm card contains a one-teraflop 256-core Maxwell GPU, a 64-bit ARM A57 CPU, and 4GB of memory, along with Ethernet and Wi-Fi. It will be available for order in the first quarter of next year with a $299 price tag, or you can get the software developer’s kit and hardware later this month for $599, or $299 if you’re in school.

  14. Tomi Engdahl says:

    Most developers have never seen a successful project
    CD Guru: You’re doing it all wrong, again and again
    http://www.theregister.co.uk/2015/11/11/most_developers_never_seen_successful_project/

    Most software professionals have never seen a successful software development project, continuous delivery evangelist Dave Farley said, and have “built careers on doing the wrong thing”.

    Farley, kicking off the Continuous Lifecycle conference in Mannheim, said study after study had shown that a small minority of software development projects could be judged successes.

    One study of 5,400 projects, by McKinsey and Oxford University, showed that 17 per cent of projects were so catastrophically bad they had threatened the very existence of the company.

    Given these sorts of statistics, Farley argued, individuals could plausibly spend their whole career in software development without ever encountering, never mind running, an unequivocally successful development project.

    “I think the vast majority of people in our industry have spent the vast majority of their careers not knowing what a successful software project looks like,”

    Farley traced the sorry state of software development practices to a fundamental misreading of the 1970 Winston Royce paper (PDF) considered as defining the waterfall method that has shaped traditional software development practices.

    “This paper was a description of what not to do,” said Farley.

    Royce was “arguing in the 1970s for iterative development” Farley claimed. Instead, Farley continued, we have a situation where taking an entirely ad hoc approach to software arguably leads to more successful outcomes than traditional waterfall approaches.

    To improve their chances of producing successful software, Farley advised his audience to automate as much as they could, especially testing and config management, and to slash cycle times.
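    To make the “automate testing” advice concrete, here is a minimal sketch of the kind of check that can run automatically on every commit (pytest-style; the discount function is a hypothetical example, not something from the talk):

        # test_pricing.py -- run with `pytest`; the build fails if behaviour regresses.

        def apply_discount(price, percent):
            """Hypothetical function under test."""
            return round(price * (1 - percent / 100.0), 2)

        def test_twenty_percent_off():
            assert apply_discount(100.0, 20) == 80.0

        def test_zero_discount_is_identity():
            assert apply_discount(59.99, 0) == 59.99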

    “We’re saying the same thing, but with slightly different terminology,” he said. However, when dealing with CEOs and “the business” it is easier to talk about a switch to a “continuous delivery” model than a DevOps model, because non-technical execs’ first question will be “what’s operations?”

  15. Tomi Engdahl says:

    Micron Persistent Memory Pairs RAM/Flash
    http://www.eetimes.com/document.asp?doc_id=1328227&

    Micron Technologies Inc. (Boise, Idaho) has combined a dual in-line memory module (DIMM) board with a self-backup-powered (via a super-capacitor) solid-state drive (SSD) to create a new memory architecture that the company claims can boost up-time.

    It is Micron’s first foray into “persistent memory” – a type of memory that cannot be destroyed come rain, snow, sleet or the resultant power outages. The theory goes that by building double-backups into each standard dual-inline-memory-module (DIMM) used by computers from PCs to supercomputers, brown-outs from the likes of summer AC overload to power surges from acts of god like lightning strikes cannot bring down your system.

    Persistent memory is especially useful for the trend of “in-memory” computing, where all the data and algorithms are loaded into a gigantic memory space (sometimes taking days or even weeks) before execution, so no SSDs or hard-disk drives (HDDs) slow down the fastest programs in the world–from modeling the most efficient internal combustion engines to stewardship of our nuclear arsenal. (I told them Sony’s PS3 had been doing in-memory computing for a long time before it became trendy.)

    “Micron is filling in the gap between DRAM and non-volatile memories like flash,” Ryan Baxter, a director of marketing for CNBU at Micron told EE Times. “With so-called ‘persistent memory’ latency is cut to a minimum with speedy DRAM reads combined with the nonvolatile security of flash that can be emergency backed-up in seconds instead of hours, days or even weeks.”

  16. Tomi Engdahl says:

    64-Bit Debug Tool Works Back in Time
    http://www.eetimes.com/document.asp?doc_id=1328230&

    At ARM TechCon this week Undo Software has unveiled its unique debugger for 64-bit ARM processors—Live Recorder. This software tool records key information about system code execution so that developers can “rewind” and replay the code later to readily track down errors. Live Recorder works both as a traditional trace tool allowing developers to step through code and also offers the ability to step backwards through code execution to more quickly find and fix bugs.

    The goal of Live Recorder is to help speed the development of Linux-based 64-bit ARM systems, Undo Software co-founder and CEO Greg Law told EE Times in an interview. “There is a big effort underway to port to 64-bit ARM,” Law said. “Everything in development for smartphones, for instance, is 64-bit.” Servers for industrial and communications markets are also moving to 64-bit, he added.

    “The way Linux has changed over the last 10 years makes it easier than ever to port,” Law said, “and developers are using large amounts of open source. But it’s not quite that simple,” he noted. “There are subtle differences between 32- and 64-bit processors that create hard-to-find bugs. Further, the fragmented sources of software mean that you no longer know all the code. Developers are spending half their time just finding bugs.”

    The Undo Live Recorder aims to speed debugging by capturing system state changes as code executes so that code execution can be reconstructed and replayed once an error has manifested. By capturing system state history in this way, the tool eliminates the need to insert breakpoints, watchpoints, printf statements, or the like and eliminates the need to repeat the error on command in order to track down its source. The tool also allows developers to remotely debug errors offline that occurred in the field, by having the user send the record in for analysis.

    According to Law, Live Recorder exploits computer determinism in its operation. The tool’s code, which is available as a library add-in for user code, takes a series of snapshots of system state, buffering them for storage when a triggering event, such as an error, occurs. Developers can then distribute and copy this record, allowing multiple developers to debug code collaboratively on their machines without tying up the user’s system.

    This latest version of Live Recorder targets 64-bit ARM processors and is an extension of a 32-bit version already available. The tool will be sold direct to developers and is also licensed to development tool vendor

  17. Tomi Engdahl says:

    Walt Mossberg / The Verge:
    iPad Pro review: not a laptop replacement, too bulky, disappointing and costly optional keyboard case, few optimized apps — Mossberg: The iPad Pro can’t replace your laptop totally, even for a tablet lover — I am a tablet man, specifically an iPad man. I do love my trusty, iconic, MacBook Air laptop.
    http://www.theverge.com/2015/11/11/9707864/walt-mossberg-ipad-pro-laptop-replacement

    Andrew Cunningham / Ars Technica:
    iPad Pro review: excellent performance, nice hardware and speakers, but expensive, suffers from iOS 9’s limitations, and takes a long time to charge — iPad Pro review: Mac-like speed with all the virtues and restrictions of iOS — There’s some promise here, but iOS makes this a very different kind of computer.
    http://arstechnica.com/apple/2015/11/ipad-pro-review-mac-like-speed-with-all-the-virtues-and-limitations-of-ios/

  18. Tomi Engdahl says:

    Sean Gallagher / Ars Technica:
    Microsoft’s Project Oxford APIs offer face tracking, emotion sensing, voice recognition, and spell checking — Microsoft’s Azure gets all emotional with machine learning — Project Oxford AI services detect emotions, identify voices, and fix bad spelling.

    Microsoft’s Azure gets all emotional with machine learning
    Project Oxford AI services detect emotions, identify voices, and fix bad spelling.
    http://arstechnica.com/information-technology/2015/11/new-microsoft-azure-tools-can-tell-who-you-are-and-what-your-mood-is/

  19. Tomi Engdahl says:

    Eva Dou / Wall Street Journal:
    Lenovo Group Swings to Net Loss of $714 Million for Quarter

    Lenovo Group Swings to Net Loss of $714 Million for Quarter
    First quarterly net loss in more than six years as PC maker restructures
    http://www.wsj.com/articles/lenovo-group-swings-to-net-loss-of-714-million-for-quarter-1447305804

  20. Tomi Engdahl says:

    Hackathons: Don’t try them if you don’t like risks
    Rules and tools to get the most out of your pizza-replete staff
    http://www.theregister.co.uk/2015/11/12/hackathon_risks/

    When organisations grind to a halt, weighed down by their own bureaucracy, inertia and politics, they flail about for something to give a short, sharp shock to their vitals. Something to get them moving again.

    The techniques used to get things humming along again have varied over the years – a rogue’s gallery of specious business trends and fads. Twenty years ago, it might have been role playing. Ten years ago, an offsite with those cringeworthy trust-building games.

    Today, we turn to hackathons.

    Until quite recently, hackathons were the exclusive preserve of the tech startup community, thriving on the 48-hours-locked-in-a-room-together intensity followed by the near-orgasmic release of a great pitch. Suddenly, both big business and big government, in a collective penny-drop moment, have adopted the hackathon methodology to inspire employees and capture innovative ideas.

    That should be making us suspicious. The purpose of a hackathon is to create a space so unconstrained by conventional wisdom as to be truly disruptive. Owing nothing to anyone, participants can be free to ‘think different’.

    That’s the theory, anyway – but I doubt anything would be more terrifying to a big organisation.

    Big bureaucracies – whether corporate or government – are at odds with hackathons, so they try to have it both ways: they stack the deck of the hackathon, then complain if they don’t get the promised results.

    One: You can establish the questions – but not the answers

    The most common defensive strategy in any hackathon is an attempt to control outcomes. That’s often done by framing the questions so narrowly only one answer is possible, or by limiting the terms of discussion, or limiting the range of proposed solutions.

    Two: Conflict resolution skills are key

    One recent hackathon saw one team split by an irreconcilable impasse. Half the group wanted to go in one direction, half in another – but they were only given one pitch. Their solution? Clumsily glue the two pitches together, doing a disservice to both.

    Three: Meta-moderation

    Every hackathon needs a meta-moderator whose sole purpose is to keep each team of hackathoners moving smoothly toward their goal. Meta-moderators check in regularly with each team, observing group dynamics, and stepping in with tweaks as needed, using their own conflict resolution skills to keep the teams coherent, focused, and productive – while passing those skills along to the team.
    They’re the ultimate arbiters, and use their powers to help each team achieve the best possible outcome.

    Four: Expect mixed results

    Hackathons will never produce uniformly excellent results. The mixture of personalities and ideas and organisation is too variable to provide any guarantee of success. Instead, every hackathon will likely produce a few standouts, a few good efforts, and a few disappointments. That’s as it should be: even the most disappointing pitches will teach you something you didn’t know before.

    Five: Don’t expect anything to stick

    Throwing a group of individuals into a crucible may produce an excellent pitch, but everything after that is left to the organisation supporting those individuals.

    Reply
  21. Tomi Engdahl says:

    Tech Pros’ Struggle For Work-Life Balance Continues
    http://it.slashdot.org/story/15/11/11/2118246/tech-pros-struggle-for-work-life-balance-continues?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    Work-life balance among technology professionals is very much in the news following a much-discussed New York Times article about workday conditions at Amazon. That piece painted a picture of a harsh workplace where employees literally cried at their desks.

    Tech Pros’ Quest for Work-Life Balance
    http://insights.dice.com/2015/11/11/tech-pros-quest-for-work-life-balance/

    Reply
  22. Tomi Engdahl says:

    Seagate offers California Uni genome data storage K-drives
    UCSC becoming part of application developers ecosystem
    http://www.theregister.co.uk/2015/11/12/seagate_kinetic_drives_university_of_california/

    Seagate has given the University of California, Santa Cruz a petabyte of Kinetic drives to store and access genomic data.

    Kinetic disk drives are accessed directly over Ethernet and present a key:value store with an object-style PUT and GET interface.
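
    To picture what that means for application code, here is a hypothetical sketch of the access model. The class, port number and JSON wire format below are made up for illustration; the real Kinetic drives speak their own protocol and have their own client libraries:

        # Hypothetical key:value drive client -- NOT the actual Kinetic library.
        import json
        import socket

        class ToyKeyValueDrive:
            def __init__(self, host, port=8123):      # port number is an assumption
                self.addr = (host, port)

            def _call(self, op, key, value=None):
                msg = json.dumps({"op": op, "key": key, "value": value}).encode()
                with socket.create_connection(self.addr) as s:
                    s.sendall(msg + b"\n")
                    return json.loads(s.makefile().readline())

            def put(self, key, value):
                return self._call("PUT", key, value)

            def get(self, key):
                return self._call("GET", key)["value"]

        # drive = ToyKeyValueDrive("10.0.0.42")
        # drive.put("genome/sample-001/chunk-0", "ACGT...")
        # print(drive.get("genome/sample-001/chunk-0"))

    The point is that the application addresses the drive by key over the network, so there is no file system or block layer in between.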

    Andy Hospodor, UCSC's executive director of the Storage Systems Research Centre (SSRC), part of the institution's Jack Baskin School of Engineering, said: "This gift provides the basis for a major research program on storage of genomic data."

    The promised benefits include savings on hardware and software complexity compared with existing disk IO stacks.

    There needs to be an ecosystem of application developers writing software that uses these drives for them to become popular, and Seagate is trying to kick-start such an ecosystem into existence.

    We know Seagate is working with Europe’s atom-splitting CERN facility to do this. It’s also set up a Kinetic Open Storage Project with Toshiba, Western Digital, Cisco, Cleversafe (now IBM), Dell, DigitalSense, NetApp, Red Hat and Scality. The aim is to promote object storage on disk drives.

    The disk drives can’t be used in traditional drive arrays and require system/application software additions to manage data writing and reading instead of existing disk drive IO access software stacks.

    Seagate layoffs SHOCKER: 1,000 heads to be laid under the axe
    Spinning off excess bods to lower costs
    http://www.theregister.co.uk/2015/09/10/seagate_layoffs_1050_jobs_cut_share_buyback/

    Disk drive maker Seagate has two per cent of its workforce destined for the exit door.

    It’s announced a restructuring plan which means 1,050 people are going to have a pink slip shock.

    Seagate expects to save $113m by doing this. It has seen two quarters of declining revenues and profits.

    Some of the savings are expected to be put into cloud products and flash.

    Reply
  23. Tomi Engdahl says:

    Acer chief exec Stan Shih: Exiting the PC market? Not us!
    IDC bloke grabs vendors, plants kiss of death on half a dozen of them
    http://www.theregister.co.uk/2015/11/12/acer_stan_shih_commitment_pc_market/

    IDC analyst Tom Mainelli put the cat among the pigeons with his prediction last week that two of the top 10 PC vendors – but not the top four – will shutter their operations within the next two years.

    His prognosis stung Asustek CEO Jerry Shen yesterday to proclaim his company’s allegiance to the PC market.

    The world’s big four PC vendors are Lenovo, HP, Dell and Apple. So by inference Mainelli’s candidates for shuffling off this PC mortal coil are:

    Acer
    Asustek
    Toshiba
    Samsung
    Tongfang
    Fujitsu

    Samsung and Sony already withdrew from the PC arena in Europe last year. It is clear that the PC market is in secular decline, competition is cut-throat and margins are down the toilet.

    Reply
  24. Tomi Engdahl says:

    Nvidia launches Tesla M4 and M40 GPUs for deep learning, high-performance computing
    http://venturebeat.com/2015/11/10/nvidia-launches-tesla-m4-and-m40-gpus-for-deep-learning-high-performance-computing/

    Chipmaker Nvidia today announced two new flavors of Tesla graphics processing units (GPUs) targeted at artificial intelligence and other complex types of computing. The M4 is meant for scale-out architectures inside data centers, while the larger M40 is all about impressive performance.

    The M4 packs 1,024 Nvidia Cuda cores, 4GB of GDDR5 memory, 88GB/second of bandwidth, power usage of 50-75 watts, and a peak of 2.2 teraflops.

    The more brawny M40, by contrast, comes with 3,072 Cuda cores, 12GB of GDDR5 memory, 288 GB/second of bandwidth, power usage of 250 watts, and a peak of 7 teraflops.

    These new GPU accelerators, based on Nvidia's Maxwell architecture, are the successors to Nvidia's Kepler-based Tesla K40 and K80.

    The GPU has become a recognized standard for a type of AI called deep learning. In the past year, Nvidia has increasingly pushed hard to market itself as a key arms dealer for deep learning. For years, Nvidia had marketed its Tesla line of GPUs under the term “accelerated computing,” but in its annual report for investors this year, the company changed its tune and began emphasizing Tesla’s deep learning capability.

    Reply
  25. Tomi Engdahl says:

    PostgreSQL learns to walk and chew gum
    First shot at parallelisation arrives
    http://www.theregister.co.uk/2015/11/13/postgresql_learns_to_walk_and_chew_gum/

    The open source PostgreSQL database is about to get query parallelisation, starting with just a few processes, and is looking for crash-test dummies to give it a whirl.

    Developer Robert Haas blogs that he worked with Amit Kapila over “several years” to get the feature working, and if things go well, he hopes to see the feature included in production code in PostgreSQL 9.6.

    Parallelisation works with a “gather” node that supervises (in Haas’ demonstration) four workers. He continues:

    "Those workers all execute the subplan in parallel. Because the subplan is a Parallel Seq Scan rather than an ordinary Seq Scan, the workers coordinate with each other so that each block in the relation is scanned just once. Each worker therefore produces a subset of the final result set, and the Gather node collects all of those results."

    Since it’s early days, there are limitations that Haas hopes to get fixed before the PostgreSQL 9.6 release cycle: Gather nodes don’t work for inheritance hierarchies, and you can’t push joins down to workers.
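
    If you want to kick the tyres, a minimal sketch from Python might look like the following. It assumes psycopg2 and a PostgreSQL 9.6 development build with parallel query enabled; the configuration setting was called max_parallel_degree in early 9.6 builds and may be renamed before release, so treat the name as an assumption:

        # Sketch: ask a 9.6 development build to plan a parallel sequential scan.
        import psycopg2

        conn = psycopg2.connect("dbname=test")        # connection string is an example
        cur = conn.cursor()
        cur.execute("SET max_parallel_degree = 4")    # allow up to 4 background workers
        cur.execute("EXPLAIN SELECT * FROM big_table WHERE payload LIKE '%rare%'")
        for (line,) in cur.fetchall():
            print(line)    # look for a Gather node sitting on top of a Parallel Seq Scan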

    Reply
  26. Tomi Engdahl says:

    NoSQL: Injection vaccination for a new generation
    This future architecture still falls into some of the same old traps
    http://www.theregister.co.uk/2015/11/13/nosql_security_new_generation/

    We are becoming more and more accustomed to reading about losses of online data through malicious hack attacks, accidents, and downright carelessness – it’s almost as if we don’t know how to secure data against the most common form of attack.

    Of course, that isn’t really true as best practice, legislation, and education on the matter are easy to come by, from a variety of sources.

    Yet we continue to see common attacks being repeated, with SC Magazine reporting recently that 100,000 customers were compromised by SQL injection.

    NoSQL is, or was meant to be (you pick) the future architecture, an opportunity, almost, to start afresh. Given that and with the wealth of knowledge that’s amassed from decades of SQL, you’d think NoSQL databases and systems wouldn’t fall into the same traps as the previous generations of RDBM systems.

    Just this February nearly 40,000 MongoDB systems were found with no access control and with default port access open. To be fair, not all the possible faults I'm about to mention apply to all NoSQL systems; some are harder than others and some from distribution companies are deliberately hardened out of the box.
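
    Operator injection is the NoSQL cousin of SQL injection: if user-supplied structures go straight into a query, an attacker can smuggle in query operators. A minimal sketch with pymongo follows (the collection and field names are made up, and plaintext passwords are used only to keep the example short):

        # NoSQL operator injection and a simple guard -- illustrative names only.
        from pymongo import MongoClient

        users = MongoClient()["app"]["users"]

        def login_unsafe(username, password):
            # If `password` comes from a parsed JSON body, an attacker can send
            # {"$gt": ""} instead of a string and match ANY stored password.
            return users.find_one({"name": username, "password": password})

        def login_safer(username, password):
            # Reject non-scalar input before it ever reaches the query.
            if not isinstance(username, str) or not isinstance(password, str):
                raise ValueError("invalid credentials")
            return users.find_one({"name": username, "password": password})

        # login_unsafe("admin", {"$gt": ""})   # logs in without knowing the password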

    The first rule of security is that if someone manages to get on your box then it’s just about game over. Hopefully, I don’t need to re-emphasise the importance of firewalling database boxes (whatever the flavour) from the outside world and only allowing access to your application servers, but it is worth stating that this is as important in the NoSQL world as any other.

    Reply
  27. Tomi Engdahl says:

    Thin Client Devices Revisited
    Technology best forgotten or time for a renaissance?
    http://www.theregister.co.uk/2015/11/13/thin_client_survey_results/

    With conversations around end user computing dominated by highly desirable mobile technology, it’s easy to overlook the potential of thin client hardware. Deployed in the right way to the right types of user, however, far from being the compromise option, thin client devices can enhance the user’s overall experience. While the majority see a role for such technology, legacy perceptions can be an impediment to adoption.

    The majority have an out of date view of thin client technology

    Results of a recent Reg reader survey of 220 IT professionals suggest that knowledge of thin client technologies is frequently out of date. This is even true in a self-selecting sample that will be biased towards those with an interest in the topic.

    All types of thin client front ends are seen to be relevant, but in different ways

    The role of browser-centric devices is similarly seen to be reasonably broad, though it is probably worth acknowledging that options here often include limited off-line storage and working capability. For less sophisticated needs, even in a mobile context, Chromebooks and similar devices therefore seem to be finding their place.

    Drivers can also be blockers, and vice versa

    When conducting research, it’s important to be careful not to make too many assumptions.

    The bottom line

    Dedicated hardware-based thin clients clearly have a lot to offer. The challenge is that conversations in the end user computing space at the moment tend to be dominated by mobile computing and all of the highly desirable devices that many users want. Against this background, while making a case for investment based on the usual cost and risk related factors is possible, it’s important to stand back and look at the potential from a user perspective. In today’s network-centric, multi-device world, thin clients are no longer the compromise option. Deployed intelligently, they can enhance the user experience and play a vital role in workforce transformation overall.

    Reply
  28. Tomi Engdahl says:

    Has the next generation of monolithic storage arrived?
    Infinidat's Infinibox: Hybrid and RAM-heavy
    http://www.theregister.co.uk/2015/11/13/infinidat_monolith_hybrid_infinibox/

    Infinidat's product is called Infinibox. It's a monolithic storage system, or could even be called next-generation monolithic, and competes against EMC VMAX, HDS VSP or 3PAR 10K.

    It’s not all-flash. It’s hybrid, with lots of RAM and flash at the front-end and big 7200 RPM disks at the back-end, bringing a total usable capacity of 2PB per rack.

    The product is designed to be resilient, and everything is N+1 (for example it has a particular three-controller configuration, unusual but very effective).

    The first time I saw Infinidat’s Infinibox I thought about XIV; the two products have some design similarities, and, as far as I know, many engineers followed Infinidat founder Moshe Yanai in this new venture when he left IBM after disputes about the development of XIV.

    The price tag is quite low for this kind of system, namely $1/GB. And, this is a unified system, with FC, iSCSI, NFS, SMB and object storage protocols already enabled and FiCON coming soon.

    Reply
  29. Tomi Engdahl says:

    Decoding Microsoft: Cloud, Azure and dodging the PC death spiral
    Too many celebs and robot cocktails clouding the message?
    http://www.theregister.co.uk/2015/11/13/decoding_future_decoded_microsoft_sets_out_its_stall/

    Microsoft’s Future Decoded event took place in London this week, with CEO Satya Nadella and Executive VP Cloud and Enterprise Scott Guthrie on stage to pitch the company’s “cloud and mobile” message.

    Nadella was in Paris on Monday and moved on to a shorter Future Decoded event in Rome later in the week, so this is something of a European tour for the company.

    The event is presented as something to do with “embracing organisational transformation”, presumably with the hope that celeb-bedazzled attendees will translate this to “buy our stuff.”

    There were a couple of announcements at the show, the biggest being Nadella’s statement about a UK region for the Azure cloud. Guthrie stated the next day that Microsoft has “more than 26 Azure regions around the world – twice as many as Amazon and Google combined.” – though note that more regions does not necessarily mean more capacity.

    The company is betting on cloud services to make up for declining PC sales and its failure in mobile. Will we buy though? Microsoft is a long way behind Amazon Web Services (AWS) in the IaaS (Infrastructure as a service) market but makes up for that to some extent by a huge SaaS (Software as a Service) presence with Office 365, though this is far from pure cloud since it hooks into Office applications running on the desktop or on mobile devices. This is why Microsoft has raced to get Office onto iOS and Android.

    A well-attended session on “Azure and its competitors” was given by analyst David Chappell. Chappell does not see Microsoft overtaking AWS in IaaS, but gave his pitch on why Microsoft will be a strong number two, and ahead in certain other cloud services. Unlike Amazon, Microsoft has enterprise strength, he said.

    Azure Active Directory has “very few serious competitors,” he said, since it offers single sign-on across on-premises and cloud services both from Microsoft and from third-parties. IBM will not get the cloud scale it needs, he said, VMware is not a full public cloud platform, and OpenStack has gaps which get filled with vendor-specific extensions that spoil its portability advantage. Google has potential, he said, but he questioned its long-term commitment to enterprise cloud services when most of its business is built around advertising.

    Another announcement at the show concerned Project Oxford, Microsoft’s artificial intelligence service. New services include emotion recognition, spell check, video enhancement, and speaker recognition. Marketers are all over this kind of stuff, since it can help with contextual advertising. It is not hard to envisage an outdoor display or even a TV that would pump out different ads according to how you are feeling, for example.
    There was a live demo of Project Oxford emotion recognition, but think about it for a moment.
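
    For the curious, calling the emotion service amounted to a single authenticated POST of an image. A rough sketch follows; the endpoint, header name and response fields are as Project Oxford documented them at the time, so treat them as assumptions rather than a current reference, and you need your own subscription key:

        # Sketch of a Project Oxford emotion request (details may have changed since).
        import requests

        EMOTION_URL = "https://api.projectoxford.ai/emotion/v1.0/recognize"

        resp = requests.post(
            EMOTION_URL,
            headers={"Ocp-Apim-Subscription-Key": "YOUR_KEY"},
            json={"url": "https://example.com/face.jpg"},
        )
        for face in resp.json():
            # each detected face came back with a bounding rectangle and per-emotion scores
            print(face["faceRectangle"], face["scores"])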

    What about the mobile part of “cloud and mobile”? Two things tell you all you need to know. One was Nadella showing off what he called an iPhone Pro, an iPhone stuffed with Microsoft applications. The other was the Lumia stand in the exhibition, which proclaimed “The Phone that works like your PC.” This is a reference to the Continuum project, where you can plug the phone into an external display and have Universal Windows Platform Apps morph into something like desktop applications. An interesting feature, but Microsoft is no longer targeting the mass market for mobile phones.

    Despite the sad story of Lumia, Microsoft is keen to talk up the role of Windows on other small devices. There are two sides to the company’s IoT (Internet of Things) play. One is Azure as a back-end for IoT data, both for storage and analytics. The other is Windows 10 IoT Core, which runs on devices such as Raspberry Pi.

    Reply
  30. Tomi Engdahl says:

    It's an article from 2006, but still relevant:

    Survival of the unfittest
    http://www.theguardian.com/technology/2006/feb/09/guardianweeklytechnologysection
    Lotus Notes is used by millions of people, but almost all of them seem to hate it. How can a program be so bad, yet thrive?

    Imagine a program used by 120 million people, of whom about 119m hate it. Sound unlikely? Yet that’s the perception one garners in trying to discover whether Lotus Notes, IBM’s “groupware” application, is – as readers of Technology blog suggested – the “world’s worst application”.

    Reply
  31. Tomi Engdahl says:

    IBM speeds up Power processor architecture with Nvidia Tesla K80 GPUs
    GPU acceleration increases Watson’s processing performance by 10 times
    http://www.theinquirer.net/inquirer/news/2434826/ibm-speeds-up-power-processor-architecture-with-nvidia-tesla-k80-gpus

    IBM HAS delivered some significant improvements to its collaborative OpenPower initiative along with its partners, including new technologies that the company said will enable faster and deeper data analysis.

    The new offerings bring integration of IBM’s open and licensable Power processors with accelerators and dedicated high-performance processors that can be optimised for computationally intensive software code.

    The update also provides collaborations and further investments in accelerator-based tools on top of the Power processor architecture. This includes incorporating Nvidia Tesla K80 GPUs to accelerate Watson’s Retrieve and Rank API capabilities to 1.7 times normal speed.

    Reply
  32. Tomi Engdahl says:

    Here are the tools Microsoft CEO Satya Nadella uses every day
    http://www.businessinsider.com/microsoft-ceo-satya-nadellas-everyday-setup-2015-11?IR=T&r=UK

    Microsoft CEO Satya Nadella gave a keynote address at the Future Decoded event this week in London on how partners use Microsoft products every day. Other headline speakers include the CEO of Virgin Atlantic, the CEO of Lastminute.com, and the Ministry of Defence.

    During his 40-minute speech Nadella laid down the products, and services, he uses daily, all of which (unsurprisingly) revolve around Microsoft’s offerings.

    The first thing Nadella demonstrated was an “iPhone Pro,” a regular iPhone packed with Microsoft services. Before Nadella became CEO, Microsoft had no meaningful presence on iOS, but it now has more than 15 apps that are compatible with it, including Word, Skype, Wunderlist, and Sunrise.

    Nadella made a big show of OneNote for iOS, an app that can take and distribute notes. He also showed off Office

    Nadella’s main device is the new Lumia 950XL, announced in early October. He demonstrated Wunderlist, a to-do app that allows for multiple accounts and cloud-based syncing.

    One of the headline features of the Lumia 950 is its tight integration with Windows 10. This has resulted in Continuum, a feature that lets customers use their phones in a “desktop mode” — i.e., with a mouse, keyboard, and large monitor — simply by connecting it to a $99 (£65) dock.

    The motivation behind Continuum comes from the requirements of developing markets: Desktop computers are expensive and unwieldy, while smartphones are cheap and can be taken anywhere. Building a smartphone that is powerful enough to run a full Windows 10 experience solves this problem.

    Nadella demoed Windows Hello, the security feature baked into Windows 10 that uses biometrics (facial recognition and fingerprints) rather than a simple password.

    Toward the end of the keynote, Nadella frequently referred to HoloLens, the augmented-reality headset the company is deploying in early 2016.

    Reply
  33. Tomi Engdahl says:

    Google Open-Sourcing TensorFlow Shows AI’s Future Is Data, Not Code
    http://www.wired.com/2015/11/google-open-sourcing-tensorflow-shows-ais-future-is-data-not-code/

    When Google open sourced its artificial intelligence engine last week—freely sharing the code with the world at large—Lukas Biewald didn’t see it as a triumph of the free software movement. He saw it as a triumph of data.

    In open sourcing the TensorFlow AI engine, Biewald says, Google showed that, when it comes to AI, the real value lies not in the software or the algorithms but in the data needed to make it all smarter. Google is giving away the other stuff, but keeping the data.

    “As companies become more data-driven, they feel more comfortable open sourcing lots of [software]. They know they’re sitting on lots of proprietary data that nobody else has access to,”

    Reply
  34. Tomi Engdahl says:

    SteamOS gaming performs significantly worse than Windows, Ars analysis shows
    Cross-platform 3D games face 21- to 58-percent frame rate dip on same hardware.
    http://arstechnica.com/gaming/2015/11/ars-benchmarks-show-significant-performance-hit-for-steamos-gaming/

    Since Valve started publicly talking about its own Linux-powered “Steam Boxes” about three years ago now, we’ve wondered what kind of effect a new gaming-focused OS would have on overall PC gaming performance. On the one hand, Valve said back in 2012 that it was able to get substantial performance increases on an OpenGL-powered Linux port of Left 4 Dead 2. On the other hand, developers I talked to about SteamOS development earlier this year told me that the state of Linux’s drivers, OpenGL tools, and game engines often made it hard to get Windows-level performance on SteamOS, especially if a game was built with DirectX in mind in the first place.

    With this week’s official launch of Valve’s Linux-based Steam Machine line (for non-pre-orders), we decided to see if the new OS could stand up to the established Windows standard when running games on the same hardware. Unfortunately for open source gaming supporters, it looks like SteamOS gaming comes with a significant performance hit on a number of benchmarks.

    Reply
  35. Tomi Engdahl says:

    A Look At The New Features Of The Linux 4.4 Kernel
    http://www.phoronix.com/scan.php?page=article&item=linux-44-features&num=1

    If all goes according to plan, the Linux 4.4 kernel merge window will end today with the release of the 4.4-rc1 kernel. As all of the major subsystem updates have already landed for Linux 4.4, here’s my usual look at the highlights for this kernel cycle.

    Reply
  36. Tomi Engdahl says:

    AMD Pushes GPUs Through New HSA Tools
    http://www.eetimes.com/document.asp?doc_id=1328279&

    Under its “Boltzmann Initiative,” AMD has announced a suite of tools designed to ease development of high-performance, energy efficient heterogeneous computing systems.

    The “Boltzmann Initiative” leverages heterogeneous system architectures’ (HSA) ability to harness both central processing units (CPU) and AMD FirePro graphics processing units (GPU) for maximum compute efficiency through software.

    The first results of this initiative include the Heterogeneous Compute Compiler (HCC); a headless Linux driver and HSA runtime infrastructure for cluster-class, High Performance Computing (HPC); and the Heterogeneous-compute Interface for Portability (HIP) tool for porting CUDA-based applications to a common C++ programming model.

    The promise of combining multi-core, serial processing CPUs with parallel-processing GPUs to maximize compute efficiency is already being seen in the industry, as driven by the Heterogeneous Systems Architecture (HSA) Foundation founded by AMD among other chip vendors.

    One of the goals for HSA is easing the development of parallel applications through use of higher-level languages. The HCC C++ compiler offers simpler development via single-source execution, with both the CPU and GPU code in the same file. The compiler automates the placement of code that executes on both processing elements for maximum execution efficiency.

    Reply
  37. Tomi Engdahl says:

    FPGA Interfaces Speeding Up
    IBM/Xilinx collaborate, Red Hat wants standard
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1328278&

    IBM and Xilinx have joined the race to bring FPGA accelerators to data centers. The problem they and their competitors have yet to solve is delivering an easy-to-use standard interface for them, according to a Red Hat executive.

    Jon Masters, a chief ARM architect at Red Hat, developed an FPGA accelerator called Trilby, which he described at last week's ARM Tech Con. "Ninety percent of the effort in using an FPGA accelerator is in interfacing to it — that's crazy it should be 10% — what I'd like to see is a standard for the industry," he said.

    Today's FPGA accelerators typically require some programming in Verilog, but that's unacceptable, said Masters. A researcher at Microsoft raised a similar complaint in an August 2014 paper describing work using FPGA accelerators in Microsoft's data centers.

    Microsoft and China's Baidu are both exploring the use of FPGAs in their servers, sparking interest in the area. Intel accelerated that interest with its $16 billion bid in June to acquire Altera, which has led the way in moving its FPGAs to OpenCL.

    Masters said he has talked to all FPGA makers and others about starting an initiative to define a programming interface for FPGA accelerators, probably based on OpenCL. Such an interface should include standard drivers that support PCI Express virtualization and be available for download on system boot up, he said.
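
    Whatever shape the eventual FPGA standard takes, the host-side flow Masters describes already exists for GPUs and CPUs in OpenCL. The sketch below uses pyopencl against whatever OpenCL device is available; it does not target a real FPGA toolchain, it simply shows the "no Verilog on the host side" style of programming:

        # Minimal OpenCL host program via pyopencl: copy data in, run a kernel, copy out.
        import numpy as np
        import pyopencl as cl

        ctx = cl.create_some_context()
        queue = cl.CommandQueue(ctx)

        a = np.arange(16, dtype=np.float32)
        mf = cl.mem_flags
        a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
        out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

        prg = cl.Program(ctx, """
        __kernel void double_it(__global const float *a, __global float *out) {
            int gid = get_global_id(0);
            out[gid] = 2.0f * a[gid];
        }
        """).build()

        prg.double_it(queue, a.shape, None, a_buf, out_buf)

        result = np.empty_like(a)
        cl.enqueue_copy(queue, result, out_buf)
        print(result)    # [0, 2, 4, ...]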

    Reply
  38. Tomi Engdahl says:

    Jacob Demmitt / GeekWire:
    Microsoft and Code.org will use Minecraft to teach kids the basics of computer programming
    http://www.geekwire.com/2015/microsoft-is-using-minecraft-to-teach-kids-basics-of-computer-programming-with-code-org-partnership/

    Microsoft wants to turn kids’ love of Minecraft into a love of computer programming through a partnership with Code.org, announced on Monday morning.

    They’ve built a tutorial that students across the world can use during Code.org’s annual Hour of Code event in December. Microsoft knows how much kids love the wildly popular game, which the company bought through its $2.5 billion Mojang acquisition in 2014, so it volunteered Minecraft for the cause.

    The tutorial, which is available now for free, walks students through 14 levels. It looks and feels like the Minecraft game that kids are so familiar with, but they have to use basic computer science principles to play.

    https://code.org/mc

    Reply
  39. Tomi Engdahl says:

    Microsoft to world: We’ve got open source machine learning too
    Help teach Cortana to say ‘Sorry, Dave’
    http://www.theregister.co.uk/2015/11/17/microsoft_to_world_weve_got_open_source_machine_learning_too/

    Microsoft’s decided that it, too, wants to open source some of its machine learning space, publishing its Distributed Machine Learning Toolkit (DMTK) on Github.

    Google released some of its code last week. Redmond’s (co-incidental?) response is pretty basic: there’s a framework, and two algorithms, but Microsoft Research promises it will get extended in the future.

    The DMTK Framework is front-and-centre, since that’s where both extensions will happen. It’s a two-piece critter, consisting of a parameter server and a client SDK.

    The parameter server has “separate data structures for high- and low-frequency parameters”, Microsoft says, so as to balance memory capacity and access speed.
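
    For readers who have not met the pattern, a parameter server is essentially shared storage for model parameters that training workers pull from and push updates to. A toy sketch of the idea follows (this is not DMTK's API, just an illustration):

        # Toy parameter-server pattern -- illustrative only, not DMTK.
        class ToyParameterServer:
            def __init__(self):
                self.params = {}                      # parameter key -> current value

            def pull(self, keys):
                return {k: self.params.get(k, 0.0) for k in keys}

            def push(self, updates):
                for k, delta in updates.items():
                    self.params[k] = self.params.get(k, 0.0) + delta

        server = ToyParameterServer()

        def worker_step(observations, learning_rate=0.1):
            weights = server.pull(observations.keys())               # fetch current values
            updates = {k: learning_rate * (target - weights[k])
                       for k, target in observations.items()}
            server.push(updates)                                     # send deltas back

        worker_step({"w1": 1.0, "w2": -0.5})
        print(server.params)

    Splitting hot, frequently updated parameters from rarely touched ones, as Microsoft describes, then becomes a matter of backing those two groups with different data structures on the server.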

    Reply
  40. Tomi Engdahl says:

    Supercomputers get their own software stack – dev tools, libraries etc
    OpenHPC group to take programmers to the highest heights
    http://www.theregister.co.uk/2015/11/17/supercomputers_get_their_own_software_stack/

    SC15: Supercomputers are going to get their own common software stack, courtesy of a new group of elite computer users.

    The OpenHPC Collaborative Project was launched just before this week’s Supercomputer Conference 2015 in Austin, Texas, and features among its members the Barcelona Supercomputing Center, the Center for Research in Extreme Scale Technologies, Cray, Dell, Fujitsu, HP, Intel, Lawrence Berkeley, Lenovo, Los Alamos, Sandia and SUSE – in other words, the owners and builders of the world’s biggest and fastest machines.

    The project describes itself as “a collaborative, community effort that initiated from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries.”

    It comes with the backing of the Linux Foundation, hardly surprisingly since the open-source software is used in virtually every supercomputer in the world.

    Just six of the top 500 supercomputers don't use GNU/Linux, and all of those six use some flavor of Unix, so no look-in for Windows or OS X.

    “Current HPC [high-performance computing] systems are very difficult to program, requiring careful measurement and tuning to get maximum performance on the targeted machine. Shifting a program to a new machine can require repeating much of this process, and it also requires making sure the new code gets the same results as the old code. The level of expertise and effort required to develop HPC applications poses a major barrier to their widespread use.”

    Reply
  41. Tomi Engdahl says:

    Supercomputers have a new trick – more computing power for a lower electricity bill

    The supercomputer arms race has introduced new ways to increase computing power without space requirements and power consumption growing enormously. The solution is purpose-built co-processors, designed for specific kinds of calculations, working alongside universal, energy-efficient main processors.

    According to the most recent list of the world's 500 fastest supercomputers, 110 of them already use co-processors.

    For example, China's Tianhe-2 uses Intel's Xeon Phi co-processors, in which vector-processing units have been added alongside parallel x86 cores.

    Graphics processors have been used in supercomputers for some time, and their manufacturers are equipping their cards with ever larger amounts of memory. Nvidia's GPUs are popular co-processors, particularly helpful in graphics simulations.

    FPGA chips are also used. For example, Microsoft uses FPGAs to accelerate its Bing search engine. It is still too early to say whether FPGA circuits will become a more general trend.

    Source: http://www.tivi.fi/Kaikki_uutiset/supertietokoneilla-on-uusi-kikka-lisaa-laskentatehoa-pienemmalla-sahkolaskulla-6067012

    Reply
  42. Tomi Engdahl says:

    Refined player: Fedora 23's workin' it like Monday morning
    The Linux distro that doesn’t stop
    http://www.theregister.co.uk/2015/11/17/fedora_23_review/

    OK, it was a slight delay – one week – but the latest Fedora, number 23, represents a significant update that was worth waiting for.

    Like its predecessor, this Fedora comes in three base configurations – Workstation, Server and Cloud. The first of these is the desktop release and the primary basis for my testing, though I also tested the Server release this time around.

    The default Fedora 23 live CD will install the GNOME desktop though there are plenty of spins available if you prefer something else.

    The new upgrade tools are a welcome change not just because upgrading is easier and safer (with the ability to roll back should things go awry), but because Fedora has no LTS style release. Fedora 23 will be supported for 12 months and then you’ll need to move on to Fedora 24. That’s a bit abrupt if you’re coming from the Ubuntu (or especially Debian) world of LTS releases with two years of support.

    If you want that in the Red Hat ecosystem then you need to turn to RHEL or CentOS. However, now that Fedora is capable of transactional updates with rollbacks the missing LTS release feels, well, less missing, since upgrading is less problematic.

    Reply
  43. Tomi Engdahl says:

    Recipy for Science
    http://www.linuxjournal.com/content/recipy-science?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    More and more journals are demanding that the science being published be reproducible. Ideally, if you publish your code, that should be enough for someone else to reproduce the results you are claiming. But, anyone who has done any actual computational science knows that this is not true. The number of times you twiddle bits of your code to test different hypotheses, or the specific bits of data you use to test your code and then to do your actual analysis, grows exponentially as you are going through your research program. It becomes very difficult to keep track of all of those changes and variations over time.

    Because more and more scientific work is being done in Python, a new tool is available to help automate the recording of your research program. Recipy is a new Python module that you can use within your code development to manage the history of said code development.

    Recipy exists in the Python module repository
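
    Usage is deliberately minimal: you import recipy at the top of your script and it records which script, inputs and outputs produced each result. A short sketch follows, assuming recipy has been installed from PyPI and that the script reads and writes data through a library recipy knows how to hook (numpy is among those the project lists):

        # Sketch of a recipy-instrumented analysis script.
        import recipy                  # import first so it can hook file reads/writes

        import numpy as np

        data = np.loadtxt("measurements.csv", delimiter=",")        # recorded as an input
        np.savetxt("results.csv", data * 2.0, delimiter=",")        # recorded as an output

        # Later, `recipy search results.csv` (or the recipy GUI) shows which script
        # and which inputs produced results.csv, and when.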

    Reply
  44. Tomi Engdahl says:

    Intel’s Omni-Path supercomputer architecture promises better efficiency with 26 percent more compute
    But the Knights Landing processor in which it will integrate is delayed
    http://www.theinquirer.net/inquirer/news/2434875/intels-omni-path-supercomputer-architecture-promises-better-efficiency-with-26-percent-more-compute

    INTEL HAS ANNOUNCED detailed specifications of its next-generation Omni-Path architecture for high performance computing (HPC), offering up to 17 percent lower latency and a seven percent higher messaging rate over the InfiniBand networking communications standard.

    Announcing the news at the SC15 conference in Austin, Texas, Intel said the architecture, which is part of Intel's Scalable System Framework (SSF) announced earlier this year, will reduce the size of fabric budgets and offer better economics for businesses.

    “The industry has been waiting for this for some time. This is a 100GB solution that is tuned for true application performance,” said Charles Wuischpard, general manager of Intel’s high-performance platforms group.

    “But when we talk about ‘high performance’ here we are talking about performance of the application in a parallel fashion and we’ve measured up to 17 percent lower latency and a seven percent messaging rate.”

    The Omni-Path fabric’s main goal is to drive better economics. Wuischpard said it is a 48-port switch with a higher rating than is currently available in the industry, allowing customers to develop networks at a lower cost with fewer components thus reducing the size of fabric budgets.

    Intel said that the Omni-Path architecture requires 60 percent less power than an InfiniBand system.

    As part of the announcement, Intel also disclosed details of a Xeon Phi-powered workstation which it plans to make available for developers.

    High Performance Computing Fabric Products from Intel
    http://www.intel.com/content/www/us/en/high-performance-computing-fabrics/products-overview.html

    Reply
  45. Tomi Engdahl says:

    Intel Scales System Framework
    Instead of dominating, now cooperating
    http://www.eetimes.com/document.asp?doc_id=1328276&

    Intel has switched from trying to dominate the market for PCs, servers and high-performance computing (HPC, commonly called supercomputing) to building a Scalable System Framework (SSF). With SSF, Intel is attempting to integrate the future of supercomputing with every level of computing, to create an ecosystem of hardware and software suppliers that can serve all systems, from the smallest Internet of Things (IoT) devices to the largest supercomputer installations.

    “We introduced the idea of a scalable system framework when we announced that Intel was priming the Coral concept of the U.S. government, specifically called Aurora–a 180 petaflops system to be delivered in the 2018 time frame,”

    Since then Intel's Scalable Systems Framework has been adopted by Cray, Hewlett Packard, Lenovo, Fujitsu, Ansys, SGI, Dell, SuperMicro, Altair, Colfax, Inspur, Penguin Computing, MSC Software and many others. All are now working together to have a data infrastructure and ecosystem that can enable systems of any size to follow the same basic rules for interoperability.

    “Instead of saying Intel inside,” Wuischpard told us. “We and our partners will start saying the Scalable Systems Framework inside.”

    Wuischpard also said some of the new Xeon Phi design wins using next-generation features will be announced at SC15.

    Reply
  46. Tomi Engdahl says:

    The Next Big IT Projects From the University Labs
    http://tech.slashdot.org/story/15/11/16/1956236/the-next-big-it-projects-from-the-university-labs

    From unstructured data mining to visual microphones, academic labs are bringing future breakthrough possibilities to light, writes InfoWorld’s Peter Wayner in his overview of nine university projects that could have lasting impact on IT. ‘Open source programmers can usually build better code faster’

    9 research projects that could transform the enterprise
    http://www.infoworld.com/article/3004876/application-development/9-research-projects-that-could-transform-the-enterprise.html

    From unstructured data mining to visual microphones, academic labs are bringing future breakthrough possibilities to light

    If you take a look at the list of trending repositories on GitHub, you’ll see amazing code from programmers who live around the world and efforts for firms big and small. But one thing you don’t often see is work that comes from the university labs. It’s rare for the next big thing to escape from an academic computer science department and capture the attention of the world.

    That's not a knock on university research. But competing with open source projects that enjoy broad support across the industry and around the world is challenging for a handful of academics and grad students. Sure, many of the top computer science schools are well off, but that doesn't mean the money is pouring into research. Open source programmers, on the other hand, can usually build better code faster, often because they have bosses who pay them to build something that will pay off next quarter, not next century.

    Yet good computer science departments still manage to punch above — sometimes well above — their weight.

    DeepDive

    Big data is one area where academia’s focus on mathematical foundations can pay off, and one of the more prominent packages to gain attention of late is DeepDive, a tool for exploring unstructured text

    ZeroCoin

    Bitcoin may be many things, but it is not as anonymous as many assume. The system tracks all transactions in a public ledger.
    ZeroCoin wants to change that.

    Burlap

    Finding the best route or the optimal answer can be harder than looking for a needle in a haystack. Many problems have billions, trillions, or even quadrillions of possible solutions, and finding the best one takes plenty of computing power.

    Burlap lets you define the problem as a network of nodes with vectors of features or attributes attached to it. The algorithms can search through the network using a combination of brute-force searching and statistically guided exploration.

    SpiroSmart

    Smartphones may let us talk, text, and even watch cat videos, but their greatest contribution to society may be as mobile doctors, ready to track our health day in and day out.

    Halide

    As digital photography becomes more common, it’s only natural that people will want to do more to their images than merely look at them. Some want to filter the colors, others want to edit the images, and still more want to use the images as input to some algorithm, perhaps for steering an autonomous car.
    Halide is a computer language for image processing designed to abstract away the low-level implementation decisions for you. It will worry about the loops and GPU conversions for you.

    Visual Microphone

    Cameras have traditionally been used to take static photos of things to save for the future.
    Now that superfast cameras can capture hundreds or thousands of images per second, researchers are discovering that the cameras can do more than imitate the eyes. They can also do what our ears and skin can do by sensing sound or vibration using light alone.

    Drake

    Robots and drones are becoming more and more common in the enterprise as they move from the labs and take on crucial roles. Controlling these machines requires a good grasp of the laws of physics. Drake is a collection of packages that makes it a bit easier to write the code controlling these machines.

    R

    Anyone who’s spent time with big data or data scientists knows that they rely, more often than not, on a language called R to chew through the numbers and deliver the kind of statistical insights that make managers happy.

    Education

    Now, saving the best for last, is the one thing that universities do better than anyone: teach.

    Reply
  47. Tomi Engdahl says:

    All Dell breaks loose in latest Gartner disk array magic quadrant
    NetApp and IBM stumble, EMC still ahead
    http://www.theregister.co.uk/2015/10/22/gartners_disk_array_mq/

    Dell has become a top-four player in disk arrays, as IBM and NetApp are left behind in Gartner’s 2015 Magic Quadrant for general purpose disk arrays.

    Reply
  48. Tomi Engdahl says:

    Completely free tool for ARM development

    For ARM-based processors there is a legion of different development tools. Some of them are free, but the project size is typically limited in some way. Swedish Atollic's new tool package, by contrast, is completely free.

    Atollic's True Studio Lite is completely free to download, use and distribute. Using it does not even require registration.

    Still, Atollic says the tool matches commercial C/C++ programming tools. According to the company, True Studio Lite was developed so the ARM development world could get rid of fragmentation. The market has plenty of similar tools offering basic editing, compilation and debugging, but their usability is often poor because the tools are not integrated into a single package.

    True Studio Lite is based on the open Eclipse platform, augmented with the wizard tools familiar from commercial products as well as extensive support for ARM microcontrollers from different manufacturers.
    Atollic also sells a commercial package under the name True Studio Pro.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3602:taysin-ilmainen-tyokalu-arm-kehitykseen&catid=13&Itemid=101

    Reply
  49. Tomi Engdahl says:

    Python’s on the Rise… While PHP Falls
    http://insights.dice.com/2015/11/16/pythons-on-the-rise-while-php-falls/

    According to PYPL, which pulls its raw data for analysis from Google Trends, Python has grown the most over the past five years—up 5 percent since roughly 2010. Over the same period, PHP also declined by 5 percent. Since PYPL looks at how often language tutorials are searched on Google, its data is a good indicator of how many developers are (or aren’t) learning a language, presumably because they see it as valuable to their careers.

    Python’s growth is even more notable when you consider the language is 25 years old, an eternity in the tech industry. That being said, its continuing popularity rests on a few key attributes: It’s easy to learn, runs on a variety of platforms, serves as an exemplary general-purpose language, and boasts a robust community devoted to regularly improving its features.

    Just because PYPL shows PHP losing market-share over the long term doesn’t mean that language is in danger of imminent collapse; over the past year or so, the PHP community has concentrated on making the language more pleasant to use, whether by improving features such as package management, or boosting overall performance. Plus, PHP is still used on hundreds of millions of Websites, according to data from Netcraft.

    Indeed, if there’s any language on these analysts’ lists that risks doom, it’s Objective-C, Apple’s longtime language for programming iOS and Mac OS X apps, and its growing obsolescence is by design. Its replacement, Swift, has, well, swiftly climbed the TIOBE, RedMonk, PYPL, and other rankings over the past year.

    For developers and other tech pros, these lists come in useful when deciding which languages to pursue.

    Reply
